Which 3D Model File Types Can be Used in ideaMaker?
3D model file types that can be imported into ideaMaker:
1. Stereolithography (.STL)
Stereolithography (SLA or SL; also known as stereolithography apparatus, optical fabrication, photo-solidification, or resin printing) is a rapid prototyping technology developed by 3D Systems in 1988, and .STL is the three-dimensional graphics file format that serves it.
2. Wavefront OBJ (.OBJ)
OBJ (or .OBJ) is a geometry definition file format developed by Wavefront Technologies for its Advanced Visualizer animation package. The file format is open and has been adopted by other 3D graphics application vendors.
3. 3D Manufacturing File (.3MF)
3D Manufacturing Format is an open file format standard developed and published by the 3MF Consortium. 3MF is an XML-based data format designed specifically for additive manufacturing. It includes information about materials, colors, and other data that cannot be expressed in the STL format.
4. OLT Printing File (.OLTP)
Exported File Formats:
The following file types can be exported from ideaMaker:
1. Printing File (.GCODE)
G-code (.GCODE) is the most widely used Computer Numerical Control (CNC) programming language.
2. ideaMaker data File (.DATA)
It includes models’ thumbnail image and slicing settings.
Other Formats Supported:
1. ideaMaker Slicing Template (.BIN)
It includes slicing settings such as printing speed, nozzle temperature, retraction, the printer profile, the filament profile, group and layer settings, and more.
2. ideaMaker Printer Template (.PRINTER)
It includes the printer size, nozzle diameter and other printer-related settings.
3. ideaMaker Filament Template (.FILAMENT)
It includes the filament's flow rate, diameter, and other filament-related settings.
4. ideaMaker Project File (.IDEA)
It includes the models' manual supports, modifiers, group and layer settings, basic model information, and other slicing settings. With ideaMaker 4.0, you can import the slicing template saved in the project file.
I have a text file that contains 40 lines of text; each one has a name, a grade, and a group. I put all of that into a list. For example:
Alumnos = [["gerardo", 5.6, "GrupoA"],["Miguel" , 9.6 , "Grupo B"],["Arturo" , 8.3, "Grupo C"...]]
I created a class called "Alumno" which takes 3 parameters: the student's name, grade, and group.
I have this:
class Alumno:
    def __init__(self, nombre, promedio, grupo):
        self.nombre = nombre
        self.promedio = promedio
        self.grupo = grupo

with open("alumnos.txt", "r") as f:
    listaNombres = []
    for line in f:
        line = line.rstrip("\n")
        info = line.split()
        Estudiante = Alumno(str(info[0]), str(info[1]), str(info[2]))
        # The next three assignments are redundant; __init__ already set them.
        Estudiante.nombre = str(info[0])
        Estudiante.promedio = str(info[1])
        Estudiante.grupo = str(info[2])
        listaNombres.append(Estudiante)
With this I can open the file, remove the newline, and split each line into an individual list, so I have a list with lists inside it, which are all the students.
Is there any way to get each student's name? And also to sort them so that it returns the list with all of each student's attributes, but now ordered alphabetically?
1 answer
You have 2 options: use the sorted function, which returns a new list with the values ordered, or use the sort method, which operates directly on the list and updates it with the new order:
sorted(alumnosLista, key=lambda x: x[0]) # Builds a new list.
alumnosLista.sort(key=lambda x: x[0]) # Updates the list `alumnosLista` in place.
The key parameter indicates the value used for the ordering, which in this case is the name, located at the first position (index 0) of each inner list (the lambda is what accesses each inner list).
• Thanks a lot for the help. However, it seems that class instances do not support indexing. I get a TypeError that says: "'Alumno' (which is the name of my class) object does not support indexing".
– SebasMagno
Dec 4, 2018 at 1:45
• You are right; I got confused by the first example, where you show that the variable Alumnos equals the list you describe. Where my example says Alumnos, the variable that holds the two-dimensional list should go.
– mmontoya
Dec 4, 2018 at 2:35
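As the comments point out, listaNombres holds Alumno objects rather than inner lists, so the sort key has to read an attribute instead of indexing into each element. A minimal sketch of that variant, assuming the Alumno class and the listaNombres list from the question:

# Build a new, alphabetically sorted list of the Alumno objects.
ordenados = sorted(listaNombres, key=lambda alumno: alumno.nombre)

# Or sort the existing list in place.
listaNombres.sort(key=lambda alumno: alumno.nombre)

for alumno in ordenados:
    print(alumno.nombre, alumno.promedio, alumno.grupo)

operator.attrgetter("nombre") from the standard operator module also works as the key.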
/*
* Copyright (C) 2002 Roman Zippel <[email protected]>
* Released under the terms of the GNU GPL v2.0.
*/
#include <qapplication.h>
#include <qmainwindow.h>
#include <qtoolbar.h>
#include <qvbox.h>
#include <qsplitter.h>
#include <qlistview.h>
#include <qtextview.h>
#include <qlineedit.h>
#include <qmenubar.h>
#include <qmessagebox.h>
#include <qaction.h>
#include <qheader.h>
#include <qfiledialog.h>
#include <qregexp.h>
#include <stdlib.h>
#include "lkc.h"
#include "qconf.h"
#include "qconf.moc"
#include "images.c"
static QApplication *configApp;
ConfigSettings::ConfigSettings()
: showAll(false), showName(false), showRange(false), showData(false)
{
}
#if QT_VERSION >= 300
/**
* Reads the list column settings from the application settings.
*/
void ConfigSettings::readListSettings()
{
showAll = readBoolEntry("/kconfig/qconf/showAll", false);
showName = readBoolEntry("/kconfig/qconf/showName", false);
showRange = readBoolEntry("/kconfig/qconf/showRange", false);
showData = readBoolEntry("/kconfig/qconf/showData", false);
}
/**
* Reads a list of integer values from the application settings.
*/
QValueList<int> ConfigSettings::readSizes(const QString& key, bool *ok)
{
QValueList<int> result;
QStringList entryList = readListEntry(key, ok);
if (ok) {
QStringList::Iterator it;
for (it = entryList.begin(); it != entryList.end(); ++it)
result.push_back((*it).toInt());
}
return result;
}
/**
* Writes a list of integer values to the application settings.
*/
bool ConfigSettings::writeSizes(const QString& key, const QValueList<int>& value)
{
QStringList stringList;
QValueList<int>::ConstIterator it;
for (it = value.begin(); it != value.end(); ++it)
stringList.push_back(QString::number(*it));
return writeEntry(key, stringList);
}
#endif
/*
* update all the children of a menu entry
* removes/adds the entries from the parent widget as necessary
*
* parent: either the menu list widget or a menu entry widget
* menu: entry to be updated
*/
template <class P>
void ConfigList::updateMenuList(P* parent, struct menu* menu)
{
struct menu* child;
ConfigItem* item;
ConfigItem* last;
bool visible;
enum prop_type type;
if (!menu) {
while ((item = parent->firstChild()))
delete item;
return;
}
last = parent->firstChild();
if (last && !last->goParent)
last = 0;
for (child = menu->list; child; child = child->next) {
item = last ? last->nextSibling() : parent->firstChild();
type = child->prompt ? child->prompt->type : P_UNKNOWN;
switch (mode) {
case menuMode:
if (!(child->flags & MENU_ROOT))
goto hide;
break;
case symbolMode:
if (child->flags & MENU_ROOT)
goto hide;
break;
default:
break;
}
visible = menu_is_visible(child);
if (showAll || visible) {
if (!item || item->menu != child)
item = new ConfigItem(parent, last, child, visible);
else
item->testUpdateMenu(visible);
if (mode == fullMode || mode == menuMode || type != P_MENU)
updateMenuList(item, child);
else
updateMenuList(item, 0);
last = item;
continue;
}
hide:
if (item && item->menu == child) {
last = parent->firstChild();
if (last == item)
last = 0;
else while (last->nextSibling() != item)
last = last->nextSibling();
delete item;
}
}
}
#if QT_VERSION >= 300
/*
* set the new data
* TODO check the value
*/
void ConfigItem::okRename(int col)
{
Parent::okRename(col);
sym_set_string_value(menu->sym, text(dataColIdx).latin1());
}
#endif
/*
* update the display of a menu entry
*/
void ConfigItem::updateMenu(void)
{
ConfigList* list;
struct symbol* sym;
struct property *prop;
QString prompt;
int type;
tristate expr;
list = listView();
if (goParent) {
setPixmap(promptColIdx, list->menuBackPix);
prompt = "..";
goto set_prompt;
}
sym = menu->sym;
prop = menu->prompt;
prompt = menu_get_prompt(menu);
if (prop) switch (prop->type) {
case P_MENU:
if (list->mode == singleMode || list->mode == symbolMode) {
/* a menuconfig entry is displayed differently
* depending whether it's at the view root or a child.
*/
if (sym && list->rootEntry == menu)
break;
setPixmap(promptColIdx, list->menuPix);
} else {
if (sym)
break;
setPixmap(promptColIdx, 0);
}
goto set_prompt;
case P_COMMENT:
setPixmap(promptColIdx, 0);
goto set_prompt;
default:
;
}
if (!sym)
goto set_prompt;
setText(nameColIdx, sym->name);
type = sym_get_type(sym);
switch (type) {
case S_BOOLEAN:
case S_TRISTATE:
char ch;
if (!sym_is_changable(sym) && !list->showAll) {
setPixmap(promptColIdx, 0);
setText(noColIdx, 0);
setText(modColIdx, 0);
setText(yesColIdx, 0);
break;
}
expr = sym_get_tristate_value(sym);
switch (expr) {
case yes:
if (sym_is_choice_value(sym) && type == S_BOOLEAN)
setPixmap(promptColIdx, list->choiceYesPix);
else
setPixmap(promptColIdx, list->symbolYesPix);
setText(yesColIdx, "Y");
ch = 'Y';
break;
case mod:
setPixmap(promptColIdx, list->symbolModPix);
setText(modColIdx, "M");
ch = 'M';
break;
default:
if (sym_is_choice_value(sym) && type == S_BOOLEAN)
setPixmap(promptColIdx, list->choiceNoPix);
else
setPixmap(promptColIdx, list->symbolNoPix);
setText(noColIdx, "N");
ch = 'N';
break;
}
if (expr != no)
setText(noColIdx, sym_tristate_within_range(sym, no) ? "_" : 0);
if (expr != mod)
setText(modColIdx, sym_tristate_within_range(sym, mod) ? "_" : 0);
if (expr != yes)
setText(yesColIdx, sym_tristate_within_range(sym, yes) ? "_" : 0);
setText(dataColIdx, QChar(ch));
break;
case S_INT:
case S_HEX:
case S_STRING:
const char* data;
data = sym_get_string_value(sym);
#if QT_VERSION >= 300
int i = list->mapIdx(dataColIdx);
if (i >= 0)
setRenameEnabled(i, TRUE);
#endif
setText(dataColIdx, data);
if (type == S_STRING)
prompt.sprintf("%s: %s", prompt.latin1(), data);
else
prompt.sprintf("(%s) %s", data, prompt.latin1());
break;
}
if (!sym_has_value(sym) && visible)
prompt += " (NEW)";
set_prompt:
setText(promptColIdx, prompt);
}
void ConfigItem::testUpdateMenu(bool v)
{
ConfigItem* i;
visible = v;
if (!menu)
return;
sym_calc_value(menu->sym);
if (menu->flags & MENU_CHANGED) {
/* the menu entry changed, so update all list items */
menu->flags &= ~MENU_CHANGED;
for (i = (ConfigItem*)menu->data; i; i = i->nextItem)
i->updateMenu();
} else if (listView()->updateAll)
updateMenu();
}
void ConfigItem::paintCell(QPainter* p, const QColorGroup& cg, int column, int width, int align)
{
ConfigList* list = listView();
if (visible) {
if (isSelected() && !list->hasFocus() && list->mode == menuMode)
Parent::paintCell(p, list->inactivedColorGroup, column, width, align);
else
Parent::paintCell(p, cg, column, width, align);
} else
Parent::paintCell(p, list->disabledColorGroup, column, width, align);
}
/*
* construct a menu entry
*/
void ConfigItem::init(void)
{
if (menu) {
ConfigList* list = listView();
nextItem = (ConfigItem*)menu->data;
menu->data = this;
if (list->mode != fullMode)
setOpen(TRUE);
sym_calc_value(menu->sym);
}
updateMenu();
}
/*
* destruct a menu entry
*/
ConfigItem::~ConfigItem(void)
{
if (menu) {
ConfigItem** ip = (ConfigItem**)&menu->data;
for (; *ip; ip = &(*ip)->nextItem) {
if (*ip == this) {
*ip = nextItem;
break;
}
}
}
}
void ConfigLineEdit::show(ConfigItem* i)
{
item = i;
if (sym_get_string_value(item->menu->sym))
setText(sym_get_string_value(item->menu->sym));
else
setText(0);
Parent::show();
setFocus();
}
void ConfigLineEdit::keyPressEvent(QKeyEvent* e)
{
switch (e->key()) {
case Key_Escape:
break;
case Key_Return:
case Key_Enter:
sym_set_string_value(item->menu->sym, text().latin1());
parent()->updateList(item);
break;
default:
Parent::keyPressEvent(e);
return;
}
e->accept();
parent()->list->setFocus();
hide();
}
ConfigList::ConfigList(ConfigView* p, ConfigMainWindow* cv, ConfigSettings* configSettings)
: Parent(p), cview(cv),
updateAll(false),
symbolYesPix(xpm_symbol_yes), symbolModPix(xpm_symbol_mod), symbolNoPix(xpm_symbol_no),
choiceYesPix(xpm_choice_yes), choiceNoPix(xpm_choice_no),
menuPix(xpm_menu), menuInvPix(xpm_menu_inv), menuBackPix(xpm_menuback), voidPix(xpm_void),
showAll(false), showName(false), showRange(false), showData(false),
rootEntry(0)
{
int i;
setSorting(-1);
setRootIsDecorated(TRUE);
disabledColorGroup = palette().active();
disabledColorGroup.setColor(QColorGroup::Text, palette().disabled().text());
inactivedColorGroup = palette().active();
inactivedColorGroup.setColor(QColorGroup::Highlight, palette().disabled().highlight());
connect(this, SIGNAL(selectionChanged(void)),
SLOT(updateSelection(void)));
if (configSettings) {
showAll = configSettings->showAll;
showName = configSettings->showName;
showRange = configSettings->showRange;
showData = configSettings->showData;
}
for (i = 0; i < colNr; i++)
colMap[i] = colRevMap[i] = -1;
addColumn(promptColIdx, "Option");
reinit();
}
void ConfigList::reinit(void)
{
removeColumn(dataColIdx);
removeColumn(yesColIdx);
removeColumn(modColIdx);
removeColumn(noColIdx);
removeColumn(nameColIdx);
if (showName)
addColumn(nameColIdx, "Name");
if (showRange) {
addColumn(noColIdx, "N");
addColumn(modColIdx, "M");
addColumn(yesColIdx, "Y");
}
if (showData)
addColumn(dataColIdx, "Value");
updateListAll();
}
void ConfigList::updateSelection(void)
{
struct menu *menu;
enum prop_type type;
ConfigItem* item = (ConfigItem*)selectedItem();
if (!item)
return;
cview->setHelp(item);
menu = item->menu;
if (!menu)
return;
type = menu->prompt ? menu->prompt->type : P_UNKNOWN;
if (mode == menuMode && type == P_MENU)
emit menuSelected(menu);
}
void ConfigList::updateList(ConfigItem* item)
{
ConfigItem* last = 0;
if (!rootEntry)
goto update;
if (rootEntry != &rootmenu && (mode == singleMode ||
(mode == symbolMode && rootEntry->parent != &rootmenu))) {
item = firstChild();
if (!item)
item = new ConfigItem(this, 0, true);
last = item;
}
if ((mode == singleMode || (mode == symbolMode && !(rootEntry->flags & MENU_ROOT))) &&
rootEntry->sym && rootEntry->prompt) {
item = last ? last->nextSibling() : firstChild();
if (!item)
item = new ConfigItem(this, last, rootEntry, true);
else
item->testUpdateMenu(true);
updateMenuList(item, rootEntry);
triggerUpdate();
return;
}
update:
updateMenuList(this, rootEntry);
triggerUpdate();
}
void ConfigList::setAllOpen(bool open)
{
QListViewItemIterator it(this);
for (; it.current(); it++)
it.current()->setOpen(open);
}
void ConfigList::setValue(ConfigItem* item, tristate val)
{
struct symbol* sym;
int type;
tristate oldval;
sym = item->menu ? item->menu->sym : 0;
if (!sym)
return;
type = sym_get_type(sym);
switch (type) {
case S_BOOLEAN:
case S_TRISTATE:
oldval = sym_get_tristate_value(sym);
if (!sym_set_tristate_value(sym, val))
return;
if (oldval == no && item->menu->list)
item->setOpen(TRUE);
parent()->updateList(item);
break;
}
}
void ConfigList::changeValue(ConfigItem* item)
{
struct symbol* sym;
struct menu* menu;
int type, oldexpr, newexpr;
menu = item->menu;
if (!menu)
return;
sym = menu->sym;
if (!sym) {
if (item->menu->list)
item->setOpen(!item->isOpen());
return;
}
type = sym_get_type(sym);
switch (type) {
case S_BOOLEAN:
case S_TRISTATE:
oldexpr = sym_get_tristate_value(sym);
newexpr = sym_toggle_tristate_value(sym);
if (item->menu->list) {
if (oldexpr == newexpr)
item->setOpen(!item->isOpen());
else if (oldexpr == no)
item->setOpen(TRUE);
}
if (oldexpr != newexpr)
parent()->updateList(item);
break;
case S_INT:
case S_HEX:
case S_STRING:
#if QT_VERSION >= 300
if (colMap[dataColIdx] >= 0)
item->startRename(colMap[dataColIdx]);
else
#endif
parent()->lineEdit->show(item);
break;
}
}
void ConfigList::setRootMenu(struct menu *menu)
{
enum prop_type type;
if (rootEntry == menu)
return;
type = menu && menu->prompt ? menu->prompt->type : P_UNKNOWN;
if (type != P_MENU)
return;
updateMenuList(this, 0);
rootEntry = menu;
updateListAll();
setSelected(currentItem(), hasFocus());
}
void ConfigList::setParentMenu(void)
{
ConfigItem* item;
struct menu *oldroot;
oldroot = rootEntry;
if (rootEntry == &rootmenu)
return;
setRootMenu(menu_get_parent_menu(rootEntry->parent));
QListViewItemIterator it(this);
for (; (item = (ConfigItem*)it.current()); it++) {
if (item->menu == oldroot) {
setCurrentItem(item);
ensureItemVisible(item);
break;
}
}
}
void ConfigList::keyPressEvent(QKeyEvent* ev)
{
QListViewItem* i = currentItem();
ConfigItem* item;
struct menu *menu;
enum prop_type type;
if (ev->key() == Key_Escape && mode != fullMode) {
emit parentSelected();
ev->accept();
return;
}
if (!i) {
Parent::keyPressEvent(ev);
return;
}
item = (ConfigItem*)i;
switch (ev->key()) {
case Key_Return:
case Key_Enter:
if (item->goParent) {
emit parentSelected();
break;
}
menu = item->menu;
if (!menu)
break;
type = menu->prompt ? menu->prompt->type : P_UNKNOWN;
if (type == P_MENU && rootEntry != menu &&
mode != fullMode && mode != menuMode) {
emit menuSelected(menu);
break;
}
case Key_Space:
changeValue(item);
break;
case Key_N:
setValue(item, no);
break;
case Key_M:
setValue(item, mod);
break;
case Key_Y:
setValue(item, yes);
break;
default:
Parent::keyPressEvent(ev);
return;
}
ev->accept();
}
void ConfigList::contentsMousePressEvent(QMouseEvent* e)
{
//QPoint p(contentsToViewport(e->pos()));
//printf("contentsMousePressEvent: %d,%d\n", p.x(), p.y());
Parent::contentsMousePressEvent(e);
}
void ConfigList::contentsMouseReleaseEvent(QMouseEvent* e)
{
QPoint p(contentsToViewport(e->pos()));
ConfigItem* item = (ConfigItem*)itemAt(p);
struct menu *menu;
enum prop_type ptype;
const QPixmap* pm;
int idx, x;
if (!item)
goto skip;
menu = item->menu;
x = header()->offset() + p.x();
idx = colRevMap[header()->sectionAt(x)];
switch (idx) {
case promptColIdx:
pm = item->pixmap(promptColIdx);
if (pm) {
int off = header()->sectionPos(0) + itemMargin() +
treeStepSize() * (item->depth() + (rootIsDecorated() ? 1 : 0));
if (x >= off && x < off + pm->width()) {
if (item->goParent) {
emit parentSelected();
break;
} else if (!menu)
break;
ptype = menu->prompt ? menu->prompt->type : P_UNKNOWN;
if (ptype == P_MENU && rootEntry != menu &&
mode != fullMode && mode != menuMode)
emit menuSelected(menu);
else
changeValue(item);
}
}
break;
case noColIdx:
setValue(item, no);
break;
case modColIdx:
setValue(item, mod);
break;
case yesColIdx:
setValue(item, yes);
break;
case dataColIdx:
changeValue(item);
break;
}
skip:
//printf("contentsMouseReleaseEvent: %d,%d\n", p.x(), p.y());
Parent::contentsMouseReleaseEvent(e);
}
void ConfigList::contentsMouseMoveEvent(QMouseEvent* e)
{
//QPoint p(contentsToViewport(e->pos()));
//printf("contentsMouseMoveEvent: %d,%d\n", p.x(), p.y());
Parent::contentsMouseMoveEvent(e);
}
void ConfigList::contentsMouseDoubleClickEvent(QMouseEvent* e)
{
QPoint p(contentsToViewport(e->pos()));
ConfigItem* item = (ConfigItem*)itemAt(p);
struct menu *menu;
enum prop_type ptype;
if (!item)
goto skip;
if (item->goParent) {
emit parentSelected();
goto skip;
}
menu = item->menu;
if (!menu)
goto skip;
ptype = menu->prompt ? menu->prompt->type : P_UNKNOWN;
if (ptype == P_MENU && (mode == singleMode || mode == symbolMode))
emit menuSelected(menu);
else if (menu->sym)
changeValue(item);
skip:
//printf("contentsMouseDoubleClickEvent: %d,%d\n", p.x(), p.y());
Parent::contentsMouseDoubleClickEvent(e);
}
void ConfigList::focusInEvent(QFocusEvent *e)
{
Parent::focusInEvent(e);
QListViewItem* item = currentItem();
if (!item)
return;
setSelected(item, TRUE);
emit gotFocus();
}
ConfigView* ConfigView::viewList;
ConfigView::ConfigView(QWidget* parent, ConfigMainWindow* cview,
ConfigSettings *configSettings)
: Parent(parent)
{
list = new ConfigList(this, cview, configSettings);
lineEdit = new ConfigLineEdit(this);
lineEdit->hide();
this->nextView = viewList;
viewList = this;
}
ConfigView::~ConfigView(void)
{
ConfigView** vp;
for (vp = &viewList; *vp; vp = &(*vp)->nextView) {
if (*vp == this) {
*vp = nextView;
break;
}
}
}
void ConfigView::updateList(ConfigItem* item)
{
ConfigView* v;
for (v = viewList; v; v = v->nextView)
v->list->updateList(item);
}
void ConfigView::updateListAll(void)
{
ConfigView* v;
for (v = viewList; v; v = v->nextView)
v->list->updateListAll();
}
/*
* Construct the complete config widget
*/
ConfigMainWindow::ConfigMainWindow(void)
{
QMenuBar* menu;
bool ok;
int x, y, width, height;
QWidget *d = configApp->desktop();
ConfigSettings* configSettings = new ConfigSettings();
#if QT_VERSION >= 300
width = configSettings->readNumEntry("/kconfig/qconf/window width", d->width() - 64);
height = configSettings->readNumEntry("/kconfig/qconf/window height", d->height() - 64);
resize(width, height);
x = configSettings->readNumEntry("/kconfig/qconf/window x", 0, &ok);
if (ok)
y = configSettings->readNumEntry("/kconfig/qconf/window y", 0, &ok);
if (ok)
move(x, y);
showDebug = configSettings->readBoolEntry("/kconfig/qconf/showDebug", false);
// read list settings into configSettings, will be used later for ConfigList setup
configSettings->readListSettings();
#else
width = d->width() - 64;
height = d->height() - 64;
resize(width, height);
showDebug = false;
#endif
split1 = new QSplitter(this);
split1->setOrientation(QSplitter::Horizontal);
setCentralWidget(split1);
menuView = new ConfigView(split1, this, configSettings);
menuList = menuView->list;
split2 = new QSplitter(split1);
split2->setOrientation(QSplitter::Vertical);
// create config tree
configView = new ConfigView(split2, this, configSettings);
configList = configView->list;
helpText = new QTextView(split2);
helpText->setTextFormat(Qt::RichText);
setTabOrder(configList, helpText);
configList->setFocus();
menu = menuBar();
toolBar = new QToolBar("Tools", this);
backAction = new QAction("Back", QPixmap(xpm_back), "Back", 0, this);
connect(backAction, SIGNAL(activated()), SLOT(goBack()));
backAction->setEnabled(FALSE);
QAction *quitAction = new QAction("Quit", "&Quit", CTRL+Key_Q, this);
connect(quitAction, SIGNAL(activated()), SLOT(close()));
QAction *loadAction = new QAction("Load", QPixmap(xpm_load), "&Load", CTRL+Key_L, this);
connect(loadAction, SIGNAL(activated()), SLOT(loadConfig()));
QAction *saveAction = new QAction("Save", QPixmap(xpm_save), "&Save", CTRL+Key_S, this);
connect(saveAction, SIGNAL(activated()), SLOT(saveConfig()));
QAction *saveAsAction = new QAction("Save As...", "Save &As...", 0, this);
connect(saveAsAction, SIGNAL(activated()), SLOT(saveConfigAs()));
QAction *singleViewAction = new QAction("Single View", QPixmap(xpm_single_view), "Split View", 0, this);
connect(singleViewAction, SIGNAL(activated()), SLOT(showSingleView()));
QAction *splitViewAction = new QAction("Split View", QPixmap(xpm_split_view), "Split View", 0, this);
connect(splitViewAction, SIGNAL(activated()), SLOT(showSplitView()));
QAction *fullViewAction = new QAction("Full View", QPixmap(xpm_tree_view), "Full View", 0, this);
connect(fullViewAction, SIGNAL(activated()), SLOT(showFullView()));
QAction *showNameAction = new QAction(NULL, "Show Name", 0, this);
showNameAction->setToggleAction(TRUE);
showNameAction->setOn(configList->showName);
connect(showNameAction, SIGNAL(toggled(bool)), SLOT(setShowName(bool)));
QAction *showRangeAction = new QAction(NULL, "Show Range", 0, this);
showRangeAction->setToggleAction(TRUE);
showRangeAction->setOn(configList->showRange);
connect(showRangeAction, SIGNAL(toggled(bool)), SLOT(setShowRange(bool)));
QAction *showDataAction = new QAction(NULL, "Show Data", 0, this);
showDataAction->setToggleAction(TRUE);
showDataAction->setOn(configList->showData);
connect(showDataAction, SIGNAL(toggled(bool)), SLOT(setShowData(bool)));
QAction *showAllAction = new QAction(NULL, "Show All Options", 0, this);
showAllAction->setToggleAction(TRUE);
showAllAction->setOn(configList->showAll);
connect(showAllAction, SIGNAL(toggled(bool)), SLOT(setShowAll(bool)));
QAction *showDebugAction = new QAction(NULL, "Show Debug Info", 0, this);
showDebugAction->setToggleAction(TRUE);
showDebugAction->setOn(showDebug);
connect(showDebugAction, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool)));
QAction *showIntroAction = new QAction(NULL, "Introduction", 0, this);
connect(showIntroAction, SIGNAL(activated()), SLOT(showIntro()));
QAction *showAboutAction = new QAction(NULL, "About", 0, this);
connect(showAboutAction, SIGNAL(activated()), SLOT(showAbout()));
// init tool bar
backAction->addTo(toolBar);
toolBar->addSeparator();
loadAction->addTo(toolBar);
saveAction->addTo(toolBar);
toolBar->addSeparator();
singleViewAction->addTo(toolBar);
splitViewAction->addTo(toolBar);
fullViewAction->addTo(toolBar);
// create config menu
QPopupMenu* config = new QPopupMenu(this);
menu->insertItem("&File", config);
loadAction->addTo(config);
saveAction->addTo(config);
saveAsAction->addTo(config);
config->insertSeparator();
quitAction->addTo(config);
// create options menu
QPopupMenu* optionMenu = new QPopupMenu(this);
menu->insertItem("&Option", optionMenu);
showNameAction->addTo(optionMenu);
showRangeAction->addTo(optionMenu);
showDataAction->addTo(optionMenu);
optionMenu->insertSeparator();
showAllAction->addTo(optionMenu);
showDebugAction->addTo(optionMenu);
// create help menu
QPopupMenu* helpMenu = new QPopupMenu(this);
menu->insertSeparator();
menu->insertItem("&Help", helpMenu);
showIntroAction->addTo(helpMenu);
showAboutAction->addTo(helpMenu);
connect(configList, SIGNAL(menuSelected(struct menu *)),
SLOT(changeMenu(struct menu *)));
connect(configList, SIGNAL(parentSelected()),
SLOT(goBack()));
connect(menuList, SIGNAL(menuSelected(struct menu *)),
SLOT(changeMenu(struct menu *)));
connect(configList, SIGNAL(gotFocus(void)),
SLOT(listFocusChanged(void)));
connect(menuList, SIGNAL(gotFocus(void)),
SLOT(listFocusChanged(void)));
#if QT_VERSION >= 300
QString listMode = configSettings->readEntry("/kconfig/qconf/listMode", "symbol");
if (listMode == "single")
showSingleView();
else if (listMode == "full")
showFullView();
else /*if (listMode == "split")*/
showSplitView();
// UI setup done, restore splitter positions
QValueList<int> sizes = configSettings->readSizes("/kconfig/qconf/split1", &ok);
if (ok)
split1->setSizes(sizes);
sizes = configSettings->readSizes("/kconfig/qconf/split2", &ok);
if (ok)
split2->setSizes(sizes);
#else
showSplitView();
#endif
delete configSettings;
}
static QString print_filter(const char *str)
{
QRegExp re("[<>&\"\\n]");
QString res = str;
for (int i = 0; (i = res.find(re, i)) >= 0;) {
switch (res[i].latin1()) {
case '<':
res.replace(i, 1, "&lt;");
i += 4;
break;
case '>':
res.replace(i, 1, "&gt;");
i += 4;
break;
case '&':
res.replace(i, 1, "&amp;");
i += 5;
break;
case '"':
res.replace(i, 1, "&quot;");
i += 6;
break;
case '\n':
res.replace(i, 1, "<br>");
i += 4;
break;
}
}
return res;
}
static void expr_print_help(void *data, const char *str)
{
((QString*)data)->append(print_filter(str));
}
/*
* display a new help entry as soon as a new menu entry is selected
*/
void ConfigMainWindow::setHelp(QListViewItem* item)
{
struct symbol* sym;
struct menu* menu = 0;
configList->parent()->lineEdit->hide();
if (item)
menu = ((ConfigItem*)item)->menu;
if (!menu) {
helpText->setText(NULL);
return;
}
QString head, debug, help;
menu = ((ConfigItem*)item)->menu;
sym = menu->sym;
if (sym) {
if (menu->prompt) {
head += "<big><b>";
head += print_filter(menu->prompt->text);
head += "</b></big>";
if (sym->name) {
head += " (";
head += print_filter(sym->name);
head += ")";
}
} else if (sym->name) {
head += "<big><b>";
head += print_filter(sym->name);
head += "</b></big>";
}
head += "<br><br>";
if (showDebug) {
debug += "type: ";
debug += print_filter(sym_type_name(sym->type));
if (sym_is_choice(sym))
debug += " (choice)";
debug += "<br>";
if (sym->rev_dep.expr) {
debug += "reverse dep: ";
expr_print(sym->rev_dep.expr, expr_print_help, &debug, E_NONE);
debug += "<br>";
}
for (struct property *prop = sym->prop; prop; prop = prop->next) {
switch (prop->type) {
case P_PROMPT:
case P_MENU:
debug += "prompt: ";
debug += print_filter(prop->text);
debug += "<br>";
break;
case P_DEFAULT:
debug += "default: ";
expr_print(prop->expr, expr_print_help, &debug, E_NONE);
debug += "<br>";
break;
case P_CHOICE:
if (sym_is_choice(sym)) {
debug += "choice: ";
expr_print(prop->expr, expr_print_help, &debug, E_NONE);
debug += "<br>";
}
break;
case P_SELECT:
debug += "select: ";
expr_print(prop->expr, expr_print_help, &debug, E_NONE);
debug += "<br>";
break;
case P_RANGE:
debug += "range: ";
expr_print(prop->expr, expr_print_help, &debug, E_NONE);
debug += "<br>";
break;
default:
debug += "unknown property: ";
debug += prop_get_type_name(prop->type);
debug += "<br>";
}
if (prop->visible.expr) {
debug += " dep: ";
expr_print(prop->visible.expr, expr_print_help, &debug, E_NONE);
debug += "<br>";
}
}
debug += "<br>";
}
help = print_filter(sym->help);
} else if (menu->prompt) {
head += "<big><b>";
head += print_filter(menu->prompt->text);
head += "</b></big><br><br>";
if (showDebug) {
if (menu->prompt->visible.expr) {
debug += " dep: ";
expr_print(menu->prompt->visible.expr, expr_print_help, &debug, E_NONE);
debug += "<br><br>";
}
}
}
if (showDebug)
debug += QString().sprintf("defined at %s:%d<br><br>", menu->file->name, menu->lineno);
helpText->setText(head + debug + help);
}
void ConfigMainWindow::loadConfig(void)
{
QString s = QFileDialog::getOpenFileName(".config", NULL, this);
if (s.isNull())
return;
if (conf_read(s.latin1()))
QMessageBox::information(this, "qconf", "Unable to load configuration!");
ConfigView::updateListAll();
}
void ConfigMainWindow::saveConfig(void)
{
if (conf_write(NULL))
QMessageBox::information(this, "qconf", "Unable to save configuration!");
}
void ConfigMainWindow::saveConfigAs(void)
{
QString s = QFileDialog::getSaveFileName(".config", NULL, this);
if (s.isNull())
return;
if (conf_write(s.latin1()))
QMessageBox::information(this, "qconf", "Unable to save configuration!");
}
void ConfigMainWindow::changeMenu(struct menu *menu)
{
configList->setRootMenu(menu);
backAction->setEnabled(TRUE);
}
void ConfigMainWindow::listFocusChanged(void)
{
if (menuList->hasFocus()) {
if (menuList->mode == menuMode)
configList->clearSelection();
setHelp(menuList->selectedItem());
} else if (configList->hasFocus()) {
setHelp(configList->selectedItem());
}
}
void ConfigMainWindow::goBack(void)
{
ConfigItem* item;
configList->setParentMenu();
if (configList->rootEntry == &rootmenu)
backAction->setEnabled(FALSE);
item = (ConfigItem*)menuList->selectedItem();
while (item) {
if (item->menu == configList->rootEntry) {
menuList->setSelected(item, TRUE);
break;
}
item = (ConfigItem*)item->parent();
}
}
void ConfigMainWindow::showSingleView(void)
{
menuView->hide();
menuList->setRootMenu(0);
configList->mode = singleMode;
if (configList->rootEntry == &rootmenu)
configList->updateListAll();
else
configList->setRootMenu(&rootmenu);
configList->setAllOpen(TRUE);
configList->setFocus();
}
void ConfigMainWindow::showSplitView(void)
{
configList->mode = symbolMode;
if (configList->rootEntry == &rootmenu)
configList->updateListAll();
else
configList->setRootMenu(&rootmenu);
configList->setAllOpen(TRUE);
configApp->processEvents();
menuList->mode = menuMode;
menuList->setRootMenu(&rootmenu);
menuList->setAllOpen(TRUE);
menuView->show();
menuList->setFocus();
}
void ConfigMainWindow::showFullView(void)
{
menuView->hide();
menuList->setRootMenu(0);
configList->mode = fullMode;
if (configList->rootEntry == &rootmenu)
configList->updateListAll();
else
configList->setRootMenu(&rootmenu);
configList->setAllOpen(FALSE);
configList->setFocus();
}
void ConfigMainWindow::setShowAll(bool b)
{
if (configList->showAll == b)
return;
configList->showAll = b;
configList->updateListAll();
menuList->showAll = b;
menuList->updateListAll();
}
void ConfigMainWindow::setShowDebug(bool b)
{
if (showDebug == b)
return;
showDebug = b;
}
void ConfigMainWindow::setShowName(bool b)
{
if (configList->showName == b)
return;
configList->showName = b;
configList->reinit();
menuList->showName = b;
menuList->reinit();
}
void ConfigMainWindow::setShowRange(bool b)
{
if (configList->showRange == b)
return;
configList->showRange = b;
configList->reinit();
menuList->showRange = b;
menuList->reinit();
}
void ConfigMainWindow::setShowData(bool b)
{
if (configList->showData == b)
return;
configList->showData = b;
configList->reinit();
menuList->showData = b;
menuList->reinit();
}
/*
* ask for saving configuration before quitting
* TODO ask only when something changed
*/
void ConfigMainWindow::closeEvent(QCloseEvent* e)
{
if (!sym_change_count) {
e->accept();
return;
}
QMessageBox mb("qconf", "Save configuration?", QMessageBox::Warning,
QMessageBox::Yes | QMessageBox::Default, QMessageBox::No, QMessageBox::Cancel | QMessageBox::Escape);
mb.setButtonText(QMessageBox::Yes, "&Save Changes");
mb.setButtonText(QMessageBox::No, "&Discard Changes");
mb.setButtonText(QMessageBox::Cancel, "Cancel Exit");
switch (mb.exec()) {
case QMessageBox::Yes:
conf_write(NULL);
case QMessageBox::No:
e->accept();
break;
case QMessageBox::Cancel:
e->ignore();
break;
}
}
void ConfigMainWindow::showIntro(void)
{
static char str[] = "Welcome to the qconf graphical kernel configuration tool for Linux.\n\n"
"For each option, a blank box indicates the feature is disabled, a check\n"
"indicates it is enabled, and a dot indicates that it is to be compiled\n"
"as a module. Clicking on the box will cycle through the three states.\n\n"
"If you do not see an option (e.g., a device driver) that you believe\n"
"should be present, try turning on Show All Options under the Options menu.\n"
"Although there is no cross reference yet to help you figure out what other\n"
"options must be enabled to support the option you are interested in, you can\n"
"still view the help of a grayed-out option.\n\n"
"Toggling Show Debug Info under the Options menu will show the dependencies,\n"
"which you can then match by examining other options.\n\n";
QMessageBox::information(this, "qconf", str);
}
void ConfigMainWindow::showAbout(void)
{
static char str[] = "qconf is Copyright (C) 2002 Roman Zippel <[email protected]>.\n\n"
"Bug reports and feature request can also be entered at http://bugzilla.kernel.org/\n";
QMessageBox::information(this, "qconf", str);
}
void ConfigMainWindow::saveSettings(void)
{
#if QT_VERSION >= 300
ConfigSettings *configSettings = new ConfigSettings;
configSettings->writeEntry("/kconfig/qconf/window x", pos().x());
configSettings->writeEntry("/kconfig/qconf/window y", pos().y());
configSettings->writeEntry("/kconfig/qconf/window width", size().width());
configSettings->writeEntry("/kconfig/qconf/window height", size().height());
configSettings->writeEntry("/kconfig/qconf/showName", configList->showName);
configSettings->writeEntry("/kconfig/qconf/showRange", configList->showRange);
configSettings->writeEntry("/kconfig/qconf/showData", configList->showData);
configSettings->writeEntry("/kconfig/qconf/showAll", configList->showAll);
configSettings->writeEntry("/kconfig/qconf/showDebug", showDebug);
QString entry;
switch(configList->mode) {
case singleMode :
entry = "single";
break;
case symbolMode :
entry = "split";
break;
case fullMode :
entry = "full";
break;
}
configSettings->writeEntry("/kconfig/qconf/listMode", entry);
configSettings->writeSizes("/kconfig/qconf/split1", split1->sizes());
configSettings->writeSizes("/kconfig/qconf/split2", split2->sizes());
delete configSettings;
#endif
}
void fixup_rootmenu(struct menu *menu)
{
struct menu *child;
static int menu_cnt = 0;
menu->flags |= MENU_ROOT;
for (child = menu->list; child; child = child->next) {
if (child->prompt && child->prompt->type == P_MENU) {
menu_cnt++;
fixup_rootmenu(child);
menu_cnt--;
} else if (!menu_cnt)
fixup_rootmenu(child);
}
}
static const char *progname;
static void usage(void)
{
printf("%s <config>\n", progname);
exit(0);
}
int main(int ac, char** av)
{
ConfigMainWindow* v;
const char *name;
#ifndef LKC_DIRECT_LINK
kconfig_load();
#endif
progname = av[0];
configApp = new QApplication(ac, av);
if (ac > 1 && av[1][0] == '-') {
switch (av[1][1]) {
case 'h':
case '?':
usage();
}
name = av[2];
} else
name = av[1];
if (!name)
usage();
conf_parse(name);
fixup_rootmenu(&rootmenu);
conf_read(NULL);
//zconfdump(stdout);
v = new ConfigMainWindow();
//zconfdump(stdout);
v->show();
configApp->connect(configApp, SIGNAL(lastWindowClosed()), SLOT(quit()));
configApp->connect(configApp, SIGNAL(aboutToQuit()), v, SLOT(saveSettings()));
configApp->exec();
return 0;
}
rsnapshot basic questions
• I am a little confused about how rsnapshot works. I have some basic questions I hope someone can help me with.
* Does the rsnapshot used by OMV only use hard links?
* Do the rsnapshot snapshots take up additional disk space, at least for the initial snapshot? I see that my rsnapshot share is just as big as the sum of all the shares that I am using rsnapshot on.
* On other systems there is normally a .snapshots directory in the directories that rsnapshot is backing up. Where is that in the OMV setup?
• rsnapshot does not create the same type of snapshot as CoW filesystems like ZFS or Btrfs do.
Does the rsnapshot used by OMV only use hard links?
OMV is based on Debian. So the rsnapshot from Debian repo is used. And rsnapshot makes intensive use of hardlinks for unchanged files.
Do the rsnapshot snapshots take up additional disk space, at least for the initial snapshot? I see that my rsnapshot share is just as big as the sum of all the shares that I am using rsnapshot on.
Yes, the first snapshot created by rsnapshot has the same size as the original data. Then the second snapshot only uses additional space for the changed data.
On other systems there is normally a .snapshots directory in the directories that rsnapshot is backing up. Where is that in the OMV setup?
When you setup a rsnapshot job with the plugin, you define a source and a destination. Both are shared folders you have to define first.
• Thanks Macom. That helps a lot.
Regarding the .snapshot folder: I guess my question was not clear; I had a typo in it. I have set up rsnapshot with a source and a destination directory, and I can see that rsnapshot made a snapshot of my data in the destination directory.
I was just referring to other Linux systems I work on, like at school or at work, where we have a .snapshot folder present inside our source directories. If we need to fetch something from a snapshot, we go into that .snapshot folder and pull it (as noted in rsnapshot-HOWTO.en.pdf at https://github.com/rsnapshot/rsnapshot ). Is there a way to enable this feature in the rsnapshot plugin in OMV?
• As I understand it, the .rsnapshot folder is just the destination folder for the snapshots.
Quote
###########################
# SNAPSHOT ROOT DIRECTORY #
###########################
# All snapshots will be stored under this root directory.
#
snapshot_root /.snapshots/
I guess you could do the same in OMV by creating a shared folder .rsnapshot. This assumes that rsnapshot ignores hidden folders.
Best to test it first with a small folder with only few files inside.
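For reference, here is what a minimal rsnapshot.conf along those lines could look like. This is only a sketch: the paths are placeholders rather than OMV defaults, the retain lines use the newer directive name (older rsnapshot versions call it interval), and rsnapshot requires the fields to be separated by tabs, not spaces:
snapshot_root   /srv/backup/.snapshots/
retain  daily   7
retain  weekly  4
backup  /srv/data/documents/    localhost/
With a layout like this, each retain level ends up as a daily.0, daily.1, ... directory under snapshot_root, which is where you would browse for older copies of a file.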
Use the CData ODBC Driver for Teradata in MicroStrategy Desktop
Connect to Teradata data in MicroStrategy Desktop using the CData ODBC Driver for Teradata.
MicroStrategy is an analytics and mobility platform that enables data-driven innovation. When paired with the CData ODBC Driver for Teradata, you gain database-like access to live Teradata data from MicroStrategy, expanding your reporting and analytics capabilities. In this article, we walk through adding Teradata as a data source in MicroStrategy Desktop and creating a simple visualization of Teradata data.
The CData ODBC Driver offers unmatched performance for interacting with live Teradata data in MicroStrategy due to optimized data processing built into the driver. When you issue complex SQL queries from MicroStrategy to Teradata, the driver pushes supported SQL operations, like filters and aggregations, directly to Teradata and utilizes the embedded SQL Engine to process unsupported operations (often SQL functions and JOIN operations) client-side. With built-in dynamic metadata querying, you can visualize and analyze Teradata data using native MicroStrategy data types.
Connect to Teradata as an ODBC Data Source
Information for connecting to Teradata follows, along with different instructions for configuring a DSN in Windows and Linux environments.
To connect to Teradata, provide authentication information and specify the database server name.
• User: Set this to the username of a Teradata user.
• Password: Set this to the password of the Teradata user.
• DataSource: Specify the Teradata server name, DBC Name, or TDPID.
• Port: Specify the port the server is running on.
• Database: Specify the database name. If not specified, the default database is used.
When you configure the DSN, you may also want to set the Max Rows connection property. This will limit the number of rows returned, which is especially helpful for improving performance when designing reports and visualizations.
Windows
If you have not already, first specify connection properties in an ODBC DSN (data source name). This is the last step of the driver installation. You can use the Microsoft ODBC Data Source Administrator to create and configure ODBC DSNs.
Linux
If you are installing the CData ODBC Driver for Teradata in a Linux environment, the driver installation predefines a system DSN. You can modify the DSN by editing the system data sources file (/etc/odbc.ini) and defining the required connection properties.
/etc/odbc.ini
[CData Teradata Sys]
Driver = CData ODBC Driver for Teradata
Description = My Description
User = myuser
Password = mypassword
Server = localhost
Database = mydatabase
For specific information on using these configuration files, please refer to the help documentation (installed and found online).
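Outside of MicroStrategy, a quick way to verify that the DSN itself works is to query it from a short script. The following is only an illustrative sketch, not part of the MicroStrategy workflow: it assumes the Python pyodbc package is installed and reuses the DSN name, placeholder credentials, and sample NorthwindProducts table from this article.

import pyodbc

# Connect through the ODBC DSN configured above (credentials are placeholders).
conn = pyodbc.connect("DSN=CData Teradata Sys;UID=myuser;PWD=mypassword")
cursor = conn.cursor()

# Run the same sample query used in the MicroStrategy walkthrough below.
cursor.execute("SELECT * FROM NorthwindProducts")
for row in cursor.fetchmany(5):
    print(row)

conn.close()

If this returns rows, the DSN is ready to be used from MicroStrategy Desktop.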
Connect to and Visualize Teradata Data using MicroStrategy Desktop
In addition to connecting Teradata in MicroStrategy enterprise products, you can connect to Teradata in MicroStrategy Desktop. Follow the steps below to add Teradata data as a dataset and create visualizations and reports of Teradata data.
1. Open MicroStrategy Desktop and create a new dossier.
2. In the datasets panel, click New Data, select Databases, and select Type a Query as the Import Option.
3. Add a new data source and choose DSN data sources.
4. Choose the DSN you previously configured (likely CData Teradata Sys) and select Generic DBMS in the Version menu.
5. Set the User and Password properties for the DSN (or use placeholder values) and name the data source.
6. Select the new database instance to view the tables. You may need to manually click the search icon in the Available Tables section to see the tables.
7. Create a SQL query for the Teradata data (see below) and click Execute SQL to test the query.
SELECT * FROM NorthwindProducts
NOTE: Since we create a live connection, we can execute a SELECT * query and utilize the filtering and aggregation features native to the MicroStrategy products.
8. Click Finish and choose to connect live.
9. Choose a visualization, choose fields to display (data types are discovered automatically through dynamic metadata discovery) and apply any filters to create a new visualization of Teradata data. Where possible, the complex queries generated by the filters and aggregations will be pushed down to Teradata, while any unsupported operations (which can include SQL functions and JOIN operations) will be managed client-side by the CData SQL Engine embedded in the driver.
10. Once you are finished configuring the dossier, click File -> Save.
Using the CData ODBC Driver for Teradata in MicroStrategy Desktop, you can easily create robust visualizations and reports on Teradata data. Read our other articles on connecting to Teradata in MicroStrategy and connecting to Teradata in MicroStrategy Web for more examples.
1 vote
4 answers
192 views
Is unit and component testing sufficient?
If you can test every line of [your product's] code via unit tests, wouldn't unit testing alone (theoretically) be sufficient? Are there other "necessary" benchmarks of proper test coverage other ...
3 votes
1 answer
113 views
Are there common techniques for testing the conformance of an implementation to a general contract?
Say you have defined some abstract interface and you specify a general contract for that interface to which all implementations must adhere. Are there common techniques that can facilitate testing the ...
Commit 8699dbb9 authored by Hanns Holger Rutz
issue 53 -- have a test case, but only works in RT
parent 0c6fd77b
lazy val baseName = "SoundProcesses"
lazy val baseNameL = baseName.toLowerCase
lazy val projectVersion = "3.29.0"
lazy val projectVersion = "3.29.1-SNAPSHOT"
lazy val mimaVersion = "3.29.0" // used for migration-manager
lazy val commonSettings = Seq(
......
......@@ -10,7 +10,7 @@ import de.sciss.synth.io.AudioFileSpec
/*
To run only this suite:
test-only de.sciss.synth.proc.AttributesSpec
testOnly de.sciss.synth.proc.AttributesSpec
*/
class AttributesSpec extends ConfluentEventSpec {
......
......@@ -7,7 +7,7 @@ import scala.concurrent.ExecutionContext
/*
test-only de.sciss.synth.proc.AuralSpecs
testOnly de.sciss.synth.proc.AuralSpecs
*/
class AuralSpecs extends BounceSpec {
......@@ -94,7 +94,7 @@ class AuralSpecs extends BounceSpec {
// Thread.sleep(100000)
assertSameSignal(arr0, man)
assertSameSignal(arr1, man)
assert(true)
// assert(true)
} (global)
}
}
......@@ -119,14 +119,16 @@ abstract class BounceSpec extends fixture.AsyncFlatSpec with Matchers {
println(s) // arr.mkString("Vector(", ",", ")"))
}
final def mkSine(freq: Double, startFrame: Int, len: Int, sampleRate: Double = sampleRate): Array[Float] = {
final def mkSine(freq: Double, startFrame: Int, len: Int, sampleRate: Double = sampleRate,
amp: Double = 1.0): Array[Float] = {
val freqN = 2 * math.Pi * freq / sampleRate
Array.tabulate(len)(i => math.sin((startFrame + i) * freqN).toFloat)
Array.tabulate(len)(i => (math.sin((startFrame + i) * freqN) * amp).toFloat)
}
final def mkLFPulse(freq: Double, startFrame: Int, len: Int, sampleRate: Double = sampleRate): Array[Float] = {
final def mkLFPulse(freq: Double, startFrame: Int, len: Int, sampleRate: Double = sampleRate,
amp: Double = 1.0): Array[Float] = {
val period = sampleRate / freq
Array.tabulate(len)(i => if ((((startFrame + i) / period) % 1.0) < 0.5) 1f else 0f)
Array.tabulate(len)(i => if ((((startFrame + i) / period) % 1.0) < 0.5) amp.toFloat else 0f)
}
final def mulScalar(in: Array[Float], f: Float, start: Int = 0, len: Int = -1): Unit = {
......@@ -149,13 +151,16 @@ abstract class BounceSpec extends fixture.AsyncFlatSpec with Matchers {
}
}
final def add(a: Array[Float], b: Array[Float], start: Int = 0, len: Int = -1): Unit = {
val len0 = if (len < 0) math.min(a.length, b.length) - start else len
var i = start
val j = start + len0
/** Adds `b` to `a` */
final def add(a: Array[Float], b: Array[Float], aOff: Int = 0, bOff: Int = 0, len: Int = -1): Unit = {
val len0 = if (len < 0) math.min(a.length, b.length) - math.max(aOff, bOff) else len
var i = aOff
val j = aOff + len0
var k = bOff
while (i < j) {
a(i) += b(i)
a(i) += b(k)
i += 1
k += 1
}
}
......@@ -214,16 +219,18 @@ abstract class BounceSpec extends fixture.AsyncFlatSpec with Matchers {
final def action(config: Bounce.ConfigBuilder[S], frame: Long)(fun: S#Tx => Unit): Unit =
config.actions = config.actions ++ (Scheduler.Entry[S](time = frame, fun = fun) :: Nil)
final def bounce(config: Bounce.Config[S], timeOut: Duration = 20.seconds)
final def bounce(config: Bounce.Config[S], timeOut: Duration = 20.seconds, debugKeep: Boolean = false)
(implicit universe: Universe[S]): Future[Array[Array[Float]]] = {
requireOutsideTxn()
val b = Bounce[S]()
val p = b(config)
// Important: don't use the single threaded SP context,
// as bounce will block and hang
import ExecutionContext.Implicits.global
p.start()(global)
p.map { f =>
if (debugKeep) println(s"Bounce file: $f")
try {
val a = AudioFile.openRead(f)
require(a.numFrames < 0x7FFFFFFF)
......@@ -232,7 +239,7 @@ abstract class BounceSpec extends fixture.AsyncFlatSpec with Matchers {
a.close()
buf
} finally {
f.delete()
if (!debugKeep) f.delete()
}
} (global)
}
......
......@@ -12,7 +12,7 @@
///*
// To run only this suite:
//
// test-only de.sciss.synth.proc.GraphemeSpec
// testOnly de.sciss.synth.proc.GraphemeSpec
//
// */
//class GraphemeSpec extends ConfluentEventSpec {
......
......@@ -11,7 +11,7 @@ import de.sciss.span.Span
/*
To run only this suite:
test-only de.sciss.synth.proc.Issue15
testOnly de.sciss.synth.proc.Issue15
*/
class Issue15 extends ConfluentEventSpec {
......
package de.sciss.synth.proc
import de.sciss.file.File
import de.sciss.lucre.stm.Folder
import de.sciss.span.Span
import de.sciss.synth.proc.graph.{ScanInFix, ScanOut}
import de.sciss.synth.ugen
import scala.concurrent.ExecutionContext
/*
testOnly de.sciss.synth.proc.Issue53
*/
class Issue53 extends BounceSpec {
"Aural outputs" works { implicit universe =>
import universe.cursor
val freq1 = sampleRate/2
val freq2 = 441.0
val freq3 = 220.5
val tim0 = 1.0
val tim1 = 2.0
val atH = frame(tim0)
val atF = frame(tim1)
val atHf = tim0.secondsFileI
val atFf = tim1.secondsFileI
import ugen._
val tlH = cursor.step { implicit tx =>
val proc1 = proc {
ScanOut(LFPulse.ar(freq1) * 0.5)
}
val proc2 = proc {
ScanOut(SinOsc.ar(freq2) * 0.5)
}
val proc3 = proc {
ScanOut(SinOsc.ar(freq3) * 0.5)
}
val glob = proc {
val in = ScanInFix(1)
Out.ar(0, in)
}
val f = Folder[S]()
f.addLast(proc1.outputs.add(Proc.mainOut))
f.addLast(proc2.outputs.add(Proc.mainOut))
f.addLast(proc3.outputs.add(Proc.mainOut))
glob.attr.put(Proc.mainIn, f)
val tl = Timeline[S]()
tl.add(Span.All, glob)
tl.add(Span(0L, atH), proc1)
tl.add(Span(0L, atH), proc2)
tl.add(Span(atH, atF), proc3)
tx.newHandle(tl)
}
val c = config(List(tlH), Span(0L, atF))
c.server.nrtOutputPath = File.createTemp(suffix = ".aif", deleteOnExit = false).getPath
c.realtime = true
import ExecutionContext.Implicits.global
// showTransportLog = true
val r = bounce(c, debugKeep = true)
r.map { case Array(observed) =>
val sig1 = mkLFPulse(freq1, startFrame = 1, len = atHf, amp = 0.5)
val sig2 = mkSine (freq2, startFrame = 1, len = atHf, amp = 0.5)
val sig3 = mkSine (freq3, startFrame = 1, len = atHf, amp = 0.5)
val expected = new Array[Float](atFf)
add(expected, sig1)
add(expected, sig2)
add(expected, sig3, aOff = atHf)
assertSameSignal(observed, expected)
// assert(true)
} (global)
}
}
......@@ -11,7 +11,7 @@ import org.scalatest.FunSpec
/*
To run only this suite:
test-only de.sciss.synth.proc.SynthGraphSerializationSpec
testOnly de.sciss.synth.proc.SynthGraphSerializationSpec
*/
class SynthGraphSerializationSpec extends FunSpec {
......
......@@ -11,7 +11,7 @@ import org.scalatest.{Matchers, Outcome, fixture}
/*
To test only this suite:
test-only de.sciss.synth.proc.TimelineSerializationSpec
testOnly de.sciss.synth.proc.TimelineSerializationSpec
*/
class TimelineSerializationSpec extends fixture.FlatSpec with Matchers {
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.file._
import de.sciss.lucre.artifact.{Artifact, ArtifactLocation}
......@@ -6,6 +6,7 @@ import de.sciss.lucre.stm.store.BerkeleyDB
import de.sciss.processor.Processor
import de.sciss.span.Span
import de.sciss.synth.io.{AudioFile, AudioFileSpec}
import de.sciss.synth.proc.{AudioCue, Bounce, Durable, Proc, TimeRef, Timeline, UGenGraphBuilder, Universe, graph, showTransportLog}
import de.sciss.synth.{SynthGraph, freeSelf, ugen}
import scala.concurrent.ExecutionContext
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.expr.{DoubleObj, SpanLikeObj}
import de.sciss.lucre.stm
......@@ -7,6 +7,7 @@ import de.sciss.lucre.stm.{Folder, Obj}
import de.sciss.lucre.synth.{Server, Sys}
import de.sciss.span.{Span, SpanLike}
import de.sciss.synth.SynthGraph
import de.sciss.synth.proc.{AuralContext, AuralObj, AuralSystem, Confluent, Durable, Output, Proc, SoundProcesses, SynthGraphObj, TimeRef, Timeline, Universe, showAuralLog, showTransportLog}
import scala.concurrent.stm.Txn
import scala.language.implicitConversions
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.file._
import de.sciss.lucre.artifact.{Artifact, ArtifactLocation}
......@@ -10,6 +10,7 @@ import de.sciss.span.Span
import de.sciss.synth.Curve.{exponential, linear}
import de.sciss.synth.io.{AudioFile, AudioFileType}
import de.sciss.synth.proc.Implicits._
import de.sciss.synth.proc.{Action, AudioCue, AuralContext, AuralObj, Ensemble, FadeSpec, Grapheme, Implicits, ObjKeys, Proc, SynthGraphObj, TimeRef, Timeline, Transport, graph, showTransportLog}
import de.sciss.{numbers, synth}
object AuralTests1 extends AuralTestLike.Factory {
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.file._
import de.sciss.lucre.artifact.{Artifact, ArtifactLocation}
......@@ -9,6 +9,7 @@ import de.sciss.span.Span
import de.sciss.synth
import de.sciss.synth.io.AudioFile
import de.sciss.synth.proc.Implicits._
import de.sciss.synth.proc.{Action, AudioCue, AuralContext, EnvSegment, Grapheme, Proc, SynthGraphObj, Timeline, Transport, graph}
object AuralTests2 extends AuralTestLike.Factory {
def main(args: Array[String]): Unit = init(args)
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.bitemp.BiGroup
import de.sciss.lucre.expr.{LongObj, SpanLikeObj}
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.stm
import de.sciss.lucre.stm.store.BerkeleyDB
......@@ -6,6 +6,7 @@ import de.sciss.lucre.synth.{InMemory, Sys}
import de.sciss.processor.Processor
import de.sciss.span.Span
import de.sciss.synth.proc.Implicits._
import de.sciss.synth.proc.{Bounce, Durable, Proc, SoundProcesses, TimeRef, Timeline, Universe, showTransportLog}
import de.sciss.synth.{SynthGraph, ugen}
import scala.concurrent.ExecutionContext
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.synth.BlockAllocator
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.synth.{Txn, AudioBus, Bus, InMemory}
import de.sciss.lucre.synth.{AudioBus, Bus, InMemory, Txn}
import de.sciss.synth.proc.AuralSystem
import de.sciss.synth.{AudioBus => SAudioBus}
// XXX TODO --- make this a ScalaTest instance
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.expr
import de.sciss.lucre.expr.{LongObj, SpanObj}
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.expr.IntObj
import de.sciss.lucre.synth.InMemory
import de.sciss.synth.SynthGraph
import de.sciss.synth.proc.{Grapheme, Proc, SoundProcesses, TimeRef, Transport, Universe, Workspace, showAuralLog}
object Issue35 {
type S = InMemory
......
package de.sciss.synth.proc
package de.sciss.synth.proc.tests
import de.sciss.lucre.synth.{InMemory, Server, Synth}
import de.sciss.synth
import synth.SynthGraph
import de.sciss.lucre.synth.{InMemory, Synth, Server}
import de.sciss.synth.SynthGraph
object NewTxnTest extends App {
val sys = InMemory()
......
I still haven't found any solution. Have any of you ever had this problem?
I also tried it with tdbsam backend, and I got the same error, so it's not
an LDAP-related issue. I have upgraded to Samba version 3.0.30, but the
problem still exists.
Please help, I'm out of ideas!
My original message was:
I recently set up a PDC using Samba version 3.0.28a. According to the
official Samba documentation, I should be able to use the Microsoft User
Manager tool to manage my Samba domain controller. I am able to
add/delete/modify user accounts with no problem, but editing groups is not
possible for some reason. For example, if I try to add a user account to a
group, I get an "Access denied" error message. This sounds a bit strange to
me, since I log in to the domain as root, so privilege problems should not
happen.
Is this a bug or have I misconfigured something?
What I have already done:
- Install Samba from package
- Edit smb.conf to suit my needs
- Create basic group mappings with the correct RIDs (512 for domain admins,
513 for users, 514 for guests); a sketch of the commands is shown after this list
- Create a separate directory structure for all the shares
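For reference, the group mappings were created roughly like this (just a
sketch; the Unix group names are examples from my setup):
net groupmap add ntgroup="Domain Admins" unixgroup=domadmins rid=512 type=d
net groupmap add ntgroup="Domain Users" unixgroup=domusers rid=513 type=d
net groupmap add ntgroup="Domain Guests" unixgroup=domguests rid=514 type=d
net groupmap list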
My shares are located on separate partitions, each of which has the user_xattr
option enabled in /etc/fstab.
I attached my smb.conf file to this message, to make it easier to understand
my configuration.
Thanks for your help in advance!
--
To unsubscribe from this list go to the following URL and read the
instructions: https://lists.samba.org/mailman/listinfo/samba
fluentasserts.core.array 405/405(100%) line coverage
[per-line execution counts from the coverage listing omitted; the covered source of fluentasserts/core/array.d follows]
module fluentasserts.core.array; import fluentasserts.core.results; public import fluentasserts.core.base; import std.algorithm; import std.conv; import std.traits; import std.range; import std.array; import std.string; import std.math; U[] toValueList(U, V)(V expectedValueList) @trusted { static if(is(V == void[])) { return []; } else static if(is(U == immutable) || is(U == const)) { static if(is(U == class)) { return expectedValueList.array; } else { return expectedValueList.array.idup; } } else { static if(is(U == class)) { return cast(U[]) expectedValueList.array; } else { return cast(U[]) expectedValueList.array.dup; } } } @trusted: struct ListComparison(Type) { alias T = Unqual!Type; private { T[] referenceList; T[] list; double maxRelDiff; } this(U, V)(U reference, V list, double maxRelDiff = 0) { this.referenceList = toValueList!T(reference); this.list = toValueList!T(list); this.maxRelDiff = maxRelDiff; } T[] missing() @trusted { T[] result; auto tmpList = list.dup; foreach(element; referenceList) { static if(std.traits.isNumeric!(T)) { auto index = tmpList.countUntil!(a => approxEqual(element, a, maxRelDiff)); } else { auto index = tmpList.countUntil(element); } if(index == -1) { result ~= element; } else { tmpList = remove(tmpList, index); } } return result; } T[] extra() @trusted { T[] result; auto tmpReferenceList = referenceList.dup; foreach(element; list) { static if(isFloatingPoint!(T)) { auto index = tmpReferenceList.countUntil!(a => approxEqual(element, a, maxRelDiff)); } else { auto index = tmpReferenceList.countUntil(element); } if(index == -1) { result ~= element; } else { tmpReferenceList = remove(tmpReferenceList, index); } } return result; } T[] common() @trusted { T[] result; auto tmpList = list.dup; foreach(element; referenceList) { if(tmpList.length == 0) { break; } static if(isFloatingPoint!(T)) { auto index = tmpList.countUntil!(a => approxEqual(element, a, maxRelDiff)); } else { auto index = tmpList.countUntil(element); } if(index >= 0) { result ~= element; tmpList = std.algorithm.remove(tmpList, index); } } return result; } } /// ListComparison should be able to get the missing elements unittest { auto comparison = ListComparison!int([1, 2, 3], [4]); auto missing = comparison.missing; assert(missing.length == 3); assert(missing[0] == 1); assert(missing[1] == 2); assert(missing[2] == 3); } /// ListComparison should be able to get the missing elements with duplicates unittest { auto comparison = ListComparison!int([2, 2], [2]); auto missing = comparison.missing; assert(missing.length == 1); assert(missing[0] == 2); } /// ListComparison should be able to get the extra elements unittest { auto comparison = ListComparison!int([4], [1, 2, 3]); auto extra = comparison.extra; assert(extra.length == 3); assert(extra[0] == 1); assert(extra[1] == 2); assert(extra[2] == 3); } /// ListComparison should be able to get the extra elements with duplicates unittest { auto comparison = ListComparison!int([2], [2, 2]); auto extra = comparison.extra; assert(extra.length == 1); assert(extra[0] == 2); } /// ListComparison should be able to get the common elements unittest { auto comparison = ListComparison!int([1, 2, 3, 4], [2, 3]); auto common = comparison.common; assert(common.length == 2); assert(common[0] == 2); assert(common[1] == 3); } /// ListComparison should be able to get the common elements with duplicates unittest { auto comparison = ListComparison!int([2, 2, 2, 2], [2, 2]); auto common = comparison.common; assert(common.length == 2); assert(common[0] == 2); 
assert(common[1] == 2); } @safe: struct ShouldList(T) if(isInputRange!(T)) { private T testData; alias U = Unqual!(ElementType!T); mixin ShouldCommons; mixin DisabledShouldThrowableCommons; auto equal(V)(V expectedValueList, const string file = __FILE__, const size_t line = __LINE__) @trusted { auto valueList = toValueList!(Unqual!U)(expectedValueList); addMessage(" equal"); addMessage(" `"); addValue(valueList.to!string); addMessage("`"); beginCheck; return approximately(expectedValueList, 0, file, line); } auto approximately(V)(V expectedValueList, double maxRelDiff = 1e-05, const string file = __FILE__, const size_t line = __LINE__) @trusted { import fluentasserts.core.basetype; auto valueList = toValueList!(Unqual!U)(expectedValueList); addMessage(" approximately"); addMessage(" `"); addValue(valueList.to!string); addMessage("`"); beginCheck; auto comparison = ListComparison!U(valueList, testData.array, maxRelDiff); auto missing = comparison.missing; auto extra = comparison.extra; auto common = comparison.common; auto arrayTestData = testData.array; auto strArrayTestData = "[" ~ testData.map!(a => (cast()a).to!string).join(", ") ~ "]"; static if(std.traits.isNumeric!(U)) { string strValueList; if(maxRelDiff == 0) { strValueList = valueList.to!string; } else { strValueList = "[" ~ valueList.map!(a => a.to!string ~ "±" ~ maxRelDiff.to!string).join(", ") ~ "]"; } } else { auto strValueList = valueList.to!string; } static if(std.traits.isNumeric!(U)) { string strMissing; if(maxRelDiff == 0 || missing.length == 0) { strMissing = missing.length == 0 ? "" : missing.to!string; } else { strMissing = "[" ~ missing.map!(a => a.to!string ~ "±" ~ maxRelDiff.to!string).join(", ") ~ "]"; } } else { string strMissing = missing.length == 0 ? "" : missing.to!string; } bool allEqual = valueList.length == arrayTestData.length; foreach(i; 0..valueList.length) { static if(std.traits.isNumeric!(U)) { allEqual = allEqual && approxEqual(valueList[i], arrayTestData[i], maxRelDiff); } else { allEqual = allEqual && (valueList[i] == arrayTestData[i]); } } if(expectedValue) { return result(allEqual, [], [ cast(IResult) new ExpectedActualResult(strValueList, strArrayTestData), cast(IResult) new ExtraMissingResult(extra.length == 0 ? "" : extra.to!string, strMissing) ], file, line); } else { return result(allEqual, [], [ cast(IResult) new ExpectedActualResult("not " ~ strValueList, strArrayTestData), cast(IResult) new ExtraMissingResult(extra.length == 0 ? 
"" : extra.to!string, strMissing) ], file, line); } } auto containOnly(V)(V expectedValueList, const string file = __FILE__, const size_t line = __LINE__) @trusted { auto valueList = toValueList!(Unqual!U)(expectedValueList); addMessage(" contain only "); addValue(valueList.to!string); beginCheck; auto comparison = ListComparison!U(testData.array, valueList); auto missing = comparison.missing; auto extra = comparison.extra; auto common = comparison.common; string missingString; string extraString; bool isSuccess; string expected; if(expectedValue) { isSuccess = missing.length == 0 && extra.length == 0 && common.length == valueList.length; if(extra.length > 0) { missingString = extra.to!string; } if(missing.length > 0) { extraString = missing.to!string; } } else { isSuccess = (missing.length != 0 || extra.length != 0) || common.length != valueList.length; isSuccess = !isSuccess; if(common.length > 0) { extraString = common.to!string; } } return result(isSuccess, [], [ cast(IResult) new ExpectedActualResult("", testData.to!string), cast(IResult) new ExtraMissingResult(extraString, missingString) ], file, line); } auto contain(V)(V expectedValueList, const string file = __FILE__, const size_t line = __LINE__) @trusted { auto valueList = toValueList!(Unqual!U)(expectedValueList); addMessage(" contain "); addValue(valueList.to!string); beginCheck; auto comparison = ListComparison!U(testData.array, valueList); auto missing = comparison.missing; auto extra = comparison.extra; auto common = comparison.common; ulong[size_t] indexes; foreach(value; testData) { auto index = valueList.countUntil(value); if(index != -1) { indexes[index]++; } } auto found = indexes.keys.map!(a => valueList[a]).array; auto notFound = iota(0, valueList.length).filter!(a => !indexes.keys.canFind(a)).map!(a => valueList[a]).array; auto arePresent = indexes.keys.length == valueList.length; if(expectedValue) { string isString = notFound.length == 1 ? "is" : "are"; return result(arePresent, [ Message(true, notFound.to!string), Message(false, " " ~ isString ~ " missing from "), Message(true, testData.to!string), Message(false, ".") ], [ cast(IResult) new ExpectedActualResult("all of " ~ valueList.to!string, testData.to!string), cast(IResult) new ExtraMissingResult("", notFound.to!string) ], file, line); } else { string isString = found.length == 1 ? "is" : "are"; return result(common.length != 0, [ Message(true, common.to!string), Message(false, " " ~ isString ~ " present in "), Message(true, testData.to!string), Message(false, ".") ], [ cast(IResult) new ExpectedActualResult("none of " ~ valueList.to!string, testData.to!string), cast(IResult) new ExtraMissingResult(common.to!string, "") ], file, line); } } auto contain(U value, const string file = __FILE__, const size_t line = __LINE__) @trusted { addMessage(" contain `"); addValue(value.to!string); addMessage("`"); auto strValue = value.to!string; auto strTestData = "[" ~ testData.map!(a => (cast()a).to!string).join(", ") ~ "]"; beginCheck; auto isPresent = testData.canFind(value); auto msg = [ Message(true, strValue), Message(false, isPresent ? 
" is present in " : " is missing from "), Message(true, strTestData), Message(false, ".") ]; if(expectedValue) { return result(isPresent, msg, [ cast(IResult) new ExpectedActualResult("to contain `" ~ strValue ~ "`", strTestData), cast(IResult) new ExtraMissingResult("", value.to!string) ], file, line); } else { return result(isPresent, msg, [ cast(IResult) new ExpectedActualResult("to not contain `" ~ strValue ~ "`", strTestData), cast(IResult) new ExtraMissingResult(value.to!string, "") ], file, line); } } } /// When there is a lazy array that throws an it should throw that exception unittest { int[] someLazyArray() { throw new Exception("This is it."); } ({ someLazyArray.should.equal([]); }).should.throwAnyException.withMessage("This is it."); ({ someLazyArray.should.approximately([], 3); }).should.throwAnyException.withMessage("This is it."); ({ someLazyArray.should.contain([]); }).should.throwAnyException.withMessage("This is it."); ({ someLazyArray.should.contain(3); }).should.throwAnyException.withMessage("This is it."); } @("range contain") unittest { ({ [1, 2, 3].map!"a".should.contain([2, 1]); [1, 2, 3].map!"a".should.not.contain([4, 5, 6, 7]); }).should.not.throwException!TestException; ({ [1, 2, 3].map!"a".should.contain(1); }).should.not.throwException!TestException; auto msg = ({ [1, 2, 3].map!"a".should.contain([4, 5]); }).should.throwException!TestException.msg; msg.split('\n')[0].should.equal("[1, 2, 3].map!\"a\" should contain [4, 5]. [4, 5] are missing from [1, 2, 3]."); msg.split('\n')[2].strip.should.equal("Expected:all of [4, 5]"); msg.split('\n')[3].strip.should.equal("Actual:[1, 2, 3]"); msg = ({ [1, 2, 3].map!"a".should.not.contain([1, 2]); }).should.throwException!TestException.msg; msg.split('\n')[0].should.equal("[1, 2, 3].map!\"a\" should not contain [1, 2]. 
[1, 2] are present in [1, 2, 3]."); msg.split('\n')[2].strip.should.equal("Expected:none of [1, 2]"); msg.split('\n')[3].strip.should.equal("Actual:[1, 2, 3]"); msg = ({ [1, 2, 3].map!"a".should.contain(4); }).should.throwException!TestException.msg; msg.split('\n')[0].should.contain("4 is missing from [1, 2, 3]"); msg.split('\n')[2].strip.should.equal("Expected:to contain `4`"); msg.split('\n')[3].strip.should.equal("Actual:[1, 2, 3]"); } /// const range contain unittest { const(int)[] data = [1, 2, 3]; data.map!"a".should.contain([2, 1]); data.map!"a".should.contain(data); [1, 2, 3].should.contain(data); ({ data.map!"a * 4".should.not.contain(data); }).should.not.throwAnyException; } /// immutable range contain unittest { immutable(int)[] data = [1, 2, 3]; data.map!"a".should.contain([2, 1]); data.map!"a".should.contain(data); [1, 2, 3].should.contain(data); ({ data.map!"a * 4".should.not.contain(data); }).should.not.throwAnyException; } /// contain only unittest { ({ [1, 2, 3].should.containOnly([3, 2, 1]); [1, 2, 3].should.not.containOnly([2, 1]); [1, 2, 2].should.not.containOnly([2, 1]); [1, 2, 2].should.containOnly([2, 1, 2]); [2, 2].should.containOnly([2, 2]); [2, 2, 2].should.not.containOnly([2, 2]); }).should.not.throwException!TestException; auto msg = ({ [1, 2, 3].should.containOnly([2, 1]); }).should.throwException!TestException.msg; msg.split('\n')[0].should.equal("[1, 2, 3] should contain only [2, 1]."); msg.split('\n')[2].strip.should.equal("Actual:[1, 2, 3]"); msg.split('\n')[4].strip.should.equal("Extra:[3]"); msg = ({ [1, 2].should.not.containOnly([2, 1]); }).should.throwException!TestException.msg; msg.split('\n')[0].strip.should.equal("[1, 2] should not contain only [2, 1]."); msg.split('\n')[2].strip.should.equal("Actual:[1, 2]"); msg = ({ [2, 2].should.containOnly([2]); }).should.throwException!TestException.msg; msg.split('\n')[0].should.equal("[2, 2] should contain only [2]."); msg.split('\n')[2].strip.should.equal("Actual:[2, 2]"); msg.split('\n')[4].strip.should.equal("Extra:[2]"); msg = ({ [3, 3].should.containOnly([2]); }).should.throwException!TestException.msg; msg.split('\n')[0].should.equal("[3, 3] should contain only [2]."); msg.split('\n')[2].strip.should.equal("Actual:[3, 3]"); msg.split('\n')[4].strip.should.equal("Extra:[3, 3]"); msg.split('\n')[5].strip.should.equal("Missing:[2]"); msg = ({ [2, 2].should.not.containOnly([2, 2]); }).should.throwException!TestException.msg; msg.split('\n')[0].should.equal("[2, 2] should not contain only [2, 2]."); msg.split('\n')[2].strip.should.equal("Actual:[2, 2]"); msg.split('\n')[4].strip.should.equal("Extra:[2, 2]"); } /// contain only with void array unittest { int[] list; list.should.containOnly([]); } /// const range containOnly unittest { const(int)[] data = [1, 2, 3]; data.map!"a".should.containOnly([3, 2, 1]); data.map!"a".should.containOnly(data); [1, 2, 3].should.containOnly(data); ({ data.map!"a * 4".should.not.containOnly(data); }).should.not.throwAnyException; } /// immutable range containOnly unittest { immutable(int)[] data = [1, 2, 3]; data.map!"a".should.containOnly([2, 1, 3]); data.map!"a".should.containOnly(data); [1, 2, 3].should.containOnly(data); ({ data.map!"a * 4".should.not.containOnly(data); }).should.not.throwAnyException; } /// array contain unittest { ({ [1, 2, 3].should.contain([2, 1]); [1, 2, 3].should.not.contain([4, 5, 6, 7]); [1, 2, 3].should.contain(1); }).should.not.throwException!TestException; auto msg = ({ [1, 2, 3].should.contain([4, 5]); 
}).should.throwException!TestException.msg.split('\n'); msg[0].should.equal("[1, 2, 3] should contain [4, 5]. [4, 5] are missing from [1, 2, 3]."); msg[2].strip.should.equal("Expected:all of [4, 5]"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); msg[5].strip.should.equal("Missing:[4, 5]"); msg = ({ [1, 2, 3].should.not.contain([2, 3]); }).should.throwException!TestException.msg.split('\n'); msg[0].should.equal("[1, 2, 3] should not contain [2, 3]. [2, 3] are present in [1, 2, 3]."); msg[2].strip.should.equal("Expected:none of [2, 3]"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); msg[5].strip.should.equal("Extra:[2, 3]"); msg = ({ [1, 2, 3].should.not.contain([4, 3]); }).should.throwException!TestException.msg.split('\n'); msg[0].should.equal("[1, 2, 3] should not contain [4, 3]. [3] is present in [1, 2, 3]."); msg[2].strip.should.equal("Expected:none of [4, 3]"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); msg[5].strip.should.equal("Extra:[3]"); msg = ({ [1, 2, 3].should.contain(4); }).should.throwException!TestException.msg.split('\n'); msg[0].should.equal("[1, 2, 3] should contain `4`. 4 is missing from [1, 2, 3]."); msg[2].strip.should.equal("Expected:to contain `4`"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); msg[5].strip.should.equal("Missing:4"); msg = ({ [1, 2, 3].should.not.contain(2); }).should.throwException!TestException.msg.split('\n'); msg[0].should.equal("[1, 2, 3] should not contain `2`. 2 is present in [1, 2, 3]."); msg[2].strip.should.equal("Expected:to not contain `2`"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); msg[5].strip.should.equal("Extra:2"); } /// array equals unittest { ({ [1, 2, 3].should.equal([1, 2, 3]); }).should.not.throwAnyException; ({ [1, 2, 3].should.not.equal([2, 1, 3]); [1, 2, 3].should.not.equal([2, 3]); [2, 3].should.not.equal([1, 2, 3]); }).should.not.throwAnyException; auto msg = ({ [1, 2, 3].should.equal([4, 5]); }).should.throwException!TestException.msg.split("\n"); msg[0].strip.should.equal("[1, 2, 3] should equal `[4, 5]`."); msg[2].strip.should.equal("Expected:[4, 5]"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); msg = ({ [1, 2].should.equal([4, 5]); }).should.throwException!TestException.msg.split("\n"); msg[0].strip.should.equal("[1, 2] should equal `[4, 5]`."); msg[2].strip.should.equal("Expected:[4, 5]"); msg[3].strip.should.equal("Actual:[1, 2]"); msg[5].strip.should.equal("Extra:[1, 2]"); msg[6].strip.should.equal("Missing:[4, 5]"); msg = ({ [1, 2, 3].should.equal([2, 3, 1]); }).should.throwException!TestException.msg.split("\n"); msg[0].strip.should.equal("[1, 2, 3] should equal `[2, 3, 1]`."); msg[2].strip.should.equal("Expected:[2, 3, 1]"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); msg = ({ [1, 2, 3].should.not.equal([1, 2, 3]); }).should.throwException!TestException.msg.split("\n"); msg[0].strip.should.startWith("[1, 2, 3] should not equal `[1, 2, 3]`"); msg[2].strip.should.equal("Expected:not [1, 2, 3]"); msg[3].strip.should.equal("Actual:[1, 2, 3]"); } ///array equals with structs unittest { struct TestStruct { int value; void f() {} } ({ [TestStruct(1)].should.equal([TestStruct(1)]); }).should.not.throwAnyException; ({ [TestStruct(2)].should.equal([TestStruct(1)]); }).should.throwException!TestException.withMessage.startWith("[TestStruct(2)] should equal `[TestStruct(1"); } /// const array equal unittest { const(string)[] constValue = ["test", "string"]; immutable(string)[] immutableValue = ["test", "string"]; constValue.should.equal(["test", "string"]); immutableValue.should.equal(["test", "string"]); 
["test", "string"].should.equal(constValue); ["test", "string"].should.equal(immutableValue); } version(unittest) { class TestEqualsClass { int value; this(int value) { this.value = value; } void f() {} } } ///array equals with classes unittest { ({ auto instance = new TestEqualsClass(1); [instance].should.equal([instance]); }).should.not.throwAnyException; ({ [new TestEqualsClass(2)].should.equal([new TestEqualsClass(1)]); }).should.throwException!TestException.withMessage.startWith("[new TestEqualsClass(2)] should equal `[fluentasserts.core.array.TestEqualsClass]`."); } /// range equals unittest { ({ [1, 2, 3].map!"a".should.equal([1, 2, 3]); }).should.not.throwAnyException; ({ [1, 2, 3].map!"a".should.not.equal([2, 1, 3]); [1, 2, 3].map!"a".should.not.equal([2, 3]); [2, 3].map!"a".should.not.equal([1, 2, 3]); }).should.not.throwAnyException; auto msg = ({ [1, 2, 3].map!"a".should.equal([4, 5]); }).should.throwException!TestException.msg; msg.split("\n")[0].strip.should.equal("[1, 2, 3].map!\"a\" should equal `[4, 5]`."); msg.split("\n")[2].strip.should.equal("Expected:[4, 5]"); msg.split("\n")[3].strip.should.equal("Actual:[1, 2, 3]"); msg = ({ [1, 2].map!"a".should.equal([4, 5]); }).should.throwException!TestException.msg; msg.split("\n")[0].strip.should.equal("[1, 2].map!\"a\" should equal `[4, 5]`."); msg.split("\n")[2].strip.should.equal("Expected:[4, 5]"); msg.split("\n")[3].strip.should.equal("Actual:[1, 2]"); msg = ({ [1, 2, 3].map!"a".should.equal([2, 3, 1]); }).should.throwException!TestException.msg; msg.split("\n")[0].strip.should.equal("[1, 2, 3].map!\"a\" should equal `[2, 3, 1]`."); msg.split("\n")[2].strip.should.equal("Expected:[2, 3, 1]"); msg.split("\n")[3].strip.should.equal("Actual:[1, 2, 3]"); msg = ({ [1, 2, 3].map!"a".should.not.equal([1, 2, 3]); }).should.throwException!TestException.msg; msg.split("\n")[0].strip.should.startWith("[1, 2, 3].map!\"a\" should not equal `[1, 2, 3]`"); msg.split("\n")[2].strip.should.equal("Expected:not [1, 2, 3]"); msg.split("\n")[3].strip.should.equal("Actual:[1, 2, 3]"); } /// custom range asserts unittest { struct Range { int n; int front() { return n; } void popFront() { ++n; } bool empty() { return n == 3; } } Range().should.equal([0,1,2]); Range().should.contain([0,1]); Range().should.contain(0); auto msg = ({ Range().should.equal([0,1]); }).should.throwException!TestException.msg; msg.split("\n")[0].strip.should.startWith("Range() should equal `[0, 1]`"); msg.split("\n")[2].strip.should.equal("Expected:[0, 1]"); msg.split("\n")[3].strip.should.equal("Actual:[0, 1, 2]"); msg = ({ Range().should.contain([2, 3]); }).should.throwException!TestException.msg; msg.split("\n")[0].strip.should.startWith("Range() should contain [2, 3]. [3] is missing from [0, 1, 2]."); msg.split("\n")[2].strip.should.equal("Expected:all of [2, 3]"); msg.split("\n")[3].strip.should.equal("Actual:[0, 1, 2]"); msg = ({ Range().should.contain(3); }).should.throwException!TestException.msg; msg.split("\n")[0].strip.should.startWith("Range() should contain `3`. 
3 is missing from [0, 1, 2]."); msg.split("\n")[2].strip.should.equal("Expected:to contain `3`"); msg.split("\n")[3].strip.should.equal("Actual:[0, 1, 2]"); } /// custom const range equals unittest { struct ConstRange { int n; const(int) front() { return n; } void popFront() { ++n; } bool empty() { return n == 3; } } [0,1,2].should.equal(ConstRange()); ConstRange().should.equal([0,1,2]); } /// custom immutable range equals unittest { struct ImmutableRange { int n; immutable(int) front() { return n; } void popFront() { ++n; } bool empty() { return n == 3; } } [0,1,2].should.equal(ImmutableRange()); ImmutableRange().should.equal([0,1,2]); } /// approximately equals unittest { [0.350, 0.501, 0.341].should.be.approximately([0.35, 0.50, 0.34], 0.01); ({ [0.350, 0.501, 0.341].should.not.be.approximately([0.35, 0.50, 0.34], 0.00001); [0.350, 0.501, 0.341].should.not.be.approximately([0.501, 0.350, 0.341], 0.001); [0.350, 0.501, 0.341].should.not.be.approximately([0.350, 0.501], 0.001); [0.350, 0.501].should.not.be.approximately([0.350, 0.501, 0.341], 0.001); }).should.not.throwAnyException; auto msg = ({ [0.350, 0.501, 0.341].should.be.approximately([0.35, 0.50, 0.34], 0.0001); }).should.throwException!TestException.msg; msg.should.contain("Expected:[0.35±0.0001, 0.5±0.0001, 0.34±0.0001]"); msg.should.contain("Missing:[0.5±0.0001, 0.34±0.0001]"); } /// approximately equals with Assert unittest { Assert.approximately([0.350, 0.501, 0.341], [0.35, 0.50, 0.34], 0.01); Assert.notApproximately([0.350, 0.501, 0.341], [0.350, 0.501], 0.0001); } /// immutable string unittest { immutable string[] someList; someList.should.equal([]); } /// Compare const objects unittest { class A {} A a = new A(); const(A)[] arr = [a]; arr.should.equal([a]); }
Thinking about how to blur the distinction between nodelets and regular nodes, so that you could, for example — theoretically — embed a regular node into your page like a nodelet, or — more practically — view a nodelet as a standalone ("regular") node.
Nodelets are special in the following ways:
• they have a designated "container", which is essentially HTML template code which gets wrapped around the nodelet's contents at display time;
• they have the concept of periodic updating, with caching of contents in between updates.
• they can be selectable for inclusion in your list of displayed nodelets.
Aside from that, they are essentially superdocs: documents which can contain code for performing various functions. There is no technical reason why a nodelet could not be viewed standalone, just like any other superdoc.
Therefore, as a proof of concept, I created nodelet view page, a displaypage which does exactly that. To try it, view any nodelet and add ;displaytype=view to the URL. For example, your XP Nodelet.
It works. However, there are obviously some caveats. For example, viewing your Free Nodelet in this way is likely to cause mayhem if you have any javascript going on in it.
Now, if we can get this code to be ready for prime time, I would recommend making this the default view for nodelets, and migrating the current default view into the viewcode htmlpage. That is:
1. rename nodelet display page as nodelet viewcode page, then
2. rename nodelet view page as nodelet display page.
Then a pmdev simply clicks on the "code" link which will appear at the top of the nodelet display page to get to a point where its code can be examined/patched.
One further kewlosity: add /bare/ to the path part of the URL to get a nifty little ticker-like form (html, not xml) of any given nodelet. Example: your XP Nodelet. (It's not really a ticker; it doesn't auto-refresh.)
I also have a patch (expandfreenodelet - (patch)) for embedding any given nodelet's contents into your Free Nodelet. I'd like to know if this might have potential problems. It seems like a cool idea to me.
What is the sound of Windows? Is it not the sound of a wall upon which people have smashed their heads... all the way through?
In reply to bringing nodelets out of the shed by jdporter
Somewhat stable Solid State
A reader thinks it makes no sense to use the STEC SSDs, and that we should switch to the Intel X25-E drives. That sounds reasonable at first, as the X25-E is much cheaper than the STECs. But as usual the devil is in the details. So why do many people still use some of the more expensive STECs? Do they have too much money? Are they morons? I will tell you a dirty little secret. No, not at all. When you take it really seriously, the X25-E isn't enterprise ready. At least not in its default setting.
The -E stands for extreme, not for enterprise
I hear the people crying "What a BS, it's SLC". And many media outlets tout them as enterprise SSDs. Yes, but that's just half of the story. Those SSDs have a cache. They need it to enable fast writes: 64 MB, or 32 MB in the case of the X25-M G2. The problem: it isn't protected against power failure on either device. No cap, no battery. If you don't believe me, just look at the PCB photos available all around the network. As soon as the power fails, the cache loses its content. But the drive may already have answered to the OS that it wrote the data to non-volatile media. That's really a problem. The mysqlperformanceblog describes this in a blog entry:
Now to test durability I do plug off power from SSD card and check how many transactions are really stored - and there is second bumper - I do not see several last N commited transactions.
I find this a little bit frightening. This shouldn't happen with the settings of the MySQL server. Someone has lied somewhere in the chain from OS to disk. As far as I understand it, the Linux fsync() doesn't flush the drive cache. That's okay when you protect the caches with some energy storage, but not without such means of data protection. You have to disable the drive caches to ensure that the data is really on disk, and as that article suggests, the performance was just outright horrible. ZFS doesn't have that problem because it flushes the cache after every write to the ZIL. It circumvents the problem of the write cache by effectively disabling it through the frequent flushing of the caches. But that has quite a performance impact. Of course, for many use cases it's good enough, and the risk of losing some transactions seems acceptable. For an L2ARC it's acceptable, as the data is redundant and you can afford to lose integrity. The STECs are used there for a different reason: more capacity (100 GB instead of 64 GB). I will come back to this a little bit later. But on a filer you are using for your Oracle DB or your mail server, for example, you can't do that. For any reasonable work needing synchronous writes you need to switch off write caching on the X25-E or use a file system with frequent cache flushing. But that has two impacts: first, the latency of sync writes increases vastly. You have to take into consideration that those 3.3k IOPS are specified with the write cache enabled. And, as a speculation, the write cache is also used to reduce the wear of the disk, so disabling it may reduce the usable lifetime of your SSD. As far as I understand the mechanism to reduce write amplification, it needs to cache the incoming blocks to collect some of them before writing them. That doesn't make the X25-E an unusable device for the enterprise. Switch off the write cache, and you don't have these problems. You just have to set your expectations accordingly. An X25-E is still vastly faster than a high-capacity SATA drive, even with the write cache switched off. But for any reasonable work with a higher write load you want a device with a protected write cache. And at the moment the X25-E isn't there.
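On Linux you can check and switch off the volatile write cache with hdparm, for example (just a sketch; the device name is an example):
hdparm -W /dev/sdb # show the current write-cache setting
hdparm -W0 /dev/sdb # disable the volatile write cache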
MLC SSD for L2ARC?
There was another suggestion: using the X25-M for the L2ARC, because they are significantly larger than the -E type. The devil is in the details in this situation again. The datasheet of the X25-M Gen2 specifies on page 11 (section 3.5.4) that you can expect a lifetime of 5 years when you write 20 GBytes per day at typical client workloads. At first this sounds like a reasonable number, but 20 GBytes are 20971520 KBytes. That's just 242 KBytes per second. 20 GBytes per day are just 35.64 Terabytes over the lifetime of the device. Now you have to remember the L2ARC mechanism. It writes soon-to-be-evicted pages from both ends of the ARC into the L2ARC. Thus on any reasonably loaded system there should be more write load than just 242 KBytes per second. For an X25-E the situation looks a little bit different. The data sheet specifies in "3.5.4 Write Endurance":
32 GB drive supports 1 petabyte of lifetime random writes and 64 GB drive supports 2 petabyte of lifetime random writes.
2 Petabytes are 2 199 023 255 552 Kilobytes. 5 years are 157 784 630 seconds. By the power of mathematics, I have the division: roughly 13 MBytes per second. And that's more up to the task than 242 Kilobytes per second. Of course you can find corner use cases where such a disk can reasonably work. But any really loaded device should kill MLC devices quite quickly. And most often S7000s are really loaded devices. MLC devices are specified for client usage. Perhaps you could even build a decent home file server with them. But for enterprise usage? No, almost never.
Enterprise
Enterprise flash drives like the ones used in the S7000 series have a capacitor or a battery to protect the cache. So they can simply ignore the commands to sync the caches. The capacitor- or battery-buffered cache counts as non-volatile storage. Obviously they are much faster this way. It's one of the reasons why an STEC has roughly 4 times the number of sync write operations per second when compared to an Intel X25-E. This advantage isn't cheap, but it's worth it. You get what you pay for. That's the reason why the Sun F20 and F5100 flash storage products contain a capacitor to ensure the integrity of the data. Furthermore, those STEC SSDs are available as dual-ported devices. That's interesting for multipathing your ZIL devices and for clustering. So they can be put into the system without using an interposer card, and you don't need to use the SATA Tunneling Protocol to get your data to the server. That's a problem with the X25-E: it's a SATA disk and thus only connectable via a single-ported interface and only accessible via STP.
Conclusion
To end this article: maybe a second-generation X25-E will contain those capacitors (or something different) and will be usable in an enterprise environment, but at the moment those devices need special care that severely hits performance. And as long as it's that way, there are good reasons to use the STEC drives in the S7000 series. And just to answer that reader: no, a swap to the X25-E isn't overdue. There are several good reasons not to use them in enterprise environments. Let's just wait until an X25-E is up to the task of the enterprise at full speed. Of course you can cut a corner everywhere. You can save money by doing so. And maybe it's up to your task. That's fine. Do it this way. But most enterprise customers have different requirements. And the costs of an STEC are negligible compared to the costs of restoring and recovering a corrupt database, losing an important mail to the write cache in an SSD, or corrupting a file on your NFS server.
lm() within mutate() in group_by()
I'm looking for a way to add a column to my data table that consists of residuals from a lm(a~b) function computed separately for different levels of c.
I've been suggested to look into the sort_by(c) function, but that doesn't seem to work with lm(a~b).
My working example data looks like this (outcome data frame):
Columns subject, trial and rt are within a data.frame; my goal is to compute Zre_SPSS (that I originally made in SPSS) but from an R function.
I've tried
data %<>% group_by (subject) %>%
mutate(Zre=residuals(lm(log(rt)~trial)))
but it doesn't work - Zre gets computed but not within each subject separately, rather for the entire data frame.
Anyone could please help me? I'm a complete R (and coding in general) newbie, so please forgive me if this question is stupid or a duplicate, chances are I didn't understand other solutions or they where not solutions I looked for. Best regards.
As per Ben Bolker request here is R code to generate data from excel screen shot
#generate data
subject<-c(1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3)
subject<-factor(subject)
trial<-c(1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,4,5,6)
rt<-c(300,305,290,315,320,320,350,355,330,365,370,370,560,565,570,575,560,570)
#Following variable is what I would get after using SPSS code
ZreSPSS<-c(0.4207,0.44871,-1.7779,0.47787,0.47958,-0.04897,0.45954,0.45487,-1.7962,0.43034,0.41075,0.0407,-0.6037,0.0113,0.61928,1.22038,-1.32533,0.07806)
#make data frame
sym<-data.frame(subject, trial, rt, ZreSPSS)
Answer
It looks like a bug in dplyr 0.5's mutate, where lm within a group will still try to use the full dataset. You can use do instead:
sym %>% group_by(subject) %>% do(
{
r <- resid(lm(log(rt) ~ trial, data = .))
data.frame(., r)
})
This still doesn't match your SPSS column, but it's the correct result for the data you've given. You can verify this by fitting the model manually for each subject and checking the residuals.
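For example, a quick manual check for one subject could look like this (base R, using the sym data frame defined above):
m1 <- lm(log(rt) ~ trial, data = subset(sym, subject == 1))
resid(m1) # compare with the r column returned by the do() call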
(Other flavours of residuals include rstandard for standardized and rstudent for studentized residuals. They still don't match your SPSS numbers, but might be what you're looking for.)
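If the SPSS column holds standardized residuals (what SPSS calls ZRE), the same do() pattern with rstandard may come closer; a sketch:
sym %>% group_by(subject) %>% do({
  data.frame(., Zre = rstandard(lm(log(rt) ~ trial, data = .)))
})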
Laravel 11 Socialite Login with Slack Account Example
In this tutorial, we will learn how to integrate Slack login into a Laravel 11 application using Laravel Socialite. Socialite provides an easy and convenient way to handle OAuth authentication with various providers like Slack, GitHub, Google, and more.
Step-by-Step Guide
Step 1: Install Laravel
First, you need to have a Laravel 11 application. If you haven’t installed Laravel yet, you can create a new project using Composer:
composer create-project --prefer-dist laravel/laravel laravel-socialite-slack
Step 2: Install Socialite
Next, you need to install Laravel Socialite. Run the following command in your project directory:
composer require laravel/socialite
Step 3: Configure Socialite
To use Socialite, you need to configure it with your Slack app credentials. Add the following environment variables to your .env file:
SLACK_CLIENT_ID=your-slack-client-id
SLACK_CLIENT_SECRET=your-slack-client-secret
SLACK_REDIRECT_URI=http://your-app-url.com/auth/slack/callback
Step 4: Create Slack App
1. Go to the Slack API and create a new app.
2. Choose the workspace where you want to develop your app.
3. Go to the OAuth & Permissions section and add your redirect URI (http://your-app-url.com/auth/slack/callback) in the Redirect URLs section.
4. Copy the Client ID and Client Secret and paste them into your .env file as shown in the previous step.
Step 5: Configure Services
Add your Slack credentials to the config/services.php file:
'services' => [
// Other services...
'slack' => [
'client_id' => env('SLACK_CLIENT_ID'),
'client_secret' => env('SLACK_CLIENT_SECRET'),
'redirect' => env('SLACK_REDIRECT_URI'),
],
],
Step 6: Create Authentication Routes
In the routes/web.php file, add the routes for Slack authentication:
use App\Http\Controllers\Auth\LoginController;
use Illuminate\Support\Facades\Route;
Route::get('auth/slack', [LoginController::class, 'redirectToSlack']);
Route::get('auth/slack/callback', [LoginController::class, 'handleSlackCallback']);
Step 7: Create Login Controller
Create a LoginController if it doesn’t exist already, and add the methods to handle the authentication process:
namespace App\Http\Controllers\Auth;
use App\Http\Controllers\Controller;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Auth;
use Laravel\Socialite\Facades\Socialite;
use App\Models\User;
class LoginController extends Controller
{
/**
* Redirect the user to the Slack authentication page.
*
* @return \Illuminate\Http\Response
*/
public function redirectToSlack()
{
return Socialite::driver('slack')->redirect();
}
/**
* Obtain the user information from Slack.
*
* @return \Illuminate\Http\Response
*/
public function handleSlackCallback()
{
try {
$slackUser = Socialite::driver('slack')->stateless()->user();
} catch (\Exception $e) {
return redirect('/login')->with('error', 'Failed to authenticate with Slack.');
}
// Find or create a user based on Slack user information
$user = User::firstOrCreate(
['email' => $slackUser->getEmail()],
['name' => $slackUser->getName()]
);
// Log the user in
Auth::login($user, true);
return redirect()->intended('/home');
}
}
Step 8: Update User Model
Ensure your User model is set up to work with Socialite. You may need to update it to include any additional fields you need from Slack.
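For example, a minimal sketch of the model could look like this (the exact fields depend on your users table; if accounts are created only via Slack, you may also need to make the password column nullable):
namespace App\Models;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
class User extends Authenticatable
{
    use Notifiable;
    // Fields filled from the Slack profile in handleSlackCallback()
    protected $fillable = [
        'name',
        'email',
    ];
}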
Step 9: Add Login Button
Add a Slack login button to your login page. Update your resources/views/auth/login.blade.php file:
<a href="{{ url('auth/slack') }}" class="btn btn-primary">Login with Slack</a>
Step 10: Test Your Application
Now, you should be able to test your Slack login integration. Navigate to your login page and click the “Login with Slack” button. You should be redirected to Slack for authentication and then back to your application.
Conclusion
You’ve successfully integrated Slack login into your Laravel 11 application using Laravel Socialite. This allows users to log in using their Slack accounts, providing a convenient and secure authentication method.
If you have any questions or run into issues, feel free to ask for help!
Local installation of Grew-match
Grew-match is available online on a set of corpora (mainly from the UD project). If you want to use Grew-match on your own corpus, you have to install it locally, following the instructions on this page.
STEP 0: Run a web server
A web server is required. You can install apache or one of the easy to install distribution like LAMP on Linux or MAMP on Mac OSX.
In the following, we will call DOCUMENT_ROOT the main folder accessible from your website.
If needed, refer to the documentation of the corresponding web server.
We use here the port number 8888. You may have to change this if this port number is already used.
STEP 1: Install the webpage
Download
The code for the webpage is available on gitlab.inria.fr:
git clone https://gitlab.inria.fr/grew/grew_match.git
Configuration
Move to the main folder of the project:
cd grew_match
Edit the file corpora/groups.json to describe the set of available corpora. An example of configuration file with 3 corpora:
{ "groups": [
{ "id": "local",
"name": "Local corpora",
"corpora": [
{ "id": "my_corpora" },
{ "folder": "Older versions",
"corpora": [
{ "id": "[email protected]" },
{ "id": "[email protected]" }
]
}
]
}
]
}
In the JSON, groups defines the items in the top navbar and corpora the list of corpora in the left bar, possibly organised in folders (recursive folders are not handled). You can look at the configuration file used on Grew-match for a larger example.
Install
The project contains a file install_template.sh. Copy it with the name install.sh:
cp install_template.sh install.sh
Edit the new file install.sh and update DEST definition (line 2) and PORT (line 5) if needed. Run the install script:
./install.sh
STEP 2: Install the daemon
You have to start locally a daemon which will handle your requests on your corpora.
Installation
Follow general instruction for Grew installation and then install the daemon with:
opam install grew_daemon
Configuration
To configure your daemon, you have to describe the corpora you want to use in a conf file. This file describes each corpus with a name, a directory and a list of files. For instance, the JSON file my_corpora.json below defines 3 corpora:
{ "corpora": [
{ "id": "my_corpora",
"directory": "/users/me/corpora/my_corpora",
"files": [ "my_corpora_dev.conll", "my_corpora_test.conll", "my_corpora_train.conll" ]
},
{ "id": "[email protected]",
"directory": "/users/me/corpora/my_corpora/2.0",
"files": [ "my_corpora_dev.conll", "my_corpora_test.conll", "my_corpora_train.conll" ]
},
{ "id": "[email protected]",
"directory": "/users/me/corpora/my_corpora/1.0",
"files": [ "my_corpora_dev.conll", "my_corpora_test.conll", "my_corpora_train.conll" ]
}
]
}
Compile your corpora
In order to speed up the pattern search and to preserve memory when a large number of corpora are available, corpora are compiled.
During the compilation, a few files are stored in three specific folders. Before the first compilation, you have to create them with the command:
mkdir DOCUMENT_ROOT/_logs DOCUMENT_ROOT/_tables DOCUMENT_ROOT/_descs
Then, the compilation is done with the command:
grew compile -grew_match_server DOCUMENT_ROOT -i my_corpora.json
A new file with the name of the corpus and the extension .marshal is created in the corpus directory. Of course, you will have to compile again if one of your corpora is modified. The compilation step will also build the relation tables and put them in a place where they can be found by the server.
You can clean the compiled files with:
grew clean -i my_corpora.json
Run the daemon
The Daemon is started with the command (update the port number if necessary):
grew_daemon run --port 8888 my_corpora.json
Step 3 and more
Test
Make sure that the web server is running. You should be able to request your corpora from http://localhost:8888/grew_match. Feel free to contact us in case of trouble.
Restart the daemon when one of the corpora is updated
1. Kill the running daemon (you can use the command killall grew_daemon if the daemon is running in the background)
2. Run the compile operation again: grew compile -grew_match_server DOCUMENT_ROOT -i my_corpora.json
3. Restart the daemon: grew_daemon run --port 8888 my_corpora.json
Preparation Just what you need to know !
Division of Decimals
To divide a decimal (dividend) by a natural number (divisor),
• divide the numbers in the same way as whole numbers (ignoring the decimal point); and
• insert a decimal point in the result such that the quotient has the same number of decimal places as the dividend.
For example to divide 42.056 by 7, the whole numbers are first divided to obtain 42056 / 7 = 6008. Since there are 3 decimal places in the dividend 42.056, there must be 3 decimal places in the quotient. So, 42.056 / 7 = 6.008
Note that division can be viewed as a fraction, and a fraction can be viewed as division.
To convert a fraction to a decimal, the numerator is divided by the denominator. For example,
5/8 = 5.000/8 = 0.625
because 50 ÷ 8 = 6 (remainder is 2); 20 ÷ 8 = 2 (remainder is 4); and 40 ÷ 8 = 5 (remainder is 0).
The following decimal equivalents of fractions are worth noting:
1/100 = 0.01 ; 1/50 = 0.02 ; 1/25 = 0.04 ; 1/20 = 0.05 ; 1/10 = 0.1 ;
1/8 = 0.125 ; 3/8 = 0.375 ; 5/8 = 0.625 ; 7/8 = 0.875 ;
1/5 = 0.2 ; 2/5 = 0.4 ; 3/5 = 0.6 ; 4/5 = 0.8 ;
1/4 = 0.25 ; 1/2 = 0.5 ; 3/4 = 0.75
Some fractions give an infinite decimal, where the last digit recurs with each division step and the division does not terminate with a zero remainder. This is indicated by three dots at the end.
For example, 1/6 = 0.1666... ; 1/3 = 0.333... ; 2/3 = 0.666... ; 5/6 = 0.8333...
MUST-KNOW : When a decimal is divided by 10, 100, 1000, ..., the decimal point moves to the left by 1, 2, 3, ... places.
For example, 54.1266 / 10 = 5.41266, 54.1266 / 100 = 0.541266, 54.1266 / 1000 = 0.0541266
To divide a decimal (dividend) by another decimal (divisor), multiply both the dividend and the divisor by 10, 100, 1000, ... such that there is no decimal fraction in the divisor. Then, perform the division by the natural number.
For example to divide 54.1266 by 0.006, first multiply both numbers by 1000 and then perform the division as shown below.
54.1266/0.006 = 54126.6/6 = 9021.1
Division may be always verified by multiplication, e.g., 9021.1 x 0.006 = 54.1266
Example
Michelle paid $3.12 to buy peaches costing $0.39 each. How many peaches did she buy?
Solution.
The answer is 8 peaches.
3.12/0.39 = 312/39 = 8
Both 3.12 and 0.39 are multiplied by 100 to convert them into whole numbers. This essentially changes the problem into 'how many peaches can be bought for 312 cents if each peach costs 39 cents?'
GMAT Math Review - Arithmetic : Index for Decimals
GMAT Math Review - Arithmetic : Practice Exercise for Decimals
Boolean Operators Updated!!
So this is an update to my all-inclusive list of operators. The update is as of 1/16/13; of course I will have other updates as I find new operators. Why, you ask? Well, as I said below, there are always new ones being discovered.
So, as most of you know, there are a lot of different operators. Trying to figure out what they all mean and do is tough. So in this post I am going to show and explain what a lot of them do, not all but a lot; the reality is there are just too many, and too many alternative ways to use them.
The first thing to remember is, despite what you may have been told, all operators will work in all engines, they just might not work as well or the same.
So the operators:
No operator = fins the search criteria immediately adjacent to one another and in the same order. example C++ C# Java will only pull results were those 3 terms are in order.(this is a generalization, some engines are different, mostly used with resume databases like Monster etc.).
Within "X" = means to find a word within a certain radius of another word. Example C++ within 3 developed will bring results were the word C++ is found within 3 words of developed.
AND or & = this operator is used to denote a list of things that need to be present in a search. Example C# AND C++ AND Java, will return results that have all 3.
OR = to denote looking for more than one thing but not all things in a list. Example C# OR C++ OR Java will return result that have any of them but not necessarily all of them.
NOT = this excludes the word that follows from the search results. Example C# AND C++ NOT Java will return results that have C# and C++ but not any that also has Java. Allot of time this function will also be done by using "BUT NOT".
AND NOT = Excludes documents containing whatever follows it.
The AND NOT operator is generally used after you have performed a
search, looked at the results, and determined that you do not want to see
pages containing some word or phrase.
NEAR = finds words that are within 10 words of each other. Example: C++ NEAR developed will find results where the term C++ is within 10 words of developed. If used before the “:” sign you can tell it how close to look for the 2 words. Example: near:2 C++ will look for any word within 2 words of C++
Before = finds results that have words that come before another. Example: C++ before developed will bring back results where C++ comes before developed. (Adjacency not implied.)
After = finds results that have words that come after another. Example: C++ after developed will bring results where C++ comes after developed. (Adjacency not implied.)
EXACT = The EXACT operator can be used to retrieve records that match your search term precisely. Simply type the word or phrase enclosed within double quotation marks and preceded by the EXACT operator.
* = means all formats of the root word. Example: recruit* will bring up recruit, recruits, recruiter, recruiters, recruiting, etc. It can also be used within a word to find multiple spellings. Example: behavi*r retrieves behaviour or behavior. Also known as fill in the blanks.
? = single-character wildcard for finding alternative spelling. The ? represents a single character; two ?? represents two characters and so on. Example wom?n finds woman and women.
| = a line, the pipe or vertical bar (it looks like an uppercase I); with a space before and after it, it can also be used as an OR operator. Example: C | C++ means C or C++
Quotation Marks = This helps to find specific phrases by allowing you to tell the engine to search for the words as a phrase or together. Example C++ AND "Software Engineer" will pull results that include C++ and the phrase Software engineer. Without the quotation you are likely to get results that have the words software and engineer but not together.
Parentheses = This means to process the enclosed sub query first or as a whole. Example C++ AND (206 OR 253 OR 360).
=, @, "in", :(colon) = all these denote to look in a given area for a given word or phrase. Example @url=resume finds all urls with the word resume in them, so does url=resume and inurl:resume. As you can see in most cases you are combining two of these operators. Think of the @ or "in" as saying were to look, and the = and colon saying what to look for. So inurl:resume means in urls look for the word resume. You can change and combine these and get different results which make for some interesting strings. Also remember you can leave off the "in" and @ and still get results. So the strings are many and the results near infinite. Remember you can use any words after the = or colon and get some interesting results. Example url:alumni brings up urls with the word alumni in it, meaning educational institutions alumni pages in google that is 7,210,000. "in" can also be used for conversions such as 45 celsius in Fahrenheit.
Filename, filetype, url, define, instreamset, group, ext, body, title, txt, subject, anchor(see seo and anchor text linking) = These words when used will denote were to look or what to look for, with regards to the given search criteria. Example @filename=resume or filename:resume will look for filenames with the word resume in them.
Contains: Keeps results focused on sites that have links to the file types that you specify. Example: my resume developer contains:pdf
Cache = will look for the cached version of a page rather than the most updated version.
#..# = search within a number range. example nokia $200..$300
Author = will only include results written by a particular author.
Define = results will be of those pages that have a definition for the word that follows.
Group = results will be from groups only.
Info or id = the query info:url will present some information about the corresponding web page.
Link = The query link:url shows pages that point to that url.(also used in flip searching).
Location or region = this will yield results only for that location. Example location:England or UK will yield results only from England (in Bing you can use loc: as well).
Movie = this will yield results about that movie. Example movie:ironman2 will yield results about Iron Man 2.
Stocks = get a quote on a stock. Example stocks:msft
Weather = get weather. Example weather:98042 gets weather for Kent, WA. 98042 is the zip code for Kent, WA.
Phonebook = this will find all public phone results. Example phonebook:John Doe, NY, New York. Also bphonebook = business phone numbers and rphonebook = residential phone numbers
Site or Domain = this will limit results to those form that site or domain.
Contain: = if used in conjunction with the site or domain command it will search for web pages with links to particular file types. Example
site:microsoft.com contains:mp3 will find all pages in the Microsoft site or domain that is connected to an mp3 file. (works mainly on Bing)
Source = this will limit your result to those from a particular source. Example source:nytimes.
Related = this will find results that are similar or related to the search criteria. Example related:football will find all results similar or related to football.
Host or domain = used for x-raying. example host:nfl.com or domain:nfl.com.
Tilde (~) = this, if placed immediately before a key term, will search for synonyms.
Plus(+) and Minus(-) = these can be used to add or subtract from results. Plus is used mainly to add common words to results. Example to+be+or+not+to+be. Of course this can be done other ways as well, such as "to be AND not AND to AND be". A + before a word tells the engine to look for exactly what you asked for no variations of any kind. It is like what would happen if you put a single word in quotations. Minus to remove something from the results. example football-dolphins, can also be done football NOT Dolphins or Football BUT NOT Dolphins. So in other words plus(+) and minus(-) are the same as AND and NOT.
safesearch: = excludes adult content. Example safesearch:breast cancer
Daterange = this allows you to limit result to a given date range. The only drawback to this syntax is that it works with the Julian Calendar, not the Gregorian Calendar (the one we use). To use daterange: first go to the Julian Date Converter at the U.S. Naval Observatory (http://aa.usno.navy.mil/data/docs/JulianDate.html). Example intitle:"george bush" daterange:2452389-2452389(this would search for April 24, 2002).
AllinXXXX: = This combined with any of the operators above will provide you with all urls fitting the criteria. Example "Allinanchor: C++ Developed software" will find all pages with anchors that contain all 3 words.
Doc, PDF, txt, rtf, etc.. = These all specify types of documents. Example url:doc will pull up all urls with a Word doc; in the case of Google that is 22,200,000.
country codes
(see blog http://www.recruitingblogs.com/profiles/blogs/international-1)
You can use a country code to get results only from that country. Example url:uk will get you results from England; in Google that is 199,000,000.
EDU, GOV, MIL etc = Well it url:EDU pulls up educational institutions pages, url:GOV pulls government, url:Mil pulls up military. Of course there are plenty of other domain names that are specific and worth checking such as org for organizations, net for networks, and com for commercial etc.
Acrobat, applet, activex, audio, flash, form, frame, homepage, image, javascript, index, meta, script, shockwave, wpf, table, video, mov, etc.. = used with any of the operators will yield results that target that file type or format. Example inurl:acrobat AND C++ will bring back results done in Adobe Acrobat that have C++.
Insubject: = looks for the word that follows the: in the subject of the page, or document. Example- insubject:football with find pages that have the word football in the subject.
Finding email addresses = email near: or “email * * companyname.com”. Both of these will work. Example - email * * microsoft.com, this will return Microsoft(MS) email addresses. Now of course it will not return all MS email addresses, but it will return enough for you to learn their naming convention ([email protected]). The other one; email near:2 deloitte.com, gets you any entries within 2 words of “deloitte.com”. In most cases this will be email addresses. Of course combining operators can allow you to come up with more ways to find email addresses.
“powered site” = Think of it as another way to x-ray.
“LinkFromDomain:” = This will only work with Bing and tell you which pages a site links to.
feed:” = Finds RSS and ATOM feeds. Example feed:computer
“blog:”or “blogs:” = Finds Blogs. Example blog:computer
Patent”s”, White paper”s”, Book”s”, “research Paper:”s” = finding these are simple. Use the “inurl:”, “intitle:”, “url:”, “title:”, “patent:”, etc..commands. Example: inurl:patent AND XXX(X = another search parameter) or patents:security.
Prefer: Adds emphasis to a search term or another operator to help focus the search results. Example: developing prefer:history (Mainly in Bing)
language: Returns webpages for a specific language. Specify the language code directly after the language: keyword. Example: (is a senior developer) language:ru -jobs -apply -careers (also mainly Bing)
NOTE: Words you may want to exclude using the "-" to keep cleaner results: job, jobs, send, submit, apply, and you. Example -job -jobs, -send etc.
NOTE: Words you can use that will bring candidates: resume, rèsumè, resumé, résumé, CV, vita, Vitae
NOTE: Common email conventions: *@aol.com, *@gmail.com, *@hotmail.com,
*@msn.com, *@yahoo.com, *@excite.com, *@comcast.net, *@me.com, *@sbcglobal.net, *@verizon.net, *@netzero.com, *@inbox.com, *@fastmail.fm, *@mail.com, *@lycos.com, *@care2.com, *@gmx.com, *@gawab.com, etc. Add to strings to search for emails of candidates. Example: (*@aol.com | *@gmail.com) (Java | J2EE). Also works well in Linkedin to find profiles that have an email address in them. For a list of email providers and more go here http://www.emailaddresses.com/guide_types.htm
NOTE: Do not forget to use natural language techniques such as “I.was.at”, “i.created” etc.
NOTE: Also do not forget to use pronouns such as he, she, herself, himself, myself, etc. Also see my posts on diversity sourcing for keys to sourcing diverse candidates.
The key with these operators is not just the operators themselves but the words that precede or come after. Be sure you use all possibilities. This is where a thesaurus and dictionary come in handy. Not just for the search terms themselves but for the operators too. Example: the term author is used above, but guess what - according to the thesaurus another word for author is writer. If you use @writer= "Edgar Allan Poe" you will get results. So as I have said in my resume writing blog posting, my SEO/Social Media blog posting and my TSO blog posting, a thesaurus is your secret weapon for inventing new and different strings. Also remember you can combine operators to come up with even more and different results.
Of course there are a lot more, and more being discovered all the time. But here are a lot of them, enjoy.
Comment by Candace Nault on January 17, 2013 at 4:23pm
Thank you for sharing this, as someone who is challenged with this area, I appreciate all your insight!
Comment by Will Thomson on January 18, 2013 at 3:38pm
Good post, Dean!
Comment by Jimmy on February 8, 2013 at 4:43pm
Very helpful Dean! Thanks!
ASP.Net Bind a Web service to a variable
Hi,
I have a web service that I call in the following manner in my ASP.Net application.
Dim ConsumeWebService As uk.co.education.www.EducationDirectRegistration
ConsumeWebService = New uk.co.education.www.EducationDirectRegistration
This returns just one field, and I want to bind the field to a variable within my project. I cannot find any examples on how to do this. Any examples would be appreciated!?
I'm guessing it would be something along the following lines..
VARIABLENAME.VALUE = ConsumeWebService.GetStatementAnswers(Username, Password, PAR1, PAR2)
thanks
kinton asked:
Bob Learned commented:
Dim ConsumeWebService As New uk.co.education.www.EducationDirectRegistration
Dim ds As DataSet = ConsumeWebService.GetStatementAnswers(Username, Password, PAR1, PAR2)
Bob
kinton (Author) commented:
It should be noted, the web service is returning the data as a dataset
kinton (Author) commented:
ok, that seems to work, how can I pull the individual values out of that dataset?
Bob Learned commented:
Here is an example of one way:
Dim table As DataTable = ds.Tables(0)
Dim row As DataRow = table.Rows(0)
Dim name As String = row("Name").ToString()
It really depends on what you need as to how to best handle it.
Bob
kinton (Author) commented:
That's perfect. Thanks for the quick reply!
Logo in UI title bar on AWS server
Hello
I am trying to implement my own personal logo and clock in the title bar of my UI dashboard. I have node-red running an AWS EC2 server on ubuntu.
I have been reading this thread Set a logo in the title bar?. The code for my template node is below.
The clock works a treat but the logo image does not display (see screen shot below). I suspect this is because I haven't set the correct path for the httpStatic setting in the settings.js file. The full path of the directory where my logo sits is /home/ubuntu/.node-red/logo
In the settings file I have:
httpStatic: '/logo'
Do I have the correct path name set?
<script id="clockScript1" type="text/javascript">
var clockInterval;
$(function () {
if (clockInterval) return;
//add logo
var div1 = $('<div/>');
var logo = new Image();
logo.src = 'dashboardlogo.png'
logo.height = 45;
div1[0].style.margin = '10px auto';
div1.append(logo);
//add clock
var div2 = $('<div/>');
var p = $('<p/>');
div2.append(p);
div2[0].style.margin = '5px';
function displayTime() {
p.text(new Date().toLocaleString());
}
clockInterval = setInterval(displayTime, 1000);
//add to toolbar when it's available
var addToToolbarTimer;
function addToToolbar() {
var toolbar = $('.md-toolbar-tools');
if(!toolbar.length) return;
toolbar.append(div1);
toolbar.append(div2);
clearInterval(addToToolbarTimer);
}
addToToolbarTimer = setInterval(addToToolbar, 100);
});
</script>
That should be set to the directory containing the static content: /home/ubuntu/.node-red/logo
And you'll have to make sure the url used in the code is correct - you are currently just setting it to dashboardlogo.png so it will get loaded relative to the url the dashboard is on - depending on how you've set things up, that would be /ui/dashboardlogo.png. That is not the path httpStatic will be served from. You'd need to set the image url to /dashboardlogo.png - again, depending on what other path customisations you may have made.
Thank you!
Hello,
I found a better solution without webserver or a static site.
Convert the picture to base64 on this site https://www.base64-image.de/ and paste the code in quotes after logo.src =
Hello, I am using code in this page.
This is running very well, until I opened it on my TV where my graph will appear. The time is correct on computers and other devices, but on the TV it doesn't work. Is there a way for me to set the time directly from my Raspberry Pi?
What browser is your TV using ? maybe it doesn't support the required level of Javascript ?
Also what do you mean by 'it doesn't work'. Exactly what doesn't work?
My friends, in the browser on the computer or on Raspbian the application works very well. But on the TV the time is 12/31/1969 10:58 PM. My TV is a Samsung with the Samsung browser. The TV date is correct.
Please don't ask the same question in two threads. Since you have opened a new thread and this thread was originally opened two years ago, I'll close this thread.
agency – remove "you are here" from breadcrumb
This topic is: resolved
This topic contains 4 replies, has 3 voices, and was last updated by karlhen 1 year, 12 months ago.
• #37621
karlhen
Participant
want to remove "you are here" from breadcrumb – agency
thanks.
http://poimena.com
#37623
braddalton
Participant
Add this code to the end of your child themes functions.php file using a text editor like Notepad++
add_filter( 'genesis_breadcrumb_args', 'child_breadcrumb_args' );
/**
* Amend breadcrumb arguments.
*
* @author Gary Jones
*
* @param array $args Default breadcrumb arguments
* @return array Amended breadcrumb arguments
*/
function child_breadcrumb_args( $args ) {
$args['home'] = 'Home';
$args['sep'] = ' / ';
$args['list_sep'] = ', '; // Genesis 1.5 and later
$args['prefix'] = '<div class="breadcrumb">';
$args['suffix'] = '</div>';
$args['heirarchial_attachments'] = true; // Genesis 1.5 and later
$args['heirarchial_categories'] = true; // Genesis 1.5 and later
$args['display'] = true;
$args['labels']['prefix'] = 'Remove This Text ';
$args['labels']['author'] = 'Archives for ';
$args['labels']['category'] = 'Archives for '; // Genesis 1.6 and later
$args['labels']['tag'] = 'Archives for ';
$args['labels']['date'] = 'Archives for ';
$args['labels']['search'] = 'Search for ';
$args['labels']['tax'] = 'Archives for ';
$args['labels']['post_type'] = 'Archives for ';
$args['labels']['404'] = 'Not found: '; // Genesis 1.5 and later
return $args;
}
Source:http://my.studiopress.com/snippets/breadcrumbs/
#39730
karlhen
Participant
I did as instructed and the code shut down my entire website.
Not sure what to do next.
#39754
essaysnark
Participant
Hi karlhen – looks like your site is back up again, so congrats on working through that! There was probably a mistake introduced in the code. The functions.php file is very sensitive to typos and it’s easy to make mistakes with this.
One possibility is that the code published here on the forum has formatting. It’s important that the snippet you copy gets pasted as plain text; that’s why Brad suggested using a text editor. If you’re on a PC then you can use Notepad or if you’re on the Mac, use TextEdit. Also, before making changes, be sure to have a backup of your functions.php on hand so that you can restore it if you need to, and have access to your server via FTP so that you can upload that backup if it gets screwy.
Here’s what to do to clear out any formatting in copied text:
1. Copy the snippet above, then paste it into the text editor. This will strip out any other types of formatting.
2. Then, select it from the text editor, and paste it into the bottom of your child theme’s functions.php file (if it says Genesis: Theme Functions at the top you’re in the wrong place! Use the dropdown menu on the right to switch to your Child Theme, then click on functions.php – it should say “[theme name] Child Theme: Theme Functions (functions.php) at the top).
3. Also, make sure you don’t delete anything that’s already in functions.php; just add this code to the bottom.
4. Save your file and hopefully it worked this time!
One additional change you’ll need to make: If you literally don’t want any text at the beginning of your breadcrumbs line, then delete the words Remove This Text from the snippet above. But leave the single-quote marks, you need those. Just delete the three words themselves. You can delete this after it's been pasted into functions.php.
Hopefully I didn’t tell you what you already know with this stuff! Let us know if you have other problems.
#61356
karlhen
Participant
Thanks it works.
3 m ago
People could eat protein paste and have it taste like filet mignon. They could walk between the same two rooms in their house but feel like they were in Versailles. VR could massively reduce the global demand for energy and materials.
3 m ago
A screen reader is going to see an SVG element and I imagine at best it will use some alt-text or other "description" of the graphic. Versus if the data is literally `123{123,234,435}424` then a person without vision can get much more information about what's actually being conveyed.
5 m ago
What's the difference if a 10 year old is posting on Instagram, to a 10 year old in the 80s with a Polaroid? If kids talk on snapchat instead of over the fence in the backyard?
Free Support Forum - aspose.com
How to change the current footer formatting- renumber
1、How to change the current footer formatting, renumber
2、How to set fonts for the entire document, not by setting the font run by run - that is overly complex
3、How to implement automatic numbering with symbols.
4、This substitution is very slow
doca.Sections[1].Body.Range.Replace(new Regex(@"([\u0020])"), "");
Hi,
Thanks for your inquiry.
1) Please attach your input and target Word documents here for testing. I will investigate as to how you are expecting your final document to be generated like. You can use MS WORD to create your target Word document. I will then provide you code to achieve what you’re looking for.
2) If you would like to change font of the entire document, you should loop through all nodes and change font. You can use DocumentVisitor to achieve this. Please find the code in the following post:
https://forum.aspose.com/t/50788
3) Page numbers are represented by PAGE field in MS Word documents. If you use PAGE field in header/footer of your Word document, the numbering will be automatically continued to the Pages.
4) Could you please also attach your input Word document for testing this case also? I will investigate the issue on my side and provide you more information.
Best Regards,
this is test document
Hi,
Thanks for your inquiry. Here is the code to generate lists with custom formatting:
Document doc = new Document();
Style listStyle = doc.Styles.Add(StyleType.List, "MyListStyle");
Aspose.Words.Lists.List list1 = listStyle.List;
ListLevel level = list1.ListLevels[0];
level.Font.Name = "SimSun";
level.Font.Color = Color.Green;
level.NumberStyle = NumberStyle.GB3;
DocumentBuilder builder = new DocumentBuilder(doc);
builder.Writeln("Test Page 1");
Aspose.Words.Lists.List list2 = doc.Lists.Add(listStyle);
builder.ListFormat.List = list2;
builder.Writeln("Item 1");
builder.Writeln("Item 2");
builder.Write("Item 3");
builder.Document.Save(@"C:\Temp\out.docx");
Regarding restarting page numbering at the beginning of second Section, please set the PageSetup.RestartPageNumbering property to true.
I hope, this helps.
Best Regards,
1、I set level.NumberStyle = NumberStyle.GB3, but the automatic numbering is ①.
I do not want the dot; I think I need to set level.NumberFormat. I hope it can look like the attachment file.
2、paragraph.range.text is Garbled
3、if Following settings is Success 1、2、3,I want To achieve ,how to set NumberFormat?
test2
fjai
there I want auto set
list1.ListLevels[0].NumberFormat = "%1 ";
list1.ListLevels[1].NumberFormat = "%1.%2 ";
Hi,
Thanks for your inquiry. To remove the trailing dots at the end of list numbers, please use the ListLevel.NumberFormat property as follows:
level.NumberFormat = "%1";
Regarding the problem “paragraph.range.text is Garbled” you mentioned in second point, could you please share the complete code to reproduce the same issue on my side. You can create a simple Console Application that demonstrates this problem.
Best Regards,
Run program without wifi connection
Hello there!
I have a device that turns something on/off and has a hardware button.
The problem is that I want to use this device sometimes in areas where I have no WiFi, and it should still work by pressing the button to turn it on/off.
With Blynk.begin the program gets stuck in setup, waiting for the WiFi connection.
It's the same with Blynk.connectWiFi and Blynk.config. How can I bypass these blocking functions with a timeout or similar?
Thanks in advance!
Not sure, but I think he wants to use blynk app without internet. :rofl:
I don’t think so. He talks about a physical switch that should work without an internet connection, so just needs to get to grips with Blynk.config and Blynk.connect
Pete.
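For reference, a minimal sketch of that non-blocking pattern (the board type, pin numbers and credentials are placeholders, assuming an ESP8266-style device; adapt to your own hardware):
#define BLYNK_PRINT Serial
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>
char auth[] = "YourAuthToken";   // placeholder
char ssid[] = "YourSSID";        // placeholder
char pass[] = "YourPassword";    // placeholder
const int buttonPin = D1;        // hardware button (placeholder pin)
const int outputPin = D2;        // relay/LED being switched (placeholder pin)
void setup()
{
  Serial.begin(115200);
  pinMode(buttonPin, INPUT_PULLUP);
  pinMode(outputPin, OUTPUT);
  WiFi.begin(ssid, pass);   // starts connecting in the background
  Blynk.config(auth);       // configure only - this call does not block
  Blynk.connect(3000);      // try for about 3 seconds, then give up and carry on
}
void loop()
{
  // Only service Blynk when a connection actually exists,
  // so the loop keeps running with no WiFi at all.
  if (Blynk.connected())
    Blynk.run();
  // The hardware button keeps working either way.
  if (digitalRead(buttonPin) == LOW) {
    digitalWrite(outputPin, !digitalRead(outputPin));
    delay(250);             // crude debounce, just for the sketch
  }
}
With Blynk.config/Blynk.connect the timeout is under your control, and loop() never waits on the server.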
@IBK Thanks a lot.
That was exactly what I was looking for.
Is there a common way of updating the states of the Blynk variables (Virtual Pins) that changed while the device was offline to the server after reconnection?
Or should I save the state of each pin and actively push them to server on reconnect?
Because I suspect that a simple Blynk.syncAll(); will just load the values from the server and not push the actual values to the server.
@Blynk_Coeur: Pretty smart comment…
And by the way, everyone can use the Blynk app without internet. It just won't do anything.
So next time clarify your sarcastic reply better by writing "he wants to use Blynk without internet".
My intention was clearly to let the device work without an internet connection by reacting to a HARDWARE button press.
I don't know if you can imagine this, but microcontrollers can also work without Blynk… what's more, they can also work offline.
I think @Blynk_Coeur comment was a genuine mistake rather than an attempt to be smart/sarcastic.
From personal experience, it’s very easy to flick through a couple of similar threads and get them mixed-up; or to half-read a post and answer the wrong question.
Even if you didn’t like the tone of a response from a regular contributor, it’s not good form to have a rant about it, as it tends to piss-off the other regulars.
Yes, that’s the way to do it, using virtualWrites. As you correctly assumed, sync works the other way.
The best point at which to push the values to the server is on connection to the server. Look at the Blynk.connected command.
Pete.
Yes, that’s the way to do it, using virtualWrites. As you correctly assumed, sync works the other way.
Great. As I have some other functions inside the BLYNK_WRITE() functions, I will set the Virtual Pin to the wanted value with Blynk.virtualWrite(V1, 0); and then sync it with Blynk.syncVirtual(V1);
This should behave like I have pressed the button on the app, right?
As there is a function that gets called after a connection has been established, I can use this and sync everything… to my logic, it should work like this:
BLYNK_CONNECTED() {
if (status == 1)
{
Blynk.virtualWrite(V1, 1);
}
else
{
Blynk.virtualWrite(V1, 0);
}
Blynk.syncAll();
}
I appreciate your comment about Blynk_Coeur's comment, but I would say making fun of someone isn't good form either - even if he is a well-known member and therefore has a higher status.
I don't like this sort of reply in forums in general (in this case he was even wrong with his assumption) as it can frustrate newbies and doesn't help at all.
So I see no point in posting this.
I normally tend to comment on these kinds of posts in a similar way to show how it feels.
I know this is living on the edge, but I think Blynk_Coeur can take it and knows how it was meant.
If not @Blynk_Coeur: I’m sorry and all the best :slight_smile:
There’s really no need to do that bit.
You’re in effect saying “okay, Blynk is now connected again. Blynk, the value of V1 is x. Now tell me what the value of V1 is”
Pete.
I may be off on this (or misunderstanding you), but I do not think this would work.
Blynk.virtualWrite(V1, 0); would write the value to the server, Blynk.syncVirtual(V1); would retrieve/read the value from the server. If you are not connected to the server, then it cannot write/read the value from it.
I think your approach with the BLYNK_CONNECTED would work, provided the status variable is being updated with the physical button press and the virtual button press (when BLYNK is connected).
I guess it is just a matter of which one you want to take precedence upon connection to the BLYNK server. You can either update the state of the virtual button to match the state of the device being controlled, or update the device being controlled to the state of the virtual button.
Take a look at the Sync Physical Button Example on the Sketch Builder. In the BLYNK_CONNECTED function it gives you the option to choose.
Yes I thought @Phil333 was a beginner, and I did not realize he had physical switches, sorry !
Problem
It is very hard to predict what a user will type in Excel. Excel 2010 provides a way to create a drop-down list which forces users to select a value from the list.
Steps to create a drop-down list in Excel 2010
1. Create a list in an Excel worksheet.
2. Click on Data Validation....
3. Select List from the tool and pick the values in the worksheet as members of this list.
4. The drop-down list is created.
That is it, very simple but powerful.
3 votes, 1 answer, 105 views
smoothness of hopf fibration projection with respect to standard differentiable structure on unit sphere
We know that stereographic projection defines a differentiable structure on $S^n$ by sending points on $S^n$ to hyperplane $\{x^{n+1}\}=0$. In fact, stereographic projection $\sigma_P: S^n- P\to R^n $ ...
2 votes, 0 answers, 40 views
Holomorphic analogue of geodesics
Let $X$ be a complex manifold with a Hermitian metric. Is there a "complex" analogue of geodesics on $X$ which is of any interest? For example, is anything known about holomorphic maps $f : \mathbb C ...
2 votes, 1 answer, 92 views
Quasiconformal map between the complex plane and a disk
According to the Poincaré-Koebe theorem, it is known that the unit disk $\mathbb D$ and the complex plane $\mathbb C$ aren't conformally equivalent. My question is maybe naive, but I was wondering if ...
0 votes, 0 answers, 399 views
A conformal map from a horizontal half-strip to $H$
I have seen many examples of mapping the vertical half-strip, ie $-\pi/2 \lt x < \pi/2$, $y \gt 0$ to $H$(the upper half-plane) in $\mathbb{C}$ using the transformation $f = \sin z$. Would the ...
2 votes, 1 answer, 236 views
Precise definition of conformal structure based on a Riemannian metric on a Riemann surface
As I read the literature, I keep having some doubt about what a " conformal structure on a Riemann surface " exactly means. ( You can assume all the Riemann surface in this literature have universal ...
3 votes, 2 answers, 87 views
How does one move a point in $B(0,1)$ to the origin with a Möbius transformation
Let $z_0$ be in the open unit disc $B(0,1)\subset \mathbf{C}$. Is there a general formula for an automorphism of $B(0,1)$ which sends $z_0$ to the origin? I find it easier to think about the complex ...
There are 2 red and 3 black balls in a bag. 3 balls are taken out at random from the bag. Find the probability of getting 2 red and 1 black ball or 1 red and 2 black balls. - Mathematics
Sum
There are 2 red and 3 black balls in a bag. 3 balls are taken out at random from the bag. Find the probability of getting 2 red and 1 black ball or 1 red and 2 black balls.
Solution
There are 2 + 3 = 5 balls in the bag and 3 balls can be drawn out of these in 5C3= `(5 xx 4 xx 3)/(1 xx 2 xx 3)` = 10 ways.
∴ n(S) = 10
Let A be the event that 2 balls are red and 1 ball is black
2 red balls can be drawn out of 2 red balls in 2C2 = 1 way and 1 black ball can be drawn out of 3 black balls in 3C1 = 3 ways.
∴ n(A) = 2C2 × 3C1 = 1 × 3 = 3
∴ P(A) = `("n"("A"))/("n"("S")) = 3/10`
Let B be the event that 1 ball is red and 2 balls are black
1 red ball out of 2 red balls can be drawn in 2C1 = 2 ways and 2 black balls out of 3 black balls can be drawn in 3C2 = `(3 xx 2)/(1 xx 2)` = 3 ways.
∴ n(B) = 2C1 × 3C2 = 2 × 3 = 6
∴ P(B) = `("n"("B"))/("n"("S")) = 6/10`
Since A and B are mutually exclusive events
∴ P(A ∩ B) = 0
∴ Required probability = P(A ∪ B) = P(A) + P(B)
= `3/10 + 6/10`
= `9/10`
Chapter 7: Probability - Miscellaneous Exercise 7 [Page 110]
APPEARS IN
Balbharati Mathematics and Statistics 2 (Commerce) 11th Standard HSC Maharashtra State Board
Chapter 7 Probability
Miscellaneous Exercise 7 | Q 4 | Page 110
/**
 * `expr` can be used to create temporary computed values inside computed values.
 * Nesting computed values is useful to create cheap computations in order to prevent expensive computations from needing to run.
 * In the following example the expression prevents the component from re-rendering _each time_ the selection changes;
 * instead it will only re-render when the current todo is (de)selected.
 *
 * `expr(func)` is an alias for `computed(func).get()`.
 * Please note that the function given to `expr` is evaluated _twice_ in the scenario that the overall expression value changes.
 * It is evaluated the first time when any observables it depends on change.
 * It is evaluated a second time when a change in its value triggers the outer computed or reaction to evaluate, which recreates and reevaluates the expression.
 *
 * In the following example, the expression prevents the `TodoView` component from being re-rendered if the selection changes elsewhere.
 * Instead, the component will only re-render when the relevant todo is (de)selected, which happens much less frequently.
 *
 * @example
 * const Todo = observer((props) => {
 *     const todo = props.todo
 *     const isSelected = mobxUtils.expr(() => props.viewState.selection === todo)
 *     // ...
 * })
 *
 * const TodoView = observer(({ todo, editorState }) => {
 *     const isSelected = mobxUtils.expr(() => editorState.selection === todo)
 *     return <div className={isSelected ? "selected" : ""}>{todo.title}</div>
 * })
 */
export declare function expr<T>(expr: () => T): T;
Thread: function help please
1. #1
function help please
f(x)=1/4(x+2)^2-4.
i) explain how the parabola that is the graph of f can be obtained from the graph of y=x^2 by using appropriate translations and scalings.
ii) find the x-intercepts and y-intercept of the parabola.
2. #2
Hello, toplad!
$f(x) = \tfrac{1}{4}(x+2)^2 - 4$
i) Explain how the graph of $f(x)$ can be obtained from the graph of $y = x^2$
by using appropriate translations and scalings.
We know the graph of $y = x^2$
Code:
|
* | *
|
* | *
* | *
* | *
--------*--------
|
The curve $y = (x+2)^2$ is the previous graph moved 2 units to the left.
Code:
|
* | *
|
* | *
* | *
* *
--------*--+-----
|
The curve $y = (x+2)^2 - 4$ is the previous graph moved 4 units down.
Code:
|
* | *
|
--*--------+-*--
* |*
* *
* |
|
The curve $y = \tfrac{1}{4}(x+2)^2 - 4$ "rises more slowly" . . . Hence, it is "wider".
Code:
* | *
|
* | *
----*-----------+-----*----
* | *
* |*
* |
|
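For part ii), a quick check of the intercepts: the y-intercept is $f(0) = \tfrac{1}{4}(0+2)^2 - 4 = 1 - 4 = -3$, so the curve crosses the y-axis at $(0,-3)$. For the x-intercepts, set $\tfrac{1}{4}(x+2)^2 - 4 = 0$, which gives $(x+2)^2 = 16$, so $x + 2 = \pm 4$ and $x = 2$ or $x = -6$, i.e. the points $(2,0)$ and $(-6,0)$.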
Coordinate plane word problem examples
Video transcript
Milena's town is built on a grid similar to the coordinate plane. She is riding her bicycle from her home at point negative 3, 4 to the mall at point negative 3, negative 7. Each unit on the graph denotes one city block. Plot the two points, and find the distance between Milena's home and the mall. So let's see, she's riding her bicycle from her home at the point negative 3, 4. So let's plot negative 3, 4. So I'll use this point right over here. So negative 3 is our x-coordinate. So we're going to go 3 to the left of the origin 1, 2, 3. That gets us a negative 3. And positive 4 is our y-coordinate. So we're going to go 4 above the origin. Or I should say, we're going to go 4 up. So we went negative 3, or we went 3 to the left. That's negative 3, positive 4. Or you could say we went positive 4, negative 3. This tells us what we do in the horizontal direction. This tells us what we do in the vertical direction. That's where her home is. Now let's figure out where the mall is. It's at the point negative 3, negative 7. So negative 3, we went negative 3 along the horizontal direction and then negative 7 along the vertical direction. So we get to negative 3, negative 7 right over there. And now we need to figure out the distance between her home and the mall. Now, we could actually count it out, or we could just compute it. If we wanted to count it out, it's 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 blocks. So we could type that in. And another way to think about it is they have the exact same x-coordinate. They're both at the x-coordinate negative 3. The only difference between these two is what is happening in the y-coordinate. This is at a positive 4. This is at a negative 7. Positive 4, negative 7. So we're really trying to find the distance between 4 and negative 7. So if I were to say 4 minus negative 7, we would get this distance right over here. So we have 4 minus negative 7, which is the same thing as 4 plus 7, which is 11. Let's do a couple more. Carlos is hanging a poster in the area shown by the red rectangle. He is placing a nail in the center of the blue line. In the second graph, plot the point where he places the nail. So he wants to place a nail in the center of the blue line. The blue line is 6 units long. The center is right over here. That's 3 to the right, 3 to the left. So he wants to put the nail at the point x equals 0, y is equal to 4. So he wants to put it at x is equal to 0, y is equal to 4. That's this point right over here. So let's check our answer. Let's do one more. Town A and Town B are connected by a train that has a station at the point negative 1, 3. I see that. The train tracks are in blue. Fair enough. Which town is closer to the station along the train route, Town A or Town B? So they're not just asking us what's the kind of crow's flight, the distance that if you were to fly. They're saying, which town is closer to the station along the train route? So if you were to follow the train route just like that. So the A right over here, A if you were going along the train route, you would have to go 1, 2, 3, 4, 5, 6 in the x-direction, and then 1, 2, 3, 4, 5 along the y-direction. So you'd have to go a total of 11. If you're going from B, you're going 1, 2, 3, 4, 5, 6, 7, 8 9, 10, 11 along the x-direction, and then 1, 2, 3, 4, 5, 6, so 6 along the vertical direction. So you're going to go a total 17. So it's pretty obvious that A is closer along the tracks. Now, you could also think about it in terms of coordinates because A is at the coordinate of negative 7, 8. 
And if you were to think about negative 7, 8, to get from negative 7 to negative 1 along the x-coordinate, you're going to go 6. And then to go from 8 to 3, you're going to go 5 more. So you could also not necessarily count it out, you can actually just think about the coordinates. But either way, you see that town A is closer.
1. The fastest speed recorded for a runner is 27 miles per hour. This is 20% of the fastest speed for a water skier. Find the record for a water skier.
____Mph
2. Bowsers dog food is 60% meat. If Bowser needs 24 oz of meat each week, how many ounces of dog food should he eat? ____oz
Answer :
To get the answer we have to use proportion
20% -----------------27
100% --------------x
cross-multiply now
20*x=100*27 /:20
x=2700:20
x=135 miles/hour
Similar situation
60% --------------24
100% -------------x
cross-multiply
60*x=100*24
60x=2400 /:60
x=40 - it's the answer
Regex regexObj = new Regex(
@"([A-Za-z_][A-Za-z_0-9]*)(:)(([-+*%])?(\d*\.?\d*)?)*"
, RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace);
var subjectString = "a:123+456;b:456;";
Match matchResults = regexObj.Match(subjectString);
while (matchResults.Success) {
for (int i = 1; i < matchResults.Groups.Count; i++) {
Group grp = matchResults.Groups[i];
if (grp.Success) {
Console.WriteLine("st:" + grp.Index + ", len:" + grp.Length + ", val:" + grp.Value);
}
}
matchResults = matchResults.NextMatch();
}
Output:
st:0, len:2, val:.a
st:2, len:1, val::
st:6, len:0, val:
st:6, len:0, val:
1 Answer (accepted)
Because by allowing it to consider "" as a valid fulfillment of \d*, your capture is completed before the number ever occurs.
You should at least specify one digit as mandatory (+) instead of optional (*) to make it start capturing the group.
To clarify, when a regular expression is compiled without errors, but does not capture anything for a specific group, that does not mean that the match was not successful.
It means that the match was successful despite not having captured anything. That means that you are letting it go over that group by design.
For instance, in this piece of your own regular expression: (([-+*%])?(\d*\.?\d*)?)* you are saying that: I am expecting some optional symbol followed by a decimal number, even though that is optional as well. If nothing was found, it would be okay, though, so, dear RegExp engine, please don't bother yourself, because I don't care whether this occurred or not.
Let's break this down further:
• \d*\.\d* means anything that has any number of digits (including none at all) with a dot in between. So 0., ., and .123 are all valid matches, as well as 2.1.
• By making that optional, you are saying that even the dot is not necessary, so, (\d*\.\d*)? would match "" (the empty string).
• By writing ([-+*%])?(\d*\.?\d*)? you are saying that should anything occur before the string matched above, it must be one of the four indicated symbols. But, you are not necessitating that it must occur (because of the ?). Also, since the above group could match the empty string, if the engine does not succeed in matching the string to anything useful, the presence of any of the indicated four symbols would mean that this group would still be a successful match. The whole of it, including the number.
• Now, by grouping the previous definition as (([-+*%])?(\d*\.?\d*)?)*, you are making even that optional, basically telling the regular expression engine that it would be okay if it did not look any further than the beginning of this definition for an answer.
So, how should you proceed? When should you make a group optional? You should make a group optional only with care, knowing that if the engine failed to match anything to this group, the statement would still be valid and you do not care for this value.
Also, as a side note, you should not be capturing just about everything. Only capture the values that are ciritcal to you, because the engine will hold (start,length) pairs for any group you request in-memory, and that would cost you performance. Instead of the normal grouping (), use the non-capturing group indicator (?:) which will allow you grouping and a higher level of control, while preserving the memory.
Another use of capturing groups is when you want to reference the matching contents in your regular expression:
<(\w+)>.*?</\1>
Which would capture an XML tag with its matching closing tag. Note also that the above example is for demonstration only and generally speaking parsing any sort of hierarchical document using regular expressions (except the most mundane of them) is a capital B, capital I, Bad Idea.
thanks so much. Changed one character and it works now. btw: when you get back a group with a length of zero, I assume that it just means that that optional group was not successfully matched? – sgtz Oct 20 '12 at 7:27
I just updated the answer with information and details that would give you (probably) more insight and would answer your question in this comment. – Milad Naseri Oct 20 '12 at 7:44
Getting Started Programming with Qt Widgets
A tutorial for Qt Widgets based on a notepad application.
In this topic, we teach basic Qt knowledge by implementing a simple Notepad application using C++ and the Qt Widgets module. The application is a small text editor which allows you to create a text file, save it, print it, or reopen and edit it again. You can also set the font to be used.
"Notepad application"
You can find the final Notepad source files in the qtdoc repository in the tutorials/notepad directory. You can either fetch the Qt 5 sources from Qt Project or install them as part of Qt 5. The application is also available in the example list of Qt Creator's Welcome mode.
Creating the Notepad Project
Setting up a new project in Qt Creator is aided by a wizard that guides you step-by-step through the project creation process. The wizard prompts you to enter the settings needed for that particular type of project and creates the project for you.
"Qt Creator New File or Project dialog"
To create the Notepad project, select File > New File or Project > Applications > Qt Widgets Application > Choose, and follow the instructions of the wizard. In the Class Information dialog, type Notepad as the class name and select QMainWindow as the base class.
"Class Information Dialog"
The Qt Widgets Application wizard creates a project that contains a main source file and a set of files that specify a user interface (Notepad widget):
• notepad.pro - the project file.
• main.cpp - the main source file for the application.
• notepad.cpp - the source file of the notepad class of the Notepad widget.
• notepad.h - the header file of the notepad class for the Notepad widget.
• notepad.ui - the UI form for the Notepad widget.
The .cpp, .h, and .ui files come with the necessary boiler plate code for you to be able to build and run the project. The .pro file is complete. We will take a closer look at the file contents in the following sections.
Learn More
About | Here
Using Qt Creator | Qt Creator
Creating other kinds of applications with Qt Creator | Qt Creator Tutorials
Main Source File
The wizard generates the following code in the main.cpp file:
#include "notepad.h"
#include <QApplication>
int main(int argc, char *argv[])
{
QApplication EditorApp(argc, argv);
Notepad Editor;
Editor.show();
return EditorApp.exec();
}
We will go through the code line by line. The following lines include the header files for the Notepad widget and QApplication. All Qt classes have a header file named after them.
#include "notepad.h"
#include <QApplication>
The following line defines the main function that is the entry point for all C and C++ based applications:
int main(int argc, char *argv[])
The following line creates a QApplication object. This object manages application-wide resources and is necessary to run any Qt program that uses Qt Widgets. It constructs an application object with argc command line arguments run in argv. (For GUI applications that do not use Qt Widgets, you can use QGuiApplication instead.)
QApplication EditorApp(argc, argv);
The following line creates the Notepad object. This is the object for which the wizard created the class and the UI file. The user interface contains visual elements that are called widgets in Qt. Examples of widgets are text edits, scroll bars, labels, and radio buttons. A widget can also be a container for other widgets; a dialog or a main application window, for example.
Notepad Editor;
The following line shows the Notepad widget on the screen in its own window. Widgets can also function as containers. An example of this is QMainWindow which often contains several types of widgets. Widgets are not visible by default; the function show() makes the widget visible.
Editor.show();
The following line makes the QApplication enter its event loop. When a Qt application is running, events are generated and sent to the widgets of the application. Examples of events are mouse presses and key strokes.
return EditorApp.exec();
Learn More
About | Here
Widgets and Window Geometry | Window and Dialog Widgets
Events and event handling | The Event System
Designing a UI
The wizard generates a user interface definition in XML format: notepad.ui. When you open the notepad.ui file in Qt Creator, it automatically opens in the integrated Qt Designer.
When you build the application, Qt Creator launches the Qt User Interface Compiler (uic) that reads the .ui file and creates a corresponding C++ header file, ui_notepad.h.
Using Qt Designer
The wizard creates an application that uses a QMainWindow. It has its own layout to which you can add a menu bar, dock widgets, toolbars, and a status bar. The center area can be occupied by any kind of widget. The wizard places the Notepad widget there.
To add widgets in Qt Designer:
1. In the Qt Creator Editor mode, double-click the notepad.ui file in the Projects view to launch the file in the integrated Qt Designer.
2. Drag and drop widgets Text Edit (QTextEdit) to the form.
3. Press Ctrl+A (or Cmd+A) to select the widgets and click Lay out Vertically (or press Ctrl+L) to apply a vertical layout (QVBoxLayout).
4. Press Ctrl+S (or Cmd+S) to save your changes.
The UI now looks as follows in Qt Designer:
You can view the generated XML file in the code editor:
<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
<class>Notepad</class>
<widget class="QMainWindow" name="Notepad">
<property name="geometry">
<rect>
<x>0</x>
<y>0</y>
<width>800</width>
<height>400</height>
</rect>
</property>
<property name="windowTitle">
<string>Notepad</string>
</property>
<widget class="QWidget" name="centralWidget">
<layout class="QVBoxLayout" name="verticalLayout">
<item>
<widget class="QTextEdit" name="textEdit"/>
</item>
</layout>
</widget>
<widget class="QMenuBar" name="menuBar">
...
The following line contains the XML declaration, which specifies the XML version and character encoding used in the document:
<?xml version="1.0" encoding="UTF-8"?>
The rest of the file specifies an ui element that defines a Notepad widget:
<ui version="4.0">
The UI file is used together with the header and source file of the Notepad class. We will look at the rest of the UI file in the later sections.
Notepad Header File
The wizard generated a header file for the Notepad class that has the necessary #includes, a constructor, a destructor, and the Ui object. The file looks as follows:
#include <QMainWindow>
namespace Ui {
class Notepad;
}
class Notepad : public QMainWindow
{
Q_OBJECT
public:
explicit Notepad(QWidget *parent = 0);
~Notepad();
private slots:
void on_actionNew_triggered();
void on_actionOpen_triggered();
void on_actionSave_triggered();
void on_actionSave_as_triggered();
void on_actionPrint_triggered();
void on_actionExit_triggered();
void on_actionCopy_triggered();
void on_actionCut_triggered();
void on_actionPaste_triggered();
void on_actionUndo_triggered();
void on_actionRedo_triggered();
void on_actionFont_triggered();
void on_actionBold_triggered();
void on_actionUnderline_triggered();
void on_actionItalic_triggered();
void on_actionAbout_triggered();
private:
Ui::Notepad *ui;
QString currentFile;
};
The following line includes QMainWindow that provides a main application window:
#include <QMainWindow>
The following lines declare the Notepad class in the Ui namespace, which is the standard namespace for the UI classes generated from .ui files by the uic tool:
namespace Ui {
class Notepad;
}
The class declaration contains the Q_OBJECT macro. It must come first in the class definition, and declares our class as a QObject. Naturally, it must also inherit from QObject. A QObject adds several abilities to a normal C++ class. Notably, the class name and slot names can be queried at runtime. It is also possible to query a slot's parameter types and invoke it.
class Notepad : public QMainWindow
{
Q_OBJECT
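As a small illustration of what that buys you (a sketch, not part of the tutorial's files), the meta-object system lets a program inspect and invoke members at run time:
#include <QDebug>
#include <QMetaObject>
// E.g. inside main(), after constructing the Notepad object:
Notepad editor;
qDebug() << editor.metaObject()->className();   // prints "Notepad"
// Slots can be looked up and invoked by name; on_actionNew_triggered()
// is one of the slots declared further down in this header.
QMetaObject::invokeMethod(&editor, "on_actionNew_triggered");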
The following lines declare a constructor that has a default argument called parent. The value 0 indicates that the widget has no parent (it is a top-level widget).
public:
explicit Notepad(QWidget *parent = 0);
The following line declares a virtual destructor to free the resources that were acquired by the object during its life-cycle. According to the C++ naming convention, destructors have the same name as the class they are associated with, prefixed with a tilde (~). In QObject, destructors are virtual to ensure that the destructors of derived classes are invoked properly when an object is deleted through a pointer-to-base-class.
~Notepad();
The following lines declare a member variable which is a pointer to the Notepad UI class. A member variable is associated with a specific class, and accessible for all its methods.
private:
Ui::Notepad *ui;
QString currentFile;
Notepad Source File
The source file that the wizard generated for the Notepad class looks as follows:
#include "notepad.h"
#include "ui_notepad.h"
Notepad::Notepad(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::Notepad)
{
ui->setupUi(this);
this->setCentralWidget(ui->textEdit);
// Disable menu actions for unavailable features
#if !QT_CONFIG(printer)
ui->actionPrint->setEnabled(false);
#endif
#if !QT_CONFIG(clipboard)
ui->actionCut->setEnabled(false);
ui->actionCopy->setEnabled(false);
ui->actionPaste->setEnabled(false);
#endif
}
Notepad::~Notepad()
{
delete ui;
}
void Notepad::on_actionNew_triggered()
{
currentFile.clear();
ui->textEdit->setText(QString());
}
void Notepad::on_actionOpen_triggered()
{
QString fileName = QFileDialog::getOpenFileName(this, "Open the file");
QFile file(fileName);
currentFile = fileName;
if (!file.open(QIODevice::ReadOnly | QFile::Text)) {
QMessageBox::warning(this, "Warning", "Cannot open file: " + file.errorString());
return;
}
setWindowTitle(fileName);
QTextStream in(&file);
QString text = in.readAll();
ui->textEdit->setText(text);
file.close();
}
void Notepad::on_actionSave_triggered()
{
QString fileName;
// If we don't have a filename from before, get one.
if (currentFile.isEmpty()) {
fileName = QFileDialog::getSaveFileName(this, "Save");
currentFile = fileName;
} else {
fileName = currentFile;
}
QFile file(fileName);
if (!file.open(QIODevice::WriteOnly | QFile::Text)) {
QMessageBox::warning(this, "Warning", "Cannot save file: " + file.errorString());
return;
}
setWindowTitle(fileName);
QTextStream out(&file);
QString text = ui->textEdit->toPlainText();
out << text;
file.close();
}
void Notepad::on_actionSave_as_triggered()
{
QString fileName = QFileDialog::getSaveFileName(this, "Save as");
QFile file(fileName);
if (!file.open(QFile::WriteOnly | QFile::Text)) {
QMessageBox::warning(this, "Warning", "Cannot save file: " + file.errorString());
return;
}
currentFile = fileName;
setWindowTitle(fileName);
QTextStream out(&file);
QString text = ui->textEdit->toPlainText();
out << text;
file.close();
}
void Notepad::on_actionPrint_triggered()
{
#if QT_CONFIG(printer)
QPrinter printDev;
#if QT_CONFIG(printdialog)
QPrintDialog dialog(&printDev, this);
if (dialog.exec() == QDialog::Rejected)
return;
#endif // QT_CONFIG(printdialog)
ui->textEdit->print(&printDev);
#endif // QT_CONFIG(printer)
}
void Notepad::on_actionExit_triggered()
{
QCoreApplication::quit();
}
void Notepad::on_actionCopy_triggered()
{
#if QT_CONFIG(clipboard)
ui->textEdit->copy();
#endif
}
void Notepad::on_actionCut_triggered()
{
#if QT_CONFIG(clipboard)
ui->textEdit->cut();
#endif
}
void Notepad::on_actionPaste_triggered()
{
#if QT_CONFIG(clipboard)
ui->textEdit->paste();
#endif
}
void Notepad::on_actionUndo_triggered()
{
ui->textEdit->undo();
}
void Notepad::on_actionRedo_triggered()
{
ui->textEdit->redo();
}
void Notepad::on_actionFont_triggered()
{
bool fontSelected;
QFont font = QFontDialog::getFont(&fontSelected, this);
if (fontSelected)
ui->textEdit->setFont(font);
}
The following lines include the Notepad class header file that was generated by the wizard and the UI header file that was generated by the uic tool:
#include "notepad.h"
#include "ui_notepad.h"
The following line defines the Notepad constructor:
Notepad::Notepad(QWidget *parent) :
The following line calls the QMainWindow constructor, which is the base class for the Notepad class:
QMainWindow(parent),
The following line creates the UI class instance and assigns it to the ui member:
ui(new Ui::Notepad)
The following line sets up the UI:
ui->setupUi(this);
In the destructor, we delete the ui:
Notepad::~Notepad()
{
delete ui;
}
In order to have the text edit field occupy the whole screen, we add setCentralWidget to the main window.
Notepad::Notepad(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::Notepad)
{
ui->setupUi(this);
this->setCentralWidget(ui->textEdit);
// Disable menu actions for unavailable features
#if !QT_CONFIG(printer)
ui->actionPrint->setEnabled(false);
#endif
#if !QT_CONFIG(clipboard)
ui->actionCut->setEnabled(false);
ui->actionCopy->setEnabled(false);
ui->actionPaste->setEnabled(false);
#endif
}
Project File
The wizard generates the following project file, notepad.pro, for us:
TEMPLATE = app
TARGET = notepad
qtHaveModule(printsupport): QT += printsupport
requires(qtConfig(fontdialog))
SOURCES += \
main.cpp\
notepad.cpp
HEADERS += notepad.h
FORMS += notepad.ui
RESOURCES += \
notepad.qrc
# install
target.path = $$[QT_INSTALL_EXAMPLES]/widgets/tutorials/notepad
INSTALLS += target
The project file specifies the application name and the qmake template to use for generating the project, as well as the source, header, and UI files included in the project.
You could also use qmake's -project option to generate the .pro file. Although, in that case, you have to remember to add the line QT += widgets to the generated file in order to link against the Qt Widgets Module.
Learn More
About | Here
Using Qt Designer | Qt Designer Manual
Layouts | Layout Management, Widgets and Layouts, Layout Examples
The widgets that come with Qt | Qt Widget Gallery
Main windows and main window classes | Application Main Window, Main Window Examples
QObjects and the Qt Object model (This is essential to understand Qt) | Object Model
qmake and the Qt build system | qmake Manual
Adding User Interaction
To add functionality to the editor, we start by adding menu items and buttons on a toolbar.
Click on "Type Here", and add the options New, Open, Save, Save as, Print and Exit. This creates 5 lines in the Action Editor below. To connect the actions to slots, right-click an action and select Go to slot > triggered(), and complete the code for that given slot.
If we also want to add the actions to a toolbar, we can assign an icon to each QAction, and then drag the QAction to the toolbar. You assign an icon by entering an icon name in the Icon property of the action concerned. When the QAction has been dragged to the toolbar, clicking the icon will launch the associated slot.
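As a side note, the tutorial relies on Qt's automatic on_<object>_<signal>() slot-naming convention; the same wiring could also be done explicitly in the constructor. This is only an illustration, and newDocument is a hypothetical slot name, not one defined by the tutorial:
// Manual alternative to the auto-connected on_actionNew_triggered() slot.
connect(ui->actionNew, &QAction::triggered, this, &Notepad::newDocument);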
Complete the method on_actionNew_triggered():
void Notepad::on_actionNew_triggered()
{
currentFile.clear();
ui->textEdit->setText(QString());
}
currentFile is a member variable containing the file presently being edited. It is defined in the private part of notepad.h:
private:
Ui::Notepad *ui;
QString currentFile;
clear() clears the text buffer.
Opening a file
In notepad.ui, right click on actionOpen and select Go to slot
Complete method on_actionOpen_triggered().
void Notepad::on_actionOpen_triggered()
{
QString fileName = QFileDialog::getOpenFileName(this, "Open the file");
QFile file(fileName);
currentFile = fileName;
if (!file.open(QIODevice::ReadOnly | QFile::Text)) {
QMessageBox::warning(this, "Warning", "Cannot open file: " + file.errorString());
return;
}
setWindowTitle(fileName);
QTextStream in(&file);
QString text = in.readAll();
ui->textEdit->setText(text);
file.close();
}
QFileDialog::getOpenFileName opens a dialog enabling you to select a file. The QFile object file takes the selected fileName as a parameter. We also store the selected file name in the member variable currentFile for later use. We open the file with file.open as a read-only text file. If it cannot be opened, a warning is issued and the method returns.
We define a QTextStream called in for the file. The contents of the file are copied into the QString text, and setText(text) fills the buffer of our editor with that text.
Saving a file
We create the method for saving a file in the same way as for Opening a file, by right clicking on actionSave, and selecting Go to Slot.
void Notepad::on_actionSave_triggered()
{
QString fileName;
// If we don't have a filename from before, get one.
if (currentFile.isEmpty()) {
fileName = QFileDialog::getSaveFileName(this, "Save");
currentFile = fileName;
} else {
fileName = currentFile;
}
QFile file(fileName);
if (!file.open(QIODevice::WriteOnly | QFile::Text)) {
QMessageBox::warning(this, "Warning", "Cannot save file: " + file.errorString());
return;
}
setWindowTitle(fileName);
QTextStream out(&file);
QString text = ui->textEdit->toPlainText();
out << text;
file.close();
}
The QFile object file is created from fileName, which is taken from the member variable currentFile (the file we were working with) or from a save dialog if no file name is set yet. If the file cannot be opened, an error message is issued and the method returns. We create a QTextStream called out. The contents of the editor buffer are converted to plain text and then written to out.
Saving a file with Save as
void Notepad::on_actionSave_as_triggered()
{
QString fileName = QFileDialog::getSaveFileName(this, "Save as");
QFile file(fileName);
if (!file.open(QFile::WriteOnly | QFile::Text)) {
QMessageBox::warning(this, "Warning", "Cannot save file: " + file.errorString());
return;
}
currentFile = fileName;
setWindowTitle(fileName);
QTextStream out(&file);
QString text = ui->textEdit->toPlainText();
out << text;
file.close();
}
This is the same procedure as for Saving a file, the only difference being that here you need to enter a new file name for the file to be created.
Print a File
If you want to use print functionalities, you need to add printsupport to the project file:
QT += printsupport
We declare a QPrinter object called printDev. We launch a print dialog and store the selected printer in that object. If the user clicks Cancel and does not select a printer, the method returns. The actual print command is issued with ui->textEdit->print, with our QPrinter object as the parameter.
Select a Font
void Notepad::on_actionFont_triggered()
{
bool fontSelected;
QFont font = QFontDialog::getFont(&fontSelected, this);
if (fontSelected)
ui->textEdit->setFont(font);
}
We declare a boolean indicating whether a font was selected in the QFontDialog. If so, we set the font with ui->textEdit->setFont(font).
Copy, Cut, Paste, Undo, and Redo
If you select some text and want to copy it to the clipboard, you call the appropriate method of ui->textEdit. The same applies to cut, paste, undo, and redo.
This table shows the method name to use.
Task | Method called
Copy | ui->textEdit->copy()
Cut | ui->textEdit->cut()
Paste | ui->textEdit->paste()
Undo | ui->textEdit->undo()
Redo | ui->textEdit->redo()
Learn More
About | Here
MDI applications | QMdiArea, MDI Example
Files and I/O devices | QFile, QIODevice
tr() and internationalization | Qt Linguist Manual, Writing Source Code for Translation, Internationalization with Qt
Building and Running Notepad
Now that you have all the necessary files, select Build > Build Project Notepad to build and run the application. Qt Creator uses qmake and make to create an executable in the directory specified in the build settings of the project and runs it.
Building and Running from the Command Line
To build the application from the command line, switch to the directory in which you have the .cpp file of the application and add the project file (suffixed .pro) described earlier. The following shell commands then build the application:
qmake
make (or nmake on Windows)
The commands create an executable in the project directory. The qmake tool reads the project file and produces a Makefile with instructions on how to build the application. The make tool (or the nmake tool) then reads the Makefile and produces the executable binary.
© 2019 The Qt Company Ltd. Documentation contributions included herein are the copyrights of their respective owners. The documentation provided herein is licensed under the terms of the GNU Free Documentation License version 1.3 as published by the Free Software Foundation. Qt and respective logos are trademarks of The Qt Company Ltd. in Finland and/or other countries worldwide. All other trademarks are property of their respective owners.
|
__label__pos
| 0.903005 |
#UVa:11417-GCD
灆洢 2015-05-13 11:06:58
Implement the greatest-common-divisor function GCD() described in the problem, then follow the code given above in the problem statement to compute the answer and you have the solution.
C++(0.198)
/*******************************************************/
/* UVa 11417 GCD */
/* Author: Maplewing [at] knightzone.studio */
/* Version: 2015/05/13 */
/*******************************************************/
#include <iostream>
#include <cstdio>
using namespace std;
int GCD(int i, int j){
  // Euclidean algorithm: the loop stops once one value reaches 0,
  // so i + j is the remaining non-zero value, i.e. the GCD.
  while( ( i %= j ) && ( j %= i ) );
  return i + j;
}
int main(){
int N, G;
while( scanf("%d", &N) != EOF && N != 0 ){
G = 0;
for( int i = 1 ; i < N ; i++ ){
for( int j = i + 1 ; j <= N ; j++ )
{
G += GCD(i,j);
}
}
printf("%d\n", G);
}
return 0;
}
|
__label__pos
| 0.725897 |
Jonik Jonik - 1 year ago 153
Java Question
Wicket redirect: how to pass on parameters and keep URLs "pretty"?
Consider a Wicket WebPage that redirects to another page (based on some logic omitted from here):
public class SomePage extends WebPage {
public SomePage(PageParameters parameters) {
setResponsePage(AnotherPage.class);
setRedirect(true);
}
}
I need to pass on the PageParameters to that other page, and this seems to be the way to do that:
setResponsePage(new AnotherPage(parameters));
However, when creating a new Page object like this, I end up at a URL such as /?wicket:interface=:1:::: instead of the clean /another. AnotherPage is defined as:
@MountPath(path = "another")
public class AnotherPage extends WebPage {
// ...
}
(Where MountPath is from org.wicketstuff.annotation.mount package.)
So, my questions are:
• Some other way to pass on the params?
• Some way to keep the URL pretty? Is the above a Wicket Stuff specific limitation instead of core Wicket?
Update
Heh, turns out any of the suggested approaches work, and so does what I originally tried (setResponsePage(new AnotherPage(parameters))), as long as I remove setRedirect(true). The URL does stay the same (path to SomePage) in that case, and I just realised I really should have mentioned right from the start that it's okay if it does (as long as it's "pretty" and the parameters are passed)!
The page ("SomePage") dispatches requests, based on query params, to a couple of possible result pages that look different but that are accessed through the same url. I tried to formulate the question as generic and minimalist as possible, but that went awry as I left out relevant info. :-/
Sorry if this ended up weird, unclear or useless for others. If you have a suggestion about renaming it, feel free to comment.
Answer Source
Maybe the other setResponsePage method in Component is closer to what you want:
/**
* Sets the page class and its parameters that will respond to this request
*
* @param <C>
*
* @param cls
* The response page class
* @param parameters
* The parameters for this bookmarkable page.
* @see RequestCycle#setResponsePage(Class, PageParameters)
*/
public final <C extends Page> void setResponsePage(final Class<C> cls, PageParameters parameters)
{
getRequestCycle().setResponsePage(cls, parameters);
}
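Applied to the question's SomePage constructor, the redirect would then look roughly like this (a minimal sketch using the question's own classes and variables):
// Redirect to the bookmarkable page, keeping the mounted /another URL
// and passing the original PageParameters along.
setResponsePage(AnotherPage.class, parameters);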
|
__label__pos
| 0.777241 |
# Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

EAPI=5

DISTUTILS_SINGLE_IMPL=1
PYTHON_COMPAT=( python2_7 )

inherit distutils-r1 flag-o-matic

DESCRIPTION="matplotlib toolkit to plot map projections"
HOMEPAGE="https://matplotlib.org/basemap/ https://pypi.org/project/basemap/"
SRC_URI="mirror://sourceforge/matplotlib/${P}.tar.gz"

IUSE="examples test"
SLOT="0"
KEYWORDS="amd64 x86 ~amd64-linux ~x86-linux"
LICENSE="MIT GPL-2"

DEPEND="sci-libs/shapelib
	$(python_gen_cond_dep '
		|| (
			>=dev-python/matplotlib-python2-0.98[${PYTHON_MULTI_USEDEP}]
			>=dev-python/matplotlib-0.98[${PYTHON_MULTI_USEDEP}]
		)
	')
	>=sci-libs/geos-3.3.1[python(-),${PYTHON_SINGLE_USEDEP}]"
RDEPEND="${DEPEND}
	$(python_gen_cond_dep '
		>=dev-python/pupynere-1.0.8[${PYTHON_MULTI_USEDEP}]
		dev-python/httplib2[${PYTHON_MULTI_USEDEP}]
		dev-python/dap[${PYTHON_MULTI_USEDEP}]
	')"

DOCS="FAQ API_CHANGES"

#REQUIRED_USE="test? ( examples )"
# The test phase ought never have been invoked according to the above.
# The test phase appears to require the package to first be emerged, which ...
# Until the distutils_install_for_testing func refrains from failing with
# mkdir: cannot create directory ‘/test’: Permission denied
# reluctantly this phase is assigned
RESTRICT="test"

src_prepare() {
	sed -i \
		-e "s:/usr:${EPREFIX}/usr:g" \
		setup.py || die
	# use /usr/share/data
	sed -i \
		-e "/_datadir.*=.*join/s|\(.*datadir.*=\).*|\1'${EROOT}usr/share/${PN}'|g" \
		"${S}"/lib/mpl_toolkits/basemap/*.py || die
	distutils-r1_src_prepare
	append-flags -fno-strict-aliasing
}

#src_test() {
#	distutils_install_for_testing
#}

python_install() {
	distutils-r1_python_install
	# --install-data="${EPREFIX}/usr/share/${PN}" on testing is found not to work;
	# setup.py is a mess. Someone care to patch setup.py please proceed; substitute with
	insinto usr/share/basemap/
	doins lib/mpl_toolkits/basemap/data/*
	# clean up collision with matplotlib
	rm -f "${D}$(python_get_sitedir)/mpl_toolkits/__init__.py"
	# respect FHS
	rm -fr "${D}$(python_get_sitedir)/mpl_toolkits/basemap/data"
}

python_install_all() {
	if use examples; then
		dodoc -r examples
		docompress -x /usr/share/doc/${PF}/examples
	fi
	distutils-r1_python_install_all
}
|
__label__pos
| 0.958504 |
Blackboard Logo Dev Docs
Three-Legged OAuth
One of the drawbacks associated with Basic Authentication is that the application requires broad access, as the tool is acting as a system-level user and acting on behalf of the user.
As of Blackboard Learn 3200.7 (SaaS deployed release), third-party REST applications now have the ability to implement 3LO to authorize a user against the APIs and act as that user. In the spirit of sharing pretty pictures, here is a nice diagram displaying the workflow:
3_legged_oauth_workflow.png
So let’s talk a bit about what is happening here. Let’s pretend that we have built a mobile app that allows a student to get his or her grades. Today, we will be Marlee. Marlee picks up her iPhone and opens the GetMyGrades app. The first time Marlee opens the app, the app will send a GET request to /learn/api/public/v1/oauth2/authorizationcode with the Content-Type set to form/urlencoded and the following data as query parameters:
Parameter | Definition | Example
redirect_uri | Where to redirect the user once they have authorized the application | redirect_uri=https://my.edu/authorized
response_type | Must be set to code. Tells the endpoint to return an authorization code | response_type=code
client_id | The application’s OAuth key, from the key/secret pair as registered in the developer portal. NOTE: This is NOT the Application ID!! | client_id=8DBBA050-B830-414F-B7F1-0B448A6320C9
scope | The application’s permissions: read, write, delete, and/or offline. Offline is required to use Refresh Tokens. CAUTION: If you do not set the scope appropriately you will still be able to get an access_token, but when using the access_token you will not be able to GET, POST, or UPDATE as expected. Instead you will get error responses. | scope=read
state | Opaque value used to prevent Cross Site Request Forgery | state=DC1067EE-63B9-40FE-A0AD-B9AC069BF4B0
So in this example, my request would look like:
GET /learn/api/public/v1/oauth2/authorizationcode?redirect_uri =
https://my.edu/authorized&response_type=code&client_id=8DBBA050-B830-414F-B7F1-0B448A6320C9&scope=read&
state=DC1067EE-63B9-40FE-A0AD-B9AC069BF4B0
The result of this action is that Marlee is presented with her school’s Blackboard Learn login screen. She logs in and is presented with the following screen, asking her to authorize the application.
3-loauth-screenshot
Once Marlee clicks ‘Allow’, the URL sent as the redirect uri is called with the authorization code as a query parameter, for example:
https://my.edu/authorized?code=1234567890
Now the application is able to talk server-to-server as Marlee. The next step is to get an access token from Learn based on the authorization code Marlee granted. From here the workflow is very similar to the Basic Authentication method. The token is requested as a POST request from /learn/api/public/v1/oauth2/token. This is also a form/urlencoded. The body of the request contains the text grant_type=authorization_code, and the URL is parameterized with the code code=1234567890 and the redirect_uri redirect_uri=https://my.edu/app. So the request looks like:
POST /learn/api/public/v1/oauth2/token?code=1234567890&redirect_uri=https://my.edu/app
The endpoint responds with the standard token (access_token, expires_in, and token_type), but also has a couple of new fields. If offline mode is granted, a refresh_token is returned. This allows the application to get a new token on behalf of the user, even if that user isn’t explicitly asking for it. In addition, the scope requested in the initial request is returned, as well as the UUID for the user in the user_id field.
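As an illustration only (the values below are invented and the exact formatting may differ), a token response carrying the fields described above would look roughly like this:
{
  "access_token": "abcdef1234567890",
  "token_type": "bearer",
  "expires_in": 3600,
  "refresh_token": "0987654321fedcba",
  "scope": "read offline",
  "user_id": "0F9D8C7B-6A5E-4D3C-2B1A-0F9E8D7C6B5A"
}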
From this point forward, the access_token is used just as it is when using Basic Authentication, but instead of acting as the system user, it is acting as Marlee.
Refresh Tokens
As mentioned above, one of the available scopes that an application can request is offline. Essentially, the offline scope allows an application to access Blackboard Learn as a user without requiring the user to login each time. This might be especially useful in a mobile application to prevent the unnecessary redirects each time an application is loaded. The way this works is through the use of refresh tokens.
The first time a user accesses the application and the normal 3LO process takes place. The user is redirected to Blackboard, they login and authorize, and then the application is off an running. The difference is that a refresh token is returned in addition to the Bearer token. From this point forward, the third party application can automatically request a new bearer token by sending the request with the refresh token without involving the user at all.
The HTTP message might look like this:
POST /learn/api/public/v1/oauth2/token?refresh_token=8DBBA050-B830-414F-B7F1-0B448A6320C9&redirect_uri=https://my.edu/app
From this point forward, the access_token is used just as it is when using Basic Authentication, but instead of acting as the system user, it is acting as Marlee.
Use Proof Key for Code Exchange (PKCE) with 3-Legged OAuth 2.0
Starting in version 3700.4, Blackboard Learn’s 3-Legged OAuth 2.0 implementation supports the Proof Key for Code Exchange (PKCE) extension. For more information about PKCE, see OAuth 2.0’s RFC 7636: Proof Key for Code Exchange.
To implement the PKCE extension:
1. Create a random string 43-128 characters long, containing only the characters A-Z, a-z, 0-9, or the following - . _ ~ (hyphen, period, underscore, and tilde). This sting will later be used as your code_verifier.
2. Use the S256 hashing method to create a hash of your random string. This hash is your code_challenge. The formula for an S256 hash is based on the SHA-256, but is not exactly the same.
code_challenge = BASE64URL_ENCODE( SHA256( ASCII( code_verifier )))
For more information about the S256 hashing algorithm, see RFC 7636 - Proof Key for Code Exchange by OAuth Public Clients. A small code sketch of this computation is shown after this list.
3. Make a request to /learn/api/public/v1/oauth2/authorizationcode, and provide a code_challenge and code_challenge_method in the query parameters. For code_challenge_method, the endpoint accepts only S256. Your request will look something like:
POST learn/api/public/v1/oauth2/authorizationcode?client_id=YOUR_CLIENT_ID&response_type=code&redirect_url=YOUR_URL&code_challenge=YOUR_CODE_CHALLENGE&code_challenge_method=S256
4. Make a request for an access token, as normal. When you do, include your code_verifier as a query parameter. Your request will look like:
POST learn/api/public/v1/oauth2/token?grant_type=authorization_code&code=CODE_FROM_AUTH_CALL&code_verifier=YOUR_CODE_VERIFIER
The Learn server will verify that the code_challenge and code_challenge_method sent in the first request form a valid hash of code_verifier. This allows the server to verify that the client asking for the access token is the same client that sent the authorization code request.
5. When you receive an access token, you can use it as you normally would to make API calls.
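For illustration only (this helper is not part of the Blackboard documentation; it simply applies the formula above), the code_challenge could be derived in Java like this:
// Hypothetical helper: derives the S256 code_challenge from a code_verifier,
// i.e. BASE64URL-encodes the unpadded SHA-256 hash of the ASCII verifier.
String codeChallenge(String codeVerifier) throws Exception {
    byte[] hash = java.security.MessageDigest.getInstance("SHA-256")
            .digest(codeVerifier.getBytes(java.nio.charset.StandardCharsets.US_ASCII));
    return java.util.Base64.getUrlEncoder().withoutPadding().encodeToString(hash);
}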
Examples
|
__label__pos
| 0.554356 |
0
Hello :)
Umm I have a small question :)
Which do you recommend for fast searching BST or AVL tree?
My answer: It might be the BST because it'll be noticed that in the right subtree all the numbers greater than the root will be there
also for the smaller numbers will be left
so we can search it easily by just knowing the number that we want to search for if it is greater than the root or smaller!
WHAT do you think? Is my answer right?
thanks in advance :)
0
http://www.nist.gov/dads/HTML/avltree.html
If you pick your root unwisely, your BST may resemble a linked list.
Also consider
- the cost of constructing the tree in the first place
- the cost of keeping it balanced
- the number of searches performed
- the distribution of search results.
You need the whole program scenario to make an accurate judgement.
0
Well, I guess I found that the AVL is much faster than the BST, because with an AVL we get the smallest possible tree by keeping it balanced all the way,
so the answer would be that the AVL tree is faster to search than the BST.
thanks my friend :)
|
__label__pos
| 0.842196 |
5
votes
2 answers
106 views
Positioning of images/graphics in tables
I need to create a table where cells in one columns are images, not text. This is the code I'm using: \begin{table}[hbtp] \centering \begin{tabu} to \textwidth {|c|c|c|c|} \hline ...
3
votes
1 answer
136 views
tabu: adding struts before and after cell content
I am using tabu package to easily typeset my tables. So, there are few problems that i have solved: vertical spacing (from cell content to borders) is less than line height and line depth, so i use ...
3
votes
1 answer
92 views
Too little space between wrapped text and the line below it
Consider the following table example, the text in the second row is wrapped. \documentclass[a4paper]{article} \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usepackage{tabu} ...
2
votes
1 answer
402 views
How to make LaTeX (tabu, hhline) tables keep rowheights?
i want to typeset a long table across several columns / pages and want the table rows to keep register. my code looks basically like this: \documentclass{book} \usepackage{pagegrid} ...
4
votes
2 answers
205 views
Little indentation despite \noindent after tabu custom-environment
this question is about a (to me) mysterious little indent (looks like an ordinary blank...) after a custom-environment using the tabu-environment. Please compile this MWE: \documentclass[a4paper, ...
8
votes
1 answer
366 views
Vertical spaces above and below longtabu
This is, in a way, a follow-up-question to New table-environment, spaces and enclosing { } in a new environment, which has already been answered conveniently by @egreg. Please compile this MWE to ...
2
votes
1 answer
146 views
New table-environment, spaces and enclosing { } in a new environment
It was hard for me to find a proper title (feel free to change it) for my question which I want to introduce with an example: Situation In my document I sometimes use tabus directly in the document ...
3
votes
2 answers
481 views
Align (center) header row in table
I use @{\hskip 50pt} to separate columns in my table. The problem which arises is that I add this extra space after a multicolumn. This also moves the header row so that it is no longer centred. Is ...
2
votes
1 answer
530 views
vertical space around horizontal rules between tabu rows while maintaining color
I would like to increase the vertical space before and after a horizontal rule in a tabu table but want the color that is in the cells below and above the rule to not be broken but continue right ...
4
votes
1 answer
434 views
tabucline with a break
In the following the first two columns are different in nature from the last two and I wanted to emphasize this by having a break in the horizontal line. I drew lines across columns 1 and 2 and then ...
|
__label__pos
| 0.89831 |
Can you watch YouTube TV in Australia? No, you cannot watch YouTube TV in Australia unless you use a premium VPN. We recommend ExpressVPN.
Youtube tv in australia
Table of Contents
Is YouTube TV available in Australia? No, YouTube TV is not available in Australia due to geo-restriction. However, you can get YouTube TV Australia with a premium VPN. we recommend ExpressVPN
YouTube TV is one of the most renowned live TV streaming services. It features more than 85+ TV channels online. However, YouTube TV is available in limited regions. Are you wondering if can you get YouTube TV in Australia? No, you cannot get YouTube TV in Australia because of the geo-restriction.
Read the guide to watch incredible TV shows like A Discovery of Witches, Euphoria, and Good Trouble on YouTube TV Australia with the best VPN services.
How to Watch YouTube TV Australia [Easily Mar 2023]
Is YouTube TV available everywhere? No, YouTube TV is available in the United States only. However, you can get YouTube TV in Australia or from anywhere with the following steps:
1. Subscribe to a premium VPN. We recommend ExpressVPN because it has best-in-class security features.
2. Install and open the ExpressVPN application on your streaming device. Now log in with your credentials.
3. Connect to a secure US Server and watch your favorite on-demand and live content on YouTube TV Australia.
If you’re an American living in a country where YouTube TV isn’t available and wants to watch YouTube TV in Australia, check out our detailed guide on How to Watch YouTube TV in Australia.
Why do you need a VPN to Watch YouTube TV in Australia?
Can you use YouTube TV in Australia without a VPN? You cannot use YouTube TV in Australia without a VPN because of the geo-restriction and copyright licensing.
If you attempt to access the service from a location other than the United States, you will receive the following error message:
Youtube-tv-error
OR
Youtube-tv-error-are-not-available-in-this-country
So, can you unblock YouTube TV in Australia? Yes, you can unblock YouTube TV in Australia with a premium VPN. It will mask your Australian IP with an American IP to manipulate YouTube geo-location detectors.
You can easily get around YouTube’s geo-restriction with a premium VPN. The most recommended VPN to watch YouTube TV in Australia is ExpressVPN because it has exceptional unblocking capabilities and best-in-class security features.
Top 5 VPNs for Watching YouTube TV Australia [Quick Overview March 2023]
Can you access YouTube TV in Australia? Yes, you can access YouTube TV in Australia with a premium VPN. Take a look at the quick overview of the best YouTube TV VPNs:
1. ExpressVPN: It has 3000+ servers in 94 countries worldwide. It has 25 highly optimized servers in the USA. It offers $6.67/month for its yearly package with 3 months of the free trial along with a risk-free 45-day money-back guarantee.
2. CyberGhost: It offers a server network of about 7,000 servers across 91 countries, providing 1230 highly optimized server locations in the USA. It is one of the largest services with comprehensive online protection. It costs $ 2.18/month for 3 years.
3. NordVPN: This one is reliable with 5,400+ servers in 80+ locations across 59 countries as well 15 highly optimized servers in the USA. It comes with a 2-year subscription of $3.71 a month and a 2-year plan at 72% off + 3 months FREE and a 30-day money-back guarantee.
4. Surfshark: It is the best low-cost VPN, available for $2.49/month on its 2-year package. Its network offers 3200+ servers in 60+ countries, 23 highly optimized USA servers, and unlimited simultaneous connections.
5. AtlasVPN: With 750+ servers and 7 highly optimized server locations in the USA, it offers fast connections, trustworthy policies, and promising security features. It costs $1.99 per month for 3 years with 24/7 live chat support and a 30-day money-back guarantee.
5 Best VPNs for Unblocking and Watching YouTube TV in Australia [Detailed Overview Mar 2023]
Can you trick YouTube TV location firewalls? Yes, you can trick YouTube TV locations with a premium VPN. After testing 40+ VPNs, we have enlisted below the 5 best VPNs to get YouTube TV Australia:
ExpressVPN – Best VPN to Uninterruptedly Watch YouTube TV in Australia
• I recommend ExpressVPN as the best VPN for YouTube TV because it is incredibly effective at bypassing geo-restrictions and accessing YouTube TV without any hassle, thanks to its 3,000+ server network.
• It is one of the finest VPNs to watch YouTube TV in Australia and anywhere else in the world.
ExpressVPN offers AES-256-bit encryption, OpenVPN, L2TP, and PPTP protocols, a zero-log policy, and strict IP/DNS leak protection, so users can be confident their traffic stays private and their real IP address is not exposed.
Its split-tunneling feature lets you choose which apps or traffic go through the VPN tunnel and which use your regular connection.
Unblock-youtubetv-with-expressvpn
Also, an active kill switch cuts internet traffic if the VPN connection weakens or drops, so your real identity is never exposed to the website before you can reconnect.
• In terms of streaming, it’s also the best. The Media Streamer feature of ExpressVPN is a streamer’s dream come true.
• Being one of the few VPN services that maintain a rigorous no-logs policy and are free of IP and DNS leaks. It features good connectivity, a user-friendly interface, and a 30-day money-back guarantee.
CyberGhost – Secure VPN to Access YouTube TV in Australia
• CyberGhost has about 7,000 servers across 91 countries, which makes it one of the most varied and largest services.
Its built-in speed test lets you check how fast your connection is to servers in different countries.
Surprisingly, it's really fast, providing a 73.41 Mbps download speed in the United States.
• Furthermore, by subscribing to CyberGhost you can secure up to 7 devices for the ultimate protection against all malicious threats.
• CyberGhost provides comprehensive online protection with 256-bit AES encryption and multiple protocols such as the kill switch feature, split tunneling that ensures you protect your data completely.
• CyberGhost provides you with 24/7 assistance Via live chat or email with our dedicated, professional support staff.
• The cost of CyberGhost is $ 2.18/month for 3 years but the plus point is that you will get 3 months free and a 45-day money-back guarantee as well.
NordVPN – Best Reputable VPN to Watch YouTube TV in Australia
• In addition to the two listed above, NordVPN is a prominent participant in the VPN industry, with numerous fast servers across the world, no logs, and a reputation for being a solid VPN for online streaming.
• It can handle up to six simultaneous connections, allowing you to share a single account for secure streaming with your family or friends.
NordVPN has a dedicated protocol, NordLynx, a WireGuard-based protocol that establishes fast, secure connections.
Unblock-youtubetv-with-nordvpn
Alongside NordLynx, it supports other protocols and features too, such as AES-256-bit encryption, OpenVPN, L2TP, PPTP, IKEv2, split tunneling, and its own DNS servers, which help mask your IP address and physical location.
It also allows multi-login on up to six devices at a time with a single premium account, which is perfect for a family!
Surfshark – Low-Cost VPN to Watch YouTube TV in Australia
• Surfshark is a pocket-friendly, high-quality VPN service. It uses industry-leading encryption, does not collect logs, and keeps you safe and anonymous online. Surfshark has a great variety of fast servers in the United States and elsewhere.
• It works with a variety of streaming devices, including YouTube TV and Netflix, and it also gets across China’s Great Firewall, which is a major accomplishment and a considerable hurdle for a VPN.
It uses the AES-256-bit encryption technique, which is strong enough that even brute-force attacks cannot break the encryption layer and snoop on users' browsing activity.
Unblock-youtubetv-with-surfshark
Moreover, OpenVPN, L2TP, PPTP, and IKEv2 support, together with a strict zero-log policy, guarantee that no records of your online activities are kept.
The split-tunneling feature lets you decide which traffic passes through the VPN tunnel and which goes over your normal connection.
• Surfshark also comes with an adblocker, can be installed on an infinite number of devices, and comes with a 30-day money-back guarantee.
AtlasVPN – Budget-Friendly VPN to access YouTube TV in Australia
• AtlasVPN has over 750+ servers in 39 countries to select from, providing you with a lot of options.
• It also supports up to unlimited simultaneous connections, allowing you to use it on many devices.
• It has incredibly fast speeds, with a blazing streaming speed. AtlasVPN has its exclusive supersonic web surfer system, which ensures that these incredible speeds are maintained throughout all of its apps.
• AtlasVPN costs $1.99 per month, which is a steal considering you may connect an infinite number of devices.
• It also comes with 30 days hassle-free money-back guarantee. It also features highly safe security protocols and policies.
It also has built-in IPSec/IKEv2 and WireGuard support, along with security features such as SHA-384, perfect forward secrecy (PFS), and AES-256-bit encryption.
Also Read:
How to Choose the Best VPNs for YouTube TV in Australia?
It’s difficult to choose a VPN because there are so many on the market with essentially the same set of functionalities. Some provide a large server network, while others have a greater emphasis on security, speed, or the ability to circumvent a wider range of services.
However, look for the following qualities in a VPN while looking for the best VPN to watch YouTube TV in Australia:
1. It must have a larger server network, including a big number of servers throughout the United States.
2. For HD streaming of YouTube TV live shows, it must have fast servers.
3. It must employ cutting-edge encryption technology, such as AES-256-bit encryption.
4. It must have a dependable customer support crew that is available 24 hours a day, seven days a week.
5. It should be able to unblock YouTube TV and comparable streaming services
How to Sign up for YouTube TV Australia? [Quick Steps]
You can sign up for YouTube TV Australia with the following steps:
1. Get a premium VPN. We recommend ExpressVPN because it has comprehensive security protocols.
2. Connect to an optimized US server. We recommend ExpressVPN’s Dallas server.
3. Navigate to YouTube TV’s website > Tap on “Try it Free.”
4. Log in with your Google account. If you don’t have a Gmail account, you can create a new account to access YouTube TV Australia.
5. After you have successfully signed in, add a US Zip Code. You can find many Zip Codes on Google such as 92617 and many more.
6. Select your preferred payment method. You can select PayPal and add your account details.
7. YouTube TV offers a 14-day free trial period, after that, you will be charged as per the subscription plans.
How much does YouTube TV cost in Australia?
As a cord-cutter, you must be wondering how much is YouTube TV in Australia. Following are the YouTube TV Australia cost and subscription plans:
• Base Plan: It costs AUD 101.71/month. It offers 85+ live channels and unlimited DVR Space. You can create 6 streaming profiles with this plan.
• Spanish Plan: It costs AUD 54.76/month. It offers 28+ live channels and unlimited DVR Space. You can create 6 streaming profiles with this plan.
How to Subscribe to YouTube TV in Australia?
To create a YouTube TV account in Australia, all you have to do is simply follow one of the methods and instructions mentioned below.
How to Subscribe to YouTube TV Australia with Paypal Account?
You must have a valid PayPal account in the United States to subscribe to YouTube TV in Australia. You can either ask a friend or family member in the United States for a PayPal account, or you can create one yourself by following the instructions below:
1. Connect to a VPN server located in the United States. Begin the registration procedure with a brand-new email address.
2. Fill in all of your information, including your credit card number and street address, in the proper fields.
3. As a US Zipcode, enter your local postal code. To make your city’s postal code all numbers, remove the alphabets and finish with two zeroes.
4. Double-check that this Zipcode exists by going to the US Postal Service website link. Change one of the zeros at the end to a 1 if it doesn’t work.
5. To finalize the purchase, enter the US city location from the USPS page as well as your actual street address.
How to Subscribe YouTube TV Australia with your friend’s or family member’s debit/credit card?
You can get a YouTube TV account in Australia if you have a friend with a US bank account. Simply ask a friend to sign up for the service and provide you with their YouTube TV login information. The remaining steps are the same as those described in the procedure.
How to Subscribe to YouTube TV Australia via gift card?
A YouTube gift card can also be used to get a YouTube TV account in Australia. A YouTube TV gift card can be purchased at the YouTube Amazon Store. To minimize any future tax issues, I recommend getting at least a $75 gift card.
After you’ve received your YouTube TV gift card, go to youtube.com/redeem to redeem it for a YouTube TV membership in Australia. You can either ask a friend to get you a gift card or contact freelancers to get you one.
The most basic YouTube TV subscription package is USD 64.99 per month or CAD 81.58 per month. A maximum of six accounts is allowed per individual. With the membership, you’ll get a 5-day free trial and access to 90+ YouTube TV channels. Traditional services are also offered for an extra USD 37 per month or CAD 46.44 per month.
Can I watch YouTube TV in Australia with a Free VPN?
Technically you can try to use a free VPN to view YouTube TV in Australia, but I do not encourage it. Free VPNs lack proper security procedures and usually cannot bypass geo-restrictions on streaming sites like YouTube TV. Not only are you likely to be unable to access YouTube TV, but you also risk disclosing your data to third parties.
You can not just stream YouTube Tv in Australia, but you can unblock other geo-restricted platforms as well such as Hulu, Zee5, and SonyLIV.
How to fix YouTube TV VPN/proxy detected error
To fix YouTube TV VPN/Proxy detected error, follow the following steps.
1. Subscribe to a VPN service like ExpressVPN.
2. For your device, download and install the app.
3. Connect to a US server using your login credentials.
4. Open YouTube TV and register for the service.
5. Now you can enjoy unlimited access to YouTube TV.
Can you spoof the YouTube TV location?
Yes, you can use ExpressVPN to spoof the YouTube TV location. It is an American streaming service and it is only available in the USA.
If you use any proxy to spoof YouTube TV’s location then chances are that it’ll detect your proxy and it will block your access.
A VPN like ExpressVPN can easily bypass its geo-restriction and provide you access to the service, so you can enjoy your favorite content.
YouTube TV in Australia Devices Compatibility:
Here is the list of devices that are compatible with YouTube TV in Australia:
• Android
• Windows
• iOS
• Mac
• Apple TV
• Smart TV
• Roku
• Xfinity TV
• Chromecast
• Fire TV
• Xbox
• Play Station
• Amazon Firestick
YouTube TV Australia Compatible Devices
YouTube TV is also available on mobile devices such as Android and iOS phones, iPad, smart TVs, media players, and many others. So, if you want to find out, keep reading!
How to get YouTube TV App on Android Devices in Australia?
1. Open ExpressVPN app on your Android device.
2. Connect to a server in the United States.
3. Sign up for a new Google account and log in.
4. Search for the YouTube TV app in the Play Store.
5. Install it and sign up for a free account.
6. Congratulations, YouTube TV is now available on your Android device.
How can I download YouTube TV App on iOS Devices in Australia?
1. Begin by changing your Apple ID region to the United States in Settings > Network.
2. Open ExpressVPN on your iOS device.
3. Connect to a server in the United States.
4. Search for YouTube TV in the Apple App Store.
5. Install the app and log in to your account to start streaming your favorite shows now.
How to Watch YouTube TV in Australia on Roku?
Follow these steps, to get YouTube TV in Australia on Roku:
1. Download and install ExpressVPN and connect it to your Wi-Fi router.
2. Connect to US server.
3. Now insert your Roku stick into your smart TV through HDMI port.
4. Turn on your device and select YouTube TV on Roku homescreen and you are good to go.
How can I Get YouTube TV Australia on PS3/PS4?
Follow the below mentioned steps to download YouTube TV on PS3/PS4:
1. Go to the category of TV/Video Services.
2. From the list of alternatives, look for “YouTube TV”.
3. Select “Get” from the drop-down menu.
4. YouTube TV has now been added to your “My Channels” list.
How can I Watch YouTube TV in Australia on Xbox?
Follow these easy steps to get YouTube TV for Xbox Users
1. Get a premium VPN like ExpressVPN.
2. Connect with the server in US.
3. From the Xbox menu, click on “My games & apps. “
4. Search for “YouTube TV” in the “Xbox Store.
5. “That’s it! Simply click the “Install” button
How to Stream YouTube TV in Australia on Kodi?
Follow these steps to get YouTube TV on Kodi:
1. Install a VPN that is compatible with YouTube TV. We highly recommend ExpressVPN.
2. Connect your Kodi device to your VPN.
3. Download the VPN software to your computer, save it to a USB stick, and then plug it into your Kodi device.
4. On your Kodi device, go to Settings, then System Settings, and then Add-ons.
5. Now toggle on Unknown Sources.
6. Install the VPN app on your Kodi device. Connect to a server in the United States after that.
7. Go to Kodi’s home screen by turning on your TV.
8. Lastly install the YouTube TV add-on on Kodi to start watching.
How to Install YouTube TV in Australia on Firestick?
Follow the below mentioned steps to download YouTube TV on your Firestick device:
1. Start up your FireStick and go to Search.
2. Switch to unknown sources and download ExpressVPN then connect to US server.
3. Enter the word ‘YouTube TV’ and press enter.
4. The ‘YouTube TV app for Fire TV Stick‘ will appear in the search results.
5. The app will be downloaded after you click ‘Get.’
6. After the YouTube TV app has been installed, open it.
7. Open the YouTube TV app and sign up/login with your credentials.
8. Now you can start watching YouTube TV on Firestick in Australia.
How can I Stream YouTube TV in Australia on Apple TV?
Follow these steps to get YouTube TV on Apple TV:
1. Choose a VPN service that allows you to use Smart DNS. We highly recommend ExpressVPN.
2. Find your Smart DNS addresses, then go to your Apple TV’s Settings menu and select Network at the bottom of the page.
3. Choose your network by pressing the Wi-Fi button.
4. Then go to DNS Configuration and choose Manual Configuration.
5. Connect your Apple TV to a US server by typing in your DNS address and restarting it.
6. On your Apple TV, download and install the YouTube TV app and you are done.
How can I Access YouTube TV in Australia on Smart TV?
Follow these steps to get YouTube TV on Smart TV:
1. Download and install a premium VPN. We highly recommend ExpressVPN.
2. Connect to a US-based server.
3. Go into your Wi-Fi router’s admin.
4. Connect it to your VPN network.
5. Download the YouTube TV app on your smart TV, and create free account to start watching.
How can I watch YouTube TV in Australia on PC?
Follow these steps to get YouTube TV on your computer:
1. On your computer, download and install a premium VPN. We highly recommend ExpressVPN.
2. Connect to a US-based server.
3. Go to the YouTube TV website and sign up for an account or log in.
4. If you’re still having trouble watching YouTube TV, delete your cookies and cache before logging back in.
YouTube TV in Australia Channel List
YouTube TV now has over 73 channels, this is for their basic subscription, which can be expanded to include 81 channels for a modest charge. Now that you know how to watch YouTube TV in Australia, let’s talk about what to watch on YouTube TV in Australia. So, keep reading to see our choices!
List-of-youtube-tv-channels
• ABC
• ESPN
• BET
• Bravo
• FOX
• FOX News
• FOX Sport 1 (FS1)
• Fox Business
• TNT
• Comedy Central
• CBS
• Con TV
• Comet TV
• CMT
• Court TV
• Dabl
• Dove
• NBC
• Disney Channel
• Disney XD
• Disney Junior
• HGTV
• IFC
• NFL Network
• LAFC
• Law & Crime
• MSNBC
• Motortrend
• MTV
• MTV 2
• Animal Planet
• Food Network
• Cheddar
• Cheddar News
• Cartoon Network/Adult Swim
• BBC World News
• Paramount Network
• Freeform
• Investigation Discovery
• IFC
• Nat Geo
• NBC Universo
• NBCSN
• NECN
• SyFy
• TeenNick
• TBS
• TNT
• TruTV
• NBA TV
• MLB Game of the Week
What to Watch on YouTube in Australia?
Now that you know how to watch YouTube TV in Australia, let’s talk about what to watch on YouTube TV in Australia. So, keep reading to see my choices!
Popular TV Shows to Watch on YouTube TV in Australia
• Saturday Night Live
• The Bachelor
• Wheel of Fortune
• This Is US
• Daytime Jeopardy
• Shark Tank
• Big Bang Theory
• The Office
• Law and Order
• The Equalizer
• Modern Family
• Seven Worlds, One Planet
Best Movies to Watch on YouTube TV in Australia
• Straight Outta Compton
• Venom
• Deadpool
• The Outsider
• Jurassic World
Exclusively Watch the Primetime Emmys 2021 Awards on YouTube TV
Following the historic 2020 event, the 73rd Emmy Awards are about to begin. We’re all ready and excited to stream this historic event, and we know you are too.
If you wish to watch the event live on YouTube TV, you can do so by following the methods outlined above.
FAQs on YouTube TV in Australia
What can I use instead of YouTube TV?
You can use the following streaming services instead of YouTube TV:
• Sling TV
• FuboTV
• Hulu with Live TV
• DirecTV Stream
What is YouTube TV?
YouTube TV is a video streaming service that features material from major network broadcasters. It offers media from well-known network broadcasters as well as a unique set of features that aren’t normally found on other platforms.
Where can I access YouTube TV?
Despite its 2017 launch, YouTube TV is presently exclusively available in the United States, with no plans to expand to other countries.
If you get errors like “YouTube is not accessible in your country,” it means the service isn’t yet available in your country.
How do I get rid of my YouTube TV subscription?
Follow the steps below to terminate your YouTube TV subscription:
On your device, launch the YouTube TV app. If you don’t have an app, use a web browser to go to tv.youtube.com. Go to the Settings menu. Choose your membership level. Select Pause if you want to put your membership on hold, or “Cancel” if you want to cancel it.
How long do recordings stay on YouTube TV?
Once you’ve added a show to your library, it will stay there even if there are no upcoming episodes, Live TV recordings will be saved for 9 months if you maintain your membership status.
Does YouTube TV work outside the US?
No, YouTube TV doesn’t work outside the US because it has some strictly implemented policies.
Can you delete recordings on YouTube TV?
Subscribers to YouTube TV can erase recordings from their library, but only those that have been planned but not yet recorded. By going to the page for any show or movie and clicking the “Added to Library” button, you can delete these scheduled recordings.
What is the best way to bypass YouTube’s country restrictions?
You’ll need to subscribe to a VPN service like ExpressVPN and connect to an American server to get around the country’s limitations.
Is YouTube TV available internationally?
YouTube TV is available in the United States only, but the chances are it will expand further in 2022 but not confirmed.
When is YouTube TV coming to Australia?
There are no official announcements regarding YouTube TV’s launch in Australia but If you want to watch YouTube TV in Australia until its launch, just follow the above steps.
Can I use YouTube TV on multiple devices?
If you sign up for the base plan ($64.99 per month), you can use YouTube TV on up to three devices at once. Computers, cellphones, tablets, streaming devices (Roku and Apple TV), smart TVs, and gaming consoles are all included in devices that are compatible with YouTube TV.
Does YouTube TV work in Australia with VPN?
YouTube TV's servers check your sign-in location to decide whether to grant you access to the service; you can get around this by subscribing to a VPN and connecting to a US server to mask your true location.
A VPN works perfectly for watching YouTube TV in Australia: it encrypts your online traffic and lets you connect anonymously over the internet.
What is YouTube TV’s monthly cost?
The monthly cost of YouTube TV is USD 64.99.
Can I change the location on YouTube TV?
Yes, you can change your location on YouTube TV just by subscribing to a credible VPN.
How to cancel YouTube TV subscription?
To cancel your YouTube TV subscription, follow the below steps:
1. Go to YouTube TV’s official webpage.
2. 2. Make sure you’re logged in to your YouTube TV account.
3. Go to the Settings tab.
4. From the “Membership” tab, select the “Deactivate Membership” option.
5. Simply select the “Cancel Membership” option, and you’re done.
Is it legal to use a VPN to watch YouTube TV in Australia?
Yes, using a VPN to stream YouTube TV in Australia is legal, but you should always check the terms and licensing of the particular site before using it. We recommend ExpressVPN for a reliable and trustworthy connection.
Conclusion
For every kind of internet user, the YouTube TV streaming service is enjoyable. You can simply watch all of your favorite lives, channels, movies, and shows from anywhere in the world with the help of a credible VPN. (Our top recommendation is ExpressVPN)
I hope you all enjoyed and learned a lot after reading this blog about how to watch YouTube TV in Australia and others like American Netflix in Australia and Discovery Plus in Australia.
People Also Read:
Betty is an enthusiastic Computer Science Graduate and an extrovert who loves to watch Netflix, and is a binge-watcher always seeking quality shows to add to her watch history! She loves to write about the show, she has watched, to make her readers acknowledge them, and witness a cherished time with friends, and family!
|
__label__pos
| 0.833663 |
12.6 - Correlation & Regression Example
12.6 - Correlation & Regression Example
Example: Vertical jump to predict 40-yd dash time
Each spring in Indianapolis, IN the National Football League (NFL) holds its annual combine. The combine is a place for college football players to perform various athletic and psycholological tests in front of NFL scouts.
Two of the most recognized tests are the forty-yard dash and the vertical jump. The forty-yard dash is the most popular test of speed, while the vertical jump measures the lower body power of the athlete.
Football players train extensively to improve these tests. We want to determine if the two tests are correlated in some way. If an athlete is good at one will they be good at the other? In this particular example, we are going to determine if the vertical jump could be used to predict the forty-yard dash time in college football players at the NFL combine.
Data from the past few years were collected from a sample of 337 college football players who performed both the forty-yard dash and the vertical jump.
Solution
We can learn more about the relationship between these two variables using correlation and regression.
Correlation
Is there an association?
The correlation is given in the Minitab output:
Pearson correlation for Forty and Vertical = -0.729589
P-Value = <0.0001
The variables have a strong, negative association.
Simple Linear Regression
Can we predict 40-yd dash time from vertical jump height?
Next, we'll conduct the simple linear regression procedure to determine if our explanatory variable (vertical jump height) can be used to predict the response variable (40 yd time).
Assumptions
First, we'll check our assumptions to use simple linear regression.
Assumption 1: Linearity
The scatterplot below shows a linear, moderately negative relationship between the vertical jump (in inches) and the forty-yard dash time (in seconds).
Scatterplot of forty yard dash time vs vertical jump height
Assumption 2: Independence of Errors
The correlation shown in the Versus Fits scatterplot is approximately 0. This assumption is met.
Versus fits plot for vertical jump vs forty-yard dash time
Assumption 3: Normality of Errors
The normal probability plot and the histogram of the residuals confirm that the distribution of residuals is approximately normal.
Normall probability plot of residuals for forty yard dash and vertical height
Histogram of residuals of forty yard time vs vertical jump height
Assumption 4: Equal Variances
Again, using the Versus Fits scatterplot we see no pattern among the residuals. We can assume equal variances.
Versus fits scatterplot for vertical jump height vs forty yard dash time
ANOVA Table
The ANOVA source table gives us information about the entire model. The \(p\) value for the model is <0.0001. Because this is simple linear regression (SLR), this is the same \(p\) value that we found earlier when we examined the correlation and the same \(p\) value that we see below in the test of the statistical significance for the slope. Our \(R^2\) value is 0.5323 which tells us that 53.23% of the variance in forty-yard dash times can be explained by the vertical jump height of the athlete. This is a fairly good \(R^2\) for SLR.
Analysis of Variance
Source DF Adj SS Adj MS F-Value P-Value
Regression 1 15.8872 15.8872 381.27 <0.0001
Error 335 13.9591 0.0417
Total 336 29.8464
Model Summary
S R-sq R-sq(adj)
0.204130 53.23% 53.09%
Coefficients
Predictor Coef SE Coef T-Value P-Value
Constant 6.50316 0.09049 71.87 <0.0001
Vertical -0.053996 0.002765 -19.53 <0.0001
Regression Equation
The regression equation for the two variables is:
Forty = 6.50316 - 0.053996 Vertical
The regression equation indicates that for every inch gain in vertical jump height the 40-yd dash time will decrease by 0.053996 (the slope of the regression line). Finally, the fitted line plot shows us the regression equation on the scatter plot.
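As a quick worked example using this equation (not an additional analysis): a player with a 30-inch vertical jump has a predicted forty-yard dash time of \(6.50316 - 0.053996 \times 30 \approx 4.88\) seconds.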
Scatterplot of vertical jump vs forty-yard dash times
|
__label__pos
| 0.554415 |
Max-Width in Tailwind CSS: Complete Guide with Examples
We’ll look into the max-width utility, explore its various classes, and provide practical examples to help you make the most of this feature
Tailwind CSS is a powerful and flexible utility-first CSS framework. Tailwind has gained much attention due to its ability to rapidly prototype and build custom user interfaces.
In this blog post, we’ll look into the max-width utility, explore its various classes, and provide practical examples to help you make the most of this feature.
Know the Max-Width Property
The max-width property in CSS determines the maximum width of an element. By setting max-width, we ensure that an element will never exceed the specified value, even if its content or the browser window size causes it to do so.
This proves useful with responsive websites, where you want elements to scale and adjust to different screen sizes without breaking the layout.
Using Max-Width in Tailwind CSS
Tailwind CSS offers a range of predefined max-width classes to quickly set the maximum width of elements. These classes are generated based on our projects configuration file, but the default classes are as follows:
1. max-w-none: Resets the max-width to its initial value (no maximum width constraint).
2. max-w-0: Sets the max-width to 0.
3. max-w-[size]: Sets the max-width to a predefined value, where [size] can be xs, sm, md, lg, xl, 2xl, 3xl, 4xl, 5xl, 6xl, or 7xl.
For example:
<div class="max-w-md">
This content's maximum width will be restricted to the medium size.
</div>
Custom Max-Width Classes
In addition to the predefined classes, you can create custom max-width classes by adding them to your configuration file. This is useful if you need a max-width value that isn’t available in the default classes.
For example, let’s create a custom max-width class with a value of 800px:
// tailwind.config.js
module.exports = {
theme: {
maxWidth: {
'custom': '800px',
},
},
variants: {},
plugins: [],
}
Now, you can use this custom class in your HTML:
<div class="max-w-custom">
This content's maximum width will be restricted to 800px.
</div>
Practical Examples
1 – Limiting the width of a container:
<div class="mx-auto max-w-2xl">
This container's content will be centered and have a maximum width of 2xl.
</div>
2 – Restricting the size of an image:
<img class="max-w-xs" src="image.jpg" alt="A sample image" />
3 – Creating responsive max-width:
<div class="max-w-sm md:max-w-md lg:max-w-lg xl:max-w-xl">
This content will have different max-width values depending on the screen size.
</div>
Conclusion
We can easily create responsive and well-designed layouts for our website by understanding and effectively using the max-width utility in Tailwind CSS.
I hope this comprehensive guide has provided you with the knowledge and examples needed to make the most of the max-width utility in your projects.
|
__label__pos
| 0.782999 |
Disable warnings when loading non-well-formed HTML by DomDocument (PHP)
I need to parse some HTML files; however, they are not well-formed and PHP prints out warnings. I want to avoid such debugging/warning behavior programmatically. Please advise. Thank you!
Code:
// create a DOM document and load the HTML data
$xmlDoc = new DomDocument;
// this dumps out the warnings
$xmlDoc->loadHTML($fetchResult);
This:
@$xmlDoc->loadHTML($fetchResult)
can suppress the warnings, but how can I capture those warnings programmatically?
Answer
You can install a temporary error handler with set_error_handler
class ErrorTrap {
protected $callback;
protected $errors = array();
function __construct($callback) {
$this->callback = $callback;
}
function call() {
$result = null;
set_error_handler(array($this, 'onError'));
try {
$result = call_user_func_array($this->callback, func_get_args());
} catch (Exception $ex) {
restore_error_handler();
throw $ex;
}
restore_error_handler();
return $result;
}
function onError($errno, $errstr, $errfile, $errline) {
$this->errors[] = array($errno, $errstr, $errfile, $errline);
}
function ok() {
return count($this->errors) === 0;
}
function errors() {
return $this->errors;
}
}
Usage:
// create a DOM document and load the HTML data
$xmlDoc = new DomDocument();
$caller = new ErrorTrap(array($xmlDoc, 'loadHTML'));
// this doesn't dump out any warnings
$caller->call($fetchResult);
if (!$caller->ok()) {
var_dump($caller->errors());
}
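As a side note, libxml's own error handling is another common way to do this (a minimal sketch, separate from the accepted answer above):
// Collect the parser warnings instead of printing them
$xmlDoc = new DomDocument();
libxml_use_internal_errors(true);
$xmlDoc->loadHTML($fetchResult);
$errors = libxml_get_errors(); // array of LibXMLError objects
libxml_clear_errors();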
Zafu: KendoUI JSP autocomplete revisited
In Zafu: KendoUI JSP taglib + Couchbase (3) I have explained how to implement an autocomplete for accessing the name of the beers of Couchbase sample-beer database. That time I have used a pretty complex structure for mapping KendoUI filter.
But if the only thing that I need is sending whatever the user has typed so far, it might not be worthwhile using such a complex structure. If that's your case, this is what you have to do.
Redefine parameterMap
In the previous post I have used the following function for generating the parameters to send to the server.
function kendoJson(d, t) {
return "param=" + JSON.stringify(d);
}
Now, I have redefined it as:
function kendoJson(d, t) {
return "name=" + d.filter.filters[0].value + "&limit=" + d.take;
}
I.e., only send the data typed so far as a parameter called name and limit as the number of results that is the value defined in kendo:dataSource JSP tag.
Get the parameters in the server
In the server now the parameters are much easier to retrieve since they arrive as two different parameters with the names name and limit.
String name = request.getParameter("name");
int limit = 10;
System.out.println("Limit:" + request.getParameter("limit"));
if (request.getParameter("limit") != null) {
try {
limit = Integer.parseInt(request.getParameter("limit"));
} catch (Exception e) {
e.printStackTrace();
}
}
And then invoke the query as we explained in the last post:
Query query = new Query();
query.setIncludeDocs(false);
query.setStale(Stale.FALSE);
query.setDescending(false);
query.setReduce(false);
query.setSkip(0);
query.setLimit(limit);
query.setRangeStart(name);
ViewResponse result = client.query(view, query);
Iterator itr = result.iterator();
Zafu: KendoUI JSP taglib + Couchbase (3)
In the two previous posts of this series, I've shown how to use the KendoUI Grid with server-side paging and client-side paging, using the all new KendoUI JSP wrapper. It's time to take a look at another of KendoUI's widgets: autocomplete, and use it for creating an input that suggests the names of the beers provided in the Couchbase sample-beer database.
KendoUI autocomplete using JSP taglib
Let's start by showing all the JSP code and then dissecting it:
<kendo:autoComplete name="beer" dataTextField="name" ignoreCase="false">
<kendo:dataSource pageSize="10" serverPaging="true" serverFiltering="true">
<kendo:dataSource-transport parameterMap="kendoJson">
<kendo:dataSource-transport-read url="/BeerNames"/>
</kendo:dataSource-transport>
<kendo:dataSource-schema type="json" data="data"/>
</kendo:dataSource>
</kendo:autoComplete>
In kendo:autoComplete I’ve used two attributes:
1. dataTextField: its value will be sent to the server as a parameter, and we use it to indicate the field of our document used for filtering.
2. ignoreCase allows us to tell the server whether or not the filtering should differentiate lowercase from uppercase characters.
In kendo:dataSource I used three attributes that allow me to control the filtering of the data. What I am saying is:
1. pageSize: I only want to retrieve a maximum of 10 records (name of beers).
2. serverPaging: Paging is going to be done in the server, meaning that the server should pick that 10 and send to the client (to KendoUI) a maximum of 10 records.
3. serverFiltering: the filtering (I mean, the server should choose those records that start with the typed characters and send me the next 10).
In kendo:dataSource-transport I have chosen to encode the parameters by myself to overcome the problem found with the KendoUI JSP wrapper and the way that it sends the arguments to the server by default (see First "problem" in KendoUI wrapper in my post Zafu: KendoUI JSP taglib + Couchbase (2)).
Finally, in kendo:dataSource-schema I let KendoUI know that data is going to be received in JSON format and in an object element called data. So, it is going to be something like this:
{"data":[
{ "name":"A. LeCoq Imperial Extra Double Stout 1999" },
{ "name":"A. LeCoq Imperial Extra Double Stout 2000" },
{ "name":"Abana Amber Ale" },
{ "name":"Abbaye de Floreffe Double" },
{ "name":"Abbaye de Saint-Martin Blonde" },
{ "name":"Abbaye de Saint-Martin Brune" },
{ "name":"Abbaye de Saint-Martin Cuvée de Noel" },
{ "name":"Abbaye de Saint-Martin Triple" },
{ "name":"Abbaye de St Amand" },
{ "name":"Abbey 8" }
]}
Where we see that the array of results is called data, beer names is returned as the value of the attribute name and we get 10 results.
Parameters sent to the server
But what am I going to receive on the server (the servlet)? It is going to be a parameter called param whose value is something like:
{
"take": 10,
"skip": 0,
"page": 1,
"pageSize":10,
"filter": {
"logic": "and",
"filters":[
{
"value": "a",
"operator": "startswith",
"field": "name",
"ignoreCase":false
}
]
}
}
That says that the server should take 10 results, skip the first 0, and that the filtering condition is values in the column name that start with an "a", not ignoring case. Since there is only one condition, the and is not important (this is used when filtering by several conditions, where we might choose that all conditions must be fulfilled -using and- or that it is enough if one condition is met -using or-).
Using the parameters in the server
For a question of convenience I did create a Java class for accessing the conditions of the search. This class are something like this:
public class RequestParameters {
public int take;
public int skip;
public int page;
public int pageSize;
public Filter filter = new Filter();
}
public class Filter {
public String logic;
public ArrayList<FilterCondition> filters = new ArrayList<FilterCondition>();
}
public class FilterCondition {
public String value;
public String operator;
public String field;
public Boolean ignoreCase;
}
Where I divided the parameter sent in three classes: RequestParametersFilter and FilterCondition.
When the servlet receives the paramater, I fill this structure using:
Gson gson = new Gson();
RequestParameters params = gson.fromJson(request.getParameter("param"),
RequestParameters.class);
And Couchbase query is filled as:
Filter filter = params.filter;
FilterCondition cond = filter.filters.get(0);
Query query = new Query();
query.setIncludeDocs(false);
query.setStale(Stale.FALSE);
query.setDescending(false);
query.setReduce(false);
query.setSkip(params.skip);
query.setLimit(params.take);
query.setRangeStart(cond.value);
ViewResponse result = client.query(view, query);
Iterator itr = result.iterator();
And I fill the response using:
PrintWriter out = response.getWriter();
out.println("{\"data\":[");
StringBuilder buf = new StringBuilder();
for (Object result : results) {
buf.append(",{ \"name\":\"").append(((ViewRow) result).getKey()).append("\"}");
}
if (buf.length() > 0) {
out.print(buf.substring(1));
}
System.out.println("buffer:" + buf);
out.println("]}");
Zafu: KendoUI JSP taglib + Couchbase (2)
Just a few hours after being presented KendoUI Q3 I wrote the first post on KendoUI JSP wrapper. It was a pretty simple grid that retrieved data from Couchbase 2.0. That first post showed how to get data from a Couchbase 2.0 view and displayed it in a grid doing the paging in the client. In that example that meant transfer almost 6000 records and then do the paging in the browser -not very smart for such volume-. This time, I will implement the paging in the server and transfer only a small group of records.
Zafu server-side paging
Step one, a brief introduction to what we need for implementing server-side paging and what KendoUI and Couchbase provide.
KendoUI server-side paging
Configuring a KendoUI grid for server-side paging is as easy as defining serverPaging as true in the DataSource definition used by our Grid (see documentation here or here). Something like:
<kendo:dataSource pageSize="10" serverPaging="true">
<kendo:dataSource-transport>
<kendo:dataSource-transport-read url="/ListBeer" type="GET"/>
</kendo:dataSource-transport>
<kendo:dataSource-schema data="data" total="total" groups="data">
<kendo:dataSource-schema-model>
<kendo:dataSource-schema-model-fields>
<kendo:dataSource-schema-model-field name="name" type="string"/>
<kendo:dataSource-schema-model-field name="abv" type="number"/>
<kendo:dataSource-schema-model-field name="style" type="string"/>
<kendo:dataSource-schema-model-field name="category" type="string"/>
</kendo:dataSource-schema-model-fields>
</kendo:dataSource-schema-model>
</kendo:dataSource-schema>
</kendo:dataSource>
In the previous definition I specify both that the server will do paging (send a page at a time) and the size of each page defined in pageSize (defined as 10 in the previous example).
But, in addition, when the DataSource loads data, it also needs to specify the number of records to skip from the dataset. What I will get in my servlet /ListBeer is four tuples of parameter-value:
1. take the number of records to retrieve.
2. skip the number of records to skip from the beginning of the DataSet.
3. page the index of the current page.
4. pageSize the number of records displayed on each page.
Couchbase 2.0 server-side paging
When we build a query in Couchbase 2.0, we define a series of parameter to configure it. This includes:
1. setSkip for defining the number of elements to skip.
2. setLimit for defining the number of records to retrieve.
The mapping between KendoUI and Couchbase 2.0 paging parameters is easy: KendoUI:skip maps into Couchbase:skip and KendoUI:take maps into Couchbase:limit.
New Java read code
The new read function (defined in the previous post) now is as follow:
public List read(String key, int skip, int limit) throws Exception {
View view = client.getView(ViewDocument, ViewName);
if (view == null) {
throw new Exception("View is null");
}
Query query = new Query();
if (key != null) {
query.setKey(key);
}
if (skip >= 0) {
query.setSkip(skip);
}
if (limit >= 0) {
query.setLimit(limit);
}
query.setStale(Stale.FALSE);
query.setIncludeDocs(true);
query.setDescending(false);
query.setReduce(false);
ViewResponse result = client.query(view, query);
Iterator<ViewRow> itr = result.iterator();
return IteratorUtils.toList(itr);
}
Where skip is what I get in the servlet as request.getParameter(“skip”) and limit is request.getParameter(“take”).
First “problem” in KendoUI wrapper
NOTE: Please, remember that KendoUI JSP wrapper is a beta release which means (according the Wikipedia):
Beta (named after the second letter of the Greek alphabet) is the software development phase following alpha. It generally begins when the software is feature complete. Software in the beta phase will generally have many more bugs in it than completed software, as well as speed/performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release and this is typically the first time that the software is available outside of the organization that developed it.
So, having defects is not bad, it is something that we cannot avoid, and as long as it is usable (and so far it is) and the defects get fixed by the time the final release is introduced, it is completely understandable.
The defect is the way we receive the parameters, which is different than for traditional HTML / JavaScript usage and also a little odd.
Parameters in KendoUI HTML / Javascript
Parameters are sent to the server (unless you define your own parameter mapping) encoded in the URL as url?param1=value1&param2=value2&… (Ex: /ListBeer?pageSize=10&take=10&skip=0&page=1).
Parameters in KendoUI JSP wrapper
Parameters are sent to the server (unless you define your own parameter mapping) as a stringified JSON and this as a parameter: url?{param1:value1,param2:value2,…} (Ex: /ListBeer?{pageSize:10,take:10,skip:0,page:1}=).
And here I have two concerns:
1. I should not change my servlet no matter whether I use HTML/JavaScript, the ASP wrapper, the JSP wrapper,…
2. Sending a string serialized JSON object as a parameter name is not nice, if I want to send it as JSON I will send it in the value side of a parameter and not the name side.
Workaround
Have 2 different servlets (code paths) for parsing parameters:
1. HTML/JavaScript
2. JSP wrapper.
Fixes
1. KendoUI fixes TransportTag.doEndTag so that it does not define JSON.stringify as the default parameterMap function but leaves it empty (this is something that we **cannot do** ourselves).
2. Define our own parameter map that sends the parameters encoded in the URL as HTML 5 / JavaScript does, or as JSON but on the value side and with a well-known parameter name.
Which one do I recommend…? Fix 1, and until KendoUI provides us a patched version, either you fix it yourself or you go with Fix 2.
<script type="text/javascript">
function encodeParametersAsJSON(param) {
return "param=" + JSON.stringify(param);
}
</script>
...
<kendo:dataSource-transport parameterMap="encodeParametersAsJSON">
<kendo:dataSource-transport-read url="/ListBeer" type="GET"/>
</kendo:dataSource-transport>
I do not recommend Workaround since you have to duplicate your java code.
Zafu: KendoUI JSP taglib + Couchbase (1)
A couple of days ago I attended the KendoUI Q3 announcement. Being more on the Java than the C# side I was very excited about the JSP taglib (again, zillions of thanks).
Just one hour after finishing the presentation I connected to Couchbase 2.0 webinar on Couchbase App Development.
A few hours later (about 4 working hours) I finished writing some code using both together.
Why did it take me almost 4 hours?
1. Just as a little of feedback I admit that before this I didn’t write a single line of code for Couchbase (many for CouchDB using Java Client libraries as well as CouchApp) but they are not the same and they do not use the same Java Client library.
2. I have written many lines of code for KendoUI using HTML5 + JavaScript (never ASP and C#), created new KendoUI widgets (and even fixed some bugs in KendoUI). BUT KendoUI JSP taglib is new, so I did not have experience on KendoUI JSP taglib neither on the server side, so a little of challenge.
3. But this was not where I spend most of the time :-( It was because Couchbase Java Client (plus the libraries that come with it), is for Java 6 while KendoUI JSP taglib is for Java 7 so I had to figure out how to run both together under Apache Tomcat (and this took me more than 2 hours).
@toddanglin,@BrandonSatrom,@JohnBristowe,@burkeholland,@alex_gyoshev Would be possible to have your taglib **officially** release for Java 6 too? You just need to change a couple of lines of code ;-)
What I did in the remaining two hours was…
When you install Couchbase 2.0, you are prompted (at least I was) about installing a sample database (a list of almost 6000 beers!!!).
So, I’ve decide to display in a grid those (almost) 6000 records (name, abv, category and style) and being able to navigate them. NOTE: For this first example I will not be doing server side paging, filtering,… By the end of the experiment (I live in Europe) it was almost 1am and I went to sleep.
What did I need to write:
1. Java Server Page (JSP) using KendoUI taglib (not much documentation on how to use it but looking their example was enough for me).
2. Servlet invoked by KendoUI DataSource transport read for retrieving the data, this is Java and uses Couchbase Java Client Library.
3. Couchbase view for extracting the list of beers (this is a MapReduce function).
Kendo UI JSP taglib
This is what I wrote.
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib prefix="kendo" uri="http://www.kendoui.com/jsp/tags" %>
<html>
<head>
<title>Powered by Zafu: OnaBai 2012 (c)</title>
<!-- Kendo UI Web styles-->
<link href="styles/kendo.common.min.css" rel="stylesheet" type="text/css"/>
<link href="styles/kendo.silver.min.css" rel="stylesheet" type="text/css"/>
<!-- Kendo UI Web scripts-->
<script src="js/jquery.min.js" type="text/javascript"></script>
<script src="js/kendo.web.min.js" type="text/javascript"></script>
<style type="text/css">
html {
font-family: "Arial", sans-serif;
font-weight: 300;
font-variant: normal;
font-style: normal;
font-size: 12px;
}
table[role="grid"] {
font-size: 1em;
}
</style>
</head>
<body>
<kendo:window name="main" title="Powered by Zafu: OnaBai 2012 (c)" minHeight="450" minWidth="700" maxWidth="700">
<kendo:grid name="grid" pageable="true" sortable="true" filterable="false" groupable="false">
<kendo:grid-columns>
<kendo:grid-column title="Name" field="name"/>
<kendo:grid-column title="ABV" field="abv" format="{0:n1}" width="50px"/>
<kendo:grid-column title="Style" field="style"/>
<kendo:grid-column title="Category" field="category"/>
</kendo:grid-columns>
<kendo:dataSource pageSize="10">
<kendo:dataSource-transport>
<kendo:dataSource-transport-read url="/ListBeer" type="GET" contentType="application/json"/>
</kendo:dataSource-transport>
<kendo:dataSource-schema data="data" total="total" groups="data">
<kendo:dataSource-schema-model>
<kendo:dataSource-schema-model-fields>
<kendo:dataSource-schema-model-field name="name" type="string"/>
<kendo:dataSource-schema-model-field name="abv" type="number"/>
<kendo:dataSource-schema-model-field name="style" type="string"/>
<kendo:dataSource-schema-model-field name="category" type="string"/>
</kendo:dataSource-schema-model-fields>
</kendo:dataSource-schema-model>
</kendo:dataSource-schema>
</kendo:dataSource>
</kendo:grid>
<div style="position: absolute; bottom: 5px;">Powered by Zafu, Couchbase 2.0 & KendoUI</div>
</kendo:window>
</body>
</html>
This JSP is equivalent to this HTML (what I have had to write in the past).
<html>
<head>
<title>Powered by Zafu: OnaBai 2012 (c)</title>
<!-- Kendo UI Web styles-->
<link href="styles/kendo.common.min.css" rel="stylesheet" type="text/css"/>
<link href="styles/kendo.black.min.css" rel="stylesheet" type="text/css"/>
<!-- Kendo UI Web scripts-->
<script src="js/jquery.min.js" type="text/javascript"></script>
<script src="js/kendo.web.min.js" type="text/javascript"></script>
<style type="text/css">
html {
font-family: "Arial", sans-serif;
font-weight: 300;
font-variant: normal;
font-style: normal;
font-size: 12px;
}
table[role="grid"] {
font-size: 1em;
}
</style>
<script type="text/javascript">
$(document).ready(function () {
$("#main").kendoWindow({
title: "Powered by Zafu: OnaBai 2012 (c)",
minHeight:450,
minWidth: 700,
maxWidth: 700
});
$("#grid").kendoGrid({
columns: [
{ field:"name", title:"Name" },
{ field:"abv", title:"ABV", format:"{0:n1}", width:"50" },
{ field:"style", title:"Style", format:"{0:n1}" },
{ field:"category", title:"Category", format:"{0:n1}" }
],
pageable: true,
sortable: true,
filterable:false,
groupable: false,
dataSource:{
pageSize: 10,
transport:{
read:{
url: "/ListBeer",
type: "GET",
contentType:"application/json"
}
},
schema: {
data: "data",
total: "total",
groups:"data",
model: {
fields:{
name: { type:"string" },
abv: { type:"number" },
style: { type:"string" },
category:{ type:"string" }
}
}
}
}
})
});
</script>
</head>
<body>
<div id="main">
<div id="grid"></div>
<div style="position: absolute; bottom: 5px;">Powered by Zafu, Couchbase 2.0 & KendoUI</div>
</div>
</body>
</html>
MapReduce
function (doc, meta) {
if (doc.type && doc.type == "beer" && doc.name) {
emit(doc.name, doc.id);
}
}
Couchbase 2.0 Java Client: View invocation
public List read(String key) throws Exception {
View view = client.getView(ViewDocument, ViewName);
if (view == null) {
throw new Exception("View is null");
}
Query query = new Query();
// Set the key for the query based on beer name
if (key != null) {
query.setKey(key);
}
query.setStale(Stale.FALSE);
query.setIncludeDocs(true);
query.setDescending(false);
ViewResponse result = client.query(view, query);
Iterator<ViewRow> itr = result.iterator();
return IteratorUtils.toList(itr);
}
Conclusion
Positive
1. It works!
2. It was not that hard!
3. Both fit pretty nice and both are easy to start with!
4. Even that tag libraries are pretty verbose having snippets you can type it quite fast.
5. My IDE recognized KendoUI taglib and coloring and typing was pretty fast.
Negative
1. KendoUI, please release Java 6 version (I know that it is pretty old but as you can see there people developing new products and still using it).
2. Couchbase, please upgrade to Java 7. Java 6 is pretty old otherwise it will look like Windows XP.
Pending
1. Play with more widgets and server side paging / filtering / …
2. See how it fits (if it actually does, I don’t think so) with my own Widgets (I have a few that I use a lot).
Neutral
1. I think it would be nice to have a series of blog posts exploiting the idea of the JSP taglib… maybe I will start one… it is not so much a question of documentation as of writing guides on how to use it for practical cases.
Karma: Managing Plugin Versions in package.json
PROBLEM
Let’s assume our package.json looks like this:-
{
"name": "testKarma",
"private": true,
"devDependencies": {
"karma": "^0.12.24",
"karma-chrome-launcher": "^0.1.4",
"karma-coverage": "^0.2.4",
"karma-jasmine": "^0.2.2",
"karma-junit-reporter": "^0.2.1",
"karma-phantomjs-launcher": "^0.1.4"
}
}
What we want to do is to update all the plugin versions defined in this file.
SOLUTION
After trying out several solutions, it appears that using npm-check-updates is a better and cleaner solution for discovering newer versions of these plugins.
First, we need to install npm-check-updates globally. You may need to use sudo to do so.
sudo npm install -g npm-check-updates
Once installed, run the following command within the project root directory to discover any new versions:-
npm-check-updates
Output:-
"karma-chrome-launcher" can be updated from ^0.1.4 to ^0.1.5 (Installed: 0.1.5, Latest: 0.1.5)
"karma-coverage" can be updated from ^0.2.4 to ^0.2.6 (Installed: 0.2.6, Latest: 0.2.6)
"karma-jasmine" can be updated from ^0.2.2 to ^0.2.3 (Installed: 0.2.3, Latest: 0.2.3)
"karma-junit-reporter" can be updated from ^0.2.1 to ^0.2.2 (Installed: 0.2.2, Latest: 0.2.2)
Run 'npm-check-updates -u' to upgrade your package.json automatically
Finally, run the following command to update the plugins:-
npm-check-updates -u
Output:-
"karma-chrome-launcher" can be updated from ^0.1.4 to ^0.1.5 (Installed: 0.1.5, Latest: 0.1.5)
"karma-coverage" can be updated from ^0.2.4 to ^0.2.6 (Installed: 0.2.6, Latest: 0.2.6)
"karma-jasmine" can be updated from ^0.2.2 to ^0.2.3 (Installed: 0.2.3, Latest: 0.2.3)
"karma-junit-reporter" can be updated from ^0.2.1 to ^0.2.2 (Installed: 0.2.2, Latest: 0.2.2)
package.json upgraded
The plugin versions are successfully updated, and package.json is also updated accordingly.
{
"name": "testKarma",
"private": true,
"devDependencies": {
"karma": "^0.12.24",
"karma-chrome-launcher": "^0.1.5",
"karma-coverage": "^0.2.6",
"karma-jasmine": "^0.2.3",
"karma-junit-reporter": "^0.2.2",
"karma-phantomjs-launcher": "^0.1.4"
}
}
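One caveat worth noting: npm-check-updates only rewrites package.json. To actually download the newer versions into node_modules, run an install afterwards:
npm install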
Avoid Redundant Function Prototypes
In common top-down C/C++ programming, the main() function is written first, and then the functions that it calls, and then in turn the functions those functions call. For relatively small programs, or modules with static functions, the called functions can or must be contained in the same source code file as the calling functions. If they are added into the file after (“lower down” than) the function that calls them, safety checking language constraints require that a prototype be added in the file before that calling function so that the parameters and return values can be verified. Then, as the program evolves, the prototype must be kept synchronized with the function definition below it. In the case of functions called from different source code files, this typically is not a problem, since the prototypes are contained in header files which are #include’d at the top of the file. To avoid this redundancy, simply place the called functions before the calling functions, and let their definition act as the declaration. For example, it is more work to code:
void DoWork();

int main()
{
    DoWork();
    return(0);
}

void DoWork() { /*function body*/ }
Than it is to simply do:
void DoWork() { /*function body*/ }

int main()
{
    DoWork();
    return(0);
}
dynamic_cast shared pointer
Dynamic cast of shared_ptr Returns a copy of sp of the proper type with its stored pointer casted dynamically from U* to T* . If sp is not empty, and such a cast would not return a null pointer, the returned object shares ownership over sp ‘s resources, increasing by one the use count .
// dynamic_pointer_cast example
std::shared_ptr<A> foo;
std::shared_ptr<B> bar;
bar = std::make_shared<B>();
foo = std::dynamic_pointer_cast<A>(bar);
std::cout << "foo's static type: " << foo->static_type << '\n';
std::cout << "foo's dynamic type: " << foo->dynamic_type << '\n';
(See more on cplusplus.com.)
If r is empty, so is the new shared_ptr (but its stored pointer is not necessarily null). Otherwise, the new shared_ptr will share ownership with the initial value of r, except that it is empty if the dynamic_cast performed by dynamic_pointer_cast returns a null pointer.
I have two classes A and B, where B inherits from A. If I have a shared_ptr<A> object which I know is really a B subtype, how can I perform a dynamic cast to access the API of B?
One answer: use the dynamic_pointer_cast example copied from the link above.
If you just want to call a function from B you can use one of these:
std::shared_ptr<A> ap = ...;
dynamic_cast<B&>(*ap).b_function();
if (B* bp = dynamic_cast<B*>(ap.get())) {
    bp->b_function();
}
Probably the nicest way would be to use the standard functions for casting a shared_ptr.
Related questions:
c++ – How does one downcast a std::shared_ptr?
c++ – Difference between shared_dynamic_cast and dynamic_pointer_cast
std::shared_ptr is a smart pointer that retains shared ownership of an object through a pointer. Several shared_ptr objects may own the same object. The object is destroyed and its memory deallocated when either of the following happens: the last remaining shared_ptr owning the object is destroyed, or the last remaining shared_ptr owning the object is assigned another pointer via operator= or reset().
Shared pointer or not, when you have a pointer to a Base, you can only call member functions from Base. If you really need to dynamic_cast, you can use dynamic_pointer_cast from boost, but chances are that you shouldn’t. Instead, think about your design : Derived
Typically the smart pointer class will expose a dynamic cast wrapper that deals with the underlying smart pointer object properly. STL had a nice talk on how shared_ptr works at GoingNative 2012. – Xeo Apr 22 ’12 at 2:19 Thanks for pointing me in the right
Performs a dynamic_cast on the instance managed by a shared_ptr. Summary: performs a dynamic_cast on the instance managed by the shared_ptr. Return value: if r is empty, this function returns an empty shared_ptr.
Creates a new instance of std::shared_ptr whose stored pointer is obtained from r's stored pointer using a cast expression. If r is empty, so is the new shared_ptr (but its stored pointer is not necessarily null). Otherwise, the new shared_ptr shares ownership with the initial value of r, except that it is empty if the dynamic_cast performed by dynamic_pointer_cast returns a null pointer.
If the cast is successful, dynamic_cast returns a value of type new-type. If the cast fails and new-type is a pointer type, it returns a null pointer of that type. If the cast fails and new-type is a reference type, it throws an exception that matches a handler of type std
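A minimal, self-contained illustration of that difference (the Base/Derived classes are invented for the example):
#include <iostream>
#include <typeinfo>

struct Base { virtual ~Base() {} };
struct Derived : Base { };

int main() {
    Base base_only;
    Base* b = &base_only;                          // points to a Base, not a Derived
    if (Derived* d = dynamic_cast<Derived*>(b)) {  // pointer form: yields a null pointer here
        std::cout << "pointer cast succeeded\n";
    } else {
        std::cout << "pointer cast returned null\n";
    }
    try {
        Derived& dr = dynamic_cast<Derived&>(*b);  // reference form: throws here
        (void)dr;
    } catch (const std::bad_cast&) {
        std::cout << "reference cast threw std::bad_cast\n";
    }
    return 0;
}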
Static cast of shared_ptr Returns a copy of sp of the proper type with its stored pointer casted statically from U* to T* . If sp is not empty, the returned object shares ownership over sp ‘s resources, increasing by one the use count .
Const cast of shared_ptr Returns a copy of sp of the proper type with its stored pointer const casted from U* to T* . If sp is not empty, the returned object shares ownership over sp ‘s resources, increasing by one the use count .
A shared_ptr that does not own any pointer is called an empty shared_ptr. A shared_ptr that points to no object is called a null shared_ptr and shall not be dereferenced . Notice though that an empty shared_ptr is not necessarily a null shared_ptr , and a null shared_ptr is not necessarily an empty shared
dynamic_cast: casts a base-class object pointer (or reference) to a derived-class pointer; dynamic_cast handles the conversion according to whether the base-class pointer really points to an object of the derived class. Its main use is converting a base-class pointer or reference safely into a derived-class one.
shared_ptr has a rather important place in Boost; its behavior is the closest to a raw pointer, but it is safer than a plain pointer and can even provide basic thread-safety guarantees. It essentially solves many of the problems you inevitably run into when using pointers in C++ development.
Creates a new instance of std::shared_ptr that stores a pointer obtained from the pointer stored in r using a cast expression. If r is empty, the new shared_ptr is also empty (though the stored pointer is not necessarily null).
Returns a shared pointer to the pointer held by src, using a dynamic cast to type X to obtain an internal pointer of the appropriate type. If the dynamic_cast fails, the object returned will be null. The src object is converted first to a strong reference.
shared_ptr class template Introduction Best Practices Synopsis Members Free Functions Example Handle/Body Idiom Thread Safety Frequently Asked Questions Smart Pointer Timings Programming Techniques Introduction The shared_ptr class template stores a
A shared_ptr. U* shall be convertible to T* using dynamic_cast. Return Value A shared_ptr object that owns the same pointer as sp (if any) and has a shared pointer that points to the same object as sp with a potentially different type. Example
Description: it returns a copy of sp of the proper type with its stored pointer casted dynamically from U* to T*. Declaration: following is the declaration for std::dynamic_pointer_cast: template <class T, class U> shared_ptr<T> dynamic_pointer_cast (const shared_ptr<U>& sp);
In addition, the smart pointers in the OSG library use a similar dynamic_pointer_cast template function for upcasting. A note on smart pointers in the Boost library: as everyone knows, when learning C, pointers are very important.
std::shared_ptr is a smart pointer that retains shared ownership of an object through a pointer. Several shared_ptr objects may own the same object. The object is destroyed and its memory deallocated when one of the following occurs: the last remaining shared_ptr owning the object is destroyed, or the last remaining shared_ptr owning the object is assigned another pointer via operator= or reset().
What is the difference between shared_dynamic_cast and dynamic_pointer_cast? It seems to me that they might be equivalent.
reinterpret_pointer_cast creates a new shared_ptr from an existing shared pointer by using a cast: template <class T, class U> shared_ptr<T> reinterpret_pointer_cast(const shared_ptr<U>& ptr) noexcept;
1. A base-class pointer cannot directly call new functions of the derived class; it has to be converted to a derived-class pointer before those can be called. The C++ compiler does static type analysis at compile time and does not assume that a base-class pointer really points to an object of a derived type. Therefore, calling through a base-class pointer…
static_pointer_cast dynamic_pointer_cast const_pointer_cast weak_ptr Weak pointers just “observe” the managed object; they don’t “keep it alive” or affect its lifetime. Unlike shared_ptrs, when the last weak_ptr goes out of scope or disappears, the pointed-to
Casting a shared_ptr of one type to a shared_ptr of another type: there are casting operators for shared_ptr called static_pointer_cast and dynamic_pointer_cast. In other words, if you have this code for raw…
c++ documentation: Casting std::shared_ptr pointers Example It is not possible to directly use static_cast, const_cast, dynamic_cast and reinterpret_cast on std::shared_ptr to retrieve a pointer sharing ownership with the pointer being passed as argument.
4,5) Same as (2,3), but every element is initialized from the default value u. If U is not an array type, then this is performed as if by the same placement-new expression as in (1); otherwise, this is performed as if by initializing every non-array element of the (possibly
The raw pointer overloads assume ownership of the pointed-to object. Therefore, constructing a shared_ptr using the raw pointer overload for an object that is already managed by a shared_ptr, such as by shared_ptr(ptr.get()), is likely to lead to undefined behavior.
How to: Create and Use shared_ptr Instances. The shared_ptr type is a smart pointer in the C++ standard library that is designed for scenarios in which more than one owner might have to manage the lifetime of the object in memory.
shared_ptr is a kind of smart pointer that behaves like a pointer but records how many shared_ptrs jointly point to an object. This is the so-called reference counting. Once the last such pointer is destroyed, that is, once an object's reference count drops to 0, the object is automatically deleted. This…
How to: Create and Use shared_ptr Instances. The shared_ptr type is a smart pointer in the C++ standard library designed for scenarios in which more than one owner manages the lifetime of an object in memory.
Smart pointers are my favorite C++ feature set. With C, it’s up to the programmer to keep pointer details in context – and when that happens, errors pop up like weeds. I can’t begin to count the number of times I have fixed: Memory leaks Freeing memory that
std::shared_ptr is a smart pointer that owns an object through a pointer. dynamic_pointer_cast and const_pointer_cast apply a static_cast, a dynamic_cast or a const_cast to the type of the managed object; get_deleter…
Question: I'd like to convert a base class pointer to a derived class pointer as a function argument without using dynamic_pointer_cast. class Base { public: typedef std…
A shared_ptr can share ownership of an object while keeping a pointer to another object. This can be used to point to a member object while owning the object it belongs to. A shared_ptr can also own no object, in which case it is called empty.
dynamic_pointer_cast and const_pointer_cast apply static_cast, dynamic_cast or const_cast to the type of the managed object.
16/8/2018 · Upcasting is converting a derived-class reference or pointer to a base-class. In other words, upcasting allows us to treat a derived type as though it were its base type. It is always allowed for public inheritance, without an explicit type cast. This is a result of the is-a
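In terms of shared_ptr the same idea looks roughly like this (Base and Derived are again invented for the example): the upcast needs no explicit cast, while the downcast goes through std::dynamic_pointer_cast:
#include <memory>

struct Base { virtual ~Base() {} };
struct Derived : Base { };

int main() {
    std::shared_ptr<Derived> d = std::make_shared<Derived>();
    std::shared_ptr<Base> b = d;                                          // upcast: implicit conversion
    std::shared_ptr<Derived> d2 = std::dynamic_pointer_cast<Derived>(b);  // downcast: explicit, null on failure
    return d2 ? 0 : 1;
}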
26/9/2017 · Hi Dan I managed to build a sample app that triggers the dynamic_pointer_cast issue I mentioned earlier. The sample app is creates three shared libraries: libtest.so This library has two test methods Java_com_example_twolibs_TwoLibs_testDynamicCast Java
Parameters: (none). Return value: the stored pointer. Notes: a shared_ptr may share ownership of one object while storing a pointer to another object; get() returns the stored pointer, not the managed pointer. Example…
Tracking of stationary objects
Any object can be integrated into the Internet of Things. The platform allows tracking not only movable objects but also stationary ones, like heavy equipment, agricultural equipment, cargo, goods, or security equipment. Installing GPS devices on each of these objects can be very expensive. Instead, it's more cost-effective to install one device on a vehicle or site and track all others with cheaper BLE tags.
In this tutorial, we'll discuss how to organize tracking for stationary objects, which GPS devices and tags will help gather the necessary data, and how to set them up using truck trailers as an example. We'll also cover how to obtain information about trips and usage for subsequent service work and what API calls will provide information about the tags. Additionally, we'll share other use cases based on real situations.
Find this instruction including BLE sensors configuration example in our Expert center.
What you need to track stationary objects
Various devices are able to read data from BLE beacons: Galileosky, Quecklink, Ruptela, Teltonika, TopFlyTech. We will describe on example of Teltonika FMB920 model and BLE beacon Eye Sensor. To begin tracking stationary objects, you'll need the following: 1. A GPS device that can read BLE tags and is supported by the platform.
1. BLE tags that are compatible with the GPS device. It's worth noting that many BLE tags can transmit information about temperature and humidity, as well as their battery charge. This enhances the ability of these tags to track information, but for our purpose, we'll focus on stationary objects specifically.
2. Platform APIs that provide information about which GPS device a particular tag is near. To create custom solutions for your users using APIs, you'll need developers. Clients typically hire their own developers or contract third-party teams.
Now let's examine the procedure for implementing a real-world case study - tracking truck trailers for trip and usage information and subsequent service work.
How to get information about BLE beacons near the GPS device
On the platform side, there's a BLE beacon data entry object:
{
"tracker_id": 10181654,
"hardware_id": "7cf9501df3d6924e423cabcde4c924ff",
"rssi": -101,
"get_time": "2023-04-17 17:14:42",
"latitude": 50.3487321,
"longitude": 7.58238,
"ext_data": {
"voltage": 3.075,
"temperature": 24.0
}
}
You can read information from it:
• tracker_id - int. An ID of the tracker (aka "object_id").
• hardware_id - string. An ID of the beacon.
• rssi - int. RSSI stands for received signal strength indicator and represents the power of received signal on a device. According to it, you can understand how far away the beacon is from the tracker.
• get_time - date/time. When this data received.
• latitude - float. Latitude.
• longitude - float. Longitude.
• ext_data - object. Additional beacon data.
API calls to get information about BLE tags
There are two API calls that allow you to get all the necessary information about BLE beacons:
Historical data from BLE tags
The first call retrieves historical data from devices. You can set the from and to parameters for obtaining data during a specific period about connected BLE beacons. Since we need the information from the BLE tags' point of view, i.e., the trailers, let's request the information using the beacons parameter.
Request example:
curl -X POST 'https://api.navixy.com/v2/beacon/data/read' \
-H 'Content-Type: application/json' \
-d '{"hash":"59be129c1855e34ea9eb272b1e26ef1d","from": "2023-04-17 17:00:00","to": "2023-04-17 18:00:00","beacons": ["7cf9501df3d6924e423cabcde4c924ff"]}'
This will show which devices were in the vicinity of this BLE beacon during period.
{
"list": [
{
"tracker_id": 10181654,
"hardware_id": "7cf9501df3d6924e423cabcde4c924ff",
"rssi": -101,
"get_time": "2023-04-17 17:05:42",
"latitude": 50.3487321,
"longitude": 7.58238,
"ext_data": {
"voltage": 3.075,
"temperature": 24.0
}
},
...
{
"tracker_id": 10181654,
"hardware_id": "7cf9501df3d6924e423cabcde4c924ff",
"rssi": -101,
"get_time": "2023-04-17 17:40:22",
"latitude": 55.348890,
"longitude": 6.59403,
"ext_data": {
"voltage": 3.075,
"temperature": 24.0
}
}],
"success": true
}
Last data from BLE tags
The second call retrieves information about currently connected beacons to a specific device. For example, if you want to know which trailer is currently near the device, use the following request:
Request example:
curl -X POST 'https://api.navixy.com/v2/beacon/data/last_values' \
-H 'Content-Type: application/json' \
-d '{"hash":"59be129c1855e34ea9eb272b1e26ef1d", "trackers": [10181654], "skip_older_than_seconds": 1200}
This will provide information that there's a trailer "7cf..." next to the device.
{
"list": [
{
"tracker_id": 10181654,
"hardware_id": "7cf9501df3d6924e423cabcde4c924ff",
"rssi": -101,
"get_time": "2023-04-17 17:40:22",
"latitude": 55.348890,
"longitude": 6.59403,
"ext_data": {
"voltage": 3.075,
"temperature": 24.0
}
}],
"success": true
}
How to obtain information on usage times and trip details
We've already gathered historical data using the first of the presented API calls, which showed on which devices the trailer was displayed at a specific time. To get information about the journeys and usage time of this trailer, we simply need to use one of the two API calls:
Overall trip info
API call track/list to get trip information for the period. This will provide general information about the trips, such as where and when they started and ended, maximum speed, mileage, and more.
Request example:
curl -X POST 'https://api.navixy.com/v2/track/list' \
-H 'Content-Type: application/json' \
-d '{"hash": "22eac1c27af4be7b9d04da2ce1af111b", "tracker_id": 10181654, "from": "2023-04-17 17:00:00", "to": "2023-04-17 18:00:00"}'
Response:
{
"id": 11672,
"start_date": "2023-04-17 17:05:42",
"start_address": "10470, County Road, Town of Clarence, Erie County, New York, United States, 14031",
"max_speed": 62,
"end_date": "2023-04-17 17:40:22",
"end_address": "Fast Teddy's, 221, Main Street, City of Tonawanda, New York, United States, 14150",
"length": 18.91,
"points": 59,
"avg_speed": 49,
"event_count": 3,
"norm_fuel_consumed": 6.32,
"type": "regular",
"gsm_lbs": false
}
From this data, we can see that the trip lasted nearly 35 minutes (end_date - start_date), with an average speed of 49 km/h and a maximum speed of 62 km/h. The trip length was 18.91 km. This information allows us to determine how much to pay the driver for transporting the cargo, whether the contractual speed was exceeded, and other details. Additionally, the trip length can be used in the future to calculate the number of kilometers until the next maintenance of the trailer.
Detailed trip info
If you want a detailed track record of the trailer where the beacon is installed for displaying it in a report, for example, you can use the track/read request. This will give us data on all the points received by the platform during the journey.
Request example:
curl -X POST 'https://api.navixy.com/v2/track/read' \
-H 'Content-Type: application/json' \
-d '{"hash": "22eac1c27af4be7b9d04da2ce1af111b", "tracker_id": 10181654, "from": "2023-04-17 17:00:00","to": "2023-04-17 18:00:00", "filter": true}'
Response:
{
"success": true,
"limit_exceeded": true,
"list": [
{
"address": "10470, County Road, Town of Clarence, Erie County, New York, United States, 14031",
"satellites": 10,
"mileage": 0,
"heading": 173,
"speed": 42,
"get_time": "2023-04-17 17:05:42",
"alt": 0,
"lat": 43.0318683,
"lng": -78.5985733
}
]
}
You can use these points together with your preferred maps API to display them on a map.
Other examples of using BLE tags within Navixy API
Here are some other examples of how to use BLE tags, with a short algorithm to get the results you need:
Child seats
Child seats are mandatory for passengers traveling with children. If you or the user operates a passenger transportation service, knowing whether a child seat is available in a vehicle can help you quickly determine which drivers are suitable for certain passengers and avoid wasting time and fuel. You can also find out which driver currently has a child seat installed in their vehicle. Additionally, it's important to consider passengers with two or more children and identify cars equipped with more than one child seat.
To address this, you'll need to install a BLE beacon on each child seat. Next, let's say your transport booking app needs to request information from all drivers who have a child seat installed. To do this, use the beacon/last_values API call to gather information about which drivers can be assigned to a particular order.
You can also use the RSSI parameter to determine if the seat is located inside the vehicle or in the trunk. To accomplish this, you'll need to conduct a few tests. For example, if the RSSI value is lower in the passenger compartment than in the trunk, the seat is likely in the trunk. As a result, you can prioritize your search for vehicles – first, those with a child seat in the passenger compartment, and then those with a child seat in the trunk. This approach ensures that you efficiently match passengers with appropriate vehicles and drivers.
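For instance, reusing the beacon/data/last_values call shown earlier (the hash and tracker IDs below are placeholders), you could ask which of your vehicles currently report a beacon:
curl -X POST 'https://api.navixy.com/v2/beacon/data/last_values' \
-H 'Content-Type: application/json' \
-d '{"hash": "59be129c1855e34ea9eb272b1e26ef1d", "trackers": [10181654, 10181655], "skip_older_than_seconds": 1200}'
Any entry in the response whose hardware_id matches one of your child-seat beacons tells you which vehicle (tracker_id) that seat is currently in.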
Agricultural machinery
Suppose your client has agricultural machinery that can be connected to various equipment. How can you track which tractor is using a seeder and which has a plow? This information will help you understand the frequency and extent of tool usage, and also determine their current location. This way, workers can spend more time working in the field rather than searching for equipment. To achieve this, install devices on tractors and combines, as well as in tool storage areas. Place one BLE beacon on each tool in a secure spot where it is difficult to remove, preventing it from getting lost during work. Next, to determine how long the tools have been in use, query the beacon/read API call. The information from the response will be helpful, just like with the trailers in our detailed example. To determine the location of a specific tool, query beacon/last_values with a search for beacons to identify where and on which device the tool is installed. This approach ensures efficient tracking and utilization of your agricultural equipment, ultimately increasing productivity.
Use on construction sites
Construction sites often have numerous tools and expensive equipment. While installing a beacon for tracking purposes is beneficial, another concern arises – how can you ensure that the equipment is tracked frequently, and that the GPS tracker doesn't run out of power? To monitor the usage and location of the equipment, BLE beacons can also come in handy.
The solution for construction sites can be similar to that of agricultural machinery – install devices on the machinery as well as on storage sites. This approach allows you to effectively track your valuable equipment, ensuring that it's being used efficiently and minimizing the risk of loss or misplacement. By keeping a close eye on your tools and machinery, you can optimize productivity at the construction site.
Indoor tracking
You can effectively track items indoors using the platform and BLE tags. All you need to do is install GPS devices in different parts of the warehouse or building and tag the objects you want to track. Here are a few examples:
• Tracking employees in various areas of a warehouse or store: This allows you to know which area an employee is in or how many sales assistants are near the information desk. Having this information helps improve efficiency and ensures that staff members are where they need to be.
• Tracking goods or machinery in different areas of the warehouse: Knowing the location of goods or equipment saves time, as you don't have to search for them throughout the warehouse. This streamlines the retrieval process, making your operations more efficient.
Tracking goods with BLE beacons
Utilizing BLE beacons for tracking can greatly benefit transport companies by allowing them to determine which truck is carrying a specific pallet of goods at any given moment. This method not only enables the tracking of goods' paths but also helps calculate transport costs more accurately.
By adopting this innovative approach, transport companies can enhance their operations, making them more efficient and precise. This ultimately leads to better service for clients and more streamlined business processes.
Last update: August 8, 2023
1. Angles to Directions
Starting with a direction for one line, directions of the other connected lines can be computed from the horizontal angles linking them. The process of addition or subtraction is dependent on the type of horizontal angle (interior, deflection, etc), turn direction (clockwise or counterclockwise), and direction type (bearing or azimuth).
a. Examples - Bearing
Example 1
The bearing of line GQ is S 42°35' E. The angle right at Q from G to S is 112°40'. What is the bearing of the line QS?
Sketch:
Add meridian at Q and label angles:
At Q, the bearing to G is N 42°35' W.
Subtracting 42°35' from 112°40' gives the angle, β, from North to the East for line QS.
Bearing QS = N 70°05' E
Example 2
The bearing of line LT is N 35°25' W. The angle left at T from L to D is 41°12'. What is the bearing of the line TD?
Sketch:
Add meridian at T and label angles
At T the bearing TL is S 35°25' E.
The bearing angle, σ, is 35°25' + 41°12' = 76°37'
Bearing TD = S 76°37' E.
b. Examples - Azimuth
Example 1
The azimuth of line WX is 258°13'. At X the deflection angle from W to L is 102°45' L. What is the azimuth of line XL?
Sketch:
A deflection angle is measured from the extension of a line. The azimuth of the extension is the same as that of the line. To compute the next azimuth, the deflection angle is added directly to the previous azimuth.
Because this is a left deflection angle, you would add a negative angle.
Add meridian at X:
Azimuth XL = 258°13' + (-102°45') = 155°28'
Example 2
The azimuth of line BP is 109°56'. The angle right at P from B to J is 144°06'. What is the azimuth of line PJ?
Sketch:
At P, add the meridian and extend the line BP
To get the azimuth of line PB: 109°56' + 180°00'= 289°56'.
Since 144°06' is to the right (+), add it to the azimuth of PB to compute azimuth of PJ: 144°06' + 289°56' = 434°02'
Why is the azimuth greater than 360°? Because we've gone past North.
To normalize the azimuth, subtract 360°00': 434°02' - 360°00' = 74°02'
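In general, the pattern these examples follow is: for an angle turned at a station (like the 144°06' above), the azimuth of the next line equals the back azimuth of the previous line (the forward azimuth ± 180°00') plus the angle if it is turned right, or minus the angle if it is turned left, normalized by adding or subtracting 360°00' as needed. Deflection angles, as in Example 1, are instead added to (right) or subtracted from (left) the forward azimuth directly.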
2. Directions to Angles
Given directions of two adjacent lines, it is a simple matter to determine the angle between the lines.
a. Example - Bearings
The bearing of line HT is N 35°16' W , the bearing of line TB is N 72°54' E. What is the angle right at T from B to H?
Sketch:
Label the back-direction at T and angle to be computed, δ.
Based on the sketch, the desired angle is what’s left over after both bearing angles are subtracted from 180°00'.
δ = 180°00' - (72°54' + 35°16') = 71°50'
b. Example - Azimuths
The azimuth of line MY is 106°12', the azimuth of line YF is 234°06'. What is the angle right at Y from F to M?
Sketch:
Label the back-azimuth at Y and angle to be computed, ρ.
ρ = 286°12' - 234°06' = 52°06'
Get Started With GraphQL In ASP.NET Core With HotChocolate
Nouman Rahman · Mar 15, 2022
Table of contents
• Why Use GraphQL
• Create A New ASP.NET Core Application
• Add GraphQL To ASP.NET Core Application
• Executing a query
• Next Steps
Why Use GraphQL
GraphQL is used to build APIs. It's like REST, but the reason to use GraphQL instead of REST is that in GraphQL you just expose the data, and what the user needs from it is up to them.
Example
Let's just take an example of a simple REST vs GraphQL API
We have a REST endpoint that returns an array of posts from the database (not a real endpoint):
https://api.programmingfire.com/posts/
Output:
[
{
"title": "post title 1",
"description": "post description 1",
"createdAt": 2021-09-13
},
{
"title": "post title 2",
"description": "post description 2",
"createdAt": 2021-09-13
},
{
"title": "post title 3",
"description": "post description 3",
"createdAt": 2021-09-13
},
]
But let's just say we only want some of the fields, like just the title of each post. With the REST endpoint the user can't do that, but with GraphQL you can. Let me show you:
Input:
query {
posts {
title
}
}
Output:
{
"data": {
"posts": {
"data": [
{
"title": "post title 1"
},
{
"title": "post title 2"
},
{
"title": "post title 3"
}
]
}
}
}
As You Can See We Only Get Title Of Our Posts
Create A New ASP.NET Core Application
We Will Start By Creating A Empty ASP.NET Core Application Using The dotnet CLI. To-Do That Open The Terminal In An Empty Directory And Run The Following Commands:
# Create A New Solution File
dotnet new sln
# Create A New Empty Web API Project
dotnet new web -o GraphQLApi
# Add That To Our Solution
dotnet sln add GraphQLApi
Now that our project is done, we can test it:
# Run The GraphQLApi Project
dotnet run --project GraphQLApi
Now navigate to localhost:5160 (or whatever URL it shows you) to see your "Hello World" minimal API.
Add GraphQL To ASP.NET Core Application
Add the HotChocolate.AspNetCore package
This Package Includes Everything That's Needed To Get Your GraphQL Server Up And Running. To Install It Run The Following Command:
dotnet add GraphQLApi package HotChocolate.AspNetCore
Define the types
Next, we need to define the types our GraphQL schema should contain. These types and their fields define what consumers can query from our GraphQL API. For starters, we can define two object types that we want to expose through our schema.
public class Book
{
public string Title { get; set; }
public Author Author { get; set; }
}
public class Author
{
public string Name { get; set; }
}
Now We Need To Define A Query Type That Exposes The Types We Have Just Created Through A Query In GraphQL.
public class Query
{
public Book GetBook() =>
new Book
{
Title = "Learn GraphQL For ASP.NET Core",
Author = new Author
{
Name = "Nouman Rahman"
}
};
}
The field in question is called GetBook, but the name will be shortened to just book in the resulting schema.
Add GraphQL services
Next, we need to add the services required by Hot Chocolate to our Dependency Injection container.
builder.Services
.AddGraphQLServer()
.AddQueryType<Query>();
Now that we've added the necessary services, we need to expose our GraphQL server at an endpoint. Hot Chocolate comes with an ASP.NET Core middleware that is used to serve up the GraphQL server.
app.MapGraphQL();
And this is it - you have successfully set up a Hot Chocolate GraphQL server! 🚀
Executing a query
First off we have to run the project.
dotnet run --project=GraphQLApi
If you have set up everything correctly, you should be able to open localhost:5000/graphql (the port might be different for you) in your browser and be greeted by the GraphQL IDE: Banana Cake Pop.
Banana Cake Pop IDE
Next click on "Create document". You will be presented with a settings dialog for this new tab, pictured below. Make sure the "Schema Endpoint" input field has the correct URL under which your GraphQL endpoint is available. If it is correct you can just go ahead and click the "Apply" button in the bottom right.
Banana Cake Pop Connection Settings
Now you should be seeing an editor like the one pictured below. If your GraphQL server has been correctly set up you should be seeing a green "online" in the top right corner of the editor.
Banana Cake Pop Editor
The view is split into four panes. The top-left pane is where you enter the queries you wish to send to the GraphQL server - the result will be displayed in the top-right pane. Variables and headers can be modified in the bottom left pane and recent queries can be viewed in the bottom right pane.
Okay, so let's send a query to your GraphQL server. Paste the below query into the top left pane of the editor:
query {
book {
title
author {
name
}
}
}
To execute the query, simply press the "Run" button. The result should be displayed as JSON in the top-right pane as shown below:
{
"data": {
"book": {
"title": "Learn GraphQL For ASP.NET Core",
"author": {
"name": "Nouman Rahman"
}
}
}
}
Congratulations, you've built your first Hot Chocolate GraphQL server and sent a query using the Banana Cake Pop GraphQL IDE 🎉🚀
Next Steps
Learn more about HotChocolate in ASP.NET
Fetch data in your Blazor applications using Strawberry Shake
FLEX - How to Download Artwork When You Encounter an Error
The download artwork option can sometimes result in an error. Here's how to work around this!
1. Log into your AOEU Account.
2. Visit the FLEX Lesson Plan you are interested in.
3. Click Download Artwork.
4. Navigate to your Finder or Downloads folder.
5. Locate the Artwork you have downloaded.
6. Delete the letters and numbers you see AFTER .jpg.
7. Confirm the change.
8. Now, you will be able to open the jpg image and use it however you wish.
Calculating a circular movement of a light source
Hi, I was trying to do this but didn’t quite figure it out.
If I want a 3d-point to move exactly within a circular path. xy, xz, yz, and beyond, I would have to calculate waveforms that correspond to each other right? And if I want the points to differ from straight dimensional principles, like a 45 degree xy and a full 180 degree circular movement it would become more complex.
Is there a way of solving this more easily?
Hi, @bnvisuals.
If I want a 3d-point to move exactly within a circular path. xy, xz, yz, and beyond, I would have to calculate waveforms that correspond to each other right? And if I want the points to differ from straight dimensional principles, like a 45 degree xy and a full 180 degree circular movement it would become more complex.
Two ways you can do this:
1. Use Vuo 0.8.0’s new Calculate node: cos(time) for the X position, and sin(time) for the Y position. But, as you noted, this will become more complex if you want movement on something other than the XY/XZ/YZ planes.
2. Use hierarchical transforms. In the attached example composition, I’ve moved the sphere off-center (the orange Make 3D Transform node). Then, I’m using Combine 3D Objects to apply a second transformation to it — a rotation created using the violet Make Quaternion from Angle node. You can feed any 3D vector into the axis port, and it’ll orbit around that axis.
Edit 2016.02.22: Updated composition for Vuo 0.9+.
orbit.vuo (4.06 KB)
Thanks @smokris! This is really helpful!
Hi again @smokris,
Had a go at the document you posted. Having a little hard time moving the light with this principle.
It is moved through xyz and cannot be rotated, it seems. Attached a first try here; can you tell me if I'm doing anything wrong?
orbitX.vuo (4.79 KB)
I may misunderstand the question, but I made this example that shows the light moving in a circular path in x and y dimensions by using the Calculate node. It didn’t look like the setup in orbitX was moving the light, it looked like it was moving the object.
orbit circular path xy.vuo (6.88 KB)
@george_toledo and @smokris. I've recently had the time to look more thoroughly at this and understand it better now. I've also figured out how to apply the "Quaternion" to light movement, but with the example @smokris showed it's only for rotational purposes. The problem is that I need the light source's position at any time to be able to use its xyz-position for further placement. With @george_toledo's solution this is easy, but I would really like the path of the light to swing out of a 2D path in 3D. Any ideas?
orbit_qaternion_no_xyz_pos.vuo (5.19 KB)
@bnvisuals, regarding making the light swing from a 2D plane into 3D — you could do that by applying multiple transformations to the same object. For example, start with your orbit_qaternion_no_xyz_pos.vuo, and take the output of Combine 3D Objects and connect it into another Combine 3D Objects node, with another Make Quaternion from Angle — but this time give the latter node a different axis vector.
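For anyone who wants the underlying math rather than nodes, below is a rough Python/NumPy sketch of a point orbiting an arbitrary axis via Rodrigues' rotation formula, which is the same idea the quaternion approach expresses. The function name and the NumPy dependency are just for illustration.

import numpy as np

def orbit_point(radius, axis, angle):
    """Return a 3D point on a circle of the given radius around `axis`."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    # Pick any vector not parallel to the axis and make it perpendicular.
    ref = np.array([1.0, 0.0, 0.0]) if abs(k[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    v = ref - np.dot(ref, k) * k
    v = radius * v / np.linalg.norm(v)
    # Rodrigues' rotation formula: rotate v around k by `angle`.
    return (v * np.cos(angle)
            + np.cross(k, v) * np.sin(angle)
            + k * np.dot(k, v) * (1 - np.cos(angle)))

# Feeding `angle` with time gives the light's xyz position at any moment.
print(orbit_point(1.0, axis=(1, 1, 0), angle=0.5))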
Tape or Disk for Backup and Data Recovery?
Data backup and archiving can be a waking nightmare: how best to balance the demands for immediate access against the equally significant need for security and reliability? Loss of data is one of those events that can quickly change the IT professional's life from one where they receive plaudits for how well the systems are running to one where their entire job may be under threat.
What is the best approach to use? Are disk-based fast-access systems a better solution than tapes and tape libraries, or are the more traditional data backup and data recovery methods a better bet for long-term data security? Every technology has its exponents and its detractors. Tape is seen by many as slow and rigid, whereas disk-based systems provide a convenient, easy-to-operate backup system with the ability to add further features, such as de-duplication, that require a dynamic filing system.
Add to this the current price of hard disks: a 1.5TB disk doesn't cost that much more than a 1.6TB LTO-4 tape (and the tape capacity is based on typical data compressibility; the native capacity is 800GB), so disk isn't the expensive cousin any more. So does this mean that tape is going the way of the Dodo and that the future is disk based? The question to ask is "what is the purpose of our backup system?".
Is it convenience?
A system which is easy to use and to manage is operationally a better bet than one that is cumbersome or complicated. It also means that data actually gets backed up; even the most robust system falls apart if no one uses it. So if you have users with laptops who can quickly kick off a backup over the internet without real effort, then it will happen, and you are far less likely to find yourself at the mercy of a data recovery company.
Is it manageability?
The drawback to ease of use is overuse and abuse. Make life too easy for people and they'll back everything up without any thought, and you end up with a nightmare. Get the policies right, though, and all should be well. With a dynamic filing system you can implement de-duplication and single-instance storage so that the actual space requirement is minimised.
Does it provide business continuity?
Again, in most cases the disk-based system wins over the other options: data is effectively online, or at least near-line. The act of restoring data following an accidental deletion or a corruption is not too arduous, and does not involve several days of nagging the IT department before the data is back in place.
How to Forecast in Tableau: Top 2 Ways Explained
Ready to create a forecast in Tableau? Don't worry, it's easy.
To create a forecast in Tableau:
1. Start by connecting to your data source and opening a new worksheet.
2. Drag the field you want to forecast onto the Rows shelf.
3. Click on the Analytics pane, select Forecast, and adjust the settings as needed.
You can also use the FORECAST() function to create forecasts in Tableau. This function uses linear regression to predict future values based on historical data.
But wait, there’s more.
In this article, we’ll discuss how to create forecasts, and we’ll explore 2 different forecasting methods, plus provide a step-by-step guide to help you make accurate predictions in your visualizations.
Let’s get started!
2 Ways to Forecast Data in Tableau
Tableau offers multiple methods for forecasting data, giving you the flexibility to choose the most suitable approach based on your data and analytical needs.
In this section, we’ll explore 2 primary ways to forecast data in Tableau:
1. Forecasting from a visualization
2. Forecasting from the Analysis tab
1. How to Forecast From a Visualization
Tableau’s forecast models capture the future trends in your data.
Follow the steps below to forecast data in Tableau:
Step 1
Start by connecting Tableau to your data source. Build a visualization that includes the data you want to forecast. For example, a line chart showing historical sales.
Line chart of sales over time
Step 2
Right-click on your visualization, navigate to forecast, and then select Show Forecast.
Creating the forecast
When you click Show Forecast, Tableau will automatically create the forecast for you. Your visualization will look something like the following:
Forecast created in the visualization in Tableau
This is the most simple way of creating a forecast in Tableau. However, by default, Tableau does exponential smoothing and to gain deeper insights, you may need to manually change forecast settings.
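To give a feel for what an exponential-smoothing forecast does, here is a deliberately simplified Python sketch of level-only simple exponential smoothing; Tableau's actual models also handle trend and seasonality, and the function name and numbers below are made up for illustration.

def simple_exponential_smoothing(series, alpha=0.5, horizon=3):
    # Smooth the level: each new observation is blended with the running level.
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    # A level-only model produces a flat forecast from the final level.
    return [level] * horizon

print(simple_exponential_smoothing([100, 120, 115, 130, 128]))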
2. How to Forecast From The Analysis Tab
If you’d like to make adjustments to your forecasts when creating forecasts in Tableau, you can use the forecasting feature from the Analysis tab.
Follow the steps given below to forecast in Tableau using this method:
Step 1
Create a time series chart by dragging your date dimension to the Columns shelf and your forecast measure (the numeric field) to the Rows shelf.
Time series chart of sales over time forecasting
Step 2
Once your time series chart is ready, go to the menu bar at the top of the screen and click on the Analysis tab.
In the drop-down menu under Analysis, find and click on Forecast. Then, select Show Forecast.
Creating the forecast from the main Tableau window
Tableau will automatically add a forecast to your time series chart based on its built-in forecasting model.
Forecast created through analysis tab
Step 3
To customize the forecast, navigate to Analysis > Forecast > Forecast Options.
Tableau navigation to forecast options menu explained
In the Forecast Options window, you can adjust several settings, such as:
1. Forecast Length: Define how many periods into the future you want to extend your forecast.
2. Confidence Intervals: Choose the level of confidence or prediction intervals, which show the range of possible values the actual future values are likely to fall within.
3. Trend and Seasonality: Decide if you want Tableau to automatically detect and adjust for trend and seasonality in your data, or if you want to set these manually.
Forecast settings
You can change these settings according to your requirements. When done, click OK.
To understand your forecast, you can click on the Describe Forecast option, and Tableau will produce a summary of your model.
After setting up your forecast diagram, Tableau will display it on your chart, usually with a different color or shading to distinguish forecasted values from historical data.
Final Thoughts
In the age of big data, making predictions has become essential to stay ahead. Fortunately, Tableau makes forecasting a breeze. Whether you’re a data analyst, business professional, or an aspiring data scientist, Tableau’s forecasting and predictive analytics capabilities can empower you to make data-driven decisions with confidence.
By mastering the art of forecasting in Tableau, you can unlock valuable insights from your data, identify an evolving trend, and anticipate future outcomes.
As you embark on your forecasting journey, remember that practice and experimentation are key.
Frequently Asked Questions
In this section, you will find some frequently asked questions you may have when forecasting in Tableau.
How to create a forecast using a date field?
To create a Tableau forecast using a date field, you can follow these steps:
1. Connect to your data source and open a new worksheet.
2. Drag the date field you want to forecast onto the Columns shelf.
3. Drag the measure you want to forecast onto the Rows shelf.
4. Click on the Analytics pane, select Forecast, and adjust the settings as needed.
How to forecast data by month in Tableau?
To forecast data by month in Tableau, follow these steps:
1. Connect to your data source and open a new worksheet.
2. Drag the date field you want to forecast onto the Columns shelf.
3. Right-click on the date field, choose Exact Date, and then Month.
4. Drag the measure you want to forecast onto the Rows shelf.
5. Click on the Analytics pane, select Forecast, and adjust the settings as needed.
What is the difference between forecasting and trend analysis in Tableau?
Forecasting in Tableau involves predicting future values based on historical data, while trend analysis focuses on identifying and visualizing patterns or tendencies within the data.
Forecasting uses statistical models to make predictions, whereas trend analysis is more about understanding the direction and magnitude of changes over time.
What is the process for creating a forecast model in Tableau?
To create a forecast model in Tableau, follow these steps:
1. Connect to your data source and open a new worksheet.
2. Drag the date field you want to forecast onto the Columns shelf.
3. Right-click on the date field, choose Exact Date, and then select the appropriate date part (e.g., Year, Quarter, Month).
4. Drag the measure you want to forecast onto the Rows shelf.
5. Click on the Analytics pane, select Forecast, and adjust the settings as needed.
6. Tableau will automatically create a forecast model based on the selected measure and date part.
Supremum of function
1. Jan 26, 2012 #1
Let {f_i}_{i E I} be a family of real-valued functions R^n -> R.
Define a function f(x) = sup_{i E I} f_i(x).
1) I'm having some trouble understanding what the sup over i E I of a function of x means? The usual "sup" that I've seen is something like
sup_{x E S} f(x)
for some set S.
But they instead have i E I there which confuses me.
2) Is the following true?
sup_{i E I} [c * f_i(x)] = c * sup_{i E I} [f_i(x)]
In other words, can we pull a constant out of the sup? If so, how can we rigorously prove it?
Any help is appreciated!
3. Jan 26, 2012 #2
chiro
Hey kingwinner.
I'm pretty sure in this context E means (an element of). Basically I is a set and i is talking about referencing an element of that set. In this context, you have a collection of functions. Usually we denote things like this as a collection of whole numbers but we generalize the notation by using sets.
As far as the second question goes, the answer should be yes but only because you are doing a linear transformation. Also the other thing is that c needs to be a positive real number (> 0) otherwise you can't do this. Think about what happens when everything is multiplied by a negative number or when everything is multiplied by zero.
If you did not do a simple linear transformation like above (in terms of multiplying and adding constants or constant functions) then this doesn't need to hold. Consider if you had functions all with negative values less than one and squaring the function. What do you think would happen to the supremum?
Sometimes it's helpful to draw a few diagrams.
4. Jan 26, 2012 #3
Yes, for i E I, the E means "an element of".
1) OK, so
f(x)
=sup fi(x)
i E I
means that for every fixed x, we take the sup over i E I.
2) Suppose c is a constant >0. How can we rigorously prove from the definition of sup that
c*sup [fi(x)]
i E I
=sup [c * fi(x)] ?
i E I
It's still not clear to me...
Thanks for any help!
Last edited: Jan 26, 2012
5. Jan 26, 2012 #4
micromass
Well, to rigorously prove that, you need to prove two things:
1) For all i holds that [itex]cf_i(x)\leq c\cdot\sup_{i\in I}{f_i(x)}[/itex].
2) If for all i holds that [itex]cf_i(x)\leq M[/itex], then [itex]c\cdot\sup_{i\in I}{f_i(x)}\leq M[/itex].
6. Jan 26, 2012 #5
But this only shows that sup_{i E I} [c * f_i(x)] ≤ c * sup_{i E I} [f_i(x)].
How about the other direction?
7. Jan 26, 2012 #6
micromass
No, this shows equality. (1) shows that [itex]\sup_{i\in I}{cf_i(x)}\leq c\sup_{i\in I}{f_i(x)}[/itex].
8. Jan 26, 2012 #7
1) This implies sup_{i E I} [c * f_i(x)] ≤ c * sup_{i E I} [f_i(x)].
This proves one direction.
But I don't follow your second part. What is M equal to? And why would this imply c sup f ≤ sup c f ?
Thanks.
9. Jan 26, 2012 #8
micromass
M is a number. Recall the definition of a supremum: a supremum is the smallest upper bound. So what I did is establish in (1) that it is an upper bound. And in (2) I establish that it's the smallest upper bound. Indeed: M is an arbitrary upper bound and I prove that M is greater than [itex]\sup{c f_i(x)}[/itex]. This shows that M is the smallest upper bound.
10. Jan 26, 2012 #9
I see.
But I have some trouble seeing why the above is true.
I can understand the following implication
If for all i holds that [itex]cf_i(x)\leq M[/itex], then [itex]\sup_{i\in I}{c f_i(x)}\leq M[/itex]. But then why can we take the c out of the sup on the LHS? I think that's what we're actually trying to prove?
11. Jan 26, 2012 #10
micromass
Try to do it this way:
If [itex]cf_i(x)\leq M[/itex], then [itex]f_i(x)\leq \frac{M}{c}[/itex] (if c is nonzero!!). Now take the supremum of both sides.
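Spelling that hint out, a short sketch: for every i we have [itex]cf_i(x)\leq \sup_{i\in I}{cf_i(x)}[/itex], so [itex]f_i(x)\leq \frac{1}{c}\sup_{i\in I}{cf_i(x)}[/itex]; taking the supremum over i gives [itex]\sup_{i\in I}{f_i(x)}\leq \frac{1}{c}\sup_{i\in I}{cf_i(x)}[/itex], i.e. [itex]c\sup_{i\in I}{f_i(x)}\leq \sup_{i\in I}{cf_i(x)}[/itex]. Together with the reverse inequality from (1), this gives equality for any constant c > 0.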
12. Jan 26, 2012 #11
Think "vertical" instead of horizontal.
You have a collection of functions f_1, f_2, f_3, ... f_i, ...
Think of them all superimposed on the same set of coordinate axes. So you have a bunch of function graphs on the plane.
Now for a given value of x, draw the vertical line through it. It hits EACH of the functions in one point: (x, f_1(x)), (x, f_2(x)), (x, f_3(x)), ...
Now the set of real numbers f_1(x), f_2(x), f_3(x), ... may happen to have a sup. For some values of x the sup will be defined; for other values it might not be (it might be an unbounded sequence. Then you could say the sup is +infinity if you are working in the extended real numbers).
You can define a new function f by f(x) = sup{f_i(x)} as i ranges over the index set. This new function f is well-defined for any x for which the sup exists for that particular x.
That's how to think of this.
(ps) I noticed that the domain is R^n. In that case you should still visualize the x-axis as the "domain axis," to coin a phrase. But here it's not literally true. The domain is a point in n-space so we can't visualize the graph as easily. But you can still see that the value of each function is a real number -- the range is R. So the same logic as before applies. For a fixed value of the domain, the SET of values of all the f_i may have a sup; and if it does, you can define f at that point in the domain as that sup.
Last edited: Jan 26, 2012
13. Jan 27, 2012 #12
Thank you very much.
What will you learn?
In this tutorial, you will delve into the intricacies of if statements in Python. You will uncover why certain conditions are recognized as true even when they may seem false at first glance. By understanding truthy and falsy values in Python, you will gain clarity on how to predict and control the outcomes of your if conditions effectively.
Introduction to Problem and Solution
When working with if statements in Python, it’s common to encounter scenarios where a condition is evaluated as true, contrary to initial expectations. This can lead to confusion and unexpected program behavior. The key lies in comprehending how Python interprets truthy and falsy values. By dissecting examples and understanding the nuances, you can write precise conditional statements that function as intended.
Our journey involves demystifying what constitutes a “truthy” or “falsy” value in Python’s context. Through practical examples highlighting common pitfalls, we will explore how slight variations can significantly impact the evaluation of if statements.
Code
# A perplexing if statement
value = 0 # Try changing this to other values like '', [], {}, None, etc.
if value:
print("Value is considered true!")
else:
print("Value is considered false!")
Explanation
To grasp the behavior of if statements in Python, it’s essential to understand how different types of values are interpreted by the language:
Data Type | Truthiness
Numbers | Non-zero values are True; 0 is False
Collections | Empty collections are False; non-empty ones are True
Strings | Empty string ('') is False; non-empty strings are True
NoneType | None is always False
By altering the value assigned in our code snippet, you witness firsthand how these rules dictate whether an if block executes or not.
1. What makes a value “truthy” or “falsy”? Values that evaluate to True are termed “truthy”, while those resulting in False are labeled “falsy”. This classification depends on their type and content.
2. How does Python determine truthiness? Python employs internal rules for each data type to determine whether a given value should be considered True or False during boolean evaluations.
3. Can I override these evaluations? While built-in evaluations cannot be changed (e.g., making an empty list truthy), custom classes can define their own behavior through special methods like __bool__() (see the sketch after this list).
4. Is there a difference between == True/False checks and implicit boolean evaluation? Yes! Using == explicitly compares with True or False, whereas implicit evaluation relies on inherent truthiness without direct comparison.
5. Do all programming languages handle truthiness similarly? No; each programming language defines its unique set of rules for evaluating expressions as true or false which may differ from Python's approach.
6. Are there exceptions where these rules don’t apply? These rules consistently apply across standard types but remember custom objects can define their own logic for being evaluated as true or false via special methods.
7. Can logical operators affect truthiness? Logical operators (and, or, not) interact with operands based on their truthiness but do not alter inherent values themselves.
8. How do I check if an object has specific characteristics rather than just being 'Truthy'/'Falsy'? Utilize explicit comparisons or functions/methods designed for such checks instead of relying solely on implicit boolean evaluation.
9. Can using implicit evaluation improve my code readability? It can reduce verbosity by removing unnecessary comparisons, but use caution: clarity shouldn't be sacrificed for brevity.
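As a sketch of the custom-class case from point 3 (the class and attribute names here are made up for illustration):

class Basket:
    def __init__(self, items=None):
        self.items = list(items or [])

    def __bool__(self):
        # An empty basket is falsy; a non-empty one is truthy.
        return len(self.items) > 0

if Basket():
    print("truthy")
else:
    print("falsy")              # prints "falsy"

if Basket(["apple"]):
    print("truthy")             # prints "truthy"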
Conclusion
Mastering the concept of truthy and falsy values in Python equips you with the ability to navigate unexpected conditional behaviors efficiently within your programs. By leveraging the insights gained here alongside best practices, you ensure smoother development processes for future projects involving similar constructs.
Make the given changes in the indicated examples of this section and then solve the resulting problems.
In Example 7, change the − to + and then find θ.
Data from Example 7
Given that cos θ = −0.1298, find θ for 0° ≤ θ < 360°.
Because cosθ is negative, θ is either a second-quadrant angle or a third-quadrant angle. Using 0.1298, the calculator tells us that the reference angle is 82.54°. For the second-quadrant angle, we subtract 82.54° from 180° to get 97.46°. For the third-quadrant angle, we add 82.54° to 180° to get 262.54°. See Fig. 8.14. Therefore, the two solutions are 97.46° and 262.54°.
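Following the same method for the modified problem (a sketch of the solution): with cos θ = +0.1298, the reference angle is still 82.54°, but the cosine is now positive, so θ is a first-quadrant or fourth-quadrant angle. For 0° ≤ θ < 360°, the two solutions are therefore θ = 82.54° and θ = 360° − 82.54° = 277.46°.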
Related Book
Basic Technical Mathematics
ISBN: 9780137529896
12th Edition
Authors: Allyn J. Washington, Richard Evans
Question Details
Chapter # 8- Trigonometric Functions of Any Angle
Section: Exercises 8.2
Problem: 2
Posted Date: March 25, 2023 21:12:28
TO INFINITY AND BEYOND
By Maria Athanasopoulou (A Class)
infinity
What is infinity? Is it a number? It is nowhere on the number line. But we often say things like an infinite number of something. As far as we know, infinity could be real. The universe may be infinite in size and flat extending out forever and ever without end, beyond even the part we can observe or ever hope to observe. That’s exactly what infinity is; not a number per se, but a size; the size of something that doesn’t end.
Infinity is not the biggest number; on the contrary, it is the total of numbers that exist. But we have divided infinity in different kinds. The smallest kind is called countable infinity: for instance, the number of hours in forever or the total of whole/natural numbers that exist. Sets like these are unending but they are countable. It means that you can count them from one element to the other in a finite amount of time. Even if that time is longer than you will ever live or the universe will exist, it is still finite. Uncountable infinity on the other hand, is literally bigger; too big to even count. For example, the set of real numbers, not just whole numbers, is called uncountably infinite. You literally cannot count even from zero to one in a finite amount of time by naming every real number in between. Think about it, you start with zero but what comes next? 0,0000000…. eventually we would imagine a 1 going somewhere at the end, but there is no end, we can always add another zero. Uncountability makes this set so much bigger than the set of whole numbers that between one and zero there are more numbers than whole numbers on the entire endless number line.
kinds of infinities
The mathematical proof of this is called the Diagonal Argument and is fairly simple. First of all, you need to understand that when it comes to infinity two sets are the same size when there is a one-to-one correspondence. Which brings us to this: imagine listing every number between zero and one. Since they are uncountable and can't be listed in order, let's imagine randomly generating them forever with no repeats. Each number we generate can be paired with a whole number. If there is a one-to-one correspondence between the two, that would mean that countable and uncountable sets are the same size. But we can't do that. Even if the list of real numbers goes on forever, forever isn't enough. If we go diagonally down our endless list of real numbers and take the first decimal of the first number, the second of the second number, the third of the third and so on, and add one to each (subtracting one if it happens to be a nine), we can generate a new real number that is obviously between zero and one; but since we have defined it to be different from every number on our endless list in at least one place, it's clearly not contained in the list. In other words, every single whole number in the entire infinity of them is matched up and we can still come up with more real numbers.
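To make the construction concrete, here is a small illustrative Python sketch of the diagonal step (necessarily finite, unlike the real argument; the digit strings are arbitrary examples):

# Each string stands for the decimal expansion of a number between zero and one.
listed = ["1415926535", "7182818284", "1414213562", "5772156649"]

diagonal = ""
for i, digits in enumerate(listed):
    d = int(digits[i])
    # Change the i-th digit: add one, or subtract one if it is a nine.
    diagonal += str(d - 1 if d == 9 else d + 1)

print("0." + diagonal)  # differs from every listed number in at least one place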
diagonal argument
Here is something else that is true but counterintuitive. There is the same number of even numbers as there are even and odd numbers. At first that sounds ridiculous; clearly there are half as many even numbers as there are whole numbers. But that intuition is wrong. The set of whole numbers is denser, but every even number can be matched with a whole number. You will never run out of numbers of either set, so this one-to-one correspondence shows that both sets are the same size. So as it turns out, infinity divided by two is still infinity (∞/2 = ∞). Infinity plus one is also infinity (∞ + 1 = ∞). A good illustration of this is Hilbert's Paradox of the Grand Hotel.
even and odd numbers-same size infinities
In the 1920’s the German mathematician David Hilbert conducted a famous thought experiment. Imagine a hotel with a countably infinite number of rooms and a very hardworking night manager. One night the infinite hotel is completely full. Totally booked up with an infinite number of guests. A man walks into the hotel and asks for a room. Rather than turning him away the night manager decides to make room for him. He asks the guest in room number 1 to move to room 2, the guest in room number 2 to move to room 3 and so on. Every guest moves from room n to room n+1. Since there are infinite rooms there is a new room for each existing guest. This leaves room 1 open for the new customer. The process can be repeated for any finite number of new guests. If, say, a tour bus unloads 40 new people looking for rooms then every existing guest just moves from room number n to room number n+40; thus opening the first 40 rooms. But now an infinitely large bus with a countably infinite number of passengers pulls up to rent rooms. Countably infinite is the key. The infinite bus of infinite passengers perplexes the manager at first but he realizes there is a way to place each new person. He asks the guest in room 1 to move to room 2, the guest in room 2 to move to room 4, the guest in room 3 to room 6 and so on. Each current guest moves from room number n to room 2n, filling up only the infinite even numbered rooms. By doing this he has now emptied all of the infinitely many odd numbered rooms, which are then taken by the people filing off the infinite bus.
We, humans, are finite creatures. Our lives are small and can only scientifically consider a small part of reality. What’s common for us is only a sliver of what is available; we can only see so much of the electromagnetic spectrum, we can only delve so deep into extensions of space. Common sense applies to that which we can access. But common sense is just that: common. If total sense is what we want, we should be prepared to accept that we shouldn’t call infinity weird or strange. The results we have arrived at by accepting it are valid; true within the system we use to understand, measure, predict and order the universe. Perhaps the system still needs perfecting, but at the end of the day history continues to show us that the universe isn’t strange; we are.
Sources:
http://www.mathpages.com/home/kmath371.htm
https://simple.wikipedia.org/wiki/Hilbert’s_paradox_of_the_Grand_Hotel
Author aeros
Recipients aeros, asvetlov, yselivanov
Date 2020-11-19.00:18:23
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <[email protected]>
In-reply-to
Content
Regarding the example _get_loop():
```
def _get_loop(self):
loop = asyncio.get_running_loop()
if self._loop is None:
self._loop = loop
if loop is not self._loop: raise
if not loop.is_running(): raise
```
Would this be added to all asyncio primitives to be called anytime a reference to the loop is needed within a coroutine?
Also, regarding the last line "if not loop.is_running(): raise" I'm not 100% certain that I understand the purpose. Wouldn't it already raise a RuntimeError from `asyncio.get_running_loop()` if the event loop wasn't running?
The only thing I can think of where it would have an effect is if somehow the event loop was running at the start of `_get_loop()` and then the event loop was stopped (e.g. a loop in an alternative thread was stopped by the main thread while the alternative thread was in the middle of executing `_get_loop()`). But to me, that seems to be enough of an edge case to simplify it to the following:
```
def _get_loop(self):
loop = asyncio.get_running_loop()
if self._loop is None:
self._loop = loop
if loop is not self._loop: raise
```
(Unless you intended for the first line `loop = asyncio.get_running_loop()` to instead be using the private `asyncio._get_running_loop()`, which returns None and doesn't raise. In that case, the original would be good to me.)
Other than that, I think the approach seems solid.
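For reference, a runnable sketch of the pattern under discussion might look like the following (the mixin name and the exact exception message are illustrative, not final wording):
```
import asyncio

class _LoopBoundMixin:
    _loop = None

    def _get_loop(self):
        # Raises RuntimeError on its own if no event loop is running.
        loop = asyncio.get_running_loop()
        if self._loop is None:
            self._loop = loop
        if loop is not self._loop:
            raise RuntimeError(f'{self!r} is bound to a different event loop')
        return loop
```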
Symbolically solving Wikipedia Runge-Kutta example?
by honglin
Tags: rungekutta, solving, symbolically, wikipedia
#1
May7-12, 05:51 PM
The Wikipedia page for "Runge-Kutta methods"[1] gives the following example:
y' = tan y + 1
y(1) = 1
t in [1, 1.1]
Using a step-size of h = .025, this solution is found:
y(1.1) = 1.335079087
I decided to check this solution by solving symbolically. But my attempts to symbolically integrate only lead to more complicated equations.[2] So I'm wondering if this simple-looking DE actually has a symbolic solution?
Notes:
[1] wikipedia (dot) org/wiki/Runge%E2%80%93Kutta_methods
[2] For example,
y'(t) = tan(y(t)) + 1
y'(t)/(tan(y(t)) + 1) = 1
Let u = y(t), du = y'(t) dt
∫(du/(tan(u) - 1)) = ∫dt
I used the SAGE computer algebra system to evaluate the LHS to,
-1/2*u + 1/2*log(tan(u) - 1) - 1/4*log(tan(u)^2 + 1)
Not much help!
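As a quick numerical cross-check of the quoted value, here is a rough Python sketch of the classical RK4 method applied to this equation with the same step size (illustrative only):

import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

f = lambda t, y: math.tan(y) + 1
t, y, h = 1.0, 1.0, 0.025
for _ in range(4):          # four steps of 0.025 take t from 1 to 1.1
    y = rk4_step(f, t, y, h)
    t += h
print(y)                    # should land close to the quoted 1.335079087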
Linux process
Tue 31 January 2023
Being curious about BPF, I studied the source code of several programs from the BCC libbpf-tools. The BPF Performance Tools book helped me navigate BPF C code. For example, it explained that a BPF program has to use helpers because it can't access arbitrary memory (outside of BPF) and can't call arbitrary Linux kernel functions.
BPF has opened a door into the kernel for me, but I quickly realized that I don't know much about it. Take, for example, bpf_get_current_pid_tgid() and bpf_get_current_task() helpers. What are tgid, pid, and task in the function names? It turns out that tgid is what user space calls a process ID, and pid is what user space calls a thread ID.
u64 id = bpf_get_current_pid_tgid();
pid_t pid = (pid_t)id;
pid_t tgid = id >> 32;
struct task_struct *t;
t = (struct task_struct*)bpf_get_current_task();
I found the terminology a bit confusing, so I spent some time trying to clarify it for myself.
Task
A process is an instance of a program in execution that provides a program an illusion that it is the only one currently running with exclusive use of CPU and memory. A thread (thread of execution) is a unit of execution (sequence of machine instructions) that can be managed by an OS scheduler. Typically threads share the process's resources, e.g., memory and file descriptors.
The process and the thread are abstractions whose implementation depends on the OS. Linux implements those abstractions with "light weight processes" which can share resources with each other, thus in a way blending the process and the thread concepts. For example, ps shows the parca-agent Go program running as 8 light weight processes (also referred to as threads in the man page). As we can see, they all have a unique light weight process ID (LWP column) and the same process ID 93015 (PID column). Together they form the thread group with ID 93015, which acts as a whole, e.g., we can terminate it by sending a TERM signal: kill 93015. Note that the light weight process whose PID and LWP columns both contain 93015 is the one that was created first (the main thread).
﹩ ps -aL
PID LWP TTY TIME CMD
93013 93013 pts/0 00:00:00 sudo
93015 93015 pts/3 00:00:33 parca-agent
93015 93016 pts/3 00:00:24 parca-agent
93015 93017 pts/3 00:00:30 parca-agent
93015 93018 pts/3 00:00:00 parca-agent
93015 93019 pts/3 00:00:00 parca-agent
93015 93020 pts/3 00:00:00 parca-agent
93015 93021 pts/3 00:00:30 parca-agent
93015 93022 pts/3 00:00:32 parca-agent
94130 94130 pts/1 00:00:00 ps
Linux kernel maintains a task_struct for each light weight process. Therefore there should be 8 such structures with tgid = 93015 (thread group ID or PID column) and pid (LWP column) in a range from 93015 to 93022. The maximum PID value is 4194304, see PID_MAX_LIMIT and /proc/sys/kernel/pid_max.
struct task_struct {
// Possible states: running, interruptible (sleeping),
// uninterruptible, stopped, traced.
unsigned int __state;
// Pointer to kernel stack (each task has one),
// see https://www.kernel.org/doc/html/latest/x86/kernel-stacks.html.
// General-purpose registers (e.g., rax, rbx)
// used by a task in user mode are saved on
// the kernel stack before performing process switching.
void *stack;
// Points to the previous and to the next task_struct
// thus representing a list of all tasks in the system.
struct list_head tasks;
// Points to a structure that describes
// the current state of virtual memory.
struct mm_struct *mm;
// Possible exit states: zombie, dead.
int exit_state;
int exit_code;
// ID of this light weight process.
pid_t pid;
// ID of the thread group (PID column in ps).
pid_t tgid;
// Points to the task that created this task
// or to the init task if the parent no longer exists.
struct task_struct __rcu *real_parent;
// A list of all children created by this task.
struct list_head children;
// Points to the next and previous sibling tasks.
struct list_head sibling;
// Points to the process group leader (PGID),
// e.g., of "sleep 10 | sleep 20".
struct task_struct *group_leader;
// Executable name, excluding path.
char comm[TASK_COMM_LEN];
// Open file information (contains pointers to file descriptors).
struct files_struct *files;
// CPU-specific state of this task is stored here
// when the task is being switched out, i.e.,
// most of the CPU registers (es, ds, fs, gs, FPU registers),
// except the general-purpose registers.
// The sp0 and sp fields are the user and
// kernel stack pointers respectively.
struct thread_struct thread;
}
All task_struct structures currently present in the system are linked using a doubly linked list. When we run kill 93015 to terminate the 8 light weight processes by tgid, the kernel looks up the thread group's leader by ID in a hash table and then walks the list of the group. That applies at least to kernel v3; v6 seems to be using a radix tree, see pid.c, idr.c, ID allocation.
With namespaces each PID may have several values, with each one being valid in one namespace. Linux kernel has a pid structure that refers to individual tasks, process groups, and sessions. Check out #26779416 stackoverflow answer for more details.
/*
* struct upid is used to get the id of the struct pid, as it is
* seen in particular namespace. Later the struct pid is found with
* find_pid_ns() using the int nr and struct pid_namespace *ns.
*/
struct upid {
// The pid value.
int nr;
// The namespace this value is visible in.
struct pid_namespace *ns;
};
/*
* What is struct pid?
*
* A struct pid is the kernel's internal notion of a process identifier.
* It refers to individual tasks, process groups, and sessions. While
* there are processes attached to it the struct pid lives in a hash
* table, so it and then the processes that it refers to can be found
* quickly from the numeric pid value. The attached processes may be
* quickly accessed by following pointers from struct pid.
*
* Storing pid_t values in the kernel and referring to them later has a
* problem. The process originally with that pid may have exited and the
* pid allocator wrapped, and another process could have come along
* and been assigned that pid.
*
* Referring to user space processes by holding a reference to struct
* task_struct has a problem. When the user space process exits
* the now useless task_struct is still kept. A task_struct plus a
* stack consumes around 10K of low kernel memory. More precisely
* this is THREAD_SIZE + sizeof(struct task_struct). By comparison
* a struct pid is about 64 bytes.
*
* Holding a reference to struct pid solves both of these problems.
* It is small so holding a reference does not consume a lot of
* resources, and since a new struct pid is allocated when the numeric pid
* value is reused (when pids wrap around) we don't mistakenly refer to new
* processes.
*/
struct pid {
// Reference counter.
refcount_t count;
// The number of upids.
unsigned int level;
// Lists of tasks that use this pid.
struct hlist_head tasks[PIDTYPE_MAX];
struct hlist_head inodes;
struct upid numbers[1];
};
Address space
A process provides a program an illusion that it has exclusive use of the whole memory address space in the system using virtual memory abstraction.
Physical memory is organized as an array of bytes and it's partitioned into fixed-size blocks (usually 4KB depending on CPU architecture) called page frames. A virtual page can be:
• cached in DRAM — backed by a page frame, i.e., 4KB of data is already loaded to physical memory from disk
• uncached in DRAM — doesn't occupy physical memory yet, but is already associated with a 4KB part of a file on disk. Once some virtual address within this virtual page is accessed by the CPU causing a page fault exception (DRAM cache miss), the 4KB of data will be loaded from disk to memory.
• unallocated — NULL, doesn't point to physical memory or to disk
This mapping is stored in a data structure called page table. A memory management unit (MMU) relies on page tables to translate virtual addresses to physical addresses. Here is how a process's virtual address space could approximately look.
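As a toy illustration of the split that the MMU performs (assuming 4KB pages; the page table here is simulated with a plain Python dict, while real page tables are multi-level structures):

PAGE_SIZE = 4096  # 4KB pages

def translate(vaddr, page_table):
    # Translate a virtual address with a toy page table (virtual page number -> page frame number).
    vpn = vaddr // PAGE_SIZE      # virtual page number
    offset = vaddr % PAGE_SIZE    # offset within the page
    pfn = page_table.get(vpn)
    if pfn is None:
        raise RuntimeError("page fault: virtual page %d is not resident" % vpn)
    return pfn * PAGE_SIZE + offset

page_table = {0x400: 0x1a2}                   # one resident page
print(hex(translate(0x400123, page_table)))   # -> 0x1a2123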
_________________________
| page tables, |
| task and mm structs |
|-------------------------|
| kernel stack | ⬇️
|-------------------------| rsp (stack pointer in kernel mode)
| thread info struct |
|-------------------------|
| physical memory |
|-------------------------|
| kernel code & data |
|_________________________| top part is reserved for the kernel
| arg, env |
|-------------------------|
| stack for main thread | ⬇️
|-------------------------| rsp (stack pointer in user mode)
| ... |
|-------------------------|
| stack for thread 3 | ⬇️
|-------------------------|
| stack for thread 2 | ⬇️
|-------------------------|
| stack for thread 1 | ⬇️
|-------------------------|
| shared libraries |
|-------------------------|
| ... |
|-------------------------| brk (top of the heap)
| heap | ⬆️
|-------------------------|
| uninitialized data .bss |
|-------------------------| vm_end
| initialized data .data |
|-------------------------| vm_start
| code .text | thread 3 executing
| | main thread executing
| | thread 1 executing
| | thread 2 executing
| |
|-------------------------| 0x400000
| ... |
|_________________________| 0
The bottom part describes user space addresses of a process. The top part is a kernel virtual memory:
• process-specific structures such as page tables, task and mm structs, kernel stack, thread info structure.
• physical memory part which is identical for each process. Linux maps a set of virtual pages equal in size to DRAM to the corresponding set of physical pages to access any specific location in physical memory, e.g., to access page tables.
• kernel code and data which is identical for each process
Linux organizes the virtual memory as a collection of areas (segments) where an area is a chunk of related pages, e.g., code, data, heap, shared libraries, user stack segments. A process can create an arbitrary number of areas using the mmap function.
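From user space, for instance, a new file-backed area can be created with Python's mmap module; a minimal sketch (the file name is made up):

import mmap

with open("example.bin", "wb") as f:
    f.write(b"\0" * 4096)            # create a one-page file

with open("example.bin", "r+b") as f:
    # Map one page of the file into this process's address space.
    with mmap.mmap(f.fileno(), 4096) as mm:
        mm[0:5] = b"hello"           # write through the mapping
        print(mm[0:5])               # b'hello'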
The vm_area_struct structures keep track of virtual memory areas of a task. They can be reached via task->mm->mm_mt tree. The fields vma.vm_start and vma.vm_end point to the beginning and the end of an area.
// See https://elixir.bootlin.com/linux/v6.1.8/source/include/linux/mm_types.h#L512.
struct mm_struct {
struct {
// A tree to look up vm_area_struct by user address.
struct maple_tree mm_mt;
// Memory mapping.
unsigned long mmap_base;
// Points to a page global directory,
// i.e., to the first entry of the level 1 page table.
// The physical address of PGD in use is stored in the cr3 control register.
pgd_t *pgd;
// Addresses of text and data segments.
unsigned long start_code, end_code, start_data, end_data;
// Addresses of heap and stack segments.
unsigned long start_brk, brk, start_stack;
unsigned long arg_start, arg_end, env_start, env_end;
}
}
/*
* This struct describes a virtual memory area. There is one of these
* per VM-area/task. A VM area is any part of the process virtual memory
* space that has a special rule for the page-fault handlers (ie a shared
* library, the executable area).
*/
struct vm_area_struct {
// Points to the beginning of the area within vm_mm.
unsigned long vm_start;
// Points to the end of the area within vm_mm.
unsigned long vm_end;
// Points to the address space this area belongs to.
struct mm_struct *vm_mm;
// Access permissions of this VMA, e.g., read/write.
pgprot_t vm_page_prot;
// Describes whether the pages in the area are shared with other tasks
// or private to this task.
unsigned long vm_flags;
// The offset (within vm_file) in PAGE_SIZE units (number of pages).
unsigned long vm_pgoff;
// The file backing this mapping.
// It can be NULL, e.g., anonymous mapping (stack, heap, bss).
struct file *vm_file;
}
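These areas are what /proc/<pid>/maps shows from user space; each line roughly corresponds to one vm_area_struct (address range, permissions, file offset, device, inode, backing file). A quick way to look at them:

# Print this process's own memory areas.
with open("/proc/self/maps") as f:
    print(f.read())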
Linux can associate a virtual memory area with a contiguous section of a file, e.g., executable object file. That section (e.g., code segment) is divided into page-size chunks. When the CPU accesses a virtual address that is within some page's region, that virtual page (e.g., some content of an executable file) is loaded to the physical memory from disk. You can check out an example of a memory mapping in the context of CPU profiler.
An area can also be mapped (associated) to an anonymous file (it doesn't exist on disk). When the CPU accesses a virtual page within that area (e.g., heap or user stack), the kernel finds a free page in physical memory, zeroes it, and marks it as resident in the page table. If no free pages exist (run out of physical memory), then some dirty page gets swapped out to disk.
A file can be mapped as a shared object into areas of different processes. The memory mappings are unique for each process, but the underlying physical pages are the same. So if one process writes to its area, the changes are visible to other processes, and the file is updated on disk as well. Shared libraries are mapped as shared objects by many processes, thus saving physical memory pages.
If a file is mapped as a private object, the writes to that area are not visible to other processes and the changes are not written to disk. For example, suppose two processes mapped a private object into their areas of virtual memory. The same physical memory pages will be used as long as both processes only read data from those areas. Once a process writes to its area, a new copy of the pages is created in physical memory. This copy-on-write helps save memory, e.g., multiple processes share the .text segment, which is never modified (recall the 8 parca-agent light weight processes).
To wrap up, I must say that I might have misunderstood some of the parts and posted inaccurate information, sorry about that. The Linux kernel is a complicated project 🐧, and I wouldn't be able to navigate it without books such as:
• Computer Systems, Randal E. Bryant, David R. O'Hallaron
• Understanding the Linux Kernel, Daniel P. Bovet, Marco Cesati
• The Linux Programming Interface: A Linux and UNIX System Programming Handbook, Michael Kerrisk
Category: Infrastructure Tagged: architecture linux
Current Affairs PDF
Quants Questions : LCM & HCF Set 2
Hello Aspirants. Welcome to the Online Quantitative Aptitude Section with explanations in AffairsCloud.com. Here we are creating sample questions on LCM & HCF, which are common to the IBPS, SBI and other competitive exams. We have included some questions that are repeatedly asked in bank exams!
Click here to view : LCM & HCF Set 1
1. If the product of two numbers is 2496 and HCF is 8,then the ratio of HCF and LCM is
A)1:32
B)39:1
C)1:39
D)4:63
E)None of these
Answer – C) 1:39
Explanation:
HCF = 8
LCM = Product/HCF
Product of the 2 number = 2496
HCF:LCM = 8 : (2496/8) => 8 : 312 => 1: 39
2. The greatest possible length which can be used to measure exactly the lengths 1m 92cm,3m 84cm ,23m 4cm
A)23
B)32
C)36
D)34
E)None of these
Answer -B) 32
Explanation:
192 = 4^2×2^2×3
384 = 4^2×2^2×6
2304 = 4^2×2×6^2
HCF = 4^2×2 = 16×2 = 32
3. HCF of 4/3, 8/6, 36/63 and 20/42
A)4/126
B)4/8
C)4/36
D)4/42
E)None of these
Answer : A. 4/126
Explanation:
HCF of numerator(4,8,36,20) = 4
LCM of denominator(3,6,63,42) = 126
4. Find the LCM of 3/8, 9/32, 33/48, 18/72
A)3/8
B)8/33
C)198/8
D)8/3
E)None of these
Answer -C) 198/8
Explanation :
LCM of numerator(3,9,33,18) = 198
HCF of denominator(8,32,48,72) = 8
5. The HCF of 2511 and 3402 is
A)31
B)42
C)76
D)81
E)None of these
Answer -D) 81
Explanation :
2511 = 81×31
3402 = 81×42
Hence HCF is 81
6. A gardener had a number of shrubs to plant in rows. At first he tried to plant 8,then 12 and then 16 in a row but he had always 3 shrubs left with him. On trying 7 he had none left. Find the total number of shrubs.
A)154
B)147
C)137
D)150
E)None of these
Answer – B) 147
Explanation :
LCM(8, 12, 16) = 48, so the number of shrubs is of the form 48k + 3: 51, 99, 147, ... The first of these that is exactly divisible by 7 is 147.
7. If m and n are two whole numbers and if m^n= 49. Find n^m, given that n ≠ 1
A)118
B)94
C)561
D)128
E)None of these
Answer -D) 128
Explanation :
49 = 7^2 = m^n
n^m = 2^7 = 128
8. What will be the least number which when doubled will be exactly divisible by 12,18,21 and 30?
A)630
B)360
C)603
D)306
E)None of these
Answer -A) 630
Explanation :
LCM of 12,18,21 and 30 = 1260
The least number which when doubled gives this LCM is 1260/2 = 630.
9. HCF and LCM of two numbers are 11 and 385 .If one number lies between 75 and 125 then that number is
A)123
B)73
C)77
D)154
E)None of these
Answer – C) 77
Explanation :
HCF × LCM = product of the two numbers = 11 × 385 = 4235
Write the numbers as 11a and 11b with a and b co-prime; then 11a × 11b = 4235, so ab = 4235/121 = 35 = 7 × 5
The candidate numbers are 55 and 77 (or 11 and 385); the one between 75 and 125 is 11 × 7 = 77
10. If the L.C.M of x and y is z, their H.C.F is
A)xy/z
B)xyz
C)(x + y)/z
D)z/xy
E)None of these
Answer – A) xy/z
Explanation :
HCF = Product of the 2 number/LCM = XY/Z
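For readers who want to sanity-check answers like these, here is a small Python snippet (math.lcm needs Python 3.9+); the numbers reference questions above:

from math import gcd, lcm

# Q1: product 2496, HCF 8, so LCM = product / HCF
print(2496 // 8)                    # 312, hence HCF:LCM = 8:312 = 1:39

# Q5: HCF of 2511 and 3402
print(gcd(2511, 3402))              # 81

# Q10: for any x, y with LCM z, the HCF is x*y/z
x, y = 12, 18
z = lcm(x, y)
print(gcd(x, y) == x * y // z)      # True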
Use CPU Instead Of Gpu
In the world of computing, the GPU has long been hailed as the powerhouse behind graphics-intensive tasks. But what if there was a different approach? What if we could harness the power of the CPU instead of the GPU? This paradigm shift is not only possible, but it also opens up a whole new realm of possibilities.
The CPU, or Central Processing Unit, has traditionally been responsible for handling general-purpose computing tasks. However, recent advancements in CPU technology have allowed for significant improvements in parallel processing capabilities. This means that the CPU can now handle graphics rendering and other GPU-intensive tasks more efficiently and effectively.
The Benefits of Using CPU Instead of GPU
When it comes to computing power and performance, the Graphics Processing Unit (GPU) is often hailed as the star of the show. However, there are certain situations where utilizing the Central Processing Unit (CPU) can offer advantages and unique capabilities that cannot be achieved with the GPU alone. By leveraging the power of the CPU, users can tap into its strengths and unlock a world of possibilities. This article explores the benefits of using the CPU instead of the GPU and how it can enhance certain tasks and processes.
1. Versatility and Flexibility
The CPU is renowned for its versatility and flexibility, making it an ideal choice for various types of computing tasks. Unlike the GPU, which is primarily designed for graphics-intensive operations, the CPU can handle a wide range of workloads, including general computing tasks, file processing, data analysis, software development, and multitasking.
With its multiple cores and sophisticated instruction capabilities, the CPU can efficiently execute complex instructions and handle different types of data. This versatility allows users to perform a diverse set of tasks, making the CPU a valuable resource for both everyday computing needs and specialized applications.
Moreover, the CPU can be more flexible in terms of programming and software compatibility. The extensive support and optimized software development libraries for CPUs allow for better customization, integration, and control over the computing processes, giving users the freedom to tailor the system to their specific requirements.
2. Complex Calculations and Algorithms
When dealing with intricate calculations, algorithms, and simulations, the CPU can outshine the GPU in terms of performance and accuracy. CPUs are designed with features like high-speed cache and advanced arithmetic units that excel in executing complex instructions and handling branching operations efficiently.
Tasks like intensive numerical computations, mathematical modeling, molecular dynamics simulations, and cryptography benefit greatly from the CPU's ability to handle intricate algorithms and complex data structures. The superior single-threaded performance of CPUs enables faster execution of sequential tasks, whereas GPUs excel in parallel processing and data parallelism.
Additionally, the CPU's architecture allows for easy optimization of algorithms and tuning of parameters, maximizing the efficiency of highly specific processes. This level of control and precision is crucial in industries such as finance, research, and engineering, where accuracy and reliability are paramount.
3. Real-time and Interactive Applications
Real-time and interactive applications, such as gaming, virtual reality, augmented reality, and multimedia editing, heavily rely on both the CPU and GPU for optimal performance. While the GPU plays a significant role in rendering and visual effects, the CPU contributes to various aspects of these applications.
The CPU handles critical tasks like physics calculations, input processing, AI algorithms, audio processing, and managing the overall game or application logic. These tasks require low latency and quick response times, where the CPU's ability to execute instructions rapidly and efficiently takes precedence.
Furthermore, the CPU's versatility allows it to coordinate and manage the different components and processes within real-time applications, ensuring smooth and optimized performance. It acts as the central nervous system of the system, orchestrating the interplay between the GPU, memory, storage, and other peripherals.
4. Compatibility and Legacy Support
Another advantage of utilizing the CPU is its compatibility with a wide range of software, operating systems, and legacy systems. CPUs have been around for decades, and their architecture and instruction sets have evolved over time, making them backward compatible with older software and systems.
This compatibility is particularly crucial in business and enterprise environments, where maintaining compatibility with existing infrastructure and software is imperative. By leveraging the CPU's compatibility, organizations can minimize disruption, avoid costly migrations, and ensure business continuity without compromising on performance.
Additionally, the vast majority of software and applications are designed and optimized to run on CPUs, making it the de facto standard for most computing tasks. While GPUs offer specialized capabilities for certain workloads, the CPU remains the foundation of computing systems, ensuring compatibility and broad support.
Efficiency and Resource Management
The benefits of using the CPU instead of the GPU extend beyond performance and versatility. When considering factors such as power consumption, heat dissipation, and resource management, utilizing the CPU can be a more efficient and cost-effective choice for certain scenarios.
1. Power Consumption and Heat Dissipation
Compared to their GPU counterparts, CPUs tend to have lower power consumption due to their architecture and design. GPUs, with their hundreds or thousands of cores, require higher power levels to run efficiently. This higher power consumption results in increased heat generation and the need for robust cooling systems.
CPU-based systems are generally more power-efficient, making them suitable for applications where power consumption is a concern, such as mobile devices, laptops, and data centers. Additionally, by utilizing the power-saving features and smart power management capabilities of CPUs, organizations can reduce their energy costs and promote sustainability.
The lower power consumption of CPUs also translates to less heat production, leading to quieter systems and reduced cooling requirements. This aspect is particularly important in environments where noise reduction and maintaining optimal temperature levels are critical, such as recording studios, scientific labs, or office settings.
2. Resource Management and Scalability
Another aspect where CPUs excel is resource management and scalability. CPUs typically have direct control over memory allocation, shared resources, and system processes. This level of control allows for efficient resource allocation, ensuring that critical tasks receive the necessary computing power and memory while minimizing resource conflicts and bottlenecks.
Moreover, CPUs offer greater scalability and adaptability, allowing organizations to upgrade their systems incrementally. CPUs with additional cores or advanced features can be easily integrated into existing systems, providing a more straightforward and cost-effective method of enhancing performance.
By utilizing the CPU and its efficient resource management capabilities, organizations can optimize their computing infrastructure, enhance workload distribution, and achieve better overall system performance.
3. Cost-Effective Solutions
Cost considerations play a crucial role in any technology-related decision. CPUs, being more widely available and standardized than GPUs, tend to be more cost-effective options for most computing needs.
The wide range of CPU options, from entry-level processors to high-performance models, allows users to choose the most suitable option for their budget and requirements. Additionally, the large-scale production and competition in the CPU market contribute to lower prices and better price-to-performance ratios.
Moreover, the compatibility and extensive software support for CPUs mean that users can leverage a wide range of applications and tools without additional investments. This reduces the overall cost of ownership and provides a higher return on investment.
While GPUs offer exceptional performance for specific workloads, the cost-to-performance ratio may not always justify their implementation, especially for general computing tasks and applications. In such cases, the CPU offers a cost-effective and efficient alternative.
In conclusion, while GPUs rightfully take the spotlight in graphics-intensive operations, there are numerous benefits to utilizing the CPU instead of the GPU for certain tasks and scenarios. The CPU's versatility, flexibility, compatibility, and efficiency make it a powerful tool for a wide range of computing needs. By leveraging the strengths of the CPU, users can achieve optimal performance, resource management, and cost-effective solutions, ultimately enhancing their computing experience.
Use CPU Instead of GPU
Using CPU Instead of GPU: A Professional Perspective
In certain scenarios, using a CPU instead of a GPU can be a viable option in professional settings. While GPUs are well-known for their ability to handle complex computations and parallel processing, CPUs offer advantages in specific use cases.
One such scenario is when the workload involves tasks that are not well-suited for parallel processing. CPU architecture excels at serial processing, making it more efficient for sequential tasks that require high single-threaded performance. For example, in data analytics, where certain algorithms do not fully utilize a GPU's parallel capabilities, using a CPU can optimize processing time.
Furthermore, CPUs often provide better compatibility and support for various software applications. Some software packages are not specifically optimized for GPU architectures, making it more efficient to run them on a CPU.
Additionally, utilizing CPUs rather than GPUs can be more cost-effective. GPUs tend to be more expensive and power-hungry, requiring additional cooling solutions. In cases where the workload does not require the intense computational power of a GPU, using CPUs can lead to significant cost savings.
In conclusion, while GPUs are powerful tools in many professional settings, there are instances where utilizing CPUs can offer certain advantages. The decision to use a CPU instead of a GPU depends on the specific workload, compatibility with software applications, performance requirements, and cost considerations.
Key Takeaways - Use CPU Instead of GPU
• There are certain scenarios where using a CPU instead of a GPU can be more efficient.
• When the workload is not highly parallelizable, a CPU can provide better performance.
• CPU is better suited for single-threaded tasks that do not require heavy parallel processing.
• In some cases, using a CPU instead of a GPU can save power and reduce energy consumption.
• While GPUs are known for their superior performance in graphics-intensive tasks, CPUs excel in tasks that require complex decision-making and sequential processing.
Frequently Asked Questions
Here are some common questions related to using CPU instead of GPU:
1. Can I use CPU instead of GPU for gaming?
While CPUs are primarily designed for general-purpose computing, they can still handle gaming to some extent. However, GPUs are specifically optimized for rendering graphics and are much more efficient in handling complex visual tasks required in modern games. So, while you can use a CPU for gaming, a GPU will provide a much better gaming experience.
It is worth noting that some older or less graphically demanding games may not heavily rely on GPU processing power and can run reasonably well on a powerful CPU. But for the best performance and visual quality in most modern games, a dedicated GPU is recommended.
2. Are there any advantages of using CPU over GPU?
Yes, there are certain advantages of using CPU over GPU in certain scenarios:
1. Multithreaded tasks: CPUs excel at handling multithreaded tasks, where multiple processes need to be executed simultaneously. This makes them more suited for tasks like video editing, 3D modeling, and complex simulations.
2. General-purpose computing: CPUs are designed for general-purpose computing and can handle a wide range of tasks efficiently, including running software applications, managing operating systems, and performing calculations.
3. Should I use CPU instead of GPU for machine learning?
The choice between using a CPU or GPU for machine learning depends on the specific requirements of the task:
1. CPU for small datasets and non-parallel tasks: CPUs are suitable for handling smaller datasets and tasks that do not require parallel processing. They can be more cost-effective and offer greater flexibility for these types of tasks.
2. GPU for large datasets and parallel processing: GPUs excel at handling large datasets and tasks that can be parallelized, such as training deep neural networks. They offer significantly higher computational power and can accelerate the training process.
4. Can I use CPU instead of GPU for cryptocurrency mining?
While it is technically possible to use a CPU for cryptocurrency mining, it is not recommended due to the significantly lower mining performance compared to GPUs. GPUs are much more efficient in performing the complex calculations required for mining and can provide much higher hash rates.
Using a CPU for cryptocurrency mining would result in lower profitability and longer mining times. Therefore, it is generally more cost-effective and efficient to use a GPU or specialized mining hardware for mining cryptocurrencies.
5. Can I replace a GPU with a CPU in a gaming PC?
Technically, it is possible to replace a GPU with a CPU in a gaming PC. However, this is not recommended as GPUs are specifically designed for gaming and offer dedicated processing power for rendering graphics. A CPU, on the other hand, is designed for general-purpose computing and may not offer the same level of performance and visual quality in games.
Furthermore, most gaming PC setups are designed to accommodate a dedicated GPU, including the necessary power supply and cooling. Replacing a GPU with a CPU would require significant modifications to the PC setup and may not provide the desired gaming experience.
In conclusion, when considering the choice between using a CPU or a GPU, it is important to weigh the specific needs and requirements of your task or project. While CPUs are generally better suited for tasks that require complex calculations and multitasking, GPUs excel in parallel processing and handling large amounts of data.
Ultimately, the decision to use a CPU or a GPU should be based on the specific demands of your workload. If your task involves heavy computational work or requires real-time graphics rendering, a GPU may be the better option. However, for tasks that primarily involve general computing or single-threaded applications, a CPU may suffice. It is important to carefully evaluate your needs and consider factors such as cost, power consumption, and software compatibility when making a decision.
slide1
Chapter 8 - Interval Estimation
Margin of Error and the Interval Estimate
A point estimator cannot be expected to provide the
exact value of the population parameter.
An interval estimate can be computed by adding and
subtracting a margin of error to the point estimate.
Point Estimate +/- Margin of Error
The purpose of an interval estimate is to provide
information about how close the point estimate is to
the value of the parameter.
slide2
Interval Estimate of a Population Mean: σ Known
The general form of an interval estimate of a
population mean is x̄ ± Margin of Error.
• In order to develop an interval estimate of a population mean, the margin of error must be computed using either:
• the population standard deviation σ, or
• the sample standard deviation s
• σ is rarely known exactly, but often a good estimate can be obtained based on historical data or other information.
• We refer to such cases as the σ known case.
slide3
[Figure: sampling distribution of x̄ — the central 1 − α of all x̄ values lie within ± z_{α/2} σ_x̄ of μ, with an area of α/2 in each tail]
Interval Estimate of a Population Mean: σ Known
There is a 1 − α probability that the value of a
sample mean will provide a margin of error of z_{α/2} σ_x̄
or less.
slide4
Interval Estimate of a Population Mean: σ Known
[Figure: sampling distribution of x̄ with α/2 in each tail; intervals built around sample means inside the central 1 − α region include μ, while intervals built around sample means in the tails do not include μ]
interval estimate of a population mean known
Interval Estimate of a Population Mean: σ Known
• Interval Estimate of μ
x̄ ± z_{α/2} (σ/√n)
where: x̄ = the sample mean
 1 − α = the confidence coefficient
 z_{α/2} = the z value providing an area of α/2 in the upper tail of the standard normal probability distribution
 σ = the population standard deviation
 n = the sample size
interval estimate of a population mean known1
Interval Estimate of a Population Mean: σ Known
• Values of z_{α/2} for the Most Commonly Used Confidence Levels
Confidence Level   α     α/2    Table Look-up Area   z_{α/2}
90%               .10    .05    .9500                1.645
95%               .05    .025   .9750                1.960
99%               .01    .005   .9950                2.576
slide7
Meaning of Confidence
Because 90% of all the intervals constructed using
x̄ ± 1.645 σ_x̄ will contain the population mean,
we say we are 90% confident that the interval
includes the population mean μ.
We say that this interval has been established at the
90% confidence level.
The value .90 is referred to as the confidence coefficient.
slide8
Interval Estimate of a Population Mean: σ Known
• Example: Discount Sounds
Discount Sounds has 260 retail outlets throughout
the United States. The firm is evaluating a potential
location for a new outlet, based in part, on the mean
annual income of the individuals in the marketing
area of the new location.
A sample of size n = 36 was taken; the sample
mean income is $41,100. The population is not
believed to be highly skewed. The population
standard deviation is estimated to be $4,500, and the
confidence coefficient to be used in the interval
estimate is .95.
slide9
95% of the sample means that can be observed
are within ± 1.96 σ_x̄ of the population mean μ.
Interval Estimate of a Population Mean: σ Known
• Example: Discount Sounds
The margin of error is:
z_{α/2} (σ/√n) = 1.96 (4,500/√36) = 1,470
Thus, at 95% confidence, the margin of error
is $1,470.
interval estimate of a population mean known2
Interval Estimate of a Population Mean: σ Known
Interval estimate of μ is:
• Example: Discount Sounds
$41,100 + $1,470
or
$39,630 to $42,570
We are 95% confident that the interval contains the
population mean.
interval estimate of a population mean known3
Interval Estimate of a Population Mean: σ Known
• Example: Discount Sounds
Confidence Margin
Level of Error Interval Estimate
90% 1234 $39,866 to $42,334
95% 1470 $39,630 to $42,570
99% 1932 $39,168 to $43,032
In order to have a higher degree of confidence,
the margin of error and thus the width of the
confidence interval must be larger.
slide12
Interval Estimate of a Population Mean: σ Known
• Adequate Sample Size
In most applications, a sample size of n = 30 is
adequate.
If the population distribution is highly skewed or
contains outliers, a sample size of 50 or more is
recommended.
If the population is not normally distributed but is
roughly symmetric, a sample size as small as 15
will suffice.
If the population is believed to be at least
approximately normal, a sample size of less than 15
can be used.
slide13
Interval Estimate of a Population Mean: σ Unknown
• If an estimate of the population standard deviation σ cannot be developed prior to sampling, we use the sample standard deviation s to estimate σ.
• This is the σ unknown case.
• In this case, the interval estimate for m is based on the t distribution.
• (We’ll assume for now that the population is normally distributed.)
slide14
t Distribution
The t distribution is a family of similar probability distributions.
A specific t distribution depends on a parameter
known as the degrees of freedom.
Degrees of freedom refer to the number of independent
pieces of information that go into the computation of s.
A t distribution with more degrees of freedom has
less dispersion.
As the degrees of freedom increases, the difference between
the t distribution and the standard normal probability
distribution becomes smaller and smaller.
t distribution
t Distribution
[Figure: comparison of the standard normal distribution with t distributions having 20 and 10 degrees of freedom — the t distributions are flatter with heavier tails, and move closer to the standard normal curve as the degrees of freedom increase]
For more than 100 degrees of freedom, the standard normal z value provides a good approximation to the t value
slide16
Interval Estimate of a Population Mean: σ Unknown
• Interval Estimate
x̄ ± t_{α/2} (s/√n)
where: 1 − α = the confidence coefficient
 t_{α/2} = the t value providing an area of α/2
 in the upper tail of a t distribution
 with n - 1 degrees of freedom
 s = the sample standard deviation
slide17
Interval Estimate of a Population Mean: σ Unknown
A reporter for a student newspaper is writing an
article on the cost of off-campus housing. A sample
of 16 efficiency apartments within a half-mile of
campus resulted in a sample mean of $750 per month
and a sample standard deviation of $55.
• Example: Apartment Rents
Let us provide a 95% confidence interval estimate
of the mean rent per month for the population of
efficiency apartments within a half-mile of campus.
We will assume this population to be normally
distributed.
slide18
Interval Estimate of a Population Mean: σ Unknown
At 95% confidence, α = .05, and α/2 = .025.
t.025 is based on n- 1 = 16 - 1 = 15 degrees of freedom.
In the t distribution table we see that t.025 = 2.131.
slide19
Interval Estimate of a Population Mean: σ Unknown
• Interval Estimate
x̄ ± t_{.025} (s/√n) = 750 ± 2.131 (55/√16) = 750 ± 29.30
(margin of error = 29.30)
We are 95% confident that the mean rent per month
for the population of efficiency apartments within a
half-mile of campus is between $720.70 and $779.30.
slide20
Interval Estimate of a Population Mean: σ Unknown
• Adequate Sample Size
In most applications, a sample size of n = 30 is
adequate when using the expression x̄ ± t_{α/2} (s/√n)
to develop an interval estimate of a population mean.
If the population distribution is highly skewed or
contains outliers, a sample size of 50 or more is
recommended.
slide21
Interval Estimate of a Population Mean: σ Unknown
• Adequate Sample Size (continued)
If the population is not normally distributed but is
roughly symmetric, a sample size as small as 15
will suffice.
If the population is believed to be at least
approximately normal, a sample size of less than 15
can be used.
slide22
Summary of Interval Estimation Procedures
for a Population Mean
Can the population standard deviation σ be assumed known?
 Yes → σ Known Case: use x̄ ± z_{α/2} (σ/√n)
 No → use the sample standard deviation s to estimate σ;
 σ Unknown Case: use x̄ ± t_{α/2} (s/√n)
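For quick reference, the two interval formulas summarized in this flowchart can also be written in LaTeX form (this simply restates the slides above):
\bar{x} \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}} \quad (\sigma~\text{known})
\qquad
\bar{x} \pm t_{\alpha/2}\,\frac{s}{\sqrt{n}} \quad (\sigma~\text{unknown},~n-1~\text{degrees of freedom})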
slide23
Sample Size for an Interval Estimate of a Population Mean
Let E = the desired margin of error.
E is the amount added to and subtracted from the
point estimate to obtain an interval estimate.
If a desired margin of error is selected prior to
sampling, the sample size necessary to satisfy the
margin of error can be determined.
slide24
Sample Size for an Interval Estimate of a Population Mean
• Margin of Error: E = z_{α/2} (σ/√n)
• Necessary Sample Size: n = (z_{α/2})² σ² / E²
slide25
Sample Size for an Interval Estimate of a Population Mean
The Necessary Sample Size equation requires a
value for the population standard deviation σ.
If σ is unknown, a preliminary or planning value
for σ can be used in the equation.
1. Use the estimate of the population standard
deviation computed in a previous study.
2. Use a pilot study to select a preliminary sample and
use the sample standard deviation from the study.
3. Use judgment or a “best guess” for the value of σ.
slide26
Sample Size for an Interval Estimate of a Population Mean
Recall that Discount Sounds is evaluating a
potential location for a new retail outlet, based in
part, on the mean annual income of the individuals in
the marketing area of the new location.
• Example: Discount Sounds
Suppose that Discount Sounds’ management team
wants an estimate of the population mean such that
there is a .95 probability that the sampling error is
$500 or less.
How large a sample size is needed to meet the
required precision?
slide27
Sample Size for an Interval Estimate of a Population Mean
At 95% confidence, z_{.025} = 1.96. Recall that σ = 4,500.
n = (z_{α/2})² σ² / E² = (1.96)² (4,500)² / (500)² = 311.17, rounded up to 312.
A sample of size 312 is needed to reach a desired
precision of ± $500 at 95% confidence.
slide28
The sampling distribution of p̄ plays a key role in
computing the margin of error for this interval
estimate.
The sampling distribution of p̄ can be approximated
by a normal distribution whenever np > 5 and
n(1 – p) > 5.
Interval Estimate of a Population Proportion
The general form of an interval estimate of a
population proportion is p̄ ± margin of error.
slide29
Normal Approximation of Sampling Distribution of p̄
[Figure: sampling distribution of p̄ centered at p — the central 1 − α of all p̄ values lie within ± z_{α/2} σ_p̄ of p, with an area of α/2 in each tail]
Interval Estimate of a Population Proportion
interval estimate of a population proportion
Interval Estimate of a Population Proportion
• Interval Estimate
p̄ ± z_{α/2} √( p̄(1 − p̄) / n )
where: 1 − α = the confidence coefficient
 z_{α/2} = the z value providing an area of α/2 in the upper tail of the standard normal probability distribution
 p̄ = the sample proportion
slide31
Interval Estimate
of a Population Proportion
Political Science, Inc. (PSI) specializes in voter polls and surveys designed to keep political office seekers informed of their position in a race. Using telephone surveys, PSI interviewers ask registered voters who they would vote for if the election were held that day.
• Example: Political Science, Inc.
In a current election campaign, PSI has just found that 220 registered voters, out of 500 contacted, favor a particular candidate. PSI wants to develop a 95% confidence interval estimate for the proportion of the population of registered voters that favor the candidate.
slide32
Interval Estimate of a Population Proportion
where: n = 500, p̄ = 220/500 = .44, z_{α/2} = 1.96
.44 ± 1.96 √( .44(.56)/500 ) = .44 ± .0435
PSI is 95% confident that the proportion of all voters
that favor the candidate is between .3965 and .4835.
slide33
However, p̄ will not be known until after we have selected the sample. We will use the planning value
p* for p̄.
Sample Size for an Interval Estimate of a Population Proportion
• Margin of Error: E = z_{α/2} √( p̄(1 − p̄) / n )
Solving for the necessary sample size, we get n = (z_{α/2})² p*(1 − p*) / E²
slide34
Sample Size for an Interval Estimate of a Population Proportion
• Necessary Sample Size: n = (z_{α/2})² p*(1 − p*) / E²
The planning value p* can be chosen by:
1. Using the sample proportion from a previous sample of the same or similar units, or
2. Selecting a preliminary sample and using the
sample proportion from this sample.
3. Use judgment or a “best guess” for a p* value.
4. Otherwise, use .50 as the p* value.
sample size for an interval estimate of a population proportion
Sample Size for an Interval Estimate of a Population Proportion
Suppose that PSI would like a .99 probability that
the sample proportion is within + .03 of the
population proportion.
How large a sample size is needed to meet the
required precision? (A previous sample of similar
units yielded .44 for the sample proportion.)
• Example: Political Science, Inc.
slide36
Sample Size for an Interval Estimate of a Population Proportion
At 99% confidence, z_{.005} = 2.576. Recall that p* = .44.
n = (z_{α/2})² p*(1 − p*) / E² = (2.576)² (.44)(.56) / (.03)² = 1816.73, rounded up to 1817.
A sample of size 1817 is needed to reach a desired
precision of ± .03 at 99% confidence.
‘Generate Path from Contour’ works unreliably
https://docs.mech-mind.net/en/suite-software-manual/1.7.4/vision-steps/generate-traj-by-contour.html
I have already read this document and I know how to use this 'step’. However, due to unstable environmental lighting and random variations in the object’s pose, the point cloud of the object sometimes has missing data, resulting in different generated trajectories. What should I do?
Hello, would it be possible to provide a few images for illustration? This would greatly aid our understanding.
The ‘Generate Path from Contour’ heavily relies on the point cloud, which inherently has some level of randomness. This can result in variations in the starting point of the generated path or deviations from the ideal path.
In this case, we suggest that you consider using the “Generate Path from Contour” as a tool for path teaching, rather than relying solely on it for generating the path. When creating a matching template, incorporate the path as a pickpoint.
For the same type of object, use “3D matching” to obtain the object’s pose, and then utilize “Map to Multiple Pick Points” to generate its trajectory (which refers to the pickpoints mentioned earlier). This would be a more reliable approach.
JSONStore
概述
IBM Mobile Foundation JSONStore 是可选的客户机端 API,其提供轻量级、面向文档的存储系统。 JSONStore 支持持久存储 JSON 文档。 即使在运行应用程序的设备脱机时,应用程序中的文档在 JSONStore 中仍可用。 此持久、始终可用的存储可用于授予用户对文档的访问权,例如,在设备中无网络连接时。
JSONStore 功能工作流程
由于开发人员对此很熟悉,所以此文档中偶尔使用关系数据库术语以帮助说明 JSONStore。 但是关系数据库和 JSONStore 之间有很多不同。 例如,用于在关系数据库中存储数据的严格模式与 JSONStore 方法不同。 使用 JSONStore,您可以存储任何 JSON 内容并可对您需要搜索的内容建立索引。
主要功能
• 用于提高搜索效率的数据索引
• 用于跟踪对存储数据的仅本地更改的机制
• 多用户支持
• 对存储数据进行 AES 256 加密以提供安全性和机密性。 如果单个设备上有多个用户,那么您可以利用密码保护按照用户进行分段保护。
单个存储可以有多个集合,每个集合可有多个文档。 还可以有包含多个存储的 MobileFirst 应用程序。 有关信息,请参阅 JSONStore 多用户支持。
支持级别
• 在本机 iOS 和 Android 应用程序中支持 JSONStore(对于本机 Windows(Universal 和 UWP)不支持)。
• 在 Cordova iOS、Android 和 Windows(Universal 和 UWP)应用程序中支持 JSONStore。
跳转至
常规 JSONStore 术语
文档
文档是 JSONStore 的基本构建块。
JSONStore 文档是具有自动生成的标识 (_id) 和 JSON 数 据的 JSON 对象。 它类似于数据库术语中的记录或行。 _id 的值始终是特定集合中的唯一整数。 JSONStoreInstance 类中的某些函数(例如,addreplaceremove)可获取文档/对象数组。 这些方法可用于一次在多个文档/对象上执行操作。
单个文档
var doc = { _id: 1, json: {name: 'carlos', age: 99} };
文档数组
var docs = [
{ _id: 1, json: {name: 'carlos', age: 99} },
{ _id: 2, json: {name: 'tim', age: 100} }
]
集合
JSONStore 集合类似于数据库术语中的表。
以下代码示例不是在磁盘上存储文档的方式,而是在较高级别查看集合概况的有效方法。
[
{ _id: 1, json: {name: 'carlos', age: 99} },
{ _id: 2, json: {name: 'tim', age: 100} }
]
存储区
存储区是包含一个或多个集合的持久性 JSONStore 文件。
存储区类似于数据库术语中的关系数据库。 存储区也称为 JSONStore。
搜索字段
搜索字段是键/值对。
搜索字段是为了快速查找而编制索引的关键字,与数据库术语中的列字段或属性类似。
额外的搜索字段是编制了索引的关键字,但并非存储的 JOSON 数据的一部分。 这些字段定义了其值已经编制索引(在 JSON 集合中)的关键字,并且可以用于更加快速地进行搜索。
有效数据类型包括:字符串、布尔值、数字和整数。 这些类型仅仅是类型提示,不会进行类型验证。 此外,这些类型决定了如何存储可以编制索引的字段。 例如,{age: 'number'} 将 1 编制为索引 1.0,{age: 'integer'} 将 1 编制为索引 1。
搜索字段和额外的搜索字段
var searchField = {name: 'string', age: 'integer'};
var additionalSearchField = {key: 'string'};
只能针对对象内的索引键建立索引,而不是对象自身。 数组将以传递方式进行处理,这意味着您无法编制数组的索引或编制数组 (arr[n]) 的特定索引,但可以针对数组内的对象编制索引。
对数组内的值建立索引
var searchFields = {
'people.name' : 'string', // matches carlos and tim on myObject
'people.age' : 'integer' // matches 99 and 100 on myObject
};
var myObject = {
people : [
{name: 'carlos', age: 99},
{name: 'tim', age: 100}
]
};
查询
查询是使用搜索字段或额外的搜索字段来查找文档的对象。
这些示例假定 name 搜索字段为字符串类型并且 age 搜索字段为整数类型。
查找其 namecarlos 匹配的文档
var query1 = {name: 'carlos'};
查找其 namecarlos 匹配并且 age99 匹配的文档
var query2 = {name: 'carlos', age: 99};
查询部分
查询部分用于构建更高级的搜索。 某些 JSONStore 操作(例如,某些版本的 findcount)生成查询部分。 查询部分中的所有项都由 AND 语句进行连接,而查询部分自身由 OR 语句进行连接。 仅在查询部分中的所有项为 true 时搜索条件才返回匹配项。 您可以使用多个查询部分来搜索满足一个或多个查询部分的匹配项。
使用查询部分进行查找仅作用于顶级搜索字段。 例如: name,而不是 name.first。 使用所有搜索字段都是顶级的多个集合来避开这点。 处理非顶级搜索字段的查询部分操作为:equalnotEquallikenotLikerightLikenotRightLikeleftLikenotLeftLike。 如果使用非顶级搜索字段,那么行为不确定。
功能表
比较 JSONStore 功能与其他数据存储技术和格式的功能。
JSONStore 是一个用于在使用 MobileFirst 插件的 Cordova 应用程序中存储数据的 JavaScript API,Objective-C API 用于本机 iOS 应用程序,Java API 用于本机 Android 应用程序。 请参考以下不同 JavaScript 存储技术的比较,以了解 JSONStore 与它们相比有哪些不同。
JSONStore 类似于诸如 LocalStorage、Indexed DB、Cordova Storage API 和 Cordova File API 之类的技术。 此表显示了 JSONStore 提供的某些功能与其他技术进行对比的结果。 JSONStore 功能仅供 iOS 和 Android 设备和仿真程序使用。
功能 JSONStore LocalStorage IndexedDB Cordova Storage API Cordova File API
Android 支持(Cordova 和本机应用程序)
iOS 支持(Cordova 和本机应用程序)
Windows 8.1 Universal 和 Windows 10 UWP(Cordova 应用程序) -
数据加密 - - - -
最大存储量 可用空间 ~5MB ~5MB 可用空间 可用空间
可靠存储(请参阅注释) - -
跟踪本地更改 - - - -
多用户支持 - - - -
建立索引 - -
存储类型 JSON 文档 键/值对 JSON 文档 关系数据库 (SQL) 字符串
注:可靠存储意味着除非发生以下某个事件,否则不会删除您的数据:
• 从设备上移除应用程序。
• 调用移除数据的某种方法。
多用户支持
借助 JSONStore,您可以在单个 MobileFirst 应用程序中创建包含不同集合的多个存储区。
init (JavaScript) 或 open(本机 iOS 和本机 Android)API 可获取具有某个用户名的选项对象。 不同的存储区是文件系统中的单独文件。 用户名用作存储区的文件名。 出于安全性和隐私的原因,这些单独存储区可以通过不同的密码进行加密。 调用 closeAll API 将除去对所有集合的访问权。 还可通过调用 changePassword API 来更改加密存储区的密码。
示例用例是共享物理设备(例如,iPad 或 Android 平板电脑)和 MobileFirst 应用程序的不同员工。 此外,如果员工工作班次不同并且处理来自不同客户的隐私数据,那么在使用 MobileFirst 应用程序时,多用户支持非常有用。
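A minimal JavaScript sketch of the multi-user flow described above (the collection definition, user names, and passwords are hypothetical):

var collections = { people : { searchFields : { name : 'string' } } };

// Open (or create) the store belonging to the employee currently on shift.
// The username becomes the store's file name; the password encrypts that store.
WL.JSONStore.init(collections, { username : 'shiftWorkerA', password : 'secretA' })
    .then(function () {
        // ... work with WL.JSONStore.get('people') for this user ...

        // When the shift ends, close access to all open collections.
        return WL.JSONStore.closeAll();
    })
    .then(function () {
        // The next employee opens a separate, independently encrypted store.
        return WL.JSONStore.init(collections, { username : 'shiftWorkerB', password : 'secretB' });
    });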
安全性
您可以加密存储区中的所有集合以确保其安全性。
要加密存储器中的所有集合,请将密码传递到 init (JavaScript) 或 open(本机 iOS 和本机 Android)API。 如果未传递任何密码,那么存储器集合中的所有文档都不会加密。
某些安全工件(例如,salt)存储在密钥链 (iOS)、共享首选项 (Android) 和凭据保险箱(Windows Universal 8.1 和 Windows 10 UWP)中。 此存储区利用 256 位高级加密标准 (AES) 密钥进行加密。 所有密钥通过基于密码的密钥派生功能 2 (PBKDF2) 进行增强。 您可以选择针对应用程序加密数据集合,但是无法在已加密和明文格式之间进行切换,或者在存储区中混用两种格式。
用于保护存储区中数据的密钥基于您提供的用户密码。 密钥不会到期,但是您可以通过调用 changePassword API 进行更改。
数据保护密钥 (DPK) 是用于解密存储器内容的密钥。 DPK 保存在 iOS 密钥链中,即使应用程序已卸载也是如此。 要移除密 钥链中的密钥以及 JSONStore 放入应用程序中的任何项,请使用 destroy API。 此过程不适用于 Android,因为加密的 DPK 存储在共享首选项中,并且会在卸载应用程序时擦除。
JSONStore 第一次使用密码打开集合时,这意味着开发人员想要加密存储区中的数据,JSONStore 需要随机令牌。 可以从客户机或服务器获取此随机令牌。
当 JSONStore API 的 JavaScript 实施中存在 localKeyGen 密钥并且具有值 true 时,将本地生成使用密码的安全令牌。 否则,通过联系服 务器生成令牌,因此需要 MobileFirst Server 连通性。 仅在第一次使用密码打开存储器时需要此令牌。 缺省情况下,本机实施(Objective-C 和 Java)本地生成使用密码的安全令 牌,或者您可以通过 secureRandom 选项传递一个令牌。
权衡以下两种方法,脱机打开存储区并信任客户机以生成此随机令牌(安全性较低),或者 通过访问 MobileFirst Server(需要连通性)打开存储区并信任服务器(安全性较高)。
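A short, hedged example of the password-related calls mentioned in this section, using the JavaScript API (the user name and passwords are placeholders):

var collections = { orders : { searchFields : { id : 'integer' } } };

// Passing a password encrypts every collection in this store (AES-256).
// localKeyGen: true generates the secure random token locally, without contacting the server.
WL.JSONStore.init(collections, { username : 'carlos', password : 'oldPassword', localKeyGen : true })
    .then(function () {
        // Rotate the password that protects the data protection key (DPK).
        return WL.JSONStore.changePassword('oldPassword', 'newPassword', 'carlos');
    })
    .then(function () {
        // destroy() wipes the store files and the stored security artifacts (e.g. keychain entries).
        return WL.JSONStore.destroy();
    });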
安全实用程序
MobileFirst 客户机端 API 提供以下安全实用程序来帮助保护用户数据。 如果要保护 JSON 对象,那么诸如 JSONStore 之类的功能很有效。 但是,不建议在 JSONStore 集合中存储二元 BLOB。
应改为在文件系统上存储二元数据,并将文件路径和其他元数据存储在 JSONStore 集合内。 如果要保护图像之类的文件,可以将其编码为 base64 字符串、对其加密,并将输出写入磁盘。 要解密数据时,可以在 JSONStore 集合中查找元数据、从磁盘中读取加密数据,并使用存储的元数据来解密数据。 此元数据可包括密钥、加密盐 (Salt)、初始化向量 (IV)、文件类型、到文件的路径等。
了解有关 JSONStore 安全实用程序的更多信息。
Windows 8.1 Universal 和 Windows 10 UWP 加密
您可以加密存储区中的所有集合以确保其安全性。
JSONStore 使用 SQLCipher 作为其底层数据库技术。 SQLCipher 是由 Zetetic 生成的 SQLite 构建,LLC 会向数据库添加一个加密层。
JSONStore 在所有平台上使用 SQLCipher。 在 Android 和 iOS 上,提供免费的、开放式源代码版本的 SQLCipher,这也称为 Community Edition,其纳入 Mobile Foundation 中包含的 JSONStore 版本。 只有获取商业许可才能使用 Windows 版本的 SQLCipher,并且 Mobile Foundation 不得直接再分发。
相反,JSONStore for Windows 8 Universal 包含 SQLite 作为底层数据库。 如果需要加密其中一个平台的数据,您需要获 取自己的 SQLCipher 版本并置换 Mobile Foundation 中包含的 SQLite 版本。
如果不需要加密,那么通过使用 Mobile Foundation 中的 SQLite 版本使 JSONStore 完全生效(除去加密)。
针对 Windows Universal 和 Windows UWP,将 SQLite 替换为 SQLCipher
1. 运行 SQLCipher for Windows Runtime Commercial Edition 随附的 SQLCipher for Windows Runtime 8.1/10 扩展。
2. 安装完扩展后,查找刚创建的 sqlite3.dll 文件的 SQLCipher 版本。 分别存在针对 x86、x64 和 ARM 的版本。
C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1\ExtensionSDKs\SQLCipher.WinRT81\3.0.1\Redist\Retail\<platform>
3. 将此文件复制到您的 MobileFirst 应用程序并进行替换。
<Worklight project name>\apps\<application name>\windows8\native\buildtarget\<platform>
效果
以下是可能会影响 JSONStore 性能的因素。
网络
• 在执行操作(例如,将所有脏文档发送到适配器)之前检查网络连接。
• 通过网络发送至客户机的数据量严重影响性能。 只发送应用程序所需的数据,而不是复制后端数据库中的所有项。
• 如果正在使用适配器,请考虑将 compressResponse 标记设置为 true。 通过此方式压缩响应,与无压缩相比,通常这种方式所用带宽较少并且传输速度更快。
内存
• 在使用 JavaScript API 时,JSONStore 文档将作为本机(Objective-C、Java 或 C#)层和 JavaScript 层之间的字 符串进行序列化和反序列化。 缓解可能的内存问题的一种方法是在使用 find API 时应用限制和偏移量。 这样,您限制针对结果分配的内存量并且可实施诸如分页等事项(每页显示 X 条结果)。
• 不再使用最终作为字符串进行序列化和反序列化的长密钥名称,考虑将这些长密钥名称映射为较短的名称(例如,myVeryVeryVerLongKeyNamekkey)。 理想情况下,在从适配器发送到客户机时将其映射为简短的密钥名称,在将数据发送回后端时将其映射为原始长密钥名称。
• 考虑将存储区内的数据拆分为不同的集合。 让小型文档分布在各个集合中,而不是单个集合中包含整个文档。 此注意事项取决于数据间的相关程度以及指定数据的用例。
• 在将 add API 用于对象数组时,可能会遇到内存问题。 要缓解此问题,请一次使用较少的 JSON 对象调用这些方法。
• JavaScript 和 Java 具有垃圾收集器,而 Objective-C 具有自动引用计数。 允许其工作,但不要完全依赖。 尝试删除不再使用的空引用,并且使用概要分析工具来检查内存使用是否降低(当您预计会降低时)。
CPU
• 在调用建立索引的 add 方法时,使用的搜索字段和其他搜索字段的数量会影响性能。 仅对 find 方法的查询中使用的值编制索引。
• 缺省情况下,JSONStore 跟踪对其文档的本地更改。 可以禁用此行为,因此在使用 add、remove 和 replace API 时,通过将 markDirty 标记设置为 false 将省去一些循环周期。
• 启用安全性会向 initopen API 以及处理集合中的文档的其 他操作增加开销。 考虑安全性是否确实需要。 例如,open API 比加密更慢,因为它必须生成用于加密和解密的加密密钥。
• replaceremove API 取决于集合大小,因为它们必须浏览整个集合以替换或移除所有出现项。 由于它必须浏览每条记录,因为必须加密每个记录,在使用加密时速度会变得很慢。 在大型集 合上,此性能下降更加明显。
• count API 相对开销较多。 但是,您可以保留一个变量来保持集合计数。 每次从集合存储内容或移除内容时,会将其更新。
• find API(findfindAllfindById)受加密影响,因为它们必须解密每个文档以查看其是否匹配。 针对 find by query,如果传递了限制,那么当其达到结果限制而停止时,可能会更快。 JSONStore 不需要解密剩余的文档即可了解是否保留任何其他搜索结果。
并行
JavaScript
可以对集合执行的大部分操作都是异步的,例如添加和查找。 这些操作在操作成功完成时返回解析的 jQuery 承诺,如果失败,则会被拒绝。 这些承诺类似于成功和失败的回调。
jQuery Deferred 是可以解析或拒绝的承诺。 以下示例并非特定于 JSONStore,而是旨在帮助您了解其常规用法。
不再承诺和回调,您还可以侦听 JSONStore successfailure 事件。 基于传递到事件侦听器的参数执行操作。
示例承诺定义
var asyncOperation = function () {
// Assumes that you have jQuery defined via $ in the environment
var deferred = $.Deferred();
setTimeout(function() {
deferred.resolve('Hello');
}, 1000);
return deferred.promise();
};
示例承诺用法
// The function that is passed to .then is executed after 1000 ms.
asyncOperation.then(function (response) {
// response = 'Hello'
});
示例回调定义
var asyncOperation = function (callback) {
setTimeout(function() {
callback('Hello');
}, 1000);
};
示例回调用法
// The function that is passed to asyncOperation is executed after 1000 ms.
asyncOperation(function (response) {
// response = 'Hello'
});
示例事件
$(document.body).on('WL/JSONSTORE/SUCCESS', function (evt, data, src, collectionName) {
// evt - Contains information about the event
// data - Data that is sent ater the operation (add, find, etc.) finished
// src - Name of the operation (add, find, push, etc.)
// collectionName - Name of the collection
});
Objective-C
在将本机 iOS API 用于 JSONStore 时,所有操作都将添加到同步分派队列。 此行为确保在不是主线程的线程上按顺序执行接触存储的操作。 有关更多信息,请参阅 Grand Central Dispatch (GCD) 中的 Apple 文档。
Java
在将本机 Android API 用于 JSONStore 时,将在主线程上执行所有操作。 您必须创建线程或者使用线程池以使用异步行为。 所有存储操作都是线程安全的。
分析
可收集与 JSONStore 相关的分析信息的关键部分
文件信息
如果在分析标记设置为 true 的情况下调用 JSONStore API,那么将按照每个应用程序会话收集文件信息。 在将应用程序装入内存以及从内存中移除时,将定义应用程序会话。 您可以使用此信息来确定应用程序中的 JSONStore 内容使用的空间量。
性能指标
每次使用有关操作开始和结束时间的信息调用 JSONStore API 时,都将收集性能指标。 您可以使用此信息来确定不同操作所用的时间(毫秒)。
示例
iOS
JSONStoreOpenOptions* options = [JSONStoreOpenOptions new];
[options setAnalytics:YES];
[[JSONStore sharedInstance] openCollections:@[...] withOptions:options error:nil];
Android
JSONStoreInitOptions initOptions = new JSONStoreInitOptions();
initOptions.setAnalytics(true);
WLJSONStore.getInstance(...).openCollections(..., initOptions);
JavaScript
var options = {
analytics : true
};
WL.JSONStore.init(..., options);
使用外部数据
您可以在多个不同的概念中使用外部数据:拉取推送
拉取
许多系统使用词汇“拉取”来指代从外部源获取数据。
有三个重要部分:
外部数据源
此源可以为数据库、REST 或 SOAP API 等等。 唯一要求是,外部数据源必须可以通过 MobileFirst Server 访问,或可以直接通过客户机应用程序访问。 理想情况下,您希望此源以 JSON 格式返回数据。
传输层
此源表示您如何将数据从外部源传送到内部源(存储区中的 JSONStore 集合)。 一个备选方案是适配器。
内部数据源 API
此源是可用于将 JSON 数据添加到集合的 JSONStore API。
注:您可以使用从文件读取的数据、输入字段或变量中的硬编码数据来填充内部存储区。 此源不必专门来自需要网络通信的外部源。
以下所有代码示例都以类似于 JavaScript 的伪码来编写。
注:针对传输层使用适配器。 使用适配器的优点包括服务器端代码和客户机端代码的 XML 到 JSON 转换、安全性、过滤和重复数据删除。
外部数据源:后端 REST 端点
假设您具有一个 REST 端点,用于从数据库读取数据并将其返回为 JSON 对象数组。
app.get('/people', function (req, res) {
var people = database.getAll('people');
res.json(people);
});
返回的数据可能与以下示例类似:
[{id: 0, name: 'carlos', ssn: '111-22-3333'},
{id: 1, name: 'mike', ssn: '111-44-3333'},
{id: 2, name: 'dgonz' ssn: '111-55-3333')]
传输层:适配器
假设创建名为 people 的适配器并定义名为 getPeople 的过程。 该过程将调用 REST 端点并将 JSON 对象数组返回到客户机。 您可能想在此执行更多操作,如仅将数据的子集返回到客户机。
function getPeople () {
var input = {
method : 'get',
path : '/people'
};
return MFP.Server.invokeHttp(input);
}
在客户机上,您可以使用 WLResourceRequest API 来获取数据。 此外,您可能希望将某些参数从客户机传递到适配器。 一个示例是客户机上次通过适配器从外部源获取新数据的日期。
var adapter = 'people';
var procedure = 'getPeople';
var resource = new WLResourceRequest('/adapters' + '/' + adapter + '/' + procedure, WLResourceRequest.GET);
resource.send()
.then(function (responseFromAdapter) {
// ...
});
注:您可能想要利用可传递到 WLResourceRequest API 的 compressResponsetimeout 以及其他参数。
另外,您可以跳过适配器并使用诸如 jQuery.ajax 之类的项,从而直接与具有要存储数据的 REST 端点联系。
$.ajax({
type: 'GET',
url: 'http://example.org/people',
})
.then(function (responseFromEndpoint) {
// ...
});
内部数据源 API:JSONStore 在收到后端响应后,您可以使用 JSONStore 来处理此数据。 JSONStore 提供跟踪本地更改的方法。 该方法将启用某些 API 以将文档标记为脏。 API 将记录对文档执行的最后一个操作,以及将文档标记为脏的时间。 然后可以使用此信息来实现诸如数据同步等功能。
change API 将采用数据和某些选项:
replaceCriteria
这些搜索字段是输入数据的一部分。 它们用于查找已存在于集合内的文档。 例如,如果选择
['id', 'ssn']
作为替换标准,那么将传递以下数组作为输入数据:
[{id: 1, ssn: '111-22-3333', name: 'Carlos'}]
并且 people 集合已包含以下文档:
{_id: 1,json: {id: 1, ssn: '111-22-3333', name: 'Carlitos'}}
change 操作将查找与以下查询完全匹配的文档:
{id: 1, ssn: '111-22-3333'}
change 操作将使用输入数据执行替换并且该集合将包含:
{_id: 1, json: {id:1, ssn: '111-22-3333', name: 'Carlos'}}
该名称已从 Carlitos 更改为 Carlos。 如果多个文档匹配替换标准,那么匹配的所有文档都将替换为各自输入数据。
addNew
当没有任何文档与替换标准相匹配时,change API 将查看此标记的值。 如果标记设置为 true,那么 change API 将创建一个新文档并将其添加到存储区中。 否则,不执行任何其他操作。
markDirty
确定 change API 是否将替换或添加的文档标记为脏。
将从适配器返回一个数据数组:
.then(function (responseFromAdapter) {
var accessor = WL.JSONStore.get('people');
var data = responseFromAdapter.responseJSON;
var changeOptions = {
replaceCriteria : ['id', 'ssn'],
addNew : true,
markDirty : false
};
return accessor.change(data, changeOptions);
})
.then(function() {
// ...
})
可以使用其他 API 来跟踪对存储的本地文档进行的更改。 将始终获取对其执行操作的集合的存取器。
var accessor = WL.JSONStore.get('people')
然后,可以添加数据(JSON 对象的数组)并确定是否要将其标记为脏数据。 通常情况下,从外部源获取更改时,您会希望将 markDirty 标记设置为 false。 之后当在本地添加数据时,您会希望将该标记设置为 true。
accessor.add(data, {markDirty: true})
还可以替换文档,并选择是否将替换的文档标记为脏文档。
accessor.replace(doc, {markDirty: true})
同样,可以移除文档,并选择是否将移除文档标记为脏文档。 使用 find API 时,将不再显示移除的文档和标记为脏的文档。 但是,这些文档仍存在于该集合中,直到应用 markClean API(此 API 可从集合中物理删除文档)。 如果未将文档标记为脏文档,那么会从集合中物理删除文档。
accessor.remove(doc, {markDirty: true})
与 CloudantDB 自动同步
从 Mobile Foundation CD 更新 2 开始,MFP JSONStore SDK 可用于自动化设备上的 JSONStore 集合与任何 CouchDB 数据库(包括 Cloudant)之间的数据同步。此功能在 iOS、android、cordova-android 和 cordova-ios 中可用。
在 JSONStore 与 Cloudant 之间设置同步
要在 JSONStore 与 Cloudant 之间设置自动同步,请完成以下步骤:
1. 在移动应用程序上定义同步策略。
2. 在 IBM Mobile Foundation 上部署同步适配器。
定义同步策略
通过同步策略来定义 JSONStore 集合与 Cloudant 数据库之间的同步方法。您可以在应用程序中为每个集合指定同步策略。
可以使用同步策略对 JSONStore 集合进行初始化。同步策略可以是以下三个策略之一:
SYNC_DOWNSTREAM:如果要将 Cloudant 中的数据下载到 JSONStore 集合,请使用此策略。这通常用于脱机存储所需的静态数据。例如,目录中的项目价格表。每次在设备上初始化集合时,都会从远程 Cloudant 数据库刷新数据。首次下载整个数据库时,后续刷新将仅下载变化量,包括在远程数据库上所做的更改。
Android:
initOptions.setSyncPolicy(JSONStoreSyncPolicy.SYNC_DOWNSTREAM);
iOS:
openOptions.syncPolicy = SYNC_DOWNSTREAM;
Cordova:
collection.sync = {
syncPolicy:WL.JSONStore.syncOptions.SYNC_DOWNSTREAM
}
SYNC_ UPSTREAM:如果要将本地数据推送到 Cloudant 数据库,请使用此策略。例如,将脱机捕获的销售数据上载到 Cloudant 数据库。使用 SYNC_UPSTREAM 策略定义集合时,添加到集合的任何新记录都会在 Cloudant 中创建新记录。同样,在设备上的集合中修改的任何文档都将修改 Cloudant 上的文档,并且集合中删除的文档也将从 Cloudant 数据库中删除。
Android:
initOptions.setSyncPolicy(JSONStoreSyncPolicy.SYNC_UPSTREAM);
iOS:
openOptions.syncPolicy = SYNC_UPSTREAM;
Cordova:
collection.sync = {
syncPolicy:WL.JSONStore.syncOptions.SYNC_UPSTREAM
}
SYNC_NONE:这是缺省策略。选择此策略以便不进行同步。
要点:同步策略归因于 JSONStore 集合。在使用特定的同步策略对集合进行初始化之后,不应再更改该策略。修改同步策略可能会导致不良后果。
syncAdapterPath
这是部署的适配器名称。
Android:
//Here "JSONStoreCloudantSync" is the name of the sync adapter deployed on MFP server.
initOptions.syncAdapterPath = "JSONStoreCloudantSync";
iOS:
openOptions.syncAdapterPath = "JSONStoreCloudantSync";
Cordova:
collection.sync = {
syncPolicy://One of the three sync policies,
syncAdapterPath : 'JSONStoreCloudantSync'
}
部署同步适配器
此处下载 JSONStoreSync 适配器,在路径“src/main/adapter-resources/adapter.xml”中配置 cloudant 凭证,然后在 MobileFirst Server 中进行部署。 通过 mfpconsole 将凭证配置到后端 Cloudant 数据库,如下所示:
配置 Cloudant Cloudant 凭证
使用此功能之前要考虑以下几点
此功能仅可用于 Android、iOS、cordova-android 和 cordova-ios。
JSONStore 集合名称与 CouchDB 数据库名称必须相同。在命名 JSONStore 集合之前,请仔细查看 CouchDB 数据库命名语法。
在 Android 中,将同步回调侦听器定义为 init 选项的一部分,如下所示:
//import com.worklight.jsonstore.api.JSONStoreSyncListener;
JSONStoreSyncListener syncListener = new JSONStoreSyncListener() {
@Override
public void onSuccess(JSONObject json) {
//Implement success action
}
@Override
public void onFailure(JSONStoreException ex) {
//Implement failure action
}
};
initOptions.setSyncListener(syncListener);
在 iOS 中,使用过载 opencollections api,该 api 具有一个完成处理程序来启用同步(包括回调),如下所示:
JSONStore.sharedInstance().openCollections([collections], with: options, completionHandler: { (success, msg) in
self.logMessage("msg is : " + msg!);
//success flag is true if the sync succeeds, else on failure it is false and the message from the SDK is available through 'msg' argument.
})
在 Cordova 中,将成功和失败回调定义为同步对象的一部分,如下所示:
function onsuccess(msg) {
//Implement success action
}
function onfailure(msg) {
//Implement failure action
}
collection.sync = {
syncPolicy : WL.JSONStore.syncOptions.SYNC_UPSTREAM,
syncAdapterPath : 'JSONStoreCloudantSync',
onSyncSuccess : onsuccess,
onSyncFailure : onfailure
};
只能使用允许的同步策略之一(即 SYNC_DOWNSTREAM、SYNC_UPSTREAM 或 SYNC_NONE)来定义 JSONStoreCollection。
如果必须在显式初始化后的任何时间执行上游或下游同步,那么可以使用以下 API:
sync()
如果调用集合的同步策略设置为“SYNC_ DOWNSTREAM”,那么这将执行下游同步。否则,如果同步策略设置为“SYNC_ UPSTREAM”,那么将执行从 jsonstore 到 Cloudant 数据库的已添加、已删除和已替换文档的上游同步。
Android:
WLJSONStore.getInstance(context).getCollectionByName(collection_name).sync();
iOS:
collection.sync(); //Here collection is the JSONStore collection object that was initialized
Cordova:
WL.JSONStore.get(collectionName).sync();
注:sync api 的成功和失败回调将被触发到同步侦听器(在 Android 中)、完成处理程序(在 IOS 中)和集合初始化期间声明的已定义回调(在 Cordova 中)。
推送
许多系统使用词汇推送来指代将数据发送到外部源。
有三个重要部分:
内部数据源 API
此源是 JSONStore API,用于返回包含仅本地更改(脏)的文档。
传输层
此源是您希望联系外部数据源以发送更改的方式。
外部数据源
此源通常是数据库、REST 或 SOAP 端点等等,用于接收客户机对数据进行的更新。
以下所有代码示例都以类似于 JavaScript 的伪码来编写。
注:针对传输层使用适配器。 使用适配器的优点包括服务器端代码和客户机端代码的 XML 到 JSON 转换、安全性、过滤和重复数据删除。
内部数据源 API:JSONStore
具有集合的存取器后,您可以调用 getAllDirty API 以获取标记为脏的所有文档。 这些文档中包含您要通过传输层发送到外部数据源的仅限本地更改。
var accessor = WL.JSONStore.get('people');
accessor.getAllDirty()
.then(function (dirtyDocs) {
// ...
});
dirtyDocs 参数与以下示例类似:
[{_id: 1,
json: {id: 1, ssn: '111-22-3333', name: 'Carlos'},
_operation: 'add',
_dirty: '1395774961,12902'}]
字段包括:
• _id:JSONStore 使用的内部字段。 会为每个文档分配唯一的内部字段。
• json:存储的数据。
• _operation:在文档上执行的上一个操作。 可能的值为 add、store、replace 和 remove。
• _dirty:存储为数字的时间戳记,其表示文档标记为脏的时间。
传输层:MobileFirst 适配器
您可以选择将脏文档发送到适配器。 假设您具有使用 updatePeople 过程定义的 people 适配器。
.then(function (dirtyDocs) {
var adapter = 'people',
procedure = 'updatePeople';
var resource = new WLResourceRequest('/adapters/' + adapter + '/' + procedure, WLResourceRequest.GET)
resource.setQueryParameter('params', [dirtyDocs]);
return resource.send();
})
.then(function (responseFromAdapter) {
// ...
})
注:您可能想要利用可传递到 WLResourceRequest API 的 compressResponsetimeout 以及其他参数。
在 MobileFirst Server 上,适配器具有类似以下示例的 updatePeople 过程:
function updatePeople (dirtyDocs) {
var input = {
method : 'post',
path : '/people',
body: {
contentType : 'application/json',
content : JSON.stringify(dirtyDocs)
}
};
return MFP.Server.invokeHttp(input);
}
您可能必须更新有效内容以与后端预期的格式相匹配,而不是从客户机上的 getAllDirty API 传达输出。 可能必须将替换项、移除项以及包含项分割成独立的后端 API 调用。
或者,可以迭代 dirtyDocs 数组并检查 _operation 字段。 然后,将替换项发送到某一过程,移除项发送到另一过程,包含项发送到其他过程。 先前的示例将所有脏文档成批发送到适配器。
var len = dirtyDocs.length;
var arrayOfPromises = [];
var adapter = 'people';
var procedure = 'addPerson';
var resource;
while (len--) {
var currentDirtyDoc = dirtyDocs[len];
switch (currentDirtyDoc._operation) {
case 'add':
case 'store':
resource = new WLResourceRequest('/adapters/people/addPerson', WLResourceRequest.GET);
resource.setQueryParameter('params', [currentDirtyDoc]);
arrayOfPromises.push(resource.send());
break;
case 'replace':
case 'refresh':
resource = new WLResourceRequest('/adapters/people/replacePerson', WLResourceRequest.GET);
resource.setQueryParameter('params', [currentDirtyDoc]);
arrayOfPromises.push(resource.send());
break;
case 'remove':
case 'erase':
resource = new WLResourceRequest('/adapters/people/removePerson', WLResourceRequest.GET);
resource.setQueryParameter('params', [currentDirtyDoc]);
arrayOfPromises.push(resource.send());
}
}
$.when.apply(this, arrayOfPromises)
.then(function () {
var len = arguments.length;
while (len--) {
// Look at the responses in arguments[len]
}
});
或者,您可以跳过适配器并直接联系 REST 端点。
.then(function (dirtyDocs) {
return $.ajax({
type: 'POST',
url: 'http://example.org/updatePeople',
data: dirtyDocs
});
})
.then(function (responseFromEndpoint) {
// ...
});
外部数据源:后端 REST 端点
后端将接受或拒绝更改,然后将响应传达回到客户机。 客户机看到响应后,可以将更新的文档传递给 markClean API。
.then(function (responseFromAdapter) {
if (responseFromAdapter is successful) {
WL.JSONStore.get('people').markClean(dirtyDocs);
}
})
.then(function () {
// ...
})
文档标记为干净后,将不会显示在 getAllDirty API 的输出中。
故障排除
有关更多信息,请参阅 JSONStore 故障诊断部分。
API 用法
选择平台:
Last modified on June 01, 2020
Allowing visitors to only submit a form once using Chronoforms 3
1. Click on the form name under the Forms Management tab and ensure that Data storage is enabled
2. Next click on the Validation Tab
3. Scroll down to the Server Side Validation and set Enable Server Side Validation to yes
4. In the text area Server Side Validation Code add the following code
<?php
$db =& JFactory::getDBO();
$query = "
SELECT count(*)
FROM `jos_chronoforms_table`
WHERE `text_9` = ".$db->Quote($_POST['text_9']).";
";
$db->setQuery($query);
if ( $db->loadResult() ) {
return "You've already entered this competition";
}
?>
Replace jos_chronoforms_table with the table name where you are storing the form submission data.
In the code above we validate the user against the text_9 column in the database table jos_chronoforms_table; if the value entered already exists in the text_9 column of jos_chronoforms_table, it means the user has already submitted the form.
Our text_9 field in the above example contains the email address, so if the entered email address exists in the table it means that the user already submitted the form.
The global variable $_POST['text_9'] contains the value the user entered in the form field text_9.
You may want to replace text_9 with your own column to check against.
About the author
Ian Carnaghan
I am a software developer and online educator who likes to keep up with all the latest in technology. I also manage cloud infrastructure, continuous monitoring, DevOps processes, security, and continuous integration and deployment.
Kieran Barker • 14,976 Points
.prop() vs .attr()
1. Why does Dave use jQuery's .prop() method in one instance, and then .attr() in another to accomplish the same thing? What's the difference? Surely it would be better to be consistent?
2. Why does setting .prop("disabled", false) or .attr("disabled", false) work? My understanding was that the mere presence of the disabled attribute turns it on, so disabled, disabled="", disabled="true", disabled="false", and disabled="sandwich" would all make something disabled. I thought you had to actually remove the attribute altogether. Or is this how jQuery handles it in this case, specifically?
2 Answers
I had the same question and did not find the jQuery docs very useful in answering it. I found several discussions of how properties and attributes are different, but what helped me understand it was to realize that in the first case of $searchField.prop("disabled", true) we are setting the property of that field to be disabled. In the second case, $submitButton.attr("disabled", true).val("searching ..."); we are disabling the submit button AND we are changing the value attribute from "Search" to "searching ...". Hope that is helpful.
Nour El-din El-helw • 8,228 Points
I think the jQuery docs will answer your questions. :D
Jeff Sanders • Pro Student 7,101 Points
I understand why this comment got down votes because one's gut reaction as a new learner is to react negatively to someone who suggests that you search for your own answers; however, there is wisdom to his suggestion.
1. Things change, including how jQuery functions work. If you rely on static responses, you could be learning deprecated solutions to a problem. The jQuery docs will ALWAYS be up to date and therefore they are a more reliable source of information.
2. If you don't research answers yourself, you'll never get a full picture of how jQuery (or any other programming language or library) works. There are almost always several ways to do something. This case is no different.
3. The jQuery docs do have a better explanation for the difference between .attr() and .prop() than anything posted here so far. In fact, there is a section called "Attributes vs. Properties". Imagine that.
Something from a Stack Overflow thread: "Attributes are defined by HTML. Properties are defined by DOM. Some HTML attributes have 1:1 mapping onto properties. id is one example of such. Some do not (e.g. the value attribute specifies the initial value of an input, but the value property specifies the current value)."
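To make the distinction concrete, here is a small illustrative snippet (assuming a checkbox written in the HTML as <input type="checkbox" id="agree" checked>):

var $box = $('#agree');

$box.prop('checked');          // true — the live DOM property (current state)
$box.attr('checked');          // "checked" — what the original markup said

$box.prop('checked', false);   // un-ticks the box
$box.prop('checked');          // false — the property follows the user-visible state
$box.attr('checked');          // still "checked" — the attribute keeps the initial value

// For boolean attributes like disabled, jQuery treats a false value specially:
// .attr('disabled', false) removes the attribute rather than writing disabled="false",
// which is why both .prop('disabled', false) and .attr('disabled', false) re-enable the element.
$box.prop('disabled', true);   // disable
$box.prop('disabled', false);  // enable again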
GRPCK(8)                  System Management Commands                  GRPCK(8)

NAME
       grpck - verify the integrity of group files

SYNOPSIS
       grpck [options] [group [ shadow ]]

DESCRIPTION
       The grpck command verifies the integrity of the groups information. It
       checks that all entries in /etc/group and /etc/gshadow have the proper
       format and contain valid data. The user is prompted to delete entries
       that are improperly formatted or which have other uncorrectable errors.

       Checks are made to verify that each entry has:

       o  the correct number of fields
       o  a unique and valid group name
       o  a valid group identifier (/etc/group only)
       o  a valid list of members and administrators
       o  a corresponding entry in the /etc/gshadow file (respectively
          /etc/group for the gshadow checks)

       The checks for correct number of fields and unique group name are
       fatal. If an entry has the wrong number of fields, the user will be
       prompted to delete the entire line. If the user does not answer
       affirmatively, all further checks are bypassed. An entry with a
       duplicated group name is prompted for deletion, but the remaining
       checks will still be made. All other errors are warnings and the user
       is encouraged to run the groupmod command to correct the error.

       The commands which operate on the /etc/group and /etc/gshadow files
       are not able to alter corrupted or duplicated entries. grpck should be
       used in those circumstances to remove the offending entries.

OPTIONS
       The -r and -s options cannot be combined.

       The options which apply to the grpck command are:

       -h, --help
           Display a help message and exit.

       -r, --read-only
           Execute the grpck command in read-only mode. This causes all
           questions regarding changes to be answered no without user
           intervention.

       -R, --root CHROOT_DIR
           Apply changes in the CHROOT_DIR directory and use the
           configuration files from the CHROOT_DIR directory. Only absolute
           paths are supported.

       -s, --sort
           Sort entries in /etc/group and /etc/gshadow by GID.

       -S, --silence-warnings
           Suppress more controversial warnings, in particular warnings about
           inconsistency between group members listed in /etc/group and
           /etc/gshadow.

       By default, grpck operates on /etc/group and /etc/gshadow. The user
       may select alternate files with the group and shadow parameters.

CONFIGURATION
       The following configuration variables in /etc/login.defs change the
       behavior of this tool:

       MAX_MEMBERS_PER_GROUP (number)
           Maximum members per group entry. When the maximum is reached, a
           new group entry (line) is started in /etc/group (with the same
           name, same password, and same GID). The default value is 0,
           meaning that there are no limits in the number of members in a
           group.

           This feature (split group) permits to limit the length of lines in
           the group file. This is useful to make sure that lines for NIS
           groups are not larger than 1024 characters. If you need to enforce
           such limit, you can use 25.

           Note: split groups may not be supported by all tools (even in the
           Shadow toolsuite). You should not use this variable unless you
           really need it.

FILES
       /etc/group
           Group account information.

       /etc/gshadow
           Secure group account information.

       /etc/passwd
           User account information.

EXIT VALUES
       The grpck command exits with the following values:

       0   success
       1   invalid command syntax
       2   one or more bad group entries
       3   can't open group files
       4   can't lock group files
       5   can't update group files

SEE ALSO
       group(5), groupmod(8), gshadow(5), passwd(5), pwck(8), shadow(5).

shadow-utils 4.16.0             02/07/2024                            GRPCK(8)
Is there a way to convert an enum to a list that contains all the enum's options?
The ins and outs of C# enums describes how to do this (same answer as below) as well as a bunch of other common enum operations. – ChaseMedallion Jul 20 at 20:05
15 Answers
Accepted answer (360 votes)
This will return an IEnumerable<SomeEnum> of all the values of an Enum.
Enum.GetValues(typeof(SomeEnum)).Cast<SomeEnum>();
If you want that to be a List<SomeEnum>, just add .ToList() after .Cast<SomeEnum>().
To use the Cast function on an Array you need to have the System.Linq in your using section.
Actually the result of Cast<T>() is an IEnumerable<T> so if you want an array you would have to change your line to: var array = Enum.GetValues(typeof(SomeEnum)).Cast<SomeEnum>().ToArray(); – Jake Pearson Aug 13 '10 at 14:14
that's superfluous, and will mean extra copying. Enum.GetValues returns an array already, so you just have to do var values = (SomeEnum[])Enum.GetValues(typeof(SomeEnum)) – thecoop Nov 9 '10 at 2:33
if you wanna just values then do the cast again : Enum.GetValues(typeof(SomeEnum)).Cast<SomeEnum>().Cast<int>().ToList() – Wahid Bitar Jun 14 '11 at 14:18
if its an enum of int values... Enum.GetValues(typeof(EnumType)).Cast<int>().ToArray(); – JGilmartin Feb 29 '12 at 13:02
To get the list of names you use the GetNames method of the Enum object the same way as you do for the GetValues. It will return an array of strings containing the names of each element in the Enum. – David Parvin Aug 19 '13 at 21:05
Much easier way:
Enum.GetValues(typeof(SomeEnum))
.Cast<SomeEnum>()
.Select(v => v.ToString())
.ToList();
Why the ToList() between Cast and Select? And how is that much easier than the accepted answer? It's identical to it, except you convert to string in the end. – CodesInChaos Dec 5 '10 at 11:44
Just compare the amount of code for this very simple operation. Besides this is more a .NETy solution to this problem. Agree on the ToList(). – Gili Dec 8 '10 at 12:29
List <SomeEnum> theList = Enum.GetValues(typeof(SomeEnum)).Cast<SomeEnum>().ToList();
This allocates two collections to hold the values and discards one of those collections. See my recent answer. – Jeppe Stig Nielsen May 6 '13 at 19:44
Thanks for providing an alternative that was never answered before. – nawfal Jun 8 '13 at 21:15
The short answer is, use:
(SomeEnum[])Enum.GetValues(typeof(SomeEnum))
If you need that for a local variable, it's var allSomeEnumValues = (SomeEnum[])Enum.GetValues(typeof(SomeEnum));.
Why is the syntax like this?!
The static method GetValues was introduced back in the old .NET 1.0 days. It returns a one-dimensional array of runtime type SomeEnum[]. But since it's a non-generic method (generics was not introduced until .NET 2.0), it can't declare its return type (compile-time return type) as such.
.NET arrays do have a kind of covariance, but because SomeEnum will be a value type, and because array type covariance does not work with value types, they couldn't even declare the return type as an object[] or Enum[]. (This is different from e.g. this overload of GetCustomAttributes from .NET 1.0 which has compile-time return type object[] but actually returns an array of type SomeAttribute[] where SomeAttribute is necessarily a reference type.)
Because of this, the .NET 1.0 method had to declare its return type as System.Array. But I guarantee you it is a SomeEnum[].
Everytime you call GetValues again with the same enum type, it will have to allocate a new array and copy the values into the new array. That's because arrays might be written to (modified) by the "consumer" of the method, so they have to make a new array to be sure the values are unchanged. .NET 1.0 didn't have good read-only collections.
If you need the list of all values many different places, consider calling GetValues just once and cache the result in read-only wrapper, for example like this:
public static readonly ReadOnlyCollection<SomeEnum> AllSomeEnumValues
= Array.AsReadOnly((SomeEnum[])Enum.GetValues(typeof(SomeEnum)));
Then you can use AllSomeEnumValues many times, and the same collection can be safely reused.
Why is it bad to use .Cast<SomeEnum>()?
A lot of other answers use .Cast<SomeEnum>(). The problem with this is that it uses the non-generic IEnumerable implementation of the Array class. This should have involved boxing each of the values into an System.Object box, and then using the Cast<> method to unbox all those values again. Luckily the .Cast<> method seems to check the runtime type of its IEnumerable parameter (the this parameter) before it starts iterating through the collection, so it isn't that bad after all. It turns out .Cast<> lets the same array instance through.
If you follow it by .ToArray() or .ToList(), as in:
Enum.GetValues(typeof(SomeEnum)).Cast<SomeEnum>().ToList() // DON'T do this
you have another problem: You create a new collection (array) when you call GetValues and then create yet a new collection (List<>) with the .ToList() call. So that's one (extra) redundant allocation of an entire collection to hold the values.
I ended up here looking for a way to get a List<> from my enum, not an Array. If you only want to loop through your enum this is great, but the .Cast<type>().ToList() provides you with an IEnumerable collection, which is valuable in some situations. – DaveD Mar 10 at 20:44
@DaveD The expression (SomeEnum[])Enum.GetValues(typeof(SomeEnum)) is also IEnumerable and IEnumerable<SomeEnum>, and it is IList and IList<SomeEnum> as well. But if you need to add or remove entries later, so that the length of the list changes, you can copy to a List<SomeEnum>, but that is not the most usual need. – Jeppe Stig Nielsen Mar 10 at 21:13
Here is the way I love, using LINQ:
public class EnumModel
{
public int Value { get; set; }
public string Name { get; set; }
}
public enum MyEnum
{
Name1=1,
Name2=2,
Name3=3
}
public class Test
{
List<EnumModel> enums = ((IEnumerable<EnumModel>)Enum.GetValues(typeof(MyEnum))).Select(c => new EnumModel() { Value = (int)c, Name = c.ToString() }).ToList();
}
Hope it helps
((IEnumerable<EnumModel>)Enum.GetValues should be ((IEnumerable<MyEnum>)Enum.GetValues – Steven Anderson Jun 20 at 5:20
I've always used to get a list of enum values like this:
Array list = Enum.GetValues(typeof (SomeEnum));
This will give you an Array, not List<>. – Shenaniganz Jan 10 '13 at 20:59
Here for usefulness... some code for getting the values into a list, which converts the enum into readable form for the text
public class KeyValuePair
{
public string Key { get; set; }
public string Name { get; set; }
public int Value { get; set; }
public static List<KeyValuePair> ListFrom<T>()
{
var array = (T[])(Enum.GetValues(typeof(T)).Cast<T>());
return array
.Select(a => new KeyValuePair
{
Key = a.ToString(),
Name = a.ToString().SplitCapitalizedWords(),
Value = Convert.ToInt32(a)
})
.OrderBy(kvp => kvp.Name)
.ToList();
}
}
.. and the supporting System.String extension method:
/// <summary>
/// Split a string on each occurrence of a capital (assumed to be a word)
/// e.g. MyBigToe returns "My Big Toe"
/// </summary>
public static string SplitCapitalizedWords(this string source)
{
if (String.IsNullOrEmpty(source)) return String.Empty;
var newText = new StringBuilder(source.Length * 2);
newText.Append(source[0]);
for (int i = 1; i < source.Length; i++)
{
if (char.IsUpper(source[i]))
newText.Append(' ');
newText.Append(source[i]);
}
return newText.ToString();
}
When you say (T[])(Enum.GetValues(typeof(T)).Cast<T>()), looking carefully at the parentheses, we see that you actually cast the return value of Cast<T> to a T[]. That's quite confusing (and maybe surprising it will even work). Skip the Cast<T> call. See my new answer for details. – Jeppe Stig Nielsen May 6 '13 at 19:47
I think that if you are looking to do this, you might want to think if you really should be using an enum or if you should switch to an object that represents w/e your enum is.
very simple answer
Here is a property I use in one of my applications
public List<string> OperationModes
{
get
{
return Enum.GetNames(typeof(SomeENUM)).ToList();
}
}
public class NameValue
{
public string Name { get; set; }
public object Value { get; set; }
}
public static List<NameValue> EnumToList<T>()
{
var array = (T[])(Enum.GetValues(typeof(T)).Cast<T>());
var array2 = Enum.GetNames(typeof(T)).ToArray<string>();
List<NameValue> lst = null;
for (int i = 0; i < array.Length; i++)
{
if (lst == null)
lst = new List<NameValue>();
string name = array2[i];
T value = array[i];
lst.Add(new NameValue { Name = name, Value = value });
}
return lst;
}
Convert Enum To a list more on
Convert Enum To a list
Casting to T[] the return value of Cast<T> is unnecessarily confusing. See my recent answer. – Jeppe Stig Nielsen May 6 '13 at 19:49
/// <summary>
/// Method return a read-only collection of the names of the constants in specified enum
/// </summary>
/// <returns></returns>
public static ReadOnlyCollection<string> GetNames()
{
return Enum.GetNames(typeof(T)).Cast<string>().ToList().AsReadOnly();
}
where T is a type of Enumeration; Add this:
using System.Collections.ObjectModel;
Language[] result = (Language[])Enum.GetValues(typeof(Language))
Already taken.. – nawfal Jun 15 '13 at 12:38
You could use the following generic method:
public static List<T> GetItemsList<T>(this int enums) where T : struct, IConvertible
{
if (!typeof (T).IsEnum)
{
throw new Exception("Type given must be an Enum");
}
return (from int item in Enum.GetValues(typeof (T))
where (enums & item) == item
select (T) Enum.Parse(typeof (T), item.ToString(new CultureInfo("en")))).ToList();
}
You first get the values, then cast each to int, then calls ToString with a weird culture on that int, then parses the string back to type T? Downvoted. – Jeppe Stig Nielsen May 6 '13 at 19:54
Yes cast all values to int for check, does enums contains item, when cast to string to parse enum back. This method more useful with BitMask. CultureInfo not reqired. – Vitall May 7 '13 at 4:21
If you want Enum int as key and name as value, good if you storing the number to database and it is from Enum!
void Main()
{
ICollection<EnumValueDto> list = EnumValueDto.ConvertEnumToList<SearchDataType>();
foreach (var element in list)
{
Console.WriteLine(string.Format("Key: {0}; Value: {1}", element.Key, element.Value));
}
/* OUTPUT:
Key: 1; Value: Boolean
Key: 2; Value: DateTime
Key: 3; Value: Numeric
*/
}
public class EnumValueDto
{
public int Key { get; set; }
public string Value { get; set; }
public static ICollection<EnumValueDto> ConvertEnumToList<T>() where T : struct, IConvertible
{
if (!typeof(T).IsEnum)
{
throw new Exception("Type given T must be an Enum");
}
var result = Enum.GetValues(typeof(T))
.Cast<T>()
.Select(x => new EnumValueDto { Key = Convert.ToInt32(x),
Value = x.ToString(new CultureInfo("en")) })
.ToList()
.AsReadOnly();
return result;
}
}
public enum SearchDataType
{
Boolean = 1,
DateTime,
Numeric
}
private List<SimpleLogType> GetLogType()
{
List<SimpleLogType> logList = new List<SimpleLogType>();
SimpleLogType internalLogType;
foreach (var logtype in Enum.GetValues(typeof(Log)))
{
internalLogType = new SimpleLogType();
internalLogType.Id = (int) (Log) Enum.Parse(typeof (Log), logtype.ToString(), true);
internalLogType.Name = (Log)Enum.Parse(typeof(Log), logtype.ToString(), true);
logList.Add(internalLogType);
}
return logList;
}
in top Code , Log is a Enum and SimpleLogType is a structure for logs .
public enum Log
{
None = 0,
Info = 1,
Warning = 8,
Error = 3
}
Your foreach variable has compile-time type object (written as var), but it really is a Log value (runtime type). There's no need to call ToString and then Enum.Parse. Start your foreach with this instead: foreach (var logtype in (Log[])Enum.GetValues(typeof(Log))) { ... } – Jeppe Stig Nielsen May 6 '13 at 20:01
1—10 of 199 matching pages
1: 31.3 Basic Solutions
H(a, q; α, β, γ, δ; z) denotes the solution of (31.2.1) that corresponds to the exponent 0 at z = 0 and assumes the value 1 there. If the other exponent is not a positive integer, that is, if γ ≠ 0, −1, −2, …, then from §2.7(i) it follows that H(a, q; α, β, γ, δ; z) exists, is analytic in the disk |z| < 1, and has the Maclaurin expansion … Solutions (31.3.1) and (31.3.5)–(31.3.11) comprise a set of 8 local solutions of (31.2.1): 2 per singular point. … For example, H(a, q; α, β, γ, δ; z) is equal to … The full set of 192 local solutions of (31.2.1), equivalent in 8 sets of 24, resembles Kummer's set of 24 local solutions of the hypergeometric equation, which are equivalent in 4 sets of 6 solutions (§15.10(ii)); see Maier (2007).
2: 4.42 Solution of Triangles
3: 31.7 Relations to Other Functions
31.7.1 ₂F₁(α, β; γ; z) = H(1, αβ; α, β, γ, δ; z) = H(0, 0; α, β, γ, α+β+1−γ; z) = H(a, aαβ; α, β, γ, α+β+1−γ; z).
Other reductions of H to a ₂F₁, with at least one free parameter, exist iff the pair (a, p) takes one of a finite number of values, where q = αβp. …
31.7.2 H(2, αβ; α, β, γ, α+β−2γ+1; z) = ₂F₁(½α, ½β; γ; 1 − (1−z)²),
31.7.3 H(4, αβ; α, β, ½, ⅔(α+β); z) = ₂F₁(⅓α, ⅓β; ½; 1 − (1−z)²(1 − ¼z)),
31.7.4 H(½ + i√3/2, αβ(½ + i√3/6); α, β, ⅓(α+β+1), ⅓(α+β+1); z) = ₂F₁(⅓α, ⅓β; ⅓(α+β+1); 1 − (1 − (3/2 − i√3/2)z)³).
4: 31.5 Solutions Analytic at Three Singularities: Heun Polynomials
31.5.2 Hp_{n,m}(a, q_{n,m}; −n, β, γ, δ; z) = H(a, q_{n,m}; −n, β, γ, δ; z)
5: 31.1 Special Notation
The main functions treated in this chapter are H(a, q; α, β, γ, δ; z), (s₁, s₂)Hf_m(a, q_m; α, β, γ, δ; z), (s₁, s₂)Hf_m^ν(a, q_m; α, β, γ, δ; z), and the polynomial Hp_{n,m}(a, q_{n,m}; −n, β, γ, δ; z).
6: 31.9 Orthogonality
31.9.3 θ_m = (1 − e^{2πiγ})(1 − e^{2πiδ}) ζ^γ (1 − ζ)^δ (ζ − a)^ε f_0(q, ζ) f_1(q, ζ) / (∂_q 𝒲{f_0(q, ζ), f_1(q, ζ)}) |_{q = q_m},
f_0(q_m, z) = H(a, q_m; α, β, γ, δ; z),
f_1(q_m, z) = H(1 − a, αβ − q_m; α, β, δ, γ; 1 − z),
31.9.6 ρ(s, t) = (s − t)(st)^{γ−1} ((s − 1)(t − 1))^{δ−1} ((s − a)(t − a))^{ε−1},
7: 2.6 Distributional Methods
Let f(t) be locally integrable on [0, ∞). The Stieltjes transform of f(t) is defined by … Since f(t) is locally integrable on [0, ∞), it defines a distribution by … In terms of the convolution product … of two locally integrable functions on [0, ∞), (2.6.33) can be written …
8: 28.19 Expansions in Series of me_{ν+2n} Functions
28.19.2 f(z) = Σ_{n=−∞}^{∞} f_n me_{ν+2n}(z, q),
28.19.3 f_n = (1/π) ∫₀^π f(z) me_{ν+2n}(−z, q) dz.
9: 33.11 Asymptotic Expansions for Large ρ
10: 18.14 Inequalities
§18.14(iii) Local Maxima and Minima
Jacobi
Laguerre
Javascript Dynamic Div Hover Menu Doctype Problem
I can get this to work in IE or Firefox without the doctype specified, but I need it for the other pages in my site. Is there a way to fix it so IE and Firefox will both display the hover divs correctly? The div should display below the text, but with the doctype I'm using it always shows in the upper left-hand corner. Thanks.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<head>
<script language="javascript" type="text/javascript">
//**************************************************************************************************************
//positioning functions*************************************************************************************
function getAnchorWindowPosition(anchorname)
{
var coordinates=getAnchorPosition(anchorname);
var x=0;
var y=0;
if(document.getElementById)
{
if(isNaN(window.screenX))
{
//ie likes this
this.x=coordinates.x;
this.y=coordinates.y + 16; //added the px height of font to display directly under
}
else
{
this.x=coordinates.x;
this.y=coordinates.y + 12; //added the px height of font to display directly under
}
}
}
function getAnchorPosition(anchorname)
{
var useWindow=false;
var coordinates=new Object();
var x=0,y=0;
coordinates.x=AnchorPosition_getPageOffsetLeft(document.getElementById(anchorname));
coordinates.y=AnchorPosition_getPageOffsetTop(document.getElementById(anchorname));
return coordinates;
}
function AnchorPosition_getPageOffsetLeft(el)
{
var ol=el.offsetLeft;
while((el=el.offsetParent) != null)
{
ol += el.offsetLeft;
}
return ol;
}
function AnchorPosition_getWindowOffsetLeft(el)
{
return AnchorPosition_getPageOffsetLeft(el)-document.body.scrollLeft;
}
function AnchorPosition_getPageOffsetTop(el)
{
var ot=el.offsetTop;
while((el=el.offsetParent) != null)
{
ot += el.offsetTop;
}
return ot;
}
function AnchorPosition_getWindowOffsetTop(el)
{
return AnchorPosition_getPageOffsetTop(el)-document.body.scrollTop;
}
//********** end of positioning functions*******************************************************************
//**************************************************************************************************************
//**************************************************************************************************************
//********** FUNCTIONS USED BY THE FORM BELOW **************************************************************
var gTopNavTimer;
var prevDiv="";
function ShowExtraLinks(div, el)
{
ResetDivNavTimer();
var ParentPosition = new getAnchorWindowPosition(el);
var divRef = document.getElementById(div);
divRef.style.top = ParentPosition.y;
divRef.style.left = ParentPosition.x;
divRef.style.visibility = "visible";
divRef.style.display = "block";
if(prevDiv!="" && prevDiv!=div){HideExtraLinksTimeout(prevDiv)};
prevDiv=div;
}
function HideExtraLinks(div)
{
gTopNavTimer = window.setTimeout("HideExtraLinksTimeout('" + div + "')", 500);
}
function HideExtraLinksTimeout(div)
{
document.getElementById(div).style.visibility = 'hidden';
document.getElementById(div).style.display = 'none';
prevDiv="";
}
function ResetDivNavTimer()
{
window.clearTimeout(gTopNavTimer);
}
</script>
<style type="text/css">
.ExtraNavLinks1
{
position: absolute;
border-right: solid 1px #000;
border-bottom: solid 1px #000;
border-left: solid 1px #000;
visibility: hidden;
background-color: #676767;
overflow:visible;
padding: 5px 5px 5px 5px;
width:auto!important;
}
.ExtraNavLinks1 a:link, a:visited, a:active
{
color: #FFF;
text-decoration: none;
}
.ExtraNavLinks1 a:hover
{
color: #FFF;
text-decoration: underline;
}
</style>
</head>
<body>
<form>
<div id="divExtraCusLinks1" class="ExtraNavLinks1" onMouseOver="ResetDivNavTimer();" onMouseOut="HideExtraLinks(this.id);">
<table>
<tr>
<td>
<a href="#">Add Customer</a><br />
<a href="#">All Customers</a><br />
<a href="#">Quote Customer</a><br />
<a href="#">New Blank Quote</a><br />
<a href="#">My Hotlist</a>
</td>
</tr>
</table>
</div>
<div id="divExtraInvLinks2" class="ExtraNavLinks1" onMouseOver="ResetDivNavTimer();" onMouseOut="HideExtraLinks(this.id);">
<table>
<tr>
<td>
<a href="#">All Inventory</a><br />
<a href="#">Add Inventory</a><br />
<a href="#">Inventory History</a>
</td>
</tr>
</table>
</div>
<table>
<tr>
<td>
<a href="#" id="">HOME</a>
</td>
<td>
<a href="#" id="TopNavCustomers" onMouseOver="ShowExtraLinks('divExtraCusLinks1', this.id);" onMouseOut="HideExtraLinks('divExtraCusLinks1');">CUSTOMERS</a>
</td>
<td>
<a href="#" id="TopNavInventory" onMouseOver="ShowExtraLinks('divExtraInvLinks2', this.id);" onMouseOut="HideExtraLinks('divExtraInvLinks2');">INVENTORY</a>
</td>
</tr>
</table>
</form>
</body>
</html>
LVL 3
caseyrharrisAsked:
Michel PlungjanIT ExpertCommented:
change
divRef.style.top = ParentPosition.y;
divRef.style.left = ParentPosition.x;
to
divRef.style.top = ParentPosition.y+'px';
divRef.style.left = ParentPosition.x+'px';
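If you set element positions in more than one place, a tiny helper along these lines (an untested sketch, not from the original page) keeps the unit handling in one spot:
function setPosition(el, x, y)
{
    // In standards (doctype) mode CSS lengths need units, so always append 'px'.
    el.style.left = x + 'px';
    el.style.top = y + 'px';
}
// e.g. setPosition(divRef, ParentPosition.x, ParentPosition.y);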
caseyrharrisAuthor Commented:
Doh! These details for doctype are killing me! That fixed it, thanks!
Michel PlungjanIT ExpertCommented:
the px is in my mind mandatory for FF - if you always add them you should be ok...
If you’re wondering why technology is good for society, you’re not alone. This article will highlight some of the positive impacts of technology. These include its effect on health, society, and education. But what’s its negative impact? And why are we so attached to technology? What can we do to mitigate its negative impacts? Let’s explore each of these areas. After all, the benefits of tech may outweigh its drawbacks, so why should we care?
Benefits of Technology
Technology has made many aspects of our lives more convenient, including education. For example, it is now possible to move physical storage units and virtual banks anywhere in the world. It has helped scientists send astronauts to the moon. Besides, technology has made research easier, and companies can now save money by implementing strategic technology trends. With the help of technology, sales can be made much faster and purchases can be made anywhere in the world. In addition, globalization has resulted in increased global tourism.
In addition to its positive impacts on our lives, technology has changed the way we communicate and learn. It has made it easier to collaborate with others and engage with difficult material. Nonetheless, technology can be destructive, or beneficial, depending on how we use it. Some aspects of technology make us feel less private, while others make us constantly upgrade our devices. Still, the benefits of technology are clear:
Teachers can use technology to achieve new levels of productivity and student engagement. Digital tools are transforming classrooms, allowing teachers to enhance learning opportunities, personalize student experience, and expand instruction methods. Technology also improves the efficiency of educational programs, reducing the cost of instructional materials and maximizing teacher time. This means that teachers can better prepare their students for the real world. For parents, technology provides many benefits for educators. It allows them to share information with students and keep their students informed.
Many technologies benefit the environment in a variety of ways. Some of these technologies are made of toxic materials, while others require a power source, which can increase the amount of fossil fuels used. Some technologies even produce toxic materials, such as ‘deepfakes’ – videos that are disguised as real ones. Such misinformation has the potential to undermine elections and democracies, and therefore, are important. Some technologies may also benefit the elderly, but they should not be thrown under the bus.
Impact on Society
There are both positive and negative impacts of technology on society. In general, technology advances help improve human life and lead to greater global progress. However, it can also lead to devastating effects if not used responsibly. As a result, we must strive to use technology responsibly, and in a way that contributes to society’s betterment. Read on to find out about the various negative impacts of technology on society. After reading this article, you’ll be better equipped to make informed decisions about technology and its impact on society.
Technological advances in the world have changed how people live and work. From communications to learning, from traveling to convenience, technology has radically transformed the way people live and work. Those who grew up before computers were popular are now dependent on technology. However, some people don’t see these changes as bad. In addition to these negative effects, technology has also brought about a number of positive changes for society. It creates new opportunities for social interaction and creates new spaces for social interactions.
Impact of Social Changes
While social change is often seen as positive, there are also many negative aspects of technology. While some change can be seen as positive, improvements in human welfare can also have the opposite effect on another individual. This is especially true of the impact of modern technology on society. However, despite the many positive aspects of technology, we should consider the negative consequences of mass-produced tech. In particular, the negative impacts of tech on society can result from social media.
The rise of the digital world has created a huge need for skilled software developers. These skilled individuals are able to create new business models and applications. As the world becomes increasingly digital, many people are displaced from their jobs. In short, technological changes have created opportunities for all segments of society. The number of patents granted each year is expected to increase twenty percent this year alone. This means that technological advances are not only empowering individuals but also transforming the world at large.
Impact on Education
The impact of technology on education has changed the way teachers and students teach and learn. Students now have more access to educational resources through online courses and eBooks, and technology can improve collaborative learning. It can also incorporate different learning styles. Technology also improves motivation and enables self-paced learning. All of these factors help education improve and reach a broader audience. The impact of technology on education is profound, but it is not without its risks.
It is important to remember that the impact of technology on education depends largely on how the money is spent. It is not sufficient to buy expensive technology and equip all classrooms with it. School districts should invest in professional development to make teachers more proficient with modern computer technologies; simply spending money on yet more technology will not help if the existing tools are not used effectively. Used well, new technologies in classrooms can increase student achievement and effectively reduce class size.
There are countless examples of how technology has improved education. For example, technology has helped make textbooks more accessible to everyone. This makes it possible for schools to print multiple copies of the same text. Technology has also made reading novels and books easier for everyone. Other inventions of technology have rapidly advanced and become useful in educational classrooms, such as typewriters and computers. The World Economic Forum also notes that children learn better with guidance.
Computers have also improved access to information. Students no longer need to rely on books or chalk for information. With computers, they can access the same library from all over the world. This has made education more flexible and interesting. With the use of computers and mobile devices, students can now create their own websites, blogs, and even videos. The world’s largest libraries are available online. This technology has helped create a global education community that has a high percentage of educated citizens.
Impact on Health
Many of the proposed changes to improve the impact of technology on health care have focused on cost-effectiveness analysis. Other approaches, such as consumer-driven health care and certificate-of-need approval, have been less effective. But all approaches have some drawbacks and are not widely adopted in the U.S. As of this writing, the most popular of these approaches has not had any significant impact on technology-driven health care costs. Cost-effectiveness analysis, on the other hand, involves rigorous, nonbiased studies of a technology’s benefits and costs.
IT platforms are increasingly used to improve patient engagement in the health care process. Such technologies improve health behavior and can increase health care safety and quality. Many of these technologies encourage patient engagement through self-reporting, short-message service (SMS)-capable mobile devices, social media, and Internet-based interventions. The impact of these innovations on health care may be less obvious than the overall effects of the use of IT platforms, but their benefits cannot be overlooked.
Using the results of these evaluations is difficult, however. Case studies on single technologies or diseases often reveal cost-savings, while health-system-wide analyses show cost-increases. New anesthetic agents, for example, have reduced the burden of surgery on patients and have reduced hospital stays and medical errors. The increased efficiency of surgical practices has also reduced medical costs, although some patients find them less attractive. For those who are skeptical, however, the costs of the new technologies may outweigh their benefits.
Regardless of how we view the impact of technology on health and healthcare, there are clear benefits. Technology has revolutionized healthcare. It is helping healthcare organizations monitor and store patient data. This has improved patient outcomes, while allowing health organizations to increase their income. The impact of technology on health services is numerous. The key is finding the right balance between technology and regulation. So, how does technology influence health care? Let’s examine some of these changes and discuss the impact on health.
Impact on Communication
In the last few decades, technological advances have widened the reach of political and activism messages. With cell phone cameras and internet access, photos are now readily available on websites such as YouTube, making it more difficult for oppressive regimes to control the spread of such messages. The power of social networks to organize and coordinate protests is also increasing. The impact of social networks is evident in the recent Egyptian revolution. Many activists used social networking to share their ideas and call for democratic change.
As a result of the impact of technology, companies can exchange information, carry out transactions, and engage with their customers and employees. In addition, telephones allow them to stay connected and engage with clients. With the help of technology, individuals can communicate with vast audiences through blogs, social media, and website creation. All of these features can increase customer satisfaction and loyalty by empowering the people in an organization to respond to their needs in a timely manner.
The use of technology has greatly changed human relationships. While it has made communication easier, it has also reduced many forms of communication. In a globalized world, technology allows us to communicate with anyone in the world at any time. However, technology has also weakened relationships and made people less social. Further, it has eroded nonverbal skills. This lack of interpersonal communication has impacted society’s health, and it has increased isolation.
One of the most significant benefits of technology on communication is that it makes it easier to store and retrieve communications. With a little practice, this process can help clarify misconceptions. Additionally, good communication fosters stronger relationships. Listening carefully and giving good feedback fosters respect and empathy. These are valuable skills to have in a business. It’s important to consider both the positive and negative impacts of technology on communication to make an informed decision.
How to use gitlab cache
Hi there.
I’m trying to figure out how to use the caching feature of the gitlab pipelines.
The setup we are using is very simple.
We use the docker executor.
The default image is just a node:alpine image.
Every stage needs to install some npm packages, so initially we started with yarn install in the before_script.
The file was something like this
default:
image: node:alpine
before_script:
- yarn install
stages:
- audit
- lint
- test
- build
audit:
stage: audit
script:
- yarn audit --level moderate
lint:
stage: lint
script:
- yarn lint
test:
stage: test
script:
- yarn coverage
build:
stage: build
script:
- yarn build
then, once we saw that yarn install spends 4-5 mins each time, we wanted to cache the results and not run it on every step.
yarn.lock is the same, so the output of yarn install is the same as well.
I tried every single combination I found on the internet about cache.
None of them worked.
I tried caching node_modules, I tried using a .yarn directory as the cache, etc.
I tried untracked: true, I tried global cache or adding it in every job / stage, etc.
for example, one of the attempts I made ( I have plenty variations, I won’t post all of them )
default:
image: node:alpine
before_script:
- yarn install
stages:
- audit
- install-packages
- lint
- test
- build
audit:
stage: audit
script:
- yarn audit --level moderate
install-packages:
stage: install-packages
script:
- yarn install
cache:
paths:
- node_modules
untracked: true
policy: pull-push
lint:
stage: lint
script:
- yarn lint
cache:
paths:
- node_modules
policy: pull
test:
stage: test
script:
- yarn coverage
cache:
paths:
- node_modules
policy: pull
build:
stage: build
script:
- yarn build
cache:
paths:
- node_modules
policy: pull
Every time, the result was the same more or less.
The install-packages stage I made would have a message like this at the end:
Saving cache for successful job 00:20
Creating cache default...
node_modules: found 70214 matching files and directories
untracked: found 59929 files
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
Job succeeded
but the very next stage, would say
Checking out bf6cbf54 as tech-debt/test-pipeline-fix...
Removing node_modules/
Skipping Git submodules setup
Restoring cache 00:00
Checking cache for default...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script 00:01
Using docker image sha256:9d..4 for docker.ko...f3 ...
$ yarn lint
yarn run v1.22.5
$ eslint --ext .ts,.tsx ./src/
/bin/sh: eslint: not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: Job failed: exit code 127
and go on to fail because node_modules was not there.
my questions :
• where does that cache go exactly? do we have to set something in the runner config?
• why do I see the Removing node_modules/ in the next stage/job? Isn’t the whole point of the cache to add/mount that directory? why is it removing it?
I looked at the docs and I cannot find any answer, unless I’m missing something.
In paper, it looks so simple.
Mark cache directory path, push or pull, done.
But it’s not working. I must be doing something wrong.
Any help would be appreciated.
Thank you.
So, a cache is for things like dependencies that you install for the pipeline. For example, you might need to install Node, so that your pipeline jobs can run npm.
What you need is to pass artifacts between the pipeline stages. e.g.
install-packages:
stage: install-packages
script:
- yarn install
artifacts:
paths:
- node_modules
expire_in: 2 week
You will probably want to read about when artifacts are deleted and the related sections on how to keep the latest pipeline.
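As a rough sketch (the job and stage names here are only examples), a later job can then list the install job as a dependency so the node_modules artifact is restored before its script runs:
lint:
  stage: lint
  dependencies:
    - install-packages
  script:
    - yarn lint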
1 Like
The gitlab documentation specifically mentions caching node modules with cache: in their examples:
https://docs.gitlab.com/ee/ci/caching/
We’ve tried several incarnations of this setup, but none of them seems to be able to restore the cache for the current or subsequent runs of the pipeline. Any advice on how to debug this would be welcome, like how can we inspect the cache after a pipeline is complete, or at the beginning of a new pipeline?
Hi,
With the Docker executor, the cache from jobs is stored in the /cache dir inside the container itself. Since the container is ephemeral, this is not stored on the Docker host out of the box. You need to configure your GitLab Runner for a local cache. Make sure you have something like this in your config.toml (not including other options):
[[runners]]
cache_dir = "/cache"
[runners.docker]
disable_cache = false
cache_dir = ""
volumes = ['/cache']
There is also option to use distributed cache.
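With the runner set up like that, a minimal pipeline-side cache block (just a sketch, assuming a yarn.lock in the repo and a reasonably recent GitLab that supports cache:key:files) can key the cache on the lockfile so all jobs and later pipelines reuse the same node_modules:
cache:
  key:
    files:
      - yarn.lock
  paths:
    - node_modules/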
2 Likes
that was the issue!
works great now.
Thank you @balonik
Integration Full Chapter Explained - Integration Class 12 - Everything you need
1. Chapter 7 Class 12 Integrals
2. Serial order wise
Transcript
Ex 7.7, 12 (Supplementary NCERT): ∫ x√(x + x²) dx
Step 1: Write x in the form x = A·d/dx(x + x²) + B = A(1 + 2x) + B = 2Ax + (A + B). Comparing coefficients: 2A = 1, so A = 1/2; and A + B = 0, so B = −1/2. Thus x = (1/2)(1 + 2x) − 1/2.
Step 2: Integrate: ∫ x√(x + x²) dx = (1/2)∫(1 + 2x)√(x + x²) dx − (1/2)∫√(x + x²) dx = I₁ − I₂.
Solving I₁: put t = x + x², so dt = (1 + 2x) dx, and I₁ = (1/2)∫√t dt = (1/2)·t^(3/2)/(3/2) + C₁ = (x + x²)^(3/2)/3 + C₁.
Solving I₂: I₂ = (1/2)∫√(x² + x) dx = (1/2)∫√((x + 1/2)² − (1/2)²) dx. Using ∫√(x² − a²) dx = (x/2)√(x² − a²) − (a²/2) log|x + √(x² − a²)| + C with x → x + 1/2 and a = 1/2, this gives I₂ = ((2x + 1)/8)√(x² + x) − (1/16) log|x + 1/2 + √(x² + x)| + C₃.
Therefore ∫ x√(x + x²) dx = (x + x²)^(3/2)/3 − ((2x + 1)/8)√(x² + x) + (1/16) log|x + 1/2 + √(x² + x)| + C.
About the Author
Davneet Singh
Davneet Singh is a graduate from Indian Institute of Technology, Kanpur. He has been teaching from the past 9 years. He provides courses for Maths and Science at Teachoo.
import pylab as plt
import numpy as np
from deltasigma import *

OSR = 32
order = 5
H_inf = 1.2
plt.figure(figsize=(12,6))
H0 = synthesizeNTF(order, OSR, 1, H_inf)
H1 = synthesizeChebyshevNTF(order, OSR, 0, H_inf)

# 1. Plot the singularities.
plt.subplot(121)
# we plot the singularities of the optimized NTF in light
# green with slightly bigger markers so that we can better
# distinguish the two NTF's when overlayed.
plotPZ(H1, markersize=7, color='#90EE90')
plt.hold(True)
plotPZ(H0, markersize=5)
plt.title('NTF Poles and Zeros')

f = np.concatenate((np.linspace(0, 0.75/OSR, 100), np.linspace(0.75/OSR, 0.5, 100)))
z = np.exp(2j*np.pi*f)
magH0 = dbv(evalTF(H0, z))
magH1 = dbv(evalTF(H1, z))

# 2. Plot the magnitude responses.
plt.subplot(222)
plt.plot(f, magH0, label='synthesizeNTF')
plt.hold(True)
plt.plot(f, magH1, label='synthesizeChebyshevNTF')
figureMagic([0, 0.5], 0.05, None, [-80, 20], 10, None)
plt.xlabel('Normalized frequency ($1\\rightarrow f_s)$')
plt.ylabel('dB')
plt.legend(loc=4)
plt.title('NTF Magnitude Response')

# 3. Plot the magnitude responses in the signal band.
plt.subplot(224)
fstart = 0.01
f = np.linspace(fstart, 1.2, 200)/(2*OSR)
z = np.exp(2j*np.pi*f)
magH0 = dbv(evalTF(H0, z))
magH1 = dbv(evalTF(H1, z))
plt.semilogx(f*2*OSR, magH0, label='synthesizeNTF')
plt.hold(True)
plt.semilogx(f*2*OSR, magH1, label='synthesizeChebyshevNTF')
plt.axis([fstart, 1, -60, -20])
plt.grid(True)
sigma_H0 = dbv(rmsGain(H0, 0, 0.5/OSR))
sigma_H1 = dbv(rmsGain(H1, 0, 0.5/OSR))
plt.semilogx([fstart, 1], sigma_H0*np.array([1, 1]), linewidth=3, color='#191970')
plt.text(0.15, sigma_H0 + 1.5, 'RMS gain = %5.0fdB' % sigma_H0)
plt.semilogx([fstart, 1], sigma_H1*np.array([1, 1]), linewidth=3, color='#228B22')
plt.text(0.15, sigma_H1 + 1.5, 'RMS gain = %5.0fdB' % sigma_H1)
plt.xlabel('Normalized frequency ($1\\rightarrow f_B$)')
plt.ylabel('dB')
plt.legend(loc=3)
plt.tight_layout()
Polylang
[resolved] pll_register_string problem (11 posts)
1. jussi.r
Member
Posted 1 year ago #
Hey,
I'm using Polylang 0.9.3 and I've made a simple widget plugin to list my posts of different post types (I use the Types plugin too). You can set a title and a description text for the widget, but of course only the widget's title is visible in the "Strings translation" tab of the Polylang settings.
So I'm trying to use pll_register_string function. I use it at the "update" function what is called when you save the widget's settings. The problem is that the string doesn't come visible to the "Strings translation" list and I don't know why.
Here is my plugin code (sorry, there is a lot of code you don't need to see):
<?php
/*
Plugin Name: Post list
Plugin URI:
Description: Widget to list posts with specified post type
Author: nnn
Version: 1000
Author URI:
*/
class post_list extends WP_Widget
{
function post_list()
{
$widget_ops = array('classname' => 'post_list', 'description' => 'Post types: press-releases, media-tours, media-event' );
$this->WP_Widget('post_list', 'Post list', $widget_ops);
}
function form($instance)
{
$instance = wp_parse_args( (array) $instance, array( 'title' => '','max_posts'=>2,'post_type'=>'post', "text"=>'' ) );
$title = $instance['title'];
$max_posts = $instance['max_posts'];
$post_type = $instance['post_type'];
$text = $instance['text'];
?>
<p><label for="<?php echo $this->get_field_id('title'); ?>">Title: <input class="widefat" id="<?php echo $this->get_field_id('title'); ?>" name="<?php echo $this->get_field_name('title'); ?>" type="text" value="<?php echo attribute_escape($title); ?>" /></label>
<p><label for="<?php echo $this->get_field_id('text'); ?>">Text: <input class="widefat" id="<?php echo $this->get_field_id('text'); ?>" name="<?php echo $this->get_field_name('text'); ?>" type="text" value="<?php echo attribute_escape($text); ?>" /></label>
<p><label for="<?php echo $this->get_field_id('max_posts'); ?>">Max. posts: <input class="widefat" id="<?php echo $this->get_field_id('max_posts'); ?>" name="<?php echo $this->get_field_name('max_posts'); ?>" type="text" value="<?php echo attribute_escape($max_posts); ?>" /></label></p>
<p><label for="<?php echo $this->get_field_id('post_type'); ?>">Post type: <input class="widefat" id="<?php echo $this->get_field_id('post_type'); ?>" name="<?php echo $this->get_field_name('post_type'); ?>" type="text" value="<?php echo attribute_escape($post_type); ?>" /></label></p>
<?php
}
function makeEventDate($start,$end){
$start = strtotime($start,true);
$end = strtotime($end,true);
$startMonthYear = strtotime(date("Y-m",$start) . "-01",true);
$endMonthYear = strtotime(date("Y-m",$end) . "-01",true);
$startYear=date("Y",$start);
$endYear=date("Y",$end);
if($startYear<$endYear){
$startDate = date_i18n("F d, Y",$start);
}else{
$startDate = date_i18n("F d",$start);
}
if($endMonthYear>$startMonthYear){
$endDate = date_i18n("F d",$end);
}else{
$endDate = date("d",$end);
}
return $startDate . " - " . $endDate . ", " . $endYear;
}
function dateFormat($postID, $type){
$date="";
if($type == "media-tours"){
$date = $this->makeEventDate(get_post_meta($postID, 'start_date', true),get_post_meta($postID, 'end_date', true));
}else{
$date = get_post_meta($postID, 'date', true);
$date = strtotime($date,true);
$date = date_i18n("F d, Y",$date);
}
return $date;
}
function urlGet($postID, $type){
$URL="";
if($type=="media-follow-up"){
$URL=get_post_meta($postID, 'url', true);
}else{
$URL=get_permalink($postID);
}
return $URL;
}
function update($new_instance, $old_instance)
{
$instance = $old_instance;
$instance['title'] = $new_instance['title'];
$instance['max_posts'] = $new_instance['max_posts'];
$instance['post_type'] = $new_instance['post_type'];
$instance['text'] = $new_instance['text'];
pll_register_string("Post list text",$instance['text'] );
return $instance;
}
function widget($args, $instance)
{
extract($args, EXTR_SKIP);
echo $before_widget;
$title = empty($instance['title']) ? ' ' : apply_filters('widget_title', $instance['title']);
$max_posts = $instance['max_posts'];
$post_type = $instance['post_type'];
$text = $instance['text'];
echo '<div class="post_list ' . $post_type . '">'
. '<div class="' . $post_type . '_icon icon"></div>'
. '<div class="description">'
. $before_title . $title . $after_title
. '<i>'
. pll__($text)
. '</i></div><div class="posts">';
$args = array( 'post_type' => $post_type, 'numberposts'=>$max_posts );
$recent_posts = wp_get_recent_posts( $args );
foreach( $recent_posts as $recent ){
$post = get_post( $recent["ID"] );
echo '<span class="date">' . $this->dateFormat($recent["ID"],$post_type) . "</span><br>"
. '<a href="' . $this->urlGet($recent["ID"],$post_type) . '">' . $recent["post_title"].'</a><br>';
}
echo "</div></div>";
echo $after_widget;
}
}
add_action( 'widgets_init', create_function('', 'return register_widget("post_list");') );?>
I understand the string should become visible in the "Strings translation" tab when I save the plugin's settings.
http://wordpress.org/extend/plugins/polylang/
2. jussi.r
Member
Posted 1 year ago #
So I'm trying to make it possible to translate that $text too.
3. Chouby
Member
Plugin Author
Posted 1 year ago #
In fact, the function pll_register_string has some effect only on the strings translation page and your update function is not called on this page (it should be called only when you update the widget settings)
I would try to add something like this to your constructor (which is called on all pages):
global $polylang;
if (isset($polylang)) {
$settings = $this->get_settings();
pll_register_string("Post list text", $settings['text']);
}
4. jussi.r
Member
Posted 1 year ago #
I thought it saves the string to a database or something.
I have three of those widgets and every one of them has different setting for $text. Hmmm.. How does the widget title go to "Strings translation"?
5. jussi.r
Member
Posted 1 year ago #
Could this be used for this purpose somehow?
http://adambrown.info/p/wp_hooks/hook/widget_text
EDIT: I solved my problem by adding these lines to polylang admin.php
if (isset($widget_settings[$number]['text']) && $title = $widget_settings[$number]['text'])
$this->register_string(__('Widget text', 'polylang'), $title);
6. v4ssi
Member
Posted 2 months ago #
Sorry for necroposting.
How is possible get the last solution without change the admin.php ?
I tried to understand how do it with hooks/override on functions.php but i'm not getting any point.
7. Chouby
Member
Plugin Author
Posted 2 months ago #
Could you better explain what you want to achieve?
8. v4ssi
Member
Posted 2 months ago #
Hello Chouby, thank you for you attention.
I need to have the possibility to modify the "widget-text content" from strings-translations of your awesome plugin.
And the code that jussi.r wrote it's good but force me to change the function directly into the plugin files, that i don't want do.
I tried to achieve this result using hooks/override but today i'm really stunned and i don't arrive to my objective.
And i don't want use forking/svn too.
Thank you.
p.s.
Sry for my english.
9. Chouby
Member
Plugin Author
Posted 2 months ago #
The best in my opinion is to use the possibility to have a widget displayed for only one language (Polylang adds this option in most widgets including the text widget).
Otherwise you can add use this code in a custom plugin or in functions.php
add_filters('pll_get_strings', 'add_widget_text');
function add_widget_text($strings) {
global $wp_registered_widgets;
$sidebars = wp_get_sidebars_widgets();
foreach ($sidebars as $sidebar => $widgets) {
if ($sidebar == 'wp_inactive_widgets' || !isset($widgets))
continue;
foreach ($widgets as $widget) {
if (!isset($wp_registered_widgets[$widget]['callback'][0]) || !is_object($wp_registered_widgets[$widget]['callback'][0]) || !method_exists($wp_registered_widgets[$widget]['callback'][0], 'get_settings'))
continue;
$widget_settings = $wp_registered_widgets[$widget]['callback'][0]->get_settings();
$number = $wp_registered_widgets[$widget]['params'][0]['number'];
if (isset($widget_settings[$number]['text']) && $text = $widget_settings[$number]['text'])
$strings[] = array('widget text', $text, 'Widget', true);
}
}
return $strings;
}
That's not tested
10. v4ssi
Member
Posted 2 months ago #
I was not far when i tried alone but not enough near. :D
I tried to debug the code for 3 hours because the server gave me an "internal server error 500".
Even with logging active, neither WordPress nor the server wrote any new log entries.
Anyway i found the problem and fixed it. Thanks you so much Chouby. ^^
add_filter('pll_get_strings', 'add_widget_text');
function add_widget_text($strings) {
global $wp_registered_widgets;
$sidebars = wp_get_sidebars_widgets();
foreach ($sidebars as $sidebar => $widgets) {
if ($sidebar == 'wp_inactive_widgets' || !isset($widgets))
continue;
foreach ($widgets as $widget) {
if (!isset($wp_registered_widgets[$widget]['callback'][0]) || !is_object($wp_registered_widgets[$widget]['callback'][0]) || !method_exists($wp_registered_widgets[$widget]['callback'][0], 'get_settings'))
continue;
$widget_settings = $wp_registered_widgets[$widget]['callback'][0]->get_settings();
$number = $wp_registered_widgets[$widget]['params'][0]['number'];
if (isset($widget_settings[$number]['text']) && $text = $widget_settings[$number]['text'])
$strings[] = array('name' => 'Widget text', 'string' => $text, 'context' => 'Widget', 'multiline' => true);
}
}
return $strings;
}
add_filter instead of add_filters
And i added the names to the array keys.
They are added at the end of the pages but whocares. ^^
Update:
If i edit the text the query works but in the frontend the widget remain with the "default" language.
I think the problem is the widget option "The widget is displayed for: All languages": if it is turned ON it overrides the display of the translation, and if it is turned OFF it hides the widget for the other language.
11. v4ssi
Member
Posted 2 months ago #
I've found the solution of the second problem.
Just add in functions.php this line:
add_filter('widget_text', 'pll__', 1);
That i've taken from core.php line 117.
Thank you again Chouby.
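For anyone landing here later, a minimal functions.php sketch combining the two pieces from this thread (it assumes Polylang is active, so pll__ and the pll_get_strings filter exist):
// Register each text widget's content on the Strings translation page
// (add_widget_text is the function posted earlier in this thread).
add_filter('pll_get_strings', 'add_widget_text');
// Translate the widget text on output using Polylang's pll__().
add_filter('widget_text', 'pll__', 1);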
Topic Closed
This topic has been closed to new replies.
Default Directories
Discussion in 'Plesk for Windows - 8.x and Older' started by lancer28a, Apr 3, 2004.
1. lancer28a
lancer28a Guest
0
I never thought I would have to ask this but after 3 days of troubleshooting, I can't figure it out.
why does Plesk 6.5.2 do this?
A client tries to access his or her domain via MS FrontPage and finds themselves in the default htdocs directory.
Shouldn't this program be creating a map to their own directories? Or should I be creating them myself in httpd.conf as virtual directories, like in the Linux version...
I tried transfering a site from one server to another via FrontPage and watched it load the site into the default htdocs folder rather than the clients httpdocs folder.
Anyone know of a fix?
2. siren@
siren@ Guest
0
How are you accessing the site?
We run and support multiple 6.5.2 servers and haven't seen this issue yet nor heard about it from our clients.
3. Cradz
Cradz Guest
0
ddreams
Do you allow your customers to access their sites via FTP?
If so, do they see the root directory with all the site names that your hosting?
4. siren@
siren@ Guest
0
Yes, we allow all our customers to use FTP and so do the customers we support running Plesk servers. That is a basic concept of hosting.
And no, our customers are dropped in their root dir for their domain.
After some thought on this, I would bet the problem lies in the user's homedir. You can check this under User Manager.
5. Cradz
Cradz Guest
0
thanks for the reply ddreams ...
homedir? Are you talking about on the Client end or under Plesk?
I'm new to plesk so maybe I'm missing something... I'll keep digging and see what I can find...
Do you run 2003?
Are you using IIS FTP?
If so did you create the ftp site with Isolate on?
It sounds like you've read my post regarding my problem where if I turn isolate on nobody can connect due to problem with the root directory.. but if I create an ftp site with it off.. everybody can see root folder with other customers folders (although you can't access inside the folder due to permissions set).
6. Cradz
Cradz Guest
0
[RESOLVED] EnumFontFamiliesEx callback only being called once
1. #1
Join Date
May 2002
Posts
484
[RESOLVED] EnumFontFamiliesEx callback only being called once
I found this code posted back in 2010. It is supposed to enumerate all the fonts on the system, including those registered dynamically using the gdi32.dll call AddFontResource().
My understanding is that my callback function "fontCallback" should be getting executed once for each font on the system (or at least once for each font family).
Well I haven't been able to test out if my dynamically registered font appears in the list because my callback delegate is only getting called once and the only value I receive for lpelfe.elfFullName is "System".
Any thoughts as to what my problem could be? This apparently has worked for others.
I have tested on WIN32, WIN64, XP and Windows 7 all with the same results.
My results are the same whether I choose FontCharSet.ANSI_CHARSET or FontCharSet.DEFAULT_CHARSET.
Code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Runtime.InteropServices; // font test
namespace FontTestProject
{
public partial class Form1 : Form
{
// font test code below
[DllImport("gdi32.dll", CharSet = CharSet.Auto)]
static extern int EnumFontFamiliesEx(IntPtr hdc,
[In] IntPtr pLogfont,
EnumFontExDelegate lpEnumFontFamExProc,
IntPtr lParam,
uint dwFlags);
public delegate int EnumFontExDelegate(ref ENUMLOGFONTEX lpelfe, ref NEWTEXTMETRICEX lpntme, int FontType, int lParam);
public EnumFontExDelegate fontDelegate;
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public class LOGFONT
{
public int lfHeight;
public int lfWidth;
public int lfEscapement;
public int lfOrientation;
public FontWeight lfWeight;
[MarshalAs(UnmanagedType.U1)]
public bool lfItalic;
[MarshalAs(UnmanagedType.U1)]
public bool lfUnderline;
[MarshalAs(UnmanagedType.U1)]
public bool lfStrikeOut;
public FontCharSet lfCharSet;
public FontPrecision lfOutPrecision;
public FontClipPrecision lfClipPrecision;
public FontQuality lfQuality;
public FontPitchAndFamily lfPitchAndFamily;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
public string lfFaceName;
}
public enum FontWeight : int
{
FW_DONTCARE = 0,
FW_THIN = 100,
FW_EXTRALIGHT = 200,
FW_LIGHT = 300,
FW_NORMAL = 400,
FW_MEDIUM = 500,
FW_SEMIBOLD = 600,
FW_BOLD = 700,
FW_EXTRABOLD = 800,
FW_HEAVY = 900,
}
public enum FontCharSet : byte
{
ANSI_CHARSET = 0,
DEFAULT_CHARSET = 1,
SYMBOL_CHARSET = 2,
SHIFTJIS_CHARSET = 128,
HANGEUL_CHARSET = 129,
HANGUL_CHARSET = 129,
GB2312_CHARSET = 134,
CHINESEBIG5_CHARSET = 136,
OEM_CHARSET = 255,
JOHAB_CHARSET = 130,
HEBREW_CHARSET = 177,
ARABIC_CHARSET = 178,
GREEK_CHARSET = 161,
TURKISH_CHARSET = 162,
VIETNAMESE_CHARSET = 163,
THAI_CHARSET = 222,
EASTEUROPE_CHARSET = 238,
RUSSIAN_CHARSET = 204,
MAC_CHARSET = 77,
BALTIC_CHARSET = 186,
}
public enum FontPrecision : byte
{
OUT_DEFAULT_PRECIS = 0,
OUT_STRING_PRECIS = 1,
OUT_CHARACTER_PRECIS = 2,
OUT_STROKE_PRECIS = 3,
OUT_TT_PRECIS = 4,
OUT_DEVICE_PRECIS = 5,
OUT_RASTER_PRECIS = 6,
OUT_TT_ONLY_PRECIS = 7,
OUT_OUTLINE_PRECIS = 8,
OUT_SCREEN_OUTLINE_PRECIS = 9,
OUT_PS_ONLY_PRECIS = 10,
}
public enum FontClipPrecision : byte
{
CLIP_DEFAULT_PRECIS = 0,
CLIP_CHARACTER_PRECIS = 1,
CLIP_STROKE_PRECIS = 2,
CLIP_MASK = 0xf,
CLIP_LH_ANGLES = (1 << 4),
CLIP_TT_ALWAYS = (2 << 4),
CLIP_DFA_DISABLE = (4 << 4),
CLIP_EMBEDDED = (8 << 4),
}
public enum FontQuality : byte
{
DEFAULT_QUALITY = 0,
DRAFT_QUALITY = 1,
PROOF_QUALITY = 2,
NONANTIALIASED_QUALITY = 3,
ANTIALIASED_QUALITY = 4,
CLEARTYPE_QUALITY = 5,
CLEARTYPE_NATURAL_QUALITY = 6,
}
[Flags]
public enum FontPitchAndFamily : byte
{
DEFAULT_PITCH = 0,
FIXED_PITCH = 1,
VARIABLE_PITCH = 2,
FF_DONTCARE = (0 << 4),
FF_ROMAN = (1 << 4),
FF_SWISS = (2 << 4),
FF_MODERN = (3 << 4),
FF_SCRIPT = (4 << 4),
FF_DECORATIVE = (5 << 4),
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct NEWTEXTMETRIC
{
public int tmHeight;
public int tmAscent;
public int tmDescent;
public int tmInternalLeading;
public int tmExternalLeading;
public int tmAveCharWidth;
public int tmMaxCharWidth;
public int tmWeight;
public int tmOverhang;
public int tmDigitizedAspectX;
public int tmDigitizedAspectY;
public char tmFirstChar;
public char tmLastChar;
public char tmDefaultChar;
public char tmBreakChar;
public byte tmItalic;
public byte tmUnderlined;
public byte tmStruckOut;
public byte tmPitchAndFamily;
public byte tmCharSet;
int ntmFlags;
int ntmSizeEM;
int ntmCellHeight;
int ntmAvgWidth;
}
public struct FONTSIGNATURE
{
[MarshalAs(UnmanagedType.ByValArray)]
int[] fsUsb;
[MarshalAs(UnmanagedType.ByValArray)]
int[] fsCsb;
}
public struct NEWTEXTMETRICEX
{
NEWTEXTMETRIC ntmTm;
FONTSIGNATURE ntmFontSig;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct ENUMLOGFONTEX
{
public LOGFONT elfLogFont;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
public string elfFullName;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
public string elfStyle;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
public string elfScript;
}
private const byte DEFAULT_CHARSET = 1;
private const byte SHIFTJIS_CHARSET = 128;
private const byte JOHAB_CHARSET = 130;
private const byte EASTEUROPE_CHARSET = 238;
private const byte DEFAULT_PITCH = 0;
private const byte FIXED_PITCH = 1;
private const byte VARIABLE_PITCH = 2;
private const byte FF_DONTCARE = (0 << 4);
private const byte FF_ROMAN = (1 << 4);
private const byte FF_SWISS = (2 << 4);
private const byte FF_MODERN = (3 << 4);
private const byte FF_SCRIPT = (4 << 4);
private const byte FF_DECORATIVE = (5 << 4);
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
ListFonts();
}
public void ListFonts()
{
LOGFONT lf = CreateLogFont("");
IntPtr plogFont = Marshal.AllocHGlobal(Marshal.SizeOf(lf));
Marshal.StructureToPtr(lf, plogFont, true);
int ret = 0;
try
{
Graphics G = pictureBox1.CreateGraphics();
IntPtr P = G.GetHdc();
fontDelegate = new EnumFontExDelegate(fontCallback);
ret = EnumFontFamiliesEx(P, plogFont, fontDelegate, IntPtr.Zero, 0);
System.Diagnostics.Trace.WriteLine("EnumFontFamiliesEx = " + ret.ToString());
G.ReleaseHdc(P);
}
catch
{
System.Diagnostics.Trace.WriteLine("Error!");
}
finally
{
Marshal.DestroyStructure(plogFont, typeof(LOGFONT));
}
}
public int fontCallback(ref ENUMLOGFONTEX lpelfe, ref NEWTEXTMETRICEX lpntme, int FontType, int lParam)
{
int cnt = 0;
try
{
int test = 0;
listBox1.Items.Add(lpelfe.elfFullName);
}
catch (Exception e)
{
System.Diagnostics.Trace.WriteLine(e.ToString());
}
return cnt;
}
public static LOGFONT CreateLogFont(string fontname)
{
LOGFONT lf = new LOGFONT();
lf.lfHeight = 0;
lf.lfWidth = 0;
lf.lfEscapement = 0;
lf.lfOrientation = 0;
lf.lfWeight = 0;
lf.lfItalic = false;
lf.lfUnderline = false;
lf.lfStrikeOut = false;
lf.lfCharSet = FontCharSet.ANSI_CHARSET; //FontCharSet.DEFAULT_CHARSET;
lf.lfOutPrecision = 0;
lf.lfClipPrecision = 0;
lf.lfQuality = 0;
lf.lfPitchAndFamily = FontPitchAndFamily.FF_DONTCARE;
lf.lfFaceName = "";
return lf;
}
}
}
2. #2
Arjay (Moderator / MS MVP)
Re: EnumFontFamiliesEx callback only being called once
See the Return Value section in the EnumFontFamExProc documentation:
Return value
The return value must be a nonzero value to continue enumeration; to stop enumeration, the return value must be zero.
Your code:
Code:
public int fontCallback(ref ENUMLOGFONTEX lpelfe, ref NEWTEXTMETRICEX lpntme, int FontType, int lParam)
{
int cnt = 0;
try
{
int test = 0;
listBox1.Items.Add(lpelfe.elfFullName);
}
catch (Exception e)
{
System.Diagnostics.Trace.WriteLine(e.ToString());
}
return cnt; // Always returns 0 which ends the enumeration
}
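For example, returning a nonzero value keeps the enumeration going. A minimal corrected sketch, reusing the declarations from the original post, would be:
Code:
public int fontCallback(ref ENUMLOGFONTEX lpelfe, ref NEWTEXTMETRICEX lpntme, int FontType, int lParam)
{
    try
    {
        // Add each enumerated face name to the list box.
        listBox1.Items.Add(lpelfe.elfFullName);
    }
    catch (Exception e)
    {
        System.Diagnostics.Trace.WriteLine(e.ToString());
        return 0; // Stop the enumeration if something goes wrong.
    }
    return 1; // Nonzero: keep enumerating.
}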
3. #3
Re: EnumFontFamiliesEx callback only being called once
Thank you so much! It works now!!!
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields.
I know that this a beginner's question asked too many times, but I still didn't get an answer which lets me quit asking:
Given that a model/interpretation of a theory (in the Tarskian sense) is a set with some structure, how can there be models of set theory, since we know, that the class of all sets (the range of the quantifiers of set theory) is not a set?
I especially wonder why
a) it is stressed so often and so strongly that models must be sets?
b) nevertheless sometimes proper classes are allowed? (see Wikipedia's Inner Model Theory: "models are transitive subsets or subclasses")
If an inaccessible cardinal exists, then the cardinal is a set and is a model of Set Theory... Alternatively, by the Lowenheim-Skolem Theorem, if Set Theory is consistent, then it has a countable model; and you can certainly find a countable set within set theory... – Arturo Magidin Aug 10 '11 at 17:32
I hope somebody writes up a real answer, but the Reader's Digest version is that a set model of set theory doesn't think of itself as a set. It is only a set from an external perspective. – user83827 Aug 10 '11 at 17:34
@Hans: Say that something is a house if and only if the outside walls are painted green; while it is easy to check if something is a house if you are looking at it from the outside, trying to figure out if what you are inside of is a house would be extremely difficult if you are not allowed to get information from the outside. – Arturo Magidin Aug 10 '11 at 17:53
@Arturo: Nitpick: it's not the inaccessible $\kappa$ itself which forms the model (models of ZFC can't be linearly ordered by the membership relation!) but rather a set defined by $\kappa$ (such as $V_\kappa$). But of course this is secondary to your actual point. – user83827 Aug 10 '11 at 17:55
@ccc: The theory ZFC+Inaccessible is much stronger than ZFC. In analogy to Arturo's example it provides us a way to find out if the house is painted green, but we ultimately find ourselves still inside another house - and not under the starry dome which is the night. – Asaf Karagila Aug 10 '11 at 18:16
I especially wonder why
a) it is stressed so often and so strongly that models must be sets?
There are several reasons why model theory texts only look at models that are sets. These reasons are all related to the fact that model theory is itself studied using some (usually informal) set theory.
One benefit of sticking with set-sized models is that it makes it possible to perform algebraic operations on models, such as taking products and ultrapowers, without any set-theoretic worries.
Another benefit of requiring models to be sets is that it makes it possible to define the satisfaction relation $\vDash$ for each model. In other words, given a model $M$ in a language $L(M)$, we want to form $T(M) = \{ \phi \in L(M) : M \vDash \phi\}$. This can be done when $M$ is a set, by going through Skolem normal form. But it cannot be done, in general, when $M$ is a proper class, because of Tarski's undefinability theorem. In particular, if we let $M$ be the class-sized model $V$ of the language of set theory then Tarski's theorem shows that $T(M)$ is not definable in $V$. We can define the truth of each individual formula (using the formula itself) but in general there may be no global definition of truth in a proper-class-sized model.
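Concretely, when $M$ is a set with $\in$ interpreted by a relation $E \subseteq M \times M$, satisfaction can be built up by the usual recursion on the complexity of formulas (sketched here with parameters from $M$): $$M \vDash a \in b \iff (a,b) \in E, \qquad M \vDash \neg\phi \iff M \nvDash \phi,$$ $$M \vDash \phi \wedge \psi \iff M \vDash \phi \text{ and } M \vDash \psi, \qquad M \vDash \exists x\,\phi(x) \iff M \vDash \phi(a) \text{ for some } a \in M.$$ Each stage of this recursion is a set, so the whole construction can be carried out inside ZFC.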
Moreover, in model theory, there is no real need to look at proper-class-sized models, because there is already enough interesting behavior from set-sized models. The motivating examples are all sets (algebraic structures, partial orders, etc). And the completeness theorem shows that any consistent theory has a set-sized model (this includes ZFC). So model theorists generally restrict themselves to set-sized models.
b) nevertheless sometimes proper classes are allowed?
Generally, people are only interested in proper-class-sized models in the context of set theory. The reason for the interest is that ZFC can't prove that there is a set model of ZFC (because ZFC can't prove Con(ZFC)), but it is possible to form proper-class-sized models of ZFC from a given proper-class-sized model of ZFC (e.g. the inner model $L$). This allows for some model-theoretic results about set theory, but many things that are taken for granted in model theory have to be re-checked when we move to proper-class-sized models. In general the re-checking is often routine, and it only comes up in advanced settings, where an author is not likely to make a big fuss about it. The benefit of this labor is that we can sometimes avoid having to assume Con(ZFC) as a hypothesis for a theorem about models of set theory.
In summary, in any non-set-theoretic context, "model" will mean "set-sized model". In the context of set theory, this is still what "model" usually means; they usually say "inner model" or "class model" for a proper-class-sized model. But some attention to context is needed when you are working with "models" of set theory to make sure you read what the author intended.
You say "people are only interested in proper-class-sized models in the context of set theory". But what about the context of category theory? Isn't a (large) category a proper-class-sized model of category theory? Or aren't categories models of category theory in a narrower sense? – Hans Stricker Aug 12 '11 at 17:11
Some comments. The basic distinction you need to make is between the external and internal notions of set. Let me take as granted a primitive and unspecified notion of set: this will be our external notion of set. For any first-order theory $T$ in a language $L$, a model of $T$ is a set (in this external sense) equipped with functions and relations satisfying the appropriate axioms, etc. In particular, a model of, say, ZFC is a set $M$ equipped with a binary relation satisfying etc. etc.
(Requiring that models themselves be sets is likely a matter of convenience. In category theory this requirement can be restrictive, and one way to get around it is the notion of a Grothendieck universe. But I won't say more about this; it isn't central to your misunderstanding.)
Now the elements $m \in M$ of a model of set theory are themselves supposed to be interpreted as sets, but the word "set" here means something different: it is an internal notion specified by $T$ (and $L$). To prevent confusion here it would really be best to replace "set" with some other word, such as "foo." Thus we should speak of foo theory and the class of all foos, which is not a foo. (A class is just an external subset of $M$ specified by some formula.)
When we say that the class of all sets is not a set, what we mean is that there does not exist an element of $M$ which contains all other elements of $M$ (by the axiom of regularity). We don't mean that $M$ is itself not an external set, because it is by definition an external set.
I believe the Wikipedia article on inner models is talking about internal classes (which are still just elements of $M$, an external set), but I'm not sure.
One last thing: ZFC is not capable of provably exhibiting a model of ZFC (since this proves that ZFC is consistent) unless it is inconsistent by the incompleteness theorem.
Inner models are subclasses. For example Godel's $L$ is a proper class in $V$. It is a definable object though, and we can write it as a syntactic formula. Inner models are sometimes treated as a definable class like that. – Asaf Karagila Aug 10 '11 at 18:25
So what is an external set? And what is it we study in set theory, if it's not "real" sets? I'm playing devil's advocate somewhat here, but I am interested in the answers to these questions. – Billy Aug 10 '11 at 18:27
@Billy: I'm not a set theorist, so I don't have strong opinions on such philosophical matters. – Qiaochu Yuan Aug 10 '11 at 18:33
@Qiaochu: Let $\subseteq_1$ be the internal subset relation and $\subseteq_2$ the external (informal) one. Let $\mathcal{P}(M)$ be the collection of (external) subclasses of $M$. Claim: There is (exactly?) one (informal) injection $f: M \rightarrow \mathcal{P}(M)$ such that $x \subseteq_1 y \equiv f(x) \subseteq_2 f(y)$. $f$ cannot be a bijection, especially there is no $m$ in $M$ such that $f(m) = M$. Is it that what we mean when we say that the class of all sets "is" not a set? – Hans Stricker Aug 12 '11 at 17:41
@Hans: yes, that's right. – Qiaochu Yuan Aug 12 '11 at 21:08
This is a question I wonder about too! I made this CW because I'm not sure this constitutes a satisfactory answer, and I do not attempt to answer questions $(a)$ and $(b)$.
Suppose you have a model of set theory $(U,\in_0)$ : this allows you to define $\subset_0$ inclusion of sets and products of sets $A\times_0 B$, intersections $\cap_0$ etc$\dots$ in the sense of $\in_0$. One illuminating way to picture $(U,\in_0)$ is as an infinite oriented graph (Théorie des Ensembles by J.-L. Krivine takes that stance).
If $M$ is a model of set theory (that is $M$ is a set of $U$, i.e. a point in the infinite graph $U$, with a binary relation $\in_M\subset_0 M\times_0 M$, another point in the infinite graph $U$, that together obey all the axioms of $\mathsf{ZF}$), the interpretation $\in_M$ of the binary relation $\in$ of the language of $\mathsf{ZF}$ need not be the same as $\in_0$ : it may be (maybe it has to be) that $\in_M\neq (M\times_0 M)\cap_0\in_0$.
Anyway, sets in the model $(M,\in_M)$ are by definition the $\in_0$ elements of $M$. Since by foundation $M\notin_0 M$, $M$ is not a set in the sense of $\in_M$, that is, $M$ is not a set in the model $(M,\in_M)$. So the $(U,\in_0)$ set $M$ constitutes the class of sets of the model $(M,\in_M)$, yet it is not a set in that new model.
I think this boils down to "sets in one model need not be sets in another".
It sounds to me like you're defining a model of set theory internal to another model of set theory, which is perhaps one level deeper than necessary for this discussion. – Qiaochu Yuan Aug 10 '11 at 18:02
@Qiaochu Yuan : I do, but only because I don't know how to do it otherwise. I don't know what a set could be outside of a model of set theory. This, I admit, is terribly circular, but I don't know how to get around it. – Olivier Bégassat Aug 10 '11 at 18:07
The use of the word ‘genuine’ in the question title makes me wonder if there are also ‘fake’ models of set theory. There are various candidates. Let us say that a model of set theory is standard just if the $\in$ relation of that model corresponds exactly to the $\in$ relation of the universe we are working in. Otherwise, we say the model is non-standard.
Do standard models exist? Well, the answer depends on various technical points. For a start, if a consistent set theory $T$ is powerful enough to interpret arithmetic, then Gödel's incompleteness theorem tells us that $T$ cannot prove that there is a set model of $T$, either standard or non-standard. This is because a proof of the existence of such a model would imply that $T$ proves its own consistency. (On the other hand, if our universe obeys the axioms of $T$, it obviously does contain a standard class model of $T$, namely itself.) Suppose instead that we are working in a universe which satisfies the axioms of a different set theory $T'$. (Yes, there are inequivalent set theories!) Could we then prove the existence of a standard set model of some (necessarily weaker) set theory $T$? Sometimes, yes: for example, from the axioms of ZFC we can prove that there is a standard set model for Zermelo set theory, namely $V_{\omega + \omega}$. This can also be turned into a model for Lawvere's elementary theory of the category of sets (ETCS), which is equivalent to a variant of Zermelo set theory where the axiom of separation is restricted to predicates with bounded quantifiers.
As mentioned in the comments, if $T'$ is ZFC augmented with a suitable large cardinal axiom, then it will be provable from $T'$ that there is a standard set model of ZFC. This is not so mysterious when you think about it. Let us define the rank of a set inductively as follows: the rank of $\emptyset$ is $0$; if $x$ has rank $\alpha$, then $\mathcal{P}(x)$ has rank $\alpha + 1$; and in general the rank of $x$ is the least ordinal number greater than the ranks of all its members. By structural induction, every set has a rank, and it is not hard to show that the rank of a von Neumann ordinal $\alpha$ is again $\alpha$. This immediately implies that a collection of sets of unbounded rank cannot be a set. So if a set $M$ is a standard model of ZFC, it can only contain the ordinals less than its rank. But $M$ must contain every set that ZFC can prove to exist, and so that means our universe must contain ordinals, and hence, cardinals, the existence of which is not guaranteed by ZFC. [But does every large cardinal axiom imply the existence of such a standard model $M$?]
What about non-standard models? Well, things are a lot easier here. For example, Gödel's completeness theorem tells us that if a first-order theory $T$ is consistent, then there is a set model for it. Moreover, if $T$ is a theory over a countable language, then the downward Löwenheim–Skolem theorem tells us that $T$ has a countable model. It is useful to contemplate what this means when $T$ is a set theory, because it forces us to be absolutely clear about the distinction between the universe the model lives in and the universe inside the model. So let $M$ be a countable model of set theory. Since $M$ is countable, we may as well assume it is $\mathbb{N}$. So how can the set of natural numbers be a model of set theory? Well, the key is that $M$ is equipped with a relation $\in_M$ as well. In effect, what we are doing is indexing the sets (but not ‘all’ of them!) by natural numbers, and $\in_M$ is tracking their membership relations. Because $M$ is a model of set theory, it contains a set of natural numbers $\mathbb{N}_M$. It is not hard to show that the internal structure of $\mathbb{N}_M$, according to $\in_M$, corresponds exactly to the internal structure of the ‘genuine’ $\mathbb{N}$. (Well, with some added consistency assumptions.) And of course there is a power set $\mathcal{P}_M(\mathbb{N}_M)$, which according to $M$ is uncountable. Obviously, $\mathcal{P}_M(\mathbb{N}_M)$ is ‘actually’ countable, because $M$ is. How can this be? Well, if we look at how $M$ is constructed, we notice that $M$ is only required to have the sets that must be there. But since the language of set theory is only countable, we can only name countably many sets, and in particular $\mathcal{P}_M(\mathbb{N}_M)$ only contains those subsets of $\mathbb{N}_M$ which necessarily ‘exist’. Similarly, since set theory proves that there is no bijection between $\mathbb{N}$ and its power set, there cannot be any such bijection inside $M$, even though the sets are ‘actually’ equinumerous. (To be precise, even though we can see externally that there ought to be a bijection, the graph of any bijection we can dream up will fail to be a set in $M$.)
Perhaps the moral to take away from all this is that the universe of sets is a subtle beast to be treated with care.
What do you mean by "But $M$ must contain every set that ZFC can prove to exist, and so that means our universe must contain ordinals, and hence, cardinals, the existence of which is not guaranteed by ZFC"? Isn't the existence of cardinals guaranteed by ZFC? Do you mean certain cardinals? – Quinn Culver Aug 11 '11 at 11:51
Yes, I mean particular ordinals/cardinals, whose existence is not guaranteed by ZFC. – Zhen Lin Aug 11 '11 at 12:20
REAL-TIME ATM ALERT IF USER FORGETS CARD
A computer vision card reader and/or point of sale device is described. The device is configured to sense when a user inserts a card into a card reader and to determine when a user departs or is about to depart from the device without retrieving the card. The device may issue an audible or visible alert to the user, reminding the user to retrieve the card. The device may additionally send a notification to a mobile device associated with the user that reminds the user that he has left a card at the point of sale. In some embodiments, the message sent to the user contains a code and, upon entry of the code, the point of sale or card reader device returns the card to the user.
Description
FIELD OF THE INVENTION
This disclosure relates to a point of sale device configured to alert a user if the user forgets or leaves a payment card or other account-linked card in a point of sale device or card reader.
BACKGROUND
Automated Teller Machines (ATMs) and account-linked cards are useful for performing several banking transactions and inquiries. Account-linked cards have become commonplace, as many individuals use account-linked cards such as those for access to membership club accounts, rewards accounts, gyms, parking facilities, secured buildings, accounts at banking institutions, accounts for mass transportation and other types of accounts. Account-linked cards are frequently used with card readers or devices equipped with card readers, some of which require the account holder to insert the account-linked card containing information on a magnetic strip or embedded memory into the card reader. After the card has been read and authenticated, the account holder may be able to carry out a variety of transactions or inquiries.
The traditional manner of initiating a transaction or inquiry using an ATM is to insert an account-linked card into a card reader, thereby allowing the card reader to receive information associated with a user's account from the card. In some instances, the card remains inserted in the reader, or the ATM retains possession of the card, until the user has completed a transaction. At that point, the user may be responsible for retrieving the card, or the ATM may present the card for the user to take. In some instances, the user may forget to retrieve the card from the card reader and may leave the area without the card. In that situation, some card readers are configured to ingest or swallow the account-linked card so that once the customer has left the area of the card reader, the account-linked card is no longer accessible to prevent persons other than the account holder from obtaining the card.
When a card is ingested, this may cause significant inconvenience to the account holder. If a card reader has ingested a user's account-linked card, the user will no longer be able to use the account-linked card to access their accounts at the card reader or execute transactions using the card. Frequently, the user's only option for obtaining a working account-linked card is to call the institution that issued the account-linked card, cancel the account-linked card, and order a replacement card to be sent through the mail. This process can be time consuming and leave the customer without sufficient access to their accounts.
This process may also be costly to the institution that issued the account-linked card. The issuing institution must bear the cost of creating and shipping a replacement card for the user if the user is to resume utilizing the account-linked card. The institution also misses the opportunity to collect fees associated with any transactions the user is unable to perform during the period the user is without an account-linked card.
By applying automated, computer-based interpretation and/or analysis of visual information obtained with a camera, a card-reader device may be able to determine if a user has left or is about to leave an ATM, point of sale (POS), or card reader without retrieving an account-linked card. Automated interpretation of video data may be known as computer vision.
What is needed is an ATM, point of sale (POS), or card reader device which utilizes computer vision techniques to determine if a user has left or is about to leave an area without retrieving an account-linked card and notifies the user, reminding the user to take his card. This saves the user the inconvenience of going without an account-linked card until it is replaced and also saves the issuing institution the costs of creating and shipping a replacement card.
SUMMARY
Therefore, it is an object of this disclosure to describe a point of sale, ATM, or card reader device which is configured to determine when a user is leaving or is no longer present at the ATM using computer vision techniques.
It is a further object of the invention to describe a point of sale, ATM, or card reader which is configured to notify a user who has left an area, or is in the process of leaving an area without retrieving an account-linked card.
Embodiments of the present disclosure provide a point of sale (POS) device comprising: a processor; a card sensor in data communication with the processor, the card sensor configured to detect the presence of a card within the POS device and provide a card notification to the processor when the card is inserted into the POS device and removed from the POS device. Embodiments also comprise a camera in data communication with the processor, the camera configured to observe the presence and absence of a user proximate to the POS device and provide a user notification to the processor regarding the presence or absence of the user. Upon receipt of a card notification from the card sensor that the card is inserted into the POS device and receipt of a user notification from the camera of the absence of the user, the processor is configured to send a message to a communication device associated with the user indicating that the card is inserted into the POS device.
Embodiments of the present disclosure provide a user alert method comprising: detecting the presence of a user at a point of sale (POS) device using a camera in data communication with a processor. The camera is configured to observe the presence and movement of a user proximate to the POS device. The method comprises detecting the presence of a card within the POS device using a card sensor in data communication with the processor, the card sensor configured to detect the presence of a card within the POS device. The method further comprises determining when the user moves away from the POS device using the processor and camera; and sending a message to a mobile device associated with the user upon detecting the presence of a card within the POS device and determining the user has moved away from the POS device.
Further features of the disclosed designs, and the advantages offered thereby, are explained in greater detail hereinafter with reference to specific example embodiments illustrated in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a point of sale system according to an example embodiment.
FIG. 2 illustrates a card reader system according to an example embodiment.
FIG. 3 is a flow chart illustrating the operation of a disclosed ATM system according to an example embodiment.
FIG. 4 is a flow chart illustrating the operation of a disclosed ATM computer vision system according to an example embodiment.
FIG. 5 is a flow chart illustrating the operation of a disclosed POS system according to an example embodiment.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
The following description of embodiments provides non-limiting representative examples referencing numerals to particularly describe features and teachings of different aspects of the invention. The embodiments and features described should be recognized as capable of implementation separately, or in combination, with other embodiments from the description of the embodiments. A person of ordinary skill in the art reviewing the description of embodiments should be able to learn and understand the different described aspects of the invention. The description of embodiments should facilitate understanding of the invention to such an extent that other implementations, not specifically covered but within the knowledge of a person of skill in the art having read the description of embodiments, would be understood to be consistent with an application of the invention.
It will be understood that while some embodiments are disclosed in the context of an automated teller machine (ATM) or point of sale device (POS) for illustrative purposes, the present invention is not limited to ATMs or point of sale devices. An ATM, POS, or other device equipped with a card reader may be utilized in place of any other ATM, POS, or card reader equipped device without limitation.
When a user utilizes an ATM, POS, or other device equipped with a card reader, the user typically inserts an account-linked card into the card reader. The card may contain information about the user personally and/or the user's accounts. The card reader device typically reads the card, authenticates the user, and initiates a transaction. A disclosed card reader device utilizes computer vision and/or other techniques to monitor the user and determine if the user has left the device without retrieving a card. In some embodiments, the device determines if the user is indicating that the user is going to leave the device without retrieving the card. If the card reader device determines that the user has left or is about to leave without retrieving his card, the device alerts the user, prompting the user to retrieve the card. It will be appreciated that multiple types of card readers may be used with an ATM, POS, or kiosk. In some ATMs, the card reader is a slide-type card reader in which the user maintains possession of the card while sliding a magnetic stripe through the reader. In some ATMs, the card reader is a chip reader device in which the user inserts a card but the card remains physically accessible to the user at all times. In some ATMs, the card reader is an internal card reader. In such embodiments, a user inserts a card into the card reader and the card is taken into the ATM where it is physically inaccessible to the user or any potential passers-by. In such embodiments, the card may be retained within the ATM during a transaction, where it is physically inaccessible, and may then be presented to the user once the transaction is complete.
FIG. 1 illustrates an exemplary embodiment of a disclosed point of sale (POS) device 105. In this exemplary embodiment, POS device 105 comprises a processor 110, visual display 111, a card reader 113, a card sensor 115, an output device 120, a currency sensor 125, distance sensing equipment 130, and a camera 135. It is understood that not all embodiments include every component of the exemplary embodiment of FIG. 1. It is also understood that, while FIG. 1 depicts a single instance of each component, embodiments may contain multiple instances of any component.
In some embodiments, card sensor 115 is in data communication with processor 110 and configured to detect the presence of a card within the POS device 105. The card sensor 115 provides the processor with a card notification when a card is inserted into or removed from card reader 113 and/or the POS device 105.
In some embodiments, the camera 135 may include, but is not limited to, a digital camera, video camera, a still camera, and/or other optical imaging device. Camera 135 is in data communication with the processor 110 and may be configured to observe the presence and absence of a user proximate to the POS device 105. The camera may be configured to provide a user notification to the processor regarding the presence and/or absence of a user. In some embodiments, the camera may be configured to monitor the movements of a user to detect movements indicating the user is approaching and/or moving away from the POS device. In such embodiments, the camera may send a user movement notification to the processor. In some embodiments, the camera may be configured to monitor movements of the user's head, body, arms, and/or hands to detect behaviors indicating that the user is about to move away from the POS device.
In some embodiments, the disclosed processor includes a computer vision processor. It will be appreciated that the processor may be configured to execute computer vision applications, programs, software, and/or techniques which may be stored on local and/or remote servers and/or memory. In some embodiments, if the processor receives a card notification from the card sensor indicating that a card is inserted into the POS device and receives a user notification from the camera or a processor in communication with the camera indicating the absence of the user, the processor may send a message to a communication device associated with the user indicating that the user's card is still inserted in the POS device.
In some embodiments, the processor is configured to analyze visual data provided by the camera using facial recognition techniques. In some embodiments, the camera and/or processor may detect the presence of a user based on facial recognition.
In some embodiments, currency sensor 125 is configured to detect the presence and/or absence of currency. This may indicate when the user retrieves currency which has been dispensed to the user. The currency sensor is in data communication with the processor and configured to send a currency notification to the processor when the user removes currency from the POS device. In some embodiments, the processor will not send a message to the user until the currency sensor has indicated that the user has removed currency from the POS device.
In some embodiments, POS device 105 may be an ATM, kiosk, terminal, and or other device in which a user inserts an account-linked card or other form of identification and/or information into a card reader.
In some embodiments, visual display 111 may include, but is not limited to a standard video monitor, a touch screen display, a cathode ray tube, and/or LED display.
In some embodiments, the output device 120 may include but is not limited to a speaker, siren, whistle, light, strobe light, and/or combinations of the above. In some embodiments, the output device may be an input/output device such as, for example, a touch screen display.
In some embodiments, distance sensing equipment 130 may include, but is not limited to laser range finders, LIDAR, RADAR, RGB-D cameras, and/or ranging cameras.
An example embodiment of a system for alerting a user who has left an account-linked card in a card reader is shown in FIG. 2. The system 200 includes a card reader 205 which can read and authenticate an account-linked card 230. The card reader 205 is connected to a network 220 which can be the Internet, a wide area network (WAN), or another suitable type of network. The card reader 205, like other elements of system 200, may be connected to the network 220 via a wireless connection or a wired connection.
A server 210 may also be connected to the network 220. The server 210 includes a processor and a memory for storing instructions executable by the processor. The server 210 is capable of accessing a database 225 which includes one or more storage media and stores various data associated with one or more accounts. As shown in FIG. 2, the server 210 may be connected directly to the database 225, though database 225 could alternatively be incorporated within the server 210, accessed by the server 210 via a network, or configured in any other suitable way for access by the server 210. Database 225 may additionally be functionally distributed across two or more hardware units accessible by the server 210.
The card reader 205 is a device capable of reading and authenticating account-linking information stored on the account-linked card 230. The card reader 205 may include a display for displaying a graphical user interface to a user and/or account holder. The card reader 205 may further include a keypad, a keyboard, and/or a touch screen interface by which the account holder can input information. The card reader 205 preferably includes a processor and a memory which stores instructions for execution by the processor.
The card reader 205 may include a slot or another receptacle for receiving and reading the account-linking information stored on the account-linked card 230. The account-linked card 230 may be any secure card such as a magnetic strip card which stores account-linking information in a magnetic strip, a smart card having an integrated circuit and memory which stores account-linking information in the memory, or a card implementing radio frequency identification technology.
In ordinary use, an account holder may insert her account-linked card 230 into the card reader 205. The card reader 205 may then prompt the account holder via the graphical user interface on the display to authenticate the account-linked card 230, for example, by entering a personal identification number (PIN), or by other means. If successfully authenticated, the card reader 205 may display the account holder's account information to the account holder via the graphical user interface and may also allow the account holder to make transactions.
In this example embodiment, when preparing to display account information to the account holder, the card reader 205 sends a request to the server 210, which may be in the form of an application programming interface (API) call, via the network 220 requesting account information. The server 210 retrieves the requested information from the database 225 and sends it back to the card reader 205 via the network 220 for display to the account holder in the graphical user interface.
System 200 may also include a mobile device 215, which may be associated with the account holder of an account associated with the account-linked card 230. The mobile device 215 is suitable for receiving notifications sent by the server 210 via the network 220 relating to the account-linked card 230 or the account associated with the account-linked card 230. The mobile device 215 may be a smart phone or any other network connected device suitable for receiving notifications.
Exemplary embodiments may include one or more networks. In some examples, the network 220 may be one or more of a wireless network, a wired network or any combination of wireless network and wired network, and may be configured to connect a card reader and/or mobile device to a server. For example, the network may include one or more of a fiber optics network, a passive optical network, a cable network, an Internet network, a satellite network, a wireless LAN, a Global System for Mobile Communication (GSM), a Personal Communication Service (PCS), a Personal Area Network, Wireless Application Protocol (WAP), Multimedia Messaging Service (MMS), Enhanced Messaging Service (EMS), Short Message Service (SMS), Time Division Multiplexing (TDM) based systems, Code Division Multiple Access (CDMA) based systems, D-AMPS, Wi-Fi, Fixed Wireless Data, IEEE 802.11b, 802.15.1, 802.11n and 802.11g, Bluetooth, Near Field Communication (NFC), Radio Frequency Identification (RFID), and/or the like.
In addition, the network may include, without limitation, telephone lines, fiber optics, IEEE Ethernet 802.3, a wide area network (WAN), a wireless personal area network, a local area network (LAN), or a global network such as the Internet. In addition, the network may support an Internet network, a wireless communication network, a cellular network, or the like, or any combination thereof. The network may further include one network, or any number of the exemplary types of networks mentioned above, operating as a stand-alone network or in cooperation with each other. The network may utilize one or more protocols of one or more network elements to which they are communicatively coupled. The network may translate to or from other protocols to one or more protocols of network devices. Although the network 220 is depicted as a single network in FIG. 2, it should be appreciated that according to one or more examples, the network may comprise a plurality of interconnected networks, such as, for example, the Internet, a service provider's network, a cable television network, corporate networks, such as credit card association networks, and home networks.
In some embodiments, the disclosed system may include one or more servers. In some examples, servers may include one or more processors, which are coupled to memory. A server may be configured as a central system, server or platform to control and call various data at different times to execute a plurality of workflow actions. A server may be configured to connect to the one or more databases. A server may be connected to at least one client device, communication device, and/or mobile device.
In some embodiments, the disclosed account-linked card is a payment card, such as a credit card, debit card, or gift card. Information related to the issuer, card holder, and/or associated vendor may be displayed on the front or back of the card. In some examples, the payment card may comprise a dual interface payment card. In some embodiments, the card is not related to a payment card, and may comprise, without limitation, an identification card, security card, loyalty card, smart card, and/or access card.
In some embodiments, the camera may include but is not limited to a digital camera, video camera, still picture camera, and/or other imaging devices. In some embodiments the camera may include, but is not limited to, a thermal or infrared imaging device and/or presence detector such as, for example, a proximity sensor, motion sensor, LIDAR detector, sonar, and/or distance sensing equipment. It will be appreciated that the term “camera” is not limited to conventional optical imaging devices. In some embodiments, one or a plurality of cameras may contain an integral processor and/or be operably connected to a processor configured for computer vision.
In some embodiments, the camera is positioned to view a user inserting a card into a card reader. The camera is operably connected to a processor which is configured to receive visual data from the camera and analyze the visual data in order to make determinations. This automated process may be referred to as computer vision. In some embodiments, the camera is maintained in a fixed position and is not configured to pan, tilt, or zoom. This may increase the accuracy of a computer vision system as a fixed position camera may have a substantially static background. It will be understood that in some embodiments, a static background may be temporarily obscured by a significant number of moving objects, such as, for example, vehicle and/or pedestrian traffic.
In some embodiments, the camera and/or computer vision processor may be configured to establish a background over time. For example, a building in view of the camera may be renovated over time. By allowing a computer vision system to periodically update and/or average a background image, the computer vision system may be configured to adapt to changes in the background in order to reduce and/or eliminate false alerts.
In some embodiments, the processor is configured to perform automated interpretation of visual data received from the camera in real time. In some embodiments, multiple sources of visual data, such as cameras, video cameras, and/or other optical devices may be used. In some embodiments, utilizing multiple cameras, or sources of visual data which are positioned to view the same or similar scene from different angles may allow the computer vision system to make more accurate determinations regarding the location and/or movement of an object, such as, for example, a user. In some embodiments, utilizing two or more cameras positioned to view the same or similar scene from substantially the same angle may allow the computer vision system to utilize stereo vision techniques in order to make more accurate determinations regarding the location and/or movement of an object.
In some embodiments, a camera may contain or be connected to a processor which performs user detection on the scene viewed by that camera and may generate a “bounding box” (or other additional information) for a user in a frame. In some embodiments, a processor may generate a bounding box for individual portions of the user including, but not limited to the user's arms, head, torso, and/or hands. In some embodiments, the processor may generate a bounding box for the user's purse and/or wallet.
In some embodiments, a camera may transmit bounding box data to a processor, which aggregates possible user data from multiple cameras. Such embodiments may allow the processor to more accurately determine user presence and/or determine when a user is indicating that she is about to leave an area. The use of multiple cameras may increase the overall accuracy and effectiveness of the system utilizing triangulation and/or false-alarm rejection.
The process of user detection may include one or a plurality of computer vision and/or feature detection algorithms including, but not limited to a histogram of oriented gradients (HOG), integral channel features (ICF), aggregated channel features (ACF), and/or deformable part models (DPM). In some embodiments, tracking algorithms may also be utilized including, but not limited to Kalman filters, particle filters, and/or Markov chain Monte Carlo (MCMC) tracking approaches.
Different algorithms may provide unique performance characteristics in terms of accurately detecting a user and/or user movements as well avoiding false alarms. Each computer vision and/or feature detection approach may also provide differing performance characteristics based on the lighting and/or other visual characteristics of a particular deployment.
In some embodiments, user and/or object detections can be aggregated temporally within a camera and/or processor to reduce false alarms and improve the probability of detection by keeping track of person and/or object detection confidences over time within a camera view. This may reduce or prevent false alarms and improve the reliability of the over-all system.
In some embodiments, computer vision determinations may be utilized in combination with a predictive model in order to determine the probability of a user departing from a card reader device without retrieving a card. The predictive model may be developed based on historical information related to use of a card reader device. Such information may include, but is not limited to, user position, user speed, user movements, and/or an identified sequence of transaction steps and/or user behaviors.
In one non-limiting example, a predictive model may be developed based on the length of time a user utilizes an ATM before retrieving their card and/or departing from the ATM. A predictive model may understand that most users utilize an ATM for a single transaction that lasts approximately 90 seconds and involves a particular sequence of transaction steps. Using this information, the predictive model may reject indicators that a user is going to leave an area in significantly less than 90 seconds after the user has inserted a card into the ATM. For example, it is unlikely a user would depart from an ATM within the first ten seconds of initiating a transaction. In some embodiments, the disclosed predictive model may be used as a confirmation to limit the number of false alarms generated by a computer vision processor.
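One non-limiting way to express such a gate in software is sketched below in C#; the class name, method name, and threshold are hypothetical examples rather than limitations:
using System;

// Illustrative sketch: ignore "user is departing" indicators that arrive
// implausibly early in a transaction. Names and thresholds are hypothetical.
public class DepartureIndicatorGate
{
    private static readonly TimeSpan MinimumPlausibleSession = TimeSpan.FromSeconds(10);

    // Returns true when a computer-vision departure indicator should be acted upon.
    public bool Accept(DateTime cardInsertedAt, DateTime indicatorAt)
    {
        TimeSpan elapsed = indicatorAt - cardInsertedAt;
        return elapsed >= MinimumPlausibleSession; // earlier indicators are treated as likely false alarms
    }
}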
In some embodiments, a computer model and/or predictive model may incorporate variables related to the position and/or movements of a user. For example, a predictive model and/or computer model may observe user position and/or user movements using a camera and computer vision processor in order to develop a model of positions, movements, and/or behaviors which precede the user leaving the ATM. In some embodiments, the computer model may include variables relating to a sequence of user positions or movements.
The disclosed system may include multiple and/or redundant systems. In some embodiments a processor aggregates input from a computer vision system as well as a predictive model in order to make a determination of whether a user has departed or is about to depart from an ATM without retrieving a card which has been inserted into a card reader.
In some embodiments, a card reader device may be configured to execute multiple computer vision and/or facial recognition techniques. Such embodiments may determine which technique or combination of techniques is most effective once the physical system has been installed at a particular location. In some embodiments, the disclosed system may include a training routine or initialization program designed to determine which of a plurality of computer vision and/or facial recognition techniques is best suited for a particular application.
In one non-limiting example, using object detection and/or bounding boxes to keep track of a person detected using facial detection or facial recognition techniques may reduce false alarms or inaccurate detection of a user. If the user turns, bends, or otherwise obscures their face temporarily, some embodiments may use object detection and/or bounding boxes to continue to track the user despite being unable to clearly view the user's face. Some embodiments may use object permanence logic and/or encoded rules in order to avoid confusing a detected user with a different individual who may also be in view of the computer vision system. It will be appreciated that facial detection techniques may be utilized to detect the presence of a face and facial recognition techniques may be utilized in order to identify, track, or monitor a particular user's face.
In some embodiments, computer vision information may be screened in order to reduce and/or limit inaccurate information. In one non-limiting example, human motion is known to occur within reasonable limitations. In some embodiments, a computer vision system may monitor the speed at which a user approaches or departs from an ATM. If a computer vision system determines that a user has approached or departed from the ATM at a speed outside the reasonable limits on human movement, this information may be flagged or rejected. In one non-limiting embodiment, the average speed at which a user approaches or departs from an ATM may be determined to be approximately 3 miles per hour; therefore, any report of a user approaching or departing from an ATM at greater than 10 miles per hour likely represents an inaccurate report.
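As a non-limiting illustration, such a plausibility screen could be sketched in C# as follows (the threshold and names are hypothetical examples):
using System;

// Illustrative sketch: flag speed estimates outside plausible human movement.
public static class SpeedScreen
{
    private const double MaxPlausibleMph = 10.0; // mirrors the example threshold above

    // distanceFeet: change in the user's measured distance from the ATM over the interval.
    public static bool IsPlausible(double distanceFeet, TimeSpan interval)
    {
        double feetPerSecond = distanceFeet / interval.TotalSeconds;
        double milesPerHour = feetPerSecond * 3600.0 / 5280.0; // convert feet/second to miles/hour
        return milesPerHour <= MaxPlausibleMph;                // faster reports are flagged or rejected
    }
}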
Using the systems, methods, and techniques disclosed herein, a card-reader device, point of sale, and/or ATM is able to alert a user who departs from a card reader without retrieving a card. By alerting and/or notifying a user who has left, or is about to leave an area without retrieving a card, the user can be reminded to retrieve the card, thereby saving both the user and the card issuer the cost and inconvenience of replacing the card.
In one example embodiment, a user approaches an ATM equipped with a camera which is in data communication with a computer vision processor. The user inserts a card into the card reader in order to initiate a transaction. The card sensor transmits a signal to the processor that a card has been inserted into the card reader. The camera is configured to observe the user and/or the user's movements while the user is at the ATM. The user follows the graphical user interface prompts in order to perform the desired transaction. In this example, the transaction involves the user withdrawing currency from the ATM. Once the user has retrieved the currency from the ATM, the user turns to walk away. The camera and computer vision processor are configured to capture and interpret visual data showing the user turning and then walking away from the ATM. The processor is in data communication with the card sensor, which indicates that the card has not been retrieved from the card reader. Once the processor determines that the user is leaving the area and has not retrieved the card, the processor initiates a user notification.
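A non-limiting C# sketch of this decision logic follows; the interface and method names are hypothetical and do not correspond to any particular implementation:
// Illustrative sketch of the decision made when the computer vision processor
// concludes that the user is walking away without the card.
public interface ICardSensor { bool CardPresent { get; } }
public interface IOutputDevice { void PlayAudibleAlert(string message); }
public interface INotifier { void SendMessage(string destination, string message); }

public class ForgottenCardMonitor
{
    private readonly ICardSensor cardSensor;     // reports whether a card is still in the reader
    private readonly IOutputDevice outputDevice; // speaker and/or strobe at the ATM
    private readonly INotifier notifier;         // sends SMS, push, or email notifications

    public ForgottenCardMonitor(ICardSensor cardSensor, IOutputDevice outputDevice, INotifier notifier)
    {
        this.cardSensor = cardSensor;
        this.outputDevice = outputDevice;
        this.notifier = notifier;
    }

    // Called when the vision system reports that the user is departing.
    public void OnUserDeparting(string userName, string mobileNumber)
    {
        if (!cardSensor.CardPresent)
            return; // card already retrieved; nothing to do

        outputDevice.PlayAudibleAlert(userName + ", don't forget your card.");
        notifier.SendMessage(mobileNumber, "Your card is still in the ATM.");
    }
}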
In some embodiments, an example notification may include, but is not limited to an audible alert issued by an output device. The ATM may include a speaker which is configured to beep, whistle, or issue an audible statement, informing the user that she has not retrieved her card. As an example, the output device may be configured to issue the statement “Don't Forget Your Card” when the processor determines that the user is leaving the ATM and has not retrieved her card.
In some embodiments, the processor, which is operably connected to the card reader, may cause the output device to include the user's first name, last name, and/or full name as part of an issued statement. The card reader may read the user's name stored on the account-linked card, and transmit the user's name to the processor. The processor may then cause the output device to issue a personalized audible alert such as, for example, issuing the statement “Jane Smith, Don't Forget Your Card.”
In some embodiments, in addition to an immediate audible alert, the ATM may be configured to activate a light, strobe, whistle, beep, and/or siren to get the user's attention and remind the user to retrieve her card.
In some embodiments, the camera observes the user after the audible notification has been issued to determine if the user returns to the ATM and retrieves the card. If the user does not return to retrieve the card, or if the user does not retrieve the card within a predetermined time period, the processor may be configured to send a notification to a mobile device associated with the user. In such embodiments, the processor may instruct a network connected server to send a text message, SMS, and/or phone message to one or a plurality of mobile devices associated with the user and/or account holder based on information stored on the card. In some embodiments, the ATM may additionally or alternatively send an email to one or a plurality of email addresses associated with the user based on information stored on the card. In some embodiments, if the user does not return to retrieve the card, or if the user does not retrieve the card within a predetermined time period, the processor may be configured to instruct a digital wallet application or other application associated with the user to issue an alarm such as, for example, an audible notification, a siren, a vibration, or a strobe light.
FIG. 3 shows a method 300 of utilizing a card reader, ATM, and/or POS device in order to alert a user who has failed to retrieve a card. At step 305, a user approaches the ATM. As the user approaches the ATM, the user enters the field of view of a camera. At step 310, the user inserts an account-linked card into the card reader of the ATM. In some embodiments, the ATM registers the presence of the card using a card sensor. The user may be prompted to enter a PIN or other authentication by the user interface of the ATM and then be allowed to perform various transactions using the account associated with the user's account-linked card. In step 315, the camera collects visual data regarding the user and transmits visual data to a computer vision processor. In step 320, the processor analyzes the visual data and determines, in step 325, that the user has departed from the ATM. The processor may also determine, using computer vision or a card sensor, in step 330, if the user has retrieved her card. If the user has already retrieved her card, in step 335, the ATM may determine that no further action is required. If the user has not retrieved her card, in step 340, the ATM issues an initial notification reminding the user to retrieve her card. The initial notification may be a statement including the user's name. In step 345, the ATM transmits a reminder notification to a communication device and/or mobile device associated with the user. In some embodiments, the ATM may use a chip card reader which leaves the card physically accessible to the user or any passers-by in the area. In step 350, if the user has not retrieved her card, the card may be deactivated to prevent someone other than the user from obtaining and using the card.
In some embodiments, the disclosed ATM or other card reader device may send a code to the user's mobile device and/or email address. The ATM may retain the user's card so that the card is physically inaccessible to the user or any passers-by for a predetermined period of time after sending the message. If the user returns to the ATM within a predetermined period of time, the user may be able to enter the code in order to retrieve the card. In such embodiments, upon receiving the code, the ATM may present the card to the user, allowing the user to physically access the card again. In some embodiments, the ATM may maintain the user's transaction session for a predetermined period of time. In some embodiments, the camera is configured to detect the presence of a user based on facial recognition and the processor is configured to end a user's transaction session upon recognition of a different user proximate to the ATM. This may allow the ATM to maintain a transaction session open and facilitate the return of a user's card for as long as possible without allowing a subsequent user to interact with the user's account through an open transaction session.
In some embodiments, if the user does not return to the ATM within a predetermined period of time, the ATM may ingest the card. In many embodiments, once the card is ingested, it may be destroyed or may otherwise be un-retrievable by the user.
In some embodiments, the ATM may be configured to extend a given time period prior to ingesting the card if the user responds to a notice sent to the user's mobile device. In an example, the ATM may be configured to issue an audible warning and send a notice to the user's mobile device upon determining that the user has left the ATM without retrieving her card. The ATM may then initiate, for example, a two-minute time period, after which the card will be ingested. This two-minute time period may be designed to allow the user to hear the audible alert and/or check her mobile device and return to the ATM prior to the card being ingested. In some embodiments, the two-minute window may be extended if the user responds to the message sent to her mobile device. For example, if the user responds to the message within 90 seconds, the ATM may extend the time window for an additional five minutes. This may allow the user to notice the message and respond, indicating that the user will return to the ATM even if the user is unable to return to the ATM within the original time window. It will be understood that the time period prior to ingesting a card, as well as the time period for allowing a user to return may be any pre-determined time period. Additionally, each time period may be extendable by any amount. The example embodiments are not intended to be limiting on these potential features.
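This extendable window could be sketched, for example, as follows; the two-minute, 90-second, and five-minute values simply mirror the example above and are not limiting:
using System;

// Illustrative sketch: a card-retention window that starts at two minutes and is
// extended by five minutes if the user responds to the notification within 90 seconds.
public class CardRetentionTimer
{
    private readonly DateTime alertIssuedAt;
    private DateTime ingestAt;

    public CardRetentionTimer(DateTime alertIssuedAt)
    {
        this.alertIssuedAt = alertIssuedAt;
        ingestAt = alertIssuedAt + TimeSpan.FromMinutes(2); // initial window
    }

    // Called when the user responds to the message sent to the mobile device.
    public void OnUserResponded(DateTime respondedAt)
    {
        if (respondedAt - alertIssuedAt <= TimeSpan.FromSeconds(90))
            ingestAt = respondedAt + TimeSpan.FromMinutes(5); // extend the window
    }

    public bool ShouldIngestCard(DateTime now)
    {
        return now >= ingestAt;
    }
}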
In some embodiments, the user may be able to respond to the notice indicating that the user will return to the ATM at a specific time in the future, thereby creating a window of a predetermined duration at a later time.
In some embodiments, the ATM may be equipped with a temporary card storage which allows the ATM to resume normal operations while maintaining possession of a user's un-retrieved card without ingesting the card. In such embodiments, the ATM may be configured to return the card to a user who enters a code sent to the user's mobile device and/or email. In some embodiments, the ATM may confirm the user retrieving the card is the same user who abandoned the card using facial recognition techniques. In some embodiments, the ATM may send a second code to the user upon entry of the initial code. The second code may be designed to be active for a limited time window, such as, for example, less than one minute. This may help to ensure that the user attempting to retrieve the abandoned card is in physical possession of the user's mobile device and/or has real-time access to the user's email, thereby reducing the ability of a third party to intercept any code sent from the ATM to the user and fraudulently obtain the user's card. If the user does not return to the ATM after a pre-determined period of time, the ATM is designed to ingest the card, thereby making the card un-retrievable.
In some embodiments, the ATM may utilize facial recognition techniques in order to compare the user attempting to retrieve the card with the user who initially abandoned the card. The ATM may additionally or alternatively utilize facial recognition techniques to compare the user attempting to retrieve the card to a facial recognition profile of an authorized user associated with the card. In some embodiments, only the authorized user, who is confirmed using facial recognition, may be allowed to utilize the card to initiate a transaction. This may be an added security feature in addition to requiring the user to enter a PIN or other form of password.
In addition to utilizing computer vision techniques, disclosed systems, ATMs, POSs, and card readers may be equipped with distance sensing equipment. In some example embodiments, the distance sensing equipment may be configured to be aimed at an individual user rather than maintained in a fixed position. The distance sensing equipment may be in data communication with a processor. In some embodiments, the distance sensing equipment may be used to determine when a user is leaving an ATM.
In an example embodiment, when a user approaches an ATM and initiates a transaction, the user's torso will occupy a significant portion of a field of view of any cameras and distance sensing equipment mounted on the ATM. In addition to the computer vision techniques described herein, distance sensing equipment may be used to directly measure how far the user is from the ATM. While the user is utilizing the ATM, the user will be within a certain distance range such as, for example, between 1 and 3 feet away. If the user moves further from the ATM, as measured using distance sensing equipment, the distance sensing equipment may inform the processor that the user is leaving the ATM. The distance sensing equipment may transmit data to the processor indicating the user's distance from the ATM. This may allow the processor to determine if the user is actually leaving the area or if the user has merely taken a small step back. In some embodiments, the processor may use information from the distance sensing equipment in order to corroborate, verify, and/or confirm determinations made using computer vision techniques.
In some embodiments, the processor may be able to determine the rate and/or direction at which the user is departing the area. The processor may adjust the volume and/or timing of any alert issued by the ATM or sent to the user based on these determinations.
In some embodiments, when the ATM determines that the user has departed from the ATM without retrieving the user's card, the ATM may close any open dialog windows, thereby preventing anyone other than the original user from accessing the user's account if the user walks away without completing a transaction.
In some embodiments, the ATM, POS, and/or card reader device may be operably connected to a weight sensor. In some embodiments, the weight sensor is a pad positioned directly in front of an ATM, so that a user of the ATM stands on the weight sensor. The weight sensor may be configured to provide a notification to the processor when a user is standing on the weight sensor and/or when a user steps off of the weight sensor. This may provide additional corroboration, verification, and/or confirmation of any determination made using the disclosed computer vision techniques.
In some embodiments, the disclosed computer vision system may be configured to detect movements, behaviors, and/or features which indicate a user is about to leave an area before the user has actually left.
If a processor determines that a user is preparing to leave an ATM but has not retrieved a card, the system may issue an early notification reminding the user to retrieve her card. In some cases, this early notice reminds the user to take her card and eliminates the need to send a message to a mobile device associated with the user after the user has left the ATM without retrieving the card.
In some embodiments, the disclosed systems may utilize computer vision in order to observe the movements and/or position of a user including movements of the user's hands, arms, torso, and/or head. By observing the user's movements, the ATM may be able to develop and/or refine a computer model and/or database of motions or behaviors that precede the user departing from the ATM.
For example, if a user rotates her head and torso, so that the user is facing away from the ATM but has not retrieved her card, the processor may determine that the user is in the process of departing from the ATM without retrieving her card. This determination may be made even if the user is still in close proximity to the ATM. In such cases, an audible and/or visual alert reminding the user to retrieve her card may be very effective. If the user starts walking away, a second audible alert may be issued. In some embodiments, the second audible alert may include a different tone, pitch, or statement, and/or be issued at a greater volume than an initial audible alert. If the user does not respond to the audible alerts, the ATM may send a notification to the user's mobile device, reminding the user to return to the ATM and retrieve her card.
In some embodiments, facial recognition techniques may be used to determine when a user rotates her head in order to predict when a user is likely to leave an ATM. This may be used in addition to or instead of other techniques for determining when a user is leaving an ATM which are described herein.
In another example, a user's movements may be observed to determine that a user has retrieved currency from the ATM and inserted the currency into a purse and/or wallet. The ATM may determine that this behavior precedes the user departing from the ATM. If the user does not retrieve her card within a certain time period of inserting currency into her purse and/or wallet, the ATM may issue an audible warning reminding the user to retrieve her card.
In some embodiments, an ATM with the computer vision capabilities described herein may develop a library of intent to leave behaviors based on observations of movement and/or behaviors which occur at the ATM. For example, an ATM may be configured to observe all ATM users and record information related to the user movements, timing, and sequential transaction steps. Over time, the ATM may generate a strong association between a user retrieving currency, putting the currency in her wallet, retrieving a card, and then, approximately 10 seconds later, departing from the ATM. If an ATM observes a user's arm movements and determines that the user has retrieved currency and put it in her wallet, but has not yet retrieved the user's card after a predetermined time period, the ATM may elect to issue a notification reminding the user to retrieve her card.
In some embodiments, the ATM may develop a library of other behaviors which are associated with departing the ATM without retrieving a card and issue an initial notification reminding users to retrieve their cards.
In some embodiments, the ATM may observe which notifications are effective at causing the user to notice and return to retrieve a card. Over time, the ATM may refine the notifications in order to improve the response rate. Such refinements may include, for example, changing the volume, pitch, and/or tone of a notification and/or changing the content and/or timing of an issued statement.
In some embodiments, an ATM may determine that users with particular characteristics are more likely to respond to different notifications. In such embodiments, the ATM may utilize a user's characteristics, such as information stored on the user's card, and adjust any notifications in order to most effectively remind the user to retrieve a card. For example, over time, an ATM may determine that a user responds most consistently to issued statements which begin with the user's first name. Some embodiments may also determine that other users respond most consistently to issued statements which begin with the user's full name. Based on such determinations, a personalized audible warning may be generated for individual users.
FIG. 4 shows an example embodiment of a method 400 of notifying a user who has departed from an ATM without retrieving a card. At step 405, a user approaches the ATM. As the user approaches the ATM, the user enters the field of view of a camera and distance sensing equipment. At step 410, the user inserts an account-linked card into the card reader of the ATM. In some embodiments, the ATM registers the presence or absence of the card directly using a card sensor. The user may be prompted to enter a PIN or other authentication by the user interface of the ATM and then be allowed to perform various transactions using the account associated with the account-linked card. In step 415, the camera collects visual data regarding the user and transmits visual data to a computer vision processor. In step 420, the processor analyzes the visual data and determines if the user has displayed any intent to leave behaviors such as, for example, inserting currency into a wallet or purse, placing any items in the user's pockets, operating a mobile device, and/or turning away from the ATM. In step 425, the ATM determines that the user has displayed an intent to leave behavior. In step 430, the processor may determine whether or not the user has retrieved her card. The processor may determine whether the user has retrieved her card using computer vision or may utilize the card sensor to determine if the user has retrieved her card. If the user has already retrieved her card, in step 435, the ATM determines that no reminder notification is necessary. If the user has not retrieved her card, in step 440, the ATM may issue an initial notification reminding the user to retrieve her card. In step 445, the ATM may continue to observe the user using a camera after the initial notice is issued. In step 450, the ATM again determines whether or not the user has retrieved her card. If the user retrieves her card, the ATM may determine that no subsequent notifications are required. If the user departs from the ATM without retrieving her card, in step 455, the ATM may issue a second audible and/or visual notification and/or may transmit a notification to a mobile device associated with the user.
FIG. 5 shows an example embodiment of a user alert method 500 utilizing a point of sale device. In step 505, the method comprises detecting the presence of a user at the POS device using a camera which is in data communication with a computer vision processor. The camera is configured to observe the presence and movement of a user who is approaching and/or proximate to the POS device. In step 510 the POS detects the presence of a card within the POS device using a card sensor which is in data communication with the processor. The card sensor is configured to detect the presence and/or absence of a card within the POS device. In step 515, the POS device determines when the user moves away from the POS device using the processor and camera. In step 520, if the POS device detects the presence of a card within the POS device and determines that the user has moved away from the POS device, the POS device sends a message to a mobile device associated with the user.
In some embodiments, method 500 also includes, in step 525, determining when and/or if a user has removed currency from the POS device. This determination may be performed using a currency sensor or may be performed using computer vision. When computer vision is used to determine if a user has removed currency from the POS device, the camera observes the user's movements and relays the visual information to the computer vision processor. The computer vision processor may be configured to interpret the user's movements and may also be configured to extract the feature of currency from the visual data provided by the camera.
In some embodiments, method 500 also comprises the steps of 530, determining when a user turns her body away from the POS device, and 535, issuing an audible notification when a user turns away. In some cases, this initial notification will alert the user to the fact she has not retrieved her card and prevent the user from departing the area without the card.
In some embodiments, method 500 also comprises the step of 540 recording the number of events in which a particular user has moved away from the POS device while a card was inserted in the POS device. Over time, the POS may develop a profile and/or score associated with users who are more likely to leave the POS without retrieving their cards. If the POS determines that a user has a history of leaving the POS without retrieving her card, the POS may adjust operation of the user interface in order to remind the user to retrieve her card before the user completes a transaction.
If the user departs from the POS device without retrieving her card, in step 545, the POS may retain the card for a predetermined retainment time period. If the user responds on her mobile device to the message indicating that she has left her card at the POS, in step 550 the POS device may extend the retainment period. In some embodiments, once the retainment period expires, in step 555, the POS ingests the card.
The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as may be apparent. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, may be apparent from the foregoing representative descriptions. Such modifications and variations are intended to fall within the scope of the appended representative claims. The present disclosure is to be limited only by the terms of the appended representative claims, along with the full scope of equivalents to which such representative claims are entitled. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
The foregoing description, along with its associated embodiments, has been presented for purposes of illustration only. It is not exhaustive and does not limit the invention to the precise form disclosed. Those skilled in the art may appreciate from the foregoing description that modifications and variations are possible in light of the above teachings or may be acquired from practicing the disclosed embodiments. For example, the steps described need not be performed in the same sequence discussed or with the same degree of separation. Likewise various steps may be omitted, repeated, or combined, as necessary, to achieve the same or similar objectives. Accordingly, the invention is not limited to the above-described embodiments, but instead is defined by the appended claims in light of their full scope of equivalents.
In the preceding specification, various preferred embodiments have been described with references to the accompanying drawings. It may, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. The specification and drawings are accordingly to be regarded as an illustrative rather than restrictive sense.
Claims
1-20. (canceled)
21. A point of sale (POS) device comprising:
a card reader configured for insertion and return of a card and for entry of a code; and
a processor in data communication with the card reader, a first sensor, and a second sensor, wherein the processor is configured to: receive a first notification from the first sensor, receive a second notification from the second sensor, transmit a message indicating a card insertion, the message including a message code, maintain a transaction session for a period of time following the transmission of the message, and upon receipt of a third notification from the card reader, instruct the card reader to return the card.
22. The POS device of claim 21, wherein:
the first sensor is a card sensor configured to detect the insertion and release of the card, and
the first notification indicates when the card has been inserted into the card reader.
23. The POS device of claim 21, wherein:
the second sensor is a camera configured to observe the presence and absence of a user proximate to the POS device, and
the second notification indicates the presence or absence of the user.
24. The POS device of claim 23, wherein the camera is configured to:
determine when the user performs a head rotation; and
transmit the second notification indicating the absence of the user.
25. The POS device of claim 23, wherein:
the camera is further configured to, upon detection of the presence of a user proximate to the POS device, monitor the movement of the user for one or more movement indications showing the user may be moving away from the POS device, wherein the one or more user movement indications include at least one selected from the group of the user moving away from the POS device, the user inserting an item into a pocket, the user inserting an item into a bag, and the user operating a mobile device, and
the camera is further configured to, upon detection of one or more movement indications, send a user movement notification to the processor.
26. The POS device of claim 25, wherein:
the POS device includes an output device, and
the processor is further configured to, upon receipt of a user movement notification, present at least one selected from the group of an audible notice and a visual notice via the output device.
27. The POS device of claim 26, wherein:
the output device is a speaker, and
the audible notice includes the first name listed on the card.
28. The POS device of claim 21, wherein:
the third notification is a code entry message comprising an entered code, and
prior to instructing the card reader to return the card, the processor is configured to determine whether the entered code matches the message code.
29. The POS device of claim 21, wherein the camera is configured to detect the presence of a user based on facial detection.
30. The POS device of claim 29, wherein:
the camera is configured to transmit the facial detection to the processor, and
the processor is configured to compare the facial detection to a facial recognition profile of an authorized user associated with the card.
31. The POS device of claim 30, wherein the processor is configured to transmit the message only upon a successful comparison of the facial detection to the facial recognition profile.
32. The POS device of claim 30, wherein the processor is configured to end the transaction session upon an unsuccessful comparison of the facial detection to the facial recognition profile.
33. The POS device of claim 30, wherein the POS device is configured to ingest the card upon an unsuccessful comparison of the facial detection to the facial recognition profile.
34. The POS device of claim 21, wherein:
the POS device further comprises a currency sensor configured to send a currency notification to the processor that a form of currency has been removed from the POS device, and
the processor is configured to send the message after receiving a currency notification from the currency sensor.
35. The POS device of claim 21, wherein the POS device is configured to ingest the card after the period of time expires.
36. A user alert method, the method comprising:
detecting, by a first sensor, a card insertion into a card reader;
transmitting, by the first sensor, a first notification to a processor, wherein the first notification indicates the card insertion;
detecting, by a second sensor, the presence of a user within a vicinity of the second sensor;
transmitting, by the second sensor, a second notification to the processor, wherein the second notification indicates the presence of the user;
initiating, by the processor, a transaction session upon receipt of the first notification and the second notification;
detecting, by a second sensor, the absence of the user within the vicinity of the second sensor;
transmitting, by the second sensor, a third notification to the processor, wherein the third notification indicates the absence of the user;
transmitting, by the processor, a message indicating the card insertion upon receipt of the third notification, the message including a code;
maintaining, by the processor, the transaction session for a period of time following the transmission of the message;
receiving, by the card reader, entry of the code;
transmitting, by the card reader, a fourth notification to the processor, wherein the fourth notification indicates the entry of the code; and
instructing, by the processor, the card reader to release the card.
37. The method of claim 36, further comprising:
prior to instructing the card reader to release the card, transmitting, by the second sensor, a fifth notification to the processor, wherein the fifth notification indicates the presence of the user within the vicinity of the second sensor.
38. The method of claim 37, wherein the second sensor confirms, via facial detection, that the user within the vicinity of the second sensor is an authorized user prior to transmitting the fifth notification.
39. The method of claim 36, further comprising extending, by the processor, the transaction session for a second period of time upon receipt of the fourth notification.
40. A card reader comprising:
a slot for insertion and removal of a card, the slot configured to retain the card until the conclusion of a transaction session;
a card sensor configured to determine whether the card has been inserted into the slot and whether the card has been removed from the slot; and
an input device configured to receive the entry of a code, wherein: upon determining the insertion of the card into the slot, the card sensor is configured to transmit a first notification indicating the insertion of the card, upon receipt of the entry of the code by the input device, the card reader is configured to transmit a second notification indicating receipt of the entry of the code, and upon receipt of a third notification extending the transaction session for a period of time, the card reader is configured to retain the card for the period of time.
Patent History
Publication number: 20200342741
Type: Application
Filed: Jan 15, 2020
Publication Date: Oct 29, 2020
Inventors: Abdelkader BENKREIRA (Washington, DC), Joshua EDWARDS (Philadelphia, PA), Michael MOSSOBA (Arlington, VA)
Application Number: 16/743,106
Classifications
International Classification: G08B 21/24 (20060101); G06K 9/00 (20060101); G07G 1/00 (20060101); G06Q 20/20 (20060101);
|
__label__pos
| 0.737504 |
CSS Question
Why do some of the cards sometimes not rotate at all?
example: http://codepen.io/anon/pen/mEoONW
Cards should have at least a minimum 180 degree rotation, set in CSS with JS, but over multiple runs some of them don't rotate at all. Can anyone please explain why?
<div class="flip-container">
<div class="flipper">
<div class="front"></div>
<div class="back">?</div>
</div>
</div>
...
<button onclick="rotate();">Rotate</button>
<style>
.flip-container {
perspective: 1000px;float:left;
}
.flip-container, .front, .back {
width: 160px;height: 220px;
}
.flipper {
transform-style:preserve-3d;position: relative;
}
.front, .back {
backface-visibility: hidden;position: absolute; top: 0; left: 0;
}
.front {
z-index: 2; transform: rotateY(0deg);background-color: blue;
}
.back {
transform: rotateY(180deg); background-color: grey;font-size: 13em; text-align: center; vertical-align: middle;
}
</style>
<script>
function rnd(){
var randNum = Math.floor((Math.random() * 20) + 1);
if(randNum %2 == 0){//generated number is even
randNum = randNum +1 ;
}
return randNum;
}
function rotate(){
$('.flipper').each(function(i, obj) {
var rn = rnd();
var nn = 180 * rn;
var sp = 0.2 * rn;
console.log(rn);
$(this).css("transition", sp+"s").css("transform", "rotateY("+nn+"deg)");
});
}
</script>
Answer Source
Easy.
To start rotating in this pen, the card has to receive new CSS.
If the number generated by the rnd() function is the same as the previous one, the element's CSS is not changed, so the browser doesn't start the animation; it thinks the animation has already been played (and it was).
To 'restart' the animation when it has the SAME params you have two ways: either remove the element from the DOM and put it back (ugly, ah?), OR clear the style and then set it back in a timeout. That trick will help 'restart' the animation.
$element.attr('style', null); // remove the old style before setting the new one
setTimeout(function(){
    $element.css("transition", "0.6s");
    $element.css("transform", "rotateY(180deg)");
}, 100);
I've forked your pen and made all cards spin here.
|
__label__pos
| 0.996638 |
How to Take A Screenshot on Asus Laptop
Many of us are used to "copy-pasting" stuff when using a laptop. But, you can't always copy and paste items from one place to another. If any program or data does not allow you to copy, copy and paste will not work. A screenshot is essentially like clicking a picture of the items or data shown on the screen. The screenshot is taken in the form of a picture that you can store on your computer. Taking a screenshot is very handy since it is very quick to do so, and you can capture anything displayed on your computer screen.
Methods on How to Screenshot on Asus Laptop
Whether you own an Asus laptop that costs close to a lakh or an Asus budget gaming laptop, if you do not know how to take screenshots on your laptop, you can follow the methods given in this article. There are several methods you can use to take a screenshot on a laptop, and we will cover each of those methods.
1. Using PrtScr to take Screenshot on an Asus Laptop
On any Asus laptop, you will have a key labeled “PrtScr” on the top of the keyboard’s rows. You can use this function to take screenshots on your laptop by following the given steps.
1. Open up the page you want to take a screenshot of.
2. Press the “PrtScr” button on your keyboard once, which will capture the screen.
3. You will not see anything happen, but don’t worry; the captured shot is “copied,” and as a next step, you will have to “paste” it.
4. Open up any application like Paint or Word or any image editing tool.
5. Paste the screenshot by pressing Ctrl + V on your keyboard.
6. If you want to save the taken screenshot, you have to open it in Paint. If you just want to paste it somewhere on a Presentation or a document, you can stop at the Paste step.
2. Using Windows + PrtScr to take a screenshot on an Asus Laptop
This is very similar to the above method, which involves the same basic function and key. The only difference here is between what happens to your screenshot.
In the earlier method, the screenshot you took will be “coped,” and you have to “paste” it to see the screenshot you took.
With Windows + PrtScr, you directly save the screenshot you have taken. Follow the steps given to take screenshots using Windows + PrtScr.
1. Open up the page you want to take a screenshot of.
2. Press the "Windows" button on your keyboard, followed by the "PrtScr" button once, which will capture the screen. This will be indicated with a slight blinking effect.
3. Your screenshot is saved in the folder labeled “Screenshots” in the Pictures. You can navigate here by My PC/This PC > Pictures > Screenshots.
3. Using Windows + Alt + PrtSc to take a Screenshot on an Asus Laptop
With Windows + Alt+ PrtScr, you directly save the screenshot you took and share it with your friends via email, Xbox Game Bar, or any other sharing application. When you have taken the screenshot, a pop-up will come on the right side of the screen, asking for sharing of the screenshot.
Follow the steps to take a screenshot using Windows + Alt + PrtScr.
1. Open up the page or put the screenshot on the screen you want to take.
2. On your keyboard, press the "Windows" button followed by the "Alt" and "PrtScr" buttons once, which will capture the screen. This will be indicated with a slight blinking effect.
3. Your screenshot is saved in the "Captures" folder inside your Videos folder (this shortcut captures through the Xbox Game Bar). You will also get a pop-up asking you to share the taken screenshot.
4. Using the Snipping Tool to take a Screenshot on an Asus Laptop
The Snipping tool is present by default on any Windows computer, and it is a handy program that lets you edit and play around with the screenshot you capture.
In earlier methods, you would capture the whole screen. If you want to capture a specific part of a screen, you have to use a tool like the Snipping Tool.
It is quite easy to use this tool. Follow these given steps to take a screenshot using Snipping Tool.
1. Go to Start and search for Snipping Tool and open it.
2. A small window will open up. Click on New, and the screen will freeze with decreased brightness.
3. You can select any portion of the screen with the marker tool.
4. You can edit the taken screenshot in the window with the numerous tools and then save it.
5. Using Win + Shift + S to take a Screenshot on an Asus Laptop
This is the alternative and the quickest way to take a screenshot, similarly to the earlier method. To take screenshot on an Asus laptop this way, follow these steps.
1. Open up the page you want to take a screenshot of.
2. Press the “Windows” button and then the “Shift” and “S” buttons.
3. This will dim and also freeze the screen. You can use the marker tool to select any portion of the screen.
4. Once you are done, the taken screenshot will be saved on the clipboard. You can save it on Word, Paint, or Photoshop to edit it or save it as it is.
5. To save it, open up the Snipping Tool from Start, where you will see the screenshot taken. You can edit the screenshot here or save it by pressing Ctrl + S.
This is the easiest and the best method to take a screenshot.
6. Using Third-Party Tools to Take a Screenshot
There are several third-party tools you can use to take screenshots, but we can’t list every one of them here. Some of the popular third-party tools and browser extensions are :
• Lightshot
• GoFullPage
• FireShot
• Nimbus Screenshot & Video Recorder
• HTML Element Screenshot
Not only these, but there are also many others, and many of them have their advantage and special features. Also, extensions aren’t limited to Chrome browsers only. These extensions/plugins can be easily installed in other browsers like Firefox or Edge.
As mentioned earlier, some of these extensions/plugins have additional functionality compared to traditional screenshot methods like the ability to edit screenshots, cloud uploads, copy to clipboard, search for similar screenshots, etc.
Other Shortcut Keys You Can Use On Your Asus
Screenshot shortcuts aren’t the only things you can use to make your life easier. There are some other acronyms that everyone should know if they have an ASUS laptop.
1. Fn + F12 = MyASUS
MyASUS is the most convenient key combination for ASUS laptops. Here you can assign all the different keyboard keys as you like, so you can easily start a command by combining F5 and F6 with different keys. Therefore, this is probably the most important key combination on ASUS laptops.
2. Fn + F2 = Airplane mode
For some reason, the Macbook doesn’t have a keyboard shortcut to enter airplane mode. However, on ASUS laptops and many common Windows laptops, you can switch to airplane mode by simply pressing Fn + F2.
3. Fn + F8 = Display Mode
This key combination is useful if your device uses multiple different displays. This allows you to switch between screens, which is one of the most useful buttons on your ASUS laptop.
Conclusion
Now that we have shown you how to take a screenshot on an Asus laptop, use any of the above methods to take a screenshot. Taking a screenshot is very versatile and extremely useful. If you own an Asus laptop, we hope you know all the different methods you can use to take screenshots on your laptops.
With with new Windows update, taking screenshots becomes easier, and Asus has the handy key of “PrtScr,” making it easier to take screenshots. If you want quick and simple screenshots, using the simple PrtScr button is more than enough and super quick, but if you want some advanced functions like editing, etc., you can use Snipping Tool or any third-party tools.
|
__label__pos
| 0.593466 |
ShinyEditor : Rich Text Editor in Shiny Apps
Introduction
This post covers how you can implement HTML or Text Editor in a Shiny App. We are using open source TinyMCE javascript for text editor. It is one of the most popular text editor used by several famous blogs or websites. It allows several customization tools for styling such as font size and color, bullet and numbered list, table creation, alignment etc.
Use Cases for Online Text Editor
There are many scenarios where you need an online text editor. Some of them are as follows:
• In a human resource management tool wherein you ask employees to submit their goals and write accomplishments and areas of improvement. Bullet and numbered lists are key formatting methods which would assist employees to list down their content. Employees can also copy content from MS Word and the editor would preserve the formatting.
• Suppose you are building a web app for customer feedback. You would allow customers to provide open-ended comments in the app. A rich text editor would help your users describe their issues in a clear manner.
• You write blogs and want HTML quickly for your (complete or partial) content. PS – I used it for this blog.
Installation
I am glad to release an R package named ShinyEditor for this. You can install it from GitHub. It is not available on CRAN yet.
remotes::install_github("deepanshu88/ShinyEditor")
You also need an API key from the Tiny website. You just need to sign up; it's absolutely free and straightforward to get an API key. Once done, you will see your API key. You also need to submit the domains where you want to deploy the text editor. For example, if you want TinyMCE to work on listendata.com, type that into the Domain name field and click Add domain. You can add more than one domain.
Demo
I deployed the app on shinyapps.io. You can see the demo here
HTML Editor in Shiny
Below is a simple example of how you can use the ShinyEditor package. Make sure to enter your API key in use_editor("API-Key"). The following program would run even if you don't enter your API key, but it throws an error message that your API key is incorrect.
library(shiny)
library(ShinyEditor)

# UI
ui <- fluidPage(

  # Setup
  use_editor("API-Key"),
  titlePanel("HTML Generator"),

  # Text Input 1
  fluidRow(
    column(
      width = 6,
      editor('textcontent'),
      br(),
      actionButton(
        "generatehtml",
        "Generate HTML Code",
        icon = icon("code"),
        class = "btn-primary"
      )),
    column(
      width = 6,
      tags$pre(textOutput("rawText"))
    )
  )
)

# Server
server <- function(input, output, session) {

  # Generate HTML
  observeEvent(input$generatehtml, {
    editorText(session, editorid = 'textcontent', outputid = 'mytext')
    output$rawText <- renderText({
      req(input$mytext)
      enc2utf8(input$mytext)
    })
  })
}

# Run App
shinyApp(ui = ui, server = server)
Options to customize Editor
In the editor() function you have a parameter named options to customise the editor. You can look at the TinyMCE website to see the complete list of options permitted.
library(shiny)
library(ShinyEditor)

# UI
ui <- fluidPage(

  # Setup
  use_editor("API-Key"),
  titlePanel("HTML Generator"),

  # Text Input 1
  fluidRow(
    column(
      width = 6,
      editor('textcontent', text = "Sample Text",
             options = "branding: false,
                        height: 300,
                        plugins: ['lists', 'table', 'link', 'image', 'code'],
                        toolbar1: 'bold italic forecolor backcolor | formatselect fontselect fontsizeselect | alignleft aligncenter alignright alignjustify',
                        toolbar2: 'undo redo removeformat bullist numlist table blockquote code superscript subscript strikethrough link image'"),
      br(),
      actionButton(
        "generatehtml",
        "Generate HTML Code",
        icon = icon("code"),
        class = "btn-primary"
      )),
    column(
      width = 6,
      tags$pre(textOutput("rawText"))
    )
  )
)

# Server
server <- function(input, output, session) {

  # Generate HTML
  observeEvent(input$generatehtml, {
    editorText(session, editorid = 'textcontent', outputid = 'mytext')
    output$rawText <- renderText({
      req(input$mytext)
      enc2utf8(input$mytext)
    })
  })
}

# Run App
shinyApp(ui = ui, server = server)
Update Editor
The package allows you to update the editor on the client side via the UpdateEditor() function. See the complete example below.
library(shiny)
library(ShinyEditor)

# UI
ui <- fluidPage(

  # Setup
  use_editor("API-Key"),
  titlePanel("HTML Generator"),

  # Text Input 1
  fluidRow(
    column(
      width = 6,
      editor('textcontent'),
      br(),
      actionButton(
        "generatehtml",
        "Generate HTML Code",
        icon = icon("code"),
        class = "btn-primary"
      ), actionButton("updatedata", "Update Editor", icon = icon("edit"))),
    column(
      width = 6,
      tags$pre(textOutput("rawText"))
    )
  )
)

# Server
server <- function(input, output, session) {

  # Generate HTML
  observeEvent(input$generatehtml, {
    editorText(session, editorid = 'textcontent', outputid = 'mytext')
    output$rawText <- renderText({
      req(input$mytext)
      enc2utf8(input$mytext)
    })
  })

  observeEvent(input$updatedata, {
    UpdateEditor(session,
                 id = "textcontent",
                 text = "<b>Sample Text</b>")
  })
}

# Run App
shinyApp(ui = ui, server = server)
|
__label__pos
| 0.996639 |
The Difference Between Encoding, Encryption, and Hashing
[ Check out my latest post on the HP Security Blog: “The Secure Web Series, Part 2: How to Avoid User Account Harvesting” ]
Encoding is often confused with encryption and hashing. They are not the same. But before I go into the differences, I'll first mention the similarities:
1. All three transform data into another format.
2. Both encoding and encryption are reversible, unlike hashing.
And now the differences:
Encoding
ascii
The purpose of encoding is to transform data so that it can be properly (and safely) consumed by a different type of system, e.g. binary data being sent over email, or viewing special characters on a web page. The goal is not to keep information secret, but rather to ensure that it's able to be properly consumed.
Encoding transforms data into another format using a scheme that is publicly available so that it can easily be reversed. It does not require a key as the only thing required to decode it is the algorithm that was used to encode it.
Examples: ASCII, Unicode, URL Encoding, Base64
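For instance, a small Python sketch using only the standard library shows that reversing an encoding requires nothing but the publicly known scheme; no key is involved (the sample string is arbitrary):

import base64

encoded = base64.b64encode(b"hello world")   # b'aGVsbG8gd29ybGQ='
decoded = base64.b64decode(encoded)          # anyone can reverse it: b'hello world'
print(encoded, decoded)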
Encryption
ciphertext
The purpose of encryption is to transform data in order to keep it secret from others, e.g. sending someone a secret letter that only they should be able to read, or securely sending a password over the Internet. Rather than focusing on usability, the goal is to ensure the data cannot be consumed by anyone other than the intended recipient(s).
Encryption transforms data into another format in such a way that only specific individual(s) can reverse the transformation. It uses a key, which is kept secret, in conjunction with the plaintext and the algorithm, in order to perform the encryption operation. As such, the ciphertext, algorithm, and key are all required to return to the plaintext.
Examples: AES, Blowfish, RSA
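A minimal sketch of symmetric encryption, assuming the third-party cryptography package is installed (its Fernet recipe is built on AES); the message and key handling here are purely illustrative:

from cryptography.fernet import Fernet

key = Fernet.generate_key()                    # the secret key; must be kept private
cipher = Fernet(key)
token = cipher.encrypt(b"my secret message")   # ciphertext, safe to transmit
plaintext = cipher.decrypt(token)              # only possible with the key
print(plaintext)                               # b'my secret message'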
Hashing
sha512
Hashing serves the purpose of ensuring integrity, i.e. making it so that if something is changed you can know that it's changed. Technically, hashing takes arbitrary input and produces a fixed-length string that has the following attributes:
1. The same input will always produce the same output.
2. Multiple disparate inputs should not produce the same output.
3. It should not be possible to go from the output to the input.
4. Any modification of a given input should result in drastic change to the hash.
Hashing is used in conjunction with authentication to produce strong evidence that a given message has not been modified. This is accomplished by taking a given input, hashing it, and then signing that hash with the sender's private key.
When the recipient opens the message, they hash the message themselves and compare it to the hash recovered from the sender's signature using the sender's public key. If they match it is an unmodified message, sent by the holder of that private key.
Examples: SHA-3, MD5 (Now obsolete), etc.
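A short Python sketch (standard-library hashlib; the input strings are arbitrary) illustrates the fixed-length output and the drastic change caused by a one-character edit:

import hashlib

a = hashlib.sha256(b"The quick brown fox").hexdigest()
b = hashlib.sha256(b"The quick brown fox!").hexdigest()   # one extra character
print(a)           # 64 hex characters, always the same for the same input
print(b)           # 64 hex characters, completely different from a
print(a == b)      # False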
Summary
• Encoding is for maintaining data usability: it is reversible by anyone because the scheme is public and no key is used.
• Encryption is for maintaining data confidentiality: it is reversible only by those who hold the proper key.
• Hashing is for validating integrity: it is one-way, and any change to the input drastically changes the output.
|
__label__pos
| 0.970761 |
Interior Angles of a Polygon
Interior Angles of A Polygon: In Mathematics, an angle is defined as the figure formed by joining the two rays at the common endpoint. An interior angle is an angle inside a shape. The polygons are the closed shape that has sides and vertices. A regular polygon has all its interior angles equal to each other. For example, a square has all its interior angles equal to the right angle or 90 degrees.
The number of interior angles of a polygon is equal to its number of sides. Angles are generally measured using degrees or radians. So, if a polygon has 4 sides, then it has four angles as well. Also, the sum of the interior angles of different polygons is different.
What is Meant by Interior Angles of a Polygon?
An interior angle of a polygon is an angle formed inside the two adjacent sides of a polygon. Or, we can say that the angle measures at the interior part of a polygon are called the interior angle of a polygon. We know that the polygon can be classified into two different types, namely:
• Regular Polygon
• Irregular Polygon
For a regular polygon, all the interior angles are of the same measure. But for irregular polygon, each interior angle may have different measurements.
Sum of Interior Angles of a Polygon
The Sum of interior angles of a polygon is always a constant value. If the polygon is regular or irregular, the sum of its interior angles remains the same. Therefore, the sum of the interior angles of the polygon is given by the formula:
Sum of the Interior Angles of a Polygon = 180 (n-2) degrees
As we know, there are different types of polygons. Therefore, the number of interior angles and the respective sum of angles is given below in the table.
Polygon Name Number of Interior Angles Sum of Interior Angles = (n-2) x 180°
Triangle 3 180°
Quadrilateral 4 360°
Pentagon 5 540°
Hexagon 6 720°
Septagon 7 900°
Octagon 8 1080°
Nonagon 9 1260°
Decagon 10 1440°
Interior angles of Triangles
A triangle is a polygon that has three sides and three angles. Triangles can be classified into different types based on their sides and angles, but the sum of the interior angles of any triangle is always equal to 180 degrees. For a regular triangle, each interior angle will be equal to:
180/3 = 60 degrees
60°+60°+60° = 180°
Therefore, no matter if the triangle is an acute triangle or obtuse triangle or a right triangle, the sum of all its interior angles will always be 180 degrees.
Interior Angles of Quadrilaterals
In geometry, we have come across different types of quadrilaterals, such as:
• Square
• Rectangle
• Parallelogram
• Rhombus
• Trapezium
• Kite
All the shapes listed above have four sides and four angles. The common property for all the above four-sided shapes is the sum of interior angles is always equal to 360 degrees. For a regular quadrilateral such as square, each interior angle will be equal to:
360/4 = 90 degrees.
90° + 90° + 90° + 90° = 360°
Since each quadrilateral is made up of two triangles, therefore the sum of interior angles of two triangles is equal to 360 degrees and hence for the quadrilateral.
Interior angles of Pentagon
A pentagon has five sides, and it can be formed by joining three triangles side by side. Since each triangle has an angle sum of 180 degrees, the sum of the angles of three triangles will be:
3 x 180 = 540 degrees
Thus, the angle sum of the pentagon is 540 degrees.
For a regular pentagon, each angle will be equal to:
540°/5 = 108°
108°+108°+108°+108°+108° = 540°
Sum of Interior angles of a Polygon = (Number of triangles formed in the polygon) x 180°
Interior angles of Regular Polygons
A regular polygon has all its angles equal in measure.
Regular Polygon Name Each interior angle
Triangle 60°
Quadrilateral 90°
Pentagon 108°
Hexagon 120°
Septagon 128.57°
Octagon 135°
Nonagon 140°
Decagon 144°
Interior Angle Formulas
The interior angles of a polygon always lie inside the polygon. The formula can be obtained in three ways. Let us discuss the three different formulas in detail.
Method 1:
If “n” is the number of sides of a polygon, then the formula is given below:
Interior angles of a Regular Polygon = [180°(n) – 360°] / n
Method 2:
If the exterior angle of a polygon is given, then the formula to find the interior angle is
Interior Angle of a polygon = 180° – Exterior angle of a polygon
Method 3:
If we know the sum of all the interior angles of a regular polygon, we can obtain the interior angle by dividing the sum by the number of sides.
Interior Angle = Sum of the interior angles of a polygon / n
Where
“n” is the number of polygon sides.
Interior Angles Theorem
Below is the proof for the polygon interior angle sum theorem
Statement:
In a polygon of ‘n’ sides, the sum of the interior angles is equal to (2n – 4) × 90°.
To prove:
The sum of the interior angles = (2n – 4) right angles
Proof:
Interior angles example
ABCDE is an "n" sided polygon. Take any point O inside the polygon. Join O to each vertex (OA, OB, OC, and so on).
Joining O to every vertex of the "n" sided polygon forms "n" triangles.
We know that the sum of the angles of a triangle is equal to 180 degrees
Therefore, the sum of the angles of n triangles = n × 180°
From the above statement, we can say that
Sum of interior angles + Sum of the angles at O = 2n × 90° ——(1)
But, the sum of the angles at O = 360°
Substitute the above value in (1), we get
Sum of interior angles + 360°= 2n × 90°
So, the sum of the interior angles = (2n × 90°) – 360°
Take 90 as common, then it becomes
The sum of the interior angles = (2n – 4) × 90°
Therefore, the sum of “n” interior angles is (2n – 4) × 90°
So, each interior angle of a regular polygon is [(2n – 4) × 90°] / n
Note: In a regular polygon, all the interior angles are of the same measure.
Exterior Angles
Exterior angles of a polygon are the angles at the vertices of the polygon, that lie outside the shape. The angles are formed by one side of the polygon and extension of the other side. The sum of an adjacent interior angle and exterior angle for any polygon is equal to 180 degrees since they form a linear pair. Also, the sum of exterior angles of a polygon is always equal to 360 degrees.
Exterior angle of a polygon = 360 ÷ number of sides
Exterior angles of polygon
Solved Examples
Q.1: If each interior angle is equal to 144°, then how many sides does a regular polygon have?
Solution:
Given: Each interior angle = 144°
We know that,
Interior angle + Exterior angle = 180°
Exterior angle = 180°-144°
Therefore, the exterior angle is 36°
The formula to find the number of sides of a regular polygon is as follows:
Number of Sides of a Regular Polygon = 360° / Magnitude of each exterior angle
Therefore, the number of sides = 360° / 36° = 10 sides
Hence, the polygon has 10 sides.
Q.2: What is the value of the interior angle of a regular octagon?
Solution: A regular octagon has eight sides and eight angles.
n = 8
Since, we know that, the sum of interior angles of octagon, is;
Sum = (8-2) x 180° = 6 x 180° = 1080°
A regular octagon has all its interior angles equal in measure.
Therefore, measure of each interior angle = 1080°/8 = 135°.
Q.3: What is the sum of interior angles of a 10-sided polygon?
Answer: Given,
Number of sides, n = 10
Sum of interior angles = (10 – 2) x 180° = 8 x 180° = 1440°.
Video Lesson on Angle sum and exterior angle property
Practise Questions
1. Find the number of sides of a polygon, if each angle is equal to 135 degrees.
2. What is the sum of interior angles of a nonagon?
Register with BYJU’S – The Learning App and also download the app to learn with ease.
Frequently Asked Questions – FAQs
What are the interior angles of a polygon?
Interior angles of a polygon are the angles that lie at the vertices, inside the polygon.
What is the formula to find the sum of interior angles of a polygon?
To find the sum of interior angles of a polygon, use the given formula:
Sum = (n-2) x 180°
Where n is the number of sides or number of angles of polygons.
How to find the sum of interior angles by the angle sum property of the triangle?
To find the sum of the interior angles of a polygon, multiply the number of triangles formed inside the polygon by 180 degrees. For example, in a hexagon, four triangles can be formed. Thus,
4 x 180° = 720 degrees.
What is the measure of each angle of a regular decagon?
A decagon has 10 sides and 10 angles.
Sum of interior angles = (10 – 2) x 180°
= 8 × 180°
= 1440°
A regular decagon has all its interior angles equal in measure. Therefore,
Each interior angle of decagon = 1440°/10 = 144°
What is the sum of interior angles of a kite?
A kite is a quadrilateral. Therefore, the angle sum of a kite will be 360°.
|
__label__pos
| 0.999586 |
A special category to exclude some items from search
Haunted Cubicle Contest Sign-Up Form
Please sign up by 9:00 am, October 27th.
Creating non-clickable menu placeholders
Let's say you want to create a grouping of menu items, but don't want the parent to be clickable (a placeholder to indicate what the following nested items will be). For example:
In this example, School Health Index, Youth Surveys, and Data & Statistics are there as placeholders, but cannot be clicked. To achieve this same thing in your menu, do the following:
1. Log in to your site account.
2. In the left-hand (black) Admin Menu area, mouse over Appearance, and then select Menus
3. If necessary, use the drop-down to choose the menu you want
4. Expand Custom Links and use a # for the URL, and whatever text you want for the placeholder:
5. Click on Add to Menu
6. Drag it into the proper place and add/move items under it so they are slightly indented:
7. Click on Save Menu, and you are finished!
|
__label__pos
| 0.984006 |
Add bookmarks to a PDF
Hello everyone, I am trying to add bookmarks to a PDF file. I create this file from several files that I download and later unify with the PDF Join activity; however, I need the result to have bookmarks. Do you know if it is possible?
Thank you,
You might have to develop a custom activity that uses iTextSharp to do this.
1 Like
Could you be a little more specific, or could you guide me on how I can do this?
Thanks for the reply! :grinning:
Hi @KannanSuresh,
Is it possible to make a custom activity?
Can you share any direction on how to make a custom activity?
Thanks in advance.
|
__label__pos
| 0.999837 |
this site demands javascript
What Is WSAPPX? How to Fix High Disk Usage
The WSAPPX is a Windows background service that runs on Windows 8 and 10.
Wsappx
Unfortunately, it’s often the reason your computer is running slow. Here, you will learn what causes this problem and how to fix it.
What Is “WSAPPX” and Why Is It Running on My PC?
Before we go any further and fix the high disk usage that WSAPPX is causing, let’s learn why Windows uses this task and whether you really need it.
The WSAPPX task runs and manages two separate background services in Windows 8/10.
Windows 10
• Client License Service (ClipSVC)
• AppX Deployment Service (AppXSVC)
Windows 8
• Windows Store Service (WSService)
For both versions of Windows, the services running under the WSAPPX task perform the same functions, i.e. installing, removing, and updating Windows Store apps. These services also handle app licensing.
Why Is Wsappx Causing High Disk/Cpu/Memory Usage?
During installation or updating, WSAPPX uses a huge amount of CPU resources and memory. So it could simply be due to an app updating in the background.
Sometimes, high disk usage can be due to a virus or a trojan that’s hogging system resources.
You can disable Windows Automatic Updates if you are frequently seeing this problem.
What Causes This Error?
WSAPPX runs in the background as a part of the Windows Store. Your OS only uses it while updating its core files. However, sometimes, the process is hogging resources even after Windows doesn’t need it anymore.
In such cases, you might see WSAPPX in Task Manager showing high disk, memory, or CPU usage. It can slow down your computer.
The good news is that you can easily fix this common Windows issue. Below, you will find some simple methods to solve the WSAPPX issue in Windows 10/8.
How to Check If Wsappx Is Causing High Disk/Memory/Cpu Usage?
If you are seeing high disk usage on your PC and not sure if it’s WSAPPX causing it, here is how to check it.
1. Right-click on Windows Taskbar and open Task Manager
2. Go to the Processes tab and you will be able to see exactly how much disk space and CPU each task is using
3. Find WSAPPX and right-click on it to open task Details
4. Once you are on the Details tab, click on ‘Go to Services’
5. Now you can see all the details and see if WSAPPX is responsible for high disk/CPU usage
Is It Possible to Disable Wsappx Task?
It is not possible to disable the services running under the WSAPPX process. If you try to disable these processes, Windows launches a warning telling you that killing these processes will make Windows unusable and could cause a system shutdown.
These processes run as needed. So if you launch Windows Store, AppxSVC will run in the Task Manager. If you run a Windows app, ClipSVC service will launch.
Even if there was a way around to kill these services, it wouldn’t do you any good. These are core services and use minimal resources unless updating/installing/uninstalling a Store app.
But if you notice these services running most of the time and causing high disk/CPU usage, you can troubleshoot and fix the issue using the methods below.
How to PERMANENTLY Fix WSAPPX Causing High Disk Usage
To resolve the high disk issue due to WSAPPX in Windows 10/8, follow these troubleshooting methods below.
Increase Virtual Memory
Low virtual memory is one of the reasons you could be seeing WSAPPX using 100% CPU/Disk in Task Manager. Here is how to increase virtual memory and fix this error.
1. Open Task Manager
2. Go to System & Security and open System
3. Now you will see an option “Adjust Appearance vs. Performance in Windows” – click on it
4. Once a new window pops up, click on Advanced tab and go to Virtual Memory options
5. Now click on Change option and then click on Custom Size
6. Be sure to uncheck the option “Automatically manage paging file size for all drives” to manually set the memory size
7. Set the initial size (ideally equivalent to the amount of RAM you have)
8. Set the maximum size (ideally double the initial size)
9. Click OK and restart your computer for these values to take effect
Disable Windows Store
Disabling the Windows Store will permanently fix the WSAPPX high disk usage error. There are two ways to disable the Store.
Using Group Policy Editor
1. Type ‘policy’ in Start Search and open ‘Edit Group Policy’
2. On the left pane, click on Computer Configuration
3. You will now see three folders in the right pane
4. Click on the last folder named ‘Administrative Templates’
5. Now go to Windows Components > Store
6. Double-click the 'Turn off the Store application' policy and set it to Enabled
7. Finally, click on Apply and you're done
Using Registry Editor
You can also disable Windows Store using the Registry Editor. Follow the steps below.
1. Open Registry Editor by searching for it in the Start Search in Windows
2. You can also open it quickly by pressing Windows Key+ R and then typing ‘regedit’
3. Once you have opened the registry editor, you need to navigate to the registry entries related to the Store
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\WindowsStore key
4. Inside this key, create a new DWORD (32-bit) value named RemoveWindowsStore and set its value data to 1
5. Restart Windows
Troubleshoot After Clean Boot
Troubleshooting in Clean Boot mode can often help you fix errors related to Windows core services and apps.
During a Clean Boot, Windows loads only essential drivers and apps. Therefore, if a startup app or malware is behind this issue, you can easily isolate and fix it.
Follow the steps below to enter the Clean Boot state.
1. Type ‘msconfig’ in Start Search to open Configuration Utility
2. Go to General tab and select Selective Startup
3. Do the following
Check: Load System Services
Check: Use Original boot configuration
Uncheck: Load Startup Items
5. Now go to the Services tab, check 'Hide all Microsoft services', and click 'Disable all' to disable the remaining third-party services
5. Click Apply and restart your PC
6. After the restart, your computer should enter a Clean Boot state
Testing Properties and Arrays correctly in GET
Hi, I just want to make sure that this is correctly testing that an object, and an object within an array, is present.
Example api:
{
"items": [
{
"oID": "0000001211",
"oInvoiceNo": "1234567",
"OrderBlocks": [
{
"olLine": 1,
"olProductCode": "001"
}
]
}
]
}
Example test:
pm.test("Order Details are Present", () => {
pm.response.to.have.jsonBody("oID");
pm.response.to.have.jsonBody("oInvoiceNo");
pm.response.to.have.jsonBody("OrderBlocks[0].olLine");
pm.response.to.have.jsonBody("OrderBlocks[0].olProductCode");
});
It’s returning as a Pass, but I’ve been experimenting with syntax all day with loads of different tests (I can’t remember .js and pm.* so I just refactor constantly until it works), and I’m sure I had one earlier that brought back a false Pass. I just wanted to make sure that this is a correct way of checking that the correct properties, and properties within arrays, are present in a response.
I also don’t completely understand why this seems to work. Shouldn’t “oID” also require me to use an array path like (“items[0].oID”)? Yet when I try this, it fails to find oID.
Was trying this initially and it failed, so the above is my new “fix”:
pm.test("Order Details are Present", () => {
pm.response.to.have.jsonBody("items.OrderBlocks.olLine");
});
Any help/advice on testing this kind of stuff correctly is really appreciated, thanks!
Hey @LiamC,
I’m not sure what your overall goal is or what you would like to test but you could try something like this to check the types and keys of that response.
pm.test("Order Details are Present", () => {
let jsonData = pm.response.json()
pm.expect(jsonData.items).to.be.an("array");
pm.expect(jsonData.items[0]).to.have.keys('oID','oInvoiceNo','OrderBlocks').and.be.an("object");
pm.expect(jsonData.items[0].OrderBlocks).to.be.an("array");
pm.expect(jsonData.items[0].OrderBlocks[0]).to.have.keys('olLine','olProductCode').and.be.an("object");
});
It’s a bit messy and it will only ever check the first object in those arrays. Lots of different ways that you could check/test that response just all depends on what you want to test for I guess.
Have you considered using some form of schema validation as a different option?
@dannydainton ! Oh wow, I’ve had your “All Things Postman” stuff open for the past 2 days trying to wrap my head around Postman. Crazy.
I think what I’m testing is probably made redundant by using:
const jsonData = pm.response.json();
const schema = JSON.parse(environment.testSchema)
pm.test("Schema is Valid", () => {
pm.expect(tv4.validate(jsonData, schema)).to.be.true;
});
And having the API’s schema saved as ‘testSchema’ in the Environment Variables.
I’m just trying to check that every response has the expected fields returned that I can see on the Swagger documentation of the API.
I was going to try and get something like forEach to work next, if I could, with pm.expects inside the function? For multiple ‘items’ arrays being returned.
jsonData.items.forEach(item => {
pm.expect(item).to.have.keys('oID','oInvoiceNo','OrderBlocks')
});
But, maybe Schema validation is already checking all of this.
If I wanted to test if an object beyond the 1st one, was present in an array, how would that look?
I need to keep reading up more.
Your write-ups have been a huge help actually! I’m just greener than green at the moment so I’m probably getting confused!
Awesome! Thanks for checking that out - It’s very old now though and is falling behind the current functionality that we have introduced since then :cry:
You should be able to just use this syntax to get the schema from the variable:
const schema = pm.environment.get("testSchema")
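One thing to keep in mind: if the schema was saved into that variable as a JSON string, pm.environment.get() will give it back to you as a string, so you’d still parse it before handing it to a validator - a small sketch, assuming that’s how it was stored:
// assumes "testSchema" holds the schema as a JSON string
const schema = JSON.parse(pm.environment.get("testSchema"));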
What does your schema look like?
In terms of looping through the data, you could modify the test you wrote to check each object in the items array. I’ve taken the same response data and added another object with one of the keys different from what you’re checking against, just to see this fail.
If you add this code to your Tests tab, you should be able to see the failed test:
var sampleData = {
"items": [
{
"oID": "0000001211",
"oInvoiceNo": "1234567",
"OrderBlocks": [
{
"olLine": 1,
"olProductCode": "001"
}
]
},
{
"wrongKey": "0000001212",
"oInvoiceNo": "1234568",
"OrderBlocks": [
{
"olLine": 2,
"olProductCode": "002"
}
]
}
]
}
pm.test("Debugging Test - Object Key are Correct", () => {
sampleData.items.forEach(item => {
pm.expect(item).to.have.keys('oID','oInvoiceNo','OrderBlocks')
});
})
Oh, I think I get the syntax now!
That Debug test example is pretty much exactly what I needed to understand, I think I was close-ish.
That’s just an example schema above; the one I’m actually working from belongs to the company I work for, unfortunately, as much as I’d love to share it to get your insight on how to approach testing a brand-new in-development API and what type of tests an experienced tester would do. In fact, this is all I’m doing at the moment: just trying to create a test case that’s as robust as possible for the current proof-of-concept API I’ve been given, beyond a 200 status, objects being present, and values matching the correct types/regex.
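For reference, the kind of baseline checks I mean look something like this (just a rough sketch - the field name and pattern are made up to match the example response at the top of this thread):
pm.test("Status code is 200", () => {
pm.response.to.have.status(200);
});
pm.test("Response time is acceptable", () => {
pm.expect(pm.response.responseTime).to.be.below(1000);
});
pm.test("oID matches the expected pattern", () => {
const jsonData = pm.response.json();
// hypothetical pattern - ten digits, like "0000001211" in the example above
pm.expect(jsonData.items[0].oID).to.match(/^\d{10}$/);
});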
Regarding the debug test example, how would you go about automating these kinds of tests in a Collection Run? I feel like it would need engineering to fail but still return a “pass” test status so you can integrate it with something like Jenkins, right?
Otherwise, a failed test that you intend to fail would flag the API as not correct when it’s functioning exactly how it should.
Thanks again!
EDIT: Can’t get tv4.validate to work either. I can set a variable to anything and it still returns as Pass. I don’t understand how to get tv4.validate to actually use the Schema. Example that should fail but doesn’t:
const jsonData = pm.response.json();
var schema = "not real schema";
pm.test("Schema is Valid", () => {
let validation = tv4.validate(jsonData, schema);
if (validation !== true) {
console.log(tv4.error);
}
pm.expect(validation).to.be.true;
});
I’ve tried reading through the tv4.validate documentation and I’m just not getting it, should I be copying the Schema into the Tests Tab to reference as you did in your above Debug Test example?. :confused:
I would actually recommend using the ajv validation module instead - You can see an example of this in working in this other thread:
I’ve copied the exact code from that link too, and changed the “id” to “string” instead of “integer”, and no matter what I try, it will always pass.
I’m stumped.
You’ve copied the code and run it against your response?
That’s checking a different schema though, that user has a response like this:
{
"access": [
{
"id": 1,
"client": "127.0.0.1",
"apikey": "apikey1",
"description": "Local call"
}
]
}
That wouldn’t work for your schema as the structure is different, it was more of an example of the structure.
You could use something like this:
var Ajv = require('ajv'),
ajv = new Ajv({ logger: console, allErrors: true }),
schema = {
"type": "object",
"properties": {
"items": {
"type": "array",
"items": {
"type": "object",
"required": [
"oID",
"oInvoiceNo",
"OrderBlocks"
],
"properties": {
"oID": {
"type": "string"
},
"oInvoiceNo": {
"type": "string"
},
"OrderBlocks": {
"type": "array",
"items": {
"type": "object",
"required": [
"olLine",
"olProductCode"
],
"properties": {
"olLine": {
"type": "integer"
},
"olProductCode": {
"type": "string"
}
}
}
}
}
}
}
}
}
pm.test('Schema is valid', function() {
pm.expect(ajv.validate(schema, pm.response.json()), JSON.stringify(ajv.errors)).to.be.true;
});
This should also log out all the schema errors so you’re not just getting a generic error message from the test result.
I suck at creating schemas, this is a site that I use to help generate them from a JSON response.
Hi All,
I also need some help creating some test cases where my URL is fetching data in the following form:
Group: [
{
"element": {
"id": "ABC",
"name": "ABC",
"ref": 1
},
"rateplan": [
{
"element": {
"id": "SAM",
"name": "XYZ",
"ref": 1000
},
"duration": {
"type": "Month",
"amount": 36
},
"category": "Standard",
"type": "Postpaid",
"price": [
{
"type": "Month",
"amount": 1000
},
There are various layers of data within the elements.
Any help is appreciated. I tried the examples provided above but I am getting a syntax error.
thanks!!
Hey @Manoj_Mittal
Welcome to the community! :wave:
Would you be able to create a separate topic for this question please. I don’t want it to get lost within this topic thread. :slight_smile:
started a new topic, thanks!!
@LiamC writing schemas that work is not only an advanced topic but it is not really well supported in Postman.
My advice would be to stay away from schema validation, unless you REALLY know what you are doing.
Here are a few reasons:
• writing and maintaining proper schemas is time consuming
• there is no dedicated way to edit / save them and they tend to be quite big
• just following a “paste here your response and I will generate something for you” tool rarely works
• you can easily be fooled by “never failing schemas” (as you have already noticed)
• if later a schema validation does fail, you will have a hard time figuring out what has changed (at least with the tv4 lib)
Maybe the new ajv lib in Postman does a better job than the tv4 lib.
So consider if schema validation is something that you want to invest some time or if you can simply live with testing properties one-by-one.
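If you do go the property-by-property route, the checks can stay quite small. Here is a minimal sketch against the example response posted earlier in this thread (adjust the keys to your real payload):
const jsonData = pm.response.json();
pm.test("Every item has the expected fields", () => {
// assumes the items / OrderBlocks structure from the first post
pm.expect(jsonData.items).to.be.an("array").that.is.not.empty;
jsonData.items.forEach(item => {
pm.expect(item.oID).to.be.a("string");
pm.expect(item.oInvoiceNo).to.be.a("string");
pm.expect(item.OrderBlocks).to.be.an("array");
});
});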
Good luck!
I managed to get something to work, after banging my head against a wall for hours.
I do agree this schema validation stuff may be a lot more than I can handle at the moment, so I may go back to what I had originally.
It’s been a learning experience though!
If you’re interested by the way, this is what I found would actually test Passes & fails correctly:
const jsonData = pm.response.json();
pm.test('Schema is valid', function() {
var Ajv = require('ajv');
ajv = new Ajv(),
schema = { //Schema should contain an Array
"type": "array",
"items": {//In this Array there should be a Properties Object
"type": "object", //These are the properties within this object. This will test TYPE of Property.
"properties": {
"gameId": { "type": "integer" },
"name": { "type": "string" },
"description": { "type": "string" },
"minPlayers": { "type": "integer", "maximum": 1},
"maxPlayers": { "type": "integer", "maximum": 1},
"rating": {"type": "string" },
"online": {"type": "boolean" }
},"required" : ["gameId", "name", "description", "minPlayers", "maxPlayers", "rating", "online"]//Required will make sure that the Properties listed are present
}
};
var validate = ajv.compile(schema);
var valid = validate(jsonData)
if(valid === false){
console.log('Invalid: ' + ajv.errorsText(validate.errors));
}
pm.expect(valid).to.be.true;
});
Thank you
Glad you got a solution that works for you and learnt some new skill along the way.
It feels like the outcome changed slightly from the originally posted response data - Looks like a different endpoint is being checked now :grin:
Yeah, another example api I had set up for Crud testing haha.
Created infrastructure for Azeri translation under az directory
1 parent 22e4de0 · commit bd2245af162c32b995becc28bca091a5c8f49328 · @feridmovsumov committed
257 az/01-introduction/01-chapter1.markdown
@@ -0,0 +1,257 @@
+# Getting Started #
+
+This chapter will be about getting started with Git. We will begin at the beginning by explaining some background on version control tools, then move on to how to get Git running on your system and finally how to get it setup to start working with. At the end of this chapter you should understand why Git is around, why you should use it and you should be all setup to do so.
+
+## About Version Control ##
+
+What is version control, and why should you care? Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. Even though the examples in this book show software source code as the files under version control, in reality any type of file on a computer can be placed under version control.
+
+If you are a graphic or web designer and want to keep every version of an image or layout (which you certainly would), it is very wise to use a Version Control System (VCS). A VCS allows you to: revert files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more. Using a VCS also means that if you screw things up or lose files, you can generally recover easily. In addition, you get all this for very little overhead.
+
+### Local Version Control Systems ###
+
+Many people’s version-control method of choice is to copy files into another directory (perhaps a time-stamped directory, if they’re clever). This approach is very common because it is so simple, but it is also incredibly error prone. It is easy to forget which directory you’re in and accidentally write to the wrong file or copy over files you don’t mean to.
+
+To deal with this issue, programmers long ago developed local VCSs that had a simple database that kept all the changes to files under revision control (see Figure 1-1).
+
+Insert 18333fig0101.png
+Figure 1-1. Local version control diagram.
+
+One of the more popular VCS tools was a system called rcs, which is still distributed with many computers today. Even the popular Mac OS X operating system includes the rcs command when you install the Developer Tools. This tool basically works by keeping patch sets (that is, the differences between files) from one change to another in a special format on disk; it can then re-create what any file looked like at any point in time by adding up all the patches.
+
+### Centralized Version Control Systems ###
+
+The next major issue that people encounter is that they need to collaborate with developers on other systems. To deal with this problem, Centralized Version Control Systems (CVCSs) were developed. These systems, such as CVS, Subversion, and Perforce, have a single server that contains all the versioned files, and a number of clients that check out files from that central place. For many years, this has been the standard for version control (see Figure 1-2).
+
+Insert 18333fig0102.png
+Figure 1-2. Centralized version control diagram.
+
+This setup offers many advantages, especially over local VCSs. For example, everyone knows to a certain degree what everyone else on the project is doing. Administrators have fine-grained control over who can do what; and it’s far easier to administer a CVCS than it is to deal with local databases on every client.
+
+However, this setup also has some serious downsides. The most obvious is the single point of failure that the centralized server represents. If that server goes down for an hour, then during that hour nobody can collaborate at all or save versioned changes to anything they’re working on. If the hard disk the central database is on becomes corrupted, and proper backups haven’t been kept, you lose absolutely everything—the entire history of the project except whatever single snapshots people happen to have on their local machines. Local VCS systems suffer from this same problem—whenever you have the entire history of the project in a single place, you risk losing everything.
+
+### Distributed Version Control Systems ###
+
+This is where Distributed Version Control Systems (DVCSs) step in. In a DVCS (such as Git, Mercurial, Bazaar or Darcs), clients don’t just check out the latest snapshot of the files: they fully mirror the repository. Thus if any server dies, and these systems were collaborating via it, any of the client repositories can be copied back up to the server to restore it. Every checkout is really a full backup of all the data (see Figure 1-3).
+
+Insert 18333fig0103.png
+Figure 1-3. Distributed version control diagram.
+
+Furthermore, many of these systems deal pretty well with having several remote repositories they can work with, so you can collaborate with different groups of people in different ways simultaneously within the same project. This allows you to set up several types of workflows that aren’t possible in centralized systems, such as hierarchical models.
+
+## A Short History of Git ##
+
+As with many great things in life, Git began with a bit of creative destruction and fiery controversy. The Linux kernel is an open source software project of fairly large scope. For most of the lifetime of the Linux kernel maintenance (1991–2002), changes to the software were passed around as patches and archived files. In 2002, the Linux kernel project began using a proprietary DVCS system called BitKeeper.
+
+In 2005, the relationship between the community that developed the Linux kernel and the commercial company that developed BitKeeper broke down, and the tool’s free-of-charge status was revoked. This prompted the Linux development community (and in particular Linus Torvalds, the creator of Linux) to develop their own tool based on some of the lessons they learned while using BitKeeper. Some of the goals of the new system were as follows:
+
+* Speed
+* Simple design
+* Strong support for non-linear development (thousands of parallel branches)
+* Fully distributed
+* Able to handle large projects like the Linux kernel efficiently (speed and data size)
+
+Since its birth in 2005, Git has evolved and matured to be easy to use and yet retain these initial qualities. It’s incredibly fast, it’s very efficient with large projects, and it has an incredible branching system for non-linear development (See Chapter 3).
+
+## Git Basics ##
+
+So, what is Git in a nutshell? This is an important section to absorb, because if you understand what Git is and the fundamentals of how it works, then using Git effectively will probably be much easier for you. As you learn Git, try to clear your mind of the things you may know about other VCSs, such as Subversion and Perforce; doing so will help you avoid subtle confusion when using the tool. Git stores and thinks about information much differently than these other systems, even though the user interface is fairly similar; understanding those differences will help prevent you from becoming confused while using it.
+
+### Snapshots, Not Differences ###
+
+The major difference between Git and any other VCS (Subversion and friends included) is the way Git thinks about its data. Conceptually, most other systems store information as a list of file-based changes. These systems (CVS, Subversion, Perforce, Bazaar, and so on) think of the information they keep as a set of files and the changes made to each file over time, as illustrated in Figure 1-4.
+
+Insert 18333fig0104.png
+Figure 1-4. Other systems tend to store data as changes to a base version of each file.
+
+Git doesn’t think of or store its data this way. Instead, Git thinks of its data more like a set of snapshots of a mini filesystem. Every time you commit, or save the state of your project in Git, it basically takes a picture of what all your files look like at that moment and stores a reference to that snapshot. To be efficient, if files have not changed, Git doesn’t store the file again—just a link to the previous identical file it has already stored. Git thinks about its data more like Figure 1-5.
+
+Insert 18333fig0105.png
+Figure 1-5. Git stores data as snapshots of the project over time.
+
+This is an important distinction between Git and nearly all other VCSs. It makes Git reconsider almost every aspect of version control that most other systems copied from the previous generation. This makes Git more like a mini filesystem with some incredibly powerful tools built on top of it, rather than simply a VCS. We’ll explore some of the benefits you gain by thinking of your data this way when we cover Git branching in Chapter 3.
+
+### Nearly Every Operation Is Local ###
+
+Most operations in Git only need local files and resources to operate — generally no information is needed from another computer on your network. If you’re used to a CVCS where most operations have that network latency overhead, this aspect of Git will make you think that the gods of speed have blessed Git with unworldly powers. Because you have the entire history of the project right there on your local disk, most operations seem almost instantaneous.
+
+For example, to browse the history of the project, Git doesn’t need to go out to the server to get the history and display it for you—it simply reads it directly from your local database. This means you see the project history almost instantly. If you want to see the changes introduced between the current version of a file and the file a month ago, Git can look up the file a month ago and do a local difference calculation, instead of having to either ask a remote server to do it or pull an older version of the file from the remote server to do it locally.
+
+This also means that there is very little you can’t do if you’re offline or off VPN. If you get on an airplane or a train and want to do a little work, you can commit happily until you get to a network connection to upload. If you go home and can’t get your VPN client working properly, you can still work. In many other systems, doing so is either impossible or painful. In Perforce, for example, you can’t do much when you aren’t connected to the server; and in Subversion and CVS, you can edit files, but you can’t commit changes to your database (because your database is offline). This may not seem like a huge deal, but you may be surprised what a big difference it can make.
+
+### Git Has Integrity ###
+
+Everything in Git is check-summed before it is stored and is then referred to by that checksum. This means it’s impossible to change the contents of any file or directory without Git knowing about it. This functionality is built into Git at the lowest levels and is integral to its philosophy. You can’t lose information in transit or get file corruption without Git being able to detect it.
+
+The mechanism that Git uses for this checksumming is called a SHA-1 hash. This is a 40-character string composed of hexadecimal characters (0–9 and a–f) and calculated based on the contents of a file or directory structure in Git. A SHA-1 hash looks something like this:
+
+ 24b9da6552252987aa493b52f8696cd6d3b00373
+
+You will see these hash values all over the place in Git because it uses them so much. In fact, Git stores everything not by file name but in the Git database addressable by the hash value of its contents.
+
+### Git Generally Only Adds Data ###
+
+When you do actions in Git, nearly all of them only add data to the Git database. It is very difficult to get the system to do anything that is not undoable or to make it erase data in any way. As in any VCS, you can lose or mess up changes you haven’t committed yet; but after you commit a snapshot into Git, it is very difficult to lose, especially if you regularly push your database to another repository.
+
+This makes using Git a joy because we know we can experiment without the danger of severely screwing things up. For a more in-depth look at how Git stores its data and how you can recover data that seems lost, see Chapter 9.
+
+### The Three States ###
+
+Now, pay attention. This is the main thing to remember about Git if you want the rest of your learning process to go smoothly. Git has three main states that your files can reside in: committed, modified, and staged. Committed means that the data is safely stored in your local database. Modified means that you have changed the file but have not committed it to your database yet. Staged means that you have marked a modified file in its current version to go into your next commit snapshot.
+
+This leads us to the three main sections of a Git project: the Git directory, the working directory, and the staging area.
+
+Insert 18333fig0106.png
+Figure 1-6. Working directory, staging area, and git directory.
+
+The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
+
+The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
+
+The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area.
+
+The basic Git workflow goes something like this:
+
+1. You modify files in your working directory.
+2. You stage the files, adding snapshots of them to your staging area.
+3. You do a commit, which takes the files as they are in the staging area and stores that snapshot permanently to your Git directory.
+
+If a particular version of a file is in the git directory, it’s considered committed. If it’s modified but has been added to the staging area, it is staged. And if it was changed since it was checked out but has not been staged, it is modified. In Chapter 2, you’ll learn more about these states and how you can either take advantage of them or skip the staged part entirely.
+
+## Installing Git ##
+
+Let’s get into using some Git. First things first—you have to install it. You can get it a number of ways; the two major ones are to install it from source or to install an existing package for your platform.
+
+### Installing from Source ###
+
+If you can, it’s generally useful to install Git from source, because you’ll get the most recent version. Each version of Git tends to include useful UI enhancements, so getting the latest version is often the best route if you feel comfortable compiling software from source. It is also the case that many Linux distributions contain very old packages; so unless you’re on a very up-to-date distro or are using backports, installing from source may be the best bet.
+
+To install Git, you need to have the following libraries that Git depends on: curl, zlib, openssl, expat, and libiconv. For example, if you’re on a system that has yum (such as Fedora) or apt-get (such as a Debian based system), you can use one of these commands to install all of the dependencies:
+
+ $ yum install curl-devel expat-devel gettext-devel \
+ openssl-devel zlib-devel
+
+ $ apt-get install libcurl4-gnutls-dev libexpat1-dev gettext \
+ libz-dev libssl-dev
+
+When you have all the necessary dependencies, you can go ahead and grab the latest snapshot from the Git web site:
+
+ http://git-scm.com/download
+
+Then, compile and install:
+
+ $ tar -zxf git-1.7.2.2.tar.gz
+ $ cd git-1.7.2.2
+ $ make prefix=/usr/local all
+ $ sudo make prefix=/usr/local install
+
+After this is done, you can also get Git via Git itself for updates:
+
+ $ git clone git://git.kernel.org/pub/scm/git/git.git
+
+### Installing on Linux ###
+
+If you want to install Git on Linux via a binary installer, you can generally do so through the basic package-management tool that comes with your distribution. If you’re on Fedora, you can use yum:
+
+ $ yum install git-core
+
+Or if you’re on a Debian-based distribution like Ubuntu, try apt-get:
+
+ $ apt-get install git-core
+
+### Installing on Mac ###
+
+There are two easy ways to install Git on a Mac. The easiest is to use the graphical Git installer, which you can download from the Google Code page (see Figure 1-7):
+
+ http://code.google.com/p/git-osx-installer
+
+Insert 18333fig0107.png
+Figure 1-7. Git OS X installer.
+
+The other major way is to install Git via MacPorts (`http://www.macports.org`). If you have MacPorts installed, install Git via
+
+ $ sudo port install git-core +svn +doc +bash_completion +gitweb
+
+You don’t have to add all the extras, but you’ll probably want to include +svn in case you ever have to use Git with Subversion repositories (see Chapter 8).
+
+### Installing on Windows ###
+
+Installing Git on Windows is very easy. The msysGit project has one of the easier installation procedures. Simply download the installer exe file from the Google Code page, and run it:
+
+ http://code.google.com/p/msysgit
+
+After it’s installed, you have both a command-line version (including an SSH client that will come in handy later) and the standard GUI.
+
+## First-Time Git Setup ##
+
+Now that you have Git on your system, you’ll want to do a few things to customize your Git environment. You should have to do these things only once; they’ll stick around between upgrades. You can also change them at any time by running through the commands again.
+
+Git comes with a tool called git config that lets you get and set configuration variables that control all aspects of how Git looks and operates. These variables can be stored in three different places:
+
+* `/etc/gitconfig` file: Contains values for every user on the system and all their repositories. If you pass the option` --system` to `git config`, it reads and writes from this file specifically.
+* `~/.gitconfig` file: Specific to your user. You can make Git read and write to this file specifically by passing the `--global` option.
+* config file in the git directory (that is, `.git/config`) of whatever repository you’re currently using: Specific to that single repository. Each level overrides values in the previous level, so values in `.git/config` trump those in `/etc/gitconfig`.
+
+On Windows systems, Git looks for the `.gitconfig` file in the `$HOME` directory (`C:\Documents and Settings\$USER` for most people). It also still looks for /etc/gitconfig, although it’s relative to the MSys root, which is wherever you decide to install Git on your Windows system when you run the installer.
+
+### Your Identity ###
+
+The first thing you should do when you install Git is to set your user name and e-mail address. This is important because every Git commit uses this information, and it’s immutably baked into the commits you pass around:
+
+ $ git config --global user.name "John Doe"
+ $ git config --global user.email [email protected]
+
+Again, you need to do this only once if you pass the `--global` option, because then Git will always use that information for anything you do on that system. If you want to override this with a different name or e-mail address for specific projects, you can run the command without the `--global` option when you’re in that project.
+
+### Your Editor ###
+
+Now that your identity is set up, you can configure the default text editor that will be used when Git needs you to type in a message. By default, Git uses your system’s default editor, which is generally Vi or Vim. If you want to use a different text editor, such as Emacs, you can do the following:
+
+ $ git config --global core.editor emacs
+
+### Your Diff Tool ###
+
+Another useful option you may want to configure is the default diff tool to use to resolve merge conflicts. Say you want to use vimdiff:
+
+ $ git config --global merge.tool vimdiff
+
+Git accepts kdiff3, tkdiff, meld, xxdiff, emerge, vimdiff, gvimdiff, ecmerge, and opendiff as valid merge tools. You can also set up a custom tool; see Chapter 7 for more information about doing that.
+
+### Checking Your Settings ###
+
+If you want to check your settings, you can use the `git config --list` command to list all the settings Git can find at that point:
+
+ $ git config --list
+ user.name=Scott Chacon
+ [email protected]
+ color.status=auto
+ color.branch=auto
+ color.interactive=auto
+ color.diff=auto
+ ...
+
+You may see keys more than once, because Git reads the same key from different files (`/etc/gitconfig` and `~/.gitconfig`, for example). In this case, Git uses the last value for each unique key it sees.
+
+You can also check what Git thinks a specific key’s value is by typing `git config {key}`:
+
+ $ git config user.name
+ Scott Chacon
+
+## Getting Help ##
+
+If you ever need help while using Git, there are three ways to get the manual page (manpage) help for any of the Git commands:
+
+ $ git help <verb>
+ $ git <verb> --help
+ $ man git-<verb>
+
+For example, you can get the manpage help for the config command by running
+
+ $ git help config
+
+These commands are nice because you can access them anywhere, even offline.
+If the manpages and this book aren’t enough and you need in-person help, you can try the `#git` or `#github` channel on the Freenode IRC server (irc.freenode.net). These channels are regularly filled with hundreds of people who are all very knowledgeable about Git and are often willing to help.
+
+## Summary ##
+
+You should have a basic understanding of what Git is and how it’s different from the CVCS you may have been using. You should also now have a working version of Git on your system that’s set up with your personal identity. It’s now time to learn some Git basics.
1,122 az/02-git-basics/01-chapter2.markdown
@@ -0,0 +1,1122 @@
+# Git Basics #
+
+If you can read only one chapter to get going with Git, this is it. This chapter covers every basic command you need to do the vast majority of the things you’ll eventually spend your time doing with Git. By the end of the chapter, you should be able to configure and initialize a repository, begin and stop tracking files, and stage and commit changes. We’ll also show you how to set up Git to ignore certain files and file patterns, how to undo mistakes quickly and easily, how to browse the history of your project and view changes between commits, and how to push and pull from remote repositories.
+
+## Getting a Git Repository ##
+
+You can get a Git project using two main approaches. The first takes an existing project or directory and imports it into Git. The second clones an existing Git repository from another server.
+
+### Initializing a Repository in an Existing Directory ###
+
+If you’re starting to track an existing project in Git, you need to go to the project’s directory and type
+
+ $ git init
+
+This creates a new subdirectory named `.git` that contains all of your necessary repository files — a Git repository skeleton. At this point, nothing in your project is tracked yet. (See *Chapter 9* for more information about exactly what files are contained in the `.git` directory you just created.)
+
+If you want to start version-controlling existing files (as opposed to an empty directory), you should probably begin tracking those files and do an initial commit. You can accomplish that with a few `git add` commands that specify the files you want to track, followed by a commit:
+
+ $ git add *.c
+ $ git add README
+ $ git commit -m 'initial project version'
+
+We’ll go over what these commands do in just a minute. At this point, you have a Git repository with tracked files and an initial commit.
+
+### Cloning an Existing Repository ###
+
+If you want to get a copy of an existing Git repository — for example, a project you’d like to contribute to — the command you need is `git clone`. If you’re familiar with other VCS systems such as Subversion, you’ll notice that the command is `clone` and not `checkout`. This is an important distinction — Git receives a copy of nearly all data that the server has. Every version of every file for the history of the project is pulled down when you run `git clone`. In fact, if your server disk gets corrupted, you can use any of the clones on any client to set the server back to the state it was in when it was cloned (you may lose some server-side hooks and such, but all the versioned data would be there — see *Chapter 4* for more details).
+
+You clone a repository with `git clone [url]`. For example, if you want to clone the Ruby Git library called Grit, you can do so like this:
+
+ $ git clone git://github.com/schacon/grit.git
+
+That creates a directory named `grit`, initializes a `.git` directory inside it, pulls down all the data for that repository, and checks out a working copy of the latest version. If you go into the new `grit` directory, you’ll see the project files in there, ready to be worked on or used. If you want to clone the repository into a directory named something other than grit, you can specify that as the next command-line option:
+
+ $ git clone git://github.com/schacon/grit.git mygrit
+
+That command does the same thing as the previous one, but the target directory is called `mygrit`.
+
+Git has a number of different transfer protocols you can use. The previous example uses the `git://` protocol, but you may also see `http(s)://` or `user@server:/path.git`, which uses the SSH transfer protocol. *Chapter 4* will introduce all of the available options the server can set up to access your Git repository and the pros and cons of each.
+
+## Recording Changes to the Repository ##
+
+You have a bona fide Git repository and a checkout or working copy of the files for that project. You need to make some changes and commit snapshots of those changes into your repository each time the project reaches a state you want to record.
+
+Remember that each file in your working directory can be in one of two states: *tracked* or *untracked*. *Tracked* files are files that were in the last snapshot; they can be *unmodified*, *modified*, or *staged*. *Untracked* files are everything else — any files in your working directory that were not in your last snapshot and are not in your staging area. When you first clone a repository, all of your files will be tracked and unmodified because you just checked them out and haven’t edited anything.
+
+As you edit files, Git sees them as modified, because you’ve changed them since your last commit. You *stage* these modified files and then commit all your staged changes, and the cycle repeats. This lifecycle is illustrated in Figure 2-1.
+
+Insert 18333fig0201.png
+Figure 2-1. The lifecycle of the status of your files.
+
+### Checking the Status of Your Files ###
+
+The main tool you use to determine which files are in which state is the `git status` command. If you run this command directly after a clone, you should see something like this:
+
+ $ git status
+ # On branch master
+ nothing to commit (working directory clean)
+
+This means you have a clean working directory — in other words, there are no tracked and modified files. Git also doesn’t see any untracked files, or they would be listed here. Finally, the command tells you which branch you’re on. For now, that is always `master`, which is the default; you won’t worry about it here. The next chapter will go over branches and references in detail.
+
+Let’s say you add a new file to your project, a simple `README` file. If the file didn’t exist before, and you run `git status`, you see your untracked file like so:
+
+ $ vim README
+ $ git status
+ # On branch master
+ # Untracked files:
+ # (use "git add <file>..." to include in what will be committed)
+ #
+ # README
+ nothing added to commit but untracked files present (use "git add" to track)
+
+You can see that your new `README` file is untracked, because it’s under the “Untracked files” heading in your status output. Untracked basically means that Git sees a file you didn’t have in the previous snapshot (commit); Git won’t start including it in your commit snapshots until you explicitly tell it to do so. It does this so you don’t accidentally begin including generated binary files or other files that you did not mean to include. You do want to start including README, so let’s start tracking the file.
+
+### Tracking New Files ###
+
+In order to begin tracking a new file, you use the command `git add`. To begin tracking the `README` file, you can run this:
+
+ $ git add README
+
+If you run your status command again, you can see that your `README` file is now tracked and staged:
+
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # new file: README
+ #
+
+You can tell that it’s staged because it’s under the “Changes to be committed” heading. If you commit at this point, the version of the file at the time you ran `git add` is what will be in the historical snapshot. You may recall that when you ran `git init` earlier, you then ran `git add (files)` — that was to begin tracking files in your directory. The `git add` command takes a path name for either a file or a directory; if it’s a directory, the command adds all the files in that directory recursively.
+
+### Staging Modified Files ###
+
+Let’s change a file that was already tracked. If you change a previously tracked file called `benchmarks.rb` and then run your `status` command again, you get something that looks like this:
+
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # new file: README
+ #
+ # Changed but not updated:
+ # (use "git add <file>..." to update what will be committed)
+ #
+ # modified: benchmarks.rb
+ #
+
+The `benchmarks.rb` file appears under a section named “Changed but not updated” — which means that a file that is tracked has been modified in the working directory but not yet staged. To stage it, you run the `git add` command (it’s a multipurpose command — you use it to begin tracking new files, to stage files, and to do other things like marking merge-conflicted files as resolved). Let’s run `git add` now to stage the `benchmarks.rb` file, and then run `git status` again:
+
+ $ git add benchmarks.rb
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # new file: README
+ # modified: benchmarks.rb
+ #
+
+Both files are staged and will go into your next commit. At this point, suppose you remember one little change that you want to make in `benchmarks.rb` before you commit it. You open it again and make that change, and you’re ready to commit. However, let’s run `git status` one more time:
+
+ $ vim benchmarks.rb
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # new file: README
+ # modified: benchmarks.rb
+ #
+ # Changed but not updated:
+ # (use "git add <file>..." to update what will be committed)
+ #
+ # modified: benchmarks.rb
+ #
+
+What the heck? Now `benchmarks.rb` is listed as both staged and unstaged. How is that possible? It turns out that Git stages a file exactly as it is when you run the `git add` command. If you commit now, the version of `benchmarks.rb` as it was when you last ran the `git add` command is how it will go into the commit, not the version of the file as it looks in your working directory when you run `git commit`. If you modify a file after you run `git add`, you have to run `git add` again to stage the latest version of the file:
+
+ $ git add benchmarks.rb
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # new file: README
+ # modified: benchmarks.rb
+ #
+
+### Ignoring Files ###
+
+Often, you’ll have a class of files that you don’t want Git to automatically add or even show you as being untracked. These are generally automatically generated files such as log files or files produced by your build system. In such cases, you can create a file listing patterns to match them named `.gitignore`. Here is an example `.gitignore` file:
+
+ $ cat .gitignore
+ *.[oa]
+ *~
+
+The first line tells Git to ignore any files ending in `.o` or `.a` — *object* and *archive* files that may be the product of building your code. The second line tells Git to ignore all files that end with a tilde (`~`), which is used by many text editors such as Emacs to mark temporary files. You may also include a `log`, `tmp`, or `pid` directory; automatically generated documentation; and so on. Setting up a `.gitignore` file before you get going is generally a good idea so you don’t accidentally commit files that you really don’t want in your Git repository.
+
+The rules for the patterns you can put in the `.gitignore` file are as follows:
+
+* Blank lines or lines starting with `#` are ignored.
+* Standard glob patterns work.
+* You can end patterns with a forward slash (`/`) to specify a directory.
+* You can negate a pattern by starting it with an exclamation point (`!`).
+
+Glob patterns are like simplified regular expressions that shells use. An asterisk (`*`) matches zero or more characters; `[abc]` matches any character inside the brackets (in this case `a`, `b`, or `c`); a question mark (`?`) matches a single character; and brackets enclosing characters separated by a hyphen(`[0-9]`) matches any character in the range (in this case 0 through 9) .
+
+Here is another example `.gitignore` file:
+
+ # a comment - this is ignored
+ *.a # no .a files
+ !lib.a # but do track lib.a, even though you're ignoring .a files above
+ /TODO # only ignore the root TODO file, not subdir/TODO
+ build/ # ignore all files in the build/ directory
+ doc/*.txt # ignore doc/notes.txt, but not doc/server/arch.txt
+
+### Viewing Your Staged and Unstaged Changes ###
+
+If the `git status` command is too vague for you — you want to know exactly what you changed, not just which files were changed — you can use the `git diff` command. We’ll cover `git diff` in more detail later; but you’ll probably use it most often to answer these two questions: What have you changed but not yet staged? And what have you staged that you are about to commit? Although `git status` answers those questions very generally, `git diff` shows you the exact lines added and removed — the patch, as it were.
+
+Let’s say you edit and stage the `README` file again and then edit the `benchmarks.rb` file without staging it. If you run your `status` command, you once again see something like this:
+
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # new file: README
+ #
+ # Changed but not updated:
+ # (use "git add <file>..." to update what will be committed)
+ #
+ # modified: benchmarks.rb
+ #
+
+To see what you’ve changed but not yet staged, type `git diff` with no other arguments:
+
+ $ git diff
+ diff --git a/benchmarks.rb b/benchmarks.rb
+ index 3cb747f..da65585 100644
+ --- a/benchmarks.rb
+ +++ b/benchmarks.rb
+ @@ -36,6 +36,10 @@ def main
+ @commit.parents[0].parents[0].parents[0]
+ end
+
+ + run_code(x, 'commits 1') do
+ + git.commits.size
+ + end
+ +
+ run_code(x, 'commits 2') do
+ log = git.commits('master', 15)
+ log.size
+
+That command compares what is in your working directory with what is in your staging area. The result tells you the changes you’ve made that you haven’t yet staged.
+
+If you want to see what you’ve staged that will go into your next commit, you can use `git diff --cached`. (In Git versions 1.6.1 and later, you can also use `git diff --staged`, which may be easier to remember.) This command compares your staged changes to your last commit:
+
+ $ git diff --cached
+ diff --git a/README b/README
+ new file mode 100644
+ index 0000000..03902a1
+ --- /dev/null
+ +++ b/README2
+ @@ -0,0 +1,5 @@
+ +grit
+ + by Tom Preston-Werner, Chris Wanstrath
+ + http://github.com/mojombo/grit
+ +
+ +Grit is a Ruby library for extracting information from a Git repository
+
+It’s important to note that `git diff` by itself doesn’t show all changes made since your last commit — only changes that are still unstaged. This can be confusing, because if you’ve staged all of your changes, `git diff` will give you no output.
+
+For another example, if you stage the `benchmarks.rb` file and then edit it, you can use `git diff` to see the changes in the file that are staged and the changes that are unstaged:
+
+ $ git add benchmarks.rb
+ $ echo '# test line' >> benchmarks.rb
+ $ git status
+ # On branch master
+ #
+ # Changes to be committed:
+ #
+ # modified: benchmarks.rb
+ #
+ # Changed but not updated:
+ #
+ # modified: benchmarks.rb
+ #
+
+Now you can use `git diff` to see what is still unstaged
+
+ $ git diff
+ diff --git a/benchmarks.rb b/benchmarks.rb
+ index e445e28..86b2f7c 100644
+ --- a/benchmarks.rb
+ +++ b/benchmarks.rb
+ @@ -127,3 +127,4 @@ end
+ main()
+
+ ##pp Grit::GitRuby.cache_client.stats
+ +# test line
+
+and `git diff --cached` to see what you’ve staged so far:
+
+ $ git diff --cached
+ diff --git a/benchmarks.rb b/benchmarks.rb
+ index 3cb747f..e445e28 100644
+ --- a/benchmarks.rb
+ +++ b/benchmarks.rb
+ @@ -36,6 +36,10 @@ def main
+ @commit.parents[0].parents[0].parents[0]
+ end
+
+ + run_code(x, 'commits 1') do
+ + git.commits.size
+ + end
+ +
+ run_code(x, 'commits 2') do
+ log = git.commits('master', 15)
+ log.size
+
+### Committing Your Changes ###
+
+Now that your staging area is set up the way you want it, you can commit your changes. Remember that anything that is still unstaged — any files you have created or modified that you haven’t run `git add` on since you edited them — won’t go into this commit. They will stay as modified files on your disk.
+In this case, the last time you ran `git status`, you saw that everything was staged, so you’re ready to commit your changes. The simplest way to commit is to type `git commit`:
+
+ $ git commit
+
+Doing so launches your editor of choice. (This is set by your shell’s `$EDITOR` environment variable — usually vim or emacs, although you can configure it with whatever you want using the `git config --global core.editor` command as you saw in *Chapter 1*).
+
+The editor displays the following text (this example is a Vim screen):
+
+ # Please enter the commit message for your changes. Lines starting
+ # with '#' will be ignored, and an empty message aborts the commit.
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # new file: README
+ # modified: benchmarks.rb
+ ~
+ ~
+ ~
+ ".git/COMMIT_EDITMSG" 10L, 283C
+
+You can see that the default commit message contains the latest output of the `git status` command commented out and one empty line on top. You can remove these comments and type your commit message, or you can leave them there to help you remember what you’re committing. (For an even more explicit reminder of what you’ve modified, you can pass the `-v` option to `git commit`. Doing so also puts the diff of your change in the editor so you can see exactly what you did.) When you exit the editor, Git creates your commit with that commit message (with the comments and diff stripped out).
+
+Alternatively, you can type your commit message inline with the `commit` command by specifying it after a `-m` flag, like this:
+
+ $ git commit -m "Story 182: Fix benchmarks for speed"
+ [master]: created 463dc4f: "Fix benchmarks for speed"
+ 2 files changed, 3 insertions(+), 0 deletions(-)
+ create mode 100644 README
+
+Now you’ve created your first commit! You can see that the commit has given you some output about itself: which branch you committed to (`master`), what SHA-1 checksum the commit has (`463dc4f`), how many files were changed, and statistics about lines added and removed in the commit.
+
+Remember that the commit records the snapshot you set up in your staging area. Anything you didn’t stage is still sitting there modified; you can do another commit to add it to your history. Every time you perform a commit, you’re recording a snapshot of your project that you can revert to or compare to later.
+
+### Skipping the Staging Area ###
+
+Although it can be amazingly useful for crafting commits exactly how you want them, the staging area is sometimes a bit more complex than you need in your workflow. If you want to skip the staging area, Git provides a simple shortcut. Providing the `-a` option to the `git commit` command makes Git automatically stage every file that is already tracked before doing the commit, letting you skip the `git add` part:
+
+ $ git status
+ # On branch master
+ #
+ # Changed but not updated:
+ #
+ # modified: benchmarks.rb
+ #
+ $ git commit -a -m 'added new benchmarks'
+ [master 83e38c7] added new benchmarks
+ 1 files changed, 5 insertions(+), 0 deletions(-)
+
+Notice how you don’t have to run `git add` on the `benchmarks.rb` file in this case before you commit.
+
+### Removing Files ###
+
+To remove a file from Git, you have to remove it from your tracked files (more accurately, remove it from your staging area) and then commit. The `git rm` command does that and also removes the file from your working directory so you don’t see it as an untracked file next time around.
+
+If you simply remove the file from your working directory, it shows up under the “Changed but not updated” (that is, _unstaged_) area of your `git status` output:
+
+ $ rm grit.gemspec
+ $ git status
+ # On branch master
+ #
+ # Changed but not updated:
+ # (use "git add/rm <file>..." to update what will be committed)
+ #
+ # deleted: grit.gemspec
+ #
+
+Then, if you run `git rm`, it stages the file’s removal:
+
+ $ git rm grit.gemspec
+ rm 'grit.gemspec'
+ $ git status
+ # On branch master
+ #
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # deleted: grit.gemspec
+ #
+
+The next time you commit, the file will be gone and no longer tracked. If you modified the file and added it to the index already, you must force the removal with the `-f` option. This is a safety feature to prevent accidental removal of data that hasn’t yet been recorded in a snapshot and that can’t be recovered from Git.
+
+Another useful thing you may want to do is to keep the file in your working tree but remove it from your staging area. In other words, you may want to keep the file on your hard drive but not have Git track it anymore. This is particularly useful if you forgot to add something to your `.gitignore` file and accidentally added it, like a large log file or a bunch of `.a` compiled files. To do this, use the `--cached` option:
+
+ $ git rm --cached readme.txt
+
+You can pass files, directories, and file-glob patterns to the `git rm` command. That means you can do things such as
+
+ $ git rm log/\*.log
+
+Note the backslash (`\`) in front of the `*`. This is necessary because Git does its own filename expansion in addition to your shell’s filename expansion. This command removes all files that have the `.log` extension in the `log/` directory. Or, you can do something like this:
+
+ $ git rm \*~
+
+This command removes all files that end with `~`.
+
+### Moving Files ###
+
+Unlike many other VCS systems, Git doesn’t explicitly track file movement. If you rename a file in Git, no metadata is stored in Git that tells it you renamed the file. However, Git is pretty smart about figuring that out after the fact — we’ll deal with detecting file movement a bit later.
+
+Thus it’s a bit confusing that Git has a `mv` command. If you want to rename a file in Git, you can run something like
+
+ $ git mv file_from file_to
+
+and it works fine. In fact, if you run something like this and look at the status, you’ll see that Git considers it a renamed file:
+
+ $ git mv README.txt README
+ $ git status
+ # On branch master
+ # Your branch is ahead of 'origin/master' by 1 commit.
+ #
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # renamed: README.txt -> README
+ #
+
+However, this is equivalent to running something like this:
+
+ $ mv README.txt README
+ $ git rm README.txt
+ $ git add README
+
+Git figures out that it’s a rename implicitly, so it doesn’t matter if you rename a file that way or with the `mv` command. The only real difference is that `mv` is one command instead of three — it’s a convenience function. More important, you can use any tool you like to rename a file, and address the add/rm later, before you commit.
+
+## Viewing the Commit History ##
+
+After you have created several commits, or if you have cloned a repository with an existing commit history, you’ll probably want to look back to see what has happened. The most basic and powerful tool to do this is the `git log` command.
+
+These examples use a very simple project called `simplegit` that I often use for demonstrations. To get the project, run
+
+ git clone git://github.com/schacon/simplegit-progit.git
+
+When you run `git log` in this project, you should get output that looks something like this:
+
+ $ git log
+ commit ca82a6dff817ec66f44342007202690a93763949
+ Author: Scott Chacon <[email protected]>
+ Date: Mon Mar 17 21:52:11 2008 -0700
+
+ changed the version number
+
+ commit 085bb3bcb608e1e8451d4b2432f8ecbe6306e7e7
+ Author: Scott Chacon <[email protected]>
+ Date: Sat Mar 15 16:40:33 2008 -0700
+
+ removed unnecessary test code
+
+ commit a11bef06a3f659402fe7563abf99ad00de2209e6
+ Author: Scott Chacon <[email protected]>
+ Date: Sat Mar 15 10:31:28 2008 -0700
+
+ first commit
+
+By default, with no arguments, `git log` lists the commits made in that repository in reverse chronological order. That is, the most recent commits show up first. As you can see, this command lists each commit with its SHA-1 checksum, the author’s name and e-mail, the date written, and the commit message.
+
+A huge number and variety of options to the `git log` command are available to show you exactly what you’re looking for. Here, we’ll show you some of the most-used options.
+
+One of the more helpful options is `-p`, which shows the diff introduced in each commit. You can also use `-2`, which limits the output to only the last two entries:
+
+ $ git log -p -2
+ commit ca82a6dff817ec66f44342007202690a93763949
+ Author: Scott Chacon <[email protected]>
+ Date: Mon Mar 17 21:52:11 2008 -0700
+
+ changed the version number
+
+ diff --git a/Rakefile b/Rakefile
+ index a874b73..8f94139 100644
+ --- a/Rakefile
+ +++ b/Rakefile
+ @@ -5,7 +5,7 @@ require 'rake/gempackagetask'
+ spec = Gem::Specification.new do |s|
+ - s.version = "0.1.0"
+ + s.version = "0.1.1"
+ s.author = "Scott Chacon"
+
+ commit 085bb3bcb608e1e8451d4b2432f8ecbe6306e7e7
+ Author: Scott Chacon <[email protected]>
+ Date: Sat Mar 15 16:40:33 2008 -0700
+
+ removed unnecessary test code
+
+ diff --git a/lib/simplegit.rb b/lib/simplegit.rb
+ index a0a60ae..47c6340 100644
+ --- a/lib/simplegit.rb
+ +++ b/lib/simplegit.rb
+ @@ -18,8 +18,3 @@ class SimpleGit
+ end
+
+ end
+ -
+ -if $0 == __FILE__
+ - git = SimpleGit.new
+ - puts git.show
+ -end
+ \ No newline at end of file
+
+This option displays the same information but with a diff directly following each entry. This is very helpful for code review or to quickly browse what happened during a series of commits that a collaborator has added.
+You can also use a series of summarizing options with `git log`. For example, if you want to see some abbreviated stats for each commit, you can use the `--stat` option:
+
+ $ git log --stat
+ commit ca82a6dff817ec66f44342007202690a93763949
+ Author: Scott Chacon <[email protected]>
+ Date: Mon Mar 17 21:52:11 2008 -0700
+
+ changed the version number
+
+ Rakefile | 2 +-
+ 1 files changed, 1 insertions(+), 1 deletions(-)
+
+ commit 085bb3bcb608e1e8451d4b2432f8ecbe6306e7e7
+ Author: Scott Chacon <[email protected]>
+ Date: Sat Mar 15 16:40:33 2008 -0700
+
+ removed unnecessary test code
+
+ lib/simplegit.rb | 5 -----
+ 1 files changed, 0 insertions(+), 5 deletions(-)
+
+ commit a11bef06a3f659402fe7563abf99ad00de2209e6
+ Author: Scott Chacon <[email protected]>
+ Date: Sat Mar 15 10:31:28 2008 -0700
+
+ first commit
+
+ README | 6 ++++++
+ Rakefile | 23 +++++++++++++++++++++++
+ lib/simplegit.rb | 25 +++++++++++++++++++++++++
+ 3 files changed, 54 insertions(+), 0 deletions(-)
+
+As you can see, the `--stat` option prints below each commit entry a list of modified files, how many files were changed, and how many lines in those files were added and removed. It also puts a summary of the information at the end.
+Another really useful option is `--pretty`. This option changes the log output to formats other than the default. A few prebuilt options are available for you to use. The `oneline` option prints each commit on a single line, which is useful if you’re looking at a lot of commits. In addition, the `short`, `full`, and `fuller` options show the output in roughly the same format but with less or more information, respectively:
+
+ $ git log --pretty=oneline
+ ca82a6dff817ec66f44342007202690a93763949 changed the version number
+ 085bb3bcb608e1e8451d4b2432f8ecbe6306e7e7 removed unnecessary test code
+ a11bef06a3f659402fe7563abf99ad00de2209e6 first commit
+
+The most interesting option is `format`, which allows you to specify your own log output format. This is especially useful when you’re generating output for machine parsing — because you specify the format explicitly, you know it won’t change with updates to Git:
+
+ $ git log --pretty=format:"%h - %an, %ar : %s"
+ ca82a6d - Scott Chacon, 11 months ago : changed the version number
+ 085bb3b - Scott Chacon, 11 months ago : removed unnecessary test code
+ a11bef0 - Scott Chacon, 11 months ago : first commit
+
+Table 2-1 lists some of the more useful options that `format` takes.
+
+ Option Description of Output
+ %H Commit hash
+ %h Abbreviated commit hash
+ %T Tree hash
+ %t Abbreviated tree hash
+ %P Parent hashes
+ %p Abbreviated parent hashes
+ %an Author name
+ %ae Author e-mail
+ %ad Author date (format respects the --date= option)
+ %ar Author date, relative
+ %cn Committer name
+ %ce Committer email
+ %cd Committer date
+ %cr Committer date, relative
+ %s Subject
+
+You may be wondering what the difference is between _author_ and _committer_. The _author_ is the person who originally wrote the patch, whereas the _committer_ is the person who last applied the patch. So, if you send in a patch to a project and one of the core members applies the patch, both of you get credit — you as the author and the core member as the committer. We’ll cover this distinction a bit more in *Chapter 5*.
+
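+If you want to see both fields side by side, a format string like the following does the trick, using placeholders from Table 2-1:
+
+    $ git log --pretty=format:"%h - author: %an, committer: %cn : %s"
+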
+The `oneline` and `format` options are particularly useful with another `log` option called `--graph`. This option adds a nice little ASCII graph showing your branch and merge history, which we can see in our copy of the Grit project repository:
+
+ $ git log --pretty=format:"%h %s" --graph
+ * 2d3acf9 ignore errors from SIGCHLD on trap
+ * 5e3ee11 Merge branch 'master' of git://github.com/dustin/grit
+ |\
+ | * 420eac9 Added a method for getting the current branch.
+ * | 30e367c timeout code and tests
+ * | 5a09431 add timeout protection to grit
+ * | e1193f8 support for heads with slashes in them
+ |/
+ * d6016bc require time for xmlschema
+ * 11d191e Merge branch 'defunkt' into local
+
+Those are only some simple output-formatting options to `git log` — there are many more. Table 2-2 lists the options we’ve covered so far and some other common formatting options that may be useful, along with how they change the output of the `log` command.
+
+ Option Description
+ -p Show the patch introduced with each commit.
+ --stat Show statistics for files modified in each commit.
+ --shortstat Display only the changed/insertions/deletions line from the --stat command.
+ --name-only Show the list of files modified after the commit information.
+ --name-status Show the list of files affected with added/modified/deleted information as well.
+ --abbrev-commit Show only the first few characters of the SHA-1 checksum instead of all 40.
+ --relative-date Display the date in a relative format (for example, “2 weeks ago”) instead of using the full date format.
+ --graph Display an ASCII graph of the branch and merge history beside the log output.
+ --pretty Show commits in an alternate format. Options include oneline, short, full, fuller, and format (where you specify your own format).
+
+### Limiting Log Output ###
+
+In addition to output-formatting options, `git log` takes a number of useful limiting options — that is, options that let you show only a subset of commits. You’ve seen one such option already — the `-2` option, which shows only the last two commits. In fact, you can do `-<n>`, where `n` is any integer, to show the last `n` commits. In reality, you’re unlikely to use that often, because Git by default pipes all output through a pager so you see only one page of log output at a time.
+
+However, the time-limiting options such as `--since` and `--until` are very useful. For example, this command gets the list of commits made in the last two weeks:
+
+ $ git log --since=2.weeks
+
+This command works with lots of formats — you can specify a specific date (“2008-01-15”) or a relative date such as “2 years 1 day 3 minutes ago”.
+
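+For instance, both of these invocations are valid (the dates are only examples):
+
+    $ git log --since="2008-01-15"
+    $ git log --since="2 years 1 day 3 minutes ago"
+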
+You can also filter the list to commits that match some search criteria. The `--author` option allows you to filter on a specific author, and the `--grep` option lets you search for keywords in the commit messages. (Note that if you want to specify both author and grep options, you have to add `--all-match` or the command will match commits with either.)
+
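+As a quick sketch, filtering to commits by one author whose messages mention a keyword might look like this (the author and keyword are illustrative):
+
+    $ git log --author="Scott Chacon" --grep="benchmarks" --all-match
+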
+The last really useful option to pass to `git log` as a filter is a path. If you specify a directory or file name, you can limit the log output to commits that introduced a change to those files. This is always the last option and is generally preceded by double dashes (`--`) to separate the paths from the options.
+
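+For example, to see only the commits that touched a single file, you could run something like:
+
+    $ git log -- lib/simplegit.rb
+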
+In Table 2-3 we’ll list these and a few other common options for your reference.
+
+ Option Description
+ -(n) Show only the last n commits
+ --since, --after Limit the commits to those made after the specified date.
+ --until, --before Limit the commits to those made before the specified date.
+ --author Only show commits in which the author entry matches the specified string.
+ --committer Only show commits in which the committer entry matches the specified string.
+
+For example, if you want to see which commits modifying test files in the Git source code history were committed by Junio Hamano in the month of October 2008 and were not merges, you can run something like this:
+
+ $ git log --pretty="%h - %s" --author=gitster --since="2008-10-01" \
+ --before="2008-11-01" --no-merges -- t/
+ 5610e3b - Fix testcase failure when extended attribute
+ acd3b9e - Enhance hold_lock_file_for_{update,append}()
+ f563754 - demonstrate breakage of detached checkout wi
+ d1a43f2 - reset --hard/read-tree --reset -u: remove un
+ 51a94af - Fix "checkout --track -b newbranch" on detac
+ b0ad11e - pull: allow "git pull origin $something:$cur
+
+Of the nearly 20,000 commits in the Git source code history, this command shows the 6 that match those criteria.
+
+### Using a GUI to Visualize History ###
+
+If you like to use a more graphical tool to visualize your commit history, you may want to take a look at a Tcl/Tk program called `gitk` that is distributed with Git. Gitk is basically a visual `git log` tool, and it accepts nearly all the filtering options that `git log` does. If you type `gitk` on the command line in your project, you should see something like Figure 2-2.
+
+Insert 18333fig0202.png
+Figure 2-2. The gitk history visualizer.
+
+You can see the commit history in the top half of the window along with a nice ancestry graph. The diff viewer in the bottom half of the window shows you the changes introduced at any commit you click.
+
+## Undoing Things ##
+
+At any stage, you may want to undo something. Here, we’ll review a few basic tools for undoing changes that you’ve made. Be careful, because you can’t always revert some of these undos. This is one of the few areas in Git where you may lose some work if you do it wrong.
+
+### Changing Your Last Commit ###
+
+One of the common undos takes place when you commit too early and possibly forget to add some files, or you mess up your commit message. If you want to try that commit again, you can run commit with the `--amend` option:
+
+ $ git commit --amend
+
+This command takes your staging area and uses it for the commit. If you’ve made no changes since your last commit (for instance, you run this command immediately after your previous commit), then your snapshot will look exactly the same and all you’ll change is your commit message.
+
+The same commit-message editor fires up, but it already contains the message of your previous commit. You can edit the message the same as always, but it overwrites your previous commit.
+
+As an example, if you commit and then realize you forgot to stage the changes in a file you wanted to add to this commit, you can do something like this:
+
+ $ git commit -m 'initial commit'
+ $ git add forgotten_file
+ $ git commit --amend
+
+After these three commands, you end up with a single commit — the second commit replaces the results of the first.
+
+### Unstaging a Staged File ###
+
+The next two sections demonstrate how to wrangle your staging area and working directory changes. The nice part is that the command you use to determine the state of those two areas also reminds you how to undo changes to them. For example, let’s say you’ve changed two files and want to commit them as two separate changes, but you accidentally type `git add *` and stage them both. How can you unstage one of the two? The `git status` command reminds you:
+
+ $ git add .
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # modified: README.txt
+ # modified: benchmarks.rb
+ #
+
+Right below the “Changes to be committed” text, it says "use `git reset HEAD <file>...` to unstage". So, let’s use that advice to unstage the `benchmarks.rb` file:
+
+ $ git reset HEAD benchmarks.rb
+ benchmarks.rb: locally modified
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # modified: README.txt
+ #
+ # Changed but not updated:
+ # (use "git add <file>..." to update what will be committed)
+ # (use "git checkout -- <file>..." to discard changes in working directory)
+ #
+ # modified: benchmarks.rb
+ #
+
+The command is a bit strange, but it works. The `benchmarks.rb` file is modified but once again unstaged.
+
+### Unmodifying a Modified File ###
+
+What if you realize that you don’t want to keep your changes to the `benchmarks.rb` file? How can you easily unmodify it — revert it back to what it looked like when you last committed (or initially cloned, or however you got it into your working directory)? Luckily, `git status` tells you how to do that, too. In the last example output, the unstaged area looks like this:
+
+ # Changed but not updated:
+ # (use "git add <file>..." to update what will be committed)
+ # (use "git checkout -- <file>..." to discard changes in working directory)
+ #
+ # modified: benchmarks.rb
+ #
+
+It tells you pretty explicitly how to discard the changes you’ve made (at least, the newer versions of Git, 1.6.1 and later, do this — if you have an older version, we highly recommend upgrading it to get some of these nicer usability features). Let’s do what it says:
+
+ $ git checkout -- benchmarks.rb
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # modified: README.txt
+ #
+
+You can see that the changes have been reverted. You should also realize that this is a dangerous command: any changes you made to that file are gone — you just copied another file over it. Don’t ever use this command unless you absolutely know that you don’t want the file. If you just need to get it out of the way, we’ll go over stashing and branching in the next chapter; these are generally better ways to go.
+
+Remember, anything that is committed in Git can almost always be recovered. Even commits that were on branches that were deleted or commits that were overwritten with an `--amend` commit can be recovered (see *Chapter 9* for data recovery). However, anything you lose that was never committed is likely never to be seen again.
+
+## Working with Remotes ##
+
+To be able to collaborate on any Git project, you need to know how to manage your remote repositories. Remote repositories are versions of your project that are hosted on the Internet or network somewhere. You can have several of them, each of which generally is either read-only or read/write for you. Collaborating with others involves managing these remote repositories and pushing and pulling data to and from them when you need to share work.
+Managing remote repositories includes knowing how to add remote repositories, remove remotes that are no longer valid, manage various remote branches and define them as being tracked or not, and more. In this section, we’ll cover these remote-management skills.
+
+### Showing Your Remotes ###
+
+To see which remote servers you have configured, you can run the `git remote` command. It lists the shortnames of each remote handle you’ve specified. If you’ve cloned your repository, you should at least see *origin* — that is the default name Git gives to the server you cloned from:
+
+ $ git clone git://github.com/schacon/ticgit.git
+ Initialized empty Git repository in /private/tmp/ticgit/.git/
+ remote: Counting objects: 595, done.
+ remote: Compressing objects: 100% (269/269), done.
+ remote: Total 595 (delta 255), reused 589 (delta 253)
+ Receiving objects: 100% (595/595), 73.31 KiB | 1 KiB/s, done.
+ Resolving deltas: 100% (255/255), done.
+ $ cd ticgit
+ $ git remote
+ origin
+
+You can also specify `-v`, which shows you the URL that Git has stored for the shortname to be expanded to:
+
+ $ git remote -v
+ origin git://github.com/schacon/ticgit.git (fetch)
+ origin git://github.com/schacon/ticgit.git (push)
+
+If you have more than one remote, the command lists them all. For example, my Grit repository looks something like this:
+
+ $ cd grit
+ $ git remote -v
+ bakkdoor git://github.com/bakkdoor/grit.git
+ cho45 git://github.com/cho45/grit.git
+ defunkt git://github.com/defunkt/grit.git
+ koke git://github.com/koke/grit.git
+ origin [email protected]:mojombo/grit.git
+
+This means we can pull contributions from any of these users pretty easily. But notice that only the origin remote is an SSH URL, so it’s the only one I can push to (we’ll cover why this is in *Chapter 4*).
+
+### Adding Remote Repositories ###
+
+I’ve mentioned and given some demonstrations of adding remote repositories in previous sections, but here is how to do it explicitly. To add a new remote Git repository as a shortname you can reference easily, run `git remote add [shortname] [url]`:
+
+ $ git remote
+ origin
+ $ git remote add pb git://github.com/paulboone/ticgit.git
+ $ git remote -v
+ origin git://github.com/schacon/ticgit.git
+ pb git://github.com/paulboone/ticgit.git
+
+Now you can use the string `pb` on the command line in lieu of the whole URL. For example, if you want to fetch all the information that Paul has but that you don’t yet have in your repository, you can run `git fetch pb`:
+
+ $ git fetch pb
+ remote: Counting objects: 58, done.
+ remote: Compressing objects: 100% (41/41), done.
+ remote: Total 44 (delta 24), reused 1 (delta 0)
+ Unpacking objects: 100% (44/44), done.
+ From git://github.com/paulboone/ticgit
+ * [new branch] master -> pb/master
+ * [new branch] ticgit -> pb/ticgit
+
+Paul’s master branch is accessible locally as `pb/master` — you can merge it into one of your branches, or you can check out a local branch at that point if you want to inspect it.
+
+### Fetching and Pulling from Your Remotes ###
+
+As you just saw, to get data from your remote projects, you can run:
+
+ $ git fetch [remote-name]
+
+The command goes out to that remote project and pulls down all the data from that remote project that you don’t have yet. After you do this, you should have references to all the branches from that remote, which you can merge in or inspect at any time. (We’ll go over what branches are and how to use them in much more detail in *Chapter 3*.)
+
+If you clone a repository, the command automatically adds that remote repository under the name *origin*. So, `git fetch origin` fetches any new work that has been pushed to that server since you cloned (or last fetched from) it. It’s important to note that the `fetch` command pulls the data to your local repository — it doesn’t automatically merge it with any of your work or modify what you’re currently working on. You have to merge it manually into your work when you’re ready.
+
+If you have a branch set up to track a remote branch (see the next section and *Chapter 3* for more information), you can use the `git pull` command to automatically fetch and then merge a remote branch into your current branch. This may be an easier or more comfortable workflow for you; and by default, the `git clone` command automatically sets up your local master branch to track the remote master branch on the server you cloned from (assuming the remote has a master branch). Running `git pull` generally fetches data from the server you originally cloned from and automatically tries to merge it into the code you’re currently working on.
+
+### Pushing to Your Remotes ###
+
+When you have your project at a point that you want to share, you have to push it upstream. The command for this is simple: `git push [remote-name] [branch-name]`. If you want to push your master branch to your `origin` server (again, cloning generally sets up both of those names for you automatically), then you can run this to push your work back up to the server:
+
+ $ git push origin master
+
+This command works only if you cloned from a server to which you have write access and if nobody has pushed in the meantime. If you and someone else clone at the same time and they push upstream and then you push upstream, your push will rightly be rejected. You’ll have to pull down their work first and incorporate it into yours before you’ll be allowed to push. See *Chapter 3* for more detailed information on how to push to remote servers.
+
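+If that happens, a typical recovery looks something like this (assuming you cloned from `origin` and are working on `master`):
+
+    $ git fetch origin
+    $ git merge origin/master
+    $ git push origin master
+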
+### Inspecting a Remote ###
+
+If you want to see more information about a particular remote, you can use the `git remote show [remote-name]` command. If you run this command with a particular shortname, such as `origin`, you get something like this:
+
+ $ git remote show origin
+ * remote origin
+ URL: git://github.com/schacon/ticgit.git
+ Remote branch merged with 'git pull' while on branch master
+ master
+ Tracked remote branches
+ master
+ ticgit
+
+It lists the URL for the remote repository as well as the tracking branch information. The command helpfully tells you that if you’re on the master branch and you run `git pull`, it will automatically merge in the master branch on the remote after it fetches all the remote references. It also lists all the remote references it has pulled down.
+
+That is a simple example you’re likely to encounter. When you’re using Git more heavily, however, you may see much more information from `git remote show`:
+
+ $ git remote show origin
+ * remote origin
+ URL: [email protected]:defunkt/github.git
+ Remote branch merged with 'git pull' while on branch issues
+ issues
+ Remote branch merged with 'git pull' while on branch master
+ master
+ New remote branches (next fetch will store in remotes/origin)
+ caching
+ Stale tracking branches (use 'git remote prune')
+ libwalker
+ walker2
+ Tracked remote branches
+ acl
+ apiv2
+ dashboard2
+ issues
+ master
+ postgres
+ Local branch pushed with 'git push'
+ master:master
+
+This command shows which branch is automatically pushed when you run `git push` on certain branches. It also shows you which remote branches on the server you don’t yet have, which remote branches you have that have been removed from the server, and multiple branches that are automatically merged when you run `git pull`.
+
+### Removing and Renaming Remotes ###
+
+If you want to rename a reference, in newer versions of Git you can run `git remote rename` to change a remote’s shortname. For instance, if you want to rename `pb` to `paul`, you can do so with `git remote rename`:
+
+ $ git remote rename pb paul
+ $ git remote
+ origin
+ paul
+
+It’s worth mentioning that this changes your remote branch names, too. What used to be referenced at `pb/master` is now at `paul/master`.
+
+If you want to remove a reference for some reason — you’ve moved the server or are no longer using a particular mirror, or perhaps a contributor isn’t contributing anymore — you can use `git remote rm`:
+
+ $ git remote rm paul
+ $ git remote
+ origin
+
+## Tagging ##
+
+Like most VCSs, Git has the ability to tag specific points in history as being important. Generally, people use this functionality to mark release points (`v1.0`, and so on). In this section, you’ll learn how to list the available tags, how to create new tags, and what the different types of tags are.
+
+### Listing Your Tags ###
+
+Listing the available tags in Git is straightforward. Just type `git tag`:
+
+ $ git tag
+ v0.1
+ v1.3
+
+This command lists the tags in alphabetical order; the order in which they appear has no real importance.
+
+You can also search for tags with a particular pattern. The Git source repo, for instance, contains more than 240 tags. If you’re only interested in looking at the 1.4.2 series, you can run this:
+
+ $ git tag -l 'v1.4.2.*'
+ v1.4.2.1
+ v1.4.2.2
+ v1.4.2.3
+ v1.4.2.4
+
+### Creating Tags ###
+
+Git uses two main types of tags: lightweight and annotated. A lightweight tag is very much like a branch that doesn’t change — it’s just a pointer to a specific commit. Annotated tags, however, are stored as full objects in the Git database. They’re checksummed; contain the tagger name, e-mail, and date; have a tagging message; and can be signed and verified with GNU Privacy Guard (GPG). It’s generally recommended that you create annotated tags so you can have all this information; but if you want a temporary tag or for some reason don’t want to keep the other information, lightweight tags are available too.
+
+### Annotated Tags ###
+
+Creating an annotated tag in Git is simple. The easiest way is to specify `-a` when you run the `tag` command:
+
+ $ git tag -a v1.4 -m 'my version 1.4'
+ $ git tag
+ v0.1
+ v1.3
+ v1.4
+
+The `-m` specifies a tagging message, which is stored with the tag. If you don’t specify a message for an annotated tag, Git launches your editor so you can type it in.
+
+You can see the tag data along with the commit that was tagged by using the `git show` command:
+
+ $ git show v1.4
+ tag v1.4
+ Tagger: Scott Chacon <[email protected]>
+ Date: Mon Feb 9 14:45:11 2009 -0800
+
+ my version 1.4
+ commit 15027957951b64cf874c3557a0f3547bd83b3ff6
+ Merge: 4a447f7... a6b4c97...
+ Author: Scott Chacon <[email protected]>
+ Date: Sun Feb 8 19:02:46 2009 -0800
+
+ Merge branch 'experiment'
+
+That shows the tagger information, the date the commit was tagged, and the annotation message before showing the commit information.
+
+### Signed Tags ###
+
+You can also sign your tags with GPG, assuming you have a private key. All you have to do is use `-s` instead of `-a`:
+
+ $ git tag -s v1.5 -m 'my signed 1.5 tag'
+ You need a passphrase to unlock the secret key for
+ user: "Scott Chacon <[email protected]>"
+ 1024-bit DSA key, ID F721C45A, created 2009-02-09
+
+If you run `git show` on that tag, you can see your GPG signature attached to it:
+
+ $ git show v1.5
+ tag v1.5
+ Tagger: Scott Chacon <[email protected]>
+ Date: Mon Feb 9 15:22:20 2009 -0800
+
+ my signed 1.5 tag
+ -----BEGIN PGP SIGNATURE-----
+ Version: GnuPG v1.4.8 (Darwin)
+
+ iEYEABECAAYFAkmQurIACgkQON3DxfchxFr5cACeIMN+ZxLKggJQf0QYiQBwgySN
+ Ki0An2JeAVUCAiJ7Ox6ZEtK+NvZAj82/
+ =WryJ
+ -----END PGP SIGNATURE-----
+ commit 15027957951b64cf874c3557a0f3547bd83b3ff6
+ Merge: 4a447f7... a6b4c97...
+ Author: Scott Chacon <[email protected]>
+ Date: Sun Feb 8 19:02:46 2009 -0800
+
+ Merge branch 'experiment'
+
+A bit later, you’ll learn how to verify signed tags.
+
+### Lightweight Tags ###
+
+Another way to tag commits is with a lightweight tag. This is basically the commit checksum stored in a file — no other information is kept. To create a lightweight tag, don’t supply the `-a`, `-s`, or `-m` option:
+
+ $ git tag v1.4-lw
+ $ git tag
+ v0.1
+ v1.3
+ v1.4
+ v1.4-lw
+ v1.5
+
+This time, if you run `git show` on the tag, you don’t see the extra tag information. The command just shows the commit:
+
+ $ git show v1.4-lw
+ commit 15027957951b64cf874c3557a0f3547bd83b3ff6
+ Merge: 4a447f7... a6b4c97...
+ Author: Scott Chacon <[email protected]>
+ Date: Sun Feb 8 19:02:46 2009 -0800
+
+ Merge branch 'experiment'
+
+### Verifying Tags ###
+
+To verify a signed tag, you use `git tag -v [tag-name]`. This command uses GPG to verify the signature. You need the signer’s public key in your keyring for this to work properly:
+
+ $ git tag -v v1.4.2.1
+ object 883653babd8ee7ea23e6a5c392bb739348b1eb61
+ type commit
+ tag v1.4.2.1
+ tagger Junio C Hamano <[email protected]> 1158138501 -0700
+
+ GIT 1.4.2.1
+
+ Minor fixes since 1.4.2, including git-mv and git-http with alternates.
+ gpg: Signature made Wed Sep 13 02:08:25 2006 PDT using DSA key ID F3119B9A
+ gpg: Good signature from "Junio C Hamano <[email protected]>"
+ gpg: aka "[jpeg image of size 1513]"
+ Primary key fingerprint: 3565 2A26 2040 E066 C9A7 4A7D C0C6 D9A4 F311 9B9A
+
+If you don’t have the signer’s public key, you get something like this instead:
+
+ gpg: Signature made Wed Sep 13 02:08:25 2006 PDT using DSA key ID F3119B9A
+ gpg: Can't check signature: public key not found
+ error: could not verify the tag 'v1.4.2.1'
+
+### Tagging Later ###
+
+You can also tag commits after you’ve moved past them. Suppose your commit history looks like this:
+
+ $ git log --pretty=oneline
+ 15027957951b64cf874c3557a0f3547bd83b3ff6 Merge branch 'experiment'
+ a6b4c97498bd301d84096da251c98a07c7723e65 beginning write support
+ 0d52aaab4479697da7686c15f77a3d64d9165190 one more thing
+ 6d52a271eda8725415634dd79daabbc4d9b6008e Merge branch 'experiment'
+ 0b7434d86859cc7b8c3d5e1dddfed66ff742fcbc added a commit function
+ 4682c3261057305bdd616e23b64b0857d832627b added a todo file
+ 166ae0c4d3f420721acbb115cc33848dfcc2121a started write support
+ 9fceb02d0ae598e95dc970b74767f19372d61af8 updated rakefile
+ 964f16d36dfccde844893cac5b347e7b3d44abbc commit the todo
+ 8a5cbc430f1a9c3d00faaeffd07798508422908a updated readme
+
+Now, suppose you forgot to tag the project at `v1.2`, which was at the "updated rakefile" commit. You can add it after the fact. To tag that commit, you specify the commit checksum (or part of it) at the end of the command:
+
+ $ git tag -a v1.2 9fceb02
+
+You can see that you’ve tagged the commit:
+
+ $ git tag
+ v0.1
+ v1.2
+ v1.3
+ v1.4
+ v1.4-lw
+ v1.5
+
+ $ git show v1.2
+ tag v1.2
+ Tagger: Scott Chacon <[email protected]>
+ Date: Mon Feb 9 15:32:16 2009 -0800
+
+ version 1.2
+ commit 9fceb02d0ae598e95dc970b74767f19372d61af8
+ Author: Magnus Chacon <[email protected]>
+ Date: Sun Apr 27 20:43:35 2008 -0700
+
+ updated rakefile
+ ...
+
+### Sharing Tags ###
+
+By default, the `git push` command doesn’t transfer tags to remote servers. You will have to explicitly push tags to a shared server after you have created them. This process is just like sharing remote branches — you can run `git push origin [tagname]`.
+
+ $ git push origin v1.5
+ Counting objects: 50, done.
+ Compressing objects: 100% (38/38), done.
+ Writing objects: 100% (44/44), 4.56 KiB, done.
+ Total 44 (delta 18), reused 8 (delta 1)
+ To [email protected]:schacon/simplegit.git
+ * [new tag] v1.5 -> v1.5
+
+If you have a lot of tags that you want to push up at once, you can also use the `--tags` option to the `git push` command. This will transfer all of your tags that are not already on the remote server.
+
+ $ git push origin --tags
+ Counting objects: 50, done.
+ Compressing objects: 100% (38/38), done.
+ Writing objects: 100% (44/44), 4.56 KiB, done.
+ Total 44 (delta 18), reused 8 (delta 1)
+ To [email protected]:schacon/simplegit.git
+ * [new tag] v0.1 -> v0.1
+ * [new tag] v1.2 -> v1.2
+ * [new tag] v1.4 -> v1.4
+ * [new tag] v1.4-lw -> v1.4-lw
+ * [new tag] v1.5 -> v1.5
+
+Now, when someone else clones or pulls from your repository, they will get all your tags as well.
+
+## Tips and Tricks ##
+
+Before we finish this chapter on basic Git, a few little tips and tricks may make your Git experience a bit simpler, easier, or more familiar. Many people use Git without using any of these tips, and we won’t refer to them or assume you’ve used them later in the book; but you should probably know how to do them.
+
+### Auto-Completion ###
+
+If you use the Bash shell, Git comes with a nice auto-completion script you can enable. Download the Git source code, and look in the `contrib/completion` directory; there should be a file called `git-completion.bash`. Copy this file to your home directory, and add this to your `.bashrc` file:
+
+ source ~/.git-completion.bash
+
+If you want to set up Git to automatically have Bash shell completion for all users, copy this script to the `/opt/local/etc/bash_completion.d` directory on Mac systems or to the `/etc/bash_completion.d/` directory on Linux systems. This is a directory of scripts that Bash will automatically load to provide shell completions.
+
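+On a typical Linux machine, that system-wide setup might look something like this, run from the top of the Git source tree you downloaded (you will likely need root privileges, and the exact path can vary by distribution):
+
+    $ sudo cp contrib/completion/git-completion.bash /etc/bash_completion.d/
+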
+If you’re using Windows with Git Bash, which is the default when installing Git on Windows with msysGit, auto-completion should be preconfigured.
+
+Press the Tab key when you’re writing a Git command, and it should return a set of suggestions for you to pick from:
+
+ $ git co<tab><tab>
+ commit config
+
+In this case, typing `git co` and then pressing the Tab key twice suggests commit and config. Adding `m<tab>` completes `git commit` automatically.
+
+This also works with options, which is probably more useful. For instance, if you’re running a `git log` command and can’t remember one of the options, you can start typing it and press Tab to see what matches:
+
+ $ git log --s<tab>
+ --shortstat --since= --src-prefix= --stat --summary
+
+That’s a pretty nice trick and may save you some time and documentation reading.
+
+### Git Aliases ###
+
+Git doesn’t infer your command if you type it in partially. If you don’t want to type the entire text of each of the Git commands, you can easily set up an alias for each command using `git config`. Here are a couple of examples you may want to set up:
+
+ $ git config --global alias.co checkout
+ $ git config --global alias.br branch
+ $ git config --global alias.ci commit
+ $ git config --global alias.st status
+
+This means that, for example, instead of typing `git commit`, you just need to type `git ci`. As you go on using Git, you’ll probably use other commands frequently as well; in this case, don’t hesitate to create new aliases.
+
+This technique can also be very useful in creating commands that you think should exist. For example, to correct the usability problem you encountered with unstaging a file, you can add your own unstage alias to Git:
+
+ $ git config --global alias.unstage 'reset HEAD --'
+
+This makes the following two commands equivalent:
+
+ $ git unstage fileA
+ $ git reset HEAD fileA
+
+This seems a bit clearer. It’s also common to add a `last` command, like this:
+
+ $ git config --global alias.last 'log -1 HEAD'
+
+This way, you can see the last commit easily:
+
+ $ git last
+ commit 66938dae3329c7aebe598c2246a8e6af90d04646
+ Author: Josh Goebel <[email protected]>
+ Date: Tue Aug 26 19:48:51 2008 +0800
+
+ test for current head
+
+ Signed-off-by: Scott Chacon <[email protected]>
+
+As you can tell, Git simply replaces the new command with whatever you alias it to. However, maybe you want to run an external command, rather than a Git subcommand. In that case, you start the command with a `!` character. This is useful if you write your own tools that work with a Git repository. We can demonstrate by aliasing `git visual` to run `gitk`:
+
+ $ git config --global alias.visual "!gitk"
+
+## Summary ##
+
+At this point, you can do all the basic local Git operations — creating or cloning a repository, making changes, staging and committing those changes, and viewing the history of all the changes the repository has been through. Next, we’ll cover Git’s killer feature: its branching model.
598 az/03-git-branching/01-chapter3.markdown
@@ -0,0 +1,598 @@
+# Git Branching #
+
+Nearly every VCS has some form of branching support. Branching means you diverge from the main line of development and continue to do work without messing with that main line. In many VCS tools, this is a somewhat expensive process, often requiring you to create a new copy of your source code directory, which can take a long time for large projects.
+
+Some people refer to the branching model in Git as its “killer feature”, and it certainly sets Git apart in the VCS community. Why is it so special? The way Git branches is incredibly lightweight, making branching operations nearly instantaneous and switching back and forth between branches generally just as fast. Unlike many other VCSs, Git encourages a workflow that branches and merges often, even multiple times in a day. Understanding and mastering this feature gives you a powerful and unique tool and can literally change the way that you develop.
+
+## What a Branch Is ##
+
+To really understand the way Git does branching, we need to take a step back and examine how Git stores its data. As you may remember from Chapter 1, Git doesn’t store data as a series of changesets or deltas, but instead as a series of snapshots.
+
+When you commit in Git, Git stores a commit object that contains a pointer to the snapshot of the content you staged, the author and message metadata, and zero or more pointers to the commit or commits that were the direct parents of this commit: zero parents for the first commit, one parent for a normal commit, and multiple parents for a commit that results from a merge of two or more branches.
+
+To visualize this, let’s assume that you have a directory containing three files, and you stage them all and commit. Staging the files checksums each one (the SHA-1 hash we mentioned in Chapter 1), stores that version of the file in the Git repository (Git refers to them as blobs), and adds that checksum to the staging area:
+
+ $ git add README test.rb LICENSE
+ $ git commit -m 'initial commit of my project'
+
+When you create the commit by running `git commit`, Git checksums each subdirectory (in this case, just the root project directory) and stores those tree objects in the Git repository. Git then creates a commit object that has the metadata and a pointer to the root project tree so it can re-create that snapshot when needed.
+
+Your Git repository now contains five objects: one blob for the contents of each of your three files, one tree that lists the contents of the directory and specifies which file names are stored as which blobs, and one commit with the pointer to that root tree and all the commit metadata. Conceptually, the data in your Git repository looks something like Figure 3-1.
+
+Insert 18333fig0301.png
+Figure 3-1. Single commit repository data.
+
+If you make some changes and commit again, the next commit stores a pointer to the commit that came immediately before it. After two more commits, your history might look something like Figure 3-2.
+
+Insert 18333fig0302.png
+Figure 3-2. Git object data for multiple commits.
+
+A branch in Git is simply a lightweight movable pointer to one of these commits. The default branch name in Git is master. As you initially make commits, you’re given a `master` branch that points to the last commit you made. Every time you commit, it moves forward automatically.
+
+Insert 18333fig0303.png
+Figure 3-3. Branch pointing into the commit data’s history.
+
+What happens if you create a new branch? Well, doing so creates a new pointer for you to move around. Let’s say you create a new branch called testing. You do this with the `git branch` command:
+
+ $ git branch testing
+
+This creates a new pointer at the same commit you’re currently on (see Figure 3-4).
+
+Insert 18333fig0304.png
+Figure 3-4. Multiple branches pointing into the commit’s data history.
+
+How does Git know what branch you’re currently on? It keeps a special pointer called HEAD. Note that this is a lot different than the concept of HEAD in other VCSs you may be used to, such as Subversion or CVS. In Git, this is a pointer to the local branch you’re currently on. In this case, you’re still on master. The git branch command only created a new branch — it didn’t switch to that branch (see Figure 3-5).
+
+Insert 18333fig0305.png
+Figure 3-5. HEAD file pointing to the branch you’re on.
+
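+Because HEAD is just a small file in your repository, you can peek at it directly to see which branch it currently references (assuming a POSIX shell):
+
+    $ cat .git/HEAD
+    ref: refs/heads/master
+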
+To switch to an existing branch, you run the `git checkout` command. Let’s switch to the new testing branch:
+
+ $ git checkout testing
+
+This moves HEAD to point to the testing branch (see Figure 3-6).
+
+Insert 18333fig0306.png
+Figure 3-6. HEAD points to another branch when you switch branches.
+
+What is the significance of that? Well, let’s do another commit:
+
+ $ vim test.rb
+ $ git commit -a -m 'made a change'
+
+Figure 3-7 illustrates the result.
+
+Insert 18333fig0307.png
+Figure 3-7. The branch that HEAD points to moves forward with each commit.
+
+This is interesting, because now your testing branch has moved forward, but your `master` branch still points to the commit you were on when you ran `git checkout` to switch branches. Let’s switch back to the `master` branch:
+
+ $ git checkout master
+
+Figure 3-8 shows the result.
+
+Insert 18333fig0308.png
+Figure 3-8. HEAD moves to another branch on a checkout.
+
+That command did two things. It moved the HEAD pointer back to point to the `master` branch, and it reverted the files in your working directory back to the snapshot that `master` points to. This also means the changes you make from this point forward will diverge from an older version of the project. It essentially rewinds the work you’ve done in your testing branch temporarily so you can go in a different direction.
+
+Let’s make a few changes and commit again:
+
+ $ vim test.rb
+ $ git commit -a -m 'made other changes'
+
+Now your project history has diverged (see Figure 3-9). You created and switched to a branch, did some work on it, and then switched back to your main branch and did other work. Both of those changes are isolated in separate branches: you can switch back and forth between the branches and merge them together when you’re ready. And you did all that with simple `branch` and `checkout` commands.
+
+Insert 18333fig0309.png
+Figure 3-9. The branch histories have diverged.
+
+Because a branch in Git is in actuality a simple file that contains the 40 character SHA-1 checksum of the commit it points to, branches are cheap to create and destroy. Creating a new branch is as quick and simple as writing 41 bytes to a file (40 characters and a newline).
+
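+You can see this for yourself by printing the file behind a branch (assuming a POSIX shell); the output is nothing more than the 40-character SHA-1 of the commit the branch points to:
+
+    $ cat .git/refs/heads/testing
+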
+This is in sharp contrast to the way most VCS tools branch, which involves copying all of the project’s files into a second directory. This can take several seconds or even minutes, depending on the size of the project, whereas in Git the process is always instantaneous. Also, because we’re recording the parents when we commit, finding a proper merge base for merging is automatically done for us and is generally very easy to do. These features help encourage developers to create and use branches often.
+
+Let’s see why you should do so.
+
+## Basic Branching and Merging ##
+
+Let’s go through a simple example of branching and merging with a workflow that you might use in the real world. You’ll follow these steps:
+
+1. Do work on a web site.
+2. Create a branch for a new story you’re working on.
+3. Do some work in that branch.
+
+At this stage, you’ll receive a call that another issue is critical and you need a hotfix. You’ll do the following:
+
+1. Revert back to your production branch.
+2. Create a branch to add the hotfix.
+3. After it’s tested, merge the hotfix branch, and push to production.
+4. Switch back to your original story and continue working.
+
+### Basic Branching ###
+
+First, let’s say you’re working on your project and have a couple of commits already (see Figure 3-10).
+
+Insert 18333fig0310.png
+Figure 3-10. A short and simple commit history.
+
+You’ve decided that you’re going to work on issue #53 in whatever issue-tracking system your company uses. To be clear, Git isn’t tied into any particular issue-tracking system; but because issue #53 is a focused topic that you want to work on, you’ll create a new branch in which to work. To create a branch and switch to it at the same time, you can run the `git checkout` command with the `-b` switch:
+
+ $ git checkout -b iss53
+ Switched to a new branch "iss53"
+
+This is shorthand for:
+
+ $ git branch iss53
+ $ git checkout iss53
+
+Figure 3-11 illustrates the result.
+
+Insert 18333fig0311.png
+Figure 3-11. Creating a new branch pointer.
+
+You work on your web site and do some commits. Doing so moves the `iss53` branch forward, because you have it checked out (that is, your HEAD is pointing to it; see Figure 3-12):
+
+ $ vim index.html
+ $ git commit -a -m 'added a new footer [issue 53]'
+
+Insert 18333fig0312.png
+Figure 3-12. The iss53 branch has moved forward with your work.
+
+Now you get the call that there is an issue with the web site, and you need to fix it immediately. With Git, you don’t have to deploy your fix along with the `iss53` changes you’ve made, and you don’t have to put a lot of effort into reverting those changes before you can work on applying your fix to what is in production. All you have to do is switch back to your master branch.
+
+However, before you do that, note that if your working directory or staging area has uncommitted changes that conflict with the branch you’re checking out, Git won’t let you switch branches. It’s best to have a clean working state when you switch branches. There are ways to get around this (namely, stashing and commit amending) that we’ll cover later. For now, you’ve committed all your changes, so you can switch back to your master branch:
+
+ $ git checkout master
+ Switched to branch "master"
+
+At this point, your project working directory is exactly the way it was before you started working on issue #53, and you can concentrate on your hotfix. This is an important point to remember: Git resets your working directory to look like the snapshot of the commit that the branch you check out points to. It adds, removes, and modifies files automatically to make sure your working copy is what the branch looked like on your last commit to it.
+
+Next, you have a hotfix to make. Let’s create a hotfix branch on which to work until it’s completed (see Figure 3-13):
+
+ $ git checkout -b 'hotfix'
+ Switched to a new branch "hotfix"
+ $ vim index.html
+ $ git commit -a -m 'fixed the broken email address'
+ [hotfix]: created 3a0874c: "fixed the broken email address"
+ 1 files changed, 0 insertions(+), 1 deletions(-)
+
+Insert 18333fig0313.png
+Figure 3-13. hotfix branch based back at your master branch point.
+
+You can run your tests, make sure the hotfix is what you want, and merge it back into your master branch to deploy to production. You do this with the `git merge` command:
+
+ $ git checkout master
+ $ git merge hotfix
+ Updating f42c576..3a0874c
+ Fast forward
+ README | 1 -
+ 1 files changed, 0 insertions(+), 1 deletions(-)
+
+You’ll notice the phrase "Fast forward" in that merge. Because the commit pointed to by the branch you merged in was directly upstream of the commit you’re on, Git moves the pointer forward. To phrase that another way, when you try to merge one commit with a commit that can be reached by following the first commit’s history, Git simplifies things by moving the pointer forward because there is no divergent work to merge together — this is called a "fast forward".
+
+Your change is now in the snapshot of the commit pointed to by the `master` branch, and you can deploy your change (see Figure 3-14).
+
+Insert 18333fig0314.png
+Figure 3-14. Your master branch points to the same place as your hotfix branch after the merge.
+
+After your super-important fix is deployed, you’re ready to switch back to the work you were doing before you were interrupted. However, first you’ll delete the `hotfix` branch, because you no longer need it — the `master` branch points at the same place. You can delete it with the `-d` option to `git branch`:
+
+ $ git branch -d hotfix
+ Deleted branch hotfix (3a0874c).
+
+Now you can switch back to your work-in-progress branch on issue #53 and continue working on it (see Figure 3-15):
+
+ $ git checkout iss53
+ Switched to branch "iss53"
+ $ vim index.html
+ $ git commit -a -m 'finished the new footer [issue 53]'
+ [iss53]: created ad82d7a: "finished the new footer [issue 53]"
+ 1 files changed, 1 insertions(+), 0 deletions(-)
+
+Insert 18333fig0315.png
+Figure 3-15. Your iss53 branch can move forward independently.
+
+It’s worth noting here that the work you did in your `hotfix` branch is not contained in the files in your `iss53` branch. If you need to pull it in, you can merge your `master` branch into your `iss53` branch by running `git merge master`, or you can wait to integrate those changes until you decide to pull the `iss53` branch back into `master` later.
+
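+If you did want to pull the hotfix into your topic branch right away, that merge would look something like this:
+
+    $ git checkout iss53
+    $ git merge master
+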
+### Basic Merging ###
+
+Suppose you’ve decided that your issue #53 work is complete and ready to be merged into your `master` branch. In order to do that, you’ll merge in your `iss53` branch, much like you merged in your `hotfix` branch earlier. All you have to do is check out the branch you wish to merge into and then run the `git merge` command:
+
+ $ git checkout master
+ $ git merge iss53
+ Merge made by recursive.
+ README | 1 +
+ 1 files changed, 1 insertions(+), 0 deletions(-)
+
+This looks a bit different than the `hotfix` merge you did earlier. In this case, your development history has diverged from some older point. Because the commit on the branch you’re on isn’t a direct ancestor of the branch you’re merging in, Git has to do some work. In this case, Git does a simple three-way merge, using the two snapshots pointed to by the branch tips and the common ancestor of the two. Figure 3-16 highlights the three snapshots that Git uses to do its merge in this case.
+
+Insert 18333fig0316.png
+Figure 3-16. Git automatically identifies the best common-ancestor merge base for branch merging.
+
+Instead of just moving the branch pointer forward, Git creates a new snapshot that results from this three-way merge and automatically creates a new commit that points to it (see Figure 3-17). This is referred to as a merge commit and is special in that it has more than one parent.
+
+It’s worth pointing out that Git determines the best common ancestor to use for its merge base; this is different than CVS or Subversion (before version 1.5), where the developer doing the merge has to figure out the best merge base for themselves. This makes merging a heck of a lot easier in Git than in these other systems.
+
+Insert 18333fig0317.png
+Figure 3-17. Git automatically creates a new commit object that contains the merged work.
+
+Now that your work is merged in, you have no further need for the `iss53` branch. You can delete it and then manually close the ticket in your ticket-tracking system:
+
+ $ git branch -d iss53
+
+### Basic Merge Conflicts ###
+
+Occasionally, this process doesn’t go smoothly. If you changed the same part of the same file differently in the two branches you’re merging together, Git won’t be able to merge them cleanly. If your fix for issue #53 modified the same part of a file as the `hotfix`, you’ll get a merge conflict that looks something like this:
+
+ $ git merge iss53
+ Auto-merging index.html
+ CONFLICT (content): Merge conflict in index.html
+ Automatic merge failed; fix conflicts and then commit the result.
+
+Git hasn’t automatically created a new merge commit. It has paused the process while you resolve the conflict. If you want to see which files are unmerged at any point after a merge conflict, you can run `git status`:
+
+ [master*]$ git status
+ index.html: needs merge
+ # On branch master
+ # Changed but not updated:
+ # (use "git add <file>..." to update what will be committed)
+ # (use "git checkout -- <file>..." to discard changes in working directory)
+ #
+ # unmerged: index.html
+ #
+
+Anything that has merge conflicts and hasn’t been resolved is listed as unmerged. Git adds standard conflict-resolution markers to the files that have conflicts, so you can open them manually and resolve those conflicts. Your file contains a section that looks something like this:
+
+ <<<<<<< HEAD:index.html
+ <div id="footer">contact : [email protected]</div>
+ =======
+ <div id="footer">
+ please contact us at [email protected]
+ </div>
+ >>>>>>> iss53:index.html
+
+This means the version in HEAD (your master branch, because that was what you had checked out when you ran your merge command) is the top part of that block (everything above the `=======`), while the version in your `iss53` branch looks like everything in the bottom part. In order to resolve the conflict, you have to either choose one side or the other or merge the contents yourself. For instance, you might resolve this conflict by replacing the entire block with this:
+
+ <div id="footer">
+    please contact us at [email protected]
+ </div>
+
+This resolution has a little of each section, and I’ve fully removed the `<<<<<<<`, `=======`, and `>>>>>>>` lines. After you’ve resolved each of these sections in each conflicted file, run `git add` on each file to mark it as resolved. Staging the file marks it as resolved in Git.
+If you want to use a graphical tool to resolve these issues, you can run `git mergetool`, which fires up an appropriate visual merge tool and walks you through the conflicts:
+
+ $ git mergetool
+ merge tool candidates: kdiff3 tkdiff xxdiff meld gvimdiff opendiff emerge vimdiff
+ Merging the files: index.html
+
+ Normal merge conflict for 'index.html':
+ {local}: modified
+ {remote}: modified
+ Hit return to start merge resolution tool (opendiff):
+
+If you want to use a merge tool other than the default (Git chose `opendiff` for me in this case because I ran the command on a Mac), you can see all the supported tools listed at the top after “merge tool candidates”. Type the name of the tool you’d rather use. In Chapter 7, we’ll discuss how you can change this default value for your environment.
+
+After you exit the merge tool, Git asks you if the merge was successful. If you tell the script that it was, it stages the file to mark it as resolved for you.
+
+You can run `git status` again to verify that all conflicts have been resolved:
+
+ $ git status
+ # On branch master
+ # Changes to be committed:
+ # (use "git reset HEAD <file>..." to unstage)
+ #
+ # modified: index.html
+ #
+
+If you’re happy with that, and you verify that everything that had conflicts has been staged, you can type `git commit` to finalize the merge commit. The commit message by default looks something like this:
+
+ Merge branch 'iss53'
+
+ Conflicts:
+ index.html
+ #
+ # It looks like you may be committing a MERGE.
+ # If this is not correct, please remove the file
+ # .git/MERGE_HEAD
+ # and try again.
+ #
+
+You can modify that message with details about how you resolved the merge if you think it would be helpful to others looking at this merge in the future — why you did what you did, if it’s not obvious.
+
+## Branch Management ##
+
+Now that you’ve created, merged, and deleted some branches, let’s look at some branch-management tools that will come in handy when you begin using branches all the time.
+
+The `git branch` command does more than just create and delete branches. If you run it with no arguments, you get a simple listing of your current branches:
+
+ $ git branch
+ iss53
+ * master
+ testing
+
+Notice the `*` character that prefixes the `master` branch: it indicates the branch that you currently have checked out. This means that if you commit at this point, the `master` branch will be moved forward with your new work. To see the last commit on each branch, you can run `git branch -v`:
+
+ $ git branch -v
+ iss53 93b412c fix javascript issue
+ * master 7a98805 Merge branch 'iss53'
+ testing 782fd34 add scott to the author list in the readmes
+
+Another useful option to figure out what state your branches are in is to filter this list to branches that you have or have not yet merged into the branch you’re currently on. The useful `--merged` and `--no-merged` options have been available in Git since version 1.5.6 for this purpose. To see which branches are already merged into the branch you’re on, you can run `git branch --merged`:
+
+ $ git branch --merged
+ iss53
+ * master
+
+Because you already merged in `iss53` earlier, you see it in your list. Branches on this list without the `*` in front of them are generally fine to delete with `git branch -d`; you’ve already incorporated their work into another branch, so you’re not going to lose anything.
+
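+For example, now that `iss53` appears in the `--merged` list, you could remove its pointer without losing any work; depending on your Git version, the output will look something like this:
+
+	$ git branch -d iss53
+	Deleted branch iss53 (was 93b412c).
+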
+To see all the branches that contain work you haven’t yet merged in, you can run `git branch --no-merged`:
+
+ $ git branch --no-merged
+ testing
+
+This shows your other branch. Because it contains work that isn’t merged in yet, trying to delete it with `git branch -d` will fail:
+
+ $ git branch -d testing
+ error: The branch 'testing' is not an ancestor of your current HEAD.
+ If you are sure you want to delete it, run 'git branch -D testing'.
+
+If you really do want to delete the branch and lose that work, you can force it with `-D`, as the helpful message points out.
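+
+If you do decide the work is disposable, the forced deletion looks like this; again, the exact output depends on your Git version:
+
+	$ git branch -D testing
+	Deleted branch testing (was 782fd34).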
+
+## Branching Workflows ##
+
+Now that you have the basics of branching and merging down, what can or should you do with them? In this section, we’ll cover some common workflows that this lightweight branching makes possible, so you can decide if you would like to incorporate it into your own development cycle.
+
+### Long-Running Branches ###
+
+Because Git uses a simple three-way merge, merging from one branch into another multiple times over a long period is generally easy to do. This means you can have several branches that are always open and that you use for different stages of your development cycle; you can merge regularly from some of them into others.
+
+Many Git developers have a workflow that embraces this approach, such as having only code that is entirely stable in their `master` branch — possibly only code that has been or will be released. They have another parallel branch named develop or next that they work from or use to test stability — it isn’t necessarily always stable, but whenever it gets to a stable state, it can be merged into `master`. It’s used to pull in topic branches (short-lived branches, like your earlier `iss53` branch) when they’re ready, to make sure they pass all the tests and don’t introduce bugs.
+
+In reality, we’re talking about pointers moving up the line of commits you’re making. The stable branches are farther down the line in your commit history, and the bleeding-edge branches are farther up the history (see Figure 3-18).
+
+Insert 18333fig0318.png
+Figure 3-18. More stable branches are generally farther down the commit history.
+
+It’s generally easier to think about them as work silos, where sets of commits graduate to a more stable silo when they’re fully tested (see Figure 3-19).
+
+Insert 18333fig0319.png
+Figure 3-19. It may be helpful to think of your branches as silos.
+
+You can keep doing this for several levels of stability. Some larger projects also have a `proposed` or `pu` (proposed updates) branch that has integrated branches that may not be ready to go into the `next` or `master` branch. The idea is that your branches are at various levels of stability; when they reach a more stable level, they’re merged into the branch above them.
+Again, having multiple long-running branches isn’t necessary, but it’s often helpful, especially when you’re dealing with very large or complex projects.
+
+### Topic Branches ###
+
+Topic branches, however, are useful in projects of any size. A topic branch is a short-lived branch that you create and use for a single particular feature or related work. This is something you’ve likely never done with a VCS before because it’s generally too expensive to create and merge branches. But in Git it’s common to create, work on, merge, and delete branches several times a day.
+
+You saw this in the last section with the `iss53` and `hotfix` branches you created. You did a few commits on them and deleted them directly after merging them into your main branch. This technique allows you to context-switch quickly and completely — because your work is separated into silos where all the changes in that branch have to do with that topic, it’s easier to see what has happened during code review and such. You can keep the changes there for minutes, days, or months, and merge them in when they’re ready, regardless of the order in which they were created or worked on.
+
+Consider an example of doing some work (on `master`), branching off for an issue (`iss91`), working on it for a bit, branching off the second branch to try another way of handling the same thing (`iss91v2`), going back to your master branch and working there for a while, and then branching off there to do some work that you’re not sure is a good idea (`dumbidea` branch). Your commit history will look something like Figure 3-20.
+
+Insert 18333fig0320.png
+Figure 3-20. Your commit history with multiple topic branches.
+
+Now, let’s say you decide you like the second solution to your issue best (`iss91v2`); and you showed the `dumbidea` branch to your coworkers, and it turns out to be genius. You can throw away the original `iss91` branch (losing commits C5 and C6) and merge in the other two. Your history then looks like Figure 3-21.
+
+Insert 18333fig0321.png
+Figure 3-21. Your history after merging in dumbidea and iss91v2.
+
+It’s important to remember when you’re doing all this that these branches are completely local. When you’re branching and merging, everything is being done only in your Git repository — no server communication is happening.
+
+## Remote Branches ##
+
+Remote branches are references to the state of branches on your remote repositories. They’re local branches that you can’t move; they’re moved automatically whenever you do any network communication. Remote branches act as bookmarks to remind you where the branches on your remote repositories were the last time you connected to them.
+
+They take the form `(remote)/(branch)`. For instance, if you wanted to see what the `master` branch on your `origin` remote looked like as of the last time you communicated with it, you would check the `origin/master` branch. If you were working on an issue with a partner and they pushed up an `iss53` branch, you might have your own local `iss53` branch; but the branch on the server would point to the commit at `origin/iss53`.
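+
+If you just want a quick list of the remote branches Git currently knows about, you can ask for them directly; the output here is illustrative and depends on what you’ve fetched:
+
+	$ git branch -r
+	  origin/master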
+
+This may be a bit confusing, so let’s look at an example. Let’s say you have a Git server on your network at `git.ourcompany.com`. If you clone from this, Git automatically names it `origin` for you, pulls down all its data, creates a pointer to where its `master` branch is, and names it `origin/master` locally; and you can’t move it. Git also gives you your own `master` branch starting at the same place as origin’s `master` branch, so you have something to work from (see Figure 3-22).
+
+Insert 18333fig0322.png
+Figure 3-22. A Git clone gives you your own master branch and origin/master pointing to origin’s master branch.
+
+If you do some work on your local master branch, and, in the meantime, someone else pushes to `git.ourcompany.com` and updates its master branch, then your histories move forward differently. Also, as long as you stay out of contact with your origin server, your `origin/master` pointer doesn’t move (see Figure 3-23).
+
+Insert 18333fig0323.png
+Figure 3-23. Working locally and having someone push to your remote server makes each history move forward differently.
+
+To synchronize your work, you run a `git fetch origin` command. This command looks up which server origin is (in this case, it’s `git.ourcompany.com`), fetches any data from it that you don’t yet have, and updates your local database, moving your `origin/master` pointer to its new, more up-to-date position (see Figure 3-24).
+
+Insert 18333fig0324.png
+Figure 3-24. The git fetch command updates your remote references.
+
+To demonstrate having multiple remote servers and what remote branches for those remote projects look like, let’s assume you have another internal Git server that is used only for development by one of your sprint teams. This server is at `git.team1.ourcompany.com`. You can add it as a new remote reference to the project you’re currently working on by running the `git remote add` command as we covered in Chapter 2. Name this remote `teamone`, which will be your shortname for that whole URL (see Figure 3-25).
+
+Insert 18333fig0325.png
+Figure 3-25. Adding another server as a remote.
+
+Now, you can run `git fetch teamone` to fetch everything the remote `teamone` server has that you don’t have yet. Because that server is a subset of the data your `origin` server has right now, Git fetches no data but sets a remote branch called `teamone/master` to point to the commit that `teamone` has as its `master` branch (see Figure 3-26).
+
+Insert 18333fig0326.png
+Figure 3-26. You get a reference to teamone’s master branch position locally.
+
+### Pushing ###
+
+When you want to share a branch with the world, you need to push it up to a remote that you have write access to. Your local branches aren’t automatically synchronized to the remotes you write to — you have to explicitly push the branches you want to share. That way, you can use private branches for work you don’t want to share, and push up only the topic branches you want to collaborate on.
+
+If you have a branch named `serverfix` that you want to work on with others, you can push it up the same way you pushed your first branch. Run `git push (remote) (branch)`:
+
+ $ git push origin serverfix
+ Counting objects: 20, done.
+ Compressing objects: 100% (14/14), done.
+ Writing objects: 100% (15/15), 1.74 KiB, done.
+ Total 15 (delta 5), reused 0 (delta 0)
+ To [email protected]:schacon/simplegit.git
+ * [new branch] serverfix -> serverfix
+
+This is a bit of a shortcut. Git automatically expands the `serverfix` branchname out to `refs/heads/serverfix:refs/heads/serverfix`, which means, “Take my serverfix local branch and push it to update the remote’s serverfix branch.” We’ll go over the `refs/heads/` part in detail in Chapter 9, but you can generally leave it off. You can also do `git push origin serverfix:serverfix`, which does the same thing — it says, “Take my serverfix and make it the remote’s serverfix.” You can use this format to push a local branch into a remote branch that is named differently. If you didn’t want it to be called `serverfix` on the remote, you could instead run `git push origin serverfix:awesomebranch` to push your local `serverfix` branch to the `awesomebranch` branch on the remote project.
+
+The next time one of your collaborators fetches from the server, they will get a reference to where the server’s version of `serverfix` is under the remote branch `origin/serverfix`:
+
+ $ git fetch origin
+ remote: Counting objects: 20, done.
+ remote: Compressing objects: 100% (14/14), done.
+ remote: Total 15 (delta 5), reused 0 (delta 0)
+ Unpacking objects: 100% (15/15), done.
+ From [email protected]:schacon/simplegit
+ * [new branch] serverfix -> origin/serverfix
+
+It’s important to note that when you do a fetch that brings down new remote branches, you don’t automatically have local, editable copies of them. In other words, in this case, you don’t have a new `serverfix` branch — you only have an `origin/serverfix` pointer that you can’t modify.
+
+To merge this work into your current working branch, you can run `git merge origin/serverfix`. If you want your own `serverfix` branch that you can work on, you can base it off your remote branch:
+
+ $ git checkout -b serverfix origin/serverfix
+ Branch serverfix set up to track remote branch refs/remotes/origin/serverfix.
+ Switched to a new branch "serverfix"
+
+This gives you a local branch that you can work on that starts where `origin/serverfix` is.
+
+### Tracking Branches ###
+
+Checking out a local branch from a remote branch automatically creates what is called a _tracking branch_. Tracking branches are local branches that have a direct relationship to a remote branch. If you’re on a tracking branch and type `git push`, Git automatically knows which server and branch to push to. Also, running `git pull` while on one of these branches fetches all the remote references and then automatically merges in the corresponding remote branch.
+
+When you clone a repository, it generally automatically creates a `master` branch that tracks `origin/master`. That’s why `git push` and `git pull` work out of the box with no other arguments. However, you can set up other tracking branches if you wish — ones that don’t track branches on `origin` and don’t track the `master` branch. The simple case is the example you just saw, running `git checkout -b [branch] [remotename]/[branch]`. If you have Git version 1.6.2 or later, you can also use the `--track` shorthand:
+
+ $ git checkout --track origin/serverfix
+ Branch serverfix set up to track remote branch refs/remotes/origin/serverfix.
+ Switched to a new branch "serverfix"
+
+To set up a local branch with a different name than the remote branch, you can easily use the first version with a different local branch name:
+
+ $ git checkout -b sf origin/serverfix
+ Branch sf set up to track remote branch refs/remotes/origin/serverfix.
+ Switched to a new branch "sf"
+
+Now, your local branch `sf` will automatically push to and pull from `origin/serverfix`.
+
+### Deleting Remote Branches ###
+
+Suppose you’re done with a remote branch — say, you and your collaborators are finished with a feature and have merged it into your remote’s `master` branch (or whatever branch your stable codeline is in). You can delete a remote branch using the rather obtuse syntax `git push [remotename] :[branch]`. If you want to delete your `serverfix` branch from the server, you run the following:
+
+ $ git push origin :serverfix
+ To [email protected]:schacon/simplegit.git
+ - [deleted] serverfix
+
+Boom. No more branch on your server. You may want to dog-ear this page, because you’ll need that command, and you’ll likely forget the syntax. A way to remember this command is by recalling the `git push [remotename] [localbranch]:[remotebranch]` syntax that we went over a bit earlier. If you leave off the `[localbranch]` portion, then you’re basically saying, “Take nothing on my side and make it be `[remotebranch]`.”
+
+## Rebasing ##
+
+In Git, there are two main ways to integrate changes from one branch into another: the `merge` and the `rebase`. In this section you’ll learn what rebasing is, how to do it, why it’s a pretty amazing tool, and in what cases you won’t want to use it.
+
+### The Basic Rebase ###
+
+If you go back to an earlier example from the Merge section (see Figure 3-27), you can see that you diverged your work and made commits on two different branches.
+
+Insert 18333fig0327.png
+Figure 3-27. Your initial diverged commit history.
+
+The easiest way to integrate the branches, as we’ve already covered, is the `merge` command. It performs a three-way merge between the two latest branch snapshots (C3 and C4) and the most recent common ancestor of the two (C2), creating a new snapshot (and commit), as shown in Figure 3-28.
+
+Insert 18333fig0328.png
+Figure 3-28. Merging a branch to integrate the diverged work history.
+
+However, there is another way: you can take the patch of the change that was introduced in C3 and reapply it on top of C4. In Git, this is called _rebasing_. With the `rebase` command, you can take all the changes that were committed on one branch and replay them on another one.
+
+In this example, you’d run the following:
+
+ $ git checkout experiment
+ $ git rebase master
+ First, rewinding head to replay your work on top of it...
+ Applying: added staged command
+
+It works by going to the common ancestor of the two branches (the one you’re on and the one you’re rebasing onto), getting the diff introduced by each commit of the branch you’re on, saving those diffs to temporary files, resetting the current branch to the same commit as the branch you are rebasing onto, and finally applying each change in turn. Figure 3-29 illustrates this process.
+
+Insert 18333fig0329.png
+Figure 3-29. Rebasing the change introduced in C3 onto C4.
+
+At this point, you can go back to the master branch and do a fast-forward merge (see Figure 3-30).
+
+Insert 18333fig0330.png
+Figure 3-30. Fast-forwarding the master branch.
+
+Now, the snapshot pointed to by C3' is exactly the same as the one that was pointed to by C5 in the merge example. There is no difference in the end product of the integration, but rebasing makes for a cleaner history. If you examine the log of a rebased branch, it looks like a linear history: it appears that all the work happened in series, even when it originally happened in parallel.
+
+Often, you’ll do this to make sure your commits apply cleanly on a remote branch — perhaps in a project to which you’re trying to contribute but that you don’t maintain. In this case, you’d do your work in a branch and then rebase your work onto `origin/master` when you were ready to submit your patches to the main project. That way, the maintainer doesn’t have to do any integration work — just a fast-forward or a clean apply.
+
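+In practice, that usually boils down to fetching the latest state of the remote and rebasing your topic branch onto it; this is a generic sketch rather than part of the running example:
+
+	$ git fetch origin
+	$ git rebase origin/master
+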
+Note that the snapshot pointed to by the final commit you end up with, whether it’s the last of the rebased commits for a rebase or the final merge commit after a merge, is the same snapshot — it’s only the history that is different. Rebasing replays changes from one line of work onto another in the order they were introduced, whereas merging takes the endpoints and merges them together.
+
+### More Interesting Rebases ###
+
+You can also have your rebase replay on something other than the rebase branch. Take a history like Figure 3-31, for example. You branched a topic branch (`server`) to add some server-side functionality to your project, and made a commit. Then, you branched off that to make the client-side changes (`client`) and committed a few times. Finally, you went back to your server branch and did a few more commits.
+
+Insert 18333fig0331.png
+Figure 3-31. A history with a topic branch off another topic branch.
+
+Suppose you decide that you want to merge your client-side changes into your mainline for a release, but you want to hold off on the server-side changes until it’s tested further. You can take the changes on client that aren’t on server (C8 and C9) and replay them on your master branch by using the `--onto` option of `git rebase`:
+
+ $ git rebase --onto master server client
+
+This basically says, “Check out the client branch, figure out the patches from the common ancestor of the `client` and `server` branches, and then replay them onto `master`.” It’s a bit complex; but the result, shown in Figure 3-32, is pretty cool.
+
+Insert 18333fig0332.png
+Figure 3-32. Rebasing a topic branch off another topic branch.
+
+Now you can fast-forward your master branch (see Figure 3-33):
+
+ $ git checkout master
+ $ git merge client
+
+Insert 18333fig0333.png
+Figure 3-33. Fast-forwarding your master branch to include the client branch changes.
+
+Let’s say you decide to pull in your server branch as well. You can rebase the server branch onto the master branch without having to check it out first by running `git rebase [basebranch] [topicbranch]` — which checks out the topic branch (in this case, `server`) for you and replays it onto the base branch (`master`):
+
+ $ git rebase master server
+
+This replays your `server` work on top of your `master` work, as shown in Figure 3-34.
+
+Insert 18333fig0334.png
+Figure 3-34. Rebasing your server branch on top of your master branch.
+
+Then, you can fast-forward the base branch (`master`):
+
+ $ git checkout master
+ $ git merge server
+
+You can remove the `client` and `server` branches because all the work is integrated and you don’t need them anymore, leaving your history for this entire process looking like Figure 3-35:
+
+ $ git branch -d client
+ $ git branch -d server
+
+Insert 18333fig0335.png
+Figure 3-35. Final commit history.
+
+### The Perils of Rebasing ###
+
+Ahh, but the bliss of rebasing isn’t without its drawbacks, which can be summed up in a single line:
+
+**Do not rebase commits that you have pushed to a public repository.**
+
+If you follow that guideline, you’ll be fine. If you don’t, people will hate you, and you’ll be scorned by friends and family.
+
+When you rebase stuff, you’re abandoning existing commits and creating new ones that are similar but different. If you push commits somewhere and others pull them down and base work on them, and then you rewrite those commits with `git rebase` and push them up again, your collaborators will have to re-merge their work and things will get messy when you try to pull their work back into yours.
+
+Let’s look at an example of how rebasing work that you’ve made public can cause problems. Suppose you clone from a central server and then do some work off that. Your commit history looks like Figure 3-36.
+
+Insert 18333fig0336.png
+Figure 3-36. Clone a repository, and base some work on it.
+
+Now, someone else does more work that includes a merge, and pushes that work to the central server. You fetch them and merge the new remote branch into your work, making your history look something like Figure 3-37.
+
+Insert 18333fig0337.png
+Figure 3-37. Fetch more commits, and merge them into your work.
+
+Next, the person who pushed the merged work decides to go back and rebase their work instead; they do a `git push --force` to overwrite the history on the server. You then fetch from that server, bringing down the new commits.
+
+Insert 18333fig0338.png
+Figure 3-38. Someone pushes rebased commits, abandoning commits you’ve based your work on.
+
+At this point, you have to merge this work in again, even though you’ve already done so. Rebasing changes the SHA-1 hashes of these commits so to Git they look like new commits, when in fact you already have the C4 work in your history (see Figure 3-39).
+
+Insert 18333fig0339.png
+Figure 3-39. You merge in the same work again into a new merge commit.
+
+You have to merge that work in at some point so you can keep up with the other developer in the future. After you do that, your commit history will contain both the C4 and C4' commits, which have different SHA-1 hashes but introduce the same work and have the same commit message. If you run a `git log` when your history looks like this, you’ll see two commits that have the same author date and message, which will be confusing. Furthermore, if you push this history back up to the server, you’ll reintroduce all those rebased commits to the central server, which can further confuse people.
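+
+If you want to see this situation at a glance, asking `git log` for a graph of the history makes the duplicated commits easier to spot:
+
+	$ git log --graph --oneline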
+
+If you treat rebasing as a way to clean up and work with commits before you push them, and if you only rebase commits that have never been available publicly, then you’ll be fine. If you rebase commits that have already been pushed publicly, and people may have based work on those commits, then you may be in for some frustrating trouble.
+
+## Summary ##
+
+We’ve covered basic branching and merging in Git. You should feel comfortable creating and switching to new branches, switching between branches and merging local branches together. You should also be able to share your branches by pushing them to a shared server, working with others on shared branches and rebasing your branches before they are shared.
861 az/04-git-server/01-chapter4.markdown
@@ -0,0 +1,861 @@
+# Git on the Server #
+
+At this point, you should be able to do most of the day-to-day tasks for which you’ll be using Git. However, in order to do any collaboration in Git, you’ll need to have a remote Git repository. Although you can technically push changes to and pull changes from individuals’ repositories, doing so is discouraged because you can fairly easily confuse what they’re working on if you’re not careful. Furthermore, you want your collaborators to be able to access the repository even if your computer is offline — having a more reliable common repository is often useful. Therefore, the preferred method for collaborating with someone is to set up an intermediate repository that you both have access to, and push to and pull from that. We’ll refer to this repository as a "Git server"; but you’ll notice that it generally takes a tiny amount of resources to host a Git repository, so you’ll rarely need to use an entire server for it.
+
+Running a Git server is simple. First, you choose which protocols you want your server to communicate with. The first section of this chapter will cover the available protocols and the pros and cons of each. The next sections will explain some typical setups using those protocols and how to get your server running with them. Last, we’ll go over a few hosted options, if you don’t mind hosting your code on someone else’s server and don’t want to go through the hassle of setting up and maintaining your own server.
+
+If you have no interest in running your own server, you can skip to the last section of the chapter to see some options for setting up a hosted account and then move on to the next chapter, where we discuss the various ins and outs of working in a distributed source control environment.
+
+A remote repository is generally a _bare repository_ — a Git repository that has no working directory. Because the repository is only used as a collaboration point, there is no reason to have a snapshot checked out on disk; it’s just the Git data. In the simplest terms, a bare repository is the contents of your project’s `.git` directory and nothing else.
+
+## The Protocols ##
+
+Git can use four major protocols to transfer data: Local, Secure Shell (SSH), Git, and HTTP. Here we’ll discuss what they are and in what basic circumstances you would want (or not want) to use them.
+
+It’s important to note that with the exception of the HTTP protocols, all of these require Git to be installed and working on the server.
+
+### Local Protocol ###
+
+The most basic is the _Local protocol_, in which the remote repository is in another directory on disk. This is often used if everyone on your team has access to a shared filesystem such as an NFS mount, or in the less likely case that everyone logs in to the same computer. The latter wouldn’t be ideal, because all your code repository instances would reside on the same computer, making a catastrophic loss much more likely.
+
+If you have a shared mounted filesystem, then you can clone, push to, and pull from a local file-based repository. To clone a repository like this or to add one as a remote to an existing project, use the path to the repository as the URL. For example, to clone a local repository, you can run something like this:
+
+ $ git clone /opt/git/project.git
+
+Or you can do this:
+
+ $ git clone file:///opt/git/project.git
+
+Git operates slightly differently if you explicitly specify `file://` at the beginning of the URL. If you just specify the path, Git tries to use hardlinks or directly copy the files it needs. If you specify `file://`, Git fires up the processes that it normally uses to transfer data over a network, which is generally a much less efficient way to transfer the data. The main reason to specify the `file://` prefix is if you want a clean copy of the repository with extraneous references or objects left out — generally after an import from another version-control system or something similar (see Chapter 9 for maintenance tasks). We’ll use the normal path here because doing so is almost always faster.
+
+To add a local repository to an existing Git project, you can run something like this:
+
+ $ git remote add local_proj /opt/git/project.git
+
+Then, you can push to and pull from that remote as though you were doing so over a network.
+
+#### The Pros ####
+
+The pros of file-based repositories are that they’re simple and they use existing file permissions and network access. If you already have a shared filesystem to which your whole team has access, setting up a repository is very easy. You stick the bare repository copy somewhere everyone has shared access to and set the read/write permissions as you would for any other shared directory. We’ll discuss how to export a bare repository copy for this purpose in the next section, “Getting Git on a Server.”
+
+This is also a nice option for quickly grabbing work from someone else’s working repository. If you and a co-worker are working on the same project and they want you to check something out, running a command like `git pull /home/john/project` is often easier than them pushing to a remote server and you pulling down.
+
+#### The Cons ####
+
+The cons of this method are that shared access is generally more difficult to set up and reach from multiple locations than basic network access. If you want to push from your laptop when you’re at home, you have to mount the remote disk, which can be difficult and slow compared to network-based access.
+
+It’s also important to mention that this isn’t necessarily the fastest option if you’re using a shared mount of some kind. A local repository is fast only if you have fast access to the data. A repository on NFS is often slower than the same repository accessed over SSH on the same server, because SSH access lets Git run off the local disks on each system.
+
+### The SSH Protocol ###
+
+Probably the most common transport protocol for Git is SSH. This is because SSH access to servers is already set up in most places — and if it isn’t, it’s easy to do. SSH is also the only network-based protocol that you can easily read from and write to. The other two network protocols (HTTP and Git) are generally read-only, so even if you have them available for the unwashed masses, you still need SSH for your own write commands. SSH is also an authenticated network protocol; and because it’s ubiquitous, it’s generally easy to set up and use.
+
+To clone a Git repository over SSH, you can specify an ssh:// URL like this:
+
+	$ git clone ssh://user@server/project.git
+
+Or you can leave off the protocol — Git assumes SSH if you aren’t explicit:
+
+ $ git clone user@server:project.git
+
+You can also leave off the user, in which case Git assumes the user you’re currently logged in as.
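+
+For example, if your username on the server matches your local one, the clone can be as short as this (using the same placeholder host as above):
+
+	$ git clone server:project.git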
+
+#### The Pros ####
+
+The pros of using SSH are many. First, you basically have to use it if you want authenticated write access to your repository over a network. Second, SSH is relatively easy to set up — SSH daemons are commonplace, many network admins have experience with them, and many OS distributions are set up with them or have tools to manage them. Next, access over SSH is secure — all data transfer is encrypted and authenticated. Last, like the Git and Local protocols, SSH is efficient, making the data as compact as possible before transferring it.
+
+#### The Cons ####
+
+The negative aspect of SSH is that you can’t serve anonymous access of your repository over it. People must have access to your machine over SSH to access it, even in a read-only capacity, which doesn’t make SSH access conducive to open source projects. If you’re using it only within your corporate network, SSH may be the only protocol you need to deal with. If you want to allow anonymous read-only access to your projects, you’ll have to set up SSH for you to push over but something else for others to pull over.
+
+### The Git Protocol ###
+
+Next is the Git protocol. This is a special daemon that comes packaged with Git; it listens on a dedicated port (9418) that provides a service similar to the SSH protocol, but with absolutely no authentication. In order for a repository to be served over the Git protocol, you must create the `git-daemon-export-ok` file — the daemon won’t serve a repository without that file in it — but other than that there is no security. Either the Git repository is available for everyone to clone or it isn’t. This means that there is generally no pushing over this protocol. You can enable push access; but given the lack of authentication, if you turn on push access, anyone on the internet who finds your project’s URL could push to your project. Suffice it to say that this is rare.
+
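+Creating that flag file is as simple as touching it inside the bare repository; the path below is just an example, assuming a repository at `/opt/git/project.git`:
+
+	$ touch /opt/git/project.git/git-daemon-export-ok
+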
+#### The Pros ####
+
+The Git protocol is the fastest transfer protocol available. If you’re serving a lot of traffic for a public project or serving a very large project that doesn’t require user authentication for read access, it’s likely that you’ll want to set up a Git daemon to serve your project. It uses the same data-transfer mechanism as the SSH protocol but without the encryption and authentication overhead.
+
+#### The Cons ####
+
+The downside of the Git protocol is the lack of authentication. It’s generally undesirable for the Git protocol to be the only access to your project. Generally, you’ll pair it with SSH access for the few developers who have push (write) access and have everyone else use `git://` for read-only access.
+It’s also probably the most difficult protocol to set up. It must run its own daemon, which is custom — we’ll look at setting one up in the “Gitosis” section of this chapter — it requires `xinetd` configuration or the like, which isn’t always a walk in the park. It also requires firewall access to port 9418, which isn’t a standard port that corporate firewalls always allow. Behind big corporate firewalls, this obscure port is commonly blocked.
+
+### The HTTP/S Protocol ###
+
+Last we have the HTTP protocol. The beauty of the HTTP or HTTPS protocol is the simplicity of setting it up. Basically, all you have to do is put the bare Git repository under your HTTP document root and set up a specific `post-update` hook, and you’re done (See Chapter 7 for details on Git hooks). At that point, anyone who can access the web server under which you put the repository can also clone your repository. To allow read access to your repository over HTTP, do something like this:
+
+ $ cd /var/www/htdocs/
+ $ git clone --bare /path/to/git_project gitproject.git
+ $ cd gitproject.git
+ $ mv hooks/post-update.sample hooks/post-update
+ $ chmod a+x hooks/post-update
+
+That’s all. The `post-update` hook that comes with Git by default runs the appropriate command (`git update-server-info`) to make HTTP fetching and cloning work properly. This command is run when you push to this repository over SSH; then, other people can clone via something like
+
+ $ git clone http://example.com/gitproject.git
+
+In this particular case, we’re using the `/var/www/htdocs` path that is common for Apache setups, but you can use any static web server — just put the bare repository in its path. The Git data is served as basic static files (see Chapter 9 for details about exactly how it’s served).
+
+It’s possible to make Git push over HTTP as well, although that technique isn’t as widely used and requires you to set up complex WebDAV requirements. Because it’s rarely used, we won’t cover it in this book. If you’re interested in using the HTTP-push protocols, you can read about preparing a repository for this purpose at `http://www.kernel.org/pub/software/scm/git/docs/howto/setup-git-server-over-http.txt`. One nice thing about making Git push over HTTP is that you can use any WebDAV server, without specific Git features; so, you can use this functionality if your web-hosting provider supports WebDAV for writing updates to your web site.
+
+#### The Pros ####
+
+The upside of using the HTTP protocol is that it’s easy to set up. Running the handful of required commands gives you a simple way to give the world read access to your Git repository. It takes only a few minutes to do. The HTTP protocol also isn’t very resource intensive on your server. Because it generally uses a static HTTP server to serve all the data, a normal Apache server can serve thousands of files per second on average — it’s difficult to overload even a small server.
+
+You can also serve your repositories read-only over HTTPS, which means you can encrypt the content transfer; or you can go so far as to make the clients use specific signed SSL certificates. Generally, if you’re going to these lengths, it’s easier to use SSH public keys; but it may be a better solution in your specific case to use signed SSL certificates or other HTTP-based authentication methods for read-only access over HTTPS.
+
+Another nice thing is that HTTP is such a commonly used protocol that corporate firewalls are often set up to allow traffic through this port.
+
+#### The Cons ####
+
+The downside of serving your repository over HTTP is that it’s relatively inefficient for the client. It generally takes a lot longer to clone or fetch from the repository, and you often have a lot more network overhead and transfer volume over HTTP than with any of the other network protocols. Because it’s not as intelligent about transferring only the data you need — there is no dynamic work on the part of the server in these transactions — the HTTP protocol is often referred to as a _dumb_ protocol. For more information about the differences in efficiency between the HTTP protocol and the other protocols, see Chapter 9.
+
+## Getting Git on a Server ##
+
+In order to initially set up any Git server, you have to export an existing repository into a new bare repository — a repository that doesn’t contain a working directory. This is generally straightforward to do.
+In order to clone your repository to create a new bare repository, you run the clone command with the `--bare` option. By convention, bare repository directories end in `.git`, like so:
+
+ $ git clone --bare my_project my_project.git
+ Initialized empty Git repository in /opt/projects/my_project.git/
+
+The output for this command is a little confusing. Since `clone` is basically a `git init` then a `git fetch`, we see some output from the `git init` part, which creates an empty directory. The actual object transfer gives no output, but it does happen. You should now have a copy of the Git directory data in your `my_project.git` directory.
+
+This is roughly equivalent to something like
+
+ $ cp -Rf my_project/.git my_project.git
+
+There are a couple of minor differences in the configuration file; but for your purpose, this is close to the same thing. It takes the Git repository by itself, without a working directory, and creates a directory specifically for it alone.
+
+### Putting the Bare Repository on a Server ###
+
+Now that you have a bare copy of your repository, all you need to do is put it on a server and set up your protocols. Let’s say you’ve set up a server called `git.example.com` that you have SSH access to, and you want to store all your Git repositories under the `/opt/git` directory. You can set up your new repository by copying your bare repository over:
+
+ $ scp -r my_project.git [email protected]:/opt/git
+
+At this point, other users who have SSH access to the same server and read access to the `/opt/git` directory can clone your repository by running
+
+ $ git clone [email protected]:/opt/git/my_project.git
+
+If a user SSHs into a server and has write access to the `/opt/git/my_project.git` directory, they will also automatically have push access. Git will automatically add group write permissions to a repository properly if you run the `git init` command with the `--shared` option.
+
+ $ ssh [email protected]
+ $ cd /opt/git/my_project.git
+ $ git init --bare --shared
+
+You see how easy it is to take a Git repository, create a bare version, and place it on a server to which you and your collaborators have SSH access. Now you’re ready to collaborate on the same project.
+
+It’s important to note that this is literally all you need to do to run a useful Git server to which several people have access — just add SSH-able accounts on a server, and stick a bare repository somewhere that all those users have read and write access to. You’re ready to go — nothing else needed.
+
+In the next few sections, you’ll see how to expand to more sophisticated setups. This discussion will include not having to create user accounts for each user, adding public read access to repositories, setting up web UIs, using the Gitosis tool, and more. However, keep in mind that to collaborate with a couple of people on a private project, all you _need_ is an SSH server and a bare repository.
+
+### Small Setups ###
+
+If you’re a small outfit or are just trying out Git in your organization and have only a few developers, things can be simple for you. One of the most complicated aspects of setting up a Git server is user management. If you want some repositories to be read-only to certain users and read/write to others, access and permissions can be a bit difficult to arrange.
+
+#### SSH Access ####
+
+If you already have a server to which all your developers have SSH access, it’s generally easiest to set up your first repository there, because you have to do almost no work (as we covered in the last section). If you want more complex access controls on your repositories, you can handle them with the normal filesystem permissions of your server’s operating system.
+
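+For instance, you might give one Unix group read/write access to a repository while everyone else gets nothing; this is only a rough sketch, and the `developers` group name is hypothetical:
+
+	$ chgrp -R developers /opt/git/project.git
+	$ chmod -R g+rwX /opt/git/project.git
+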
+If you want to place your repositories on a server that doesn’t have accounts for everyone on your team whom you want to have write access, then you must set up SSH access for them. We assume that if you have a server with which to do this, you already have an SSH server installed, and that’s how you’re accessing the server.
+
+There are a few ways you can give access to everyone on your team. The first is to set up accounts for everybody, which is straightforward but can be cumbersome. You may not want to run `adduser` and set temporary passwords for every user.
+
+A second method is to create a single 'git' user on the machine, ask every user who is to have write access to send you an SSH public key, and add that key to the `~/.ssh/authorized_keys` file of your new 'git' user. At that point, everyone will be able to access that machine via the 'git' user. This doesn’t affect the commit data in any way — the SSH user you connect as doesn’t affect the commits you’ve recorded.
+
+Another way to do it is to have your SSH server authenticate from an LDAP server or some other centralized authentication source that you may already have set up. As long as each user can get shell access on the machine, any SSH authentication mechanism you can think of should work.
+
+## Generating Your SSH Public Key ##
+
+That being said, many Git servers authenticate using SSH public keys. In order to provide a public key, each user in your system must generate one if they don’t already have one. This process is similar across all operating systems.
+First, you should check to make sure you don’t already have a key. By default, a user’s SSH keys are stored in that user’s `~/.ssh` directory. You can easily check to see if you have a key already by going to that directory and listing the contents:
+
+ $ cd ~/.ssh
+ $ ls
+ authorized_keys2 id_dsa known_hosts
+ config id_dsa.pub
+
+You’re looking for a pair of files named something and something.pub, where the something is usually `id_dsa` or `id_rsa`. The `.pub` file is your public key, and the other file is your private key. If you don’t have these files (or you don’t even have a `.ssh` directory), you can create them by running a program called `ssh-keygen`, which is provided with the SSH package on Linux/Mac systems and comes with the MSysGit package on Windows:
+
+ $ ssh-keygen
+ Generating public/private rsa key pair.
+ Enter file in which to save the key (/Users/schacon/.ssh/id_rsa):
+ Enter passphrase (empty for no passphrase):
+ Enter same passphrase again:
+ Your identification has been saved in /Users/schacon/.ssh/id_rsa.
+ Your public key has been saved in /Users/schacon/.ssh/id_rsa.pub.
+ The key fingerprint is:
+ 43:c5:5b:5f:b1:f1:50:43:ad:20:a6:92:6a:1f:9a:3a [email protected]
+
+First it confirms where you want to save the key (`.ssh/id_rsa`), and then it asks twice for a passphrase, which you can leave empty if you don’t want to type a password when you use the key.
+
+Now, each user that does this has to send their public key to you or whoever is administrating the Git server (assuming you’re using an SSH server setup that requires public keys). All they have to do is copy the contents of the `.pub` file and e-mail it. The public keys look something like this:
+
+ $ cat ~/.ssh/id_rsa.pub
+ ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/BWDSU
+ GPl+nafzlHDTYW7hdI4yZ5ew18JH4JW9jbhUFrviQzM7xlELEVf4h9lFX5QVkbPppSwg0cda3
+ Pbv7kOdJ/MTyBlWXFCR+HAo3FXRitBqxiX1nKhXpHAZsMciLq8V6RjsNAQwdsdMFvSlVK/7XA
+ t3FaoJoAsncM1Q9x5+3V0Ww68/eIFmb1zuUFljQJKprrX88XypNDvjYNby6vw/Pb0rwert/En
+ mZ+AW4OZPnTPI89ZPmVMLuayrD2cE86Z/il8b+gw3r3+1nKatmIkjn2so1d01QraTlMqVSsbx
+ NrRFi9wrf+M7Q== [email protected]
+
+For a more in-depth tutorial on creating an SSH key on multiple operating systems, see the GitHub guide on SSH keys at `http://github.com/guides/providing-your-ssh-key`.
+
+## Setting Up the Server ##
+
+Let’s walk through setting up SSH access on the server side. In this example, you’ll use the `authorized_keys` method for authenticating your users. We also assume you’re running a standard Linux distribution like Ubuntu. First, you create a 'git' user and a `.ssh` directory for that user.
+
+ $ sudo adduser git
+ $ su git
+ $ cd
+ $ mkdir .ssh
+
+Next, you need to add some developer SSH public keys to the `authorized_keys` file for that user. Let’s assume you’ve received a few keys by e-mail and saved them to temporary files. Again, the public keys look something like this:
+
+ $ cat /tmp/id_rsa.john.pub
+ ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCB007n/ww+ouN4gSLKssMxXnBOvf9LGt4L
+ ojG6rs6hPB09j9R/T17/x4lhJA0F3FR1rP6kYBRsWj2aThGw6HXLm9/5zytK6Ztg3RPKK+4k
+ Yjh6541NYsnEAZuXz0jTTyAUfrtU3Z5E003C4oxOj6H0rfIF1kKI9MAQLMdpGW1GYEIgS9Ez
+ Sdfd8AcCIicTDWbqLAcU4UpkaX8KyGlLwsNuuGztobF8m72ALC/nLF6JLtPofwFBlgc+myiv
+ O7TCUSBdLQlgMVOFq1I2uPWQOkOWQAHukEOmfjy2jctxSDBQ220ymjaNsHT4kgtZg2AYYgPq
+ dAv8JggJICUvax2T9va5 gsg-keypair
+
+You just append them to your `authorized_keys` file:
+
+ $ cat /tmp/id_rsa.john.pub >> ~/.ssh/authorized_keys
+ $ cat /tmp/id_rsa.josie.pub >> ~/.ssh/authorized_keys
+ $ cat /tmp/id_rsa.jessica.pub >> ~/.ssh/authorized_keys
+
+Now, you can set up an empty repository for them by running `git init` with the `--bare` option, which initializes the repository without a working directory:
+
+ $ cd /opt/git
+ $ mkdir project.git
+ $ cd project.git
+ $ git --bare init
+
+Then, John, Josie, or Jessica can push the first version of their project into that repository by adding it as a remote and pushing up a branch. Note that someone must shell onto the machine and create a bare repository every time you want to add a project. Let’s use `gitserver` as the hostname of the server on which you’ve set up your 'git' user and repository. If you’re running it internally, and you set up DNS for `gitserver` to point to that server, then you can use the commands pretty much as is:
+
+	# on John's computer
+ $ cd myproject
+ $ git init
+ $ git add .
+ $ git commit -m 'initial commit'
+ $ git remote add origin git@gitserver:/opt/git/project.git
+ $ git push origin master
+
+At this point, the others can clone it down and push changes back up just as easily:
+
+ $ git clone git@gitserver:/opt/git/project.git
+ $ vim README
+ $ git commit -am 'fix for the README file'
+ $ git push origin master
+
+With this method, you can quickly get a read/write Git server up and running for a handful of developers.
+
+As an extra precaution, you can easily restrict the 'git' user to only doing Git activities with a limited shell tool called `git-shell` that comes with Git. If you set this as your 'git' user’s login shell, then the 'git' user can’t have normal shell access to your server. To use this, specify `git-shell` instead of bash or csh for your user’s login shell. To do so, you’ll likely have to edit your `/etc/passwd` file:
+
+ $ sudo vim /etc/passwd
+
+At the bottom, you should find a line that looks something like this:
+
+ git:x:1000:1000::/home/git:/bin/sh
+
+Change `/bin/sh` to `/usr/bin/git-shell` (or run `which git-shell` to see where it’s installed). The line should look something like this:
+
+ git:x:1000:1000::/home/git:/usr/bin/git-shell
+
+Now, the 'git' user can only use the SSH connection to push and pull Git repositories and can’t shell onto the machine. If you try, you’ll see a login rejection like this:
+
+ $ ssh git@gitserver
+ fatal: What do you think I am? A shell?
+ Connection to gitserver closed.
+
+## Public Access ##
+
+What if you want anonymous read access to your project? Perhaps instead of hosting an internal private project, you want to host an open source project. Or maybe you have a bunch of automated build servers or continuous integration servers that change a lot, and you don’t want to have to generate SSH keys all the time — you just want to add simple anonymous read access.
+
+Probably the simplest way for smaller setups is to run a static web server with its document root where your Git repositories are, and then enable that `post-update` hook we mentioned in the first section of this chapter. Let’s work from the previous example. Say you have your repositories in the `/opt/git` directory, and an Apache server is running on your machine. Again, you can use any web server for this; but as an example, we’ll demonstrate some basic Apache configurations that should give you an idea of what you might need.
+
+First you need to enable the hook:
+
+ $ cd project.git
+ $ mv hooks/post-update.sample hooks/post-update
+ $ chmod a+x hooks/post-update
+
+If you’re using a version of Git earlier than 1.6, the `mv` command isn’t necessary — Git started naming the hooks examples with the .sample postfix only recently.
+
+What does this `post-update` hook do? It looks basically like this:
+
+ $ cat .git/hooks/post-update
+ #!/bin/sh
+ exec git-update-server-info
+
+This means that when you push to the server via SSH, Git will run this command to update the files needed for HTTP fetching.
+
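+You can also run the same command by hand from inside the bare repository whenever you need to regenerate those files, for example after repacking or changing refs outside of a normal push:
+
+	$ cd /opt/git/project.git
+	$ git update-server-info
+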
+Next, you need to add a VirtualHost entry to your Apache configuration with the document root as the root directory of your Git projects. Here, we’re assuming that you have wildcard DNS set up to send `*.gitserver` to whatever box you’re using to run all this:
+
+	<VirtualHost *:80>
+	    ServerName git.gitserver
+	    DocumentRoot /opt/git
+	    <Directory /opt/git/>
+	        Order allow,deny
+	        allow from all
+	    </Directory>
+	</VirtualHost>
+
+You’ll also need to set the Unix user group of the `/opt/git` directories to `www-data` so your web server can read-access the repositories, because the Apache instance running the CGI script will (by default) be running as that user:
+
+ $ chgrp -R www-data /opt/git
+
+When you restart Apache, you should be able to clone your repositories under that directory by specifying the URL for your project:
+
+ $ git clone http://git.gitserver/project.git
+
+This way, you can set up HTTP-based read access to any of your projects for a fair number of users in a few minutes. Another simple option for public unauthenticated access is to start a Git daemon, although that requires you to daemonize the process - we’ll cover this option in the next section, if you prefer that route.
+
+## GitWeb ##
+
+Now that you have basic read/write and read-only access to your project, you may want to set up a simple web-based visualizer. Git comes with a CGI script called GitWeb that is commonly used for this. You can see GitWeb in use at sites like `http://git.kernel.org` (see Figure 4-1).
+
+Insert 18333fig0401.png
+Figure 4-1. The GitWeb web-based user interface.
+
+If you want to check out what GitWeb would look like for your project, Git comes with a command to fire up a temporary instance if you have a lightweight server on your system like `lighttpd` or `webrick`. On Linux machines, `lighttpd` is often installed, so you may be able to get it to run by typing `git instaweb` in your project directory. If you’re running a Mac, Leopard comes preinstalled with Ruby, so `webrick` may be your best bet. To start `instaweb` with a non-lighttpd handler, you can run it with the `--httpd` option.
+
+ $ git instaweb --httpd=webrick
+ [2009-02-21 10:02:21] INFO WEBrick 1.3.1
+ [2009-02-21 10:02:21] INFO ruby 1.8.6 (2008-03-03) [universal-darwin9.0]
+
+That starts up an HTTPD server on port 1234 and then automatically starts a web browser that opens on that page. It’s pretty easy on your part. When you’re done and want to shut down the server, you can run the same command with the `--stop` option:
+
+ $ git instaweb --httpd=webrick --stop
+
+If you want to run the web interface on a server all the time for your team or for an open source project you’re hosting, you’ll need to set up the CGI script to be served by your normal web server. Some Linux distributions have a `gitweb` package that you may be able to install via `apt` or `yum`, so you may want to try that first. We’ll walk through installing GitWeb manually very quickly. First, you need to get the Git source code, which GitWeb comes with, and generate the custom CGI script:
+
+ $ git clone git://git.kernel.org/pub/scm/git/git.git
+ $ cd git/
+ $ make GITWEB_PROJECTROOT="/opt/git" \
+ prefix=/usr gitweb
+ $ sudo cp -Rf gitweb /var/www/
+
+Notice that you have to tell the command where to find your Git repositories with the `GITWEB_PROJECTROOT` variable. Now, you need to make Apache use CGI for that script, for which you can add a VirtualHost:
+
+	<VirtualHost *:80>
+	    ServerName gitserver
+	    DocumentRoot /var/www/gitweb
+	    <Directory /var/www/gitweb>
+	        Options +ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
+	        AllowOverride All
+	        order allow,deny
+	        Allow from all
+	        AddHandler cgi-script cgi
+	        DirectoryIndex gitweb.cgi
+	    </Directory>
+	</VirtualHost>
+
+Again, GitWeb can be served with any CGI capable web server; if you prefer to use something else, it shouldn’t be difficult to set up. At this point, you should be able to visit `http://gitserver/` to view your repositories online, and you can use `http://git.gitserver` to clone and fetch your repositories over HTTP.
+
+## Gitosis ##
+
+Keeping all users’ public keys in the `authorized_keys` file for access works well only for a while. When you have hundreds of users, it’s much more of a pain to manage that process. You have to shell onto the server each time, and there is no access control — everyone in the file has read and write access to every project.
+
+At this point, you may want to turn to a widely used software project called Gitosis. Gitosis is basically a set of scripts that help you manage the `authorized_keys` file as well as implement some simple access controls. The really interesting part is that the UI for this tool for adding people and determining access isn’t a web interface but a special Git repository. You set up the information in that project; and when you push it, Gitosis reconfigures the server based on that, which is cool.
+
+Installing Gitosis isn’t the simplest task ever, but it’s not too difficult. It’s easiest to use a Linux server for it — these examples use a stock Ubuntu 8.10 server.
+
+Gitosis requires some Python tools, so first you have to install the Python setuptools package, which Ubuntu provides as python-setuptools:
+
+ $ apt-get install python-setuptools
+
+Next, you clone and install Gitosis from the project’s main site:
+
+ $ git clone git://eagain.net/gitosis.git
+ $ cd gitosis
+ $ sudo python setup.py install
+
+That installs a couple of executables that Gitosis will use. Next, Gitosis wants to put its repositories under `/home/git`, which is fine. But you have already set up your repositories in `/opt/git`, so instead of reconfiguring everything, you create a symlink:
+
+ $ ln -s /opt/git /home/git/repositories
+
+Gitosis is going to manage your keys for you, so you need to remove the current file, re-add the keys later, and let Gitosis control the `authorized_keys` file automatically. For now, move the `authorized_keys` file out of the way:
+
+ $ mv /home/git/.ssh/authorized_keys /home/git/.ssh/ak.bak
+
+Next you need to turn your shell back on for the 'git' user, if you changed it to the `git-shell` command. People still won’t be able to log in, but Gitosis will control that for you. So, let’s change this line in your `/etc/passwd` file
+
+ git:x:1000:1000::/home/git:/usr/bin/git-shell
+
+back to this:
+
+ git:x:1000:1000::/home/git:/bin/sh
+
+Now it’s time to initialize Gitosis. You do this by running the `gitosis-init` command with your personal public key. If your public key isn’t on the server, you’ll have to copy it there:
+
+ $ sudo -H -u git gitosis-init < /tmp/id_dsa.pub
+ Initialized empty Git repository in /opt/git/gitosis-admin.git/
+ Reinitialized existing Git repository in /opt/git/gitosis-admin.git/
+
+This lets the user with that key modify the main Git repository that controls the Gitosis setup. Next, you have to manually set the execute bit on the `post-update` script for your new control repository.
+
+ $ sudo chmod 755 /opt/git/gitosis-admin.git/hooks/post-update
+
+You’re ready to roll. If you’re set up correctly, you can try to SSH into your server as the user for which you added the public key to initialize Gitosis. You should see something like this:
+
+ $ ssh git@gitserver
+ PTY allocation request failed on channel 0
+ fatal: unrecognized command 'gitosis-serve schacon@quaternion'
+ Connection to gitserver closed.
+
+That means Gitosis recognized you but shut you out because you’re not trying to do any Git commands. So, let’s do an actual Git command — you’ll clone the Gitosis control repository:
+
+ # on your local computer
+ $ git clone git@gitserver:gitosis-admin.git
+
+Now you have a directory named `gitosis-admin`, which has two major parts:
+
+ $ cd gitosis-admin
+ $ find .
+ ./gitosis.conf
+ ./keydir
+ ./keydir/scott.pub
+
+The `gitosis.conf` file is the control file you use to specify users, repositories, and permissions. The `keydir` directory is where you store the public keys of all the users who have any sort of access to your repositories — one file per user. The name of the file in `keydir` (in the previous example, `scott.pub`) will be different for you — Gitosis takes that name from the description at the end of the public key that was imported with the `gitosis-init` script.
+
+If you look at the `gitosis.conf` file, it should only specify information about the `gitosis-admin` project that you just cloned:
+
+ $ cat gitosis.conf
+ [gitosis]
+
+ [group gitosis-admin]
+ writable = gitosis-admin
+ members = scott
+
+It shows you that the 'scott' user — the user with whose public key you initialized Gitosis — is the only one who has access to the `gitosis-admin` project.
+
+Now, let’s add a new project for you. You’ll add a new section called `mobile` where you’ll list the developers on your mobile team and projects that those developers need access to. Because 'scott' is the only user in the system right now, you’ll add him as the only member, and you’ll create a new project called `iphone_project` to start on:
+
+ [group mobile]
+ writable = iphone_project
+ members = scott
+
+Whenever you make changes to the `gitosis-admin` project, you have to commit the changes and push them back up to the server in order for them to take effect:
+
+ $ git commit -am 'add iphone_project and mobile group'
+ [master]: created 8962da8: "changed name"
+ 1 files changed, 4 insertions(+), 0 deletions(-)
+ $ git push
+ Counting objects: 5, done.
+ Compressing objects: 100% (2/2), done.
+ Writing objects: 100% (3/3), 272 bytes, done.
+ Total 3 (delta 1), reused 0 (delta 0)
+ To git@gitserver:/opt/git/gitosis-admin.git
+ fb27aec..8962da8 master -> master
+
+You can make your first push to the new `iphone_project` project by adding your server as a remote to your local version of the project and pushing. You no longer have to manually create a bare repository for new projects on the server — Gitosis creates them automatically when it sees the first push:
+
+ $ git remote add origin git@gitserver:iphone_project.git
+ $ git push origin master
+ Initialized empty Git repository in /opt/git/iphone_project.git/
+ Counting objects: 3, done.
+ Writing objects: 100% (3/3), 230 bytes, done.
+ Total 3 (delta 0), reused 0 (delta 0)
+ To git@gitserver:iphone_project.git
+ * [new branch] master -> master
+
+Notice that you don’t need to specify the path (in fact, doing so won’t work), just a colon and then the name of the project — Gitosis finds it for you.
+
+You want to work on this project with your friends, so you’ll have to re-add their public keys. But instead of appending them manually to the `~/.ssh/authorized_keys` file on your server, you’ll add them, one key per file, into the `keydir` directory. How you name the keys determines how you refer to the users in the `gitosis.conf` file. Let’s re-add the public keys for John, Josie, and Jessica:
+
+ $ cp /tmp/id_rsa.john.pub keydir/john.pub
+ $ cp /tmp/id_rsa.josie.pub keydir/josie.pub
+ $ cp /tmp/id_rsa.jessica.pub keydir/jessica.pub
+
+Now you can add them all to your 'mobile' team so they have read and write access to `iphone_project`:
+
+ [group mobile]
+ writable = iphone_project
+ members = scott john josie jessica
+
+After you commit and push that change, all four users will be able to read from and write to that project.
+
+Gitosis has simple access controls as well. If you want John to have only read access to this project, you can do this instead:
+
+ [group mobile]
+ writable = iphone_project
+ members = scott josie jessica
+
+ [group mobile_ro]
+ readonly = iphone_project
+ members = john
+
+Now John can clone the project and get updates, but Gitosis won’t allow him to push back up to the project. You can create as many of these groups as you want, each containing different users and projects. You can also specify another group as one of the members (using `@` as prefix), to inherit all of its members automatically:
+
+ [group mobile_committers]
+ members = scott josie jessica
+
+ [group mobile]
+ writable = iphone_project
+ members = @mobile_committers
We are going to explain our HackerRank solutions step by step so there will be no problem understanding the code, and we will put comments on the code so you can follow the flow of each program.
Queen's Attack II: given the queen's position and the locations of all the obstacles on an n x n chessboard, find and print the number of squares the queen can attack from her position. The queensAttack function takes the number of rows and columns n, the number of obstacles k, the queen's row and column, and a two-dimensional array holding the row and column of each obstacle; an obstacle prevents the queen from attacking the cells behind it.
A game on a chessboard: the game starts with a single coin located at some coordinates, and two players alternate turns moving it to one of the allowed locations; the coin must remain inside the confines of the board, and the first player who is unable to make a move loses. Given the initial coordinates of the players' coins and assuming optimal play, determine which player will win the game.
N-Queens: the problem of placing n chess queens on an N x N chessboard so that no two queens attack each other; a solution can be displayed as a binary matrix with 1s in the blocks where queens are placed, and it can be found with a recursive backtracking algorithm.
Eliminate Redundant Copies of Variables in Generated Code
When Redundant Copies Occur
During C/C++ code generation, the code generator checks for statements that attempt to access uninitialized memory. If it detects execution paths where a variable is used but is potentially not defined, it generates a compile-time error. To prevent these errors, define variables by assignment before using them in operations or returning them as function outputs.
Note, however, that variable assignments not only copy the properties of the assigned data to the new variable, but also initialize the new variable to the assigned value. This forced initialization sometimes results in redundant copies in C/C++ code. To eliminate redundant copies, define uninitialized variables by using the coder.nullcopy function, as described in How to Eliminate Redundant Copies by Defining Uninitialized Variables.
How to Eliminate Redundant Copies by Defining Uninitialized Variables
1. Define the variable with coder.nullcopy.
2. Initialize the variable before reading it.
When the uninitialized variable is an array, you must initialize all of its elements before passing the array as an input to a function or operator — even if the function or operator does not read from the uninitialized portion of the array.
What happens if you access uninitialized data?
If you read an element of a variable declared with coder.nullcopy before that element has been written, the generated code can return unpredictable results, so make sure every element is initialized before it is read.
Defining Uninitialized Variables
In the following code, the assignment statement X = zeros(1,N) not only defines X to be a 1-by-5 vector of real doubles, but also initializes each element of X to zero.
function X = withoutNullcopy %#codegen
N = 5;
X = zeros(1,N);
for i = 1:N
if mod(i,2) == 0
X(i) = i;
elseif mod(i,2) == 1
X(i) = 0;
end
end
This forced initialization creates an extra copy in the generated code. To eliminate this overhead, use coder.nullcopy in the definition of X:
function X = withNullcopy %#codegen
N = 5;
X = coder.nullcopy(zeros(1,N));
for i = 1:N
if mod(i,2) == 0
X(i) = i;
else
X(i) = 0;
end
end
What are ChatBots and does it make sense to use them?
ChatBots are currently the new black in customer communication and in making the Digital Customer Journey more effective. But what exactly are ChatBots, what do they require, and where should you use them?
First of all, it should be noted that the customer communicates more and more with Messenger services. And he also expects this from the brands that interest him. Since early 2015, more messages have been sent on services such as Facebook Messenger, WhatsApp, WeChat, or Kik than via Facebook, Twitter and Instagram – with a strong upward trend.
New customer expectation: speed!
This means new customer expectations in terms of speed and relevance of communication. As with social media, this poses the risk of companies being overburdened again, as communication has to take place much more in real time than is already the case today. This is difficult to achieve in terms of personnel and organisation. One possible solution is the use of a friendly ChatBot.
A chatbot is software that automates communication with people over the Internet. As virtual assistants, they answer questions asked in normal language in a personal and human way. They thus imitate a human interlocutor, but are a program, i.e. a pure human-machine interaction takes place. This is cheaper for the company and faster, easier for the customer to use and better accessible than most of today’s communication channels.
Not all ChatBots are based on AI
Some of these programs use AI; some are simply based on preconceived scripts built on rules or decision trees. The now famous Taco-Bot (https://www.tacobell.com/feed/tacobot) of the American fast-food chain Taco Bell, where you can order tacos via a chatbot within the collaboration app Slack and have them delivered to the office, is a good example of a rule-based, simple but effective and personal ChatBot. AI-based systems include Amazon Alexa, Microsoft Cortana and Apple Siri, which – like IBM with its Watson software integration – support ChatBot development by providing AI functions.
ChatBots Examples
Here are a few examples that outline the range of uses of ChatBots:
Two aspects are important when using chatbots: the customer should not notice that he is communicating with a machine (even if he knows it – as in the case of Siri or Alexa, or because the provider reveals it), and the answers that the chatbot offers must be correct and relevant. This requires very complex and expensive content management. Questions and answers must be available and constantly updated, which can also be partly ensured by self-learning algorithms. At this point, the job description of a “content engineer” is already emerging: someone who is not only responsible for the creation and provision of the content, but also for how the content is algorithmically, and thus automatically, developed further and how its relevance and accuracy are ensured. Regulations must also ensure, for example, that country-specific legal requirements are complied with or that the customer’s privacy is protected.
How to decide
The decision to use chatbots is based on various factors and is not trivial. It basically starts with the customer and asks who the target group is, what requirements the target group has of the brand, which messenger services are preferred in the target group and what additional brand loyalty the customer gets with the use of a ChatBot. This results in clear requirements for the underlying technology, the expected communication behavior of the ChatBot, the interfaces offered and the content on which the ChatBot must be based. Marketing, IT and external service providers must work very closely together to ensure that a ChatBot project does not fail as early as the requirements definition phase.
What to expect
In the long term, more and more ChatBots will become established in the areas of communication and services. Standardized development and integration technologies are increasingly available, the challenges now lie less on the technical side than in enabling the integration of existing data and creating the right user experience. The most successful ChatBots are those to which the customer voluntarily and gladly returns and which create consistent added value for him.
What happens to the ProxySG appliance cache when upgrading or when downgrading?
Article ID: 168302
Products: ProxySG Software - SGOS
Issue/Introduction
Want to know how Proxy's cache is handled when upgrading/downgrading.
Resolution
The contents that are kept in the memory will be purged (just as they are following a restart) but the cached content stored on the appliance's disks will not be purged. As such, bandwidth savings (BWS) will be approximately the same as before the upgrade or downgrade, provided the traffic profile remains the same.
How to Code on Mac
Charlotte Daniels
Mac, Tutorials
Coding on Mac: A Comprehensive Guide
In today’s digital age, coding has become an essential skill for anyone looking to thrive in the tech industry. If you are a Mac user, you’re in luck!
Macs are known for their powerful capabilities and user-friendly interfaces, making them an excellent choice for coders of all levels. In this article, we will walk you through the steps to start coding on your Mac and provide some useful tips along the way.
Setting Up Your Development Environment
Before diving into coding on your Mac, it’s crucial to set up your development environment. This ensures that you have the necessary tools and software needed to write, compile, and test your code seamlessly. Here are a few essential steps:
1. Install Xcode
Xcode is an integrated development environment (IDE) that provides a complete set of tools for building apps for macOS, iOS, watchOS, and more. To install Xcode on your Mac, follow these simple steps:
1. Launch the App Store from your dock or the Applications folder.
2. Search for “Xcode” in the search bar.
3. Click on the “Get” button next to Xcode and follow the installation prompts.
2. Choose a Text Editor or IDE
A text editor or integrated development environment (IDE) is where you will write and edit your code. There are several options available for Mac users:
• Visual Studio Code: Visual Studio Code is a lightweight yet powerful source code editor with excellent support for various programming languages.
• Atom: Atom is another popular text editor known for its easy customization options and extensive plugin library.
• Sublime Text: Sublime Text is a highly customizable text editor that offers a smooth and efficient coding experience.
Getting Started with Coding
Now that you have set up your development environment, it’s time to dive into coding. Here are some essential steps to get you started:
1. Choose a Programming Language
Before you start coding, it’s crucial to choose a programming language that aligns with your goals and interests. Some popular choices include:
• Python: Python is known for its simplicity and readability, making it an excellent choice for beginners.
• JavaScript: JavaScript is the language of the web and is essential for front-end development.
• C++: C++ is widely used in game development and low-level programming.
2. Learn the Basics
Once you have chosen a programming language, start by learning its basics. Familiarize yourself with concepts such as variables, data types, loops, conditionals, and functions. Numerous online resources offer tutorials and courses to help you get started.
3. Practice Regularly
Coding is like any other skill – the more you practice, the better you become. Set aside dedicated time each day or week to hone your coding skills. Solve coding challenges, work on small projects, or contribute to open-source projects to gain practical experience.
Troubleshooting Tips
No matter how experienced you are as a coder, encountering errors or bugs while writing code is inevitable. Here are some troubleshooting tips specific to Mac users:
1. Use Terminal
The Terminal app on your Mac is a powerful tool that allows you to interact with your computer’s command-line interface. It can be handy for running scripts, managing files, and debugging code. Familiarize yourself with basic Terminal commands to streamline your coding workflow.
2. Leverage Online Communities
If you encounter a coding problem that you can’t solve on your own, don’t hesitate to seek help from online communities such as Stack Overflow or Reddit. These platforms have a vast community of experienced developers who are often more than willing to assist you.
3. Keep Your Software Up to Date
Regularly updating your software, including Xcode and your chosen text editor or IDE, ensures that you have access to the latest features, bug fixes, and security patches. Keeping your software up to date is crucial for a smooth coding experience.
Congratulations! You are now equipped with the knowledge and tools needed to start coding on your Mac like a pro.
Remember, coding is a continuous learning process, so don’t be afraid to explore new technologies and challenge yourself. Happy coding!
greenwah02.dll
Process name: VST
Application using this process: VST
What is greenwah02.dll doing on my computer?
VST Plugin. This process is still being reviewed; if you have some information about it, feel free to send us an email at pl[at]uniblue[dot]com. Non-system processes like greenwah02.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.
In order to ensure your files and data are not lost, be sure to back up your files online. Using a cloud backup service will allow you to safely secure all your digital files. This will also enable you to access any of your files, at any time, on any device.
Is greenwah02.dll harmful?
greenwah02.dll is unrated
Can I stop or remove greenwah02.dll?
Most non-system processes that are running can be stopped because they are not involved in running your operating system. Scan your system now to identify unused processes that are using up valuable resources. greenwah02.dll is used by 'VST'.This is an application created by 'Unknown'. To stop greenwah02.dll permanently uninstall 'VST' from your system. Uninstalling applications can leave invalid registry entries, accumulating over time. Run a free scan to find out how to optimize software and system performance.
Is greenwah02.dll CPU intensive?
This process is not considered CPU intensive. However, running too many processes on your system may affect your PC’s performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up. Alternatively, download PC Mechanic to automatically scan and identify any PC issues.
Why is greenwah02.dll giving me errors?
Process related issues are usually related to problems encountered by the application that runs it. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.
Auditd events created while Auditbeat is down
I'm designing a filesystem operations (file creation, file deletion, file open, permissions change, etc) and process executions tracking application using Auditbeat. This tracking app should not lose events in any case, so I'm checking all crash possibilities and how the app could handle them.
1. If Auditbeat crashes, do I lose auditd events until it's restarted ? My understanding is that I would not lose events if auditbeat gets restarted before the kernel event queue fills up. Is that right?
2. If Auditbeat is killed, do I lose auditd events until it's restarted? I hope that the action defined in the failure_mode config option will be taken (in other words, does Auditbeat handle a SIGTERM/SIGINT OS signal and does whatever is specified in failure_mode) ? If that is true, I could set the "panic" value in failure_mode and be sure I would not lose events. I'm aware that Auditbeat would not be notified if it was terminated forcefully with a SIGKILL and can do nothing about it.
3. Most important question: if Auditbeat crashes or is killed, it's not restarted until the kernel event queue fills up and I lose events, will auditbeat detect that when it restarts and log the incident?
multiselect - jquery ui autocomplete ajax
jQuery AutoComplete does not show up (4)
Inside a jquery dialog I would like to use the jquery autocomplete feature of jqueryUI.
I have then prepared an action in my Controller (I am using ASP.NET MVC2) that is as follows:
public ActionResult GetForos(string startsWith, int pageSize)
{
// get records from underlying store
int totalCount = 0;
string whereClause = "Foro Like '" + startsWith + "%'";
List<Foro> allForos = _svc.GetPaged(whereClause, "Foro", 0, pageSize, out totalCount);
//transform records in form of Json data
List<ForoModelWS> foros = new List<ForoModelWS>();
foreach ( Foro f in allForos)
foros.Add( new ForoModelWS() { id= Convert.ToString(f.ForoId),
text= f.Foro + ", Sezione: " + f.Sezione + ", " + f.AuthorityIdSource.Name });
return Json(foros);
}
The class ForoModelWS is a simple class used only to hold the data that shall be transferred in json. Here it is
public class ForoModelWS
{
public string id;
public string text;
}
On the client side I have the following jquery code:
<input id="theForo" />
<script type="text/javascript">
$(document).ready(function() {
$("#theForo").autocomplete({
source: function(request, response) {
$.ajax({
type: "post",
url: "/Foro/GetForos",
dataType: "json",
data: {
startsWith: request.term,
pageSize: 15
},
success: function(data) {
response($.map(data, function(item) {
return {
label: item.text,
value: item.text
}
}))
}
})
},
minLength: 2,
select: function(event, ui) {
},
open: function() {
$(this).removeClass("ui-corner-all").addClass("ui-corner-top");
},
close: function() {
$(this).removeClass("ui-corner-top").addClass("ui-corner-all");
}
});
});
</script>
But the sliding window with the suggestions does not appear. If I put an alert inside the response function I can see the correct data.
Do I miss something?
Thanks for helping
1st EDIT: Moreover, how can I change the code to use the "id" property of the selected element in the returned list?
2nd EDIT: I have checked more with the Chrome developer tools and I have seen that when autocomplete starts, some errors appear, such as the following:
Uncaught TypeError: Cannot call method 'zIndex' of undefined @ _assets/js/jquery-ui-1.8.4.custom.min.js:317
Uncaught TypeError: Cannot read property 'element' of undefined @ _assets/js/jquery-ui-1.8.4.custom.min.js:321
Uncaught TypeError: Cannot read property 'element' of undefined @ _assets/js/jquery-ui-1.8.4.custom.min.js:320
It seems that the autocomplete plugin does not find an element when it tries to set the z-index of the suggestion list one level up from its container. The first error appears when the jQuery UI Dialog opens. The input for the autocomplete is inside a jQuery tab that is inside a jQuery Dialog.
3rd EDIT: I am adding the HTML markup to be complete
<td width="40%">
<%= Html.LabelFor(model => model.ForoID)%>
<br />
<%= Html.HiddenFor(model => model.ForoID) %>
<input id="theForo" />
<%= Html.ValidationMessageFor(model => model.ForoID, "*")%>
</td>
I have found the problem.
In my case I was also using another plugin, this one.
That plugin was included at the end of my scripts and caused the error described in the question. I have removed the plugin and everything works fine.
Before removing it I also tried to isolate the problem by putting both scripts in a static HTML page. I found that even the simplest usage of the autocomplete feature, like this
<script type="text/javascript">
$(document).ready(function() {
var availableTags = ["ActionScript", "AppleScript", "Asp", "BASIC", "C", "C++", "Clojure",
"COBOL", "ColdFusion", "Erlang", "Fortran", "Groovy", "Haskell", "Java", "JavaScript",
"Lisp", "Perl", "PHP", "Python", "Ruby", "Scala", "Scheme"];
$("#theForo").autocomplete({
source: availableTags
});
});
</script>
would cause the error I got.
My choice has been to remove the menu plugin, also because that code isn't supported anymore.
Thanks!
Just like I answered here, take a look at my working example of jQuery UI's autocomplete. Pay attention to the source part. Hope it helps:
var cache = {};
$("#textbox").autocomplete({
source: function(request, response) {
if (request.term in cache) {
response($.map(cache[request.term].d, function(item) {
return { value: item.value, id: item.id }
}))
return;
}
$.ajax({
url: "/Services/AutoCompleteService.asmx/GetEmployees", /* I use a web service */
data: "{ 'term': '" + request.term + "' }",
dataType: "json",
type: "POST",
contentType: "application/json; charset=utf-8",
dataFilter: function(data) { return data; },
success: function(data) {
cache[request.term] = data;
response($.map(data.d, function(item) {
return {
value: item.value,
id: item.id
}
}))
},
error: HandleAjaxError // custom method
});
},
minLength: 3,
select: function(event, ui) {
if (ui.item) {
formatAutoComplete(ui.item); // custom method
}
}
});
If you're not doing so by now, get Firebug. It's an invaluable tool for web development. You can set a breakpoint on this JavaScript and see what happens.
Based on Lorenzo's answer, I modified
$.fn.menu = function(options) { ...
to
$.fn.menuA = function(options) { ...
in that plugin's menu file, and it worked fine (both autocomplete and the menu plugin).
fgmenu defines the function menu() and autocomplete also uses a function with that name; because the function names are the same, problems occur.
You can change the function name in fgmenu.js:
$('#hierarchybreadcrumb6').menuA({content: $('#hierarchybreadcrumb6').next().html(),backLink: false});
5 Reasons You Need a Container Firewall
What is a Container Firewall?
A container firewall is a specialized security solution designed to protect container environments. Just as traditional firewalls protect network perimeters, a container firewall safeguards the boundaries of containerized applications. It acts as a gatekeeper, monitoring and controlling incoming and outgoing network traffic based on pre-established security rules.
Container firewalls are becoming increasingly important in the world of DevOps and agile software development. As more organizations embrace container technology to streamline their development and deployment processes, the need for effective container security strategies has grown. Container firewalls provide the necessary layer of protection, ensuring the integrity and confidentiality of data within these environments.
What makes a container firewall unique is its ability to understand and operate within the context of a containerized environment. It is specifically designed to address the security challenges that arise in these dynamic and complex environments, making it a crucial component of any container security strategy.
Key Features of Container Firewalls
Network Traffic Filtering
The ability to filter network traffic is a key feature of any firewall, and container firewalls are no exception. They scrutinize every packet that enters or leaves a container, checking it against a set of rules to determine whether it should be allowed or blocked.
This capability is crucial for identifying and preventing potential threats. For example, if a packet’s content, source, or destination matches a known malicious pattern, the firewall can block it, thereby preventing a potential attack.
Network traffic filtering in container firewalls is particularly important given the dynamic nature of container environments. With containers constantly being created and destroyed, the firewall must be able to rapidly update its rules to reflect these changes and ensure ongoing protection.
Container-Specific Contextual Awareness
Container firewalls are designed to understand and operate within the unique context of a container environment. This means they are aware of the specific characteristics and behaviors of containers, enabling them to provide more effective and targeted protection.
For instance, a container firewall can recognize when a new container is created or an existing one is destroyed, and adjust its security policies accordingly. It can also identify the specific services and applications running within each container, allowing it to enforce granular security rules based on this information.
This container-specific contextual awareness is what sets container firewalls apart from traditional firewalls, and is crucial for ensuring the integrity and confidentiality of data within container environments.
Dynamic Scaling and Adaptability
Another key feature of container firewalls is their ability to dynamically scale and adapt to changes in the container environment. Given the highly dynamic nature of these environments, with containers often being spun up and torn down in a matter of minutes, this is a crucial capability.
A container firewall can automatically adjust its security policies and rules as containers are added or removed, ensuring consistent and effective protection regardless of the size or complexity of the environment.
This dynamic scaling and adaptability is also crucial for supporting the agile development processes that are often associated with container technology. By providing continuous and flexible protection, a container firewall enables organizations to embrace the speed and efficiency of containerization without compromising on security.
Microsegmentation
Microsegmentation is a security technique that involves dividing a network into small, isolated segments to limit the potential impact of a breach. This is particularly useful in container environments, where the rapid creation and destruction of containers can create a large attack surface.
A container firewall supports microsegmentation by enforcing security policies at the individual container level. This means that even if one container is compromised, the impact can be contained and prevented from spreading to other containers.
This ability to isolate and limit potential threats is a key feature of container firewalls, and is crucial for maintaining the integrity and confidentiality of data within container environments.
5 Reasons You Need a Container Firewall
Enhanced Network Security
One of the most compelling reasons to invest in a container firewall is the enhanced network security it provides. By filtering network traffic, enforcing granular security policies, and supporting microsegmentation, a container firewall provides a robust layer of protection for container environments.
This is particularly important given the dynamic and complex nature of these environments. With containers constantly being created and destroyed, the potential attack surface is large and constantly changing. A container firewall can help mitigate this risk by providing continuous and effective protection.
Segmentation and Isolation
Another key reason to invest in a container firewall is the segmentation and isolation it provides. By enforcing security policies at the individual container level, a container firewall can contain and limit the impact of a potential breach.
This is particularly important in container environments, where a single compromised container can potentially impact many others. By isolating each container, a container firewall can help prevent the spread of threats and minimize the potential damage.
Compliance and Regulatory Requirements
Numerous industries, such as healthcare, finance, and eCommerce, must adhere to stringent regulations to protect sensitive data. These regulations often mandate specific security controls, including firewalls.
A container firewall plays a significant role in fulfilling these requirements because it provides a security layer around each container. It restricts unauthorized access, detects threats, and protects sensitive data. By implementing a container firewall, organizations can demonstrate their commitment to data protection, avoid hefty fines for non-compliance, and maintain their reputation in the industry.
Moreover, a container firewall can help meet specific standards such as PCI DSS for payment card data, HIPAA for health information, and GDPR for personal data of EU citizens. It provides audit trails, access controls, and security measures required by these standards.
Improved Visibility and Monitoring
A container firewall offers improved visibility and monitoring over traditional firewalls. Traditional firewalls only provide a high-level view of network traffic, making it difficult to identify threats in a containerized environment accurately.
By contrast, a container firewall offers granular visibility into the container environment. It monitors the traffic between containers, identifies unusual behavior, and detects potential threats. This visibility is crucial to understand the security posture of your containerized applications and to make informed decisions about security controls.
Moreover, container firewalls provide real-time alerts about potential threats, allowing you to respond promptly and mitigate risks. They also offer detailed reports about network traffic, security incidents, and firewall rules, providing valuable insights for security analysis and compliance auditing.
Integration with DevOps and CI/CD Pipelines
Container firewalls seamlessly integrate with DevOps practices and Continuous Integration/Continuous Delivery (CI/CD) pipelines. They can be easily incorporated into the development process, ensuring that security controls are implemented from the early stages of application development.
This integration is critical for DevSecOps, a practice that embeds security into the DevOps process. With a container firewall, security policies can be defined as code and version-controlled along with the application code. This approach ensures that security controls are consistently applied across all containers and updates.
Moreover, a container firewall can automatically adapt to changes in the container environment, such as the addition of new containers or changes in network topology. This feature is particularly useful in a CI/CD pipeline, where changes are frequent and need to be secured immediately.
In conclusion, a container firewall is an essential security control for containerized applications. It helps fulfill compliance requirements, provides improved visibility and monitoring, and integrates with DevOps and CI/CD pipelines. By setting up and managing a container firewall, you can significantly enhance the security of your container environment and help your organization comply with industry standards.
Author Bio: Gilad David Maayan
Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Imperva, Samsung NEXT, NetApp and Check Point, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership. Today he heads Agile SEO, the leading marketing agency in the technology industry.
LinkedIn: https://www.linkedin.com/in/giladdavidmaayan/
Checking rdesktop and xrdp with PVS-Studio
Sergey Larin
This is the second post in our series of articles about the results of checking open-source software working with the RDP protocol. Today we are going to take a look at the rdesktop client and xrdp server.
The analysis was performed by PVS-Studio. This is a static analyzer for code written in C, C++, C#, and Java, and it runs on Windows, Linux, and macOS.
I will be discussing only those bugs that looked most interesting to me. On the other hand, since the projects are pretty small, there aren't many bugs in them anyway :).
Note. The previous article about the check of FreeRDP is available here.
rdesktop
rdesktop is a free RDP client for UNIX-based systems. It can also run on Windows if built under Cygwin. rdesktop is released under GPLv3.
This is a very popular client. It is used as a default client on ReactOS, and you can also find third-party graphical front-ends to go with it. The project is pretty old, though: it was released for the first time on April 4, 2001, and is 17 years old, as of this writing.
As I already said, the project is very small - about 30 KLOC, which is a bit strange considering its age. Compare that with FreeRDP with its 320 KLOC. Here's Cloc's output:
(Image: Cloc output for rdesktop)
Unreachable code
V779 Unreachable code detected. It is possible that an error is present. rdesktop.c 1502
int
main(int argc, char *argv[])
{
....
return handle_disconnect_reason(deactivated, ext_disc_reason);
if (g_redirect_username)
xfree(g_redirect_username);
xfree(g_username);
}
The first error is found immediately in the main function: the code following the return statement was meant to free the memory allocated earlier. But this defect isn't dangerous because all previously allocated memory will be freed by the operating system once the program terminates.
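A possible fix, shown here only as a sketch rather than the project's actual patch, is to perform the cleanup before returning:
/* free the allocated strings first, then return the exit status */
if (g_redirect_username)
    xfree(g_redirect_username);
xfree(g_username);
return handle_disconnect_reason(deactivated, ext_disc_reason);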
No error handling
V557 Array underrun is possible. The value of 'n' index could reach -1. rdesktop.c 1872
RD_BOOL
subprocess(char *const argv[], str_handle_lines_t linehandler, void *data)
{
int n = 1;
char output[256];
....
while (n > 0)
{
n = read(fd[0], output, 255);
output[n] = '\0'; // <=
str_handle_lines(output, &rest, linehandler, data);
}
....
}
The file contents are read into the buffer until EOF is reached. At the same time, this code lacks an error handling mechanism: if something goes wrong, read will return -1 and the next statement will write the terminating zero to output[-1], outside the bounds of the output array.
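One way to harden the loop, assuming that a failed or empty read should simply end it, is to test the result of read before using it as an index:
/* stop on EOF (0) and on error (-1) before touching output[n] */
while ((n = read(fd[0], output, 255)) > 0)
{
    output[n] = '\0';
    str_handle_lines(output, &rest, linehandler, data);
}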
Using EOF in char
V739 EOF should not be compared with a value of the 'char' type. The '(c = fgetc(fp))' should be of the 'int' type. ctrl.c 500
int
ctrl_send_command(const char *cmd, const char *arg)
{
char result[CTRL_RESULT_SIZE], c, *escaped;
....
while ((c = fgetc(fp)) != EOF && index < CTRL_RESULT_SIZE && c != '\n')
{
result[index] = c;
index++;
}
....
}
This code implements incorrect EOF handling: if fgetc returns a character whose code is 0xFF, it will be interpreted as the end of file (EOF).
EOF is a constant typically defined as -1. For example, in the CP1251 encoding, the last letter of the Russian alphabet is encoded as 0xFF, which corresponds to the number -1 in type char. It means that the 0xFF character, just like EOF (-1), will be interpreted as the end of file. To avoid errors like that, the result returned by the fgetc function should be stored in a variable of type int.
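A minimal sketch of such a fix is to declare the variable that receives the result of fgetc as int, so that EOF can be told apart from the 0xFF character:
int c;  /* int, not char, so EOF (-1) is not confused with the 0xFF character */
....
while ((c = fgetc(fp)) != EOF && index < CTRL_RESULT_SIZE && c != '\n')
{
    result[index] = (char) c;
    index++;
}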
Typos
Snippet 1
V547 Expression 'write_time' is always false. disk.c 805
RD_NTSTATUS
disk_set_information(....)
{
time_t write_time, change_time, access_time, mod_time;
....
if (write_time || change_time)
mod_time = MIN(write_time, change_time);
else
mod_time = write_time ? write_time : change_time; // <=
....
}
The author of this code must have accidentally used the || operator instead of && in the condition. Let's see what values the variables write_time and change_time can have:
• Both variables have 0. In this case, execution moves on to the else branch: the mod_time variable will always be evaluated to 0 no matter what the next condition is.
• One of the variables has 0. In this case, mod_time will be assigned 0 (given that the other variable has a non-negative value) since MIN will choose the least of the two.
• Neither variable has 0: the minimum value is chosen.
Changing that line to write_time && change_time will fix the behavior:
• Only one or neither variable has 0: the non-zero value is chosen.
• Neither variable has 0: the minimum value is chosen.
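In code, the suggested change would look like this:
if (write_time && change_time)
    mod_time = MIN(write_time, change_time);
else
    mod_time = write_time ? write_time : change_time;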
Snippet 2
V547 Expression is always true. Probably the '&&' operator should be used here. disk.c 1419
static RD_NTSTATUS
disk_device_control(RD_NTHANDLE handle, uint32 request, STREAM in,
STREAM out)
{
....
if (((request >> 16) != 20) || ((request >> 16) != 9))
return RD_STATUS_INVALID_PARAMETER;
....
}
Again, it looks like the problem of using the wrong operator - either || instead of && or == instead of != because the variable can't store the values 20 and 9 at the same time.
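If the intention was to reject any request whose high word is neither 20 nor 9, the check was presumably meant to read:
if (((request >> 16) != 20) && ((request >> 16) != 9))
    return RD_STATUS_INVALID_PARAMETER;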
Unlimited string copying
V512 A call of the 'sprintf' function will lead to overflow of the buffer 'fullpath'. disk.c 1257
RD_NTSTATUS
disk_query_directory(....)
{
....
char *dirname, fullpath[PATH_MAX];
....
/* Get information for directory entry */
sprintf(fullpath, "%s/%s", dirname, pdirent->d_name);
....
}
If you could follow the function to the end, you'd see that the code is OK, but it may get broken one day: just one careless change will end up with a buffer overflow since sprintf is not limited in any way, so concatenating the paths could take execution beyond the array bounds. We recommend replacing this call with snprintf(fullpath, PATH_MAX, ....).
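A sketch of the suggested replacement:
snprintf(fullpath, PATH_MAX, "%s/%s", dirname, pdirent->d_name);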
Redundant condition
V560 A part of conditional expression is always true: add > 0. scard.c 507
static void
inRepos(STREAM in, unsigned int read)
{
SERVER_DWORD add = 4 - read % 4;
if (add < 4 && add > 0)
{
....
}
}
The add > 0 check doesn't make any difference: read % 4 is always less than 4, so add = 4 - read % 4 is always at least 1 and therefore greater than zero.
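So the condition can be reduced to the only meaningful part:
SERVER_DWORD add = 4 - read % 4;
if (add < 4)
{
    ....
}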
xrdp
xrdp is an open-source RDP server. The project consists of two parts:
• xrdp - the protocol implementation. It is released under Apache 2.0.
• xorgxrdp - a collection of Xorg drivers to be used with xrdp. It is released under the X11 license (essentially MIT, plus a clause prohibiting the use of the copyright holder's name in advertising).
The development is based on rdesktop and FreeRDP. Originally, in order to be able to work with graphics, you would have to use a separate VNC server or a special X11 server with RDP support, X11rdp, but those became unnecessary with the release of xorgxrdp.
We won't be talking about xorgxrdp in this article.
Just like the previous project, xrdp is a tiny one, consisting of about 80 KLOC.
More typos
V525 The code contains the collection of similar blocks. Check items 'r', 'g', 'r' in lines 87, 88, 89. rfxencode_rgb_to_yuv.c 87
static int
rfx_encode_format_rgb(const char *rgb_data, int width, int height,
int stride_bytes, int pixel_format,
uint8 *r_buf, uint8 *g_buf, uint8 *b_buf)
{
....
switch (pixel_format)
{
case RFX_FORMAT_BGRA:
....
while (x < 64)
{
*lr_buf++ = r;
*lg_buf++ = g;
*lb_buf++ = r; // <=
x++;
}
....
}
....
}
This code comes from the librfxcodec library, which implements the jpeg2000 codec for working with RemoteFX. The "red" value is written twice - into both the red and the blue output buffers - while the "blue" value is never stored at all. Defects like this typically result from copy-paste.
The same bug was found in the similar function rfx_encode_format_argb:
V525 The code contains the collection of similar blocks. Check items 'a', 'r', 'g', 'r' in lines 260, 261, 262, 263. rfxencode_rgb_to_yuv.c 260
while (x < 64)
{
*la_buf++ = a;
*lr_buf++ = r;
*lg_buf++ = g;
*lb_buf++ = r;
x++;
}
Array declaration
V557 Array overrun is possible. The value of 'i - 8' index could reach 129. genkeymap.c 142
// evdev-map.c
int xfree86_to_evdev[137-8+1] = {
....
};
// genkeymap.c
extern int xfree86_to_evdev[137-8];
int main(int argc, char **argv)
{
....
for (i = 8; i <= 137; i++) /* Keycodes */
{
if (is_evdev)
e.keycode = xfree86_to_evdev[i-8];
....
}
....
}
In genkeymap.c the array is declared with one element fewer (137-8 = 129) than its definition in evdev-map.c (137-8+1 = 130). No bug actually occurs, because evdev-map.c defines the array with the correct size, so the index i-8 (at most 129) stays within bounds; this makes it a declaration/definition mismatch rather than a true error.
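One way to keep the two files in sync is to declare the size once in a shared header; a sketch (the header and macro names are hypothetical, not the project's actual layout):

/* evdev-map.h - hypothetical shared header */
#define XFREE86_TO_EVDEV_SIZE (137 - 8 + 1)
extern int xfree86_to_evdev[XFREE86_TO_EVDEV_SIZE];

/* evdev-map.c */
#include "evdev-map.h"
int xfree86_to_evdev[XFREE86_TO_EVDEV_SIZE] = {
    ....
};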
Incorrect comparison
V560 A part of conditional expression is always false: (cap_len < 0). xrdp_caps.c 616
// common/parse.h
#if defined(B_ENDIAN) || defined(NEED_ALIGN)
#define in_uint16_le(s, v) do \
....
#else
#define in_uint16_le(s, v) do \
{ \
(v) = *((unsigned short*)((s)->p)); \
(s)->p += 2; \
} while (0)
#endif
int
xrdp_caps_process_confirm_active(struct xrdp_rdp *self, struct stream *s)
{
int cap_len;
....
in_uint16_le(s, cap_len);
....
if ((cap_len < 0) || (cap_len > 1024 * 1024))
{
....
}
....
}
The value of an unsigned short is read into a variable of type int and then checked for being negative. The check is pointless: a 16-bit unsigned value stored in a wider signed type can never be negative.
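A sketch of the simplified check - only the upper bound needs testing:

in_uint16_le(s, cap_len);
....
if (cap_len > 1024 * 1024) /* cap_len is in the range 0..65535 here, so it is never negative */
{
    ....
}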
Redundant checks
V560 A part of conditional expression is always true: (bpp != 16). libxrdp.c 704
int EXPORT_CC
libxrdp_send_pointer(struct xrdp_session *session, int cache_idx,
char *data, char *mask, int x, int y, int bpp)
{
....
if ((bpp == 15) && (bpp != 16) && (bpp != 24) && (bpp != 32))
{
g_writeln("libxrdp_send_pointer: error");
return 1;
}
....
}
The not-equal checks are redundant: once bpp == 15 holds, they are all true, so the whole condition reduces to bpp == 15. The programmer most likely meant to filter off unsupported arguments - for example by writing bpp != 15 instead of bpp == 15.
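A sketch of what the check was presumably meant to look like (rejecting every depth other than 15, 16, 24 and 32):

if ((bpp != 15) && (bpp != 16) && (bpp != 24) && (bpp != 32))
{
    g_writeln("libxrdp_send_pointer: error");
    return 1;
}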
Conclusion
Today's check didn't reveal any critical bugs, but it did turn up a number of minor defects. That said, these projects, small as they are, are used in many systems and could use some polishing. A small project won't necessarily contain many bugs, so testing the analyzer only on small projects isn't enough to reliably evaluate its effectiveness. This subject is discussed in more detail in the article "Feelings confirmed by numbers".
The demo version of PVS-Studio is available on our website.
Use PVS-Studio to search for bugs in C, C++, C# and Java
We invite you to check your project's code with PVS-Studio. Just one bug found in your own project will show you the benefits of static code analysis better than a dozen articles.
goto PVS-Studio;
Sergey Larin
Retain on MQTT Node not working
Hello Folks,
I have recently been using the MQTT node in Node-RED and trying to use the retain functionality on the node.
Has anyone else had an issue where, when using the retain function, no data is updated on that specific topic?
Thank you,
What broker are you using?
Do you know how retain works?
Yes - retain keeps the latest message on the broker, and when a new subscriber connects to the topic they will get that latest message?
For context - I have MQTT set up with QoS 0 and retain false. The issue I'm having is that when clients connect after a while, no data is shown on MQTT, as it forgets it, but when I set retain to true, no data changes.
So what broker are you using?
Do retained messages appear on any other clients?
Are you using MQTT v5 with a message expiry interval?
Is that a typo? If you set "retain false", new subscribers won't get a retained message.
No, it doesn't appear on any other clients.
I am using MQTT v5 on AWS, but I am unsure how to check the message expiry interval - any advice on how to find this?
No typo. The message must be expiring after a set time.
It was my understanding AWS doesn't support MQTT v5?
Have you tried with a local broker (or is Node-RED also running in AWS)?
Node-RED is running with the AWS broker - apologies, I thought this was MQTT v5.
Does this mean I can't retain messages?
No. It means it's not a "message expiry interval" issue (since that is a v5 feature).
You need to look into AWS and retained messages. Try their forums.
What I can say is that MQTT and retained messages work perfectly well on my system and thousands of others. The difference is that you are on AWS.
Ah okay - so, just checking online, AWS doesn't actually support retained messages and instead disconnects the device.
I will do some digging into whether there is a message expiry interval instead.
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.