content
stringlengths 228
999k
| pred_label
stringclasses 1
value | pred_score
float64 0.5
1
|
---|---|---|
Archive
Posts Tagged โArrayListโ
HashMap->ArrayList breakdown and reconstitution of documents inย Java
March 2, 2010 Leave a comment
Given the general utility of HashMaps with ArrayLists as values, as illustrated in the earlier post HashMap key -> ArrayList, itโs fairly obvious that we can in most cases (actually all, but Iโm open to persuasion that somewhere an exception may exist) represent a document, such as a short story by Lord Dunsany, a play by William Shakespeare, an XML or Soap document, in fact any document in which repetitious tokens occur or are likely to occur, as a set of keys indicating the tokens, and an arraylist of their relative positions in the document.
Letโs as an example take a block of text such as that found in Churchillโs speech of November 10, 1942 in the wake of Alexander and Montgomery repelling Rommelโs thrust at Alexandria: โNow this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.โ Itโs short, sweet, and contains the criteria we need for a good example, notably multiple repetitive tokens. For the purposes of this examination weโll totally neuter the punctuation and capitalisation so that we just have tokens:now this is not the end it is not even the beginning of the end but it is perhaps the end of the beginning. At this juncture it is still recognisable as Churchillโs text. Letโs turn this into a reproducible hashmap in which each unique word is a key to a value set which is an ArrayList containing instances of the positions of the word in the text which we supplied it with. I wonโt bore you with repeating the addArrayListItem method (itโs in the post immediately below this oneโฆ) and the printOutHashMap method is somewhat self-evident but included for compeleteness.
public void someBoringMethodSig()
{
String inString = "now this is not the end it is not even the beginning of the end ";
inString += "but it is perhaps the end of the beginning";
HashMap ha = new HashMap();
String[] tok = inString.split(" ");
int k = tok.length;
ย for (int i = 0; i< k; i++)
ย ย ย ย ย ย ย {
ย ย ย ย ย ย ย addArrayListItem(ha,tok[i],i );
ย ย ย ย ย ย ย }
ย ย ย printOutHashMap(ha);
ย ย ย // assemble(ha);
ย ย ย }
public void printOutHashMap(HashMap hamp)
{
Collection c = hamp.values();
Collection b = hamp.keySet();
Iterator itr = c.iterator();
Iterator wd = b.iterator();
ย ย ย System.out.println("HASHMAP CONTENTS");
ย ย ย System.out.println("================");
while(itr.hasNext()){
System.out.print("Key: >>" + wd.next() + "<< Values: " );//+ itr.next()); }
ย ย ย ArrayList x = (ArrayList) itr.next();
ย ย ย Iterator lit = x.iterator();
ย ย ย while (lit.hasNext())
ย ย ย {System.out.print(lit.next() + " ");
ย ย ย }
ย ย ย System.out.println("");
}
}
This will give us an output which looks very much like
HASHMAP CONTENTS
================
Key: >>not<< Values: 3 8
Key: >>of<< Values: 12 21
Key: >>but<< Values: 15
Key: >>is<< Values: 2 7 17
Key: >>beginning<< Values: 11 23
Key: >>it<< Values: 6 16
Key: >>now<< Values: 0
Key: >>even<< Values: 9
Key: >>the<< Values: 4 10 13 19 22
Key: >>perhaps<< Values: 18
Key: >>this<< Values: 1
Key: >>end<< Values: 5 14 20
We can obviously reassemble this by recourse to the values, either in forward (starting at 0) or in reverse order (finding the max int value) and reassembling in a decremental loop. The implications of this are significant when one considers the potential applications of this sort of document treatment from a number of perspectives, notably IT security, improved data transmissibility by reduction, cryptography (where one would not transmit the keys but a serializable numeric indicator to a separately transmitted (or held) common dictionary, etc, etc.
Generally the reassembly process looks something like the following :
public void assemble(HashMap hamp)
{
Collection c = hamp.values();
Collection b = hamp.keySet();
int posCtr = 0;
int max = 0;
Iterator xAll = hamp.values().iterator();
while(xAll.hasNext()){
ArrayList xxx = (ArrayList) xAll.next();
max += xxx.size();
}
String stOut = "";
boolean unfinished = true;
while (unfinished)
{
Iterator itr = c.iterator();
Iterator wd = b.iterator();
start:
while(unfinished){
String tmp = (String) wd.next() ; // next key
ArrayList x = (ArrayList) itr.next(); // next ArrayList
Iterator lit = x.iterator(); // next ArrayList iterator
while (lit.hasNext())
{
//System.out.println(lit.next());
if((Integer.valueOf((Integer) lit.next()) == posCtr )) // if it matches the positional counter then this is the word we want...
{
stOut = stOut + " " + tmp ; // add the word in tmp to the output String
posCtr++;
wd = b.iterator(); //don't forget to reset the iterators to the beginning!
itr = c.iterator();
if(posCtr == max) // last word to deal with....
{
System.out.println("FINISHED");
unfinished = false;}
break start; // otherwise goto the start point, rinse, repeat (irl this would be handled more elegantly, illustrative code here!)
}
}
}
}
System.out.println("ASSEMBLED: " + stOut);
}
This can obviously be improved in a number of ways, particularly moving from a String-based paradigm to one in which char[] array representations replace Strings in the HashMap.
Advertisements
HashMap key ->ย ArrayList
March 1, 2010 Leave a comment
In the earlier and gentle introduction article on HashMaps, HashMaps and how to bend them to your will, I looked at how we might use a HashMap to do vocabulary count assessment of a passed String document. HashMaps are, as we have seen, ostensibly dumb key-value sets. I say ostensibly: the given key or value does not however have to be a single item as HashMap effects storage in both value and key as Object types. From a programmatic perspective this realisation has significant implications.
Letโs look at how we might use a keyed HashMap paired with values expressed as ArrayLists. The following methods will give you an idea of how to put int values in per key, the first method using a single value, the second handling an array of values. Obviously these methods are ripe for extension, exception handling and overloading but they are stripped down here from an illustrative point of view:
public HashMap<String, ArrayList> addArrayListItem(HashMap<String, ArrayList> hamp,String sKey,int value )
{
if(!hamp.containsKey(sKey)) // no key found?
{
ArrayList lx = new ArrayList();
lx.add(value);
hamp.put(sKey, lx);
}
else // the key pre-exists so we just add the (new) value to the existing arraylist
{
ArrayList lx = hamp.get(sKey) ;
lx.add(value);
hamp.put(sKey, lx);
}
return hamp;
}
public HashMap<String, ArrayList> addArrayListItems(HashMap<String, ArrayList> hamp,String sKey,int[] value )
{
int iSize = value.length;
ArrayList iList = (ArrayList) Arrays.asList(value);
if(!hamp.containsKey(sKey)) // no key found? // create key and add list
{
ArrayList lx = new ArrayList();
lx.addAll(iList);
hamp.put(sKey, lx);
}
else // the key pre-exists so we just add the array of values to the existing list
{
ArrayList lx = hamp.get(sKey) ;
lx.addAll(iList);
hamp.put(sKey, lx);
}
return hamp;
}
The getter method to retrieve by key is even easier (although again this method is open to extension, overloading, exception handling etc):
public ArrayList<int[]> getItems(HashMap<String, ArrayList> hamp,String sKey)
{
return hamp.get(sKey);
}
We can moreover use an ArrayList in an arbitrary, or, indeed, in a very carefully structured, fashion, e.g. to store not only integers but Strings, booleans, whatever, in order to achieve our purpose. Again, an untyped ArrayList is Object type agnostic as the following snippet of dummy code illustrates.
ArrayList al = new ArrayList();
int i = 1;
String s = "Monty Python";
boolean isVerySilly = true;
al.add(i);
al.add(s);
al.add(isVerySilly);
Iterator it = al.iterator();
while (it.hasNext())
{
System.out.println(it.next());
}
The realisation of this has similarly significant implications for the intelligent programmer. Exploiting the HashMap/ArrayList (or similar collection type) combination allows you to do some very powerful things with the data coming your way.
|
__label__pos
| 0.933579 |
gui: Do not decode the pressed key multiple times.
[paraslash.git] / spx_common.c
1 /*
2 * Copyright (C) 2002-2006 Jean-Marc Valin
3 * Copyright (C) 2010 Andre Noll <[email protected]>
4 *
5 * Licensed under the GPL v2, see file COPYING.
6 */
7
8 /**
9 * \file spx_common.c Functions used by the speex decoder and the speex audio
10 * format handler.
11 */
12
13 /* This file is based on speexdec.c, by Jean-Marc Valin, see below. */
14
15 /* Copyright (C) 2002-2006 Jean-Marc Valin
16 File: speexdec.c
17
18 Redistribution and use in source and binary forms, with or without
19 modification, are permitted provided that the following conditions
20 are met:
21
22 - Redistributions of source code must retain the above copyright
23 notice, this list of conditions and the following disclaimer.
24
25 - Redistributions in binary form must reproduce the above copyright
26 notice, this list of conditions and the following disclaimer in the
27 documentation and/or other materials provided with the distribution.
28
29 - Neither the name of the Xiph.org Foundation nor the names of its
30 contributors may be used to endorse or promote products derived from
31 this software without specific prior written permission.
32
33 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
34 ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
35 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
36 A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR
37 CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
38 EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
39 PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
40 PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
41 LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
42 NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
43 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
44 */
45 #include <regex.h>
46 #include <speex/speex_header.h>
47 #include <speex/speex_stereo.h>
48 #include <speex/speex_callbacks.h>
49
50 #include "para.h"
51 #include "error.h"
52 #include "spx.h"
53
54 /**
55 * Wrapper for speex_decoder_ctl().
56 *
57 * \param state Decoder state.
58 * \param request ioctl-type request.
59 * \param ptr Value-result pointer.
60 *
61 * \return Standard.
62 */
63 static int spx_ctl(void *state, int request, void *ptr)
64 {
65 int ret = speex_decoder_ctl(state, request, ptr);
66
67 if (ret == 0) /* success */
68 return 1;
69 if (ret == -1)
70 return -E_SPX_CTL_BAD_RQ;
71 return -E_SPX_CTL_INVAL;
72 }
73
74 /**
75 * Obtain information about a speex file from an ogg packet.
76 *
77 * \param packet Start of the ogg packet.
78 * \param bytes Length of the ogg packet.
79 * \param shi Result pointer.
80 *
81 * \return Standard.
82 */
83 int spx_process_header(unsigned char *packet, long bytes,
84 struct spx_header_info *shi)
85 {
86 int ret;
87 spx_int32_t enh_enabled = 1;
88 SpeexHeader *h = speex_packet_to_header((char *)packet, bytes);
89
90 if (!h)
91 return -E_SPX_HEADER;
92 ret = -E_SPX_HEADER_MODE;
93 if (h->mode >= SPEEX_NB_MODES || h->mode < 0)
94 goto out;
95 shi->mode = speex_lib_get_mode(h->mode);
96
97 ret = -E_SPX_VERSION;
98 if (h->speex_version_id > 1)
99 goto out;
100 if (shi->mode->bitstream_version < h->mode_bitstream_version)
101 goto out;
102 if (shi->mode->bitstream_version > h->mode_bitstream_version)
103 goto out;
104
105 ret = -E_SPX_DECODER_INIT;
106 shi->state = speex_decoder_init(shi->mode);
107 if (!shi->state)
108 goto out;
109
110 ret = spx_ctl(shi->state, SPEEX_SET_ENH, &enh_enabled);
111 if (ret < 0)
112 goto out;
113 ret = spx_ctl(shi->state, SPEEX_GET_FRAME_SIZE, &shi->frame_size);
114 if (ret < 0)
115 goto out;
116 shi->sample_rate = h->rate;
117 ret = spx_ctl(shi->state, SPEEX_SET_SAMPLING_RATE, &shi->sample_rate);
118 if (ret < 0)
119 goto out;
120 shi->nframes = h->frames_per_packet;
121 shi->channels = h->nb_channels;
122 if (shi->channels != 1) {
123 SpeexCallback callback = {
124 .callback_id = SPEEX_INBAND_STEREO,
125 .func = speex_std_stereo_request_handler,
126 .data = &shi->stereo,
127 };
128 shi->stereo = (SpeexStereoState)SPEEX_STEREO_STATE_INIT;
129 ret = spx_ctl(shi->state, SPEEX_SET_HANDLER, &callback);
130 if (ret < 0)
131 goto out;
132 shi->channels = 2;
133 }
134 ret = spx_ctl(shi->state, SPEEX_GET_BITRATE, &shi->bitrate);
135 if (ret < 0)
136 goto out;
137 PARA_NOTICE_LOG("%d Hz, %s, %s, %s, %d bits/s\n",
138 shi->sample_rate, shi->mode->modeName,
139 shi->channels == 1? "mono" : "stereo",
140 h->vbr? "vbr" : "cbr",
141 shi->bitrate
142 );
143 shi->extra_headers = h->extra_headers;
144 ret = 1;
145 out:
146 free(h);
147 return ret;
148 }
|
__label__pos
| 0.928949 |
vn.py้ๅ็คพๅบ
By Traders, For Traders.
Administrator
avatar
ๅ ๅ
ฅไบ:
ๅธๅญ: 198
ๅฃฐๆ: 35
ไฝ่
๏ผ lijiang ๏ผๆฅๆบ๏ผ็ปดๆฉ็ๆดพ่ฎบๅ ๏ผๅบ็จ็ๆฌ vn.demo ๏ผ2016ๅนด็ๆฌ๏ผ
ๅฎ็ฐๆญฅ้ชค
1. ๅจMainWindowไธป็ชๅฃไธ๏ผๅขๅ ไปทๆ ผๅพ็็ชๅฃๅธๅฑใ
2. ๆทปๅ PriceWidgetๆจกๅ,ๅฎไน็ธๅ
ณๅๆฐ๏ผๅ้ใ
3. ่ฐ็จๅฝๆฐself.__connectMongo(),่ฟๆฅๆฐๆฎๅบ๏ผ
4. ๅๅงๅ็้ข๏ผๅนถไธ่ฝฝๅๅฒๆฐๆฎใ
5. ๆณจๅไบไปถ็ๅฌ๏ผๆดๆฐๅฎๆถๆฐๆฎ๏ผonBar()้่ฐ็จ็ปๅพๅฝๆฐใ
6. ็ปๅพๆจกๅ๏ผTICKๅพๆฏ่พ็ฎๅ๏ผKๅพๆฏไปpyqtgraph้็ดๆฅcopy่ฟๆฅ็ๆจกๅ๏ผๅ
ณ้ฎ็นๅจp.drawLineใ
ย
ๅฏๅจvn.demo/ctpdemo/demoMain.py๏ผ็นๅป้ๆฉ็ณป็ปใ็ปๅฝ่ดฆๅท๏ผๅจไปฃ็ ๆกๆฒๅ
ฅๅ็บฆไปฃ็ +enter้ฎใ
ๆๆๅพๅฆไธ๏ผ
ย
enter image description here
ย
ไปฃ็ ๅฎ็ฐๅฆไธ๏ผ
# encoding: UTF-8
"""
่ฏฅๆไปถไธญๅ
ๅซ็ๆฏไบคๆๅนณๅฐ็ไธๅฑUI้จๅ๏ผ
้่ฟๅพๅฝข็้ข่ฐ็จไธญ้ดๅฑ็ไธปๅจๅฝๆฐ๏ผๅนถ็ๆง็ธๅ
ณๆฐๆฎๆดๆฐใ
Monitorไธป่ฆ่ด่ดฃ็ๆงๆฐๆฎ๏ผๆ้จๅๅ
ๅซไธปๅจๅ่ฝใ
Widgetไธป่ฆ็จไบ่ฐ็จไธปๅจๅ่ฝ๏ผๆ้จๅๅ
ๅซๆฐๆฎ็ๆงใ
"""
from __future__ import division
import time
import sys
import shelve
from collections import OrderedDict
import sip
from PyQt4 import QtCore, QtGui
import pyqtgraph as pg
import numpy as np
from eventEngine import *
from pymongo import MongoClient
from pymongo.errors import *
from datetime import datetime, timedelta
########################################################################
class LogMonitor(QtGui.QTableWidget):
"""็จไบๆพ็คบๆฅๅฟ"""
signal = QtCore.pyqtSignal(type(Event()))
#----------------------------------------------------------------------
def __init__(self, eventEngine, parent=None):
"""Constructor"""
super(LogMonitor, self).__init__(parent)
self.__eventEngine = eventEngine
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
"""ๅๅงๅ็้ข"""
self.setWindowTitle(u'ๆฅๅฟ')
self.setColumnCount(2)
self.setHorizontalHeaderLabels([u'ๆถ้ด', u'ๆฅๅฟ'])
self.verticalHeader().setVisible(False) # ๅ
ณ้ญๅทฆ่พน็ๅ็ด่กจๅคด
self.setEditTriggers(QtGui.QTableWidget.NoEditTriggers) # ่ฎพไธบไธๅฏ็ผ่พ็ถๆ
# ่ชๅจ่ฐๆดๅๅฎฝ
self.horizontalHeader().setResizeMode(0, QtGui.QHeaderView.ResizeToContents)
self.horizontalHeader().setResizeMode(1, QtGui.QHeaderView.Stretch)
#----------------------------------------------------------------------
def registerEvent(self):
"""ๆณจๅไบไปถ็ๅฌ"""
# Qtๅพๅฝข็ปไปถ็GUIๆดๆฐๅฟ
้กปไฝฟ็จSignal/Slotๆบๅถ๏ผๅฆๅๆๅฏ่ฝๅฏผ่ด็จๅบๅดฉๆบ
# ๅ ๆญค่ฟ้ๅ
ๅฐๅพๅฝขๆดๆฐๅฝๆฐไฝไธบSlot๏ผๅไฟกๅท่ฟๆฅ่ตทๆฅ
# ็ถๅๅฐไฟกๅท็่งฆๅๅฝๆฐๆณจๅๅฐไบไปถ้ฉฑๅจๅผๆไธญ
self.signal.connect(self.updateLog)
self.__eventEngine.register(EVENT_LOG, self.signal.emit)
#----------------------------------------------------------------------
def updateLog(self, event):
"""ๆดๆฐๆฅๅฟ"""
# ่ทๅๅฝๅๆถ้ดๅๆฅๅฟๅ
ๅฎน
t = time.strftime('%H:%M:%S',time.localtime(time.time()))
log = event.dict_['log']
# ๅจ่กจๆ ผๆไธๆนๆๅ
ฅไธ่ก
self.insertRow(0)
# ๅๅปบๅๅ
ๆ ผ
cellTime = QtGui.QTableWidgetItem(t)
cellLog = QtGui.QTableWidgetItem(log)
# ๅฐๅๅ
ๆ ผๆๅ
ฅ่กจๆ ผ
self.setItem(0, 0, cellTime)
self.setItem(0, 1, cellLog)
########################################################################
class AccountMonitor(QtGui.QTableWidget):
"""็จไบๆพ็คบ่ดฆๆท"""
signal = QtCore.pyqtSignal(type(Event()))
dictLabels = OrderedDict()
dictLabels['AccountID'] = u'ๆ่ต่
่ดฆๆท'
dictLabels['PreBalance'] = u'ๆจ็ป'
dictLabels['CloseProfit'] = u'ๅนณไป็ไบ'
dictLabels['PositionProfit'] = u'ๆไป็ไบ'
dictLabels['Commission'] = u'ๆ็ปญ่ดน'
dictLabels['CurrMargin'] = u'ๅฝๅไฟ่ฏ้'
dictLabels['Balance'] = u'่ดฆๆท่ต้'
dictLabels['Available'] = u'ๅฏ็จ่ต้'
dictLabels['WithdrawQuota'] = u'ๅฏๅ่ต้'
dictLabels['FrozenCash'] = u'ๅป็ป่ต้'
dictLabels['FrozenMargin'] = u'ๅป็ปไฟ่ฏ้'
dictLabels['FrozenCommission'] = u'ๅป็ปๆ็ปญ่ดน'
dictLabels['Withdraw'] = u'ๅบ้'
dictLabels['Deposit'] = u'ๅ
ฅ้'
#----------------------------------------------------------------------
def __init__(self, eventEngine, parent=None):
"""Constructor"""
super(AccountMonitor, self).__init__(parent)
self.__eventEngine = eventEngine
self.dictAccount = {} # ็จๆฅไฟๅญ่ดฆๆทๅฏนๅบ็ๅๅ
ๆ ผ
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
""""""
self.setWindowTitle(u'่ดฆๆท')
self.setColumnCount(len(self.dictLabels))
self.setHorizontalHeaderLabels(self.dictLabels.values())
self.verticalHeader().setVisible(False) # ๅ
ณ้ญๅทฆ่พน็ๅ็ด่กจๅคด
self.setEditTriggers(QtGui.QTableWidget.NoEditTriggers) # ่ฎพไธบไธๅฏ็ผ่พ็ถๆ
#----------------------------------------------------------------------
def registerEvent(self):
""""""
self.signal.connect(self.updateAccount)
self.__eventEngine.register(EVENT_ACCOUNT, self.signal.emit)
#----------------------------------------------------------------------
def updateAccount(self, event):
""""""
data = event.dict_['data']
accountid = data['AccountID']
# ๅฆๆไนๅๅทฒ็ปๆถๅฐ่ฟ่ฟไธช่ดฆๆท็ๆฐๆฎ, ๅ็ดๆฅๆดๆฐ
if accountid in self.dictAccount:
d = self.dictAccount[accountid]
for label, cell in d.items():
cell.setText(str(data[label]))
# ๅฆๅๆๅ
ฅๆฐ็ไธ่ก๏ผๅนถๆดๆฐ
else:
self.insertRow(0)
d = {}
for col, label in enumerate(self.dictLabels.keys()):
cell = QtGui.QTableWidgetItem(str(data[label]))
self.setItem(0, col, cell)
d[label] = cell
self.dictAccount[accountid] = d
########################################################################
class TradeMonitor(QtGui.QTableWidget):
"""็จไบๆพ็คบๆไบค่ฎฐๅฝ"""
signal = QtCore.pyqtSignal(type(Event()))
dictLabels = OrderedDict()
dictLabels['InstrumentID'] = u'ๅ็บฆไปฃ็ '
dictLabels['ExchangeID'] = u'ไบคๆๆ'
dictLabels['Direction'] = u'ๆนๅ'
dictLabels['OffsetFlag'] = u'ๅผๅนณ'
dictLabels['TradeID'] = u'ๆไบค็ผๅท'
dictLabels['TradeTime'] = u'ๆไบคๆถ้ด'
dictLabels['Volume'] = u'ๆฐ้'
dictLabels['Price'] = u'ไปทๆ ผ'
dictLabels['OrderRef'] = u'ๆฅๅๅท'
dictLabels['OrderSysID'] = u'ๆฅๅ็ณป็ปๅท'
dictDirection = {}
dictDirection['0'] = u'ไนฐ'
dictDirection['1'] = u' ๅ'
dictDirection['2'] = u'ETF็ณ่ดญ'
dictDirection['3'] = u'ETF่ตๅ'
dictDirection['4'] = u'ETF็ฐ้ๆฟไปฃ'
dictDirection['5'] = u'ๅบๅธๅ
ฅๅบ'
dictDirection['6'] = u'ๅบๅธๅบๅบ'
dictDirection['7'] = u'้
่ก'
dictDirection['8'] = u'่ฝฌๆ็ฎก'
dictDirection['9'] = u'ไฟก็จ่ดฆๆท้
่ก'
dictDirection['A'] = u'ๆ
ไฟๅไนฐๅ
ฅ'
dictDirection['B'] = u'ๆ
ไฟๅๅๅบ'
dictDirection['C'] = u'ๆ
ไฟๅ่ฝฌๅ
ฅ'
dictDirection['D'] = u'ๆ
ไฟๅ่ฝฌๅบ'
dictDirection['E'] = u'่่ตไนฐๅ
ฅ'
dictDirection['F'] = u'่่ตๅๅบ'
dictDirection['G'] = u'ๅๅธ่ฟๆฌพ'
dictDirection['H'] = u'ไนฐๅธ่ฟๅธ'
dictDirection['I'] = u'็ดๆฅ่ฟๆฌพ'
dictDirection['J'] = u'็ดๆฅๆขๅธ'
dictDirection['K'] = u'ไฝๅธๅ่ฝฌ'
dictDirection['L'] = u'OF็ณ่ดญ'
dictDirection['M'] = u'OF่ตๅ'
dictDirection['N'] = u'SFๆๅ'
dictDirection['O'] = u'SFๅๅนถ'
dictDirection['P'] = u'ๅคๅ
'
dictDirection['Q'] = u'่ฏๅธๅป็ป/่งฃๅป'
dictDirection['R'] = u'่กๆ'
dictOffset = {}
dictOffset['0'] = u'ๅผไป'
dictOffset['1'] = u' ๅนณไป'
dictOffset['2'] = u' ๅผบๅนณ'
dictOffset['3'] = u' ๅนณไป'
dictOffset['4'] = u' ๅนณๆจ'
dictOffset['5'] = u' ๅผบๅ'
dictOffset['6'] = u' ๆฌๅฐๅผบๅนณ'
#----------------------------------------------------------------------
def __init__(self, eventEngine, parent=None):
"""Constructor"""
super(TradeMonitor, self).__init__(parent)
self.__eventEngine = eventEngine
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
""""""
self.setWindowTitle(u'ๆไบค')
self.setColumnCount(len(self.dictLabels))
self.setHorizontalHeaderLabels(self.dictLabels.values())
self.verticalHeader().setVisible(False) # ๅ
ณ้ญๅทฆ่พน็ๅ็ด่กจๅคด
self.setEditTriggers(QtGui.QTableWidget.NoEditTriggers) # ่ฎพไธบไธๅฏ็ผ่พ็ถๆ
#----------------------------------------------------------------------
def registerEvent(self):
""""""
self.signal.connect(self.updateTrade)
self.__eventEngine.register(EVENT_TRADE, self.signal.emit)
#----------------------------------------------------------------------
def updateTrade(self, event):
""""""
data = event.dict_['data']
self.insertRow(0)
for col, label in enumerate(self.dictLabels.keys()):
if label == 'Direction':
try:
value = self.dictDirection[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
elif label == 'OffsetFlag':
try:
value = self.dictOffset[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
else:
value = str(data[label])
cell = QtGui.QTableWidgetItem(value)
self.setItem(0, col, cell)
########################################################################
class PositionMonitor(QtGui.QTableWidget):
"""็จไบๆพ็คบๆไป"""
signal = QtCore.pyqtSignal(type(Event()))
dictLabels = OrderedDict()
dictLabels['InstrumentID'] = u'ๅ็บฆไปฃ็ '
dictLabels['PosiDirection'] = u'ๆนๅ'
dictLabels['Position'] = u'ๆไป'
dictLabels['PositionCost'] = u'ๆไปๆๆฌ'
dictLabels['PositionProfit'] = u'ๆไป็ไบ'
dictPosiDirection = {}
dictPosiDirection['1'] = u' ๅ'
dictPosiDirection['2'] = u'ๅค'
dictPosiDirection['3'] = u' ็ฉบ'
#----------------------------------------------------------------------
def __init__(self, eventEngine, parent=None):
"""Constructor"""
super(PositionMonitor, self).__init__(parent)
self.__eventEngine = eventEngine
self.dictPosition = {} # ็จๆฅไฟๅญๆไปๅฏนๅบ็ๅๅ
ๆ ผ
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
""""""
self.setWindowTitle(u'ๆไป')
self.setColumnCount(len(self.dictLabels))
self.setHorizontalHeaderLabels(self.dictLabels.values())
self.verticalHeader().setVisible(False) # ๅ
ณ้ญๅทฆ่พน็ๅ็ด่กจๅคด
self.setEditTriggers(QtGui.QTableWidget.NoEditTriggers) # ่ฎพไธบไธๅฏ็ผ่พ็ถๆ
#----------------------------------------------------------------------
def registerEvent(self):
""""""
self.signal.connect(self.updatePosition)
self.__eventEngine.register(EVENT_POSITION, self.signal.emit)
#----------------------------------------------------------------------
def updatePosition(self, event):
""""""
data = event.dict_['data']
# ่ฟๆปค่ฟๅๅผไธบ็ฉบ็ๆ
ๅต
if data['InstrumentID']:
posid = data['InstrumentID'] + '.' + data['PosiDirection']
# ๅฆๆไนๅๅทฒ็ปๆถๅฐ่ฟ่ฟไธช่ดฆๆท็ๆฐๆฎ, ๅ็ดๆฅๆดๆฐ
if posid in self.dictPosition:
d = self.dictPosition[posid]
for label, cell in d.items():
if label == 'PosiDirection':
try:
value = self.dictPosiDirection[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
else:
value = str(data[label])
cell.setText(value)
# ๅฆๅๆๅ
ฅๆฐ็ไธ่ก๏ผๅนถๆดๆฐ
else:
self.insertRow(0)
d = {}
for col, label in enumerate(self.dictLabels.keys()):
if label == 'PosiDirection':
try:
value = self.dictPosiDirection[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
else:
value = str(data[label])
cell = QtGui.QTableWidgetItem(value)
self.setItem(0, col, cell)
d[label] = cell
self.dictPosition[posid] = d
########################################################################
class OrderMonitor(QtGui.QTableWidget):
"""็จไบๆพ็คบๆๆๆฅๅ"""
signal = QtCore.pyqtSignal(type(Event()))
dictLabels = OrderedDict()
dictLabels['OrderRef'] = u'ๆฅๅๅท'
dictLabels['InsertTime'] = u'ๅงๆๆถ้ด'
dictLabels['InstrumentID'] = u'ๅ็บฆไปฃ็ '
dictLabels['Direction'] = u'ๆนๅ'
dictLabels['CombOffsetFlag'] = u'ๅผๅนณ'
dictLabels['LimitPrice'] = u'ไปทๆ ผ'
dictLabels['VolumeTotalOriginal'] = u'ๅงๆๆฐ้'
dictLabels['VolumeTraded'] = u'ๆไบคๆฐ้'
dictLabels['StatusMsg'] = u'็ถๆไฟกๆฏ'
dictLabels['OrderSysID'] = u'็ณป็ป็ผๅท'
dictDirection = {}
dictDirection['0'] = u'ไนฐ'
dictDirection['1'] = u' ๅ'
dictDirection['2'] = u'ETF็ณ่ดญ'
dictDirection['3'] = u'ETF่ตๅ'
dictDirection['4'] = u'ETF็ฐ้ๆฟไปฃ'
dictDirection['5'] = u'ๅบๅธๅ
ฅๅบ'
dictDirection['6'] = u'ๅบๅธๅบๅบ'
dictDirection['7'] = u'้
่ก'
dictDirection['8'] = u'่ฝฌๆ็ฎก'
dictDirection['9'] = u'ไฟก็จ่ดฆๆท้
่ก'
dictDirection['A'] = u'ๆ
ไฟๅไนฐๅ
ฅ'
dictDirection['B'] = u'ๆ
ไฟๅๅๅบ'
dictDirection['C'] = u'ๆ
ไฟๅ่ฝฌๅ
ฅ'
dictDirection['D'] = u'ๆ
ไฟๅ่ฝฌๅบ'
dictDirection['E'] = u'่่ตไนฐๅ
ฅ'
dictDirection['F'] = u'่่ตๅๅบ'
dictDirection['G'] = u'ๅๅธ่ฟๆฌพ'
dictDirection['H'] = u'ไนฐๅธ่ฟๅธ'
dictDirection['I'] = u'็ดๆฅ่ฟๆฌพ'
dictDirection['J'] = u'็ดๆฅๆขๅธ'
dictDirection['K'] = u'ไฝๅธๅ่ฝฌ'
dictDirection['L'] = u'OF็ณ่ดญ'
dictDirection['M'] = u'OF่ตๅ'
dictDirection['N'] = u'SFๆๅ'
dictDirection['O'] = u'SFๅๅนถ'
dictDirection['P'] = u'ๅคๅ
'
dictDirection['Q'] = u'่ฏๅธๅป็ป/่งฃๅป'
dictDirection['R'] = u'่กๆ'
dictOffset = {}
dictOffset['0'] = u'ๅผไป'
dictOffset['1'] = u' ๅนณไป'
dictOffset['2'] = u' ๅผบๅนณ'
dictOffset['3'] = u' ๅนณไป'
dictOffset['4'] = u' ๅนณๆจ'
dictOffset['5'] = u' ๅผบๅ'
dictOffset['6'] = u' ๆฌๅฐๅผบๅนณ'
#----------------------------------------------------------------------
def __init__(self, eventEngine, mainEngine, parent=None):
"""Constructor"""
super(OrderMonitor, self).__init__(parent)
self.__eventEngine = eventEngine
self.__mainEngine = mainEngine
self.dictOrder = {} # ็จๆฅไฟๅญๆฅๅๅทๅฏนๅบ็ๅๅ
ๆ ผๅฏน่ฑก
self.dictOrderData = {} # ็จๆฅไฟๅญๆฅๅๆฐๆฎ
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
""""""
self.setWindowTitle(u'ๆฅๅ')
self.setColumnCount(len(self.dictLabels))
self.setHorizontalHeaderLabels(self.dictLabels.values())
self.verticalHeader().setVisible(False) # ๅ
ณ้ญๅทฆ่พน็ๅ็ด่กจๅคด
self.setEditTriggers(QtGui.QTableWidget.NoEditTriggers) # ่ฎพไธบไธๅฏ็ผ่พ็ถๆ
#----------------------------------------------------------------------
def registerEvent(self):
""""""
self.signal.connect(self.updateOrder)
self.__eventEngine.register(EVENT_ORDER, self.signal.emit)
self.itemDoubleClicked.connect(self.cancelOrder)
#----------------------------------------------------------------------
def updateOrder(self, event):
""""""
data = event.dict_['data']
orderref = data['OrderRef']
self.dictOrderData[orderref] = data
# ๅฆๆไนๅๅทฒ็ปๆถๅฐ่ฟ่ฟไธช่ดฆๆท็ๆฐๆฎ, ๅ็ดๆฅๆดๆฐ
if orderref in self.dictOrder:
d = self.dictOrder[orderref]
for label, cell in d.items():
if label == 'Direction':
try:
value = self.dictDirection[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
elif label == 'CombOffsetFlag':
try:
value = self.dictOffset[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
elif label == 'StatusMsg':
value = data[label].decode('gbk')
else:
value = str(data[label])
cell.setText(value)
# ๅฆๅๆๅ
ฅๆฐ็ไธ่ก๏ผๅนถๆดๆฐ
else:
self.insertRow(0)
d = {}
for col, label in enumerate(self.dictLabels.keys()):
if label == 'Direction':
try:
value = self.dictDirection[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
elif label == 'CombOffsetFlag':
try:
value = self.dictOffset[data[label]]
except KeyError:
value = u'ๆช็ฅ็ฑปๅ'
elif label == 'StatusMsg':
value = data[label].decode('gbk')
else:
value = str(data[label])
cell = QtGui.QTableWidgetItem(value)
self.setItem(0, col, cell)
d[label] = cell
cell.orderref = orderref # ๅจๆ็ปๅฎๆฅๅๅทๅฐๅๅ
ๆ ผไธ
self.dictOrder[orderref] = d
#----------------------------------------------------------------------
def cancelOrder(self, cell):
"""ๅๅปๆคๅ"""
orderref = cell.orderref
order = self.dictOrderData[orderref]
# ๆคๅๅๆฃๆฅๆฅๅๆฏๅฆๅทฒ็ปๆค้ๆ่
ๅ
จ้จๆไบค
if not (order['OrderStatus'] == '0' or order['OrderStatus'] == '5'):
self.__mainEngine.cancelOrder(order['InstrumentID'],
order['ExchangeID'],
orderref,
order['FrontID'],
order['SessionID'])
#----------------------------------------------------------------------
def cancelAll(self):
"""ๅ
จๆค"""
for order in self.dictOrderData.values():
if not (order['OrderStatus'] == '0' or order['OrderStatus'] == '5'):
self.__mainEngine.cancelOrder(order['InstrumentID'],
order['ExchangeID'],
order['OrderRef'],
order['FrontID'],
order['SessionID'])
########################################################################
class MarketDataMonitor(QtGui.QTableWidget):
"""็จไบๆพ็คบ่กๆ
"""
signal = QtCore.pyqtSignal(type(Event()))
dictLabels = OrderedDict()
dictLabels['Name'] = u'ๅ็บฆๅ็งฐ'
dictLabels['InstrumentID'] = u'ๅ็บฆไปฃ็ '
dictLabels['ExchangeInstID'] = u'ๅ็บฆไบคๆๆไปฃ็ '
dictLabels['BidPrice1'] = u'ไนฐไธไปท'
dictLabels['BidVolume1'] = u'ไนฐไธ้'
dictLabels['AskPrice1'] = u'ๅไธไปท'
dictLabels['AskVolume1'] = u'ๅไธ้'
dictLabels['LastPrice'] = u'ๆๆฐไปท'
dictLabels['Volume'] = u'ๆไบค้'
dictLabels['UpdateTime'] = u'ๆดๆฐๆถ้ด'
#----------------------------------------------------------------------
def __init__(self, eventEngine, mainEngine, parent=None):
"""Constructor"""
super(MarketDataMonitor, self).__init__(parent)
self.__eventEngine = eventEngine
self.__mainEngine = mainEngine
self.dictData = {}
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
""""""
self.setWindowTitle(u'่กๆ
')
self.setColumnCount(len(self.dictLabels))
self.setHorizontalHeaderLabels(self.dictLabels.values())
self.verticalHeader().setVisible(False) # ๅ
ณ้ญๅทฆ่พน็ๅ็ด่กจๅคด
self.setEditTriggers(QtGui.QTableWidget.NoEditTriggers) # ่ฎพไธบไธๅฏ็ผ่พ็ถๆ
#----------------------------------------------------------------------
def registerEvent(self):
""""""
self.signal.connect(self.updateData)
self.__eventEngine.register(EVENT_MARKETDATA, self.signal.emit)
#----------------------------------------------------------------------
def updateData(self, event):
""""""
data = event.dict_['data']
instrumentid = data['InstrumentID']
# ๅฆๆไนๅๅทฒ็ปๆถๅฐ่ฟ่ฟไธช่ดฆๆท็ๆฐๆฎ, ๅ็ดๆฅๆดๆฐ
if instrumentid in self.dictData:
d = self.dictData[instrumentid]
for label, cell in d.items():
if label != 'Name':
value = str(data[label])
else:
value = self.getName(data['InstrumentID'])
cell.setText(value)
# ๅฆๅๆๅ
ฅๆฐ็ไธ่ก๏ผๅนถๆดๆฐ
else:
row = self.rowCount()
self.insertRow(row)
d = {}
for col, label in enumerate(self.dictLabels.keys()):
if label != 'Name':
value = str(data[label])
cell = QtGui.QTableWidgetItem(value)
self.setItem(row, col, cell)
d[label] = cell
else:
name = self.getName(data['InstrumentID'])
cell = QtGui.QTableWidgetItem(name)
self.setItem(row, col, cell)
d[label] = cell
self.dictData[instrumentid] = d
#----------------------------------------------------------------------
def getName(self, instrumentid):
"""่ทๅๅ็งฐ"""
instrument = self.__mainEngine.selectInstrument(instrumentid)
if instrument:
return instrument['InstrumentName'].decode('GBK')
else:
return ''
########################################################################
class LoginWidget(QtGui.QDialog):
"""็ปๅฝ"""
#----------------------------------------------------------------------
def __init__(self, mainEngine, parent=None):
"""Constructor"""
super(LoginWidget, self).__init__()
self.__mainEngine = mainEngine
self.initUi()
self.loadData()
#----------------------------------------------------------------------
def initUi(self):
"""ๅๅงๅ็้ข"""
self.setWindowTitle(u'็ปๅฝ')
# ่ฎพ็ฝฎ็ปไปถ
labelUserID = QtGui.QLabel(u'่ดฆๅท๏ผ')
labelPassword = QtGui.QLabel(u'ๅฏ็ ๏ผ')
labelMdAddress = QtGui.QLabel(u'่กๆ
ๆๅกๅจ๏ผ')
labelTdAddress = QtGui.QLabel(u'ไบคๆๆๅกๅจ๏ผ')
labelBrokerID = QtGui.QLabel(u'็ป็บชๅไปฃ็ ')
self.editUserID = QtGui.QLineEdit()
self.editPassword = QtGui.QLineEdit()
self.editMdAddress = QtGui.QLineEdit()
self.editTdAddress = QtGui.QLineEdit()
self.editBrokerID = QtGui.QLineEdit()
self.editUserID.setMinimumWidth(200)
self.editPassword.setEchoMode(QtGui.QLineEdit.Password)
buttonLogin = QtGui.QPushButton(u'็ปๅฝ')
buttonCancel = QtGui.QPushButton(u'ๅๆถ')
buttonLogin.clicked.connect(self.login)
buttonCancel.clicked.connect(self.close)
# ่ฎพ็ฝฎๅธๅฑ
buttonHBox = QtGui.QHBoxLayout()
buttonHBox.addStretch()
buttonHBox.addWidget(buttonLogin)
buttonHBox.addWidget(buttonCancel)
grid = QtGui.QGridLayout()
grid.addWidget(labelUserID, 0, 0)
grid.addWidget(labelPassword, 1, 0)
grid.addWidget(labelMdAddress, 2, 0)
grid.addWidget(labelTdAddress, 3, 0)
grid.addWidget(labelBrokerID, 4, 0)
grid.addWidget(self.editUserID, 0, 1)
grid.addWidget(self.editPassword, 1, 1)
grid.addWidget(self.editMdAddress, 2, 1)
grid.addWidget(self.editTdAddress, 3, 1)
grid.addWidget(self.editBrokerID, 4, 1)
grid.addLayout(buttonHBox, 5, 0, 1, 2)
self.setLayout(grid)
#----------------------------------------------------------------------
def login(self):
"""็ปๅฝ"""
userid = str(self.editUserID.text())
password = str(self.editPassword.text())
mdAddress = str(self.editMdAddress.text())
tdAddress = str(self.editTdAddress.text())
brokerid = str(self.editBrokerID.text())
self.__mainEngine.login(userid, password, brokerid, mdAddress, tdAddress)
self.close()
#----------------------------------------------------------------------
def loadData(self):
"""่ฏปๅๆฐๆฎ"""
f = shelve.open('setting.vn')
try:
setting = f['login']
userid = setting['userid']
password = setting['password']
mdAddress = setting['mdAddress']
tdAddress = setting['tdAddress']
brokerid = setting['brokerid']
self.editUserID.setText(userid)
self.editPassword.setText(password)
self.editMdAddress.setText(mdAddress)
self.editTdAddress.setText(tdAddress)
self.editBrokerID.setText(brokerid)
except KeyError:
pass
f.close()
#----------------------------------------------------------------------
def saveData(self):
"""ไฟๅญๆฐๆฎ"""
setting = {}
setting['userid'] = str(self.editUserID.text())
setting['password'] = str(self.editPassword.text())
setting['mdAddress'] = str(self.editMdAddress.text())
setting['tdAddress'] = str(self.editTdAddress.text())
setting['brokerid'] = str(self.editBrokerID.text())
f = shelve.open('setting.vn')
f['login'] = setting
f.close()
#----------------------------------------------------------------------
def closeEvent(self, event):
"""ๅ
ณ้ญไบไปถๅค็"""
# ๅฝ็ชๅฃ่ขซๅ
ณ้ญๆถ๏ผๅ
ไฟๅญ็ปๅฝๆฐๆฎ๏ผๅๅ
ณ้ญ
self.saveData()
event.accept()
########################################################################
class ControlWidget(QtGui.QWidget):
"""่ฐ็จๆฅ่ฏขๅฝๆฐ"""
#----------------------------------------------------------------------
def __init__(self, mainEngine, parent=None):
"""Constructor"""
super(ControlWidget, self).__init__()
self.__mainEngine = mainEngine
self.initUi()
#----------------------------------------------------------------------
def initUi(self):
""""""
self.setWindowTitle(u'ๆต่ฏ')
buttonAccount = QtGui.QPushButton(u'ๆฅ่ฏข่ดฆๆท')
buttonInvestor = QtGui.QPushButton(u'ๆฅ่ฏขๆ่ต่
')
buttonPosition = QtGui.QPushButton(u'ๆฅ่ฏขๆไป')
buttonAccount.clicked.connect(self.__mainEngine.getAccount)
buttonInvestor.clicked.connect(self.__mainEngine.getInvestor)
buttonPosition.clicked.connect(self.__mainEngine.getPosition)
hBox = QtGui.QHBoxLayout()
hBox.addWidget(buttonAccount)
hBox.addWidget(buttonInvestor)
hBox.addWidget(buttonPosition)
self.setLayout(hBox)
########################################################################
class TradingWidget(QtGui.QWidget):
"""ไบคๆ"""
signal = QtCore.pyqtSignal(type(Event()))
dictDirection = OrderedDict()
dictDirection['0'] = u'ไนฐ'
dictDirection['1'] = u'ๅ'
dictOffset = OrderedDict()
dictOffset['0'] = u'ๅผไป'
dictOffset['1'] = u'ๅนณไป'
dictOffset['3'] = u'ๅนณไป'
dictPriceType = OrderedDict()
dictPriceType['1'] = u'ไปปๆไปท'
dictPriceType['2'] = u'้ไปท'
dictPriceType['3'] = u'ๆไผไปท'
dictPriceType['4'] = u'ๆๆฐไปท'
# ๅ่ฝฌๅญๅ
ธ
dictDirectionReverse = {value:key for key,value in dictDirection.items()}
dictOffsetReverse = {value:key for key, value in dictOffset.items()}
dictPriceTypeReverse = {value:key for key, value in dictPriceType.items()}
#----------------------------------------------------------------------
def __init__(self, eventEngine, mainEngine, orderMonitor, parent=None):
"""Constructor"""
super(TradingWidget, self).__init__()
self.__eventEngine = eventEngine
self.__mainEngine = mainEngine
self.__orderMonitor = orderMonitor
self.instrumentid = ''
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
"""ๅๅงๅ็้ข"""
self.setWindowTitle(u'ไบคๆ')
# ๅทฆ่พน้จๅ
labelID = QtGui.QLabel(u'ไปฃ็ ')
labelName = QtGui.QLabel(u'ๅ็งฐ')
labelDirection = QtGui.QLabel(u'ๅงๆ็ฑปๅ')
labelOffset = QtGui.QLabel(u'ๅผๅนณ')
labelPrice = QtGui.QLabel(u'ไปทๆ ผ')
labelVolume = QtGui.QLabel(u'ๆฐ้')
labelPriceType = QtGui.QLabel(u'ไปทๆ ผ็ฑปๅ')
self.lineID = QtGui.QLineEdit()
self.lineName = QtGui.QLineEdit()
self.comboDirection = QtGui.QComboBox()
self.comboDirection.addItems(self.dictDirection.values())
self.comboOffset = QtGui.QComboBox()
self.comboOffset.addItems(self.dictOffset.values())
self.spinPrice = QtGui.QDoubleSpinBox()
self.spinPrice.setDecimals(4)
self.spinPrice.setMinimum(0)
self.spinPrice.setMaximum(10000)
self.spinVolume = QtGui.QSpinBox()
self.spinVolume.setMinimum(0)
self.spinVolume.setMaximum(1000000)
self.comboPriceType = QtGui.QComboBox()
self.comboPriceType.addItems(self.dictPriceType.values())
gridleft = QtGui.QGridLayout()
gridleft.addWidget(labelID, 0, 0)
gridleft.addWidget(labelName, 1, 0)
gridleft.addWidget(labelDirection, 2, 0)
gridleft.addWidget(labelOffset, 3, 0)
gridleft.addWidget(labelPrice, 4, 0)
gridleft.addWidget(labelVolume, 5, 0)
gridleft.addWidget(labelPriceType, 6, 0)
gridleft.addWidget(self.lineID, 0, 1)
gridleft.addWidget(self.lineName, 1, 1)
gridleft.addWidget(self.comboDirection, 2, 1)
gridleft.addWidget(self.comboOffset, 3, 1)
gridleft.addWidget(self.spinPrice, 4, 1)
gridleft.addWidget(self.spinVolume, 5, 1)
gridleft.addWidget(self.comboPriceType, 6, 1)
# ๅณ่พน้จๅ
labelBid1 = QtGui.QLabel(u'ไนฐไธ')
labelBid2 = QtGui.QLabel(u'ไนฐไบ')
labelBid3 = QtGui.QLabel(u'ไนฐไธ')
labelBid4 = QtGui.QLabel(u'ไนฐๅ')
labelBid5 = QtGui.QLabel(u'ไนฐไบ')
labelAsk1 = QtGui.QLabel(u'ๅไธ')
labelAsk2 = QtGui.QLabel(u'ๅไบ')
labelAsk3 = QtGui.QLabel(u'ๅไธ')
labelAsk4 = QtGui.QLabel(u'ๅๅ')
labelAsk5 = QtGui.QLabel(u'ๅไบ')
self.labelBidPrice1 = QtGui.QLabel()
self.labelBidPrice2 = QtGui.QLabel()
self.labelBidPrice3 = QtGui.QLabel()
self.labelBidPrice4 = QtGui.QLabel()
self.labelBidPrice5 = QtGui.QLabel()
self.labelBidVolume1 = QtGui.QLabel()
self.labelBidVolume2 = QtGui.QLabel()
self.labelBidVolume3 = QtGui.QLabel()
self.labelBidVolume4 = QtGui.QLabel()
self.labelBidVolume5 = QtGui.QLabel()
self.labelAskPrice1 = QtGui.QLabel()
self.labelAskPrice2 = QtGui.QLabel()
self.labelAskPrice3 = QtGui.QLabel()
self.labelAskPrice4 = QtGui.QLabel()
self.labelAskPrice5 = QtGui.QLabel()
self.labelAskVolume1 = QtGui.QLabel()
self.labelAskVolume2 = QtGui.QLabel()
self.labelAskVolume3 = QtGui.QLabel()
self.labelAskVolume4 = QtGui.QLabel()
self.labelAskVolume5 = QtGui.QLabel()
labelLast = QtGui.QLabel(u'ๆๆฐ')
self.labelLastPrice = QtGui.QLabel()
self.labelReturn = QtGui.QLabel()
self.labelLastPrice.setMinimumWidth(60)
self.labelReturn.setMinimumWidth(60)
gridRight = QtGui.QGridLayout()
gridRight.addWidget(labelAsk5, 0, 0)
gridRight.addWidget(labelAsk4, 1, 0)
gridRight.addWidget(labelAsk3, 2, 0)
gridRight.addWidget(labelAsk2, 3, 0)
gridRight.addWidget(labelAsk1, 4, 0)
gridRight.addWidget(labelLast, 5, 0)
gridRight.addWidget(labelBid1, 6, 0)
gridRight.addWidget(labelBid2, 7, 0)
gridRight.addWidget(labelBid3, 8, 0)
gridRight.addWidget(labelBid4, 9, 0)
gridRight.addWidget(labelBid5, 10, 0)
gridRight.addWidget(self.labelAskPrice5, 0, 1)
gridRight.addWidget(self.labelAskPrice4, 1, 1)
gridRight.addWidget(self.labelAskPrice3, 2, 1)
gridRight.addWidget(self.labelAskPrice2, 3, 1)
gridRight.addWidget(self.labelAskPrice1, 4, 1)
gridRight.addWidget(self.labelLastPrice, 5, 1)
gridRight.addWidget(self.labelBidPrice1, 6, 1)
gridRight.addWidget(self.labelBidPrice2, 7, 1)
gridRight.addWidget(self.labelBidPrice3, 8, 1)
gridRight.addWidget(self.labelBidPrice4, 9, 1)
gridRight.addWidget(self.labelBidPrice5, 10, 1)
gridRight.addWidget(self.labelAskVolume5, 0, 2)
gridRight.addWidget(self.labelAskVolume4, 1, 2)
gridRight.addWidget(self.labelAskVolume3, 2, 2)
gridRight.addWidget(self.labelAskVolume2, 3, 2)
gridRight.addWidget(self.labelAskVolume1, 4, 2)
gridRight.addWidget(self.labelReturn, 5, 2)
gridRight.addWidget(self.labelBidVolume1, 6, 2)
gridRight.addWidget(self.labelBidVolume2, 7, 2)
gridRight.addWidget(self.labelBidVolume3, 8, 2)
gridRight.addWidget(self.labelBidVolume4, 9, 2)
gridRight.addWidget(self.labelBidVolume5, 10, 2)
# ๅๅๆ้ฎ
buttonSendOrder = QtGui.QPushButton(u'ๅๅ')
buttonCancelAll = QtGui.QPushButton(u'ๅ
จๆค')
# ๆดๅๅธๅฑ
hbox = QtGui.QHBoxLayout()
hbox.addLayout(gridleft)
hbox.addLayout(gridRight)
vbox = QtGui.QVBoxLayout()
vbox.addLayout(hbox)
vbox.addWidget(buttonSendOrder)
vbox.addWidget(buttonCancelAll)
self.setLayout(vbox)
# ๅ
ณ่ๆดๆฐ
buttonSendOrder.clicked.connect(self.sendOrder)
buttonCancelAll.clicked.connect(self.__orderMonitor.cancelAll)
self.lineID.returnPressed.connect(self.updateID)
#----------------------------------------------------------------------
def updateID(self):
"""ๅ็บฆๅๅ"""
instrumentid = str(self.lineID.text())
# ่ทๅๅ็บฆ
instrument = self.__mainEngine.selectInstrument(instrumentid)
if instrument:
self.lineName.setText(instrument['InstrumentName'].decode('GBK'))
# ๆธ
็ฉบไปทๆ ผๆฐ้
self.spinPrice.setValue(0)
self.spinVolume.setValue(0)
# ๆธ
็ฉบ่กๆ
ๆพ็คบ
self.labelBidPrice1.setText('')
self.labelBidPrice2.setText('')
self.labelBidPrice3.setText('')
self.labelBidPrice4.setText('')
self.labelBidPrice5.setText('')
self.labelBidVolume1.setText('')
self.labelBidVolume2.setText('')
self.labelBidVolume3.setText('')
self.labelBidVolume4.setText('')
self.labelBidVolume5.setText('')
self.labelAskPrice1.setText('')
self.labelAskPrice2.setText('')
self.labelAskPrice3.setText('')
self.labelAskPrice4.setText('')
self.labelAskPrice5.setText('')
self.labelAskVolume1.setText('')
self.labelAskVolume2.setText('')
self.labelAskVolume3.setText('')
self.labelAskVolume4.setText('')
self.labelAskVolume5.setText('')
self.labelLastPrice.setText('')
self.labelReturn.setText('')
# ้ๆฐๆณจๅไบไปถ็ๅฌ
self.__eventEngine.unregister(EVENT_MARKETDATA_CONTRACT+self.instrumentid, self.signal.emit)
self.__eventEngine.register(EVENT_MARKETDATA_CONTRACT+instrumentid, self.signal.emit)
# ่ฎข้
ๅ็บฆ
self.__mainEngine.subscribe(instrumentid, instrument['ExchangeID'])
# ๆดๆฐ็ฎๅ็ๅ็บฆ
self.instrumentid = instrumentid
#----------------------------------------------------------------------
def updateMarketData(self, event):
"""ๆดๆฐ่กๆ
"""
data = event.dict_['data']
if data['InstrumentID'] == self.instrumentid:
self.labelBidPrice1.setText(str(data['BidPrice1']))
self.labelAskPrice1.setText(str(data['AskPrice1']))
self.labelBidVolume1.setText(str(data['BidVolume1']))
self.labelAskVolume1.setText(str(data['AskVolume1']))
if data['BidVolume2']:
self.labelBidPrice2.setText(str(data['BidPrice2']))
self.labelBidPrice3.setText(str(data['BidPrice3']))
self.labelBidPrice4.setText(str(data['BidPrice4']))
self.labelBidPrice5.setText(str(data['BidPrice5']))
self.labelAskPrice2.setText(str(data['AskPrice2']))
self.labelAskPrice3.setText(str(data['AskPrice3']))
self.labelAskPrice4.setText(str(data['AskPrice4']))
self.labelAskPrice5.setText(str(data['AskPrice5']))
self.labelBidVolume2.setText(str(data['BidVolume2']))
self.labelBidVolume3.setText(str(data['BidVolume3']))
self.labelBidVolume4.setText(str(data['BidVolume4']))
self.labelBidVolume5.setText(str(data['BidVolume5']))
self.labelAskVolume2.setText(str(data['AskVolume2']))
self.labelAskVolume3.setText(str(data['AskVolume3']))
self.labelAskVolume4.setText(str(data['AskVolume4']))
self.labelAskVolume5.setText(str(data['AskVolume5']))
self.labelLastPrice.setText(str(data['LastPrice']))
rt = (data['LastPrice']/data['PreClosePrice'])-1
self.labelReturn.setText(('%.2f' %(rt*100))+'%')
#----------------------------------------------------------------------
def registerEvent(self):
"""ๆณจๅไบไปถ็ๅฌ"""
self.signal.connect(self.updateMarketData)
#----------------------------------------------------------------------
def sendOrder(self):
"""ๅๅ"""
instrumentid = str(self.lineID.text())
instrument = self.__mainEngine.selectInstrument(instrumentid)
if instrument:
exchangeid = instrument['ExchangeID']
direction = self.dictDirectionReverse[unicode(self.comboDirection.currentText())]
offset = self.dictOffsetReverse[unicode(self.comboOffset.currentText())]
price = float(self.spinPrice.value())
volume = int(self.spinVolume.value())
pricetype = self.dictPriceTypeReverse[unicode(self.comboPriceType.currentText())]
self.__mainEngine.sendOrder(instrumentid, exchangeid, price, pricetype, volume ,direction, offset)
########################################################################
class AboutWidget(QtGui.QDialog):
"""ๆพ็คบๅ
ณไบไฟกๆฏ"""
#----------------------------------------------------------------------
def __init__(self, parent):
"""Constructor"""
super(AboutWidget, self).__init__(parent)
self.initUi()
#----------------------------------------------------------------------
def initUi(self):
""""""
self.setWindowTitle(u'ๅ
ณไบ')
text = u"""
vn.pyๆกๆถDemo
ๅฎๆๆฅๆ๏ผ2015/4/17
ไฝ่
๏ผ็จPython็ไบคๆๅ
License๏ผMIT
ไธป้กต๏ผvnpy.org
Github๏ผgithub.com/vnpy/vnpy
QQไบคๆต็พค๏ผ262656087
ๅผๅ็ฏๅข
ๆไฝ็ณป็ป๏ผWindows 7 ไธไธ็ 64ไฝ
Pythonๅ่ก็๏ผPython 2.7.6 (Anaconda 1.9.2 Win-32)
ๅพๅฝขๅบ๏ผPyQt4 4.11.3 Py2.7-x32
ไบคๆๆฅๅฃ๏ผvn.lts/vn.ctp
ไบไปถ้ฉฑๅจๅผๆ๏ผvn.event
ๅผๅ็ฏๅข๏ผWingIDE 5.0.6
EXEๆๅ
๏ผNuitka 0.5.12.1 Python2.7 32 bit MSI
"""
label = QtGui.QLabel()
label.setText(text)
label.setMinimumWidth(450)
vbox = QtGui.QVBoxLayout()
vbox.addWidget(label)
self.setLayout(vbox)
########################################################################
class PriceWidget(QtGui.QWidget):
"""็จไบๆพ็คบไปทๆ ผ่ตฐๅฟๅพ"""
signal = QtCore.pyqtSignal(type(Event()))
# tickๅพ็็ธๅ
ณๅๆฐใๅ้
listlastPrice = np.empty(1000)
fastMA = 0
midMA = 0
slowMA = 0
listfastMA = np.empty(1000)
listmidMA = np.empty(1000)
listslowMA = np.empty(1000)
tickFastAlpha = 0.0333 # ๅฟซ้ๅ็บฟ็ๅๆฐ,30
tickMidAlpha = 0.0167 # ไธญ้ๅ็บฟ็ๅๆฐ,60
tickSlowAlpha = 0.0083 # ๆ
ข้ๅ็บฟ็ๅๆฐ,120
ptr = 0
ticktime = None # tickๆฐๆฎๆถ้ด
# K็บฟๅพEMAๅ็บฟ็ๅๆฐใๅ้
EMAFastAlpha = 0.0167 # ๅฟซ้EMA็ๅๆฐ,60
EMASlowAlpha = 0.0083 # ๆ
ข้EMA็ๅๆฐ,120
fastEMA = 0 # ๅฟซ้EMA็ๆฐๅผ
slowEMA = 0 # ๆ
ข้EMA็ๆฐๅผ
listfastEMA = []
listslowEMA = []
# K็บฟ็ผๅญๅฏน่ฑก
barOpen = 0
barHigh = 0
barLow = 0
barClose = 0
barTime = None
barOpenInterest = 0
num = 0
# ไฟๅญK็บฟๆฐๆฎ็ๅ่กจๅฏน่ฑก
listBar = []
listClose = []
listHigh = []
listLow = []
listOpen = []
listOpenInterest = []
# ๆฏๅฆๅฎๆไบๅๅฒๆฐๆฎ็่ฏปๅ
initCompleted = False
# ๅๅงๅๆถ่ฏปๅ็ๅๅฒๆฐๆฎ็่ตทๅงๆฅๆ(ๅฏไปฅ้ๆฉๅค้จ่ฎพ็ฝฎ)
startDate = None
symbol = 'SR701'
class CandlestickItem(pg.GraphicsObject):
def __init__(self, data):
pg.GraphicsObject.__init__(self)
self.data = data ## data must have fields: time, open, close, min, max
self.generatePicture()
def generatePicture(self):
## pre-computing a QPicture object allows paint() to run much more quickly,
## rather than re-drawing the shapes every time.
self.picture = QtGui.QPicture()
p = QtGui.QPainter(self.picture)
p.setPen(pg.mkPen(color='w', width=0.4)) # 0.4 means w*2
# w = (self.data[1][0] - self.data[0][0]) / 3.
w = 0.2
for (t, open, close, min, max) in self.data:
p.drawLine(QtCore.QPointF(t, min), QtCore.QPointF(t, max))
if open > close:
p.setBrush(pg.mkBrush('g'))
else:
p.setBrush(pg.mkBrush('r'))
p.drawRect(QtCore.QRectF(t-w, open, w*2, close-open))
p.end()
def paint(self, p, *args):
p.drawPicture(0, 0, self.picture)
def boundingRect(self):
## boundingRect _must_ indicate the entire area that will be drawn on
## or else we will get artifacts and possibly crashing.
## (in this case, QPicture does all the work of computing the bouning rect for us)
return QtCore.QRectF(self.picture.boundingRect())
#----------------------------------------------------------------------
def __init__(self, eventEngine, mainEngine, parent=None):
"""Constructor"""
super(PriceWidget, self).__init__(parent)
self.__eventEngine = eventEngine
self.__mainEngine = mainEngine
# MongoDBๆฐๆฎๅบ็ธๅ
ณ
self.__mongoConnected = False
self.__mongoConnection = None
self.__mongoTickDB = None
# ่ฐ็จๅฝๆฐ
self.__connectMongo()
self.initUi(startDate=None)
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self, startDate=None):
"""ๅๅงๅ็้ข"""
self.setWindowTitle(u'Price')
self.vbl_1 = QtGui.QVBoxLayout()
self.initplotTick() # plotTickๅๅงๅ
self.vbl_2 = QtGui.QVBoxLayout()
self.initplotKline() # plotKlineๅๅงๅ
self.initplotTendency() # plotๅๆถๅพ็ๅๅงๅ
# ๆดไฝๅธๅฑ
self.hbl = QtGui.QHBoxLayout()
self.hbl.addLayout(self.vbl_1)
self.hbl.addLayout(self.vbl_2)
self.setLayout(self.hbl)
self.initHistoricalData() # ไธ่ฝฝๅๅฒๆฐๆฎ
#----------------------------------------------------------------------
def initplotTick(self):
""""""
self.pw1 = pg.PlotWidget(name='Plot1')
self.vbl_1.addWidget(self.pw1)
self.pw1.setRange(xRange=[-360, 0])
self.pw1.setLimits(xMax=5)
self.pw1.setDownsampling(mode='peak')
self.pw1.setClipToView(True)
self.curve1 = self.pw1.plot()
self.curve2 = self.pw1.plot()
self.curve3 = self.pw1.plot()
self.curve4 = self.pw1.plot()
#----------------------------------------------------------------------
def initplotKline(self):
"""Kline"""
self.pw2 = pg.PlotWidget(name='Plot2') # K็บฟๅพ
self.vbl_2.addWidget(self.pw2)
self.pw2.setDownsampling(mode='peak')
self.pw2.setClipToView(True)
self.curve5 = self.pw2.plot()
self.curve6 = self.pw2.plot()
self.candle = self.CandlestickItem(self.listBar)
self.pw2.addItem(self.candle)
## Draw an arrowhead next to the text box
# self.arrow = pg.ArrowItem()
# self.pw2.addItem(self.arrow)
#----------------------------------------------------------------------
def initplotTendency(self):
""""""
self.pw3 = pg.PlotWidget(name='Plot3')
self.vbl_2.addWidget(self.pw3)
self.pw3.setDownsampling(mode='peak')
self.pw3.setClipToView(True)
self.pw3.setMaximumHeight(200)
self.pw3.setXLink('Plot2') # X linked with Plot2
self.curve7 = self.pw3.plot()
#----------------------------------------------------------------------
def initHistoricalData(self,startDate=None):
"""ๅๅงๅๅฒๆฐๆฎ"""
td = timedelta(days=1) # ่ฏปๅ3ๅคฉ็ๅๅฒTICKๆฐๆฎ
if startDate:
cx = self.loadTick(self.symbol, startDate-td)
else:
today = datetime.today().replace(hour=0, minute=0, second=0, microsecond=0)
cx = self.loadTick(self.symbol, today-td)
if cx:
for data in cx:
tick = Tick(data['InstrumentID'])
tick.openPrice = data['OpenPrice']
tick.highPrice = data['HighestPrice']
tick.lowPrice = data['LowestPrice']
tick.lastPrice = data['LastPrice']
tick.volume = data['Volume']
tick.openInterest = data['OpenInterest']
tick.upperLimit = data['UpperLimitPrice']
tick.lowerLimit = data['LowerLimitPrice']
tick.time = data['UpdateTime']
tick.ms = data['UpdateMillisec']
tick.bidPrice1 = data['BidPrice1']
tick.bidPrice2 = data['BidPrice2']
tick.bidPrice3 = data['BidPrice3']
tick.bidPrice4 = data['BidPrice4']
tick.bidPrice5 = data['BidPrice5']
tick.askPrice1 = data['AskPrice1']
tick.askPrice2 = data['AskPrice2']
tick.askPrice3 = data['AskPrice3']
tick.askPrice4 = data['AskPrice4']
tick.askPrice5 = data['AskPrice5']
tick.bidVolume1 = data['BidVolume1']
tick.bidVolume2 = data['BidVolume2']
tick.bidVolume3 = data['BidVolume3']
tick.bidVolume4 = data['BidVolume4']
tick.bidVolume5 = data['BidVolume5']
tick.askVolume1 = data['AskVolume1']
tick.askVolume2 = data['AskVolume2']
tick.askVolume3 = data['AskVolume3']
tick.askVolume4 = data['AskVolume4']
tick.askVolume5 = data['AskVolume5']
self.onTick(tick)
self.initCompleted = True # ่ฏปๅๅๅฒๆฐๆฎๅฎๆ
# pprint('load historic data completed')
#----------------------------------------------------------------------
def plotTick(self):
"""็ปtickๅพ"""
if self.initCompleted:
self.curve1.setData(self.listlastPrice[:self.ptr])
self.curve2.setData(self.listfastMA[:self.ptr], pen=(255, 0, 0), name="Red curve")
self.curve3.setData(self.listmidMA[:self.ptr], pen=(0, 255, 0), name="Green curve")
self.curve4.setData(self.listslowMA[:self.ptr], pen=(0, 0, 255), name="Blue curve")
self.curve1.setPos(-self.ptr, 0)
self.curve2.setPos(-self.ptr, 0)
self.curve3.setPos(-self.ptr, 0)
self.curve4.setPos(-self.ptr, 0)
#----------------------------------------------------------------------
def plotKline(self):
"""K็บฟๅพ"""
if self.initCompleted:
# ๅ็บฟ
self.curve5.setData(self.listfastEMA, pen=(255, 0, 0), name="Red curve")
self.curve6.setData(self.listslowEMA, pen=(0, 255, 0), name="Green curve")
# ็ปK็บฟ
self.pw2.removeItem(self.candle)
self.candle = self.CandlestickItem(self.listBar)
self.pw2.addItem(self.candle)
self.plotText() # ๆพ็คบๅผไปไฟกๅทไฝ็ฝฎ
#----------------------------------------------------------------------
def plotTendency(self):
""""""
if self.initCompleted:
self.curve7.setData(self.listOpenInterest, pen=(255, 255, 255), name="White curve")
#----------------------------------------------------------------------
def plotText(self):
lenClose = len(self.listClose)
if lenClose >= 5: # Fractal Signal
if self.listClose[-1] > self.listClose[-2] and self.listClose[-3] > self.listClose[-2] and self.listClose[-4] > self.listClose[-2] and self.listClose[-5] > self.listClose[-2] and self.listfastEMA[-1] > self.listslowEMA[-1]:
## Draw an arrowhead next to the text box
# self.pw2.removeItem(self.arrow)
self.arrow = pg.ArrowItem(pos=(lenClose-1, self.listLow[-1]), angle=90, brush=(255, 0, 0))
self.pw2.addItem(self.arrow)
elif self.listClose[-1] < self.listClose[-2] and self.listClose[-3] < self.listClose[-2] and self.listClose[-4] < self.listClose[-2] and self.listClose[-5] < self.listClose[-2] and self.listfastEMA[-1] < self.listslowEMA[-1]:
## Draw an arrowhead next to the text box
# self.pw2.removeItem(self.arrow)
self.arrow = pg.ArrowItem(pos=(lenClose-1, self.listHigh[-1]), angle=-90, brush=(0, 255, 0))
self.pw2.addItem(self.arrow)
#----------------------------------------------------------------------
def updateMarketData(self, event):
"""ๆดๆฐ่กๆ
"""
data = event.dict_['data']
symbol = data['InstrumentID']
tick = Tick(symbol)
tick.openPrice = data['OpenPrice']
tick.highPrice = data['HighestPrice']
tick.lowPrice = data['LowestPrice']
tick.lastPrice = data['LastPrice']
tick.volume = data['Volume']
tick.openInterest = data['OpenInterest']
tick.upperLimit = data['UpperLimitPrice']
tick.lowerLimit = data['LowerLimitPrice']
tick.time = data['UpdateTime']
tick.ms = data['UpdateMillisec']
tick.bidPrice1 = data['BidPrice1']
tick.bidPrice2 = data['BidPrice2']
tick.bidPrice3 = data['BidPrice3']
tick.bidPrice4 = data['BidPrice4']
tick.bidPrice5 = data['BidPrice5']
tick.askPrice1 = data['AskPrice1']
tick.askPrice2 = data['AskPrice2']
tick.askPrice3 = data['AskPrice3']
tick.askPrice4 = data['AskPrice4']
tick.askPrice5 = data['AskPrice5']
tick.bidVolume1 = data['BidVolume1']
tick.bidVolume2 = data['BidVolume2']
tick.bidVolume3 = data['BidVolume3']
tick.bidVolume4 = data['BidVolume4']
tick.bidVolume5 = data['BidVolume5']
tick.askVolume1 = data['AskVolume1']
tick.askVolume2 = data['AskVolume2']
tick.askVolume3 = data['AskVolume3']
tick.askVolume4 = data['AskVolume4']
tick.askVolume5 = data['AskVolume5']
self.onTick(tick) # tickๆฐๆฎๆดๆฐ
# # ๅฐๆฐๆฎๆๅ
ฅMongoDBๆฐๆฎๅบ๏ผๅฎ็ๅปบ่ฎฎๅฆๅผ็จๅบ่ฎฐๅฝTICKๆฐๆฎ
# self.__recordTick(data)
#----------------------------------------------------------------------
def onTick(self, tick):
"""tickๆฐๆฎๆดๆฐ"""
from datetime import time
# ้ฆๅ
็ๆdatetime.timeๆ ผๅผ็ๆถ้ด๏ผไพฟไบๆฏ่พ๏ผ,ไปๅญ็ฌฆไธฒๆถ้ด่ฝฌๅไธบtimeๆ ผๅผ็ๆถ้ด
hh, mm, ss = tick.time.split(':')
self.ticktime = time(int(hh), int(mm), int(ss), microsecond=tick.ms)
# ่ฎก็ฎtickๅพ็็ธๅ
ณๅๆฐ
if self.ptr == 0:
self.fastMA = tick.lastPrice
self.midMA = tick.lastPrice
self.slowMA = tick.lastPrice
else:
self.fastMA = (1-self.tickFastAlpha) * self.fastMA + self.tickFastAlpha * tick.lastPrice
self.midMA = (1-self.tickMidAlpha) * self.midMA + self.tickMidAlpha * tick.lastPrice
self.slowMA = (1-self.tickSlowAlpha) * self.slowMA + self.tickSlowAlpha * tick.lastPrice
self.listlastPrice[self.ptr] = tick.lastPrice
self.listfastMA[self.ptr] = self.fastMA
self.listmidMA[self.ptr] = self.midMA
self.listslowMA[self.ptr] = self.slowMA
self.ptr += 1
# pprint("----------")
# pprint(self.ptr)
if self.ptr >= self.listlastPrice.shape[0]:
tmp = self.listlastPrice
self.listlastPrice = np.empty(self.listlastPrice.shape[0] * 2)
self.listlastPrice[:tmp.shape[0]] = tmp
tmp = self.listfastMA
self.listfastMA = np.empty(self.listfastMA.shape[0] * 2)
self.listfastMA[:tmp.shape[0]] = tmp
tmp = self.listmidMA
self.listmidMA = np.empty(self.listmidMA.shape[0] * 2)
self.listmidMA[:tmp.shape[0]] = tmp
tmp = self.listslowMA
self.listslowMA = np.empty(self.listslowMA.shape[0] * 2)
self.listslowMA[:tmp.shape[0]] = tmp
# K็บฟๆฐๆฎ
# ๅ่ฎพๆฏๆถๅฐ็็ฌฌไธไธชTICK
if self.barOpen == 0:
# ๅๅงๅๆฐ็K็บฟๆฐๆฎ
self.barOpen = tick.lastPrice
self.barHigh = tick.lastPrice
self.barLow = tick.lastPrice
self.barClose = tick.lastPrice
self.barTime = self.ticktime
self.barOpenInterest = tick.openInterest
self.onBar(self.num, self.barOpen, self.barClose, self.barLow, self.barHigh, self.barOpenInterest)
else:
# ๅฆๆๆฏๅฝๅไธๅ้ๅ
็ๆฐๆฎ
if self.ticktime.minute == self.barTime.minute:
if self.ticktime.second >= 30 and self.barTime.second < 30: # ๅคๆญ30็งๅจๆK็บฟ
# ๅ
ไฟๅญK็บฟๆถ็ไปท
self.num += 1
self.onBar(self.num, self.barOpen, self.barClose, self.barLow, self.barHigh, self.barOpenInterest)
# ๅๅงๅๆฐ็K็บฟๆฐๆฎ
self.barOpen = tick.lastPrice
self.barHigh = tick.lastPrice
self.barLow = tick.lastPrice
self.barClose = tick.lastPrice
self.barTime = self.ticktime
self.barOpenInterest = tick.openInterest
# ๆฑๆปTICK็ๆK็บฟ
self.barHigh = max(self.barHigh, tick.lastPrice)
self.barLow = min(self.barLow, tick.lastPrice)
self.barClose = tick.lastPrice
self.barTime = self.ticktime
self.listBar.pop()
self.listfastEMA.pop()
self.listslowEMA.pop()
self.listOpen.pop()
self.listClose.pop()
self.listHigh.pop()
self.listLow.pop()
self.listOpenInterest.pop()
self.onBar(self.num, self.barOpen, self.barClose, self.barLow, self.barHigh, self.barOpenInterest)
# ๅฆๆๆฏๆฐไธๅ้็ๆฐๆฎ
else:
# ๅ
ไฟๅญK็บฟๆถ็ไปท
self.num += 1
self.onBar(self.num, self.barOpen, self.barClose, self.barLow, self.barHigh, self.barOpenInterest)
# ๅๅงๅๆฐ็K็บฟๆฐๆฎ
self.barOpen = tick.lastPrice
self.barHigh = tick.lastPrice
self.barLow = tick.lastPrice
self.barClose = tick.lastPrice
self.barTime = self.ticktime
self.barOpenInterest = tick.openInterest
#----------------------------------------------------------------------
def onBar(self, n, o, c, l, h, oi):
self.listBar.append((n, o, c, l, h))
self.listOpen.append(o)
self.listClose.append(c)
self.listHigh.append(h)
self.listLow.append(l)
self.listOpenInterest.append(oi)
#่ฎก็ฎK็บฟๅพEMAๅ็บฟ
if self.fastEMA:
self.fastEMA = c*self.EMAFastAlpha + self.fastEMA*(1-self.EMAFastAlpha)
self.slowEMA = c*self.EMASlowAlpha + self.slowEMA*(1-self.EMASlowAlpha)
else:
self.fastEMA = c
self.slowEMA = c
self.listfastEMA.append(self.fastEMA)
self.listslowEMA.append(self.slowEMA)
# ่ฐ็จ็ปๅพๅฝๆฐ
self.plotTick() # tickๅพ
self.plotKline() # K็บฟๅพ
self.plotTendency() # K็บฟๅฏๅพ๏ผๆไป้
#----------------------------------------------------------------------
def __connectMongo(self):
"""่ฟๆฅMongoDBๆฐๆฎๅบ"""
try:
self.__mongoConnection = MongoClient()
self.__mongoConnected = True
self.__mongoTickDB = self.__mongoConnection['TickDB']
except ConnectionFailure:
pass
#----------------------------------------------------------------------
def __recordTick(self, data):
"""ๅฐTickๆฐๆฎๆๅ
ฅๅฐMongoDBไธญ"""
if self.__mongoConnected:
symbol = data['InstrumentID']
data['date'] = self.today
self.__mongoTickDB[symbol].insert(data)
#----------------------------------------------------------------------
def loadTick(self, symbol, startDate, endDate=None):
"""ไปMongoDBไธญ่ฏปๅTickๆฐๆฎ"""
if self.__mongoConnected:
collection = self.__mongoTickDB[symbol]
# ๅฆๆ่พๅ
ฅไบ่ฏปๅTICK็ๆๅๆฅๆ
if endDate:
cx = collection.find({'date': {'$gte': startDate, '$lte': endDate}})
else:
cx = collection.find({'date': {'$gte': startDate}})
return cx
else:
return None
#----------------------------------------------------------------------
def registerEvent(self):
"""ๆณจๅไบไปถ็ๅฌ"""
self.signal.connect(self.updateMarketData)
self.__eventEngine.register(EVENT_MARKETDATA, self.signal.emit)
########################################################################
class MainWindow(QtGui.QMainWindow):
"""ไธป็ชๅฃ"""
signalInvestor = QtCore.pyqtSignal(type(Event()))
signalLog = QtCore.pyqtSignal(type(Event()))
#----------------------------------------------------------------------
def __init__(self, eventEngine, mainEngine):
"""Constructor"""
super(MainWindow, self).__init__()
self.__eventEngine = eventEngine
self.__mainEngine = mainEngine
self.initUi()
self.registerEvent()
#----------------------------------------------------------------------
def initUi(self):
""""""
# ่ฎพ็ฝฎๅ็งฐ
self.setWindowTitle(u'ๆฌข่ฟไฝฟ็จvn.pyๆกๆถDemo')
# ๅธๅฑ่ฎพ็ฝฎ
self.logM = LogMonitor(self.__eventEngine, self)
self.accountM = AccountMonitor(self.__eventEngine, self)
self.positionM = PositionMonitor(self.__eventEngine, self)
self.tradeM = TradeMonitor(self.__eventEngine, self)
self.orderM = OrderMonitor(self.__eventEngine, self.__mainEngine, self)
self.marketdataM = MarketDataMonitor(self.__eventEngine, self.__mainEngine, self)
self.tradingW = TradingWidget(self.__eventEngine, self.__mainEngine, self.orderM, self)
self.PriceW = PriceWidget(self.__eventEngine, self.__mainEngine, self)
righttab = QtGui.QTabWidget()
righttab.addTab(self.accountM, u'่ดฆๆท')
righttab.addTab(self.positionM, u'ๆไป')
lefttab = QtGui.QTabWidget()
lefttab.addTab(self.logM, u'ๆฅๅฟ')
lefttab.addTab(self.orderM, u'ๆฅๅ')
lefttab.addTab(self.tradeM, u'ๆไบค')
mkttab = QtGui.QTabWidget()
mkttab.addTab(self.PriceW, u'Price')
mkttab.addTab(self.marketdataM, u'่กๆ
')
self.tradingW.setMaximumWidth(400)
tradingVBox = QtGui.QVBoxLayout()
tradingVBox.addWidget(self.tradingW)
tradingVBox.addStretch()
upHBox = QtGui.QHBoxLayout()
upHBox.addLayout(tradingVBox)
upHBox.addWidget(mkttab)
downHBox = QtGui.QHBoxLayout()
downHBox.addWidget(lefttab)
downHBox.addWidget(righttab)
vBox = QtGui.QVBoxLayout()
vBox.addLayout(upHBox)
vBox.addLayout(downHBox)
centralwidget = QtGui.QWidget()
centralwidget.setLayout(vBox)
self.setCentralWidget(centralwidget)
# ่ฎพ็ฝฎ็ถๆๆ
self.bar = self.statusBar()
self.bar.showMessage(u'ๅฏๅจDemo')
# ่ฎพ็ฝฎ่ๅๆ
actionLogin = QtGui.QAction(u'็ปๅฝ', self)
actionLogin.triggered.connect(self.openLoginWidget)
actionExit = QtGui.QAction(u'้ๅบ', self)
actionExit.triggered.connect(self.close)
actionAbout = QtGui.QAction(u'ๅ
ณไบ', self)
actionAbout.triggered.connect(self.openAboutWidget)
menubar = self.menuBar()
sysMenu = menubar.addMenu(u'็ณป็ป')
sysMenu.addAction(actionLogin)
sysMenu.addAction(actionExit)
helpMenu = menubar.addMenu(u'ๅธฎๅฉ')
helpMenu.addAction(actionAbout)
#----------------------------------------------------------------------
def registerEvent(self):
""""""
self.signalInvestor.connect(self.updateInvestor)
self.signalLog.connect(self.updateLog)
self.__eventEngine.register(EVENT_INVESTOR, self.signalInvestor.emit)
self.__eventEngine.register(EVENT_LOG, self.signalLog.emit)
#----------------------------------------------------------------------
def updateInvestor(self, event):
""""""
data = event.dict_['data']
self.setWindowTitle(u'ๆฌข่ฟไฝฟ็จvn.pyๆกๆถDemo ' + data['InvestorName'].decode('GBK'))
#----------------------------------------------------------------------
def updateLog(self, event):
""""""
log = event.dict_['log']
self.bar.showMessage(log)
#----------------------------------------------------------------------
def openLoginWidget(self):
"""ๆๅผ็ปๅฝ"""
try:
self.loginW.show()
except AttributeError:
self.loginW = LoginWidget(self.__mainEngine, self)
self.loginW.show()
#----------------------------------------------------------------------
def openAboutWidget(self):
"""ๆๅผๅ
ณไบ"""
try:
self.aboutW.show()
except AttributeError:
self.aboutW = AboutWidget(self)
self.aboutW.show()
#----------------------------------------------------------------------
def closeEvent(self, event):
"""้ๅบไบไปถๅค็"""
reply = QtGui.QMessageBox.question(self, u'้ๅบ',
u'็กฎ่ฎค้ๅบ?', QtGui.QMessageBox.Yes |
QtGui.QMessageBox.No, QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.Yes:
self.__mainEngine.exit()
event.accept()
else:
event.ignore()
class Tick:
"""Tickๆฐๆฎๅฏน่ฑก"""
#----------------------------------------------------------------------
def __init__(self, symbol):
"""Constructor"""
self.symbol = symbol # ๅ็บฆไปฃ็
self.openPrice = 0 # OHLC
self.highPrice = 0
self.lowPrice = 0
self.lastPrice = 0
self.volume = 0 # ๆไบค้
self.openInterest = 0 # ๆไป้
self.upperLimit = 0 # ๆถจๅไปท
self.lowerLimit = 0 # ่ทๅไปท
self.time = '' # ๆดๆฐๆถ้ดๅๆฏซ็ง
self.ms = 0
self.bidPrice1 = 0 # ๆทฑๅบฆ่กๆ
self.bidPrice2 = 0
self.bidPrice3 = 0
self.bidPrice4 = 0
self.bidPrice5 = 0
self.askPrice1 = 0
self.askPrice2 = 0
self.askPrice3 = 0
self.askPrice4 = 0
self.askPrice5 = 0
self.bidVolume1 = 0
self.bidVolume2 = 0
self.bidVolume3 = 0
self.bidVolume4 = 0
self.bidVolume5 = 0
self.askVolume1 = 0
self.askVolume2 = 0
self.askVolume3 = 0
self.askVolume4 = 0
self.askVolume5 = 0
Member
avatar
ๅ ๅ
ฅไบ:
ๅธๅญ: 4
ๅฃฐๆ: 0
verynice, ๆๆฅๆไธไธ่ฟไธช่ๅธ
Member
avatar
ๅ ๅ
ฅไบ:
ๅธๅญ: 144
ๅฃฐๆ: 3
ๅค่ฐข๏ผ
ๆ่ฐข่ฟไบๅไบซไปฃ็ ็ๆๅใ
ยฉ 2015-2019 ไธๆตท้ฆ็บณ่ฝฏไปถ็งๆๆ้ๅ
ฌๅธ
ๅคๆกๆๅกๅท๏ผๆฒชICPๅค18006526ๅท-3
|
__label__pos
| 0.89949 |
How to use error class of io.kotest.inspectors package
Best Kotest code snippet using io.kotest.inspectors.error
MapInspectorsTest.kt
Source:MapInspectorsTest.kt Github
copy
Full Screen
1package com.sksamuel.kotest.inspectors2import io.kotest.assertions.assertSoftly3import io.kotest.assertions.throwables.shouldThrow4import io.kotest.assertions.throwables.shouldThrowAny5import io.kotest.core.spec.style.WordSpec6import io.kotest.inspectors.forAll7import io.kotest.inspectors.forAllKeys8import io.kotest.inspectors.forAllValues9import io.kotest.inspectors.forAny10import io.kotest.inspectors.forAnyKey11import io.kotest.inspectors.forAnyValue12import io.kotest.inspectors.forAtLeastOne13import io.kotest.inspectors.forAtLeastOneKey14import io.kotest.inspectors.forAtLeastOneValue15import io.kotest.inspectors.forAtMostOne16import io.kotest.inspectors.forAtMostOneKey17import io.kotest.inspectors.forAtMostOneValue18import io.kotest.inspectors.forExactly19import io.kotest.inspectors.forKeysExactly20import io.kotest.inspectors.forNone21import io.kotest.inspectors.forNoneKey22import io.kotest.inspectors.forNoneValue23import io.kotest.inspectors.forOne24import io.kotest.inspectors.forOneKey25import io.kotest.inspectors.forOneValue26import io.kotest.inspectors.forSome27import io.kotest.inspectors.forSomeKeys28import io.kotest.inspectors.forSomeValues29import io.kotest.inspectors.forValuesExactly30import io.kotest.matchers.ints.shouldBeGreaterThan31import io.kotest.matchers.maps.shouldContain32import io.kotest.matchers.shouldBe33import io.kotest.matchers.shouldNotBe34@Suppress("ConstantConditionIf")35class MapInspectorsTest : WordSpec() {36 private val map = mapOf(1 to "1", 2 to "2", 3 to "3", 4 to "4", 5 to "5")37 init {38 "forAllKeys" should {39 "pass if all keys of a map pass" {40 map.forAllKeys {41 it.shouldBeGreaterThan(0)42 }43 }44 "return itself" {45 map.forAllKeys {46 it.shouldBeGreaterThan(0)47 }.forAllKeys {48 it.shouldBeGreaterThan(0)49 }50 }51 }52 "forAllValues" should {53 "pass if all values of a map pass" {54 map.forAllValues {55 it.toInt().shouldBeGreaterThan(0)56 }57 }58 "return itself" {59 map.forAllValues {60 it.toInt().shouldBeGreaterThan(0)61 }.forAllValues {62 it.toInt().shouldBeGreaterThan(0)63 }64 }65 }66 "forAll" should {67 "pass if all entries of a map pass" {68 map.forAll {69 it.key.shouldBe(it.value.toInt())70 }71 }72 "return itself" {73 map.forAll {74 it.key.shouldBe(it.value.toInt())75 }.forAll {76 it.key.shouldBe(it.value.toInt())77 }78 }79 "fail when an exception is thrown inside a map" {80 shouldThrowAny {81 map.forAll {82 if (true) throw NullPointerException()83 }84 }.message shouldBe "0 elements passed but expected 5\n" +85 "\n" +86 "The following elements passed:\n" +87 "--none--\n" +88 "\n" +89 "The following elements failed:\n" +90 "1=1 => java.lang.NullPointerException\n" +91 "2=2 => java.lang.NullPointerException\n" +92 "3=3 => java.lang.NullPointerException\n" +93 "4=4 => java.lang.NullPointerException\n" +94 "5=5 => java.lang.NullPointerException"95 }96 }97 "forNoneKeys" should {98 "pass if no keys pass fn test for a map" {99 map.forNoneKey {100 it.shouldBeGreaterThan(10)101 }102 }103 "return itself" {104 map.forNoneKey {105 it.shouldBeGreaterThan(10)106 }.forNoneKey {107 it.shouldBeGreaterThan(10)108 }109 }110 }111 "forNoneValues" should {112 "pass if no values pass fn test for a map" {113 map.forNoneValue {114 it.toInt().shouldBeGreaterThan(10)115 }116 }117 "return itself" {118 map.forNoneValue {119 it.toInt().shouldBeGreaterThan(10)120 }.forNoneValue {121 it.toInt().shouldBeGreaterThan(10)122 }123 }124 }125 "forNone" should {126 "pass if no entries of a map pass" {127 map.forNone {128 it shouldBe mapOf(10 to "10").entries.first()129 }130 }131 "pass if an entry throws an exception" {132 map.forNone {133 if (true) throw NullPointerException()134 }135 }136 "return itself" {137 map.forNone {138 it shouldBe mapOf(10 to "10").entries.first()139 }.forNone {140 it shouldBe mapOf(10 to "10").entries.first()141 }142 }143 "fail if one entry passes fn test" {144 shouldThrow<AssertionError> {145 map.forNone {146 it shouldBe mapOf(4 to "4").entries.first()147 }148 }.message shouldBe """1 elements passed but expected 0149The following elements passed:1504=4151The following elements failed:1521=1 => expected:<4=4> but was:<1=1>1532=2 => expected:<4=4> but was:<2=2>1543=3 => expected:<4=4> but was:<3=3>1555=5 => expected:<4=4> but was:<5=5>"""156 }157 "fail if all entries pass fn test" {158 shouldThrow<AssertionError> {159 map.forNone {160 it.key shouldBe it.value.toInt()161 }162 }.message shouldBe """5 elements passed but expected 0163The following elements passed:1641=11652=21663=31674=41685=5169The following elements failed:170--none--"""171 }172 "work inside assertSoftly block (for map)" {173 assertSoftly(map) {174 forNone {175 it.key shouldBe 10176 it.value shouldBe "10"177 }178 }179 }180 }181 "forSomeKeys" should {182 "pass if one key pass fn test for a map" {183 map.forSomeKeys {184 it shouldBe 1185 }186 }187 "return itself" {188 map.forSomeKeys {189 it shouldBe 1190 }.forSomeKeys {191 it shouldBe 1192 }193 }194 }195 "forSomeValues" should {196 "pass if one value pass fn test for a map" {197 map.forSomeValues {198 it.toInt() shouldBe 1199 }200 }201 "return itself" {202 map.forSomeValues {203 it.toInt() shouldBe 1204 }.forSomeValues {205 it.toInt() shouldBe 1206 }207 }208 }209 "forSome" should {210 "pass if one entry pass test" {211 map.forSome {212 it shouldBe mapOf(1 to "1").entries.first()213 }214 }215 "return itself" {216 map.forSome {217 it shouldBe mapOf(1 to "1").entries.first()218 }.forSome {219 it shouldBe mapOf(1 to "1").entries.first()220 }221 }222 "fail if no entries pass test" {223 shouldThrow<AssertionError> {224 map.forSome {225 it shouldBe mapOf(0 to "0").entries.first()226 }227 }.message shouldBe """No elements passed but expected at least one228The following elements passed:229--none--230The following elements failed:2311=1 => expected:<0=0> but was:<1=1>2322=2 => expected:<0=0> but was:<2=2>2333=3 => expected:<0=0> but was:<3=3>2344=4 => expected:<0=0> but was:<4=4>2355=5 => expected:<0=0> but was:<5=5>"""236 }237 "fail if all entries pass test" {238 shouldThrow<AssertionError> {239 map.forSome {240 it.key shouldBe it.value.toInt()241 }242 }.message shouldBe """All elements passed but expected < 5243The following elements passed:2441=12452=22463=32474=42485=5249The following elements failed:250--none--"""251 }252 "work inside assertSoftly block (for map)" {253 assertSoftly(map) {254 forSome {255 it.key shouldBe 1256 it.value shouldBe "1"257 }258 }259 }260 }261 "forOneKey" should {262 "pass if one key pass fn test for a map" {263 map.forOneKey {264 it shouldBe 1265 }266 }267 "return itself" {268 map.forOneKey {269 it shouldBe 1270 }.forOneKey {271 it shouldBe 1272 }273 }274 }275 "forOneValue" should {276 "pass if one value pass fn test for a map" {277 map.forOneValue {278 it.toInt() shouldBe 1279 }280 }281 "return itself" {282 map.forOneValue {283 it.toInt() shouldBe 1284 }.forOneValue {285 it.toInt() shouldBe 1286 }287 }288 }289 "forOne" should {290 "pass if one entry pass test" {291 map.forOne {292 it shouldBe mapOf(1 to "1").entries.first()293 }294 }295 "return itself" {296 map.forOne {297 it shouldBe mapOf(1 to "1").entries.first()298 }.forOne {299 it shouldBe mapOf(1 to "1").entries.first()300 }301 }302 "fail if > 1 entries pass test" {303 shouldThrow<AssertionError> {304 map.forOne {305 mapOf(3 to "3", 4 to "4", 5 to "5").shouldContain(it.toPair())306 }307 }.message shouldBe """3 elements passed but expected 1308The following elements passed:3093=33104=43115=5312The following elements failed:3131=1 => Map should contain mapping 1=1 but was {3=3, 4=4, 5=5}3142=2 => Map should contain mapping 2=2 but was {3=3, 4=4, 5=5}"""315 }316 "fail if no entries pass test" {317 shouldThrow<AssertionError> {318 map.forOne {319 it shouldBe mapOf(22 to "22").entries.first()320 }321 }.message shouldBe """0 elements passed but expected 1322The following elements passed:323--none--324The following elements failed:3251=1 => expected:<22=22> but was:<1=1>3262=2 => expected:<22=22> but was:<2=2>3273=3 => expected:<22=22> but was:<3=3>3284=4 => expected:<22=22> but was:<4=4>3295=5 => expected:<22=22> but was:<5=5>"""330 }331 "work inside assertSoftly block (for map)" {332 assertSoftly(map) {333 forOne {334 it.key shouldBe 1335 it.value shouldBe "1"336 }337 }338 }339 }340 "forAnyKey" should {341 "pass if one key pass fn test for a map" {342 map.forAnyKey {343 it shouldBe 1344 }345 }346 "return itself" {347 map.forAnyKey {348 it shouldBe 1349 }.forAnyKey {350 it shouldBe 1351 }352 }353 }354 "forAnyValue" should {355 "pass if one value pass fn test for a map" {356 map.forAnyValue {357 it.toInt() shouldBe 1358 }359 }360 "return itself" {361 map.forAnyValue {362 it.toInt() shouldBe 1363 }.forAnyValue {364 it.toInt() shouldBe 1365 }366 }367 }368 "forAny" should {369 "pass if any entries pass test" {370 map.forAny {371 mapOf(1 to "1", 2 to "2").shouldContain(it.toPair())372 }373 }374 "return itself" {375 map.forAny {376 mapOf(1 to "1", 2 to "2").shouldContain(it.toPair())377 }.forAny {378 mapOf(1 to "1", 2 to "2").shouldContain(it.toPair())379 }380 }381 "fail if no entries pass test" {382 shouldThrow<AssertionError> {383 map.forAny {384 it shouldBe mapOf(6 to "6").entries.first()385 }386 }.message shouldBe """0 elements passed but expected at least 1387The following elements passed:388--none--389The following elements failed:3901=1 => expected:<6=6> but was:<1=1>3912=2 => expected:<6=6> but was:<2=2>3923=3 => expected:<6=6> but was:<3=3>3934=4 => expected:<6=6> but was:<4=4>3945=5 => expected:<6=6> but was:<5=5>"""395 }396 "work inside assertSoftly block (for map)" {397 assertSoftly(map) {398 forAny {399 it.key shouldBe 1400 it.value shouldBe "1"401 }402 }403 }404 }405 "forKeysExactly" should {406 "pass if one key pass fn test for a map" {407 map.forKeysExactly(1) {408 it shouldBe 1409 }410 }411 "return itself" {412 map.forKeysExactly(1) {413 it shouldBe 1414 }.forKeysExactly(1) {415 it shouldBe 1416 }417 }418 }419 "forValuesExactly" should {420 "pass if one value pass fn test for a map" {421 map.forValuesExactly(1) {422 it.toInt() shouldBe 1423 }424 }425 "return itself" {426 map.forValuesExactly(1) {427 it.toInt() shouldBe 1428 }.forValuesExactly(1) {429 it.toInt() shouldBe 1430 }431 }432 }433 "forExactly" should {434 "pass if exactly k entries pass" {435 map.forExactly(4) {436 it shouldNotBe mapOf(1 to "1").entries.first()437 }438 }439 "fail if more entries pass test" {440 shouldThrow<AssertionError> {441 map.forExactly(3) {442 it shouldNotBe mapOf(1 to "1").entries.first()443 }444 }.message shouldBe """4 elements passed but expected 3445The following elements passed:4462=24473=34484=44495=5450The following elements failed:4511=1 => 1=1 should not equal 1=1"""452 }453 "fail if less entries pass test" {454 shouldThrow<AssertionError> {455 map.forExactly(5) {456 it shouldNotBe mapOf(1 to "1").entries.first()457 }458 }.message shouldBe """4 elements passed but expected 5459The following elements passed:4602=24613=34624=44635=5464The following elements failed:4651=1 => 1=1 should not equal 1=1"""466 }467 "fail if no entries pass test" {468 shouldThrow<AssertionError> {469 map.forExactly(1) {470 it shouldBe mapOf(10 to "10").entries.first()471 }472 }.message shouldBe """0 elements passed but expected 1473The following elements passed:474--none--475The following elements failed:4761=1 => expected:<10=10> but was:<1=1>4772=2 => expected:<10=10> but was:<2=2>4783=3 => expected:<10=10> but was:<3=3>4794=4 => expected:<10=10> but was:<4=4>4805=5 => expected:<10=10> but was:<5=5>"""481 }482 }483 "forAtMostOneKey" should {484 "pass if one key pass fn test for a map" {485 map.forAtMostOneKey {486 it shouldBe 1487 }488 }489 "return itself" {490 map.forAtMostOneKey {491 it shouldBe 1492 }.forAtMostOneKey {493 it shouldBe 1494 }495 }496 }497 "forAtMostOneValue" should {498 "pass if one value pass fn test for a map" {499 map.forAtMostOneValue {500 it.toInt() shouldBe 1501 }502 }503 "return itself" {504 map.forAtMostOneValue {505 it.toInt() shouldBe 1506 }.forAtMostOneValue {507 it.toInt() shouldBe 1508 }509 }510 }511 "forAtMostOne" should {512 "pass if one elements pass test" {513 map.forAtMostOne {514 it shouldBe mapOf(3 to "3")515 }516 }517 "fail if 2 elements pass test" {518 shouldThrow<AssertionError> {519 map.forAtMostOne {520 mapOf(4 to "4", 5 to "5").shouldContain(it.toPair())521 }522 }.message shouldBe """2 elements passed but expected at most 1523The following elements passed:5244=45255=5526The following elements failed:5271=1 => Map should contain mapping 1=1 but was {4=4, 5=5}5282=2 => Map should contain mapping 2=2 but was {4=4, 5=5}5293=3 => Map should contain mapping 3=3 but was {4=4, 5=5}"""530 }531 "work inside assertSoftly block (for map)" {532 assertSoftly(map) {533 forAtMostOne {534 it.key shouldBe 1535 it.value shouldBe "1"536 }537 }538 }539 }540 "forAtLeastOneKey" should {541 "pass if one key pass fn test for a map" {542 map.forAtLeastOneKey {543 it shouldBe 1544 }545 }546 "return itself" {547 map.forAtLeastOneKey {548 it shouldBe 1549 }.forAtLeastOneKey {550 it shouldBe 1551 }552 }553 }554 "forAtLeastOneValue" should {555 "pass if one value pass fn test for a map" {556 map.forAtLeastOneValue {557 it.toInt() shouldBe 1558 }559 }560 "return itself" {561 map.forAtLeastOneValue {562 it.toInt() shouldBe 1563 }.forAtLeastOneValue {564 it.toInt() shouldBe 1565 }566 }567 }568 "forAtLeastOne" should {569 "pass if one elements pass test" {570 map.forAtLeastOne {571 it shouldBe mapOf(3 to "3").entries.first()572 }573 }574 "fail if no elements pass test" {575 shouldThrow<AssertionError> {576 map.forAtLeastOne {577 it shouldBe mapOf(22 to "22").entries.first()578 }579 }.message shouldBe """0 elements passed but expected at least 1580The following elements passed:581--none--582The following elements failed:5831=1 => expected:<22=22> but was:<1=1>5842=2 => expected:<22=22> but was:<2=2>5853=3 => expected:<22=22> but was:<3=3>5864=4 => expected:<22=22> but was:<4=4>5875=5 => expected:<22=22> but was:<5=5>"""588 }589 "work inside assertSoftly block (for map)" {590 assertSoftly(map) {591 forAtLeastOne {592 it.key shouldBe 1593 it.value shouldBe "1"594 }595 }596 }597 }598 }599}...
Full Screen
Full Screen
AnalyseMessagesTest.kt
Source:AnalyseMessagesTest.kt Github
copy
Full Screen
...13class AnalyseMessagesTest : FreeSpec({14 "analyseMessages() returns results which" - {15 "given a single passing scenario" - {16 val result = openMessageSample("singlePassingScenario").use { analyseMessages(it) }17 "contains no errors" {18 result.problemCount shouldBe 019 }20 }21 "given a single failing scenario" - {22 val result = openMessageSample("singleFailingScenario").use { analyseMessages(it) }23// result.appendTo(System.out)24 "contains a single error" {25 result.problemCount shouldBe 126 result.features shouldHaveSize 127 result.features[0].scenarios shouldHaveSize 128 }29 val feature = result.features[0]30 "references the correct feature" {31 feature.feature.name shouldBe "failing"32 }33 val error = feature.scenarios[0]34 "references the correct scenario" {35 error.scenario.name shouldBe "failing"36 }37 "references the correct step" {38 val step = error.step39 step should beInstanceOf(StepInfo::class)40 if(step is StepInfo) {41 step.text shouldBe "it fails"42 }43 }44 "has the correct status" {45 error.status shouldBe FAILED46 }47 }48 "given a single unimplemented scenario" - {49 val result = openMessageSample("singleUnimplementedScenario").use { analyseMessages(it) }50// result.appendTo(System.out)51 "contains a single error" {52 result.problemCount shouldBe 153 result.features shouldHaveSize 154 result.features[0].scenarios shouldHaveSize 155 }56 "has the correct status" {57 result.features[0].scenarios[0].status shouldBe UNDEFINED58 }59 }60 "given multiple features & scenarios" - {61 val result = openMessageSample("1Failing1Passing1Unimplemented").use { analyseMessages(it) }62// result.appendTo(System.out)63 "contains the correct number of error" {64 result.problemCount shouldBe 265 }66 "references the correct number of features" {67 result.features shouldHaveSize 268 }69 "associates errors with the correct features and scenarios" {70 result.features.forAll {71 it.scenarios shouldHaveSize 172 }73 result.features.forOne {74 it.feature.name shouldBe "failing"75 val s = it.scenarios[0]76 s.scenario.name shouldBe "failing"77 s.status shouldBe FAILED78 }79 result.features.forOne {80 it.feature.name shouldBe "unimplemented"81 val s = it.scenarios[0]82 s.scenario.name shouldBe "unimplemented"83 s.status shouldBe UNDEFINED84 }85 }86 }87 "given a feature containing rules" - {88 val result = openMessageSample("2RulesWith1ErrorInEach").use { analyseMessages(it) }89// result.appendTo(System.out)90 "contains the correct number of errors" {91 result.problemCount shouldBe 292 result.features shouldHaveSize 193 }94 val feature = result.features[0]95 "references the correct number of rules" {96 feature.allChildren shouldHaveSize 297 feature.rules shouldHaveSize 298 feature.scenarios should beEmpty()99 }100 "associates errors with the correct rules and scenarios" {101 feature.rules.forAll {102 it.scenarios shouldHaveSize 1103 }104 feature.rules.forOne {105 it.rule.name shouldBe "Rules should be included"106 val s = it.scenarios[0]107 s.scenario.name shouldBe "This should fail"108 s.status shouldBe FAILED109 }110 feature.rules.forOne {111 it.rule.name shouldBe "Multiple rules are allowed"112 val s = it.scenarios[0]113 s.scenario.name shouldBe "This is unimplemented"114 s.status shouldBe UNDEFINED115 }116 }117 }118 "given rules with background steps" - {119 val result = openMessageSample("2RulesWithBackground").use { analyseMessages(it) }120// result.appendTo(System.out)121 "contains the correct number of errors" {122 result.problemCount shouldBe 2123 result.features shouldHaveSize 1124 }125 val feature = result.features[0]126 "references the correct number of rules" {127 feature.allChildren shouldHaveSize 2128 feature.rules shouldHaveSize 2129 feature.scenarios should beEmpty()130 }131 "references the background when a background step fails" {132 feature.rules.forOne {133 it.rule.name shouldBe("backstories can fail")134 val s = it.scenarios[0]135 s.scenario.name shouldBe "pre-failure"136 val step = s.step137 step should beInstanceOf(BackgroundStepInfo::class)138 if(step is BackgroundStepInfo) {139 step.background.name shouldBe "bad backstory"140 }141 }142 }143 "references the scenario when a scenario step fails" {144 feature.rules.forOne {145 it.rule.name shouldBe("backstories can succeed")146 val s = it.scenarios[0]147 s.scenario.name shouldBe "failure"148 val step = s.step149 step.type shouldBe TestCaseStepType.STEP150 }151 }152 }153 "given a scenario with step variables" - {154 val result = openMessageSample("1ScenarioWithVariables").use { analyseMessages(it) }155// result.appendTo(System.out)156 "contains a single error" {157 result.problemCount shouldBe 1158 result.features shouldHaveSize 1159 result.features[0].scenarios shouldHaveSize 1160 }161 val error = result.features[0].scenarios[0]162 "references the correct step" {163 (error.step as StepInfo).text shouldBe "3 equals 4"164 }165 "has the correct status" {166 error.status shouldBe FAILED167 }168 }169 "given a scenario outline" - {170 val result = openMessageSample("scenarioOutlineWith3Examples").use { analyseMessages(it) }171// result.appendTo(System.out)172 "contains the correct number of errors error" {173 result.problemCount shouldBe 2174 result.features shouldHaveSize 1175 result.features[0].scenarios shouldHaveSize 2176 }177 "contains the expected errors" {178 result.features[0].scenarios.forOne {179 it.scenario.name shouldBe "Outline var"180 (it.step as StepInfo).text shouldBe "3 equals 4"181 }182 result.features[0].scenarios.forOne {183 it.scenario.name shouldBe "Outline baz"184 (it.step as StepInfo).text shouldBe "123 equals 3231"185 }186 }187 }188 "given a scenario containing a data table" - {189 val result = openMessageSample("scenarioWithDataTable").use { analyseMessages(it) }190// result.appendTo(System.out)191 "contains a single error" {192 result.problemCount shouldBe 1193 }194 "references the correct step" {195 (result.features[0].scenarios[0].step as StepInfo).text shouldBe "this table fails:"196 }197 }198 "given a scenario with a failing hook" - {199 val result = openMessageSample("scenarioWithFailingHook").use { analyseMessages(it) }200// result.appendTo(System.out)201 "contains a single error" {202 result.problemCount shouldBe 1203 }204 "references the failing hook" {205 val scenario = result.features[0].scenarios[0]206 scenario.scenario.name shouldBe "Hook"207 scenario.step.type shouldBe TestCaseStepType.HOOK208 }209 }210 }211})...
Full Screen
Full Screen
NewlineBeforeElseTest.kt
Source:NewlineBeforeElseTest.kt Github
copy
Full Screen
1package nl.deltadak.ktlintruleset2import com.pinterest.ktlint.core.LintError3import com.pinterest.ktlint.test.format4import com.pinterest.ktlint.test.lint5import io.kotest.core.spec.style.StringSpec6import io.kotest.inspectors.forExactly7import io.kotest.matchers.collections.shouldBeEmpty8import io.kotest.matchers.shouldBe9class NewlineBeforeElseTest : StringSpec() {10 init {11 "no newline before else" {12 val lintErrors = NewlineBeforeKeywordRule().lint("""13 val temp = if (true) {14 "hi"15 } else {16 "nothing"17 }18 """.trimIndent())19 lintErrors.forExactly(1) {20 it.shouldBe(LintError(3, 3, NewlineBeforeKeywordRule().id, "Missing newline before \"else\""))21 }22 }23 "curly brace before else" {24 val lintErrors = NewlineBeforeKeywordRule().lint("""25 val temp = if (true) {26 "hi"27 }else {28 "nothing"29 }30 """.trimIndent())31 lintErrors.forExactly(1) {32 it.shouldBe(LintError(3, 2, NewlineBeforeKeywordRule().id, "Missing newline before \"else\""))33 }34 }35 "newline before else" {36 val lintErrors = NewlineBeforeKeywordRule().lint("""37 val temp = if (true) {38 "hi"39 }40 else {41 "nothing"42 }43 """.trimIndent())44 lintErrors.shouldBeEmpty()45 }46 "newline between comment and else" {47 val lintErrors = NewlineBeforeKeywordRule().lint("""48 val temp = if (true) {49 "hi"50 }51 // This is a comment.52 else {53 "nothing"54 }55 """.trimIndent())56 lintErrors.shouldBeEmpty()57 }58 "expected no newline before else" {59 val lintErrors = NewlineBeforeKeywordRule().lint("""60 val temp = if (true) "hi" else "nothing"61 """.trimIndent())62 lintErrors.shouldBeEmpty()63 }64 "formatting" {65 NewlineBeforeKeywordRule().format("""66 class Dummy {67 fun temp() {68 if (true) {69 "hi"70 } else {71 "nothing"72 }73 }74 }75 """.trimIndent()) shouldBe """76 class Dummy {77 fun temp() {78 if (true) {79 "hi"80 }81 else {82 "nothing"83 }84 }85 }86 """.trimIndent()87 }88 }89}...
Full Screen
Full Screen
SpacesAroundKeywordTest.kt
Source:SpacesAroundKeywordTest.kt Github
copy
Full Screen
1package nl.deltadak.ktlintruleset2import com.pinterest.ktlint.core.LintError3import com.pinterest.ktlint.test.format4import com.pinterest.ktlint.test.lint5import io.kotest.core.spec.style.StringSpec6import io.kotest.inspectors.forExactly7import io.kotest.matchers.collections.shouldBeEmpty8import io.kotest.matchers.shouldBe9class SpacesAroundKeywordTest : StringSpec() {10 init {11 "space after if" {12 val lintErrors = SpacesAroundKeywordRule().lint("""13 fun temp() {14 if (true) println(true)15 }16 """.trimIndent())17 lintErrors.shouldBeEmpty()18 }19 "no space after if" {20 val lintErrors = SpacesAroundKeywordRule().lint("""21 fun temp() {22 if(true) println(true)23 }24 """.trimIndent())25 lintErrors.forExactly(1) {26 it.shouldBe(LintError(2, 7, "keyword-spaces", "Missing space after \"if\""))27 }28 }29 "formatting if" {30 SpacesAroundKeywordRule().format("""31 fun temp() {32 if(true) println(true)33 }34 """.trimIndent()) shouldBe """35 fun temp() {36 if (true) println(true)37 }38 """.trimIndent()39 }40 "no space after get" {41 val lintErrors = SpacesAroundKeywordRule().lint("""42 class Dummy {43 val dum: Int44 get() = 345 }46 """.trimIndent())47 lintErrors.shouldBeEmpty()48 }49 "space after get" {50 val lintErrors = SpacesAroundKeywordRule().lint("""51 class Dummy {52 val dum: Int53 get () = 354 }55 """.trimIndent())56 lintErrors.forExactly(1) {57 it.shouldBe(LintError(3, 9, "keyword-spaces", "Unexpected space after \"get\""))58 }59 }60 "formatting get" {61 SpacesAroundKeywordRule().format("""62 class Dummy {63 val dum: Int64 get () = 365 }66 """.trimIndent()) shouldBe """67 class Dummy {68 val dum: Int69 get() = 370 }71 """.trimIndent()72 }73 }74}...
Full Screen
Full Screen
NewlineAfterClassHeaderTest.kt
Source:NewlineAfterClassHeaderTest.kt Github
copy
Full Screen
1package nl.deltadak.ktlintruleset2import com.pinterest.ktlint.core.LintError3import com.pinterest.ktlint.test.format4import com.pinterest.ktlint.test.lint5import io.kotest.core.spec.style.StringSpec6import io.kotest.inspectors.forExactly7import io.kotest.matchers.collections.shouldBeEmpty8import io.kotest.matchers.shouldBe9class NewlineAfterClassHeaderTest : StringSpec() {10 init {11 "no newline after class header" {12 val lintErrors = NewlineAfterClassHeader().lint("""13 class Hi {14 override fun toString() = "bye"15 }16 """.trimIndent())17 lintErrors.forExactly(1) {18 it.shouldBe(LintError(1, 10, NewlineAfterClassHeader().id, "Missing newline after class header"))19 }20 }21 "no newline at start of companion object" {22 val lintErrors = NewlineAfterClassHeader().lint("""23 class Hi {24 25 companion object {26 const val WHAT = "bye"27 }28 }29 """.trimIndent())30 lintErrors.forExactly(1) {31 it.shouldBe(LintError(3, 22, NewlineAfterClassHeader().id, "Missing newline after class header"))32 }33 }34 "with newline" {35 val lintErrors = NewlineAfterClassHeader().lint("""36 class Hi {37 38 companion object {39 40 const val WHAT = "bye"41 }42 }43 """.trimIndent())44 lintErrors.shouldBeEmpty()45 }46 "formatting" {47 NewlineAfterClassHeader().format("""48 class Hi {49 companion object {50 const val WHAT = "bye"51 }52 }53 """.trimIndent()) shouldBe """54 class Hi {55 56 companion object {57 58 const val WHAT = "bye"59 }60 }61 """.trimIndent()62 }63 }64}...
Full Screen
Full Screen
ClienteServiceTest.kt
Source:ClienteServiceTest.kt Github
copy
Full Screen
1package service2import br.com.iupp.controller.dto.ClientRequest3import br.com.iupp.controller.dto.ClientResponse4import br.com.iupp.model.ClientEntity5import br.com.iupp.repository.ClientRepository6import br.com.iupp.repository.ClientRepositoryImpl7import br.com.iupp.service.ClientService8import br.com.iupp.service.ClientServiceImpl9import io.kotest.core.spec.style.AnnotationSpec10import io.kotest.inspectors.buildAssertionError11import io.kotest.matchers.shouldBe12import io.micronaut.test.extensions.kotest.annotation.MicronautTest13import io.mockk.every14import io.mockk.mockk15import java.util.*16@MicronautTest17class ClienteServiceTest:AnnotationSpec() {18 val repository = mockk<ClientRepository>(relaxed = true)19 val service = ClientServiceImpl(repository)20 lateinit var clientEntity: ClientEntity21 lateinit var clientRequest: ClientRequest22 lateinit var clientResponse: ClientResponse23 val id = UUID.randomUUID()24 @BeforeEach25 fun setUp(){26 clientEntity = ClientEntity(id = id,name = "bia", email = "[email protected]", cpf = "12345678900")27 clientRequest = ClientRequest("bia","[email protected]","12345678900")28 clientResponse = ClientResponse(id = id, name = "bia", email = "[email protected]")29 }30 @Test31 fun `send a client request and should be received a client response`(){32 every { repository.repSaveClient(any()) } answers { clientEntity }33 val result = service.saveClient(clientRequest)34 result shouldBe clientResponse35 }36 @Test37 fun `should be received a list of clientResponse`(){38 every { repository.ListOfClient() } answers { listOf(clientResponse) }39 val result = service.listClient()40 result shouldBe listOf(clientResponse)41 }42 @Test43 fun `should be received a clientResponse`(){44 every { repository.findClienteById(any())} answers { clientResponse }45 val result = service.findClientebyId(id)46 result shouldBe clientResponse47 }48}...
Full Screen
Full Screen
error.kt
Source:error.kt Github
copy
Full Screen
...6import io.kotest.inspectors.ElementFail7import io.kotest.inspectors.ElementPass8import io.kotest.inspectors.ElementResult9/**10 * Build assertion error message.11 *12 * Show 10 passed and failed results by default. You can change the number of output results by setting the13 * system property `kotest.assertions.output.max=20`.14 *15 * E.g.:16 *17 * ```18 * -Dkotest.assertions.output.max=2019 * ```20 */21fun <T> buildAssertionError(msg: String, results: List<ElementResult<T>>): Nothing {22 val maxResults = AssertionsConfig.maxErrorsOutput23 val passed = results.filterIsInstance<ElementPass<T>>()24 val failed = results.filterIsInstance<ElementFail<T>>()...
Full Screen
Full Screen
AppKtTest.kt
Source:AppKtTest.kt Github
copy
Full Screen
...23 // then24 String(out.toByteArray()) shouldBe expectedOutput25 }26 }27 "should display error information"{28 listOf(29 "1 2 3 4 X command" to "ERROR: Provided value: 'X' at index:5 is not recognizable by any known parser\n",30 ).forAll { (line, expectedOutput) ->31 // given32 val out = ByteArrayOutputStream()33 val app = App(PrintStream(out))34 // when35 app.printSummary(line)36 // then37 String(out.toByteArray()) shouldBe expectedOutput38 }39 }40})...
Full Screen
Full Screen
error
Using AI Code Generation
copy
Full Screen
1val result = listOf(1, 2, 3, 4, 5)2result.forAll {3it should beLessThan(6)4}5val result = listOf(1, 2, 3, 4, 5)6result.forAll {7it should beLessThan(6)8}9at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:39)10at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:36)11at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:33)12at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:30)13at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:27)14at io.kotest.inspectors.InspectorError$Companion.create$default(InspectorError.kt:26)15at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:25)16at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:24)17at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:23)18at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:22)19at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:21)20at io.kotest.inspectors.InspectorError$Companion.create$default(InspectorError.kt:20)21at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:19)22at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:18)23at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:17)24at io.kotest.inspectors.InspectorError$Companion.create(InspectorError.kt:16)
Full Screen
Full Screen
error
Using AI Code Generation
copy
Full Screen
1 import io.kotest.inspectors.forAll2 import io.kotest.matchers.shouldBe3 import org.junit.jupiter.api.Test4 class ErrorClassTest {5 fun `should use error class`() {6 val list = listOf("a", "b", "c")7 forAll(list) { it.length shouldBe 1 }8 }9 }
Full Screen
Full Screen
error
Using AI Code Generation
copy
Full Screen
1 context("Should be able to use error class of io.kotest.inspectors package") {2 val list = listOf(1, 2, 3, 4)3 shouldThrowExactly<IllegalArgumentException> {4 list.forEach {5 if (it == 3) {6 throw IllegalArgumentException("3 is not allowed")7 }8 }9 }10 }11 }12}
Full Screen
Full Screen
Automation Testing Tutorials
Learn to execute automation testing from scratch with LambdaTest Learning Hub. Right from setting up the prerequisites to run your first automation test, to following best practices and diving deeper into advanced test scenarios. LambdaTest Learning Hubs compile a list of step-by-step guides to help you be proficient with different test automation frameworks i.e. Selenium, Cypress, TestNG etc.
LambdaTest Learning Hubs:
YouTube
You could also refer to video tutorials over LambdaTest YouTube channel to get step by step demonstration from industry experts.
Run Kotest automation tests on LambdaTest cloud grid
Perform automation testing on 3000+ real desktop and mobile devices online.
Most used methods in error
Try LambdaTest Now !!
Get 100 minutes of automation test minutes FREE!!
Next-Gen App & Browser Testing Cloud
Was this article helpful?
Helpful
NotHelpful
|
__label__pos
| 0.987928 |
ะะตัะตะนัะธ ะบ ัะพะดะตัะถะฐะฝะธั
NavigationContainer
The NavigationContainer is responsible for managing your app state and linking your top-level navigator to the app environment.
The container takes care of platform specific integration and provides various useful functionality:
1. Deep link integration with the linking prop.
2. Notify state changes for screen tracking, state persistence etc.
3. Handle system back button on Android by using the BackHandler API from React Native.
Usage:
1
2
3
4
5
6
7
8
9
10
11
12
import { NavigationContainer } from '@react-navigation/native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';
const Stack = createNativeStackNavigator();
export default function App() {
return (
<NavigationContainer>
<Stack.Navigator>{/* ... */}</Stack.Navigator>
</NavigationContainer>
);
}
Ref
It's also possible to attach a ref to the container to get access to various helper methods, for example, dispatch navigation actions. This should be used in rare cases when you don't have access to the navigation prop, such as a Redux middleware.
Example:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
import { NavigationContainer, useNavigationContainerRef } from '@react-navigation/native';
function App() {
const navigationRef = useNavigationContainerRef(); // You can also use a regular ref with `React.useRef()`
return (
<View style={{ flex: 1 }}>
<Button onPress={() => navigationRef.navigate('Home')}>
Go home
</Button>
<NavigationContainer ref={navigationRef}>{/* ... */}</NavigationContainer>
</View>
);
}
If you're using a regular ref object, keep in mind that the ref may be initially null in some situations (such as when linking is enabled). To make sure that the ref is initialized, you can use the onReady callback to get notified when the navigation container finishes mounting.
See the Navigating without the navigation prop guide for more details.
Methods on the ref
The ref object includes all of the common navigation methods such as navigate, goBack etc. See docs for CommonActions for more details.
Example:
1
navigationRef.navigate(name, params);
All of these methods will act as if they were called inside the currently focused screen. It's important note that there must be a navigator rendered to handle these actions.
In addition to these methods, the ref object also includes the following special methods:
resetRoot
The resetRoot method lets you reset the state of the navigation tree to the specified state object:
1
2
3
4
navigationRef.resetRoot({
index: 0,
routes: [{ name: 'Profile' }],
});
Unlike the reset method, this acts on the root navigator instead of navigator of the currently focused screen.
getRootState
The getRootState method returns a navigation state object containing the navigation states for all navigators in the navigation tree:
1
const state = navigationRef.getRootState();
Note that the returned state object will be undefined if there are no navigators currently rendered.
getCurrentRoute
The getCurrentRoute method returns the route object for the currently focused screen in the whole navigation tree:
1
const route = navigationRef.getCurrentRoute();
Note that the returned route object will be undefined if there are no navigators currently rendered.
getCurrentOptions
The getCurrentOptions method returns the options for the currently focused screen in the whole navigation tree:
1
const options = navigationRef.getCurrentOptions();
Note that the returned options object will be undefined if there are no navigators currently rendered.
addListener
The addListener method lets you listen to the following events:
state
The event is triggered whenever the navigation state changes in any navigator in the navigation tree:
1
2
3
4
5
6
7
const unsubscribe = navigationRef.addListener('state', (e) => {
// You can get the raw navigation state (partial state object of the root navigator)
console.log(e.data.state);
// Or get the full state object with `getRootState()`
console.log(navigationRef.getRootState());
});
This is analogous to the onStateChange method. The only difference is that the e.data.state object might contain partial state object unlike the state argument in onStateChange which will always contain the full state object.
options
The event is triggered whenever the options change for the currently focused screen in the navigation tree:
1
2
3
4
const unsubscribe = navigationRef.addListener('options', (e) => {
// You can get the new options for the currently focused screen
console.log(e.data.options);
});
Props
initialState
Prop that accepts initial state for the navigator. This can be useful for cases such as deep linking, state persistence etc.
Example:
1
2
3
4
5
<NavigationContainer
initialState={initialState}
>
{/* ... */}
</NavigationContainer>
Providing a custom initial state object will override the initial state object obtained via linking configuration or from browser's URL. If you're providing an initial state object, make sure that you don't pass it on web and that there's no deep link to handle.
Example:
1
2
3
4
5
const initialUrl = await Linking.getInitialURL();
if (Platform.OS !== 'web' && initialUrl == null) {
// Only restore state if there's no deep link and we're not on web
}
See state persistence guide for more details on how to persist and restore state.
onStateChange
Note: Consider the navigator's state object to be internal and subject to change in a minor release. Avoid using properties from the navigation state object except index and routes, unless you really need it. If there is some functionality you cannot achieve without relying on the structure of the state object, please open an issue.
Function that gets called every time navigation state changes. It receives the new navigation state as the argument.
You can use it to track the focused screen, persist the navigation state etc.
Example:
1
2
3
4
5
<NavigationContainer
onStateChange={(state) => console.log('New state is', state)}
>
{/* ... */}
</NavigationContainer>
onReady
Function which is called after the navigation container and all its children finish mounting for the first time. You can use it for:
Example:
1
2
3
4
5
<NavigationContainer
onReady={() => console.log('Navigation container is ready')}
>
{/* ... */}
</NavigationContainer>
onUnhandledAction
Function which is called when a navigation action is not handled by any of the navigators.
By default, React Navigation will show a development-only error message when an action was not handled. You can override the default behavior by providing a custom function.
linking
Configuration for linking integration used for deep linking, URL support in browsers etc.
Example:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
import { NavigationContainer } from '@react-navigation/native';
function App() {
const linking = {
prefixes: ['https://mychat.com', 'mychat://'],
config: {
screens: {
Home: 'feed/:sort',
},
},
};
return (
<NavigationContainer linking={linking} fallback={<Text>Loading...</Text>}>
{/* content */}
</NavigationContainer>
);
}
See configuring links guide for more details on how to configure deep links and URL integration.
Options
linking.prefixes
URL prefixes to handle. You can provide multiple prefixes to support custom schemes as well as universal links.
Only URLs matching these prefixes will be handled. The prefix will be stripped from the URL before parsing.
Example:
1
2
3
4
5
6
7
8
9
10
11
12
<NavigationContainer
linking={{
prefixes: ['https://mychat.com', 'mychat://'],
config: {
screens: {
Chat: 'feed/:sort',
},
},
}}
>
{/* content */}
</NavigationContainer>
This is only supported on iOS and Android.
linking.config
Config to fine-tune how to parse the path. The config object should represent the structure of the navigators in the app.
For example, if we have Catalog screen inside Home screen and want it to handle the item/:id pattern:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
{
screens: {
Home: {
screens: {
Catalog: {
path: 'item/:id',
parse: {
id: Number,
},
},
},
},
}
}
The options for parsing can be an object or a string:
1
2
3
4
5
{
screens: {
Catalog: 'item/:id',
}
}
When a string is specified, it's equivalent to providing the path option.
The path option is a pattern to match against the path. Any segments starting with : are recognized as a param with the same name. For example item/42 will be parsed to { name: 'item', params: { id: '42' } }.
The initialRouteName option ensures that the route name passed there will be present in the state for the navigator, e.g. for config:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
{
screens: {
Home: {
initialRouteName: 'Feed',
screens: {
Catalog: {
path: 'item/:id',
parse: {
id: Number,
},
},
Feed: 'feed',
},
},
}
}
and URL : /item/42, the state will look like this:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
{
routes: [
{
name: 'Home',
state: {
index: 1,
routes: [
{
name: 'Feed'
},
{
name: 'Catalog',
params: { id: 42 },
},
],
},
},
],
}
The parse option controls how the params are parsed. Here, you can provide the name of the param to parse as a key, and a function which takes the string value for the param and returns a parsed value:
1
2
3
4
5
6
7
8
9
10
{
screens: {
Catalog: {
path: 'item/:id',
parse: {
id: id => parseInt(id, 10),
},
},
}
}
If no custom function is provided for parsing a param, it'll be parsed as a string.
linking.enabled
Optional boolean to enable or disable the linking integration. Defaults to true if the linking prop is specified.
linking.getInitialURL
By default, linking integrates with React Native's Linking API and uses Linking.getInitialURL() to provide built-in support for deep linking. However, you might also want to handle links from other sources, such as Branch, or push notifications using Firebase etc.
You can provide a custom getInitialURL function where you can return the link which we should use as the initial URL. The getInitialURL function should return a string if there's a URL to handle, otherwise undefined.
For example, you could do something like following to handle both deep linking and Firebase notifications:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
import messaging from '@react-native-firebase/messaging';
<NavigationContainer
linking={{
prefixes: ['https://mychat.com', 'mychat://'],
config: {
screens: {
Chat: 'feed/:sort',
},
},
async getInitialURL() {
// Check if app was opened from a deep link
const url = await Linking.getInitialURL();
if (url != null) {
return url;
}
// Check if there is an initial firebase notification
const message = await messaging().getInitialNotification();
// Get the `url` property from the notification which corresponds to a screen
// This property needs to be set on the notification payload when sending it
return message?.data?.url;
},
}}
>
{/* content */}
</NavigationContainer>
This option is not available on Web.
linking.subscribe
Similar to getInitialURL, you can provide a custom subscribe function to handle any incoming links instead of the default deep link handling. The subscribe function will receive a listener as the argument and you can call it with a URL string whenever there's a new URL to handle. It should return a cleanup function where you can unsubscribe from any event listeners that you have setup.
For example, you could do something like following to handle both deep linking and Firebase notifications:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
import messaging from '@react-native-firebase/messaging';
<NavigationContainer
linking={{
prefixes: ['https://mychat.com', 'mychat://'],
config: {
screens: {
Chat: 'feed/:sort',
},
},
subscribe(listener) {
const onReceiveURL = ({ url }: { url: string }) => listener(url);
// Listen to incoming links from deep linking
const subscription = Linking.addEventListener('url', onReceiveURL);
// Listen to firebase push notifications
const unsubscribeNotification = messaging().onNotificationOpenedApp(
(message) => {
const url = message.data?.url;
if (url) {
// Any custom logic to check whether the URL needs to be handled
//...
// Call the listener to let React Navigation handle the URL
listener(url);
}
}
);
return () => {
// Clean up the event listeners
subscription.remove();
unsubscribeNotification();
};
},
}}
>
{/* content */}
</NavigationContainer>
This option is not available on Web.
linking.getStateFromPath
You can optionally override the way React Navigation parses links to a state object by providing your own implementation.
Example:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
<NavigationContainer
linking={{
prefixes: ['https://mychat.com', 'mychat://'],
config: {
screens: {
Chat: 'feed/:sort',
},
},
getStateFromPath(path, config) {
// Return a state object here
// You can also reuse the default logic by importing `getStateFromPath` from `@react-navigation/native`
},
}}
>
{/* content */}
</NavigationContainer>
linking.getPathFromState
You can optionally override the way React Navigation serializes state objects to link by providing your own implementation. This is necessary for proper web support if you have specified getStateFromPath.
Example:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
<NavigationContainer
linking={{
prefixes: ['https://mychat.com', 'mychat://'],
config: {
screens: {
Chat: 'feed/:sort',
},
},
getPathFromState(state, config) {
// Return a path string here
// You can also reuse the default logic by importing `getPathFromState` from `@react-navigation/native`
},
}}
>
{/* content */}
</NavigationContainer>
fallback
React Element to use as a fallback while we resolve deep links. Defaults to null.
If you have a native splash screen, please use onReady instead of fallback prop.
documentTitle
By default, React Navigation automatically updates the document title on Web to match the title option of the focused screen. You can disable it or customize it using this prop. It accepts a configuration object with the following options:
documentTitle.enabled
Whether document title handling should be enabled. Defaults to true.
documentTitle.formatter
Custom formatter to use if you want to customize the title text. Defaults to:
1
(options, route) => options?.title ?? route?.name;
Example:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
import { NavigationContainer } from '@react-navigation/native';
function App() {
return (
<NavigationContainer
documentTitle={{
formatter: (options, route) =>
`${options?.title ?? route?.name} - My Cool App`,
}}
>
{/* content */}
</NavigationContainer>
);
}
theme
Custom theme to use for the navigation components such as the header, tab bar etc. See theming guide for more details and usage guide.
independent
Whether this navigation container should be independent of parent containers. If this is not set to true, this container cannot be nested inside another container. Setting it to true disconnects any children navigators from parent container.
You probably don't want to set this to true in a typical React Native app. This is only useful if you have navigation trees that work like their own mini-apps and don't need to navigate to the screens outside of them.
ะะพะผะผะตะฝัะฐัะธะธ
|
__label__pos
| 0.994007 |
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 30 Aug 2014, 14:33
Close
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
Your Progress
every week, weโll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Not interested in getting valuable practice questions and articles delivered to your email? No problem, unsubscribe here.
Events & Promotions
Events & Promotions in June
Open Detailed Calendar
If x is a positive integer, is x^3 - 3x^2 + 2x divisible by
ย Question banks Downloads My Bookmarks Reviews Important topics ย
Author Message
TAGS:
Intern
Intern
avatar
Joined: 26 Nov 2009
Posts: 14
Followers: 0
Kudos [?]: 4 [0], given: 0
If x is a positive integer, is x^3 - 3x^2 + 2x divisible byย [#permalink] New postย 06 Dec 2009, 11:42
00:00
A
B
C
D
E
Difficulty:
ย 5% (low)
Question Stats:
86% (02:03) correct 14% (01:32) wrong based on 14 sessions
If x is a positive integer, is x^3 - 3x^2 + 2x divisible by 4?
1) x = 4y + 4, where y is an integer
2) x = 2z + 2, where z is an integer
[Reveal] Spoiler: OA
Manager
Manager
avatar
Joined: 12 Oct 2008
Posts: 58
Followers: 1
Kudos [?]: 1 [0], given: 3
Re: MGMAT Problem Divisibility Help, Properties of "0"ย [#permalink] New postย 10 Dec 2009, 08:20
does 0 divide by integer(4) consider as divisible ?
Senior Manager
Senior Manager
avatar
Joined: 18 Aug 2009
Posts: 333
Followers: 8
Kudos [?]: 162 [0], given: 13
GMAT Tests User
Re: MGMAT Problem Divisibility Help, Properties of "0"ย [#permalink] New postย 10 Dec 2009, 16:31
tashu wrote:
does 0 divide by integer(4) consider as divisible ?
Yes. 0 divided by anything returns 0. And zero is a non-positive, non-negative integer.
Manager
Manager
User avatar
Joined: 06 Feb 2010
Posts: 175
Concentration: Marketing, Leadership
Schools: University of Dhaka - Class of 2010
GMAT 1: Q0 V0
GPA: 3.63
WE: Business Development (Consumer Products)
Followers: 38
Kudos [?]: 535 [0], given: 182
GMAT Tests User
positive integerย [#permalink] New postย 21 Oct 2010, 06:34
If x is a positive integer, is x^2-3x^2+2x divisible by 4 ?
1) x= 4y+4, where y is an integer
2) x=2z + 2, where z is an integer
_________________
Practice Makes a Man Perfect. Practice. Practice. Practice......Perfectly
Critical Reasoning: best-critical-reasoning-shortcuts-notes-tips-91280.html
Collections of MGMAT CAT: collections-of-mgmat-cat-math-152750.html
MGMAT SC SUMMARY: mgmat-sc-summary-of-fourth-edition-152753.html
Sentence Correction: sentence-correction-strategies-and-notes-91218.html
Arithmatic & Algebra: arithmatic-algebra-93678.html
Helpful Geometry formula sheet: best-geometry-93676.html
I hope these will help to understand the basic concepts & strategies. Please Click ON KUDOS Button.
Intern
Intern
avatar
Joined: 19 Oct 2010
Posts: 7
Location: India
Schools: Insead, ISB, Kellogg, Cornell
Followers: 0
Kudos [?]: 1 [0], given: 0
Re: positive integerย [#permalink] New postย 21 Oct 2010, 07:15
The option is D, Each statement alone is sufficient
Simplifying statement , we get 2x(1-x)
Now in option A, we can take 4 as common so the statement becomes 2.4(y+1)(1-4y-4) which is definitely divisible by 4, In option B, we can take 2 common same way, which when multiplied by 2 already in statement gives 4 as common which is definitely divisible by 4 , Hence each statement alone in sufficient
Intern
Intern
avatar
Joined: 18 Aug 2010
Posts: 6
Followers: 1
Kudos [?]: 3 [0], given: 1
Re: positive integerย [#permalink] New postย 21 Oct 2010, 07:19
x^2-3x^2 + 2x
= -x^2 + 2x
= 2x (1-x)
so if we know x if x is divisible by 2, we know that 2x(1-x) is divisible by 4
1) x = 4y+4 = 4(y+1) -> x is divisible by 4, so enough to identify that 2x(1-x) is divisible by 4
2) x=2z + 2 = 2(z+1) , Again x is divisible by 2, so sufficient,
Ans D
_________________
"Learning never exhausts the mind."
--Leonardo da Vinci
Manager
Manager
User avatar
Status: ==GMAT Ninja==
Joined: 08 Jan 2011
Posts: 247
Schools: ISB, IIMA ,SP Jain , XLRI
WE 1: Aditya Birla Group (sales)
WE 2: Saint Gobain Group (sales)
Followers: 4
Kudos [?]: 43 [0], given: 46
GMAT Tests User
is x^3 - 3x^2 + 2x divisible by 4ย [#permalink] New postย 03 Jul 2011, 06:36
if x is a positive integer, is x^3 - 3x^2 + 2x divisible by 4?
1. x = 4y + 4, where y is an integer
2. x = 2z + 2, where z is an integer
i need some clarification about B
please guide me... :( :(
_________________
WarLocK
_____________________________________________________________________________
The War is oNNNNNNNNNNNNN for 720+
see my Test exp here http://gmatclub.com/forum/my-test-experience-111610.html
do not hesitate me giving kudos if you like my post. :)
SVP
SVP
avatar
Joined: 16 Nov 2010
Posts: 1691
Location: United States (IN)
Concentration: Strategy, Technology
Followers: 30
Kudos [?]: 288 [0], given: 36
GMAT Tests User Premium Member Reviews Badge
Re: is x^3 - 3x^2 + 2x divisible by 4ย [#permalink] New postย 03 Jul 2011, 06:54
(1) is obviously sufficient, as 4 can be factored out of the expression
x = 4y+4, => x is divisible by 4
and the expression x^3-3x^2+2x is also divisible by 4.
2) x = 2(z+1)
The expression is - x(x^2 + 3x + 2) = x(x+1)(x+2)
So if we simplify this by substituting x = 2(z+1)
then
expression = 2(z+1) (2z+3)(2z+4) = 4 (z+1)(2z+3)(z+2)
which is divisible by 4
Answer - D
_________________
Formula of Life -> Achievement/Potential = k * Happiness (where k is a constant)
Get the best GMAT Prep Resources with GMAT Club Premium Membership
Manager
Manager
User avatar
Status: ==GMAT Ninja==
Joined: 08 Jan 2011
Posts: 247
Schools: ISB, IIMA ,SP Jain , XLRI
WE 1: Aditya Birla Group (sales)
WE 2: Saint Gobain Group (sales)
Followers: 4
Kudos [?]: 43 [0], given: 46
GMAT Tests User
Re: is x^3 - 3x^2 + 2x divisible by 4ย [#permalink] New postย 03 Jul 2011, 07:11
subhashghosh wrote:
(1) is obviously sufficient, as 4 can be factored out of the expression
x = 4y+4, => x is divisible by 4
and the expression x^3-3x^2+2x is also divisible by 4.
2) x = 2(z+1)
The expression is - x(x^2[highlight]+[/highlight]3x + 2) = [highlight]x(x+1)(x+2)[/highlight] (i think it should be x(x-2)(x-1))
So if we simplify this by substituting x = 2(z+1)
then
expression = 2(z+1) (2z+3)(2z+4) = 4 (z+1)(2z+3)(z+2) [[highlight]2(z+1) (2z) (2z+1) = 4 (z+1)(z)(2z+1)[/highlight]]
which is divisible by 4
Answer - D
Now i got it
Thanks Anyway i committed a mistake in the steps.... :( :(
_________________
WarLocK
_____________________________________________________________________________
The War is oNNNNNNNNNNNNN for 720+
see my Test exp here http://gmatclub.com/forum/my-test-experience-111610.html
do not hesitate me giving kudos if you like my post. :)
Manager
Manager
User avatar
Joined: 11 Jul 2009
Posts: 148
WE: Design (Computer Software)
Followers: 1
Kudos [?]: 16 [0], given: 35
GMAT ToolKit User CAT Tests
Re: is x^3 - 3x^2 + 2x divisible by 4ย [#permalink] New postย 03 Jul 2011, 07:14
Both sufficient....
_________________
Kaustubh
SVP
SVP
avatar
Joined: 16 Nov 2010
Posts: 1691
Location: United States (IN)
Concentration: Strategy, Technology
Followers: 30
Kudos [?]: 288 [0], given: 36
GMAT Tests User Premium Member Reviews Badge
Re: is x^3 - 3x^2 + 2x divisible by 4ย [#permalink] New postย 03 Jul 2011, 07:16
Yeah, you're right. I'm reading in a hurry these days :(
_________________
Formula of Life -> Achievement/Potential = k * Happiness (where k is a constant)
Get the best GMAT Prep Resources with GMAT Club Premium Membership
Intern
Intern
avatar
Joined: 24 May 2010
Posts: 13
Followers: 0
Kudos [?]: 0 [0], given: 6
Re: is x^3 - 3x^2 + 2x divisible by 4ย [#permalink] New postย 03 Jul 2011, 10:26
IMO D:
x^3 - 3x^2 + 2x = x(x^2-3x+2)
= x(x-2)(x-1)
Condition 1: x = 4y + 4, where y is an integer
x = 4(y+1)
If we put x = 4(y+1) in above eq x(x-2)(x-1) it will be divisible by 4 for all values because x = 4(y+1)
2. x = 2z + 2, where z is an integer
x = 2(z+1)
if we put x in equation "x(x-2)(x-1)"
= 2(z+1)(2z)(2z+1)
= 4(z+1)z(2z+1)
Again this condition is fine.
Hence option D
Intern
Intern
avatar
Joined: 24 Sep 2011
Posts: 10
Followers: 0
Kudos [?]: 4 [0], given: 1
DS - Number Propertiesย [#permalink] New postย 25 Sep 2011, 08:06
Help.
If x is a positive integer, is x^3 - 3x^2 +2x divisible by 4?
1) x = 4y + 4, where y is an integer
2) x = 2z + 2, where z is an integer
Rephrased question: is x(x-1)(x-2) divisible by 4? or is x even?
Both statements say that x is even, and thus answer is D, which is also the OA.
But...
Statement 1 - what if y = -1, and thus x=0?
Statement 2 - what if z = 0, and thus x = 0?
in each case, if x^3 - 3x^2 +2x = 0, is 0 divisible by 4? or more generally, is 0 divisible by any number?
Current Student
avatar
Joined: 26 May 2005
Posts: 571
Followers: 18
Kudos [?]: 107 [0], given: 13
GMAT Tests User
Re: DS - Number Propertiesย [#permalink] New postย 25 Sep 2011, 08:13
syh244 wrote:
Help.
If x is a positive integer, is x^3 - 3x^2 +2x divisible by 4?
1) x = 4y + 4, where y is an integer
2) x = 2z + 2, where z is an integer
Rephrased question: is x(x-1)(x-2) divisible by 4? or is x even?
Both statements say that x is even, and thus answer is D, which is also the OA.
But...
Statement 1 - what if y = -1, and thus x=0?
Statement 2 - what if z = 0, and thus x = 0?
in each case, if x^3 - 3x^2 +2x = 0, is 0 divisible by 4? or more generally, is 0 divisible by any number?
O is a multiple of any number . So even if X = 0 ; its divisible by 4
therefore its D
Director
Director
avatar
Joined: 01 Feb 2011
Posts: 770
Followers: 14
Kudos [?]: 82 [0], given: 42
GMAT Tests User
Re: DS - Number Propertiesย [#permalink] New postย 25 Sep 2011, 08:33
Yes. 0 is divisible by any number.
Another way to solve this ..
x^3-3x^2+2x is divisible by 4?
1. Sufficient
x = 4y+4 = 4(y+1)
As x already has a factor 4, x^2,x^3,2x will all have a factor 4 for sure. Hence x^3-3x^2+2x is divisible by 4.
2. Sufficient
x= 2z+2 = 2(z+1)
x^2 = 4(z+1)^2
x^3 = 8(z+1)^3
2x = 4(z+1)
x^2,x^3 and 2x all have a factor 4. Hence x^3-3x^2+2x is divisible by 4.
Answer is D.
Math Forum Moderator
avatar
Joined: 20 Dec 2010
Posts: 2047
Followers: 128
Kudos [?]: 904 [0], given: 376
GMAT Tests User
Re: MGMAT DS: Is the polynomial divisible by 4ย [#permalink] New postย 25 Sep 2011, 08:40
JimmyWorld wrote:
If x is a positive integer, is x^3 - 3x^2 + 2x divisible by 4?
1) x = 4y + 4, where y is an integer
2) x = 2z + 2, where z is an integer
Rephrase:
Is x even?
1) x = 4y + 4= 4(y+1)
x is even.
Sufficient.
2) x = 2z + 2= 2(z+1)
x is even.
Sufficient.
Ans: "D"
_________________
~fluke
Get the best GMAT Prep Resources with GMAT Club Premium Membership
Manager
Manager
avatar
Status: Bell the GMAT!!!
Affiliations: Aidha
Joined: 16 Aug 2011
Posts: 186
Location: Singapore
Concentration: Finance, General Management
GMAT 1: 680 Q46 V37
GMAT 2: 620 Q49 V27
GMAT 3: 700 Q49 V36
WE: Other (Other)
Followers: 5
Kudos [?]: 34 [0], given: 43
GMAT Tests User
Re: MGMAT DS: Is the polynomial divisible by 4ย [#permalink] New postย 26 Sep 2011, 04:10
+ 1 for D.
Time taken: 52 seconds.
However, I did this question on paper, on second thoughts I realised that this can be done with mental calculations. All we need to know here is that can we have 4 a a common multiple of all components of the equation.
_________________
If my post did a dance in your mind, send me the steps through kudos :)
My MBA journey at http://mbadilemma.wordpress.com/
Re: MGMAT DS: Is the polynomial divisible by 4 ย [#permalink] 26 Sep 2011, 04:10
ย ย Similar topics Author Replies Last post
Similar
Topics:
4 Experts publish their posts in the topic If x = -1, then -(x^4 + x^3 + x^2 + x) = Bunuel 7 03 Feb 2014, 00:23
1 Experts publish their posts in the topic If 3 - x = 2x - 3, then 4x = Bunuel 4 14 Jan 2014, 00:12
Experts publish their posts in the topic If x = -1, then (x^4 - x^3 + x^2)/(x - 1) = Walkabout 2 19 Dec 2012, 05:17
Experts publish their posts in the topic If x is a positive integer, is x^3-3x^2+2x divisible by 4? kashishh 9 06 Feb 2012, 12:30
if x is a positive intiger ,is x^3-3x^2+2x divisible by 4? yezz 4 16 Sep 2006, 08:13
Display posts from previous: Sort by
If x is a positive integer, is x^3 - 3x^2 + 2x divisible by
ย Question banks Downloads My Bookmarks Reviews Important topics ย
GMAT Club MBA Forum Home| About| Privacy Policy| Terms and Conditions| GMAT Club Rules| Contact| Sitemap
Powered by phpBB ยฉ phpBB Group and phpBB SEO
Kindly note that the GMATยฎ test is a registered trademark of the Graduate Management Admission Councilยฎ, and this site has neither been reviewed nor endorsed by GMACยฎ.
|
__label__pos
| 0.507408 |
vijos1439 ๅบ้ด ๏ผๆๅบ๏ผ
P1439ๅบ้ด
่ๆฏ
ๆ่ฟฐ
็ปๅฎnไธช้ญๅบ้ด [ai,bi], i=1,2,...,n. ่ฟไบๅบ้ด็ๅๅฏไปฅ็จไธคไธคไธ็ธไบค็้ญๅบ้ด็ๅๆฅ่กจ็คบใไฝ ็ไปปๅกๆฏๆพๅฐ่ฟๆ ท็ๅบ้ดๆฐ็ฎๆๅฐ็่กจ็คบ๏ผไธๆๅฎไปฌๆๅๅบ็ๆนๅผๅๅฐ่พๅบๆไปถไธญใๅฝไธไป
ๅฝa <= b < c <= dๆถ๏ผๅบ้ด[a; b] ใ[c; d]ๆๆฏๅๅบ
ๅไธไธช็จๅบๅฎๆไปฅไธไปปๅก๏ผ
่ฏปๅๅบ้ด
่ฎก็ฎๅบๆปก่ถณไธ่ฟฐๆกไปถ็ไธคไธคไธ็ธไบค็ๅบ้ด
ๆๆพๅฐ็ๅบ้ดๆๅๅบ่พๅบ
ๆ ผๅผ
่พๅ
ฅๆ ผๅผ
็ฌฌไธ่กๅชๆไธไธชๆฐn, 3 <= n <= 50000๏ผไปฃ่กจๅบ้ดๆฐ.็ฌฌI+1่กๆไธคไธชๆฐai,bi,ไน้ด็จไธไธช็ฉบๆ ผ้ๅผ๏ผๅๅซ่กจ็คบๅบ้ด[ai,bi]็่ตทๅงๅ็ปๆ๏ผ1 <= i <= n๏ผ๏ผ1 <= ai <= bi <= 1000000
่พๅบๆ ผๅผ
่พๅบๆไปถๅบ่ฏฅๅ
ๅซ่ฎก็ฎๅบ็ๆๆๅบ้ด๏ผๆฏ่กๅไธไธชๅบ้ด๏ผๆฏ่กๅชๆไธคไธชๆฐ๏ผๅๅซๆฏๅบ้ด่ตทๅงๅ็ปๆ๏ผไน้ด็จไธไธช็ฉบๆ ผๅๅผใ่ฎฐไฝๅฟ
้กปๆฏๆๅๅบ่พๅบใ
ๆ ทไพ1
ๆ ทไพ่พๅ
ฅ1[ๅคๅถ]
5
5 6
1 4
10 10
6 9
8 10
ๆ ทไพ่พๅบ1[ๅคๅถ]
1 4
5 10
้ๅถ
ๅไธชๆต่ฏ็น1s
ๆ็คบ
ไปฃ็ ๏ผ
#include<cstdio>
#include<algorithm>
using namespace std;
const int maxn=5e4;
struct tnode{
int x,y;
}p[maxn+10];
bool cmp_p(tnode a,tnode b)
{
return a.x<b.x;
}
int main()
{
// freopen("1.in","r",stdin);
int n,i,l,r;
scanf("%d",&n);
for(i=1;i<=n;i++)
scanf("%d%d",&p[i].x,&p[i].y);
sort(p+1,p+n+1,cmp_p);
l=p[1].x,r=p[1].y;
for(i=2;i<=n;i++)
if(p[i].x>r)
{
printf("%d %d\n",l,r);
l=p[i].x,r=p[i].y;
}
else
r=max(p[i].y,r);
printf("%d %d\n",l,r);
return 0;
}
้
่ฏปๆดๅค
ๆ็ซ ๆ ็ญพ๏ผ vijos1439 ๅบ้ด ๆๅบ
ๆณๅฏนไฝ่
่ฏด็นไปไน๏ผ ๆๆฅ่ฏดไธๅฅ
ๆด่ฐท P2434 [SDOI2005]ๅบ้ด
่ดชๅฟ
Rlt1296 Rlt1296
2016-11-09 18:01:14
้
่ฏปๆฐ๏ผ316
ๆฒกๆๆดๅคๆจ่ไบ๏ผ่ฟๅ้ฆ้กต
ๅ ๅ
ฅCSDN๏ผไบซๅๆด็ฒพๅ็ๅ
ๅฎนๆจ่๏ผไธ500ไธ็จๅบๅๅ
ฑๅๆ้ฟ๏ผ
ๅ
ณ้ญ
ๅ
ณ้ญ
|
__label__pos
| 0.595553 |
Articles of struct
typedef struct en Obj-c
Estoy viendo un comportamiento extraรฑo y necesitarรญa ayuda. En structure.h tengo: typedef struct { NSString *summary; NSArray *legs; NSString *copyrights; struct polylineSruct overview_polyline; struct directionBounds bounds; } route; typedef struct { NSArray *routes; NSString *status; } directions; En structure.m tengo: (directions) a_Function_that_builds_the_struct { directions direct; direct.status = @"OK"; NSMutableArray *routes = [NSMutableArray array]; for(xxx) { [โฆ]
NSUndoManager y GLKit
Estoy tratando de admitir deshacer / rehacer en una aplicaciรณn de iOS que utiliza GLKit . Cuando trato lo siguiente: GLKVector3 currentTranslation = _panningObject.translation; [[self.undoManager prepareWithInvocationTarget:_panningObject] setTranslation:currentTranslation]; Tengo un crash: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '+[NSMethodSignature signatureWithObjCTypes:]: unsupported type encoding spec '(' in '4(_GLKVector3={?=fff}{?=fff}{?=fff}[3f])8'' ยฟAlgunas ideas?
Asignar / Desasignar matriz dinรกmica de pointers de estructura en C
digamos que tengo las siguientes dos definiciones de estructura en C. struct child { int x; }; struct Yoyo { struct child **Kids; }; ยฟCรณmo iba a destinar la memory para Kids? digamos, por ejemplo, tengo alguna function Yoyo_create (). static struct Yoyo * yoyo_create() { int n = 32000; struct Yoyo *y; y = [โฆ]
Escritura de estructura en un flujo de salida en Swift 3
Estoy intentando pasar una estructura en un flujo para que luego se envรญe por el zรณcalo a otro dispositivo. El cรณdigo funciona, pero se envรญan los datos incorrectos. Y cada vez que se envรญan datos aleatorios, entonces estoy haciendo algo mal. ยฟDรณnde estoy equivocado? Aquรญ estรก mi cรณdigo: public struct PStypes { var u: UInt32 [โฆ]
Almacenar una matriz C de estructuras C en NSData
Tengo una estructura de C muy simple asรญ: typedef struct { int tag; CGPoint position; } Place; Es decir, todos los types escalares y sin pointers en la estructura. Entonces tengo una variedad de esos puntos asรญ: Place *dynamicPlaces = calloc(numberOfPlaces, sizeof(Place)); Entonces, cada elemento de dynamicPlaces deberรญa ser (a less que haya mezclado algo [โฆ]
ยฟCรณmo save una struct en NSUserDefaults en Objective-c?
ยฟCรณmo NSUserDefaults una struct personalizada a NSUserDefaults ? struct Paging { NSInteger currentPage; NSInteger totalResults; NSInteger resultsPerPage; }; typedef struct Paging Paging; NSUserDefaults *userInfo = [NSUserDefaults standardUserDefaults]; [userInfo setStruct:paging forKey:@"paging"]; [userInfo synchronize]; El cรณdigo anterior produce una advertencia de time de ejecuciรณn: *** -[NSUserDefaults setObject:forKey:]: Attempt to insert non-property valueโฆ
ยฟCรณmo devuelve una estructura como CLLocationCoordinate2D de un mรฉtodo de class correctamente?
Esta pregunta utiliza CLLocationCoordinate2D como ejemplo, pero esto tambiรฉn se aplica a otras estructuras, como CGPoint (aunque generalmente se incluyen automรกticamente). Quiero usar CLLocationCoordinate2D como un valor de retorno en un mรฉtodo de class. Si se tratara de un object, podrรญa escribir lo siguiente en la parte superior y estarรญa bien, siempre que el file [โฆ]
Objetivo C: el mรฉtodo personalizado "make" similar a CLLocationCoordinate2DMake
He escrito una estructura personalizada en un file de encabezado separado. Se ve algo como esto typedef struct RequestSpecifics { BOOL includeMetaData; BOOL includeVerboseData; } RequestSpecifics; Ahora quiero hacer un mรฉtodo personalizado de 'make', similar al mรฉtodo CLLocationCoordinate2DMake CLLocationCoordinate2DMake de la estructura de CoreLocation. He intentado dos forms diferentes. Si bien ambas forms no dan [โฆ]
Pasar el puntero de estructura rรกpido a la function C
Digamos que tengo una estructura Swift llamada Foo struct Foo { var a,b,c : Float var d : Double init() { a = 0 b = 0 c = 0 d = 0 } } Swift sizeof (Foo) imprime 24 bytes, 4 bytes para campos flotantes, 8 para dobles y 4 bytes de relleno en [โฆ]
__unsafe_unretained NSString struct var
Estoy intentando crear una estructura que tenga varias variables diferentes de diferentes types. varios de los types son de NSString, pero al intentar hacer esto, se produjo un error ARC forbids Objective-C objects in structs or unions asรญ que habiendo leรญdo sobre el error que veo es sensato agregar __unsafe_unretained antes de la statement NSString, [โฆ]
|
__label__pos
| 0.983184 |
Table of Contents
1. Introduction
2. What is HTTP?
3. What is HTTPS?
4. Benefits of HTTPS
5. Why Should You Migrate from HTTP to HTTPS?
6. Preparing for HTTP to HTTPS Migration
โข Backup Your Website
โข Update Internal Links
โข Update External Links
โข Update Social Media Links
7. Purchasing an SSL Certificate
8. Installing and Configuring the SSL Certificate
9. Update Google Analytics Settings
10. Updating Google Search Console
11. Testing the HTTPS Migration
12. Monitoring and Troubleshooting
13. Updating Robots.txt File
14. Redirecting HTTP to HTTPS
15. Finalizing the Migration
Introduction
In the modern digital landscape, website security is of utmost importance. One crucial aspect of website security is migrating from HTTP to HTTPS. This article will guide you through the process of changing settings for Google Analytics and Google Search Console during an HTTP to HTTPS site migration.
What is HTTP?
HTTP (Hypertext Transfer Protocol) is a protocol used for transmitting and receiving information on the Internet. It allows the transfer of data between a web server and a web browser.
What is HTTPS?
HTTPS (Hypertext Transfer Protocol Secure) is a secure version of HTTP. It encrypts the data transmitted between a web server and a web browser, ensuring the confidentiality and integrity of the information.
Benefits of HTTPS
There are several benefits to migrating from HTTP to HTTPS:
1. Improved Security: HTTPS encrypts data, protecting it from potential eavesdropping and tampering.
2. Trust and Credibility: HTTPS displays a padlock symbol and โSecureโ label in web browsers, instilling trust and confidence in your website visitors.
3. SEO Advantages: Google considers HTTPS as a ranking signal, potentially improving your websiteโs search engine rankings.
4. Protection Against Data Modification: HTTPS ensures the integrity of data, preventing unauthorized modification during transmission.
5. Compliance: Many regulations and standards, such as the General Data Protection Regulation (GDPR), require websites to use HTTPS when handling sensitive data.
Why Should You Migrate from HTTP to HTTPS?
Migrating from HTTP to HTTPS is essential to provide a secure browsing experience for your website visitors. It helps protect user data, build trust, and improve your websiteโs visibility in search engine results. Failure to migrate to HTTPS can result in security vulnerabilities and a negative impact on user experience and SEO rankings.
Preparing for HTTP to HTTPS Migration
Before initiating the migration process, itโs crucial to make adequate preparations. Follow these steps to ensure a smooth transition:
Backup Your Website
Before making any changes, create a backup of your entire website, including databases and files. This ensures you have a copy of your website in case of any unforeseen issues during the migration process.
Update Internal Links
Review your websiteโs internal links and ensure they point to the HTTPS versions of your pages. Update any hardcoded HTTP links in your websiteโs HTML, CSS, and JavaScript files.
Update External Links
If your website has external links, such as backlinks from other websites, contact the respective website owners and request them to update the links to HTTPS versions.
Update Social Media Links
Update the links to your website on social media profiles and posts. This ensures that the shared links lead to the secure HTTPS version of your website.
Purchasing an SSL Certificate
To enable HTTPS on your website, you need to purchase and install an SSL (Secure Socket Layer) certificate. An SSL certificate is a digital certificate that verifies the authenticity of your website and enables secure communication between the web server and the browser.
There are various types of SSL certificates available, such as domain validation (DV), organization validation (OV), and extended validation (EV) certificates. Choose the one that best suits your websiteโs needs.
When purchasing an SSL certificate, ensure that it covers all the necessary subdomains and provides adequate encryption strength.
Installing and Configuring the SSL Certificate
Once you have obtained the SSL certificate, follow these steps to install and configure it on your website:
1. Generate a Certificate Signing Request (CSR) from your web hosting control panel.
2. Provide the CSR to the SSL certificate provider.
3. Complete the domain ownership verification process as specified by the certificate provider.
4. After verification, you will receive the SSL certificate files.
5. Log in to your web hosting control panel and locate the SSL/TLS settings.
6. Upload the SSL certificate files and configure the appropriate settings.
7. Verify that the SSL certificate is installed correctly by accessing your website via HTTPS. The browser should display a secure connection with a padlock symbol.
Update Google Analytics Settings
During the HTTP to HTTPS migration, it is crucial to update your Google Analytics settings to ensure accurate tracking of website traffic. Follow these steps to make the necessary changes:
1. Log in to your Google Analytics account.
2. Navigate to the โAdminโ section.
3. In the โPropertyโ column, click on โProperty Settings.โ
4. Update the โDefault URLโ and โProperty Nameโ fields to reflect the new HTTPS version of your website.
5. Save the changes.
6. If you have any custom filters or settings in Google Analytics, review them and ensure they are compatible with the HTTPS version of your website.
Updating Google Search Console
Google Search Console (previously known as Google Webmaster Tools) is a powerful tool for monitoring and optimizing your websiteโs presence in Google search results. To update the settings in Google Search Console, follow these steps:
1. Log in to your Google Search Console account.
2. Select the property (website) you want to update.
3. Click on the gear icon in the top right corner and select โSite Settings.โ
4. Update the โPreferred Domainโ to the HTTPS version of your website.
5. Submit the updated sitemap to Google Search Console to ensure proper indexing of your HTTPS pages.
Testing the HTTPS Migration
After completing the necessary changes and updates, it is essential to test the HTTPS migration to ensure everything is functioning correctly. Follow these steps to perform a thorough test:
1. Use an online SSL checker to verify that your SSL certificate is installed correctly and functioning properly.
2. Manually test various pages of your website by accessing them through HTTPS URLs. Ensure that all the elements, including images, scripts, and stylesheets, are loading securely.
3. Test any forms or interactive elements on your website to ensure they are submitting data securely.
4. Verify that all internal and external links are correctly redirected to their HTTPS counterparts.
Monitoring and Troubleshooting
Monitor your website closely after the HTTPS migration to identify any issues or errors. Keep an eye on website traffic, user behavior, and search engine rankings. If you encounter any problems, refer to the following troubleshooting steps:
1. Check for mixed content warnings. Mixed content occurs when your website loads insecure (HTTP) resources on an HTTPS page. Update all resources to use HTTPS.
2. Monitor server logs for any errors or warnings related to the HTTPS migration.
3. Test your websiteโs performance and loading speed to ensure there are no significant delays or issues.
4. Use online tools and scanners to check for vulnerabilities or security loopholes.
5. If you encounter any major issues that you are unable to resolve, consider seeking assistance from a professional web developer or security expert.
Updating Robots.txt File
The robots.txt file plays a crucial role in guiding search engine bots on how to crawl and index your website. During the HTTPS migration, itโs important to update the robots.txt file to ensure search engines properly recognize and index your HTTPS pages. Follow these steps:
1. Open your websiteโs robots.txt file.
2. Update any HTTP URLs to their HTTPS counterparts.
3. Save the changes and upload the updated robots.txt file to your websiteโs root directory.
Redirecting HTTP to HTTPS
To ensure a seamless transition for your website visitors, itโs important to set up proper redirects from HTTP to HTTPS. Redirects automatically send users who enter the HTTP version of your website to the corresponding HTTPS version. There are different methods to set up redirects, including using .htaccess file or server-side configurations. Consult your web hosting provider or refer to relevant documentation for specific instructions.
Finalizing the Migration
Once you have completed all the necessary steps and confirmed that your HTTPS website is functioning correctly, itโs time to finalize the migration. Here are a few final actions to take:
1. Notify your website visitors, subscribers, and customers about the HTTPS migration. Inform them of the added security measures and the benefits of a secure browsing experience.
2. Update any external platforms, such as advertising networks or social media profiles, to reflect the new HTTPS URLs.
3. Update any other relevant tools or integrations that interact with your website, such as email marketing software or e-commerce platforms.
4. Perform a final check to ensure all the pages on your website are accessible via HTTPS.
Congratulations! You have successfully migrated your website from HTTP to HTTPS, providing a secure browsing experience for your visitors and improving your websiteโs search engine visibility.
HTTP to HTTPS Site Migration
WATCH THIS VIDEO https://www.youtube.com/watch?v=fOsNkVegMCQ
Conclusion
Migrating your website from HTTP to HTTPS is a crucial step in enhancing security, building trust, and improving your websiteโs performance in search engine rankings. By following the outlined steps and ensuring proper configuration of Google Analytics and Google Search Console settings, you can smoothly transition to HTTPS. Remember to regularly monitor your website and promptly address any issues that may arise.
Frequently Asked Questions (FAQs)
1. Will migrating from HTTP to HTTPS affect my search engine rankings? Migrating to HTTPS can potentially have a positive impact on your search engine rankings. Google considers HTTPS as a ranking signal, and having a secure website can boost your visibility in search results.
2. Do I need to purchase an SSL certificate for every subdomain of my website? It depends on your specific requirements and the type of SSL certificate you choose. Some SSL certificates cover multiple subdomains, while others are valid for a single domain. Evaluate your needs and select the appropriate SSL certificate.
3. How long does the HTTPS migration process typically take? The time required for the migration process can vary depending on the size and complexity of your website. Itโs important to allocate sufficient time for planning, testing, and implementation. The process can take anywhere from a few hours to a few days.
4. What happens to my existing SEO rankings and backlinks after migrating to HTTPS? When properly implemented, the migration from HTTP to HTTPS should not have a significant negative impact on your SEO rankings. However, itโs important to set up proper redirects and update any backlinks to the new HTTPS URLs to ensure a seamless transition and maintain the authority of your website.
5. Do I need to update all my internal links to HTTPS manually? Yes, itโs important to update all internal links on your website to the HTTPS version manually. This includes links within your content, navigation menus, footer, and any other areas where internal links are present. By doing so, you ensure a consistent and secure browsing experience for your users.
In conclusion, migrating your website from HTTP to HTTPS is essential for security, trust, and improved search engine performance. By following the outlined steps, including updating Google Analytics and Google Search Console settings, purchasing and installing an SSL certificate, and properly redirecting HTTP to HTTPS, you can successfully make the transition. Remember to monitor your website after the migration and address any issues promptly. Enjoy the benefits of a secure and optimized website!
Thanks
Abhay Ranjan
|
__label__pos
| 0.912708 |
Uploaded image for project: 'Hive'
1. Hive
2. HIVE-7445
Improve LOGS for Hive when a query is not able to acquire locks
Details
โข Type: Improvement
โข Status: Closed
โข Priority: Minor
โข Resolution: Fixed
โข Affects Version/s: 0.13.1
โข Fix Version/s: 0.14.0
โข Component/s: Diagnosability, Logging
โข Labels:
None
Description
Currently the error thrown when you cannot acquire a lock is:
Error in acquireLocks...
FAILED: Error in acquiring locks: Locks on the underlying objects cannot be acquired. retry after some time
This error is insufficient if the user would like to understand what is blocking them and insufficient from a diagnosability perspective because it is difficult to know what query is blocking the lock acquisition.
Attachments
1. HIVE-7445.patch
4 kB
Chaoyu Tang
2. HIVE-7445.1.patch
12 kB
Chaoyu Tang
3. HIVE-7445.2.patch
12 kB
Chaoyu Tang
4. HIVE-7445.3.patch
12 kB
Chaoyu Tang
Activity
People
โข Assignee:
ctang.ma Chaoyu Tang
Reporter:
ctang.ma Chaoyu Tang
โข Votes:
0 Vote for this issue
Watchers:
3 Start watching this issue
Dates
โข Created:
Updated:
Resolved:
|
__label__pos
| 0.976281 |
blob: a1d0d2657bbc7d99a37f29bb7b5e39d513b7d079 [file] [log] [blame]
/* GStreamer
* Copyright (C) 2004 Wim Taymans <[email protected]>
*
* gstmessage.c: GstMessage subsystem
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Library General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Library General Public License for more details.
*
* You should have received a copy of the GNU Library General Public
* License along with this library; if not, write to the
* Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
* Boston, MA 02110-1301, USA.
*/
/**
* SECTION:gstmessage
* @short_description: Lightweight objects to signal the application of
* pipeline events
* @see_also: #GstBus, #GstMiniObject, #GstElement
*
* Messages are implemented as a subclass of #GstMiniObject with a generic
* #GstStructure as the content. This allows for writing custom messages without
* requiring an API change while allowing a wide range of different types
* of messages.
*
* Messages are posted by objects in the pipeline and are passed to the
* application using the #GstBus.
*
* The basic use pattern of posting a message on a #GstBus is as follows:
* |[<!-- language="C" -->
* gst_bus_post (bus, gst_message_new_eos());
* ]|
*
* A #GstElement usually posts messages on the bus provided by the parent
* container using gst_element_post_message().
*/
#include "gst_private.h"
#include <string.h> /* memcpy */
#include "gsterror.h"
#include "gstenumtypes.h"
#include "gstinfo.h"
#include "gstmessage.h"
#include "gsttaglist.h"
#include "gstutils.h"
#include "gstquark.h"
#include "gstvalue.h"
typedef struct
{
GstMessage message;
GstStructure *structure;
} GstMessageImpl;
#define GST_MESSAGE_STRUCTURE(m) (((GstMessageImpl *)(m))->structure)
typedef struct
{
const gint type;
const gchar *name;
GQuark quark;
} GstMessageQuarks;
static GstMessageQuarks message_quarks[] = {
{GST_MESSAGE_UNKNOWN, "unknown", 0},
{GST_MESSAGE_EOS, "eos", 0},
{GST_MESSAGE_ERROR, "error", 0},
{GST_MESSAGE_WARNING, "warning", 0},
{GST_MESSAGE_INFO, "info", 0},
{GST_MESSAGE_TAG, "tag", 0},
{GST_MESSAGE_BUFFERING, "buffering", 0},
{GST_MESSAGE_STATE_CHANGED, "state-changed", 0},
{GST_MESSAGE_STATE_DIRTY, "state-dirty", 0},
{GST_MESSAGE_STEP_DONE, "step-done", 0},
{GST_MESSAGE_CLOCK_PROVIDE, "clock-provide", 0},
{GST_MESSAGE_CLOCK_LOST, "clock-lost", 0},
{GST_MESSAGE_NEW_CLOCK, "new-clock", 0},
{GST_MESSAGE_STRUCTURE_CHANGE, "structure-change", 0},
{GST_MESSAGE_STREAM_STATUS, "stream-status", 0},
{GST_MESSAGE_APPLICATION, "application", 0},
{GST_MESSAGE_ELEMENT, "element", 0},
{GST_MESSAGE_SEGMENT_START, "segment-start", 0},
{GST_MESSAGE_SEGMENT_DONE, "segment-done", 0},
{GST_MESSAGE_DURATION_CHANGED, "duration-changed", 0},
{GST_MESSAGE_LATENCY, "latency", 0},
{GST_MESSAGE_ASYNC_START, "async-start", 0},
{GST_MESSAGE_ASYNC_DONE, "async-done", 0},
{GST_MESSAGE_REQUEST_STATE, "request-state", 0},
{GST_MESSAGE_STEP_START, "step-start", 0},
{GST_MESSAGE_QOS, "qos", 0},
{GST_MESSAGE_PROGRESS, "progress", 0},
{GST_MESSAGE_TOC, "toc", 0},
{GST_MESSAGE_RESET_TIME, "reset-time", 0},
{GST_MESSAGE_STREAM_START, "stream-start", 0},
{GST_MESSAGE_NEED_CONTEXT, "need-context", 0},
{GST_MESSAGE_HAVE_CONTEXT, "have-context", 0},
{GST_MESSAGE_DEVICE_ADDED, "device-added", 0},
{GST_MESSAGE_DEVICE_REMOVED, "device-removed", 0},
{GST_MESSAGE_PROPERTY_NOTIFY, "property-notify", 0},
{GST_MESSAGE_STREAM_COLLECTION, "stream-collection", 0},
{GST_MESSAGE_STREAMS_SELECTED, "streams-selected", 0},
{GST_MESSAGE_REDIRECT, "redirect", 0},
{0, NULL, 0}
};
static GQuark details_quark = 0;
GType _gst_message_type = 0;
GST_DEFINE_MINI_OBJECT_TYPE (GstMessage, gst_message);
void
_priv_gst_message_initialize (void)
{
gint i;
GST_CAT_INFO (GST_CAT_GST_INIT, "init messages");
for (i = 0; message_quarks[i].name; i++) {
message_quarks[i].quark =
g_quark_from_static_string (message_quarks[i].name);
}
details_quark = g_quark_from_static_string ("details");
_gst_message_type = gst_message_get_type ();
}
/**
* gst_message_type_get_name:
* @type: the message type
*
* Get a printable name for the given message type. Do not modify or free.
*
* Returns: a reference to the static name of the message.
*/
const gchar *
gst_message_type_get_name (GstMessageType type)
{
gint i;
for (i = 0; message_quarks[i].name; i++) {
if (type == message_quarks[i].type)
return message_quarks[i].name;
}
return "unknown";
}
/**
* gst_message_type_to_quark:
* @type: the message type
*
* Get the unique quark for the given message type.
*
* Returns: the quark associated with the message type
*/
GQuark
gst_message_type_to_quark (GstMessageType type)
{
gint i;
for (i = 0; message_quarks[i].name; i++) {
if (type == message_quarks[i].type)
return message_quarks[i].quark;
}
return 0;
}
static gboolean
_gst_message_dispose (GstMessage * message)
{
gboolean do_free = TRUE;
if (GST_MINI_OBJECT_FLAG_IS_SET (message, GST_MESSAGE_FLAG_ASYNC_DELIVERY)) {
/* revive message, so bus can finish with it and clean it up */
gst_message_ref (message);
GST_INFO ("[msg %p] signalling async free", message);
GST_MESSAGE_LOCK (message);
GST_MESSAGE_SIGNAL (message);
GST_MESSAGE_UNLOCK (message);
/* don't free it yet, let bus finish with it first */
do_free = FALSE;
}
return do_free;
}
static void
_gst_message_free (GstMessage * message)
{
GstStructure *structure;
g_return_if_fail (message != NULL);
GST_CAT_LOG (GST_CAT_MESSAGE, "finalize message %p, %s from %s", message,
GST_MESSAGE_TYPE_NAME (message), GST_MESSAGE_SRC_NAME (message));
if (GST_MESSAGE_SRC (message)) {
gst_object_unref (GST_MESSAGE_SRC (message));
GST_MESSAGE_SRC (message) = NULL;
}
structure = GST_MESSAGE_STRUCTURE (message);
if (structure) {
gst_structure_set_parent_refcount (structure, NULL);
gst_structure_free (structure);
}
g_slice_free1 (sizeof (GstMessageImpl), message);
}
static void
gst_message_init (GstMessageImpl * message, GstMessageType type,
GstObject * src);
static GstMessage *
_gst_message_copy (GstMessage * message)
{
GstMessageImpl *copy;
GstStructure *structure;
GST_CAT_LOG (GST_CAT_MESSAGE, "copy message %p, %s from %s", message,
GST_MESSAGE_TYPE_NAME (message),
GST_OBJECT_NAME (GST_MESSAGE_SRC (message)));
copy = g_slice_new0 (GstMessageImpl);
gst_message_init (copy, GST_MESSAGE_TYPE (message),
GST_MESSAGE_SRC (message));
GST_MESSAGE_TIMESTAMP (copy) = GST_MESSAGE_TIMESTAMP (message);
GST_MESSAGE_SEQNUM (copy) = GST_MESSAGE_SEQNUM (message);
structure = GST_MESSAGE_STRUCTURE (message);
if (structure) {
GST_MESSAGE_STRUCTURE (copy) = gst_structure_copy (structure);
gst_structure_set_parent_refcount (GST_MESSAGE_STRUCTURE (copy),
©->message.mini_object.refcount);
} else {
GST_MESSAGE_STRUCTURE (copy) = NULL;
}
return GST_MESSAGE_CAST (copy);
}
static void
gst_message_init (GstMessageImpl * message, GstMessageType type,
GstObject * src)
{
gst_mini_object_init (GST_MINI_OBJECT_CAST (message), 0, _gst_message_type,
(GstMiniObjectCopyFunction) _gst_message_copy,
(GstMiniObjectDisposeFunction) _gst_message_dispose,
(GstMiniObjectFreeFunction) _gst_message_free);
GST_MESSAGE_TYPE (message) = type;
if (src)
gst_object_ref (src);
GST_MESSAGE_SRC (message) = src;
GST_MESSAGE_TIMESTAMP (message) = GST_CLOCK_TIME_NONE;
GST_MESSAGE_SEQNUM (message) = gst_util_seqnum_next ();
}
/**
* gst_message_new_custom:
* @type: The #GstMessageType to distinguish messages
* @src: (transfer none) (allow-none): The object originating the message.
* @structure: (transfer full) (allow-none): the structure for the
* message. The message will take ownership of the structure.
*
* Create a new custom-typed message. This can be used for anything not
* handled by other message-specific functions to pass a message to the
* app. The structure field can be %NULL.
*
* Returns: (transfer full): The new message.
*
* MT safe.
*/
GstMessage *
gst_message_new_custom (GstMessageType type, GstObject * src,
GstStructure * structure)
{
GstMessageImpl *message;
message = g_slice_new0 (GstMessageImpl);
GST_CAT_LOG (GST_CAT_MESSAGE, "source %s: creating new message %p %s",
(src ? GST_OBJECT_NAME (src) : "NULL"), message,
gst_message_type_get_name (type));
if (structure) {
/* structure must not have a parent */
if (!gst_structure_set_parent_refcount (structure,
&message->message.mini_object.refcount))
goto had_parent;
}
gst_message_init (message, type, src);
GST_MESSAGE_STRUCTURE (message) = structure;
return GST_MESSAGE_CAST (message);
/* ERRORS */
had_parent:
{
g_slice_free1 (sizeof (GstMessageImpl), message);
g_warning ("structure is already owned by another object");
return NULL;
}
}
/**
* gst_message_get_seqnum:
* @message: A #GstMessage.
*
* Retrieve the sequence number of a message.
*
* Messages have ever-incrementing sequence numbers, which may also be set
* explicitly via gst_message_set_seqnum(). Sequence numbers are typically used
* to indicate that a message corresponds to some other set of messages or
* events, for example a SEGMENT_DONE message corresponding to a SEEK event. It
* is considered good practice to make this correspondence when possible, though
* it is not required.
*
* Note that events and messages share the same sequence number incrementor;
* two events or messages will never have the same sequence number unless
* that correspondence was made explicitly.
*
* Returns: The message's sequence number.
*
* MT safe.
*/
guint32
gst_message_get_seqnum (GstMessage * message)
{
g_return_val_if_fail (GST_IS_MESSAGE (message), -1);
return GST_MESSAGE_SEQNUM (message);
}
/**
* gst_message_set_seqnum:
* @message: A #GstMessage.
* @seqnum: A sequence number.
*
* Set the sequence number of a message.
*
* This function might be called by the creator of a message to indicate that
* the message relates to other messages or events. See gst_message_get_seqnum()
* for more information.
*
* MT safe.
*/
void
gst_message_set_seqnum (GstMessage * message, guint32 seqnum)
{
g_return_if_fail (GST_IS_MESSAGE (message));
GST_MESSAGE_SEQNUM (message) = seqnum;
}
/**
* gst_message_new_eos:
* @src: (transfer none) (allow-none): The object originating the message.
*
* Create a new eos message. This message is generated and posted in
* the sink elements of a GstBin. The bin will only forward the EOS
* message to the application if all sinks have posted an EOS message.
*
* Returns: (transfer full): The new eos message.
*
* MT safe.
*/
GstMessage *
gst_message_new_eos (GstObject * src)
{
GstMessage *message;
message = gst_message_new_custom (GST_MESSAGE_EOS, src, NULL);
return message;
}
/**
* gst_message_new_error_with_details:
* @src: (transfer none) (allow-none): The object originating the message.
* @error: (transfer none): The GError for this message.
* @debug: A debugging string.
* @details: (transfer full): (allow-none): A GstStructure with details
*
* Create a new error message. The message will copy @error and
* @debug. This message is posted by element when a fatal event
* occurred. The pipeline will probably (partially) stop. The application
* receiving this message should stop the pipeline.
*
* Returns: (transfer full): the new error message.
*
* Since: 1.10
*/
GstMessage *
gst_message_new_error_with_details (GstObject * src, GError * error,
const gchar * debug, GstStructure * details)
{
GstMessage *message;
GstStructure *structure;
if (!g_utf8_validate (debug, -1, NULL)) {
debug = NULL;
g_warning ("Trying to set debug field of error message, but "
"string is not valid UTF-8. Please file a bug.");
}
structure = gst_structure_new_id (GST_QUARK (MESSAGE_ERROR),
GST_QUARK (GERROR), G_TYPE_ERROR, error,
GST_QUARK (DEBUG), G_TYPE_STRING, debug, NULL);
message = gst_message_new_custom (GST_MESSAGE_ERROR, src, structure);
if (details) {
GValue v = G_VALUE_INIT;
g_value_init (&v, GST_TYPE_STRUCTURE);
g_value_take_boxed (&v, details);
gst_structure_id_take_value (GST_MESSAGE_STRUCTURE (message), details_quark,
&v);
}
return message;
}
/**
* gst_message_new_error:
* @src: (transfer none) (allow-none): The object originating the message.
* @error: (transfer none): The GError for this message.
* @debug: A debugging string.
*
* Create a new error message. The message will copy @error and
* @debug. This message is posted by element when a fatal event
* occurred. The pipeline will probably (partially) stop. The application
* receiving this message should stop the pipeline.
*
* Returns: (transfer full): the new error message.
*
* MT safe.
*/
GstMessage *
gst_message_new_error (GstObject * src, GError * error, const gchar * debug)
{
return gst_message_new_error_with_details (src, error, debug, NULL);
}
/**
* gst_message_parse_error_details:
* @message: The message object
* @structure: (out): A pointer to the returned details
*
* Returns the optional details structure, may be NULL if none.
* The returned structure must not be freed.
*
* Since: 1.10
*/
void
gst_message_parse_error_details (GstMessage * message,
const GstStructure ** structure)
{
const GValue *v;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_ERROR);
g_return_if_fail (structure != NULL);
*structure = NULL;
v = gst_structure_id_get_value (GST_MESSAGE_STRUCTURE (message),
details_quark);
if (v) {
*structure = g_value_get_boxed (v);
}
}
/**
* gst_message_new_warning_with_details:
* @src: (transfer none) (allow-none): The object originating the message.
* @error: (transfer none): The GError for this message.
* @debug: A debugging string.
* @details: (transfer full): (allow-none): A GstStructure with details
*
* Create a new warning message. The message will make copies of @error and
* @debug.
*
* Returns: (transfer full): the new warning message.
*
* Since: 1.10
*/
GstMessage *
gst_message_new_warning_with_details (GstObject * src, GError * error,
const gchar * debug, GstStructure * details)
{
GstMessage *message;
GstStructure *structure;
if (!g_utf8_validate (debug, -1, NULL)) {
debug = NULL;
g_warning ("Trying to set debug field of warning message, but "
"string is not valid UTF-8. Please file a bug.");
}
structure = gst_structure_new_id (GST_QUARK (MESSAGE_WARNING),
GST_QUARK (GERROR), G_TYPE_ERROR, error,
GST_QUARK (DEBUG), G_TYPE_STRING, debug, NULL);
message = gst_message_new_custom (GST_MESSAGE_WARNING, src, structure);
if (details) {
GValue v = G_VALUE_INIT;
g_value_init (&v, GST_TYPE_STRUCTURE);
g_value_take_boxed (&v, details);
gst_structure_id_take_value (GST_MESSAGE_STRUCTURE (message), details_quark,
&v);
}
return message;
}
/**
* gst_message_new_warning:
* @src: (transfer none) (allow-none): The object originating the message.
* @error: (transfer none): The GError for this message.
* @debug: A debugging string.
*
* Create a new warning message. The message will make copies of @error and
* @debug.
*
* Returns: (transfer full): the new warning message.
*
* MT safe.
*/
GstMessage *
gst_message_new_warning (GstObject * src, GError * error, const gchar * debug)
{
return gst_message_new_warning_with_details (src, error, debug, NULL);
}
/**
* gst_message_parse_warning_details:
* @message: The message object
* @structure: (out): A pointer to the returned details structure
*
* Returns the optional details structure, may be NULL if none
* The returned structure must not be freed.
*
* Since: 1.10
*/
void
gst_message_parse_warning_details (GstMessage * message,
const GstStructure ** structure)
{
const GValue *v;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_WARNING);
g_return_if_fail (structure != NULL);
*structure = NULL;
v = gst_structure_id_get_value (GST_MESSAGE_STRUCTURE (message),
details_quark);
if (v) {
*structure = g_value_get_boxed (v);
}
}
/**
* gst_message_new_info_with_details:
* @src: (transfer none) (allow-none): The object originating the message.
* @error: (transfer none): The GError for this message.
* @debug: A debugging string.
* @details: (transfer full): (allow-none): A GstStructure with details
*
* Create a new info message. The message will make copies of @error and
* @debug.
*
* Returns: (transfer full): the new warning message.
*
* Since: 1.10
*/
GstMessage *
gst_message_new_info_with_details (GstObject * src, GError * error,
const gchar * debug, GstStructure * details)
{
GstMessage *message;
GstStructure *structure;
if (!g_utf8_validate (debug, -1, NULL)) {
debug = NULL;
g_warning ("Trying to set debug field of info message, but "
"string is not valid UTF-8. Please file a bug.");
}
structure = gst_structure_new_id (GST_QUARK (MESSAGE_INFO),
GST_QUARK (GERROR), G_TYPE_ERROR, error,
GST_QUARK (DEBUG), G_TYPE_STRING, debug, NULL);
message = gst_message_new_custom (GST_MESSAGE_INFO, src, structure);
if (details) {
GValue v = G_VALUE_INIT;
g_value_init (&v, GST_TYPE_STRUCTURE);
g_value_take_boxed (&v, details);
gst_structure_id_take_value (GST_MESSAGE_STRUCTURE (message), details_quark,
&v);
}
return message;
}
/**
* gst_message_new_info:
* @src: (transfer none) (allow-none): The object originating the message.
* @error: (transfer none): The GError for this message.
* @debug: A debugging string.
*
* Create a new info message. The message will make copies of @error and
* @debug.
*
* Returns: (transfer full): the new info message.
*
* MT safe.
*/
GstMessage *
gst_message_new_info (GstObject * src, GError * error, const gchar * debug)
{
return gst_message_new_info_with_details (src, error, debug, NULL);
}
/**
* gst_message_parse_info_details:
* @message: The message object
* @structure: (out): A pointer to the returned details structure
*
* Returns the optional details structure, may be NULL if none
* The returned structure must not be freed.
*
* Since: 1.10
*/
void
gst_message_parse_info_details (GstMessage * message,
const GstStructure ** structure)
{
const GValue *v;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_INFO);
g_return_if_fail (structure != NULL);
*structure = NULL;
v = gst_structure_id_get_value (GST_MESSAGE_STRUCTURE (message),
details_quark);
if (v) {
*structure = g_value_get_boxed (v);
}
}
/**
* gst_message_new_tag:
* @src: (transfer none) (allow-none): The object originating the message.
* @tag_list: (transfer full): the tag list for the message.
*
* Create a new tag message. The message will take ownership of the tag list.
* The message is posted by elements that discovered a new taglist.
*
* Returns: (transfer full): the new tag message.
*
* MT safe.
*/
GstMessage *
gst_message_new_tag (GstObject * src, GstTagList * tag_list)
{
GstStructure *s;
GstMessage *message;
GValue val = G_VALUE_INIT;
g_return_val_if_fail (GST_IS_TAG_LIST (tag_list), NULL);
s = gst_structure_new_id_empty (GST_QUARK (MESSAGE_TAG));
g_value_init (&val, GST_TYPE_TAG_LIST);
g_value_take_boxed (&val, tag_list);
gst_structure_id_take_value (s, GST_QUARK (TAGLIST), &val);
message = gst_message_new_custom (GST_MESSAGE_TAG, src, s);
return message;
}
/**
* gst_message_new_buffering:
* @src: (transfer none) (allow-none): The object originating the message.
* @percent: The buffering percent
*
* Create a new buffering message. This message can be posted by an element that
* needs to buffer data before it can continue processing. @percent should be a
* value between 0 and 100. A value of 100 means that the buffering completed.
*
* When @percent is < 100 the application should PAUSE a PLAYING pipeline. When
* @percent is 100, the application can set the pipeline (back) to PLAYING.
* The application must be prepared to receive BUFFERING messages in the
* PREROLLING state and may only set the pipeline to PLAYING after receiving a
* message with @percent set to 100, which can happen after the pipeline
* completed prerolling.
*
* MT safe.
*
* Returns: (transfer full): The new buffering message.
*/
GstMessage *
gst_message_new_buffering (GstObject * src, gint percent)
{
GstMessage *message;
GstStructure *structure;
gint64 buffering_left;
g_return_val_if_fail (percent >= 0 && percent <= 100, NULL);
buffering_left = (percent == 100 ? 0 : -1);
structure = gst_structure_new_id (GST_QUARK (MESSAGE_BUFFERING),
GST_QUARK (BUFFER_PERCENT), G_TYPE_INT, percent,
GST_QUARK (BUFFERING_MODE), GST_TYPE_BUFFERING_MODE, GST_BUFFERING_STREAM,
GST_QUARK (AVG_IN_RATE), G_TYPE_INT, -1,
GST_QUARK (AVG_OUT_RATE), G_TYPE_INT, -1,
GST_QUARK (BUFFERING_LEFT), G_TYPE_INT64, buffering_left, NULL);
message = gst_message_new_custom (GST_MESSAGE_BUFFERING, src, structure);
return message;
}
/**
* gst_message_new_state_changed:
* @src: (transfer none) (allow-none): The object originating the message.
* @oldstate: the previous state
* @newstate: the new (current) state
* @pending: the pending (target) state
*
* Create a state change message. This message is posted whenever an element
* changed its state.
*
* Returns: (transfer full): the new state change message.
*
* MT safe.
*/
GstMessage *
gst_message_new_state_changed (GstObject * src,
GstState oldstate, GstState newstate, GstState pending)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_STATE_CHANGED),
GST_QUARK (OLD_STATE), GST_TYPE_STATE, (gint) oldstate,
GST_QUARK (NEW_STATE), GST_TYPE_STATE, (gint) newstate,
GST_QUARK (PENDING_STATE), GST_TYPE_STATE, (gint) pending, NULL);
message = gst_message_new_custom (GST_MESSAGE_STATE_CHANGED, src, structure);
return message;
}
/**
* gst_message_new_state_dirty:
* @src: (transfer none) (allow-none): The object originating the message
*
* Create a state dirty message. This message is posted whenever an element
* changed its state asynchronously and is used internally to update the
* states of container objects.
*
* Returns: (transfer full): the new state dirty message.
*
* MT safe.
*/
GstMessage *
gst_message_new_state_dirty (GstObject * src)
{
GstMessage *message;
message = gst_message_new_custom (GST_MESSAGE_STATE_DIRTY, src, NULL);
return message;
}
/**
* gst_message_new_clock_provide:
* @src: (transfer none) (allow-none): The object originating the message.
* @clock: (transfer none): the clock it provides
* @ready: %TRUE if the sender can provide a clock
*
* Create a clock provide message. This message is posted whenever an
* element is ready to provide a clock or lost its ability to provide
* a clock (maybe because it paused or became EOS).
*
* This message is mainly used internally to manage the clock
* selection.
*
* Returns: (transfer full): the new provide clock message.
*
* MT safe.
*/
GstMessage *
gst_message_new_clock_provide (GstObject * src, GstClock * clock,
gboolean ready)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_CLOCK_PROVIDE),
GST_QUARK (CLOCK), GST_TYPE_CLOCK, clock,
GST_QUARK (READY), G_TYPE_BOOLEAN, ready, NULL);
message = gst_message_new_custom (GST_MESSAGE_CLOCK_PROVIDE, src, structure);
return message;
}
/**
* gst_message_new_clock_lost:
* @src: (transfer none) (allow-none): The object originating the message.
* @clock: (transfer none): the clock that was lost
*
* Create a clock lost message. This message is posted whenever the
* clock is not valid anymore.
*
* If this message is posted by the pipeline, the pipeline will
* select a new clock again when it goes to PLAYING. It might therefore
* be needed to set the pipeline to PAUSED and PLAYING again.
*
* Returns: (transfer full): The new clock lost message.
*
* MT safe.
*/
GstMessage *
gst_message_new_clock_lost (GstObject * src, GstClock * clock)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_CLOCK_LOST),
GST_QUARK (CLOCK), GST_TYPE_CLOCK, clock, NULL);
message = gst_message_new_custom (GST_MESSAGE_CLOCK_LOST, src, structure);
return message;
}
/**
* gst_message_new_new_clock:
* @src: (transfer none) (allow-none): The object originating the message.
* @clock: (transfer none): the new selected clock
*
* Create a new clock message. This message is posted whenever the
* pipeline selects a new clock for the pipeline.
*
* Returns: (transfer full): The new new clock message.
*
* MT safe.
*/
GstMessage *
gst_message_new_new_clock (GstObject * src, GstClock * clock)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_NEW_CLOCK),
GST_QUARK (CLOCK), GST_TYPE_CLOCK, clock, NULL);
message = gst_message_new_custom (GST_MESSAGE_NEW_CLOCK, src, structure);
return message;
}
/**
* gst_message_new_structure_change:
* @src: (transfer none) (allow-none): The object originating the message.
* @type: The change type.
* @owner: (transfer none): The owner element of @src.
* @busy: Whether the structure change is busy.
*
* Create a new structure change message. This message is posted when the
* structure of a pipeline is in the process of being changed, for example
* when pads are linked or unlinked.
*
* @src should be the sinkpad that unlinked or linked.
*
* Returns: (transfer full): the new structure change message.
*
* MT safe.
*/
GstMessage *
gst_message_new_structure_change (GstObject * src, GstStructureChangeType type,
GstElement * owner, gboolean busy)
{
GstMessage *message;
GstStructure *structure;
g_return_val_if_fail (GST_IS_PAD (src), NULL);
/* g_return_val_if_fail (GST_PAD_DIRECTION (src) == GST_PAD_SINK, NULL); */
g_return_val_if_fail (GST_IS_ELEMENT (owner), NULL);
structure = gst_structure_new_id (GST_QUARK (MESSAGE_STRUCTURE_CHANGE),
GST_QUARK (TYPE), GST_TYPE_STRUCTURE_CHANGE_TYPE, type,
GST_QUARK (OWNER), GST_TYPE_ELEMENT, owner,
GST_QUARK (BUSY), G_TYPE_BOOLEAN, busy, NULL);
message = gst_message_new_custom (GST_MESSAGE_STRUCTURE_CHANGE, src,
structure);
return message;
}
/**
* gst_message_new_segment_start:
* @src: (transfer none) (allow-none): The object originating the message.
* @format: The format of the position being played
* @position: The position of the segment being played
*
* Create a new segment message. This message is posted by elements that
* start playback of a segment as a result of a segment seek. This message
* is not received by the application but is used for maintenance reasons in
* container elements.
*
* Returns: (transfer full): the new segment start message.
*
* MT safe.
*/
GstMessage *
gst_message_new_segment_start (GstObject * src, GstFormat format,
gint64 position)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_SEGMENT_START),
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (POSITION), G_TYPE_INT64, position, NULL);
message = gst_message_new_custom (GST_MESSAGE_SEGMENT_START, src, structure);
return message;
}
/**
* gst_message_new_segment_done:
* @src: (transfer none) (allow-none): The object originating the message.
* @format: The format of the position being done
* @position: The position of the segment being done
*
* Create a new segment done message. This message is posted by elements that
* finish playback of a segment as a result of a segment seek. This message
* is received by the application after all elements that posted a segment_start
* have posted the segment_done.
*
* Returns: (transfer full): the new segment done message.
*
* MT safe.
*/
GstMessage *
gst_message_new_segment_done (GstObject * src, GstFormat format,
gint64 position)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_SEGMENT_DONE),
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (POSITION), G_TYPE_INT64, position, NULL);
message = gst_message_new_custom (GST_MESSAGE_SEGMENT_DONE, src, structure);
return message;
}
/**
* gst_message_new_application:
* @src: (transfer none) (allow-none): The object originating the message.
* @structure: (transfer full): the structure for the message. The message
* will take ownership of the structure.
*
* Create a new application-typed message. GStreamer will never create these
* messages; they are a gift from us to you. Enjoy.
*
* Returns: (transfer full): The new application message.
*
* MT safe.
*/
GstMessage *
gst_message_new_application (GstObject * src, GstStructure * structure)
{
g_return_val_if_fail (structure != NULL, NULL);
return gst_message_new_custom (GST_MESSAGE_APPLICATION, src, structure);
}
/**
* gst_message_new_element:
* @src: (transfer none) (allow-none): The object originating the message.
* @structure: (transfer full): The structure for the
* message. The message will take ownership of the structure.
*
* Create a new element-specific message. This is meant as a generic way of
* allowing one-way communication from an element to an application, for example
* "the firewire cable was unplugged". The format of the message should be
* documented in the element's documentation. The structure field can be %NULL.
*
* Returns: (transfer full): The new element message.
*
* MT safe.
*/
GstMessage *
gst_message_new_element (GstObject * src, GstStructure * structure)
{
g_return_val_if_fail (structure != NULL, NULL);
return gst_message_new_custom (GST_MESSAGE_ELEMENT, src, structure);
}
/**
* gst_message_new_duration_changed:
* @src: (transfer none) (allow-none): The object originating the message.
*
* Create a new duration changed message. This message is posted by elements
* that know the duration of a stream when the duration changes. This message
* is received by bins and is used to calculate the total duration of a
* pipeline.
*
* Returns: (transfer full): The new duration-changed message.
*
* MT safe.
*/
GstMessage *
gst_message_new_duration_changed (GstObject * src)
{
GstMessage *message;
message = gst_message_new_custom (GST_MESSAGE_DURATION_CHANGED, src,
gst_structure_new_id_empty (GST_QUARK (MESSAGE_DURATION_CHANGED)));
return message;
}
/**
* gst_message_new_async_start:
* @src: (transfer none) (allow-none): The object originating the message.
*
* This message is posted by elements when they start an ASYNC state change.
*
* Returns: (transfer full): The new async_start message.
*
* MT safe.
*/
GstMessage *
gst_message_new_async_start (GstObject * src)
{
GstMessage *message;
message = gst_message_new_custom (GST_MESSAGE_ASYNC_START, src, NULL);
return message;
}
/**
* gst_message_new_async_done:
* @src: (transfer none) (allow-none): The object originating the message.
* @running_time: the desired running_time
*
* The message is posted when elements completed an ASYNC state change.
* @running_time contains the time of the desired running_time when this
* elements goes to PLAYING. A value of #GST_CLOCK_TIME_NONE for @running_time
* means that the element has no clock interaction and thus doesn't care about
* the running_time of the pipeline.
*
* Returns: (transfer full): The new async_done message.
*
* MT safe.
*/
GstMessage *
gst_message_new_async_done (GstObject * src, GstClockTime running_time)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_ASYNC_DONE),
GST_QUARK (RUNNING_TIME), G_TYPE_UINT64, running_time, NULL);
message = gst_message_new_custom (GST_MESSAGE_ASYNC_DONE, src, structure);
return message;
}
/**
* gst_message_new_latency:
* @src: (transfer none) (allow-none): The object originating the message.
*
* This message can be posted by elements when their latency requirements have
* changed.
*
* Returns: (transfer full): The new latency message.
*
* MT safe.
*/
GstMessage *
gst_message_new_latency (GstObject * src)
{
GstMessage *message;
message = gst_message_new_custom (GST_MESSAGE_LATENCY, src, NULL);
return message;
}
/**
* gst_message_new_request_state:
* @src: (transfer none) (allow-none): The object originating the message.
* @state: The new requested state
*
* This message can be posted by elements when they want to have their state
* changed. A typical use case would be an audio server that wants to pause the
* pipeline because a higher priority stream is being played.
*
* Returns: (transfer full): the new request state message.
*
* MT safe.
*/
GstMessage *
gst_message_new_request_state (GstObject * src, GstState state)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_REQUEST_STATE),
GST_QUARK (NEW_STATE), GST_TYPE_STATE, (gint) state, NULL);
message = gst_message_new_custom (GST_MESSAGE_REQUEST_STATE, src, structure);
return message;
}
/**
* gst_message_get_structure:
* @message: The #GstMessage.
*
* Access the structure of the message.
*
* Returns: (transfer none): The structure of the message. The structure is
* still owned by the message, which means that you should not free it and
* that the pointer becomes invalid when you free the message.
*
* MT safe.
*/
const GstStructure *
gst_message_get_structure (GstMessage * message)
{
g_return_val_if_fail (GST_IS_MESSAGE (message), NULL);
return GST_MESSAGE_STRUCTURE (message);
}
/**
* gst_message_has_name:
* @message: The #GstMessage.
* @name: name to check
*
* Checks if @message has the given @name. This function is usually used to
* check the name of a custom message.
*
* Returns: %TRUE if @name matches the name of the message structure.
*/
gboolean
gst_message_has_name (GstMessage * message, const gchar * name)
{
GstStructure *structure;
g_return_val_if_fail (GST_IS_MESSAGE (message), FALSE);
structure = GST_MESSAGE_STRUCTURE (message);
if (structure == NULL)
return FALSE;
return gst_structure_has_name (structure, name);
}
/**
* gst_message_parse_tag:
* @message: A valid #GstMessage of type GST_MESSAGE_TAG.
* @tag_list: (out callee-allocates): return location for the tag-list.
*
* Extracts the tag list from the GstMessage. The tag list returned in the
* output argument is a copy; the caller must free it when done.
*
* Typical usage of this function might be:
* |[<!-- language="C" -->
* ...
* switch (GST_MESSAGE_TYPE (msg)) {
* case GST_MESSAGE_TAG: {
* GstTagList *tags = NULL;
*
* gst_message_parse_tag (msg, &tags);
* g_print ("Got tags from element %s\n", GST_OBJECT_NAME (msg->src));
* handle_tags (tags);
* gst_tag_list_unref (tags);
* break;
* }
* ...
* }
* ...
* ]|
*
* MT safe.
*/
void
gst_message_parse_tag (GstMessage * message, GstTagList ** tag_list)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_TAG);
g_return_if_fail (tag_list != NULL);
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (TAGLIST), GST_TYPE_TAG_LIST, tag_list, NULL);
}
/**
* gst_message_parse_buffering:
* @message: A valid #GstMessage of type GST_MESSAGE_BUFFERING.
* @percent: (out) (allow-none): Return location for the percent.
*
* Extracts the buffering percent from the GstMessage. see also
* gst_message_new_buffering().
*
* MT safe.
*/
void
gst_message_parse_buffering (GstMessage * message, gint * percent)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_BUFFERING);
if (percent)
*percent =
g_value_get_int (gst_structure_id_get_value (GST_MESSAGE_STRUCTURE
(message), GST_QUARK (BUFFER_PERCENT)));
}
/**
* gst_message_set_buffering_stats:
* @message: A valid #GstMessage of type GST_MESSAGE_BUFFERING.
* @mode: a buffering mode
* @avg_in: the average input rate
* @avg_out: the average output rate
* @buffering_left: amount of buffering time left in milliseconds
*
* Configures the buffering stats values in @message.
*/
void
gst_message_set_buffering_stats (GstMessage * message, GstBufferingMode mode,
gint avg_in, gint avg_out, gint64 buffering_left)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_BUFFERING);
gst_structure_id_set (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (BUFFERING_MODE), GST_TYPE_BUFFERING_MODE, mode,
GST_QUARK (AVG_IN_RATE), G_TYPE_INT, avg_in,
GST_QUARK (AVG_OUT_RATE), G_TYPE_INT, avg_out,
GST_QUARK (BUFFERING_LEFT), G_TYPE_INT64, buffering_left, NULL);
}
/**
* gst_message_parse_buffering_stats:
* @message: A valid #GstMessage of type GST_MESSAGE_BUFFERING.
* @mode: (out) (allow-none): a buffering mode, or %NULL
* @avg_in: (out) (allow-none): the average input rate, or %NULL
* @avg_out: (out) (allow-none): the average output rate, or %NULL
* @buffering_left: (out) (allow-none): amount of buffering time left in
* milliseconds, or %NULL
*
* Extracts the buffering stats values from @message.
*/
void
gst_message_parse_buffering_stats (GstMessage * message,
GstBufferingMode * mode, gint * avg_in, gint * avg_out,
gint64 * buffering_left)
{
GstStructure *structure;
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_BUFFERING);
structure = GST_MESSAGE_STRUCTURE (message);
if (mode)
*mode = (GstBufferingMode)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (BUFFERING_MODE)));
if (avg_in)
*avg_in = g_value_get_int (gst_structure_id_get_value (structure,
GST_QUARK (AVG_IN_RATE)));
if (avg_out)
*avg_out = g_value_get_int (gst_structure_id_get_value (structure,
GST_QUARK (AVG_OUT_RATE)));
if (buffering_left)
*buffering_left =
g_value_get_int64 (gst_structure_id_get_value (structure,
GST_QUARK (BUFFERING_LEFT)));
}
/**
* gst_message_parse_state_changed:
* @message: a valid #GstMessage of type GST_MESSAGE_STATE_CHANGED
* @oldstate: (out) (allow-none): the previous state, or %NULL
* @newstate: (out) (allow-none): the new (current) state, or %NULL
* @pending: (out) (allow-none): the pending (target) state, or %NULL
*
* Extracts the old and new states from the GstMessage.
*
* Typical usage of this function might be:
* |[<!-- language="C" -->
* ...
* switch (GST_MESSAGE_TYPE (msg)) {
* case GST_MESSAGE_STATE_CHANGED: {
* GstState old_state, new_state;
*
* gst_message_parse_state_changed (msg, &old_state, &new_state, NULL);
* g_print ("Element %s changed state from %s to %s.\n",
* GST_OBJECT_NAME (msg->src),
* gst_element_state_get_name (old_state),
* gst_element_state_get_name (new_state));
* break;
* }
* ...
* }
* ...
* ]|
*
* MT safe.
*/
void
gst_message_parse_state_changed (GstMessage * message,
GstState * oldstate, GstState * newstate, GstState * pending)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STATE_CHANGED);
structure = GST_MESSAGE_STRUCTURE (message);
if (oldstate)
*oldstate = (GstState)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (OLD_STATE)));
if (newstate)
*newstate = (GstState)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (NEW_STATE)));
if (pending)
*pending = (GstState)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (PENDING_STATE)));
}
/**
* gst_message_parse_clock_provide:
* @message: A valid #GstMessage of type GST_MESSAGE_CLOCK_PROVIDE.
* @clock: (out) (allow-none) (transfer none): a pointer to hold a clock
* object, or %NULL
* @ready: (out) (allow-none): a pointer to hold the ready flag, or %NULL
*
* Extracts the clock and ready flag from the GstMessage.
* The clock object returned remains valid until the message is freed.
*
* MT safe.
*/
void
gst_message_parse_clock_provide (GstMessage * message, GstClock ** clock,
gboolean * ready)
{
const GValue *clock_gvalue;
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_CLOCK_PROVIDE);
structure = GST_MESSAGE_STRUCTURE (message);
clock_gvalue = gst_structure_id_get_value (structure, GST_QUARK (CLOCK));
g_return_if_fail (clock_gvalue != NULL);
g_return_if_fail (G_VALUE_TYPE (clock_gvalue) == GST_TYPE_CLOCK);
if (ready)
*ready =
g_value_get_boolean (gst_structure_id_get_value (structure,
GST_QUARK (READY)));
if (clock)
*clock = (GstClock *) g_value_get_object (clock_gvalue);
}
/**
* gst_message_parse_clock_lost:
* @message: A valid #GstMessage of type GST_MESSAGE_CLOCK_LOST.
* @clock: (out) (allow-none) (transfer none): a pointer to hold the lost clock
*
* Extracts the lost clock from the GstMessage.
* The clock object returned remains valid until the message is freed.
*
* MT safe.
*/
void
gst_message_parse_clock_lost (GstMessage * message, GstClock ** clock)
{
const GValue *clock_gvalue;
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_CLOCK_LOST);
structure = GST_MESSAGE_STRUCTURE (message);
clock_gvalue = gst_structure_id_get_value (structure, GST_QUARK (CLOCK));
g_return_if_fail (clock_gvalue != NULL);
g_return_if_fail (G_VALUE_TYPE (clock_gvalue) == GST_TYPE_CLOCK);
if (clock)
*clock = (GstClock *) g_value_get_object (clock_gvalue);
}
/**
* gst_message_parse_new_clock:
* @message: A valid #GstMessage of type GST_MESSAGE_NEW_CLOCK.
* @clock: (out) (allow-none) (transfer none): a pointer to hold the selected
* new clock
*
* Extracts the new clock from the GstMessage.
* The clock object returned remains valid until the message is freed.
*
* MT safe.
*/
void
gst_message_parse_new_clock (GstMessage * message, GstClock ** clock)
{
const GValue *clock_gvalue;
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_NEW_CLOCK);
structure = GST_MESSAGE_STRUCTURE (message);
clock_gvalue = gst_structure_id_get_value (structure, GST_QUARK (CLOCK));
g_return_if_fail (clock_gvalue != NULL);
g_return_if_fail (G_VALUE_TYPE (clock_gvalue) == GST_TYPE_CLOCK);
if (clock)
*clock = (GstClock *) g_value_get_object (clock_gvalue);
}
/**
* gst_message_parse_structure_change:
* @message: A valid #GstMessage of type GST_MESSAGE_STRUCTURE_CHANGE.
* @type: (out): A pointer to hold the change type
* @owner: (out) (allow-none) (transfer none): The owner element of the
* message source
* @busy: (out) (allow-none): a pointer to hold whether the change is in
* progress or has been completed
*
* Extracts the change type and completion status from the GstMessage.
*
* MT safe.
*/
void
gst_message_parse_structure_change (GstMessage * message,
GstStructureChangeType * type, GstElement ** owner, gboolean * busy)
{
const GValue *owner_gvalue;
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STRUCTURE_CHANGE);
structure = GST_MESSAGE_STRUCTURE (message);
owner_gvalue = gst_structure_id_get_value (structure, GST_QUARK (OWNER));
g_return_if_fail (owner_gvalue != NULL);
g_return_if_fail (G_VALUE_TYPE (owner_gvalue) == GST_TYPE_ELEMENT);
if (type)
*type = (GstStructureChangeType)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (TYPE)));
if (owner)
*owner = (GstElement *) g_value_get_object (owner_gvalue);
if (busy)
*busy =
g_value_get_boolean (gst_structure_id_get_value (structure,
GST_QUARK (BUSY)));
}
/**
* gst_message_parse_error:
* @message: A valid #GstMessage of type GST_MESSAGE_ERROR.
* @gerror: (out) (allow-none) (transfer full): location for the GError
* @debug: (out) (allow-none) (transfer full): location for the debug message,
* or %NULL
*
* Extracts the GError and debug string from the GstMessage. The values returned
* in the output arguments are copies; the caller must free them when done.
*
* Typical usage of this function might be:
* |[<!-- language="C" -->
* ...
* switch (GST_MESSAGE_TYPE (msg)) {
* case GST_MESSAGE_ERROR: {
* GError *err = NULL;
* gchar *dbg_info = NULL;
*
* gst_message_parse_error (msg, &err, &dbg_info);
* g_printerr ("ERROR from element %s: %s\n",
* GST_OBJECT_NAME (msg->src), err->message);
* g_printerr ("Debugging info: %s\n", (dbg_info) ? dbg_info : "none");
* g_error_free (err);
* g_free (dbg_info);
* break;
* }
* ...
* }
* ...
* ]|
*
* MT safe.
*/
void
gst_message_parse_error (GstMessage * message, GError ** gerror, gchar ** debug)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_ERROR);
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (GERROR), G_TYPE_ERROR, gerror,
GST_QUARK (DEBUG), G_TYPE_STRING, debug, NULL);
}
/**
* gst_message_parse_warning:
* @message: A valid #GstMessage of type GST_MESSAGE_WARNING.
* @gerror: (out) (allow-none) (transfer full): location for the GError
* @debug: (out) (allow-none) (transfer full): location for the debug message,
* or %NULL
*
* Extracts the GError and debug string from the GstMessage. The values returned
* in the output arguments are copies; the caller must free them when done.
*
* MT safe.
*/
void
gst_message_parse_warning (GstMessage * message, GError ** gerror,
gchar ** debug)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_WARNING);
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (GERROR), G_TYPE_ERROR, gerror,
GST_QUARK (DEBUG), G_TYPE_STRING, debug, NULL);
}
/**
* gst_message_parse_info:
* @message: A valid #GstMessage of type GST_MESSAGE_INFO.
* @gerror: (out) (allow-none) (transfer full): location for the GError
* @debug: (out) (allow-none) (transfer full): location for the debug message,
* or %NULL
*
* Extracts the GError and debug string from the GstMessage. The values returned
* in the output arguments are copies; the caller must free them when done.
*
* MT safe.
*/
void
gst_message_parse_info (GstMessage * message, GError ** gerror, gchar ** debug)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_INFO);
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (GERROR), G_TYPE_ERROR, gerror,
GST_QUARK (DEBUG), G_TYPE_STRING, debug, NULL);
}
/**
* gst_message_parse_segment_start:
* @message: A valid #GstMessage of type GST_MESSAGE_SEGMENT_START.
* @format: (out) (allow-none): Result location for the format, or %NULL
* @position: (out) (allow-none): Result location for the position, or %NULL
*
* Extracts the position and format from the segment start message.
*
* MT safe.
*/
void
gst_message_parse_segment_start (GstMessage * message, GstFormat * format,
gint64 * position)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_SEGMENT_START);
structure = GST_MESSAGE_STRUCTURE (message);
if (format)
*format = (GstFormat)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (FORMAT)));
if (position)
*position =
g_value_get_int64 (gst_structure_id_get_value (structure,
GST_QUARK (POSITION)));
}
/**
* gst_message_parse_segment_done:
* @message: A valid #GstMessage of type GST_MESSAGE_SEGMENT_DONE.
* @format: (out) (allow-none): Result location for the format, or %NULL
* @position: (out) (allow-none): Result location for the position, or %NULL
*
* Extracts the position and format from the segment done message.
*
* MT safe.
*/
void
gst_message_parse_segment_done (GstMessage * message, GstFormat * format,
gint64 * position)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_SEGMENT_DONE);
structure = GST_MESSAGE_STRUCTURE (message);
if (format)
*format = (GstFormat)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (FORMAT)));
if (position)
*position =
g_value_get_int64 (gst_structure_id_get_value (structure,
GST_QUARK (POSITION)));
}
/**
* gst_message_parse_async_done:
* @message: A valid #GstMessage of type GST_MESSAGE_ASYNC_DONE.
* @running_time: (out) (allow-none): Result location for the running_time or %NULL
*
* Extract the running_time from the async_done message.
*
* MT safe.
*/
void
gst_message_parse_async_done (GstMessage * message, GstClockTime * running_time)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_ASYNC_DONE);
structure = GST_MESSAGE_STRUCTURE (message);
if (running_time)
*running_time =
g_value_get_uint64 (gst_structure_id_get_value (structure,
GST_QUARK (RUNNING_TIME)));
}
/**
* gst_message_parse_request_state:
* @message: A valid #GstMessage of type GST_MESSAGE_REQUEST_STATE.
* @state: (out) (allow-none): Result location for the requested state or %NULL
*
* Extract the requested state from the request_state message.
*
* MT safe.
*/
void
gst_message_parse_request_state (GstMessage * message, GstState * state)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_REQUEST_STATE);
structure = GST_MESSAGE_STRUCTURE (message);
if (state)
*state = (GstState)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (NEW_STATE)));
}
/**
* gst_message_new_stream_status:
* @src: The object originating the message.
* @type: The stream status type.
* @owner: (transfer none): the owner element of @src.
*
* Create a new stream status message. This message is posted when a streaming
* thread is created/destroyed or when the state changed.
*
* Returns: (transfer full): the new stream status message.
*
* MT safe.
*/
GstMessage *
gst_message_new_stream_status (GstObject * src, GstStreamStatusType type,
GstElement * owner)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_STREAM_STATUS),
GST_QUARK (TYPE), GST_TYPE_STREAM_STATUS_TYPE, (gint) type,
GST_QUARK (OWNER), GST_TYPE_ELEMENT, owner, NULL);
message = gst_message_new_custom (GST_MESSAGE_STREAM_STATUS, src, structure);
return message;
}
/**
* gst_message_parse_stream_status:
* @message: A valid #GstMessage of type GST_MESSAGE_STREAM_STATUS.
* @type: (out): A pointer to hold the status type
* @owner: (out) (transfer none): The owner element of the message source
*
* Extracts the stream status type and owner the GstMessage. The returned
* owner remains valid for as long as the reference to @message is valid and
* should thus not be unreffed.
*
* MT safe.
*/
void
gst_message_parse_stream_status (GstMessage * message,
GstStreamStatusType * type, GstElement ** owner)
{
const GValue *owner_gvalue;
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STREAM_STATUS);
structure = GST_MESSAGE_STRUCTURE (message);
owner_gvalue = gst_structure_id_get_value (structure, GST_QUARK (OWNER));
g_return_if_fail (owner_gvalue != NULL);
if (type)
*type = (GstStreamStatusType)
g_value_get_enum (gst_structure_id_get_value (structure,
GST_QUARK (TYPE)));
if (owner)
*owner = (GstElement *) g_value_get_object (owner_gvalue);
}
/**
* gst_message_set_stream_status_object:
* @message: A valid #GstMessage of type GST_MESSAGE_STREAM_STATUS.
* @object: the object controlling the streaming
*
* Configures the object handling the streaming thread. This is usually a
* GstTask object but other objects might be added in the future.
*/
void
gst_message_set_stream_status_object (GstMessage * message,
const GValue * object)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STREAM_STATUS);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_set_value (structure, GST_QUARK (OBJECT), object);
}
/**
* gst_message_get_stream_status_object:
* @message: A valid #GstMessage of type GST_MESSAGE_STREAM_STATUS.
*
* Extracts the object managing the streaming thread from @message.
*
* Returns: a GValue containing the object that manages the streaming thread.
* This object is usually of type GstTask but other types can be added in the
* future. The object remains valid as long as @message is valid.
*/
const GValue *
gst_message_get_stream_status_object (GstMessage * message)
{
const GValue *result;
GstStructure *structure;
g_return_val_if_fail (GST_IS_MESSAGE (message), NULL);
g_return_val_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STREAM_STATUS,
NULL);
structure = GST_MESSAGE_STRUCTURE (message);
result = gst_structure_id_get_value (structure, GST_QUARK (OBJECT));
return result;
}
/**
* gst_message_new_step_done:
* @src: The object originating the message.
* @format: the format of @amount
* @amount: the amount of stepped data
* @rate: the rate of the stepped amount
* @flush: is this an flushing step
* @intermediate: is this an intermediate step
* @duration: the duration of the data
* @eos: the step caused EOS
*
* This message is posted by elements when they complete a part, when @intermediate set
* to %TRUE, or a complete step operation.
*
* @duration will contain the amount of time (in GST_FORMAT_TIME) of the stepped
* @amount of media in format @format.
*
* Returns: (transfer full): the new step_done message.
*
* MT safe.
*/
GstMessage *
gst_message_new_step_done (GstObject * src, GstFormat format, guint64 amount,
gdouble rate, gboolean flush, gboolean intermediate, guint64 duration,
gboolean eos)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_STEP_DONE),
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (AMOUNT), G_TYPE_UINT64, amount,
GST_QUARK (RATE), G_TYPE_DOUBLE, rate,
GST_QUARK (FLUSH), G_TYPE_BOOLEAN, flush,
GST_QUARK (INTERMEDIATE), G_TYPE_BOOLEAN, intermediate,
GST_QUARK (DURATION), G_TYPE_UINT64, duration,
GST_QUARK (EOS), G_TYPE_BOOLEAN, eos, NULL);
message = gst_message_new_custom (GST_MESSAGE_STEP_DONE, src, structure);
return message;
}
/**
* gst_message_parse_step_done:
* @message: A valid #GstMessage of type GST_MESSAGE_STEP_DONE.
* @format: (out) (allow-none): result location for the format
* @amount: (out) (allow-none): result location for the amount
* @rate: (out) (allow-none): result location for the rate
* @flush: (out) (allow-none): result location for the flush flag
* @intermediate: (out) (allow-none): result location for the intermediate flag
* @duration: (out) (allow-none): result location for the duration
* @eos: (out) (allow-none): result location for the EOS flag
*
* Extract the values the step_done message.
*
* MT safe.
*/
void
gst_message_parse_step_done (GstMessage * message, GstFormat * format,
guint64 * amount, gdouble * rate, gboolean * flush, gboolean * intermediate,
guint64 * duration, gboolean * eos)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STEP_DONE);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_get (structure,
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (AMOUNT), G_TYPE_UINT64, amount,
GST_QUARK (RATE), G_TYPE_DOUBLE, rate,
GST_QUARK (FLUSH), G_TYPE_BOOLEAN, flush,
GST_QUARK (INTERMEDIATE), G_TYPE_BOOLEAN, intermediate,
GST_QUARK (DURATION), G_TYPE_UINT64, duration,
GST_QUARK (EOS), G_TYPE_BOOLEAN, eos, NULL);
}
/**
* gst_message_new_step_start:
* @src: The object originating the message.
* @active: if the step is active or queued
* @format: the format of @amount
* @amount: the amount of stepped data
* @rate: the rate of the stepped amount
* @flush: is this an flushing step
* @intermediate: is this an intermediate step
*
* This message is posted by elements when they accept or activate a new step
* event for @amount in @format.
*
* @active is set to %FALSE when the element accepted the new step event and has
* queued it for execution in the streaming threads.
*
* @active is set to %TRUE when the element has activated the step operation and
* is now ready to start executing the step in the streaming thread. After this
* message is emitted, the application can queue a new step operation in the
* element.
*
* Returns: (transfer full): The new step_start message.
*
* MT safe.
*/
GstMessage *
gst_message_new_step_start (GstObject * src, gboolean active, GstFormat format,
guint64 amount, gdouble rate, gboolean flush, gboolean intermediate)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_STEP_START),
GST_QUARK (ACTIVE), G_TYPE_BOOLEAN, active,
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (AMOUNT), G_TYPE_UINT64, amount,
GST_QUARK (RATE), G_TYPE_DOUBLE, rate,
GST_QUARK (FLUSH), G_TYPE_BOOLEAN, flush,
GST_QUARK (INTERMEDIATE), G_TYPE_BOOLEAN, intermediate, NULL);
message = gst_message_new_custom (GST_MESSAGE_STEP_START, src, structure);
return message;
}
/**
* gst_message_parse_step_start:
* @message: A valid #GstMessage of type GST_MESSAGE_STEP_DONE.
* @active: (out) (allow-none): result location for the active flag
* @format: (out) (allow-none): result location for the format
* @amount: (out) (allow-none): result location for the amount
* @rate: (out) (allow-none): result location for the rate
* @flush: (out) (allow-none): result location for the flush flag
* @intermediate: (out) (allow-none): result location for the intermediate flag
*
* Extract the values from step_start message.
*
* MT safe.
*/
void
gst_message_parse_step_start (GstMessage * message, gboolean * active,
GstFormat * format, guint64 * amount, gdouble * rate, gboolean * flush,
gboolean * intermediate)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STEP_START);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_get (structure,
GST_QUARK (ACTIVE), G_TYPE_BOOLEAN, active,
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (AMOUNT), G_TYPE_UINT64, amount,
GST_QUARK (RATE), G_TYPE_DOUBLE, rate,
GST_QUARK (FLUSH), G_TYPE_BOOLEAN, flush,
GST_QUARK (INTERMEDIATE), G_TYPE_BOOLEAN, intermediate, NULL);
}
/**
* gst_message_new_qos:
* @src: The object originating the message.
* @live: if the message was generated by a live element
* @running_time: the running time of the buffer that generated the message
* @stream_time: the stream time of the buffer that generated the message
* @timestamp: the timestamps of the buffer that generated the message
* @duration: the duration of the buffer that generated the message
*
* A QOS message is posted on the bus whenever an element decides to drop a
* buffer because of QoS reasons or whenever it changes its processing strategy
* because of QoS reasons (quality adjustments such as processing at lower
* accuracy).
*
* This message can be posted by an element that performs synchronisation against the
* clock (live) or it could be dropped by an element that performs QoS because of QOS
* events received from a downstream element (!live).
*
* @running_time, @stream_time, @timestamp, @duration should be set to the
* respective running-time, stream-time, timestamp and duration of the (dropped)
* buffer that generated the QoS event. Values can be left to
* GST_CLOCK_TIME_NONE when unknown.
*
* Returns: (transfer full): The new qos message.
*
* MT safe.
*/
GstMessage *
gst_message_new_qos (GstObject * src, gboolean live, guint64 running_time,
guint64 stream_time, guint64 timestamp, guint64 duration)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_QOS),
GST_QUARK (LIVE), G_TYPE_BOOLEAN, live,
GST_QUARK (RUNNING_TIME), G_TYPE_UINT64, running_time,
GST_QUARK (STREAM_TIME), G_TYPE_UINT64, stream_time,
GST_QUARK (TIMESTAMP), G_TYPE_UINT64, timestamp,
GST_QUARK (DURATION), G_TYPE_UINT64, duration,
GST_QUARK (JITTER), G_TYPE_INT64, (gint64) 0,
GST_QUARK (PROPORTION), G_TYPE_DOUBLE, (gdouble) 1.0,
GST_QUARK (QUALITY), G_TYPE_INT, (gint) 1000000,
GST_QUARK (FORMAT), GST_TYPE_FORMAT, GST_FORMAT_UNDEFINED,
GST_QUARK (PROCESSED), G_TYPE_UINT64, (guint64) - 1,
GST_QUARK (DROPPED), G_TYPE_UINT64, (guint64) - 1, NULL);
message = gst_message_new_custom (GST_MESSAGE_QOS, src, structure);
return message;
}
/**
* gst_message_set_qos_values:
* @message: A valid #GstMessage of type GST_MESSAGE_QOS.
* @jitter: The difference of the running-time against the deadline.
* @proportion: Long term prediction of the ideal rate relative to normal rate
* to get optimal quality.
* @quality: An element dependent integer value that specifies the current
* quality level of the element. The default maximum quality is 1000000.
*
* Set the QoS values that have been calculated/analysed from the QoS data
*
* MT safe.
*/
void
gst_message_set_qos_values (GstMessage * message, gint64 jitter,
gdouble proportion, gint quality)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_QOS);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_set (structure,
GST_QUARK (JITTER), G_TYPE_INT64, jitter,
GST_QUARK (PROPORTION), G_TYPE_DOUBLE, proportion,
GST_QUARK (QUALITY), G_TYPE_INT, quality, NULL);
}
/**
* gst_message_set_qos_stats:
* @message: A valid #GstMessage of type GST_MESSAGE_QOS.
* @format: Units of the 'processed' and 'dropped' fields. Video sinks and video
* filters will use GST_FORMAT_BUFFERS (frames). Audio sinks and audio filters
* will likely use GST_FORMAT_DEFAULT (samples).
* @processed: Total number of units correctly processed since the last state
* change to READY or a flushing operation.
* @dropped: Total number of units dropped since the last state change to READY
* or a flushing operation.
*
* Set the QoS stats representing the history of the current continuous pipeline
* playback period.
*
* When @format is @GST_FORMAT_UNDEFINED both @dropped and @processed are
* invalid. Values of -1 for either @processed or @dropped mean unknown values.
*
* MT safe.
*/
void
gst_message_set_qos_stats (GstMessage * message, GstFormat format,
guint64 processed, guint64 dropped)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_QOS);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_set (structure,
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (PROCESSED), G_TYPE_UINT64, processed,
GST_QUARK (DROPPED), G_TYPE_UINT64, dropped, NULL);
}
/**
* gst_message_parse_qos:
* @message: A valid #GstMessage of type GST_MESSAGE_QOS.
* @live: (out) (allow-none): if the message was generated by a live element
* @running_time: (out) (allow-none): the running time of the buffer that
* generated the message
* @stream_time: (out) (allow-none): the stream time of the buffer that
* generated the message
* @timestamp: (out) (allow-none): the timestamps of the buffer that
* generated the message
* @duration: (out) (allow-none): the duration of the buffer that
* generated the message
*
* Extract the timestamps and live status from the QoS message.
*
* The returned values give the running_time, stream_time, timestamp and
* duration of the dropped buffer. Values of GST_CLOCK_TIME_NONE mean unknown
* values.
*
* MT safe.
*/
void
gst_message_parse_qos (GstMessage * message, gboolean * live,
guint64 * running_time, guint64 * stream_time, guint64 * timestamp,
guint64 * duration)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_QOS);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_get (structure,
GST_QUARK (LIVE), G_TYPE_BOOLEAN, live,
GST_QUARK (RUNNING_TIME), G_TYPE_UINT64, running_time,
GST_QUARK (STREAM_TIME), G_TYPE_UINT64, stream_time,
GST_QUARK (TIMESTAMP), G_TYPE_UINT64, timestamp,
GST_QUARK (DURATION), G_TYPE_UINT64, duration, NULL);
}
/**
* gst_message_parse_qos_values:
* @message: A valid #GstMessage of type GST_MESSAGE_QOS.
* @jitter: (out) (allow-none): The difference of the running-time against
* the deadline.
* @proportion: (out) (allow-none): Long term prediction of the ideal rate
* relative to normal rate to get optimal quality.
* @quality: (out) (allow-none): An element dependent integer value that
* specifies the current quality level of the element. The default
* maximum quality is 1000000.
*
* Extract the QoS values that have been calculated/analysed from the QoS data
*
* MT safe.
*/
void
gst_message_parse_qos_values (GstMessage * message, gint64 * jitter,
gdouble * proportion, gint * quality)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_QOS);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_get (structure,
GST_QUARK (JITTER), G_TYPE_INT64, jitter,
GST_QUARK (PROPORTION), G_TYPE_DOUBLE, proportion,
GST_QUARK (QUALITY), G_TYPE_INT, quality, NULL);
}
/**
* gst_message_parse_qos_stats:
* @message: A valid #GstMessage of type GST_MESSAGE_QOS.
* @format: (out) (allow-none): Units of the 'processed' and 'dropped' fields.
* Video sinks and video filters will use GST_FORMAT_BUFFERS (frames).
* Audio sinks and audio filters will likely use GST_FORMAT_DEFAULT
* (samples).
* @processed: (out) (allow-none): Total number of units correctly processed
* since the last state change to READY or a flushing operation.
* @dropped: (out) (allow-none): Total number of units dropped since the last
* state change to READY or a flushing operation.
*
* Extract the QoS stats representing the history of the current continuous
* pipeline playback period.
*
* When @format is @GST_FORMAT_UNDEFINED both @dropped and @processed are
* invalid. Values of -1 for either @processed or @dropped mean unknown values.
*
* MT safe.
*/
void
gst_message_parse_qos_stats (GstMessage * message, GstFormat * format,
guint64 * processed, guint64 * dropped)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_QOS);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_get (structure,
GST_QUARK (FORMAT), GST_TYPE_FORMAT, format,
GST_QUARK (PROCESSED), G_TYPE_UINT64, processed,
GST_QUARK (DROPPED), G_TYPE_UINT64, dropped, NULL);
}
/**
* gst_message_new_progress:
* @src: The object originating the message.
* @type: a #GstProgressType
* @code: a progress code
* @text: free, user visible text describing the progress
*
* Progress messages are posted by elements when they use an asynchronous task
* to perform actions triggered by a state change.
*
* @code contains a well defined string describing the action.
* @text should contain a user visible string detailing the current action.
*
* Returns: (transfer full): The new qos message.
*/
GstMessage *
gst_message_new_progress (GstObject * src, GstProgressType type,
const gchar * code, const gchar * text)
{
GstMessage *message;
GstStructure *structure;
gint percent = 100, timeout = -1;
g_return_val_if_fail (code != NULL, NULL);
g_return_val_if_fail (text != NULL, NULL);
if (type == GST_PROGRESS_TYPE_START || type == GST_PROGRESS_TYPE_CONTINUE)
percent = 0;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_PROGRESS),
GST_QUARK (TYPE), GST_TYPE_PROGRESS_TYPE, type,
GST_QUARK (CODE), G_TYPE_STRING, code,
GST_QUARK (TEXT), G_TYPE_STRING, text,
GST_QUARK (PERCENT), G_TYPE_INT, percent,
GST_QUARK (TIMEOUT), G_TYPE_INT, timeout, NULL);
message = gst_message_new_custom (GST_MESSAGE_PROGRESS, src, structure);
return message;
}
/**
* gst_message_parse_progress:
* @message: A valid #GstMessage of type GST_MESSAGE_PROGRESS.
* @type: (out) (allow-none): location for the type
* @code: (out) (allow-none) (transfer full): location for the code
* @text: (out) (allow-none) (transfer full): location for the text
*
* Parses the progress @type, @code and @text.
*/
void
gst_message_parse_progress (GstMessage * message, GstProgressType * type,
gchar ** code, gchar ** text)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_PROGRESS);
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_get (structure,
GST_QUARK (TYPE), GST_TYPE_PROGRESS_TYPE, type,
GST_QUARK (CODE), G_TYPE_STRING, code,
GST_QUARK (TEXT), G_TYPE_STRING, text, NULL);
}
/**
* gst_message_new_toc:
* @src: the object originating the message.
* @toc: (transfer none): #GstToc structure for the message.
* @updated: whether TOC was updated or not.
*
* Create a new TOC message. The message is posted by elements
* that discovered or updated a TOC.
*
* Returns: (transfer full): a new TOC message.
*
* MT safe.
*/
GstMessage *
gst_message_new_toc (GstObject * src, GstToc * toc, gboolean updated)
{
GstStructure *toc_struct;
g_return_val_if_fail (toc != NULL, NULL);
toc_struct = gst_structure_new_id (GST_QUARK (MESSAGE_TOC),
GST_QUARK (TOC), GST_TYPE_TOC, toc,
GST_QUARK (UPDATED), G_TYPE_BOOLEAN, updated, NULL);
return gst_message_new_custom (GST_MESSAGE_TOC, src, toc_struct);
}
/**
* gst_message_parse_toc:
* @message: a valid #GstMessage of type GST_MESSAGE_TOC.
* @toc: (out) (transfer full): return location for the TOC.
* @updated: (out): return location for the updated flag.
*
* Extract the TOC from the #GstMessage. The TOC returned in the
* output argument is a copy; the caller must free it with
* gst_toc_unref() when done.
*
* MT safe.
*/
void
gst_message_parse_toc (GstMessage * message, GstToc ** toc, gboolean * updated)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_TOC);
g_return_if_fail (toc != NULL);
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (TOC), GST_TYPE_TOC, toc,
GST_QUARK (UPDATED), G_TYPE_BOOLEAN, updated, NULL);
}
/**
* gst_message_new_reset_time:
* @src: (transfer none) (allow-none): The object originating the message.
* @running_time: the requested running-time
*
* This message is posted when the pipeline running-time should be reset to
* @running_time, like after a flushing seek.
*
* Returns: (transfer full): The new reset_time message.
*
* MT safe.
*/
GstMessage *
gst_message_new_reset_time (GstObject * src, GstClockTime running_time)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_RESET_TIME),
GST_QUARK (RUNNING_TIME), G_TYPE_UINT64, running_time, NULL);
message = gst_message_new_custom (GST_MESSAGE_RESET_TIME, src, structure);
return message;
}
/**
* gst_message_parse_reset_time:
* @message: A valid #GstMessage of type GST_MESSAGE_RESET_TIME.
* @running_time: (out) (allow-none): Result location for the running_time or
* %NULL
*
* Extract the running-time from the RESET_TIME message.
*
* MT safe.
*/
void
gst_message_parse_reset_time (GstMessage * message, GstClockTime * running_time)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_RESET_TIME);
structure = GST_MESSAGE_STRUCTURE (message);
if (running_time)
*running_time =
g_value_get_uint64 (gst_structure_id_get_value (structure,
GST_QUARK (RUNNING_TIME)));
}
/**
* gst_message_new_stream_start:
* @src: (transfer none) (allow-none): The object originating the message.
*
* Create a new stream_start message. This message is generated and posted in
* the sink elements of a GstBin. The bin will only forward the STREAM_START
* message to the application if all sinks have posted an STREAM_START message.
*
* Returns: (transfer full): The new stream_start message.
*
* MT safe.
*/
GstMessage *
gst_message_new_stream_start (GstObject * src)
{
GstMessage *message;
GstStructure *s;
s = gst_structure_new_id_empty (GST_QUARK (MESSAGE_STREAM_START));
message = gst_message_new_custom (GST_MESSAGE_STREAM_START, src, s);
return message;
}
/**
* gst_message_set_group_id:
* @message: the message
* @group_id: the group id
*
* Sets the group id on the stream-start message.
*
* All streams that have the same group id are supposed to be played
* together, i.e. all streams inside a container file should have the
* same group id but different stream ids. The group id should change
* each time the stream is started, resulting in different group ids
* each time a file is played for example.
*
* MT safe.
*
* Since: 1.2
*/
void
gst_message_set_group_id (GstMessage * message, guint group_id)
{
GstStructure *structure;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STREAM_START);
g_return_if_fail (gst_message_is_writable (message));
structure = GST_MESSAGE_STRUCTURE (message);
gst_structure_id_set (structure, GST_QUARK (GROUP_ID), G_TYPE_UINT, group_id,
NULL);
}
/**
* gst_message_parse_group_id:
* @message: A valid #GstMessage of type GST_MESSAGE_STREAM_START.
* @group_id: (out) (allow-none): Result location for the group id or
* %NULL
*
* Extract the group from the STREAM_START message.
*
* Returns: %TRUE if the message had a group id set, %FALSE otherwise
*
* MT safe.
*
* Since: 1.2
*/
gboolean
gst_message_parse_group_id (GstMessage * message, guint * group_id)
{
GstStructure *structure;
const GValue *v;
g_return_val_if_fail (GST_IS_MESSAGE (message), FALSE);
g_return_val_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STREAM_START,
FALSE);
if (!group_id)
return TRUE;
structure = GST_MESSAGE_STRUCTURE (message);
v = gst_structure_id_get_value (structure, GST_QUARK (GROUP_ID));
if (!v)
return FALSE;
*group_id = g_value_get_uint (v);
return TRUE;
}
/**
* gst_message_new_need_context:
* @src: (transfer none) (allow-none): The object originating the message.
* @context_type: The context type that is needed
*
* This message is posted when an element needs a specific #GstContext.
*
* Returns: (transfer full): The new need-context message.
*
* MT safe.
*
* Since: 1.2
*/
GstMessage *
gst_message_new_need_context (GstObject * src, const gchar * context_type)
{
GstMessage *message;
GstStructure *structure;
g_return_val_if_fail (context_type != NULL, NULL);
structure = gst_structure_new_id (GST_QUARK (MESSAGE_NEED_CONTEXT),
GST_QUARK (CONTEXT_TYPE), G_TYPE_STRING, context_type, NULL);
message = gst_message_new_custom (GST_MESSAGE_NEED_CONTEXT, src, structure);
return message;
}
/**
* gst_message_parse_context_type:
* @message: a GST_MESSAGE_NEED_CONTEXT type message
* @context_type: (out) (allow-none): the context type, or %NULL
*
* Parse a context type from an existing GST_MESSAGE_NEED_CONTEXT message.
*
* Returns: a #gboolean indicating if the parsing succeeded.
*
* Since: 1.2
*/
gboolean
gst_message_parse_context_type (GstMessage * message,
const gchar ** context_type)
{
GstStructure *structure;
const GValue *value;
g_return_val_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_NEED_CONTEXT,
FALSE);
structure = GST_MESSAGE_STRUCTURE (message);
if (context_type) {
value = gst_structure_id_get_value (structure, GST_QUARK (CONTEXT_TYPE));
*context_type = g_value_get_string (value);
}
return TRUE;
}
/**
* gst_message_new_have_context:
* @src: (transfer none) (allow-none): The object originating the message.
* @context: (transfer full): the context
*
* This message is posted when an element has a new local #GstContext.
*
* Returns: (transfer full): The new have-context message.
*
* MT safe.
*
* Since: 1.2
*/
GstMessage *
gst_message_new_have_context (GstObject * src, GstContext * context)
{
GstMessage *message;
GstStructure *structure;
structure = gst_structure_new_id (GST_QUARK (MESSAGE_HAVE_CONTEXT),
GST_QUARK (CONTEXT), GST_TYPE_CONTEXT, context, NULL);
message = gst_message_new_custom (GST_MESSAGE_HAVE_CONTEXT, src, structure);
gst_context_unref (context);
return message;
}
/**
* gst_message_parse_have_context:
* @message: A valid #GstMessage of type GST_MESSAGE_HAVE_CONTEXT.
* @context: (out) (transfer full) (allow-none): Result location for the
* context or %NULL
*
* Extract the context from the HAVE_CONTEXT message.
*
* MT safe.
*
* Since: 1.2
*/
void
gst_message_parse_have_context (GstMessage * message, GstContext ** context)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_HAVE_CONTEXT);
if (context)
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (CONTEXT), GST_TYPE_CONTEXT, context, NULL);
}
/**
* gst_message_new_device_added:
* @src: The #GstObject that created the message
* @device: (transfer none): The new #GstDevice
*
* Creates a new device-added message. The device-added message is produced by
* #GstDeviceProvider or a #GstDeviceMonitor. They announce the appearance
* of monitored devices.
*
* Returns: a newly allocated #GstMessage
*
* Since: 1.4
*/
GstMessage *
gst_message_new_device_added (GstObject * src, GstDevice * device)
{
GstMessage *message;
GstStructure *structure;
g_return_val_if_fail (device != NULL, NULL);
g_return_val_if_fail (GST_IS_DEVICE (device), NULL);
structure = gst_structure_new_id (GST_QUARK (MESSAGE_DEVICE_ADDED),
GST_QUARK (DEVICE), GST_TYPE_DEVICE, device, NULL);
message = gst_message_new_custom (GST_MESSAGE_DEVICE_ADDED, src, structure);
return message;
}
/**
* gst_message_parse_device_added:
* @message: a #GstMessage of type %GST_MESSAGE_DEVICE_ADDED
* @device: (out) (allow-none) (transfer full): A location where to store a
* pointer to the new #GstDevice, or %NULL
*
* Parses a device-added message. The device-added message is produced by
* #GstDeviceProvider or a #GstDeviceMonitor. It announces the appearance
* of monitored devices.
*
* Since: 1.4
*/
void
gst_message_parse_device_added (GstMessage * message, GstDevice ** device)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_DEVICE_ADDED);
if (device)
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (DEVICE), GST_TYPE_DEVICE, device, NULL);
}
/**
* gst_message_new_device_removed:
* @src: The #GstObject that created the message
* @device: (transfer none): The removed #GstDevice
*
* Creates a new device-removed message. The device-removed message is produced
* by #GstDeviceProvider or a #GstDeviceMonitor. They announce the
* disappearance of monitored devices.
*
* Returns: a newly allocated #GstMessage
*
* Since: 1.4
*/
GstMessage *
gst_message_new_device_removed (GstObject * src, GstDevice * device)
{
GstMessage *message;
GstStructure *structure;
g_return_val_if_fail (device != NULL, NULL);
g_return_val_if_fail (GST_IS_DEVICE (device), NULL);
structure = gst_structure_new_id (GST_QUARK (MESSAGE_DEVICE_REMOVED),
GST_QUARK (DEVICE), GST_TYPE_DEVICE, device, NULL);
message = gst_message_new_custom (GST_MESSAGE_DEVICE_REMOVED, src, structure);
return message;
}
/**
* gst_message_parse_device_removed:
* @message: a #GstMessage of type %GST_MESSAGE_DEVICE_REMOVED
* @device: (out) (allow-none) (transfer full): A location where to store a
* pointer to the removed #GstDevice, or %NULL
*
* Parses a device-removed message. The device-removed message is produced by
* #GstDeviceProvider or a #GstDeviceMonitor. It announces the
* disappearance of monitored devices.
*
* Since: 1.4
*/
void
gst_message_parse_device_removed (GstMessage * message, GstDevice ** device)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_DEVICE_REMOVED);
if (device)
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (DEVICE), GST_TYPE_DEVICE, device, NULL);
}
/**
* gst_message_new_property_notify:
* @src: The #GstObject whose property changed (may or may not be a #GstElement)
* @property_name: name of the property that changed
* @val: (allow-none) (transfer full): new property value, or %NULL
*
* Returns: a newly allocated #GstMessage
*
* Since: 1.10
*/
GstMessage *
gst_message_new_property_notify (GstObject * src, const gchar * property_name,
GValue * val)
{
GstStructure *structure;
GValue name_val = G_VALUE_INIT;
g_return_val_if_fail (property_name != NULL, NULL);
structure = gst_structure_new_id_empty (GST_QUARK (MESSAGE_PROPERTY_NOTIFY));
g_value_init (&name_val, G_TYPE_STRING);
/* should already be interned, but let's make sure */
g_value_set_static_string (&name_val, g_intern_string (property_name));
gst_structure_id_take_value (structure, GST_QUARK (PROPERTY_NAME), &name_val);
if (val != NULL)
gst_structure_id_take_value (structure, GST_QUARK (PROPERTY_VALUE), val);
return gst_message_new_custom (GST_MESSAGE_PROPERTY_NOTIFY, src, structure);
}
/**
* gst_message_parse_property_notify:
* @message: a #GstMessage of type %GST_MESSAGE_PROPERTY_NOTIFY
* @object: (out) (allow-none) (transfer none): location where to store a
* pointer to the object whose property got changed, or %NULL
* @property_name: (out) (allow-none): return location for the name of the
* property that got changed, or %NULL
* @property_value: (out) (allow-none): return location for the new value of
* the property that got changed, or %NULL. This will only be set if the
* property notify watch was told to include the value when it was set up
*
* Parses a property-notify message. These will be posted on the bus only
* when set up with gst_element_add_property_notify_watch() or
* gst_element_add_property_deep_notify_watch().
*
* Since: 1.10
*/
void
gst_message_parse_property_notify (GstMessage * message, GstObject ** object,
const gchar ** property_name, const GValue ** property_value)
{
const GstStructure *s = GST_MESSAGE_STRUCTURE (message);
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_PROPERTY_NOTIFY);
if (object)
*object = GST_MESSAGE_SRC (message);
if (property_name) {
const GValue *name_value;
name_value = gst_structure_id_get_value (s, GST_QUARK (PROPERTY_NAME));
*property_name = g_value_get_string (name_value);
}
if (property_value)
*property_value =
gst_structure_id_get_value (s, GST_QUARK (PROPERTY_VALUE));
}
/**
* gst_message_new_stream_collection:
* @src: The #GstObject that created the message
* @collection: (transfer none): The #GstStreamCollection
*
* Creates a new stream-collection message. The message is used to announce new
* #GstStreamCollection
*
* Returns: a newly allocated #GstMessage
*
* Since: 1.10
*/
GstMessage *
gst_message_new_stream_collection (GstObject * src,
GstStreamCollection * collection)
{
GstMessage *message;
GstStructure *structure;
g_return_val_if_fail (collection != NULL, NULL);
g_return_val_if_fail (GST_IS_STREAM_COLLECTION (collection), NULL);
structure =
gst_structure_new_id (GST_QUARK (MESSAGE_STREAM_COLLECTION),
GST_QUARK (COLLECTION), GST_TYPE_STREAM_COLLECTION, collection, NULL);
message =
gst_message_new_custom (GST_MESSAGE_STREAM_COLLECTION, src, structure);
return message;
}
/**
* gst_message_parse_stream_collection:
* @message: a #GstMessage of type %GST_MESSAGE_STREAM_COLLECTION
* @collection: (out) (allow-none) (transfer full): A location where to store a
* pointer to the #GstStreamCollection, or %NULL
*
* Parses a stream-collection message.
*
* Since: 1.10
*/
void
gst_message_parse_stream_collection (GstMessage * message,
GstStreamCollection ** collection)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) ==
GST_MESSAGE_STREAM_COLLECTION);
if (collection)
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (COLLECTION), GST_TYPE_STREAM_COLLECTION, collection, NULL);
}
/**
* gst_message_new_streams_selected:
* @src: The #GstObject that created the message
* @collection: (transfer none): The #GstStreamCollection
*
* Creates a new steams-selected message. The message is used to announce
* that an array of streams has been selected. This is generally in response
* to a #GST_EVENT_SELECT_STREAMS event, or when an element (such as decodebin3)
* makes an initial selection of streams.
*
* The message also contains the #GstStreamCollection to which the various streams
* belong to.
*
* Users of gst_message_new_streams_selected() can add the selected streams with
* gst_message_streams_selected_add().
*
* Returns: a newly allocated #GstMessage
*
* Since: 1.10
*/
GstMessage *
gst_message_new_streams_selected (GstObject * src,
GstStreamCollection * collection)
{
GstMessage *message;
GstStructure *structure;
GValue val = G_VALUE_INIT;
g_return_val_if_fail (collection != NULL, NULL);
g_return_val_if_fail (GST_IS_STREAM_COLLECTION (collection), NULL);
structure =
gst_structure_new_id (GST_QUARK (MESSAGE_STREAMS_SELECTED),
GST_QUARK (COLLECTION), GST_TYPE_STREAM_COLLECTION, collection, NULL);
g_value_init (&val, GST_TYPE_ARRAY);
gst_structure_id_take_value (structure, GST_QUARK (STREAMS), &val);
message =
gst_message_new_custom (GST_MESSAGE_STREAMS_SELECTED, src, structure);
return message;
}
/**
* gst_message_streams_selected_get_size:
* @message: a #GstMessage of type %GST_MESSAGE_STREAMS_SELECTED
*
* Returns the number of streams contained in the @message.
*
* Returns: The number of streams contained within.
*
* Since: 1.10
*/
guint
gst_message_streams_selected_get_size (GstMessage * msg)
{
const GValue *val;
g_return_val_if_fail (GST_IS_MESSAGE (msg), 0);
g_return_val_if_fail (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAMS_SELECTED,
0);
val =
gst_structure_id_get_value (GST_MESSAGE_STRUCTURE (msg),
GST_QUARK (STREAMS));
return gst_value_array_get_size (val);
}
/**
* gst_message_streams_selected_add:
* @message: a #GstMessage of type %GST_MESSAGE_STREAMS_SELECTED
* @stream: (transfer none): a #GstStream to add to @message
*
* Adds the @stream to the @message.
*
* Since: 1.10
*/
void
gst_message_streams_selected_add (GstMessage * msg, GstStream * stream)
{
GValue *val;
GValue to_add = G_VALUE_INIT;
g_return_if_fail (GST_IS_MESSAGE (msg));
g_return_if_fail (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAMS_SELECTED);
g_return_if_fail (GST_IS_STREAM (stream));
val =
(GValue *) gst_structure_id_get_value (GST_MESSAGE_STRUCTURE (msg),
GST_QUARK (STREAMS));
g_value_init (&to_add, GST_TYPE_STREAM);
g_value_set_object (&to_add, stream);
gst_value_array_append_and_take_value (val, &to_add);
}
/**
* gst_message_streams_selected_get_stream:
* @message: a #GstMessage of type %GST_MESSAGE_STREAMS_SELECTED
* @idx: Index of the stream to retrieve
*
* Retrieves the #GstStream with index @index from the @message.
*
* Returns: (transfer full): A #GstStream
*
* Since: 1.10
*/
GstStream *
gst_message_streams_selected_get_stream (GstMessage * msg, guint idx)
{
const GValue *streams, *val;
g_return_val_if_fail (GST_IS_MESSAGE (msg), NULL);
g_return_val_if_fail (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAMS_SELECTED,
NULL);
streams =
gst_structure_id_get_value (GST_MESSAGE_STRUCTURE (msg),
GST_QUARK (STREAMS));
val = gst_value_array_get_value (streams, idx);
if (val) {
return (GstStream *) g_value_dup_object (val);
}
return NULL;
}
/**
* gst_message_parse_streams_selected:
* @message: a #GstMessage of type %GST_MESSAGE_STREAMS_SELECTED
* @collection: (out) (allow-none) (transfer full): A location where to store a
* pointer to the #GstStreamCollection, or %NULL
*
* Parses a streams-selected message.
*
* Since: 1.10
*/
void
gst_message_parse_streams_selected (GstMessage * message,
GstStreamCollection ** collection)
{
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_STREAMS_SELECTED);
if (collection)
gst_structure_id_get (GST_MESSAGE_STRUCTURE (message),
GST_QUARK (COLLECTION), GST_TYPE_STREAM_COLLECTION, collection, NULL);
}
/**
* gst_message_new_redirect:
* @src: The #GstObject whose property changed (may or may not be a #GstElement)
* @location: (transfer none): location string for the new entry
* @tag_list: (transfer full) (allow-none): tag list for the new entry
* @entry_struct: (transfer full) (allow-none): structure for the new entry
*
* Creates a new redirect message and adds a new entry to it. Redirect messages
* are posted when an element detects that the actual data has to be retrieved
* from a different location. This is useful if such a redirection cannot be
* handled inside a source element, for example when HTTP 302/303 redirects
* return a non-HTTP URL.
*
* The redirect message can hold multiple entries. The first one is added
* when the redirect message is created, with the given location, tag_list,
* entry_struct arguments. Use gst_message_add_redirect_entry() to add more
* entries.
*
* Each entry has a location, a tag list, and a structure. All of these are
* optional. The tag list and structure are useful for additional metadata,
* such as bitrate statistics for the given location.
*
* By default, message recipients should treat entries in the order they are
* stored. The recipient should therefore try entry #0 first, and if this
* entry is not acceptable or working, try entry #1 etc. Senders must make
* sure that they add entries in this order. However, recipients are free to
* ignore the order and pick an entry that is "best" for them. One example
* would be a recipient that scans the entries for the one with the highest
* bitrate tag.
*
* The specified location string is copied. However, ownership over the tag
* list and structure are transferred to the message.
*
* Returns: a newly allocated #GstMessage
*
* Since: 1.10
*/
GstMessage *
gst_message_new_redirect (GstObject * src, const gchar * location,
GstTagList * tag_list, const GstStructure * entry_struct)
{
GstStructure *structure;
GstMessage *message;
GValue entry_locations_gvalue = G_VALUE_INIT;
GValue entry_taglists_gvalue = G_VALUE_INIT;
GValue entry_structures_gvalue = G_VALUE_INIT;
g_return_val_if_fail (location != NULL, NULL);
g_value_init (&entry_locations_gvalue, GST_TYPE_LIST);
g_value_init (&entry_taglists_gvalue, GST_TYPE_LIST);
g_value_init (&entry_structures_gvalue, GST_TYPE_LIST);
structure = gst_structure_new_id_empty (GST_QUARK (MESSAGE_REDIRECT));
gst_structure_id_take_value (structure, GST_QUARK (REDIRECT_ENTRY_LOCATIONS),
&entry_locations_gvalue);
gst_structure_id_take_value (structure, GST_QUARK (REDIRECT_ENTRY_TAGLISTS),
&entry_taglists_gvalue);
gst_structure_id_take_value (structure, GST_QUARK (REDIRECT_ENTRY_STRUCTURES),
&entry_structures_gvalue);
message = gst_message_new_custom (GST_MESSAGE_REDIRECT, src, structure);
g_assert (message != NULL);
gst_message_add_redirect_entry (message, location, tag_list, entry_struct);
return message;
}
/**
* gst_message_add_redirect_entry:
* @message: a #GstMessage of type %GST_MESSAGE_REDIRECT
* @location: (transfer none): location string for the new entry
* @tag_list: (transfer full) (allow-none): tag list for the new entry
* @entry_struct: (transfer full) (allow-none): structure for the new entry
*
* Creates and appends a new entry.
*
* The specified location string is copied. However, ownership over the tag
* list and structure are transferred to the message.
*
* Since: 1.10
*/
void
gst_message_add_redirect_entry (GstMessage * message, const gchar * location,
GstTagList * tag_list, const GstStructure * entry_struct)
{
GValue val = G_VALUE_INIT;
GstStructure *structure;
GValue *entry_locations_gvalue;
GValue *entry_taglists_gvalue;
GValue *entry_structures_gvalue;
g_return_if_fail (location != NULL);
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_REDIRECT);
structure = GST_MESSAGE_STRUCTURE (message);
entry_locations_gvalue =
(GValue *) gst_structure_id_get_value (structure,
GST_QUARK (REDIRECT_ENTRY_LOCATIONS));
g_return_if_fail (GST_VALUE_HOLDS_LIST (entry_locations_gvalue));
entry_taglists_gvalue =
(GValue *) gst_structure_id_get_value (structure,
GST_QUARK (REDIRECT_ENTRY_TAGLISTS));
g_return_if_fail (GST_VALUE_HOLDS_LIST (entry_taglists_gvalue));
entry_structures_gvalue =
(GValue *) gst_structure_id_get_value (structure,
GST_QUARK (REDIRECT_ENTRY_STRUCTURES));
g_return_if_fail (GST_VALUE_HOLDS_LIST (entry_structures_gvalue));
g_value_init (&val, G_TYPE_STRING);
if (location)
g_value_set_string (&val, location);
gst_value_list_append_and_take_value (entry_locations_gvalue, &val);
g_value_init (&val, GST_TYPE_TAG_LIST);
if (tag_list)
g_value_take_boxed (&val, tag_list);
gst_value_list_append_and_take_value (entry_taglists_gvalue, &val);
g_value_init (&val, GST_TYPE_STRUCTURE);
if (entry_struct)
g_value_take_boxed (&val, entry_struct);
gst_value_list_append_and_take_value (entry_structures_gvalue, &val);
}
/**
* gst_message_parse_redirect_entry:
* @message: a #GstMessage of type %GST_MESSAGE_REDIRECT
* @entry_index: index of the entry to parse
* @location: (out) (transfer none) (allow-none): return location for
* the pointer to the entry's location string, or %NULL
* @tag_list: (out) (transfer none) (allow-none): return location for
* the pointer to the entry's tag list, or %NULL
* @entry_struct: (out) (transfer none) (allow-none): return location
* for the pointer to the entry's structure, or %NULL
*
* Parses the location and/or structure from the entry with the given index.
* The index must be between 0 and gst_message_get_num_redirect_entries() - 1.
* Returned pointers are valid for as long as this message exists.
*
* Since: 1.10
*/
void
gst_message_parse_redirect_entry (GstMessage * message, gsize entry_index,
const gchar ** location, GstTagList ** tag_list,
const GstStructure ** entry_struct)
{
const GValue *val;
GstStructure *structure;
const GValue *entry_locations_gvalue;
const GValue *entry_taglists_gvalue;
const GValue *entry_structures_gvalue;
g_return_if_fail (GST_IS_MESSAGE (message));
g_return_if_fail (GST_MESSAGE_TYPE (message) == GST_MESSAGE_REDIRECT);
if (G_UNLIKELY (!location && !tag_list && !entry_struct))
return;
structure = GST_MESSAGE_STRUCTURE (message);
entry_locations_gvalue =
gst_structure_id_get_value (structure,
GST_QUARK (REDIRECT_ENTRY_LOCATIONS));
g_return_if_fail (GST_VALUE_HOLDS_LIST (entry_locations_gvalue));
entry_taglists_gvalue =
gst_structure_id_get_value (structure,
GST_QUARK (REDIRECT_ENTRY_TAG
|
__label__pos
| 0.996492 |
6.6 Why am I getting FlexNet error -15: Cannot connect to floating license server system?
If you are using ARM software products with a floating license, your workstation must be able to communicate with a server running FlexNet server software. If such communication cannot be established, a commonly reported FlexNet error code on the client is -15.
Possible reasons for FlexNet error code -15 are:
โข The wrong license file is being referenced by the application program.
โข The floating license server specified in the license file has not been started.
โข You are using the wrong port@host information.
โข The vendor daemon specified in the license file is not running.
โข The hostname in the license file is not recognized by the system.
โข The network between the client machine and the server machine is down.
To solve these issues, check that you have started your floating license server or servers. You must also check that your client have been correctly configured. The Tool Licensing FAQs on the ARM Technical Support website might also be helpful.
You can try running tests on your server or client computers to identify possible causes of the failure:
1. Try running the lmutil lmdiag utility, which is designed primarily for this purpose.
2. Verify that the application is referencing the correct license file.
3. Verify that the vendor daemon, armlmd, is running. You can use ps on the server to look for the daemon on Unix/Linux, or the Windows Task Manager.
4. Examine the server log file to see if any problems are reported, particularly messages indicating that the vendor daemon has quit.
5. Run lmutil lmstat -a on the server machine to verify that the vendor daemon is alive.
6. Run lmutil lmstat -a on the client machine to verify the connection from client to vendor daemon across the network.
If none of the above tests identifies the cause of the licensing failure, check whether your client machine can communicate to the server over TCP/IP using a utility such as ping. If this fails, then it is possible that communication is being blocked between the server and client.
Firewalls
Your floating license server and client might be on opposite sides of a firewall. If so, you must configure the firewall to allow access to fixed ports for both the lmgrd and armlmd license daemons. Define these ports in the server license file by modifying the top of the license file as shown, substituting your own values:
SERVER myserver server_hostid 8224
VENDOR armlmd port=portnumber
Subnets
If your floating license server and client are on different subnets, then using the fully qualified domain name or IP address of the server might solve the problem. Using the IP address normally circumvents issues arising from domain name resolution.
Server hostname length
There is a character length limit for server hostnames used in the license files. For FLEXlm 8.1b and older, this limit is 32 characters. If you are using newer versions of FlexNet, the limit is 64 characters. If your floating license server name is too long, you must use the IP address of the server instead of the hostname in the license file and client license environment variable.
Intermittent failures
You might encounter intermittent licensing failures if your server is under very heavy load, for example, if you use automated build scripts. These failures can be caused by intermittent networking failures. The current versions of the ARM development tools are more resilient with respect to such temporary network interruptions. If you are using older tools, consider adding retry capability to your build scripts to work around the behavior.
Related reference
9.2ย armlmdiag utility
Non-ConfidentialPDF file iconย PDF versionARM DUI0577K
Copyright ยฉ 2011-2015 ARM. All rights reserved.ย
|
__label__pos
| 0.626354 |
Slide15.JPG
Slide16.JPG
Transcript
Theorem 6.8 :- If a side of a triangle is produced, then the exterior angle so formed is equal to the sum of the two interior opposite angles. Given :- A PQR ,QR is produced to point S. where PRS is exterior angle of PQR. To Prove :- 4 = 1 + 2 Proof:- From (1) and (2) 3 + 4 = 1 + 2 + 3 4 = 1 + 2 PRS = RPQ + PQR Hence, exterior angle is sum of two opposite interior angles. Hence proved
Go Ad-free
Davneet Singh's photo - Co-founder, Teachoo
Made by
Davneet Singh
Davneet Singh has done his B.Tech from Indian Institute of Technology, Kanpur. He has been teaching from the past 14 years. He provides courses for Maths, Science, Social Science, Physics, Chemistry, Computer Science at Teachoo.
|
__label__pos
| 0.686674 |
ย
+0ย ย
ย
0
49
1
avatar
Catherine rolls a standard 6-sided die six times, and the product of her rolls isย \(2500\).ย How many different sequences of rolls could there have been? ย (The order of the rolls matters.)
ย May 15, 2023
ย #1
avatar
-1
The number of different sequences is 300.
ย May 15, 2023
3 Online Users
avatar
|
__label__pos
| 0.951639 |
Dasar Input Output pada C pada CLI
Posted on ย Januari 22, 2018 ย | ย Last Modified ย Oktober 9, 2018
Dasar Input Output C
Dasar Perintah keluaran (Output) C
Perintah Keluaran (Output) adalah aktivitas memberikan informasi dari program untuk pengguna. Informasi tersebut bisa berbentuk data seperti tulisan, gambar, file, hardcopy, objek dan lain-lain.
Di artikel ini penulis akan jelaskan mengenai dasar perintah keluaran berupa tulisan di dalam CLI. Dalam bahasa pemrograman C++ kita dapat melakukan perintah keluaran menggunakan cout. Jika dalam bahasa pemrograman C, kita dapat menggunakan perintah keluaran seperti printf, puts, putchar dan lain-lain.
Macam-Macam Dasar Perintah Keluaran (Output) pada C Pada CLI
Function printf()
printf (Print Formatted data to stdout ) adalah fungsi yang berasal dari bahasa pemrograman C, fungsi yang paling umum digunakan pada bahasa pemrograman C dalam melakukan perintah keluaran pada CLI. printf() merupakan fungsi yang berasal dari pustaka stdio.h .
Bentuk Penulisan
printf("kalimat karakter atau kumpulan karakter %Penentu_format_variabel", variabel);
Keterangan
Kita dapat menaruh kalimat atau nilai di dalam parameter sebagai argument untuk function printf, ketika kita ingin menampilkan nilai dari variabel dengan menggunakan function printf anda bisa menggunakan penentu format.
Penentu format digunakan untuk menentukan jenis format apa yang akan digunakan untuk mencetak nilai variabel yang di panggil, identitas variabel juga disebutkan pada ruang parameter ke dua yang merupakan daftar urutan identitas variabel, peletakan variabel urut berdasarkan urutan format pada parameter satu.
Macam โ macam Dasar penentu format
Tipe Data Penentu Format
Integer
%d atau %i
Floating Point :
โ Bentuk Desimal
%f
โ Bentuk Berpangkat
%e
โ Bentuk Desimal dan Pangkat
%g
Double
%if
Character
%c
String
%s
Unsigned Integer
%u
Long Integer
%ld
Long Unsigned Integer
%lu
Unsigned Hexadecimal Integer
%x
Unsigned Octal Integer
%o
Contoh Penulisan
printf("%s mendapatkan nilai %f", nama, nilai);
printf("%s mendapatkan peringkat %c", nama, grade);
Contoh program
#include <stdio.h>
int main()
{
char nama[] = "BelajarCPP", grade = 'A';
float nilai = 98;
printf("%s mendapatkan nilai %f \n", nama, nilai);
printf("%s mendapatkan peringkat %c", nama, grade);
return 0;
}
Function puts()
puts() adalah fungsi yang mempunyai kepajangan Put String merupakan standar output sama seperti printf(). function puts() berfungsi untuk menyalin alamat dari string sebagai argumen untuk dicetak ke layar dalam bentuk kalimat. Puts() Termasuk dalam pustaka stdio.h. yang merupakan turunan dari bahasa pemrograman C.
Baca : ย Penjelasan dan Contoh Program DFA (Deterministic Finite Automaton) dengan Bahasa C
Perbedaan fungsi puts() dan printf()
printf() puts()
Harus menenukan format data untuk tipe data sting, yaitu %s Tidak perlu menentukan tipe data string karena ini khusus untuk data string
Untuk mencetak pindah baris memerlukan notasi \n Untuk mencetak pindah bais tidak perlu notasi \n, karena sudah diberikan secara otomatis saat pernyataan puts() selesai.
Contoh Penulisan
puts("Belajar C++");
puts(nama);
Contoh Program
#include <stdio.h>
int main()
{
char nama[] = "Belajar C++";
puts("Anda Sedang belajar di ");
puts(nama);
return 0;
}
function putchar()
function putchar() digunakan untuk menampilkan sebuah karakter ke layar. Penampilan karakter tidak diakhiri dengan pindah baris. Putchar berasal dari pustaka stdio.h.
Contoh Penulisan
putchar('a');
putchar(var);
function putchar hanya bisa mendapatkan 1 argument dan hanya bisa mencetak satu karakter.
Contoh Program
#include <stdio.h>
int main()
{
char var='a';
putchar('B');
putchar('E');
putchar('L');
putchar(var);
putchar('J');
putchar('A');
putchar('R');
putchar('C');
putchar('+');
putchar('+');
return 0;
}
Dasar Perintah Masukan (Input) C
Input adalah aktifitas pengguna kepada program, yang memungkinkan program untuk menerima data dari pengguna yang biasanya berbentuk text, file, gambar, hardcopy, obyek dan lain-lain. Data tersebut biasanya akan diolah oleh program dan dikeluarkan dengan perintah keluaran yang ditujukan untuk pengguna itu sendiri.
Di artikel sebelumnya, pada bahasa pemrograman C++ kita dapat melakukan aktifitas input menggunakan object cin. Dan pada artikel ini penulis akan membahas mengenai peritah input pada bahasa pemrograman C.
ada beberapa standar function yang dapat kita gunakan untuk melakukan aktifitas input informasi dari manusia ke program dalam bahasa pemrograman C, yaitu dengan menggunakan scanf(), gets(), dan getchar() yang merupakan standar function yang biasa digunakan dalam aktifitas input pada CLI.
Macam-Macam Dasar Perintah Masukan (Input) C pada CLI
Function scanf()
scanf (Scan/Read Formatted data from stdin) adalah function yang berasal dari bahasa pemrograman C. bisa dikatakan scanf memiliki peran yang berlawanan dengan printf, yaitu scanf berfungsi untuk melaukan perintah masukan melalui layal CLI. Memungkinkan pengguna (manusia) memberikan informasi berupa data text kepada program. Scanf merupakan fungsi yang berasal dari pustaka stdio.h
Bentuk Penulisan
scanf("%penentu_format", alamat_variabel);
Keterangan
Baca : ย Penjelasan dan Cara Mendirikan Variabel
scanf membutuhkan dua argument, yaitu:
โข Argument pertama %penentu_format. Tidak jauh beda dengan printf, scanf membutuhkan penentu format sama seperti penentu format pada printf (untuk dasar penentu format anda bisa melihat pada keterangan printf di atas).
โข Argument kedua alamat_variabel digunakan untuk memberikan data berupa alamat memori dari variabel atau obyek untuk penyimpanan data input. Untuk variabel biasa, biasanya kita membutuhkan tanda Address of Operator & untuk menyatakan alamat memori pada variabel tersebut.
Contoh Program
#include <stdio.h>
int main( )
{
char nama[15] = "";
int kelas = 0;
float nilai = 0;
printf("Nama : ");scanf ("%s",nama);
printf("Kelas: ");scanf("%d",&kelas);
printf("Nilai: ");scanf("%f",&nilai);
printf("\nNama : %s \nKelas : %d \nNilai : %f", nama, kelas, nilai);
return 0;
}
Function gets()
Gets (Get String) adalah function yang berasal dari bahasa pemrograman C, bisa dikatakan bahwa gets() merupakan pasangan dari puts() yang memiliki fungsi berlawanan yaitu berfungsi untuk membuat perintah masukan, memungkinkan pengguna (manusia) danapat memasukan data atau informasi ke dalam program komputer melalui CLI dalam bentuk data text. Mirip seperti scanf tapi gets lebih dikhususkan untuk string. Gets termasuk dalam pustaka stdio.h.
Bentuk Penulisan
gets(nama_variabel)
Perbedaan antara scanf() dan gets()
scanf() Tidak dapat menerima string yang mengandung spasi atau tab, dan akan dianggap sebagai data terpisah
gets() Dapat menerima string yang mengandung spasi atau tab dan masih dianggap sebagai satu kesatuan data.
Contoh Program
#include <stdio.h>
int main()
{
char nama[20];
puts("Masukan nama anda :");
gets(nama);
puts("\nSenang Berkenalan Dengan Anda, Saudara");
puts(nama);
return 0;
}
Function getchar()
getchar (Get Character from stdin), adalah fungsi yang berasala dari bahasa pemrograman C. merupakan pasangan dari Putchar, dan memiliki peran yang berlawanan yaitu, getchar berfungsi untuk memberikan perintah masukan, memungkinkan pengguna untuk memasukan data informasi berupa satu karakter text melalui CLI kepada program komputer. getchar dikhususkan untuk masukan berupa satu karakter. getchar termasuk dalam pustaka stdio.h.
Bentuk Penulisan
variabel = getchar();
berbeda seperti fungsi-fungsi sebelumnya, getchar tidak menggunakan argument untuk parameter getchar, tempat dimana data akan disimpan ditulis sebelum fungsi getchar dan menggunakan tanda = diantara nama tujuan dan function getchar().
Contoh Program
#include <stdio.h>
int main( )
{
char var;
printf("Masukan Sebuah Karakter = ");
var = getchar( );
printf("\nAnda Memasukan karakter \"%c\"", var);
return 0;
}
One Reply to โDasar Input Output pada C pada CLIโ
1. Itโs actually a cool and helpful piece of info. I am satisfied that you simply shared this
useful information with us. Please keep us informed like this.
Thanks for sharing.
Tinggalkan Balasan
Alamat email Anda tidak akan dipublikasikan. Ruas yang wajib ditandai *
|
__label__pos
| 0.99348 |
AJAX โ Do I dare code it from scratch?
When โajaxyfyingโ (my new word for when you add AJAX to a site) a website do you code all the java script your self or do you go use one of the many AJAX libraries/frameworks?
Iโm currently working on something that has lots of forms for inputing values and tables for viewing the values, and I would like to make the interface a little snappier by adding some AJAX to update things without reloading the entire page. Obviously this is relatively simple to do and can be done without a fancy library by coding things myselfโฆ but is it actually worth it? I realise that knowing how AJAX works โunder the hoodโ is a good idea, and the ability to actually code that yourself is an important skill, but other than a simple test app is it actually worth doing it yourself?
At the moment Iโm playing with Prototype, it wasnโt chosen through some kind of scientific process it was just the first tab I happened to open in Firefox. Iโm trying to make the webpage more responsive, not โflashyโ so I donโt need any fancy visual effects just want to limit the amount of times a page loads when things are clicked on.
Any opions? let me know below.
Edit: After writing this I took alook at the WordPress code and found out it uses jQuery, so Iโve downloaded that and gonna have a go with that too.
|
__label__pos
| 0.762481 |
StudyDocs.ru Logo
ะกะพััะธัะพะฒะบะฐ ะผะตัะพะดะพะผ ะฟัะพััะพะณะพ ะฒัะฑะพัะฐ.docx
ะััะพััะธัะพะฒะฐัั ะฟะพะปะพะถะธัะตะปัะฝัะต ัะปะตะผะตะฝัั ะผะฐััะธะฒะฐ ั ะฟะพะผะพััั ะฟัะพััะพะณะพ ะฒัะฑะพัะฐ.
// ะะฐะฑะพัะฐัะพัะฝะฐั ัะฐะฑะพัะฐ โ3.cpp: ะพะฟัะตะดะตะปัะตั ัะพัะบั ะฒั
ะพะดะฐ ะดะปั ะบะพะฝัะพะปัะฝะพะณะพ ะฟัะธะปะพะถะตะฝะธั.//
#include "stdafx.h"#include <iostream>
void main(int argc, _TCHAR* argv[]){ setlocale (LC_ALL,"Rus");FILE *f, *g;int i, A[10], k, min,tmp,j;f=fopen("file1.txt","r");for (i=0;i<10;i++){ fscanf(f,"%d",&A[i]); }fclose(f);printf ("ะััะพััะธัะพะฒะฐัั ะฟะพะปะพะถะธัะตะปัะฝัะต ัะปะตะผะตะฝัั ะผะฐััะธะฒะฐ ั ะฟะพะผะพััั ะฟัะพััะพะณะพ ะฒัะฑะพัะฐ.\n\n");printf ("ะัั
ะพะดะฝัะน ะผะฐััะธะฒ:\n");for (i=0;i<10;i++){ printf ("%d ",A[i]);}
for(i=0;i<9;i++) if(A[i]>=0) { for(k=i, j=i+1;j<10;j++) if(A[j]>A[k]) k=j; tmp = A[k]; A[k] = A[i]; A[i] = tmp; }printf ("\nะััะพััะธัะพะฒะฐะฝะฝัะน ะผะฐััะธะฒ:\n");for (i=0;i<10;i++){ printf ("%d ",A[i]);}
g=fopen("file2.txt","w");for (i=0;i<10;i++){ fprintf (g,"%d ",A[i]);}fclose (g);printf ("\n\n");system ("pause");}
|
__label__pos
| 0.989033 |
้็ฌ-341ย ่ฏ่ฎบ-2670ย ๆ็ซ -0ย trackbacks-0
ไบบไปฌ้ฝๅพๅๆฌข่ฎจ่ฎบ้ญๅ
่ฟไธชๆฆๅฟตใๅ
ถๅฎ่ฟไธชๆฆๅฟตๅฏนไบๅไปฃ็ ๆฅ่ฎฒไธ็น็จ้ฝๆฒกๆ๏ผๅไปฃ็ ๅช้่ฆๆๆกๅฅฝlambda่กจ่พพๅผๅclass+interface็่ฏญไนๅฐฑ่กไบใๅบๆฌไธๅชๆๅจๅ็ผ่ฏๅจๅ่ๆๆบ็ๆถๅๆ้่ฆ็ฎกไปไนๆฏ้ญๅ
ใไธ่ฟๅ ไธบ็ณปๅๆ็ซ ไธป้ข็็ผๆ
๏ผๅจ่ฟ้ๆๅฐฑ่ทๅคงๅฎถ่ฎฒไธไธ้ญๅ
ๆฏไปไนไธ่ฅฟใๅจ็่งฃ้ญๅ
ไนๅ๏ผๆไปฌๅพๅ
็่งฃไธไบๅธธ่ง็argument passingๅsymbol resolving็่งๅใ
้ฆๅ
็ฌฌไธไธชๅฐฑๆฏcall by valueไบใ่ฟไธช่งๅๆไปฌๅคงๅฎถ้ฝๅพ็ๆ๏ผๅ ไธบๆต่ก็่ฏญ่จ้ฝๆฏ่ฟไนๅ็ใๅคงๅฎถ่ฟ่ฎฐๅพๅๅผๅงๅญฆ็ผ็จ็ๆถๅ๏ผไนฆไธๆปๆฏๆไธ้้ข็ฎ๏ผ่ฏด็ๆฏ๏ผ
void Swap(int a, int b)
{
int t = a;
a = b;
b = t;
}
int main()
{
int a=0;
int b=1;
Swap(a, b);
printf("%d, %d", a, b);
}
ย
็ถๅ้ฎ็จๅบไผ่พๅบไปไนใๅฝ็ถๆไปฌ็ฐๅจ้ฝ็ฅ้๏ผaๅbไป็ถๆฏ0ๅ1๏ผๆฒกๆๅๅฐๅๅใ่ฟๅฐฑๆฏcall by valueใๅฆๆๆไปฌไฟฎๆนไธไธ่งๅ๏ผ่ฎฉๅๆฐๆปๆฏ้่ฟๅผ็จไผ ้่ฟๆฅ๏ผๅ ๆญคSwapไผๅฏผ่ดmainๅฝๆฐๆๅไผ่พๅบ1ๅ0็่ฏ๏ผ้ฃ่ฟไธชๅฐฑๆฏcall by referenceไบใ
้คๆญคไนๅค๏ผไธไธชไธๅคชๅธธ่ง็ไพๅญๅฐฑๆฏcall by needไบใcall by need่ฟไธชไธ่ฅฟๅจๆไบ่ๅ็ๅฎ็จ็ๅฝๆฐๅผ่ฏญ่จ๏ผ่ญฌๅฆHaskell๏ผๆฏไธไธช้่ฆ็่งๅ๏ผ่ฏด็ๅฐฑๆฏๅฆๆไธไธชๅๆฐๆฒก่ขซ็จไธ๏ผ้ฃไผ ่ฟๅป็ๆถๅๅฐฑไธไผๆง่กใๅฌ่ตทๆฅๅฅฝๅๆ็น็๏ผๆไป็ถ็จC่ฏญ่จๆฅไธพไธชไพๅญใ
int Add(int a, int b)
{
return a + b;
}
int Choose(bool first, int a, int b)
{
return first ? a : b;
}
int main()
{
int r = Choose(false, Add(1, 2), Add(3, 4));
printf("%d", r);
}
ย
่ฟไธช็จๅบAddไผ่ขซ่ฐ็จๅคๅฐๆฌกๅข๏ผๅคงๅฎถ้ฝ็ฅ้ๆฏไธคๆฌกใไฝๆฏๅจHaskell้้ข่ฟไนๅ็่ฏ๏ผๅฐฑๅชไผ่ขซ่ฐ็จไธๆฌกใไธบไปไนๅข๏ผๅ ไธบChoose็็ฌฌไธไธชๅๆฐๆฏfalse๏ผๆไปฅๅฝๆฐ็่ฟๅๅผๅชไพ่ตไธb๏ผ่ไธไพ่ตไธaใๆไปฅๅจmainๅฝๆฐ้้ขๅฎๆ่งๅฐไบ่ฟไธ็น๏ผไบๆฏๅช็ฎAdd(3, 4)๏ผไธ็ฎAdd(1, 2)ใไธ่ฟๅคงๅฎถๅซไปฅไธบ่ฟๆฏๅ ไธบ็ผ่ฏๅจไผๅ็ๆถๅๅ
่ไบ่ฟไธชๅฝๆฐๆ่ฟไนๅนฒ็๏ผHaskell็่ฟไธชๆบๅถๆฏๅจ่ฟ่กๆถ่ตทไฝ็จ็ใๆไปฅๅฆๆๆไปฌๅไบไธชๅฟซ้ๆๅบ็็ฎๆณ๏ผ็ถๅๆไธไธชๆฐ็ปๆๅบๅๅช่พๅบ็ฌฌไธไธชๆฐๅญ๏ผ้ฃไนๆดไธช็จๅบๆฏO(n)ๆถ้ดๅคๆๅบฆ็ใๅ ไธบๅฟซ้ๆๅบ็average caseๅจๆ็ฌฌไธไธชๅ
็ด ็กฎๅฎไธๆฅ็ๆถๅ๏ผๅช่ฑไบO(n)็ๆถ้ดใๅๅ ไธๆดไธช็จๅบๅช่พๅบ็ฌฌไธไธชๆฐๅญ๏ผๆไปฅๅ้ข็ไปๅฐฑไธ็ฎไบ๏ผไบๆฏๆดไธช็จๅบไนๆฏO(n)ใ
ไบๆฏๅคงๅฎถ็ฅ้call by nameใcall by referenceๅcall by needไบใ็ฐๅจๆฅ็ปๅคงๅฎถ่ฎฒไธไธชcall by name็็ฅๅฅ็่งๅใ่ฟไธช่งๅ็ฅๅฅๅฐ๏ผๆ่งๅพๆ นๆฌๆฒกๅๆณ้ฉพ้ฉญๅฎๆฅๅๅบไธไธชๆญฃ็กฎ็็จๅบใๆๆฅไธพไธชไพๅญ๏ผ
int Set(int a, int b, int c, int d)
{
a += b;
a += c;
a += d;
}
int main()
{
int i = 0;
int x[3] = {1, 2, 3};
Set(x[i++], 10, 100, 1000);
printf("%d, %d, %d, %d", x[0], x[1], x[2], i);
}
ย
ๅญฆ่ฟC่ฏญ่จ็้ฝ็ฅ้่ฟไธช็จๅบๅ
ถๅฎไปไน้ฝๆฒกๅใๅฆๆๆC่ฏญ่จ็call by valueๆนๆไบcall by reference็่ฏ๏ผ้ฃไนxๅi็ๅผๅๅซๆฏ{1111, 2, 3}ๅ1ใไฝๆฏๆไปฌ็ฅ้๏ผไบบ็ฑป็ๆณ่ฑกๅๆฏๅพไธฐๅฏ็๏ผไบๆฏๅๆไบไธ็งๅซๅcall by name็่งๅใcall by nameไนๆฏcall by reference็๏ผไฝๆฏๅบๅซๅจไบไฝ ๆฏไธๆฌกไฝฟ็จไธไธชๅๆฐ็ๆถๅ๏ผ็จๅบ้ฝไผๆ่ฎก็ฎ่ฟไธชๅๆฐ็่กจ่พพๅผๆง่กไธ้ใๅ ๆญค๏ผๅฆๆๆC่ฏญ่จ็call by valueๆขๆcall by name๏ผ้ฃไนไธ้ข็็จๅบๅ็ไบๆ
ๅฎ้
ไธๅฐฑๆฏ๏ผ
x[i++] += 10;
x[i++] += 100;
x[i++] += 1000;
ย
็จๅบๆง่กๅฎไนๅxๅi็ๅผๅฐฑๆฏ{11, 102, 1003}ๅ3ไบใ
ๅพ็ฅๅฅๅฏนๅง๏ผ็จๅพฎไธๆณจๆๅฐฑไผไธญๆ๏ผๆฏไธชๅคงๅ๏ผๅบๆฌๆฒกๆณ็จๅฏนๅงใ้ฃไฝ ไปฌ่ฟๆดๅคฉ็จC่ฏญ่จ็ๅฎๆฅไปฃๆฟๅฝๆฐๅนฒไปไนๅขใๆไพ็จ่ฎฐๅพAdaๆ็ฝๅๆๅบ่ฟๆฏAlgol 60๏ผ่ฟๆฏไปไน่ฏญ่จๅฐฑๆฏ็จ่ฟไธช่งๅ็๏ผๅฐ่ฑกๆฏ่พๆจก็ณใ
่ฎฒๅฎไบargument passing็ไบๆ
๏ผๅจ็่งฃlambda่กจ่พพๅผไนๅ๏ผๆไปฌ่ฟ้่ฆ็ฅ้ไธคไธชๆต่ก็symbol resolving็่งๅใๆ่ฐ็symbol resolving่ฎฒ็ๅฐฑๆฏ่งฃๅณ็จๅบๅจ็ๅฐไธไธชๅๅญ็ๆถๅ๏ผๅฆไฝ็ฅ้่ฟไธชๅๅญๅฐๅบๆๅ็ๆฏ่ฐ็้ฎ้ขใไบๆฏๆๅๅฏไปฅไธพไธไธช็ฎๅ็ฒๆด็ไพๅญไบ๏ผ
Action<int> SetX()
{
int x = 0;
return (int n)=>
{
x = n;
};
}
void Main()
{
int x = 10;
var setX = SetX();
setX(20);
Console.WriteLine(x);
}
ย
ๅผฑๆบ้ฝ็ฅ้่ฟไธช็จๅบๅ
ถๅฎไปไน้ฝๆฒกๅ๏ผๅฐฑ่พๅบ10ใ่ฟๆฏๅ ไธบC#็จ็symbol resolvingๅฐๆนๆณๆฏlexical scopingใๅฏนไบSetX้้ข้ฃไธชlambda่กจ่พพๅผๆฅ่ฎฒ๏ผ้ฃไธชxๆฏSetX็x่ไธๆฏMain็x๏ผๅ ไธบlexical scoping็ๅซไนๅฐฑๆฏ๏ผๅจๅฎไน็ๅฐๆนๅไธๆฅๆพๅๅญใ้ฃไธบไปไนไธ่ฝๅจ่ฟ่ก็ๆถๅๅไธๆฅๆพๅๅญไป่่ฎฉSetX้้ข็lambda่กจ่พพๅผๅฎ้
ไธ่ฎฟ้ฎ็ๆฏMainๅฝๆฐ้้ข็xๅข๏ผๅ
ถๅฎๆฏๆไบบ่ฟไนๅนฒ็ใ่ฟ็งๅๆณๅซdynamic scopingใๆไปฌ็ฅ้๏ผ่ๅ็javascript่ฏญ่จ็evalๅฝๆฐ๏ผๅญ็ฌฆไธฒๅๆฐ้้ข็ๆๆๅๅญๅฐฑๆฏๅจ่ฟ่ก็ๆถๅๆฅๆพ็ใ
=======================ๆๆฏ่ๆฏ็ฅ่ฏ็ๅๅฒ็บฟ=======================
ๆณๅฟ
ๅคงๅฎถ้ฝ่งๅพ๏ผๅฆๆไธไธช่ฏญ่จ็lambda่กจ่พพๅผๅจๅฎไนๅๆง่ก็ๆถๅ้็จ็ๆฏlexical scopingๅcall by value้ฃ่ฏฅๆๅคๅฅฝๅใๆต่ก็่ฏญ่จ้ฝๆฏ่ฟไนๅ็ใๅฐฑ็ฎ่งๅฎๅฐ่ฟไน็ป๏ผ้ฃ่ฟๆฏๆไธไธชๅๆญงใๅฐๅบไธไธชlambda่กจ่พพๅผๆไธๆฅ็ๅค้ข็็ฌฆๅทๆฏๅช่ฏป็่ฟๆฏๅฏ่ฏปๅ็ๅข๏ผpythonๅ่ฏๆไปฌ๏ผ่ฟๆฏๅช่ฏป็ใC#ๅjavascriptๅ่ฏๆไปฌ๏ผ่ฟๆฏๅฏ่ฏปๅ็ใC++ๅ่ฏๆไปฌ๏ผไฝ ไปฌ่ชๅทฑๆฅๅณๅฎๆฏไธไธช็ฌฆๅท็่งๅใไฝไธบไธไธชๅฏน่ฏญ่จไบ่งฃๅพๅพๆทฑๅป๏ผ็ฅ้่ชๅทฑๆฏไธ่กไปฃ็ ๅฐๅบๅจๅไปไน๏ผ่ไธ่ฟๅพๆ่ชๅถๅ็็จๅบๅๆฅ่ฏด๏ผๆ่ฟๆฏๆฏ่พๅๆฌขC#้ฃ็งๅๆณใๅ ไธบๅ
ถๅฎC++ๅฐฑ็ฎไฝ ๆไธไธชๅผๆไบไธๆฅ๏ผๅคง้จๅๆ
ๅตไธ่ฟๆฏไธ่ฝไผๅ็๏ผ้ฃไฝ่ฆๆฏไธชๅ้้ฝ่ฆๆ่ชๅทฑ่ฏดๆๆๅฐๅบๆฏๆณๅช่ฏปๅข๏ผ่ฟๆฏ่ฆ่ฏปๅ้ฝๅฏไปฅๅข๏ผๅฝๆฐไฝๆๆไน็จ่ฟไธชๅ้ไธๆฏๅทฒ็ปๅพๆธ
ๆฅ็่กจ่พพๅบๆฅไบๅใ
้ฃ่ฏดๅฐๅบ้ญๅ
ๆฏไปไนๅข๏ผ้ญๅ
ๅ
ถๅฎๅฐฑๆฏ้ฃไธช่ขซlambda่กจ่พพๅผๆไธๆฅ็โไธไธๆโๅ ไธๅฝๆฐๆฌ่บซไบใๅไธ้ข็SetXๅฝๆฐ้้ข็lambda่กจ่พพๅผ็้ญๅ
๏ผๅฐฑๆฏxๅ้ใไธไธช่ฏญ่จๆไบๅธฆ้ญๅ
็lambda่กจ่พพๅผ๏ผๆๅณ็ไปไนๅข๏ผๆไธ้ข็ปๅคงๅฎถๅฑ็คบไธๅฐๆฎตไปฃ็ ใ็ฐๅจ่ฆไปๅจๆ็ฑปๅ็็lambda่กจ่พพๅผๅผๅง่ฎฒ๏ผๅฐฑๅๅ็็จ้ฃไธชๆ ่็javascriptๅง๏ผ
function pair(a, b) {
return function(c) {
return c(a, b);
};
}
function first(a, b) {
return a;
}
function second(a, b) {
return b;
}
var p = pair(1, pair(2, 3));
var a = p(first);
var b = p(second)(first);
var c = p(second)(second);
print(a, b, c);
ย
่ฟไธช็จๅบ็aใbๅcๅฐๅบๆฏไปไนๅผๅข๏ผๅฝ็ถๅฐฑ็ฎ็ไธๆ่ฟไธช็จๅบ็ไบบไนๅฏไปฅๅพๅฟซ็ๅบๆฅไปไปฌๆฏ1ใ2ๅ3ไบ๏ผๅ ไธบๅ้ๅๅฎๅจๆฏๅฎไน็ๅคชๆธ
ๆฅไบใ้ฃไน็จๅบ็่ฟ่ก่ฟ็จๅฐๅบๆฏๆไนๆ ท็ๅข๏ผๅคงๅฎถๅฏไปฅ็ๅฐ่ฟไธช็จๅบ็ไปปไฝไธไธชๅผๅจๅๅปบไนๅ้ฝๆฒกๆ่ขซ็ฌฌไบๆฌก่ตๅผ่ฟ๏ผไบๆฏ่ฟ็ง็จๅบๅฐฑๆฏๆฒกๆๅฏไฝ็จ็๏ผ้ฃๅฐฑไปฃ่กจๅ
ถๅฎๅจ่ฟ้call by valueๅcall by needๆฏๆฒกๆๅบๅซ็ใcall by needๆๅณ็ๅฝๆฐ็ๅๆฐ็ๆฑๅผ้กบๅบไนๆฏๆ ๆ่ฐ็ใๅจ่ฟ็งๆ
ๅตไธ๏ผ็จๅบๅฐฑๅๅพ่ทๆฐๅญฆๅ
ฌๅผไธๆ ท๏ผๅฏไปฅๆจๅฏผไบใ้ฃๆไปฌ็ฐๅจๅฐฑๆฅๆจๅฏผไธไธ๏ผ
var p = pair(1, pair(2, 3));
var a = p(first);
// โโโโโ
var p = function(c) {
return c(1, pair(2, 3));
};
var a = p(first);
// โโโโโ
var a = first(1, pair(2, 3));
// โโโโโ
var a = 1;
ย
่ฟไน็ฎๆฏไธช่ๆ็็ไพๅญไบๅใ้ญๅ
ๅจ่ฟ้ไฝ็ฐไบไปๅผบๅคง็ไฝ็จ๏ผๆๅๆฐไฟ็ไบ่ตทๆฅ๏ผๆไปฌๅฏไปฅๅจ่ฟไนๅ่ฟ่ก่ฎฟ้ฎใไปฟไฝๆไปฌๅ็ๅฐฑๆฏไธ้ข่ฟๆ ท็ไปฃ็ ๏ผ
var p = {
first : 1,
second : {
first : 1,
second : 2,
}
};
var a = p.first;
var b = p.second.first;
var c = p.second.second;
ย
ไบๆฏๆไปฌๅพๅฐไบไธไธช็ป่ฎบ๏ผ๏ผๅธฆ้ญๅ
็๏ผlambda่กจ่พพๅผๅฏไปฅไปฃๆฟไธไธชๆๅไธบๅช่ฏป็structไบใ้ฃไน๏ผๆๅๅฏไปฅ่ฏปๅ็struct่ฆๆไนๅๅข๏ผๅๆณๅฝ็ถ่ทไธ้ข็ไธไธๆ ทใ็ฉถๅ
ถๅๅ ๏ผๅฐฑๆฏๅ ไธบjavascriptไฝฟ็จไบcall by value็่งๅ๏ผไฝฟๅพpair้้ข็return c(a, b);ๆฒกๅๆณๅฐaๅb็ๅผ็จไผ ้็ปc๏ผ่ฟๆ ทๅฐฑๆฒกๆไบบๅฏไปฅไฟฎๆนaๅb็ๅผไบใ่ฝ็ถaๅbๅจ้ฃไบc้้ขๆฏๆนไธไบ็๏ผไฝๆฏpairๅฝๆฐๅ
้จๆฏๅฏไปฅไฟฎๆน็ใๅฆๆๆไปฌ่ฆๅๆๅชๆฏ็จlambda่กจ่พพๅผ็่ฏ๏ผๅฐฑๅพ่ฆๆฑcๆไฟฎๆนๅ็ๆๆโ่ฟไธชstruct็ๆๅๅ้โ้ฝๆฟๅบๆฅใไบๆฏๅฐฑๆไบไธ้ข็ไปฃ็ ๏ผ
// ๅจ่ฟ้ๆไปฌ็ปง็ปญไฝฟ็จไธ้ข็pairใfirstๅsecondๅฝๆฐ
function mutable_pair(a, b) {
return function(c) {
var x = c(a, b);
// ่ฟ้ๆไปฌๆpairๅฝ้พ่กจ็จ๏ผไธไธช(1, 2, 3)็้พ่กจไผ่ขซๅจๅญไธบpair(1, pair(2, pair(3, null)))
a = x(second)(first);
b = x(second)(second)(first);
return x(first);
};
}
function get_first(a, b) {
return pair(a, pair(a, pair(b, null)));
}
function get_second(a, b) {
return pair(b, pair(a, pair(b, null)));
}
function set_first(value) {
return function(a, b) {
return pair(undefined, pair(value, pair(b, null)));
};
}
function set_second(value) {
return function(a, b) {
return pair(undefined, pair(a, pair(value, null)));
};
}
var p = mutable_pair(1, 2);
var a = p(get_first);
var b = p(get_second);
print(a, b);
p(set_first(3));
p(set_second(4));
var c = p(get_first);
var d = p(get_second);
print(c, d);
ย
ๆไปฌๅฏไปฅ็ๅฐ๏ผๅ ไธบget_firstๅget_secondๅไบไธไธชๅช่ฏป็ไบๆ
๏ผๆไปฅ่ฟๅ็้พ่กจ็็ฌฌไบไธชๅผ๏ผไปฃ่กจๆฐ็a๏ผๅ็ฌฌไธไธชๅผ๏ผไปฃ่กจๆฐ็b๏ผ้ฝๆฏๆง็aๅbใไฝๆฏset_firstๅset_secondๅฐฑไธไธๆ ทไบใๅ ๆญคๅจๆง่กๅฐ็ฌฌไบไธชprint็ๆถๅ๏ผๆไปฌๅฏไปฅ็ๅฐp็ไธคไธชๅผๅทฒ็ป่ขซๆดๆนๆไบ3ๅ4ใ
่ฝ็ถ่ฟ้ๅทฒ็ปๆถๅๅฐไบโ็ปๅฎ่ฟ็ๅ้้ๆฐ่ตๅผโ็ไบๆ
๏ผไธ่ฟๆไปฌ่ฟๆฏๅฏไปฅๅฐ่ฏๆจๅฏผไธไธ๏ผ็ฉถ็ซp(set_first(3));็ๆถๅ็ฉถ็ซๅนฒไบไปไนไบๆ
๏ผ
var p = mutable_pair(1, 2);
p(set_first(3));
// โโโโโ
p = return function(c) {
var x = c(1, 2);
a = x(second)(first);
b = x(second)(second)(first);
return x(first);
};
p(set_first(3));
// โโโโโ
var x = set_first(3)(1, 2);
p.a = x(second)(first); // ่ฟ้็aๅbๆฏp็้ญๅ
ๅ
ๅ
ๅซ็ไธไธๆ็ๅ้ไบ๏ผๆไปฅ่ฟไนๅไผๆธ
ๆฅไธ็น
p.b = x(second)(second)(first);
// return x(first);ๅบๆฅ็ๅผๆฒกไบบ่ฆ๏ผๆไปฅ็็ฅๆใ
// โโโโโ
var x = (function(a, b) {
return pair(undefined, pair(3, pair(b, null)));
})(1, 2);
p.a = x(second)(first);
p.b = x(second)(second)(first);
// โโโโโ
x = pair(undefined, pair(3, pair(2, null)));
p.a = x(second)(first);
p.b = x(second)(second)(first);
// โโโโโ
p.a = 3;
p.b = 2;
ย
็ฑไบๆถๅๅฐไบไธไธๆ็ไฟฎๆน๏ผ่ฟไธชๆจๅฏผไธฅๆ ผไธๆฅ่ฏดๅทฒ็ปไธ่ฝๅซๆจๅฏผไบ๏ผๅช่ฝๅซ่งฃ่ฏดไบใไธ่ฟๆไปฌๅฏไปฅๅ็ฐ๏ผไป
ไป
ไฝฟ็จๅฏไปฅๆๆๅฏ่ฏปๅ็ไธไธๆ็lambda่กจ่พพๅผ๏ผๅทฒ็ปๅฏไปฅๅฎ็ฐๅฏ่ฏปๅ็struct็ๆๆไบใ่ไธ่ฟไธชstruct็่ฏปๅๆฏ้่ฟgetterๅsetterๆฅๅฎ็ฐ็๏ผไบๆฏๅช่ฆๆไปฌๅ็ๅคๆไธ็น๏ผๆไปฌๅฐฑๅพๅฐไบไธไธชinterfaceใไบๆฏ้ฃไธชmutable_pair๏ผๅฐฑๅฏไปฅ็ๆๆฏไธไธชๆ้ ๅฝๆฐไบใ
ๅคงๆฌๅทไธ่ฝๆข่ก็ไปฃ็ ็ไปๅฆ็้พ่ฏปๅ๏ผ่ฟ่ฟๆๅปๅฐฑๅไธๅจๅฑ๏ผgo่ฏญ่จ่ฟๆjavascript่ชๅจ่กฅๅ
จๅๅท็็ฎๆณ็ปๆๅปไบ๏ผ็ๆฏๆฒกๅไฝใ
ๆไปฅ๏ผinterfaceๅ
ถๅฎ่ทlambda่กจ่พพๆฏไธๆ ท๏ผไนๅฏไปฅ็ๆๆฏไธไธช้ญๅ
ใๅชๆฏinterface็ๅ
ฅๅฃๆฏ่พๅค๏ผlambda่กจ่พพๅผ็ๅ
ฅๅฃๅชๆไธไธช๏ผ็ฑปไผผไบC++็operator()๏ผใๅคงๅฎถๅฏ่ฝไผ้ฎ๏ผclassๆฏไปไนๅข๏ผclassๅฝ็ถๆฏinterfaceๅ
้จไธๅฏๅไบบ็ๅฎ็ฐ็ป่็ใๆไปฌ็ฅ้๏ผไพ่ตๅฎ็ฐ็ป่ๆฅ็ผ็จๆฏไธๅฏน็๏ผๆไปฅๆไปฌ่ฆไพ่ตๆฅๅฃ็ผ็จ
ๅฝ็ถ๏ผๅณไฝฟๆฏไปไฟ่ฎพ่ฎกๅบjavascript็้ฃไธชไบบ๏ผๅคงๆฆไนๆฏ็ฅ้ๆ้ ๅฝๆฐไนๆฏไธไธชๅฝๆฐ็๏ผ่ไธ็ฑป็ๆๅ่ทๅฝๆฐ็ไธไธๆ้พ่กจ็่็นๅฏน่ฑกๅ
ถๅฎๆฒกไปไนๅบๅซใไบๆฏๆไปฌไผ็ๅฐ๏ผjavascript้้ขๆฏ่ฟไนๅ้ขๅๅฏน่ฑก็ไบๆ
็๏ผ
function rectangle(a, b) {
this.width = a;
this.height = height;
}
rectangle.prototype.get_area = function() {
return this.width * this.height;
};
var r = new rectangle(3, 4);
print(r.get_area());
ย
็ถๅๆไปฌๅฐฑๆฟๅฐไบไธไธช3ร4็้ฟๆนๅฝข็้ข็งฏ12ไบใไธ่ฟjavascript็ปๆไปฌๅธฆๆฅ็ไธ็น็นๅฐๅฐๆๆฏ๏ผๅฝๆฐ็thisๅๆฐๅ
ถๅฎๆฏdynamic scoping็๏ผไนๅฐฑๆฏ่ฏด๏ผ่ฟไธชthisๅฐๅบๆฏไปไน๏ผ่ฆ็ไฝ ๅจๅชๅฆไฝ่ฐ็จ่ฟไธชๅฝๆฐใไบๆฏๅ
ถๅฎ
obj.method(args)
ย
ๆดไธชไธ่ฅฟๆฏไธไธช่ฏญๆณ๏ผๅฎไปฃ่กจmethod็thisๅๆฐๆฏobj๏ผๅฉไธ็ๅๆฐๆฏargsใๅฏๆ็ๆฏ๏ผ่ฟไธช่ฏญๆณๅนถไธๆฏ็ฑโobj.memberโๅโfunc(args)โ็ปๆ็ใ้ฃไนๅจไธ้ข็ไพๅญไธญ๏ผๅฆๆๆไปฌๆไปฃ็ ๆนไธบ๏ผ
var x = r.get_area;
print(x());
ย
็ปๆๆฏไปไนๅข๏ผๅๆญฃไธๆฏ12ใๅฆๆไฝ ๅจC#้้ขๅ่ฟไธชไบๆ
๏ผๆๆๅฐฑ่ทjavascriptไธไธๆ ทไบใๅฆๆๆไปฌๆไธ้ข็ไปฃ็ ๏ผ
class Rectangle
{
public int width;
public int height;
public int GetArea()
{
return width * height;
}
};
ย
้ฃไนไธ้ขไธคๆฎตไปฃ็ ็ๆๆๆฏไธๆ ท็๏ผ
var r = new Rectangle
{
width = 3;
height = 4;
};
// ็ฌฌไธๆฎตไปฃ็
Console.WriteLine(r.GetArea());
// ็ฌฌไบๆฎตไปฃ็
Func<int> x = r.GetArea;
Console.WriteLine(x());
ย
็ฉถๅ
ถๅๅ ๏ผๆฏๅ ไธบjavascriptๆobj.method(a, b)่งฃ้ๆไบGetMember(obj, โmethodโ).Invoke(a, b, this = r);ไบใๆไปฅไฝ ๅr.get_area็ๆถๅ๏ผไฝ ๆฟๅฐ็ๅ
ถๅฎๆฏๅฎไนๅจrectangle.prototype้้ข็้ฃไธชไธ่ฅฟใไฝๆฏC#ๅ็ไบๆ
ไธไธๆ ท๏ผC#็็ฌฌไบๆฎตไปฃ็ ๅ
ถๅฎ็ธๅฝไบ๏ผ
Func<int> x = ()=>
{
return r.GetArea();
};
Console.WriteLine(x());
ย
ๆไปฅ่ฏดC#่ฟไธชๅๆณๆฏ่พ็ฌฆๅ็ด่งๅ๏ผไธบไปไนdynamic scoping๏ผ่ญฌๅฆjavascript็thisๅๆฐ๏ผๅcall by name๏ผ่ญฌๅฆC่ฏญ่จ็ๅฎ๏ผ็่ตทๆฅ้ฝ้ฃไนๅฑไธ๏ผๆปๆฏ่ฎฉไบบๆๅ้๏ผๅฐฑๆฏๅ ไธบ่ฟๅไบ็ด่งใไธ่ฟjavascript้ฃไนๅ่ฟๆฏๆ
ๆๅฏๅ็ใไผฐ่ฎก็ฌฌไธๆฌก่ฎพ่ฎก่ฟไธชไธ่ฅฟ็ๆถๅ๏ผๆถๅฐไบ้ๆ็ฑปๅ่ฏญ่จๅคชๅค็ๅฝฑๅ๏ผไบๆฏๆobj.method(args)ๆดไธชๅฝๆไบไธไธชๆดไฝๆฅ็ใๅ ไธบๅจC++้้ข๏ผthis็็กฎๅฐฑๆฏไธไธชๅๆฐ๏ผๅชๆฏๅฅนไธ่ฝ่ฎฉไฝ obj.method๏ผๅพๅ&TObj::method๏ผ็ถๅ่ฟๆไธไธชไธ้จๅกซthisๅๆฐ็่ฏญๆณโโๆฒก้๏ผๅฐฑๆฏ.*ๅ->*ๆไฝ็ฌฆไบใ
ๅๅฆ่ฏด๏ผjavascript็thisๅๆฐ่ฆๅๆlexical scoping๏ผ่ไธๆฏdynamic scoping๏ผ้ฃไน่ฝไธ่ฝ็จlambda่กจ่พพๅผๆฅๆจกๆinterfaceๅข๏ผ่ฟๅฝ็ถๆฏๅฏไปฅ๏ผๅชๆฏๅฆๆไธ็จprototype็่ฏ๏ผ้ฃๆไปฌๅฐฑไผไธงๅคฑjavascript็ฑๅฅฝ่
ไปฌๅๆน็พ่ฎก็ปๅฐฝ่ๆฑ็จๅฐฝๅฅๆๆทซๅทง้ๆจกๆๅบๆฅ็โ็ปงๆฟโๆๆไบ๏ผ
function mutable_pair(a, b) {
_this = {
get_first = function() { return a; },
get_second = function() { return b; },
set_first = function(value) { a = value; },
set_second = function(value) { b = value; }
};
return _this; } var p = new mutable_pair(1, 2); var a = p.get_first(); var b = p.get_second(); print(a, b); var c = p.set_first(3); var d = p.set_second(4); print(c, d);
ย
่ฟไธชๆถๅ๏ผๅณไฝฟไฝ ๅ
var x = p.set_first;
var y = p.set_second;
x(3);
y(4);
ย
ไปฃ็ ไนไผ่ทๆไปฌๆๆๆ็ไธๆ ทๆญฃๅธธๅทฅไฝไบใ่ไธๅ้ ๅบๆฅ็r๏ผๆๆ็ๆๅๅ้้ฝๅฑ่ฝๆไบ๏ผๅช็ไธไบๅ ไธชๅฝๆฐ็ปไฝ ใไธๆญคๅๆถ๏ผๅฝๆฐ้้ข่ฎฟ้ฎ_thisไนไผๅพๅฐๅๅปบๅบๆฅ็้ฃไธชinterfaceไบใ
ๅคงๅฎถๅฐ่ฟ้ๅคงๆฆๅทฒ็ปๆ็ฝ้ญๅ
ใlambda่กจ่พพๅผๅinterfaceไน้ด็ๅ
ณ็ณปไบๅงใๆ็ไบไธไธไนๅๅ่ฟ็ๅ
ญ็ฏๆ็ซ ๏ผๅ ไธไปๅคฉ่ฟ็ฏ๏ผๅ
ๅฎนๅทฒ็ป่ฆ็ไบๆ๏ผ
1. ้
่ฏปC่ฏญ่จ็ๅคๆ็ๅฃฐๆ่ฏญๆณ
2. ไปไนๆฏ่ฏญๆณๅช้ณ
3. ไปไนๆฏ่ฏญๆณ็ไธ่ดๆง
4. C++็const็ๆๆ
5. C#็structๅproperty็้ฎ้ข
6. C++็ๅค้็ปงๆฟ
7. ๅฐ่ฃ
ๅฐๅบๆๅณ็ไปไน
8. ไธบไปไนexception่ฆๆฏerror codeๅ่ตทๆฅๅนฒๅใๅฎนๆ็ปดๆค่ไธไธ้่ฆๅคชๅค็ๆฒ้
9. ไธบไปไนC#็ๆไบinterfaceๅบ่ฏฅ่กจ่พพไธบconcept
10. ๆจกๆฟๅๆจกๆฟๅ
็ผ็จ
11. ๅๅๅ้ๅ
12. type rich programming
13. OO็ๆถๆฏๅ้็ๅซไน
14. ่ๅฝๆฐ่กจๆฏๅฆไฝๅฎ็ฐ็
15. ไปไนๆฏOO้้ข็็ฑปๅๆฉๅฑๅผๆพ/ๅฐ้ญไธ้ป่พๆฉๅฑๅผๆพ/ๅฐ้ญ
16. visitorๆจกๅผๅฆไฝ้่ฝฌ็ฑปๅๅ้ป่พ็ๆฉๅฑๅๅฐ้ญ
17. CPS๏ผcontinuation passing style๏ผๅๆขไธๅผๆญฅ่ฐ็จ็ๅผๅธธๅค็็ๅ
ณ็ณป
18. CPSๅฆไฝ่ฎฉexceptionๅๆerror code
19. argument passingๅsymbol resolving
20. ๅฆไฝ็จlambdaๅฎ็ฐmutable structๅimmutable struct
21. ๅฆไฝ็จlambdaๅฎ็ฐinterface
ๆณไบๆณ๏ผๅคงๆฆ้ไฟๆๆ็ๅฏไปฅ่ชๅญฆๆๆ็้ฃไบไธ่ฅฟๅคงๆฆ้ฝ่ฎฒๅฎไบใๅฝ็ถ๏ผ็ณปๅๆฏไธไผๅจ่ฟ้ๅฐฑ็ปๆ็๏ผๅชๆฏๅ้ข็ไธ่ฅฟ๏ผๅคงๆฆๅฐฑ้่ฆๅคงๅฎถๅคไธ็นๆ่ไบใ
ๅ็จๅบ่ฎฒ็ฉถ่กไบๆตๆฐดใๅชๆ่ชๅทฑๅคไบๆ่๏ผๅคไบๅๅฎ้ช๏ผๅคไบ้ ่ฝฎๅญ๏ผๆ่ฝ่ฎฉ็ผ็จ็ๅญฆไน ไบๅๅๅใ
posted on 2013-07-05 22:31 ้ๆข็(vczh) ้
่ฏป(7638) ่ฏ่ฎบ(12) ย ็ผ่พย ๆถ่ ๅผ็จ ๆๅฑๅ็ฑป: ๅฏ็คบ
่ฏ่ฎบ:
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-07-06 14:58 | OpenGG
function MutablePair(a, b) {
var _this = {
get_first: function() { return a; },
get_second: function() { return b; },
set_first: function(value) { a = value; },
set_second: function(value) { b = value; }
};
return _this;
}
var p = new MutablePair(1, 2);
var q = new MutablePair(1, 2);
console.log(p.get_first === q.get_first);ย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-07-06 17:35 | ้ๆข็(vczh)
@OpenGG
false๏ผๅ ไธบp.get_firstๅq.get_first่ฟๅ็ๆฏไธๅๆฅๆบ็ไธ่ฅฟ๏ผๆไปฅๆฏไธคไธชไธๅ็ๅฝๆฐใย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-07-06 20:13 | ๆบชๆต
ๅญฆไน ไบย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-07-08 10:30 | up
call by nameไธๆฏAda๏ผๆฏAloglไฝฟ็จ็ไธ็งๆนๅผ๏ผๅฎๅบ็ฐ็้ๅธธๆฉใFortran๏ผๅๅง็๏ผ็ๆๆๅๆฐ้ฝๆฏcall by ref็๏ผๅฎ้
ไธ๏ผcall by refๅบ็ฐ็ๆฏcall by valueๆฉใcall by valueๅบ่ฏฅๆฏๆๅ้็ไธ็งๆนๅผไบใๅฎ้
ไธ๏ผ่ฏญ่จๅๅฑ็ๅๅฒๅฏไปฅ็ไฝๆฏๅ็ง่ฎพๆฝ่ฝๅไธๆญ่ขซ้ๅถ็ๅๅฒใย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-07-18 20:09 | rink1969
call by name ้ฃไธชๅ
่ขฑๆ็ไธ้
mutable็ๆฐๆฎ็ปๆ่ฎพ่ฎก๏ผๆ็ถ่ทๆฑๅฏผๆไบๅย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-07-24 16:08 | Scan
่ๅคงไธบไปไนไธ็จๆ ่ฎฐๆฅๅบๅlambda็็ฏๅขaccessor็็จ้ๅข๏ผๆ่งๅพ่ฟๆ ท็ฎๅไบๅ
function pair(a, b)
return function(accessor, isSetter)
if isSetter then a, b = accessor(a, b)
else return accessor(a, b) end
end
end
function get_first(pair)
return pair(function(a, b) return a end)
end
function get_second(pair)
return pair(function(a, b) return b end)
end
function set_first(pair, v)
pair(function(a, b) return v, b end, true)
end
function set_second(pair, v)
pair(function(a, b) return a, v end, true)
end
local p = pair(1, 2)
print(get_first(p))
print(get_second(p))
set_first(p, 3)
print(get_first(p))
print(get_second(p))
ย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface[ๆช็ปๅฝ] 2013-07-24 16:56 | ้ๆข็(vczh)
@Scan
ๆๅผฏๆน่งๅย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-10-24 22:38 | SLiang
ๆ่ฏไบไธไธๆๅ้ฃๆฎตjavascriptไปฃ็ ๏ผไผผไนไป่ฟๅ็็ปๆๆ่ง็จ็ๆฏlexical scoping
function mutable_pair(a, b) {
_this = {
get_first : function() { return a; },
get_second : function() { return b; },
set_first : function(value) { a = value; },
set_second : function(value) { b = value; }
};
return _this;
}
var p = new mutable_pair(1, 2);
var x = p.set_first
x(4)
var k = p.get_first();
็ถๅk็็ปๆๆฏ4ใย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-10-24 22:59 | SLiang
็่งฃ้ไฝ ็่ฏไบ๏ผ่ฏทๅฟฝ็ฅๆไธไธไธช้ฎ้ขใ
ๆ็ๆญฃ็้ฎ้ขๆฏ๏ผๅฆๆ็จไฝ ๆๅไธๆฎตjs้ฃ็ง้ญๅ
ๅๆณ๏ผๅฆไฝๅฎ็ฐ็ปงๆฟ็ๆๆๅข๏ผ่ฟๆๆๅ้ฃไธช
var p = new mutable_pair(1, 2);
ๅปๆnewไผผไนไนๆฏไธๆ ท็ๆๆย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2013-10-25 14:59 | ้ๆข็(vczh)
@SLiang
js็็ปงๆฟๆไนๅ็ฐๅจๅคงๅฎถ้ฝๆ็ธๅฝ็่ฎจ่ฎบไบ๏ผไฝ ๅปๆไธๆๅบ่ฏฅไผๆฏๆๅ่ฏไฝ ็ๆดๅ
จ้ขใไฝ ่ฟๅฏไปฅ็็TypeScript๏ผ็ไปๆฏๆไน็ๆ็ปงๆฟ็jsไปฃ็ ็ใย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface[ๆช็ปๅฝ] 2013-11-10 19:51 | patz
> ๅคงๅฎถๅฐ่ฟ้ๅคงๆฆๅทฒ็ปๆ็ฝ้ญๅ
ใlambda่กจ่พพๅผๅinterfaceไน้ด็ๅ
ณ็ณปไบๅงใ
ๆ่น่ฟๅฅ่ฏๆฏๆไน็ช็ถ่นฆๅบๆฅ็๏ผๆ่งๅฐฑๅๆฑชๅณฐๆฑ็ฑ็ปๆๅคงๅฎถ้ฝๅป็ไบๅ ๆฐ้ปๆฒกไบบ็โฆโฆ
็ป่ฎบๅฏผๅบ็ๅคชๅฟซไบ๏ผๅทฎ่ฏ๏ผ่ฟๅฅฝ็่ฟSICP๏ผไธ็ถ็ๆณไธๆธ
ๆฅใย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
#ย re: ๅฆไฝ่ฎพ่ฎกไธ้จ่ฏญ่จ๏ผไธ๏ผ——้ญๅ
ใlambdaๅinterface 2014-03-04 19:44 | nic
call by need ๆฏๆฑๅผ้กบๅบ,ๆไนๅไผ ๅๆททๅจไธๅ่ฏดย ย ๅๅคย ย ๆดๅค่ฏ่ฎบ
ย ย
ๅชๆๆณจๅ็จๆท็ปๅฝๅๆ่ฝๅ่กจ่ฏ่ฎบใ
ใๆจ่ใ่ถ
50ไธ่กVC++ๆบ็ : ๅคงๅ็ปๆๅทฅๆงใ็ตๅไปฟ็CADไธGISๆบ็ ๅบ
็ฝ็ซๅฏผ่ช: ๅๅฎขๅญย ย ITๆฐ้ปย ย BlogJavaย ย ็ฅ่ฏๅบย ย ๅ้ฎย ย ็ฎก็
|
__label__pos
| 0.897197 |
Probabilistic Record-Linkage to Measure Re-Identification Risk Case Studies
Identity disclosure can occur if respondents in the survey data could be matched to records in administrative data or other external data sources through common variables. Our standard process measures the re-identification risk in the statistical disclosure control process before disseminating the data to the public. Probabilistic record linkage is an effective way to identify the re-identification risk in a potential public use file.
Federal Employee Viewpoint Survey (FEVS)
In FEVS, conducted forย the Office of Personnel Management,ย Westat used probability-based matching to estimate the likelihood of correctly matching a person in a proposed partially synthetic public-use file to the record in the original data. The file-level risk was computed as the average of the probabilities and used to determine if the partially synthetic file was safe for release.
National Household Food Acquisition and Purchase Study
For the National Household Food Acquisition and Purchase Study for the U.S. Department of Agriculture, we evaluated the re-identification risk due to geographical clustering.
person checking out at grocery store
The public-use file included some stratification information that could potentially be used to find records in the same U.S. counties. A probabilistic record-linkage software was used to identify the counties that were subject to high re-identification risk.
How can we help?
We welcome messages from job seekers, collaborators, and potential clients and partners.
Get in Contact
Want to work with us?
Youโll be in great company.
Explore Careers
Back to Top
|
__label__pos
| 0.941859 |
getdns: Python bindings for getdns
โgetdnsโ is an implementation of Python language bindings for the getdns API. getdns is a modern, asynchronous DNS API that simplifies access to advanced DNS features, including DNSSEC. The API specification was developed by Paul Hoffman. getdns is built on top of the getdns implementation developed as a joint project between Verisign Labs and NLnet Labs.
We have tried to keep this interface as Pythonic as we can while staying true to the getdns architecture, including trying to maintain consistency with Python object design.
Dependencies
This version of getdns has been built and tested against Python 2.7 and Python 3.4. We also expect these other prerequisites to be installed:
n.b.: libgetdns must be built with the libevent extension, as follows:
./configure --with-libevent
To enable the use of edns cookies in the Python bindings, you must compile support for them into libgetdns, by including the โenable-draft-edns-cookies argument to configure.
This release has been tested against libgetdns 0.5.0.
Building
The code repository for getdns is available at: https://github.com/getdnsapi/getdns-python-bindings. If you are building from source you will need the Python development package for Python 2.7. On Linux systems this is typically something along the lines of โpython-devโ or โpython2.7-devโ, available through your package system. On Mac OS we are building against the python.org release, available in source form here.
For the actual build, we are using the standard Python distutils. To build and install:
python setup.py build
python setup.py install
If you have installed getdns libraries and headers in other than the default location, build the Python bindings using the --with-getdns argument to setup.py, providing the getdns root directory as an argument. (Note that there should be a space between โwith-getdns and the directory). For example,
python setup.py build --with-getdns ~/build
if you installed getdns into your home/build directory.
Weโve added optional support for draft-ietf-dnsop-cookies. It is implemented as a getdns extension (see below). It is not built by default. To enable it, you must build libgetdns with cookies support and add the --with-edns-cookies to the Python module build (i.e. python setup.py build --with-edns-cookies).
Using getdns
Contexts
All getdns queries happen within a resolution context, and among the first tasks youโll need to do before issuing a query is to acquire a Context object. A context is an opaque object with attributes describing the environment within which the query and replies will take place, including elements such as DNSSEC validation, whether the resolution should be performed as a recursive resolver or a stub resolver, and so on. Individual Context attributes may be examined directly, and the overall state of a given context can be queried with the Context.get_api_information() method.
See section 8 of the API specification
Examples
In this example, we do a simple address lookup and dump the results to the screen:
import getdns, pprint, sys
def main():
if len(sys.argv) != 2:
print "Usage: {0} hostname".format(sys.argv[0])
sys.exit(1)
ctx = getdns.Context()
extensions = { "return_both_v4_and_v6" :
getdns.EXTENSION_TRUE }
results = ctx.address(name=sys.argv[1],
extensions=extensions)
if results.status == getdns.RESPSTATUS_GOOD:
sys.stdout.write("Addresses: ")
for addr in results.just_address_answers:
print " {0}".format(addr["address_data"])
sys.stdout.write("\n\n")
print "Entire results tree: "
pprint.pprint(results.replies_tree)
if results.status == getdns.RESPSTATUS_NO_NAME:
print "{0} not found".format(sys.argv[1])
if __name__ == "__main__":
main()
In this example, we do a DNSSEC query and check the response:
import getdns, sys
dnssec_status = {
"DNSSEC_SECURE" : 400,
"DNSSEC_BOGUS" : 401,
"DNSSEC_INDETERINATE" : 402,
"DNSSEC_INSECURE" : 403,
"DNSSEC_NOT_PERFORMED" : 404
}
def dnssec_message(value):
for message in dnssec_status.keys():
if dnssec_status[message] == value:
return message
def main():
if len(sys.argv) != 2:
print "Usage: {0} hostname".format(sys.argv[0])
sys.exit(1)
ctx = getdns.Context()
extensions = { "return_both_v4_and_v6" :
getdns.EXTENSION_TRUE,
"dnssec_return_status" :
getdns.EXTENSION_TRUE }
results = ctx.address(name=sys.argv[1],
extensions=extensions)
if results.status == getdns.RESPSTATUS_GOOD:
sys.stdout.write("Addresses: ")
for addr in results.just_address_answers:
print " {0}".format(addr["address_data"])
sys.stdout.write("\n")
for result in results.replies_tree:
if "dnssec_status" in result.keys():
print "{0}: dnssec_status:
{1}".format(result["canonical_name"],
dnssec_message(result["dnssec_status"]))
if results.status == getdns.RESPSTATUS_NO_NAME:
print "{0} not found".format(sys.argv[1])
if __name__ == "__main__":
main()
Module-level attributes and methods
__version__
The getdns.__version__ attribute contains the version string for the Python getdns module. Please note that this is independent of the version of the underlying getdns library, which may be retrieved through attributes associated with a Context.
get_errorstr_by_id()
Returns a human-friendly string representation of an error ID.
ulabel_to_alabel()
Converts a ulabel to an alabel. Takes one argument (the ulabel)
alabel_to_ulabel()
Converts an alabel to a ulabel. Takes one argument (the alabel)
root_trust_anchor()
Returns the default root trust anchor for DNSSEC.
Known issues
โข โuserargโ currently only accepts a string. This will be changed in a future release, to take arbitrary data types
Contents:
Indices and tables
|
__label__pos
| 0.992146 |
! Copyright (C) 2006, 2007, 2008 Alex Chapman ! See http://factorcode.org/license.txt for BSD license. USING: accessors alarms arrays calendar kernel make math math.rectangles math.parser namespaces sequences system tetris.game tetris.gl ui.gadgets ui.gadgets.labels ui.gadgets.worlds ui.gadgets.status-bar ui.gestures ui.render ui ; IN: tetris TUPLE: tetris-gadget < gadget { tetris tetris } { alarm } ; : ( tetris -- gadget ) tetris-gadget new swap >>tetris ; M: tetris-gadget pref-dim* drop { 200 400 } ; : update-status ( gadget -- ) dup tetris>> [ "Level: " % dup level>> # " Score: " % score>> # ] "" make swap show-status ; M: tetris-gadget draw-gadget* ( gadget -- ) [ [ dim>> first2 ] [ tetris>> ] bi draw-tetris ] keep update-status ; : new-tetris ( gadget -- gadget ) [ ] change-tetris ; tetris-gadget H{ { T{ button-down f f 1 } [ request-focus ] } { T{ key-down f f "UP" } [ tetris>> rotate-right ] } { T{ key-down f f "d" } [ tetris>> rotate-left ] } { T{ key-down f f "f" } [ tetris>> rotate-right ] } { T{ key-down f f "e" } [ tetris>> rotate-left ] } ! dvorak d { T{ key-down f f "u" } [ tetris>> rotate-right ] } ! dvorak f { T{ key-down f f "LEFT" } [ tetris>> move-left ] } { T{ key-down f f "RIGHT" } [ tetris>> move-right ] } { T{ key-down f f "DOWN" } [ tetris>> move-down ] } { T{ key-down f f " " } [ tetris>> move-drop ] } { T{ key-down f f "p" } [ tetris>> toggle-pause ] } { T{ key-down f f "n" } [ new-tetris drop ] } } set-gestures : tick ( gadget -- ) [ tetris>> ?update ] [ relayout-1 ] bi ; M: tetris-gadget graft* ( gadget -- ) [ [ tick ] curry 100 milliseconds every ] keep (>>alarm) ; M: tetris-gadget ungraft* ( gadget -- ) [ cancel-alarm f ] change-alarm drop ; : tetris-window ( -- ) [ "Tetris" open-status-window ] with-ui ; MAIN: tetris-window
|
__label__pos
| 0.996634 |
0
$\begingroup$
I'm trying to understand the orientable double cover of manifolds and a friend posed a question about $M = \mathbb{R}\mathbb{P}^2 \times \mathbb{R}\mathbb{P}^2$.
First, here is some set up. The fundamental group of $M$ is $\mathbb{Z}_2 \times \mathbb{Z}_2$ and so the number of covering spaces for $M$ is equal to the number of conjugacy classes of subgroups in $\pi_1(M)$. Being abelian, there are 4 subgroups which correspond to the universal cover $S^2 \times S^2$, $S^2 \times \mathbb{R}\mathbb{P}^2$, $\mathbb{R}\mathbb{P}^2 \times S^2$ and $\widehat{M} = S^2 \times S^2/\langle (1,1) \rangle$ where $\langle (1,1) \rangle$ is the "diagonal" order 2 subgroup.
The question: What are the de Rham groups of $\widehat{M}$?
We know that since $M$ is non-orientable and compact, $\widehat{M}$ should be connected and so $H^0_{dR}(\widehat{M}) = \mathbb{R} = H^4_{dR}(\widehat{M})$ by Poincare duality. But I don't know how to compute $H^1, H^2$. I was thinking of using the only other tool that I know which is the Mayer-Vietoris theorem but I don't know what two open subsets to choose. And Kunneth's Formula doesn't seem to apply because the quotient by group action complicates the product.
$\endgroup$
2
โข $\begingroup$ Do you know how to compute the dR cohomology of $S^2\times S^2$? If so, it is not hard to compute the cohomology of $S^2\times S^2/G$ for $G$ finite. You do some averaging on the cover space. $\endgroup$
โย orangeskid
May 11 '18 at 13:23
โข $\begingroup$ @orangeskid I can compute for $S^2 \times S^2$ but I don't know how to get to $S^2 \times S^2/G$. Do you have a reference for me? I think I figured something out using $H^1_{dR}(\widehat{M}) \cong H_1(\widehat{M}, \mathbb{R})^*$. $\endgroup$
โย inkievoyd
May 11 '18 at 20:15
2
$\begingroup$
The forms on $M/G$ correspond to forms on $M$ invariant under $G$. Now, if you have a complex of vector spaces with an action of a finite group $G$, and you consider the subcomplex of invariant elements, the cohomology of the complex of invariants will be isomorphic to the invariants of the cohomology. That is, if $A$ is your complex of vector spaces, we have a morphism of complexes $A^G\to A$ so a morphism of the cohomology that maps $H(A^G)$ to $H(A)^G$.
Injective: let $a$ $G$ invariant, such that $a = d a'$. Then $a = d( \frac{1}{|G|} \sum g a')$, the image of an invariant cochain.
Surjective: Let $\hat a \in H(A)^G$. Then $g a - a = d( b_g)$ for some $b_g$. Then $a = \frac{1}{|G|}(\sum g a) - d(\frac{1}{|G|}\sum b_g)$.
Now, we know that the cohomology of the complex of invariant forms is isomorphic to the invariants of the cohomology. On $H^2(S^2)\otimes H^2(S^2)$ the map $-1\otimes -1$ acts trivially, so the invariants is the full space, $\mathbb{R}$.
On $H^2(S^2\times S^2)= H^2(S^2)\otimes \mathbb{R}\oplus \mathbb{R}\otimes H^2(S^2)$, the map $(-1,-1)$ acts by $-1$ on each term. So we get $0$. No other non-zero term except $H^0$.
$\endgroup$
1
$\begingroup$
You can see $\hat M$ has a flat bundle (suspension) of $S^2$ over $\mathbb{R}P^2$ and apply the Serre spectral sequence which gives $H^1(\hat M)=H^1(\mathbb{R}P^2,H^0(S^2))=H^1(\mathbb{R}P^2,\mathbb{R})$ and $H^2(\hat M)=H^2(\mathbb{R}P^2,H^0(\mathbb{R}))=H^2(\mathbb{R}P^2,\mathbb{R})$.
$\endgroup$
0
$\begingroup$
We can use the fact that $H^1_{dR}(\widehat{M}) \cong H_1(\widehat{M}, \mathbb{R})^* = \text{Hom}(H_1(\widehat{M}),\mathbb{R})$ and since $H_1(\widehat{M},\mathbb{R}) = H_1(\widehat{M},\mathbb{Z}) \otimes_\mathbb{Z} \mathbb{R} = 0$, $H^1_{dR}(\widehat{M}) = 0 = H^3_{dR}(\widehat{M})$ by Poincare duality. We can then compute $H^2$ by looking at the Euler characteristic. In general, we know that $\chi(M \times N) = \chi(M)\chi(N)$ and if $M$ is a $k$-fold cover of $N$, then $\chi(M) = k \chi(N)$. Thus, we know that $\chi(\widehat{M}) = b_2 + 2 = 2\chi(\mathbb{R}\mathbb{P}^2)\chi(\mathbb{R}\mathbb{P}^2)$. So we compute the Euler characteristic of $\mathbb{R}\mathbb{P}^2$. $\mathbb{R}\mathbb{P}^2$ is covered twice by $S^2$ and $\chi(S^2) = 2 = 2\chi(\mathbb{R}\mathbb{P}^2)$. So $\chi(\mathbb{R}\mathbb{P}^2) = 1 \Rightarrow b_2 + 2 = 2$. Thus, $H^2_{dR}(\widehat{M}) = 0$.
$\endgroup$
Your Answer
By clicking โPost Your Answerโ, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 1 |
Java solution with sorting N(lgN)
โข 1
Q
public class Solution {
public int hIndex(int[] citations) {
Arrays.sort(citations);
int maxCit = 0;
for (int i=0;i<citations.length;i++){
int cit = Math.min(citations[i], citations.length-i);
if (cit >= maxCit ){
maxCit = cit;
continue;
}
break;
}
return maxCit;
}
}
Log in to reply
ย
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
|
__label__pos
| 0.983476 |
0
:rolleyes: Hello All,
I am very new to writing Java program so am here looking for help but most I want learn more about java. I am try to understand the code but it does not click for me.
My question is: I am trying to write a program to create an array size of 25 and fill it with 5 digit palindromes. I also need to print out how many of the 25 are even and how many are odd.
Now, I know how to make the array of 25 and I set it up so that the palindromes are ramdonly pick but I have to compare the digit to make sure that they are palindromes. How should I do that! How do I compare cell of the Array or should I convert the number into strings!
This is what I have so far:
//class palindromesarray
class palindromesarray
{
public static void main(String args[])
{
int number[];
number = new int[25];
for(int i = 0; i < 25; i++) number[i] = (int)Math.floor(Math.random()*100000+1);
for(int i=0; i<25; i++)
System.out.println(number[i]);
}
}
2
Contributors
1
Reply
3
Views
11 Years
Discussion Span
Last Post by iamthwee
0
How should I do that! How do I compare cell of the Array or should I convert the number into strings!
Do whatever you're allowed. If you can cannot convert to a string then use the modulus operator to separate the digits.
This topic has been dead for over six months. Start a new discussion instead.
Have something to contribute to this discussion? Please be thoughtful, detailed and courteous, and be sure to adhere to our posting rules.
|
__label__pos
| 0.875362 |
Below The Beltway
I believe in the free speech that liberals used to believe in, the economic freedom that conservatives used to believe in, and the personal freedom that America used to believe in.
A Twit Takes On Twitter
by @ 9:18 am on April 22, 2009. Filed under Dumbasses, Internet, Media, Technology, Twitter
Maureen Dowd talks to the founders of Twitter:
Alfred Hitchcock would have loved the Twitter headquarters here. Birds gathering everywhere, painted on the wall in flocks, perched on the coffee table, stitched on pillows and framed on the wall with a thought bubble asking employees to please tidy up after themselves.
In a droll nod to shifting technology, thereโs a British red telephone booth in the loftlike office that you are welcome to use but youโll have to bring in your cellphone.
I was here on a simple quest: curious to know if the inventors of Twitter were as annoying as their invention.
I sat down with Biz Stone, 35, and Evan Williams, 37, and asked them to justify themselves.
As if justifying themselves to MoDowd was something that the guys behind one of the fastest growing phenomena on the Internet even need to worry about doing.
Take a look at a few of Dowdโs questions:
ME: Did you know you were designing a toy for bored celebrities and high-school girls?
As opposed to The New York Times Op-Ed page, which seems to be a toy for bored Ivy Leaguers ?
ME: Was there anything in your childhood that led you to want to destroy civilization as we know it?
Yes, Maureen, weโre all wondering that about you.
ME: I would rather be tied up to stakes in the Kalahari Desert, have honey poured over me and red ants eat out my eyes than open a Twitter account. Is there anything you can say to change my mind?
Believe me Maureen, there are a lot of people out there who would prefer Option One to Option Two.
Does she really get paid for writing this drivel ?
Post to Twitter Post to Digg Post to Facebook Post to Reddit Post to StumbleUpon
One Response to โA Twit Takes On Twitterโ
1. Raymond says:
ME: I would rather be tied up to stakes in the Kalahari Desert, have honey poured over me and red ants eat out my eyes than open a Twitter account. Is there anything you can say to change my mind?
Actually just come to Texas. Fire-ants will do roughly the same thing.
[Below The Beltway is proudly powered by WordPress.]
|
__label__pos
| 0.864052 |
OmniSciDB ย f17484ade4
ย Allย Classesย Namespacesย Filesย Functionsย Variablesย Typedefsย Enumerationsย Enumeratorย Friendsย Macrosย Groupsย Pages
ParquetGeospatialImportEncoder.h
Go to the documentation of this file.
1ย /*
2ย * Copyright 2022 HEAVY.AI, Inc.
3ย *
4ย * Licensed under the Apache License, Version 2.0 (the "License");
5ย * you may not use this file except in compliance with the License.
6ย * You may obtain a copy of the License at
7ย *
8ย * http://www.apache.org/licenses/LICENSE-2.0
9ย *
10ย * Unless required by applicable law or agreed to in writing, software
11ย * distributed under the License is distributed on an "AS IS" BASIS,
12ย * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13ย * See the License for the specific language governing permissions and
14ย * limitations under the License.
15ย */
16ย
17ย #pragma once
18ย
19ย #include <parquet/schema.h>
20ย #include <parquet/types.h>
21ย #include "GeospatialEncoder.h"
22ย #include "ParquetEncoder.h"
24ย
25ย namespace foreign_storage {
26ย
28ย public GeospatialEncoder,
29ย public ParquetImportEncoder {
30ย public:
31ย ParquetGeospatialImportEncoder(const bool geo_validate_geometry)
32ย : ParquetEncoder(nullptr)
33ย , GeospatialEncoder(geo_validate_geometry)
35ย , invalid_indices_(nullptr) {}
36ย
37ย ParquetGeospatialImportEncoder(std::list<Chunk_NS::Chunk>& chunks,
38ย const bool geo_validate_geometry)
39ย : ParquetEncoder(nullptr)
40ย , GeospatialEncoder(chunks, geo_validate_geometry)
42ย , invalid_indices_(nullptr)
43ย , base_column_buffer_(nullptr)
44ย , coords_column_buffer_(nullptr)
45ย , bounds_column_buffer_(nullptr)
47ย , poly_rings_column_buffer_(nullptr) {
49ย
50ย const auto geo_column_type = geo_column_descriptor_->columnType.get_type();
51ย
53ย chunks.begin()->getBuffer());
55ย
56ย // initialize coords column
58ย getBuffer(chunks, geo_column_type, COORDS));
60ย
61ย // initialize bounds column
62ย if (hasBoundsColumn()) {
64ย getBuffer(chunks, geo_column_type, BOUNDS));
66ย }
67ย
68ย // initialize ring sizes column & render group column
72ย getBuffer(chunks, geo_column_type, RING_OR_LINE_SIZES));
74ย }
75ย
76ย // initialize poly rings column
77ย if (hasPolyRingsColumn()) {
79ย getBuffer(chunks, geo_column_type, POLY_RINGS));
81ย }
82ย }
83ย
84ย void validateAndAppendData(const int16_t* def_levels,
85ย const int16_t* rep_levels,
86ย const int64_t values_read,
87ย const int64_t levels_read,
88ย int8_t* values,
89ย const SQLTypeInfo& column_type, /* may not be used */
90ย InvalidRowGroupIndices& invalid_indices) override {
91ย invalid_indices_ = &invalid_indices; // used in assembly algorithm
92ย appendData(def_levels, rep_levels, values_read, levels_read, values);
93ย }
94ย
96ย const InvalidRowGroupIndices& invalid_indices) override {
97ย if (invalid_indices.empty()) {
98ย return;
99ย }
100ย base_column_buffer_->eraseInvalidData(invalid_indices);
101ย coords_column_buffer_->eraseInvalidData(invalid_indices);
102ย if (hasBoundsColumn()) {
103ย bounds_column_buffer_->eraseInvalidData(invalid_indices);
104ย }
105ย if (hasRingOrLineSizesColumn()) {
107ย }
108ย if (hasPolyRingsColumn()) {
110ย }
111ย }
112ย
113ย void appendData(const int16_t* def_levels,
114ย const int16_t* rep_levels,
115ย const int64_t values_read,
116ย const int64_t levels_read,
117ย int8_t* values) override {
118ย auto parquet_data_ptr = reinterpret_cast<const parquet::ByteArray*>(values);
119ย
121ย
122ย for (int64_t i = 0, j = 0; i < levels_read; ++i) {
124ย if (def_levels[i] == 0) {
126ย } else {
127ย CHECK(j < values_read);
128ย auto& byte_array = parquet_data_ptr[j++];
129ย auto geo_string_view = std::string_view{
130ย reinterpret_cast<const char*>(byte_array.ptr), byte_array.len};
131ย try {
132ย processGeoElement(geo_string_view);
133ย } catch (const std::runtime_error& error) {
139ย }
140ย }
141ย }
142ย
144ย
145ย appendBaseData(levels_read);
146ย
147ย current_batch_offset_ += levels_read;
148ย }
149ย
150ย void appendDataTrackErrors(const int16_t* def_levels,
151ย const int16_t* rep_levels,
152ย const int64_t values_read,
153ย const int64_t levels_read,
154ย int8_t* values) override {
155ย UNREACHABLE() << "unexpected call to appendDataTrackErrors from unsupported encoder";
156ย }
157ย
158ย private:
160ย const std::vector<ArrayDatum>& datum_buffer) {
161ย if (column_buffer) {
162ย for (const auto& datum : datum_buffer) {
163ย column_buffer->appendElement(datum);
164ย }
165ย } else {
166ย CHECK(datum_buffer.empty());
167ย }
168ย }
169ย
176ย }
177ย
178ย void appendBaseData(const int64_t row_count) {
179ย for (int64_t i = 0; i < row_count; ++i) {
181ย }
182ย }
183ย
184ย AbstractBuffer* getBuffer(std::list<Chunk_NS::Chunk>& chunks,
185ย const SQLTypes sql_type,
186ย GeoColumnType geo_column_type) {
187ย auto chunk = getIteratorForGeoColumnType(chunks, sql_type, geo_column_type);
188ย auto buffer = chunk->getBuffer();
189ย return buffer;
190ย }
191ย
199ย };
200ย
201ย } // namespace foreign_storage
AbstractBuffer * getBuffer(std::list< Chunk_NS::Chunk > &chunks, const SQLTypes sql_type, GeoColumnType geo_column_type)
void eraseInvalidData(const FindContainer &invalid_indices)
void validateAndAppendData(const int16_t *def_levels, const int16_t *rep_levels, const int64_t values_read, const int64_t levels_read, int8_t *values, const SQLTypeInfo &column_type, InvalidRowGroupIndices &invalid_indices) override
SQLTypes
Definition: sqltypes.h:65
std::vector< ArrayDatum > coords_datum_buffer_
void appendDataTrackErrors(const int16_t *def_levels, const int16_t *rep_levels, const int64_t values_read, const int64_t levels_read, int8_t *values) override
std::vector< ArrayDatum > ring_or_line_sizes_datum_buffer_
TypedParquetStorageBuffer< ArrayDatum > * coords_column_buffer_
#define UNREACHABLE()
Definition: Logger.h:338
void appendData(const int16_t *def_levels, const int16_t *rep_levels, const int64_t values_read, const int64_t levels_read, int8_t *values) override
TypedParquetStorageBuffer< std::string > * base_column_buffer_
std::vector< ArrayDatum > bounds_datum_buffer_
HOST DEVICE SQLTypes get_type() const
Definition: sqltypes.h:391
ParquetGeospatialImportEncoder(std::list< Chunk_NS::Chunk > &chunks, const bool geo_validate_geometry)
void appendArrayDatumsIfApplicable(TypedParquetStorageBuffer< ArrayDatum > *column_buffer, const std::vector< ArrayDatum > &datum_buffer)
std::set< int64_t > InvalidRowGroupIndices
An AbstractBuffer is a unit of data management for a data manager.
void processGeoElement(std::string_view geo_string_view)
ParquetGeospatialImportEncoder(const bool geo_validate_geometry)
#define CHECK(condition)
Definition: Logger.h:291
bool is_geometry() const
Definition: sqltypes.h:595
TypedParquetStorageBuffer< ArrayDatum > * ring_or_line_sizes_column_buffer_
const ColumnDescriptor * geo_column_descriptor_
SQLTypeInfo columnType
TypedParquetStorageBuffer< ArrayDatum > * bounds_column_buffer_
std::vector< ArrayDatum > poly_rings_datum_buffer_
TypedParquetStorageBuffer< ArrayDatum > * poly_rings_column_buffer_
void eraseInvalidIndicesInBuffer(const InvalidRowGroupIndices &invalid_indices) override
std::list< T >::iterator getIteratorForGeoColumnType(std::list< T > &list, const SQLTypes column_type, const GeoColumnType geo_column)
|
__label__pos
| 0.994848 |
Generics
Generics are an essential feature of Rust, allowing you to write reusable code that works with multiple types without sacrificing performance. They let you write code that works with a variety of types without duplicating code.
What are Generics?
Generics are a way to write code that accepts one or more type parameters, which can then be used within the code as actual types. This allows the code to work with different types, while still being type-safe and efficient.
In Rust, generics are similar to templates in C++ or generics in Java, TypeScript, or other languages.
Using Generics
To illustrate how generics work in Rust, let's start with a simple example. Suppose you have a function that takes two arguments and returns the larger of the two. Without generics, you'd need to write a separate function for each type you want to support, e.g., one for integers and one for floating-point numbers.
However, using generics, you can write a single function that works with any type that implements the PartialOrd trait. Here's an example:
fn max<T: PartialOrd>(x: T, y: T) -> T {
if x > y {
x
} else {
y
}
}
fn main() {
let a = 5;
let b = 10;
let c = 3.14;
let d = 6.28;
println!("Larger of {} and {}: {}", a, b, max(a, b));
println!("Larger of {} and {}: {}", c, d, max(c, d));
}
In the max function definition, we introduce a generic type parameter T using angle brackets (<>). We also specify the trait bound PartialOrd for T using the colon syntax (:). This constraint ensures that the max function only works with types that implement the PartialOrd trait, which is necessary for comparing values using the > operator.
Tip: It can be hard to know what trait bound you need, especially when new to Rust. One of the things I like to do is leave it out entirely, then let the compiler tell me which trait bound it thinks is missing. This works a surprising amount of the time, especially for simple cases.
Now, the max function works with both integers and floating-point numbers. As an added bonus, you can call it with any two values of the same type that implement the PartialOrd trait. So it will even work for strings, or types that you don't even know about! Pretty neat and pretty powerful.
Generic Structs
struct Point<T> {
x: T,
y: T,
}
fn main() {
let integer_point = Point { x: 5, y: 10 };
let float_point = Point { x: 3.14, y: 6.28 };
println!("Integer point: ({}, {})", integer_point.x, integer_point.y);
println!("Floating point: ({}, {})", float_point.x, float_point
}
In the Point struct definition, we introduce a generic type parameter T using angle brackets (<>). This allows us to use the same Point struct with different types for the x and y coordinates.
Note that in this example, both coordinates must have the same type. If you want to allow different types for x and y, you can introduce multiple generic type parameters:
struct Point2<X, Y> {
x: X,
y: Y,
}
fn main() {
let mixed_point = Point2 { x: 5, y: 6.28 };
println!("Mixed point: ({}, {})", mixed_point.x, mixed_point.y);
}
Here we have left the types unbounded, but you would likely want some trait bounds for these generic parameters. PartialOrd, PartialEq, and Debug are common choices.
Generic Enums and Traits
You can use generics with enums and traits in a similar way as with structs and functions. Here's an example of a generic Result enum that can be used to represent the success or failure of a computation (in fact, this is how the standard library type is defined):
enum Result<T, E> {
Ok(T),
Err(E),
}
fn divide(x: f64, y: f64) -> Result<f64, String> {
if y == 0.0 {
Result::Err("Cannot divide by zero.".to_string())
} else {
Result::Ok(x / y)
}
}
fn main() {
let result = divide(5.0, 2.0);
match result {
Result::Ok(value) => println!("Result: {}", value
Result::Err(error) => println!("Error: {}", error),
}
let result = divide(5.0, 0.0);
match result {
Result::Ok(value) => println!("Result: {}", value),
Result::Err(error) => println!("Error: {}", error),
}
}
In this example, we define a generic Result enum with two type parameters: T for the success value and E for the error value. The Result enum has two variants: Ok(T) for success and Err(E) for failure.
We then define a divide function that returns a Result<f64, String>. The function takes two f64 arguments and either returns the result of the division or an error message if the divisor is zero.
In the main function, we call the divide function and pattern match on the returned Result to handle both the success and error cases.
This Result enum is a simplified version of the Result type that is part of the Rust standard library, which is used extensively for error handling.
Exercise: Write a generic divide function. (Hint: look up the std::ops::Div trait.)
|
__label__pos
| 0.999866 |
{ "type": "module", "source": "doc/api/debugger.md", "introduced_in": "v0.9.12", "stability": 2, "stabilityText": "Stable", "miscs": [ { "textRaw": "Debugger", "name": "Debugger", "introduced_in": "v0.9.12", "stability": 2, "stabilityText": "Stable", "type": "misc", "desc": "
Node.js includes an out-of-process debugging utility accessible via a\nV8 Inspector and built-in debugging client. To use it, start Node.js\nwith the inspect argument followed by the path to the script to debug; a\nprompt will be displayed indicating successful launch of the debugger:
\n
$ node inspect myscript.js\n< Debugger listening on ws://127.0.0.1:9229/80e7a814-7cd3-49fb-921a-2e02228cd5ba\n< For help, see: https://nodejs.org/en/docs/inspector\n< Debugger attached.\nBreak on start in myscript.js:1\n> 1 (function (exports, require, module, __filename, __dirname) { global.x = 5;\n 2 setTimeout(() => {\n 3 console.log('world');\ndebug>\n
\n
The Node.js debugger client is not a full-featured debugger, but simple step and\ninspection are possible.
\n
Inserting the statement debugger; into the source code of a script will\nenable a breakpoint at that position in the code:
\n\n
// myscript.js\nglobal.x = 5;\nsetTimeout(() => {\n debugger;\n console.log('world');\n}, 1000);\nconsole.log('hello');\n
\n
Once the debugger is run, a breakpoint will occur at line 3:
\n
$ node inspect myscript.js\n< Debugger listening on ws://127.0.0.1:9229/80e7a814-7cd3-49fb-921a-2e02228cd5ba\n< For help, see: https://nodejs.org/en/docs/inspector\n< Debugger attached.\nBreak on start in myscript.js:1\n> 1 (function (exports, require, module, __filename, __dirname) { global.x = 5;\n 2 setTimeout(() => {\n 3 debugger;\ndebug> cont\n< hello\nbreak in myscript.js:3\n 1 (function (exports, require, module, __filename, __dirname) { global.x = 5;\n 2 setTimeout(() => {\n> 3 debugger;\n 4 console.log('world');\n 5 }, 1000);\ndebug> next\nbreak in myscript.js:4\n 2 setTimeout(() => {\n 3 debugger;\n> 4 console.log('world');\n 5 }, 1000);\n 6 console.log('hello');\ndebug> repl\nPress Ctrl + C to leave debug repl\n> x\n5\n> 2 + 2\n4\ndebug> next\n< world\nbreak in myscript.js:5\n 3 debugger;\n 4 console.log('world');\n> 5 }, 1000);\n 6 console.log('hello');\n 7\ndebug> .exit\n
\n
The repl command allows code to be evaluated remotely. The next command\nsteps to the next line. Type help to see what other commands are available.
\n
Pressing enter without typing a command will repeat the previous debugger\ncommand.
", "miscs": [ { "textRaw": "Watchers", "name": "watchers", "desc": "
It is possible to watch expression and variable values while debugging. On\nevery breakpoint, each expression from the watchers list will be evaluated\nin the current context and displayed immediately before the breakpoint's\nsource code listing.
\n
To begin watching an expression, type watch('my_expression'). The command\nwatchers will print the active watchers. To remove a watcher, type\nunwatch('my_expression').
", "type": "misc", "displayName": "Watchers" }, { "textRaw": "Command reference", "name": "command_reference", "modules": [ { "textRaw": "Stepping", "name": "stepping", "desc": "", "type": "module", "displayName": "Stepping" }, { "textRaw": "Breakpoints", "name": "breakpoints", "desc": "\n
It is also possible to set a breakpoint in a file (module) that\nis not loaded yet:
\n
$ node inspect main.js\n< Debugger listening on ws://127.0.0.1:9229/4e3db158-9791-4274-8909-914f7facf3bd\n< For help, see: https://nodejs.org/en/docs/inspector\n< Debugger attached.\nBreak on start in main.js:1\n> 1 (function (exports, require, module, __filename, __dirname) { const mod = require('./mod.js');\n 2 mod.hello();\n 3 mod.hello();\ndebug> setBreakpoint('mod.js', 22)\nWarning: script 'mod.js' was not loaded yet.\ndebug> c\nbreak in mod.js:22\n 20 // USE OR OTHER DEALINGS IN THE SOFTWARE.\n 21\n>22 exports.hello = function() {\n 23 return 'hello from module';\n 24 };\ndebug>\n
", "type": "module", "displayName": "Breakpoints" }, { "textRaw": "Information", "name": "information", "desc": "", "type": "module", "displayName": "Information" }, { "textRaw": "Execution control", "name": "execution_control", "desc": "", "type": "module", "displayName": "Execution control" }, { "textRaw": "Various", "name": "various", "desc": "", "type": "module", "displayName": "Various" } ], "type": "misc", "displayName": "Command reference" }, { "textRaw": "Advanced Usage", "name": "advanced_usage", "modules": [ { "textRaw": "V8 Inspector Integration for Node.js", "name": "v8_inspector_integration_for_node.js", "desc": "
V8 Inspector integration allows attaching Chrome DevTools to Node.js\ninstances for debugging and profiling. It uses the\nChrome DevTools Protocol.
\n
V8 Inspector can be enabled by passing the --inspect flag when starting a\nNode.js application. It is also possible to supply a custom port with that flag,\ne.g. --inspect=9222 will accept DevTools connections on port 9222.
\n
To break on the first line of the application code, pass the --inspect-brk\nflag instead of --inspect.
\n
$ node --inspect index.js\nDebugger listening on ws://127.0.0.1:9229/dc9010dd-f8b8-4ac5-a510-c1a114ec7d29\nFor help, see: https://nodejs.org/en/docs/inspector\n
\n
(In the example above, the UUID dc9010dd-f8b8-4ac5-a510-c1a114ec7d29\nat the end of the URL is generated on the fly, it varies in different\ndebugging sessions.)
\n
If the Chrome browser is older than 66.0.3345.0,\nuse inspector.html instead of js_app.html in the above URL.
\n
Chrome DevTools doesn't support debugging Worker Threads yet.\nndb can be used to debug them.
", "type": "module", "displayName": "V8 Inspector Integration for Node.js" } ], "type": "misc", "displayName": "Advanced Usage" } ] } ] }
|
__label__pos
| 0.822484 |
pm.drush.inc
1. 8.0.x commands/pm/pm.drush.inc
2. 6.x commands/pm/pm.drush.inc
3. 7.x commands/pm/pm.drush.inc
4. 3.x commands/pm/pm.drush.inc
5. 4.x commands/pm/pm.drush.inc
6. 5.x commands/pm/pm.drush.inc
7. master commands/pm/pm.drush.inc
The drush Project Manager
Terminology:
โข Request: a requested project (string or keyed array), with a name and (optionally) version.
โข Project: a drupal.org project (i.e drupal.org/project/*), such as cck or zen.
โข Extension: a drupal.org module, theme or profile.
โข Version: a requested version, such as 1.0 or 1.x-dev.
โข Release: a specific release of a project, with associated metadata (from the drupal.org update service).
Functions
Namesort descending Description
drush_find_empty_directories Return an array of empty directories.
drush_get_extension_status Calculate an extension status based on current status and schema version.
drush_get_projects Obtain an array of installed projects off the extensions available.
drush_pm_cache_project_extensions Store extensions founds within a project in extensions cache.
drush_pm_classify_extensions Classify extensions as modules, themes or unknown.
drush_pm_disable Command callback. Disable one or more extensions.
drush_pm_enable Command callback. Enable one or more extensions from downloaded projects. Note that the modules and themes to be enabled were evaluated during the pm-enable validate hook, above.
drush_pm_enable_find_project_from_extension Helper function for pm-enable.
drush_pm_enable_validate Validate callback. Determine the modules and themes that the user would like enabled.
drush_pm_extensions_in_project Print out all extensions (modules/themes/profiles) found in specified project.
drush_pm_include_version_control A simple factory function that tests for version control systems, in a user specified order, and returns the one that appears to be appropriate for a specific directory.
drush_pm_inject_info_file_metadata Inject metadata into all .info files for a given project.
drush_pm_list Command callback. Show a list of extensions with type and status.
drush_pm_lookup_extension_in_cache Lookup an extension in the extensions cache.
drush_pm_post_pm_update Post-command callback. Execute updatedb command after an updatecode - user requested `update`.
drush_pm_post_pm_updatecode Post-command callback for updatecode.
drush_pm_put_extension_cache Persists extensions cache.
drush_pm_refresh Command callback. Refresh update status information.
drush_pm_releasenotes Command callback. Show release notes for given project(s).
drush_pm_releases Command callback. Show available releases for given project(s).
drush_pm_uninstall Command callback. Uninstall one or more modules.
drush_pm_update Command callback. Execute pm-update.
drush_pm_updatecode_postupdate Command callback. Execute updatecode-postupdate.
drush_pm_updatecode_validate Validate callback for updatecode command. Abort if 'backup' directory exists.
drush_pm_update_lock Update the locked status of all of the candidate projects to be updated.
pm_complete_extensions List extensions for completion.
pm_complete_projects List projects for completion.
pm_drush_command Implementation of hook_drush_command().
pm_drush_engine_package_handler Implements hook_drush_engine_ENGINE_TYPE().
pm_drush_engine_release_info Implements hook_drush_engine_ENGINE_TYPE().
pm_drush_engine_type_info Implementation of hook_drush_engine_type_info().
pm_drush_engine_update_status Implements hook_drush_engine_ENGINE_TYPE().
pm_drush_engine_version_control Implements hook_drush_engine_ENGINE_TYPE().
pm_drush_help Implementation of hook_drush_help().
pm_parse_arguments Sanitize user provided arguments to several pm commands.
pm_parse_request Parse out the project name and version and return as a structured array.
pm_parse_version Parses a version string and returns its components.
pm_pm_disable_complete Command argument complete callback.
pm_pm_enable_complete Command argument complete callback.
pm_pm_info_complete Command argument complete callback.
pm_pm_releasenotes_complete Command argument complete callback.
pm_pm_releases_complete Command argument complete callback.
pm_pm_uninstall_complete Command argument complete callback.
pm_pm_updatecode_complete Command argument complete callback.
pm_pm_update_complete Command argument complete callback.
_drush_pm_expand_extensions Add extensions that match extension_name*.
_drush_pm_extension_cache_file Returns the path to the extensions cache file.
_drush_pm_find_common_path Helper function to find the common path for a list of extensions in the aim to obtain the project name.
_drush_pm_generate_info_ini_metadata Generate version information for `.info` files in ini format.
_drush_pm_generate_info_yaml_metadata Generate version information for `.info` files in YAML format.
_drush_pm_get_extension_cache Load the extensions cache.
_drush_pm_sort_extensions Sort callback function for sorting extensions.
_pm_parse_version_compound Build a version string from an array of major, minor and extra parts.
_pm_parse_version_decompound Decompound a version string and returns major, minor, patch and extra parts.
Constants
Namesort descending Description
DRUSH_UPDATESTATUS_CURRENT Project is up to date.
DRUSH_UPDATESTATUS_FETCH_PENDING We need to (re)fetch available update data for this project.
DRUSH_UPDATESTATUS_NOT_CHECKED Project's status cannot be checked.
DRUSH_UPDATESTATUS_NOT_CURRENT Project has a new release available, but it is not a security release.
DRUSH_UPDATESTATUS_NOT_FETCHED There was a failure fetching available update data for this project.
DRUSH_UPDATESTATUS_NOT_SECURE Project is missing security update(s).
DRUSH_UPDATESTATUS_NOT_SUPPORTED Current release is no longer supported by the project maintainer.
DRUSH_UPDATESTATUS_PROJECT_NOT_PACKAGED Project was not packaged by drupal.org.
DRUSH_UPDATESTATUS_REQUESTED_PROJECT_NOT_FOUND Requested project not found.
DRUSH_UPDATESTATUS_REQUESTED_PROJECT_NOT_UPDATEABLE Requested project is not updateable.
DRUSH_UPDATESTATUS_REQUESTED_VERSION_CURRENT Requested version already installed.
DRUSH_UPDATESTATUS_REQUESTED_VERSION_NOT_CURRENT Requested version available.
DRUSH_UPDATESTATUS_REQUESTED_VERSION_NOT_FOUND Requested version not found.
DRUSH_UPDATESTATUS_REVOKED Current release has been unpublished and is no longer available.
DRUSH_UPDATESTATUS_UNKNOWN No available update data was found for project.
Interfaces
Namesort descending Description
drush_version_control Interface for version control systems. We use a simple object layer because we conceivably need more than one loaded at a time.
File
commands/pm/pm.drush.inc
View source
1. <?php
2. /**
3. * @file
4. * The drush Project Manager
5. *
6. * Terminology:
7. * - Request: a requested project (string or keyed array), with a name and (optionally) version.
8. * - Project: a drupal.org project (i.e drupal.org/project/*), such as cck or zen.
9. * - Extension: a drupal.org module, theme or profile.
10. * - Version: a requested version, such as 1.0 or 1.x-dev.
11. * - Release: a specific release of a project, with associated metadata (from the drupal.org update service).
12. */
13. use Drush\Log\LogLevel;
14. /**
15. * @defgroup update_status_constants Update Status Constants
16. * @{
17. * Represents update status of projects.
18. *
19. * The first set is a mapping of some constants declared in update.module.
20. * We only declare the ones we're interested in.
21. * The rest of the constants are used by pm-updatestatus to represent
22. * a status when the user asked for updates to specific versions or
23. * other circumstances not managed by Drupal.
24. */
25. /**
26. * Project is missing security update(s).
27. *
28. * Maps UPDATE_NOT_SECURE.
29. */
30. const DRUSH_UPDATESTATUS_NOT_SECURE = 1;
31. /**
32. * Current release has been unpublished and is no longer available.
33. *
34. * Maps UPDATE_REVOKED.
35. */
36. const DRUSH_UPDATESTATUS_REVOKED = 2;
37. /**
38. * Current release is no longer supported by the project maintainer.
39. *
40. * Maps UPDATE_NOT_SUPPORTED.
41. */
42. const DRUSH_UPDATESTATUS_NOT_SUPPORTED = 3;
43. /**
44. * Project has a new release available, but it is not a security release.
45. *
46. * Maps UPDATE_NOT_CURRENT.
47. */
48. const DRUSH_UPDATESTATUS_NOT_CURRENT = 4;
49. /**
50. * Project is up to date.
51. *
52. * Maps UPDATE_CURRENT.
53. */
54. const DRUSH_UPDATESTATUS_CURRENT = 5;
55. /**
56. * Project's status cannot be checked.
57. *
58. * Maps UPDATE_NOT_CHECKED.
59. */
60. const DRUSH_UPDATESTATUS_NOT_CHECKED = -1;
61. /**
62. * No available update data was found for project.
63. *
64. * Maps UPDATE_UNKNOWN.
65. */
66. const DRUSH_UPDATESTATUS_UNKNOWN = -2;
67. /**
68. * There was a failure fetching available update data for this project.
69. *
70. * Maps UPDATE_NOT_FETCHED.
71. */
72. const DRUSH_UPDATESTATUS_NOT_FETCHED = -3;
73. /**
74. * We need to (re)fetch available update data for this project.
75. *
76. * Maps UPDATE_FETCH_PENDING.
77. */
78. const DRUSH_UPDATESTATUS_FETCH_PENDING = -4;
79. /**
80. * Project was not packaged by drupal.org.
81. */
82. const DRUSH_UPDATESTATUS_PROJECT_NOT_PACKAGED = 101;
83. /**
84. * Requested project is not updateable.
85. */
86. const DRUSH_UPDATESTATUS_REQUESTED_PROJECT_NOT_UPDATEABLE = 102;
87. /**
88. * Requested project not found.
89. */
90. const DRUSH_UPDATESTATUS_REQUESTED_PROJECT_NOT_FOUND = 103;
91. /**
92. * Requested version not found.
93. */
94. const DRUSH_UPDATESTATUS_REQUESTED_VERSION_NOT_FOUND = 104;
95. /**
96. * Requested version available.
97. */
98. const DRUSH_UPDATESTATUS_REQUESTED_VERSION_NOT_CURRENT = 105;
99. /**
100. * Requested version already installed.
101. */
102. const DRUSH_UPDATESTATUS_REQUESTED_VERSION_CURRENT = 106;
103. /**
104. * @} End of "defgroup update_status_constants".
105. */
106. /**
107. * Implementation of hook_drush_help().
108. */
109. function pm_drush_help($section) {
110. switch ($section) {
111. case 'meta:pm:title':
112. return dt('Project manager commands');
113. case 'meta:pm:summary':
114. return dt('Download, enable, examine and update your modules and themes.');
115. case 'drush:pm-enable':
116. return dt('Enable one or more extensions (modules or themes). Enable dependant extensions as well.');
117. case 'drush:pm-disable':
118. return dt('Disable one or more extensions (modules or themes). Disable dependant extensions as well.');
119. case 'drush:pm-updatecode':
120. case 'drush:pm-update':
121. $message = dt("Display available update information for Drupal core and all enabled projects and allow updating to latest recommended releases.");
122. if ($section == 'drush:pm-update') {
123. $message .= ' '.dt("Also apply any database updates required (same as pm-updatecode + updatedb).");
124. }
125. $message .= ' '.dt("Note: The user is asked to confirm before the actual update. Backups are performed unless directory is already under version control. Updated projects can potentially break your site. It is NOT recommended to update production sites without prior testing.");
126. return $message;
127. case 'drush:pm-updatecode-postupdate':
128. return dt("This is a helper command needed by updatecode. It is used to check for db updates in a backend process after code updated have been performed. We need to run this task in a separate process to not conflict with old code already in memory.");
129. case 'drush:pm-download':
130. return dt("Download Drupal core or projects from drupal.org (Drupal core, modules, themes or profiles) and other sources. It will automatically figure out which project version you want based on its recommended release, or you may specify a particular version.
131. If no --destination is provided, then destination depends on the project type:
132. - Profiles will be downloaded to profiles/ in your Drupal root.
133. - Modules and themes will be downloaded to the site specific directory (sites/example.com/modules|themes) if available, or to the site wide directory otherwise.
134. - If you're downloading drupal core or you are not running the command within a bootstrapped drupal site, the default location is the current directory.
135. - Drush commands will be relocated to @site_wide_location (if available) or ~/.drush. Relocation is determined once the project is downloaded by examining its content. Note you can provide your own function in a commandfile to determine the relocation of any project.", array('@site_wide_location' => drush_get_context('DRUSH_SITE_WIDE_COMMANDFILES')));
136. }
137. }
138. /**
139. * Implementation of hook_drush_command().
140. */
141. function pm_drush_command() {
142. $update_options = array(
143. 'lock' => array(
144. 'description' => 'Add a persistent lock to remove the specified projects from consideration during updates. Locks may be removed with the --unlock parameter, or overridden by specifically naming the project as a parameter to pm-update or pm-updatecode. The lock does not affect pm-download. See also the update_advanced project for similar and improved functionality.',
145. 'example-value' => 'foo,bar',
146. ),
147. );
148. $update_suboptions = array(
149. 'lock' => array(
150. 'lock-message' => array(
151. 'description' => 'A brief message explaining why a project is being locked; displayed during pm-updatecode. Optional.',
152. 'example-value' => 'message',
153. ),
154. 'unlock' => array(
155. 'description' => 'Remove the persistent lock from the specified projects so that they may be updated again.',
156. 'example-value' => 'foo,bar',
157. ),
158. ),
159. );
160. $items['pm-enable'] = array(
161. 'description' => 'Enable one or more extensions (modules or themes).',
162. 'arguments' => array(
163. 'extensions' => 'A list of modules or themes. You can use the * wildcard at the end of extension names to enable all matches.',
164. ),
165. 'options' => array(
166. 'resolve-dependencies' => 'Attempt to download any missing dependencies. At the moment, only works when the module name is the same as the project name.',
167. 'skip' => 'Skip automatic downloading of libraries (c.f. devel).',
168. ),
169. 'aliases' => array('en'),
170. 'engines' => array(
171. 'release_info' => array(
172. 'add-options-to-command' => FALSE,
173. ),
174. ),
175. );
176. $items['pm-disable'] = array(
177. 'description' => 'Disable one or more extensions (modules or themes).',
178. 'arguments' => array(
179. 'extensions' => 'A list of modules or themes. You can use the * wildcard at the end of extension names to disable multiple matches.',
180. ),
181. 'aliases' => array('dis'),
182. 'engines' => array(
183. 'version_control',
184. 'package_handler',
185. 'release_info' => array(
186. 'add-options-to-command' => FALSE,
187. ),
188. ),
189. );
190. $items['pm-info'] = array(
191. 'description' => 'Show detailed info for one or more extensions (modules or themes).',
192. 'arguments' => array(
193. 'extensions' => 'A list of modules or themes. You can use the * wildcard at the end of extension names to show info for multiple matches. If no argument is provided it will show info for all available extensions.',
194. ),
195. 'aliases' => array('pmi'),
196. 'outputformat' => array(
197. 'default' => 'key-value-list',
198. 'pipe-format' => 'json',
199. 'formatted-filter' => '_drush_pm_info_format_table_data',
200. 'field-labels' => array(
201. 'extension' => 'Extension',
202. 'project' => 'Project',
203. 'type' => 'Type',
204. 'title' => 'Title',
205. 'description' => 'Description',
206. 'version' => 'Version',
207. 'date' => 'Date',
208. 'package' => 'Package',
209. 'core' => 'Core',
210. 'php' => 'PHP',
211. 'status' => 'Status',
212. 'path' => 'Path',
213. 'schema_version' => 'Schema version',
214. 'files' => 'Files',
215. 'requires' => 'Requires',
216. 'required_by' => 'Required by',
217. 'permissions' => 'Permissions',
218. 'config' => 'Configure',
219. 'engine' => 'Engine',
220. 'base_theme' => 'Base theme',
221. 'regions' => 'Regions',
222. 'features' => 'Features',
223. 'stylesheets' => 'Stylesheets',
224. // 'media_' . $media => 'Media '. $media for each $info->info['stylesheets'] as $media => $files
225. 'scripts' => 'Scripts',
226. ),
227. 'output-data-type' => 'format-table',
228. ),
229. );
230. // Install command is reserved for the download and enable of projects including dependencies.
231. // @see http://drupal.org/node/112692 for more information.
232. // $items['install'] = array(
233. // 'description' => 'Download and enable one or more modules',
234. // );
235. $items['pm-uninstall'] = array(
236. 'description' => 'Uninstall one or more modules.',
237. 'arguments' => array(
238. 'modules' => 'A list of modules.',
239. ),
240. 'aliases' => array('pmu'),
241. );
242. $items['pm-list'] = array(
243. 'description' => 'Show a list of available extensions (modules and themes).',
244. 'callback arguments' => array(array(), FALSE),
245. 'options' => array(
246. 'type' => array(
247. 'description' => 'Filter by extension type. Choices: module, theme.',
248. 'example-value' => 'module',
249. ),
250. 'status' => array(
251. 'description' => 'Filter by extension status. Choices: enabled, disabled and/or \'not installed\'. You can use multiple comma separated values. (i.e. --status="disabled,not installed").',
252. 'example-value' => 'disabled',
253. ),
254. 'package' => 'Filter by project packages. You can use multiple comma separated values. (i.e. --package="Core - required,Other").',
255. 'core' => 'Filter out extensions that are not in drupal core.',
256. 'no-core' => 'Filter out extensions that are provided by drupal core.',
257. ),
258. 'outputformat' => array(
259. 'default' => 'table',
260. 'pipe-format' => 'list',
261. 'field-labels' => array('package' => 'Package', 'name' => 'Name', 'type' => 'Type', 'status' => 'Status', 'version' => 'Version'),
262. 'output-data-type' => 'format-table',
263. ),
264. 'aliases' => array('pml'),
265. );
266. $items['pm-refresh'] = array(
267. 'description' => 'Refresh update status information.',
268. 'engines' => array(
269. 'update_status' => array(
270. 'add-options-to-command' => FALSE,
271. ),
272. ),
273. 'aliases' => array('rf'),
274. );
275. $items['pm-updatestatus'] = array(
276. 'description' => 'Show a report of available minor updates to Drupal core and contrib projects.',
277. 'arguments' => array(
278. 'projects' => 'Optional. A list of installed projects to show.',
279. ),
280. 'options' => array(
281. 'pipe' => 'Return a list of the projects with any extensions enabled that need updating, one project per line.',
282. ) + $update_options,
283. 'sub-options' => $update_suboptions,
284. 'engines' => array(
285. 'update_status',
286. ),
287. 'outputformat' => array(
288. 'default' => 'table',
289. 'pipe-format' => 'list',
290. 'field-labels' => array('name' => 'Short Name', 'label' => 'Name', 'existing_version' => 'Installed Version', 'status' => 'Status', 'status_msg' => 'Message', 'candidate_version' => 'Proposed version'),
291. 'fields-default' => array('label', 'existing_version', 'candidate_version', 'status_msg' ),
292. 'fields-pipe' => array('name', 'existing_version', 'candidate_version', 'status_msg'),
293. 'output-data-type' => 'format-table',
294. ),
295. 'aliases' => array('ups'),
296. );
297. $items['pm-updatecode'] = array(
298. 'description' => 'Update Drupal core and contrib projects to latest recommended releases.',
299. 'examples' => array(
300. 'drush pm-updatecode --no-core' => 'Update contrib projects, but skip core.',
301. 'drush pm-updatestatus --format=csv --list-separator=" " --fields="name,existing_version,candidate_version,status_msg"' => 'To show a list of projects with their update status, use pm-updatestatus instead of pm-updatecode.',
302. ),
303. 'arguments' => array(
304. 'projects' => 'Optional. A list of installed projects to update.',
305. ),
306. 'options' => array(
307. 'notes' => 'Show release notes for each project to be updated.',
308. 'no-core' => 'Only update modules and skip the core update.',
309. 'check-updatedb' => 'Check to see if an updatedb is needed after updating the code. Default is on; use --check-updatedb=0 to disable.',
310. ) + $update_options,
311. 'sub-options' => $update_suboptions,
312. 'aliases' => array('upc'),
313. 'topics' => array('docs-policy'),
314. 'engines' => array(
315. 'version_control',
316. 'package_handler',
317. 'release_info' => array(
318. 'add-options-to-command' => FALSE,
319. ),
320. 'update_status',
321. ),
322. );
323. // Merge all items from above.
324. $items['pm-update'] = array(
325. 'description' => 'Update Drupal core and contrib projects and apply any pending database updates (Same as pm-updatecode + updatedb).',
326. 'aliases' => array('up'),
327. 'allow-additional-options' => array('pm-updatecode', 'updatedb'),
328. );
329. $items['pm-updatecode-postupdate'] = array(
330. 'description' => 'Notify of pending db updates.',
331. 'hidden' => TRUE,
332. );
333. $items['pm-releasenotes'] = array(
334. 'description' => 'Print release notes for given projects.',
335. 'arguments' => array(
336. 'projects' => 'A list of project names, with optional version. Defaults to \'drupal\'',
337. ),
338. 'options' => array(
339. 'html' => dt('Display release notes in HTML rather than plain text.'),
340. ),
341. 'examples' => array(
342. 'drush rln cck' => 'Prints the release notes for the recommended version of CCK project.',
343. 'drush rln token-1.13' => 'View release notes of a specfic version of the Token project for my version of Drupal.',
344. 'drush rln pathauto zen' => 'View release notes for the recommended version of Pathauto and Zen projects.',
345. ),
346. 'aliases' => array('rln'),
347. 'bootstrap' => DRUSH_BOOTSTRAP_MAX,
348. 'engines' => array(
349. 'release_info',
350. ),
351. );
352. $items['pm-releases'] = array(
353. 'description' => 'Print release information for given projects.',
354. 'arguments' => array(
355. 'projects' => 'A list of drupal.org project names. Defaults to \'drupal\'',
356. ),
357. 'examples' => array(
358. 'drush pm-releases cck zen' => 'View releases for cck and Zen projects for your Drupal version.',
359. ),
360. 'options' => array(
361. 'default-major' => 'Show releases compatible with the specified major version of Drupal.',
362. ),
363. 'aliases' => array('rl'),
364. 'bootstrap' => DRUSH_BOOTSTRAP_MAX,
365. 'outputformat' => array(
366. 'default' => 'table',
367. 'pipe-format' => 'csv',
368. 'field-labels' => array('project' => 'Project', 'version' => 'Release', 'date' => 'Date', 'status' => 'Status'),
369. 'fields-default' => array('project', 'version', 'date', 'status'),
370. 'fields-pipe' => array('project', 'version', 'date', 'status'),
371. 'output-data-type' => 'format-table',
372. ),
373. 'engines' => array(
374. 'release_info',
375. ),
376. );
377. $items['pm-download'] = array(
378. 'description' => 'Download projects from drupal.org or other sources.',
379. 'examples' => array(
380. 'drush dl drupal' => 'Download latest recommended release of Drupal core.',
381. 'drush dl drupal-7.x' => 'Download latest 7.x development version of Drupal core.',
382. 'drush dl drupal-6' => 'Download latest recommended release of Drupal 6.x.',
383. 'drush dl cck zen' => 'Download latest versions of CCK and Zen projects.',
384. 'drush dl og-1.3' => 'Download a specfic version of Organic groups module for my version of Drupal.',
385. 'drush dl diff-6.x-2.x' => 'Download a specific development branch of diff module for a specific Drupal version.',
386. 'drush dl views --select' => 'Show a list of recent releases of the views project, prompt for which one to download.',
387. 'drush dl webform --dev' => 'Download the latest dev release of webform.',
388. 'drush dl webform --cache' => 'Download webform. Fetch and populate the download cache as needed.',
389. ),
390. 'arguments' => array(
391. 'projects' => 'A comma delimited list of drupal.org project names, with optional version. Defaults to \'drupal\'',
392. ),
393. 'options' => array(
394. 'destination' => array(
395. 'description' => 'Path to which the project will be copied. If you\'re providing a relative path, note it is relative to the drupal root (if bootstrapped).',
396. 'example-value' => 'path',
397. ),
398. 'use-site-dir' => 'Force to use the site specific directory. It will create the directory if it doesn\'t exist. If --destination is also present this option will be ignored.',
399. 'notes' => 'Show release notes after each project is downloaded.',
400. 'variant' => array(
401. 'description' => "Only useful for install profiles. Possible values: 'full', 'projects', 'profile-only'.",
402. 'example-value' => 'full',
403. ),
404. 'select' => "Select the version to download interactively from a list of available releases.",
405. 'drupal-project-rename' => 'Alternate name for "drupal-x.y" directory when downloading Drupal project. Defaults to "drupal".',
406. 'default-major' => array(
407. 'description' => 'Specify the default major version of modules to download when there is no bootstrapped Drupal site. Defaults to "7".',
408. 'example-value' => '6',
409. ),
410. 'skip' => 'Skip automatic downloading of libraries (c.f. devel).',
411. 'pipe' => 'Returns a list of the names of the extensions (modules and themes) contained in the downloaded projects.',
412. ),
413. 'bootstrap' => DRUSH_BOOTSTRAP_MAX,
414. 'aliases' => array('dl'),
415. 'engines' => array(
416. 'version_control',
417. 'package_handler',
418. 'release_info',
419. ),
420. );
421. return $items;
422. }
423. /**
424. * @defgroup extensions Extensions management.
425. * @{
426. * Functions to manage extensions.
427. */
428. /**
429. * Command argument complete callback.
430. */
431. function pm_pm_enable_complete() {
432. return pm_complete_extensions();
433. }
434. /**
435. * Command argument complete callback.
436. */
437. function pm_pm_disable_complete() {
438. return pm_complete_extensions();
439. }
440. /**
441. * Command argument complete callback.
442. */
443. function pm_pm_uninstall_complete() {
444. return pm_complete_extensions();
445. }
446. /**
447. * Command argument complete callback.
448. */
449. function pm_pm_info_complete() {
450. return pm_complete_extensions();
451. }
452. /**
453. * Command argument complete callback.
454. */
455. function pm_pm_releasenotes_complete() {
456. return pm_complete_projects();
457. }
458. /**
459. * Command argument complete callback.
460. */
461. function pm_pm_releases_complete() {
462. return pm_complete_projects();
463. }
464. /**
465. * Command argument complete callback.
466. */
467. function pm_pm_updatecode_complete() {
468. return pm_complete_projects();
469. }
470. /**
471. * Command argument complete callback.
472. */
473. function pm_pm_update_complete() {
474. return pm_complete_projects();
475. }
476. /**
477. * List extensions for completion.
478. *
479. * @return
480. * Array of available extensions.
481. */
482. function pm_complete_extensions() {
483. if (drush_bootstrap_max(DRUSH_BOOTSTRAP_DRUPAL_FULL)) {
484. $extension_info = drush_get_extensions(FALSE);
485. return array('values' => array_keys($extension_info));
486. }
487. }
488. /**
489. * List projects for completion.
490. *
491. * @return
492. * Array of installed projects.
493. */
494. function pm_complete_projects() {
495. if (drush_bootstrap_max(DRUSH_BOOTSTRAP_DRUPAL_FULL)) {
496. return array('values' => array_keys(drush_get_projects()));
497. }
498. }
499. /**
500. * Sort callback function for sorting extensions.
501. *
502. * It will sort first by type, second by package and third by name.
503. */
504. function _drush_pm_sort_extensions($a, $b) {
505. $a_type = drush_extension_get_type($a);
506. $b_type = drush_extension_get_type($b);
507. if ($a_type == 'module' && $b_type == 'theme') {
508. return -1;
509. }
510. if ($a_type == 'theme' && $b_type == 'module') {
511. return 1;
512. }
513. $cmp = strcasecmp($a->info['package'], $b->info['package']);
514. if ($cmp == 0) {
515. $cmp = strcasecmp($a->info['name'], $b->info['name']);
516. }
517. return $cmp;
518. }
519. /**
520. * Calculate an extension status based on current status and schema version.
521. *
522. * @param $extension
523. * Object of a single extension info.
524. *
525. * @return
526. * String describing extension status. Values: enabled|disabled|not installed
527. */
528. function drush_get_extension_status($extension) {
529. if ((drush_extension_get_type($extension) == 'module') && ($extension->schema_version == -1)) {
530. $status = "not installed";
531. }
532. else {
533. $status = ($extension->status == 1)?'enabled':'disabled';
534. }
535. return $status;
536. }
537. /**
538. * Classify extensions as modules, themes or unknown.
539. *
540. * @param $extensions
541. * Array of extension names, by reference.
542. * @param $modules
543. * Empty array to be filled with modules in the provided extension list.
544. * @param $themes
545. * Empty array to be filled with themes in the provided extension list.
546. */
547. function drush_pm_classify_extensions(&$extensions, &$modules, &$themes, $extension_info) {
548. _drush_pm_expand_extensions($extensions, $extension_info);
549. foreach ($extensions as $extension) {
550. if (!isset($extension_info[$extension])) {
551. continue;
552. }
553. $type = drush_extension_get_type($extension_info[$extension]);
554. if ($type == 'module') {
555. $modules[$extension] = $extension;
556. }
557. else if ($type == 'theme') {
558. $themes[$extension] = $extension;
559. }
560. }
561. }
562. /**
563. * Obtain an array of installed projects off the extensions available.
564. *
565. * A project is considered to be 'enabled' when any of its extensions is
566. * enabled.
567. * If any extension lacks project information and it is found that the
568. * extension was obtained from drupal.org's cvs or git repositories, a new
569. * 'vcs' attribute will be set on the extension. Example:
570. * $extensions[name]->vcs = 'cvs';
571. *
572. * @param array $extensions
573. * Array of extensions as returned by drush_get_extensions().
574. *
575. * @return
576. * Array of installed projects with info of version, status and provided
577. * extensions.
578. */
579. function drush_get_projects(&$extensions = NULL) {
580. if (!isset($extensions)) {
581. $extensions = drush_get_extensions();
582. }
583. $projects = array(
584. 'drupal' => array(
585. 'label' => 'Drupal',
586. 'version' => drush_drupal_version(),
587. 'type' => 'core',
588. 'extensions' => array(),
589. )
590. );
591. if (isset($extensions['system']->info['datestamp'])) {
592. $projects['drupal']['datestamp'] = $extensions['system']->info['datestamp'];
593. }
594. foreach ($extensions as $extension) {
595. $extension_name = drush_extension_get_name($extension);
596. $extension_path = drush_extension_get_path($extension);
597. // Obtain the project name. It is not available in this cases:
598. // 1. the extension is part of drupal core.
599. // 2. the project was checked out from CVS/git and cvs_deploy/git_deploy
600. // is not installed.
601. // 3. it is not a project hosted in drupal.org.
602. if (empty($extension->info['project'])) {
603. if (isset($extension->info['version']) && ($extension->info['version'] == drush_drupal_version())) {
604. $project = 'drupal';
605. }
606. else {
607. if (is_dir($extension_path . '/CVS') && (!drush_module_exists('cvs_deploy'))) {
608. $extension->vcs = 'cvs';
609. drush_log(dt('Extension !extension is fetched from cvs. Ignoring.', array('!extension' => $extension_name)), LogLevel::DEBUG);
610. }
611. elseif (is_dir($extension_path . '/.git') && (!drush_module_exists('git_deploy'))) {
612. $extension->vcs = 'git';
613. drush_log(dt('Extension !extension is fetched from git. Ignoring.', array('!extension' => $extension_name)), LogLevel::DEBUG);
614. }
615. continue;
616. }
617. }
618. else {
619. $project = $extension->info['project'];
620. }
621. // Create/update the project in $projects with the project data.
622. if (!isset($projects[$project])) {
623. $projects[$project] = array(
624. // If there's an extension with matching name, pick its label.
625. // Otherwise use just the project name. We avoid $extension->label
626. // for the project label because the extension's label may have
627. // no direct relation with the project name. For example,
628. // "Text (text)" or "Number (number)" for the CCK project.
629. 'label' => isset($extensions[$project]) ? $extensions[$project]->label : $project,
630. 'type' => drush_extension_get_type($extension),
631. 'version' => $extension->info['version'],
632. 'status' => $extension->status,
633. 'extensions' => array(),
634. );
635. if (isset($extension->info['datestamp'])) {
636. $projects[$project]['datestamp'] = $extension->info['datestamp'];
637. }
638. if (isset($extension->info['project status url'])) {
639. $projects[$project]['status url'] = $extension->info['project status url'];
640. }
641. }
642. else {
643. // If any of the extensions is enabled, consider the project is enabled.
644. if ($extension->status != 0) {
645. $projects[$project]['status'] = $extension->status;
646. }
647. }
648. $projects[$project]['extensions'][] = drush_extension_get_name($extension);
649. }
650. // Obtain each project's path and try to provide a better label for ones
651. // with machine name.
652. $reserved = array('modules', 'sites', 'themes');
653. foreach ($projects as $name => $project) {
654. if ($name == 'drupal') {
655. continue;
656. }
657. // If this project has no human label, see if we can find
658. // one "main" extension whose label we could use.
659. if ($project['label'] == $name) {
660. // If there is only one extension, construct a label based on
661. // the extension name.
662. if (count($project['extensions']) == 1) {
663. $extension = $extensions[$project['extensions'][0]];
664. $projects[$name]['label'] = $extension->info['name'] . ' (' . $name . ')';
665. }
666. else {
667. // Make a list of all of the extensions in this project
668. // that do not depend on any other extension in this
669. // project.
670. $candidates = array();
671. foreach ($project['extensions'] as $e) {
672. $has_project_dependency = FALSE;
673. if (isset($extensions[$e]->info['dependencies']) && is_array($extensions[$e]->info['dependencies'])) {
674. foreach ($extensions[$e]->info['dependencies'] as $dependent) {
675. if (in_array($dependent, $project['extensions'])) {
676. $has_project_dependency = TRUE;
677. }
678. }
679. }
680. if ($has_project_dependency === FALSE) {
681. $candidates[] = $extensions[$e]->info['name'];
682. }
683. }
684. // If only one of the modules is a candidate, use its name in the label
685. if (count($candidates) == 1) {
686. $projects[$name]['label'] = reset($candidates) . ' (' . $name . ')';
687. }
688. }
689. }
690. drush_log(dt('Obtaining !project project path.', array('!project' => $name)), LogLevel::DEBUG);
691. $path = _drush_pm_find_common_path($project['type'], $project['extensions']);
692. // Prevent from setting a reserved path. For example it may happen in a case
693. // where a module and a theme are declared as part of a same project.
694. // There's a special case, a project called "sites", this is the reason for
695. // the second condition here.
696. if ($path == '.' || (in_array(basename($path), $reserved) && !in_array($name, $reserved))) {
697. drush_log(dt('Error while trying to find the common path for enabled extensions of project !project. Extensions are: !extensions.', array('!project' => $name, '!extensions' => implode(', ', $project['extensions']))), LogLevel::ERROR);
698. }
699. else {
700. $projects[$name]['path'] = $path;
701. }
702. }
703. return $projects;
704. }
705. /**
706. * Helper function to find the common path for a list of extensions in the aim to obtain the project name.
707. *
708. * @param $project_type
709. * Type of project we're trying to find. Valid values: module, theme.
710. * @param $extensions
711. * Array of extension names.
712. */
713. function _drush_pm_find_common_path($project_type, $extensions) {
714. // Select the first path as the candidate to be the common prefix.
715. $extension = array_pop($extensions);
716. while (!($path = drupal_get_path($project_type, $extension))) {
717. drush_log(dt('Unknown path for !extension !type.', array('!extension' => $extension, '!type' => $project_type)), LogLevel::WARNING);
718. $extension = array_pop($extensions);
719. }
720. // If there's only one extension we are done. Otherwise, we need to find
721. // the common prefix for all of them.
722. if (count($extensions) > 0) {
723. // Iterate over the other projects.
724. while($extension = array_pop($extensions)) {
725. $path2 = drupal_get_path($project_type, $extension);
726. if (!$path2) {
727. drush_log(dt('Unknown path for !extension !type.', array('!extension' => $extension, '!type' => $project_type)), LogLevel::DEBUG);
728. continue;
729. }
730. // Option 1: same path.
731. if ($path == $path2) {
732. continue;
733. }
734. // Option 2: $path is a prefix of $path2.
735. if (strpos($path2, $path) === 0) {
736. continue;
737. }
738. // Option 3: $path2 is a prefix of $path.
739. if (strpos($path, $path2) === 0) {
740. $path = $path2;
741. continue;
742. }
743. // Option 4: no one is a prefix of the other. Find the common
744. // prefix by iteratively strip the rigthtmost piece of $path.
745. // We will iterate until a prefix is found or path = '.', that on the
746. // other hand is a condition theorically impossible to reach.
747. do {
748. $path = dirname($path);
749. if (strpos($path2, $path) === 0) {
750. break;
751. }
752. } while ($path != '.');
753. }
754. }
755. return $path;
756. }
757. /**
758. * @} End of "defgroup extensions".
759. */
760. /**
761. * Command callback. Show a list of extensions with type and status.
762. */
763. function drush_pm_list() {
764. //--package
765. $package_filter = array();
766. $package = strtolower(drush_get_option('package'));
767. if (!empty($package)) {
768. $package_filter = explode(',', $package);
769. }
770. if (!empty($package_filter) && (count($package_filter) == 1)) {
771. drush_hide_output_fields('package');
772. }
773. //--type
774. $all_types = array('module', 'theme');
775. $type_filter = strtolower(drush_get_option('type'));
776. if (!empty($type_filter)) {
777. $type_filter = explode(',', $type_filter);
778. }
779. else {
780. $type_filter = $all_types;
781. }
782. if (count($type_filter) == 1) {
783. drush_hide_output_fields('type');
784. }
785. foreach ($type_filter as $type) {
786. if (!in_array($type, $all_types)) { //TODO: this kind of check can be implemented drush-wide
787. return drush_set_error('DRUSH_PM_INVALID_PROJECT_TYPE', dt('!type is not a valid project type.', array('!type' => $type)));
788. }
789. }
790. //--status
791. $all_status = array('enabled', 'disabled', 'not installed');
792. $status_filter = strtolower(drush_get_option('status'));
793. if (!empty($status_filter)) {
794. $status_filter = explode(',', $status_filter);
795. }
796. else {
797. $status_filter = $all_status;
798. }
799. if (count($status_filter) == 1) {
800. drush_hide_output_fields('status');
801. }
802. foreach ($status_filter as $status) {
803. if (!in_array($status, $all_status)) { //TODO: this kind of check can be implemented drush-wide
804. return drush_set_error('DRUSH_PM_INVALID_PROJECT_STATUS', dt('!status is not a valid project status.', array('!status' => $status)));
805. }
806. }
807. $result = array();
808. $extension_info = drush_get_extensions(FALSE);
809. uasort($extension_info, '_drush_pm_sort_extensions');
810. $major_version = drush_drupal_major_version();
811. foreach ($extension_info as $key => $extension) {
812. if (!in_array(drush_extension_get_type($extension), $type_filter)) {
813. unset($extension_info[$key]);
814. continue;
815. }
816. $status = drush_get_extension_status($extension);
817. if (!in_array($status, $status_filter)) {
818. unset($extension_info[$key]);
819. continue;
820. }
821. // Filter out core if --no-core specified.
822. if (drush_get_option('no-core', FALSE)) {
823. if ((($major_version >= 8) && ($extension->origin == 'core')) || (($major_version <= 7) && (strpos($extension->info['package'], 'Core') === 0))) {
824. unset($extension_info[$key]);
825. continue;
826. }
827. }
828. // Filter out non-core if --core specified.
829. if (drush_get_option('core', FALSE)) {
830. if ((($major_version >= 8) && ($extension->origin != 'core')) || (($major_version <= 7) && (strpos($extension->info['package'], 'Core') !== 0))) {
831. unset($extension_info[$key]);
832. continue;
833. }
834. }
835. // Filter by package.
836. if (!empty($package_filter)) {
837. if (!in_array(strtolower($extension->info['package']), $package_filter)) {
838. unset($extension_info[$key]);
839. continue;
840. }
841. }
842. $row['package'] = $extension->info['package'];
843. $row['name'] = $extension->label;
844. $row['type'] = ucfirst(drush_extension_get_type($extension));
845. $row['status'] = ucfirst($status);
846. // Suppress notice when version is not present.
847. $row['version'] = @$extension->info['version'];
848. $result[$key] = $row;
849. unset($row);
850. }
851. // In Drush-5, we used to return $extension_info here.
852. return $result;
853. }
854. /**
855. * Helper function for pm-enable.
856. */
857. function drush_pm_enable_find_project_from_extension($extension) {
858. $result = drush_pm_lookup_extension_in_cache($extension);
859. if (!isset($result)) {
860. $release_info = drush_get_engine('release_info');
861. // If we can find info on a project that has the same name
862. // as the requested extension, then we'll call that a match.
863. $request = pm_parse_request($extension);
864. if ($release_info->checkProject($request)) {
865. $result = $extension;
866. }
867. }
868. return $result;
869. }
870. /**
871. * Validate callback. Determine the modules and themes that the user would like enabled.
872. */
873. function drush_pm_enable_validate() {
874. $args = pm_parse_arguments(func_get_args());
875. $extension_info = drush_get_extensions();
876. $recheck = TRUE;
877. while ($recheck) {
878. $recheck = FALSE;
879. // Classify $args in themes, modules or unknown.
880. $modules = array();
881. $themes = array();
882. $download = array();
883. drush_pm_classify_extensions($args, $modules, $themes, $extension_info);
884. $extensions = array_merge($modules, $themes);
885. $unknown = array_diff($args, $extensions);
886. // If there're unknown extensions, try and download projects
887. // with matching names.
888. if (!empty($unknown)) {
889. $found = array();
890. foreach ($unknown as $key => $name) {
891. drush_log(dt('!extension was not found.', array('!extension' => $name)), LogLevel::WARNING);
892. $project = drush_pm_enable_find_project_from_extension($name);
893. if (!empty($project)) {
894. $found[] = $project;
895. }
896. }
897. if (!empty($found)) {
898. drush_log(dt("The following projects provide some or all of the extensions not found:\n@list", array('@list' => implode("\n", $found))), LogLevel::OK);
899. if (drush_get_option('resolve-dependencies')) {
900. drush_log(dt("They are being downloaded."), LogLevel::OK);
901. }
902. if ((drush_get_option('resolve-dependencies')) || (drush_confirm("Would you like to download them?"))) {
903. $download = $found;
904. }
905. }
906. }
907. // Discard already enabled and incompatible extensions.
908. foreach ($extensions as $name) {
909. if ($extension_info[$name]->status) {
910. drush_log(dt('!extension is already enabled.', array('!extension' => $name)), LogLevel::OK);
911. }
912. // Check if the extension is compatible with Drupal core and php version.
913. if ($component = drush_extension_check_incompatibility($extension_info[$name])) {
914. drush_set_error('DRUSH_PM_ENABLE_MODULE_INCOMPATIBLE', dt('!name is incompatible with the !component version.', array('!name' => $name, '!component' => $component)));
915. if (drush_extension_get_type($extension_info[$name]) == 'module') {
916. unset($modules[$name]);
917. }
918. else {
919. unset($themes[$name]);
920. }
921. }
922. }
923. if (!empty($modules)) {
924. // Check module dependencies.
925. $dependencies = drush_check_module_dependencies($modules, $extension_info);
926. $unmet_dependencies = array();
927. foreach ($dependencies as $module => $info) {
928. if (!empty($info['unmet-dependencies'])) {
929. foreach ($info['unmet-dependencies'] as $unmet_module) {
930. $unmet_project = drush_pm_enable_find_project_from_extension($unmet_module);
931. if (!empty($unmet_project)) {
932. $unmet_dependencies[$module][$unmet_project] = $unmet_project;
933. }
934. }
935. }
936. }
937. if (!empty($unmet_dependencies)) {
938. $msgs = array();
939. $unmet_project_list = array();
940. foreach ($unmet_dependencies as $module => $unmet_projects) {
941. $unmet_project_list = array_merge($unmet_project_list, $unmet_projects);
942. $msgs[] = dt("!module requires !unmet-projects", array('!unmet-projects' => implode(', ', $unmet_projects), '!module' => $module));
943. }
944. drush_log(dt("The following projects have unmet dependencies:\n!list", array('!list' => implode("\n", $msgs))), LogLevel::OK);
945. if (drush_get_option('resolve-dependencies')) {
946. drush_log(dt("They are being downloaded."), LogLevel::OK);
947. }
948. if (drush_get_option('resolve-dependencies') || drush_confirm(dt("Would you like to download them?"))) {
949. $download = array_merge($download, $unmet_project_list);
950. }
951. }
952. }
953. if (!empty($download)) {
954. // Disable DRUSH_AFFIRMATIVE context temporarily.
955. $drush_affirmative = drush_get_context('DRUSH_AFFIRMATIVE');
956. drush_set_context('DRUSH_AFFIRMATIVE', FALSE);
957. // Invoke a new process to download dependencies.
958. $result = drush_invoke_process('@self', 'pm-download', $download, array(), array('interactive' => TRUE));
959. // Restore DRUSH_AFFIRMATIVE context.
960. drush_set_context('DRUSH_AFFIRMATIVE', $drush_affirmative);
961. // Refresh module cache after downloading the new modules.
962. $extension_info = drush_get_extensions();
963. $recheck = TRUE;
964. }
965. }
966. if (!empty($modules)) {
967. $all_dependencies = array();
968. $dependencies_ok = TRUE;
969. foreach ($dependencies as $key => $info) {
970. if (isset($info['error'])) {
971. unset($modules[$key]);
972. $dependencies_ok = drush_set_error($info['error']['code'], $info['error']['message']);
973. }
974. elseif (!empty($info['dependencies'])) {
975. // Make sure we have an assoc array.
976. $assoc = array_combine($info['dependencies'], $info['dependencies']);
977. $all_dependencies = array_merge($all_dependencies, $assoc);
978. }
979. }
980. if (!$dependencies_ok) {
981. return FALSE;
982. }
983. $modules = array_diff(array_merge($modules, $all_dependencies), drush_module_list());
984. // Discard modules which doesn't meet requirements.
985. require_once DRUSH_DRUPAL_CORE . '/includes/install.inc';
986. foreach ($modules as $key => $module) {
987. // Check to see if the module can be installed/enabled (hook_requirements).
988. // See @system_modules_submit
989. if (!drupal_check_module($module)) {
990. unset($modules[$key]);
991. drush_set_error('DRUSH_PM_ENABLE_MODULE_UNMEET_REQUIREMENTS', dt('Module !module doesn\'t meet the requirements to be enabled.', array('!module' => $module)));
992. _drush_log_drupal_messages();
993. return FALSE;
994. }
995. }
996. }
997. $searchpath = array();
998. foreach (array_merge($modules, $themes) as $name) {
999. $searchpath[] = drush_extension_get_path($extension_info[$name]);
1000. }
1001. // Add all modules that passed validation to the drush
1002. // list of commandfiles (if they have any). This
1003. // will allow these newly-enabled modules to participate
1004. // in the pre-pm_enable and post-pm_enable hooks.
1005. if (!empty($searchpath)) {
1006. _drush_add_commandfiles($searchpath);
1007. }
1008. drush_set_context('PM_ENABLE_EXTENSION_INFO', $extension_info);
1009. drush_set_context('PM_ENABLE_MODULES', $modules);
1010. drush_set_context('PM_ENABLE_THEMES', $themes);
1011. return TRUE;
1012. }
1013. /**
1014. * Command callback. Enable one or more extensions from downloaded projects.
1015. * Note that the modules and themes to be enabled were evaluated during the
1016. * pm-enable validate hook, above.
1017. */
1018. function drush_pm_enable() {
1019. // Get the data built during the validate phase
1020. $extension_info = drush_get_context('PM_ENABLE_EXTENSION_INFO');
1021. $modules = drush_get_context('PM_ENABLE_MODULES');
1022. $themes = drush_get_context('PM_ENABLE_THEMES');
1023. // Inform the user which extensions will finally be enabled.
1024. $extensions = array_merge($modules, $themes);
1025. if (empty($extensions)) {
1026. return drush_log(dt('There were no extensions that could be enabled.'), LogLevel::OK);
1027. }
1028. else {
1029. drush_print(dt('The following extensions will be enabled: !extensions', array('!extensions' => implode(', ', $extensions))));
1030. if(!drush_confirm(dt('Do you really want to continue?'))) {
1031. return drush_user_abort();
1032. }
1033. }
1034. // Enable themes.
1035. if (!empty($themes)) {
1036. drush_theme_enable($themes);
1037. }
1038. // Enable modules and pass dependency validation in form submit.
1039. if (!empty($modules)) {
1040. drush_include_engine('drupal', 'environment');
1041. drush_module_enable($modules);
1042. }
1043. // Inform the user of final status.
1044. $result_extensions = drush_get_named_extensions_list($extensions);
1045. $problem_extensions = array();
1046. $role = drush_role_get_class();
1047. foreach ($result_extensions as $name => $extension) {
1048. if ($extension->status) {
1049. drush_log(dt('!extension was enabled successfully.', array('!extension' => $name)), LogLevel::OK);
1050. $perms = $role->getModulePerms($name);
1051. if (!empty($perms)) {
1052. drush_print(dt('!extension defines the following permissions: !perms', array('!extension' => $name, '!perms' => implode(', ', $perms))));
1053. }
1054. }
1055. else {
1056. $problem_extensions[] = $name;
1057. }
1058. }
1059. if (!empty($problem_extensions)) {
1060. return drush_set_error('DRUSH_PM_ENABLE_EXTENSION_ISSUE', dt('There was a problem enabling !extension.', array('!extension' => implode(',', $problem_extensions))));
1061. }
1062. // Return the list of extensions enabled
1063. return $extensions;
1064. }
1065. /**
1066. * Command callback. Disable one or more extensions.
1067. */
1068. function drush_pm_disable() {
1069. $args = pm_parse_arguments(func_get_args());
1070. drush_include_engine('drupal', 'pm');
1071. _drush_pm_disable($args);
1072. }
1073. /**
1074. * Add extensions that match extension_name*.
1075. *
1076. * A helper function for commands that take a space separated list of extension
1077. * names. It will identify extensions that have been passed in with a
1078. * trailing * and add all matching extensions to the array that is returned.
1079. *
1080. * @param $extensions
1081. * An array of extensions, by reference.
1082. * @param $extension_info
1083. * Optional. An array of extension info as returned by drush_get_extensions().
1084. */
1085. function _drush_pm_expand_extensions(&$extensions, $extension_info = array()) {
1086. if (empty($extension_info)) {
1087. $extension_info = drush_get_extensions();
1088. }
1089. foreach ($extensions as $key => $extension) {
1090. if (($wildcard = rtrim($extension, '*')) !== $extension) {
1091. foreach (array_keys($extension_info) as $extension_name) {
1092. if (substr($extension_name, 0, strlen($wildcard)) == $wildcard) {
1093. $extensions[] = $extension_name;
1094. }
1095. }
1096. unset($extensions[$key]);
1097. continue;
1098. }
1099. }
1100. }
1101. /**
1102. * Command callback. Uninstall one or more modules.
1103. */
1104. function drush_pm_uninstall() {
1105. $args = pm_parse_arguments(func_get_args());
1106. drush_include_engine('drupal', 'pm');
1107. _drush_pm_uninstall($args);
1108. }
1109. /**
1110. * Command callback. Show available releases for given project(s).
1111. */
1112. function drush_pm_releases() {
1113. $release_info = drush_get_engine('release_info');
1114. // Obtain requests.
1115. $requests = pm_parse_arguments(func_get_args(), FALSE);
1116. if (!$requests) {
1117. $requests = array('drupal');
1118. }
1119. // Get installed projects.
1120. if (drush_get_context('DRUSH_BOOTSTRAP_PHASE') >= DRUSH_BOOTSTRAP_DRUPAL_FULL) {
1121. $projects = drush_get_projects();
1122. }
1123. else {
1124. $projects = array();
1125. }
1126. // Select the filter to apply based on cli options.
1127. if (drush_get_option('dev', FALSE)) {
1128. $filter = 'dev';
1129. }
1130. elseif (drush_get_option('all', FALSE)) {
1131. $filter = 'all';
1132. }
1133. else {
1134. $filter = '';
1135. }
1136. $status_url = drush_get_option('source');
1137. $rows = array();
1138. $releases_info = array();
1139. foreach ($requests as $request) {
1140. $request = pm_parse_request($request, $status_url, $projects);
1141. $project_name = $request['name'];
1142. $project_release_info = $release_info->get($request);
1143. if ($project_release_info) {
1144. $version = isset($projects[$project_name]) ? $projects[$project_name]['version'] : NULL;
1145. $releases = $project_release_info->filterReleases($filter, $version);
1146. foreach ($releases as $key => $release) {
1147. $rows[$key] = array(
1148. 'project' => $project_name,
1149. 'version' => $release['version'],
1150. 'date' => gmdate('Y-M-d', $release['date']),
1151. 'status' => implode(', ', $release['release_status']),
1152. );
1153. }
1154. // Collect parsed update service info to set backend result.
1155. $releases_info[$project_name] = $project_release_info->getInfo();
1156. }
1157. }
1158. if (empty($rows)) {
1159. return drush_log(dt('No valid projects given.'), LogLevel::OK);
1160. }
1161. #TODO# Ask @weitzman about 5a233660560a0
1162. drush_backend_set_result($releases_info);
1163. return $rows;
1164. }
1165. /**
1166. * Command callback. Show release notes for given project(s).
1167. */
1168. function drush_pm_releasenotes() {
1169. $release_info = drush_get_engine('release_info');
1170. // Obtain requests.
1171. if (!$requests = pm_parse_arguments(func_get_args(), FALSE)) {
1172. $requests = array('drupal');
1173. }
1174. // Get installed projects.
1175. if (drush_get_context('DRUSH_BOOTSTRAP_PHASE') >= DRUSH_BOOTSTRAP_DRUPAL_FULL) {
1176. $projects = drush_get_projects();
1177. }
1178. else {
1179. $projects = array();
1180. }
1181. $status_url = drush_get_option('source');
1182. $output = '';
1183. foreach($requests as $request) {
1184. $request = pm_parse_request($request, $status_url, $projects);
1185. $project_release_info = $release_info->get($request);
1186. if ($project_release_info) {
1187. $version = empty($request['version']) ? NULL : $request['version'];
1188. $output .= $project_release_info->getReleaseNotes($version);
1189. }
1190. }
1191. return $output;
1192. }
1193. /**
1194. * Command callback. Refresh update status information.
1195. */
1196. function drush_pm_refresh() {
1197. $update_status = drush_get_engine('update_status');
1198. drush_print(dt("Refreshing update status information ..."));
1199. $update_status->refresh();
1200. drush_print(dt("Done."));
1201. }
1202. /**
1203. * Command callback. Execute pm-update.
1204. */
1205. function drush_pm_update() {
1206. // Call pm-updatecode. updatedb will be called in the post-update process.
1207. $args = pm_parse_arguments(func_get_args(), FALSE);
1208. drush_set_option('check-updatedb', FALSE);
1209. return drush_invoke('pm-updatecode', $args);
1210. }
1211. /**
1212. * Post-command callback.
1213. * Execute updatedb command after an updatecode - user requested `update`.
1214. */
1215. function drush_pm_post_pm_update() {
1216. // Use drush_invoke_process to start a subprocess. Cleaner that way.
1217. if (drush_get_context('DRUSH_PM_UPDATED', FALSE) !== FALSE) {
1218. drush_invoke_process('@self', 'updatedb');
1219. }
1220. }
1221. /**
1222. * Validate callback for updatecode command. Abort if 'backup' directory exists.
1223. */
1224. function drush_pm_updatecode_validate() {
1225. $path = drush_get_context('DRUSH_DRUPAL_ROOT') . '/backup';
1226. if (is_dir($path) && (realpath(drush_get_option('backup-dir', FALSE)) != $path)) {
1227. return drush_set_error('', dt('Backup directory !path found. It\'s a security risk to store backups inside the Drupal tree. Drush now uses by default ~/drush-backups. You need to move !path out of the Drupal tree to proceed. Note: if you know what you\'re doing you can explicitly set --backup-dir to !path and continue.', array('!path' => $path)));
1228. }
1229. }
1230. /**
1231. * Post-command callback for updatecode.
1232. *
1233. * Execute pm-updatecode-postupdate in a backend process to not conflict with
1234. * old code already in memory.
1235. */
1236. function drush_pm_post_pm_updatecode() {
1237. // Skip if updatecode was invoked by pm-update.
1238. // This way we avoid being noisy, as updatedb is to be executed.
1239. if (drush_get_option('check-updatedb', TRUE)) {
1240. if (drush_get_context('DRUSH_PM_UPDATED', FALSE)) {
1241. drush_invoke_process('@self', 'pm-updatecode-postupdate');
1242. }
1243. }
1244. }
1245. /**
1246. * Command callback. Execute updatecode-postupdate.
1247. */
1248. function drush_pm_updatecode_postupdate() {
1249. // Clear the cache, since some projects could have moved around.
1250. drush_drupal_cache_clear_all();
1251. // Notify of pending database updates.
1252. // Make sure the installation API is available
1253. require_once DRUSH_DRUPAL_CORE . '/includes/install.inc';
1254. // Load all .install files.
1255. drupal_load_updates();
1256. // @see system_requirements().
1257. foreach (drush_module_list() as $module) {
1258. $updates = drupal_get_schema_versions($module);
1259. if ($updates !== FALSE) {
1260. $default = drupal_get_installed_schema_version($module);
1261. if (max($updates) > $default) {
1262. drush_log(dt("You have pending database updates. Run `drush updatedb` or visit update.php in your browser."), LogLevel::WARNING);
1263. break;
1264. }
1265. }
1266. }
1267. }
1268. /**
1269. * Sanitize user provided arguments to several pm commands.
1270. *
1271. * Return an array of arguments off a space and/or comma separated values.
1272. */
1273. function pm_parse_arguments($args, $dashes_to_underscores = TRUE) {
1274. $arguments = _convert_csv_to_array($args);
1275. foreach ($arguments as $key => $argument) {
1276. $argument = ($dashes_to_underscores) ? strtr($argument, '-', '_') : $argument;
1277. }
1278. return $arguments;
1279. }
1280. /**
1281. * Decompound a version string and returns major, minor, patch and extra parts.
1282. *
1283. * @see _pm_parse_version_compound()
1284. * @see pm_parse_version()
1285. *
1286. * @param string $version
1287. * A version string like X.Y-Z, X.Y.Z-W or a subset.
1288. *
1289. * @return array
1290. * Array with major, patch and extra keys.
1291. */
1292. function _pm_parse_version_decompound($version) {
1293. $pattern = '/^(\d+)(?:.(\d+))?(?:\.(x|\d+))?(?:-([a-z0-9\.-]*))?(?:\+(\d+)-dev)?$/';
1294. $matches = array();
1295. preg_match($pattern, $version, $matches);
1296. $parts = array(
1297. 'major' => '',
1298. 'minor' => '',
1299. 'patch' => '',
1300. 'extra' => '',
1301. 'offset' => '',
1302. );
1303. if (isset($matches[1])) {
1304. $parts['major'] = $matches[1];
1305. if (isset($matches[2])) {
1306. if (isset($matches[3]) && $matches[3] != '') {
1307. $parts['minor'] = $matches[2];
1308. $parts['patch'] = $matches[3];
1309. }
1310. else {
1311. $parts['patch'] = $matches[2];
1312. }
1313. }
1314. if (!empty($matches[4])) {
1315. $parts['extra'] = $matches[4];
1316. }
1317. if (!empty($matches[5])) {
1318. $parts['offset'] = $matches[5];
1319. }
1320. }
1321. return $parts;
1322. }
1323. /**
1324. * Build a version string from an array of major, minor and extra parts.
1325. *
1326. * @see _pm_parse_version_decompound()
1327. * @see pm_parse_version()
1328. *
1329. * @param array $parts
1330. * Array of parts.
1331. *
1332. * @return string
1333. * A Version string.
1334. */
1335. function _pm_parse_version_compound($parts) {
1336. $project_version = '';
1337. if ($parts['patch'] != '') {
1338. $project_version = $parts['major'];
1339. if ($parts['minor'] != '') {
1340. $project_version = $project_version . '.' . $parts['minor'];
1341. }
1342. if ($parts['patch'] == 'x') {
1343. $project_version = $project_version . '.x-dev';
1344. }
1345. else {
1346. $project_version = $project_version . '.' . $parts['patch'];
1347. if ($parts['extra'] != '') {
1348. $project_version = $project_version . '-' . $parts['extra'];
1349. }
1350. }
1351. if ($parts['offset'] != '') {
1352. $project_version = $project_version . '+' . $parts['offset'] . '-dev';
1353. }
1354. }
1355. return $project_version;
1356. }
1357. /**
1358. * Parses a version string and returns its components.
1359. *
1360. * It parses both core and contrib version strings.
1361. *
1362. * Core (semantic versioning):
1363. * - 8.0.0-beta3+252-dev
1364. * - 8.0.0-beta2
1365. * - 8.0.x-dev
1366. * - 8.1.x
1367. * - 8.0.1
1368. * - 8
1369. *
1370. * Core (classic drupal scheme):
1371. * - 7.x-dev
1372. * - 7.x
1373. * - 7.33
1374. * - 7.34+3-dev
1375. * - 7
1376. *
1377. * Contrib:
1378. * - 7.x-1.0-beta1+30-dev
1379. * - 7.x-1.0-beta1
1380. * - 7.x-1.0+30-dev
1381. * - 7.x-1.0
1382. * - 1.0-beta1
1383. * - 1.0
1384. * - 7.x-1.x
1385. * - 7.x-1.x-dev
1386. * - 1.x
1387. *
1388. * @see pm_parse_request()
1389. *
1390. * @param string $version
1391. * A core or project version string.
1392. *
1393. * @param bool $is_core
1394. * Whether this is a core version or a project version.
1395. *
1396. * @return array
1397. * Version string in parts.
1398. * Example for a contrib version (ex: 7.x-3.2-beta1):
1399. * - version : Fully qualified version string.
1400. * - drupal_version : Core compatibility version (ex: 7.x).
1401. * - version_major : Major version (ex: 3).
1402. * - version_minor : Minor version. Not applicable. Always empty.
1403. * - version_patch : Patch version (ex: 2).
1404. * - version_extra : Extra version (ex: beta1).
1405. * - project_version : Project specific part of the version (ex: 3.2-beta1).
1406. *
1407. * Example for a core version (ex: 8.1.2-beta2 or 7.0-beta2):
1408. * - version : Fully qualified version string.
1409. * - drupal_version : Core compatibility version (ex: 8.x).
1410. * - version_major : Major version (ex: 8).
1411. * - version_minor : Minor version (ex: 1). Empty if not a semver.
1412. * - version_patch : Patch version (ex: 2).
1413. * - version_extra : Extra version (ex: beta2).
1414. * - project_version : Same as 'version'.
1415. */
1416. function pm_parse_version($version, $is_core = FALSE) {
1417. $core_parts = _pm_parse_version_decompound($version);
1418. // If no major version, we have no version at all. Pick a default.
1419. $drupal_version_default = drush_drupal_major_version();
1420. if ($core_parts['major'] == '') {
1421. $core_parts['major'] = ($drupal_version_default) ? $drupal_version_default : drush_get_option('default-major', 7);
1422. }
1423. if ($is_core) {
1424. $project_version = _pm_parse_version_compound($core_parts);
1425. $version_parts = array(
1426. 'version' => $project_version,
1427. 'drupal_version' => $core_parts['major'] . '.x',
1428. 'project_version' => $project_version,
1429. 'version_major' => $core_parts['major'],
1430. 'version_minor' => $core_parts['minor'],
1431. 'version_patch' => ($core_parts['patch'] == 'x') ? '' : $core_parts['patch'],
1432. 'version_extra' => ($core_parts['patch'] == 'x') ? 'dev' : $core_parts['extra'],
1433. 'version_offset' => $core_parts['offset'],
1434. );
1435. }
1436. else {
1437. // If something as 7.x-1.0-beta1, the project specific version is
1438. // in $version['extra'] and we need to parse it.
1439. if (strpbrk($core_parts['extra'], '.-')) {
1440. $nocore_parts = _pm_parse_version_decompound($core_parts['extra']);
1441. $nocore_parts['offset'] = $core_parts['offset'];
1442. $project_version = _pm_parse_version_compound($nocore_parts);
1443. $version_parts = array(
1444. 'version' => $core_parts['major'] . '.x-' . $project_version,
1445. 'drupal_version' => $core_parts['major'] . '.x',
1446. 'project_version' => $project_version,
1447. 'version_major' => $nocore_parts['major'],
1448. 'version_minor' => $core_parts['minor'],
1449. 'version_patch' => ($nocore_parts['patch'] == 'x') ? '' : $nocore_parts['patch'],
1450. 'version_extra' => ($nocore_parts['patch'] == 'x') ? 'dev' : $nocore_parts['extra'],
1451. 'version_offset' => $core_parts['offset'],
1452. );
1453. }
1454. // At this point we have half a version and must decide if this is a drupal major or a project.
1455. else {
1456. // If working on a bootstrapped site, core_parts has the project version.
1457. if ($drupal_version_default) {
1458. $project_version = _pm_parse_version_compound($core_parts);
1459. $version = ($project_version) ? $drupal_version_default . '.x-' . $project_version : '';
1460. $version_parts = array(
1461. 'version' => $version,
1462. 'drupal_version' => $drupal_version_default . '.x',
1463. 'project_version' => $project_version,
1464. 'version_major' => $core_parts['major'],
1465. 'version_minor' => $core_parts['minor'],
1466. 'version_patch' => ($core_parts['patch'] == 'x') ? '' : $core_parts['patch'],
1467. 'version_extra' => ($core_parts['patch'] == 'x') ? 'dev' : $core_parts['extra'],
1468. 'version_offset' => $core_parts['offset'],
1469. );
1470. }
1471. // Not working on a bootstrapped site, core_parts is core version.
1472. else {
1473. $version_parts = array(
1474. 'version' => '',
1475. 'drupal_version' => $core_parts['major'] . '.x',
1476. 'project_version' => '',
1477. 'version_major' => '',
1478. 'version_minor' => '',
1479. 'version_patch' => '',
1480. 'version_extra' => '',
1481. 'version_offset' => '',
1482. );
1483. }
1484. }
1485. }
1486. return $version_parts;
1487. }
1488. /**
1489. * Parse out the project name and version and return as a structured array.
1490. *
1491. * @see pm_parse_version()
1492. *
1493. * @param string $request_string
1494. * Project name with optional version. Examples: 'ctools-7.x-1.0-beta1'
1495. *
1496. * @return array
1497. * Array with all parts of the request info.
1498. */
1499. function pm_parse_request($request_string, $status_url = NULL, &$projects = array()) {
1500. $request = array();
1501. // Split $request_string in project name and version. Note that hyphens (-)
1502. // are permitted in project names (ex: field-conditional-state).
1503. // We use a regex to split the string. The pattern used matches a string
1504. // starting with hyphen, followed by one or more numbers, any of the valid
1505. // symbols in version strings (.x-) and a catchall for the rest of the
1506. // version string.
1507. $parts = preg_split('/-(?:([\d+\.x].*))?$/', $request_string, NULL, PREG_SPLIT_DELIM_CAPTURE);
1508. if (count($parts) == 1) {
1509. // No version in the request string.
1510. $project = $request_string;
1511. $version = '';
1512. }
1513. else {
1514. $project = $parts[0];
1515. $version = $parts[1];
1516. }
1517. $is_core = ($project == 'drupal');
1518. $request = array(
1519. 'name' => $project,
1520. ) + pm_parse_version($version, $is_core);
1521. // Set the status url if provided or available in project's info file.
1522. if ($status_url) {
1523. $request['status url'] = $status_url;
1524. }
1525. elseif (!empty($projects[$project]['status url'])) {
1526. $request['status url'] = $projects[$project]['status url'];
1527. }
1528. return $request;
1529. }
1530. /**
1531. * @defgroup engines Engine types
1532. * @{
1533. */
1534. /**
1535. * Implementation of hook_drush_engine_type_info().
1536. */
1537. function pm_drush_engine_type_info() {
1538. return array(
1539. 'package_handler' => array(
1540. 'option' => 'package-handler',
1541. 'description' => 'Determine how to fetch projects from update service.',
1542. 'default' => 'wget',
1543. 'options' => array(
1544. 'cache' => 'Cache release XML and tarballs or git clones. Git clones use git\'s --reference option. Defaults to 1 for downloads, and 0 for git.',
1545. ),
1546. ),
1547. 'release_info' => array(
1548. 'add-options-to-command' => TRUE,
1549. ),
1550. 'update_status' => array(
1551. 'option' => 'update-backend',
1552. 'description' => 'Determine how to fetch update status information.',
1553. 'default' => 'drush',
1554. 'add-options-to-command' => TRUE,
1555. 'options' => array(
1556. 'update-backend' => 'Backend to obtain available updates.',
1557. 'check-disabled' => 'Check for updates of disabled modules and themes.',
1558. 'security-only' => 'Only update modules that have security updates available.',
1559. ),
1560. 'combine-help' => TRUE,
1561. ),
1562. 'version_control' => array(
1563. 'option' => 'version-control',
1564. 'default' => 'backup',
1565. 'description' => 'Integrate with version control systems.',
1566. ),
1567. );
1568. }
1569. /**
1570. * Implements hook_drush_engine_ENGINE_TYPE().
1571. *
1572. * Package handler engine is used by pm-download and
1573. * pm-updatecode commands to determine how to download/checkout
1574. * new projects and acquire updates to projects.
1575. */
1576. function pm_drush_engine_package_handler() {
1577. return array(
1578. 'wget' => array(
1579. 'description' => 'Download project packages using wget or curl.',
1580. 'options' => array(
1581. 'no-md5' => 'Skip md5 validation of downloads.',
1582. ),
1583. ),
1584. 'git_drupalorg' => array(
1585. 'description' => 'Use git.drupal.org to checkout and update projects.',
1586. 'options' => array(
1587. 'gitusername' => 'Your git username as shown on user/[uid]/edit/git. Typically, this is set this in drushrc.php. Omitting this prevents users from pushing changes back to git.drupal.org.',
1588. 'gitsubmodule' => 'Use git submodules for checking out new projects. Existing git checkouts are unaffected, and will continue to (not) use submodules regardless of this setting.',
1589. 'gitcheckoutparams' => 'Add options to the `git checkout` command.',
1590. 'gitcloneparams' => 'Add options to the `git clone` command.',
1591. 'gitfetchparams' => 'Add options to the `git fetch` command.',
1592. 'gitpullparams' => 'Add options to the `git pull` command.',
1593. 'gitinfofile' => 'Inject version info into each .info file.',
1594. ),
1595. 'sub-options' => array(
1596. 'gitsubmodule' => array(
1597. 'gitsubmoduleaddparams' => 'Add options to the `git submodule add` command.',
1598. ),
1599. ),
1600. ),
1601. );
1602. }
1603. /**
1604. * Implements hook_drush_engine_ENGINE_TYPE().
1605. *
1606. * Release info engine is used by several pm commands to obtain
1607. * releases info from Drupal's update service or external sources.
1608. */
1609. function pm_drush_engine_release_info() {
1610. return array(
1611. 'updatexml' => array(
1612. 'description' => 'Drush release info engine for update.drupal.org and compatible services.',
1613. 'options' => array(
1614. 'source' => 'The base URL which provides project release history in XML. Defaults to http://updates.drupal.org/release-history.',
1615. 'dev' => 'Work with development releases solely.',
1616. ),
1617. 'sub-options' => array(
1618. 'cache' => array(
1619. 'cache-duration-releasexml' => 'Expire duration (in seconds) for release XML. Defaults to 86400 (24 hours).',
1620. ),
1621. 'select' => array(
1622. 'all' => 'Shows all available releases instead of a short list of recent releases.',
1623. ),
1624. ),
1625. 'class' => 'Drush\UpdateService\ReleaseInfo',
1626. ),
1627. );
1628. }
1629. /**
1630. * Implements hook_drush_engine_ENGINE_TYPE().
1631. *
1632. * Update status engine is used to check available updates for
1633. * the projects in a Drupal site.
1634. */
1635. function pm_drush_engine_update_status() {
1636. return array(
1637. 'drupal' => array(
1638. 'description' => 'Check available updates with update.module.',
1639. 'drupal dependencies' => array('update'),
1640. 'class' => 'Drush\UpdateService\StatusInfoDrupal',
1641. ),
1642. 'drush' => array(
1643. 'description' => 'Check available updates without update.module.',
1644. 'class' => 'Drush\UpdateService\StatusInfoDrush',
1645. ),
1646. );
1647. }
1648. /**
1649. * Implements hook_drush_engine_ENGINE_TYPE().
1650. *
1651. * Integration with VCS in order to easily commit your changes to projects.
1652. */
1653. function pm_drush_engine_version_control() {
1654. return array(
1655. 'backup' => array(
1656. 'description' => 'Backup all project files before updates.',
1657. 'options' => array(
1658. 'no-backup' => 'Do not perform backups.',
1659. 'backup-dir' => 'Specify a directory to backup projects into. Defaults to drush-backups within the home directory of the user running the command. It is forbidden to specify a directory inside your drupal root.',
1660. ),
1661. ),
1662. 'bzr' => array(
1663. 'signature' => 'bzr root %s',
1664. 'description' => 'Quickly add/remove/commit your project changes to Bazaar.',
1665. 'options' => array(
1666. 'bzrsync' => 'Automatically add new files to the Bazaar repository and remove deleted files. Caution.',
1667. 'bzrcommit' => 'Automatically commit changes to Bazaar repository. You must also use the --bzrsync option.',
1668. ),
1669. 'sub-options' => array(
1670. 'bzrcommit' => array(
1671. 'bzrmessage' => 'Override default commit message which is: Drush automatic commit. Project <name> <type> Command: <the drush command line used>',
1672. ),
1673. ),
1674. 'examples' => array(
1675. 'drush dl cck --version-control=bzr --bzrsync --bzrcommit' => 'Download the cck project and then add it and commit it to Bazaar.'
1676. ),
1677. ),
1678. 'svn' => array(
1679. 'signature' => 'svn info %s',
1680. 'description' => 'Quickly add/remove/commit your project changes to Subversion.',
1681. 'options' => array(
1682. 'svnsync' => 'Automatically add new files to the SVN repository and remove deleted files. Caution.',
1683. 'svncommit' => 'Automatically commit changes to SVN repository. You must also using the --svnsync option.',
1684. 'svnstatusparams' => "Add options to the 'svn status' command",
1685. 'svnaddparams' => 'Add options to the `svn add` command',
1686. 'svnremoveparams' => 'Add options to the `svn remove` command',
1687. 'svnrevertparams' => 'Add options to the `svn revert` command',
1688. 'svncommitparams' => 'Add options to the `svn commit` command',
1689. ),
1690. 'sub-options' => array(
1691. 'svncommit' => array(
1692. 'svnmessage' => 'Override default commit message which is: Drush automatic commit: <the drush command line used>',
1693. ),
1694. ),
1695. 'examples' => array(
1696. 'drush [command] cck --svncommitparams=\"--username joe\"' => 'Commit changes as the user \'joe\' (Quotes are required).'
1697. ),
1698. ),
1699. );
1700. }
1701. /**
1702. * @} End of "Engine types".
1703. */
1704. /**
1705. * Interface for version control systems.
1706. * We use a simple object layer because we conceivably need more than one
1707. * loaded at a time.
1708. */
1709. interface drush_version_control {
1710. function pre_update(&$project);
1711. function rollback($project);
1712. function post_update($project);
1713. function post_download($project);
1714. static function reserved_files();
1715. }
1716. /**
1717. * A simple factory function that tests for version control systems, in a user
1718. * specified order, and returns the one that appears to be appropriate for a
1719. * specific directory.
1720. */
1721. function drush_pm_include_version_control($directory = '.') {
1722. $engine_info = drush_get_engines('version_control');
1723. $version_controls = drush_get_option('version-control', FALSE);
1724. // If no version control was given, use a list of defaults.
1725. if (!$version_controls) {
1726. // Backup engine is the last option.
1727. $version_controls = array_reverse(array_keys($engine_info['engines']));
1728. }
1729. else {
1730. $version_controls = array($version_controls);
1731. }
1732. // Find the first valid engine in the list, checking signatures if needed.
1733. $engine = FALSE;
1734. while (!$engine && count($version_controls)) {
1735. $version_control = array_shift($version_controls);
1736. if (isset($engine_info['engines'][$version_control])) {
1737. if (!empty($engine_info['engines'][$version_control]['signature'])) {
1738. drush_log(dt('Verifying signature for !vcs version control engine.', array('!vcs' => $version_control)), LogLevel::DEBUG);
1739. if (drush_shell_exec($engine_info['engines'][$version_control]['signature'], $directory)) {
1740. $engine = $version_control;
1741. }
1742. }
1743. else {
1744. $engine = $version_control;
1745. }
1746. }
1747. }
1748. if (!$engine) {
1749. return drush_set_error('DRUSH_PM_NO_VERSION_CONTROL', dt('No valid version control or backup engine found (the --version-control option was set to "!version-control").', array('!version-control' => $version_control)));
1750. }
1751. $instance = drush_include_engine('version_control', $engine);
1752. return $instance;
1753. }
1754. /**
1755. * Update the locked status of all of the candidate projects
1756. * to be updated.
1757. *
1758. * @param array &$projects
1759. * The projects array from pm_updatecode. $project['locked'] will
1760. * be set for every file where a persistent lockfile can be found.
1761. * The 'lock' and 'unlock' operations are processed first.
1762. * @param array $projects_to_lock
1763. * A list of projects to create peristent lock files for
1764. * @param array $projects_to_unlock
1765. * A list of projects to clear the persistent lock on
1766. * @param string $lock_message
1767. * The reason the project is being locked; stored in the lockfile.
1768. *
1769. * @return array
1770. * A list of projects that are locked.
1771. */
1772. function drush_pm_update_lock(&$projects, $projects_to_lock, $projects_to_unlock, $lock_message = NULL) {
1773. $locked_result = array();
1774. // Warn about ambiguous lock / unlock values
1775. if ($projects_to_lock == array('1')) {
1776. $projects_to_lock = array();
1777. drush_log(dt('Ignoring --lock with no value.'), LogLevel::WARNING);
1778. }
1779. if ($projects_to_unlock == array('1')) {
1780. $projects_to_unlock = array();
1781. drush_log(dt('Ignoring --unlock with no value.'), LogLevel::WARNING);
1782. }
1783. // Log if we are going to lock or unlock anything
1784. if (!empty($projects_to_unlock)) {
1785. drush_log(dt('Unlocking !projects', array('!projects' => implode(',', $projects_to_unlock))), LogLevel::OK);
1786. }
1787. if (!empty($projects_to_lock)) {
1788. drush_log(dt('Locking !projects', array('!projects' => implode(',', $projects_to_lock))), LogLevel::OK);
1789. }
1790. $drupal_root = drush_get_context('DRUSH_DRUPAL_ROOT');
1791. foreach ($projects as $name => $project) {
1792. $message = NULL;
1793. if (isset($project['path'])) {
1794. if ($name == 'drupal') {
1795. $lockfile = $drupal_root . '/.drush-lock-update';
1796. }
1797. else {
1798. $lockfile = $drupal_root . '/' . $project['path'] . '/.drush-lock-update';
1799. }
1800. // Remove the lock file if the --unlock option was specified
1801. if (((in_array($name, $projects_to_unlock)) || (in_array('all', $projects_to_unlock))) && (file_exists($lockfile))) {
1802. drush_op('unlink', $lockfile);
1803. }
1804. // Create the lock file if the --lock option was specified
1805. if ((in_array($name, $projects_to_lock)) || (in_array('all', $projects_to_lock))) {
1806. drush_op('file_put_contents', $lockfile, $lock_message != NULL ? $lock_message : "Locked via drush.");
1807. // Note that the project is locked. This will work even if we are simulated,
1808. // or if we get permission denied from the file_put_contents.
1809. // If the lock is -not- simulated or transient, then the lock message will be
1810. // read from the lock file below.
1811. $message = drush_get_context('DRUSH_SIMULATE') ? 'Simulated lock.' : 'Transient lock.';
1812. }
1813. // If the persistent lock file exists, then mark the project as locked.
1814. if (file_exists($lockfile)) {
1815. $message = trim(file_get_contents($lockfile));
1816. }
1817. }
1818. // If there is a message set, then mark the project as locked.
1819. if (isset($message)) {
1820. $projects[$name]['locked'] = !empty($message) ? $message : "Locked.";
1821. $locked_result[$name] = $project;
1822. }
1823. }
1824. return $locked_result;
1825. }
1826. /**
1827. * Returns the path to the extensions cache file.
1828. */
1829. function _drush_pm_extension_cache_file() {
1830. return drush_get_context('DRUSH_PER_USER_CONFIGURATION') . "/drush-extension-cache.inc";
1831. }
1832. /**
1833. * Load the extensions cache.
1834. */
1835. function _drush_pm_get_extension_cache() {
1836. $extension_cache = array();
1837. $cache_file = _drush_pm_extension_cache_file();
1838. if (file_exists($cache_file)) {
1839. include $cache_file;
1840. }
1841. if (!array_key_exists('extension-map', $extension_cache)) {
1842. $extension_cache['extension-map'] = array();
1843. }
1844. return $extension_cache;
1845. }
1846. /**
1847. * Lookup an extension in the extensions cache.
1848. */
1849. function drush_pm_lookup_extension_in_cache($extension) {
1850. $result = NULL;
1851. $extension_cache = _drush_pm_get_extension_cache();
1852. if (!empty($extension_cache) && array_key_exists($extension, $extension_cache)) {
1853. $result = $extension_cache[$extension];
1854. }
1855. return $result;
1856. }
1857. /**
1858. * Persists extensions cache.
1859. *
1860. * #TODO# not implemented.
1861. */
1862. function drush_pm_put_extension_cache($extension_cache) {
1863. }
1864. /**
1865. * Store extensions founds within a project in extensions cache.
1866. */
1867. function drush_pm_cache_project_extensions($project, $found) {
1868. $extension_cache = _drush_pm_get_extension_cache();
1869. foreach($found as $extension) {
1870. // Simple cache does not handle conflicts
1871. // We could keep an array of projects, and count
1872. // how many times each one has been seen...
1873. $extension_cache[$extension] = $project['name'];
1874. }
1875. drush_pm_put_extension_cache($extension_cache);
1876. }
1877. /**
1878. * Print out all extensions (modules/themes/profiles) found in specified project.
1879. *
1880. * Find .info.yml files in the project path and identify modules, themes and
1881. * profiles. It handles two kind of projects: drupal core/profiles and
1882. * modules/themes.
1883. * It does nothing with theme engine projects.
1884. */
1885. function drush_pm_extensions_in_project($project) {
1886. // Mask for drush_scan_directory, to match .info.yml files.
1887. $mask = $project['drupal_version'][0] >= 8 ? '/(.*)\.info\.yml$/' : '/(.*)\.info$/';
1888. // Mask for drush_scan_directory, to avoid tests directories.
1889. $nomask = array('.', '..', 'CVS', 'tests');
1890. // Drupal core and profiles can contain modules, themes and profiles.
1891. if (in_array($project['project_type'], array('core', 'profile'))) {
1892. $found = array('profile' => array(), 'theme' => array(), 'module' => array());
1893. // Find all of the .info files
1894. foreach (drush_scan_directory($project['full_project_path'], $mask, $nomask) as $filename => $info) {
1895. // Extract extension name from filename.
1896. $matches = array();
1897. preg_match($mask, $info->basename, $matches);
1898. $name = $matches[1];
1899. // Find the project type corresponding the .info file.
1900. // (Only drupal >=7.x has .info for .profile)
1901. $base = dirname($filename) . '/' . $name;
1902. if (is_file($base . '.module')) {
1903. $found['module'][] = $name;
1904. }
1905. else if (is_file($base . '.profile')) {
1906. $found['profile'][] = $name;
1907. }
1908. else {
1909. $found['theme'][] = $name;
1910. }
1911. }
1912. // Special case: find profiles for drupal < 7.x (no .info)
1913. if ($project['drupal_version'][0] < 7) {
1914. foreach (drush_find_profiles($project['full_project_path']) as $filename => $info) {
1915. $found['profile'][] = $info->name;
1916. }
1917. }
1918. // Log results.
1919. $msg = "Project !project contains:\n";
1920. $args = array('!project' => $project['name']);
1921. foreach (array_keys($found) as $type) {
1922. if ($count = count($found[$type])) {
1923. $msg .= " - !count_$type !type_$type: !found_$type\n";
1924. $args += array("!count_$type" => $count, "!type_$type" => $type, "!found_$type" => implode(', ', $found[$type]));
1925. if ($count > 1) {
1926. $args["!type_$type"] = $type.'s';
1927. }
1928. }
1929. }
1930. drush_log(dt($msg, $args), LogLevel::SUCCESS);
1931. drush_print_pipe(call_user_func_array('array_merge', array_values($found)));
1932. }
1933. // Modules and themes can only contain other extensions of the same type.
1934. elseif (in_array($project['project_type'], array('module', 'theme'))) {
1935. $found = array();
1936. foreach (drush_scan_directory($project['full_project_path'], $mask, $nomask) as $filename => $info) {
1937. // Extract extension name from filename.
1938. $matches = array();
1939. preg_match($mask, $info->basename, $matches);
1940. $found[] = $matches[1];
1941. }
1942. // If there is only one module / theme in the project, only print out
1943. // the message if is different than the project name.
1944. if (count($found) == 1) {
1945. if ($found[0] != $project['name']) {
1946. $msg = "Project !project contains a !type named !found.";
1947. }
1948. }
1949. // If there are multiple modules or themes in the project, list them all.
1950. else {
1951. $msg = "Project !project contains !count !types: !found.";
1952. }
1953. if (isset($msg)) {
1954. drush_print(dt($msg, array('!project' => $project['name'], '!count' => count($found), '!type' => $project['project_type'], '!found' => implode(', ', $found))));
1955. }
1956. drush_print_pipe($found);
1957. // Cache results.
1958. drush_pm_cache_project_extensions($project, $found);
1959. }
1960. }
1961. /**
1962. * Return an array of empty directories.
1963. *
1964. * Walk a directory and return an array of subdirectories that are empty. Will
1965. * return the given directory if it's empty.
1966. * If a list of items to exclude is provided, subdirectories will be condidered
1967. * empty even if they include any of the items in the list.
1968. *
1969. * @param string $dir
1970. * Path to the directory to work in.
1971. * @param array $exclude
1972. * Array of files or directory to exclude in the check.
1973. *
1974. * @return array
1975. * A list of directory paths that are empty. A directory is deemed to be empty
1976. * if it only contains excluded files or directories.
1977. */
1978. function drush_find_empty_directories($dir, $exclude = array()) {
1979. // Skip files.
1980. if (!is_dir($dir)) {
1981. return array();
1982. }
1983. $to_exclude = array_merge(array('.', '..'), $exclude);
1984. $empty_dirs = array();
1985. $dir_is_empty = TRUE;
1986. foreach (scandir($dir) as $file) {
1987. // Skip excluded directories.
1988. if (in_array($file, $to_exclude)) {
1989. continue;
1990. }
1991. // Recurse into sub-directories to find potentially empty ones.
1992. $subdir = $dir . '/' . $file;
1993. $empty_dirs += drush_find_empty_directories($subdir, $exclude);
1994. // $empty_dir will not contain $subdir, if it is a file or if the
1995. // sub-directory is not empty. $subdir is only set if it is empty.
1996. if (!isset($empty_dirs[$subdir])) {
1997. $dir_is_empty = FALSE;
1998. }
1999. }
2000. if ($dir_is_empty) {
2001. $empty_dirs[$dir] = $dir;
2002. }
2003. return $empty_dirs;
2004. }
2005. /**
2006. * Inject metadata into all .info files for a given project.
2007. *
2008. * @param string $project_dir
2009. * The full path to the root directory of the project to operate on.
2010. * @param string $project_name
2011. * The project machine name (AKA shortname).
2012. * @param string $version
2013. * The version string to inject into the .info file(s).
2014. * @param int $datestamp
2015. * The datestamp of the last commit.
2016. *
2017. * @return boolean
2018. * TRUE on success, FALSE on any failures appending data to .info files.
2019. */
2020. function drush_pm_inject_info_file_metadata($project_dir, $project_name, $version, $datestamp) {
2021. // `drush_drupal_major_version()` cannot be used here because this may be running
2022. // outside of a Drupal context.
2023. $yaml_format = substr($version, 0, 1) >= 8;
2024. $pattern = preg_quote($yaml_format ? '.info.yml' : '.info');
2025. $info_files = drush_scan_directory($project_dir, '/.*' . $pattern . '$/');
2026. if (!empty($info_files)) {
2027. // Construct the string of metadata to append to all the .info files.
2028. if ($yaml_format) {
2029. $info = _drush_pm_generate_info_yaml_metadata($version, $project_name);
2030. }
2031. else {
2032. $info = _drush_pm_generate_info_ini_metadata($version, $project_name, $datestamp);
2033. }
2034. foreach ($info_files as $info_file) {
2035. if (!drush_file_append_data($info_file->filename, $info)) {
2036. return FALSE;
2037. }
2038. }
2039. }
2040. return TRUE;
2041. }
2042. /**
2043. * Generate version information for `.info` files in ini format.
2044. *
2045. * Taken with some modifications from:
2046. * http://drupalcode.org/project/drupalorg.git/blob/refs/heads/6.x-3.x:/drupalorg_project/plugins/release_packager/DrupalorgProjectPackageRelease.class.php#l192
2047. */
2048. function _drush_pm_generate_info_ini_metadata($version, $project_name, $datestamp) {
2049. $matches = array();
2050. $extra = '';
2051. if (preg_match('/^((\d+)\.x)-.*/', $version, $matches) && $matches[2] >= 6) {
2052. $extra .= "\ncore = \"$matches[1]\"";
2053. }
2054. if (!drush_get_option('no-gitprojectinfo', FALSE)) {
2055. $extra = "\nproject = \"$project_name\"";
2056. }
2057. $date = date('Y-m-d', $datestamp);
2058. $info = <<<METADATA
2059. ; Information added by drush on {$date}
2060. version = "{$version}"{$extra}
2061. datestamp = "{$datestamp}"
2062. METADATA;
2063. return $info;
2064. }
2065. /**
2066. * Generate version information for `.info` files in YAML format.
2067. */
2068. function _drush_pm_generate_info_yaml_metadata($version, $project_name) {
2069. $matches = array();
2070. $extra = '';
2071. if (preg_match('/^((\d+)\.x)-.*/', $version, $matches) && $matches[2] >= 6) {
2072. $extra .= "\ncore: '$matches[1]'";
2073. }
2074. if (!drush_get_option('no-gitprojectinfo', FALSE)) {
2075. $extra = "\nproject: '$project_name'";
2076. }
2077. $time = time();
2078. $date = date('Y-m-d');
2079. $info = <<<METADATA
2080. # Information added by drush on {$date}
2081. version: '{$version}'{$extra}
2082. datestamp: {$time}
2083. METADATA;
2084. return $info;
2085. }
|
__label__pos
| 0.958277 |
Quotient space
From formulasearchengine
Jump to navigation Jump to search
{{#invoke:Hatnote|hatnote}}
Illustration of quotient space, S2, obtained by gluing the boundary (in blue) of the disk D2 to a single point.
In topology and related areas of mathematics, a quotient space (also called an identification space) is, intuitively speaking, the result of identifying or "gluing together" certain points of a given space. The points to be identified are specified by an equivalence relation. This is commonly done in order to construct new spaces from given ones.
Definition
Let (X,ฯX) be a topological space, and let ~ be an equivalence relation on X. The quotient space, is defined to be the set of equivalence classes of elements of X:
equipped with the topology where the open sets are defined to be those sets of equivalence classes whose unions are open sets in X:
Equivalently, we can define them to be those sets with an open preimage under the quotient map which sends a point in X to the equivalence class containing it.
The quotient topology is the final topology on the quotient space with respect to the quotient map.
Examples
โข Gluing. Often, topologists talk of gluing points together. If X is a topological space and points are to be "glued", then what is meant is that we are to consider the quotient space obtained from the equivalence relation a ~ b if and only if a = b or a = x, b = y (or a = y, b = x). The two points are henceforth interpreted as one point. Such gluing is commonly referred to as the wedge sum.
โข Consider the unit square I2 = [0,1]ร[0,1] and the equivalence relation ~ generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class. Then I2/~ is homeomorphic to the unit sphere S2.
โข Adjunction space. More generally, suppose X is a space and A is a subspace of X. One can identify all points in A to a single equivalence class and leave points outside of A equivalent only to themselves. The resulting quotient space is denoted X/A. The 2-sphere is then homeomorphic to the unit disc with its boundary identified to a single point: D2/โD2.
โข Consider the set X = R of all real numbers with the ordinary topology, and write x ~ y if and only if xy is an integer. Then the quotient space X/~ is homeomorphic to the unit circle S1 via the homeomorphism which sends the equivalence class of x to exp(2ฯix).
โข A vast generalization of the previous example is the following: Suppose a topological group G acts continuously on a space X. One can form an equivalence relation on X by saying points are equivalent if and only if they lie in the same orbit. The quotient space under this relation is called the orbit space, denoted X/G. In the previous example G = Z acts on R by translation. The orbit space R/Z is homeomorphic to S1.
Warning: The notation R/Z is somewhat ambiguous. If Z is understood to be a group acting on R then the quotient is the circle. However, if Z is thought of as a subspace of R, then the quotient is an infinite bouquet of circles joined at a single point.
Properties
Quotient maps qย : XY are characterized among surjective maps by the following property: if Z is any topological space and fย : YZ is any function, then f is continuous if and only if fq is continuous.
Characteristic property of the quotient topology
The quotient space X/~ together with the quotient map qย : XX/~ is characterized by the following universal property: if gย : XZ is a continuous map such that a ~ b implies g(a) = g(b) for all a and b in X, then there exists a unique continuous map fย : X/~ โ Z such that g = fq. We say that g descends to the quotient.
The continuous maps defined on X/~ are therefore precisely those maps which arise from continuous maps defined on X that respect the equivalence relation (in the sense that they send equivalent elements to the same image). This criterion is constantly used when studying quotient spaces.
Given a continuous surjection fย : XY it is useful to have criteria by which one can determine if f is a quotient map. Two sufficient criteria are that f be open or closed. Note that these conditions are only sufficient, not necessary. It is easy to construct examples of quotient maps that are neither open nor closed.
Compatibility with other topological notions
โข Separation
โข In general, quotient spaces are ill-behaved with respect to separation axioms. The separation properties of X need not be inherited by X/~, and X/~ may have separation properties not shared by X.
โข X/~ is a T1 space if and only if every equivalence class of ~ is closed in X.
โข If the quotient map is open, then X/~ is a Hausdorff space if and only if ~ is a closed subset of the product space XรX.
โข Connectedness
โข Compactness
โข If a space is compact, then so are all its quotient spaces.
โข A quotient space of a locally compact space need not be locally compact.
โข Dimension
See also
Topology
Algebra
References
โข {{#invoke:citation/CS1|citation
|CitationClass=book }}
de:Quotiententopologie es:Topologรญa cociente fr:Topologie quotient ko:๋ชซ๊ณต๊ฐ it:Topologia quoziente he:ืืจืื ืื ื (ืืืคืืืืืื) nl:Quotiรซnttopologie ja:ๅไฝ็ธ็ฉบ้ pl:Topologia ilorazowa pt:Espaรงo topolรณgico quociente ru:ะคะฐะบัะพัะฟัะพัััะฐะฝััะฒะพ uk:ะคะฐะบัะพั-ะฟัะพัััั zh:ๅ็ฉบ้ด
|
__label__pos
| 0.97786 |
the term as a child - this indicates a loop. if ( isset( $ancestors[ $term->term_id ] ) ) { continue; } if ( (int) $term->parent === $term_id ) { if ( $use_id ) { $term_list[] = $term->term_id; } else { $term_list[] = $term; } if ( ! isset( $has_children[ $term->term_id ] ) ) { continue; } $ancestors[ $term->term_id ] = 1; $children = _get_term_children( $term->term_id, $terms, $taxonomy, $ancestors ); if ( $children ) { $term_list = array_merge( $term_list, $children ); } } } return $term_list; } /** * Add count of children to parent count. * * Recalculates term counts by including items from child terms. Assumes all * relevant children are already in the $terms argument. * * @access private * @since 2.3.0 * * @global wpdb $wpdb WordPress database abstraction object. * * @param object[]|WP_Term[] $terms List of term objects (passed by reference). * @param string $taxonomy Term context. */ function _pad_term_counts( &$terms, $taxonomy ) { global $wpdb; // This function only works for hierarchical taxonomies like post categories. if ( ! is_taxonomy_hierarchical( $taxonomy ) ) { return; } $term_hier = _get_term_hierarchy( $taxonomy ); if ( empty( $term_hier ) ) { return; } $term_items = array(); $terms_by_id = array(); $term_ids = array(); foreach ( (array) $terms as $key => $term ) { $terms_by_id[ $term->term_id ] = & $terms[ $key ]; $term_ids[ $term->term_taxonomy_id ] = $term->term_id; } // Get the object and term IDs and stick them in a lookup table. $tax_obj = get_taxonomy( $taxonomy ); $object_types = esc_sql( $tax_obj->object_type ); $results = $wpdb->get_results( "SELECT object_id, term_taxonomy_id FROM $wpdb->term_relationships INNER JOIN $wpdb->posts ON object_id = ID WHERE term_taxonomy_id IN (" . implode( ',', array_keys( $term_ids ) ) . ") AND post_type IN ('" . implode( "', '", $object_types ) . "') AND post_status = 'publish'" ); foreach ( $results as $row ) { $id = $term_ids[ $row->term_taxonomy_id ]; $term_items[ $id ][ $row->object_id ] = isset( $term_items[ $id ][ $row->object_id ] ) ? ++$term_items[ $id ][ $row->object_id ] : 1; } // Touch every ancestor's lookup row for each post in each term. foreach ( $term_ids as $term_id ) { $child = $term_id; $ancestors = array(); while ( ! empty( $terms_by_id[ $child ] ) && $parent = $terms_by_id[ $child ]->parent ) { $ancestors[] = $child; if ( ! empty( $term_items[ $term_id ] ) ) { foreach ( $term_items[ $term_id ] as $item_id => $touches ) { $term_items[ $parent ][ $item_id ] = isset( $term_items[ $parent ][ $item_id ] ) ? ++$term_items[ $parent ][ $item_id ] : 1; } } $child = $parent; if ( in_array( $parent, $ancestors, true ) ) { break; } } } // Transfer the touched cells. foreach ( (array) $term_items as $id => $items ) { if ( isset( $terms_by_id[ $id ] ) ) { $terms_by_id[ $id ]->count = count( $items ); } } } /** * Adds any terms from the given IDs to the cache that do not already exist in cache. * * @since 4.6.0 * @access private * * @global wpdb $wpdb WordPress database abstraction object. * * @param array $term_ids Array of term IDs. * @param bool $update_meta_cache Optional. Whether to update the meta cache. Default true. */ function _prime_term_caches( $term_ids, $update_meta_cache = true ) { global $wpdb; $non_cached_ids = _get_non_cached_ids( $term_ids, 'terms' ); if ( ! empty( $non_cached_ids ) ) { $fresh_terms = $wpdb->get_results( sprintf( "SELECT t.*, tt.* FROM $wpdb->terms AS t INNER JOIN $wpdb->term_taxonomy AS tt ON t.term_id = tt.term_id WHERE t.term_id IN (%s)", implode( ',', array_map( 'intval', $non_cached_ids ) ) ) ); update_term_cache( $fresh_terms, $update_meta_cache ); if ( $update_meta_cache ) { update_termmeta_cache( $non_cached_ids ); } } } // // Default callbacks. // /** * Will update term count based on object types of the current taxonomy. * * Private function for the default callback for post_tag and category * taxonomies. * * @access private * @since 2.3.0 * * @global wpdb $wpdb WordPress database abstraction object. * * @param int[] $terms List of Term taxonomy IDs. * @param WP_Taxonomy $taxonomy Current taxonomy object of terms. */ function _update_post_term_count( $terms, $taxonomy ) { global $wpdb; $object_types = (array) $taxonomy->object_type; foreach ( $object_types as &$object_type ) { list( $object_type ) = explode( ':', $object_type ); } $object_types = array_unique( $object_types ); $check_attachments = array_search( 'attachment', $object_types, true ); if ( false !== $check_attachments ) { unset( $object_types[ $check_attachments ] ); $check_attachments = true; } if ( $object_types ) { $object_types = esc_sql( array_filter( $object_types, 'post_type_exists' ) ); } $post_statuses = array( 'publish' ); /** * Filters the post statuses for updating the term count. * * @since 5.7.0 * * @param string[] $post_statuses List of post statuses to include in the count. Default is 'publish'. * @param WP_Taxonomy $taxonomy Current taxonomy object. */ $post_statuses = esc_sql( apply_filters( 'update_post_term_count_statuses', $post_statuses, $taxonomy ) ); foreach ( (array) $terms as $term ) { $count = 0; // Attachments can be 'inherit' status, we need to base count off the parent's status if so. if ( $check_attachments ) { // phpcs:ignore WordPress.DB.PreparedSQLPlaceholders.QuotedDynamicPlaceholderGeneration $count += (int) $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(*) FROM $wpdb->term_relationships, $wpdb->posts p1 WHERE p1.ID = $wpdb->term_relationships.object_id AND ( post_status IN ('" . implode( "', '", $post_statuses ) . "') OR ( post_status = 'inherit' AND post_parent > 0 AND ( SELECT post_status FROM $wpdb->posts WHERE ID = p1.post_parent ) IN ('" . implode( "', '", $post_statuses ) . "') ) ) AND post_type = 'attachment' AND term_taxonomy_id = %d", $term ) ); } if ( $object_types ) { // phpcs:ignore WordPress.DB.PreparedSQLPlaceholders.QuotedDynamicPlaceholderGeneration $count += (int) $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(*) FROM $wpdb->term_relationships, $wpdb->posts WHERE $wpdb->posts.ID = $wpdb->term_relationships.object_id AND post_status IN ('" . implode( "', '", $post_statuses ) . "') AND post_type IN ('" . implode( "', '", $object_types ) . "') AND term_taxonomy_id = %d", $term ) ); } /** This action is documented in wp-includes/taxonomy.php */ do_action( 'edit_term_taxonomy', $term, $taxonomy->name ); $wpdb->update( $wpdb->term_taxonomy, compact( 'count' ), array( 'term_taxonomy_id' => $term ) ); /** This action is documented in wp-includes/taxonomy.php */ do_action( 'edited_term_taxonomy', $term, $taxonomy->name ); } } /** * Will update term count based on number of objects. * * Default callback for the 'link_category' taxonomy. * * @since 3.3.0 * * @global wpdb $wpdb WordPress database abstraction object. * * @param int[] $terms List of term taxonomy IDs. * @param WP_Taxonomy $taxonomy Current taxonomy object of terms. */ function _update_generic_term_count( $terms, $taxonomy ) { global $wpdb; foreach ( (array) $terms as $term ) { $count = $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(*) FROM $wpdb->term_relationships WHERE term_taxonomy_id = %d", $term ) ); /** This action is documented in wp-includes/taxonomy.php */ do_action( 'edit_term_taxonomy', $term, $taxonomy->name ); $wpdb->update( $wpdb->term_taxonomy, compact( 'count' ), array( 'term_taxonomy_id' => $term ) ); /** This action is documented in wp-includes/taxonomy.php */ do_action( 'edited_term_taxonomy', $term, $taxonomy->name ); } } /** * Create a new term for a term_taxonomy item that currently shares its term * with another term_taxonomy. * * @ignore * @since 4.2.0 * @since 4.3.0 Introduced `$record` parameter. Also, `$term_id` and * `$term_taxonomy_id` can now accept objects. * * @global wpdb $wpdb WordPress database abstraction object. * * @param int|object $term_id ID of the shared term, or the shared term object. * @param int|object $term_taxonomy_id ID of the term_taxonomy item to receive a new term, or the term_taxonomy object * (corresponding to a row from the term_taxonomy table). * @param bool $record Whether to record data about the split term in the options table. The recording * process has the potential to be resource-intensive, so during batch operations * it can be beneficial to skip inline recording and do it just once, after the * batch is processed. Only set this to `false` if you know what you are doing. * Default: true. * @return int|WP_Error When the current term does not need to be split (or cannot be split on the current * database schema), `$term_id` is returned. When the term is successfully split, the * new term_id is returned. A WP_Error is returned for miscellaneous errors. */ function _split_shared_term( $term_id, $term_taxonomy_id, $record = true ) { global $wpdb; if ( is_object( $term_id ) ) { $shared_term = $term_id; $term_id = (int) $shared_term->term_id; } if ( is_object( $term_taxonomy_id ) ) { $term_taxonomy = $term_taxonomy_id; $term_taxonomy_id = (int) $term_taxonomy->term_taxonomy_id; } // If there are no shared term_taxonomy rows, there's nothing to do here. $shared_tt_count = (int) $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(*) FROM $wpdb->term_taxonomy tt WHERE tt.term_id = %d AND tt.term_taxonomy_id != %d", $term_id, $term_taxonomy_id ) ); if ( ! $shared_tt_count ) { return $term_id; } /* * Verify that the term_taxonomy_id passed to the function is actually associated with the term_id. * If there's a mismatch, it may mean that the term is already split. Return the actual term_id from the db. */ $check_term_id = (int) $wpdb->get_var( $wpdb->prepare( "SELECT term_id FROM $wpdb->term_taxonomy WHERE term_taxonomy_id = %d", $term_taxonomy_id ) ); if ( $check_term_id !== $term_id ) { return $check_term_id; } // Pull up data about the currently shared slug, which we'll use to populate the new one. if ( empty( $shared_term ) ) { $shared_term = $wpdb->get_row( $wpdb->prepare( "SELECT t.* FROM $wpdb->terms t WHERE t.term_id = %d", $term_id ) ); } $new_term_data = array( 'name' => $shared_term->name, 'slug' => $shared_term->slug, 'term_group' => $shared_term->term_group, ); if ( false === $wpdb->insert( $wpdb->terms, $new_term_data ) ) { return new WP_Error( 'db_insert_error', __( 'Could not split shared term.' ), $wpdb->last_error ); } $new_term_id = (int) $wpdb->insert_id; // Update the existing term_taxonomy to point to the newly created term. $wpdb->update( $wpdb->term_taxonomy, array( 'term_id' => $new_term_id ), array( 'term_taxonomy_id' => $term_taxonomy_id ) ); // Reassign child terms to the new parent. if ( empty( $term_taxonomy ) ) { $term_taxonomy = $wpdb->get_row( $wpdb->prepare( "SELECT * FROM $wpdb->term_taxonomy WHERE term_taxonomy_id = %d", $term_taxonomy_id ) ); } $children_tt_ids = $wpdb->get_col( $wpdb->prepare( "SELECT term_taxonomy_id FROM $wpdb->term_taxonomy WHERE parent = %d AND taxonomy = %s", $term_id, $term_taxonomy->taxonomy ) ); if ( ! empty( $children_tt_ids ) ) { foreach ( $children_tt_ids as $child_tt_id ) { $wpdb->update( $wpdb->term_taxonomy, array( 'parent' => $new_term_id ), array( 'term_taxonomy_id' => $child_tt_id ) ); clean_term_cache( (int) $child_tt_id, '', false ); } } else { // If the term has no children, we must force its taxonomy cache to be rebuilt separately. clean_term_cache( $new_term_id, $term_taxonomy->taxonomy, false ); } clean_term_cache( $term_id, $term_taxonomy->taxonomy, false ); /* * Taxonomy cache clearing is delayed to avoid race conditions that may occur when * regenerating the taxonomy's hierarchy tree. */ $taxonomies_to_clean = array( $term_taxonomy->taxonomy ); // Clean the cache for term taxonomies formerly shared with the current term. $shared_term_taxonomies = $wpdb->get_col( $wpdb->prepare( "SELECT taxonomy FROM $wpdb->term_taxonomy WHERE term_id = %d", $term_id ) ); $taxonomies_to_clean = array_merge( $taxonomies_to_clean, $shared_term_taxonomies ); foreach ( $taxonomies_to_clean as $taxonomy_to_clean ) { clean_taxonomy_cache( $taxonomy_to_clean ); } // Keep a record of term_ids that have been split, keyed by old term_id. See wp_get_split_term(). if ( $record ) { $split_term_data = get_option( '_split_terms', array() ); if ( ! isset( $split_term_data[ $term_id ] ) ) { $split_term_data[ $term_id ] = array(); } $split_term_data[ $term_id ][ $term_taxonomy->taxonomy ] = $new_term_id; update_option( '_split_terms', $split_term_data ); } // If we've just split the final shared term, set the "finished" flag. $shared_terms_exist = $wpdb->get_results( "SELECT tt.term_id, t.*, count(*) as term_tt_count FROM {$wpdb->term_taxonomy} tt LEFT JOIN {$wpdb->terms} t ON t.term_id = tt.term_id GROUP BY t.term_id HAVING term_tt_count > 1 LIMIT 1" ); if ( ! $shared_terms_exist ) { update_option( 'finished_splitting_shared_terms', true ); } /** * Fires after a previously shared taxonomy term is split into two separate terms. * * @since 4.2.0 * * @param int $term_id ID of the formerly shared term. * @param int $new_term_id ID of the new term created for the $term_taxonomy_id. * @param int $term_taxonomy_id ID for the term_taxonomy row affected by the split. * @param string $taxonomy Taxonomy for the split term. */ do_action( 'split_shared_term', $term_id, $new_term_id, $term_taxonomy_id, $term_taxonomy->taxonomy ); return $new_term_id; } /** * Splits a batch of shared taxonomy terms. * * @since 4.3.0 * * @global wpdb $wpdb WordPress database abstraction object. */ function _wp_batch_split_terms() { global $wpdb; $lock_name = 'term_split.lock'; // Try to lock. $lock_result = $wpdb->query( $wpdb->prepare( "INSERT IGNORE INTO `$wpdb->options` ( `option_name`, `option_value`, `autoload` ) VALUES (%s, %s, 'no') /* LOCK */", $lock_name, time() ) ); if ( ! $lock_result ) { $lock_result = get_option( $lock_name ); // Bail if we were unable to create a lock, or if the existing lock is still valid. if ( ! $lock_result || ( $lock_result > ( time() - HOUR_IN_SECONDS ) ) ) { wp_schedule_single_event( time() + ( 5 * MINUTE_IN_SECONDS ), 'wp_split_shared_term_batch' ); return; } } // Update the lock, as by this point we've definitely got a lock, just need to fire the actions. update_option( $lock_name, time() ); // Get a list of shared terms (those with more than one associated row in term_taxonomy). $shared_terms = $wpdb->get_results( "SELECT tt.term_id, t.*, count(*) as term_tt_count FROM {$wpdb->term_taxonomy} tt LEFT JOIN {$wpdb->terms} t ON t.term_id = tt.term_id GROUP BY t.term_id HAVING term_tt_count > 1 LIMIT 10" ); // No more terms, we're done here. if ( ! $shared_terms ) { update_option( 'finished_splitting_shared_terms', true ); delete_option( $lock_name ); return; } // Shared terms found? We'll need to run this script again. wp_schedule_single_event( time() + ( 2 * MINUTE_IN_SECONDS ), 'wp_split_shared_term_batch' ); // Rekey shared term array for faster lookups. $_shared_terms = array(); foreach ( $shared_terms as $shared_term ) { $term_id = (int) $shared_term->term_id; $_shared_terms[ $term_id ] = $shared_term; } $shared_terms = $_shared_terms; // Get term taxonomy data for all shared terms. $shared_term_ids = implode( ',', array_keys( $shared_terms ) ); $shared_tts = $wpdb->get_results( "SELECT * FROM {$wpdb->term_taxonomy} WHERE `term_id` IN ({$shared_term_ids})" ); // Split term data recording is slow, so we do it just once, outside the loop. $split_term_data = get_option( '_split_terms', array() ); $skipped_first_term = array(); $taxonomies = array(); foreach ( $shared_tts as $shared_tt ) { $term_id = (int) $shared_tt->term_id; // Don't split the first tt belonging to a given term_id. if ( ! isset( $skipped_first_term[ $term_id ] ) ) { $skipped_first_term[ $term_id ] = 1; continue; } if ( ! isset( $split_term_data[ $term_id ] ) ) { $split_term_data[ $term_id ] = array(); } // Keep track of taxonomies whose hierarchies need flushing. if ( ! isset( $taxonomies[ $shared_tt->taxonomy ] ) ) { $taxonomies[ $shared_tt->taxonomy ] = 1; } // Split the term. $split_term_data[ $term_id ][ $shared_tt->taxonomy ] = _split_shared_term( $shared_terms[ $term_id ], $shared_tt, false ); } // Rebuild the cached hierarchy for each affected taxonomy. foreach ( array_keys( $taxonomies ) as $tax ) { delete_option( "{$tax}_children" ); _get_term_hierarchy( $tax ); } update_option( '_split_terms', $split_term_data ); delete_option( $lock_name ); } /** * In order to avoid the _wp_batch_split_terms() job being accidentally removed, * check that it's still scheduled while we haven't finished splitting terms. * * @ignore * @since 4.3.0 */ function _wp_check_for_scheduled_split_terms() { if ( ! get_option( 'finished_splitting_shared_terms' ) && ! wp_next_scheduled( 'wp_split_shared_term_batch' ) ) { wp_schedule_single_event( time() + MINUTE_IN_SECONDS, 'wp_split_shared_term_batch' ); } } /** * Check default categories when a term gets split to see if any of them need to be updated. * * @ignore * @since 4.2.0 * * @param int $term_id ID of the formerly shared term. * @param int $new_term_id ID of the new term created for the $term_taxonomy_id. * @param int $term_taxonomy_id ID for the term_taxonomy row affected by the split. * @param string $taxonomy Taxonomy for the split term. */ function _wp_check_split_default_terms( $term_id, $new_term_id, $term_taxonomy_id, $taxonomy ) { if ( 'category' !== $taxonomy ) { return; } foreach ( array( 'default_category', 'default_link_category', 'default_email_category' ) as $option ) { if ( (int) get_option( $option, -1 ) === $term_id ) { update_option( $option, $new_term_id ); } } } /** * Check menu items when a term gets split to see if any of them need to be updated. * * @ignore * @since 4.2.0 * * @global wpdb $wpdb WordPress database abstraction object. * * @param int $term_id ID of the formerly shared term. * @param int $new_term_id ID of the new term created for the $term_taxonomy_id. * @param int $term_taxonomy_id ID for the term_taxonomy row affected by the split. * @param string $taxonomy Taxonomy for the split term. */ function _wp_check_split_terms_in_menus( $term_id, $new_term_id, $term_taxonomy_id, $taxonomy ) { global $wpdb; $post_ids = $wpdb->get_col( $wpdb->prepare( "SELECT m1.post_id FROM {$wpdb->postmeta} AS m1 INNER JOIN {$wpdb->postmeta} AS m2 ON ( m2.post_id = m1.post_id ) INNER JOIN {$wpdb->postmeta} AS m3 ON ( m3.post_id = m1.post_id ) WHERE ( m1.meta_key = '_menu_item_type' AND m1.meta_value = 'taxonomy' ) AND ( m2.meta_key = '_menu_item_object' AND m2.meta_value = %s ) AND ( m3.meta_key = '_menu_item_object_id' AND m3.meta_value = %d )", $taxonomy, $term_id ) ); if ( $post_ids ) { foreach ( $post_ids as $post_id ) { update_post_meta( $post_id, '_menu_item_object_id', $new_term_id, $term_id ); } } } /** * If the term being split is a nav_menu, change associations. * * @ignore * @since 4.3.0 * * @param int $term_id ID of the formerly shared term. * @param int $new_term_id ID of the new term created for the $term_taxonomy_id. * @param int $term_taxonomy_id ID for the term_taxonomy row affected by the split. * @param string $taxonomy Taxonomy for the split term. */ function _wp_check_split_nav_menu_terms( $term_id, $new_term_id, $term_taxonomy_id, $taxonomy ) { if ( 'nav_menu' !== $taxonomy ) { return; } // Update menu locations. $locations = get_nav_menu_locations(); foreach ( $locations as $location => $menu_id ) { if ( $term_id === $menu_id ) { $locations[ $location ] = $new_term_id; } } set_theme_mod( 'nav_menu_locations', $locations ); } /** * Get data about terms that previously shared a single term_id, but have since been split. * * @since 4.2.0 * * @param int $old_term_id Term ID. This is the old, pre-split term ID. * @return array Array of new term IDs, keyed by taxonomy. */ function wp_get_split_terms( $old_term_id ) { $split_terms = get_option( '_split_terms', array() ); $terms = array(); if ( isset( $split_terms[ $old_term_id ] ) ) { $terms = $split_terms[ $old_term_id ]; } return $terms; } /** * Get the new term ID corresponding to a previously split term. * * @since 4.2.0 * * @param int $old_term_id Term ID. This is the old, pre-split term ID. * @param string $taxonomy Taxonomy that the term belongs to. * @return int|false If a previously split term is found corresponding to the old term_id and taxonomy, * the new term_id will be returned. If no previously split term is found matching * the parameters, returns false. */ function wp_get_split_term( $old_term_id, $taxonomy ) { $split_terms = wp_get_split_terms( $old_term_id ); $term_id = false; if ( isset( $split_terms[ $taxonomy ] ) ) { $term_id = (int) $split_terms[ $taxonomy ]; } return $term_id; } /** * Determine whether a term is shared between multiple taxonomies. * * Shared taxonomy terms began to be split in 4.3, but failed cron tasks or * other delays in upgrade routines may cause shared terms to remain. * * @since 4.4.0 * * @param int $term_id Term ID. * @return bool Returns false if a term is not shared between multiple taxonomies or * if splitting shared taxonomy terms is finished. */ function wp_term_is_shared( $term_id ) { global $wpdb; if ( get_option( 'finished_splitting_shared_terms' ) ) { return false; } $tt_count = $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(*) FROM $wpdb->term_taxonomy WHERE term_id = %d", $term_id ) ); return $tt_count > 1; } /** * Generate a permalink for a taxonomy term archive. * * @since 2.5.0 * * @global WP_Rewrite $wp_rewrite WordPress rewrite component. * * @param WP_Term|int|string $term The term object, ID, or slug whose link will be retrieved. * @param string $taxonomy Optional. Taxonomy. Default empty. * @return string|WP_Error URL of the taxonomy term archive on success, WP_Error if term does not exist. */ function get_term_link( $term, $taxonomy = '' ) { global $wp_rewrite; if ( ! is_object( $term ) ) { if ( is_int( $term ) ) { $term = get_term( $term, $taxonomy ); } else { $term = get_term_by( 'slug', $term, $taxonomy ); } } if ( ! is_object( $term ) ) { $term = new WP_Error( 'invalid_term', __( 'Empty Term.' ) ); } if ( is_wp_error( $term ) ) { return $term; } $taxonomy = $term->taxonomy; $termlink = $wp_rewrite->get_extra_permastruct( $taxonomy ); /** * Filters the permalink structure for a term before token replacement occurs. * * @since 4.9.0 * * @param string $termlink The permalink structure for the term's taxonomy. * @param WP_Term $term The term object. */ $termlink = apply_filters( 'pre_term_link', $termlink, $term ); $slug = $term->slug; $t = get_taxonomy( $taxonomy ); if ( empty( $termlink ) ) { if ( 'category' === $taxonomy ) { $termlink = '?cat=' . $term->term_id; } elseif ( $t->query_var ) { $termlink = "?$t->query_var=$slug"; } else { $termlink = "?taxonomy=$taxonomy&term=$slug"; } $termlink = home_url( $termlink ); } else { if ( ! empty( $t->rewrite['hierarchical'] ) ) { $hierarchical_slugs = array(); $ancestors = get_ancestors( $term->term_id, $taxonomy, 'taxonomy' ); foreach ( (array) $ancestors as $ancestor ) { $ancestor_term = get_term( $ancestor, $taxonomy ); $hierarchical_slugs[] = $ancestor_term->slug; } $hierarchical_slugs = array_reverse( $hierarchical_slugs ); $hierarchical_slugs[] = $slug; $termlink = str_replace( "%$taxonomy%", implode( '/', $hierarchical_slugs ), $termlink ); } else { $termlink = str_replace( "%$taxonomy%", $slug, $termlink ); } $termlink = home_url( user_trailingslashit( $termlink, 'category' ) ); } // Back compat filters. if ( 'post_tag' === $taxonomy ) { /** * Filters the tag link. * * @since 2.3.0 * @since 2.5.0 Deprecated in favor of {@see 'term_link'} filter. * @since 5.4.1 Restored (un-deprecated). * * @param string $termlink Tag link URL. * @param int $term_id Term ID. */ $termlink = apply_filters( 'tag_link', $termlink, $term->term_id ); } elseif ( 'category' === $taxonomy ) { /** * Filters the category link. * * @since 1.5.0 * @since 2.5.0 Deprecated in favor of {@see 'term_link'} filter. * @since 5.4.1 Restored (un-deprecated). * * @param string $termlink Category link URL. * @param int $term_id Term ID. */ $termlink = apply_filters( 'category_link', $termlink, $term->term_id ); } /** * Filters the term link. * * @since 2.5.0 * * @param string $termlink Term link URL. * @param WP_Term $term Term object. * @param string $taxonomy Taxonomy slug. */ return apply_filters( 'term_link', $termlink, $term, $taxonomy ); } /** * Display the taxonomies of a post with available options. * * This function can be used within the loop to display the taxonomies for a * post without specifying the Post ID. You can also use it outside the Loop to * display the taxonomies for a specific post. * * @since 2.5.0 * * @param array $args { * Arguments about which post to use and how to format the output. Shares all of the arguments * supported by get_the_taxonomies(), in addition to the following. * * @type int|WP_Post $post Post ID or object to get taxonomies of. Default current post. * @type string $before Displays before the taxonomies. Default empty string. * @type string $sep Separates each taxonomy. Default is a space. * @type string $after Displays after the taxonomies. Default empty string. * } */ function the_taxonomies( $args = array() ) { $defaults = array( 'post' => 0, 'before' => '', 'sep' => ' ', 'after' => '', ); $parsed_args = wp_parse_args( $args, $defaults ); echo $parsed_args['before'] . implode( $parsed_args['sep'], get_the_taxonomies( $parsed_args['post'], $parsed_args ) ) . $parsed_args['after']; } /** * Retrieve all taxonomies associated with a post. * * This function can be used within the loop. It will also return an array of * the taxonomies with links to the taxonomy and name. * * @since 2.5.0 * * @param int|WP_Post $post Optional. Post ID or WP_Post object. Default is global $post. * @param array $args { * Optional. Arguments about how to format the list of taxonomies. Default empty array. * * @type string $template Template for displaying a taxonomy label and list of terms. * Default is "Label: Terms." * @type string $term_template Template for displaying a single term in the list. Default is the term name * linked to its archive. * } * @return array List of taxonomies. */ function get_the_taxonomies( $post = 0, $args = array() ) { $post = get_post( $post ); $args = wp_parse_args( $args, array( /* translators: %s: Taxonomy label, %l: List of terms formatted as per $term_template. */ 'template' => __( '%s: %l.' ), 'term_template' => '%2$s', ) ); $taxonomies = array(); if ( ! $post ) { return $taxonomies; } foreach ( get_object_taxonomies( $post ) as $taxonomy ) { $t = (array) get_taxonomy( $taxonomy ); if ( empty( $t['label'] ) ) { $t['label'] = $taxonomy; } if ( empty( $t['args'] ) ) { $t['args'] = array(); } if ( empty( $t['template'] ) ) { $t['template'] = $args['template']; } if ( empty( $t['term_template'] ) ) { $t['term_template'] = $args['term_template']; } $terms = get_object_term_cache( $post->ID, $taxonomy ); if ( false === $terms ) { $terms = wp_get_object_terms( $post->ID, $taxonomy, $t['args'] ); } $links = array(); foreach ( $terms as $term ) { $links[] = wp_sprintf( $t['term_template'], esc_attr( get_term_link( $term ) ), $term->name ); } if ( $links ) { $taxonomies[ $taxonomy ] = wp_sprintf( $t['template'], $t['label'], $links, $terms ); } } return $taxonomies; } /** * Retrieve all taxonomy names for the given post. * * @since 2.5.0 * * @param int|WP_Post $post Optional. Post ID or WP_Post object. Default is global $post. * @return string[] An array of all taxonomy names for the given post. */ function get_post_taxonomies( $post = 0 ) { $post = get_post( $post ); return get_object_taxonomies( $post ); } /** * Determine if the given object is associated with any of the given terms. * * The given terms are checked against the object's terms' term_ids, names and slugs. * Terms given as integers will only be checked against the object's terms' term_ids. * If no terms are given, determines if object is associated with any terms in the given taxonomy. * * @since 2.7.0 * * @param int $object_id ID of the object (post ID, link ID, ...). * @param string $taxonomy Single taxonomy name. * @param int|string|int[]|string[] $terms Optional. Term ID, name, slug, or array of such * to check against. Default null. * @return bool|WP_Error WP_Error on input error. */ function is_object_in_term( $object_id, $taxonomy, $terms = null ) { $object_id = (int) $object_id; if ( ! $object_id ) { return new WP_Error( 'invalid_object', __( 'Invalid object ID.' ) ); } $object_terms = get_object_term_cache( $object_id, $taxonomy ); if ( false === $object_terms ) { $object_terms = wp_get_object_terms( $object_id, $taxonomy, array( 'update_term_meta_cache' => false ) ); if ( is_wp_error( $object_terms ) ) { return $object_terms; } wp_cache_set( $object_id, wp_list_pluck( $object_terms, 'term_id' ), "{$taxonomy}_relationships" ); } if ( is_wp_error( $object_terms ) ) { return $object_terms; } if ( empty( $object_terms ) ) { return false; } if ( empty( $terms ) ) { return ( ! empty( $object_terms ) ); } $terms = (array) $terms; $ints = array_filter( $terms, 'is_int' ); if ( $ints ) { $strs = array_diff( $terms, $ints ); } else { $strs =& $terms; } foreach ( $object_terms as $object_term ) { // If term is an int, check against term_ids only. if ( $ints && in_array( $object_term->term_id, $ints, true ) ) { return true; } if ( $strs ) { // Only check numeric strings against term_id, to avoid false matches due to type juggling. $numeric_strs = array_map( 'intval', array_filter( $strs, 'is_numeric' ) ); if ( in_array( $object_term->term_id, $numeric_strs, true ) ) { return true; } if ( in_array( $object_term->name, $strs, true ) ) { return true; } if ( in_array( $object_term->slug, $strs, true ) ) { return true; } } } return false; } /** * Determine if the given object type is associated with the given taxonomy. * * @since 3.0.0 * * @param string $object_type Object type string. * @param string $taxonomy Single taxonomy name. * @return bool True if object is associated with the taxonomy, otherwise false. */ function is_object_in_taxonomy( $object_type, $taxonomy ) { $taxonomies = get_object_taxonomies( $object_type ); if ( empty( $taxonomies ) ) { return false; } return in_array( $taxonomy, $taxonomies, true ); } /** * Get an array of ancestor IDs for a given object. * * @since 3.1.0 * @since 4.1.0 Introduced the `$resource_type` argument. * * @param int $object_id Optional. The ID of the object. Default 0. * @param string $object_type Optional. The type of object for which we'll be retrieving * ancestors. Accepts a post type or a taxonomy name. Default empty. * @param string $resource_type Optional. Type of resource $object_type is. Accepts 'post_type' * or 'taxonomy'. Default empty. * @return int[] An array of IDs of ancestors from lowest to highest in the hierarchy. */ function get_ancestors( $object_id = 0, $object_type = '', $resource_type = '' ) { $object_id = (int) $object_id; $ancestors = array(); if ( empty( $object_id ) ) { /** This filter is documented in wp-includes/taxonomy.php */ return apply_filters( 'get_ancestors', $ancestors, $object_id, $object_type, $resource_type ); } if ( ! $resource_type ) { if ( is_taxonomy_hierarchical( $object_type ) ) { $resource_type = 'taxonomy'; } elseif ( post_type_exists( $object_type ) ) { $resource_type = 'post_type'; } } if ( 'taxonomy' === $resource_type ) { $term = get_term( $object_id, $object_type ); while ( ! is_wp_error( $term ) && ! empty( $term->parent ) && ! in_array( $term->parent, $ancestors, true ) ) { $ancestors[] = (int) $term->parent; $term = get_term( $term->parent, $object_type ); } } elseif ( 'post_type' === $resource_type ) { $ancestors = get_post_ancestors( $object_id ); } /** * Filters a given object's ancestors. * * @since 3.1.0 * @since 4.1.1 Introduced the `$resource_type` parameter. * * @param int[] $ancestors An array of IDs of object ancestors. * @param int $object_id Object ID. * @param string $object_type Type of object. * @param string $resource_type Type of resource $object_type is. */ return apply_filters( 'get_ancestors', $ancestors, $object_id, $object_type, $resource_type ); } /** * Returns the term's parent's term_ID. * * @since 3.1.0 * * @param int $term_id Term ID. * @param string $taxonomy Taxonomy name. * @return int|false Parent term ID on success, false on failure. */ function wp_get_term_taxonomy_parent_id( $term_id, $taxonomy ) { $term = get_term( $term_id, $taxonomy ); if ( ! $term || is_wp_error( $term ) ) { return false; } return (int) $term->parent; } /** * Checks the given subset of the term hierarchy for hierarchy loops. * Prevents loops from forming and breaks those that it finds. * * Attached to the {@see 'wp_update_term_parent'} filter. * * @since 3.1.0 * * @param int $parent `term_id` of the parent for the term we're checking. * @param int $term_id The term we're checking. * @param string $taxonomy The taxonomy of the term we're checking. * @return int The new parent for the term. */ function wp_check_term_hierarchy_for_loops( $parent, $term_id, $taxonomy ) { // Nothing fancy here - bail. if ( ! $parent ) { return 0; } // Can't be its own parent. if ( $parent === $term_id ) { return 0; } // Now look for larger loops. $loop = wp_find_hierarchy_loop( 'wp_get_term_taxonomy_parent_id', $term_id, $parent, array( $taxonomy ) ); if ( ! $loop ) { return $parent; // No loop. } // Setting $parent to the given value causes a loop. if ( isset( $loop[ $term_id ] ) ) { return 0; } // There's a loop, but it doesn't contain $term_id. Break the loop. foreach ( array_keys( $loop ) as $loop_member ) { wp_update_term( $loop_member, $taxonomy, array( 'parent' => 0 ) ); } return $parent; } /** * Determines whether a taxonomy is considered "viewable". * * @since 5.1.0 * * @param string|WP_Taxonomy $taxonomy Taxonomy name or object. * @return bool Whether the taxonomy should be considered viewable. */ function is_taxonomy_viewable( $taxonomy ) { if ( is_scalar( $taxonomy ) ) { $taxonomy = get_taxonomy( $taxonomy ); if ( ! $taxonomy ) { return false; } } return $taxonomy->publicly_queryable; } /** * Sets the last changed time for the 'terms' cache group. * * @since 5.0.0 */ function wp_cache_set_terms_last_changed() { wp_cache_set( 'last_changed', microtime(), 'terms' ); } /** * Aborts calls to term meta if it is not supported. * * @since 5.0.0 * * @param mixed $check Skip-value for whether to proceed term meta function execution. * @return mixed Original value of $check, or false if term meta is not supported. */ function wp_check_term_meta_support_prefilter( $check ) { if ( get_option( 'db_version' ) < 34370 ) { return false; } return $check; }
|
__label__pos
| 0.999987 |
[Tutor] IndexError: list index out of range
Mysore Ventaka Rama Kumar Kumar.Mysore at cargolux.com
Tue Dec 27 10:22:21 EST 2016
Please review and comment/correct my error!
Many thanks in advance.
# Required module
import os
# Function for getting files from a folder
def fetchFiles(pathToFolder, flag, keyWord):
''' fetchFiles() requires three arguments: pathToFolder, flag and
keyWord flag must be 'STARTS_WITH' or 'ENDS_WITH' keyWord is a string to
search the file's name Be careful, the keyWord is case sensitive and must
be exact. Example: fetchFiles('/Documents/Photos/','ENDS_WITH','.jpg')
returns: _pathToFiles and _fileNames '''
_pathToFiles = []
_fileNames = []
for dirPath, dirNames, fileNames in os.walk(pathToFolder):
if flag == 'ENDS_WITH':
selectedPath = [os.path.join(dirPath,item) for item in fileNames if item.endswith(keyWord)]
_pathToFiles.extend(selectedPath)
selectedFile = [item for item in fileNames if item.endswith(keyWord)]
_fileNames.extend(selectedFile)
elif flag == 'STARTS_WITH':
selectedPath = [os.path.join(dirPath,item) for item in fileNames if item.startswith(keyWord)]
_pathToFiles.extend(selectedPath)
selectedFile = [item for item in fileNames if item.startswith(keyWord)]
_fileNames.extend(selectedFile)
else:
print fetchFiles.__doc__
break
# Try to remove empty entries if none of the required files are in directory
try:
_pathToFiles.remove('')
_imageFiles.remove('')
except ValueError:
pass
# Warn if nothing was found in the given path
#if selectedFile == []:
#print 'No files with given parameters were found in:\n', dirPath, '\n'
#print len(_fileNames), 'files were found is searched folder(s)'
#return _pathToFiles, _fileNames
#print _pathToFiles, _fileNames
#print _pathToFiles [1]
print _pathToFiles
s = ' '.join(_pathToFiles [1]) #convert tuple element 1 to string
#print len (s)
s_no_white = s.replace(" ", "") #remove white spaces
#print s_no_white[4:7] #extract rgeistration
s1 = s_no_white[4:7] #extract rgeistration
print 'str1 is:', str1
str2 = 'FLDAT'
print 'str2 is: ', str2
str3 = str1.__add__(str2)
print 'str 3 is: ',str3
str4 = 'H//'
print 'str4 is: ', str4
str5 = str4.__add__(str3)
print 'str5 is: ', str5
print _fileNames
print 'Number of files found: ', len(_fileNames)
fetchFiles('str5','ENDS_WITH','.FLD')
####code end##
###Output##
File "D:/university_2/my_code_2/good_2016_12_01/get_registration.py", line 52, in fetchFiles
s = ' '.join(_pathToFiles [1]) #convert tuple element 1 to string
IndexError: list index out of range
##output end##
More information about the Tutor mailing list
|
__label__pos
| 0.682492 |
source: git/Tst/Short/tasks_s.tst @ 0d845f7
jengelh-datetimespielwiese
Last change on this file since 0d845f7 was 0d845f7, checked in by Andreas Steenpass <steenpass@โฆ>, 9 years ago
chg: add/update tests for resources.lib, tasks.lib, and parallel.lib (cherry picked from commit 2d84bafbea754424246bd96a138b4e45710589df) Signed-off-by: Andreas Steenpass <[email protected]>
โข Property mode set to 100644
File size: 1.6 KB
Lineย
1LIB "tst.lib";
2tst_init();
3
4LIB "tasks.lib";
5
6ring R = 0, (x,y), dp;
7ideal I = x9y2+x10, x2y7-y8;
8task t = createTask("std", list(I));
9t;
10killTask(t);
11
12t = "std", list(I);
13startTasks(t);
14t;
15killTask(t);
16t;
17getState(t);
18
19task t1 = "std", list(I);
20startTasks(t1);
21waitAllTasks(t1);
22task t2 = copyTask(t1);
23killTask(t1);
24t2;
25getResult(t2);
26killTask(t2);
27
28t1 = "std", list(I);
29t2 = "std", list(I);
30compareTasks(t1, t2);
31startTasks(t1);
32waitAllTasks(t1);
33t1 == t2;
34killTask(t1);
35killTask(t2);
36ideal J = x;
37task t3 = "std", list(I);
38task t4 = "std", list(J);
39t3 == t4;
40killTask(t3);
41killTask(t4);
42
43printTask(t);
44t = "std", list(I);
45t;
46startTasks(t);
47waitAllTasks(t);
48t;
49killTask(t);
50
51t1 = "std", list(I);
52t2 = "slimgb", list(I);
53startTasks(t1, t2);
54waitAllTasks(t1, t2);
55getResult(t1);
56getResult(t2);
57killTask(t1);
58killTask(t2);
59
60t = "std", list(I);
61startTasks(t);
62stopTask(t);
63t;
64killTask(t);
65
66t1 = "std", list(I);
67t2 = "slimgb", list(I);
68startTasks(t1, t2);
69waitTasks(list(t1, t2), 2);
70getResult(t1);
71getResult(t2);
72killTask(t1);
73killTask(t2);
74
75t = "std", list(I);
76startTasks(t);
77waitAllTasks(t);
78pollTask(t);
79t;
80getResult(t);
81killTask(t);
82
83t = "std", list(I);
84getCommand(t);
85killTask(t);
86
87t = "std", list(I);
88getArguments(t);
89killTask(t);
90
91t = "std", list(I);
92startTasks(t);
93waitAllTasks(t);
94getResult(t);
95killTask(t);
96
97t = "std", list(I);
98getState(t);
99startTasks(t);
100getState(t);
101waitAllTasks(t);
102getState(t);
103killTask(t);
104getState(t);
105
106int sem = semaphore(1);
107system("semaphore", "acquire", sem);
108system("semaphore", "try_acquire", sem);
109system("semaphore", "release", sem);
110system("semaphore", "try_acquire", sem);
111
112tst_status(1);$
Note: See TracBrowser for help on using the repository browser.
|
__label__pos
| 0.568733 |
react-three-fiber
Reducing bundle-size
Threejs is quite heavy and tree-shaking doesn't yet yield the results you would hope for atm. But you can always create your own export-file and alias "three" towards it. This way you can reduce it to 80-50kb, or perhaps less, depending on what you need.
three-exports.js
// Only export the things that are actually needed, cut out everything else
export { WebGLRenderer } from "three/src/renderers/WebGLRenderer.js";
export { ShaderLib } from "three/src/renderers/shaders/ShaderLib.js";
export { UniformsLib } from "three/src/renderers/shaders/UniformsLib.js";
export { UniformsUtils } from "three/src/renderers/shaders/UniformsUtils.js";
export { ShaderChunk } from "three/src/renderers/shaders/ShaderChunk.js";
export { Scene } from "three/src/scenes/Scene.js";
export { Mesh } from "three/src/objects/Mesh.js";
export { LineSegments } from "three/src/objects/LineSegments.js";
export { Line } from "three/src/objects/Line.js";
export { CubeTexture } from "three/src/textures/CubeTexture.js";
export { CanvasTexture } from "three/src/textures/CanvasTexture.js";
export { Group } from "three/src/objects/Group.js";
export {
SphereGeometry,
SphereBufferGeometry,
} from "three/src/geometries/SphereGeometry.js";
export {
PlaneGeometry,
PlaneBufferGeometry,
} from "three/src/geometries/PlaneGeometry.js";
export {
BoxGeometry,
BoxBufferGeometry,
} from "three/src/geometries/BoxGeometry.js";
export {
ConeGeometry,
ConeBufferGeometry,
} from "three/src/geometries/ConeGeometry.js";
export {
CylinderGeometry,
CylinderBufferGeometry,
} from "three/src/geometries/CylinderGeometry.js";
export {
CircleGeometry,
CircleBufferGeometry,
} from "three/src/geometries/CircleGeometry.js";
export {
RingGeometry,
RingBufferGeometry,
} from "three/src/geometries/RingGeometry.js";
export { EdgesGeometry } from "three/src/geometries/EdgesGeometry.js";
export { Material } from "three/src/materials/Material.js";
export { MeshPhongMaterial } from "three/src/materials/MeshPhongMaterial.js";
export { MeshPhysicalMaterial } from "three/src/materials/MeshPhysicalMaterial.js";
export { MeshBasicMaterial } from "three/src/materials/MeshBasicMaterial.js";
export { LineDashedMaterial } from "three/src/materials/LineDashedMaterial.js";
export { SpriteMaterial } from "three/src/materials/SpriteMaterial.js";
export { LineBasicMaterial } from "three/src/materials/LineBasicMaterial.js";
export { TextureLoader } from "three/src/loaders/TextureLoader.js";
export { Texture } from "three/src/textures/Texture.js";
export { Sprite } from "three/src/objects/Sprite.js";
export { SpotLightShadow } from "three/src/lights/SpotLightShadow.js";
export { SpotLight } from "three/src/lights/SpotLight.js";
export { SpotLightHelper } from "three/src/helpers/SpotLightHelper.js";
export { CameraHelper } from "three/src/helpers/CameraHelper.js";
export { PointLight } from "three/src/lights/PointLight.js";
export { DirectionalLight } from "three/src/lights/DirectionalLight.js";
export { AmbientLight } from "three/src/lights/AmbientLight.js";
export { LightShadow } from "three/src/lights/LightShadow.js";
export { PerspectiveCamera } from "three/src/cameras/PerspectiveCamera.js";
export { OrthographicCamera } from "three/src/cameras/OrthographicCamera.js";
export { BufferGeometry } from "three/src/core/BufferGeometry.js";
export { Geometry } from "three/src/core/Geometry.js";
export * from "three/src/core/BufferAttribute.js";
export { Face3 } from "three/src/core/Face3.js";
export { Object3D } from "three/src/core/Object3D.js";
export { Raycaster } from "three/src/core/Raycaster.js";
export { Triangle } from "three/src/math/Triangle.js";
export { _Math as Math } from "three/src/math/Math.js";
export { Spherical } from "three/src/math/Spherical.js";
export { Cylindrical } from "three/src/math/Cylindrical.js";
export { Plane } from "three/src/math/Plane.js";
export { Frustum } from "three/src/math/Frustum.js";
export { Sphere } from "three/src/math/Sphere.js";
export { Ray } from "three/src/math/Ray.js";
export { Matrix4 } from "three/src/math/Matrix4.js";
export { Matrix3 } from "three/src/math/Matrix3.js";
export { Box3 } from "three/src/math/Box3.js";
export { Box2 } from "three/src/math/Box2.js";
export { Line3 } from "three/src/math/Line3.js";
export { Euler } from "three/src/math/Euler.js";
export { Vector4 } from "three/src/math/Vector4.js";
export { Vector3 } from "three/src/math/Vector3.js";
export { Vector2 } from "three/src/math/Vector2.js";
export { Quaternion } from "three/src/math/Quaternion.js";
export { Color } from "three/src/math/Color.js";
export { GridHelper } from "three/src/helpers/GridHelper.js";
export { AxesHelper } from "three/src/helpers/AxesHelper.js";
export * from "three/src/constants.js";
export { InstancedBufferGeometry } from "three/src/core/InstancedBufferGeometry.js";
export { InstancedInterleavedBuffer } from "three/src/core/InstancedInterleavedBuffer.js";
export { InterleavedBufferAttribute } from "three/src/core/InterleavedBufferAttribute.js";
export { ShaderMaterial } from "three/src/materials/ShaderMaterial.js";
export { WireframeGeometry } from "three/src/geometries/WireframeGeometry.js";
webpack.config.js
module.exports = {
resolve: {
alias: {
// Forward all three imports to our exports file
three$: path.resolve("./build/three-exports.js"),
},
},
};
Edit this page on GitHub
|
__label__pos
| 0.999457 |
Friday, May 31, 2013
Database library to handle multiple masters and multiple slaves
In a large scale mysql deployment there could be multiple masters and multiple slaves. Masters are generally in circular replication. And are used for running all inserts, updates and deletes. Slaves are used to run selects.
When you are dealing with multiple mysql instances running in a large scale environment, it is important to take care of lags between masters and slaves. To handle such scenarios, the code should be capable of firing query on a server dynamically. Which means that for each query, I as a developer should have the flexibility to decide which server the query should go.
A list of existing scenarios :
1. All registrations / username generation process should happen on a single master. If you generate usernames at both masters, there may be scenarios where, due to lag between mysql masters, the user is not reflected. And in such a case, the user may register again and land on another master. Creating the same username again and breaking the circular replication. So all registrations and check for "username exists" should happen on a single master.
2. For all other Insert, Update and Delete operations, the user should be stuck to a single master. Why ? Assume there is a lag of around 30 minutes between the masters and slaves. The user inserts a record and immediately wants to see what record has been inserted. If we fetch the record from another master or slave, the record will not be available, because it has not yet been replicated. To take care of this scenario, whenever a record is inserted the immediate select has to be from the same server.
3. For all other selects, the query can be fired on any of the slaves. For example, the user logs into the site and sees his own profile. We show him his profile using one of the slave servers. This can be cached as well. The point here is that for data which has not been updated recently - the query can be fired on any of the slaves.
The following piece of code/library handles most of the scenarios. Please feel free to suggest modifications or improvements.
/**
* Created by : Jayant Kumar
* Description : php database library to handle multiple masters & multiple slaves
**/
class DatabaseList // jk : Base class
{
public $db = array();
public function setDb($db)
{
$this->db = $db;
}
public function getDb()
{
return $this->db;
}
}
class SDatabaseList extends DatabaseList // jk : Slave mysql servers
{
function __construct()
{
$this->db[0] = array('ip'=>'10.20.1.11', 'u'=>'user11', 'p'=>'pass11', 'db'=>'database1');
$this->db[1] = array('ip'=>'10.20.1.12', 'u'=>'user12', 'p'=>'pass12', 'db'=>'database1');
$this->db[2] = array('ip'=>'10.20.1.13', 'u'=>'user13', 'p'=>'pass13', 'db'=>'database1');
//print_r($db);
}
}
class MDatabaseList extends DatabaseList // jk : Master mysql servers
{
function __construct()
{
$this->db[0] = array('ip'=>'10.20.1.1', 'u'=>'user1', 'p'=>'pass1', 'db'=>'database1');
$this->db[1] = array('ip'=>'10.20.1.2', 'u'=>'user2', 'p'=>'pass2', 'db'=>'database2');
//print_r($db);
}
}
class MemcacheList extends DatabaseList // jk : memcache servers
{
function __construct()
{
$this->db[0] = array('ip'=>'localhost', 'port'=>11211);
}
}
Interface DatabaseSelectionStrategy ย // jk : Database interface
{
public function getCurrentDb();
}
class StickyDbSelectionStrategy implements DatabaseSelectionStrategy // jk : sticky db . For update / delete / insert
{
private $dblist;
private $uid;
private $sessionDb;
private $sessionTimeout = 3600;
function __construct(DatabaseList $dblist)
{
$this->dblist = $dblist;
}
public function setUserId($uid)
{
$this->uid = $uid;
}
public function setSessionDb($sessionDb)
{
$this->sessionDb = $sessionDb->db;
}
private function getDbForUser() // jk : get db for this user. If not found - assign him random master db.
{
$memc = new Memcache;
foreach ($this->sessionDb as $key => $value) {
$memc->addServer($value['ip'], $value['port']);
}
$dbIp = $memc->get($this->uid);
if($dbIp == null)
{
$masterlist = new MDatabaseList();
$randomdb = new RandomDbSelectionStrategy($masterlist);
$mdb = $randomdb->getCurrentDb();
$dbIp = $mdb['ip'];
$memc->set($this->uid, $dbIp, false, $this->sessionTimeout);
}
return $dbIp;
}
public function getCurrentDb()
{
$dbIp = $this->getDbForUser();
foreach ($this->dblist->db as $key => $value)ย
{
if($value['ip'] == $dbIp)
return $value;
}
}
}
class RandomDbSelectionStrategy implements DatabaseSelectionStrategy // jk : select random db from list
{
private $dblist;
function __construct(DatabaseList $dblist)
{
//print_r($dblist);
$this->dblist = $dblist;
}
public function getCurrentDb()
{
//print_r($this->dblist);
$cnt = sizeof($this->dblist->db);
$rnd = rand(0,$cnt-1);
$current = $this->dblist->db[$rnd];
return $current;
}
}
class SingleDbSelectionStrategy implements DatabaseSelectionStrategy // jk : select one master db - to generate unique keys
{
private $dblist;
function __construct(DatabaseList $dblist)
{
$this->dblist = $dblist;
}
public function getCurrentDb()
{
//print_r($this->dblist);
return $this->dblist->db[0];
}
}
Interface Database
{
public function getIp();
public function getDbConnection();
}
class DatabaseFactory implements Database // cmt : database factory
{
private $db;
public function getIp()
{
return $this->db['ip'];
}
public function getDbConnection($type = 'slave', $uid = 0)
{
$dbStrategy;
switch($type)
{
case 'slave':
$dblist = new SDatabaseList();
//print_r($dblist);
$dbStrategy = new RandomDbSelectionStrategy($dblist);
break;
case 'master':
$dblist = new MDatabaseList();
//print_r($dblist);
$dbStrategy = new StickyDbSelectionStrategy($dblist);
$dbStrategy->setSessionDb(new MemcacheList());
$dbStrategy->setUserId($uid);
break;
case 'unique':
$dblist = new MDatabaseList();
//print_r($dblist);
$dbStrategy = new SingleDbSelectionStrategy($dblist);
break;
}
$this->db = $dbStrategy->getCurrentDb();
print_r($this->db);
// return mysql_connect($this->db['ip'], $this->db['u'], $this->db['p'], $this->db['db']);
}
}
// tst : ย test this out...
$factory = new DatabaseFactory();
echo 'Slave : '; $factory->getDbConnection('slave');
echo 'Slave2 : '; $factory->getDbConnection('slave');
echo 'Unique : '; $factory->getDbConnection('unique');
echo 'New Master 100: '; $factory->getDbConnection('master',100);
echo 'New Master 101: '; $factory->getDbConnection('master',101);
echo 'New Master 102: '; $factory->getDbConnection('master',102);
echo 'old Master 100: '; $factory->getDbConnection('master',100);
echo 'old Master 102: '; $factory->getDbConnection('master',102);
?>
Wednesday, May 01, 2013
how to create a 3 node riak cluster ?
A very brief intro about riak - http://basho.com/riak/. Riak is a distributed database written in erlang. Each node in a riak cluster contains the complete independent copy of the riak package. A riak cluster does not have any "master". Data is distributed across nodes using consistent hashing - which ensures that the data is evenly distributed and a new node can be added with minimum reshuffling. Each object has in a riak cluster has multiple copies distributed acorss multiple nodes. Hence failure of a node does not necessarily result in data loss.
To setup a 3 node riak cluster, we first setup 3 machines with riak installed. To install riak on ubuntu machines all that needs to be done is download the "deb" package and do a dpkg -i "riak_x.x.x_amd64.deb". The version I used here was 1.3.1. 3 machines with ips 10.20.220.2, 10.20.220.3 & 10.20.220.4 were setup
To setup riak on 1st node, there are 3 config changes that need to be done
1. replace http ip: in /etc/riak/app.config replace ip in {http, [ {"127.0.0.1", 8098 } ]} with 10.20.220.2
2. replace pb_ip: in /etc/riak/app.config replace ip in {pb_ip,ย ย "127.0.0.1" } with 10.20.220.2
3. change the name of the fiak machine to match your ip: in /etc/riak/vm.args change name to [email protected]
If you had started the riak cluster earlier - before making the ip related changes, you will need to clear the ring and backend db. Do the following.
rm -rf /var/lib/riak/bitcask/
rm -rf /var/lib/riak/ring/
To start the first node, run riak start.
To prepare the second node, replace the ips with 10.20.220.3. Once done do a "riak start". To join this node to the cluster do the following
root@riak2# riak-admin cluster join [email protected]
Attempting to restart script through sudo -H -u riak
Success: staged join request for '[email protected]' to '[email protected]'
check out the cluster plan
root@riak2# riak-admin cluster plan
Attempting to restart script through sudo -H -u riak
===============================Staged Changes================================
Actionย ย ย ย ย ย ย ย Nodes(s)
-------------------------------------------------------------------------------
joinย ย ย ย ย ย ย ย ย ย '[email protected]'
-------------------------------------------------------------------------------
NOTE: Applying these changes will result in 1 cluster transition
###############################################################################
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย After cluster transition 1/1
###############################################################################
=================================Membership==================================
Statusย ย ย ย Ringย ย ย Pendingย ย ย Node
-------------------------------------------------------------------------------
validย ย ย ย 100.0%ย ย ย ย 50.0%ย ย ย '[email protected]'
validย ย ย ย ย ย 0.0%ย ย ย ย 50.0%ย ย ย '[email protected]'
-------------------------------------------------------------------------------
Valid:2 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
WARNING: Not all replicas will be on distinct nodes
Transfers resulting from cluster changes: 32
ย 32 transfers from '[email protected]' to '[email protected]'
Save the cluster
root@riak2# riak-admin cluster commit
Attempting to restart script through sudo -H -u riak
Cluster changes committed
Add 1 more node
Prepare the 3rd node by replacing the ip with 10.20.220.4. And add this node to the riak cluster.
root@riak3# riak-admin cluster join [email protected]
Attempting to restart script through sudo -H -u riak
Success: staged join request for '[email protected]' to '[email protected]'
check and commit the new node to the cluster.
root@riak3# riak-admin cluster plan
Attempting to restart script through sudo -H -u riak
=============================== Staged Changes ================================
Actionย ย ย ย ย ย ย ย Nodes(s)
-------------------------------------------------------------------------------
joinย ย ย ย ย ย ย ย ย ย '[email protected]'
-------------------------------------------------------------------------------
NOTE: Applying these changes will result in 1 cluster transition
###############################################################################
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย After cluster transition 1/1
###############################################################################
================================= Membership ==================================
Statusย ย ย ย Ringย ย ย Pendingย ย ย Node
-------------------------------------------------------------------------------
validย ย ย ย ย 50.0%ย ย ย ย 34.4%ย ย ย '[email protected]'
validย ย ย ย ย 50.0%ย ย ย ย 32.8%ย ย ย '[email protected]'
validย ย ย ย ย ย 0.0%ย ย ย ย 32.8%ย ย ย '[email protected]'
-------------------------------------------------------------------------------
Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
WARNING: Not all replicas will be on distinct nodes
Transfers resulting from cluster changes: 21
ย 10 transfers from '[email protected]' to '[email protected]'
ย 11 transfers from '[email protected]' to '[email protected]'
root@riak3# riak-admin cluster commit
Attempting to restart script through sudo -H -u riak
Cluster changes committed
check status
root@riak3# riak-admin status | grep ring
Attempting to restart script through sudo -H -u riak
ring_members : ['[email protected]','[email protected]','[email protected]']
ring_num_partitions : 64
ring_ownership : <<"[{'[email protected]',22},{'[email protected]',21},{'[email protected]',21}]">>
ring_creation_size : 64
For Advanced configuration refer:
http://docs.basho.com/riak/latest/cookbooks/Adding-and-Removing-Nodes/
|
__label__pos
| 0.82027 |
0
Hello everyone. I am currently making some database connection modules in Python inb order to learn Python's approach to OOP and also to lear how to connect to databases via Python. I successfully made a package with modules for postgres, mysql, sqlite3, and MongoDB. The classes simply connect, disconnect and run queries of any type. Everything is working fine now.
However I noticed that the code is nearly the same. For example, the connect and disconnect methods are exactly the same. Also, postgress and mysql connections have exactly the same number of attributes. Inheritance came to mind, but some classes have less attributes. For example, my sqlite3 class has less attributes because this connection doesn't care about server, port, username or password.
So, what should I do? I can use inheritance but how do I remove the unwanted attributes? Should I make an abstract class stating the methods (they are nearly the same for all subclasses) but no attributes?
2
Contributors
16
Replies
87
Views
1 Year
Discussion Span
Last Post by Gribouillis
0
I think you can follow the simple rule that code which is nearly the same in 2 classes is a good candidate to be written in a common superclass for these 2 classes. You can build a hierarchy based on this principle.
The principle can be applied to attributes too, but there is some freedom here. An attribute is easily added in the __init__'s method body. I mean that an attribute may be defined in several subclasses without being defined in the ancestor classes. It does not change many lines of code to do so.
Abstract classes in the sense of the abc module should be used seldom and with good reasons to do so. Used differently, they may get in the way and clutter the code without significant profit. Instead of this, you can very easily add a method such as
class BaseFoobar(object):
def qux(self, i, j, k):
raise NotImplementedError(self)
Such pseudo abstract methods are very useful and don't get in the way. They trigger the right exception when something is missing in your code.
Edited by Gribouillis
0
I see. I think I'll go with the simple superclass-subclass model. BUt there is then one question left: if the superclass has attributes A,B,C and D, but I only need A and B, what should the child class do to not inherit C, and D?
Look at this code:
class MySQLConnection:
def __init__(self, a_server, a_database, the_user, the_password):
self.server_ip = a_server
self.database = a_database
self.user = the_user
self.password = the_password
self.connection = None
self.cursor = None
self.error = None
And the this one:
class SQLite3Connection:
def __init__(self, db_path):
self.path_to_database = db_path
self.connection = None
self.cursor = None
self.error = None
The second class needs only three of the attributes of the first one, and adds a new one (the connect method initializes the = None attributes later). When I inherit, will the child class inherit the useless attributes too? Should I write a new constructor for the child class? If I do so will it forget the inherited attributes?
Edited by G_S: Better explanation
0
If the child class has only attributes A and B, then the superclass should not have attributes Aย Bย Cย D. If it is not the case, it means that your model is incorrect. Remember that the relation of a subclass is the ISย A relation. So if a Socrates instance ISย A Man instance and men have a .mortal attribute, then a Socrates has a .mortal attribute.
Edited by Gribouillis
0
What you can do here is a common ancestor with the 3 attributes
class BaseConnection(object):
def __init__(self):
self.connection = None
self.cursor = None
self.error = None
class MySQLConnection(BaseConnection):
def __init__(self, a_server, a_database, the_user, the_password):
BaseConnection.__init__(self)
self.server_ip = a_server
self.database = a_database
self.user = the_user
self.password = the_password
class Sqlite3Connection(BaseConnection):
def __init__(self, db_path):
BaseConnection.__init__(self)
self.path_to_database = db_path
Edit: Notice how a Sqlite3 connection ISย NOTย A MySQL connection !
Edited by Gribouillis
0
Hmm, I think I got it. If inheritance is a specialization, then my most general class should be the parent. So it goes like this:
Base (common attributes and methods) -> MySQL | Sqlite.
Now there is one issue left: I have a postgres connection which looks exaclty like a mysql connection:
The MySQL connection:
class MySQLConnection:
def __init__(self, a_server, a_database, the_user, the_password):
self.server_ip = a_server
self.database = a_database
self.user = the_user
self.password = the_password
self.connection = None
self.cursor = None
self.error = None
The postgres class:
class PostgresConnection:
def __init__(a_server, a_database, the_user, the_password):
self.server_ip = a_server
self.database = a_database
self.user = the_user
self.password = the_password
self.connection = None
self.cursor = None
self. error = None
They are twins! Except for the fact that their connect, disconnect and run_query methods use very similar but not equal code to perform those actions. A PostgreSQL connection IS NOT a MySQL connection, although they look really similar in that they have the same basic attributes BESIDES the base attributes of any connection.
What should I do then? Another base class that adds the host, database, user and password attributes and that is the aprent of postgres and mysql? Or simply leave them as twins?
0
There is no general rule. You can make another base class which would inherit the first base class, or you can leave them as twin classes.
I think twin methods between 2 classes are a much more serious reason to build common ancestors than common attributes. Don't hesitate to build intermediary classes. Python is very flexible and it is easy to copy and paste a method from one class to another if you change your mind later.
0
Hmm, I'm currently creating the intermediate pseudoabstract classes. I really need the attributes, so question: should the NotImplementedError be raised by the constructor too? Or should I have the constructor create the attributes I want so that child classes can inherit them?
0
The constructor must not raise NotImplementedError because the constructor must be called by the subclasses. You could give the attributes a default value such as None or NotImplemented for example.
Also the pseudo abstract class can implement some methods, raise NotImplementedError on other methods, etc.
Edited by Gribouillis
0
Thank you very much. I have made several pseudoasbtract classses with NotImplementedError and NotImplemented. I notice that I went from very gneral classes to very specific classes, so I think it's correct. I'll write the final classes later and let you know of the final result (I hope i don't come across any errors in the nonabstract classes.)
0
Great, I'm looking forward to seeing your classes. If all this doesn't suffice, there are other tools such as multiple inheritance and mix-in classes, but remember the zen of python:
Simple is better than complex.
Complex is better than complicated.
0
Here is my class diagram. Took me a while to upload it due to technical issues with my computer, but here it is. The classess at the bottom are non-abstract. The other ones (5) are abstract. The run_query and connect methods are all different for each of the bottom classes. The disconnect() method is exactly the same, so I implemented it in the BaseConnection class.
One thing to note is that the concrete classes have no new attributes, it's the implemented methods that make them different from their parents and siblings.
ClassDiagram.jpg
Edited by G_S: Clarification
0
No, I don't see any problem. The most annoying point is that the query language is database-specific. Later on, you may feel the need for methods performing some queries in a DB-independent way.
0
Yes, that's true. I think that later on I'll have to use some sort of ORM to solve that problem.
0
Not necessarily. The drawbacks of ORMs is that they are heavy, and often slow. For example on a concrete problem, I prefer by far to use the mysqldb module than sqlalchemy. What you can do is specialize some of your classes for a specific problem with only relevant requests implemented.
Edited by Gribouillis
This topic has been dead for over six months. Start a new discussion instead.
Have something to contribute to this discussion? Please be thoughtful, detailed and courteous, and be sure to adhere to our posting rules.
|
__label__pos
| 0.97902 |
Enov8-Plugin
Ansible : How do I protect sensitive data with encrypted files
MAR, 2023
by Jane Temov.
ย
Author Jane Temov
Jane Temov is an IT Environments Evangelist at Enov8, specializing in IT and Test Environment Management, Test Data Management, Data Security, Disaster Recovery, Release Management, Service Resilience, Configuration Management, DevOps, and Infrastructure/Cloud Migration. Jane is passionate about helping organizations optimize their IT environments for maximum efficiency.
One of the great things about Enov8 Environment Manager is our DevOps Management area. An area that allows you to add your favourite automation independent of language. This includes the ability to โplug inโ some very cool ITย Automation tools like Ansible.
ย
Enov8 IT & Test Environment Manager
*Innovate with Enov8
Streamlining delivery through effective transparency & control of your IT & Test Environments.
Ansible is an open-source automation tool used to manage and configure systems and applications. It uses a simple and powerful language to describe and automate IT workflows, making it easy to manage complex systems and processes with ease. Ansible is agentless, meaning it does not require any software or daemons to be installed on remote hosts, making it easy to deploy and use. With Ansible, IT teams can automate tasks such as configuration management, application deployment, and orchestration, allowing them to improve efficiency, reduce errors, and improve overall infrastructure reliability.
Evaluate Now
Protecting Sensitive Data with Encrypted Files
To protect sensitive data with encrypted files or elements in Ansible, you can use Ansible Vault. Ansible Vault is a feature in Ansible that allows you to encrypt sensitive data using a password, and then decrypt it when needed during playbook execution. Here are the steps to use Ansible Vault:
1. Create a file that contains sensitive data, such as passwords, API keys, or private keys. For example, letโs say you have a file named secrets.yml that contains the following:
username: myusername
password: mypassword
1. Encrypt the file using the ansible-vault command:
ansible-vault encrypt secrets.yml
This will prompt you to enter a password that will be used to encrypt the file.
1. Edit your playbook to include the encrypted file.
For example, letโs say your playbook includes the following task:
# yaml
โ name: Configure the server
become: yes
vars:
my_username: โ{{ username }}โ
my_password: โ{{ password }}โ
template:
src: template.j2
dest: /etc/config.cfg
You can replace the username and password variables with the encrypted values by modifying the task as follows:
# yaml
โ name: Configure the server
become: yes
vars_files:
โ secrets.yml
vars:
my_username: โ{{ username }}โ
my_password: โ{{ password }}โ
template:
src: template.j2
dest: /etc/config.cfg
Note that we added the vars_files option to include the encrypted file.
ย
1. Run your playbook using the ansible-playbook command and provide the password for the encrypted file:
python ansible-playbook playbook.yml โask-vault-pass
This will prompt you to enter the password* you used to encrypt the file.
Once you enter the password, Ansible will decrypt the file and use the values in your playbook.
Tip: *Parameterising the Ansible Secret
If you need to parameterise the secret then you can pass the password for the encrypted file as a parameter to the ansible-playbook command using the โvault-password-file option. For example, if your password is mysecretpassword, you can run the following command: ansible-playbook playbook.yml โvault-password-file=/path/to/password_file where /path/to/password_file is a file containing your password, such as: mysecretpassword. This can be particularly useful when you need to run the Enov8 environment automation on a schedule.
Add Parameter
With these steps, you can protect sensitive data in your Ansible playbooks using Ansible Vault.
Enov8, DevOps Manager: Screenshot
Enov8 DevOps Manager
Conclusion
In conclusion, Ansible Vault provides a powerful and flexible way to protect sensitive data in Ansible playbooks and configuration files. By encrypting sensitive data using strong encryption algorithms and protecting the decryption key with a password or key file, Ansible Vault helps ensure that critical data such as passwords, API keys, and other secrets are kept secure and confidential. Ansible Vault also integrates seamlessly with the rest of the Enov8 & Ansible automation framework, making it easy to incorporate secure credential management into your overall infrastructure management workflow. Overall, Ansible Vault is an essential tool for any organization that wants to ensure the security and integrity of its IT infrastructure and data.
Other TEM Reading
Interested in reading more about Test Environment Management. Why not start here:
Enov8 Blog: Your Essential Test Environment Management Checklist
Enov8 Blog:ย What makes a good Test Environment Manager
Enov8 Blog: Understanding the Types of Test Environments
ย
Relevant Articles
Understanding ERM versus SAFe
Understanding ERM versus SAFe
April,ย 2024 by Jane Temov. Author Jane Temov.ย Jane is a Senior Consultant at Enov8, where she specializes in products related to IT and Test Environment Management, Enterprise Release Management, and Test Data Management. Outside of her professional work, Jane...
Serverless Architectures: Benefits and Challenges
Serverless Architectures: Benefits and Challenges
April,ย 2024 by Jane Temov. Author Jane Temov. Jane is a Senior Consultant at Enov8, where she specializes in products related to IT and Test Environment Management, Enterprise Release Management, and Test Data Management. Outside of her professional work, Jane enjoys...
The Crucial Role of Runsheets in Disaster Recovery
The Crucial Role of Runsheets in Disaster Recovery
March,ย 2024 by Jane Temov. ย Author Jane Temov Jane Temov is an IT Environments Evangelist at Enov8, specializing in IT and Test Environment Management, Test Data Management, Data Security, Disaster Recovery, Release Management, Service Resilience, Configuration...
Establishing a Paved Road for IT Ops & Development
Establishing a Paved Road for IT Ops & Development
March,ย 2024 by Jane Temov. ย Author Jane Temov Jane Temov is an IT Environments Evangelist at Enov8, specializing in IT and Test Environment Management, Test Data Management, Data Security, Disaster Recovery, Release Management, Service Resilience, Configuration...
Why Release Management Matters?
Why Release Management Matters?
February,ย 2024 by Jane Temov. ย Author Jane Temov Jane Temov is an IT Environments Evangelist at Enov8, specializing in IT and Test Environment Management, Test Data Management, Data Security, Disaster Recovery, Release Management, Service Resilience,...
Unveiling the ROI of Test Data Management
Unveiling the ROI of Test Data Management
February,ย 2024 by Andrew Walker. ย Author Andrew Walker Andrew Walker is a software architect with 10+ years of experience. Andrew is passionate about his craft, and he loves using his skills to design enterprise solutions for Enov8, in the areas of IT...
|
__label__pos
| 0.817458 |
K. Güler K. Güler - 10 months ago 25
Javascript Question
Weird behaviour with window object when removing property from it
function foobar() {}
console.log(typeof window.foobar); // "function"
console.log(typeof window.alert); // "function"
delete window.foobar;
delete window.alert;
console.log(typeof window.foobar); // "function"
console.log(typeof window.alert); // "undefined"
console.log(window.hasOwnProperty('foobar')); // true
console.log(window.hasOwnProperty('alert')); // false
Can somebody please explain, how this is possible?
Why can't I simply delete the
foobar
property of the window object?
Why is a custom global function like
foobar
protected against
delete
operator, but a built-in global function like
alert
not?
Answer
Global variables are not configurable:
Object.getOwnPropertyDescriptor(window, 'foobar').configurable; // false
Therefore, delete won't work:
delete window.foobar; // false (in sloppy mode)
delete window.foobar; // TypeError (in strict mode)
That's why you should use strict mode. Otherwise some problems are silently ignored.
If you want to be able to delete the function, assign it as a property instead of using a function declaration:
window.foobar = function() {};
delete window.foobar; // true
|
__label__pos
| 0.993689 |
What is 2/732 Simplified?
Are you looking to calculate how to simplify the fraction 2/732? In this really simple guide, we'll teach you exactly how to simplify 2/732 and convert it to the lowest form (this is sometimes calling reducing a fraction to the lowest terms).
To start with, the number above the line (2) in a fraction is called a numerator and the number below the line (732) is called the denominator.
So what we want to do here is to simplify the numerator and denominator in 2/732 to their lowest possible values, while keeping the actual fraction the same.
To do this, we use something called the greatest common factor. It's also known as the greatest common divisor and put simply, it's the highest number that divides exactly into two or more numbers.
In our case with 2/732, the greatest common factor is 2. Once we have this, we can divide both the numerator and the denominator by it, and voila, the fraction is simplified:
2/2 = 1
732/2 = 366
1 / 366
What this means is that the following fractions are the same:
2 / 732 = 1 / 366
So there you have it! You now know exactly how to simplify 2/732 to its lowest terms. Hopefully you understood the process and can use the same techniques to simplify other fractions on your own. The complete answer is below:
1/366
Convert 2/732 to Decimal
Here's a little bonus calculation for you to easily work out the decimal format of the fraction we calculated. All you need to do is divide the numerator by the denominator and you can convert any fraction to decimal:
2 / 732 = 0.0027
Cite, Link, or Reference This Page
If you found this content useful in your research, please do us a great favor and use the tool below to make sure you properly reference us wherever you use it. We really appreciate your support!
โข "What is 2/732 Simplified?". VisualFractions.com. Accessed on August 19, 2022. http://visualfractions.com/calculator/simplify-fractions/what-is-2-732-simplified/.
โข "What is 2/732 Simplified?". VisualFractions.com, http://visualfractions.com/calculator/simplify-fractions/what-is-2-732-simplified/. Accessed 19 August, 2022.
โข What is 2/732 Simplified?. VisualFractions.com. Retrieved from http://visualfractions.com/calculator/simplify-fractions/what-is-2-732-simplified/.
Preset List of Fraction Reduction Examples
Below are links to some preset calculations that are commonly searched for:
Random Fraction Simplifier Problems
If you made it this far down the page then you must REALLY love simplifying fractions? Below are a bunch of randomly generated calculations for your fraction loving pleasure:
|
__label__pos
| 0.746686 |
Probability and Statistics in Project Management
13 minute read ย ย Updated: ย ย Harwinder Singh
Probability and Statistics in Project Time and Cost Management Project Estimation and PERT (Part 5): So far we have learned about the basics of PERT, how PERT is used to estimate activity completion time, and how the PERT formula was derived. We know that a project manager has to wear several hats. In this article, we are going to wear the probabilist and statistician hats and review the basic concepts of probability and statistics relevant to project estimation. It is certainly โout of scopeโ as far as the PMP exam is concerned. But if you ever wanted to learn the basics of random variables, distributions, mean, variance, standard deviation etc., without sacrificing a lot of precious time, then this article is just for you. I think it will give you the necessary foundation for understanding the concepts of project estimation and bring you one step closer to implementing theory into practice.
Random variable
The term random variable is a misnomer. Random variable is actually not a variable, but a function. The outcome of every event may not always be a number. For example, when you toss a coin, the outcome is a Head or a Tail. Random variable is a function which assigns a unique numerical value to every possible outcome of an event. For example, in a toss of a coin, how do we express these outcomes in a number? Letโs see.
Consider an experiment in which a coin is tossed three times. There are eight possible outcomes of this experiment:
HHH HHT HTH THH HTT THT TTH TTT
where H = Head and T = Tail
The number of Heads in these outcomes are:
3, 2, 2, 2, 1, 1, 1, 0
So, the number of times we can get a head in this experiment are 0, 1, 2 and 3.
In this experiment, we can say that the โNumber of Headsโ is a random variable. The numbers 0, 1, 2 and 3 are the โvaluesโ of the random variable.
If we express the random variable as X, then
X = Number of Heads in three tosses of a coin
Values of X = {0, 1, 2, 3}
As you can see, we have converted the outcomes from the toss of the coin experiment into numbers.
Types of Random Variables
Random variables are of 2 types:
โข Continuous: A random variable with infinite number of values. Examples: The lifespan of people, and that of their LCD TVs and mobile phone batteries are examples of continuous random variables. They have an infinite number of possible values. Similarly, the number of miles a car will run before being scrapped, the length of a telephone conversation, the amount of rainfall in a season also fall in the same category. The duration of an activity on a project is also an example of a continuous random variable.
โข Discrete: A random variable with countable number of distinct values. Examples: The score of students in an examination is a discrete random variable and so is the number of students who fail the exam. The experiment of three tosses of a coin (described above) is also an example of discrete random variable, with distinct values 0, 1, 2 and 3.
Distribution
If we plot a graph with the values of the random variable X on the x-axis and the probability of occurrence of the values on the y-axis, then the plot is known as a Distribution.
Probability Distribution Function (PDF)
The (mathematical) function that describes the shape of the Distribution is known as the Probability Distribution Function (PDF).
Common Probability Distributions
Some of the common distribution patterns are Uniform Distribution, Beta Distribution, Triangular Distribution and Normal Distribution.
Uniform Distribution
In the uniform distribution, each value of the random variable has equal probability of occurrence. For example, in a roll of a dice, each value (1 to 6) has an equal probability. The random variable is the score on each roll of the dice, and the values are 1 to 6. Each value has an equal probability of 1/6.
The uniform distribution can be continuous or discrete. The roll of a dice is an example of discrete uniform distribution. The uniform distribution looks like a rectangle, as shown in the following figure, where a is the minimum value and b is the maximum value of the distribution.
Uniform Distribution
The uniform distribution is used in project management to determine rough estimates (range) when very little information is available about the project, and for risk management when several risks have equal probability of occurrence.
Triangular Distribution
The triangular distribution is a continuous probability distribution with a minimum value, a mode (most likely value), and a maximum value. The triangular distribution differs from the uniform distribution in that, the probability of the values of the random variable are not the same. The probability of the minimum, a and maximum value, b is zero, and the probability of the mode value, c is the highest for the entire distribution.
Triangular Distribution
The triangular distribution is used in Project Management, often as an approximation to the beta distribution, to estimate activity duration. Assuming a triangular distribution, the expected activity duration (mean of the distribution) can be calculated using the simple average method.
Beta Distribution
The beta distribution is determined by 4 parameters:
โข a - minimum value
โข b - maximum value
โข ฮฑ - shape parameter
โข ฮฒ - shape parameter
Where a and b are finite numbers.
Natural events rarely have finite end points. However, Beta distribution approximates natural events quite well.
Beta Distribution
A form of beta distribution which looks like a rounded-off triangle is often used in project management to determine activity duration and cost. It models the optimistic (minimum), the pessimistic (maximum) and the most likely (mode) values quite well. The triangular distribution is considered a good approximation of the beta distribution.
Normal Distribution
A random variable which can take any value from -โ to +โ, is said to follow a Normal Distribution. The normal distribution models natural events very well. In practice, the normal distribution is also used to model distributions with non-negative values. For example, the height of adults in a country is considered to follow a normal distribution, even though the height of a person can never be a negative number.
Normal Distribution
The normal distribution curve is symmetrical about the mean (expected value). The curve is also known as a bell-curve because of its resemblance to the shape of a bell. The simplest form of the normal distribution is known as the Standard Normal Distribution, which has a mean of 0 and variance of 1.
Because of itโs ability to accurately portray many real world events, the normal distribution has many practical applications. Be it height, weight or income of a population, or time and cost estimation in project management, all can be modeled fairly accurately using the normal distribution.
The normal distributionโs end points come very close to the horizontal axis but never actually touch it. This is unlike the beta distribution, whose end points touch the horizontal axis i.e. have a zero probability.
Expected Value or Mean (ฮผ)
Expected value of a random variable is itโs mean or average value. This expected value is used in project management to represent the expected activity duration (or PERT estimate).
Variance (ฯ ^ 2)
Variance of a random variable is a measure of its spread. Variance is always non-negative. A small variance indicates that the values of the random variable are placed close to the mean, whereas a large variances indicates that the values are spread away from the mean.
Standard Deviation (ฯ)
Standard deviation is the square root of variance and is also always non-negative. It also gives a measure of the dispersion of values of a random variable.
Trivia: If both Variance and Standard Deviation give a measure of the spread of values of a random variable, then why do we need 2 variables to tell us the same thing?
The y-axis of the distribution curve gives the probability of occurrence of any value of a random variable. The ratio of the area under the curve between any two points a, b to the total area under the curve, gives the probability that the value of the random variable will lie between a and b. This is where standard deviation comes into play.
According to statistics, for the normal distribution, 68.2% values of a random variable fall within 1 standard deviation of the mean, 95.5% within 2 standard deviations of the mean and 99.7% within 3 standard deviations.
In other words, thereโs a 68.2% probability that the value of a random variable will lie between the range [ฮผ-ฯ, ฮผ+ฯ], 95.5% probability to be within [ฮผ-2ฯ, ฮผ+2ฯ] and 99.7% probability to be within [ฮผ-3ฯ, ฮผ+3ฯ].
Standard Deviation Diagram
The area under the curve in the range [ฮผ-ฯ, ฮผ+ฯ] is 68.2% of the total area under the curve. This area is shown in blue. Similarly 95.5% area is covered in [ฮผ-2ฯ, ฮผ+2ฯ] range, and is the sum of the blue and brown colored regions. And lastly, 99.7% area is covered in [ฮผ-3ฯ, ฮผ+3ฯ] range and is the total area shown in blue, brown and green.
Central Limit Theorem
Central Limit Theorem (CLT) says that the mean of a large sample of independent random variables, each having a finite mean and variance, will be normally distributed. For CLT to be applicable, certain requirements should be met. The sample size should be fairly large (more than 30), the random variables should be independent of each other and should have the same type of distribution.
Letโs understand this with an example.
Say we have a set of coins with denominations of 1 to 999. The set of coins is the population and the denomination of each coin is a random variable. The mean of this entire population is 500. From this population, we select a sample of 50 coins, and take a mean of their denomination. It is possible that the mean of those 50 coins is less than 500, equal to 500, or even more than 500. Now if you repeatedly draw 50 coins from the set, calculate their mean, and plot the mean vs frequency graph, the resulting distribution will be close to normal. In general, the larger the sample size, the closer the resulting distribution will be to normal.
Letโs put this in project management perspective. We learned in the previous articles that the basic assumptions in PERT are that the individual activity durations are random variables and that they follow the beta distribution. Consider the individual activity durations as the random variables and the duration of activities on the critical path as the sample. In order to calculate the total project duration, we add up the PERT estimates of the activities on the critical path. According to CLT:
The total project duration is assumed to follow a normal distribution and the variance in project duration can be calculated by summing up the variances in the durations of activities on the critical path.
In real world project management, the conditions under which CLT is applicable, are not always met. For instance, the activities on a project, and hence their durations, are not always independent of each other. The critical path may not have 30 activities. Weโll discuss these issues in a follow-up article on the limitations of PERT.
With this we have come to the end of this post. As you know, this topic is a study in itself, and I cannot possibly cover it in enough detail. Iโve tried my best to summarize lot of information in a concise manner. Please use the comments section below to let me know whether you found it useful.
8-part series on Project Estimation and PERT
1. Get Intimate with PERT
2. Three Point Estimate - The Power of Three in Project Estimation
3. What is PERT?
4. The Magical PERT Formula
5. Probability and Statistics in Project Management (you are here)
6. PERT and CPM get Cozy
7. PMP Quiz Contest - Activity Duration Estimates
8. Standard Deviation and Project Duration Estimates
Image credit: Flickr / icma
Leave a Comment
Loading...
3 Comments
|
__label__pos
| 0.995097 |
Skip to main content
Creating SSL Certificates
Published
A million people have written about this and now itโs my turn. I have a need to create myself a certificate authority and then to use that certificate authority so that my programs can communicate with each other using those certificates to identify themselves. Here are the commands that I ran with absolutely no explanation behind why or how they work.
I decided to not screw around with RSA and instead I jumped straight to ECC. Everything Iโm doing is controlled by me so I have no need for the strong backward compatibility that RSA provides.
Please also note that you willl need to take my examples and fix your paths as appropriate. I keep my certificates in /usr/local/ssl/certs. The certificate authority is in there and then all of the generated certificates are kept in /usr/local/ssl/certs/local. So youโll need to double check the paths in my following commands.
First I created my certificate authority private key:
openssl ecparam -out /usr/local/ssl/certs/local-ca.key -name prime256v1 -genkey
That key will be protected with my life, metaphorically. I filled in some values with my details like my location and my name. Whatever, no big deal. Just make it somewhat accurate because it will be pasted into every certificate that you sign from this CA.
Then I must create my certificate authority public certificate:
openssl req -x509 -new -nodes -days 36500 -sha512 \
-key /usr/local/ssl/certs/local-ca.key \
-out /usr/local/ssl/certs/local-ca.cert
I set it to exist for 36,500 days, aka ten years. Choose a number that you feel comfortable with.
Now I can start to sign certificates from this. First I must create an OpenSSL configuration file. Mine looks like this:
[ca]
default_ca = CA_default
[CA_default]
database = certificates
new_certs_dir = /usr/local/ssl/certs/local
certificate = /usr/local/ssl/certs/local-ca.cert
private_key = /usr/local/ssl/certs/local-ca.key
preserve = no
email_in_dn = no
nameopt = default_ca
certopt = default_ca
policy = policy_match
default_days = 2190
[policy_match]
countryName = match
stateOrProvinceName = match
organizationName = match
organizationalUnitName = match
commonName = supplied
emailAddress = match
[req]
string_mask = nombstr
distinguished_name = req_distinguished_name
req_extensions = v3_req
x509_extensions = v3_req
[req_distinguished_name]
0.organizationName = Organization Name
organizationalUnitName = Organizational Unit Name
emailAddress = Email Address
emailAddress_max = 40
localityName = City
stateOrProvinceName = State
countryName = Country Code
countryName_min = 2
countryName_max = 2
commonName = Common Name
commonName_max = 64
0.organizationName_default = Your Name
organizationalUnitName_default = Certificate Authority
localityName_default = Your City
stateOrProvinceName_default = Your State
countryName_default = US
emailAddress_default = [email protected]
[v3_ca]
basicConstraints = CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
[v3_req]
subjectKeyIdentifier = hash
basicConstraints = critical,CA:FALSE
keyUsage = critical,digitalSignature,keyEncipherment
Thereโs nothing particularly sensitive in this file. Itโs just there. Your certificates will copy their locality and state and country from your CA.
With that file and the certificate authority that we created we can start signing certificates with three commands. First, we want to create our certificateโs private key:
openssl ecparam -name prime256v1 -genkey \
-out /usr/local/ssl/certs/local/example.com.key
With the private key and the above configuration file we will create a certificate signing request.
openssl req -new \
-key /usr/local/ssl/certs/local/example.com.key \
-out /usr/local/ssl/certs/local/example.com.csr \
-batch -subj "/CN=example.com" -sha512 \
-config /usr/local/ssl/certs/local-openssl.conf
Those two commands โ creating the private key and the signing request โ only need to be done once, ever. If you keep those two files then all you need to do when your certificate expires is use the existing signing request to create a new certificate and it will be created with all of the same options. Now to create that new certificate:
openssl x509 -req -extfile <(printf "subjectAltName=DNS:example.com") \
-in /usr/local/ssl/certs/local/example.com.csr \
-out /usr/local/ssl/certs/local/example.com.cert \
-CA /usr/local/ssl/certs/local-ca.cert \
-CAkey /usr/local/ssl/certs/local-ca.key \
-CAcreateserial -sha512
Thatโs it. Thatโs the entire process. And this even follows RFC2818 and sets a subjectAltName parameter so that clients donโt complain.
Be aware that SSL is a tricky thing. Some of my options might not be 100% correct and some of them might not age well so itโs really best to double check what you run before you run it and not just trust me. But if youโre lazy then these commands will probably work.
If you want to let someone else do the hard work for you and have it work in your browser, though, I highly recommend that you use Letโs Encrypt to automatically sign certificates. I wrote about how to get that going on my blog in the past.
|
__label__pos
| 0.669927 |
Advertisement
1. Game Development
Basic 2D Platformer Physics, Part 3
Scroll to top
Read Time: 13 min
This post is part of a series called Basic 2D Platformer Physics .
Basic 2D Platformer Physics, Part 2
Basic 2D Platformer Physics, Part 4
One-Way Platforms
Since we've just finished working on the ground collision check, we might as well add one-way platforms while we're at it. They are going to concern only the ground collision check anyway. One-way platforms differ from solid blocks in that they will stop an object only if it's falling down. Additionally, we'll also allow a character to drop down from such a platform.
First of all, when we want to drop off a one-way platform, we basically want to ignore the collision with the ground. An easy way out here is to set up an offset, after passing which the character or object will no longer collide with a platform.ย
For example, if the character is already two pixels below the top of the platform, it shouldn't detect a collision anymore. In that case, when we want to drop off the platform, all we have to do is move the character two pixels down. Let's create this offset constant.
1
public const float cOneWayPlatformThreshold = 2.0f;
Now let's add a variable which will let us know if an object is currently on a one-way platform.
1
public bool mOnOneWayPlatform = false;
Let's modify the definition of theย HasGround function to also take a reference to a boolean which will be set if the object has landed on a one-way platform.
1
public bool HasGround(Vector2 oldPosition, Vector2 position, Vector2 speed, out float groundY, ref bool onOneWayPlatform)
Now, after we check if the tile we are currently at is an obstacle, and it isn't, we should check if it's a one-way platform.
1
if (mMap.IsObstacle(tileIndexX, tileIndexY))
2
return true;
3
else if (mMap.IsOneWayPlatform(tileIndexX, tileIndexY))
4
onOneWayPlatform = true;
As explained before, we also need to make sure that this collision is ignored if we have fallen beyond the cOneWayPlatformThreshold below the platform.ย
Of course, we cannot simply compare the difference between the top of the tile and the sensor, because it's easy to imagine that even if we're falling, we might go well below two pixels from the platform's top. For the one-way platforms to stop an object, we want the sensor distance between the top of the tile and the sensor to be less than or equal to the cOneWayPlatformThreshold plus the offset from this frame's position to the previous one.
1
if (mMap.IsObstacle(tileIndexX, tileIndexY))
2
return true;
3
else if (mMap.IsOneWayPlatform(tileIndexX, tileIndexY)
4
&& Mathf.Abs(checkedTile.y - groundY) <= Constants.cOneWayPlatformThreshold + mOldPosition.y - position.y)
5
onOneWayPlatform = true;
Finally, there's one more thing to consider. When we find a one-way platform, we cannot really exit the loop, because there are situations when the character is partially on a platform and partially on a solid block.
We shouldn't really consider such a position as "on a one-way platform", because we can't really drop down from thereโthe solid block is stopping us. That's why we first need to continue looking for a solid block, and if one is found before we return the result, we also need to setย onOneWayPlatformย to false.
1
if (mMap.IsObstacle(tileIndexX, tileIndexY))
2
{
3
onOneWayPlatform = false;
4
return true;
5
}
6
else if (mMap.IsOneWayPlatform(tileIndexX, tileIndexY)
7
&& Mathf.Abs(checkedTile.y - groundY) <= Constants.cOneWayPlatformThreshold + mOldPosition.y - position.y)
8
onOneWayPlatform = true;
Now, if we went through all the tiles we needed to check horizontally and we found a one-way platform but no solid blocks, then we can be sure that we are on a one-way platform from which we can drop down.
1
if (mMap.IsObstacle(tileIndexX, tileIndexY))
2
{
3
onOneWayPlatform = false;
4
return true;
5
}
6
else if (mMap.IsOneWayPlatform(tileIndexX, tileIndexY)
7
&& Mathf.Abs(checkedTile.y - groundY) <= Constants.cOneWayPlatformThreshold + mOldPosition.y - position.y)
8
onOneWayPlatform = true;
9
10
if (checkedTile.x >= bottomRight.x)
11
{
12
if (onOneWayPlatform)
13
return true;
14
break;
15
}
That's it, so now let's add to the character class an option to drop down the platform. In both the stand and run states, we need to add the following code.
1
if (KeyState(KeyInput.GoDown))
2
{
3
if (mOnOneWayPlatform)
4
mPosition.y -= Constants.cOneWayPlatformThreshold;
5
}
Let's see how it works.
Everything is working correctly.
Handle Collisions for the Ceiling
We need to create an analogous function to the HasGround for each side of the AABB, so let's start with the ceiling. The differences are as follows:
โข The sensor line is above the AABB instead of being below it
โข We check for the ceiling tile from bottom to top, as we're moving up
โข No need to handle one-way platforms
Here's the modified function.
1
public bool HasCeiling(Vector2 oldPosition, Vector2 position, out float ceilingY)
2
{
3
var center = position + mAABBOffset;
4
var oldCenter = oldPosition + mAABBOffset;
5
6
ceilingY = 0.0f;
7
8
var oldTopRight = oldCenter + mAABB.halfSize + Vector2.up - Vector2.right;
9
10
var newTopRight = center + mAABB.halfSize + Vector2.up - Vector2.right;
11
var newTopLeft = new Vector2(newTopRight.x - mAABB.halfSize.x * 2.0f + 2.0f, newTopRight.y);
12
13
int endY = mMap.GetMapTileYAtPoint(newTopRight.y);
14
int begY = Mathf.Min(mMap.GetMapTileYAtPoint(oldTopRight.y) + 1, endY);
15
int dist = Mathf.Max(Mathf.Abs(endY - begY), 1);
16
17
int tileIndexX;
18
19
for (int tileIndexY = begY; tileIndexY <= endY; ++tileIndexY)
20
{
21
var topRight = Vector2.Lerp(newTopRight, oldTopRight, (float)Mathf.Abs(endY - tileIndexY) / dist);
22
var topLeft = new Vector2(topRight.x - mAABB.halfSize.x * 2.0f + 2.0f, topRight.y);
23
24
for (var checkedTile = topLeft; ; checkedTile.x += Map.cTileSize)
25
{
26
checkedTile.x = Mathf.Min(checkedTile.x, topRight.x);
27
28
tileIndexX = mMap.GetMapTileXAtPoint(checkedTile.x);
29
30
if (mMap.IsObstacle(tileIndexX, tileIndexY))
31
{
32
ceilingY = (float)tileIndexY * Map.cTileSize - Map.cTileSize / 2.0f + mMap.mPosition.y;
33
return true;
34
}
35
36
if (checkedTile.x >= topRight.x)
37
break;
38
}
39
}
40
41
return false;
42
}
Handle Collisions for the Left Wall
Similarly to how we handled the collision check for the ceiling and ground, we also need to check if the object is colliding with the wall on the left or the wall on the right. Let's start from the left wall. The idea here is pretty much the same, but there are a few differences:
โข The sensor line is on the left side of the AABB.
โข The inner for loop needs to iterate through the tiles vertically, because the sensor is now a vertical line.
โข The outer loop needs to iterate through tiles horizontally to see if we haven't skipped a wall when moving with a big horizontal speed.
1
public bool CollidesWithLeftWall(Vector2 oldPosition, Vector2 position, out float wallX)
2
{
3
var center = position + mAABBOffset;
4
var oldCenter = oldPosition + mAABBOffset;
5
6
wallX = 0.0f;
7
8
var oldBottomLeft = oldCenter - mAABB.halfSize - Vector2.right;
9
var newBottomLeft = center - mAABB.halfSize - Vector2.right;
10
var newTopLeft = newBottomLeft + new Vector2(0.0f, mAABB.halfSize.y * 2.0f);
11
12
int tileIndexY;
13
14
var endX = mMap.GetMapTileXAtPoint(newBottomLeft.x);
15
var begX = Mathf.Max(mMap.GetMapTileXAtPoint(oldBottomLeft.x) - 1, endX);
16
int dist = Mathf.Max(Mathf.Abs(endX - begX), 1);
17
18
for (int tileIndexX = begX; tileIndexX >= endX; --tileIndexX)
19
{
20
var bottomLeft = Vector2.Lerp(newBottomLeft, oldBottomLeft, (float)Mathf.Abs(endX - tileIndexX) / dist);
21
var topLeft = bottomLeft + new Vector2(0.0f, mAABB.halfSize.y * 2.0f);
22
23
for (var checkedTile = bottomLeft; ; checkedTile.y += Map.cTileSize)
24
{
25
checkedTile.y = Mathf.Min(checkedTile.y, topLeft.y);
26
27
tileIndexY = mMap.GetMapTileYAtPoint(checkedTile.y);
28
29
if (mMap.IsObstacle(tileIndexX, tileIndexY))
30
{
31
wallX = (float)tileIndexX * Map.cTileSize + Map.cTileSize / 2.0f + mMap.mPosition.x;
32
return true;
33
}
34
35
if (checkedTile.y >= topLeft.y)
36
break;
37
}
38
}
39
40
return false;
41
}
Handle Collisions for the Right Wall
Finally, let's create theย CollidesWithRightWall function, which as you can imagine, will do a very similar thing asย CollidesWithLeftWall, but instead of using a sensor on the left, we'll be using a sensor on the right side of the character.ย
The other difference here is that instead of checking the tiles from right to left, we'll be checking them from left to right, since that's the assumed moving direction.
1
public bool CollidesWithRightWall(Vector2 oldPosition, Vector2 position, out float wallX)
2
{
3
var center = position + mAABBOffset;
4
var oldCenter = oldPosition + mAABBOffset;
5
6
wallX = 0.0f;
7
8
var oldBottomRight = oldCenter + new Vector2(mAABB.halfSize.x, -mAABB.halfSize.y) + Vector2.right;
9
var newBottomRight = center + new Vector2(mAABB.halfSize.x, -mAABB.halfSize.y) + Vector2.right;
10
var newTopRight = newBottomRight + new Vector2(0.0f, mAABB.halfSize.y * 2.0f);
11
12
var endX = mMap.GetMapTileXAtPoint(newBottomRight.x);
13
var begX = Mathf.Min(mMap.GetMapTileXAtPoint(oldBottomRight.x) + 1, endX);
14
int dist = Mathf.Max(Mathf.Abs(endX - begX), 1);
15
16
int tileIndexY;
17
18
for (int tileIndexX = begX; tileIndexX <= endX; ++tileIndexX)
19
{
20
var bottomRight = Vector2.Lerp(newBottomRight, oldBottomRight, (float)Mathf.Abs(endX - tileIndexX) / dist);
21
var topRight = bottomRight + new Vector2(0.0f, mAABB.halfSize.y * 2.0f);
22
23
for (var checkedTile = bottomRight; ; checkedTile.y += Map.cTileSize)
24
{
25
checkedTile.y = Mathf.Min(checkedTile.y, topRight.y);
26
27
tileIndexY = mMap.GetMapTileYAtPoint(checkedTile.y);
28
29
if (mMap.IsObstacle(tileIndexX, tileIndexY))
30
{
31
wallX = (float)tileIndexX * Map.cTileSize - Map.cTileSize / 2.0f + mMap.mPosition.x;
32
return true;
33
}
34
35
if (checkedTile.y >= topRight.y)
36
break;
37
}
38
}
39
40
return false;
41
}
Move the Object Out of the Collision
All of our collision detection functions are done, so let's use them to complete the collision response against the tilemap. Before we do that, though, we need to figure out the order in which we'll be checking the collisions. Let's consider the following situations.
In both of these situations, we can see the character ended up overlapping with a tile, but we need to figure out how should we resolve the overlap.ย
The situation on the left is pretty simpleโwe can see that we're falling straight down, and because of that we definitely should land on top of the block.ย
The situation on the right is a bit more tricky, since in truth we could land on the very corner of the tile, and pushing the character to the top is as reasonable as pushing it to the right. Let's choose to prioritize the horizontal movement. It doesn't really matter much which alignment we wish to do first; both choices look correct in action.
Let's go to our UpdatePhysics function and add the variables which will hold the results of our collision queries.
1
float groundY = 0.0f, ceilingY = 0.0f;
2
float rightWallX = 0.0f, leftWallX = 0.0f;
Now let's start by looking if we should move the object to the right. The conditions here are that:
โข the horizontal speed is less or equal to zero
โข we collide with the left wallย
โข in the previous frame we didn't overlap with the tile on the horizontal axisโa situation akin to the one on the right in the above picture
The last one is a necessary condition, because if it wasn't fulfilled then we would be dealing with a situation similar to the one on the left in the above picture, in which we surely shouldn't move the character to the right.
1
if (mSpeed.x <= 0.0f
2
&& CollidesWithLeftWall(mOldPosition, mPosition, out leftWallX)
3
&& mOldPosition.x - mAABB.halfSize.x + mAABBOffset.x >= leftWallX)
4
{
5
}
If the conditions are true, we need to align the left side of our AABB to the right side of the tile, make sure that we stop moving to the left, and mark that we are next to the wall on the left.
1
if (mSpeed.x <= 0.0f
2
&& CollidesWithLeftWall(mOldPosition, mPosition, out leftWallX)
3
&& mOldPosition.x - mAABB.halfSize.x + mAABBOffset.x >= leftWallX)
4
{
5
mPosition.x = leftWallX + mAABB.halfSize.x - mAABBOffset.x;
6
mSpeed.x = Mathf.Max(mSpeed.x, 0.0f);
7
mPushesLeftWall = true;
8
}
If any of the conditions besides the last one is false, we need to setย mPushesLeftWallย to false. That's because the last condition being false does not necessarily tell us that the character is not pushing the wall, but conversely, it tells us that it was colliding with it already in the previous frame. Because of this, it's best to change mPushesLeftWall to false only if any of the first two conditions is false as well.
1
if (mSpeed.x <= 0.0f
2
&& CollidesWithLeftWall(mOldPosition, mPosition, out leftWallX))
3
{
4
if (mOldPosition.x - mAABB.HalfSizeX + AABBOffsetX >= leftWallX)
5
{
6
mPosition.x = leftWallX + mAABB.HalfSizeX - AABBOffsetX;
7
mPushesLeftWall = true;
8
}
9
mSpeed.x = Mathf.Max(mSpeed.x, 0.0f);
10
}
11
else
12
mPushesLeftWall = false;
Now let's check for the collision with the right wall.
1
if (mSpeed.x >= 0.0f
2
&& CollidesWithRightWall(mOldPosition, mPosition, out rightWallX))
3
{
4
if (mOldPosition.x + mAABB.HalfSizeX + AABBOffsetX <= rightWallX)
5
{
6
mPosition.x = rightWallX - mAABB.HalfSizeX - AABBOffsetX;
7
mPushesRightWall = true;
8
}
9
10
mSpeed.x = Mathf.Min(mSpeed.x, 0.0f);
11
}
12
else
13
mPushesRightWall = false;
As you can see, it's the same formula we used for checking the collision with the left wall, but mirrored.
We already have the code for checking the collision with the ground, so after that one we need to check the collision with the ceiling. Nothing new here as well, plus we don't need to do any additional checks except that the vertical speed needs to be greater or equal to zero and we actually collide with a tile that's on top of us.
1
if (mSpeed.y >= 0.0f
2
&& HasCeiling(mOldPosition, mPosition, out ceilingY))
3
{
4
mPosition.y = ceilingY - mAABB.halfSize.y - mAABBOffset.y - 1.0f;
5
mSpeed.y = 0.0f;
6
mAtCeiling = true;
7
}
8
else
9
mAtCeiling = false;
Round Up the Corners
Before we test if the collision responses work, there's one more important thing to do, which is to round the values of the corners we calculate for the collision checks. We need to do that, so that our checks are not destroyed by floating point errors, which might come about from weird map position, character scale or just a weird AABB size.
First, for our ease, let's create a function that transforms a vector of floats into a vector of rounded floats.
1
Vector2 RoundVector(Vector2 v)
2
{
3
return new Vector2(Mathf.Round(v.x), Mathf.Round(v.y));
4
}
Now let's use this function in every collision check. First, let's fix the HasCeiling function.
1
var oldTopRight = RoundVector(oldCenter + mAABB.HalfSize + Vector2.up - Vector2.right);
2
3
var newTopRight = RoundVector(center + mAABB.HalfSize + Vector2.up - Vector2.right);
4
var newTopLeft = RoundVector(new Vector2(newTopRight.x - mAABB.HalfSizeX * 2.0f + 2.0f, newTopRight.y));
Next is OnGround.
1
var oldBottomLeft = RoundVector(oldCenter - mAABB.HalfSize - Vector2.up + Vector2.right);
2
3
var newBottomLeft = RoundVector(center - mAABB.HalfSize - Vector2.up + Vector2.right);
4
var newBottomRight = RoundVector(new Vector2(newBottomLeft.x + mAABB.HalfSizeX * 2.0f - 2.0f, newBottomLeft.y));
PushesRightWall.
1
var oldBottomRight = RoundVector(oldCenter + new Vector2(mAABB.HalfSizeX, -mAABB.HalfSizeY) + Vector2.right);
2
3
var newBottomRight = RoundVector(center + new Vector2(mAABB.HalfSizeX, -mAABB.HalfSizeY) + Vector2.right);
4
var newTopRight = RoundVector(newBottomRight + new Vector2(0.0f, mAABB.HalfSizeY * 2.0f));
And finally, PushesLeftWall.
1
var oldBottomLeft = RoundVector(oldCenter - mAABB.HalfSize - Vector2.right);
2
3
var newBottomLeft = RoundVector(center - mAABB.HalfSize - Vector2.right);
4
var newTopLeft = RoundVector(newBottomLeft + new Vector2(0.0f, mAABB.HalfSizeY * 2.0f));
That should solve our issues!
Check the Results
That's going to be it. Let's test how our collisions are working now.
Summary
That's it for this part! We've got a fully working set of tilemap collisions, which should be very reliable. We know in what position state theย objectย currentlyย is: whether it's on the ground, touching a tile on the left or on the right, or bumping a ceiling. We've also implemented the one-way platforms, which are a very important tool in every platformer game.ย
In the next part, we'll add ledge-grabbing mechanics, which will increase the possible movement of the character even further, so stay tuned!
Advertisement
Did you find this post useful?
Want a weekly email summary?
Subscribe below and weโll send you a weekly email summary of all new Game Development tutorials. Never miss out on learning about the next big thing.
Advertisement
Looking for something to help kick start your next project?
Envato Market has a range of items for sale to help get you started.
|
__label__pos
| 0.814527 |
0
Given that Client A and Client B are using OpenVPN and they are on the same virtual network. A connects to B via e.g. SSH. In front of B there is an external Firewall or whatever.
Is the traffic which is monitored by the firewall actually addressed to port 22 then? So is it port 22 B should enable or is it more like the VPN port?
To take it short: am I right that all traffic from and to the VPN is "tunneled" via a single Port and therefore only this single ports needs to be open?
1 Answer 1
0
Your question is confusing to me.
VPN is VPN and Services running on a specific port is a service running on a specific port.
Now, if you target services are fronted by another firewall after the VPN endpoint, you also need to play with that firewall to allow traffic to the destination endpoint you want to get to.
Your Answer
By clicking โPost Your Answerโ, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.880303 |
Basic HTML: Introduction
By Joe Burns
(with updates by editorial staff)
Use these to jump around or read it all
Welcome to HTML
Let's Get Started
What is HTML?
Beginning to Write
View Source
Welcome to HTML...
This is Primer #1 in a series of seven that will calmly introduce you to the very basics of HyperText Mark-up Language. I suggest you take the Primers one at a time over seven days. By the end of the week, you'll easily know enough to create your own HTML home page. No really. You will.
I say that because many people scoff at the notion that they can actually learn this new Internet format. I'm still amazed that the best-selling line of computer books calls its readers "Dummies." And people seem to revel in that title. Some of the smartest people I know love to proclaim themselves "Dummies" regarding every aspect of computers. Strange. I think you'll do a whole lot better at your next cocktail party by handing out your home page address rather than laughing about how dumb you are about the Internet.
You Can Do This!
Let's Get Started
I am assuming at the beginning of this tutorial that you know nothing about HTML. I am assuming, however, some computer knowledge. You wouldn't be looking at this page without having some knowledge. To continue with these Primers, you will need...
1. A computer (obviously)
2. A browser like Mozilla Firefox, Netscape Navigator, Microsoft Internet Explorer, or Opera. If you're looking at this page, you already have one. If you look up at the title bar at the very top of your screen it will probably say the page title ("Basic HTML: Introduction") and then your browser's name.
3. A word processor. If you have access to Windows "Notepad" or "WordPad" programs or the MAC "Simple Text" program, use that to get started.
If you have those three things, you can write HTML with the best of them. Now here are a few questions you probably have:
Q. I have a MAC (or PC) -- will this work on my computer?
A. Yes. HTML does not use any specific platform. It works with simple text. More on that in a moment...
Q. Must I be logged onto the Internet to do this? More specifically, will learning this throw my cost for on-line way up?
A. Neither. You will write off-line.
Q. Do I need some sort of expensive program to help me write this?
A. No. You will write using just what I outlined above. You can buy those programs if you'd like, but they're not needed. I've never used one.
Q. Is this going to require I learn a whole new computer language like Basic or Fortran or some other cryptic, God-awful, silly-lookin', gothic extreme gobbledygook?
A. Touchy-touchy, aren't we? "No" is the answer. HTML is not a computer language. Allow me to repeat that in bold... HTML is not a computer language!
What is HTML?
H-T-M-L are initials that stand for HyperText Markup Language (computer people love initials and acronyms -- you'll be talking acronyms ASAP). Let me break it down for you:
โข Hyper is the opposite of linear. It used to be that computer programs had to move in a linear fashion. This before this, this before this, and so on. HTML does not hold to that pattern and allows the person viewing the World Wide Web page to go anywhere, any time they want.
โข Text is what you will use. Real, honest to goodness English letters.
โข Mark up is what you will do. You will write in plain English and then mark up what you wrote. More to come on that in the next Primer.
โข Language because they needed something that started with "L" to finish HTML and Hypertext Markup Louie didn't flow correctly. Because it's a language, really -- but the language is plain English.
Beginning to Write
You will actually begin to write HTML starting with Primer #2. That's tomorrow if you follow the seven-day plan this was written for. Here, I want to tell you how you will go about the process.
You will write the HTML document on the word processor, or Notepad, WordPad, or Simple Text. When you are finished creating the HTML document, you'll then open the document in a browser, like Netscape Navigator. The browser will interpret the HTML commands for you and display the Web page.
Now, some people who are already schooled in HTML are going to jump up and down and yell that you should be using an HTML assistant program because it makes it easier. That's true, but it also makes it harder to learn as the program does half the work for you. Take my word for it, use the word processor for a week, then go to the assistant if you still want to use one. You'll be far better off for the effort. I have been writing HTML for six years and I still use Notepad for the bulk of my writing.
Let's get into the programs you will use to write your HTML document. Keep this in mind: HTML documents must be text only. When you save an HTML document, you must save only the text, nothing else.
The reason I am pushing NotePad, WordPad, and Simple Text is that they save in text-only format without your doing any additional work. They just do it. But, if you're like me, then you will want to start writing on a word processor, like WORD, or WordPerfect. Maybe you're just more comfortable on it. If so, read this next part carefully.
The Word Processor
When you write to the word processor you will need to follow a few steps:
1. Write the page as you would any other document.
2. When you go to save the document (Here's the trick), ALWAYS choose SAVE AS.
3. When the SAVE AS box pops up, you will need to save the page in a specific format. Look at the SAVE AS dialogue box when it pops up: Usually at the bottom, you find where you will be able to change the file format.
4. If you have a PC, save your document as ASCII TEXT DOS or just TEXT. Either one will work.
5. If you have a MAC, save your document as TEXT.
6. When I started writing HTML, I saved pages by assigning every Web page its own floppy disc. It just helped me keep it all straight, but if you want to save right to your hard drive, do it. I only offer the floppy disc premise as a suggestion.
Please remember: It is very important to choose SAVE AS EVERY time you save your document. If you don't, the program won't save as TEXT, but rather in its default format. In layman's terms -- use SAVE AS or screw up your document.
You see, when you save your document in WORD, or some other word processor format other than text, you are saving much more than just the letters on the page. You're saving the margin settings, the tab settings, specific fonts, and a whole lot of other settings the page needs to be displayed correctly. You don't want all of that. You just want the text.
NotePad, WordPad, and SimpleText already save in text-only format so if you use one of them as your word processor, you'll get the correct format simply by saving your document.
How To Name Your Document
What you name your document is very important. You must first give your document a name and then add a suffix to it. That's the way everything works in HTML. You give a name and then a suffix.
Follow this format to name your document:
1. Choose a name. Anything. If you have a PC not running Windows 95, you are limited to eight letters, however.
2. Add a suffix. For all HTML documents, you will add either ".htm" or ".html".
(".htm" for PCs running Windows 3.x and ".html" for MAC and Windows 95/98 Machines)
Example:
I am looking to name a document I just wrote on a PC running Windows 3.11 for workgroups. I want to name the document "fred". Thus the document must be named "fred.htm". If it was MAC or Windows 95/98 I would name it "fred.html". Please notice the dot (period) before .htm and .html. And no quotation marks, I just put them in here to set the name apart.
Uhhhhhh.... Why Do I Do That?
Glad you asked. It's a thing called "association." It's how computers tell different file types apart. ".html" tells the computer that this file is an HTML document. When we get into graphics, you'll see a different suffix. All files used on the Web will follow the format of "name.suffix." Always.
Okay, why .htm for PCs running Windows 3.x and .html for MAC and Windows 95/98?
Because that's the way the operating systems are made (Windows 3.x, Windows 95/98, and MAC OS are all technically called operating systems). Windows 3.x only allows three letters after the dot. MAC OS and Windows 95/98 allow four, or more. Your browser allows for both suffixes. It acts upon .html and .htm in the same fashion.
Why do you keep harping on the fact that I must save in TEXT only?
You're just full of questions! You see, HTML browsers can only read text. Look at your keyboard. See the letters and numbers and little signs like % and @ and *? There are 128 in all (read upper- and lowercase letters as two). That's text. That's what the browser reads. It simply doesn't understand anything else.
If you'd like to test this theory, then go ahead and create an HTML document and save it in WORD. Then try and open it in your browser. Nothing will happen. Go ahead and try it. You won't hurt anything.
Remember that if you are using Notepad, Wordpad, or Simple Text, the document will be saved as text with no extra prompting. Just choose SAVE.
Opening the Document in the Browser
Once you have your HTML document on the floppy disc or your hard drive, you'll need to open it up in the browser. It's easy enough. Since you're using a browser to look at this Primer, follow along.
1. Under the FILE menu at the very top left of this screen, you'll find OPEN, OPEN FILE, OPEN DOCUMENT, or words to that effect.
2. Click on it. Some browsers give you the dialogue box that allows you to find your document right away. Internet Explorer, and later versions of Netscape Navigator, require you to click on a BROWSE button or OPEN FILE button to get the dialogue box. When the dialogue box opens up, switch to the A:\ drive (or the floppy disc for MAC users) and open your document. If you saved the file to your hard drive, get it from there.
3. You might have to then click an OK button. The browser will do the rest.
One More Thing
You easily have enough to keep you occupied for the first day. Don't worry, the Primers get less wordy after this.
If you are going to start writing HTML, I suggest you make a point of learning to look at other authors' HTML pages. You say you're already doing that, right? Maybe. What I mean is for you to look at the HTML document a person wrote to present the page you are looking at. Don't look at the pretty page, look behind it at the HTML document.
Why Would I Do That?
Because you can... but seriously, folks. Let's say you run into a page that has a really neat layout, or a fancy text pattern, or a strange grouping of pictures. You'd like to know how to do it.
Well, look, I'm not telling you to steal anything, but let's be honest, if you see some landscaping you like, you're going to use the idea. If you see a room layout you like, you will use the idea to help yourself. That's the point of looking at another page's HTML document. I think it's also the best way to learn HTML. In fact, I am self-taught in HTML simply by looking at others' documents. It was the long way around, believe me. You're going to have a much easier time of it with these Primers.
Here's how you look at an HTML document (known as the "source code"):
1. When you find a page you like, click on VIEW at the top of the screen.
2. Choose DOCUMENT SOURCE from the menu. Sometimes it only reads SOURCE.
3. The HTML document will appear on the screen.
4. Go ahead. Try it with this page. Click on VIEW and then choose the SOURCE.
It's going to look like chicken-scratch right now, but by the end of the week, it'll be readable and you'll be able to find exactly how a certain HTML presentation was performed.
It's A Little Different On AOL
Those of you who use AOL can also see the source. You can do it by placing your pointer on the page, off of an image, and clicking the right mouse button. MAC users should click and hold. A small menu should pop up. One of the items will allow you the ability to view the source.
That's the Primer for today. Print it out if you'd like and get ready to delve in and write your first HTML document. See you tomorrow.
Welcome to HTML
Let's Get Started
What is HTML?
Beginning to Write
View Source
โข Make a Comment
Loading Comments...
โข Web Development Newsletter Signup
Invalid email
You have successfuly registered to our newsletter.
โข ย
โข ย
โข ย
|
__label__pos
| 0.810705 |
Difference between PositionsTotal() and OrdersTotal()
ย
Hi, I found that in MQL5 there's PositionsTotal() and OrdersTotal(). From the documentation PositionsTotal() returns open position, does it means that PositionsTotal() only count buy and sell order while OrdersTotal () also count stop and limit order?
ย
Luandre Ezra:
Hi, I found that in MQL5 there's PositionsTotal() and OrdersTotal(). From the documentation PositionsTotal() returns open position, does it means that PositionsTotal() only count buy and sell order while OrdersTotal () also count stop and limit order?
Positions orders that have been filled or open position in a asset. Orders are stuff that is pending to be filled.
ย
Luandre Ezra: Hi, I found that in MQL5 there's PositionsTotal() and OrdersTotal(). From the documentation PositionsTotal() returns open position, does it means that PositionsTotal() only count buy and sell order while OrdersTotal () also count stop and limit order?
No! In a way to try to explain it, consider the Orders as the "requests", Positions as the "results", and Deals as the "actions" between the requests and the results. MT5 trading functionality is totally different to MT4.
Read the following article for a better understanding ...
Articles
Orders, Positions and Deals in MetaTrader 5
MetaQuotes, 2011.02.01 16:13
Creating a robust trading robot cannot be done without an understanding of the mechanisms of the MetaTrader 5 trading system. The client terminal receives the information about the positions, orders, and deals from the trading server. To handle this data properly using the MQL5, it's necessary to have a good understanding of the interaction between the MQL5-program and the client terminal.
ย
I already read the article and if I want my EA to look for open order I could use OrdersTotal() or PositionsTotal() even though PositionsTotal() could have a different volume then the volume of order execution, but if I want to look for pending order I can only use OrdersTotal(). Is this correct?
ย
No. It is like Fernando wrote: The order is only a request, the position is the result of said request. As soon as an order gets filled, it is deleted from OrdersTotal and a position is opened.
ย
Tobias Johannes Zimmer #:
No. It is like Fernando wrote: The order is only a request, the position is the result of said request. As soon as an order gets filled, it is deleted from OrdersTotal and a position is opened.
Ok I understand now, thanks for make it clear
Reason:
|
__label__pos
| 0.976019 |
cancel
Showing results forย
Search instead forย
Did you mean:ย
How do you block a website using Firefox?
Highlighted
Grand Master
How do you block a website using Firefox?
Wondering if possible to block a website using Firefox browser.
5 REPLIES
Site Moderator
Re: How do you block a website using Firefox?
There are extensions that can block websites, but any extension can easily be bypassed by starting Firefox in Troubleshoot Firefox issues using Safe Mode or by using the portable Firefox version. LeechBlock: https://addons.mozilla.org/firefox/addon/4476 BlockSite: https://addons.mozilla.org/firefox/addon/3145
New User
Re: How do you block a website using Firefox?
I haven't found a way in to block websites directly in Firefox but there is a work-around in MS Windows. If you using XP/Vista/7 you can add websites-to-block through the Windows Control panel. Firefox appears to reference to this same list.
Open Control Panel > Internet Options > Security tab > Double click on Restricted Sites (It's the red circle with the line through the middle) > Sites. Add the websites to the list. Click Apply > OK.
Win 7 is Control Panel > Network and Internet > Internet Properties.
This worked for me.
Re: How do you block a website using Firefox?
You can use a program such as 123BlockMe. It will allow you to block any website on Windows and it works with all browsers including Firefox.
New User
Re: How do you block a website using Firefox?
I have leechblock and love it. but I have a question about it. It is under tools, under add ons, which is great, but it also shows just on the list under tools and I can't seem to get it to not show there. I do not want kids to know it is on the computer that easily. anyone help with that. I have tried right clicking to delete etc, but I can't find anything that allows me to take it off of the tools menu
Support Forum Moderator
Re: How do you block a website using Firefox?
@russdomi
Please stick with the thread that you started.
https://support.mozilla.com/en-US/questions/886405
|
__label__pos
| 0.972442 |
How to Run a Program in Ubuntu?
Introduction
Ubuntu is a widely used Linux distribution known for its stability and user-friendliness. If youโre new to Ubuntu or simply want to enhance your knowledge, understanding how to run a program is fundamental. In this guide, weโll take you through the process, covering everything you need to know about executing programs in Ubuntu.
How to Run a Program in Ubuntu?
Running a program in Ubuntu is a straightforward process, and hereโs how you can do it:
1. Accessing the Terminal
โข To run a program in Ubuntu, youโll often use the Terminal. Press Ctrl + Alt + T to open it. Alternatively, you can search for โTerminalโ in the Ubuntu Dash.
2. Navigating to the Programโs Directory
โข Use the cd command followed by the path to the programโs directory. This step is essential if the program isnโt in a directory thatโs in your systemโs PATH.
3. Executing the Program
โข Once youโre in the programโs directory, type the programโs name and press Enter. For example, if the program is called โmy_program,โ youโd type ./my_program and press Enter.
4. Providing Command-line Arguments (Optional)
โข Some programs may require additional information when executed. You can provide this information as command-line arguments. For instance, ./my_program -arg1 value1 -arg2 value2.
5. Understanding Permissions
โข If you encounter a permission issue, you might need to use the chmod command to make the program executable. For example, chmod +x my_program.
6. Exiting a Program
โข To exit a program, you can typically press Ctrl + C in the Terminal, and it will terminate the programโs execution.
Tips for Smooth Program Execution
Running programs in Ubuntu can be even smoother with these tips:
โข Keep Your System Updated: Ensure your system is up-to-date using the sudo apt update and sudo apt upgrade commands.
โข Check Dependencies: Verify that your program has all the necessary dependencies installed. Use the dpkg or apt commands to manage packages.
โข Use PPA Repositories (if needed): Some software might not be available in the official Ubuntu repositories. In such cases, consider adding a Personal Package Archive (PPA) to access additional software.
โข Explore GUI Alternatives: While the Terminal is powerful, Ubuntu also offers a graphical user interface (GUI) to manage and run programs.
FAQs
How do I run a program with superuser privileges?
To run a program with superuser privileges, use the sudo command before the programโs name in the Terminal. For example, sudo ./my_program will execute โmy_programโ with administrative rights.
Can I run Windows programs on Ubuntu?
Yes, you can run some Windows programs on Ubuntu using compatibility layers like Wine or virtualization software like VirtualBox.
What if a program crashes while running?
If a program crashes during execution, you can try running it with debugging tools like gdb to identify and fix the issue.
How can I check the version of a program?
To check the version of a program, you can often use the --version or -V flag. For instance, ./my_program --version or ./my_program -V will display version information.
Can I schedule programs to run automatically?
Yes, you can schedule programs to run automatically in Ubuntu using tools like cron or systemd timers.
What should I do if a program hangs and doesnโt respond?
If a program becomes unresponsive, you can use the kill command to terminate it. Find the programโs process ID (PID) with ps aux | grep program_name and then use kill PID to stop it.
How to run a program in Ubuntu?
To run a program in Ubuntu, open the terminal and type the programโs name or use the application launcher.
How do I write and run a program in Linux?
To write and run a program in Linux, use a text editor to write the code, save it with the appropriate file extension (e.g., .c for C), compile it using a compiler (e.g., gcc), and then execute the compiled binary.
How do you execute a program in Ubuntu?
To execute a program in Ubuntu, open the terminal, navigate to the programโs directory if necessary, and then type โ./program_nameโ (replace โprogram_nameโ with the actual name of the program) or use the application launcher.
Conclusion
Running a program in Ubuntu is a fundamental skill for any Linux user. With this guide, youโve gained valuable insights into the process and learned some essential tips to ensure smooth execution. As you explore Ubuntu further, remember that practice makes perfect. Experiment with different programs, and donโt hesitate to seek help from the vibrant Ubuntu community if you encounter challenges.
Leave a comment
|
__label__pos
| 0.999802 |
Fouling
Tangent
Tangent to a curve. The red line is tangential to the curve at the point marked by a red dot.
Tangent plane to a sphere
In geometry, the tangent line (or simply tangent) to a plane curve at a given point is the straight line that โjust touchesโ the curve at that point. Leibniz defined it as the line through a pair of infinitely close points on the curve.[1] More precisely, a straight line is said to be a tangent of a curve y = f (x) at a point x = c on the curve if the line passes through the point (c, f (c)) on the curve and has slope f (c) where f is the derivative of f. A similar definition applies to space curves and curves in n-dimensional Euclidean space.
As it passes through the point where the tangent line and the curve meet, called the point of tangency, the tangent line is โgoing in the same directionโ as the curve, and is thus the best straight-line approximation to the curve at that point.
Similarly, the tangent plane to a surface at a given point is the plane that โjust touchesโ the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized; see Tangent space.
The word โtangentโ comes from the Latin tangere, โto touchโ.
History
Euclid makes several references to the tangent (แผฯฮฑฯฯฮฟฮผฮญฮฝฮท) to a circle in book III of the Elements (c. 300 BC).[2] In Apollonius work Conics (c. 225 BC) he defines a tangent as being a line such that no other straight line could fall between it and the curve.[3]
Archimedes (c.โ 287 โ c. โ212 BC) found the tangent to an Archimedean spiral by considering the path of a point moving along the curve.[3]
In the 1630s Fermat developed the technique of adequality to calculate tangents and other problems in analysis and used this to calculate tangents to the parabola. The technique of adeqality is similar to taking the difference between and and dividing by a power of . Independently Descartes used his method of normals based on the observation that the radius of a circle is always normal to the circle itself.[4]
These methods led to the development of differential calculus in the 17th century. Many people contributed. Roberval discovered a general method of drawing tangents, by considering a curve as described by a moving point whose motion is the resultant of several simpler motions.[5] Renรฉ-Franรงois de Sluse and Johannes Hudde found algebraic algorithms for finding tangents.[6] Further developments included those of John Wallis and Isaac Barrow, leading to the theory of Isaac Newton and Gottfried Leibniz.
An 1828 definition of a tangent was โa right line which touches a curve, but which when produced, does not cut itโ.[7] This old definition prevents inflection points from having any tangent. It has been dismissed and the modern definitions are equivalent to those of Leibniz who defined the tangent line as the line through a pair of infinitely close points on the curve.
Tangent line to a curve
A tangent, a chord, and a secant to a circle
The intuitive notion that a tangent line โtouchesโ a curve can be made more explicit by considering the sequence of straight lines (secant lines) passing through two points, A and B, those that lie on the function curve. The tangent at A is the limit when point B approximates or tends to A. The existence and uniqueness of the tangent line depends on a certain type of mathematical smoothness, known as โdifferentiability.โ For example, if two circular arcs meet at a sharp point (a vertex) then there is no uniquely defined tangent at the vertex because the limit of the progression of secant lines depends on the direction in which โpoint Bโ approaches the vertex.
At most points, the tangent touches the curve without crossing it (though it may, when continued, cross the curve at other places away from the point of tangent). A point where the tangent (at this point) crosses the curve is called an inflection point. Circles, parabolas, hyperbolas and ellipses do not have any inflection point, but more complicated curves do have, like the graph of a cubic function, which has exactly one inflection point, or a sinusoid, which has two inflection points per each period of the sine.
Conversely, it may happen that the curve lies entirely on one side of a straight line passing through a point on it, and yet this straight line is not a tangent line. This is the case, for example, for a line passing through the vertex of a triangle and not intersecting the triangleโwhere the tangent line does not exist for the reasons explained above. In convex geometry, such lines are called supporting lines.
At each point, the moving line is always tangent to the curve. Its slope is the derivative; green marks positive derivative, red marks negative derivative and black marks zero derivative. The point (x,y) = (0,1) where the tangent intersects the curve, is not a max, or a min, but is a point of inflection.
Analytical approach
The geometrical idea of the tangent line as the limit of secant lines serves as the motivation for analytical methods that are used to find tangent lines explicitly. The question of finding the tangent line to a graph, or the tangent line problem, was one of the central questions leading to the development of calculus in the 17th century. In the second book of his Geometry, Renรฉ Descartes[8] said of the problem of constructing the tangent to a curve, โAnd I dare say that this is not only the most useful and most general problem in geometry that I know, but even that I have ever desired to knowโ.[9]
Intuitive description
Suppose that a curve is given as the graph of a function, y = f(x). To find the tangent line at the point p = (a, f(a)), consider another nearby point q = (a + h, f(a + h)) on the curve. The slope of the secant line passing through p and q is equal to the difference quotient
As the point q approaches p, which corresponds to making h smaller and smaller, the difference quotient should approach a certain limiting value k, which is the slope of the tangent line at the point p. If k is known, the equation of the tangent line can be found in the point-slope form:
More rigorous description
To make the preceding reasoning rigorous, one has to explain what is meant by the difference quotient approaching a certain limiting value k. The precise mathematical formulation was given by Cauchy in the 19th century and is based on the notion of limit. Suppose that the graph does not have a break or a sharp edge at p and it is neither plumb nor too wiggly near p. Then there is a unique value of k such that, as h approaches 0, the difference quotient gets closer and closer to k, and the distance between them becomes negligible compared with the size of h, if h is small enough. This leads to the definition of the slope of the tangent line to the graph as the limit of the difference quotients for the function f. This limit is the derivative of the function f at x = a, denoted fย โฒ(a). Using derivatives, the equation of the tangent line can be stated as follows:
Calculus provides rules for computing the derivatives of functions that are given by formulas, such as the power function, trigonometric functions, exponential function, logarithm, and their various combinations. Thus, equations of the tangents to graphs of all these functions, as well as many others, can be found by the methods of calculus.
How the method can fail
Calculus also demonstrates that there are functions and points on their graphs for which the limit determining the slope of the tangent line does not exist. For these points the function f is non-differentiable. There are two possible reasons for the method of finding the tangents based on the limits and derivatives to fail: either the geometric tangent exists, but it is a vertical line, which cannot be given in the point-slope form since it does not have a slope, or the graph exhibits one of three behaviors that precludes a geometric tangent.
The graph y = x1/3 illustrates the first possibility: here the difference quotient at a = 0 is equal to h1/3/h = hโ2/3, which becomes very large as h approaches 0. This curve has a tangent line at the origin that is vertical.
The graph y = x2/3 illustrates another possibility: this graph has a cusp at the origin. This means that, when h approaches 0, the difference quotient at a = 0 approaches plus or minus infinity depending on the sign of x. Thus both branches of the curve are near to the half vertical line for which y=0, but none is near to the negative part of this line. Basically, there is no tangent at the origin in this case, but in some context one may consider this line as a tangent, and even, in algebraic geometry, as a double tangent.
The graph y = |x| of the absolute value function consists of two straight lines with different slopes joined at the origin. As a point q approaches the origin from the right, the secant line always has slope 1. As a point q approaches the origin from the left, the secant line always has slope โ1. Therefore, there is no unique tangent to the graph at the origin. Having two different (but finite) slopes is called a corner.
Finally, since differentiability implies continuity, the contrapositive states discontinuity implies non-differentiability. Any such jump or point discontinuity will have no tangent line. This includes cases where one slope approaches positive infinity while the other approaches negative infinity, leading to an infinite jump discontinuity
Equations
When the curve is given by y = f(x) then the slope of the tangent is so by the pointโslope formula the equation of the tangent line at (XY) is
where (xy) are the coordinates of any point on the tangent line, and where the derivative is evaluated at .[10]
When the curve is given by y = f(x), the tangent lineโs equation can also be found[11] by using polynomial division to divide by ; if the remainder is denoted by , then the equation of the tangent line is given by
When the equation of the curve is given in the form f(xy) = 0 then the value of the slope can be found by implicit differentiation, giving
The equation of the tangent line at a point (X,Y) such that f(X,Y) = 0 is then[10]
This equation remains true if but (in this case the slope of the tangent is infinite). If the tangent line is not defined and the point (X,Y) is said singular.
For algebraic curves, computations may be simplified somewhat by converting to homogeneous coordinates. Specifically, let the homogeneous equation of the curve be g(xyz) = 0 where g is a homogeneous function of degree n. Then, if (XYZ) lies on the curve, Eulerโs theorem implies
It follows that the homogeneous equation of the tangent line is
The equation of the tangent line in Cartesian coordinates can be found by setting z=1 in this equation.[12]
To apply this to algebraic curves, write f(xy) as
where each ur is the sum of all terms of degree r. The homogeneous equation of the curve is then
Applying the equation above and setting z=1 produces
as the equation of the tangent line.[13] The equation in this form is often simpler to use in practice since no further simplification is needed after it is applied.[12]
If the curve is given parametrically by
then the slope of the tangent is
giving the equation for the tangent line at as[14]
If , the tangent line is not defined. However, it may occur that the tangent line exists and may be computed from an implicit equation of the curve.
Normal line to a curve
The line perpendicular to the tangent line to a curve at the point of tangency is called the normal line to the curve at that point. The slopes of perpendicular lines have product โ1, so if the equation of the curve is y = f(x) then slope of the normal line is
and it follows that the equation of the normal line at (X, Y) is
Similarly, if the equation of the curve has the form f(xy) = 0 then the equation of the normal line is given by[15]
If the curve is given parametrically by
then the equation of the normal line is[14]
Angle between curves
The angle between two curves at a point where they intersect is defined as the angle between their tangent lines at that point. More specifically, two curves are said to be tangent at a point if they have the same tangent at a point, and orthogonal if their tangent lines are orthogonal.[16]
Multiple tangents at a point
The limaรงon trisectrix: a curve with two tangents at the origin.
The formulas above fail when the point is a singular point. In this case there may be two or more branches of the curve that pass through the point, each branch having its own tangent line. When the point is the origin, the equations of these lines can be found for algebraic curves by factoring the equation formed by eliminating all but the lowest degree terms from the original equation. Since any point can be made the origin by a change of variables, this gives a method for finding the tangent lines at any singular point.
For example, the equation of the limaรงon trisectrix shown to the right is
Expanding this and eliminating all but terms of degree 2 gives
which, when factored, becomes
So these are the equations of the two tangent lines through the origin.[17]
When the curve is not self-crossing, the tangent at a reference point may still not be uniquely defined because the curve is not differentiable at that point although it is differentiable elsewhere. In this case the left and right derivatives are defined as the limits of the derivative as the point at which it is evaluated approaches the reference point from respectively the left (lower values) or the right (higher values). For example, the curve y = |x | is not differentiable at x = 0: its left and right derivatives have respective slopes โ1 and 1; the tangents at that point with those slopes are called the left and right tangents.[18]
Sometimes the slopes of the left and right tangent lines are equal, so the tangent lines coincide. This is true, for example, for the curve y = x 2/3, for which both the left and right derivatives at x = 0 are infinite; both the left and right tangent lines have equation x = 0.
Tangent circles
Two pairs of tangent circles. Above internally and below externally tangent
Two circles of non-equal radius, both in the same plane, are said to be tangent to each other if they meet at only one point. Equivalently, two circles, with radii of ri and centers at (xi, yi), for iย =ย 1,ย 2 are said to be tangent to each other if
โข Two circles are externally tangent if the distance between their centres is equal to the sum of their radii.
โข Two circles are internally tangent if the distance between their centres is equal to the difference between their radii.[19]
Surfaces and higher-dimensional manifolds
The tangent plane to a surface at a given point p is defined in an analogous way to the tangent line in the case of curves. It is the best approximation of the surface by a plane at p, and can be obtained as the limiting position of the planes passing through 3 distinct points on the surface close to p as these points converge to p. More generally, there is a k-dimensional tangent space at each point of a k-dimensional manifold in the n-dimensional Euclidean space.
See also
References
1. ^ Leibniz, G., โNova Methodus pro Maximis et Minimisโ, Acta Eruditorum, Oct. 1684.
2. ^ Euclid. โEuclidโs Elementsโ. Retrieved 1 June 2015.ย
3. ^ a b Shenk, Al. โe-CALCULUS Section 2.8โ (PDF). p.ย 2.8. Retrieved 1 June 2015.ย
4. ^ Katz, Victor J. (2008). A History of Mathematics (3rd ed.). Addison Wesley. p.ย 510. ISBNย 978-0321387004.ย
5. ^ Wolfson, Paul R. (2001). โThe Crooked Made Straight: Roberval and Newton on Tangentsโ. The American Mathematical Monthly. 108 (3): 206โ216. doi:10.2307/2695381.ย
6. ^ Katz, Victor J. (2008). A History of Mathematics (3rd ed.). Addison Wesley. pp.ย 512โ514. ISBNย 978-0321387004.ย
7. ^ Noah Webster, American Dictionary of the English Language (New York: S. Converse, 1828), vol. 2, p. 733, [1]
8. ^ Descartes, Renรฉ (1954). The geometry of Renรฉ Descartes. Courier Dover. p.ย 95. ISBNย 0-486-60068-8.ย External link in |publisher= (help)
9. ^ R. E. Langer (October 1937). โRene Descartesโ. American Mathematical Monthly. Mathematical Association of America. 44 (8): 495โ512. doi:10.2307/2301226. JSTORย 2301226.ย
10. ^ a b Edwards Art. 191
11. ^ Strickland-Constable, Charles, โA simple method for finding tangents to polynomial graphsโ, Mathematical Gazette, November 2005, 466โ467.
12. ^ a b Edwards Art. 192
13. ^ Edwards Art. 193
14. ^ a b Edwards Art. 196
15. ^ Edwards Art. 194
16. ^ Edwards Art. 195
17. ^ Edwards Art. 197
18. ^ Thomas, George B. Jr., and Finney, Ross L. (1979), Calculus and Analytic Geometry, Addison Wesley Publ. Co.: p. 140.
19. ^ Circles For Leaving Certificate Honours Mathematics by Thomas OโSullivan 1997
Sources
External links
|
__label__pos
| 0.983866 |
Rotations and Rotational Symmetry
by
Michelle Brantley
Objective: To understand the effects of rotation on geometric figures. To be able to make conjectures regarding geometric effects as well as algebraic effects.
ย
Materials: GSP
ย
Time: 1-2 days
ย
Level of Difficulty: Medium
ย
Procedure:
1. Construct a triangle in the 1st quadrant . Label those coordinates by highlighting each vertex and clicking on Measure. Click on Coordinates.
ย
2. Now we are ready to rotate the given triangle around the origin 90 degrees. Click on the origin. Under the Transform Menu, click on Mark center "A". Select the entire object and click on Rotate. Rotate the triangle 90 degrees.
ย
In which quadrant is the translated triangle located? How could you rotate the triangle in to the 3rd quadrant? the 4th quadrant?
ย
3. How would the rotations change if the angle of rotation was divided by 2? How many 45 degree rotations would it take for the figure to return to the original position in the 1st quadrant?
ย
4. How would you change the angle of rotation to rotate in the other direction?
ย
5. Now we want to investigate the relationship between the coordinates of the original figure compared to its image. For each rotation, (45 degree clockwise, 45 degree counterclockwise, 90 degree clockwise and 90 degree counterclockwise) find any potential relationships between the original figure's coordinates and the image's coordinates.
|
__label__pos
| 0.999701 |
algorithms software expert
์ํํธ์จ์ด ์ต์คํผํธ 1219 (software expert 1219)
์ํํธ์จ์ด ์ต์คํผํธ 1219 (software expert 1219)
์ํํธ์จ์ด ์ต์คํผํธ 1219
์ฝ๋
import java.io.*;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.Queue;
public class SWE1219 {
static int t = 10, n, e;
static ArrayList<Integer>[] g = new ArrayList[101];
static String[] ins;
public static void main(String[] args) throws IOException {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(System.out));
for (int i = 0; i < 101; i++) {
g[i] = new ArrayList<>();
}
while (t-- != 0) {
for (int i = 0; i < 101; i++) {
g[i].clear();
}
ins = br.readLine().split(" ");
n = Integer.parseInt(ins[0]);
e = Integer.parseInt(ins[1]);
ins = br.readLine().split(" ");
for (int i = 0; i < 2*e; i += 2) {
g[Integer.parseInt(ins[i])].add(Integer.parseInt(ins[i + 1]));
}
bw.write("#" + (10 - t) + " " + ((bfs()) ? 1 : 0) + "\n");
bw.flush();
}
bw.close();
}
public static boolean bfs() {
Queue<Integer> q = new LinkedList<>();
boolean[] v = new boolean[101];
q.offer(0);v[0] = true;
boolean isArrive = false;
while (!q.isEmpty()) {
int cur = q.poll();
if (cur == 99) {isArrive = true;break;}
for (int i = 0; i < g[cur].size(); i++) {
int to = g[cur].get(i);
if (!v[to]) {
v[to] = true;q.offer(to);
}
}
}
return isArrive;
}
}
์ค๋ช
|
__label__pos
| 0.993957 |
[RFC PATCH 1/3] PM / clock_ops: Add pm_clk_add_clk()
From: Grygorii Strashko
Date: Fri Jul 25 2014 - 13:47:12 EST
From: Geert Uytterhoeven <geert+renesas@xxxxxxxxx>
The existing pm_clk_add() allows to pass a clock by con_id. However,
when referring to a specific clock from DT, no con_id is available.
Add pm_clk_add_clk(), which allows to specify the struct clk * directly.
Signed-off-by: Geert Uytterhoeven <geert+renesas@xxxxxxxxx>
---
drivers/base/power/clock_ops.c | 40 ++++++++++++++++++++++++++++++----------
include/linux/pm_clock.h | 3 +++
2 files changed, 33 insertions(+), 10 deletions(-)
diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index b99e6c0..2d5c9c1 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -53,7 +53,8 @@ static inline int __pm_clk_enable(struct device *dev, struct clk *clk)
*/
static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
{
- ce->clk = clk_get(dev, ce->con_id);
+ if (!ce->clk)
+ ce->clk = clk_get(dev, ce->con_id);
if (IS_ERR(ce->clk)) {
ce->status = PCE_STATUS_ERROR;
} else {
@@ -63,15 +64,8 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
}
}
-/**
- * pm_clk_add - Start using a device clock for power management.
- * @dev: Device whose clock is going to be used for power management.
- * @con_id: Connection ID of the clock.
- *
- * Add the clock represented by @con_id to the list of clocks used for
- * the power management of @dev.
- */
-int pm_clk_add(struct device *dev, const char *con_id)
+static int __pm_clk_add(struct device *dev, const char *con_id,
+ struct clk *clk)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
@@ -93,6 +87,8 @@ int pm_clk_add(struct device *dev, const char *con_id)
kfree(ce);
return -ENOMEM;
}
+ } else {
+ ce->clk = clk;
}
pm_clk_acquire(dev, ce);
@@ -102,6 +98,30 @@ int pm_clk_add(struct device *dev, const char *con_id)
spin_unlock_irq(&psd->lock);
return 0;
}
+/**
+ * pm_clk_add - Start using a device clock for power management.
+ * @dev: Device whose clock is going to be used for power management.
+ * @con_id: Connection ID of the clock.
+ *
+ * Add the clock represented by @con_id to the list of clocks used for
+ * the power management of @dev.
+ */
+int pm_clk_add(struct device *dev, const char *con_id)
+{
+ return __pm_clk_add(dev, con_id, NULL);
+}
+
+/**
+ * pm_clk_add_clk - Start using a device clock for power management.
+ * @dev: Device whose clock is going to be used for power management.
+ * @clk: Clock pointer
+ *
+ * Add the clock to the list of clocks used for the power management of @dev.
+ */
+int pm_clk_add_clk(struct device *dev, struct clk *clk)
+{
+ return __pm_clk_add(dev, NULL, clk);
+}
/**
* __pm_clk_remove - Destroy PM clock entry.
diff --git a/include/linux/pm_clock.h b/include/linux/pm_clock.h
index 8348866..6981aa2 100644
--- a/include/linux/pm_clock.h
+++ b/include/linux/pm_clock.h
@@ -18,6 +18,8 @@ struct pm_clk_notifier_block {
char *con_ids[];
};
+struct clk;
+
#ifdef CONFIG_PM_CLK
static inline bool pm_clk_no_clocks(struct device *dev)
{
@@ -29,6 +31,7 @@ extern void pm_clk_init(struct device *dev);
extern int pm_clk_create(struct device *dev);
extern void pm_clk_destroy(struct device *dev);
extern int pm_clk_add(struct device *dev, const char *con_id);
+extern int pm_clk_add_clk(struct device *dev, struct clk *clk);
extern void pm_clk_remove(struct device *dev, const char *con_id);
extern int pm_clk_suspend(struct device *dev);
extern int pm_clk_resume(struct device *dev);
--
1.7.9.5
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
|
__label__pos
| 0.914178 |
The Pros and Cons of ETL and ELT for Data Warehousing
Hardik Shah
4 min readMar 13
--
For businesses to successfully manage and analyze massive amounts of data, data warehousing is a critical procedure. In order to make wise choices, it enables businesses to store, arrange, and retrieve data from various sources. However, integrating data into a data warehouse can be a difficult and time-consuming procedure. ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) are used in this situation.
In this blog post, weโll look at the benefits and drawbacks of ETL and ELT for data warehousing, as well as how companies can select the best data integration technique for their requirements.
The Pros of ETL for Data Warehousing
Data is extracted from different sources, transformed to meet the needs of the data warehouse, and then loaded into the warehouse using the traditional data integration technique known as ETL. The following are some advantages of using ETL for data warehousing:
1. Data cleansing: Before being loaded into the data warehouse, data is cleaned using ETL to ensure that it is accurate, full, and consistent. This guarantees that the data centre has high-quality information that can be analysed and used to make decisions.
2. Data transformation: ETL also enables the conversion of data from its original format to the format needed by the data warehouse. By doing this, it is made possible for the data to be consistent and simple to incorporate into the data warehouse.
3. Real-time updates: The data repository can receive real-time updates from ETL. More recent information is made available as a result, which is beneficial in particular for sectors where change happens quickly.
4. ETL processes are generally simpler to manage than ELT processes because they give users more control over the consistency and quality of the data.
The Pros of ELT for Data Warehousing
Data is extracted from different sources and loaded into the data warehouse before being transformed using ELT, a more recent data integration technique. Using ELT for data storage has the following advantages:
1. Flexibility: Because ELT enables data processing at the source, it gives users more freedom when working with large amounts of data. As a result, there would be no need for a distinct transformation layer, which could shorten processing times.
2. Scalability: ELT is more scalable than ETL because it provides for the quick and effective processing of big volumes of data.
3. Reduced storage needs: By doing away with the necessity for a separate transformation layer, ELT can reduce storage needs. Businesses may experience significant cost savings as a consequence of this.
4. Better for unstructured data: Since unstructured data can be loaded directly into the data warehouse before being transformed, ELT is better suited for managing unstructured data, such as social media or machine-generated data.
The Cons of ETL and ELT for Data Warehousing
Despite the fact that ETL and ELT are frequently used for data storage, they are not without flaws. Weโll examine some of the major drawbacks of these strategies in this part.
1. Complexity
Processes like ETL and ELT can be challenging, particularly when working with a lot of data from various sources. It can take a lot of time and resources to extract data from different sources, modify it to fit the target data model, and load the modified data into a data warehouse. Longer development times, higher costs, and a greater chance of errors can result from this complexity.
2. Data Quality
The risk of problems with data quality is another possible disadvantage of ETL and ELT. Errors or inconsistencies could happen as data is extracted from various sources, transformed, and put into the data warehouse. These problems may cause the data warehouse to contain inaccurate or incomplete data, which could have a detrimental effect on decision-making and analysis.
3. Scalability
Scaling ETL and ELT can be difficult, especially when working with significant amounts of data. Performance problems may arise as a result of the need for longer ETL or ELT processing times as data volume grows. These procedures might need to be scaled up with more expensive hardware or software.
4. Maintenance
ETL and ELT process maintenance can be a difficult and time-consuming job. The ETL or ELT process may need to be updated to account for changes as new data sources are added or as current sources are altered. This procedure can be difficult and time-consuming, especially for large data warehouses with numerous data sources.
5. Real-time processing
As batch processing methodologies, ETL and ELT are not intended for real-time data handling. For businesses that depend on current info for decision-making, this may be a drawback. A distinct strategy, such as event-driven architecture or stream processing, is needed for real-time data processing.
Conclusion
In summation, when it comes to data warehousing, both ETL and ELT have benefits and drawbacks. While ELT is a more recent strategy that makes use of cloud-based processing, ETL is a tried-and-true method that has been used for many years. The best option will depend on the particular needs and objectives of the company because both strategies have their advantages and disadvantages.
Itโs crucial to take into account aspects like data intricacy, data quality, scalability, maintenance needs, and real-time processing requirements when choosing between ETL and ELT. Businesses can select the strategy that is best suited to their needs and goals by carefully weighing the advantages and disadvantages of each approach, leading to a more effective and efficient data warehousing solution.
Reference:
--
--
|
__label__pos
| 0.961018 |
Basic Docker commands
Docker revolutionizes software development, deployment, and management, requiring mastery of basic commands for experienced developers and newcomers to navigate its powerful capabilities. This article provides a comprehensive overview of fundamental Docker commands, enabling users to check the status of containers, check, or access images, and manage containers.
Pulling Image:
docker pull <image name>
For accessing the container:
$ docker ps
$ docker ps -a
$ docker inspect <container ID/name>
$ docker logs <container ID/name>
Checking of the status:
$ Docker ps
$ Docker ps -a
$ docker inspect -f '{{.State.Status}}' <container ID>
To delete container:
$ docker ps -a
$ docker rm <container ID>
To check the Docker Disk Usage:
$ docker system df
To check the disk usage of Docker images:
$ docker images
To check the disk usage of Docker containers:
$ docker ps -s
As we conclude our journey through basic Docker commands, remember that knowing these fundamental tools opens a world of possibilities in containerized development and deployment. With each command, you're one step closer to unleashing the full potential of container technology. Hope this is somewhat helpful as your beginner guide!
Leave a Comment
Your email address will not be published. Required fields are marked *
|
__label__pos
| 0.98086 |
org.apache.tapestry5.services
Interface BaseURLSource
All Known Implementing Classes:
BaseURLSourceImpl
public interface BaseURLSource
Used when switching between normal/insecure (HTTP) and secure (HTTPS) mode. When a switch occurs, it is no longer possible to use just paths, instead absolute URLs (including the scheme, hostname and possibly port) must be generated. The default implementation of this is simple-minded: it just tacks the correct scheme in front of Request.getServerName(). In production, behind a firewall, it is often necessary to do a bit more, since getServerName() will often be the name of the internal server (not visible to the client web browser), and a hard-coded name of a server that is visible to the web browser is needed. Further, in testing, non-default ports are often used. In those cases, an overriding contribution to the ServiceOverride service will allow a custom implementation to supercede the default version. You may also contribute application specific values for the following SymbolConstants: SymbolConstants.HOSTNAME, SymbolConstants.HOSTPORT and SymbolConstants.HOSTPORT_SECURE to alter the behavior of the default BaseURLSource implementation. The default values for the SymbolConstants require Request context to be available. If you contribute specific values for the specified SymbolConstants, it's safe to use the default implementation of this service outside of request context, for example in a batch job. For SymbolConstants.HOSTNAME, a value starting with a dollar sign ($) will be resolved using System.getenv() - contributing "$HOSTNAME" for SymbolConstants.HOSTNAME is the most sensible choice for a dynamic value that doesn't use Request.getServerName().
Method Summary
ย String getBaseURL(booleanย secure)
ย ย ย ย ย ย ย ย ย ย Returns the base portion of the URL, before the context path and servlet path are appended.
ย
Method Detail
getBaseURL
String getBaseURL(booleanย secure)
Returns the base portion of the URL, before the context path and servlet path are appended. The return value should not end with a slash; it should end after the host name, or after the port number. The context path, servlet path, and path info will be appended to the returned value.
Parameters:
secure - whether a secure "https" or insecure "http" base URL should be returned
Returns:
the base URL ready for additional extensions
Copyright ยฉ 2003-2012 The Apache Software Foundation.
|
__label__pos
| 0.96373 |
PageRenderTime 12ms CodeModel.GetById 2ms app.highlight 7ms RepoModel.GetById 1ms app.codeStats 1ms
/tst/org/diffkit/diff/conf/tst/TestMagicPlanBuilder.groovy
http://diffkit.googlecode.com/
Groovy | 160 lines | 118 code | 20 blank | 22 comment | 22 complexity | 16b6966d68fe124c5cc4b7c5bc8dc4cc MD5 | raw file
1/**
2 * Copyright 2010-2011 Joseph Panico
3 *
4 * Licensed under the Apache License, Version 2.0 (the "License");
5 * you may not use this file except in compliance with the License.
6 * You may obtain a copy of the License at
7 *
8 * http://www.apache.org/licenses/LICENSE-2.0
9 *
10 * Unless required by applicable law or agreed to in writing, software
11 * distributed under the License is distributed on an "AS IS" BASIS,
12 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 * See the License for the specific language governing permissions and
14 * limitations under the License.
15 */
16package org.diffkit.diff.conf.tst
17
18
19import org.apache.commons.io.FilenameUtils;
20import org.diffkit.db.DKDBConnectionInfo
21import org.diffkit.db.tst.DBTestSetup;
22import org.diffkit.diff.conf.DKMagicPlan
23import org.diffkit.diff.conf.DKMagicPlanBuilder
24import org.diffkit.diff.diffor.DKEqualsDiffor
25import org.diffkit.diff.engine.DKSourceSink;
26import org.diffkit.diff.sns.DKDBSource
27import org.diffkit.diff.sns.DKFileSource
28import org.diffkit.util.DKResourceUtil;
29import org.diffkit.db.DKDBFlavor;
30
31
32
33/**
34 * @author jpanico
35 */
36public class TestMagicPlanBuilder extends GroovyTestCase {
37
38 /**
39 * test that including two mutually exclusive properties, properties that trigger
40 * mutually exclusive rules, results in Exception
41 */
42 public void testMagicExclusion() {
43 def lhsFileResourcePath = 'org/diffkit/diff/conf/tst/test.lhs.csv'
44 def rhsFileResourcePath = 'org/diffkit/diff/conf/tst/test.rhs.csv'
45 def lhsFile = DKResourceUtil.findResourceAsFile(lhsFileResourcePath)
46 def rhsFile = DKResourceUtil.findResourceAsFile(rhsFileResourcePath)
47
48 DKMagicPlan magicPlan = []
49 magicPlan.lhsFilePath =lhsFile.absolutePath
50 magicPlan.rhsFilePath = rhsFile.absolutePath
51 magicPlan.lhsDBTableName = 'LHS_TABLE'
52 magicPlan.rhsDBTableName = 'RHS_TABLE'
53
54 DKMagicPlanBuilder builder = [magicPlan]
55 shouldFail(RuntimeException) {
56 def builtPlan = builder.build()
57 assert builtPlan
58 }
59 }
60
61 public void testFullyMagicFileBuild(){
62 def lhsFileResourcePath = 'org/diffkit/diff/conf/tst/test.lhs.csv'
63 def rhsFileResourcePath = 'org/diffkit/diff/conf/tst/test.rhs.csv'
64 DKMagicPlan magicPlan = []
65 def lhsFile = DKResourceUtil.findResourceAsFile(lhsFileResourcePath)
66 def rhsFile = DKResourceUtil.findResourceAsFile(rhsFileResourcePath)
67 magicPlan.lhsFilePath =lhsFile.absolutePath
68 magicPlan.rhsFilePath = rhsFile.absolutePath
69 magicPlan.delimiter = '\\,'
70
71 DKMagicPlanBuilder builder = [magicPlan]
72 def builtPlan = builder.build()
73 assert builtPlan
74
75 def lhsSource = builtPlan.lhsSource
76 assert lhsSource
77 assert lhsSource instanceof DKFileSource
78 def lhsModel = lhsSource.model
79 assert lhsModel
80 assert lhsModel.columns
81 assert lhsModel.columns.length ==3
82 assert lhsModel.columns[0].name == 'column1'
83 def lhsSourceFile = lhsSource.file
84 assert lhsSourceFile
85 assert lhsSourceFile.exists()
86 def normalizedLhsSourceFilePath = FilenameUtils.normalize(lhsSourceFile.path)
87 def normalizedLhsFileResourcePath = FilenameUtils.normalize(lhsFileResourcePath)
88 assert normalizedLhsSourceFilePath.endsWith(normalizedLhsFileResourcePath)
89
90 def tableComparison = builtPlan.tableComparison
91 assert tableComparison
92 assert tableComparison.rhsModel == builtPlan.rhsSource.model
93 def comparisonMap = tableComparison.map
94 assert comparisonMap
95 assert comparisonMap.length == 3
96 for ( i in 0..2 ) {
97 comparisonMap[i]._lhsColumn.name == comparisonMap[i]._rhsColumn.name
98 comparisonMap[i]._diffor instanceof DKEqualsDiffor
99 }
100 assert tableComparison.diffIndexes == [1,2]
101 assert tableComparison.displayIndexes == [[0],[0]]
102 }
103
104 public void testFullyMagicDBBuild(){
105 DBTestSetup.setupDB(new File('org/diffkit/diff/conf/tst/test.dbsetup.xml'), (File[])[new File('org/diffkit/diff/conf/tst/dbConnectionInfo.xml')], 'org/diffkit/diff/conf/tst/test.lhs.csv', 'org/diffkit/diff/conf/tst/test.rhs.csv')
106 DKDBConnectionInfo dbConnectionInfo = ['test', DKDBFlavor.H2, 'mem:conf.test;DB_CLOSE_DELAY=-1', null, null, 'test', 'test']
107
108 DKMagicPlan magicPlan = []
109 magicPlan.lhsDBTableName = 'LHS_TABLE'
110 magicPlan.rhsDBTableName = 'RHS_TABLE'
111 magicPlan.dbConnectionInfo = dbConnectionInfo
112
113 DKMagicPlanBuilder builder = [magicPlan]
114 def builtPlan = builder.build()
115 assert builtPlan
116
117 def tableComparison = builtPlan.tableComparison
118 assert tableComparison
119 assert tableComparison.lhsModel == builtPlan.lhsSource.model
120 assert tableComparison.rhsModel == builtPlan.rhsSource.model
121 def comparisonMap = tableComparison.map
122 assert comparisonMap
123 assert comparisonMap.length == 3
124 for ( i in 0..2 ) {
125 comparisonMap[i]._lhsColumn.name == comparisonMap[i]._rhsColumn.name
126 comparisonMap[i]._diffor instanceof DKEqualsDiffor
127 }
128 assert tableComparison.diffIndexes == [1]
129 assert tableComparison.displayIndexes == [[0,2],[0,2]]
130
131 def lhsSource = builtPlan.lhsSource
132 assert lhsSource
133 assert lhsSource instanceof DKDBSource
134 def lhsDBTable = lhsSource.table
135 assert lhsDBTable
136 assert lhsDBTable.tableName == 'LHS_TABLE'
137 assert lhsDBTable.columns
138 assert lhsDBTable.columns.length == 3
139 assert lhsDBTable.columns[0].name == 'COLUMN1'
140 assert lhsDBTable.columns[0].DBTypeName == 'VARCHAR'
141
142 def rhsSource = builtPlan.rhsSource
143 assert rhsSource
144 assert rhsSource instanceof DKDBSource
145 def rhsDBTable = rhsSource.table
146 assert rhsDBTable
147 assert rhsDBTable.tableName == 'RHS_TABLE'
148 def rhsModel = rhsSource.model
149 assert rhsModel
150 assert rhsModel.name == 'PUBLIC.RHS_TABLE'
151 assert rhsModel.columns
152 assert rhsModel.columns.length ==3
153 assert rhsModel.columns[0].name == 'COLUMN1'
154
155 def sink = builtPlan.sink
156 assert sink
157 assert sink.kind == DKSourceSink.Kind.STREAM
158 }
159}
160
|
__label__pos
| 0.6661 |
Relation binaire et ensemble vide โ Les-mathematiques.net The most powerful custom community solution in the world
Relation binaire et ensemble vide
$\newcommand{\dom}{\operatorname{dom}}\newcommand{\im}{\operatorname{im}}$Bonjour
Dans un livre, j'ai besoin d'un รฉclaircissement par rapport ร un passage. Merci d'avance.
On considรจre deux ensembles (non vides) $X$ et $Y$ et une relation binaire $R$ entre les deux : $R$ = l'ensemble des couples $(x,y) \in X \times Y$ tels que $xRy$ (on confond une relation avec l'ensemble (le graphe) qui la dรฉfinit, partie du produit cartรฉsien $X \times Y$).
On dรฉfinit $\dom R = \{x \mid \exists y,\ xRy \},\ \im R =\{y \mid \exists y,\ xRy \}$ (1รจre et 2รจme projection de $R$).
Puis le livre affirme (sans plus de prรฉcision et sans dรฉmonstration) pour l'exemple choisi : $X$ = ensemble d'hommes, $Y$ = ensemble de femmes, $R$ = ensemble des couples mariรฉs : $\dom \emptyset = \emptyset$, ce que j'ai du mal ร comprendre.
En effet, on peut considรฉrer que $\emptyset = \{(x,y) \in R \mid (x,y) \notin X \times Y \} \subset R$ ; dans ce cas, on a bien $\dom \emptyset = \{x \mid \exists y, \ (x,y) \in \emptyset \} = \emptyset \subset X$ (vu que c'est impossible).
Ou bien que : $\emptyset = \{(x,y) \in R \mid non (x R y) \} \subset R$. Dans ce cas, $\dom \emptyset = \{x \mid \exists y,\ non ( x R y) \} = X$ (dรจs que le cardinal de $R$ est supรฉrieur ou รฉgal ร $2$).
Questions.
1) Comment peut-on obtenir deux rรฉsultats diffรฉrents, vu qu'il n'existe qu'un seul ensemble vide ?
2) Il me semble que la 2รจme dรฉfinition pour l'ensemble vide est plus proche de celle voulue par le livre, mais elle ne colle pas avec le rรฉsultat. Quel est votre avis ?
Je suis un peu perdue lร , merci d'avance.
Rรฉponses
โข Il y a beaucoup dโรฉlรฉments dans la relation vide ? Donc tu as la rรฉponse ร la question.
Algebraic symbols are used when you do not know what you are talking about.
ย ย ย ย ย ย -- Schnoebelen, Philippe
โข Merci beaucoup nicolas.patrois, j'ai compris ! C'est plus clair si on parle de 1รจre et 2รจme projection (de l'ensemble vide), et ceci est vrai pour toute relation (l'exemple donnรฉ n'a rien ร faire lร -dedans).
En fait, il ne faut pas prendre la dรฉfinition au pied de la lettre (mot pour mot), mais comprendre ce que l'auteur a voulu dire, mรชme dans ce cas extrรชme.
โข Une relation binaire est juste un ensemble de couples. L'acadรฉmisme entretient une ambiguรฏtรฉ en voulant prรฉciser les supports. Cette question vaut aussi pour les fonctions que certains s'arrachent les cheveux ร ne pas vouloir identifier ร ce qu'ils appellent leur graphe.
Aide les autres comme toi-mรชme car ils sont toi, ils sont vraiment toi
โข Ah mais oui, l'ensemble vide est commun ร tous les ensembles, donc il n'a rien ร voir avec l'exemple donnรฉ (c'รฉtait ambigu dans le livre car les deux รฉnoncรฉs sont ร la suite).
$\dom \emptyset = \{x \in X \mid \exists y \in Y,\ (x,y) \in \emptyset \} = \emptyset $. En effet, on suppose qu'il existe $x \in X$ tel que $ \exists y \in Y, (x,y) \in \emptyset$, c'est impossible, donc un tel $x$ n'existe pas.
Connectez-vous ou Inscrivez-vous pour rรฉpondre.
Success message!
|
__label__pos
| 0.510544 |
How do you define a cubic function with zeros #r_1, r_2, r_3# such that #r_1+r_2+r_3 = 3#, #r_1r_2+r_2r_3+r_3r_1 = -1#, #r_1r_2r_3 = -3# ?
1 Answer
Oct 26, 2016
Answer:
#f(x) = x^3-3x^2-x+3#
Explanation:
Suppose #f(x) = x^3+ax^2+bx+c# has zeros #r_1, r_2, r_3#
Then:
#f(x) = (x-r_1)(x-r_2)(x-r_3)#
#color(white)(f(x)) = x^3-(r_1+r_2+r_3)x^2+(r_1 r_2 + r_2 r_3 + r_3 r_1)x-r_1 r_2 r_3#
We are told that:
#{ (r_1+r_2+r_3 = 3), (r_1 r_2 + r_2 r_3 + r_3 r_1 = -1), (r_1 r_2 r_3 = -3) :}#
Hence:
#f(x) = x^3-3x^2-x+3#
In this particular example, we can check our result by factoring the cubic by grouping:
#x^3-3x^2-x+3 = (x^3-3x^2)-(x-3)#
#color(white)(x^3-3x^2-x+3) = x^2(x-3)-1(x-3)#
#color(white)(x^3-3x^2-x+3) = (x^2-1)(x-3)#
#color(white)(x^3-3x^2-x+3) = (x-1)(x+1)(x-3)#
So we can write:
#{ (r_1 = 1), (r_2 = -1), (r_3 = 3) :}#
and we find:
#{ (r_1+r_2+r_3 = 1-1+3 = 3), (r_1 r_2 + r_2 r_3 + r_3 r_1 =(1)(-1) + (-1)(3) + (3)(1) = -1), (r_1 r_2 r_3 = (1)(-1)(3) = -3) :}#
|
__label__pos
| 1 |
Commit d3b39189 authored by Administrator's avatar Administrator
Browse files
Check if the last line is terminated by a <newline> character
According to the Posix standard a line of a text is defined as follows:
3.206 Line
A sequence of zero or more non- <newline> characters plus a terminating
<newline> character.
The test checks if this is valid for all newly added or changed text files.
I stumbeled over the issue when adding a new header file the list of header
files used to generate a ROOT dictionary.
The previous last file of the list did not have a proper file ending and
attaching the new header file results in a ROOT dictionary source code with
the following line
'#endif#ifdef'
which could be properly parsed and compiled by the compiler but results in an
error at run time when loading the library.
To avoid such problems in future the test was added.
parent c8097a83
......@@ -111,6 +111,24 @@ FileFormatCheck:
- git fetch upstream
- scripts/check-file-format.sh upstream
FileEndCheck:
stage: checkFormat
image: alpine
tags:
- docker
only:
refs:
- merge_requests
variables:
- $CI_MERGE_REQUEST_PROJECT_PATH == "computing/cbmroot" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
script:
# Get the upstream repository manually. I did not find any other way to have it for
# comparison
- apk update && apk add git bash file
- scripts/connect_upstream_repo.sh $CI_MERGE_REQUEST_PROJECT_URL
- git fetch upstream
- scripts/check-file-ending.sh upstream
CbmRoot_Continuous:
stage: build
tags:
......
#!/bin/bash
if [[ $# -eq 1 ]]; then
UPSTREAM=$1
else
if [ -z $UPSTREAM ]; then
UPSTREAM=$(git remote -v | grep git.cbm.gsi.de[:/]computing/cbmroot | cut -f1 | uniq)
if [ -z $UPSTREAM ]; then
echo "Error: Name of upstream repository not provided and not found by automatic means"
echo 'Please provide if by checking your remotes with "git remote -v" and exporting UPSTREAM'
echo "or passing as an argument"
exit -1
fi
fi
fi
echo "Upstream name is :" $UPSTREAM
# If one wants to find all files in the CbmRoot and not only the changed ones
# uncomment the follwing line and comment the next two
#CHANGED_FILES=$(find . -type f -not \( -path "./.git/*" -o -path "./geometry/*" -o -path "./input/*" -o -path "./external/*" -o -path "./parameters/*" -prune \))
BASE_COMMIT=$UPSTREAM/master
CHANGED_FILES=$(git diff --name-only $BASE_COMMIT)
echo ""
for file in $CHANGED_FILES; do
# First check for text files and only do the further test on line endings
# for text files
result=$(file $file | grep -v text)
if [[ -z $result ]]; then
if [[ $(tail -c 1 $file) ]]; then
echo "File $file does not finish with end of line"
okay=false
fi
fi
done
if [[ "$okay" = "false" ]]; then
echo ""
echo "Not all files have the correct file ending"
echo "Test failed"
echo ""
exit 1
else
exit 0
fi
Supports Markdown
0% or .
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
Please register or to comment
|
__label__pos
| 0.724186 |
2000 What is Netiquette? What are Online Ethics? Learn the Concepts Today
Select Country
Australia flag Australia Canada flag Canada/English Germany flag Deutschland Spain flag Espaรฑa France flag France China flag Hong Kong India flag India Ireland flag Ireland Netherlands flag Nederland New Zealand flag New Zealand Portugal flag Portugal South Africa flag South Africa Switzerland flag Schweiz Switzerland flag Suisse United Kingdom flag United Kingdom United States flag United States Japan flag ๆฅ ๆฌ ร
Netiquette and Online Ethics: What Are They?
Netiquette is a combination of the words network and etiquette, and is defined as a set of rules for acceptable online behavior. Similarly, online ethics focuses on the acceptable use of online resources in an online social environment.
Both phrases are frequently interchanged and are often combined with the concept of a โnetizenโ which itself is a contraction of the words internet and citizen and refers to both a person who uses the internet to participate in society, and an individual who has accepted the responsibility of using the internet in productive and socially responsible ways.
Underlying this overall concept of socially responsible internet use are a few core pillars, though the details underneath each pillar are still subject to debate.
At a high level using netiquette, applying online ethics, or being a good netizen means:
Most internet users automatically apply the same responsible respectful behavior online as they do in every other environment and by nature apply netiquette an online ethics, and are good netizens. The minority that fail to apply societal values in some or any environment - including the internet - are quickly identified as exceptions to be dealt with on a social, or criminal level.
ย
0
|
__label__pos
| 0.522876 |
std::current_exception
ๆฅ่ชcppreference.com
< cppโ | error
ย
ย
ๅฎ็จๅทฅๅ
ทๅบ
็ฑปๅ็ๆฏๆ (basic types, RTTI, type traits)
ๅจๆๅ
ๅญ็ฎก็
้่ฏฏๅค็
็จๅบๅฎ็จๅทฅๅ
ท
ๅฏๅๅๆฐๅฝๆฐ
ๆฅๆๅๆถ้ด
ๅฝๆฐๅฏน่ฑก
initializer_list๏ผC++11๏ผ
bitset
hash๏ผC++11๏ผ
ๅ
ณ็ณป่ฟ็ฎ็ฌฆ
ๅๆ๏ผ
Relational operators
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
rel_ops::operator!=
rel_ops::operator>
rel_ops::operator<=
rel_ops::operator>=
ๅๅๅ
็ป
ๅๆ๏ผ
Pairs and tuples
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
pair
tuple๏ผC++11๏ผ
piecewise_construct_t๏ผC++11๏ผ
piecewise_construct๏ผC++11๏ผ
ๆๆ๏ผ่ฟๆๅ็งปๅจ
ๅๆ๏ผ
Swap, forward and move
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
swap
forward๏ผC++11๏ผ
move๏ผC++11๏ผ
move_if_noexcept๏ผC++11๏ผ
declval๏ผC++11๏ผ
ย
้่ฏฏๅค็
ๅผๅธธๅค็
ๅๆ๏ผ
Exception handling
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
exception
uncaught_exception
exception_ptr๏ผC++11๏ผ
make_exception_ptr๏ผC++11๏ผ
current_exception๏ผC++11๏ผ
rethrow_exception๏ผC++11๏ผ
nested_exception๏ผC++11๏ผ
throw_with_nested๏ผC++11๏ผ
rethrow_if_nested๏ผC++11๏ผ
ๅผๅธธๅค็ๆ
้
ๅๆ๏ผ
Exception handling failures
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
terminate
terminate_handler
get_terminate๏ผC++11๏ผ
set_terminate
unexpected๏ผๅทฒๅผ็จ๏ผ
bad_exception
unexpected_handler๏ผๅทฒๅผ็จ๏ผ
get_unexpected๏ผC++11๏ผ๏ผๅทฒๅผ็จ๏ผ
set_unexpected๏ผๅทฒๅผ็จ๏ผ
ๅผๅธธ็ฑป
ๅๆ๏ผ
Exception categories
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
logic_error
invalid_argument
domain_error
length_error
out_of_range
runtime_error
range_error
overflow_error
underflow_error
้่ฏฏไปฃ็
ๅๆ๏ผ
Error codes
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
้่ฏฏไปฃ็
errno
ๆญ่จ
ๅๆ๏ผ
Assertions
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
assert
SYSTEM_ERROR่ฎพๆฝ
ๅๆ๏ผ
system_error facility
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
error_category๏ผC++11๏ผ
generic_category๏ผC++11๏ผ
system_category๏ผC++11๏ผ
error_condition๏ผC++11๏ผ
errc๏ผC++11๏ผ
error_code๏ผC++11๏ผ
system_error๏ผC++11๏ผ
ย
ๅจๅคดๆไปถ <exception> ไธญๅฎไน
std::exception_ptr current_exception()
๏ผC++11 ่ตท๏ผ
ๅฆๆๅจๅผๅธธๅค็่ฟ็จไธญ๏ผ้ๅธธๅจไธไธชcatchๆก๏ผ๏ผๆไฝไบๅฝๅ็ๅผๅธธๅฏน่ฑก๏ผๅนถๅๅปบไธไธชstd::exception_ptrๆๆ่ฏฅๅผๅธธๅฏน่ฑก็ๅผ็จ๏ผๆ่ฏฅๅผๅธธๅฏน่ฑก็ๅฏๆฌ๏ผ่ฟๆฏๅฎ็ฐๅฎไน็ๅฏๆฌๆฏๅถ๏ผ
ๅๆ๏ผ
If called during exception handling (typically, in a catch clause), captures the current exception object and creates an std::exception_ptr that holds a reference to that exception object, or to a copy of that exception object (it is implementation-defined if a copy is made)
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
ๅฆๆๅฎ็ฐ่ฟไธชๅ่ฝ้่ฆ่ฐ็จnewๅ่ฐ็จๅคฑ่ดฅ๏ผ่ฟๅ็ๆ้ๅฐไธพ่กstd::bad_alloc็ไธไธชๅฎไพ
ๅๆ๏ผ
If the implementation of this function requires a call to new and the call fails, the returned pointer will hold a reference to an instance of std::bad_alloc
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
ๅฆๆๆญคๅ่ฝ็ๅฎ็ฐ้่ฆๅคๅถๆ่ท็ๅผๅธธๅฏน่ฑกๅๅฎ็ๆท่ดๆ้ ๅฝๆฐๆๅบไธไธชๅผๅธธ๏ผ่ฟๅ็ๆ้ๅฐไธพ่กไธไธชๆๅบ็ๅผๅธธใๅฆๆ่ฟๆๅบ๏ผๆๅบ็ๅผๅธธๅฏน่ฑก็ๆท่ดๆ้ ๅฝๆฐ่ฟๅ็ๆ้ๅฏ่ฝไผๆๆไธไธชๅผ็จ็ไธไธชๅฎไพstd::bad_exceptionๆ็ ด็ๆญปๅพช็ฏ.....
ๅๆ๏ผ
If the implementation of this function requires to copy the captured exception object and its copy constructor throws an exception, the returned pointer will hold a reference to the exception thrown. If the copy constructor of the thrown exception object also throws, the returned pointer may hold a reference to an instance of std::bad_exception to break the endless loop.
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
ๅฆๆๅฝๆฐ่ขซ่ฐ็จๆถไนไธไพๅคๅค็๏ผๅ่ฟๅไธไธช็ฉบ็std::exception_ptr.
ๅๆ๏ผ
If the function is called when no exception is being handled, an empty std::exception_ptr is returned.
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
็ฎๅฝ
[็ผ่พ] ๅๆฐ
๏ผๆ ๏ผ
ๅๆ๏ผ
(none)
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
[็ผ่พ] ่ฟๅๅผ
std::exception_ptrไธไธชๅ่็ๅผๅธธๅฏน่ฑก๏ผๆไธไธชๅผๅธธๅฏน่ฑก็ๅฏๆฌ๏ผๆๅฎไพstd::bad_allocstd::bad_exception็ไธไธชๅฎไพ็ๅฎไพ.
ๅๆ๏ผ
An instance of std::exception_ptr holding a reference to the exception object, or a copy of the exception object, or to an instance of std::bad_alloc or to an instance of std::bad_exception.
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
[็ผ่พ] ไพๅค
noexcept specification:ย ย
noexcept
ย ย ๏ผC++11 ่ตท๏ผ
[็ผ่พ] ไธบไพ
[edit]
#include <iostream>
#include <string>
#include <exception>
#include <stdexcept>
ย
void handle_eptr(std::exception_ptr eptr) // passing by value is ok
{
try {
if (eptr != std::exception_ptr()) {
std::rethrow_exception(eptr);
}
} catch(const std::exception& e) {
std::cout << "Caught exception \"" << e.what() << "\"\n";
}
}
ย
int main()
{
std::exception_ptr eptr;
try {
std::string().at(1); // this generates an std::out_of_range
} catch(...) {
eptr = std::current_exception(); // capture
}
handle_eptr(eptr);
} // destructor for std::out_of_range called here, when the eptr is destructed
่พๅบ๏ผ
Caught exception "basic_string::at"
[็ผ่พ] ๅฆ่ฏทๅ้
๏ผC++11๏ผ
ๅค็ๅผๅธธๅฏน่ฑก็ๅ
ฑไบซๆ้็ฑปๅ
ๅๆ๏ผ
shared pointer type for handling exception objects
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
๏ผtypedef๏ผ [edit]
std::exception_ptrๆๅบ็ๅผๅธธ
ๅๆ๏ผ
throws the exception from an std::exception_ptr
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
๏ผๅฝๆฐ๏ผ [edit]
ๅๅปบไธไธชๅผๅธธๅฏน่ฑก็std::exception_ptr
ๅๆ๏ผ
creates an std::exception_ptr from an exception object
่ฟๆฎตๆๅญๆฏ้่ฟ Google Translate ่ชๅจ็ฟป่ฏ็ๆ็ใ
ๆจๅฏไปฅๅธฎๅฉๆไปฌๆฃๆฅใ็บ ๆญฃ็ฟป่ฏไธญ็้่ฏฏใ่ฏฆๆ
่ฏท็นๅป่ฟ้
๏ผๅฝๆฐๆจกๆฟ๏ผ [edit]
|
__label__pos
| 0.772805 |
Posts In The linq Category
What is LINQ
Introduction LINQ, which stands for Language Integrated Query, is a powerful feature of the .NET framework that provides a simple, unified way of querying and manipulating data. It allows developers to write queries against a variety of data sources, including collections, databases, and XML, using a consistent syntax. Understanding the Basics LINQ introduces a set of standard query operators, such as Where, Select, OrderBy, GroupBy, and Join, that can be used to perform common data operations.....
02/10/2023 - Comments icon, image of a speech bubble 0
What is LINQ to Entities
Introduction LINQ to Entities is a powerful and versatile query language that allows developers to query and manipulate data from relational databases using object-oriented programming concepts. This article aims to provide a detailed explanation of what LINQ to Entities is and how it can be used in different scenarios. Whether you are a beginner or an experienced developer, this article will help you gain a clear understanding of LINQ to Entities and its benefits. Understanding LINQ to Entitie....
02/10/2023 - Comments icon, image of a speech bubble 0
What is LINQ to Database
Introduction When it comes to working with databases in the world of software development, LINQ to Database is a powerful and efficient tool that many developers turn to. But what exactly is LINQ to Database and how does it work? In this article, we will delve into the depths of LINQ to Database, explaining its purpose, benefits, and some key concepts associated with it. So let's get started! Understanding LINQ to Database LINQ, an acronym for Language Integrated Query, is a feature of the .NET....
02/10/2023 - Comments icon, image of a speech bubble 0
Two chevrons pointing up icon image
|
__label__pos
| 0.996707 |
CodeNewbie Community
Discussion on: How did you realize you wanted to learn to code?
Collapse
missbabelfish profile image
Leah Godfrey
At my last workplace I helped an engineer make some scripts to make some tasks that took forever to do manually much much easier. It was like wizardry. I wanted to learn how to do it myself.
|
__label__pos
| 0.844628 |
TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. It's 100% free, no registration required.
Sign up
Here's how it works:
1. Anybody can ask a question
2. Anybody can answer
3. The best answers are voted up and rise to the top
Is it possible to convert my own .cls to .fmt conversion using PdfTeX?
NOTE: Our peoples are frequently change/updations in class file (source .cls file). So that I would like to convert non-editable class format.
share|improve this question
4 ย
look at the mylatexformat package which helps in compiling a custom format based on a document preamble loading specific classes and packages. โย David Carlisle Sep 2 '13 at 12:51
ย ย ย
Did this help? The question is still unanswered. โย Johannes_B Feb 25 '15 at 18:20
ย ย ย
Are there any news here? <- for later google search โย Johannes_B Apr 7 '15 at 15:04
ย ย ย
Still i have not found the solution. Do you have any ideas? โย Balaji Apr 8 '15 at 8:56
2 ย
If the class is frequently changed, then the class should be improved by adding features and easier configurations, fixing bugs, improving documentation, ... โย Heiko Oberdiek Jul 2 '15 at 12:42
up vote 15 down vote accepted
+50
Procedure for a format, which has a class preloaded. The class is named theclass in the following.
The class file
Because the class is already loaded in the format, global/class options cannot be specified any longer. Configuration can be implemented by a \theclasssetup command as done by many packages and some classes, e.g. \sisetup of package siunitx or \hypersetup for some of the options of package hyperref.
A dummy class file theclass.cls for testing:
% File: theclass.cls
\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{theclass}[2015/07/02 v1.0 Class for something]
% Implementation of the class
\LoadClass{article}
\endinput
theclass.ini file for the format generation
The new format file consists of LaTeX and the class. The file theclass.ini contain the commands for the format. LaTeX is not loaded, because the available LaTeX formats can be reused. But the class needs to be loaded and \documentclass redefined to allow normal LaTeX documents that can be processed with the format with or without the class preloaded.
% File: theclass.ini
%
% Check for e-TeX
\expandafter\ifx\csname eTeXversion\endcsname\relax
\begingroup
\catcode`\{=1
\catcode`\}=2
\catcode`\^=7
\newlinechar=10 %
\def\space{ }
\def\MessageBreak{^^J\space\space}
\errmessage{%
The eTeX extension is missing.\MessageBreak
It can be enabled by option `-etex`\MessageBreak
or starting the first filename with a star `*'.\MessageBreak
The format generation is aborted%
}
\endgroup
\csname fi\endcsname % end \ifx
\csname @@end\endcsname % for the case a LaTeX format is already loaded
\end % end job
\fi
% Load class
\NeedsTeXFormat{LaTeX2e}
\documentclass{theclass}\relax
% Now we have to redefine \documentclass to get valid user documents.
\makeatletter
\renewcommand*{\documentclass}[2][]{%
\def\reserved@a{#2}%
\ifx\reserved@a\theclass@name
\def\reserved@a{#1}%
\ifx\reserved@a\@empty
\else
\ClassWarningNoLine{theclass}{%
This format has already loaded the class\MessageBreak
and does not support global options.\MessageBreak
The following options are ignored:\MessageBreak
\@spaces[#1]%
}%
\fi
\expandafter\theclass@datecheck
\else
\ClassError{theclass}{%
This format has already loaded the class `theclass'\MessageBreak
and cannot be used with other classes.\MessageBreak
LaTeX run is aborted%
}\@ehd
\expandafter\@@end % abort job.
\fi
}
\newcommand*{\theclass@name}{theclass}
\newcommand*{\theclass@datecheck}[1][]{%
\@ifclasslater{theclass}{#1}{%
}{%
\@latex@warning@no@line{%
You have requested,\on@line,
version\MessageBreak
`#1' of class\space\theclass@name,\MessageBreak
but only version\MessageBreak
`\csname ver@\[email protected]\endcsname'\MessageBreak
is available in this format%
}%
}%
}
% Format identification
\begingroup
\edef\x{%
\noexpand\typeout{%
Class \theclass@name\space \@nameuse{ver@\[email protected]}.%
}%
\noexpand\typeout{%
Format created at \the\year/\two@digits\month/\two@digits\day
\space by class author.%
}%
}%
\expandafter\endgroup
\expandafter\everyjob\expandafter{\the\expandafter\everyjob\x}
\makeatother
% Generate format
\dump
\endinput
Format generation
This is shown for TeX Live (2015) for pdflatex. Since the format for pdflatex is already available, it is reused by &pdflatex. The special character & usually needs to be escaped on a command line, e.g. \&pdflatex (bash) or "&pdflatex". Option --ini enables iniTeX mode. In this case option --etex is not given, because it is inherited from the imported format file of pdflatex.
pdflatex --ini "&pdflatex" theclass.ini
This generates the format file theclass.fmt. (In the past the extension was .efmt with eTeX extensions.)
Usage
Test file:
\documentclass{theclass}
% or \documentclass{theclass}[2015/07/02]
\begin{document}
Hello Word!
\end{document}
The new format theclass.fmt can be loaded with & (should be supported by all TeX compilers):
pdflatex "&theclass" test
Or option --fmt (supported by TeX Live):
pdflatex --fmt=theclass test
Also pdftex could be used, but it uses the wrong search paths for plain format instead of LaTeX. This can be fixed by overwriting the program name:
pdftex --progname=pdflatex --fmt=theclass test
Installation
The format file can be found in the current directory or installed in a texmf (TDS) tree. The format file theclass.fmt for engine pdfTeX is installed in:
TDS:web2c/pdftex/theclass.fmt
The root can be the home tree (TEXMFHOME) or a local tree (TEXMFLOCAL). The latter (or MiKTeX) requires the update of the file name database. The programs texhash or mktexlsr can be used in TeX Live for this purpose.
share|improve this answer
ย ย ย
I hope you get many upvotes for your effort. โย Johannes_B Jul 2 '15 at 19:28
ย ย ย
As an answer to the question, @Johannes_B is surely right. But I worry that the question asked is not the right one. In that sense.... That is, I entirely agreed with your comment on the question but that is not, as it were, reflected at all in the answer.... โย cfr Jul 2 '15 at 20:14
Your Answer
ย
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.836673 |
0
I made a simple SHA3 Python script to generate Ethereum addresses and priv/pub keys. After that I sent some ETH to those addresses but then found out that priv keys do not correspond to the addresses I have. I think I misunderstood behavior of update method in Python keccak_256 implementation. Do I still have any chance to get priv keys for the addresses I got using this script?
from ecdsa import SigningKey, SECP256k1
import sha3, sys
n = 5
full_file = "addresses.txt"
keccak = sha3.keccak_256()
with open(full_file, "r") as f:
for i in range(n):
priv = SigningKey.generate(curve=SECP256k1)
pub = priv.get_verifying_key().to_string()
keccak.update(pub)
address = keccak.hexdigest()[24:]
print address
pr_key = str(priv.to_string().hex())
pub_key = str(pub.hex())
address_str = "0x" + address
f.write(address_str + " " + pr_key + " " + pub_key + "\n")
Thank you
1
1
It seems that what you did nearly works. I just modified your code snippet a bit:
from ecdsa import SigningKey, SECP256k1
import sha3, sys
n = 5
full_file = "addresses.txt"
keccak = sha3.keccak_256()
with open(full_file, "w") as f:
f.write("address | private key\n")
f.write("---------------------------------------------------------------------------------------------------------------\n")
for i in range(n):
priv = SigningKey.generate(curve=SECP256k1)
pub = priv.get_verifying_key().to_string()
keccak.update(pub)
address = keccak.hexdigest()[24:]
pr_key = str(priv.to_string().hex())
pub_key = str(pub.hex())
f.write("0x" + address + " | " + "0x" + pr_key + "\n")
When I generate the public an private keys with the script above and check the output of the Web3js-Function web3.eth.accounts.privateKeyToAccount(privateKey) with the according private key as input I get the public key as expected. There were only two things I needed to correct:
1. The file full_file needs to be opened as writable ("w") not readable ("r").
2. The private key also needs the 0x-Prefix just as the address (which is the public key...)
Hope it helps.
4
โข thank you for the attention to this problem And did you check if the addresses that you got really correspond to the pubkeys you have got? (this part was actually the problem)
โย Galileo
Feb 13 '18 at 17:35
โข Yes indeed. As written above I checked the result by means of the privateKeyToAccount function. And the public key I received matched the right public key. So first private key of the outputted file gives first public key of the file.
โย joffi
Feb 14 '18 at 13:42
โข correct. and do you have idea why first public key of the outputted file DOES NOT give first public address of the file?
โย Galileo
Feb 14 '18 at 16:10
โข What do you mean with the "first public address"? According to this post the address is only a part of the public key. So if the public keys are okay the address should be fine as well. Am I missing something?
โย joffi
Feb 15 '18 at 16:22
Your Answer
By clicking โPost Your Answerโ, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.913284 |
Searching image sets - AWS HealthImaging
Searching image sets
You use the SearchImageSets action to run search queries against all image sets in an ACTIVE HealthImaging data store. The following menus provide a procedure for the AWS Management Console and code examples for the AWS CLI and AWS SDKs. For more information, see SearchImageSets in the AWS HealthImaging API Reference.
Note
Keep the following points in mind when searching image sets.
โข SearchImageSets accepts a single search query parameter and returns a paginated response of all image sets that have the matching criteria. All date range queries must be input as (lowerBound, upperBound).
โข By default, SearchImageSets uses the updatedAt field for sorting in decreasing order from newest to oldest.
โข If you created your data store with a customer-owned AWS KMS key, you must update your AWS KMS key policy before interacting with image sets. For more information, see Creating a customer managed key.
To search image sets
Choose a menu based on your access preference to AWS HealthImaging.
Note
The following procedures show how to search image sets using the Series Instance UID and Updated at property filters.
Series Instance UID
Search image sets using the Series Instance UID property filter
1. Open the HealthImaging console Data stores page.
2. Choose a data store.
The Data store details page opens and the Image sets tab is selected by default.
3. Choose the property filter menu and select Series Instance UID.
4. In the Enter value to search field, enter (paste) the Series Instance UID of interest.
Note
Series Instance UID values must be identical to those listed in the Registry of DICOM Unique Identifiers (UIDs). Note the requirements include a series of numbers that contain at least one period between them. Periods are not allowed at the beginning or end of Series Instance UIDs. Letters and white space are not allowed, so use caution when copying and pasting UIDs.
5. Choose the Date range menu, select a date range for the Series Instance UID, and choose Apply.
6. Choose Search.
Series Instance UIDs that fall within the selected date range are returned in Newest order by default.
Updated at
Search image sets using the Updated at property filter
1. Open the HealthImaging console Data stores page.
2. Choose a data store.
The Data store details page opens and the Image sets tab is selected by default.
3. Choose the property filter menu and choose Updated at.
4. Choose the Date range menu, select an image set date range, and choose Apply.
5. Choose Search.
Image sets that fall within the selected date range are returned in Newest order by default.
C++
SDK for C++
The utility function for searching image sets.
//! Routine which searches for image sets based on defined input attributes. /*! \param dataStoreID: The HealthImaging data store ID. \param searchCriteria: A search criteria instance. \param imageSetResults: Vector to receive the image set IDs. \param clientConfig: Aws client configuration. \return bool: Function succeeded. */ bool AwsDoc::Medical_Imaging::searchImageSets(const Aws::String &dataStoreID, const Aws::MedicalImaging::Model::SearchCriteria &searchCriteria, Aws::Vector<Aws::String> &imageSetResults, const Aws::Client::ClientConfiguration &clientConfig) { Aws::MedicalImaging::MedicalImagingClient client(clientConfig); Aws::MedicalImaging::Model::SearchImageSetsRequest request; request.SetDatastoreId(dataStoreID); request.SetSearchCriteria(searchCriteria); Aws::String nextToken; // Used for paginated results. bool result = true; do { if (!nextToken.empty()) { request.SetNextToken(nextToken); } Aws::MedicalImaging::Model::SearchImageSetsOutcome outcome = client.SearchImageSets( request); if (outcome.IsSuccess()) { for (auto &imageSetMetadataSummary: outcome.GetResult().GetImageSetsMetadataSummaries()) { imageSetResults.push_back(imageSetMetadataSummary.GetImageSetId()); } nextToken = outcome.GetResult().GetNextToken(); } else { std::cout << "Error: " << outcome.GetError().GetMessage() << std::endl; result = false; } } while (!nextToken.empty()); return result; }
Use case #1: EQUAL operator.
Aws::Vector<Aws::String> imageIDsForPatientID; Aws::MedicalImaging::Model::SearchCriteria searchCriteriaEqualsPatientID; Aws::Vector<Aws::MedicalImaging::Model::SearchFilter> patientIDSearchFilters = { Aws::MedicalImaging::Model::SearchFilter().WithOperator(Aws::MedicalImaging::Model::Operator::EQUAL) .WithValues({Aws::MedicalImaging::Model::SearchByAttributeValue().WithDICOMPatientId(patientID)}) }; searchCriteriaEqualsPatientID.SetFilters(patientIDSearchFilters); bool result = AwsDoc::Medical_Imaging::searchImageSets(dataStoreID, searchCriteriaEqualsPatientID, imageIDsForPatientID, clientConfig); if (result) { std::cout << imageIDsForPatientID.size() << " image sets found for the patient with ID '" << patientID << "'." << std::endl; for (auto &imageSetResult : imageIDsForPatientID) { std::cout << " Image set with ID '" << imageSetResult << std::endl; } }
Use case #2: BETWEEN operator using DICOMStudyDate and DICOMStudyTime.
Aws::MedicalImaging::Model::SearchByAttributeValue useCase2StartDate; useCase2StartDate.SetDICOMStudyDateAndTime(Aws::MedicalImaging::Model::DICOMStudyDateAndTime() .WithDICOMStudyDate("19990101") .WithDICOMStudyTime("000000.000")); Aws::MedicalImaging::Model::SearchByAttributeValue useCase2EndDate; useCase2EndDate.SetDICOMStudyDateAndTime(Aws::MedicalImaging::Model::DICOMStudyDateAndTime() .WithDICOMStudyDate(Aws::Utils::DateTime(std::chrono::system_clock::now()).ToLocalTimeString("%Y%m%d")) .WithDICOMStudyTime("000000.000")); Aws::MedicalImaging::Model::SearchFilter useCase2SearchFilter; useCase2SearchFilter.SetValues({useCase2StartDate, useCase2EndDate}); useCase2SearchFilter.SetOperator(Aws::MedicalImaging::Model::Operator::BETWEEN); Aws::MedicalImaging::Model::SearchCriteria useCase2SearchCriteria; useCase2SearchCriteria.SetFilters({useCase2SearchFilter}); Aws::Vector<Aws::String> usesCase2Results; result = AwsDoc::Medical_Imaging::searchImageSets(dataStoreID, useCase2SearchCriteria, usesCase2Results, clientConfig); if (result) { std::cout << usesCase2Results.size() << " image sets found for between 1999/01/01 and present." << std::endl; for (auto &imageSetResult : usesCase2Results) { std::cout << " Image set with ID '" << imageSetResult << std::endl; } }
Use case #3: BETWEEN operator using createdAt. Time studies were previously persisted.
Aws::MedicalImaging::Model::SearchByAttributeValue useCase3StartDate; useCase3StartDate.SetCreatedAt(Aws::Utils::DateTime("20231130T000000000Z",Aws::Utils::DateFormat::ISO_8601_BASIC)); Aws::MedicalImaging::Model::SearchByAttributeValue useCase3EndDate; useCase3EndDate.SetCreatedAt(Aws::Utils::DateTime(std::chrono::system_clock::now())); Aws::MedicalImaging::Model::SearchFilter useCase3SearchFilter; useCase3SearchFilter.SetValues({useCase3StartDate, useCase3EndDate}); useCase3SearchFilter.SetOperator(Aws::MedicalImaging::Model::Operator::BETWEEN); Aws::MedicalImaging::Model::SearchCriteria useCase3SearchCriteria; useCase3SearchCriteria.SetFilters({useCase3SearchFilter}); Aws::Vector<Aws::String> usesCase3Results; result = AwsDoc::Medical_Imaging::searchImageSets(dataStoreID, useCase3SearchCriteria, usesCase3Results, clientConfig); if (result) { std::cout << usesCase3Results.size() << " image sets found for created between 2023/11/30 and present." << std::endl; for (auto &imageSetResult : usesCase3Results) { std::cout << " Image set with ID '" << imageSetResult << std::endl; } }
Use case #4: EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response in ASC order on updatedAt field.
Aws::MedicalImaging::Model::SearchByAttributeValue useCase4StartDate; useCase4StartDate.SetUpdatedAt(Aws::Utils::DateTime("20231130T000000000Z",Aws::Utils::DateFormat::ISO_8601_BASIC)); Aws::MedicalImaging::Model::SearchByAttributeValue useCase4EndDate; useCase4EndDate.SetUpdatedAt(Aws::Utils::DateTime(std::chrono::system_clock::now())); Aws::MedicalImaging::Model::SearchFilter useCase4SearchFilterBetween; useCase4SearchFilterBetween.SetValues({useCase4StartDate, useCase4EndDate}); useCase4SearchFilterBetween.SetOperator(Aws::MedicalImaging::Model::Operator::BETWEEN); Aws::MedicalImaging::Model::SearchByAttributeValue seriesInstanceUID; seriesInstanceUID.SetDICOMSeriesInstanceUID(dicomSeriesInstanceUID); Aws::MedicalImaging::Model::SearchFilter useCase4SearchFilterEqual; useCase4SearchFilterEqual.SetValues({seriesInstanceUID}); useCase4SearchFilterEqual.SetOperator(Aws::MedicalImaging::Model::Operator::EQUAL); Aws::MedicalImaging::Model::SearchCriteria useCase4SearchCriteria; useCase4SearchCriteria.SetFilters({useCase4SearchFilterBetween, useCase4SearchFilterEqual}); Aws::MedicalImaging::Model::Sort useCase4Sort; useCase4Sort.SetSortField(Aws::MedicalImaging::Model::SortField::updatedAt); useCase4Sort.SetSortOrder(Aws::MedicalImaging::Model::SortOrder::ASC); useCase4SearchCriteria.SetSort(useCase4Sort); Aws::Vector<Aws::String> usesCase4Results; result = AwsDoc::Medical_Imaging::searchImageSets(dataStoreID, useCase4SearchCriteria, usesCase4Results, clientConfig); if (result) { std::cout << usesCase4Results.size() << " image sets found for EQUAL operator " << "on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response\n" << "in ASC order on updatedAt field." << std::endl; for (auto &imageSetResult : usesCase4Results) { std::cout << " Image set with ID '" << imageSetResult << std::endl; } }
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
CLI
AWS CLI
Example 1: To search image sets with an EQUAL operator
The following search-image-sets code example uses the EQUAL operator to search image sets based on a specific value.
aws medical-imaging search-image-sets \ --datastore-id 12345678901234567890123456789012 \ --search-criteria file://search-criteria.json
Contents of search-criteria.json
{ "filters": [{ "values": [{"DICOMPatientId" : "SUBJECT08701"}], "operator": "EQUAL" }] }
Output:
{ "imageSetsMetadataSummaries": [{ "imageSetId": "09876543210987654321098765432109", "createdAt": "2022-12-06T21:40:59.429000+00:00", "version": 1, "DICOMTags": { "DICOMStudyId": "2011201407", "DICOMStudyDate": "19991122", "DICOMPatientSex": "F", "DICOMStudyInstanceUID": "1.2.840.99999999.84710745.943275268089", "DICOMPatientBirthDate": "19201120", "DICOMStudyDescription": "UNKNOWN", "DICOMPatientId": "SUBJECT08701", "DICOMPatientName": "Melissa844 Huel628", "DICOMNumberOfStudyRelatedInstances": 1, "DICOMStudyTime": "140728", "DICOMNumberOfStudyRelatedSeries": 1 }, "updatedAt": "2022-12-06T21:40:59.429000+00:00" }] }
Example 2: To search image sets with a BETWEEN operator using DICOMStudyDate and DICOMStudyTime
The following search-image-sets code example searches for image sets with DICOM Studies generated between January 1, 1990 (12:00 AM) and January 1, 2023 (12:00 AM).
Note: DICOMStudyTime is optional. If it is not present, 12:00 AM (start of the day) is the time value for the dates provided for filtering.
aws medical-imaging search-image-sets \ --datastore-id 12345678901234567890123456789012 \ --search-criteria file://search-criteria.json
Contents of search-criteria.json
{ "filters": [{ "values": [{ "DICOMStudyDateAndTime": { "DICOMStudyDate": "19900101", "DICOMStudyTime": "000000" } }, { "DICOMStudyDateAndTime": { "DICOMStudyDate": "20230101", "DICOMStudyTime": "000000" } }], "operator": "BETWEEN" }] }
Output:
{ "imageSetsMetadataSummaries": [{ "imageSetId": "09876543210987654321098765432109", "createdAt": "2022-12-06T21:40:59.429000+00:00", "version": 1, "DICOMTags": { "DICOMStudyId": "2011201407", "DICOMStudyDate": "19991122", "DICOMPatientSex": "F", "DICOMStudyInstanceUID": "1.2.840.99999999.84710745.943275268089", "DICOMPatientBirthDate": "19201120", "DICOMStudyDescription": "UNKNOWN", "DICOMPatientId": "SUBJECT08701", "DICOMPatientName": "Melissa844 Huel628", "DICOMNumberOfStudyRelatedInstances": 1, "DICOMStudyTime": "140728", "DICOMNumberOfStudyRelatedSeries": 1 }, "updatedAt": "2022-12-06T21:40:59.429000+00:00" }] }
Example 3: To search image sets with a BETWEEN operator using createdAt (time studies were previously persisted)
The following search-image-sets code example searches for image sets with DICOM Studies persisted in HealthImaging between the time ranges in UTC time zone.
Note: Provide createdAt in example format ("1985-04-12T23:20:50.52Z").
aws medical-imaging search-image-sets \ --datastore-id 12345678901234567890123456789012 \ --search-criteria file://search-criteria.json
Contents of search-criteria.json
{ "filters": [{ "values": [{ "createdAt": "1985-04-12T23:20:50.52Z" }, { "createdAt": "2022-04-12T23:20:50.52Z" }], "operator": "BETWEEN" }] }
Output:
{ "imageSetsMetadataSummaries": [{ "imageSetId": "09876543210987654321098765432109", "createdAt": "2022-12-06T21:40:59.429000+00:00", "version": 1, "DICOMTags": { "DICOMStudyId": "2011201407", "DICOMStudyDate": "19991122", "DICOMPatientSex": "F", "DICOMStudyInstanceUID": "1.2.840.99999999.84710745.943275268089", "DICOMPatientBirthDate": "19201120", "DICOMStudyDescription": "UNKNOWN", "DICOMPatientId": "SUBJECT08701", "DICOMPatientName": "Melissa844 Huel628", "DICOMNumberOfStudyRelatedInstances": 1, "DICOMStudyTime": "140728", "DICOMNumberOfStudyRelatedSeries": 1 }, "lastUpdatedAt": "2022-12-06T21:40:59.429000+00:00" }] }
Example 4: To search image sets with an EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response in ASC order on updatedAt field
The following search-image-sets code example searches for image sets with an EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response in ASC order on updatedAt field.
Note: Provide updatedAt in example format ("1985-04-12T23:20:50.52Z").
aws medical-imaging search-image-sets \ --datastore-id 12345678901234567890123456789012 \ --search-criteria file://search-criteria.json
Contents of search-criteria.json
{ "filters": [{ "values": [{ "updatedAt": "2024-03-11T15:00:05.074000-07:00" }, { "updatedAt": "2024-03-11T16:00:05.074000-07:00" }], "operator": "BETWEEN" }, { "values": [{ "DICOMSeriesInstanceUID": "1.2.840.99999999.84710745.943275268089" }], "operator": "EQUAL" }], "sort": { "sortField": "updatedAt", "sortOrder": "ASC" } }
Output:
{ "imageSetsMetadataSummaries": [{ "imageSetId": "09876543210987654321098765432109", "createdAt": "2022-12-06T21:40:59.429000+00:00", "version": 1, "DICOMTags": { "DICOMStudyId": "2011201407", "DICOMStudyDate": "19991122", "DICOMPatientSex": "F", "DICOMStudyInstanceUID": "1.2.840.99999999.84710745.943275268089", "DICOMPatientBirthDate": "19201120", "DICOMStudyDescription": "UNKNOWN", "DICOMPatientId": "SUBJECT08701", "DICOMPatientName": "Melissa844 Huel628", "DICOMNumberOfStudyRelatedInstances": 1, "DICOMStudyTime": "140728", "DICOMNumberOfStudyRelatedSeries": 1 }, "lastUpdatedAt": "2022-12-06T21:40:59.429000+00:00" }] }
Java
SDK for Java 2.x
The utility function for searching image sets.
public static List<ImageSetsMetadataSummary> searchMedicalImagingImageSets( MedicalImagingClient medicalImagingClient, String datastoreId, SearchCriteria searchCriteria) { try { SearchImageSetsRequest datastoreRequest = SearchImageSetsRequest.builder() .datastoreId(datastoreId) .searchCriteria(searchCriteria) .build(); SearchImageSetsIterable responses = medicalImagingClient .searchImageSetsPaginator(datastoreRequest); List<ImageSetsMetadataSummary> imageSetsMetadataSummaries = new ArrayList<>(); responses.stream().forEach(response -> imageSetsMetadataSummaries .addAll(response.imageSetsMetadataSummaries())); return imageSetsMetadataSummaries; } catch (MedicalImagingException e) { System.err.println(e.awsErrorDetails().errorMessage()); System.exit(1); } return null; }
Use case #1: EQUAL operator.
List<SearchFilter> searchFilters = Collections.singletonList(SearchFilter.builder() .operator(Operator.EQUAL) .values(SearchByAttributeValue.builder() .dicomPatientId(patientId) .build()) .build()); SearchCriteria searchCriteria = SearchCriteria.builder() .filters(searchFilters) .build(); List<ImageSetsMetadataSummary> imageSetsMetadataSummaries = searchMedicalImagingImageSets( medicalImagingClient, datastoreId, searchCriteria); if (imageSetsMetadataSummaries != null) { System.out.println("The image sets for patient " + patientId + " are:\n" + imageSetsMetadataSummaries); System.out.println(); }
Use case #2: BETWEEN operator using DICOMStudyDate and DICOMStudyTime.
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyyMMdd"); searchFilters = Collections.singletonList(SearchFilter.builder() .operator(Operator.BETWEEN) .values(SearchByAttributeValue.builder() .dicomStudyDateAndTime(DICOMStudyDateAndTime.builder() .dicomStudyDate("19990101") .dicomStudyTime("000000.000") .build()) .build(), SearchByAttributeValue.builder() .dicomStudyDateAndTime(DICOMStudyDateAndTime.builder() .dicomStudyDate((LocalDate.now() .format(formatter))) .dicomStudyTime("000000.000") .build()) .build()) .build()); searchCriteria = SearchCriteria.builder() .filters(searchFilters) .build(); imageSetsMetadataSummaries = searchMedicalImagingImageSets(medicalImagingClient, datastoreId, searchCriteria); if (imageSetsMetadataSummaries != null) { System.out.println( "The image sets searched with BETWEEN operator using DICOMStudyDate and DICOMStudyTime are:\n" + imageSetsMetadataSummaries); System.out.println(); }
Use case #3: BETWEEN operator using createdAt. Time studies were previously persisted.
searchFilters = Collections.singletonList(SearchFilter.builder() .operator(Operator.BETWEEN) .values(SearchByAttributeValue.builder() .createdAt(Instant.parse("1985-04-12T23:20:50.52Z")) .build(), SearchByAttributeValue.builder() .createdAt(Instant.now()) .build()) .build()); searchCriteria = SearchCriteria.builder() .filters(searchFilters) .build(); imageSetsMetadataSummaries = searchMedicalImagingImageSets(medicalImagingClient, datastoreId, searchCriteria); if (imageSetsMetadataSummaries != null) { System.out.println("The image sets searched with BETWEEN operator using createdAt are:\n " + imageSetsMetadataSummaries); System.out.println(); }
Use case #4: EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response in ASC order on updatedAt field.
Instant startDate = Instant.parse("1985-04-12T23:20:50.52Z"); Instant endDate = Instant.now(); searchFilters = Arrays.asList( SearchFilter.builder() .operator(Operator.EQUAL) .values(SearchByAttributeValue.builder() .dicomSeriesInstanceUID(seriesInstanceUID) .build()) .build(), SearchFilter.builder() .operator(Operator.BETWEEN) .values( SearchByAttributeValue.builder().updatedAt(startDate).build(), SearchByAttributeValue.builder().updatedAt(endDate).build() ).build()); Sort sort = Sort.builder().sortOrder(SortOrder.ASC).sortField(SortField.UPDATED_AT).build(); searchCriteria = SearchCriteria.builder() .filters(searchFilters) .sort(sort) .build(); imageSetsMetadataSummaries = searchMedicalImagingImageSets(medicalImagingClient, datastoreId, searchCriteria); if (imageSetsMetadataSummaries != null) { System.out.println("The image sets searched with EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response\n" + "in ASC order on updatedAt field are:\n " + imageSetsMetadataSummaries); System.out.println(); }
โข For API details, see SearchImageSets in AWS SDK for Java 2.x API Reference.
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
JavaScript
SDK for JavaScript (v3)
The utility function for searching image sets.
import {paginateSearchImageSets} from "@aws-sdk/client-medical-imaging"; import {medicalImagingClient} from "../libs/medicalImagingClient.js"; /** * @param {string} datastoreId - The data store's ID. * @param { import('@aws-sdk/client-medical-imaging').SearchFilter[] } filters - The search criteria filters. * @param { import('@aws-sdk/client-medical-imaging').Sort } sort - The search criteria sort. */ export const searchImageSets = async ( datastoreId = "xxxxxxxx", searchCriteria = {} ) => { const paginatorConfig = { client: medicalImagingClient, pageSize: 50, }; const commandParams = { datastoreId: datastoreId, searchCriteria: searchCriteria, }; const paginator = paginateSearchImageSets(paginatorConfig, commandParams); const imageSetsMetadataSummaries = []; for await (const page of paginator) { // Each page contains a list of `jobSummaries`. The list is truncated if is larger than `pageSize`. imageSetsMetadataSummaries.push(...page["imageSetsMetadataSummaries"]); console.log(page); } // { // '$metadata': { // httpStatusCode: 200, // requestId: 'f009ea9c-84ca-4749-b5b6-7164f00a5ada', // extendedRequestId: undefined, // cfId: undefined, // attempts: 1, // totalRetryDelay: 0 // }, // imageSetsMetadataSummaries: [ // { // DICOMTags: [Object], // createdAt: "2023-09-19T16:59:40.551Z", // imageSetId: '7f75e1b5c0f40eac2b24cf712f485f50', // updatedAt: "2023-09-19T16:59:40.551Z", // version: 1 // }] // } return imageSetsMetadataSummaries; };
Use case #1: EQUAL operator.
const datastoreId = "12345678901234567890123456789012"; try { const searchCriteria = { filters: [ { values: [{DICOMPatientId: "1234567"}], operator: "EQUAL", }, ] }; await searchImageSets(datastoreId, searchCriteria); } catch (err) { console.error(err); }
Use case #2: BETWEEN operator using DICOMStudyDate and DICOMStudyTime.
const datastoreId = "12345678901234567890123456789012"; try { const searchCriteria = { filters: [ { values: [ { DICOMStudyDateAndTime: { DICOMStudyDate: "19900101", DICOMStudyTime: "000000", }, }, { DICOMStudyDateAndTime: { DICOMStudyDate: "20230901", DICOMStudyTime: "000000", }, }, ], operator: "BETWEEN", }, ] }; await searchImageSets(datastoreId, searchCriteria); } catch (err) { console.error(err); }
Use case #3: BETWEEN operator using createdAt. Time studies were previously persisted.
const datastoreId = "12345678901234567890123456789012"; try { const searchCriteria = { filters: [ { values: [ {createdAt: new Date("1985-04-12T23:20:50.52Z")}, {createdAt: new Date()}, ], operator: "BETWEEN", }, ] }; await searchImageSets(datastoreId, searchCriteria); } catch (err) { console.error(err); }
Use case #4: EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response in ASC order on updatedAt field.
const datastoreId = "12345678901234567890123456789012"; try { const searchCriteria = { filters: [ { values: [ {updatedAt: new Date("1985-04-12T23:20:50.52Z")}, {updatedAt: new Date()}, ], operator: "BETWEEN", }, { values: [ {DICOMSeriesInstanceUID: "1.1.123.123456.1.12.1.1234567890.1234.12345678.123"}, ], operator: "EQUAL", }, ], sort: { sortOrder: "ASC", sortField: "updatedAt", } }; await searchImageSets(datastoreId, searchCriteria); } catch (err) { console.error(err); }
โข For API details, see SearchImageSets in AWS SDK for JavaScript API Reference.
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
Python
SDK for Python (Boto3)
The utility function for searching image sets.
class MedicalImagingWrapper: def __init__(self, health_imaging_client): self.health_imaging_client = health_imaging_client def search_image_sets(self, datastore_id, search_filter): """ Search for image sets. :param datastore_id: The ID of the data store. :param search_filter: The search filter. For example: {"filters" : [{ "operator": "EQUAL", "values": [{"DICOMPatientId": "3524578"}]}]}. :return: The list of image sets. """ try: paginator = self.health_imaging_client.get_paginator("search_image_sets") page_iterator = paginator.paginate( datastoreId=datastore_id, searchCriteria=search_filter ) metadata_summaries = [] for page in page_iterator: metadata_summaries.extend(page["imageSetsMetadataSummaries"]) except ClientError as err: logger.error( "Couldn't search image sets. Here's why: %s: %s", err.response["Error"]["Code"], err.response["Error"]["Message"], ) raise else: return metadata_summaries
Use case #1: EQUAL operator.
search_filter = { "filters": [ {"operator": "EQUAL", "values": [{"DICOMPatientId": patient_id}]} ] } image_sets = self.search_image_sets(data_store_id, search_filter) print(f"Image sets found with EQUAL operator\n{image_sets}")
Use case #2: BETWEEN operator using DICOMStudyDate and DICOMStudyTime.
search_filter = { "filters": [ { "operator": "BETWEEN", "values": [ { "DICOMStudyDateAndTime": { "DICOMStudyDate": "19900101", "DICOMStudyTime": "000000", } }, { "DICOMStudyDateAndTime": { "DICOMStudyDate": "20230101", "DICOMStudyTime": "000000", } }, ], } ] } image_sets = self.search_image_sets(data_store_id, search_filter) print( f"Image sets found with BETWEEN operator using DICOMStudyDate and DICOMStudyTime\n{image_sets}" )
Use case #3: BETWEEN operator using createdAt. Time studies were previously persisted.
search_filter = { "filters": [ { "values": [ { "createdAt": datetime.datetime( 2021, 8, 4, 14, 49, 54, 429000 ) }, { "createdAt": datetime.datetime.now() + datetime.timedelta(days=1) }, ], "operator": "BETWEEN", } ] } recent_image_sets = self.search_image_sets(data_store_id, search_filter) print( f"Image sets found with with BETWEEN operator using createdAt\n{recent_image_sets}" )
Use case #4: EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and sort response in ASC order on updatedAt field.
search_filter = { "filters": [ { "values": [ { "updatedAt": datetime.datetime( 2021, 8, 4, 14, 49, 54, 429000 ) }, { "updatedAt": datetime.datetime.now() + datetime.timedelta(days=1) }, ], "operator": "BETWEEN", }, { "values": [{"DICOMSeriesInstanceUID": series_instance_uid}], "operator": "EQUAL", }, ], "sort": { "sortOrder": "ASC", "sortField": "updatedAt", }, } image_sets = self.search_image_sets(data_store_id, search_filter) print( "Image sets found with EQUAL operator on DICOMSeriesInstanceUID and BETWEEN on updatedAt and" ) print(f"sort response in ASC order on updatedAt field\n{image_sets}")
The following code instantiates the MedicalImagingWrapper object.
client = boto3.client("medical-imaging") medical_imaging_wrapper = MedicalImagingWrapper(client)
โข For API details, see SearchImageSets in AWS SDK for Python (Boto3) API Reference.
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.
|
__label__pos
| 0.852869 |
Header and Footer Formatting Codes in Excel
Introduction
In the world of Excel spreadsheets, headers and footers play a crucial role in organizing information and presenting data in a professional manner. Headers are the top section of a worksheet that usually contain the title, company logo, or other relevant information. Footers, on the other hand, are located at the bottom of the worksheet and often include page numbers, dates, or author information. While headers and footers may seem like small details, their formatting can greatly enhance the overall appearance and professionalism of your spreadsheet.
Key Takeaways
โข Headers and footers are important for organizing information and presenting data professionally in Excel spreadsheets.
โข Formatting headers and footers can greatly enhance the overall appearance and professionalism of a spreadsheet.
โข Understanding header and footer codes is essential for customizing their appearance in Excel.
โข Creating and modifying headers involves step-by-step instructions and various formatting options.
โข Formatting footers in Excel allows for customization including alignment, page numbering, and date/time codes.
Understanding Header and Footer Codes
When working with headers and footers in Excel, it is important to understand the codes that can be used to customize their appearance. These codes allow you to add dynamic information, such as page numbers or the current date, to your headers and footers. Additionally, you can format the text using HTML tags to make certain elements stand out. In this chapter, we will explore the explanation of header and footer codes in Excel, how they are used to customize the appearance, and provide examples of common codes and their functions.
Explanation of Header and Footer Codes in Excel
In Excel, header and footer codes are special placeholders that are replaced with actual information when the spreadsheet is printed or viewed in print preview mode. These codes are enclosed within ampersand symbols (&) and are typically used within the header or footer section of the Page Setup dialog box.
โข &[Page]: Inserts the page number on which the header or footer appears.
โข &[Pages]: Inserts the total number of pages in the worksheet.
โข &[Date]: Inserts the current date into the header or footer.
โข &[Time]: Inserts the current time into the header or footer.
โข &[File]: Inserts the filename of the workbook.
How Codes are Used to Customize the Appearance of Headers and Footers
By using codes in combination with HTML tags, you can further customize the appearance of headers and footers in Excel. HTML tags provide a wide range of formatting options, such as changing the font size or color, adding bold or italic styles, or aligning the text.
โข Text: Changes the color of the text to red.
โข Text: Makes the text bold.
โข Text: Makes the text italic.
โข Text: Underlines the text.
Examples of Common Codes and Their Functions
Let's take a look at some examples of common codes and their functions:
โข &[Page] of &[Pages]: Displays the current page number and the total number of pages.
โข Important Information: Formats the text as "Important Information" and increases the font size to 14 points.
โข Header Text/b>: Makes the header text bold.
โข Footer Text: Makes the footer text italic and underlined.
By using these codes and HTML tags, you can easily customize the appearance of headers and footers in Excel to meet your specific formatting needs.
Creating and Modifying Headers
Headers in Excel worksheets provide a consistent way to display information across multiple pages, ensuring clarity and professionalism in your documents. In this chapter, we will walk you through the process of creating and modifying headers in Excel, including various formatting options and the ability to insert dynamic content.
Step-by-step guide on creating and adding headers to Excel worksheets
Follow these simple steps to create and add headers to your Excel worksheets:
1. Open your Excel worksheet and navigate to the "Insert" tab on the Ribbon.
2. Click on the "Header & Footer" button in the "Text" group.
3. A new "Header & Footer Tools" contextual tab will appear. Choose the "Header" option.
4. Excel provides a set of pre-designed header layouts to choose from. Select the desired layout that best suits your needs.
5. Once you have chosen a layout, you can click within the header area and start typing your desired text.
6. Use the available options in the "Header & Footer Elements" group to insert page numbers, worksheet names, date and time, and other dynamic content into your header.
7. Format the header text by selecting it and utilizing the font style, size, color, bold, italic, and underline options in the "Font" group of the "Home" tab.
8. Click outside the header area to exit the header editing mode.
Demonstrating different formatting options
Excel provides a range of formatting options to customize the appearance of your headers. These include:
โข Font style: Choose from a variety of font styles such as Arial, Times New Roman, or Calibri.
โข Font size: Adjust the size of the text to make it more prominent or less intrusive.
โข Font color: Select a specific color for your header text, ensuring it matches the overall theme of your worksheet.
How to insert dynamic content using codes
To make your headers more dynamic, you can use special codes to insert dynamic content. Here are some examples:
โข Page numbers: Use the code "&P" to insert the current page number.
โข Worksheet names: Utilize the code "&A" to insert the name of the current worksheet.
โข Date and time: Embed the code "&D" for the current date and "&T" for the current time in your header.
Remember to avoid using numbers in the header, as Excel may interpret them as formulas. Instead, use the tag to highlight important information within your header.
By following these steps and utilizing various formatting options and codes, you can create and modify headers in Excel that enhance the visual appeal and functionality of your worksheets.
Formatting Footers for Excel Spreadsheets
Adding footers to your Excel spreadsheets can enhance the overall appearance and provide important information to the viewers. In this chapter, we will guide you through the process of adding footers to your worksheets and explore different customization options to make your footers more effective and informative.
Instructions on Adding Footers to Worksheets
Follow these steps to add footers to your Excel spreadsheets:
โข Open your Excel spreadsheet and navigate to the "Insert" tab on the Excel ribbon.
โข Click on the "Header & Footer" button in the "Text" group.
โข The worksheet will switch to "Page Layout" view, and the Header & Footer Tools Design tab will appear on the Excel ribbon.
โข Click on the "Footer" button in the "Header & Footer" group.
โข A drop-down menu will appear with various footer options.
โข Select the desired footer option, such as "Blank," "Page Number," or "Date."
โข Excel will place the selected footer in the center section of the worksheet's footer area.
Different Options for Customization
Excel provides various customization options to help you tailor your footers according to your preferences and requirements. Here are some customization options to consider:
โข Alignment: You can align the content of your footer to the left, center, or right of the footer area.
โข Page Numbering: Excel allows you to include page numbers in your footers, which can be useful when printing large spreadsheets that span multiple pages.
โข Date/Time Codes: You can insert date and time codes in your footers to automatically display the current date and time when the spreadsheet is printed.
Tips for Utilizing Codes to Display Relevant Information
Excel provides various codes that allow you to display relevant information in your footers. Here are some tips for utilizing these codes effectively:
โข Use the &[amp] code to display the file path and name in your footer. This can be helpful when sharing spreadsheets with others.
โข Include the &[Date] code to display the print date in your footer. This ensures that viewers know when the spreadsheet was last printed.
โข Experiment with different codes, such as &[Page] for page numbers and &[Time] for the current time, to customize your footers further.
By incorporating these tips and utilizing the available customization options, you can create professional and informative footers for your Excel spreadsheets. Stay tuned for the next chapter, where we will explore header formatting codes in Excel.
Advanced Header and Footer Customization Techniques
In Excel, headers and footers provide a way to add important information, such as page numbers, document titles, and company logos, to printed worksheets. While the basic formatting options are commonly used, there are advanced techniques that can take your headers and footers to the next level. In this chapter, we will explore some of these techniques and learn how to customize headers and footers in Excel.
Exploring advanced formatting options for headers and footers
Excel offers a wide range of formatting options to enhance the appearance of your headers and footers. Some of the advanced formatting options include:
โข Adding images or pictures: You can insert company logos or other relevant images into the headers and footers.
โข Using different font styles and sizes: Customize the appearance of your headers and footers by using various font styles, sizes, and colors.
โข Adding special characters and symbols: Enhance the visual appeal of your headers and footers by including special characters, such as arrows or checkmarks.
Using conditional formatting to display different headers and footers based on specific conditions
Conditional formatting allows you to change the appearance of cells based on specific conditions. Similarly, you can use conditional formatting to display different headers and footers based on certain criteria. By applying conditional formatting rules to your headers and footers, you can dynamically update the content based on the data in your worksheet. This technique can be particularly useful when working with large datasets that require different headers and footers for specific sections.
Employing formulas and functions within header and footer codes to create dynamic content
Excel's powerful formulas and functions can be utilized within header and footer codes to create dynamic content. By incorporating formulas and functions, you can automatically display information such as the current date, time, file name, or worksheet information in your headers and footers. This ensures that the information in your headers and footers is always up to date and accurate, saving you time and effort in manual updates.
Additionally, you can use formulas and functions to perform calculations within the headers and footers. For example, you can calculate the sum or average of certain cells and display the result in the header or footer. This can be particularly useful when working with financial statements or reports that require dynamic calculations.
By leveraging the power of formulas and functions, you can create highly customized and dynamic headers and footers that cater to your specific needs.
In conclusion, Excel provides a range of advanced techniques to customize headers and footers. By exploring advanced formatting options, utilizing conditional formatting, and incorporating formulas and functions, you can create visually appealing and dynamic headers and footers that enhance the overall presentation of your worksheets.
Chapter 1: Best Practices for Header and Footer Formatting
1. Tips for maintaining consistency across multiple worksheets or workbooks
โข Use consistent header and footer formatting throughout all worksheets or workbooks to maintain a professional and organized appearance.
โข Create a template with predefined header and footer settings to easily apply them to new worksheets or workbooks.
โข Consider using a consistent color scheme or logo in the headers and footers for branding purposes.
2. Guidelines for choosing appropriate content for headers and footers
โข Include relevant information in the headers, such as the worksheet or workbook name, page numbers, and date.
โข Avoid using numbers in the headers, as they can be confusing and may not provide meaningful context.
โข Highlight important information in the headers and footers using the strong tag to make it stand out.
โข Consider adding a copyright statement or disclaimer in the footer, if applicable.
3. Ensuring readability and avoiding overcrowding by adjusting font size and margins
โข Choose a font size and style that is legible and appropriate for the content in the headers and footers.
โข Avoid overcrowding the headers and footers by adjusting the margins to provide enough space for the content to be easily read.
โข Use line breaks () and headings (
) when necessary to organize and separate information in the headers and footers.
Conclusion
As we come to the end of this blog post, it is important to recap the significance of header and footer formatting in Excel. Headers and footers not only provide valuable information such as page numbers and document titles, but they also contribute to the overall aesthetic appeal of your spreadsheets. By utilizing formatting codes, you can customize and enhance the appearance of your headers and footers, making them more visually appealing and professional-looking.
It is essential to remember that mastering the art of formatting in Excel requires practice and experimentation. Don't be afraid to try different codes and combinations to achieve the desired outcome. As you continue to explore and familiarize yourself with these formatting codes, you will become more confident in your ability to create visually stunning spreadsheets that leave a lasting impression.
Excel Dashboard
SAVE $698
ULTIMATE EXCEL TEMPLATES BUNDLE
Immediate Download
MAC & PC Compatible
Free Email Support
Leave a comment
Your email address will not be published. Required fields are marked *
Please note, comments must be approved before they are published
Related aticles
|
__label__pos
| 0.927019 |
On 21/02/18 22:59, Jordan Crouse wrote:
[...]
> +int iommu_sva_alloc_pasid(struct iommu_domain *domain, struct device *dev)
> +{
> + int ret, pasid;
> + struct io_pasid *io_pasid;
> +
> + if (!domain->ops->pasid_alloc || !domain->ops->pasid_free)
> + return -ENODEV;
> +
> + io_pasid = kzalloc(sizeof(*io_pasid), GFP_KERNEL);
> + if (!io_pasid)
> + return -ENOMEM;
> +
> + io_pasid->domain = domain;
> + io_pasid->base.type = IO_TYPE_PASID;
> +
> + idr_preload(GFP_KERNEL);
> + spin_lock(&iommu_sva_lock);
> + pasid = idr_alloc_cyclic(&iommu_pasid_idr, &io_pasid->base,
> + 1, (1 << 31), GFP_ATOMIC);
To be usable by other IOMMUs, this should restrict the PASID range to what
the IOMMU and the device support like io_mm_alloc(). In your case 31 bits,
but with PCI PASID it depends on the PASID capability and the SMMU
SubstreamID range.
For this reason I think device drivers should call iommu_sva_device_init()
once, even for the alloc_pasid() API. For SMMUv2 I guess it will be a NOP,
but other IOMMUs will allocate PASID tables and enable features in the
device. In addition, knowing that all users of the API call
iommu_sva_device_init()/shutdown() could allow us to allocate and enable
stuff lazily in the future.
It would also allow a given device driver to use both
iommu_sva_pasid_alloc() and iommu_sva_bind() at the same time. So that the
driver can assigns contexts to userspace and still use some of them for
management.
[...]
> +int iommu_sva_map(int pasid, unsigned long iova,
> + phys_addr_t paddr, size_t size, int prot)
It would be nice to factor iommu_map(), since this logic for map, map_sg
and unmap should be the same regardless of the PASID argument.
For example
- iommu_sva_map(domain, pasid, ...)
- iommu_map(domain, ...)
both call
- __iommu_map(domain, pasid, ...)
which calls either
- ops->map(domain, ...)
- ops->sva_map(domain, pasid, ...)
[...]
> @@ -347,6 +353,15 @@ struct iommu_ops {
> int (*page_response)(struct iommu_domain *domain, struct device *dev,
> struct page_response_msg *msg);
>
> + int (*pasid_alloc)(struct iommu_domain *domain, struct device *dev,
> + int pasid);
> + int (*sva_map)(struct iommu_domain *domain, int pasid,
> + unsigned long iova, phys_addr_t paddr, size_t size,
> + int prot);
> + size_t (*sva_unmap)(struct iommu_domain *domain, int pasid,
> + unsigned long iova, size_t size);
> + void (*pasid_free)(struct iommu_domain *domain, int pasid);
> +
Hmm, now IOMMU has the following ops:
* mm_alloc(): allocates a shared mm structure
* mm_attach(): writes the entry in the PASID table
* mm_detach(): removes the entry from the PASID table and invalidates it
* mm_free(): free shared mm
* pasid_alloc(): allocates a pasid structure (which I usually call
"private mm") and write the entry in the PASID table (or call
install_pasid() for SMMUv2)
* pasid_free(): remove from the PASID table (or call remove_pasid()) and
free the pasid structure.
Splitting mm_alloc and mm_attach is necessary because the io_mm in my case
can be shared between devices (allocated once, attached multiple times).
In your case a PASID is private to one device so only one callback is
needed. However mm_alloc+mm_attach will do roughly the same as
pasid_alloc, so to reduce redundancy in iommu_ops, maybe we could reuse
mm_alloc and mm_attach for the private PASID case?
Thanks,
Jean
_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu
Reply via email to
|
__label__pos
| 0.750116 |
Reverting my windows build fix because it breaks the bug fix committed in r40995
[blender.git] / build_files / cmake / cmake_netbeans_project.py
1 #!/usr/bin/envย python
2
3 #ย $Id$
4 #ย *****ย BEGINย GPLย LICENSEย BLOCKย *****
5 #
6 #ย Thisย programย isย freeย software;ย youย canย redistributeย itย and/or
7 #ย modifyย itย underย theย termsย ofย theย GNUย Generalย Publicย License
8 #ย asย publishedย byย theย Freeย Softwareย Foundation;ย eitherย versionย 2
9 #ย ofย theย License,ย orย (atย yourย option)ย anyย laterย version.
10 #
11 #ย Thisย programย isย distributedย inย theย hopeย thatย itย willย beย useful,
12 #ย butย WITHOUTย ANYย WARRANTY;ย withoutย evenย theย impliedย warrantyย of
13 #ย MERCHANTABILITYย orย FITNESSย FORย Aย PARTICULARย PURPOSE.ย ย Seeย the
14 #ย GNUย Generalย Publicย Licenseย forย moreย details.
15 #
16 #ย Youย shouldย haveย receivedย aย copyย ofย theย GNUย Generalย Publicย License
17 #ย alongย withย thisย program;ย ifย not,ย writeย toย theย Freeย Softwareย Foundation,
18 #ย Inc.,ย 51ย Franklinย Street,ย Fifthย Floor,ย Boston,ย MAย 02110-1301,ย USA.
19 #
20 #ย Contributor(s):ย Campbellย Barton,ย M.G.ย Kishalmi
21 #
22 #ย *****ย ENDย GPLย LICENSEย BLOCKย *****
23
24 #ย <pep8ย compliant>
25
26 """
27 Exampleย linuxย usage
28 ย pythonย .~/blenderSVN/blender/build_files/cmake/cmake_netbeans_project.pyย ~/blenderSVN/cmake
29
30 Windowsย notย supportedย soย far
31 """
32
33 fromย project_infoย importย (SIMPLE_PROJECTFILE,
34 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย SOURCE_DIR,
35 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย CMAKE_DIR,
36 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย PROJECT_DIR,
37 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย source_list,
38 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย is_project_file,
39 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย is_c_header,
40 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย #ย is_py,
41 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย cmake_advanced_info,
42 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย cmake_compiler_defines,
43 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย )
44
45
46 importย os
47 fromย os.pathย importย join,ย dirname,ย normpath,ย relpath,ย exists
48
49
50 defย create_nb_project_main():
51 ย ย ย ย filesย =ย list(source_list(SOURCE_DIR,ย filename_check=is_project_file))
52 ย ย ย ย files_relย =ย [relpath(f,ย start=PROJECT_DIR)ย forย fย inย files]
53 ย ย ย ย files_rel.sort()
54
55 ย ย ย ย ifย SIMPLE_PROJECTFILE:
56 ย ย ย ย ย ย ย ย pass
57 ย ย ย ย else:
58 ย ย ย ย ย ย ย ย includes,ย definesย =ย cmake_advanced_info()
59 ย ย ย ย ย ย ย ย #ย forย someย reasonย itย doesntย giveย allย internalย includes
60 ย ย ย ย ย ย ย ย includesย =ย list(set(includes)ย |ย set(dirname(f)ย forย fย inย filesย ifย is_c_header(f)))
61 ย ย ย ย ย ย ย ย includes.sort()
62
63 ย ย ย ย ย ย ย ย PROJECT_NAMEย =ย "Blender"
64
65 ย ย ย ย ย ย ย ย #ย ---------------ย NBย spesific
66 ย ย ย ย ย ย ย ย definesย =ย [("%s=%s"ย %ย cdef)ย ifย cdef[1]ย elseย cdef[0]ย forย cdefย inย defines]
67 ย ย ย ย ย ย ย ย definesย +=ย [cdef.replace("#define",ย "").strip()ย forย cdefย inย cmake_compiler_defines()]
68
69 ย ย ย ย ย ย ย ย defย file_list_to_nested(files):
70 ย ย ย ย ย ย ย ย ย ย ย ย #ย convertย pathsย toย hierarchy
71 ย ย ย ย ย ย ย ย ย ย ย ย paths_nestedย =ย {}
72
73 ย ย ย ย ย ย ย ย ย ย ย ย defย ensure_path(filepath):
74 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย filepath_splitย =ย filepath.split(os.sep)
75
76 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย pnย =ย paths_nested
77 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย forย subdirย inย filepath_split[:-1]:
78 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย pnย =ย pn.setdefault(subdir,ย {})
79 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย pn[filepath_split[-1]]ย =ย None
80
81 ย ย ย ย ย ย ย ย ย ย ย ย forย pathย inย files:
82 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ensure_path(path)
83 ย ย ย ย ย ย ย ย ย ย ย ย returnย paths_nested
84
85 ย ย ย ย ย ย ย ย PROJECT_DIR_NBย =ย join(PROJECT_DIR,ย "nbproject")
86 ย ย ย ย ย ย ย ย ifย notย exists(PROJECT_DIR_NB):
87 ย ย ย ย ย ย ย ย ย ย ย ย os.mkdir(PROJECT_DIR_NB)
88
89 ย ย ย ย ย ย ย ย #ย SOURCE_DIR_RELย =ย relpath(SOURCE_DIR,ย PROJECT_DIR)
90
91 ย ย ย ย ย ย ย ย fย =ย open(join(PROJECT_DIR_NB,ย "project.xml"),ย 'w')
92
93 ย ย ย ย ย ย ย ย f.write('<?xmlย version="1.0"ย encoding="UTF-8"?>\n')
94 ย ย ย ย ย ย ย ย f.write('<projectย xmlns="http://www.netbeans.org/ns/project/1">\n')
95 ย ย ย ย ย ย ย ย f.write('ย ย ย ย <type>org.netbeans.modules.cnd.makeproject</type>\n')
96 ย ย ย ย ย ย ย ย f.write('ย ย ย ย <configuration>\n')
97 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <dataย xmlns="http://www.netbeans.org/ns/make-project/1">\n')
98 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <name>%s</name>\n'ย %ย PROJECT_NAME)
99 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <c-extensions>c,m</c-extensions>\n')
100 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <cpp-extensions>cpp,mm</cpp-extensions>\n')
101 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <header-extensions>h,hpp,inl</header-extensions>\n')
102 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <sourceEncoding>UTF-8</sourceEncoding>\n')
103 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <make-dep-projects/>\n')
104 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <sourceRootList>\n')
105 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย <sourceRootElem>%s</sourceRootElem>\n'ย %ย SOURCE_DIR)ย ย #ย base_root_rel
106 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย </sourceRootList>\n')
107 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <confList>\n')
108 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย <confElem>\n')
109 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย <name>Default</name>\n')
110 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย <type>0</type>\n')
111 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย </confElem>\n')
112 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย </confList>\n')
113 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย </data>\n')
114 ย ย ย ย ย ย ย ย f.write('ย ย ย ย </configuration>\n')
115 ย ย ย ย ย ย ย ย f.write('</project>\n')
116
117 ย ย ย ย ย ย ย ย fย =ย open(join(PROJECT_DIR_NB,ย "configurations.xml"),ย 'w')
118
119 ย ย ย ย ย ย ย ย f.write('<?xmlย version="1.0"ย encoding="UTF-8"?>\n')
120 ย ย ย ย ย ย ย ย f.write('<configurationDescriptorย version="79">\n')
121 ย ย ย ย ย ย ย ย f.write('ย ย <logicalFolderย name="root"ย displayName="root"ย projectFiles="true"ย kind="ROOT">\n')
122 ย ย ย ย ย ย ย ย f.write('ย ย ย ย <dfย name="blender"ย root="%s">\n'ย %ย SOURCE_DIR)ย ย #ย base_root_rel
123
124 ย ย ย ย ย ย ย ย #ย writeย files!
125 ย ย ย ย ย ย ย ย files_rel_localย =ย [normpath(relpath(join(CMAKE_DIR,ย path),ย SOURCE_DIR))ย forย pathย inย files_rel]
126 ย ย ย ย ย ย ย ย files_rel_hierarchyย =ย file_list_to_nested(files_rel_local)
127 ย ย ย ย ย ย ย ย #ย print(files_rel_hierarchy)
128
129 ย ย ย ย ย ย ย ย defย write_df(hdir,ย ident):
130 ย ย ย ย ย ย ย ย ย ย ย ย dirsย =ย []
131 ย ย ย ย ย ย ย ย ย ย ย ย filesย =ย []
132 ย ย ย ย ย ย ย ย ย ย ย ย forย key,ย itemย inย sorted(hdir.items()):
133 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ifย itemย isย None:
134 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย files.append(key)
135 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย else:
136 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย dirs.append((key,ย item))
137
138 ย ย ย ย ย ย ย ย ย ย ย ย forย key,ย itemย inย dirs:
139 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย f.write('%sย ย <dfย name="%s">\n'ย %ย (ident,ย key))
140 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย write_df(item,ย identย +ย "ย ย ")
141 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย f.write('%sย ย </df>\n'ย %ย ident)
142
143 ย ย ย ย ย ย ย ย ย ย ย ย forย keyย inย files:
144 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย f.write('%s<in>%s</in>\n'ย %ย (ident,ย key))
145
146 ย ย ย ย ย ย ย ย write_df(files_rel_hierarchy,ย ident="ย ย ย ย ")
147
148 ย ย ย ย ย ย ย ย f.write('ย ย ย ย </df>\n')
149
150 ย ย ย ย ย ย ย ย f.write('ย ย ย ย <logicalFolderย name="ExternalFiles"\n')
151 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย displayName="Importantย Files"\n')
152 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย projectFiles="false"\n')
153 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย kind="IMPORTANT_FILES_FOLDER">\n')
154 ย ย ย ย ย ย ย ย #ย f.write('ย ย ย ย ย ย <itemPath>../GNUmakefile</itemPath>\n')
155 ย ย ย ย ย ย ย ย f.write('ย ย ย ย </logicalFolder>\n')
156
157 ย ย ย ย ย ย ย ย f.write('ย ย </logicalFolder>\n')
158 ย ย ย ย ย ย ย ย #ย default,ย butย thisย dirย isย infactย notย inย blenderย dirย soย weย canย ignoreย it
159 ย ย ย ย ย ย ย ย #ย f.write('ย ย <sourceFolderFilter>^(nbproject)$</sourceFolderFilter>\n')
160 ย ย ย ย ย ย ย ย f.write('ย ย <sourceFolderFilter>^(nbproject|__pycache__|.*\.py|.*\.html|.*\.blend)$</sourceFolderFilter>\n')
161
162 ย ย ย ย ย ย ย ย f.write('ย ย <sourceRootList>\n')
163 ย ย ย ย ย ย ย ย f.write('ย ย ย ย <Elem>%s</Elem>\n'ย %ย SOURCE_DIR)ย ย #ย base_root_rel
164 ย ย ย ย ย ย ย ย f.write('ย ย </sourceRootList>\n')
165
166 ย ย ย ย ย ย ย ย f.write('ย ย <projectmakefile>Makefile</projectmakefile>\n')
167
168 ย ย ย ย ย ย ย ย #ย pathsย again
169 ย ย ย ย ย ย ย ย f.write('ย ย <confs>\n')
170 ย ย ย ย ย ย ย ย f.write('ย ย ย ย <confย name="Default"ย type="0">\n')
171
172 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย <toolsSet>\n')
173 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <remote-sources-mode>LOCAL_SOURCES</remote-sources-mode>\n')
174 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <compilerSet>default</compilerSet>\n')
175 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย </toolsSet>\n')
176 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย <makefileType>\n')
177
178 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <makeTool>\n')
179 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย <buildCommandWorkingDir>.</buildCommandWorkingDir>\n')
180 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย <buildCommand>${MAKE}ย -fย Makefile</buildCommand>\n')
181 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย <cleanCommand>${MAKE}ย -fย Makefileย clean</cleanCommand>\n')
182 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย <executablePath>./bin/blender</executablePath>\n')
183
184 ย ย ย ย ย ย ย ย defย write_toolinfo():
185 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <incDir>\n')
186 ย ย ย ย ย ย ย ย ย ย ย ย forย incย inย includes:
187 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย <pElem>%s</pElem>\n'ย %ย inc)
188 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย </incDir>\n')
189 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย <preprocessorList>\n')
190 ย ย ย ย ย ย ย ย ย ย ย ย forย cdefย inย defines:
191 ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ย ย <Elem>%s</Elem>\n'ย %ย cdef)
192 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย </preprocessorList>\n')
193
194 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย <cTool>\n')
195 ย ย ย ย ย ย ย ย write_toolinfo()
196 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย </cTool>\n')
197
198 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย <ccTool>\n')
199 ย ย ย ย ย ย ย ย write_toolinfo()
200 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย </ccTool>\n')
201
202 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย </makeTool>\n')
203 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย </makefileType>\n')
204 ย ย ย ย ย ย ย ย #ย finisheย makefleย info
205
206 ย ย ย ย ย ย ย ย f.write('ย ย ย ย \n')
207
208 ย ย ย ย ย ย ย ย forย pathย inย files_rel_local:
209 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย <itemย path="%s"\n'ย %ย path)
210 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย ex="false"\n')
211 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย tool="1"\n')
212 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย ย ย ย ย flavor="0">\n')
213 ย ย ย ย ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย </item>\n')
214
215 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย <runprofileย version="9">\n')
216 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <runcommandpicklist>\n')
217 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย </runcommandpicklist>\n')
218 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <runcommand>%s</runcommand>\n'ย %ย os.path.join(CMAKE_DIR,ย "bin/blender"))
219 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <rundir>%s</rundir>\n'ย %ย SOURCE_DIR)
220 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <buildfirst>false</buildfirst>\n')
221 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <terminal-type>0</terminal-type>\n')
222 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <remove-instrumentation>0</remove-instrumentation>\n')
223 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย <environment>\n')
224 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย ย ย </environment>\n')
225 ย ย ย ย ย ย ย ย f.write('ย ย ย ย ย ย </runprofile>\n')
226
227 ย ย ย ย ย ย ย ย f.write('ย ย ย ย </conf>\n')
228 ย ย ย ย ย ย ย ย f.write('ย ย </confs>\n')
229
230 ย ย ย ย ย ย ย ย #ย todo
231
232 ย ย ย ย ย ย ย ย f.write('</configurationDescriptor>\n')
233
234
235 defย main():
236 ย ย ย ย create_nb_project_main()
237
238
239 ifย __name__ย ==ย "__main__":
240 ย ย ย ย main()
|
__label__pos
| 0.997578 |
gitk: Second try to work around the command line limit on Windows
[git/dscho.git] / builtin / stripspace.c
blob4d3b93fedb5f2eca68edca15ca1e10cb60176d36
1 #include "builtin.h"
2 #include "cache.h"
4 /*
5 * Returns the length of a line, without trailing spaces.
7 * If the line ends with newline, it will be removed too.
8 */
9 static size_t cleanup(char *line, size_t len)
11 while (len) {
12 unsigned char c = line[len - 1];
13 if (!isspace(c))
14 break;
15 len--;
18 return len;
22 * Remove empty lines from the beginning and end
23 * and also trailing spaces from every line.
25 * Note that the buffer will not be NUL-terminated.
27 * Turn multiple consecutive empty lines between paragraphs
28 * into just one empty line.
30 * If the input has only empty lines and spaces,
31 * no output will be produced.
33 * If last line does not have a newline at the end, one is added.
35 * Enable skip_comments to skip every line starting with "#".
37 void stripspace(struct strbuf *sb, int skip_comments)
39 int empties = 0;
40 size_t i, j, len, newlen;
41 char *eol;
43 /* We may have to add a newline. */
44 strbuf_grow(sb, 1);
46 for (i = j = 0; i < sb->len; i += len, j += newlen) {
47 eol = memchr(sb->buf + i, '\n', sb->len - i);
48 len = eol ? eol - (sb->buf + i) + 1 : sb->len - i;
50 if (skip_comments && len && sb->buf[i] == '#') {
51 newlen = 0;
52 continue;
54 newlen = cleanup(sb->buf + i, len);
56 /* Not just an empty line? */
57 if (newlen) {
58 if (empties > 0 && j > 0)
59 sb->buf[j++] = '\n';
60 empties = 0;
61 memmove(sb->buf + j, sb->buf + i, newlen);
62 sb->buf[newlen + j++] = '\n';
63 } else {
64 empties++;
68 strbuf_setlen(sb, j);
71 int cmd_stripspace(int argc, const char **argv, const char *prefix)
73 struct strbuf buf = STRBUF_INIT;
74 int strip_comments = 0;
76 if (argc == 2 && (!strcmp(argv[1], "-s") ||
77 !strcmp(argv[1], "--strip-comments")))
78 strip_comments = 1;
79 else if (argc > 1)
80 usage("git stripspace [-s | --strip-comments] < <stream>");
82 if (strbuf_read(&buf, 0, 1024) < 0)
83 die_errno("could not read the input");
85 stripspace(&buf, strip_comments);
87 write_or_die(1, buf.buf, buf.len);
88 strbuf_release(&buf);
89 return 0;
|
__label__pos
| 0.987321 |
CodecV
What is CodecV?
CodecV is a sneaky adware application that gets onto your computer without you even noticing it. It is compatible with Internet Explorer, Google Chrome, and Mozilla Firefox browsers, which means that everyone can get in touch with it. It has been found out that the main feature of CodecV is various commercials that it tends to display on different shopping websites, including amazon.com, kmart.com, and ebay.com. Unfortunately, these websites are not the only one; this application might operate on different kinds of websites and you will not be able to stop all these ads unless you get rid of CodecV itself. We definitely recommend removing this software if you do not want to get exposed to malicious software.
How does CodecV work?
Advertising-supported programs are able to render various advertisements automatically and possibly generate revenue for its publishers. In addition to this, such applications might cause harm to the security of your system because they display advertisements which redirect those who click on them to some other websites. The content of these web pages is completely unknown, which means that the possibility to come across malicious software and infect your computer is considerably high. We believe that the best solution is to remove the program responsible for these ads. This move will minimize the possibility to infect your system.
What is more, the fact that CodecV is going to collect some of the non-personally identifiable information related to your browsing habits is rather suspicious. It is interested in your overall browsing history, the websites that you visit the most, the content that you access on them such as the advertisements that you click on. All this data is required in order to help the publishers to improve the service as well as help third parties to display you the most relevant advertisements. As you might have probably understood, all this collected information will be shared with third parties. Does it seem rather inappropriate? If it is so, you should remove CodecV as soon as possible.
How to erase CodecV?
You should erase CodecV the moment you notice its presence on your system. It is not a very serious threat itself; however, it might cause you some inconvenience. We have provided the instructions for manual removal below. Of course, you can scan your computer with an antimalware tool, for instance, SpyHunter as well. It will do all the work for you and you will not need to worry about different unwanted programs anymore.
How to remove CodecV
Windows XP
1. Open the Start menu.
2. Select Settings and then access Control Panel.
3. Go to Add or Remove Programs.
4. Locate CodecV and select Remove.
Windows 7 and Vista
1. Click the Start button to open the menu.
2. Click the Control Panel.
3. Select Uninstall a program.
4. Right-click on the unwanted program.
5. Select Uninstall.
Windows 8
1. Access the Metro UI menu.
2. Start typing โControl Panelโ.
3. Select it when it appears to you and locate the program that you want to erase.
4. Right-click on it and then select Uninstall.
Internet Explorer
1. Launch your browser and then press Alt+T.
2. Select Manage Add-ons.
3. Under the Toolbars and Extensions, select the extension that needs to be deleted.
4. Click on it and then click Disable.
Mozilla Firefox
1. Open your browser.
2. Press Ctrl+Shift+A simultaneously.
3. Select Extensions from the menu on the left.
4. Select the unwelcome extension and then select the Remove button.
Google Chrome
1. Launch Google Chrome.
2. Press Alt+F to open the Chrome menu.
3. Select Tools and then go to Extensions.
4. Find the extension and then click the recycle bin button.
5. Select Remove in the dialog box.
You should still scan your computer after the manual removal because there might be other threats residing in your system. Click on the button below and you will be able to download a free reliable scanner.
100% FREE spyware scan and
tested removal of CodecV*
CodecV
Disclaimer
Disclaimer
Leave a Comment
Enter the numbers in the box to the right *
|
__label__pos
| 0.911764 |
Fork me on GitHub ๆ่ต
Direct3D 11 Tutorial 5: 3D Transformation_Direct3D 11 ๆ็จ5๏ผ3D่ฝฌๅ
ๆฆ่ฟฐ
ๅจไธไธไธชๆ็จไธญ๏ผๆไปฌไปๆจกๅ็ฉบ้ดๅฐๅฑๅนๆธฒๆไบไธไธช็ซๆนไฝใ ๅจๆฌๆ็จไธญ๏ผๆไปฌๅฐๆฉๅฑ่ฝฌๆข็ๆฆๅฟตๅนถๆผ็คบๅฏไปฅ้่ฟ่ฟไบ่ฝฌๆขๅฎ็ฐ็็ฎๅๅจ็ปใ
ๆฌๆ็จ็็ปๆๅฐๆฏๅด็ปๅฆไธไธช่ฝจ้่ฟ่ก็ๅฏน่ฑกใ ๅฑ็คบ่ฝฌๆขไปฅๅๅฆไฝๅฐๅฎไปฌ็ปๅไปฅๅฎ็ฐๆๆ็ๆๆๅฐๆฏๆ็จ็ใ ๅจๆไปฌไป็ปๆฐๆฆๅฟตๆถ๏ผๆชๆฅ็ๆ็จๅฐๅจๆญคๅบ็กไธๆๅปบใ
ย
่ตๆบ็ฎๅฝ
(SDK root)\Samples\C++\Direct3D11\Tutorials\Tutorial05
Github
ย
่ฝฌๅ
ๅจ3Dๅพๅฝขไธญ๏ผๅๆข้ๅธธ็จไบๅฏน้กถ็นๅ็ข้่ฟ่กๆไฝใ ๅฎ่ฟ็จไบๅฐๅฎไปฌๅจไธไธช็ฉบ้ดไธญ่ฝฌๆขไธบๅฆไธไธช็ฉบ้ดใ ้่ฟไธ็ฉ้ต็ธไนๆฅๆง่กๅๆขใ ้ๅธธๆไธ็ง็ฑปๅ็ๅๅงๅๆขๅฏไปฅๅจ้กถ็นไธๆง่ก๏ผๅนณ็งป๏ผ็ธๅฏนไบๅ็นไฝไบ็ฉบ้ดไธญ๏ผ๏ผๆ่ฝฌ๏ผ็ธๅฏนไบx๏ผy๏ผzๅธง็ๆนๅ๏ผๅ็ผฉๆพ๏ผ่ท็ฆป ่ตทๆบ๏ผใ ้คๆญคไนๅค๏ผๆๅฝฑๅๆข็จไบไป่งๅพ็ฉบ้ดๅฐๆๅฝฑ็ฉบ้ดใ XNA Mathๅบๅ
ๅซ็APIๅฏไปฅๆนไพฟๅฐๆๅปบ็ฉ้ต๏ผ็จไบๅค็ง็จ้๏ผไพๅฆๅนณ็งป๏ผๆ่ฝฌ๏ผ็ผฉๆพ๏ผไธ็ๅฐ่งๅพ่ฝฌๆข๏ผ่งๅพๅฐๆๅฝฑ่ฝฌๆข็ญใ ็ถๅ๏ผๅบ็จ็จๅบๅฏไปฅไฝฟ็จ่ฟไบ็ฉ้ตๆฅ่ฝฌๆขๅ
ถๅบๆฏไธญ็้กถ็นใ ้่ฆๅฏน็ฉ้ตๅๆขๆๅบๆฌ็ไบ่งฃใ ๆไปฌๅฐ็ฎ่ฆไป็ปไธ้ข็ไธไบ็คบไพใ
ย
ๅนณ็งป
ๅนณ็งปๆฏๆๅจ็ฉบ้ดไธญ็งปๅจๆ็งปไฝไธๅฎ่ท็ฆปใ ๅจ3Dไธญ๏ผ็จไบ็ฟป่ฏ็็ฉ้ตๅ
ทๆๅฝขๅผใ
1 0 0 0
0 1 0 0
0 0 1 0
a b c 1
ๅ
ถไธญ๏ผa๏ผb๏ผc๏ผๆฏๅฎไน็งปๅจๆนๅๅ่ท็ฆป็ๅ้ใ ไพๅฆ๏ผ่ฆๆฒฟX่ฝด๏ผ่ดXๆนๅ๏ผ็งปๅจ้กถ็น-5ๅไฝ๏ผๆไปฌๅฏไปฅๅฐๅ
ถไธๆญค็ฉ้ต็ธไน๏ผ
1 0 0 0
0 1 0 0
0 0 1 0
-5 0 0 1
ๅฆๆๆไปฌๅฐๆญคๅบ็จไบไปฅๅ็นไธบไธญๅฟ็็ซๆนไฝๅฏน่ฑก๏ผๅ็ปๆๆฏ่ฏฅๆกๅ่ดX่ฝด็งปๅจ5ไธชๅไฝ๏ผๅฆๅพ5ๆ็คบ๏ผๅจๅบ็จๅนณ็งปไนๅใ
ๅพ1.ๅนณ็งป็ๅฝฑๅ
ๅจ3Dไธญ๏ผ็ฉบ้ด้ๅธธ็ฑๅ็นๅๆฅ่ชๅ็น็ไธไธชๅฏไธ่ฝดๅฎไน๏ผX๏ผYๅZ.่ฎก็ฎๆบๅพๅฝขไธญ้ๅธธไฝฟ็จๅคไธช็ฉบ้ด๏ผๅฏน่ฑก็ฉบ้ด๏ผไธ็็ฉบ้ด๏ผ่งๅพ็ฉบ้ด๏ผๆๅฝฑ็ฉบ้ดๅๅฑๅน็ฉบ้ดใ
ๅพ2.ๅจๅฏน่ฑก็ฉบ้ดไธญๅฎไน็็ซๆนไฝ
ย
ๆ่ฝฌ
ๆ่ฝฌๆฏๆๅด็ป็ฉฟ่ฟๅ็น็่ฝดๆ่ฝฌ้กถ็นใ ไธไธช่ฟๆ ท็่ฝดๆฏ็ฉบ้ดไธญ็X๏ผYๅZ่ฝดใ 2Dไธญ็็คบไพๆฏ้ๆถ้ๆ่ฝฌ็ข้[1 0] 90ๅบฆใ ๆ่ฝฌ็็ปๆๆฏๅ้[0 1]ใ ็จไบๅด็ปY่ฝด้กบๆถ้ๆ่ฝฌ1ๅบฆ็็ฉ้ต็่ตทๆฅๅ่ฟๆ ท๏ผ
cosฮ 0 -sinฮ 0
0 1 0 0
sinฮ 0 cosฮ 0
0 0 0 1
ๅพ6ๆพ็คบไบๅด็ปY่ฝดๆ่ฝฌไปฅๅ็นไธบไธญๅฟ45ๅบฆ็็ซๆนไฝ็ๆๆใ
ๅพ3.ๅด็ปY่ฝดๆ่ฝฌ็ๆๆ
ย
็ผฉๆพ
็ผฉๆพๆฏๆๆฒฟ่ฝดๆนๅๆพๅคงๆ็ผฉๅฐ็ข้ๅ้็ๅคงๅฐใ ไพๅฆ๏ผ็ข้ๅฏไปฅๆฒฟๆๆๆนๅๆๆฏไพๆพๅคงๆไป
ๆฒฟX่ฝดๆๆฏไพ็ผฉๅฐใ ไธบไบๆฉๅฑ๏ผๆไปฌ้ๅธธๅจไธ้ขๅบ็จ็ผฉๆพ็ฉ้ต๏ผ
p 0 0 0
0 q 0 0
0 0 r 0
0 0 0 1
ๅ
ถไธญp๏ผqๅrๅๅซๆฏๆฒฟX๏ผYๅZๆนๅ็ๆฏไพๅ ๅญใ ไธๅพๆพ็คบไบๆฒฟX่ฝด็ผฉๆพ2ๅนถๆฒฟY่ฝด็ผฉๆพ0.5็ๆๆใ
ๅพ4.็ผฉๆพ็ๆๆ
ย
ๅค้่ฝฌๆข
่ฆๅฐๅคไธชๅๆขๅบ็จไบ็ข้๏ผๆไปฌๅฏไปฅ็ฎๅๅฐๅฐ็ข้ไนไปฅ็ฌฌไธไธชๅๆข็ฉ้ต๏ผ็ถๅๅฐๅพๅฐ็็ข้ไนไปฅ็ฌฌไบไธชๅๆข็ฉ้ต๏ผไพๆญค็ฑปๆจใ ๅ ไธบๅ้ๅ็ฉ้ตไนๆณๆฏๅ
ณ่็๏ผๆไปฌไนๅฏไปฅๅ
ๅฐๆๆ็ฉ้ต็ธไน๏ผ็ถๅๅฐๅ้ไนไปฅไน็งฏ็ฉ้ต๏ผๅพๅฐ็ธๅ็็ปๆใ ไธๅพๆพ็คบไบๅฆๆๆไปฌๅฐๆ่ฝฌๅๅนณ็งป่ฝฌๆข็ปๅๅจไธ่ตท๏ผ็ซๆนไฝๅฐๅฆไฝ็ปๆใ
ๅพ5.ๆ่ฝฌๅๅนณ็งป็ๆๆ
ๅๅปบ่ฝจ้
ๅจๆฌๆ็จไธญ๏ผๆไปฌๅฐ่ฝฌๆขไธคไธชๅค็ปดๆฐๆฎ้ใ ็ฌฌไธไธชๅฐๆ่ฝฌๅฐไฝ๏ผ่็ฌฌไบไธชๅฐๅด็ป็ฌฌไธไธชๆ่ฝฌ๏ผๅๆถๅจๅ
ถ่ชๅทฑ็่ฝดไธๆ่ฝฌใ ่ฟไธคไธช็ซๆนไฝๅฐๅ
ทๆไธๅ
ถๅ
ณ่็่ชๅทฑ็ไธ็ๅๆข็ฉ้ต๏ผๅนถไธ่ฏฅ็ฉ้ตๅฐๅจๆธฒๆ็ๆฏไธชๅธงไธญ้ๆฐๅบ็จไบ่ฏฅ็ฉ้ตใ
XNA Mathไธญๆไธไบๅฝๆฐๅฏไปฅๅธฎๅฉๅๅปบๆ่ฝฌ๏ผๅนณ็งปๅ็ผฉๆพ็ฉ้ตใ
โข ๅด็ปX๏ผYๅZ่ฝดๆง่ก็ๆ่ฝฌๅๅซไฝฟ็จๅฝๆฐXMMatrixRotationX๏ผXMMatrixRotationYๅXMMatrixRotationZๆฅๅฎๆใ ๅฎไปฌๅๅปบๅด็ปไธป่ฝดไนไธๆ่ฝฌ็ๅบๆฌๆ่ฝฌ็ฉ้ตใ ๅด็ปๅ
ถไป่ฝด็ๅคๆๆ่ฝฌๅฏไปฅ้่ฟๅฐๅฎไปฌไธญ็ๅ ไธช็ธไนๆฅๅฎๆใ
โข ๅฏไปฅ้่ฟ่ฐ็จXMMatrixTranslationๅฝๆฐๆฅๆง่ก่ฝฌๆขใ ๆญคๅฝๆฐๅฐๅๅปบไธไธช็ฉ้ต๏ผ็จไบ่ฝฌๆขๅๆฐๆๅฎ็็นใ
โข ไฝฟ็จXMMatrixScalingๅฎๆ็ผฉๆพใ ๅฎไป
ๆฒฟไธป่ฝด็ผฉๆพใ ๅฆๆ้่ฆๆฒฟไปปๆ่ฝด็ผฉๆพ๏ผๅๅฏไปฅๅฐ็ผฉๆพ็ฉ้ตไธ้ๅฝ็ๆ่ฝฌ็ฉ้ต็ธไนไปฅๅฎ็ฐ่ฏฅๆๆใ
็ฌฌไธไธช็ซๆนไฝๅฐๆ่ฝฌๅฐไฝ๏ผๅนถไฝไธบ่ฝจ้็ไธญๅฟใ ็ซๆนไฝๆฒฟY่ฝดๆ่ฝฌ๏ผๅบ็จไบ็ธๅ
ณ็ไธ็็ฉ้ตใ ่ฟๆฏ้่ฟ่ฐ็จไปฅไธไปฃ็ ไธญๆพ็คบ็XMMatrixRotationYๅฝๆฐๆฅๅฎๆ็ใ ็ซๆนไฝๆฏๅธงๆ่ฝฌไธๅฎ้ใ ็ฑไบ็ซๆนไฝ่ขซๅ่ฎพไธบ่ฟ็ปญๆ่ฝฌ๏ผๅ ๆญคๆ่ฝฌ็ฉ้ตๆๅบไบ็ๅผ้ๆฏๅธง้ๅขใ
// 1st Cube: Rotate around the origin
g_World1 = XMMatrixRotationY( t );
ใใ
็ฌฌไธไธช็ซๆนไฝๅฐๆ่ฝฌๅฐไฝ๏ผๅนถไฝไธบ่ฝจ้็ไธญๅฟใ ็ซๆนไฝๆฒฟY่ฝดๆ่ฝฌ๏ผๅบ็จไบ็ธๅ
ณ็ไธ็็ฉ้ตใ ่ฟๆฏ้่ฟ่ฐ็จไปฅไธไปฃ็ ไธญๆพ็คบ็XMMatrixRotationYๅฝๆฐๆฅๅฎๆ็ใ ็ซๆนไฝๆฏๅธงๆ่ฝฌไธๅฎ้ใ ็ฑไบ็ซๆนไฝ่ขซๅ่ฎพไธบ่ฟ็ปญๆ่ฝฌ๏ผๅ ๆญคๆ่ฝฌ็ฉ้ตๆๅบไบ็ๅผ้ๆฏๅธง้ๅขใ
// 2nd Cube: Rotate around origin
XMMATRIX mSpin = XMMatrixRotationZ( -t );
XMMATRIX mOrbit = XMMatrixRotationY( -t * 2.0f );
XMMATRIX mTranslate = XMMatrixTranslation( -4.0f, 0.0f, 0.0f );
XMMATRIX mScale = XMMatrixScaling( 0.3f, 0.3f, 0.3f );
g_World2 = mScale * mSpin * mTranslate * mOrbit;
ใใ
้่ฆๆณจๆ็ไธ็นๆฏ๏ผ่ฟไบๆไฝไธๆฏๅฏไบคๆข็ใ ๅบ็จ่ฝฌๆข็้กบๅบๅพ้่ฆใ ่ฏ้ช่ฝฌๅ้กบๅบๅนถ่งๅฏ็ปๆใ
็ฑไบๆๆๅๆขๅฝๆฐ้ฝๅฐๆ นๆฎๅๆฐๅๅปบๆฐ็ฉ้ต๏ผๅ ๆญคๅฎไปฌๆ่ฝฌ็้ๅฟ
้กป้ๅขใ ่ฟๆฏ้่ฟๆดๆฐโๆถ้ดโๅ้ๆฅๅฎๆ็ใ
// Update our time
t += XM_PI * 0.0125f;
ใใ
ๅจ่ฟ่กๆธฒๆ่ฐ็จไนๅ๏ผๅฟ
้กปไธบ็่ฒๅจๆดๆฐๅธธ้็ผๅฒๅบใ ่ฏทๆณจๆ๏ผไธ็็ฉ้ตๅฏนไบๆฏไธชๅค็ปดๆฐๆฎ้้ฝๆฏๅฏไธ็๏ผๅ ๆญคไผไธบๆฏไธชไผ ้็ปๅฎ็ๅฏน่ฑก่ฟ่กๆดๆนใ
//
// Update variables for the first cube
//
ConstantBuffer cb1;
cb1.mWorld = XMMatrixTranspose( g_World1 );
cb1.mView = XMMatrixTranspose( g_View );
cb1.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, NULL, &cb1, 0, 0 );
//
// Render the first cube
//
g_pImmediateContext->VSSetShader( g_pVertexShader, NULL, 0 );
g_pImmediateContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );
g_pImmediateContext->PSSetShader( g_pPixelShader, NULL, 0 );
g_pImmediateContext->DrawIndexed( 36, 0, 0 );
//
// Update variables for the second cube
//
ConstantBuffer cb2;
cb2.mWorld = XMMatrixTranspose( g_World2 );
cb2.mView = XMMatrixTranspose( g_View );
cb2.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, NULL, &cb2, 0, 0 );
//
// Render the second cube
//
g_pImmediateContext->DrawIndexed( 36, 0, 0 );
ใใ
ๆทฑๅบฆ็ผๅฒๅบ
ๆฌๆ็จ่ฟๆๅฆๅคไธไธช้่ฆ็่กฅๅ
๏ผ้ฃๅฐฑๆฏๆทฑๅบฆ็ผๅฒๅบใ ๆฒกๆๅฎ๏ผ่พๅฐ็่ฝจ้็ซๆนไฝๅจๅด็ปๅ่
็ๅ้จๆถไปไผ่ขซ็ปๅถๅจ่พๅคง็ไธญๅฟ็ซๆนไฝ็้กถ้จใ ๆทฑๅบฆ็ผๅฒๅบๅ
่ฎธDirect3D่ท่ธช็ปๅถๅฐๅฑๅน็ๆฏไธชๅ็ด ็ๆทฑๅบฆใ Direct3D 11ไธญๆทฑๅบฆ็ผๅฒๅบ็้ป่ฎค่กไธบๆฏๆฃๆฅๅฑๅนไธ็ปๅถ็ๆฏไธชๅ็ด ไธๅฑๅน็ฉบ้ดๅ็ด ็ๆทฑๅบฆ็ผๅฒๅบไธญๅญๅจ็ๅผใ ๅฆๆๆญฃๅจๆธฒๆ็ๅ็ด ็ๆทฑๅบฆๅฐไบๆ็ญไบๆทฑๅบฆ็ผๅฒๅจไธญๅทฒ็ปๅญๅจ็ๅผ๏ผๅ็ปๅถๅ็ด ๅนถไธๅฐๆทฑๅบฆ็ผๅฒๅจไธญ็ๅผๆดๆฐไธบๆฐ็ปๅถ็ๅ็ด ็ๆทฑๅบฆใ ๅฆไธๆน้ข๏ผๅฆๆๆญฃๅจ็ปๅถ็ๅ็ด ็ๆทฑๅบฆๅคงไบๆทฑๅบฆ็ผๅฒๅจไธญๅทฒ็ปๅญๅจ็ๅผ๏ผๅไธขๅผ่ฏฅๅ็ด ๅนถไธๆทฑๅบฆ็ผๅฒๅจไธญ็ๆทฑๅบฆๅผไฟๆไธๅใ
็คบไพไธญ็ไปฅไธไปฃ็ ๅๅปบๆทฑๅบฆ็ผๅฒๅบ๏ผDepthStencil็บน็๏ผใ ๅฎ่ฟๅๅปบๆทฑๅบฆ็ผๅฒๅบ็DepthStencilView๏ผไปฅไพฟDirect3D 11็ฅ้ๅฐๅ
ถ็จไฝๆทฑๅบฆๆจกๆฟ็บน็ใ
// Create depth stencil texture
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory( &descDepth, sizeof(descDepth) );
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
hr = g_pd3dDevice->CreateTexture2D( &descDepth, NULL, &g_pDepthStencil );
if( FAILED(hr) )
return hr;
// Create the depth stencil view
D3D11_DEPTH_STENCIL_VIEW_DESC descDSV;
ZeroMemory( &descDSV, sizeof(descDSV) );
descDSV.Format = descDepth.Format;
descDSV.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
hr = g_pd3dDevice->CreateDepthStencilView( g_pDepthStencil, &descDSV, &g_pDepthStencilView );
if( FAILED(hr) )
return hr;
ใใ
ไธบไบไฝฟ็จ่ฟไธชๆฐๅๅปบ็ๆทฑๅบฆๆจกๆฟ็ผๅฒๅบ๏ผๆ็จๅฟ
้กปๅฐๅ
ถ็ปๅฎๅฐ่ฎพๅคใ ่ฟๆฏ้่ฟๅฐๆทฑๅบฆๆจกๆฟ่งๅพไผ ้็ปOMSetRenderTargetsๅฝๆฐ็็ฌฌไธไธชๅๆฐๆฅๅฎๆ็ใ
g_pImmediateContext->OMSetRenderTargets( 1, &g_pRenderTargetView, g_pDepthStencilView );
ใใ
ไธๆธฒๆ็ฎๆ ไธๆ ท๏ผๆไปฌ่ฟๅฟ
้กปๅจๆธฒๆไนๅๆธ
้คๆทฑๅบฆ็ผๅฒๅบใ ่ฟๅฏ็กฎไฟๅ
ๅๅธง็ๆทฑๅบฆๅผไธไผ้่ฏฏๅฐไธขๅผๅฝๅๅธงไธญ็ๅ็ด ใ ๅจไธ้ข็ไปฃ็ ไธญ๏ผๆ็จๅฎ้
ไธๆฏๅฐๆทฑๅบฆ็ผๅฒๅบ่ฎพ็ฝฎไธบๆๅคง้๏ผ1.0๏ผใ
//
// Clear the depth buffer to 1.0 (max depth)
//
ใใ
ๆ็ปๆๆ
ย
posted @ 2018-10-27 19:00ย Zoctopus_Zhang ้
่ฏป(...) ่ฏ่ฎบ(...) ็ผ่พ ๆถ่
// function btn_donateClick() { var DivPopup = document.getElementById('Div_popup'); var DivMasklayer = document.getElementById('div_masklayer'); DivMasklayer.style.display = 'block'; DivPopup.style.display = 'block'; var h = Div_popup.clientHeight; with (Div_popup.style) { marginTop = -h / 2 + 'px'; } } function MasklayerClick() { var masklayer = document.getElementById('div_masklayer'); var divImg = document.getElementById("Div_popup"); masklayer.style.display = "none"; divImg.style.display = "none"; } setTimeout( function () { document.getElementById('div_masklayer').onclick = MasklayerClick; document.getElementById('btn_donate').onclick = btn_donateClick; var a_gzw = document.getElementById("guanzhuwo"); a_gzw.href = "javascript:void(0);"; $("#guanzhuwo").attr("onclick","follow('33513f9f-ba13-e011-ac81-842b2b196315');"); }, 900);
|
__label__pos
| 0.586281 |
Hello (again),
Hindsight is using a push-model (i.e Nagios passive checks). This is
great, but I want to plug it with Prometheus which uses pull-model
[1].
I see several ways to handle this:
- use the prometheus push-gateway [2]. This has several drawbacks listed below
- introduce pull model in hindsight
- add a new daemon, based on lua_sandbox too, but using pull model
The drawbacks of prometheus push-gateway are:
- Unnecessary polling of data (data is grabbed even if not pulled by prometheus
- time lag, between data grabbing and data pulling
- To sum up : to reduce time lag, you increase polling rate, when us
decrease polling, you increase time lag.
The push model may work like this:
- Adding pull_message_matcher config to inputs (defaults to FALSE)
- Adding process_pull_message() function to inputs, returning a table
of messages (or should it be inject_pull_message() + return 0?)
- Adding request_pull_message() function to outputs, which maps to
matching process_pull_message() and concatenates the results in a
table. This function is blocking.
Opinions?
NB: I won't work on this before 2017 or 2018, but I need to discuss
this early to.
NB2: My current TODO list is at:
https://gist.github.com/sathieu/5a7e83d514638f396e17d462f13adee0
[1]:
https://prometheus.io/docs/introduction/faq/#why-do-you-pull-rather-than-push?
[2]: https://prometheus.io/docs/instrumenting/pushing/
--
Mathieu Parent
_______________________________________________
Heka mailing list
[email protected]
https://mail.mozilla.org/listinfo/heka
Reply via email to
|
__label__pos
| 0.908483 |
J2EEOnline J2EEย
Java Conceptsย ยซPrevย
Using join() with Java Threads
In addition to ensuring that only one chunk of code can be accessed at once, you can synchronize by splicing processes back together.
Here is the pseudocode for the TownHall class' main() method:
create the MC
create the speakers
join the MC's thread with the thread that initiated it
close the town hall
What would happen if we did not attempt to join the MC's thread with the thread that initiated it (which is the thread running the main() method)?
The town hall would close while the MC and the speakers were still inside hashing out the issues. In other words, we would get output that looked something like this:
Output
MC here: good morning.
speaker 4 stepping onto soapbox
speaker 1 stepping onto soapbox
speaker 3 stepping onto soapbox
speaker 0 stepping onto soapbox
speaker 0: music should be bought
speaker 2 stepping onto soapbox
Town Hall closing.
speaker 2: dogs should be mandatory
.
.
To join a thread with the thread that initiated it, you simply call the thread's join() method.
The MC for the town hall meeting does this as follows:
public static void main(String args[]) {
// create the speakers
MC georgeWill = new MC(podium);
georgeWill.start();
// let the speakers speak
try {
georgeWill.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Town Hall closing.");
}
Note that the join() method might throw an exception, so we have to wrap this call to join() in a try statement.
At the time of this writing, the implementation of the join() method in the Netscape JVM (up to version 4.04) causes the thread to block forever.
Thread Question discussing the join() method
What can be done so that the following program prints "tom", "dick" and "harry" in that order?
public class TestClass extends Thread{
String name = "";
public TestClass(String str) {
name = str;
}
public void run() {
try{
Thread.sleep( (int) (Math.random()*1000) );
System.out.println(name);
}
catch(Exception e){
}
}
public static void main(String[] str) throws Exception{
//1
Thread t1 = new TestClass("tom");
//2
Thread t2 = new TestClass("dick");
//3
t1.start();
//4
t2.start();
//5
System.out.println("harry");
//6
}
}
Select 1 option:
1. Insert t1.join(); and t2.join(); at //4 and //5 respectively.
2. Insert t2.join() at //5
3. Insert t1.join(); t2.join(); at //6
4. Insert t1.join(); t2.join(); at //3
5. Insert t1.join() at //5
Answer: a
Explanation:
Here, you have 3 threads in action. The main thread and the 2 threads that are created in main.
The concept is when a thread calls join() on another thread, the calling thread waits till the other thread dies.
If we insert t1.join() at //4, the main thread will not call t2.start() till t1 dies. So, in affect, "tom" will be printed first and then t2 is started. Now, if we put t2.join() at //5, the main thread will not print "harry" till t2 dies. So, "dick" is printed and then the main thread prints "harry".
|
__label__pos
| 0.942728 |
2
$\begingroup$
I am playing the Natural Number Game found here and am trying to prove a theorem in Lean4. The theorem states that if a natural number x is less than or equal to 0, then x must be 0. The formal statement is โ (x : โ), x โค 0 โ x = 0.
Here's my current code:
theorem le_zero (x : โ) (hx : x โค 0) : x = 0 := by {
apply le_trans x 0 x at hx,
cases x with d,
rfl,
symm,
-- Now I have a goal of `succ d = 0`
apply zero_le at hx,
-- Stuck here, it says 'oops'
}
I applied le_trans and then used cases to handle the base case x = 0 and the successor case x = succ d. The base case is easily solved with rfl, but I'm stuck on the successor case.
$\endgroup$
1
โข $\begingroup$ Hint 1: Note that in the natural numbers, succ d can never be 0. So your mistake has to be before you got to the goal succ d = 0. Hint 2: Have you tried it on paper? $\endgroup$
โย Jason Rute
Oct 30, 2023 at 1:32
1 Answer 1
3
$\begingroup$
this is my proof that uses the rules of the game
cases hx
contrapose! h
symm
intro t
apply eq_zero_of_add_right_eq_zero at t
apply h at t
exact t
Not sure if cases is allowed at the beginning without a with but it seems to be accepted, Game will accept this proof.
Edit: Someone posted another shorter proof, but I don't see their proof anymore, so here it goes
cases hx with a ha
exact eq_zero_of_add_right_eq_zero _ _ ha.symm
$\endgroup$
0
Your Answer
By clicking โPost Your Answerโ, you agree to our terms of service and acknowledge you have read our privacy policy.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.953524 |
Step-by-step install Windows Server Essentials 2012 R2 with non local domain
As the best practice of the latest few years, .local domain is not a good way to be deployed in any environment. The main reason for this is that since November 1 2015, will end the ability to have .local domains in public certificates. This will also apply in small environments, because we also use that certificates (for example we use them in Remote desktop services, Exchange, Remote web workplaceโฆ). On the other way, it is also not a good choice to have the internal domain name the same as the external. I would suggest you, for the internal domain name, to choose some kind of subdomain of the public domain name. For example, we can use company.com as public (external) domain name and internal.company.com as internal (Active Directory) domain name.
When you install the Essentials Server 2012R2, you will not be able to choose the internal domain name as you want, but this is simply your NetBIOS domain with.local extension in the end โ exactly the type of extension we want to avoid.
Here is the step-by-step guide how to install Essentials server with different, more accurate options. In the example we have below, we will install Essentials server with NetBIOS domain name MyCompany, AD domain name Internal.Mycompany.com, server name MyServer and company name MyCompany. In your installation, you have to change the variables to your desired values.
The installation begins with a normal server installation from a media and after the server restarts, when the Configure Windows Server Essentials wizard will appear, you can see that you have no place to write your AD domain name (picture 1).
Picture 1
At this point, just close this wizard with cancel (picture 2).
Picture 2
Open the PowerShell as Administrator and write the syntax:
Start-WssConfigurationService -CompanyName โMyCompanyโ -DNSName โInternal.MyCompamny.comโ -NetBiosName โMyCompanyโ -ComputerName โMyServerโ โNewAdminCredential $cred -Setting All
The explanation of all used switches is available on TechNet. Enter your AD administrator credentials in the window that will appear. This will be the new administrator โ the same as you configure it in the Essential server wizard (picture 3).
Picture 3
When the system will prompt, if you want to continue the Essentials server configuration, just click Y (picture 4).
Picture 4
Exit from PowerShell and the server will restart. After this, when you log in, you will see that the wizard Configure Windows Server Essentials will run. You have just to wait that it will finish. At this point the wizard has all the information it needs and you are not able to change them (picture 5).
Picture 5
This is all you need to do. As you can see in the picture 6, now we have installed the server with a non .local domain and with all the settings we want.
Picture 6
.
Recommended Reading
Comments Icon47 comments found on โStep-by-step install Windows Server Essentials 2012 R2 with non local domain
1. is it posible to use a domain name such as โmyown.mxโ in the -DNSName โInternal.MyCompamny.comโ?
1. Hi,
Of course. You can use any domain name, but my suggestion is to use a subdomain of public used company domain.
1. well, iโve tryed with a domain an sufix .mx but it doesnโt permit so!!
any sugestion?
2. Thanks for your answer! i tryed with a different domain name (as well as subdomains) and seems like it does not allow me to use the .mx suffix.
if i use something like โsubserver.myown.comโ its ok for the setup process โฆ
it reports me an error if I add the โ.mxโ Do you know if it is restricted?
or can you try and test this example: โsubdmntest.myown.com.mxโ?
than you very much for your help!
1. Hm, I donโt know why. The โ.mxโ domain is just a domain. Maybe it was something wrong with resolving it? Internet connection?
2. I found this article and have used the technique 3 times now. I was thrown by the fact that the โEnter your AD administrator credentials in the window that will appear. This will be the new administrator โ the same as you configure it in the Essential server wizardโ REQUIRES an answer DIFFERENT than the โadministratorโ login created when first installing Windows 2012r2. Once I got past trying to figure out what the error message was all about and put in a different user name and password. (I am NOT joining this to an existing domain so I am setting up a new AD Administrator) It worked exactly as expected.
There is a noticeable time delay between when you exit power shell and the system reboots. Something that is a little disconcerting after just seeing several errors (because of the user/password miss understanding) and wondering if something is broken.
Third timeโs the charm! ๐
Thank you very much for this article. I wonder why the world is not beating down your door. I have chosen to register a second โshort as possibleโ domain.net for my clients to use as their internal domain. I also use the โshortโ domain name for their Office 365 account initial internal domain.
Randy
3. Thanks for this tip โ extremely useful. I have been testing it for use with a Windows 2012 R2 Std on which the Essentials Experience role has been enabled (not a Windows 2012 R2 Essentials SKU). So this is Windows 2012 R2 Std install with the Essentials Experience role installed, then a reboot performed and then the PowerShell script is run. It works well with one or two small changes that i would like to document here if anyone else stumbles across this blog page.
1) The server rejects the -Setting All parameter for some reason. I omitted it in the end because it is simply the Windows Updates config which you can do later in the GUI.
2) I discovered that whatever I did, the server completely ignored the -ComputerName โMyServerโ parameter. When the server rebooted the server name had not changed. This was annoying because once AD is installed, you canโt change the server name through the GUI. I believe there may be some registry hack or script you can use to change the name but this seems unclean. So, I started again and simply named the server to my required name when it was in workgroup mode, then ran the script. I kept the parameter in the script, just in case, but reading Technet, it seems itโs not required, so you can probably leave it out.
So this is the script I used:
Start-WssConfigurationService -CompanyName โMyCompanyโ -DNSName โInternal.MyCompamny.comโ -NetBiosName โMyCompanyโ -ComputerName โMyServerโ โNewAdminCredential $cred
And as Randy says, there is a noticeable time delay between closing PowerShell and the server rebooting โ it appears as though nothing is happening but just leave it and it will reboot (you can check Task manager to see that it is indeed doing something behind the scenes).
1. Thank you for comments.
You are totally right.
1. As you write, this is an Windows Update setting and can be changed later without any problem. The succes of this setting of course depend on many things (internet connection, โฆ) and can fail.
2. True. My Mistake. DC name can not be changed, but you can use the same script if you want to install a new domain on Essentials server (if you donโt want a .local domain). There you need to specify a computer name.
Thank you for a comment.
2. I think the -ComputerName command didnโt work because of the โdouble quotesโ around the name. Try it without the quotes. I did and it worked for me.
1. Could be true. Yes, you can use without quotes, but is better to use quotes as in this case you can use any symbol (many times also more than one word).
You should use double quotes, but unfortunately many times happens that formatting text with MS Word or with other word processor change the normal double quote symbol to something similar. In this case you have to write quotes manually.
4. This article was found after trying to use an answer file a few times, thanks for writing it. I had no errors and now I am having trouble after reboot joining PCโs to the corp.company.com
Anyone else have this? to me it sounds like a DNS lookup issue but I see nothing wrong
5. Figured it out, small office everyone was on wifi and I didnt have the server as primary DNS, !!
Thanks again for the instructions canโt believe <S just didnt allow the .com in the first place
1. Not really agree with you. If you have server in company, it is always the best choice to put this server as DNS.
Otherwise, if you look to company without server, then this article is not for you and it doesnโt matter if it is on wireless or wired network. As I know all access points and routers they have an option to disable DHCP and change DNS server.
6. After running this command upon reboot Iโm finding I have no active directory users and computers tools and I cannot edit group policies on the domain, almost like Iโm not a full admin for some reason. Anyone else run into this?
1. Something went wrong between instalation. The istalation in this mode could not modify the installation of ADDS tools or roles.
In any case, you can try to add this snap ins manually, but I am affraid that there are more problems in the installation.
1. My bad, I made too many changes at once it seems, I had a few dozen updates still pending reboot when I ran the powershell command, I let it reboot and it finished the updates. After the reboot I had some issues, no ADDS tools (found an article that says the powershell doesnโt add them like the wizard does, so added them manually)
I had other strange issues, couldnโt authorize DHCP server, error said it couldnโt find AD. Couldnโt edit the 2 default group policies. (edit greyed out)
I had a robocopy running so didnโt want to reboot until it was done but the good old โDid you reboot it?โ seems to have solved the strangeness. After a second reboot I can auth the dhcp and edit gp.
Hopefully this helps someone else!
7. I am installing Windows 2012 Essentials that came with my new Dell R320 server and I can perform the Cancel as you describe. It only has back and next.
What am I missing? I really need this to be a .com server. I had found steps to change it later, but I worry there will be lingering issues later.
Thoughts?
8. I am trying to use your script, but I keep getting an error message saying Start-WSSConfiguraitionService does not exist.
I am doing it a bit different. my install CD does not allow me to exit where yours does, so I was following Jamesโs post about doing it after the install and the restart. But as I said it keeps saying that it is not a valid program script.
1. You have to instal complete server from CD! You have to breake role instalation after the logon to the server is done.
9. Unfortunately, all I have in my server 2012 essentials installation is โbackโ and โnextโ. Why donโt I have cancel
1. You can exit from that windows in many moeds. You can close it, from task manager,โฆ
Just use one of them.
10. Elvis, Iโm still have same issue as Chris HallโฆI let the installation complete but at no point is accepting Start-Wssโฆcmdlet. Is it the difference between Essentials plain and Essentials R2?
1. If you are talking between Essentials 2012 and Essentials 2012R2, the answer is yes. There is a difference.
It should be done in different way on server 2012 โ answer file.
11. What am I missing?
A .local domain is simply a nuisance for Macs. I could have a public domain name acme.com, name the internal DNS domain acme.lan, have a server on the local network named mail.acme.lan, buy a certificate for it named mail.acme.com, with a firewall that routes the mail ports to it, and a resource record in the public acme.com zone that points to the firewall.
Using a public internal domain name, I could sub it using the identifier of the closest airport, such as lax.acme.com, with mail on the local network hosting a certificate named mail.lax.acme.com. I still need to add a resource record in the public acme.com zone for mail.lax.acme.com that points to the firewall. I probably wouldnโt want to name something used publicly, internal.acme.com or lan.acme.com, nor would I want to expose the internal zone to the Internet.
I donโt know why anyone would want to want to buy a cert for mail.acme.lan even if they could. The problem is you cannot access it from anywhere but the local area network without getting a warning. On the other hand, a cert that uses an Internet routable name can be used anywhere, including on internal networks that use non-routable names such as .local and .lan. So other than the Apple issue with .local, I donโt see where it makes much difference, and something like acme.lan is pretty simple.
So help me out, what am I missing?
;;
1. Hi,
With .local domain you will have problems with MACโs (for now) and with public certificate โ exactly as you mentioned. I donโt know what exactly you want to tell me with mail.acme.lan certificate.
The answer is simple: As is described in best practices, one of the solutions for internal domain name is subdomain of existing external domain (for example internal.acme.com). You need a certificate for access to internal website true https and must have the same CN that you published in external DNS servers. It should be whatever you want โ like myoffice.acme.com, but the Essentials server wizard will create the cone for that record (in my case myoffice.acme.com) with root A record. This record is needed to resolve this DNS name from LAN.
Hope this answer will help you.
12. What I meant is 3rd party certificates do not need to match the server name nor internal network name. It can be anything you like, including your external internet domain name.
Thanks!
13. Thank you for taking the time to talk through this. It works either way whether you do a .lan or a .com. However, I am seeing the merit of the sub domain now.
โ With the .lan approach, you can have a server named server.acme.lan internally, and install a certificate on it named server.acme.com. Internal and external users can do an NS lookup on server.acme.com, which is a single resource record on the public dns, which will return the IP of the Internet router, and be NATed to server.acme.lan. Since the cert matches, everyone is happy. However, the router is involved to NAT or consulted for the internal address. If you lose the connection to the outside, you can no longer resolve the local resource.
โ With the subdomain approach, you have a server named server.lax.acme.com, a cert named server.lax.acme.com, and two resource records, one private and one public. The private one contains โserverโ and the private IP address, and the public DNS faking it with a resource named โserver.laxโ which associates it with a company public address that usually gets NATed to the internal address. This requires two resource records, like a typical split-brain. The flaw in my thinking is I was thinking delegation, which canโt work as-is, and exposes the private DNS. Itโs true, with the sub domain method you have to maintain two resource records like a split brain, but local users are not dependent on the public zone to resolve local shared resources, and the same resource has the same FQDN everywhere, so itโs still a better way to go.
Iโve done split-brain before also. The advantage of the sub domain method is that it is much simpler to prevent multiple resources from having the the same FQDN, and the DNS can be self-documenting if you use the location for the sub domain.
This all strengthens what you wrote. It is not possible to look intelligent while making an argument for .local. Not allowing users to spec the intenal domain name is also indefensible. Iโve decided to use the location for the sub domain from now on. Moreover, just because it is Essentials doesnโt mean that it will not grow into Standard or Enterprise.
Thanks!
14. The easiest solution is to exit the configuration wizard when it starts.
Open server manager and add the AD role.
Configure AD as you would any new domain
reboot the server
Now when the essentails wizard starts, it will tell you that the domain already exist and configure essentials for that domain.
15. So why did Microsoft design server essentials to set itself up in this way? What issues will I face leaving the default domain configuration? Can you explain a little more about how this will effect applications using certificates as mentioned?
16. Thanks for guiding it worked after little tweaking the command.
The server essential would take too long at the time of adding a next user after 75th user.
Virtually you can not add 76th user through the essential dashboard.
I still looking the way where I can avoid installing server essential. I guess we could have better control if we go all without essential. But essential gets the server ready very quickly without knowing much configuration tasks.
1. Well, you can use standard server with Essentials role. This will make possible to have more than 75 users and essentials functionalities.
You can use also Active directory users and computers for managing users.
17. Hello there,
I have tried your script in order to move away from .local.
However, I keep on getting this error on powershell.
Command used :
Start-WssConfigurationService -CompanyName โDECAโ -DNSName โdc.decacalgary.comโ -NetBiosName โDECAโ -ComputerName โSV-DECAโ โNewAdminCredential $cred -Setting All
Error :
Start-WssConfigurationService : Type a different name
At line:1 char:1
+ Start-WssConfigurationService -CompanyName โDECAโ -DNSName โdc.decacalgaryโ -Net โฆ
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (administrator:String) [Start-WssConfigurationService], ArgumentExcepti
on
+ FullyQualifiedErrorId : ValidatorUserUniqueInfo,Microsoft.WindowsServerSolutions.Setup.Commands.InvokeEssentials
ConfigureServiceCommand
Can some body please help me, what I am doing wrong ? Thanks
1. It should work. As I can see in your reply, I suspect to quotes.
Sometimes just with copy and paste can be wrong character.
18. @Kapil Bharwdwaj please enter some other username (not administrator) when it ask for credential after executing the script.
@elvis I am installing the role. Yes it allows you to add more than 75 users but practically it takes to long to add users after 75th count.
19. First off, I am not a professional IT person. I purchased a server from Dell for our extremely small business. It came with WS 2012 R2 Essentials already installed. Please note, the main reason I got this server was for a design software we use that must utilize SQL Server in order for all the workstations to share the data files. The software company came to set things up for me and said we canโt install SQL Server 2014 on a domain controller (still a bit over my head as to why). My question is as follows:
โ Is there a way to format the server and reinstall WS 2012 R2 E without making it a domain controller so the SQL Server 2014 will work?
1. Hi Josh,
To be honest it is not a good idea to install SQL server on domain controller and I prefer to add a second server in environment for DB server.
But if you are really a small organization and you have small SQL DB, you can install Express edition on Essentials. It is not the same as standard edition and some software cannot use this edition; you will have some limitations like 10GB size of database โ anyway in many cases is OK. Ask them for this option.
If you can format this server? I donโt know as I am not working with Dell servers but I suppose yes. Your license in this case is for Essentials server and also if you can format and reinstall the server, will be just new Essentials server with the same roles and same functions โ so nothing will change, you will still have domain controller. Essentials server must be domain controller.
Elvis
20. This did not work. I tried the command for windows server 2016 essentials and it simply did not work. Errors like company name exists. invalid admin string. will this work on server essentials 2016?
Discuss
Your email address will not be published. Required fields are marked *
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
__label__pos
| 0.617945 |
the sky is so f***ingย lazy
Theories on Everything activated the human scheme,ย simplified natureโs intentions of order. Applied to metal and microchips, mortal person was able to sit at the edge of his bed, spine curved to center vision on the cellphone held in his steady hand, researching the Johnstown Flood of 1889.ย To compare the usefulness of inventions toContinue reading โthe sky is so f***ingย lazyโ
|
__label__pos
| 0.835225 |
May 17, 2020
What is programming
Inย simple words programming is a way to instruct the computer to perform various tasks.
Programming is the process of designing and building an executable computer program to accomplish a specific computing result.
That means you can use programming languagesย like C, C++, java, python etc. Programming languages to create list of instructions for a computer to follow.
This means that you can provide the computer a set of instructions that are Written in any programming languages i.e, C, C++, java, python etc..etc. that computer can understand easily.
You can give any of the instructions to computer using programming languages For Example:
Like this lot of instructions you can give to a computer.
Using lot of programming languages.
No comments:
Post a Comment
If you have any doubts, please discuss here...๐
|
__label__pos
| 0.80069 |
Mimer SQL Data Provider help
Changed Functions
Mimer SQL Data Provider > Overview > Release Notes > Changed Functions
Changed functions in Mimer SQL Data Provider 10.1.2.14 (10.1.2A):
1. New installer technology used for installation. WIX is used instead of Visual Studio Deployment projects.
Changed functions in Mimer SQL Data Provider 10.1.0.32 and 10.1.0.34 (10.1.0C):
1. List returned byย MimerConnection.GetSchema("ReservedWords")ย updated with reserved wordsย in version 10.1 of Mimer SQL.
Changed functions in Mimer SQL Data Provider 10.1.0.12 and 10.1.0.14 (10.0.1A):
1. The column IsUnique and IsKeyย in the result set returned by MimerDataReader.GetSchemaTableย has a slightly different meaning. IsUnique is now true ifย the single column is unique by itself. IsKey is true if the column is part of a unique key, unique index, or a primary key consisting of one (IsUnique is also true in this case) or several columns.
Changed functions in Mimer SQL Data Provider 9.4.4.1 (9.4.4A):
1. When running the Mimer SQL Data Provider in 64-bit mode and connecting with protocol local, namedpipes, or rapi the system loads the dynamic link library mimcomm64.dll.ย In 32-bit mode the dynamic link library mimcomm.dll is still used. This allows both of the dll-files to be present in the path, while stillย allowing the correct library to beย loaded depending on if 32-bit or 64-bit mode is used. The mimcomm64.dll is distributed with Mimer SQL version 10.0.6 and later.
2. The MimerDataSourceEnumeratorย can now enumerate data sources for both 32-bit and 64-bit installations of Mimer SQL irregardless if the .NET executable is running in 32-bit or 64-bit mode. To allow a 32-bit ADO.NET execution to enumerate data sources for a 64-bit installation it is necessary for the mimcomm.dll to be present as managed 32-bit .NETย applications do not have access to the 64-bit registry where this information is found.
Changed functions in Mimer SQL Data Provider 9.4.3.1 (9.4.3A):
1. Performance improvement in several MimerDataReaderย GetXxx methods such as GetString and GetDateTime.
2. When changing MimerCommand.CommandTypeย property to the same value, the SQL statement is no longer recompiled.
Changed functions in Mimer SQL Data Provider 9.4.2.1 (9.4.2A):
1. New column SPECIFIC_NAME for MimerConnection.GetSchema row sets that return procedure and function information.
Changed functions in Mimer SQL Data Provider 9.4.1.2 (9.4.1B):
1. DataTable returned by MimerConnection.GetSchema("Triggers") no longer returns the column CREATED. This change is made for consistency with other GetSchema result sets.
2. GetSchema collection "Schemas" no longer supports any restrictions.
3. Restriction names in GetSchema collection "Restrictions" has been made more consistent.
Changed functions in Mimer SQL Data Provider 9.4.1.1 (9.4.1A):
1. MimerDataSourceEnumerator.GetDataSourcesย returns a DataTable where the column DatabaseName is now called Database for consistency with other parts of the product.
2. Improved property sheet supportย for drop down lists for Database and Protocol when building connection strings.
Changed functions in Mimer SQL Data Provider 9.4.0.1 (9.4.0A):
1. To allow for proper integration with Visual Studio 2005 is has been necessary to use the same namespace in both the .NET Framework version and the .NET Compact Framework version provider. This means that all applications using the Compact Framework provider must be adjusted when upgrading to this version. Perform the following steps:
2. Change all using clauses and full references to classes:
ย
Old construction Replace with
Mimer.Data.CompactClient using Mimer.Data.Client
|
__label__pos
| 0.950477 |
blob: 339701d7a9a5069ccce00f2f171f923073d2c3b5 [file] [log] [blame]
/*
* Copyright ยฉ 2008-2010 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
* Authors:
* Eric Anholt <[email protected]>
* Zou Nan hai <[email protected]>
* Xiang Hai hao<[email protected]>
*
*/
#include <linux/log2.h>
#include <drm/drmP.h>
#include "i915_drv.h"
#include <drm/i915_drm.h>
#include "i915_trace.h"
#include "intel_drv.h"
int __intel_ring_space(int head, int tail, int size)
{
int space = head - tail;
if (space <= 0)
space += size;
return space - I915_RING_FREE_SPACE;
}
void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
{
if (ringbuf->last_retired_head != -1) {
ringbuf->head = ringbuf->last_retired_head;
ringbuf->last_retired_head = -1;
}
ringbuf->space = __intel_ring_space(ringbuf->head & HEAD_ADDR,
ringbuf->tail, ringbuf->size);
}
int intel_ring_space(struct intel_ringbuffer *ringbuf)
{
intel_ring_update_space(ringbuf);
return ringbuf->space;
}
bool intel_ring_stopped(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
return dev_priv->gpu_error.stop_rings & intel_ring_flag(ring);
}
static void __intel_ring_advance(struct intel_engine_cs *ring)
{
struct intel_ringbuffer *ringbuf = ring->buffer;
ringbuf->tail &= ringbuf->size - 1;
if (intel_ring_stopped(ring))
return;
ring->write_tail(ring, ringbuf->tail);
}
static int
gen2_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
struct intel_engine_cs *ring = req->ring;
u32 cmd;
int ret;
cmd = MI_FLUSH;
if (((invalidate_domains|flush_domains) & I915_GEM_DOMAIN_RENDER) == 0)
cmd |= MI_NO_WRITE_FLUSH;
if (invalidate_domains & I915_GEM_DOMAIN_SAMPLER)
cmd |= MI_READ_FLUSH;
ret = intel_ring_begin(req, 2);
if (ret)
return ret;
intel_ring_emit(ring, cmd);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
static int
gen4_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
struct intel_engine_cs *ring = req->ring;
struct drm_device *dev = ring->dev;
u32 cmd;
int ret;
/*
* read/write caches:
*
* I915_GEM_DOMAIN_RENDER is always invalidated, but is
* only flushed if MI_NO_WRITE_FLUSH is unset. On 965, it is
* also flushed at 2d versus 3d pipeline switches.
*
* read-only caches:
*
* I915_GEM_DOMAIN_SAMPLER is flushed on pre-965 if
* MI_READ_FLUSH is set, and is always flushed on 965.
*
* I915_GEM_DOMAIN_COMMAND may not exist?
*
* I915_GEM_DOMAIN_INSTRUCTION, which exists on 965, is
* invalidated when MI_EXE_FLUSH is set.
*
* I915_GEM_DOMAIN_VERTEX, which exists on 965, is
* invalidated with every MI_FLUSH.
*
* TLBs:
*
* On 965, TLBs associated with I915_GEM_DOMAIN_COMMAND
* and I915_GEM_DOMAIN_CPU in are invalidated at PTE write and
* I915_GEM_DOMAIN_RENDER and I915_GEM_DOMAIN_SAMPLER
* are flushed at any MI_FLUSH.
*/
cmd = MI_FLUSH | MI_NO_WRITE_FLUSH;
if ((invalidate_domains|flush_domains) & I915_GEM_DOMAIN_RENDER)
cmd &= ~MI_NO_WRITE_FLUSH;
if (invalidate_domains & I915_GEM_DOMAIN_INSTRUCTION)
cmd |= MI_EXE_FLUSH;
if (invalidate_domains & I915_GEM_DOMAIN_COMMAND &&
(IS_G4X(dev) || IS_GEN5(dev)))
cmd |= MI_INVALIDATE_ISP;
ret = intel_ring_begin(req, 2);
if (ret)
return ret;
intel_ring_emit(ring, cmd);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
/**
* Emits a PIPE_CONTROL with a non-zero post-sync operation, for
* implementing two workarounds on gen6. From section 1.4.7.1
* "PIPE_CONTROL" of the Sandy Bridge PRM volume 2 part 1:
*
* [DevSNB-C+{W/A}] Before any depth stall flush (including those
* produced by non-pipelined state commands), software needs to first
* send a PIPE_CONTROL with no bits set except Post-Sync Operation !=
* 0.
*
* [Dev-SNB{W/A}]: Before a PIPE_CONTROL with Write Cache Flush Enable
* =1, a PIPE_CONTROL with any non-zero post-sync-op is required.
*
* And the workaround for these two requires this workaround first:
*
* [Dev-SNB{W/A}]: Pipe-control with CS-stall bit set must be sent
* BEFORE the pipe-control with a post-sync op and no write-cache
* flushes.
*
* And this last workaround is tricky because of the requirements on
* that bit. From section 1.4.7.2.3 "Stall" of the Sandy Bridge PRM
* volume 2 part 1:
*
* "1 of the following must also be set:
* - Render Target Cache Flush Enable ([12] of DW1)
* - Depth Cache Flush Enable ([0] of DW1)
* - Stall at Pixel Scoreboard ([1] of DW1)
* - Depth Stall ([13] of DW1)
* - Post-Sync Operation ([13] of DW1)
* - Notify Enable ([8] of DW1)"
*
* The cache flushes require the workaround flush that triggered this
* one, so we can't use it. Depth stall would trigger the same.
* Post-sync nonzero is what triggered this second workaround, so we
* can't use that one either. Notify enable is IRQs, which aren't
* really our business. That leaves only stall at scoreboard.
*/
static int
intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
ret = intel_ring_begin(req, 6);
if (ret)
return ret;
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(5));
intel_ring_emit(ring, PIPE_CONTROL_CS_STALL |
PIPE_CONTROL_STALL_AT_SCOREBOARD);
intel_ring_emit(ring, scratch_addr | PIPE_CONTROL_GLOBAL_GTT); /* address */
intel_ring_emit(ring, 0); /* low dword */
intel_ring_emit(ring, 0); /* high dword */
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
ret = intel_ring_begin(req, 6);
if (ret)
return ret;
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(5));
intel_ring_emit(ring, PIPE_CONTROL_QW_WRITE);
intel_ring_emit(ring, scratch_addr | PIPE_CONTROL_GLOBAL_GTT); /* address */
intel_ring_emit(ring, 0);
intel_ring_emit(ring, 0);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
static int
gen6_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
struct intel_engine_cs *ring = req->ring;
u32 flags = 0;
u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
/* Force SNB workarounds for PIPE_CONTROL flushes */
ret = intel_emit_post_sync_nonzero_flush(req);
if (ret)
return ret;
/* Just flush everything. Experiments have shown that reducing the
* number of bits based on the write domains has little performance
* impact.
*/
if (flush_domains) {
flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
/*
* Ensure that any following seqno writes only happen
* when the render cache is indeed flushed.
*/
flags |= PIPE_CONTROL_CS_STALL;
}
if (invalidate_domains) {
flags |= PIPE_CONTROL_TLB_INVALIDATE;
flags |= PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
/*
* TLB invalidate requires a post-sync write.
*/
flags |= PIPE_CONTROL_QW_WRITE | PIPE_CONTROL_CS_STALL;
}
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4));
intel_ring_emit(ring, flags);
intel_ring_emit(ring, scratch_addr | PIPE_CONTROL_GLOBAL_GTT);
intel_ring_emit(ring, 0);
intel_ring_advance(ring);
return 0;
}
static int
gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4));
intel_ring_emit(ring, PIPE_CONTROL_CS_STALL |
PIPE_CONTROL_STALL_AT_SCOREBOARD);
intel_ring_emit(ring, 0);
intel_ring_emit(ring, 0);
intel_ring_advance(ring);
return 0;
}
static int
gen7_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
struct intel_engine_cs *ring = req->ring;
u32 flags = 0;
u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
/*
* Ensure that any following seqno writes only happen when the render
* cache is indeed flushed.
*
* Workaround: 4th PIPE_CONTROL command (except the ones with only
* read-cache invalidate bits set) must have the CS_STALL bit set. We
* don't try to be clever and just set it unconditionally.
*/
flags |= PIPE_CONTROL_CS_STALL;
/* Just flush everything. Experiments have shown that reducing the
* number of bits based on the write domains has little performance
* impact.
*/
if (flush_domains) {
flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
flags |= PIPE_CONTROL_FLUSH_ENABLE;
}
if (invalidate_domains) {
flags |= PIPE_CONTROL_TLB_INVALIDATE;
flags |= PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_MEDIA_STATE_CLEAR;
/*
* TLB invalidate requires a post-sync write.
*/
flags |= PIPE_CONTROL_QW_WRITE;
flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
flags |= PIPE_CONTROL_STALL_AT_SCOREBOARD;
/* Workaround: we must issue a pipe_control with CS-stall bit
* set before a pipe_control command that has the state cache
* invalidate bit set. */
gen7_render_ring_cs_stall_wa(req);
}
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4));
intel_ring_emit(ring, flags);
intel_ring_emit(ring, scratch_addr);
intel_ring_emit(ring, 0);
intel_ring_advance(ring);
return 0;
}
static int
gen8_emit_pipe_control(struct drm_i915_gem_request *req,
u32 flags, u32 scratch_addr)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 6);
if (ret)
return ret;
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
intel_ring_emit(ring, flags);
intel_ring_emit(ring, scratch_addr);
intel_ring_emit(ring, 0);
intel_ring_emit(ring, 0);
intel_ring_emit(ring, 0);
intel_ring_advance(ring);
return 0;
}
static int
gen8_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
u32 flags = 0;
u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
flags |= PIPE_CONTROL_CS_STALL;
if (flush_domains) {
flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
flags |= PIPE_CONTROL_FLUSH_ENABLE;
}
if (invalidate_domains) {
flags |= PIPE_CONTROL_TLB_INVALIDATE;
flags |= PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
flags |= PIPE_CONTROL_QW_WRITE;
flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
/* WaCsStallBeforeStateCacheInvalidate:bdw,chv */
ret = gen8_emit_pipe_control(req,
PIPE_CONTROL_CS_STALL |
PIPE_CONTROL_STALL_AT_SCOREBOARD,
0);
if (ret)
return ret;
}
return gen8_emit_pipe_control(req, flags, scratch_addr);
}
static void ring_write_tail(struct intel_engine_cs *ring,
u32 value)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
I915_WRITE_TAIL(ring, value);
}
u64 intel_ring_get_active_head(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
u64 acthd;
if (INTEL_INFO(ring->dev)->gen >= 8)
acthd = I915_READ64_2x32(RING_ACTHD(ring->mmio_base),
RING_ACTHD_UDW(ring->mmio_base));
else if (INTEL_INFO(ring->dev)->gen >= 4)
acthd = I915_READ(RING_ACTHD(ring->mmio_base));
else
acthd = I915_READ(ACTHD);
return acthd;
}
static void ring_setup_phys_status_page(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
u32 addr;
addr = dev_priv->status_page_dmah->busaddr;
if (INTEL_INFO(ring->dev)->gen >= 4)
addr |= (dev_priv->status_page_dmah->busaddr >> 28) & 0xf0;
I915_WRITE(HWS_PGA, addr);
}
static void intel_ring_setup_status_page(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = ring->dev->dev_private;
i915_reg_t mmio;
/* The ring status page addresses are no longer next to the rest of
* the ring registers as of gen7.
*/
if (IS_GEN7(dev)) {
switch (ring->id) {
case RCS:
mmio = RENDER_HWS_PGA_GEN7;
break;
case BCS:
mmio = BLT_HWS_PGA_GEN7;
break;
/*
* VCS2 actually doesn't exist on Gen7. Only shut up
* gcc switch check warning
*/
case VCS2:
case VCS:
mmio = BSD_HWS_PGA_GEN7;
break;
case VECS:
mmio = VEBOX_HWS_PGA_GEN7;
break;
}
} else if (IS_GEN6(ring->dev)) {
mmio = RING_HWS_PGA_GEN6(ring->mmio_base);
} else {
/* XXX: gen8 returns to sanity */
mmio = RING_HWS_PGA(ring->mmio_base);
}
I915_WRITE(mmio, (u32)ring->status_page.gfx_addr);
POSTING_READ(mmio);
/*
* Flush the TLB for this page
*
* FIXME: These two bits have disappeared on gen8, so a question
* arises: do we still need this and if so how should we go about
* invalidating the TLB?
*/
if (INTEL_INFO(dev)->gen >= 6 && INTEL_INFO(dev)->gen < 8) {
i915_reg_t reg = RING_INSTPM(ring->mmio_base);
/* ring should be idle before issuing a sync flush*/
WARN_ON((I915_READ_MODE(ring) & MODE_IDLE) == 0);
I915_WRITE(reg,
_MASKED_BIT_ENABLE(INSTPM_TLB_INVALIDATE |
INSTPM_SYNC_FLUSH));
if (wait_for((I915_READ(reg) & INSTPM_SYNC_FLUSH) == 0,
1000))
DRM_ERROR("%s: wait for SyncFlush to complete for TLB invalidation timed out\n",
ring->name);
}
}
static bool stop_ring(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = to_i915(ring->dev);
if (!IS_GEN2(ring->dev)) {
I915_WRITE_MODE(ring, _MASKED_BIT_ENABLE(STOP_RING));
if (wait_for((I915_READ_MODE(ring) & MODE_IDLE) != 0, 1000)) {
DRM_ERROR("%s : timed out trying to stop ring\n", ring->name);
/* Sometimes we observe that the idle flag is not
* set even though the ring is empty. So double
* check before giving up.
*/
if (I915_READ_HEAD(ring) != I915_READ_TAIL(ring))
return false;
}
}
I915_WRITE_CTL(ring, 0);
I915_WRITE_HEAD(ring, 0);
ring->write_tail(ring, 0);
if (!IS_GEN2(ring->dev)) {
(void)I915_READ_CTL(ring);
I915_WRITE_MODE(ring, _MASKED_BIT_DISABLE(STOP_RING));
}
return (I915_READ_HEAD(ring) & HEAD_ADDR) == 0;
}
static int init_ring_common(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_ringbuffer *ringbuf = ring->buffer;
struct drm_i915_gem_object *obj = ringbuf->obj;
int ret = 0;
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
if (!stop_ring(ring)) {
/* G45 ring initialization often fails to reset head to zero */
DRM_DEBUG_KMS("%s head not reset to zero "
"ctl %08x head %08x tail %08x start %08x\n",
ring->name,
I915_READ_CTL(ring),
I915_READ_HEAD(ring),
I915_READ_TAIL(ring),
I915_READ_START(ring));
if (!stop_ring(ring)) {
DRM_ERROR("failed to set %s head to zero "
"ctl %08x head %08x tail %08x start %08x\n",
ring->name,
I915_READ_CTL(ring),
I915_READ_HEAD(ring),
I915_READ_TAIL(ring),
I915_READ_START(ring));
ret = -EIO;
goto out;
}
}
if (I915_NEED_GFX_HWS(dev))
intel_ring_setup_status_page(ring);
else
ring_setup_phys_status_page(ring);
/* Enforce ordering by reading HEAD register back */
I915_READ_HEAD(ring);
/* Initialize the ring. This must happen _after_ we've cleared the ring
* registers with the above sequence (the readback of the HEAD registers
* also enforces ordering), otherwise the hw might lose the new ring
* register values. */
I915_WRITE_START(ring, i915_gem_obj_ggtt_offset(obj));
/* WaClearRingBufHeadRegAtInit:ctg,elk */
if (I915_READ_HEAD(ring))
DRM_DEBUG("%s initialization failed [head=%08x], fudging\n",
ring->name, I915_READ_HEAD(ring));
I915_WRITE_HEAD(ring, 0);
(void)I915_READ_HEAD(ring);
I915_WRITE_CTL(ring,
((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES)
| RING_VALID);
/* If the head is still not zero, the ring is dead */
if (wait_for((I915_READ_CTL(ring) & RING_VALID) != 0 &&
I915_READ_START(ring) == i915_gem_obj_ggtt_offset(obj) &&
(I915_READ_HEAD(ring) & HEAD_ADDR) == 0, 50)) {
DRM_ERROR("%s initialization failed "
"ctl %08x (valid? %d) head %08x tail %08x start %08x [expected %08lx]\n",
ring->name,
I915_READ_CTL(ring), I915_READ_CTL(ring) & RING_VALID,
I915_READ_HEAD(ring), I915_READ_TAIL(ring),
I915_READ_START(ring), (unsigned long)i915_gem_obj_ggtt_offset(obj));
ret = -EIO;
goto out;
}
ringbuf->last_retired_head = -1;
ringbuf->head = I915_READ_HEAD(ring);
ringbuf->tail = I915_READ_TAIL(ring) & TAIL_ADDR;
intel_ring_update_space(ringbuf);
memset(&ring->hangcheck, 0, sizeof(ring->hangcheck));
out:
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
return ret;
}
void
intel_fini_pipe_control(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
if (ring->scratch.obj == NULL)
return;
if (INTEL_INFO(dev)->gen >= 5) {
kunmap(sg_page(ring->scratch.obj->pages->sgl));
i915_gem_object_ggtt_unpin(ring->scratch.obj);
}
drm_gem_object_unreference(&ring->scratch.obj->base);
ring->scratch.obj = NULL;
}
int
intel_init_pipe_control(struct intel_engine_cs *ring)
{
int ret;
WARN_ON(ring->scratch.obj);
ring->scratch.obj = i915_gem_alloc_object(ring->dev, 4096);
if (ring->scratch.obj == NULL) {
DRM_ERROR("Failed to allocate seqno page\n");
ret = -ENOMEM;
goto err;
}
ret = i915_gem_object_set_cache_level(ring->scratch.obj, I915_CACHE_LLC);
if (ret)
goto err_unref;
ret = i915_gem_obj_ggtt_pin(ring->scratch.obj, 4096, 0);
if (ret)
goto err_unref;
ring->scratch.gtt_offset = i915_gem_obj_ggtt_offset(ring->scratch.obj);
ring->scratch.cpu_page = kmap(sg_page(ring->scratch.obj->pages->sgl));
if (ring->scratch.cpu_page == NULL) {
ret = -ENOMEM;
goto err_unpin;
}
DRM_DEBUG_DRIVER("%s pipe control offset: 0x%08x\n",
ring->name, ring->scratch.gtt_offset);
return 0;
err_unpin:
i915_gem_object_ggtt_unpin(ring->scratch.obj);
err_unref:
drm_gem_object_unreference(&ring->scratch.obj->base);
err:
return ret;
}
static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
int ret, i;
struct intel_engine_cs *ring = req->ring;
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_workarounds *w = &dev_priv->workarounds;
if (w->count == 0)
return 0;
ring->gpu_caches_dirty = true;
ret = intel_ring_flush_all_caches(req);
if (ret)
return ret;
ret = intel_ring_begin(req, (w->count * 2 + 2));
if (ret)
return ret;
intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
for (i = 0; i < w->count; i++) {
intel_ring_emit_reg(ring, w->reg[i].addr);
intel_ring_emit(ring, w->reg[i].value);
}
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
ring->gpu_caches_dirty = true;
ret = intel_ring_flush_all_caches(req);
if (ret)
return ret;
DRM_DEBUG_DRIVER("Number of Workarounds emitted: %d\n", w->count);
return 0;
}
static int intel_rcs_ctx_init(struct drm_i915_gem_request *req)
{
int ret;
ret = intel_ring_workarounds_emit(req);
if (ret != 0)
return ret;
ret = i915_gem_render_state_init(req);
if (ret)
DRM_ERROR("init render state: %d\n", ret);
return ret;
}
static int wa_add(struct drm_i915_private *dev_priv,
i915_reg_t addr,
const u32 mask, const u32 val)
{
const u32 idx = dev_priv->workarounds.count;
if (WARN_ON(idx >= I915_MAX_WA_REGS))
return -ENOSPC;
dev_priv->workarounds.reg[idx].addr = addr;
dev_priv->workarounds.reg[idx].value = val;
dev_priv->workarounds.reg[idx].mask = mask;
dev_priv->workarounds.count++;
return 0;
}
#define WA_REG(addr, mask, val) do { \
const int r = wa_add(dev_priv, (addr), (mask), (val)); \
if (r) \
return r; \
} while (0)
#define WA_SET_BIT_MASKED(addr, mask) \
WA_REG(addr, (mask), _MASKED_BIT_ENABLE(mask))
#define WA_CLR_BIT_MASKED(addr, mask) \
WA_REG(addr, (mask), _MASKED_BIT_DISABLE(mask))
#define WA_SET_FIELD_MASKED(addr, mask, value) \
WA_REG(addr, mask, _MASKED_FIELD(mask, value))
#define WA_SET_BIT(addr, mask) WA_REG(addr, mask, I915_READ(addr) | (mask))
#define WA_CLR_BIT(addr, mask) WA_REG(addr, mask, I915_READ(addr) & ~(mask))
#define WA_WRITE(addr, val) WA_REG(addr, 0xffffffff, val)
static int gen8_init_workarounds(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
WA_SET_BIT_MASKED(INSTPM, INSTPM_FORCE_ORDERING);
/* WaDisableAsyncFlipPerfMode:bdw,chv */
WA_SET_BIT_MASKED(MI_MODE, ASYNC_FLIP_PERF_DISABLE);
/* WaDisablePartialInstShootdown:bdw,chv */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN,
PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE);
/* Use Force Non-Coherent whenever executing a 3D context. This is a
* workaround for for a possible hang in the unlikely event a TLB
* invalidation occurs during a PSD flush.
*/
/* WaForceEnableNonCoherent:bdw,chv */
/* WaHdcDisableFetchWhenMasked:bdw,chv */
WA_SET_BIT_MASKED(HDC_CHICKEN0,
HDC_DONOT_FETCH_MEM_WHEN_MASKED |
HDC_FORCE_NON_COHERENT);
/* From the Haswell PRM, Command Reference: Registers, CACHE_MODE_0:
* "The Hierarchical Z RAW Stall Optimization allows non-overlapping
* polygons in the same 8x4 pixel/sample area to be processed without
* stalling waiting for the earlier ones to write to Hierarchical Z
* buffer."
*
* This optimization is off by default for BDW and CHV; turn it on.
*/
WA_CLR_BIT_MASKED(CACHE_MODE_0_GEN7, HIZ_RAW_STALL_OPT_DISABLE);
/* Wa4x4STCOptimizationDisable:bdw,chv */
WA_SET_BIT_MASKED(CACHE_MODE_1, GEN8_4x4_STC_OPTIMIZATION_DISABLE);
/*
* BSpec recommends 8x4 when MSAA is used,
* however in practice 16x4 seems fastest.
*
* Note that PS/WM thread counts depend on the WIZ hashing
* disable bit, which we don't touch here, but it's good
* to keep in mind (see 3DSTATE_PS and 3DSTATE_WM).
*/
WA_SET_FIELD_MASKED(GEN7_GT_MODE,
GEN6_WIZ_HASHING_MASK,
GEN6_WIZ_HASHING_16x4);
return 0;
}
static int bdw_init_workarounds(struct intel_engine_cs *ring)
{
int ret;
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
ret = gen8_init_workarounds(ring);
if (ret)
return ret;
/* WaDisableThreadStallDopClockGating:bdw (pre-production) */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, STALL_DOP_GATING_DISABLE);
/* WaDisableDopClockGating:bdw */
WA_SET_BIT_MASKED(GEN7_ROW_CHICKEN2,
DOP_CLOCK_GATING_DISABLE);
WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3,
GEN8_SAMPLER_POWER_BYPASS_DIS);
WA_SET_BIT_MASKED(HDC_CHICKEN0,
/* WaForceContextSaveRestoreNonCoherent:bdw */
HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT |
/* WaDisableFenceDestinationToSLM:bdw (pre-prod) */
(IS_BDW_GT3(dev) ? HDC_FENCE_DEST_SLM_DISABLE : 0));
return 0;
}
static int chv_init_workarounds(struct intel_engine_cs *ring)
{
int ret;
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
ret = gen8_init_workarounds(ring);
if (ret)
return ret;
/* WaDisableThreadStallDopClockGating:chv */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN, STALL_DOP_GATING_DISABLE);
/* Improve HiZ throughput on CHV. */
WA_SET_BIT_MASKED(HIZ_CHICKEN, CHV_HZ_8X8_MODE_IN_1X);
return 0;
}
static int gen9_init_workarounds(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t tmp;
/* WaEnableLbsSlaRetryTimerDecrement:skl */
I915_WRITE(BDW_SCRATCH1, I915_READ(BDW_SCRATCH1) |
GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE);
/* WaDisableKillLogic:bxt,skl */
I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
ECOCHK_DIS_TLB);
/* WaDisablePartialInstShootdown:skl,bxt */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN,
PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE);
/* Syncing dependencies between camera and graphics:skl,bxt */
WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3,
GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC);
/* WaDisableDgMirrorFixInHalfSliceChicken5:skl,bxt */
if (IS_SKL_REVID(dev, 0, SKL_REVID_B0) ||
IS_BXT_REVID(dev, 0, BXT_REVID_A1))
WA_CLR_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN5,
GEN9_DG_MIRROR_FIX_ENABLE);
/* WaSetDisablePixMaskCammingAndRhwoInCommonSliceChicken:skl,bxt */
if (IS_SKL_REVID(dev, 0, SKL_REVID_B0) ||
IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
WA_SET_BIT_MASKED(GEN7_COMMON_SLICE_CHICKEN1,
GEN9_RHWO_OPTIMIZATION_DISABLE);
/*
* WA also requires GEN9_SLICE_COMMON_ECO_CHICKEN0[14:14] to be set
* but we do that in per ctx batchbuffer as there is an issue
* with this register not getting restored on ctx restore
*/
}
/* WaEnableYV12BugFixInHalfSliceChicken7:skl,bxt */
if (IS_SKL_REVID(dev, SKL_REVID_C0, REVID_FOREVER) || IS_BROXTON(dev))
WA_SET_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN7,
GEN9_ENABLE_YV12_BUGFIX);
/* Wa4x4STCOptimizationDisable:skl,bxt */
/* WaDisablePartialResolveInVc:skl,bxt */
WA_SET_BIT_MASKED(CACHE_MODE_1, (GEN8_4x4_STC_OPTIMIZATION_DISABLE |
GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE));
/* WaCcsTlbPrefetchDisable:skl,bxt */
WA_CLR_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN5,
GEN9_CCS_TLB_PREFETCH_ENABLE);
/* WaDisableMaskBasedCammingInRCC:skl,bxt */
if (IS_SKL_REVID(dev, SKL_REVID_C0, SKL_REVID_C0) ||
IS_BXT_REVID(dev, 0, BXT_REVID_A1))
WA_SET_BIT_MASKED(SLICE_ECO_CHICKEN0,
PIXEL_MASK_CAMMING_DISABLE);
/* WaForceContextSaveRestoreNonCoherent:skl,bxt */
tmp = HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT;
if (IS_SKL_REVID(dev, SKL_REVID_F0, SKL_REVID_F0) ||
IS_BXT_REVID(dev, BXT_REVID_B0, REVID_FOREVER))
tmp |= HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE;
WA_SET_BIT_MASKED(HDC_CHICKEN0, tmp);
/* WaDisableSamplerPowerBypassForSOPingPong:skl,bxt */
if (IS_SKYLAKE(dev) || IS_BXT_REVID(dev, 0, BXT_REVID_B0))
WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3,
GEN8_SAMPLER_POWER_BYPASS_DIS);
/* WaDisableSTUnitPowerOptimization:skl,bxt */
WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN2, GEN8_ST_PO_DISABLE);
return 0;
}
static int skl_tune_iz_hashing(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
u8 vals[3] = { 0, 0, 0 };
unsigned int i;
for (i = 0; i < 3; i++) {
u8 ss;
/*
* Only consider slices where one, and only one, subslice has 7
* EUs
*/
if (!is_power_of_2(dev_priv->info.subslice_7eu[i]))
continue;
/*
* subslice_7eu[i] != 0 (because of the check above) and
* ss_max == 4 (maximum number of subslices possible per slice)
*
* -> 0 <= ss <= 3;
*/
ss = ffs(dev_priv->info.subslice_7eu[i]) - 1;
vals[i] = 3 - ss;
}
if (vals[0] == 0 && vals[1] == 0 && vals[2] == 0)
return 0;
/* Tune IZ hashing. See intel_device_info_runtime_init() */
WA_SET_FIELD_MASKED(GEN7_GT_MODE,
GEN9_IZ_HASHING_MASK(2) |
GEN9_IZ_HASHING_MASK(1) |
GEN9_IZ_HASHING_MASK(0),
GEN9_IZ_HASHING(2, vals[2]) |
GEN9_IZ_HASHING(1, vals[1]) |
GEN9_IZ_HASHING(0, vals[0]));
return 0;
}
static int skl_init_workarounds(struct intel_engine_cs *ring)
{
int ret;
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
ret = gen9_init_workarounds(ring);
if (ret)
return ret;
if (IS_SKL_REVID(dev, 0, SKL_REVID_D0)) {
/* WaDisableChickenBitTSGBarrierAckForFFSliceCS:skl */
I915_WRITE(FF_SLICE_CS_CHICKEN2,
_MASKED_BIT_ENABLE(GEN9_TSG_BARRIER_ACK_DISABLE));
}
/* GEN8_L3SQCREG4 has a dependency with WA batch so any new changes
* involving this register should also be added to WA batch as required.
*/
if (IS_SKL_REVID(dev, 0, SKL_REVID_E0))
/* WaDisableLSQCROPERFforOCL:skl */
I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) |
GEN8_LQSC_RO_PERF_DIS);
/* WaEnableGapsTsvCreditFix:skl */
if (IS_SKL_REVID(dev, SKL_REVID_C0, REVID_FOREVER)) {
I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) |
GEN9_GAPS_TSV_CREDIT_DISABLE));
}
/* WaDisablePowerCompilerClockGating:skl */
if (IS_SKL_REVID(dev, SKL_REVID_B0, SKL_REVID_B0))
WA_SET_BIT_MASKED(HIZ_CHICKEN,
BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE);
if (IS_SKL_REVID(dev, 0, SKL_REVID_F0)) {
/*
*Use Force Non-Coherent whenever executing a 3D context. This
* is a workaround for a possible hang in the unlikely event
* a TLB invalidation occurs during a PSD flush.
*/
/* WaForceEnableNonCoherent:skl */
WA_SET_BIT_MASKED(HDC_CHICKEN0,
HDC_FORCE_NON_COHERENT);
/* WaDisableHDCInvalidation:skl */
I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
BDW_DISABLE_HDC_INVALIDATION);
}
/* WaBarrierPerformanceFixDisable:skl */
if (IS_SKL_REVID(dev, SKL_REVID_C0, SKL_REVID_D0))
WA_SET_BIT_MASKED(HDC_CHICKEN0,
HDC_FENCE_DEST_SLM_DISABLE |
HDC_BARRIER_PERFORMANCE_DISABLE);
/* WaDisableSbeCacheDispatchPortSharing:skl */
if (IS_SKL_REVID(dev, 0, SKL_REVID_F0))
WA_SET_BIT_MASKED(
GEN7_HALF_SLICE_CHICKEN1,
GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE);
return skl_tune_iz_hashing(ring);
}
static int bxt_init_workarounds(struct intel_engine_cs *ring)
{
int ret;
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
ret = gen9_init_workarounds(ring);
if (ret)
return ret;
/* WaStoreMultiplePTEenable:bxt */
/* This is a requirement according to Hardware specification */
if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
I915_WRITE(TILECTL, I915_READ(TILECTL) | TILECTL_TLBPF);
/* WaSetClckGatingDisableMedia:bxt */
if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
I915_WRITE(GEN7_MISCCPCTL, (I915_READ(GEN7_MISCCPCTL) &
~GEN8_DOP_CLOCK_GATE_MEDIA_ENABLE));
}
/* WaDisableThreadStallDopClockGating:bxt */
WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN,
STALL_DOP_GATING_DISABLE);
/* WaDisableSbeCacheDispatchPortSharing:bxt */
if (IS_BXT_REVID(dev, 0, BXT_REVID_B0)) {
WA_SET_BIT_MASKED(
GEN7_HALF_SLICE_CHICKEN1,
GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE);
}
return 0;
}
int init_workarounds_ring(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
WARN_ON(ring->id != RCS);
dev_priv->workarounds.count = 0;
if (IS_BROADWELL(dev))
return bdw_init_workarounds(ring);
if (IS_CHERRYVIEW(dev))
return chv_init_workarounds(ring);
if (IS_SKYLAKE(dev))
return skl_init_workarounds(ring);
if (IS_BROXTON(dev))
return bxt_init_workarounds(ring);
return 0;
}
static int init_render_ring(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = init_ring_common(ring);
if (ret)
return ret;
/* WaTimedSingleVertexDispatch:cl,bw,ctg,elk,ilk,snb */
if (INTEL_INFO(dev)->gen >= 4 && INTEL_INFO(dev)->gen < 7)
I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(VS_TIMER_DISPATCH));
/* We need to disable the AsyncFlip performance optimisations in order
* to use MI_WAIT_FOR_EVENT within the CS. It should already be
* programmed to '1' on all products.
*
* WaDisableAsyncFlipPerfMode:snb,ivb,hsw,vlv
*/
if (INTEL_INFO(dev)->gen >= 6 && INTEL_INFO(dev)->gen < 8)
I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));
/* Required for the hardware to program scanline values for waiting */
/* WaEnableFlushTlbInvalidationMode:snb */
if (INTEL_INFO(dev)->gen == 6)
I915_WRITE(GFX_MODE,
_MASKED_BIT_ENABLE(GFX_TLB_INVALIDATE_EXPLICIT));
/* WaBCSVCSTlbInvalidationMode:ivb,vlv,hsw */
if (IS_GEN7(dev))
I915_WRITE(GFX_MODE_GEN7,
_MASKED_BIT_ENABLE(GFX_TLB_INVALIDATE_EXPLICIT) |
_MASKED_BIT_ENABLE(GFX_REPLAY_MODE));
if (IS_GEN6(dev)) {
/* From the Sandybridge PRM, volume 1 part 3, page 24:
* "If this bit is set, STCunit will have LRA as replacement
* policy. [...] This bit must be reset. LRA replacement
* policy is not supported."
*/
I915_WRITE(CACHE_MODE_0,
_MASKED_BIT_DISABLE(CM0_STC_EVICT_DISABLE_LRA_SNB));
}
if (INTEL_INFO(dev)->gen >= 6 && INTEL_INFO(dev)->gen < 8)
I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));
if (HAS_L3_DPF(dev))
I915_WRITE_IMR(ring, ~GT_PARITY_ERROR(dev));
return init_workarounds_ring(ring);
}
static void render_ring_cleanup(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
if (dev_priv->semaphore_obj) {
i915_gem_object_ggtt_unpin(dev_priv->semaphore_obj);
drm_gem_object_unreference(&dev_priv->semaphore_obj->base);
dev_priv->semaphore_obj = NULL;
}
intel_fini_pipe_control(ring);
}
static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 8
struct intel_engine_cs *signaller = signaller_req->ring;
struct drm_device *dev = signaller->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
num_rings = hweight32(INTEL_INFO(dev)->ring_mask);
num_dwords += (num_rings-1) * MBOX_UPDATE_DWORDS;
#undef MBOX_UPDATE_DWORDS
ret = intel_ring_begin(signaller_req, num_dwords);
if (ret)
return ret;
for_each_ring(waiter, dev_priv, i) {
u32 seqno;
u64 gtt_offset = signaller->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;
seqno = i915_gem_request_get_seqno(signaller_req);
intel_ring_emit(signaller, GFX_OP_PIPE_CONTROL(6));
intel_ring_emit(signaller, PIPE_CONTROL_GLOBAL_GTT_IVB |
PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_FLUSH_ENABLE);
intel_ring_emit(signaller, lower_32_bits(gtt_offset));
intel_ring_emit(signaller, upper_32_bits(gtt_offset));
intel_ring_emit(signaller, seqno);
intel_ring_emit(signaller, 0);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
MI_SEMAPHORE_TARGET(waiter->id));
intel_ring_emit(signaller, 0);
}
return 0;
}
static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 6
struct intel_engine_cs *signaller = signaller_req->ring;
struct drm_device *dev = signaller->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
num_rings = hweight32(INTEL_INFO(dev)->ring_mask);
num_dwords += (num_rings-1) * MBOX_UPDATE_DWORDS;
#undef MBOX_UPDATE_DWORDS
ret = intel_ring_begin(signaller_req, num_dwords);
if (ret)
return ret;
for_each_ring(waiter, dev_priv, i) {
u32 seqno;
u64 gtt_offset = signaller->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;
seqno = i915_gem_request_get_seqno(signaller_req);
intel_ring_emit(signaller, (MI_FLUSH_DW + 1) |
MI_FLUSH_DW_OP_STOREDW);
intel_ring_emit(signaller, lower_32_bits(gtt_offset) |
MI_FLUSH_DW_USE_GTT);
intel_ring_emit(signaller, upper_32_bits(gtt_offset));
intel_ring_emit(signaller, seqno);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
MI_SEMAPHORE_TARGET(waiter->id));
intel_ring_emit(signaller, 0);
}
return 0;
}
static int gen6_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
struct intel_engine_cs *signaller = signaller_req->ring;
struct drm_device *dev = signaller->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *useless;
int i, ret, num_rings;
#define MBOX_UPDATE_DWORDS 3
num_rings = hweight32(INTEL_INFO(dev)->ring_mask);
num_dwords += round_up((num_rings-1) * MBOX_UPDATE_DWORDS, 2);
#undef MBOX_UPDATE_DWORDS
ret = intel_ring_begin(signaller_req, num_dwords);
if (ret)
return ret;
for_each_ring(useless, dev_priv, i) {
i915_reg_t mbox_reg = signaller->semaphore.mbox.signal[i];
if (i915_mmio_reg_valid(mbox_reg)) {
u32 seqno = i915_gem_request_get_seqno(signaller_req);
intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
intel_ring_emit_reg(signaller, mbox_reg);
intel_ring_emit(signaller, seqno);
}
}
/* If num_dwords was rounded, make sure the tail pointer is correct */
if (num_rings % 2 == 0)
intel_ring_emit(signaller, MI_NOOP);
return 0;
}
/**
* gen6_add_request - Update the semaphore mailbox registers
*
* @request - request to write to the ring
*
* Update the mailbox registers in the *other* rings with the current seqno.
* This acts like a signal in the canonical semaphore.
*/
static int
gen6_add_request(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
int ret;
if (ring->semaphore.signal)
ret = ring->semaphore.signal(req, 4);
else
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, i915_gem_request_get_seqno(req));
intel_ring_emit(ring, MI_USER_INTERRUPT);
__intel_ring_advance(ring);
return 0;
}
static inline bool i915_gem_has_seqno_wrapped(struct drm_device *dev,
u32 seqno)
{
struct drm_i915_private *dev_priv = dev->dev_private;
return dev_priv->last_seqno < seqno;
}
/**
* intel_ring_sync - sync the waiter to the signaller on seqno
*
* @waiter - ring that is waiting
* @signaller - ring which has, or will signal
* @seqno - seqno which the waiter will block on
*/
static int
gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
struct intel_engine_cs *waiter = waiter_req->ring;
struct drm_i915_private *dev_priv = waiter->dev->dev_private;
int ret;
ret = intel_ring_begin(waiter_req, 4);
if (ret)
return ret;
intel_ring_emit(waiter, MI_SEMAPHORE_WAIT |
MI_SEMAPHORE_GLOBAL_GTT |
MI_SEMAPHORE_POLL |
MI_SEMAPHORE_SAD_GTE_SDD);
intel_ring_emit(waiter, seqno);
intel_ring_emit(waiter,
lower_32_bits(GEN8_WAIT_OFFSET(waiter, signaller->id)));
intel_ring_emit(waiter,
upper_32_bits(GEN8_WAIT_OFFSET(waiter, signaller->id)));
intel_ring_advance(waiter);
return 0;
}
static int
gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
struct intel_engine_cs *waiter = waiter_req->ring;
u32 dw1 = MI_SEMAPHORE_MBOX |
MI_SEMAPHORE_COMPARE |
MI_SEMAPHORE_REGISTER;
u32 wait_mbox = signaller->semaphore.mbox.wait[waiter->id];
int ret;
/* Throughout all of the GEM code, seqno passed implies our current
* seqno is >= the last seqno executed. However for hardware the
* comparison is strictly greater than.
*/
seqno -= 1;
WARN_ON(wait_mbox == MI_SEMAPHORE_SYNC_INVALID);
ret = intel_ring_begin(waiter_req, 4);
if (ret)
return ret;
/* If seqno wrap happened, omit the wait with no-ops */
if (likely(!i915_gem_has_seqno_wrapped(waiter->dev, seqno))) {
intel_ring_emit(waiter, dw1 | wait_mbox);
intel_ring_emit(waiter, seqno);
intel_ring_emit(waiter, 0);
intel_ring_emit(waiter, MI_NOOP);
} else {
intel_ring_emit(waiter, MI_NOOP);
intel_ring_emit(waiter, MI_NOOP);
intel_ring_emit(waiter, MI_NOOP);
intel_ring_emit(waiter, MI_NOOP);
}
intel_ring_advance(waiter);
return 0;
}
#define PIPE_CONTROL_FLUSH(ring__, addr__) \
do { \
intel_ring_emit(ring__, GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE | \
PIPE_CONTROL_DEPTH_STALL); \
intel_ring_emit(ring__, (addr__) | PIPE_CONTROL_GLOBAL_GTT); \
intel_ring_emit(ring__, 0); \
intel_ring_emit(ring__, 0); \
} while (0)
static int
pc_render_add_request(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
/* For Ironlake, MI_USER_INTERRUPT was deprecated and apparently
* incoherent with writes to memory, i.e. completely fubar,
* so we need to use PIPE_NOTIFY instead.
*
* However, we also need to workaround the qword write
* incoherence by flushing the 6 PIPE_NOTIFY buffers out to
* memory before requesting an interrupt.
*/
ret = intel_ring_begin(req, 32);
if (ret)
return ret;
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_WRITE_FLUSH |
PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE);
intel_ring_emit(ring, ring->scratch.gtt_offset | PIPE_CONTROL_GLOBAL_GTT);
intel_ring_emit(ring, i915_gem_request_get_seqno(req));
intel_ring_emit(ring, 0);
PIPE_CONTROL_FLUSH(ring, scratch_addr);
scratch_addr += 2 * CACHELINE_BYTES; /* write to separate cachelines */
PIPE_CONTROL_FLUSH(ring, scratch_addr);
scratch_addr += 2 * CACHELINE_BYTES;
PIPE_CONTROL_FLUSH(ring, scratch_addr);
scratch_addr += 2 * CACHELINE_BYTES;
PIPE_CONTROL_FLUSH(ring, scratch_addr);
scratch_addr += 2 * CACHELINE_BYTES;
PIPE_CONTROL_FLUSH(ring, scratch_addr);
scratch_addr += 2 * CACHELINE_BYTES;
PIPE_CONTROL_FLUSH(ring, scratch_addr);
intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_WRITE_FLUSH |
PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE |
PIPE_CONTROL_NOTIFY);
intel_ring_emit(ring, ring->scratch.gtt_offset | PIPE_CONTROL_GLOBAL_GTT);
intel_ring_emit(ring, i915_gem_request_get_seqno(req));
intel_ring_emit(ring, 0);
__intel_ring_advance(ring);
return 0;
}
static u32
gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
{
/* Workaround to force correct ordering between irq and seqno writes on
* ivb (and maybe also on snb) by reading from a CS register (like
* ACTHD) before reading the status page. */
if (!lazy_coherency) {
struct drm_i915_private *dev_priv = ring->dev->dev_private;
POSTING_READ(RING_ACTHD(ring->mmio_base));
}
return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
}
static u32
ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
{
return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
}
static void
ring_set_seqno(struct intel_engine_cs *ring, u32 seqno)
{
intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
}
static u32
pc_render_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
{
return ring->scratch.cpu_page[0];
}
static void
pc_render_set_seqno(struct intel_engine_cs *ring, u32 seqno)
{
ring->scratch.cpu_page[0] = seqno;
}
static bool
gen5_ring_get_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
if (WARN_ON(!intel_irqs_enabled(dev_priv)))
return false;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (ring->irq_refcount++ == 0)
gen5_enable_gt_irq(dev_priv, ring->irq_enable_mask);
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
return true;
}
static void
gen5_ring_put_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (--ring->irq_refcount == 0)
gen5_disable_gt_irq(dev_priv, ring->irq_enable_mask);
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
}
static bool
i9xx_ring_get_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
if (!intel_irqs_enabled(dev_priv))
return false;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (ring->irq_refcount++ == 0) {
dev_priv->irq_mask &= ~ring->irq_enable_mask;
I915_WRITE(IMR, dev_priv->irq_mask);
POSTING_READ(IMR);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
return true;
}
static void
i9xx_ring_put_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (--ring->irq_refcount == 0) {
dev_priv->irq_mask |= ring->irq_enable_mask;
I915_WRITE(IMR, dev_priv->irq_mask);
POSTING_READ(IMR);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
}
static bool
i8xx_ring_get_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
if (!intel_irqs_enabled(dev_priv))
return false;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (ring->irq_refcount++ == 0) {
dev_priv->irq_mask &= ~ring->irq_enable_mask;
I915_WRITE16(IMR, dev_priv->irq_mask);
POSTING_READ16(IMR);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
return true;
}
static void
i8xx_ring_put_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (--ring->irq_refcount == 0) {
dev_priv->irq_mask |= ring->irq_enable_mask;
I915_WRITE16(IMR, dev_priv->irq_mask);
POSTING_READ16(IMR);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
}
static int
bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 2);
if (ret)
return ret;
intel_ring_emit(ring, MI_FLUSH);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
static int
i9xx_add_request(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, i915_gem_request_get_seqno(req));
intel_ring_emit(ring, MI_USER_INTERRUPT);
__intel_ring_advance(ring);
return 0;
}
static bool
gen6_ring_get_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
if (WARN_ON(!intel_irqs_enabled(dev_priv)))
return false;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (ring->irq_refcount++ == 0) {
if (HAS_L3_DPF(dev) && ring->id == RCS)
I915_WRITE_IMR(ring,
~(ring->irq_enable_mask |
GT_PARITY_ERROR(dev)));
else
I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
gen5_enable_gt_irq(dev_priv, ring->irq_enable_mask);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
return true;
}
static void
gen6_ring_put_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (--ring->irq_refcount == 0) {
if (HAS_L3_DPF(dev) && ring->id == RCS)
I915_WRITE_IMR(ring, ~GT_PARITY_ERROR(dev));
else
I915_WRITE_IMR(ring, ~0);
gen5_disable_gt_irq(dev_priv, ring->irq_enable_mask);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
}
static bool
hsw_vebox_get_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
if (WARN_ON(!intel_irqs_enabled(dev_priv)))
return false;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (ring->irq_refcount++ == 0) {
I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
gen6_enable_pm_irq(dev_priv, ring->irq_enable_mask);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
return true;
}
static void
hsw_vebox_put_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (--ring->irq_refcount == 0) {
I915_WRITE_IMR(ring, ~0);
gen6_disable_pm_irq(dev_priv, ring->irq_enable_mask);
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
}
static bool
gen8_ring_get_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
if (WARN_ON(!intel_irqs_enabled(dev_priv)))
return false;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (ring->irq_refcount++ == 0) {
if (HAS_L3_DPF(dev) && ring->id == RCS) {
I915_WRITE_IMR(ring,
~(ring->irq_enable_mask |
GT_RENDER_L3_PARITY_ERROR_INTERRUPT));
} else {
I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
}
POSTING_READ(RING_IMR(ring->mmio_base));
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
return true;
}
static void
gen8_ring_put_irq(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
spin_lock_irqsave(&dev_priv->irq_lock, flags);
if (--ring->irq_refcount == 0) {
if (HAS_L3_DPF(dev) && ring->id == RCS) {
I915_WRITE_IMR(ring,
~GT_RENDER_L3_PARITY_ERROR_INTERRUPT);
} else {
I915_WRITE_IMR(ring, ~0);
}
POSTING_READ(RING_IMR(ring->mmio_base));
}
spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
}
static int
i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 length,
unsigned dispatch_flags)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 2);
if (ret)
return ret;
intel_ring_emit(ring,
MI_BATCH_BUFFER_START |
MI_BATCH_GTT |
(dispatch_flags & I915_DISPATCH_SECURE ?
0 : MI_BATCH_NON_SECURE_I965));
intel_ring_emit(ring, offset);
intel_ring_advance(ring);
return 0;
}
/* Just userspace ABI convention to limit the wa batch bo to a resonable size */
#define I830_BATCH_LIMIT (256*1024)
#define I830_TLB_ENTRIES (2)
#define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
static int
i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
struct intel_engine_cs *ring = req->ring;
u32 cs_offset = ring->scratch.gtt_offset;
int ret;
ret = intel_ring_begin(req, 6);
if (ret)
return ret;
/* Evict the invalid PTE TLBs */
intel_ring_emit(ring, COLOR_BLT_CMD | BLT_WRITE_RGBA);
intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | 4096);
intel_ring_emit(ring, I830_TLB_ENTRIES << 16 | 4); /* load each page */
intel_ring_emit(ring, cs_offset);
intel_ring_emit(ring, 0xdeadbeef);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
if ((dispatch_flags & I915_DISPATCH_PINNED) == 0) {
if (len > I830_BATCH_LIMIT)
return -ENOSPC;
ret = intel_ring_begin(req, 6 + 2);
if (ret)
return ret;
/* Blit the batch (which has now all relocs applied) to the
* stable batch scratch bo area (so that the CS never
* stumbles over its tlb invalidation bug) ...
*/
intel_ring_emit(ring, SRC_COPY_BLT_CMD | BLT_WRITE_RGBA);
intel_ring_emit(ring, BLT_DEPTH_32 | BLT_ROP_SRC_COPY | 4096);
intel_ring_emit(ring, DIV_ROUND_UP(len, 4096) << 16 | 4096);
intel_ring_emit(ring, cs_offset);
intel_ring_emit(ring, 4096);
intel_ring_emit(ring, offset);
intel_ring_emit(ring, MI_FLUSH);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
/* ... and execute it. */
offset = cs_offset;
}
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
intel_ring_emit(ring, MI_BATCH_BUFFER);
intel_ring_emit(ring, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
0 : MI_BATCH_NON_SECURE));
intel_ring_emit(ring, offset + len - 8);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
static int
i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 2);
if (ret)
return ret;
intel_ring_emit(ring, MI_BATCH_BUFFER_START | MI_BATCH_GTT);
intel_ring_emit(ring, offset | (dispatch_flags & I915_DISPATCH_SECURE ?
0 : MI_BATCH_NON_SECURE));
intel_ring_advance(ring);
return 0;
}
static void cleanup_status_page(struct intel_engine_cs *ring)
{
struct drm_i915_gem_object *obj;
obj = ring->status_page.obj;
if (obj == NULL)
return;
kunmap(sg_page(obj->pages->sgl));
i915_gem_object_ggtt_unpin(obj);
drm_gem_object_unreference(&obj->base);
ring->status_page.obj = NULL;
}
static int init_status_page(struct intel_engine_cs *ring)
{
struct drm_i915_gem_object *obj;
if ((obj = ring->status_page.obj) == NULL) {
unsigned flags;
int ret;
obj = i915_gem_alloc_object(ring->dev, 4096);
if (obj == NULL) {
DRM_ERROR("Failed to allocate status page\n");
return -ENOMEM;
}
ret = i915_gem_object_set_cache_level(obj, I915_CACHE_LLC);
if (ret)
goto err_unref;
flags = 0;
if (!HAS_LLC(ring->dev))
/* On g33, we cannot place HWS above 256MiB, so
* restrict its pinning to the low mappable arena.
* Though this restriction is not documented for
* gen4, gen5, or byt, they also behave similarly
* and hang if the HWS is placed at the top of the
* GTT. To generalise, it appears that all !llc
* platforms have issues with us placing the HWS
* above the mappable region (even though we never
* actualy map it).
*/
flags |= PIN_MAPPABLE;
ret = i915_gem_obj_ggtt_pin(obj, 4096, flags);
if (ret) {
err_unref:
drm_gem_object_unreference(&obj->base);
return ret;
}
ring->status_page.obj = obj;
}
ring->status_page.gfx_addr = i915_gem_obj_ggtt_offset(obj);
ring->status_page.page_addr = kmap(sg_page(obj->pages->sgl));
memset(ring->status_page.page_addr, 0, PAGE_SIZE);
DRM_DEBUG_DRIVER("%s hws offset: 0x%08x\n",
ring->name, ring->status_page.gfx_addr);
return 0;
}
static int init_phys_status_page(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
if (!dev_priv->status_page_dmah) {
dev_priv->status_page_dmah =
drm_pci_alloc(ring->dev, PAGE_SIZE, PAGE_SIZE);
if (!dev_priv->status_page_dmah)
return -ENOMEM;
}
ring->status_page.page_addr = dev_priv->status_page_dmah->vaddr;
memset(ring->status_page.page_addr, 0, PAGE_SIZE);
return 0;
}
void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
{
if (HAS_LLC(ringbuf->obj->base.dev) && !ringbuf->obj->stolen)
vunmap(ringbuf->virtual_start);
else
iounmap(ringbuf->virtual_start);
ringbuf->virtual_start = NULL;
i915_gem_object_ggtt_unpin(ringbuf->obj);
}
static u32 *vmap_obj(struct drm_i915_gem_object *obj)
{
struct sg_page_iter sg_iter;
struct page **pages;
void *addr;
int i;
pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
if (pages == NULL)
return NULL;
i = 0;
for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
pages[i++] = sg_page_iter_page(&sg_iter);
addr = vmap(pages, i, 0, PAGE_KERNEL);
drm_free_large(pages);
return addr;
}
int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
struct intel_ringbuffer *ringbuf)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_object *obj = ringbuf->obj;
int ret;
if (HAS_LLC(dev_priv) && !obj->stolen) {
ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, 0);
if (ret)
return ret;
ret = i915_gem_object_set_to_cpu_domain(obj, true);
if (ret) {
i915_gem_object_ggtt_unpin(obj);
return ret;
}
ringbuf->virtual_start = vmap_obj(obj);
if (ringbuf->virtual_start == NULL) {
i915_gem_object_ggtt_unpin(obj);
return -ENOMEM;
}
} else {
ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, PIN_MAPPABLE);
if (ret)
return ret;
ret = i915_gem_object_set_to_gtt_domain(obj, true);
if (ret) {
i915_gem_object_ggtt_unpin(obj);
return ret;
}
ringbuf->virtual_start = ioremap_wc(dev_priv->gtt.mappable_base +
i915_gem_obj_ggtt_offset(obj), ringbuf->size);
if (ringbuf->virtual_start == NULL) {
i915_gem_object_ggtt_unpin(obj);
return -EINVAL;
}
}
return 0;
}
static void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
{
drm_gem_object_unreference(&ringbuf->obj->base);
ringbuf->obj = NULL;
}
static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
struct intel_ringbuffer *ringbuf)
{
struct drm_i915_gem_object *obj;
obj = NULL;
if (!HAS_LLC(dev))
obj = i915_gem_object_create_stolen(dev, ringbuf->size);
if (obj == NULL)
obj = i915_gem_alloc_object(dev, ringbuf->size);
if (obj == NULL)
return -ENOMEM;
/* mark ring buffers as read-only from GPU side by default */
obj->gt_ro = 1;
ringbuf->obj = obj;
return 0;
}
struct intel_ringbuffer *
intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
{
struct intel_ringbuffer *ring;
int ret;
ring = kzalloc(sizeof(*ring), GFP_KERNEL);
if (ring == NULL) {
DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s\n",
engine->name);
return ERR_PTR(-ENOMEM);
}
ring->ring = engine;
list_add(&ring->link, &engine->buffers);
ring->size = size;
/* Workaround an erratum on the i830 which causes a hang if
* the TAIL pointer points to within the last 2 cachelines
* of the buffer.
*/
ring->effective_size = size;
if (IS_I830(engine->dev) || IS_845G(engine->dev))
ring->effective_size -= 2 * CACHELINE_BYTES;
ring->last_retired_head = -1;
intel_ring_update_space(ring);
ret = intel_alloc_ringbuffer_obj(engine->dev, ring);
if (ret) {
DRM_DEBUG_DRIVER("Failed to allocate ringbuffer %s: %d\n",
engine->name, ret);
list_del(&ring->link);
kfree(ring);
return ERR_PTR(ret);
}
return ring;
}
void
intel_ringbuffer_free(struct intel_ringbuffer *ring)
{
intel_destroy_ringbuffer_obj(ring);
list_del(&ring->link);
kfree(ring);
}
static int intel_init_ring_buffer(struct drm_device *dev,
struct intel_engine_cs *ring)
{
struct intel_ringbuffer *ringbuf;
int ret;
WARN_ON(ring->buffer);
ring->dev = dev;
INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
INIT_LIST_HEAD(&ring->execlist_queue);
INIT_LIST_HEAD(&ring->buffers);
i915_gem_batch_pool_init(dev, &ring->batch_pool);
memset(ring->semaphore.sync_seqno, 0, sizeof(ring->semaphore.sync_seqno));
init_waitqueue_head(&ring->irq_queue);
ringbuf = intel_engine_create_ringbuffer(ring, 32 * PAGE_SIZE);
if (IS_ERR(ringbuf)) {
ret = PTR_ERR(ringbuf);
goto error;
}
ring->buffer = ringbuf;
if (I915_NEED_GFX_HWS(dev)) {
ret = init_status_page(ring);
if (ret)
goto error;
} else {
BUG_ON(ring->id != RCS);
ret = init_phys_status_page(ring);
if (ret)
goto error;
}
ret = intel_pin_and_map_ringbuffer_obj(dev, ringbuf);
if (ret) {
DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
ring->name, ret);
intel_destroy_ringbuffer_obj(ringbuf);
goto error;
}
ret = i915_cmd_parser_init_ring(ring);
if (ret)
goto error;
return 0;
error:
intel_cleanup_ring_buffer(ring);
return ret;
}
void intel_cleanup_ring_buffer(struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv;
if (!intel_ring_initialized(ring))
return;
dev_priv = to_i915(ring->dev);
if (ring->buffer) {
intel_stop_ring_buffer(ring);
WARN_ON(!IS_GEN2(ring->dev) && (I915_READ_MODE(ring) & MODE_IDLE) == 0);
intel_unpin_ringbuffer_obj(ring->buffer);
intel_ringbuffer_free(ring->buffer);
ring->buffer = NULL;
}
if (ring->cleanup)
ring->cleanup(ring);
cleanup_status_page(ring);
i915_cmd_parser_fini_ring(ring);
i915_gem_batch_pool_fini(&ring->batch_pool);
ring->dev = NULL;
}
static int ring_wait_for_space(struct intel_engine_cs *ring, int n)
{
struct intel_ringbuffer *ringbuf = ring->buffer;
struct drm_i915_gem_request *request;
unsigned space;
int ret;
if (intel_ring_space(ringbuf) >= n)
return 0;
/* The whole point of reserving space is to not wait! */
WARN_ON(ringbuf->reserved_in_use);
list_for_each_entry(request, &ring->request_list, list) {
space = __intel_ring_space(request->postfix, ringbuf->tail,
ringbuf->size);
if (space >= n)
break;
}
if (WARN_ON(&request->list == &ring->request_list))
return -ENOSPC;
ret = i915_wait_request(request);
if (ret)
return ret;
ringbuf->space = space;
return 0;
}
static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
{
uint32_t __iomem *virt;
int rem = ringbuf->size - ringbuf->tail;
virt = ringbuf->virtual_start + ringbuf->tail;
rem /= 4;
while (rem--)
iowrite32(MI_NOOP, virt++);
ringbuf->tail = 0;
intel_ring_update_space(ringbuf);
}
int intel_ring_idle(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req;
/* Wait upon the last request to be completed */
if (list_empty(&ring->request_list))
return 0;
req = list_entry(ring->request_list.prev,
struct drm_i915_gem_request,
list);
/* Make sure we do not trigger any retires */
return __i915_wait_request(req,
atomic_read(&to_i915(ring->dev)->gpu_error.reset_counter),
to_i915(ring->dev)->mm.interruptible,
NULL, NULL);
}
int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
request->ringbuf = request->ring->buffer;
return 0;
}
int intel_ring_reserve_space(struct drm_i915_gem_request *request)
{
/*
* The first call merely notes the reserve request and is common for
* all back ends. The subsequent localised _begin() call actually
* ensures that the reservation is available. Without the begin, if
* the request creator immediately submitted the request without
* adding any commands to it then there might not actually be
* sufficient room for the submission commands.
*/
intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
return intel_ring_begin(request, 0);
}
void intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size)
{
WARN_ON(ringbuf->reserved_size);
WARN_ON(ringbuf->reserved_in_use);
ringbuf->reserved_size = size;
}
void intel_ring_reserved_space_cancel(struct intel_ringbuffer *ringbuf)
{
WARN_ON(ringbuf->reserved_in_use);
ringbuf->reserved_size = 0;
ringbuf->reserved_in_use = false;
}
void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf)
{
WARN_ON(ringbuf->reserved_in_use);
ringbuf->reserved_in_use = true;
ringbuf->reserved_tail = ringbuf->tail;
}
void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)
{
WARN_ON(!ringbuf->reserved_in_use);
if (ringbuf->tail > ringbuf->reserved_tail) {
WARN(ringbuf->tail > ringbuf->reserved_tail + ringbuf->reserved_size,
"request reserved size too small: %d vs %d!\n",
ringbuf->tail - ringbuf->reserved_tail, ringbuf->reserved_size);
} else {
/*
* The ring was wrapped while the reserved space was in use.
* That means that some unknown amount of the ring tail was
* no-op filled and skipped. Thus simply adding the ring size
* to the tail and doing the above space check will not work.
* Rather than attempt to track how much tail was skipped,
* it is much simpler to say that also skipping the sanity
* check every once in a while is not a big issue.
*/
}
ringbuf->reserved_size = 0;
ringbuf->reserved_in_use = false;
}
static int __intel_ring_prepare(struct intel_engine_cs *ring, int bytes)
{
struct intel_ringbuffer *ringbuf = ring->buffer;
int remain_usable = ringbuf->effective_size - ringbuf->tail;
int remain_actual = ringbuf->size - ringbuf->tail;
int ret, total_bytes, wait_bytes = 0;
bool need_wrap = false;
if (ringbuf->reserved_in_use)
total_bytes = bytes;
else
total_bytes = bytes + ringbuf->reserved_size;
if (unlikely(bytes > remain_usable)) {
/*
* Not enough space for the basic request. So need to flush
* out the remainder and then wait for base + reserved.
*/
wait_bytes = remain_actual + total_bytes;
need_wrap = true;
} else {
if (unlikely(total_bytes > remain_usable)) {
/*
* The base request will fit but the reserved space
* falls off the end. So only need to to wait for the
* reserved size after flushing out the remainder.
*/
wait_bytes = remain_actual + ringbuf->reserved_size;
need_wrap = true;
} else if (total_bytes > ringbuf->space) {
/* No wrapping required, just waiting. */
wait_bytes = total_bytes;
}
}
if (wait_bytes) {
ret = ring_wait_for_space(ring, wait_bytes);
if (unlikely(ret))
return ret;
if (need_wrap)
__wrap_ring_buffer(ringbuf);
}
return 0;
}
int intel_ring_begin(struct drm_i915_gem_request *req,
int num_dwords)
{
struct intel_engine_cs *ring;
struct drm_i915_private *dev_priv;
int ret;
WARN_ON(req == NULL);
ring = req->ring;
dev_priv = ring->dev->dev_private;
ret = i915_gem_check_wedge(&dev_priv->gpu_error,
dev_priv->mm.interruptible);
if (ret)
return ret;
ret = __intel_ring_prepare(ring, num_dwords * sizeof(uint32_t));
if (ret)
return ret;
ring->buffer->space -= num_dwords * sizeof(uint32_t);
return 0;
}
/* Align the ring tail to a cacheline boundary */
int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
int num_dwords = (ring->buffer->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
int ret;
if (num_dwords == 0)
return 0;
num_dwords = CACHELINE_BYTES / sizeof(uint32_t) - num_dwords;
ret = intel_ring_begin(req, num_dwords);
if (ret)
return ret;
while (num_dwords--)
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
void intel_ring_init_seqno(struct intel_engine_cs *ring, u32 seqno)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
if (INTEL_INFO(dev)->gen == 6 || INTEL_INFO(dev)->gen == 7) {
I915_WRITE(RING_SYNC_0(ring->mmio_base), 0);
I915_WRITE(RING_SYNC_1(ring->mmio_base), 0);
if (HAS_VEBOX(dev))
I915_WRITE(RING_SYNC_2(ring->mmio_base), 0);
}
ring->set_seqno(ring, seqno);
ring->hangcheck.seqno = seqno;
}
static void gen6_bsd_ring_write_tail(struct intel_engine_cs *ring,
u32 value)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
/* Every tail move must follow the sequence below */
/* Disable notification that the ring is IDLE. The GT
* will then assume that it is busy and bring it out of rc6.
*/
I915_WRITE(GEN6_BSD_SLEEP_PSMI_CONTROL,
_MASKED_BIT_ENABLE(GEN6_BSD_SLEEP_MSG_DISABLE));
/* Clear the context id. Here be magic! */
I915_WRITE64(GEN6_BSD_RNCID, 0x0);
/* Wait for the ring not to be idle, i.e. for it to wake up. */
if (wait_for((I915_READ(GEN6_BSD_SLEEP_PSMI_CONTROL) &
GEN6_BSD_SLEEP_INDICATOR) == 0,
50))
DRM_ERROR("timed out waiting for the BSD ring to wake up\n");
/* Now that the ring is fully powered up, update the tail */
I915_WRITE_TAIL(ring, value);
POSTING_READ(RING_TAIL(ring->mmio_base));
/* Let the ring send IDLE messages to the GT again,
* and so let it sleep to conserve power when idle.
*/
I915_WRITE(GEN6_BSD_SLEEP_PSMI_CONTROL,
_MASKED_BIT_DISABLE(GEN6_BSD_SLEEP_MSG_DISABLE));
}
static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
struct intel_engine_cs *ring = req->ring;
uint32_t cmd;
int ret;
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
cmd = MI_FLUSH_DW;
if (INTEL_INFO(ring->dev)->gen >= 8)
cmd += 1;
/* We always require a command barrier so that subsequent
* commands, such as breadcrumb interrupts, are strictly ordered
* wrt the contents of the write cache being flushed to memory
* (and thus being coherent from the CPU).
*/
cmd |= MI_FLUSH_DW_STORE_INDEX | MI_FLUSH_DW_OP_STOREDW;
/*
* Bspec vol 1c.5 - video engine command streamer:
* "If ENABLED, all TLBs will be invalidated once the flush
* operation is complete. This bit is only valid when the
* Post-Sync Operation field is a value of 1h or 3h."
*/
if (invalidate & I915_GEM_GPU_DOMAINS)
cmd |= MI_INVALIDATE_TLB | MI_INVALIDATE_BSD;
intel_ring_emit(ring, cmd);
intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
if (INTEL_INFO(ring->dev)->gen >= 8) {
intel_ring_emit(ring, 0); /* upper addr */
intel_ring_emit(ring, 0); /* value */
} else {
intel_ring_emit(ring, 0);
intel_ring_emit(ring, MI_NOOP);
}
intel_ring_advance(ring);
return 0;
}
static int
gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
struct intel_engine_cs *ring = req->ring;
bool ppgtt = USES_PPGTT(ring->dev) &&
!(dispatch_flags & I915_DISPATCH_SECURE);
int ret;
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
/* FIXME(BDW): Address space and security selectors. */
intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8 | (ppgtt<<8) |
(dispatch_flags & I915_DISPATCH_RS ?
MI_BATCH_RESOURCE_STREAMER : 0));
intel_ring_emit(ring, lower_32_bits(offset));
intel_ring_emit(ring, upper_32_bits(offset));
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
return 0;
}
static int
hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 2);
if (ret)
return ret;
intel_ring_emit(ring,
MI_BATCH_BUFFER_START |
(dispatch_flags & I915_DISPATCH_SECURE ?
0 : MI_BATCH_PPGTT_HSW | MI_BATCH_NON_SECURE_HSW) |
(dispatch_flags & I915_DISPATCH_RS ?
MI_BATCH_RESOURCE_STREAMER : 0));
/* bit0-7 is the length on GEN6+ */
intel_ring_emit(ring, offset);
intel_ring_advance(ring);
return 0;
}
static int
gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
struct intel_engine_cs *ring = req->ring;
int ret;
ret = intel_ring_begin(req, 2);
if (ret)
return ret;
intel_ring_emit(ring,
MI_BATCH_BUFFER_START |
(dispatch_flags & I915_DISPATCH_SECURE ?
0 : MI_BATCH_NON_SECURE_I965));
/* bit0-7 is the length on GEN6+ */
intel_ring_emit(ring, offset);
intel_ring_advance(ring);
return 0;
}
/* Blitter support (SandyBridge+) */
static int gen6_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
struct intel_engine_cs *ring = req->ring;
struct drm_device *dev = ring->dev;
uint32_t cmd;
int ret;
ret = intel_ring_begin(req, 4);
if (ret)
return ret;
cmd = MI_FLUSH_DW;
if (INTEL_INFO(dev)->gen >= 8)
cmd += 1;
/* We always require a command barrier so that subsequent
* commands, such as breadcrumb interrupts, are strictly ordered
* wrt the contents of the write cache being flushed to memory
* (and thus being coherent from the CPU).
*/
cmd |= MI_FLUSH_DW_STORE_INDEX | MI_FLUSH_DW_OP_STOREDW;
/*
* Bspec vol 1c.3 - blitter engine command streamer:
* "If ENABLED, all TLBs will be invalidated once the flush
* operation is complete. This bit is only valid when the
* Post-Sync Operation field is a value of 1h or 3h."
*/
if (invalidate & I915_GEM_DOMAIN_RENDER)
cmd |= MI_INVALIDATE_TLB;
intel_ring_emit(ring, cmd);
intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
if (INTEL_INFO(dev)->gen >= 8) {
intel_ring_emit(ring, 0); /* upper addr */
intel_ring_emit(ring, 0); /* value */
} else {
intel_ring_emit(ring, 0);
intel_ring_emit(ring, MI_NOOP);
}
intel_ring_advance(ring);
return 0;
}
int intel_init_render_ring_buffer(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *ring = &dev_priv->ring[RCS];
struct drm_i915_gem_object *obj;
int ret;
ring->name = "render ring";
ring->id = RCS;
ring->mmio_base = RENDER_RING_BASE;
if (INTEL_INFO(dev)->gen >= 8) {
if (i915_semaphore_is_enabled(dev)) {
obj = i915_gem_alloc_object(dev, 4096);
if (obj == NULL) {
DRM_ERROR("Failed to allocate semaphore bo. Disabling semaphores\n");
i915.semaphores = 0;
} else {
i915_gem_object_set_cache_level(obj, I915_CACHE_LLC);
ret = i915_gem_obj_ggtt_pin(obj, 0, PIN_NONBLOCK);
if (ret != 0) {
drm_gem_object_unreference(&obj->base);
DRM_ERROR("Failed to pin semaphore bo. Disabling semaphores\n");
i915.semaphores = 0;
} else
dev_priv->semaphore_obj = obj;
}
}
ring->init_context = intel_rcs_ctx_init;
ring->add_request = gen6_add_request;
ring->flush = gen8_render_ring_flush;
ring->irq_get = gen8_ring_get_irq;
ring->irq_put = gen8_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->get_seqno = gen6_ring_get_seqno;
ring->set_seqno = ring_set_seqno;
if (i915_semaphore_is_enabled(dev)) {
WARN_ON(!dev_priv->semaphore_obj);
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_rcs_signal;
GEN8_RING_SEMAPHORE_INIT;
}
} else if (INTEL_INFO(dev)->gen >= 6) {
ring->init_context = intel_rcs_ctx_init;
ring->add_request = gen6_add_request;
ring->flush = gen7_render_ring_flush;
if (INTEL_INFO(dev)->gen == 6)
ring->flush = gen6_render_ring_flush;
ring->irq_get = gen6_ring_get_irq;
ring->irq_put = gen6_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->get_seqno = gen6_ring_get_seqno;
ring->set_seqno = ring_set_seqno;
if (i915_semaphore_is_enabled(dev)) {
ring->semaphore.sync_to = gen6_ring_sync;
ring->semaphore.signal = gen6_signal;
/*
* The current semaphore is only applied on pre-gen8
* platform. And there is no VCS2 ring on the pre-gen8
* platform. So the semaphore between RCS and VCS2 is
* initialized as INVALID. Gen8 will initialize the
* sema between VCS2 and RCS later.
*/
ring->semaphore.mbox.wait[RCS] = MI_SEMAPHORE_SYNC_INVALID;
ring->semaphore.mbox.wait[VCS] = MI_SEMAPHORE_SYNC_RV;
ring->semaphore.mbox.wait[BCS] = MI_SEMAPHORE_SYNC_RB;
ring->semaphore.mbox.wait[VECS] = MI_SEMAPHORE_SYNC_RVE;
ring->semaphore.mbox.wait[VCS2] = MI_SEMAPHORE_SYNC_INVALID;
ring->semaphore.mbox.signal[RCS] = GEN6_NOSYNC;
ring->semaphore.mbox.signal[VCS] = GEN6_VRSYNC;
ring->semaphore.mbox.signal[BCS] = GEN6_BRSYNC;
ring->semaphore.mbox.signal[VECS] = GEN6_VERSYNC;
ring->semaphore.mbox.signal[VCS2] = GEN6_NOSYNC;
}
} else if (IS_GEN5(dev)) {
ring->add_request = pc_render_add_request;
ring->flush = gen4_render_ring_flush;
ring->get_seqno = pc_render_get_seqno;
ring->set_seqno = pc_render_set_seqno;
ring->irq_get = gen5_ring_get_irq;
ring->irq_put = gen5_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT |
GT_RENDER_PIPECTL_NOTIFY_INTERRUPT;
} else {
ring->add_request = i9xx_add_request;
if (INTEL_INFO(dev)->gen < 4)
ring->flush = gen2_render_ring_flush;
else
ring->flush = gen4_render_ring_flush;
ring->get_seqno = ring_get_seqno;
ring->set_seqno = ring_set_seqno;
if (IS_GEN2(dev)) {
ring->irq_get = i8xx_ring_get_irq;
ring->irq_put = i8xx_ring_put_irq;
} else {
ring->irq_get = i9xx_ring_get_irq;
ring->irq_put = i9xx_ring_put_irq;
}
ring->irq_enable_mask = I915_USER_INTERRUPT;
}
ring->write_tail = ring_write_tail;
if (IS_HASWELL(dev))
ring->dispatch_execbuffer = hsw_ring_dispatch_execbuffer;
else if (IS_GEN8(dev))
ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
else if (INTEL_INFO(dev)->gen >= 6)
ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
else if (INTEL_INFO(dev)->gen >= 4)
ring->dispatch_execbuffer = i965_dispatch_execbuffer;
else if (IS_I830(dev) || IS_845G(dev))
ring->dispatch_execbuffer = i830_dispatch_execbuffer;
else
ring->dispatch_execbuffer = i915_dispatch_execbuffer;
ring->init_hw = init_render_ring;
ring->cleanup = render_ring_cleanup;
/* Workaround batchbuffer to combat CS tlb bug. */
if (HAS_BROKEN_CS_TLB(dev)) {
obj = i915_gem_alloc_object(dev, I830_WA_SIZE);
if (obj == NULL) {
DRM_ERROR("Failed to allocate batch bo\n");
return -ENOMEM;
}
ret = i915_gem_obj_ggtt_pin(obj, 0, 0);
if (ret != 0) {
drm_gem_object_unreference(&obj->base);
DRM_ERROR("Failed to ping batch bo\n");
return ret;
}
ring->scratch.obj = obj;
ring->scratch.gtt_offset = i915_gem_obj_ggtt_offset(obj);
}
ret = intel_init_ring_buffer(dev, ring);
if (ret)
return ret;
if (INTEL_INFO(dev)->gen >= 5) {
ret = intel_init_pipe_control(ring);
if (ret)
return ret;
}
return 0;
}
int intel_init_bsd_ring_buffer(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_engine_cs *ring = &dev_priv->ring[VCS];
ring->name = "bsd ring";
ring->id = VCS;
ring->write_tail = ring_write_tail;
if (INTEL_INFO(dev)->gen >= 6) {
ring->mmio_base = GEN6_BSD_RING_BASE;
/* gen6 bsd needs a special wa for tail updates */
|
__label__pos
| 0.98496 |
what is m if distance between points (2,m),(m,-2) is 4?
1 Answer
sciencesolve's profile pic
sciencesolve | Teacher | (Level 3) Educator Emeritus
Posted on
You should use the following distance formula, such that:
`D = sqrt((x_B - x_A)^2 + (y_B - y_A)^2)`
Considering `A(2,m)` and `B(m,-2)` yields:
`D = sqrt((m - 2)^2 + (-2 - m)^2)`
`D = sqrt((m - 2)^2 + (-(2 + m))^2)`
`D = sqrt((m - 2)^2 + (2 + m)^2)`
Since the problem provides the information that the distance between the points A and B is 4, yields:
`4 = sqrt((m - 2)^2 + (2 + m)^2)`
Squaring both sides, yields:
`16 = (m - 2)^2 + (m + 2)^2`
Expanding the squares, yields:
`16 = m^2 - 4m + 4 + m^2 + 4m + 4`
Reducing duplicate terms yields:
`16 = 2m^2 + 8 => 2m^2 = 8 => m^2 = 4 => m = +-2`
Hence, evaluating the values of coordinates m, under the given conditions, yields `m = +-2.`
|
__label__pos
| 1 |
ad-0.13: Automatic Differentiation
PortabilityGHC only
Stabilityexperimental
[email protected]
Numeric.AD.Newton
Contents
Description
ย
Synopsis
Newton's Method (Forward AD)
findZero :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]Source
The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
Examples:
take 10 $ findZero (\\x->x^2-4) 1 -- converge to 2.0
module Data.Complex
take 10 $ findZero ((+1).(^2)) (1 :+ 1) -- converge to (0 :+ 1)@
inverse :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> a -> [a]Source
The inverseNewton function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
Example:
take 10 $ inverseNewton sqrt 1 (sqrt 10) -- converge to 10
fixedPoint :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]Source
The fixedPoint function find a fixedpoint of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
extremum :: Fractional a => (forall t s. (Mode t, Mode s) => AD t (AD s a) -> AD t (AD s a)) -> a -> [a]Source
The extremum function finds an extremum of a scalar function using Newton's method; produces a stream of increasingly accurate results. (Modulo the usual caveats.)
Gradient Descent (Reverse AD)
gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]Source
The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
It uses reverse mode automatic differentiation to compute the gradient.
Exposed Types
newtype AD f a Source
AD serves as a common wrapper for different Mode instances, exposing a traditional numerical tower. Universal quantification is used to limit the actions in user code to machinery that will return the same answers under all AD modes, allowing us to use modes interchangeably as both the type level "brand" and dictionary, providing a common API.
Constructors
ADย
Fields
runAD :: f a
ย
Instances
Primal f => Primal (AD f)ย
Mode f => Mode (AD f)ย
Lifted f => Lifted (AD f)ย
Iso (f a) (AD f a)ย
(Num a, Lifted f, Bounded a) => Bounded (AD f a)ย
(Num a, Lifted f, Enum a) => Enum (AD f a)ย
(Num a, Lifted f, Eq a) => Eq (AD f a)ย
(Lifted f, Floating a) => Floating (AD f a)ย
(Lifted f, Fractional a) => Fractional (AD f a)ย
(Lifted f, Num a) => Num (AD f a)ย
(Num a, Lifted f, Ord a) => Ord (AD f a)ย
(Lifted f, Real a) => Real (AD f a)ย
(Lifted f, RealFloat a) => RealFloat (AD f a)ย
(Lifted f, RealFrac a) => RealFrac (AD f a)ย
(Lifted f, Show a) => Show (AD f a)ย
Var (AD Reverse a) aย
class Lifted t => Mode t whereSource
Methods
lift :: Num a => a -> t aSource
Embed a constant
(<+>) :: Num a => t a -> t a -> t aSource
Vector sum
(*^) :: Num a => a -> t a -> t aSource
Scalar-vector multiplication
(^*) :: Num a => t a -> a -> t aSource
Vector-scalar multiplication
(^/) :: Fractional a => t a -> a -> t aSource
Scalar division
zero :: Num a => t aSource
'zero' = 'lift' 0
|
__label__pos
| 0.791498 |
Including external CSS frameworks
Is it possible to use external CSS frameworks like bootstrap with custom widgets?
Hey Kevin,
Yes it is. You can leverage external CSS library in a similar way to the external JS library.
You need the code below in the JS of your widget. Here I referencing the https://getbootstrap.com/ library. It is possible to use others in the same way. I also attaching an example widgets which you can download and install in your instance to test.
customWidget-Example External CSS - Bootstrap.json (2.4 KB)
This is the output from a simple example:
Before Bootstrap CSS:
image
With Bootstrap CSS:
Code required in JS:
let urls = ["https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css"];
loadStyles(urls);
async function loadStyles(...urls) {
for (const thisUrl of urls) {
console.log(`Loading style "${thisUrl}"...`);
const response = await fetch(thisUrl);
const responseBody = await response.text();
const styleElm = document.createElement("style");
const inlineStyle = document.createTextNode(responseBody);
styleElm.appendChild(inlineStyle);
document.body.appendChild(styleElm);
console.log(`Style "${thisUrl}" successfully loaded`);
}
}
@parth.parmar ^^^ see this for CSS library
@kirons How did you get on with the CSS? Anything tips and tricks to share? I am thinking of a couple of widgets to build that would leverage external CSS. So any learnings appreciated.
Hi Eddy, I havenโt gotten back to this beyond testing it after you initially provided feedback. However, I have used a couple of custom widgets in apps recently. One was the label preview that you built, modified for our environment, and the second was a text box that updates the ZPL for the label preview when the values change.
Would be nice to add this as default for all widgets.
Switch option on the widget dashboard, to use the default one or the reference
Hey @jajzler -
Can you give a little more context here?
โข You thinking just CSS formatting libraries could be selected from the widget configuration pane?
โข Would there be cases where you might use a widget in multiple steps or apps, but with different css libraries?
โข Would this behavior exist for JS libraries too?
If there are external libraries you want us to add native support for, please reach out and we can add them to the dropdown list. The code @EddyA shared above is really just a stopgap until we can natively support anything anyone would ask for.
Pete
image
Okay, maybe i was not explained clearly.
In the picture the have snapshot from the widget view.
If the settings has a new options like
โข define widget settings
Under that, the developer can set the new default settings that will be automatic load into to widget.
It is like the current default css style, that came by default. for each new widget.
Sometimes i need the same css style for many application, so the user has the same feeling.
(Material Design)
If you can add Bulma CSS as an external lib, without the code from Eddy. That would be nice.
I dont wont to put the same code in many widgets again and again.
I like the idea of applying CSS across all widgets, and thatโs a common mechanism/feature many places where CSS is supported. What do you think about a โhelper widgetโ with the connection to external library or framework, then referencing that helper widget instead of the entire code block?
2 Likes
That sounds very good :grinning:
|
__label__pos
| 0.799794 |
Tag Info
Hot answers tagged
2
You can split this into two stages: Firstly generate some pinecones with such minimum distance between them that you need: On second stage add some pinecones next to those placed already: That way you will have have some single pinecones and some pinecones clumps, but all of those will be separated well apart. You can tweak 1st stage to use slightly ...
2
You need to enable some kind of z-buffering: first render unobstructed units, then buildings, then units that are obstructed (complete or just partially) by buildings and then the ground. If you render them in this order make sure no pixel is overwritten: do not draw over a pixel that has already been drawn, else only the terrain will render. Flush the ...
1
I am not quite sure what you are not understanding here, but I attempt to answer your question anyway. I am not quite getting what that does or what purpose it serves. In cases where you do not fill in the fields in the inspector (which requires them to be public), you need to find the instances of the needed components in code. The ...
1
Your TryToCreateNewMiner function can call it again. If your RandomPercent calculator keeps returning true, the callstack gets deeper and deeper. TryToCreateNewMiner->Miner->StartDigging->TryToCreateNewMiner
1
1) You don't need to calculate the visible objects in a single frame, you may use a bit bigger viewport, and calcultate only 500 objects per frame, if you have 20000 objects and your framerate is 50fps, in 40 frames you will have the right list, and it will take 0.8 secs 2) if your objects are not very complex or are static, sometimes is faster to put them ...
1
Personally, I use a spin on the first option you listed. When checking if A collides with B, however, a series of disqualifiers (or a singular disqualifier) can be used to reduce the computation required for each iteration of your collision checker. For example: if((b.y >= a.y && b.y <= (a.y + a.h)) || ((b.y+b.h) >= a.y && (b.y+b.h) ...
1
You seem not to have a TextMesh component in your game object, and you are not checking if GetComponent method has returned any. You can add it once at runtime if it doesn't exist: var text = gameObject.GetComponent(TextMesh); if( text == null ) { text = gameObject.AddComponent(TextMesh); } Or to make it better, you can make var text a private class ...
1
Simple Answer Use a class to load the original prefab from Resources.Load(string prefabPath, typeof(GameObject)); Store this returned prefab into a resource pool class by path key. For instance: Dictionary<string, GameObject> prefabLookup; You can then grab the original prefab anytime you need it. Use a helper method to automatically load at ...
1
There's many ways to solve this. One way is, yes, to keep the resources attached to the game objects. Each game object would store a handle to its material and its mesh(es). Another approach is to keep a separate hierarchy/list of scene nodes. Each game object would just hold a reference to the scene node that represents the object visually. The scene node ...
1
In modern OpenGL you can almost totally seperate rendered object from each other, using different vaos and shader programs. And even the implementation of one object can be separated into many abstraction layers. For example, if you want to implement a terrain, you could define a TerrainMesh whose constructor creates the vertices and indices for the ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
__label__pos
| 0.647543 |
A Comprehensive Guide to Network Switches
Introduction
In todayโs interconnected world, a reliable and efficient network infrastructure is essential for businesses and individuals alike. One crucial component of a network setup is the network switch. A network switch plays a vital role in connecting devices within a local area network (LAN) and enabling seamless communication and data transfer. In this guide, we will delve into the world of network switches, exploring their functions, types, key features, and important considerations.
Understanding Network Switches
A network switch is a network switch that connects devices within a LAN. It operates at the data link layer (Layer 2) of the OSI model and facilitates the exchange of data packets between connected devices. Unlike hubs, switches are intelligent devices that direct data packets only to the intended recipient, enhancing network performance and security.
Types of Network Switches
Unmanaged Switches: These switches are plug-and-play devices, requiring minimal configuration. They are suitable for small networks with basic connectivity needs.
ย Managed Switches:
These switches provide advanced features and allow administrators to monitor, configure, and control the network. Managed switches are ideal for larger networks requiring enhanced security and scalability.
PoE Switches:
Power over Ethernet (PoE) switches supply power and data over the same Ethernet cable. They simplify network infrastructure by eliminating the need for separate power cables, making them ideal for devices like IP cameras, wireless access points, and VoIP phones.
Layer 2 and Layer 3 Switches:
Layer 2 switches operate at the data link layer and make forwarding decisions based on MAC addresses. Layer 3 switches, also known as multilayer switches, can perform routing functions in addition to Layer 2 switching.
Documentation and Labeling
Maintain a detailed record of your switch configurations, including VLAN setups, port assignments, and security settings. Label the physical connections and document the switchโs location for easy identification and troubleshooting.
Future Trends and Technologies. The field of network switches continues to evolve, with several emerging trends and technologies shaping their future:
Software-Defined Networking:
SDN separates the control plane from the data plane, providing centralized management and programmability. It offers flexibility, scalability, and automation in network configurations.
Intent-Based Networking (IBN):
IBN leverages artificial intelligence and machine learning to automate network management based on desired business outcomes, simplifying network operations and optimizing performance.
Multi-Gigabit and 10 Gigabit Ethernet:
As network bandwidth requirements increase, switches supporting multi-gigabit and 10 Gigabit Ethernet are becoming more prevalent, enabling faster data transfers and supporting high-bandwidth applications.
Network Virtualization:
Network virtualization technologies, such as VXLAN and NVGRE, allow for the creation of virtual networks that operate independently of the physical infrastructure. This enhances network scalability and flexibility.
Troubleshooting and Maintenance
Regular maintenance and troubleshooting are essential for ensuring the smooth operation of your network switch. Here are some tips to help you with this process:
Firmware Updates:
Keep your switchโs firmware up to date by regularly checking for and installing the latest updates from the manufacturer. Updated firmware often includes bug fixes, security enhancements, and new features.
Monitoring and Logging:
Utilize the monitoring and logging capabilities of your managed switch to identify any network anomalies, traffic bottlenecks, or potential security issues. Analyzing logs can provide valuable insights into the performance and health of your network.
Cable and Connection Checks:
Periodically inspect the physical connections between your devices and the switch. Ensure that cables are securely plugged in, free from damage, and properly seated in the ports. Faulty or damaged cables can cause network disruptions.
Troubleshooting Tools:
Familiarize yourself with troubleshooting tools provided by your switch manufacturer. These tools may include features such as port mirroring, packet capture, and loop detection, which can help diagnose and resolve network issues.
Power Cycle:
If you encounter unexpected network problems, try power cycling the switch by turning it off, waiting for a few seconds, and then turning it back on. This simple step can often resolve minor glitches.
Key Features and Considerations
Port Density:
Consider the number of devices that need to be connected to determine the required port density. Ensure the has enough ports to accommodate current and future needs.
Switching Capacity and Speed:
Higher switching capacity and speed allow for faster data transfer and reduce network congestion. Consider the data throughput requirements of your network when choosing a switch.
VLAN Support:
Virtual Local Area Networks (VLANs) enhance network security and performance by segregating network traffic. Look for switches that support VLAN functionality if you require network segmentation.
Quality of Service (QoS):
QoS features prioritize specific types of network traffic, ensuring critical applications receive sufficient bandwidth. This is crucial for voice and video streaming applications.
Redundancy and Link Aggregation:
Redundant power supplies and support for link aggregation (such as LACP) help increase network availability and resilience.
Security Features:
Look for switches with built-in security features like Access Control Lists (ACLs), port security, and DHCP snooping to protect your network from unauthorized access.
Management Options:
Managed switches provide more control and monitoring capabilities. Consider whether a web-based interface, command-line interface, or network management software best suits your needs.
Installation and Configuration
Follow the manufacturerโs guidelines for switch installation, including rack mounting or desktop placement. Configure basic settings like IP address, subnet mask, and default gateway. For managed switches, delve into the configuration options, including VLAN setup, QoS policies, and security settings. Document your configurations for future reference.
Conclusion
A network switch is a fundamental component for establishing a reliable and efficient local area network. Understanding the different types, features, and considerations will empower you to choose the right switch for your network infrastructure, improving performance, security, and scalability.
|
__label__pos
| 0.930282 |
user3438917 user3438917 - 6 months ago 35
AngularJS Question
Displaying data from JSON in AngularJS
I feel like I'm missing something obvious here, but I can't render the JSON data to the page, only able to out put the JSON formatted data, and not just specific keys (e.g., Name: Abbey Malt).
This is my controller:
app.controller('beerController', function($scope, $http, $modal) {
$http.get("http://api.brewerydb.com/v2/fermentables", {
params: {
key: 'xxx'
}
})
.then(function(response) {
$scope.response = response.data.name;
console.log(response);
});
};
And the HTML:
<div ng-controller="beerController">
<div class="container">
<h3>Latest Releases ({{currentPage}} pages)</h3>
<div class="row">
<div ng-repeat="items in response">
<li>{{data.name}}</li>
</div>
</div>
</div>
</div>
Lastly, here's a screenshot of the console and how the JSON object is structured:
enter image description here
Answer
Try to change your code to
app.controller('beerController', function($scope, $http, $modal) {
$http.get("http://api.brewerydb.com/v2/fermentables", {
params: {
key: 'xxx'
}
})
.then(function(response) {
$scope.response = response.data.data;
console.log(response);
});
};
and view
<div>
<div ng-controller="beerController">
<div class="container">
<h3>Latest Releases ({{currentPage}} pages)</h3>
<div class="row">
<div ng-repeat="item in response">
<li>{{item.name}}</li>
</div>
</div>
</div>
</div>
|
__label__pos
| 0.692863 |
Bibm@th
Forum de mathรฉmatiques - [email protected]
Bienvenue dans les forums du site BibM@th, des forums oรน on dit Bonjour (Bonsoir), Merci, S'il vous plaรฎt...
Vous n'รชtes pas identifiรฉ(e).
#1 04-10-2017 15:34:41
Lyne
Invitรฉ
Dm urgent รฉquations du 2eme degrรฉ a rendre vendredi
Bonjour,
j'ai reรงu un dm a faireย jusqu'ร vendredi cependant j'ai รฉtรฉ trรจs occupรฉ par des รฉvaluations et j'ai eu aussi du mal ร comprendre. S'il vous plait aider moi. Merci de toutes participations. Meme si j'ai du mal ร comprendre, je ferais de mon mieux
Partie A:
On considรจre 3 rรฉels a, b, c avec a diffรฉrent de 0. On pose f(x)=ax carrรฉ +bx+ c.
1) On suppose que le trinรดme ax carrรฉ +bx+c possรจde deux racines distinctes. Montrer que la somme des racines vaut -b/a et que leur produit vaut c/a.
2) Les deux formules prรฉcรฉdentes sont-elles encore valables si le trinรดme possรจde une racine double ?
Partie B
3) Deux nombres ont pour somme S et pour produit P. Montrer que ces deux nombres sont solutions de l'รฉquation x carrรฉ - Sx+P=0
4) Inversement justifier que si l'รฉquation x carrรฉ -Sx+P=0 a deux solutions,alors ces deux solutions ont pour somme S et pour produit P.
5) Renรฉ doit rรฉsoudre x carrรฉ -5,5x + 7,5=0. il remarque que 2,5 et 3 ont pour somme 5,5 et pour produit 7,5. Il en dรฉduit que ce sont forcรฉment les solutions de l'รฉquation, et qu'il n'y en a pas d'autres. Sans faire de calcul, pensez-vous qu'il a raison? Pourquoi ?
6) L'รฉquation 2x carrรฉ + 3x -5=0 possรจde 1 comme racine รฉvidente. En dรฉduire d'autre racines sans calculer delta.
7)On cherche deux nombres rรฉels dont la somme est รฉgale au produit. A quelle condition sur leur somme cela est-il possible ?
Partie C:
8) Existe t il un rectangle dont l'aire est 16 cm carrรฉ et le pรฉrimรฉtre 16 cm? Quelles sont alors ses dimensions ?
9)Existe t-il un rectangle dont l'aire est 17 cm carrรฉ et le pรฉrimรจtre 17 cm ? Quelles sont alors ses dimensions?
10) Existe t-il un rectangle dont l'aire est 15 cm carrรฉ et le pรฉrimรจtre 15 cm? Quelles sont alors ses dimensions?
Un grand merci de ma part a tous ceux qui m'aideront. Vraiment Merci
Voici Mes calculs:
Partie A):
1)ย X1+x2= -b- racine delta /2a + -b+racine delta/2a
ย ย ย ย ย =-b-b/2a car les racines s'annulent
ย ย ย ย ย = -2b/2aย ( les 2 s'annulent)
ย ย ย ย ย = -b/a
x1 * x2=ย -b- racine delta /2a * -b+racine delta/2a
ย ย ย ย ย ย = (-b- racine delta)(-b- racine delta)/4a carrรฉ
ย ย ย ย ย ย = (-b) carrรฉ - racine delta carrรฉ/4a carrรฉ
ย ย ย ย ย ย = b carrรฉ - delta/ 4a carrรฉ
ย ย ย ย ย ย = b carrรฉ - (b carrรฉ - 4ac)/4a carrรฉ
ย ย ย ย ย ย = tous s'annulent sauf le 4a carrรฉ
ย ย ย ย ย ย = c/a
2) La j'ai pas compris mais je sais que l'รฉquation a une racine double lorsque delta est nul
Partie B:
a) x+y=S donc y=S-x
x*y=P donc x(S-x)=P
et on fais *(-1) pour retrouver l'รฉquation de dรฉpart.ย Aprรจs pour en รชtre vraiment sur j'ai refais ce calcul en enlevant X et j'ai obtenu y carrรฉ -Sy+P=0
b) Les solutions de l'รฉquation รฉtaient X1= x carrรฉ -Sx + P
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย et X2= y carrรฉ-Sy+P
ainsi il faut faire X1+X2= ( x carrรฉ -Sx + P) + (y carrรฉ-Sy+P)
Aprรฉs j'ai du mal a imaginรฉ le calcul.
c) Je pense qu'il a raison car dans la premiรจre question, on a vu que S et P sont bel et bien des solutions de l'รฉquation .
Aprรฉs tout le reste j'essaye encore de comprendre.
Merci de me corriger et de m'aider ร le faire.
#2 04-10-2017 16:34:38
freddy
Membre chevronnรฉ
Lieuย : Paris
Inscriptionย : 27-03-2009
Messagesย : 6 172
Reย : Dm urgent รฉquations du 2eme degrรฉ a rendre vendredi
Salut,
ce qui est dommage dans cette affaire est que l'objectif pรฉdagogique premier d'un DM est de t'amener ร rรฉflรฉchir sur une thรจme donnรฉ, ici les รฉquations du second degrรฉ. Pour bien rรฉflรฉchir, il faut du temps et toi, tu en manques car il ne te reste que quelques heures et tu n'es pas particuliรจrement ร l'aise sur le sujet.
Si tu รฉtais venu nous solliciter plus tรดt dans la semaine, on t'aurait aidรฉ et tu aurais progressรฉ. Lร , on va te donner quelques indications mais je ne suis pas certain que tu comprennes ce que tu dois comprendre ... C'est bien dommage !
Donc A 1) OK,
A 2) : rรฉflรฉchis quelques secondes de plus !
B 3) OK,
B 4) fais le calcul et dรฉduis ce que tu dois !
B 5) pourquoi tu ne te sers pas de B 3) ?
Merci ร d'autres intervenants (yoshi, tibo, ...) de prendre la main si nรฉcessaire !
Memento Mori ! ...
Hors ligne
#3 04-10-2017 17:20:26
yoshi
Modo Ferox
Inscriptionย : 20-11-2005
Messagesย : 11 589
Reย : Dm urgent รฉquations du 2eme degrรฉ a rendre vendredi
Salut,
Les x au carrรฉ c'est pรฉnible ร lire : utilise plutรดt le ยฒ en dessous de Echap (ou ESC selon les claviers)...
Attention, moi je respecte la prioritรฉ des opรฉrations parce que je vois qu'il manque des parenthรจses...
En effet avec stylo + papier, ton -b- racine delta /2a s'รฉcrit : [tex]-b-\frac{\sqrt{\Delta}}{2}\times a[/tex]
รcris plutรดt : (-b-racine Delta)/(2a)
Sinon, question 1, ok !
Q2 formules toujours valables avec racine double ?
[tex]x_1=\frac{-b-\sqrt{\Delta}}{2a}[/tex]
[tex]x_2=\frac{-b+\sqrt{\Delta}}{2a}[/tex]
racine double donc oui [tex]\Delta=0[/tex]
[tex]x_1=x_2=\frac{-b}{2a}[/tex]
Donc :
S = ?
et [tex]P=\frac{-b}{2a}\times \frac{-b}{2a}=\cdots[/tex]
Mais lร tu n'as pas fini :
si [tex]\Delta=0[/tex], c'est que [tex]b^2-4ac = 0[/tex] et tu vas pouvoir simplifier P...
Partie B
Q3
oui
Q4
Les solutions de l'รฉquation รฉtaient X1= x carrรฉ -Sx + Pย et X2 = y carrรฉ-Sy+P
Non.
Pourquoi donc ?
Dans ce sens, c'est plus facile :
[tex]x^2-Sx+P=0[/tex]
Donc [tex]\Delta =S^2-4P[/tex]
[tex]x_1=\frac{S-\sqrt{\Delta}}{2}[/tex]
[tex]x_2=\frac{S+\sqrt{\Delta}}{2}[/tex]
Maintenant tu fais [tex]x_1+x_2[/tex]ย ... facile
Puis
[tex]x_1x_2=\cdots[/tex]
Lร , tu auras besoin de savoir que [tex]\Delta=S^2-4P[/tex]
Q5
Le produit de 2 nombres de mรชme signe est positif.
Conclusion provisoire pour le signe des solutions ?
Pour aller plus loin, pense ร la somme...
La somme de 2 nรฉgatifs est nรฉgatifs est nรฉgative, de deux positifs est positive...
Donc comment sont les signes des solutions ?
Sans calculs, on ne peut pas faire mieux...
Mais c'est suffisant pour l'instant
Le : et qu'il n'y en a pas d'autres est plutรดt marrant.
S'il y en avait deux autres, รงa en ferait quatre !
Une รฉquation du 2nd degrรฉ n'a que 2 solutions...
Donc conclusion 2,5 et 3 sont-ils solution ?
Q6
Sers-toi de Q3 et Q4ย : [tex]P=x_1x_2=\frac c a[/tex]
Ici P= ?
si x1=1, combien vaut x2 ?
Q7
[tex]x^2-Sx+P=0[/tex]
Mais P = S, donc [tex]x^2-Sx+S=0[/tex]
Regarde [tex]\Delta[/tex]...
Partie C
Q8
[tex]x_1x_2=16[/tex] , [tex]x_1+x_2=16[/tex]ย rien de spรฉcial
Q9
[tex]x_1x_2=17[/tex]ย ; [tex]x_1+x_2=17[/tex]ย ย Dรฉcevant et aucun intรฉrรชt : pas de piรจge. J'aurais donnรฉ 3 et non 15 et on aurait bien ri...
Q10
[tex]x_1+x_2=15[/tex]ย ; [tex]x_1+x_2=15[/tex]
Voir ci-dessus
Arx Tarpeia Capitoli proxima...
En ligne
#4 05-10-2017 20:33:46
Lyne
Invitรฉ
Reย : Dm urgent รฉquations du 2eme degrรฉ a rendre vendredi
Bonsoir,
pour la partie B:
4)en gros รงa fait X1+X2 = -b/a=-s/1
et pour =X1*X2= c/a=P/1ย ย (Mon prof m'a aidรฉ ) mais s'il te plait pourras- tu m'รฉcris la fin du calcul que tu as commencรฉ.
5) 2,5+3=5,5
ย ย ย 2,5*3=7,5ย ย ย les signes des solutions est positifs donc delta est positif = il n'y a que deux solutions .
Donc 2,5 et 3 sont les solutions
(Et la consigne affirme qu'il ne faut pas faire de calcul.)
6) P= X1*X2=c/a =-5/4= -1,25
1*X2= -1,25
ย ย X2= -1,25/ 1 = -1,25ย
Donc X2 vautย - 1,25
7)ย je pense que xยฒ-Sx+S =0
ย ย ย ย ย ย ย ย ย ย ย ย delta= bยฒ-4ac
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย =ย -S-4*1*S
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย = -S-4S
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย = -5S
Partie C:
a) cela marche avec 4*4=16 et 2L+2l=16 est รฉquivalent a 2*4+2*4=16
Par contre la b) et la c) je ne comprends pas, ร moins de donner le rรฉsultat au pif
Merci de me corriger et de m'aider, mes remerciements
#5 06-10-2017 05:18:43
Lyne
Invitรฉ
Reย : Dm urgent รฉquations du 2eme degrรฉ a rendre vendredi
IL me reste encore jusqu'a 13h pour poster le devoir.
Merci de votre aide
#6 06-10-2017 06:33:21
yoshi
Modo Ferox
Inscriptionย : 20-11-2005
Messagesย : 11 589
Reย : Dm urgent รฉquations du 2eme degrรฉ a rendre vendredi
Re,
Rapidement, j'ai RDV chez le Kinรฉ ร 9 h. Je reprends au retour...
7)ย je pense que xยฒ-Sx+S =0
ย ย ย ย ย ย ย ย ย ย ย ย delta= bยฒ-4ac
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย =ย -S-4*1*S
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย = -S-4S
ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย ย = -5S
M'enfin !...
[tex]\Delta = (-S)^2-4S=S^2-4S=S(S-4)[/tex]
Puisque S est la somme deux longueurs, il ne peut รชtre nรฉgatif...
[tex]\Delta[/tex] n'est donc positif que S>4 !!!
Partie C:
a) cela marche avec 4*4=16 et 2L+2l=16 est รฉquivalent a 2*4+2*4=16
Par contre la b) et la c) je ne comprends pas, ร moins de donner le rรฉsultat au pif
S=16 donc S>4 oui il y a des solutions
[tex]X^2-16X+16=0[/tex]
[tex]\Delta = 16^2-64 = 156-64 = 192[/tex]
[tex]\text{largeur, Longueur }= \dfrac{16\pm\sqrt{192}}{2}= \dfrac{16\pm8\sqrt{3}}{2}=8\pm4\sqrt 3[/tex]
Pour b) et c) Mรชme mรฉthode...
pour la partie B:
4)en gros รงa fait X1+X2 = -b/a=-s/1
et pour =X1*X2= c/a=P/1ย ย (Mon prof m'a aidรฉ ) mais s'il te plait pourras- tu m'รฉcris la fin du calcul que tu as commencรฉ.
[tex]P=x_1x_2=x_1^=x_2^2=\frac{b^2}{4a^2}[/tex]
Mais [tex]\Delta = 0 = b^2-4c[/tex]
Donc [tex]b^2=4ac[/tex] que l'on remplace !
[tex]P=\frac{b^2}{4a^2}=\frac{4ac}{4a^2}[/tex] que tu simplifies...
5) 2,5+3=5,5
ย ย ย 2,5*3=7,5ย ย ย les signes des solutions est positifs donc delta est positif = il n'y a que deux solutions .
Donc 2,5 et 3 sont les solutions
(Et la consigne affirme qu'il ne faut pas faire de calcul.)
P >0 donc les solutions sont soit toutes deux positives, soit toutes deux nรฉgatives.
Mais S>0, donc les deux solutions sont toutes deux positives...
S'il y a des solutions, ce sont donc bien celles-lร ...
Maintenant y a-t-il des solutions ?
Sans calculs, c'est dur ร dire... Il faut savoir que [tex]\Delta >0[/tex]
Mais ta proposition est fausse, voilร un contre exemple :
[tex]x^2-3x+6=0[/tex]
Si je raisonne comme toi :
S=3, P=6 si les solutions existent elles sont toutes deux positives, toi tu conclurais donc que [tex]\Delta>0[/tex]
Pourtant : [tex]\Delta = (-3)^2-4\times 6 = 9-24 <0[/tex]...
@+
Arx Tarpeia Capitoli proxima...
En ligne
#7 06-10-2017 08:20:00
yoshi
Modo Ferox
Inscriptionย : 20-11-2005
Messagesย : 11 589
Reย : Dm urgent รฉquations du 2eme degrรฉ a rendre vendredi
Re,
Je reprends :
5) 2,5+3=5,5
ย ย ย 2,5*3=7,5ย ย ย les signes des solutions est positifs donc delta est positif = il n'y a que deux solutions .
Donc 2,5 et 3 sont les solutions
(Et la consigne affirme qu'il ne faut pas faire de calcul.)
On va essayer de contourner la consigne : pas de calculs.
Tu vas lui dire que tu ne fais pas de calculs, que tu vas juste utiliser tes tables de multiplications que tu connais par coeur depuis la Primaire... ^_^
Ici a=1. Or le a ne "sert ร rien" dans la multiplication (la justification exacte, que tune connais probablement pas est : le 1 est รฉlรฉment neutre pour la multiplication)
Donc [tex]\Delta=b^2-4c[/tex]
Il y a deux solutions si [tex]b^2>4c[/tex]
Comme on ne doit pas faire de calcul, on va
a) arrondir b et cย ร 5 et 8 : bยฒ c'estย [tex]b\times b=5\times 5[/tex] et la table de 5 dit que รงa fait 25
ย ย 4c c'est [tex]4\times 8[/tex] et la table de 4 dit 32... 25<32
b) arrondir b et cย ร 5 et 7
ย ย [tex] 4c = 4\times 7 = 28[/tex] d'oรน bยฒ <4c...
c) arrondir b et cย ร 6 et 7
ย ย [tex]b\times b=6\times 6=36[/tex]ย et[tex] 4c = 4\times 7 = 28[/tex]
ย bยฒ>4c
d) arrondir b et c ร 6 et 8
ย ย 4c = 32
ย ย bยฒ>4c
Donc, ne peut pas rรฉpondre sans calcul du discriminant : rien n'est รฉvident.
La seule rรฉponse est bien :
SI les solutions existent, ce sont bien 2,5 et 3...
6) L'รฉquation 2xยฒ+ 3x -5=0 possรจde 1 comme racine รฉvidente. En dรฉduire d'autre racines sans calculer delta.
Il y a une racine รฉvidente dit l'รฉnoncรฉย et 2xยฒ-3x+5 n'est pas un carrรฉ donc, il y en a bien une 2e...
P<0 donc une solution positive, l'autre nรฉgative
[tex]P=-\frac 5 2=-2,5[/tex]
On pose [tex]x'=1[/tex] et on cherche [tex]x''[/tex] : [tex]1\times x"=-2,5[/tex], donc [tex]x"= ?[/tex]
@+
Arx Tarpeia Capitoli proxima...
En ligne
Rรฉponse rapide
Veuillez composer votre message et l'envoyer
Nom (obligatoire)
E-mail (obligatoire)
Message (obligatoire)
Programme anti-spam : Afin de lutter contre le spam, nous vous demandons de bien vouloir rรฉpondre ร la question suivante. Aprรจs inscription sur le site, vous n'aurez plus ร rรฉpondre ร ces questions.
Quel est le rรฉsultat de l'opรฉration suivante ?65 + 19
Systรจme anti-bot
Faites glisser le curseur de gauche ร droite pour activer le bouton de confirmation.
Attention : Vous devez activer Javascript dans votre navigateur pour utiliser le systรจme anti-bot.
Pied de page des forums
|
__label__pos
| 0.513998 |
Ralf Marmorglatt Ralf Marmorglatt - 8 months ago 38
Python Question
Python - does range() mutate variable values while in recursion?
While working on understanding string permutations and its implementation in python (regarding to this post) I stumbled upon something in a
for
loop using
range()
I just don't understand.
Take the following code:
def recursion(step=0):
print "Step I: {}".format(step)
for i in range(step, 2):
print "Step II: {}".format(step)
print "Value i: {}".format(i)
print "Call recursion"
print "\n-----------------\n"
recursion(step + 1)
recursion()
That gives the following output:
root@host:~# python range_test.py
Step I: 0
Step II: 0
Value i: 0
Call recursion
-----------------
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
Step I: 2
Step II: 0 <---- WHAT THE HECK?
Value i: 1
Call recursion
-----------------
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
Step I: 2
root@host:~#
As you can see the variable
step
gets a new value after a certain
for
loop run using
range()
- see the
WHAT THE HECK
mark.
Any ideas to lift the mist?
Answer Source
Your conclusion is incorrect. step value does not change by using range. This can be verified as:
def no_recursion(step=0):
print "Step I: {}".format(step)
for i in range(step, 2):
print "Step II: {}".format(step)
print "Value i: {}".format(i)
no_recursion(step=2)
which produces the output:
Step I: 2
which is expected since range(2,2) returns [].
The illusion that step changes its value to 0 comes since the function recursion (called with step=2) returns after it prints Step I: 2, then control is returned to function recursion (called with step=1) which immediately returns since its for loop has terminated, and then control is returned to recursion (called with step=0) which continues since it has 1 iteration left, and that prints Step II: 0 to console which is no surprise. This can be simpler to observe if we modify the code a little bit (by adding function entry and exit logging) and observe the output:
def recursion(step=0):
print "recursion called with [step = {}]".format(step) # add entry logging
print "Step I: {}".format(step)
for i in range(step, 2):
print "Step II: {}".format(step)
print "Value i: {}".format(i)
print "Call recursion"
print "\n-----------------\n"
recursion(step + 1)
print "--> returned recursion [step = {}]".format(step) # add exit logging
recursion()
the output produced by this code is :
recursion called with [step = 0]
Step I: 0
Step II: 0
Value i: 0
Call recursion
-----------------
recursion called with [step = 1]
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
recursion called with [step = 2]
Step I: 2
--> returned recursion [step = 2]
--> returned recursion [step = 1]
Step II: 0
Value i: 1
Call recursion
-----------------
recursion called with [step = 1]
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
recursion called with [step = 2]
Step I: 2
--> returned recursion [step = 2]
--> returned recursion [step = 1]
--> returned recursion [step = 0]
where we can clearly see that order in which recursion unfolds and observe that value of step is consistent in each step.
|
__label__pos
| 0.99321 |
Concerned your API keys and other secrets are out in the open?
Free, no obligation API Leaks Assessment
Privacy settings
We use cookies and similar technologies that are necessary to run the website. Additional cookies are only used with your consent. You can consent to our use of cookies by clicking on Agree. For more information on which data is collected and how it is shared with our partners please read our privacy and cookie policy: Cookie policy, Privacy policy
We use cookies to access, analyse and store information such as the characteristics of your device as well as certain personal data (IP addresses, navigation usage, geolocation data or unique identifiers). The processing of your data serves various purposes: Analytics cookies allow us to analyse our performance to offer you a better online experience and evaluate the efficiency of our campaigns. Personalisation cookies give you access to a customised experience of our website with usage-based offers and support. Finally, Advertising cookies are placed by third-party companies processing your data to create audiences lists to deliver targeted ads on social media and the internet. You may freely give, refuse or withdraw your consent at any time using the link provided at the bottom of each page.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.
API Security
What is multifactor authentication (MFA)?
What is multifactor authentication (MFA)?
If youโre a professional, dealing with API or system security, then multi factor authentication wonโt be an unfamiliar term. Afterall, it is the spine of system security. Used at multiple places and for various purposes, it is a real savior against online vulnerabilities for all of us.ย
In this post, weโre going to get into the details of this technology and explain why using this one is a wise move to make.
Learning Objectives
What is Multi factor Authentication (MFA)?
As per the accepted Multi factor Authentication definition, this is a high-end security technology clubbing a couple of authentication methods in one place so that the userโs identity is confirmed before using software/products or before making any transaction.ย ย
Mostly, this technology combines passwords, security tokens, and biometric verification together. This is done to make sure that there is a multi-layer defense system guarding a product or transaction. Such a robust system keeps unauthorized access from the targeted product/technology and reduces the odds of security breaches, data thefts, and online frauds.
Multi Factor Authentication is commonly used to protect computing devices, databases, networks, and software.
Multi-factor Authentication
Why Is Multifactor Authentication Important?
Now that the basic MFA definition is clear, letโs talk about its importance. In the past few years, hackers have become very smart and are able to crack even the toughest passwords. So, if youโre thinking that a complex password can safeguard your computing device and network then youโre mistaken big time.ย
Organizations of all sorts have been victimized by data theft, phishing, brute-force attacks, password stealing, and various other kinds of online fraud. These attacks on cybersecurity and computing devices are going to make the world lose $10.5 trillion by the end of 2025, says the recent market research.
The use of multifactor authentication adds multiple security layers, which is hard to decode. Hackers will surrender in front of it. Hence, youโll face fewer security risks.
ย
When Should I use MFA?
Honestly speaking, you should bring MFA into action whenever you want to protect your sensitive information, devices, software, databases, or any other sort of digital asset. Most people use it to access their email boxes, financial accounts, and health records.
From an organizationsโ point of view, MFA is used to verify the user identity whenever access to the database, computing device, and network is required.
How it works?
Multifactor authentication uses a structured approach to verify the userโs identity. This approach includes asking for added verification credentials, one after another. For different categories, verification factors are diverse.
One of the widest verification processes is OTP or one-time passwords. OTPs are usually 4-8 digit codes, shared with the user via SMS, emails, and phone calls. Each time a new code is generated as OTP based on a seed value.ย ย
Types of Multi Factor Authentication
Based upon the information asked or shared with the end-user to complete the process, there are three categories of MFA. We have explained them in detail below.
Knowledge Factor
Based on knowledge, this factor involves answering certain security questions by the end-user. Some of the most common tools used to make this happen are the use of passwords, PINs, and OTP in scenarios such as:
ย Using a debit/credit for payment at multiple outlets needs entering a PIN.
โข Entering the information like a pet name or previous address each time one needs to gain access to any particular system.ย ย
โข Using a VPN client with a verified digital certification and connecting to it each time you access a network.
Possession Factorย
It involves sharing the user's possession details to enter a network or computing device. Use of badge, token, SIM card and key fob are commonly used possession factors for authentication.ย ย ย
Inherence Factorย
Lastly, we have an inherence factor that requires sharing the details of the userโs biological traits to confirm the login. Each human has distinct biometric traits. Hence, such authentication has very low odds of manipulations and tempering.ย
The most common information used by Inherence factor technologies is scanning of retina/iris/fingerprints, voice authentication, verification of hand or earlobe geometry, facial recognition, and digital signatures.ย ย
In this type of authentication factor, a device/software is used to scan the biometric traits and compare their details with the stored use cases. Based on this, a match is found or the user is found unauthorized.
types of multifactor authentication
Examples of Multi Factor Authentication
MFA has become a common practice and is used by almost everyone. Some may use it for all the users while few keep it for a certain group. To give you better clarity on the use of the actual world, we have come up with some of the real-life examples:
1. Each time you log in to your internet banking, you provide a username and password, along with sharing of OTP. This is a multi factor authentication example.ย
2. Companies are using retina scans or fingerprint scans for employees before granting access to the database.ย
3. Open Banking Limited is a UK-based non-profit organization using Trust framework, identity, and dynamic client registration to initiate a transaction.ย
4. Etsy is using a multi-level security solution with the userโs smartphone in place of the unreliable token.ย
Benefits And Drawbacks of MFA
If there is anything that grants ultimate peace of mind to individuals and organizations about safe access to the organizationโs digital assets then itโs the use of multi-factor authentication.ย
Here are some of the key perks to relish over after bringing this technology into action:
โข It can safeguard hardware, software, database, and networks with the same ease and excellence.ย ย
โข The real-time generated OTPs are hard to decode for the hackers.
โข Its usage with passwords can trim down hacking or data-breaking incidents by 99%.
โข No high-end technical skills are required to set up.
โข Security technologies can be modified as per the need of the hour.
โข It allows organizations to keep unwanted expenses like loss due to data theft at bay and deliver better ROI.
โข For sectors like e-commerce, banking, and financial dealing, the use of MFA builds trust in the customers and gives them the confidence to proceed. This has a direct positive impact on sales and customer retention.
While there is no second opinion about the fact that multifactor authentication is the Knight in shining armor, itโs not always a win-win situation as there are certain drawbacks. For instance:
โข Having a phone is a primary prerequisite to bring MFA into action.ย
โข If hardware tokens are used, the risk of losing them is high. One has to remain highly diligent about it.ย
โข Once the phone is lost or damaged, the stored MFA-related information can also be lost.ย
โข Biometric data has a probability of showing false negatives and positives.ย
โข MFA verification depends on the network connectivity and can fail to help you out when there is an internet outage.ย
โข Constant update and upgrade are required.ย
Two Factor Authentication vs Multi Factor Authentication
Like the 2 faces of the same coin, two-factor and multifactor authentication are like hands in gloves. However, they are a bit different from each other.ย
The difference is very basic and is clear from the title itself. In two-way factor authentication, only two factors are used to verify the userโs identity. On the other hand, MFA uses more than two sorts of factors to authenticate access.ย
Two Factor Authentication vs Multi Factor Authentication
The Role of Multifactor Authentication in API Security
MFA is used in various domains. API security is one of them. Adding multi-factor authentication in API is a sure shot way to double the API-security and keep the code safe.ย
Itโs wise to introduce MFA in the early stage as this keeps the API secure and away from unauthorized access. Doing so reduces the incidents of the introduction of bugs in the code and allows developers to create functional and viable APIs.ย
Whether you use RESTful API or any other kind of API, adding multifactor authentication is a must move to make. Here are some of the ways using which one can introduce MFA in API security:
โข Add an access token like OAUTH 2.0 for the APIย
โข Generating an access keyย
โข Using Factor APIsย
โข Using a single Sign-on or mobile sign-in login processย
In addition to this, there are certain APIs that are already backed with multi-factor authentication. Using such APIs makes their security high-end in the least possible efforts.ย
mfa
Ending Notesย
Multifactor authentication is one among multiple methods available to make the IT ecosystem secure and robust enough to keep any authorized access from crucial information at bay. Itโs just an added step to take towards unmatched peace of mind.ย
There are multiple ways to introduce it into the system. Pick what suits you the most and move ahead. This one step will keep your computing devices, databases, and network safe and far away from the reach of intruders.ย
If you havenโt thought of using it for APIs then do it now as it leads to the development of secure and viable APIs and applications. Bug-free performance and optimized service delivery are sure things by introducing multi factor authentication for multiplied security.
FAQ
Subscribe for the latest news
|
__label__pos
| 0.537635 |
Tell me more ร
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
I am using m2eclipse 0.10.2 and eclipse helios/ajdt. I remember that m2eclipse is managing the inpath for eclipse configuration (at least in eclipse galileo)
right now, it doesn't manage it for me any more and I don't know why. This is my plugin configuration:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.3</version>
<configuration>
<complianceLevel>1.6</complianceLevel>
<source>1.6</source>
<aspectLibraries>
<aspectLibrary>
<groupId>org.springframework</groupId>
<artifactId>spring-aspects</artifactId>
</aspectLibrary>
</aspectLibraries>
</configuration>
<dependencies>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjtools</artifactId>
<version>1.6.2</version>
</dependency>
</dependencies>
</plugin>
I can add my aspectLibrary to the inpath manually and eclipse adds it to the .classpath file like this:
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" output="target/classes" path="src/main/java"/>
<classpathentry excluding="**" kind="src" output="target/classes" path="src/main/resources"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.6"/>
<classpathentry kind="con" path="org.maven.ide.eclipse.MAVEN2_CLASSPATH_CONTAINER">
<attributes>
<attribute name="org.eclipse.ajdt.inpath.restriction" value="spring-aspects-3.0.4.RELEASE.jar"/>
<attribute name="org.eclipse.ajdt.inpath" value="org.eclipse.ajdt.inpath"/>
</attributes>
</classpathentry>
<classpathentry kind="con" path="org.eclipse.ajdt.core.ASPECTJRT_CONTAINER"/>
<classpathentry kind="output" path="target/classes"/>
</classpath>
When i configure my project (right-click > maven) and select "Update Project Configuration", it looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" output="target/classes" path="src/main/java"/>
<classpathentry excluding="**" kind="src" output="target/classes" path="src/main/resources"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.6"/>
<classpathentry kind="con" path="org.maven.ide.eclipse.MAVEN2_CLASSPATH_CONTAINER"/>
<classpathentry kind="con" path="org.eclipse.ajdt.core.ASPECTJRT_CONTAINER"/>
<classpathentry kind="output" path="target/classes"/>
</classpath>
so my inpath is gone and i don't see any aspect markers anymore.
Does anybody can give me some advice? Is it working on your site? Can you send me the steps and pom config to let m2eclipse mange my ajdt inpath?
BTW: I am having a multi-module.
regards J
share|improve this question
ย
I'm having the same problems with m2eclipse. Too bad there is no answer to this one... โย Jens Aug 24 '12 at 13:01
add comment
1 Answer
I'm seeing this as well, but couldn't find any Jira issue assigned to the project. Do you intend to open one?
share|improve this answer
ย
I have switched to maven eclipse plugin (maven.apache.org/plugins/maven-eclipse-plugin) . I was trying to use m2eclipse for two years and i always had problems. Like you can debug into your jar source code. You have to add manually every single jar. some other things are suddenly not working. Soemtimes i have to rebuild/clean over and over again. And suddenly my compile errors went away, i didn't know why. With the regular maven plugin you dont have such problems, you always know what the plugin is doing. โย Janning Oct 15 '10 at 12:52
add comment
Your Answer
ย
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.533357 |
Changes between Initial Version and Version 1 of TracAdmin
Ignore:
Timestamp:
Oct 20, 2016, 11:38:52 AM (5 years ago)
Author:
trac
Comment:
--
Legend:
Unmodified
Added
Removed
Modified
โข TracAdmin
v1 v1 ย
ย 1= TracAdmin
ย 2
ย 3[[PageOutline(2-5, Contents, floated)]]
ย 4[[TracGuideToc]]
ย 5
ย 6Trac is distributed with a powerful command-line configuration tool. This tool can be usedย to configure and customize your Trac-installation to better fit your needs.
ย 7
ย 8Some of those operations can also be performed via the web administration module.
ย 9
ย 10== Usage
ย 11
ย 12For nearly every `trac-admin` command, you'll need to specify the path to the TracEnvironment that you want to administer as the first argument, for example:
ย 13{{{
ย 14trac-admin /path/to/projenv wiki list
ย 15}}}
ย 16
ย 17The only exception is for the `help` command, but even in this case if you omit the environment, you'll only get a very succinct list of commands (`help` and `initenv`), the same list you'd get when invoking `trac-admin` alone.
ย 18Also, `trac-admin --version` will tell you about the Trac version (e.g. 0.12) corresponding to the program.
ย 19
ย 20If you want to get a comprehensive list of the available commands and sub-commands, you need to specify an existing environment:
ย 21{{{
ย 22trac-admin /path/to/projenv help
ย 23}}}
ย 24
ย 25Some commands have a more detailed help, which you can access by specifying the command's name as a subcommand for `help`:
ย 26
ย 27{{{
ย 28trac-admin /path/to/projenv help <command>
ย 29}}}
ย 30
ย 31=== `trac-admin <targetdir> initenv` === #initenv
ย 32
ย 33This subcommand is very important as it's the one used to create a TracEnvironment in the specified `<targetdir>`. That directory must not exist prior to the call.
ย 34
ย 35[[TracAdminHelp(initenv)]]
ย 36
ย 37It supports an extra `--inherit` option, which can be used to specify a global configuration file which can be used to share settings between several environments. You can also inherit from a shared configuration afterwards, by setting the `[inherit] file` option in the `conf/trac.ini` file in your newly created environment, but the advantage of specifying the inherited configuration file at environment creation time is that only the options ''not'' already specified in the global configuration file will be written in the created environment's `conf/trac.ini` file.
ย 38See TracIni#GlobalConfiguration.
ย 39
ย 40Note that in version 0.11 of Trac, `initenv` lost an extra last argument `<templatepath>`, which was used in previous versions to point to the `templates` folder. If you are using the one-liner '`trac-admin /path/to/trac/ initenv <projectname> <db> <repostype> <repospath>`' in the above and getting an error that reads ''''`Wrong number of arguments to initenv: 4`'''', then this is because you're using a `trac-admin` script from an '''older''' version of Trac.
ย 41
ย 42== Interactive Mode
ย 43
ย 44When passing the environment path as the only argument, `trac-admin` starts in interactive mode.
ย 45Commands can then be executed on the selected environment using the prompt, which offers tab-completion
ย 46(on non-Windows environments, and when the Python `readline` module is available) and automatic repetition of the last command issued.
ย 47
ย 48Once you're in interactive mode, you can also get help on specific commands or subsets of commands:
ย 49
ย 50For example, to get an explanation of the `resync` command, run:
ย 51{{{
ย 52> help resync
ย 53}}}
ย 54
ย 55To get help on all the Wiki-related commands, run:
ย 56{{{
ย 57> help wiki
ย 58}}}
ย 59
ย 60== Full Command Reference
ย 61
ย 62You'll find below the detailed help for all the commands available by default in `trac-admin`. Note that this may not match the list given by `trac-admin <yourenv> help`, as the commandsย pertaining to components disabled in that environment won't be available and conversely some plugins activated in the environment can add their own commands.
ย 63
ย 64[[TracAdminHelp()]]
ย 65
ย 66----
ย 67See also: TracGuide, TracBackup, TracPermissions, TracEnvironment, TracIni, [trac:TracMigrate TracMigrate]
|
__label__pos
| 0.976793 |
Index Index for
Section 8
Index Alphabetical
listing for M
Bottom of page Bottom of
page
makedbm(8)
NAME
makedbm - Makes a Network Information Service (NIS) map file
SYNOPSIS
/var/yp/makedbm [-i yp_input_file] [-s yp_secure_name] [-a method] [-o yp_output_name] [-d yp_domain_name] [-m yp_master_name] infile outfile /var/yp/makedbm [-a method] -u infile
OPTIONS
-a method Specifies that NIS maps are to be stored in one of the following formats: b btree -- Recommended when creating and maintaining very large maps. d dbm/ndbm -- For backward compatibility. This is the default. h hash -- A potentially quicker method for managing small maps. -i yp_input_file Creates a special entry with the key YP_INPUT_FILE, which is set to the specified value. -s yp_secure_name Creates a special entry with the key YP_SECURE, which is set to the specified value. This causes the makedbm command to write a secure map. -o yp_output_name Creates a special entry with the key YP_OUTPUT_NAME, which is set to the specified value. -d yp_domain_name Creates a special entry with the key YP_DOMAIN_NAME, which is set to the specified value. -m yp_master_name Creates a special entry with the key YP_MASTER_NAME, which is set to the specified value. If no master host name is specified, YP_MASTER_NAME will be set to the local host name. -u Undoes a dbm file. Prints the file to standard output in a plain text format, one entry per line, with a single space separating keys from values. This option processes dbm/ndbm-formatted files by default. To undo btree or hash files, you must use the -a option in combination with the -u option to specify the format.
DESCRIPTION
The makedbm command takes the file specified by the argument infile and converts it to a single file or a pair of files in dbm(3), btree(3), or hash(3) format. The dbm(3) files are stored as outfile.pag and outfile.dir, the btree(3) files are stored as outfile.btree, and the hash(3) files are stored as outfile.hash. Each line of the input file is converted to a single record. All characters up to the first tab or space form the key, and the rest of the line is defined as the key's associated data. If a line ends with a backslash (\), the data for that record is continued onto the next line. It is left for the Network Information Service (NIS) clients to interpret the number sign (#); makedbm does not treat it as a comment character. The infile parameter can be a hyphen (-), in which case makedbm reads the standard input. The makedbm command is meant to be used in generating database files for NIS. The makedbm command generates a special entry with the key YP_LAST_MODIFIED, which is set to the modification date from infile.
RESTRICTIONS
You must use the same database format for each map in a domain. In addition, a server serving multiple NIS domains must use the same database format for all domains. Although a Tru64 UNIX NIS server that takes advantage of btree files will be able to store very large maps, NIS slave servers that lack this feature might have a much smaller limit on the number of map entries they can handle. It may not be possible to distribute very large maps from a Tru64 UNIX NIS master server to a slave server that lacks support for very large maps. NIS clients are not affected by these enhancements.
EXAMPLES
1. The following example shows how a combination of commands can be used to make the NIS dbm files passwd.byname.pag and passwd.byname.dir from the /etc/passwd file: % awk 'BEGIN { FS = ":"; OFS = "\t"; } { print $1, $0 }' /etc/passwd > ptmp % /var/yp/makedbm ptmp /var/yp/domain_name/passwd.byname % rm ptmp The awk command creates the ptmp file, which is in a form usable by makedbm. The makedbm command uses this temporary file to create the database files, which it stores in the map file directory for your domain, /var/yp/domain_name. The rm command removes the ptmp file. 2. The following example shows how to create the same passwd.byname map in btree format: /var/yp/makedbm -a b ptmp /var/yp/domain_name/passwd.byname This command outputs a file called passwd.byname.btree and stores it in the map file directory for your domain, /var/yp/domain_name. 3. The following example shows how to undo a hash-formatted ypservers map and put the output into a temporary file for editing: /var/yp/makedbm -a h -u /var/yp/domain_name/ypservers > tmpfile You might undo the ypservers map in this manner if you need to add or remove a slave server from the domain. See Network Administration: Services for the full procedure and scripts to automate this process.
SEE ALSO
Commands: yppasswd(1), ypmake(8) Functions: btree(3), dbm(3), dbopen(3), hash(3), ndbm(3) Network Administration: Services
Index Index for
Section 8
Index Alphabetical
listing for M
Top of page Top of
page
|
__label__pos
| 0.893211 |
Take the 2-minute tour ร
Programmers Stack Exchange is a question and answer site for professional programmers interested in conceptual questions about software development. It's 100% free, no registration required.
I've heard this term bandied about, especially in reference to web development, but I'm having trouble getting a clear understanding of what an Information Architect actually does.
Any ideas what their role or deliverables would be?
share|improve this question
1 ย
2 Answers 2
up vote 2 down vote accepted
As I understand it, an Information Architect decides how information will be organized and presented on a web site. This would include navigation, aggregation, presentation (more what is presented than how) and access control (including security and filtering). Think of them as responsible for "content design", as opposed to "site design".
share|improve this answer
I.A.I. definition
Information architecture is defined by the Information Architecture Institute as:
1. The structural design of shared information environments.
2. The art and science of organizing and labeling web sites, intranets, online communities, and software to support findability and usability.
3. An emerging community of practice focused on bringing principles of design and architecture to the digital landscape.
share|improve this answer
ย ย ย
+1 for the link - they even have a journal: journalofia.org โย Gary Rowe Nov 29 '10 at 19:43
ย ย ย
...but what do they actually do? โย Alison Nov 29 '10 at 21:09
Your Answer
ย
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.76347 |
Take the 2-minute tour ร
MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.
Hello everyone,
Suppose that I am defining a function which embeds a surface (manifold) in $\mathbb{R}^3$.
Is there a standard symbol or letter that is used for this function?
Additionally, is there any other classical or standard notation (such as the hooked arrow for inclusion maps) of which I ought to be aware?
Regards,
Christopher
share|improve this question
3 ย
Most embeddings I've met were called a lower case Roman letter between $e$ and $k$. As far as arrows, I prefer $\hookrightarrow$ for an embedding and $\looparrowright$ for an immersion, but you sometimes also see $\subset$ for an embedding and $\subseteq$ for an immersion. โย Mark Grant Aug 13 '12 at 12:30
2 Answers 2
up vote 3 down vote accepted
I usually use $\iota$ (\iota) for all kinds of embeddings.
share|improve this answer
ย ย ย
Thanks Wolfgang, much appreciated. โย Chris Aug 15 '12 at 0:00
No, there is none.
share|improve this answer
1 ย
Thanks Dox, I hadn't run into anything obviously standard in the literature, but I wanted to ask before I put it in my thesis! โย Chris Aug 15 '12 at 0:00
ย ย ย
You're welcome! (even if mine contribution was useless) โย Dox Aug 15 '12 at 1:55
Your Answer
ย
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.865934 |
Data Backup and Recovery
Snapvault baseline snapshot rollover
Willybobs27
I have a snapvault relationship that hasn't been updated for a while.
The current baseline snapshot on the front end is now consuming so much space that the front end is running out of space and SnapManager backups fail.
If I delete the oldest snapshot which is also the snapvault baseline is there any way I can preserve the current backups?
Is there a document that shows how the snapvault front end snapshots roll over?
1 REPLY 1
ogra
SnapVault snapshots roll up as soon as the SV update has triggered.
I think your SV update is not happening.
Do you know how is it configured ?
Public
|
__label__pos
| 0.998934 |
Man page of Glib::Object
Glib::Object
Section: User Contributed Perl Documentation (3pm)
Updated: 2014-08-29
Index?action=index Return to Main Contents
NAME
Glib::Object - Bindings for GObject
DESCRIPTION
GObject is the base object class provided by the gobject library. It provides object properties with a notification system, and emittable signals.
Glib::Object is the corresponding Perl object class. Glib::Objects are represented by blessed hash references, with a magical connection to the underlying C object.
get and set
Some subclasses of
"Glib::Object"
override
"get"
and
"set"
with methods more useful to the subclass, for example
"Gtk2::TreeModel"
getting and setting row contents.
This is usually done when the subclass has no object properties. Any object properties it or a further subclass does have can always be accessed with
"get_property"
and
"set_property"
(together with
"find_property"
and
"list_properties"
to enquire about them).
Generic code for any object subclass can use the names
"get_property"
and
"set_property"
to be sure of getting the object properties as such.
HIERARCHY
Glib::Object
METHODS
object = $class->new (...)
... (list) key/value pairs, property values to set on creation:
Instantiate a Glib::Object of type $class. Any key/value pairs in ... are used to set properties on the new object; see
"set"
. This is designed to be inherited by Perl-derived subclasses (see Glib::Object::Subclass), but you can actually use it to create any GObject-derived type.
scalar = Glib::Object->new_from_pointer ($pointer, $noinc=FALSE)
$pointer
(unsigned) a C pointer value as an integer.:
$noinc
(boolean) if true, do not increase the GObject's reference count when creating the Perl wrapper. this typically means that when the Perl wrapper will own the object. in general you don't want to do that, so the default is false.:
Create a Perl Glib::Object reference for the C object pointed to by $pointer. You should need this very rarely; it's intended to support foreign objects.
NOTE: the cast from arbitrary integer to GObject may result in a core dump without warning, because the type-checking macro G_OBJECT() attempts to dereference the pointer to find a GTypeClass structure, and there is no portable way to validate the pointer.
unsigned = $object->get_data ($key)
$key
(string):
Fetch the integer stored under the object data key $key. These values do not have types; type conversions must be done manually. See
"set_data"
.
$object->set_data ($key, $data)
$key
(string):
$data
(scalar):
GObject provides an arbitrary data mechanism that assigns unsigned integers to key names. Functionality overlaps with the hash used as the Perl object instance, so we strongly recommend you use hash keys for your data storage. The GObject data values cannot store type information, so they are not safe to use for anything but integer values, and you really should use this method only if you know what you are doing.
pspec or undef = $object_or_class_name->find_property ($name)
$name
(string):
Find the definition of object property $name for $object_or_class_name. Return
"undef"
if no such property. For the returned data see Glib::Object::list_properties.
$object->freeze_notify
Stops emission of ``notify'' signals on $object. The signals are queued until
"thaw_notify"
is called on $object.
$object->get (...)
... (list) list of property names:
Alias for
"get_property"
(see ``get and set'' above).
$object->set (key => $value, ...)
... (list) key/value pairs:
Alias for
"set_property"
(see ``get and set'' above).
list = $object_or_class_name->list_properties
List all the object properties for $object_or_class_name; returns them as a list of hashes, containing these keys:
name
The name of the property:
type
The type of the property:
owner_type
The type that owns the property:
descr
The description of the property:
flags
The Glib::ParamFlags of the property:
$object->notify ($property_name)
$property_name
(string):
Emits a ``notify'' signal for the property $property on $object.
gpointer = $object->get_pointer
Complement of
"new_from_pointer"
.
$object->get_property (...)
Fetch and return the values for the object properties named in ....
$object->set_property (key => $value, ...)
Set object properties.
unsigned = $object_or_class_name->signal_add_emission_hook ($detailed_signal, $hook_func, $hook_data=undef)
$detailed_signal
(string) of the form ``signal-name::detail'':
$hook_func
(subroutine):
$hook_data
(scalar):
Add an emission hook for a signal. The hook will be called for any emission of that signal, independent of the instance. This is possible only for signals which don't have the
"G_SIGNAL_NO_HOOKS"
flag set.
The $hook_func should be reference to a subroutine that looks something like this:
sub emission_hook {
my ($invocation_hint, $parameters, $hook_data) = @_;
# $parameters is a reference to the @_ to be passed to
# signal handlers, including the instance as $parameters->[0].
return $stay_connected; # boolean
}
This function returns an id that can be used with
"remove_emission_hook"
.
Since 1.100.
list = $instance->signal_chain_from_overridden (...)
... (list):
Chain up to an overridden class closure; it is only valid to call this from a class closure override.
Translation: because of various details in how GObjects are implemented, the way to override a virtual method on a GObject is to provide a new ``class closure'', or default handler for a signal. This happens when a class is registered with the type system (see Glib::Type::register and Glib::Object::Subclass). When called from inside such an override, this method runs the overridden class closure. This is equivalent to calling
$self
->SUPER::$method (@_) in normal Perl objects.
unsigned = $instance->signal_connect ($detailed_signal, $callback, $data=undef)
$detailed_signal
(string):
$callback
(subroutine):
$data
(scalar) arbitrary data to be passed to each invocation of callback:
Register callback to be called on each emission of $detailed_signal. Returns an identifier that may be used to remove this handler with
"$object->signal_handler_disconnect"
.
unsigned = $instance->signal_connect_after ($detailed_signal, $callback, $data=undef)
$detailed_signal
(string):
$callback
(scalar):
$data
(scalar):
Like
"signal_connect"
, except that $callback will be run after the default handler.
unsigned = $instance->signal_connect_swapped ($detailed_signal, $callback, $data=undef)
$detailed_signal
(string):
$callback
(scalar):
$data
(scalar):
Like
"signal_connect"
, except that $data and $object will be swapped on invocation of $callback.
retval = $object->signal_emit ($name, ...)
$name
(string) the name of the signal:
... (list) any arguments to pass to handlers.:
Emit the signal name on $object. The number and types of additional arguments in ... are determined by the signal; similarly, the presence and type of return value depends on the signal being emitted.
$ihint = $instance->signal_get_invocation_hint
Get a reference to a hash describing the innermost signal currently active on
$instance
. Returns undef if no signal emission is active. This invocation hint is the same object passed to signal emission hooks, and contains these keys:
signal_name
The name of the signal being emitted.:
detail
The detail passed on for this emission. For example, a
"notify"
signal will have the property name as the detail.:
run_type
The current stage of signal emission, one of ``run-first, ``run-last, or ``run-cleanup''.:
$object->signal_handler_block ($handler_id)
$handler_id
(unsigned):
$object->signal_handler_disconnect ($handler_id)
$handler_id
(unsigned):
boolean = $object->signal_handler_is_connected ($handler_id)
$handler_id
(unsigned):
$object->signal_handler_unblock ($handler_id)
$handler_id
(unsigned):
integer = $instance->signal_handlers_block_by_func ($func, $data=undef)
$func
(subroutine) function to block:
$data
(scalar) data to match, ignored if undef:
integer = $instance->signal_handlers_disconnect_by_func ($func, $data=undef)
$func
(subroutine) function to block:
$data
(scalar) data to match, ignored if undef:
integer = $instance->signal_handlers_unblock_by_func ($func, $data=undef)
$func
(subroutine) function to block:
$data
(scalar) data to match, ignored if undef:
scalar = $object_or_class_name->signal_query ($name)
$name
(string):
Look up information about the signal $name on the instance type $object_or_class_name, which may be either a Glib::Object or a package name.
See also
"Glib::Type::list_signals"
, which returns the same kind of hash refs as this does.
Since 1.080.
$object_or_class_name->signal_remove_emission_hook ($signal_name, $hook_id)
$signal_name
(string):
$hook_id
(unsigned):
Remove a hook that was installed by
"add_emission_hook"
.
Since 1.100.
$instance->signal_stop_emission_by_name ($detailed_signal)
$detailed_signal
(string):
$object->thaw_notify
Reverts the effect of a previous call to
"freeze_notify"
. This causes all queued ``notify'' signals on $object to be emitted.
boolean = Glib::Object->set_threadsafe ($threadsafe)
$threadsafe
(boolean):
Enables/disables threadsafe gobject tracking. Returns whether or not tracking will be successful and thus whether using perl ithreads will be possible.
$object->tie_properties ($all=FALSE)
$all
(boolean) if FALSE (or omitted) tie only properties for this object's class, if TRUE tie the properties of this and all parent classes.:
A special method available to Glib::Object derivatives, it uses perl's tie facilities to associate hash keys with the properties of the object. For example:
$button->tie_properties;
# equivilent to $button->set (label => 'Hello World');
$button->{label} = 'Hello World';
print "the label is: ".$button->{label}."\n";
Attempts to write to read-only properties will croak, reading a write-only property will return '[write-only]'.
Care must be taken when using tie_properties with objects of types created with Glib::Object::Subclass as there may be clashes with existing hash keys that could cause infinite loops. The solution is to use custom property get/set functions to alter the storage locations of the properties.
SIGNALS
notify (Glib
:Object, Glib::ParamSpec)
SEE ALSO
Glib
COPYRIGHT
Copyright (C) 2003-2011 by the gtk2-perl team.
This software is licensed under the LGPL. See Glib for a full notice.
Index
NAME
DESCRIPTION
get and set
HIERARCHY
METHODS
object = $class->new (...)
scalar = Glib::Object->new_from_pointer ($pointer, $noinc=FALSE)
unsigned = $object->get_data ($key)
$object->set_data ($key, $data)
pspec or undef = $object_or_class_name->find_property ($name)
$object->freeze_notify
$object->get (...)
$object->set (key => $value, ...)
list = $object_or_class_name->list_properties
$object->notify ($property_name)
gpointer = $object->get_pointer
$object->get_property (...)
$object->set_property (key => $value, ...)
unsigned = $object_or_class_name->signal_add_emission_hook ($detailed_signal, $hook_func, $hook_data=undef)
list = $instance->signal_chain_from_overridden (...)
unsigned = $instance->signal_connect ($detailed_signal, $callback, $data=undef)
unsigned = $instance->signal_connect_after ($detailed_signal, $callback, $data=undef)
unsigned = $instance->signal_connect_swapped ($detailed_signal, $callback, $data=undef)
retval = $object->signal_emit ($name, ...)
$ihint = $instance->signal_get_invocation_hint
$object->signal_handler_block ($handler_id)
$object->signal_handler_disconnect ($handler_id)
boolean = $object->signal_handler_is_connected ($handler_id)
$object->signal_handler_unblock ($handler_id)
integer = $instance->signal_handlers_block_by_func ($func, $data=undef)
integer = $instance->signal_handlers_disconnect_by_func ($func, $data=undef)
integer = $instance->signal_handlers_unblock_by_func ($func, $data=undef)
scalar = $object_or_class_name->signal_query ($name)
$object_or_class_name->signal_remove_emission_hook ($signal_name, $hook_id)
$instance->signal_stop_emission_by_name ($detailed_signal)
$object->thaw_notify
boolean = Glib::Object->set_threadsafe ($threadsafe)
$object->tie_properties ($all=FALSE)
SIGNALS
SEE ALSO
COPYRIGHT
|
__label__pos
| 0.601747 |
Posts tagged โAsynchronousโ
May 22, 2012
Using Task for responsive UI inย WPF
Blocking of the UI thread in a WPF application causes the application to become unresponsive. It is a common problem with several solutions, but usage of Task is in my opinion the most elegant solution because of readability and code brevity.
For this example we have a simple WPF application with a button that starts a long running loop. A busy indicator is displayed while the loop is running.
_busyIndicator.IsBusy = true;
for (var i = 0; i < 100; i++)
{
System.Threading.Thread.Sleep(50);
}
_busyIndicator.IsBusy = false;
As written, this code will freeze the app for the duration of the loop since it blocks execution of the UI thread. The solution is to run the loop Asynchronously and using Task is the easiest way to accomplish it in this case.
_busyIndicator.IsBusy = true;
var T = new Task(() =>
{
for (var i = 0; i < 100; i++)
{
System.Threading.Thread.Sleep(50);
}
});
T.Start();
_busyIndicator.IsBusy = false;
However, this bit of code will crash the application because we have not passed control back to the context of the UI thread. The โTask.ContinueWithโ method can be used to serve that purpose.
var T = new Task(() =>
{
for (var i = 0; i < 100; i++)
{
System.Threading.Thread.Sleep(50);
}
});
T.ContinueWith(antecedent => _busyIndicator.IsBusy = false, TaskScheduler.FromCurrentSynchronizationContext());
Running the application will now result in complete responsiveness. You can move, resize, or perform other operations while the loop is running. However, there are cases where CPU intensive loops may cause the busy indicator to not be displayed. In WPF, the UI render events are queued to prevent issues related to multiple execution of the same event. To prevent this, we want to make sure that the busy indicator is displayed. And for that we can use a timer event.
private System.Windows.Threading.DispatcherTimer _timer = null;
public MainWindow()
{
InitializeComponent();
_timer = new DispatcherTimer();
_timer.IsEnabled = false;
_timer.Tick += new System.EventHandler(RunTask);
}
private void StartProcess(object sender, RoutedEventArgs e)
{
_busyIndicator.IsBusy = true;
_timer.Interval = new TimeSpan(0,0,0,0,100);
_timer.Start();
// go back to the user thread, giving the busy indicator a chance to come up before the next timer tick event
}
void RunTask(object sender, System.EventArgs e)
{
_timer.IsEnabled = false;
var T = new Task(() =>
{
for (var i = 0; i < 100; i++)
{
System.Threading.Thread.Sleep(50);
}
});
T.ContinueWith(antecedent => _busyIndicator.IsBusy = false, TaskScheduler.FromCurrentSynchronizationContext());
T.Start();
}
So, when the user clicks the button, the busy indicator is set to true first, the timer is started, and once it has ticked the Task is called. Using the timer event in this manner ensures that the loop will not be executed until the busy indicator is rendered.
The complete source code for this example can be found in our github repository.
Advertisement
Tags: ,
|
__label__pos
| 0.946495 |
CameraIcon
CameraIcon
SearchIcon
MyQuestionIcon
Question
Sum of two numbers is 20 and their difference is 5. Find the difference in their squares.
A
100
loader
B
100
loader
C
50
loader
D
25
loader
Solution
The correct option is A
100
Let the numbers be a and b. Assume a > b.
(a+b)(a-b) = a2b2ย = 20 xย 5ย = 100.
Mathematics
Suggest Corrections
thumbs-up
ย
0
similar_icon
Similar questions
View More
similar_icon
People also searched for
View More
footer-image
|
__label__pos
| 0.82842 |
Anonymize CloudFront Access Logs
Michael Wittig โ 22 Apr 2020
Amazon CloudFront can upload access log files to an S3 bucket. By default, CloudFront logs the IP address of the client. Optionally, cookies could be logged as well. If EU citizens access your CloudFront distribution, you have to process personally identifiable information (PII) in a General Data Protection Regulation (GDPR) compliant way. IP addresses are considered PII, and cookie data could also contain PII. If you want to process and store PII, you need a reason in the spirit of the GDPR.
Disclaimer: Iโm not a lawyer! This is not legal advice.
Access logs are required to support operations to debug issues. For that purpose, it is okay to keep the access logs for seven days1. But you might need access logs for capacity planning as well. How can you keep the access logs for more than seven days without violating GDPR?
Anonymize Data
The question is: do you really need the IP address in our access logs? The answer is likely no. Unfortunately, CloudFront does not allow us to disable the IP address logging. We have to implement a workaround to anonymize the access logs as soon as they are available on S3. The workaround works like this:
Anonymize CloudFront Access Logs
The diagram was created with Cloudcraft - Visualize your cloud architecture like a pro.
We can use a similar mechanism that is implemented by Google Analytics. An IPv4 address like 91.45.135.67 is turned into 91.45.135.0 (the last 8 bits are removed, 24 bits are kept). IPv6 addresses need a different logic: Google removes the last 80 bits. I will go one step further and remove the last 96 bits and keep 32 bits 2.
The following steps are needed to anonymize an access log file:
1. Download the object from S3
2. Decompress the gzip data
3. Parse the data (tab-separated values, log file format)
4. Replace the IP addresses with anonymized values
5. Compress the data with gzip
6. Upload the anonymized data to S3
7. Remove the original data from S3
There is no documented max size of an access log file. We should prepare for files that are larger than the available memory. Luckily, Lambda functions support Node.js, which has superb support to deal with streaming data. If we stream data, we never load all data into memory at once.
cloudonaut plus
Staying ahead of the game with Amazon Web Services (AWS) is a challenge. Our weekly videos and online events provide independent insights into the world of cloud. Subscribe to cloudonaut plus to get access to our exclusive videos and online events.
Subscribe now!
First, weload some core Node.js dependencies and the AWS SDK:
const fs = require('fs');
const zlib = require('zlib');
const stream = require('stream');
const AWS = require('aws-sdk');
const s3 = new AWS.S3({apiVersion: '2006-03-01'});
Itโs time to implement the anonymization:
function anonymizeIPv4Address(str) {
const s = str.split('.');
s[3] = '0';
return s.join('.');
}
function anonymizeIPv6Address(str) {
const s = str.split(':').slice(0, 2);
s.push(':');
return s.join(':');
}
function anonymizeIpAddress(str) {
if (str === '-' || str === 'unknown') {
return str;
}
if (str.includes('.')) {
return anonymizeIPv4Address(str);
} else if (str.includes(':')) {
return anonymizeIPv6Address(str);
} else {
throw new Error('Neither IPv4 nor IPv6: ' + str);
}
}
We also have to deal with TSV (tab-separated values)
function transformLine(line) {
if (line.startsWith('#') || line.trim() === '') {
return line;
}
const values = line.split('\t');
values[4] = anonymizeIpAddress(values[4]);
values[19] = anonymizeIpAddress(values[19]);
return values.join('\t');
}
So far, we process only small amounts of data: a single access log file line. Itโs time to deal with the whole file.
Each chunk of data is represented as a buffer in Node.js. A buffer represents binary data in the form of a sequence of bytes. In the buffer, we search for the line-end \n byte. We slice all bytes from beginning to \n and convert them into a string to extract a line. Continue with the apporach until end of file is reached. There is one edge case: A chunk of data can stop in the middle of a line. We have to add the old chunk to the beginning of a new chunk.
async function process(record) {
let chunk = Buffer.alloc(0);
const transform = (currentChunk, encoding, callback) => {
chunk = Buffer.concat([chunk, currentChunk]);
const lines = [];
while(chunk.length > 0) {
const i = chunk.indexOf('\n', 'utf8');
if (i === -1) {
break;
} else {
lines.push(chunk.slice(0, i).toString('utf8'));
chunk = chunk.slice(i+1);
}
}
lines.push('');
const transformed = lines
.map(transformLine)
.join('\n');
callback(null, Buffer.from(transformed, 'utf8'));
};
const params = {
Bucket: record.s3.bucket.name,
Key: record.s3.object.key
};
if ('versionId' in record.s3.object) {
params.VersionId = record.s3.object.versionId;
}
const body = s3.getObject(params).createReadStream()
.pipe(zlib.createGunzip())
.pipe(new stream.Transform({
transform
}))
.pipe(zlib.createGzip());
await s3.upload({
Bucket: record.s3.bucket.name,
Key: record.s3.object.key.slice(0, -2) + 'anonymized.gz',
Body: body
}).promise();
if (chunk.length > 0) {
throw new Error('file was not read completly');
}
return s3.deleteObject(params).promise();
}
Finally, Lambda requires a thin interface that we have to implement. I also ensure that anonymized data is not processed again to avoid an expensive infinit loop.
exports.handler = async (event) => {
console.log(JSON.stringify(event));
for (let record of event.Records) {
if (record.s3.object.key.endsWith('.anonymized.gz')) {
continue;
} else if (record.s3.object.key.endsWith('.gz')) {
await process(record);
}
}
};
I integrated the workaround into our collection of aws-cf-templates. Check out the documentation or the code on GitHub. A similar approach can be used to anonymize access logs from ELB load balancers (ALB, CLB, NLB).
PS: You should also enable S3 lifecycle rules to delete access logs after 38 months.
Thanks to Thorsten Hรถger for reviewing this article.
1. 1. Germany source: https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Veranstaltungen/ITSiKongress/14ter/Vortraege-19-05-2015/Heidrich_Wegener.pdf?__blob=publicationFile
2. 2. One official recommendation I found recommends dropping at least the last 88 bits of an IPv6 address (German source: https://www.datenschutz-bayern.de/dsbk-ent/DSK_84-IPv6.html)
Michael Wittig
Michael Wittig
I'm an independent consultant, technical writer, and programming founder. All these activities have to do with AWS. I'm writing this blog and all other projects together with my brother Andreas.
In 2009, we joined the same company as software developers. Three years later, we were looking for a way to deploy our softwareโan online banking platformโin an agile way. We got excited about the possibilities in the cloud and the DevOps movement. Itโs no wonder we ended up migrating the whole infrastructure of Tullius Walden Bank to AWS. This was a first in the finance industry, at least in Germany! Since 2015, we have accelerated the cloud journeys of startups, mid-sized companies, and enterprises. We have penned books like Amazon Web Services in Action and Rapid Docker on AWS, we regularly update our blog, and we are contributing to the Open Source community. Besides running a 2-headed consultancy, we are entrepreneurs building Software-as-a-Service products.
We are available for projects.
You can contact me via Email, Twitter, and LinkedIn.
Briefcase icon
Hire me
|
__label__pos
| 0.972555 |
Many applications use Core data for iOS devices, others utilize frameworks like fmdb. But there are also people that write SQLite access from scratch. The comes the answer - how do you cache prepared statements (I will not discuss here why we cache prepared statements)?
Well - I have implemented 2 methods:
First approach has itโs advantages in regards of using older SDKs (NSPointerArray is available from iOS 6.0+)
Second approach is more elegant, but as mentioned - only for newer versions.
Lets have some code:
Using C syntax
// Declare instance variables to store statements - good place is in you Database manager instance
sqlite3_stmt **stmtCached;
int stmtCachedCount;
NSMutableDictionary *stmtSQLPool;
...
@property (atomic, retain) NSMutableDictionary *stmtSQLPool;
...
// in implementation
@synthesize stmtSQLPool;
...
// in constructor initialize the array
stmtCached = (sqlite3_stmt**) malloc(sizeof(sqlite3_stmt*)*MAX_NUM_CACHED_STATEMENT);
Code to prepare and cache statements:
- (sqlite3_stmt*) prepareAndCacheStatementWithSQL: (NSString*) sql andDatabase: (sqlite3*) db {
@synchronized(self){
// Try to find if we have cached the statement
NSNumber *index = [self.stmtSQLPool objectForKey:sql];
if (index != nil && stmtCachedCount > index.intValue) {
// we have already cached statement
sqlite3_stmt* stmt = stmtCached[index.intValue];
return stmt;
}
// Not prepared yet - prepare it and put it in the pool
sqlite3_stmt *stmt = [DAO prepareStatementWithNSString:sql andDB: db];
if (!stmt)
@throw [NSException exceptionWithName:@"Error preparing statement"
reason:[NSString stringWithFormat:@"Something went wrong - cannot prepare statement for sql: '%@' - %s", sql, sqlite3_errmsg(db)]
userInfo:nil];
if (stmtCachedCount >= MAX_NUM_CACHED_STATEMENT) {
NSLog(@"Maximum number of cached statements reached. Stmt for sql %@ will not be cached.", sql);
} else {
stmtCached[stmtCachedCount++] = stmt;
// store the index
NSNumber *indexNumber = [NSNumber numberWithInt:(stmtCachedCount-1)];
[stmtSQLPool setObject:indexNumber forKey:sql];
}
return stmt;
}
}
This is pretty much everything. Of course donโt forget to release and finalize statements on db close.
In second approach - just instead of using (_sqlite**_) - replace with (_NSPointerArray*_):
NSPointerFunctionsOptions options = (NSPointerFunctionsStrongMemory |
NSPointerFunctionsStructPersonality);
self.stmtCached = [NSPointerArray pointerArrayWithOptions:options];
Fetch and store statements:
sqlite3_stmt * stmt = [stmtCached pointerAtIndex:i];
...
[stmtCached addPointer:stmt];
Enjoy~
|
__label__pos
| 0.967587 |
git-pisect: A Parallel Version of git-bisect
One of my favorite tools in the git tool suite is git-bisect. For those of you unfamiliar with it, git-bisect is a sort of magical program that you can use to quickly find which commit introduced a problem into a repository. It does this by checking out commits in your repository's history, and you tell it whether the state of the commits for whatever you're testing is good or bad. It minimizes the number of times you need to do this by using a binary search algorithm. Futhermore, if you have an automated test suite, you can tell git-bisect to run the tests on your behalf (using git bisect run) and drive the whole process by itself. It has proven very useful to me over the years, both in open source and commercial work.
However, sometimes bisect just doesn't seem fast enough. After all, if you have a test suite that takes a while, bisect will take a while too. Using bisect on a multi-core machine can be a shame - bisect only uses up one of your cores while the others remain idle.
So a while ago, I was talking with a co-worker and had the idea for a parallel version of git-bisect, which I implemented at the hack 'n' tell I recently attended. I call it: git-pisect.
What's With the Name?
It's a really crappy portmanteau of โgit parallel bisectโ, of course!
What? How can you bisect in parallel?
I know, I know - you need to finish one round of tests to be able to decide the next commit to test, right? This is why the name is crappy - it should really be something like โgit-pnsectโ (for parallel n-sect, for reasons which will become apparent shortly) or โgit-parallel-regression-finderโ, but git-pisect is just so catchy.
How Does It Work?
In order to understand how pisect works, we first need to fully understand how regular ol' bisect works. If you feel comfortable with how git-bisect works, feel free to skip the next section.
How does git-bisect work?
Well, let's look at your typical range of Git commits:
โโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ HEAD~15 โ HEAD~14 โ HEAD~13 โ HEAD~12 โ HEAD~11 โ HEAD~10 โ HEAD~9 โ HEAD~8 โ HEAD~7 โ HEAD~6 โ HEAD~5 โ HEAD~4 โ HEAD~3 โ HEAD~2 โ HEAD~1 โ HEAD โ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโ
Let's say that HEAD~15 passes, but HEAD fails. So if you run git bisect start HEAD HEAD~15, git starts testing at the halfway point:
โ
โโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ HEAD~15 โ HEAD~14 โ HEAD~13 โ HEAD~12 โ HEAD~11 โ HEAD~10 โ HEAD~9 โ HEAD~8 โ HEAD~7 โ HEAD~6 โ HEAD~5 โ HEAD~4 โ HEAD~3 โ HEAD~2 โ HEAD~1 โ HEAD โ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโ
If the regression isn't present at the current point (HEAD~8 here), we know that it must have been introduced in a later commit, so we pick a new point halfway between the last known good commit and the first known bad commit, like so:
โ
โโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ HEAD~15 โ HEAD~14 โ HEAD~13 โ HEAD~12 โ HEAD~11 โ HEAD~10 โ HEAD~9 โ HEAD~8 โ HEAD~7 โ HEAD~6 โ HEAD~5 โ HEAD~4 โ HEAD~3 โ HEAD~2 โ HEAD~1 โ HEAD โ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโ
Assuming that HEAD~4 fails, we can narrow the range even further:
โ
โโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ HEAD~15 โ HEAD~14 โ HEAD~13 โ HEAD~12 โ HEAD~11 โ HEAD~10 โ HEAD~9 โ HEAD~8 โ HEAD~7 โ HEAD~6 โ HEAD~5 โ HEAD~4 โ HEAD~3 โ HEAD~2 โ HEAD~1 โ HEAD โ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโ
If the test fails at HEAD~6, we know that either it or HEAD~7 is the culprit, so we do a final test with the files from HEAD~7. I'll spare you the additional diagram, so let's say that HEAD~7 is indeed the culprit for this next part.
Ok, now what about parallel bisect?
So let's say you're doing this on a repository that has a test suite that takes about a minute to run, is safely parallelizable, and doesn't implement any sort of parallelization itself. Since most of our machines these days have multiple cores, running these tests on just a single core seems like a waste of time. What if we are using a machine with four cores - can we use all of them? Yes! Let's look at that example from before, but with multiple parallel tests:
โ โ โ โ
โโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ HEAD~15 โ HEAD~14 โ HEAD~13 โ HEAD~12 โ HEAD~11 โ HEAD~10 โ HEAD~9 โ HEAD~8 โ HEAD~7 โ HEAD~6 โ HEAD~5 โ HEAD~4 โ HEAD~3 โ HEAD~2 โ HEAD~1 โ HEAD โ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโ
Running four tests in parallel, we can move on to a smaller range of commits under consideration much more quickly. After the step, I demonstrated above, here's the next one:
โ โ
โโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโโโฌโโโโโโโ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โ HEAD~15 โ HEAD~14 โ HEAD~13 โ HEAD~12 โ HEAD~11 โ HEAD~10 โ HEAD~9 โ HEAD~8 โ HEAD~7 โ HEAD~6 โ HEAD~5 โ HEAD~4 โ HEAD~3 โ HEAD~2 โ HEAD~1 โ HEAD โ
โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ โ
โโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโโโดโโโโโโโ
So instead of taking four steps (ie. four minutes) as in the standard bisect, we only take two!
Metrics
To test git-pisect, I picked a random number between 1 and 1000 - let's call it N. I then created a Git repository with 1000 commits - commits before N passed, and commits after N failed, using the following code:
#include <unistd.h>
int
main(void)
{
sleep(10);
return EXIT_CODE; // 0 before N, 1 after
}
I ran git-bisect and git-pisect on this repository - the latter with 1, 2, 4, 8, and 16 jobs. Here are the results from that test on my four-core laptop:
git-bisect: 1m42s
git-pisect(1): 1m24s
git-pisect(2): 1m5s
git-pisect(4): 45s
git-pisect(8): 38s
git-pisect(16): 44s
I have no idea why git-pisect with a single job is a little faster than git-bisect, but I'm not complaining!
UPDATE: I spent some time looking at the reason behind this - it turns out that I was picking slightly different test commits than git-bisect. What I mean by this is that given 1000 commits to investigate, git-pisect may pick commit #499 and git-bisect may pick #500, or vice versa. This means that, depending on where in the commit range the defective commit is, sometimes git-pisect will end up performing one fewer test, and sometimes git-bisect will perform one fewer test. Also interesting is the fact that sixteen jobs takes longer than eight jobs; log(1000) / log(16) (2.49) and log(1000) / log(8) (3.32) are fairly close, and so sometimes you'll end up needing the same number of iterations to find the offending commit, but more jobs means more overhead for setup and task scheduling.
What's interesting to me is that even though it takes less time, git-pisect does actually perform more work: with a single job (equivalent to git-bisect, it performs O(logโ n) (about 10 here) tests. With eight jobs, it performs O(8 * logโ n) (about 25) tests, so we are performing some redundant work. However, since that work is happening in parallel, we only observe time relative to O(logโ n)!
Try It Out!
If this idea sounds interesting to you, I encourage you to head over to the repository, clone it, and try it out! If people are interested, I'm happy to accept contributions, improve the code, and add more features (I've been thinking a distributed version - git-disect, if you will - could be interesting as well). I'm eager to hear if others find this useful or find issues with either the idea or the implementation!
|
__label__pos
| 0.948604 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.