Q: Set src depending on url

I want to set the source based on the URL, e.g.:

    <script type="text/javascript" src="test"></script>

I cannot add this to it:

    if (window.location.href.indexOf("test") > -1) {
        src = "myScriptcc.js";
    }

A: You can do it like this:

    <script type="text/javascript">
        if (window.location.href.indexOf("test") > -1) {
            var myscript = document.createElement('script');
            myscript.src = 'myScriptcc.js';
            document.body.appendChild(myscript);
        }
    </script>
{ "language": "en", "url": "https://stackoverflow.com/questions/61373645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: How can I refresh fullcalendar after inserting a row in MySQL without refreshing the page?

My problem is that I make an AJAX request to insert an event into MySQL, and if everything is OK I want to refresh (refetch the events of) fullcalendar so the new insert shows up without reloading the page. I am using the TimeGrid view and version 4 of the plugin. I tested:

    calendar.refetch();
    calendar.refetchEvents();
    $('calendar').fullcalendar('rerenderEvents');
    $('calendar').fullcalendar('refetchEvents');

but that does not work, at least in v4. Calendar code:

    var calendar;

    /* Calendario JS */
    document.addEventListener('DOMContentLoaded', function() {
        var calendarEl = document.getElementById('calendar');
        calendar = new FullCalendar.Calendar(calendarEl, {
            plugins: ['dayGrid', 'timeGrid', 'bootstrap'],
            defaultView: 'timeGridWeek',
            themeSystem: 'bootstrap',
            locale: 'es',
            minTime: "07:00:00",
            maxTime: "23:00:00",
            header: {
                left: 'title', // today, prev, next
                center: 'BtnAñadirReserva',
                right: 'today, prev,next' // month, basicWeek, basicDay, agendaWeek, agendaDay
            },
            customButtons: {
                BtnAñadirReserva: {
                    text: "Añadir Reserva",
                    bootstrapFontAwesome: "fa-calendar-plus Añadir Reserva",
                    click: function() {
                        showNoti();
                    }
                }
            },
            events: {
                url: url_controller,
                method: 'post',
                extraParams: {
                    accio: "getReservas"
                },
                failure: function() {
                    alert('Hubo un error recorriendo las reservas!');
                },
                color: 'blue',     // a non-ajax option
                textColor: 'white' // a non-ajax option
            },
            eventClick: function(calEvent, jsEvent, view) {
                getInfoByID(calEvent.event.id);
            }
        });
        calendar.render();
    });

    $.ajax({
        url: url_controller,
        type: 'post',
        data: {
            accio: "insertReserva",
            params: json
        },
        beforeSend: function() {},
        success: function(result) {
            result = JSON.parse(result);
            console.log(result);
            if (status == true) {
                $('#calendar').fullCalendar('rerenderEvents');
                //calendar.refetch();
                //calendar.refetchEvents();
                //$('calendar').fullcalendar('rerenderEvents');
                //$('calendar').fullcalendar('refetchEvents');
            } else {
                /* Error sql php */
            }
        },
        error: function(xhr, ajaxOptions, thrownError) {
            /* Error in ajax request */
        }
    });

No errors show in the console/network tab if I use .fullcalendar, but with .refetch it says it is not a function. It is not a PHP problem, because the insert works fine and if I refresh the page I can see the new event that I added.
{ "language": "en", "url": "https://stackoverflow.com/questions/56129240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: IF else Stored procedure confusion

This is a table:

    name  a   b   c   normal  loan  status
    abc   50  60  70                normal
    bcd   50  50  50                loan

What I want first is to get the total of the values of columns a, b and c in accordance with their status; meaning the total amount for a name with status normal should come under the normal column, and for status loan it should come under the loan column. I should get abc's total 50+60+70 in normal and bcd's total 50+50+50 in loan. How do I do that? I tried IF/ELSE in a stored procedure, but can't seem to get it.

A: I think the below is what you need. The below query is a standard ANSI query:

    SELECT name
          ,SUM(CASE WHEN status = 'normal' THEN (a + b + c) ELSE 0 END) AS normal
          ,SUM(CASE WHEN status = 'loan' THEN (a + b + c) ELSE 0 END) AS loan
          ,status
    FROM yourTable
    GROUP BY name, status

OUTPUT:

    name  normal  loan  status
    bcd   0       150   loan
    abc   180     0     normal

As per your requirement in the comment, to update the existing rows:

    UPDATE yourTable
    SET normal = CASE WHEN status = 'normal' THEN (a + b + c) ELSE 0 END,
        loan   = CASE WHEN status = 'loan' THEN (a + b + c) ELSE 0 END
    FROM yourTable

Please note that you need to update the table every time you insert a new record, or else change the original INSERT statement to include those columns as well.

A: You can do it in a single SELECT. Try:

    SELECT *
          ,(CASE WHEN [status] = 'normal' THEN [a]+[b]+[c] ELSE null END) AS normal
          ,(CASE WHEN [status] = 'loan' THEN [a]+[b]+[c] ELSE null END) AS loan
    FROM YourTable
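The conditional-aggregation pattern from the first answer can be exercised against a throwaway in-memory SQLite table (table and column names taken from the question; the two sample rows are the ones from the question's table):

```python
import sqlite3

# Build a throwaway in-memory table mirroring the question's data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourTable (name TEXT, a INT, b INT, c INT, status TEXT)")
con.executemany(
    "INSERT INTO yourTable VALUES (?, ?, ?, ?, ?)",
    [("abc", 50, 60, 70, "normal"), ("bcd", 50, 50, 50, "loan")],
)

# Conditional aggregation: route each row's a+b+c total into the
# column matching its status.
rows = con.execute(
    """
    SELECT name,
           SUM(CASE WHEN status = 'normal' THEN a + b + c ELSE 0 END) AS normal,
           SUM(CASE WHEN status = 'loan'   THEN a + b + c ELSE 0 END) AS loan,
           status
    FROM yourTable
    GROUP BY name, status
    ORDER BY name
    """
).fetchall()
print(rows)  # [('abc', 180, 0, 'normal'), ('bcd', 0, 150, 'loan')]
```

The ORDER BY is only there to make the output deterministic; the totals match the 180/150 figures in the answer.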
{ "language": "en", "url": "https://stackoverflow.com/questions/25302641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: res.send and res.render calls

I am trying to determine if I can call res.send(data) and then res.render('reports') simultaneously. To explain further in detail: when I route to '/reports', my server side first makes a REST call to an API which returns JSON data. I want this JSON data to be accessible on the client, for which I am making an AJAX call from my JavaScript, hence the use of res.send(); but I also want to render the page in this call. So it looks like the following in my server-side code:

    router.get('/reports', function(req, res) {
        // Making the REST call to get the JSON data
        // then
        res.send(json);
        res.render('reports');
    });

Every time I hit '/reports' in the browser, I see the JSON value instead of the page being rendered, and my console throws Error: Can't set headers after they are sent.

A: You could use content negotiation for that, where your AJAX request sets the Accept header to tell your Express server to return JSON instead of HTML:

    router.get('/reports', function(req, res) {
        ...
        if (req.accepts('json')) {
            return res.send(theData);
        } else {
            return res.render('reports', ...);
        }
    });

Alternatively, you can check if the request was made with an AJAX call using req.xhr (although that's not 100% failsafe).

A: No, you can't do both, but you could render the page and send the data at the same time:

    res.render('reports', {data: json});

and then access those data in the newly rendered page. Alternatively, you could send a flag when making the call, and then decide whether you want to render or send based on this flag.

A: Ideally, it should be two separate routes, one spitting out JSON and the other rendering a view. Alternatively, you could pass a URL param, depending on which you either return JSON or render a view:

    router.get('/reports/json', function(req, res) {
        var data = JSON_OBJECT;
        res.send(data);
    });

    router.get('/reports', function(req, res) {
        var data = JSON_OBJECT;
        res.render('path-to-view-file', data);
    });

A: No, you can't. You can only have a single response to a given request. The browser is either expecting an HTML document or it is expecting JSON; it doesn't make sense to give it both at once. render just renders a view and then calls send. You could write your view to output an HTML document with a <script> element containing your JSON in the form of a JavaScript literal.
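The content-negotiation idea above boils down to inspecting the request's Accept header. A minimal, framework-free sketch of that check (`wants_json` is a hypothetical helper for illustration, not part of Express; in Express itself you would rely on `req.accepts`):

```python
def wants_json(accept_header: str) -> bool:
    """Rudimentary content negotiation: does the client ask for JSON?

    Only checks for an explicit application/json media type; a real
    server should use its framework's negotiation, which also handles
    wildcards and q-values.
    """
    media_types = [part.split(";")[0].strip() for part in accept_header.split(",")]
    return "application/json" in media_types

# An AJAX call typically sends Accept: application/json ...
print(wants_json("application/json"))                 # True
# ... while a normal browser navigation prefers HTML.
print(wants_json("text/html,application/xhtml+xml"))  # False
```

With such a check, the same route can `send` JSON to the AJAX caller and `render` HTML to the browser, which is exactly what the question was after.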
{ "language": "en", "url": "https://stackoverflow.com/questions/30847070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Count within the aggregate SUM()

I have a table with 1000 rows. The columns are ID, DBID, TalkTime. I am doing:

    SELECT DBID, SUM(TalkTime) FROM Incoming_Calls GROUP BY DBID

This condenses down to approximately 18 rows. I want to know how I can count the number of records present within each grouping. So, for example, DBID 105 has a sum of 526 which is made up of 395 records, and DBID 104 has a sum of 124 made up of 241 of the records in the grouping. Any ideas? Using Microsoft SQL Server 2012.

A: Then use COUNT():

    SELECT DBID, COUNT(*) TotalRows, SUM(TalkTime) TotalTalkTime
    FROM Incoming_Calls
    GROUP BY DBID

* TSQL Aggregate Functions

A:

    SELECT DBID, SUM(TalkTime), COUNT(TalkTime) TalkTimeCount
    FROM Incoming_Calls
    GROUP BY DBID

Or, if you want to include the count of null values, you can use this:

    SELECT DBID, SUM(TalkTime), @@ROWCOUNT
    FROM Incoming_Calls
    GROUP BY DBID
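The COUNT(*)-alongside-SUM() pattern from the first answer can be sanity-checked with an in-memory SQLite table. The sample rows below are made up for illustration; they merely reproduce the per-DBID sums (124 and 526) mentioned in the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Incoming_Calls (ID INT, DBID INT, TalkTime INT)")
con.executemany(
    "INSERT INTO Incoming_Calls VALUES (?, ?, ?)",
    [(1, 104, 50), (2, 104, 74), (3, 105, 200), (4, 105, 300), (5, 105, 26)],
)

# COUNT(*) counts the rows of each group right next to the SUM aggregate.
rows = con.execute(
    """
    SELECT DBID, COUNT(*) AS TotalRows, SUM(TalkTime) AS TotalTalkTime
    FROM Incoming_Calls
    GROUP BY DBID
    ORDER BY DBID
    """
).fetchall()
print(rows)  # [(104, 2, 124), (105, 3, 526)]
```

Note that COUNT(TalkTime), as in the second answer, would skip NULL TalkTime values, while COUNT(*) counts every row in the group.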
{ "language": "en", "url": "https://stackoverflow.com/questions/15480817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Swift adding attachment for multipart/form-data POST

Swift 5, Xcode Version 10.2.1 (10E1001). Hi everyone, I'd appreciate any help on this. I'm creating a call that posts an attachment (PNG) to ServiceNow. If I use the same keys in the body as in Postman, the call works fine in Postman. However, the code below seems to be having a hard time with the attachment. The image in this example is a PNG asset. For comparison, if I omit the attachment in Postman, I get the exact same error message, so I believe the image isn't being properly formatted. Thanks in advance. I get the following error from ServiceNow:

    { error = { detail = "<null>"; message = "Failed to create the attachment. File part might be missing in the request."; }; status = failure; }

And this is my code:

    func createDataBody() -> Data {
        let newLine = "\r\n"
        let twoNewLines = newLine + newLine
        let boundary = "----------------------------\(UUID().uuidString)" + newLine
        var body = Data()
        let stringEncoding = String.Encoding.utf16

        body.append(boundary.data(using: stringEncoding)!)
        let table_name = "Content-Disposition: form-data; name=\"table_name\"" + twoNewLines
        body.append(table_name.data(using: stringEncoding)!)
        //incident
        body.append("incident".data(using: stringEncoding)!)
        //new line
        body.append(newLine.data(using: stringEncoding)!)
        //boundary
        body.append(boundary.data(using: stringEncoding)!)

        let table_sys_id = "Content-Disposition: form-data; name=\"table_sys_id\"" + twoNewLines
        body.append(table_sys_id.data(using: stringEncoding)!)
        //ba931ddadbf93b00f7bbdd0b5e96193c
        body.append("ba931ddadbf93b00f7bbdd0b5e96193c".data(using: stringEncoding)!)
        //new line
        body.append(newLine.data(using: stringEncoding)!)
        //boundary
        body.append(boundary.data(using: stringEncoding)!)

        let file = "Content-Disposition: form-data; name=\"file\"; filename=\"[email protected]\"" + newLine
        body.append(file.data(using: stringEncoding)!)
        let type = "Content-Type: image/png" + twoNewLines
        body.append(type.data(using: stringEncoding)!)
        //new line
        body.append(newLine.data(using: stringEncoding)!)

        let img = #imageLiteral(resourceName: "Artboard@1x")
        if let fileContent = img.pngData() {
            body.append(fileContent)
        }
        //new line
        body.append(newLine.data(using: stringEncoding)!)
        body.append("--\(UUID().uuidString)--".data(using: stringEncoding)!)

        print(String(data: body, encoding: .utf16)!)
        return body
    }

Here is what the body looks like, with the image data omitted:

    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
    Content-Disposition: form-data; name="table_name"

    incident
    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
    Content-Disposition: form-data; name="table_sys_id"

    ba931ddadbf93b00f7bbdd0b5e96193c
    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
    Content-Disposition: form-data; name="file"; filename="[email protected]"
    Content-Type: image/png

    .....
    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36

Here is the call that sets the headers:

    func addAttachmentToIncident() {
        let passwordString = "\(userNameTextField.text!):\(passwordTextField.text!)"
        let passwordData = passwordString.data(using: String.Encoding.utf8)
        let base64EncodedCredential = passwordData?.base64EncodedString(options: Data.Base64EncodingOptions.lineLength76Characters)
        let boundary = generateBoundaryString()
        let headers = [
            "authorization": "Basic " + base64EncodedCredential!,
            "cache-control": "no-cache",
            "Accept": "application/json",
            "content-type": "multipart/form-data; boundary=--\(boundary)"
        ]
        guard let url = URL(string: "https://xxx.service-now.com/api/now/attachment/upload") else { return }
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.allHTTPHeaderFields = headers
        let dataBody = createDataBody(boundary: boundary)
        request.httpBody = dataBody
        let session = URLSession.shared
        session.dataTask(with: request) { (data, response, error) in
            if let response = response {
                print(response)
            }
            if let data = data {
                do {
                    let json = try JSONSerialization.jsonObject(with: data, options: [])
                    print(json)
                } catch {
                    print(error)
                }
            }
        }.resume()
    } //addAttachmentToIncident

A: A couple of observations:

* The final boundary is not correct. Assuming you've created a boundary that starts with --, you should be appending \(boundary)-- as the final boundary. Right now the code is creating a new UUID (and omitting all of those extra dashes you added in the original boundary), so it won't match the rest of the boundaries. You need a newLine sequence after that final boundary, too. The absence of this final boundary could be preventing it from recognizing this part of the body, and thus the "File part might be missing" message.

* The boundary should not be a local variable. When preparing multipart requests, you have to specify the boundary in the header (and it has to be the same boundary here, not another UUID() instance):

    request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")

Generally, I would have the caller create the boundary, use that when creating the request header, and then pass the boundary as a parameter to this method. See Upload image with parameters in Swift. The absence of the same boundary value in the header and the body would prevent it from recognizing any of these parts of the body.

* You have defined your local boundary to include the newLine. Obviously, it shouldn't be a local var at all, but it must not include a newline at the end, otherwise the attempt to append the final boundary of \(boundary)-- will fail. If you take the newline out of the boundary, make sure to insert the appropriate newlines where needed as you build the body.

Bottom line, make sure your body looks like the following (with the final --):

    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
    Content-Disposition: form-data; name="table_name"

    incident
    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
    Content-Disposition: form-data; name="table_sys_id"

    ba931ddadbf93b00f7bbdd0b5e96193c
    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
    Content-Disposition: form-data; name="file"; filename="[email protected]"
    Content-Type: image/png

    .....
    ----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36--

* In their curl example for /now/attachment/upload, they are using a field name of uploadFile, but you are using file. You may want to double-check your field name and match the curl and Postman examples.

    curl "https://instance.service-now.com/api/now/attachment/upload" \
        --request POST \
        --header "Accept:application/json" \
        --user "'admin':'admin'" \
        --header "Content-Type:multipart/form-data" \
        -F 'table_name=incident' \
        -F 'table_sys_id=d71f7935c0a8016700802b64c67c11c6' \
        -F '[email protected]'

If, after fixing the above, it still doesn't work, I'd suggest you use Charles or Wireshark and compare a successful request vs the one you're generating programmatically. Needless to say, you might want to consider using Alamofire, which gets you out of the weeds of creating well-formed multipart requests.
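Independent of Swift, the body layout the answer describes (every part opened with --boundary, the body terminated with --boundary--, and one boundary value shared with the Content-Type header) can be sketched in a few lines. `build_multipart_body` is a hypothetical helper for illustration, not a library API; note it also encodes headers as UTF-8, rather than the UTF-16 used in the question's code:

```python
import uuid

def build_multipart_body(fields, file_field, filename, content_type, file_bytes):
    """Sketch of a well-formed multipart/form-data body.

    Returns (body, boundary); the SAME boundary value must go into the
    request header as `multipart/form-data; boundary=<boundary>`.
    """
    boundary = "----------" + uuid.uuid4().hex
    crlf = b"\r\n"
    body = b""
    for name, value in fields.items():
        # Each part starts with "--" + boundary on its own line.
        body += b"--" + boundary.encode() + crlf
        body += f'Content-Disposition: form-data; name="{name}"'.encode() + crlf + crlf
        body += value.encode() + crlf
    body += b"--" + boundary.encode() + crlf
    body += (f'Content-Disposition: form-data; name="{file_field}"; '
             f'filename="{filename}"').encode() + crlf
    body += f"Content-Type: {content_type}".encode() + crlf + crlf
    body += file_bytes + crlf
    # Closing delimiter: the SAME boundary with a trailing "--".
    body += b"--" + boundary.encode() + b"--" + crlf
    return body, boundary

body, boundary = build_multipart_body(
    {"table_name": "incident", "table_sys_id": "ba931ddadbf93b00f7bbdd0b5e96193c"},
    "file", "artboard.png", "image/png", b"\x89PNG...")
# The body must end with the closing delimiter built from the same boundary.
assert body.endswith(b"--" + boundary.encode() + b"--\r\n")
```

This mirrors the answer's two main fixes: the closing delimiter reuses the original boundary instead of a fresh UUID, and the boundary is returned so the caller can place the identical value in the header.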
{ "language": "en", "url": "https://stackoverflow.com/questions/56464795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: some issues with index range, python student project?

    def f3():
        root.withdraw()
        viewest.deiconify()
        #import cx_Oracle
        con = None
        cursor = None
        try:
            con = cx_Oracle.connect("system/abc123")
            cursor = con.cursor()
            sql = "select * from students"
            cursor.execute(sql)
            msg = ""
            for d in data:
                msg = msg + " R: " + str(d[0]) + " N: " + str(d[1]) + " M: " + str(d[2]) + "\n"
            stData.insert(INSERT, msg)
        except cx_Oracle.DatabaseError as e:
            print("some issues", e)
        finally:
            if cursor is not None:
                cursor.close()
            if con is not None:
                con.close()
            print("Disconnected")

    '''IndexError: string index out of range , msg=msg+" R: "+ str(d[0]) + " N: " + str(d[1])+" M: " + str(d[2])+ "\n" '''

A: It is hard to tell what you're trying to do here, but what's the data type of data? It looks like you're not getting past the try statement (good use of a try/except to handle exceptions!), so I think the issue lies in the way you're indexing the items (d) in data. Think about it: if data doesn't exist (meaning you didn't initialize it in your code for your program to use), how can you try to index it later on? Also, it's weird that you're not getting a NameError, as I don't see data defined anywhere, unless you cropped out where it was defined earlier (or my eyes are tricking me).

General programming principles:

* Iterable objects: String, List, Dictionary, etc.
* Non-iterable objects: Integer, Float, Boolean, etc. (depending on the programming language)

Iterable data types are ones that you can index using bracket notation ([]), so in any case you'll need to know the types of your variables and then use the correct "processes" to work with them. Since you are indexing items in data, the data type of data therefore needs to be an iterable too. In general, an IndexError means that you've tried to access/index an item (in a list, for example) that doesn't exist. Therefore, I think that trying to index an item in an object that doesn't exist is giving you that error.

Here's what I think would help: if I could see where you have data defined, I would be able to tell you where you may be going wrong, but for now I can only recommend going back to check whether you've indexed an item in d in data that doesn't exist. An example (I will assume data is a list for clarity):

    >>> data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    >>> data[0]  # index a list in data
    [1, 2, 3]
    >>> data[0][1]
    2
    # If you try to access an index in data that doesn't exist:
    >>> data[3]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    IndexError: list index out of range

I have probably over-explained by now, so I'll just end this here. The above was to give you an idea of how to access a list in a list correctly and what can happen when it isn't done properly. Hope this is clear!
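As the answer points out, `data` is never assigned in the question's code; after `cursor.execute(sql)` the rows still need to be fetched (e.g. `data = cursor.fetchall()`). Separately, whatever `data` turns out to be, the IndexError pattern is easy to reproduce with plain lists, and a length guard avoids it:

```python
data = [[1, 2, 3], [4, 5], [7, 8, 9]]  # second row is too short

# Naive indexing raises IndexError as soon as it hits the short row.
try:
    msg = "".join(f" R: {d[0]} N: {d[1]} M: {d[2]}\n" for d in data)
except IndexError as e:
    print("some issues", e)  # some issues list index out of range

# Guarding on row length (or validating the query results up front)
# avoids the crash; short rows are simply skipped here.
msg = "".join(f" R: {d[0]} N: {d[1]} M: {d[2]}\n" for d in data if len(d) >= 3)
print(msg)
```

The guard is only one option; in the original program the better fix is to make sure the SELECT really returns three columns per row and to fetch them into `data` before the loop.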
{ "language": "en", "url": "https://stackoverflow.com/questions/60481070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Do Access subforms that are not visible still update/requery? If not, can this behavior be configured?

I have an old Access database that is being upgraded to work with Access 2007. The client is complaining that it is slow now, so I am looking for ways to optimize it. There is one subform that sits in a particular tab on the form. I have been wondering: does the subform still update/query even when it is not visible? If this is configurable, how?

A: All controls refresh/update whether visible or not. It's generally considered good practice not to load recordsets until they are needed. If you have many subforms in a tab control, you can use the tab control's OnChange event to load/unload your subforms or, alternatively, to set the recordsources. However, with only a couple of subforms, this is not likely to be a big help. But with a half dozen or so, it's a different issue.

A: You can remove the recordsource from the subform and add it back when the form is made visible, or simply remove the whole subform and add that back.
{ "language": "en", "url": "https://stackoverflow.com/questions/3058688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: API-Platform: Multiple primary key with swagger - getting error

This question is also on GitHub. I'm trying to add a second field into the parameters, without a custom controller, for an entity with a composite primary key, and I'm getting an error. Now, step by step:

DB table:

    create table measure_types
    (
        id                varchar(10)  not null,
        language_iso_code varchar(2)   not null,
        label             varchar(255) null,
        created_at        datetime     not null,
        updated_at        datetime     null,
        constraint measure_types_id_language_iso_code_uindex
            unique (id, language_iso_code),
        constraint measure_types_languages_iso_code_fk
            foreign key (language_iso_code) references languages (iso_code)
    );

    alter table measure_types
        add primary key (id, language_iso_code);

src/Entity/MeasureType.php:

    namespace Api\Entity;

    use DateTime;
    use DateTimeInterface;
    use Doctrine\ORM\Mapping as ORM;
    use Exception;

    /**
     * @ORM\Table(
     *     name="measure_types",
     *     indexes={
     *         @ORM\Index(name="measure_types_uindex", columns={"id", "language_iso_code"})
     *     },
     *     uniqueConstraints={
     *         @ORM\UniqueConstraint(name="measure_types_uindex", columns={"id", "language_iso_code"})
     *     }
     * )
     * @ORM\Entity(repositoryClass="Klarstein\Api\Repository\MeasureTypeRepository")
     * @ORM\HasLifecycleCallbacks
     */
    class MeasureType
    {
        /**
         * @ORM\Id()
         * @ORM\Column(type="string", length=10)
         */
        private string $id;

        /**
         * @ORM\Id()
         * @ORM\ManyToOne(targetEntity="Language", fetch="EAGER")
         * @ORM\JoinColumn(name="language_iso_code", referencedColumnName="iso_code")
         */
        private Language $language;

        /**
         * @ORM\Column(type="string", length=255, nullable=true)
         */
        private string $label;

        /**
         * @ORM\Column(type="datetime")
         */
        private DateTimeInterface $createdAt;

        /**
         * @ORM\Column(type="datetime", nullable=true)
         */
        private ?DateTimeInterface $updatedAt;

        public function __construct()
        {
            $this->createdAt = new DateTime();
        }

        /**
         * @ORM\PreUpdate
         *
         * @throws Exception
         */
        public function onPutHandler(): void
        {
            $this->updatedAt = new DateTime();
        }

        // ... getters and setters under this comment
    }

config/api/measure_type.yaml:

    resources:
        Api\Entity\MeasureType:
            properties:
                id:
                    identifier: true
                language:
                    identifier: true
            attributes:
                pagination_items_per_page: 25
            collectionOperations:
                get:
                    normalization_context:
                        groups: ['v1_get_collection']
                post:
                    normalization_context:
                        groups: ['v1_post_collection_response']
                    denormalization_context:
                        groups: ['v1_post_collection_request']
            itemOperations:
                get:
                    method: 'GET'
                    normalization_context:
                        groups: ['v1_get_item']
                put:
                    normalization_context:
                        groups: ['v1_put_item_response']
                    denormalization_context:
                        groups: ['v1_put_item_request']
                delete:
                    denormalization_context:
                        groups: ['v1_delete_item']

This works for the moment. But I want the doc to show that I expect two parameters as the identifier. What I'm doing: adding

    path: '/measure_types/id={id};language={language}'

to config/api/measure_type.yaml:

    resources:
        Api\Entity\MeasureType:
            properties:
                id:
                    identifier: true
                language:
                    identifier: true
            attributes:
                pagination_items_per_page: 25
            collectionOperations:
                get:
                    normalization_context:
                        groups: ['v1_get_collection']
                post:
                    normalization_context:
                        groups: ['v1_post_collection_response']
                    denormalization_context:
                        groups: ['v1_post_collection_request']
            itemOperations:
                get:
                    method: 'GET'
                    path: '/measure_types/id={id};language={language}'
                    normalization_context:
                        groups: ['v1_get_item']
                put:
                    normalization_context:
                        groups: ['v1_put_item_response']
                    denormalization_context:
                        groups: ['v1_put_item_request']
                delete:
                    denormalization_context:
                        groups: ['v1_delete_item']

Next step: I have a SwaggerEventRequire decorator, src/Swagger/SwaggerEventRequireDecorator.php:

    namespace Api\Swagger;

    use Symfony\Component\Finder\Finder;
    use Symfony\Component\Serializer\Normalizer\NormalizerInterface;
    use Symfony\Component\Yaml\Yaml;

    final class SwaggerEventRequireDecorator implements NormalizerInterface
    {
        private const SWAGGER_DECORATIONS = __DIR__ . '/../../config/swagger/';

        private NormalizerInterface $decorated;

        public function __construct(NormalizerInterface $decorated)
        {
            $this->decorated = $decorated;
        }

        public function normalize($object, string $format = null, array $context = [])
        {
            $docs = $this->decorated->normalize($object, $format, $context);
            $customDefinition = $this->loadDefinitions();
            foreach ($customDefinition as $path => $methods) {
                foreach ($methods as $method => $parameters) {
                    foreach ($parameters as $paramKey => $paramValues) {
                        if (empty($paramValues['name'])) {
                            continue;
                        }
                        if (empty($docs['paths'][$path]) || empty($docs['paths'][$path][$method])) {
                            continue;
                        }
                        // e.g. remove an existing event parameter
                        $docs['paths'][$path][$method]['parameters'] = array_values(
                            array_filter(
                                $docs['paths'][$path][$method]['parameters'],
                                function ($param) use ($paramKey) {
                                    return $param['name'] !== $paramKey;
                                }
                            )
                        );
                        // e.g. add the new definition for event
                        $docs['paths'][$path][$method]['parameters'][] = $paramValues;
                    }
                }
            }
            return $docs;
        }

        public function supportsNormalization($data, string $format = null)
        {
            return $this->decorated->supportsNormalization($data, $format);
        }

        private function loadDefinitions(): array
        {
            $result = [];
            $finder = new Finder();
            $finder->files()->in(self::SWAGGER_DECORATIONS)->name('*.yaml');
            foreach ($finder as $file) {
                $yaml = Yaml::parseFile(self::SWAGGER_DECORATIONS . $file->getFilename());
                if (empty($yaml)) {
                    continue;
                }
                $result = array_unique(array_merge($result, $yaml), SORT_REGULAR);
            }
            return $result;
        }
    }

and the decoration config file, config/swagger/measure_type.yaml:

    /measure_types/id={id};language={language}:
        get:
            id:
                name: 'id'
                description: 'Measure type ID'
                in: 'path'
                required: true
                type: 'string'
                example: 'ml'
            language:
                name: 'language'
                description: 'Language ISO code'
                in: 'path'
                required: true
                type: 'string'
                example: 'de'

As a result, I get a working form in the doc, but the wrong result. I want to do this without a custom controller, because I have many entities with composite primary keys and adding controllers for each one would take a lot of time. What am I doing wrong?

A: Solved. The solution was given on GitHub by Kevin.
{ "language": "en", "url": "https://stackoverflow.com/questions/60475383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: EF UpdateRange() not working with deleted child collections

    public class Parent
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public ICollection<Child> Children { get; set; }
    }

    public class Child
    {
        public int Id { get; set; }
        public int ParentId { get; set; }
        public string Name { get; set; }
    }

Say that I have one Parent, and the Parent has two Children:

    Parent  -- Id = 1 -- Name = "Mark"
    Child 1 -- Id = 1 -- ParentId = 1 -- Name = "Tom"
    Child 2 -- Id = 2 -- ParentId = 1 -- Name = "Jack"

After making the following changes (deleted Jack and renamed Tom to Luffy):

    Parent  -- Id = 1 -- Name = "Mark"
    Child 1 -- Id = 1 -- ParentId = 1 -- Name = "Luffy"
    Child 2 -- Deleted

Calling DbContext.ParentDbSet.UpdateRange(Mark) will see Tom successfully renamed to Luffy, but Jack remains undeleted:

    Parent  -- Id = 1 -- Name = "Mark"
    Child 1 -- Id = 1 -- ParentId = 1 -- Name = "Luffy"
    Child 2 -- Id = 2 -- ParentId = 1 -- Name = "Jack"

I can get around this by calling DbContext.ChildDbSet.Remove(Jack) before calling SaveChanges. However, I am trying to avoid this because, by the time I call DbContext.ParentDbSet.UpdateRange(Mark), Jack has already been removed from Mark. Currently I get the old Mark again from the database, compare it with the updated Mark (the one with Jack removed) to find out the deleted Child records, and then call RemoveRange(the deleted Child records). This doesn't seem like a good idea...

In brief, I want to achieve the following before calling SaveChanges():

* Update Parent
* Update Parent's Child records that have been changed
* Delete Parent's Child records that have been deleted
* Insert Parent's Child records that have been added (p.s. UpdateRange() seems to work with inserting child collections)

Is there a better way to do this? Thanks!
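Entity Framework specifics aside, the reconciliation the question performs by reloading the old graph can be expressed as a plain diff by key. Here is a language-neutral sketch (Python, dict-based rows, with a missing/zero `id` meaning "not yet saved"; all of these conventions are assumptions for illustration, not EF behavior):

```python
def diff_children(old, new):
    """Classify child rows by id into inserts, updates and deletes.

    `old` is the graph loaded from the database, `new` is the edited graph.
    Rows present only in `new` without an id are inserts; ids present in
    `old` but not `new` are deletes; ids present in both are updates.
    """
    old_by_id = {c["id"]: c for c in old}
    new_by_id = {c["id"]: c for c in new if c.get("id")}
    inserts = [c for c in new if not c.get("id")]
    deletes = [c for cid, c in old_by_id.items() if cid not in new_by_id]
    updates = [c for cid, c in new_by_id.items() if cid in old_by_id]
    return inserts, updates, deletes

# The question's scenario: Jack deleted, Tom renamed to Luffy.
old = [{"id": 1, "name": "Tom"}, {"id": 2, "name": "Jack"}]
new = [{"id": 1, "name": "Luffy"}]
ins, upd, dele = diff_children(old, new)
print(ins, upd, dele)
# [] [{'id': 1, 'name': 'Luffy'}] [{'id': 2, 'name': 'Jack'}]
```

In EF terms, the `deletes` bucket is what feeds RemoveRange before SaveChanges; the sketch just makes explicit the comparison the question already does against the freshly loaded Mark.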
{ "language": "en", "url": "https://stackoverflow.com/questions/67553492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it possible to work with a spring java project without internet necessity?

Hello brothers in code. Whenever I try to work without an internet connection (sometimes it goes down in my network), I ask myself the same question: "Is there a way to configure a Spring Maven Java project without needing an internet connection?" Since the first time this doubt came to me, I've been searching Google for the answer and have never found it. What would you all, Jedis of the coding, say to me about this? Thanks in advance for the help.

A: This is a general question, so I'll try to provide a general answer. In a nutshell, Spring itself does not require an internet connection at runtime, in the sense that it does not contain code that goes "somewhere on the internet" and queries for something. However, Spring has a lot of dependencies (just like your own project probably has dependencies), so Maven will have to bring them from somewhere on the first build. So Maven (which you've mentioned as your build tool) will, by default, require an internet connection. Of course, there are many options to overcome this "difficulty", and all of them boil down to making all these dependencies available so that you'll be able to compile the project without going to the internet. The actual solution can vary:

* Install Nexus/Artifactory, which will act as a proxy and download dependencies for you. This makes sense if your network infrastructure has an option to connect to the internet from some servers, leaving your "development machine" connected only to the internal network.
* Download the whole Maven repository with some crawler (it exposes a web interface) to your machine and use it there (if you work for an organization that doesn't have any kind of internet connection).
* Just take your PC to a place that has an internet connection and compile everything once; Maven will download all the dependencies and cache them in your local .m2 repository. So next time you'll be able to build your project even without an internet connection.

I know the last option sounds more like a joke, but it also technically works if you are, say, a student who doesn't have any connection at home but wants to try this "Spring thing" out :)

A: You can find some more information about Maven's offline flags in this post, for example: Is there a maven command line option for offline mode?
{ "language": "en", "url": "https://stackoverflow.com/questions/48593133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Should TypeScript's mapped types have a value for every key?

I am wondering whether every key in a mapped type should have a defined value. I would not expect this to be the case, as it would require indefinitely many values. However, then I'd expect Partial<T> to be assignable to a mapped type T, which it is not. Additionally, when reading a value from a mapped type, no typeof value === 'undefined' check is necessary.

    type Mapped = { [key: string]: string };

    // This works, so not all keys have to be defined
    const mapped: Mapped = { 'a': 'b' };

    // This is valid even with strict checks, even though 's' is undefined
    const s: string = mapped['c'];

    const partialMapped: Partial<Mapped> = mapped;

    // This doesn't work because `Partial<Mapped>` is of type `{ [key: string]: string | undefined }`
    const secondMapped: Mapped = partialMapped;

Since { 'a': 'b' } is assignable to a Mapped variable, I would expect Partial<Mapped> to be assignable to Mapped as well, but it is not. Is this because the value undefined is not the same as the absence of a key? Is there a variant of Partial that makes keys optional instead?

A: Your code works with strict null checks off, because you can then assign undefined values to a string. We're talking about values here; the keys are indifferent! Your index type signature for Mapped promises no particular keys, so assigning a type with zero or more keys will work. What isn't allowed is the assignment of non-string values (like undefined and null), and this is what the error is highlighting. Here is a contrived example of what the error is guarding against:

    var undef: undefined;
    const partialMapped: Partial<Mapped> = { a: undef, b: undef }; // Works

The partial version can accept undefined values. The "full" version wouldn't allow this:

    var undef: undefined;
    const mapped: Mapped = { a: undef, b: undef }; // Warnings!
{ "language": "en", "url": "https://stackoverflow.com/questions/56560321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Display different login page based on requirements in Phonegap android apps? My Phonegap application login page contains a few input fields like username, password, url, device type, etc... that the user has to enter to activate the application when they log in for the first time. From then on, the user has to enter only the username and password, as the application is already activated. So I want to display only the username and password fields on subsequent logins. I have a Phonegap plugin to check whether the app is activated or not. I want to call this plugin and, based on the callback response, show only the necessary fields on the login page. So what would be the easiest way to do this? I tried this: $("#loginPage").live('pagebeforeshow',function(event, ui){ window.plugins.AuthPlugin.appIsActive(appIsActiveCallBack); }); function appIsActiveCallBack(result){//Show only relative fields...} But I got this error: Uncaught TypeError: Cannot call method 'appIsActive' of undefined Thanks in advance. A: You are getting the Uncaught TypeError because you are trying to call the plugin before the device is even ready... Call the plugin only when the device is ready... UPDATE What you have to do is determine which fields you need to show on the native side only... After that, pass a variable (flag) from Java to JavaScript which will be accessible at pagebeforeshow So your Activity will look something like this /* * Some database operations which you need to check * so that you can determine what to show */ this.setIntegerProperty("loadUrlTimeoutValue", 70000); super.loadUrl("file:///android_asset/www/index.html", 20000); super.loadUrl("javascript: { var pageFlag = '" + flag + "';}"); And in your index.html show like this $("#loginPage").live('pagebeforeshow',function(event, ui){ alert(pageFlag); }); After that, with the use of the flag you can determine what to show and what not to... Hope it helps...
{ "language": "en", "url": "https://stackoverflow.com/questions/11449832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: multiple definition in header file Given this code sample: complex.h : #ifndef COMPLEX_H #define COMPLEX_H #include <iostream> class Complex { public: Complex(float Real, float Imaginary); float real() const { return m_Real; }; private: friend std::ostream& operator<<(std::ostream& o, const Complex& Cplx); float m_Real; float m_Imaginary; }; std::ostream& operator<<(std::ostream& o, const Complex& Cplx) { return o << Cplx.m_Real << " i" << Cplx.m_Imaginary; } #endif // COMPLEX_H complex.cpp : #include "complex.h" Complex::Complex(float Real, float Imaginary) { m_Real = Real; m_Imaginary = Imaginary; } main.cpp : #include "complex.h" #include <iostream> int main() { Complex Foo(3.4, 4.5); std::cout << Foo << "\n"; return 0; } When compiling this code, I get the following error: multiple definition of operator<<(std::ostream&, Complex const&) I've found that making this function inline solves the problem, but I don't understand why. Why does the compiler complain about multiple definition? My header file is guarded (with #define COMPLEX_H). And, if complaining about the operator<< function, why not complain about the public real() function, which is defined in the header as well? And is there another solution besides using the inline keyword? A: The problem is that the following piece of code is a definition, not a declaration: std::ostream& operator<<(std::ostream& o, const Complex& Cplx) { return o << Cplx.m_Real << " i" << Cplx.m_Imaginary; } You can either mark the function above and make it "inline" so that multiple translation units may define it: inline std::ostream& operator<<(std::ostream& o, const Complex& Cplx) { return o << Cplx.m_Real << " i" << Cplx.m_Imaginary; } Or you can simply move the original definition of the function to the "complex.cpp" source file. 
The compiler does not complain about "real()" because it is implicitly inlined (any member function whose body is given in the class declaration is interpreted as if it had been declared "inline"). The preprocessor guards prevent your header from being included more than once from a single translation unit ("*.cpp" source file"). However, both translation units see the same header file. Basically, the compiler compiles "main.cpp" to "main.o" (including any definitions given in the headers included by "main.cpp"), and the compiler separately compiles "complex.cpp" to "complex.o" (including any definitions given in the headers included by "complex.cpp"). Then the linker merges "main.o" and "complex.o" into a single binary file; it is at this point that the linker finds two definitions for a function of the same name. It is also at this point that the linker attempts to resolve external references (e.g. "main.o" refers to "Complex::Complex" but does not have a definition for that function... the linker locates the definition from "complex.o", and resolves that reference). A: Move implementation to complex.cpp Right now after including this file implementation is being compiled to every file. Later during linking there's a obvious conflict because of duplicate implementations. ::real() is not reported because it's inline implicitly (implementation inside class definition) A: I was having this problem, even after my source and header file were correct. It turned out Eclipse was using stale artifacts from a previous (failed) build. To fix, use Project > Clean then rebuild. A: An alternative to designating a function definition in a header file as inline is to define it as static. This will also avoid the multiple definition error.
{ "language": "en", "url": "https://stackoverflow.com/questions/2727582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Django Rest Framework - React - Unable to login immediately after logout This is very strange. I have a React front end and a Django backend with djangorestframework and django-allauth for authentication. Everything works fine from Postman. But in the browser, when I successfully sign in, successfully sign out, and then try to sign in again, I get a 401 Unauthorized error. The correct user credentials are sent to the server just as on the first successful attempt, yet I get a 401 error. However, after I refresh the browser I am able to sign in normally again. I use JWT for authentication and I append the token to the Authorization header before sign out. I even tried clearing out the Authorization header after a successful sign out, but to no avail. It is the same problem with my React Native front end. I don't know if this is a React or a Django problem. Please does anyone have any idea what the problem might be? Thanks. A: Problem solved! I appended the token to the Authorization header like so: request.headers['Authorization'] = `Token ${token}` Except when signing out, every other request did not require the Authorization header to be set as above. So after sign out, the Authorization header becomes: request.headers.Authorization = "Token null" That null value of the token would make every request after sign out "Unauthorized". So to solve this, I had to set the Authorization header for every request when there is a token, and delete the Authorization entry from the header object when there is no token, like so: delete request.headers.Authorization
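The fix generalizes beyond axios/React: attach the Authorization header only when a token actually exists, and drop it on sign-out. A minimal Python sketch of that bookkeeping (the helper name is illustrative; the "Token ..." scheme mirrors the answer):

```python
# Sketch of the header handling described above: never send
# "Authorization: Token null" after sign-out -- only attach the
# header when a real token is present.
def build_headers(token):
    headers = {"Content-Type": "application/json"}
    if token:  # falsy after sign-out (None or empty string)
        headers["Authorization"] = f"Token {token}"
    return headers
```

With this shape, a signed-out client simply sends no Authorization header at all, so the server treats the request as anonymous instead of rejecting a malformed token.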
{ "language": "en", "url": "https://stackoverflow.com/questions/62288112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Inject service in app.config? I want to inject a service into app.config. Any ideas, please? app.js 'use strict'; angular.module('crud', [ 'ngRoute', 'angular-jwt', 'ngSails', 'ngMessages', 'ngResource' ]) .config(function ($httpProvider,$routeProvider, $locationProvider,$sailsProvider,jwtInterceptorProvider,User) { //$httpProvider.interceptors.push('jwtInterceptor'); //console.log($sailsProvider); $routeProvider .otherwise({ redirectTo: '/' }); $locationProvider.html5Mode(true); }); Serviceuser.js 'use strict'; angular.module('crud').service('User', function ($sails) { //console.log($sails); return { signup: function (data) { return $sails.post('/api/user',data); } } }); A: Here you go, straight from the docs: Registering a Service with $provide You can also register services via the $provide service inside of a module's config function: angular .module('myModule', []) .config(['$provide', function($provide) { $provide.factory('serviceId', function() { var shinyNewServiceInstance; // factory function body that constructs shinyNewServiceInstance return shinyNewServiceInstance; }); } ]); This technique is often used in unit tests to mock out a service's dependencies. Hope this helps.
(function() { 'use strict'; angular .module('example.app', []) .config(['$provide', function($provide) { $provide.factory('serviceId', function() { var shinyNewServiceInstance; // factory function body that constructs shinyNewServiceInstance return shinyNewServiceInstance; }); } ]) .controller('ExampleController', ExampleController) .service('exampleService', exampleService); exampleService.$inject = ['$http']; function ExampleController(exampleService) { var vm = this; vm.update = function(person, index) { exampleService.updatePeople(person).then(function(response) { vm.persons = response; }, function(reason) { console.log(reason); }); }; } // good practice to use uppercase variable for URL, to denote constant. //this part should be done in a service function exampleService($http) { var URL = 'https://beta.test.com/auth/authenticate/', data = {}, service = { updatePeople: updatePeople }; return service; function updatePeople(person) { //person would be update of person. return $http .post(URL, person) .then(function(response) { return response.data; }, function(response) { return response; }); } } })(); A: you can use it like this: angular.module('app', ["ui.router"]) .config(function config ($stateProvider){ $stateProvider.state("index", { url:"", controller: "FirstCtrl as first", templateUrl: "first.html" }) .state("second", { url:"/second", controller:"SecondCtrl as second", templateUrl: "second.html" }) }) here is the full working example with plunker
{ "language": "en", "url": "https://stackoverflow.com/questions/37059087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: python/Scipy Interpolation of 3D scattered data I want to implement an interpolation of 3D scattered data using Python. For example, z[0,0] = 1, z[0,4] = 2, z[2,2] = 4, z[4,0] = 3, z[4,4] = 2 (X-like structured 2D vectors with a z value), and I want to interpolate these data so that I can derive a z value at intermediate (x,y) coordinates like z[1,2], z[2,3], z[1,1], ..., etc. I coded it like below using scipy and numpy, but I guess using numpy.empty((5,5)) is not suitable for my purpose because numpy.empty() adds values which are irrelevant. And I want to interpolate with only the 5 points. from scipy import interpolate import numpy as np x = np.linspace(0,4,5) y = np.linspace(0,4,5) xx,yy=np.meshgrid(x,y) z = np.empty((5,5)) # I think this is wrong z[0,0] = 1, z[0,4] =2, z[2,2]=4, z[4,0]=3, z[4,4]= 2 f = interpolate.interp2d(x,y,z,kind='linear') xt = np.linspace(0,4,10) yt = np.linspace(0,4,10) xxt, yyt = np.meshgrid(xt,yt) ct = f(xt,yt) My expected output is a 2D 5x5 (or larger, such as 7x7, 9x9, etc.) array which is the result of interpolating the 5 fixed points. Could you help me?
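For scattered points like these, scipy.interpolate.griddata is the usual tool: it takes the five (x, y) points and their z values directly, with no need to pre-fill a 5x5 grid via np.empty. To show the underlying idea without any dependencies, here is a stdlib-only inverse-distance-weighting sketch over the same five points. This is a stand-in illustration, not a substitute for scipy's proper interpolants:

```python
# Inverse-distance-weighted estimate over the 5 scattered points from
# the question -- a dependency-free illustration of interpolating
# scattered data; for real use, pass the points and values to
# scipy.interpolate.griddata instead of building a dense grid first.
points = {(0, 0): 1.0, (0, 4): 2.0, (2, 2): 4.0, (4, 0): 3.0, (4, 4): 2.0}

def idw(x, y, power=2):
    num = den = 0.0
    for (px, py), pz in points.items():
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return pz  # query lands exactly on a known point
        w = 1.0 / (d2 ** (power / 2))
        num += w * pz
        den += w
    return num / den
```

The estimate reproduces the known points exactly and blends between them elsewhere, which is the behavior the question is after.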
{ "language": "en", "url": "https://stackoverflow.com/questions/63470111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Appending data to a Google Sheet using Python I have 3 different tables I'm looking to directly push to 3 separate tabs in a Google Sheet. I set up the GSpread connection and that's working well. I started to adjust my first print statement into what I thought would append the information to Tab A (waveData), but no luck. I'm looking to append the information to the FIRST blank row in a tab. Basically, so that the data will be ADDED to what is already in there. I'm trying to use append_rows to do this, but am hitting a "gspread.exceptions.APIError: {'code': 400, 'message': 'Invalid value at 'data.values' (type.googleapis.com/google.protobuf.ListValue). I'm really new to this, just thought it would be a fun project to evaluate wave sizes in NJ across all major surf spots, but really in over my head (no pun intended). Any thoughts? import requests import pandas as pd import gspread gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('152qSpr-4nK9V5uHOiYOWTWUx4ojjVNZMdSmFYov-n50') waveData = sh.get_worksheet(0) tideData = sh.get_worksheet(1) lightData = sh.get_worksheet(2) # AddValue = ["Test", 25, "Test2"] # lightData.insert_row(AddValue, 3) id_list = [ '/Belmar-Surf-Report/3683/', '/Manasquan-Surf-Report/386/', '/Ocean-Grove-Surf-Report/7945/', '/Asbury-Park-Surf-Report/857/', '/Avon-Surf-Report/4050/', '/Bay-Head-Surf-Report/4951/', '/Belmar-Surf-Report/3683/', '/Boardwalk-Surf-Report/9183/', ] for x in id_list: waveData.append_rows(pd.read_html(requests.get('http://magicseaweed.com' + x).text) [2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]].to_json(), value_input_option="USER_ENTERED") # print(pd.read_html(requests.get('http://magicseaweed.com' + x).text)[0]) # print(pd.read_html(requests.get('http://magicseaweed.com' + x).text)[1]) A: From your following reply, there really is no relationship between the 3. When I scrape with IMPORTHTML into Google sheets, those are just Tables at the locations 0,1, and 2. 
I'm basically just trying to have an output of each table on a separate tab. I understood that you wanted to retrieve the values with pd.read_html(requests.get('http://magicseaweed.com' + x).text)[2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]] from id_list, and wanted to put the values into a sheet in Google Spreadsheet. In this case, how about the following modification? At append_rows, it seems that JSON data cannot be directly used. In this case, it is required to use a 2-dimensional array. And, I'm worried about the value of NaN in the dataframe. When these points are reflected in your script, how about the following modification? Modified script 1: In this sample, all values are put into a sheet. gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('152qSpr-4nK9V5uHOiYOWTWUx4ojjVNZMdSmFYov-n50') waveData = sh.get_worksheet(0) id_list = [ "/Belmar-Surf-Report/3683/", "/Manasquan-Surf-Report/386/", "/Ocean-Grove-Surf-Report/7945/", "/Asbury-Park-Surf-Report/857/", "/Avon-Surf-Report/4050/", "/Bay-Head-Surf-Report/4951/", "/Belmar-Surf-Report/3683/", "/Boardwalk-Surf-Report/9183/", ] # I modified the below script. res = [] for x in id_list: df = pd.read_html(requests.get("http://magicseaweed.com" + x).text)[2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]].fillna("") values = [[x], df.columns.values.tolist(), *df.values.tolist()] res.extend(values) res.append([]) waveData.append_rows(res, value_input_option="USER_ENTERED") * *When this script is run, the retrieved values are put into the 1st sheet as follows. In this sample modification, the path and a blank row are inserted between each dataset. Please modify this for your actual situation. Modified script 2: In this sample, each value is put into each sheet. 
gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('152qSpr-4nK9V5uHOiYOWTWUx4ojjVNZMdSmFYov-n50') id_list = [ "/Belmar-Surf-Report/3683/", "/Manasquan-Surf-Report/386/", "/Ocean-Grove-Surf-Report/7945/", "/Asbury-Park-Surf-Report/857/", "/Avon-Surf-Report/4050/", "/Bay-Head-Surf-Report/4951/", "/Belmar-Surf-Report/3683/", "/Boardwalk-Surf-Report/9183/", ] obj = {e.title: e for e in sh.worksheets()} for e in id_list: if e not in obj: obj[e] = sh.add_worksheet(title=e, rows="1000", cols="26") for x in id_list: df = pd.read_html(requests.get("http://magicseaweed.com" + x).text)[2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]].fillna("") values = [df.columns.values.tolist(), *df.values.tolist()] obj[x].append_rows(values, value_input_option="USER_ENTERED") * *When this script is run, the sheets are checked and created with the sheet names of the values in id_list, and each value is put to each sheet. Reference: * *append_rows
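Independent of gspread itself, the key change in both modified scripts is the shape of the data: append_rows wants a 2-dimensional list, not JSON. The reshaping step can be shown with plain lists standing in for df.columns.values.tolist() and df.values.tolist() (the header and row values here are made up):

```python
# Building the 2-D values list that append_rows expects -- header and
# rows stand in for df.columns.values.tolist() / df.values.tolist().
path = "/Belmar-Surf-Report/3683/"
header = ["col_a", "col_b", "col_c"]        # hypothetical column names
rows = [["x1", "y1", 1], ["x2", "y2", 2]]   # hypothetical data rows

# Modified script 1's layout: path row, header, data rows, blank spacer.
values = [[path], header, *rows, []]
```

Each inner list becomes one spreadsheet row, which is why the path is wrapped as [path] and the spacer is an empty list rather than an empty string.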
{ "language": "en", "url": "https://stackoverflow.com/questions/74756517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Quickest way to delete text from a string in python based on a list? Maybe regex? Hi I have a very long string that has the following structure: "IF ( ISFILTERED ( Table1[Column_1] ), VAR ___f = FILTERS ( Table1[Column_1] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_1]) VAR ___d = CONCATENATEX ( ___t, Table1[Column_1], ", " ) VAR ___x = "Table1[Column_1] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & IF ( ISFILTERED ( Table1[Column_2] ), VAR ___f = FILTERS ( Table1[Column_2] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_2]) VAR ___d = CONCATENATEX ( ___t, Table1[Columnw_1], ", " ) VAR ___x = "Table1[Column_2] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & IF ( ... It basically continues like that, iterating through each column of each table in a schema. As you can imagine, this string will be very large for a lot of tables/columns. I have a list of columns from specific tables like this: ['Table1[Column_1]', 'Table2[Column_4]', 'Table6[Column_22]'] These are the only columns I am interested in keeping in the string. So I need to go through the string and remove the entire IF statement it relates to if the table/column is not in the list. So based on the above example the expected output would be: "IF ( ISFILTERED ( Table1[Column_1] ), VAR ___f = FILTERS ( Table1[Column_1] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_1]) VAR ___d = CONCATENATEX ( ___t, Table1[Column_1], ", " ) VAR ___x = "Table1[Column_1] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & IF ( ... we just got rid of the second IF because Table1[Column_2] was not in the list. Would regex be useful for this case? 
Or maybe I should iterate through the list and build a new string that just keeps the relevant parts. I know it is best practise to show what you have attempted so far but I am not sure where to start with this it seems like it should be easy but I am having trouble. Can anyone help me please? Python solutions would be best as I know that more, but happy to investigate other methods if easier. I know there are regex tools online maybe I can just use one of those? A: Assuming string structure is constant. You can try this, but it depends on structure. data = YOUR_STRING_FROM_QUESTION # This is the delimeter, which will help us to split query on parts prefix = '& IF (\n' # define list of allowed tables allowed_tables = ['Table1[Column_1]', 'Table2[Column_4]', 'Table6[Column_22]'] # split full string on parts by prefix query_by_part = data.split(prefix) # build clean list of query parts, where tables in allowed_tables clean_query_parts = [part for part in query_by_part if any(table in part for table in allowed_tables)] # finally join list to string using prefix and print print(prefix.join(clean_query_parts)) Input: IF ( ISFILTERED ( Table1[Column_1] ), VAR ___f = FILTERS ( Table1[Column_1] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_1]) VAR ___d = CONCATENATEX ( ___t, Table1[Column_1], ", " ) VAR ___x = "Table1[Column_1] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & IF ( ISFILTERED ( Table1[Column_2] ), VAR ___f = FILTERS ( Table1[Column_2] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_2]) VAR ___d = CONCATENATEX ( ___t, Table1[Columnw_1], ", " ) VAR ___x = "Table1[Column_2] = " & ___d & IF(___r > MaxFilters, ", ... 
[" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) Output: IF ( ISFILTERED ( Table1[Column_1] ), VAR ___f = FILTERS ( Table1[Column_1] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_1]) VAR ___d = CONCATENATEX ( ___t, Table1[Column_1], ", " ) VAR ___x = "Table1[Column_1] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) A: You can use jinja2 to easily generate the string from scratch in this way: * *Add a new file with the double extension 'txt.jinja', e.g., example.txt.jinja2 with the following code inside (this is your template) and save it in the same path of your script: " {%- for column in columns_to_add -%} IF ( ISFILTERED ( Table1[Column_2] ), VAR ___f = FILTERS ( Table1[Column_2] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_2]) VAR ___d = CONCATENATEX ( ___t, Table1[Columnw_1], ", " ) VAR ___x = "{{column}} = " & ___d & IF(___r > MaxFilters, ", ... 
[" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & {% endfor -%} " *Execute this Python script: import os import jinja2 TEMPLATE_NAME = "example.txt.jinja2" LIST_OF_TABLES = [ 'Table1[Column_1]', 'Table2[Column_4]', 'Table6[Column_22]' ] template = os.path.join(os.path.dirname(os.path.abspath(__file__)), "") jinja_env = jinja2.Environment(loader=jinja2.FileSystemLoader(template)) template2= jinja_env.get_template(TEMPLATE_NAME) string = template2.render(columns_to_add=LIST_OF_TABLES) output = template + TEMPLATE_NAME.split("\\")[-1].replace('.jinja2', '') with open(output, "w+") as f: f.write(string) This will create a new file (in the same path) called example.txt with your string: "IF ( ISFILTERED ( Table1[Column_2] ), VAR ___f = FILTERS ( Table1[Column_2] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_2]) VAR ___d = CONCATENATEX ( ___t, Table1[Columnw_1], ", " ) VAR ___x = "Table1[Column_1] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & IF ( ISFILTERED ( Table1[Column_2] ), VAR ___f = FILTERS ( Table1[Column_2] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_2]) VAR ___d = CONCATENATEX ( ___t, Table1[Columnw_1], ", " ) VAR ___x = "Table2[Column_4] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & IF ( ISFILTERED ( Table1[Column_2] ), VAR ___f = FILTERS ( Table1[Column_2] ) VAR ___r = COUNTROWS ( ___f ) VAR ___t = TOPN ( MaxFilters, ___f, Table1[Column_2]) VAR ___d = CONCATENATEX ( ___t, Table1[Columnw_1], ", " ) VAR ___x = "Table6[Column_22] = " & ___d & IF(___r > MaxFilters, ", ... [" & ___r & " items selected]") & " " RETURN ___x & UNICHAR(13) & UNICHAR(10) ) & " This works fine no matter how big your list of desired columns is, and the best part is that you only have to keep this list updated.
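The split-and-filter idea from the first answer can be checked on a toy string. The delimiter matches the answer's prefix; the block contents are short stand-ins for the long VAR ... RETURN bodies:

```python
# Toy check of the split/filter approach: split on the "& IF (\n"
# delimiter, keep only parts mentioning an allowed column, rejoin.
prefix = "& IF (\n"
data = (
    "IF ( ISFILTERED ( Table1[Column_1] ), ... )\n"
    + prefix
    + "ISFILTERED ( Table1[Column_2] ), ... )\n"
    + prefix
    + "ISFILTERED ( Table2[Column_4] ), ... )"
)
allowed_tables = ["Table1[Column_1]", "Table2[Column_4]", "Table6[Column_22]"]

parts = data.split(prefix)
kept = [p for p in parts if any(t in p for t in allowed_tables)]
result = prefix.join(kept)
```

As in the answer's worked example, the block referencing Table1[Column_2] drops out while the allowed blocks survive, and rejoining with the same prefix keeps the query syntactically intact.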
{ "language": "en", "url": "https://stackoverflow.com/questions/74041286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Download in CSV File using PHP I am trying to convert a MySQL table to CSV and download it from the server. I have uploaded this script to the server. <?php //export.php require 'db_connection.php'; header('Content-Type: text/csv; charset=utf-8'); header('Content-Disposition: attachment; filename=data.csv'); $output = fopen("php://output", "w"); $sql = "select * from tablaregistro"; $stmt= $pdo->prepare($sql); $stmt->execute(); while($row = $stmt->fetch()) { fputcsv($output, $row); } fclose($output); ?> This script converts the table to CSV and the file is downloaded as "data.csv" on localhost, but when I upload this script to the server, it just displays the CSV on the whole page instead of downloading it. I hope my question is clear.
{ "language": "en", "url": "https://stackoverflow.com/questions/51801179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to update data in server database in same field in iphone I am trying to update my data, but it is not updating and I don't know why. When a registered user wants to change their old data, I show an update page where they can change everything except the username. I have tried a lot, but the data is not being updated in the server database table. Is my code correct or wrong? -(void)sendRequest { NSString *post = [NSString stringWithFormat:@"firstname=%@&lastname=%@&Username=%@&Password=%@&Email=%@",txtfirstName.text,txtlast.text,txtUserName.text,txtPassword.text,txtEmail.text]; NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]]; NSLog(@"%@",postLength); NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease]; [request setURL:[NSURL URLWithString:@"http://192.168.0.1:96/JourneyMapperAPI?RequestType=Register&Command=SET"]]; [request setHTTPMethod:@"POST"]; [request setValue:postLength forHTTPHeaderField:@"Content-Length"]; [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"]; [request setHTTPBody:postData]; NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self]; if (theConnection) { webData = [[NSMutableData data] retain]; NSLog(@"%@",webData); } else { } } -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { [webData setLength: 0]; } -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [webData appendData:data]; } -(void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { [connection release]; [webData release]; } -(void)connectionDidFinishLoading:(NSURLConnection *)connection { NSString *loginStatus = [[NSString alloc] initWithBytes: [webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding]; NSLog(@"%@",loginStatus); } A: you need to start the connection: 
NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self]; if (theConnection) { webData = [[NSMutableData data] retain]; NSLog(@"%@",webData); [theConnection start]; } else { }
{ "language": "en", "url": "https://stackoverflow.com/questions/6422924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Chrome Extensions "chrome.storage.local" data updating trouble I am working on a chrome extension and I need to use local storage to send data from the options page to background scripts. Options page script: function addToStorage(key, val){ let obj = {}; obj[key] = val; chrome.storage.local.set( obj, function() { if(chrome.runtime.lastError) { console.error( "Error setting " + key + " to " + JSON.stringify(val) + ": " + chrome.runtime.lastError.message ); } }); } Background: chrome.storage.local.get('code', function(code) { ... with code.code ... }); For example: Right now the chrome.storage.local code value is abcd I'm performing addToStorage('code', '1234') from the options page script After that, the code value in the background script only changes when I manually click "update" on the chrome extensions page How can I automatically get the up-to-date data in the background script? A: As written, the background script only checks storage once, when it starts. You could pass a message from the options script to the background script after you update local storage and use that as a trigger to check storage. Try this: Options page function addToStorage(key, val){ let obj = {}; obj[key] = val; chrome.storage.local.set( obj, function() { if(chrome.runtime.lastError) { console.error( "Error setting " + key + " to " + JSON.stringify(val) + ": " + chrome.runtime.lastError.message ); } chrome.runtime.sendMessage({status: "Storage Updated"}, function (response) { console.log(response); }) }); } Background Page: chrome.runtime.onMessage.addListener( function (request, sender, sendResponse) { if (request.status === "Storage Updated") { chrome.storage.local.get('code', function(code) { // ... with code.code ... }); sendResponse({status: "Update Received"}); } } ); Hope that helps, message passing docs here: https://developer.chrome.com/extensions/messaging
{ "language": "en", "url": "https://stackoverflow.com/questions/64081262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why do R and statsmodels give slightly different ANOVA results? Using a small R sample dataset and the ANOVA example from statsmodels, the degrees of freedom for one of the variables are reported differently, & the F-values results are also slightly different. Perhaps they have slightly different default approaches? Can I set up statsmodels to use R's defaults? import pandas as pd import statsmodels.api as sm from statsmodels.formula.api import ols ##R code on R sample dataset #> anova(with(ChickWeight, lm(weight ~ Time + Diet))) #Analysis of Variance Table # #Response: weight # Df Sum Sq Mean Sq F value Pr(>F) #Time 1 2042344 2042344 1576.460 < 2.2e-16 *** #Diet 3 129876 43292 33.417 < 2.2e-16 *** #Residuals 573 742336 1296 #write.csv(file='ChickWeight.csv', x=ChickWeight, row.names=F) cw = pd.read_csv('ChickWeight.csv') cw_lm=ols('weight ~ Time + Diet', data=cw).fit() print(sm.stats.anova_lm(cw_lm, typ=2)) # sum_sq df F PR(>F) #Time 2024187.608511 1 1523.368567 9.008821e-164 #Diet 108176.538530 1 81.411791 2.730843e-18 #Residual 764035.638024 575 NaN NaN Head and tail of the datasets are the same*, also mean, min, max, median of weight and time. A: Looks like "Diet" only has one degree of freedom in the statsmodels call which means it was probably treated as a continuous variable whereas in R it has 3 degrees of freedom so it probably was a factor/discrete random variable. To make ols() treat "Diet" as a categorical random variable, use cw_lm=ols('weight ~ C(Diet) + Time', data=cw).fit()
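The degrees-of-freedom discrepancy follows directly from how the predictor is coded: a continuous predictor contributes 1 df, while a factor with k levels is dummy-coded into k-1 indicator columns (ChickWeight's Diet has 4 levels, hence 3 df in R). A small dependency-free sketch of that coding, illustrating the idea rather than statsmodels' or patsy's internals:

```python
# Dummy-coding a 4-level factor: the first level acts as the baseline
# and each of the remaining k-1 = 3 levels gets an indicator column --
# which is why R (and C(Diet) in statsmodels) reports 3 df for Diet,
# while treating Diet as a number gives a single slope and 1 df.
levels = [1, 2, 3, 4]          # the four Diet levels
baseline, *rest = levels

def dummy_code(value):
    return [1 if value == lvl else 0 for lvl in rest]
```

Each observation's Diet value maps to a length-3 row of indicators, so the Diet term adds three columns to the design matrix instead of one.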
{ "language": "en", "url": "https://stackoverflow.com/questions/28755617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Success and failure actions when downloading a file in MVC A button on my page calls a controller method that returns a file, like so: // Update number of downloads in database // Return the file return File(filedata, contentType); In the view, this button is defined as: <img onclick="location.href ='@Url.Action("DownloadVersion", new { fileVersionId = version.FileVersionId })'" src="@Url.Content("~/Images/download.png")", alt="Download" title="Download"/> I need an element on my page to update dynamically with the number of downloads once the download has started. At the moment I have no way of informing the page to update. Currently I don't see a way of doing this. I have tried calling the controller method via ajax instead so I get the success and error callbacks, however this seems to mean that the browser never actually downloads the file (I have read in other posts that this seems to be a limitation of ajax). Unfortunately, a direct link to the file is not an option. The download has to occur via the controller method so that the user does not have access to the file url path (to check if they are authorized to download it). Is there a viable solution to this? A: You could do something similar to - https://stackoverflow.com/a/3667379/33116 - you can then instead of redirecting you can do a request to a web api end point which can return the download count and update an element with the returned value.
{ "language": "en", "url": "https://stackoverflow.com/questions/35150052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: NSMutableDictionary overrides the first value In my sample code I calculate the break time between entries. It works for two entries, but if I add another entry it overrides the previous break time, and I don't know where I am going wrong. This is my first sample: what I get for 2 entries. This is my second sample: what I get for 3 entries. Here is my sample code NSMutableArray *arrData = [[data objectForKey:@"data"]mutableCopy]; NSLog(@"%@",arrData); NSLog displays this data before adding the third entry: { date = "2016-01-20"; "end_time" = "11:10:00"; "function_code" = RCV; "operator_id" = JOHN; "start_time" = "11:00:00"; "total_time" = 10; "total_units" = 19; }, { date = "2016-01-20"; "end_time" = "12:25:00"; "function_code" = PIK; "operator_id" = JOHN; "start_time" = "12:15:00"; "total_time" = 10; "total_units" = 26; }) NSMutableDictionary *thirdEntry = [NSMutableDictionary dictionary]; [thirdEntry setObject:@"02:00:00" forKey:@"end_time"]; [thirdEntry setObject:@"PUL" forKey:@"function_code"]; [thirdEntry setObject:@"01:25:00" forKey:@"start_time"]; [thirdEntry setObject:@"45" forKey:@"total_units"]; [thirdEntry setObject:@"20" forKey:@"total_time"]; [arrData addObject:thirdEntry]; NSLog(@"%@",arrData); //dictionary NSMutableDictionary *dictData =[NSMutableDictionary dictionary]; NSMutableDictionary *dictData1=[NSMutableDictionary dictionary]; NSMutableArray *arrayProgressDate =[NSMutableArray array]; //storing the breaktime values NSMutableDictionary *dictValues =[NSMutableDictionary dictionary]; for (int i =0; i<arrData.count; i++) { dictData =arrData[i]; [arrayProgressDate addObject:dictData]; NSString *strendTime1 =[dictData objectForKey:@"end_time"]; NSLog(@"%@",strendTime1); [dictValues setObject:strendTime1 forKey:@"end_time_value"]; if ((i + 1) < arrData.count) { dictData1=arrData[i+1]; NSString *strStartTimeNext = [dictData1 objectForKey:@"start_time"]; NSLog(@"%@",strStartTimeNext ); //[arrayProgressDate addObject:strStartTimeNext]; [dictValues setObject:strStartTimeNext
forKey:@"start_time_value"]; //calculating break time NSDate *endTimeDate = [[DateHelper sharedHelper ] dateFromString:strendTime1 withFormat:@"HH:mm:ss"]; NSDate *startTimeDate = [[DateHelper sharedHelper]dateFromString:strStartTimeNext withFormat:@"HH:mm:ss"]; NSTimeInterval timeElapsedInSeconds = [endTimeDate timeIntervalSinceDate:startTimeDate]; double hours = timeElapsedInSeconds / 3600.0; NSLog(@"%f",hours); int breakTimeInMinutes = timeElapsedInSeconds/60; breakTimeInMinutes =ABS(breakTimeInMinutes); NSString *newStr =[NSString stringWithFormat:@"%i",breakTimeInMinutes]; NSLog(@"%@",newStr); [dictValues setObject:newStr forKey:@"break_time"]; [arrayProgressDate addObject:dictValues]; } NSLog(@"%@",dictData); } NSLog(@"%@",arrayProgressDate); Please tell me what I am doing wrong; I want to calculate the break time. A: This line NSMutableDictionary *dictValues =[NSMutableDictionary dictionary]; should be inside the for loop. While finding the break time you must consider the date as well. Otherwise you will get wrong values.
NSMutableDictionary *thirdEntry = [NSMutableDictionary dictionary]; [thirdEntry setObject:@"02:00:00" forKey:@"end_time"]; [thirdEntry setObject:@"PUL" forKey:@"function_code"]; [thirdEntry setObject:@"01:25:00" forKey:@"start_time"]; [thirdEntry setObject:@"45" forKey:@"total_units"]; [thirdEntry setObject:@"20" forKey:@"total_time"]; [arrData addObject:thirdEntry]; NSLog(@"%@",arrData); //dictionary NSMutableDictionary *dictData =[NSMutableDictionary dictionary]; NSMutableDictionary *dictData1=[NSMutableDictionary dictionary]; NSMutableArray *arrayProgressDate =[NSMutableArray array]; for (int i =0; i<arrData.count; i++) { dictData =arrData[i]; [arrayProgressDate addObject:dictData]; NSString *strendTime1 =[dictData objectForKey:@"end_time"]; NSLog(@"%@",strendTime1); //storing the breaktime values NSMutableDictionary *dictValues =[NSMutableDictionary dictionary]; [dictValues setObject:strendTime1 forKey:@"end_time_value"]; if ((i + 1) < arrData.count) { dictData1=arrData[i+1]; NSString *strStartTimeNext = [dictData1 objectForKey:@"start_time"]; NSLog(@"%@",strStartTimeNext ); //[arrayProgressDate addObject:strStartTimeNext]; [dictValues setObject:strStartTimeNext forKey:@"start_time_value"]; //calculating break time NSDate *endTimeDate = [[DateHelper sharedHelper ] dateFromString:strendTime1 withFormat:@"HH:mm:ss"]; NSDate *startTimeDate = [[DateHelper sharedHelper]dateFromString:strStartTimeNext withFormat:@"HH:mm:ss"]; NSTimeInterval timeElapsedInSeconds = [endTimeDate timeIntervalSinceDate:startTimeDate]; double hours = timeElapsedInSeconds / 3600.0; NSLog(@"%f",hours); int breakTimeInMinutes = timeElapsedInSeconds/60; breakTimeInMinutes =ABS(breakTimeInMinutes); NSString *newStr =[NSString stringWithFormat:@"%i",breakTimeInMinutes]; NSLog(@"%@",newStr); [dictValues setObject:newStr forKey:@"break_time"]; [arrayProgressDate addObject:dictValues]; } NSLog(@"%@",dictData); } NSLog(@"%@",arrayProgressDate);
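The root cause — one dictValues object created before the loop and mutated on every pass, so the array ends up holding the same dictionary several times — is language-agnostic. A hypothetical Python sketch (illustrative times, not the Objective-C API) reproduces the symptom and the fix:

```python
# Buggy: one dict created outside the loop, so the list holds
# the SAME object three times and every entry shows the last values.
results_buggy = []
shared = {}
for start in ("11:00", "12:15", "01:25"):
    shared["start_time_value"] = start
    results_buggy.append(shared)

# Fixed: create a fresh dict on each iteration (the fix recommended above).
results_fixed = []
for start in ("11:00", "12:15", "01:25"):
    entry = {}
    entry["start_time_value"] = start
    results_fixed.append(entry)

print(results_buggy[0]["start_time_value"])  # 01:25 -- overwritten by the last pass
print(results_fixed[0]["start_time_value"])  # 11:00 -- preserved
```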
{ "language": "en", "url": "https://stackoverflow.com/questions/35216783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Converting Invalid DateTime I need to convert a DateTime from UTC to local time. Before converting, I validated the DateTime using the TimeZoneInfo.IsInvalidTime method. I'm getting an invalid DateTime for one particular value; how can I convert this date to a valid one? Here is the sample code: _timeZoneInfo = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time"); var dateTime = "10/03/2013 2:12:00 AM"; DateTime universalFormatDateTime = Convert.ToDateTime(dateTime).GetUniversalFormatDateTime(); if (_timeZoneInfo.IsInvalidTime(universalFormatDateTime)) Console.Write("Invalid DateTime\n"); A: What framework are you using? Isn't ToUniversalTime() the correct choice? DateTime universalFormatDateTime = Convert.ToDateTime(dateTime).ToUniversalTime() A: You should specify the DateTimeKind of your DateTime. Add this before performing the validation: universalFormatDateTime = DateTime .SpecifyKind(universalFormatDateTime,DateTimeKind.Local); A: I guess this is what you're trying to achieve: _timeZoneInfo = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time"); var dateTime = "10/03/2013 2:12:00 AM"; DateTime universalFormatDateTime = Convert .ToDateTime(dateTime, new CultureInfo("en-GB")) .ToUniversalTime(); if (_timeZoneInfo.IsInvalidTime(universalFormatDateTime)) Console.WriteLine("Invalid DateTime"); else Console.WriteLine("Valid DateTime"); You can look at the Convert.ToDateTime article for future reference.
{ "language": "en", "url": "https://stackoverflow.com/questions/15611580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to verify multiple CSS attributes in one method I'm a newbie setting up a python selenium framework and am wondering if I should collect and verify CSS attributes more tidily? I currently have the verify css function set up in the base page and these methods which define the individual css attributes in the code below (see image_thumb & image_height): class BrokenImagePage(BasePage): _broken_image = {"by": By.XPATH, "value": "//img[1]"} _placeholder_image = {"by": By.XPATH, "value": "//img[3]"} _title = {"by" : By.XPATH, "value": "//div[@class='example']/h3"} def __init__(self, driver): self.driver = driver self._visit("/broken_images") assert self._is_displayed(self._title) def image_present (self): return self._is_displayed(self._broken_image) def image_thumb (self): return self._wait_for_is_displayed(self._placeholder_image, 5) def image_height (self): return self._verify_css_value(self._placeholder_image, 'height') def image_width (self): return self._verify_css_value(self._placeholder_image, 'width') Then I assert against the attribute in my test like so: def test_image_css(self, images): assert images.image_height() == '90px' assert images.image_width() == '120px' This doesn't seem very efficient to me. Could I collect all the attributes in one method and assert against those in a better way?
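The question has no answer here, but one tidier pattern worth sketching is a single helper that gathers several CSS properties into a dict, so a test can assert them all in one comparison. The sketch below uses a stub in place of a real Selenium WebElement (only the value_of_css_property method name mirrors Selenium's API; everything else is made up for illustration):

```python
class StubElement:
    """Stands in for a Selenium WebElement in this sketch."""
    def __init__(self, css):
        self._css = css

    def value_of_css_property(self, name):  # same method name Selenium uses
        return self._css[name]

def css_values(element, *names):
    """Collect several CSS properties of one element into a dict."""
    return {name: element.value_of_css_property(name) for name in names}

placeholder = StubElement({"height": "90px", "width": "120px"})

# One assertion now covers every attribute of interest:
assert css_values(placeholder, "height", "width") == {"height": "90px", "width": "120px"}
```

In the page object this could replace the separate image_height/image_width methods with one css_values call, and the test compares dicts instead of asserting attribute by attribute.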
{ "language": "en", "url": "https://stackoverflow.com/questions/47722143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: yum install failed and corrupted the original yum I was trying to install yum-3.4.3 using make && make install, but it failed with the following log: infra-bld4:/tmp/hxu2/yum-3.4.3> make for d in rpmUtils yum etc docs po; do make PYTHON=python -C $d; [ $? = 0 ] || exit 1 ; done make-3.79.1-p7[1]: Entering directory `/tmp/hxu2/yum-3.4.3/rpmUtils' echo "Nothing to do" Nothing to do make-3.79.1-p7[1]: Leaving directory `/tmp/hxu2/yum-3.4.3/rpmUtils' make-3.79.1-p7[1]: Entering directory `/tmp/hxu2/yum-3.4.3/yum' echo "Nothing to do" Nothing to do make-3.79.1-p7[1]: Leaving directory `/tmp/hxu2/yum-3.4.3/yum' make-3.79.1-p7[1]: Entering directory `/tmp/hxu2/yum-3.4.3/etc' echo "Nothing to do" Nothing to do make-3.79.1-p7[1]: Leaving directory `/tmp/hxu2/yum-3.4.3/etc' make-3.79.1-p7[1]: Entering directory `/tmp/hxu2/yum-3.4.3/docs' echo "Nothing to do" Nothing to do make-3.79.1-p7[1]: Leaving directory `/tmp/hxu2/yum-3.4.3/docs' make-3.79.1-p7[1]: Entering directory `/tmp/hxu2/yum-3.4.3/po' msgfmt -o ca.mo ca.po -c msgfmt: ca.po: field `Language-Team' still has initial default value msgfmt: found 1 fatal error make-3.79.1-p7[1]: *** [ca.mo] Error 1 make-3.79.1-p7[1]: Leaving directory `/tmp/hxu2/yum-3.4.3/po' make-3.79.1-p7: *** [subdirs] Error 1 infra-bld4:/tmp/hxu2/yum-3.4.3> The worse part is that the installation failed halfway through the re-install, so I have corrupted the original yum on the system. Any help recovering yum or re-installing it would be highly appreciated. Thanks! A: If you want to uninstall, you can do rpm -e yum and then install it using: rpm -ivh yum-(version).rpm If yum is working fine for local installations, but it's not able to access Red Hat Network, verify if the following packages are installed. If not, install them: rhnsd yum-rhn-plugin yum-security rhn-check rhn-setup rhn-setup-gnome yum-downloadonly rhn-client-tools rhn-virtualization-common rhn-virtualization-host pirut yum-updatesd
{ "language": "en", "url": "https://stackoverflow.com/questions/17665790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I speed up a Python nested loop? I'm trying to calculate the gravity effect of a buried object by calculating the effect on each side of the body, then summing up the contributions to get one measurement at one station, and repeating for a number of stations. The code is as follows (the body is a square and the code calculates clockwise around it; that's why it goes from -x back to -x coordinates): grav = [] x=si.arange(-30.0,30.0,0.5) #-9.79742526 9.78716693 22.32153704 27.07382349 2138.27146193 xcorn = (-9.79742526,9.78716693 ,9.78716693 ,-9.79742526,-9.79742526) zcorn = (22.32153704,22.32153704,27.07382349,27.07382349,22.32153704) gamma = (6.672*(10**-11))#'N m^2 / Kg^2' rho = 2138.27146193#'Kg / m^3' grav = [] iter_time=[] def procedure(): for i in si.arange(len(x)):# cycles position t0=time.clock() sum_lines = 0.0 for n in si.arange(len(xcorn)-1):#cycles corners x1 = xcorn[n]-x[i] x2 = xcorn[n+1]-x[i] z1 = zcorn[n]-0.0 #just depth to corner since all observations are on the surface. z2 = zcorn[n+1]-0.0 r1 = ((z1**2) + (x1**2))**0.5 r2 = ((z2**2) + (x2**2))**0.5 O1 = si.arctan2(z1,x1) O2 = si.arctan2(z2,x2) denom = z2-z1 if denom == 0.0: denom = 1.0e-6 alpha = (x2-x1)/denom beta = ((x1*z2)-(x2*z1))/denom factor = (beta/(1.0+(alpha**2))) term1 = si.log(r2/r1)#natural log term2 = alpha*(O2-O1) sum_lines = sum_lines + (factor*(term1-term2)) sum_lines = sum_lines*2*gamma*rho grav.append(sum_lines) t1 = time.clock() dt = t1-t0 iter_time.append(dt) Any help in speeding this loop up would be appreciated. Thanks. A: Your xcorn and zcorn values repeat, so consider caching the result of some of the computations. Take a look at the timeit and profile modules to get more information about what is taking the most computational time. A: It is very inefficient to access individual elements of a numpy array in a Python loop.
For example, this Python loop: for i in xrange(0, len(a), 2): a[i] = i would be much slower than: a[::2] = np.arange(0, len(a), 2) You could use a better algorithm (less time complexity) or use vector operations on numpy arrays as in the example above. But the quicker way might be just to compile the code using Cython: #cython: boundscheck=False, wraparound=False #procedure_module.pyx import numpy as np cimport numpy as np ctypedef np.float64_t dtype_t def procedure(np.ndarray[dtype_t,ndim=1] x, np.ndarray[dtype_t,ndim=1] xcorn): cdef: Py_ssize_t i, j dtype_t x1, x2, z1, z2, r1, r2, O1, O2 np.ndarray[dtype_t,ndim=1] grav = np.empty_like(x) for i in range(x.shape[0]): for j in range(xcorn.shape[0]-1): x1 = xcorn[j]-x[i] x2 = xcorn[j+1]-x[i] ... grav[i] = ... return grav It is not necessary to define all types but if you need a significant speed up compared to Python you should define at least types of arrays and loop indexes. You could use cProfile (Cython supports it) instead of manual calls to time.clock(). To call procedure(): #!/usr/bin/env python import pyximport; pyximport.install() # pip install cython import numpy as np from procedure_module import procedure x = np.arange(-30.0,30.0,0.5) xcorn = np.array((-9.79742526,9.78716693 ,9.78716693 ,-9.79742526,-9.79742526)) grav = procedure(x, xcorn)
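To make the vectorization advice concrete for this problem: NumPy's arithmetic and functions such as hypot and arctan2 work elementwise on whole arrays, so the per-station scalar math can be done for every station at once. A simplified sketch (corner-to-station distance only, not the full polygon term; the corner values are taken from the question):

```python
import numpy as np

x = np.arange(-30.0, 30.0, 0.5)     # station positions, as in the question
xc, zc = -9.79742526, 22.32153704   # one corner of the body

# Loop version: one Python-level iteration per station.
r_loop = [((zc ** 2) + ((xc - xi) ** 2)) ** 0.5 for xi in x]

# Vectorized version: the whole station array in one expression.
r_vec = np.hypot(zc, xc - x)

assert np.allclose(r_loop, r_vec)
```

The same substitution applies to the arctan2 and log terms, which lets the outer station loop disappear while only the short five-corner loop remains in Python.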
{ "language": "en", "url": "https://stackoverflow.com/questions/7744065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Release unused memory SQL Server 2014 memory-optimized table My system: Microsoft SQL Server 2014 (SP1-CU4) (KB3106660) - 12.0.4436.0 (X64) Dec 2 2015 16:09:44 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 (Build 9600: ) (Hypervisor) I use two memory-optimized tables, table1 and table2 (each 27 GB in size). Drop table1: IF OBJECT_ID('table1') IS NOT NULL BEGIN DROP TABLE [dbo].[table1] END Afterwards, the SQL Server "Memory Usage By Memory Optimized Objects" report shows: Table Name = table2 Table Used Memory = 26582,50 Table Unused Memory = 26792,69 How can I run the SQL Server garbage collector manually? Is this possible at all? I need the "Table Unused Memory" to be released, because another process always gives this error: "There is insufficient system memory in resource pool 'Pool' to run this query." Thank you A: Data for memory-optimized tables is held in data & delta files. A delete statement will not remove the data from the data file but inserts a delete record into the delta file, hence your storage continuing to be large. The data & delta files are maintained in pairs known as checkpoint file pairs (CFPs). Over time, multiple closed CFPs are merged into one target CFP based upon a merge policy. A background thread evaluates all closed CFPs using a merge policy and then initiates one or more merge requests for the qualifying CFPs. These merge requests are processed by the offline checkpoint thread. The evaluation of the merge policy is done periodically and also when a checkpoint is closed. You can force merge the files using the stored procedure sys.sp_xtp_merge_checkpoint_files following a checkpoint.
EDIT Run this statement: SELECT container_id, internal_storage_slot, file_type_desc, state_desc, inserted_row_count, deleted_row_count, lower_bound_tsn, upper_bound_tsn FROM sys.dm_db_xtp_checkpoint_files ORDER BY file_type_desc, state_desc Then find the rows with state UNDER CONSTRUCTION and make a note of the lower and upper transaction ids. Now execute: EXEC sys.sp_xtp_merge_checkpoint_files 'myDB',1003,1004; where 1003 and 1004 are the lower and upper transaction ids. To completely remove the files you will have to: 1. Run the SELECT statement from above 2. Run EXEC sys.sp_xtp_merge_checkpoint_files from above 3. Perform a full backup 4. CHECKPOINT 5. Back up the log 6. EXEC sp_xtp_checkpoint_force_garbage_collection; 7. CHECKPOINT 8. EXEC sp_filestream_force_garbage_collection 'MyDb' to remove files marked as Tombstone You may need to run steps 3 - 7 twice to completely get rid of the files. See "The DBA who came to tea" article. CFPs go through the following stages: •PRECREATED – A small set of CFPs are kept pre-allocated to minimize or eliminate any waits to allocate new files as transactions are being executed. These are full sized, with a data file size of 128 MB and a delta file size of 8 MB, but contain no data. The number of CFPs is computed as the number of logical processors or schedulers, with a minimum of 8. This is a fixed storage overhead in databases with memory-optimized tables •UNDER CONSTRUCTION – Set of CFPs that store newly inserted and possibly deleted data rows since the last checkpoint. •ACTIVE - These contain the inserted/deleted rows from previous closed checkpoints. These CFPs contain all the inserted/deleted rows required before applying the active part of the transaction log at database restart. We expect the size of these CFPs to be approximately 2x the in-memory size of the memory-optimized tables, assuming the merge operation is keeping up with the transactional workload.
•MERGE TARGET – A CFP that stores the consolidated data rows from the CFP(s) that were identified by the merge policy. Once the merge is installed, the MERGE TARGET transitions into the ACTIVE state •MERGED SOURCE – Once the merge operation is installed, the source CFPs are marked as MERGED SOURCE. Note that the merge policy evaluator may identify multiple merges, but a CFP can only participate in one merge operation. •REQUIRED FOR BACKUP/HA – Once the merge has been installed and the MERGE TARGET CFP is part of a durable checkpoint, the merge source CFPs transition into this state. CFPs in this state are needed for operational correctness of a database with memory-optimized tables, for example to recover from a durable checkpoint to go back in time. A CFP can be marked for garbage collection once the log truncation point moves beyond its transaction range. •IN TRANSITION TO TOMBSTONE – These CFPs are not needed by the in-memory OLTP engine and can be garbage collected. This state indicates that these CFPs are waiting for the background thread to transition them to the next state, TOMBSTONE •TOMBSTONE – These CFPs are waiting to be garbage collected by the filestream garbage collector. Please refer to FS Garbage Collection for details
{ "language": "en", "url": "https://stackoverflow.com/questions/34767405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: OpenGL cube coloring I created a cube using OpenGL ES 2.0. I want it to have a different color on each of its six faces. I followed some examples and drew a cube. Everything goes well except the cube's coloring: the face colors mix together and only red and green are shown. It looks really weird, and I didn't see anyone else with this same problem. Can anybody give me a hand? Below are my code and cube. Thank you so much! public class MyCube { private FloatBuffer vertexBuffer; private ShortBuffer drawListBuffer; private ShortBuffer[] ArrayDrawListBuffer; private FloatBuffer colorBuffer; private int mProgram; //For Projection and Camera Transformations private final String vertexShaderCode = // This matrix member variable provides a hook to manipulate // the coordinates of the objects that use this vertex shader "uniform mat4 uMVPMatrix;" + "attribute vec4 vPosition;" + "attribute vec4 vColor;" + "varying vec4 vColorVarying;" + "void main() {" + // the matrix must be included as a modifier of gl_Position // Note that the uMVPMatrix factor *must be first* in order // for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" + "vColorVarying = vColor;"+ "}"; // Use to access and set the view transformation private int mMVPMatrixHandle; private final String fragmentShaderCode = "precision mediump float;" + "varying vec4 vColorVarying;"+ "void main() {" + " gl_FragColor = vColorVarying;" + "}"; // number of coordinates per vertex in this array static final int COORDS_PER_VERTEX = 3; float cubeCoords[] = { -0.5f, 0.5f, 0.5f, // front top left 0 -0.5f, -0.5f, 0.5f, // front bottom left 1 0.5f, -0.5f, 0.5f, // front bottom right 2 0.5f, 0.5f, 0.5f, // front top right 3 -0.5f, 0.5f, -0.5f, // back top left 4 0.5f, 0.5f, -0.5f, // back top right 5 -0.5f, -0.5f, -0.5f, // back bottom left 6 0.5f, -0.5f, -0.5f, // back bottom right 7 }; // Set color with red, green, blue and alpha (opacity) values float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f }; float red[] = { 1.0f, 0.0f, 0.0f, 1.0f }; float blue[] = { 0.0f, 0.0f, 1.0f, 1.0f }; private short drawOrder[] = { 0, 1, 2, 0, 2, 3,//front 0, 4, 5, 0, 5, 3, //Top 0, 1, 6, 0, 6, 4, //left 3, 2, 7, 3, 7 ,5, //right 1, 2, 7, 1, 7, 6, //bottom 4, 6, 7, 4, 7, 5 //back }; //(order to draw vertices) final float cubeColor[] = { // Front face (red) 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, // Top face (green) 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, // Left face (blue) 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // Right face (yellow) 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, // Bottom face (cyan) 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 
1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, // Back face (magenta) 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f }; public MyCube() { // initialize vertex byte buffer for shape coordinates ByteBuffer bb = ByteBuffer.allocateDirect( // (# of coordinate values * 4 bytes per float) cubeCoords.length * 4); bb.order(ByteOrder.nativeOrder()); vertexBuffer = bb.asFloatBuffer(); vertexBuffer.put(cubeCoords); vertexBuffer.position(0); // initialize byte buffer for the draw list ByteBuffer dlb = ByteBuffer.allocateDirect( // (# of coordinate values * 2 bytes per short) drawOrder.length * 2); dlb.order(ByteOrder.nativeOrder()); drawListBuffer = dlb.asShortBuffer(); drawListBuffer.put(drawOrder); drawListBuffer.position(0); // initialize byte buffer for the color list ByteBuffer cb = ByteBuffer.allocateDirect( // (# of coordinate values * 2 bytes per short) cubeColor.length * 4); cb.order(ByteOrder.nativeOrder()); colorBuffer = cb.asFloatBuffer(); colorBuffer.put(cubeColor); colorBuffer.position(0); int vertexShader = MyRenderer.loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode); int fragmentShader = MyRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode); // create empty OpenGL ES Program mProgram = GLES20.glCreateProgram(); // add the vertex shader to program GLES20.glAttachShader(mProgram, vertexShader); // add the fragment shader to program GLES20.glAttachShader(mProgram, fragmentShader); // creates OpenGL ES program executables GLES20.glLinkProgram(mProgram); } private int mPositionHandle; private int mColorHandle; private final int vertexCount = cubeCoords.length / COORDS_PER_VERTEX; private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per vertex public void draw(float[] mvpMatrix) { // pass in the calculated transformation matrix // Add program to OpenGL ES environment GLES20.glUseProgram(mProgram); // get handle to vertex shader's vPosition member 
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition"); // get handle to fragment shader's vColor member mColorHandle = GLES20.glGetAttribLocation(mProgram, "vColor"); // Enable a handle to the cube vertices GLES20.glEnableVertexAttribArray(mPositionHandle); // Prepare the cube coordinate data GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer); // Set color for drawing the triangle //mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor"); // Enable a handle to the cube colors GLES20.glEnableVertexAttribArray(mColorHandle); // Prepare the cube color data GLES20.glVertexAttribPointer(mColorHandle, 4, GLES20.GL_FLOAT, false, 16, colorBuffer); // get handle to shape's transformation matrix mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix"); // Pass the projection and view transformation to the shader GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0); // Draw the cube GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer); //GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount); // Disable vertex array GLES20.glDisableVertexAttribArray(mPositionHandle); GLES20.glDisableVertexAttribArray(mColorHandle); GLES20.glDisableVertexAttribArray(mMVPMatrixHandle); } } A: The index buffer is used to index into the colorBuffer using the same index as is used for indexing into the vertexBuffer, so the corresponding elements in each need to match. The indices in your index buffer are in the range of 0-7, so you will only ever index the first 8 entries of your colorBuffer, which are green and red. You need to have a separate index for every unique combination of vertex position and color. For each face there are 4 unique vertex-color combinations, so you will need 6 * 4 = 24 entries in your cubeCoords array and 24 matching entries in your cubeColor array. 
Like this: float cubeCoords[] = { // front face -0.5f, 0.5f, 0.5f, // front top left 0 -0.5f, -0.5f, 0.5f, // front bottom left 1 0.5f, -0.5f, 0.5f, // front bottom right 2 0.5f, 0.5f, 0.5f, // front top right 3 // top face -0.5f, 0.5f, -0.5f, // back top left 4 -0.5f, 0.5f, 0.5f, // front top left 5 0.5f, 0.5f, 0.5f, // front top right 6 0.5f, 0.5f, -0.5f, // back top right 7 // other faces... } final float cubeColor[] = { // Front face (red) 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, // Top face (green) 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, // other faces... } private short drawOrder[] = { 0, 1, 2, 0, 2, 3,//front 4, 5, 6, 4, 6, 7, //Top // other faces... }
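The shared-index constraint is easy to demonstrate outside OpenGL: glDrawElements uses one index to fetch both the position and the color of a vertex, so as long as faces share vertices they must share colors too. A small Python sketch of that lookup (plain lists standing in for the GL buffers):

```python
face_colors = ["red", "green", "blue", "yellow", "cyan", "magenta"]

# 8 shared vertices -> only 8 color slots ever reachable, and the question's
# color buffer fills them with red (slots 0-3) and green (slots 4-7):
shared_colors = ["red"] * 4 + ["green"] * 4
top_old = [0, 4, 5, 0, 5, 3]   # top face from the original drawOrder
assert {shared_colors[i] for i in top_old} == {"red", "green"}  # mixed colors!

# 24 duplicated vertices -> each face owns four slots with one solid color:
dup_colors = [c for c in face_colors for _ in range(4)]
front_new = [0, 1, 2, 0, 2, 3]
top_new = [4, 5, 6, 4, 6, 7]
assert {dup_colors[i] for i in front_new} == {"red"}
assert {dup_colors[i] for i in top_new} == {"green"}
```

This mirrors the symptom in the question (only red and green visible) and why duplicating vertices per face fixes it.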
{ "language": "en", "url": "https://stackoverflow.com/questions/37668017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to call GET/POST methods programmatically using Spring I want to call GET/POST methods programmatically from within a Java class using Spring. I have done this in a Servlet class before, but I am not clear on how to do it with Spring. I went through some related tutorials but I am still not clear. Can anyone please explain how to do this? Thanks. A: Since you are working on a Spring based application, I would suggest using Spring RestTemplate to request your GET/POST endpoints. The following is a short snippet of what could be done, and you can refer to these Spring tutorials (1, 2 and 3) for more details: public void getOrPostTest() { String GET_URL = "http://localhost:8080/somepath"; RestTemplate restTemplate = new RestTemplate(); Map<String, String> params = new HashMap<String, String>(); params.put("prop1", "1"); params.put("prop2", "value"); String result = restTemplate.getForObject(GET_URL, String.class, params); } A: You can use HttpClient; look at this example. HttpClient httpClient = login(HTTP_SERVER_DOMAIN, "[email protected]", "password"); GetMethod getAllAdvicesMethod = new GetMethod(URL); getAllAdvicesMethod .addRequestHeader("Content-Type", "application/json"); try { httpClient.executeMethod(getAllAdvicesMethod); } catch (HttpException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } If you need a different request method, you can change GetMethod to PostMethod: PostMethod postDateMethod = new PostMethod(URL);
{ "language": "en", "url": "https://stackoverflow.com/questions/26210993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: zabbix monitor JMX C3P0 tokenid The key is: jmx["com.mchange.v2.c3p0:identityToken=2yaf3o9m1taosztt7mari|2294069,name=2yaf3o9m1taosztt7mari|2294069,type=PooledDataSource",maxPoolSize] but the identityToken changes when Tomcat restarts. Is there a macro to define it so that it adapts when the token changes? A: You can configure c3p0's JMX key to be something that will not change. Please see http://www.mchange.com/projects/c3p0/#jmx_configuration_and_management The simple story is: * Be sure to set the c3p0 configuration property dataSourceName, which will become the value of a name attribute in the JMX key; * Set (in a c3p0.properties file, as a system property, or in a typesafe-config file) com.mchange.v2.c3p0.management.ExcludeIdentityToken=true If you are using a c3p0.properties file, it'd be something like c3p0.dataSourceName=myPooledDataSource com.mchange.v2.c3p0.management.ExcludeIdentityToken=true
{ "language": "en", "url": "https://stackoverflow.com/questions/43064059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Copying a string to a struct member I can't figure out how to copy a string from inputString to newNode->data. My struct looks like this: typedef struct node { char *data; struct node *left; struct node *right; } node; And the function in question looks like this: node* addToTree(char inputString[]) { node *newNode; if ((newNode = malloc(sizeof(node*))) == NULL) { printf("Error: could not allocate memory"); exit(-1); } if ((newNode->data = malloc(strlen(inputString) + 1)) == NULL) { printf("Error: could not allocate memory"); exit(-1); } /* This line of code doesn't seem to copy anything to newNode->data. This is the way I believe it should work, however I don't understand what the problem with it is. I have tried strlcpy and strncpy as well. */ strcpy(newNode->data, inputString); /* The line below here seems to work when I print the value within the function, but some of the values are garbage when I try to use them later on in the program. */ newNode->data = inputString; newNode->left = NULL; newNode->right = NULL; printf("Input string: %s\n", inputString); printf("New node data: %s\n", newNode->data); return newNode; } A: Your sizeof(node*) does not represent the size you need. newnode = malloc(sizeof(node*)) // wrong newnode = malloc(sizeof (node)) // correct newnode = malloc(sizeof *newNode) // better Why is sizeof *newNode better? Because it prevents you from accidentally forgetting to update the code in two places if the type changes struct node { char *data; struct node *next; struct node *prev; }; struct nodeEx { char *data; size_t len; struct nodeEx *next; struct nodeEx *prev; }; struct nodeEx *newnode = malloc(sizeof (struct node)); // wrong struct nodeEx *newnode = malloc(sizeof *newnode); // correct A: The below line does not allocate the required amount of memory, it allocates memory equal to the size of a pointer to node. if ((newNode = malloc(sizeof(node*))) == NULL) So your strcpy fails because there is no memory to copy into.
Change the above to: if ((newNode = malloc(sizeof(node))) == NULL) What happens after you do the following is undefined behavior because the memory representing inputString can be overwritten, and that is why you get garbage values later on. newNode->data = inputString; You can see the top answer to this question for additional information. A: newNode->data = inputString; is incorrect; it overwrites the pointer to the previously malloc'ed memory, leaking it. if ((newNode->data = malloc(strlen(inputString) + 1)) == NULL) { printf("Error: could not allocate memory"); exit(-1); } strcpy(newNode->data, inputString); is enough to allocate memory and copy the string into it.
{ "language": "en", "url": "https://stackoverflow.com/questions/53152341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Excel VBA Data organization with splitting and duplication In Microsoft Excel, I am looking to use VBA code to sort through large amounts of data that is not optimally organized. Currently, I have data that looks like this: Poor Data Organization However, I would like to process the data into this: Correct Data Organization I have been unable to find an example for my specific situation and appreciate your time and responses. A: This will process the data into the format you're after. The source data is in Sheet1 and the reformatted data is placed into Sheet2: Option Explicit Sub MapCell2Cells() Dim sht1 As Worksheet, sht2 As Worksheet Set sht1 = Worksheets("Sheet1") Set sht2 = Worksheets("Sheet2") Dim strG As String, strL As String Dim arrH() As String, arrI() As String, arrJ() As String Dim i As Integer, j As Integer, idx As Integer, lastRow As Integer lastRow = sht1.Cells(Rows.Count, "G").End(xlUp).Row idx = 1 For i = 1 To lastRow: With sht1 strG = Replace(.Cells(i, "G").Value, vbLf, "") arrH = Split(.Cells(i, "H").Value, vbLf) arrI = Split(.Cells(i, "I").Value, vbLf) arrJ = Split(.Cells(i, "J").Value, vbLf) strL = Replace(.Cells(i, "L").Value, vbLf, "") End With With sht2 For j = LBound(arrH) To UBound(arrH) .Cells(idx, "G").Value = strG .Cells(idx, "H").Value = arrH(j) .Cells(idx, "I").Value = arrI(j) .Cells(idx, "J").Value = arrJ(j) .Cells(idx, "L").Value = strL idx = idx + 1 Next End With Next End Sub
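The reshaping the VBA routine performs (split the multi-line cells, repeat the single-value cells for each resulting row) is language-independent. As an illustration outside Excel, with made-up row data, the same transformation can be sketched in Python:

```python
# Each source row is (G, H, I, J, L): H, I and J hold newline-separated
# values that become one output row each, while G and L are repeated.
def expand_rows(rows):
    out = []
    for g, h, i, j, l in rows:
        for hh, ii, jj in zip(h.split("\n"), i.split("\n"), j.split("\n")):
            out.append((g, hh, ii, jj, l))
    return out

source = [("g1", "h1\nh2", "i1\ni2", "j1\nj2", "l1")]
print(expand_rows(source))
# -> [('g1', 'h1', 'i1', 'j1', 'l1'), ('g1', 'h2', 'i2', 'j2', 'l1')]
```

Like the VBA version, this assumes H, I and J contain the same number of lines per row; zip silently drops extras if they do not.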
{ "language": "en", "url": "https://stackoverflow.com/questions/44013032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Python dictionary modification This is Python code: def build_person(first_name, last_name, age=None): """Return a dictionary of information about a person.""" person = {'first': first_name, 'last': last_name} if age: person['age'] = age return person I understand everything but the line person['age'] = age because "person" is a dictionary, and so if I want to modify it, shouldn't it accept a key-value pair? How can I modify it correctly? A: person['age'] = age The 'age' inside the brackets is the key; the age on the right is the value being assigned. The person dictionary becomes: {'first': first_name, 'last': last_name, 'age': age} A: Here, person['age'] = age works only when age is given as an argument when calling this function. person is a dictionary, 'age' in person['age'] is the key, and the age on the right side of the assignment operator (=) is the value passed as an argument to the function. E.g., for the code below, in the last line I have given age as an argument. def build_person(first_name, last_name, age=None): person = {'first': first_name, 'last': last_name} if age: person['age'] = age print(person) return person build_person("yash","verma",9) Output for the above code is: {'first': 'yash', 'last': 'verma', 'age': 9} Now, if I don't give age as an argument: def build_person(first_name, last_name, age=None): person = {'first': first_name, 'last': last_name} if age: person['age'] = age print(person) return person build_person("yash","verma") the output will be: {'first': 'yash', 'last': 'verma'}
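For reference, here is the question's function as a self-contained, runnable snippet (with the docstring written out in full), showing both calls:

```python
def build_person(first_name, last_name, age=None):
    """Return a dictionary of information about a person."""
    person = {'first': first_name, 'last': last_name}
    if age:
        # 'age' in quotes is the key; the age parameter is the value.
        person['age'] = age
    return person

print(build_person("yash", "verma", 9))  # -> {'first': 'yash', 'last': 'verma', 'age': 9}
print(build_person("yash", "verma"))     # -> {'first': 'yash', 'last': 'verma'}
```

Note that `if age:` also skips falsy values such as 0, so an age of 0 would not be stored; `if age is not None:` avoids that edge case.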
{ "language": "en", "url": "https://stackoverflow.com/questions/70288163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Enum 2-ways binding with a Combobox I am trying to do a simple two-way binding with an enum to a Combobox but haven't found anything that works with my code so far. My enum (C#): public enum CurrenciesEnum { USD, JPY, HKD, EUR, AUD, NZD }; The property the Enum should set / is bound to: private string _ccy; public string Ccy { get { return this._ccy; } set { if (value != this._ccy) { this._ccy= value; NotifyPropertyChanged("Ccy"); } } } The Xaml code that doesn't work: <UserControl.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ObjectDataProvider x:Key="Currencies" MethodName="GetValues" ObjectType="{x:Type System:Enum}"> <ObjectDataProvider.MethodParameters> <x:Type TypeName="ConfigManager:CurrenciesEnum" /> </ObjectDataProvider.MethodParameters> </ObjectDataProvider> </ResourceDictionary> </UserControl.Resources> <ComboBox ItemsSource="{Binding Source={StaticResource Currencies}}" SelectedItem="{Binding Ccy, Mode=TwoWay}"/> Thank you in advance for your help! A: Well, the problem is you are binding an Enum to a string; this will only work one way due to the default ToString operation in the binding engine. If you are only using the string value, change your ObjectDataProvider method name to GetNames; this will return the string values for your Enum and will bind both ways. The other option is to bind not to a string but to the Enum type. 
<ObjectDataProvider x:Key="Currencies" MethodName="GetNames" ObjectType="{x:Type System:Enum}"> <ObjectDataProvider.MethodParameters> <x:Type TypeName="ConfigManager:CurrenciesEnum" /> </ObjectDataProvider.MethodParameters> </ObjectDataProvider> A: I load the enum into a Dictionary public static Dictionary<T, string> EnumToDictionary<T>() where T : struct { Type enumType = typeof(T); // Can't use generic type constraints on value types, // so have to do check like this if (enumType.BaseType != typeof(Enum)) throw new ArgumentException("T must be of type System.Enum"); Dictionary<T, string> enumDL = new Dictionary<T, string>(); //foreach (byte i in Enum.GetValues(enumType)) //{ // enumDL.Add((T)Enum.ToObject(enumType, i), Enum.GetName(enumType, i)); //} foreach (T val in Enum.GetValues(enumType)) { enumDL.Add(val, val.ToString()); } return enumDL; }
{ "language": "en", "url": "https://stackoverflow.com/questions/15678212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WCF with Flash tutorial I'm a beginner in WCF, which I have chosen instead of Web Services because all articles and blogs I've read seem to point out that ASMX is old news. I have read a bit about the differences between old Web Services and WCF, and I got the general idea. I also took the MSDN WCF tutorial which seemed simple enough. My problem is that I want to create WCF services that can be consumed by Flash. I've read that it's doable everywhere, but with no obvious A-Z tutorial on how to proceed with the server-side and client-side... Just some suggestions. Can anyone point me in the right direction, with a brief explanation of the options available in front of me? A: We do this with our games, where we have a bunch of WCF services providing different functionalities to the Flash clients running in Facebook/MySpace, etc. I suggest you should first have a look at this codeplex project: http://wcfflashremoting.codeplex.com/ It allows you to implement an AMF endpoint for communicating with the Flash clients. All your DataContracts need to be mapped exactly, including namespace and property names on both sides, so if you have a MyProject.Contracts.Requests.HandShakeRequest object in your WCF project the Flash client needs to have a replica defined in the SAME namespace. Another pattern we find very helpful is the request/response pattern, because it allows you to add/remove parameter/output values easily and have a fair amount of backward compatibility - add a new parameter to the Request object on the server for a new feature and the client doesn't HAVE TO send the new parameter right away. For debugging you absolutely need Charles (http://www.charlesproxy.com), the latest version should have the AMF viewer working properly (I think you used to have to download an add-in) so you can see the AMF messages coming back from the server in a nice, readable format. Hope this helps! 
There are some other caveats around working with a Flash client from WCF but can't remember them off the top of my head :-P so have a play around with that remoting extension and I'll pop some other bits and bobs down when I can remember them!
{ "language": "en", "url": "https://stackoverflow.com/questions/3306744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: POST variable to another class I have a search function using XPath; when results are shown, a check box is also echoed. <form name="save" method="POST" action="saveProcess.php"> <?php foreach ($holidays as $holiday) { $resultTable .= "<p><a href=\"{$holiday->link}\">{$holiday->title}</a>" . "<br/>" . "{$holiday->pubDate}" . "<br>" . "{$holiday->description}" . "<input type='checkbox' name='chk' value='{$holiday->title}' />" . "<br /></p>"; } ?> <input type="submit" value="submit"/> </form> I would like this check box to hold the value of {$holiday->title}, which, when the form is submitted, will be shown in saveProcess.php. I use the isset method to check if the variable is set, and it is not. if (isset($_POST['chk'])) { echo $_POST['chk']; } else { echo "variable is not set"; } Where am I going wrong? A: Your code looks OK to me; just remember that the value of a checkbox is posted only if the checkbox is checked. If it's not checked, $_POST['chk'] is not set. EDIT - since you are rewriting your checkboxes as suggested in the comment, use an array <?php foreach ($holidays as $holiday) { $resultTable .= "<p><a href=\"{$holiday->link}\">{$holiday->title}</a>" . "<br/>" . "{$holiday->pubDate}" . "<br>" . "{$holiday->description}" . "<input type='checkbox' name='chk[]' value='{$holiday->title}' />" . "<br /></p>"; } ?> And then, server side, $_POST['chk'] will be an array A: The problem is that you name each checkbox "chk", and when you submit the form, the values get overwritten. That's why it doesn't get anything in saveProcess.php. What you need to do is either specify that the $_POST["chk"] can contain an array of values, like so: <input type='checkbox' name='chk[]' value='{$holiday->title}' /> Notice the square brackets in the name. Now $_POST["chk"] will be an array. 
Another way, would be to leave the html as it is, and just get the data, in saveProcess.php, using: $HTTP_POST_VARS["chk"] The first part basically explains why it doesn't work and how to fix it, while the second suggestion, is merely an alternate way of getting the data. Have a great day!
{ "language": "en", "url": "https://stackoverflow.com/questions/10041753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the difference between Pointer and strings? What is the difference between a pointer and an array, or are they the same? Since an array also works with pointer arithmetic, can it be said that an array is nothing but a pointer to its first element? A: They differ in the following ways: int array[40]; int * arrayp; First, check the size of each: it will be different. For a pointer it is the same every time, whereas for an array it varies with the array size. sizeof(array); // Output: 160 (40 ints of 4 bytes each) sizeof(arrayp); // Output: 4 (on 32-bit machines) This means the computer treats all the elements of an array as one object, which is not possible with pointers. Secondly, perform an increment operation: array++; // Error arrayp++; // No error If an array really were a pointer, the location it points to could be changed, as happens with arrayp in the second case, but that is not so.
{ "language": "en", "url": "https://stackoverflow.com/questions/33804935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: java.lang.OutOfMemoryError: GC overhead limit exceeded in PYSPARK My Spark job's scenario is to connect to a PostgreSQL database, read the data, and perform aggregations after reading it. In this process I am able to establish the database connection successfully, but while selecting rows from a table I am facing ERROR: java.lang.OutOfMemoryError: Java heap space And ERROR: java.lang.OutOfMemoryError: GC overhead limit exceeded To resolve the heap space issue I added the config below in the spark-defaults.conf file. This works fine: spark.driver.memory 1g To solve the GC overhead limit exceeded issue I added the config below: spark.executor.memory 1g spark.executor.extraJavaOptions -Xmx1024m spark.driver.maxResultSize 2g These configurations didn't work 100%; I am still facing the same issue. Along with this I am also getting a PSQL ERROR: org.postgresql.util.PSQLException: ran out of memory retrieving query results I am facing these issues while dealing with tables that have a huge number of rows; e.g., the news_mentions table has 4,540,092 records and the table size is 5,476 MB. So it takes even more time to execute the Spark job, which has to be done within seconds. Here is my actual code. from pyspark.sql import SparkSession from pyspark import SparkContext from pyspark.sql import SQLContext from pyspark.sql import HiveContext from pyspark.sql import DataFrameReader sc = SparkContext() sqlContext = SQLContext(sc) sqlContext = HiveContext(sc) input_media_source = " Times of India" # Connecting to news_mentions table df1 = sqlContext.read.format("jdbc").option("url", "jdbc:postgresql://localhost:5432/testdb").option("dbtable", "news_mentions").option("user", "postgres").load() df1.createOrReplaceTempView("news_mentions") news_mentions_DF = sqlContext.sql("SELECT * FROM news_mentions") news_mentions_DF.show() I'm facing the GC limit exceeded error while performing show(). How can I run my PySpark job quickly, with high performance and without errors? 
NOTE : I am running my pyspark job using spark-submit command without starting any standalone cluster mode. My spark version - 2.2.0 with python version - 3.5.2
{ "language": "en", "url": "https://stackoverflow.com/questions/50099195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unit test ViewModel using repository pattern and LiveData I want to write a unit test for my viewmodel class : class MainViewModel( repository: ShowRepository ) : ViewModel() { private val _shows = repository.shows val shows: LiveData<MyResult<List<Show>>> get() = _shows } Here is my repository class : class ShowRepository( private val dao: ShowDao, private val api: TVMazeService, private val context: Context ) { /** * A list of shows that can be shown on the screen. */ val shows = resultLiveData( databaseQuery = { Transformations.map(dao.getShows()) { it.asDomainModel() } }, networkCall = { refreshShows() }) /** * Refresh the shows stored in the offline cache. */ suspend fun refreshShows(): MyResult<List<Show>> = try { if (isNetworkAvailable(context)) { val shows = api.fetchShowList().await() dao.insertAll(*shows.asDatabaseModel()) MyResult.success(shows) } else { MyResult.error(context.getString(R.string.failed_internet_msg)) } } catch (err: HttpException) { MyResult.error(context.getString(R.string.failed_loading_msg)) } catch (err: UnknownHostException) { MyResult.error(context.getString(R.string.failed_unknown_host_msg)) } catch (err: SocketTimeoutException) { MyResult.error(context.getString(R.string.failed_socket_timeout_msg)) } } And here is my Dao class : @Dao interface ShowDao { /** * Select all shows from the shows table. * * @return all shows. 
*/ @Query("SELECT * FROM databaseshow") fun getShows(): LiveData<List<DatabaseShow>> } Here is my unit test : @ExperimentalCoroutinesApi class MainViewModelTest { private lateinit var viewModel: MainViewModel private lateinit var repository: ShowRepository private val api: TVMazeService = mock() private val dao: ShowDao = mock() private val context: Context = mock() @Test fun fetch() { val observer1: Observer<List<DatabaseShow>> = mock() dao.getShows().observeForever(observer1) repository = ShowRepository(dao, api, context) val observer2: Observer<MyResult<List<Show>>> = mock() repository.shows.observeForever(observer2) viewModel = MainViewModel(repository) val observer3: Observer<MyResult<List<Show>>> = mock() viewModel.shows.observeForever(observer3) verify(viewModel).shows } } But I receive following exception : java.lang.NullPointerException at com.android.sample.tvmaze.viewmodel.MainViewModelTest.fetch(MainViewModelTest.kt:39) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230) at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58) I appreciate if you give me any guideline. A: I change my Dao method to return Flow instead of LiveData : @Dao interface ShowDao { /** * Select all shows from the shows table. * * @return all shows. */ @Query("SELECT * FROM databaseshow") fun getShows(): Flow<List<DatabaseShow>> } And I could successfully run my tests such as : @Test fun givenServerResponse200_whenFetch_shouldReturnSuccess() { mockkStatic("com.android.sample.tvmaze.util.ContextExtKt") every { context.isNetworkAvailable() } returns true `when`(api.fetchShowList()).thenReturn(Calls.response(Response.success(emptyList()))) `when`(dao.getShows()).thenReturn(flowOf(emptyList())) val repository = ShowRepository(dao, api, context, TestContextProvider()) val viewModel = MainViewModel(repository).apply { shows.observeForever(resource) } try { verify(resource).onChanged(Resource.loading()) verify(resource).onChanged(Resource.success(emptyList())) } finally { viewModel.shows.removeObserver(resource) } } A: I would mock repository and instruct mockito true Mockito.when().doReturn() to return some data, and verify that the LiveData output is correct. Of course you could use an instance ShowRepository. You will still need to instruct mockito on how to return when the execution hits the mocked object. 
As before, you can change the behaviour with Mockito.when(). This line is wrong: verify(viewModel).shows. Verify can be called only on mocks; viewModel is an instance, hence the moment the execution hits that line, your test will fail. For unit testing LiveData, you might need the following rule @get:Rule var rule: TestRule = InstantTaskExecutorRule()
{ "language": "en", "url": "https://stackoverflow.com/questions/62601770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Who calls autorelease pool Who calls the autorelease pool, or who manages it? I call autorelease on my variable, which is inside a function, but who manages this autorelease call: the calling function, the caller, or who? A: First of all, if you are saying autorelease, don't. Stop using manual memory management and use ARC. It knows more than you do. Okay, so let's say you do say autorelease. Then it is placed in the autorelease pool and its retain count remains incremented. Its retain count will be decremented again when the autorelease pool is drained. When that happens depends on what autorelease pool you're talking about. * *If you actually made this autorelease pool, then it drains when you tell it to drain. Under ARC, that happens when we come to the end of the @autoreleasepool{} directive block. *If it's the default autorelease pool, the runtime takes care of it and you have no knowledge or control over the matter. You can be pretty sure there will be a drain call after all your code finishes and the app is idle, but there's nothing guaranteed about it.
{ "language": "en", "url": "https://stackoverflow.com/questions/30960398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-4" }
Q: Migrating to spring 3.1.4 and Hibernate 4.2.8 SpringSessionContext.currentSession throws HibernateException: No Session found for current thread I'm migrating to Spring 3.1.4 and Hibernate 4.2.8, and all of my DAO and service classes are annotated with @Transactional correctly (my application works correctly with Spring 3.0.7 and Hibernate 3.6), but when I migrate to these versions, my transactional methods annotated with Propagation.SUPPORTS throw a HibernateException saying that no session was found for the current thread. This happens inside the SpringSessionContext.currentSession() method. I noticed that it does not create a session if TransactionSynchronizationManager does not contain one. When I annotate the method with Propagation.REQUIRED, everything happens correctly. I've tested with Spring 3.2.5 and the bug persists.
{ "language": "en", "url": "https://stackoverflow.com/questions/20528683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using anonymous runnable class code goes in deadlock state but with lambda it works fine I am trying to find out the cause behind below mentioned code. Here if I create Thread using anonymous inner class it goes into deadlock state but with lambda expressions it works fine. I tried to find the reason behind this behavior but I could not. public class ThreadCreationTest { static { new ThreadCreationTest(); } private void call() { System.out.println("Hello guys!!!"); } public ThreadCreationTest() { // when we use this thread it goes in deadlock kind of state Thread thread1 = new Thread(new Runnable() { public void run() { call(); } }); // This one works fine. Thread thread = new Thread(() -> call()); thread.start(); try { thread.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } public static void main(String... args) { System.out.println("Code finished..."); } } with lambda expression output : Hello guys!!! Code finished... with anonymous class : code goes into deadlock state A: Decompiling with javap the inner class shows the following for the run method: public void run(); descriptor: ()V flags: ACC_PUBLIC Code: stack=1, locals=1, args_size=1 0: aload_0 1: getfield #12 // Field this$0:Ltest/ThreadCreationTest; 4: invokestatic #22 // Method test/ThreadCreationTest.access$0:(Ltest/ThreadCreationTest;)V 7: return LineNumberTable: line 31: 0 line 32: 7 LocalVariableTable: Start Length Slot Name Signature 0 8 0 this Ltest/ThreadCreationTest$1; Notice that there is a static synthetic method access$0 which in turn calls the private method call. The synthetic method is created because call is private and as far as the JVM is concerned, the inner class is just a different class (compiled as ThreadCreationTest$1), which cannot access call. 
static void access$0(test.ThreadCreationTest); descriptor: (Ltest/ThreadCreationTest;)V flags: ACC_STATIC, ACC_SYNTHETIC Code: stack=1, locals=1, args_size=1 0: aload_0 1: invokespecial #68 // Method call:()V 4: return LineNumberTable: line 51: 0 LocalVariableTable: Start Length Slot Name Signature Since the synthetic method is static, it is waiting for the static initializer to finish. However, the static initializer is waiting for the thread to finish, hence causing a deadlock. On the other hand, the lambda version does not rely on an inner class. The bytecode of the constructor relies on an invokedynamic instruction (instruction #9) using MethodHandles: public test.ThreadCreationTest(); descriptor: ()V flags: ACC_PUBLIC Code: stack=3, locals=3, args_size=1 0: aload_0 1: invokespecial #13 // Method java/lang/Object."<init>":()V 4: new #14 // class java/lang/Thread 7: dup 8: aload_0 9: invokedynamic #19, 0 // InvokeDynamic #0:run:(Ltest/ThreadCreationTest;)Ljava/lang/Runnable; 14: invokespecial #20 // Method java/lang/Thread."<init>":(Ljava/lang/Runnable;)V 17: astore_1 18: aload_1 19: invokevirtual #23 // Method java/lang/Thread.start:()V 22: aload_1 23: invokevirtual #26 // Method java/lang/Thread.join:()V 26: goto 36 29: astore_2 30: invokestatic #29 // Method java/lang/Thread.currentThread:()Ljava/lang/Thread; 33: invokevirtual #33 // Method java/lang/Thread.interrupt:()V 36: return
{ "language": "en", "url": "https://stackoverflow.com/questions/40241074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: matplotlib - Shunt x-axis labels I have data in a numpy record array: a = np.array([(29.40818036, '1'), (34.96458222, '2'), (16.05225074, '3'), (13.23025364, '4'), (6.340924671, '5+')], dtype=[('f0', '<f8'), ('f1', 'S2')]) And I'm plotting a bar graph like this: plt.bar(np.arange(5)+0.5,a['f0'],width=1,color='0.95') plt.ylim(0,40) plt.xlim(0.5,5.5) ax=plt.gca() ax.set_xticklabels(a['f1']) Giving: Note the x-axis values do not align correctly with the bars; the first value in a['f1'] is missing ('1'). a['f1'] is ['1' '2' '3' '4' '5+'] - I was expecting these 5 strings to be placed underneath the five bars. However, they are shunted to the left by one and the '1' drops off. I am looking for a way to 'shunt' the values to the right. What is the best way to adjust the x-axis tick labels? A: You have to set the tick positions first: ax.set_xticks(np.arange(5) + 1.) ax.set_xticklabels(a['f1'])
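Putting the question's data together with the answer's fix, a complete sketch (using the non-interactive Agg backend so it runs headless; note that on current Matplotlib versions bars are center-aligned by default, so align='edge' is added here to reproduce the original left-edge layout):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt
import numpy as np

a = np.array([(29.40818036, '1'), (34.96458222, '2'), (16.05225074, '3'),
              (13.23025364, '4'), (6.340924671, '5+')],
             dtype=[('f0', '<f8'), ('f1', 'S2')])

fig, ax = plt.subplots()
ax.bar(np.arange(5) + 0.5, a['f0'], width=1, color='0.95', align='edge')
ax.set_ylim(0, 40)
ax.set_xlim(0.5, 5.5)
ax.set_xticks(np.arange(5) + 1.0)                  # a tick under each bar...
ax.set_xticklabels([s.decode() for s in a['f1']])  # ...then label those ticks
print([t.get_text() for t in ax.get_xticklabels()])
# -> ['1', '2', '3', '4', '5+']
```

Decoding the 'S2' bytes first keeps the labels from rendering as b'1' on Python 3.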
{ "language": "en", "url": "https://stackoverflow.com/questions/23650971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Make iPhone status bar disappear when displaying a modal view? I want to display a modal view, and want it to cover the iPhone's status bar. I tried setting the modal view controller's wantsFullScreenLayout property to YES; I also set its parent's property to YES as well. This doesn't work, presumably because the modal view displays below the main window's content, which includes the status bar. My second approach dropped the whole "wantsFullScreenLayout" technique in favor of hiding the status bar just before the modal view is displayed, then turning it back on after the modal view is dismissed. This works until the very end...the modal view's parent view is laid out incorrectly (its navigation bar is partially hidden behind the status bar.) Calling -[view setNeedsLayout] does nothing. How should I approach this problem? Thanks. A: You'll be wanting the - (void)setStatusBarHidden:(BOOL)hidden animated:(BOOL)animated on the UIApplication class. Something like this: [[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES]; That should hide the status bar with a nice fade animation. A: Joining the discusion late, but I think I can save others some trouble. I have a VC several pushes into a NavController (let's call that VC the PARENT). Now I want to display a modal screen (the CHILD) with the nav bar AND status bar hidden. After much experimentation, I know this works... 1) Because I present the CHILD VC by calling presentModalViewController:(UIViewController *)modalViewController animated:(BOOL)animated in the PARENT, the nav bar is not involved anymore (no need to hide it). 2) The view in the CHILD VC nib is sized to 320x480. 
3) The CHILD VC sets self.wantsFullScreenLayout = YES; in viewDidLoad 4) just before presenting the CHILD, hide the status bar with [[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:YES]; 5) dismiss the CHILD VC using a delegate protocol methods in the PARENT, and call [[UIApplication sharedApplication] setStatusBarHidden:NO withAnimation:YES]; before dismissModalViewControllerAnimated:YES] to make sure the nav bar is drawn in the correct location Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/2188401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Type Inference with "auto;" * *From Wikipedia What is the use of the keyword auto in this case (below) if not automatic type deduction? struct SomeStruct { auto func_name(int x, int y) -> int; }; auto SomeStruct::func_name(int x, int y) -> int {return x + y; } *What are some situations in which one needs to specify types explicitly? A: This is the trailing return type. auto is simply a placeholder that indicates that the return type comes later. The reason for this is so that the parameter names can be used in computing the return type: template<typename L, typename R> auto add(L l, R r) -> decltype(l+r) { return l+r; } The alternative is: template<typename L, typename R> decltype(std::declval<L>()+std::declval<R>()) add(L l, R r) { return l+r; } It's likely that a future addition to the language will be to allow leaving out the trailing return type and instead using automatic type deduction, as is permitted with lambdas. template<typename L, typename R> auto add(L l, R r) { return l+r; }
{ "language": "en", "url": "https://stackoverflow.com/questions/15510126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Get element by name which has $key PHP <input type="number" min="0" max="500" value="" name="qty<?php echo $key ?>" id="<?php echo $key ?>" onChange="findTotal()" /> JS function findTotal() { var arr = document.getElementsByName('qty'); .... } How do I get the element by name when the name includes $key? A: Let's say $key is 'x'. You could then use getElementById('x'), because echoing $key is the same as putting id="x". A: Oh, I see: you have a series of rows with different qty keys. Then try this. In PHP: <input type="number" min="0" max="500" value="" name="qty<?php echo $key ?>" id="<?php echo $key ?>" onChange="findTotal('qty<?php echo $key?>')" /> and in JS: function findTotal(key) { var arr = document.getElementsByName(key); .... } A: You can pass this as your function parameter; then you can read its name or id in your function when the onchange event executes: //assign this <input type="number" min="0" max="500" value="" name="qty<?php echo $key ?>" id="<?php echo $key ?>" onChange="findTotal(this)" /> function findTotal(current) { var arr = current.name; // current.id for id alert(arr); } A: Since your name is dynamic, you cannot get the element by name unless your JS has some way of knowing that dynamic name (such as if you have a list of them and are running them in a loop). In your case, you may want to try using data attributes as outlined here: Mozilla - Using data attributes You may also be able to take advantage of this as outlined in this answer: Javascript Get Element Value - Answer By yorick Let me know if this helps. I may be able to refine this answer if you can give some more information about your specific implementation. Cheers!
{ "language": "en", "url": "https://stackoverflow.com/questions/43729422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: laravel 5.2 the next 3 events from today I want to set up an event calendar. The user must see the next 3 events from today; therefore, when an event's date is past, it should disappear. How can I do this? Thank you. My controller: class AccueilController extends Controller { public function affichePage() { $agenda = Agenda::all()->take(3)->sortBy('date'); return view('Accueil.Page',compact('slides','agenda')); } } A: Your code is not being sorted due to the way you are making your call. Here is what is happening at the moment: $agenda = Agenda::all() Load every agenda in the database ->take(3) From all those agendas I loaded, take the first three. ->sortBy('date'); Sort only those three by date. To achieve what you appear to want, judging by your request, you would call $agenda = Agenda::where('date', '>=', $the_date)->orderBy('date', 'asc')->take(3)->get(); Where $the_date is the date you want to be the minimum. Typically, you would use a date function from the Carbon library to do this: $the_date = \Carbon\Carbon::now(); This query is forcing the work to be done in the database, with the call to get finally retrieving the results. In order, we are telling the database to: * *Filter all the agendas to only those with a date greater than the one we pass in *Order the filtered set by the date, starting from the earliest *Take the first three from that filtered set *Return that set to the $agendas variable.
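The where/orderBy/take pipeline in the answer is easy to mirror in plain code. As a language-neutral illustration (Python here, with hypothetical event dicts standing in for Agenda rows):

```python
from datetime import date

def next_events(events, today, count=3):
    """Events on or after `today`, earliest first, at most `count` of them."""
    upcoming = [e for e in events if e["date"] >= today]  # where('date', '>=', ...)
    upcoming.sort(key=lambda e: e["date"])                # orderBy('date', 'asc')
    return upcoming[:count]                               # take(3)

events = [
    {"name": "past", "date": date(2018, 1, 1)},
    {"name": "c", "date": date(2018, 9, 30)},
    {"name": "a", "date": date(2018, 9, 10)},
    {"name": "b", "date": date(2018, 9, 20)},
    {"name": "d", "date": date(2018, 10, 5)},
]
print([e["name"] for e in next_events(events, date(2018, 9, 1))])
# -> ['a', 'b', 'c']
```

The difference from the question's original call is the same as in the answer: filter and order first, then take three, rather than taking three unsorted rows and sorting only those.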
{ "language": "en", "url": "https://stackoverflow.com/questions/52132038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Transform Flutter Barcode Result single line into list i have code like this return DefaultTabController( length: 4, child: Scaffold( appBar: AppBar( title: Text("Halaman Dashboard"), actions: <Widget>[ IconButton( onPressed: () { signOut(); }, icon: Icon(Icons.lock_open), ) ], ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ RaisedButton( child: Text('Scan'), onPressed: () async { try { String barcode = await BarcodeScanner.scan(); setState(() { this.barcode = barcode; }); } on PlatformException catch (error) { if (error.code == BarcodeScanner.CameraAccessDenied) { setState(() { this.barcode = 'Izin kamera tidak diizinkan oleh si pengguna'; }); } else { setState(() { this.barcode = 'Error: $error'; }); } } }, ), Text( 'Result: $barcode', //THIS textAlign: TextAlign.center, ), ], ), )), ); variable $barcode with comment //THIS has a value like "abc; def; ghi; ..", the value is displayed in one line, how do I display the value in a list like Name: Abc addres: def phone: ghi ? A: Use Split. for more details https://stackoverflow.com/a/55358328/11794336 import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { static const String example = 'abc; def; ghi;'; @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( body: ListView( children: example .split(';') // split the text into an array .map((String text) => Text(text)) // put the text inside a widget .toList(), // convert the iterable to a list ) ), ); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/64165423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: tensorflow/keras training model keyerror Ok, from the top here's the imports that I use import keras from keras import layers from keras.models import Sequential import pandas as pd from sklearn.model_selection import train_test_split I then get the data from a csv using pandas and then split the necessary fields into X and y and also split it into train and test set. df = pd.read_csv('data/BCHAIN-NEW.csv') y = df['Predict'] X = df[['Value USD', 'Drop 7', 'Up 7', 'Mean Change 7', 'Change']] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, shuffle=False) This is without shuffling so the data is split evenly X_test.head() >>> Value USD Drop 7 Up 7 Mean Change 7 Change 2320 1023.14 5.0 2.0 -22.754286 -103.62 2321 1126.76 5.0 2.0 -4.470000 132.09 2322 994.67 5.0 2.0 9.865714 111.58 2323 883.09 5.0 2.0 9.005714 -13.74 2324 896.83 5.0 2.0 12.797143 -11.31 X_train.head() >>> Value USD Drop 7 Up 7 Mean Change 7 Change 0 0.06480 2.0 4.0 -0.000429 -0.00420 1 0.06900 1.0 5.0 0.000274 0.00403 2 0.06497 1.0 5.0 0.000229 0.00007 3 0.06490 1.0 5.0 0.000514 0.00200 4 0.06290 2.0 4.0 0.000229 -0.00050 running the model like so now throws the index error model = Sequential() model.add(layers.Dense(100, activation='relu', input_shape=(5,))) model.add(layers.Dense(100, activation='relu')) model.add(layers.Dense(5, activation='softmax')) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(X_train, y_train, epochs=3) >>> Epoch 1/3 --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-38-868bc86350df> in <module>() 4 model.add(layers.Dense(5, activation='softmax')) 5 model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ----> 6 model.fit(X_train, y_train, epochs=3) c:\users\samuel\appdata\local\programs\python\python35\lib\site-packages\keras\models.py in fit(self, x, y, batch_size, epochs, 
verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs) ... c:\users\samuel\appdata\local\programs\python\python35\lib\site-packages\pandas\core\indexing.py in _convert_to_indexer(self, obj, axis, is_setter) 1267 if mask.any(): 1268 raise KeyError('{mask} not in index' -> 1269 .format(mask=objarr[mask])) 1270 1271 return _values_from_object(indexer) KeyError: '[1330 480 101 2009 1131 379 1498 2188 2121 700 1877 2011 2244 1262\n 1493 956 150 479 1345 1073 1173 1909 2260 2288 355 670 2143 1426\n 42 952 358 1183] not in index' A: It seems to me that your data is in the wrong format; it needs to be numpy arrays (assuming they are not already numpy arrays). Try converting them like so: x_train = np.array(x_train) y_train = np.array(y_train)
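The mismatch is easier to see without Keras at all. Below is a minimal pure-Python sketch (no pandas or numpy; the names `fetch_batch` and `rows` are invented for illustration) of positional lookups hitting label-keyed data, which is essentially what produces the KeyError above:

```python
# X_train keeps pandas-style index labels (2320...) after splitting, but the
# training loop asks for positional rows 0..n-1. A dict keyed by labels
# stands in for the labeled DataFrame here.
labels = [2320, 2321, 2322, 2323, 2324]
rows = {label: [0.1 * i] for i, label in enumerate(labels)}

def fetch_batch(data, positions):
    """Mimic a positional batch lookup against label-keyed data."""
    missing = [p for p in positions if p not in data]
    if missing:
        raise KeyError(f"{missing} not in index")
    return [data[p] for p in positions]

try:
    fetch_batch(rows, [0, 1, 2])  # positions are not labels
except KeyError as exc:
    print("KeyError:", exc)

# Dropping the labels (which is what np.array(X_train) effectively does)
# leaves plain positional storage, so position-based lookups work again:
positional = [rows[label] for label in labels]
print(positional[0])  # [0.0]
```

This is why converting the DataFrames to arrays, as the answer suggests, makes the error go away.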
{ "language": "en", "url": "https://stackoverflow.com/questions/51816624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to merge adjacent polygons I'm using a Javascript implementation of Fortune's algorithm to compute voronoi cells (https://github.com/gorhill/Javascript-Voronoi). My sites to compute are points on a map (so (lat,lng)). I first made the projection (lat,lng) -> (x,y), I then computed the voronoi cells and made the projection of the half edges the other way. It works fine; I display the result using leaflet, but I need to do one more thing. Each site I initially compute depends on an ID; I reclassify the voronoi cells by ID and I end up, for each ID, with a standard data structure looking like this: { "type": "FeatureCollection", "features": [ { "type": "Feature", "geometry": { "type": "Polygon", "coordinates": [[ [9.994812, 53.549487], [10.046997, 53.598209], [10.117721, 53.531737], [9.994812, 53.549487] ]] } }, { "type": "Feature", "geometry": { "type": "Polygon", "coordinates": [[ [10.000991, 53.50418], [10.03807, 53.562539], [9.926834, 53.551731], [10.000991, 53.50418] ]] } } ] }; A set of polygons (made from the half edges of the voronoi cells) for a given ID. I need to merge those polygons by ID. I intended to use turf.merge(), but I get topology errors turf.min.js:13 Uncaught TopologyError: side location conflict Based on this post (http://lists.refractions.net/pipermail/jts-devel/2009-March/002939.html), I've tried to round the (lat,lng) pairs from 10^-14 to 10^-7 but it didn't really work. Before looking for the kinks and trying to remove them, I printed some data samples and I'm now asking myself if I used the right data from Fortune's algorithm. When I display all the polygons for all IDs, I have the right diagram, but when I display all the polygons for one ID or some polygons for one ID I end up with incomplete diagrams : Part of the full diagram Part of the diagram for one ID Two "polygons" for a given ID Does anyone have an idea how to merge polygons that share at least one common vertex ? And why there is a topology error ?
Edit : The polygons are not "incomplete" (I was using polyline) I also tried on an easier sample : And still got the error : Uncaught TopologyError: side location conflict [ (44.8220601, -0.5869532) ] So it's not (or at least not only) due to kinks A: Your problem appears to be occurring before the data gets to Turf. Running the GeoJSON from your GitHub issue through a GeoJSON validator reveals two errors. The first is that you only include a geometry object for each feature, and GeoJSON requires that all features also have a properties object, even if it's empty. Second, and more importantly, a valid GeoJSON polygon must be a closed loop, with identical coordinates for the first and last points. This second problem appears to be what's causing Turf to throw its error. The polygons will successfully merge once the first set of coordinates is copied to the end to close the ring. After displaying the data on a map, it also becomes clear that your latitude and longitude are reversed. Coordinates are supposed to be lon,lat in GeoJSON, and because yours are in lat,lon, the polygons show up in the middle of the Indian Ocean. Once that is corrected, they show up in the correct place. Here is a fiddle showing their successful merging: http://fiddle.jshell.net/nathansnider/p7kfxvk7/
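The repairs described in the answer can be sketched as a small helper. This is a hypothetical function (not part of Turf itself) that adds the missing `properties` member, swaps lat/lng into GeoJSON's lng/lat order, and closes the ring; it assumes simple polygons with a single outer ring and no holes:

```javascript
// Hypothetical repair helper for the problems identified in the answer.
function fixPolygon(feature) {
  // swap [lat, lng] -> [lng, lat] as the GeoJSON spec requires
  const ring = feature.geometry.coordinates[0].map(([lat, lng]) => [lng, lat]);
  const first = ring[0];
  const last = ring[ring.length - 1];
  if (first[0] !== last[0] || first[1] !== last[1]) {
    ring.push([first[0], first[1]]); // close the linear ring
  }
  return {
    type: 'Feature',
    properties: feature.properties || {}, // GeoJSON features require this member
    geometry: { type: 'Polygon', coordinates: [ring] },
  };
}

// An unclosed triangle in lat,lng order, as described in the answer:
const fixed = fixPolygon({
  type: 'Feature',
  geometry: {
    type: 'Polygon',
    coordinates: [[[53.549487, 9.994812], [53.598209, 10.046997], [53.531737, 10.117721]]],
  },
});
console.log(fixed.geometry.coordinates[0].length); // 4 once the ring is closed
```

Running every feature through a helper like this before calling turf.merge() avoids the "side location conflict" error caused by unclosed rings.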
{ "language": "en", "url": "https://stackoverflow.com/questions/36280774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Rails + Github - How to keep 'personal' development hotfixes/tweaks uncommited/tracked? I do a lot of personal development tweaks on code on my side, like adding an account automatically, opening up sublime in a certain way when there's an exception (with a rescue_from from an ApplicationController), and other misc tweaks I think are very useful for me but that I don't think I should/other colleagues would like to have committed. I searched around a bit and supposedly git doesn't have any way to ignore single file lines. I figured a solution (albeit probably a little complicated and involving markup) would be using Git pre-commit hooks, but... doesn't sound very neat to me. How can I keep personal code tweaks on my side, inside existing, committed files, without manually stashing/restoring them between commits, while also being branch-independent? A: I searched around a bit and supposedly git doesn't have any way to ignore single file lines. Good news: you can do it. How? You will use something called a hunk in git. Hunk what? Hunks allow you to choose which changes you want to add to the staging area and then commit. You can choose any part of the file to add (as long as it's a single change) or not to add. Once you have chosen your changes to commit you will "leave" the changes you don't wish to commit in your working directory. You can then choose if you want this file to be tracked as modified or not with the help of the assume-unchanged flag. Here is a sample code for you. # make any changes to any given file # add the file with the `-p` flag. git add -p # now you can choose from the following options what you want to do. # usually you will use the `s` for splitting up your changes. Use git add -p to add only the parts of the changes which you choose to commit. You can choose which changes you wish to add (picking the changes) and not commit them all.
# once you're done editing you will have 2 copies of the file # (assuming you did not add all the changes) # one file with the "private" changes in your working dir # and the "public" changes waiting for commit in the staging area. Add the file to .gitignore file This will ignore the file and any changes made to it. --assume-unchanged Raise the --assume-unchanged flag on this file so it will stop tracking changes on this file Using method (2) will tell git to ignore this file even when it's already committed. It will allow you to modify the file without having to commit it to the repository. git-update-index --[no-]assume-unchanged When this flag is specified, the object names recorded for the paths are not updated. Instead, this option sets/unsets the "assume unchanged" bit for the paths. When the "assume unchanged" bit is on, the user promises not to change the file and allows Git to assume that the working tree file matches what is recorded in the index. If you want to change the working tree file, you need to unset the bit to tell Git. This is sometimes helpful when working with a big project on a filesystem that has very slow lstat(2) system call (e.g. cifs). Git will fail (gracefully) in case it needs to modify this file in the index e.g. when merging in a commit; thus, in case the assumed-untracked file is changed upstream, you will need to handle the situation manually.
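A quick way to convince yourself of the assume-unchanged behaviour is a throwaway repo. Everything below is a sketch (file name, user identity, and commit message are invented):

```shell
# Throwaway demo: commit a file, flag it assume-unchanged, tweak it,
# and confirm git no longer reports it as modified.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name dev
echo 'shared setting' > config.rb
git add config.rb
git commit -qm 'add config'
# promise git the file will not change (note the spelling: --assume-unchanged)
git update-index --assume-unchanged config.rb
echo 'my personal tweak' >> config.rb
status="$(git status --porcelain)"
echo "status: [${status}]"   # empty - the tweak is invisible to git
# undo later with: git update-index --no-assume-unchanged config.rb
```

The file still carries your tweak on disk, but `git status` stays clean until you unset the bit.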
{ "language": "en", "url": "https://stackoverflow.com/questions/34675811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why updating an android app can make it appear twice? I made a lot of changes to my app : database schema, graphics, code, etc. The biggest is the package name, which I renamed to a totally different one. The application got the same name and Id in the manifest.xml file and the apk got the same name, with the same digital signature. Nevertheless, when using ./adb install -r myapp.apk, myapp appears twice in the menu. Of course since the DB is stored in a directory using the package name as name, the user feels like their data is lost. How can I prevent this from happening, and if I can't, how can I automate the migration ? I have several clues : prompting the user to uninstall the old app, copying the database from the old file to the new one, etc. A: The direct answer is the application appears twice because Android Market and Android OS view two different packages as two different applications. The code can be the same, but if the packages are different the applications are completely different Android Market identifies applications by their package name. I suspect this is because the OS tracks programs by package...makes sense that you wouldn't want two packages with the exact same name installed, how would the OS know which one to call? Therefore, if you install a package with the same name as a package that's already installed the OS will view it as a package upgrade and let the new program access the old user data. You state that the packages share the same ID, I assume this is user ID. This enables you to share data between the packages. More information is here: http://developer.android.com/guide/topics/security/security.html#userid Recommendation: Release a small upgrade to your old package providing whatever glue is needed to let it share its data with your new package. Then release your new package with the code to import the user data from the old package (need same UserId and signature).
The transition would be seamless to the user (no manual backup and import). A: The application signature must be the same. If you imported the project in another Eclipse, build it and upload it to market you will see 2 separate apps.
{ "language": "en", "url": "https://stackoverflow.com/questions/887226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Wordpress subdomain and multisite Ok, here is the situation I can't solve: I have a WP multisite running on a server with 3 domains and 3 sites, and on that server I also have a subdomain running its own WP. So, let's say I have these MU sites: site_1.com site_2.com site_3.com And also I have this subdomain: blog.site_1.com The thing is, if I go to this url: "site_2.com/blog" it shows me the page using a 404 template with the subdomain theme style. Yeah!! So, trying to access "site_2.com/blog" gives me a 404 using a theme that is in use on a subdomain named "blog" that is live over: blog.site_1.com. My MU htaccess has this: RewriteBase / RewriteRule ^index\.php$ - [L] RewriteRule ^([_0-9a-zA-Z-]+/)?files/(.+) wp-includes/ms-files.php?file=$2 [L] RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L] RewriteCond %{REQUEST_FILENAME} -f [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^ - [L] RewriteRule ^[_0-9a-zA-Z-]+/(wp-(content|admin|includes).*) $1 [L] RewriteRule ^[_0-9a-zA-Z-]+/(.*\.php)$ $1 [L] RewriteRule . index.php [L] RewriteCond %{HTTP_HOST} ^blog.site_1.com [OR] RewriteCond %{HTTP_HOST} ^www.blog.site_1.com$ RewriteRule ^(.*)$ http://site_1.com [R=301,L] Last 3 lines are obviously required in order to have a different WP installation running on that subdomain, which is actually the "blog" folder in my public_html. Also that "blog/" folder (where the site under blog.site_1.com is installed) has this default wp htaccess code: RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .
/index.php [L] Maybe that's the problem: on one hand I have a multisite running with 3 domains mapping to the same server, but on the other hand a subdomain running its own wp installation and, badly, in this case I built a page named "blog" that conflicts with the "blog" folder that is actually another site ?¿?¿?¿ jejej, it's crazy, and at this point this is personal :) I need to know why I can't make this work. Any idea on how I can solve this thing? Thanks
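One possible direction, offered here only as an untested sketch: the `RewriteCond %{REQUEST_FILENAME} -d` rule in the multisite .htaccess matches the physical blog/ folder for every domain, which is why site_2.com/blog is handed to the standalone install and 404s there. Short-circuiting that for the other domains might look like this:

```apache
# Untested sketch: keep /blog requests on the other domains inside the
# multisite WordPress instead of the standalone install in blog/.
# Place these two lines just before "RewriteCond %{REQUEST_FILENAME} -f [OR]".
RewriteCond %{HTTP_HOST} !^(www\.)?blog\.site_1\.com$
RewriteRule ^blog(/.*)?$ index.php [L]
```

For blog.site_1.com the condition fails and the existing rules apply unchanged, so the standalone install keeps working.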
{ "language": "en", "url": "https://stackoverflow.com/questions/24435726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Non-type template parameter and std::enable_if_t I'm trying to make some persistence stuff and I have a struct like this: struct EntityPersistence { template <typename Archive> void persist(Archive &ar, Entity &) { } }; Then, in my class Entity I have something like this: static const EntityPersistence entityPersistence; PERSISTENCE_CUSTOM(Entity, entityPersistence) This macro does something like this: #define PERSISTENCE_CUSTOM(Base, customPersistence) \ SERIALIZE(Base, customPersistence) Following the chain... (here is where the important thing comes) #define SERIALIZE(Base, customPersistence) template <class Archive> void serialize(Archive& ar) { serialize_custom(ar); } template <class Archive, class Base, decltype(customPersistence) &persistence = customPersistence> std::enable_if_t<std::is_base_of<cereal::InputArchive<Archive>, Archive>::value && has_deserialize<std::remove_const<decltype(customPersistence)>::type, Archive&, Base&>() == true, void> serialize_custom(Archive &ar) { persistence.deserialize(ar, const_cast<Base&>(*this)); } Some missing code to check which functions are implemented in the Persistance struct in order to branch execution code in compile time: template<class> struct sfinae_true : std::true_type{}; template<class T, class A0, class A1> static auto test_deserialize(int) -> sfinae_true<decltype(std::declval<T>().deserialize(std::declval<A0>(), std::declval<A1>()))>; template<class, class A0, class A1> static auto test_deserialize(long) -> std::false_type; template<class T, class A0, class A1> static auto test_persist(int) -> sfinae_true<decltype(std::declval<T>().persist(std::declval<A0>(), std::declval<A1>()))>; template<class, class A0, class A1> static auto test_persist(long) -> std::false_type; template<class T, class Arg1, class Arg2> struct has_deserialize : decltype(::detail::test_deserialize<T, Arg1, Arg2>(0)){}; template<class T, class Arg1, class Arg2> struct has_persist : decltype(::detail::test_persist<T, Arg1, 
Arg2>(0)){}; The error in question: In member function ‘std::enable_if_t<(std::is_base_of<cereal::InputArchive<Archive>, Archive>::value && (has_deserialize<EntityPersistence, Archive&, Entity&>() == true)), void> Entity::serialize_custom(Archive&)’: error: ‘const struct EntityPersistence’ has no member named ‘deserialize’ persistence.deserialize(ar, const_cast<Base&>(*this)); \ ^ The deserialize function doesn't exist in EntityPersistence, but this serialize_custom specialization shouldn't exist either if the enable_if_t had done its job. I have tested the has_deserialize struct outside this code and it works perfectly. Could this have something to do with the non-type template parameter in the serialize_custom functions? Maybe it's evaluated before the enable_if_t? Thanks in advance A: Not sure, and I don't have enough elements to try, but... what about checking persistence (the template parameter of serialize_custom()) instead of customPersistence (which isn't a template parameter of serialize_custom())? I mean... what about as follows?
template <class Archive, class Base, decltype(customPersistence) & persistence = customPersistence> std::enable_if_t<std::is_base_of<cereal::InputArchive<Archive>, Archive>::value && has_deserialize<std::remove_const<decltype(persistence)>::type, Archive&, Base&>() == true> //^^^^^^^^^^^ serialize_custom(Archive &ar) { persistence.deserialize(ar, const_cast<Base&>(*this)); } A: I finally solved this problem with an intermediary method (in case anyone is interested): template <class Archive> void serialize(Archive& ar) { serialize_custom_helper(ar); } template <class Archive, decltype(customPersistence)& persistence = customPersistence> \ void serialize_custom_helper(Archive& ar) { serialize_custom(ar, persistence); } template <class Archive, class Base, class P> std::enable_if_t<std::is_base_of<cereal::InputArchive<Archive>, Archive>::value && has_deserialize2<P, Archive&, Base&>() == true, void> serialize_custom(Archive &ar, P& persistence) { persistence.deserialize(ar, const_cast<Base&>(*this)); } ...
{ "language": "en", "url": "https://stackoverflow.com/questions/54596990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why PyObject_IsInstance always return 0 in my sample code I write a sample to learn python, but when call PyObject_IsInstance, this function always return 0. Here's my c code ReadBuf.c #include "Python.h" static PyObject* Test_IsInstance(PyObject* self, PyObject* args){ PyObject* pyTest = NULL; PyObject* pName = NULL; PyObject* moduleDict = NULL; PyObject* className = NULL; PyObject* pModule = NULL; pName = PyString_FromString("client"); pModule = PyImport_Import(pName); if (!pModule){ printf("can not find client.py\n"); Py_RETURN_NONE; } moduleDict = PyModule_GetDict(pModule); if (!moduleDict){ printf("can not get Dict\n"); Py_RETURN_NONE; } className = PyDict_GetItemString(moduleDict, "Test"); if (!className){ printf("can not get className\n"); Py_RETURN_NONE; } /* PyObject* pInsTest = PyInstance_New(className, NULL, NULL); PyObject_CallMethod(pInsTest, "py_print", "()"); */ int ok = PyArg_ParseTuple(args, "O", &pyTest); if (!ok){ printf("parse tuple error!\n"); Py_RETURN_NONE; } if (!pyTest){ printf("can not get the instance from python\n"); Py_RETURN_NONE; } /* PyObject_CallMethod(pyTest, "py_print", "()"); */ if (!PyObject_IsInstance(pyTest, className)){ printf("Not an instance for Test\n"); Py_RETURN_NONE; } Py_RETURN_NONE; } static PyMethodDef readbuffer[] = { {"testIns", Test_IsInstance, METH_VARARGS, "test for instance!"}, {NULL, NULL} }; void initReadBuf(){ PyObject* m; m = Py_InitModule("ReadBuf", readbuffer); } And below is my python code client.py #!/usr/bin/env python import sys import ReadBuf as rb class Test: def __init__(self): print "Test class" def py_print(self): print "Test py_print" class pyTest(Test): def __init__(self): Test.__init__(self) print "pyTest class" def py_print(self): print "pyTest py_print" b = pyTest() rb.testIns(b) I pass b which is an instance of pyTest to C, and it is parsed by PyArg_ParseTuple to pyTest. When run PyObject_IsInstance, the result is always zero, which means pyTest is not a instance of Test. 
My questions: When a parameter is passed from Python to C, is its type changed? And how should I compare whether pyTest is an instance of Test? Thanks, Vatel A: The client module is not completely loaded when the extension tries to load it; client is executed twice (watch the output carefully). So Test in client.py and Test in the extension module reference different class objects. You can work around this by extracting the classes into a separate module (say common.py) and importing common in both client.py and the extension module. See a demo.
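The double-execution failure mode can be reproduced in pure Python, without the extension. This is a minimal sketch (file and module names invented): loading the same source under two module identities yields two distinct class objects, and isinstance across them fails exactly as PyObject_IsInstance does in the extension:

```python
# Load one source file under two module names and compare the classes.
import importlib.util
import os
import tempfile

source = "class Test:\n    pass\n"

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "client.py")
    with open(path, "w") as f:
        f.write(source)

    def load_as(name):
        """Execute the file as a module with the given name."""
        spec = importlib.util.spec_from_file_location(name, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return mod

    first = load_as("client")         # what the script execution sees
    second = load_as("client_again")  # what a second, separate import produces

    obj = first.Test()
    print(isinstance(obj, first.Test))   # True
    print(isinstance(obj, second.Test))  # False: distinct class objects
```

Moving the class into a module that both sides import (the common.py suggestion above) means both sides share the single cached module object, so the classes compare identical.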
{ "language": "en", "url": "https://stackoverflow.com/questions/21874018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Parsing XML references in Delphi I used Delphi 2006 data binding wizard to create a interface for an XML configuration file. Later on I realized that some repeated parts of the XML can be separated from the main file and referenced where needed. The resulting XML looks something like this: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE module [ <!ENTITY Schema65 SYSTEM "schemas/65.xml"> ]> <module> <schema>&Schema65;</schema> </module> If I open this file using Internet Explorer the contents of the placeholder "&Schema65;" is correctly replaced with the contents of the external file. The Delphi parser however doesn't seem to recognize this feature and doesn't replace the text. Any idea how to solve this issue? A: Internet Explorer is surely using the MSXML library. Set the TXmlDocument.DomVendor property to MSXML_DOM (found in the msxmldom unit), and you should get the same behavior. You can also change the DefaultDOMVendor global variable to SMSXML to make all new TXmlDocument objects use that vendor. A: Have you already tried OmniXML? I've been using it for years and it always solved my problems regarding XML files. If you haven't, I'd advice you to give it a try: it's simple to use, light and free. A: Internet Explorer use XmlResolver, The XmlResolver property of the XmlDocument is used by the XmlDocument class to locate resources that are not inline in the XML data, such as external document type definitions (DTDs), entities, and schemas. These items can be located on a network or on a local drive, and are identifiable by a Uniform Resource Identifier (URI). This allows the XmlDocument to resolve EntityReference nodes that are present in the document and validate the document according to the external DTD or schema. you should use a delphi library that implements a resolver and parser to external resources. Open XML implements a resolver using TStandardResourceResolver Bye. A: The following solved the problem for me. 
It seems that Delphi's default parser (MSXML) actually includes external entity references, but in a somewhat strange way. For this example <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE module [ <!ENTITY Schema65 SYSTEM "schemas/65.xml"> ]> <module> <schema>&Schema65;</schema> </module> I assumed that, creating a TXMLDocument and given that the external file contains simple text, I could get the contents of the file like this: MyXML := TXMLDocument.Create('myfile.xml'); ExternalText := MyXML.documentElement.ChildNodes['schema'].Text; This actually works if the entity reference is replaced with the simple text. However, when using the external entity, Delphi will create a new child of type "ntEntityRef" inside the "schema" node. This node will also have a child which finally contains the simple text I expected. The text can be accessed like this: MyXML.documentElement.ChildNodes['schema'].FirstChild.FirstChild.Text; In case the external entity file contains a node structure, the corresponding nodes will be created inside the entity reference node. Make sure TXMLDocument.ParseOptions is set to at least [poResolveExternals] for that to happen. This approach also makes it relatively easy to adapt the code generated by the XML Data Binding Wizard to work with external entities.
{ "language": "en", "url": "https://stackoverflow.com/questions/1411625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Spring JPA REST sort by nested property I have entity Market and Event. Market entity has a column: @ManyToOne(fetch = FetchType.EAGER) private Event event; Next I have a repository: public interface MarketRepository extends PagingAndSortingRepository<Market, Long> { } and a projection: @Projection(name="expanded", types={Market.class}) public interface ExpandedMarket { public String getName(); public Event getEvent(); } using REST query /api/markets?projection=expanded&sort=name,asc I get successfully the list of markets with nested event properties ordered by market's name: { "_embedded" : { "markets" : [ { "name" : "Match Odds", "event" : { "id" : 1, "name" : "Watford vs Crystal Palace" }, ... }, { "name" : "Match Odds", "event" : { "id" : 2, "name" : "Arsenal vs West Brom", }, ... }, ... } } But what I need is to get list of markets ordered by event's name, I tried the query /api/markets?projection=expanded&sort=event.name,asc but it didn't work. What should I do to make it work? A: Based on the Spring Data JPA documentation 4.4.3. Property Expressions ... you can use _ inside your method name to manually define traversal points... You can put the underscore in your REST query as follows: /api/markets?projection=expanded&sort=event_name,asc A: Just downgrade spring.data.‌​rest.webmvc to Hopper release <spring.data.jpa.version>1.10.10.RELEASE</spring.data.jpa.ve‌​rsion> <spring.data.‌​rest.webmvc.version>‌​2.5.10.RELEASE</spri‌​ng.data.rest.webmvc.‌​version> projection=expanded&sort=event.name,asc // works projection=expanded&sort=event_name,asc // this works too Thanks @Alan Hay comment on this question Ordering by nested properties works fine for me in the Hopper release but I did experience the following bug in an RC version of the Ingalls release.bug in an RC version of the Ingalls release. 
This is reported as being fixed, * *jira issue - Sorting by an embedded property no longer works in Ingalls RC1 BTW, I tried v3.0.0.M3 that reported that fixed but not working with me. A: We had a case when we wanted to sort by fields which were in linked entity (it was one-to-one relationship). Initially, we used example based on https://stackoverflow.com/a/54517551 to search by linked fields. So the workaround/hack in our case was to supply custom sort and pageable parameters. Below is the example: @org.springframework.data.rest.webmvc.RepositoryRestController public class FilteringController { private final EntityRepository repository; @RequestMapping(value = "/entities", method = RequestMethod.GET) public ResponseEntity<?> filter( Entity entity, org.springframework.data.domain.Pageable page, org.springframework.data.web.PagedResourcesAssembler assembler, org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler entityAssembler, org.springframework.web.context.request.ServletWebRequest webRequest ) { Method enclosingMethod = new Object() {}.getClass().getEnclosingMethod(); Sort sort = new org.springframework.data.web.SortHandlerMethodArgumentResolver().resolveArgument( new org.springframework.core.MethodParameter(enclosingMethod, 0), null, webRequest, null ); ExampleMatcher matcher = ExampleMatcher.matching() .withIgnoreCase() .withStringMatcher(ExampleMatcher.StringMatcher.CONTAINING); Example example = Example.of(entity, matcher); Page<?> result = this.repository.findAll(example, PageRequest.of( page.getPageNumber(), page.getPageSize(), sort )); PagedModel search = assembler.toModel(result, entityAssembler); search.add(linkTo(FilteringController.class) .slash("entities/search") .withRel("search")); return ResponseEntity.ok(search); } } Used version of Spring boot: 2.3.8.RELEASE We had also the repository for Entity and used projection: @RepositoryRestResource public interface JpaEntityRepository extends JpaRepository<Entity, Long> { } A: Your 
MarketRepository could have a named query like: public interface MarketRepository extends PagingAndSortingRepository<Market, Long> { Page<Market> findAllByEventName(String name, Pageable pageable); } You can get your name param from the url with @RequestParam A: This page has an idea that works. The idea is to use a controller on top of the repository, and apply the projection separately. Here's a piece of code that works (SpringBoot 2.2.4) import ro.vdinulescu.AssignmentsOverviewProjection; import ro.vdinulescu.repository.AssignmentRepository; import org.apache.commons.lang3.StringUtils; import org.springframework.data.domain.Page; import org.springframework.data.domain.PageRequest; import org.springframework.data.domain.Pageable; import org.springframework.data.domain.Sort; import org.springframework.data.projection.ProjectionFactory; import org.springframework.data.web.PagedResourcesAssembler; import org.springframework.hateoas.EntityModel; import org.springframework.hateoas.PagedModel; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.RestController; @RepositoryRestController public class AssignmentController { @Autowired private AssignmentRepository assignmentRepository; @Autowired private ProjectionFactory projectionFactory; @Autowired private PagedResourcesAssembler<AssignmentsOverviewProjection> resourceAssembler; @GetMapping("/assignments") public PagedModel<EntityModel<AssignmentsOverviewProjection>> listAssignments(@RequestParam(required = false) String search, @RequestParam(required = false) String sort, Pageable pageable) { // Spring creates the Pageable object correctly for simple properties, // but for nested properties we need to fix it manually pageable = fixPageableSort(pageable, sort, Set.of("client.firstName", "client.age")); Page<Assignment> assignments = assignmentRepository.filter(search, pageable);
Page<AssignmentsOverviewProjection> projectedAssignments = assignments.map(assignment -> projectionFactory.createProjection( AssignmentsOverviewProjection.class, assignment)); return resourceAssembler.toModel(projectedAssignments); } private Pageable fixPageableSort(Pageable pageable, String sortStr, Set<String> allowedProperties) { if (!pageable.getSort().equals(Sort.unsorted())) { return pageable; } Sort sort = parseSortString(sortStr, allowedProperties); if (sort == null) { return pageable; } return PageRequest.of(pageable.getPageNumber(), pageable.getPageSize(), sort); } private Sort parseSortString(String sortStr, Set<String> allowedProperties) { if (StringUtils.isBlank(sortStr)) { return null; } String[] split = sortStr.split(","); if (split.length == 1) { if (!allowedProperties.contains(split[0])) { return null; } return Sort.by(split[0]); } else if (split.length == 2) { if (!allowedProperties.contains(split[0])) { return null; } return Sort.by(Sort.Direction.fromString(split[1]), split[0]); } else { return null; } } } A: From the Spring Data REST documentation: Sorting by linkable associations (that is, links to top-level resources) is not supported. https://docs.spring.io/spring-data/rest/docs/current/reference/html/#paging-and-sorting.sorting An alternative that I found was to use @RestResource(exported=false). This is not viable (especially for legacy Spring Data REST projects) because it prevents the resource/entity from being loaded via HTTP links: JacksonBinder BeanDeserializerBuilder updateBuilder throws com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot construct instance of ' com...'
no String-argument constructor/factory method to deserialize from String value I tried to activate sorting by linkable associations with the help of annotations, but without success, because we would always need to override the mapPropertyPath method of JacksonMappingAwareSortTranslator.SortTranslator to detect the annotation: if (associations.isLinkableAssociation(persistentProperty)) { if(!persistentProperty.isAnnotationPresent(SortByLinkableAssociation.class)) { return Collections.emptyList(); } } Annotation: @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD) public @interface SortByLinkableAssociation { } In your project, include @SortByLinkableAssociation on the linkable associations that you want to sort by. @ManyToOne(fetch = FetchType.EAGER) @SortByLinkableAssociation private Event event; Honestly, I didn't find a clear and successful solution to this issue, but I decided to expose it here so others can think about it, or so the Spring team might even consider addressing it in upcoming releases.
{ "language": "en", "url": "https://stackoverflow.com/questions/41807631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Cyclomatic complexity for Visual Studio 2008 Is there any tool for showing the cyclomatic complexity for Visual Studio in the left-hand bar where the debug symbol goes? I seem to remember there was an add-in for ReSharper, but I don't think it works in 4.5. Has anyone seen any similar tools, other than the built-in support in VS? A: A standalone tool with lots of metrics (including cyclomatic complexity) is NDepend. A: I believe CodeRush had it 'interactively', but heck, why bother; there are sources on the web that will give you commercial-free ideas and implementations. A: CodeRush from Developer Express will do this, and it works well. I vouch for it (and have no relation to the company other than being a long-time customer). A: McCabe IQ (www.mccabe.com/iq.htm), developed by the man who authored cyclomatic complexity, Tom McCabe. A: Code Metrics is an excellent free plug-in for Reflector that analyzes code size & complexity. A: Visual Studio 2008 Team System (or just VS 2008 Developer Edition) has Code Metrics. StudioTools is a free add-in for VS 2005 and VS 2008. NDepend is good too.
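For context on what these tools actually compute: McCabe's cyclomatic complexity is just the number of decision points in a piece of code plus one. A rough, simplified sketch of how a metrics tool counts it — illustrated here with Python's ast module purely for demonstration, not as how any of the Visual Studio add-ins above are implemented:

```python
import ast

# Node types that add a branch to the control-flow graph. This is
# simplified: a real tool also weighs boolean operands, case labels, etc.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    """McCabe complexity = number of decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1
```

A straight-line function scores 1 and every if/loop/handler adds 1, which is why a score in the double digits is the usual refactoring red flag these tools highlight.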
{ "language": "en", "url": "https://stackoverflow.com/questions/873485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Java: difference between `Class.getDeclaringClass` and `Class.getEnclosingClass` Here's the javadoc for these two methods, and I'm having trouble understanding the difference between them. Is it possible for the declaring class and immediate enclosing class to be different?
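A quick way to see the difference experimentally: anonymous (and local) classes have an enclosing class but no declaring class, so getDeclaringClass() returns null while getEnclosingClass() returns the outer class; for an ordinary member class the two agree. A small self-contained illustration (class names here are arbitrary):

```java
public class Outer {
    // A member (nested) class is *declared* inside Outer,
    // so declaring and enclosing class are both Outer.
    static class Member {}

    // An anonymous class is lexically *enclosed* by Outer,
    // but it is not declared as a member of any class.
    static final Object ANON = new Object() {};

    public static void main(String[] args) {
        System.out.println(Member.class.getDeclaringClass()); // class Outer
        System.out.println(Member.class.getEnclosingClass()); // class Outer

        Class<?> anonClass = ANON.getClass();
        System.out.println(anonClass.getDeclaringClass());    // null
        System.out.println(anonClass.getEnclosingClass());    // class Outer
    }
}
```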
{ "language": "en", "url": "https://stackoverflow.com/questions/50559803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Upgraded to ASP.NET Core 3.1, getting error about System.Text.Json I have recently upgraded from ASP.NET Core 2.2 to ASP.NET Core 3.1. Locally, everything seems to work ok, but I have issues when deploying to IIS. The error I am receiving in the Event Viewer is Application: w3wp.exe CoreCLR Version: 4.700.19.56402 .NET Core Version: 2.2.4 Description: The process was terminated due to an unhandled exception. Exception Info: System.IO.FileNotFoundException: Could not load file or assembly 'System.Text.Json, Version=4.0.1.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'. The system cannot find the file specified. File name: 'System.Text.Json, Version=4.0.1.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' at Microsoft.Extensions.Configuration.Json.JsonConfigurationProvider.Load(Stream stream) at Microsoft.Extensions.Configuration.FileConfigurationProvider.Load(Boolean reload) --- End of stack trace from previous location where exception was thrown --- at Microsoft.Extensions.Configuration.FileConfigurationProvider.HandleException(ExceptionDispatchInfo info) at Microsoft.Extensions.Configuration.FileConfigurationProvider.Load(Boolean reload) at Microsoft.Extensions.Configuration.FileConfigurationProvider.Load() at Microsoft.Extensions.Configuration.ConfigurationRoot..ctor(IList`1 providers) at Microsoft.Extensions.Configuration.ConfigurationBuilder.Build() at Microsoft.Extensions.Hosting.HostBuilder.BuildAppConfiguration() at Microsoft.Extensions.Hosting.HostBuilder.Build() at <AppName>.Program.Main(String[] args) This is odd because this error message states it is using .NET Core Version 2.2.4. I have installed the .NET Core 3.1 hosting bundle and restarted the server. I have run through the troubleshooting tips in this post and found: * When attempting to run the application from the command line (both running dotnet <AppName>.dll and <AppName>.exe), I get exactly the same error as I did in the Event Viewer. * I have enabled stdoutLog in the web.config.
The directory gets created but no log file is written. *I have enabled the ASP.NET Core Module debug log: [aspnetcorev2.dll] Resolving hostfxr parameters for application: 'dotnet' arguments: '.\<AppName>.dll' path: 'C:\inetpub\wwwroot\<AppName>\' [aspnetcorev2.dll] Known dotnet.exe location: '' [aspnetcorev2.dll] Process path 'dotnet.exe' is dotnet, treating application as portable [aspnetcorev2.dll] Resolving absolute path to dotnet.exe from 'dotnet.exe' [aspnetcorev2.dll] Invoking where.exe to find dotnet.exe [aspnetcorev2.dll] where.exe invocation returned: 'C:\Program Files\dotnet\dotnet.exe C:\Program Files (x86)\dotnet\dotnet.exe ' [aspnetcorev2.dll] Current process bitness type detected as isX64=1 [aspnetcorev2.dll] Processing entry 'C:\Program Files\dotnet\dotnet.exe' [aspnetcorev2.dll] Binary type 6 [aspnetcorev2.dll] Found dotnet.exe via where.exe invocation at 'C:\Program Files\dotnet\dotnet.exe' [aspnetcorev2.dll] Resolving absolute path to hostfxr.dll from 'C:\Program Files\dotnet\dotnet.exe' [aspnetcorev2.dll] hostfxr.dll located at 'C:\Program Files\dotnet\host\fxr\3.1.0\hostfxr.dll' [aspnetcorev2.dll] Converted argument '.\<AppName>.dll' to 'C:\inetpub\wwwroot\<AppName>\.\<AppName>.dll' [aspnetcorev2.dll] Parsed hostfxr options: dotnet location: 'C:\Program Files\dotnet\dotnet.exe' hostfxr path: 'C:\Program Files\dotnet\host\fxr\3.1.0\hostfxr.dll' arguments: [aspnetcorev2.dll] Argument[0] = 'C:\Program Files\dotnet\dotnet.exe' [aspnetcorev2.dll] Argument[1] = 'C:\inetpub\wwwroot\<AppName>\.\<AppName>.dll' [aspnetcorev2.dll] Loading hostfxr from location C:\Program Files\dotnet\host\fxr\3.1.0\hostfxr.dll [aspnetcorev2.dll] Canceling standard stream pipe reader [aspnetcorev2.dll] Loading request handler: 'C:\inetpub\wwwroot\<AppName>\aspnetcorev2_inprocess.dll' [aspnetcorev2.dll] Creating handler application [aspnetcorev2_inprocess.dll] Initializing logs for 'C:\inetpub\wwwroot\<AppName>\aspnetcorev2_inprocess.dll'. Process Id: 4100.. 
File Version: 13.1.19320.0. Description: IIS ASP.NET Core Module V2 Request Handler. Commit: 2b7e994b8a304700a09617ffc5052f0d943bbcba. [aspnetcorev2_inprocess.dll] Waiting for initialization [aspnetcorev2_inprocess.dll] Starting in-process worker thread [aspnetcorev2_inprocess.dll] Resolving hostfxr parameters for application: 'dotnet' arguments: '.\<AppName>.dll' path: 'C:\inetpub\wwwroot\<AppName>\' [aspnetcorev2_inprocess.dll] Known dotnet.exe location: 'C:\Program Files\dotnet\dotnet.exe' [aspnetcorev2_inprocess.dll] Process path 'dotnet.exe' is dotnet, treating application as portable [aspnetcorev2_inprocess.dll] Resolving absolute path to hostfxr.dll from 'C:\Program Files\dotnet\dotnet.exe' [aspnetcorev2_inprocess.dll] hostfxr.dll located at 'C:\Program Files\dotnet\host\fxr\3.1.0\hostfxr.dll' [aspnetcorev2_inprocess.dll] Converted argument '.\<AppName>.dll' to 'C:\inetpub\wwwroot\<AppName>\.\<AppName>.dll' [aspnetcorev2_inprocess.dll] Parsed hostfxr options: dotnet location: 'C:\Program Files\dotnet\dotnet.exe' hostfxr path: 'C:\Program Files\dotnet\host\fxr\3.1.0\hostfxr.dll' arguments: [aspnetcorev2_inprocess.dll] Argument[0] = 'C:\Program Files\dotnet\dotnet.exe' [aspnetcorev2_inprocess.dll] Argument[1] = 'C:\inetpub\wwwroot\<AppName>\.\<AppName>.dll' [aspnetcorev2_inprocess.dll] Setting environment variable ASPNETCORE_IIS_HTTPAUTH=anonymous; [aspnetcorev2_inprocess.dll] Setting environment variable ASPNETCORE_IIS_PHYSICAL_PATH=C:\inetpub\wwwroot\<AppName>\ [aspnetcorev2_inprocess.dll] Loading hostfxr from location C:\Program Files\dotnet\host\fxr\3.1.0\hostfxr.dll [aspnetcorev2_inprocess.dll] Initial Dll directory: '', current directory: 'c:\windows\system32\inetsrv' [aspnetcorev2_inprocess.dll] Setting dll directory to c:\windows\system32\inetsrv [aspnetcorev2_inprocess.dll] Setting current directory to C:\inetpub\wwwroot\<AppName>\ [aspnetcorev2_inprocess.dll] Event Log: 'Application '/LM/W3SVC/1/ROOT/<AppName>' with physical root 
'C:\inetpub\wwwroot\<AppName>\' failed to load coreclr. Exception message: Error occured when initializing inprocess application, Return code: 0x80008083' End Event Log Message. [aspnetcorev2_inprocess.dll] InvalidOperationException 'Error occured when initializing inprocess application, Return code: 0x80008083' caught at F:\workspace\_work\1\s\src\Servers\IIS\AspNetCoreModuleV2\InProcessRequestHandler\inprocessapplication.cpp:346 [aspnetcorev2_inprocess.dll] Stopping in-process worker thread [aspnetcorev2_inprocess.dll] Stopping CLR [aspnetcorev2_inprocess.dll] Event Log: 'Application '/LM/W3SVC/1/ROOT/<AppName>' with physical root 'C:\inetpub\wwwroot\<AppName>\' failed to load coreclr. Exception message: CLR worker thread exited prematurely' End Event Log Message. [aspnetcorev2_inprocess.dll] InvalidOperationException 'CLR worker thread exited prematurely' caught at F:\workspace\_work\1\s\src\Servers\IIS\AspNetCoreModuleV2\InProcessRequestHandler\inprocessapplication.cpp:407 [aspnetcorev2_inprocess.dll] Failed HRESULT returned: 0x8007023e at F:\workspace\_work\1\s\src\Servers\IIS\AspNetCoreModuleV2\InProcessRequestHandler\dllmain.cpp:131 [aspnetcorev2_inprocess.dll] Starting app_offline monitoring in application 'C:\inetpub\wwwroot\<AppName>\' [aspnetcorev2_inprocess.dll] Starting file watcher thread Here is my Program.cs: public static void Main(string[] args) { Directory.SetCurrentDirectory(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)); XmlDocument log4netConfig = new XmlDocument(); log4netConfig.Load(File.OpenRead("log4net.config")); ILoggerRepository repo = log4net.LogManager.CreateRepository( Assembly.GetEntryAssembly(), typeof(log4net.Repository.Hierarchy.Hierarchy)); log4net.Config.XmlConfigurator.Configure(repo, log4netConfig["log4net"]); CreateHostBuilder(args).Build().Run(); } public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseIIS(); 
webBuilder.UseStartup<Startup>(); }); Here are the important bits of my Startup.cs: public void ConfigureServices(IServiceCollection services) { services.AddOptions(); services.AddScoped<INotifyService, NotifyService>(); services.AddControllers().AddNewtonsoftJson(); // Authentication and CORS settings services.AddSignalR().AddNewtonsoftJsonProtocol(); } public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { app.ConfigureExceptionHandler(); app.UseRouting(); app.UseStaticFiles(); app.UseCors("CorsPolicy"); app.UseAuthentication(); app.UseAuthorization(); app.UseHttpsRedirection(); app.UseEndpoints(endpoints => { endpoints.MapHub<NotifyHub>("/notify"); endpoints.MapControllerRoute("default", "{controller=Home}/{action=Index}"); }); app.UseDefaultFiles(); } I think the .NET Core Version in the Event Viewer error is key, but I can't work out how to force it to use 3.1. UPDATE - Add .csproj file information The relevant parts of my .csproj file are: <PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> </PropertyGroup> <PropertyGroup> <RuntimeFrameworkVersion>3.1.0</RuntimeFrameworkVersion> <PlatformTarget>AnyCPU</PlatformTarget> <RuntimeIdentifier>win-x64</RuntimeIdentifier> </PropertyGroup> A: The trick that solves 90% of the issues since Visual Studio 2002 ;-): manually delete all bin and obj folders in your solution. A: Can you post your project file (.csproj)? You need to have the correct attributes in all your project files. The nulls below are not required and neither is C# 8. But 3.1 must be specified in the project files. This is the setup I use on my current Core 3.1 projects: <PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> <LangVersion>8.0</LangVersion> <Nullable>enable</Nullable> <NullableContextOptions>enable</NullableContextOptions> </PropertyGroup>
{ "language": "en", "url": "https://stackoverflow.com/questions/59375881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: sed unterminated `s' command in subprocess.call I have to use python's subprocess.call module for this script. I need look in a file for the following string: "absolute/path/to/your/lib" and replace it with the following: /var/www/twiki/lib My script is below, but when I run it, I get the output: sed: -e expression #1, char 59: unterminated `s' command Here's my command using python's subprocess.call module: subprocess.call(['sed', '-e', 's/\"absolute\/path\/to\/your\/lib\/\"\/var\/www\/twiki\/lib\/', '\/var\/www\/twiki\/lib\/LocalLib.cfg']) [UPDATE] Here's the fixed code: subprocess.call(['sed', '-e', 's/\"\/absolute\/path\/to\/your\/lib\"/\/var\/www\/twiki\/lib\//', '/var/www/twiki/bin/LocalLib.cfg']) In the end, I was missing a few slashes and needed to double slash one of them. Couldn't have figured it out without the community. A: subprocess.call(['sed', '-e', 's/\"absolute\/path\/to\/your\/lib\/\"\/var\/www\/twiki\/lib\/', '\/var\/www\/twiki\/lib\/LocalLib.cfg']) looks absolutely creepy. First thing: why did you escape the /s on the file name argument? That is only necessary in the s command. Second thing: If I replace your separator character from / to e.g. #, I can omit all the unnecessary escaping. I did both and then got subprocess.call(['sed', '-e', 's#"absolute/path/to/your/lib/"/var/www/twiki/lib/', '/var/www/twiki/lib/LocalLib.cfg']) and what do I see? There are no # (i.e., no unescaped /) in the command. Try 's#"absolute/path/to/your/lib/"#/var/www/twiki/lib/#' here, or if you insist on using /, do 's/"absolute\/path\/to\/your\/lib\/"/\/var\/www\/twiki\/lib\//' ^ ^ with /s added on the ^ marked places. Edit: I changed the " positions in order to reflect the clearance of my misunderstanding. See the comments below. A: Another thing you can do is use a raw string notation, notice the "r" before the string in example below. 
import subprocess COMMAND = r""" mysql -u root -h localhost -p --execute='use test; select 1, 2, 3;' | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > sample.csv """ proc = subprocess.Popen(COMMAND, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) std_out, std_err = proc.communicate() Though this works, I did this only because there were loads of these commands in the bash scripts that I wanted to wrap with Python. I would prefer using MySQL directly from Python and using the csv module.
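Since the goal here is just one string substitution in one file, it's also worth noting you can skip sed entirely and let Python do the replacement itself, which sidesteps every quoting and delimiter-escaping headache in the question. A minimal sketch (the path literals mirror the question and are placeholders):

```python
def replace_lib_path(text):
    # Same substitution as the sed command, with no delimiter escaping:
    # s#"/absolute/path/to/your/lib"#/var/www/twiki/lib/#
    return text.replace('"/absolute/path/to/your/lib"', '/var/www/twiki/lib/')

def fix_config(path):
    # Read the config, substitute, and write the file back in place.
    with open(path) as f:
        contents = f.read()
    with open(path, "w") as f:
        f.write(replace_lib_path(contents))
```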
{ "language": "en", "url": "https://stackoverflow.com/questions/19981840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Where can I find a good introduction to timezones I have to write some code working with timezones. Is there a good introduction to the subject to get me started? A: Also answered at What every developer should know about time (which includes the referenced screenshots). With daylight savings time ending today, I thought this was a good time for a post. How to handle time is one of those tricky issues where it is all too easy to get it wrong. So let's dive in. (Note: We learned these lessons when implementing the scheduling system in Windward Reports.) First off, using UTC (also known as Greenwich Mean Time) is many times not the correct solution. Yet many programmers think if they store everything that way, then they have it covered. (This mistake is why several years ago when Congress changed the start of DST in the U.S. you had to run a hotfix on Outlook for it to adjust recurring events.) So let's start with the key question – what do we mean by time? When a user says they want something to run at 7:00 am, what do they mean? In most cases they mean 7:00 am where they are located – but not always. In some cases, to accurately compare, say, web server statistics, they want each "day" to end at the same time, unadjusted for DST. At the other end, someone who takes medicine at certain times of the day and has that set in their calendar, will want that to always be on local time so a 3:00pm event is not 3:00am when they have travelled half way around the world. So we have three main use cases here (there are some others, but they can generally be handled by the following): 1. The same absolute (for lack of a better word) time. 2. The time in a given time zone, shifting when DST goes on/off (including double DST which occurs in some regions). 3. The local time. The first is trivial to handle – you set it as UTC. By doing this every day of the year will have 24 hours. (Interesting note, UTC only matches the time in Greenwich during standard time.
When it is DST there, Greenwich and UTC are not identical.) The second requires storing a time and a time zone. However, the time zone is the geographical zone, not the present offset (offset is the difference with UTC). In other words, you store "Mountain Time," not "Mountain Standard Time" or "Mountain Daylight Savings Time." So 7:00 am in "Mountain Time" will be 7:00 am in Colorado regardless of the time of year. The third is similar to the second in that it has a time zone called "Local Time." However, it requires knowing what time zone it is in, in order to determine when it occurs. Outlook now has a means to handle this. Click the Time Zones button: And you can now set the time zone for each event: When I have business trips I use this, including my flight times departing in one zone and arriving in another. Outlook displays everything in the local timezone and adjusts when that changes. The iPhone on the other hand has no idea this is going on and has everything off when I'm on a trip that is in another timezone (and when you live in Colorado, almost every trip is to another timezone). Putting it to use Ok, so how do you handle this? It's actually pretty simple. Every time needs to be stored one of two ways: 1. As UTC. Generally when stored as UTC, you will still set/display it in local time. 2. As a datetime plus a geographical timezone (which can be "local time"). Now the trick is knowing which to use. Here are some general rules. You will need to figure this out for additional use cases, but most do fall into these categories. 1. When something happened – UTC. This is a singular event and regardless of how the user wants it displayed, when it occurred is unchangeable. 2. When the user selects a timezone of UTC – UTC. 3. An event in the future where the user wants it to occur in a timezone – datetime plus a timezone.
Now it might be safe to use UTC if it will occur in the next several months (changing timezones generally have that much warning - although sometimes it's just 8 days), but at some point out you need to do this, so you should do it for all cases. In this case you display what you stored. 4. For a scheduled event, when it will next happen – UTC. This is a performance requirement where you want to be able to get all "next events" where their runtime is before now. Much faster to search against dates than recalculate each one. However, this does need to recalculate all scheduled events regularly in case the rules have changed for an event that runs every quarter. 1. For events that are on "local time" the recalculation should occur anytime the user's timezone changes. And if an event is skipped in the change, it needs to occur immediately. .NET DateTime Diving into .NET, this means we need to be able to get two things which the standard library does not provide: 1. Create a DateTime in any timezone (DateTime only supports your local timezone and UTC). 2. For a given Date, Time, and geographical timezone, get the UTC time. This needs to adjust based on the DST rules for that zone on that date. Fortunately there's a solution to this. We have open sourced our extensions to the DateTime timezone functionality. You can download WindwardTimeZone here. This uses registry settings in Windows to perform all calculations for each zone and therefore should remain up to date. Browser pain The one thing we have not figured out is how to know a user's location if they are using a browser to hit our web application. For most countries the locale can be used to determine the timezone – but not for the U.S. (6 zones), Canada, or Russia (11 zones). So you have to ask a user to set their timezone – and to change it when they travel. If anyone knows of a solution to this, please let me know.
Update: I received the following from Justin Bonnar (thank you): document.getElementById('timezone_offset').value = new Date().getTimezoneOffset(); Using that plus the suggestion of the geo location for the IP address mentioned below will get you close. But it's not 100%. The time offset does not tell you, for example, whether you are in Arizona (it & Hawaii do not observe daylight savings time) or the Pacific/Mountain (depending on DST) time zone. You also depend on JavaScript being enabled, although that is true for 99% of the users out there today. The geo location based on IP address is also iffy. I was at a hotel in D.C. when I got a report of our demo download form having a problem. We pre-populate the form with city, state, & country based on the geo of the IP address. It said I was in Cleveland, OH. So again, usually right but not always. My take is we can use the offset, and for cases where there are multiple timezones with that offset (on that given day), follow up with the geo of the IP address. But I sure wish the powers that be would add a tz= to the header info sent with an HTTP request.
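The article's "store a geographical zone, not an offset" rule is easy to demonstrate with Python's zoneinfo module (Python 3.9+; the zone name and dates below are arbitrary examples). The same 7:00 am wall-clock time in "Mountain Time" maps to a different UTC offset in winter than in summer, which is exactly what a fixed offset cannot represent:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A geographical zone carries its DST rules with it.
denver = ZoneInfo("America/Denver")

# The same wall-clock time, 7:00 am, on two dates:
winter = datetime(2023, 1, 15, 7, 0, tzinfo=denver)  # MST, UTC-7
summer = datetime(2023, 7, 15, 7, 0, tzinfo=denver)  # MDT, UTC-6

print(winter.utcoffset())  # -1 day, 17:00:00  (i.e. UTC-07:00)
print(summer.utcoffset())  # -1 day, 18:00:00  (i.e. UTC-06:00)
```

Storing "2023-01-15 07:00 America/Denver" survives DST transitions and rule changes; storing "07:00 at UTC-7" does not.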
{ "language": "en", "url": "https://stackoverflow.com/questions/4362192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: problems with bcrypt-ruby I'm new to Rails and am trying to write the part of an app that handles all the nasty account stuff that I can use for all my future apps (I wasn't able to find a boilerplate that did that for me). Anyways. I've got bcrypt installed and I'm trying to register a new user. I created a route in routes.rb for registration. I also added the installed bcrypt-ruby gem to my Gemfile. I copied the source from bcrypt-ruby.rubyforge.org and my code looks like this. class RegisterController < ApplicationController def create @user = User.new(params[:user]) @user.password = params[:password] @user.save! end end With my user model looking like this. require 'bcrypt' class User < ActiveRecord::Base # users.password_hash in the database is a :string include BCrypt def password @password ||= Password.new(password_hash) end def password=(new_password) @password = Password.create(new_password) self.password_hash = @password end end I've done a lot of different things to determine what the problem is. The error I get when I submit something to the register controller is NoMethodError (undefined method `stringify_keys' for "a":String): app/controllers/register_controller.rb:3:in `create' So here are my questions. What is def password=(new_password)? This syntax is foreign to me. What is ||=? This syntax is also foreign to me. Why am I getting this error? Is there a boilerplate for Rails I can use to save me this trouble so I can start coding? Cheers and Thanks! edit: Added view code <%= form_tag("/register#create", method: "post") do %> <p><%= text_field_tag(:user) %></p> <p><%= text_field_tag(:password) %></p> <p> <%= submit_tag("Register") %></p> <% end %>
That means anytime some other code calls user.password =, it's really calling this method. *||= This is essentially saying "return this user instance's password, but create one first if it doesn't exist". If user.password is there, it just returns that; otherwise, it creates a new Password and returns that. *I'm not sure why you're getting this error, but my guess is that params[:user] isn't what it's supposed to be. (Is it a string, instead of a hash?) The error is complaining that it can't process your params, so have a look at what params[:user] is. It should look something like this: { password: 'password' } *Absolutely, there's no reason for you to implement this. Devise is a widely used authentication gem that will do all of this (and much, much more) for you. You could also use ActiveRecord's has_secure_password feature, which also uses BCrypt, but that also requires some setup work and is much less flexible than Devise.
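To make the two syntaxes from questions 1 and 2 concrete without pulling in bcrypt at all, here is a stripped-down toy version of the same pattern. The reverse call below stands in for BCrypt's Password.create purely for illustration; it is not what bcrypt does:

```ruby
class User
  attr_accessor :password_hash

  # `def password=` defines a writer method: `user.password = "x"` calls it.
  def password=(new_password)
    @password = new_password.reverse # stand-in for BCrypt's Password.create
    self.password_hash = @password
  end

  # `||=` memoizes: assign @password only if it is currently nil/false,
  # then return it. Here it lazily "rebuilds" from the stored hash.
  def password
    @password ||= password_hash
  end
end

u = User.new
u.password = "secret"
puts u.password_hash   # "terces" -- set by the writer method

v = User.new
v.password_hash = "stored"
puts v.password        # "stored" -- memoized from password_hash
```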
{ "language": "en", "url": "https://stackoverflow.com/questions/20207535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Python not recognizing my edited BOTO file I am following the instructions that are in this link in order to create a Google Cloud storage bucket through Python. https://cloud.google.com/storage/docs/xml-api/gspythonlibrary I have followed all of the instructions and created a .boto file with all of my credentials. I opened the .boto file and I can see that my gs_access_key_id and gs_secret_access_key are there and the file is saved. import webapp2 import sys sys.path.append("/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages") import boto import gcs_oauth2_boto_plugin import os import shutil import StringIO import tempfile import time GOOGLE_STORAGE = 'gs' LOCAL_FILE = 'file' CLIENT_ID = '' CLIENT_SECRET = '' gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET) class MainHandler(webapp2.RequestHandler): def get(self): self.response.write('Hello world! This should work! I have been working on this shit all day!') now = time.time() CATS_BUCKETS = 'cats-%d' % now DOGS_BUCKETS = 'dogs-%d' % now project_id = 'name-of-my-bucket' for name in (CATS_BUCKETS, DOGS_BUCKETS): uri = boto.storage_uri(name, GOOGLE_STORAGE) try: header_values={'x-google-project-id': project_id} uri.create_bucket(headers=header_values) print 'Successfully created bucket "%s"' %name except boto.exception.StorageCreateError, e: print 'Failed to create bucket:', e app = webapp2.WSGIApplication([ ('/', MainHandler) ], debug=True) However, at the line where it tries to create_bucket, I get an error. I debugged it and it comes back saying that the gs_access_key_id was never found, however it is clearly in my .boto file. This is the error that I get when I try to run this program in localhost.
File "/Users/LejendVega/Desktop/create-buckets-adrian/main.py", line 48, in get uri.create_bucket() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/storage_uri.py", line 558, in create_bucket conn = self.connect() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/storage_uri.py", line 140, in connect **connection_args) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/gs/connection.py", line 47, in __init__ suppress_consec_slashes=suppress_consec_slashes) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/s3/connection.py", line 191, in __init__ validate_certs=validate_certs, profile_name=profile_name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 569, in __init__ host, config, self.provider, self._required_auth_capability()) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/auth.py", line 989, in get_auth_handler 'Check your credentials' % (len(names), str(names))) NoAuthHandlerFound: No handler was ready to authenticate. 3 handlers were checked. ['OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials All I want to know is why boto is not recognizing the credentials that are in my .boto file. A: Okay, so the boto module goes through your boto configuration file to gather your credentials in order to create and edit data in Google Cloud. If the boto module cannot find the configuration file, then you will get the errors above. What I did, after 3 days straight of trying to figure it out, was literally just put my credentials in the code.
from boto import connect_gs name = 'the-name-of-your-bucket' gs_conn = connect_gs(gs_access_key_id='YOUR_ACCESS_KEY_ID', gs_secret_access_key='YOUR_ACCESS_SECRET_KEY') """This is the line that creates the actual bucket""" gs_conn.create_bucket(name) It is that simple. Now you can go through and do anything that you want to without the boto configuration file.
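As a footnote to the accepted workaround: the usual reason boto "loses" a perfectly good config file is that it is not looking where you think it is. Roughly speaking (a simplified sketch; boto's own config module is the authoritative source), boto honors a BOTO_CONFIG environment variable and otherwise falls back to a system path and then a per-user path, so a quick sanity check like this can tell you which file, if any, it would pick up:

```python
import os

def boto_config_candidates():
    """Return the config paths boto would consider, in priority order.

    Simplified: if BOTO_CONFIG is set it wins outright; otherwise boto
    falls back to /etc/boto.cfg and then ~/.boto.
    """
    if "BOTO_CONFIG" in os.environ:
        return [os.environ["BOTO_CONFIG"]]
    return ["/etc/boto.cfg", os.path.expanduser("~/.boto")]

def first_existing_config():
    # The file boto would actually read, or None if nothing is found.
    for path in boto_config_candidates():
        if os.path.isfile(path):
            return path
    return None
```

If first_existing_config() comes back None, that matches the NoAuthHandlerFound symptom exactly: boto never saw the credentials at all.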
{ "language": "en", "url": "https://stackoverflow.com/questions/36907364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How Do I Force Excel To Open A Log File As Text? I am trying to load a log file into Excel as it has timestamps in ms and I need to use Excel to convert them to something readable. However, it also has an xml tag near the top <?xml version='1.0' encoding='UTF-8'?> so Excel thinks it is an xml file, tries to open it using XML Tables then fails because it isn't valid xml. I want to open it as a delimited text file. However, even with a macro like this it still tries to open it as XML Workbooks.OpenText Filename:=fullpath, _ StartRow:=1, _ DataType:=xlDelimited, _ TextQualifier:=xlDoubleQuote, _ ConsecutiveDelimiter:=False, _ Tab:=True, _ Semicolon:=False, _ Comma:=False, _ Space:=False, _ Other:=True, _ OtherChar:="|" How do I force Excel to ignore the XML tags and open it as a delimited text file? A: Logic: * *Read the file *Replace "<?xml version='1.0' encoding='UTF-8'?>" with "" *Write the data to a temp file. If you are ok with replacing the original file then you can do that as well. Amend the code accordingly. *Open the text file in Excel Is this what you are trying? (UNTESTED) Code: Option Explicit Sub Sample() Dim MyData As String Dim FlName As String, tmpFlName As String '~~> I am hardcoding the paths. 
Please change accordingly FlName = "C:\Sample.xml" tmpFlName = "C:\Sample.txt" '~~> Kill tempfile name if it exists On Error Resume Next Kill tmpFlName On Error GoTo 0 '~~> Open the xml file and read the data Open FlName For Binary As #1 MyData = Space$(LOF(1)) Get #1, , MyData '~~> Replace the relevant tag MyData = Replace(MyData, "<?xml version='1.0' encoding='UTF-8'?>", "") Close #1 '~~> Write to a temp text file Open tmpFlName For Output As #1 Print #1, MyData Close #1 Workbooks.OpenText Filename:=tmpFlName, _ StartRow:=1, _ DataType:=xlDelimited, _ TextQualifier:=xlDoubleQuote, _ ConsecutiveDelimiter:=False, _ Tab:=True, _ Semicolon:=False, _ Comma:=False, _ Space:=False, _ Other:=True, _ OtherChar:="|" End Sub Alternative Way: After '~~> Open the xml file and read the data Open FlName For Binary As #1 MyData = Space$(LOF(1)) Get #1, , MyData '~~> Replace the relevant tag MyData = Replace(MyData, "<?xml version='1.0' encoding='UTF-8'?>", "") Close #1 use strData() = Split(MyData, vbCrLf) and then write this array to Excel and use .TextToColumns
{ "language": "en", "url": "https://stackoverflow.com/questions/60658264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Java EE: Eclipselink transaction missing I have a Maven-based Java EE project that should run on GlassFish v3. There is a JSF ManagedBean that injects an EJB service. The ManagedBean calls one of the injected EJB's methods on button click, where some JPA operation happens (creation of a new entity object, persisting, flushing). When EntityManager.flush() is called, it throws an exception:

Caused by: javax.persistence.TransactionRequiredException:
Exception Description: No transaction is currently active

The data source is a JTA data source with JTA transaction type (defined in persistence.xml). I've already found a solution, but it is not satisfying due to deployment issues. If I put the next line of code into persistence.xml then it runs without any problem:

<property name="eclipselink.target-server" value="SunAS9"/>

Because of this, I assume that it is a deployment problem where EclipseLink does not recognize the JTA manager. Any suggestions would be appreciated, thank you!

UPDATE:
@MRalwasser: here is the full stack trace. (Sorry, I had to remove the real package names; it is masked.package.name now.) stack trace on pastebin
@Chris: sorry, I forgot to mention that the GeneriDao class creates the entity manager via the factory method, NOT by dependency injection.

A: Only EJBs work in CMT by default. In managed beans or CDI beans you have to implement your own mechanism for handling transactions and run your service from within it.

public class ManagedBean {

    @Inject yourEjbService service;
    @Resource UserTransaction utx;

    public void save() {
        try {
            utx.begin();
            service.doAction();
            utx.commit();
        } catch (Exception e) {
            try {
                utx.rollback();
            } catch (Exception ex) {
                ...
            }
        }
    }
    ...
}

You also don't have to call EntityManager.flush() in either your EJB or your managed bean if you are injecting the EntityManager using @PersistenceContext. It will detach entities automatically after each method in your EJB ends.
A: The persistence control mechanisms of Java Enterprise have several options and specific design choices. In almost any Java EE implementation I worked with, container managed transactions (CMT) were used. In an occasional situation, bean managed transactions (BMT) can be the choice. Bean managed transactions can be preferred when you need to be sure exactly when the 'commit' (or 'rollback') takes place during program execution. This can be required in a high-performing, time-critical application area. For an example of BMT, see e.g. the section 'Bean Managed Transactions' in examples, Bean Managed Transactions

Container managed transactions means that the software in the application server ('the container') calls a 'begin' transaction before Java code is executed that makes use of a persistence context. When the code execution is finished (when the call tree has returned, e.g. as a result of a web request), the application server calls 'commit'. Consequently, the modified entities are actually updated in the application database. In Java EE, the statements:

@TransactionManagement(TransactionManagementType.CONTAINER)

and

@TransactionManagement(TransactionManagementType.BEAN)

indicate container managed transactions, or bean managed transactions, respectively. Java EE defines several types of beans: session-driven bean, message-driven bean, local bean. These beans are generally @Stateless, and can all work with container managed transactions. Detailed control of container managed transaction handling can in EE be specified by adding the annotations:

@TransactionAttribute(REQUIRES_NEW)
public void myTopLevelMethodWhichStartsNewInnerTransaction() ....

@TransactionAttribute(REQUIRED)
public void myTopLevelMethodContinueExistingTransactionIfAny() ....

@TransactionAttribute(NEVER)
public void myNoCurrentTransactionAllowedWhenMethodCalled() ....
Flush

The necessity of calling 'flush' to ensure that the database cache is written to disk depends on the type of database used. E.g. for Postgres calling flush makes a difference, whereas for the in-memory database 'Derby', flush has no effect, and can in that latter situation cause an error similar to the one reported in this question. The effect of flush is thus database dependent.
{ "language": "en", "url": "https://stackoverflow.com/questions/15341128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Optional query strings in ASP.NET Web API? I'm trying to handle these requests with the same controller action:

* localhost:52000/api/messages
* localhost:52000/api/messages?page=1
* localhost:52000/api/messages?date=2019/29/11&page=1

I made a controller action as follows:

[Route("api/messages")]
[HttpGet]
public HttpResponseMessage getMessage(DateTime? date, int? page)

But this only works when the query string parameter is present with an empty value, not when the query string is missing entirely.

* Working: localhost:52000/api/messages?date=&page=
* Not working (it doesn't find the action): localhost:52000/api/messages

How can I make every api/messages request be handled by the getMessage() action? Thanks!

A: Try this:

[Route("api/messages")]
[HttpGet]
public HttpResponseMessage getMessage(DateTime? date = null, int? page = null)
{ "language": "en", "url": "https://stackoverflow.com/questions/58970764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ASP.NET Identity - Forcing a re-login with security stamp So from What is ASP.NET Identity's IUserSecurityStampStore<TUser> interface? we learn that ASP.NET Identity has a security stamp feature that is used to invalidate a users login cookie, and force them to re-login. In my MVC app, it is possible for admins to archive users. When arched, they should immediately be logged out and forced to log in again (which would then reject them since they're archived). How can I do this? I understand that the security stamp is the key. The default setup looks like this: app.UseCookieAuthentication(new CookieAuthenticationOptions { AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie, LoginPath = new PathString("/Account/Login"), Provider = new CookieAuthenticationProvider { // Enables the application to validate the security stamp when the user logs in. // This is a security feature which is used when you change a password or add an external login to your account. OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<ApplicationUserManager, ApplicationUser>( validateInterval: TimeSpan.FromMinutes(30), regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager)) } }); Through experimenting, if I set the validateInterval to something like 1 minute, and then manaully hack a users security stamp in the database, then they are forced to re-login but only after that time period has elapsed. Is there a way to make this instant, or is it just a matter of setting the interval to a low time period and waiting (or implementing my own OnValidateIdentity that checks on every request) Thanks A: You stated your options correctly, either low interval/waiting or hooking your own custom OnValidateIdentity. Here's a similar question: Propagate role changes immediately
{ "language": "en", "url": "https://stackoverflow.com/questions/24570872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: MVC Pattern JButton ActionListener not Responding So I'm trying to create a simple test program where the user can enter something into a JTextField, click the "add" JButton, and a JTextArea will add the users string to the the JTextArea (continuously appending with new line). I added the actionListener for the button and have a stateChanged and an update method, but nothing happens when I click the add button. No errors either. Could someone please point me in the right direction? Here's my code: MVCTester (main) public class MVCTester { public static void main(String[] args) { // TODO Auto-generated method stub MVCController myMVC = new MVCController(); MVCViews myViews = new MVCViews(); myMVC.attach(myViews); } } MVCController import java.util.ArrayList; import javax.swing.event.ChangeEvent; import javax.swing.event.ChangeListener; public class MVCController { MVCModel model; ArrayList<ChangeListener> listeners; public MVCController(){ model = new MVCModel(); listeners = new ArrayList<ChangeListener>(); } public void update(String input){ model.setInputs(input); for (ChangeListener l : listeners) { l.stateChanged(new ChangeEvent(this)); } } public void attach(ChangeListener c) { listeners.add(c); } } MVCModel import java.util.ArrayList; public class MVCModel { private ArrayList<String> inputs; MVCModel(){ inputs = new ArrayList<String>(); } public ArrayList<String> getInputs(){ return inputs; } public void setInputs(String input){ inputs.add(input); } } MVCViews import java.awt.BorderLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.util.ArrayList; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JTextArea; import javax.swing.JTextField; import javax.swing.event.ChangeEvent; import javax.swing.event.ChangeListener; public class MVCViews implements ChangeListener { private JTextField input; private JTextArea echo; private ArrayList<String> toPrint = new 
ArrayList<String>(); MVCController controller; MVCViews(){ controller = new MVCController(); JPanel myPanel = new JPanel(); JButton addButton = new JButton("add"); echo = new JTextArea(10,20); echo.append("Hello there! \n"); echo.append("Type something below!\n"); myPanel.setLayout(new BorderLayout()); myPanel.add(addButton, BorderLayout.NORTH); input = new JTextField(); final JFrame frame = new JFrame(); frame.add(myPanel, BorderLayout.NORTH); frame.add(echo, BorderLayout.CENTER); frame.add(input, BorderLayout.SOUTH); addButton.addActionListener(new ActionListener(){ @Override public void actionPerformed(ActionEvent e) { // TODO Auto-generated method stub controller.update(input.getText()); } }); frame.pack(); frame.setVisible(true); } @Override public void stateChanged(ChangeEvent e) { // TODO Auto-generated method stub toPrint = controller.model.getInputs(); for(String s: toPrint){ echo.append(s + "\n"); } } } This is my first time trying to follow MVC format, so there might be issues with the model itself as well. Feel free to point them out. Thank you for your help! A: The controller within the GUI is not the same controller that is created in main. Note how many times you call new MVCController() in your code above -- it's twice. Each time you do this, you're creating a new and distinct controller -- not good. Use only one. You've got to pass the one controller into the view. You can figure out how to do this. (hint, a setter or constructor parameter would work). 
hint 2: this could work: MVCViews myViews = new MVCViews(myMVC); one solution: import java.awt.BorderLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.util.ArrayList; import javax.swing.*; import javax.swing.event.ChangeEvent; import javax.swing.event.ChangeListener; public class MVCTester { public static void main(String[] args) { MVCController myMVC = new MVCController(); MVCViews myViews = new MVCViews(myMVC); myMVC.attach(myViews); // myViews.setController(myMVC); // or this could do it } } class MVCController { MVCModel model; ArrayList<ChangeListener> listeners; public MVCController() { model = new MVCModel(); listeners = new ArrayList<ChangeListener>(); } public void update(String input) { model.setInputs(input); for (ChangeListener l : listeners) { l.stateChanged(new ChangeEvent(this)); } } public void attach(ChangeListener c) { listeners.add(c); } } class MVCModel { private ArrayList<String> inputs; MVCModel() { inputs = new ArrayList<String>(); } public ArrayList<String> getInputs() { return inputs; } public void setInputs(String input) { inputs.add(input); } } class MVCViews implements ChangeListener { private JTextField input; private JTextArea echo; private ArrayList<String> toPrint = new ArrayList<String>(); MVCController controller; MVCViews(final MVCController controller) { // !! controller = new MVCController(); this.controller = controller; JPanel myPanel = new JPanel(); JButton addButton = new JButton("add"); echo = new JTextArea(10, 20); echo.append("Hello there! 
\n"); echo.append("Type something below!\n"); myPanel.setLayout(new BorderLayout()); myPanel.add(addButton, BorderLayout.NORTH); input = new JTextField(); final JFrame frame = new JFrame(); frame.add(myPanel, BorderLayout.NORTH); frame.add(echo, BorderLayout.CENTER); frame.add(input, BorderLayout.SOUTH); addButton.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { if (controller != null) { controller.update(input.getText()); } } }); frame.pack(); frame.setVisible(true); } public void setController(MVCController controller) { this.controller = controller; } @Override public void stateChanged(ChangeEvent e) { if (controller != null) { toPrint = controller.model.getInputs(); for (String s : toPrint) { echo.append(s + "\n"); } } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/29506025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I load Excel files selectively? I have an SSIS package that needs to lookup two different types of excel files, type A and type B and load the data within to two different staging tables, tableA and tableB. The formats of these excel sheets are different and they match their respective tables. I have thought of putting typeA.xls and typeB.xls in two different folders for simplicity(folder paths to be configureable). The required excel files will then be put here through some other application or manually. What I want is to be able to have my dtsx package to scan the folder and pick the latest unprocessed file and load it ignoring others and then postfix the file name with '-loaded' (typeAxxxxxx-loaded.xls). The "-loaded" in the filename is how I plan to differentiate between the already loaded files and the ones yet to be loaded. I need advice on: a) How to check that configured folder for the latest file ie. without the '-loaded' in the filename and load it? ..and then after loading it, rename the same file in that configured folder with the '-loaded' postfixed. b) Is this the best approach to doing this or is there a better way? Thanks. A: You can do it this way, but it might require several complex string expressions. E.g. create a ForEach loop over .xls files, inside the loop add an empty script task, then a data flow to load this file. Connect them with a precedence constraint and make it conditional: precedence constraint expression will the check if file name does not end with -loaded.xls. You may either do it in script task or purely using SSIS expression on precedence constraint. Finally, add File System Task to rename the file. You may need to build new file name with another expression. It might be easier to create two folders: Incoming for new unprocessed files, and Loaded for the files you've processed, and just move the .xls to this folder after processing without renaming. 
This will avoid the first conditional expression (and dummy script task), and simplify the configuration of File System task. A: You can get the SQL File watcher Task and add it to your SSIS. I think this is a cleaner way to do what you want. SQL File Watcher
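For the first approach, the conditional precedence constraint can be a single SSIS expression. A possible form, assuming the ForEach loop maps the current file name into a variable named User::FileName (the variable name is illustrative; RIGHT and the ! / == operators are standard SSIS expression syntax):

```
!(RIGHT(@[User::FileName], 11) == "-loaded.xls")
```

The constraint then only lets the data flow run for files whose names do not already end in "-loaded.xls", i.e. the unprocessed ones.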
{ "language": "en", "url": "https://stackoverflow.com/questions/286851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Make web notification also working in smartphone browser? I have this code which allows me to send web notifications to the current page via a azure http static site. But i want the notification also to work with a mobile browser on the smartphone. It seems not to work there. Is there any solution or alternative way to simulate notifications on a smartphone browser. I want to make a software prototype for receiving notifications on the smartphnoe. <!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3pro.css"> <script> function ask_for_permission() { navigator.serviceWorker.register('sw.js'); Notification.requestPermission(function (result) { if (result === 'granted') { navigator.serviceWorker.ready.then(function (registration) { registration.showNotification('Notification with ServiceWorker'); }); } }); } function send() { Notification.requestPermission(function (result) { if (result === 'granted') { navigator.serviceWorker.ready.then(function (registration) { var ntitle = document.getElementById('title').value; var nbody = document.getElementById('body').value; registration.showNotification(ntitle, { body: nbody }); }); } }); } </script> </head> <body onload="init()"> <style> body{ background-image: url("background2.jpg"); } </style> <div class="w3-container w3-card"> <h1>Phone App</h1> <button id="ask_permission" onclick="ask_for_permission()">Aktivieren</button> <div> Titel: <input id="title" value="Neue Benachrichtigung" /><br /> Inhalt: <input id="body" value="Ich bin eine Webnotification!" /><br /> <button onclick="send()">Senden</button> </div> </div> </body> </html>
{ "language": "en", "url": "https://stackoverflow.com/questions/56255921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to unload a table with a field with ROWID datatype? I have to unload a table which contains a field with the ROWID datatype. I could not unload the table through QMF as it does not support this data type. Is there any other way to unload the table?

A: In DB2, ROWID serves more of an internal function to the RDBMS than what is allowed by end users. This is intentional. See link: http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db2.doc.sqlref/xf7c63.htm

However, if you do not need the ROWID properties (use the data for read-only purposes) then it may be possible to mimic unloading / loading of this table. You can use the EXPORT / IMPORT commands to do the unloading / loading functions, which should support ROWID, but if they do not, then you can achieve the same functionality by converting the unsupported datatype ROWID into a supported datatype. The only thing is that once you do this, you will not be able to convert the data back into this datatype. In other words, all the properties of ROWID will now be a regular INTEGER field.

select INTEGER(ROWID) as int_rowid
     , col2
     , coln
from table
order by 1

Then you can execute the EXPORT / IMPORT command to unload / load the data.

Warning: Once you get rid of the ROWID properties, you cannot gain them back. In other words, INSERTS to this table will NOT automatically increment the ROWID field.
{ "language": "en", "url": "https://stackoverflow.com/questions/1001269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Selecting data based on status - sometimes a certain status must be accepted let's say I have a table with the following structure:

ID | Date | Name | Status | Attribute A | Attribute B

Now I want to select all rows from that table - however, if there are two items from the same date and with the same name - where one of them has the status CANCLED, then I only want to display the one that does not have status=CANCLED. If however - there is only one item on the given date, with that name - then I want to select it no matter what the status may be. At the moment I'm blind to a solution - the only thing I can think of is mixing up a stored procedure with a temp table and a lot of if/else statements. However - I'm pretty sure there must be ways to solve this problem - and probably in a rather simple query. Thank you!

EDIT: Example data

ID          Date       Name        Status     Attribute A Attribute B
----------- ---------- ----------- ---------- ----------- -----------
1           2013-10-17 A           Complete   AA          BB
2           2013-10-17 A           Cancled    CC          DD
3           2013-10-18 A           Cancled    DD          EE
4           2013-10-18 B           Complete   AA          BB

The script to create the table (as requested by some):

CREATE TABLE [dbo].[StackoverflowTest](
    [ID] [int] NOT NULL,
    [Date] [date] NULL,
    [Name] [varchar](50) NULL,
    [Status] [varchar](10) NULL,
    [Attribute A] [nchar](10) NULL,
    [Attribute B] [nchar](10) NULL,
)

Based on the data above - the lines I want returned are the ones with the following IDs: 1, 3, 4. Hope this makes my intentions a bit more clear.

A: you can use common table expression with row_number() function for that

with cte as (
    select *,
        row_number() over(
            partition by Date, Name
            order by case when status = 'Cancled' then 1 else 0 end
        ) as rn
    from Table1
)
select ID, Date, Name, Status, [Attribute A], [Attribute B]
from cte
where rn = 1

But, if there's more than one record with the same Date, Name and status <> 'CANCELED', the query will return only one arbitrary row.
=> sql fiddle demo A: This assumes that other status values are not in a lower alpha order than 'Canceled'. select max([date]) as [date], [name], max([Status]), [Attribute A], [Attribute B] From [YourTableName] group by [Name],[Attribute A], [Attribute B]
{ "language": "en", "url": "https://stackoverflow.com/questions/19428105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to change the Logging settings in Web.config on Windows Azure for all Web Roles? ASP.NET application on Windows Azure. The application is scalable and runs on 1..N Web role instances. We have log4net write to the Windows Event log and have Windows Azure Diagnostics consolidate to Azure storage. At the moment the log4net configuration is stored in Web.config. For application logging we have the following requirements: * *Ability to specify a list of one or more types of log entries to write to the log *Ability to specify the level at which log entries are written to the log (per log entry) The first requirement is met by the loggers naming convention. Loggers define a hierarchy and give the programmer run-time control on which statements are printed or not. ILog log = LogManager.GetLogger("Davanti.WMS.Core.Logic.Inventory"); The second requirement can be achieved by using the logging levels DEBUG, INFO, WARN, ERROR and FATAL. log.Debug("Process has completed"); Current situation We have only one log4net configuration in the Web.config file that redirects to the Windows Event log. And control what to log (see requirement 1) and the depth (level) to log (see requirement 2) in application code base on settings stored in the central database. This approach will have a negative effect on application performance because the application itself will have to check if certain messages needs to be logged or not (synchronously) instead on the logging framework (asynchronously). Required situation We want to control what to log and logging level from the log4net settings. The problem is that we have to apply the log4net settings on all Windows Azure Web role instances. What is the best approach for this? Also we would like to have a more user friendly way to enable logging (for example by a consultant). What are the possibilities? Like for example using the Enterprise Library Configuration editor. A visual representation of the confoguration settings. 
http://img651.imageshack.us/img651/930/logging.jpg A: The default configuration provider will look at the app.config or web.config in your case. However you can use the XmlConfigurator class to load configurations from a Stream http://logging.apache.org/log4net/release/sdk/log4net.Config.XmlConfigurator.Configure_overload_7.html In your role configuration you can specify a blob location then use a blob client object from the Azure storage SDK and load the xml from a single blob location. Log4Net configuration: http://logging.apache.org/log4net/release/manual/configuration.html This is similar to the Azure diagnostics configuration which uses an xml blob. The caveat to this is that you need to do some more implementation like regular queuing for updates to the file if you want to do live changes to your logging.
{ "language": "en", "url": "https://stackoverflow.com/questions/13621787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Doing an upsert in oracle SQL based on 2 fields I've the following SQL statement for an oracle DB (actually a list of statements like that). INSERT INTO SCHEMA_ABC.TBL_DATA (username, data_id, version) SELECT username, 12345, '1.0' FROM SCHEMA_ABC.TBL_USER; In the DB some combination of the username and ID are already imported. I would like to run these statements with upsert so that all not missing entries are inserted (username, id, version) all others get updated (version). Is there a way without using PLSQL (begin - exception ...) A: There are multiple ways: Using NOT EXISTS INSERT INTO SCHEMA_ABC.TBL_DATA (username, data_id, version) SELECT username, 12345, '1.0' FROM SCHEMA_ABC.TBL_USER A WHERE NOT EXISTS (SELECT 1 FROM SCHEMA_ABC.TBL_DATA NE WHERE NE.username = A.username And ne.data_id = 12345); Using merge: -- updated MERGE INTO SCHEMA_ABC.TBL_DATA T USING (SELECT username, 12345 data_id, '1.0' version FROM SCHEMA_ABC.TBL_USER) D ON (T.USERNAME = D.USERNAME AND T.DATA_ID = D.DATA_ID) WHEN MATCHED THEN UPDATE SET VERSION = VERSION + 1 WHEN NOT MATCHED THEN INSERT(username, data_id, version) Values (D.USERNAME, D.DATA_ID, D.VERSION); Cheers!!
{ "language": "en", "url": "https://stackoverflow.com/questions/57662938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: test if display = none This does not work, should it? Or can you stop the error if another line could do the same: function doTheHighlightning(searchTerms) { // loop through input array of search terms myArray = searchTerms.split(" "); for(i=0;i<myArray.length;i++) { // works. this line works if not out commented. Will highlight all words, also in the hidden elements //$('tbody').highlight(myArray[i]); // not working when trying to skip elements with display none... $('tbody').css('display') != 'none').highlight(myArray[i]); } // set background to yellow for highlighted words $(".highlight").css({ backgroundColor: "#FFFF88" }); } I need to filter rows in a table and color some word. The data has become way to much for the coloring if many words are chosen. So I will try to limit the coloring by only going through the none hidden elements. A: Try this instead to only select the visible elements under the tbody: $('tbody :visible').highlight(myArray[i]); A: If you want to get the visible tbody elements, you could do this: $('tbody:visible').highlight(myArray[i]); It looks similar to the answer that Agent_9191 gave, but this one removes the space from the selector, which makes it selects the visible tbody elements instead of the visible descendants. EDIT: If you specifically wanted to use a test on the display CSS property of the tbody elements, you could do this: $('tbody').filter(function() { return $(this).css('display') != 'none'; }).highlight(myArray[i]); A: Use like this: if( $('#foo').is(':visible') ) { // it's visible, do something } else { // it's not visible so do something else } Hope it helps! 
A: $('tbody').find('tr:visible').highlight(myArray[i]);

A: As @Agent_9191 and @partick mentioned you should use

$('tbody :visible').highlight(myArray[i]); // works for all children of tbody that are visible

or

$('tbody:visible').highlight(myArray[i]); // works for all visible tbodys

Additionally, since you seem to be applying a class to the highlighted words, instead of using jquery to alter the background for all matched highlights, just create a css rule with the background color you need and it gets applied directly once you assign the class.

.highlight { background-color: #FFFF88; }

A: You can use the following code to test if display is equivalent to none:

if ($(element).css('display') === 'none') {
    // do the stuff
}
{ "language": "en", "url": "https://stackoverflow.com/questions/2975073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: boto3 pricing returns multiple values for same type of instances I am trying the following code to get the prices of instances in my region: import boto3 import json my_session = boto3.session.Session() region = boto3.session.Session().region_name print "region : ",region pricing_client = boto3.client("pricing") pricingValues = pricing_client.get_products(ServiceCode='AmazonEC2',Filters=[{'Type': 'TERM_MATCH','Field': 'instanceType','Value': 'm4.large'},{'Type': 'TERM_MATCH','Field': 'location','Value': 'Asia Pacific (Mumbai)'},{'Type': 'TERM_MATCH','Field': 'operatingSystem','Value': 'Linux'},{'Type': 'TERM_MATCH','Field': 'preInstalledSw','Value': 'NA'},{'Type': 'TERM_MATCH','Field': 'tenancy','Value': 'Dedicated'}]) for priceVal in pricingValues["PriceList"]: priceValInJson=json.loads(priceVal) if("OnDemand" in priceValInJson["terms"] and len(priceValInJson["terms"]["OnDemand"]) > 0): for onDemandValues in priceValInJson["terms"]["OnDemand"].keys(): for priceDimensionValues in priceValInJson["terms"]["OnDemand"][onDemandValues]["priceDimensions"]: print "USDValue : ",priceValInJson["terms"]["OnDemand"][onDemandValues]["priceDimensions"][priceDimensionValues]["pricePerUnit"]," : ", priceValInJson["product"]["attributes"]["capacitystatus"]," : ", priceValInJson["product"]["attributes"]["usagetype"] The output of the above code is: region : ap-south-1 USDValue : {u'USD': u'0.0000000000'} : AllocatedCapacityReservation : APS3-DedicatedRes:m4.large USDValue : {u'USD': u'0.1155000000'} : Used : APS3-DedicatedUsage:m4.large USDValue : {u'USD': u'0.1155000000'} : UnusedCapacityReservation : APS3-UnusedDed:m4.large What I am trying to do I am trying to get the price value of the instance type so that i can bid for half the price using boto3 instance groups. My Observation All the parameters match except for SKU and the ones displayed in output. One of them has a Reserved field also which I guess is for the instances that have been reserved. 
>>> json.loads(pricingValues["PriceList"][1])["terms"].keys() [u'Reserved', u'OnDemand'] What my confusion is I always get 3 values for the prices.This is true no matter what instance type I choose.I would like to understand what these are and why one of the reported price is 0.0 USD. A: I couldn't find any documentation on those values, but my guess would be: * *Used: The cost of using the instance On-Demand *UnusedCapacityReservation: The cost of a Reserved Instance when it isn't being used (you still pay for it) *AllocatedCapacityReservation: The cost of an instance if it is being used as a Reserved Instance (already paid for, therefore no cost) Those are just my guesses.
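Whichever interpretation is right, only the entry with capacitystatus "Used" carries the plain On-Demand rate, so you can filter on that attribute before reading the price. A sketch of the filtering over an already-fetched PriceList; the sample entry below only mimics the nesting shown in the question (real responses carry many more fields), while the field names (product.attributes.capacitystatus, terms.OnDemand, priceDimensions, pricePerUnit) are the ones from the actual API response:

```python
import json

# A stripped-down PriceList entry mimicking the structure in the question.
sample_entry = json.dumps({
    "product": {"attributes": {"capacitystatus": "Used"}},
    "terms": {"OnDemand": {"SKU.XYZ": {"priceDimensions": {
        "SKU.XYZ.RATE": {"pricePerUnit": {"USD": "0.1155000000"}}}}}},
})

def on_demand_usd(price_list):
    """Return the USD rates of entries whose capacity status is 'Used'."""
    rates = []
    for raw in price_list:
        entry = json.loads(raw)
        # Skip the reservation bookkeeping entries (Allocated/Unused).
        if entry["product"]["attributes"].get("capacitystatus") != "Used":
            continue
        for term in entry["terms"].get("OnDemand", {}).values():
            for dim in term["priceDimensions"].values():
                rates.append(float(dim["pricePerUnit"]["USD"]))
    return rates

print(on_demand_usd([sample_entry]))  # → [0.1155]
```

For the bidding use case, half the On-Demand price is then just on_demand_usd(...)[0] / 2.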
{ "language": "en", "url": "https://stackoverflow.com/questions/55122776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CSS: stop text in unordered list from wrapping on ipad I have the following unordered list:
<ul>
  <li>some text</li>
  <li style="overflow:hidden;white-space: nowrap;">some long text</li>
  <li>other text</li>
  <li>more text</li>
</ul>
which displays like the following in a normal browser:
some text | some long text | other text | more text
However on an iPad it displays like this:
some text | some long | other text | more text
            text
How do I prevent the li elements from wrapping on the iPad?
EDIT: If I add a large width to the style of "some long text" it displays like the following:
some text | some long | other text | more text
            text
If I change the text to some_long_text (making it all into one word) I see the following:
some text | some_long_text | other text | more text
so it seems like it is simply ignoring the white-space:nowrap; property.
{ "language": "en", "url": "https://stackoverflow.com/questions/13965208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Webpack build not updating contents on Apache server I have an Angular2 webpack project which works fine locally (webpack-dev-server), but once I deploy to my server (Apache) and run "npm run build", the build does not get updated (no errors shown). I tried removing my dist folder, but still no change. No changes made to the HTML are shown (it was working perfectly fine before).
webpack.config.js -->
var path = require('path'),
    webpack = require('webpack'),
    HtmlWebpackPlugin = require('html-webpack-plugin'),
    CopyWebpackPlugin = require('copy-webpack-plugin');

module.exports = {
  entry: {
    app: './src/main'
  },
  output: {
    path: path.resolve(__dirname, './dist'),
    filename: 'app.bundle.js'
  },
  resolve: {
    extensions: ['.js', '.ts', '.tsx', '.css', '.scss', '.html']
  },
  module: {
    rules: [{
      test: /\.ts$/,
      use: ['awesome-typescript-loader', 'angular2-template-loader', '@angularclass/hmr-loader', 'angular2-router-loader'],
      exclude: [/\.(spec|e2e)\.ts$/, /node_modules\/(?!(ng2-.+))/]
    }, {
      test: /\.scss$/,
      exclude: [/node_modules/],
      use: ['raw-loader', 'sass-loader', {
        loader: 'sass-resources-loader',
        options: {
          // Or array of paths
          resources: ['./src/assets/sharedStyles/_variables.scss', './src/assets/sharedStyles/_mixins.scss']
        },
      }]
    }, {
      test: /\.css$/,
      use: ['style-loader', 'css-loader']
    }, {
      test: /\.html$/,
      loader: 'raw-loader'
    }, {
      test: /\.(eot|ttf|wav|mp3|pdf|woff2|woff|png|svg|gif)?(\?v=[0-9]\.[0-9]\.[0-9])?$/,
      loader: 'file-loader'
    }]
  },
  // plugins
  plugins: [
    new webpack.ContextReplacementPlugin(
      // The (\\|\/) piece accounts for path separators in *nix and Windows
      /angular(\\|\/)core(\\|\/)@angular/,
      __dirname // location of your src
    ),
    new HtmlWebpackPlugin({
      template: './src/index.html',
      chunksSortMode: 'dependency'
    }),
    new webpack.ProvidePlugin({
      $: "jquery",
      jQuery: "jquery"
    }),
    new CopyWebpackPlugin([{
      from: './src/assets',
      to: './assets',
      copyUnmodified: true,
      force: true
    }])
  ],
  devServer: {
    contentBase: path.join(__dirname, "src/"),
    compress: true,
    port: 3000,
    historyApiFallback: true,
    inline: true
  }
};
package.json -->
{
  "name": "App",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "typings install",
    "typings": "typings",
    "start": "webpack-dev-server --public --port 3000 --hot --host 0.0.0.0",
    "build": "webpack",
    "build-prod": "webpack -p"
  },
  "licenses": [
    {
      "type": "MIT",
      "url": "https://github.com/angular/angular.io/blob/master/LICENSE"
    }
  ],
  "dependencies": {
    "@angular/animations": "^4.1.2",
    "@angular/common": "^4.1.2",
    "@angular/compiler": "^4.1.2",
    "@angular/compiler-cli": "^4.1.2",
    "@angular/core": "^4.1.2",
    "@angular/forms": "^4.1.2",
    "@angular/http": "^4.1.2",
    "@angular/platform-browser": "^4.1.2",
    "@angular/platform-browser-dynamic": "^4.1.2",
    "@angular/platform-server": "^4.1.2",
    "@angular/router": "^4.1.2",
    "@angular/upgrade": "4.1.2",
    "angular-in-memory-web-api": "~0.1.5",
    "bootstrap": "^3.3.7",
    "core-js": "^2.4.1",
    "d3": "^4.9.1",
    "html-webpack-plugin": "^2.28.0",
    "jquery": "2.x.x",
    "jquery-ui": "^1.10.3",
    "jquery-ui-npm": "^1.12.0",
    "ngx-bootstrap": "^1.6.6",
    "primeui": "^4.1.15",
    "reflect-metadata": "0.1.9",
    "rxjs": "5.1.0",
    "systemjs": "0.19.39",
    "typescript": "^2.4.1",
    "zone.js": "0.7.2"
  },
  "devDependencies": {
    "@angularclass/hmr-loader": "^3.0.2",
    "@types/jquery": "^2.0.33",
    "angular2-router-loader": "^0.3.4",
    "angular2-template-loader": "^0.6.2",
    "awesome-typescript-loader": "^3.0.3",
    "concurrently": "^3.0.0",
    "copy-webpack-plugin": "^4.0.1",
    "css-loader": "^0.26.1",
    "extract-text-webpack-plugin": "^1.0.1",
    "file-loader": "^0.10.1",
    "google-maps": "^3.2.1",
    "lite-server": "^2.2.2",
    "ng2-cookies": "^1.0.6",
    "ng2-datetime": "1.2.1",
    "ng2-popover": "0.0.13",
    "node-sass": "^4.5.0",
    "primeng": "^4.1.0-rc.2",
    "primeui": "^4.1.15",
    "raw-loader": "^0.5.1",
    "sass-loader": "^6.0.0",
    "sass-resources-loader": "^1.2.0",
    "smartadmin-plugins": "^1.0.15",
    "style-loader": "^0.13.2",
    "to-string-loader": "^1.1.5",
    "ts-loader": "^2.0.0",
    "typings": "2.1.0",
    "webpack": "^2.2.1",
    "webpack-dev-server": "^2.3.0"
  }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/44817801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Foursquare API - Tastes When I try to send a GET request to the Foursquare API below, I get a "No matching endpoint." error. I have validated my tokens and everything seems normal. Any advice?
REQUEST URL
https://api.foursquare.com/v2/users/USER_ID/tastes
RESPONSE MESSAGE
{
  "meta": {
    "code": 404,
    "errorType": "endpoint_error",
    "errorDetail": "No matching endpoint"
  },
  "notifications": [
    {
      "type": "notificationTray",
      "item": {
        "unreadCount": 0
      }
    }
  ],
  "response": {}
}
A: The FoursquareAPI Twitter account told me that I needed to pass m=foursquare in addition to the version information. The correct endpoint looks like
https://api.foursquare.com/v2/users/USER_ID/tastes?oauth_token=TOKEN&v=20150420&m=foursquare
The detailed information about the v and m parameters is below.
https://developer.foursquare.com/overview/versioning
{ "language": "en", "url": "https://stackoverflow.com/questions/29754994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() when using an if statement For some backstory, I am writing a program that involves a grid of squares that can all be of various colors, and I decided that I would store the colors in a NumPy array (I have basically no experience with NumPy). I formatted the array so that it was a 2d array, and each position in it correlated to the position of the grid space. I need to check the current color, so I did what I would do if it was just a list (for context, array is the name of the array):
color = 0, 0, 0
array = numpy.array([(color, color, color, color),
                     (color, color, color, color),
                     (color, color, color, color)])
if array[0,0] == color:
    # other code that doesn't matter
The if statement is where the error occurs, and I can't find anything about what to do when this error comes up in an if statement, only material about and/or. If anybody has some insight into this problem, any help would be greatly appreciated.
A: An explanation is provided over here: https://stackoverflow.com/a/65082868/7225290
No error will be thrown if you are comparing the value at a particular index with a value of the same datatype. The error occurs when you try to do
array = numpy.array([(0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0)])
if array:  # Using an object of type numpy.ndarray as a boolean causes the error
    pass
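As a sketch of the usual fix (the grid shape here is an assumption based on the snippet above, where array[0, 0] is itself a length-3 color array): the elementwise comparison produces a boolean array, so collapse it with .all(), or use numpy.array_equal, before testing it in an if statement:

```python
import numpy as np

color = (0, 0, 0)
grid = np.array([(color, color, color, color)] * 3)  # shape (3, 4, 3)

# grid[0, 0] == color is an elementwise result like [True, True, True];
# reduce it to a single truth value before using it in an if statement:
if (grid[0, 0] == color).all():
    print("colors match")

# An equivalent, often clearer spelling:
if np.array_equal(grid[0, 0], color):
    print("colors match")
```

a.any() would instead be the right reduction if a match on any single channel were enough.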
{ "language": "en", "url": "https://stackoverflow.com/questions/68647244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: React-native & Async Storage : App crashes every 5-10 min and all data lost on Android I made my first app with React-native / Redux / Async Storage. Everything works fine on my Android 10.0 Emulator API 29, but on a real device (Samsung S9 or Galaxy Tab) the app crashes every 5-10 min in the first situation (see below), or sometimes in the second situation. In any case, all data is lost afterwards. I know that I won't be clear and precise in my explanations because I can't find where the problem is. So do not hesitate to ask me for some additional information. I can't put all my code here, but if needed I can share some part of it or a github.
package.json
"dependencies": {
  "@react-native-community/async-storage": "^1.12.1",
  "@react-native-community/cameraroll": "^4.1.2",
  "@react-native-community/datetimepicker": "^3.5.2",
  "@react-native-community/masked-view": "^0.1.10",
  "@react-navigation/bottom-tabs": "^5.11.2",
  "@react-navigation/native": "^5.8.10",
  "@react-navigation/stack": "^5.12.8",
  "accordion-collapse-react-native": "^1.0.1",
  "formik": "^2.2.9",
  "immutable": "^4.0.0-rc.14",
  "moment": "^2.29.1",
  "react": "17.0.1",
  "react-addons-update": "^15.6.3",
  "react-native": "0.64.2",
  "react-native-camera": "^4.2.1",
  "react-native-camera-hooks": "^0.5.2",
  "react-native-collapsible": "^1.6.0",
  "react-native-elements": "^3.4.2",
  "react-native-fs": "^2.18.0",
  "react-native-gesture-handler": "^1.10.3",
  "react-native-html-to-pdf": "^0.11.0",
  "react-native-image-picker": "^4.1.2",
  "react-native-iphone-x-helper": "^1.3.1",
  "react-native-maps": "^0.29.0",
  "react-native-maps-directions": "^1.8.0",
  "react-native-modal-datetime-picker": "^11.0.0",
  "react-native-paper": "^4.9.2",
  "react-native-print": "^0.9.0",
  "react-native-reanimated": "^2.2.0",
  "react-native-safe-area-context": "^3.1.9",
  "react-native-screens": "^2.15.0",
  "react-native-signature-canvas": "^4.3.0",
  "react-native-svg": "^9.12.0",
  "react-native-vector-icons": "^8.1.0",
  "react-native-webview": "^11.14.0",
  "react-redux":
"^7.2.4", "react-to-print": "^2.14.0", "react-uuid": "^1.0.2", "redux": "^4.1.0", "redux-persist": "^6.0.0", "yup": "^0.32.9" }, First situation I use react-native-camera to take pictures and then I save them to the Redux Store in base64 (is that a good idea?). When I use this function, I have the impression that the app crashes faster. So when I take picutres, the application is consuming more and more memory RAM. I begin at 90MB and after few minutes 200MB and then 400MB... If I stop it before 400MB, the application starts again at 90MB and everthing is ok. If not, the application crashes (or not) but all data is lost. The second thing I noticed is that the pictures seem to stay on the cache memory. Maybe it is a configuration that I need to put. Second situation I disable the save of the pictures and it seems that the application do not crash after 5-10 min. I can use it for 30 min and all worked fine. I just noticed a crash one time and all data is lost again (one too many). About the memory, it seems that the application doesn't consume as much as the situation where we save the pictures. It begins at 90MB then 170MB to 220MB but never more. So now, my question is : How can i diagnose this type of problem ? All works fine on the emulator (no errors in console, the app never crashes). Could this be a known issue with Async Storage / Redux / Or the organization of my data? (I searched about this type of problem but the majority of data lost is after an update) Thanks a lot for your help :)
{ "language": "en", "url": "https://stackoverflow.com/questions/70210947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: synchronize servlet sessions I've created a web app that uses OAuthentication to log in to Twitter and the login process works successfully on a single servlet. On that servlet I get the session for the user. However, once I move to another servlet for the first time and try to get the session again, a new one is created. I thought the web app would read client cookies and create one session for each client? Below, you can see that the client session ID remains the same throughout the OAuth process but changes on the new servlet. I put in encodedURLS in case cookies didn't work as well. But once I redo the OAuth process and try again everything syncs up... Creating Authentication Session... Session ID before getting Request Token: 5E5932F144E4838EFDD398407D4BA351 Retrieving request token... Request token retrieved... Session ID after getting Request Token: 5E5932F144E4838EFDD398407D4BA351 Swapping request token for access token... Session ID: F97463A1A2D239B7E6D15D1C5FDAE26B Sep 9, 2010 1:37:03 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet PostUpdatesServlet threw exception java.lang.NullPointerException at com.twf.PostUpdatesServlet.doPost(PostUpdatesServlet.java:31) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:637)
A: Sessions are domain and context dependent. If both servlets are running in different contexts (different webapps), then you need to configure the servlet container to allow session sharing among contexts. In Tomcat and clones you can do this by setting the emptySessionPath attribute to true. If those servlets are actually running in the same context, then the problem lies somewhere else. It's hard to nail it down based on the information given so far. Maybe HttpSession#invalidate() has been called, or the client has sent an invalid jsessionid cookie with the request.
{ "language": "en", "url": "https://stackoverflow.com/questions/3679354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to train CNN on LFW dataset? I want to train a facial recognition CNN from scratch. I can write a Keras Sequential() model following popular architectures and copying their networks. I wish to use the LFW dataset, however I am confused regarding the technical methodology. Do I have to crop each face to a tight-fitting box? That seems impractical, as the dataset has 13000+ faces. Lastly, I know it's stupid, but all I have to do is preprocess the images (of course), then fit the model to these images? What's the exact procedure?
A: Your question is very open-ended. Before preprocessing and fitting the model, you need to understand Object Detection. Once you understand object detection, you will get the answer to your first question: whether you are required to manually crop every one of the 13000 images. The answer is no. However, you will have to draw bounding boxes around faces and assign labels to images if they are not available in the training data.
Your second question is very vague. What do you mean by exact procedure? Is it the steps you need to do, or how to do preprocessing and fitting of the model in Python or any other language? There are lots of references available on the internet about how to do preprocessing and model training for every specific problem. There are no universal steps which can be applied to any problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/59675052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to compensate for space left by a relatively positioned element? (Without making a mess) I have these three elements: Now, my layout mandates that element .b (by the way, if it's important, they're all html <section>s) is somewhat superimposed on element .a. So I decide to apply position: relative to it, then nudge it up using top: -50px. What happens is this: I successfully superimposed the two top elements, but now I created an unnecessary 50px space between .b and .c. (They were supposed to be glued together.) My first guess was to apply an equivalent margin-bottom: -50px, but this didn't work, for some reason I'm not aware of. Eventually I resolved it in a roundabout way by making .b a child of .a. This caused .a to overflow above .c, but then I applied a magic-number amount of margin-bottom to it in order to push .c back down. Of course, I'm not happy with this solution, so I'm asking here. What would you say is the best way to resolve this? By best way I mean I want to avoid, if possible:
* *the creation of additional nonsemantic markup
*applying the same top: -50px rule to all subsequent elements on the page
*using any kind of magic number in my CSS.
I just want to learn the best practice when dealing with this, because I assume it's going to be a problem I will be encountering more times in the future.
A: So, there are several ways to accomplish this. My suggestion would be to utilize margin-top on the element you want to overflow. Everything else will render properly and only one item needs to be positioned properly.
Visual Representation:
HTML
<div id="one">Item 1</div>
<div id="two">Item 2</div>
<div id="three">Item 3</div>
CSS
#one, #two, #three {
  position: relative;
  margin: 0 auto;
}
#one {
  width: 400px;
  height: 200px;
  background: #ABC;
}
#two {
  width: 200px;
  height: 100px;
  background: #CBA;
  margin-top: -50px;
}
#three {
  width: 400px;
  height: 300px;
  background: #BBB;
}
Example provided here: http://jsfiddle.net/dp83o0vt/
A: Instead of setting top: -50px; simply set margin-top: -50px;
This way your .c still sticks to .b, and you don't have to mess with anything else.
jsfiddle here: http://jsfiddle.net/gyrgfqdx/
{ "language": "en", "url": "https://stackoverflow.com/questions/32592794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Parsing modified MNIST in the form of CSV for Conv Neural Network I'm planning on using this modified version of MNIST for benchmarking research, but the files are currently in .mat format. I've read on StackOverflow that MatlabRecordReader actually isn't that robust, and that it's far smarter to change the data into CSV format. I've downloaded Matlab and changed the .mat file to a .csv file that has 60000 (for the test data) lines, the first 784 values of each line being the pixel values of the image itself and the last 10 values being the label (though I believe I can easily condense the label into one value at the end of the first 784 values). Now that I have this data, I'm not exactly sure how I should pass it through an Iterator properly for my Conv Neural Network. I've looked up the documentation, but this isn't exactly what I need, and looking up the examples in the docs for the RecordReaderDatasetIterator was also a near miss, because it treats lines of the CSV files as either a 1-dimensional vector (as opposed to a matrix) or formats the data for linear regression. I hope this has been clear enough. Could someone please assist me?
A: Use CSVRecordReader with the label appended to the end of each row as an integer from 0 to 9. Use convolutionalFlat as the setInputType at the bottom. Example snippet:
.setInputType(InputType.convolutionalFlat(28,28,1))
.backprop(true).pretrain(false).build();
Whole code example for the neural net config: https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/LenetMnistExample.java
{ "language": "en", "url": "https://stackoverflow.com/questions/51344016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Bootstrap select plugin is not working I am trying to use this Bootstrap-Select plugin. I have also created a fiddle according to the demo, http://jsbin.com/anepet/1/edit
But I am totally blank and do not understand why this is not working; the appearance of the select list is not changing. Can anybody please help me out?
A: Perhaps add the invocation to make all .selectpicker elements a selectpicker? You will pretty much hit your head after seeing this. In the source of the Select plugin page you will see countless times where the demo uses a CSS selector to invoke .selectpicker() on the elements.
<script type="text/javascript">
  $('.selectpicker').selectpicker();
</script>
Please see http://jsbin.com/anepet/3/edit
A: You are including the JS twice and not invoking selectpicker(). Remove this line:
<script src="http://silviomoreto.github.com/bootstrap-select/javascripts/bootstrap-select.js"></script>
and add this:
<script type="text/javascript">
  $(document).ready(function(){
    $('.selectpicker').selectpicker();
  });
</script>
in the <head> and it will work.
{ "language": "en", "url": "https://stackoverflow.com/questions/15727401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Passive Link in Angular 2 - equivalent In Angular 1.x I can do the following to create a link which does basically nothing:
<a href="">My Link</a>
But the same tag navigates to the app base in Angular 2. What is the equivalent of that in Angular 2?
Edit: It looks like a bug in the Angular 2 Router and now there is an open issue on github about that. I am looking for an out of the box solution or a confirmation that there won't be any.
A: Here are some ways to do it:
* *<a href="" (click)="false">Click Me</a>
*<a style="cursor: pointer;">Click Me</a>
*<a href="javascript:void(0)">Click Me</a>
A: You have to prevent the default browser behaviour. But you don’t need to create a directive to accomplish that. It’s as easy as the following example:
my.component.html
<a href="" (click)="goToPage(pageIndex, $event)">Link</a>
my.component.ts
goToPage(pageIndex, event) {
  event.preventDefault();
  console.log(pageIndex);
}
A: I have 4 solutions for a dummy anchor tag.
1. <a style="cursor: pointer;"></a>
2. <a href="javascript:void(0)" ></a>
3. <a href="current_screen_path"></a>
4. If you are using bootstrap:
<button class="btn btn-link p-0" type="button" style="cursor: pointer" (click)="doSomething()">MY Link</button>
A: Here is a simple way:
<div (click)="$event.preventDefault()">
  <a href="#"></a>
</div>
Capture the bubbling event and shoot it down.
A: Updated for Angular 5
import { Directive, HostListener, Input } from '@angular/core';

@Directive({
  // tslint:disable-next-line:directive-selector
  selector : '[href]'
})
export class HrefDirective {
  @Input() public href: string | undefined;

  @HostListener('click', ['$event']) public onClick(event: Event): void {
    if (!this.href || this.href === '#' || (this.href && this.href.length === 0)) {
      event.preventDefault();
    }
  }
}
A: Not sure why people suggest using routerLink=""; for me in Angular 11 it triggers navigation.
This is what works for me:
<div class="alert">No data yet, ready to <a href="#" (click)="create();$event.preventDefault()">create</a>?</div>
A: In my case, deleting the href attribute solved the problem, as long as there is a click function assigned to the a.
A: If you have Angular 5 or above, just change
<a href="" (click)="passTheSalt()">Click me</a>
into
<a [routerLink]="" (click)="passTheSalt()">Click me</a>
A link will be displayed with a hand icon when hovering over it, and clicking it won't trigger any route.
Note: If you want to keep the query parameters, you should set the queryParamsHandling option to preserve:
<a [routerLink]="" queryParamsHandling="preserve" (click)="passTheSalt()">Click me</a>
A: There are ways of doing it with angular2, but I strongly disagree this is a bug. I'm not familiar with angular1, but this seems like really wrong behavior even though, as you claim, it is useful in some cases; clearly this should not be the default behavior of any framework. Disagreements aside, you can write a simple directive that grabs all your links and checks the href's content, and if its length is 0 you execute preventDefault(). Here's a little example.
@Directive({
  selector : '[href]',
  host : {
    '(click)' : 'preventDefault($event)'
  }
})
class MyInhertLink {
  @Input() href;
  preventDefault(event) {
    if(this.href.length == 0) event.preventDefault();
  }
}
You can make it work across your application by adding this directive in PLATFORM_DIRECTIVES
bootstrap(App, [provide(PLATFORM_DIRECTIVES, {useValue: MyInhertLink, multi: true})]);
Here's a plnkr with an example working.
A: A really simple solution is not to use an A tag - use a span instead:
<span class='link' (click)="doSomething()">Click here</span>
span.link {
  color: blue;
  cursor: pointer;
  text-decoration: underline;
}
A: An anchor should navigate to something, so I guess the behaviour is correct when it routes. If you need it to toggle something on the page, it's more like a button?
I use bootstrap so I can use this:
<button type="button" class="btn btn-link" (click)="doSomething()">My Link</button>
A: I am using this workaround with css:
/*** Angular 2 link without href ***/
a:not([href]) {
  cursor: pointer;
  -webkit-user-select: none;
  -moz-user-select: none;
  user-select: none
}
html
<a [routerLink]="/">My link</a>
Hope this helps
A: simeyla solution:
<a href="#" (click)="foo(); false">
<a href="" (click)="false">
A: That will be the same; it doesn't have anything related to angular2. It is a simple HTML tag. Basically, an a (anchor) tag will be rendered by the HTML parser.
Edit: You can disable that href by having javascript:void(0) on it, so nothing will happen on it. (But it's a hack.) I know Angular 1 provided this functionality out of the box, which doesn't seem correct to me now.
<a href="javascript:void(0)" >Test</a>
Plunkr
Another way could be using the routerLink directive and passing a "" value, which will eventually generate a blank href="":
<a routerLink="" (click)="passTheSalt()">Click me</a>
A: You need to prevent the event's default behaviour as follows.
In html
<a href="" (click)="view($event)">view</a>
In ts file
view(event: Event) {
  event.preventDefault();
  // remaining code goes here..
}
A: I wonder why no one is suggesting routerLink and routerLinkActive (Angular 7)
<a [routerLink]="[ '/resources' ]" routerLinkActive="currentUrl!='/resources'">
I removed the href and am now using this. When using href, it was going to the base url or reloading the same route again.
A: Updated for Angular2 RC4:
import {HostListener, Directive, Input} from '@angular/core';

@Directive({
  selector: '[href]'
})
export class PreventDefaultLinkDirective {
  @Input() href;
  @HostListener('click', ['$event']) onClick(event) { this.preventDefault(event); }

  private preventDefault(event) {
    if (this.href.length === 0 || this.href === '#') {
      event.preventDefault();
    }
  }
}
Using
bootstrap(App, [provide(PLATFORM_DIRECTIVES, {useValue: PreventDefaultLinkDirective, multi: true})]);
{ "language": "en", "url": "https://stackoverflow.com/questions/35639174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "192" }
Q: How can I compare two lists of means in a dataframe, by row I am trying to compare, with a t-test, two lists of gene expression mean values. My matrix is built like this:
col1 <- c(6.7, 8.4, 3.1)
col2 <- c(7.7, 8.8, 3.6)
matrix <- cbind(col1, col2)
rownames(matrix) <- c("gene1", "gene2", "gene3")
I want to get the p value for each gene. All that I know is that col1 corresponds to means calculated on 22 samples and col2 on 30 samples. I tried to apply a t-test per row, but it is not working.
apply(t.test, matrix$col1, matrix$col2, 1)
A: I think you need to do a better job of defining what, exactly, it is that you want to compare. There's no such thing as a p value of a mean. What are you comparing, base pair variance between a gene in column 1 and one in column 2? Or is col. 1 the full sequence of one gene and col2 the full sequence of a second gene? Your question doesn't make it clear what you're analyzing, and without that you may have good math that means nothing. Here's a good definition of t-test, assuming that that test is, in fact, what you ought to be using. Note that this test requires not only the difference between the means (which you could calculate from what you showed us), but also the standard deviation of each mean (which you didn't give) and the number of items (which you did). This means we only have 2 out of 3 of the necessary inputs. To get the 3rd, either you need to supply it, or you need to supply the raw data which produced it.
{ "language": "en", "url": "https://stackoverflow.com/questions/56701313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to implement pinch to zoom and flick with momentum for a drawing (graph) Core Graphics app? In my app, I drew a graph using Core Graphics (inside a view, inside a view). It's not a graphing calc app though; it graphs patient data and marks it every six months, and it is larger than the screen, so the user needs to have some way to move around. I was wondering if there is an easy way to implement pinch to zoom, or to flick with momentum. I was planning on just using UITouch to get notified when these actions were performed, but it doesn't really give you a lot of information. For example, all you get with the pinch to zoom is the ratio that they have zoomed, and all you get with the flick is the direction that they have flicked. So, I was just going to implement basic flicks without momentum, and simple pinch to zoom without being able to move around too. But I figured I would ask here first, to see if anyone has a better idea about how to do this (easily).
EDIT: I found lots of places that tell you how to do this with photos, but none with Core Graphics or something like that, thanks.
A: I ended up using a UIScrollView, which implements pinch to zoom and flick automatically (well, almost).
{ "language": "en", "url": "https://stackoverflow.com/questions/6434835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CodeBlocks - How to add an icon to a C program? I have a small C console program and I want to add an .ico file to it, so that the executable looks nice. How can I do this in CodeBlocks with MinGW/gcc?
A: I could not find relevant help via Google that a total beginner (like me for C) could follow, so I will Q&A this topic.
* *First of all you need an .ico file. Put it in the folder with your main.c file.
*In CodeBlocks go to File -> New -> Empty File and name it icon.rc. It has to be visible in the Workspace/Project, otherwise CodeBlocks will not be aware of this file. It will show up there in a project folder called Resources.
*Put the following line in it: MAINICON ICON "filename.ico". MAINICON is just an identifier, you can choose something different. More info 1 & More info 2.
*Save the files and compile - CodeBlocks will do everything else for you.
What will happen now is that windres.exe (the Resource Compiler) compiles the resource script icon.rc and the icon into an object binary file at obj\Release\icon.res, and the linker will add it to the executable.
It's so easy, yet it took me quite a while to find it out - I hope I can save someone else having the same problem some time.
{ "language": "en", "url": "https://stackoverflow.com/questions/49164595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Array.pop - last two elements - JavaScript I have a question about using the Array.pop function in JavaScript. Array.pop removes and returns the last element of an Array. My question is then: is it possible to remove and return the last TWO elements of an array instead of just the last one? I am using this function to return the last element of a URL, like this:
URL: www.example.com/products/cream/handcreamproduct1
'url'.split("/").pop(); -----> "handcreamproduct1"
What I want is:
'url'.split("/").pop(); -----> "cream/handcreamproduct1"
I want to take the last two parameters in the url and return them; with .pop I only get the last one. Remember that the URL is of dynamic length. The url can look like this:
URL: www.example.com/products/cream/handcreamproduct1
OR
URL: www.example.com/products/handcream/cream/handcreamproduct1
A: Split the string, use Array#slice to get the last two elements, and then Array#join with slashes:
var url = 'www.example.com/products/cream/handcreamproduct1';
var lastTWo = url
  .split("/")  // split to an array
  .slice(-2)   // take the two last elements
  .join('/');  // join back to a string
console.log(lastTWo);
A: There is no built-in array function to do that.
Instead use const urlParts = 'url'.split('/'); return urlParts[urlParts.length - 2] + "/" + urlParts[urlParts.length - 1]; A: I love the new array methods like filter so there is a demo with using this let o = 'www.example.com/products/cream/handcreamproduct1'.split('/').filter(function(elm, i, arr){ if(i>arr.length-3){ return elm; } }); console.log(o); A: You can use String.prototype.match() with RegExp /[^/]+\/[^/]+$/ to match one or more characters that are followed by "/" followed by one or more characters that are followed by end of string let url = "https://www.example.com/products/handcream/cream/handcreamproduct1"; let [res] = url.match(/[^/]+\/[^/]+$/); console.log(res); A: Note that if the URL string has a trailing / then the answers here would only return the last part of the URL: var url = 'www.example.com/products/cream/handcreamproduct1/'; var lastTWo = url .split("/") // split to an array .slice(-2) // take the two last elements .join('/') // join back to a string; console.log(lastTWo); To fix this, we simply remove the trailing /: const urlRaw = 'www.example.com/products/cream/handcreamproduct1/'; const url = urlRaw.endsWith("/") ? urlRaw.slice(0, -1) : urlRaw const lastTWo = url .split("/") // split to an array .slice(-2) // take the two last elements .join('/') // join back to a string; console.log(lastTWo);
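A side note, not from the answers above: when the string is a full URL with a scheme, the built-in WHATWG `URL` class (available in browsers and Node) can do the path parsing, which also handles trailing slashes for free. A minimal sketch:

```typescript
// Parse the URL, split its pathname, and keep the last two non-empty segments.
// Assumes the input string includes a scheme such as "https://".
const url = new URL("https://www.example.com/products/cream/handcreamproduct1/");
const segments = url.pathname.split("/").filter(Boolean); // drops empty segments
const lastTwo = segments.slice(-2).join("/");
console.log(lastTwo); // "cream/handcreamproduct1"
```

Because `filter(Boolean)` removes the empty strings produced by leading or trailing slashes, the same code works whether or not the URL ends with `/`.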
{ "language": "en", "url": "https://stackoverflow.com/questions/46764953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: adding typings to an imported JSON file

import { CharacterDataStructure } from "../../Typings/Interfaces"
declare module "../../Data/Characters/Raw_Characters.json" {
    export = CharacterDataStructure
}
import RawCharacters from "../../Data/Characters/Raw_Characters.json"

As you can see, I want to add a type to my JSON file, but TypeScript treats the file as a non-module. The reason I want to do this is my interface. Here is an example of the data in my JSON file and my interface:

JSON file:

{ "id": "character_1", "type": "item" }

My interface:

interface CharacterDataStructure {
    id: `character_${number}`
    type: "item" | "power up"
}

When I import my JSON file, it recognises id and type as string(s) and does not check their values to see whether they match the interface, and thus this gives me an error.
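One possible approach, offered as a hedged sketch rather than a confirmed fix: TypeScript widens values from JSON imports to plain `string`, so template-literal types like `` `character_${number}` `` are lost at the import boundary. A common workaround is to narrow the imported data with a runtime type guard (or an `as` assertion if you trust the file). The inline array below stands in for the JSON import:

```typescript
// The interface from the question.
interface CharacterDataStructure {
  id: `character_${number}`;
  type: "item" | "power up";
}

// Stand-in for the JSON import; a real import would arrive with widened string types.
const rawCharacters: unknown[] = [
  { id: "character_1", type: "item" },
  { id: "villain_1", type: "item" }, // does not match the template-literal id
];

// Runtime guard that narrows unknown values to CharacterDataStructure.
function isCharacter(value: any): value is CharacterDataStructure {
  return (
    typeof value?.id === "string" &&
    /^character_\d+$/.test(value.id) &&
    (value.type === "item" || value.type === "power up")
  );
}

// filter() with a type guard yields a correctly typed array.
const characters: CharacterDataStructure[] = rawCharacters.filter(isCharacter);
console.log(characters.length); // 1
```

Casting with `as CharacterDataStructure[]` would silence the compiler without checking anything, so the guard is the safer choice when the JSON file can drift out of sync with the interface.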
{ "language": "en", "url": "https://stackoverflow.com/questions/71700317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can I use non-constant array size for class members?

In the code below, uint16_t sine[ANGULAR_RESO] throws the error "non static member reference must be relative to specific object":

class BLDCControl {
public:
    uint16_t ANGULAR_RESO;
    uint16_t sine[ANGULAR_RESO];
    uint16_t pwm_resolution = 16;
};

What am I doing wrong?

A: To use the class as it is written, ANGULAR_RESO must be a compile-time constant, and in that case it is no longer a specific member of every object - it must be static. If you need a varying array size, use std::vector, as follows:

class BLDCControl {
public:
    uint16_t ANGULAR_RESO;
    std::vector<uint16_t> sine;
    uint16_t pwm_resolution = 16;
};

And if ANGULAR_RESO is the size of your array (as @aschepler suggested), you can go without it, because your std::vector stores this size as a private member and you can get its value with the std::vector<uint16_t>::size() method:

#include <cstdint>
#include <iostream>
#include <vector>

struct BLDCControl {
    BLDCControl(uint16_t ANGULAR_RESO, uint16_t pwm_resolution_v = 16)
        : sine {std::vector<uint16_t>(ANGULAR_RESO)}, pwm_resolution {pwm_resolution_v} {}
    std::vector<uint16_t> sine;
    uint16_t pwm_resolution;
};

int main() {
    BLDCControl u(4, 16);
    std::cout << "ANGULAR_RESO is:\t" << u.sine.size();
}
{ "language": "en", "url": "https://stackoverflow.com/questions/61491959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Visual Studio call stack always off by a line

I've been noticing that the call stack in VS always seems to be off by a line. Once I step into a function, the line number for the stack frame I just left gets incremented and points to the next non-empty line. Then if I double-click that frame in the Call Stack window, it indeed takes me to some line after the function call that I'm actually in. I've repro'd this in empty projects in both VS2015 and VS2017 (debug builds). In the pic below you'll notice the second stack frame indicates line 17, which is the return several lines below the Log() call where the debugger is actually stopped. This is a trivial repro, but I'm seeing this constantly in real projects, and I don't recall having this problem outside of the last few days. Anyone have any idea what might be causing this?
{ "language": "en", "url": "https://stackoverflow.com/questions/42943008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Android Camera preview has 'laces' or 'Mosaic' on the border using SurfaceTexture - why can't the picture fully fill up the screen?

I want to make a camera preview only, but for some reason I get the problem shown in the picture below:

I use a subclass of GLSurfaceView for the preview. From the picture we can see that the preview can't fully fill up the screen, but if I preview with mCamera.setPreviewDisplay(mHolder); the result is correct.

Does SurfaceTexture display differently on different devices? Is SurfaceTexture or GLES11Ext.GL_TEXTURE_EXTERNAL_OES memory-aligned? Does SurfaceTexture need some configuration to fully fill up the screen? Thanks.

The code I use is almost like this:

activity content layout:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <com.camera.PreviewSurfaceView
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</LinearLayout>

PreviewSurfaceView.java:

public class PreviewSurfaceView extends GLSurfaceView {
    private PreviewRenderer mRender;

    public PreviewSurfaceView(Context context) {
        this(context, null);
    }

    public PreviewSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setEGLContextClientVersion(2);
        mRender = new PreviewRenderer(context, this);
        setRenderer(mRender);
        setRenderMode(RENDERMODE_WHEN_DIRTY);
    }
}

PreviewRenderer.java:

public class PreviewRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        initShaders();
        mCamera = Camera.open(Camera.CameraInfo.CAMERA_FACING_BACK);
        Camera.Parameters parameters = mCamera.getParameters();
        mCamera.setParameters(parameters);
        mTextureId = mGlesHelper.createTextureOESID();
        mSurfaceTexture = new SurfaceTexture(mTextureId);
        mSurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
            @Override
            public void onFrameAvailable(SurfaceTexture surfaceTexture) {
                Log.d(TAG, "onFrameAvailable: tid = " + Thread.currentThread().getId());
            }
        });
        try {
            mCamera.setPreviewTexture(mSurfaceTexture);
            mCamera.startPreview();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        mSurfaceWidth = width;
        mSurfaceHeight = height;
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        if (mSurfaceTexture != null) {
            mSurfaceTexture.updateTexImage();
        }
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        GLES20.glClearColor(0.6f, 0.5f, 0.3f, 1.0f);
        GLES20.glViewport(0, 0, mSurfaceWidth, mSurfaceHeight);
        GLES20.glUseProgram(mRenderProgram);
        GLES20.glVertexAttribPointer(mVertexCoorLocation, 3, GLES20.GL_FLOAT, false, 0, mScreenVertexBuffer);
        GLES20.glEnableVertexAttribArray(mVertexCoorLocation);
        GLES20.glVertexAttribPointer(mTextureCoorLocation, 2, GLES20.GL_FLOAT, false, 0, mTextureBuffer);
        GLES20.glEnableVertexAttribArray(mTextureCoorLocation);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mTextureId);
        GLES20.glUniform1i(mOesTextureLocation, 0);
        GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, mIndics.length, GLES20.GL_UNSIGNED_SHORT, mIndicsBuffer);
        GLES20.glDisableVertexAttribArray(mVertexCoorLocation);
        GLES20.glDisableVertexAttribArray(mTextureCoorLocation);
        mSurfaceView.requestRender();
    }
}

vertex.glsl:

attribute vec4 a_vertexCoor;
attribute vec2 a_textureCoor;
varying vec2 v_textureCoor;
void main() {
    v_textureCoor = vec2(1.0 - a_textureCoor.y, 1.0 - a_textureCoor.x);
    gl_Position = a_vertexCoor;
}

fragment.glsl:

#extension GL_OES_EGL_image_external : require
precision highp float;
uniform samplerExternalOES u_OEStexture;
varying vec2 v_textureCoor;
void main() {
    gl_FragColor = texture2D(u_OEStexture, v_textureCoor);
}

additional:

A: Through a day's study, I finally found the reason. The steps of the change are as follows:

1. Change vertex.glsl and add some lines:

uniform mat4 uVertexMatrix;
uniform mat4 uTextureMatrix;
void main() {
    gl_Position = uVertexMatrix * a_vertexCoor;
    vec2 textureCoor = (uTextureMatrix * vec4(a_textureCoor, 0, 1)).xy;
    v_textureCoor = vec2(textureCoor.x, textureCoor.y);
}

2. Change the onDrawFrame() method and add some lines:

....
// important code
mSurfaceTexture.getTransformMatrix(mTextureMatrix);
...
GLES20.glUniformMatrix4fv(mVertexMatrixLocation, 1, false, mVertexMatrix, 0);
GLES20.glUniformMatrix4fv(mTextureMatrixLocation, 1, false, mTextureMatrix, 0);
...

Then you can get the correct result with the two steps described above.

In the previous vertex.glsl, the code v_textureCoor = vec2(1.0 - textureCoor.x, 1.0 - textureCoor.y); can transform the texture to get a correct view based on https://learnopengl.com/Getting-started/Textures, but it doesn't always work on various devices.

In many open source projects, such as google/grafika and aiyaapp/AAVT, matrix transforms have been used to handle the texture. They inspired me and finally solved the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/51296592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to access slider menu from another activity?

Hi, I am working on Android applications. I have a slider menu in my library project. I have integrated the library with another project, and I need to access the same slider menu in my new project. I tried calling an intent, but that launches the full activity; I need only the slider menu from the library. How can I access this? Please help me. Thanks in advance :)
{ "language": "en", "url": "https://stackoverflow.com/questions/22754374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }