Column types: source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 1–8) | prompt (stringlengths 209–40.4k) | gold_standard_solution (stringlengths 0–56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths 138–225) | problem_id (stringlengths 9–10)

source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---
stackexchange | llm_judgeable_groundtruth_similarity | 11777 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to create a C++ program using MathLink that will invoke the Maple kernel and execute a very basic Maple command using OpenMaple? I'm envisioning the following: Install["MapleLink"] starts the external program MapleLink and installs the Mathematica definition RunMaple, so that ?RunMaple reports "RunMaple[expr] returns a string which is the output when Maple executes expression expr". A sample output:

RunMaple["seq(i, i=0..3);"]
"0, 1, 2, 3"

I'm also open to using J/Link.
Now provide the response and nothing else.
| Begin Note: I have successfully got this working in the following environments: on a Mac running version 9.0.0.0 using a locally installed JRE (version 1.6.0_37), and on a Windows PC running version 9.0.0.0 using a locally installed JRE (version 1.7.0_13). I found that on my Windows PC, the JRE included with Mathematica is version 1.6.0_10, and the method described below fails when initializing the Maple engine at this step:

MapleEngine = JavaNew[engineload, {"java"}, defaultexception, Null, Null]
Java::excptn: "A Java exception occurred: java.lang.UnsatisfiedLinkError"

So if you are getting the error seen above, try loading a newer version of Java when doing ReinstallJava. I show how to do this below. You can find out the version of Java that Mathematica is running by executing the following:

Needs["JLink`"]
ReinstallJava[];
LoadJavaClass["java.lang.System"];
java`lang`System`getProperty["java.version"]

End Note.

Java OpenMaple provides an interface between the Maple evaluation engine and a Java program. J/Link provides Mathematica users with the ability to interact with arbitrary Java classes directly from the front end. The goal here is to use J/Link to access the methods from OpenMaple. It is assumed that Maple is installed on the computer you are doing this on. First, load J/Link:

Needs["JLink`"]

We will refer to the location of the Maple installation as MAPLEDIR and the location of the Java OpenMaple native library as BINDIR, as per this example. You can find these locations by executing kernelopts(mapledir); and kernelopts(bindir); in Maple, respectively. [The original answer showed screenshots of these outputs in Windows and OS X.] Here are the directories from my Windows and Mac machines, respectively:

(* Windows *)
MAPLEDIR = "C:\\Program Files (x86)\\Maple 16";
BINDIR = "C:\\Program Files (x86)\\Maple 16\\bin.win";
(* OS X *)
MAPLEDIR = "/Library/Frameworks/Maple.framework/Versions/16";
BINDIR = "/Library/Frameworks/Maple.framework/Versions/16/bin.APPLE_UNIVERSAL_OSX";

Add the Java OpenMaple components to the class path:

(* Windows *)
AddToClassPath[MAPLEDIR <> "\\java\\externalcall.jar"];
AddToClassPath[MAPLEDIR <> "\\java\\jopenmaple.jar"];
(* OS X *)
AddToClassPath[MAPLEDIR <> "/java/externalcall.jar"];
AddToClassPath[MAPLEDIR <> "/java/jopenmaple.jar"];

Update the environment variables:

(* Windows *)
oldPATH = "PATH" /. GetEnvironment["PATH"];
SetEnvironment["PATH" -> oldPATH <> ";" <> BINDIR];
(* OS X *)
SetEnvironment["DYLD_LIBRARY_PATH" -> BINDIR];
SetEnvironment["MAPLE" -> MAPLEDIR];

Since SetEnvironment is new in version 9, users of lower versions should see this answer. Now that we are done with the setup, we will launch the Java runtime. As per the Note above, I launch the local version of Java in Windows and use the built-in version of Java in OS X:

(* Windows *)
ReinstallJava[CommandLine -> "C:\\Program Files (x86)\\Java\\jre7\\bin\\java.exe"]
(* OS X *)
ReinstallJava[];

Next, create a default instance of a class implementing the EngineCallBacks interface:

exceptionload = LoadJavaClass["com.maplesoft.openmaple.EngineCallBacksDefault"]
defaultexception = JavaNew[exceptionload]

JavaClass[com.maplesoft.openmaple.EngineCallBacksDefault, <>]
« JavaObject[com.maplesoft.openmaple.EngineCallBacksDefault] »

Next, create an instance of Engine to start the Maple session. Only one Engine can be created during a single run of the JVM, even after the Engine has been stopped.

engineload = LoadJavaClass["com.maplesoft.openmaple.Engine"]
MapleEngine = JavaNew[engineload, {"java"}, defaultexception, Null, Null]

JavaClass[com.maplesoft.openmaple.Engine, <>]
« JavaObject[com.maplesoft.openmaple.Engine] »

In Windows, if the Maple kernel gets initialized, a splash screen briefly appears [screenshot in the original answer]. (This did not show up in OS X.) Using MapleEngine, you can now perform computations using evaluate. Depending on what the input is, the object returned by evaluate differs:

out = MapleEngine@evaluate["seq(i, i=0..3);"];

« JavaObject[com.maplesoft.openmaple.Expseq] »

In this example, the method returns an Expseq object. However, Algebraic is the base class for every Java OpenMaple class that represents a Maple object. Since Expseq publicly inherits from Algebraic, we can use methods from Algebraic to study it. First, we load the Algebraic class:

algebraicload = LoadJavaClass["com.maplesoft.openmaple.Algebraic"]

JavaClass[com.maplesoft.openmaple.Algebraic, <>]

Now we can manipulate our output out:

out@algebraicload`toString[]

[0, 1, 2, 3]

Here, out has head String. You can create persisting variables using the newName method. We assign the value 100 to the variable abcde and then add 5 to it:

name = MapleEngine@newName["abcde", True];
name@assign[MapleEngine@newNumeric[100]];
out = MapleEngine@evaluate["abcde + 5;"];
out@algebraicload`intValue[]

105

There is a lot more you can do if you take a look at the Java OpenMaple API. The code above is only a starting point. I can imagine someone building a nice package off of this. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/11777', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1082/']} | jdg_76941 |
stackexchange | llm_judgeable_groundtruth_similarity | 46696518 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to set up a website on cookiecutter. I created a new app called "bots" and added a class called Trade within models that lists 2 parameters, "titles" and "units". After migrating and running the server, when I open the admin panel and click on the "+ add" button within the panel to create a trade [pic of admin panel in the original post], the Django webpage returns this error:

django.db.utils.ProgrammingError: relation "bot_trade" does not exist
LINE 1: ...."id", "bots_unit"."sell", "bots_unit"."buy" FROM "bots_unit...

Additional info: running my Django within Docker with PostgreSQL.

Models.py

from django.db import models
from datetime import date
#from django.contrib.auth.models import AbstractUser
#from .models import User
from django.urls import reverse
from django.urls import reverse_lazy
from django.conf import settings
import uuid

class Unit(models.Model):
    TRADE_UNIT = (
        ('ETH', 'Ethereum'),
        ('BTC', 'Bitcoin'),
        ('LTC', 'Litecoin'),
        ('IOT', 'IOTA'),
        ('OMG', 'OmiseGo'),
        ('BCH', 'BitcoinCash'),
    )
    sell = models.CharField(max_length=3, choices=TRADE_UNIT, blank=True, default='ETH', help_text='Currency to Sell')
    buy = models.CharField(max_length=3, choices=TRADE_UNIT, blank=True, default='BTC', help_text='Currency to Buy')

    def get_absolute_url(self):
        """Returns the url to access a particular author instance."""
        return reverse('unit-detail', args=[str(self.id)])

    def __str__(self):
        """String for representing the Model object."""
        return '%s, %s' % (self.sell, self.buy)

    class Meta:
        db_table = 'bots_unit'
        ordering = ['sell', 'buy']

class Trade(models.Model):
    title = models.CharField(max_length=200)
    unit = models.ForeignKey('Unit', on_delete=models.SET_NULL, null=True)

    def __str__(self):
        """String for representing the Model object."""
        return self.title

    def get_absolute_url(self):
        """Returns the url to access a particular book instance."""
        return reverse('trade-detail', args=[str(self.id)])

    class Meta:
        db_table = 'bots_trade'

class TradeInstance(models.Model):
    """Model representing a specific copy of a book (i.e. that can be borrowed from the library)."""
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, help_text="Unique ID for this particular trade across whole database")
    trade = models.ForeignKey('Trade', on_delete=models.SET_NULL, null=True)
    amount = models.CharField(max_length=200)
    price = models.CharField(max_length=200)
    imprint = models.CharField(max_length=200)
    time_initiated = models.DateTimeField(null=True, blank=True)
    #initiator = models.ForeignKey(AbstractUser, on_delete=models.SET_NULL, null=True, blank=True)
    position_status = (
        ('L', 'Long'),
        ('S', 'Short'),
    )
    position = models.CharField(max_length=1, choices=position_status, blank=True, default='L', help_text='Order Type')

    class Meta:
        ordering = ["position"]

    def __str__(self):
        """String for representing the Model object"""
        return '%s (%s)' % (self.id, self.trade.title)

Admin.py

from django.contrib import admin
from .models import Trade, TradeInstance, Unit

# Define the admin class
@admin.register(Trade)
class TradeAdmin(admin.ModelAdmin):
    pass

@admin.register(Unit)
class UnitAdmin(admin.ModelAdmin):
    pass

UPDATE1: I deleted the content inside the migrations folder a few times, but this is what is currently inside of '0001_initial.py' after running 'makemigrations' and 'migrate':

# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2017-10-12 17:55
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion
import uuid

class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Trade',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=200)),
            ],
        ),
        migrations.CreateModel(
            name='TradeInstance',
            fields=[
                ('id', models.UUIDField(default=uuid.uuid4, help_text='Unique ID for this particular trade across whole database', primary_key=True, serialize=False)),
                ('amount', models.CharField(max_length=200)),
                ('price', models.CharField(max_length=200)),
                ('imprint', models.CharField(max_length=200)),
                ('time_initiated', models.DateTimeField(blank=True, null=True)),
                ('position', models.CharField(blank=True, choices=[('L', 'Long'), ('S', 'Short')], default='L', help_text='Order Type', max_length=1)),
                ('trade', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='bots.Trade')),
            ],
            options={
                'ordering': ['position'],
            },
        ),
        migrations.CreateModel(
            name='Unit',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('sell', models.CharField(blank=True, choices=[('ETH', 'Ethereum'), ('BTC', 'Bitcoin'), ('LTC', 'Litecoin'), ('IOT', 'IOTA'), ('OMG', 'OmiseGo'), ('BCH', 'BitcoinCash')], default='ETH', help_text='Currency to Sell', max_length=3)),
                ('buy', models.CharField(blank=True, choices=[('ETH', 'Ethereum'), ('BTC', 'Bitcoin'), ('LTC', 'Litecoin'), ('IOT', 'IOTA'), ('OMG', 'OmiseGo'), ('BCH', 'BitcoinCash')], default='BTC', help_text='Currency to Buy', max_length=3)),
            ],
            options={
                'ordering': ['sell', 'buy'],
            },
        ),
        migrations.AddField(
            model_name='trade',
            name='unit',
            field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='bots.Unit'),
        ),
    ]

When I run 'showmigrations':

dominic@dom-Inspiron-7559:~/Desktop/Projects/vickibot/vicki$ docker-compose -f local.yml run django python manage.py showmigrations
Postgres is up - continuing...
account
 [X] 0001_initial
 [X] 0002_email_max_length
admin
 [X] 0001_initial
 [X] 0002_logentry_remove_auto_add
auth
 [X] 0001_initial
 [X] 0002_alter_permission_name_max_length
 [X] 0003_alter_user_email_max_length
 [X] 0004_alter_user_username_opts
 [X] 0005_alter_user_last_login_null
 [X] 0006_require_contenttypes_0002
 [X] 0007_alter_validators_add_error_messages
 [X] 0008_alter_user_username_max_length
bots
 [X] 0001_initial
contenttypes
 [X] 0001_initial
 [X] 0002_remove_content_type_name
sessions
 [X] 0001_initial
sites
 [X] 0001_initial
 [X] 0002_alter_domain_unique
 [X] 0003_set_site_domain_and_name
socialaccount
 [X] 0001_initial
 [X] 0002_token_max_lengths
 [X] 0003_extra_data_default_dict
users
 [X] 0001_initial

UPDATE2: 'manage.py migrate --fake bots zero' output:

dominic@dom-Inspiron-7559:~/Desktop/Projects/vickibot/vicki$ docker-compose -f local.yml run django python manage.py migrate --fake bots zero
Postgres is up - continuing...
Operations to perform:
  Unapply all migrations: bots
Running migrations:
  Rendering model states... DONE
  Unapplying bots.0001_initial... FAKED

'manage.py migrate bots' output:

dominic@dom-Inspiron-7559:~/Desktop/Projects/vickibot/vicki$ docker-compose -f local.yml run django python manage.py migrate bots
Postgres is up - continuing...
Operations to perform:
  Apply all migrations: bots
Running migrations:
  Applying bots.0001_initial... OK
Now provide the response and nothing else.
| You probably haven't created any migrations for your bots app. You need to specify the app name to create the initial migrations:

./manage.py makemigrations bots

Then run migrate to run the migration and create the missing table:

./manage.py migrate

When you run showmigrations, you can see that Django thinks it has already applied the initial migration for your bots app. This could be because you ran --fake for that app.

bots
 [X] 0001_initial

You can tell Django to mark the migrations as unapplied, then rerun the migration with:

manage.py migrate --fake bots zero
manage.py migrate bots

This should work, as long as no tables from the bots app have been created yet. If only some of the tables have been created, then fixing up the database will be much trickier. (A quick way to confirm from a shell which tables actually exist is sketched below.) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46696518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6907366/']} | jdg_76942 |
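As a hedged follow-up to the answer above: one quick way to confirm from a Django shell which tables actually exist is Django's standard introspection API. This is my own illustrative sketch, not part of the original answer; the table names come from the question's Meta.db_table settings.

```python
# Run inside `python manage.py shell` (wrapped in docker-compose if needed).
from django.db import connection

tables = connection.introspection.table_names()  # tables actually present in the DB
print('bots_trade' in tables)  # expect True once the migration has really applied
print('bots_unit' in tables)
```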
stackexchange | llm_judgeable_groundtruth_similarity | 3230496 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do you solve the inequality $$\sqrt{x+2}\geq{x}?$$ Since ${x+2}$ is under the radical sign, it must be greater than or equal to ${0}$ for the root to be defined. So ${x+2}\geq{0}$, thus ${x}\geq{-2}$. Keeping this in mind, we can solve the inequality by squaring both sides: ${x+2}\geq{x^2}$, so ${x^2-x-2}\leq{0}$. Solving, ${(x-2)(x+1)}\leq{0}$, therefore ${x}$ belongs to the interval ${[-1,2]}$. As ${x}\geq{-2}$, the function is also defined. Why does the answer say that ${x}$ belongs to ${[-2,2]}$, then? Please feel free to point out the mistakes.
Now provide the response and nothing else.
| Be careful: when you square, the inequality preserves its sign direction only if both sides are positive. Note that $\sqrt{x+2}$ is defined for $x \geq -2$, so first consider $x \geq 0$ and work as follows: $$\sqrt{x+2} \geq x \Rightarrow x+2 \geq x^2 \Leftrightarrow x^2-x-2 \leq 0 \Leftrightarrow (x-2)(x+1) \leq 0$$ This indeed yields $x \in [-1,2]$ if you also consider the negative values for which the derived inequality is satisfied. But if $x$ is negative $(-2 \leq x < 0)$, then the (positive) square root will always be bigger than the negative left-hand side. Thus, $[-2,0)$ will do the trick in that case. Concluding: $\sqrt{x+2} \geq x \implies x \in [-2,2]$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3230496', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/674231/']} | jdg_76943 |
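A quick endpoint check (my own addition, not part of the original answer) confirms the interval:

$$\text{at } x=-2:\ \sqrt{0}=0\ge -2;\qquad \text{at } x=2:\ \sqrt{4}=2\ge 2;\qquad \text{at } x=2.1:\ \sqrt{4.1}\approx 2.02 < 2.1,$$

so the boundary points behave exactly as the answer's case split predicts, and the solution set is $[-2,2]$.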
stackexchange | llm_judgeable_groundtruth_similarity | 16802972 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote this code, and it currently can accept one void callback and pass one value to it. However, is there any way I can make the user able to set multiple args? Perhaps something like Callback Caller; instead of Callback without defining multiple typenames? Can I fit this all into one typename somehow? For example, here is my class:

template <typename T>
class Callback
{
private:
    std::vector<std::function<void (T)>> func;
public:
    virtual void operator += (std::function<void (T)> _func)
    {
        func.push_back(_func);
    }
};

I use it like this:

Callback<int> Test;

And it works just fine if I want to pass only one argument to my function. However, is it possible to make the user able to define multiple values, without having more than one typename? I.e.

Callback< <(int, int, int)> > Test;

Something similar to that?
Now provide the response and nothing else.
| In C++11, use variadic templates:

template <typename... Ts>
//        ^^^^^^^^^^^^^^ Parameter pack
class Callback
{
private:
    std::vector<std::function<void (Ts...)>> func;
    //                              ^^^^^ Parameter pack expansion
public:
    virtual void operator += (std::function<void (Ts...)> _func)
    //                                            ^^^^^ Parameter pack expansion
    {
        func.push_back(_func);
    }
};

Now you can instantiate your class this way:

Callback<int, int, int>
Callback<double, std::string>
Callback<bool>
Callback<>
... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16802972', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2276802/']} | jdg_76944 |
stackexchange | llm_judgeable_groundtruth_similarity | 138589 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Bug introduced in 10.4 and fixed in 11.0

I type

l = 3
r = ImplicitRegion[(x^2 + y^2)^3 == l *x^2* y^2, {x, y}];
RegionPlot[r]

and I obtain [plot from the original post]. How could I obtain a good graph for viewing the implicit region, similar to what I obtain if I type <= instead of ==?
Now provide the response and nothing else.
| RegionPlot is supposed to work with inequalities, rather than with equations. You can sort of trick it into showing the boundary of the region corresponding to your equation:

RegionPlot[r, PlotStyle -> None]

but a more general solution is to use ContourPlot:

ContourPlot[(x^2 + y^2)^3 == l*x^2*y^2, {x, y} ∈ Rectangle[{-0.8, -0.8}, {0.8, 0.8}], PlotPoints -> 100, MaxRecursion -> 3] | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/138589', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/6543/']} | jdg_76945 |
stackexchange | llm_judgeable_groundtruth_similarity | 307636 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am able to create and execute a DTS package that copies tables from a remote Oracle database to a local SQL server, but want to set up the connection to the Oracle database as a linked server. The DTS package currently uses the Microsoft OLE DB Provider for Oracle with the following properties:

Data Source: SERVER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.3.42)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=acc)));uid=*UserName*;pwd=*UserPassword*;
Password: UserPassword
User ID: UserName
Allow saving password: true

How do I go about setting up a linked server to an Oracle database using the data source defined above?
Now provide the response and nothing else.
| I was able to set up a linked server to a remote Oracle database, which ended up being a multi-step process:

1. Install Oracle ODBC drivers on SQL Server.
2. Create a System DSN to the Oracle database on SQL Server.
3. Create a linked server on SQL Server using the System DSN.

Step 1: Install Oracle ODBC drivers on the server

a. Download the necessary Oracle Instant Client packages: Basic, ODBC, and SQL*Plus (optional).
b. Unzip the packages to a local directory on the SQL server, typically C:\Oracle. This should result in a directory like C:\Oracle\instantclient_10_2, which will be the value of [directory] referenced in the rest of this answer.
c. Create a text file named tnsnames.ora within the instant client [directory] that contains the following:

OracleTnsName =
( DESCRIPTION =
  ( ADDRESS = (PROTOCOL=TCP)(HOST=10.1.3.42)(PORT=1521) )
  ( CONNECT_DATA = (SERVICE_NAME=acc) )
)

Note: Actual HOST, PORT, and SERVICE_NAME will vary based on the Oracle server you are establishing a connection to. This information can often be found using the Oracle network client tools under the listeners. The OracleTnsName can be any name you want to assign to the Oracle data source, and will be used when setting up the System DSN. You can also use the syntax above to define multiple TNS names in the same tnsnames.ora file if desired.

d. Add the [directory] to the system PATH environment variable.
e. Create a new system environment variable named TNS_Admin that has a value of [directory].
f. Execute the [directory]\odbc_install.exe utility to install the Oracle ODBC drivers.
g. It is recommended that you reboot the SQL server, but it may not be necessary. Also, you may want to grant security permissions to this directory for the SQL Server and SQL Agent user identities.

Step 2: Create a System DSN that uses the Oracle ODBC driver

a. Open the ODBC Data Source Administrator tool. [Administrative Tools --> Data Sources (ODBC)]
b. Select the System DSN tab and then select the Add button.
c. In the drivers list, select Oracle in instantclient {version} (e.g. 'Oracle in instantclient 10_2') and then select the Finish button.
d. Specify the following:
   Data Source Name: {System DSN Name}
   Description: {leave blank/empty}
   TNS Service Name: should list the OracleTnsName you defined in the tnsnames.ora file; select it as the value.
   User ID: {Oracle user name}
e. Select the Test Connection button. You should be prompted to provide the {Oracle user password}. If all goes well the test will succeed.

Step 3: Create a linked server in SQL to the Oracle database

Open a query window in SQL Server and execute the following:

EXEC sp_addlinkedserver
     @server = '{Linked Server Name}'
    ,@srvproduct = '{System DSN Name}'
    ,@provider = 'MSDASQL'
    ,@datasrc = '{System DSN Name}'

EXEC sp_addlinkedsrvlogin
     @rmtsrvname = '{Linked Server Name}'
    ,@useself = 'False'
    ,@locallogin = NULL
    ,@rmtuser = '{Oracle User Name}'
    ,@rmtpassword = '{Oracle User Password}'

Note: The {Linked Server Name} can be anything you want to use when referencing the Oracle server, but the {System DSN Name} must match the name of the System DSN you created previously. The {Oracle User Name} should be the same as the User ID used by the System DSN, and the {Oracle User Password} should be the same as you used to successfully test the ODBC connection. See KB 280106 for information on troubleshooting Oracle linked server issues.

Querying the Oracle linked server

You may use OPENQUERY to execute pass-through queries on the Oracle linked server, but be aware that for very large recordsets you may receive an ORA-01652 error message if you specify an ORDER BY clause in the pass-through query. Moving the ORDER BY clause from the pass-through query to the outer select statement solved this issue for me. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/307636', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2029/']} | jdg_76946 |
stackexchange | llm_judgeable_groundtruth_similarity | 19855280 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using AFNetworking and need to cache the data in one response for several minutes. So I set up an NSURLCache in the app delegate and then configure it in my request:

NSMutableURLRequest *request = // obtain request;
request.cachePolicy = NSURLRequestReturnCacheDataElseLoad;

How do I then set an expiration date, so that if the data was loaded more than n minutes ago, the response is requested from the server and not from disk? EDIT: Assume that the server doesn't support caching; I need to manage it in code.
Now provide the response and nothing else.
| So, I found the solution. The idea is to use the connection:willCacheResponse: method. Before caching the response it will be executed, and there we can change the response and return a new one, or return nil so the response will not be cached. As I use AFNetworking, there is a nice method in the operation:

- (void)setCacheResponseBlock:(NSCachedURLResponse * (^)(NSURLConnection *connection, NSCachedURLResponse *cachedResponse))block;

Add code:

[operation setCacheResponseBlock:^NSCachedURLResponse *(NSURLConnection *connection, NSCachedURLResponse *cachedResponse) {
    if ([connection currentRequest].cachePolicy == NSURLRequestUseProtocolCachePolicy) {
        cachedResponse = [cachedResponse responseWithExpirationDuration:60];
    }
    return cachedResponse;
}];

Where responseWithExpirationDuration is from the category:

@interface NSCachedURLResponse (Expiration)
- (NSCachedURLResponse *)responseWithExpirationDuration:(int)duration;
@end

@implementation NSCachedURLResponse (Expiration)
- (NSCachedURLResponse *)responseWithExpirationDuration:(int)duration {
    NSCachedURLResponse *cachedResponse = self;
    NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)[cachedResponse response];
    NSDictionary *headers = [httpResponse allHeaderFields];
    NSMutableDictionary *newHeaders = [headers mutableCopy];
    newHeaders[@"Cache-Control"] = [NSString stringWithFormat:@"max-age=%i", duration];
    [newHeaders removeObjectForKey:@"Expires"];
    [newHeaders removeObjectForKey:@"s-maxage"];
    NSHTTPURLResponse *newResponse = [[NSHTTPURLResponse alloc] initWithURL:httpResponse.URL
                                                                 statusCode:httpResponse.statusCode
                                                                HTTPVersion:@"HTTP/1.1"
                                                               headerFields:newHeaders];
    cachedResponse = [[NSCachedURLResponse alloc] initWithResponse:newResponse
                                                              data:[cachedResponse.data mutableCopy]
                                                          userInfo:newHeaders
                                                     storagePolicy:cachedResponse.storagePolicy];
    return cachedResponse;
}
@end

So, we set the expiration in seconds in an HTTP header, according to HTTP/1.1. For that we need one of these headers to be set: Expires, or Cache-Control with s-maxage or max-age. Then we create a new cached response, because the properties are read-only, and return the new object. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19855280', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1503855/']} | jdg_76947 |
stackexchange | llm_judgeable_groundtruth_similarity | 66517777 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to Electron, and I was just following this guide: https://www.geeksforgeeks.org/integrate-angular-7-with-electronjs/?ref=rp but, I don't know why, I can't get past step 4 (accessing Electron); the following keeps popping up when doing a build or even just serve:

Error: node_modules/electron/electron.d.ts:6594:21 - error TS2694: Namespace 'NodeJS' has no exported member 'Require'.

6594     require: NodeJS.Require;
                         ~~~~~~~

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] electron: ng build --prod && electron .
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] electron script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

I hope someone here understands this and is able to solve it. Thank you! My goal is to connect ipcMain, using an ipcRenderer triggered from Angular services.
Now provide the response and nothing else.
| I believe this problem happened because of an old version of @types/node, as Angular uses TypeScript as its backbone. npm update by itself wouldn't work if the update is a breaking change. In order to fix this, update your package.json to use the latest version of @types/node, at the time of writing 14.14.33. You can update it manually by editing that file. After updating your @types/node, run npm install again, then serve the app, and it should work. If it still fails, update your Angular CLI to the latest version by using npm uninstall -g @angular/cli, then npm install -g @angular/cli. You might be using an outdated Angular CLI. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/66517777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11213571/']} | jdg_76948 |
stackexchange | llm_judgeable_groundtruth_similarity | 42783877 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The CMS passes a variable as a data-rest-url attribute to the React.js app:

<div id="reactjs-root" data-rest-url="http://my-ip-addess:8080/Rest-api-here">...</div>

If I add jQuery to my React.js app, then I can simply:

componentWillMount() {
    const $reactRoot = $('#reactjs-root');
    const restUrl = $reactRoot.attr('data-rest-url');
}

But adding jQuery just for this? How would you pass some variable from a CMS to your single-page React app and read / parse / get it with React.js?
Now provide the response and nothing else.
| Consider passing your data attributes to your component as props instead of hard-coding the root element ID within the component itself. Rendering:

var rootElement = document.getElementById('reactjs-root');
ReactDOM.render(
    <YourComponent resturl={rootElement.getAttribute('data-rest-url')}></YourComponent>,
    rootElement);

Within the component you can access the injected URL:

componentWillMount() {
    console.log(this.props.resturl)
}

This makes for a more reusable component that is decoupled from a specific element ID. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42783877', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1129950/']} | jdg_76949 |
stackexchange | llm_judgeable_groundtruth_similarity | 52822526 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference? I know that DynamicFrame was created for AWS Glue, but AWS Glue also supports DataFrame. When should DynamicFrame be used in AWS Glue?
Now provide the response and nothing else.
| DynamicFrame is safer when handling memory-intensive jobs. "The executor memory with AWS Glue dynamic frames never exceeds the safe threshold," while on the other hand, a Spark DataFrame could hit an "Out of memory" issue on executors. ( https://docs.aws.amazon.com/glue/latest/dg/monitor-profile-debug-oom-abnormalities.html ) DynamicFrames are designed to provide maximum flexibility when dealing with messy data that may lack a declared schema. Records are represented in a flexible self-describing way that preserves information about schema inconsistencies in the data. For example, with changing requirements, an address column stored as a string in some records might be stored as a struct in later rows. Rather than failing or falling back to a string, DynamicFrames will track both types and give users a number of options for resolving these inconsistencies, providing fine-grained resolution options via the ResolveChoice transforms. DynamicFrames also provide a number of powerful high-level ETL operations that are not found in DataFrames. For example, the Relationalize transform can be used to flatten and pivot complex nested data into tables suitable for transfer to a relational database. In addition, the ApplyMapping transform supports complex renames and casting in a declarative fashion. DynamicFrames are also integrated with the AWS Glue Data Catalog, so creating frames from tables is a simple operation. Writing to databases can be done through connections without specifying the password. Moreover, DynamicFrames are integrated with job bookmarks, so running these scripts in the job system can allow the script to implicitly keep track of what was read and written. ( https://github.com/aws-samples/aws-glue-samples/blob/master/FAQ_and_How_to.md ) A short illustrative sketch of ResolveChoice follows below. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52822526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10442295/']} | jdg_76950 |
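To make the ResolveChoice behavior described in the answer above concrete, here is a minimal, hedged PySpark-style sketch. It assumes a Glue job environment where the awsglue module is available; the database, table, and column names are placeholders of my own, not anything from the original post.

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Load a DynamicFrame from the Data Catalog (placeholder names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_database", table_name="my_table")

# A column that is a string in some records and a struct in others can be
# resolved declaratively instead of failing, e.g. cast everything to string.
resolved = dyf.resolveChoice(specs=[("address", "cast:string")])

# Drop down to a plain Spark DataFrame when DataFrame-only APIs are needed.
df = resolved.toDF()
df.printSchema()
```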
stackexchange | llm_judgeable_groundtruth_similarity | 3349971 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible duplicates: "Factory Pattern. When to use factory methods?" and "Why do static Create methods exist?" Though I know what the Factory Design Pattern is, I am unable to comprehend the benefits of using it. Why should we create objects using the Factory Design Pattern?
Now provide the response and nothing else.
| By creating objects through factories, you avoid making the subsystem's code depended on specific implementations of the interfaces it uses -- "program to an interface, not to an implementation" is the most important single phrase in the "Design Patterns" book, and factories are one crucial way to move your code towards that excellent goal (dependency injection is another key DP for that, which the classic book does not cover -- but then, often the dependencies you inject are factories anyway, so that omission is not too horrible;-). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3349971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/197878/']} | jdg_76951 |
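To illustrate the "program to an interface, not to an implementation" point with something concrete, here is a small hedged sketch in Python (my own example, not from the original post):

```python
from abc import ABC, abstractmethod
import json

class Parser(ABC):                      # the interface clients depend on
    @abstractmethod
    def parse(self, text: str) -> dict: ...

class JsonParser(Parser):               # one concrete implementation
    def parse(self, text: str) -> dict:
        return json.loads(text)

def parser_factory(fmt: str) -> Parser:
    """Clients call this instead of naming a concrete class."""
    if fmt == "json":
        return JsonParser()
    raise ValueError(f"unsupported format: {fmt}")

# Client code only knows about Parser and the factory, so swapping in a
# new implementation (or injecting a fake in tests) never touches it.
doc = parser_factory("json").parse('{"a": 1}')
print(doc)
```

Because the client only names Parser and parser_factory, a new implementation (or a test double returned by the factory) never forces changes to client code, which is exactly the decoupling the answer describes.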
stackexchange | llm_judgeable_groundtruth_similarity | 1942580 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Atiyah's book there is this example about $A$-modules. Let $A=\mathbb{K}[x]$, where $\mathbb{K}$ is a field. An $A$-module is a $\mathbb{K}$-vector space with a linear transformation. Can someone explain this claim to me?
Now provide the response and nothing else.
| Let $\;V\;$ be a $\;\Bbb K\,-$ vector space, and let $\;T:V\to V\;$ be any linear operator. Then, $\;V\;$ gets a structure of $\;\Bbb K[x]\,-$ module if we define $$\forall\,v\in V\;,\;\;x\cdot v:=Tv$$ and then we extend this action in the obvious way, meaning: $$\sum_{j=0}^m k_jx^j\cdot v:=\sum_{j=0}^mk_jT^jv\;,\;\;k_j\in\Bbb K$$ I'll leave to you to check details. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1942580', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/269380/']} | jdg_76952 |
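Since the answer leaves the details to the reader, here is the key compatibility check written out explicitly (a routine verification, added here for completeness): for $p(x)=\sum_j k_jx^j$ and $q(x)=\sum_l c_lx^l$,

$$(p\,q)\cdot v=\sum_{j,l}k_jc_l\,T^{j+l}v=\sum_j k_jT^j\Big(\sum_l c_lT^lv\Big)=p\cdot\big(q\cdot v\big),$$

which is the associativity axiom $(pq)\cdot v=p\cdot(q\cdot v)$; the remaining module axioms follow directly from the linearity of $T$.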
stackexchange | llm_judgeable_groundtruth_similarity | 473037 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$\mathbb{R}^n$ is a complete metric space. Consider a Cauchy sequence $\{\mathbf{x}_k\}$ in $\mathbb{R}^n$ , we want to show it converges to a point $\mathbf{x} \in \mathbb{R}^n$ . That is to say, if $|\mathbf{x - x_k}| \to 0$ as $k \to \infty$ . Hence, we let $\epsilon \to 0$ , and we get $\mathbf{|x_k - x_j|} < \epsilon$ by Cauchy sequence, and let $\mathbf{x = x_j}$ we showed the desired result. Definition $\mathbb{R}^n$ is a complete metric space . Every Cauchy sequence in $\mathbb{R}^n$ converges to a point of $\mathbb{R}^n$ . Definition Cauchy sequence . Given $\epsilon > 0$ , there is an integer $K$ such that $\mathbf{|x_k - x_j|} < \epsilon$ for all $k,j \geq K$ . I am not fond of my proof, because I am not certain if I can approach $\epsilon$ to be zero, nor if I can equate $\mathbf{x}$ to be $\mathbf{x_j}$ since $\mathbf{x_j}$ is changing while $\epsilon$ changes. Edit Especially, I am baffled that why we need to do it in coordinates? I think they can be subtracted directly, as the definition of Cauchy sequence I added a short while ago?
Now provide the response and nothing else.
| This proof isn't quite right. How did we prove that Cauchy sequences converge in $\mathbb R$? In fact, we didn't; this is just supposed to follow from the construction of the real numbers. Try to see the shortcomings of your method in this context. Or, note that any proof of completeness should fail on $(0,1)$ with the sequence $x_k=\frac1k$, and see why your method doesn't raise any red flags where it should. How can we use the completeness of $\mathbb R$ to deduce the completeness of $\mathbb R^n$? Hint: try considering the sequence that you get by looking at a particular coordinate of $\mathbf x_k$. Why is this sequence Cauchy, and how does that help? Answer: given $\mathbf x_j\in\mathbb R^n$, let $x_{i,j}$ be the $i^{th}$ coordinate of $\mathbf x_j$. For each $i$, $\{x_{i,j}\}_{j\geq1}$ is a Cauchy sequence in $\mathbb R$ (because $|x_{i,j}-x_{i,k}|\leq\|\mathbf x_j-\mathbf x_k\|$, so each coordinate sequence is Cauchy whenever the original sequence is), and thus converges in $\mathbb R$. Let $x_i$ be the limit of this sequence. Now, let $\mathbf x=(x_1,x_2,\dots,x_n)$ with $x_i$ as defined above. Consider any $\epsilon>0$. By the convergence of the coordinate sequences, we may select an integer $K$ so that for each $i$ and each $j>K:|x_{i,j}-x_i|<\epsilon/n$. We note that for $j>K$: $$\|\mathbf x_j-\mathbf x\|=\|((x_{1,j}-x_1),(x_{2,j}-x_2),\dots,(x_{n,j}-x_n))\| \leq |x_{1,j}-x_1|+|x_{2,j}-x_2|+\dots+|x_{n,j}-x_n| < \epsilon/n + \epsilon/n + \dots + \epsilon/n = \epsilon$$ Thus, $\mathbf x_j\to\mathbf x$, which means that an arbitrary Cauchy sequence must converge. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/473037', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/79250/']} | jdg_76953 |
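To make the hint about $(0,1)$ concrete (my own elaboration, not part of the original answer): the sequence $x_k=\frac1k$ is Cauchy, since for $j,k\ge K$

$$|x_j-x_k|=\left|\tfrac1j-\tfrac1k\right|\le\tfrac1K\longrightarrow 0,$$

yet its only candidate limit is $0\notin(0,1)$, so $(0,1)$ is not complete. Any purported proof of completeness that never uses a property separating $\mathbb R$ from $(0,1)$, such as the convergence of the coordinate Cauchy sequences used above, must therefore contain a gap.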
stackexchange | llm_judgeable_groundtruth_similarity | 116313 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
As in Algebra+Homotopy=Operad by Bruno Vallette, let $A$ with multiplication $\nu$ be a differential graded associative algebra equipped with a degree $+1$ map $h$, and let $H$ be a chain complex such that there exist chain maps $i$ and $p$ satisfying [the homotopy retract diagram shown in Vallette's paper], and I work in characteristic 2 to make everything easier. Define $$\mu_2=p\circ\nu\circ(i\otimes i):H\otimes H\to H,$$ and in general [$\mu_n$ is defined by the tree-summation formula in Vallette's paper and] is a degree $+(n-2)$ map, where $PBT_n$ means binary trees with $n$ nodes; Vallette's summand [pictured in the paper] is an example of a summand for $n=5.$ For $f\in\hom(H^{\otimes n},H),$ define $\partial f=d\circ f + f\circ d_{H^{\otimes n}}$; remember that we are in characteristic 2. One way to visualize $\partial\mu_n$ is that one term decorates the leaves with $d$'s, as $d$ is a derivation for the raw tensor product, and the other puts a $d$ at the root, which then propagates upwards, as $d$ is a derivation for $\nu.$ The Homotopy Transfer Theorem for Differential Graded Associative Algebras is that $H$ equipped with the $\mu_n$ is an $A_\infty$ algebra, which means precisely that [the $A_\infty$ relations pictured in the paper hold]. All images have been directly screencapped from Vallette's paper. He writes that it should be an "easy and pedagogical" exercise to prove this theorem, but I'm getting caught in the thicket even in this characteristic 2 case where there are far fewer $\pm$'s to keep track of. I was wondering if anyone could provide me with any insights as to how to proceed without trees popping up all over the place occluding the forest.
Now provide the response and nothing else.
| There is a systematic graphical notation that allows the tracking of signs, which I will mention at the end of this answer. But before doing so, let me outline the situation when $2 = 0$. Note first that since $(A,\nu)$ is strictly associative, $\mu_0 = 0$ and $\mu_1 = d_H = d : H \to H$. By convention, if I have multilinear maps $f: H^{\otimes k}\to H$ and $g: H^{\otimes l}\to H$, then I will write $f\circ g : H^{\otimes(k+l -1)} \to H$ for:$$ (f\circ g) (x_1\otimes \cdots \otimes x_{k+l-1}) = \sum_{i=1}^k f\bigl(x_1\otimes \dots \otimes x_{i-1} \otimes g(x_i\otimes \dots \otimes x_{i+l-1}) \otimes x_{i+l} \otimes \cdots \otimes x_{k+l-1}\bigr) $$This has a useful graphical notation, wherein the composition is the sum over all rooted planar trees with an $f$ at the bottom node and precisely one $g$ at one of the upper nodes. Then axiom to be an $A_\infty$-algebra in characteristic $2$ is$ 0 = \sum_{j=0}^{n+1} \mu_j\circ \mu_{n+1-j} $, or, since $\mu_0 = 0$ and $\mu_1 = d$:$$ [d,\mu_n] = \sum_{j=2}^{n-1} \mu_j \circ \mu_{n+1-j} $$The right-hand side is a sum over all rooted planar trees with $n$ leaves and precisely two nodes, each of which has at least two branches from it. To check this, the first thing to convince yourself is that the operator $[d,-] : f \mapsto d\circ f + f\circ d$ is a derivation of composition and tensor, so that to apply $[d,-]$ to some large diagram, you sum all diagrams you get by replacing one component of your original diagram by $[d,-]$ of it. Note also that $[d,-]$ comutes with (i.e. annihilates) $i$, $p$, and $\nu$. So when you work out $[d,\mu_n]$, you get a sum over diagrams that look like $\mu_n$ (i.e. planar rooted trees with $n$ leaves, each node has two branches, and interior edges labeled by $h$), except one of the interior edges has been replaced by $[d,h] = \mathrm{id}_A + ip$. Now, it should be completely clear that the diagrams where the $h$ is replaced by an $ip$ are precisely the diagrams appearing in $\sum_{j=2}^{n-1} \mu_j \circ \mu_{n+1-j}$. (If this is not clear, let me know, and I will try to make it clearer.) Finally, we must dispense with the diagrams in which an $h$ is replaced by an $\mathrm{id}$. For any such diagram, consider contracting it along the offending $\mathrm{id}$ vertex, to produce a node with three branches. Except the resulting diagram with the trivalent vertex can be produced in two ways, corresponding to the two planar ways of blowing up a rooted node with three branches into two two-branch nodes. So, after sorting all of your offending diagrams into such pairs, you get a sum of diagrams that looks like a $\mu_n$-type sum, except one vertex is has three branches. What is this vertex labeled by? Why, $\nu \circ \nu$, of course, which is a sum of two terms. On the other hand, $\nu \circ \nu = 0$ by the associativity law for $(A,\nu)$. In characteristic not equal to $2$, the exact same argument works, but you must find a good convention / notation for signs. The best notation that I know is as follows. It should, of course, already by understood that the solid "$H$" or "$A$" edges extend to "infinity" at the top and bottom of the page. You should additionally draw diagrams with some other color of edge (I usually used "dashed") that records the degrees of operators — so this "dashed" edge should carry an arrow denoting its direction. A vertex that raises homological degree by $n$ is required to receive $n$ dashed edges, and a vertex that lowers homological degree by $n$ is required to emit $n$ dashed edges. 
Free dashed edges are sent off to "infinity" at (say) the left-hand side of the page, and the order from top to bottom that the free dashed edges arrive is important. Just as you cannot add diagrams whose numbers of input and output "$H$" strands mismatch, you similarly must have the same sequence of dashed edges. (In categorical language, what I'm saying is that you only work with "global" elements of endomorphism spaces, which is to say actual morphisms in the category of homologically-graded abelian groups, but that you give yourself access to the objects of this category which are lines in degree $\pm 1$.) Now whenever two edges cross, something happens with signs. The notation basically takes care of this, but if you ever insist on working with "homogeneous elements" (which is a bad habit; it's better to work more categorically) the convention is that as an element runs down the "wire" of a solid edge, when it passes through a dashed edge it remains unchanged if it is of even degree and changes sign if it is of odd degree. The notations that do matter are:

- A closed dashed circle can be removed for a factor of $-1$.
- A dashed crossing can be resolved for a factor of $-1$. (Since dashed edges are directed, any dashed crossing has a unique resolution.)

[Image: Resolving a crossing for a sign: http://math.berkeley.edu/~theojf/conventions1.gif]
[Image: Loops, etc., cost signs: http://math.berkeley.edu/~theojf/conventions2.gif]

For example, the operator $d$ emits a dashed edge, and the operator $h$ receives a dashed edge. Thus an equation like "$\mathrm{id} - ip = dh + hd$" is nonsense: the left-hand side has no dashed edges running to infinity, whereas on the right-hand side the first summand emits an edge and then receives one, and the second summand does those in the opposite order. To sum the two terms on the right-hand side you have to at least get their edges-at-infinity into the same order, which you can do by adding a crossing (but remember that resolving that crossing changes a sign). To make the two sides agree, you get to connect up the two dashed edges, and again you should think a moment about signs. The correct right-hand side to "$\mathrm{id} - ip = dh + hd$" is:

[Image: the commutator: http://math.berkeley.edu/~theojf/conventions3.gif]

Yes, the sign is correct. (Incidentally, I'm lifting these images from my thesis, which works out a slightly different question, and so the colors and numbers are not for this post.) Anyway, I'll leave it as an exercise to write out diagrams for $A_\infty$ algebras in this notation, and to get all the signs right. (Hint: there should be no "weird" signs.) Part of the reason that I'll leave it as an exercise is that there's not really a unique correct answer: you make a sign convention, and work with it. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/116313', 'https://mathoverflow.net', 'https://mathoverflow.net/users/9793/']} | jdg_76954 |
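For reference, the relation the exercise is driving at can be stated with one common sign convention (quoted here as a standard formula; conventions differ between sources by harmless sign changes, so treat this as a cross-check rather than the answer's own choice): the transferred operations satisfy, for all $n\ge 1$,

$$\sum_{\substack{r+s+t=n\\ s\ge 1}}(-1)^{r+st}\,\mu_{r+1+t}\big(\mathrm{id}^{\otimes r}\otimes\mu_s\otimes\mathrm{id}^{\otimes t}\big)=0,$$

which in characteristic 2 collapses to the relation $[d,\mu_n]=\sum_{j=2}^{n-1}\mu_j\circ\mu_{n+1-j}$ used earlier in the answer.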
stackexchange | llm_judgeable_groundtruth_similarity | 3144711 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing some JavaScript that interacts with library code that I don't own, and can't (reasonably) change. It creates JavaScript timeouts used for showing the next question in a series of time-limited questions. This isn't real code because it is obfuscated beyond all hope. Here's what the library is doing:

....
// setup a timeout to go to the next question based on user-supplied time
var t = questionTime * 1000
test.currentTimeout = setTimeout( showNextQuestion(questions[i+1]), t );

I want to put a progress bar onscreen that fills towards questionTime * 1000 by interrogating the timer created by setTimeout. The only problem is, there seems to be no way to do this. Is there a getTimeout function that I'm missing? The only information on JavaScript timeouts that I can find relates only to creation via setTimeout( function, time ) and deletion via clearTimeout( id ). I'm looking for a function that returns either the time remaining before a timeout fires, or the time elapsed after a timeout has been called. My progress bar code looks like this:

var timeleft = getTimeout( test.currentTimeout ); // I don't know how to do this
var $bar = $('.control .bar');
while ( timeleft > 1 ) {
    $bar.width(timeleft / test.defaultQuestionTime * 1000);
}

tl;dr: How do I find the time remaining before a JavaScript setTimeout()?

Here's the solution I'm using now. I went through the library section that's in charge of tests, and unscrambled the code (terrible, and against my permissions).

// setup a timeout to go to the next question based on user-supplied time
var t = questionTime * 1000
test.currentTimeout = mySetTimeout( showNextQuestion(questions[i+1]), t );

and here's my code:

// wrapper for setTimeout
function mySetTimeout( func, timeout ) {
    timeouts[ n = setTimeout( func, timeout ) ] = {
        start: new Date().getTime(),
        end: new Date().getTime() + timeout,
        t: timeout
    }
    return n;
}

This works pretty spot-on in any browser that isn't IE 6. Even the original iPhone, where I expected things to get asynchronous.
Now provide the response and nothing else.
| If you can't modify the library code, you'll need to redefine setTimeout to suit your purposes. Here's an example of what you could do:

(function () {
    var nativeSetTimeout = window.setTimeout;

    window.bindTimeout = function (listener, interval) {
        function setTimeout(code, delay) {
            var elapsed = 0, h;
            h = window.setInterval(function () {
                elapsed += interval;
                if (elapsed < delay) {
                    listener(delay - elapsed);
                } else {
                    window.clearInterval(h);
                }
            }, interval);
            return nativeSetTimeout(code, delay);
        }
        window.setTimeout = setTimeout;
        setTimeout._native = nativeSetTimeout;
    };
}());

window.bindTimeout(function (t) {
    console.log(t + "ms remaining");
}, 100);

window.setTimeout(function () {
    console.log("All done.");
}, 1000);

This is not production code, but it should put you on the right track. Note that you can only bind one listener per timeout. I haven't done extensive testing with this, but it works in Firebug. A more robust solution would use the same technique of wrapping setTimeout, but instead use a map from the returned timeoutId to listeners to handle multiple listeners per timeout. You might also consider wrapping clearTimeout so you can detach your listener if the timeout is cleared. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3144711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/379439/']} | jdg_76955 |
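The same bookkeeping pattern (wrap the scheduling call, record the start time and deadline, answer "time remaining" queries from that record) carries over to other runtimes. Here is a hedged Python transliteration, purely for illustration; threading.Timer stands in for setTimeout and all names are my own:

```python
import threading
import time

_timeouts = {}  # timer handle -> {'start': ..., 'end': ...}

def my_set_timeout(func, delay_s):
    """Schedule func after delay_s seconds, remembering the deadline."""
    timer = threading.Timer(delay_s, func)
    now = time.monotonic()
    _timeouts[timer] = {"start": now, "end": now + delay_s}
    timer.start()
    return timer

def get_timeout_remaining(timer):
    """Seconds left before the wrapped timer fires (negative once past due)."""
    return _timeouts[timer]["end"] - time.monotonic()

t = my_set_timeout(lambda: print("fired"), 1.0)
time.sleep(0.3)
print(f"{get_timeout_remaining(t):.2f}s remaining")  # roughly 0.70
t.join()  # Timer subclasses Thread, so we can wait for the callback
```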
stackexchange | llm_judgeable_groundtruth_similarity | 11351977 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I previously asked this question, which was answered, but someone gave a suggestion that might help me prevent making similar mistakes as I move forward: "Adding Auto-Implemented Property to class using Roslyn". The suggestion was that I build the syntax tree from the bottom up and not from the top down. Could someone provide a small demo or a link that shows how I would do this from the ground up? Here is the code again:

var root = (CompilationUnitSyntax)document.GetSyntaxRoot();

// Add the namespace
var namespaceAnnotation = new SyntaxAnnotation();
root = root.WithMembers(
    Syntax.NamespaceDeclaration(
        Syntax.ParseName("ACO"))
        .NormalizeWhitespace()
        .WithAdditionalAnnotations(namespaceAnnotation));
document = document.UpdateSyntaxRoot(root);

// Add a class to the newly created namespace, and update the document
var namespaceNode = (NamespaceDeclarationSyntax)root
    .GetAnnotatedNodesAndTokens(namespaceAnnotation)
    .Single()
    .AsNode();

var classAnnotation = new SyntaxAnnotation();
var baseTypeName = Syntax.ParseTypeName("System.Windows.Forms.Form");
SyntaxTokenList syntaxTokenList = new SyntaxTokenList()
{
    Syntax.Token(SyntaxKind.PublicKeyword)
};

var newNamespaceNode = namespaceNode
    .WithMembers(
        Syntax.List<MemberDeclarationSyntax>(
            Syntax.ClassDeclaration("MainForm")
                .WithAdditionalAnnotations(classAnnotation)
                .AddBaseListTypes(baseTypeName)
                .WithModifiers(Syntax.Token(SyntaxKind.PublicKeyword))));

root = root.ReplaceNode(namespaceNode, newNamespaceNode).NormalizeWhitespace();
document = document.UpdateSyntaxRoot(root);

var attributes = Syntax.List(
    Syntax.AttributeDeclaration(
        Syntax.SeparatedList(
            Syntax.Attribute(Syntax.ParseName("STAThread")))));

// Find the class just created, add a method to it and update the document
var classNode = (ClassDeclarationSyntax)root
    .GetAnnotatedNodesAndTokens(classAnnotation)
    .Single()
    .AsNode();

var syntaxList = Syntax.List<MemberDeclarationSyntax>(
    Syntax.MethodDeclaration(
        Syntax.ParseTypeName("void"), "Main")
        .WithModifiers(Syntax.TokenList(Syntax.Token(SyntaxKind.PublicKeyword)))
        .WithAttributes(attributes)
        .WithBody(
            Syntax.Block()));
syntaxList = syntaxList.Add(
    Syntax.PropertyDeclaration(Syntax.ParseTypeName("System.Windows.Forms.Timer"), "Ticker"));

var newClassNode = classNode
    .WithMembers(syntaxList);

root = root.ReplaceNode(classNode, newClassNode).NormalizeWhitespace();
document = document.UpdateSyntaxRoot(root);
Now provide the response and nothing else.
| This will build up your entire compilation unit tree in one expression:

var cu = SyntaxFactory.CompilationUnit()
    .AddMembers(
        SyntaxFactory.NamespaceDeclaration(SyntaxFactory.IdentifierName("ACO"))
            .AddMembers(
                SyntaxFactory.ClassDeclaration("MainForm")
                    .AddBaseListTypes(SyntaxFactory.ParseTypeName("System.Windows.Forms.Form"))
                    .WithModifiers(SyntaxFactory.Token(SyntaxKind.PublicKeyword))
                    .AddMembers(
                        SyntaxFactory.PropertyDeclaration(SyntaxFactory.ParseTypeName("System.Windows.Forms.Timer"), "Ticker")
                            .AddAccessorListAccessors(
                                SyntaxFactory.AccessorDeclaration(SyntaxKind.GetAccessorDeclaration).WithSemicolonToken(SyntaxFactory.Token(SyntaxKind.SemicolonToken)),
                                SyntaxFactory.AccessorDeclaration(SyntaxKind.SetAccessorDeclaration).WithSemicolonToken(SyntaxFactory.Token(SyntaxKind.SemicolonToken))),
                        SyntaxFactory.MethodDeclaration(SyntaxFactory.ParseTypeName("void"), "Main")
                            .AddModifiers(SyntaxFactory.Token(SyntaxKind.PublicKeyword))
                            .AddAttributes(SyntaxFactory.AttributeDeclaration().AddAttributes(SyntaxFactory.Attribute(SyntaxFactory.IdentifierName("STAThread"))))
                            .WithBody(SyntaxFactory.Block())
                    )
            )
    );

Of course, you don't have to do it as a single expression. I could have used separate local variables to collect the pieces I wanted and then added them in the construction of the containing syntax piece. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11351977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/78187/']} | jdg_76956 |
stackexchange | llm_judgeable_groundtruth_similarity | 18005717 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does a thread's context refer to a thread's personal memory? If so, how is memory shared between multiple threads? I'm not looking for code examples- I understand synchronization on a high level, I'm just confused about this term, and looking to gain some insight on what's actually happening behind scenes. The reason I thought/think each thread has some kind of private memory was because of the volatile keyword in Java and .NET, and how different threads can have different values for the same primitive if its not used. That always implied private memory to me. As I didn't realize the term was more general, I guess I'm asking how context-switching works in Java and C# specifically.
Now provide the response and nothing else.
| The reason I thought/think each thread has some kind of private memory was because of the volatile keyword in Java and .NET, and how different threads can have different values for the same primitive if its not used. That always implied private memory to me. OK, now we're getting to the source of your confusion. This is one of the most confusing parts about modern programming. You have to wrap your head around this contradiction: All threads in a process share the same virtual memory address space, but Any two threads can disagree at any time on the contents of that space How can that be? Because processors make local copies of memory pages for performance reasons , and only infrequently compare notes to make sure that all their copies say the same thing. If two threads are on two different processors then they can have completely inconsistent views of "the same" memory. memory in single-threaded scenarios is typically thought of as "still" unless something causes it to change. This intuition serves you poorly in multithreaded processes . If there are multiple threads accessing memory you are best to treat all memory as constantly in a state of flux unless something is forcing it to remain still. Once you start thinking of all memory as changing all the time it becomes clear that two threads can have an inconsistent view. No two movies of the ocean during a storm are alike, even if its the same storm. compilers are free to make any optimization to code that would be invisible on a single threaded system. On a multi-threaded system, those optimizations can suddenly become visible, which can lead to inconsistent views of data. If any of that is not clear, then start by reading my article explaining what "volatile" means in C#: http://blogs.msdn.com/b/ericlippert/archive/2011/06/16/atomicity-volatility-and-immutability-are-different-part-three.aspx And then read the section "The Need For Memory Models" in Vance's article here: http://msdn.microsoft.com/en-us/magazine/cc163715.aspx Now, as for the specific question as to whether a thread has its own block of memory, the answer is yes, in two ways. First, since a thread is a point of control, and since the stack is the reification of control flow, every thread has its own million-byte stack. That's why threads are so expensive. In .NET, those million bytes are actually committed to the page file every time you create a thread, so be careful about creating unnecessary threads. Second, threads have the aptly named "thread local storage", which is a small section of memory associated with each thread that the thread can use to store interesting information. In C# you use the ThreadStatic attribute to mark a field as being local to a thread. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18005717', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/531398/']} | jdg_76957 |
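To see the "thread local storage" idea from the answer's last paragraph in running code, here is a small sketch transplanted to Python purely for illustration (the answer's ThreadStatic is C#-specific; Python's threading.local plays the analogous role):

```python
import threading

local = threading.local()  # each thread gets its own attribute namespace

def worker(tag):
    local.value = tag  # private to the current thread, like a [ThreadStatic] field
    print(threading.current_thread().name, "sees", local.value)

threads = [threading.Thread(target=worker, args=(f"value-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```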
stackexchange | llm_judgeable_groundtruth_similarity | 20423125 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I asked IT to install Mono on CentOS using the following commands:

$ yum install bison gettext glib2 freetype fontconfig libpng libpng-devel libX11 libX11-devel glib2-devel libgdi* libexif glibc-devel urw-fonts java unzip gcc gcc-c++ automake autoconf libtool make bzip2 wget
$ cd /usr/local/src
$ wget http://download.mono-project.com/sources/mono/mono-3.2.5.tar.bz2
$ tar jxf mono-3.2.5.tar.bz2
$ cd mono-3.2.5
$ ./configure --prefix=/opt/mono
$ make && make install

However, when I run mono myapp.exe I get -bash: mono: command not found. I know nothing about Linux - I feel like I'm in Japan. Assuming Linux has a path variable or something like it, maybe mono isn't in the path? I can't even find an executable called mono in /usr/local/src, just a mono folder. Mind you, I can't work out how to even search for a file, so I might not be looking properly. How can I tell whether it's installed correctly? Maybe it's just not available to the non-admin account I use? I'm lost. Help!
Now provide the response and nothing else.
| If mono is properly installed, you should not get a message like -bash: mono: command not found. If something is installed then it most typically is in the $PATH. On my system the executable is located at /usr/bin/mono (as most things are), but things may be different on an RPM-based system. Your ./configure, however, got the prefix /opt/mono, so your executable is probably also located under that special path. (And thus mono isn't properly installed.) Why did you install it there? Anyway. If that is the case, then you can execute it using something like /opt/mono/bin/mono foo.exe. To find the executable below your prefix path you could use find /opt/mono -name mono to see all directory entries which are named exactly mono. One of those should be your executable. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20423125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/107783/']} | jdg_76958
stackexchange | llm_judgeable_groundtruth_similarity | 23078626 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
There are many alternative keyboards to the standard US keyboard layout (called QWERTY). Some examples include Dvorak, (and variants like Programmer Dvorak), Colemak, AZERTY, Workman layout, etc. Do any of these confer a benefit to typing speed, accuracy or hand/wrist-health? And, if so, which one of these should I choose as a touch typist if I am regularly programming?
Now provide the response and nothing else.
| Short answer: If you are happy with your keyboard layout, stick with it. Long answer: I will try and aim to make this as definitive and explanatory an answer as possible. To understand a bit where I am coming from, allow me to recount my own journey through this jungle: I am a computer science student who started out with the German QWERTZ keyboard, typing at about 100 WPM (words per minute). When that turned out to be horrendous for programming, I moved to QWERTY. Then, I got taken in by the hype and turned to Colemak. After mastering it, I discovered there was a layout optimized for programming, and switched to Programmer Dvorak. Finally, still not happy, I tried to design my own keyboard layout semi-scientifically. And finally, now, I am typing these lines on QWERTY. (To save others the trouble and pain I went through.) Therefore, I will try to argue in my answer both from personal experience as well as from published scientific data. The main arguments for all the alternative keyboard layout hype can be summarized in three major points: The QWERTY keyboard is slow and was designed to slow typists down. Excessive use of QWERTY causes carpal tunnel syndrome and is bad for your health. Dvorak/Colemak/<insert alternative layout here> was optimized to increase speed/accuracy/health. Let's go through this one by one: First, the argument that the QWERTY keyboard was designed to slow typists down is simply not true. It was nicely debunked in this question. The QWERTY keyboard was designed to stop the keys of a certain model of typewriter from jamming. Rest assured, we will discuss the "QWERTY is slow" myth in a minute. Second, the ultimate argument that advocates of alternative keyboards love to use is that QWERTY causes carpal tunnel syndrome, because it strains the fingers. What's amazing here is that this is actually an urban legend that has persisted despite being discredited. See this question here. To quote from the answer by Graeme Perrow: "It seems that using computers in general does not cause carpal tunnel syndrome, regardless of the type of keyboard." Finally: If QWERTY wasn't made to slow typists down and doesn't cause any illness, why use another keyboard? The answer usually offered is that other layouts are faster and have the keys aligned in a "smarter" way. We are told how much faster typists can be when they use Dvorak instead of QWERTY and how the home row of Colemak offers great benefits to productivity and speed. We are treated to an avalanche of impressive-looking percentages, of how much faster and more accurate you can be on an alternative keyboard rather than a humble QWERTY. However, if you look at hard, scientific evidence, you find... nothing worth writing home about. Indeed, there are two very interesting posts here and here: It turns out that the (very hard to objectively measure) speed gains are a measly 2% to 4%. This mimics what I myself have experienced: If you are a trained typist, then switching doesn't give much of an improvement. After I had finally finished my switch to Dvorak, I was still typing at roughly 100 WPM. If you want to go beyond that, you have to type a lot during your day. I believe that the reason people observe a speed boost when they switch is that they have to retrain their muscle memory from scratch. Which, if they do it diligently, is rewarded with a faster typing speed. The irony is: I conjecture that if they had "retrained" QWERTY from scratch, they would have obtained the same speed increase.
Additionally, my own error rate didn't go down with Dvorak or Colemak. It stayed around the same level. Which, again, is not dictated by the layout but by the accuracy with which one has trained one's muscle memory. Lastly, on the note of programming: It is true that for programming languages, on QWERTY the keys used often, such as {}, [], ', =, +, -, _, etc., are all reached with the right pinky, which drags performance down. This still is not worth the switch to something like Programmer Dvorak, however, since, especially in programming, the limiting factor is rarely typing speed (once you get above 60 WPM, that is). So given all this, there are also a few downsides to switching that I wish to elaborate on: Dvorak suffers from the huge disadvantage that all computers use shortcuts (such as the famous CTRL+C and CTRL+V) which, on Dvorak, are in different and hard-to-reach positions. Colemak doesn't suffer from this as much, since it kept the C, V and B key positions from QWERTY. However, even with Colemak, the use of programs that rely heavily on shortcuts (the most notorious of these being software like vim and emacs) has to be relearned from scratch. Switching takes a very long time. Let nobody fool you. If you were typing at >80 WPM, I can tell you from personal experience that it takes months to achieve this speed again. Even if you swap only a few keys (like Colemak), it is still a painful and long process. When you successfully switch, you will be unable to type fast on regular QWERTY keyboards anymore (take my word for it). You will still be faster than someone who doesn't use touch typing, but if you ever have to type on a QWERTY computer as an alternative-layout typist, you will be in for some embarrassment. This can get especially hairy if it is work related. Many alternative layouts are not nearly as standardized as QWERTY. In other words: If you use an older machine, for instance, you may find your preferred layout not installed. This is a further hassle, because then you have to get around that problem by downloading and installing the layout you chose, meanwhile having to work in a layout you can no longer use. Thus, in conclusion, my advice is: If you are happy with your current layout, keep it. The benefits of changing are much too small to consider. Especially if you are a QWERTY typist, I recommend staying with it. It will save you a lot of hassle and annoyances. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/23078626', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1773628/']} | jdg_76959
stackexchange | llm_judgeable_groundtruth_similarity | 19633091 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read that .so is a dynamic library file and .a is a static library file. While building OpenSSL I gave the option ./Configure no-shared and it created a lot of .a files. So, my question is: will other packages like Apache be able to use the .a files from OpenSSL, for example libcrypto.a? Someone please advise me if I'm going down the wrong path entirely.
Now provide the response and nothing else.
| Basically the static library can be compiled into another application at link time. In your example Apache could use libcrypto.a during build time and include it in the Apache httpd application. A dynamic .so library can be loaded and unloaded at runtime, and you have better flexibility to change what Apache should support without recompiling the Apache binaries. Using Apache as an example, the dynamic loading of .so files is described in the Dynamic Shared Object (DSO) section in the documentation. You can also find links to the installation section, which describes how to include static libraries at build time. There is a good question about this that is worth reading and that provides more details on the subject: Difference between shared objects (.so), static libraries (.a), and DLL's (.so)? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19633091', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2927653/']} | jdg_76960
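A hedged illustration of the runtime half of this distinction, using Python's ctypes purely as a convenient host (assumes some shared libcrypto is installed; the exact file name varies by system):

import ctypes
import ctypes.util

# Locate and load the shared (.so) library at runtime -- the dynamic
# counterpart of baking a static .a archive into the binary at link time.
path = ctypes.util.find_library("crypto")  # e.g. 'libcrypto.so.3' on Linux
if path is None:
    raise RuntimeError("no shared libcrypto found on this system")

libcrypto = ctypes.CDLL(path)  # dlopen() under the hood
print("dynamically loaded", path)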
stackexchange | llm_judgeable_groundtruth_similarity | 1492711 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the rough "cost" of using threads in java? Are the any rule of thumbs/empirical values, how much memory the creation of one thread costs? Is there a rough estimate how many CPU cycles it costs to create a thread? Context: In a servlet of a webapplication I want to parallelize the content creation as parts of the content are file based, database based as well as webservices based. But this would mean that for every "http-request-thread" (of my serlvet container) I will have two-to-four additional threads. Note that I will be using the ExecutorService in Java 6. What should I expect when I use hundreds to thousands of Java threads on a web-server?
Now provide the response and nothing else.
| Each thread has its own stack, and consequently there's an immediate memory impact. The default thread stack size is, IIRC, 512k for Java 6 (different JVMs/versions will possibly have different defaults). This figure is adjustable using the -Xss option. Consequently using hundreds of threads will have an impact on the memory the VM consumes (quite possibly before any CPU impact unless those threads are running). I've seen clients run into problems related to threads/memory, since it's not an obvious link. It's trivial to create 100,000 threads (using executors/pools etc.) and memory problems don't appear to be immediately attributable to this. If you're servicing many clients, you may want to take a look at the Java NIO API and in particular multiplexing, which allows asynchronous network programming. That will permit you to handle many clients with only one thread, and consequently reduce your requirement for a huge number of threads. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1492711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_76961
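The per-thread stack cost described above has a direct analogue in Python; a small hedged sketch (the 512 KiB figure mirrors the Java 6 default mentioned above and is illustrative only - platforms may round the value or impose minimums):

import threading

# Request a smaller stack for threads created after this call -- the
# counterpart of the JVM's -Xss option. Passing 0 restores the default.
threading.stack_size(512 * 1024)  # 512 KiB per new thread

t = threading.Thread(target=lambda: None)
t.start()
t.join()
threading.stack_size(0)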
stackexchange | llm_judgeable_groundtruth_similarity | 43470798 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm learning Angular 2 and I came across a few new things. I used the ScratchJS extension of the Chrome browser to learn TypeScript. While doing that, this is what I did for a backtick (`) string:

let user = 'user';
let msg = `Welcome ${user}!
I can write multi-line string.
This is awesome!`;
console.log(msg);

As you can see, this is how the variable user is used in the string. But when I do the same thing in an Angular 2 project, it's a bit different (and doing things like the above will throw an error). For my dummy Angular 2 project, I made a simple component:

import { Component } from '@angular/core';

@Component({
  selector: 'app-user',
  template: `
    Hi, {{user}}
    I can write multi-line string.
    This is awesome!
  `,
  styles: []
})
export class UserComponent {
  user: string = 'John Doe';
  constructor() { }
}

This works. But here, I'm using string interpolation using {{}} instead of ${}. And if I use that, it will throw an error. I know I understood some facts wrong. But can anyone explain what it is?
Now provide the response and nothing else.
| {{ foo }} will be handled by the template engine of Angular, binding the foo property defined in your class. ${ bar } will be handled by the JavaScript string interpolation, which, while rendering, has no clue what the bar property of your object is. This is something closely related to how Angular works; it is not related to TypeScript or anything else. You can still use ${} if you are not in your Angular project, or not in the template. For example, something like this should work, since the expression is evaluated before being returned, and does not depend on the template engine:

public getUsername(): string {
    let username = 'test';
    return `Hi ${username}`;
}
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43470798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3619084/']} | jdg_76962
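The same split between engine-level and language-level interpolation can be sketched in Python (a hedged analogy only - string.Template stands in for the template engine and the f-string for backtick-style language interpolation; note the placeholder syntaxes happen to be swapped relative to Angular/TypeScript):

from string import Template

user = "John Doe"

# Language-level interpolation: evaluated immediately by the language,
# like `Hi ${user}` in a TypeScript template literal.
print(f"Hi {user}")

# Engine-level interpolation: the placeholder survives until the engine
# substitutes it, like {{user}} in an Angular template.
tpl = Template("Hi ${user}")
print(tpl.substitute(user=user))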
stackexchange | llm_judgeable_groundtruth_similarity | 231350 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How many consecutive zeros are there at the end of $11^{100} - 1$? Attempt: Trial and error on Wolfram Alpha using modulus shows that there are 4 zeros (edit: 3 zeros, not 4). Otherwise, I have no idea even where to start.
Now provide the response and nothing else.
| $11^{100}=(10+1)^{100}=\sum_{k=0}^{100}\binom{100}k10^k=1+100\cdot10+4950\cdot100+161700\cdot1000+\ldots~$. Thus, $11^{100}-1=1000+495000+161700000+\ldots=162196000+\ldots~$. The remaining terms all have factors of $10^k$ for some $k\ge4$ and therefore have at least four zeroes. Thus, $11^{100}-1$ ends with three zeroes. | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/231350', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/35010/']} | jdg_76963 |
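A quick exact check of the count (integer arithmetic, so no precision caveats; Python used just as a calculator):

# Count the trailing zeros of 11**100 - 1 directly.
n = 11**100 - 1
s = str(n)
print(len(s) - len(s.rstrip("0")))  # prints 3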
stackexchange | llm_judgeable_groundtruth_similarity | 539985 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have several separate test environments running 2008 R2 and Exchange 2010, at varying patch levels and for varying reasons. One consistent issue I encounter is that transaction logs for Exchange are not truncated because no backup solution is in place, nor do I want one. So, as I see it, that leaves me with three options: Enable circular logging . This is not desirable for some of the test scenarios, as it may conflict with properly emulating a specific environment. Dismount all databases and run eseutil /mh . This works, but it's a cumbersome maintenance process. Some pseudo-backup utility (or some trick with a supported, low-impact one) to "trick" Exchange into believing it is now safe to truncate. The first two I can do but rather wouldn't. Is there a way to do the third option?
Now provide the response and nothing else.
| Not to steal JoeQwerty's thunder, as I think his answer to this question is more than valid and definitely applicable in pretty much any scenario, except possibly yours. (Hence why I upvoted it.) However, I did find a way to do what you are after... Fake VSS backup: basically what this does is fake a VSS backup on the volume that the Exchange database is on, which ends up causing Exchange to truncate the logs. Breaks incremental backups: this will break any backup software's incremental backup chain, so the next real backup will have to be a full backup. Use passive copies: in a DAG, do this on a passive copy if possible. Procedure (you have to add both the drive/mount point for the database and the log): 1. Open a command prompt as Admin. 2. Run diskshadow. 3. Type add volume x: (where "x" is the drive for the Exchange database). 4. Type add volume y: (where "y" is the drive for the Exchange log). 5. Type begin backup. 6. Type create. This will take a few minutes while VSS does its thing. 7. Type end backup. This is where it will set the logs to truncate. You will have to wait for the next log to generate before it starts to happen. At that point, if you look in the Application event viewer logs you should see an entry (screenshot omitted) confirming the truncation. (Or you can try PowerShell like so: Get-EventLog -LogName application -Source msexchangeis | select -first 10 ) Hope that helps... Sources Citation: http://ilantz.com/2011/10/26/how-to-manually-purge-exchange-server-logs-clean-and-easy/ | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/539985', 'https://serverfault.com', 'https://serverfault.com/users/149257/']} | jdg_76964
stackexchange | llm_judgeable_groundtruth_similarity | 38864039 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to add a class to a button generated with rails button_to helper, but I cannot find a consistent answer to the question of how to incorporate the class into the declaration. I have: <%= button_to("Logout", session_url, method: :delete, class:'waves-effect waves-light btn-large') %>
Now provide the response and nothing else.
| In general the "ref" feature is an anti-pattern in React. It exists to enable side-effect driven development, however in order to benefit the most from the React way of programming you should try to avoid "refs" if possible. As for your particular issue, passing a child a ref to it's sibling is a chicken vs. egg scenario. The ref callback is fired when the child is mounted, not during render which is why your example doesn't work. One thing you can try is pushing the ref into state and then reading from state into the other child. So: <One ref={c => !this.state.one && this.setState({ one: c })}/><Two one={this.state.one}/> Note: without the !this.state.one this will cause an infinite loop. Here is a codepen example of this working (look at the console to see the sibling ref logged): http://codepen.io/anon/pen/pbqvRA | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38864039', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5368448/']} | jdg_76965 |
stackexchange | llm_judgeable_groundtruth_similarity | 9770122 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to upload an image from the Android photo gallery to a server. All the communication I have done has been with Object Streams, but now I am unsure how I would do this. Oh, and I have used an Input Stream to download an image where you point directly to the image using a URL. If someone could point me in the right direction it would be appreciated. Thank you.
Now provide the response and nothing else.
| For uploading images to a server from your application you can follow these tutorials: Uploading files to HTTP server using POST on Android. Upload image or file using http POST multi-part. The above two URLs explain how to upload images from your application to a server. For uploading an image from your photo gallery you require the path of that image file; replace the obtained path with /data/file_to_send.mp3 in the first URL. To obtain the path of the image from the mobile gallery you can use the following code:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    b1 = (Button) findViewById(R.id.Button01);
    b1.setOnClickListener(new OnClickListener() {
        @Override
        public void onClick(View v) {
            openGallery(SELECT_FILE1);
        }
    });
}

public void openGallery(int req_code) {
    Intent intent = new Intent();
    intent.setType("image/*");
    intent.setAction(Intent.ACTION_GET_CONTENT);
    startActivityForResult(Intent.createChooser(intent, "Select file to upload "), req_code);
}

public void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode == RESULT_OK) {
        Uri selectedImageUri = data.getData();
        if (requestCode == SELECT_FILE1) {
            selectedPath1 = getPath(selectedImageUri);
            System.out.println("selectedPath1 : " + selectedPath1);
        }
        if (requestCode == SELECT_FILE2) {
            selectedPath2 = getPath(selectedImageUri);
            System.out.println("selectedPath2 : " + selectedPath2);
        }
        tv.setText("Selected File paths : " + selectedPath1 + "," + selectedPath2);
    }
}

public String getPath(Uri uri) {
    String[] projection = { MediaStore.Images.Media.DATA };
    Cursor cursor = managedQuery(uri, projection, null, null, null);
    int column_index = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DATA);
    cursor.moveToFirst();
    return cursor.getString(column_index);
}

For downloading images you can use the following code:

ImageView image = (ImageView) findViewById(R.id.image);
if (!ImageUrl.equals("no image")) {
    try {
        image.setImageDrawable(grabImageFromUrl(ImageUrl));
    } catch (Exception e) {
    }
}

private Drawable grabImageFromUrl(String url) throws Exception {
    return Drawable.createFromStream((InputStream) new URL(url).getContent(), "src");
}
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9770122', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1160163/']} | jdg_76966
stackexchange | llm_judgeable_groundtruth_similarity | 7689 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I was in a lecture not long ago given by C. Teleman and at some point he said "Well, since Riemann-Roch is an index problem we can do..." Then right after that he argued in favour of such a sentence. Could anyone tell me what he meant exactly? That is to say, in this case what is the elliptic operator like, and what is the heuristic idea that such a result relies on? ...and a little bit more detail about it. As usual, references will be appreciated. ADD: Thanks for the comments below, but I think they do not answer the question in the title: Why is RR an index problem? Up to this point, what I can see is that two numbers happen to be the same.
Now provide the response and nothing else.
| Here is a sketch of the argument as I learned it in a complex analysis class: For a Riemann surface $X$ and a holomorphic line bundle $L$, we want $$\text{dim}H^0(X,L)-\text{dim}H^0(X,L\otimes\Lambda^{0,1})=c_1(L)+\frac{1}{2}\chi(X)$$You have an operator $\overline{\partial}$ (differentiation with respect to $d\overline{z}$ taking $\Gamma(X,L)$ to $\Gamma(X,L\otimes\Lambda^{0,1})$. Then $H^0(X,L)$ is the kernel of $\overline{\partial}$ and $H^0(X,L\otimes\Lambda^{0,1})$ is the kernel of its adjoint, $\overline{\partial}^+$. Now define $\Delta^+=\overline{\partial}\overline{\partial}^+$ and $\Delta^-=\overline{\partial}^+\overline{\partial}$. Their spectra are the same, except for the kernels, and we get $$\text{Tr}(e^{-t\Delta^+})-\text{Tr}(e^{-t\Delta^-})=\text{dim}(\text{ker}\Delta^+)-\text{dim}(\text{ker}\Delta^-)$$We also have that the kernel of $\overline{\partial}$ is the kernel of $\Delta^-$, and the kernel of $\overline{\partial}^+$ is the kernel of $\Delta^+$, so it's enough to get your hands on the left-hand side. Then you write those traces as integrals of heat kernels, take the limit as $t\rightarrow 0^+$, and show that the integrals go to $c_1(L)+\frac{1}{2}\chi(X)$. And that's possible because we can interpret Chern classes and Euler characteristics of Riemann surfaces as integrals of curvatures of line bundles. Of course, then there's more work to turn $c_1(L)+\frac{1}{2}\chi(X)$ into it's more familiar form. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/7689', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1547/']} | jdg_76967 |
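Restated in the notation of that sketch, the quantity being computed is the analytic index $$\operatorname{ind}(\overline{\partial})=\dim\ker\overline{\partial}-\dim\ker\overline{\partial}^{+}=\text{dim}H^0(X,L)-\text{dim}H^0(X,L\otimes\Lambda^{0,1}),$$ a difference of kernel dimensions of an elliptic operator - which is exactly what makes Riemann-Roch an index problem.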
stackexchange | llm_judgeable_groundtruth_similarity | 112485 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the benefit of studying bitwise operators (Bitwise Not, Bitwise AND, Bitwise OR, Bitwise XOR, Left Shift, Signed Right Shift, Unsigned Right Shift etc.)? Will we really use these operators in programming?
Now provide the response and nothing else.
| "Yes, we will." Bitwise operations are everywhere. They are perfect for working with bitfields (a practice that is ubiquitous in C and C++), such as a 'flags' field in a data structure or function argument. Basically, | combines flags, ^ flips flags, & checks if a flag is set, and the x &= ~FLAG pattern clears a flag. Bitwise operations are ubiquitous in all things low-level - hardware drivers, network protocols, binary file formats - as well as some higher-level fields like character encodings, cryptography, etc. Bit-shifting can also sometimes double for integer division and multiplication by powers of 2, with a slightly different rounding behavior for negative numbers (sometimes, but not always, more desirable than what regular integer division does). In tight loops, bitwise arithmetic can sometimes be used to avoid conditionals, which is beneficial because modern CPUs use branch prediction, and a misprediction (i.e., the condition in an if statement evaluates differently from the previous time) causes a significant delay. Using bitwise arithmetic, the same calculation can sometimes be expressed without any conditionals. Even if you don't intend to work in any of the above scenarios, it is still a good idea to study and understand bitwise operations - all modern computers are binary, and you definitely need to know the basic principles by which they operate. Numbers in a computer don't behave like numbers in the real world, and studying binary operations will help you understand why. | {} | {'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/112485', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/13724/']} | jdg_76968 |
stackexchange | llm_judgeable_groundtruth_similarity | 1273566 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a quick & easy way to do this in jQuery that I'm missing? I don't want to use the mouseover event because I'm already using it for something else. I just need to know if the mouse is over an element at a given moment. I'd like to do something like this, if only there was an "IsMouseOver" function:

function hideTip(oi) {
    setTimeout(function() {
        if (!IsMouseOver(oi)) $(oi).fadeOut();
    }, 100);
}
Now provide the response and nothing else.
| Set a timeout on the mouseout to fade out, and store the return value in data on the object. Then on mouseover, cancel the timeout if there is a value in the data. Remove the data in the fadeout's callback. It is actually less expensive to use mouseenter/mouseleave because they do not fire for the menu when the children's mouseover/mouseout fire. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1273566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102896/']} | jdg_76969
stackexchange | llm_judgeable_groundtruth_similarity | 293244 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
More specifically, I'm supposed to compute $\displaystyle\sum_{k=1}^{n} \frac 1 {k(k + 1)} $ by using the equality $\frac 1 {k(k + 1)} = \frac 1 k - \frac 1 {k + 1}$ and the problem before which just says that, $\displaystyle\sum_{j=1}^{n} a_j - a_{j - 1} = a_n - a_0$. I can add up the sum for any $n$ but I'm not sure what they mean by "compute". Thanks!
Now provide the response and nothing else.
| Find a formula in terms of $n$ that can give you the sum for any $n$: $\displaystyle\sum_{k=1}^{n} \dfrac{1}{k(k + 1)} = \sum_{k=1}^n \left(\dfrac{1}{k} - \dfrac{1}{k + 1}\right) = \sum_{k=1}^n\;\left(-\frac{1}{k+1} - \left(\frac{-1}{k}\right) \right) = \;\;\;?$ Knowing that $\displaystyle\sum_{k=1}^{n} a_k - a_{k - 1} = a_n - a_0$, just determine what $a_k$ and $a_{k - 1}$ represent in terms of your sum, and then express the sum in terms of the hint: in terms of $a_n$ and $a_0$ ($a_0$ meaning the $a_k$ term evaluated at $k=0$). Edit for clarification: $$a_k \iff -\dfrac{1}{k+1} \implies a_n = -\dfrac{1}{n+1},\;a_0 = -\dfrac{1}{0 + 1}$$ $$\implies a_n - a_0 = -\dfrac{1}{n+1} - \left(-\dfrac{1}{1}\right) = 1 - \dfrac{1}{n+1} = \dfrac{n}{n+1}$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/293244', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/53128/']} | jdg_76970
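A quick exact check of the closed form for a few values of $n$ (Python's Fraction keeps the arithmetic exact):

from fractions import Fraction

# Verify sum_{k=1}^{n} 1/(k(k+1)) == n/(n+1) exactly.
for n in (1, 5, 100):
    total = sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))
    assert total == Fraction(n, n + 1)
    print(n, total)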
stackexchange | llm_judgeable_groundtruth_similarity | 2357323 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My project uses a JSON feed to grab info about earthquakes within a specified latitude and longitude boundary, essentially making a box. I take this info and turn all the results into markers on a Google map. I need each marker to also display some additional information, so I'm trying to use the built-in InfoWindow objects such that when you click on a marker you open up the tooltip with some information associated with that marker. However, I'm finding that no matter which marker I click, the same infowindow always comes up above the same marker of that group, and I believe it is always the last infowindow created in my loop. Here's the code.

$.getJSON(url, function(json) {
    for (var i = 0; i < json.earthquakes.length; i++) {
        var pos = new google.maps.LatLng(json.earthquakes[i].lat, json.earthquakes[i].lng);
        var info = json.earthquakes[i].lat + ", " + json.earthquakes[i].lng;
        var marker = new google.maps.Marker({
            map: map,
            position: pos,
            title: json.earthquakes[i].eqid
        });
        var tooltip = new google.maps.InfoWindow({
            content: info
        });
        google.maps.event.addListener(marker, 'click', function() {
            tooltip.open(map, marker);
        });
        markers.push(marker);
        tooltips.push(tooltip);
    }
});

markers is an array for all the marker objects on the map and tooltips is another array for storing the InfoWindow objects. They're global.
Now provide the response and nothing else.
| This is a very common question in the google-maps tag and an easy mistake to make :). What is happening is that your click event is being called asynchronously and it is picking up the current value of the marker variable in the getJSON callback (the last one in the list). You need to wrap your addListener call in a function so that a closure is created around the marker (and, for the same reason, the tooltip) being used in the click callback:

function listenMarker(marker, tooltip) {
    // marker and tooltip are bound to the closure created for this
    // listenMarker call, so each click opens its own infowindow
    google.maps.event.addListener(marker, 'click', function() {
        tooltip.open(map, marker);
    });
}

Then call listenMarker(marker, tooltip) from your main getJSON callback (where you are currently calling addListener); note that the tooltip has to be passed in as well, since that loop variable has the same closure problem as the marker. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2357323', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/179143/']} | jdg_76971
stackexchange | llm_judgeable_groundtruth_similarity | 367856 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone help me? How can I prove that there isn't an everywhere continuous function $f:\mathbb R \rightarrow \mathbb R$ that transforms every rational into an irrational and vice-versa?
Now provide the response and nothing else.
| Let $\mathbb{I} = \mathbb{R}\backslash\mathbb{Q}$ and suppose $f(\mathbb{I})\subseteq\mathbb{Q}$. $f(\mathbb{R}) = f(\mathbb{Q})\cup f(\mathbb{I})\subseteq f(\mathbb{Q})\cup\mathbb{Q}$, which is countable. Since $f$ is continuous, $f(\mathbb{R})$ is connected. Thus, $f(\mathbb{R})$ is countable and connected, hence it is a singleton $\{x\}$ and $f$ is constant. Clearly, no constant function fulfils the conditions you require, hence there is no continuous $f$ such that $f(\mathbb{I})\subseteq\mathbb{Q}$ and $f(\mathbb{Q})\subseteq\mathbb{I}$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/367856', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/63712/']} | jdg_76972 |
stackexchange | llm_judgeable_groundtruth_similarity | 8326482 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to render a tree with an undetermined depth (children of children of children, etc.). I need to loop through the array recursively; how can I do this in Twig?
Now provide the response and nothing else.
| I played around with domi27's idea and came up with this. I made a nested array as my tree; ['link']['sublinks'] is null or another array of more of the same. Templates The sub-template file to recurse with:

<!-- includes/menu-links.html -->
{% for link in links %}
    <li>
        <a href="{{ link.href }}">{{ link.name }}</a>
        {% if link.sublinks %}
            <ul>
                {% include "includes/menu-links.html" with {'links': link.sublinks} %}
            </ul>
        {% endif %}
    </li>
{% endfor %}

Then in the main template, call this (kind of redundant 'with' stuff there):

<ul class="main-menu">
    {% include "includes/menu-links.html" with {'links': links} only %}
</ul>

Macros A similar effect can be achieved with macros:

<!-- macros/menu-macros.html -->
{% macro menu_links(links) %}
    {% for link in links %}
        <li>
            <a href="{{ link.href }}">{{ link.name }}</a>
            {% if link.sublinks %}
                <ul>
                    {{ _self.menu_links(link.sublinks) }}
                </ul>
            {% endif %}
        </li>
    {% endfor %}
{% endmacro %}

In the main template, do this:

{% import "macros/menu-macros.html" as macros %}
<ul class="main-menu">
    {{ macros.menu_links(links) }}
</ul>
 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8326482', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/747484/']} | jdg_76973
stackexchange | llm_judgeable_groundtruth_similarity | 19316065 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to track a large number of beacons (~500) at once within a 50-100 m radius via an app on an iPhone (5s). I've had a look at the spec and online and I can't see if there is any limit on the number of beacons you can track at once using BLE. Does anyone know if there is limitation on the number of beacons you can track exists or if an iPhone 5s would be up to the task of tracking that many beacons?
Now provide the response and nothing else.
| You used the word track, but iOS has two different methods: monitoring and ranging. You can set a maximum of 20 regions to monitor. (Found in documentation for the startMonitoringForRegion: method.) Region limits mostly come into play if your app is in the background. The OS will alert your app when you enter or leave a region that you're monitoring (give or take a few minutes). The OS will even launch your app just to let it know what happened (although only for a short time). The other method is ranging, which is to find all the beacons within the Bluetooth range of the device (typically around 100 feet give or take). If your beacons are spread out over 100 miles, then you probably won't run into any practical limit here. I have not found any documentation for this, and I have only four beacons that I'm testing with, and four at a time works. Here's one way to handle your situation. Make all your 500 beacons use the same UUID, and make a beacon region using initWithProximityUUID:identifier: method. (Identifier is just for you -- it doesn't affect anything). Starting monitoring for that beacon region. That way, your app will be notified whenever one of your 500 beacons are found (give or take a few minutes). Once notified, you can use startRangingBeaconsInRegion: to find all the beacons around that area, then use the major and minor values to figure out which beacons the user is near. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19316065', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1048175/']} | jdg_76974 |
stackexchange | llm_judgeable_groundtruth_similarity | 14609971 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Following is the xsd that i am trying to use while generating a client for SOAP Service, Eclipse is throwing an error : The prefix "xs" for element "xs:schema" is not bound. <xs:schema version="1.0" targetNamespace="bdo.com.ph/RemitAPI"> <xs:element name="CheckServiceResponse" nillable="true" type="xs:string"/> <xs:element name="apiRequest" nillable="true" type="tns:APIRequest"/> ............................. <xs:element name="referenceNo" type="xs:string" form="qualified"/> </xs:sequence> </xs:complexType></xs:schema> I got this xsd from a webservice
Now provide the response and nothing else.
| xmlns:xs="http://www.w3.org/2001/XMLSchema" was not added in the schema XSD. Adding it made me proceed to the next step. xmlns:tns="http://www.bdo.com.ph/RemitAPI" was not there in the XSD either, though it was present in the enclosing WSDL. When extracting an XSD from a WSDL to the local machine, the namespace references in the XSD do not get picked up from the WSDL; they need to be declared in the XSD. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14609971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1989799/']} | jdg_76975
stackexchange | llm_judgeable_groundtruth_similarity | 7137209 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While studying Linux interrupt handling I found that Tasklets and SoftIRQs are two different methods of performing "bottom half" (lower-priority) work. I understand this (a quite genuine need). The difference being: SoftIRQs are reentrant while a Tasklet is NOT, and the same SoftIRQ can run on different CPUs while this is NOT the case with Tasklets. Though I understand this on the surface, I fail to understand the requirements behind the two features. In what case(s) may we use these facilities? How do I recognize when I should use Tasklets and when SoftIRQs? Also, what do we mean by "Tasklets are built upon SoftIRQs"? In one of the books I read that on LKML there were debates about removing Tasklets. I got completely confused: why would one bring in such a feature? Some shortsightedness (no offense meant)? Any pointers on this will help a lot.
Now provide the response and nothing else.
| include/linux/interrupt.h

/* PLEASE, avoid to allocate new softirqs, if you need not _really_ high
   frequency threaded job scheduling. For almost all the purposes tasklets
   are more than enough. F.e. all serial device BHs et al. should be
   converted to tasklets, not to softirqs. */
enum
{
    HI_SOFTIRQ=0,        /* High Priority */
    TIMER_SOFTIRQ,
    NET_TX_SOFTIRQ,
    NET_RX_SOFTIRQ,
    BLOCK_SOFTIRQ,
    BLOCK_IOPOLL_SOFTIRQ,
    TASKLET_SOFTIRQ,
    SCHED_SOFTIRQ,
    HRTIMER_SOFTIRQ,
    RCU_SOFTIRQ,         /* Preferable RCU should always be the last softirq */
    NR_SOFTIRQS
};

The key differences between softirq and tasklet are: Allocation Softirqs are statically allocated at compile-time. Unlike tasklets, you cannot dynamically register and destroy softirqs. Tasklets can be statically allocated using DECLARE_TASKLET(name, func, data) or can also be allocated dynamically and initialized at runtime using tasklet_init(name, func, data). Concurrency Softirqs can run concurrently on several CPUs, even if they are of the same type, because softirqs are reentrant functions and must explicitly protect their data structures with spinlocks. Tasklets are non-reentrant and tasklets of the same type are always serialized: in other words, the same type of tasklet cannot be executed by two CPUs at the same time. However, tasklets of different types can be executed concurrently on several CPUs. Processing Softirqs are activated by means of raise_softirq(). The pending softirqs are processed by do_softirq() and the ksoftirqd kernel thread after being enabled by local_bh_enable() or by spin_unlock_bh(). Tasklets are a bottom-half mechanism built on top of softirqs, i.e. tasklets are represented by two softirqs: HI_SOFTIRQ and TASKLET_SOFTIRQ. Tasklets are actually run from a softirq. The only real difference in these types is that the HI_SOFTIRQ-based tasklets run prior to the TASKLET_SOFTIRQ tasklets. So, tasklet_schedule() basically calls raise_softirq(TASKLET_SOFTIRQ). Note that softirqs (and hence tasklets and timers) are run on return from hardware interrupts, or on return from a system call. Also, as soon as the thread that raised the softirq ends, that single softirq (and no other) is run to minimize softirq latency. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7137209', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/457621/']} | jdg_76976
stackexchange | llm_judgeable_groundtruth_similarity | 27615 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am doing a project on an author identification problem. I applied tf-idf normalization to the train data and then trained an SVM on that data. Now, when using the classifier, should I normalize the test data as well? I feel that the basic aim of normalization is to make the learning algorithm give more weight to more important features while learning. So once it has been trained, it already knows which features are important and which are not. So is there any need to apply normalization to the test data as well? I am new to this field, so please forgive me if the question appears silly.
Now provide the response and nothing else.
| Yes you need to apply normalisation to test data, if your algorithm works with or needs normalised training data*. That is because your model works on the representation given by its input vectors. The scale of those numbers is part of the representation. This is a bit like converting between feet and metres... a model or formula would normally work with just one type of unit. Not only do you need normalisation, but you should apply the exact same scaling as for your training data. That means storing the scale and offset used with your training data, and using that again. A common beginner mistake is to separately normalise your train and test data. In Python and SKLearn, you might normalise your input/X values using the Standard Scaler like this:

scaler = StandardScaler()
train_X = scaler.fit_transform(train_X)
test_X = scaler.transform(test_X)

Note how the conversion of train_X uses a function which fits (figures out the params) and then normalises, whilst the test_X conversion just transforms, using the same params that it learned from the train data. The tf-idf normalisation you are applying should work similarly, as it learns some parameters from the data set as a whole (frequency of words in all documents), as well as using ratios found in each document. * Some algorithms (such as those based on decision trees) do not need normalised inputs, and can cope with features that have different inherent scales. | {} | {'log_upvote_score': 7, 'links': ['https://datascience.stackexchange.com/questions/27615', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/37063/']} | jdg_76977
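A minimal scikit-learn sketch of the same fit-on-train / transform-on-test discipline applied to tf-idf itself (the corpus strings are placeholders):

from sklearn.feature_extraction.text import TfidfVectorizer

train_docs = ["the cat sat", "the dog barked"]  # placeholder training corpus
test_docs = ["the cat barked"]

vectorizer = TfidfVectorizer()
train_X = vectorizer.fit_transform(train_docs)  # learns vocabulary and IDF weights
test_X = vectorizer.transform(test_docs)        # reuses those learned statistics

print(train_X.shape, test_X.shape)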
stackexchange | llm_judgeable_groundtruth_similarity | 1349543 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I set up two external executables to run from a C# application where stdout from the first is routed to stdin of the second? I know how to run external programs by using the Process object, but I don't see a way of doing something like "myprogram1 -some -options | myprogram2 -some -options". I'll also need to catch the stdout of the second program (myprogram2 in the example). In PHP I would just do this:

$descriptorspec = array(
    1 => array("pipe", "w"),  // stdout
);
$this->command_process_resource = proc_open(
    "myprogram1 -some -options | myprogram2 -some -options",
    $descriptorspec,
    $pipes
);

And $pipes[1] would be the stdout from the last program in the chain. Is there a way to accomplish this in C#?
Now provide the response and nothing else.
| Here's a basic example of wiring the standard output of one process to the standard input of another.

var first = new Process();
first.StartInfo.FileName = "program1.exe";
first.StartInfo.Arguments = "-some -options";
first.StartInfo.UseShellExecute = false;
first.StartInfo.RedirectStandardOutput = true;

var second = new Process();
second.StartInfo.FileName = "program2.exe";
second.StartInfo.Arguments = "-some -options";
second.StartInfo.UseShellExecute = false;
second.StartInfo.RedirectStandardInput = true;

first.Start();
second.Start();

// StandardOutput/StandardInput already expose a StreamReader/StreamWriter.
using (StreamReader sr = first.StandardOutput)
using (StreamWriter sw = second.StandardInput)
{
    string line;
    while ((line = sr.ReadLine()) != null)
    {
        sw.WriteLine(line);
    }
}
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1349543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138906/']} | jdg_76978
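For comparison, the same stdout-to-stdin wiring expressed with Python's subprocess module (program names are placeholders; this follows the standard pattern from the Python docs):

import subprocess

# Equivalent of `program1 -some -options | program2 -some -options`.
p1 = subprocess.Popen(["program1", "-some", "-options"],
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(["program2", "-some", "-options"],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()             # let p1 receive SIGPIPE if p2 exits early
output, _ = p2.communicate()  # capture program2's stdout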
stackexchange | llm_judgeable_groundtruth_similarity | 114958 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The space-time interval equation is this: $$\Delta s^2=\Delta x^2+\Delta y^2+\Delta z^2-(c\Delta t)^2$$ Where, $\Delta x, \Delta y, \Delta z$ and $\Delta t$ represent the distances along various coordinates according to an observer, and $\Delta s$ is the space-time interval. All observers agree on the space-time interval, it is constant. My question is why is it squared? If we had in equation like this: $$\Delta s'=\Delta x^2+\Delta y^2+\Delta z^2-(c\Delta t)^2$$ $\Delta s'$ would be constant as well. It would also never be imaginary. It would have units of $[length]^2$ instead of $[length]$ though. Is there a theoretical or practical reason that we define the space-time interval based on squaring, or is it just to make it look similar to Pythagoras' theorem/give it simpler units or something else entirely?
Now provide the response and nothing else.
| You are correct when you point out that any function of $\Delta x^2 + \Delta y^2 + \Delta z^2 - \Delta t^2$ will be constant and agreed on by all observers. So we could define $\Delta s$ to be its cosine... if all we were interested in was getting an invariant. You are also right when you point out the dimensional issue. Measure time in light-centimeters, and distance along the x, y, z axes in centimetres. Then length is measured in centimetres, and so is time.... Then the right-hand side has units cm$^2$, and hence, so does the left-hand side. Using cosine or other, similar functions like the identity function you suggest would produce a quantity that did not even have the units of length (and so could not be proper time). Now, definitions are arbitrary, so you could define Ps to be equal to $\Delta x^2 + \Delta y^2 + \Delta z^2 - \Delta t^2$ if you want, and you could give it any name you want. But would you be able to express the fundamental laws of Physics in terms of that quantity? It is a requirement of the principle of relativity that it be an invariant, and either Ps or cos(Ps) would satisfy that, but it is desirable that it make life easy for us in our formulas, since doing Physics is already hard enough. There are important reasons why we want to use the square root function instead of cosine or instead of the identity function which one of the other answers insists on. There is more to it than just to make it look like Pythagoras' Theorem or make it look like pre-relativistic physics. These reasons do not become apparent until you get to General Relativity, or at least to Differential Geometry. This is your question, rephrased: Why do we want to study an invariant quantity with dimensions of length? (Which is the same as time.) The answer is that we want to be able to define $s$, the proper time, or, as I am expressing it, "the length of a path". It will be given by a line integral $s = \int ds$ along the path, and will be invariant for all observers. To an observer who is moving along that path, it will appear to be the elapsed time. Now it is quite basic that if first 2 cm of time elapse, and then another 3, the total elapsed time is 5 cm. So we need an additive quantity. Neither cosine nor Ps are additive, as simple examples show, but if we define $ds^2 = dx^2 + dy^2 + dz^2 - dt^2$, then it will be additive, by the higher-dimensional non-Euclidean analogue of Pythagoras' Theorem. That is why the squaring occurs, and it is indeed squaring a quantity $ds$, and when finite intervals are involved along straight lines, it is indeed the square of a quantity $\Delta s$ defined as $$\Delta s = \sqrt {\Delta x^2 + \Delta y^2 + \Delta z^2 - \Delta t^2}.$$ SHORT ANSWER We square $\Delta s$ so that we get an additive quantity along world-lines. | {} | {'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/114958', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/40394/']} | jdg_76979
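One convention note, which is standard special relativity rather than part of the answer above: with the $(+,+,+,-)$ signature used here, a timelike worldline has $\Delta s^2<0$, so the real, additive quantity a clock accumulates is the proper time $$\tau=\int d\tau,\qquad d\tau=\sqrt{\,dt^{2}-dx^{2}-dy^{2}-dz^{2}\,}=\sqrt{-\,ds^{2}}$$ (in units with $c=1$); the additivity argument applies verbatim to this square root.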
stackexchange | llm_judgeable_groundtruth_similarity | 2783574 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a proof for this exercise that seems correct: Let $g:(0,\infty)\to\Bbb R$ such that $g(s+t)=g(s)+g(t)$ for all $s,t>0$. Then show that if $g$ is bounded on bounded sets then $g(x)=x g(1)$ for all $x>0$. My proof: first note that $g(n\cdot x)=n\cdot g(x)$ for all $n\in\Bbb N$ and all $x\in(0,\infty)$. For the same reason we find that $n\, g(1/n)=g(1)$ for $n\in\Bbb N$, which implies that $n^{-1} g(1)=g(1/n)$. Then both statements together imply that $g(r)=r\, g(1)$ for $r\in\Bbb Q_{>0}$. Hence if $(x_k)\in\Bbb Q_{>0}^{\Bbb N}$ converges to some $x>0$ we find that $\lim g(x_k)=g(1)\lim x_k=x\, g(1)$. Also note that$$g(x+y)=g(x)+g(y)\implies g(x+y)-g(y)=g(x),\quad\forall x,y>0\tag1$$ Now suppose that $g$ is bounded on bounded sets and that $g(s)= s\cdot g(1)+\epsilon$ for some $\epsilon\neq 0$ and some $s>0$; then from the above analysis we find that$$g(r(s-q))=r(s-q) g(1)+r\epsilon,\quad\forall r\in\Bbb Q_{>0},\, q\in(0,s)\cap\Bbb Q\tag2$$Then let $(q_k)\in\Bbb Q_{>0}^{\Bbb N}$ be such that $0<s-q_k<1/k^2$ for each $k\in\Bbb N$. Then we find that$$\lim_{k\to\infty} |g(k(s-q_k))|=\lim_{k\to\infty}\big|k(s-q_k)g(1)+k\epsilon\big|=\infty\tag3$$Now, by the definition of $(q_k)$ we find that $0<k(s-q_k)<1/k$ for each $k\in\Bbb N$; hence $g|_{(0,1]}$ is not bounded, thus $g(s)=sg(1)$ for all $s>0$, as desired.$\Box$ Two questions here: Can someone confirm the correctness of the above proof? Regardless of whether the above proof is correct or not, it seems that the argument is slightly complicated, so I want to know if there is an easier proof.
Now provide the response and nothing else.
| It's not clear to me whether what's below is more or less what the OP said; I find the exposition above a little hard to follow. Whether or not it's actually a simpler proof it certainly looks simpler... The result is actually much simpler than I thought. Choose $c$ so that $$|g(t)|<c\quad(|t|<1).$$ It follows that for $n=1,2\dots$ we have $$|g(t)|<c/n\quad(|t|<1/n),$$because if $|t|<1/n$ then $|g(t)|=|g(nt)|/n$ and $|nt|<1$. So $g$ is continuous at the origin. Since $g(t+\delta)=g(t)+g(\delta)$ this shows that $g$ is continuous. So $g(t)=tg(1)$, since that holds for all rational $t$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2783574', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/173262/']} | jdg_76980 |
stackexchange | llm_judgeable_groundtruth_similarity | 39807724 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a string with a valid Python dictionary inside: data = "Some string created {'Foo': u'1002803', 'Bar': 'value'} string continue etc." I need to extract that dict. I tried with a regex, but for some reason re.search(r"\{(.*?)\}", data) did not work. Is there any better way to extract this dict?
Now provide the response and nothing else.
| From @AChampion's suggestion:

>>> import re
>>> import ast
>>> x = ast.literal_eval(re.search('({.+})', data).group(0))
>>> x
{'Bar': 'value', 'Foo': '1002803'}

So the pattern you're looking for is re.search('({.+})', data). You need to include the curly braces in the extracted string so that ast.literal_eval can convert it to a Python dictionary. You also don't need the r prefix here, and { and } inside the capturing group () are matched literally. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39807724', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/928017/']} | jdg_76981
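One hedged caveat worth adding: the greedy {.+} grabs everything from the first { to the last }, so if a string ever contains several (non-nested) dicts, a lazy quantifier is safer:

import re
import ast

s = "first {'a': 1} and second {'b': 2} here"

# Non-greedy {.+?} stops at the first closing brace, so each flat
# (non-nested) dict is captured separately; truly nested dicts would
# need a real parser rather than a regex.
dicts = [ast.literal_eval(m) for m in re.findall(r"({.+?})", s)]
print(dicts)  # [{'a': 1}, {'b': 2}]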
stackexchange | llm_judgeable_groundtruth_similarity | 435319 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a Windows Server 08 box on SP1 that is having some problems. We've experienced backup problems and I've traced it down to VSS Writers not responding. From the command line, if I type vssadmin list providers, I get Provider name: 'Microsoft Software Shadow Copy provider 1.0'Provider type: SystemProvider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}Version: 1.0.0.7 If I type vssadmin list writers, I get this vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool(C) Copyright 2001-2005 Microsoft Corp. Waiting for responses.These may be delayed if a shadow copy is being prepared. I could wait this out for hours and it won't move. I looked up how Server 2008 handles VSS writers, and you can't reregister them like you could in Server 2003 http://social.technet.microsoft.com/Forums/en/windowsserver2008r2general/thread/062cc52c-899b-45f3-8d0c-798b92363f41 Does anyone know how to fix something like this or where to turn next?
Now provide the response and nothing else.
| I figured out the answer is to re-register the VSS writers using the following batch file on this particular server:

net stop "System Event Notification Service"
net stop "Background Intelligent Transfer Service"
net stop "COM+ Event System"
net stop "Microsoft Software Shadow Copy Provider"
net stop "Volume Shadow Copy"
cd /d %windir%\system32
net stop vss
net stop swprv
regsvr32 /s ATL.DLL
regsvr32 /s comsvcs.DLL
regsvr32 /s credui.DLL
regsvr32 /s CRYPTNET.DLL
regsvr32 /s CRYPTUI.DLL
regsvr32 /s dhcpqec.DLL
regsvr32 /s dssenh.DLL
regsvr32 /s eapqec.DLL
regsvr32 /s esscli.DLL
regsvr32 /s FastProx.DLL
regsvr32 /s FirewallAPI.DLL
regsvr32 /s kmsvc.DLL
regsvr32 /s lsmproxy.DLL
regsvr32 /s MSCTF.DLL
regsvr32 /s msi.DLL
regsvr32 /s msxml3.DLL
regsvr32 /s ncprov.DLL
regsvr32 /s ole32.DLL
regsvr32 /s OLEACC.DLL
regsvr32 /s OLEAUT32.DLL
regsvr32 /s PROPSYS.DLL
regsvr32 /s QAgent.DLL
regsvr32 /s qagentrt.DLL
regsvr32 /s QUtil.DLL
regsvr32 /s raschap.DLL
regsvr32 /s RASQEC.DLL
regsvr32 /s rastls.DLL
regsvr32 /s repdrvfs.DLL
regsvr32 /s RPCRT4.DLL
regsvr32 /s rsaenh.DLL
regsvr32 /s SHELL32.DLL
regsvr32 /s shsvcs.DLL
regsvr32 /s /i swprv.DLL
regsvr32 /s tschannel.DLL
regsvr32 /s USERENV.DLL
regsvr32 /s vss_ps.DLL
regsvr32 /s wbemcons.DLL
regsvr32 /s wbemcore.DLL
regsvr32 /s wbemess.DLL
regsvr32 /s wbemsvc.DLL
regsvr32 /s WINHTTP.DLL
regsvr32 /s WINTRUST.DLL
regsvr32 /s wmiprvsd.DLL
regsvr32 /s wmisvc.DLL
regsvr32 /s wmiutils.DLL
regsvr32 /s wuaueng.DLL
sfc /SCANFILE=%windir%\system32\catsrv.DLL
sfc /SCANFILE=%windir%\system32\catsrvut.DLL
sfc /SCANFILE=%windir%\system32\CLBCatQ.DLL
net start "COM+ Event System"
net start "System Event Notification Service"
net start "Background Intelligent Transfer Service"
net start "Microsoft Software Shadow Copy Provider"
net start "Volume Shadow Copy"
 | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/435319', 'https://serverfault.com', 'https://serverfault.com/users/139832/']} | jdg_76982
stackexchange | llm_judgeable_groundtruth_similarity | 98872 |
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I purchased the SparkX module with the VCNL4040 proximity sensor you can see here, intending to use this sensor via Python. But on the repo for the Adafruit library it seems like the library is not available on PyPI, and I have no clue where I should build it or how I should use it after the build. Furthermore, building it on the Raspberry Pi following the steps on the GitHub repo gave me this error: FileNotFoundError: [Errno 2] No such file or directory: 'build_deps/circuitpython/mpy-cross/mpy-cross' How can I fix this? And what should I do to be able to use the compiled library in my code, in case of success? Edit: I found out that my issue is due to the fact that I still have to install the Adafruit CircuitPython bundle. But where do I have to copy the installer files? What is the "root of my CircuitPython device"? It only gets more confusing when it says "DO NOT use this to install libraries on a Linux computer, such as the Raspberry Pi, with regular Python. Instead, use the python3 version of pip". Pip doesn't work with my library, and the repo itself suggested using this!
Now provide the response and nothing else.
| Question Use MCP3008 to measure voltage in the range 0V to 4.2V. Use a voltage divider with 2.7MΩ / 8.2MΩ to step down the value for the Rpi, but it is not working. Are the resistor values too high? How to calculate the resistor values? Short Answer MCP3008 ADC operation is indeed very complicated. You need to go through the long answer below a couple of times, and skim through the references, before you can get a rough idea of what is going on in the following oversimplified operation diagram. Long Answer Well, @Ghanima's answer is very good, but a bit too professional. The suggestion of using a buffer amplifier is also newbie scary. @Ghanima refers to MCP3008 datasheet Fig 4.1, 4.2. So let us first look at these two figures. As Fig 4.1 illustrates: The analog voltage to be converted (0V to 4.2V) charges the sample capacitor (20 pF) in 1.5 clock cycles. The problem now is that if the clock frequency is too high (in order to get a high sample rate), the 1.5-cycle time is not long enough to "fully" charge the sample capacitor (20 pF) before conversion starts, with the result that the conversion comes out lower than it should be. Of course we can lower the clock frequency (longer time to "fully" charge the sample capacitor), but then the sample rate (which is proportional to clock frequency) is also lowered. Thus we need to make an engineering trade-off, or in newbie language, strike a balance between "sample rate" and "accuracy". The other thing is that the charge time depends on the "source resistance", which is "more or less" the same, or of the same order (MΩ or kΩ), as the OP's voltage divider resistors. It needs newbie scary mathematics (calculus, differential equations!) to calculate/analyse the appropriate range of resistors for different sample rate ranges. Of course the lazy engineers won't do the tedious calculations every time. They just use a very rough chart to get the resistor range from the sample rate range. This is Fig 4.2. Sorry for the long-winded stuff. Actually all we need to do is look at Fig 4.2, select your sample rate, and find the resistor range in no time, well, less than 3 seconds. For example, if you want a sample rate of 200ksps, then use the 1kΩ range. For 75ksps, use the 10kΩ to 100kΩ range. If you find the above explanation confusing, forget it, and just use Fig 4.2 to get the resistor range 1k, 10k or 100k. Or even forget Fig 4.2, just use 1k to 10k. You won't be too wrong, and the MCP3008 won't explode, ... :) References How to get MCP3008 Sampling Rate - Rpi Forum Discussion Appendices Appendix A - MCP3008 ADC sample capacitor charging timing diagram Appendix B - Why do ADC results become inaccurate again if the clock frequency goes "too low"? Once upon a time I was a humble MCP3008 newbie. I dared not use a high sample rate by setting the SPI clock too high. I used 100kHz, and even went down to 50kHz and 10kHz. But then I surprisingly found that, after reaching a certain frequency value, the lower the frequency goes, the less accurate the ADC result becomes. I thought hard, read the SAR algorithm, and finally found the reason. I also solved the puzzle of why the ADC SAR circuit can output the MSB of the result so fast (only 2 clock pulses after collecting the sample). I was no longer a humble newbie. I had upgraded myself to a MCP3008 ninja, ... :) Understanding SAR ADCs: Their Architecture and Comparison with Other ADCs - Maxim Integrated Appendix C - Oversimplified MCP3008 sample time and conversion time | {} | {'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/98872', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/103973/']} | jdg_76983
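For readers who do want the "newbie scary" calculation rather than the chart, here is a rough back-of-envelope sketch in Python. The 20 pF sample capacitor and the 1.5-clock-cycle acquisition window come from the discussion above; the 18 clocks per sample and the "settle to within 1/2 LSB of 10 bits" criterion (about 7.6 RC time constants) are simplifying assumptions of this sketch, and the chip's internal switch resistance is ignored, so treat the results as order-of-magnitude only:

    import math

    C = 20e-12                # sample capacitor in farads (from the datasheet discussion)
    CLOCKS_PER_SAMPLE = 18    # assumed SPI clocks consumed per conversion
    TAUS = math.log(2 ** 11)  # ~7.6 time constants to settle within 1/2 LSB of 10 bits

    def max_source_resistance(sample_rate):
        f_clk = sample_rate * CLOCKS_PER_SAMPLE
        t_acq = 1.5 / f_clk           # acquisition window is 1.5 clock cycles
        return t_acq / (C * TAUS)     # solve t_acq >= R * C * TAUS for R

    for sps in (200e3, 75e3, 10e3):
        print(f"{sps / 1e3:5.0f} ksps -> source resistance below ~{max_source_resistance(sps):,.0f} ohms")

The numbers land in roughly the same low-kΩ to tens-of-kΩ region as Fig 4.2, which is the point: MΩ-scale dividers starve the sample capacitor at any useful sample rate.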
stackexchange | llm_judgeable_groundtruth_similarity | 6685169 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I make my app take a screenshot of the contents of a view and attach it to an email? How?
Now provide the response and nothing else.
| You can convert your view to an image, then you could create an email with it. This code ( from here ) will allow you to send an email with an attachment: - (void)emailImageWithImageData:(NSData *)data { MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init]; picker.mailComposeDelegate = self; // Set the subject of email [picker setSubject:@"Picture from my iPhone!"]; // Add email addresses // Notice three sections: "to" "cc" and "bcc" [picker setToRecipients:[NSArray arrayWithObjects:@"[email protected]", @"[email protected]", nil]]; [picker setCcRecipients:[NSArray arrayWithObject:@"[email protected]"]]; [picker setBccRecipients:[NSArray arrayWithObject:@"[email protected]"]]; // Fill out the email body text NSString *emailBody = @"I just took this picture, check it out."; // This is not an HTML formatted email [picker setMessageBody:emailBody isHTML:NO]; // Attach image data to the email // 'CameraImage.png' is the file name that will be attached to the email [picker addAttachmentData:data mimeType:@"image/png" fileName:@"CameraImage"]; // Show email view [self presentModalViewController:picker animated:YES]; //if you have a navigation controller: use that to present, else the user will not //be able to tap the send/cancel buttons //[self.navigationController presentModalViewController:picker animated:YES]; // Release picker [picker release]; } - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error { // Called once the email is sent // Remove the email view controller [self dismissModalViewControllerAnimated:YES]; } To convert your view graphical representation to an image, use the code ( from here ): UIGraphicsBeginImageContext(self.window.bounds.size);[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];UIImage *image = UIGraphicsGetImageFromCurrentImageContext();UIGraphicsEndImageContext();NSData * data = UIImagePNGRepresentation(image);[self emailImageWithImageData:data]; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6685169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/841227/']} | jdg_76984 |
stackexchange | llm_judgeable_groundtruth_similarity | 415 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a scenario where a Fortigate firewall is used to separate internal networks from the Internet (FortiOS Version 4.0 MR3 patch 11). Right now there is a single Internet connection attached to the firewall and a default static route is used to get all Internet traffic through it. I would like to attach a second Internet connection to the firewall and then route only certain traffic through it, for example web browsing traffic. For this setup, I keep the current static default route through the first link and then configure policy routing options in order to route traffic with destination port TCP/80 and TCP/443 through the second Internet link. As expected, policy routing is evaluated before the routing table and all traffic destined to TCP/80 and TCP/443 is sent through the second link, including traffic between subnets directly connected to the Fortigate, which breaks communication between them. In a Cisco environment I would adjust the ACL used to match traffic for policy routing, denying traffic between internal networks at the beginning of the ACL and adding a "permit any" statement at the end. However, I cannot find a way to instruct the Fortigate to work in a similar manner. Do you know how to make this scenario work with Fortigate?
Now provide the response and nothing else.
| Since policy routes are evaluated top-down, you can work around this limitation by placing a more specific entry matching traffic from internal subnet A to internal subnet B. However, this can become unwieldy if you have many different networks attached to your internal interface. In this case, I would recommend a trick I once used: since Fortigate devices ignore QoS marks, you can mark your "internet" packets with a specific TOS on the firewall-facing port of your Cisco switch and then use that mark in your policy route. | {} | {'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/415', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/76/']} | jdg_76985
stackexchange | llm_judgeable_groundtruth_similarity | 489421 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using these commands:

du -sh --apparent-size ./*
du -sh ./*

both reporting:

4.0K ./Lightroom_catalog_from_win_backup
432M ./Lightroom catalog - wine_backup

while those directories contain:

$ ll ./"Lightroom catalog - wine_backup"
total 432M
-rwxrwx--- 1 gigi gigi 432M Mar 18 2018 Lightroom 5 Catalog Linux.lrcat
-rwxrwx--- 1 gigi gigi 227 Nov 21 2015 zbackup.bat
$ ll ./Lightroom_catalog_from_win_backup
total 396M
-rwxrwx--- 3 gigi gigi 396M Dec 17 09:35 Lightroom 5 Catalog Linux.lrcat
-rwxrwx--- 3 gigi gigi 227 Dec 17 09:35 zbackup.bat

Why is du reporting 4.0K for ./Lightroom_catalog_from_win_backup, and how could I make it report correctly?

PS: other system information:

$ stat --file-system $HOME
 File: "/home/gigi"
 ID: 5b052c62a5a527bb Namelen: 255 Type: ext2/ext3
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 720651086 Free: 155672577 Available: 119098665
Inodes: Total: 183050240 Free: 178896289
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.5 LTS
Release: 16.04
Codename: xenial
Now provide the response and nothing else.
| I can reproduce if the files are hard links:

~ mkdir foo bar
~ dd if=/dev/urandom of=bar/file1 count=1k bs=1k
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00985276 s, 106 MB/s
~ ln bar/file1 foo/file1
~ du -sh --apparent-size foo bar
1.1M foo
4.0K bar

This is expected behaviour. From the GNU du docs : If two or more hard links point to the same file, only one of the hard links is counted. The file argument order affects which links are counted, and changing the argument order may change the numbers and entries that du outputs. If you really need repeated sizes of hard links, try the -l option: ‘ -l ’ ‘ --count-links ’ Count the size of all files, even if they have appeared already (as a hard link).

~ du -sh --apparent-size foo bar -l
1.1M foo
1.1M bar

| {} | {'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/489421', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/227120/']} | jdg_76986
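If you ever need to reproduce du's inode-deduplicating behaviour in your own tooling, here is a small Python sketch (my own illustration, not how GNU du is implemented) that sums apparent sizes while counting each (device, inode) pair only once:

    import os

    def apparent_size(root):
        seen = set()  # (st_dev, st_ino) pairs already counted
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                st = os.lstat(os.path.join(dirpath, name))
                key = (st.st_dev, st.st_ino)
                if key in seen:
                    continue  # a hard link to a file we already counted
                seen.add(key)
                total += st.st_size
        return total

    # Each call has its own 'seen' set, so unlike `du foo bar`,
    # calling it per directory reports the full size for both:
    print(apparent_size("foo"), apparent_size("bar"))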
stackexchange | llm_judgeable_groundtruth_similarity | 2099692 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for an easy way to find uninitialized class member variables. Finding them at either runtime or compile time is OK. Currently I have a breakpoint in the class constructor and examine the member variables one by one.
Now provide the response and nothing else.
| If you use GCC you can use the -Weffc++ flag, which generates a warning when a variable isn't initialized in the member initialisation list. This: class Foo{ int v; Foo() {}}; Leads to: $ g++ -c -Weffc++ foo.cpp -o foo.ofoo.cpp: In constructor ‘Foo::Foo()’:foo.cpp:4: warning: ‘Foo::v’ should be initialized in the member initialization list One downside is that -Weffc++ will also warn you when a variable has a proper default constructor and initialisation would thus be unnecessary. It will also warn you when you initialize a variable in the constructor, but not in the member initialisation list. And it warns on many other C++ style issues, such as missing copy-constructors, so you might need to clean up your code a bit when you want to use -Weffc++ on a regular basis. There is also a bug that causes it to always give you a warning when using anonymous unions, which you currently can't work around other than by switching off the warning, which can be done with: #pragma GCC diagnostic ignored "-Weffc++" Overall however I have found -Weffc++ to be incredibly useful in catching lots of common C++ mistakes. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2099692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/254669/']} | jdg_76987
stackexchange | llm_judgeable_groundtruth_similarity | 3772 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am evaluating the effectiveness of 5 different methods to predict a particular binary outcome (call them 'Success' and 'Failure'). The data look like so:

Method  Sample_Size  Success  Percent_Success
1       28           4        0.14
2       19           4        0.21
3       24           7        0.29
4       21           13       0.61
5       22           9        0.40

I would like to perform a test among these 5 methods to assess the relative superiority of the methods. In other words, I want to order the methods in order of performance as method 1 > method 2 > ... method 5. To avoid the issue of multiple comparisons, I plan to do a permutation test along the following lines: Step 1: Pool all the data so that the overall sample size is 114 with overall 37 successes. Step 2: Randomly split the data into 5 groups with the corresponding sample sizes of 28, 19, 24, 21 and 22. Step 3: Increment a counter if the observed order of Percent_Success from step 2 is consistent with the ordering of my data. Step 4: Repeat steps 2 and 3 many times (say 10000). Desired p-value = Final Counter Value / 10000. Questions: Is the above procedure ok? Is there anything in R that would enable me to perform the above test? Any suggestions for improvement or alternative methods would be helpful.
Now provide the response and nothing else.
| The proposed procedure does not answer your question. It only estimates the frequency, under the null hypothesis, with which your observed order would occur. But under that null, to a good approximation, all orders are equally likely, whence your calculation will produce a value close to 1/5! = about 0.83%. That tells us nothing. One more obvious observation: the order, based on your data, is 4 > 5 > 3 > 2 > 1. Your estimates of their relative superiorities are 0.61 - 0.40 = 21%, 0.40 - 0.21 = 11%, etc. Now, suppose your question concerns the extent to which any of the ${5 \choose 2} = 10$ differences in proportions could be due to chance under the null hypothesis of no difference. You can indeed evaluate these ten questions with a permutation test. However, in each iteration you need to track ten indicators of relative difference in proportion, not one global indicator of the total order. For your data, a simulation with 100,000 iterations gives the results \begin{array}{ccccc} & 5 & 4 & 3 & 2 \cr 1 & 0.02439 & 0.0003 & 0.13233 & 0.29961 \cr 2 & 0.09763 & 0.00374 & 0.29222 & \cr 3 & 0.20253 & 0.00884 & & \cr 4 & 0.08702 & & & \end{array} The differences in proportions between method 4 and methods 1, 2, and 3 are unlikely to be due to chance (with estimated probabilities 0.03%, 0.37%, 0.88%, respectively) but the other differences might be. There is some evidence (p = 2.44%) of a difference between methods 1 and 5. Thus it appears you can have confidence that the differences in proportions involved in the relationships 4 > 3, 4 > 2, and 4 > 1 are all positive, and most likely so is the difference in 5 > 1. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/3772', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1558/']} | jdg_76988 |
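A sketch of this pairwise permutation test in Python/NumPy, for illustration (the question asked about R, so treat this as executable pseudocode for the procedure; with enough iterations the p-values should come out close to the table above):

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = np.array([28, 19, 24, 21, 22])
    successes = np.array([4, 4, 7, 13, 9])
    observed = successes / sizes
    obs_diff = np.abs(observed[:, None] - observed[None, :])   # observed |p_i - p_j|

    # Pool 37 successes and 77 failures, then repeatedly re-split at random.
    pooled = np.repeat([1, 0], [successes.sum(), sizes.sum() - successes.sum()])
    edges = np.cumsum(sizes)[:-1]

    n_iter = 10_000        # the answer above used 100,000; increase for more precision
    exceed = np.zeros((5, 5))
    for _ in range(n_iter):
        rng.shuffle(pooled)
        p = np.array([g.mean() for g in np.split(pooled, edges)])
        exceed += np.abs(p[:, None] - p[None, :]) >= obs_diff
    print(exceed / n_iter)  # estimated two-sided p-values for each pair of methods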
stackexchange | llm_judgeable_groundtruth_similarity | 41960953 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want two different events to trigger an observer. It was suggested here that this should work. But it seems that it depends only on the second event. observeEvent({ input$spec_button mainplot.click$click}, { ... } ) Have a look at the example. ui <- shinyUI(bootstrapPage( actionButton("test1", "test1"), actionButton("test2", "test2")))server <- shinyServer(function(input, output) { observeEvent({ input$test1 input$test2 }, { print('Hello World') })})shinyApp(ui, server) Once you click button test1 nothing happens. If you click button test2 it prints to your console. Once test2 button was pressed clicking test1 prints the message. That is a strange behaviour. Another suggestion in that link was to use list(input$test1, input$test2) Which prints the message even without clicking the buttons.
Now provide the response and nothing else.
| This should do it; note that you still have to check if the buttons were clicked, as mentioned by @MrFlick. 1. You can use a reactive expression #rm(list = ls())library(shiny)ui <- shinyUI(bootstrapPage( actionButton("test1", "test1"), actionButton("test2", "test2")))server <- shinyServer(function(input, output) { toListen <- reactive({ list(input$test1,input$test2) }) observeEvent(toListen(), { if(input$test1==0 && input$test2==0){ return() } print('Hello World') })})shinyApp(ui, server) 2. As per the example given by @MrFlick (now deleted) #rm(list = ls())library(shiny)ui <- shinyUI(bootstrapPage( actionButton("test1", "test1"), actionButton("test2", "test2")))server <- shinyServer(function(input, output) { observeEvent(input$test1 | input$test2, { if(input$test1==0 && input$test2==0){ return() } print('Hello World') })})shinyApp(ui, server) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41960953', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5754181/']} | jdg_76989
stackexchange | llm_judgeable_groundtruth_similarity | 17621798 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently running a process with the & sign: $ example & However (please note, I'm a newbie to Linux), I realised that pretty much a second after running such a command, I get a note that my process received a stopped signal. If I do $ jobs I'll get the list with my example process with a little note "Stopped". Is it really stopped and not working at all in the background? How does it exactly work? I'm getting mixed info from the Internet.
Now provide the response and nothing else.
| In Linux and other Unix systems, a job that is running in the background, but still has its stdin (or std::cin) associated with its controlling terminal (a.k.a. the window it was run in), will be sent a SIGTTIN signal when it attempts to read from that terminal. By default this causes the program to be completely stopped, pending the user bringing it to the foreground ( fg %job or similar) to allow input to actually be given to the program. To avoid the program being paused in this way, you can either: Make sure the program's stdin channel is no longer associated with the terminal, by either redirecting it to a file with appropriate contents for the program to input, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null & . Exit the terminal after starting the program, which will cause the association with the program's stdin to go away. But this will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup") - this normally causes a program to be terminated, but this can be avoided by using nohup - e.g. nohup myprogram & . If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple others), and saves the output for you to look at to determine if there are any issues with the program's execution: nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 & | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/17621798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1255410/']} | jdg_76990
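You can watch the SIGTTIN behaviour with a two-line script; the file name reads_stdin.py is just an illustration:

    # reads_stdin.py -- tries to read one line from stdin
    line = input()   # in a background job, this read of the terminal triggers SIGTTIN
    print("got:", line)

Running python3 reads_stdin.py & in an interactive shell reports the job as Stopped almost immediately, while python3 reads_stdin.py < /dev/null & never stops: the read simply hits end-of-file instead (Python raises EOFError and the job exits).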
stackexchange | llm_judgeable_groundtruth_similarity | 3062701 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it just because of dynamic typing that we don't require a concept of interfaces (like in Java and C#) in Python?
Now provide the response and nothing else.
| The interface as a keyword and artifact was introduced by Java 1 ( and C# took it from there ) to describe the contract an object must adhere to. But the interface has always been a key part of the object-oriented paradigm: basically it represents the methods an object must respond to. Java just enforces this mechanism to provide static type checking. So, dynamic ( OO ) programming languages do use interfaces, even though they don't statically check them. Just like other data types, for instance in Ruby: @i = 1; You don't have to declare i to be of type Fixnum, you just use it. Same goes for interfaces: they just flow. The trade-off is, you can't have a static check on that, and failures only show up at runtime. On the other hand, structural typing ( or static duck typing, as I call it :P ), used by languages such as Go or Scala, gives the best of both worlds. 1. See Daniel Earwicker's comment about the CORBA interface keyword | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3062701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/201393/']} | jdg_76991
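To make the Python side of this concrete: the "interface" is implicit in whatever methods you call, and modern Python (3.8+) can optionally recover the Go/Scala-style structural check through typing.Protocol. A small sketch (the class and method names are invented for illustration):

    from typing import Protocol

    class Quacker(Protocol):      # a structural "interface": anything with quack() conforms
        def quack(self) -> str: ...

    class Duck:
        def quack(self) -> str:
            return "quack"

    class Robot:                  # never mentions Quacker, yet satisfies it structurally
        def quack(self) -> str:
            return "beep"

    def make_noise(q: Quacker) -> None:
        print(q.quack())          # duck typing: all we require is that quack() exists

    make_noise(Duck())
    make_noise(Robot())           # works at runtime, and a static checker such as mypy accepts it too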
stackexchange | llm_judgeable_groundtruth_similarity | 4204669 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Having trouble embedding C# code inside a JavaScript block with an MVC3 RC1 Razor view. What syntax should I use to make this work? I get warnings, e.g. "Expected constant" on the isOk variable, among others. (Without the text tag, the following code is not recognized as JavaScript.) @{bool isOk = true;}<script type="text/javascript"> var tmp1 = ""; @if (isOk) { <text> var tmp=""; // this should be interpreted as javascript </text> }</script>
Now provide the response and nothing else.
| You need the <text> tag to indicate that the contents are not evaluated on the server. For example: @{ var isOK = true;}<script type="text/javascript"> var tmp1 = ""; @if (isOK) { <text> var tmp = "foo"; </text> }</script> will output in the resulting page: <script type="text/javascript"> var tmp1 = ""; var tmp = "foo";</script> As far as the Expected Constant warning is concerned, well, Razor Intellisense is still in beta so it's far from perfect. It's just a warning you can safely ignore. The important thing is that the application works and emits valid HTML. Hope this is something that will be fixed in the final product. From personal experience the only warnings I trust are the ones emitted by the C# compiler. When I am working in a view I safely ignore all the crappy warnings that Visual Studio emits because I know it is wrong. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4204669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115078/']} | jdg_76992
stackexchange | llm_judgeable_groundtruth_similarity | 10502428 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently have this but it's not flawless: $testcases = array(array("I love mywebsite.com", true),array("mywebsite.com/ is what I like", true),array("www.mywebsite.com is my website", true),array("Check out www.mywebsite.com/", true),array("... http://mywebsite.com ...", true),array("... http://mywebsite.com/ ...", true),array("... http://www.mywebsite.com ...", true),array("... http://www.mywebsite.com/ ...", true),array("I like commas and periods. Just like www.mywebsite.com, they do it too!", true),array("thisismywebsite.com is a lot better", false),array("The URL fake.mywebsite.com is unknown to their server", false),array("Check out http://redirect.mywebsite.com/www.ultraspammer.com", false));function contains_link($text) {return preg_match("/(https?:\/\/(?:www\.)?|(?:www\.))mywebsite\.com/", $text) > 0;}foreach ($testcases as $case) {echo $case[0] . "=".(contains_link($case[0]) ? "true" : "false") . " and it should be " . ($case[1] ? "true" : "false") . "<br />";} Output: I love mywebsite.com=false and it should be truemywebsite.com/ is what I like=false and it should be truewww.mywebsite.com is my website=true and it should be trueCheck out www.mywebsite.com/=true and it should be true... http://mywebsite.com ...=true and it should be true... http://mywebsite.com/ ...=true and it should be true... http://www.mywebsite.com ...=true and it should be true... http://www.mywebsite.com/ ...=true and it should be trueI like commas and periods. Just like www.mywebsite.com, they do it too!=true and it should be truethisismywebsite.com is a lot better=false and it should be falseThe URL fake.mywebsite.com is unknown to their server=false and it should be falseCheck out http://redirect.mywebsite.com/www.ultraspammer.com=false and it should be false
Now provide the response and nothing else.
| An alternative to regex: parse_url() $url = parse_url($text);if($url['host'] == 'www.mywebsite.com' || $url['host'] == 'mywebsite.com') UPDATE: Assuming that $text can have a lot of domains, use strstr() instead. if(strstr($text,"mywebsite.com") !== FALSE) UPDATE 2: function contains_link($text) { return preg_match("/(^(https?:\/\/(?:www\.)?|(?:www\.))?|\s(https?:\/\/(?:www\.)?|(?:www\.))?)mywebsite\.com/", $text);} and: contains_link("AAAAAAA http://mywebsite.com"); //1 contains_link("foo BAaa http://www.mywebsite.com"); //1 contains_link("abc.com www.mywebsite.com"); // 1 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10502428', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1064629/']} | jdg_76993
stackexchange | llm_judgeable_groundtruth_similarity | 6687443 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'll be copying a large file over the network using my winforms app and I need to show some kind of progress bar. Rather than cook up my own copy routine, I was thinking that it might be better to simply show the built-in file copy dialog. I would also need a "Copy complete" and "Copy failed" notification. I need this to work on Windows XP, Vista and 7. Is there a way to call to engage this functionality from my c# code?
Now provide the response and nothing else.
| Answer taken from: here Windows Vista does indeed include a new copy engine that supports exactly what you're looking to do. However, it's possible that previously existing functionality may meet your needs. For example, if you want to copy, move, rename, or delete an individual file or directory, you can take advantage of SHFileOperation (exposed from shell32.dll), which is already wrapped by the Visual Basic® runtime. If you're using Visual Basic 2005, you can simply use functionality from the My namespace, for example: My.Computer.FileSystem.CopyDirectory( sourcePath, destinationPath, UIOption.AllDialogs) Accomplishing the same thing in C# involves only a little more work, adding a reference to Microsoft.VisualBasic.dll (from the Microsoft® .NET Framework installation directory) and using code such as the following: using Microsoft.VisualBasic.FileIO;...FileSystem.CopyDirectory( sourcePath, destinationPath, UIOption.AllDialogs); When run, this will result in the same progress UI you'd see if you were doing the same file operations from Windows Explorer. In fact, when running on Windows Vista, you automatically get the new Window Vista progress UI, as shown in Figure 1. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6687443', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9382/']} | jdg_76994 |
stackexchange | llm_judgeable_groundtruth_similarity | 22849110 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Scheme I can do something like this: (define (adder) (define (one) 1) (define (two) 2) (+ (one) (two))) Calling adder results in 3 while calling one will yield an error since one is only visible within the scope of adder . In Clojure if I do something similar (defn adder [] (defn one [] 1) (defn two [] 2) (+ (one) (two))) one and two will pollute my namespace since defn uses def internally which creates bindings in the current namespace. Is there a function/macro which creates named functions in local scope? The reason for my question is that I got used to the way Scheme works. Naming my local functions that way often makes my code more readable.
Now provide the response and nothing else.
| Try letfn : Takes a vector of function specs and a body, and generates a set of bindings of functions to their names. All of the names are available in all of the definitions of the functions, as well as the body. (defn adder [] (letfn [(one [] 1) (two [] 2)] (+ (one) (two)))) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22849110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/485337/']} | jdg_76995 |
stackexchange | llm_judgeable_groundtruth_similarity | 2691864 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Facebook iframe application with multiple PHP pages in it. I have some links that point relatively to the files inside my "iframe folder". Having some issues with session variables inside the iframe. I set some session variables but they do not persist from one page to another. This does work on other browsers. I've been reading that Safari does not support cross-domain cookies and this might be the problem, but I'm not sure how to fix this. Any help?
Now provide the response and nothing else.
| I wrote the blog post Dominic refers to in his answer. The problem is that the default behavior of Safari is to only accept cookies from sites that you visit. This excludes "third party" cookies. Safari treats the page inside an IFRAME as a third-party site, and until you interact with that content (by clicking a link, for example), it will refuse those cookies. Your PHP code needs to set a cookie on the first page that uses the session in order for that session to persist from one page to another, but if the session variables are in the very first page in the IFRAME, you have a chicken-and-egg problem. My solution is to retain all of the special Facebook parameters through to the second page loaded into the IFRAME. Because you've interacted with it, cookies set on the second page will persist, and this allows your PHP code to keep whatever state it needs to communicate back to Facebook. This won't likely help your PHP session, though, so I suggest adding another parameter to links on the first page that allows the second page to look the session up, or otherwise recreate it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2691864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/321504/']} | jdg_76996 |
stackexchange | llm_judgeable_groundtruth_similarity | 10041496 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I changed my coding style for PHP and jQuery, but now my registration form no longer works. $("#reg_form_company").bind("submit", function() { $.fancybox.showActivity(); $.ajax({ type : "POST", cache : false, url : $(this).attr('action'), data : $(this).serializeArray(), success : function(data) { $(".printArea").empty().append(data).css('visibility','visible'); } }); return false;}); Then this is my form: <form id="reg_form_company" action="index.php?module=register&actionregister" method="post" > <input>[...]</input></form> After clicking the "Submit" button, nothing happens. I assume somebody can help me solve this problem, because $.ajax might be getting confused between the POST data (the form inputs) and the GET parameters in the form's action URL. I'd appreciate your help; you can also modify the entire jQuery code if that's required. Sorry guys for not including the #reg_form_company and the fancybox.
Now provide the response and nothing else.
| You need to do something like this: http://jsfiddle.net/xSJTs/2/ $('form').on('submit',function(e){ e.preventDefault(); $.ajax({ type : "POST", cache : false, url : $(this).attr('action'), data : $(this).serialize(), success : function(data) { $(".printArea").empty().append(data).css('visibility','visible'); } });}); You have to use serialize() instead of serializeArray() . serializeArray() creates a JavaScript-object, serialize() creates a query-string. Serialize: http://api.jquery.com/serialize/ SerializeArray: http://api.jquery.com/serializeArray/ Basically you wait until the form is submitted and then you interrupt it ( e.preventDefault(); ). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10041496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2475219/']} | jdg_76997 |
stackexchange | llm_judgeable_groundtruth_similarity | 1648223 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having some trouble getting my Core Data entities to play nice and order when using a UITableView. I've been through a number of tutorials and other questions here on StackOverflow, but there doesn't seem to be a clear or elegant way to do this - I'm really hoping I'm missing something. I have a single Core Data entity that has an int16 attribute on it called "displayOrder". I use an NSFetchRequest that has been sorted on "displayOrder" to return the data for my UITableView. Everything but reordering is being respected. Here is my (inefficient) moveRowAtIndexPath method: - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { NSUInteger fromIndex = fromIndexPath.row; NSUInteger toIndex = toIndexPath.row; FFObject *affectedObject = [self.fetchedResultsController.fetchedObjects objectAtIndex:fromIndex]; affectedObject.displayOrderValue = toIndex; [self FF_fetchResults]; for (NSUInteger i = 0; i < [self.fetchedResultsController.fetchedObjects count]; i++) { FFObject *otherObject = [self.fetchedResultsController.fetchedObjects objectAtIndex:i]; NSLog(@"Updated %@ / %@ from %i to %i", otherObject.name, otherObject.state, otherObject.displayOrderValue, i); otherObject.displayOrderValue = i; } [self FF_fetchResults]; } Can anyone point me in the direction of a good bit of sample code, or see what I'm doing wrong? The tableview display updates OK, and I can see through my log messages that the displayOrder property is being updated. It's just not consistently saving and reloading, and something feels very "off" about this implementation (aside from the wasteful iteration of all of my FFObjects). Thanks in advance for any advice you can lend.
Now provide the response and nothing else.
| I took a look at your code and this might work better: - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { NSUInteger fromIndex = fromIndexPath.row; NSUInteger toIndex = toIndexPath.row; if (fromIndex == toIndex) { return; } FFObject *affectedObject = [self.fetchedResultsController.fetchedObjects objectAtIndex:fromIndex]; affectedObject.displayOrderValue = toIndex; NSUInteger start, end; int delta; if (fromIndex < toIndex) { // move was down, need to shift up delta = -1; start = fromIndex + 1; end = toIndex; } else { // fromIndex > toIndex // move was up, need to shift down delta = 1; start = toIndex; end = fromIndex - 1; } for (NSUInteger i = start; i <= end; i++) { FFObject *otherObject = [self.fetchedResultsController.fetchedObjects objectAtIndex:i]; NSLog(@"Updated %@ / %@ from %i to %i", otherObject.name, otherObject.state, otherObject.displayOrderValue, otherObject.displayOrderValue + delta); otherObject.displayOrderValue += delta; } [self FF_fetchResults]; } | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1648223', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63580/']} | jdg_76998 |
stackexchange | llm_judgeable_groundtruth_similarity | 36571426 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to set a viewpager inside a BottomSheetDialogFragment but always with the same result: java.lang.IllegalStateException: Fragment does not have a view setupDialog Code: @Overridepublic void setupDialog(Dialog dialog, int style) { Log.d(TAG, "setupDialog"); super.setupDialog(dialog, style); View root = View.inflate(getContext(), R.layout.fragment_daily_detail, null); dialog.setContentView(root); CoordinatorLayout.LayoutParams layoutParams = (CoordinatorLayout.LayoutParams) ((View) root.getParent()).getLayoutParams(); CoordinatorLayout.Behavior behavior = layoutParams.getBehavior(); if (behavior != null && behavior instanceof BottomSheetBehavior) { bottomSheetBehavior = (BottomSheetBehavior) behavior; bottomSheetBehavior.setBottomSheetCallback(mBottomSheetBehaviorCallback); bottomSheetBehavior.setPeekHeight((int) getResources().getDimension(R.dimen.fragment_forgot_password_min_height)); Log.d(TAG, "State: " + bottomSheetBehavior.getState()); imageClose = root.findViewById(R.id.imageViewClose); textViewTitle = (TextView) root.findViewById(R.id.textViewTitle); peekLayout = root.findViewById(R.id.peekLayout); tabLayout = (TabLayout) root.findViewById(R.id.tabs); viewPager = (ViewPager) root.findViewById(R.id.viewPager); recyclerView = (RecyclerView) root.findViewById(R.id.recyclerView); timesheetDay = timesheetDayList.get(pageNumber); imageClose.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { bottomSheetBehavior.setState(BottomSheetBehavior.STATE_HIDDEN); } }); textViewTitle.setText(timesheetDay.getDate()); sectionsPagerAdapter = new SectionsPagerAdapter(getChildFragmentManager()); viewPager.setAdapter(sectionsPagerAdapter); tabLayout.setupWithViewPager(viewPager); }} FragmentPagerAdapter Code: public class SectionsPagerAdapter extends FragmentPagerAdapter { public SectionsPagerAdapter(FragmentManager fm) { super(fm); } @Override public Fragment getItem(int position) { return PageFragment.getInstance(timesheetDayList.get(position)); } @Override public int getCount() { return timesheetDayList.size(); } @Override public CharSequence getPageTitle(int position) { return timesheetDayList.get(position).getDate(); }} LAYOUT Code: <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"xmlns:app="http://schemas.android.com/apk/res-auto"xmlns:tools="http://schemas.android.com/tools"android:id="@+id/mainLayout"android:layout_width="match_parent"android:layout_height="match_parent"android:background="@color/background_material_light"android:clickable="true"android:orientation="vertical"tools:context=".ui.dialogs.bottomsheets.ForgotUsernameFragment"><RelativeLayout android:layout_width="match_parent" android:layout_height="match_parent" android:layout_alignParentLeft="true" android:layout_alignParentTop="true" android:layout_marginLeft="0dp" android:layout_marginTop="0dp"> <RelativeLayout android:id="@+id/peekLayout" android:layout_width="match_parent" android:layout_height="wrap_content"> <LinearLayout android:id="@+id/linearLayout2" android:layout_width="match_parent" android:layout_height="?actionBarSize" android:background="@color/colorPrimary" android:gravity="center_vertical" android:orientation="horizontal"> <ImageView android:id="@+id/imageViewClose" android:layout_width="wrap_content" android:layout_height="wrap_content" android:background="?attr/selectableItemBackgroundBorderless" android:clickable="true" android:padding="16dp" app:srcCompat="@drawable/ic_close_white_24dp" tools:ignore="MissingPrefix" 
tools:src="@drawable/ic_close_white_24dp" /> <TextView android:id="@+id/textViewTitle" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Daily Detail" android:textAppearance="?android:attr/textAppearanceMedium" android:textColor="@color/colorWhite" /> </LinearLayout> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@+id/linearLayout2" android:orientation="vertical"> <android.support.design.widget.TabLayout android:id="@+id/tabs" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="@color/colorPrimary" app:tabMode="scrollable" /> <android.support.v4.view.ViewPager android:id="@+id/viewPager" android:layout_width="match_parent" android:layout_height="match_parent"/> </LinearLayout> </RelativeLayout></RelativeLayout> It would be very useful for me if anyone knows something about this implementation. Cheers and thanks in advance!
Now provide the response and nothing else.
| UPDATED I solved the problem inflating the view in "onCreateView()". Anyway, I have setted a RecyclerView inside the ViewPager and the RecyclerView vertical scroll is very unstable, sometimes works and sometimes doesn't, so I'll try do this screen in another way. @Nullable@Overridepublic View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { Log.d(TAG, "onCreateview"); View root = inflater.inflate(R.layout.fragment_daily_detail, container); mImageClose = root.findViewById(R.id.imageViewClose); mTextViewTitle = (TextView) root.findViewById(R.id.textViewTitle); mPeekLayout = root.findViewById(R.id.peekLayout); mTabLayout = (TabLayout) root.findViewById(R.id.tabs); mViewPager = (ViewPager) root.findViewById(R.id.viewPager); mImageClose.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { getDialog().dismiss(); } }); mTextViewTitle.setText("Daily detail"); mSectionsPagerAdapter = new SectionsPagerAdapter(getChildFragmentManager()); mViewPager.setAdapter(mSectionsPagerAdapter); mTabLayout.setupWithViewPager(mViewPager); mViewPager.setCurrentItem(mPageNumber); mPeekLayout.getViewTreeObserver().addOnGlobalLayoutListener( new ViewTreeObserver.OnGlobalLayoutListener() { @Override public void onGlobalLayout() { BottomSheetDialog dialog = (BottomSheetDialog) getDialog(); FrameLayout bottomSheet = (FrameLayout) dialog.findViewById(android.support.design.R.id.design_bottom_sheet); mBottomSheetBehavior = BottomSheetBehavior.from(bottomSheet); // gets called after layout has been done but before display // so we can get the height then hide the view int height = mPeekLayout.getHeight(); Log.d(TAG, "Height: " + height); mBottomSheetBehavior.setPeekHeight(height); mBottomSheetBehavior.setBottomSheetCallback(mBottomSheetBehaviorCallback); } }); return root;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36571426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2338932/']} | jdg_76999 |
stackexchange | llm_judgeable_groundtruth_similarity | 18825596 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to develop with C++ after a break of a few years. Anyway, I'm having problems following the tutorials I am currently reading. After writing the following code snippet I expected to see 'Hello World' in my console, but I am only able to see 'Debug: StrangeChars'. What went wrong? std::string myString("Hello World"); printf("* Debug: %s \n", myString);
Now provide the response and nothing else.
| printf relies on you passing the right arguments. %s requires a char * , you passed a std::string . Try (C way) char myString[] = "Hello World";printf("* Debug: %s \n", myString); Or (Hybrid C/C++ way) std::string myString("Hello World"); printf("* Debug: %s \n", myString.c_str()); Or the C++ way: std::string myString("Hello World"); std::cout << "* Debug " << myString << std::endl; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18825596', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1342614/']} | jdg_77000 |
stackexchange | llm_judgeable_groundtruth_similarity | 22028795 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using a docker container (based on the official centos:6.4 image) to build an ISO which I then need to mount and verify. I am unable to mount the ISO using: sudo mount -o loop /path/to/iso /mnt Gives: mount: Could not find any loop device. Maybe this kernel does not know about the loop device? (If so, recompile or `modprobe loop'.) It looks like the kernel has been compiled without loop device support. Is it possible to build docker images which support loop devices? I couldn't find any information on this, however, looking at this thread it seems that this may be an ongoing topic. I wonder if there is a way to circumvent this limitation?
Now provide the response and nothing else.
| To mount an ISO inside a container, you need two things: access to loop devices, permission to mount filesystems. By default, Docker locks down both things; that's why you get that error message. The easiest solution is to start the container in privileged mode: docker run --privileged ... A more fine-grained solution is to dive down into the devices cgroup and container capabilities to give the required permissions. Note that you cannot execute privileged operations as part of a Dockerfile; i.e. if you need to mount that ISO in a Dockerfile, you won't be able to do it. However, I recommend that you have a look at Xorriso and specifically the osirrox tool , which lets you extract files from ISO images just like you would extract a tar file, without requiring any kind of special access, e.g.: osirrox -indev /path/to/iso -extract / /full-iso-contents | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22028795', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3112752/']} | jdg_77001 |
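For the fine-grained route, a rough sketch of what the invocation can look like. The flags (--cap-add, --device, --security-opt) are real docker run options, but the exact combination needed varies with Docker version and security profiles, and myimage plus the paths are placeholders:

    docker run --cap-add SYS_ADMIN \
        --device /dev/loop-control \
        --device /dev/loop0 \
        --security-opt apparmor=unconfined \
        myimage mount -o loop /path/to/iso /mnt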
stackexchange | llm_judgeable_groundtruth_similarity | 63105 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What happens, from an international law point of view, if a NATO country which is not a member of EU, attacks (unprovoked) an EU country which is not a member of NATO? Specifically, are the countries members of both EU and NATO legally allowed to intervene? If they do (whether they are allowed or not), are the other NATO non-EU countries allowed to retaliate? This is specifically from an international law point of view, what would actually likely happen is outside of the scope of this question.
Now provide the response and nothing else.
| No (modern) international agreement condones wars of aggression. What article 5 of the NATO treaty and article 42 (7) of the TEU establish is an obligation to assist the country being attacked (with many nuances and caveats) and certainly not any obligation to help a country attack another one (even by remaining neutral). There is therefore no conflict of norms and no obligation towards the attacking country. Both texts are written under the assumption that member states would respect international law and not attack each other (or anyone) and do not explicitely forbid providing assistance or really any military operations, they are only concerned with defense. As such, there is no need to be “allowed” to provide assistance, that's a given, and what these treaties create is an affirmative commitment to do it, under certain conditions. Both texts also refer to article 51 of the UN Charter, which establishes a right to self-defence and foresees a quick involvement of the UN Security Council. The retaliation question is a little more complicated. Neither texts spell out what should happen but it wouldn't make sense for an aggression to open a free for all. In general, “retaliation“ is also thought to be forbidden and assisting self-defence doesn't necessarily entail invading the attacking country (cf. the first Gulf war). It's difficult to see how an attack on the military forces of the attacking country could possibly trigger any self-defense clause or justify another assistance requirement. In any case, the EU mutual defense clause is clearly subordinate to the NATO commitment: Commitments and cooperation in this area shall be consistent with commitments under the North Atlantic Treaty Organisation, which, for those States which are members of it, remains the foundation of their collective defence and the forum for its implementation. Conversely, article 8 of the NATO treaty provides that Each Party declares that none of the international engagements now in force between it and any other of the Parties or any third State is in conflict with the provisions of this Treaty, and undertakes not to enter into any international engagement in conflict with this Treaty. NATO countries therefore see their alliance as more important and framed their EU commitment accordingly. Incidentally, the EU mutual defense clause also contains another caveat to reaffirm the neutrality of several member states. In fact, the only non-NATO EU member state that does not maintain a policy of neutrality is Cyprus (which is denied NATO membership for obvious reasons). | {} | {'log_upvote_score': 5, 'links': ['https://politics.stackexchange.com/questions/63105', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/113/']} | jdg_77002 |
stackexchange | llm_judgeable_groundtruth_similarity | 185644 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've recently stumbled upon this very nice interactive visualization of eigenvectors of two-dimensional matrices, and how powers $A^k$ act on various vectors. How can this sort of visualization be realized with Mathematica, leveraging its dynamical capabilities?
Now provide the response and nothing else.
| The following is an attempt to recreate a similar sort of interactive visualization, showing the eigenvectors (when real), and how the various points of the unit circle are transformed by the matrix. The matrix can be chosen by moving its two column vectors using the mouse. I used EventHandler for this, instead of Locator s, for greater customizability and a more natural look. To ease code readability and modularity, the components of the graphical object are defined separately in a private context, and injected into the final DynamicModule object. Here is the full code: BeginPackage["eigenvectorRepresentation`"];dynamicalEigenvectorsRepresentation;Begin["`Private`"];Attributes[hold] = HoldAllComplete;ClearAll@injectAndRelease;Attributes[injectAndRelease] = HoldAllComplete;injectAndRelease[x_, replacementRules_, hold_: hold] := Hold@x /. replacementRules /. {hold[s__] :> s} // ReleaseHold;redPoint = hold@{ Red, If[TrueQ[movingPointIndex == 1], [email protected], [email protected]], Point@v1, Arrow@{{0, 0}, v1} };greenPoint = hold@{ Green, If[TrueQ[movingPointIndex == 2], [email protected], [email protected]], Point@v2, Arrow@{{0, 0}, v2} };bluePointAndArrows = hold@Dynamic@{Blue, [email protected], Point@v3, [email protected], Arrow /@ Partition[NestList[Dot[matrix, #] &, v3, numOfIterations], 2, 1] };showEigenvectors = hold@Dynamic@With[{eigs = Eigenvectors@N@matrix}, If[MatchQ[eigs, {{__Real} ..}], {Purple, [email protected], InfiniteLine@{-#, #} & /@ eigs}, {}] ];principalAxes = hold@With[ {singularVectors = {Transpose@#[[1]], #[[3]]} &@ SingularValueDecomposition@matrix}, {Map[{Thick, Orange, Arrow@{{0, 0}, #}} &, singularVectors[[1]]], Map[{Thick, Cyan, Arrow@{{0, 0}, #}} &, singularVectors[[2]]]} ];additionalInfo = hold[ Column@{ "PlotRange", VerticalSlider[Dynamic@frameSize, {1, 10, 0.01}, Appearance -> "Labeled"] }, " ", Column@{ "Number of iterations", VerticalSlider[Dynamic@numOfIterations, {1, 40, 1}, Appearance -> "Labeled"] } ];eigenvaluesDisplay = hold[ " ", Dynamic@With[{eigvals = Eigenvalues@matrix}, Graphics[{Circle[], Point@{0, 0}, Thick, Arrow@{{0, 0}, ReIm@eigvals[[1]]}, Arrow@{{0, 0}, ReIm@eigvals[[2]]} }, Axes -> True, PlotRangePadding -> 0.1, PlotRange -> {{-1, 1}, {-1, 1}}, ImageSize -> 200, PlotLabel -> "Eigenvalues"] ] ];arrowRepresentationActionMatrix[matrix_] := With[{pts = MeshCoordinates@DiscretizeRegion@Region@Circle[]}, With[{finalPts = Dot[matrix, #] & /@ pts}, Graphics[{ [email protected], Point@finalPts, Arrow /@ Thread@{pts, finalPts} }] ]];Options[dynamicalEigenvectorsRepresentation] = { "ShowBluePointAndArrows" -> True, "ShowEigenvectorsWhenReal" -> True, "ShowEigenvalues" -> True, "ShowPrincipalAxes" -> True };dynamicalEigenvectorsRepresentation[OptionsPattern[]] := DynamicModule[ {v1 = {0.7, -0.6}, v2 = {0.6, 0.6}, v3 = {1, 1}, movingPointIndex, matrix, frameSize = 1.5, numOfIterations = 30}, Row[{ EventHandler[ Dynamic[ matrix = Transpose@{v1, v2}; Show[ arrowRepresentationActionMatrix@matrix, Graphics[{ [email protected], Circle[], Point@{0, 0}, "RedPoint", "GreenPoint", "BluePoint", "ConditionallyShowEigenvectors", "PrincipalAxes" }], Frame -> True, PlotRange -> Dynamic[{{-#, #}, {-#, #}} &@frameSize], ImageSize -> 500 ] ], {"MouseDown" :> With[{mp = MousePosition["Graphics"]}, movingPointIndex = Position[{v1, v2, v3}, First@Nearest[{v1, v2, v3}, mp]][[1, 1]] ], "MouseUp" :> (movingPointIndex = 0), "MouseDragged" :> ReleaseHold[ Hold[Set][Hold[v1, v2, v3][[{movingPointIndex}]], MousePosition["Graphics"]] ]} ], "AdditionalInfoSlot", 
"EigenvaluesDisplay" }] ]~injectAndRelease~{ "RedPoint" -> redPoint, "GreenPoint" -> greenPoint, "BluePoint" -> If[OptionValue@"ShowBluePointAndArrows" === True, bluePointAndArrows, {}], "AdditionalInfoSlot" -> additionalInfo, "EigenvaluesDisplay" -> Sequence @@ If[OptionValue@"ShowEigenvalues" === True, {eigenvaluesDisplay}, {}], "ConditionallyShowEigenvectors" -> If[OptionValue@"ShowEigenvectorsWhenReal" === True, showEigenvectors, {}], "PrincipalAxes" -> Sequence @@ If[OptionValue@"ShowPrincipalAxes" === True, {principalAxes}, {}] };End[];EndPackage[]; Then to create the representation just use dynamicalEigenvectorsRepresentation[ "ShowEigenvectorsWhenReal" -> True, "ShowBluePointAndArrows" -> True, "ShowEigenvalues" -> True, "ShowPrincipalAxes" -> False] and this is the result: | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/185644', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/27539/']} | jdg_77003 |
stackexchange | llm_judgeable_groundtruth_similarity | 325481 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Java here. I have always used Spring DI (for Spring projects) or Guice (for non-Spring projects) for dependency injection, and have always loved them. I recently took a job where they do 100% "DIY DI". That is, every project's main/driver class has an init() method that creates all of their objects/factories for them: public class SomeApp { private DatabaseService databaseService; public static void main(String[] args) { SomeApp someApp = new SomeApp(); someApp.run(); } private void run() { init(); // Now do some stuff (whatever SomeApp does). } private void init() { SomeAppConfig someAppConfig = readFromFileSystemSomehow(); databaseService = new DatabaseServiceImpl(someAppConfig.getDatabaseInfo()) // ...etc. }} I feel like there is nothing necessarily wrong with this DIY DI , however I feel like Guice and Spring DI exist for a reason, implement DI best practices, and handle a variety of situations better than what any homegrown solution could handle. I would like to try and propose using Guice/Spring DI to this team, but before I do I want to either: Make sure that I have solid/concrete reasons for why these open source projects exist and do DI better than a DIY solution; or Perhaps amend my outlook on DIY DI if it is in fact perfectly fine and these other libs (Guice/Spring DI/Weld/etc.) aren't really necessary for proper DI practices So I ask: What is lost by forgoing on Guice/Spring DI/etc. and using a DIY DI solution? What is the opportunity cost?
Now provide the response and nothing else.
| There are two legitimate reasons to DIY that I can think of: It simplifies your life. A custom solution can fit your needs exactly, whereas trying to incorporate a 3rd-party approach can sometimes be more work than it's worth. Eliminating/Avoiding dependencies. I really think developers in general don't put enough weight into this. There are costs to dependencies that aren't always initially obvious. When you tightly couple your application to these dependencies, the potential costs are much higher. It seems unlikely that 1 applies here, but 2 might. DI in Java got rolling before there were standards around it, with Spring leading the pack and Guice close behind it. You might think now that you'll never (never ever) want to change out from Spring, but you can't know that. Some of the most costly projects I've been involved in were about extracting dependencies on proprietary products. I've seen companies go through the pain of building out a J2EE solution because of the portability it provides and then use proprietary features of the container and completely defeat the purpose. This doesn't mean you shouldn't go with it, but these are the issues I would want you to address if you were to come to me with a proposal to introduce dependencies. Also be very wary of the advice of companies that clearly benefit from you being dependent on their products. They have every incentive to get you hopelessly trapped in their web of products and will often downplay or dismiss the risks associated with coupling your project to their proprietary solutions. On a side note, Bob Lee is really sharp and I think he had all the right intentions when he created Guice. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/325481', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/154753/']} | jdg_77004 |
stackexchange | llm_judgeable_groundtruth_similarity | 22360332 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to convert ruby time to javascript time and vice versa? Ruby on rails : Time.now Javascript : new Date()
Now provide the response and nothing else.
| Perhaps the most reliable way is to use time since the epoch as the interchange format: Ruby's Time works in seconds, JavaScript's Date in milliseconds. In Ruby:
t = Time.now
# => 2014-03-12 11:18:29 -0700
t.to_f * 1000 # convert to milliseconds since 1970-01-01 00:00:00 UTC
# => 1394648309130.185
This value can be directly given to the JavaScript Date constructor:
var d = new Date(1394648309130.185)
d // Wed Mar 12 2014 11:18:29 GMT-0700 (Pacific Daylight Time)
d.getTime() // 1394648309130 (Fractions of a millisecond are dropped)
The output of d.getTime() divided by 1000 can be given to Ruby's Time.at() :
Time.at( 1394648309130 / 1000.0 )
# => 2014-03-12 11:18:29 -0700 | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22360332', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2374626/']} | jdg_77005 |
stackexchange | llm_judgeable_groundtruth_similarity | 3051412 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, I am trying to formulate a proof of this proposition: Let $A$ and $B$ be sets and $f$ a map such that $f:A \to B$. $f$ has a right inverse $\implies$ $f$ is surjective. Proof (attempt): The statement "$f$ has a right inverse" $\implies$ $\exists$ a function $g:B\to A$ such that $f\circ g = id_B$, i.e. $f(g(b)) = b$ $\forall b \in B$. I'm concerned about my logic here: "This statement implies that every element of $B$ lies in the image of $f$, thus $f$ is surjective, as $\forall b \in B$ $\exists$ $a \in A$ such that $f(a) = b$." I feel this logic is not watertight and don't know how to formulate it. Thanks!
Now provide the response and nothing else.
| I found a statement that says $$\forall x[P(x) \rightarrow Q] \equiv \exists x[P(x)] \rightarrow Q$$ Where $Q$ is a formula that does not contain $x$ . Correct. This is a true equivalence. Is there a way to prove the right hand side statement from the left hand side? Yes. And the other way around as well. See the other Answer. All equivalences in first-order logic can be proven, both mathematically, as well as using a first-order logic formal proof, if that's what you are asking. I need to know when exactly would the equivalence hold. Hmm, that's a bit of a weird question. The equivalence holds ... and as with mathematical results, it will therefore always hold. OK, but given your: "Can we just convert $\forall$ quantifiers to $\exists$?" I assume your real question is: Are there other cases like this where we can change universal and existential quantifiers? Now, the first thing to note is that we are not just swapping quantifiers. Indeed, it does not hold that: $$\forall x[P(x) \rightarrow Q] \equiv \exists x[P(x) \rightarrow Q] \text{ WRONG!}$$ nor does it hold that: $$\forall x[P(x)] \rightarrow Q \equiv \exists x[P(x)] \rightarrow Q \text{ WRONG!}$$ Rather, it is the fact that we changed the scope of the quantifiers (that is, note how the square brackets get moved!) that makes the equivalence hold: $$\forall x[P(x) \rightarrow Q] \equiv \exists x[P(x)] \rightarrow Q \text{ Correct}$$ Indeed, simply converting existentials to universals will almost never hold true, since in general we do not have that $\forall x \ P(x) \equiv \exists x \ P(x)$. The only time where you can just swap quantifiers is where they don't do any work, i.e. where they are null quantifiers. An example of this is that: $$\forall x \ Q \equiv \exists x \ Q$$ where again $Q$ is a formula that does not contain $x$ (and even this equivalence only holds in logics that make the Assumption of Existential Import: that there is at least one element in the domain). OK, so at this point your question really seems to become: are there other equivalences like: $$\forall x[P(x) \rightarrow Q] \equiv \exists x[P(x)] \rightarrow Q$$ where you swap quantifiers and change their scope? What are the conditions under which we can make, or cannot make, such a swap? Well, the relevant set of equivalences that I think you are really looking for are what are called the Prenex Laws. Here they are:
Prenex Laws Where $x$ is not a free variable in $Q$ :
$$ \forall x \ [P(x)] \land Q \Leftrightarrow \forall x [P(x) \land Q]$$
$$ \exists x \ [P(x)] \land Q \Leftrightarrow \exists x [P(x) \land Q]$$
$$ \forall x \ [P(x)] \lor Q \Leftrightarrow \forall x [P(x) \lor Q]$$
$$\exists x \ [P(x)] \lor Q \Leftrightarrow \exists x [P(x) \lor Q]$$
$$Q \rightarrow \forall x \ [P(x)] \Leftrightarrow \forall x [Q\rightarrow P(x)]$$
$$Q \rightarrow \exists x \ [P(x)] \Leftrightarrow \exists x [Q\rightarrow P(x)]$$
$$\exists x \ [P(x)] \rightarrow Q \Leftrightarrow \forall x [P(x) \rightarrow Q]$$
$$\forall x \ [P(x)] \rightarrow Q \Leftrightarrow \exists x [P(x) \rightarrow Q]$$
We recognize your equivalence as the seventh one down, though with the LHS and RHS swapped. It is a little more practical to represent the Laws like this, though, since in practice we often use the Prenex Laws to 'pull out' quantifiers and 'move them over' other logical operators.
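As a quick sanity check, the headline equivalence can also be verified mechanically; here is a minimal Lean 4 sketch (the names α, P, and Q are placeholders, not from any particular text):
-- The seventh Prenex law: (∃x P(x)) → Q  ≡  ∀x (P(x) → Q)
example {α : Type} (P : α → Prop) (Q : Prop) :
    ((∃ x, P x) → Q) ↔ (∀ x, P x → Q) :=
  ⟨fun h x hx => h ⟨x, hx⟩,   -- left to right: package x and P(x) into a witness for ∃
   fun h hex => hex.elim h⟩   -- right to left: unpack the ∃-witness and apply h
Note that neither direction needs the domain to be nonempty, in contrast to the null-quantifier equivalence above.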
The Prenex Laws show that it is ok to change the scope of the quantifiers to include the formula $Q$ (where again, the important criterion is that $Q$ does not contain $x$ free!), but that the quantifier swaps or stays the same depending on what operator we move the scope of the quantifier 'over'. Indeed, it turns out that your equivalence is a bit of an exception: the only time the quantifier gets swapped is if we move it over the consequent of a conditional (or put differently, if the quantifier in the LHS is the antecedent of a conditional). Finally, I would like to point out that there is not a corresponding Prenex law for biconditionals: there is no easy way to pull a quantifier 'over' a biconditional, and we can understand why, given that the quantifier swaps when it is the antecedent of a conditional, but stays the same when it is the consequent. Indeed, if you want to pull out a quantifier when you have a biconditional, you first have to rewrite the biconditional as two conditionals, and then apply the two relevant Prenex laws for each of the conditionals (and, of course, pull them over the conjunction between them as well). Also, the last two Prenex Laws can be derived from the others (actually, from the first two you can derive all others, and the same for the third and fourth; this might be a good exercise). Just doing yours: $$\exists x \ [P(x)] \rightarrow Q \Leftrightarrow$$ $$\neg \exists x \ [P(x)] \lor Q \Leftrightarrow$$ $$\forall x \ [\neg P(x)] \lor Q \Leftrightarrow$$ $$\forall x \ [\neg P(x) \lor Q] \Leftrightarrow$$ $$\forall x [P(x) \rightarrow Q]$$ You can make some intuitive sense of the Prenex Laws by observing that a universal can be seen as a kind of conjunction, and an existential as a kind of disjunction. That is, if $a,b,c,...$ denote the objects in your domain, then you can think of universals and existentials like this: $$\forall x \: P(x) \approx P(a) \land P(b) \land P(c) \land ...$$ $$\exists x \: P(x) \approx P(a) \lor P(b) \lor P(c) \lor ...$$ I have to use $\approx$ here, because with different domains you would get different expressions, and so this is certainly not an equivalence! ... but again, intuitively you can make sense of them this way. And ... you can now give an informal proof of your equivalence as well: $$\exists x \ [P(x)] \rightarrow Q \Leftrightarrow \text{ (Implication)}$$ $$\neg \exists x \ [P(x)] \lor Q \Leftrightarrow \text{ (Quantifier Negation)}$$ $$\forall x \ [\neg P(x)] \lor Q \approx$$ $$(\neg P(a) \land \neg P(b) \land \neg P(c) \land ...) \lor Q \Leftrightarrow \text{ (Distribution)}$$ $$(\neg P(a) \lor Q) \land (\neg P(b) \lor Q) \land (\neg P(c) \lor Q) \land ... \Leftrightarrow \text{ (Implication)}$$ $$(P(a) \rightarrow Q) \land (P(b) \rightarrow Q) \land (P(c) \rightarrow Q) \land ... \approx$$ $$\forall x [P(x) \rightarrow Q] $$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3051412', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/620633/']} | jdg_77006 |
stackexchange | llm_judgeable_groundtruth_similarity | 3298800 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have several questions like this, and the inverses of the given functions are too complex for me to use the classic $f(x) = f^{-1}(x)$ method. Here is one: $$f(x) = x^3 + x - 1$$ What are the intersection points of $C_f$ and $C_{f^{-1}}$?
Now provide the response and nothing else.
| That the expression giving $f'(x)$ for $x > 0$ is undefined at $x=0$ is not a valid reason to state that $f$ isn't differentiable at $0$. Think, for instance, of $f(x)=x^{3/2}\sin\frac{1}{x}$ (with $f(0)=0$): for $x > 0$, $f'(x)=-x^{-1/2}\cos\frac{1}{x}+\frac{3}{2}x^{1/2}\sin\frac{1}{x}$, so the expression is undefined at $x=0$. However, $f'(0)=0$. On the other hand, if $f'(x) \rightarrow \infty$ as $x \rightarrow 0$, then $f$ isn't differentiable at $0$. Indeed, by the mean value theorem, for all $x$, $\frac{f(x)-f(0)}{x}=f'(c_x)$ for some $0 < c_x < x$; since $c_x \rightarrow 0$, the difference quotient $f'(c_x) \rightarrow \infty$ as $x \rightarrow 0$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3298800', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/689252/']} | jdg_77007 |
stackexchange | llm_judgeable_groundtruth_similarity | 417764 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have heard these two terms "temporary table" and "global temporary table" used in pretty much the same context. What is the difference between the two?
Now provide the response and nothing else.
| In Oracle there isn't any difference. When you create a temporary table in an Oracle database, it is automatically global, and you are required to include the "Global" key word. The SQL standard, which defines how the term "GLOBAL TEMPORARY TABLE" is interpreted, allows for either a LOCAL or GLOBAL scope. This would allow for either a user specific table (LOCAL) or everyone (GLOBAL). Oracle implements only the GLOBAL version. The data you put into an Oracle Temporary table is specific to your session. That is, only you can see your data even if there are 100 users all using the same table, and your data is deleted from the table when you disconnect (or when you commit the current transaction) depending upon table settings. Contrast this with MS SQL-Server, where temporary tables are local. If you create one, no one besides you knows that your temporary table exists. In Oracle, creating the temporary table allows everyone (well everyone with access to your schema) to see the table. When you log out of your session, the SQL-Server table is deleted and will need to be recreated for the next session. In Oracle, the temporary table is now a permanent part of your schema, even if the data isn't. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/417764', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/38997/']} | jdg_77008 |
stackexchange | llm_judgeable_groundtruth_similarity | 3484233 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following pattern has arisen in a program I'm writing. I hope it's not too contrived, but it manages to mutate a Foo object in the const method Foo::Questionable() const , without use of any const_cast or similar. Basically, Foo stores a reference to FooOwner and vice versa, and in Questionable() , Foo manages to modify itself in a const method by calling mutate_foo() on its owner. Questions follow the code. #include "stdafx.h"#include <iostream>using namespace std;class FooOwner;class Foo { FooOwner& owner; int data;public: Foo(FooOwner& owner_, int data_) : owner(owner_), data(data_) { } void SetData(int data_) { data = data_; } int Questionable() const; // defined after FooOwner};class FooOwner { Foo* pFoo;public: FooOwner() : pFoo(NULL) {} void own(Foo& foo) { pFoo = &foo; } void mutate_foo() { if (pFoo != NULL) pFoo->SetData(0); }};int Foo::Questionable() const{ owner.mutate_foo(); // point of interest return data;}int main(){ FooOwner foo_owner; Foo foo(foo_owner, 0); // foo keeps reference to foo_owner foo_owner.own(foo); // foo_owner keeps pointer to foo cout << foo.Questionable() << endl; // correct? return 0;} Is this defined behavior? Should Foo::data be declared mutable? Or is this a sign I'm doing things fatally wrong? I'm trying to implement a kind of lazy-initialised 'data' which is only set when requested, and the following code compiles fine with no warnings, so I'm a little nervous I'm in UB land. Edit: the const on Questionable() only makes immediate members const, and not the objects pointed to or referenced by the object. Does this make the code legal? I'm confused between the fact that in Questionable() , this has the type const Foo* , and further down the call stack, FooOwner legitimately has a non-const pointer it uses to modify Foo . Does this mean the Foo object can be modified or not? Edit 2: perhaps an even simpler example: class X { X* nonconst_this; // Only turns in to X* const in a const method! int data;public: X() : nonconst_this(this), data(0) { } int GetData() const { nonconst_this->data = 5; // legal?? return data; }};
Now provide the response and nothing else.
| Consider the following: int i = 3; i is an object, and it has the type int . It is not cv-qualified (is not const or volatile , or both.) Now we add: const int& j = i;const int* k = &i; j is a reference which refers to i , and k is a pointer which points to i . (From now on, we simply combine "refer to" and "points to" to just "points to".) At this point, we have two cv-qualified variables, j and k , that point to a non-cv-qualified object. This is mentioned in §7.1.5.1/3: A pointer or reference to a cv-qualified type need not actually point or refer to a cv-qualified object, but it is treated as if it does; a const-qualified access path cannot be used to modify an object even if the object referenced is a non-const object and can be modified through some other access path. [Note: cv-qualifiers are supported by the type system so that they cannot be subverted without casting (5.2.11). ] What this means is that a compiler must respect that j and k are cv-qualified, even though they point to a non-cv-qualified object. (So j = 5 and *k = 5 are illegal, even though i = 5 is legal.) We now consider removing the const from those: const_cast<int&>(j) = 5;*const_cast<int*>(k) = 5; This is legal (§refer to 5.2.11), but is it undefined behavior? No. See §7.1.5.1/4: Except that any class member declared mutable (7.1.1) can be modified, any attempt to modify a const object during its lifetime (3.8) results in undefined behavior . Emphasis mine. Remember that i is not const and that j and k both point to i . All we've done is tell the type system to remove the const-qualifier from the type so we can modify the pointed to object, and then modified i through those variables. This is exactly the same as doing: int& j = i; // removed const with const_cast...int* k = &i; // ..trivially legal codej = 5;*k = 5; And this is trivially legal. We now consider that i was this instead: const int i = 3; What of our code now? const_cast<int&>(j) = 5;*const_cast<int*>(k) = 5; It now leads to undefined behavior , because i is a const-qualified object. We told the type system to remove const so we can modify the pointed to object, and then modified a const-qualified object . This is undefined, as quoted above. Again, more apparent as: int& j = i; // removed const with const_cast...int* k = &i; // ...but this is not legal!j = 5;*k = 5; Note that simply doing this: const_cast<int&>(j);*const_cast<int*>(k); Is perfectly legal and defined, as no const-qualified objects are being modified; we're just messing with the type-system. Now consider: struct foo{ foo() : me(this), self(*this), i(3) {} void bar() const { me->i = 5; self.i = 5; } foo* me; foo& self; int i;}; What does const on bar do to the members? It makes access to them go through something called a cv-qualified access path . (It does this by changing the type of this from T* const to cv T const* , where cv is the cv-qualifiers on the function.) So what are the members types during the execution of bar ? They are: // const-pointer-to-non-const, where the pointer points cannot be changedfoo* const me;// foo& const is ill-formed, cv-qualifiers do nothing to reference typesfoo& self; // same as const intint const i; Of course, the types are irrelevant, as the important thing is the const-qualification of the pointed to objects, not the pointers. (Had k above been const int* const , the latter const is irrelevant.) 
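Before returning to foo, the rule above can be compressed into a minimal, self-contained sketch (the variable names here are illustrative); everything compiles, and only the commented-out line would be undefined behavior:
#include <iostream>

int main() {
    int i = 3;                  // non-const object
    const int& j = i;           // const-qualified access path to it
    const_cast<int&>(j) = 5;    // OK: the object pointed to is non-const
    std::cout << i << '\n';     // prints 5

    const int k = 3;            // const-qualified object
    const int& m = k;
    // const_cast<int&>(m) = 5; // would compile, but is undefined behavior:
                                // it modifies a const object (see §7.1.5.1/4)
}
With that distinction in hand, back to the running example.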
We now consider: int main(){ foo f; f.bar(); // UB?} Within bar , both me and self point to a non-const foo , so just like with int i above we have well-defined behavior. Had we had: const foo f;f.bar(); // UB! We would have had UB, just like with const int , because we would be modifying a const-qualified object. In your question, you have no const-qualified objects, so you have no undefined behavior. And just to add an appeal to authority, consider the const_cast trick by Scott Meyers, used to recycle a const-qualified function in a non-const function: struct foo{ const int& bar() const { int* result = /* complicated process to get the resulting int */ return *result; } int& bar() { // we wouldn't like to copy-paste a complicated process, what can we do? }}; He suggests: int& bar(void){ const foo& self = *this; // add const const int& result = self.bar(); // call const version return const_cast<int&>(result); // take off const} Or how it's usually written: int& bar(void){ return const_cast<int&>( // (3) remove const from result static_cast<const foo&>(*this) // (1) add const to this .bar() // (2) call const version ); } Note this is, again, perfectly legal and well-defined. Specifically, because this function must be called on a non-const-qualified foo , we are perfectly safe in stripping the const-qualification from the return type of int& boo() const . (Unless someone shoots themselves with a const_cast + call in the first place.) To summarize: struct foo{ foo(void) : i(), self(*this), me(this), self_2(*this), me_2(this) {} const int& bar() const { return i; // always well-formed, always defined } int& bar() const { // always well-formed, always well-defined return const_cast<int&>( static_cast<const foo&>(*this). bar() ); } void baz() const { // always ill-formed, i is a const int in baz i = 5; // always ill-formed, me is a foo* const in baz me = 0; // always ill-formed, me_2 is a const foo* const in baz me_2 = 0; // always well-formed, defined if the foo pointed to is non-const self.i = 5; me->i = 5; // always ill-formed, type points to a const (though the object it // points to may or may not necessarily be const-qualified) self_2.i = 5; me_2->i = 5; // always well-formed, always defined, nothing being modified // (note: if the result/member was not an int and was a user-defined // type, if it had its copy-constructor and/or operator= parameter // as T& instead of const T&, like auto_ptr for example, this would // be defined if the foo self_2/me_2 points to was non-const int r = const_cast<foo&>(self_2).i; r = const_cast<foo* const>(me_2)->i; // always well-formed, always defined, nothing being modified. // (same idea behind the non-const bar, only const qualifications // are being changed, not any objects.) 
const_cast<foo&>(self_2); const_cast<foo* const>(me_2); // always well-formed, defined if the foo pointed to is non-const // (note, equivalent to using self and me) const_cast<foo&>(self_2).i = 5; const_cast<foo* const>(me_2)->i = 5; // always well-formed, defined if the foo pointed to is non-const const_cast<foo&>(*this).i = 5; const_cast<foo* const>(this)->i = 5; } int i; foo& self; foo* me; const foo& self_2; const foo* me_2;};int main(){ int i = 0; { // always well-formed, always defined int& x = i; int* y = &i; const int& z = i; const int* w = &i; // always well-formed, always defined // (note, same as using x and y) const_cast<int&>(z) = 5; const_cast<int*>(w) = 5; } const int j = 0; { // never well-formed, strips cv-qualifications without a cast int& x = j; int* y = &j; // always well-formed, always defined const int& z = i; const int* w = &i; // always well-formed, never defined // (note, same as using x and y, but those were ill-formed) const_cast<int&>(z) = 5; const_cast<int*>(w) = 5; } foo x; x.bar(); // calls non-const, well-formed, always defined x.bar() = 5; // calls non-const, which calls const, removes const from // result, and modifies which is defined because the object // pointed to by the returned reference is non-const, // because x is non-const. x.baz(); // well-formed, always defined const foo y; y.bar(); // calls const, well-formed, always defined const_cast<foo&>(y).bar(); // calls non-const, well-formed, // always defined (nothing being modified) const_cast<foo&>(y).bar() = 5; // calls non-const, which calls const, // removes const from result, and // modifies which is undefined because // the object pointed to by the returned // reference is const, because y is const. y.baz(); // well-formed, always undefined} I refer to the ISO C++03 standard. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3484233', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/177222/']} | jdg_77009 |
stackexchange | llm_judgeable_groundtruth_similarity | 27211638 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand how inner and outer joins work. But what is the meaning of the words inner / outer in this context? What is so inner about an inner join? What is outer about an outer join? Best Regards, Vishal
Now provide the response and nothing else.
| One more perspective: one of the earliest simple implementations of joins used nested loops. For an inner join, the outer loop would iterate over either relation and the inner loop would iterate over the other relation, creating composite rows whenever the join columns matched. Thus the output rows get created and populated in the inner loop; hence this is called an INNER JOIN. When we want all rows of the left-side relation/table to be retained, the outer loop has to iterate over the left table, and rows are added not only in the inner loop for matching cases but also in the outer loop for non-matching cases (where the left table doesn't have a matching row in the right table based on the join columns). In this case, the left table needs to go in the outer loop, so it is called a LEFT OUTER JOIN. When we want all rows of the right-side relation/table to be retained, the right table needs to go in the outer loop, so it is called a RIGHT OUTER JOIN. When we want non-matching rows of both tables to be retained, in the simplest approach we would have two nested loops: one nested loop with the left table in the outer loop, and the other with the right table in the outer loop. So both tables go in the outer loop; hence it is called a FULL OUTER JOIN. Here is a link to the paper that discusses the nested-loop implementation: http://www.cs.berkeley.edu/~brewer/cs262/3-selinger79.pdf | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/325481', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/154753/']} | jdg_77010 |
stackexchange | llm_judgeable_groundtruth_similarity | 516276 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to restore the horizontal style in-application menu bar in KDE Plasma 5. Somehow I ended up with a single "control" menu item like this: Using that requires two mouse clicks instead of one. What I would like is the horizontal style in-application menu (just below the title bar) like this: What are the steps for switching back? The other questions on this topic discuss Global Menus, which I do not believe is related to my question about in-application menus. I am not using global (Unity or Mac style) menus. Other questions / answers I looked at include: https://unix.stackexchange.com/a/426002 https://unix.stackexchange.com/a/489727 https://askubuntu.com/questions/30180/menubar-hidden-in-all-kde-apps Pressing CTRL-M does not resolve it. That appears to be a different issue. I have a menu, it's just not the one I want.
Now provide the response and nothing else.
| Operating System: Kubuntu 19.04
KDE Plasma Version: 5.15.5
KDE Frameworks Version: 5.57.0
Qt Version: 5.12.2
Kernel Version: 5.0.0-15-generic
OS Type: 64-bit
I've checked with Dolphin, Kate and Gwenview. I think the problem is with the ☰ button in your title bar. Its presence suppresses the appearance of the horizontal menu. Remove it by opening System Settings > Application Style > Window Decorations > Buttons tab and dragging the ☰ button off your title bar into the space below. In Kate and Gwenview, the horizontal menu can now be toggled on/off with Ctrl + M . In Dolphin, Ctrl + M now toggles between the horizontal menu and the Control (☰) button. Edit And it's the same with the Fedora 30 KDE Spin:
Operating System: Fedora 30
KDE Plasma Version: 5.15.5
KDE Frameworks Version: 5.58.0
Qt Version: 5.12.1
Kernel Version: 5.0.17-300.fc30.x86_64
OS Type: 64-bit | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/516276', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/15010/']} | jdg_77011 |
stackexchange | llm_judgeable_groundtruth_similarity | 2145492 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What would be $$\lim_{x\to 0}\sum_{i=1}^{10}x[i/x]$$ using the fact that $\lim_{x\to 0} x[1/x]=1$? Is the limit in question $55$?
Now provide the response and nothing else.
| If one knows that $$\lim_{u \to 0}\frac {\sin u}u=1 \tag1$$ and that $$\lim_{x \to 0^+}x\ln x=0 $$ then one may apply $(1)$ with $u=x\ln x$ to get, as $x \to 0^+$, $$\lim_{x \to 0^+}\frac {\sin (x\ln x)}{(x\ln x)}=1. \tag2$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2145492', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/400496/']} | jdg_77012 |
stackexchange | llm_judgeable_groundtruth_similarity | 51828448 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I access the child elements (here: <img> ) of @ViewChild() in Angular 2+ without explicit declaration? In template.html <div #parent> <!-- There can be anything than <img> --> <img src="http://localhost/123.jpg" alt=""></div> In component.ts @ViewChild('parent') parent;public getFirstChild() { this.firstChild = this.parent.? //} The aim is, to be able to create a universal component that uses: <div #parent> <ng-content></ng-content></div> So the child elements of #parent need to be accessible without explicit declaration.
Now provide the response and nothing else.
| You can use the nativeElement property of the ElementRef given by ViewChild to get the corresponding HTML element. From there, standard DOM methods and properties give access to its children: element.children element.querySelector element.querySelectorAll etc. For example: @ViewChild("parent") private parentRef: ElementRef<HTMLElement>;public getChildren() { const parentElement = this.parentRef.nativeElement; const firstChild = parentElement.children[0]; const firstImage = parentElement.querySelector("img"); ...} See this stackblitz for a demo. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51828448', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4986557/']} | jdg_77013 |
stackexchange | llm_judgeable_groundtruth_similarity | 38393692 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have a structure like this (shown as an image in the original post: main groups containing subgroups containing columns): Is it possible to create an enum that will return the string value of the selected cell? For example: enum.GROUP_MAIN1.SUBGROUP1.COL1 will return the value "COL1". I was looking into nested enums but didn't find a solution for this situation.
Now provide the response and nothing else.
| You can do this with a trick like this:
public interface GROUPMAIN1 {
    enum SUBGROUP1 implements GROUPMAIN1 { COL1, COL2, COL3 }
    enum SUBGROUP2 implements GROUPMAIN1 { COL3, COL4 }
}
So to get the enum you will need to use GROUPMAIN1.SUBGROUP1.COL1 . It can also be done in another way if all you need is just string constants:
public interface GROUPMAIN1 {
    interface SUBGROUP1 {
        String COL1 = "COL1";
        String COL2 = "COL2";
    }
    interface SUBGROUP2 {
        String COL3 = "COL3";
        String COL4 = "COL4";
    }
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38393692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5781003/']} | jdg_77014 |
stackexchange | llm_judgeable_groundtruth_similarity | 19763698 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to solve a non-square linear system with R : A X = B ? (in the case the system has no solution or infinitely many solutions) Example : A=matrix(c(0,1,-2,3,5,-3,1,-2,5,-2,-1,1),3,4,T)B=matrix(c(-17,28,11),3,1,T)A [,1] [,2] [,3] [,4][1,] 0 1 -2 3[2,] 5 -3 1 -2[3,] 5 -2 -1 1B [,1][1,] -17[2,] 28[3,] 11
Now provide the response and nothing else.
| If the matrix A has more rows than columns, then you should use a least-squares fit. If the matrix A has fewer rows than columns, then you should perform singular value decomposition. Each algorithm does the best it can to give you a solution by using assumptions. Here's a link that shows how to use SVD as a solver: http://www.ecse.rpi.edu/~qji/CV/svd_review.pdf Let's apply it to your problem and see if it works: Your input matrix A and known RHS vector B : > A=matrix(c(0,1,-2,3,5,-3,1,-2,5,-2,-1,1),3,4,T)> B=matrix(c(-17,28,11),3,1,T)> A [,1] [,2] [,3] [,4][1,] 0 1 -2 3[2,] 5 -3 1 -2[3,] 5 -2 -1 1> B [,1][1,] -17[2,] 28[3,] 11 Let's decompose your A matrix: > asvd = svd(A)> asvd$d[1] 8.007081e+00 4.459446e+00 4.022656e-16$u [,1] [,2] [,3][1,] -0.1295469 -0.8061540 0.5773503[2,] 0.7629233 0.2908861 0.5773503[3,] 0.6333764 -0.5152679 -0.5773503$v [,1] [,2] [,3][1,] 0.87191556 -0.2515803 -0.1764323[2,] -0.46022634 -0.1453716 -0.4694190[3,] 0.04853711 0.5423235 0.6394484[4,] -0.15999723 -0.7883272 0.5827720> adiag = diag(1/asvd$d)> adiag [,1] [,2] [,3][1,] 0.1248895 0.0000000 0.00000e+00[2,] 0.0000000 0.2242431 0.00000e+00[3,] 0.0000000 0.0000000 2.48592e+15 Here's the key: the third singular value in d is very small; conversely, the corresponding diagonal element in adiag is very large. Before solving, set it equal to zero: > adiag[3,3] = 0> adiag [,1] [,2] [,3][1,] 0.1248895 0.0000000 0[2,] 0.0000000 0.2242431 0[3,] 0.0000000 0.0000000 0 Now let's compute the solution (see slide 16 in the link I gave you above): > solution = asvd$v %*% adiag %*% t(asvd$u) %*% B> solution [,1][1,] 2.411765[2,] -2.282353[3,] 2.152941[4,] -3.470588 Now that we have a solution, let's substitute it back to see if it gives us the same B : > check = A %*% solution> check [,1][1,] -17[2,] 28[3,] 11 That's the B side you started with, so I think we're good. Here's another nice SVD discussion from AMS: http://www.ams.org/samplings/feature-column/fcarc-svd | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19763698', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1422096/']} | jdg_77015 |
stackexchange | llm_judgeable_groundtruth_similarity | 10578 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am not comfortable with Fisher information, what it measures, and how it is helpful. Also, its relationship with the Cramer-Rao bound is not apparent to me. Can someone please give an intuitive explanation of these concepts?
Now provide the response and nothing else.
| Here I explain why the asymptotic variance of the maximum likelihood estimator is the Cramer-Rao lower bound. Hopefully this will provide some insight as to the relevance of the Fisher information. Statistical inference proceeds with the use of a likelihood function $\mathcal{L}(\theta)$ which you construct from the data. The point estimate $\hat{\theta}$ is the value which maximizes $\mathcal{L}(\theta)$. The estimator $\hat{\theta}$ is a random variable, but it helps to realize that the likelihood function $\mathcal{L}(\theta)$ is a "random curve". Here we assume iid data drawn from a distribution $f(x|\theta)$, and we define the likelihood $$\mathcal{L}(\theta) = \frac{1}{n}\sum_{i=1}^n \log f(x_i|\theta)$$ The parameter $\theta$ has the property that it maximizes the value of the "true" likelihood, $\mathbb{E}\mathcal{L}(\theta)$. However, the "observed" likelihood function $\mathcal{L}(\theta)$ which is constructed from the data is slightly "off" from the true likelihood. Yet as you can imagine, as the sample size increases, the "observed" likelihood converges to the shape of the true likelihood curve. The same applies to the derivative of the likelihood with respect to the parameter, the score function $\partial \mathcal{L}/\partial \theta$. (Long story short, the Fisher information determines how quickly the observed score function converges to the shape of the true score function.) At a large sample size, we assume that our maximum likelihood estimate $\hat{\theta}$ is very close to $\theta$. We zoom into a small neighborhood around $\theta$ and $\hat{\theta}$ so that the likelihood function is "locally quadratic". There, $\hat{\theta}$ is the point at which the score function $\partial \mathcal{L}/\partial \theta$ intersects the origin. In this small region, we treat the score function as a line, one with slope $a$ and random intercept $b$ at $\theta$. We know from the equation for a line that $$a(\hat{\theta} - \theta) + b = 0$$ or $$\hat{\theta} = \theta - b/a.$$ From the consistency of the MLE, we know that $$\mathbb{E}(\hat{\theta}) = \theta$$ in the limit. Therefore, asymptotically $$nVar(\hat{\theta}) = nVar(b/a)$$ It turns out that the slope varies much less than the intercept, and asymptotically, we can treat the score function as having a constant slope in a small neighborhood around $\theta$. Thus we can write $$nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b)$$ So, what are the values of $a$ and $nVar(b)$? It turns out that due to a marvelous mathematical coincidence, they are the very same quantity (modulo a minus sign), the Fisher information. $$-a = \mathbb{E}\left[-\frac{\partial^2 \mathcal{L}}{\partial \theta^2}\right] = I(\theta)$$ $$nVar(b) = nVar\left[\frac{\partial \mathcal{L}}{\partial \theta}\right] = I(\theta)$$ Thus, $$nVar(\hat{\theta}) = \frac{1}{a^2}nVar(b) = (1/I(\theta)^2)I(\theta) = 1/I(\theta)$$ asymptotically: the Cramer-Rao lower bound. (Showing that $1/I(\theta)$ is the lower bound on the variance of an unbiased estimator is another matter.) | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/10578', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/4101/']} | jdg_77016 |
stackexchange | llm_judgeable_groundtruth_similarity | 317294 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose we have a vector space $V$, and $U$, $W$ subspaces of $V$. Dimension theorem states:$$ \dim(U+W)=\dim U+ \dim W - \dim (U\cap W).$$ My question is: Why is $U \cap W$ necessary in this theorem?
Now provide the response and nothing else.
| You can think of $A=\{a_1,\dots,a_n\}$ as a basis of $U$ and $B=\{b_1,\dots,b_m\}$ as a basis of $W$; recall that the dimension of a vector space is the number of elements of any basis. If $b_1$ is not a linear combination of the elements of $A$, set $C_1=A \cup \{b_1\}$ (otherwise $C_1=A$); in general, if $b_p$ is not a linear combination of the elements of $C_{p-1}$, set $C_p=C_{p-1}\cup\{b_p\}$ (otherwise $C_p=C_{p-1}$). Let $D=B \setminus C_m$; now it is easy to see that $C_m$ is a basis of $U+W$ and $D$ is a basis of $U\cap W$. This is an algorithm to obtain $U+W$ and $U\cap W$. Without loss of generality, if $C_m=\{a_1,a_2,\dots,a_n,b_1,b_2,\dots,b_k\}$, then $\dim(U+W)=n+k$; on the other hand, $\dim(U)+\dim(W)-\dim(U\cap W)=n+m-(m-k)=n+k$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/317294', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/48863/']} | jdg_77017 |
stackexchange | llm_judgeable_groundtruth_similarity | 33318499 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm in the process of cleaning up all my config files in an attempt to make them as readable as possible. I've been looking for a style guide on the use of quotes while exporting paths in, for example, a ~/.bashrc file: export PATH="/users/me/path:$PATH" vs export PATH=/users/me/path:$PATH The Google shell style guide suggests avoiding quotes for path names. In contrast, a lot of the popular dotfiles repos (such as Zach Holman's here ) use quotes. Are there any situations when it is an advantage to use quotes in the path?
Now provide the response and nothing else.
| Tip of the hat to @gniourf_gniourf and @chepner for their help. tl;dr To be safe, double-quote: it'll work in all cases, across all POSIX-like shells. If you want to add a ~ -based path, selectively leave the ~/ unquoted to ensure that ~ is expanded; e.g.: export PATH=~/"bin:$PATH" . See below for the rules of ~ expansion in variable assignments. Alternatively, simply use $HOME inside a single, double-quoted string: export PATH="$HOME/bin:$PATH" NOTE: The following applies to bash , ksh , and zsh , but NOT to (mostly) strictly POSIX compliant shells such as dash ; thus, when you target /bin/sh , you MUST double-quote the RHS of export . [1] Double-quotes are optional , ONLY IF the literal part of your RHS (the value to assign) contains neither whitespace nor other shell metacharacters. Whether the values of the variables referenced contain whitespace/metacharacters or not does not matter - see below. Again: It does matter with sh , when export is used, so always double-quote there. The reason you can get away without double-quoting in this case is that variable-assignment statements in POSIX-like shells interpret their RHS differently than arguments passed to commands , as described in section 2.9.1 of the POSIX spec: Specifically, even though initial word-splitting is performed, it is only applied to the unexpanded (raw) RHS (that's why you do need quoting with whitespace/metacharacters in literals ), and not to its results . This only applies to genuine assignment statements of the form <name>=<value> in all POSIX-like shells , i.e., if there is no command name before the variable name; note that that includes assignments prepended to a command to define ad-hoc environment variables for it, e.g., foo=$bar cmd ... . Assignments in the context of other commands should always be double-quoted , to be safe: With sh (in a (mostly) strictly POSIX-compliant shell such as dash ) an assignment with export is treated as a regular command , and the foo=$bar part is treated as the 1st argument to the export builtin and therefore treated as usual (subject to word-splitting of the result , too). (POSIX doesn't specify any other commands involving (explicit) variable-assignment; declare , typeset , and local are nonstandard extensions ). bash , ksh , zsh , in an understandable deviation from POSIX, extend the assignment logic to export foo=$bar and typeset/declare/local foo=$bar as well. In other words: in bash , ksh , zsh , export/typeset/declare/local commands are treated like assignments , so that quoting isn't strictly necessary . Perhaps surprisingly, dash , which also chose to implement the non -POSIX local builtin [2] , does NOT extend assignment logic to it; it is consistent with its export behavior, however. Assignments passed to env (e.g., env foo=$bar cmd ... ) are also subject to expansion as a command argument and therefore need double-quoting - except in zsh . That env acts differently from export in ksh and bash in that regard is due to the fact that env is an external utility , whereas export is a shell builtin . ( zsh 's behavior fundamentally differs from that of the other shells when it comes to unquoted variable references). 
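To make the word-splitting difference concrete, here is a small illustrative session (a sketch; the dash case assumes dash is what /bin/sh points to):
bar='b c'

foo=$bar            # genuine assignment: the expansion result is never
echo "$foo"         # word-split, so this prints 'b c' in every POSIX-like shell

export foo=$bar     # fine in bash/ksh/zsh, which extend assignment logic to export;
                    # in dash the expansion IS word-split, so export receives the
                    # two arguments 'foo=b' and 'c'
export foo="$bar"   # portable: double-quoting is safe everywhere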
Tilde ( ~ ) expansion happens as follows in genuine assignment statements: in addition to the ~ needing to be unquoted, as usual, it is also only applied:
If the entire RHS is ~ ; e.g.: foo=~ # same as: foo="$HOME"
Otherwise: only if both of the following conditions are met:
if ~ starts the string or is preceded by an unquoted :
if ~ is followed by an unquoted / .
e.g.,
foo=~/bin # same as foo="$HOME/bin"
foo=$foo:~/bin # same as foo="$foo:$HOME/bin"
Example This example demonstrates that in bash , ksh , and zsh you can get away without double-quoting, even when using export , but I do not recommend it.
#!/usr/bin/env bash
# or ksh or zsh - but NOT /bin/sh!
# Create env. variable with whitespace and other shell metacharacters
export FOO="b:c &|<> d"
# Extend the value - the double quotes here are optional, but ONLY
# because the literal part, 'a:', contains no whitespace or other shell metacharacters.
# To be safe, DO double-quote the RHS.
export FOO=a:$FOO # OK - $FOO now contains 'a:b:c &|<> d'
[1] As @gniourf_gniourf points out: Use of export to modify the value of PATH is optional, because once a variable is marked as exported, you can use a regular assignment ( PATH=... ) to change its value. That said, you may still choose to use export , so as to make it explicit that the variable being modified is exported.
[2] @gniourf_gniourf states that a future version of the POSIX standard may introduce the local builtin. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33318499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2436744/']} | jdg_77018 |
stackexchange | llm_judgeable_groundtruth_similarity | 2962210 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it really possible, with Google's V8 engine, to compile JavaScript into native code, save it as a binary file, and execute it whenever I want through my software environment, on any machine?
Now provide the response and nothing else.
| You can use the V8 snapshot functionality to precompile the code. This still means that you have to have a full version of V8 running to load the snapshot (i.e., you don't get stand-alone native code, it needs to run inside the V8 VM), so all you save is the compilation time. Also, the quality of snapshot code isn't necessarily as good as JIT'ed code because JIT code can use, e.g., SSE2/SSE3 if it's available, which snapshots can't assume. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2962210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/140937/']} | jdg_77019 |
stackexchange | llm_judgeable_groundtruth_similarity | 48637 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be a nice space (manifold, CW-complex, what you prefer). I was wondering if there is a computable relation between the homology of $\Omega X$, the loop space of $X$, and the homology of $X$. I know that, almost by definition, the homotopy groups are the same (but shifted a dimension). Because the relation between homotopy groups and homology groups is very difficult, I expect that the homology of $\Omega X$ is very hard to compute in general. References would be great.
Now provide the response and nothing else.
| The general idea for computing $H(\Omega X)$ (due to Serre, AFAIK) is to consider the (Serre) fibration $\Omega X\to PX\to X$, where the path space $PX$ is contractible, and to use the Leray-Serre spectral sequence (in particular, it allows one to compute $H(\Omega X;\mathbb Q)$ easily, at least in the simply-connected case; cohomology with integer coefficients is, indeed, more complicated). It's discussed, I believe, in any textbook covering the LSSS, e.g. in Hatcher's . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/48637', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5882/']} | jdg_77020 |
stackexchange | llm_judgeable_groundtruth_similarity | 10698962 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to do some 2D drawing and thus want to implement some matrix transformations. With my light mathematics background I am trying to understand how to do so in C# (any other OOP language would do, obviously). Everything I read explains that we need to work with 3x3 matrices to be able to cope with translations, because a translation cannot be produced by multiplication with a 2x2 matrix. But it is with multiplications of matrices that we create our transformations. So we work with something like:
{ x1, x2, tx }
{ y1, y2, ty }
{ 0, 0, 1 }
I understand the meaning of the third column, but why do we need the third row? In an identity matrix, as well as in a rotation, scale, or translation, the last row is the same. Are there operations I have not reached yet which will need it? Is it because some languages (Java) perform better with "squared dimensions" arrays? If so, I can use 3 columns and 2 rows in C# (since jagged arrays work as well or better). For example, for a rotation + translation I have a matrix like this:
{ cos(rot)*x1, (-sin(rot))*x2, tx }
{ sin(rot)*y1, cos(rot)*y2, ty }
{ 0, 0, 1 }
No need of the last row.
Now provide the response and nothing else.
| "it is with multiplications of matrices that we create our transformations" This is why we want square matrices. Suppose we did what you propose, and used 2x3 matrices for our transformations. Then a rotation would be
( x1, x2, 0 )
( y1, y2, 0 )
and a translation would be
( 1, 0, tx )
( 0, 1, ty )
and we could perform either rotations or translations by multiplying our matrix by a column vector representing the point:
    ( x )
  M ( y )
    ( 1 )
to get correct answers. However - how would we go about composing transformations? Indeed, for your "for a rotation + translation I have a matrix like this" example, how did you get to that matrix? Sure, in this case you can just write it out, but in general? Well, you know the answer: "it is with multiplications of matrices that we create our transformations". So it must be possible to multiply two transformation matrices to give another transformation matrix. And the rules of matrix multiplication show that this:
( . . . ) ( . . . )
( . . . ) ( . . . ) = ???
is not a valid matrix multiplication. We need matrices that can be multiplied in order for our transformations to be composable. So we have that extra row. Now, the way I've expressed it here is in fact completely backward from the standard mathematical presentation, in which the familiar transformations of rotation and translation are just special cases of the full power of homogeneous coordinate transformations on the projective plane - but I think it will do to show you why we need that extra row - to make the matrix square, and thus able to be multiplied with like matrices. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10698962', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1263612/']} | jdg_77021 |
stackexchange | llm_judgeable_groundtruth_similarity | 230530 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $(X,\tau)$ be a Hausdorff space and let ${\cal D}$ denote the collection of dense subsets of $(X,\tau)$. Is it possible that there is another Hausdorff topology $\tau_1 \neq \tau$ on $X$ such that the collection of dense subsets of the space $(X,\tau_1)$ also equals ${\cal D}$?
Now provide the response and nothing else.
| The standard topology and the lower limit topology on $\mathbb{R}$ have the same dense subsets. They are two different topologies (even up to homeomorphism) on the real line. So the next question could be: "Is the collection of open dense subsets a 'fingerprint' for Hausdorff topologies?" | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/230530', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8628/']} | jdg_77022 |
stackexchange | llm_judgeable_groundtruth_similarity | 6312023 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a PHP syntax checker plugin for Notepad++? Please don't answer "Use another editor instead"
Now provide the response and nothing else.
| Try the NppExec plugin for Notepad++. Using it, create a command like this:
cmd.exe /K c:\your\path\to\php.exe -l "YOUR_FULL_FILE_NAME"
Instead of YOUR_FULL_FILE_NAME you should use the appropriate Notepad++ macro -- I think it is $(FULL_CURRENT_PATH) , but double-check with the NppExec manual (it installs together with the plugin). P.S. But any IDE will be better for sure (I'm using PhpStorm). If an IDE is too heavy for your PC, then look for PHP-oriented editors, like Blumentals RapidPHP etc. (they're lighter than a full IDE but may have all the really important features). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6312023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/32242/']} | jdg_77023 |
stackexchange | llm_judgeable_groundtruth_similarity | 38308041 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created a default empty project on Android Studio 2.1.2 with API 24. In the sample project, Google offers a deprecated class ApplicationTestCase : This class was deprecated in API level 24. Use ActivityTestRule instead. New tests should be written using the Android Testing Support Library. Sample:
import android.app.Application;
import android.test.ApplicationTestCase;

/**
 * <a href="http://d.android.com/tools/testing/testing_android.html">Testing Fundamentals</a>
 */
public class ApplicationTest extends ApplicationTestCase<Application> {
    public ApplicationTest() {
        super(Application.class);
    }
}
My questions: Why is ApplicationTestCase now deprecated? How do I replace ApplicationTestCase with ActivityTestRule? EDIT: I tried with Espresso , but on API 24 ( compileSdkVersion 24 ) I get these errors:
Error:Conflict with dependency 'com.android.support:appcompat-v7'. Resolved versions for app (24.0.0) and test app (23.1.1) differ. See http://g.co/androidstudio/app-test-app-conflict for details.
Error:Conflict with dependency 'com.android.support:design'. Resolved versions for app (24.0.0) and test app (23.1.1) differ. See http://g.co/androidstudio/app-test-app-conflict for details.
Error:Conflict with dependency 'com.android.support:support-annotations'. Resolved versions for app (24.0.0) and test app (23.1.1) differ. See http://g.co/androidstudio/app-test-app-conflict for details.
Error:Conflict with dependency 'com.android.support:recyclerview-v7'. Resolved versions for app (24.0.0) and test app (23.1.1) differ. See http://g.co/androidstudio/app-test-app-conflict for details.
when I try to add these libs in my build.gradle:
// Android JUnit Runner
androidTestCompile 'com.android.support.test:runner:0.5'
// JUnit4 Rules
androidTestCompile 'com.android.support.test:rules:0.5'
// Espresso core
androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2'
// Espresso-contrib for DatePicker, RecyclerView, Drawer actions, Accessibility checks, CountingIdlingResource
androidTestCompile 'com.android.support.test.espresso:espresso-contrib:2.2.2'
// Espresso-web for WebView support
androidTestCompile 'com.android.support.test.espresso:espresso-web:2.2.2'
// Espresso-idling-resource for synchronization with background jobs
androidTestCompile 'com.android.support.test.espresso:espresso-idling-resource:2.2.2'
My conclusion is that for the moment neither ApplicationTestCase nor Espresso works on Android API 24. Is this right? EDIT: 2016-08-05 I fixed the previous error with Espresso like this:
def espressoVersion = '2.2.2'
def testRunnerVersion = '0.5'
androidTestCompile "com.android.support.test:rules:${testRunnerVersion}"
androidTestCompile "com.android.support.test.espresso:espresso-core:${espressoVersion}"
configurations.androidTestCompile.dependencies.each { androidTestCompileDependency ->
    androidTestCompileDependency.exclude group: 'com.android.support'
}
Now provide the response and nothing else.
| The new androidTest example that the beta version of Android Studio 2.2 generates looks like this:
@RunWith(AndroidJUnit4.class)
public class ExampleInstrumentedTest {
    @Test
    public void useAppContext() throws Exception {
        // Context of the app under test.
        Context appContext = InstrumentationRegistry.getTargetContext();
        assertEquals("org.mypackage", appContext.getPackageName());
    }
}
Just like the deprecation warning suggests, the new instrumentation tests should use InstrumentationRegistry instead of extending from AndroidTestCase . Run them with AndroidJUnit4 . The relevant dependencies section in build.gradle looks like this:
androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
    exclude group: 'com.android.support', module: 'support-annotations'
}) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38308041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1832221/']} | jdg_77024 |
stackexchange | llm_judgeable_groundtruth_similarity | 10426190 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I clone an element and insert it 5 times right after each other? This of course is the base statement: $('.col').clone().insertAfter('.col'); Here's what I need to get: <div class="col" id="original"> </div><div class="col"> </div><div class="col"> </div><div class="col"> </div><div class="col"> </div><div class="col"> </div> The selector doesn't need to be using an unique id, it can also be a class selector. I could just repeat the base statement four times but there must be a more elegant way?
Now provide the response and nothing else.
| Use a loop, like this:

$(document).ready(function() {
    var e = $('.col');
    for (var i = 0; i < 5; i++) {
        e.clone().insertAfter(e);
    }
});

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="col">clone me...</div>

Put the element in a variable before the loop, otherwise you will run into problems with several elements having the same id, if your selector were based on an id (e.g. $("#col1")). If your selector uses a class, it doesn't cause the same conflicts as duplicate id's, but you should still put the element in a variable before the loop, otherwise you will end up with a lot more elements than you want.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10426190', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/983438/']} | jdg_77025 |
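If you prefer to avoid the loop entirely, .after() also accepts multiple content arguments, so the same effect can be had in one call (a sketch; it still relies on caching the element first):

$(document).ready(function() {
    var e = $('.col');
    // One insertion call with five independent clones:
    e.after(e.clone(), e.clone(), e.clone(), e.clone(), e.clone());
});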
stackexchange | llm_judgeable_groundtruth_similarity | 59378267 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to load a list of events and display a loading indicator while fetching data. I'm trying the Provider pattern (actually refactoring an existing application), so the display of the event list is conditional on a status managed in the provider. The problem is that when I call notifyListeners() too quickly, I get this exception:

════════ Exception caught by foundation library ════════
The following assertion was thrown while dispatching notifications for EventProvider:
setState() or markNeedsBuild() called during build.
...
The EventProvider sending notification was: Instance of 'EventProvider'
════════════════════════════════════════

Waiting for some milliseconds before calling notifyListeners() solves the problem (see the commented line in the provider class below). This is a simple example based on my code (hope it is not over-simplified):

main function:

Future<void> main() async {
  runApp(
    MultiProvider(
      providers: [
        ChangeNotifierProvider(create: (_) => LoginProvider()),
        ChangeNotifierProvider(create: (_) => EventProvider()),
      ],
      child: MyApp(),
    ),
  );
}

root widget:

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final LoginProvider _loginProvider = Provider.of<LoginProvider>(context, listen: true);
    final EventProvider _eventProvider = Provider.of<EventProvider>(context, listen: false);

    // load user events when user is logged in
    if (_loginProvider.loggedUser != null) {
      _eventProvider.fetchEvents(_loginProvider.loggedUser.email);
    }

    return MaterialApp(
      home: _loginProvider.status == AuthStatus.Authenticated
          ? MyHomePage()
          : MyLoginPage(),
    );
  }
}

home page:

class MyHomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final EventProvider _eventProvider = Provider.of<EventProvider>(context, listen: true);

    return Scaffold(
      body: _eventProvider.status == EventLoadingStatus.Loading
          ? CircularProgressIndicator()
          : ListView.builder(...)
    );
  }
}

event provider:

enum EventLoadingStatus { NotLoaded, Loading, Loaded }

class EventProvider extends ChangeNotifier {
  final List<Event> _events = [];
  EventLoadingStatus _eventLoadingStatus = EventLoadingStatus.NotLoaded;

  EventLoadingStatus get status => _eventLoadingStatus;

  Future<void> fetchEvents(String email) async {
    //await Future.delayed(const Duration(milliseconds: 100), (){});
    _eventLoadingStatus = EventLoadingStatus.Loading;
    notifyListeners();
    List<Event> events = await EventService().getEventsByUser(email);
    _events.clear();
    _events.addAll(events);
    _eventLoadingStatus = EventLoadingStatus.Loaded;
    notifyListeners();
  }
}

Can someone explain what happens?
Now provide the response and nothing else.
| You are calling fetchEvents from within your build code for the root widget. Within fetchEvents, you call notifyListeners, which, among other things, calls setState on widgets that are listening to the event provider. This is a problem because you cannot call setState on a widget while that widget is in the middle of rebuilding.

Now at this point, you might be thinking "but the fetchEvents method is marked as async, so it should be running asynchronously". And the answer to that is "yes and no". The way async works in Dart is that when you call an async method, Dart attempts to run as much of the code in the method as possible synchronously. In a nutshell, that means any code in your async method that comes before an await is going to get run as normal synchronous code. If we take a look at your fetchEvents method:

Future<void> fetchEvents(String email) async {
  //await Future.delayed(const Duration(milliseconds: 100), (){});
  _eventLoadingStatus = EventLoadingStatus.Loading;
  notifyListeners();
  List<Event> events = await EventService().getEventsByUser(email);
  _events.clear();
  _events.addAll(events);
  _eventLoadingStatus = EventLoadingStatus.Loaded;
  notifyListeners();
}

We can see that the first await happens at the call to EventService().getEventsByUser(email). There's a notifyListeners before that, so it is going to get called synchronously. Which means calling this method from the build method of a widget is as though you called notifyListeners in the build method itself, which, as I've said, is forbidden.

The reason it works when you add the call to Future.delayed is that there is now an await at the top of the method, causing everything underneath it to run asynchronously. Once the execution gets to the part of the code that calls notifyListeners, Flutter is no longer in a state of rebuilding widgets, so it is safe to call that method at that point.

You could instead call fetchEvents from the initState method, but that runs into another similar issue: you also can't call setState before the widget has been initialized.

The solution, then, is this. Instead of notifying all the widgets listening to the event provider that it is loading, have it be loading by default when it is created. (This is fine since the first thing it does after being created is load all the events, so there shouldn't ever be a scenario where it needs to not be loading when it's first created.) This eliminates the need to mark the provider as loading at the start of the method, which in turn eliminates the need to call notifyListeners there:

EventLoadingStatus _eventLoadingStatus = EventLoadingStatus.Loading;
...

Future<void> fetchEvents(String email) async {
  List<Event> events = await EventService().getEventsByUser(email);
  _events.clear();
  _events.addAll(events);
  _eventLoadingStatus = EventLoadingStatus.Loaded;
  notifyListeners();
}
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/59378267', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1274485/']} | jdg_77026 |
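If you do still need to kick off the fetch from widget code, a common workaround (a sketch; EventProvider is from the question, the host widget is assumed to be stateful, and loggedUserEmail is an illustrative variable) is to schedule the call for after the current frame, so notifyListeners can never fire mid-build:

@override
void initState() {
  super.initState();
  // Runs once, after the first frame has been laid out and painted.
  WidgetsBinding.instance.addPostFrameCallback((_) {
    Provider.of<EventProvider>(context, listen: false)
        .fetchEvents(loggedUserEmail);
  });
}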
stackexchange | llm_judgeable_groundtruth_similarity | 3564717 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found that there's a clearRect() method, but can't find any to clear an arc (or a full circle). Is there any way to clear an arc in canvas?
Now provide the response and nothing else.
| There is no clearArc; however, you can use composite operations to achieve the same thing:

context.globalCompositeOperation = 'destination-out'

According to MDC, the effect of this setting is: "The existing content is kept where it doesn't overlap the new shape." https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Canvas_tutorial/Compositing So any filled shape drawn with this mode on will end up erasing the current canvas content instead of painting over it.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3564717', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/136834/']} | jdg_77027 |
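To make the technique concrete, a sketch of a clearCircle helper (the function name and parameters are illustrative; save/restore keeps the composite mode from leaking into later draws):

function clearCircle(context, x, y, radius) {
  context.save();
  context.globalCompositeOperation = 'destination-out';
  context.beginPath();
  context.arc(x, y, radius, 0, 2 * Math.PI);
  context.fill();      // with destination-out, filling erases the circle
  context.restore();   // back to the default 'source-over' mode
}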
stackexchange | llm_judgeable_groundtruth_similarity | 40888754 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the context of the comments and answers given at "List.of() or Collections.emptyList()" and "List.of(...) or Collections.unmodifiableList()", I came up with the following two rules of thumb (which also apply to the Set and Map factories accordingly).

Don't replace all occurrences. Keep using Collections.emptyList() for readability and when e.g. initializing lazy field members like:

class Bean {
    private List<Bean> beans = Collections.emptyList();
    public List<Bean> getBeans() {
        if (beans == Collections.EMPTY_LIST) {
            beans = new ArrayList<>();
        }
        return beans;
    }
}

Use the new factories as method argument builders. Use the new factories List.of() and variants as a quick and less-to-type version when calling an executable with List parameter(s). Here are my current substitution patterns:

Collections.emptyList()      --> List.of()
Collections.singletonList(a) --> List.of(a)
Arrays.asList(a, ..., z)     --> List.of(a, ..., z)

In a fictional usage of Collections.indexOfSubList, the following lines

Collections.indexOfSubList(Arrays.asList(1, 2, 3), Collections.emptyList());
Collections.indexOfSubList(Arrays.asList(1, 2, 3), Collections.singletonList(1));
Collections.indexOfSubList(Arrays.asList(1, 2, 3), Arrays.asList(1));
Collections.indexOfSubList(Arrays.asList(1, 2, 3), Arrays.asList(2, 3));
Collections.indexOfSubList(Arrays.asList(1, 2, 3), Arrays.asList(1, 2, 3));

will read

Collections.indexOfSubList(List.of(1, 2, 3), List.of());
Collections.indexOfSubList(List.of(1, 2, 3), List.of(1));
Collections.indexOfSubList(List.of(1, 2, 3), List.of(1));
Collections.indexOfSubList(List.of(1, 2, 3), List.of(2, 3));
Collections.indexOfSubList(List.of(1, 2, 3), List.of(1, 2, 3));

Do you (dis-)agree?
Now provide the response and nothing else.
| Generally, use of the new factories is safe for new code, where there is no existing code that depends on behaviors of the existing collections. There are several reasons the new collection factories aren't drop-in replacements for code that initializes collections using the existing APIs. Obviously immutability is one of the most prominent reasons; if you need to modify the collection later, it obviously can't be immutable! But there are other reasons as well, some of them quite subtle. For an example of replacement of existing APIs with the new APIs, see JDK-8134373. The review threads are here: Part1 Part2. Here's a rundown of the issues.

Array Wrapping vs Copying. Sometimes you have an array, e.g. a varargs parameter, and you want to process it as a list. Sometimes Arrays.asList is the most appropriate thing here, as it's just a wrapper. By contrast, List.of creates a copy, which might be wasteful. On the other hand, the caller still has a handle to the wrapped array and can modify it, which might be a problem, so sometimes you want to pay the expense of copying it, for example, if you want to keep a reference to the list in an instance variable.

Hashed Collection Iteration Order. The new Set.of and Map.of structures randomize their iteration order. The iteration order of HashSet and HashMap is undefined, but in practice it turns out to be relatively stable. Code can develop inadvertent dependencies on iteration order. Switching to the new collection factories may expose old code to iteration order dependencies, surfacing latent bugs.

Prohibition of Nulls. The new collections prohibit nulls entirely, whereas the common non-concurrent collections (ArrayList, HashMap) allow them.

Serialization Format. The new collections have a different serialization format from the old ones. If the collection is serialized, or it's stored in some other class that's serialized, the serialized output will differ. This might or might not be an issue. But if you expect to interoperate with other systems, this could be a problem. In particular, if you transmit the serialized form of the new collections to a Java 8 JVM, it will fail to deserialize, because the new classes don't exist on Java 8.

Strict Mutator Method Behavior. The new collections are immutable, so of course they throw UnsupportedOperationException when mutator methods are called. There are some edge cases, however, where behavior is not consistent across all the collections. For example, Collections.singletonList("").addAll(Collections.emptyList()) does nothing, whereas List.of("").addAll(Collections.emptyList()) will throw UOE. In general, the new collections and the unmodifiable wrappers are consistently strict in throwing UOE on any call to a mutator method, even if no actual mutation would occur. Other immutable collections, such as those from Collections.empty* and Collections.singleton*, will throw UOE only if an actual mutation would occur.

Duplicates. The new Set and Map factories reject duplicate elements and keys. This is usually not a problem if you're initializing a collection with a list of constants. Indeed, if a list of constants has a duplicate, it's probably a bug. Where this is potentially an issue is when a caller is allowed to pass in a collection or array (e.g., varargs) of elements. If the caller passes in duplicates, the existing APIs would silently omit the duplicates, whereas the new factories will throw IllegalArgumentException. This is a behavioral change that might impact callers.
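A few of these differences are easy to demonstrate directly; a small sketch (the inline comments describe the expected behavior):

import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class FactoryDifferences {
    public static void main(String[] args) {
        Integer[] backing = {1, 2, 3};

        List<Integer> wrapped = Arrays.asList(backing); // wraps the array
        List<Integer> copied  = List.of(backing);       // copies the array

        backing[0] = 99;
        System.out.println(wrapped.get(0)); // 99 - mutation shows through the wrapper
        System.out.println(copied.get(0));  // 1  - the copy is unaffected

        try {
            Set.of(1, 1);                   // duplicates are rejected
        } catch (IllegalArgumentException e) {
            System.out.println("duplicate rejected");
        }

        List.of(1, null);                   // nulls are rejected: NullPointerException
    }
}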
None of these issues are fatal problems, but they are behavioral differences that you should be aware of when retrofitting existing code. Unfortunately this means that doing a mass replacement of existing calls with the new collection factories is probably ill-advised. It's probably necessary to do some inspection at each site to assess any potential impact of the behavioral changes. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40888754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1431016/']} | jdg_77028 |
stackexchange | llm_judgeable_groundtruth_similarity | 15130497 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I notice that here, "Box and whiskers plot", the call:

p + geom_boxplot(aes(fill = factor(cyl)))

generates bright red/green/blue colors for the boxplot fill, while:

p + geom_boxplot(aes(fill = factor(vs)))

generates a distinct pale green/red pair of colors. In my data I get the second set of colors, but I would like the first set (as in p + geom_boxplot(aes(fill = factor(cyl)))). What controls which set of colors ggplot uses, and how can you change it?
Now provide the response and nothing else.
| The default colours are evenly spaced hues around the colour wheel. You can check how this is generated from here. You can use scale_fill_manual with those colours:

p + scale_fill_manual(values=c("#F8766D", "#00BA38"))

Here, I used ggplot_build(p)$data from cyl to get the colors. Alternatively, you can use another palette as well, like so:

p + scale_fill_brewer(palette="Set1")

And to find the colours in the palette, you can do:

require(RColorBrewer)
brewer.pal(9, "Set1")

Check the package documentation for the available palettes and other options, if you're interested.

Edit: @user248237dfsf, as I already pointed out in the link at the top, this function from @Andrie shows the colors generated:

ggplotColours <- function(n = 6, h = c(0, 360) + 15){
  if ((diff(h) %% 360) < 1) h[2] <- h[2] - 360/n
  hcl(h = seq(h[1], h[2], length = n), c = 100, l = 65)
}

> ggplotColours(2)
# [1] "#F8766D" "#00BFC4"
> ggplotColours(3)
# [1] "#F8766D" "#00BA38" "#619CFF"

If you look at the two colours generated, the first one is the same, but the second colour is not the same when n=2 and n=3. This is because the function generates colours of evenly spaced hues. If you want to use the colors of cyl for vs, then you'll have to set scale_fill_manual and use the colours generated with n=3 from this function. To verify that this is indeed what's happening you could do:

p1 <- ggplot(mtcars, aes(factor(cyl), mpg)) + geom_boxplot(aes(fill = factor(cyl)))
p2 <- ggplot(mtcars, aes(factor(cyl), mpg)) + geom_boxplot(aes(fill = factor(vs)))

Now, if you do:

ggplot_build(p1)$data[[1]]$fill
# [1] "#F8766D" "#00BA38" "#619CFF"
ggplot_build(p2)$data[[1]]$fill
# [1] "#F8766D" "#00BFC4" "#F8766D" "#00BFC4" "#F8766D"

You see that these are the colours generated using ggplotColours, and the reason for the difference is also obvious. I hope this helps.
| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15130497', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_77029 |
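Putting that together, a sketch of forcing the cyl-style palette onto the vs fill (only the first two of the three evenly spaced hues are used, since factor(vs) has two levels):

p2 + scale_fill_manual(values = ggplotColours(n = 3))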
stackexchange | llm_judgeable_groundtruth_similarity | 117357 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is Entity Framework 4 a good solution for a public website with potentially 1000 hits/second? In my understanding, EF is a viable solution for mostly smaller or intranet websites, but wouldn't scale easily for something like a popular community website (I know SO is using LINQ to SQL, but... I'd like more examples/proof...). Now I am standing at the crossroads of either choosing a pure ADO.NET approach or EF4. Do you think the improved developer productivity with EF is worth the lost performance and granular access of ADO.NET (with stored procedures)? Any serious issues that a high-traffic website might face if it were using EF? Thank you in advance.
Now provide the response and nothing else.
| The question "which ORM should I use" is really targeting the tip of a huge iceberg when it comes to the overall data access strategy and performance optimization in a large scale application. All of the following things (roughly in order of importance) are going to affect throughput, and all of them are handled (sometimes in different ways) by most of the major ORM frameworks out there:

Database Design and Maintenance

This is, by a wide margin, the single most important determinant of the throughput of a data-driven application or web site, and often totally ignored by programmers. If you don't use proper normalization techniques, your site is doomed. If you don't have primary keys, almost every query will be dog-slow. If you use well-known anti-patterns such as using tables for Key-Value Pairs (AKA Entity-Attribute-Value) for no good reason, you'll explode the number of physical reads and writes. If you don't take advantage of the features the database gives you, such as page compression, FILESTREAM storage (for binary data), SPARSE columns, hierarchyid for hierarchies, and so on (all SQL Server examples), then you will not see anywhere near the performance that you could be seeing. You should start worrying about your data access strategy after you've designed your database and convinced yourself that it's as good as it possibly can be, at least for the time being.

Eager vs. Lazy Loading

Most ORMs use a technique called lazy loading for relationships, which means that by default they will load one entity (table row) at a time, and make a round-trip to the database every time they need to load one or many related (foreign key) rows. This isn't a good or bad thing; it rather depends on what's actually going to be done with the data, and how much you know up-front. Sometimes lazy-loading is absolutely the right thing to do. NHibernate, for example, may decide not to query for anything at all and simply generate a proxy for a particular ID. If all you ever need is the ID itself, why should it ask for more? On the other hand, if you are trying to print a tree of every single element in a 3-level hierarchy, lazy-loading becomes an O(N²) operation, which is extremely bad for performance. One interesting benefit to using "pure SQL" (i.e. raw ADO.NET queries/stored procedures) is that it basically forces you to think about exactly what data is necessary to display any given screen or page. ORMs and lazy-loading features don't prevent you from doing this, but they do give you the opportunity to be... well, lazy, and accidentally explode the number of queries you execute. So you need to understand your ORM's eager-loading features and be ever vigilant about the number of queries you're sending to the server for any given page request.

Caching

All major ORMs maintain a first-level cache, AKA "identity cache", which means that if you request the same entity twice by its ID, it doesn't require a second round-trip, and also (if you designed your database correctly) gives you the ability to use optimistic concurrency. The L1 cache is pretty opaque in L2S and EF, you kind of have to trust that it's working. NHibernate is more explicit about it (Get/Load vs. Query/QueryOver). Still, as long as you try to query by ID as much as possible, you should be fine here. A lot of people forget about the L1 cache and repeatedly look up the same entity over and over again by something other than its ID (i.e. a lookup field).
If you need to do this then you should save the ID or even the entire entity for future lookups.

There's also a level 2 cache ("query cache"). NHibernate has this built-in. Linq to SQL and Entity Framework have compiled queries, which can help reduce app server loads quite a bit by compiling the query expression itself, but it doesn't cache the data. Microsoft seems to consider this an application concern rather than a data-access concern, and this is a major weak point of both L2S and EF. Needless to say it's also a weak point of "raw" SQL. In order to get really good performance with basically any ORM other than NHibernate, you need to implement your own caching façade. There's also an L2 cache "extension" for EF4 which is okay, but not really a wholesale replacement for an application-level cache.

Number of Queries

Relational databases are based on sets of data. They're really good at producing large amounts of data in a short amount of time, but they're nowhere near as good in terms of query latency because there's a certain amount of overhead involved in every command. A well-designed app should play to the strengths of this DBMS and try to minimize the number of queries and maximize the amount of data in each. Now I'm not saying to query the entire database when you only need one row. What I'm saying is, if you need the Customer, Address, Phone, CreditCard, and Order rows all at the same time in order to serve a single page, then you should ask for them all at the same time, don't execute each query separately. Sometimes it's worse than that, you'll see code that queries the same Customer record 5 times in a row, first to get the Id, then the Name, then the EmailAddress, then... it's ridiculously inefficient. Even if you need to execute several queries that all operate on completely different sets of data, it's usually still more efficient to send it all to the database as a single "script" and have it return multiple result sets. It's the overhead you're concerned with, not the total amount of data. This might sound like common sense but it's often really easy to lose track of all the queries that are being executed in various parts of the application; your Membership Provider queries the user/role tables, your Header action queries the shopping cart, your Menu action queries the site map table, your Sidebar action queries the featured product list, and then maybe your page is divided into a few separate autonomous areas which query the Order History, Recently Viewed, Category, and Inventory tables separately, and before you know it, you're executing 20 queries before you can even start to serve the page. It just utterly destroys performance. Some frameworks - and I'm thinking mainly of NHibernate here - are incredibly clever about this and allow you to use something called futures which batch up entire queries and try to execute them all at once, at the last possible minute. AFAIK, you're on your own if you want to do this with any of the Microsoft technologies; you have to build it into your application logic.

Indexing, Predicates, and Projections

At least 50% of devs I speak to and even some DBAs seem to have trouble with the concept of covering indexes. They think, "well, the Customer.Name column is indexed, so every lookup I do on the name should be fast." Except it doesn't work that way unless the Name index covers the specific column you're looking up. In SQL Server, that's done with INCLUDE in the CREATE INDEX statement.
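To make that concrete, a sketch of a covering index (table and column names are illustrative, not from the question):

-- Covers queries that filter on Name and select only Name and Email:
-- the included column lives in the index leaf, so no key lookup is needed.
CREATE INDEX IX_Customers_Name
    ON Customers (Name)
    INCLUDE (Email);

-- This query can now be answered entirely from the index:
-- SELECT Name, Email FROM Customers WHERE Name = 'John Doe';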
If you naïvely use SELECT * everywhere - and that is more or less what every ORM will do unless you explicitly specify otherwise using a projection - then the DBMS may very well choose to completely ignore your indexes because they contain non-covered columns. A projection means that, for example, instead of doing this:

from c in db.Customers where c.Name == "John Doe" select c

You do this instead:

from c in db.Customers where c.Name == "John Doe"
select new { c.Id, c.Name }

And this will, for most modern ORMs, instruct it to only go and query the Id and Name columns which are presumably covered by the index (but not the Email, LastActivityDate, or whatever other columns you happened to stick in there).

It's also very easy to completely blow away any indexing benefits by using inappropriate predicates. For example:

from c in db.Customers where c.Name.Contains("Doe")

...looks almost identical to our previous query but in fact will result in a full table or index scan because it translates to LIKE '%Doe%'. Similarly, another query which looks suspiciously simple is:

from c in db.Customers where (maxDate == null) || (c.BirthDate >= maxDate)

Assuming you have an index on BirthDate, this predicate has a good chance to render it completely useless. Our hypothetical programmer here has obviously attempted to create a kind of dynamic query ("only filter the birth date if that parameter was specified"), but this isn't the right way to do it. Written like this instead:

from c in db.Customers where c.BirthDate >= (maxDate ?? DateTime.MinValue)

...now the DB engine knows how to parameterize this and do an index seek. One minor, seemingly insignificant change to the query expression can drastically affect performance. Unfortunately LINQ in general makes it all too easy to write bad queries like this because sometimes the providers are able to guess what you were trying to do and optimize the query, and sometimes they aren't. So you end up with frustratingly inconsistent results which would have been blindingly obvious (to an experienced DBA, anyway) had you just written plain old SQL. Basically it all comes down to the fact that you really have to keep a close eye on both the generated SQL and the execution plans they lead to, and if you're not getting the results you expect, don't be afraid to bypass the ORM layer once in a while and hand-code the SQL. This goes for any ORM, not just EF.

Transactions and Locking

Do you need to display data that's current up to the millisecond? Maybe - it depends - but probably not. Sadly, Entity Framework doesn't give you nolock, you can only use READ UNCOMMITTED at the transaction level (not table level). In fact none of the ORMs are particularly reliable about this; if you want to do dirty reads, you have to drop down to the SQL level and write ad-hoc queries or stored procedures. So what it boils down to, again, is how easy it is for you to do that within the framework. Entity Framework has come a long way in this regard - version 1 of EF (in .NET 3.5) was god-awful, made it incredibly difficult to break through the "entities" abstraction, but now you have ExecuteStoreQuery and Translate, so it's really not too bad. Make friends with these guys because you'll be using them a lot. There's also the issue of write locking and deadlocks and the general practice of holding locks in the database for as little time as possible.
In this regard, most ORMs (including Entity Framework) actually tend to be better than raw SQL because they encapsulate the Unit of Work pattern, which in EF is SaveChanges. In other words, you can "insert" or "update" or "delete" entities to your heart's content, whenever you want, secure in the knowledge that no changes will actually get pushed to the database until you commit the unit of work. Note that a UOW is not analogous to a long-running transaction. The UOW still uses the optimistic concurrency features of the ORM and tracks all changes in memory. Not a single DML statement is emitted until the final commit. This keeps transaction times as low as possible. If you build your application using raw SQL, it's quite difficult to achieve this deferred behaviour. What this means for EF specifically: make your units of work as coarse as possible and don't commit them until you absolutely need to. Do this and you'll end up with much lower lock contention than you would using individual ADO.NET commands at random times.

In Conclusion: EF is completely fine for high-traffic/high-performance applications, just like every other framework is fine for high-traffic/high-performance applications. What matters is how you use it. Here's a quick comparison of the most popular frameworks and what features they offer in terms of performance (legend: N = Not supported, P = Partial, Y = yes/supported):

                              | L2S | EF1 | EF4 | NH3 | ADO
------------------------------+-----+-----+-----+-----+-----
Lazy Loading (entities)       |  N  |  N  |  N  |  Y  |  N
Lazy Loading (relationships)  |  Y  |  Y  |  Y  |  Y  |  N
Eager Loading (global)        |  N  |  N  |  N  |  Y  |  N
Eager Loading (per-session)   |  Y  |  N  |  N  |  Y  |  N
Eager Loading (per-query)     |  N  |  Y  |  Y  |  Y  |  Y
Level 1 (Identity) Cache      |  Y  |  Y  |  Y  |  Y  |  N
Level 2 (Query) Cache         |  N  |  N  |  P  |  Y  |  N
Compiled Queries              |  Y  |  P  |  Y  |  N  | N/A
Multi-Queries                 |  N  |  N  |  N  |  Y  |  Y
Multiple Result Sets          |  Y  |  N  |  P  |  Y  |  Y
Futures                       |  N  |  N  |  N  |  Y  |  N
Explicit Locking (per-table)  |  N  |  N  |  N  |  P  |  Y
Transaction Isolation Level   |  Y  |  Y  |  Y  |  Y  |  Y
Ad-Hoc Queries                |  Y  |  P  |  Y  |  Y  |  Y
Stored Procedures             |  Y  |  P  |  Y  |  Y  |  Y
Unit of Work                  |  Y  |  Y  |  Y  |  Y  |  N

As you can see, EF4 (the current version) doesn't fare too badly, but it's probably not the best if performance is your primary concern. NHibernate is much more mature in this area and even Linq to SQL provides some performance-enhancing features that EF still doesn't. Raw ADO.NET is often going to be faster for very specific data-access scenarios, but, when you put all the pieces together, it really doesn't offer a lot of important benefits that you get from the various frameworks. And, just to make completely sure that I sound like a broken record, none of this matters in the slightest if you don't design your database, application, and data access strategies properly. All of the items in the chart above are for improving performance beyond the baseline; most of the time, the baseline itself is what needs the most improvement.
| {} | {'log_upvote_score': 8, 'links': ['https://softwareengineering.stackexchange.com/questions/117357', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/39793/']} | jdg_77030 |
stackexchange | llm_judgeable_groundtruth_similarity | 1491223 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm given two simple matrices, and I'm told to solve for x.

$$A = \begin{pmatrix} 3 & -1 & 3 \\ 1 & 0 & 3 \\ 3 & -2 & -5 \end{pmatrix}, \qquad B = \begin{pmatrix} 14 \\ 11 \\ -11 \end{pmatrix}$$

I'm told to find x, where $Ax = B$. I tried doing this:

$$\begin{pmatrix} 3 & -1 & 3 \\ 1 & 0 & 3 \\ 3 & -2 & -5 \end{pmatrix} \begin{pmatrix} x \\ x \\ x \end{pmatrix} = \begin{pmatrix} 14 \\ 11 \\ -11 \end{pmatrix}$$

Then I tried to multiply it out and equate each row to the corresponding entry of B; however, I got $x = 14/5$ from the first row and $x = 11/4$ from the second, so I'm not even sure whether I'm allowed to do this, let alone whether it is correct.
Now provide the response and nothing else.
| Let $a>1.$ I assume $a^x$ is continuous, and that the basic exponent law $a^{x+y}=a^xa^y$ holds.

Claim: $a^x$ is convex on $[0,\infty).$

Proof: Because $a^x$ is continuous, it suffices to show $a^x$ is midpoint convex. Suppose $x,y\in [0,\infty).$ Using $(uv)^{1/2} \le (u+v)/2$ for nonnegative $u,v,$ we get $a^{(x+y)/2} = (a^{x} a^{y})^{1/2} \le (a^x+a^y)/2.$

Now if $f$ is convex on $[0,\infty),$ then $(f(x)-f(0))/x$ is an increasing function of $x$ for $x\in(0,\infty).$ This is a simple and easily proved property of convex functions.

Claim: $\lim_{x\to 0^+}(a^x-1)/x$ exists.

Proof: All of these quotients are bounded below by $0.$ As $x$ decreases to $0,$ $(a^x-1)/x$ decreases by the above. Because of the lower bound of $0,$ the limit exists.

It follows that $\lim_{x\to 0}(a^x-1)/x$ exists. This follows from the above and the fact that if $x>0,$ then $a^{-x} = 1/a^{x}.$ To handle $0<a<1,$ look at $[(1/a)^x-1]/x$ to see $\lim_{x\to 0}(a^x-1)/x$ exists.
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1491223', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/280007/']} | jdg_77031 |
stackexchange | llm_judgeable_groundtruth_similarity | 634257 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read several times that after you delete a row in an InnoDB table in MySQL, its space is not reused, so if you make a lot of INSERTs into a table and then periodically DELETE some rows, the table will use more and more space on disk, as if the rows were not deleted at all. Recently I've been told, though, that the space occupied by deleted rows is re-used, but only after some transactions are complete, and even then not fully. I am now confused. Can someone please make sense of this for me? I need to do a lot of INSERTs into an InnoDB table, and every X minutes I need to DELETE records that are more than Y minutes old. Do I have a problem of an ever-growing InnoDB table here, or is it paranoia?
Now provide the response and nothing else.
| It is paranoia :) DBs don't grow in size unnecessarily, but for performance reasons space is not freed either. What you've heard most probably is that if you delete records, that space is not given back to the Operating System. Instead, it's kept as empty space for the DB to re-use afterwards. This is because:

The DB needs to have some HD space to save its data; if it doesn't have any space, it reserves some empty space at first. When you insert a new row, a piece of that space is used. When you run out of free space, a new block is reserved, and so on.

Now, when you delete some rows, in order to prevent reserving more and more blocks, their space is kept free but never given back to the Operating System, so it can be re-used later without any need to reserve new blocks. As you can see, space is re-used, but never given back. That's the key point to your question.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/634257', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_77032 |
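If you ever do want to hand that free space back to the operating system - say, after a one-off massive purge - you can rebuild the table. A sketch (my_table is illustrative; shrinking the file on disk assumes innodb_file_per_table is enabled, so the table has its own .ibd file):

-- Rebuilds the table and its indexes; for InnoDB this is mapped to a
-- full table rebuild, which compacts the data file.
OPTIMIZE TABLE my_table;

-- Equivalent explicit rebuild:
ALTER TABLE my_table ENGINE=InnoDB;

Both take time and locks proportional to the table size, so for the periodic insert/delete pattern in the question, the built-in space re-use is usually the better deal.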
stackexchange | llm_judgeable_groundtruth_similarity | 8967303 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the example below:

#include <iostream>
using namespace std;

class base
{
  public:
    virtual int func()
    {
        cout << "vfunc in base class\n";
        return 0;
    }
};

class derived: public base
{
  public:
    double func()
    {
        cout << "vfunc in derived class\n";
        return 0;
    }
};

int main()
{
    base *bptr = new derived;
    bptr->func();
    return 0;
}

The compiler gives an error for the above code, saying that there is a conflicting type for the overridden function. Why is it not possible to override a function in the derived class with a different return type? I believe that in order to override a function, the base class virtual method needs to be redefined in the derived class, and to redefine a method, the signatures of the methods have to be the same. Since the return type is not part of the signature, I believe that even if there is a difference in return type, the method is still redefined. In that case, for the code above, the virtual function func is redefined in the derived class with a different return type. But the compiler throws an error. Is my understanding correct?
Now provide the response and nothing else.
| Overriding essentially means that either the Base class method or the Derived class method will be called at run-time, depending on the actual object pointed to by the pointer. It implies that every place where the Base class method can be called can be replaced by a call to the Derived class method without any change to the calling code. In order to achieve this, the only possible way is to restrict the return types of the overriding virtual methods to return the same type as the Base class method, or a type derived from it (co-variant return types), and the Standard enforces this condition. If the above condition were not in place, it would leave a window to break existing code by the addition of new functionality.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8967303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1220250/']} | jdg_77033 |
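A sketch of the covariant-return rule in action - this compiles because the derived override returns a pointer type derived from the base method's return type, so every caller that expects a base* still gets one:

#include <iostream>

struct base {
    virtual base* clone() {
        std::cout << "base::clone\n";
        return new base(*this);
    }
    virtual ~base() {}
};

struct derived : public base {
    // Covariant return: derived* converts to base* at every call site.
    derived* clone() {
        std::cout << "derived::clone\n";
        return new derived(*this);
    }
};

int main() {
    base *bptr = new derived;
    base *copy = bptr->clone(); // dispatches to derived::clone at run-time
    delete copy;
    delete bptr;
    return 0;
}

An int in the base and a double in the derived, as in the question, fail this test because double is not derived from int; there is no is-a relationship the compiler can rely on.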
stackexchange | llm_judgeable_groundtruth_similarity | 29203 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to Mathematica , and using version 8.0. I would like to calculate the 2D Fourier Transform of an Image with Mathematica and plot the magnitude and phase spectrum, as well as reconstruct the image with the inverse transform.
Now provide the response and nothing else.
| img = Import["ExampleData/lena.tif"];
Image[img, ImageSize -> 300]

data = ImageData[img]; (* get data *)
{nRow, nCol, nChannel} = Dimensions[data];
d = data[[All, All, 2]];
d = d*(-1)^Table[i + j, {i, nRow}, {j, nCol}];
fw = Fourier[d, FourierParameters -> {1, 1}];

(* adjust for better viewing as needed *)
fudgeFactor = 100;
abs = fudgeFactor*Log[1 + Abs@fw];
Labeled[Image[abs/Max[abs], ImageSize -> 300], Style["Magnitude spectrum", 18]]

arg = Arg@fw;
Labeled[Image[arg/Max[arg], ImageSize -> 300], Style["Phase spectrum", 18]]

P.S. I do think Ms. Lena looks prettier in the spatial domain than in the frequency domain.
| {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/29203', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/8727/']} | jdg_77034 |
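The question also asked for reconstruction via the inverse transform. A sketch, reusing fw, nRow, and nCol from above: invert with the same FourierParameters, then undo the (-1)^(i+j) centering shift (which is its own inverse):

dRec = Re@InverseFourier[fw, FourierParameters -> {1, 1}];
dRec = dRec*(-1)^Table[i + j, {i, nRow}, {j, nCol}]; (* undo the shift *)
Image[dRec, ImageSize -> 300] (* reconstructed channel *)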
stackexchange | llm_judgeable_groundtruth_similarity | 21533022 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to write a method like below to return a text document (.txt, .pdf, .doc, .docx, etc.). While there are good examples of posting a file in Web API 2.0 on the web, I couldn't find a relevant one for just downloading one. (I know how to do it with HttpResponseMessage.)

public async Task<IHttpActionResult> GetFileAsync(int FileId)
{
    // just returning file part (no other logic needed)
}

Does the above need to be async at all? I am only looking to return a stream. (Is that okay?) More importantly, before I end up doing the job one way or the other, I wanted to know what's the "right" way of doing this sort of job... (so approaches and techniques mentioning this would be greatly appreciated). Thanks.
Now provide the response and nothing else.
| Right, for your above scenario the action does not need to return an async action result. Here I am creating a custom IHttpActionResult. You can check my comments in the code below.

public IHttpActionResult GetFileAsync(int fileId)
{
    // NOTE: If there was any other 'async' stuff here, then you would need to return
    // a Task<IHttpActionResult>, but for this simple case you need not.
    return new FileActionResult(fileId);
}

public class FileActionResult : IHttpActionResult
{
    public FileActionResult(int fileId)
    {
        this.FileId = fileId;
    }

    public int FileId { get; private set; }

    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken cancellationToken)
    {
        HttpResponseMessage response = new HttpResponseMessage();
        response.Content = new StreamContent(File.OpenRead(@"<base path>" + FileId));
        response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");

        // NOTE: Here I am just setting the result on the Task and not really doing any async stuff.
        // But let's say you do stuff like contacting a File hosting service to get the file,
        // then you would do 'async' stuff here.
        return Task.FromResult(response);
    }
}
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21533022', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1491381/']} | jdg_77035 |
stackexchange | llm_judgeable_groundtruth_similarity | 313930 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've run into quite a few engineers from unrelated backgrounds who put unpopulated components on the BOM. Some will add a section clearly labelled DNP at the bottom; others will leave them dispersed throughout the BOM but highlight the rows. Having a DNP section seems like the way to go if you must do this, the only downside I can think of being that there will have to be more manual editing of the CAD package output. (I have personally witnessed this: the DNPs were changed at the last minute, the DNP section didn't get edited properly, and parts that shouldn't have been on the board were placed.) Leaving them throughout and highlighting the rows seems suboptimal because there could easily be duplicate rows for populated and not populated, and again, more manual editing. I don't see why this practice is necessary. A BOM by definition is a list of things required to build something. If a component is not on the BOM and assembly drawing, it should not be on the board. Adding components that aren't actually there just seems like a source of confusion further down the line for whoever enters the BOM into the ERP and for purchasing. What does putting unpopulated parts on the BOM achieve that leaving them off the BOM and assembly drawing doesn't?
Now provide the response and nothing else.
| If you don't explicitly document that these components are not to be placed, you will inevitably have your manufacturing team notice that there is a location on the board with no corresponding line in the BOM, and delay the build to send an engineering query asking what is supposed to be placed there. Explicitly documenting not-placed components avoids these queries, much like "this page intentionally left blank" in the manual avoids people asking what was supposed to be printed on the pages that were blank in their copy. | {} | {'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/313930', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/11074/']} | jdg_77036 |
stackexchange | llm_judgeable_groundtruth_similarity | 231269 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am interested in the idea of C++-like const, not that particular execution (like casting away const). Take for example C# -- it lacks C++-like const, and the reason for it is the usual one -- people and time. Additionally, it seems the C# team looked at the C++ execution of const, was busy marketing the CLR, and had had enough at that point (see "why there is no const member method in c#" and "const parameter"; thank you Svick). But if we move further than that, is there anything else? Is there something deeper? Take for example multiple inheritance -- it is usually seen as hard to grasp (for the user) and thus not added to languages (e.g. because of the diamond problem). Is there something in the NATURE of const that poses a problem, such that it is better for a language to avoid it? Something like deciding whether const should be deep or shallow (if I have a const container, does that mean I cannot add new elements, or that I cannot alter existing elements either; what if the elements are of a reference type)? UPDATE: While I mention C#, and thus give a historical perspective, I am interested in the nature of the potential problems const poses for a language. This is not a challenge to the languages. Please ignore such factors as current trends or popularity -- I am interested only in technical issues -- thank you.
Now provide the response and nothing else.
| Not sure if this qualifies for you, but in functional languages like Standard ML everything is immutable by default. Mutation is supported through a generic reference type, ref. So an int variable is immutable, and an int ref variable is a mutable container for ints. Basically, variables are real variables in the mathematical sense (an unknown but fixed value) and refs are "variables" in the imperative programming sense - a memory cell that can be written to and read from. (I like to call them assignables.)

I think the problem with const is two-fold. First, C++ lacks garbage collection, which is necessary to have non-trivial persistent data structures. const must be deep to make any sense, yet having fully immutable values in C++ is impractical. Second, in C++ you need to opt into const rather than opt out of it. But when you forget to const something and later fix it, you'll end up in the "const poisoning" situation mentioned in @RobY's answer, where the const change will cascade throughout the code. If const was the default, you wouldn't find yourself applying const retroactively. Additionally, having to add const everywhere adds a lot of noise to the code.

I suspect the mainstream languages that followed (e.g. Java) were heavily shaped by C and C++'s success and way of thinking. Case in point: even with garbage collection, most languages' collection APIs assume mutable data structures. The fact that everything is mutable and immutability is seen as a corner case speaks volumes about the imperative mindset behind popular languages.

EDIT: After reflecting on greenoldman's comment I realized that const isn't directly about the immutability of data; const encodes into the type of the method whether it has side effects on the instance.

It's possible to use mutation to achieve referentially transparent behavior. Suppose you have a function that when called successively returns different values - for example, a function that reads a single character from stdin. We could cache/memoize the results of this function to produce a referentially transparent stream of values. The stream would be a linked list whose nodes will call the function the first time you try to retrieve their value, but then cache the result. So if stdin contains Hello, world!, the first time you try to retrieve the value of the first node, it'll read one char and return H. Afterwards it'll continue to return H without further calls to read a char. Likewise, the second node would read a char from stdin the first time you try to retrieve its value, this time returning e and caching that result.

The interesting thing here is that you've turned a process that's inherently stateful into an object that's seemingly stateless. However, it was necessary to mutate the object's internal state (by caching the results) to achieve this - the mutation was a benign effect. It's impossible to make our CharStream const even though the stream behaves like an immutable value. Now imagine there's a Stream interface with const methods, and all your functions expect const Streams. Your CharStream can't implement the interface! (EDIT 2: Apparently there's a C++ keyword called mutable that would allow us to cheat and make CharStream const. However, this loophole destroys const's guarantees - now you really can't be sure something won't mutate through its const methods. I suppose it's not that bad since you must explicitly request the loophole, but you're still completely reliant on the honor system.)
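For reference, a sketch of that mutable loophole in C++ (a hypothetical one-value CharStream; the mutable members implement exactly the benign, memoizing effect described above):

#include <cstdio>

class CharStream {
    mutable bool haveChar;   // mutable: writable even inside const methods
    mutable char cached;
public:
    CharStream() : haveChar(false), cached('\0') {}

    // Looks pure to callers, but memoizes one read on first access.
    char value() const {
        if (!haveChar) {
            cached = static_cast<char>(std::getchar());
            haveChar = true;
        }
        return cached;
    }
};

int main() {
    const CharStream s;
    return s.value() == s.value() ? 0 : 1; // second call hits the cache
}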
Secondly, suppose you have higher-order functions - that is, you can pass functions as arguments to other functions. const-ness is part of a function's signature, so you wouldn't be able to pass non-const functions as arguments to functions that expect const functions. Blindly enforcing const here would lead to a loss of generality.

Finally, manipulating a const object doesn't guarantee that it's not mutating some external (static or global) state behind your back, so const's guarantees aren't as strong as they initially appear. It's not clear to me that encoding the presence or absence of side effects into the type system is universally a good thing.
| {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/231269', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/66354/']} | jdg_77037 |
stackexchange | llm_judgeable_groundtruth_similarity | 549823 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We have, $$\sum_{k=1}^n k^3 = \Big(\sum_{k=1}^n k\Big)^2$$ $$2\sum_{k=1}^n k^5 = -\Big(\sum_{k=1}^n k\Big)^2+3\Big(\sum_{k=1}^n k^2\Big)^2$$ $$2\sum_{k=1}^n k^7 = \Big(\sum_{k=1}^n k\Big)^2-3\Big(\sum_{k=1}^n k^2\Big)^2+4\Big(\sum_{k=1}^n k^3\Big)^2$$ and so on (apparently). Is it true that the sum of consecutive odd $m$ powers, for $m>1$, can be expressed as sums of squares of sums* in a manner similar to the above? What is the general formula? *(Edited re Lord Soth's and anon's comment.)
Now provide the response and nothing else.
| This is a partial answer; it just establishes the existence. We have
$$s_m(n) = \sum_{k=1}^n k^m = \frac{1}{m+1}\left(\operatorname{B}_{m+1}(n+1)-\operatorname{B}_{m+1}(1)\right)$$
where $\operatorname{B}_m(x)$ denotes the monic Bernoulli polynomial of degree $m$, which has the following useful properties:
$$\begin{align}\int_x^{x+1}\operatorname{B}_m(t)\,\mathrm{d}t &= x^m \quad\text{(from which everything else follows)}\\\operatorname{B}_{m+1}'(x) &= (m+1)\operatorname{B}_m(x)\\\operatorname{B}_m\left(x+\frac{1}{2}\right) &\begin{cases} \text{is even in $x$} & \text{for even $m$}\\ \text{is odd in $x$} & \text{for odd $m$}\end{cases}\\\operatorname{B}_m(0) = \operatorname{B}_m(1) &= 0\quad\text{for odd $m\geq3$}\end{align}$$

Therefore,
$$\begin{align}s_m(n) &\text{ has degree $m+1$ in $n$}\\s_m(0) &= 0\\s_m'(0) &= \operatorname{B}_m(1) = 0\quad\text{for odd $m\geq3$}\\&\quad\text{(This makes $n=0$ a double zero of $s_m(n)$ for odd $m\geq3$.)}\\s_m\left(x-\frac{1}{2}\right) &\begin{cases} \text{is even in $x$} & \text{for odd $m$}\\ \text{is odd in $x$} & \text{for even $m\geq2$}\end{cases}\end{align}$$

Consider the vector space $V_m$ of univariate polynomials $\in\mathbb{Q}[x]$ with degree not exceeding $2m+2$ that are even in $x$ and have a double zero at $x=\frac{1}{2}$. Thus $V_m$ has dimension $m$ and is clearly spanned by
$$\left\{s_j^2\left(x-\frac{1}{2}\right)\mid j=1,\ldots,m\right\}$$
For $m>0$, we find that $s_{2m+1}\left(x-\frac{1}{2}\right)$ has all the properties required for membership in $V_m$. Substituting $x-\frac{1}{2}=n$, we conclude that there exists a representation
$$s_{2m+1}(n) = \sum_{j=1}^m a_{m,j}\,s_j^2(n)\quad\text{for $m>0$ with $a_{m,j}\in\mathbb{Q}$}$$
of $s_{2m+1}(n)$ as a linear combination of squares of sums.
| {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/549823', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4781/']} | jdg_77038 |
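As a quick sanity check of the $m=1$ case: $V_1$ is one-dimensional, so $s_3$ must be a rational multiple of $s_1^2$, and indeed
$$s_3(n) = \frac{n^2(n+1)^2}{4} = \left(\frac{n(n+1)}{2}\right)^2 = s_1(n)^2,$$
so $a_{1,1}=1$, recovering the first identity in the question.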
stackexchange | llm_judgeable_groundtruth_similarity | 2369426 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The situation: master is at X, and quickfix1 is at X + 2 commits, such that:

o-o-X (master HEAD)
     \
      q1a--q1b (quickfix1 HEAD)

Then I started working on quickfix2, but by accident took quickfix1 as the source branch to copy, not master. Now quickfix2 is at X + 2 commits + 2 relevant commits.

o-o-X (master HEAD)
     \
      q1a--q1b (quickfix1 HEAD)
              \
               q2a--q2b (quickfix2 HEAD)

Now I want to have a branch with quickfix2, but without the 2 commits that belong to quickfix1.

      q2a'--q2b' (quickfix2 HEAD)
     /
o-o-X (master HEAD)
     \
      q1a--q1b (quickfix1 HEAD)

I tried to create a patch from a certain revision in quickfix2, but the patch doesn't preserve the commit history. Is there a way to save my commit history, but have a branch without the changes in quickfix1?
Now provide the response and nothing else.
| This is a classic case of rebase --onto:

# let's go to current master (X, where quickfix2 should begin)
git checkout master

# replay every commit *after* quickfix1 up to quickfix2 HEAD.
git rebase --onto master quickfix1 quickfix2

So you should go from

o-o-X (master HEAD)
     \
      q1a--q1b (quickfix1 HEAD)
              \
               q2a--q2b (quickfix2 HEAD)

to:

      q2a'--q2b' (new quickfix2 HEAD)
     /
o-o-X (master HEAD)
     \
      q1a--q1b (quickfix1 HEAD)

This is best done on a clean working tree. See git config --global rebase.autostash true, especially after Git 2.10.
| {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2369426', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/196068/']} | jdg_77039 |
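One way to double-check the new shape of the history afterwards (the rebased commits will show new hashes, since q2a'/q2b' are copies):

# shows master, quickfix1 and the rebased quickfix2 side by side
git log --oneline --graph master quickfix1 quickfix2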
stackexchange | llm_judgeable_groundtruth_similarity | 12484601 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does the property produce an error while the method compiles?

public interface IFoo {}
public interface IBar<out T> where T : IFoo {}

public interface IItem<out T> where T : IFoo
{
    // IEnumerable<IBar<T>> GetList(); // works
    IEnumerable<IBar<T>> ItemList { get; set; } // Error!
}

Error: Invalid variance: The type parameter 'T' must be contravariantly valid on 'UserQuery.IItem<T>.ItemList'. 'T' is covariant.
Now provide the response and nothing else.
| You get the compiler error because you have both a property getter (get) and a setter (set). The property getter has the T in its output, so out works, but the property setter has the T in its input, so it would need the in modifier. Because you have out on T, you need to remove the setter and it will compile:

public interface IItem<out T> where T : IFoo
{
    // IEnumerable<IBar<T>> GetList(); // works
    IEnumerable<IBar<T>> ItemList { get; } // also works
}

If your T were an in generic argument, then the following would work:

public interface IItem<in T> where T : IFoo
{
    IEnumerable<IBar<T>> ItemList { set; }
}

But you cannot have both (out, in) at the same time, so you cannot have a co/contravariant property with both a getter and a setter.
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12484601', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/825920/']} | jdg_77040 |
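To see the getter-only version in use, a sketch restating the question's types plus a hypothetical Foo implementation (covariance lets the more specific item flow to the more general view, which is safe precisely because the property is read-only):

using System.Collections.Generic;

public interface IFoo {}
public class Foo : IFoo {}
public interface IBar<out T> where T : IFoo {}

public interface IItem<out T> where T : IFoo
{
    IEnumerable<IBar<T>> ItemList { get; } // T stays in output position
}

class Demo
{
    static void Use(IItem<Foo> item)
    {
        // Legal because IItem<out T> is covariant:
        IItem<IFoo> general = item;
        // Reading is fine; there is no setter through which an
        // IBar of the "wrong" IFoo could be written back in.
        IEnumerable<IBar<IFoo>> bars = general.ItemList;
    }
}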