author | date | timezone | hash | message | mods | language | license | repo | original_message |
---|---|---|---|---|---|---|---|---|---|
49,706 | 04.10.2022 14:44:55 | -7,200 | fa04b9c6e5d7ac19a4f0943e03d98677d7bb1fbc | [MINOR] Update docker release script to publish
Previously the push option of this script was set to false; this commit
changes it to true, so now we can publish our Docker image with the
latest flag and version numbers. | [
{
"change_type": "MODIFY",
"old_path": ".github/workflows/docker-release.yml",
"new_path": ".github/workflows/docker-release.yml",
"diff": "@@ -50,6 +50,11 @@ jobs:\nwith:\nimages: apache/systemds\ntags: ${{ github.event.inputs.version }},latest\n+ labels: |\n+ maintainer=Apache\n+ org.opencontainers.image.title=SystemDS\n+ org.opencontainers.image.description=An open source ML system for the end-to-end data science lifecycle\n+ org.opencontainers.image.vendor=Apache\n# https://github.com/docker/setup-buildx-action\n- name: Set up Docker Buildx\n@@ -71,7 +76,6 @@ jobs:\nwith:\ncontext: .\nfile: ./docker/sysds.Dockerfile\n- push: false\n+ push: True\ntags: ${{ steps.meta.outputs.tags }}\n-# Use the below labels entry for images for cpu, gpu for the same release\n-# labels: ${{ steps.meta.outputs.labels }}\n+ labels: ${{ steps.meta.outputs.labels }}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Update docker release script to publish (#1700)
Previously the push option of this script was set to false; this commit
changes it to true, so now we can publish our Docker image with the
latest flag and version numbers. |
49,706 | 06.10.2022 16:38:05 | -7,200 | 27f595a1fe04c1ab7a53716b37b5c6188e9ddfe9 | [MINOR] Quick fix of Federated monitor registration
This commit fixes the registration with the federated monitoring tool
to use localhost, because the backend always identifies itself with localhost.
In the future we need to change this, but for now it works. | [
{
"change_type": "MODIFY",
"old_path": "docs/site/federated-monitoring.md",
"new_path": "docs/site/federated-monitoring.md",
"diff": "@@ -36,7 +36,7 @@ the **monitoring frontend** developed in [Angular](https://angular.io/).\n### Installation & Build\n-#### 1. Monitoring Backend\n+#### Install Backend\nTo compile the project, run the following code, more information can be found [here](./install.md):\n@@ -62,7 +62,7 @@ export SYSTEMDS_ROOT=$(pwd)\nexport PATH=$SYSTEMDS_ROOT/bin:$PATH\n```\n-#### 2. Monitoring Frontend\n+#### Install Frontend\nSince the frontend is in **Angular v13**, a **node version 12/14/16** or later minor version is required.\nTo install `nodejs` and `npm` go to [https://nodejs.org/en/](https://nodejs.org/en/) and install version either **12.x**,\n@@ -89,6 +89,7 @@ cd scripts/monitoring\n# 2. Install all npm packages\nnpm install\n```\n+\nAfter those steps all the packages needed for running the monitoring tool should be installed.\n### Running\n@@ -98,7 +99,7 @@ Since they are designed with loose decoupling in mind, the frontend can integrat\nthe backend can work with different frontends, provided that the format of the data and the communication protocol is\npreserved.\n-#### 1. Monitoring Backend\n+#### Backend\nTo run the backend, use the `-fedMonitoring` flag followed by a `port` and can be executed using the systemds binary like this:\n@@ -111,6 +112,7 @@ systemds FEDMONITORING 8080\n#[ INFO] Starting Federated Monitoring Backend server at port: 8080\n#[ INFO] Started Federated Monitoring Backend at port: 8080\n```\n+\nThis will start the backend server which will be listening for REST requests on `http://localhost:8080`.\n**NOTE:** The backend is polling all registered workers with a given frequency, it can be changed by including\n@@ -118,7 +120,7 @@ the `<sysds.federated.monitorFreq>3</sysds.federated.monitorFreq>` in the `Syste\n**doubles**, representing seconds (0.5 can be used for setting the frequency to be half a second). The example shown\nhere will start the backend with polling with frequency of **3 seconds**, which is also the default value.\n-#### 2. Monitoring Frontend\n+#### Frontend\nTo run the Angular app:\n@@ -128,6 +130,7 @@ cd scripts/monitoring\n# 2. Start the angular app\nnpm start\n```\n+\nAfter this step the Angular UI should be started on [http://localhost:4200](http://localhost:4200) and can be viewed by opening the\nbrowser on the same address.\n@@ -144,4 +147,3 @@ systemds -f testFederated.dml -exec singlenode -explain -debug -stats 20 -fedMon\n```\n**NOTE:** The backend service should already be running, otherwise the coordinator will not start.\n-\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/api/DMLScript.java",
"new_path": "src/main/java/org/apache/sysds/api/DMLScript.java",
"diff": "@@ -616,7 +616,9 @@ public class DMLScript\nvar model = new CoordinatorModel();\nmodel.name = InetAddress.getLocalHost().getHostName();\n- model.host = InetAddress.getLocalHost().getHostName();\n+ // TODO fix and replace localhost identifyer with hostname in federated instructions SYSTEMDS-3440\n+ // https://issues.apache.org/jira/browse/SYSTEMDS-3440\n+ model.host = \"localhost\";\nmodel.processId = Long.parseLong(IDHandler.obtainProcessID());\nString requestBody = objectMapper\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Quick fix of Federated monitor registration
This commit fixes the registration with the federated monitoring tool
to use localhost, because the backend always identifies itself with localhost.
In the future we need to change this, but for now it works. |
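The fix above only swaps the host field of the registration payload. A minimal, self-contained sketch of such a registration request is shown below; it is not the SystemDS code itself, and the `/coordinators` endpoint path is an assumption (the diff only shows the payload fields `name`, `host`, `processId` and the tutorial's `/workers` endpoint).

```java
import java.net.InetAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MonitorRegistrationSketch {
	public static void main(String[] args) throws Exception {
		// The display name still uses the real hostname ...
		String name = InetAddress.getLocalHost().getHostName();
		// ... but the host field is hard-coded to localhost, because the
		// monitoring backend currently identifies everything via localhost.
		String host = "localhost";
		long processId = ProcessHandle.current().pid();

		String body = String.format(
			"{\"name\":\"%s\",\"host\":\"%s\",\"processId\":%d}", name, host, processId);

		HttpRequest req = HttpRequest.newBuilder()
			.uri(URI.create("http://localhost:8080/coordinators")) // hypothetical endpoint
			.header("Content-Type", "application/json")
			.POST(HttpRequest.BodyPublishers.ofString(body))
			.build();

		HttpResponse<String> resp = HttpClient.newHttpClient()
			.send(req, HttpResponse.BodyHandlers.ofString());
		System.out.println("Monitoring backend replied: " + resp.statusCode());
	}
}
```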
49,706 | 11.10.2022 18:30:55 | -7,200 | f0a83f7d5275f098712073da107c3eb29605ef3f | [DOCS] Minor refinement of readme for federated tutorial | [
{
"change_type": "MODIFY",
"old_path": "scripts/tutorials/federated/README.md",
"new_path": "scripts/tutorials/federated/README.md",
"diff": "@@ -31,6 +31,17 @@ Successfully installed certifi-2020.12.5 chardet-4.0.0 idna-2.10 numpy-1.20.3 pa\nInstalled Python Systemds\n```\n+Also required is to install NodeJS <https://nodejs.org/en/>, simply download and add the bin folder to path add it on path to enable:\n+\n+```sh\n+node --version\n+```\n+\n+```txt\n+Me:~/.../scripts/tutorials/federated$ node --version\n+v16.17.1\n+```\n+\n## Step 3: Setup and Download Data\nNext we download and split the dataset into partitions that the different federated workers can use.\n@@ -91,7 +102,11 @@ Me:~/github/federatedTutorial$ cat tmp/worker/XPS-15-7590-8001\nAlso worth noting is that all the output from the federated worker is concatenated to: results/fed/workerlog/\n-## Step 4.1: Port Forward if you dont have access to the ports\n+At startup a federated monitoring tool is launched at <http://localhost:4200>.\n+\n+## Step 5: Port Forward\n+\n+if you don't have access to the ports on the federated workers (note local execution can skip this.)\nIf the ports are not accessible directly from your machine because of a firewall, i suggest using the port forwarding script.\nthat port forward the list of ports from your local machine to the remote machines.\n@@ -101,29 +116,24 @@ Note this only works if all the federated machines are remote machines, aka the\nportforward.sh\n```\n-Note these process will just continue running in the background, and have to manually terminated.\n+Note these process will just continue running in the background so have to be manually terminated.\n-## Step 5: run algorithms\n+## Step 6: run algorithms\n-This tutorial is using a LM script. To execute it simply use:\n+This tutorial is using different scripts depending on what is outcommented in the run.sh.\n-```sh\n-./run.sh\n-```\n+Please go into this file and enable which specific script you want to run.\n-The terminal output should look like the following:\n+To execute it simply use:\n-```txt\n-Me:~/github/federatedTutorial$ ./run.sh\n-fed 1W def - lm mnist\n-fed 2W def - lm mnist\n-loc def - lm mnist\n+```sh\n+./run.sh\n```\nThis have execute three different execution versions.\nfirst with one federated worker, then two and finally a local baseline.\n-all outputs are put into the results folder:\n+all outputs are put into the results folder and could look like:\n```txt\nMe:~/github/federatedTutorial$ cat results/fed1/lm_mnist_XPS-15-7590_def.log\n@@ -196,13 +206,6 @@ Heavy hitter instructions:\n4 rmvar 0.000 3\n```\n-The saved LM model is located in the tmp folder and the federated results is exactly the same as if it was executed locally.\n-\n-```txt\n-Me:~/github/federatedTutorial$ ls tmp/\n-fed_mnist_1.res fed_mnist_1.res.mtd fed_mnist_2.res fed_mnist_2.res.mtd mnist_local.res mnist_local.res.mtd worker\n-```\n-\n## Step 6: Stop Workers\nTo stop the workers running simply use the stop all workers script.\n@@ -210,3 +213,14 @@ To stop the workers running simply use the stop all workers script.\n```sh\n./stopAllWorkers.sh\n```\n+\n+```txt\n+Me:~/.../scripts/tutorials/federated$ ./stopAllWorkers.sh\n+Stopping workers XPS-15-7590\n+Stopping Monitoring\n+STOP NPM manually!! Process ID:\n+62870q\n+```\n+\n+As specified by the output npm is still running and have to be manually stopped.\n+Simply search for 'ng serve' process in the terminal and stop it.\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/tutorials/federated/startAllWorkers.sh",
"new_path": "scripts/tutorials/federated/startAllWorkers.sh",
"diff": "@@ -18,14 +18,13 @@ done\n./scripts/startMonitoring.sh\nfor index in ${!address[*]}; do\n- echo \"\"\ncurl \\\n--header \"Content-Type: application/json\" \\\n--data \"{\\\"name\\\":\\\"Worker - ${ports[$index]}\\\",\\\"address\\\":\\\"${address[$index]}:${ports[$index]}\\\"}\" \\\n- http://localhost:8080/workers\n- echo \"\"\n+ http://localhost:8080/workers > /dev/null\ndone\n+\necho \"A Monitoring tool is started at http://localhost:4200\"\nwait\n"
}
] | Java | Apache License 2.0 | apache/systemds | [DOCS] Minor refinement of readme for federated tutorial |
49,706 | 12.10.2022 20:40:21 | -7,200 | 174cfe9d3d021db7dacc12e3363bae44b601222a | [DOCS] Fix the internal slice docs for MatrixBlock | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"diff": "@@ -37,6 +37,8 @@ import java.util.stream.IntStream;\nimport org.apache.commons.lang3.ArrayUtils;\nimport org.apache.commons.lang3.concurrent.ConcurrentUtils;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.commons.math3.random.Well1024a;\nimport org.apache.hadoop.io.DataInputBuffer;\nimport org.apache.sysds.common.Types.BlockType;\n@@ -117,7 +119,7 @@ import org.apache.sysds.utils.NativeHelper;\npublic class MatrixBlock extends MatrixValue implements CacheBlock, Externalizable {\n- // private static final Log LOG = LogFactory.getLog(MatrixBlock.class.getName());\n+ private static final Log LOG = LogFactory.getLog(MatrixBlock.class.getName());\nprivate static final long serialVersionUID = 7319972089143154056L;\n@@ -818,7 +820,7 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\nif( src.sparse ) //SPARSE <- SPARSE\n{\nSparseBlock a = src.sparseBlock;\n- if( a.isEmpty(i) ) return;\n+ if( a == null || a.isEmpty(i) ) return;\nint aix = rowoffset+i;\n//single block append (avoid re-allocations)\n@@ -1164,7 +1166,6 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\nreturn evalSparseFormatInMemory(dc.getRows(), dc.getCols(), dc.getNonZeros());\n}\n-\n/**\n* Evaluates if a matrix block with the given characteristics should be in sparse format\n* in memory.\n@@ -1237,6 +1238,8 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\nfinal int n = clen;\nif( allowCSR && nonZeros <= Integer.MAX_VALUE ) {\n+ try{\n+\n//allocate target in memory-efficient CSR format\nint lnnz = (int) nonZeros;\nint[] rptr = new int[m+1];\n@@ -1257,6 +1260,19 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\n}\nsparseBlock = new SparseBlockCSR(\nrptr, indexes, values, lnnz);\n+ } catch(ArrayIndexOutOfBoundsException ioobe){\n+ sparse = false;\n+ long nnzBefore = nonZeros;\n+ long nnzNew = recomputeNonZeros();\n+ if(nnzBefore != nnzNew){\n+ LOG.error(\"Error in dense to sparse because nonZeros was set incorrectly\\nTrying again with correction\");\n+ denseToSparse(true);\n+ }\n+ else{\n+ LOG.error(\"Failed construction of SparseCSR block\", ioobe);\n+ denseToSparse(false);\n+ }\n+ }\n}\nelse {\n// remember number non zeros.\n@@ -4113,8 +4129,6 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\nreturn ret;\n}\n-\n-\npublic MatrixBlock slice(IndexRange ixrange, MatrixBlock ret) {\nreturn slice(\n(int)ixrange.rowStart, (int)ixrange.rowEnd,\n@@ -4122,34 +4136,63 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\n}\n/**\n- * Slice out a row block\n- * @param rl The row lower to start from\n- * @param ru The row lower to end at\n+ * Slice out a block in the range\n+ *\n+ * @param rl row lower (inclusive)\n+ * @param ru row upper (inclusive)\n* @return The sliced out matrix block.\n*/\npublic final MatrixBlock slice(int rl, int ru) {\nreturn slice(rl, ru, 0, clen-1, true, null);\n}\n+ /**\n+ * Slice out a block in the range\n+ *\n+ * @param rl row lower (inclusive)\n+ * @param ru row upper (inclusive)\n+ * @param deep Deep copy or not\n+ * @return The sliced out matrix block.\n+ */\npublic final MatrixBlock slice(int rl, int ru, boolean deep){\nreturn slice(rl,ru, 0, clen-1, deep, null);\n}\n+ /**\n+ * Slice out a block in the range\n+ *\n+ * @param rl row lower (inclusive)\n+ * @param ru row upper (inclusive)\n+ * @param cl column lower (inclusive)\n+ * @param cu 
column upper (inclusive)\n+ * @return The sliced out matrix block.\n+ */\npublic final MatrixBlock slice(int rl, int ru, int cl, int cu){\nreturn slice(rl, ru, cl, cu, true, null);\n}\n+ /**\n+ * Slice out a block in the range\n+ *\n+ * @param rl row lower (inclusive)\n+ * @param ru row upper (inclusive)\n+ * @param cl column lower (inclusive)\n+ * @param cu column upper (inclusive)\n+ * @param ret output sliced out matrix block\n+ * @return The sliced out matrix block.\n+ */\n@Override\npublic final MatrixBlock slice(int rl, int ru, int cl, int cu, CacheBlock ret) {\nreturn slice(rl, ru, cl, cu, true, ret);\n}\n/**\n- * Slice out a row block\n- * @param rl The row lower to start from\n- * @param ru The row lower to end at\n- * @param cl The col lower to start from\n- * @param cu The col lower to end at\n+ * Slice out a block in the range\n+ *\n+ * @param rl row lower (inclusive)\n+ * @param ru row upper (inclusive)\n+ * @param cl column lower (inclusive)\n+ * @param cu column upper (inclusive)\n* @param deep Deep copy or not\n* @return The sliced out matrix block.\n*/\n@@ -4163,10 +4206,13 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\n*\n* This means that if you call with rl == ru then you get 1 row output.\n*\n- * @param rl row lower if this value is below 0 or above the number of rows contained in the matrix an exception is thrown\n- * @param ru row upper if this value is below rl or above the number of rows contained in the matrix an exception is thrown\n- * @param cl column lower if this value us below 0 or above the number of columns contained in the matrix an exception is thrown\n- * @param cu column upper if this value us below cl or above the number of columns contained in the matrix an exception is thrown\n+ * If rl or cl less than 0 an exception is thrown\n+ * If ru or cu greater than or equals to nRows or nCols an exception is thrown\n+ *\n+ * @param rl row lower (inclusive)\n+ * @param ru row upper (inclusive)\n+ * @param cl column lower (inclusive)\n+ * @param cu column upper (inclusive)\n* @param deep should perform deep copy, this is relevant in cases where the matrix is in sparse format,\n* or the entire matrix is sliced out\n* @param ret output sliced out matrix block\n"
}
] | Java | Apache License 2.0 | apache/systemds | [DOCS] Fix the internal slice docs for MatrixBlock |
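The rewritten Javadoc makes the inclusive-bound semantics of `slice` explicit. A small usage sketch of that documented behaviour (not part of the commit) is:

```java
import org.apache.sysds.runtime.matrix.data.MatrixBlock;

public class SliceBoundsSketch {
	public static void main(String[] args) {
		// 4x4 dense block filled with a running counter
		MatrixBlock mb = new MatrixBlock(4, 4, false);
		mb.allocateDenseBlock();
		for(int i = 0; i < 4; i++)
			for(int j = 0; j < 4; j++)
				mb.quickSetValue(i, j, i * 4 + j);

		// All slice bounds are inclusive: rl == ru yields exactly one row.
		MatrixBlock oneRow = mb.slice(1, 1);       // row 1 only           -> 1x4
		MatrixBlock block  = mb.slice(0, 1, 2, 3); // rows 0..1, cols 2..3 -> 2x2

		System.out.println(oneRow.getNumRows() + "x" + oneRow.getNumColumns()); // 1x4
		System.out.println(block.getNumRows() + "x" + block.getNumColumns());   // 2x2
	}
}
```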
49,738 | 20.10.2022 15:26:43 | 14,400 | 1e3dff7600f8b8aa8fa6d543ac1d8da8c80a8873 | Fix warnings scalar reads, cleanup csv default handling
This patch fixes unnecessary scalar read warnings, which were raised
even when the DML read was fully specified (scalar, value type).
Furthermore, it also cleans up redundant code for handling CSV default
parameters via a more general routine that works for all variants. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/DataExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/DataExpression.java",
"diff": "@@ -24,6 +24,7 @@ import java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.List;\n+import java.util.Map;\nimport java.util.Map.Entry;\nimport java.util.Set;\n@@ -169,6 +170,18 @@ public class DataExpression extends DataIdentifier\npublic static final String DEFAULT_NA_STRINGS = \"\";\npublic static final String DEFAULT_SCHEMAPARAM = \"NULL\";\npublic static final String DEFAULT_LIBSVM_INDEX_DELIM = \":\";\n+ private static Map<String, Object> csvDefaults;\n+ static {\n+ csvDefaults = new HashMap<>();\n+ csvDefaults.put(DELIM_DELIMITER, DEFAULT_DELIM_DELIMITER);\n+ csvDefaults.put(DELIM_HAS_HEADER_ROW, DEFAULT_DELIM_HAS_HEADER_ROW);\n+ csvDefaults.put(DELIM_FILL, DEFAULT_DELIM_FILL);\n+ csvDefaults.put(DELIM_FILL_VALUE, DEFAULT_DELIM_FILL_VALUE);\n+ csvDefaults.put(DELIM_SPARSE, DEFAULT_DELIM_SPARSE);\n+ csvDefaults.put(DELIM_NA_STRINGS, DEFAULT_NA_STRINGS);\n+ csvDefaults.put(SCHEMAPARAM, DEFAULT_SCHEMAPARAM);\n+ csvDefaults.put(LIBSVM_INDEX_DELIM, DEFAULT_LIBSVM_INDEX_DELIM);\n+ }\nprivate DataOp _opcode;\nprivate HashMap<String, Expression> _varParams;\n@@ -967,6 +980,8 @@ public class DataExpression extends DataIdentifier\n// track whether should attempt to read MTD file or not\nboolean shouldReadMTD = _checkMetadata\n+ && !(dataTypeString!= null && getVarParam(VALUETYPEPARAM) != null\n+ && dataTypeString.equalsIgnoreCase(Statement.SCALAR_DATA_TYPE))\n&& (!ConfigurationManager.getCompilerConfigFlag(ConfigType.IGNORE_READ_WRITE_METADATA)\n|| HDFSTool.existsFileOnHDFS(mtdFileName)); // existing mtd file\n@@ -1096,7 +1111,6 @@ public class DataExpression extends DataIdentifier\n}\nif (isCSV){\n-\n// there should be no MTD file for delimited file format\nshouldReadMTD = true;\n@@ -1112,68 +1126,12 @@ public class DataExpression extends DataIdentifier\n}\n}\n- // DEFAULT for \"sep\" : \",\"\n- if (getVarParam(DELIM_DELIMITER) == null) {\n- addVarParam(DELIM_DELIMITER, new StringIdentifier(DEFAULT_DELIM_DELIMITER, this));\n- }\n- else {\n- if ( (getVarParam(DELIM_DELIMITER) instanceof ConstIdentifier)\n- && (! (getVarParam(DELIM_DELIMITER) instanceof StringIdentifier)))\n- {\n- raiseValidateError(\"For delimited file '\" + getVarParam(DELIM_DELIMITER)\n- + \"' must be a string value \", conditional);\n- }\n- }\n-\n- // DEFAULT for \"default\": 0\n- if (getVarParam(DELIM_FILL_VALUE) == null) {\n- addVarParam(DELIM_FILL_VALUE, new DoubleIdentifier(DEFAULT_DELIM_FILL_VALUE, this));\n- }\n- else {\n- if ( (getVarParam(DELIM_FILL_VALUE) instanceof ConstIdentifier)\n- && (! (getVarParam(DELIM_FILL_VALUE) instanceof IntIdentifier || getVarParam(DELIM_FILL_VALUE) instanceof DoubleIdentifier)))\n- {\n- raiseValidateError(\"For delimited file '\" + getVarParam(DELIM_FILL_VALUE) + \"' must be a numeric value \", conditional);\n- }\n- }\n-\n- // DEFAULT for \"header\": boolean false\n- if (getVarParam(DELIM_HAS_HEADER_ROW) == null) {\n- addVarParam(DELIM_HAS_HEADER_ROW, new BooleanIdentifier(DEFAULT_DELIM_HAS_HEADER_ROW, this));\n- }\n- else {\n- if ((getVarParam(DELIM_HAS_HEADER_ROW) instanceof ConstIdentifier)\n- && (! 
(getVarParam(DELIM_HAS_HEADER_ROW) instanceof BooleanIdentifier)))\n- {\n- raiseValidateError(\"For delimited file '\" + getVarParam(DELIM_HAS_HEADER_ROW) + \"' must be a boolean value \", conditional);\n- }\n- }\n-\n- // DEFAULT for \"fill\": boolean false\n- if (getVarParam(DELIM_FILL) == null){\n- addVarParam(DELIM_FILL, new BooleanIdentifier(DEFAULT_DELIM_FILL, this));\n- }\n- else {\n-\n- if ((getVarParam(DELIM_FILL) instanceof ConstIdentifier)\n- && (! (getVarParam(DELIM_FILL) instanceof BooleanIdentifier)))\n- {\n- raiseValidateError(\"For delimited file '\" + getVarParam(DELIM_FILL) + \"' must be a boolean value \", conditional);\n- }\n- }\n-\n- // DEFAULT for \"naStrings\": String \"\"\n- if (getVarParam(DELIM_NA_STRINGS) == null){\n- addVarParam(DELIM_NA_STRINGS, new StringIdentifier(DEFAULT_NA_STRINGS, this));\n- }\n- else {\n- if ((getVarParam(DELIM_NA_STRINGS) instanceof ConstIdentifier)\n- && (! (getVarParam(DELIM_NA_STRINGS) instanceof StringIdentifier)))\n- {\n- raiseValidateError(\"For delimited file '\" + getVarParam(DELIM_NA_STRINGS) + \"' must be a string value \", conditional);\n- }\n- LOG.info(\"Replacing :\" + _varParams.get(DELIM_NA_STRINGS) + \" with NaN\");\n- }\n+ //handle all csv default parameters\n+ handleCSVDefaultParam(DELIM_DELIMITER, ValueType.STRING, conditional);\n+ handleCSVDefaultParam(DELIM_FILL_VALUE, ValueType.FP64, conditional);\n+ handleCSVDefaultParam(DELIM_HAS_HEADER_ROW, ValueType.BOOLEAN, conditional);\n+ handleCSVDefaultParam(DELIM_FILL, ValueType.BOOLEAN, conditional);\n+ handleCSVDefaultParam(DELIM_NA_STRINGS, ValueType.STRING, conditional);\n}\nboolean isLIBSVM = false;\n@@ -1201,28 +1159,11 @@ public class DataExpression extends DataIdentifier\n}\n}\n}\n- // DEFAULT for \"sep\" : \",\"\n- if (getVarParam(DELIM_DELIMITER) == null) {\n- addVarParam(DELIM_DELIMITER, new StringIdentifier(DEFAULT_DELIM_DELIMITER, this));\n- }\n- else {\n- if ((getVarParam(DELIM_DELIMITER) instanceof ConstIdentifier)\n- && (!(getVarParam( DELIM_DELIMITER) instanceof StringIdentifier))) {\n- raiseValidateError( \"For delimited file \" + getVarParam(DELIM_DELIMITER) + \" must be a string value \", conditional);\n- }\n- }\n- // DEFAULT for \"indSep\": \":\"\n- if(getVarParam(LIBSVM_INDEX_DELIM) == null) {\n- addVarParam(LIBSVM_INDEX_DELIM, new StringIdentifier(DEFAULT_LIBSVM_INDEX_DELIM, this));\n- }\n- else {\n- if((getVarParam(LIBSVM_INDEX_DELIM) instanceof ConstIdentifier)\n- && (!(getVarParam(LIBSVM_INDEX_DELIM) instanceof StringIdentifier))) {\n- raiseValidateError(\n- \"For delimited file \" + getVarParam(LIBSVM_INDEX_DELIM) + \" must be a string value \", conditional);\n- }\n- }\n+ //handle all default parameters\n+ handleCSVDefaultParam(DELIM_DELIMITER, ValueType.STRING, conditional);\n+ handleCSVDefaultParam(LIBSVM_INDEX_DELIM, ValueType.STRING, conditional);\n}\n+\nboolean isHDF5 = (formatTypeString != null && formatTypeString.equalsIgnoreCase(FileFormat.HDF5.toString()));\ndataTypeString = (getVarParam(DATATYPEPARAM) == null) ? 
null : getVarParam(DATATYPEPARAM).toString();\n@@ -2223,6 +2164,37 @@ public class DataExpression extends DataIdentifier\n}\n}\n+ private void handleCSVDefaultParam(String param, ValueType vt, boolean conditional) {\n+ if (getVarParam(param) == null) {\n+ Expression defExpr = null;\n+ switch(vt) {\n+ case BOOLEAN:\n+ defExpr = new BooleanIdentifier((boolean)csvDefaults.get(param), this);\n+ break;\n+ case FP64:\n+ defExpr = new DoubleIdentifier((double)csvDefaults.get(param), this);\n+ break;\n+ default:\n+ defExpr = new StringIdentifier((String)csvDefaults.get(param), this);\n+ }\n+ addVarParam(param, defExpr);\n+ }\n+ else {\n+ if ( (getVarParam(param) instanceof ConstIdentifier)\n+ && (! checkValueType(getVarParam(param), vt)))\n+ {\n+ raiseValidateError(\"For delimited file '\" + getVarParam(param)\n+ + \"' must be a '\"+vt.toExternalString()+\"' value \", conditional);\n+ }\n+ }\n+ }\n+\n+ private boolean checkValueType(Expression expr, ValueType vt) {\n+ return (vt == ValueType.STRING && expr instanceof StringIdentifier)\n+ || (vt == ValueType.FP64 && (expr instanceof DoubleIdentifier || expr instanceof IntIdentifier))\n+ || (vt == ValueType.BOOLEAN && expr instanceof BooleanIdentifier);\n+ }\n+\nprivate void validateParams(boolean conditional, Set<String> validParamNames, String legalMessage) {\nfor( String key : _varParams.keySet() )\n{\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3452] Fix warnings scalar reads, cleanup csv default handling
This patch fixes unnecessary scalar read warnings, which were raised
even when the DML read was fully specified (scalar, value type).
Furthermore, it also cleans up redundant code for handling CSV default
parameters via a more general routine that works for all variants. |
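The new `handleCSVDefaultParam` collapses the copy-pasted per-parameter blocks into one routine. The stand-alone sketch below illustrates the pattern with hypothetical names and plain Java types; it does not use the actual `DataExpression`/identifier classes:

```java
import java.util.HashMap;
import java.util.Map;

public class CsvDefaultsSketch {
	enum VType { STRING, FP64, BOOLEAN }

	// one central registry of defaults instead of one hard-coded block per parameter
	static final Map<String, Object> DEFAULTS = new HashMap<>();
	static {
		DEFAULTS.put("sep", ",");
		DEFAULTS.put("default", 0.0);
		DEFAULTS.put("header", Boolean.FALSE);
		DEFAULTS.put("fill", Boolean.FALSE);
		DEFAULTS.put("naStrings", "");
	}

	/** Returns the user value if present and type-correct, otherwise the registered default. */
	static Object handleDefaultParam(Map<String, Object> userParams, String param, VType vt) {
		Object val = userParams.get(param);
		if(val == null)
			return DEFAULTS.get(param);
		boolean ok = switch(vt) {
			case STRING -> val instanceof String;
			case FP64 -> val instanceof Double || val instanceof Integer;
			case BOOLEAN -> val instanceof Boolean;
		};
		if(!ok)
			throw new IllegalArgumentException(
				"For delimited file '" + val + "' must be a '" + vt + "' value");
		return val;
	}

	public static void main(String[] args) {
		Map<String, Object> user = new HashMap<>();
		user.put("sep", ";");
		System.out.println(handleDefaultParam(user, "sep", VType.STRING));     // ;
		System.out.println(handleDefaultParam(user, "header", VType.BOOLEAN)); // false
	}
}
```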
49,706 | 12.10.2022 23:51:33 | -7,200 | 4cebe8ed343801e565c769fc8208001251242970 | Spark Write compressed format
This commit fixes/adds support for Spark writing of the compressed format.
The previous versions did not properly support the edge cases,
while this commit fixes inconsistencies and allows local reading
of folders containing sub-blocks. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"new_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"diff": "@@ -1053,20 +1053,18 @@ public class DMLTranslator\ncase MM:\ncase CSV:\ncase LIBSVM:\n+ case HDF5:\n// write output in textcell format\nae.setOutputParams(ae.getDim1(), ae.getDim2(), ae.getNnz(), ae.getUpdateType(), -1);\nbreak;\ncase BINARY:\n+ case COMPRESSED:\n// write output in binary block format\nae.setOutputParams(ae.getDim1(), ae.getDim2(), ae.getNnz(), ae.getUpdateType(), ae.getBlocksize());\nbreak;\ncase FEDERATED:\nae.setOutputParams(ae.getDim1(), ae.getDim2(), -1, ae.getUpdateType(), -1);\nbreak;\n- case HDF5:\n- // write output in HDF5 format\n- ae.setOutputParams(ae.getDim1(), ae.getDim2(), ae.getNnz(), ae.getUpdateType(), -1);\n- break;\ndefault:\nthrow new LanguageException(\"Unrecognized file format: \" + ae.getFileFormat());\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/DataExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/DataExpression.java",
"diff": "@@ -1327,7 +1327,7 @@ public class DataExpression extends DataIdentifier\n//validate read filename\nif (getVarParam(FORMAT_TYPE) == null || FileFormat.isTextFormat(getVarParam(FORMAT_TYPE).toString()))\ngetOutput().setBlocksize(-1);\n- else if (getVarParam(FORMAT_TYPE).toString().equalsIgnoreCase(FileFormat.BINARY.toString())) {\n+ else if (getVarParam(FORMAT_TYPE).toString().equalsIgnoreCase(FileFormat.BINARY.toString()) || getVarParam(FORMAT_TYPE).toString().equalsIgnoreCase(FileFormat.COMPRESSED.toString())) {\nif( getVarParam(ROWBLOCKCOUNTPARAM)!=null )\ngetOutput().setBlocksize(Integer.parseInt(getVarParam(ROWBLOCKCOUNTPARAM).toString()));\nelse\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/CompressWrap.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.compress.io;\n+\n+import org.apache.spark.api.java.function.Function;\n+import org.apache.sysds.runtime.compress.CompressedMatrixBlock;\n+import org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+\n+public class CompressWrap implements Function<MatrixBlock, CompressedWriteBlock> {\n+ private static final long serialVersionUID = 966405324406154236L;\n+\n+ public CompressWrap() {\n+ }\n+\n+ @Override\n+ public CompressedWriteBlock call(MatrixBlock arg0) throws Exception {\n+ return new CompressedWriteBlock(arg0);\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/ReaderCompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/ReaderCompressed.java",
"diff": "@@ -69,11 +69,23 @@ public final class ReaderCompressed extends MatrixReader {\nprivate static MatrixBlock readCompressedMatrix(Path path, JobConf job, FileSystem fs, int rlen, int clen, int blen)\nthrows IOException {\n- final Reader reader = new SequenceFile.Reader(job, SequenceFile.Reader.file(path));\n+ final Map<MatrixIndexes, MatrixBlock> data = new HashMap<>();\n+\n+ for(Path subPath : IOUtilFunctions.getSequenceFilePaths(fs, path))\n+ read(subPath, job, data);\n+\n+ if(data.size() == 1)\n+ return data.entrySet().iterator().next().getValue();\n+ else\n+ return CLALibCombine.combine(data);\n+ }\n+\n+ private static void read(Path path, JobConf job, Map<MatrixIndexes, MatrixBlock> data) throws IOException {\n+\n+ final Reader reader = new SequenceFile.Reader(job, SequenceFile.Reader.file(path));\ntry {\n// Materialize all sub blocks.\n- Map<MatrixIndexes, MatrixBlock> data = new HashMap<>();\n// Use write and read interface to read and write this object.\nMatrixIndexes key = new MatrixIndexes();\n@@ -84,10 +96,6 @@ public final class ReaderCompressed extends MatrixReader {\nkey = new MatrixIndexes();\nvalue = new CompressedWriteBlock();\n}\n- if(data.size() == 1)\n- return data.entrySet().iterator().next().getValue();\n- else\n- return CLALibCombine.combine(data);\n}\nfinally {\nIOUtilFunctions.closeSilently(reader);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/WriterCompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/WriterCompressed.java",
"diff": "@@ -28,16 +28,22 @@ import org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.io.SequenceFile;\nimport org.apache.hadoop.io.SequenceFile.Writer;\nimport org.apache.hadoop.mapred.JobConf;\n+import org.apache.hadoop.mapred.SequenceFileOutputFormat;\n+import org.apache.spark.api.java.JavaPairRDD;\nimport org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlockFactory;\n+import org.apache.sysds.runtime.instructions.spark.CompressionSPInstruction.CompressionFunction;\n+import org.apache.sysds.runtime.instructions.spark.utils.RDDConverterUtils;\nimport org.apache.sysds.runtime.io.FileFormatProperties;\nimport org.apache.sysds.runtime.io.IOUtilFunctions;\nimport org.apache.sysds.runtime.io.MatrixWriter;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\n+import org.apache.sysds.runtime.meta.DataCharacteristics;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.runtime.util.HDFSTool;\npublic final class WriterCompressed extends MatrixWriter {\n@@ -62,6 +68,18 @@ public final class WriterCompressed extends MatrixWriter {\ncreate(null).writeMatrixToHDFS(src, fname, rlen, clen, blen, nnz, diag);\n}\n+ public static void writeRDDToHDFS(JavaPairRDD<MatrixIndexes, MatrixBlock> src, String path, int blen,\n+ DataCharacteristics mc) {\n+ final DataCharacteristics outC = new MatrixCharacteristics(mc).setBlocksize(blen);\n+ writeRDDToHDFS(RDDConverterUtils.binaryBlockToBinaryBlock(src, mc, outC), path);\n+ }\n+\n+ public static void writeRDDToHDFS(JavaPairRDD<MatrixIndexes, MatrixBlock> src, String path) {\n+ src.mapValues(new CompressionFunction()) // Try to compress each block.\n+ .mapValues(new CompressWrap()) // Wrap in writable\n+ .saveAsHadoopFile(path, MatrixIndexes.class, CompressedWriteBlock.class, SequenceFileOutputFormat.class);\n+ }\n+\n@Override\npublic void writeMatrixToHDFS(MatrixBlock src, String fname, long rlen, long clen, int blen, long nnz, boolean diag)\nthrows IOException {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibDecompress.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibDecompress.java",
"diff": "@@ -61,6 +61,21 @@ public class CLALibDecompress {\npublic static void decompressTo(CompressedMatrixBlock cmb, MatrixBlock ret, int rowOffset, int colOffset, int k) {\nTiming time = new Timing(true);\n+ if(cmb.getNumColumns() + colOffset > ret.getNumColumns() || cmb.getNumRows() + rowOffset > ret.getNumRows()) {\n+ LOG.warn(\n+ \"Slow slicing off excess parts for decompressTo because decompression into is implemented for fitting blocks\");\n+ MatrixBlock mbSliced = cmb.slice( //\n+ Math.min(Math.abs(rowOffset), 0), Math.min(cmb.getNumRows(), ret.getNumRows() - rowOffset) - 1, // Rows\n+ Math.min(Math.abs(colOffset), 0), Math.min(cmb.getNumColumns(), ret.getNumColumns() - colOffset) - 1); // Cols\n+ if(mbSliced instanceof MatrixBlock) {\n+ mbSliced.putInto(ret, rowOffset, colOffset, false);\n+ return;\n+ }\n+\n+ cmb = (CompressedMatrixBlock) mbSliced;\n+ decompress(cmb, 1);\n+ }\n+\nfinal boolean outSparse = ret.isInSparseFormat();\nif(!cmb.isEmpty()) {\nif(outSparse && cmb.isOverlapping())\n@@ -78,8 +93,7 @@ public class CLALibDecompress {\nLOG.trace(\"decompressed block w/ k=\" + k + \" in \" + t + \"ms.\");\n}\n- if(ret.getNonZeros() <= 0)\n- ret.setNonZeros(cmb.getNonZeros());\n+ ret.recomputeNonZeros();\n}\nprivate static void decompressToSparseBlock(CompressedMatrixBlock cmb, MatrixBlock ret, int rowOffset,\n@@ -96,6 +110,8 @@ public class CLALibDecompress {\nelse\nfor(AColGroup g : groups)\ng.decompressToSparseBlock(sb, 0, nRows, rowOffset, colOffset);\n+ sb.sort();\n+ ret.checkSparseRows();\n}\nprivate static void decompressToDenseBlock(CompressedMatrixBlock cmb, DenseBlock ret, int rowOffset, int colOffset) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/WriteSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/WriteSPInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.spark;\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Random;\n+\nimport org.apache.commons.lang.ArrayUtils;\nimport org.apache.commons.lang3.tuple.Pair;\nimport org.apache.hadoop.io.LongWritable;\n@@ -31,6 +35,7 @@ import org.apache.sysds.common.Types.FileFormat;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.compress.io.WriterCompressed;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.context.SparkExecutionContext;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\n@@ -52,10 +57,6 @@ import org.apache.sysds.runtime.meta.DataCharacteristics;\nimport org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.runtime.util.HDFSTool;\n-import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Random;\n-\npublic class WriteSPInstruction extends SPInstruction implements LineageTraceable {\npublic CPOperand input1 = null;\nprivate CPOperand input2 = null;\n@@ -252,6 +253,21 @@ public class WriteSPInstruction extends SPInstruction implements LineageTraceabl\nif( nonDefaultBlen )\nmcOut = new MatrixCharacteristics(mc).setBlocksize(blen);\n}\n+ else if(fmt == FileFormat.COMPRESSED) {\n+ // reblock output if needed\n+ final int blen = Integer.parseInt(input4.getName());\n+ final boolean nonDefaultBlen = ConfigurationManager.getBlocksize() != blen;\n+ mc.setNonZeros(-1); // default to unknown non zeros for compressed matrix block\n+\n+ if(nonDefaultBlen)\n+ WriterCompressed.writeRDDToHDFS(in1, fname, blen, mc);\n+ else\n+ WriterCompressed.writeRDDToHDFS(in1, fname);\n+\n+ if(nonDefaultBlen)\n+ mcOut = new MatrixCharacteristics(mc).setBlocksize(blen);\n+\n+ }\nelse if(fmt == FileFormat.LIBSVM) {\nif(mc.getRows() == 0 || mc.getCols() == 0) {\nthrow new IOException(\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/functions/ExtractBlockForBinaryReblock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/functions/ExtractBlockForBinaryReblock.java",
"diff": "@@ -23,7 +23,6 @@ import java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.Iterator;\n-import org.apache.commons.lang.NotImplementedException;\nimport org.apache.spark.api.java.function.PairFlatMapFunction;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\n@@ -92,15 +91,19 @@ public class ExtractBlockForBinaryReblock implements PairFlatMapFunction<Tuple2<\nfinal int cixi = UtilFunctions.computeCellInBlock(rowLower, out_blen);\nfinal int cixj = UtilFunctions.computeCellInBlock(colLower, out_blen);\n+ if( aligned ) {\nif(in instanceof CompressedMatrixBlock){\nblk.allocateSparseRowsBlock(false);\n- CLALibDecompress.decompressTo((CompressedMatrixBlock) in, blk, cixi, cixj, 1);\n- }\n- else if( aligned ) {\n+ CLALibDecompress.decompressTo((CompressedMatrixBlock) in, blk, cixi- aixi, cixj-aixj, 1);\n+ }else{\nblk.appendToSparse(in, cixi, cixj);\nblk.setNonZeros(in.getNonZeros());\n}\n+ }\nelse { //general case\n+ if(in instanceof CompressedMatrixBlock){\n+ in = CompressedMatrixBlock.getUncompressed(in);\n+ }\nfor(int i2 = 0; i2 <= (int)(rowUpper-rowLower); i2++)\nfor(int j2 = 0; j2 <= (int)(colUpper-colLower); j2++)\nblk.appendValue(cixi+i2, cixj+j2, in.quickGetValue(aixi+i2, aixj+j2));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/utils/RDDAggregateUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/utils/RDDAggregateUtils.java",
"diff": "@@ -704,17 +704,14 @@ public class RDDAggregateUtils\n}\n@Override\n- public MatrixBlock call(MatrixBlock b1, MatrixBlock b2)\n- throws Exception\n- {\n+ public MatrixBlock call(MatrixBlock b1, MatrixBlock b2) throws Exception {\nlong b1nnz = b1.getNonZeros();\nlong b2nnz = b2.getNonZeros();\n// sanity check input dimensions\nif(b1.getNumRows() != b2.getNumRows() || b1.getNumColumns() != b2.getNumColumns()) {\n- throw new DMLRuntimeException(\"Mismatched block sizes for: \"\n- + b1.getNumRows() + \" \" + b1.getNumColumns() + \" \"\n- + b2.getNumRows() + \" \" + b2.getNumColumns());\n+ throw new DMLRuntimeException(\"Mismatched block sizes for: \" + b1.getNumRows() + \" \" + b1.getNumColumns()\n+ + \" \" + b2.getNumRows() + \" \" + b2.getNumColumns());\n}\n// execute merge (never pass by reference)\n@@ -724,12 +721,10 @@ public class RDDAggregateUtils\n// sanity check output number of non-zeros\nif(ret.getNonZeros() != b1nnz + b2nnz) {\n- throw new DMLRuntimeException(\"Number of non-zeros does not match: \"\n- + ret.getNonZeros() + \" != \" + b1nnz + \" + \" + b2nnz);\n+ throw new DMLRuntimeException(\n+ \"Number of non-zeros does not match: \" + ret.getNonZeros() + \" != \" + b1nnz + \" + \" + b2nnz);\n}\n-\nreturn ret;\n}\n-\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"diff": "@@ -1435,6 +1435,8 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\nfor(int k = apos; k < apos + alen; k++)\nif(avals[k] == 0)\nthrow new RuntimeException(\"Wrong sparse row: zero at \" + k);\n+ if(aix[apos + alen-1] > clen)\n+ throw new RuntimeException(\"Invalid offset outside of matrix\");\n}\n}\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOTestUtils.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOCompressionTestUtils.java",
"diff": "@@ -28,14 +28,14 @@ import java.util.concurrent.atomic.AtomicInteger;\nimport org.apache.sysds.runtime.compress.io.ReaderCompressed;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n-public class IOTestUtils {\n+public class IOCompressionTestUtils {\nfinal static Object lock = new Object();\nstatic final AtomicInteger id = new AtomicInteger(0);\nprotected static void deleteDirectory(File file) {\n- synchronized(IOTestUtils.lock) {\n+ synchronized(IOCompressionTestUtils.lock) {\nfor(File subfile : file.listFiles()) {\nif(subfile.isDirectory())\ndeleteDirectory(subfile);\n@@ -59,7 +59,7 @@ public class IOTestUtils {\n// assertTrue(\"Disk size is not equivalent\", a.getExactSizeOnDisk() > b.getExactSizeOnDisk());\n}\n- protected static MatrixBlock read(String path) {\n+ public static MatrixBlock read(String path) {\ntry {\nreturn ReaderCompressed.readCompressedMatrixFromHDFS(path);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOEmpty.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOEmpty.java",
"diff": "@@ -39,18 +39,18 @@ public class IOEmpty {\n+ IOEmpty.class.getSimpleName() + \"/\";\npublic IOEmpty() {\n- synchronized(IOTestUtils.lock) {\n+ synchronized(IOCompressionTestUtils.lock) {\nnew File(nameBeginning).mkdirs();\n}\n}\n@AfterClass\npublic static void cleanup() {\n- IOTestUtils.deleteDirectory(new File(nameBeginning));\n+ IOCompressionTestUtils.deleteDirectory(new File(nameBeginning));\n}\npublic static String getName() {\n- return IOTestUtils.getName(nameBeginning);\n+ return IOCompressionTestUtils.getName(nameBeginning);\n}\n@Test\n@@ -65,8 +65,8 @@ public class IOEmpty {\npublic void writeEmptyAndRead() {\nString n = getName();\nwrite(n, 10, 10, 1000);\n- MatrixBlock mb = IOTestUtils.read(n);\n- IOTestUtils.verifyEquivalence(mb, new MatrixBlock(10, 10, 0.0));\n+ MatrixBlock mb = IOCompressionTestUtils.read(n);\n+ IOCompressionTestUtils.verifyEquivalence(mb, new MatrixBlock(10, 10, 0.0));\n}\n@Test\n@@ -83,8 +83,8 @@ public class IOEmpty {\nwrite(n, 1000, 10, 100);\nFile f = new File(n);\nassertTrue(f.isDirectory() || f.isFile());\n- MatrixBlock mb = IOTestUtils.read(n);\n- IOTestUtils.verifyEquivalence(mb, new MatrixBlock(1000, 10, 0.0));\n+ MatrixBlock mb = IOCompressionTestUtils.read(n);\n+ IOCompressionTestUtils.verifyEquivalence(mb, new MatrixBlock(1000, 10, 0.0));\n}\nprotected static void write(String path, int nRows, int nCols, int blen) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOSpark.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOSpark.java",
"diff": "package org.apache.sysds.test.component.compress.io;\nimport static org.junit.Assert.assertEquals;\n+import static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport java.io.File;\n@@ -31,14 +32,18 @@ import org.apache.spark.api.java.JavaPairRDD;\nimport org.apache.spark.api.java.JavaSparkContext;\nimport org.apache.sysds.common.Types.FileFormat;\nimport org.apache.sysds.common.Types.ValueType;\n+import org.apache.sysds.runtime.compress.io.CompressUnwrap;\nimport org.apache.sysds.runtime.compress.io.CompressedWriteBlock;\nimport org.apache.sysds.runtime.compress.io.WriterCompressed;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContextFactory;\nimport org.apache.sysds.runtime.controlprogram.context.SparkExecutionContext;\n+import org.apache.sysds.runtime.instructions.spark.utils.RDDConverterUtils;\nimport org.apache.sysds.runtime.io.InputOutputInfo;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\n+import org.apache.sysds.runtime.meta.DataCharacteristics;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\nimport org.apache.sysds.test.TestUtils;\nimport org.junit.AfterClass;\nimport org.junit.Test;\n@@ -54,11 +59,11 @@ public class IOSpark {\n@AfterClass\npublic static void cleanup() {\n- IOTestUtils.deleteDirectory(new File(nameBeginning));\n+ IOCompressionTestUtils.deleteDirectory(new File(nameBeginning));\n}\nprivate static String getName() {\n- return IOTestUtils.getName(nameBeginning);\n+ return IOCompressionTestUtils.getName(nameBeginning);\n}\n@Test\n@@ -109,38 +114,202 @@ public class IOSpark {\nreadWrite(mb);\n}\n- private void readWrite(MatrixBlock mb) {\n- double sum = mb.sum();\n- String n = getName();\n- try {\n- WriterCompressed.writeCompressedMatrixToHDFS(mb, n, 50);\n+ @Test\n+ public void writeSparkReadCPMultiColBlock() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 124, 1, 3, 1.0, 2514));\n+ testWriteSparkReadCP(mb, 100, 100);\n}\n- catch(Exception e) {\n- e.printStackTrace();\n- fail(e.getMessage());\n+\n+ @Test\n+ public void writeSparkReadCPMultiRowBlock() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(1322, 33, 1, 3, 1.0, 2514));\n+ testWriteSparkReadCP(mb, 100, 100);\n}\n- verifySum(read(n), sum, 0.0001);\n+\n+ @Test\n+ public void writeSparkReadCPSingleBlock() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 99, 1, 3, 1.0, 33));\n+ testWriteSparkReadCP(mb, 100, 100);\n}\n- private void verifySum(List<Tuple2<MatrixIndexes, MatrixBlock>> c, double val, double tol) {\n- double sum = 0.0;\n- for(Tuple2<MatrixIndexes, MatrixBlock> b : c)\n- sum += b._2().sum();\n- assertEquals(val, sum, tol);\n+ @Test\n+ public void writeSparkReadCPMultiBlock() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(580, 244, 1, 3, 1.0, 33));\n+ testWriteSparkReadCP(mb, 100, 100);\n}\n- private List<Tuple2<MatrixIndexes, MatrixBlock>> read(String n) {\n- JavaPairRDD<MatrixIndexes, MatrixBlock> m = getRDD(n).mapValues(x -> x.get());\n- return m.collect();\n+ @Test\n+ public void writeSparkReadCPMultiColBlockReblockUp() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 124, 1, 3, 1.0, 2514));\n+ testWriteSparkReadCP(mb, 100, 150);\n+ }\n+\n+ @Test\n+ public void 
writeSparkReadCPMultiRowBlockReblockUp() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(1322, 33, 1, 3, 1.0, 2514));\n+ testWriteSparkReadCP(mb, 100, 150);\n+ }\n+\n+ @Test\n+ public void writeSparkReadCPSingleBlockReblockUp() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 99, 1, 3, 1.0, 33));\n+ testWriteSparkReadCP(mb, 100, 150);\n+ }\n+\n+ @Test\n+ public void writeSparkReadCPMultiBlockReblockUp() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(580, 244, 1, 3, 1.0, 33));\n+ testWriteSparkReadCP(mb, 100, 150);\n+ }\n+\n+ @Test\n+ public void writeSparkReadCPMultiColBlockReblockDown() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 124, 1, 3, 1.0, 2514));\n+ testWriteSparkReadCP(mb, 100, 80);\n+ }\n+\n+ @Test\n+ public void writeSparkReadCPMultiRowBlockReblockDown() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(1322, 33, 1, 3, 1.0, 2514));\n+ testWriteSparkReadCP(mb, 100, 80);\n+ }\n+\n+ @Test\n+ public void writeSparkReadCPSingleBlockReblockDown() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 99, 1, 3, 1.0, 33));\n+ testWriteSparkReadCP(mb, 100, 80);\n+ }\n+\n+ @Test\n+ public void writeSparkReadCPMultiBlockReblockDown() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(580, 244, 1, 3, 1.0, 33));\n+ testWriteSparkReadCP(mb, 100, 80);\n+ }\n+\n+ private void testWriteSparkReadCP(MatrixBlock mb, int blen1, int blen2) throws Exception {\n+\n+ String f1 = getName();\n+ WriterCompressed.writeCompressedMatrixToHDFS(mb, f1, blen1);\n+\n+ // Make sure the first file is written\n+ File f = new File(f1);\n+ assertTrue(f.isFile() || f.isDirectory());\n+ // Read in again as RDD\n+ JavaPairRDD<MatrixIndexes, MatrixBlock> m = getRDD(f1);\n+ String f2 = getName(); // get new name for writing RDD.\n+\n+ // Write RDD to disk\n+ if(blen1 != blen2) {\n+ DataCharacteristics mc = new MatrixCharacteristics(mb.getNumRows(), mb.getNumColumns(), blen1);\n+ WriterCompressed.writeRDDToHDFS(m, f2, blen2, mc);\n+ }\n+ else\n+ WriterCompressed.writeRDDToHDFS(m, f2);\n+ // Read locally the spark block written.\n+ MatrixBlock mbr = IOCompressionTestUtils.read(f2);\n+ IOCompressionTestUtils.verifyEquivalence(mb, mbr);\n+ }\n+\n+ @Test\n+ public void testReblock_up() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 50, 1, 3, 1.0, 2514));\n+ testReblock(mb, 25, 50);\n+ }\n+\n+ @Test\n+ public void testReblock_up_2() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 50, 1, 3, 1.0, 2514));\n+ testReblock(mb, 25, 55);\n+ }\n+\n+ @Test\n+ public void testReblock_up_3() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(165, 110, 1, 3, 1.0, 2514));\n+ testReblock(mb, 25, 55);\n+ }\n+\n+ @Test\n+ public void testReblock_up_4() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(165, 110, 1, 3, 1.0, 2514));\n+ testReblock(mb, 25, 100);\n+ }\n+\n+ @Test\n+ public void testReblock_up_5() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(230, 401, 1, 3, 1.0, 2514));\n+ testReblock(mb, 25, 100);\n+ }\n+\n+ @Test\n+ public void testReblock_down() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 50, 1, 3, 1.0, 
2514));\n+ testReblock(mb, 50, 25);\n+ }\n+\n+ @Test\n+ public void testReblock_down_2() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(50, 50, 1, 3, 1.0, 2514));\n+ testReblock(mb, 55, 25);\n+ }\n+\n+ @Test\n+ public void testReblock_down_3() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(165, 110, 1, 3, 1.0, 2514));\n+ testReblock(mb, 55, 25);\n+ }\n+\n+ @Test\n+ public void testReblock_down_4() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(165, 110, 1, 3, 1.0, 2514));\n+ testReblock(mb, 100, 25);\n+ }\n+\n+ @Test\n+ public void testReblock_down_5() throws Exception {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(230, 401, 1, 3, 1.0, 2514));\n+ testReblock(mb, 100, 25);\n+ }\n+\n+ private void testReblock(MatrixBlock mb, int blen1, int blen2) throws Exception {\n+ String f1 = getName();\n+ WriterCompressed.writeCompressedMatrixToHDFS(mb, f1, blen1);\n+\n+ // Read in again as RDD\n+ JavaPairRDD<MatrixIndexes, MatrixBlock> m = getRDD(f1); // Our starting point\n+\n+ int nBlocksExpected = (1 + (mb.getNumColumns() - 1) / blen1) * (1 + (mb.getNumRows() - 1) / blen1);\n+ int nBlocksActual = m.collect().size();\n+ assertEquals(\"Expected same number of blocks \", nBlocksExpected, nBlocksActual);\n+\n+ DataCharacteristics mc = new MatrixCharacteristics(mb.getNumRows(), mb.getNumColumns(), blen1);\n+ JavaPairRDD<MatrixIndexes, MatrixBlock> m2 = reblock(m, mc, blen2);\n+ int nBlocksExpected2 = (1 + (mb.getNumColumns() - 1) / blen2) * (1 + (mb.getNumRows() - 1) / blen2);\n+ int nBlocksActual2 = m2.collect().size();\n+ assertEquals(\"Expected same number of blocks on re-blocked\", nBlocksExpected2, nBlocksActual2);\n+\n+ double val = mb.sum();\n+ verifySum(m, val, 0.0000001);\n+ verifySum(m2, val, 0.0000001);\n+\n+ }\n+\n+ private static JavaPairRDD<MatrixIndexes, MatrixBlock> reblock(JavaPairRDD<MatrixIndexes, MatrixBlock> in,\n+ DataCharacteristics mc, int blen) {\n+ final DataCharacteristics outC = new MatrixCharacteristics(mc).setBlocksize(blen);\n+ return RDDConverterUtils.binaryBlockToBinaryBlock(in, mc, outC);\n+ }\n+\n+ private List<Tuple2<MatrixIndexes, MatrixBlock>> read(String n) {\n+ return getRDD(n).collect();\n}\n@SuppressWarnings({\"unchecked\"})\n- private JavaPairRDD<MatrixIndexes, CompressedWriteBlock> getRDD(String path) {\n+ private JavaPairRDD<MatrixIndexes, MatrixBlock> getRDD(String path) {\nInputOutputInfo inf = InputOutputInfo.CompressedInputOutputInfo;\nJavaSparkContext sc = SparkExecutionContext.getSparkContextStatic();\n- return (JavaPairRDD<MatrixIndexes, CompressedWriteBlock>) sc.hadoopFile(path, inf.inputFormatClass, inf.keyClass,\n- inf.valueClass);\n+ return ((JavaPairRDD<MatrixIndexes, CompressedWriteBlock>) sc.hadoopFile(path, inf.inputFormatClass, inf.keyClass,\n+ inf.valueClass)).mapValues(new CompressUnwrap());\n}\n@SuppressWarnings({\"unchecked\"})\n@@ -158,13 +327,36 @@ public class IOSpark {\nJavaPairRDD<MatrixIndexes, MatrixBlock> m = (JavaPairRDD<MatrixIndexes, MatrixBlock>) ec\n.getRDDHandleForMatrixObject(obj, fmt);\n+ List<Tuple2<MatrixIndexes, MatrixBlock>> c = m.collect();\n+ verifySum(c, mb.sum(), 0.0001);\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n- verifySum(m.collect(), mb.sum(), 0.0001);\n+ private void readWrite(MatrixBlock mb) {\n+ double sum = mb.sum();\n+ String n = getName();\n+ try {\n+ WriterCompressed.writeCompressedMatrixToHDFS(mb, n, 
50);\n}\ncatch(Exception e) {\ne.printStackTrace();\nfail(e.getMessage());\n}\n+ verifySum(read(n), sum, 0.0001);\n+ }\n+\n+ private void verifySum(JavaPairRDD<MatrixIndexes, MatrixBlock> m, double val, double tol) {\n+ verifySum(m.collect(), val, tol);\n}\n+ private void verifySum(List<Tuple2<MatrixIndexes, MatrixBlock>> c, double val, double tol) {\n+ double sum = 0.0;\n+ for(Tuple2<MatrixIndexes, MatrixBlock> b : c)\n+ sum += b._2().sum();\n+ assertEquals(val, sum, tol);\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOTest.java",
"diff": "@@ -41,18 +41,18 @@ public class IOTest {\n+ IOTest.class.getSimpleName() + \"/\";\npublic IOTest() {\n- synchronized(IOTestUtils.lock) {\n+ synchronized(IOCompressionTestUtils.lock) {\nnew File(nameBeginning).mkdirs();\n}\n}\n@AfterClass\npublic static void cleanup() {\n- IOTestUtils.deleteDirectory(new File(nameBeginning));\n+ IOCompressionTestUtils.deleteDirectory(new File(nameBeginning));\n}\npublic static String getName() {\n- return IOTestUtils.getName(nameBeginning);\n+ return IOCompressionTestUtils.getName(nameBeginning);\n}\n@Test\n@@ -101,8 +101,8 @@ public class IOTest {\nWriterCompressed.writeCompressedMatrixToHDFS(mb, filename);\nFile f = new File(filename);\nassertTrue(f.isFile() || f.isDirectory());\n- MatrixBlock mbr = IOTestUtils.read(filename);\n- IOTestUtils.verifyEquivalence(mb, mbr);\n+ MatrixBlock mbr = IOCompressionTestUtils.read(filename);\n+ IOCompressionTestUtils.verifyEquivalence(mb, mbr);\n}\nprotected static void write(MatrixBlock src, String path) {\n@@ -145,7 +145,7 @@ public class IOTest {\nWriterCompressed.writeCompressedMatrixToHDFS(mb, filename, blen);\nFile f = new File(filename);\nassertTrue(f.isFile() || f.isDirectory());\n- MatrixBlock mbr = IOTestUtils.read(filename);\n- IOTestUtils.verifyEquivalence(mb, mbr);\n+ MatrixBlock mbr = IOCompressionTestUtils.read(filename);\n+ IOCompressionTestUtils.verifyEquivalence(mb, mbr);\n}\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/io/compressed/WriteCompressedTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.io.compressed;\n+\n+import java.io.IOException;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.apache.sysds.test.component.compress.io.IOCompressionTestUtils;\n+import org.junit.Test;\n+\n+/**\n+ * JUnit Test cases to evaluate the functionality of reading CSV files.\n+ *\n+ * Test 1: write() w/ all properties. Test 2: read(format=\"csv\") w/o mtd file. Test 3: read() w/ complete mtd file.\n+ *\n+ */\n+\n+public class WriteCompressedTest extends AutomatedTestBase {\n+\n+ private final static String TEST_NAME = \"WriteCompressedTest\";\n+ private final static String TEST_DIR = \"functions/io/compressed/\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + WriteCompressedTest.class.getSimpleName() + \"/\";\n+\n+ private final static double eps = 1e-9;\n+\n+ @Override\n+ public void setUp() {\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] {\"Rout\"}));\n+ }\n+\n+ @Test\n+ public void testCP() throws IOException {\n+ runWriteTest(ExecMode.SINGLE_NODE);\n+ }\n+\n+ @Test\n+ public void testHP() throws IOException {\n+ runWriteTest(ExecMode.HYBRID);\n+ }\n+\n+ @Test\n+ public void testSP() throws IOException {\n+ runWriteTest(ExecMode.SPARK);\n+ }\n+\n+ private void runWriteTest(ExecMode platform) throws IOException {\n+ runWriteTest(platform, 100, 100, 0, 0, 0.0);\n+ }\n+\n+ private void runWriteTest(ExecMode platform, int rows, int cols, int min, int max, double sparsity)\n+ throws IOException {\n+\n+ ExecMode oldPlatform = rtplatform;\n+ rtplatform = platform;\n+\n+ TestConfiguration config = getTestConfiguration(TEST_NAME);\n+ loadTestConfiguration(config);\n+\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = new String[] {\"-explain\", \"-args\", \"\" + rows, \"\" + cols, \"\" + min, \"\" + max, \"\" + sparsity,\n+ output(\"out.cla\"), output(\"sum.scalar\")};\n+\n+ runTest(null);\n+\n+ double sumDML = TestUtils.readDMLScalar(output(\"sum.scalar\"));\n+ MatrixBlock mbr = IOCompressionTestUtils.read(output(\"out.cla\"));\n+\n+ TestUtils.compareScalars(sumDML, mbr.sum(), eps);\n+\n+ rtplatform = oldPlatform;\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/io/compressed/WriteCompressedTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+m = rand(rows= $1, cols=$2, min=$3, max=$4, sparsity=$5)\n+\n+s = sum(m)\n+write(m, $6, format=\"compressed\")\n+write(s, $7, format=\"csv\")\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3444] Spark Write compressed format
This commit fixes/adds support for Spark writing;
the previous versions did not properly support the edge cases,
while this commit fixes inconsistencies and allows local reading
of folders containing sub blocks. |
49,706 | 15.10.2022 16:09:12 | -7,200 | 07dd88aba39547b2ba00ae44f86d9a0f83444b0e | DDC Append | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/AColGroup.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/AColGroup.java",
"diff": "@@ -516,6 +516,14 @@ public abstract class AColGroup implements Serializable {\n*/\npublic abstract double getMax();\n+ /**\n+ * Short hand method for getting the sum of this column group\n+ *\n+ * @param nRows The number of rows in the column group\n+ * @return The sum of this column group\n+ */\n+ public abstract double getSum(int nRows);\n+\n/**\n* Detect if the column group contains a specific value.\n*\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/AColGroupCompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/AColGroupCompressed.java",
"diff": "@@ -115,6 +115,13 @@ public abstract class AColGroupCompressed extends AColGroup {\nreturn computeMxx(Double.NEGATIVE_INFINITY, Builtin.getBuiltinFnObject(BuiltinCode.MAX));\n}\n+ @Override\n+ public double getSum(int nRows) {\n+ double[] ret = new double[1];\n+ computeSum(ret, nRows);\n+ return ret[0];\n+ }\n+\n@Override\npublic final void unaryAggregateOperations(AggregateUnaryOperator op, double[] c, int nRows, int rl, int ru) {\nunaryAggregateOperations(op, c, nRows, rl, ru,\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDC.java",
"diff": "@@ -22,6 +22,7 @@ package org.apache.sysds.runtime.compress.colgroup;\nimport java.io.DataInput;\nimport java.io.DataOutput;\nimport java.io.IOException;\n+import java.util.Arrays;\nimport org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.colgroup.dictionary.ADictionary;\n@@ -485,6 +486,13 @@ public class ColGroupDDC extends APreAgg {\n@Override\npublic AColGroup append(AColGroup g) {\n+ if(g instanceof ColGroupDDC && Arrays.equals(g.getColIndices(), _colIndexes)) {\n+ ColGroupDDC gDDC = (ColGroupDDC) g;\n+ if(gDDC._dict.eq(_dict)){\n+ AMapToData nd = _data.append(gDDC._data);\n+ return create(_colIndexes, _dict, nd, null);\n+ }\n+ }\nreturn null;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupUncompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupUncompressed.java",
"diff": "@@ -477,6 +477,11 @@ public class ColGroupUncompressed extends AColGroup {\nreturn _data.max();\n}\n+ @Override\n+ public double getSum(int nRows) {\n+ return _data.sum();\n+ }\n+\n@Override\npublic final void tsmm(MatrixBlock ret, int nRows) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/Dictionary.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/Dictionary.java",
"diff": "@@ -1077,7 +1077,6 @@ public class Dictionary extends ADictionary {\nfinal double[] dv = mb.getDenseBlockValues();\nreturn Arrays.equals(_values, dv);\n}\n-\nreturn false;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/AMapToData.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/AMapToData.java",
"diff": "@@ -815,6 +815,9 @@ public abstract class AMapToData implements Serializable {\n*/\npublic abstract AMapToData slice(int l, int u);\n+\n+ public abstract AMapToData append(AMapToData t);\n+\n@Override\npublic String toString() {\nfinal int sz = size();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToBit.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToBit.java",
"diff": "@@ -24,6 +24,7 @@ import java.io.DataOutput;\nimport java.io.IOException;\nimport java.util.BitSet;\n+import org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.colgroup.dictionary.ADictionary;\nimport org.apache.sysds.runtime.compress.colgroup.mapping.MapToFactory.MAP_TYPE;\nimport org.apache.sysds.utils.MemoryEstimates;\n@@ -324,4 +325,22 @@ public class MapToBit extends AMapToData {\npublic AMapToData slice(int l, int u) {\nreturn new MapToBit(getUnique(), _data.get(l,u), u - l);\n}\n+\n+ @Override\n+ public AMapToData append(AMapToData t) {\n+ if(t instanceof MapToBit){\n+ MapToBit tb = (MapToBit) t;\n+ BitSet tbb = tb._data;\n+ final int newSize = _size + t.size();\n+ BitSet ret = new BitSet(newSize);\n+ ret.xor(_data);\n+\n+ tbb.stream().forEach(x -> ret.set(x + _size, true));\n+ return new MapToBit(2, ret, newSize);\n+ }\n+ else{\n+ throw new NotImplementedException(\"Not implemented append on Bit map different type\");\n+\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToByte.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToByte.java",
"diff": "@@ -25,6 +25,7 @@ import java.io.IOException;\nimport java.util.Arrays;\nimport java.util.BitSet;\n+import org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.colgroup.mapping.MapToFactory.MAP_TYPE;\nimport org.apache.sysds.utils.MemoryEstimates;\n@@ -209,4 +210,27 @@ public class MapToByte extends AMapToData {\npublic AMapToData slice(int l, int u) {\nreturn new MapToByte(getUnique(), Arrays.copyOfRange(_data, l, u));\n}\n+\n+ @Override\n+ public AMapToData append(AMapToData t) {\n+ if(t instanceof MapToByte) {\n+ MapToByte tb = (MapToByte) t;\n+ byte[] tbb = tb._data;\n+ final int newSize = _data.length + t.size();\n+ final int newDistinct = Math.max(getUnique(), t.getUnique());\n+\n+ // copy\n+ byte[] ret = Arrays.copyOf(_data, newSize);\n+ System.arraycopy(tbb, 0, ret, _data.length, t.size());\n+\n+ // return\n+ if(newDistinct < 127)\n+ return new MapToUByte(newDistinct, ret);\n+ else\n+ return new MapToByte(newDistinct, ret);\n+ }\n+ else {\n+ throw new NotImplementedException(\"Not implemented append on Bit map different type\");\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToChar.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToChar.java",
"diff": "@@ -25,6 +25,7 @@ import java.io.IOException;\nimport java.util.Arrays;\nimport java.util.BitSet;\n+import org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.colgroup.mapping.MapToFactory.MAP_TYPE;\nimport org.apache.sysds.utils.MemoryEstimates;\n@@ -171,7 +172,6 @@ public class MapToChar extends AMapToData {\n}\n}\n-\n@Override\npublic int getUpperBoundValue() {\nreturn Character.MAX_VALUE;\n@@ -232,4 +232,23 @@ public class MapToChar extends AMapToData {\npublic AMapToData slice(int l, int u) {\nreturn new MapToChar(getUnique(), Arrays.copyOfRange(_data, l, u));\n}\n+\n+ @Override\n+ public AMapToData append(AMapToData t) {\n+ if(t instanceof MapToChar) {\n+ MapToChar tb = (MapToChar) t;\n+ char[] tbb = tb._data;\n+ final int newSize = _data.length + t.size();\n+ final int newDistinct = Math.max(getUnique(), t.getUnique());\n+\n+ // copy\n+ char[] ret = Arrays.copyOf(_data, newSize);\n+ System.arraycopy(tbb, 0, ret, _data.length, t.size());\n+\n+ return new MapToChar(newDistinct, ret);\n+ }\n+ else {\n+ throw new NotImplementedException(\"Not implemented append on Bit map different type\");\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToCharPByte.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToCharPByte.java",
"diff": "@@ -25,6 +25,7 @@ import java.io.IOException;\nimport java.util.Arrays;\nimport java.util.BitSet;\n+import org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.colgroup.mapping.MapToFactory.MAP_TYPE;\nimport org.apache.sysds.utils.MemoryEstimates;\n@@ -217,4 +218,27 @@ public class MapToCharPByte extends AMapToData {\npublic AMapToData slice(int l, int u) {\nreturn new MapToCharPByte(getUnique(), Arrays.copyOfRange(_data_c, l, u), Arrays.copyOfRange(_data_b, l, u));\n}\n+\n+ @Override\n+ public AMapToData append(AMapToData t) {\n+ if(t instanceof MapToCharPByte) {\n+ MapToCharPByte tb = (MapToCharPByte) t;\n+ char[] tbb = tb._data_c;\n+ byte[] tbbb = tb._data_b;\n+ final int newSize = _data_c.length + t.size();\n+ final int newDistinct = Math.max(getUnique(), t.getUnique());\n+\n+ // copy\n+ char[] ret_c = Arrays.copyOf(_data_c, newSize);\n+ System.arraycopy(tbb, 0, ret_c, _data_c.length, t.size());\n+ byte[] ret_b = Arrays.copyOf(_data_b, newSize);\n+ System.arraycopy(tbbb, 0, ret_b, _data_b.length, t.size());\n+\n+\n+ return new MapToCharPByte(newDistinct, ret_c, ret_b);\n+ }\n+ else {\n+ throw new NotImplementedException(\"Not implemented append on Bit map different type\");\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToInt.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToInt.java",
"diff": "@@ -25,6 +25,7 @@ import java.io.IOException;\nimport java.util.Arrays;\nimport java.util.BitSet;\n+import org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.colgroup.mapping.MapToFactory.MAP_TYPE;\nimport org.apache.sysds.utils.MemoryEstimates;\n@@ -232,4 +233,23 @@ public class MapToInt extends AMapToData {\npublic AMapToData slice(int l, int u) {\nreturn new MapToInt(getUnique(), Arrays.copyOfRange(_data, l, u));\n}\n+\n+ @Override\n+ public AMapToData append(AMapToData t) {\n+ if(t instanceof MapToInt) {\n+ MapToInt tb = (MapToInt) t;\n+ int[] tbb = tb._data;\n+ final int newSize = _data.length + t.size();\n+ final int newDistinct = Math.max(getUnique(), t.getUnique());\n+\n+ // copy\n+ int[] ret = Arrays.copyOf(_data, newSize);\n+ System.arraycopy(tbb, 0, ret, _data.length, t.size());\n+\n+ return new MapToInt(newDistinct, ret);\n+ }\n+ else {\n+ throw new NotImplementedException(\"Not implemented append on Bit map different type\");\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToZero.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToZero.java",
"diff": "@@ -24,6 +24,7 @@ import java.io.DataOutput;\nimport java.io.IOException;\nimport java.util.BitSet;\n+import org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.colgroup.dictionary.ADictionary;\nimport org.apache.sysds.runtime.compress.colgroup.mapping.MapToFactory.MAP_TYPE;\n@@ -149,4 +150,12 @@ public class MapToZero extends AMapToData {\npublic AMapToData slice(int l, int u) {\nreturn new MapToZero(u - l);\n}\n+\n+ @Override\n+ public AMapToData append(AMapToData t) {\n+ if(t instanceof MapToZero)\n+ return new MapToZero(_size + t.size());\n+ else\n+ throw new NotImplementedException(\"Not implemented append on Bit map different type\");\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/combine/CombineTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/combine/CombineTest.java",
"diff": "@@ -26,15 +26,23 @@ import static org.junit.Assert.fail;\nimport java.util.HashMap;\nimport java.util.Map;\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlockFactory;\n+import org.apache.sysds.runtime.compress.CompressionSettingsBuilder;\n+import org.apache.sysds.runtime.compress.colgroup.AColGroup;\n+import org.apache.sysds.runtime.compress.colgroup.AColGroup.CompressionType;\nimport org.apache.sysds.runtime.compress.lib.CLALibCombine;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\n+import org.apache.sysds.test.TestUtils;\nimport org.junit.Test;\npublic class CombineTest {\n+ protected static final Log LOG = LogFactory.getLog(CombineTest.class.getName());\n+\n@Test\npublic void combineEmpty() {\nCompressedMatrixBlock m1 = CompressedMatrixBlockFactory.createConstant(100, 10, 0.0);\n@@ -56,7 +64,6 @@ public class CombineTest {\n}\n-\n@Test\npublic void combineConst() {\nCompressedMatrixBlock m1 = CompressedMatrixBlockFactory.createConstant(100, 10, 1.0);\n@@ -78,4 +85,23 @@ public class CombineTest {\n}\n+ @Test\n+ public void combineDDC() {\n+ MatrixBlock mb = TestUtils.ceil(TestUtils.generateTestMatrixBlock(165, 2, 1, 3, 1.0, 2514));\n+ CompressedMatrixBlock csb = (CompressedMatrixBlock) CompressedMatrixBlockFactory\n+ .compress(mb,\n+ new CompressionSettingsBuilder().clearValidCompression().addValidCompression(CompressionType.DDC))\n+ .getLeft();\n+\n+ AColGroup g = csb.getColGroups().get(0);\n+ double sum = g.getSum(165);\n+ AColGroup ret = g.append(g);\n+ double sum2 = ret.getSum(165 * 2);\n+ assertEquals(sum * 2, sum2, 0.001);\n+ AColGroup ret2 = ret.append(g);\n+ double sum3 = ret2.getSum(165 * 3);\n+ assertEquals(sum * 3, sum3, 0.001);\n+\n+ }\n+\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3446] DDC Append |
49,706 | 15.10.2022 16:12:16 | -7,200 | 02a6a6c798e5f0aa4228ce13f11096c930eadd10 | DDCFOR Append | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDCFOR.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDCFOR.java",
"diff": "@@ -435,6 +435,13 @@ public class ColGroupDDCFOR extends AMorphingMMColGroup {\n@Override\npublic AColGroup append(AColGroup g) {\n+ if(g instanceof ColGroupDDCFOR && Arrays.equals(g.getColIndices(), _colIndexes)) {\n+ ColGroupDDCFOR gDDC = (ColGroupDDCFOR) g;\n+ if(Arrays.equals(_reference , gDDC._reference) && gDDC._dict.eq(_dict)){\n+ AMapToData nd = _data.append(gDDC._data);\n+ return create(_colIndexes, _dict, nd, null, _reference);\n+ }\n+ }\nreturn null;\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3450] DDCFOR Append |
49,706 | 15.10.2022 16:26:08 | -7,200 | 8e4f06a89a2c82fc2b32bb11106849fe2cd83522 | Uncompressed Append
This commit adds support for uncompressed matrices to append to themselves. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDC.java",
"diff": "@@ -486,13 +486,23 @@ public class ColGroupDDC extends APreAgg {\n@Override\npublic AColGroup append(AColGroup g) {\n- if(g instanceof ColGroupDDC && Arrays.equals(g.getColIndices(), _colIndexes)) {\n+ if(g instanceof ColGroupDDC) {\n+ if(Arrays.equals(g.getColIndices(), _colIndexes)) {\n+\nColGroupDDC gDDC = (ColGroupDDC) g;\nif(gDDC._dict.eq(_dict)) {\nAMapToData nd = _data.append(gDDC._data);\nreturn create(_colIndexes, _dict, nd, null);\n}\n+ else\n+ LOG.warn(\"Not same Dictionaries therefore not appending DDC\\n\" + _dict + \"\\n\\n\" + gDDC._dict);\n}\n+ else\n+ LOG.warn(\"Not same columns therefore not appending DDC\\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n+ + Arrays.toString(g.getColIndices()));\n+ }\n+ else\n+ LOG.warn(\"Not DDC but \" + g.getClass().getSimpleName() + \", therefore not appending DDC\");\nreturn null;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"diff": "@@ -571,6 +571,14 @@ public class ColGroupSDC extends ASDC {\n@Override\npublic AColGroup append(AColGroup g) {\n+ if(g instanceof ColGroupSDC && Arrays.equals(g.getColIndices(), _colIndexes)) {\n+ final ColGroupSDC gSDC = (ColGroupSDC) g;\n+ if(Arrays.equals(_defaultTuple, gSDC._defaultTuple) && gSDC._dict.eq(_dict)) {\n+ final AMapToData nd = _data.append(gSDC._data);\n+ final AOffset ofd = _indexes.append(gSDC._indexes);\n+ return create(_colIndexes, _numRows + gSDC._numRows, _dict, _defaultTuple, ofd, nd, null);\n+ }\n+ }\nreturn null;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupUncompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupUncompressed.java",
"diff": "@@ -767,6 +767,11 @@ public class ColGroupUncompressed extends AColGroup {\n@Override\npublic AColGroup append(AColGroup g) {\n+ if(g instanceof ColGroupUncompressed && Arrays.equals(g.getColIndices(), _colIndexes)) {\n+ final ColGroupUncompressed gDDC = (ColGroupUncompressed) g;\n+ final MatrixBlock nd = _data.append(gDDC._data, false);\n+ return create(nd, _colIndexes);\n+ }\nreturn null;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/offset/AOffset.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/offset/AOffset.java",
"diff": "@@ -439,18 +439,14 @@ public abstract class AOffset implements Serializable {\nreturn ((OffsetChar) this).slice(lowOff, highOff, lowValue, highValue, low, high);\n}\n- // protected abstract OffsetSliceInfo slice(int lowOff, int highOff, int lowValue, int highValue, int low, int high);\n-\n- public static final class OffsetSliceInfo {\n- public final int lIndex;\n- public final int uIndex;\n- public final AOffset offsetSlice;\n-\n- protected OffsetSliceInfo(int l, int u, AOffset off) {\n- this.lIndex = l;\n- this.uIndex = u;\n- this.offsetSlice = off;\n- }\n+ /**\n+ * Append the offsets from that other offset to the offsets in this.\n+ *\n+ * @param t that offsets\n+ * @return this offsets followed by thats offsets.\n+ */\n+ public AOffset append(AOffset t){\n+ throw new NotImplementedException();\n}\n@Override\n@@ -477,6 +473,18 @@ public abstract class AOffset implements Serializable {\nreturn sb.toString();\n}\n+ public static final class OffsetSliceInfo {\n+ public final int lIndex;\n+ public final int uIndex;\n+ public final AOffset offsetSlice;\n+\n+ protected OffsetSliceInfo(int l, int u, AOffset off) {\n+ this.lIndex = l;\n+ this.uIndex = u;\n+ this.offsetSlice = off;\n+ }\n+ }\n+\nprotected static class OffsetCache {\nprotected final AIterator it;\nprotected final int row;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/CompressedWriteBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/CompressedWriteBlock.java",
"diff": "@@ -35,6 +35,10 @@ public class CompressedWriteBlock implements WritableComparable<CompressedWrite\npublic MatrixBlock mb;\n+ private enum CONTENT {\n+ Comp, MB;\n+ }\n+\n/**\n* Write block used to point to a underlying instance of CompressedMatrixBlock or MatrixBlock, Unfortunately spark\n* require a specific object type to serialize therefore we use this class.\n@@ -49,17 +53,25 @@ public class CompressedWriteBlock implements WritableComparable<CompressedWrite\n@Override\npublic void write(DataOutput out) throws IOException {\n- out.writeBoolean(mb instanceof CompressedMatrixBlock);\n+\n+ if(mb instanceof CompressedMatrixBlock)\n+ out.writeByte(CONTENT.Comp.ordinal());\n+ else\n+ out.writeByte(CONTENT.MB.ordinal());\nmb.write(out);\n+\n}\n@Override\npublic void readFields(DataInput in) throws IOException {\n- if(in.readBoolean())\n+ switch(CONTENT.values()[in.readByte()]) {\n+ case Comp:\nmb = CompressedMatrixBlock.read(in);\n- else {\n+ break;\n+ case MB:\nmb = new MatrixBlock();\nmb.readFields(in);\n+ break;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/WriterCompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/WriterCompressed.java",
"diff": "package org.apache.sysds.runtime.compress.io;\nimport java.io.IOException;\n+import java.util.List;\nimport org.apache.commons.logging.Log;\nimport org.apache.commons.logging.LogFactory;\n@@ -35,6 +36,7 @@ import org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlockFactory;\n+import org.apache.sysds.runtime.compress.lib.CLALibSlice;\nimport org.apache.sysds.runtime.instructions.spark.CompressionSPInstruction.CompressionFunction;\nimport org.apache.sysds.runtime.instructions.spark.utils.RDDConverterUtils;\nimport org.apache.sysds.runtime.io.FileFormatProperties;\n@@ -112,7 +114,7 @@ public final class WriterCompressed extends MatrixWriter {\nwrite(m, fname, blen);\n}\n- private void write(MatrixBlock src, String fname, int blen) throws IOException {\n+ private void write(MatrixBlock src, final String fname, final int blen) throws IOException {\nfinal int k = OptimizerUtils.getParallelTextWriteParallelism();\nfinal Path path = new Path(fname);\nfinal JobConf job = new JobConf(ConfigurationManager.getCachedJobConf());\n@@ -120,13 +122,17 @@ public final class WriterCompressed extends MatrixWriter {\nHDFSTool.deleteFileIfExistOnHDFS(path, job);\n// Make Writer (New interface)\n- Writer w = SequenceFile.createWriter(job, Writer.file(path), Writer.bufferSize(4096), Writer.blockSize(4096),\n- Writer.keyClass(MatrixIndexes.class), Writer.valueClass(CompressedWriteBlock.class),\n+ final Writer w = SequenceFile.createWriter(job, Writer.file(path), Writer.bufferSize(4096),\n+ Writer.blockSize(4096), Writer.keyClass(MatrixIndexes.class), Writer.valueClass(CompressedWriteBlock.class),\nWriter.compression(SequenceFile.CompressionType.RECORD), Writer.replication((short) 1));\nfinal int rlen = src.getNumRows();\nfinal int clen = src.getNumColumns();\n+ // Try to compress!\n+ if(!(src instanceof CompressedMatrixBlock))\n+ src = CompressedMatrixBlockFactory.compress(src, k).getLeft();\n+\nif(rlen <= blen && clen <= blen)\nwriteSingleBlock(w, src, k);\nelse\n@@ -145,16 +151,31 @@ public final class WriterCompressed extends MatrixWriter {\nw.append(idx, new CompressedWriteBlock(mc));\n}\n- private void writeMultiBlock(Writer w, MatrixBlock b, int rlen, int clen, int blen, int k) throws IOException {\n+ private void writeMultiBlock(Writer w, MatrixBlock b, final int rlen, final int clen, final int blen, int k)\n+ throws IOException {\nfinal MatrixIndexes indexes = new MatrixIndexes();\n- for(int br = 0; br * blen < rlen; br++) {\n+ if(!(b instanceof CompressedMatrixBlock))\n+ LOG.warn(\"Writing compressed format with non identical compression scheme\");\n+\nfor(int bc = 0; bc * blen < clen; bc++) {\n+ final int sC = bc * blen;\n+ final int mC = Math.min(sC + blen, clen) - 1;\n+ if(b instanceof CompressedMatrixBlock) {\n+ final CompressedMatrixBlock mc = //mC == clen - 1 ? 
(CompressedMatrixBlock) b :\n+ CLALibSlice\n+ .sliceColumns((CompressedMatrixBlock) b, sC, mC); // slice columns!\n+\n+ final List<MatrixBlock> blocks = CLALibSlice.sliceBlocks(mc, blen); // Slice compressed blocks\n+ for(int br = 0; br * blen < rlen; br++) {\n+ indexes.setIndexes(br + 1, bc + 1);\n+ w.append(indexes, new CompressedWriteBlock(blocks.get(br)));\n+ }\n+ }\n+ else {\n+ for(int br = 0; br * blen < rlen; br++) {\n// Max Row and col in block\n- int sR = br * blen;\n- int sC = bc * blen;\n- int mR = Math.min(br * blen + blen, rlen) - 1;\n- int mC = Math.min(bc * blen + blen, clen) - 1;\n-\n+ final int sR = br * blen;\n+ final int mR = Math.min(sR + blen, rlen) - 1;\nMatrixBlock mb = b.slice(sR, mR, sC, mC);\nMatrixBlock mc = CompressedMatrixBlockFactory.compress(mb, k).getLeft();\nindexes.setIndexes(br + 1, bc + 1);\n@@ -163,3 +184,4 @@ public final class WriterCompressed extends MatrixWriter {\n}\n}\n}\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibCombine.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibCombine.java",
"diff": "@@ -151,8 +151,11 @@ public class CLALibCombine {\nfinal List<AColGroup> gs = cmb.getColGroups();\nfor(AColGroup g : gs) {\n- final int[] cols = g.getColIndices();\n- finalCols[cols[0] + bc * blen] = g; // only assign first column of each group.\n+ AColGroup gc = g;\n+ if(bc > 0)\n+ gc = g.shiftColIndices(bc * blen);\n+ final int[] cols = gc.getColIndices();\n+ finalCols[cols[0]] = gc; // only assign first column of each group.\n}\n}\n@@ -169,12 +172,14 @@ public class CLALibCombine {\nif(bc > 0)\ngc = g.shiftColIndices(bc * blen);\nfinal int[] cols = gc.getColIndices();\n-\n- finalCols[cols[0]] = finalCols[cols[0]].append(gc);\n- if(finalCols[cols[0]] == null) {\n- LOG.warn(\"Combining of columns was non trivial, therefore falling back to decompression\");\n+ AColGroup prev = finalCols[cols[0]];\n+ AColGroup comb = prev.append(gc);\n+ if(comb == null) {\n+ LOG.warn(\"Combining of columns from group: \" + prev.getClass().getSimpleName() + \" and \"\n+ + gc.getClass().getSimpleName() + \" was non trivial, therefore falling back to decompression\");\nreturn combineViaDecompression(m, rlen, clen, blen);\n}\n+ finalCols[cols[0]] = comb;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibSlice.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibSlice.java",
"diff": "@@ -39,18 +39,12 @@ public class CLALibSlice {\n*\n* @param cmb The input block to slice.\n* @param blen The length of the blocks.\n- * @return A list containing CompressedMatrixBlocks where there is values, and null if there is no values in the sub\n- * block.\n+ * @return A list containing CompressedMatrixBlocks or MatrixBlocks\n*/\n- public static List<CompressedMatrixBlock> sliceBlocks(CompressedMatrixBlock cmb, int blen) {\n- List<CompressedMatrixBlock> mbs = new ArrayList<>();\n- for(int b = 0; b < cmb.getNumRows(); b += blen) {\n- MatrixBlock mb = sliceRowsCompressed(cmb, b, Math.min(b + blen, cmb.getNumRows()));\n- if(mb instanceof CompressedMatrixBlock)\n- mbs.add((CompressedMatrixBlock) mb);\n- else\n- mbs.add(null);\n- }\n+ public static List<MatrixBlock> sliceBlocks(CompressedMatrixBlock cmb, int blen) {\n+ final List<MatrixBlock> mbs = new ArrayList<>();\n+ for(int b = 0; b < cmb.getNumRows(); b += blen)\n+ mbs.add(sliceRowsCompressed(cmb, b, Math.min(b + blen, cmb.getNumRows()) - 1));\nreturn mbs;\n}\n@@ -79,7 +73,6 @@ public class CLALibSlice {\nreturn sliceRowsDecompress(cmb, rl, ru);\nelse\nreturn sliceRowsCompressed(cmb, rl, ru);\n-\n}\nprivate static boolean shouldDecompressSliceRows(CompressedMatrixBlock cmb, int rl, int ru) {\n@@ -139,7 +132,7 @@ public class CLALibSlice {\nreturn tmp;\n}\n- private static CompressedMatrixBlock sliceColumns(CompressedMatrixBlock cmb, int cl, int cu) {\n+ public static CompressedMatrixBlock sliceColumns(CompressedMatrixBlock cmb, int cl, int cu) {\nfinal int cue = cu + 1;\nfinal CompressedMatrixBlock ret = new CompressedMatrixBlock(cmb.getNumRows(), cue - cl);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"diff": "@@ -3659,22 +3659,66 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\nreturn result;\n}\n+ /**\n+ * Append that matrix to this matrix, while allocating a new matrix.\n+ * Default is cbind making the matrix \"wider\"\n+ *\n+ * @param that the other matrix to append\n+ * @return A new MatrixBlock object with the appended result\n+ */\npublic final MatrixBlock append(MatrixBlock that) {\nreturn append(that, null, true); // default cbind\n}\n+ /**\n+ * Append that matrix to this matrix, while allocating a new matrix.\n+ * cbind true makes the matrix \"wider\" while cbind false make it \"taller\"\n+ *\n+ * @param that the other matrix to append\n+ * @param cbind if binding on columns or rows\n+ * @return a new MatrixBlock object with the appended result\n+ */\npublic final MatrixBlock append(MatrixBlock that, boolean cbind) {\nreturn append(that, null, cbind);\n}\n+ /**\n+ * Append that matrix to this matrix.\n+ *\n+ * Default is cbind making the matrix \"wider\"\n+ *\n+ * @param that the other matrix to append\n+ * @param ret the output matrix to modify, (is also returned)\n+ * @return the ret MatrixBlock object with the appended result\n+ */\npublic final MatrixBlock append( MatrixBlock that, MatrixBlock ret ) {\nreturn append(that, ret, true); //default cbind\n}\n+ /**\n+ * Append that matrix to this matrix.\n+ *\n+ * cbind true makes the matrix \"wider\" while cbind false make it \"taller\"\n+ *\n+ * @param that the other matrix to append\n+ * @param ret the output matrix to modify, (is also returned)\n+ * @param cbind if binding on columns or rows\n+ * @return the ret MatrixBlock object with the appended result\n+ */\npublic final MatrixBlock append( MatrixBlock that, MatrixBlock ret, boolean cbind ) {\nreturn append(new MatrixBlock[]{that}, ret, cbind);\n}\n+ /**\n+ * Append that list of matrixes to this matrix.\n+ *\n+ * cbind true makes the matrix \"wider\" while cbind false make it \"taller\"\n+ *\n+ * @param that a list of matrices to append in order\n+ * @param result the output matrix to modify, (is also returned)\n+ * @param cbind if binding on columns or rows\n+ * @return the ret MatrixBlock object with the appended result\n+ */\npublic MatrixBlock append( MatrixBlock[] that, MatrixBlock result, boolean cbind) {\ncheckDimensionsForAppend(that, cbind);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/io/IOTest.java",
"diff": "@@ -117,12 +117,12 @@ public class IOTest {\n@Test\npublic void testWriteAndReadSmallBlen() throws Exception {\n- writeAndRead(TestUtils.ceil(TestUtils.generateTestMatrixBlock(1000, 3, 1, 3, 1.0, 2514)), 100);\n+ writeAndRead(TestUtils.ceil(TestUtils.generateTestMatrixBlock(200, 3, 1, 3, 1.0, 2514)), 100);\n}\n@Test\npublic void testWriteAndReadSmallBlenBiggerClen() throws Exception {\n- writeAndRead(TestUtils.ceil(TestUtils.generateTestMatrixBlock(1000, 51, 1, 3, 1.0, 2514)), 50);\n+ writeAndRead(TestUtils.ceil(TestUtils.generateTestMatrixBlock(200, 51, 1, 3, 1.0, 2514)), 50);\n}\n@Test\n@@ -141,6 +141,8 @@ public class IOTest {\n}\nprotected static void writeAndRead(MatrixBlock mb, int blen) throws Exception {\n+ try{\n+\nString filename = getName();\nWriterCompressed.writeCompressedMatrixToHDFS(mb, filename, blen);\nFile f = new File(filename);\n@@ -148,4 +150,9 @@ public class IOTest {\nMatrixBlock mbr = IOCompressionTestUtils.read(filename);\nIOCompressionTestUtils.verifyEquivalence(mb, mbr);\n}\n+ catch(Exception e){\n+ e.printStackTrace();\n+ throw e;\n+ }\n+ }\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3448] Uncompressed Append
This commit adds support for uncompressed matrices to append to themselves. |
49,706 | 19.10.2022 16:38:16 | -7,200 | 678b28a2210dcdf11841d9486d1dac4fdacb036d | [MINOR] Disable Hadoop Write compression | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/CompressWrap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/CompressWrap.java",
"diff": "package org.apache.sysds.runtime.compress.io;\nimport org.apache.spark.api.java.function.Function;\n-import org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\npublic class CompressWrap implements Function<MatrixBlock, CompressedWriteBlock> {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/WriterCompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/WriterCompressed.java",
"diff": "@@ -124,7 +124,8 @@ public final class WriterCompressed extends MatrixWriter {\n// Make Writer (New interface)\nfinal Writer w = SequenceFile.createWriter(job, Writer.file(path), Writer.bufferSize(4096),\nWriter.blockSize(4096), Writer.keyClass(MatrixIndexes.class), Writer.valueClass(CompressedWriteBlock.class),\n- Writer.compression(SequenceFile.CompressionType.RECORD), Writer.replication((short) 1));\n+ Writer.compression(SequenceFile.CompressionType.NONE), // No Compression type on disk\n+ Writer.replication((short) 1));\nfinal int rlen = src.getNumRows();\nfinal int clen = src.getNumColumns();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibCombine.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibCombine.java",
"diff": "@@ -32,13 +32,14 @@ import org.apache.sysds.runtime.compress.colgroup.AColGroup.CompressionType;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\n+import edu.emory.mathcs.backport.java.util.Arrays;\n+\npublic class CLALibCombine {\nprotected static final Log LOG = LogFactory.getLog(CLALibCombine.class.getName());\npublic static MatrixBlock combine(Map<MatrixIndexes, MatrixBlock> m) {\n// Dynamically find rlen, clen and blen;\n-\n// assume that the blen is the same in all blocks.\n// assume that all blocks are there ...\nfinal MatrixIndexes lookup = new MatrixIndexes(1, 1);\n@@ -57,15 +58,21 @@ public class CLALibCombine {\nlookup.setIndexes(1, lookup.getColumnIndex() + 1);\n}\n- return combine(m, (int) rows, (int) cols, blen);\n+ return combine(m, lookup, (int) rows, (int) cols, blen);\n}\npublic static MatrixBlock combine(Map<MatrixIndexes, MatrixBlock> m, int rlen, int clen, int blen) {\n+ final MatrixIndexes lookup = new MatrixIndexes();\n+ return combine(m, lookup, rlen, clen, blen);\n+ }\n+\n+ private static MatrixBlock combine(final Map<MatrixIndexes, MatrixBlock> m, final MatrixIndexes lookup,\n+ final int rlen, final int clen, final int blen) {\nif(rlen < blen) // Shortcut, in case file only contains one block in r length.\n- return CombiningColumnGroups(m, rlen, clen, blen);\n+ return CombiningColumnGroups(m, lookup, rlen, clen, blen);\n+\nfinal CompressionType[] colTypes = new CompressionType[clen];\n- final MatrixIndexes lookup = new MatrixIndexes();\n// Look through the first blocks in to the top.\nfor(int bc = 0; bc * blen < clen; bc++) {\nlookup.setIndexes(1, bc + 1); // get first blocks\n@@ -119,10 +126,11 @@ public class CLALibCombine {\n}\n}\n- return CombiningColumnGroups(m, rlen, clen, blen);\n+ return CombiningColumnGroups(m, lookup, rlen, clen, blen);\n}\n- private static MatrixBlock combineViaDecompression(Map<MatrixIndexes, MatrixBlock> m, int rlen, int clen, int blen) {\n+ private static MatrixBlock combineViaDecompression(final Map<MatrixIndexes, MatrixBlock> m, final int rlen,\n+ final int clen, final int blen) {\nfinal MatrixBlock out = new MatrixBlock(rlen, clen, false);\nout.allocateDenseBlock();\nfor(Entry<MatrixIndexes, MatrixBlock> e : m.entrySet()) {\n@@ -140,57 +148,48 @@ public class CLALibCombine {\n}\n// It is known all of the matrices are Compressed and they are non overlapping.\n- private static MatrixBlock CombiningColumnGroups(Map<MatrixIndexes, MatrixBlock> m, int rlen, int clen, int blen) {\n+ private static MatrixBlock CombiningColumnGroups(final Map<MatrixIndexes, MatrixBlock> m, final MatrixIndexes lookup,\n+ final int rlen, final int clen, final int blen) {\n- final AColGroup[] finalCols = new AColGroup[clen];\n- final MatrixIndexes lookup = new MatrixIndexes();\n- for(int bc = 0; bc * blen < clen; bc++) {\n- lookup.setIndexes(1, bc + 1); // get first blocks\n- final MatrixBlock b = m.get(lookup);\n- final CompressedMatrixBlock cmb = (CompressedMatrixBlock) b;\n+ final AColGroup[][] finalCols = new AColGroup[clen][]; // temp array for combining\n+ final int blocksInColumn = (rlen - 1) / blen + 1;\n+ final int nGroups = m.size() / blocksInColumn;\n- final List<AColGroup> gs = cmb.getColGroups();\n- for(AColGroup g : gs) {\n- AColGroup gc = g;\n- if(bc > 0)\n- gc = g.shiftColIndices(bc * blen);\n+ // Add all the blocks into linear structure.\n+ for(int br = 0; br * blen < rlen; br++) {\n+ for(int bc = 0; bc * blen < clen; bc++) {\n+ lookup.setIndexes(br + 1, bc + 
1);\n+ final CompressedMatrixBlock cmb = (CompressedMatrixBlock) m.get(lookup);\n+ for(AColGroup g : cmb.getColGroups()) {\n+ final AColGroup gc = bc > 0 ? g.shiftColIndices(bc * blen) : g;\nfinal int[] cols = gc.getColIndices();\n- finalCols[cols[0]] = gc; // only assign first column of each group.\n+ if(br == 0)\n+ finalCols[cols[0]] = new AColGroup[blocksInColumn];\n+\n+ finalCols[cols[0]][br] = gc;\n}\n}\n-\n- for(int br = 1; br * blen < rlen; br++) {\n- for(int bc = 0; bc * blen < clen; bc++) {\n- lookup.setIndexes(br + 1, bc + 1); // get first blocks\n- final MatrixBlock b = m.get(lookup);\n-\n- final CompressedMatrixBlock cmb = (CompressedMatrixBlock) b;\n-\n- final List<AColGroup> gs = cmb.getColGroups();\n- for(AColGroup g : gs) {\n- AColGroup gc = g;\n- if(bc > 0)\n- gc = g.shiftColIndices(bc * blen);\n- final int[] cols = gc.getColIndices();\n- AColGroup prev = finalCols[cols[0]];\n- AColGroup comb = prev.append(gc);\n- if(comb == null) {\n- LOG.warn(\"Combining of columns from group: \" + prev.getClass().getSimpleName() + \" and \"\n- + gc.getClass().getSimpleName() + \" was non trivial, therefore falling back to decompression\");\n- return combineViaDecompression(m, rlen, clen, blen);\n}\n- finalCols[cols[0]] = comb;\n+ final List<AColGroup> finalGroups = new ArrayList<>(nGroups);\n+ for(AColGroup[] colOfGroups : finalCols) {\n+ if(colOfGroups != null) { // skip null entries\n+ final AColGroup combined = combineN(colOfGroups);\n+ if(combined == null) {\n+ LOG.warn(\"Combining of columns from group failed: \" + Arrays.toString(colOfGroups));\n+ return combineViaDecompression(m, rlen, clen, blen);\n}\n+ finalGroups.add(combined);\n}\n}\n- List<AColGroup> finalGroups = new ArrayList<>();\n-\n- for(AColGroup g : finalCols)\n- if(g != null)\n- finalGroups.add(g);\n-\nreturn new CompressedMatrixBlock(rlen, clen, -1, false, finalGroups);\n}\n+\n+ private static AColGroup combineN(AColGroup[] groups) {\n+ AColGroup r = groups[0];\n+ for(int i = 1; i < groups.length && r != null; i++)\n+ r = r.append(groups[i]);\n+ return r;\n+ }\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Disable Hadoop Write compression |
49,706 | 19.10.2022 17:02:34 | -7,200 | 657a1bacca9a83dba1632df156979fec7bfaee5a | N Way combining Interface | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/AColGroup.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/AColGroup.java",
"diff": "@@ -611,10 +611,34 @@ public abstract class AColGroup implements Serializable {\n* If it is not possible or very inefficient null is returned.\n*\n* @param g The other column group\n- * @return A combined column group\n+ * @return A combined column group or null\n*/\npublic abstract AColGroup append(AColGroup g);\n+ /**\n+ * Append all column groups in the list provided together in one go allocating the output once.\n+ *\n+ * If it is not possible or very inefficient null is returned.\n+ *\n+ * @param groups The groups to combine.\n+ * @return A combined column group or null\n+ */\n+ public static AColGroup appendN(AColGroup[] groups) {\n+ return groups[0].appendNInternal(groups);\n+ }\n+\n+ /**\n+ * Append all column groups in the list provided together with this.\n+ *\n+ * A Important detail is the first entry in the group == this, and should not be appended twice.\n+ *\n+ * If it is not possible or very inefficient null is returned.\n+ *\n+ * @param groups The groups to combine.\n+ * @return A combined column group or null\n+ */\n+ protected abstract AColGroup appendNInternal(AColGroup[] groups);\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupConst.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupConst.java",
"diff": "@@ -21,6 +21,7 @@ package org.apache.sysds.runtime.compress.colgroup;\nimport java.io.DataInput;\nimport java.io.IOException;\n+import java.util.Arrays;\nimport org.apache.sysds.runtime.compress.DMLCompressionException;\nimport org.apache.sysds.runtime.compress.colgroup.dictionary.ADictionary;\n@@ -527,6 +528,14 @@ public class ColGroupConst extends ADictBasedColGroup {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ for(int i = 0; i < g.length; i++)\n+ if(!Arrays.equals(_colIndexes, g[i]._colIndexes) || !this._dict.eq(((ColGroupConst) g[i])._dict))\n+ return null;\n+ return this;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDC.java",
"diff": "@@ -506,6 +506,12 @@ public class ColGroupDDC extends APreAgg {\nreturn null;\n}\n+\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDCFOR.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupDDCFOR.java",
"diff": "@@ -445,6 +445,11 @@ public class ColGroupDDCFOR extends AMorphingMMColGroup {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupEmpty.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupEmpty.java",
"diff": "@@ -319,4 +319,12 @@ public class ColGroupEmpty extends AColGroupCompressed {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ for(int i = 0; i < g.length; i++)\n+ if(!Arrays.equals(_colIndexes, g[i]._colIndexes))\n+ return null;\n+ return this;\n+ }\n+\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupLinearFunctional.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupLinearFunctional.java",
"diff": "@@ -670,4 +670,9 @@ public class ColGroupLinearFunctional extends AColGroupCompressed {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupOLE.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupOLE.java",
"diff": "@@ -660,4 +660,9 @@ public class ColGroupOLE extends AColGroupOffset {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupRLE.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupRLE.java",
"diff": "@@ -972,6 +972,11 @@ public class ColGroupRLE extends AColGroupOffset {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"diff": "@@ -582,6 +582,11 @@ public class ColGroupSDC extends ASDC {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCFOR.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCFOR.java",
"diff": "@@ -444,6 +444,11 @@ public class ColGroupSDCFOR extends ASDC {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingle.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingle.java",
"diff": "@@ -581,6 +581,11 @@ public class ColGroupSDCSingle extends ASDC {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingleZeros.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingleZeros.java",
"diff": "@@ -814,6 +814,11 @@ public class ColGroupSDCSingleZeros extends ASDCZero {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCZeros.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCZeros.java",
"diff": "@@ -725,6 +725,11 @@ public class ColGroupSDCZeros extends ASDCZero {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupUncompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupUncompressed.java",
"diff": "@@ -775,6 +775,11 @@ public class ColGroupUncompressed extends AColGroup {\nreturn null;\n}\n+ @Override\n+ public AColGroup appendNInternal(AColGroup[] g) {\n+ return null;\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibCombine.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/lib/CLALibCombine.java",
"diff": "@@ -187,9 +187,6 @@ public class CLALibCombine {\n}\nprivate static AColGroup combineN(AColGroup[] groups) {\n- AColGroup r = groups[0];\n- for(int i = 1; i < groups.length && r != null; i++)\n- r = r.append(groups[i]);\n- return r;\n+ return AColGroup.appendN(groups);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/colgroup/ColGroupNegativeTests.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/colgroup/ColGroupNegativeTests.java",
"diff": "@@ -360,6 +360,12 @@ public class ColGroupNegativeTests {\n// TODO Auto-generated method stub\nreturn null;\n}\n+\n+ @Override\n+ protected AColGroup appendNInternal(AColGroup[] groups) {\n+ // TODO Auto-generated method stub\n+ return null;\n+ }\n}\nprivate class FakeDictBasedColGroup extends ADictBasedColGroup {\n@@ -572,5 +578,11 @@ public class ColGroupNegativeTests {\n// TODO Auto-generated method stub\nreturn null;\n}\n+\n+ @Override\n+ protected AColGroup appendNInternal(AColGroup[] groups) {\n+ // TODO Auto-generated method stub\n+ return null;\n+ }\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3445] N Way combining Interface |
49,706 | 20.10.2022 17:54:02 | -7,200 | 451e84c538abf1f4bc274cdb2ee3388ea49915bf | [MINOR] Fix rand instruction to return empty matrix if sparsity == 0.0
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToBit.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/mapping/MapToBit.java",
"diff": "@@ -385,16 +385,4 @@ public class MapToBit extends AMapToData {\nreturn new MapToBit(getUnique(), retBS, p);\n}\n-\n- private static String bl(long l) {\n- int lead = Long.numberOfLeadingZeros(l);\n- if(lead == 64)\n- return \"0000000000000000000000000000000000000000000000000000000000000000\";\n- StringBuilder sb = new StringBuilder(64);\n- for(int i = 0; i < lead; i++) {\n- sb.append('0');\n- }\n- sb.append(Long.toBinaryString(l));\n- return sb.toString();\n- }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/utils/DoubleCountHashMap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/utils/DoubleCountHashMap.java",
"diff": "@@ -231,39 +231,9 @@ public class DoubleCountHashMap {\n}\nprivate final int hashIndex(final double key) {\n-\n- // previous require pow2 size.:\n- // long bits = Double.doubleToRawLongBits(key);\n- // int h =(int)( bits ^ (bits >>> 32));\n- // h = h ^ (h >>> 20) ^ (h >>> 12);\n- // h = h ^ (h >>> 7) ^ (h >>> 4);\n- // return h & (_data.length - 1);\n- // 100.809.414.955 instructions\n-\n// Option 1 ... conflict on 1 vs -1\n- long bits = Double.doubleToLongBits(key);\n+ final long bits = Double.doubleToLongBits(key);\nreturn Math.abs((int)(bits ^ (bits >>> 32)) % _data.length);\n- // 102.356.926.448 instructions\n-\n- // Option 2\n- // long bits = Double.doubleToRawLongBits(key);\n- // return (int) ((bits ^ (bits >> 32) % _data.length));\n-\n-\n- // basic double hash code (w/o object creation)\n- // return Double.hashCode(key) % _data.length;\n- // return (int) ((bits ^ (bits >>> 32)) % _data.length);\n- // long bits = Double.doubleToLongBits(key);\n- // return (int) Long.remainderUnsigned(bits, (long) _data.length);\n- // long bits = Double.doubleToLongBits(key);\n- // long bits = Double.doubleToRawLongBits(key);\n- // return (int) (bits % (long) _data.length);\n-\n- // return h;\n-\n- // This function ensures that hashCodes that differ only by\n- // constant multiples at each bit position have a bounded\n- // number of collisions (approximately 8 at default load factor).\n}\n// private static int indexFor(int h, int length) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/DataGenCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/DataGenCPInstruction.java",
"diff": "@@ -411,12 +411,15 @@ public class DataGenCPInstruction extends UnaryCPInstruction {\nlong lcols = ec.getScalarInput(cols).getLongValue();\ncheckValidDimensions(lrows, lcols);\n+ if(sparsity == 0.0 && lrows < Integer.MAX_VALUE && lcols < Integer.MAX_VALUE)\n+ return new MatrixBlock((int)lrows,(int)lcols, 0.0);\n+\nif(ConfigurationManager.isCompressionEnabled() && minValue == maxValue && sparsity == 1.0) {\n// contains constant\nif(lrows > 1000 && lcols > 0 && lrows / lcols > 1)\nreturn CompressedMatrixBlockFactory.createConstant((int)lrows, (int)lcols, minValue);\n- else\n+\nreturn MatrixBlock.randOperations(getGenerator(lrows, lcols), lSeed, numThreads);\n}\nelse\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix rand instruction to return empty matrix if sparsity == 0.0
Closes #1706 |
49,689 | 24.10.2022 11:43:39 | -7,200 | 8ac9af09bcc6a9dd6de818544fde32e4ec0add90 | Configuration flags for Prefetch, Broadcast
This patch adds configuration flags for Prefetch, Broadcast and
other instructions that asynchronously trigger Spark executions. | [
{
"change_type": "MODIFY",
"old_path": "conf/SystemDS-config.xml.template",
"new_path": "conf/SystemDS-config.xml.template",
"diff": "<!-- set memory manager (static, unified) -->\n<sysds.caching.memorymanager>static</sysds.caching.memorymanager>\n+\n+ <!-- Asynchronously trigger prefetch (Spark intermediate) -->\n+ <sysds.async.prefetch>false</sysds.async.prefetch>\n+\n+ <!-- Asynchronously trigger broadcast (CP intermediate) -->\n+ <sysds.async.broadcast>false</sysds.async.broadcast>\n+\n</root>\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/ConfigurationManager.java",
"new_path": "src/main/java/org/apache/sysds/conf/ConfigurationManager.java",
"diff": "@@ -21,6 +21,7 @@ package org.apache.sysds.conf;\nimport org.apache.hadoop.mapred.JobConf;\nimport org.apache.sysds.conf.CompilerConfig.ConfigType;\n+import org.apache.sysds.hops.OptimizerUtils;\nimport org.apache.sysds.lops.Compression.CompressConfig;\n/**\n@@ -231,6 +232,16 @@ public class ConfigurationManager\nreturn getDMLConfig().getBooleanValue(DMLConfig.FEDERATED_READCACHE);\n}\n+ public static boolean isPrefetchEnabled() {\n+ return (getDMLConfig().getBooleanValue(DMLConfig.ASYNC_SPARK_PREFETCH)\n+ || OptimizerUtils.ASYNC_PREFETCH_SPARK);\n+ }\n+\n+ public static boolean isBroadcastEnabled() {\n+ return (getDMLConfig().getBooleanValue(DMLConfig.ASYNC_SPARK_BROADCAST)\n+ || OptimizerUtils.ASYNC_BROADCAST_SPARK);\n+ }\n+\n///////////////////////////////////////\n// Thread-local classes\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"new_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"diff": "@@ -127,7 +127,9 @@ public class DMLConfig\npublic static final String FEDERATED_MONITOR_FREQUENCY = \"sysds.federated.monitorFreq\";\npublic static final int DEFAULT_FEDERATED_PORT = 4040; // borrowed default Spark Port\npublic static final int DEFAULT_NUMBER_OF_FEDERATED_WORKER_THREADS = 8;\n-\n+ /** Asynchronous triggering of Spark OPs and operator placement **/\n+ public static final String ASYNC_SPARK_PREFETCH = \"sysds.async.prefetch\"; // boolean: enable asynchronous prefetching spark intermediates\n+ public static final String ASYNC_SPARK_BROADCAST = \"sysds.async.broadcast\"; // boolean: enable asynchronous broadcasting CP intermediates\n//internal config\npublic static final String DEFAULT_SHARED_DIR_PERMISSION = \"777\"; //for local fs and DFS\n@@ -198,6 +200,8 @@ public class DMLConfig\n_defaultVals.put(FEDERATED_READCACHE, \"true\"); // vcores\n_defaultVals.put(FEDERATED_MONITOR_FREQUENCY, \"3\");\n_defaultVals.put(PRIVACY_CONSTRAINT_MOCK, null);\n+ _defaultVals.put(ASYNC_SPARK_PREFETCH, \"false\" );\n+ _defaultVals.put(ASYNC_SPARK_BROADCAST, \"false\" );\n}\npublic DMLConfig() {\n@@ -450,7 +454,7 @@ public class DMLConfig\nPRINT_GPU_MEMORY_INFO, AVAILABLE_GPUS, SYNCHRONIZE_GPU, EAGER_CUDA_FREE, FLOATING_POINT_PRECISION,\nGPU_EVICTION_POLICY, LOCAL_SPARK_NUM_THREADS, EVICTION_SHADOW_BUFFERSIZE, GPU_MEMORY_ALLOCATOR,\nGPU_MEMORY_UTILIZATION_FACTOR, USE_SSL_FEDERATED_COMMUNICATION, DEFAULT_FEDERATED_INITIALIZATION_TIMEOUT,\n- FEDERATED_TIMEOUT, FEDERATED_MONITOR_FREQUENCY\n+ FEDERATED_TIMEOUT, FEDERATED_MONITOR_FREQUENCY, ASYNC_SPARK_PREFETCH, ASYNC_SPARK_BROADCAST\n};\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/OptimizerUtils.java",
"new_path": "src/main/java/org/apache/sysds/hops/OptimizerUtils.java",
"diff": "@@ -264,6 +264,7 @@ public class OptimizerUtils\n*/\npublic static boolean ALLOW_SCRIPT_LEVEL_COMPRESS_COMMAND = false;\n+\n/**\n* Boolean specifying if compression rewrites is allowed. This is disabled at run time if the IPA for Workload aware compression\n* is activated.\n@@ -281,7 +282,8 @@ public class OptimizerUtils\n* transformations, which would would otherwise make the next instruction wait till completion. Broadcast allows\n* asynchronously transferring the data to all the nodes.\n*/\n- public static boolean ASYNC_TRIGGER_RDD_OPERATIONS = false;\n+ public static boolean ASYNC_PREFETCH_SPARK = false;\n+ public static boolean ASYNC_BROADCAST_SPARK = false;\n//////////////////////\n// Optimizer levels //\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/compile/Dag.java",
"new_path": "src/main/java/org/apache/sysds/lops/compile/Dag.java",
"diff": "@@ -35,6 +35,7 @@ import org.apache.sysds.common.Types.ExecType;\nimport org.apache.sysds.common.Types.FileFormat;\nimport org.apache.sysds.common.Types.OpOp1;\nimport org.apache.sysds.common.Types.OpOpData;\n+import org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.conf.DMLConfig;\nimport org.apache.sysds.hops.AggBinaryOp.SparkAggType;\nimport org.apache.sysds.hops.OptimizerUtils;\n@@ -195,8 +196,8 @@ public class Dag<N extends Lop>\nList<Lop> node_v = ILinearize.linearize(nodes);\n// add Prefetch and broadcast lops, if necessary\n- List<Lop> node_pf = OptimizerUtils.ASYNC_TRIGGER_RDD_OPERATIONS ? addPrefetchLop(node_v) : node_v;\n- List<Lop> node_bc = OptimizerUtils.ASYNC_TRIGGER_RDD_OPERATIONS ? addBroadcastLop(node_pf) : node_pf;\n+ List<Lop> node_pf = ConfigurationManager.isPrefetchEnabled() ? addPrefetchLop(node_v) : node_v;\n+ List<Lop> node_bc = ConfigurationManager.isBroadcastEnabled() ? addBroadcastLop(node_pf) : node_pf;\n// TODO: Merge via a single traversal of the nodes\nprefetchFederated(node_bc);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/stats/SparkStatistics.java",
"new_path": "src/main/java/org/apache/sysds/utils/stats/SparkStatistics.java",
"diff": "@@ -123,7 +123,6 @@ public class SparkStatistics {\nparallelizeTime.longValue()*1e-9,\nbroadcastTime.longValue()*1e-9,\ncollectTime.longValue()*1e-9));\n- if (OptimizerUtils.ASYNC_TRIGGER_RDD_OPERATIONS)\nsb.append(\"Spark async. count (pf,bc,tr): \\t\" +\nString.format(\"%d/%d/%d.\\n\", getAsyncPrefetchCount(), getAsyncBroadcastCount(), getAsyncTriggerRemoteCount()));\nreturn sb.toString();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/AsyncBroadcastTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/AsyncBroadcastTest.java",
"diff": "@@ -88,9 +88,9 @@ public class AsyncBroadcastTest extends AutomatedTestBase {\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\nHashMap<MatrixValue.CellIndex, Double> R = readDMLScalarFromOutputDir(\"R\");\n- OptimizerUtils.ASYNC_TRIGGER_RDD_OPERATIONS = true;\n+ OptimizerUtils.ASYNC_BROADCAST_SPARK = true;\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n- OptimizerUtils.ASYNC_TRIGGER_RDD_OPERATIONS = false;\n+ OptimizerUtils.ASYNC_BROADCAST_SPARK = false;\nHashMap<MatrixValue.CellIndex, Double> R_bc = readDMLScalarFromOutputDir(\"R\");\n//compare matrices\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/PrefetchRDDTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/PrefetchRDDTest.java",
"diff": "@@ -96,9 +96,9 @@ public class PrefetchRDDTest extends AutomatedTestBase {\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\nHashMap<MatrixValue.CellIndex, Double> R = readDMLScalarFromOutputDir(\"R\");\n- OptimizerUtils.ASYNC_TRIGGER_RDD_OPERATIONS = true;\n+ OptimizerUtils.ASYNC_PREFETCH_SPARK = true;\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n- OptimizerUtils.ASYNC_TRIGGER_RDD_OPERATIONS = false;\n+ OptimizerUtils.ASYNC_PREFETCH_SPARK = false;\nHashMap<MatrixValue.CellIndex, Double> R_pf = readDMLScalarFromOutputDir(\"R\");\n//compare matrices\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3088] Configuration flags for Prefetch, Broadcast
This patch adds configuration flags for Prefetch, Broadcast and
other instructions that asynchronously trigger Spark executions. |
49,706 | 21.10.2022 15:45:04 | -7,200 | 77834df53f208e84dcafe20658d6a09ba6ffe1fc | CLA Append N SDC
This commit adds a generic approach to append N for SDC. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/CompressedMatrixBlockFactory.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/CompressedMatrixBlockFactory.java",
"diff": "@@ -268,13 +268,14 @@ public class CompressedMatrixBlockFactory {\nif(mb instanceof CompressedMatrixBlock) // Redundant compression\nreturn returnSelf();\n- else if(mb.isEmpty()) // empty input return empty compression\n- return createEmpty();\n_stats.denseSize = MatrixBlock.estimateSizeInMemory(mb.getNumRows(), mb.getNumColumns(), 1.0);\n_stats.originalSize = mb.getInMemorySize();\n_stats.originalCost = costEstimator.getCost(mb);\n+ if(mb.isEmpty()) // empty input return empty compression\n+ return createEmpty();\n+\nres = new CompressedMatrixBlock(mb); // copy metadata and allocate soft reference\nclassifyPhase();\n@@ -506,7 +507,7 @@ public class CompressedMatrixBlockFactory {\nLOG.debug(\nString.format(\"--relative cost: %1.4f\", (_stats.compressedCost / _stats.originalCost)));\n}\n- if(compressionGroups.getInfo().size() < 1000) {\n+ if(compressionGroups != null && compressionGroups.getInfo().size() < 1000) {\nint[] lengths = new int[res.getColGroups().size()];\nint i = 0;\nfor(AColGroup colGroup : res.getColGroups())\n@@ -546,11 +547,16 @@ public class CompressedMatrixBlockFactory {\nprivate Pair<MatrixBlock, CompressionStatistics> createEmpty() {\nLOG.info(\"Empty input to compress, returning a compressed Matrix block with empty column group\");\n- CompressedMatrixBlock ret = new CompressedMatrixBlock(mb.getNumRows(), mb.getNumColumns());\n+ res = new CompressedMatrixBlock(mb.getNumRows(), mb.getNumColumns());\nColGroupEmpty cg = ColGroupEmpty.create(mb.getNumColumns());\n- ret.allocateColGroup(cg);\n- ret.setNonZeros(0);\n- return new ImmutablePair<>(ret, null);\n+ res.allocateColGroup(cg);\n+ res.setNonZeros(0);\n+ _stats.compressedSize = res.getInMemorySize();\n+ _stats.compressedCost = costEstimator.getCost(res.getColGroups(), res.getNumRows());\n+ _stats.setColGroupsCounts(res.getColGroups());\n+ phase = 4;\n+ logPhase();\n+ return new ImmutablePair<>(res, _stats);\n}\nprivate Pair<MatrixBlock, CompressionStatistics> returnSelf() {\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/AOffsetsGroup.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.compress.colgroup;\n+\n+import org.apache.sysds.runtime.compress.colgroup.offset.AOffset;\n+\n+public interface AOffsetsGroup {\n+\n+ public AOffset getOffsets();\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ASDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ASDC.java",
"diff": "@@ -29,7 +29,7 @@ import org.apache.sysds.runtime.compress.colgroup.offset.AOffset;\n* This column group is handy in cases where sparse unsafe operations is executed on very sparse columns. Then the zeros\n* would be materialized in the group without any overhead.\n*/\n-public abstract class ASDC extends AMorphingMMColGroup {\n+public abstract class ASDC extends AMorphingMMColGroup implements AOffsetsGroup {\nprivate static final long serialVersionUID = 769993538831949086L;\n/** Sparse row indexes for the data */\n@@ -49,6 +49,7 @@ public abstract class ASDC extends AMorphingMMColGroup {\npublic abstract double[] getDefaultTuple();\n+ @Override\npublic AOffset getOffsets() {\nreturn _indexes;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ASDCZero.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ASDCZero.java",
"diff": "@@ -27,7 +27,7 @@ import org.apache.sysds.runtime.data.DenseBlock;\nimport org.apache.sysds.runtime.data.SparseBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n-public abstract class ASDCZero extends APreAgg {\n+public abstract class ASDCZero extends APreAgg implements AOffsetsGroup {\nprivate static final long serialVersionUID = -69266306137398807L;\n/** Sparse row indexes for the data */\n@@ -41,6 +41,10 @@ public abstract class ASDCZero extends APreAgg {\n_numRows = numRows;\n}\n+ public int getNumRows() {\n+ return _numRows;\n+ }\n+\n@Override\npublic final void leftMultByMatrixNoPreAgg(MatrixBlock matrix, MatrixBlock result, int rl, int ru, int cl, int cu) {\nfinal AIterator it = _indexes.getIterator(cl);\n@@ -205,4 +209,9 @@ public abstract class ASDCZero extends APreAgg {\npublic AIterator getIterator(int row) {\nreturn _indexes.getIterator(row);\n}\n+\n+ @Override\n+ public AOffset getOffsets() {\n+ return _indexes;\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"diff": "@@ -51,7 +51,7 @@ import org.apache.sysds.runtime.matrix.operators.UnaryOperator;\n* This column group is handy in cases where sparse unsafe operations is executed on very sparse columns. Then the zeros\n* would be materialized in the group without any overhead.\n*/\n-public class ColGroupSDC extends ASDC {\n+public class ColGroupSDC extends ASDC implements AMapToDataGroup {\nprivate static final long serialVersionUID = 769993538831949086L;\n/** Pointers to row indexes in the dictionary. */\n@@ -108,7 +108,8 @@ public class ColGroupSDC extends ASDC {\nreturn _defaultTuple;\n}\n- public AMapToData getMapping() {\n+ @Override\n+ public AMapToData getMapToData() {\nreturn _data;\n}\n@@ -584,8 +585,31 @@ public class ColGroupSDC extends ASDC {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n+ int sumRows = 0;\n+ for(int i = 1; i < g.length; i++) {\n+ if(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\n+ LOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n+ + Arrays.toString(g[i]._colIndexes));\n+ return null;\n+ }\n+\n+ if(!(g[i] instanceof ColGroupSDC)) {\n+ LOG.warn(\"Not SDC but \" + g[i].getClass().getSimpleName());\n+ return null;\n+ }\n+\n+ final ColGroupSDC gc = (ColGroupSDC) g[i];\n+ if(!gc._dict.eq(_dict)) {\n+ LOG.warn(\"Not same Dictionaries therefore not appending \\n\" + _dict + \"\\n\\n\" + gc._dict);\nreturn null;\n}\n+ sumRows += gc.getNumRows();\n+ }\n+ AMapToData nd = _data.appendN(Arrays.copyOf(g, g.length, AMapToDataGroup[].class));\n+ AOffset no = _indexes.appendN(Arrays.copyOf(g, g.length, AOffsetsGroup[].class));\n+\n+ return create(_colIndexes, sumRows, _dict, _defaultTuple, no, nd, null);\n+ }\n@Override\npublic String toString() {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCFOR.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCFOR.java",
"diff": "@@ -446,9 +446,31 @@ public class ColGroupSDCFOR extends ASDC {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n+ int sumRows = 0;\n+ for(int i = 1; i < g.length; i++) {\n+ if(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\n+ LOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n+ + Arrays.toString(g[i]._colIndexes));\nreturn null;\n}\n+ if(!(g[i] instanceof ColGroupSDCFOR)) {\n+ LOG.warn(\"Not SDCFOR but \" + g[i].getClass().getSimpleName());\n+ return null;\n+ }\n+\n+ final ColGroupSDCFOR gc = (ColGroupSDCFOR) g[i];\n+ if(!gc._dict.eq(_dict)) {\n+ LOG.warn(\"Not same Dictionaries therefore not appending \\n\" + _dict + \"\\n\\n\" + gc._dict);\n+ return null;\n+ }\n+ sumRows += gc.getNumRows();\n+ }\n+ AMapToData nd = _data.appendN(Arrays.copyOf(g, g.length, AMapToDataGroup[].class));\n+ AOffset no = _indexes.appendN(Arrays.copyOf(g, g.length, AOffsetsGroup[].class));\n+ return create(_colIndexes, sumRows, _dict, no, nd, null, _reference);\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingle.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingle.java",
"diff": "@@ -583,9 +583,30 @@ public class ColGroupSDCSingle extends ASDC {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n+ int sumRows = 0;\n+ for(int i = 1; i < g.length; i++) {\n+ if(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\n+ LOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n+ + Arrays.toString(g[i]._colIndexes));\nreturn null;\n}\n+ if(!(g[i] instanceof ColGroupSDCSingle)) {\n+ LOG.warn(\"Not SDCFOR but \" + g[i].getClass().getSimpleName() );\n+ return null;\n+ }\n+\n+ final ColGroupSDCSingle gc = (ColGroupSDCSingle) g[i];\n+ if(!gc._dict.eq(_dict)) {\n+ LOG.warn(\"Not same Dictionaries therefore not appending \\n\" + _dict + \"\\n\\n\" + gc._dict);\n+ return null;\n+ }\n+ sumRows += gc.getNumRows();\n+ }\n+ AOffset no = _indexes.appendN(Arrays.copyOf(g, g.length, AOffsetsGroup[].class));\n+ return create(_colIndexes, sumRows, _dict, _defaultTuple, no, null);\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingleZeros.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingleZeros.java",
"diff": "@@ -816,9 +816,31 @@ public class ColGroupSDCSingleZeros extends ASDCZero {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n+ int sumRows = 0;\n+ for(int i = 1; i < g.length; i++) {\n+ if(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\n+ LOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n+ + Arrays.toString(g[i]._colIndexes));\nreturn null;\n}\n+ if(!(g[i] instanceof ColGroupSDCSingleZeros)) {\n+ LOG.warn(\"Not SDCFOR but \" + g[i].getClass().getSimpleName());\n+ return null;\n+ }\n+\n+ final ColGroupSDCSingleZeros gc = (ColGroupSDCSingleZeros) g[i];\n+ if(!gc._dict.eq(_dict)) {\n+ LOG.warn(\"Not same Dictionaries therefore not appending \\n\" + _dict + \"\\n\\n\" + gc._dict);\n+ return null;\n+ }\n+ sumRows += gc.getNumRows();\n+ }\n+ AOffset no = _indexes.appendN(Arrays.copyOf(g, g.length, AOffsetsGroup[].class));\n+ return create(_colIndexes, sumRows, _dict, no, null);\n+\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCZeros.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCZeros.java",
"diff": "@@ -727,9 +727,32 @@ public class ColGroupSDCZeros extends ASDCZero {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n+ int sumRows = 0;\n+ for(int i = 1; i < g.length; i++) {\n+ if(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\n+ LOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n+ + Arrays.toString(g[i]._colIndexes));\nreturn null;\n}\n+ if(!(g[i] instanceof ColGroupSDCZeros)) {\n+ LOG.warn(\"Not SDCFOR but \" + g[i].getClass().getSimpleName());\n+ return null;\n+ }\n+\n+ final ColGroupSDCZeros gc = (ColGroupSDCZeros) g[i];\n+ if(!gc._dict.eq(_dict)) {\n+ LOG.warn(\"Not same Dictionaries therefore not appending \\n\" + _dict + \"\\n\\n\" + gc._dict);\n+ return null;\n+ }\n+ sumRows += gc.getNumRows();\n+ }\n+ AMapToData nd = _data.appendN(Arrays.copyOf(g, g.length, AMapToDataGroup[].class));\n+ AOffset no = _indexes.appendN(Arrays.copyOf(g, g.length, AOffsetsGroup[].class));\n+\n+ return create(_colIndexes, sumRows, _dict, no, nd, null);\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/offset/AOffset.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/offset/AOffset.java",
"diff": "@@ -26,6 +26,7 @@ import org.apache.commons.lang.NotImplementedException;\nimport org.apache.commons.logging.Log;\nimport org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.runtime.compress.DMLCompressionException;\n+import org.apache.sysds.runtime.compress.colgroup.AOffsetsGroup;\nimport org.apache.sysds.runtime.compress.colgroup.mapping.AMapToData;\nimport org.apache.sysds.runtime.data.DenseBlock;\nimport org.apache.sysds.runtime.data.SparseBlock;\n@@ -449,6 +450,16 @@ public abstract class AOffset implements Serializable {\nthrow new NotImplementedException();\n}\n+ /**\n+ * Append a list of offsets together in order.\n+ *\n+ * @param g The offsets to append together.\n+ * @return The combined offsets.\n+ */\n+ public AOffset appendN(AOffsetsGroup[] g) {\n+ throw new NotImplementedException();\n+ }\n+\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/CompressUnwrap.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/CompressUnwrap.java",
"diff": "@@ -21,6 +21,7 @@ package org.apache.sysds.runtime.compress.io;\nimport org.apache.spark.api.java.function.Function;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\n+import org.apache.sysds.runtime.compress.CompressedMatrixBlockFactory;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\npublic class CompressUnwrap implements Function<CompressedWriteBlock, MatrixBlock> {\n@@ -31,9 +32,12 @@ public class CompressUnwrap implements Function<CompressedWriteBlock, MatrixBloc\n@Override\npublic MatrixBlock call(CompressedWriteBlock arg0) throws Exception {\n- if(arg0.get() instanceof CompressedMatrixBlock)\n- return new CompressedMatrixBlock((CompressedMatrixBlock) arg0.get());\n+ final MatrixBlock g = arg0.get();\n+ if(g instanceof CompressedMatrixBlock)\n+ return new CompressedMatrixBlock((CompressedMatrixBlock) g);\n+ else if(g.isEmpty())\n+ return CompressedMatrixBlockFactory.createConstant(g.getNumRows(), g.getNumColumns(), 0.0);\nelse\n- return new MatrixBlock(arg0.get());\n+ return new MatrixBlock(g);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/io/ReaderCompressed.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/io/ReaderCompressed.java",
"diff": "@@ -32,6 +32,8 @@ import org.apache.hadoop.io.SequenceFile.Reader;\nimport org.apache.hadoop.mapred.JobConf;\nimport org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.compress.CompressedMatrixBlock;\n+import org.apache.sysds.runtime.compress.CompressedMatrixBlockFactory;\nimport org.apache.sysds.runtime.compress.lib.CLALibCombine;\nimport org.apache.sysds.runtime.io.IOUtilFunctions;\nimport org.apache.sysds.runtime.io.MatrixReader;\n@@ -92,7 +94,14 @@ public final class ReaderCompressed extends MatrixReader {\nCompressedWriteBlock value = new CompressedWriteBlock();\nwhile(reader.next(key, value)) {\n- data.put(key, value.get());\n+ final MatrixBlock g = value.get();\n+\n+ if(g instanceof CompressedMatrixBlock)\n+ data.put(key, g);\n+ else if(g.isEmpty())\n+ data.put(key, CompressedMatrixBlockFactory.createConstant(g.getNumRows(), g.getNumColumns(), 0.0));\n+ else\n+ data.put(key, g);\nkey = new MatrixIndexes();\nvalue = new CompressedWriteBlock();\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/colgroup/ColGroupMorphingPerformanceCompare.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/colgroup/ColGroupMorphingPerformanceCompare.java",
"diff": "@@ -154,7 +154,7 @@ public class ColGroupMorphingPerformanceCompare {\nprivate final MatrixBlock mbDict;\npublic SDCNoMorph(ColGroupSDC g) {\n- this(g.getColIndices(), g.getNumRows(), g.getDictionary(), g.getDefaultTuple(), g.getOffsets(), g.getMapping(),\n+ this(g.getColIndices(), g.getNumRows(), g.getDictionary(), g.getDefaultTuple(), g.getOffsets(), g.getMapToData(),\nnull);\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3453] CLA Append N SDC
This commit adds a generic approach to append N for SDC. |
49,706 | 27.10.2022 18:34:08 | -7,200 | 387bc2cd3075acec3ab209c6dcd3606115e0ee5f | MatrixBlock equals Sparse Specialization
This commit adds support for efficient sparse comparison of equality.
Individual kernels now support sparse-sparse and sparse-dense comparisons,
with, of course, the best performance if both sides are allocated in the
same way.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/data/Block.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.data;\n+\n+public interface Block {\n+ /**\n+ * Get value of matrix cell (r,c). In case of non existing values this call returns 0.\n+ *\n+ * @param r row index starting at 0\n+ * @param c column index starting at 0\n+ * @return value of cell at position (r,c)\n+ */\n+ public abstract double get(int r, int c);\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/data/DenseBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/data/DenseBlock.java",
"diff": "@@ -35,7 +35,7 @@ import org.apache.sysds.utils.MemoryEstimates;\n* one or many contiguous rows.\n*\n*/\n-public abstract class DenseBlock implements Serializable\n+public abstract class DenseBlock implements Serializable, Block\n{\nprivate static final long serialVersionUID = 7517220490270237832L;\n@@ -188,6 +188,15 @@ public abstract class DenseBlock implements Serializable\nreturn _rlen;\n}\n+ /**\n+ * Get the number of columns / first dimension\n+ *\n+ * @return number of columns\n+ */\n+ public final int numCols(){\n+ return _odims[0];\n+ }\n+\n/**\n* Get the number of dimensions.\n*\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/data/SparseBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/data/SparseBlock.java",
"diff": "package org.apache.sysds.runtime.data;\nimport java.io.Serializable;\n+import java.util.Arrays;\nimport java.util.Iterator;\nimport org.apache.sysds.runtime.matrix.data.IJV;\n@@ -35,7 +36,7 @@ import org.apache.sysds.runtime.matrix.data.IJV;\n* CSR, MCSR, and - with performance drawbacks - COO.\n*\n*/\n-public abstract class SparseBlock implements Serializable\n+public abstract class SparseBlock implements Serializable, Block\n{\nprivate static final long serialVersionUID = -5008747088111141395L;\n@@ -535,6 +536,85 @@ public abstract class SparseBlock implements Serializable\n@Override\npublic abstract String toString();\n+\n+ @Override\n+ public boolean equals(Object o) {\n+ if(o instanceof SparseBlock)\n+ return equals((SparseBlock) o, Double.MIN_NORMAL * 1024);\n+ return false;\n+ }\n+\n+ /**\n+ * Verify if the values in this sparse block is equivalent to that sparse block, not taking into account the\n+ * dimensions of the contained values.\n+ *\n+ * @param o Other block\n+ * @param eps Epsilon allowed\n+ * @return If the blocs are equivalent.\n+ */\n+ public boolean equals(SparseBlock o, double eps) {\n+ for(int r = 0; r < numRows(); r++){\n+ if(isEmpty(r) != o.isEmpty(r))\n+ return false;\n+ if(isEmpty(r))\n+ continue;\n+\n+ final int apos = pos(r);\n+ final int alen = apos + size(r);\n+\n+ final int aposO = o.pos(r);\n+ final int alenO = aposO + o.size(r);\n+\n+ if(! Arrays.equals(indexes(r), apos, alen, o.indexes(r), aposO, alenO))\n+ return false;\n+ if(! Arrays.equals(values(r), apos, alen, o.values(r), aposO, alenO))\n+ return false;\n+ }\n+ return true;\n+ }\n+\n+ /**\n+ * Get if the dense double array is equivalent to this sparse Block.\n+ *\n+ * @param denseValues row major double values same dimensions of sparse Block.\n+ * @param nCol Number of columns in dense values (and hopefully in this sparse block)\n+ * @param eps Epsilon allowed to be off. Note we treat zero differently and it must be zero.\n+ * @return If the dense array is equivalent\n+ */\n+ public boolean equals(double[] denseValues, int nCol, double eps) {\n+ for(int r = 0; r < numRows(); r++) {\n+ final int off = r * nCol;\n+ final int offEnd = off + nCol;\n+ if(isEmpty(r)) {\n+ // all in row should be zero.\n+ for(int i = off; i < offEnd; i++)\n+ if(denseValues[i] != 0)\n+ return false;\n+ }\n+ else {\n+ final int apos = pos(r);\n+ final int alen = apos + size(r);\n+ final double[] avals = values(r);\n+ final int[] aix = indexes(r);\n+ int j = apos;\n+ int i = off;\n+ for(int k = 0; i < offEnd && j < alen; i++, k++) {\n+ if(aix[j] == k) {\n+ if(Math.abs(denseValues[i] - avals[j]) > eps)\n+ return false;\n+ j++;\n+ }\n+ else if(denseValues[i] != 0.0)\n+ return false;\n+ }\n+ for(; i < offEnd; i++)\n+ if(denseValues[i] != 0)\n+ return false;\n+ }\n+ }\n+ return true;\n+ }\n+\n/**\n* Default sparse block iterator implemented against the sparse block\n* api in an implementation-agnostic manner.\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixEquals.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixEquals.java",
"diff": "@@ -148,6 +148,13 @@ public class LibMatrixEquals {\nif(a.denseBlock != null && b.denseBlock != null)\nreturn a.denseBlock.equals(b.denseBlock, eps);\n+ if(a.sparseBlock != null && b.sparseBlock != null)\n+ return a.sparseBlock.equals(b.sparseBlock, eps);\n+ if(a.sparseBlock != null && b.denseBlock != null && b.denseBlock.isContiguous())\n+ return a.sparseBlock.equals(b.denseBlock.values(0), b.getNumColumns(), eps);\n+ if(b.sparseBlock != null && a.denseBlock != null && a.denseBlock.isContiguous())\n+ return b.sparseBlock.equals(a.denseBlock.values(0), a.getNumColumns(), eps);\n+\nreturn genericEquals();\n}\n@@ -195,7 +202,6 @@ public class LibMatrixEquals {\nLOG.warn(\"Using generic equals, potential optimizations are possible\");\nfinal int rows = a.getNumRows();\nfinal int cols = a.getNumColumns();\n-\nfor(int i = 0; i < rows; i++)\nfor(int j = 0; j < cols; j++)\nif(Math.abs(a.quickGetValue(i, j) - b.quickGetValue(i, j)) > eps)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/MatrixBlock.java",
"diff": "@@ -53,6 +53,7 @@ import org.apache.sysds.runtime.compress.DMLCompressionException;\nimport org.apache.sysds.runtime.controlprogram.caching.CacheBlock;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject.UpdateType;\nimport org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\n+import org.apache.sysds.runtime.data.Block;\nimport org.apache.sysds.runtime.data.DenseBlock;\nimport org.apache.sysds.runtime.data.DenseBlockFP64;\nimport org.apache.sysds.runtime.data.DenseBlockFactory;\n@@ -551,7 +552,6 @@ public class MatrixBlock extends MatrixValue implements CacheBlock, Externalizab\n////////\n// Data handling\n-\npublic DenseBlock getDenseBlock() {\nreturn denseBlock;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/TestUtils.java",
"new_path": "src/test/java/org/apache/sysds/test/TestUtils.java",
"diff": "@@ -65,6 +65,7 @@ import org.apache.hadoop.io.SequenceFile.Writer;\nimport org.apache.sysds.common.Types.FileFormat;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\n+import org.apache.sysds.runtime.data.DenseBlockFP64;\nimport org.apache.sysds.runtime.data.SparseBlock;\nimport org.apache.sysds.runtime.data.TensorBlock;\nimport org.apache.sysds.runtime.functionobjects.Builtin;\n@@ -3519,4 +3520,31 @@ public class TestUtils\n// FIXME: Fails to skip if gpu available but no libraries\nreturn 1; //return false for now\n}\n+\n+ public static MatrixBlock mockNonContiguousMatrix(MatrixBlock db){\n+ db.sparseToDense();\n+ double[] vals = db.getDenseBlockValues();\n+ int[] dims = new int[]{db.getNumRows(), db.getNumColumns()};\n+ MatrixBlock m = new MatrixBlock(db.getNumRows(), db.getNumColumns(), new DenseBlockFP64Mock(dims, vals));\n+ m.setNonZeros(db.getNonZeros());\n+ return m;\n+ }\n+\n+ private static class DenseBlockFP64Mock extends DenseBlockFP64 {\n+ private static final long serialVersionUID = -3601232958390554672L;\n+\n+ public DenseBlockFP64Mock(int[] dims, double[] data) {\n+ super(dims, data);\n+ }\n+\n+ @Override\n+ public boolean isContiguous() {\n+ return false;\n+ }\n+\n+ @Override\n+ public int numBlocks() {\n+ return 2;\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/matrix/EqualsTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/matrix/EqualsTest.java",
"diff": "@@ -288,4 +288,24 @@ public class EqualsTest {\nassertFalse(LibMatrixEquals.equals(m1, m2, 0.00000001));\nassertFalse(LibMatrixEquals.equals(m2, m1, 0.00000001));\n}\n+\n+ @Test\n+ public void testForcedNonContiguousNotEqual() {\n+ MatrixBlock m1 = TestUtils.generateTestMatrixBlock(100, 100, 0, 100, 0.05, 231);\n+ MatrixBlock m2 = TestUtils.mockNonContiguousMatrix( //\n+ TestUtils.generateTestMatrixBlock(100, 100, 0, 100, 0.05, 231));\n+ m1.getSparseBlock().get(13).values()[2] = 1324;\n+ assertFalse(LibMatrixEquals.equals(m1, m2, 0.00000001));\n+ assertFalse(LibMatrixEquals.equals(m2, m1, 0.00000001));\n+ }\n+\n+ @Test\n+ public void testForcedNonContiguousEqual() {\n+ MatrixBlock m1 = TestUtils.generateTestMatrixBlock(100, 100, 0, 100, 0.05, 231);\n+ MatrixBlock m2 = TestUtils.mockNonContiguousMatrix( //\n+ TestUtils.generateTestMatrixBlock(100, 100, 0, 100, 0.05, 231));\n+ assertTrue(LibMatrixEquals.equals(m1, m2, 0.00000001));\n+ assertTrue(LibMatrixEquals.equals(m2, m1, 0.00000001));\n+ }\n+\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3457] MatrixBlock equals Sparse Specialization
This commit adds support for efficient sparse comparison of equality.
Individual kernels now support sparse-sparse and sparse-dense comparisons,
with, of course, the best performance if both sides are allocated in the
same way.
Closes #1713 |
49,706 | 25.10.2022 19:55:18 | -7,200 | 11d07737a61d2142774c89857fb00d3338d6cc1d | [MINOR] Add scheme for empty
Add an empty scheme for empty column groups.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupEmpty.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupEmpty.java",
"diff": "@@ -25,6 +25,7 @@ import java.util.Arrays;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.compress.colgroup.dictionary.Dictionary;\n+import org.apache.sysds.runtime.compress.colgroup.scheme.EmptyScheme;\nimport org.apache.sysds.runtime.compress.colgroup.scheme.ICLAScheme;\nimport org.apache.sysds.runtime.compress.cost.ComputationCostEstimator;\nimport org.apache.sysds.runtime.compress.utils.Util;\n@@ -330,6 +331,6 @@ public class ColGroupEmpty extends AColGroupCompressed {\n@Override\npublic ICLAScheme getCompressionScheme() {\n- return null;\n+ return EmptyScheme.create(this);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDC.java",
"diff": "@@ -586,7 +586,7 @@ public class ColGroupSDC extends ASDC implements AMapToDataGroup {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n- int sumRows = 0;\n+ int sumRows = getNumRows();\nfor(int i = 1; i < g.length; i++) {\nif(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\nLOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n@@ -617,7 +617,6 @@ public class ColGroupSDC extends ASDC implements AMapToDataGroup {\nreturn null;\n}\n-\n@Override\npublic String toString() {\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCFOR.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCFOR.java",
"diff": "@@ -57,7 +57,7 @@ import org.apache.sysds.runtime.matrix.operators.UnaryOperator;\n* with no modifications.\n*\n*/\n-public class ColGroupSDCFOR extends ASDC {\n+public class ColGroupSDCFOR extends ASDC implements AMapToDataGroup {\nprivate static final long serialVersionUID = 3883228464052204203L;\n@@ -116,6 +116,11 @@ public class ColGroupSDCFOR extends ASDC {\nreturn _data.getCounts(counts);\n}\n+ @Override\n+ public AMapToData getMapToData() {\n+ return _data;\n+ }\n+\n@Override\nprotected void computeRowSums(double[] c, int rl, int ru, double[] preAgg) {\nColGroupSDC.computeRowSums(c, rl, ru, preAgg, _data, _indexes, _numRows);\n@@ -447,7 +452,7 @@ public class ColGroupSDCFOR extends ASDC {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n- int sumRows = 0;\n+ int sumRows = getNumRows();\nfor(int i = 1; i < g.length; i++) {\nif(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\nLOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingle.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingle.java",
"diff": "@@ -584,7 +584,7 @@ public class ColGroupSDCSingle extends ASDC {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n- int sumRows = 0;\n+ int sumRows = getNumRows();\nfor(int i = 1; i < g.length; i++) {\nif(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\nLOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingleZeros.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCSingleZeros.java",
"diff": "@@ -817,7 +817,7 @@ public class ColGroupSDCSingleZeros extends ASDCZero {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n- int sumRows = 0;\n+ int sumRows = getNumRows();\nfor(int i = 1; i < g.length; i++) {\nif(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\nLOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n@@ -839,7 +839,6 @@ public class ColGroupSDCSingleZeros extends ASDCZero {\n}\nAOffset no = _indexes.appendN(Arrays.copyOf(g, g.length, AOffsetsGroup[].class), getNumRows());\nreturn create(_colIndexes, sumRows, _dict, no, null);\n-\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCZeros.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/ColGroupSDCZeros.java",
"diff": "@@ -56,7 +56,7 @@ import org.apache.sysds.runtime.matrix.operators.UnaryOperator;\n*\n* This column group is handy in cases where sparse unsafe operations is executed on very sparse columns.\n*/\n-public class ColGroupSDCZeros extends ASDCZero {\n+public class ColGroupSDCZeros extends ASDCZero implements AMapToDataGroup{\nprivate static final long serialVersionUID = -3703199743391937991L;\n/** Pointers to row indexes in the dictionary. Note the dictionary has one extra entry. */\n@@ -89,6 +89,11 @@ public class ColGroupSDCZeros extends ASDCZero {\nreturn ColGroupType.SDCZeros;\n}\n+ @Override\n+ public AMapToData getMapToData(){\n+ return _data;\n+ }\n+\n@Override\nprotected void decompressToDenseBlockDenseDictionary(DenseBlock db, int rl, int ru, int offR, int offC,\ndouble[] values) {\n@@ -728,7 +733,7 @@ public class ColGroupSDCZeros extends ASDCZero {\n@Override\npublic AColGroup appendNInternal(AColGroup[] g) {\n- int sumRows = 0;\n+ int sumRows = getNumRows();\nfor(int i = 1; i < g.length; i++) {\nif(!Arrays.equals(_colIndexes, g[i]._colIndexes)) {\nLOG.warn(\"Not same columns therefore not appending \\n\" + Arrays.toString(_colIndexes) + \"\\n\\n\"\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/Dictionary.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/Dictionary.java",
"diff": "@@ -26,7 +26,6 @@ import java.math.BigDecimal;\nimport java.math.MathContext;\nimport java.util.Arrays;\n-import org.apache.commons.lang.NotImplementedException;\nimport org.apache.sysds.runtime.compress.DMLCompressionException;\nimport org.apache.sysds.runtime.data.SparseBlock;\nimport org.apache.sysds.runtime.functionobjects.Builtin;\n@@ -1073,7 +1072,7 @@ public class Dictionary extends ADictionary {\nelse if(o instanceof MatrixBlockDictionary) {\nfinal MatrixBlock mb = ((MatrixBlockDictionary) o).getMatrixBlock();\nif(mb.isInSparseFormat())\n- throw new NotImplementedException();\n+ return mb.getSparseBlock().equals(_values, mb.getNumColumns());\nfinal double[] dv = mb.getDenseBlockValues();\nreturn Arrays.equals(_values, dv);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/MatrixBlockDictionary.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/MatrixBlockDictionary.java",
"diff": "@@ -2147,10 +2147,10 @@ public class MatrixBlockDictionary extends ADictionary {\n@Override\npublic boolean eq(ADictionary o) {\nif(o instanceof MatrixBlockDictionary)\n- throw new NotImplementedException(\"Comparison if a MatrixBlock is equivalent is not implemented yet\");\n+ return _data.equals(((MatrixBlockDictionary) o)._data);\nelse if(o instanceof Dictionary) {\nif(_data.isInSparseFormat())\n- throw new NotImplementedException();\n+ return _data.getSparseBlock().equals(((Dictionary) o)._values, _data.getNumColumns());\nfinal double[] dv = _data.getDenseBlockValues();\nreturn Arrays.equals(dv, ((Dictionary) o)._values);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/scheme/ConstScheme.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/scheme/ConstScheme.java",
"diff": "@@ -77,7 +77,7 @@ public class ConstScheme implements ICLAScheme {\nif(dv[off + cols[ci]] != values[ci])\nreturn null;\n}\n- return g;\n+ return returnG(cols);\n}\nprivate AColGroup encodeSparse(final MatrixBlock data, final int[] cols, final double[] values, final int nRow,\n@@ -92,7 +92,9 @@ public class ConstScheme implements ICLAScheme {\nfinal double[] aval = sb.values(r);\nfinal int[] aix = sb.indexes(r);\nint p = 0; // pointer into cols;\n- while(p < cols.length && values[p] == 0.0)\n+ while(values[p] == 0.0)\n+ // technically also check for&& p < cols.length\n+ // but this verification is indirectly maintained\np++;\nfor(int j = apos; j < alen && p < cols.length; j++) {\nif(aix[j] == cols[p]) {\n@@ -106,7 +108,7 @@ public class ConstScheme implements ICLAScheme {\nreturn null; // not matching\n}\n}\n- return g;\n+ return returnG(cols);\n}\nprivate AColGroup encodeGeneric(final MatrixBlock data, final int[] cols, final double[] values, final int nRow,\n@@ -115,6 +117,14 @@ public class ConstScheme implements ICLAScheme {\nfor(int ci = 0; ci < cols.length; ci++)\nif(data.quickGetValue(r, cols[ci]) != values[ci])\nreturn null;\n- return g;\n+ return returnG(cols);\n}\n+\n+ private AColGroup returnG(int[] columns) {\n+ if(columns == g.getColIndices())\n+ return g;// great!\n+ else\n+ return ColGroupConst.create(columns, g.getValues());\n+ }\n+\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/scheme/EmptyScheme.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.compress.colgroup.scheme;\n+\n+import org.apache.sysds.runtime.compress.colgroup.AColGroup;\n+import org.apache.sysds.runtime.compress.colgroup.ColGroupEmpty;\n+import org.apache.sysds.runtime.data.SparseBlock;\n+import org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+\n+public class EmptyScheme implements ICLAScheme {\n+ /** The instance of a empty column group that in all cases here would be returned to be the same */\n+ final ColGroupEmpty g;\n+\n+ protected EmptyScheme(ColGroupEmpty g) {\n+ this.g = g;\n+ }\n+\n+ public static EmptyScheme create(ColGroupEmpty g) {\n+ return new EmptyScheme(g);\n+ }\n+\n+ @Override\n+ public AColGroup encode(MatrixBlock data) {\n+ return encode(data, g.getColIndices());\n+ }\n+\n+ @Override\n+ public AColGroup encode(MatrixBlock data, int[] columns) {\n+\n+ if(columns.length != g.getColIndices().length)\n+ throw new IllegalArgumentException(\"Invalid columns to encode\");\n+ final int nCol = data.getNumColumns();\n+ final int nRow = data.getNumRows();\n+ if(nCol < columns[columns.length - 1]) {\n+ LOG.warn(\"Invalid to encode matrix with less columns than encode scheme max column\");\n+ return null;\n+ }\n+ else if(data.isEmpty())\n+ return returnG(columns);\n+ else if(data.isInSparseFormat())\n+ return encodeSparse(data, columns, nRow, nCol);\n+ else if(data.getDenseBlock().isContiguous())\n+ return encodeDense(data, columns, nRow, nCol);\n+ else\n+ return encodeGeneric(data, columns, nRow, nCol);\n+ }\n+\n+ private AColGroup encodeDense(final MatrixBlock data, final int[] cols, final int nRow, final int nCol) {\n+ final double[] dv = data.getDenseBlockValues();\n+ for(int r = 0; r < nRow; r++) {\n+ final int off = r * nCol;\n+ for(int ci = 0; ci < cols.length; ci++)\n+ if(dv[off + cols[ci]] != 0.0)\n+ return null;\n+ }\n+ return g;\n+ }\n+\n+ private AColGroup encodeSparse(final MatrixBlock data, final int[] cols, final int nRow, final int nCol) {\n+ SparseBlock sb = data.getSparseBlock();\n+ for(int r = 0; r < nRow; r++) {\n+ if(sb.isEmpty(r))\n+ continue; // great!\n+\n+ final int apos = sb.pos(r);\n+ final int alen = apos + sb.size(r);\n+ final int[] aix = sb.indexes(r);\n+ int p = 0; // pointer into cols;\n+ for(int j = apos; j < alen ; j++) {\n+ while(p < cols.length && cols[p] < aix[j])\n+ p++;\n+ if(p < cols.length && aix[j] == cols[p])\n+ return null;\n+\n+ if(p >= cols.length)\n+ continue;\n+ }\n+ }\n+ return returnG(cols);\n+ }\n+\n+ private AColGroup encodeGeneric(final MatrixBlock data, final int[] cols, final int nRow, final int nCol) {\n+ for(int r = 0; r < nRow; r++)\n+ for(int ci = 0; ci < cols.length; ci++)\n+ if(data.quickGetValue(r, cols[ci]) != 0.0)\n+ 
return null;\n+ return returnG(cols);\n+ }\n+\n+ private AColGroup returnG(int[] columns) {\n+ if(columns == g.getColIndices())\n+ return g;// great!\n+ else\n+ return new ColGroupEmpty(columns);\n+ }\n+\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/data/SparseBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/data/SparseBlock.java",
"diff": "@@ -573,6 +573,18 @@ public abstract class SparseBlock implements Serializable, Block\nreturn true;\n}\n+\n+ /**\n+ * Get if the dense double array is equivalent to this sparse Block.\n+ *\n+ * @param denseValues row major double values same dimensions of sparse Block.\n+ * @param nCol Number of columns in dense values (and hopefully in this sparse block)\n+ * @return If the dense array is equivalent\n+ */\n+ public boolean equals(double[] denseValues, int nCol) {\n+ return equals(denseValues, nCol, Double.MIN_NORMAL * 1024);\n+ }\n+\n/**\n* Get if the dense double array is equivalent to this sparse Block.\n*\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/compress/colgroup/ColGroupTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/compress/colgroup/ColGroupTest.java",
"diff": "@@ -2112,18 +2112,6 @@ public class ColGroupTest extends ColGroupBase {\nassertTrue(co < eo);\n}\n- // @Test\n- // public void copyMaintainPointers() {\n- // AColGroup a = base.copy();\n- // AColGroup b = other.copy();\n-\n- // assertTrue(a.getColIndices() == base.getColIndices());\n- // assertTrue(b.getColIndices() == other.getColIndices());\n- // // assertFalse(a.getColIndices() == other.getColIndices());\n- // assertFalse(a == base);\n- // assertFalse(b == other);\n- // }\n-\n@Test\npublic void sliceRowsBeforeEnd() {\nif(nRow > 10)\n@@ -2241,4 +2229,54 @@ public class ColGroupTest extends ColGroupBase {\nfail(e.getMessage());\n}\n}\n+\n+ @Test\n+ public void testAppendSelf() {\n+ appendSelfVerification(base);\n+ appendSelfVerification(other);\n+ }\n+\n+ @Test\n+ public void testAppendSomethingElse() {\n+ // This is under the assumption that if one is appending\n+ // to the other then other should append to this.\n+ // If this property does not hold it is because some cases are missing in the append logic.\n+ try {\n+\n+ AColGroup g2 = base.append(other);\n+ AColGroup g2n = other.append(base);\n+ // both should be null, or both should not be.\n+ if(g2 == null)\n+ assertTrue(g2n == null);\n+ else if(g2 != null)\n+ assertTrue(g2n != null);\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n+\n+ private void appendSelfVerification(AColGroup g) {\n+ try {\n+\n+ AColGroup g2 = g.append(g);\n+ AColGroup g2n = AColGroup.appendN(new AColGroup[] {g, g});\n+\n+ if(g2 != null && g2n != null) {\n+ double s2 = g2.getSum(nRow * 2);\n+ double s = g.getSum(nRow) * 2;\n+ double s2n = g2n.getSum(nRow * 2);\n+ assertEquals(s2, s, 0.0001);\n+ assertEquals(s2n, s, 0.0001);\n+\n+ UA_ROW(InstructionUtils.parseBasicAggregateUnaryOperator(\"uar+\", 1), 0, nRow * 2, g2, g2n, nRow * 2);\n+ }\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n+\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/component/compress/colgroup/scheme/CLAEmptySchemeTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.component.compress.colgroup.scheme;\n+\n+import static org.junit.Assert.assertTrue;\n+\n+import org.apache.sysds.runtime.compress.colgroup.AColGroup;\n+import org.apache.sysds.runtime.compress.colgroup.ColGroupEmpty;\n+import org.apache.sysds.runtime.compress.colgroup.scheme.ICLAScheme;\n+import org.apache.sysds.runtime.data.DenseBlockFP64;\n+import org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.junit.Test;\n+\n+public class CLAEmptySchemeTest {\n+\n+ private final AColGroup g;\n+ private final ICLAScheme sh;\n+\n+ public CLAEmptySchemeTest() {\n+ g = new ColGroupEmpty(//\n+ new int[] {1, 3, 5} // Columns\n+ );\n+ sh = g.getCompressionScheme();\n+ }\n+\n+ @Test\n+ public void testConstValid() {\n+ assertTrue(sh != null);\n+ }\n+\n+ @Test\n+ public void testToSmallMatrix() {\n+ assertTrue(sh.encode(new MatrixBlock(1, 3, new double[] {//\n+ 1.1, 1.2, 1.3})) == null);\n+ }\n+\n+ @Test\n+ public void testWrongValuesSingleRow() {\n+ assertTrue(sh.encode(new MatrixBlock(1, 6, new double[] {//\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.2})) == null);\n+ }\n+\n+ @Test\n+ public void testWrongValuesSingleRowV2() {\n+ assertTrue(sh.encode(new MatrixBlock(1, 6, new double[] {//\n+ 0.0, 1.0, 0.2, 1.2, 0.2, 1.3})) == null);\n+ }\n+\n+ @Test\n+ public void testValidEncodeSingleRow() {\n+ assertTrue(sh.encode(new MatrixBlock(1, 6, new double[] {//\n+ 0.1, 0.0, 0.04, 0.0, 0.03, 0.0})) != null);\n+ }\n+\n+ @Test\n+ public void testValidEncodeMultiRow() {\n+ assertTrue(sh.encode(new MatrixBlock(2, 6, new double[] {//\n+ 132, 0.0, 241, 0.0, 142, 0.0, //\n+ 132, 0.0, 241, 0.0, 142, 0.0, //\n+ })) != null);\n+ }\n+\n+ @Test\n+ public void testValidEncodeMultiRowsLarger() {\n+ assertTrue(sh.encode(new MatrixBlock(2, 10, new double[] {//\n+ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1, 1, 1, 1, //\n+ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1, 1, 1, 1, //\n+ })) != null);\n+ }\n+\n+ @Test\n+ public void testInvalidEncodeMultiRowsValue() {\n+ assertTrue(sh.encode(new MatrixBlock(4, 8, new double[] {//\n+ 0.0, 0.0, 0.2, 0.0, 1.2, 0.0, 0.2, 1.3, //\n+ 0.0, 0.0, 0.2, 0.0, 1.2, 0.0, 0.2, 1.3, //\n+ 0.0, 0.0, 0.2, 0.0, 1.2, 0.0, 0.2, 1.3, //\n+ 0.0, 0.0, 0.2, 0.0, 1.2, 0.0, 0.2, 1.3, //\n+ })) != null);\n+ }\n+\n+ @Test\n+ public void testValidEncodeMultiRowDifferentValuesOtherColumns() {\n+ assertTrue(sh.encode(new MatrixBlock(4, 12, new double[] {//\n+ 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.1, 0.4, 1.2, 0.3, 1.3, //\n+ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.1, 0.2, 1.2, 0.1, 1.3, //\n+ 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.1, 0.4, 1.2, 0.1, 1.3, //\n+ })) != null);\n+ }\n+\n+ @Test\n+ public void 
testInvalidEncodeValueMultiRowMultiError() {\n+ assertTrue(sh.encode(new MatrixBlock(4, 6, new double[] {//\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.4, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ })) == null);\n+ }\n+\n+ @Test\n+ public void testInvalidEncodeMultiRow() {\n+ assertTrue(sh.encode(new MatrixBlock(4, 6, new double[] {//\n+ 0.0, 1.3, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.4, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ })) == null);\n+ }\n+\n+ @Test\n+ public void testEncodeOtherColumns() {\n+ assertTrue(sh.encode(new MatrixBlock(4, 5, new double[] {//\n+ 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ }), new int[] {0, 2, 4}// other columns\n+ ) == null);\n+ }\n+\n+ @Test\n+ public void testEncodeOtherColumnsValid() {\n+ assertTrue(sh.encode(new MatrixBlock(4, 8, new double[] {//\n+ 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ }), new int[] {0, 2, 4}// other columns\n+ ) != null);\n+ }\n+\n+ @Test\n+ public void testEncodeOtherColumnsInvalid() {\n+ assertTrue(sh.encode(new MatrixBlock(4, 5, new double[] {//\n+ 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 1.1, 0.2, 1.4, 0.2, 1.3, //\n+ 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ }), new int[] {0, 2, 4}// other columns\n+ ) == null);\n+ }\n+\n+ @Test(expected = IllegalArgumentException.class)\n+ public void testInvalidArgument_1() {\n+ sh.encode(null, new int[] {0, 2, 4, 5});\n+ }\n+\n+ @Test(expected = IllegalArgumentException.class)\n+ public void testInvalidArgument_2() {\n+ sh.encode(null, new int[] {0, 2});\n+ }\n+\n+ @Test\n+ public void testSparse() {\n+ MatrixBlock mb = new MatrixBlock(4, 6, new double[] {//\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ mb = mb.append(empty);\n+\n+ assertTrue(sh.encode(mb) != null);\n+ }\n+\n+ @Test\n+ public void testSpars_AllCosOver() {\n+ MatrixBlock mb = new MatrixBlock(4, 6, new double[] {//\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ mb = mb.append(empty);\n+\n+ assertTrue(sh.encode(mb, new int[] {100, 102, 999}) != null);\n+ }\n+\n+ @Test\n+ public void testSpars_InsideInvalid() {\n+ MatrixBlock mb = new MatrixBlock(4, 6, new double[] {//\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ 0.01, 0.0, 0.2, 0.0, 0.2, 0.0, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ mb = mb.append(empty);\n+\n+ assertTrue(sh.encode(mb, new int[] {1, 4, 5}) == null);\n+ }\n+\n+ @Test\n+ public void testSparse_Append() {\n+ MatrixBlock mb = new MatrixBlock(4, 6, new double[] {//\n+ 0.0, 0.0, 0.2, 0.0, 0.2, 1.3, //\n+ 0.0, 0.0, 0.2, 0.0, 0.2, 1.3, //\n+ 0.0, 0.0, 0.2, 0.0, 0.2, 1.3, //\n+ 0.0, 0.0, 0.2, 0.0, 0.2, 1.3, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ mb = empty.append(mb);\n+\n+ assertTrue(sh.encode(mb) != null);\n+ }\n+\n+ @Test\n+ public void testSparseValidCustom() {\n+ MatrixBlock mb = new MatrixBlock(4, 9, new double[] 
{//\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ mb = empty.append(mb);\n+\n+ assertTrue(sh.encode(mb, new int[] {1001, 1003, 1005}) != null);\n+ }\n+\n+ @Test\n+ public void testSparseValidCustom2() {\n+ MatrixBlock mb = new MatrixBlock(4, 9, new double[] {//\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ MatrixBlock comb = empty.append(mb).append(mb);\n+\n+ assertTrue(sh.encode(comb, new int[] {1001, 1003, 1005}) != null);\n+ }\n+\n+ @Test\n+ public void testSparseValidCustom3Valid() {\n+ MatrixBlock mb = new MatrixBlock(4, 9, new double[] {//\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.33, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ MatrixBlock comb = empty.append(mb).append(mb);\n+\n+ assertTrue(sh.encode(comb, new int[] {1001, 1003, 1005}) != null);\n+ }\n+\n+ @Test\n+ public void testSparseEmptyRow() {\n+ MatrixBlock mb = new MatrixBlock(4, 6, new double[] {//\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ });\n+\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ mb = empty.append(mb);\n+ MatrixBlock emptyRow = new MatrixBlock(1, 1006, 0.0);\n+ mb = mb.append(emptyRow, false);\n+\n+ assertTrue(sh.encode(mb, new int[] {44, 45, 999}) != null);\n+ }\n+\n+ @Test\n+ public void testEmpty() {\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ assertTrue(sh.encode(empty) != null);\n+ }\n+\n+ @Test\n+ public void testEmptyOtherColumns() {\n+ MatrixBlock empty = new MatrixBlock(4, 1000, 0.0);\n+ assertTrue(sh.encode(empty, new int[] {33, 34, 99}) != null);\n+ }\n+\n+ @Test\n+ public void testGenericNonContinuosBlockValid() {\n+ MatrixBlock mb = new MatrixBlock(4, 6, //\n+ new DenseBlockFP64Mock(new int[] {4, 9}, new double[] {//\n+ 0.2, 0.0, 1.1, 0.0, 0.4, 0.0, 1.2, 0.3, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.2, 1.3, //\n+ 0.0, 0.0, 1.1, 0.0, 0.2, 0.0, 1.2, 0.1, 1.3, //\n+ 0.2, 0.0, 1.1, 0.0, 0.4, 0.0, 1.2, 0.1, 1.3, //\n+ }));\n+ mb.recomputeNonZeros();\n+ assertTrue(sh.encode(mb) != null);\n+ }\n+\n+ @Test\n+ public void testGenericNonContinuosBlockInValid() {\n+ MatrixBlock mb = new MatrixBlock(4, 6, //\n+ new DenseBlockFP64Mock(new int[] {4, 6}, new double[] {//\n+ 0.2, 1.1, 0.4, 1.2, 0.3, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.2, 1.3, //\n+ 0.0, 1.1, 0.2, 1.2, 0.1, 1.3, //\n+ 0.2, 1.22, 0.4, 1.2, 0.1, 1.3, //\n+ }));\n+ mb.recomputeNonZeros();\n+ assertTrue(sh.encode(mb) == null);\n+ }\n+\n+ @Test(expected = NullPointerException.class)\n+ public void testNull() {\n+ sh.encode(null, null);\n+ }\n+\n+ private class DenseBlockFP64Mock extends DenseBlockFP64 {\n+ private static final long serialVersionUID = -3601232958390554672L;\n+\n+ public DenseBlockFP64Mock(int[] dims, double[] data) {\n+ super(dims, data);\n+ }\n+\n+ @Override\n+ public boolean isContiguous() {\n+ return false;\n+ }\n+\n+ @Override\n+ public int numBlocks() {\n+ return 2;\n+ }\n+ 
}\n+\n+}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Add scheme for empty
Add an empty scheme for empty column groups.
Closes #1711 |
49,706 | 28.10.2022 14:40:01 | -7,200 | 2a85ff18fc02f85e393cfb3d6b3d78e7b4608f59 | [MINOR] Fix wildcard import in InstructionUtils | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"diff": "@@ -83,15 +83,27 @@ import org.apache.sysds.runtime.functionobjects.ReduceDiag;\nimport org.apache.sysds.runtime.functionobjects.ReduceRow;\nimport org.apache.sysds.runtime.functionobjects.Xor;\nimport org.apache.sysds.runtime.instructions.cp.AggregateUnaryCPInstruction;\n-import org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.CPInstruction.CPType;\n+import org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.fed.FEDInstruction.FEDType;\nimport org.apache.sysds.runtime.instructions.fed.FEDInstruction.FederatedOutput;\nimport org.apache.sysds.runtime.instructions.gpu.GPUInstruction.GPUINSTRUCTION_TYPE;\nimport org.apache.sysds.runtime.instructions.spark.SPInstruction.SPType;\nimport org.apache.sysds.runtime.matrix.data.LibCommonsMath;\n-import org.apache.sysds.runtime.matrix.operators.*;\n+import org.apache.sysds.runtime.matrix.operators.AggregateBinaryOperator;\n+import org.apache.sysds.runtime.matrix.operators.AggregateOperator;\n+import org.apache.sysds.runtime.matrix.operators.AggregateTernaryOperator;\n+import org.apache.sysds.runtime.matrix.operators.AggregateUnaryOperator;\n+import org.apache.sysds.runtime.matrix.operators.BinaryOperator;\n+import org.apache.sysds.runtime.matrix.operators.CMOperator;\nimport org.apache.sysds.runtime.matrix.operators.CMOperator.AggregateOperationTypes;\n+import org.apache.sysds.runtime.matrix.operators.CountDistinctOperator;\n+import org.apache.sysds.runtime.matrix.operators.LeftScalarOperator;\n+import org.apache.sysds.runtime.matrix.operators.Operator;\n+import org.apache.sysds.runtime.matrix.operators.RightScalarOperator;\n+import org.apache.sysds.runtime.matrix.operators.ScalarOperator;\n+import org.apache.sysds.runtime.matrix.operators.TernaryOperator;\n+import org.apache.sysds.runtime.matrix.operators.UnaryOperator;\npublic class InstructionUtils\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix wildcard import in InstructionUtils |
49,706 | 31.10.2022 17:23:12 | -3,600 | d097c403a0a5f203acca9cb05534d6ccbf134edc | [MINOR] Correct compiled function for FrameBlock
This commit fixes the compilation and return of mapping functions
to use the new path of the FrameBlock | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/FrameBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/FrameBlock.java",
"diff": "@@ -2505,7 +2505,7 @@ public class FrameBlock implements CacheBlock, Externalizable {\n// construct class code\nsb.append(\"import org.apache.sysds.runtime.util.UtilFunctions;\\n\");\nsb.append(\"import org.apache.sysds.runtime.util.PorterStemmer;\\n\");\n- sb.append(\"import org.apache.sysds.runtime.matrix.data.FrameBlock.FrameMapFunction;\\n\");\n+ sb.append(\"import org.apache.sysds.runtime.frame.data.FrameBlock.FrameMapFunction;\\n\");\nsb.append(\"import java.util.Arrays;\\n\");\nsb.append(\"public class \" + cname + \" extends FrameMapFunction {\\n\");\nif(margin != 0) {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Correct compiled function for FrameBlock
This commit fixes the compilation and return of mapping functions
to use the new path of the FrameBlock |
49,706 | 31.10.2022 18:16:54 | -3,600 | b54422cc70fd8c86037c8d97f20325e65052c48e | [MINOR] Change python binding to frame
This commit fixes the python binding to use the new path
for the FrameBlock.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/utils/converters.py",
"new_path": "src/main/python/systemds/utils/converters.py",
"diff": "@@ -113,7 +113,7 @@ def pandas_to_frame_block(sds: \"SystemDSContext\", pd_df: pd.DataFrame):\ntry:\njc_ValueType = jvm.org.apache.sysds.common.Types.ValueType\njc_String = jvm.java.lang.String\n- jc_FrameBlock = jvm.org.apache.sysds.runtime.matrix.data.FrameBlock\n+ jc_FrameBlock = jvm.org.apache.sysds.runtime.frame.data.FrameBlock\nj_valueTypeArray = java_gate.new_array(jc_ValueType, len(schema))\nj_colNameArray = java_gate.new_array(jc_String, len(col_names))\nj_dataArray = java_gate.new_array(jc_String, rows, cols)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Change python binding to frame
This commit fixes the python binding to use the new path
for the FrameBlock.
Closes #1717 |
49,724 | 01.11.2022 08:56:51 | -3,600 | 952ed0089be2744becf973a408c08c879cd55893 | [MINOR] Remove duplicate tests for countDistinct
This patch removes duplicate tests for the countDistinct builtin
Closes | [
{
"change_type": "DELETE",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctApproxCol.java",
"new_path": null,
"diff": "-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements. See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership. The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License. You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied. See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-\n-package org.apache.sysds.test.functions.countDistinct;\n-\n-import org.apache.sysds.common.Types;\n-import org.apache.sysds.runtime.data.SparseBlock;\n-import org.junit.Test;\n-\n-public class CountDistinctApproxCol extends CountDistinctRowOrColBase {\n-\n- private final static String TEST_NAME = \"countDistinctApproxCol\";\n- private final static String TEST_DIR = \"functions/countDistinctApprox/\";\n- private final static String TEST_CLASS_DIR = TEST_DIR + CountDistinctApproxCol.class.getSimpleName() + \"/\";\n-\n- @Override\n- protected String getTestClassDir() {\n- return TEST_CLASS_DIR;\n- }\n-\n- @Override\n- protected String getTestName() {\n- return TEST_NAME;\n- }\n-\n- @Override\n- protected String getTestDir() {\n- return TEST_DIR;\n- }\n-\n- @Override\n- protected Types.Direction getDirection() {\n- return Types.Direction.Col;\n- }\n-\n- @Override\n- public void setUp() {\n- super.addTestConfiguration();\n- }\n-\n- @Test\n- public void testCPSparseLargeDefaultMCSR() {\n- Types.ExecType ex = Types.ExecType.CP;\n-\n- int actualDistinctCount = 10;\n- int rows = 1000, cols = 10000;\n- double sparsity = 0.1;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- countDistinctMatrixTest(getDirection(), actualDistinctCount, cols, rows, sparsity, ex, tolerance);\n- }\n-\n- @Test\n- public void testCPSparseLargeCSR() {\n- int actualDistinctCount = 10;\n- int rows = 1000, cols = 10000;\n- double sparsity = 0.1;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- super.testCPSparseLarge(SparseBlock.Type.CSR, Types.Direction.Col, rows, cols, actualDistinctCount, sparsity,\n- tolerance);\n- }\n-\n- @Test\n- public void testCPSparseLargeCOO() {\n- int actualDistinctCount = 10;\n- int rows = 1000, cols = 10000;\n- double sparsity = 0.1;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- super.testCPSparseLarge(SparseBlock.Type.COO, Types.Direction.Col, rows, cols, actualDistinctCount, sparsity,\n- tolerance);\n- }\n-\n- @Test\n- public void testCPDenseLarge() {\n- Types.ExecType ex = Types.ExecType.CP;\n-\n- int actualDistinctCount = 100;\n- int rows = 1000, cols = 10000;\n- double sparsity = 0.9;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- countDistinctMatrixTest(getDirection(), actualDistinctCount, cols, rows, sparsity, ex, tolerance);\n- }\n-}\n"
},
{
"change_type": "DELETE",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctApproxRow.java",
"new_path": null,
"diff": "-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements. See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership. The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License. You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied. See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-\n-package org.apache.sysds.test.functions.countDistinct;\n-\n-import org.apache.sysds.common.Types;\n-import org.apache.sysds.runtime.data.SparseBlock;\n-import org.junit.Test;\n-\n-public class CountDistinctApproxRow extends CountDistinctRowOrColBase {\n-\n- private final static String TEST_NAME = \"countDistinctApproxRow\";\n- private final static String TEST_DIR = \"functions/countDistinctApprox/\";\n- private final static String TEST_CLASS_DIR = TEST_DIR + CountDistinctApproxRow.class.getSimpleName() + \"/\";\n-\n- @Override\n- protected String getTestClassDir() {\n- return TEST_CLASS_DIR;\n- }\n-\n- @Override\n- protected String getTestName() {\n- return TEST_NAME;\n- }\n-\n- @Override\n- protected String getTestDir() {\n- return TEST_DIR;\n- }\n-\n- @Override\n- protected Types.Direction getDirection() {\n- return Types.Direction.Row;\n- }\n-\n- @Override\n- public void setUp() {\n- super.addTestConfiguration();\n- }\n-\n- @Test\n- public void testCPSparseLargeDefaultMCSR() {\n- Types.ExecType ex = Types.ExecType.CP;\n-\n- int actualDistinctCount = 10;\n- int rows = 10000, cols = 1000;\n- double sparsity = 0.1;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- countDistinctMatrixTest(getDirection(), actualDistinctCount, cols, rows, sparsity, ex, tolerance);\n- }\n-\n- @Test\n- public void testCPSparseLargeCSR() {\n- int actualDistinctCount = 10;\n- int rows = 10000, cols = 1000;\n- double sparsity = 0.1;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- super.testCPSparseLarge(SparseBlock.Type.CSR, Types.Direction.Row, rows, cols, actualDistinctCount, sparsity,\n- tolerance);\n- }\n-\n- @Test\n- public void testCPSparseLargeCOO() {\n- int actualDistinctCount = 10;\n- int rows = 10000, cols = 1000;\n- double sparsity = 0.1;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- super.testCPSparseLarge(SparseBlock.Type.COO, Types.Direction.Row, rows, cols, actualDistinctCount, sparsity,\n- tolerance);\n- }\n-\n- @Test\n- public void testCPDenseLarge() {\n- Types.ExecType ex = Types.ExecType.CP;\n-\n- int actualDistinctCount = 100;\n- int rows = 10000, cols = 1000;\n- double sparsity = 0.9;\n- double tolerance = actualDistinctCount * this.percentTolerance;\n-\n- countDistinctMatrixTest(getDirection(), actualDistinctCount, cols, rows, sparsity, ex, tolerance);\n- }\n-}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Remove duplicate tests for countDistinct
This patch removes duplicate tests for the countDistinct builtin
Closes #1719 |
49,706 | 02.11.2022 09:58:14 | -3,600 | 32331d683ccd997ae2123507972b05f762dc3334 | Federated Monitoring Tool Fix Rat Check
This commit excludes the license check for the monitoring tool's artifacts
when installed locally. Previously the license check would fail on
50k+ files from the monitoring tool's build. | [
{
"change_type": "MODIFY",
"old_path": "pom.xml",
"new_path": "pom.xml",
"diff": "<exclude>scripts/perftest/temp/**</exclude>\n<exclude>scripts/tutorials/**</exclude>\n<exclude>scripts/perftest/logs/**</exclude>\n+ <exclude>scripts/monitoring/node_modules/**</exclude>\n<exclude>.gitignore</exclude>\n<exclude>src/main/python/.gitignore</exclude>\n<exclude>.gitmodules</exclude>\n<exclude>**/target/**</exclude>\n<exclude>**/README.md</exclude>\n<exclude>**/*.svg</exclude>\n+ <exclude>**/derby.log</exclude>\n<!-- Jupyter Notebooks -->\n<exclude>**/*.ipynb</exclude>\n<!-- Generated antlr files -->\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3348] Federated Monitoring Tool Fix Rat Check
This commit excludes the license check for the monitoring tool's artifacts
when installed locally. Previously the license check would fail on
50k+ files from the monitoring tool's build. |
49,689 | 02.11.2022 15:44:40 | -3,600 | f55ca74d5ac90a305d08db7260f638078de62b03 | Disable successful Prefetch count test
This patch brings a minor change to the PrefetchRDD tests. Now we
only check the number of Prefetch instructions executed and no longer
check how many of those are successful.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/PrefetchRDDTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/PrefetchRDDTest.java",
"diff": "@@ -110,8 +110,8 @@ public class PrefetchRDDTest extends AutomatedTestBase {\nlong expected_successPF = !testname.equalsIgnoreCase(TEST_NAME+\"3\") ? 1 : 0;\nlong numPF = Statistics.getCPHeavyHitterCount(\"prefetch\");\nAssert.assertTrue(\"Violated Prefetch instruction count: \"+numPF, numPF == expected_numPF);\n- long successPF = SparkStatistics.getAsyncPrefetchCount();\n- Assert.assertTrue(\"Violated successful Prefetch count: \"+successPF, successPF == expected_successPF);\n+ //long successPF = SparkStatistics.getAsyncPrefetchCount();\n+ //Assert.assertTrue(\"Violated successful Prefetch count: \"+successPF, successPF == expected_successPF);\n} finally {\nOptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\nOptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3088] Disable successful Prefetch count test
This patch brings a minor change to the PrefetchRDD tests. Now we
only check the number of Prefetch instructions executed and no longer
check how many of those are successful.
Closes #1720 |
49,700 | 11.10.2022 08:02:25 | -7,200 | 755f879419eccad29961d80535caab434f592aea | Docker Setup For HE
This commit changes the dockerfile for tests to include SEAL for running the homomorphic encryption tests.
The ignore tags are removed from the HE tests and the tests are added to the GitHub test workflows.
Closes | [
{
"change_type": "MODIFY",
"old_path": ".github/workflows/functionsTests.yml",
"new_path": ".github/workflows/functionsTests.yml",
"diff": "@@ -65,7 +65,7 @@ jobs:\n\"**.functions.dnn.**,**.functions.paramserv.**\",\n\"**.functions.recompile.**,**.functions.misc.**,**.functions.mlcontext.**\",\n\"**.functions.nary.**,**.functions.quaternary.**\",\n- \"**.functions.parfor.**,**.functions.pipelines.**,**.functions.privacy.**\",\n+ \"**.functions.parfor.**,**.functions.pipelines.**,**.functions.privacy.**\", \"**.functions.homomorphicEncryption.**\"\n\"**.functions.unary.scalar.**,**.functions.updateinplace.**,**.functions.vect.**\",\n\"**.functions.reorg.**,**.functions.rewrite.**,**.functions.ternary.**,**.functions.transform.**\",\n\"**.functions.unary.matrix.**,**.functions.linearization.**,**.functions.jmlc.**\"\n"
},
{
"change_type": "MODIFY",
"old_path": "docker/entrypoint.sh",
"new_path": "docker/entrypoint.sh",
"diff": "# A script to execute the tests inside the docker container.\n+cd /github/workspace/src/main/cpp\n+./build.sh\ncd /github/workspace\nexport MAVEN_OPTS=\"-Xmx512m -XX:MaxPermSize=128m\"\n"
},
{
"change_type": "MODIFY",
"old_path": "docker/testsysds.Dockerfile",
"new_path": "docker/testsysds.Dockerfile",
"diff": "@@ -33,6 +33,7 @@ ENV PATH $JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH\nENV LANGUAGE en_US:en\nENV LC_ALL en_US.UTF-8\nENV LANG en_US.UTF-8\n+ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/\nCOPY ./src/test/scripts/installDependencies.R installDependencies.R\nCOPY ./docker/entrypoint.sh /entrypoint.sh\n@@ -62,7 +63,11 @@ RUN apt-get update -qq \\\n&& mv openjdk-11.0.13_8 /usr/lib/jvm/java-11-openjdk-amd64 \\\n&& wget -qO- \\\nhttp://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz | tar xzf - \\\n- && mv apache-maven-$MAVEN_VERSION /usr/lib/mvn\n+ && mv apache-maven-$MAVEN_VERSION /usr/lib/mvn \\\n+ && apt-get install -y --no-install-recommends \\\n+ git \\\n+ cmake \\\n+ patchelf\nRUN apt-get install -y --no-install-recommends \\\nlibssl-dev \\\n@@ -74,4 +79,11 @@ RUN apt-get install -y --no-install-recommends \\\n&& rm -rf installDependencies.R \\\n&& rm -rf /var/lib/apt/lists/*\n+# SEAL\n+RUN wget -qO- https://github.com/microsoft/SEAL/archive/refs/tags/v3.7.0.tar.gz | tar xzf - \\\n+ && cd SEAL-3.7.0 \\\n+ && cmake -S . -B build -DBUILD_SHARED_LIBS=ON \\\n+ && cmake --build build \\\n+ && cmake --install build\n+\nENTRYPOINT [\"/entrypoint.sh\"]\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/NativeHelper.java",
"new_path": "src/main/java/org/apache/sysds/utils/NativeHelper.java",
"diff": "@@ -333,6 +333,7 @@ public class NativeHelper {\n* @return true if successfully loaded BLAS\n*/\npublic static boolean loadLibraryHelperFromResource(String libFileName) {\n+ LOG.info(\"Loading JNI shared library: \" + libFileName);\ntry(InputStream in = NativeHelper.class.getResourceAsStream(\"/lib/\"+ libFileName)) {\n// This logic is added because Java does not allow to load library from a resource file.\nif(in != null) {\n@@ -346,10 +347,10 @@ public class NativeHelper {\nreturn true;\n}\nelse\n- LOG.warn(\"No lib available in the jar:\" + libFileName);\n+ LOG.error(\"No lib available in the jar:\" + libFileName);\n}\n- catch(IOException e) {\n- LOG.warn(\"Unable to load library \" + libFileName + \" from resource:\" + e.getMessage());\n+ catch(IOException | UnsatisfiedLinkError e) {\n+ LOG.error(\"Unable to load library \" + libFileName + \" from resource:\" + e.getMessage());\n}\nreturn false;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/EncryptedFederatedParamservTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/EncryptedFederatedParamservTest.java",
"diff": "@@ -32,7 +32,6 @@ import org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\nimport org.apache.sysds.utils.Statistics;\nimport org.junit.Assert;\n-import org.junit.Ignore;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.Parameterized;\n@@ -73,7 +72,7 @@ public class EncryptedFederatedParamservTest extends AutomatedTestBase {\n//{\"TwoNN\", 5, 1000, 100, 1, 0.01, \"BSP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"NONE\", \"true\", \"BALANCED\", 200},\n{\"TwoNN\", 2, 4, 1, 4, 0.01, \"SBP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"BASELINE\", \"false\", \"IMBALANCED\", 200},\n{\"TwoNN\", 2, 4, 1, 4, 0.01, \"SBP\", \"BATCH\", \"KEEP_DATA_ON_WORKER\", \"BASELINE\", \"false\", \"BALANCED\", 200},\n- {\"CNN\", 2, 4, 1, 4, 0.01, \"SBP\", \"EPOCH\", \"SHUFFLE\", \"BASELINE\", \"false\", \"BALANCED\", 200},\n+ //{\"CNN\", 2, 4, 1, 4, 0.01, \"SBP\", \"EPOCH\", \"SHUFFLE\", \"BASELINE\", \"false\", \"BALANCED\", 200},\n/*\n// runtime balancing\n@@ -127,13 +126,11 @@ public class EncryptedFederatedParamservTest extends AutomatedTestBase {\n}\n@Test\n- @Ignore\npublic void EncryptedfederatedParamservSingleNode() {\nEncryptedfederatedParamserv(ExecMode.SINGLE_NODE, true);\n}\n@Test\n- @Ignore\npublic void EncryptedfederatedParamservHybrid() {\nEncryptedfederatedParamserv(ExecMode.HYBRID, true);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/homomorphicEncryption/InOutTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/homomorphicEncryption/InOutTest.java",
"diff": "package org.apache.sysds.test.functions.homomorphicEncryption;\n+import org.apache.sysds.api.DMLException;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.paramserv.NativeHEHelper;\n@@ -33,7 +34,6 @@ import org.apache.sysds.runtime.meta.MetaDataFormat;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n-import org.junit.Ignore;\nimport org.junit.Test;\npublic class InOutTest extends AutomatedTestBase {\n@@ -49,18 +49,12 @@ public class InOutTest extends AutomatedTestBase {\n@Override\npublic void setUp() {\n- try {\nNativeHEHelper.initialize();\n- } catch (Exception e) {\n- throw e;\n- }\n-\nTestUtils.clearAssertionInformation();\naddTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[] { \"C\" }) );\n}\n@Test\n- @Ignore\npublic void endToEndTest() {\nSEALServer server = new SEALServer();\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3280] Docker Setup For HE
This commit changes the dockerfile for tests to include SEAL for running the homomorphic encryption tests.
The ignore tags are removed from the HE tests and the tests are added to the GitHub test workflows.
Closes #1721. |
49,706 | 08.11.2022 17:45:04 | -3,600 | fc474f6dff66f6df4d15d710e25bde133067d40e | [MINOR] Fix functionTests again
There was another syntax error in the github FunctionTests file.
This commit fixes that. | [
{
"change_type": "MODIFY",
"old_path": ".github/workflows/functionsTests.yml",
"new_path": ".github/workflows/functionsTests.yml",
"diff": "@@ -52,11 +52,11 @@ jobs:\n\"**.functions.a**.**,**.functions.binary.frame.**,**.functions.binary.matrix.**,**.functions.binary.scalar.**,**.functions.binary.tensor.**\",\n\"**.functions.blocks.**,**.functions.data.rand.**,\",\n\"**.functions.countDistinct.**,**.functions.countDistinctApprox.**,**.functions.data.misc.**,**.functions.lineage.**\",\n- \"**.functions.compress.**,,**.functions.data.tensor.**,**.functions.codegenalg.parttwo.**,**.functions.codegen.**,**.functions.caching.**\",\n+ \"**.functions.compress.**,**.functions.data.tensor.**,**.functions.codegenalg.parttwo.**,**.functions.codegen.**,**.functions.caching.**\",\n\"**.functions.binary.matrix_full_cellwise.**,**.functions.binary.matrix_full_other.**\",\n\"**.functions.federated.algorithms.**,**.functions.federated.io.**,**.functions.federated.paramserv.**\",\n\"**.functions.federated.primitives.**,**.functions.federated.transform.**\",\n- \"**.functions.federated.monitoring.**,**.functions.federated.multitenant\",\n+ \"**.functions.federated.monitoring.**,**.functions.federated.multitenant.**\",\n\"**.functions.federated.codegen.**,**.functions.federated.FederatedTestObjectConstructor\",\n\"**.functions.codegenalg.partone.**\",\n\"**.functions.builtin.part1.**\",\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix functionTests again
There was another syntax error in the github FunctionTests file.
This commit fixes that. |
49,706 | 08.11.2022 14:33:32 | -3,600 | 6681c34171362809157f89ef08d03ab791ef3b14 | [MINOR] Install wheel before sklearn GitHub Actions
This addresses an install flaw in Python where sklearn does not install
properly if wheel is not installed first. | [
{
"change_type": "MODIFY",
"old_path": ".github/workflows/python.yml",
"new_path": ".github/workflows/python.yml",
"diff": "@@ -99,7 +99,8 @@ jobs:\n# Install pip twice to update past the versions.\npip install --upgrade pip\npip install --upgrade pip\n- pip install numpy py4j wheel scipy sklearn requests pandas unittest-parallel\n+ pip install wheel\n+ pip install numpy py4j scipy sklearn requests pandas unittest-parallel\n- name: Build Python Package\nrun: |\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Install wheel before sklearn GitHub Actions
This addresses an install flaw in Python where sklearn does not install
properly if wheel is not installed first. |
49,706 | 09.11.2022 12:56:25 | -3,600 | 01e23d9ae410c25e08943559ce025e389f06fc1d | [MINOR] Ignore Mice categoricalCP
The new install of our test image installed new packages that are not
compatible with the test:
functions.builtin.part2.BuiltinMiceTest#testMiceCategoricalCP
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/part2/BuiltinMiceTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/part2/BuiltinMiceTest.java",
"diff": "package org.apache.sysds.test.functions.builtin.part2;\n+import java.util.HashMap;\n+\nimport org.apache.commons.lang.ArrayUtils;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.common.Types.ExecType;\n@@ -31,8 +33,6 @@ import org.junit.Assert;\nimport org.junit.Ignore;\nimport org.junit.Test;\n-import java.util.HashMap;\n-\npublic class BuiltinMiceTest extends AutomatedTestBase {\nprivate final static String TEST_NAME = \"mice\";\nprivate final static String TEST_DIR = \"functions/builtin/\";\n@@ -60,6 +60,7 @@ public class BuiltinMiceTest extends AutomatedTestBase {\n}\n@Test\n+ @Ignore\npublic void testMiceCategoricalCP() {\ndouble[][] mask = {{ 1.0, 1.0, 1.0, 1.0, 1.0}};\nrunMiceNominalTest(mask, 3, false, ExecType.CP);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Ignore Mice categoricalCP
The new install of our test image installed new packages that are not
compatible with the test:
functions.builtin.part2.BuiltinMiceTest#testMiceCategoricalCP
Closes #1723 |
49,706 | 11.11.2022 13:10:00 | -3,600 | 9a507ea4ce79f3445727ea5720fa98cf176a3d9b | [MINOR] Increase monitor test wait | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/AutomatedTestBase.java",
"diff": "@@ -121,8 +121,10 @@ public abstract class AutomatedTestBase {\npublic static boolean TEST_GPU = false;\npublic static final double GPU_TOLERANCE = 1e-9;\n- public static final int FED_WORKER_WAIT = 1000; // in ms\n- public static final int FED_WORKER_WAIT_S = 50; // in ms\n+ // ms wait time\n+ public static final int FED_WORKER_WAIT = 1000;\n+ public static final int FED_MONITOR_WAIT = 5000;\n+ public static final int FED_WORKER_WAIT_S = 50;\n// With OpenJDK 8u242 on Windows, the new changes in JDK are not allowing\n@@ -1630,7 +1632,7 @@ public abstract class AutomatedTestBase {\ntry {\nprocess = processBuilder.start();\n// Wait till process is started\n- sleep(5000);\n+ sleep(FED_MONITOR_WAIT);\n}\ncatch(IOException | InterruptedException e) {\nthrow new RuntimeException(e);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Increase monitor test wait |
49,706 | 15.11.2022 15:19:06 | -3,600 | 15e278a257ead5cf42891d2970664ba5f0682e80 | Python Combine Write
This commit finally adds a long-wanted feature to Python scripts.
This allows us to call multiple end nodes in a script without having
to do multiple executions.
Combine(Write(X,"Path1"), Write(Y,"Path2")).compute()
Similarly we can do:
Combine(Print(X), Write(X,"Path1")).compute()
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/python/docs/source/code/guide/end_to_end/part2.py",
"new_path": "src/main/python/docs/source/code/guide/end_to_end/part2.py",
"diff": "@@ -40,15 +40,11 @@ with SystemDSContext() as sds:\n# Transform frames to matrices.\nX, M1 = X_frame.transform_encode(spec=jspec_data)\n- Xt = Xt_frame.transform_apply(spec=jspec_data, meta=M1)\nY, M2 = Y_frame.transform_encode(spec=jspec_labels)\n- Yt = Yt_frame.transform_apply(spec=jspec_labels, meta=M2)\n# Subsample to make training faster\nX = X[0:train_count]\nY = Y[0:train_count]\n- Xt = Xt[0:test_count]\n- Yt = Yt[0:test_count]\n# Load custom neural network\nneural_net_src_path = \"tests/examples/tutorials/neural_net_source.dml\"\n@@ -61,5 +57,25 @@ with SystemDSContext() as sds:\nnetwork = FFN_package.train(X, Y, epochs, batch_size, learning_rate, seed)\n- network.write('tests/examples/docs_test/end_to_end/').compute()\n+ # Write metadata and trained network to disk.\n+ sds.combine(\n+ network.write('tests/examples/docs_test/end_to_end/network'),\n+ M1.write('tests/examples/docs_test/end_to_end/encode_X'),\n+ M2.write('tests/examples/docs_test/end_to_end/encode_Y')\n+ ).compute()\n+\n+ # Read metadata and trained network and do prediction.\n+ M1_r = sds.read('tests/examples/docs_test/end_to_end/encode_X')\n+ M2_r = sds.read('tests/examples/docs_test/end_to_end/encode_Y')\n+ network_r = sds.read('tests/examples/docs_test/end_to_end/network')\n+ Xt = Xt_frame.transform_apply(spec=jspec_data, meta=M1_r)\n+ Yt = Yt_frame.transform_apply(spec=jspec_labels, meta=M2_r)\n+ Xt = Xt[0:test_count]\n+ Yt = Yt[0:test_count]\n+ FFN_package_2 = sds.source(neural_net_src_path, \"fnn\")\n+ probs = FFN_package_2.predict(Xt, network_r)\n+ accuracy = FFN_package_2.eval(probs, Yt).compute()\n+\n+ import logging\n+ logging.info(\"accuracy: \" + str(accuracy))\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/docs/source/guide/python_end_to_end_tut.rst",
"new_path": "src/main/python/docs/source/guide/python_end_to_end_tut.rst",
"diff": "@@ -118,12 +118,13 @@ For this we will introduce another dml file, which can be used to train a basic\nStep 1: Obtain data\n~~~~~~~~~~~~~~~~~~~\n-For the whole data setup please refer to level 1, Step 1, as these steps are identical.\n+For the whole data setup please refer to level 1, Step 1, as these steps are almost identical,\n+but instead of preparing the test data, we only prepare the training data.\n.. include:: ../code/guide/end_to_end/part2.py\n:code: python\n:start-line: 20\n- :end-line: 51\n+ :end-line: 47\nStep 2: Load the algorithm\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n@@ -134,12 +135,10 @@ This file includes all the necessary functions for training, evaluating, and sto\nThe returned object of the source call is further used for calling the functions.\nThe file can be found here:\n- - :doc:tests/examples/tutorials/neural_net_source.dml\n-\n.. include:: ../code/guide/end_to_end/part2.py\n:code: python\n- :start-line: 54\n- :end-line: 55\n+ :start-line: 48\n+ :end-line: 51\nStep 3: Training the neural network\n@@ -153,8 +152,8 @@ The seed argument ensures that running the code again yields the same results.\n.. include:: ../code/guide/end_to_end/part2.py\n:code: python\n- :start-line: 61\n- :end-line: 62\n+ :start-line: 52\n+ :end-line: 58\nStep 4: Saving the model\n@@ -163,15 +162,28 @@ Step 4: Saving the model\nFor later usage, we can save the trained model.\nWe only need to specify the name of our model and the file path.\nThis call stores the weights and biases of our model.\n+Similarly the transformation metadata to transform input data to the model,\n+is saved.\n.. include:: ../code/guide/end_to_end/part2.py\n:code: python\n- :start-line: 64\n+ :start-line: 59\n:end-line: 65\n+Step 5: Predict on Unseen data\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+Once the model is saved along with metadata, it is simple to apply it all to\n+unseen data:\n+\n+.. include:: ../code/guide/end_to_end/part2.py\n+ :code: python\n+ :start-line: 66\n+ :end-line: 77\n+\nFull Script NN\n-~~~~~~~~~~~---\n+~~~~~~~~~~~~~~\nThe complete script now can be seen here:\n@@ -179,4 +191,4 @@ The complete script now can be seen here:\n.. include:: ../code/guide/end_to_end/part2.py\n:code: python\n:start-line: 20\n- :end-line: 64\n+ :end-line: 80\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/context/systemds_context.py",
"new_path": "src/main/python/systemds/context/systemds_context.py",
"diff": "@@ -38,7 +38,7 @@ import numpy as np\nimport pandas as pd\nfrom py4j.java_gateway import GatewayParameters, JavaGateway, Py4JNetworkError\nfrom systemds.operator import (Frame, List, Matrix, OperationNode, Scalar,\n- Source)\n+ Source, Combine)\nfrom systemds.script_building import DMLScript, OutputType\nfrom systemds.utils.consts import VALID_INPUT_TYPES\nfrom systemds.utils.helpers import get_module_dir\n@@ -630,6 +630,19 @@ class SystemDSContext(object):\n\"\"\"\nreturn List(self, unnamed_input_nodes=args, named_input_nodes=kwargs)\n+ def combine(self, *args: Sequence[VALID_INPUT_TYPES]) -> Combine:\n+ \"\"\" combine nodes to call compute on multiple operations.\n+\n+ This is usefull for the case of having multiple writes in one script and wanting\n+ to execute all in one execution reusing intermediates.\n+\n+ Note this combine does not allow to return anything to the user, so if used,\n+ please only use nodes that end with either writing or printing elements.\n+\n+ :param args: A sequence that will be executed with call to compute()\n+ \"\"\"\n+ return Combine(self, unnamed_input_nodes=args)\n+\ndef array(self, *args: Sequence[VALID_INPUT_TYPES]) -> List:\n\"\"\" Create a List object containing the given nodes.\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/operator/__init__.py",
"new_path": "src/main/python/systemds/operator/__init__.py",
"diff": "# -------------------------------------------------------------\nfrom systemds.operator.operation_node import OperationNode\n-from systemds.operator.nodes.multi_return import MultiReturn\nfrom systemds.operator.nodes.scalar import Scalar\nfrom systemds.operator.nodes.matrix import Matrix\n+from systemds.operator.nodes.multi_return import MultiReturn\nfrom systemds.operator.nodes.frame import Frame\n+from systemds.operator.nodes.combine import Combine\nfrom systemds.operator.nodes.list_access import ListAccess\nfrom systemds.operator.nodes.list import List\nfrom systemds.operator.nodes.source import Source\nfrom systemds.operator import algorithm\n-__all__ = [\"OperationNode\", \"algorithm\", \"Scalar\", \"List\", \"ListAccess\", \"Matrix\", \"Frame\", \"Source\", \"MultiReturn\"]\n+__all__ = [\"OperationNode\", \"algorithm\", \"Scalar\", \"List\",\n+ \"ListAccess\", \"Matrix\", \"Frame\", \"Source\", \"MultiReturn\", \"Combine\"]\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/python/systemds/operator/nodes/combine.py",
"diff": "+# -------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+# -------------------------------------------------------------\n+\n+\n+__all__ = [\"Combine\"]\n+\n+from typing import Dict, Iterable, List, Sequence\n+\n+from systemds.operator import OperationNode\n+from systemds.script_building.dag import OutputType\n+from systemds.utils.consts import VALID_INPUT_TYPES\n+\n+\n+class Combine(OperationNode):\n+\n+ def __init__(self, sds_context, func='',\n+ unnamed_input_nodes: Iterable[OperationNode] = None):\n+ for a in unnamed_input_nodes:\n+ if(a.output_type != OutputType.NONE):\n+ raise ValueError(\n+ \"Cannot combine elements that have outputs, all elements must be instances of print or write\")\n+\n+ self._outputs = {}\n+ super().__init__(sds_context, func, unnamed_input_nodes, None, OutputType.NONE, False)\n+\n+ def code_line(self, var_name: str, unnamed_input_vars: Sequence[str],\n+ named_input_vars: Dict[str, str]) -> str:\n+ return ''\n+\n+ def compute(self, verbose: bool = False, lineage: bool = False):\n+ return super().compute(verbose, lineage)\n+\n+ def __str__(self):\n+ return \"Combine\"\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3464] Python Combine Write
This commit finally adds a long-wanted feature to Python scripts.
This allows us to call multiple end nodes in a script without having
to do multiple executions.
Combine(Write(X,"Path1"), Write(Y,"Path2")).compute()
Similarly we can do:
Combine(Print(X), Write(X,"Path1")).compute()
Closes #1729 |
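A rough usage sketch of the combine() call introduced by this commit, based on the API added in systemds_context.py above; the input shape and output paths are illustrative assumptions only.

```python
import numpy as np
from systemds.context import SystemDSContext

with SystemDSContext() as sds:
    # Two pipelines sharing the same input matrix
    X = sds.from_numpy(np.random.rand(10, 5))
    Y = X + 1
    # One execution for both writes, instead of two compute() calls
    # that would each re-evaluate the shared intermediates.
    sds.combine(
        X.write("temp/X"),
        Y.write("temp/Y"),
    ).compute()
```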
49,689 | 28.11.2022 00:02:51 | -3,600 | 00f649129ae1568682ee7e0121e7aa0d75353271 | Lineage-based local reuse of Spark actions
This patch extends the lineage cache framework to cache the results
of Spark instructions that return intermediates back to local.
Only caching the actions avoids unnecessarily triggering and fetching
all Spark intermediates. For now, we avoid caching the future-based
results.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/AggregateUnarySPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/AggregateUnarySPInstruction.java",
"diff": "@@ -222,6 +222,10 @@ public class AggregateUnarySPInstruction extends UnarySPInstruction {\n}\n}\n+ public SparkAggType getAggType() {\n+ return _aggtype;\n+ }\n+\nprivate static class RDDUAggFunction implements PairFunction<Tuple2<MatrixIndexes, MatrixBlock>, MatrixIndexes, MatrixBlock>\n{\nprivate static final long serialVersionUID = 2672082409287856038L;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CpmmSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CpmmSPInstruction.java",
"diff": "@@ -149,6 +149,10 @@ public class CpmmSPInstruction extends AggregateBinarySPInstruction {\n}\n}\n+ public SparkAggType getAggType() {\n+ return _aggtype;\n+ }\n+\nprivate static int getPreferredParJoin(DataCharacteristics mc1, DataCharacteristics mc2, int numPar1, int numPar2) {\nint defPar = SparkExecutionContext.getDefaultParallelism(true);\nint maxParIn = Math.max(numPar1, numPar2);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/MapmmSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/MapmmSPInstruction.java",
"diff": "@@ -176,6 +176,10 @@ public class MapmmSPInstruction extends AggregateBinarySPInstruction {\n}\n}\n+ public SparkAggType getAggType() {\n+ return _aggtype;\n+ }\n+\nprivate static boolean preservesPartitioning(DataCharacteristics mcIn, CacheType type )\n{\nif( type == CacheType.LEFT )\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -32,6 +32,7 @@ import org.apache.sysds.parser.Statement;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.controlprogram.context.MatrixObjectFuture;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedResponse;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedStatistics;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedUDF;\n@@ -48,6 +49,7 @@ import org.apache.sysds.runtime.instructions.cp.ScalarObject;\nimport org.apache.sysds.runtime.instructions.fed.ComputationFEDInstruction;\nimport org.apache.sysds.runtime.instructions.gpu.GPUInstruction;\nimport org.apache.sysds.runtime.instructions.gpu.context.GPUObject;\n+import org.apache.sysds.runtime.instructions.spark.ComputationSPInstruction;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig.LineageCacheStatus;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig.ReuseCacheType;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n@@ -92,10 +94,13 @@ public class LineageCache\nif (LineageCacheConfig.isReusable(inst, ec)) {\nComputationCPInstruction cinst = inst instanceof ComputationCPInstruction ? (ComputationCPInstruction)inst : null;\nComputationFEDInstruction cfinst = inst instanceof ComputationFEDInstruction ? (ComputationFEDInstruction)inst : null;\n+ ComputationSPInstruction cspinst = inst instanceof ComputationSPInstruction ? (ComputationSPInstruction)inst : null;\nGPUInstruction gpuinst = inst instanceof GPUInstruction ? (GPUInstruction)inst : null;\n+ //TODO: Replace with generic type\nLineageItem instLI = (cinst != null) ? cinst.getLineageItem(ec).getValue()\n: (cfinst != null) ? cfinst.getLineageItem(ec).getValue()\n+ : (cspinst != null) ? cspinst.getLineageItem(ec).getValue()\n: gpuinst.getLineageItem(ec).getValue();\nList<MutablePair<LineageItem, LineageCacheEntry>> liList = null;\nif (inst instanceof MultiReturnBuiltinCPInstruction) {\n@@ -120,10 +125,10 @@ public class LineageCache\ne = LineageCache.probe(item.getKey()) ? 
getIntern(item.getKey()) : null;\n//TODO need to also move execution of compensation plan out of here\n//(create lazily evaluated entry)\n- if (e == null && LineageCacheConfig.getCacheType().isPartialReuse())\n+ if (e == null && LineageCacheConfig.getCacheType().isPartialReuse() && cspinst == null)\nif( LineageRewriteReuse.executeRewrites(inst, ec) )\ne = getIntern(item.getKey());\n- //TODO: MultiReturnBuiltin and partial rewrites\n+ //TODO: Partial reuse for Spark instructions\nreuseAll &= (e != null);\nitem.setValue(e);\n@@ -134,6 +139,8 @@ public class LineageCache\nputIntern(item.getKey(), cinst.output.getDataType(), null, null, 0);\nelse if (cfinst != null)\nputIntern(item.getKey(), cfinst.output.getDataType(), null, null, 0);\n+ else if (cspinst != null)\n+ putIntern(item.getKey(), cspinst.output.getDataType(), null, null, 0);\nelse if (gpuinst != null)\nputIntern(item.getKey(), gpuinst._output.getDataType(), null, null, 0);\n//FIXME: different o/p datatypes for MultiReturnBuiltins.\n@@ -155,6 +162,8 @@ public class LineageCache\noutName = cinst.output.getName();\nelse if (inst instanceof ComputationFEDInstruction)\noutName = cfinst.output.getName();\n+ else if (inst instanceof ComputationSPInstruction)\n+ outName = cspinst.output.getName();\nelse if (inst instanceof GPUInstruction)\noutName = gpuinst._output.getName();\n@@ -483,9 +492,14 @@ public class LineageCache\nif (LineageCacheConfig.isReusable(inst, ec) ) {\nLineageItem item = ((LineageTraceable) inst).getLineageItem(ec).getValue();\n//This method is called only to put matrix value\n- MatrixObject mo = inst instanceof ComputationCPInstruction ?\n- ec.getMatrixObject(((ComputationCPInstruction) inst).output) :\n- ec.getMatrixObject(((ComputationFEDInstruction) inst).output);\n+ MatrixObject mo = null;\n+ if (inst instanceof ComputationCPInstruction)\n+ mo = ec.getMatrixObject(((ComputationCPInstruction) inst).output);\n+ else if (inst instanceof ComputationFEDInstruction)\n+ mo = ec.getMatrixObject(((ComputationFEDInstruction) inst).output);\n+ else if (inst instanceof ComputationSPInstruction)\n+ mo = ec.getMatrixObject(((ComputationSPInstruction) inst).output);\n+\nsynchronized( _cache ) {\nputIntern(item, DataType.MATRIX, mo.acquireReadAndRelease(), null, computetime);\n}\n@@ -527,9 +541,12 @@ public class LineageCache\nliData = Arrays.asList(Pair.of(instLI, ec.getVariable(((GPUInstruction)inst)._output)));\n}\nelse\n- liData = inst instanceof ComputationCPInstruction ?\n- Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationCPInstruction) inst).output))) :\n- Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationFEDInstruction) inst).output)));\n+ if (inst instanceof ComputationCPInstruction)\n+ liData = Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationCPInstruction) inst).output)));\n+ else if (inst instanceof ComputationFEDInstruction)\n+ liData = Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationFEDInstruction) inst).output)));\n+ else if (inst instanceof ComputationSPInstruction)\n+ liData = Arrays.asList(Pair.of(instLI, ec.getVariable(((ComputationSPInstruction) inst).output)));\nif (liGpuObj == null)\nputValueCPU(inst, liData, computetime);\n@@ -556,6 +573,12 @@ public class LineageCache\ncontinue;\n}\n+ if (data instanceof MatrixObjectFuture) {\n+ // We don't want to call get() on the future immediately after the execution\n+ removePlaceholder(item);\n+ continue;\n+ }\n+\nif (LineageCacheConfig.isOutputFederated(inst, data)) {\n// Do not cache federated outputs (in the coordinator)\n// 
Cannot skip putting the placeholder as the above is only known after execution\n@@ -867,10 +890,12 @@ public class LineageCache\nCPOperand output = inst instanceof ComputationCPInstruction ? ((ComputationCPInstruction)inst).output\n: inst instanceof ComputationFEDInstruction ? ((ComputationFEDInstruction)inst).output\n+ : inst instanceof ComputationSPInstruction ? ((ComputationSPInstruction)inst).output\n: ((GPUInstruction)inst)._output;\nif (output.isMatrix()) {\nMatrixObject mo = inst instanceof ComputationCPInstruction ? ec.getMatrixObject(((ComputationCPInstruction)inst).output)\n: inst instanceof ComputationFEDInstruction ? ec.getMatrixObject(((ComputationFEDInstruction)inst).output)\n+ : inst instanceof ComputationSPInstruction ? ec.getMatrixObject(((ComputationSPInstruction)inst).output)\n: ec.getMatrixObject(((GPUInstruction)inst)._output);\n//limit this to full reuse as partial reuse is applicable even for loop dependent operation\nreturn !(LineageCacheConfig.getCacheType() == ReuseCacheType.REUSE_FULL\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -21,8 +21,10 @@ package org.apache.sysds.runtime.lineage;\nimport org.apache.commons.lang3.ArrayUtils;\nimport org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types;\nimport org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.conf.DMLConfig;\n+import org.apache.sysds.hops.AggBinaryOp;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.instructions.Instruction;\n@@ -33,6 +35,11 @@ import org.apache.sysds.runtime.instructions.cp.ListIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.fed.ComputationFEDInstruction;\nimport org.apache.sysds.runtime.instructions.gpu.GPUInstruction;\n+import org.apache.sysds.runtime.instructions.spark.AggregateUnarySPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.ComputationSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.CpmmSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.MapmmSPInstruction;\n+import org.apache.sysds.runtime.instructions.spark.TsmmSPInstruction;\nimport java.util.Comparator;\n@@ -46,7 +53,8 @@ public class LineageCacheConfig\n\"uamean\", \"max\", \"min\", \"ifelse\", \"-\", \"sqrt\", \">\", \"uak+\", \"<=\",\n\"^\", \"uamax\", \"uark+\", \"uacmean\", \"eigen\", \"ctableexpand\", \"replace\",\n\"^2\", \"uack+\", \"tak+*\", \"uacsqk+\", \"uark+\", \"n+\", \"uarimax\", \"qsort\",\n- \"qpick\", \"transformapply\", \"uarmax\", \"n+\", \"-*\", \"castdtm\", \"lowertri\"\n+ \"qpick\", \"transformapply\", \"uarmax\", \"n+\", \"-*\", \"castdtm\", \"lowertri\",\n+ \"mapmm\", \"cpmm\"\n//TODO: Reuse everything.\n};\nprivate static String[] REUSE_OPCODES = new String[] {};\n@@ -197,7 +205,8 @@ public class LineageCacheConfig\npublic static boolean isReusable (Instruction inst, ExecutionContext ec) {\nboolean insttype = (inst instanceof ComputationCPInstruction\n|| inst instanceof ComputationFEDInstruction\n- || inst instanceof GPUInstruction)\n+ || inst instanceof GPUInstruction\n+ || (inst instanceof ComputationSPInstruction && isRightSparkOp(inst)))\n&& !(inst instanceof ListIndexingCPInstruction);\nboolean rightop = (ArrayUtils.contains(REUSE_OPCODES, inst.getOpcode())\n|| (inst.getOpcode().equals(\"append\") && isVectorAppend(inst, ec))\n@@ -226,6 +235,14 @@ public class LineageCacheConfig\nlong c2 = ec.getMatrixObject(cpinst.input2).getNumColumns();\nreturn(c1 == 1 || c2 == 1);\n}\n+ if (inst instanceof ComputationSPInstruction) {\n+ ComputationSPInstruction fedinst = (ComputationSPInstruction) inst;\n+ if (!fedinst.input1.isMatrix() || !fedinst.input2.isMatrix())\n+ return false;\n+ long c1 = ec.getMatrixObject(fedinst.input1).getNumColumns();\n+ long c2 = ec.getMatrixObject(fedinst.input2).getNumColumns();\n+ return(c1 == 1 || c2 == 1);\n+ }\nelse { //GPUInstruction\nGPUInstruction gpuinst = (GPUInstruction)inst;\nif( !gpuinst._input1.isMatrix() || !gpuinst._input2.isMatrix() )\n@@ -236,6 +253,30 @@ public class LineageCacheConfig\n}\n}\n+ // Check if the Spark instruction returns result back to local\n+ private static boolean isRightSparkOp(Instruction inst) {\n+ if (!(inst instanceof ComputationSPInstruction))\n+ return false;\n+\n+ boolean spAction = false;\n+ if (inst instanceof MapmmSPInstruction &&\n+ ((MapmmSPInstruction) inst).getAggType() == AggBinaryOp.SparkAggType.SINGLE_BLOCK)\n+ spAction = true;\n+ else if (inst 
instanceof TsmmSPInstruction)\n+ spAction = true;\n+ else if (inst instanceof AggregateUnarySPInstruction &&\n+ ((AggregateUnarySPInstruction) inst).getAggType() == AggBinaryOp.SparkAggType.SINGLE_BLOCK)\n+ spAction = true;\n+ else if (inst instanceof CpmmSPInstruction &&\n+ ((CpmmSPInstruction) inst).getAggType() == AggBinaryOp.SparkAggType.SINGLE_BLOCK)\n+ spAction = true;\n+ else if (((ComputationSPInstruction) inst).output.getDataType() == Types.DataType.SCALAR)\n+ spAction = true;\n+ //TODO: include other cases\n+\n+ return spAction;\n+ }\n+\npublic static boolean isOutputFederated(Instruction inst, Data data) {\nif (!(inst instanceof ComputationFEDInstruction))\nreturn false;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.async;\n+\n+ import java.util.ArrayList;\n+ import java.util.HashMap;\n+ import java.util.List;\n+\n+ import org.apache.sysds.common.Types.ExecMode;\n+ import org.apache.sysds.hops.OptimizerUtils;\n+ import org.apache.sysds.hops.recompile.Recompiler;\n+ import org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\n+ import org.apache.sysds.runtime.lineage.Lineage;\n+ import org.apache.sysds.runtime.matrix.data.MatrixValue;\n+ import org.apache.sysds.test.AutomatedTestBase;\n+ import org.apache.sysds.test.TestConfiguration;\n+ import org.apache.sysds.test.TestUtils;\n+ import org.apache.sysds.utils.Statistics;\n+ import org.junit.Assert;\n+ import org.junit.Test;\n+\n+public class LineageReuseSparkTest extends AutomatedTestBase {\n+\n+ protected static final String TEST_DIR = \"functions/async/\";\n+ protected static final String TEST_NAME = \"LineageReuseSpark\";\n+ protected static final int TEST_VARIANTS = 1;\n+ protected static String TEST_CLASS_DIR = TEST_DIR + LineageReuseSparkTest.class.getSimpleName() + \"/\";\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ for(int i=1; i<=TEST_VARIANTS; i++)\n+ addTestConfiguration(TEST_NAME+i, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME+i));\n+ }\n+\n+ @Test\n+ public void testlmds() {\n+ runTest(TEST_NAME+\"1\");\n+ }\n+\n+ public void runTest(String testname) {\n+ boolean old_simplification = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\n+ boolean old_sum_product = OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES;\n+ boolean old_trans_exec_type = OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE;\n+ ExecMode oldPlatform = setExecMode(ExecMode.HYBRID);\n+\n+ long oldmem = InfrastructureAnalyzer.getLocalMaxMemory();\n+ long mem = 1024*1024*8;\n+ InfrastructureAnalyzer.setLocalMaxMemory(mem);\n+\n+ try {\n+ getAndLoadTestConfiguration(testname);\n+ fullDMLScriptName = getScript();\n+\n+ List<String> proArgs = new ArrayList<>();\n+\n+ proArgs.add(\"-explain\");\n+ proArgs.add(\"-stats\");\n+ proArgs.add(\"-args\");\n+ proArgs.add(output(\"R\"));\n+ programArgs = proArgs.toArray(new String[proArgs.size()]);\n+\n+ Lineage.resetInternalState();\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ HashMap<MatrixValue.CellIndex, Double> R = readDMLScalarFromOutputDir(\"R\");\n+ long numTsmm = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ long numMapmm = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+\n+ proArgs.add(\"-explain\");\n+ proArgs.add(\"-stats\");\n+ proArgs.add(\"-lineage\");\n+ proArgs.add(\"reuse_hybrid\");\n+ proArgs.add(\"-args\");\n+ proArgs.add(output(\"R\"));\n+ programArgs = 
proArgs.toArray(new String[proArgs.size()]);\n+\n+ Lineage.resetInternalState();\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ HashMap<MatrixValue.CellIndex, Double> R_reused = readDMLScalarFromOutputDir(\"R\");\n+ long numTsmm_r = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ long numMapmm_r= Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+\n+ //compare matrices\n+ boolean matchVal = TestUtils.compareMatrices(R, R_reused, 1e-6, \"Origin\", \"withPrefetch\");\n+ if (!matchVal)\n+ System.out.println(\"Value w/o reuse \"+R+\" w/ reuse \"+R_reused);\n+ Assert.assertTrue(\"Violated sp_tsmm: reuse count: \"+numTsmm_r+\" < \"+numTsmm, numTsmm_r < numTsmm);\n+ Assert.assertTrue(\"Violated sp_mapmm: reuse count: \"+numMapmm_r+\" < \"+numMapmm, numMapmm_r < numMapmm);\n+\n+ } finally {\n+ OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\n+ OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\n+ OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = old_trans_exec_type;\n+ resetExecMode(oldPlatform);\n+ InfrastructureAnalyzer.setLocalMaxMemory(oldmem);\n+ Recompiler.reinitRecompiler();\n+ }\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/rewrite/RewriteListTsmmCVTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/rewrite/RewriteListTsmmCVTest.java",
"diff": "@@ -128,7 +128,8 @@ public class RewriteListTsmmCVTest extends AutomatedTestBase\nif( instType == ExecType.CP )\nAssert.assertEquals(0, Statistics.getNoOfExecutedSPInst());\nif( rewrites ) {\n- boolean expectedReuse = lineage && instType == ExecType.CP;\n+ //boolean expectedReuse = lineage && instType == ExecType.CP;\n+ boolean expectedReuse = lineage;\nString[] codes = (instType==ExecType.CP) ?\nnew String[]{\"rbind\",\"tsmm\",\"ba+*\",\"n+\"} :\nnew String[]{\"sp_append\",\"sp_tsmm\",\"sp_mapmm\",\"sp_n+\"};\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/async/LineageReuseSpark1.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+SimlinRegDS = function(Matrix[Double] X, Matrix[Double] y, Double lamda, Integer N) return (Matrix[double] beta)\n+{\n+ # Reuse sp_tsmm and sp_mapmm if not future-based\n+ A = (t(X) %*% X) + diag(matrix(lamda, rows=N, cols=1));\n+ b = t(X) %*% y;\n+ beta = solve(A, b);\n+}\n+\n+no_lamda = 10;\n+\n+stp = (0.1 - 0.0001)/no_lamda;\n+lamda = 0.0001;\n+lim = 0.1;\n+\n+X = rand(rows=10000, cols=200, seed=42);\n+y = rand(rows=10000, cols=1, seed=43);\n+N = ncol(X);\n+R = matrix(0, rows=N, cols=no_lamda+2);\n+i = 1;\n+\n+while (lamda < lim)\n+{\n+ beta = SimlinRegDS(X, y, lamda, N);\n+ #beta = lmDS(X=X, y=y, reg=lamda);\n+ R[,i] = beta;\n+ lamda = lamda + stp;\n+ i = i + 1;\n+}\n+\n+R = sum(R);\n+write(R, $1, format=\"text\");\n+\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3470] Lineage-based local reuse of Spark actions
This patch extends the lineage cache framework to cache the results
of Spark instructions that return intermediates back to local.
Caching only the actions avoids unnecessarily triggering and fetching
all Spark intermediates. For now, we avoid caching the future-based
results.
Closes #1739 |
49,689 | 29.11.2022 00:05:01 | -3,600 | 03c546fe3392759b18529db0ccb3d26328b02e0f | Generalize selecting reusable Spark instructions
This patch allows putting any Matrix Object in the lineage cache
which does not have a valid RDD. This change makes it easier to
separate Spark instructions that return intermediates back to local.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/caching/CacheableData.java",
"diff": "@@ -443,6 +443,10 @@ public abstract class CacheableData<T extends CacheBlock<?>> extends Data\nrdd.setBackReference(this);\n}\n+ public boolean hasRDDHandle() {\n+ return _rddHandle != null && _rddHandle.hasBackReference();\n+ }\n+\npublic BroadcastObject<T> getBroadcastHandle() {\nreturn _bcHandle;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -579,6 +579,12 @@ public class LineageCache\ncontinue;\n}\n+ if (data instanceof MatrixObject && ((MatrixObject) data).hasRDDHandle()) {\n+ // Avoid triggering pre-matured Spark instruction chains\n+ removePlaceholder(item);\n+ continue;\n+ }\n+\nif (LineageCacheConfig.isOutputFederated(inst, data)) {\n// Do not cache federated outputs (in the coordinator)\n// Cannot skip putting the placeholder as the above is only known after execution\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -206,7 +206,7 @@ public class LineageCacheConfig\nboolean insttype = (inst instanceof ComputationCPInstruction\n|| inst instanceof ComputationFEDInstruction\n|| inst instanceof GPUInstruction\n- || (inst instanceof ComputationSPInstruction && isRightSparkOp(inst)))\n+ || inst instanceof ComputationSPInstruction)\n&& !(inst instanceof ListIndexingCPInstruction);\nboolean rightop = (ArrayUtils.contains(REUSE_OPCODES, inst.getOpcode())\n|| (inst.getOpcode().equals(\"append\") && isVectorAppend(inst, ec))\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"diff": "@@ -51,15 +51,22 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\n}\n@Test\n- public void testlmds() {\n- runTest(TEST_NAME+\"1\");\n+ public void testlmdsHB() {\n+ runTest(TEST_NAME+\"1\", ExecMode.HYBRID);\n}\n- public void runTest(String testname) {\n+ @Test\n+ public void testlmdsSP() {\n+ // Only reuse the actions\n+ runTest(TEST_NAME+\"1\", ExecMode.SPARK);\n+ }\n+\n+ public void runTest(String testname, ExecMode execMode) {\nboolean old_simplification = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\nboolean old_sum_product = OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES;\nboolean old_trans_exec_type = OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE;\nExecMode oldPlatform = setExecMode(ExecMode.HYBRID);\n+ rtplatform = execMode;\nlong oldmem = InfrastructureAnalyzer.getLocalMaxMemory();\nlong mem = 1024*1024*8;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/builtin/part2/BuiltinNaLocfTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/part2/BuiltinNaLocfTest.java",
"diff": "@@ -22,6 +22,7 @@ package org.apache.sysds.test.functions.builtin.part2;\nimport org.apache.commons.lang.ArrayUtils;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.common.Types.ExecType;\n+import org.apache.sysds.runtime.lineage.Lineage;\nimport org.apache.sysds.runtime.lineage.LineageCacheConfig.ReuseCacheType;\nimport org.apache.sysds.runtime.matrix.data.MatrixValue;\nimport org.apache.sysds.test.AutomatedTestBase;\n@@ -105,6 +106,7 @@ public class BuiltinNaLocfTest extends AutomatedTestBase {\ndouble[][] A = getRandomMatrix(rows, cols, -10, 10, 0.6, 7);\nwriteInputMatrixWithMTD(\"A\", A, true);\n+ Lineage.resetInternalState();\nrunTest(true, false, null, -1);\nrunRScript(true);\n//compare matrices\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3470] Generalize selecting reusable Spark instructions
This patch allows putting any Matrix Object in the lineage cache
which does not have a valid RDD. This change makes it easier to
separate Spark instructions that return intermediates back to local.
Closes #1742 |
49,720 | 30.11.2022 12:46:44 | -3,600 | 9f2aa4e1035c6f80ca09db27b2b740313b18b880 | [MINOR] Inclusion of error-based sub-sampling in cleaning pipelines | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/abstain.dml",
"new_path": "scripts/builtin/abstain.dml",
"diff": "@@ -41,7 +41,7 @@ return (Matrix[Double] Xout, Matrix[Double] Yout)\n{\nXout = X\nYout = Y\n- if(min(Y) != max(Y))\n+ if(min(Y) != max(Y) & max(Y) <= 2)\n{\nbetas = multiLogReg(X=X, Y=Y, icpt=1, reg=1e-4, maxi=100, maxii=0, verbose=verbose)\n[prob, yhat, accuracy] = multiLogRegPredict(X, betas, Y, FALSE)\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/fit_pipeline.dml",
"new_path": "scripts/builtin/fit_pipeline.dml",
"diff": "@@ -47,7 +47,7 @@ source(\"scripts/builtin/topk_cleaning.dml\") as topk;\nsource(\"scripts/builtin/bandit.dml\") as bandit;\ns_fit_pipeline = function(Frame[Unknown] trainData, Frame[Unknown] testData, Frame[Unknown] metaData = as.frame(\"NULL\"),\n- Frame[Unknown] pip, Frame[Unknown] applyFunc, Matrix[Double] hp, String evaluationFunc, Matrix[Double] evalFunHp,\n+ Frame[Unknown] pip, Frame[Unknown] applyFunc, Matrix[Double] hp, Integer cvk=3, String evaluationFunc, Matrix[Double] evalFunHp,\nBoolean isLastLabel = TRUE, Boolean correctTypos=FALSE)\nreturn (Matrix[Double] scores, Matrix[Double] cleanTrain, Matrix[Double] cleanTest, List[Unknown] externalState, List[Unknown] iState)\n{\n@@ -92,6 +92,11 @@ return (Matrix[Double] scores, Matrix[Double] cleanTrain, Matrix[Double] cleanTe\nhp_matrix = matrix(hp_width, rows=ncol(pip), cols=ncol(hp_width)/ncol(pip))\npipList = list(ph = pip, hp = hp_matrix, flags = no_of_flag_vars)\n+ [trainScore, evalFunHp] = bandit::crossV(X=eXtrain, y=eYtrain, cvk=cvk, evalFunHp=evalFunHp,\n+ pipList=pipList, metaList=metaList, evalFunc=evaluationFunc)\n+ print(\"train score cv: \"+toString(trainScore))\n+\n+\n# # # now test accuracy\n[eXtrain, eYtrain, eXtest, eYtest, a, b, c, d, iState] = executePipeline(pipeline=pip, Xtrain=eXtrain, Ytrain=eYtrain,\nXtest=eXtest, Ytest=eYtest, metaList=metaList, hyperParameters=hp_matrix, flagsCount=no_of_flag_vars, test=TRUE, verbose=FALSE)\n@@ -99,15 +104,15 @@ return (Matrix[Double] scores, Matrix[Double] cleanTrain, Matrix[Double] cleanTe\nif(max(eYtrain) == min(eYtrain))\nstop(\"Y contains only one class\")\n- score = eval(evaluationFunc, list(X=eXtrain, Y=eYtrain, Xtest=eXtrain, Ytest=eYtrain, Xorig=as.matrix(0), evalFunHp=evalFunHp))\n- trainAccuracy = as.scalar(score[1, 1])\n+ # score = eval(evaluationFunc, list(X=eXtrain, Y=eYtrain, Xtest=eXtrain, Ytest=eYtrain, Xorig=as.matrix(0), evalFunHp=evalFunHp))\n+ # trainAccuracy = as.scalar(score[1, 1])\nscore = eval(evaluationFunc, list(X=eXtrain, Y=eYtrain, Xtest=eXtest, Ytest=eYtest, Xorig=as.matrix(0), evalFunHp=evalFunHp))\ntestAccuracy = as.scalar(score[1, 1])\nscores = matrix(0, rows=1, cols=3)\nscores[1, 1] = dirtyScore\n- scores[1, 2] = trainAccuracy\n+ # scores[1, 2] = trainAccuracy\nscores[1, 3] = testAccuracy\ncleanTrain = cbind(eXtrain, eYtrain)\ncleanTest = cbind(eXtest, eYtest)\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/mice.dml",
"new_path": "scripts/builtin/mice.dml",
"diff": "@@ -142,7 +142,7 @@ m_mice= function(Matrix[Double] X, Matrix[Double] cMask, Integer iter = 3,\n}\nelse {\nbeta = multiLogReg(X=train_X, Y=train_Y, icpt = 1, tol = 0.0001, reg = 0.00001,\n- maxi = 50, maxii=50, verbose=FALSE)\n+ maxi = 20, maxii=20, verbose=FALSE)\n# predicting missing values\n[prob, pred, acc] = multiLogRegPredict(X=test_X, B=beta, Y = test_Y)\nprob = rowMaxs(prob)\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/tomeklink.dml",
"new_path": "scripts/builtin/tomeklink.dml",
"diff": "@@ -82,9 +82,11 @@ return (Matrix[Double] nn) {\nget_links = function(Matrix[Double] X, Matrix[Double] y, double majority_label)\nreturn (Matrix[Double] tomek_links) {\ntomek_links = matrix(-1, 1, 1)\n+ if(max(y) <= 2 ) {\nnn = get_nn(X)\nperm = table(seq(1, nrow(y)), nn, nrow(y), nrow(y))\nnn_labels = perm %*% y\nlinks = (y != majority_label) & (nn_labels == majority_label)\ntomek_links = (table(nn, 1, links, nrow(y), 1) > 0)\n}\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/topk_cleaning.dml",
"new_path": "scripts/builtin/topk_cleaning.dml",
"diff": "@@ -27,8 +27,10 @@ source(\"scripts/pipelines/scripts/enumerateLogical.dml\") as lg;\nsource(\"scripts/builtin/bandit.dml\") as bandit;\ns_topk_cleaning = function(Frame[Unknown] dataTrain, Frame[Unknown] dataTest = as.frame(\"NULL\"), Frame[Unknown] metaData = as.frame(\"NULL\"), Frame[Unknown] primitives,\n- Frame[Unknown] parameters, Frame[String] refSol = as.frame(\"NaN\"), String evaluationFunc, Matrix[Double] evalFunHp, Integer topK = 5, Integer resource_val = 20, Integer max_iter = 10,\n- Double sample = 1.0, Double expectedIncrease=1.0, Integer seed = -1, Boolean cv=TRUE, Integer cvk = 2, Boolean isLastLabel = TRUE, Boolean correctTypos=FALSE, Boolean enablePruning = FALSE)\n+ Frame[Unknown] parameters, Frame[String] refSol = as.frame(\"NaN\"), String evaluationFunc, Matrix[Double] evalFunHp, Integer topK = 5, Integer resource_val = 20,\n+ Integer max_iter = 10, Double lq = 0.1, Double uq=0.7, Double sample = 1.0, Double expectedIncrease=1.0, Integer seed = -1, Boolean cv=TRUE, Integer cvk = 2,\n+ Boolean isLastLabel = TRUE,\n+ Boolean correctTypos=FALSE, Boolean enablePruning = FALSE)\nreturn (Frame[Unknown] topKPipelines, Matrix[Double] topKHyperParams, Matrix[Double] topKScores,\nDouble dirtyScore, Matrix[Double] evalFunHp, Frame[Unknown] applyFunc)\n{\n@@ -71,13 +73,16 @@ s_topk_cleaning = function(Frame[Unknown] dataTrain, Frame[Unknown] dataTest = a\n[Xtrain, Xtest] = runStringPipeline(Xtrain, Xtest, schema, mask, cv, correctTypos, ctx)\n# # if mask has 1s then there are categorical features\nprint(\"---- feature transformations to numeric matrix\");\n- [eXtrain, eXtest] = recodeData(Xtrain, Xtest, mask, cv, \"recode\")\n+ [eXtrain, eXtest, metaR] = recodeData(Xtrain, Xtest, mask, cv, \"recode\")\n# # # do the early dropping\n# [eXtrain, eXtest, metaList] = featureDrop(eXtrain, eXtest, metaList, cv)\n# apply sampling on training data for pipeline enumeration\n# TODO why recoding/sampling twice (within getDirtyScore)\nprint(\"---- class-stratified sampling of feature matrix w/ f=\"+sample);\n- [eXtrain, eYtrain] = utils::doSample(eXtrain, eYtrain, sample, TRUE)\n+ if(sum(mask) > ncol(mask)/2 & nrow(eYtrain) >= 10000 & sample == 1.0)\n+ [eXtrain, eYtrain ] = utils::doErrorSample(eXtrain, eYtrain, lq, uq)\n+ else\n+ [eXtrain, eYtrain] = utils::doSample(eXtrain, eYtrain, sample, mask, metaR, TRUE)\nt5 = time(); print(\"---- finalized in: \"+(t5-t4)/1e9+\"s\");\n# # # create logical pipeline seeds\n@@ -110,14 +115,14 @@ s_topk_cleaning = function(Frame[Unknown] dataTrain, Frame[Unknown] dataTest = a\n[bestLogical, bestHp, con, refChanges, acc] = lg::enumerateLogical(X=eXtrain, y=eYtrain, Xtest=eXtest, ytest=eYtest,\ninitial_population=logical, refSol=refSol, seed = seed, max_iter=max_iter, metaList = metaList,\nevaluationFunc=evaluationFunc, evalFunHp=evalFunHp, primitives=primitives, param=parameters,\n- dirtyScore = (dirtyScore + expectedIncrease), cv=cv, cvk=cvk, verbose=TRUE, ctx=ctx)\n+ dirtyScore = (dirtyScore + expectedIncrease), cv=cv, cvk=cvk, verbose=FALSE, ctx=ctx)\nt6 = time(); print(\"---- finalized in: \"+(t6-t5)/1e9+\"s\");\n- topKPipelines = as.frame(\"NULL\"); topKHyperParams = matrix(0,0,0); topKScores = matrix(0,0,0); features = as.frame(\"NULL\")\n+ topKPipelines = as.frame(\"NULL\"); topKHyperParams = matrix(0,0,0); topKScores = matrix(0,0,0); applyFunc = as.frame(\"NULL\")\n# write(acc, output+\"/acc.csv\", format=\"csv\")\n# stop(\"end of enumlp\")\n[topKPipelines, topKHyperParams, topKScores, applyFunc] = bandit(X_train=eXtrain, Y_train=eYtrain, 
X_test=eXtest, Y_test=eYtest, metaList=metaList,\nevaluationFunc=evaluationFunc, evalFunHp=evalFunHp, lp=bestLogical, lpHp=bestHp, primitives=primitives, param=parameters, baseLineScore=dirtyScore,\n- k=topK, R=resource_val, cv=cv, cvk=cvk, ref=refChanges, seed=seed, enablePruning = enablePruning, verbose=TRUE);\n+ k=topK, R=resource_val, cv=cv, cvk=cvk, ref=refChanges, seed=seed, enablePruning = enablePruning, verbose=FALSE);\nt7 = time(); print(\"-- Cleaning - Enum Physical Pipelines: \"+(t7-t6)/1e9+\"s\");\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/pipelines/scripts/enumerateLogical.dml",
"new_path": "scripts/pipelines/scripts/enumerateLogical.dml",
"diff": "@@ -275,14 +275,15 @@ getOps = function( Frame[string] allOps, Frame[String] refSol, Integer dist, Int\n# # for regression class imbalance operators are also removed\nif(n > 0 & minValue >= 1 & dist <= 15) {\nallOps = map(allOps, \"x -> (!x.equals(\\\"dummycoding\\\") & !x.equals(\\\"frequencyEncode\\\")\n- & !x.equals(\\\"dbscan\\\") & !x.equals(\\\"WoE\\\") & !x.equals(\\\"pca\\\") & !x.equals(\\\"ppca\\\"))?x:\\\"0\\\"\")\n+ & !x.equals(\\\"dbscan\\\") & !x.equals(\\\"WoE\\\") & !x.equals(\\\"pca\\\") & !x.equals(\\\"ppca\\\") &\n+ !x.equals(\\\"abstain\\\") & !x.equals(\\\"underSampling\\\") & !x.equals(\\\"flipLabels\\\") & !x.equals(\\\"mice\\\") & !x.equals(\\\"SMOTE\\\"))?x:\\\"0\\\"\")\nref = frame([\"imputeByMean\", \"winsorize\", \"scale\", \"dummycoding\"], rows=1, cols=4)\n}\nelse {\nallOps = map(allOps, \"x -> (!x.equals(\\\"dummycoding\\\") & !x.equals(\\\"frequencyEncode\\\") & !x.equals(\\\"tomeklink\\\")\n& !x.equals(\\\"dbscan\\\") & !x.equals(\\\"WoE\\\") & !x.equals(\\\"pca\\\") & !x.equals(\\\"ppca\\\") &\n!x.equals(\\\"abstain\\\") & !x.equals(\\\"underSampling\\\") & !x.equals(\\\"flipLabels\\\") & !x.equals(\\\"mice\\\") & !x.equals(\\\"SMOTE\\\"))?x:\\\"0\\\"\")\n- ref = frame([\"imputeByMean\", \"winsorize\", \"scale\"], rows=1, cols=3)\n+ ref = frame([\"imputeByMean\", \"winsorize\", \"scale\", \"dummycoding\"], rows=1, cols=4)\n}\nif(as.scalar(refSol[1,1]) == \"NaN\")\nrefSol = ref\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/pipelines/scripts/utils.dml",
"new_path": "scripts/pipelines/scripts/utils.dml",
"diff": "@@ -50,14 +50,116 @@ return (Frame[Unknown] frameblock)\n}\n+# # #######################################################################\n+# # # Function for group-wise/stratified sampling from all classes in labelled dataset\n+# # # Inputs: The input dataset X, Y and sampling ratio between 0 and 1\n+# # # Output: sample X and Y\n+# # #######################################################################\n+# # doSample = function(Matrix[Double] eX, Matrix[Double] eY, Double ratio, Matrix[Double] mask, Frame[String] metaR, Boolean verbose = FALSE)\n+ # # return (Matrix[Double] sampledX, Matrix[Double] sampledY, Matrix[Double] filterMask)\n+# # {\n+ # # print(\"initial number of rows: \" +nrow(eX))\n+ # # # # # prepare feature vector for NB\n+ # # beta = multiLogReg(X=eX, Y=eY, icpt=1, reg=1e-3, tol=1e-6, maxi=50, maxii=50, verbose=FALSE);\n+ # # [trainProbs, yhat, accuracy] = multiLogRegPredict(eX, beta, eY, FALSE)\n+\n+ # # # # if the operation is binary make a fixed confidence of 0.9, for multi-class compute kappa\n+ # # # threshold = 0\n+ # # # if(max(eY) == 2)\n+ # # # threshold = quantile(rowMaxs(trainProbs), 0.95)\n+ # # kappa = 0.0\n+ # # # if(max(eY) <= 2) {\n+ # # # kappa = quantile(rowMaxs(trainProbs), 0.95)\n+ # # # print(\"for binary classification\")\n+ # # # }\n+ # # # else {\n+ # # # # compute kappa\n+ # # classFreA = table(eY, 1, 1, max(eY), 1)\n+ # # classFreP = table(yhat, 1, 1, max(eY), 1)\n+ # # probA = classFreA/nrow(eY)\n+ # # probP = classFreP/nrow(eY)\n+ # # condProb = sum(probA * probP)\n+ # # kappa = ((accuracy/100) - condProb) / (1 - condProb)\n+ # # print(\"kappa for multi-class\"+toString(kappa))\n+ # # # }\n+ # # print(\"threshold \"+toString(kappa))\n+ # # filterMask = rowMaxs(trainProbs) > kappa\n+ # # # sampledX = removeEmpty(target = eX, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\n+ # # # sampledY = removeEmpty(target = eY, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\n+ # # # print(\"filtered number of rows: \" +nrow(sampledX))\n+\n+ # # mask[1,1] = 0\n+ # # # # # stats of wrong\n+ # # maxUniques = max(colMaxs(replace(target=eX, pattern=NaN, replacement=1)) * mask)\n+ # # print(\"maxUniques \"+maxUniques)\n+ # # while(FALSE){}\n+ # # stats = matrix(0, rows=maxUniques, cols=ncol(mask))\n+ # # metaInfo = frame(0, rows=nrow(metaR), cols = 2*ncol(metaR))\n+ # # # m = 1\n+ # # for(i in 1:ncol(mask))\n+ # # {\n+ # # print(\"meta: \"+as.scalar(mask[1, i]))\n+ # # if(as.scalar(mask[1, i]) == 1)\n+ # # {\n+ # # problematic_cats = removeEmpty(target=eX[, i], margin = \"rows\", select = (yhat != eY))\n+ # # problematic_cats_sums = table(problematic_cats, 1)\n+ # # stats[1:nrow(problematic_cats_sums), i] = problematic_cats_sums\n+ # # stats_rowMax = rowMaxs(stats)\n+ # # stats2 = (stats == stats_rowMax) * (stats_rowMax >= 100)\n+ # # # colum = metaR[, i]\n+ # # # print(\"printing meta recoded\")\n+ # # # print(toString(colum))\n+ # # # while(FALSE){}\n+ # # # tmpValue = map(colum, \"x -> x.toLowerCase()\")\n+ # # # tmpIndex = map(colum, \"x -> x.toLowerCase()\")\n+ # # # metaInfo[1:nrow(tmpIndex), m] = tmpIndex\n+ # # # metaInfo[1:nrow(tmpIndex), m+1] = tmpValue\n+ # # # m = m + 2\n+ # # }\n+ # # }\n+ # # filterMask = eX[, 4] == 2 | eX[, 5] == 4 | eX[, 5] == 7 | eX[, 5] == 8\n+ # # filterMask = filterMask == 0\n+ # # # stats = cbind(seq(1, nrow(stats)), stats, stats_rowMax)\n+ # # # stats2 = cbind(seq(1, nrow(stats)), stats2)\n+ # # # print(\"print status: \\n\"+toString(stats))\n+ # # # print(\"print status 2: 
\\n\"+toString(stats2))\n+ # # # print(\"meta infor: \\n\"+toString(metaInfo, rows=10))\n+ # # # # create the filter mask\n+ # # print(\"rows taken after filtering the categories: \"+sum(filterMask))\n+ # # MIN_SAMPLE = 1000\n+ # # sampledX = eX\n+ # # sampledY = eY\n+ # # ratio = ifelse(nrow(eY) > 200000, 0.6, ratio)\n+ # # sampled = floor(nrow(eX) * ratio)\n+\n+ # # if(sampled > MIN_SAMPLE & ratio != 1.0)\n+ # # {\n+ # # sampleVec = sample(nrow(eX), sampled, FALSE, 23)\n+ # # P = table(seq(1, nrow(sampleVec)), sampleVec, nrow(sampleVec), nrow(eX))\n+ # # if((nrow(eY) > 1)) # for classification\n+ # # {\n+ # # sampledX = P %*% eX\n+ # # sampledY = P %*% eY\n+ # # }\n+ # # else if(nrow(eY) == 1) { # for clustering\n+ # # sampledX = P %*% eX\n+ # # sampledY = eY\n+ # # }\n+ # # print(\"sampled rows \"+nrow(sampledY)+\" out of \"+nrow(eY))\n+ # # }\n+\n+# # }\n+\n+\n#######################################################################\n# Function for group-wise/stratified sampling from all classes in labelled dataset\n# Inputs: The input dataset X, Y and sampling ratio between 0 and 1\n# Output: sample X and Y\n#######################################################################\n-doSample = function(Matrix[Double] eX, Matrix[Double] eY, Double ratio, Boolean verbose = FALSE)\n+doSample = function(Matrix[Double] eX, Matrix[Double] eY, Double ratio, Matrix[Double] mask, Frame[String] metaR, Boolean verbose = FALSE)\nreturn (Matrix[Double] sampledX, Matrix[Double] sampledY)\n{\n+\nMIN_SAMPLE = 1000\nsampledX = eX\nsampledY = eY\n@@ -79,9 +181,69 @@ doSample = function(Matrix[Double] eX, Matrix[Double] eY, Double ratio, Boolean\n}\nprint(\"sampled rows \"+nrow(sampledY)+\" out of \"+nrow(eY))\n}\n+}\n+\n+\n+doErrorSample = function(Matrix[Double] eX, Matrix[Double] eY, Double lq, Double uq)\n+ return (Matrix[Double] sampledX, Matrix[Double] sampledY)\n+{\n+ print(\"initial number of rows: \" +nrow(eX))\n+ print(\"quantiles: \"+lq+\" \"+uq)\n+ # # # prepare feature vector for NB\n+ beta = multiLogReg(X=eX, Y=eY, icpt=1, reg=1e-3, tol=1e-6, maxi=20, maxii=20, verbose=FALSE);\n+ [trainProbs, yhat, accuracy] = multiLogRegPredict(eX, beta, eY, FALSE)\n+\n+ # kappa = 0.0\n+\n+ # # compute kappa\n+ # classFreA = table(eY, 1, 1, max(eY), 1)\n+ # classFreP = table(yhat, 1, 1, max(eY), 1)\n+ # probA = classFreA/nrow(eY)\n+ # probP = classFreP/nrow(eY)\n+ # condProb = sum(probA * probP)\n+ # kappa = ((accuracy/100) - condProb) / (1 - condProb)\n+ # print(\"kappa for multi-class\"+toString(kappa))\n+ # filterMask = rowMaxs(trainProbs) < kappa\n+ # threshold = ifelse(sum(filterMask) <= 2, median(rowMaxs(trainProbs)), kappa)\n+ # threshold = ifelse(sum(filterMask) <= 2, median(rowMaxs(trainProbs)), kappa)\n+ # print(\"threshold \"+toString(threshold))\n+\n+ print(\"applying error filter\")\n+ # sampledX = removeEmpty(target = eX, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\n+ # sampledY = removeEmpty(target = eY, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\n+ filterMask = rowMaxs(trainProbs) < quantile(rowMaxs(trainProbs), lq) | rowMaxs(trainProbs) > quantile(rowMaxs(trainProbs), uq)\n+ sampledX = removeEmpty(target = eX, margin = \"rows\", select=filterMask)\n+ sampledY = removeEmpty(target = eY, margin = \"rows\", select=filterMask)\n+ print(\"sampled rows \"+nrow(sampledY)+\" out of \"+nrow(eY))\n}\n+# doErrorSample = function(Matrix[Double] eX, Matrix[Double] eY)\n+ # return (Matrix[Double] sampledX, Matrix[Double] sampledY, Matrix[Double] filterMask)\n+# {\n+ # 
print(\"initial number of rows: \" +nrow(eX))\n+ # # # # prepare feature vector for NB\n+ # beta = multiLogReg(X=eX, Y=eY, icpt=1, reg=1e-3, tol=1e-6, maxi=50, maxii=50, verbose=FALSE);\n+ # [trainProbs, yhat, accuracy] = multiLogRegPredict(eX, beta, eY, FALSE)\n+ # # # # stats of wrong\n+ # maxUniques = max(colMaxs(eX) * mask)\n+ # stats = matrix(0, rows=nrow(maxUniques), cols=ncol(mask))\n+ # for(i in 1:ncol(mask))\n+ # {\n+ # if(as.scalar(mask[1, i]) == 1)\n+ # {\n+ # problematic_cats = removeEmpty(target=eX[, i], margin = rows, select = (yhat != eY))\n+ # problematic_cats_sums = table(problematic_cats, 1)\n+ # stats[1:nrow(problematic_cats_sums), i] = problematic_cats_sums\n+ # }\n+\n+ # }\n+ # print(toString(stats))\n+\n+\n+# }\n+\n+\n# #######################################################################\n# # Wrapper of transformencode OHE call, to call inside eval as a function\n# # Inputs: The input dataset X, and mask of the columns\n@@ -132,13 +294,16 @@ stringProcessing = function(Frame[Unknown] data, Matrix[Double] mask,\nFrame[String] schema, Boolean CorrectTypos, List[Unknown] ctx = list(prefix=\"--\"))\nreturn(Frame[Unknown] data, List[Unknown] distanceMatrix, List[Unknown] dictionary, Matrix[Double] dateColIdx)\n{\n-\n+ hasCategory = sum(mask) > 0\nprefix = as.scalar(ctx[\"prefix\"]);\ndistanceMatrix = list()\ndictionary = list()\n+\n# step 1 do the case transformations\nprint(prefix+\" convert strings to lower case\");\n+ if(hasCategory) {\ndata = map(data, \"x -> x.toLowerCase()\")\n+\n# step 2 fix invalid lengths\n# q0 = 0.05\n# q1 = 0.95\n@@ -156,10 +321,11 @@ return(Frame[Unknown] data, List[Unknown] distanceMatrix, List[Unknown] dictiona\ndata = dropInvalidType(data, schema)\n+\n# step 5 porter stemming on all features\nprint(prefix+\" porter-stemming on all features\");\ndata = map(data, \"x -> PorterStemmer.stem(x)\", 0)\n-\n+ }\n# step 6 typo correction\nif(CorrectTypos)\n{\n@@ -204,13 +370,13 @@ return(Frame[Unknown] data)\ndata = map(data, \"x -> x.toLowerCase()\")\n# step 2 fix invalid lengths\n- q0 = 0.05\n- q1 = 0.95\n+ # q0 = 0.05\n+ # q1 = 0.95\n- [data, mask, qlow, qup] = fixInvalidLengths(data, mask, q0, q1)\n+ # [data, mask, qlow, qup] = fixInvalidLengths(data, mask, q0, q1)\n- # # step 3 fix swap values\n- data = valueSwap(data, schema)\n+ # # # step 3 fix swap values\n+ # data = valueSwap(data, schema)\n# step 3 drop invalid types\ndata = dropInvalidType(data, schema)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/pipelines/fit_pipelineTest.dml",
"new_path": "src/test/scripts/functions/pipelines/fit_pipelineTest.dml",
"diff": "@@ -60,7 +60,7 @@ testData = F[split+1:nrow(F),]\nprint(\"pipeline: \"+toString(pip[1]))\n-[result, trX, tsX, exState, iState] = fit_pipeline(trainData, testData, metaInfo, pip[1,], applyFunc[1,], hp[1,], \"evalClassification\", evalHp, TRUE, FALSE)\n+[result, trX, tsX, exState, iState] = fit_pipeline(trainData, testData, metaInfo, pip[1,], applyFunc[1,], hp[1,], 3, \"evalClassification\", evalHp, TRUE, FALSE)\neXtest = apply_pipeline(testData, metaInfo, pip[1,], applyFunc[1,], hp[1,], TRUE, exState, iState, FALSE)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/pipelines/topkLogicalTest.dml",
"new_path": "src/test/scripts/functions/pipelines/topkLogicalTest.dml",
"diff": "@@ -92,9 +92,9 @@ testY = eY[split+1:nrow(eX),]\n[bestLogical, bestHp, converged] = lg::enumerateLogical(X=trainX, y=trainY, Xtest=testX, ytest=testY,\n- initial_population=logical, seed = 42, max_iter=max_iter, metaList = metaList, evaluationFunc=\"evalML\", dirtyScore = dirtyScore + expectedIncrease,\n- evalFunHp=matrix(\"1 1e-3 1e-9 100\", rows=1, cols=4), primitives=primitives, param=param,\n- cv=FALSE, verbose=TRUE)\n+ initial_population=logical, seed = 42, max_iter=max_iter, metaList = metaList, evaluationFunc=\"evalML\",\n+ dirtyScore = dirtyScore + expectedIncrease, evalFunHp=matrix(\"1 1e-3 1e-9 100\", rows=1, cols=4), primitives=primitives,\n+ param=param, cv=FALSE, verbose=TRUE)\nprint(\"bestLogical \"+toString(bestLogical))\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Inclusion of error-based sub-sampling in cleaning pipelines |
49,689 | 30.11.2022 16:46:16 | -3,600 | 220fd54aace04f85764cdfbbcd8212ea2480d65f | Enable multi-threaded transformencode/apply
This patch changes the defaults of the Encode config flags
to use the multi-threaded build, apply, allocation and
compaction methods for CP transformencode/apply.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"new_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"diff": "@@ -152,7 +152,7 @@ public class DMLConfig\n_defaultVals.put(CP_PARALLEL_IO, \"true\" );\n_defaultVals.put(PARALLEL_TOKENIZE, \"false\");\n_defaultVals.put(PARALLEL_TOKENIZE_NUM_BLOCKS, \"64\");\n- _defaultVals.put(PARALLEL_ENCODE, \"false\" );\n+ _defaultVals.put(PARALLEL_ENCODE, \"true\" );\n_defaultVals.put(PARALLEL_ENCODE_STAGED, \"false\" );\n_defaultVals.put(PARALLEL_ENCODE_APPLY_BLOCKS, \"-1\");\n_defaultVals.put(PARALLEL_ENCODE_BUILD_BLOCKS, \"-1\");\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/MultiReturnParameterizedBuiltinFEDInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/fed/MultiReturnParameterizedBuiltinFEDInstruction.java",
"diff": "@@ -344,7 +344,9 @@ public class MultiReturnParameterizedBuiltinFEDInstruction extends ComputationFE\n.createEncoder(_spec, colNames, fb.getNumColumns(), null, _offset, _offset + fb.getNumColumns());\n// build necessary structures for encoding\n- encoder.build(fb, OptimizerUtils.getTransformNumThreads()); // FIXME skip equi-height sorting\n+ //encoder.build(fb, OptimizerUtils.getTransformNumThreads()); // FIXME skip equi-height sorting\n+ // FIXME: Enabling multithreading intermittently hangs\n+ encoder.build(fb, 1);\nfo.release();\n// create federated response\n@@ -375,7 +377,9 @@ public class MultiReturnParameterizedBuiltinFEDInstruction extends ComputationFE\n// offset is applied on the Worker to shift the local encoders to their respective column\n_encoder.applyColumnOffset();\n// apply transformation\n- MatrixBlock mbout = _encoder.apply(fb, OptimizerUtils.getTransformNumThreads());\n+ //MatrixBlock mbout = _encoder.apply(fb, OptimizerUtils.getTransformNumThreads());\n+ // FIXME: Enabling multithreading intermittently hangs\n+ MatrixBlock mbout = _encoder.apply(fb, 1);\n// create output matrix object\nMatrixObject mo = ExecutionContext.createMatrixObject(mbout);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3471] Enable multi-threaded transformencode/apply
This patch changes the defaults of the Encode config flags
to use the multi-threaded build, apply, allocation and
compaction methods for CP transformencode/apply.
Closes #1743 |
49,689 | 30.11.2022 18:59:13 | -3,600 | afa315d8315b238ad29edf5e64c3d6954afd8cfd | Rename triggerremote instruction to checkpoint_e
This patch extends the checkpoint instruction with an asynchronous
checkpoint_e (eager checkpoint) version. This operator eagerly triggers
a chain of Spark operations and persists the distributed results.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/ConfigurationManager.java",
"new_path": "src/main/java/org/apache/sysds/conf/ConfigurationManager.java",
"diff": "@@ -247,6 +247,10 @@ public class ConfigurationManager\nreturn (getDMLConfig().getBooleanValue(DMLConfig.ASYNC_SPARK_BROADCAST)\n|| OptimizerUtils.ASYNC_BROADCAST_SPARK);\n}\n+ public static boolean isCheckpointEnabled() {\n+ return (getDMLConfig().getBooleanValue(DMLConfig.ASYNC_SPARK_CHECKPOINT)\n+ || OptimizerUtils.ASYNC_CHECKPOINT_SPARK);\n+ }\npublic static ILinearize.DagLinearization getLinearizationOrder() {\nif (OptimizerUtils.MAX_PARALLELIZE_ORDER)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"new_path": "src/main/java/org/apache/sysds/conf/DMLConfig.java",
"diff": "@@ -130,6 +130,7 @@ public class DMLConfig\n/** Asynchronous triggering of Spark OPs and operator placement **/\npublic static final String ASYNC_SPARK_PREFETCH = \"sysds.async.prefetch\"; // boolean: enable asynchronous prefetching spark intermediates\npublic static final String ASYNC_SPARK_BROADCAST = \"sysds.async.broadcast\"; // boolean: enable asynchronous broadcasting CP intermediates\n+ public static final String ASYNC_SPARK_CHECKPOINT = \"sysds.async.checkpoint\"; // boolean: enable asynchronous persisting of Spark intermediates\n//internal config\npublic static final String DEFAULT_SHARED_DIR_PERMISSION = \"777\"; //for local fs and DFS\n@@ -202,6 +203,7 @@ public class DMLConfig\n_defaultVals.put(PRIVACY_CONSTRAINT_MOCK, null);\n_defaultVals.put(ASYNC_SPARK_PREFETCH, \"false\" );\n_defaultVals.put(ASYNC_SPARK_BROADCAST, \"false\" );\n+ _defaultVals.put(ASYNC_SPARK_CHECKPOINT, \"false\" );\n}\npublic DMLConfig() {\n@@ -454,7 +456,8 @@ public class DMLConfig\nPRINT_GPU_MEMORY_INFO, AVAILABLE_GPUS, SYNCHRONIZE_GPU, EAGER_CUDA_FREE, FLOATING_POINT_PRECISION,\nGPU_EVICTION_POLICY, LOCAL_SPARK_NUM_THREADS, EVICTION_SHADOW_BUFFERSIZE, GPU_MEMORY_ALLOCATOR,\nGPU_MEMORY_UTILIZATION_FACTOR, USE_SSL_FEDERATED_COMMUNICATION, DEFAULT_FEDERATED_INITIALIZATION_TIMEOUT,\n- FEDERATED_TIMEOUT, FEDERATED_MONITOR_FREQUENCY, ASYNC_SPARK_PREFETCH, ASYNC_SPARK_BROADCAST\n+ FEDERATED_TIMEOUT, FEDERATED_MONITOR_FREQUENCY, ASYNC_SPARK_PREFETCH, ASYNC_SPARK_BROADCAST,\n+ ASYNC_SPARK_CHECKPOINT\n};\nStringBuilder sb = new StringBuilder();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/hops/OptimizerUtils.java",
"new_path": "src/main/java/org/apache/sysds/hops/OptimizerUtils.java",
"diff": "@@ -284,6 +284,7 @@ public class OptimizerUtils\n*/\npublic static boolean ASYNC_PREFETCH_SPARK = false;\npublic static boolean ASYNC_BROADCAST_SPARK = false;\n+ public static boolean ASYNC_CHECKPOINT_SPARK = false;\n/**\n* Heuristic-based instruction ordering to maximize inter-operator parallelism.\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/Checkpoint.java",
"new_path": "src/main/java/org/apache/sysds/lops/Checkpoint.java",
"diff": "@@ -25,6 +25,7 @@ import org.apache.sysds.common.Types.ExecType;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\n+import java.util.Arrays;\n/**\n* Lop for checkpoint operations. For example, on Spark, the semantic of a checkpoint\n@@ -38,13 +39,15 @@ import org.apache.sysds.runtime.instructions.InstructionUtils;\n*/\npublic class Checkpoint extends Lop\n{\n- public static final String OPCODE = \"chkpoint\";\n+ public static final String DEFAULT_CP_OPCODE = \"chkpoint\";\n+ public static final String ASYNC_CP_OPCODE = \"chkpoint_e\";\npublic static final StorageLevel DEFAULT_STORAGE_LEVEL = StorageLevel.MEMORY_AND_DISK();\npublic static final StorageLevel SER_STORAGE_LEVEL = StorageLevel.MEMORY_AND_DISK_SER();\npublic static final boolean CHECKPOINT_SPARSE_CSR = true;\nprivate StorageLevel _storageLevel;\n+ private boolean _async = false;\n/**\n@@ -55,16 +58,22 @@ public class Checkpoint extends Lop\n* @param dt data type\n* @param vt value type\n* @param level storage level\n+ * @param isAsync true if eager and asynchronous checkpoint\n*/\n- public Checkpoint(Lop input, DataType dt, ValueType vt, String level) {\n+ public Checkpoint(Lop input, DataType dt, ValueType vt, String level, boolean isAsync) {\nsuper(Lop.Type.Checkpoint, dt, vt);\naddInput(input);\ninput.addOutput(this);\n_storageLevel = StorageLevel.fromString(level);\n+ _async = isAsync;\nlps.setProperties(inputs, ExecType.SPARK);\n}\n+ public Checkpoint(Lop input, DataType dt, ValueType vt, String level) {\n+ this(input, dt, vt, level, false);\n+ }\n+\npublic StorageLevel getStorageLevel()\n{\nreturn _storageLevel;\n@@ -89,7 +98,7 @@ public class Checkpoint extends Lop\nreturn InstructionUtils.concatOperands(\ngetExecType().name(),\n- OPCODE,\n+ _async ? ASYNC_CP_OPCODE : DEFAULT_CP_OPCODE,\ngetInputs().get(0).prepInputOperand(input1),\nprepOutputOperand(output),\ngetStorageLevelString(_storageLevel));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/compile/Dag.java",
"new_path": "src/main/java/org/apache/sysds/lops/compile/Dag.java",
"diff": "@@ -237,56 +237,6 @@ public class Dag<N extends Lop>\n}\n}\n- private static List<Lop> addPrefetchLop(List<Lop> nodes) {\n- List<Lop> nodesWithPrefetch = new ArrayList<>();\n-\n- //Find the Spark nodes with all CP outputs\n- for (Lop l : nodes) {\n- nodesWithPrefetch.add(l);\n- if (isPrefetchNeeded(l)) {\n- //TODO: No prefetch if the parent is placed right after the spark OP\n- //or push the parent further to increase parallelism\n- List<Lop> oldOuts = new ArrayList<>(l.getOutputs());\n- //Construct a Prefetch lop that takes this Spark node as a input\n- UnaryCP prefetch = new UnaryCP(l, OpOp1.PREFETCH, l.getDataType(), l.getValueType(), ExecType.CP);\n- for (Lop outCP : oldOuts) {\n- //Rewire l -> outCP to l -> Prefetch -> outCP\n- prefetch.addOutput(outCP);\n- outCP.replaceInput(l, prefetch);\n- l.removeOutput(outCP);\n- //FIXME: Rewire _inputParams when needed (e.g. GroupedAggregate)\n- }\n- //Place it immediately after the Spark lop in the node list\n- nodesWithPrefetch.add(prefetch);\n- }\n- }\n- return nodesWithPrefetch;\n- }\n-\n- private static List<Lop> addBroadcastLop(List<Lop> nodes) {\n- List<Lop> nodesWithBroadcast = new ArrayList<>();\n-\n- for (Lop l : nodes) {\n- nodesWithBroadcast.add(l);\n- if (isBroadcastNeeded(l)) {\n- List<Lop> oldOuts = new ArrayList<>(l.getOutputs());\n- //Construct a Broadcast lop that takes this Spark node as an input\n- UnaryCP bc = new UnaryCP(l, OpOp1.BROADCAST, l.getDataType(), l.getValueType(), ExecType.CP);\n- //FIXME: Wire Broadcast only with the necessary outputs\n- for (Lop outCP : oldOuts) {\n- //Rewire l -> outCP to l -> Broadcast -> outCP\n- bc.addOutput(outCP);\n- outCP.replaceInput(l, bc);\n- l.removeOutput(outCP);\n- //FIXME: Rewire _inputParams when needed (e.g. GroupedAggregate)\n- }\n- //Place it immediately after the Spark lop in the node list\n- nodesWithBroadcast.add(bc);\n- }\n- }\n- return nodesWithBroadcast;\n- }\n-\nprivate ArrayList<Instruction> doPlainInstructionGen(StatementBlock sb, List<Lop> nodes)\n{\n//prepare basic instruction sets\n@@ -319,42 +269,6 @@ public class Dag<N extends Lop>\n&& dnode.getOutputParameters().getLabel().equals(input.getOutputParameters().getLabel());\n}\n- private static boolean isPrefetchNeeded(Lop lop) {\n- // Run Prefetch for a Spark instruction if the instruction is a Transformation\n- // and the output is consumed by only CP instructions.\n- boolean transformOP = lop.getExecType() == ExecType.SPARK && lop.getAggType() != SparkAggType.SINGLE_BLOCK\n- // Always Action operations\n- && !(lop.getDataType() == DataType.SCALAR)\n- && !(lop instanceof MapMultChain) && !(lop instanceof PickByCount)\n- && !(lop instanceof MMZip) && !(lop instanceof CentralMoment)\n- && !(lop instanceof CoVariance)\n- // Not qualified for prefetching\n- && !(lop instanceof Checkpoint) && !(lop instanceof ReBlock)\n- && !(lop instanceof CSVReBlock)\n- // Cannot filter Transformation cases from Actions (FIXME)\n- && !(lop instanceof MMTSJ) && !(lop instanceof UAggOuterChain)\n- && !(lop instanceof ParameterizedBuiltin) && !(lop instanceof SpoofFused);\n-\n- //FIXME: Rewire _inputParams when needed (e.g. 
GroupedAggregate)\n- boolean hasParameterizedOut = lop.getOutputs().stream()\n- .anyMatch(out -> ((out instanceof ParameterizedBuiltin)\n- || (out instanceof GroupedAggregate)\n- || (out instanceof GroupedAggregateM)));\n- //TODO: support non-matrix outputs\n- return transformOP && !hasParameterizedOut\n- && lop.isAllOutputsCP() && lop.getDataType() == DataType.MATRIX;\n- }\n-\n- private static boolean isBroadcastNeeded(Lop lop) {\n- // Asynchronously broadcast a matrix if that is produced by a CP instruction,\n- // and at least one Spark parent needs to broadcast this intermediate (eg. mapmm)\n- boolean isBc = lop.getOutputs().stream()\n- .anyMatch(out -> (out.getBroadcastInput() == lop));\n- //TODO: Early broadcast objects that are bigger than a single block\n- //return isCP && isBc && lop.getDataTypes() == DataType.Matrix;\n- return isBc && lop.getDataType() == DataType.MATRIX;\n- }\n-\nprivate static List<Instruction> deleteUpdatedTransientReadVariables(StatementBlock sb, List<Lop> nodeV) {\nList<Instruction> insts = new ArrayList<>();\nif ( sb == null ) //return modifiable list\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"new_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"diff": "@@ -319,6 +319,31 @@ public interface ILinearize {\nreturn nodesWithBroadcast;\n}\n+ private static List<Lop> addAsyncEagerCheckpointLop(List<Lop> nodes) {\n+ List<Lop> nodesWithCheckpoint = new ArrayList<>();\n+ // Find the Spark action nodes\n+ for (Lop l : nodes) {\n+ if (isCheckpointNeeded(l)) {\n+ List<Lop> oldInputs = new ArrayList<>(l.getInputs());\n+ // Place a Checkpoint node just below this node (Spark action)\n+ for (Lop in : oldInputs) {\n+ if (in.getExecType() != ExecType.SPARK)\n+ continue;\n+ // Rewire in -> l to in -> Checkpoint -> l\n+ //UnaryCP checkpoint = new UnaryCP(in, OpOp1.TRIGREMOTE, in.getDataType(), in.getValueType(), ExecType.CP);\n+ Lop checkpoint = new Checkpoint(in, in.getDataType(), in.getValueType(),\n+ Checkpoint.getDefaultStorageLevelString(), true);\n+ checkpoint.addOutput(l);\n+ l.replaceInput(in, checkpoint);\n+ in.removeOutput(l);\n+ nodesWithCheckpoint.add(checkpoint);\n+ }\n+ }\n+ nodesWithCheckpoint.add(l);\n+ }\n+ return nodesWithCheckpoint;\n+ }\n+\nprivate static boolean isPrefetchNeeded(Lop lop) {\n// Run Prefetch for a Spark instruction if the instruction is a Transformation\n// and the output is consumed by only CP instructions.\n@@ -354,4 +379,28 @@ public interface ILinearize {\n//return isCP && isBc && lop.getDataTypes() == DataType.Matrix;\nreturn isBc && lop.getDataType() == DataType.MATRIX;\n}\n+\n+ private static boolean isCheckpointNeeded(Lop lop) {\n+ // Place checkpoint_e just before a Spark action (FIXME)\n+ boolean actionOP = lop.getExecType() == ExecType.SPARK\n+ && ((lop.getAggType() == SparkAggType.SINGLE_BLOCK)\n+ // Always Action operations\n+ || (lop.getDataType() == DataType.SCALAR)\n+ || (lop instanceof MapMultChain) || (lop instanceof PickByCount)\n+ || (lop instanceof MMZip) || (lop instanceof CentralMoment)\n+ || (lop instanceof CoVariance) || (lop instanceof MMTSJ))\n+ // Not qualified for Checkpoint\n+ && !(lop instanceof Checkpoint) && !(lop instanceof ReBlock)\n+ && !(lop instanceof CSVReBlock)\n+ // Cannot filter Transformation cases from Actions (FIXME)\n+ && !(lop instanceof UAggOuterChain)\n+ && !(lop instanceof ParameterizedBuiltin) && !(lop instanceof SpoofFused);\n+\n+ //FIXME: Rewire _inputParams when needed (e.g. GroupedAggregate)\n+ boolean hasParameterizedOut = lop.getOutputs().stream()\n+ .anyMatch(out -> ((out instanceof ParameterizedBuiltin)\n+ || (out instanceof GroupedAggregate)\n+ || (out instanceof GroupedAggregateM)));\n+ return actionOP && !hasParameterizedOut;\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"diff": "@@ -67,7 +67,6 @@ import org.apache.sysds.runtime.instructions.cp.SpoofCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.SqlCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.StringInitCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.TernaryCPInstruction;\n-import org.apache.sysds.runtime.instructions.cp.TriggerRemoteOpsCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.UaggOuterChainCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.UnaryCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.VariableCPInstruction;\n@@ -482,9 +481,6 @@ public class CPInstructionParser extends InstructionParser\ncase Broadcast:\nreturn BroadcastCPInstruction.parseInstruction(str);\n- case TrigRemote:\n- return TriggerRemoteOpsCPInstruction.parseInstruction(str);\n-\ndefault:\nthrow new DMLRuntimeException(\"Invalid CP Instruction Type: \" + cptype );\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/SPInstructionParser.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/SPInstructionParser.java",
"diff": "@@ -240,7 +240,8 @@ public class SPInstructionParser extends InstructionParser\nString2SPInstructionType.put(\"libsvmrblk\", SPType.LIBSVMReblock);\n// Spark-specific instructions\n- String2SPInstructionType.put( Checkpoint.OPCODE, SPType.Checkpoint);\n+ String2SPInstructionType.put( Checkpoint.DEFAULT_CP_OPCODE, SPType.Checkpoint);\n+ String2SPInstructionType.put( Checkpoint.ASYNC_CP_OPCODE, SPType.Checkpoint);\nString2SPInstructionType.put( Compression.OPCODE, SPType.Compression);\nString2SPInstructionType.put( DeCompression.OPCODE, SPType.DeCompression);\n"
},
{
"change_type": "RENAME",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/TriggerRemoteOperationsTask.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/TriggerCheckpointTask.java",
"diff": "@@ -25,10 +25,10 @@ import org.apache.sysds.lops.Checkpoint;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.utils.stats.SparkStatistics;\n-public class TriggerRemoteOperationsTask implements Runnable {\n+public class TriggerCheckpointTask implements Runnable {\nMatrixObject _remoteOperationsRoot;\n- public TriggerRemoteOperationsTask(MatrixObject mo) {\n+ public TriggerCheckpointTask(MatrixObject mo) {\n_remoteOperationsRoot = mo;\n}\n@@ -36,6 +36,7 @@ public class TriggerRemoteOperationsTask implements Runnable {\npublic void run() {\nboolean triggered = false;\nsynchronized (_remoteOperationsRoot) {\n+ // FIXME: Handle double execution\nif (_remoteOperationsRoot.isPendingRDDOps()) {\nJavaPairRDD<?, ?> rdd = _remoteOperationsRoot.getRDDHandle().getRDD();\nrdd.persist(Checkpoint.DEFAULT_STORAGE_LEVEL).count();\n@@ -45,6 +46,6 @@ public class TriggerRemoteOperationsTask implements Runnable {\n}\nif (DMLScript.STATISTICS && triggered)\n- SparkStatistics.incAsyncTriggerRemoteCount(1);\n+ SparkStatistics.incAsyncTriggerCheckpointCount(1);\n}\n}\n"
},
{
"change_type": "DELETE",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/TriggerRemoteOpsCPInstruction.java",
"new_path": null,
"diff": "-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements. See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership. The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License. You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied. See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.sysds.runtime.instructions.cp;\n-\n-import java.util.concurrent.Executors;\n-\n-import org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n-import org.apache.sysds.runtime.instructions.InstructionUtils;\n-import org.apache.sysds.runtime.matrix.operators.Operator;\n-import org.apache.sysds.runtime.util.CommonThreadPool;\n-\n-public class TriggerRemoteOpsCPInstruction extends UnaryCPInstruction {\n- private TriggerRemoteOpsCPInstruction(Operator op, CPOperand in, CPOperand out, String opcode, String istr) {\n- super(CPType.TrigRemote, op, in, out, opcode, istr);\n- }\n-\n- public static TriggerRemoteOpsCPInstruction parseInstruction (String str) {\n- InstructionUtils.checkNumFields(str, 2);\n- String[] parts = InstructionUtils.getInstructionPartsWithValueType(str);\n- String opcode = parts[0];\n- CPOperand in = new CPOperand(parts[1]);\n- CPOperand out = new CPOperand(parts[2]);\n- return new TriggerRemoteOpsCPInstruction(null, in, out, opcode, str);\n- }\n-\n- @Override\n- public void processInstruction(ExecutionContext ec) {\n- // TODO: Operator placement.\n- // Note for testing: write a method in the Dag class to place this operator\n- // after Spark MMRJ. Then execute PrefetchRDDTest.testAsyncSparkOPs3.\n- ec.setVariable(output.getName(), ec.getMatrixObject(input1));\n-\n- if (CommonThreadPool.triggerRemoteOPsPool == null)\n- CommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\n- CommonThreadPool.triggerRemoteOPsPool.submit(new TriggerRemoteOperationsTask(ec.getMatrixObject(output)));\n- }\n-}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CheckpointSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CheckpointSPInstruction.java",
"diff": "@@ -35,6 +35,7 @@ import org.apache.sysds.runtime.frame.data.FrameBlock;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.BooleanObject;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\n+import org.apache.sysds.runtime.instructions.cp.TriggerCheckpointTask;\nimport org.apache.sysds.runtime.instructions.spark.data.RDDObject;\nimport org.apache.sysds.runtime.instructions.spark.functions.CopyFrameBlockFunction;\nimport org.apache.sysds.runtime.instructions.spark.functions.CreateSparseBlockFunction;\n@@ -43,9 +44,12 @@ import org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\nimport org.apache.sysds.runtime.meta.DataCharacteristics;\n+import org.apache.sysds.runtime.util.CommonThreadPool;\nimport org.apache.sysds.runtime.util.UtilFunctions;\nimport org.apache.sysds.utils.Statistics;\n+import java.util.concurrent.Executors;\n+\npublic class CheckpointSPInstruction extends UnarySPInstruction {\n// default storage level\nprivate StorageLevel _level = null;\n@@ -72,6 +76,19 @@ public class CheckpointSPInstruction extends UnarySPInstruction {\npublic void processInstruction(ExecutionContext ec) {\nSparkExecutionContext sec = (SparkExecutionContext)ec;\n+ // Asynchronously trigger count() and persist this RDD\n+ // TODO: Synchronize. Avoid double execution\n+ if (getOpcode().equals(\"chkpoint_e\")) { //eager checkpoint\n+ // Inplace replace output matrix object with the input matrix object\n+ // We will never use the output of the Spark count call\n+ ec.setVariable(output.getName(), ec.getCacheableData(input1));\n+\n+ if (CommonThreadPool.triggerRemoteOPsPool == null)\n+ CommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\n+ CommonThreadPool.triggerRemoteOPsPool.submit(new TriggerCheckpointTask(ec.getMatrixObject(output)));\n+ return;\n+ }\n+\n// Step 1: early abort on non-existing or in-memory (cached) inputs\n// -------\n// (checkpoints are generated for all read only variables in loops; due to unbounded scoping and\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/stats/SparkStatistics.java",
"new_path": "src/main/java/org/apache/sysds/utils/stats/SparkStatistics.java",
"diff": "@@ -33,7 +33,7 @@ public class SparkStatistics {\nprivate static final LongAdder broadcastCount = new LongAdder();\nprivate static final LongAdder asyncPrefetchCount = new LongAdder();\nprivate static final LongAdder asyncBroadcastCount = new LongAdder();\n- private static final LongAdder asyncTriggerRemoteCount = new LongAdder();\n+ private static final LongAdder asyncTriggerCheckpointCount = new LongAdder();\npublic static boolean createdSparkContext() {\nreturn ctxCreateTime > 0;\n@@ -76,8 +76,8 @@ public class SparkStatistics {\nasyncBroadcastCount.add(c);\n}\n- public static void incAsyncTriggerRemoteCount(long c) {\n- asyncTriggerRemoteCount.add(c);\n+ public static void incAsyncTriggerCheckpointCount(long c) {\n+ asyncTriggerCheckpointCount.add(c);\n}\npublic static long getSparkCollectCount() {\n@@ -92,8 +92,8 @@ public class SparkStatistics {\nreturn asyncBroadcastCount.longValue();\n}\n- public static long getAsyncTriggerRemoteCount() {\n- return asyncTriggerRemoteCount.longValue();\n+ public static long getasyncTriggerCheckpointCount() {\n+ return asyncTriggerCheckpointCount.longValue();\n}\npublic static void reset() {\n@@ -106,7 +106,7 @@ public class SparkStatistics {\ncollectCount.reset();\nasyncPrefetchCount.reset();\nasyncBroadcastCount.reset();\n- asyncTriggerRemoteCount.reset();\n+ asyncTriggerCheckpointCount.reset();\n}\npublic static String displayStatistics() {\n@@ -122,8 +122,8 @@ public class SparkStatistics {\nparallelizeTime.longValue()*1e-9,\nbroadcastTime.longValue()*1e-9,\ncollectTime.longValue()*1e-9));\n- sb.append(\"Spark async. count (pf,bc,tr): \\t\" +\n- String.format(\"%d/%d/%d.\\n\", getAsyncPrefetchCount(), getAsyncBroadcastCount(), getAsyncTriggerRemoteCount()));\n+ sb.append(\"Spark async. count (pf,bc,cp): \\t\" +\n+ String.format(\"%d/%d/%d.\\n\", getAsyncPrefetchCount(), getAsyncBroadcastCount(), getasyncTriggerCheckpointCount()));\nreturn sb.toString();\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3443] Rename triggerremote instruction to checkpoint_e
This patch extends the checkpoint instruction with an asynchronous
checkpoint_e (eager checkpoint) version. This operator eagerly triggers
a chain of Spark operations and persists the distributed results.
Closes #1744 |
49,720 | 02.12.2022 12:01:25 | -3,600 | a685373083cc764d271d097c324dc34adda155ab | [MINOR] Cleanups in various cleaning scripts (prints, comments, validation checks etc.) | [
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/bandit.dml",
"new_path": "scripts/builtin/bandit.dml",
"diff": "@@ -286,7 +286,7 @@ run_with_hyperparam = function(Frame[Unknown] ph_pip, Integer r_i = 1, Matrix[Do\nhp = hp[, 2:totalVals]\napplyFunctions = allApplyFunctions[i]\nno_of_res = nrow(hp)\n- print(\"PIPELINE EXECUTION START ... \"+toString(op))\n+ # print(\"PIPELINE EXECUTION START ... \"+toString(op))\nhpForPruning = matrix(0, rows=1, cols=ncol(op))\nchangesByOp = matrix(0, rows=1, cols=ncol(op))\nmetaList2 = metaList; #ensure metaList is no result var\n@@ -341,9 +341,6 @@ run_with_hyperparam = function(Frame[Unknown] ph_pip, Integer r_i = 1, Matrix[Do\nchangesByPipMatrix = removeEmpty(target=changesByPipMatrix, margin=\"rows\", select = sel)\n}\n-\n-\n-\n# extract the hyper-parameters for pipelines\ngetHyperparam = function(Frame[Unknown] pipeline, Frame[Unknown] hpList, Integer no_of_res, Boolean default, Integer seed = -1, Boolean enablePruning)\nreturn (Matrix[Double] paramMatrix, Frame[Unknown] applyFunc, Integer no_of_res, Integer NUM_META_FLAGS)\n@@ -560,7 +557,6 @@ return (Double accuracy, Matrix[Double] evalFunHp, Matrix[Double] hpForPruning,\nallChanges = min(allChanges)\nchangesByOp = colMaxs(cvChanges)\naccuracy = mean(accuracyMatrix)\n- print(\"mean: \\n\"+toString(accuracyMatrix))\nprint(\"cv accuracy: \"+toString(accuracy))\n}\n@@ -590,8 +586,6 @@ return(Boolean execute)\nexecute = !(changeCount > 0)\n}\n-\n-\ngetParamMeta = function(Frame[Unknown] pipeline, Frame[Unknown] hpList)\nreturn(Frame[Unknown] applyFunc, Matrix[Double] indexes, Matrix[Double] paramCount)\n{\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/executePipeline.dml",
"new_path": "scripts/builtin/executePipeline.dml",
"diff": "@@ -184,43 +184,15 @@ return(Matrix[Double] X,Integer executeFlag)\n{\nif(sum(mask) == 0)\nexecuteFlag = 0\n- else {\n+ else if(sum(mask) != ncol(mask)) {\n# take categorical out and remove numerics\nX = removeEmpty(target=X, margin = \"cols\", select = mask)\n}\n+ else X = X\n}\nelse X = X\n}\n-# confirmMeta = function(Matrix[Double] X, Matrix[Double] mask)\n-# return (Matrix[Double] X)\n-# {\n- # if((sum(mask) > 0) & (ncol(X) == ncol(mask)))\n- # {\n- # # get the max + 1 for nan replacement\n- # nanMask = is.na(X)\n- # # replace nan\n- # X = replace(target = X, pattern = NaN, replacement = 9999)\n- # # take categorical out\n- # cat = removeEmpty(target=X, margin=\"cols\", select = mask)\n- # # round categorical (if there is any floating point)\n- # cat = round(cat)\n- # less_than_1_mask = cat < 1\n- # less_than_1 = less_than_1_mask * 9999\n- # cat = (cat * (less_than_1_mask == 0)) + less_than_1\n- # # reconstruct original X\n- # X = X * (mask == 0)\n- # q = table(seq(1, ncol(cat)), removeEmpty(target=seq(1, ncol(mask)), margin=\"rows\",\n- # select=t(mask)), ncol(cat), ncol(X))\n- # X = (cat %*% q) + X\n-\n- # # put nan back\n- # nanMask = replace(target = nanMask, pattern = 1, replacement = NaN)\n- # X = X + nanMask\n- # }\n-# }\n-\n-\nconfirmData = function(Matrix[Double] nX, Matrix[Double] originalX, Matrix[Double] mask, Integer dataFlag)\nreturn (Matrix[Double] X)\n{\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/multiLogReg.dml",
"new_path": "scripts/builtin/multiLogReg.dml",
"diff": "@@ -61,12 +61,15 @@ m_multiLogReg = function(Matrix[Double] X, Matrix[Double] Y, Int icpt = 2,\n# Robustness for datasets with missing values (causing NaN gradients)\nnumNaNs = sum(isNaN(X))\nif( numNaNs > 0 ) {\n+ if(verbose)\nprint(\"multiLogReg: matrix X contains \"+numNaNs+\" missing values, replacing with 0.\")\nX = replace(target=X, pattern=NaN, replacement=0);\n}\n# Introduce the intercept, shift and rescale the columns of X if needed\nif (icpt == 1 | icpt == 2) { # add the intercept column\n+ if(N == nrow(X))\n+ N = nrow(X)\nX = cbind (X, matrix (1, N, 1));\nD = ncol (X);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/builtin/topk_cleaning.dml",
"new_path": "scripts/builtin/topk_cleaning.dml",
"diff": "@@ -80,7 +80,7 @@ s_topk_cleaning = function(Frame[Unknown] dataTrain, Frame[Unknown] dataTest = a\n# TODO why recoding/sampling twice (within getDirtyScore)\nprint(\"---- class-stratified sampling of feature matrix w/ f=\"+sample);\nif(sum(mask) > ncol(mask)/2 & nrow(eYtrain) >= 10000 & sample == 1.0)\n- [eXtrain, eYtrain ] = utils::doErrorSample(eXtrain, eYtrain, lq, uq)\n+ [eXtrain, eYtrain ] = utils::doErrorSample(eXtrain, eYtrain, lq, uq, 3500)\nelse\n[eXtrain, eYtrain] = utils::doSample(eXtrain, eYtrain, sample, mask, metaR, TRUE)\nt5 = time(); print(\"---- finalized in: \"+(t5-t4)/1e9+\"s\");\n@@ -115,14 +115,14 @@ s_topk_cleaning = function(Frame[Unknown] dataTrain, Frame[Unknown] dataTest = a\n[bestLogical, bestHp, con, refChanges, acc] = lg::enumerateLogical(X=eXtrain, y=eYtrain, Xtest=eXtest, ytest=eYtest,\ninitial_population=logical, refSol=refSol, seed = seed, max_iter=max_iter, metaList = metaList,\nevaluationFunc=evaluationFunc, evalFunHp=evalFunHp, primitives=primitives, param=parameters,\n- dirtyScore = (dirtyScore + expectedIncrease), cv=cv, cvk=cvk, verbose=FALSE, ctx=ctx)\n+ dirtyScore = (dirtyScore + expectedIncrease), cv=cv, cvk=cvk, verbose=TRUE, ctx=ctx)\nt6 = time(); print(\"---- finalized in: \"+(t6-t5)/1e9+\"s\");\ntopKPipelines = as.frame(\"NULL\"); topKHyperParams = matrix(0,0,0); topKScores = matrix(0,0,0); applyFunc = as.frame(\"NULL\")\n# write(acc, output+\"/acc.csv\", format=\"csv\")\n# stop(\"end of enumlp\")\n[topKPipelines, topKHyperParams, topKScores, applyFunc] = bandit(X_train=eXtrain, Y_train=eYtrain, X_test=eXtest, Y_test=eYtest, metaList=metaList,\nevaluationFunc=evaluationFunc, evalFunHp=evalFunHp, lp=bestLogical, lpHp=bestHp, primitives=primitives, param=parameters, baseLineScore=dirtyScore,\n- k=topK, R=resource_val, cv=cv, cvk=cvk, ref=refChanges, seed=seed, enablePruning = enablePruning, verbose=FALSE);\n+ k=topK, R=resource_val, cv=cv, cvk=cvk, ref=refChanges, seed=seed, enablePruning = enablePruning, verbose=TRUE);\nt7 = time(); print(\"-- Cleaning - Enum Physical Pipelines: \"+(t7-t6)/1e9+\"s\");\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "scripts/pipelines/scripts/utils.dml",
"new_path": "scripts/pipelines/scripts/utils.dml",
"diff": "@@ -50,107 +50,6 @@ return (Frame[Unknown] frameblock)\n}\n-# # #######################################################################\n-# # # Function for group-wise/stratified sampling from all classes in labelled dataset\n-# # # Inputs: The input dataset X, Y and sampling ratio between 0 and 1\n-# # # Output: sample X and Y\n-# # #######################################################################\n-# # doSample = function(Matrix[Double] eX, Matrix[Double] eY, Double ratio, Matrix[Double] mask, Frame[String] metaR, Boolean verbose = FALSE)\n- # # return (Matrix[Double] sampledX, Matrix[Double] sampledY, Matrix[Double] filterMask)\n-# # {\n- # # print(\"initial number of rows: \" +nrow(eX))\n- # # # # # prepare feature vector for NB\n- # # beta = multiLogReg(X=eX, Y=eY, icpt=1, reg=1e-3, tol=1e-6, maxi=50, maxii=50, verbose=FALSE);\n- # # [trainProbs, yhat, accuracy] = multiLogRegPredict(eX, beta, eY, FALSE)\n-\n- # # # # if the operation is binary make a fixed confidence of 0.9, for multi-class compute kappa\n- # # # threshold = 0\n- # # # if(max(eY) == 2)\n- # # # threshold = quantile(rowMaxs(trainProbs), 0.95)\n- # # kappa = 0.0\n- # # # if(max(eY) <= 2) {\n- # # # kappa = quantile(rowMaxs(trainProbs), 0.95)\n- # # # print(\"for binary classification\")\n- # # # }\n- # # # else {\n- # # # # compute kappa\n- # # classFreA = table(eY, 1, 1, max(eY), 1)\n- # # classFreP = table(yhat, 1, 1, max(eY), 1)\n- # # probA = classFreA/nrow(eY)\n- # # probP = classFreP/nrow(eY)\n- # # condProb = sum(probA * probP)\n- # # kappa = ((accuracy/100) - condProb) / (1 - condProb)\n- # # print(\"kappa for multi-class\"+toString(kappa))\n- # # # }\n- # # print(\"threshold \"+toString(kappa))\n- # # filterMask = rowMaxs(trainProbs) > kappa\n- # # # sampledX = removeEmpty(target = eX, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\n- # # # sampledY = removeEmpty(target = eY, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\n- # # # print(\"filtered number of rows: \" +nrow(sampledX))\n-\n- # # mask[1,1] = 0\n- # # # # # stats of wrong\n- # # maxUniques = max(colMaxs(replace(target=eX, pattern=NaN, replacement=1)) * mask)\n- # # print(\"maxUniques \"+maxUniques)\n- # # while(FALSE){}\n- # # stats = matrix(0, rows=maxUniques, cols=ncol(mask))\n- # # metaInfo = frame(0, rows=nrow(metaR), cols = 2*ncol(metaR))\n- # # # m = 1\n- # # for(i in 1:ncol(mask))\n- # # {\n- # # print(\"meta: \"+as.scalar(mask[1, i]))\n- # # if(as.scalar(mask[1, i]) == 1)\n- # # {\n- # # problematic_cats = removeEmpty(target=eX[, i], margin = \"rows\", select = (yhat != eY))\n- # # problematic_cats_sums = table(problematic_cats, 1)\n- # # stats[1:nrow(problematic_cats_sums), i] = problematic_cats_sums\n- # # stats_rowMax = rowMaxs(stats)\n- # # stats2 = (stats == stats_rowMax) * (stats_rowMax >= 100)\n- # # # colum = metaR[, i]\n- # # # print(\"printing meta recoded\")\n- # # # print(toString(colum))\n- # # # while(FALSE){}\n- # # # tmpValue = map(colum, \"x -> x.toLowerCase()\")\n- # # # tmpIndex = map(colum, \"x -> x.toLowerCase()\")\n- # # # metaInfo[1:nrow(tmpIndex), m] = tmpIndex\n- # # # metaInfo[1:nrow(tmpIndex), m+1] = tmpValue\n- # # # m = m + 2\n- # # }\n- # # }\n- # # filterMask = eX[, 4] == 2 | eX[, 5] == 4 | eX[, 5] == 7 | eX[, 5] == 8\n- # # filterMask = filterMask == 0\n- # # # stats = cbind(seq(1, nrow(stats)), stats, stats_rowMax)\n- # # # stats2 = cbind(seq(1, nrow(stats)), stats2)\n- # # # print(\"print status: \\n\"+toString(stats))\n- # # # print(\"print status 2: 
\\n\"+toString(stats2))\n- # # # print(\"meta infor: \\n\"+toString(metaInfo, rows=10))\n- # # # # create the filter mask\n- # # print(\"rows taken after filtering the categories: \"+sum(filterMask))\n- # # MIN_SAMPLE = 1000\n- # # sampledX = eX\n- # # sampledY = eY\n- # # ratio = ifelse(nrow(eY) > 200000, 0.6, ratio)\n- # # sampled = floor(nrow(eX) * ratio)\n-\n- # # if(sampled > MIN_SAMPLE & ratio != 1.0)\n- # # {\n- # # sampleVec = sample(nrow(eX), sampled, FALSE, 23)\n- # # P = table(seq(1, nrow(sampleVec)), sampleVec, nrow(sampleVec), nrow(eX))\n- # # if((nrow(eY) > 1)) # for classification\n- # # {\n- # # sampledX = P %*% eX\n- # # sampledY = P %*% eY\n- # # }\n- # # else if(nrow(eY) == 1) { # for clustering\n- # # sampledX = P %*% eX\n- # # sampledY = eY\n- # # }\n- # # print(\"sampled rows \"+nrow(sampledY)+\" out of \"+nrow(eY))\n- # # }\n-\n-# # }\n-\n-\n#######################################################################\n# Function for group-wise/stratified sampling from all classes in labelled dataset\n# Inputs: The input dataset X, Y and sampling ratio between 0 and 1\n@@ -184,7 +83,7 @@ doSample = function(Matrix[Double] eX, Matrix[Double] eY, Double ratio, Matrix[D\n}\n-doErrorSample = function(Matrix[Double] eX, Matrix[Double] eY, Double lq, Double uq)\n+doErrorSample = function(Matrix[Double] eX, Matrix[Double] eY, Double lq, Double uq, Integer rowCount = 3500)\nreturn (Matrix[Double] sampledX, Matrix[Double] sampledY)\n{\nprint(\"initial number of rows: \" +nrow(eX))\n@@ -193,57 +92,22 @@ doErrorSample = function(Matrix[Double] eX, Matrix[Double] eY, Double lq, Double\nbeta = multiLogReg(X=eX, Y=eY, icpt=1, reg=1e-3, tol=1e-6, maxi=20, maxii=20, verbose=FALSE);\n[trainProbs, yhat, accuracy] = multiLogRegPredict(eX, beta, eY, FALSE)\n- # kappa = 0.0\n-\n- # # compute kappa\n- # classFreA = table(eY, 1, 1, max(eY), 1)\n- # classFreP = table(yhat, 1, 1, max(eY), 1)\n- # probA = classFreA/nrow(eY)\n- # probP = classFreP/nrow(eY)\n- # condProb = sum(probA * probP)\n- # kappa = ((accuracy/100) - condProb) / (1 - condProb)\n- # print(\"kappa for multi-class\"+toString(kappa))\n- # filterMask = rowMaxs(trainProbs) < kappa\n- # threshold = ifelse(sum(filterMask) <= 2, median(rowMaxs(trainProbs)), kappa)\n- # threshold = ifelse(sum(filterMask) <= 2, median(rowMaxs(trainProbs)), kappa)\n- # print(\"threshold \"+toString(threshold))\nprint(\"applying error filter\")\n- # sampledX = removeEmpty(target = eX, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\n- # sampledY = removeEmpty(target = eY, margin = \"rows\", select=(rowMaxs(trainProbs) < threshold))\nfilterMask = rowMaxs(trainProbs) < quantile(rowMaxs(trainProbs), lq) | rowMaxs(trainProbs) > quantile(rowMaxs(trainProbs), uq)\n+ delta = 0.001\n+ while(sum(filterMask) < rowCount & nrow(eY) > rowCount)\n+ {\n+ lq = lq + delta\n+ uq = uq - delta\n+ filterMask = rowMaxs(trainProbs) < quantile(rowMaxs(trainProbs), lq) | rowMaxs(trainProbs) > quantile(rowMaxs(trainProbs), uq)\n+ }\nsampledX = removeEmpty(target = eX, margin = \"rows\", select=filterMask)\nsampledY = removeEmpty(target = eY, margin = \"rows\", select=filterMask)\nprint(\"sampled rows \"+nrow(sampledY)+\" out of \"+nrow(eY))\n}\n-# doErrorSample = function(Matrix[Double] eX, Matrix[Double] eY)\n- # return (Matrix[Double] sampledX, Matrix[Double] sampledY, Matrix[Double] filterMask)\n-# {\n- # print(\"initial number of rows: \" +nrow(eX))\n- # # # # prepare feature vector for NB\n- # beta = multiLogReg(X=eX, Y=eY, icpt=1, reg=1e-3, tol=1e-6, maxi=50, 
maxii=50, verbose=FALSE);\n- # [trainProbs, yhat, accuracy] = multiLogRegPredict(eX, beta, eY, FALSE)\n- # # # # stats of wrong\n- # maxUniques = max(colMaxs(eX) * mask)\n- # stats = matrix(0, rows=nrow(maxUniques), cols=ncol(mask))\n- # for(i in 1:ncol(mask))\n- # {\n- # if(as.scalar(mask[1, i]) == 1)\n- # {\n- # problematic_cats = removeEmpty(target=eX[, i], margin = rows, select = (yhat != eY))\n- # problematic_cats_sums = table(problematic_cats, 1)\n- # stats[1:nrow(problematic_cats_sums), i] = problematic_cats_sums\n- # }\n-\n- # }\n- # print(toString(stats))\n-\n-\n-# }\n-\n-\n# #######################################################################\n# # Wrapper of transformencode OHE call, to call inside eval as a function\n# # Inputs: The input dataset X, and mask of the columns\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/pipelines/intermediates/classification/applyFunc.csv",
"new_path": "src/test/scripts/functions/pipelines/intermediates/classification/applyFunc.csv",
"diff": "-forward_fill,imputeByMeanApply,NA,imputeByMedianApply,forward_fill,NA,imputeByMeanApply,dummycodingApply,0,0,0,0,0,0,0,0,0,0\n-NA,forward_fill,imputeByMeanApply,imputeByMeanApply,imputeByMedianApply,forward_fill,NA,NA,imputeByMedianApply,forward_fill,NA,imputeByMeanApply,dummycodingApply,0,0,0,0,0\n-NA,forward_fill,imputeByMeanApply,imputeByMeanApply,imputeByMedianApply,forward_fill,NA,NA,imputeByMedianApply,forward_fill,NA,imputeByMeanApply,dummycodingApply,0,0,0,0,0\n+forward_fill,winsorizeApply,imputeByMedianApply,NA,dummycodingApply,0,0,0,0,0,0,0,0,0,0,0,0,0\n+forward_fill,winsorizeApply,imputeByMedianApply,NA,dummycodingApply,0,0,0,0,0,0,0,0,0,0,0,0,0\n+forward_fill,winsorizeApply,imputeByMedianApply,NA,winsorizeApply,forward_fill,imputeByMeanApply,dummycodingApply,0,0,0,0,0,0,0,0,0,0\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/pipelines/intermediates/classification/bestAcc.csv",
"new_path": "src/test/scripts/functions/pipelines/intermediates/classification/bestAcc.csv",
"diff": "-86.23188405797102\n-84.23913043478261\n-83.87681159420289\n+73.731884057971\n+73.731884057971\n+73.731884057971\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/pipelines/intermediates/classification/hp.csv",
"new_path": "src/test/scripts/functions/pipelines/intermediates/classification/hp.csv",
"diff": "-56.0,1.0,1.0,0,0,0,1.0,2.0,0,0,1.0,0,0,0,2.0,1.0,0.49421066338576347,0,0,1.0,0,2.0,0,0,1.0,0,0,0,2.0,1.0,1.0,0,0,0,1.0,2.0,1.0,0.49421066338576347,0,0,1.0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n-91.0,1.0,0.3140125178611014,0,0,1.0,0,2.0,1.0,1.0,0,0,0,1.0,2.0,0,0,1.0,0,0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,1.0,0,0,0,2.0,1.0,1.0,0,0,0,1.0,2.0,1.0,0.3140125178611014,0,0,1.0,0,2.0,1.0,0.3140125178611014,0,0,1.0,0,2.0,0,0,1.0,0,0,0,2.0,1.0,1.0,0,0,0,1.0,2.0,1.0,0.3140125178611014,0,0,1.0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n-91.0,1.0,0.49421066338576347,0,0,1.0,0,2.0,1.0,1.0,0,0,0,1.0,2.0,0,0,1.0,0,0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,1.0,0,0,0,2.0,1.0,1.0,0,0,0,1.0,2.0,1.0,0.49421066338576347,0,0,1.0,0,2.0,1.0,0.49421066338576347,0,0,1.0,0,2.0,0,0,1.0,0,0,0,2.0,1.0,1.0,0,0,0,1.0,2.0,1.0,0.49421066338576347,0,0,1.0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n+40.0,1.0,1.0,0,0,0,0,1.0,2.0,2.0,0.05,0.95,0,0,0,1.0,0,0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,1.0,0,2.0,0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n+40.0,1.0,1.0,0,0,0,0,1.0,2.0,2.0,0.05,0.95,0,0,0,1.0,0,0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,1.0,0,2.0,0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n+64.0,1.0,1.0,0,0,0,0,1.0,2.0,2.0,0.05,0.95,0,0,0,1.0,0,0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,1.0,0,2.0,2
.0,0.05,0.95,0,0,0,1.0,0,1.0,1.0,0,0,0,0,1.0,2.0,0,0,0,1.0,0,0,0,2.0,0,0,0,1.0,0,0,0,2.0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/pipelines/intermediates/classification/pip.csv",
"new_path": "src/test/scripts/functions/pipelines/intermediates/classification/pip.csv",
"diff": "-forward_fill,imputeByMean,underSampling,imputeByMedian,forward_fill,underSampling,imputeByMean,dummycoding,0,0,0,0,0,0,0,0,0,0\n-underSampling,forward_fill,imputeByMean,imputeByMean,imputeByMedian,forward_fill,underSampling,underSampling,imputeByMedian,forward_fill,underSampling,imputeByMean,dummycoding,0,0,0,0,0\n-underSampling,forward_fill,imputeByMean,imputeByMean,imputeByMedian,forward_fill,underSampling,underSampling,imputeByMedian,forward_fill,underSampling,imputeByMean,dummycoding,0,0,0,0,0\n+forward_fill,winsorize,imputeByMedian,tomeklink,dummycoding,0,0,0,0,0,0,0,0,0,0,0,0,0\n+forward_fill,winsorize,imputeByMedian,tomeklink,dummycoding,0,0,0,0,0,0,0,0,0,0,0,0,0\n+forward_fill,winsorize,imputeByMedian,tomeklink,winsorize,forward_fill,imputeByMean,dummycoding,0,0,0,0,0,0,0,0,0,0\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Cleanups in various cleaning scripts (prints, comments, validation checks etc.) |
49,689 | 02.12.2022 20:27:45 | -3,600 | 912908316a29dbbecfd89c121117cfc32f740a2a | Lineage-based reuse of prefetch instruction
This patch enables caching and reusing prefetch instruction
outputs. This is the first step towards reusing asynchronous
operators.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"new_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"diff": "@@ -44,6 +44,7 @@ import org.apache.sysds.lops.CSVReBlock;\nimport org.apache.sysds.lops.CentralMoment;\nimport org.apache.sysds.lops.Checkpoint;\nimport org.apache.sysds.lops.CoVariance;\n+import org.apache.sysds.lops.DataGen;\nimport org.apache.sysds.lops.GroupedAggregate;\nimport org.apache.sysds.lops.GroupedAggregateM;\nimport org.apache.sysds.lops.Lop;\n@@ -359,7 +360,7 @@ public interface ILinearize {\n&& !(lop instanceof CoVariance)\n// Not qualified for prefetching\n&& !(lop instanceof Checkpoint) && !(lop instanceof ReBlock)\n- && !(lop instanceof CSVReBlock)\n+ && !(lop instanceof CSVReBlock) && !(lop instanceof DataGen)\n// Cannot filter Transformation cases from Actions (FIXME)\n&& !(lop instanceof MMTSJ) && !(lop instanceof UAggOuterChain)\n&& !(lop instanceof ParameterizedBuiltin) && !(lop instanceof SpoofFused);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/PrefetchCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/PrefetchCPInstruction.java",
"diff": "@@ -23,6 +23,8 @@ import java.util.concurrent.Executors;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\n+import org.apache.sysds.runtime.lineage.LineageCacheConfig;\n+import org.apache.sysds.runtime.lineage.LineageItem;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\nimport org.apache.sysds.runtime.util.CommonThreadPool;\n@@ -44,6 +46,7 @@ public class PrefetchCPInstruction extends UnaryCPInstruction {\npublic void processInstruction(ExecutionContext ec) {\n// TODO: handle non-matrix objects\nec.setVariable(output.getName(), ec.getMatrixObject(input1));\n+ LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? this.getLineageItem(ec).getValue() : null;\n// Note, a Prefetch instruction doesn't guarantee an asynchronous execution.\n// If the next instruction which takes this output as an input comes before\n@@ -51,6 +54,6 @@ public class PrefetchCPInstruction extends UnaryCPInstruction {\n// In that case this Prefetch instruction will act like a NOOP.\nif (CommonThreadPool.triggerRemoteOPsPool == null)\nCommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\n- CommonThreadPool.triggerRemoteOPsPool.submit(new TriggerPrefetchTask(ec.getMatrixObject(output)));\n+ CommonThreadPool.triggerRemoteOPsPool.submit(new TriggerPrefetchTask(ec.getMatrixObject(output), li));\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/TriggerPrefetchTask.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/TriggerPrefetchTask.java",
"diff": "@@ -22,18 +22,28 @@ package org.apache.sysds.runtime.instructions.cp;\nimport org.apache.sysds.api.DMLScript;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedStatistics;\n+import org.apache.sysds.runtime.lineage.LineageCache;\n+import org.apache.sysds.runtime.lineage.LineageItem;\nimport org.apache.sysds.utils.stats.SparkStatistics;\npublic class TriggerPrefetchTask implements Runnable {\nMatrixObject _prefetchMO;\n+ LineageItem _inputLi;\npublic TriggerPrefetchTask(MatrixObject mo) {\n_prefetchMO = mo;\n+ _inputLi = null;\n+ }\n+\n+ public TriggerPrefetchTask(MatrixObject mo, LineageItem li) {\n+ _prefetchMO = mo;\n+ _inputLi = li;\n}\n@Override\npublic void run() {\nboolean prefetched = false;\n+ long t1 = System.nanoTime();\nsynchronized (_prefetchMO) {\n// Having this check inside the critical section\n// safeguards against concurrent rmVar.\n@@ -44,6 +54,11 @@ public class TriggerPrefetchTask implements Runnable {\nprefetched = true;\n}\n}\n+\n+ // Save the collected intermediate in the lineage cache\n+ if (_inputLi != null)\n+ LineageCache.putValueAsyncOp(_inputLi, _prefetchMO, prefetched, t1);\n+\nif (DMLScript.STATISTICS && prefetched) {\nif (_prefetchMO.isFederated())\nFederatedStatistics.incAsyncPrefetchCount(1);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -38,6 +38,7 @@ import org.apache.sysds.runtime.controlprogram.federated.FederatedStatistics;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedUDF;\nimport org.apache.sysds.runtime.instructions.CPInstructionParser;\nimport org.apache.sysds.runtime.instructions.Instruction;\n+import org.apache.sysds.runtime.instructions.cp.BroadcastCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.CPInstruction.CPType;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.cp.ComputationCPInstruction;\n@@ -45,6 +46,7 @@ import org.apache.sysds.runtime.instructions.cp.Data;\nimport org.apache.sysds.runtime.instructions.cp.MMTSJCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MultiReturnBuiltinCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ParameterizedBuiltinCPInstruction;\n+import org.apache.sysds.runtime.instructions.cp.PrefetchCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.ScalarObject;\nimport org.apache.sysds.runtime.instructions.fed.ComputationFEDInstruction;\nimport org.apache.sysds.runtime.instructions.gpu.GPUInstruction;\n@@ -579,6 +581,10 @@ public class LineageCache\ncontinue;\n}\n+ if (inst instanceof PrefetchCPInstruction || inst instanceof BroadcastCPInstruction)\n+ // For the async. instructions, caching is handled separately by the tasks\n+ continue;\n+\nif (data instanceof MatrixObject && ((MatrixObject) data).hasRDDHandle()) {\n// Avoid triggering pre-matured Spark instruction chains\nremovePlaceholder(item);\n@@ -637,6 +643,52 @@ public class LineageCache\n}\n}\n+ public static void putValueAsyncOp(LineageItem instLI, Data data, boolean prefetched, long starttime)\n+ {\n+ if (ReuseCacheType.isNone())\n+ return;\n+ if (!prefetched) //prefetching was not successful\n+ return;\n+\n+ synchronized( _cache )\n+ {\n+ if (!probe(instLI))\n+ return;\n+\n+ long computetime = System.nanoTime() - starttime;\n+ LineageCacheEntry centry = _cache.get(instLI);\n+ if(!(data instanceof MatrixObject) && !(data instanceof ScalarObject)) {\n+ // Reusable instructions can return a frame (rightIndex). Remove placeholders.\n+ removePlaceholder(instLI);\n+ return;\n+ }\n+\n+ MatrixBlock mb = (data instanceof MatrixObject) ?\n+ ((MatrixObject)data).acquireReadAndRelease() : null;\n+ long size = mb != null ? mb.getInMemorySize() : ((ScalarObject)data).getSize();\n+\n+ // remove the placeholder if the entry is bigger than the cache.\n+ if (size > LineageCacheEviction.getCacheLimit()) {\n+ removePlaceholder(instLI);\n+ return;\n+ }\n+\n+ // place the data\n+ if (data instanceof MatrixObject)\n+ centry.setValue(mb, computetime);\n+ else if (data instanceof ScalarObject)\n+ centry.setValue((ScalarObject)data, computetime);\n+\n+ if (DMLScript.STATISTICS && LineageCacheEviction._removelist.containsKey(centry._key)) {\n+ // Add to missed compute time\n+ LineageCacheStatistics.incrementMissedComputeTime(centry._computeTime);\n+ }\n+\n+ //maintain order for eviction\n+ LineageCacheEviction.addEntry(centry);\n+ }\n+ }\n+\npublic static void putValue(List<DataIdentifier> outputs,\nLineageItem[] liInputs, String name, ExecutionContext ec, long computetime)\n{\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -54,7 +54,7 @@ public class LineageCacheConfig\n\"^\", \"uamax\", \"uark+\", \"uacmean\", \"eigen\", \"ctableexpand\", \"replace\",\n\"^2\", \"uack+\", \"tak+*\", \"uacsqk+\", \"uark+\", \"n+\", \"uarimax\", \"qsort\",\n\"qpick\", \"transformapply\", \"uarmax\", \"n+\", \"-*\", \"castdtm\", \"lowertri\",\n- \"mapmm\", \"cpmm\"\n+ \"mapmm\", \"cpmm\", \"prefetch\"\n//TODO: Reuse everything.\n};\nprivate static String[] REUSE_OPCODES = new String[] {};\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"diff": "@@ -28,6 +28,7 @@ package org.apache.sysds.test.functions.async;\nimport org.apache.sysds.hops.recompile.Recompiler;\nimport org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\nimport org.apache.sysds.runtime.lineage.Lineage;\n+ import org.apache.sysds.runtime.lineage.LineageCacheConfig;\nimport org.apache.sysds.runtime.matrix.data.MatrixValue;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.apache.sysds.test.TestConfiguration;\n@@ -40,7 +41,7 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nprotected static final String TEST_DIR = \"functions/async/\";\nprotected static final String TEST_NAME = \"LineageReuseSpark\";\n- protected static final int TEST_VARIANTS = 1;\n+ protected static final int TEST_VARIANTS = 2;\nprotected static String TEST_CLASS_DIR = TEST_DIR + LineageReuseSparkTest.class.getSimpleName() + \"/\";\n@Override\n@@ -52,16 +53,21 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\n@Test\npublic void testlmdsHB() {\n- runTest(TEST_NAME+\"1\", ExecMode.HYBRID);\n+ runTest(TEST_NAME+\"1\", ExecMode.HYBRID, 1);\n}\n@Test\npublic void testlmdsSP() {\n// Only reuse the actions\n- runTest(TEST_NAME+\"1\", ExecMode.SPARK);\n+ runTest(TEST_NAME+\"1\", ExecMode.SPARK, 1);\n}\n- public void runTest(String testname, ExecMode execMode) {\n+ @Test\n+ public void testReusePrefetch() {\n+ runTest(TEST_NAME+\"2\", ExecMode.HYBRID, 2);\n+ }\n+\n+ public void runTest(String testname, ExecMode execMode, int testId) {\nboolean old_simplification = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\nboolean old_sum_product = OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES;\nboolean old_trans_exec_type = OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE;\n@@ -85,31 +91,52 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nprogramArgs = proArgs.toArray(new String[proArgs.size()]);\nLineage.resetInternalState();\n+ if (testId == 2) enablePrefetch();\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ disablePrefetch();\nHashMap<MatrixValue.CellIndex, Double> R = readDMLScalarFromOutputDir(\"R\");\n- long numTsmm = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n- long numMapmm = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+ long numTsmm = 0;\n+ long numMapmm = 0;\n+ if (testId == 1) {\n+ numTsmm = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ numMapmm = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+ }\n+ long numPrefetch = 0;\n+ if (testId == 2) numPrefetch = Statistics.getCPHeavyHitterCount(\"prefetch\");\n+ proArgs.clear();\nproArgs.add(\"-explain\");\nproArgs.add(\"-stats\");\nproArgs.add(\"-lineage\");\n- proArgs.add(\"reuse_hybrid\");\n+ proArgs.add(LineageCacheConfig.ReuseCacheType.REUSE_FULL.name().toLowerCase());\nproArgs.add(\"-args\");\nproArgs.add(output(\"R\"));\nprogramArgs = proArgs.toArray(new String[proArgs.size()]);\nLineage.resetInternalState();\n+ if (testId == 2) enablePrefetch();\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ disablePrefetch();\nHashMap<MatrixValue.CellIndex, Double> R_reused = readDMLScalarFromOutputDir(\"R\");\n- long numTsmm_r = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n- long numMapmm_r= Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+ long numTsmm_r = 0;\n+ long numMapmm_r = 0;\n+ if (testId == 1) {\n+ numTsmm_r = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ numMapmm_r = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+ }\n+ long numPrefetch_r = 0;\n+ if (testId == 2) numPrefetch_r = 
Statistics.getCPHeavyHitterCount(\"prefetch\");\n//compare matrices\nboolean matchVal = TestUtils.compareMatrices(R, R_reused, 1e-6, \"Origin\", \"withPrefetch\");\nif (!matchVal)\nSystem.out.println(\"Value w/o reuse \"+R+\" w/ reuse \"+R_reused);\n- Assert.assertTrue(\"Violated sp_tsmm: reuse count: \"+numTsmm_r+\" < \"+numTsmm, numTsmm_r < numTsmm);\n- Assert.assertTrue(\"Violated sp_mapmm: reuse count: \"+numMapmm_r+\" < \"+numMapmm, numMapmm_r < numMapmm);\n+ if (testId == 1) {\n+ Assert.assertTrue(\"Violated sp_tsmm reuse count: \" + numTsmm_r + \" < \" + numTsmm, numTsmm_r < numTsmm);\n+ Assert.assertTrue(\"Violated sp_mapmm reuse count: \" + numMapmm_r + \" < \" + numMapmm, numMapmm_r < numMapmm);\n+ }\n+ if (testId == 2)\n+ Assert.assertTrue(\"Violated prefetch reuse count: \" + numPrefetch_r + \" < \" + numPrefetch, numPrefetch_r<numPrefetch);\n} finally {\nOptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\n@@ -120,4 +147,16 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nRecompiler.reinitRecompiler();\n}\n}\n+\n+ private void enablePrefetch() {\n+ OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = false;\n+ OptimizerUtils.MAX_PARALLELIZE_ORDER = true;\n+ OptimizerUtils.ASYNC_PREFETCH_SPARK = true;\n+ }\n+\n+ private void disablePrefetch() {\n+ OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = true;\n+ OptimizerUtils.MAX_PARALLELIZE_ORDER = false;\n+ OptimizerUtils.ASYNC_PREFETCH_SPARK = false;\n+ }\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/async/LineageReuseSpark2.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+X = rand(rows=10000, cols=200, seed=42); #sp_rand\n+v = rand(rows=200, cols=1, seed=42); #cp_rand\n+\n+# Spark transformation operations\n+for (i in 1:10) {\n+ while(FALSE){}\n+ sp1 = X + ceil(X);\n+ sp2 = sp1 %*% v; #output fits in local\n+ # Place a prefetch after mapmm and reuse\n+\n+ # CP instructions\n+ v2 = ((v + v) * 1 - v) / (1+1);\n+ v2 = ((v + v) * 2 - v) / (2+1);\n+\n+ # CP binary triggers the DAG of SP operations\n+ cp = sp2 + sum(v2);\n+ R = sum(cp);\n+}\n+\n+write(R, $1, format=\"text\");\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3474] Lineage-based reuse of prefetch instruction
This patch enables caching and reusing prefetch instruction
outputs. This is the first step towards reusing asynchronous
operators.
Closes #1746 |
49,689 | 06.12.2022 12:36:09 | -3,600 | 5182796632bfb9173f5d2e7b2e7d20e434270bda | Lineage-based reuse of future-based instructions
This patch enables caching and reuse of future-based Spark
actions.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/ExecutionContext.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/ExecutionContext.java",
"diff": "@@ -602,16 +602,21 @@ public class ExecutionContext {\nmo.release();\n}\n- public void setMatrixOutput(String varName, Future<MatrixBlock> fmb) {\n+ public void setMatrixOutputAndLineage(String varName, Future<MatrixBlock> fmb, LineageItem li) {\nif (isAutoCreateVars() && !containsVariable(varName)) {\nMatrixObject fmo = new MatrixObjectFuture(Types.ValueType.FP64,\nOptimizerUtils.getUniqueTempFileName(), fmb);\n}\nMatrixObject mo = getMatrixObject(varName);\nMatrixObjectFuture fmo = new MatrixObjectFuture(mo, fmb);\n+ fmo.setCacheLineage(li);\nsetVariable(varName, fmo);\n}\n+ public void setMatrixOutput(String varName, Future<MatrixBlock> fmb) {\n+ setMatrixOutputAndLineage(varName, fmb, null);\n+ }\n+\npublic void setMatrixOutput(String varName, MatrixBlock outputData, UpdateType flag) {\nif( isAutoCreateVars() && !containsVariable(varName) )\nsetVariable(varName, createMatrixObject(outputData));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/MatrixObjectFuture.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/MatrixObjectFuture.java",
"diff": "@@ -22,6 +22,7 @@ package org.apache.sysds.runtime.controlprogram.context;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\n+import org.apache.sysds.runtime.lineage.LineageCache;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport java.util.concurrent.Future;\n@@ -59,8 +60,14 @@ public class MatrixObjectFuture extends MatrixObject\nthrow new DMLRuntimeException(\"MatrixObject not available to read.\");\nif(_data != null)\nthrow new DMLRuntimeException(\"_data must be null for future matrix object/block.\");\n+ MatrixBlock out = null;\nacquire(false, false);\n- return _futureData.get();\n+ long t1 = System.nanoTime();\n+ out = _futureData.get();\n+ if (hasValidLineage())\n+ LineageCache.putValueAsyncOp(getCacheLineage(), this, out, t1);\n+ // FIXME: start time should indicate the actual start of the execution\n+ return out;\n}\ncatch(Exception e) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/PrefetchCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/PrefetchCPInstruction.java",
"diff": "@@ -46,7 +46,7 @@ public class PrefetchCPInstruction extends UnaryCPInstruction {\npublic void processInstruction(ExecutionContext ec) {\n// TODO: handle non-matrix objects\nec.setVariable(output.getName(), ec.getMatrixObject(input1));\n- LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? this.getLineageItem(ec).getValue() : null;\n+ LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? getLineageItem(ec).getValue() : null;\n// Note, a Prefetch instruction doesn't guarantee an asynchronous execution.\n// If the next instruction which takes this output as an input comes before\n@@ -54,6 +54,8 @@ public class PrefetchCPInstruction extends UnaryCPInstruction {\n// In that case this Prefetch instruction will act like a NOOP.\nif (CommonThreadPool.triggerRemoteOPsPool == null)\nCommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\n+ // Saving the lineage item inside the matrix object will replace the pre-attached\n+ // lineage item (e.g. mapmm). Hence, passing separately.\nCommonThreadPool.triggerRemoteOPsPool.submit(new TriggerPrefetchTask(ec.getMatrixObject(output), li));\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/TriggerPrefetchTask.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/TriggerPrefetchTask.java",
"diff": "@@ -24,6 +24,7 @@ import org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.controlprogram.federated.FederatedStatistics;\nimport org.apache.sysds.runtime.lineage.LineageCache;\nimport org.apache.sysds.runtime.lineage.LineageItem;\n+import org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.utils.stats.SparkStatistics;\npublic class TriggerPrefetchTask implements Runnable {\n@@ -43,6 +44,7 @@ public class TriggerPrefetchTask implements Runnable {\n@Override\npublic void run() {\nboolean prefetched = false;\n+ MatrixBlock mb = null;\nlong t1 = System.nanoTime();\nsynchronized (_prefetchMO) {\n// Having this check inside the critical section\n@@ -50,14 +52,14 @@ public class TriggerPrefetchTask implements Runnable {\nif (_prefetchMO.isPendingRDDOps() || _prefetchMO.isFederated()) {\n// TODO: Add robust runtime constraints for federated prefetch\n// Execute and bring the result to local\n- _prefetchMO.acquireReadAndRelease();\n+ mb = _prefetchMO.acquireReadAndRelease();\nprefetched = true;\n}\n}\n// Save the collected intermediate in the lineage cache\n- if (_inputLi != null)\n- LineageCache.putValueAsyncOp(_inputLi, _prefetchMO, prefetched, t1);\n+ if (_inputLi != null && mb != null)\n+ LineageCache.putValueAsyncOp(_inputLi, _prefetchMO, mb, t1);\nif (DMLScript.STATISTICS && prefetched) {\nif (_prefetchMO.isFederated())\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/AggregateUnarySPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/AggregateUnarySPInstruction.java",
"diff": "@@ -39,6 +39,8 @@ import org.apache.sysds.runtime.instructions.spark.functions.AggregateDropCorrec\nimport org.apache.sysds.runtime.instructions.spark.functions.FilterDiagMatrixBlocksFunction;\nimport org.apache.sysds.runtime.instructions.spark.functions.FilterNonEmptyBlocksFunction;\nimport org.apache.sysds.runtime.instructions.spark.utils.RDDAggregateUtils;\n+import org.apache.sysds.runtime.lineage.LineageCacheConfig;\n+import org.apache.sysds.runtime.lineage.LineageItem;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\nimport org.apache.sysds.runtime.matrix.data.OperationsOnMatrixValues;\n@@ -117,7 +119,8 @@ public class AggregateUnarySPInstruction extends UnarySPInstruction {\nCommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\nRDDAggregateTask task = new RDDAggregateTask(_optr, _aop, in, mc);\nFuture<MatrixBlock> future_out = CommonThreadPool.triggerRemoteOPsPool.submit(task);\n- sec.setMatrixOutput(output.getName(), future_out);\n+ LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? getLineageItem(ec).getValue() : null;\n+ sec.setMatrixOutputAndLineage(output.getName(), future_out, li);\n}\ncatch(Exception ex) {\nthrow new DMLRuntimeException(ex);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CpmmSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/CpmmSPInstruction.java",
"diff": "@@ -37,6 +37,8 @@ import org.apache.sysds.runtime.instructions.spark.functions.FilterNonEmptyBlock\nimport org.apache.sysds.runtime.instructions.spark.functions.ReorgMapFunction;\nimport org.apache.sysds.runtime.instructions.spark.utils.RDDAggregateUtils;\nimport org.apache.sysds.runtime.instructions.spark.utils.SparkUtils;\n+import org.apache.sysds.runtime.lineage.LineageCacheConfig;\n+import org.apache.sysds.runtime.lineage.LineageItem;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\nimport org.apache.sysds.runtime.matrix.data.OperationsOnMatrixValues;\n@@ -113,7 +115,8 @@ public class CpmmSPInstruction extends AggregateBinarySPInstruction {\nCommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\nCpmmMatrixVectorTask task = new CpmmMatrixVectorTask(in1, in2);\nFuture<MatrixBlock> future_out = CommonThreadPool.triggerRemoteOPsPool.submit(task);\n- sec.setMatrixOutput(output.getName(), future_out);\n+ LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? getLineageItem(ec).getValue() : null;\n+ sec.setMatrixOutputAndLineage(output.getName(), future_out, li);\n}\ncatch(Exception ex) {\nthrow new DMLRuntimeException(ex);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/MapmmSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/MapmmSPInstruction.java",
"diff": "@@ -50,6 +50,8 @@ import org.apache.sysds.runtime.instructions.spark.data.LazyIterableIterator;\nimport org.apache.sysds.runtime.instructions.spark.data.PartitionedBroadcast;\nimport org.apache.sysds.runtime.instructions.spark.functions.FilterNonEmptyBlocksFunction;\nimport org.apache.sysds.runtime.instructions.spark.utils.RDDAggregateUtils;\n+import org.apache.sysds.runtime.lineage.LineageCacheConfig;\n+import org.apache.sysds.runtime.lineage.LineageItem;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\nimport org.apache.sysds.runtime.matrix.data.OperationsOnMatrixValues;\n@@ -146,7 +148,8 @@ public class MapmmSPInstruction extends AggregateBinarySPInstruction {\nCommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\nRDDMapmmTask task = new RDDMapmmTask(in1, in2, type);\nFuture<MatrixBlock> future_out = CommonThreadPool.triggerRemoteOPsPool.submit(task);\n- sec.setMatrixOutput(output.getName(), future_out);\n+ LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? getLineageItem(ec).getValue() : null;\n+ sec.setMatrixOutputAndLineage(output.getName(), future_out, li);\n}\ncatch(Exception ex) { throw new DMLRuntimeException(ex); }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/TsmmSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/TsmmSPInstruction.java",
"diff": "@@ -31,6 +31,8 @@ import org.apache.sysds.runtime.controlprogram.context.SparkExecutionContext;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.spark.utils.RDDAggregateUtils;\n+import org.apache.sysds.runtime.lineage.LineageCacheConfig;\n+import org.apache.sysds.runtime.lineage.LineageItem;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n@@ -74,7 +76,8 @@ public class TsmmSPInstruction extends UnarySPInstruction {\nCommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\nTsmmTask task = new TsmmTask(in, _type);\nFuture<MatrixBlock> future_out = CommonThreadPool.triggerRemoteOPsPool.submit(task);\n- sec.setMatrixOutput(output.getName(), future_out);\n+ LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? getLineageItem(ec).getValue() : null;\n+ sec.setMatrixOutputAndLineage(output.getName(), future_out, li);\n}\ncatch(Exception ex) {\nthrow new DMLRuntimeException(ex);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "package org.apache.sysds.runtime.lineage;\n+import org.apache.commons.lang3.ArrayUtils;\nimport org.apache.commons.lang3.tuple.MutablePair;\nimport org.apache.commons.lang3.tuple.Pair;\nimport org.apache.sysds.api.DMLScript;\n@@ -575,16 +576,13 @@ public class LineageCache\ncontinue;\n}\n- if (data instanceof MatrixObjectFuture) {\n+ if (data instanceof MatrixObjectFuture || inst instanceof PrefetchCPInstruction) {\n// We don't want to call get() on the future immediately after the execution\n+ // For the async. instructions, caching is handled separately by the tasks\nremovePlaceholder(item);\ncontinue;\n}\n- if (inst instanceof PrefetchCPInstruction || inst instanceof BroadcastCPInstruction)\n- // For the async. instructions, caching is handled separately by the tasks\n- continue;\n-\nif (data instanceof MatrixObject && ((MatrixObject) data).hasRDDHandle()) {\n// Avoid triggering pre-matured Spark instruction chains\nremovePlaceholder(item);\n@@ -643,49 +641,28 @@ public class LineageCache\n}\n}\n- public static void putValueAsyncOp(LineageItem instLI, Data data, boolean prefetched, long starttime)\n+ // This method is called from inside the asynchronous operators and directly put the output of\n+ // an asynchronous instruction into the lineage cache. As the consumers, a different operator,\n+ // materializes the intermediate, we skip the placeholder placing logic.\n+ public static void putValueAsyncOp(LineageItem instLI, Data data, MatrixBlock mb, long starttime)\n{\nif (ReuseCacheType.isNone())\nreturn;\n- if (!prefetched) //prefetching was not successful\n+ if (!ArrayUtils.contains(LineageCacheConfig.getReusableOpcodes(), instLI.getOpcode()))\nreturn;\n-\n- synchronized( _cache )\n- {\n- if (!probe(instLI))\n- return;\n-\n- long computetime = System.nanoTime() - starttime;\n- LineageCacheEntry centry = _cache.get(instLI);\nif(!(data instanceof MatrixObject) && !(data instanceof ScalarObject)) {\n- // Reusable instructions can return a frame (rightIndex). Remove placeholders.\n- removePlaceholder(instLI);\nreturn;\n}\n- MatrixBlock mb = (data instanceof MatrixObject) ?\n- ((MatrixObject)data).acquireReadAndRelease() : null;\n- long size = mb != null ? mb.getInMemorySize() : ((ScalarObject)data).getSize();\n-\n- // remove the placeholder if the entry is bigger than the cache.\n- if (size > LineageCacheEviction.getCacheLimit()) {\n- removePlaceholder(instLI);\n- return;\n- }\n-\n- // place the data\n- if (data instanceof MatrixObject)\n- centry.setValue(mb, computetime);\n- else if (data instanceof ScalarObject)\n- centry.setValue((ScalarObject)data, computetime);\n+ synchronized( _cache )\n+ {\n+ long computetime = System.nanoTime() - starttime;\n+ // Make space, place data and manage queue\n+ putIntern(instLI, DataType.MATRIX, mb, null, computetime);\n- if (DMLScript.STATISTICS && LineageCacheEviction._removelist.containsKey(centry._key)) {\n+ if (DMLScript.STATISTICS && LineageCacheEviction._removelist.containsKey(instLI))\n// Add to missed compute time\n- LineageCacheStatistics.incrementMissedComputeTime(centry._computeTime);\n- }\n-\n- //maintain order for eviction\n- LineageCacheEviction.addEntry(centry);\n+ LineageCacheStatistics.incrementMissedComputeTime(computetime);\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -198,6 +198,10 @@ public class LineageCacheConfig\nREUSE_OPCODES = ops;\n}\n+ public static String[] getReusableOpcodes() {\n+ return REUSE_OPCODES;\n+ }\n+\npublic static void resetReusableOpcodes() {\nREUSE_OPCODES = OPCODES;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"diff": "@@ -62,11 +62,6 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nrunTest(TEST_NAME+\"1\", ExecMode.SPARK, 1);\n}\n- @Test\n- public void testReusePrefetch() {\n- runTest(TEST_NAME+\"2\", ExecMode.HYBRID, 2);\n- }\n-\npublic void runTest(String testname, ExecMode execMode, int testId) {\nboolean old_simplification = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\nboolean old_sum_product = OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES;\n@@ -91,18 +86,10 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nprogramArgs = proArgs.toArray(new String[proArgs.size()]);\nLineage.resetInternalState();\n- if (testId == 2) enablePrefetch();\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n- disablePrefetch();\nHashMap<MatrixValue.CellIndex, Double> R = readDMLScalarFromOutputDir(\"R\");\n- long numTsmm = 0;\n- long numMapmm = 0;\n- if (testId == 1) {\n- numTsmm = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n- numMapmm = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n- }\n- long numPrefetch = 0;\n- if (testId == 2) numPrefetch = Statistics.getCPHeavyHitterCount(\"prefetch\");\n+ long numTsmm = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ long numMapmm = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\nproArgs.clear();\nproArgs.add(\"-explain\");\n@@ -114,18 +101,10 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nprogramArgs = proArgs.toArray(new String[proArgs.size()]);\nLineage.resetInternalState();\n- if (testId == 2) enablePrefetch();\nrunTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n- disablePrefetch();\nHashMap<MatrixValue.CellIndex, Double> R_reused = readDMLScalarFromOutputDir(\"R\");\n- long numTsmm_r = 0;\n- long numMapmm_r = 0;\n- if (testId == 1) {\n- numTsmm_r = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n- numMapmm_r = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n- }\n- long numPrefetch_r = 0;\n- if (testId == 2) numPrefetch_r = Statistics.getCPHeavyHitterCount(\"prefetch\");\n+ long numTsmm_r = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ long numMapmm_r = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n//compare matrices\nboolean matchVal = TestUtils.compareMatrices(R, R_reused, 1e-6, \"Origin\", \"withPrefetch\");\n@@ -135,9 +114,6 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nAssert.assertTrue(\"Violated sp_tsmm reuse count: \" + numTsmm_r + \" < \" + numTsmm, numTsmm_r < numTsmm);\nAssert.assertTrue(\"Violated sp_mapmm reuse count: \" + numMapmm_r + \" < \" + numMapmm, numMapmm_r < numMapmm);\n}\n- if (testId == 2)\n- Assert.assertTrue(\"Violated prefetch reuse count: \" + numPrefetch_r + \" < \" + numPrefetch, numPrefetch_r<numPrefetch);\n-\n} finally {\nOptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\nOptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\n@@ -147,16 +123,4 @@ public class LineageReuseSparkTest extends AutomatedTestBase {\nRecompiler.reinitRecompiler();\n}\n}\n-\n- private void enablePrefetch() {\n- OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = false;\n- OptimizerUtils.MAX_PARALLELIZE_ORDER = true;\n- OptimizerUtils.ASYNC_PREFETCH_SPARK = true;\n- }\n-\n- private void disablePrefetch() {\n- OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = true;\n- OptimizerUtils.MAX_PARALLELIZE_ORDER = false;\n- OptimizerUtils.ASYNC_PREFETCH_SPARK = false;\n- }\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/async/ReuseAsyncOpTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.async;\n+\n+import java.util.ArrayList;\n+import java.util.HashMap;\n+import java.util.List;\n+\n+import org.apache.sysds.common.Types.ExecMode;\n+import org.apache.sysds.hops.OptimizerUtils;\n+import org.apache.sysds.hops.recompile.Recompiler;\n+import org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\n+import org.apache.sysds.runtime.lineage.Lineage;\n+import org.apache.sysds.runtime.lineage.LineageCacheConfig;\n+import org.apache.sysds.runtime.matrix.data.MatrixValue;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.apache.sysds.utils.Statistics;\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+public class ReuseAsyncOpTest extends AutomatedTestBase {\n+ protected static final String TEST_DIR = \"functions/async/\";\n+ protected static final String TEST_NAME = \"ReuseAsyncOp\";\n+ protected static final int TEST_VARIANTS = 2;\n+ protected static String TEST_CLASS_DIR = TEST_DIR + ReuseAsyncOpTest.class.getSimpleName() + \"/\";\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ for(int i=1; i<=TEST_VARIANTS; i++)\n+ addTestConfiguration(TEST_NAME+i, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME+i));\n+ }\n+\n+ @Test\n+ public void testReusePrefetch() {\n+ // Reuse prefetch results\n+ runTest(TEST_NAME+\"1\", ExecMode.HYBRID, 1);\n+ }\n+\n+ @Test\n+ public void testlmds() {\n+ // Reuse future-based tsmm and mapmm\n+ runTest(TEST_NAME+\"2\", ExecMode.HYBRID, 2);\n+ }\n+\n+ public void runTest(String testname, ExecMode execMode, int testId) {\n+ boolean old_simplification = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\n+ boolean old_sum_product = OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES;\n+ boolean old_trans_exec_type = OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE;\n+ ExecMode oldPlatform = setExecMode(ExecMode.HYBRID);\n+ rtplatform = execMode;\n+\n+ long oldmem = InfrastructureAnalyzer.getLocalMaxMemory();\n+ long mem = 1024*1024*8;\n+ InfrastructureAnalyzer.setLocalMaxMemory(mem);\n+\n+ try {\n+ getAndLoadTestConfiguration(testname);\n+ fullDMLScriptName = getScript();\n+\n+ List<String> proArgs = new ArrayList<>();\n+\n+ proArgs.add(\"-explain\");\n+ proArgs.add(\"-stats\");\n+ proArgs.add(\"-args\");\n+ proArgs.add(output(\"R\"));\n+ programArgs = proArgs.toArray(new String[proArgs.size()]);\n+\n+ Lineage.resetInternalState();\n+ enableAsync(); //enable max_reuse and prefetch\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ disableAsync();\n+ HashMap<MatrixValue.CellIndex, Double> R = readDMLScalarFromOutputDir(\"R\");\n+ 
long numTsmm = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ long numMapmm = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+ long numPrefetch = Statistics.getCPHeavyHitterCount(\"prefetch\");\n+\n+ proArgs.clear();\n+ proArgs.add(\"-explain\");\n+ proArgs.add(\"-stats\");\n+ proArgs.add(\"-lineage\");\n+ proArgs.add(LineageCacheConfig.ReuseCacheType.REUSE_FULL.name().toLowerCase());\n+ proArgs.add(\"-args\");\n+ proArgs.add(output(\"R\"));\n+ programArgs = proArgs.toArray(new String[proArgs.size()]);\n+\n+ Lineage.resetInternalState();\n+ enableAsync(); //enable max_reuse and prefetch\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ disableAsync();\n+ HashMap<MatrixValue.CellIndex, Double> R_reused = readDMLScalarFromOutputDir(\"R\");\n+ long numTsmm_r = Statistics.getCPHeavyHitterCount(\"sp_tsmm\");\n+ long numMapmm_r = Statistics.getCPHeavyHitterCount(\"sp_mapmm\");\n+ long numPrefetch_r = Statistics.getCPHeavyHitterCount(\"prefetch\");\n+\n+ //compare matrices\n+ boolean matchVal = TestUtils.compareMatrices(R, R_reused, 1e-6, \"Origin\", \"withPrefetch\");\n+ if (!matchVal)\n+ System.out.println(\"Value w/o reuse \"+R+\" w/ reuse \"+R_reused);\n+ if (testId == 2) {\n+ Assert.assertTrue(\"Violated sp_tsmm reuse count: \" + numTsmm_r + \" < \" + numTsmm, numTsmm_r < numTsmm);\n+ Assert.assertTrue(\"Violated sp_mapmm reuse count: \" + numMapmm_r + \" < \" + numMapmm, numMapmm_r < numMapmm);\n+ }\n+ if (testId == 1)\n+ Assert.assertTrue(\"Violated prefetch reuse count: \" + numPrefetch_r + \" < \" + numPrefetch, numPrefetch_r<numPrefetch);\n+\n+ } finally {\n+ OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\n+ OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\n+ OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = old_trans_exec_type;\n+ resetExecMode(oldPlatform);\n+ InfrastructureAnalyzer.setLocalMaxMemory(oldmem);\n+ Recompiler.reinitRecompiler();\n+ }\n+ }\n+\n+ private void enableAsync() {\n+ OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = false;\n+ OptimizerUtils.MAX_PARALLELIZE_ORDER = true;\n+ OptimizerUtils.ASYNC_PREFETCH_SPARK = true;\n+ }\n+\n+ private void disableAsync() {\n+ OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = true;\n+ OptimizerUtils.MAX_PARALLELIZE_ORDER = false;\n+ OptimizerUtils.ASYNC_PREFETCH_SPARK = false;\n+ }\n+}\n"
},
{
"change_type": "RENAME",
"old_path": "src/test/scripts/functions/async/LineageReuseSpark2.dml",
"new_path": "src/test/scripts/functions/async/ReuseAsyncOp1.dml",
"diff": ""
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/async/ReuseAsyncOp2.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+SimlinRegDS = function(Matrix[Double] X, Matrix[Double] y, Double lamda, Integer N) return (Matrix[double] beta)\n+{\n+ # Reuse sp_tsmm and sp_mapmm if not future-based\n+ A = (t(X) %*% X) + diag(matrix(lamda, rows=N, cols=1));\n+ b = t(X) %*% y;\n+ beta = solve(A, b);\n+}\n+\n+no_lamda = 10;\n+\n+stp = (0.1 - 0.0001)/no_lamda;\n+lamda = 0.0001;\n+lim = 0.1;\n+\n+X = rand(rows=10000, cols=200, seed=42);\n+y = rand(rows=10000, cols=1, seed=43);\n+N = ncol(X);\n+R = matrix(0, rows=N, cols=no_lamda+2);\n+i = 1;\n+\n+while (lamda < lim)\n+{\n+ beta = SimlinRegDS(X, y, lamda, N);\n+ #beta = lmDS(X=X, y=y, reg=lamda);\n+ R[,i] = beta;\n+ lamda = lamda + stp;\n+ i = i + 1;\n+}\n+\n+R = sum(R);\n+write(R, $1, format=\"text\");\n+\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3474] Lineage-based reuse of future-based instructions
This patch enables caching and reuse of future-based Spark
actions.
Closes #1747 |
49,706 | 09.12.2022 11:49:13 | -3,600 | 1abff839d4e565268a1b557a3987e7f7845cd362 | Spark unsafe warning suppression
This commit adds a small workaround to the Spark execution context
creation, to allow Spark to access Java 11 unsafe packages.
The workaround has been verified to work in cluster situations,
and removes the warnings written by Java about illegal access to
Unsafe. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/SparkExecutionContext.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/SparkExecutionContext.java",
"diff": "@@ -22,6 +22,7 @@ package org.apache.sysds.runtime.controlprogram.context;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.io.PrintStream;\n+import java.lang.annotation.Target;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\n@@ -203,11 +204,21 @@ public class SparkExecutionContext extends ExecutionContext\nreturn LAZY_SPARKCTX_CREATION;\n}\n- private synchronized static void initSparkContext()\n- {\n+ public static void handleIllegalReflectiveAccessSpark(){\n+ Module pf = org.apache.spark.unsafe.Platform.class.getModule();\n+ Target.class.getModule().addOpens(\"java.nio\", pf);\n+\n+ Module se = org.apache.spark.util.SizeEstimator.class.getModule();\n+ Target.class.getModule().addOpens(\"java.util\", se);\n+ Target.class.getModule().addOpens(\"java.lang\", se);\n+ Target.class.getModule().addOpens(\"java.util.concurrent\", se);\n+ }\n+\n+ private synchronized static void initSparkContext(){\n//check for redundant spark context init\nif( _spctx != null )\nreturn;\n+ handleIllegalReflectiveAccessSpark();\nlong t0 = DMLScript.STATISTICS ? System.nanoTime() : 0;\n@@ -1780,6 +1791,7 @@ public class SparkExecutionContext extends ExecutionContext\npublic SparkClusterConfig()\n{\n+ handleIllegalReflectiveAccessSpark();\nSparkConf sconf = createSystemDSSparkConf();\n_confOnly = true;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3229] Spark unsafe warning suppression
This commit adds a small workaround to the Spark execution context
creation, to allow Spark to access Java 11 unsafe packages.
The workaround has been verified to work in cluster situations,
and removes the warnings written by Java about illegal access to
Unsafe. |
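A minimal standalone sketch of the same Module-API workaround, kept outside the SystemDS classes; the package list is an assumption copied from the patch above, and on JDK 9-16 the calls only succeed because java.base packages are still opened to classpath code by default:

```java
public class OpenJavaBaseToSpark {
    // Same idea as handleIllegalReflectiveAccessSpark(): open selected java.base
    // packages to the module that loaded Spark's unsafe/size-estimation classes.
    public static void openTo(Class<?> sparkWitness) {
        Module sparkModule = sparkWitness.getModule();   // e.g. org.apache.spark.unsafe.Platform
        Module javaBase = Object.class.getModule();      // java.base
        for (String pkg : new String[] {"java.nio", "java.util", "java.lang", "java.util.concurrent"})
            javaBase.addOpens(pkg, sparkModule);         // requires java.base to already be open to the caller
    }
}
// Equivalent JVM flags, if changing code is not an option:
//   --add-opens java.base/java.nio=ALL-UNNAMED  --add-opens java.base/java.util=ALL-UNNAMED
//   --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util.concurrent=ALL-UNNAMED
```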
49,724 | 18.11.2022 17:17:06 | 28,800 | 15df58fe0b4a9ab8290a34b57e08850c0e5b5f05 | Support of MULTI_BLOCK Spark for countDistinct()
This patch adds support for running MULTI_BLOCK aggregations
for the countDistinct() builtin on the Spark backend.
The implementation augments the CountDistinctFunctionSketch
with the union() function implementation.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/sketch/countdistinct/BitMapValueCombiner.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.matrix.data.sketch.countdistinct;\n+\n+import java.util.Set;\n+import java.util.function.BinaryOperator;\n+\n+public class BitMapValueCombiner implements BinaryOperator<Set<Long>> {\n+\n+ @Override\n+ public Set<Long> apply(Set<Long> set0, Set<Long> set1) {\n+ if (set0.isEmpty()) {\n+ return set1;\n+ }\n+\n+ if (set1.isEmpty()) {\n+ return set0;\n+ }\n+\n+ // Merging left-right is identical to merging right-left\n+ set0.addAll(set1);\n+ return set0;\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/sketch/countdistinct/CountDistinctFunctionSketch.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/sketch/countdistinct/CountDistinctFunctionSketch.java",
"diff": "package org.apache.sysds.runtime.matrix.data.sketch.countdistinct;\n-import org.apache.commons.lang.NotImplementedException;\nimport org.apache.commons.logging.Log;\nimport org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.hops.OptimizerUtils;\n@@ -31,7 +30,10 @@ import org.apache.sysds.runtime.matrix.operators.Operator;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Map;\n+import java.util.OptionalInt;\nimport java.util.Set;\n+import java.util.stream.Collectors;\n+import java.util.stream.Stream;\npublic class CountDistinctFunctionSketch extends CountDistinctSketch {\n@@ -110,7 +112,7 @@ public class CountDistinctFunctionSketch extends CountDistinctSketch {\n}\n}\n- MatrixBlock blkOutCorr = serializeInputMatrixBlock(bitMap, maxColumns);\n+ MatrixBlock blkOutCorr = serialize(bitMap, maxColumns);\n// The sketch contains all relevant info, so the input matrix can be discarded at this point\nreturn new CorrMatrixBlock(blkIn, blkOutCorr);\n@@ -121,7 +123,7 @@ public class CountDistinctFunctionSketch extends CountDistinctSketch {\nreturn kMask & (n >> startingIndex);\n}\n- private MatrixBlock serializeInputMatrixBlock(Map<Short, Set<Long>> bitMap, int maxWidth) {\n+ private MatrixBlock serialize(Map<Short, Set<Long>> bitMap, int maxWidth) {\n// Each row in output matrix corresponds to a key and each column to a fraction value for that key.\n// The first column will store the exponent value itself:\n@@ -150,9 +152,57 @@ public class CountDistinctFunctionSketch extends CountDistinctSketch {\nreturn blkOut;\n}\n+ private Map<Short, Set<Long>> deserialize(MatrixBlock blkIn) {\n+ int R = blkIn.getNumRows();\n+ Map<Short, Set<Long>> bitMap = new HashMap<>();\n+\n+ // row_i: [exponent_i, N_i, fraction_i0, fraction_i1, .., fraction_iN, 0, .., 0]\n+ for (int i=0; i<R; ++i) {\n+ short key = (short) blkIn.getValue(i, 0);\n+ Set<Long> fractions = bitMap.getOrDefault(key, new HashSet<>());\n+\n+ int C = (int) blkIn.getValue(i, 1);\n+ int j = 0;\n+ while (j < C) {\n+ long fraction = (long) blkIn.getValue(i, j + 2);\n+ fractions.add(fraction);\n+ ++j;\n+ }\n+\n+ bitMap.put(key, fractions);\n+ }\n+\n+ return bitMap;\n+ }\n+\n@Override\npublic CorrMatrixBlock union(CorrMatrixBlock arg0, CorrMatrixBlock arg1) {\n- throw new NotImplementedException(\"MULTI_BLOCK aggregation is not supported yet\");\n+ MatrixBlock corr0 = arg0.getCorrection();\n+ Map<Short, Set<Long>> bitMap0 = deserialize(corr0);\n+\n+ MatrixBlock corr1 = arg1.getCorrection();\n+ Map<Short, Set<Long>> bitMap1 = deserialize(corr1);\n+\n+ // Map putAll() is not suitable here as it will replace Map values for identical keys.\n+ // We will use a custom combiner with stream() and collect() instead.\n+ Map<Short, Set<Long>> bitMapOut =\n+ Stream.concat(bitMap0.entrySet().stream(), bitMap1.entrySet().stream())\n+ .collect(Collectors.toMap(\n+ Map.Entry::getKey,\n+ Map.Entry::getValue,\n+ new BitMapValueCombiner()\n+ ));\n+\n+ // Find the maximum column width\n+ OptionalInt maxWidthOpt = bitMapOut.values().stream().mapToInt(Set::size).max();\n+ if (maxWidthOpt.isEmpty()) {\n+ throw new IllegalArgumentException(\"Corrupt sketch: metadata is invalid\");\n+ }\n+\n+ int maxWidth = maxWidthOpt.getAsInt();\n+ MatrixBlock blkOutCorr = serialize(bitMapOut, maxWidth);\n+\n+ return new CorrMatrixBlock(arg0.getValue(), blkOutCorr);\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctRowColBase.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctRowColBase.java",
"diff": "@@ -58,6 +58,13 @@ public abstract class CountDistinctRowColBase extends CountDistinctBase {\ncountDistinctScalarTest(1723, 5000, 2000, 1.0, ex, tolerance);\n}\n+ @Test\n+ public void testSparkDenseXLarge() {\n+ ExecType ex = ExecType.SPARK;\n+ double tolerance = baseTolerance + 1723 * percentTolerance;\n+ countDistinctScalarTest(1723, 5000, 2000, 1.0, ex, tolerance);\n+ }\n+\n@Test\npublic void testCPDense1Unique() {\nExecType ex = ExecType.CP;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3467] Support of MULTI_BLOCK Spark for countDistinct()
This patch adds support for running MULTI_BLOCK aggregations
for the countDistinct() builtin on the Spark backend.
The implementation augments the CountDistinctFunctionSketch
with the union() function implementation.
Closes #1734 |
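The core of the new union() is merging two key-to-set maps so that equal keys union their value sets instead of overwriting each other (which plain Map.putAll would do). A minimal, self-contained sketch of that merge step using only JDK types:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SketchUnionDemo {
    // Merge two bitmaps: identical keys union their fraction sets.
    static Map<Short, Set<Long>> union(Map<Short, Set<Long>> a, Map<Short, Set<Long>> b) {
        return Stream.concat(a.entrySet().stream(), b.entrySet().stream())
            .collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> new HashSet<>(e.getValue()),            // copy, so the inputs stay untouched
                (s0, s1) -> { s0.addAll(s1); return s0; })); // combiner, as in BitMapValueCombiner
    }

    public static void main(String[] args) {
        Map<Short, Set<Long>> a = Map.of((short) 1, Set.of(1L, 2L));
        Map<Short, Set<Long>> b = Map.of((short) 1, Set.of(2L, 3L), (short) 2, Set.of(7L));
        System.out.println(union(a, b)); // {1=[1, 2, 3], 2=[7]} (iteration order may vary)
    }
}
```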
49,724 | 25.11.2022 11:02:30 | 28,800 | 38ec722a53557037b79eb6259ce17cf2b850af4e | [MINOR] Adding a factory method for MatrixSketch
This patch introduces a factory method for sketches.
This will centralize the creation of all sketches in one place and
prevent duplication of operator switching and validation logic.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/AggregateUnarySketchSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/AggregateUnarySketchSPInstruction.java",
"diff": "@@ -117,7 +117,7 @@ public class AggregateUnarySketchSPInstruction extends UnarySPInstruction {\nout1.fold(new CorrMatrixBlock(new MatrixBlock()),\nnew AggregateUnarySketchUnionAllFunction(this.op));\n- MatrixBlock out3 = LibMatrixCountDistinct.countDistinctValuesFromSketch(out2, this.op);\n+ MatrixBlock out3 = LibMatrixCountDistinct.countDistinctValuesFromSketch(this.op, out2);\n// put output block into symbol table (no lineage because single block)\n// this also includes implicit maintenance of matrix characteristics\n@@ -180,7 +180,7 @@ public class AggregateUnarySketchSPInstruction extends UnarySPInstruction {\nMatrixIndexes ixOut = new MatrixIndexes();\nthis.op.indexFn.execute(ixIn, ixOut);\n- return LibMatrixCountDistinct.createSketch(blkIn, this.op);\n+ return LibMatrixCountDistinct.createSketch(this.op, blkIn);\n}\n}\n@@ -207,7 +207,7 @@ public class AggregateUnarySketchSPInstruction extends UnarySPInstruction {\nreturn arg0;\n}\n- return LibMatrixCountDistinct.unionSketch(arg0, arg1, this.op);\n+ return LibMatrixCountDistinct.unionSketch(this.op, arg0, arg1);\n}\n}\n@@ -246,7 +246,7 @@ public class AggregateUnarySketchSPInstruction extends UnarySPInstruction {\npublic CorrMatrixBlock call(MatrixBlock arg0)\nthrows Exception {\n- return LibMatrixCountDistinct.createSketch(arg0, this.op);\n+ return LibMatrixCountDistinct.createSketch(this.op, arg0);\n}\n}\n@@ -261,8 +261,8 @@ public class AggregateUnarySketchSPInstruction extends UnarySPInstruction {\n@Override\npublic CorrMatrixBlock call(CorrMatrixBlock arg0, MatrixBlock arg1) throws Exception {\n- CorrMatrixBlock arg1WithCorr = LibMatrixCountDistinct.createSketch(arg1, this.op);\n- return LibMatrixCountDistinct.unionSketch(arg0, arg1WithCorr, this.op);\n+ CorrMatrixBlock arg1WithCorr = LibMatrixCountDistinct.createSketch(this.op, arg1);\n+ return LibMatrixCountDistinct.unionSketch(this.op, arg0, arg1WithCorr);\n}\n}\n@@ -277,7 +277,7 @@ public class AggregateUnarySketchSPInstruction extends UnarySPInstruction {\n@Override\npublic CorrMatrixBlock call(CorrMatrixBlock arg0, CorrMatrixBlock arg1) throws Exception {\n- return LibMatrixCountDistinct.unionSketch(arg0, arg1, this.op);\n+ return LibMatrixCountDistinct.unionSketch(this.op, arg0, arg1);\n}\n}\n@@ -292,7 +292,7 @@ public class AggregateUnarySketchSPInstruction extends UnarySPInstruction {\n@Override\npublic MatrixBlock call(CorrMatrixBlock arg0) throws Exception {\n- return LibMatrixCountDistinct.countDistinctValuesFromSketch(arg0, this.op);\n+ return LibMatrixCountDistinct.countDistinctValuesFromSketch(this.op, arg0);\n}\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixCountDistinct.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixCountDistinct.java",
"diff": "@@ -31,8 +31,8 @@ import org.apache.sysds.api.DMLException;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.data.*;\nimport org.apache.sysds.runtime.instructions.spark.data.CorrMatrixBlock;\n-import org.apache.sysds.runtime.matrix.data.sketch.CountDistinctSketch;\n-import org.apache.sysds.runtime.matrix.data.sketch.countdistinct.CountDistinctFunctionSketch;\n+import org.apache.sysds.runtime.matrix.data.sketch.MatrixSketch;\n+import org.apache.sysds.runtime.matrix.data.sketch.SketchFactory;\nimport org.apache.sysds.runtime.matrix.data.sketch.countdistinctapprox.KMVSketch;\nimport org.apache.sysds.runtime.matrix.operators.CountDistinctOperator;\nimport org.apache.sysds.runtime.matrix.operators.CountDistinctOperatorTypes;\n@@ -356,36 +356,18 @@ public interface LibMatrixCountDistinct {\nreturn distinct.size();\n}\n- static MatrixBlock countDistinctValuesFromSketch(CorrMatrixBlock arg0, CountDistinctOperator op) {\n- if(op.getOperatorType() == CountDistinctOperatorTypes.COUNT)\n- return new CountDistinctFunctionSketch(op).getValueFromSketch(arg0);\n- else if(op.getOperatorType() == CountDistinctOperatorTypes.KMV)\n- return new KMVSketch(op).getValueFromSketch(arg0);\n- else if(op.getOperatorType() == CountDistinctOperatorTypes.HLL)\n- throw new NotImplementedException(\"Not implemented yet\");\n- else\n- throw new NotImplementedException(\"Not implemented yet\");\n+ static MatrixBlock countDistinctValuesFromSketch(CountDistinctOperator op, CorrMatrixBlock corrBlkIn) {\n+ MatrixSketch sketch = SketchFactory.get(op);\n+ return sketch.getValueFromSketch(corrBlkIn);\n}\n- static CorrMatrixBlock createSketch(MatrixBlock blkIn, CountDistinctOperator op) {\n- if(op.getOperatorType() == CountDistinctOperatorTypes.COUNT)\n- return new CountDistinctFunctionSketch(op).create(blkIn);\n- else if(op.getOperatorType() == CountDistinctOperatorTypes.KMV)\n- return new KMVSketch(op).create(blkIn);\n- else if(op.getOperatorType() == CountDistinctOperatorTypes.HLL)\n- throw new NotImplementedException(\"Not implemented yet\");\n- else\n- throw new NotImplementedException(\"Not implemented yet\");\n+ static CorrMatrixBlock createSketch(CountDistinctOperator op, MatrixBlock blkIn) {\n+ MatrixSketch sketch = SketchFactory.get(op);\n+ return sketch.create(blkIn);\n}\n- static CorrMatrixBlock unionSketch(CorrMatrixBlock arg0, CorrMatrixBlock arg1, CountDistinctOperator op) {\n- if(op.getOperatorType() == CountDistinctOperatorTypes.COUNT)\n- return new CountDistinctFunctionSketch(op).union(arg0, arg1);\n- else if(op.getOperatorType() == CountDistinctOperatorTypes.KMV)\n- return new KMVSketch(op).union(arg0, arg1);\n- else if(op.getOperatorType() == CountDistinctOperatorTypes.HLL)\n- throw new NotImplementedException(\"Not implemented yet\");\n- else\n- throw new NotImplementedException(\"Not implemented yet\");\n+ static CorrMatrixBlock unionSketch(CountDistinctOperator op, CorrMatrixBlock corrBlkIn0, CorrMatrixBlock corrBlkIn1) {\n+ MatrixSketch sketch = SketchFactory.get(op);\n+ return sketch.union(corrBlkIn0, corrBlkIn1);\n}\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/sketch/SketchFactory.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.matrix.data.sketch;\n+\n+import org.apache.commons.lang.NotImplementedException;\n+import org.apache.sysds.runtime.matrix.data.sketch.countdistinct.CountDistinctFunctionSketch;\n+import org.apache.sysds.runtime.matrix.data.sketch.countdistinctapprox.KMVSketch;\n+import org.apache.sysds.runtime.matrix.operators.CountDistinctOperator;\n+import org.apache.sysds.runtime.matrix.operators.CountDistinctOperatorTypes;\n+import org.apache.sysds.runtime.matrix.operators.Operator;\n+\n+public class SketchFactory {\n+ public static MatrixSketch get(Operator op) {\n+ if (op instanceof CountDistinctOperator) {\n+ CountDistinctOperator cdop = (CountDistinctOperator) op;\n+ if (cdop.getOperatorType() == CountDistinctOperatorTypes.COUNT) {\n+ return new CountDistinctFunctionSketch(op);\n+ } else if (cdop.getOperatorType() == CountDistinctOperatorTypes.KMV) {\n+ return new KMVSketch(op);\n+ } else {\n+ throw new NotImplementedException(\"Only COUNT and KMV count distinct sketches are supported for now\");\n+ }\n+ } else {\n+ throw new IllegalArgumentException(\"Only sketches for count distinct operators are supported for now\");\n+ }\n+ }\n+}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Adding a factory method for MatrixSketch
This patch introduces a factory method for sketches.
This will centralize the creation of all sketches in one place and
prevent duplication of operator switching and validation logic.
Closes #1738 |
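A small sketch of the call pattern the factory enables; class and method names are taken from the diff above, while the snippet itself is illustrative and assumes an already constructed CountDistinctOperator (its setup is not shown here):

```java
import org.apache.sysds.runtime.instructions.spark.data.CorrMatrixBlock;
import org.apache.sysds.runtime.matrix.data.MatrixBlock;
import org.apache.sysds.runtime.matrix.data.sketch.MatrixSketch;
import org.apache.sysds.runtime.matrix.data.sketch.SketchFactory;
import org.apache.sysds.runtime.matrix.operators.CountDistinctOperator;

public class SketchFactoryUsage {
    // Single creation point: the operator decides which sketch implementation is returned.
    static MatrixBlock distinctOf(CountDistinctOperator op, MatrixBlock b0, MatrixBlock b1) {
        MatrixSketch sketch = SketchFactory.get(op);
        CorrMatrixBlock combined = sketch.union(sketch.create(b0), sketch.create(b1));
        return sketch.getValueFromSketch(combined);
    }
}
```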
49,724 | 09.12.2022 17:26:31 | 28,800 | 818b58dd57c0a3f619e609f2062504c8ec24aeee | [MINOR] Create matrix sketches exclusively using MatrixSketchFactory
This patch ensures that all matrix sketches are instantiated via MatrixSketchFactory
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixCountDistinct.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/LibMatrixCountDistinct.java",
"diff": "@@ -36,8 +36,7 @@ import org.apache.sysds.runtime.data.SparseBlockCSR;\nimport org.apache.sysds.runtime.data.SparseBlockFactory;\nimport org.apache.sysds.runtime.instructions.spark.data.CorrMatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.sketch.MatrixSketch;\n-import org.apache.sysds.runtime.matrix.data.sketch.SketchFactory;\n-import org.apache.sysds.runtime.matrix.data.sketch.countdistinctapprox.KMVSketch;\n+import org.apache.sysds.runtime.matrix.data.sketch.MatrixSketchFactory;\nimport org.apache.sysds.runtime.matrix.operators.CountDistinctOperator;\nimport org.apache.sysds.runtime.matrix.operators.CountDistinctOperatorTypes;\nimport org.apache.sysds.utils.Hash.HashType;\n@@ -105,7 +104,7 @@ public interface LibMatrixCountDistinct {\nres = countDistinctValuesNaive(in, op);\nbreak;\ncase KMV:\n- res = new KMVSketch(op).getValue(in);\n+ res = MatrixSketchFactory.get(op).getValue(in);\nbreak;\ndefault:\nthrow new DMLException(\"Invalid estimator type for aggregation: \" + LibMatrixCountDistinct.class.getSimpleName());\n@@ -361,17 +360,17 @@ public interface LibMatrixCountDistinct {\n}\nstatic MatrixBlock countDistinctValuesFromSketch(CountDistinctOperator op, CorrMatrixBlock corrBlkIn) {\n- MatrixSketch sketch = SketchFactory.get(op);\n+ MatrixSketch sketch = MatrixSketchFactory.get(op);\nreturn sketch.getValueFromSketch(corrBlkIn);\n}\nstatic CorrMatrixBlock createSketch(CountDistinctOperator op, MatrixBlock blkIn) {\n- MatrixSketch sketch = SketchFactory.get(op);\n+ MatrixSketch sketch = MatrixSketchFactory.get(op);\nreturn sketch.create(blkIn);\n}\nstatic CorrMatrixBlock unionSketch(CountDistinctOperator op, CorrMatrixBlock corrBlkIn0, CorrMatrixBlock corrBlkIn1) {\n- MatrixSketch sketch = SketchFactory.get(op);\n+ MatrixSketch sketch = MatrixSketchFactory.get(op);\nreturn sketch.union(corrBlkIn0, corrBlkIn1);\n}\n}\n"
},
{
"change_type": "RENAME",
"old_path": "src/main/java/org/apache/sysds/runtime/matrix/data/sketch/SketchFactory.java",
"new_path": "src/main/java/org/apache/sysds/runtime/matrix/data/sketch/MatrixSketchFactory.java",
"diff": "@@ -26,7 +26,7 @@ import org.apache.sysds.runtime.matrix.operators.CountDistinctOperator;\nimport org.apache.sysds.runtime.matrix.operators.CountDistinctOperatorTypes;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n-public class SketchFactory {\n+public class MatrixSketchFactory {\npublic static MatrixSketch get(Operator op) {\nif (op instanceof CountDistinctOperator) {\nCountDistinctOperator cdop = (CountDistinctOperator) op;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Create matrix sketches exclusively using MatrixSketchFactory
This patch ensures that all matrix sketches are instantiated via MatrixSketchFactory
Closes #1749 |
49,706 | 10.12.2022 12:40:11 | -3,600 | a3fab6a87d88593c490ba384fd13c413c14cd161 | Spark with default settings
This commit fixes a minor bug introduced in the fix for spark
default settings, by separating the Log4J setting into two
variables in the bash /bin/systemds. | [
{
"change_type": "MODIFY",
"old_path": "bin/systemds",
"new_path": "bin/systemds",
"diff": "@@ -100,17 +100,16 @@ if [ -z \"$LOG4JPROP\" ] ; then\nif [ -z \"${LOG4JPROP}\" ]; then\nLOG4JPROP=\"\"\nelse\n- LOG4JPROP=\"-Dlog4j.configuration=file:$LOG4JPROP\"\n+ LOG4JPROPFULL=\"-Dlog4j.configuration=file:$LOG4JPROP\"\nfi\nelse\n# L4J was set by env var. Unset if that setting is wrong\nLOG4JPROP2=$(find \"$LOG4JPROP\")\nif [ -z \"${LOG4JPROP2}\" ]; then\nLOG4JPROP=\"\"\n- elif [ -z \"${SYSTEMDS_DISTRIBUTED_OPTS}\" ]; then\n- LOG4JPROP=$LOG4JPROP\nelse\n- LOG4JPROP=\"-Dlog4j.configuration=file:$LOG4JPROP2\"\n+ LOG4JPROP=$LOG4JPROP\n+ LOG4JPROPFULL=\"-Dlog4j.configuration=file:$LOG4JPROP2\"\nfi\nfi\n@@ -434,7 +433,7 @@ if [ $WORKER == 1 ]; then\nCMD=\" \\\njava $SYSTEMDS_STANDALONE_OPTS \\\n-cp $CLASSPATH \\\n- $LOG4JPROP \\\n+ $LOG4JPROPFULL \\\norg.apache.sysds.api.DMLScript \\\n-w $PORT \\\n$CONFIG_FILE \\\n@@ -449,7 +448,7 @@ elif [ \"$FEDMONITORING\" == 1 ]; then\nCMD=\" \\\njava $SYSTEMDS_STANDALONE_OPTS \\\n-cp $CLASSPATH \\\n- $LOG4JPROP \\\n+ $LOG4JPROPFULL \\\norg.apache.sysds.api.DMLScript \\\n-fedMonitoring $PORT \\\n$CONFIG_FILE \\\n@@ -464,7 +463,7 @@ elif [ $SYSDS_DISTRIBUTED == 0 ]; then\nCMD=\" \\\njava $SYSTEMDS_STANDALONE_OPTS \\\n-cp $CLASSPATH \\\n- $LOG4JPROP \\\n+ $LOG4JPROPFULL \\\norg.apache.sysds.api.DMLScript \\\n-f $SCRIPT_FILE \\\n-exec $SYSDS_EXEC_MODE \\\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3476] Spark with default settings
This commit fixes a minor bug introduced in the fix for Spark
default settings, by separating the Log4J setting into two
variables in the bash script bin/systemds. |
49,720 | 16.11.2022 14:40:12 | -3,600 | e32528220d26aa0053b145f6d9c6909f478bde9c | applySchema DML builtin
This commit adds the ApplySchema primitive to change the schema of
an already allocated FrameBlock in DML.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"new_path": "src/main/java/org/apache/sysds/common/Builtins.java",
"diff": "@@ -47,6 +47,7 @@ public enum Builtins {\nALS_PREDICT(\"alsPredict\", true),\nALS_TOPK_PREDICT(\"alsTopkPredict\", true),\nAPPLY_PIPELINE(\"apply_pipeline\", true),\n+ APPLY_SCHEMA(\"applySchema\", false),\nARIMA(\"arima\", true),\nASIN(\"asin\", false),\nATAN(\"atan\", false),\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/common/Types.java",
"new_path": "src/main/java/org/apache/sysds/common/Types.java",
"diff": "@@ -315,7 +315,7 @@ public class Types\n// Operations that require 2 operands\npublic enum OpOp2 {\n- AND(true), BITWAND(true), BITWOR(true), BITWSHIFTL(true), BITWSHIFTR(true),\n+ AND(true), APPLY_SCHEMA(false), BITWAND(true), BITWOR(true), BITWSHIFTL(true), BITWSHIFTR(true),\nBITWXOR(true), CBIND(false), CONCAT(false), COV(false), DIV(true),\nDROP_INVALID_TYPE(false), DROP_INVALID_LENGTH(false), EQUAL(true),\nFRAME_ROW_REPLICATE(true), GREATER(true), GREATEREQUAL(true), INTDIV(true),\n@@ -370,6 +370,7 @@ public class Types\ncase DROP_INVALID_LENGTH: return \"dropInvalidLength\";\ncase FRAME_ROW_REPLICATE: return \"freplicate\";\ncase VALUE_SWAP: return \"valueSwap\";\n+ case APPLY_SCHEMA: return \"applySchema\";\ndefault: return name().toLowerCase();\n}\n}\n@@ -405,6 +406,7 @@ public class Types\ncase \"dropInvalidLength\": return DROP_INVALID_LENGTH;\ncase \"freplicate\": return FRAME_ROW_REPLICATE;\ncase \"valueSwap\": return VALUE_SWAP;\n+ case \"applySchema\": return APPLY_SCHEMA;\ndefault: return valueOf(opcode.toUpperCase());\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"new_path": "src/main/java/org/apache/sysds/parser/BuiltinFunctionExpression.java",
"diff": "@@ -1565,7 +1565,14 @@ public class BuiltinFunctionExpression extends DataIdentifier\noutput.setBlocksize (id.getBlocksize());\noutput.setValueType(id.getValueType());\nbreak;\n-\n+ case APPLY_SCHEMA:\n+ checkNumParameters(2);\n+ checkMatrixFrameParam(getFirstExpr());\n+ checkMatrixFrameParam(getSecondExpr());\n+ output.setDataType(DataType.FRAME);\n+ output.setDimensions(id.getDim1(), id.getDim2());\n+ output.setBlocksize (id.getBlocksize());\n+ break;\ncase MAP:\ncheckNumParameters(getThirdExpr() != null ? 3 : 2);\ncheckMatrixFrameParam(getFirstExpr());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"new_path": "src/main/java/org/apache/sysds/parser/DMLTranslator.java",
"diff": "@@ -2603,6 +2603,7 @@ public class DMLTranslator\ncase DROP_INVALID_LENGTH:\ncase VALUE_SWAP:\ncase FRAME_ROW_REPLICATE:\n+ case APPLY_SCHEMA:\ncurrBuiltinOp = new BinaryOp(target.getName(), target.getDataType(),\ntarget.getValueType(), OpOp2.valueOf(source.getOpCode().name()), expr, expr2);\nbreak;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/functionobjects/Builtin.java",
"new_path": "src/main/java/org/apache/sysds/runtime/functionobjects/Builtin.java",
"diff": "@@ -50,7 +50,8 @@ public class Builtin extends ValueFunction\npublic enum BuiltinCode { AUTODIFF, SIN, COS, TAN, SINH, COSH, TANH, ASIN, ACOS, ATAN, LOG, LOG_NZ, MIN,\nMAX, ABS, SIGN, SQRT, EXP, PLOGP, PRINT, PRINTF, NROW, NCOL, LENGTH, LINEAGE, ROUND, MAXINDEX, MININDEX,\nSTOP, CEIL, FLOOR, CUMSUM, CUMPROD, CUMMIN, CUMMAX, CUMSUMPROD, INVERSE, SPROP, SIGMOID, EVAL, LIST,\n- TYPEOF, DETECTSCHEMA, ISNA, ISNAN, ISINF, DROP_INVALID_TYPE, DROP_INVALID_LENGTH, VALUE_SWAP, FRAME_ROW_REPLICATE,\n+ TYPEOF, APPLY_SCHEMA, DETECTSCHEMA, ISNA, ISNAN, ISINF, DROP_INVALID_TYPE,\n+ DROP_INVALID_LENGTH, VALUE_SWAP, FRAME_ROW_REPLICATE,\nMAP, COUNT_DISTINCT, COUNT_DISTINCT_APPROX, UNIQUE}\n@@ -111,6 +112,7 @@ public class Builtin extends ValueFunction\nString2BuiltinCode.put( \"dropInvalidLength\", BuiltinCode.DROP_INVALID_LENGTH);\nString2BuiltinCode.put( \"_map\", BuiltinCode.MAP);\nString2BuiltinCode.put( \"valueSwap\", BuiltinCode.VALUE_SWAP);\n+ String2BuiltinCode.put( \"applySchema\", BuiltinCode.APPLY_SCHEMA);\n}\nprivate Builtin(BuiltinCode bf) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/CPInstructionParser.java",
"diff": "@@ -170,6 +170,7 @@ public class CPInstructionParser extends InstructionParser\nString2CPInstructionType.put( \"dropInvalidLength\" , CPType.Binary);\nString2CPInstructionType.put( \"freplicate\" , CPType.Binary);\nString2CPInstructionType.put( \"valueSwap\" , CPType.Binary);\n+ String2CPInstructionType.put( \"applySchema\" , CPType.Binary);\nString2CPInstructionType.put( \"_map\" , CPType.Ternary); // _map represents the operation map\nString2CPInstructionType.put( \"nmax\", CPType.BuiltinNary);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/BinaryFrameFrameCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/BinaryFrameFrameCPInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.cp;\n+import org.apache.sysds.common.Types;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.frame.data.FrameBlock;\nimport org.apache.sysds.runtime.matrix.operators.BinaryOperator;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n+import java.util.Arrays;\n+\npublic class BinaryFrameFrameCPInstruction extends BinaryCPInstruction\n{\nprotected BinaryFrameFrameCPInstruction(Operator op, CPOperand in1,\n@@ -55,6 +58,16 @@ public class BinaryFrameFrameCPInstruction extends BinaryCPInstruction\n// Attach result frame with FrameBlock associated with output_name\nec.setFrameOutput(output.getName(), retBlock);\n}\n+ else if(getOpcode().equals(\"applySchema\")) {\n+ // apply frame schema from DML\n+ Types.ValueType[] schema = new Types.ValueType[inBlock2.getNumColumns()];\n+ for(int i=0; i<inBlock2.getNumColumns(); i++)\n+ schema[i] = Types.ValueType.fromExternalString(inBlock2.get(0, i).toString());\n+ FrameBlock out = new FrameBlock(schema);\n+ out.copy(inBlock1);\n+ out.setSchema(schema);\n+ ec.setFrameOutput(output.getName(), out);\n+ }\nelse {\n// Execute binary operations\nBinaryOperator dop = (BinaryOperator) _optr;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/frame/ApplySchemaTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.frame;\n+\n+import org.apache.sysds.api.DMLScript;\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.common.Types.ExecType;\n+import org.apache.sysds.common.Types.FileFormat;\n+import org.apache.sysds.hops.OptimizerUtils;\n+import org.apache.sysds.runtime.frame.data.FrameBlock;\n+import org.apache.sysds.runtime.io.FrameWriter;\n+import org.apache.sysds.runtime.io.FrameWriterFactory;\n+import org.apache.sysds.runtime.util.UtilFunctions;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.AfterClass;\n+import org.junit.Assert;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+\n+import java.security.SecureRandom;\n+\n+public class ApplySchemaTest extends AutomatedTestBase {\n+ private final static String TEST_NAME = \"applySchema\";\n+ private final static String TEST_DIR = \"functions/frame/\";\n+ private static final String TEST_CLASS_DIR = TEST_DIR + ApplySchemaTest.class.getSimpleName() + \"/\";\n+\n+ private final static int rows = 100;\n+ private final static Types.ValueType[] schemaStrings = {Types.ValueType.INT32, Types.ValueType.BOOLEAN, Types.ValueType.FP64};\n+\n+ @BeforeClass\n+ public static void init() {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+\n+ @AfterClass\n+ public static void cleanUp() {\n+ if (TEST_CACHE_ENABLED) {\n+ TestUtils.clearDirectory(TEST_DATA_DIR + TEST_CLASS_DIR);\n+ }\n+ }\n+\n+ enum TestType {\n+ STRING_IN_DOUBLE,\n+ DOUBLE_IN_INT\n+ }\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[]{\"S\", \"F\"}));\n+ if (TEST_CACHE_ENABLED) {\n+ setOutAndExpectedDeletionDisabled(true);\n+ }\n+ }\n+\n+\n+ @Test\n+ public void testapply1CP() {\n+ runApplySchemaTest(schemaStrings, rows, schemaStrings.length, TestType.STRING_IN_DOUBLE, 3, ExecType.CP);\n+ }\n+\n+ @Test\n+ public void testapplySchema2Spark() {\n+ runApplySchemaTest(schemaStrings, rows, schemaStrings.length, TestType.DOUBLE_IN_INT, 1, ExecType.CP);\n+ }\n+\n+\n+\n+ private void runApplySchemaTest(Types.ValueType[] schema, int rows, int cols, TestType test, int colNum, ExecType et) {\n+ Types.ExecMode platformOld = setExecMode(et);\n+ boolean oldFlag = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\n+ boolean sparkConfigOld = DMLScript.USE_LOCAL_SPARK_CONFIG;\n+// setOutputBuffering(true);\n+ try {\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ programArgs = 
new String[]{\"-args\", input(\"A\"), String.valueOf(rows),\n+ Integer.toString(cols), String.valueOf(test), String.valueOf(colNum), output(\"S\"), output(\"F\")};\n+ FrameBlock frame1 = new FrameBlock(schema);\n+ FrameWriter writer = FrameWriterFactory.createFrameWriter(FileFormat.CSV);\n+\n+\n+ double[][] A = getRandomMatrix(rows, 3, -Float.MAX_VALUE, Float.MAX_VALUE, 0.7, 2373);\n+ initFrameDataString(frame1, A, schema, rows, 3);\n+ writer.writeFrameToHDFS(frame1.slice(0, rows-1, 0, schema.length-1, new FrameBlock()), input(\"A\"), rows, schema.length);\n+ runTest(true, false, null, -1);\n+\n+ FrameBlock detetctedSchema = readDMLFrameFromHDFS(\"S\", FileFormat.BINARY);\n+ FrameBlock changedFrame = readDMLFrameFromHDFS(\"F\", FileFormat.BINARY);\n+\n+ //verify output schema\n+ for (int i = 0; i < schema.length; i++) {\n+ Assert.assertEquals(\"Wrong result column : \" + i + \".\",\n+ detetctedSchema.get(0, i).toString(), changedFrame.getSchema()[i].toString());\n+// System.out.println(detetctedSchema.get(0, i).toString() +\" \"+changedFrame.getSchema()[i].toString());\n+ }\n+\n+ }\n+ catch (Exception ex) {\n+ throw new RuntimeException(ex);\n+ }\n+ finally {\n+ rtplatform = platformOld;\n+ DMLScript.USE_LOCAL_SPARK_CONFIG = sparkConfigOld;\n+ OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = oldFlag;\n+ OptimizerUtils.ALLOW_AUTO_VECTORIZATION = true;\n+ OptimizerUtils.ALLOW_OPERATOR_FUSION = true;\n+ }\n+ }\n+\n+ public static void initFrameDataString(FrameBlock frame1, double[][] data, Types.ValueType[] lschema, int rows, int cols) {\n+ for (int j = 0; j < cols; j++) {\n+ Types.ValueType vt = lschema[j];\n+ switch (vt) {\n+ case STRING:\n+ String[] tmp1 = new String[rows];\n+ for (int i = 0; i < rows; i++)\n+ tmp1[i] = (String) UtilFunctions.doubleToObject(vt, data[i][j]);\n+ frame1.appendColumn(tmp1);\n+ break;\n+ case BOOLEAN:\n+ boolean[] tmp2 = new boolean[rows];\n+ for (int i = 0; i < rows; i++)\n+ data[i][j] = (tmp2[i] = (Boolean) UtilFunctions.doubleToObject(vt, data[i][j], false)) ? 1 : 0;\n+ frame1.appendColumn(tmp2);\n+ break;\n+ case INT32:\n+ int[] tmp3 = new int[rows];\n+ for (int i = 0; i < rows; i++)\n+ data[i][j] = tmp3[i] = (Integer) UtilFunctions.doubleToObject(Types.ValueType.INT32, data[i][j], false);\n+ frame1.appendColumn(tmp3);\n+ break;\n+ case INT64:\n+ long[] tmp4 = new long[rows];\n+ for (int i = 0; i < rows; i++)\n+ data[i][j] = tmp4[i] = (Long) UtilFunctions.doubleToObject(Types.ValueType.INT64, data[i][j], false);\n+ frame1.appendColumn(tmp4);\n+ break;\n+ case FP32:\n+ double[] tmp5 = new double[rows];\n+ for (int i = 0; i < rows; i++)\n+ tmp5[i] = (Float) UtilFunctions.doubleToObject(vt, data[i][j], false);\n+ frame1.appendColumn(tmp5);\n+ break;\n+ case FP64:\n+ double[] tmp6 = new double[rows];\n+ for (int i = 0; i < rows; i++)\n+ tmp6[i] = (Double) UtilFunctions.doubleToObject(vt, data[i][j], false);\n+ frame1.appendColumn(tmp6);\n+ break;\n+ default:\n+ throw new RuntimeException(\"Unsupported value type: \" + vt);\n+ }\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/frame/applySchema.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+X = read($1, rows=$2, cols=$3, data_type=\"frame\", format=\"csv\", header=FALSE);\n+case = $4\n+colNum=$5\n+\n+R = detectSchema(X);\n+\n+if(case == \"STRING_IN_DOUBLE\")\n+{\n+ for(i in 1:10)\n+ X[i, colNum] = \"abc\"\n+}\n+\n+R = detectSchema(X);\n+XT = applySchema(X, R)\n+\n+print(toString(XT, rows=10))\n+write(R, $6, format=\"binary\");\n+write(Xt, $7, format=\"binary\");\n\\ No newline at end of file\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3272] applySchema DML builtin
This commit adds the ApplySchema primitive to change the schema of
an already allocated FrameBlock in DML.
Closes #1732 |
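A condensed sketch of what the new applySchema opcode does on the CP side, following the BinaryFrameFrameCPInstruction change above (instruction plumbing and error handling omitted); the second frame is expected to carry value-type names in its first row, e.g. as produced by detectSchema():

```java
import org.apache.sysds.common.Types.ValueType;
import org.apache.sysds.runtime.frame.data.FrameBlock;

public class ApplySchemaSketch {
    static FrameBlock applySchema(FrameBlock data, FrameBlock schemaRow) {
        // Read the target schema from the first row of the second frame.
        ValueType[] schema = new ValueType[schemaRow.getNumColumns()];
        for (int i = 0; i < schema.length; i++)
            schema[i] = ValueType.fromExternalString(schemaRow.get(0, i).toString());
        // Copy the data into a frame with the new schema.
        FrameBlock out = new FrameBlock(schema);
        out.copy(data);
        out.setSchema(schema);
        return out;
    }
}
```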
49,689 | 14.12.2022 17:19:18 | -3,600 | d1c2d9100502e966215b85cf40de136f65e5bb98 | Statistics maintenance for MatrixObjectFuture | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/MatrixObjectFuture.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/MatrixObjectFuture.java",
"diff": "package org.apache.sysds.runtime.controlprogram.context;\n+import org.apache.sysds.api.DMLScript;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.controlprogram.caching.CacheStatistics;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.lineage.LineageCache;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n@@ -52,7 +54,15 @@ public class MatrixObjectFuture extends MatrixObject\n}\npublic MatrixBlock acquireRead() {\n- return acquireReadIntern();\n+ long t0 = DMLScript.STATISTICS ? System.nanoTime() : 0;\n+ //core internal acquire (synchronized per object)\n+ MatrixBlock ret = acquireReadIntern();\n+\n+ if( DMLScript.STATISTICS ){\n+ long t1 = System.nanoTime();\n+ CacheStatistics.incrementAcquireRTime(t1-t0);\n+ }\n+ return ret;\n}\nprivate synchronized MatrixBlock acquireReadIntern() {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3466] Statistics maintenance for MatrixObjectFuture |
49,698 | 17.12.2022 20:00:17 | -19,080 | fd3a305b4e2922f23f4e68bbc11d30e8726c7f99 | script to check the release machine requirements
* script to check the dev machine requirements
* script checks for
- Ubuntu version 18 & 20
- Maven build
- GPG
Usage:
from the top directory:
> python ./dev/release/check-release-machine.py | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "dev/release/check-release-machine.py",
"diff": "+#!/usr/bin/env python3\n+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+################################################################################\n+## File: check-release-machine.py\n+## Desc: Verify the required software (Maven, gpg, OS type etc.) on the release machine\n+################################################################################\n+\n+# https://docs.python.org/3/library/shlex.html\n+import shlex\n+from enum import Enum, auto\n+import os\n+import sys\n+import subprocess\n+\n+\n+class System(Enum):\n+ Ubuntu = auto()\n+\n+SUPPORTED_SYSTEMS = {\n+ System.Ubuntu: {\"18\", \"20\"}\n+}\n+\n+def check_python_version():\n+\n+ if sys.version_info.major == 3 and sys.version_info.minor in (6, 7, 8, 9, 10):\n+ return\n+ version = \"{}.{}\".format(sys.version_info.major, sys.version_info.minor)\n+ raise RuntimeError(\"Unsupported Python version {}. \".format(version))\n+\n+def run(command: str, check=True) -> subprocess.CompletedProcess:\n+\n+ proc = subprocess.run(shlex.split(command), check=check)\n+\n+ return proc\n+\n+def detect_linux_distro() -> (System, str):\n+\n+ with open('/etc/os-release') as os_release:\n+ lines = [line.strip() for line in os_release.readlines() if line.strip() != '']\n+ info = {k: v.strip(\"'\\\"\") for k, v in (line.split('=', maxsplit=1) for line in lines)}\n+\n+ name = info['NAME']\n+\n+ if name.startswith(\"Ubuntu\"):\n+ system = System.Ubuntu\n+ version = info['VERSION_ID']\n+ else:\n+ raise RuntimeError(\"this os is not supported by this script\")\n+\n+ return system, version\n+\n+def check_linux_distro(sytem: System, version: str) -> bool:\n+\n+ if '.' in version:\n+ version = version.split('.')[0]\n+\n+ if len(SUPPORTED_SYSTEMS[system]) == 0:\n+ # print that this script does not support this distro\n+ return False\n+\n+ return True\n+\n+def check_maven_installed() -> bool:\n+\n+ process = run(\"which mvn\", check=False)\n+ return process.returncode == 0\n+\n+def check_gpg_installed() -> bool:\n+\n+ process = run(\"whereis gpg\", check=False)\n+ return process.returncode == 0\n+\n+def main():\n+\n+ if check_maven_installed():\n+ print('Maven is available')\n+\n+ if check_gpg_installed():\n+ print('Gpg is installed')\n+\n+ check_python_version()\n+\n+ system, version = detect_linux_distro()\n+\n+\n+if __name__ == '__main__':\n+ try:\n+ main()\n+ except Exception as err:\n+ print('Failed with exception')\n+ raise err\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3187] script to check the release machine requirements (#1551)
* script to check the dev machine requirements
* script checks for
- Ubuntu version 18 & 20
- Maven build
- GPG
Usage:
from the top directory:
> python ./dev/release/check-release-machine.py |
49,698 | 17.12.2022 20:31:47 | -19,080 | b958fa83f896f3933e5b3a894dfa4f4c4591a21c | Add documentation for the release process
First draft:
* Release process description
* Release machine requirements
* Voting process
* Checklists of the release process, from choosing the release manager to release conclusion.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "docs/site/release-process.md",
"diff": "+---\n+layout: site\n+title: Release Process\n+---\n+<!--\n+{% comment %}\n+Licensed to the Apache Software Foundation (ASF) under one or more\n+contributor license agreements. See the NOTICE file distributed with\n+this work for additional information regarding copyright ownership.\n+The ASF licenses this file to you under the Apache License, Version 2.0\n+(the \"License\"); you may not use this file except in compliance with\n+the License. You may obtain a copy of the License at\n+\n+http://www.apache.org/licenses/LICENSE-2.0\n+\n+Unless required by applicable law or agreed to in writing, software\n+distributed under the License is distributed on an \"AS IS\" BASIS,\n+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+See the License for the specific language governing permissions and\n+limitations under the License.\n+{% endcomment %}\n+-->\n+\n+## Release Process\n+\n+The Apache SystemDS project publishes new version of the software on a regular basis.\n+Releases are the interface of the project with the public and most users interact with\n+the project only through the released software (this is intentional!). Releases are a\n+formal offering, which are publicly voted by the SystemDS community.\n+\n+Releases are executed by a Release Manager, who is one of the [project committers](https://whimsy.apache.org/roster/committee/systemds).\n+\n+Release has legal consequences to the team. Make sure to comply with all the procedures\n+outlined by the ASF via [Release Policy](https://www.apache.org/legal/release-policy.html) and\n+[Release Distribution](https://infra.apache.org/release-distribution.html). Any deviations or\n+compromises are to be discussed in private@ or dev@ mail list appropriately.\n+\n+\n+## Before you begin\n+\n+Install the basic software and procure the required code and dependencies, credentials.\n+\n+OS Requirement: Linux\n+\n+RAM requirement: 8 GB +\n+\n+Software Requirements:\n+\n+ 1. Apache Maven (3.8.1 or newer). [link](https://maven.apache.org/download.cgi)\n+ 2. GnuPG [link](https://www.gnupg.org/download/index.html)\n+ 3. Install jq utility (size 1MB). [link](https://stedolan.github.io/jq/download/)\n+\n+\n+Credential Requirements:\n+\n+- GPG passphrase\n+- Apache ID and Password\n+- GitHub ID and Password\n+- PyPi.org ID and password (if applicable)\n+\n+\n+## Architecture of the release pipeline\n+\n+An important part of the software development life cycle (SDLC)\n+is ensuring software release follow the ASF approved processes.\n+\n+The release pipeline consists of the following steps:\n+ 1. Builds the artifacts (binary, zip files) with source code.\n+ 2. Pushes the artifacts to staging repository\n+ 3. Check for the vulnerabilities. Voting process.\n+\n+The project PMC and community inspects the build files by\n+downloading and testing. If it passes their requirements, they vote\n+appropriately in the mailing list. The release version metadata is\n+updated and the application is deployed to the public release.\n+\n+\n+## Setting up your environment\n+\n+\n+## Access to Apache Nexus repository\n+\n+Note: Only PMC can push to the Release repo for legal reasons, but committer can also act as the Release Manager with consensus by\n+the team on the dev@ mail list.\n+\n+Apache Nexus repository is located at [repository.apache.org](https://repository.apache.org), it is Nexus 2.x Profession edition.\n+\n+1. Login with Apache Credentials\n+2. 
Confirm access to `org.apache.systemds` by visiting https://repository.apache.org/#stagingProfiles;1486a6e8f50cdf\n+\n+\n+## Add future release version to JIRA\n+\n+1. In JIRA, navigate to `SYSTEMDS > Administration > Versions`.\n+2. Add a new release version.\n+\n+Know more about versions in JIRA at\n+[`view-and-manage-a-projects-versions` guide](https://support.atlassian.com/jira-core-cloud/docs/view-and-manage-a-projects-versions/)\n+\n+## Performance regressions\n+\n+Investigating performance regressions is a collective effort. Regressions can happen during\n+release process, but they should be investigated and fixed.\n+\n+Release Manger should make sure that the JIRA issues are filed for each regression and mark\n+`Fix Version` to the to-be-released version.\n+\n+The regressions are to be informed to the dev@ mailing list, through release duration.\n+\n+## Release tags or branch\n+\n+Create release branch from the `main` with version named `2.x.0-SNAPSHOT`.\n+\n+### The chosen commit for RC\n+\n+Release candidates are built from single commits off the development branch. Before building,\n+the version must be set to a non `SNAPSHOT`/`dev` version.\n+\n+[Discussion](https://lists.apache.org/thread/277vks8q72cxxgmywxm7cblqvgn3yzgj) on what is covered in voting for a commit.\n+\n+### Inform mailing list\n+\n+Mail [email protected] of the release tags and triage information.\n+This list of pending issues will be refined and updated collaboratively.\n+\n+## Creating builds\n+\n+### Checklist\n+\n+1. Release Manager's GPG key is publised to [dist.apache.org](https://dist.apache.org/repos/dist/release/systemds/KEYS)\n+2. Release Manager's GPG key is configured in `git` configuration\n+3. Set `JAVA_HOME` to JDK 8\n+4. `export GNUPGHOME=$HOME/.gnupg`\n+\n+### Release build to create a release candidate\n+\n+0. Dry run the release build\n+\n+```sh\n+./do-release.sh -n\n+```\n+\n+1. In the shell, build artifacts and deploy\n+\n+```sh\n+./do-release.sh\n+```\n+\n+Answer the prompts with appropriate details as shown:\n+\n+```\n+Branch [master]: master\n+Current branch version is 2.1.0-SNAPSHOT.\n+Release [2.1.0]:\n+RC # [1]: 1\n+ASF user [ubuntu]: firstname\n+Full name [Firstname Lastname]:\n+GPG key [[email protected]]:\n+================\n+Release details:\n+BRANCH: master\n+VERSION: 2.1.0\n+TAG: 2.1.0-rc1\n+NEXT: 2.1.1-SNAPSHOT\n+ASF USER: firstname\n+GPG KEY ID: [email protected]\n+FULL NAME: Firstname Lastname\n+E-MAIL: [email protected]\n+================\n+Is this info correct [Y/n]?\n+```\n+\n+\n+## Upload release candidate to PyPi\n+\n+1. Download python binary artifacts\n+2. Deploy release candidate to PyPi\n+\n+## Prepare documentation\n+\n+### Build and verify JavaDoc\n+\n+- Confirm that version names are appropriate.\n+\n+### Build the Pydoc API reference\n+\n+The docs will generated in `build` directory.\n+\n+\n+## Snapshot deployment setup\n+\n+\n+### Use a fresh SystemDS Repository\n+\n+Since the artifacts will be deployed publicly, use a completely fresh\n+copy of the SystemDS project used only for building and deploying.\n+\n+Therefore, create a directory such as\n+\n+```sh\n+mkdir ~/systemds-release\n+```\n+\n+In this directory, clone a copy of the project.\n+\n+```sh\n+git clone https://github.com/apache/systemds.git\n+```\n+\n+## Post Release Publish\n+\n+### Checklist\n+\n+#### 1. 
All artifacts and checksums present\n+\n+Verify that each expected artifact is present at\n+https://dist.apache.org/repos/dist/dev/systemds/ and that\n+each artifact has accompanying checksums (such as .asc and .sha512)\n+\n+#### 2. Release candidate build\n+\n+The release candidate should build on Windows, OS X, and Linux. To do\n+this cleanly, the following procedure can be performed.\n+\n+Note: Use an empty local maven repository\n+\n+Example:\n+\n+```sh\n+git clone https://github.com/apache/systemds.git\n+cd systemds\n+git tag -l\n+git checkout tags/2.1.0-rc1 -b 2.1.0-rc1\n+mvn -Dmaven.repo.local=$HOME/.m2/temp-repo clean package -P distribution\n+```\n+\n+#### 3. Test suite check\n+\n+The entire test suite should pass on Windows, OS X, Linux.\n+\n+For verification:\n+\n+```sh\n+mvn clean verify\n+```\n+\n+\n+### LICENSE and NOTICE\n+\n+Each artifact must contain LICENSE and NOTICE files. These files must\n+reflect the contents of the artifacts. If the project dependencies\n+(i.e., libraries) have changed since the last release, the LICENSE and\n+NOTICE files to be updated to reflect these changes.\n+\n+For more information, see:\n+\n+1. http://www.apache.org/dev/#releases\n+2. http://www.apache.org/dev/licensing-howto.html\n+\n+\n+### Build src artifact and verify\n+\n+The project should also be built using the `src` (tgz and zip).\n+\n+```sh\n+tar -xvzf systemds-2.1.0-src.tgz\n+cd systemds-2.1.0-src\n+mvn clean package -P distribution\n+mvn verify\n+```\n+\n+### Single node standalone\n+\n+The standalone `tgz` and `zip` artifacts contain `systemds` files.\n+Verify that the algorithms can be run on single node using these\n+standalone distributions.\n+\n+Here is an example:\n+\n+see standalone guide of the documenation for more details.\n+\n+```sh\n+tar -xvzf systemds-2.1.0-bin.tgz\n+cd systemds-2.1.0-bin\n+wget -P data/ http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data\n+echo '{\"rows\": 306, \"cols\": 4, \"format\": \"csv\"}' > data/haberman.data.mtd\n+echo '1,1,1,2' > data/types.csv\n+echo '{\"rows\": 1, \"cols\": 4, \"format\": \"csv\"}' > data/types.csv.mtd\n+\n+systemds scripts/algorithms/Univar-Stats.dml -nvargs X=data/haberman.data TYPES=data/types.csv STATS=data/univarOut.mtx CONSOLE_OUTPUT=TRUE\n+cd ..\n+```\n+\n+Also check for Hadoop, and spark\n+\n+\n+#### Performance suite\n+\n+Verify that the performance suite executes on Spark and Hadoop.\n+The datasizes are 80MB, 800MB, 8GB, and 80GB.\n+\n+\n+## Voting process\n+\n+Following a successful release candidate vote by SystemDS PMC members and the SystemDS community\n+on the dev mailing list, the release candidate shall be approved.\n+\n+## Release\n+\n+### Release deployment\n+\n+The scripts will execute the release steps. and push the changes\n+to the releases.\n+\n+### Deploy artifacts to Maven Central\n+\n+In the [Apache Nexus Repo](https://repository.apache.org), release\n+the staged artifacts to the Maven Central repository.\n+\n+Steps:\n+1. In the `Staging Repositories` section, find the relevant release candidate entry\n+2. Select `Release`\n+3. Drop all the other release candidates\n+\n+### Deploy Python artifacts to PyPI\n+\n+- Use upload script.\n+- Verify that the files at https://pypi.org/project/systemds/#files are correct.\n+\n+### Update website\n+\n+- Listing the release\n+- Publish Python API reference, and the Java API reference\n+\n+### Mark the released version in JIRA\n+\n+1. Go to https://issues.apache.org/jira/plugins/servlet/project-config/SYSTEMDS/versions\n+2. 
Hover over the released version and click `Release`\n+\n+### Recordkeeping\n+\n+Update the record at https://reporter.apache.org/addrelease.html?systemds\n+\n+### Checklist\n+\n+1. Maven artifacts released and indexed in the [Maven Central Repository](https://search.maven.org/search?q=g:org.apache.systemds)\n+2. Source distribution available in the [release repository `/release/systemds/`](https://dist.apache.org/repos/dist/release/systemds/)\n+3. Source distribution removed from the [dev repository `/dev/systemds/`](https://dist.apache.org/repos/dist/dev/systemds/)\n+4. Website is completely updated (Release, API manuals)\n+5. The release tag available on GitHub at https://github.com/apache/systemds/tags\n+6. The release notes are published on GitHub at https://github.com/apache/systemds/release\n+7. Release version is listed at reporter.apache.org\n+\n+### Announce Release\n+\n+Announce Released version within the project and public.\n+\n+#### Apache Mailing List\n+\n+1. Announce on the dev@ mail list that the release has been completed\n+2. Announce the release on the [email protected] mail list, listing major improving and contributions.\n+ This can only be done from the `@apache.org` email address. This email has to be in plain text.\n+\n+#### Social media\n+\n+Publish on Twitter on @ApacheSystemDS.\n+\n+## Checklist to declare the release process complete\n+\n+1. Release announce on the dev@, [email protected] mail list\n+2. Release recorded in reporter.apache.org\n+3. Completion declared on the dev@ mail list\n+\n+## Improve the process\n+\n+Once the release is complete, let us retrospectively update changes and improvements\n+to this guide. Help the community adapt this guide for release validation before casting their\n+vote.\n+\n+Perhaps some steps can be simplified or require more clarification.\n+\n+# Appendix\n+\n+### Generate GPG key\n+\n+1. Create a folder for GNUPGHOME or use default `~/.gnupg`.\n+\n+```sh\n+sudo mkdir -m 700 /usr/local/.gnupg\n+```\n+\n+2. Generate the gpg key\n+\n+```sh\n+sudo GNUPGHOME=/usr/local/.gnupg gpg --gen-key\n+```\n+\n+output will be, like the following:\n+\n+```\n+gpg: /usr/local/.gnupg/trustdb.gpg: trustdb created\n+gpg: key F164B430F91D6*** marked as ultimately trusted\n+gpg: directory '/usr/local/.gnupg/openpgp-revocs.d' created\n+gpg: revocation certificate stored as '/usr/local/.gnupg/openpgp-revocs.d/AD**...*.rev'\n+public and secret key created and signed.\n+```\n+\n+3. Export the environmental variable\n+\n+Note: Using `sudo` would add credentials in root users\n+\n+```sh\n+export GNUPGHOME=/usr/local/.gnupg\n+\n+gpg --homedir $GNUPGHOME --list-keys\n+gpg --homedir $GNUPGHOME --list-secret-keys\n+```\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3187] Add documentation for the release process
First draft:
* Release process description
* Release machine requirements
* Voting process
* Checklists of the release process from choosing the release manager to release conclusion.
Closes #1752. |
49,698 | 18.12.2022 00:02:19 | -19,080 | 0c3c3c297d083efa4899de6533b3e36f854ce99f | Verify release scripts with github workflows
Confidence check that the shell scripts used for release won't break.
Maven release-related plugins work as intended.
Closes | [
{
"change_type": "ADD",
"old_path": null,
"new_path": ".github/workflows/release-scripts.yml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+# This workflow will build a package using Maven and then publish it to GitHub packages when a release is created\n+# For more information see: https://github.com/actions/setup-java/blob/main/docs/advanced-usage.md#apache-maven-with-a-settings-path\n+\n+name: Test Release scripts with dry run\n+\n+on:\n+ push:\n+ release:\n+ types: [created]\n+\n+ workflow_dispatch:\n+\n+jobs:\n+ build:\n+\n+ runs-on: ubuntu-latest\n+ permissions:\n+ contents: read\n+ packages: write\n+\n+ steps:\n+ # Java setup docs:\n+ # https://github.com/actions/setup-java/blob/main/docs/advanced-usage.md#installing-custom-java-package-type\n+ - uses: actions/checkout@v2\n+ - name: Set up JDK 11\n+ uses: actions/setup-java@v2\n+ with:\n+ # tools.jar removed from '9', '11'. see https://openjdk.java.net/jeps/220\n+ java-version: '11'\n+ distribution: 'adopt'\n+ java-package: 'jdk'\n+\n+ - run: printf \"JAVA_HOME = $JAVA_HOME \\n\"\n+\n+ - name: Cache local Maven repository\n+ uses: actions/cache@v2\n+ with:\n+ path: ~/.m2/repository\n+ key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}\n+ restore-keys: |\n+ ${{ runner.os }}-maven-\n+\n+ - name: gpg key generation - transient\n+ run: |\n+ printf \"HOME: $HOME \\n\"\n+ ls $HOME\n+ export GNUPGHOME=\"$HOME/.gnupg\"\n+ ls *\n+ cat >tempkey <<EOF\n+ %echo Generating a basic OpenPGP key\n+ Key-Type: DSA\n+ Key-Length: 1024\n+ Subkey-Type: ELG-E\n+ Subkey-Length: 1024\n+ Name-Real: J143 Bot\n+ Name-Comment: This is automation script\n+ Name-Email: j143+[bot]@protonmail.com\n+ Expire-Date: 0\n+ Passphrase: asdfghjkl\n+ # Do a commit here, so that we can later print \"done\" :-)\n+ %commit\n+ %echo done\n+ EOF\n+ gpg --batch --generate-key tempkey\n+ sudo cp -rp $HOME/.gnupg $HOME/gnupghome\n+ ls $HOME/gnupghome\n+\n+ # the last eight digits of the finger print is key id\n+ - name: Set the GPG key ID\n+ run: |\n+ gpg --list-keys\n+ GPG_KEY_FINGERPRINT=\"$(gpg --list-keys | head -n 4 | tail -n 1 | sed 's/^ *//g')\"\n+ GPG_KEY_ID=\"${GPG_KEY_FINGERPRINT: (-8)}\"\n+\n+ - name: Run invoke script\n+ run: |\n+ cd $GITHUB_WORKSPACE\n+ printf \"$GITHUB_WORKSPACE --> github workspace\\n\"\n+ export GNUPGHOME=\"$HOME/gnupghome\"\n+ gpgconf --kill gpg-agent\n+ ls $GNUPGHOME\n+ GPG_KEY_FINGERPRINT=\"$(gpg --list-keys | head -n 4 | tail -n 1 | sed 's/^ *//g')\"\n+ GPG_KEY_ID=\"${GPG_KEY_FINGERPRINT: (-8)}\"\n+ export GPG_KEY=$GPG_KEY_ID\n+ bash $GITHUB_WORKSPACE/dev/release/do-release.sh -n -g\n+ env:\n+ DRY_RUN: '1'\n+ GIT_BRANCH: 'main'\n+ RELEASE_VERSION: '3.0.0'\n+ ASF_USERNAME: 'firstname'\n+ GIT_NAME: 'Firstname Lastname'\n+ 
GIT_EMAIL: 'j143+[bot]@protonmail.com'\n+ CORRECT_RELEASE_INFO: '1'\n+ ASF_PASSWORD: 'asdfghjkl' # wrong password, only to run this workflow\n+ GPG_PASSPHRASE: 'asdfghjkl' # wrong passphrase, only to run this workflow\n+\n+# - name: Build with Maven\n+# run: mvn -B package --file pom.xml\n+\n+# - name: Publish to GitHub Packages Apache Maven\n+# run: mvn deploy -s $GITHUB_WORKSPACE/settings.xml\n+# env:\n+# GITHUB_TOKEN: ${{ github.token }}\n"
},
{
"change_type": "MODIFY",
"old_path": "dev/release/do-release.sh",
"new_path": "dev/release/do-release.sh",
"diff": "@@ -30,24 +30,32 @@ SELF=$(cd $(dirname $0) && pwd)\n# discussion on optional arguments\n# https://stackoverflow.com/q/18414054\n-while getopts \":n\" opt; do\n+while getopts \":ng\" opt; do\ncase $opt in\nn) DRY_RUN=1 ;;\n+ g) GITHUB_CI=1 ;;\n\\?) error \"Invalid option: $OPTARG\" ;;\nesac\ndone\nDRY_RUN=${DRY_RUN:-0}\n+GITHUB_CI=${GITHUB_CI:-0}\ncleanup_repo\n# Ask for release information\nget_release_info\n+if is_github_ci; then\n+ printf \"\\n Building via GITHUB actions \\n\"\n+fi\n+\n# tag\nrun_silent \"Creating release tag $RELEASE_TAG...\" \"tag.log\" \\\n\"$SELF/create-tag.sh\"\n+cat tag.log\n+\n# run_silent \"Publish Release Candidates to the Nexus Repo...\" \"publish-snapshot.log\" \\\n# \"$SELF/release-build.sh\" publish-snapshot\n@@ -63,10 +71,32 @@ fi\n#\n# are to be used together.\n+if ! is_github_ci; then\nrun_silent \"Publish Release Candidates to the Nexus Repo...\" \"publish.log\" \\\n\"$SELF/release-build.sh\" publish-release\n+fi\nif is_dry_run; then\n# restore the pom.xml file updated during release step\ngit restore pom.xml\nfi\n+\n+if is_github_ci; then\n+ printf \"\\n Release tag process is done via GITHUB actions \\n\"\n+ exit 0\n+fi\n+\n+if ! is_dry_run; then\n+\n+ printf \"\\n Release candidate artifacts are built and published to repository.apache.org, dist.apache.org \\n\"\n+ printf \"\\n Voting needs to be done for these artifacts for via the mailing list \\n\"\n+ exit 0\n+fi\n+\n+# Dry run step\n+if is_dry_run; then\n+\n+ printf \"\\n Release candidate artifacts are built and published to repository.apache.org, dist.apache.org \\n\"\n+ printf \"\\n Please delete these artifacts generated with dry run to ensure that the release scripts are generating correct artifacts. \\n\"\n+ exit 0\n+fi\n"
},
{
"change_type": "MODIFY",
"old_path": "dev/release/release-utils.sh",
"new_path": "dev/release/release-utils.sh",
"diff": "@@ -276,3 +276,6 @@ is_dry_run() {\n[[ \"$DRY_RUN\" = 1 ]]\n}\n+is_github_ci() {\n+ [[ \"$GITHUB_CI\" = 1 ]]\n+}\n\\ No newline at end of file\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3480] Verify release scripts with github workflows
- Confidence check that the shell scripts used for release won't break.
- Maven release-related plugins work as intended.
Closes #1753. |
49,706 | 21.12.2022 13:18:27 | -3,600 | e2e560a2d6382d9b290f0e8f25e41772ad6c91ec | applySchema FrameBlock parallel
This commit improves the performance of applySchema through parallelization,
from 0.8-0.9 sec to 0.169 sec on a 64kx2k FrameBlock; also included
are tests with 100% test coverage of applySchema. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/FrameBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/FrameBlock.java",
"diff": "@@ -51,12 +51,14 @@ import org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.codegen.CodegenUtils;\nimport org.apache.sysds.runtime.controlprogram.caching.CacheBlock;\n+import org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\nimport org.apache.sysds.runtime.controlprogram.parfor.util.IDSequence;\nimport org.apache.sysds.runtime.frame.data.columns.Array;\nimport org.apache.sysds.runtime.frame.data.columns.ArrayFactory;\nimport org.apache.sysds.runtime.frame.data.columns.ColumnMetadata;\nimport org.apache.sysds.runtime.frame.data.iterators.IteratorFactory;\nimport org.apache.sysds.runtime.frame.data.lib.FrameFromMatrixBlock;\n+import org.apache.sysds.runtime.frame.data.lib.FrameLibApplySchema;\nimport org.apache.sysds.runtime.frame.data.lib.FrameLibDetectSchema;\nimport org.apache.sysds.runtime.functionobjects.ValueComparisonFunction;\nimport org.apache.sysds.runtime.instructions.cp.BooleanObject;\n@@ -151,6 +153,14 @@ public class FrameBlock implements CacheBlock<FrameBlock>, Externalizable {\nappendRow(data[i]);\n}\n+ public FrameBlock(ValueType[] schema, String[] colNames, ColumnMetadata[] meta, Array<?>[] data ){\n+ _numRows = data[0].size();\n+ _schema = schema;\n+ _colnames = colNames;\n+ _colmeta = meta;\n+ _coldata = data;\n+ }\n+\n/**\n* Get the number of rows of the frame block.\n*\n@@ -279,6 +289,10 @@ public class FrameBlock implements CacheBlock<FrameBlock>, Externalizable {\nreturn _colmeta[c];\n}\n+ public Array<?>[] getColumns(){\n+ return _coldata;\n+ }\n+\npublic boolean isColumnMetadataDefault() {\nboolean ret = true;\nfor( int j=0; j<getNumColumns() && ret; j++ )\n@@ -1733,21 +1747,7 @@ public class FrameBlock implements CacheBlock<FrameBlock>, Externalizable {\n* @return A new FrameBlock with the schema applied.\n*/\npublic FrameBlock applySchema(ValueType[] schema) {\n- if(schema.length != _schema.length)\n- throw new DMLRuntimeException(//\n- \"Invalid apply schema with different number of columns expected: \" + _schema.length + \" got: \"\n- + schema.length);\n- FrameBlock ret = new FrameBlock();\n- final int nCol = getNumColumns();\n- ret._numRows = getNumRows();\n- ret._schema = schema;\n- ret._colnames = _colnames;\n- ret._colmeta = _colmeta;\n- ret._coldata = new Array[nCol];\n- for(int i = 0; i < nCol; i++)\n- ret._coldata[i] = _coldata[i].changeType(schema[i]);\n- ret._msize = -1;\n- return ret;\n+ return FrameLibApplySchema.applySchema(this, schema, InfrastructureAnalyzer.getLocalParallelism());\n}\n@Override\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/lib/FrameLibApplySchema.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.frame.data.lib;\n+\n+import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.ExecutorService;\n+import java.util.stream.IntStream;\n+\n+import org.apache.commons.logging.Log;\n+import org.apache.commons.logging.LogFactory;\n+import org.apache.sysds.common.Types.ValueType;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.frame.data.FrameBlock;\n+import org.apache.sysds.runtime.frame.data.columns.Array;\n+import org.apache.sysds.runtime.frame.data.columns.ColumnMetadata;\n+import org.apache.sysds.runtime.util.CommonThreadPool;\n+\n+public class FrameLibApplySchema {\n+\n+ protected static final Log LOG = LogFactory.getLog(FrameLibApplySchema.class.getName());\n+\n+ private final FrameBlock fb;\n+ private final ValueType[] schema;\n+ private final int nCol;\n+ private final Array<?>[] columnsIn;\n+ private final Array<?>[] columnsOut;\n+\n+ /**\n+ * Method to create a new FrameBlock where the given schema is applied, k is parallelization degree.\n+ *\n+ * @param fb The input block to apply schema to\n+ * @param schema The schema to apply\n+ * @param k The parallelization degree\n+ * @return A new FrameBlock allocated with new arrays.\n+ */\n+ public static FrameBlock applySchema(FrameBlock fb, ValueType[] schema, int k) {\n+ return new FrameLibApplySchema(fb, schema).apply(k);\n+ }\n+\n+ private FrameLibApplySchema(FrameBlock fb, ValueType[] schema) {\n+ this.fb = fb;\n+ this.schema = schema;\n+ verifySize();\n+ nCol = fb.getNumColumns();\n+ columnsIn = fb.getColumns();\n+ columnsOut = new Array<?>[nCol];\n+\n+ }\n+\n+ private FrameBlock apply(int k) {\n+ if(k <= 1 || nCol == 1)\n+ applySingleThread();\n+ else\n+ applyMultiThread(k);\n+\n+ final String[] colNames = fb.getColumnNames(false);\n+ final ColumnMetadata[] meta = fb.getColumnMetadata();\n+ return new FrameBlock(schema, colNames, meta, columnsOut);\n+ }\n+\n+ private void applySingleThread() {\n+ for(int i = 0; i < nCol; i++)\n+ columnsOut[i] = columnsIn[i].changeType(schema[i]);\n+ }\n+\n+ private void applyMultiThread(int k) {\n+ final ExecutorService pool = CommonThreadPool.get(k);\n+ try {\n+\n+ pool.submit(() -> {\n+ IntStream.rangeClosed(0, nCol - 1).parallel() // parallel columns\n+ .forEach(x -> columnsOut[x] = columnsIn[x].changeType(schema[x]));\n+ }).get();\n+\n+ pool.shutdown();\n+ }\n+ catch(InterruptedException | ExecutionException e) {\n+ pool.shutdown();\n+ throw new DMLRuntimeException(\"Failed to combine column groups\", e);\n+ }\n+ }\n+\n+ private void verifySize() {\n+ if(schema.length != fb.getSchema().length)\n+ throw new DMLRuntimeException(//\n+ \"Invalid apply schema with 
different number of columns expected: \" + fb.getSchema().length + \" got: \"\n+ + schema.length);\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/lib/FrameLibDetectSchema.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/lib/FrameLibDetectSchema.java",
"diff": "@@ -34,7 +34,7 @@ import org.apache.sysds.runtime.util.CommonThreadPool;\nimport org.apache.sysds.runtime.util.UtilFunctions;\npublic final class FrameLibDetectSchema {\n- // private static final Log LOG = LogFactory.getLog(FrameBlock.class.getName());\n+ // private static final Log LOG = LogFactory.getLog(FrameLibDetectSchema.class.getName());\nprivate FrameLibDetectSchema() {\n// private constructor\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/frame/FrameApplySchema.java",
"new_path": "src/test/java/org/apache/sysds/test/component/frame/FrameApplySchema.java",
"diff": "@@ -25,22 +25,51 @@ import static org.junit.Assert.fail;\nimport java.util.Random;\nimport org.apache.sysds.common.Types.ValueType;\n+import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.frame.data.FrameBlock;\n-import org.apache.sysds.runtime.frame.data.columns.BooleanArray;\n+import org.apache.sysds.runtime.frame.data.lib.FrameLibApplySchema;\nimport org.junit.Test;\npublic class FrameApplySchema {\n-\n@Test\n- public void testApplySchema(){\n+ public void testApplySchemaStringToBoolean() {\ntry {\n- FrameBlock fb = genBoolean(10, 2);\n+ FrameBlock fb = genStringContainingBoolean(10, 2);\nValueType[] schema = new ValueType[] {ValueType.BOOLEAN, ValueType.BOOLEAN};\nFrameBlock ret = fb.applySchema(schema);\n- assertTrue(ret.getColumn(0) instanceof BooleanArray);\n- assertTrue(ret.getColumn(1) instanceof BooleanArray);\n+ assertTrue(ret.getColumn(0).getValueType() == ValueType.BOOLEAN);\n+ assertTrue(ret.getColumn(1).getValueType() == ValueType.BOOLEAN);\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n+\n+ @Test\n+ public void testApplySchemaStringToInt() {\n+ try {\n+ FrameBlock fb = genStringContainingInteger(10, 2);\n+ ValueType[] schema = new ValueType[] {ValueType.INT32, ValueType.INT32};\n+ FrameBlock ret = fb.applySchema(schema);\n+ assertTrue(ret.getColumn(0).getValueType() == ValueType.INT32);\n+ assertTrue(ret.getColumn(1).getValueType() == ValueType.INT32);\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n+\n+ @Test\n+ public void testApplySchemaStringToIntSingleCol() {\n+ try {\n+ FrameBlock fb = genStringContainingInteger(10, 1);\n+ ValueType[] schema = new ValueType[] {ValueType.INT32};\n+ FrameBlock ret = fb.applySchema(schema);\n+ assertTrue(ret.getColumn(0).getValueType() == ValueType.INT32);\n}\ncatch(Exception e) {\ne.printStackTrace();\n@@ -48,7 +77,83 @@ public class FrameApplySchema {\n}\n}\n- private FrameBlock genBoolean(int row, int col){\n+ @Test\n+ public void testApplySchemaStringToIntDirectCallSingleThread() {\n+ try {\n+ FrameBlock fb = genStringContainingInteger(10, 3);\n+ ValueType[] schema = new ValueType[] {ValueType.INT32, ValueType.INT32, ValueType.INT32};\n+ FrameBlock ret = FrameLibApplySchema.applySchema(fb, schema, 1);\n+ for(int i = 0; i < ret.getNumColumns(); i++)\n+ assertTrue(ret.getColumn(i).getValueType() == ValueType.INT32);\n+\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n+\n+ @Test\n+ public void testApplySchemaStringToIntDirectCallMultiThread() {\n+ try {\n+ FrameBlock fb = genStringContainingInteger(10, 3);\n+ ValueType[] schema = new ValueType[] {ValueType.INT32, ValueType.INT32, ValueType.INT32};\n+ FrameBlock ret = FrameLibApplySchema.applySchema(fb, schema, 3);\n+ for(int i = 0; i < ret.getNumColumns(); i++)\n+ assertTrue(ret.getColumn(i).getValueType() == ValueType.INT32);\n+\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n+\n+\n+ @Test\n+ public void testApplySchemaStringToIntDirectCallMultiThreadSingleCol() {\n+ try {\n+ FrameBlock fb = genStringContainingInteger(10, 1);\n+ ValueType[] schema = new ValueType[] {ValueType.INT32};\n+ FrameBlock ret = FrameLibApplySchema.applySchema(fb, schema, 3);\n+ for(int i = 0; i < ret.getNumColumns(); i++)\n+ assertTrue(ret.getColumn(i).getValueType() == ValueType.INT32);\n+\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ fail(e.getMessage());\n+ }\n+ }\n+\n+ @Test(expected = 
DMLRuntimeException.class)\n+ public void testInvalidInput() {\n+ FrameBlock fb = genStringContainingInteger(10, 10);\n+ ValueType[] schema = new ValueType[] {ValueType.INT32, ValueType.INT32, ValueType.INT32};\n+ FrameLibApplySchema.applySchema(fb, schema, 3);\n+ }\n+\n+ @Test(expected = DMLRuntimeException.class)\n+ public void testInvalidInput2() {\n+ FrameBlock fb = genStringContainingInteger(10, 3);\n+ ValueType[] schema = new ValueType[] {ValueType.UNKNOWN, ValueType.INT32, ValueType.INT32};\n+ FrameLibApplySchema.applySchema(fb, schema, 3);\n+ }\n+\n+ private FrameBlock genStringContainingInteger(int row, int col) {\n+ FrameBlock ret = new FrameBlock();\n+ Random r = new Random(31);\n+ for(int c = 0; c < col; c++) {\n+ String[] column = new String[row];\n+ for(int i = 0; i < row; i++)\n+ column[i] = \"\" + r.nextInt();\n+\n+ ret.appendColumn(column);\n+ }\n+ return ret;\n+ }\n+\n+ private FrameBlock genStringContainingBoolean(int row, int col) {\nFrameBlock ret = new FrameBlock();\nRandom r = new Random(31);\nfor(int c = 0; c < col; c++) {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3272] applySchema FrameBlock parallel
This commit improves the performance of applySchema through parallelization,
from 0.8-0.9 sec to 0.169 sec on a 64kx2k FrameBlock; also included
are tests with 100% test coverage of applySchema. |
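A minimal usage sketch for the parallel applySchema entry point recorded above (editorial illustration, not part of the dataset record). It assumes SystemDS is on the classpath and reuses only calls visible in the diff (FrameBlock.appendColumn, FrameLibApplySchema.applySchema, getColumn/getValueType); the column contents are hypothetical.

```java
import org.apache.sysds.common.Types.ValueType;
import org.apache.sysds.runtime.frame.data.FrameBlock;
import org.apache.sysds.runtime.frame.data.lib.FrameLibApplySchema;

public class ApplySchemaSketch {
	public static void main(String[] args) {
		// Small frame with two string columns that hold integer values (hypothetical data)
		FrameBlock fb = new FrameBlock();
		fb.appendColumn(new String[] {"1", "2", "3"});
		fb.appendColumn(new String[] {"4", "5", "6"});

		// Target schema: convert both string columns to INT32
		ValueType[] schema = new ValueType[] {ValueType.INT32, ValueType.INT32};

		// Apply the schema with a parallelization degree of 2, as introduced by this commit
		FrameBlock converted = FrameLibApplySchema.applySchema(fb, schema, 2);

		System.out.println(converted.getColumn(0).getValueType()); // expected: INT32
	}
}
```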
49,706 | 21.12.2022 13:53:35 | -3,600 | 22acbf3f3adaeac7bcf1e5806fcf71472c150cde | [MINOR] Make BinaryCPInstruction MultithreadedOperators
This commit changes the class of the operator to correctly reflect being
multithreaded, allowing applySchema to include the thread
count in the instruction.
The change is mainly semantic, since it is already maintained that the
operator class is multithreaded. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/FrameBlock.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/FrameBlock.java",
"diff": "@@ -1747,7 +1747,18 @@ public class FrameBlock implements CacheBlock<FrameBlock>, Externalizable {\n* @return A new FrameBlock with the schema applied.\n*/\npublic FrameBlock applySchema(ValueType[] schema) {\n- return FrameLibApplySchema.applySchema(this, schema, InfrastructureAnalyzer.getLocalParallelism());\n+ return FrameLibApplySchema.applySchema(this, schema, 1);\n+ }\n+\n+ /**\n+ * Method to create a new FrameBlock where the given schema is applied.\n+ *\n+ * @param schema of value types.\n+ * @param k parallelization degree\n+ * @return A new FrameBlock with the schema applied.\n+ */\n+ public FrameBlock applySchema(ValueType[] schema, int k){\n+ return FrameLibApplySchema.applySchema(this, schema, k);\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/InstructionUtils.java",
"diff": "@@ -99,6 +99,7 @@ import org.apache.sysds.runtime.matrix.operators.CMOperator;\nimport org.apache.sysds.runtime.matrix.operators.CMOperator.AggregateOperationTypes;\nimport org.apache.sysds.runtime.matrix.operators.CountDistinctOperator;\nimport org.apache.sysds.runtime.matrix.operators.LeftScalarOperator;\n+import org.apache.sysds.runtime.matrix.operators.MultiThreadedOperator;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\nimport org.apache.sysds.runtime.matrix.operators.RightScalarOperator;\nimport org.apache.sysds.runtime.matrix.operators.ScalarOperator;\n@@ -580,7 +581,7 @@ public class InstructionUtils\nnew UnaryOperator(Builtin.getBuiltinFnObject(opcode));\n}\n- public static Operator parseBinaryOrBuiltinOperator(String opcode, CPOperand in1, CPOperand in2) {\n+ public static MultiThreadedOperator parseBinaryOrBuiltinOperator(String opcode, CPOperand in1, CPOperand in2) {\nif( LibCommonsMath.isSupportedMatrixMatrixOperation(opcode) )\nreturn null;\nboolean matrixScalar = (in1.getDataType() != in2.getDataType());\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/BinaryCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/BinaryCPInstruction.java",
"diff": "@@ -23,6 +23,7 @@ import org.apache.sysds.common.Types.DataType;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\n+import org.apache.sysds.runtime.matrix.operators.MultiThreadedOperator;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\npublic abstract class BinaryCPInstruction extends ComputationCPInstruction {\n@@ -45,7 +46,7 @@ public abstract class BinaryCPInstruction extends ComputationCPInstruction {\nif(!(in1.getDataType() == DataType.FRAME || in2.getDataType() == DataType.FRAME))\ncheckOutputDataType(in1, in2, out);\n- Operator operator = InstructionUtils.parseBinaryOrBuiltinOperator(opcode, in1, in2);\n+ MultiThreadedOperator operator = InstructionUtils.parseBinaryOrBuiltinOperator(opcode, in1, in2);\nif (in1.getDataType() == DataType.SCALAR && in2.getDataType() == DataType.SCALAR)\nreturn new BinaryScalarScalarCPInstruction(operator, in1, in2, out, opcode, str);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/BinaryFrameFrameCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/BinaryFrameFrameCPInstruction.java",
"diff": "@@ -23,12 +23,12 @@ import org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.frame.data.FrameBlock;\nimport org.apache.sysds.runtime.matrix.operators.BinaryOperator;\n-import org.apache.sysds.runtime.matrix.operators.Operator;\n+import org.apache.sysds.runtime.matrix.operators.MultiThreadedOperator;\npublic class BinaryFrameFrameCPInstruction extends BinaryCPInstruction {\n// private static final Log LOG = LogFactory.getLog(BinaryFrameFrameCPInstruction.class.getName());\n- protected BinaryFrameFrameCPInstruction(Operator op, CPOperand in1,\n+ protected BinaryFrameFrameCPInstruction(MultiThreadedOperator op, CPOperand in1,\nCPOperand in2, CPOperand out, String opcode, String istr) {\nsuper(CPType.Binary, op, in1, in2, out, opcode, istr);\n}\n@@ -62,7 +62,7 @@ public class BinaryFrameFrameCPInstruction extends BinaryCPInstruction {\nValueType[] schema = new ValueType[inBlock2.getNumColumns()];\nfor(int i=0; i<inBlock2.getNumColumns(); i++)\nschema[i] = ValueType.fromExternalString(inBlock2.get(0, i).toString());\n- ec.setFrameOutput(output.getName(), inBlock1.applySchema(schema));\n+ ec.setFrameOutput(output.getName(), inBlock1.applySchema(schema, ((MultiThreadedOperator)getOperator()).getNumThreads()));\n}\nelse {\n// Execute binary operations\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Make BinaryCPInstruction MultithreadedOperators
This commit changes the class of the operator to correctly reflect being
multithreaded, allowing applySchema to include the thread
count in the instruction.
The change is mainly semantic, since it is already maintained that the
operator class is multithreaded. |
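A short sketch of the new FrameBlock.applySchema(schema, k) overload that the multithreaded operator feeds with its thread count (editorial illustration; SystemDS on the classpath is assumed and the input strings are hypothetical). In the instruction itself, k would come from MultiThreadedOperator.getNumThreads() as shown in the diff.

```java
import org.apache.sysds.common.Types.ValueType;
import org.apache.sysds.runtime.frame.data.FrameBlock;

public class ApplySchemaOverloadSketch {
	public static void main(String[] args) {
		FrameBlock fb = new FrameBlock();
		fb.appendColumn(new String[] {"true", "false", "true"});

		// New overload: explicit thread count, supplied by the operator at runtime
		FrameBlock out = fb.applySchema(new ValueType[] {ValueType.BOOLEAN}, 2);

		System.out.println(out.getColumn(0).getValueType()); // expected: BOOLEAN
	}
}
```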
49,689 | 04.01.2023 09:56:12 | -3,600 | eb3e38477097d967419075f4b61b2da25321dd2c | Compile-time checkpoint placement for shared Spark OPs
This patch introduces the initial implementation of placing checkpoint Lops
after expensive Spark operations that are shared among multiple Spark
jobs. In addition, this patch fixes bugs and extends statistics.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"new_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"diff": "@@ -47,8 +47,11 @@ import org.apache.sysds.lops.DataGen;\nimport org.apache.sysds.lops.GroupedAggregate;\nimport org.apache.sysds.lops.GroupedAggregateM;\nimport org.apache.sysds.lops.Lop;\n+import org.apache.sysds.lops.MMCJ;\n+import org.apache.sysds.lops.MMRJ;\nimport org.apache.sysds.lops.MMTSJ;\nimport org.apache.sysds.lops.MMZip;\n+import org.apache.sysds.lops.MapMult;\nimport org.apache.sysds.lops.MapMultChain;\nimport org.apache.sysds.lops.ParameterizedBuiltin;\nimport org.apache.sysds.lops.PickByCount;\n@@ -190,17 +193,22 @@ public interface ILinearize {\nroots.forEach(r -> collectSparkRoots(r, sparkOpCount, sparkRoots));\n// Step 2: Depth-first linearization. Place the Spark OPs first.\n- // Sort the Spark roots based on number of Spark operators descending\n+ // Maintain the default order (by ID) to trigger independent Spark chains first\nArrayList<Lop> operatorList = new ArrayList<>();\n- Lop[] sortedSPRoots = sparkRoots.toArray(new Lop[0]);\n- Arrays.sort(sortedSPRoots, (l1, l2) -> sparkOpCount.get(l2.getID()) - sparkOpCount.get(l1.getID()));\n- Arrays.stream(sortedSPRoots).forEach(r -> depthFirst(r, operatorList, sparkOpCount, true));\n+ sparkRoots.forEach(r -> depthFirst(r, operatorList, sparkOpCount, true));\n// Step 3: Place the rest of the operators (CP). Sort the CP roots based on\n// #Spark operators in ascending order, i.e. execute the independent CP chains first\nroots.forEach(r -> depthFirst(r, operatorList, sparkOpCount, false));\nroots.forEach(Lop::resetVisitStatus);\n- final_v = operatorList;\n+\n+ // Step 4: Add Chkpoint lops after the expensive Spark operators, which\n+ // are shared among multiple Spark jobs. Only consider operators with\n+ // Spark consumers for now.\n+ Map<Long, Integer> operatorJobCount = new HashMap<>();\n+ markPersistableSparkOps(sparkRoots, operatorJobCount);\n+ final_v = addChkpointLop(operatorList, operatorJobCount);\n+ // TODO: A rewrite pass to remove less effective chkpoints\n}\nelse\n// Fall back to depth if none of the operators returns results back to local\n@@ -209,7 +217,6 @@ public interface ILinearize {\n// Step 4: Add Prefetch and Broadcast lops if necessary\nList<Lop> v_pf = ConfigurationManager.isPrefetchEnabled() ? addPrefetchLop(final_v) : final_v;\nList<Lop> v_bc = ConfigurationManager.isBroadcastEnabled() ? 
addBroadcastLop(v_pf) : v_pf;\n- // TODO: Merge into a single traversal\nreturn v_bc;\n}\n@@ -238,6 +245,28 @@ public interface ILinearize {\nreturn total;\n}\n+ // Count the number of jobs a Spark operator is part of\n+ private static void markPersistableSparkOps(List<Lop> sparkRoots, Map<Long, Integer> operatorJobCount) {\n+ for (Lop root : sparkRoots) {\n+ collectPersistableSparkOps(root, operatorJobCount);\n+ root.resetVisitStatus();\n+ }\n+ }\n+\n+ private static void collectPersistableSparkOps(Lop root, Map<Long, Integer> operatorJobCount) {\n+ if (root.isVisited())\n+ return;\n+\n+ for (Lop input : root.getInputs())\n+ collectPersistableSparkOps(input, operatorJobCount);\n+\n+ // Increment the job counter if this node benefits from persisting\n+ if (isPersistableSparkOp(root))\n+ operatorJobCount.merge(root.getID(), 1, Integer::sum);\n+\n+ root.setVisited();\n+ }\n+\n// Place the operators in a depth-first manner, but order\n// the DAGs based on number of Spark operators\nprivate static void depthFirst(Lop root, ArrayList<Lop> opList, Map<Long, Integer> sparkOpCount, boolean sparkFirst) {\n@@ -270,6 +299,38 @@ public interface ILinearize {\n|| lop instanceof CoVariance || lop instanceof MMTSJ || lop.isAllOutputsCP());\n}\n+ // Dictionary of Spark operators which are expensive enough to be\n+ // benefited from persisting if shared among jobs.\n+ private static boolean isPersistableSparkOp(Lop lop) {\n+ return lop.isExecSpark() && (lop instanceof MapMult\n+ || lop instanceof MMCJ || lop instanceof MMRJ\n+ || lop instanceof MMZip);\n+ }\n+\n+ private static List<Lop> addChkpointLop(List<Lop> nodes, Map<Long, Integer> operatorJobCount) {\n+ List<Lop> nodesWithChkpt = new ArrayList<>();\n+\n+ for (Lop l : nodes) {\n+ nodesWithChkpt.add(l);\n+ if(operatorJobCount.containsKey(l.getID()) && operatorJobCount.get(l.getID()) > 1) {\n+ //This operation is expensive and shared between Spark jobs\n+ List<Lop> oldOuts = new ArrayList<>(l.getOutputs());\n+ //Construct a chkpoint lop that takes this Spark node as a input\n+ Lop chkpoint = new Checkpoint(l, l.getDataType(), l.getValueType(),\n+ Checkpoint.getDefaultStorageLevelString(), false);\n+ for (Lop out : oldOuts) {\n+ //Rewire l -> out to l -> chkpoint -> out\n+ chkpoint.addOutput(out);\n+ out.replaceInput(l, chkpoint);\n+ l.removeOutput(out);\n+ }\n+ //Place it immediately after the Spark lop in the node list\n+ nodesWithChkpt.add(chkpoint);\n+ }\n+ }\n+ return nodesWithChkpt;\n+ }\n+\nprivate static List<Lop> addPrefetchLop(List<Lop> nodes) {\nList<Lop> nodesWithPrefetch = new ArrayList<>();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/MatrixObjectFuture.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/MatrixObjectFuture.java",
"diff": "@@ -26,6 +26,7 @@ import org.apache.sysds.runtime.controlprogram.caching.CacheStatistics;\nimport org.apache.sysds.runtime.controlprogram.caching.MatrixObject;\nimport org.apache.sysds.runtime.lineage.LineageCache;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.utils.stats.SparkStatistics;\nimport java.util.concurrent.Future;\n@@ -61,6 +62,7 @@ public class MatrixObjectFuture extends MatrixObject\nif( DMLScript.STATISTICS ){\nlong t1 = System.nanoTime();\nCacheStatistics.incrementAcquireRTime(t1-t0);\n+ SparkStatistics.incAsyncSparkOpCount(1);\n}\nreturn ret;\n}\n@@ -91,7 +93,14 @@ public class MatrixObjectFuture extends MatrixObject\n}\nprivate synchronized void releaseIntern() {\n+ try {\n+ if(isCachingActive() && _futureData.get().getInMemorySize() > CACHING_THRESHOLD)\n_futureData = null;\n+ //TODO: write to disk and other cache maintenance\n+ }\n+ catch(Exception e) {\n+ throw new DMLRuntimeException(e);\n+ }\n}\npublic synchronized void clearData(long tid) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/Statistics.java",
"new_path": "src/main/java/org/apache/sysds/utils/Statistics.java",
"diff": "@@ -639,7 +639,7 @@ public class Statistics\nsb.append(\"LinCache hits (Mem/FS/Del): \\t\" + LineageCacheStatistics.displayHits() + \".\\n\");\nsb.append(\"LinCache MultiLevel (Ins/SB/Fn):\" + LineageCacheStatistics.displayMultiLevelHits() + \".\\n\");\nsb.append(\"LinCache GPU (Hit/Async/Sync): \\t\" + LineageCacheStatistics.displayGpuStats() + \".\\n\");\n- sb.append(\"LinCache Spark (Col/RDD): \\t\\t\" + LineageCacheStatistics.displaySparkStats() + \".\\n\");\n+ sb.append(\"LinCache Spark (Col/RDD): \\t\" + LineageCacheStatistics.displaySparkStats() + \".\\n\");\nsb.append(\"LinCache writes (Mem/FS/Del): \\t\" + LineageCacheStatistics.displayWtrites() + \".\\n\");\nsb.append(\"LinCache FStimes (Rd/Wr): \\t\" + LineageCacheStatistics.displayFSTime() + \" sec.\\n\");\nsb.append(\"LinCache Computetime (S/M): \\t\" + LineageCacheStatistics.displayComputeTime() + \" sec.\\n\");\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/utils/stats/SparkStatistics.java",
"new_path": "src/main/java/org/apache/sysds/utils/stats/SparkStatistics.java",
"diff": "@@ -34,6 +34,7 @@ public class SparkStatistics {\nprivate static final LongAdder asyncPrefetchCount = new LongAdder();\nprivate static final LongAdder asyncBroadcastCount = new LongAdder();\nprivate static final LongAdder asyncTriggerCheckpointCount = new LongAdder();\n+ private static final LongAdder asyncSparkOpCount = new LongAdder();\npublic static boolean createdSparkContext() {\nreturn ctxCreateTime > 0;\n@@ -80,6 +81,10 @@ public class SparkStatistics {\nasyncTriggerCheckpointCount.add(c);\n}\n+ public static void incAsyncSparkOpCount(long c) {\n+ asyncSparkOpCount.add(c);\n+ }\n+\npublic static long getSparkCollectCount() {\nreturn collectCount.longValue();\n}\n@@ -88,6 +93,10 @@ public class SparkStatistics {\nreturn asyncPrefetchCount.longValue();\n}\n+ public static long getAsyncSparkOpCount() {\n+ return asyncSparkOpCount.longValue();\n+ }\n+\npublic static long getAsyncBroadcastCount() {\nreturn asyncBroadcastCount.longValue();\n}\n@@ -122,8 +131,8 @@ public class SparkStatistics {\nparallelizeTime.longValue()*1e-9,\nbroadcastTime.longValue()*1e-9,\ncollectTime.longValue()*1e-9));\n- sb.append(\"Spark async. count (pf,bc,cp): \\t\" +\n- String.format(\"%d/%d/%d.\\n\", getAsyncPrefetchCount(), getAsyncBroadcastCount(), getasyncTriggerCheckpointCount()));\n+ sb.append(\"Spark async. count (pf,bc,op): \\t\" +\n+ String.format(\"%d/%d/%d.\\n\", getAsyncPrefetchCount(), getAsyncBroadcastCount(), getAsyncSparkOpCount()));\nreturn sb.toString();\n}\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/async/CheckpointSharedOpsTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+ package org.apache.sysds.test.functions.async;\n+\n+ import java.util.ArrayList;\n+ import java.util.HashMap;\n+ import java.util.List;\n+\n+ import org.apache.sysds.common.Types;\n+ import org.apache.sysds.hops.OptimizerUtils;\n+ import org.apache.sysds.hops.recompile.Recompiler;\n+ import org.apache.sysds.runtime.controlprogram.parfor.stat.InfrastructureAnalyzer;\n+ import org.apache.sysds.runtime.matrix.data.MatrixValue;\n+ import org.apache.sysds.test.AutomatedTestBase;\n+ import org.apache.sysds.test.TestConfiguration;\n+ import org.apache.sysds.test.TestUtils;\n+ import org.apache.sysds.utils.Statistics;\n+ import org.junit.Assert;\n+ import org.junit.Test;\n+\n+public class CheckpointSharedOpsTest extends AutomatedTestBase {\n+\n+ protected static final String TEST_DIR = \"functions/async/\";\n+ protected static final String TEST_NAME = \"CheckpointSharedOps\";\n+ protected static final int TEST_VARIANTS = 2;\n+ protected static String TEST_CLASS_DIR = TEST_DIR + CheckpointSharedOpsTest.class.getSimpleName() + \"/\";\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ for(int i=1; i<=TEST_VARIANTS; i++)\n+ addTestConfiguration(TEST_NAME+i, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME+i));\n+ }\n+\n+ @Test\n+ public void test1() {\n+ // Shared cpmm/rmm between two jobs\n+ runTest(TEST_NAME+\"1\");\n+ }\n+\n+ public void runTest(String testname) {\n+ Types.ExecMode oldPlatform = setExecMode(Types.ExecMode.HYBRID);\n+\n+ long oldmem = InfrastructureAnalyzer.getLocalMaxMemory();\n+ long mem = 1024*1024*8;\n+ InfrastructureAnalyzer.setLocalMaxMemory(mem);\n+\n+ try {\n+ getAndLoadTestConfiguration(testname);\n+ fullDMLScriptName = getScript();\n+\n+ List<String> proArgs = new ArrayList<>();\n+\n+ proArgs.add(\"-explain\");\n+ //proArgs.add(\"recompile_runtime\");\n+ proArgs.add(\"-stats\");\n+ proArgs.add(\"-args\");\n+ proArgs.add(output(\"R\"));\n+ programArgs = proArgs.toArray(new String[proArgs.size()]);\n+\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ HashMap<MatrixValue.CellIndex, Double> R = readDMLScalarFromOutputDir(\"R\");\n+ long numCP = Statistics.getCPHeavyHitterCount(\"sp_chkpoint\");\n+\n+ OptimizerUtils.MAX_PARALLELIZE_ORDER = true;\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+ HashMap<MatrixValue.CellIndex, Double> R_mp = readDMLScalarFromOutputDir(\"R\");\n+ long numCP_maxp = Statistics.getCPHeavyHitterCount(\"sp_chkpoint\");\n+ OptimizerUtils.MAX_PARALLELIZE_ORDER = false;\n+\n+ //compare matrices\n+ boolean matchVal = TestUtils.compareMatrices(R, R_mp, 1e-6, \"Origin\", \"withPrefetch\");\n+ if (!matchVal)\n+ System.out.println(\"Value w/o Prefetch \"+R+\" w/ Prefetch 
\"+R_mp);\n+ Assert.assertTrue(\"Violated checkpoint count: \" + numCP + \" < \" + numCP_maxp, numCP < numCP_maxp);\n+ } finally {\n+ resetExecMode(oldPlatform);\n+ InfrastructureAnalyzer.setLocalMaxMemory(oldmem);\n+ Recompiler.reinitRecompiler();\n+ }\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/MaxParallelizeOrderTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/MaxParallelizeOrderTest.java",
"diff": "@@ -68,9 +68,6 @@ public class MaxParallelizeOrderTest extends AutomatedTestBase {\n}\npublic void runTest(String testname) {\n- boolean old_simplification = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\n- boolean old_sum_product = OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES;\n- boolean old_trans_exec_type = OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE;\nExecMode oldPlatform = setExecMode(ExecMode.HYBRID);\nlong oldmem = InfrastructureAnalyzer.getLocalMaxMemory();\n@@ -108,9 +105,6 @@ public class MaxParallelizeOrderTest extends AutomatedTestBase {\nif (!matchVal)\nSystem.out.println(\"Value w/o Prefetch \"+R+\" w/ Prefetch \"+R_mp);\n} finally {\n- OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\n- OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\n- OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = old_trans_exec_type;\nresetExecMode(oldPlatform);\nInfrastructureAnalyzer.setLocalMaxMemory(oldmem);\nRecompiler.reinitRecompiler();\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/async/CheckpointSharedOps1.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+X = rand(rows=1500, cols=1500, seed=42); #sp_rand\n+v = rand(rows=1500, cols=1, seed=42); #cp_rand\n+v2 = rand(rows=1500, cols=1, seed=43); #cp_rand\n+\n+# CP instructions\n+v = ((v + v) * 1 - v) / (1+1);\n+v = ((v + v) * 2 - v) / (2+1);\n+\n+# Spark operations\n+sp1 = X + ceil(X);\n+sp2 = t(sp1) %*% sp1; #shared among Job 1 and 2\n+\n+# Job1: SP unary triggers the DAG of SP operations\n+sp3 = sp2 + sum(v);\n+R1 = sum(sp3);\n+\n+# Job2: SP unary triggers the DAG of SP operations\n+sp4 = sp2 + sum(v2);\n+R2 = sum(sp4);\n+\n+R = R1 + R2;\n+write(R, $1, format=\"text\");\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3483] Compile-time checkpoint placement for shared Spark OPs
This patch introduces the initial implementation of placing checkpoint Lops
after expensive Spark operations that are shared among multiple Spark
jobs. In addition, this patch fixes bugs and extends statistics.
Closes #1758 |
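A self-contained toy illustration of the job-count heuristic behind the checkpoint placement above (editorial sketch; operator IDs and job contents are hypothetical, and the real implementation walks Lop DAGs rather than lists). It mirrors the operatorJobCount.merge(id, 1, Integer::sum) pattern and the count > 1 test from the patch.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SharedOpCheckpointSketch {
	public static void main(String[] args) {
		// Two hypothetical Spark jobs; operator 7 (e.g., an expensive matrix multiply) is shared
		List<List<Long>> jobs = List.of(List.of(3L, 7L), List.of(7L, 9L));

		// Count in how many jobs each persistable operator appears
		Map<Long, Integer> operatorJobCount = new HashMap<>();
		for (List<Long> job : jobs)
			for (Long id : job)
				operatorJobCount.merge(id, 1, Integer::sum);

		// Operators shared by more than one job become checkpoint candidates
		operatorJobCount.forEach((id, count) -> {
			if (count > 1)
				System.out.println("place checkpoint after operator " + id);
		});
	}
}
```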
49,706 | 12.01.2023 09:41:31 | -3,600 | f4ca42684c88045a4cc71b5918a35bb689827405 | [MINOR] fix serialized size of String Array
I added a minor change to also serialize the materialized size of
StringArray in frames. That change did not take into account the resulting
8-byte difference in the exact serialized size, which this commit fixes.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/columns/StringArray.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/columns/StringArray.java",
"diff": "@@ -264,7 +264,7 @@ public class StringArray extends Array<String> {\n@Override\npublic long getExactSerializedSize() {\n- long si = 1; // byte identifier\n+ long si = 1 + 8; // byte identifier\nfor(String s : _data)\nsi += IOUtilFunctions.getUTFSize(s);\nreturn si;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/frame/array/FrameArrayTests.java",
"new_path": "src/test/java/org/apache/sysds/test/component/frame/array/FrameArrayTests.java",
"diff": "@@ -27,6 +27,7 @@ import java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.DataInputStream;\nimport java.io.DataOutputStream;\n+import java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.BitSet;\nimport java.util.Collection;\n@@ -790,7 +791,6 @@ public class FrameArrayTests {\ncompare(aa, a);\n}\n-\n@Test\n@SuppressWarnings(\"unchecked\")\npublic void testSetNzString() {\n@@ -843,6 +843,26 @@ public class FrameArrayTests {\n}\n}\n+ @Test\n+ public void testSerializationSize() {\n+ try {\n+ // Serialize out\n+ ByteArrayOutputStream bos = new ByteArrayOutputStream();\n+ DataOutputStream fos = new DataOutputStream(bos);\n+ a.write(fos);\n+ long s = (long) fos.size();\n+ long e = a.getExactSerializedSize();\n+ assertEquals(s, e);\n+ }\n+ catch(IOException e) {\n+ throw new RuntimeException(\"Error in io\", e);\n+ }\n+ catch(Exception e) {\n+ e.printStackTrace();\n+ throw e;\n+ }\n+ }\n+\nprotected static void compare(Array<?> a, Array<?> b) {\nint size = a.size();\nassertTrue(a.size() == b.size());\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] fix serialized size of String Array
I added a minor change to also serialize the materialized size of
StringArray in frames. That change did not take into account the resulting
8-byte difference in the exact serialized size, which this commit fixes.
Closes #1762 |
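A plain-Java sketch of the invariant the new testSerializationSize test checks: the exact-size computation has to account for every byte written, including the 8-byte materialized size (editorial illustration; the tag-byte/long/UTF layout mirrors but simplifies the actual StringArray.write format, and the strings are hypothetical ASCII values).

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ExactSizeSketch {
	public static void main(String[] args) throws IOException {
		String[] data = {"alpha", "beta", "gamma"}; // hypothetical column values

		ByteArrayOutputStream bos = new ByteArrayOutputStream();
		DataOutputStream dos = new DataOutputStream(bos);
		dos.writeByte(1);           // 1-byte type identifier
		dos.writeLong(data.length); // 8-byte materialized size -- the part the fix accounts for
		for (String s : data)
			dos.writeUTF(s);        // 2-byte length prefix + UTF-8 payload per entry

		// Expected size computed the same way as getExactSerializedSize after the fix
		long expected = 1 + 8;
		for (String s : data)
			expected += 2 + s.getBytes(StandardCharsets.UTF_8).length;

		System.out.println(dos.size() == expected); // true
	}
}
```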
49,706 | 13.01.2023 23:44:06 | -3,600 | 50b2c9fb4b7946906813af9f61a58d07c2209fd5 | [MINOR] CSV count lines no materialization | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/io/FrameReaderTextCSV.java",
"new_path": "src/main/java/org/apache/sysds/runtime/io/FrameReaderTextCSV.java",
"diff": "@@ -208,18 +208,26 @@ public class FrameReaderTextCSV extends FrameReader {\n// compute number of rows\nint nrow = 0;\nfor(int i = 0; i < splits.length; i++) {\n- RecordReader<LongWritable, Text> reader = informat.getRecordReader(splits[i], job, Reporter.NULL);\n+ boolean header = i == 0 && _props.hasHeader();\n+ nrow += countLinesInReader(splits[i], informat, job, ncol, header);\n+ }\n+\n+ return new Pair<>(nrow, ncol);\n+ }\n+\n+\n+ protected static int countLinesInReader(InputSplit split, TextInputFormat inFormat, JobConf job, long ncol,\n+ boolean header) throws IOException {\n+ RecordReader<LongWritable, Text> reader = inFormat.getRecordReader(split, job, Reporter.NULL);\ntry {\n- nrow = countLinesInReader(reader, ncol , i == 0 && _props.hasHeader());\n+ return countLinesInReader(reader, ncol, header);\n}\nfinally {\nIOUtilFunctions.closeSilently(reader);\n}\n}\n- return new Pair<>(nrow, ncol);\n- }\n- protected static int countLinesInReader(RecordReader<LongWritable, Text> reader, int ncol, boolean header)\n+ protected static int countLinesInReader(RecordReader<LongWritable, Text> reader, long ncol, boolean header)\nthrows IOException {\nfinal LongWritable key = new LongWritable();\nfinal Text value = new Text();\n@@ -231,7 +239,7 @@ public class FrameReaderTextCSV extends FrameReader {\nreader.next(key, value);\nwhile(reader.next(key, value)) {\n// note the metadata can be located at any row when spark.\n- nrow += containsMetaTag(value.toString()) ? 0 : 1;\n+ nrow += containsMetaTag(value) ? 0 : 1;\n}\nreturn nrow;\n}\n@@ -240,8 +248,11 @@ public class FrameReaderTextCSV extends FrameReader {\n}\n}\n- private final static boolean containsMetaTag(String val) {\n- return val.startsWith(TfUtils.TXMTD_MVPREFIX)//\n- || val.startsWith(TfUtils.TXMTD_NDPREFIX);\n+ private final static boolean containsMetaTag(Text val) {\n+ if(val.charAt(0) == '#')\n+ return val.find(TfUtils.TXMTD_MVPREFIX) > -1//\n+ || val.find(TfUtils.TXMTD_NDPREFIX) > -1;\n+ else\n+ return false;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/io/FrameReaderTextCSVParallel.java",
"new_path": "src/main/java/org/apache/sysds/runtime/io/FrameReaderTextCSVParallel.java",
"diff": "@@ -28,12 +28,8 @@ import java.util.concurrent.Future;\nimport org.apache.hadoop.fs.FileSystem;\nimport org.apache.hadoop.fs.Path;\n-import org.apache.hadoop.io.LongWritable;\n-import org.apache.hadoop.io.Text;\nimport org.apache.hadoop.mapred.InputSplit;\nimport org.apache.hadoop.mapred.JobConf;\n-import org.apache.hadoop.mapred.RecordReader;\n-import org.apache.hadoop.mapred.Reporter;\nimport org.apache.hadoop.mapred.TextInputFormat;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.hops.OptimizerUtils;\n@@ -72,7 +68,7 @@ public class FrameReaderTextCSVParallel extends FrameReaderTextCSV\n//compute num rows per split\nArrayList<CountRowsTask> tasks = new ArrayList<>();\nfor( int i=0; i<splits.length; i++ )\n- tasks.add(new CountRowsTask(splits[i], informat, job, _props.hasHeader(), i==0));\n+ tasks.add(new CountRowsTask(splits[i], informat, job, _props.hasHeader() && i==0, clen));\nList<Future<Integer>> cret = pool.invokeAll(tasks);\n//compute row offset per split via cumsum on row counts\n@@ -113,7 +109,7 @@ public class FrameReaderTextCSVParallel extends FrameReaderTextCSV\ntry {\nArrayList<CountRowsTask> tasks = new ArrayList<>();\nfor( int i=0; i<splits.length; i++ )\n- tasks.add(new CountRowsTask(splits[i], informat, job, _props.hasHeader(), i==0));\n+ tasks.add(new CountRowsTask(splits[i], informat, job, _props.hasHeader()&& i==0, ncol));\nList<Future<Integer>> cret = pool.invokeAll(tasks);\nfor( Future<Integer> count : cret )\nnrow += count.get().intValue();\n@@ -130,34 +126,25 @@ public class FrameReaderTextCSVParallel extends FrameReaderTextCSV\nreturn new Pair<>((int)nrow, ncol);\n}\n- private static class CountRowsTask implements Callable<Integer>\n- {\n- private InputSplit _split = null;\n- private TextInputFormat _informat = null;\n- private JobConf _job = null;\n- private boolean _hasHeader = false;\n- private boolean _firstSplit = false;\n+ private static class CountRowsTask implements Callable<Integer> {\n+ private final InputSplit _split;\n+ private final TextInputFormat _informat;\n+ private final JobConf _job;\n+ private final boolean _hasHeader;\n+ private final long _nCol;\n- public CountRowsTask(InputSplit split, TextInputFormat informat, JobConf job, boolean hasHeader, boolean first) {\n+ public CountRowsTask(InputSplit split, TextInputFormat informat, JobConf job, boolean hasHeader, long nCol) {\n_split = split;\n_informat = informat;\n_job = job;\n_hasHeader = hasHeader;\n- _firstSplit = first;\n+ _nCol = nCol;\n}\n@Override\n- public Integer call()\n- throws Exception\n- {\n- RecordReader<LongWritable, Text> reader = _informat.getRecordReader(_split, _job, Reporter.NULL);\n- try {\n- // it is assumed that if we read parallel number of rows, there are at least two columns.\n- return countLinesInReader(reader, 2 , _firstSplit && _hasHeader);\n- }\n- finally {\n- IOUtilFunctions.closeSilently(reader);\n- }\n+ public Integer call() throws Exception {\n+ return countLinesInReader(_split, _informat, _job, _nCol, _hasHeader);\n+\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] CSV count lines no materialization |
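An illustrative sketch of the no-materialization check used above: the metadata test now works directly on the Hadoop Text buffer (charAt/find) instead of calling toString() per line (editorial example; hadoop-common on the classpath is assumed and the "#Meta" prefix is a hypothetical stand-in for the transform metadata prefixes).

```java
import org.apache.hadoop.io.Text;

public class MetaTagCheckSketch {
	// Detects a metadata marker without converting the line to a String
	static boolean containsMetaTag(Text value, String prefix) {
		// Cheap first-character test before scanning for the full prefix
		return value.getLength() > 0 && value.charAt(0) == '#' && value.find(prefix) > -1;
	}

	public static void main(String[] args) {
		String prefix = "#Meta"; // hypothetical stand-in
		System.out.println(containsMetaTag(new Text("#Meta,1,2,3"), prefix)); // true  -> skipped
		System.out.println(containsMetaTag(new Text("1,2,3,4"), prefix));     // false -> counted as a row
	}
}
```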
49,706 | 18.01.2023 15:58:10 | -3,600 | 6195b3d3c260ba3c7e6866d60ab2b5bd438ccd0e | [MINOR] Initial support for CPcompression Frames
This commit adds the simplest form of compression (none), but it does
add the support and infrastructure to start doing compression. | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/compress/FrameCompressionStatistics.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.apache.sysds.runtime.frame.data.compress;\n+\n+public class FrameCompressionStatistics {\n+\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/lib/FrameLibCompress.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.apache.sysds.runtime.frame.data.lib;\n+\n+import org.apache.commons.lang3.tuple.ImmutablePair;\n+import org.apache.commons.lang3.tuple.Pair;\n+import org.apache.sysds.runtime.compress.workload.WTreeRoot;\n+import org.apache.sysds.runtime.frame.data.FrameBlock;\n+import org.apache.sysds.runtime.frame.data.compress.FrameCompressionStatistics;\n+\n+public class FrameLibCompress {\n+ public static Pair<FrameBlock, FrameCompressionStatistics> compress(FrameBlock in, int k, WTreeRoot root) {\n+ return new ImmutablePair<>(in, new FrameCompressionStatistics());\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/CompressionCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/CompressionCPInstruction.java",
"diff": "@@ -28,6 +28,9 @@ import org.apache.sysds.runtime.compress.CompressionStatistics;\nimport org.apache.sysds.runtime.compress.SingletonLookupHashMap;\nimport org.apache.sysds.runtime.compress.workload.WTreeRoot;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\n+import org.apache.sysds.runtime.frame.data.FrameBlock;\n+import org.apache.sysds.runtime.frame.data.compress.FrameCompressionStatistics;\n+import org.apache.sysds.runtime.frame.data.lib.FrameLibCompress;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\n@@ -59,8 +62,7 @@ public class CompressionCPInstruction extends ComputationCPInstruction {\n@Override\npublic void processInstruction(ExecutionContext ec) {\n- // Get matrix block input\n- final MatrixBlock in = ec.getMatrixInput(input1.getName());\n+ // final MatrixBlock in = ec.getMatrixInput(input1.getName());\nfinal SingletonLookupHashMap m = SingletonLookupHashMap.getMap();\n// Get and clear workload tree entry for this compression instruction.\n@@ -69,15 +71,30 @@ public class CompressionCPInstruction extends ComputationCPInstruction {\nfinal int k = OptimizerUtils.getConstrainedNumThreads(-1);\n- // Compress the matrix block\n- Pair<MatrixBlock, CompressionStatistics> compResult = CompressedMatrixBlockFactory.compress(in, k, root);\n+ if(ec.isMatrixObject(input1.getName()))\n+ processMatrixBlockCompression(ec, ec.getMatrixInput(input1.getName()), k, root);\n+ else\n+ processFrameBlockCompression(ec, ec.getFrameInput(input1.getName()), k, root);\n+\n+ }\n+ private void processMatrixBlockCompression(ExecutionContext ec, MatrixBlock in, int k, WTreeRoot root) {\n+ Pair<MatrixBlock, CompressionStatistics> compResult = CompressedMatrixBlockFactory.compress(in, k, root);\nif(LOG.isTraceEnabled())\nLOG.trace(compResult.getRight());\nMatrixBlock out = compResult.getLeft();\n-\n// Set output and release input\nec.releaseMatrixInput(input1.getName());\nec.setMatrixOutput(output.getName(), out);\n}\n+\n+ private void processFrameBlockCompression(ExecutionContext ec, FrameBlock in, int k, WTreeRoot root) {\n+ Pair<FrameBlock, FrameCompressionStatistics> compResult = FrameLibCompress.compress(in, k, root);\n+ if(LOG.isTraceEnabled())\n+ LOG.trace(compResult.getRight());\n+ FrameBlock out = compResult.getLeft();\n+ // Set output and release input\n+ ec.releaseFrameInput(input1.getName());\n+ ec.setFrameOutput(output.getName(), out);\n+ }\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Initial support for CPcompression Frames
This commit adds the simplest form of compression (none), but it does
add the support and infrastructure to start doing compression. |
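A minimal usage sketch for the new frame compression entry point (editorial illustration; SystemDS on the classpath is assumed, the column data is hypothetical, and passing null for the workload tree reflects that the current stub ignores it and returns the input unchanged).

```java
import org.apache.commons.lang3.tuple.Pair;
import org.apache.sysds.runtime.frame.data.FrameBlock;
import org.apache.sysds.runtime.frame.data.compress.FrameCompressionStatistics;
import org.apache.sysds.runtime.frame.data.lib.FrameLibCompress;

public class FrameCompressSketch {
	public static void main(String[] args) {
		// Hypothetical single-column string frame
		FrameBlock in = new FrameBlock();
		in.appendColumn(new String[] {"a", "b", "a", "a"});

		// Compress with 4 threads and no workload tree; currently a no-op plus empty statistics
		Pair<FrameBlock, FrameCompressionStatistics> res = FrameLibCompress.compress(in, 4, null);

		FrameBlock out = res.getLeft();
		System.out.println(out.getNumRows()); // 4
	}
}
```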
49,706 | 19.01.2023 09:31:40 | -3,600 | 3554613c4df6348a4454bed48bbc0cc329c6541f | [MINOR] Fix various warnings | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/DataGenCPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/cp/DataGenCPInstruction.java",
"diff": "package org.apache.sysds.runtime.instructions.cp;\n-import java.util.Arrays;\nimport java.util.Random;\nimport org.apache.commons.lang3.ArrayUtils;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/ComputationSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/ComputationSPInstruction.java",
"diff": "@@ -148,6 +148,7 @@ public abstract class ComputationSPInstruction extends SPInstruction implements\nreturn toPersistAndCache;\n}\n+ @SuppressWarnings(\"unchecked\")\npublic void checkpointRDD(ExecutionContext ec) {\nif (!toPersistAndCache)\nreturn;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -150,9 +150,9 @@ public class LineageCache\nif (!((SparkExecutionContext) ec).isRDDCached(rdd.getRDD().id()))\n//Return if the RDD is not cached in the executors\nreturn false;\n- if (rdd == null && e.getCacheStatus() == LineageCacheStatus.NOTCACHED)\n- return false;\n- else\n+ // if (rdd == null && e.getCacheStatus() == LineageCacheStatus.NOTCACHED)\n+ // return false;\n+ // else\n((SparkExecutionContext) ec).setRDDHandleForVariable(outName, rdd);\n}\nelse { //TODO handle locks on gpu objects\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/misc/IOUtilFunctionsTest.java",
"new_path": "src/test/java/org/apache/sysds/test/component/misc/IOUtilFunctionsTest.java",
"diff": "package org.apache.sysds.test.component.misc;\nimport static org.junit.Assert.assertArrayEquals;\n-import static org.junit.Assert.fail;\nimport org.apache.sysds.runtime.io.IOUtilFunctions;\nimport org.junit.Test;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctBase.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctBase.java",
"diff": "@@ -68,7 +68,7 @@ public abstract class CountDistinctBase extends AutomatedTestBase {\nprogramArgs = new String[] {\"-args\", String.valueOf(numberDistinct), String.valueOf(rows),\nString.valueOf(cols), String.valueOf(sparsity), outputPath};\n- runTest(true, false, null, -1);\n+ runTest(null);\nif(dir.isRowCol()) {\nwriteExpectedScalar(\"A\", numberDistinct);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctColAliasException.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctColAliasException.java",
"diff": "@@ -22,14 +22,10 @@ package org.apache.sysds.test.functions.countDistinct;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n-import org.junit.Rule;\nimport org.junit.Test;\n-import org.junit.rules.ExpectedException;\n-public class CountDistinctColAliasException extends CountDistinctBase {\n- @Rule\n- public ExpectedException exceptionRule = ExpectedException.none();\n+public class CountDistinctColAliasException extends CountDistinctBase {\nprivate final static String TEST_NAME = \"countDistinctColAliasException\";\nprivate final static String TEST_DIR = \"functions/countDistinct/\";\n@@ -60,11 +56,8 @@ public class CountDistinctColAliasException extends CountDistinctBase {\nthis.percentTolerance = 0.2;\n}\n- @Test\n+ @Test(expected = AssertionError.class)\npublic void testCPSparseSmall() {\n- exceptionRule.expect(AssertionError.class);\n- exceptionRule.expectMessage(\"Invalid number of arguments for function col_count_distinct(). \" +\n- \"This function only takes 1 or 2 arguments.\");\nTypes.ExecType execType = Types.ExecType.CP;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctRowAliasException.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/countDistinct/CountDistinctRowAliasException.java",
"diff": "@@ -22,15 +22,10 @@ package org.apache.sysds.test.functions.countDistinct;\nimport org.apache.sysds.common.Types;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\n-import org.junit.Rule;\nimport org.junit.Test;\n-import org.junit.rules.ExpectedException;\npublic class CountDistinctRowAliasException extends CountDistinctBase {\n- @Rule\n- public ExpectedException exceptionRule = ExpectedException.none();\n-\nprivate final static String TEST_NAME = \"countDistinctRowAliasException\";\nprivate final static String TEST_DIR = \"functions/countDistinct/\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + CountDistinctRowAliasException.class.getSimpleName() + \"/\";\n@@ -60,11 +55,8 @@ public class CountDistinctRowAliasException extends CountDistinctBase {\nthis.percentTolerance = 0.2;\n}\n- @Test\n+ @Test(expected= AssertionError.class)\npublic void testCPSparseSmall() {\n- exceptionRule.expect(AssertionError.class);\n- exceptionRule.expectMessage(\"Invalid number of arguments for function row_count_distinct(). \" +\n- \"This function only takes 1 or 2 arguments.\");\nTypes.ExecType execType = Types.ExecType.CP;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinctApprox/CountDistinctApproxColAliasException.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/countDistinctApprox/CountDistinctApproxColAliasException.java",
"diff": "@@ -23,15 +23,10 @@ import org.apache.sysds.common.Types;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\nimport org.apache.sysds.test.functions.countDistinct.CountDistinctBase;\n-import org.junit.Rule;\nimport org.junit.Test;\n-import org.junit.rules.ExpectedException;\npublic class CountDistinctApproxColAliasException extends CountDistinctBase {\n- @Rule\n- public ExpectedException exceptionRule = ExpectedException.none();\n-\nprivate final static String TEST_NAME = \"countDistinctApproxColAliasException\";\nprivate final static String TEST_DIR = \"functions/countDistinctApprox/\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + CountDistinctApproxColAliasException.class.getSimpleName() + \"/\";\n@@ -61,11 +56,8 @@ public class CountDistinctApproxColAliasException extends CountDistinctBase {\nthis.percentTolerance = 0.2;\n}\n- @Test\n+ @Test(expected = AssertionError.class)\npublic void testCPSparseSmall() {\n- exceptionRule.expect(AssertionError.class);\n- exceptionRule.expectMessage(\"Too many parameters: function colCountDistinctApprox takes at least 1\" +\n- \" and at most 2 parameters\");\nTypes.ExecType execType = Types.ExecType.CP;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/countDistinctApprox/CountDistinctApproxRowAliasException.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/countDistinctApprox/CountDistinctApproxRowAliasException.java",
"diff": "@@ -23,15 +23,10 @@ import org.apache.sysds.common.Types;\nimport org.apache.sysds.test.TestConfiguration;\nimport org.apache.sysds.test.TestUtils;\nimport org.apache.sysds.test.functions.countDistinct.CountDistinctBase;\n-import org.junit.Rule;\nimport org.junit.Test;\n-import org.junit.rules.ExpectedException;\npublic class CountDistinctApproxRowAliasException extends CountDistinctBase {\n- @Rule\n- public ExpectedException exceptionRule = ExpectedException.none();\n-\nprivate final static String TEST_NAME = \"countDistinctApproxRowAliasException\";\nprivate final static String TEST_DIR = \"functions/countDistinctApprox/\";\nprivate final static String TEST_CLASS_DIR = TEST_DIR + CountDistinctApproxRowAliasException.class.getSimpleName() + \"/\";\n@@ -61,11 +56,8 @@ public class CountDistinctApproxRowAliasException extends CountDistinctBase {\nthis.percentTolerance = 0.2;\n}\n- @Test\n+ @Test(expected = AssertionError.class)\npublic void testCPSparseSmall() {\n- exceptionRule.expect(AssertionError.class);\n- exceptionRule.expectMessage(\"Too many parameters: function rowCountDistinctApprox takes at least 1\" +\n- \" and at most 2 parameters\");\nTypes.ExecType execType = Types.ExecType.CP;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix various warnings |
49,706 | 19.01.2023 09:32:57 | -3,600 | 6dbc983efb158e5762c76459fbc7970f0ba82679 | [MINOR] Remove unreachable code LineageCache | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCache.java",
"diff": "@@ -150,9 +150,6 @@ public class LineageCache\nif (!((SparkExecutionContext) ec).isRDDCached(rdd.getRDD().id()))\n//Return if the RDD is not cached in the executors\nreturn false;\n- // if (rdd == null && e.getCacheStatus() == LineageCacheStatus.NOTCACHED)\n- // return false;\n- // else\n((SparkExecutionContext) ec).setRDDHandleForVariable(outName, rdd);\n}\nelse { //TODO handle locks on gpu objects\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Remove unreachable code LineageCache |
49,706 | 19.01.2023 17:16:37 | -3,600 | 51dbaf3ef5cfbc78209d350e3fa7badef743646e | Compressed Frame Write and Read Binary
This commit contains the infrastructure to write and read binary
compressed frames. Note that the compression framework for frames
is not done, but this code uses the basic interface for it while
using the binary blocks code as the base. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/lib/FrameLibCompress.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/lib/FrameLibCompress.java",
"diff": "@@ -25,6 +25,11 @@ import org.apache.sysds.runtime.frame.data.FrameBlock;\nimport org.apache.sysds.runtime.frame.data.compress.FrameCompressionStatistics;\npublic class FrameLibCompress {\n+\n+ public static Pair<FrameBlock, FrameCompressionStatistics> compress(FrameBlock in, int k) {\n+ return compress(in, k, null);\n+ }\n+\npublic static Pair<FrameBlock, FrameCompressionStatistics> compress(FrameBlock in, int k, WTreeRoot root) {\nreturn new ImmutablePair<>(in, new FrameCompressionStatistics());\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/io/FrameReaderFactory.java",
"new_path": "src/main/java/org/apache/sysds/runtime/io/FrameReaderFactory.java",
"diff": "@@ -30,49 +30,29 @@ public class FrameReaderFactory {\nprotected static final Log LOG = LogFactory.getLog(FrameReaderFactory.class.getName());\npublic static FrameReader createFrameReader(FileFormat fmt) {\n- if( LOG.isDebugEnabled() )\n- LOG.debug(\"Creating Frame Reader \" + fmt);\nFileFormatProperties props = (fmt == FileFormat.CSV) ? new FileFormatPropertiesCSV() : null;\nreturn createFrameReader(fmt, props);\n}\npublic static FrameReader createFrameReader(FileFormat fmt, FileFormatProperties props) {\n- if( LOG.isDebugEnabled() )\n- LOG.debug(\"Creating Frame Reader \" + fmt + props);\n- FrameReader reader = null;\n-\n+ boolean textParallel = ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_READ_TEXTFORMATS);\n+ boolean binaryParallel = ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_READ_BINARYFORMATS);\nswitch(fmt) {\ncase TEXT:\n- if(ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_READ_TEXTFORMATS))\n- reader = new FrameReaderTextCellParallel();\n- else\n- reader = new FrameReaderTextCell();\n- break;\n-\n+ return textParallel ? new FrameReaderTextCellParallel() : new FrameReaderTextCell();\ncase CSV:\nif(props != null && !(props instanceof FileFormatPropertiesCSV))\nthrow new DMLRuntimeException(\"Wrong type of file format properties for CSV writer.\");\n- if(ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_READ_TEXTFORMATS))\n- reader = new FrameReaderTextCSVParallel((FileFormatPropertiesCSV) props);\n- else\n- reader = new FrameReaderTextCSV((FileFormatPropertiesCSV) props);\n- break;\n-\n+ FileFormatPropertiesCSV fp = (FileFormatPropertiesCSV) props;\n+ return textParallel ? new FrameReaderTextCSVParallel(fp) : new FrameReaderTextCSV(fp);\n+ case COMPRESSED: // use same logic as a binary read\ncase BINARY:\n- if(ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_READ_BINARYFORMATS))\n- reader = new FrameReaderBinaryBlockParallel();\n- else\n- reader = new FrameReaderBinaryBlock();\n- break;\n+ return binaryParallel ? new FrameReaderBinaryBlockParallel() : new FrameReaderBinaryBlock();\ncase PROTO:\n// TODO performance improvement: add parallel reader\n- reader = new FrameReaderProto();\n- break;\n-\n+ return new FrameReaderProto();\ndefault:\nthrow new DMLRuntimeException(\"Failed to create frame reader for unknown format: \" + fmt.toString());\n}\n-\n- return reader;\n}\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/main/java/org/apache/sysds/runtime/io/FrameWriterCompressed.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.runtime.io;\n+\n+import java.io.IOException;\n+\n+import org.apache.hadoop.fs.Path;\n+import org.apache.hadoop.mapred.JobConf;\n+import org.apache.sysds.hops.OptimizerUtils;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.frame.data.FrameBlock;\n+import org.apache.sysds.runtime.frame.data.lib.FrameLibCompress;\n+\n+public class FrameWriterCompressed extends FrameWriterBinaryBlockParallel {\n+\n+ private final boolean parallel;\n+\n+ public FrameWriterCompressed(boolean parallel) {\n+ this.parallel = parallel;\n+ }\n+\n+ @Override\n+ protected void writeBinaryBlockFrameToHDFS(Path path, JobConf job, FrameBlock src, long rlen, long clen)\n+ throws IOException, DMLRuntimeException {\n+ int k = parallel ? OptimizerUtils.getParallelBinaryWriteParallelism() : 1;\n+ FrameBlock compressed = FrameLibCompress.compress(src, k).getLeft();\n+ super.writeBinaryBlockFrameToHDFS(path, job, compressed, rlen, clen);\n+ }\n+\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/io/FrameWriterFactory.java",
"new_path": "src/main/java/org/apache/sysds/runtime/io/FrameWriterFactory.java",
"diff": "@@ -34,39 +34,24 @@ public class FrameWriterFactory {\n}\npublic static FrameWriter createFrameWriter(FileFormat fmt, FileFormatProperties props) {\n- FrameWriter writer = null;\n+ boolean textParallel = ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_WRITE_TEXTFORMATS);\n+ boolean binaryParallel = ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_WRITE_BINARYFORMATS);\nswitch(fmt) {\ncase TEXT:\n- if( ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_WRITE_TEXTFORMATS) )\n- writer = new FrameWriterTextCellParallel();\n- else\n- writer = new FrameWriterTextCell();\n- break;\n-\n+ return textParallel ? new FrameWriterTextCellParallel() : new FrameWriterTextCell();\ncase CSV:\nif(props != null && !(props instanceof FileFormatPropertiesCSV))\nthrow new DMLRuntimeException(\"Wrong type of file format properties for CSV writer.\");\n- if( ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_WRITE_TEXTFORMATS) )\n- writer = new FrameWriterTextCSVParallel((FileFormatPropertiesCSV)props);\n- else\n- writer = new FrameWriterTextCSV((FileFormatPropertiesCSV)props);\n- break;\n-\n+ FileFormatPropertiesCSV fp = (FileFormatPropertiesCSV) props;\n+ return textParallel ? new FrameWriterTextCSVParallel(fp) : new FrameWriterTextCSV(fp);\n+ case COMPRESSED:\n+ return new FrameWriterCompressed(binaryParallel);\ncase BINARY:\n- if( ConfigurationManager.getCompilerConfigFlag(ConfigType.PARALLEL_CP_WRITE_BINARYFORMATS) )\n- writer = new FrameWriterBinaryBlockParallel();\n- else\n- writer = new FrameWriterBinaryBlock();\n- break;\n-\n+ return binaryParallel ? new FrameWriterBinaryBlockParallel() : new FrameWriterBinaryBlock();\ncase PROTO:\n- // TODO performance improvement: add parallel reader\n- writer = new FrameWriterProto();\n- break;\n-\n+ return new FrameWriterProto();\ndefault:\nthrow new DMLRuntimeException(\"Failed to create frame writer for unknown format: \" + fmt.toString());\n}\n- return writer;\n}\n}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3488] Compressed Frame Write and Read Binary
This commit contains the infrastructure to write and read binary
compressed frames. Note that the compression framework for frames
is not done, but this code uses the basic interface for it while
using the binary blocks code as the base. |
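As a hedged illustration of how the new COMPRESSED write path might be driven end to end, the sketch below goes through the writer factory shown above; the output path, the frame contents, and the class name are made up, and the writeFrameToHDFS call assumes the standard FrameWriter signature (frame, file name, rows, columns).

import org.apache.sysds.common.Types.FileFormat;
import org.apache.sysds.common.Types.ValueType;
import org.apache.sysds.runtime.frame.data.FrameBlock;
import org.apache.sysds.runtime.io.FrameWriter;
import org.apache.sysds.runtime.io.FrameWriterFactory;

public class CompressedFrameWriteSketch {
  public static void main(String[] args) throws Exception {
    FrameBlock fb = new FrameBlock(new ValueType[] {ValueType.STRING, ValueType.INT64});
    fb.appendRow(new String[] {"a", "1"});
    // COMPRESSED dispatches to FrameWriterCompressed, which first compresses the frame
    // via FrameLibCompress and then delegates to the (parallel) binary block writer.
    FrameWriter writer = FrameWriterFactory.createFrameWriter(FileFormat.COMPRESSED, null);
    writer.writeFrameToHDFS(fb, "/tmp/frame.cla", fb.getNumRows(), fb.getNumColumns());
  }
}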
49,706 | 20.01.2023 13:21:46 | -3,600 | 0ae02041202c6fb6827bbd380f4debf0e0f8b65a | [MINOR] Solidify multitenant federated tests | [
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedLineageTraceReuseTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedLineageTraceReuseTest.java",
"diff": "package org.apache.sysds.test.functions.federated.multitenant;\n+import static org.junit.Assert.fail;\n+\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\n@@ -132,6 +134,8 @@ public class FederatedLineageTraceReuseTest extends MultiTenantTestBase {\nc = cols;\n}\n+ int[] workerPorts = startFedWorkers(4, new String[]{\"-lineage\", \"reuse\"});\n+\ndouble[][] X1 = getRandomMatrix(r, c, 0, 3, sparsity, 3);\ndouble[][] X2 = getRandomMatrix(r, c, 0, 3, sparsity, 7);\ndouble[][] X3 = getRandomMatrix(r, c, 0, 3, sparsity, 8);\n@@ -146,15 +150,6 @@ public class FederatedLineageTraceReuseTest extends MultiTenantTestBase {\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n- int[] workerPorts = startFedWorkers(4, new String[]{\"-lineage\", \"reuse\"});\n-\n- try {\n- Thread.sleep(4000);\n- }\n- catch(InterruptedException e) {\n- // TODO Auto-generated catch block\n- e.printStackTrace();\n- }\nrtplatform = execMode;\nif(rtplatform == ExecMode.SPARK) {\nDMLScript.USE_LOCAL_SPARK_CONFIG = true;\n@@ -193,6 +188,12 @@ public class FederatedLineageTraceReuseTest extends MultiTenantTestBase {\n}\nprivate void verifyResults(OpType opType, String outputLog, ExecMode execMode) {\n+ try{\n+ Thread.sleep(100);\n+ }\n+ catch(Exception e){\n+ fail(e.getMessage());\n+ }\nAssert.assertTrue(checkForHeavyHitter(opType, outputLog, execMode));\n// verify that the matrix object has been taken from cache\nAssert.assertTrue(checkForReuses(opType, outputLog, execMode));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedMultiTenantTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedMultiTenantTest.java",
"diff": "package org.apache.sysds.test.functions.federated.multitenant;\n-import java.lang.Math;\n+import static org.junit.Assert.fail;\n+\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\n-import static org.junit.Assert.fail;\n-\nimport org.apache.commons.lang3.ArrayUtils;\nimport org.apache.commons.lang3.StringUtils;\nimport org.apache.sysds.api.DMLScript;\n@@ -166,6 +165,7 @@ public class FederatedMultiTenantTest extends MultiTenantTestBase {\nr = rows / 4;\nc = cols;\n}\n+ int[] workerPorts = startFedWorkers(4);\ndouble[][] X1 = getRandomMatrix(r, c, 0, 3, 1, 3);\ndouble[][] X2 = getRandomMatrix(r, c, 0, 3, 1, 7);\n@@ -181,7 +181,6 @@ public class FederatedMultiTenantTest extends MultiTenantTestBase {\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n- int[] workerPorts = startFedWorkers(4);\nrtplatform = execMode;\nif(rtplatform == ExecMode.SPARK) {\n@@ -297,6 +296,12 @@ public class FederatedMultiTenantTest extends MultiTenantTestBase {\n}\nprivate void verifyResults(OpType opType, String outputLog, ExecMode execMode) {\n+ try{\n+ Thread.sleep(100);\n+ }\n+ catch(Exception e){\n+ fail(e.getMessage());\n+ }\nAssert.assertTrue(checkForHeavyHitter(opType, outputLog, execMode));\n// compare the results via files\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedReuseReadTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedReuseReadTest.java",
"diff": "package org.apache.sysds.test.functions.federated.multitenant;\n+import static org.junit.Assert.fail;\n+\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\n@@ -142,6 +144,8 @@ public class FederatedReuseReadTest extends MultiTenantTestBase {\nc = cols;\n}\n+ int[] workerPorts = startFedWorkers(4, lineage ? new String[]{\"-lineage\", \"reuse\"} : null);\n+\ndouble[][] X1 = getRandomMatrix(r, c, 0, 3, sparsity, 3);\ndouble[][] X2 = getRandomMatrix(r, c, 0, 3, sparsity, 7);\ndouble[][] X3 = getRandomMatrix(r, c, 0, 3, sparsity, 8);\n@@ -156,7 +160,6 @@ public class FederatedReuseReadTest extends MultiTenantTestBase {\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n- int[] workerPorts = startFedWorkers(4, lineage ? new String[]{\"-lineage\", \"reuse\"} : null);\nrtplatform = execMode;\nif(rtplatform == ExecMode.SPARK) {\n@@ -197,7 +200,14 @@ public class FederatedReuseReadTest extends MultiTenantTestBase {\n}\nprivate void verifyResults(OpType opType, String outputLog, ExecMode execMode) {\n- Assert.assertTrue(checkForHeavyHitter(opType, outputLog, execMode));\n+ try{\n+ Thread.sleep(100);\n+ }\n+ catch(Exception e){\n+ fail(e.getMessage());\n+ }\n+ Assert.assertTrue(\"Heavy hitter should include: \" + opType + \" outputLog:\" + outputLog,\n+ checkForHeavyHitter(opType, outputLog, execMode));\n// verify that the matrix object has been taken from cache\nAssert.assertTrue(outputLog.contains(\"Fed ReuseRead (Hits, Bytes):\\t\"\n+ Integer.toString((coordinatorProcesses.size()-1) * workerProcesses.size()) + \"/\"));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedReuseSlicesTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedReuseSlicesTest.java",
"diff": "package org.apache.sysds.test.functions.federated.multitenant;\n+import static org.junit.Assert.fail;\n+\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\n@@ -132,6 +134,8 @@ public class FederatedReuseSlicesTest extends MultiTenantTestBase {\nc = cols;\n}\n+ int[] workerPorts = startFedWorkers(4, new String[]{\"-lineage\", \"reuse\"});\n+\ndouble[][] X1 = getRandomMatrix(r, c, 0, 3, sparsity, 3);\ndouble[][] X2 = getRandomMatrix(r, c, 0, 3, sparsity, 7);\ndouble[][] X3 = getRandomMatrix(r, c, 0, 3, sparsity, 8);\n@@ -146,7 +150,6 @@ public class FederatedReuseSlicesTest extends MultiTenantTestBase {\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n- int[] workerPorts = startFedWorkers(4, new String[]{\"-lineage\", \"reuse\"});\nrtplatform = execMode;\nif(rtplatform == ExecMode.SPARK) {\n@@ -195,6 +198,12 @@ public class FederatedReuseSlicesTest extends MultiTenantTestBase {\n}\nprivate void verifyResults() {\n+ try{\n+ Thread.sleep(100);\n+ }\n+ catch(Exception e){\n+ fail(e.getMessage());\n+ }\n// compare the results via files\nHashMap<CellIndex, Double> refResults0 = readDMLMatrixFromOutputDir(\"S\" + 0);\nHashMap<CellIndex, Double> refResults1 = readDMLMatrixFromOutputDir(\"S\" + 1);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedSerializationReuseTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/FederatedSerializationReuseTest.java",
"diff": "package org.apache.sysds.test.functions.federated.multitenant;\n+import static org.junit.Assert.fail;\n+\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\n@@ -133,6 +135,8 @@ public class FederatedSerializationReuseTest extends MultiTenantTestBase {\nc = cols;\n}\n+ int[] workerPorts = startFedWorkers(4, new String[]{\"-lineage\", \"reuse\"});\n+\ndouble[][] X1 = getRandomMatrix(r, c, 0, 3, sparsity, 3);\ndouble[][] X2 = getRandomMatrix(r, c, 0, 3, sparsity, 7);\ndouble[][] X3 = getRandomMatrix(r, c, 0, 3, sparsity, 8);\n@@ -147,7 +151,6 @@ public class FederatedSerializationReuseTest extends MultiTenantTestBase {\n// empty script name because we don't execute any script, just start the worker\nfullDMLScriptName = \"\";\n- int[] workerPorts = startFedWorkers(4, new String[]{\"-lineage\", \"reuse\"});\nrtplatform = execMode;\nif(rtplatform == ExecMode.SPARK) {\n@@ -191,6 +194,12 @@ public class FederatedSerializationReuseTest extends MultiTenantTestBase {\n}\nprivate void verifyResults(OpType opType, String outputLog, ExecMode execMode) {\n+ try{\n+ Thread.sleep(100);\n+ }\n+ catch(Exception e){\n+ fail(e.getMessage());\n+ }\nAssert.assertTrue(checkForHeavyHitter(opType, outputLog, execMode));\n// verify that the matrix object has been taken from cache\ncheckForReuses(opType, outputLog, execMode);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/MultiTenantTestBase.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/multitenant/MultiTenantTestBase.java",
"diff": "@@ -33,6 +33,8 @@ import org.apache.sysds.common.Types.ExecMode;\nimport org.apache.sysds.test.AutomatedTestBase;\nimport org.junit.After;\n+import com.google.crypto.tink.subtle.Random;\n+\npublic abstract class MultiTenantTestBase extends AutomatedTestBase {\nprotected ArrayList<Process> workerProcesses = new ArrayList<>();\nprotected ArrayList<Process> coordinatorProcesses = new ArrayList<>();\n@@ -66,7 +68,7 @@ public abstract class MultiTenantTestBase extends AutomatedTestBase {\nports[counter] = getRandomAvailablePort();\n// start process but only wait long for last one.\nProcess tmpProcess = startLocalFedWorker(ports[counter], addArgs,\n- counter == numFedWorkers-1 ? FED_WORKER_WAIT * 3 : FED_WORKER_WAIT_S);\n+ counter == numFedWorkers-1 ? (FED_WORKER_WAIT + Random.randInt(1000)) * 3 : FED_WORKER_WAIT_S);\nworkerProcesses.add(tmpProcess);\n}\nreturn ports;\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Solidify multitenant federated tests |
49,689 | 21.01.2023 19:47:45 | -3,600 | 6c722ecedb49df4301b53067fb530a2a8ff300cf | Bug fixes, new tests and extensions
This patch fixes bugs in the max_parallelize operator ordering
logic, adds extensions such as treating operators whose outputs are
collected for broadcast as Spark-triggering roots, and adds an
asynchronous Zipmm implementation. This patch also adds
new tests. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/BinaryM.java",
"new_path": "src/main/java/org/apache/sysds/lops/BinaryM.java",
"diff": "@@ -76,6 +76,13 @@ public class BinaryM extends Lop\nreturn \" Operation: \" + _operation;\n}\n+ @Override\n+ public Lop getBroadcastInput() {\n+ if (getExecType() != ExecType.SPARK)\n+ return null;\n+ return inputs.get(1);\n+ }\n+\npublic OpOp2 getOperationType() {\nreturn _operation;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/FunctionCallCP.java",
"new_path": "src/main/java/org/apache/sysds/lops/FunctionCallCP.java",
"diff": "@@ -82,6 +82,10 @@ public class FunctionCallCP extends Lop\nreturn _outputLops;\n}\n+ public String getFnamespace() {\n+ return _fnamespace;\n+ }\n+\npublic boolean requiresOutputCreateVar() {\nreturn !_fname.equalsIgnoreCase(Builtins.REMOVE.getName());\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"new_path": "src/main/java/org/apache/sysds/lops/compile/linearization/ILinearize.java",
"diff": "@@ -44,6 +44,7 @@ import org.apache.sysds.lops.CentralMoment;\nimport org.apache.sysds.lops.Checkpoint;\nimport org.apache.sysds.lops.CoVariance;\nimport org.apache.sysds.lops.DataGen;\n+import org.apache.sysds.lops.FunctionCallCP;\nimport org.apache.sysds.lops.GroupedAggregate;\nimport org.apache.sysds.lops.GroupedAggregateM;\nimport org.apache.sysds.lops.Lop;\n@@ -59,6 +60,7 @@ import org.apache.sysds.lops.ReBlock;\nimport org.apache.sysds.lops.SpoofFused;\nimport org.apache.sysds.lops.UAggOuterChain;\nimport org.apache.sysds.lops.UnaryCP;\n+import org.apache.sysds.parser.DMLProgram;\n/**\n* A interface for the linearization algorithms that order the DAG nodes into a sequence of instructions to execute.\n@@ -188,17 +190,17 @@ public interface ILinearize {\nif (v.stream().anyMatch(ILinearize::isSparkTriggeringOp)) {\n// Step 1: Collect the Spark roots and #Spark instructions in each subDAG\nMap<Long, Integer> sparkOpCount = new HashMap<>();\n- List<Lop> roots = v.stream().filter(l -> l.getOutputs().isEmpty()).collect(Collectors.toList());\n+ List<Lop> roots = v.stream().filter(ILinearize::isRoot).collect(Collectors.toList());\nList<Lop> sparkRoots = new ArrayList<>();\nroots.forEach(r -> collectSparkRoots(r, sparkOpCount, sparkRoots));\n- // Step 2: Depth-first linearization. Place the Spark OPs first.\n- // Maintain the default order (by ID) to trigger independent Spark chains first\n+ // Step 2: Depth-first linearization. Place the CP OPs first to increase broadcast potentials.\n+ // Maintain the default order (by ID) to trigger independent Spark jobs first\nArrayList<Lop> operatorList = new ArrayList<>();\n- sparkRoots.forEach(r -> depthFirst(r, operatorList, sparkOpCount, true));\n+ sparkRoots.forEach(r -> depthFirst(r, operatorList, sparkOpCount, false));\n// Step 3: Place the rest of the operators (CP). Sort the CP roots based on\n- // #Spark operators in ascending order, i.e. execute the independent CP chains first\n+ // #Spark operators in ascending order, i.e. 
execute the independent CP legs first\nroots.forEach(r -> depthFirst(r, operatorList, sparkOpCount, false));\nroots.forEach(Lop::resetVisitStatus);\n@@ -221,6 +223,16 @@ public interface ILinearize {\nreturn v_bc;\n}\n+ private static boolean isRoot(Lop lop) {\n+ if (lop.getOutputs().isEmpty())\n+ return true;\n+ if (lop instanceof FunctionCallCP &&\n+ ((FunctionCallCP) lop).getFnamespace().equalsIgnoreCase(DMLProgram.INTERNAL_NAMESPACE)) {\n+ return true;\n+ }\n+ return false;\n+ }\n+\n// Gather the Spark operators which return intermediates to local (actions/single_block)\n// In addition count the number of Spark OPs underneath every Operator\nprivate static int collectSparkRoots(Lop root, Map<Long, Integer> sparkOpCount, List<Lop> sparkRoots) {\n@@ -258,9 +270,11 @@ public interface ILinearize {\nreturn;\nfor (Lop input : root.getInputs())\n+ if (root.getBroadcastInput() != input)\ncollectPersistableSparkOps(input, operatorJobCount);\n// Increment the job counter if this node benefits from persisting\n+ // and reachable from multiple job roots\nif (isPersistableSparkOp(root))\noperatorJobCount.merge(root.getID(), 1, Integer::sum);\n@@ -296,7 +310,16 @@ public interface ILinearize {\nreturn lop.isExecSpark() && (lop.getAggType() == SparkAggType.SINGLE_BLOCK\n|| lop.getDataType() == DataType.SCALAR || lop instanceof MapMultChain\n|| lop instanceof PickByCount || lop instanceof MMZip || lop instanceof CentralMoment\n- || lop instanceof CoVariance || lop instanceof MMTSJ || lop.isAllOutputsCP());\n+ || lop instanceof CoVariance || lop instanceof MMTSJ || lop.isAllOutputsCP())\n+ || isCollectForBroadcast(lop);\n+ }\n+\n+ private static boolean isCollectForBroadcast(Lop lop) {\n+ boolean isSparkOp = lop.isExecSpark();\n+ boolean isBc = lop.getOutputs().stream()\n+ .allMatch(out -> (out.getBroadcastInput() == lop));\n+ //TODO: Handle Lops with mixed Spark (broadcast) CP consumers\n+ return isSparkOp && isBc && (lop.getDataType() == DataType.MATRIX);\n}\n// Dictionary of Spark operators which are expensive enough to be\n@@ -432,7 +455,8 @@ public interface ILinearize {\n|| (out instanceof GroupedAggregateM)));\n//TODO: support non-matrix outputs\nreturn transformOP && !hasParameterizedOut\n- && lop.isAllOutputsCP() && lop.getDataType() == DataType.MATRIX;\n+ && (lop.isAllOutputsCP() || isCollectForBroadcast(lop))\n+ && lop.getDataType() == DataType.MATRIX;\n}\nprivate static boolean isBroadcastNeeded(Lop lop) {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/SparkExecutionContext.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/SparkExecutionContext.java",
"diff": "@@ -423,13 +423,13 @@ public class SparkExecutionContext extends ExecutionContext\nrdd = mo.getRDDHandle().getRDD();\n}\n//CASE 2: dirty in memory data or cached result of rdd operations\n- else if( mo.isDirty() || mo.isCached(false) || mo.isFederated() )\n+ else if( mo.isDirty() || mo.isCached(false) || mo.isFederated() || mo instanceof MatrixObjectFuture)\n{\n//get in-memory matrix block and parallelize it\n//w/ guarded parallelize (fallback to export, rdd from file if too large)\nDataCharacteristics dc = mo.getDataCharacteristics();\nboolean fromFile = false;\n- if( !mo.isFederated() && (!OptimizerUtils.checkSparkCollectMemoryBudget(dc, 0)\n+ if( !mo.isFederated() && !(mo instanceof MatrixObjectFuture) && (!OptimizerUtils.checkSparkCollectMemoryBudget(dc, 0)\n|| !_parRDDs.reserve(OptimizerUtils.estimatePartitionedSizeExactSparsity(dc)))) {\nif( mo.isDirty() || !mo.isHDFSFileExists() ) //write if necessary\nmo.exportData();\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/ZipmmSPInstruction.java",
"new_path": "src/main/java/org/apache/sysds/runtime/instructions/spark/ZipmmSPInstruction.java",
"diff": "@@ -22,6 +22,7 @@ package org.apache.sysds.runtime.instructions.spark;\nimport org.apache.spark.api.java.JavaPairRDD;\nimport org.apache.spark.api.java.JavaRDD;\nimport org.apache.spark.api.java.function.Function;\n+import org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.controlprogram.context.ExecutionContext;\nimport org.apache.sysds.runtime.controlprogram.context.SparkExecutionContext;\n@@ -31,6 +32,8 @@ import org.apache.sysds.runtime.functionobjects.SwapIndex;\nimport org.apache.sysds.runtime.instructions.InstructionUtils;\nimport org.apache.sysds.runtime.instructions.cp.CPOperand;\nimport org.apache.sysds.runtime.instructions.spark.utils.RDDAggregateUtils;\n+import org.apache.sysds.runtime.lineage.LineageCacheConfig;\n+import org.apache.sysds.runtime.lineage.LineageItem;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\nimport org.apache.sysds.runtime.matrix.data.MatrixIndexes;\nimport org.apache.sysds.runtime.matrix.data.OperationsOnMatrixValues;\n@@ -38,8 +41,13 @@ import org.apache.sysds.runtime.matrix.operators.AggregateBinaryOperator;\nimport org.apache.sysds.runtime.matrix.operators.AggregateOperator;\nimport org.apache.sysds.runtime.matrix.operators.Operator;\nimport org.apache.sysds.runtime.matrix.operators.ReorgOperator;\n+import org.apache.sysds.runtime.util.CommonThreadPool;\nimport scala.Tuple2;\n+import java.util.concurrent.Callable;\n+import java.util.concurrent.Executors;\n+import java.util.concurrent.Future;\n+\npublic class ZipmmSPInstruction extends BinarySPInstruction {\n// internal flag to apply left-transpose rewrite or not\nprivate boolean _tRewrite = true;\n@@ -76,6 +84,20 @@ public class ZipmmSPInstruction extends BinarySPInstruction {\nJavaPairRDD<MatrixIndexes,MatrixBlock> in1 = sec.getBinaryMatrixBlockRDDHandleForVariable( input1.getName() ); //X\nJavaPairRDD<MatrixIndexes,MatrixBlock> in2 = sec.getBinaryMatrixBlockRDDHandleForVariable( input2.getName() ); //y\n+ if (ConfigurationManager.isMaxPrallelizeEnabled()) {\n+ try {\n+ if (CommonThreadPool.triggerRemoteOPsPool == null)\n+ CommonThreadPool.triggerRemoteOPsPool = Executors.newCachedThreadPool();\n+ ZipmmTask task = new ZipmmTask(in1, in2, _tRewrite);\n+ Future<MatrixBlock> future_out = CommonThreadPool.triggerRemoteOPsPool.submit(task);\n+ LineageItem li = !LineageCacheConfig.ReuseCacheType.isNone() ? 
getLineageItem(ec).getValue() : null;\n+ sec.setMatrixOutputAndLineage(output.getName(), future_out, li);\n+ }\n+ catch(Exception ex) {\n+ throw new DMLRuntimeException(ex);\n+ }\n+ }\n+ else {\n//process core zipmm matrix multiply (in contrast to cpmm, the join over original indexes\n//preserves the original partitioning and with that potentially unnecessary join shuffle)\nJavaRDD<MatrixBlock> out = in1.join(in2).values() // join over original indexes\n@@ -94,6 +116,7 @@ public class ZipmmSPInstruction extends BinarySPInstruction {\n//this also includes implicit maintenance of matrix characteristics\nsec.setMatrixOutput(output.getName(), out2);\n}\n+ }\nprivate static class ZipMultiplyFunction implements Function<Tuple2<MatrixBlock,MatrixBlock>, MatrixBlock>\n{\n@@ -125,4 +148,34 @@ public class ZipmmSPInstruction extends BinarySPInstruction {\nreturn OperationsOnMatrixValues.matMult(tmp, in1, new MatrixBlock(), _abop);\n}\n}\n+\n+ private static class ZipmmTask implements Callable<MatrixBlock> {\n+ JavaPairRDD<MatrixIndexes, MatrixBlock> _in1;\n+ JavaPairRDD<MatrixIndexes, MatrixBlock> _in2;\n+ boolean _tRewrite;\n+\n+ ZipmmTask(JavaPairRDD<MatrixIndexes, MatrixBlock> in1, JavaPairRDD<MatrixIndexes, MatrixBlock> in2, boolean tRw) {\n+ _in1 = in1;\n+ _in2 = in2;\n+ _tRewrite = tRw;\n+ }\n+ @Override\n+ public MatrixBlock call() {\n+ //process core zipmm matrix multiply (in contrast to cpmm, the join over original indexes\n+ //preserves the original partitioning and with that potentially unnecessary join shuffle)\n+ JavaRDD<MatrixBlock> out = _in1.join(_in2).values() // join over original indexes\n+ .map(new ZipMultiplyFunction(_tRewrite)); // compute block multiplications, incl t(y)\n+\n+ //single-block aggregation (guaranteed by zipmm blocksize constraint)\n+ MatrixBlock out2 = RDDAggregateUtils.sumStable(out);\n+\n+ //final transpose of result (for t(t(y)%*%X))), if transpose rewrite\n+ if( _tRewrite ) {\n+ ReorgOperator rop = new ReorgOperator(SwapIndex.getSwapIndexFnObject());\n+ out2 = out2.reorgOperations(rop, new MatrixBlock(), 0, 0, 0);\n+ }\n+\n+ return out2;\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/MaxParallelizeOrderTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/MaxParallelizeOrderTest.java",
"diff": "@@ -37,7 +37,7 @@ public class MaxParallelizeOrderTest extends AutomatedTestBase {\nprotected static final String TEST_DIR = \"functions/async/\";\nprotected static final String TEST_NAME = \"MaxParallelizeOrder\";\n- protected static final int TEST_VARIANTS = 4;\n+ protected static final int TEST_VARIANTS = 5;\nprotected static String TEST_CLASS_DIR = TEST_DIR + MaxParallelizeOrderTest.class.getSimpleName() + \"/\";\n@Override\n@@ -67,6 +67,12 @@ public class MaxParallelizeOrderTest extends AutomatedTestBase {\nrunTest(TEST_NAME+\"4\");\n}\n+ @Test\n+ public void testPCAlm() {\n+ //eigen is an internal function. Outputs of eigen are not in the lop list\n+ runTest(TEST_NAME+\"5\");\n+ }\n+\npublic void runTest(String testname) {\nExecMode oldPlatform = setExecMode(ExecMode.HYBRID);\n@@ -101,9 +107,9 @@ public class MaxParallelizeOrderTest extends AutomatedTestBase {\nOptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = true;\n//compare matrices\n- boolean matchVal = TestUtils.compareMatrices(R, R_mp, 1e-6, \"Origin\", \"withPrefetch\");\n+ boolean matchVal = TestUtils.compareMatrices(R, R_mp, 1e-6, \"Origin\", \"withMaxParallelize\");\nif (!matchVal)\n- System.out.println(\"Value w/o Prefetch \"+R+\" w/ Prefetch \"+R_mp);\n+ System.out.println(\"Value w/ depth first\"+R+\" w/ Max Parallelize\"+R_mp);\n} finally {\nresetExecMode(oldPlatform);\nInfrastructureAnalyzer.setLocalMaxMemory(oldmem);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/PrefetchRDDTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/PrefetchRDDTest.java",
"diff": "@@ -39,7 +39,7 @@ public class PrefetchRDDTest extends AutomatedTestBase {\nprotected static final String TEST_DIR = \"functions/async/\";\nprotected static final String TEST_NAME = \"PrefetchRDD\";\n- protected static final int TEST_VARIANTS = 3;\n+ protected static final int TEST_VARIANTS = 4;\nprotected static String TEST_CLASS_DIR = TEST_DIR + PrefetchRDDTest.class.getSimpleName() + \"/\";\n@Override\n@@ -67,9 +67,13 @@ public class PrefetchRDDTest extends AutomatedTestBase {\nrunTest(TEST_NAME+\"3\");\n}\n+ @Test\n+ public void testAsyncSparkOPs4() {\n+ //SP consumer. Collect to broadcast to the SP consumer. Prefetch.\n+ runTest(TEST_NAME+\"4\");\n+ }\n+\npublic void runTest(String testname) {\n- boolean old_simplification = OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION;\n- boolean old_sum_product = OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES;\nboolean old_trans_exec_type = OptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE;\nExecMode oldPlatform = setExecMode(ExecMode.HYBRID);\n@@ -78,8 +82,6 @@ public class PrefetchRDDTest extends AutomatedTestBase {\nInfrastructureAnalyzer.setLocalMaxMemory(mem);\ntry {\n- //OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = false;\n- //OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = false;\nOptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = false;\ngetAndLoadTestConfiguration(testname);\nfullDMLScriptName = getScript();\n@@ -114,8 +116,6 @@ public class PrefetchRDDTest extends AutomatedTestBase {\n//long successPF = SparkStatistics.getAsyncPrefetchCount();\n//Assert.assertTrue(\"Violated successful Prefetch count: \"+successPF, successPF == expected_successPF);\n} finally {\n- OptimizerUtils.ALLOW_ALGEBRAIC_SIMPLIFICATION = old_simplification;\n- OptimizerUtils.ALLOW_SUM_PRODUCT_REWRITES = old_sum_product;\nOptimizerUtils.ALLOW_TRANSITIVE_SPARK_EXEC_TYPE = old_trans_exec_type;\nresetExecMode(oldPlatform);\nInfrastructureAnalyzer.setLocalMaxMemory(oldmem);\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/async/MaxParallelizeOrder5.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+checkR2 = function(Matrix[double] X, Matrix[double] y, Matrix[double] y_p,\n+ Matrix[double] beta, Integer icpt) return (Double R2_ad)\n+{\n+ n = nrow(X);\n+ m = ncol(X);\n+ m_ext = m;\n+ if (icpt == 1|icpt == 2)\n+ m_ext = m+1; #due to extra column ones\n+ avg_tot = sum(y)/n;\n+ ss_tot = sum(y^2);\n+ ss_avg_tot = ss_tot - n*avg_tot^2;\n+ y_res = y - y_p;\n+ avg_res = sum(y - y_p)/n;\n+ ss_res = sum((y - y_p)^2);\n+ R2 = 1 - ss_res/ss_avg_tot;\n+ dispersion = ifelse(n>m_ext, ss_res/(n-m_ext), NaN);\n+ R2_ad = ifelse(n>m_ext, 1-dispersion/(ss_avg_tot/(n-1)), NaN);\n+}\n+\n+# Get the dataset\n+M = 4000;\n+A = rand(rows=M, cols=500, seed=42);\n+y = rand(rows=M, cols=1, seed=43);\n+R = matrix(0, rows=1, cols=20);\n+\n+K = floor(ncol(A) * 0.1);\n+nComb = 5; #10\n+\n+for (i in 1:nComb) {\n+ [newA1, Mout] = pca(X=A, K=K+i);\n+ beta1 = lmDS(X=newA1, y=y, icpt=1, reg=0.0001, verbose=FALSE);\n+ y_predict1 = lmPredict(X=newA1, B=beta1, icpt=1);\n+ R2_ad1 = checkR2(newA1, y, y_predict1, beta1, 1);\n+ R[,i] = R2_ad1;\n+}\n+\n+R = sum(R);\n+write(R, $1, format=\"text\");\n+\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/async/PrefetchRDD4.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+X = rand(rows=10000, cols=200, seed=42); #sp_rand\n+v = rand(rows=200, cols=1, seed=42); #cp_rand\n+\n+# Spark transformation operations\n+sp1 = X + ceil(X);\n+sp2 = sp1 %*% v; #output fits in local\n+\n+# CP instructions\n+v = ((v + v) * 1 - v) / (1+1);\n+v = ((v + v) * 2 - v) / (2+1);\n+\n+# Collect sp2 result to local and broadcast for map+\n+sp3 = X + sp2; #map+\n+cp = sp3 + sum(v);\n+# Trigger the DAG of SP operations\n+R = sum(cp);\n+write(R, $1, format=\"text\");\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3469] Bug fixes, new tests and extensions
This patch fixes bugs in the max_parallelize operator ordering
logic, adds extensions such as treating operators whose outputs are
collected for broadcast as Spark-triggering roots, and adds an
asynchronous Zipmm implementation. This patch also adds
new tests. |
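The asynchronous trigger pattern introduced by ZipmmTask above boils down to wrapping the Spark-triggering work in a Callable, submitting it to a cached thread pool, and registering the resulting Future as the operation's output so the coordinator thread can continue. A plain-Java sketch of that pattern, with no Spark dependencies and all names invented:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncTriggerSketch {
  public static void main(String[] args) throws Exception {
    // Lazily created cached pool, analogous to CommonThreadPool.triggerRemoteOPsPool.
    ExecutorService pool = Executors.newCachedThreadPool();

    // Stand-in for ZipmmTask: performs the blocking work (join, block
    // multiplies, single-block aggregation) and returns one result block.
    Callable<double[]> task = () -> new double[] {1.0, 2.0, 3.0};

    // The instruction registers the Future as its output and returns immediately,
    // similar to sec.setMatrixOutputAndLineage(output, future_out, li) above.
    Future<double[]> futureOut = pool.submit(task);

    // Downstream consumers block only when the value is actually needed.
    System.out.println(futureOut.get().length);
    pool.shutdown();
  }
}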
49,706 | 21.01.2023 16:20:16 | -3,600 | 021467d59266e4f7a0e97c7cc0040a52d550e86d | [MINOR] 100% FrameIterator Tests
This commit adds full iterator tests, which found one null pointer exception. | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/IteratorFactory.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/IteratorFactory.java",
"diff": "package org.apache.sysds.runtime.frame.data.iterators;\n-import java.util.Iterator;\n-\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.runtime.frame.data.FrameBlock;\n@@ -35,7 +33,7 @@ public interface IteratorFactory {\n* @param fb The frame to iterate through\n* @return string array iterator\n*/\n- public static Iterator<String[]> getStringRowIterator(FrameBlock fb) {\n+ public static RowIterator<String> getStringRowIterator(FrameBlock fb) {\nreturn new StringRowIterator(fb, 0, fb.getNumRows());\n}\n@@ -47,7 +45,7 @@ public interface IteratorFactory {\n* @param cols column selection, 1-based\n* @return string array iterator\n*/\n- public static Iterator<String[]> getStringRowIterator(FrameBlock fb, int[] cols) {\n+ public static RowIterator<String> getStringRowIterator(FrameBlock fb, int[] cols) {\nreturn new StringRowIterator(fb, 0, fb.getNumRows(), cols);\n}\n@@ -59,7 +57,7 @@ public interface IteratorFactory {\n* @param colID column selection, 1-based\n* @return string array iterator\n*/\n- public static Iterator<String[]> getStringRowIterator(FrameBlock fb, int colID) {\n+ public static RowIterator<String> getStringRowIterator(FrameBlock fb, int colID) {\nreturn new StringRowIterator(fb, 0, fb.getNumRows(), new int[] {colID});\n}\n@@ -71,7 +69,7 @@ public interface IteratorFactory {\n* @param ru upper row index\n* @return string array iterator\n*/\n- public static Iterator<String[]> getStringRowIterator(FrameBlock fb, int rl, int ru) {\n+ public static RowIterator<String> getStringRowIterator(FrameBlock fb, int rl, int ru) {\nreturn new StringRowIterator(fb, rl, ru);\n}\n@@ -85,7 +83,7 @@ public interface IteratorFactory {\n* @param cols column selection, 1-based\n* @return string array iterator\n*/\n- public static Iterator<String[]> getStringRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\n+ public static RowIterator<String> getStringRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\nreturn new StringRowIterator(fb, rl, ru, cols);\n}\n@@ -99,7 +97,7 @@ public interface IteratorFactory {\n* @param colID columnID, 1-based\n* @return string array iterator\n*/\n- public static Iterator<String[]> getStringRowIterator(FrameBlock fb, int rl, int ru, int colID) {\n+ public static RowIterator<String> getStringRowIterator(FrameBlock fb, int rl, int ru, int colID) {\nreturn new StringRowIterator(fb, rl, ru, new int[] {colID});\n}\n@@ -109,7 +107,7 @@ public interface IteratorFactory {\n* @param fb The frame to iterate through\n* @return object array iterator\n*/\n- public static Iterator<Object[]> getObjectRowIterator(FrameBlock fb) {\n+ public static RowIterator<Object> getObjectRowIterator(FrameBlock fb) {\nreturn new ObjectRowIterator(fb, 0, fb.getNumRows());\n}\n@@ -121,7 +119,7 @@ public interface IteratorFactory {\n* @param schema target schema of objects\n* @return object array iterator\n*/\n- public static Iterator<Object[]> getObjectRowIterator(FrameBlock fb, ValueType[] schema) {\n+ public static RowIterator<Object> getObjectRowIterator(FrameBlock fb, ValueType[] schema) {\nreturn new ObjectRowIterator(fb, 0, fb.getNumRows(), schema);\n}\n@@ -133,10 +131,21 @@ public interface IteratorFactory {\n* @param cols column selection, 1-based\n* @return object array iterator\n*/\n- public static Iterator<Object[]> getObjectRowIterator(FrameBlock fb, int[] cols) {\n+ public static RowIterator<Object> getObjectRowIterator(FrameBlock fb, int[] cols) {\nreturn new ObjectRowIterator(fb, 0, fb.getNumRows(), cols);\n}\n+ /**\n+ * Get a row 
iterator over the frame where all selected fields are encoded as objects according to their value types.\n+ *\n+ * @param fb The frame to iterate through\n+ * @param colID column selection, 1-based\n+ * @return object array iterator\n+ */\n+ public static RowIterator<Object> getObjectRowIterator(FrameBlock fb, int colID) {\n+ return new ObjectRowIterator(fb, 0, fb.getNumRows(), new int[] {colID});\n+ }\n+\n/**\n* Get a row iterator over the frame where all fields are encoded as boxed objects according to their value types.\n*\n@@ -145,7 +154,7 @@ public interface IteratorFactory {\n* @param ru upper row index\n* @return object array iterator\n*/\n- public static Iterator<Object[]> getObjectRowIterator(FrameBlock fb, int rl, int ru) {\n+ public static RowIterator<Object> getObjectRowIterator(FrameBlock fb, int rl, int ru) {\nreturn new ObjectRowIterator(fb, rl, ru);\n}\n@@ -159,8 +168,21 @@ public interface IteratorFactory {\n* @param cols column selection, 1-based\n* @return object array iterator\n*/\n- public static Iterator<Object[]> getObjectRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\n+ public static RowIterator<Object> getObjectRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\nreturn new ObjectRowIterator(fb, rl, ru, cols);\n}\n+ /**\n+ * Get a row iterator over the frame where all selected fields are encoded as boxed objects according to their value\n+ * types.\n+ *\n+ * @param fb The frame to iterate through\n+ * @param rl lower row index\n+ * @param ru upper row index\n+ * @param colID column selection, 1-based\n+ * @return object array iterator\n+ */\n+ public static RowIterator<Object> getObjectRowIterator(FrameBlock fb, int rl, int ru, int colID) {\n+ return new ObjectRowIterator(fb, rl, ru, new int[] {colID});\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/ObjectRowIterator.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/ObjectRowIterator.java",
"diff": "@@ -26,19 +26,19 @@ import org.apache.sysds.runtime.util.UtilFunctions;\npublic class ObjectRowIterator extends RowIterator<Object> {\nprivate final ValueType[] _tgtSchema;\n- public ObjectRowIterator(FrameBlock fb, int rl, int ru) {\n+ protected ObjectRowIterator(FrameBlock fb, int rl, int ru) {\nthis(fb, rl, ru, UtilFunctions.getSeqArray(1, fb.getNumColumns(), 1), null);\n}\n- public ObjectRowIterator(FrameBlock fb, int rl, int ru, ValueType[] schema) {\n+ protected ObjectRowIterator(FrameBlock fb, int rl, int ru, ValueType[] schema) {\nthis(fb, rl, ru, UtilFunctions.getSeqArray(1, fb.getNumColumns(), 1), schema);\n}\n- public ObjectRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\n+ protected ObjectRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\nthis(fb, rl, ru, cols, null);\n}\n- public ObjectRowIterator(FrameBlock fb, int rl, int ru, int[] cols, ValueType[] schema){\n+ protected ObjectRowIterator(FrameBlock fb, int rl, int ru, int[] cols, ValueType[] schema){\nsuper(fb, rl, ru, cols);\n_tgtSchema = schema;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/RowIterator.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/RowIterator.java",
"diff": "@@ -23,6 +23,7 @@ import java.util.Iterator;\nimport org.apache.commons.logging.Log;\nimport org.apache.commons.logging.LogFactory;\n+import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.frame.data.FrameBlock;\nimport org.apache.sysds.runtime.util.UtilFunctions;\n@@ -41,6 +42,9 @@ public abstract class RowIterator<T> implements Iterator<T[]> {\n}\nprotected RowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\n+ if(rl < 0 || ru > fb.getNumRows() || rl > ru)\n+ throw new DMLRuntimeException(\"Invalid range of iterator: \" + rl + \"->\" + ru);\n+\n_fb = fb;\n_curRow = createRow(cols.length);\n_cols = cols;\n@@ -55,7 +59,7 @@ public abstract class RowIterator<T> implements Iterator<T[]> {\n@Override\npublic void remove() {\n- throw new RuntimeException(\"RowIterator.remove is unsupported!\");\n+ throw new DMLRuntimeException(\"RowIterator.remove() is unsupported!\");\n}\nprotected abstract T[] createRow(int size);\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/StringRowIterator.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/iterators/StringRowIterator.java",
"diff": "@@ -22,11 +22,11 @@ package org.apache.sysds.runtime.frame.data.iterators;\nimport org.apache.sysds.runtime.frame.data.FrameBlock;\npublic class StringRowIterator extends RowIterator<String> {\n- public StringRowIterator(FrameBlock fb, int rl, int ru) {\n+ protected StringRowIterator(FrameBlock fb, int rl, int ru) {\nsuper(fb, rl, ru);\n}\n- public StringRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\n+ protected StringRowIterator(FrameBlock fb, int rl, int ru, int[] cols) {\nsuper(fb, rl, ru, cols);\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/component/frame/iterators/IteratorTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.component.frame.iterators;\n+\n+import static org.junit.Assert.assertEquals;\n+import static org.junit.Assert.assertNotEquals;\n+import static org.junit.Assert.assertTrue;\n+\n+import java.util.Arrays;\n+\n+import org.apache.sysds.common.Types.ValueType;\n+import org.apache.sysds.runtime.DMLRuntimeException;\n+import org.apache.sysds.runtime.frame.data.FrameBlock;\n+import org.apache.sysds.runtime.frame.data.iterators.IteratorFactory;\n+import org.apache.sysds.runtime.frame.data.iterators.RowIterator;\n+import org.apache.sysds.runtime.util.UtilFunctions;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Test;\n+\n+public class IteratorTest {\n+\n+ private final FrameBlock fb1 = TestUtils.generateRandomFrameBlock(10, 10, 23);\n+ private final FrameBlock fb2 = TestUtils.generateRandomFrameBlock(40, 30, 22);\n+\n+ @Test\n+ public void StringObjectStringFB1() {\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb1);\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb1);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void StringObjectStringFB2() {\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb2);\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb2);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void StringObjectStringNotEquals() {\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb1);\n+ a.next();\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb1);\n+ assertNotEquals(Arrays.toString(a.next()), Arrays.toString(b.next()));\n+ }\n+\n+ @Test\n+ public void StringObjectStringNotEqualsFB1vsFB2() {\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb1);\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb2);\n+ assertNotEquals(Arrays.toString(a.next()), Arrays.toString(b.next()));\n+ }\n+\n+ @Test\n+ public void compareSubRangesFB1() {\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb1, 1, fb1.getNumRows());\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb1);\n+ b.next();\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void compareSubRangesFB2() {\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb2, 1, fb2.getNumRows());\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb2);\n+ b.next();\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void compareSubRangesStringFB1() {\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb1, 1, fb1.getNumRows());\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb1);\n+ b.next();\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void 
compareSubRangesStringFB2() {\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb2, 1, fb2.getNumRows());\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb2);\n+ b.next();\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorObjectSelectColumns() {\n+ FrameBlock fb1Slice = fb1.slice(0, fb1.getNumRows() - 1, 1, fb1.getNumColumns() - 1);\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb1Slice);\n+ int[] select = new int[fb1.getNumColumns() - 1];\n+ for(int i = 0; i < fb1.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb1, select);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorObjectSelectColumnsFB2() {\n+ FrameBlock fb2Slice = fb2.slice(0, fb2.getNumRows() - 1, 1, fb2.getNumColumns() - 1);\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb2Slice);\n+ int[] select = new int[fb2.getNumColumns() - 1];\n+ for(int i = 0; i < fb2.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb2, select);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorStringSelectColumns() {\n+ FrameBlock fb1Slice = fb1.slice(0, fb1.getNumRows() - 1, 1, fb1.getNumColumns() - 1);\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb1Slice);\n+ int[] select = new int[fb1.getNumColumns() - 1];\n+ for(int i = 0; i < fb1.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb1, select);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorStringSelectColumnsFB2() {\n+ FrameBlock fb2Slice = fb2.slice(0, fb2.getNumRows() - 1, 1, fb2.getNumColumns() - 1);\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb2Slice);\n+ int[] select = new int[fb2.getNumColumns() - 1];\n+ for(int i = 0; i < fb2.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb2, select);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorStringSelectColumnsSubRowsFB2() {\n+ FrameBlock fb2Slice = fb2.slice(1, fb2.getNumRows() - 1, 1, fb2.getNumColumns() - 1);\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb2Slice);\n+ int[] select = new int[fb2.getNumColumns() - 1];\n+ for(int i = 0; i < fb2.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb2, 1, fb2.getNumRows(), select);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorObjectSelectColumnsSubRowsFB2() {\n+ FrameBlock fb2Slice = fb2.slice(1, fb2.getNumRows() - 1, 1, fb2.getNumColumns() - 1);\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb2Slice);\n+ int[] select = new int[fb2.getNumColumns() - 1];\n+ for(int i = 0; i < fb2.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb2, 1, fb2.getNumRows(), select);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorStringSelectSingleColumnSubRowsFB2() {\n+ FrameBlock fb2Slice = fb2.slice(1, fb2.getNumRows() - 1, 1, 1);\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb2Slice);\n+ int[] select = new int[fb2.getNumColumns() - 1];\n+ for(int i = 0; i < fb2.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb2, 1, fb2.getNumRows(), 2);\n+ 
compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorObjectSelectSingleColumnSubRowsFB2() {\n+ FrameBlock fb2Slice = fb2.slice(1, fb2.getNumRows() - 1, 1, 1);\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb2Slice);\n+ int[] select = new int[fb2.getNumColumns() - 1];\n+ for(int i = 0; i < fb2.getNumColumns() - 1; i++) {\n+ select[i] = i + 2;\n+ }\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb2, 1, fb2.getNumRows(), 2);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorColumnIdFB1() {\n+ FrameBlock fb1Slice = fb1.slice(0, fb1.getNumRows() - 1, 1, 1);\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb1Slice);\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb1, 2);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorColumnId() {\n+ FrameBlock fb2Slice = fb2.slice(0, fb2.getNumRows() - 1, 1, 1);\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb2Slice);\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb2, 2);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorColumnIdObjectFB1() {\n+ FrameBlock fb1Slice = fb1.slice(0, fb1.getNumRows() - 1, 1, 1);\n+ RowIterator<Object> a = IteratorFactory.getObjectRowIterator(fb1Slice);\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb1, 2);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorColumnObjectId() {\n+ FrameBlock fb2Slice = fb2.slice(0, fb2.getNumRows() - 1, 1, 1);\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb2Slice);\n+ RowIterator<String> b = IteratorFactory.getStringRowIterator(fb2, 2);\n+ compareIterators(a, b);\n+ }\n+\n+ @Test\n+ public void iteratorWithSchema() {\n+ RowIterator<String> a = IteratorFactory.getStringRowIterator(fb2);\n+ RowIterator<Object> b = IteratorFactory.getObjectRowIterator(fb2, //\n+ UtilFunctions.nCopies(fb2.getNumColumns(), ValueType.STRING));\n+ compareIterators(a, b);\n+ }\n+\n+\n+ @Test(expected= DMLRuntimeException.class)\n+ public void invalidRange1(){\n+ IteratorFactory.getStringRowIterator(fb2, -1, 1);\n+ }\n+\n+ @Test(expected= DMLRuntimeException.class)\n+ public void invalidRange2(){\n+ IteratorFactory.getStringRowIterator(fb2, 132415, 132416);\n+ }\n+\n+ @Test(expected= DMLRuntimeException.class)\n+ public void invalidRange3(){\n+ IteratorFactory.getStringRowIterator(fb2, 13, 4);\n+ }\n+\n+ @Test(expected= DMLRuntimeException.class)\n+ public void remove(){\n+ RowIterator<?> a =IteratorFactory.getStringRowIterator(fb2, 0, 4);\n+ a.remove();\n+ }\n+\n+\n+ private static void compareIterators(RowIterator<?> a, RowIterator<?> b) {\n+ while(a.hasNext()) {\n+ assertTrue(b.hasNext());\n+ assertEquals(Arrays.toString(a.next()), Arrays.toString(b.next()));\n+ }\n+ }\n+}\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] 100% FrameIterator Tests
This commit adds full iterator tests, and found 1 null pointer exception. |
49,700 | 27.01.2023 12:57:41 | -3,600 | c4636bc59bfe94c0d9cf1f1c0078ba5616216d29 | Edit Extrapolation of U-Net Input
The updated version includes both for-loop and parfor-loop versions. The update also includes a test.
Closes | [
{
"change_type": "MODIFY",
"old_path": "scripts/nn/examples/u-net.dml",
"new_path": "scripts/nn/examples/u-net.dml",
"diff": "@@ -47,25 +47,129 @@ source(\"scripts/utils/image_utils.dml\") as img_utils\n* - C: Number of channels of X\n* - Hin: Height of each element of X\n* - Win: Width of each element of X\n+* - useParfor: Use parfor loop if true\n*\n* Outputs:\n* - X_extrapolated: X padded with extrapolated data\n* - input_HW: Height and Width of X_extrapolated\n*/\n-extrapolate = function(matrix[double] X, int N, int C, int Hin, int Win) return (matrix[double] X_extrapolated, int input_HW){\n+extrapolate = function(matrix[double] X, int N, int C, int Hin, int Win, boolean useParfor = TRUE)\n+ return (matrix[double] X_extrapolated, int input_HW){\ninput_HW = Hin + 184 # Assuming filter HW 3 and conv stride 1\npad_size = 92 # 184 / 2\n+ channel_width = input_HW*input_HW\n- X_extrapolated = matrix(0, rows=N, cols=C*input_HW*input_HW)\n+ if ( useParfor ) {\n+ X_extrapolated = extrapolate_images_parfor(X, N, C, Hin, Win, pad_size, channel_width)\n+ }\n+ else {\n+ X_extrapolated = extrapolate_images_for(X, N, C, Hin, Win, pad_size, channel_width)\n+ }\n+}\n+\n+/*\n+* Pad input features X with extrapolated data by mirroring.\n+* Only the height and width are padded, no extra channels are added.\n+* Dimensions changed from (N,C*Hin*Win) to (N,C*(Hin+184)*(Win+184)).\n+*\n+* For loop is used when looping over rows.\n+*\n+* Inputs:\n+* - X: Features to pad\n+* - N: Number of input elements of X\n+* - C: Number of channels of X\n+* - Hin: Height of each element of X\n+* - Win: Width of each element of X\n+* - pad_size: Pad size for each side of the image\n+* - channel_width: Width of a single channel\n+*\n+* Outputs:\n+* - X_extrapolated: X padded with extrapolated data\n+*/\n+extrapolate_images_for = function(matrix[double] X, int N, int C, int Hin, int Win, int pad_size, int channel_width)\n+ return (matrix[double] X_extrapolated){\n+ X_extrapolated = matrix(0, rows=N, cols=C*channel_width)\n+ for ( row in 1:N ){\n+ img = X[row,]\n+ X_extrapolated[row,] = extrapolate_image(img, C, Hin, Win, pad_size, channel_width)\n+ }\n+}\n+/*\n+* Pad input features X with extrapolated data by mirroring.\n+* Only the height and width are padded, no extra channels are added.\n+* Dimensions changed from (N,C*Hin*Win) to (N,C*(Hin+184)*(Win+184)).\n+*\n+* Parfor loop is used when looping over rows.\n+*\n+* Inputs:\n+* - X: Features to pad\n+* - N: Number of input elements of X\n+* - C: Number of channels of X\n+* - Hin: Height of each element of X\n+* - Win: Width of each element of X\n+* - pad_size: Pad size for each side of the image\n+* - channel_width: Width of a single channel\n+*\n+* Outputs:\n+* - X_extrapolated: X padded with extrapolated data\n+*/\n+extrapolate_images_parfor = function(matrix[double] X, int N, int C, int Hin, int Win, int pad_size, int channel_width)\n+ return (matrix[double] X_extrapolated){\n+ X_extrapolated = matrix(0, rows=N, cols=C*channel_width)\n+ parfor ( row in 1:N, check=0 ){\n+ img = X[row,]\n+ X_extrapolated[row,] = extrapolate_image(img, C, Hin, Win, pad_size, channel_width)\n+ }\n+}\n+\n+/*\n+* Pad input image img with extrapolated data by mirroring.\n+* Only the height and width are padded, no extra channels are added.\n+* Dimensions changed from (1,C*Hin*Win) to (1,C*(Hin+184)*(Win+184)).\n+*\n+*\n+* Inputs:\n+* - img: Image of dimension (1,C*Hin*Win) to pad\n+* - C: Number of channels of image\n+* - Hin: Height of image\n+* - Win: Width of image\n+* - pad_size: Pad size for each side of the image\n+* - channel_width: Width of a single channel\n+*\n+* Outputs:\n+* - img_extrapolated: 
image padded with extrapolated data\n+*/\n+extrapolate_image = function(matrix[double] img, int C, int Hin, int Win, int pad_size, int channel_width)\n+ return (matrix[double] img_extrapolated){\n+ img_extrapolated = matrix(0, rows=1, cols=C*channel_width)\nfor ( i in 1:C ){\nstart_channel = ((i-1) * Hin * Win)+1\nend_channel = i * Hin * Win\n- original_channel = X[,start_channel:end_channel]\n- # Iterate through the N rows of X each representing a single channel\n- for ( row in 1:N ){\n- img = matrix(original_channel[row], rows=Hin, cols=Win)\n+ channel_slice = matrix(img[1,start_channel:end_channel], rows=Hin, cols=Win)\n+ start_col = ((i-1)*channel_width)+1\n+ end_col = i*channel_width\n+ img_extrapolated[1,start_col:end_col] = extrapolate_channel(channel_slice, Hin, Win, pad_size, channel_width)\n+ }\n+}\n+/*\n+* Pad single channel of input image img with extrapolated data by mirroring.\n+* Dimensions changed from (Hin,Win) to (1,(Hin+184)*(Win+184)).\n+*\n+*\n+* Inputs:\n+* - img: Image of dimension (Hin,Win) to pad\n+* - Hin: Height of image\n+* - Win: Width of image\n+* - pad_size: Pad size for each side of the image\n+* - channel_width: Width of a single channel\n+*\n+* Outputs:\n+* - channel_extrapolated: channel of image padded with extrapolated data\n+*/\n+extrapolate_channel = function(matrix[double] img, int Hin, int Win, int pad_size, int channel_width)\n+ return (matrix[double] channel_extrapolated){\npad_left = t(rev(t(img[,1:pad_size])))\npad_right = t(rev(t(img[,(Win-(pad_size-1)):Win])))\npad_top = rev(img[1:(pad_size),])\n@@ -80,13 +184,7 @@ extrapolate = function(matrix[double] X, int N, int C, int Hin, int Win) return\npad_center_full = rbind(pad_top, img, pad_bottom)\nmodified_channel = cbind(pad_left_full, pad_center_full, pad_right_full)\n-\n- flat_width = input_HW*input_HW\n- start_col = ((i-1)*flat_width)+1\n- end_col = i*flat_width\n- X_extrapolated[row,start_col:end_col] = matrix(modified_channel, rows=1, cols=flat_width)\n- }\n- }\n+ channel_extrapolated = matrix(modified_channel, rows=1, cols=channel_width)\n}\n/*\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/java/org/apache/sysds/test/functions/builtin/part2/BuiltinUNetExtrapolateTest.java",
"diff": "+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one\n+ * or more contributor license agreements. See the NOTICE file\n+ * distributed with this work for additional information\n+ * regarding copyright ownership. The ASF licenses this file\n+ * to you under the Apache License, Version 2.0 (the\n+ * \"License\"); you may not use this file except in compliance\n+ * with the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.apache.sysds.test.functions.builtin.part2;\n+\n+import org.apache.sysds.common.Types;\n+import org.apache.sysds.runtime.meta.MatrixCharacteristics;\n+import org.apache.sysds.test.AutomatedTestBase;\n+import org.apache.sysds.test.TestConfiguration;\n+import org.apache.sysds.test.TestUtils;\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.List;\n+\n+public class BuiltinUNetExtrapolateTest extends AutomatedTestBase {\n+ private final static String TEST_DIR = \"functions/builtin/\";\n+ private final static String TEST_NAME = \"BuiltinUNetExtrapolateTest\";\n+ private final static String TEST_CLASS_DIR = TEST_DIR + BuiltinUNetExtrapolateTest.class.getSimpleName() + \"/\";\n+\n+ @Override\n+ public void setUp() {\n+ TestUtils.clearAssertionInformation();\n+ addTestConfiguration(TEST_NAME, new TestConfiguration(TEST_CLASS_DIR, TEST_NAME, new String[]{\"x_out\"}));\n+ }\n+\n+ @Test\n+ public void extrapolateMultiDimMultiChannel(){\n+ int hin = 100;\n+ int win = 100;\n+ int channels = 3;\n+ int rows = 10;\n+ runGenericTest(rows, hin, win, channels);\n+ }\n+\n+ @Test\n+ public void extrapolateSingleChannel(){\n+ int hin = 100;\n+ int win = 100;\n+ int channels = 1;\n+ int rows = 10;\n+ runGenericTest(rows, hin, win, channels);\n+ }\n+\n+ @Test\n+ public void extrapolateSingleRow(){\n+ int hin = 100;\n+ int win = 100;\n+ int channels = 3;\n+ int rows = 1;\n+ runGenericTest(rows, hin, win, channels);\n+ }\n+\n+ private void runGenericTest(int rows, int hin, int win, int channels){\n+ int cols = hin*win*channels;\n+ double[][] input = getRandomMatrix(rows, cols,1,10,0.9,3);\n+ int colsExpected = (hin+184)*(win+184)*channels; //padded height x padded width x number of channels\n+\n+ getAndLoadTestConfiguration(TEST_NAME);\n+ setExecMode(Types.ExecMode.SINGLE_NODE);\n+ String HOME = SCRIPT_DIR + TEST_DIR;\n+ fullDMLScriptName = HOME + TEST_NAME + \".dml\";\n+ String inputName = \"features\";\n+ String outputName = \"x_out\";\n+ String rowsName = \"rows\";\n+ String hinName = \"hin\";\n+ String winName = \"win\";\n+ String channelName = \"channels\";\n+ String useParforName = \"useParfor\";\n+ ArrayList<String> programArgsBase = new ArrayList<>(Arrays.asList(\n+ \"-nvargs\",\n+ inputName + \"=\" + input(inputName),\n+ rowsName + \"=\" + rows,\n+ hinName + \"=\" + hin,\n+ winName + \"=\" + win,\n+ channelName + \"=\" + channels\n+ ));\n+ ArrayList<String> programArgsParfor = new ArrayList<>(programArgsBase);\n+ programArgsParfor.addAll(List.of(outputName + \"=\" + output(outputName), useParforName + \"=\" + \"TRUE\"));\n+ ArrayList<String> programArgsFor = new 
ArrayList<>(programArgsBase);\n+ programArgsFor.addAll(List.of(outputName + \"=\" + expected(outputName), useParforName + \"=\" + \"FALSE\"));\n+ programArgs = programArgsParfor.toArray(new String[8]);\n+ writeInputMatrixWithMTD(inputName,input,false);\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+\n+ MatrixCharacteristics mc = readDMLMetaDataFile(outputName);\n+ Assert.assertEquals(\n+ \"Number of rows should be equal to expected number of rows\",\n+ rows, mc.getRows());\n+ Assert.assertEquals(\n+ \"Number of cols should be equal to expected number of cols\",\n+ colsExpected, mc.getCols());\n+\n+ programArgs = programArgsFor.toArray(new String[8]);\n+ runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);\n+\n+ compareResults(1e-9,\"parfor\", \"for\");\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "src/test/scripts/functions/builtin/BuiltinUNetExtrapolateTest.dml",
"diff": "+#-------------------------------------------------------------\n+#\n+# Licensed to the Apache Software Foundation (ASF) under one\n+# or more contributor license agreements. See the NOTICE file\n+# distributed with this work for additional information\n+# regarding copyright ownership. The ASF licenses this file\n+# to you under the Apache License, Version 2.0 (the\n+# \"License\"); you may not use this file except in compliance\n+# with the License. You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on an\n+# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+# KIND, either express or implied. See the License for the\n+# specific language governing permissions and limitations\n+# under the License.\n+#\n+#-------------------------------------------------------------\n+\n+source(\"scripts/nn/examples/u-net.dml\") as UNet\n+\n+X = read($features)\n+numRows = $rows\n+numChannels = $channels\n+hin = $hin\n+win = $win\n+\n+X_extrapolated = UNet::extrapolate(X, numRows, numChannels, hin, win, $useParfor)\n+write(X_extrapolated, $x_out)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3018] Edit Extrapolation of U-Net Input
The updated version includes both for-loop and parfor-loop versions. The update also includes a test.
Closes #1773. |
49,706 | 27.01.2023 16:09:10 | -3,600 | 5ab14a408aefecb2fef0ce71610b87131a6311c7 | [MINOR] Fix missed error in last Compressed Transform commit | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/transform/encode/MultiColumnEncoder.java",
"new_path": "src/main/java/org/apache/sysds/runtime/transform/encode/MultiColumnEncoder.java",
"diff": "@@ -865,6 +865,15 @@ public class MultiColumnEncoder implements Encoder {\nreturn sum;\n}\n+ public int getNumExtraCols(IndexRange ixRange) {\n+ List<ColumnEncoderDummycode> dc = getColumnEncoders(ColumnEncoderDummycode.class).stream()\n+ .filter(dce -> ixRange.inColRange(dce._colID)).collect(Collectors.toList());\n+ if(dc.isEmpty()) {\n+ return 0;\n+ }\n+ return dc.stream().map(ColumnEncoderDummycode::getDomainSize).mapToInt(i -> i).sum() - dc.size();\n+ }\n+\npublic <T extends ColumnEncoder> boolean containsEncoderForID(int colID, Class<T> type) {\nreturn getColumnEncoders(type).stream().anyMatch(encoder -> encoder.getColID() == colID);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/frame/transform/transformCompressed.java",
"new_path": "src/test/java/org/apache/sysds/test/component/frame/transform/transformCompressed.java",
"diff": "@@ -98,10 +98,10 @@ public class transformCompressed {\nFrameBlock outNormalMD = encoderNormal.getMetaData(null);\n- LOG.error(outNormal);\n- LOG.error(outCompressed);\n- LOG.error(outCompressedMD);\n- LOG.error(outNormalMD);\n+ // LOG.error(outNormal);\n+ // LOG.error(outCompressed);\n+ // LOG.error(outCompressedMD);\n+ // LOG.error(outNormalMD);\nTestUtils.compareMatrices(outNormal, outCompressed, 0, \"Not Equal after apply\");\nTestUtils.compareFrames(outNormalMD, outCompressedMD, true);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Fix missed error in last Compressed Transform commit |
49,702 | 01.02.2023 20:17:47 | -3,600 | d0a6accaba55df906994899a8c7bb794f04ca8e9 | Python 3.9 support
This commit fix a parser conflict for python 3.9, where the generator
package was containing a python file that conflicts with 3.9+ python,
called parser.py. | [
{
"change_type": "MODIFY",
"old_path": "src/main/python/generator/__init__.py",
"new_path": "src/main/python/generator/__init__.py",
"diff": "@@ -24,7 +24,7 @@ from generator.generator import (\nPythonAPIFunctionGenerator,\nPythonAPIDocumentationGenerator\n)\n-from generator.parser import FunctionParser\n+from generator.dml_parser import FunctionParser\n__all__ = [\nPythonAPIFileGenerator,\n"
},
{
"change_type": "RENAME",
"old_path": "src/main/python/generator/parser.py",
"new_path": "src/main/python/generator/dml_parser.py",
"diff": ""
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/generator/generator.py",
"new_path": "src/main/python/generator/generator.py",
"diff": "@@ -24,7 +24,7 @@ import os\nimport re\nimport sys\nimport traceback\n-from parser import FunctionParser\n+from dml_parser import FunctionParser\nfrom typing import List, Tuple\n@@ -74,7 +74,7 @@ class PythonAPIFileGenerator(object):\nwith open(target_file, \"w\") as new_script:\nnew_script.write(self.licence)\nnew_script.write(self.generated_by)\n- new_script.write((self.generated_from + dml_file + \"\\n\").replace(\n+ new_script.write((self.generated_from + dml_file.replace(\"\\\\\", \"/\") + \"\\n\").replace(\n\"../\", \"\").replace(\"src/main/python/generator/\", \"\"))\nnew_script.write(self.imports)\nnew_script.write(file_content)\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3494] Python 3.9 support
This commit fix a parser conflict for python 3.9, where the generator
package was containing a python file that conflicts with 3.9+ python,
called parser.py. |
49,702 | 01.02.2023 20:18:38 | -3,600 | bfc26e747d316df9535527ce49cf315bd8eaef83 | Python windows install
This commit adds minor steps to the install guide for Python to include
a python virtual environment step, and logic for powershell install and
setup of the Python build from source.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/python/.gitignore",
"new_path": "src/main/python/.gitignore",
"diff": "@@ -22,3 +22,5 @@ tests/list/tmp\ntests/algorithms/readwrite/\ntests/examples/tutorials/model\ntests/lineage/temp\n+\n+python_venv/\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/create_python_dist.py",
"new_path": "src/main/python/create_python_dist.py",
"diff": "import subprocess\nf = open(\"generator.log\",\"w\")\n-subprocess.run(\"python3 generator/generator.py\",shell=True, check=True, stdout =f, stderr=f)\n-subprocess.run(\"python3 pre_setup.py\",shell=True, check=True)\n-subprocess.run(\"python3 setup.py sdist bdist_wheel\",shell=True, check=True)\n+subprocess.run(\"python generator/generator.py\",shell=True, check=True, stdout =f, stderr=f)\n+subprocess.run(\"python pre_setup.py\",shell=True, check=True)\n+subprocess.run(\"python setup.py sdist bdist_wheel\",shell=True, check=True)\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/docs/source/getting_started/install.rst",
"new_path": "src/main/python/docs/source/getting_started/install.rst",
"diff": "@@ -50,7 +50,7 @@ please make sure this is the case.\nSource\n------\n-To Install from source involves three steps.\n+To Install from source involves multiple steps.\nInstall Dependencies\n@@ -60,6 +60,21 @@ Install Dependencies\nOnce installed you please verify your version numbers.\nAdditionally you have to install a few python packages.\n+We sugest to create a new virtual environment using virtualenv.\n+All commands are run inside src/main/python/.\n+We asume that in the following scripts python==python3\n+\n+ python -m venv python_venv\n+\n+Now, we activate the environment.\n+\n+ source python_venv/bin/activate\n+\n+In case of using Linux. For Windows PowerShell use:\n+\n+ Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process -Force\n+ python_venv/Scripts/Activate.ps1\n+\nNote depending on your installation you might need to use pip3 instead of pip::\npip install numpy py4j wheel requests\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/python/systemds/context/systemds_context.py",
"new_path": "src/main/python/systemds/context/systemds_context.py",
"diff": "@@ -175,7 +175,7 @@ class SystemDSContext(object):\n[os.path.join(lib_cp, '*'), systemds_cp])\nelse:\nraise ValueError(\n- \"Invalid setup at SYSTEMDS_ROOT env variable path\")\n+ \"Invalid setup at SYSTEMDS_ROOT env variable path \" + lib_cp)\nelse:\nlib1 = os.path.join(root, \"lib\", \"*\")\nlib2 = os.path.join(root, \"lib\")\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3493] Python windows install
This commit adds minor steps to the install guide for Python to include
a python virtual environment step, and logic for powershell install and
setup of the Python build from source.
Closes #1779 |
49,700 | 02.02.2023 12:32:26 | -3,600 | 6b25434106e4793a15bdc16097fec51e11f7ab44 | [MINOR] Edit U-Net To Use One-Hot Encoded Labels
Closes | [
{
"change_type": "MODIFY",
"old_path": "scripts/nn/examples/u-net.dml",
"new_path": "scripts/nn/examples/u-net.dml",
"diff": "@@ -31,7 +31,7 @@ source(\"scripts/nn/layers/dropout.dml\") as dropout\nsource(\"scripts/nn/layers/l2_reg.dml\") as l2_reg\nsource(\"scripts/nn/layers/max_pool2d_builtin.dml\") as max_pool2d\nsource(\"scripts/nn/layers/relu.dml\") as relu\n-source(\"scripts/nn/layers/softmax.dml\") as softmax\n+source(\"scripts/nn/layers/softmax2d.dml\") as softmax2d\nsource(\"scripts/nn/optim/sgd_momentum.dml\") as sgd_momentum\nsource(\"scripts/nn/layers/dropout.dml\") as dropout\nsource(\"scripts/utils/image_utils.dml\") as img_utils\n@@ -210,6 +210,8 @@ extrapolate_channel = function(matrix[double] img, int Hin, int Win, int pad_siz\n* - scheme: Parameter server training scheme\n* - learning_rate: The learning rate for the SGD with momentum\n* - seed: Seed for the initialization of the convolution weights. Default is -1 meaning that the seeds are random.\n+ * - M: Size of the segmentation map (C*(Hin-pad)*(Win-pad))\n+ * - K: Number of output categories (for each element of segmentation map)\n* - he: Homomorphic encryption activated (boolean)\n* - F1: Number of filters of the top layer of the U-Net model. Default is 64.\n*\n@@ -220,10 +222,9 @@ train_paramserv = function(matrix[double] X, matrix[double] y,\nmatrix[double] X_val, matrix[double] y_val,\nint C, int Hin, int Win, int epochs, int workers,\nstring utype, string freq, int batch_size, string scheme, double learning_rate,\n- int seed = -1, boolean he = FALSE, int F1 = 64)\n+ int seed = -1, int M, int K, boolean he = FALSE, int F1 = 64)\nreturn (list[unknown] model_trained) {\nN = nrow(X) # Number of inputs\n- K = ncol(y) # Number of target classes\n# Define model network constants\nHf = 3 # convolution filter height\n@@ -285,7 +286,7 @@ train_paramserv = function(matrix[double] X, matrix[double] y,\n[W21, b21] = conv2d::init(F1, F2, Hf, Wf, seed = as.integer(as.scalar(lseed[21])))\n[W22, b22] = conv2d::init(F1, F1, Hf, Wf, seed = as.integer(as.scalar(lseed[22])))\n# Segmentation map\n- [W23, b23] = conv2d::init(C, F1, 1, 1, seed = as.integer(as.scalar(lseed[23])))\n+ [W23, b23] = conv2d::init(K*C, F1, 1, 1, seed = as.integer(as.scalar(lseed[23])))\n# Initialize SGD with momentum\nvW1 = sgd_momentum::init(W1); vb1 = sgd_momentum::init(b1)\n@@ -328,7 +329,7 @@ train_paramserv = function(matrix[double] X, matrix[double] y,\n# Create the hyper parameter list\nparams = list(\n- learning_rate=learning_rate, mu=mu, decay=decay, C=C, Hin=Hin, Win=Win, Hf=Hf, Wf=Wf,\n+ learning_rate=learning_rate, mu=mu, decay=decay, M=M, K=K, C=C, Hin=Hin, Win=Win, Hf=Hf, Wf=Wf,\nconv_stride=conv_stride, pool_stride=pool_stride, pool_HWf=pool_HWf, conv_t_HWf=conv_t_HWf, conv_t_stride=conv_t_stride,\npad=pad, lambda=lambda, F1=F1, F2=F2, F3=F3, F4=F4, F5=F5, dropProb=dropProb, dropSeed=dropSeed)\n@@ -351,13 +352,14 @@ train_paramserv = function(matrix[double] X, matrix[double] y,\n* - Win: Input width\n* - batch_size: Batch size\n* - model: List of weights of the model (23 weights, 23 biases)\n-* - K: Size of the segmentation map (C*(Hin-pad)*(Win-pad))\n+* - M: Size of the segmentation map (C*(Hin-pad)*(Win-pad))\n+* - K: Number of output categories (for each element of segmentation map)\n* - F1: Number of filters of the top layer of the U-Net model. 
Default is 64.\n*\n* Output:\n* - probs: Segmentation map probabilities generated by the forward pass of the U-Net model\n*/\n-predict = function(matrix[double] X, int C, int Hin, int Win, int batch_size, list[unknown] model, int K, int F1 = 64)\n+predict = function(matrix[double] X, int C, int Hin, int Win, int batch_size, list[unknown] model, int M, int K, int F1 = 64)\nreturn (matrix[double] probs) {\nW1 = as.matrix(model[1])\nW2 = as.matrix(model[2])\n@@ -423,7 +425,7 @@ predict = function(matrix[double] X, int C, int Hin, int Win, int batch_size, li\ndropSeed = -1\n# Compute predictions over mini-batches\n- probs = matrix(0, rows=N, cols=K)\n+ probs = matrix(0, rows=N, cols=K*M)\niters = ceil(N / batch_size)\nfor(i in 1:iters, check=0) {\n# Get next batch\n@@ -490,7 +492,7 @@ predict = function(matrix[double] X, int C, int Hin, int Win, int batch_size, li\n[outc23, Houtc23, Woutc23] = conv2d::forward(outr18, W23, b23, F1, Houtc22, Woutc22, 1, 1, conv_stride, conv_stride, pad, pad)\n# Store predictions\n- probs[beg:end,] = softmax::forward(outc23)\n+ probs[beg:end,] = softmax2d::forward(outc23, K)\n}\n}\n@@ -515,6 +517,8 @@ predict = function(matrix[double] X, int C, int Hin, int Win, int batch_size, li\n* - (scalar[integer]) F1, F2, F3, F4, F5: Number of filters of the convolutions in the five layers\n* - (scalar[double]) dropProb: Dropout probability\n* - (scalar[integer]) dropSeed: Dropout seed\n+* - (scalar[integer]) M: Size of the segmentation map (C*(Hin-pad)*(Win-pad))\n+* - (scalar[integer]) K: Number of output categories (for each element of segmentation map)\n* - features: Features of size C*Hin*Win. The features need to be padded with mirrored data.\n* The input feature size should result in an output size of the U-Net equal to the label size.\n* See extrapolate function for how to pad the features by extrapolating.\n@@ -546,6 +550,8 @@ gradients = function(list[unknown] model,\nF5 = as.integer(as.scalar(hyperparams[\"F5\"]))\ndropProb = as.double(as.scalar(hyperparams[\"dropProb\"]))\ndropSeed = as.integer(as.scalar(hyperparams[\"dropSeed\"]))\n+ M = as.integer(as.scalar(hyperparams[\"M\"]))\n+ K = as.integer(as.scalar(hyperparams[\"K\"]))\nW1 = as.matrix(model[1])\nW2 = as.matrix(model[2])\nW3 = as.matrix(model[3])\n@@ -650,7 +656,7 @@ gradients = function(list[unknown] model,\n# This last conv2d needs to create the segmentation map (1x1 filter):\n[outc23, Houtc23, Woutc23] = conv2d::forward(outr18, W23, b23, F1, Houtc22, Woutc22, 1, 1, conv_stride, conv_stride, pad, pad)\n- probs = softmax::forward(outc23)\n+ probs = softmax2d::forward(outc23, K)\n# Compute loss & accuracy for training data\nloss = cross_entropy_loss::forward(probs, labels)\n@@ -661,7 +667,7 @@ gradients = function(list[unknown] model,\n## loss\ndprobs = cross_entropy_loss::backward(probs, labels)\n- doutc23 = softmax::backward(dprobs, outc23)\n+ doutc23 = softmax2d::backward(dprobs, outc23, K)\n# Up-Convolution\n# conv2d parameters: (previous_gradient, output height, output width, input to original layer, layer weight, layer bias, layer input channel number, input height, input width, filter height, filter width, stride height, stride width, pad height, pad width)\n@@ -1041,7 +1047,8 @@ validate = function(matrix[double] val_features, matrix[double] val_labels,\nC = as.integer(as.scalar(hyperparams[\"C\"]))\nHin = as.integer(as.scalar(hyperparams[\"Hin\"]))\nWin = as.integer(as.scalar(hyperparams[\"Win\"]))\n+ M = as.integer(as.scalar(hyperparams[\"M\"]))\nK = as.integer(as.scalar(hyperparams[\"K\"]))\n- 
predictions = predict(val_features, C, Hin, Win, batch_size, model, K, F1)\n+ predictions = predict(val_features, C, Hin, Win, batch_size, model, M, K, F1)\n[loss, accuracy] = eval(predictions, val_labels)\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/EncryptedFederatedParamservTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/EncryptedFederatedParamservTest.java",
"diff": "@@ -147,7 +147,7 @@ public class EncryptedFederatedParamservTest extends AutomatedTestBase {\nint C = 1, Hin = 28, Win = 28;\nint numLabels = 10;\nif (Objects.equals(_networkType, \"UNet\")){\n- C = 3; Hin = 340; Win = 340;\n+ C = 3; Hin = 196; Win = 196;\nnumLabels = C * Hin * Win;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/FederatedParamservTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/federated/paramserv/FederatedParamservTest.java",
"diff": "@@ -141,7 +141,7 @@ public class FederatedParamservTest extends AutomatedTestBase {\nint C = 1, Hin = 28, Win = 28;\nint numLabels = 10;\nif (_networkType.equals(\"UNet\")){\n- C = 3; Hin = 340; Win = 340;\n+ C = 3; Hin = 196; Win = 196;\nnumLabels = C * Hin * Win;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/paramserv/EncryptedFederatedParamservTest.dml",
"new_path": "src/test/scripts/functions/federated/paramserv/EncryptedFederatedParamservTest.dml",
"diff": "@@ -67,12 +67,15 @@ else if($network_type == \"UNet\") {\nx_hw = $hin + 184 # Padded input height and width\nx_val = matrix(0, rows=numRows, cols=$channels*x_hw*x_hw)\n- y_val = matrix(0, rows=numRows, cols=numFeatures)\n+ y_val = matrix(0, rows=numRows, cols=numFeatures*2)\n+ K = 2\n+ M = numFeatures\n+ labels_one_hot = cbind((labels - 1) * -1, labels) # (N,KCHW) encoded labels\n- model = UNet::train_paramserv(features, labels, x_val, y_val, $channels, x_hw, x_hw, $epochs, 0, $utype, $freq, $batch_size, $scheme, $eta, $seed, TRUE, F1)\n+ model = UNet::train_paramserv(features, labels_one_hot, x_val, y_val, $channels, x_hw, x_hw, $epochs, 0, $utype, $freq, $batch_size, $scheme, $eta, $seed, M, K, TRUE, F1)\nprint(\"Test results:\")\n- hyperparams = list(learning_rate=$eta, C=$channels, Hin=x_hw, Win=x_hw, K=numFeatures)\n- [loss_test, accuracy_test] = UNet::validate(matrix(0, rows=numRows, cols=$channels*x_hw*x_hw), matrix(0, rows=numRows, cols=numFeatures), model, hyperparams, F1, $batch_size)\n+ hyperparams = list(learning_rate=$eta, C=$channels, Hin=x_hw, Win=x_hw, M=numFeatures, K=K)\n+ [loss_test, accuracy_test] = UNet::validate(matrix(0, rows=numRows, cols=$channels*x_hw*x_hw), matrix(0, rows=numRows, cols=numFeatures*2), model, hyperparams, F1, $batch_size)\nprint(\"[+] test loss: \" + loss_test + \", test accuracy: \" + accuracy_test + \"\\n\")\n}\nelse {\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/scripts/functions/federated/paramserv/FederatedParamservTest.dml",
"new_path": "src/test/scripts/functions/federated/paramserv/FederatedParamservTest.dml",
"diff": "@@ -47,12 +47,15 @@ else if($network_type == \"UNet\"){\nx_hw = $hin + 184 # Padded input height and width\nx_val = matrix(0, rows=numRows, cols=$channels*x_hw*x_hw)\n- y_val = matrix(0, rows=numRows, cols=numFeatures)\n+ y_val = matrix(0, rows=numRows, cols=numFeatures*2)\n+ K = 2\n+ M = numFeatures\n+ labels_one_hot = cbind((labels - 1) * -1, labels) # (N,KCHW) encoded labels\n- model = UNet::train_paramserv(features, labels, x_val, y_val, $channels, x_hw, x_hw, $epochs, 2, $utype, $freq, $batch_size, $scheme, $eta, $seed, FALSE, F1)\n+ model = UNet::train_paramserv(features, labels_one_hot, x_val, y_val, $channels, x_hw, x_hw, $epochs, 0, $utype, $freq, $batch_size, $scheme, $eta, $seed, M, K, FALSE, F1)\nprint(\"Test results:\")\n- hyperparams = list(learning_rate=$eta, C=$channels, Hin=x_hw, Win=x_hw, K=numFeatures)\n- [loss_test, accuracy_test] = UNet::validate(matrix(0, rows=numRows, cols=$channels*x_hw*x_hw), matrix(0, rows=numRows, cols=numFeatures), model, hyperparams, F1, numRows)\n+ hyperparams = list(learning_rate=$eta, C=$channels, Hin=x_hw, Win=x_hw, M=numFeatures, K=K)\n+ [loss_test, accuracy_test] = UNet::validate(matrix(0, rows=numRows, cols=$channels*x_hw*x_hw), matrix(0, rows=numRows, cols=numFeatures*2), model, hyperparams, F1, $batch_size)\nprint(\"[+] test loss: \" + loss_test + \", test accuracy: \" + accuracy_test + \"\\n\")\n}\nelse {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Edit U-Net To Use One-Hot Encoded Labels
Closes #1780. |
49,706 | 05.02.2023 17:21:23 | -3,600 | 58746bc777ac922cbabc48d484cb86164581066b | Parallel Compressed Encode
This commit updates the compressed encode to encode each encoding
in parallel, while also updating the recode map construction
to an, faster version via putIfAbsent on hashmaps.
For Critero 1Mil:
Parallel reduced from 8.688 - 6 sec
PutIfAbsent reduced from 6 - 4.5 sec.
Closes | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/IdentityDictionarySlice.java",
"new_path": "src/main/java/org/apache/sysds/runtime/compress/colgroup/dictionary/IdentityDictionarySlice.java",
"diff": "@@ -120,7 +120,7 @@ public class IdentityDictionarySlice extends IdentityDictionary {\n@Override\npublic int getNumberOfValues(int ncol) {\n- return ncol;\n+ return nRowCol;\n}\n@Override\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/frame/data/columns/Array.java",
"new_path": "src/main/java/org/apache/sysds/runtime/frame/data/columns/Array.java",
"diff": "@@ -100,8 +100,11 @@ public abstract class Array<T> implements Writable {\nlong id = 0;\nfor(int i = 0; i < size(); i++) {\nT val = get(i);\n- if(val != null && !map.containsKey(val))\n- map.put(val, id++);\n+ if(val != null){\n+ Long v = map.putIfAbsent(val, id);\n+ if(v == null)\n+ id++;\n+ }\n}\nreturn map;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/transform/encode/CompressedEncode.java",
"new_path": "src/main/java/org/apache/sysds/runtime/transform/encode/CompressedEncode.java",
"diff": "@@ -22,6 +22,10 @@ package org.apache.sysds.runtime.transform.encode;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\n+import java.util.concurrent.Callable;\n+import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Future;\nimport org.apache.commons.lang.NotImplementedException;\nimport org.apache.commons.logging.Log;\n@@ -29,6 +33,7 @@ import org.apache.commons.logging.LogFactory;\nimport org.apache.sysds.common.Types.ValueType;\nimport org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.conf.DMLConfig;\n+import org.apache.sysds.runtime.DMLRuntimeException;\nimport org.apache.sysds.runtime.compress.CompressedMatrixBlock;\nimport org.apache.sysds.runtime.compress.colgroup.AColGroup;\nimport org.apache.sysds.runtime.compress.colgroup.ColGroupDDC;\n@@ -44,35 +49,69 @@ import org.apache.sysds.runtime.compress.colgroup.mapping.MapToFactory;\nimport org.apache.sysds.runtime.frame.data.FrameBlock;\nimport org.apache.sysds.runtime.frame.data.columns.Array;\nimport org.apache.sysds.runtime.matrix.data.MatrixBlock;\n+import org.apache.sysds.runtime.util.CommonThreadPool;\n+\npublic class CompressedEncode {\nprotected static final Log LOG = LogFactory.getLog(CompressedEncode.class.getName());\n+ /** The encoding scheme plan */\nprivate final MultiColumnEncoder enc;\n+ /** The Input FrameBlock */\nprivate final FrameBlock in;\n+ /** The thread count of the instruction */\n+ private final int k;\n- private CompressedEncode(MultiColumnEncoder enc, FrameBlock in) {\n+ private CompressedEncode(MultiColumnEncoder enc, FrameBlock in, int k) {\nthis.enc = enc;\nthis.in = in;\n+ this.k = k;\n}\n- public static MatrixBlock encode(MultiColumnEncoder enc, FrameBlock in) {\n- return new CompressedEncode(enc, in).apply();\n+ public static MatrixBlock encode(MultiColumnEncoder enc, FrameBlock in, int k) {\n+ return new CompressedEncode(enc, in, k).apply();\n}\nprivate MatrixBlock apply() {\n- List<ColumnEncoderComposite> encoders = enc.getColumnEncoders();\n+ final List<ColumnEncoderComposite> encoders = enc.getColumnEncoders();\n+ final List<AColGroup> groups = isParallel() ? 
multiThread(encoders) : singleThread(encoders);\n+ final int cols = shiftGroups(groups);\n+ final MatrixBlock mb = new CompressedMatrixBlock(in.getNumRows(), cols, -1, false, groups);\n+ mb.recomputeNonZeros();\n+ logging(mb);\n+ return mb;\n+ }\n- List<AColGroup> groups = new ArrayList<>(encoders.size());\n+ private boolean isParallel() {\n+ return k > 1 && enc.getEncoders().size() > 1;\n+ }\n+ private List<AColGroup> singleThread(List<ColumnEncoderComposite> encoders) {\n+ List<AColGroup> groups = new ArrayList<>(encoders.size());\nfor(ColumnEncoderComposite c : encoders)\ngroups.add(encode(c));\n+ return groups;\n+ }\n- int cols = shiftGroups(groups);\n+ private List<AColGroup> multiThread(List<ColumnEncoderComposite> encoders) {\n- MatrixBlock mb = new CompressedMatrixBlock(in.getNumRows(), cols, -1, false, groups);\n- mb.recomputeNonZeros();\n- logging(mb);\n- return mb;\n+ final ExecutorService pool = CommonThreadPool.get(k);\n+ try {\n+ List<EncodeTask> tasks = new ArrayList<>(encoders.size());\n+\n+ for(ColumnEncoderComposite c : encoders)\n+ tasks.add(new EncodeTask(c));\n+\n+ List<AColGroup> groups = new ArrayList<>(encoders.size());\n+ for(Future<AColGroup> t : pool.invokeAll(tasks))\n+ groups.add(t.get());\n+\n+ pool.shutdown();\n+ return groups;\n+ }\n+ catch(InterruptedException | ExecutionException ex) {\n+ pool.shutdown();\n+ throw new DMLRuntimeException(\"Failed parallel compressed transform encode\", ex);\n+ }\n}\n/**\n@@ -108,7 +147,6 @@ public class CompressedEncode {\nHashMap<?, Long> map = a.getRecodeMap();\nint domain = map.size();\n- // int domain = c.getDomainSize();\nIColIndex colIndexes = ColIndexFactory.create(0, domain);\nADictionary d = new IdentityDictionary(colIndexes.size());\n@@ -161,7 +199,6 @@ public class CompressedEncode {\nreturn ColGroupUncompressed.create(colIndexes, col, false);\n}\nelse {\n-\ndouble[] vals = new double[map.size() + (a.containsNull() ? 1 : 0)];\nfor(int i = 0; i < a.size(); i++) {\nObject v = a.get(i);\n@@ -186,13 +223,25 @@ public class CompressedEncode {\nArray<?>.ArrayIterator it = a.getIterator();\nwhile(it.hasNext()) {\nObject v = it.next();\n- if(v != null) {\n+ if(v != null)\nm.set(it.getIndex(), map.get(v).intValue());\n}\n- }\nreturn m;\n}\n+ private class EncodeTask implements Callable<AColGroup> {\n+\n+ ColumnEncoderComposite c;\n+\n+ protected EncodeTask(ColumnEncoderComposite c) {\n+ this.c = c;\n+ }\n+\n+ public AColGroup call() throws Exception {\n+ return encode(c);\n+ }\n+ }\n+\nprivate void logging(MatrixBlock mb) {\nif(LOG.isDebugEnabled()) {\nLOG.debug(String.format(\"Uncompressed transform encode Dense size: %16d\", mb.estimateSizeDenseInMemory()));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/transform/encode/MultiColumnEncoder.java",
"new_path": "src/main/java/org/apache/sysds/runtime/transform/encode/MultiColumnEncoder.java",
"diff": "@@ -103,7 +103,7 @@ public class MultiColumnEncoder implements Encoder {\nderiveNumRowPartitions(in, k);\ntry {\nif(isCompressedTransformEncode(in, compressedOut))\n- return CompressedEncode.encode(this, (FrameBlock ) in);\n+ return CompressedEncode.encode(this, (FrameBlock ) in, k);\nelse if(k > 1 && !MULTI_THREADED_STAGES && !hasLegacyEncoder()) {\nMatrixBlock out = new MatrixBlock();\nDependencyThreadPool pool = new DependencyThreadPool(k);\n"
}
] | Java | Apache License 2.0 | apache/systemds | [SYSTEMDS-3495] Parallel Compressed Encode
This commit updates the compressed encode to encode each encoding
in parallel, while also updating the recode map construction
to an, faster version via putIfAbsent on hashmaps.
For Critero 1Mil:
- Parallel reduced from 8.688 - 6 sec
- PutIfAbsent reduced from 6 - 4.5 sec.
Closes #1781 |
49,738 | 06.02.2023 17:54:32 | -3,600 | 2a6bb746ebe0c0b6cd581a4e8c57bfb9e352448b | [MINOR] Various code cleanups (imports, annotations) | [
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/lops/MMRJ.java",
"new_path": "src/main/java/org/apache/sysds/lops/MMRJ.java",
"diff": "@@ -59,7 +59,7 @@ public class MMRJ extends Lop\n@Override\npublic String getInstructions(String input1, String input2, String output) {\n- boolean toCache = getOutputParameters().getLinCacheMarking();\n+ // FIXME boolean toCache = getOutputParameters().getLinCacheMarking();\nreturn InstructionUtils.concatOperands(\ngetExecType().name(),\n\"rmm\",\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/ExecutionContext.java",
"new_path": "src/main/java/org/apache/sysds/runtime/controlprogram/context/ExecutionContext.java",
"diff": "@@ -602,6 +602,7 @@ public class ExecutionContext {\nmo.release();\n}\n+ @SuppressWarnings(\"unused\")\npublic void setMatrixOutputAndLineage(String varName, Future<MatrixBlock> fmb, LineageItem li) {\nif (isAutoCreateVars() && !containsVariable(varName)) {\n//FIXME without adding this fmo object here to the symbol table\n"
},
{
"change_type": "MODIFY",
"old_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"new_path": "src/main/java/org/apache/sysds/runtime/lineage/LineageCacheConfig.java",
"diff": "@@ -21,7 +21,6 @@ package org.apache.sysds.runtime.lineage;\nimport org.apache.commons.lang3.ArrayUtils;\nimport org.apache.sysds.api.DMLScript;\n-import org.apache.sysds.common.Types;\nimport org.apache.sysds.conf.ConfigurationManager;\nimport org.apache.sysds.conf.DMLConfig;\nimport org.apache.sysds.hops.AggBinaryOp;\n@@ -35,11 +34,9 @@ import org.apache.sysds.runtime.instructions.cp.ListIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.cp.MatrixIndexingCPInstruction;\nimport org.apache.sysds.runtime.instructions.fed.ComputationFEDInstruction;\nimport org.apache.sysds.runtime.instructions.gpu.GPUInstruction;\n-import org.apache.sysds.runtime.instructions.spark.AggregateUnarySPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.ComputationSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.CpmmSPInstruction;\nimport org.apache.sysds.runtime.instructions.spark.MapmmSPInstruction;\n-import org.apache.sysds.runtime.instructions.spark.TsmmSPInstruction;\nimport java.util.Comparator;\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/component/frame/array/ColumnMetadataTests.java",
"new_path": "src/test/java/org/apache/sysds/test/component/frame/array/ColumnMetadataTests.java",
"diff": "@@ -111,6 +111,7 @@ public class ColumnMetadataTests {\n}\n@Test\n+ @SuppressWarnings(\"unlikely-arg-type\")\npublic void equalsObject() {\nassertTrue(d.equals((Object) new ColumnMetadata(d)));\nassertFalse(d.equals(1324));\n"
},
{
"change_type": "MODIFY",
"old_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"new_path": "src/test/java/org/apache/sysds/test/functions/async/LineageReuseSparkTest.java",
"diff": "@@ -35,7 +35,6 @@ package org.apache.sysds.test.functions.async;\nimport org.apache.sysds.test.TestUtils;\nimport org.apache.sysds.utils.Statistics;\nimport org.junit.Assert;\n- import org.junit.Ignore;\nimport org.junit.Test;\npublic class LineageReuseSparkTest extends AutomatedTestBase {\n"
}
] | Java | Apache License 2.0 | apache/systemds | [MINOR] Various code cleanups (imports, annotations) |
313,483 | 16.03.2017 17:55:31 | 14,400 | c436aaa972582fc027b6fef00f924b4ff814d538 | Fix typos in intro | [
{
"change_type": "MODIFY",
"old_path": "docs/source/intro.rst",
"new_path": "docs/source/intro.rst",
"diff": "@@ -29,13 +29,13 @@ Here's a quick overview of Pyro's features:\n- supports different serializers (serpent, json, marshal, msgpack, pickle, dill).\n- support for all Python data types that are serializable when using the 'pickle' or 'dill' serializers [1]_.\n- runs on Python 2.7, Python 3.x, IronPython, Pypy.\n-- works between different system architectures and operating systems\n-- able to communicate between different Python versions transparantly\n+- works between different system architectures and operating systems.\n+- able to communicate between different Python versions transparently.\n- can use IPv4, IPv6 and Unix domain sockets.\n- lightweight client library available for .NET and Java native code ('Pyrolite', provided separately).\n-- designed to be very easy to use and get out of your way as much as possible, but still provide a lot of flexibility when you do need it\n+- designed to be very easy to use and get out of your way as much as possible, but still provide a lot of flexibility when you do need it.\n- name server that keeps track of your object's actual locations so you can move them around transparently.\n-- yellow-pages type lookups possible, based on metadata tags on registrations in the name server\n+- yellow-pages type lookups possible, based on metadata tags on registrations in the name server.\n- support for automatic reconnection to servers in case of interruptions.\n- automatic proxy-ing of Pyro objects which means you can return references to remote objects just as if it were normal objects.\n- one-way invocations for enhanced performance.\n@@ -47,7 +47,7 @@ Here's a quick overview of Pyro's features:\n- http gateway available for clients wanting to use http+json (such as browser scripts).\n- stable network communication code that works reliably on many platforms.\n- possibility to use Pyro's own event loop, or integrate it into your own (or third party) event loop.\n-- three different possible instance modes for your remote objects (singleton, one per session, one per call)\n+- three different possible instance modes for your remote objects (singleton, one per session, one per call).\n- many simple examples included to show various features and techniques.\n- large amount of unit tests and high test coverage.\n- reliable and established: built upon more than 15 years of existing Pyro history, with ongoing support and development.\n@@ -238,7 +238,7 @@ Performance\n===========\nPyro4 is pretty fast. On a typical networked system you can expect:\n-- a few hundred new proxy connections per second to one sever\n+- a few hundred new proxy connections per second to one server\n- similarly, a few hundred initial remote calls per second to one server\n- a few thousand remote method calls per second on a single proxy\n- tens of thousands batched or oneway remote calls per second\n"
}
] | Python | MIT License | irmen/pyro4 | Fix typos in intro |
313,483 | 17.03.2017 09:16:28 | 14,400 | ded4f3263f2e4076549cc85273a3cbe2e12e2670 | Fix typo in intro | [
{
"change_type": "MODIFY",
"old_path": "docs/source/intro.rst",
"new_path": "docs/source/intro.rst",
"diff": "@@ -260,7 +260,7 @@ Experiment with the ``benchmark``, ``batchedcalls`` and ``hugetransfer`` example\n.. [1] When configured to use the :py:mod:`pickle` or :py:mod:`dill` serializer,\nyour system may be vulnerable\n- because of the sercurity risks of the pickle and dill protocols (possibility of arbitrary\n+ because of the security risks of the pickle and dill protocols (possibility of arbitrary\ncode execution).\nPyro does have some security measures in place to mitigate this risk somewhat.\nThey are described in the :doc:`security` chapter. It is strongly advised to read it.\n"
}
] | Python | MIT License | irmen/pyro4 | Fix typo in intro |
313,483 | 17.03.2017 14:45:11 | 14,400 | 7926a7c3d6fb792624b338e38c50b256d4a68ac9 | Fix typos and code sample in tutorial | [
{
"change_type": "MODIFY",
"old_path": "docs/source/tutorials.rst",
"new_path": "docs/source/tutorials.rst",
"diff": "@@ -181,8 +181,7 @@ Pyro uses a network broadcast to see if there's a name server available somewher\na broadcast responder that will respond \"Yeah hi I'm here\"). So in many cases you won't have to configure anything\nto be able to discover the name server. If nobody answers though, Pyro tries the configured default or custom location.\nIf still nobody answers it prints a sad message and exits.\n-However if it found the name server, it is then possible to talk to it and get the location of any other registered object.\n-. This means that you won't have to hard code any object locations in your code,\n+However if it found the name server, it is then possible to talk to it and get the location of any other registered object. This means that you won't have to hard code any object locations in your code,\nand that the code is capable of dynamically discovering everything at runtime.\n*But enough of that.* We need to start looking at how to actually write some code ourselves that uses Pyro!\n@@ -195,7 +194,7 @@ Building a Warehouse\n.. hint:: All code of this part of the tutorial can be found in the :file:`examples/warehouse` directory.\n-You'll build build a simple warehouse that stores items, and that everyone can visit.\n+You'll build a simple warehouse that stores items, and that everyone can visit.\nVisitors can store items and retrieve other items from the warehouse (if they've been stored there).\nIn this tutorial you'll first write a normal Python program that more or less implements the complete warehouse system,\n@@ -312,7 +311,7 @@ the server that listens for and processes incoming remote method calls. One way\n},\nns = False)\n-Next, we have to tell Pyro what parts of the class should be remotely accessible, and what pars aren't supposed\n+Next, we have to tell Pyro what parts of the class should be remotely accessible, and what parts aren't supposed\nto be accessible. This has to do with security. We'll be adding a ``@Pyro4.expose`` decorator on the Warehouse\nclass definition to tell Pyro it is allowed to access the class remotely.\nYou can ignore the ``@Pyro4.behavior`` line we also added for now (but it is required to properly have a persistent warehouse inventory).\n@@ -321,7 +320,6 @@ make the code now look like this (:file:`warehouse.py`)::\nfrom __future__ import print_function\nimport Pyro4\n- import person\[email protected]\n"
}
] | Python | MIT License | irmen/pyro4 | Fix typos and code sample in tutorial |
313,483 | 29.03.2017 09:26:55 | 14,400 | bf99e8344ea5f8edbaa26d98380c39fa4c437555 | Add 'phase3' part for the stockquotes example | [
{
"change_type": "MODIFY",
"old_path": "examples/stockquotes/Readme.txt",
"new_path": "examples/stockquotes/Readme.txt",
"diff": "@@ -20,7 +20,7 @@ decide themselves when a new quote is available, can be found in the\nexample 'stockquotes-old'. It uses callbacks instead of generators.\n-The tutorial here consists of 2 phases:\n+The tutorial here consists of 3 phases:\nphase 1:\nSimple prototype code where everything is running in a single process.\n@@ -44,3 +44,20 @@ phase 2:\nSupport for remote iterators/generators is available since Pyro 4.49.\nIn the viewer we didn't hardcode the stock market names but instead\nwe ask the name server for all available stock markets.\n+\n+phase 3:\n+ Similar to phase2, but now we make two small changes:\n+ a) we use the Pyro name server in such a way that it is accessible\n+ from other machines, and b) we run the stock market server in a way\n+ that the host is not \"localhost\" by default and can be accessed by\n+ different machines. To do this, create the daemon with the\n+ arguments 'host' and 'port' set (i.e. host=HOST_IP, port=HOST_PORT).\n+ Again, you have to start both component by itself (in separate\n+ console windows for instance):\n+ - start a Pyro name server like this:\n+ (python -m Pyro4.naming -n 192.168.1.99 -p 9091) or\n+ (pyro4-ns -n 192.168.1.99 -p 9091)\n+ - start the stockmarket.py (set HOST_IP and HOST_PORT accordingly.\n+ Also, make sure HOST_PORT is already open).\n+ - start the viewer.py in different remote machines to see the stream\n+ of quotes coming in on each window.\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "examples/stockquotes/phase3/stockmarket.py",
"diff": "+from __future__ import print_function\n+import random\n+import time\n+import Pyro4\n+\n+HOST_IP = \"127.0.0.1\" # Set accordingly (i.e. \"192.168.1.99\")\n+HOST_PORT = 9092 # Set accordingly (i.e. 9876)\n+\n+\[email protected]\n+class StockMarket(object):\n+ def __init__(self, marketname, symbols):\n+ self._name = marketname\n+ self._symbols = symbols\n+\n+ def quotes(self):\n+ while True:\n+ symbol = random.choice(self.symbols)\n+ yield symbol, round(random.uniform(5, 150), 2)\n+ time.sleep(random.random()/2.0)\n+\n+ @property\n+ def name(self):\n+ return self._name\n+\n+ @property\n+ def symbols(self):\n+ return self._symbols\n+\n+\n+if __name__ == \"__main__\":\n+ nasdaq = StockMarket(\"NASDAQ\", [\"AAPL\", \"CSCO\", \"MSFT\", \"GOOG\"])\n+ newyork = StockMarket(\"NYSE\", [\"IBM\", \"HPQ\", \"BP\"])\n+ # Add the proper \"host\" and \"port\" arguments for the construction\n+ # of the Daemon so it can be accessed remotely\n+ with Pyro4.Daemon(host=HOST_IP, port=HOST_PORT) as daemon:\n+ nasdaq_uri = daemon.register(nasdaq)\n+ newyork_uri = daemon.register(newyork)\n+ with Pyro4.locateNS() as ns:\n+ ns.register(\"example.stockmarket.nasdaq\", nasdaq_uri)\n+ ns.register(\"example.stockmarket.newyork\", newyork_uri)\n+ print(\"Stockmarkets available.\")\n+ daemon.requestLoop()\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "examples/stockquotes/phase3/viewer.py",
"diff": "+from __future__ import print_function\n+import Pyro4\n+\n+\n+class Viewer(object):\n+ def __init__(self):\n+ self.markets = set()\n+ self.symbols = set()\n+\n+ def start(self):\n+ print(\"Shown quotes:\", self.symbols)\n+ quote_sources = {\n+ market.name: market.quotes() for market in self.markets\n+ }\n+ while True:\n+ for market, quote_source in quote_sources.items():\n+ quote = next(quote_source) # get a new stock quote from the source\n+ symbol, value = quote\n+ if symbol in self.symbols:\n+ print(\"{0}.{1}: {2}\".format(market, symbol, value))\n+\n+\n+def find_stockmarkets():\n+ # You can hardcode the stockmarket names for nasdaq and newyork, but it\n+ # is more flexible if we just look for every available stockmarket.\n+ markets = []\n+ with Pyro4.locateNS() as ns:\n+ for market, market_uri in ns.list(prefix=\"example.stockmarket.\").items():\n+ print(\"found market\", market)\n+ markets.append(Pyro4.Proxy(market_uri))\n+ if not markets:\n+ raise ValueError(\"no markets found! (have you started the stock markets first?)\")\n+ return markets\n+\n+\n+def main():\n+ viewer = Viewer()\n+ viewer.markets = find_stockmarkets()\n+ viewer.symbols = {\"IBM\", \"AAPL\", \"MSFT\"}\n+ viewer.start()\n+\n+\n+if __name__ == \"__main__\":\n+ main()\n"
}
] | Python | MIT License | irmen/pyro4 | Add 'phase3' part for the stockquotes example |
313,483 | 30.03.2017 18:30:14 | 14,400 | 783124dd9cdf73d883ae76eb23871caa07547dbe | Add 'phase 3' section to the stockquotes example in the tutorial | [
{
"change_type": "MODIFY",
"old_path": "docs/source/tutorials.rst",
"new_path": "docs/source/tutorials.rst",
"diff": "@@ -783,28 +783,34 @@ If you're interested to see what the name server now contains, type :command:`py\n.. _not-localhost:\n-Running it on different machines\n-================================\n-For security reasons, Pyro runs stuff on localhost by default.\n+\n+phase 3: running it on different machines\n+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n+Before presenting the changes in phase 3, let's introduce some additional notions when working with Pyro.\n+\n+It's important for you to understand that, for security reasons, Pyro runs stuff on localhost by default.\nIf you want to access things from different machines, you'll have to tell Pyro to do that explicitly.\n-This paragraph shows you how very briefly you can do this.\n-For more details, refer to the chapters in this manual about the relevant Pyro components.\n+Here we show you how you can do this:\n+\n+Let's assume that you want to start the *name server* in such a way that it is accessible from other machines.\n+To do that, type in the console one of two options (with an appropriate -n argument):\n+\n+ $ python -m Pyro4.naming -n your_hostname # i.e. your_hostname = \"192.168.1.99\"\n+\n+or simply:\n-*Name server*\n- to start the nameserver in such a way that it is accessible from other machines,\n- start it with an appropriate -n argument, like this: :command:`python -m Pyro4.naming -n your_hostname`\n- (or simply: :command:`pyro4-ns -n your_hostname`)\n+ $ pyro4-ns -n your_hostname\n-*Warehouse server*\n- You'll have to modify :file:`warehouse.py`. Right before the ``serveSimple`` call you have to tell it to bind the daemon on your hostname\n- instead of localhost. One way to do this is by setting the ``HOST`` config item::\n+If you want to implement this concept on the *warehouse server*, you'll have to modify :file:`warehouse.py`.\n+Then, right before the ``serveSimple`` call, you have to tell it to bind the daemon on your hostname instead\n+of localhost. One way to do this is by setting the ``HOST`` config item::\nPyro4.config.HOST = \"your_hostname_here\"\nPyro4.Daemon.serveSimple(...)\n- Optional: you can choose to leave the code alone, and instead set the ``PYRO_HOST`` environment variable\n- before starting the warehouse server.\n- Another choice is to pass the required host (and perhaps even port) arguments to ``serveSimple``::\n+Optionally, you can choose to leave the code alone, and instead set the ``PYRO_HOST`` environment variable\n+before starting the warehouse server. 
Another choice is to pass the required host (and perhaps even port)\n+arguments to ``serveSimple``::\nPyro4.Daemon.serveSimple(\n{\n@@ -813,13 +819,23 @@ For more details, refer to the chapters in this manual about the relevant Pyro c\nhost = 'your_hostname_here',\nns = True)\n-*Stock market server*\n- This example already creates a daemon object instead of using the :py:meth:`serveSimple` call.\n- You'll have to modify the stockmarket source file because that is the one creating a daemon.\n- But you'll only have to add the proper ``host`` argument to the construction of the Daemon,\n- to set it to your machine name instead of the default of localhost.\n- Of course, you could also change the ``HOST`` config item (either in the code itself,\n- or by setting the ``PYRO_HOST`` environment variable before launching).\n+Remember that if you want more details, refer to the chapters in this manual about the relevant Pyro components.\n+\n+Now, back on the new version of the *stock market server*, notice that this example already creates a daemon\n+object instead of using the :py:meth:`serveSimple` call. You'll have to modify :file:`stockmarket.py` because\n+that is the one creating a daemon. But you'll only have to add the proper ``host``and ``port`` arguments to\n+the construction of the Daemon, to set it to your machine name instead of the default of localhost. Let's see\n+the few minor changes that are required in the code:\n+\n+ ...\n+ HOST_IP = \"192.168.1.99\"\n+ HOST_PORT = 9092\n+ ...\n+ with Pyro4.Daemon(host=HOST_IP, port=HOST_PORT) as daemon:\n+ ...\n+\n+Of course, you could also change the ``HOST`` config item (either in the code itself, or by setting\n+the ``PYRO_HOST`` environment variable before launching).\nOther means of creating connections\n===================================\n"
}
] | Python | MIT License | irmen/pyro4 | Add 'phase 3' section to the stockquotes example in the tutorial |
313,483 | 30.03.2017 20:40:26 | 14,400 | ea96bb11a5d329a76ce30923abdf7e79b31d2f5e | Fix table of contents for 'phase 3' section to the stockquotes example in the tutorial | [
{
"change_type": "MODIFY",
"old_path": "docs/source/tutorials.rst",
"new_path": "docs/source/tutorials.rst",
"diff": "@@ -24,7 +24,7 @@ This avoids initial networking complexity.\nFor security reasons, Pyro runs stuff on localhost by default.\nIf you want to access things from different machines, you'll have to tell Pyro\nto do that explicitly.\n- At the end is a small paragraph :ref:`not-localhost` that tells you\n+ At the end is a small section :ref:`not-localhost` that tells you\nhow you can run the various components on different machines.\n.. note::\n@@ -512,7 +512,7 @@ What you can see now is that you not only get the usual exception traceback, *bu\nthat occurred in the remote warehouse object on the server* (the \"remote traceback\"). This can greatly\nhelp locating problems! As you can see it contains the source code lines from the warehouse code that\nis running in the server, as opposed to the normal local traceback that only shows the remote method\n-call taking place inside Pyro...\n+call taking place inside Pyro.\n.. index::\n@@ -783,7 +783,6 @@ If you're interested to see what the name server now contains, type :command:`py\n.. _not-localhost:\n-\nphase 3: running it on different machines\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nBefore presenting the changes in phase 3, let's introduce some additional notions when working with Pyro.\n"
}
] | Python | MIT License | irmen/pyro4 | Fix table of contents for 'phase 3' section to the stockquotes example in the tutorial |
313,485 | 01.09.2017 00:36:08 | 21,600 | 86c50e904e140afdec0ef5f3940225205e69a1e8 | Fixing bug in autoproxy behavior, when register() is called w/ a class it is now registered properly with the serializer | [
{
"change_type": "MODIFY",
"old_path": "src/Pyro4/core.py",
"new_path": "src/Pyro4/core.py",
"diff": "@@ -1550,6 +1550,9 @@ class Daemon(object):\n# register a custom serializer for the type to automatically return proxies\n# we need to do this for all known serializers\nfor ser in util._serializers.values():\n+ if inspect.isclass(obj_or_class):\n+ ser.register_type_replacement(obj_or_class, pyroObjectToAutoProxy)\n+ else:\nser.register_type_replacement(type(obj_or_class), pyroObjectToAutoProxy)\n# register the object/class in the mapping\nself.objectsById[obj_or_class._pyroId] = obj_or_class\n"
},
{
"change_type": "MODIFY",
"old_path": "tests/PyroTests/test_server.py",
"new_path": "tests/PyroTests/test_server.py",
"diff": "@@ -104,6 +104,9 @@ class ServerTestObject(object):\nPyro4.core.current_context.response_annotations[\"ANN2\"] = b\"daemon annotation via new api\"\nreturn {\"annotations_in_daemon\": Pyro4.core.current_context.annotations}\n+ def new_test_object(self):\n+ return ServerTestObject()\n+\nclass NotEverythingExposedClass(object):\ndef __init__(self, name):\n@@ -302,7 +305,7 @@ class ServerTestsOnce(unittest.TestCase):\np._pyroBind()\nself.assertEqual({'value', 'dictionary'}, p._pyroAttrs)\nself.assertEqual({'echo', 'getDict', 'divide', 'nonserializableException', 'ping', 'oneway_delay', 'delayAndId', 'delay', 'testargs',\n- 'multiply', 'oneway_multiply', 'getDictAttr', 'iterator', 'generator', 'response_annotation', 'blob'}, p._pyroMethods)\n+ 'multiply', 'oneway_multiply', 'getDictAttr', 'iterator', 'generator', 'response_annotation', 'blob', 'new_test_object'}, p._pyroMethods)\nself.assertEqual({'oneway_multiply', 'oneway_delay'}, p._pyroOneway)\np._pyroAttrs = None\np._pyroGetMetadata()\n@@ -654,12 +657,20 @@ class ServerTestsOnce(unittest.TestCase):\nself.daemon.unregister(obj)\nresult = p.echo(obj)\nself.assertIsInstance(result, ServerTestObject, \"serialized object must still be normal object\")\n+ self.daemon.register(ServerTestObject)\n+ new_result = result.new_test_object()\n+ self.assertIsInstance(new_result, ServerTestObject, \"serialized pyro object must be a normal object\")\n+ self.daemon.unregister(ServerTestObject)\nconfig.AUTOPROXY = True # make sure autoproxying is enabled\nresult = p.echo(obj)\nself.assertIsInstance(result, ServerTestObject, \"non-pyro object must be returned as normal class\")\nself.daemon.register(obj)\nresult = p.echo(obj)\nself.assertIsInstance(result, Pyro4.core.Proxy, \"serialized pyro object must be a proxy\")\n+ self.daemon.register(ServerTestObject)\n+ new_result = result.new_test_object()\n+ self.assertIsInstance(new_result, Pyro4.core.Proxy, \"serialized pyro object must be a proxy\")\n+ self.daemon.unregister(ServerTestObject)\nself.daemon.unregister(obj)\nresult = p.echo(obj)\nself.assertIsInstance(result, ServerTestObject, \"unregistered pyro object must be normal class again\")\n"
}
] | Python | MIT License | irmen/pyro4 | Fixing bug in autoproxy behavior, when register() is called w/ a class it is now registered properly with the serializer |
313,481 | 20.01.2019 19:37:17 | -3,600 | 4be6a6f48d126d8f7190290cda951a862c70e798 | compatibility fix for cython3 | [
{
"change_type": "MODIFY",
"old_path": "src/Pyro4/util.py",
"new_path": "src/Pyro4/util.py",
"diff": "@@ -929,7 +929,7 @@ def get_exposed_members(obj, only_exposed=True, as_lists=False, use_cache=True):\nif is_private_attribute(m):\ncontinue\nv = getattr(obj, m)\n- if inspect.ismethod(v) or inspect.isfunction(v):\n+ if inspect.ismethod(v) or inspect.isfunction(v) or inspect.ismethoddescriptor(v):\nif getattr(v, \"_pyroExposed\", not only_exposed):\nmethods.add(m)\n# check if the method is marked with the 'oneway' decorator:\n"
}
] | Python | MIT License | irmen/pyro4 | compatibility fix for cython3 |
313,482 | 18.02.2019 10:57:43 | 18,000 | b67d4ceefd0cf52ae3a1588615a2ac623cd4ce2b | Enh: Use select for accept socket readyness instead of block | [
{
"change_type": "MODIFY",
"old_path": "src/Pyro4/socketserver/threadpoolserver.py",
"new_path": "src/Pyro4/socketserver/threadpoolserver.py",
"diff": "@@ -7,12 +7,15 @@ Pyro - Python Remote Objects. Copyright by Irmen de Jong ([email protected]).\n\"\"\"\nfrom __future__ import print_function\n+\n+import selectors\nimport socket\nimport logging\nimport sys\nimport time\nimport threading\nimport os\n+\nfrom Pyro4 import socketutil, errors, util\nfrom Pyro4.configuration import config\nfrom .threadpool import Pool, NoFreeWorkersError\n@@ -106,6 +109,7 @@ class SocketServer_Threadpool(object):\nself.daemon = self.sock = self._socketaddr = self.locationStr = self.pool = None\nself.shutting_down = False\nself.housekeeper = None\n+ self._selector = selectors.DefaultSelector()\ndef init(self, daemon, host, port, unixsocket=None):\nlog.info(\"starting thread pool socketserver\")\n@@ -143,6 +147,7 @@ class SocketServer_Threadpool(object):\nself.pool = Pool()\nself.housekeeper = Housekeeper(daemon)\nself.housekeeper.start()\n+ self._selector.register(self.sock, selectors.EVENT_READ, self)\ndef __del__(self):\nif self.sock is not None:\n@@ -186,6 +191,9 @@ class SocketServer_Threadpool(object):\n# all other (client) sockets are owned by their individual threads.\nassert self.sock in eventsockets\ntry:\n+ events = self._selector.select(config.POLLTIMEOUT)\n+ if not events:\n+ return\ncsock, caddr = self.sock.accept()\nif self.shutting_down:\ncsock.close()\n"
}
] | Python | MIT License | irmen/pyro4 | Enh: Use select for accept socket readyness instead of block |
313,482 | 28.02.2019 13:22:37 | 18,000 | 41bc9414fe8ca6bf60c497c87e5462e8ebe23453 | Use selectors from multiplexer | [
{
"change_type": "MODIFY",
"old_path": "src/Pyro4/socketserver/threadpoolserver.py",
"new_path": "src/Pyro4/socketserver/threadpoolserver.py",
"diff": "@@ -8,7 +8,6 @@ Pyro - Python Remote Objects. Copyright by Irmen de Jong ([email protected]).\nfrom __future__ import print_function\n-import selectors\nimport socket\nimport logging\nimport sys\n@@ -19,6 +18,8 @@ import os\nfrom Pyro4 import socketutil, errors, util\nfrom Pyro4.configuration import config\nfrom .threadpool import Pool, NoFreeWorkersError\n+from .multiplexserver import selectors\n+\nlog = logging.getLogger(\"Pyro4.threadpoolserver\")\n_client_disconnect_lock = threading.Lock()\n"
}
] | Python | MIT License | irmen/pyro4 | Use selectors from multiplexer |
313,487 | 06.03.2019 13:36:17 | 21,600 | d3011165ceda2ece0a3bf524b581ab4a3bcf4a9e | fix an issue with config.NATPORT | [
{
"change_type": "MODIFY",
"old_path": "src/Pyro4/core.py",
"new_path": "src/Pyro4/core.py",
"diff": "@@ -1113,8 +1113,8 @@ class Daemon(object):\nhost = config.HOST\nif nathost is None:\nnathost = config.NATHOST\n- if natport is None:\n- natport = config.NATPORT or None\n+ if natport is None and nathost is not None:\n+ natport = config.NATPORT\nif nathost and unixsocket:\nraise ValueError(\"cannot use nathost together with unixsocket\")\nif (nathost is None) ^ (natport is None):\n"
}
] | Python | MIT License | irmen/pyro4 | fix an issue with config.NATPORT |
299,317 | 02.01.2017 11:10:27 | 10,800 | 018a7dd93a12c5f2e1f8a24b207b86ab108ff993 | Blocking Command: add blocked error management and refactor | [
{
"change_type": "MODIFY",
"old_path": "documentation/extensions/blockingcommand.md",
"new_path": "documentation/extensions/blockingcommand.md",
"diff": "@@ -8,6 +8,7 @@ Allows to manage communications blocking.\n* Block contact\n* Unblock contact\n* Unblock all\n+ * Check if a message has a blocked error\n**XEP related:** [XEP-0191](http://xmpp.org/extensions/xep-0191.html)\n@@ -61,3 +62,13 @@ Unblock all\n```\nblockingCommandManager.unblockAll();\n```\n+\n+\n+Check if a message has a blocked error\n+--------------------------------------\n+\n+```\n+BlockedErrorExtension.isInside(message));\n+```\n+*message* is a `Message`\n+\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/blocking/BlockingCommandManager.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/blocking/BlockingCommandManager.java",
"diff": "@@ -163,13 +163,11 @@ public final class BlockingCommandManager extends Manager {\npublic List<Jid> getBlockList()\nthrows NoResponseException, XMPPErrorException, NotConnectedException, InterruptedException {\n- if (blockListCached != null) {\n- return Collections.unmodifiableList(blockListCached);\n- }\n-\n+ if (blockListCached == null) {\nBlockListIQ blockListIQ = new BlockListIQ();\nBlockListIQ blockListIQResult = connection().createPacketCollectorAndSend(blockListIQ).nextResultOrThrow();\nblockListCached = blockListIQResult.getBlockedJidsCopy();\n+ }\nreturn Collections.unmodifiableList(blockListCached);\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/blocking/element/BlockedErrorExtension.java",
"diff": "+/**\n+ *\n+ * Copyright 2016-2017 Fernando Ramirez\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.blocking.element;\n+\n+import org.jivesoftware.smack.packet.ExtensionElement;\n+import org.jivesoftware.smack.packet.Message;\n+import org.jivesoftware.smack.packet.XMPPError;\n+import org.jivesoftware.smack.util.XmlStringBuilder;\n+import org.jivesoftware.smackx.blocking.BlockingCommandManager;\n+\n+/**\n+ * Blocked error extension class.\n+ *\n+ * @author Fernando Ramirez\n+ * @see <a href=\"http://xmpp.org/extensions/xep-0191.html\">XEP-0191: Blocking\n+ * Command</a>\n+ */\n+public class BlockedErrorExtension implements ExtensionElement {\n+\n+ public static final String ELEMENT = \"blocked\";\n+ public static final String NAMESPACE = BlockingCommandManager.NAMESPACE + \":errors\";\n+\n+ @Override\n+ public String getElementName() {\n+ return ELEMENT;\n+ }\n+\n+ @Override\n+ public String getNamespace() {\n+ return NAMESPACE;\n+ }\n+\n+ @Override\n+ public CharSequence toXML() {\n+ XmlStringBuilder xml = new XmlStringBuilder(this);\n+ xml.closeEmptyElement();\n+ return xml;\n+ }\n+\n+ public static BlockedErrorExtension from(Message message) {\n+ XMPPError error = message.getError();\n+ if (error == null) {\n+ return null;\n+ }\n+ return error.getExtension(ELEMENT, NAMESPACE);\n+ }\n+\n+ /**\n+ * Check if a message contains a BlockedErrorExtension, which means that a\n+ * message was blocked because the JID blocked the sender, and that was\n+ * reflected back as an error message.\n+ *\n+ * @param message\n+ * @return true if the message contains a BlockedErrorExtension, false if\n+ * not\n+ */\n+ public static boolean isInside(Message message) {\n+ return from(message) != null;\n+ }\n+\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/blocking/provider/BlockedErrorExtensionProvider.java",
"diff": "+/**\n+ *\n+ * Copyright 2016 Fernando Ramirez\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.blocking.provider;\n+\n+import org.jivesoftware.smack.provider.ExtensionElementProvider;\n+import org.jivesoftware.smackx.blocking.element.BlockedErrorExtension;\n+import org.xmlpull.v1.XmlPullParser;\n+\n+/**\n+ * Blocked error extension class.\n+ *\n+ * @author Fernando Ramirez\n+ * @see <a href=\"http://xmpp.org/extensions/xep-0191.html\">XEP-0191: Blocking\n+ * Command</a>\n+ */\n+public class BlockedErrorExtensionProvider extends ExtensionElementProvider<BlockedErrorExtension> {\n+\n+ @Override\n+ public BlockedErrorExtension parse(XmlPullParser parser, int initialDepth) throws Exception {\n+ return new BlockedErrorExtension();\n+ }\n+\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/resources/org.jivesoftware.smack.extensions/extensions.providers",
"new_path": "smack-extensions/src/main/resources/org.jivesoftware.smack.extensions/extensions.providers",
"diff": "<namespace>urn:xmpp:blocking</namespace>\n<className>org.jivesoftware.smackx.blocking.provider.UnblockContactsIQProvider</className>\n</iqProvider>\n-\n+ <extensionProvider>\n+ <elementName>blocked</elementName>\n+ <namespace>urn:xmpp:blocking:errors</namespace>\n+ <className>org.jivesoftware.smackx.blocking.provider.BlockedErrorExtensionProvider</className>\n+ </extensionProvider>\n</smackProviders>\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/blocking/BlockedErrorExtensionTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2016 Fernando Ramirez\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.blocking;\n+\n+import org.jivesoftware.smack.packet.Message;\n+import org.jivesoftware.smack.util.PacketParserUtils;\n+import org.jivesoftware.smackx.blocking.element.BlockedErrorExtension;\n+import org.junit.Assert;\n+import org.junit.Test;\n+\n+public class BlockedErrorExtensionTest {\n+\n+ String messageWithoutError = \"<message from='[email protected]' \"\n+ + \"to='[email protected]/9b7b3fce28742983' \"\n+ + \"type='normal' xml:lang='en' id='5x41G-120'>\" + \"</message>\";\n+\n+ String messageWithError = \"<message from='[email protected]' \"\n+ + \"to='[email protected]/9b7b3fce28742983' \"\n+ + \"type='error' xml:lang='en' id='5x41G-121'>\" + \"<error code='406' type='cancel'>\"\n+ + \"<not-acceptable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>\" + \"</error>\" + \"</message>\";\n+\n+ String messageWithBlockedError = \"<message from='[email protected]' \"\n+ + \"to='[email protected]/9b7b3fce28742983' \"\n+ + \"type='error' xml:lang='en' id='5x41G-122'>\" + \"<error code='406' type='cancel'>\"\n+ + \"<blocked xmlns='urn:xmpp:blocking:errors'/>\"\n+ + \"<not-acceptable xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>\" + \"</error>\" + \"</message>\";\n+\n+ @Test\n+ public void checkErrorHasBlockedExtension() throws Exception {\n+ Message message1 = (Message) PacketParserUtils.parseStanza(messageWithoutError);\n+ Assert.assertFalse(BlockedErrorExtension.isInside(message1));\n+\n+ Message message2 = (Message) PacketParserUtils.parseStanza(messageWithError);\n+ Assert.assertFalse(BlockedErrorExtension.isInside(message2));\n+\n+ Message message3 = (Message) PacketParserUtils.parseStanza(messageWithBlockedError);\n+ Assert.assertTrue(BlockedErrorExtension.isInside(message3));\n+ }\n+\n+}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Blocking Command: add blocked error management and refactor SMACK-731 |
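For readers following the Blocking Command change above, a minimal usage sketch may help. It assumes an already connected and authenticated XMPPConnection; the manager lookup via getInstanceFor() follows the usual Smack manager pattern and is not shown in this commit, while getBlockList() and BlockedErrorExtension.isInside() are taken directly from the diff.

    import java.util.List;

    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smack.packet.Message;
    import org.jivesoftware.smackx.blocking.BlockingCommandManager;
    import org.jivesoftware.smackx.blocking.element.BlockedErrorExtension;
    import org.jxmpp.jid.Jid;

    public class BlockingSketch {
        // Inspect an incoming message and report whether it bounced because the
        // recipient has blocked us (reflected <blocked/> error). Also shows how
        // the cached block list added in this commit is obtained.
        public static void inspect(XMPPConnection connection, Message incoming) throws Exception {
            BlockingCommandManager manager = BlockingCommandManager.getInstanceFor(connection);

            List<Jid> blockList = manager.getBlockList(); // served from the cache after the first query
            System.out.println("Currently blocking " + blockList.size() + " contacts");

            if (BlockedErrorExtension.isInside(incoming)) {
                System.out.println("Message " + incoming.getStanzaId() + " was rejected: the recipient blocks us");
            }
        }
    }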
299,314 | 18.02.2017 11:08:32 | -3,600 | c13cddd91a37d1f02573cd5f86572643ffa2a8b0 | Enable querying MAM by address and node.
Enhance the API to query archives for example from a room or a pubsub
node. | [
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/mam/MamManager.java",
"new_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/mam/MamManager.java",
"diff": "@@ -25,10 +25,10 @@ import java.util.WeakHashMap;\nimport org.jivesoftware.smack.ConnectionCreationListener;\nimport org.jivesoftware.smack.Manager;\n-import org.jivesoftware.smack.StanzaCollector;\nimport org.jivesoftware.smack.SmackException.NoResponseException;\nimport org.jivesoftware.smack.SmackException.NotConnectedException;\nimport org.jivesoftware.smack.SmackException.NotLoggedInException;\n+import org.jivesoftware.smack.StanzaCollector;\nimport org.jivesoftware.smack.XMPPConnection;\nimport org.jivesoftware.smack.XMPPConnectionRegistry;\nimport org.jivesoftware.smack.XMPPException.XMPPErrorException;\n@@ -106,7 +106,7 @@ public final class MamManager extends Manager {\n*/\npublic MamQueryResult queryArchive(Integer max) throws NoResponseException, XMPPErrorException,\nNotConnectedException, InterruptedException, NotLoggedInException {\n- return queryArchive(max, null, null, null, null);\n+ return queryArchive(null, null, max, null, null, null, null);\n}\n/**\n@@ -122,7 +122,7 @@ public final class MamManager extends Manager {\n*/\npublic MamQueryResult queryArchive(Jid withJid) throws NoResponseException, XMPPErrorException,\nNotConnectedException, InterruptedException, NotLoggedInException {\n- return queryArchive(null, null, null, withJid, null);\n+ return queryArchive(null, null, null, null, null, withJid, null);\n}\n/**\n@@ -142,7 +142,7 @@ public final class MamManager extends Manager {\n*/\npublic MamQueryResult queryArchive(Date start, Date end) throws NoResponseException, XMPPErrorException,\nNotConnectedException, InterruptedException, NotLoggedInException {\n- return queryArchive(null, start, end, null, null);\n+ return queryArchive(null, null, null, start, end, null, null);\n}\n/**\n@@ -158,7 +158,7 @@ public final class MamManager extends Manager {\n*/\npublic MamQueryResult queryArchive(List<FormField> additionalFields) throws NoResponseException, XMPPErrorException,\nNotConnectedException, InterruptedException, NotLoggedInException {\n- return queryArchive(null, null, null, null, additionalFields);\n+ return queryArchive(null, null, null, null, null, null, additionalFields);\n}\n/**\n@@ -175,7 +175,7 @@ public final class MamManager extends Manager {\n*/\npublic MamQueryResult queryArchiveWithStartDate(Date start) throws NoResponseException, XMPPErrorException,\nNotConnectedException, InterruptedException, NotLoggedInException {\n- return queryArchive(null, start, null, null, null);\n+ return queryArchive(null, null, null, start, null, null, null);\n}\n/**\n@@ -192,9 +192,10 @@ public final class MamManager extends Manager {\n*/\npublic MamQueryResult queryArchiveWithEndDate(Date end) throws NoResponseException, XMPPErrorException,\nNotConnectedException, InterruptedException, NotLoggedInException {\n- return queryArchive(null, null, end, null, null);\n+ return queryArchive(null, null, null, null, end, null, null);\n}\n+\n/**\n* Query archive applying filters: max count, start date, end date, from/to\n* JID and with additional fields.\n@@ -214,6 +215,33 @@ public final class MamManager extends Manager {\npublic MamQueryResult queryArchive(Integer max, Date start, Date end, Jid withJid, List<FormField> additionalFields)\nthrows NoResponseException, XMPPErrorException, NotConnectedException, InterruptedException,\nNotLoggedInException {\n+ return queryArchive(null, null, max, start, end, withJid, additionalFields);\n+ }\n+\n+\n+ /**\n+ * Query an message archive like a MUC archive or a pubsub node archive, addressed by an archiveAddress, applying\n+ * 
filters: max count, start date, end date, from/to JID and with additional fields. When archiveAddress is null the\n+ * default, the server will be requested.\n+ *\n+ * @param archiveAddress can be null\n+ * @param node The Pubsub node name, can be null\n+ * @param max\n+ * @param start\n+ * @param end\n+ * @param withJid\n+ * @param additionalFields\n+ * @return the MAM query result\n+ * @throws NoResponseException\n+ * @throws XMPPErrorException\n+ * @throws NotConnectedException\n+ * @throws InterruptedException\n+ * @throws NotLoggedInException\n+ */\n+ public MamQueryResult queryArchive(Jid archiveAddress, String node, Integer max, Date start, Date end, Jid withJid,\n+ List<FormField> additionalFields)\n+ throws NoResponseException, XMPPErrorException, NotConnectedException, InterruptedException,\n+ NotLoggedInException {\nDataForm dataForm = null;\nString queryId = UUID.randomUUID().toString();\n@@ -225,8 +253,9 @@ public final class MamManager extends Manager {\naddAdditionalFields(additionalFields, dataForm);\n}\n- MamQueryIQ mamQueryIQ = new MamQueryIQ(queryId, dataForm);\n+ MamQueryIQ mamQueryIQ = new MamQueryIQ(queryId, node, dataForm);\nmamQueryIQ.setType(IQ.Type.set);\n+ mamQueryIQ.setTo(archiveAddress);\naddResultsLimit(max, mamQueryIQ);\nreturn queryArchive(mamQueryIQ);\n@@ -291,8 +320,31 @@ public final class MamManager extends Manager {\n*/\npublic MamQueryResult page(DataForm dataForm, RSMSet rsmSet) throws NoResponseException, XMPPErrorException,\nNotConnectedException, InterruptedException, NotLoggedInException {\n- MamQueryIQ mamQueryIQ = new MamQueryIQ(UUID.randomUUID().toString(), dataForm);\n+\n+ return page(null, null, dataForm, rsmSet);\n+\n+ }\n+\n+ /**\n+ * Returns a page of the archive.\n+ *\n+ * @param archiveAddress can be null\n+ * @param node The Pubsub node name, can be null\n+ * @param dataForm\n+ * @param rsmSet\n+ * @return the MAM query result\n+ * @throws NoResponseException\n+ * @throws XMPPErrorException\n+ * @throws NotConnectedException\n+ * @throws InterruptedException\n+ * @throws NotLoggedInException\n+ */\n+ public MamQueryResult page(Jid archiveAddress, String node, DataForm dataForm, RSMSet rsmSet)\n+ throws NoResponseException, XMPPErrorException,\n+ NotConnectedException, InterruptedException, NotLoggedInException {\n+ MamQueryIQ mamQueryIQ = new MamQueryIQ(UUID.randomUUID().toString(), node, dataForm);\nmamQueryIQ.setType(IQ.Type.set);\n+ mamQueryIQ.setTo(archiveAddress);\nmamQueryIQ.addExtension(rsmSet);\nreturn queryArchive(mamQueryIQ);\n}\n@@ -315,7 +367,7 @@ public final class MamManager extends Manager {\nXMPPErrorException, NotConnectedException, InterruptedException, NotLoggedInException {\nRSMSet previousResultRsmSet = mamQueryResult.mamFin.getRSMSet();\nRSMSet requestRsmSet = new RSMSet(count, previousResultRsmSet.getLast(), RSMSet.PageDirection.after);\n- return page(mamQueryResult.form, requestRsmSet);\n+ return page(mamQueryResult.to, mamQueryResult.node, mamQueryResult.form, requestRsmSet);\n}\n/**\n@@ -336,7 +388,7 @@ public final class MamManager extends Manager {\nXMPPErrorException, NotConnectedException, InterruptedException, NotLoggedInException {\nRSMSet previousResultRsmSet = mamQueryResult.mamFin.getRSMSet();\nRSMSet requestRsmSet = new RSMSet(count, previousResultRsmSet.getFirst(), RSMSet.PageDirection.before);\n- return page(mamQueryResult.form, requestRsmSet);\n+ return page(mamQueryResult.to, mamQueryResult.node, mamQueryResult.form, requestRsmSet);\n}\n/**\n@@ -357,7 +409,7 @@ public final class MamManager 
extends Manager {\nRSMSet rsmSet = new RSMSet(null, firstMessageId, -1, -1, null, max, null, -1);\nDataForm dataForm = getNewMamForm();\naddWithJid(chatJid, dataForm);\n- return page(dataForm, rsmSet);\n+ return page(null, null, dataForm, rsmSet);\n}\n/**\n@@ -378,7 +430,7 @@ public final class MamManager extends Manager {\nRSMSet rsmSet = new RSMSet(lastMessageId, null, -1, -1, null, max, null, -1);\nDataForm dataForm = getNewMamForm();\naddWithJid(chatJid, dataForm);\n- return page(dataForm, rsmSet);\n+ return page(null, null, dataForm, rsmSet);\n}\n/**\n@@ -410,8 +462,27 @@ public final class MamManager extends Manager {\n*/\npublic List<FormField> retrieveFormFields() throws NoResponseException, XMPPErrorException, NotConnectedException,\nInterruptedException, NotLoggedInException {\n+ return retrieveFormFields(null, null);\n+ }\n+\n+ /**\n+ * Get the form fields supported by the server.\n+ *\n+ * @param archiveAddress can be null\n+ * @param node The Pubsub node name, can be null\n+ * @return the list of form fields.\n+ * @throws NoResponseException\n+ * @throws XMPPErrorException\n+ * @throws NotConnectedException\n+ * @throws InterruptedException\n+ * @throws NotLoggedInException\n+ */\n+ public List<FormField> retrieveFormFields(Jid archiveAddress, String node)\n+ throws NoResponseException, XMPPErrorException, NotConnectedException,\n+ InterruptedException, NotLoggedInException {\nString queryId = UUID.randomUUID().toString();\n- MamQueryIQ mamQueryIq = new MamQueryIQ(queryId);\n+ MamQueryIQ mamQueryIq = new MamQueryIQ(queryId, node, null);\n+ mamQueryIq.setTo(archiveAddress);\nMamQueryIQ mamResponseQueryIq = connection().createStanzaCollectorAndSend(mamQueryIq).nextResultOrThrow();\n@@ -445,7 +516,7 @@ public final class MamManager extends Manager {\nforwardedMessages.add(mamResultExtension.getForwarded());\n}\n- return new MamQueryResult(forwardedMessages, mamFinIQ, DataForm.from(mamQueryIq));\n+ return new MamQueryResult(forwardedMessages, mamFinIQ, mamQueryIq.getTo(), mamQueryIq.getNode(), DataForm.from(mamQueryIq));\n}\n/**\n@@ -455,11 +526,15 @@ public final class MamManager extends Manager {\npublic final static class MamQueryResult {\npublic final List<Forwarded> forwardedMessages;\npublic final MamFinIQ mamFin;\n+ private final Jid to;\n+ private final String node;\nprivate final DataForm form;\n- private MamQueryResult(List<Forwarded> forwardedMessages, MamFinIQ mamFin, DataForm form) {\n+ private MamQueryResult(List<Forwarded> forwardedMessages, MamFinIQ mamFin, Jid to, String node, DataForm form) {\nthis.forwardedMessages = forwardedMessages;\nthis.mamFin = mamFin;\n+ this.to = to;\n+ this.node = node;\nthis.form = form;\n}\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/mam/element/MamQueryIQ.java",
"new_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/mam/element/MamQueryIQ.java",
"diff": "@@ -108,6 +108,15 @@ public class MamQueryIQ extends IQ {\nreturn queryId;\n}\n+ /**\n+ * Get the Node name.\n+ *\n+ * @return the node\n+ */\n+ public String getNode() {\n+ return node;\n+ }\n+\n/**\n* Get the data form.\n*\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Enable querying MAM by address and node.
Enhance the API to query archives for example from a room or a pubsub
node. |
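A hedged sketch of how the extended queryArchive() overload above might be used to read a room's own archive rather than the user's server-side archive. The room JID is hypothetical; MamManager.getInstanceFor() is the standard manager accessor, and the seven-argument queryArchive() plus the public forwardedMessages field are taken from the diff.

    import java.util.List;

    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smackx.forward.packet.Forwarded;
    import org.jivesoftware.smackx.mam.MamManager;
    import org.jxmpp.jid.EntityBareJid;
    import org.jxmpp.jid.impl.JidCreate;

    public class MucArchiveSketch {
        // Address the MAM query at the room itself (archiveAddress = room JID),
        // asking for at most 20 archived messages and no other filters.
        public static void dumpRoomHistory(XMPPConnection connection) throws Exception {
            MamManager mamManager = MamManager.getInstanceFor(connection);
            EntityBareJid room = JidCreate.entityBareFrom("[email protected]"); // hypothetical room address

            MamManager.MamQueryResult result =
                    mamManager.queryArchive(room, null, 20, null, null, null, null);

            List<Forwarded> messages = result.forwardedMessages;
            System.out.println("Fetched " + messages.size() + " archived messages from " + room);
        }
    }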
299,314 | 26.02.2017 18:13:10 | -3,600 | 382d51976643555efff4e2e50b6dbaf07072ab05 | Make Smack buildable under windows.
Specify as character set. Added some symbolic links to smack-integration-test for consistency sake. | [
{
"change_type": "MODIFY",
"old_path": "build.gradle",
"new_path": "build.gradle",
"diff": "@@ -118,7 +118,7 @@ allprojects {\n// Some systems may not have set their platform default\n// converter to 'utf8', but we use unicode in our source\n// files. Therefore ensure that javac uses unicode\n- options.encoding = \"utf8\"\n+ options.encoding = 'UTF-8'\noptions.compilerArgs = [\n'-Xlint:all',\n// Set '-options' because a non-java7 javac will emit a\n@@ -164,6 +164,7 @@ allprojects {\n}\ntasks.withType(Javadoc) {\noptions.charSet = \"UTF-8\"\n+ options.encoding = 'UTF-8'\n}\n// Pin the errorprone version to prevent \"unsupported major.minor\n"
},
{
"change_type": "DELETE",
"old_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/filetransfer/package-info.java",
"new_path": null,
"diff": "-/**\n- *\n- * Copyright 2015 Florian Schmaus\n- *\n- * Licensed under the Apache License, Version 2.0 (the \"License\");\n- * you may not use this file except in compliance with the License.\n- * You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing, software\n- * distributed under the License is distributed on an \"AS IS\" BASIS,\n- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n- * See the License for the specific language governing permissions and\n- * limitations under the License.\n- */\n-\n-/**\n- * TODO describe me.\n- */\n-package org.jivesoftware.smackx.filetransfer;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/filetransfer/package-info.java",
"diff": "+../../../../../../../../smack-extensions/src/main/java/org/jivesoftware/smackx/filetransfer/package-info.java\n\\ No newline at end of file\n"
},
{
"change_type": "DELETE",
"old_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/iot/package-info.java",
"new_path": null,
"diff": "-/**\n- *\n- * Copyright 2015 Florian Schmaus\n- *\n- * Licensed under the Apache License, Version 2.0 (the \"License\");\n- * you may not use this file except in compliance with the License.\n- * You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing, software\n- * distributed under the License is distributed on an \"AS IS\" BASIS,\n- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n- * See the License for the specific language governing permissions and\n- * limitations under the License.\n- */\n-\n-/**\n- * TODO describe me.\n- */\n-package org.jivesoftware.smackx.iot;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/iot/package-info.java",
"diff": "+../../../../../../../../smack-experimental/src/main/java/org/jivesoftware/smackx/iot/package-info.java\n\\ No newline at end of file\n"
},
{
"change_type": "DELETE",
"old_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/iqversion/package-info.java",
"new_path": null,
"diff": "-/**\n- *\n- * Copyright 2015 Florian Schmaus\n- *\n- * Licensed under the Apache License, Version 2.0 (the \"License\");\n- * you may not use this file except in compliance with the License.\n- * You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing, software\n- * distributed under the License is distributed on an \"AS IS\" BASIS,\n- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n- * See the License for the specific language governing permissions and\n- * limitations under the License.\n- */\n-\n-/**\n- * TODO describe me.\n- */\n-package org.jivesoftware.smackx.iqversion;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/iqversion/package-info.java",
"diff": "+../../../../../../../../smack-extensions/src/main/java/org/jivesoftware/smackx/iqversion/package-info.java\n\\ No newline at end of file\n"
},
{
"change_type": "DELETE",
"old_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/muc/package-info.java",
"new_path": null,
"diff": "-/**\n- *\n- * Copyright 2015 Florian Schmaus\n- *\n- * Licensed under the Apache License, Version 2.0 (the \"License\");\n- * you may not use this file except in compliance with the License.\n- * You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing, software\n- * distributed under the License is distributed on an \"AS IS\" BASIS,\n- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n- * See the License for the specific language governing permissions and\n- * limitations under the License.\n- */\n-\n-/**\n- * TODO describe me.\n- */\n-package org.jivesoftware.smackx.muc;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/muc/package-info.java",
"diff": "+../../../../../../../../smack-extensions/src/main/java/org/jivesoftware/smackx/muc/package-info.java\n\\ No newline at end of file\n"
},
{
"change_type": "DELETE",
"old_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/package-info.java",
"new_path": null,
"diff": "-/**\n- *\n- * Copyright 2015 Florian Schmaus\n- *\n- * Licensed under the Apache License, Version 2.0 (the \"License\");\n- * you may not use this file except in compliance with the License.\n- * You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing, software\n- * distributed under the License is distributed on an \"AS IS\" BASIS,\n- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n- * See the License for the specific language governing permissions and\n- * limitations under the License.\n- */\n-\n-/**\n- * TODO describe me.\n- */\n-package org.jivesoftware.smackx;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/package-info.java",
"diff": "+../../../../../../../smack-extensions/src/main/java/org/jivesoftware/smackx/package-info.java\n\\ No newline at end of file\n"
},
{
"change_type": "DELETE",
"old_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/ping/package-info.java",
"new_path": null,
"diff": "-/**\n- *\n- * Copyright 2015 Florian Schmaus\n- *\n- * Licensed under the Apache License, Version 2.0 (the \"License\");\n- * you may not use this file except in compliance with the License.\n- * You may obtain a copy of the License at\n- *\n- * http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing, software\n- * distributed under the License is distributed on an \"AS IS\" BASIS,\n- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n- * See the License for the specific language governing permissions and\n- * limitations under the License.\n- */\n-\n-/**\n- * TODO describe me.\n- */\n-package org.jivesoftware.smackx.ping;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/ping/package-info.java",
"diff": "+../../../../../../../../smack-extensions/src/main/java/org/jivesoftware/smackx/ping/package-info.java\n\\ No newline at end of file\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Make Smack buildable under windows.
Specify UTF-8 as character set. Added some symbolic links to smack-integration-test for consistency sake. |
299,303 | 27.02.2017 14:14:04 | -7,200 | fb82e04109720e13da938ba6be5807ddb0631ecd | add parsing non stream errors for BOSH | [
{
"change_type": "MODIFY",
"old_path": "smack-bosh/src/main/java/org/jivesoftware/smack/bosh/XMPPBOSHConnection.java",
"new_path": "smack-bosh/src/main/java/org/jivesoftware/smack/bosh/XMPPBOSHConnection.java",
"diff": "@@ -38,6 +38,7 @@ import org.jivesoftware.smack.packet.Message;\nimport org.jivesoftware.smack.packet.Stanza;\nimport org.jivesoftware.smack.packet.Nonza;\nimport org.jivesoftware.smack.packet.Presence;\n+import org.jivesoftware.smack.packet.XMPPError;\nimport org.jivesoftware.smack.sasl.packet.SaslStreamElements.SASLFailure;\nimport org.jivesoftware.smack.sasl.packet.SaslStreamElements.Success;\nimport org.jivesoftware.smack.util.PacketParserUtils;\n@@ -529,7 +530,13 @@ public class XMPPBOSHConnection extends AbstractXMPPConnection {\n}\nbreak;\ncase \"error\":\n+ //Some bosh error isn't stream error.\n+ if (\"urn:ietf:params:xml:ns:xmpp-streams\".equals(parser.getNamespace(null))) {\nthrow new StreamErrorException(PacketParserUtils.parseStreamError(parser));\n+ } else {\n+ XMPPError.Builder builder = PacketParserUtils.parseError(parser);\n+ throw new XMPPException.XMPPErrorException(null, builder.build());\n+ }\n}\nbreak;\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | add parsing non stream errors for BOSH |
299,307 | 17.03.2017 15:10:41 | 10,800 | 4fb14490770329679f146096e529fb21ba3f7c35 | Fix AbstractJidTypeFilter.accept() | [
{
"change_type": "MODIFY",
"old_path": "smack-core/src/main/java/org/jivesoftware/smack/filter/AbstractJidTypeFilter.java",
"new_path": "smack-core/src/main/java/org/jivesoftware/smack/filter/AbstractJidTypeFilter.java",
"diff": "@@ -39,7 +39,7 @@ public abstract class AbstractJidTypeFilter implements StanzaFilter {\n@Override\npublic final boolean accept(Stanza stanza) {\n- final Jid jid = stanza.getFrom();\n+ final Jid jid = getJidToInspect(stanza);\nif (jid == null) {\nreturn false;\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix AbstractJidTypeFilter.accept() |
299,323 | 16.03.2017 11:25:38 | -3,600 | 8a8c01a4e5e8f35d5e29d50c89afc2a3bf9dbe3d | Fix AbstractError.getDescriptiveText() | [
{
"change_type": "MODIFY",
"old_path": "smack-core/src/main/java/org/jivesoftware/smack/packet/AbstractError.java",
"new_path": "smack-core/src/main/java/org/jivesoftware/smack/packet/AbstractError.java",
"diff": "@@ -66,9 +66,12 @@ public class AbstractError {\npublic String getDescriptiveText() {\nString defaultLocale = Locale.getDefault().getLanguage();\nString descriptiveText = getDescriptiveText(defaultLocale);\n+ if (descriptiveText == null) {\n+ descriptiveText = getDescriptiveText(\"en\");\nif(descriptiveText == null) {\ndescriptiveText = getDescriptiveText(\"\");\n}\n+ }\nreturn descriptiveText;\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix AbstractError.getDescriptiveText() |
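To illustrate the fallback order introduced above (default locale, then "en", then the text without a language tag), here is a small hedged sketch of reading the human-readable text from a caught XMPPErrorException; both accessors used below also appear elsewhere in this change set.

    import org.jivesoftware.smack.XMPPException.XMPPErrorException;
    import org.jivesoftware.smack.packet.XMPPError;

    public class ErrorTextSketch {
        // Prefer the localized descriptive text; fall back to the error condition
        // if the server supplied no <text/> element at all.
        public static String describe(XMPPErrorException e) {
            XMPPError error = e.getXMPPError();
            String text = error.getDescriptiveText();
            return text != null ? text : error.getCondition().toString();
        }
    }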
299,287 | 27.03.2017 18:00:38 | -10,800 | 430cfa0ec5290661c1436f1dda091da0906c8a4d | setPort accept integer only
changed string to int. setPort function is only accepting integer. | [
{
"change_type": "MODIFY",
"old_path": "documentation/gettingstarted.md",
"new_path": "documentation/gettingstarted.md",
"diff": "@@ -55,7 +55,7 @@ XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()\n.setUsernameAndPassword(\"username\", \"password\")\n.setXmppDomain(\"jabber.org\")\n.setHost(\"earl.jabber.org\")\n- .setPort(\"8222\")\n+ .setPort(8222)\n.build();\nAbstractXMPPConnection conn2 = **new** XMPPTCPConnection(config);\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | setPort accept integer only
changed string to int. setPort function is only accepting integer. |
299,308 | 03.04.2017 18:59:14 | -25,200 | c8b4df4f849fdf1af0ec2f1fffcb93082a2001ad | Fix EnablePushNotificationsIQ wrong form type
Should be submit instead of form
Fixes | [
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/push_notifications/element/EnablePushNotificationsIQ.java",
"new_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/push_notifications/element/EnablePushNotificationsIQ.java",
"diff": "@@ -96,7 +96,7 @@ public class EnablePushNotificationsIQ extends IQ {\nxml.rightAngleBracket();\nif (publishOptions != null) {\n- DataForm dataForm = new DataForm(DataForm.Type.form);\n+ DataForm dataForm = new DataForm(DataForm.Type.submit);\nFormField formTypeField = new FormField(\"FORM_TYPE\");\nformTypeField.addValue(PubSub.NAMESPACE + \"#publish-options\");\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/push_notifications/EnablePushNotificationsIQTest.java",
"new_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/push_notifications/EnablePushNotificationsIQTest.java",
"diff": "@@ -31,7 +31,7 @@ public class EnablePushNotificationsIQTest {\nString exampleEnableIQWithPublishOptions = \"<iq id='x42' type='set'>\"\n+ \"<enable xmlns='urn:xmpp:push:0' jid='push-5.client.example' node='yxs32uqsflafdk3iuqo'>\"\n- + \"<x xmlns='jabber:x:data' type='form'>\"\n+ + \"<x xmlns='jabber:x:data' type='submit'>\"\n+ \"<field var='FORM_TYPE'><value>http://jabber.org/protocol/pubsub#publish-options</value></field>\"\n+ \"<field var='secret'><value>eruio234vzxc2kla-91</value></field>\" + \"</x>\" + \"</enable>\" + \"</iq>\";\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix EnablePushNotificationsIQ wrong form type
Should be submit instead of form
Fixes SMACK-752 |
299,323 | 19.04.2017 11:34:47 | -7,200 | 10927577ad15c27bc8c8a90bad8661cc6e009868 | Fix getOrCreateLeafNode for prosody | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/pubsub/PubSubManager.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/pubsub/PubSubManager.java",
"diff": "@@ -289,9 +289,7 @@ public final class PubSubManager extends Manager {\nthrow e2;\n}\n}\n- throw e1;\n- }\n- catch (PubSubAssertionError.DiscoInfoNodeAssertionError e) {\n+ if (e1.getXMPPError().getCondition() == Condition.service_unavailable) {\n// This could be caused by Prosody bug #805 (see https://prosody.im/issues/issue/805). Prosody does not\n// answer to disco#info requests on the node ID, which makes it undecidable if a node is a leaf or\n// collection node.\n@@ -299,6 +297,8 @@ public final class PubSubManager extends Manager {\n+ \" threw an DiscoInfoNodeAssertionError, trying workaround for Prosody bug #805 (https://prosody.im/issues/issue/805)\");\nreturn getOrCreateLeafNodeProsodyWorkaround(id);\n}\n+ throw e1;\n+ }\n}\nprivate LeafNode getOrCreateLeafNodeProsodyWorkaround(final String id)\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix getOrCreateLeafNode for prosody |
299,323 | 20.04.2017 22:40:53 | -7,200 | 842cc2c2a4dfb8c477ec11729630b9ffa9029fb2 | Fix smack-repl build script | [
{
"change_type": "MODIFY",
"old_path": "smack-repl/build.gradle",
"new_path": "smack-repl/build.gradle",
"diff": "@@ -16,6 +16,8 @@ dependencies {\ntestCompile project(path: \":smack-core\", configuration: \"archives\")\n}\n-task printClasspath(dependsOn: assemble) << {\n+task printClasspath(dependsOn: assemble) {\n+ doLast {\nprintln sourceSets.main.runtimeClasspath.asPath\n}\n+}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix smack-repl build script |
299,323 | 23.04.2017 23:37:14 | -7,200 | 6bebeb354ba2b31ca8ebdbea1c5c12e21912cf06 | Fix StoreHint (wrong element name) | [
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/hints/element/StoreHint.java",
"new_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/hints/element/StoreHint.java",
"diff": "@@ -27,7 +27,7 @@ public final class StoreHint extends MessageProcessingHint {\npublic static final StoreHint INSTANCE = new StoreHint();\n- public static final String ELEMENT = \"no-store\";\n+ public static final String ELEMENT = \"store\";\nprivate StoreHint() {\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix StoreHint (wrong element name) |
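With the element name corrected to <store/>, the hint can be attached to an outgoing message so servers keep it in the archive. The sketch below simply adds the singleton as an extension element, which is one way to do it under the assumption that the hint classes are ordinary extension elements; a dedicated helper may also exist.

    import org.jivesoftware.smack.packet.Message;
    import org.jivesoftware.smackx.hints.element.StoreHint;

    public class StoreHintSketch {
        // Mark a message with the XEP-0334 <store/> processing hint.
        public static void markForStorage(Message message) {
            message.addExtension(StoreHint.INSTANCE);
        }
    }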
299,323 | 24.04.2017 22:58:52 | -7,200 | 92902091ff1671522fe4eecd0059e47cc23bbdae | Apply idea plugin for Intellij | [
{
"change_type": "MODIFY",
"old_path": ".gitignore",
"new_path": ".gitignore",
"diff": "# IntelliJ\n.idea\n*.iml\n+*.ipr\n+*.iws\n# Mac OS X\n.DS_Store\n"
},
{
"change_type": "MODIFY",
"old_path": "build.gradle",
"new_path": "build.gradle",
"diff": "@@ -20,6 +20,7 @@ apply from: 'version.gradle'\nallprojects {\napply plugin: 'java'\napply plugin: 'eclipse'\n+ apply plugin: 'idea'\napply plugin: 'jacoco'\napply plugin: 'net.ltgt.errorprone'\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Apply idea plugin for Intellij |
299,298 | 19.05.2017 10:16:18 | -10,800 | b636883ce656fe066d55e46de5ff7f763a4e04a5 | Fix NPE in hashCode() in Occupant when jid is null
Fixes | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/muc/Occupant.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/muc/Occupant.java",
"diff": "@@ -124,7 +124,7 @@ public class Occupant {\nint result;\nresult = affiliation.hashCode();\nresult = 17 * result + role.hashCode();\n- result = 17 * result + jid.hashCode();\n+ result = 17 * result + (jid != null ? jid.hashCode() : 0);\nresult = 17 * result + (nick != null ? nick.hashCode() : 0);\nreturn result;\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix NPE in hashCode() in Occupant when jid is null
Fixes SMACK-764. |
299,323 | 29.05.2017 22:32:14 | -7,200 | 655cd873a326d74c94eb0b216fb3f6f5b498dc8e | Make sure, archiving is enabled | [
{
"change_type": "MODIFY",
"old_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/mam/MamIntegrationTest.java",
"new_path": "smack-integration-test/src/main/java/org/jivesoftware/smackx/mam/MamIntegrationTest.java",
"diff": "@@ -31,6 +31,7 @@ import org.jivesoftware.smack.XMPPException.XMPPErrorException;\nimport org.jivesoftware.smack.packet.Message;\nimport org.jivesoftware.smackx.forward.packet.Forwarded;\nimport org.jivesoftware.smackx.mam.MamManager.MamQueryResult;\n+import org.jivesoftware.smackx.mam.element.MamPrefsIQ;\nimport org.jxmpp.jid.EntityBareJid;\npublic class MamIntegrationTest extends AbstractSmackIntegrationTest {\n@@ -55,6 +56,9 @@ public class MamIntegrationTest extends AbstractSmackIntegrationTest {\nEntityBareJid userOne = conOne.getUser().asEntityBareJid();\nEntityBareJid userTwo = conTwo.getUser().asEntityBareJid();\n+ //Make sure MAM is archiving messages\n+ mamManagerConTwo.updateArchivingPreferences(null, null, MamPrefsIQ.DefaultBehavior.always);\n+\nMessage message = new Message(userTwo);\nString messageId = message.setStanzaId();\nString messageBody = \"test message\";\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Make sure, archiving is enabled |
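The integration test above turns on archiving before exercising the archive; outside of tests the same call can be used. A minimal sketch, mirroring the test's arguments (everything but the default behavior left null):

    import org.jivesoftware.smackx.mam.MamManager;
    import org.jivesoftware.smackx.mam.element.MamPrefsIQ;

    public class MamPrefsSketch {
        // Ask the server to archive all messages for this account.
        public static void archiveEverything(MamManager mamManager) throws Exception {
            mamManager.updateArchivingPreferences(null, null, MamPrefsIQ.DefaultBehavior.always);
        }
    }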
299,323 | 03.06.2017 00:33:56 | -7,200 | 28f3130cf903d518747e9b3e9a01ddc8d0d844a2 | Add Use of Cryptographic Hashfunctions
Also move bouncycastle dep from smack-omemo to
smack-experimental. | [
{
"change_type": "MODIFY",
"old_path": "smack-experimental/build.gradle",
"new_path": "smack-experimental/build.gradle",
"diff": "@@ -10,4 +10,6 @@ dependencies {\ntestCompile project(path: \":smack-core\", configuration: \"testRuntime\")\ntestCompile project(path: \":smack-core\", configuration: \"archives\")\ntestCompile project(path: \":smack-extensions\", configuration: \"testRuntime\")\n+\n+ compile \"org.bouncycastle:bcprov-jdk15on:1.57\"\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/hashes/HashElementTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.hashes;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smack.test.util.TestUtils;\n+import org.jivesoftware.smack.util.StringUtils;\n+import org.jivesoftware.smackx.hashes.element.HashElement;\n+import org.jivesoftware.smackx.hashes.provider.HashElementProvider;\n+import org.junit.Test;\n+\n+import static junit.framework.TestCase.assertEquals;\n+import static org.jivesoftware.smackx.hashes.HashManager.ALGORITHM.SHA_256;\n+import static org.junit.Assert.assertArrayEquals;\n+import static org.junit.Assert.assertFalse;\n+import static org.junit.Assert.assertTrue;\n+\n+/**\n+ * Test toXML and parse of HashElement and HashElementProvider.\n+ */\n+public class HashElementTest extends SmackTestSuite {\n+\n+ @Test\n+ public void stanzaTest() throws Exception {\n+ String message = \"Hello World!\";\n+ HashElement element = HashManager.calculateHashElement(SHA_256, message.getBytes(StringUtils.UTF8));\n+ String expected = \"<hash xmlns='urn:xmpp:hashes:2' algo='sha-256'>f4OxZX/x/FO5LcGBSKHWXfwtSx+j1ncoSt3SABJtkGk=</hash>\";\n+ assertEquals(expected, element.toXML().toString());\n+\n+ HashElement parsed = new HashElementProvider().parse(TestUtils.getParser(expected));\n+ assertEquals(expected, parsed.toXML().toString());\n+ assertEquals(SHA_256, parsed.getAlgorithm());\n+ assertEquals(\"f4OxZX/x/FO5LcGBSKHWXfwtSx+j1ncoSt3SABJtkGk=\", parsed.getHashB64());\n+ assertArrayEquals(HashManager.sha_256(message.getBytes(StringUtils.UTF8)), parsed.getHash());\n+\n+ assertFalse(parsed.equals(expected));\n+ assertFalse(parsed.equals(null));\n+ assertEquals(element, parsed);\n+ assertTrue(element.equals(parsed));\n+\n+ HashElement other = new HashElement(HashManager.ALGORITHM.SHA_512,\n+ \"861844d6704e8573fec34d967e20bcfef3d424cf48be04e6dc08f2bd58c729743371015ead891cc3cf1c9d34b49264b510751b1ff9e537937bc46b5d6ff4ecc8\".getBytes(StringUtils.UTF8));\n+ assertFalse(element.equals(other));\n+ }\n+\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/hashes/HashTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.hashes;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smack.util.StringUtils;\n+import org.junit.Test;\n+\n+import java.io.UnsupportedEncodingException;\n+\n+import static junit.framework.TestCase.assertEquals;\n+\n+/**\n+ * Test HashManager functionality.\n+ * The test sums got calculated using 'echo \"Hello World!\" | { md5sum, sha1sum, sha224sum, sha256sum, sha384sum, sha512sum,\n+ * sha3-224sum -l, sha3-256sum -l, sha3-384sum -l, sha3-512sum -l, b2sum -l 160, b2sum -l 256, b2sum -l 384, b2sum -l 512 }\n+ */\n+public class HashTest extends SmackTestSuite {\n+\n+ private static final String testString = \"Hello World!\";\n+ private static final String md5sum = \"ed076287532e86365e841e92bfc50d8c\";\n+ private static final String sha1sum = \"2ef7bde608ce5404e97d5f042f95f89f1c232871\";\n+ private static final String sha224sum = \"4575bb4ec129df6380cedde6d71217fe0536f8ffc4e18bca530a7a1b\";\n+ private static final String sha256sum = \"7f83b1657ff1fc53b92dc18148a1d65dfc2d4b1fa3d677284addd200126d9069\";\n+ private static final String sha384sum = \"bfd76c0ebbd006fee583410547c1887b0292be76d582d96c242d2a792723e3fd6fd061f9d5cfd13b8f961358e6adba4a\";\n+ private static final String sha512sum = \"861844d6704e8573fec34d967e20bcfef3d424cf48be04e6dc08f2bd58c729743371015ead891cc3cf1c9d34b49264b510751b1ff9e537937bc46b5d6ff4ecc8\";\n+ private static final String sha3_224sum = \"716596afadfa17cd1cb35133829a02b03e4eed398ce029ce78a2161d\";\n+ private static final String sha3_256sum = \"d0e47486bbf4c16acac26f8b653592973c1362909f90262877089f9c8a4536af\";\n+ private static final String sha3_384sum = \"f324cbd421326a2abaedf6f395d1a51e189d4a71c755f531289e519f079b224664961e385afcc37da348bd859f34fd1c\";\n+ private static final String sha3_512sum = \"32400b5e89822de254e8d5d94252c52bdcb27a3562ca593e980364d9848b8041b98eabe16c1a6797484941d2376864a1b0e248b0f7af8b1555a778c336a5bf48\";\n+ private static final String b2_160sum = \"e7338d05e5aa2b5e4943389f9475fce2525b92f2\";\n+ private static final String b2_256sum = \"bf56c0728fd4e9cf64bfaf6dabab81554103298cdee5cc4d580433aa25e98b00\";\n+ private static final String b2_384sum = \"53fd759520545fe93270e61bac03b243b686af32ed39a4aa635555be47a89004851d6a13ece00d95b7bdf9910cb71071\";\n+ private static final String b2_512sum = \"54b113f499799d2f3c0711da174e3bc724737ad18f63feb286184f0597e1466436705d6c8e8c7d3d3b88f5a22e83496e0043c44a3c2b1700e0e02259f8ac468e\";\n+\n+ private byte[] array() {\n+ if (testArray == null) {\n+ try {\n+ testArray = testString.getBytes(StringUtils.UTF8);\n+ } catch (UnsupportedEncodingException e) {\n+ throw new AssertionError(\"UTF8 MUST be supported.\");\n+ }\n+ }\n+ return testArray;\n+ }\n+\n+ private byte[] testArray;\n+\n+ @Test\n+ public void hashTest() {\n+ assertEquals(md5sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.MD5, array())));\n+ 
assertEquals(sha1sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA_1, array())));\n+ assertEquals(sha224sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA_224, array())));\n+ assertEquals(sha256sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA_256, array())));\n+ assertEquals(sha384sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA_384, array())));\n+ assertEquals(sha512sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA_512, array())));\n+ assertEquals(sha3_224sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA3_224, array())));\n+ assertEquals(sha3_256sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA3_256, array())));\n+ assertEquals(sha3_384sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA3_384, array())));\n+ assertEquals(sha3_512sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA3_512, array())));\n+ assertEquals(b2_160sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.BLAKE2B160, array())));\n+ assertEquals(b2_256sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.BLAKE2B256, array())));\n+ assertEquals(b2_384sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.BLAKE2B384, array())));\n+ assertEquals(b2_512sum, HashManager.hex(HashManager.hash(HashManager.ALGORITHM.BLAKE2B512, array())));\n+ }\n+\n+ @Test\n+ public void md5Test() {\n+ String actual = HashManager.hex(HashManager.md5(array()));\n+ assertEquals(md5sum, actual);\n+ }\n+\n+ @Test\n+ public void sha1Test() {\n+ String actual = HashManager.hex(HashManager.sha_1(array()));\n+ assertEquals(sha1sum, actual);\n+ }\n+\n+ @Test\n+ public void sha224Test() {\n+ String actual = HashManager.hex(HashManager.sha_224(array()));\n+ assertEquals(sha224sum, actual);\n+ }\n+\n+ @Test\n+ public void sha256Test() {\n+ String actual = HashManager.hex(HashManager.sha_256(array()));\n+ assertEquals(sha256sum, actual);\n+ }\n+\n+ @Test\n+ public void sha384Test() {\n+ String actual = HashManager.hex(HashManager.sha_384(array()));\n+ assertEquals(sha384sum, actual);\n+ }\n+\n+ @Test\n+ public void sha512Test() {\n+ String actual = HashManager.hex(HashManager.sha_512(array()));\n+ assertEquals(sha512sum, actual);\n+ }\n+\n+ @Test\n+ public void sha3_224Test() {\n+ String actual = HashManager.hex(HashManager.sha3_224(array()));\n+ assertEquals(sha3_224sum, actual);\n+ }\n+\n+ @Test\n+ public void sha3_256Test() {\n+ String actual = HashManager.hex(HashManager.sha3_256(array()));\n+ assertEquals(sha3_256sum, actual);\n+ }\n+\n+ @Test\n+ public void sha3_384Test() {\n+ String actual = HashManager.hex(HashManager.sha3_384(array()));\n+ assertEquals(sha3_384sum, actual);\n+ }\n+\n+ @Test\n+ public void sha3_512Test() {\n+ String actual = HashManager.hex(HashManager.sha3_512(array()));\n+ assertEquals(sha3_512sum, actual);\n+ }\n+\n+ @Test\n+ public void blake2b160Test() {\n+ String actual = HashManager.hex(HashManager.blake2b160(array()));\n+ assertEquals(b2_160sum, actual);\n+ }\n+\n+ @Test\n+ public void blake2b256Test() {\n+ String actual = HashManager.hex(HashManager.blake2b256(array()));\n+ assertEquals(b2_256sum, actual);\n+ }\n+\n+ @Test\n+ public void blake2b384Test() {\n+ String actual = HashManager.hex(HashManager.blake2b384(array()));\n+ assertEquals(b2_384sum, actual);\n+ }\n+\n+ @Test\n+ public void blake2b512Test() {\n+ String actual = HashManager.hex(HashManager.blake2b512(array()));\n+ assertEquals(b2_512sum, actual);\n+ }\n+\n+ @Test\n+ public void asFeatureTest() {\n+ 
assertEquals(\"urn:xmpp:hash-function-text-names:id-blake2b384\", HashManager.asFeature(HashManager.ALGORITHM.BLAKE2B384));\n+ assertEquals(\"urn:xmpp:hash-function-text-names:md5\", HashManager.asFeature(HashManager.ALGORITHM.MD5));\n+ assertEquals(\"urn:xmpp:hash-function-text-names:sha3-512\", HashManager.asFeature(HashManager.ALGORITHM.SHA3_512));\n+ assertEquals(\"urn:xmpp:hash-function-text-names:sha-512\", HashManager.asFeature(HashManager.ALGORITHM.SHA_512));\n+ }\n+}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-omemo/build.gradle",
"new_path": "smack-omemo/build.gradle",
"diff": "@@ -8,7 +8,6 @@ dependencies {\ncompile project(\":smack-im\")\ncompile project(\":smack-extensions\")\ncompile project(\":smack-experimental\")\n- compile \"org.bouncycastle:bcprov-jdk15on:1.57\"\ntestCompile project(path: \":smack-core\", configuration: \"testRuntime\")\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Add Use of Cryptographic Hashfunctions (XEP-300)
Also move bouncycastle dep from smack-omemo to
smack-experimental. |
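A short, hedged usage sketch for the XEP-0300 hash API exercised by the new tests. The literal digest and Base64 values in the comments are copied from the test vectors above; nothing else is assumed.

```java
import org.jivesoftware.smack.util.StringUtils;
import org.jivesoftware.smackx.hashes.HashManager;
import org.jivesoftware.smackx.hashes.element.HashElement;

public class HashElementExample {

    public static void main(String[] args) throws Exception {
        byte[] data = "Hello World!".getBytes(StringUtils.UTF8);

        // Plain digest, hex encoded:
        // 7f83b1657ff1fc53b92dc18148a1d65dfc2d4b1fa3d677284addd200126d9069
        String hex = HashManager.hex(HashManager.hash(HashManager.ALGORITHM.SHA_256, data));

        // XEP-0300 element carrying the same digest in Base64:
        // <hash xmlns='urn:xmpp:hashes:2' algo='sha-256'>f4OxZX/x/FO5LcGBSKHWXfwtSx+j1ncoSt3SABJtkGk=</hash>
        HashElement element = HashManager.calculateHashElement(HashManager.ALGORITHM.SHA_256, data);

        System.out.println(hex);
        System.out.println(element.toXML());
    }
}
```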
299,323 | 03.06.2017 23:23:23 | -7,200 | 23ed0bdbce32e87ac2a42d7dab747fa412ed1405 | Add missing security-info in JingleAction
Also fix typo | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleAction.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleAction.java",
"diff": "@@ -33,10 +33,11 @@ public enum JingleAction {\ncontent_reject,\ncontent_remove,\ndescription_info,\n+ security_info,\nsession_accept,\nsession_info,\nsession_initiate,\n- sessio_terminate,\n+ session_terminate,\ntransport_accept,\ntransport_info,\ntransport_reject,\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Add missing security-info in JingleAction
Also fix typo |
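The enum uses underscores in Java while the wire format uses hyphens; a tiny round-trip sketch, based only on the enum above and the JingleActionTest added in the following commit.

```java
import org.jivesoftware.smackx.jingle.element.JingleAction;

public class JingleActionExample {

    public static void main(String[] args) {
        // "session-terminate" on the wire maps to session_terminate in Java ...
        JingleAction action = JingleAction.fromString("session-terminate");
        System.out.println(action == JingleAction.session_terminate); // true

        // ... and toString() converts back to the hyphenated wire form.
        System.out.println(action); // session-terminate
    }
}
```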
299,323 | 03.06.2017 23:46:29 | -7,200 | 0a3116195064562204ebdc650e7fd01b4c904953 | Add tests for jingle classes
Depends on | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleActionTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smackx.jingle.element.JingleAction;\n+import org.junit.Test;\n+\n+import static junit.framework.TestCase.assertEquals;\n+import static junit.framework.TestCase.fail;\n+\n+/**\n+ * Test the JingleAction class.\n+ */\n+public class JingleActionTest extends SmackTestSuite {\n+\n+ @Test\n+ public void enumTest() {\n+ assertEquals(\"content-accept\", JingleAction.content_accept.toString());\n+ assertEquals(JingleAction.content_accept, JingleAction.fromString(\"content-accept\"));\n+\n+ for (JingleAction a : JingleAction.values()) {\n+ assertEquals(a, JingleAction.fromString(a.toString()));\n+ }\n+\n+ try {\n+ JingleAction inexistentAction = JingleAction.fromString(\"inexistent-action\");\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // Expected\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleContentTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smackx.jingle.element.JingleContent;\n+import org.junit.Test;\n+\n+import static junit.framework.TestCase.assertEquals;\n+import static junit.framework.TestCase.assertNotNull;\n+import static junit.framework.TestCase.assertNotSame;\n+import static junit.framework.TestCase.assertNull;\n+import static junit.framework.TestCase.fail;\n+\n+/**\n+ * Test the JingleContent class.\n+ */\n+public class JingleContentTest extends SmackTestSuite {\n+\n+ @Test\n+ public void parserTest() {\n+\n+ JingleContent.Builder builder = JingleContent.getBuilder();\n+\n+ try {\n+ builder.build();\n+ fail();\n+ } catch (NullPointerException e) {\n+ // Expected\n+ }\n+ builder.setCreator(JingleContent.Creator.initiator);\n+\n+ try {\n+ builder.build();\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // Expected\n+ }\n+ builder.setName(\"A name\");\n+\n+ JingleContent content = builder.build();\n+ assertNotNull(content);\n+ assertNull(content.getDescription());\n+ assertEquals(JingleContent.Creator.initiator, content.getCreator());\n+ assertEquals(\"A name\", content.getName());\n+\n+ builder.setSenders(JingleContent.Senders.both);\n+ content = builder.build();\n+ assertEquals(JingleContent.Senders.both, content.getSenders());\n+\n+ builder.setDisposition(\"session\");\n+ JingleContent content1 = builder.build();\n+ assertEquals(\"session\", content1.getDisposition());\n+ assertNotSame(content.toXML().toString(), content1.toXML().toString());\n+ assertEquals(content1.toXML().toString(), builder.build().toXML().toString());\n+\n+ assertEquals(0, content.getJingleTransportsCount());\n+ assertNotNull(content.getJingleTransports());\n+\n+ String xml =\n+ \"<content creator='initiator' disposition='session' name='A name' senders='both'>\" +\n+ \"</content>\";\n+ assertEquals(xml, content1.toXML().toString());\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleErrorTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smackx.jingle.element.JingleError;\n+import org.junit.Test;\n+\n+import static junit.framework.TestCase.assertEquals;\n+import static junit.framework.TestCase.fail;\n+\n+/**\n+ * Test the JingleError class. TODO: Uncomment missing tests once implemented.\n+ */\n+public class JingleErrorTest extends SmackTestSuite {\n+\n+ @Test\n+ public void parserTest() {\n+ assertEquals(\"<out-of-order xmlns='urn:xmpp:jingle:errors:1'/>\",\n+ JingleError.fromString(\"out-of-order\").toXML().toString());\n+ //assertEquals(\"<tie-break xmlns='urn:xmpp:jingle:errors:1'/>\",\n+ // JingleError.fromString(\"tie-break\").toXML().toString());\n+ assertEquals(\"<unknown-session xmlns='urn:xmpp:jingle:errors:1'/>\",\n+ JingleError.fromString(\"unknown-session\").toXML().toString());\n+ //assertEquals(\"<unsupported-info xmlns='urn:xmpp:jingle:errors:1'/>\",\n+ // JingleError.fromString(\"unsupported-info\").toXML().toString());\n+ assertEquals(\"unknown-session\", JingleError.fromString(\"unknown-session\").getMessage());\n+\n+ try {\n+ JingleError.fromString(\"inexistent-error\");\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // Expected\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleReasonTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smackx.jingle.element.JingleReason;\n+import org.junit.Test;\n+\n+import static junit.framework.TestCase.assertEquals;\n+import static junit.framework.TestCase.fail;\n+\n+/**\n+ * Test JingleReason functionality.\n+ */\n+public class JingleReasonTest extends SmackTestSuite {\n+\n+ @Test\n+ public void parserTest() {\n+ assertEquals(\"<reason><success/></reason>\",\n+ new JingleReason(JingleReason.Reason.success).toXML().toString());\n+ assertEquals(\"<reason><busy/></reason>\",\n+ new JingleReason(JingleReason.Reason.busy).toXML().toString());\n+ assertEquals(\"<reason><cancel/></reason>\",\n+ new JingleReason(JingleReason.Reason.cancel).toXML().toString());\n+ assertEquals(\"<reason><connectivity-error/></reason>\",\n+ new JingleReason(JingleReason.Reason.connectivity_error).toXML().toString());\n+ assertEquals(\"<reason><decline/></reason>\",\n+ new JingleReason(JingleReason.Reason.decline).toXML().toString());\n+ assertEquals(\"<reason><expired/></reason>\",\n+ new JingleReason(JingleReason.Reason.expired).toXML().toString());\n+ assertEquals(\"<reason><unsupported-transports/></reason>\",\n+ new JingleReason(JingleReason.Reason.unsupported_transports).toXML().toString());\n+ assertEquals(\"<reason><failed-transport/></reason>\",\n+ new JingleReason(JingleReason.Reason.failed_transport).toXML().toString());\n+ assertEquals(\"<reason><general-error/></reason>\",\n+ new JingleReason(JingleReason.Reason.general_error).toXML().toString());\n+ assertEquals(\"<reason><gone/></reason>\",\n+ new JingleReason(JingleReason.Reason.gone).toXML().toString());\n+ assertEquals(\"<reason><media-error/></reason>\",\n+ new JingleReason(JingleReason.Reason.media_error).toXML().toString());\n+ assertEquals(\"<reason><security-error/></reason>\",\n+ new JingleReason(JingleReason.Reason.security_error).toXML().toString());\n+ assertEquals(\"<reason><unsupported-applications/></reason>\",\n+ new JingleReason(JingleReason.Reason.unsupported_applications).toXML().toString());\n+ assertEquals(\"<reason><timeout/></reason>\",\n+ new JingleReason(JingleReason.Reason.timeout).toXML().toString());\n+ assertEquals(\"<reason><failed-application/></reason>\",\n+ new JingleReason(JingleReason.Reason.failed_application).toXML().toString());\n+ assertEquals(\"<reason><incompatible-parameters/></reason>\",\n+ new JingleReason(JingleReason.Reason.incompatible_parameters).toXML().toString());\n+ assertEquals(JingleReason.Reason.alternative_session, JingleReason.Reason.fromString(\"alternative-session\"));\n+ try {\n+ JingleReason.Reason nonExistent = JingleReason.Reason.fromString(\"illegal-reason\");\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // Expected\n+ }\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleSessionTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smack.util.StringUtils;\n+import org.junit.Test;\n+import org.jxmpp.jid.Jid;\n+import org.jxmpp.jid.impl.JidCreate;\n+import org.jxmpp.stringprep.XmppStringprepException;\n+\n+import static junit.framework.TestCase.assertEquals;\n+import static junit.framework.TestCase.assertNotSame;\n+\n+/**\n+ * Test JingleSession class.\n+ */\n+public class JingleSessionTest extends SmackTestSuite {\n+\n+ @Test\n+ public void sessionTest() throws XmppStringprepException {\n+ Jid romeo = JidCreate.from(\"[email protected]\");\n+ Jid juliet = JidCreate.from(\"[email protected]\");\n+ String sid = StringUtils.randomString(24);\n+\n+ JingleSession s1 = new JingleSession(romeo, juliet, sid);\n+ JingleSession s2 = new JingleSession(juliet, romeo, sid);\n+ JingleSession s3 = new JingleSession(romeo, juliet, StringUtils.randomString(23));\n+ JingleSession s4 = new JingleSession(juliet, romeo, sid);\n+\n+ assertNotSame(s1, s2);\n+ assertNotSame(s1, s3);\n+ assertNotSame(s2, s3);\n+ assertEquals(s2, s4);\n+ assertEquals(s2.hashCode(), s4.hashCode());\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleTest.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle;\n+\n+import org.jivesoftware.smack.test.util.SmackTestSuite;\n+import org.jivesoftware.smack.util.StringUtils;\n+import org.jivesoftware.smackx.jingle.element.Jingle;\n+import org.jivesoftware.smackx.jingle.element.JingleAction;\n+import org.junit.Test;\n+import org.jxmpp.jid.FullJid;\n+import org.jxmpp.jid.impl.JidCreate;\n+import org.jxmpp.stringprep.XmppStringprepException;\n+\n+import static junit.framework.TestCase.assertEquals;\n+import static junit.framework.TestCase.assertNotNull;\n+import static junit.framework.TestCase.assertTrue;\n+import static junit.framework.TestCase.fail;\n+\n+/**\n+ * Test the Jingle class.\n+ */\n+public class JingleTest extends SmackTestSuite {\n+\n+ @Test\n+ public void parserTest() throws XmppStringprepException {\n+ String sessionId = StringUtils.randomString(24);\n+\n+ Jingle.Builder builder = Jingle.getBuilder();\n+ try {\n+ builder.build();\n+ fail();\n+ } catch (IllegalArgumentException e) {\n+ // Expected\n+ }\n+ builder.setSessionId(sessionId);\n+\n+ try {\n+ builder.build();\n+ fail();\n+ } catch (NullPointerException e) {\n+ // Expected\n+ }\n+ builder.setAction(JingleAction.session_initiate);\n+\n+ FullJid romeo = JidCreate.fullFrom(\"[email protected]/orchard\");\n+ FullJid juliet = JidCreate.fullFrom(\"[email protected]/balcony\");\n+ builder.setInitiator(romeo);\n+ builder.setResponder(juliet);\n+\n+ Jingle jingle = builder.build();\n+ assertNotNull(jingle);\n+ assertEquals(romeo, jingle.getInitiator());\n+ assertEquals(juliet, jingle.getResponder());\n+ assertEquals(jingle.getAction(), JingleAction.session_initiate);\n+ assertEquals(sessionId, jingle.getSid());\n+\n+ String xml = \"<jingle xmlns='urn:xmpp:jingle:1' \" +\n+ \"initiator='[email protected]/orchard' \" +\n+ \"responder='[email protected]/balcony' \" +\n+ \"action='session-initiate' \" +\n+ \"sid='\" + sessionId + \"'>\" +\n+ \"</jingle>\";\n+ assertTrue(jingle.toXML().toString().contains(xml));\n+ }\n+}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Add tests for jingle classes
Depends on #135, #136 |
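A hedged sketch of building a session-initiate IQ with the Jingle.Builder API these tests exercise. The JIDs and the expected XML shape are taken from JingleTest above; the surrounding main method is illustrative only.

```java
import org.jivesoftware.smack.util.StringUtils;
import org.jivesoftware.smackx.jingle.element.Jingle;
import org.jivesoftware.smackx.jingle.element.JingleAction;
import org.jxmpp.jid.FullJid;
import org.jxmpp.jid.impl.JidCreate;

public class JingleBuilderExample {

    public static void main(String[] args) throws Exception {
        FullJid initiator = JidCreate.fullFrom("romeo@montague.lit/orchard");
        FullJid responder = JidCreate.fullFrom("juliet@capulet.lit/balcony");

        // Session id and action are mandatory; build() rejects incomplete input.
        Jingle.Builder builder = Jingle.getBuilder();
        builder.setSessionId(StringUtils.randomString(24));
        builder.setAction(JingleAction.session_initiate);
        builder.setInitiator(initiator);
        builder.setResponder(responder);
        Jingle jingle = builder.build();

        // Contains <jingle xmlns='urn:xmpp:jingle:1' initiator='...' responder='...'
        //           action='session-initiate' sid='...'>
        System.out.println(jingle.toXML());
    }
}
```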
299,323 | 04.06.2017 20:41:27 | -7,200 | 23190604bdb6a36d12593619cb40b0ca484a4384 | Fix typos and xml issues | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/JingleHandler.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/JingleHandler.java",
"diff": "@@ -21,6 +21,6 @@ import org.jivesoftware.smackx.jingle.element.Jingle;\npublic interface JingleHandler {\n- IQ handleRequest(Jingle jingle);\n+ IQ handleJingleRequest(Jingle jingle);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/JingleManager.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/JingleManager.java",
"diff": "@@ -23,8 +23,8 @@ import java.util.logging.Logger;\nimport org.jivesoftware.smack.Manager;\nimport org.jivesoftware.smack.XMPPConnection;\n-import org.jivesoftware.smack.iqrequest.IQRequestHandler.Mode;\nimport org.jivesoftware.smack.iqrequest.AbstractIqRequestHandler;\n+import org.jivesoftware.smack.iqrequest.IQRequestHandler.Mode;\nimport org.jivesoftware.smack.packet.IQ;\nimport org.jivesoftware.smack.packet.IQ.Type;\nimport org.jivesoftware.smackx.jingle.element.Jingle;\n@@ -72,7 +72,7 @@ public final class JingleManager extends Manager {\n// TODO handle non existing jingle session handler.\nreturn null;\n}\n- return jingleSessionHandler.handleRequest(jingle, sid);\n+ return jingleSessionHandler.handleJingleSessionRequest(jingle, sid);\n}\nif (jingle.getContents().size() > 1) {\n@@ -88,7 +88,7 @@ public final class JingleManager extends Manager {\n// TODO handle non existing content description handler.\nreturn null;\n}\n- return jingleDescriptionHandler.handleRequest(jingle);\n+ return jingleDescriptionHandler.handleJingleRequest(jingle);\n}\n});\n}\n@@ -102,7 +102,7 @@ public final class JingleManager extends Manager {\nreturn jingleSessionHandlers.put(fullJidAndSessionId, sessionHandler);\n}\n- public JingleSessionHandler unregisterJingleSessionhandler(FullJid otherJid, String sessionId, JingleSessionHandler sessionHandler) {\n+ public JingleSessionHandler unregisterJingleSessionHandler(FullJid otherJid, String sessionId, JingleSessionHandler sessionHandler) {\nFullJidAndSessionId fullJidAndSessionId = new FullJidAndSessionId(otherJid, sessionId);\nreturn jingleSessionHandlers.remove(fullJidAndSessionId);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/JingleSessionHandler.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/JingleSessionHandler.java",
"diff": "@@ -21,6 +21,6 @@ import org.jivesoftware.smackx.jingle.element.Jingle;\npublic interface JingleSessionHandler {\n- IQ handleRequest(Jingle jingle, String sessionId);\n+ IQ handleJingleSessionRequest(Jingle jingle, String sessionId);\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/Jingle.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/Jingle.java",
"diff": "@@ -78,6 +78,7 @@ public final class Jingle extends IQ {\nelse {\nthis.contents = Collections.emptyList();\n}\n+ setType(Type.set);\n}\n/**\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContent.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContent.java",
"diff": "@@ -140,7 +140,7 @@ public final class JingleContent implements NamedElement {\n@Override\npublic XmlStringBuilder toXML() {\n- XmlStringBuilder xml = new XmlStringBuilder();\n+ XmlStringBuilder xml = new XmlStringBuilder(this);\nxml.attribute(CREATOR_ATTRIBUTE_NAME, creator);\nxml.optAttribute(DISPOSITION_ATTRIBUTE_NAME, disposition);\nxml.attribute(NAME_ATTRIBUTE_NAME, name);\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentDescription.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentDescription.java",
"diff": "@@ -30,9 +30,9 @@ public abstract class JingleContentDescription implements ExtensionElement {\npublic static final String ELEMENT = \"description\";\n- private final List<JingleContentDescriptionPayloadType> payloads;\n+ private final List<JingleContentDescriptionChildElement> payloads;\n- protected JingleContentDescription(List<JingleContentDescriptionPayloadType> payloads) {\n+ protected JingleContentDescription(List<JingleContentDescriptionChildElement> payloads) {\nif (payloads != null) {\nthis.payloads = Collections.unmodifiableList(payloads);\n}\n@@ -46,7 +46,7 @@ public abstract class JingleContentDescription implements ExtensionElement {\nreturn ELEMENT;\n}\n- public List<JingleContentDescriptionPayloadType> getJinglePayloadTypes() {\n+ public List<JingleContentDescriptionChildElement> getJingleContentDescriptionChildren() {\nreturn payloads;\n}\n@@ -62,6 +62,7 @@ public abstract class JingleContentDescription implements ExtensionElement {\nxml.append(payloads);\n+ xml.closeElement(this);\nreturn xml;\n}\n"
},
{
"change_type": "RENAME",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentDescriptionPayloadType.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentDescriptionChildElement.java",
"diff": "@@ -22,7 +22,7 @@ import org.jivesoftware.smack.packet.NamedElement;\n* An element found usually in 'description' elements.\n*\n*/\n-public abstract class JingleContentDescriptionPayloadType implements NamedElement {\n+public abstract class JingleContentDescriptionChildElement implements NamedElement {\npublic static final String ELEMENT = \"payload-type\";\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentTransport.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentTransport.java",
"diff": "@@ -58,11 +58,17 @@ public abstract class JingleContentTransport implements ExtensionElement {\npublic final XmlStringBuilder toXML() {\nXmlStringBuilder xml = new XmlStringBuilder(this);\naddExtraAttributes(xml);\n- xml.rightAngleBracket();\n- xml.append(candidates);\n+ if (candidates.isEmpty()) {\n+ xml.closeEmptyElement();\n+ } else {\n+\n+ xml.rightAngleBracket();\n+ xml.append(candidates);\nxml.closeElement(this);\n+ }\n+\nreturn xml;\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleError.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleError.java",
"diff": "@@ -39,7 +39,7 @@ public final class JingleError implements ExtensionElement {\n/**\n* Creates a new error with the specified code and errorName.\n*\n- * @param message a message describing the error.\n+ * @param errorName a name describing the error.\n*/\nprivate JingleError(final String errorName) {\nthis.errorName = errorName;\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleReason.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleReason.java",
"diff": "@@ -96,7 +96,7 @@ public class JingleReason implements NamedElement {\nXmlStringBuilder xml = new XmlStringBuilder(this);\nxml.rightAngleBracket();\n- xml.emptyElement(reason);\n+ xml.emptyElement(reason.asString);\nxml.closeElement(this);\nreturn xml;\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/provider/JingleContentProviderManager.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/provider/JingleContentProviderManager.java",
"diff": "@@ -25,7 +25,7 @@ public class JingleContentProviderManager {\nprivate static final Map<String, JingleContentTransportProvider<?>> jingleContentTransportProviders = new ConcurrentHashMap<>();\n- public static JingleContentDescriptionProvider<?> addJingleContentDescrptionProvider(String namespace,\n+ public static JingleContentDescriptionProvider<?> addJingleContentDescriptionProvider(String namespace,\nJingleContentDescriptionProvider<?> provider) {\nreturn jingleContentDescriptionProviders.put(namespace, provider);\n}\n@@ -34,7 +34,7 @@ public class JingleContentProviderManager {\nreturn jingleContentDescriptionProviders.get(namespace);\n}\n- public static JingleContentTransportProvider<?> addJingleContentDescrptionProvider(String namespace,\n+ public static JingleContentTransportProvider<?> addJingleContentTransportProvider(String namespace,\nJingleContentTransportProvider<?> provider) {\nreturn jingleContentTransportProviders.put(namespace, provider);\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Fix typos and xml issues |
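A sketch of what a handler for the renamed callback could look like. Only the JingleSessionHandler interface and the Jingle element shown above are relied on; how the handler gets registered with JingleManager (the counterpart to the renamed unregisterJingleSessionHandler) is not shown in this diff and is assumed.

```java
import org.jivesoftware.smack.packet.IQ;
import org.jivesoftware.smackx.jingle.JingleSessionHandler;
import org.jivesoftware.smackx.jingle.element.Jingle;

public class LoggingJingleSessionHandler implements JingleSessionHandler {

    @Override
    public IQ handleJingleSessionRequest(Jingle jingle, String sessionId) {
        // A real handler would dispatch on jingle.getAction(); this sketch only
        // logs the request and acknowledges it with an empty result IQ.
        System.out.println("Jingle " + jingle.getAction() + " for session " + sessionId);
        return IQ.createResultIQ(jingle);
    }
}
```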
299,323 | 08.06.2017 15:04:25 | -7,200 | d49dc71baea6756faf6b6cb081a6e65820ad0c2e | Add JingleContentTransportInfo class | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentTransport.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentTransport.java",
"diff": "@@ -31,20 +31,35 @@ public abstract class JingleContentTransport implements ExtensionElement {\npublic static final String ELEMENT = \"transport\";\nprotected final List<JingleContentTransportCandidate> candidates;\n+ protected final List<JingleContentTransportInfo> infos;\nprotected JingleContentTransport(List<JingleContentTransportCandidate> candidates) {\n+ this(candidates, null);\n+ }\n+\n+ protected JingleContentTransport(List<JingleContentTransportCandidate> candidates, List<JingleContentTransportInfo> infos) {\nif (candidates != null) {\nthis.candidates = Collections.unmodifiableList(candidates);\n}\nelse {\nthis.candidates = Collections.emptyList();\n}\n+\n+ if (infos != null) {\n+ this.infos = infos;\n+ } else {\n+ this.infos = Collections.emptyList();\n+ }\n}\npublic List<JingleContentTransportCandidate> getCandidates() {\nreturn candidates;\n}\n+ public List<JingleContentTransportInfo> getInfos() {\n+ return infos;\n+ }\n+\n@Override\npublic String getElementName() {\nreturn ELEMENT;\n@@ -66,6 +81,7 @@ public abstract class JingleContentTransport implements ExtensionElement {\nxml.rightAngleBracket();\nxml.append(candidates);\n+ xml.append(infos);\nxml.closeElement(this);\n}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleContentTransportInfo.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle.element;\n+\n+import org.jivesoftware.smack.packet.NamedElement;\n+\n+/**\n+ * Abstract JingleContentTransportInfo element.\n+ */\n+public abstract class JingleContentTransportInfo implements NamedElement {\n+\n+}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Add JingleContentTransportInfo class |
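JingleContentTransportInfo itself is an empty abstract NamedElement, so any concrete info element has to supply the element name and serialization. The class below is purely hypothetical (the candidate-used element and cid attribute are illustrative, not part of this commit) and only assumes the NamedElement/XmlStringBuilder pattern already used elsewhere in these diffs.

```java
import org.jivesoftware.smack.util.XmlStringBuilder;
import org.jivesoftware.smackx.jingle.element.JingleContentTransportInfo;

// Hypothetical concrete info element; a real transport would define its own
// subclasses along these lines.
public class CandidateUsedInfo extends JingleContentTransportInfo {

    public static final String ELEMENT = "candidate-used";

    private final String candidateId;

    public CandidateUsedInfo(String candidateId) {
        this.candidateId = candidateId;
    }

    @Override
    public String getElementName() {
        return ELEMENT;
    }

    @Override
    public XmlStringBuilder toXML() {
        XmlStringBuilder xml = new XmlStringBuilder(this); // <candidate-used
        xml.attribute("cid", candidateId);                 // cid='...'
        xml.closeEmptyElement();                           // />
        return xml;
    }
}
```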
299,323 | 13.06.2017 23:51:57 | -7,200 | 3ecd01135c1d9b8d4bc072a07360a9a98b185827 | Add convenience methods to HashManager | [
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/hashes/HashManager.java",
"new_path": "smack-experimental/src/main/java/org/jivesoftware/smackx/hashes/HashManager.java",
"diff": "@@ -19,9 +19,11 @@ package org.jivesoftware.smackx.hashes;\nimport org.bouncycastle.jce.provider.BouncyCastleProvider;\nimport org.jivesoftware.smack.Manager;\nimport org.jivesoftware.smack.XMPPConnection;\n+import org.jivesoftware.smack.util.StringUtils;\nimport org.jivesoftware.smackx.disco.ServiceDiscoveryManager;\nimport org.jivesoftware.smackx.hashes.element.HashElement;\n+import java.io.UnsupportedEncodingException;\nimport java.math.BigInteger;\nimport java.security.MessageDigest;\nimport java.security.NoSuchAlgorithmException;\n@@ -194,6 +196,10 @@ public final class HashManager extends Manager {\nreturn getMessageDigest(algorithm).digest(data);\n}\n+ public static byte[] hash(ALGORITHM algorithm, String data) {\n+ return hash(algorithm, utf8(data));\n+ }\n+\npublic static MessageDigest getMessageDigest(ALGORITHM algorithm) {\nMessageDigest md;\ntry {\n@@ -253,58 +259,226 @@ public final class HashManager extends Manager {\nreturn getMessageDigest(MD5).digest(data);\n}\n+ public static byte[] md5(String data) {\n+ return md5(utf8(data));\n+ }\n+\n+ public static String md5HexString(byte[] data) {\n+ return hex(md5(data));\n+ }\n+\n+ public static String md5HexString(String data) {\n+ return hex(md5(data));\n+ }\n+\npublic static byte[] sha_1(byte[] data) {\nreturn getMessageDigest(SHA_1).digest(data);\n}\n+ public static byte[] sha_1(String data) {\n+ return sha_1(utf8(data));\n+ }\n+\n+ public static String sha_1HexString(byte[] data) {\n+ return hex(sha_1(data));\n+ }\n+\n+ public static String sha_1HexString(String data) {\n+ return hex(sha_1(data));\n+ }\n+\npublic static byte[] sha_224(byte[] data) {\nreturn getMessageDigest(SHA_224).digest(data);\n}\n+ public static byte[] sha_224(String data) {\n+ return sha_224(utf8(data));\n+ }\n+\n+ public static String sha_224HexString(byte[] data) {\n+ return hex(sha_224(data));\n+ }\n+\n+ public static String sha_224HexString(String data) {\n+ return hex(sha_224(data));\n+ }\n+\npublic static byte[] sha_256(byte[] data) {\nreturn getMessageDigest(SHA_256).digest(data);\n}\n+ public static byte[] sha_256(String data) {\n+ return sha_256(utf8(data));\n+ }\n+\n+ public static String sha_256HexString(byte[] data) {\n+ return hex(sha_256(data));\n+ }\n+\n+ public static String sha_256HexString(String data) {\n+ return hex(sha_256(data));\n+ }\n+\npublic static byte[] sha_384(byte[] data) {\nreturn getMessageDigest(SHA_384).digest(data);\n}\n+ public static byte[] sha_384(String data) {\n+ return sha_384(utf8(data));\n+ }\n+\n+ public static String sha_384HexString(byte[] data) {\n+ return hex(sha_384(data));\n+ }\n+\n+ public static String sha_384HexString(String data) {\n+ return hex(sha_384(data));\n+ }\n+\npublic static byte[] sha_512(byte[] data) {\nreturn getMessageDigest(SHA_512).digest(data);\n}\n+ public static byte[] sha_512(String data) {\n+ return sha_512(utf8(data));\n+ }\n+\n+ public static String sha_512HexString(byte[] data) {\n+ return hex(sha_512(data));\n+ }\n+\n+ public static String sha_512HexString(String data) {\n+ return hex(sha_512(data));\n+ }\n+\npublic static byte[] sha3_224(byte[] data) {\nreturn getMessageDigest(SHA3_224).digest(data);\n}\n+ public static byte[] sha3_224(String data) {\n+ return sha3_224(utf8(data));\n+ }\n+\n+ public static String sha3_224HexString(byte[] data) {\n+ return hex(sha3_224(data));\n+ }\n+\n+ public static String sha3_224HexString(String data) {\n+ return hex(sha3_224(data));\n+ }\n+\npublic static byte[] sha3_256(byte[] data) {\nreturn 
getMessageDigest(SHA3_256).digest(data);\n}\n+ public static byte[] sha3_256(String data) {\n+ return sha3_256(utf8(data));\n+ }\n+\n+ public static String sha3_256HexString(byte[] data) {\n+ return hex(sha3_256(data));\n+ }\n+\n+ public static String sha3_256HexString(String data) {\n+ return hex(sha3_256(data));\n+ }\n+\npublic static byte[] sha3_384(byte[] data) {\nreturn getMessageDigest(SHA3_384).digest(data);\n}\n+ public static byte[] sha3_384(String data) {\n+ return sha3_384(utf8(data));\n+ }\n+\n+ public static String sha3_384HexString(byte[] data) {\n+ return hex(sha3_384(data));\n+ }\n+\n+ public static String sha3_384HexString(String data) {\n+ return hex(sha3_384(data));\n+ }\n+\npublic static byte[] sha3_512(byte[] data) {\nreturn getMessageDigest(SHA3_512).digest(data);\n}\n+ public static byte[] sha3_512(String data) {\n+ return sha3_512(utf8(data));\n+ }\n+\n+ public static String sha3_512HexString(byte[] data) {\n+ return hex(sha3_512(data));\n+ }\n+\n+ public static String sha3_512HexString(String data) {\n+ return hex(sha3_512(data));\n+ }\n+\npublic static byte[] blake2b160(byte[] data) {\nreturn getMessageDigest(BLAKE2B160).digest(data);\n}\n+ public static byte[] blake2b160(String data) {\n+ return blake2b160(utf8(data));\n+ }\n+\n+ public static String blake2b160HexString(byte[] data) {\n+ return hex(blake2b160(data));\n+ }\n+\n+ public static String blake2b160HexString(String data) {\n+ return hex(blake2b160(data));\n+ }\n+\npublic static byte[] blake2b256(byte[] data) {\nreturn getMessageDigest(BLAKE2B256).digest(data);\n}\n+ public static byte[] blake2b256(String data) {\n+ return blake2b256(utf8(data));\n+ }\n+\n+ public static String blake2b256HexString(byte[] data) {\n+ return hex(blake2b256(data));\n+ }\n+\n+ public static String blake2b256HexString(String data) {\n+ return hex(blake2b256(data));\n+ }\n+\npublic static byte[] blake2b384(byte[] data) {\nreturn getMessageDigest(BLAKE2B384).digest(data);\n}\n+ public static byte[] blake2b384(String data) {\n+ return blake2b384(utf8(data));\n+ }\n+\n+ public String blake2b384HexString(byte[] data) {\n+ return hex(blake2b384(data));\n+ }\n+\n+ public String blake2b384HexString(String data) {\n+ return hex(blake2b384(data));\n+ }\n+\npublic static byte[] blake2b512(byte[] data) {\nreturn getMessageDigest(BLAKE2B512).digest(data);\n}\n+ public static byte[] blake2b512(String data) {\n+ return blake2b512(utf8(data));\n+ }\n+\n+ public String blake2b512HexString(byte[] data) {\n+ return hex(blake2b512(data));\n+ }\n+\n+ public String blake2b512HexString(String data) {\n+ return hex(blake2b512(data));\n+ }\n+\n/**\n* Encode a byte array in HEX.\n* @param hash\n@@ -314,4 +488,12 @@ public final class HashManager extends Manager {\nreturn new BigInteger(1, hash).toString(16);\n}\n+ public static byte[] utf8(String data) {\n+ try {\n+ return data.getBytes(StringUtils.UTF8);\n+ } catch (UnsupportedEncodingException e) {\n+ throw new AssertionError(e);\n+ }\n+ }\n+\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/hashes/HashElementTest.java",
"new_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/hashes/HashElementTest.java",
"diff": "@@ -23,11 +23,11 @@ import org.jivesoftware.smackx.hashes.element.HashElement;\nimport org.jivesoftware.smackx.hashes.provider.HashElementProvider;\nimport org.junit.Test;\n-import static junit.framework.TestCase.assertEquals;\nimport static org.jivesoftware.smackx.hashes.HashManager.ALGORITHM.SHA_256;\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n+import static junit.framework.TestCase.assertEquals;\n/**\n* Test toXML and parse of HashElement and HashElementProvider.\n@@ -37,7 +37,7 @@ public class HashElementTest extends SmackTestSuite {\n@Test\npublic void stanzaTest() throws Exception {\nString message = \"Hello World!\";\n- HashElement element = HashManager.calculateHashElement(SHA_256, message.getBytes(StringUtils.UTF8));\n+ HashElement element = HashManager.calculateHashElement(SHA_256, HashManager.utf8(message));\nString expected = \"<hash xmlns='urn:xmpp:hashes:2' algo='sha-256'>f4OxZX/x/FO5LcGBSKHWXfwtSx+j1ncoSt3SABJtkGk=</hash>\";\nassertEquals(expected, element.toXML().toString());\n@@ -45,7 +45,7 @@ public class HashElementTest extends SmackTestSuite {\nassertEquals(expected, parsed.toXML().toString());\nassertEquals(SHA_256, parsed.getAlgorithm());\nassertEquals(\"f4OxZX/x/FO5LcGBSKHWXfwtSx+j1ncoSt3SABJtkGk=\", parsed.getHashB64());\n- assertArrayEquals(HashManager.sha_256(message.getBytes(StringUtils.UTF8)), parsed.getHash());\n+ assertArrayEquals(HashManager.sha_256(message), parsed.getHash());\nassertFalse(parsed.equals(expected));\nassertFalse(parsed.equals(null));\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/hashes/HashTest.java",
"new_path": "smack-experimental/src/test/java/org/jivesoftware/smackx/hashes/HashTest.java",
"diff": "package org.jivesoftware.smackx.hashes;\nimport org.jivesoftware.smack.test.util.SmackTestSuite;\n-import org.jivesoftware.smack.util.StringUtils;\nimport org.junit.Test;\n-import java.io.UnsupportedEncodingException;\n-\nimport static junit.framework.TestCase.assertEquals;\n/**\n@@ -49,11 +46,7 @@ public class HashTest extends SmackTestSuite {\nprivate byte[] array() {\nif (testArray == null) {\n- try {\n- testArray = testString.getBytes(StringUtils.UTF8);\n- } catch (UnsupportedEncodingException e) {\n- throw new AssertionError(\"UTF8 MUST be supported.\");\n- }\n+ testArray = HashManager.utf8(testString);\n}\nreturn testArray;\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Add convenience methods to HashManager |
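A quick sketch of the new String-based convenience overloads; the expected digest in the comment is the SHA-256 test vector from HashTest above.

```java
import org.jivesoftware.smackx.hashes.HashManager;

public class HashConvenienceExample {

    public static void main(String[] args) {
        // Both lines print the same value, with UTF-8 handling done internally:
        // 7f83b1657ff1fc53b92dc18148a1d65dfc2d4b1fa3d677284addd200126d9069
        System.out.println(HashManager.hex(HashManager.sha_256("Hello World!")));
        System.out.println(HashManager.sha_256HexString("Hello World!"));
    }
}
```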
299,323 | 13.06.2017 23:57:59 | -7,200 | 4ae84348520b734b0d6aa7b6c73109dd00c85d9d | Remove unused errors and add missing ones | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleError.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleError.java",
"diff": "@@ -28,11 +28,11 @@ public final class JingleError implements ExtensionElement {\npublic static final JingleError OUT_OF_ORDER = new JingleError(\"out-of-order\");\n- public static final JingleError UNKNOWN_SESSION = new JingleError(\"unknown-session\");\n+ public static final JingleError TIE_BREAK = new JingleError(\"tie-break\");\n- public static final JingleError UNSUPPORTED_CONTENT = new JingleError(\"unsupported-content\");\n+ public static final JingleError UNKNOWN_SESSION = new JingleError(\"unknown-session\");\n- public static final JingleError UNSUPPORTED_TRANSPORTS = new JingleError(\"unsupported-transports\");\n+ public static final JingleError UNSUPPORTED_INFO = new JingleError(\"unsupported-info\");\nprivate final String errorName;\n@@ -71,10 +71,10 @@ public final class JingleError implements ExtensionElement {\nreturn OUT_OF_ORDER;\ncase \"unknown-session\":\nreturn UNKNOWN_SESSION;\n- case \"unsupported-content\":\n- return UNSUPPORTED_CONTENT;\n- case \"unsupported-transports\":\n- return UNSUPPORTED_TRANSPORTS;\n+ case \"tie-break\":\n+ return TIE_BREAK;\n+ case \"unsupported-info\":\n+ return UNSUPPORTED_INFO;\ndefault:\nthrow new IllegalArgumentException();\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleErrorTest.java",
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleErrorTest.java",
"diff": "@@ -24,7 +24,7 @@ import static junit.framework.TestCase.assertEquals;\nimport static junit.framework.TestCase.fail;\n/**\n- * Test the JingleError class. TODO: Uncomment missing tests once implemented.\n+ * Test the JingleError class.\n*/\npublic class JingleErrorTest extends SmackTestSuite {\n@@ -32,12 +32,12 @@ public class JingleErrorTest extends SmackTestSuite {\npublic void parserTest() {\nassertEquals(\"<out-of-order xmlns='urn:xmpp:jingle:errors:1'/>\",\nJingleError.fromString(\"out-of-order\").toXML().toString());\n- //assertEquals(\"<tie-break xmlns='urn:xmpp:jingle:errors:1'/>\",\n- // JingleError.fromString(\"tie-break\").toXML().toString());\n+ assertEquals(\"<tie-break xmlns='urn:xmpp:jingle:errors:1'/>\",\n+ JingleError.fromString(\"tie-break\").toXML().toString());\nassertEquals(\"<unknown-session xmlns='urn:xmpp:jingle:errors:1'/>\",\nJingleError.fromString(\"unknown-session\").toXML().toString());\n- //assertEquals(\"<unsupported-info xmlns='urn:xmpp:jingle:errors:1'/>\",\n- // JingleError.fromString(\"unsupported-info\").toXML().toString());\n+ assertEquals(\"<unsupported-info xmlns='urn:xmpp:jingle:errors:1'/>\",\n+ JingleError.fromString(\"unsupported-info\").toXML().toString());\nassertEquals(\"unknown-session\", JingleError.fromString(\"unknown-session\").getMessage());\ntry {\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Remove unused errors and add missing ones |
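A small sketch of the surviving error conditions; behaviour and XML output follow the updated JingleError class and its test above.

```java
import org.jivesoftware.smackx.jingle.element.JingleError;

public class JingleErrorExample {

    public static void main(String[] args) {
        // Parsing an error name yields the shared constant; unknown names
        // make fromString() throw an IllegalArgumentException.
        JingleError error = JingleError.fromString("tie-break");

        System.out.println(error == JingleError.TIE_BREAK); // true
        System.out.println(error.getMessage());             // tie-break
        System.out.println(error.toXML());                  // <tie-break xmlns='urn:xmpp:jingle:errors:1'/>
    }
}
```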
299,323 | 14.06.2017 00:00:27 | -7,200 | a604266336fbaf3cccc537e19d30bf419298fe3b | Create alternative-session JingleReason | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleReason.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/JingleReason.java",
"diff": "@@ -20,6 +20,7 @@ import java.util.HashMap;\nimport java.util.Map;\nimport org.jivesoftware.smack.packet.NamedElement;\n+import org.jivesoftware.smack.util.StringUtils;\nimport org.jivesoftware.smack.util.XmlStringBuilder;\n/**\n@@ -32,6 +33,27 @@ public class JingleReason implements NamedElement {\npublic static final String ELEMENT = \"reason\";\n+ public static AlternativeSession AlternativeSession(String sessionId) {\n+ return new AlternativeSession(sessionId);\n+ }\n+\n+ public static final JingleReason Busy = new JingleReason(Reason.busy);\n+ public static final JingleReason Cancel = new JingleReason(Reason.cancel);\n+ public static final JingleReason ConnectivityError = new JingleReason(Reason.connectivity_error);\n+ public static final JingleReason Decline = new JingleReason(Reason.decline);\n+ public static final JingleReason Expired = new JingleReason(Reason.expired);\n+ public static final JingleReason FailedApplication = new JingleReason(Reason.failed_application);\n+ public static final JingleReason FailedTransport = new JingleReason(Reason.failed_transport);\n+ public static final JingleReason GeneralError = new JingleReason(Reason.general_error);\n+ public static final JingleReason Gone = new JingleReason(Reason.gone);\n+ public static final JingleReason IncompatibleParameters = new JingleReason(Reason.incompatible_parameters);\n+ public static final JingleReason MediaError = new JingleReason(Reason.media_error);\n+ public static final JingleReason SecurityError = new JingleReason(Reason.security_error);\n+ public static final JingleReason Success = new JingleReason(Reason.success);\n+ public static final JingleReason Timeout = new JingleReason(Reason.timeout);\n+ public static final JingleReason UnsupportedApplications = new JingleReason(Reason.unsupported_applications);\n+ public static final JingleReason UnsupportedTransports = new JingleReason(Reason.unsupported_transports);\n+\npublic enum Reason {\nalternative_session,\nbusy,\n@@ -52,7 +74,7 @@ public class JingleReason implements NamedElement {\nunsupported_transports,\n;\n- private static final Map<String, Reason> LUT = new HashMap<>(Reason.values().length);\n+ protected static final Map<String, Reason> LUT = new HashMap<>(Reason.values().length);\nstatic {\nfor (Reason reason : Reason.values()) {\n@@ -60,9 +82,9 @@ public class JingleReason implements NamedElement {\n}\n}\n- private final String asString;\n+ protected final String asString;\n- private Reason() {\n+ Reason() {\nasString = name().replace('_', '-');\n}\n@@ -80,9 +102,9 @@ public class JingleReason implements NamedElement {\n}\n}\n- private final Reason reason;\n+ protected final Reason reason;\n- public JingleReason(Reason reason) {\n+ protected JingleReason(Reason reason) {\nthis.reason = reason;\n}\n@@ -102,4 +124,33 @@ public class JingleReason implements NamedElement {\nreturn xml;\n}\n+\n+ public static class AlternativeSession extends JingleReason {\n+\n+ public static final String SID = \"sid\";\n+ private final String sessionId;\n+\n+ public AlternativeSession(String sessionId) {\n+ super(Reason.alternative_session);\n+ if (StringUtils.isNullOrEmpty(sessionId)) {\n+ throw new NullPointerException(\"SessionID must not be null or empty.\");\n+ }\n+ this.sessionId = sessionId;\n+ }\n+\n+ @Override\n+ public XmlStringBuilder toXML() {\n+ XmlStringBuilder xml = new XmlStringBuilder(this);\n+ xml.rightAngleBracket();\n+\n+ xml.openElement(reason.asString);\n+ xml.openElement(SID);\n+ xml.append(sessionId);\n+ xml.closeElement(SID);\n+ 
xml.closeElement(reason.asString);\n+\n+ xml.closeElement(this);\n+ return xml;\n+ }\n+ }\n}\n"
},
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleReasonTest.java",
"new_path": "smack-extensions/src/test/java/org/jivesoftware/smackx/jingle/JingleReasonTest.java",
"diff": "@@ -31,38 +31,54 @@ public class JingleReasonTest extends SmackTestSuite {\n@Test\npublic void parserTest() {\nassertEquals(\"<reason><success/></reason>\",\n- new JingleReason(JingleReason.Reason.success).toXML().toString());\n+ JingleReason.Success.toXML().toString());\nassertEquals(\"<reason><busy/></reason>\",\n- new JingleReason(JingleReason.Reason.busy).toXML().toString());\n+ JingleReason.Busy.toXML().toString());\nassertEquals(\"<reason><cancel/></reason>\",\n- new JingleReason(JingleReason.Reason.cancel).toXML().toString());\n+ JingleReason.Cancel.toXML().toString());\nassertEquals(\"<reason><connectivity-error/></reason>\",\n- new JingleReason(JingleReason.Reason.connectivity_error).toXML().toString());\n+ JingleReason.ConnectivityError.toXML().toString());\nassertEquals(\"<reason><decline/></reason>\",\n- new JingleReason(JingleReason.Reason.decline).toXML().toString());\n+ JingleReason.Decline.toXML().toString());\nassertEquals(\"<reason><expired/></reason>\",\n- new JingleReason(JingleReason.Reason.expired).toXML().toString());\n+ JingleReason.Expired.toXML().toString());\nassertEquals(\"<reason><unsupported-transports/></reason>\",\n- new JingleReason(JingleReason.Reason.unsupported_transports).toXML().toString());\n+ JingleReason.UnsupportedTransports.toXML().toString());\nassertEquals(\"<reason><failed-transport/></reason>\",\n- new JingleReason(JingleReason.Reason.failed_transport).toXML().toString());\n+ JingleReason.FailedTransport.toXML().toString());\nassertEquals(\"<reason><general-error/></reason>\",\n- new JingleReason(JingleReason.Reason.general_error).toXML().toString());\n+ JingleReason.GeneralError.toXML().toString());\nassertEquals(\"<reason><gone/></reason>\",\n- new JingleReason(JingleReason.Reason.gone).toXML().toString());\n+ JingleReason.Gone.toXML().toString());\nassertEquals(\"<reason><media-error/></reason>\",\n- new JingleReason(JingleReason.Reason.media_error).toXML().toString());\n+ JingleReason.MediaError.toXML().toString());\nassertEquals(\"<reason><security-error/></reason>\",\n- new JingleReason(JingleReason.Reason.security_error).toXML().toString());\n+ JingleReason.SecurityError.toXML().toString());\nassertEquals(\"<reason><unsupported-applications/></reason>\",\n- new JingleReason(JingleReason.Reason.unsupported_applications).toXML().toString());\n+ JingleReason.UnsupportedApplications.toXML().toString());\nassertEquals(\"<reason><timeout/></reason>\",\n- new JingleReason(JingleReason.Reason.timeout).toXML().toString());\n+ JingleReason.Timeout.toXML().toString());\nassertEquals(\"<reason><failed-application/></reason>\",\n- new JingleReason(JingleReason.Reason.failed_application).toXML().toString());\n+ JingleReason.FailedApplication.toXML().toString());\nassertEquals(\"<reason><incompatible-parameters/></reason>\",\n- new JingleReason(JingleReason.Reason.incompatible_parameters).toXML().toString());\n- assertEquals(JingleReason.Reason.alternative_session, JingleReason.Reason.fromString(\"alternative-session\"));\n+ JingleReason.IncompatibleParameters.toXML().toString());\n+ assertEquals(\"<reason><alternative-session><sid>1234</sid></alternative-session></reason>\",\n+ JingleReason.AlternativeSession(\"1234\").toXML().toString());\n+ // Alternative sessionID must not be empty\n+ try {\n+ JingleReason.AlternativeSession(\"\");\n+ fail();\n+ } catch (NullPointerException e) {\n+ // Expected\n+ }\n+ // Alternative sessionID must not be null\n+ try {\n+ JingleReason.AlternativeSession(null);\n+ fail();\n+ } catch (NullPointerException 
e) {\n+ // Expected\n+ }\n+\ntry {\nJingleReason.Reason nonExistent = JingleReason.Reason.fromString(\"illegal-reason\");\nfail();\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Create alternative-session JingleReason |
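A short sketch of the reworked JingleReason API: plain reasons are now shared constants, and alternative-session is created through the new static factory. The sid value is illustrative; the XML shapes match JingleReasonTest above.

```java
import org.jivesoftware.smackx.jingle.element.JingleReason;

public class JingleReasonExample {

    public static void main(String[] args) {
        // Plain reasons are reused constants instead of constructor calls.
        System.out.println(JingleReason.Success.toXML());
        // <reason><success/></reason>

        // alternative-session carries the sid of the session to switch to;
        // a null or empty sid is rejected with a NullPointerException.
        JingleReason alt = JingleReason.AlternativeSession("att2ld4c");
        System.out.println(alt.toXML());
        // <reason><alternative-session><sid>att2ld4c</sid></alternative-session></reason>
    }
}
```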
299,323 | 16.06.2017 22:43:50 | -7,200 | 287976e0e0cd8b8f3f2779a5ffc3e8c7268fb410 | Add Jingle InBandBytestream transports | [
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/transports/jingle_ibb/element/JingleIBBTransport.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle.transports.jingle_ibb.element;\n+\n+import org.jivesoftware.smack.util.StringUtils;\n+import org.jivesoftware.smack.util.XmlStringBuilder;\n+import org.jivesoftware.smackx.jingle.element.JingleContentTransport;\n+\n+/**\n+ * Transport Element for JingleInBandBytestream transports.\n+ */\n+public class JingleIBBTransport extends JingleContentTransport {\n+ public static final String NAMESPACE_V1 = \"urn:xmpp:jingle:transports:ibb:1\";\n+ public static final String ATTR_BLOCK_SIZE = \"block-size\";\n+ public static final String ATTR_SID = \"sid\";\n+\n+ public static final short DEFAULT_BLOCK_SIZE = 4096;\n+\n+ private final short blockSize;\n+ private final String sid;\n+\n+ public JingleIBBTransport() {\n+ this(DEFAULT_BLOCK_SIZE);\n+ }\n+\n+ public JingleIBBTransport(String sid) {\n+ this(DEFAULT_BLOCK_SIZE, sid);\n+ }\n+\n+ public JingleIBBTransport(short blockSize) {\n+ this(blockSize, StringUtils.randomString(24));\n+ }\n+\n+ public JingleIBBTransport(short blockSize, String sid) {\n+ super(null);\n+ if (blockSize > 0) {\n+ this.blockSize = blockSize;\n+ } else {\n+ this.blockSize = DEFAULT_BLOCK_SIZE;\n+ }\n+ this.sid = sid;\n+ }\n+\n+ public String getSessionId() {\n+ return sid;\n+ }\n+\n+ public short getBlockSize() {\n+ return blockSize;\n+ }\n+\n+ @Override\n+ protected void addExtraAttributes(XmlStringBuilder xml) {\n+ xml.attribute(ATTR_BLOCK_SIZE, blockSize);\n+ xml.attribute(ATTR_SID, sid);\n+ }\n+\n+ @Override\n+ public String getNamespace() {\n+ return NAMESPACE_V1;\n+ }\n+\n+ @Override\n+ public boolean equals(Object other) {\n+ if (other == null || !(other instanceof JingleIBBTransport)) {\n+ return false;\n+ }\n+\n+ return this == other || this.hashCode() == other.hashCode();\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return this.toXML().toString().hashCode();\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/transports/jingle_ibb/element/package-info.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+/**\n+ * Smack's API for <a href=\"https://xmpp.org/extensions/xep-0261.html\">XEP-0261: Jingle In-Band Bytestreams</a>.\n+ * Element classes.\n+ */\n+package org.jivesoftware.smackx.jingle.transports.jingle_ibb.element;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/transports/jingle_ibb/package-info.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+/**\n+ * Smack's API for <a href=\"https://xmpp.org/extensions/xep-0261.html\">XEP-0261: Jingle In-Band Bytestreams</a>.\n+ */\n+package org.jivesoftware.smackx.jingle.transports.jingle_ibb;\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/transports/jingle_ibb/provider/JingleIBBTransportProvider.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+package org.jivesoftware.smackx.jingle.transports.jingle_ibb.provider;\n+\n+import org.jivesoftware.smackx.jingle.provider.JingleContentTransportProvider;\n+import org.jivesoftware.smackx.jingle.transports.jingle_ibb.element.JingleIBBTransport;\n+import org.xmlpull.v1.XmlPullParser;\n+\n+/**\n+ * Parse JingleByteStreamTransport elements.\n+ */\n+public class JingleIBBTransportProvider extends JingleContentTransportProvider<JingleIBBTransport> {\n+ @Override\n+ public JingleIBBTransport parse(XmlPullParser parser, int initialDepth) throws Exception {\n+ String blockSizeString = parser.getAttributeValue(null, JingleIBBTransport.ATTR_BLOCK_SIZE);\n+ String sid = parser.getAttributeValue(null, JingleIBBTransport.ATTR_SID);\n+\n+ short blockSize = -1;\n+ if (blockSizeString != null) {\n+ blockSize = Short.valueOf(blockSizeString);\n+ }\n+\n+ return new JingleIBBTransport(blockSize, sid);\n+ }\n+}\n"
},
{
"change_type": "ADD",
"old_path": null,
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/transports/jingle_ibb/provider/package-info.java",
"diff": "+/**\n+ *\n+ * Copyright 2017 Paul Schaub\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+/**\n+ * Smack's API for <a href=\"https://xmpp.org/extensions/xep-0261.html\">XEP-0261: Jingle In-Band Bytestreams</a>.\n+ * Provider classes.\n+ */\n+package org.jivesoftware.smackx.jingle.transports.jingle_ibb.provider;\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Add Jingle InBandBytestream transports |
299,323 | 16.06.2017 22:54:32 | -7,200 | 5699373cd9f453647512b0ae70eaff061e401c37 | Add method to set Reason | [
{
"change_type": "MODIFY",
"old_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/Jingle.java",
"new_path": "smack-extensions/src/main/java/org/jivesoftware/smackx/jingle/element/Jingle.java",
"diff": "@@ -198,6 +198,11 @@ public final class Jingle extends IQ {\nreturn this;\n}\n+ public Builder setReason(JingleReason reason) {\n+ this.reason = reason;\n+ return this;\n+ }\n+\npublic Jingle build() {\nreturn new Jingle(sid, action, initiator, responder, reason, contents);\n}\n"
}
] | Java | Apache License 2.0 | igniterealtime/smack | Add method to set Reason |