repo_name (stringlengths 5–114) | repo_url (stringlengths 24–133) | snapshot_id (stringlengths 40) | revision_id (stringlengths 40) | directory_id (stringlengths 40) | branch_name (stringclasses 209) | visit_date (timestamp[ns]) | revision_date (timestamp[ns]) | committer_date (timestamp[ns]) | github_id (int64 9.83k–683M ⌀) | star_events_count (int64 0–22.6k) | fork_events_count (int64 0–4.15k) | gha_license_id (stringclasses 17) | gha_created_at (timestamp[ns]) | gha_updated_at (timestamp[ns]) | gha_pushed_at (timestamp[ns]) | gha_language (stringclasses 115) | files (listlengths 1–13.2k) | num_files (int64 1–13.2k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
reversondias/assignment_stack_io | https://github.com/reversondias/assignment_stack_io | 13252055ad9b96656dafd342254fb7bc5c6b5959 | efdbe3809d309f67375c6344741add627189cb23 | 5f8d30d1ef95064fb90c6c1ed30d683ecc8889f5 | refs/heads/main | 2023-03-03T10:57:17.529970 | 2021-02-08T03:44:56 | 2021-02-08T03:44:56 | 336,953,150 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7496746182441711,
"alphanum_fraction": 0.7622559666633606,
"avg_line_length": 57.341773986816406,
"blob_id": "16cb251fe192c6189ef3f68d63c368a39c8659d3",
"content_id": "26d8b9421f182592301d804d2292b6b6a15b389a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4610,
"license_type": "no_license",
"max_line_length": 346,
"num_lines": 79,
"path": "/README.md",
"repo_name": "reversondias/assignment_stack_io",
"src_encoding": "UTF-8",
"text": "# Assignment \n\nThis assignment is an application that runs in a Kubernetes pod and connects to a database pod. \nThe application can be accessed from outside the cluster through an HTTP connection. If CPU load exceeds 20%, the deployment scales up to five pods.\n\n## Directories\nThis repository has three directories: \n- app: Kubernetes manifest files to deploy the application environment, plus the Dockerfile and application code used to build the application image. \n- db: the database Kubernetes manifests.\n- metrics server: the manifests to install metrics-server in Kubernetes, so the HPA can automate scaling. \n\n## Deploy\n### 1 - Deploy Metrics-Server\nThis guide assumes there is a Kubernetes cluster installed and properly configured. \nThat said, if metrics-server is not installed in the cluster, install it first. \n\nIn a terminal with kubectl installed and credentials that allow access to the cluster, execute the command below: \n\n```\nkubectl apply -f metrics-server/*\n```\n\nThe metrics-server manifest comes from the official repository. I only added the `--kubelet-insecure-tls` parameter because I tested my environment on a single-node kubeadm installation. \n\n### 2 - Deploy Database\nFirst, choose one node in the cluster to be the database node and create the following label. \n```\nkubectl label nodes <node_name> type_node=database\n```\nThe label marks the node that will receive the database pod, which uses a PV (local volume) to persist data. \n\nAfter that, execute the following commands in this sequence: \n```\nkubectl apply -f db/db_pv.yaml \nkubectl apply -f db/db_ns.yaml \nkubectl apply -f db/db_pvc.yaml \nkubectl create secret generic db-access --namespace=db --from-literal=password=<db_passwd_you_want> --from-literal=user=<db_user_you_want> \nkubectl apply -f db/db_deployment.yaml \nkubectl apply -f db/db_svc.yaml \n```\nIf you prefer, the `db` dir contains a db_secret.yaml example for creating the secret from a manifest instead of the imperative command. But remember that the values used in the manifest file have to be base64 encoded. \n\n### 3 - Deploy Application\nFor this step, the image used in the deployment manifest file is stored in my Docker Hub repository (*reverson/my_app:latest*). This means it is not necessary to build the image for now, although you can if you want. I describe the application in more detail further down in this document. \n\n`app/app_configmap.yaml` contains some database parameters the application uses to connect to the database. I did not consider that information sensitive, so I kept it in plain text, unlike the *user* and *password* information. \n\nExecute the following commands in this sequence to deploy the application: \n```\nkubectl apply -f app/app_ns.yaml \nkubectl apply -f app/app_configmap.yaml \nkubectl create secret generic db-app-access --namespace=app --from-literal=db-password=<db_passwd> --from-literal=db-user=<db_user> \nkubectl apply -f app/app_deployment.yaml \nkubectl apply -f app/app_hpa.yaml \nkubectl apply -f app/app_svc.yaml \n```\nAs with `db` above, there is an app_secret.yaml example (**in the `app` dir**) for creating the secret from a manifest instead of the imperative command. Again, the values used in the manifest file have to be base64 encoded. \n\nThe application is exposed on port `30080` for access from outside the Kubernetes cluster. \nThe HPA controls the minimum and maximum number of pods; in our case the minimum is 3 and the maximum is 5 pods, and the threshold to scale pods is *20% of load average*. \n\n## Test if the application works\n\nThe application is very simple. \nTo test it, after the whole environment is deployed, open a browser and access any node IP over HTTP on port `30080`. \nIt will return a simple HTML page with information like this: \n```\nID\tName\tPhoneNumber\tCompany\n1\tJohn\t(866) 490-3907\tCompany A \n2\tWalsh\t(831) 450-2422\tCompany B \n3\tWalsh\t(854) 481-3903\tCompany C \n```\n\nIf the page does not show that information, the application is not working correctly. \n\n## Application \n\nIt is a Python application (*python3.8*) that connects to a database when it starts. On startup it checks whether the database configured in the manifests exists, and whether the table configured in the code (*mytable*) exists. Any that are missing are created, and the database is populated with hard-coded information. \nAfter the HTTP server is up, every access queries the data in the database and returns an HTML page with that data. "
},
{
"alpha_fraction": 0.6901408433914185,
"alphanum_fraction": 0.7394366264343262,
"avg_line_length": 13.300000190734863,
"blob_id": "f3012a7d5ed8f2f6bac327b91edc3db655134355",
"content_id": "fd70406ef54f519f7a846cf9b5ea736c9aee5542",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 142,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 10,
"path": "/app/Dockerfile",
"repo_name": "reversondias/assignment_stack_io",
"src_encoding": "UTF-8",
"text": "FROM python:3.8.2\n\nWORKDIR /root/app\n\nCOPY main.py .\nCOPY requirements.txt .\n\nRUN pip3.8 install -r requirements.txt\n\nCMD python3.8 -u main.py"
},
{
"alpha_fraction": 0.594803512096405,
"alphanum_fraction": 0.6109083294868469,
"avg_line_length": 37.775001525878906,
"blob_id": "aa44034ee94fe28aca2a364f623f0b5a9c405bb3",
"content_id": "bfa989b108e1f469fc2385be4ef38b6cc27afb8f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4657,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 120,
"path": "/app/main.py",
"repo_name": "reversondias/assignment_stack_io",
"src_encoding": "UTF-8",
"text": "import os\nimport psycopg2\nfrom psycopg2 import Error\nfrom psycopg2 import OperationalError\nfrom psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT\nfrom http.server import HTTPServer, BaseHTTPRequestHandler\n\n\n\ndef db_create(table_name=\"mytable\"):\n\n\n    DB_USER = os.getenv('DB_USER')\n    DB_PASSWD = os.getenv('DB_PASSWD')\n    DB_NAME = os.getenv('DB_NAME')\n    DB_HOST = os.getenv('DB_HOST')\n    DB_PORT = os.getenv('DB_PORT',\"5432\")\n\n    create_table_query = \"CREATE TABLE \"+table_name+\" (id SERIAL PRIMARY KEY, name varchar(255), phoneNumber varchar(255), company varchar(255));\"\n    create_database = 0\n\n    try:\n        connection_db = psycopg2.connect(user=DB_USER, password=DB_PASSWD, host=DB_HOST, port=DB_PORT, database=DB_NAME)\n    except OperationalError as e:\n        if 'database \"'+DB_NAME+'\" does not exist' in str(e):\n            print(\"[WARN] - The database '{}' will be created.\".format(DB_NAME))\n            connection_db_create = psycopg2.connect(user=DB_USER, password=DB_PASSWD, host=DB_HOST, port=DB_PORT)\n            connection_db_create.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)\n            cursor_db_create = connection_db_create.cursor()\n            cursor_db_create.execute(\"CREATE DATABASE \"+DB_NAME+\";\")\n            cursor_db_create.close()\n            connection_db_create.close()\n            create_database = 1\n\n    if create_database:\n        connection_db = psycopg2.connect(user=DB_USER, password=DB_PASSWD, host=DB_HOST, port=DB_PORT, database=DB_NAME)\n\n    try:\n        cursor_table = connection_db.cursor()\n        cursor_table.execute(\"SELECT * FROM \"+table_name+\" LIMIT 1;\")\n        cursor_table.close()\n        connection_db.close()\n    except Error as e:\n        if 'relation \"'+table_name+'\" does not exist' in str(e):\n            print(\"[WARN] - The table '{}' will be created.\".format(table_name))\n            connection_table_create = psycopg2.connect(user=DB_USER, password=DB_PASSWD, host=DB_HOST, port=DB_PORT, database=DB_NAME)\n            connection_table_create.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)\n            cursor_table_create = connection_table_create.cursor()\n            cursor_table_create.execute(create_table_query)\n            cursor_table_create.close()\n            connection_table_create.close()\n    else:\n        print(\"[INFO] - The database was successfully initialized.\")\n\n\ndef db_query(query):\n\n\n    DB_USER = os.getenv('DB_USER')\n    DB_PASSWD = os.getenv('DB_PASSWD')\n    DB_NAME = os.getenv('DB_NAME')\n    DB_HOST = os.getenv('DB_HOST')\n    DB_PORT = os.getenv('DB_PORT',\"5432\")\n    connection = 0\n    results = False\n\n    try:\n        connection = psycopg2.connect(user=DB_USER, password=DB_PASSWD, host=DB_HOST, port=DB_PORT, database=DB_NAME)\n\n        cursor = connection.cursor()\n        cursor.execute(query)\n        if \"insert\" in query or \"INSERT\" in query:\n            connection.commit()\n            results = True\n        elif \"select\" in query or \"SELECT\" in query:\n            results = cursor.fetchall()\n\n    except (Exception, Error) as error:\n        print(\"Error while connecting to PostgreSQL\", error)\n        return results\n    finally:\n        if (connection):\n            cursor.close()\n            connection.close()\n            print(\"PostgreSQL connection is closed\")\n    return results\n\ndef populate_db():\n    datas = [\n        {\"name\":\"John\",\"number\": \"(866) 490-3907\",\"company\": \"Company A\"},\n        {\"name\":\"Walsh\",\"number\": \"(831) 450-2422\",\"company\": \"Company B\"},\n        {\"name\":\"Walsh\",\"number\": \"(854) 481-3903\",\"company\": \"Company C\"}\n    ]\n\n    if not db_query(\"select * from mytable limit 1;\"):\n        db_create()\n        for data in datas:\n            query = \"INSERT INTO mytable (name , phoneNumber , company ) VALUES ('\"+data['name']+\"','\"+data['number']+\"','\"+data['company']+\"')\"\n            db_query(query)\n\nclass SimpleHTTPRequestHandler(BaseHTTPRequestHandler):\n\n    def do_GET(self):\n\n        html_body = [\"<!DOCTYPE html><html><body><table><tr><th>ID</th><th>Name</th><th>PhoneNumber</th><th>Company</th></tr>\"]\n        db_results = db_query(\"select * from mytable\")\n        for row in db_results:\n            html_body.append(\"<tr><td>\"+str(row[0])+\"</td><td>\"+row[1]+\"</td><td>\"+row[2]+\"</td><td>\"+row[3]+\"</td></tr>\")\n        html_body.append(\"</table></body></html>\")\n        html_page = '\\n'.join(html_body)\n\n        self.send_response(200)\n        self.end_headers()\n        self.wfile.write(html_page.encode('utf-8'))\n\nif __name__ == \"__main__\":\n\n    populate_db()\n    httpd = HTTPServer(('0.0.0.0', 80), SimpleHTTPRequestHandler)\n    httpd.serve_forever()\n"
}
] | 3 |
Kurone96Kou/KT1 | https://github.com/Kurone96Kou/KT1 | 94a705cf8f8b58421f50aba967ed664c7a12b46d | 771a0a6ef6c599da90d1493d4327fd267b250b78 | e39bbb89ea1d3fb24987ce3e5e93b1671a7be3c9 | refs/heads/main | 2023-08-18T12:00:10.859299 | 2021-09-29T18:18:51 | 2021-09-29T18:18:51 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6453794240951538,
"alphanum_fraction": 0.652141273021698,
"avg_line_length": 48.88461685180664,
"blob_id": "a9385dda8a4c887df71d59a8b1d5a47d8d36a3c6",
"content_id": "59a481522f680f0e44e970d6d54681aa8cad7917",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2006,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 26,
"path": "/z2.py",
"repo_name": "Kurone96Kou/KT1",
"src_encoding": "UTF-8",
"text": "\"\"\"Task statement\"\"\"\r\n\r\nz2 = \\\r\n    \"Task 2. A text containing words with repeated letters is entered from the keyboard. \\n\"\\\r\n    \"Remove all repetitions, leaving one of each letter\\n\\n\"\\\r\n    \"You can use this string to test: \\n\"\\\r\n    \"Pythooon iiisss tttheee BBeesttt Softwaree \\n\"\r\n\r\ndef delete_duplicates(s):  # removes consecutive duplicate characters from the string passed in\r\n    news = ''  # new empty string that accumulates the result\r\n    for i in range(len(s) - 1):  # walk the string character by character\r\n        if s[i] != s[i + 1]:  # if the current character differs from the next one,\r\n            news += s[i]  # append it to the new string\r\n    if s[-1] != news[-1]:  # if the last character of the source string differs from the last character of the new string,\r\n        news += s[-1]  # append that last character as well\r\n    return news  # return the new string\r\n\r\nprint(z2)\r\ns = input(\"Enter a string in which letters should repeat: \")\r\n\r\nprint(\"Source string: \" + s)\r\ns = delete_duplicates(s)  # call the function\r\nprint(\"Transformed string: \" + s)\r\ninput(\"Press ENTER to finish\")\r\n"
},
{
"alpha_fraction": 0.5564202070236206,
"alphanum_fraction": 0.5739299654960632,
"avg_line_length": 23.5,
"blob_id": "7cf1f9ef5781265c9669da134f0860fa0516f396",
"content_id": "f7a9d71e6cf53a114bfffc0829c3f5adf39d8019",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1397,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 40,
"path": "/z1.py",
"repo_name": "Kurone96Kou/KT1",
"src_encoding": "UTF-8",
"text": "\"\"\"Task statement\"\"\"\r\n\r\nz1 = \\\r\n    \"Task 1. A sequence of n real numbers is given.\\n\"\\\r\n    \"Determine how many of them have an integer part divisible by 5\\n\"\r\n\r\nprint(z1)\r\nn = 0\r\nwhile True:\r\n    n = input(\"Enter the number of elements in the sequence: \")\r\n    try:\r\n        n = int(n)\r\n    except ValueError as ex:\r\n        print(\"That is not a number, try again\")\r\n        continue\r\n    if n <= 0:\r\n        print(\"The number entered is less than 1, try again\")\r\n    else:\r\n        break\r\n\r\ni = 0\r\na = 0\r\nk = 0\r\nwhile i != n:\r\n    a = input(\"Enter element \" + str(i + 1) + \": \")\r\n    try:\r\n        a = float(a)\r\n    except ValueError as ex:\r\n        print(\"That is not a number, try again\")\r\n        continue\r\n\r\n    if int(a) % 5 == 0:\r\n        k += 1\r\n        print(\"\\tThe integer part is divisible by 5\")\r\n    else:\r\n        print(\"\\tThe integer part is not divisible by 5\")\r\n    i += 1\r\n\r\nprint(\"Number of elements whose integer part is divisible by 5: \" + str(k))\r\ninput(\"Press ENTER to finish\")\r\n"
}
] | 2 |
HectorColasValtuena/NachikuAssventurePrototype | https://github.com/HectorColasValtuena/NachikuAssventurePrototype | 39d348563c0b6f1ed13d0c9bb87ffed03ca9e864 | db807ed7748e3242b58ae827d49db64714b3f884 | adb007e03993798f7bf0251fda35a6391c36aead | refs/heads/master | 2023-06-18T00:28:22.392687 | 2021-04-30T10:02:24 | 2021-04-30T10:02:24 | 289,021,404 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7729257345199585,
"alphanum_fraction": 0.7729257345199585,
"avg_line_length": 18.16666603088379,
"blob_id": "7a872172b0f8c5ec2d5ac9f86e920432c7bea5dc",
"content_id": "d933ff579d2eec7a6bab90b927d1096befa47d05",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 231,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 12,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/OneShotTriggers/AudioPlayerOneShot.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic class AudioPlayerOneShot : PlayerOneShotBase<AudioSource>\n\t{\n\t\tprotected override void Play (AudioSource audioSource)\n\t\t{\n\t\t\taudioSource.Play();\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7279781103134155,
"alphanum_fraction": 0.7393884062767029,
"avg_line_length": 27.08974266052246,
"blob_id": "958afed22829aeb72b559b46afe2f22bda37b9af",
"content_id": "e3fb0f78713fcb10bfe191ed8ebd82d90324e525",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2193,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 78,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/ViewportScroller.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.CameraSystem\n{\n\tpublic class ViewportScroller : MonoBehaviour\n\t{\n\t//serialized properties\n\t\t[SerializeField]\n\t\tprivate float scrollingRate = 1f;\n\n\t\t[SerializeField]\n\t\tprivate float borderScrollLimits = 0.05f;\n\t//ENDOF serialized properties\n\n\t\tprivate RectCameraControllerScrollable scrollable;\n\n\t//private properties\n\t\tprivate Rect cameraRect { get { return ControllerCache.viewportController.rect; }}\n\n\t\tprivate float noScrollRadius { get { return 0.5f - borderScrollLimits; }}\n\t//ENDOF private properties\n\n\t//MonoBehaviour lifecycle\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tscrollable = GetComponent<RectCameraControllerScrollable>();\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tVector2 scrollingVector = GetScrollingVector();\n\n\t\t\tif (scrollingVector.magnitude > 0)\n\t\t\t{\n\t\t\t\tscrollable.Scroll(scrollingVector * scrollingRate);\n\t\t\t}\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//private methods\n\t\t//calculates desired scrolling direction and intensity\n\t\tpublic Vector2 GetScrollingVector ()\n\t\t{\n\t\t\t//ControllerCache.toolManager.activeTool\n\t\t\tif (ControllerCache.toolManager.activeTool.auto)\n\t\t\t{\n\t\t\t\treturn Vector2.zero;\n\t\t\t}\n\n\t\t\t//normalized distance = distance / camera size\n\t\t\tVector2 normalizedDistance =\n\t\t\t\t((Vector2) ControllerCache.toolManager.activeTool.position - cameraRect.center)\n\t\t\t\t/ cameraRect.size;\n\n\t\t\t//cut out the inner rectangle by moving towards 0 so centered hands don't move the camera \n\t\t\tVector2 marginDistance = new Vector2(\n\t\t\t\tx: Mathf.MoveTowards(current: normalizedDistance.x, target: 0, maxDelta: noScrollRadius),\n\t\t\t\ty: Mathf.MoveTowards(current: normalizedDistance.y, target: 0, maxDelta: noScrollRadius)\n\t\t\t);\n\t\t\t/*\n\t\t\tVector2 marginDistance = new Vector2(\n\t\t\t\tx: normalizedDistance.x - (Mathf.Sign(normalizedDistance.x) * noScrollRadius),\n\t\t\t\ty: normalizedDistance.y - (Mathf.Sign(normalizedDistance.y) * noScrollRadius)\n\t\t\t);\n\t\t\t//*/\n\n\t\t\tVector2 scrollingMagnitude = marginDistance / borderScrollLimits;\n\n\t\t\treturn new Vector2(\n\t\t\t\tx: Mathf.Clamp(scrollingMagnitude.x, -1, 1),\n\t\t\t\ty: Mathf.Clamp(scrollingMagnitude.y, -1, 1)\n\t\t\t);\n\t\t}\n\t//ENDOF private methods\n\t}\n}\n"
},
{
"alpha_fraction": 0.7747747898101807,
"alphanum_fraction": 0.7747747898101807,
"avg_line_length": 21.299999237060547,
"blob_id": "445746c5d11d954f9c81faf456841788f06b018b",
"content_id": "db3c5239affdec90c1f49b9ccf19edebc3e69ed5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 222,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 10,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Managers/IToolManager.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ITool = ASSPhysics.HandSystem.Tools.ITool;\n\nnamespace ASSPhysics.HandSystem.Managers\n{\n\tpublic interface IToolManager : ASSPhysics.ControllerSystem.IController\n\t{\n\t\tITool[] tools {get;}\n\t\tITool activeTool {get;}\n\t}\n}"
},
{
"alpha_fraction": 0.7623762488365173,
"alphanum_fraction": 0.7689769268035889,
"avg_line_length": 26.636363983154297,
"blob_id": "7e49745de9f4cde34ab00027ddedcb04ca5eeeab",
"content_id": "e91ccf793090d977312631889f4ec73b88b1fe47",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 303,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 11,
"path": "/Assets/Scripts/ASSPhysics/SceneSystem/ISceneController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.SceneSystem\n{\n\tpublic interface ISceneController : ASSPhysics.ControllerSystem.IController\n\t{\n\t\t//is input enabled or blocked\n\t\tbool inputEnabled { get; }\n\n\t\t//request a scene change using unity build index number\n\t\tvoid ChangeScene (int targetScene, float minimumWait = 0.0f);\n\t}\n}"
},
{
"alpha_fraction": 0.7650349736213684,
"alphanum_fraction": 0.7650349736213684,
"avg_line_length": 23.689655303955078,
"blob_id": "fedcf5d40491fce00980a154ca913a7dcbe575c6",
"content_id": "d7caff0b8c7a117af849298c5987576bd753370a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 717,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 29,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogChangers/Tutorial/DialogChangerOnActionGrabAutomated.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using Debug = UnityEngine.Debug;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nusing ITool = ASSPhysics.HandSystem.Tools.ITool;\n\nnamespace ASSPhysics.DialogSystem.DialogChangers\n{\n\tpublic class DialogChangerOnActionGrabAutomated : DialogChangerOnConditionHeldBase\n\t{\n\t//serialized fields and properties\n\t\tprivate ITool[] toolList\n\t\t{\n\t\t\tget\t{ return ControllerCache.toolManager.tools;\t}\n\t\t}\n\t//ENDOF serialized fields and properties\n\n\t//base class abstract method implementation\n\t\tprotected override bool CheckHeldCondition ()\n\t\t{\n\t\t\tforeach (ITool tool in toolList)\n\t\t\t{\n\t\t\t\tif (tool.auto) { return true; }\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\t//ENDOF base class abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7639257311820984,
"alphanum_fraction": 0.7639257311820984,
"avg_line_length": 24.200000762939453,
"blob_id": "6f35e33f5e66059cbc5b6a1a6d130a9741b3d78d",
"content_id": "f4a5afaac08540a28a5dde1b30232f953243a073",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 377,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 15,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Weavers/WeaverInspectorManyToOne.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic class WeaverInspectorManyToOne : WeaverInspectorManyToXBase\n\t{\n\t\tpublic Rigidbody commonRigidbody;\n\t\tpublic Rigidbody targetRigidbody { get {\n\t\t\t//return this rigidbody if no target rigidbody is set\n\t\t\treturn (commonRigidbody != null)\n\t\t\t\t? commonRigidbody\n\t\t\t\t: gameObject.GetComponent<Rigidbody>();\n\t\t}}\n\t}\n}"
},
{
"alpha_fraction": 0.8299776315689087,
"alphanum_fraction": 0.8322147727012634,
"avg_line_length": 33.46154022216797,
"blob_id": "b81c6eb8fd4d5918b3e0fd08353d89fc6c9be46e",
"content_id": "6fa27a1fecad9e40cea27d275835abf63dc71887",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 449,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 13,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Riggers/SkinSurfaceRiggerInspector.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEngine.U2D.Animation;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\t[RequireComponent(typeof(SpriteRenderer))]\n\t[RequireComponent(typeof(SpriteSkin))]\n\tpublic class SkinSurfaceRiggerInspector : SpriteSkinRiggerInspectorBase\n\t{\n\t\tpublic ConfigurableJoint defaultMeshJoint;\t\t//Sample inter-vertex joint configuration\n\t\tpublic ConfigurableJoint defaultAnchorJoint;\t//Sample anchor joint (parent-connected) configuration\n\t}\n}"
},
{
"alpha_fraction": 0.7480719685554504,
"alphanum_fraction": 0.7542416453361511,
"avg_line_length": 31.433332443237305,
"blob_id": "a45656b427b490e7468b99ba92c796db35c8f222",
"content_id": "9c664e488882399079e5274e240ce3253b8ab458",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1947,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 60,
"path": "/Assets/Scripts/ASSPhysics/InputSystem/MouseInputController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing InputSettings = ASSPhysics.SettingSystem.InputSettings; //InputSettings\n\nusing IViewportController = ASSPhysics.CameraSystem.IViewportController;\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.InputSystem\n{\n\tpublic class MouseInputController :\n\t\tASSPhysics.ControllerSystem.IController,\n\t\tIInputController\n\t{\n\t//const definitions\n\t\tprivate const string mouseXAxisName = \"Mouse X\";\n\t\tprivate const string mouseYAxisName = \"Mouse Y\";\n\t//ENDOF const definitions\n\n\t//private properties\n\t\tprivate float screenSizeFactor { get { return ControllerCache.viewportController.size; }}\n\t//ENDOF private properties\n\n\t//IController implementation\n\t\tpublic bool isValid\n\t\t{\n\t\t\tget { return true; }\n\t\t}\n\t//ENDOF IController implementation\n\n\t//IInputController implementation\n\t\t//returns a vector3 representing the movement of the mouse during the last frame\n\t\tpublic Vector3 delta { get { return new Vector3 (UnityEngine.Input.GetAxis(mouseXAxisName), UnityEngine.Input.GetAxis(mouseYAxisName), 0f); }}\n\t\tpublic Vector3 screenSpaceDelta { get { \n\t\t\treturn ControllerCache.viewportController.ScreenSpaceToWorldSpace(delta, worldSpace: false);\n\t\t}}\n\n\t\t//scaled delta for configurable controls\n\t\tpublic Vector3 scaledDelta { get { return delta * InputSettings.mouseDeltaScale * screenSizeFactor; }}\n\n\t\t//gets zoom input\n\t\t//public float zoomDelta { get { return Input.mouseScrollDelta.y * InputSettings.mouseScrollDeltaScale; }}\n\t\tpublic float zoomDelta\n\t\t{\n\t\t\tget {\n\t\t\t\treturn -1 * (Input.mouseScrollDelta.y * InputSettings.mouseScrollDeltaScale);\n\t\t\t\t/* commented how to get scroll input through keyboard keys\n\t\t\t\t+ ((Input.GetKey(KeyCode.R))\n\t\t\t\t\t? (+ 0.1f)\n\t\t\t\t\t: (Input.GetKey(KeyCode.F))\n\t\t\t\t\t\t? (- 0.1f)\n\t\t\t\t\t\t: 0);\n\t\t\t\t*/\n\t\t\t}\n\t\t}\n\n\t\t//gets button pressed\n\t\tpublic bool GetButtonDown (int buttonID) { return Input.GetMouseButtonDown(buttonID); }\n\t//IInputController implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7311828136444092,
"alphanum_fraction": 0.7311828136444092,
"avg_line_length": 12.428571701049805,
"blob_id": "2be56d99274743566736bb7e258525949dc2638a",
"content_id": "232ebee252ab469d37b6a50ec9ed3ef615c7592a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 95,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 7,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Base/IEditorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Editors\n{\n\tpublic interface IEditorBase\n\t{\n\t\tvoid DoSetup ();\n\t}\n}"
},
{
"alpha_fraction": 0.7558411359786987,
"alphanum_fraction": 0.7558411359786987,
"avg_line_length": 27.081966400146484,
"blob_id": "f78e5917c3cff907dfbb983fee32c67dbf22053e",
"content_id": "f41d70239fe09242d7402718c4795e679e34f53e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1712,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 61,
"path": "/Assets/Scripts/ASSPhysics/ControllerSystem/ControllerProvider.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ServiceContainer = System.ComponentModel.Design.ServiceContainer;\n\nnamespace ASSPhysics.ControllerSystem\n{\n\tpublic static class ControllerProvider\n\t{\n\t//private fields and properties\n\t\tprivate static ServiceContainer serviceContainer;\n\n\t//ENDOF private fields and properties\n\n\n\t//public methods\n\t\t//return controller for type TController\n\t\tpublic static TController GetController <TController> ()\n\t\t\twhere TController : IController\n\t\t{\n\t\t\treturn (TController) serviceContainer?.GetService(typeof(TController));\n\t\t}\n\n\t\t//register a controller instance as TController type\n\t\tpublic static void RegisterController <TController> (IController controller)\n\t\t\twhere TController : IController\n\t\t{\n\t\t\tInitializeContainer();\n\n\t\t\tif (GetController<TController>() != null)\n\t\t\t{\n\t\t\t\tDisposeController<TController>();\n\t\t\t}\n\n\t\t\tserviceContainer.AddService(typeof(TController), controller);\n\t\t}\n\n\t\t//remove the controller of type TController. if a controller parameter is passed, removal will only be performed if controllers coincide\n\t\tpublic static void DisposeController <TController> (IController controller)\n\t\t\twhere TController : class, IController\n\t\t{\n\t\t\tTController castedController = (TController) controller;\n\t\t\tif (GetController<TController>() == castedController)\n\t\t\t{\n\t\t\t\tDisposeController<TController>();\n\t\t\t}\n\t\t}\n\t\tpublic static void DisposeController <TController> ()\n\t\t\twhere TController : IController\n\t\t{\n\t\t\tserviceContainer.RemoveService(typeof(TController));\n\t\t}\n\t//ENDOF public methods\n\n\t//private methods\n\t\t//ensure a container exists\n\t\tprivate static void InitializeContainer ()\n\t\t{\n\t\t\tif (serviceContainer == null)\n\t\t\t{ serviceContainer = new ServiceContainer(); }\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.8222222328186035,
"alphanum_fraction": 0.8222222328186035,
"avg_line_length": 19.11111068725586,
"blob_id": "7c2756cea1e8d3ba508d768c102582f4a96715c5",
"content_id": "3f72914ddd687ed8d474fee1e1ed21fd89fc13fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 180,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 9,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Weavers/WeaverInspectorManyToXBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic abstract class WeaverInspectorManyToXBase : WeaverInspectorBase\n\t{\n\t\tpublic Rigidbody[] originRigidbodyList;\n\t}\n}"
},
{
"alpha_fraction": 0.7175615429878235,
"alphanum_fraction": 0.718120813369751,
"avg_line_length": 24.927536010742188,
"blob_id": "8a1621092d85263d1dc433b842c204e3d6c8c6d8",
"content_id": "5d60f6b93753a2b7d7d9fa7177aac0cd21859f6e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1790,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 69,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/RectTransformExtensions.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing Axis = UnityEngine.RectTransform.Axis;\n\nusing RectMath = ASSistant.ASSMath.RectMath;\n\nnamespace ASSPhysics.CameraSystem\n{\n\tpublic static class RectTransformExtensions\n\t{\n\t\t//returns this rectTransform's rect with its worldspace position applied\n\t\tpublic static Rect EMGetWorldRect (this RectTransform rectTransform)\n\t\t{\n\t\t\treturn RectMath.MoveRect(\n\t\t\t\trect: rectTransform.rect,\n\t\t\t\tmovement: rectTransform.position\n\t\t\t);\n\t\t}\n\n\t\t//alters rectTransform's dimensions and position according to given rect\n\t\tpublic static void EMSetRect (this RectTransform rectTransform, Rect rect)\n\t\t{\n\t\t\t//set position\n\t\t\trectTransform.position = rectTransform.EMGetPivotedPosition(rect);\n\n\t\t\t//set width\n\t\t\trectTransform.SetSizeWithCurrentAnchors(\n\t\t\t\taxis: Axis.Horizontal,\n\t\t\t\tsize: rect.width\n\t\t\t);\n\n\t\t\t//set height\n\t\t\trectTransform.SetSizeWithCurrentAnchors(\n\t\t\t\taxis: Axis.Vertical,\n\t\t\t\tsize: rect.height\n\t\t\t);\n\t\t}\n\n\t\t//returns rect position applying this rectTransform's pivot\n\t\tpublic static Vector2 EMGetPivotedPosition (\n\t\t\tthis RectTransform rectTransform,\n\t\t\tRect? _rect = null\n\t\t) {\n\t\t\t//store a casted copy of received rect if any\n\t\t\t//or a copy of rectTransform's rect if none\n\t\t\tRect rect = (_rect != null)\n\t\t\t\t? (Rect) _rect\n\t\t\t\t: rectTransform.rect;\n\n\t\t\treturn rect.position + (rect.size * rectTransform.pivot);\n\t\t}\n\n\t\t/*\n\t\t//returns rect with its position offset by this rectTransform's pivot\n\t\tpublic static Rect EMGetPivotedRect (\n\t\t\tthis RectTransform rectTransform,\n\t\t\tRect? _rect = null\n\t\t) {\n\t\t\t//store a casted copy of received rect if any\n\t\t\t//or a copy of rectTransform's rect if none\n\t\t\tRect rect = (_rect != null)\n\t\t\t\t? 
(Rect) _rect\n\t\t\t\t: rectTransform.rect;\n\n\t\t\trect.position = rectTransform.EMGetPivotedPosition(rect);\n\t\t\treturn rect;\n\t\t}\n\t\t*/\n\t}\n}"
},
{
"alpha_fraction": 0.7821522355079651,
"alphanum_fraction": 0.7821522355079651,
"avg_line_length": 22.875,
"blob_id": "da2b31a2ac319fc4be8aa9aa19d9079c40acddae",
"content_id": "49504a5eb2c09b377910e60dfe4ad0970a894dc2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 383,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 16,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogControllers/IDialogController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.DialogSystem.DialogControllers\n{\n\tpublic interface IDialogController\n\t{\n\t\t//enable the dialog\n\t\tvoid Enable ();\n\n\t\t//disable the dialog and execute finishingcallback after done\n\t\tvoid AnimatedDisable (DParameterlessDelegate finishingCallback);\n\n\t\t//disable the dialog immediately\n\t\tvoid ForceDisable ();\n\t}\n\n\tpublic delegate void DParameterlessDelegate ();\t\n}"
},
{
"alpha_fraction": 0.8059467673301697,
"alphanum_fraction": 0.8059467673301697,
"avg_line_length": 28.045454025268555,
"blob_id": "edf24946b3edc6ce15692bd5ea62f396c50cbaa7",
"content_id": "8c9976c9ab191a8777e4807b8d9be28dc100f768",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 639,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 22,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Weavers/WeaverEditorManyToOne.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing WeaverInspectorManyToOne = ASSpriteRigging.Inspectors.WeaverInspectorManyToOne;\n\nnamespace ASSpriteRigging.Editors\n{\n\t[CustomEditor(typeof(WeaverInspectorManyToOne))]\n\tpublic class WeaverEditorManyToOne : WeaverEditorBase<WeaverInspectorManyToOne>\n\t{\n\t//private method declaration\n\t\tpublic override void WeaveJoints()\n\t\t{\n\t\t\tforeach (Rigidbody originRigidbody in targetInspector.originRigidbodyList)\n\t\t\t{\n\t\t\t\tConnectRigidbodies(originRigidbody, targetInspector.targetRigidbody);\n\t\t\t}\n\t\t\tDebug.Log(targetInspector.name + \" Weaved ManyToOne joints\");\n\t\t}\n\t//ENDOF private method declaration\n\t}\n}\n"
},
{
"alpha_fraction": 0.7894737124443054,
"alphanum_fraction": 0.7894737124443054,
"avg_line_length": 22.071428298950195,
"blob_id": "8ced59235e7af6acbd081094d1475b515daaa410",
"content_id": "8098bc2621bece38158eba7f04870910e467d4b7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 323,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 14,
"path": "/Assets/Scripts/ASSPhysics/ChainSystem/ChainElementAutoFindParent.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.ChainSystem\n{\n\tpublic abstract class ChainElementAutoFindParent : ChainElementBase\n\t{\n\t//MonoBehaviour lifecycle implementation\n\t\tpublic virtual void Awake ()\n\t\t{\n\t\t\tSetParent(transform.parent.GetComponent<IChainElement>());\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\t}\n}\t"
},
{
"alpha_fraction": 0.724616527557373,
"alphanum_fraction": 0.7304601669311523,
"avg_line_length": 26.399999618530273,
"blob_id": "62d2823aac87c85f130fafbbdfa7a0d91aa93802",
"content_id": "f1e106e6b1323cf4c29b563781cf2edc55cbdb99",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1369,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 50,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/KickerAutoFireTorque.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing RandomSign = ASSistant.ASSRandom.RandomSign;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic class KickerAutoFireTorque : KickerOnConditionForceBase\n\t{\n\t//serialized properties \n\t\tpublic int direction; //if not zero determines the sign of the force applied. If 0, a direction will be chosen randomly each time\n\t//ENDOF serialized properties \n\n\t//private fields and properties\n\t//ENDOF private fields and properties\n\n\t//IKicker implementation\n\t\t//applies a random torque at a random direction as the kick\n\t\tpublic override void Kick ()\n\t\t{\n\t\t\t//add a torque of random intensity and random direction in Z axis\n\t\t\ttargetRigidbody.AddTorque(\n\t\t\t\tVector3.forward * randomForce.Generate() * GetDirection(),\n\t\t\t\tForceMode.Force\n\t\t\t);\n\t\t}\n\t//ENDOF IKicker implementation\n\n\t//MonoBehaviour Lifecycle\n\t//ENDOF MonoBehaviour Lifecycle\n\n\t//abstract method implementation\n\t\t//checkCondition is always true so kick repeats constantly every interval\n\t\tprotected override bool CheckCondition ()\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t//ENDOF abstract method implementation\n\n\t//private methods\n\t\tprivate int GetDirection ()\n\t\t{\n\t\t\treturn (direction > 0)\n\t\t\t\t\t\t? 1\t\t\t//if direction sign is + use 1\n\t\t\t\t\t : (direction < 0)\n\t\t\t\t\t\t? -1\t\t//if direction sign is - use -1\n\t\t\t\t\t\t: RandomSign.Generate();\t//if none, get a random sign\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.7477477192878723,
"alphanum_fraction": 0.7486960887908936,
"avg_line_length": 30.969696044921875,
"blob_id": "6bfe038d5b7423606126966abbfbfa8c77b3f19a",
"content_id": "4e00d2bcf035ca2ca8c5ff4adecf74a5f901fdb3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2109,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 66,
"path": "/Assets/Editor/ASSpriteRigging/BoneUtility/BoneHierarchy.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing UnityEngine.U2D.Animation; //SpriteSkin\nusing U2DAnimationAccessor;\t//SpriteSkin.\n\nusing IRiggerInspector = ASSpriteRigging.Inspectors.IRiggerInspector;\n\nnamespace ASSpriteRigging.BoneUtility\n{\n\tpublic static class BoneHierarchy\n\t{\n\t\t//finds a joint of type TJoint connected to target transform or rigidbody.\n\t\t//returns null if target is not connected or non-existant\n\t\tpublic static TJoint BoneFindJointConnected <TJoint> (Transform bone, Transform target)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\tRigidbody targetRigidbody = target.gameObject.GetComponent<Rigidbody>();\n\t\t\tif (targetRigidbody == null) { return null; }\n\t\t\treturn BoneFindJointConnected<TJoint> (bone, targetRigidbody);\n\t\t}\n\t\tpublic static TJoint BoneFindJointConnected <TJoint> (Transform bone, Rigidbody targetRigidbody)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\t//get a list of all the joints of type TJoint contained in the origin bone\n\t\t\tTJoint[] jointList = bone.gameObject.GetComponents<TJoint>();\n\t\t\t//find a joint connected to target rigidbody and return it\n\t\t\tforeach (TJoint joint in jointList)\n\t\t\t{\n\t\t\t\tif (joint.connectedBody == targetRigidbody)\n\t\t\t\t{\n\t\t\t\t\treturn joint;\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn null;//return null if none found\n\t\t}\n\n\t\t//creates gameobjects for every bone and stores them in spriteskin\n\t\tpublic static void CreateBoneHierarchy (IRiggerInspector rigger)\n\t\t{\n\t\t\tSpriteSkin spriteSkin = rigger.spriteSkin;\n\t\t\tSprite sprite = rigger.sprite;\n\n\t\t\tif (sprite == null || spriteSkin.rootBone != null)\n\t\t\t{\n\t\t\t\tDebug.LogError(\"No sprite or no rootBone @\" + spriteSkin.gameObject.name);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tUndo.RegisterCompleteObjectUndo(spriteSkin, \"Create Bones\");\n\n\t\t\t//call accessor-exposed CreateBoneHierarchy method on the sprite skin\n\t\t\t//this is what creates the transform 
structure\n\t\t\tspriteSkin.PublicCreateBoneHierarchy();\n\n\t\t\tforeach (Transform transform in spriteSkin.boneTransforms) \n\t\t\t{\n\t\t\t\tUndo.RegisterCreatedObjectUndo(transform.gameObject, \"Create Bones\");\n\t\t\t}\n\n\t\t\t//reset bounds if needed\n\t\t\tspriteSkin.CalculateBoundsIfNecessary();\n\t\t\tEditorUtility.SetDirty(spriteSkin);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.8248587846755981,
"alphanum_fraction": 0.8248587846755981,
"avg_line_length": 18.77777862548828,
"blob_id": "7ba61ad10776c7240700633f188d6955cd3119dd",
"content_id": "524fce4dba7b256af369844b018a93770e586a72",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 177,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 9,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Weavers/WeaverInspectorManyToMany.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic class WeaverInspectorManyToMany : WeaverInspectorManyToXBase\n\t{\n\t\tpublic Rigidbody[] targetRigidbodyList;\n\t}\n}"
},
{
"alpha_fraction": 0.76408451795578,
"alphanum_fraction": 0.76408451795578,
"avg_line_length": 18,
"blob_id": "bbf7a558d4f2ab7e6f30632b20517a3a0cf38989",
"content_id": "2d7473de7ac4a631f845d15ff20f1e60005987ba",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 286,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 15,
"path": "/Assets/Scripts/ASSPhysics/SceneSystem/LauncherController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing UnityEngine;\n\nnamespace ASSPhysics.SceneSystem\n{\n\tpublic class LauncherController : MonoBehaviour\n\t{\n\t\t//public GameObject SplashContainer\n\t\tpublic void Launch ()\n\t\t{\n\t\t\tCursorLocker.LockAndHideSystemCursor();\n\t\t\tSceneController.Initialize();\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7546728849411011,
"alphanum_fraction": 0.7546728849411011,
"avg_line_length": 29.64285659790039,
"blob_id": "2ce82cd701a67d92dbfb1e664203ff65ddc96f3c",
"content_id": "a2d5963fd9c9877d364b72fa6627b0debac367d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 430,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 14,
"path": "/Assets/Scripts/ASSPhysics/ChainSystem/Interfaces/IChainElement.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.ChainSystem\n{\n\tpublic interface IChainElement\n\t{\n\t\tIChainElement chainParent {get;}\n\t\tint childCount {get;}\n\n\t\tvoid SetParent (IChainElement parent);\t//set this element's parent element. should also add itself to parent childlist\n\t\tvoid AddChild (IChainElement newChild);\t//add an element to child list\n\t\tIChainElement GetChild (int index);\t\t//fetch a child by index\n\n\t\t//void RemoveChild (int index);\n\t}\n}"
},
{
"alpha_fraction": 0.7904564142227173,
"alphanum_fraction": 0.7904564142227173,
"avg_line_length": 25.83333396911621,
"blob_id": "ae30ca155911c8d85c00f73d7201e5410a4f7afa",
"content_id": "23dcd35d04202916133066877adb3b93164c408b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 484,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 18,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/OneShotTriggers/ParticleSystemPlayerOneShot.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic class ParticleSystemPlayerOneShot : PlayerOneShotBase<ParticleSystem>\n\t{\n\t\t[SerializeField]\n\t\tprivate bool forceRestart = true;\n\t\t[SerializeField]\n\t\tprivate bool propagateToChildren = true;\n\n\t\tprotected override void Play (ParticleSystem particleSystem)\n\t\t{\n\t\t\tif (forceRestart) { particleSystem.Stop(withChildren: propagateToChildren); }\n\t\t\tparticleSystem.Play(withChildren: propagateToChildren);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7462061643600464,
"alphanum_fraction": 0.7467294335365295,
"avg_line_length": 27.53731346130371,
"blob_id": "4a39caf11732b0441ad34a8e8e1c5c709a8c9682",
"content_id": "9884b4f4fc1cc21a23ea868d543ac734510e1096",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1913,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 67,
"path": "/Assets/Scripts/ASSPhysics/InteractableSystem/InteractableBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEngine.Events;\t//UnityEvent\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames; \n\nusing EInputState = ASSPhysics.InputSystem.EInputState; //EInputState\n\nnamespace ASSPhysics.InteractableSystem\n{\n\tpublic abstract class InteractableBase : MonoBehaviour, IInteractable\n\t{\n\t//serialized fields and properties\n\t\t//callback stack to execute upon triggering\n\t\t[SerializeField]\n\t\tprivate UnityEvent callback = null;\n\t//serialized fields and properties\n\n\t//private fields and properties\n\t\tprotected Animator animator;\t//animator used by this interactable\n\n\t\t//wether this interactable is being highlighted by an active interactor\n\t\tprivate bool _highlighted = false;\n\t\tprotected virtual bool highlighted\n\t\t{\n\t\t\tget { return _highlighted; }\n\t\t\tset\n\t\t\t{\n\t\t\t\tif (value != highlighted)\n\t\t\t\t{\n\t\t\t\t\t_highlighted = value;\n\t\t\t\t\tHighlightChanged(highlighted);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t//ENDOF private fields and properties\n\n\t//IInteractable implementation\n\t\t//interactable with highest priority will be called when several in range\n\t\t[SerializeField]\n\t\tprivate int _priority = 0;\n\t\tpublic int priority { get { return _priority; }}\n\n\t\tpublic abstract void Interact (EInputState state);\n\t//ENDOF IInteractable implementation\n\n\t//MonoBehaviour lifecycle\n\t\tpublic virtual void Awake ()\n\t\t{\n\t\t\tanimator = gameObject.GetComponent<Animator>();\n\t\t}\n\n\t\t//collision with an interactor highlights or un-highlights this interactable\n\t\tpublic void OnTriggerEnter () { Debug.Log(\"TriggerEnter\"); highlighted = true; }\n\t\tpublic void OnTriggerExit () { Debug.Log(\"TriggerExit\"); highlighted = false; }\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//private methods\n\t\tprotected virtual void HighlightChanged (bool state)\n\t\t{\n\t\t\tif (animator != null)\n\t\t\t\tanimator.SetBool(AnimationNames.Interactable.highlighted, 
state);\n\t\t}\n\t\t\n\t\tprotected virtual void TriggerCallbacks () { callback.Invoke(); }\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.7523319125175476,
"alphanum_fraction": 0.7558700442314148,
"avg_line_length": 37.39506149291992,
"blob_id": "3a686e012ade94fb869b6d1c107ccd2c5cd8fdbd",
"content_id": "7de8a937ec66f8bd25ca72bb3ec72996c43e49d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3111,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 81,
"path": "/Assets/Editor/ASSpriteRigging/BoneUtility/BoneRigging.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing Unity.Collections; //NativeArray<T>\n\nusing static ASSistant.ComponentConfiguration.ComponentConfigurerGeneric; //Component.EMApplySettings(sample);\nusing ASSpriteRigging.BoneUtility;\t//BoneHierarchy.BoneFindJointConnected()\n\nnamespace ASSpriteRigging.BoneUtility\n{\n\tpublic static class BoneRigging\n\t{\n\t\tpublic static void BoneSetTagAndLayer (Transform bone, string targetTag, int targetLayer)\n\t\t{\n\t\t\tif (targetTag != null) { bone.gameObject.tag = targetTag; }\n\t\t\tif (targetLayer >= 0) { bone.gameObject.layer = targetLayer; }\n\t\t}\n\n\t\t//Ensures bone Transform contains one component of type T and applies sample settings if received\n\t\tpublic static TComponent BoneSetupComponent <TComponent> (Transform bone, TComponent sample)\n\t\t\twhere TComponent: Component\n\t\t{\n\t\t\treturn BoneSetupComponent<TComponent>(bone).EMApplySettings(sample);\n\t\t}\n\t\tpublic static TComponent BoneSetupComponent <TComponent> (Transform bone)\n\t\t\twhere TComponent: Component\n\t\t{\n\t\t\tTComponent component = bone.gameObject.GetComponent<TComponent>();\n\t\t\tif (component == null) { component = ObjectFactory.AddComponent<TComponent>(bone.gameObject); }\n\t\t\treturn component;\n\t\t}\n\n\t\t//Creates a joint from bone transform to target transform/rigidbody, and applies sample settings\n\t\tpublic static TJoint BoneConnectJoint <TJoint> (Transform bone, Transform target, TJoint sample)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\treturn BoneConnectJoint<TJoint> (bone, target.gameObject.GetComponent<Rigidbody>(), sample);\n\t\t}\n\t\tpublic static TJoint BoneConnectJoint <TJoint> (Transform bone, Rigidbody targetRigidbody, TJoint sample)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\t//first try to find a pre-existing joint of adequate type and connected target\n\t\t\tTJoint joint = BoneHierarchy.BoneFindJointConnected<TJoint>(bone, targetRigidbody);\n\n\t\t\t//if desired joint did not exist, create a new 
joint\n\t\t\tif (joint == null)\n\t\t\t{\n\t\t\t\tjoint = ObjectFactory.AddComponent<TJoint>(bone.gameObject);\n\t\t\t}\n\n\t\t\t//copy public properties from sample object, connect the joint to the target, and return it\n\t\t\tjoint.EMApplySettings(sample);\n\t\t\tjoint.connectedBody = targetRigidbody;\n\t\t\treturn joint;\n\t\t}\n\n\t\t//find and remove spring joint connected to target\n\t\tpublic static void BoneRemoveConnectedJoint <TJoint> (Transform bone, Transform connectedTarget)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\tTJoint foundJoint = BoneHierarchy.BoneFindJointConnected<TJoint>(bone, connectedTarget);\n\t\t\tif (foundJoint != null) \n\t\t\t{\n\t\t\t\tObject.DestroyImmediate(foundJoint);\n\t\t\t}\n\t\t}\n\n\t\t//creates a spring connecting both bones. If mutual, creates a spring from each bone, only from the first otherwise\n\t\tpublic static void InterconnectBonePair <TJoint> (Transform bone1, Transform bone2, TJoint sample, bool mutual = false)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\tif (bone1 == bone2) { return; }\n\n\t\t\tBoneConnectJoint<TJoint>(bone1, bone2, sample);\n\n\t\t\t//if connection is mutual create the opposite joint. if not, ensure there is no opposite joint\n\t\t\tif (mutual) { BoneConnectJoint<TJoint>(bone2, bone1, sample); }\n\t\t\telse { BoneRemoveConnectedJoint<TJoint>(bone2, bone1); }\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7629984021186829,
"alphanum_fraction": 0.7633174061775208,
"avg_line_length": 32.36170196533203,
"blob_id": "63c965af27044a72aa3dc83762fead476a986dda",
"content_id": "3010582e7f1b045f49c3087022080446e600fcfc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3135,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 94,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Propagators/PropagatorEditorTransformTree.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing PropagatorInspectorTransformTree = ASSpriteRigging.Inspectors.PropagatorInspectorTransformTree;\n\nusing IRiggerInspector = ASSpriteRigging.Inspectors.IRiggerInspector;\nusing IWeaverInspector = ASSpriteRigging.Inspectors.IWeaverInspector;\nusing IPropagatorInspector = ASSpriteRigging.Inspectors.IPropagatorInspector;\n\nnamespace ASSpriteRigging.Editors\n{\n\t[UnityEditor.CustomEditor(typeof(PropagatorInspectorTransformTree))]\n\tpublic class PropagatorEditorTransformTree\n\t:\n\t\tPropagatorEditorBase<PropagatorInspectorTransformTree>\n\t{\n\t//[TO-DO] [IMPORTANT] Propagation should probably ignore disabled gameobjects\n\n\t//IPropagatorEditor implementation\n\t //IEditorBase implementation\n\t\tpublic override void DoSetup ()\n\t\t{\n\t\t\tDebug.LogWarning(targetInspector.name + \" > PropagatorEditorTransformTree.DoSetup() initiating...\");\n\t\t\tRecursivelyApply(targetInspector.transform, ApplySetup);\n\t\t\tDebug.Log(targetInspector.name + \" > PropagatorEditorTransformTree.DoSetup() done\");\n\t\t}\n\t //ENDOF IEditorBase implementation\n\t\t\n\t //IEditorPurgeableBase implementation\n\t\tpublic override void DoPurge ()\n\t\t{\n\t\t\tDebug.LogWarning(targetInspector.name + \" > PropagatorEditorTransformTree.DoPurge() initiating...\");\n\t\t\tRecursivelyApply(targetInspector.transform, ApplyPurge);\n\t\t\tDebug.Log(targetInspector.name + \" > PropagatorEditorTransformTree.DoPurge() done\");\n\t\t}\n\t //ENDOF IEditorPurgeableBase implementation\n\n\t\tpublic override void DoPropagate (PropagationApplicationDelegate apply)\n\t\t{\n\t\t\tRecursivelyApply(targetInspector.transform, apply);\n\t\t}\n\t//ENDOF IPropagatorEditor implementation\n\n\t//private methods\n\t\t//recursively propagate call\n\t\tprivate void RecursivelyApply (Transform root, PropagationApplicationDelegate apply)\n\t\t{\n\t\t\tfor (int i = 0, iLimit = root.childCount; i < iLimit; i++)\n\t\t\t{\n\t\t\t\tTransform child = root.GetChild(i);\n\n\t\t\t\t//for 
each immediate child, apply propagation\n\t\t\t\tapply(child);\n\n\t\t\t\t//then attempt to propagate call to pre-existing propagators in target's children\n\t\t\t\tbool propagated = false;\n\t\t\t\tforeach (Component propagatorInspector in child.GetComponents<IPropagatorInspector>())\n\t\t\t\t{\n\t\t\t\t\t(CreateEditor(propagatorInspector) as IPropagatorEditor)?.DoPropagate(apply);\n\t\t\t\t\tpropagated = true;\n\t\t\t\t}\n\n\t\t\t\t//if no propagator found, manually propagate recursive propagation\n\t\t\t\tif (!propagated)\n\t\t\t\t{\n\t\t\t\t\tRecursivelyApply(child, apply);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t//Executes a Setup request on every child Rigger and Weaver\n\t\tprivate void ApplySetup (Transform root)\n\t\t{\n\t\t\tforeach (Component riggerInspector in root.GetComponents<IRiggerInspector>())\n\t\t\t{\n\t\t\t\t(CreateEditor(riggerInspector) as IRiggerEditor)?.DoSetup();\n\t\t\t}\n\n\t\t\tforeach (Component weaverInspector in root.GetComponents<IWeaverInspector>())\n\t\t\t{\n\t\t\t\t(CreateEditor(weaverInspector) as IWeaverEditor)?.DoSetup();\t\n\t\t\t}\n\t\t}\n\t\t\n\t\t//Executes a Setup request on every child Rigger. Weavers do not purge\n\t\tprivate void ApplyPurge (Transform root)\n\t\t{\n\t\t\tforeach (Component riggerInspector in root.GetComponents<IRiggerInspector>())\n\t\t\t{\n\t\t\t\t(CreateEditor(riggerInspector) as IRiggerEditor)?.DoPurge();\n\t\t\t}\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.8109756112098694,
"alphanum_fraction": 0.8109756112098694,
"avg_line_length": 14,
"blob_id": "49dfc057cedd52a41e8dba361f98066e79b9a077",
"content_id": "8f2ab7082f55eebc21b455fab1863f4dae63bb6c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 166,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 11,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Propagators/PropagatorInspectorTransformTree.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic class PropagatorInspectorTransformTree\n\t:\n\t\tArmableInspectorBase,\n\t\tIPropagatorInspector\n\t{\n\t}\n}"
},
{
"alpha_fraction": 0.7735229730606079,
"alphanum_fraction": 0.7778993248939514,
"avg_line_length": 25.91176414489746,
"blob_id": "396641256dbac7b7ee3c285a1ae2b64af09d7063",
"content_id": "15b890b4fc6747c08e3d04dc776cd7e5f60da696",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 914,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 34,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/KickerAutoFireForce.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing Vector3Math = ASSistant.ASSMath.Vector3Math;\nusing RandomRangeFloat = ASSistant.ASSRandom.RandomRangeFloat;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic class KickerAutoFireForce : KickerOnConditionForceBase\n\t{\n\t//serialized properties \n\t\t[SerializeField]\n\t\tpublic RandomRangeFloat forceAngleRange;\n\t//ENDOF serialized properties \n\n\t//IKicker implementation\n\t\t//applies a random force at a random direction as the kick\n\t\tpublic override void Kick ()\n\t\t{\n\t\t\ttargetRigidbody.AddForce(\n\t\t\t\t\tforce: Vector3Math.AngleToVector3(forceAngleRange.Generate()) * randomForce.Generate(),\n\t\t\t\t\tmode: ForceMode.Force\n\t\t\t\t);\n\t\t}\n\t//ENDOF IKicker implementation\n\n\t//abstract method implementation\n\t\t//checkCondition is always true so kick repeats constantly every interval\n\t\tprotected override bool CheckCondition ()\n\t\t{\n\t\t\treturn true;\n\t\t}\n\t//ENDOF abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7911749482154846,
"alphanum_fraction": 0.7911749482154846,
"avg_line_length": 35.79245376586914,
"blob_id": "adfcd6a65b80069a6743345d637c84b4083f9d8a",
"content_id": "e819e5a59ba54de1097e74af33ae51f014828cdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1949,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 53,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Riggers/TailRiggerEditorJointChainBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing BoneRigging = ASSpriteRigging.BoneUtility.BoneRigging;\nusing ASSistant.ComponentConfiguration.JointConfiguration; //ConfigurableJoint extension methods\n\nusing IJointChainRiggerInspector = ASSpriteRigging.Inspectors.IJointChainRiggerInspector;\n\nnamespace ASSpriteRigging.Editors\n{\n//rigs a chain of bones with required components\n\tpublic abstract class TailRiggerEditorJointChainBase<TInspector>\n\t:\n\t\tTailRiggerEditorBase<TInspector>\n\t\twhere TInspector : UnityEngine.Object, IJointChainRiggerInspector\n\t{\n\t//abstract method implementation\n\t\t//rig the base/root of the transform chain\n\t\tprotected override void RigTailRoot (Transform rootBone, TInspector inspector)\n\t\t{\n\t\t\tif (inspector.defaultRootAnchorJoint != null)\n\t\t\t{\n\t\t\t\tforeach (Rigidbody rootAnchor in inspector.rootAnchorList)\n\t\t\t\t{\n\t\t\t\t\tBoneRigging.BoneConnectJoint<ConfigurableJoint>(\n\t\t\t\t\t\tbone: rootBone,\n\t\t\t\t\t\ttargetRigidbody: rootAnchor,\n\t\t\t\t\t\tsample: inspector.defaultRootAnchorJoint\n\t\t\t\t\t);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t//rig an individual element of the transform chain\n\t\tprotected override void RigTailBone (Transform bone, TInspector inspector)\n\t\t{\n\t\t\tBoneRigging.BoneSetTagAndLayer(bone, inspector.defaultTag, inspector.defaultLayer);\n\t\t\tBoneRigging.BoneSetupComponent<Rigidbody>(bone, inspector.defaultRigidbody);\n\t\t\tif (inspector.defaultCollider != null)\n\t\t\t{ BoneRigging.BoneSetupComponent<SphereCollider>(bone, inspector.defaultCollider); }\n\t\t}\n\n\t\t//rig a connection between two elements\n\t\tprotected override ConfigurableJoint RigTailBonePairConnection (Transform bone, Transform nextBone, TInspector inspector)\n\t\t{\n\t\t\tDebug.Log(\"TailRiggerEditorJointChainBase.RigTailBonePairConnection();\");\n\t\t\t//create a joint and initialize its anchors as a chain setup then return the joint\n\t\t\treturn BoneRigging\n\t\t\t\t.BoneConnectJoint<ConfigurableJoint>(bone, 
nextBone, inspector.defaultChainJoint)\n\t\t\t\t.EMSetChainAnchor();\n\t\t}\n\t//ENDOF abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.8083066940307617,
"alphanum_fraction": 0.8083066940307617,
"avg_line_length": 25.16666603088379,
"blob_id": "cd25f86b19e964c93c991a9ed85aff1b3c66ae08",
"content_id": "8c88ae8d7e2d641ab8209fc2113360b5828932d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 315,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 12,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/OnOffFlicker/Base/OnOffFlickerAutoFireBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class OnOffFlickerAutoFireBase : OnOffFlickerBase\n\t{\n\t//inherited abstract method implementation\n\t\tprotected override bool CheckCondition ()\n\t\t{ return !flickIsUp; }\n\t//ENDOF inherited abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7743300199508667,
"alphanum_fraction": 0.7757405042648315,
"avg_line_length": 25.296297073364258,
"blob_id": "b0e8daab945ae3f94df3e0550d6307d6d4165154",
"content_id": "29f6b4119240d2fc6a5f22cb83ea8897df2de157",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 711,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 27,
"path": "/Assets/Scripts/DEV/DummyCurtainController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\nusing ICurtainController = ASSPhysics.SceneSystem.ICurtainController;\n\nnamespace DEV\n{\n\tpublic class DummyCurtainController :\n\t\tASSPhysics.ControllerSystem.MonoBehaviourControllerBase <ICurtainController>,\n\t\tICurtainController\n\t{\n\n\t//ICurtainController implementation\n\t\t//opens and closes the curtains, or returns the currently DESIRED state\n\t\tpublic bool open\n\t\t{\n\t\t\tget { return true; }\n\t\t\tset {}\n\t\t}\n\n\t\tpublic float openingProgress { get { return 1f; }}\n\n\t\t//returns true if curtain has actually reached a closed state\n\t\tpublic bool isCompletelyClosed { get { return false; } }\n\t//ENDOF ICurtainController implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7599999904632568,
"alphanum_fraction": 0.7599999904632568,
"avg_line_length": 17,
"blob_id": "8deb3200ec3aaaa7848664045eb76649a13874ce",
"content_id": "a2b45e1218bcffa14483ceb95091fa670e626078",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 127,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 7,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/BaseInspectors/IArmableInspector.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Inspectors\n{\n\tpublic interface IArmableInspector : IInspectorBase\n\t{\n\t\tbool armed {get; set;}\n\t}\n}"
},
{
"alpha_fraction": 0.7375796437263489,
"alphanum_fraction": 0.7401273846626282,
"avg_line_length": 24.322580337524414,
"blob_id": "a642d0c2018c4c34e56d558bb0978cb655c1a362",
"content_id": "402284ef0b54056d738738cf6f03358c393d306d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 787,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 31,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Tools/ToolFlip.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing ASSPhysics.Constants; //AnimationNames\n\n//on awake, sets the animator horizontal flip to true or false, alternatively\n[RequireComponent(typeof(Animator))]\npublic class ToolFlip : MonoBehaviour\n{\n//static space\n\tprivate static int flipCounter;\n\t//returns false, then true, then false, then true...\n\tprivate static bool Flip ()\n\t{\n\t\treturn (flipCounter++) % 2 > 0;\n\t}\n//ENDOF static space\n\n//instance implementation\n\tprivate Animator animator;\n\tpublic void Awake ()\n\t{\n\t\tanimator = GetComponent<Animator>();\n\t} \n\n\tpublic void Start ()\n\t{\n\t\tbool flipVal = Flip(); Debug.Log(flipVal); animator.SetBool(AnimationNames.Tool.horizontalFlip, flipVal);\n\t\t//animator.SetBool(AnimationNames.horizontalFlip, Flip());\n\t\tDestroy(this);\n\t}\n//ENDOF instance implementation\n}\n"
},
{
"alpha_fraction": 0.7808369994163513,
"alphanum_fraction": 0.7819383144378662,
"avg_line_length": 31.428571701049805,
"blob_id": "6c03d9636c805919ccd8afde29338b78e7582e0f",
"content_id": "0d2d6724711b2249beea72733f17e9e20c53aac9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 908,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 28,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Weavers/WeaverEditorManyToMany.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing WeaverInspectorManyToMany = ASSpriteRigging.Inspectors.WeaverInspectorManyToMany;\n\nnamespace ASSpriteRigging.Editors\n{\n\t[CustomEditor(typeof(WeaverInspectorManyToMany))]\n\tpublic class WeaverEditorManyToMany : WeaverEditorBase<WeaverInspectorManyToMany>\n\t{\n\t//private method declaration\n\t\tpublic override void WeaveJoints()\n\t\t{\n\t\t\tif (targetInspector.originRigidbodyList.Length != targetInspector.targetRigidbodyList.Length)\n\t\t\t{\n\t\t\t\tDebug.LogError(\"WeaverEditorManyToMany: Origin and Target rigidbody lists MUST be equal length\");\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tfor (int i = 0, iLimit = targetInspector.originRigidbodyList.Length; i < iLimit; i++)\n\t\t\t{\n\t\t\t\tConnectRigidbodies(targetInspector.originRigidbodyList[i], targetInspector.targetRigidbodyList[i]);\n\t\t\t}\n\t\t\tDebug.Log(targetInspector.name + \" Weaved ManyToMany joints\");\n\t\t}\n\t//ENDOF private method declaration\n\t}\n}\n"
},
{
"alpha_fraction": 0.7322816848754883,
"alphanum_fraction": 0.7334247827529907,
"avg_line_length": 30.25,
"blob_id": "45b31dbe65aa319f620174c186d8f44d8d951c31",
"content_id": "e1957d0b8dbaaf528bf92d3a7b460a0189f344ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4374,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 140,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/RectCameraControllerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerProvider = ASSPhysics.ControllerSystem.ControllerProvider;\n\nusing RectMath = ASSistant.ASSMath.RectMath;\nusing static ASSPhysics.CameraSystem.CameraExtensions; //Camera.EMRectFromOrthographicCamera();\n\nnamespace ASSPhysics.CameraSystem\n{\n\t[RequireComponent(typeof(RectTransform))]\n\t[RequireComponent(typeof(Camera))]\n\tpublic class RectCameraControllerBase : ViewportControllerBase\n\t{\n\t//serialized fields\n\t\t[SerializeField]\n\t\tprivate Rect viewportLimits; //camera boundaries\n\n\t\t[SerializeField]\n\t\tprivate bool autoConfigureLimits = true; //if true gather limits from scene\n\t//ENDOF serialized fields\n\n\t//private fields\n\t\tprotected Camera cameraComponent; //cached reference to the camera this controller handles\n\t\tprivate RectTransform rectTransform;\n\t//ENDOF private fields\n\n\t//abstract property implementation\n\t\tprotected override Rect viewportRect { get { return rect; }}\n\t//ENDOF abstract property implementation\n\n\t//protected class properties\n\t\tprotected virtual Rect rect\n\t\t{\n\t\t\tget { return rectTransform.EMGetWorldRect(); }\n\t\t\tset\n\t\t\t{\n\t\t\t\t//apply a pre-validated rect to the transform\n\t\t\t\trectTransform.EMSetRect(ValidateCameraRect(value));\n\t\t\t\t/////Maybe this doesn't need to create a new rect, only change rect width\n\t\t\t}\n\t\t}\n\t//protected class properties\n\n\t//private properties\n\t\tprivate float screenRatio\n\t\t{\n\t\t\tget { return cameraComponent.aspect; }\n\t\t}\n\t//ENDOF private properties\n\n\t//inherited method implementation\n\t\t//moves and resizes camera viewport\n\t\t//if only one of the parameters is used the other aspect of the viewport is unchanged\n\t\tprotected override void ChangeViewport (Vector2? position, float? size)\n\t\t{\n\t\t\trect = CreateCameraRect(position: position, height: size);\n\t\t}\n\t//ENDOF inherited method implementation\n\n\t//MonoBehaviour lifecycle implementation\n\t\tpublic override void Awake ()\n\t\t{\n\t\t\tbase.Awake();\n\t\t\tInitialize();\n\t\t}\n\n\t\tpublic void OnPreCull ()\n\t\t{\n\t\t\tApplyCameraSize();\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\t//private methods\n\t\t//Controller initialization\n\t\tprivate void Initialize ()\n\t\t{\n\t\t\t//cache references to Camera and RectTransform components\n\t\t\tcameraComponent = GetComponent<Camera>();\n\t\t\trectTransform = (RectTransform) transform;\n\n\t\t\t//initialize limits\n\t\t\tif (autoConfigureLimits) { viewportLimits = cameraComponent.EMRectFromOrthographicCamera(); }\n\t\t}\n\n\t\t//applies the rect height to the camera component right before rendering\n\t\tprivate void ApplyCameraSize ()\n\t\t{\n\t\t\tcameraComponent.orthographicSize = rect.height / 2;\n\t\t}\n\t//ENDOF private methods\n\n\t//protected class methods\n\t\t//Clamps and properly sizes a rect for this camera ratio and limits\n\t\tprotected Rect ValidateCameraRect (Rect innerRect)\n\t\t{\n\t\t\t//clamp rect position within viewport limits\n\t\t\treturn ClampRectWithinLimits(\n\t\t\t\t//ensure rect fulfills size ratio\n\t\t\t\tCreateCameraRect(sampleRect: innerRect)\n\t\t\t);\n\t\t}\n\n\t\t//creates previewing camera dimensions at target position and height.\n\t\t//non included parameters are filled with current camera values\n\t\t//Rect width is inferred off of height and screen ratio.\n\t\tprotected Rect CreateCameraRect (Rect sampleRect)\n\t\t{ return CreateCameraRect(position: sampleRect.center, height: sampleRect.height); }\n\t\tprotected Rect CreateCameraRect (Vector2? position = null, float? height = null)\n\t\t{\n\t\t\t//first validate and complete inputs\n\t\t\tVector2 validPosition = (position != null) \n\t\t\t\t?\t(Vector2) position\n\t\t\t\t:\trect.center;\n\t\t\tfloat validHeight = (height != null) \n\t\t\t\t?\t(float) height\n\t\t\t\t:\trect.height;\n\n\t\t\t//Debug.Log(\"CreateCameraRect(\" + position + \", \" + height + \")\");\n\t\t\t//Debug.Log(\" validPosition: \" + validPosition + \"\\n validHeight: \" + validHeight);\n\n////////////////[TO-DO] this is a bit duplicate logic, condense this and CameraExtensions.EMRectFromOrthographicCamera()?\n\t\t\t\n\t\t\t//now create and return a rect with proper dimensions and position\n\t\t\treturn RectMath.RectFromCenterAndSize(\n\t\t\t\tposition: validPosition,\n\t\t\t\twidth: validHeight * screenRatio,\n\t\t\t\theight: validHeight\n\t\t\t);\n\t\t}\n\n\t\t//clamps a rect's height and position to make it fit within viewport limits\n\t\tprotected Rect ClampRectWithinLimits (Rect innerRect)\n\t\t{\n\t\t\treturn RectMath.TrimAndClampRectWithinRect(innerRect: innerRect, outerRect: viewportLimits);\t\t\t\n\t\t}\n\t//ENDOF inheritable private methods\n\n\t//////////////////////////////////////////////////////////////////\n\t}\n}"
},
{
"alpha_fraction": 0.782608687877655,
"alphanum_fraction": 0.782608687877655,
"avg_line_length": 25.133333206176758,
"blob_id": "1062d79bb03f9226a8c764f94ab70e04fb11e89f",
"content_id": "adc059ed096843da51d28d26a89ca1e9b3b9ef44",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 393,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 15,
"path": "/Assets/Scripts/ASSPhysics/InteractableSystem/IInteractable.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using EInputState = ASSPhysics.InputSystem.EInputState;\n\nnamespace ASSPhysics.InteractableSystem\n{\n\t//interactable interface element, like a button\n\tpublic interface IInteractable\n\t{\n\t\t//this interactable's Z sorting\n\t\t//Higher priority interactables will precede when touching muliple at once\n\t\tint priority { get; }\n\n\t\t//activate this interactable\n\t\tvoid Interact(EInputState state);\n\t}\n}"
},
{
"alpha_fraction": 0.7554858922958374,
"alphanum_fraction": 0.7570533156394958,
"avg_line_length": 34.46296310424805,
"blob_id": "0927ac38ce97c548a2ca3c87b3f85bf38b005055",
"content_id": "935402c57994f893098f9619cb290b9124059b52",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1916,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 54,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Riggers/SpriteSkinRiggerInspectorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n//using SpriteRenderer = UnityEngine.U2D.Animation.SpriteRenderer;\nusing SpriteSkin = UnityEngine.U2D.Animation.SpriteSkin;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\t[RequireComponent(typeof(SpriteSkin))]\n\tpublic abstract class SpriteSkinRiggerInspectorBase\n\t:\n\t\tArmableInspectorBase,\n\t\tIRiggerInspector\n\t{\n\t\t//wether or not purging bone transform tree removes its rigidbodies too\n\t\t\t//[SerializeField]\n\t\t\t//private bool _purgeKeepsRigidbodies = true;\n\t\t\t//public bool purgeKeepsRigidbodies { get { return _purgeKeepsRigidbodies; }}\n\t\t//for now at least, purge will only admit NOT removing rigidbodies\n\t\tpublic bool purgeKeepsRigidbodies { get { return true; }}\n\n\t\t//references to fundamental components\n\t\tpublic Sprite sprite { get { return gameObject.GetComponent<SpriteRenderer>()?.sprite; }}\n\t\tpublic SpriteSkin spriteSkin { get { return gameObject.GetComponent<SpriteSkin>(); }}\n\n\t\t//anchor rigidbody: every bone will be connected to this rigidbody with an anchor joint \n\t\t[SerializeField]\n\t\tprivate Rigidbody targetAnchor = null;\n\t\tpublic Rigidbody anchorRigidbody\n\t\t{\n\t\t\tget {\n\t\t\t//return this rigidbody if no target anchor is set\n\t\t\treturn (targetAnchor != null)\n\t\t\t\t? targetAnchor\n\t\t\t\t: gameObject.GetComponent<Rigidbody>();\n\t\t\t}\n\t\t}\n\t\t\n\t\t//information on transform layer & tag\n\t\t[SerializeField]\n\t\tprivate GameObject defaultLayerSample = null;\n\t\tpublic int defaultLayer { get { return (defaultLayerSample != null) ? defaultLayerSample.layer : -1; }}\n\t\tpublic string defaultTag { get { return defaultLayerSample?.tag; }}\n\n\n\t\t//Desired rigidbody configuration\n\t\t[SerializeField]\n\t\tprivate Rigidbody _defaultRigidbody = null;\n\t\tpublic Rigidbody defaultRigidbody { get { return _defaultRigidbody; }}\n\n\t\t//Collider to include with each bone\n\t\t[SerializeField]\n\t\tprivate SphereCollider _defaultCollider = null;\n\t\tpublic SphereCollider defaultCollider { get { return _defaultCollider; }}\n\t}\n}"
},
{
"alpha_fraction": 0.6971279382705688,
"alphanum_fraction": 0.6999762654304504,
"avg_line_length": 39.912620544433594,
"blob_id": "2139b14c5e91b9160dc5bd3dbdc555c7efb2bc4a",
"content_id": "9b657267d479acb0274488215c8244b5c035ee59",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4213,
"license_type": "no_license",
"max_line_length": 135,
"num_lines": 103,
"path": "/Assets/Editor/ASSpriteRigging/BoneUtility/BoneNomenclature.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\nnamespace ASSpriteRigging.BoneUtility\n{\n\tpublic static class BoneNomenclature\n\t{\n\t\t//CAREFUL!!!\n\t\t//Currently, an anchor name parameter containing tag characters would be recognized as said tag. avoid capitalizing those characters.\n\t\tprivate const char tagBodySeparator = '$';\n\t\tprivate const char rigidbodyTag = 'R';\n\t\tprivate const char anchorTag = 'A';\n\t\tprivate const char springTag = 'S';\n\t\tprivate const char ignoreTag = 'I';\n\t\tprivate const char parameterBeginTag = '(';\n\t\tprivate const char parameterCloseTag = ')';\n\n\t\t//trims the proper name of the target object or full name\n\t\tpublic static string GetProperName (Transform target) { return GetProperName(target.name); }\n\t\tpublic static string GetProperName (string name)\n\t\t{\n\t\t\treturn name.Split(tagBodySeparator)[0];\n\t\t}\n\n\t\t//trims the paramaters of the target object or full name\n\t\tpublic static string GetParameterList (Transform target) { return GetParameterList(target.name); }\n\t\tpublic static string GetParameterList (string name)\n\t\t{\n\t\t\tif (!name.Contains(tagBodySeparator.ToString())) { return name; }\n\t\t\treturn name.Split(tagBodySeparator)[1];\n\t\t}\n\n\t\tpublic static bool IsIgnored (Transform target) { return IsIgnored(target.name); }\n\t\tpublic static bool IsIgnored (string name)\n\t\t{\n\t\t\t//returns true if contains ignored tag or no tag body at all\n\t\t\treturn !name.Contains(tagBodySeparator.ToString()) || GetParameterList(name).Contains(ignoreTag.ToString());\n\t\t}\n\n\t\t//true if the target requires a rigidbody\n\t\tpublic static bool RequiresRigidbody (Transform target) { return RequiresRigidbody(target.name); }\n\t\tpublic static bool RequiresRigidbody (string name)\n\t\t{\n\t\t\treturn GetParameterList(name).Contains(rigidbodyTag.ToString())\n\t\t\t\t|| BoneNomenclature.RequiresAnchor(name)\n\t\t\t\t|| BoneNomenclature.RequiresSprings(name);\n\t\t}\n\n\t\t//true if the target requires a ground anchor towards its parent\n\t\tpublic static bool RequiresAnchor (Transform target) { return RequiresAnchor(target.name); }\n\t\tpublic static bool RequiresAnchor (string name)\n\t\t{\n\t\t\treturn GetParameterList(name).Contains(anchorTag.ToString());\n\t\t}\n\n\t\t//true if the target has springs\n\t\tpublic static bool RequiresSprings (Transform target) { return RequiresSprings(target.name); }\n\t\tpublic static bool RequiresSprings (string name)\n\t\t{\n\t\t\treturn GetParameterList(name).Contains(springTag.ToString());\n\t\t}\n\n\t\t//get a list of the proper names target bone requires a spring towards\n\t\tpublic static string[] GetSpringTargets (Transform target) { return GetSpringTargets(target.name); }\n\t\tpublic static string[] GetSpringTargets (string name)\n\t\t{\n\t\t\tList<string> targetNames = new List<string>();\n\t\t\tint tagFoundIndex = -1;\n\n\t\t\t//find the next instance of springTag within the string. execute until none found (-1)\n\t\t\twhile ((tagFoundIndex = name.IndexOf(springTag, tagFoundIndex + 1)) >= 0)\n\t\t\t{\n\t\t\t\tstring boneName = BoneNomenclature.FindParameter(name, tagFoundIndex);\n\t\t\t\tif (boneName != null) { targetNames.Add(boneName); }\n\t\t\t}\n\n\t\t\treturn targetNames.ToArray();\n\t\t}\n\n\t\tpublic static Transform[] FindBonesByProperName (string[] nameList, Transform targetRoot, bool recursive = true)\n\t\t{\n\t\t\t//=================================================================================================================\n\t\t\t//[TO-DO]\n\t\t\t//=================================================================================================================\n\t\t\treturn null;\n\t\t}\n\n\t\t//find the next parameter between brackets\n\t\tprivate static string FindParameter (string name, int tagStartIndex)\n\t\t{\n\t\t\t\tint parameterStartIndex = name.IndexOf(parameterBeginTag, tagStartIndex);\n\t\t\t\t//return empty if no opening bracket found\n\t\t\t\tif (parameterStartIndex < 0) { return \"\"; }\n\t\t\t\tint parameterEndIndex = name.IndexOf(parameterCloseTag, parameterStartIndex);\n\t\t\t\t//return slice from opening bracket to the closing bracket or the end of the string\n\t\t\t\tif (parameterEndIndex < 0) { return name.Substring(parameterStartIndex + 1); }\n\t\t\t\tint parameterLength = parameterEndIndex - parameterStartIndex - 1;\n\t\t\t\tif (parameterLength < 0) { return \"\"; }\n\t\t\t\treturn name.Substring(parameterStartIndex + 1, parameterLength);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7834862470626831,
"alphanum_fraction": 0.7848623991012573,
"avg_line_length": 34.75409698486328,
"blob_id": "d2b23dfd8ee5a4fe23ffa35b08fd6dc7250f102c",
"content_id": "c8848e86c661711996eb635f0560c8465f507276",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2182,
"license_type": "no_license",
"max_line_length": 151,
"num_lines": 61,
"path": "/Assets/Scripts/ASSPhysics/TailSystem/TailElementBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "//using System.Collections;\n//\tusing System.Collections.Generic;\n\nusing UnityEngine;\n\nusing IPulsePropagator = ASSPhysics.PulseSystem.PulsePropagators.IPulsePropagator;\n\nnamespace ASSPhysics.TailSystem\n{\n\tpublic abstract class TailElementBase : ASSPhysics.PulseSystem.PulsePropagators.ChainElementPulsePropagatorBase\n\t{\n\t//serialized/public fields and properties\n\t\t//absolute maximum rotation off from base rotation\n\t\t[SerializeField]\n\t\tprivate float _rotationMax;\n\t\tpublic float rotationMax { get { return _rotationMax; } set { _rotationMax = value; } }\n\n\t\t//soft rotation limit. pulse intensity value multiplies this value. can be exceeded if pulse > 1.0f\n\t\t[SerializeField]\n\t\tprivate float _rotationSoftLimit;\n\t\tpublic float rotationSoftLimit { get { return _rotationSoftLimit; } set { _rotationSoftLimit = value; } }\n\n\t\t/*\n\t\t//wether to fetch rotation from initial state\n\t\t[SerializeField]\n\t\tprivate bool _baseRotationFromStartingRotation;\n\t\tpublic bool baseRotationFromStartingRotation { get { return _baseRotationFromStartingRotation; } set { _baseRotationFromStartingRotation = value; } }\n\t\t*/\n\t//ENDOF serialized/public fields and properties\n\n\t//private fields and properties\n\t\t//protected Quaternion baseRotation; //base rotation of the element. offsetRotation swings and is clamped around this value\n\t//ENDOF private fields and properties\n\t\t\n\t//MonoBehaviour lifecycle\n\t\t/*\n\t\tpublic virtual void Start ()\n\t\t{\n\t\t\tbaseRotation = baseRotationFromStartingRotation ? transform.rotation : Quaternion.identity;\n\t\t}\n\t\t*/\n\t\tpublic virtual void FixedUpdate()\n\t\t{\n\t\t\tUpdateRotation(Time.fixedDeltaTime);\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//ChainElementPulsePropagatorBase abstract method implementation\n\t\t//get delay in seconds before propagation to target effectuates\n\t\tprotected override float GetPropagationDelay (IPulsePropagator target)\n\t\t{\n\t\t\treturn Vector3.Distance(transform.position, target.transform.position);\n\t\t}\n\t//ENDOF ChainElementPulsePropagatorBase abstract method implementation\n\n\t//Overridable methods\n\t\t//attempts to match current rotation with target rotation\n\t\tprotected abstract void UpdateRotation (float timeDelta);\n\t//ENDOF Overridable methods\n\t}\n}"
},
{
"alpha_fraction": 0.7627411484718323,
"alphanum_fraction": 0.7627411484718323,
"avg_line_length": 28.176469802856445,
"blob_id": "0c925606f28a3a227db542a1c10d42f6eb89e3a6",
"content_id": "b262d259268bac6fb3a3b7d6a15fd330aae4e28d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3473,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 119,
"path": "/Assets/Scripts/ASSistant/ComponentConfiguration/JointConfiguration/ConfigurableJointComponentConfigurer.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "\nusing UnityEngine;\n\nnamespace ASSistant.ComponentConfiguration.JointConfiguration\n{\n\tpublic static class ConfigurableJointComponentConfigurer\n\t{\n\t//public static methods\n\t\t//applies right-hand properties to left-hand objects. returns reference to altered object\n\t\tpublic static ConfigurableJoint EMApplySettings (\n\t\t\tthis ConfigurableJoint _this,\n\t\t\tConfigurableJoint sample,\n\t\t\tbool copyConnectedBody = false\n\t\t) {\n\t\t\tDebug.Log(\"ConfigurableJoint EMApplySettings(\" + _this + \", \" + sample + \")\");\n\n\t\t\t//ConfigurableJoint members\n\t\t\t_this.angularXDrive = sample.angularXDrive;\n\t\t\t_this.angularXLimitSpring = sample.angularXLimitSpring;\n\t\t\t_this.angularXMotion = sample.angularXMotion;\n\t\t\t_this.angularYLimit = sample.angularYLimit;\n\t\t\t_this.angularYMotion = sample.angularYMotion;\n\t\t\t_this.angularYZDrive = sample.angularYZDrive;\n\t\t\t_this.angularYZLimitSpring = sample.angularYZLimitSpring;\n\t\t\t_this.angularZLimit = sample.angularZLimit;\n\t\t\t_this.angularZMotion = sample.angularZMotion;\n\t\t\t_this.configuredInWorldSpace = sample.configuredInWorldSpace;\n\t\t\t_this.highAngularXLimit = sample.highAngularXLimit;\n\t\t\t_this.linearLimit = sample.linearLimit;\n\t\t\t_this.linearLimitSpring = sample.linearLimitSpring;\n\t\t\t_this.lowAngularXLimit = sample.lowAngularXLimit;\n\t\t\t_this.projectionAngle = sample.projectionAngle;\n\t\t\t_this.projectionDistance = sample.projectionDistance;\n\t\t\t_this.projectionMode = sample.projectionMode;\n\t\t\t_this.rotationDriveMode = sample.rotationDriveMode;\n\t\t\t_this.secondaryAxis = sample.secondaryAxis;\n\t\t\t_this.slerpDrive = sample.slerpDrive;\n\t\t\t_this.swapBodies = sample.swapBodies;\n\t\t\t_this.targetAngularVelocity = sample.targetAngularVelocity;\n\t\t\t_this.targetPosition = sample.targetPosition;\n\t\t\t_this.targetRotation = sample.targetRotation;\n\t\t\t_this.targetVelocity = sample.targetVelocity;\n\t\t\t_this.xDrive = sample.xDrive;\n\t\t\t_this.xMotion = sample.xMotion;\n\t\t\t_this.yDrive = sample.yDrive;\n\t\t\t_this.yMotion = sample.yMotion;\n\t\t\t_this.zDrive = sample.zDrive;\n\t\t\t_this.zMotion = sample.zMotion;\n\n\t\t\t//Joint members\n\t\t\t_this.anchor = sample.anchor;\n\t\t\t_this.autoConfigureConnectedAnchor = sample.autoConfigureConnectedAnchor;\n\t\t\t_this.axis = sample.axis;\n\t\t\t_this.breakForce = sample.breakForce;\n\t\t\t_this.breakTorque = sample.breakTorque;\n\t\t\t_this.connectedAnchor = sample.connectedAnchor;\n\t\t\t_this.connectedMassScale = sample.connectedMassScale;\n\t\t\t_this.enableCollision = sample.enableCollision;\n\t\t\t_this.enablePreprocessing = sample.enablePreprocessing;\n\t\t\t_this.massScale = sample.massScale;\n\n\t\t\tif (copyConnectedBody)\n\t\t\t{ _this.connectedBody = sample.connectedBody; }\n\n\t\t\treturn _this;\n\t\t}\n\t//ENDOF public static methods\n\t\t/*listing of required members\n\t\n\t\t//ConfigurableJoint members\n\n\t\t\tangularXDrive\n\t\t\tangularXLimitSpring\n\t\t\tangularXMotion\n\t\t\tangularYLimit\n\t\t\tangularYMotion\n\t\t\tangularYZDrive\n\t\t\tangularYZLimitSpring\n\t\t\tangularZLimit\n\t\t\tangularZMotion\n\t\t\tconfiguredInWorldSpace\n\t\t\thighAngularXLimit\n\t\t\tlinearLimit\n\t\t\tlinearLimitSpring\n\t\t\tlowAngularXLimit\n\t\t\tprojectionAngle\n\t\t\tprojectionDistance\n\t\t\tprojectionMode\n\t\t\trotationDriveMode\n\t\t\tsecondaryAxis\n\t\t\tslerpDrive\n\t\t\tswapBodies\n\t\t\ttargetAngularVelocity\n\t\t\ttargetPosition\n\t\t\ttargetRotation\n\t\t\ttargetVelocity\n\t\t\txDrive\n\t\t\txMotion\n\t\t\tyDrive\n\t\t\tyMotion\n\t\t\tzDrive\n\t\t\tzMotion\n\n\t\t//Joint members\n\n\t\t\tanchor\n\t\t\tautoConfigureConnectedAnchor\n\t\t\taxis\n\t\t\tbreakForce\n\t\t\tbreakTorque\n\t\t\tconnectedAnchor\n\t\t\tconnectedMassScale\n\t\t\tenableCollision\n\t\t\tenablePreprocessing\n\t\t\tmassScale\n\n\t\t\tconnectedBody\n\t\t*/\n\t}\n}\n"
},
{
"alpha_fraction": 0.8152173757553101,
"alphanum_fraction": 0.8152173757553101,
"avg_line_length": 36.94117736816406,
"blob_id": "5f6fabd743e9f06ba53ca80e016912f4ad30963f",
"content_id": "e75695c2f60989a528203fa43861129a4f70a1cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 646,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 17,
"path": "/Assets/Scripts/ASSPhysics/TailSystem/TailElementSimple.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using IPulseData = ASSPhysics.PulseSystem.PulseData.IPulseData;\n\nnamespace ASSPhysics.TailSystem\n{\n\tpublic class TailElementSimple : TailElementBase\n\t{\n\t//TailElementBase abstract method implementation\n\t\t//attempts to match current rotation with target rotation\n\t\tprotected override void UpdateRotation (float timeDelta) {}\n\t//ENDOF TailElementBase abstract method implementation\n\n\t//IPulsePropagator abstract method implementation\n\t\t\t\t//execute a pulse and propagate it in the corresponding direction after proper delay\t\n\t\tprotected override void DoPulse (IPulseData pulseData) {}\n\t//ENDOF IPulsePropagator abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7653557062149048,
"alphanum_fraction": 0.7680070996284485,
"avg_line_length": 28.0256404876709,
"blob_id": "0b965e5c1edcb47173172c8046143c3695eb4be8",
"content_id": "4d38d6a6c1280df76393997c7ac57837df536115",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2265,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 78,
"path": "/Assets/Scripts/ASSPhysics/SceneSystem/CurtainController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\n\nnamespace ASSPhysics.SceneSystem\n{\n\tpublic class CurtainController :\n\t\tASSPhysics.ControllerSystem.MonoBehaviourControllerBase <ICurtainController>,\n\t\tICurtainController\n\t{\n\t//private fields and properties\n\t\t//serialized fields\n\t\t[SerializeField]\n\t\tprivate GameObject spotlightContainer = null;\n\n\t\t[SerializeField]\n\t\tprivate Transform rightSheetUpperNode = null;\n\t\t[SerializeField]\n\t\tprivate Transform leftSheetUpperNode = null;\n\t\t[SerializeField]\n\t\tprivate Transform rightSheetLowerNode = null;\n\t\t[SerializeField]\n\t\tprivate Transform leftSheetLowerNode = null;\n\t\t//ENDOF serialized fields\n\n\t\t[SerializeField]\n\t\tprivate float _openingProgress = 0.0f;\n\n\t\tprivate Animator curtainAnimator;\n\t\tprivate Animator[] spotlightAnimators;\n\t//ENDOF private fields and properties\n\n\t//ICurtainController implementation\n\t\t//opens and closes the curtains, or returns the currently DESIRED state\n\t\tpublic bool open\n\t\t{\n\t\t\tget { return curtainAnimator.GetBool(AnimationNames.Curtains.open); }\n\t\t\tset { SetOpen(value); }\n\t\t}\n\n\t\t//returns the state of the transition between 1 and 0, 0 meaning fully closed 1 meaning fully opened\n\t\tpublic float openingProgress { get { return _openingProgress; }}\n\n\t\t//returns true if curtain has actually reached a closed state\n\t\tpublic bool isCompletelyClosed\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\t//true if both upper and lower nodes are beyond eachother\n\t\t\t\treturn\n\t\t\t\t\trightSheetLowerNode.position.x < leftSheetLowerNode.position.x &&\n\t\t\t\t\trightSheetUpperNode.position.x < leftSheetUpperNode.position.x;\n\t\t\t}\n\t\t}\n\t//ENDOF ICurtainController implementation\n\n\t//MonoBehaviour lifecycle implementation\n\t\t//on creation register this instance\n\t\tpublic override void Awake ()\n\t\t{\n\t\t\tbase.Awake();\n\t\t\tcurtainAnimator = GetComponent<Animator>();\n\t\t\tspotlightAnimators = spotlightContainer.GetComponentsInChildren<Animator>();\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\t//private methods\n\t\tprivate void SetOpen (bool value)\n\t\t{\n\t\t\tcurtainAnimator.SetBool(AnimationNames.Curtains.open, value);\n\t\t\tforeach (Animator spotlightAnimator in spotlightAnimators)\n\t\t\t{\n\t\t\t\tspotlightAnimator.SetBool(AnimationNames.Curtains.spotlightFocused, value);\n\t\t\t}\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.751842737197876,
"alphanum_fraction": 0.751842737197876,
"avg_line_length": 21.61111068725586,
"blob_id": "39f2afc8dd21795145c700f410ba4a64ad0a42ac",
"content_id": "ded16fa5534425666236db4a62f3c9b01d49536b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 409,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 18,
"path": "/Assets/Scripts/ASSistant/ComponentConfiguration/JointConfiguration/Editor/ConfigurableJointCustomEditor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEditor;\nusing UnityEngine;\n\nnamespace ASSistant.ComponentConfiguration.JointConfiguration\n{\n\t[CustomEditor(typeof(ConfigurableJoint))]\n\tpublic class ConfigurableJointCustomEditor : Editor\n\t{\n\t\tpublic override void OnInspectorGUI ()\n\t\t{\n\t\t\tif (GUILayout.Button(\"Auto-configure anchors as chain\"))\n\t\t\t{\n\t\t\t\t(target as ConfigurableJoint).EMSetChainAnchor();\n\t\t\t}\n\t\t\tbase.OnInspectorGUI();\n\t\t}\n\t}\n}\n"
},
{
"alpha_fraction": 0.7880434989929199,
"alphanum_fraction": 0.7880434989929199,
"avg_line_length": 22.125,
"blob_id": "3769e1fb50bceb164f30562270ece3005b1cc795",
"content_id": "4a0599d146051eda557a7a013362cca3e014b82d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 184,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 8,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Base/IKicker.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\t//interface implemented by every kicker component\n\tpublic interface IKicker\n\t{\n\t\tvoid Kick();\t//executes a momentary effect\n\t}\n}"
},
{
"alpha_fraction": 0.7427278757095337,
"alphanum_fraction": 0.7461754083633423,
"avg_line_length": 29.946666717529297,
"blob_id": "a1dd16808e90c4bd269863d1a7cabec38caa7ae5",
"content_id": "b96745a14ab1bab938a3c469d8b883f2af0e1eda",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4643,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 150,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Riggers/SkinSurfaceRiggerEditor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing System.Collections.Generic;\n\nusing UnityEngine;\nusing UnityEditor;\n\nusing static UnityEngine.U2D.SpriteDataAccessExtensions; //Sprite.GetIndices();\nusing Unity.Collections;\t//nativeArray<T>\n\nusing BoneRigging = ASSpriteRigging.BoneUtility.BoneRigging;\n\nusing SkinSurfaceRiggerInspector = ASSpriteRigging.Inspectors.SkinSurfaceRiggerInspector;\n\n\nnamespace ASSpriteRigging.Editors\n{\n\t[CustomEditor(typeof(SkinSurfaceRiggerInspector))]\n\tpublic class SkinSurfaceRiggerEditor : RiggerEditorBase<SkinSurfaceRiggerInspector>\n\t{\n\t//inherited abstract method implementation\n\t\tprotected override void RigBones ()\n\t\t{\n\t\t\tDebug.Log(\"Rigging bone components for \" + targetInspector.name);\n\n\t\t\tRigBoneMesh(\n\t\t\t\tboneList: targetInspector.spriteSkin.boneTransforms,\n\t\t\t\tanchorRigidbody: targetInspector.anchorRigidbody,\n\t\t\t\ttriangles: targetInspector.sprite.GetIndices(),\n\t\t\t\tdefaultRigidbody: targetInspector.defaultRigidbody,\n\t\t\t\tdefaultAnchorJoint: targetInspector.defaultAnchorJoint,\n\t\t\t\tdefaultMeshJoint: targetInspector.defaultMeshJoint,\n\t\t\t\tdefaultCollider: targetInspector.defaultCollider,\n\t\t\t\tdefaultTag: targetInspector.defaultTag,\n\t\t\t\tdefaultLayer: targetInspector.defaultLayer\n\t\t\t);\n\n\t\t\tDebug.Log(\"Rigged bones of \" + targetInspector.name);\n\t\t}\n\t//ENDOF inherited abstract method implementation\n\n\t//private methods\n\t\tprivate void RigBoneMesh <\n\t\t\tTAnchorJoint,\n\t\t\tTMeshJoint,\n\t\t\tTCollider\n\t\t> (\n\t\t\tTransform[] boneList,\n\t\t\tRigidbody anchorRigidbody,\n\t\t\tNativeArray<ushort> triangles,\n\t\t\tRigidbody defaultRigidbody,\n\t\t\tTAnchorJoint defaultAnchorJoint,\n\t\t\tTMeshJoint defaultMeshJoint,\n\t\t\tTCollider defaultCollider,\n\t\t\tstring defaultTag = null,\n\t\t\tint defaultLayer = -1\n\t\t)\n\t\t\twhere TAnchorJoint: Joint\n\t\t\twhere TMeshJoint: Joint\n\t\t\twhere TCollider: Collider\n\t\t{\n\t\t\tforeach (Transform bone in boneList)\n\t\t\t{\n\t\t\t\tRigBoneIndividualElements<\n\t\t\t\t\tTAnchorJoint,\n\t\t\t\t\tTCollider\n\t\t\t\t> (\n\t\t\t\t\tbone: bone,\n\t\t\t\t\tanchorRigidbody: anchorRigidbody,\n\t\t\t\t\tdefaultRigidbody: defaultRigidbody,\n\t\t\t\t\tdefaultAnchorJoint: defaultAnchorJoint,\n\t\t\t\t\tdefaultCollider: defaultCollider,\n\t\t\t\t\tdefaultTag: defaultTag,\n\t\t\t\t\tdefaultLayer: defaultLayer\n\t\t\t\t);\n\t\t\t}\n\n\t\t\tDebug.Log (\"Individual components rigged, deploying spring mesh\");\n\n\t\t\tRigBoneSpringMesh<TMeshJoint>(boneList, triangles, defaultMeshJoint);\n\n\t\t\tDebug.Log (\"Rigging bones finished\");\n\t\t}\n\n\t\t//creates components that affect a single bone: rigidbodies and disconnected anchors/joints\n\t\tprivate void RigBoneIndividualElements <\n\t\t\tTAnchorJoint,\n\t\t\tTCollider\n\t\t> (\n\t\t\tTransform bone,\n\t\t\tRigidbody anchorRigidbody,\n\t\t\tRigidbody defaultRigidbody,\n\t\t\tTAnchorJoint defaultAnchorJoint,\n\t\t\tTCollider defaultCollider,\n\t\t\tstring defaultTag = null,\n\t\t\tint defaultLayer = -1\n\t\t) \n\t\t\twhere TAnchorJoint: Joint\n\t\t\twhere TCollider: Collider\n\t\t{\n\t\t\t//set the object tag and physics layer of the bone transform\n\t\t\tBoneRigging.BoneSetTagAndLayer(bone, defaultTag, defaultLayer);\n\n\t\t\t//give it a rigidbody and a colider\n\t\t\tBoneRigging.BoneSetupComponent<Rigidbody>(bone, defaultRigidbody);\n\t\t\tif (defaultCollider != null)\n\t\t\t{ BoneRigging.BoneSetupComponent<TCollider>(bone, defaultCollider);\t}\n\n\t\t\t//create a joint anchoring the bone to target anchor rigidbody\n\t\t\tif (defaultAnchorJoint != null)\n\t\t\t{ BoneRigging.BoneConnectJoint<TAnchorJoint>(bone, anchorRigidbody, defaultAnchorJoint); }\n\t\t}\n\n\t\t//Generate springs between bones connected according to a triangle list\n\t\t//triangleList contains a multiple of 3 entries, and each 3 entries define a triangle\n\t\tprivate void RigBoneSpringMesh \n\t\t\t<TJoint>\n\t\t\t(Transform[] boneList, NativeArray<ushort> triangleList, TJoint sample)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\tif ((triangleList.Length % 3) != 0) { Debug.LogWarning(\"RigBoneSpringMesh() triangles.Length is not a multiple of 3\"); }\n\t\t\tfor (int i = 0, iLimit = triangleList.Length; i < iLimit; i += 3)\n\t\t\t{\n\t\t\t\t//for every 3 vertex entries, process them as a triangle\n\t\t\t\tBoneGenerateSpringPolygon (boneList, triangleList.GetSubArray(i, 3), sample);\n\t\t\t}\n\t\t}\n\n\t\t//Generates the springs for a single polygon\n\t\tprivate void BoneGenerateSpringPolygon\n\t\t\t<TJoint>\n\t\t\t(Transform[] boneList, NativeArray<ushort> polygon, TJoint sample)\n\t\t\twhere TJoint: Joint\n\t\t{\n\t\t\t//first element will connect to the last enclosing the polygon\n\t\t\tint previousBone = polygon.Length - 1;\n\n\t\t\tfor (int i = 0, iLimit = polygon.Length; i < iLimit; i++)\n\t\t\t{\n\t\t\t\t//connect every bone to the previous bone in the polygon\n\t\t\t\tBoneRigging.InterconnectBonePair<TJoint>(\n\t\t\t\t\tbone1: boneList[polygon[i]],\n\t\t\t\t\tbone2: boneList[polygon[previousBone]],\n\t\t\t\t\tsample: sample,\n\t\t\t\t\tmutual: false\n\t\t\t\t);\n\t\t\t\tpreviousBone = i;\n\t\t\t}\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7119796276092529,
"alphanum_fraction": 0.7128292322158813,
"avg_line_length": 22.540000915527344,
"blob_id": "ecb1e8e0555ede98013650073769cd46b0e65938",
"content_id": "34e8d99ccbce99e92b967a5b0cabf0d811f76243",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1179,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 50,
"path": "/Assets/Scripts/ASSPhysics/ChainSystem/ChainElementBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections.Generic;\nusing UnityEngine;\n\nnamespace ASSPhysics.ChainSystem\n{\n\tpublic abstract class ChainElementBase : MonoBehaviour, IChainElement\n\t{\n\t//private fields\n\t\tprivate List<IChainElement> _chainChildren;\n\n\t\tprivate IChainElement _chainParent;\n\t//ENDOF serialized fields\n\n\t//implementación IChainElement\n\t\tpublic IChainElement chainParent\n\t\t{\n\t\t\tget { return _chainParent; }\n\t\t\tprivate set { _chainParent = value; }\n\t\t}\n\n\t\tpublic int childCount { get { return (_chainChildren != null) ? _chainChildren.Count : 0; }}\n\n\t\t//set this element's parent element. Also adds itself as its parent's child\n\t\tpublic void SetParent (IChainElement newParent)\n\t\t{\n\t\t\tchainParent = newParent;\n\t\t\tif (newParent != null)\n\t\t\t{\n\t\t\t\tnewParent.AddChild(this);\n\t\t\t}\n\t\t}\n\n\t\t//add an element to child list\n\t\tpublic void AddChild (IChainElement newChild)\n\t\t{\n\t\t\tif (_chainChildren == null) _chainChildren = new List<IChainElement>();\n\t\t\tif (!_chainChildren.Contains(newChild))\n\t\t\t{\n\t\t\t\t_chainChildren.Add(newChild);\n\t\t\t}\n\t\t}\n\n\t\t//fetch a child by index\n\t\tpublic IChainElement GetChild (int index)\n\t\t{\n\t\t\treturn _chainChildren[index];\n\t\t}\n\t//ENDOF implementación IChainElement\n\t}\n}\t"
},
{
"alpha_fraction": 0.7427785396575928,
"alphanum_fraction": 0.7427785396575928,
"avg_line_length": 25.925926208496094,
"blob_id": "b2e6dfb15af9fde543b13993ff3af3ebdbca870e",
"content_id": "f17b6238fb8cee389663aae953d62bdb14ec7e67",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1454,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 54,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Actions/ActionUseInteractor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\nusing IInteractor = ASSPhysics.InteractableSystem.IInteractor;\t//IInteractor\nusing EInputState = ASSPhysics.InputSystem.EInputState;\n\n\nnamespace ASSPhysics.HandSystem.Actions\n{\n\tpublic class ActionUseInteractor : ActionBase\n\t{\n\t//ActionBase override implementation\n\t\t//receive state of corresponding input medium\n\t\tpublic override void Input (EInputState state)\n\t\t{\n\t\t\t//propagate input to the interactor\n\t\t\t//if interactor reports failure end the action\n\t\t\tif (!tool.interactor.Input(state))\n\t\t\t{\n\t\t\t\tClear();\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (state == EInputState.Ended)\n\t\t\t{\n\t\t\t\ttool.SetAnimationState(AnimationNames.Tool.stateClickUp);\n\t\t\t}\n\t\t\telse if (state == EInputState.Started)\n\t\t\t{\n\t\t\t\ttool.SetAnimationState(AnimationNames.Tool.stateClickDown);\n\t\t\t}\n\t\t}\n\n\t\t//interaction is valid if hovering an interactable\n\t\tpublic override bool IsValid ()\n\t\t{\n\t\t\treturn tool.interactor.IsHovering();\n\t\t}\n\n\t\t//Using an interactor is an entirely non-automatable one-shot action, so automation methods just report failure\n\t\tpublic override bool Automate () { return false; }\n\t\tpublic override bool AutomationUpdate () { return false; }\n\t\t//public override void DeAutomate ();\n\n\t\t//on clear ensure exiting animation state\n\t\tpublic override void Clear ()\n\t\t{\n\t\t\ttool.SetAnimationState(AnimationNames.Tool.stateClickUp);\n\t\t\tbase.Clear();\n\t\t}\n\t//ENDOF ActionBase override implementation\n\n\t}\n}\n"
},
{
"alpha_fraction": 0.7979797720909119,
"alphanum_fraction": 0.7979797720909119,
"avg_line_length": 15.666666984558105,
"blob_id": "23bea64a61b7543e54dadc3f98a675ea012bda8c",
"content_id": "e38df13a05acfb0501e739c76521e8e77c5278fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 101,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 6,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Riggers/IRiggerEditor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Editors\n{\n\tpublic interface IRiggerEditor : IEditorPurgeableBase\n\t{\n\t}\n}"
},
{
"alpha_fraction": 0.7675438523292542,
"alphanum_fraction": 0.7704678177833557,
"avg_line_length": 31.619047164916992,
"blob_id": "65092c18fa540e7f00750ec06beac0e01a04eb38",
"content_id": "962afd3d7d53e813534326c1a6353ca9a09f7903",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 684,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 21,
"path": "/Assets/Editor/ASSpriteRigging/U2DAnimationAccessor/SpriteSkinAccessor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\t//Bounds\nusing UnityEngine.U2D.Animation; //SpriteSkin\n\nnamespace U2DAnimationAccessor\n{\n\tpublic static class SpriteSkinAccessor\n\t{\n\t\t//public static Transform[] GetBoneTransforms (this SpriteSkin spriteSkin) { return spriteSkin.m_BoneTransforms; }\n\t\t//public static Transform GetRootBone (this SpriteSkin spriteSkin) { return spriteSkin.m_RootBone; }\n\n\t\tpublic static void PublicCreateBoneHierarchy (this SpriteSkin spriteSkin) { spriteSkin.CreateBoneHierarchy(); }\n\t\tpublic static void CalculateBoundsIfNecessary (this SpriteSkin spriteSkin)\n\t\t{\n\t\t\tif (spriteSkin.isValid && spriteSkin.bounds == new Bounds())\n\t\t\t{\n\t\t\t\tspriteSkin.CalculateBounds();\n\t\t\t}\n\t\t}\n\n\t}\n}"
},
{
"alpha_fraction": 0.7363128662109375,
"alphanum_fraction": 0.7363128662109375,
"avg_line_length": 25.352941513061523,
"blob_id": "b9fad33c38c41b06045ae5bd9690c9712660ec58",
"content_id": "868665118d90ac5d032aa15b483849ecb1a01b56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 897,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 34,
"path": "/Assets/Scripts/ASSPhysics/InteractableSystem/InteractableTriggerLockOnUse.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames; \nusing EInputState = ASSPhysics.InputSystem.EInputState; //EInputState\n\nnamespace ASSPhysics.InteractableSystem\n{\n\t//button that stays locked on after activation\n\tpublic class InteractableTriggerLockOnUse : InteractableTriggerOnRelease\n\t{\n\t//private fields and properties\n\t\tprivate bool lockedOn = false;\n\t\tprotected override bool highlighted\n\t\t{\n\t\t\tget { return (lockedOn || base.highlighted); }\n\t\t\tset { base.highlighted = (lockedOn || value); }\n\t\t}\n\t\tprotected override bool pressed\n\t\t{\n\t\t\tget { return (lockedOn || base.pressed); }\n\t\t\tset { base.pressed = (lockedOn || value); }\n\t\t}\n\t//ENDOF private fields and properties\n\n\t//overrides implementation\n\t\tprotected override void TriggerCallbacks ()\n\t\t{\n\t\t\tif (lockedOn) return;\n\t\t\tlockedOn = true;\n\t\t\tbase.TriggerCallbacks();\n\t\t}\n\t//overrides implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7257760763168335,
"alphanum_fraction": 0.7276575565338135,
"avg_line_length": 22.633333206176758,
"blob_id": "aff1d7ae08254f8eb08c80ccf19a43ef696d00e2",
"content_id": "a20abc7425c66fa3ddc65c5d55c6e5aa842dd79a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2128,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 90,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogControllers/DialogControllerNestedPropagator.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing System.Collections.Generic;\nusing IEnumerator = System.Collections.IEnumerator;\n\nnamespace ASSPhysics.DialogSystem.DialogControllers\n{\n\tpublic class DialogControllerNestedPropagator : DialogControllerBase\n\t{\n\t//private fields and properties\n\t\tprivate IDialogController[] childDialogArray = null;\n\n\t\tprivate int closuresLeft = 0; //number of children left to close\n\t//ENDOF private fields and properties\n\n\t//IDialogController inherited overrides\n\t\tpublic override void Enable ()\n\t\t{\n\t\t\tbase.Enable();\n\n\t\t\tforeach (IDialogController dialog in childDialogArray)\n\t\t\t{\n\t\t\t\tdialog.Enable();\n\t\t\t}\n\t\t}\n\n\t\tprotected override void PerformClosure ()\n\t\t{\n\t\t\tclosuresLeft = childDialogArray.Length;\n\n\t\t\tforeach (IDialogController dialog in childDialogArray)\n\t\t\t{\n\t\t\t\tdialog.AnimatedDisable(DelegateChildFinishedClosing);\n\t\t\t}\n\t\t}\n\n\t\tpublic override void ForceDisable ()\n\t\t{\n\t\t\tforeach (IDialogController dialog in childDialogArray)\n\t\t\t{\n\t\t\t\tdialog.ForceDisable();\n\t\t\t}\n\n\t\t\tbase.ForceDisable();\n\t\t}\n\t//ENDOF IDialogController inherited overrides\n\n\t//MonoBehaviour lifecycle implementation\t\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tif (childDialogArray == null || childDialogArray.Length == 0)\n\t\t\t{\n\t\t\t\tAutoInitializeChildList();\n\t\t\t}\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\t//private method implementation\n\t\tprivate void AutoInitializeChildList ()\n\t\t{\n\t\t\tList<IDialogController> foundDialogList = new List<IDialogController>();\n\t\t\tfor (int i = 0, iLimit = transform.childCount; i < iLimit; i++)\n\t\t\t{\n\t\t\t\tIDialogController foundDialog = transform.GetChild(i).GetComponent<IDialogController>();\n\t\t\t\tif (foundDialog != null)\n\t\t\t\t{\n\t\t\t\t\tfoundDialogList.Add(foundDialog);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tchildDialogArray = foundDialogList.ToArray();\n\t\t}\n\n\t\tprivate void DelegateChildFinishedClosing ()\n\t\t{\n\t\t\tclosuresLeft--;\n\t\t\tif (closuresLeft <= 0)\n\t\t\t{\n\t\t\t\tFinishedClosingChildren();\n\t\t\t}\n\t\t}\n\n\t\tprivate void FinishedClosingChildren ()\n\t\t{\n\t\t\tbase.ForceDisable();\t//invoke the base version of forcedisable to avoid propagating forceDisable calls twice\n\t\t\tInvokeFinishingCallback();\n\t\t}\n\t//ENDOF private method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.8044354915618896,
"alphanum_fraction": 0.8044354915618896,
"avg_line_length": 26.61111068725586,
"blob_id": "f59a922ce2142afa6096587ec543d848f7ec75b7",
"content_id": "2a06b3939fe78edea29dcb7fb7d7489d22ebe7bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 498,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 18,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Unsorted/MenuLaunchIntro.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\n\nusing TDialogChanger = ASSPhysics.DialogSystem.DialogChangers.DialogChangerBase;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic class MenuLaunchIntro : MonoBehaviour\n\t{\n\t\tpublic Animator musicAnimator;\n\n\t\tpublic void KickIntro ()\n\t\t{\n\t\t\t//musicAnimator.SetBool(AnimationNames.Curtains.musicPlayEnabled, true);\n\t\t\tGameObject.Find(\"IntroDialogEnabler\").GetComponent<TDialogChanger>().ChangeDialog();\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.6677195429801941,
"alphanum_fraction": 0.6677195429801941,
"avg_line_length": 23.369230270385742,
"blob_id": "ccb0bec6da934ba57b190c37d66165cb820168ca",
"content_id": "d6abd1be514adf65529ca144a42f1d0148a4e29f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1585,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 65,
"path": "/Assets/Scripts/ASSPhysics/InteractableSystem/InteractableTriggerOnRelease.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames; \nusing EInputState = ASSPhysics.InputSystem.EInputState; //EInputState\n\nnamespace ASSPhysics.InteractableSystem\n{\n\tpublic class InteractableTriggerOnRelease : InteractableBase\n\t{\n\t//private fields and properties\n\t\t//wether this interactable is being pressed down by an active interactor\n\t\t//will trigger on input release if input started over this item\n\t\tprivate bool _pressed = false;\n\t\tprotected virtual bool pressed\n\t\t{\n\t\t\tget { return _pressed; }\n\t\t\tset\n\t\t\t{\n\t\t\t\tif (value != pressed)\n\t\t\t\t{\n\t\t\t\t\t_pressed = value;\n\t\t\t\t\tif (animator != null)\n\t\t\t\t\t\tanimator.SetBool(AnimationNames.Interactable.pressed, pressed);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t//protected override bool highlighted\n\t//ENDOF private fields and properties\n\n\t//overrides implementation\n\t //IInteractable implementation\n\t\tpublic override void Interact (EInputState state)\n\t\t{\n\t\t\t///////////////////////////////////////////////////////\n\t\t\t//[TO-DO]\n\n\t\t\t//if initiating a click over this button enter pressed state\n\t\t\tif (state == EInputState.Started)\n\t\t\t{\n\t\t\t\tpressed = true;\n\t\t\t\t//...\n\t\t\t}\n\t\t\t//if ending a click over this button execute\n\t\t\telse if (state == EInputState.Ended)\n\t\t\t{\n\t\t\t\t//...\n\t\t\t\tif (pressed) \n\t\t\t\t{\n\t\t\t\t\tTriggerCallbacks();\n\t\t\t\t\tpressed = false;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t//when highlight state changes to false, un-set pressed state\n\t\tprotected override void HighlightChanged (bool state)\n\t\t{\n\t\t\tif (!state)\t{ pressed = false; }\n\t\t\tbase.HighlightChanged(state);\n\t\t}\n\t //ENDOF IInteractable implementation\n\t//overrides implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7419072389602661,
"alphanum_fraction": 0.7419072389602661,
"avg_line_length": 24.422222137451172,
"blob_id": "0fedd7c2cd381365b5654edabb589d9aec253819",
"content_id": "2f3a89205279cfb13ae7a6c9512a8e4585c4751a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1145,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 45,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Unsorted/ApplicationLaunchDrumroll.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic class ApplicationLaunchDrumroll : MonoBehaviour\n\t{\n\t//serialized properties and fields\n\t\tpublic Animator animator;\n\t//ENDOF serialized properties and fields\n\n\t//private fields and properties\n\t\tprivate bool done = false;\n\t//ENDOF private fields and properties\n\n\t//Monobehaviour lifecycle\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tif (animator == null) { animator = GetComponent<Animator>(); }\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tif (\n\t\t\t\t!done &&\n\t\t\t\t(ControllerCache.curtainController != null && !ControllerCache.curtainController.isCompletelyClosed)\n\t\t\t) {\n\t\t\t\tdone = true;\n\t\t\t\tanimator.SetBool(AnimationNames.Curtains.drumrollFinalClash, true);\n\t\t\t\tGetComponent<MenuLaunchIntro>().KickIntro();\n\t\t\t}\n\t\t}\n\t//Monobehaviour lifecycle\n\n\t//Public methods\n\t\t//called by the animator when finished in order to destroy this controller for good as it's not needed anymore\n\t\tpublic void CleanUp ()\n\t\t{\n\t\t\tDestroy(gameObject);\n\t\t}\n\t//ENDOF Public methods\n\t}\n}"
},
{
"alpha_fraction": 0.6960461139678955,
"alphanum_fraction": 0.7009884715080261,
"avg_line_length": 18.26984214782715,
"blob_id": "0ed3141fff86ecdd3c439baee3b969b22557743a",
"content_id": "595a473bbd1daebbc7e05e623c239358ef354086",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1216,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 63,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/RectCameraControllerSmooth.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing RectMath = ASSistant.ASSMath.RectMath;\n\nnamespace ASSPhysics.CameraSystem\n{\n\tpublic class RectCameraControllerSmooth : RectCameraControllerBase\n\t{\n\t//serialized fields\n\t\t[SerializeField]\n\t\tprivate float positionLerpRate = 0.05f;\n\t\t[SerializeField]\n\t\tprivate float sizeLerpRate = 0.05f;\n\t//ENDOF serialized fields\n\n\t//private properties\n\t\tprivate Rect _targetRect;\n\t\tprotected Rect targetRect\n\t\t{\n\t\t\tget { return _targetRect; }\n\t\t\tset { _targetRect = ValidateCameraRect(value); }\n\t\t}\n\n\t\tprivate Rect baseRect\n\t\t{\n\t\t\tget { return base.rect; }\n\t\t\tset { base.rect = value; }\n\t\t}\n\t//ENDOF private fields\n\n\t//base class overrides\n\t\tprotected override Rect rect\n\t\t{\n\t\t\tget { return baseRect; }\n\t\t\tset { targetRect = value; }\n\t\t}\n\t//ENDOF base class overrides\n\n\t//MonoBehaviour lifecycle implementation\n\t\tpublic void Start ()\n\t\t{\n\t\t\ttargetRect = baseRect;\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tUpdateRect();\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\t//private methods\n\t\tprivate void UpdateRect ()\n\t\t{\n\t\t\tbaseRect = RectMath.LerpRect(\n\t\t\t\tfrom: baseRect,\n\t\t\t\tto: targetRect,\n\t\t\t\tpositionLerpRate: positionLerpRate,\n\t\t\t\tsizeLerpRate: sizeLerpRate\n\t\t\t);\n\t\t}\n\t//ENDOF private methods\n\t}\n}\n"
},
{
"alpha_fraction": 0.7151785492897034,
"alphanum_fraction": 0.7214285731315613,
"avg_line_length": 23.91111183166504,
"blob_id": "b4ae944b6fdd1b1e4ab1a1cfbd964905f86f5877",
"content_id": "7c7970d249ac2f7de8adc4c10b99b43ef117d8e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1122,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 45,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Interface/DialogColliderResizerOnAwake.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic class DialogColliderResizerOnAwake : MonoBehaviour\n\t{\n\t//MonoBehaviour lifecycle\n\t\tpublic void Start ()\n\t\t{\n\t\t\tColliderSizeFromRectTransform(\n\t\t\t\tboxCollider: GetComponent<BoxCollider>(),\n\t\t\t\trectTransform: (transform as RectTransform)\n\t\t\t);\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//private method implementation\n\t\tprivate void ColliderSizeFromRectTransform(BoxCollider boxCollider, RectTransform rectTransform)\n\t\t{\n\t\t\tfloat CenterFromPivot (float dimension, float pivot)\n\t\t\t{\n\t\t\t\treturn dimension * (pivot - 0.5f) * -1;\n\t\t\t}\n\n\t\t\tif (boxCollider == null || rectTransform == null)\n\t\t\t{\n\t\t\t\tDebug.LogError(\"DialogColliderResizer no collider or no rectTransform\");\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tboxCollider.size = new Vector3(\n\t\t\t\tx: rectTransform.rect.width,\n\t\t\t\ty: rectTransform.rect.height,\n\t\t\t\tz: 1\n\t\t\t);\n\n\t\t\tboxCollider.center = new Vector3(\n\t\t\t\tx: CenterFromPivot(rectTransform.rect.width, rectTransform.pivot.x),\n\t\t\t\ty: CenterFromPivot(rectTransform.rect.height, rectTransform.pivot.y),\n\t\t\t\tz: 0\n\t\t\t);\n\t\t}\n\t//ENDOF private method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7251037359237671,
"alphanum_fraction": 0.7251037359237671,
"avg_line_length": 28.227272033691406,
"blob_id": "ffc17b1bdf833af98000a5e6d6d26f91f78da622",
"content_id": "a15fea70601d9c59c72949df56c5c38236ac4a03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1928,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 66,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Actions/ActionBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ASSPhysics.HandSystem.Tools; //ITool\n\nusing EInputState = ASSPhysics.InputSystem.EInputState;\n\n/*DEBUG*/using Debug = UnityEngine.Debug;/*DEBUG*/\n\nnamespace ASSPhysics.HandSystem.Actions\n{\n\tpublic abstract class ActionBase : IAction\n\t{\n\t//private fields and properties\n\t\tprotected ITool tool;\n\t//ENDOF private fields and properties\n\n\t//IHandAction implementation\n\t\t//returns true if this action is currently doing something, like maintaining a grab or repeating a slapping pattern\n\t\t//base class only knows an action is ongoing if automatized. Children classes may determine additional conditions\n\t\t/*public virtual bool ongoing {\n\t\t\tget { return automatic; }\n\t\t}*/\n\t\t//wether action is to automatically repeat\n\t\tpublic bool auto { get { return _auto; } protected set { _auto = value; }}\n\t\tprivate bool _auto = false;\n\n\t\t//initialize the action with a reference to the parent tool\n\t\tpublic virtual bool Initialize (ITool parentTool)\n\t\t{\n\t\t\tDebug.Log(\"Action initializing:\" + this + \" received: \" + parentTool);\n\t\t\ttool = parentTool;\n\t\t\treturn IsValid();\n\t\t}\n\n\t\t//receive state of corresponding input medium\n\t\tpublic abstract void Input (EInputState state);\n\n\t\t//try to set in automatic state. Returns true on success\n\t\tpublic virtual bool Automate ()\n\t\t{\n\t\t\treturn false;\n\t\t}\n\n\t\t//update automatic action. To be called once per frame while action is automated. returns false if automation stops\n\t\tpublic virtual bool AutomationUpdate ()\n\t\t{\n\t\t\treturn false;\n\t\t}\n\n\t\t//stop automation\n\t\tpublic virtual void DeAutomate ()\n\t\t{\n\t\t\t//clear();\n\t\t}\n\n\t\t//clears and finishes the action\n\t\tpublic virtual void Clear ()\n\t\t{\n\t\t\t//auto = false; //unnecessary the action is gonna be destroyed anyway\n\t\t\ttool.ActionEnded();\n\t\t}\n\n\t\t//returns true if this action is valid for this hand (targets in range 'n such)\n\t\tpublic virtual bool IsValid () { Debug.Log(\"ActionBase.IsValid()\"); return false; }\n\t//ENDOF IHandAction implementation\n\n\t}\n}"
},
{
"alpha_fraction": 0.6887637972831726,
"alphanum_fraction": 0.6893892288208008,
"avg_line_length": 24.120418548583984,
"blob_id": "f9b9886a5114c979a585de48ad758a90761d4a99",
"content_id": "13d13a4da9add76357959797a36860c32092219d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4799,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 191,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Tools/ToolBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nusing ASSPhysics.HandSystem.Actions; //IAction\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\nusing EInputState = ASSPhysics.InputSystem.EInputState;\nusing IInteractor = ASSPhysics.InteractableSystem.IInteractor;\n\nnamespace ASSPhysics.HandSystem.Tools\n{\n\t//[RequireComponent(typeof(Animator))]\n\tpublic abstract class ToolBase : MonoBehaviour, ITool\n\t{\n\t//Local variables\n\t\t//cached reference to animator component\n\t\t[SerializeField]\n\t\tprotected Animator animator;\n\t\t//current action\n\t\tprotected IAction action = null;\n\t//ENDOF Local variables\n\n\t//MonoBehaviour lifecycle Implementation\n\t\tpublic virtual void Awake ()\n\t\t{\n\t\t\tif (animator == null) { animator = gameObject.GetComponentInChildren<Animator>(); }\n\t\t\tinteractor = GetComponentInChildren<IInteractor>();\n\t\t}\n\n\t\tpublic virtual void Update ()\n\t\t{\n\t\t\tif (auto)\n\t\t\t{\n\t\t\t\taction.AutomationUpdate();\n\t\t\t}\n\t\t}\n\t//MonoBehaviour LifeCycle Implementation\n\n\t//ITool implementation\n\t\t//is this tool in auto mode\n\t\tprivate bool _auto = false;\n\t\tpublic bool auto\n\t\t{\n\t\t\tget { return _auto;\t}\n\t\t\tprotected set\n\t\t\t{\n\t\t\t\t_auto = value;\n\t\t\t\tanimator.SetBool(AnimationNames.Tool.automated, value);\n\t\t\t}\n\t\t}\n\n\t\t//private IInteractor _interactor;\n\t\tpublic IInteractor interactor {get; private set;}\n\n\t\tpublic IAction activeAction { get { return action; } }\n\n\t\tpublic Vector3 position\n\t\t{\n\t\t\tset \n\t\t\t{\n\t\t\t\tif (auto) return;\n\t\t\t\ttransform.position = ControllerCache.viewportController.ClampPositionToViewport(value);\n\t\t\t}\n\t\t\tget { return transform.position; }\n\t\t}\n\n\t\t//wether the hand is on focus or not\n\t\tprivate bool _focused = false;\n\t\tpublic bool focused\n\t\t{\n\t\t\tget { return _focused; }\n\t\t\tset\n\t\t\t{\n\t\t\t\tif (_focused == true && value == false) { LostFocus(); }\n\t\t\t\t_focused = value;\n\t\t\t\tanimator.SetBool(AnimationNames.Tool.focused, value);\n\t\t\t}\n\t\t}\n\n\t\t//move the tool in worldspace\n\t\tpublic void Move (Vector3 delta)\n\t\t{\n\t\t\tposition = position + delta;\n\t\t}\n\n\t\t//Main Input Receiver\n\t\t//called upon pressing, holding, and releasing input button\n\t\t//propagates input to action only if non-automated. If automated, exits automation on input started\n\t\tpublic void MainInput (EInputState state, Vector3 targetPosition)\n\t\t{\n\t\t\tposition = targetPosition;\n\t\t\tMainInput (state);\n\t\t}\n\t\tpublic void MainInput (EInputState state)\n\t\t{\n\t\t\t//if automated ignore input save for determining wether to finish or not\n\t\t\tif (auto)\n\t\t\t{\n\t\t\t\tif (state == EInputState.Started)\n\t\t\t\t{\n\t\t\t\t\tDeAutomate();\n\t\t\t\t}\n\t\t\t\treturn;\n\t\t\t}\n\t\t\t\n\t\t\tif (state == EInputState.Held) { InputHeld(); }\n\t\t\telse if (state == EInputState.Started) { InputStarted(); }\n\t\t\telse { InputEnded(); } //EInputState.Ended:\n\n\t\t\tif (action != null)\n\t\t\t{\n\t\t\t\taction.Input(state);\n\t\t\t}\n\t\t}\n\n\t\t//Called by the current action to remove itself\n\t\tpublic void ActionEnded ()\n\t\t{\n\t\t\taction = null;\n\t\t}\n\n\t\t//sets the animator\n\t\tpublic virtual void SetAnimationState (string triggerName)\n\t\t{\n\t\t\tanimator.SetTrigger(triggerName);\n\t\t}\n\t//ENDOF ITool implementation\n\n\t//Private functionality\n\t\t//Start an action of type T unless its the type currently active\n\t\t//the initialize it with a reference to ourselves and return its startup validity check\n\t\tprotected bool SetAction <T> () where T : class, IAction, new()\n\t\t{\n\t\t\t//if the current action is NOT of the same type as the target\n\t\t\t//only then attempt to create a new action\n\t\t\tif ((action as T) == null)\n\t\t\t{\n\t\t\t\t//create the new action\n\t\t\t\tIAction newAction = new T ();\n\t\t\t\t//initialize it with a proper reference and check if it's valid\n\t\t\t\tif (newAction.Initialize((ITool)this))\n\t\t\t\t{\n\t\t\t\t\taction?.Clear(); //call Clear on the previous action for cleanup\n\t\t\t\t\taction = newAction; //store the new action\n\t\t\t\t\treturn true;\t//return true indicating valid action\n\t\t\t\t}\n\n\t\t\t\treturn false;\t//return false indicating failed action\n\t\t\t}\n\t\t\t//if the current action IS of the target type, re-initialize it and return its validity check\n\t\t\treturn action.IsValid();\n\t\t}\n\n\t\t//called when losing focus\n\t\t//when being unfocused the tool tries to automate the ongoing action\n\t\tprotected void LostFocus ()\n\t\t{\n\t\t\tif (!auto)\n\t\t\t{\n\t\t\t\tAutomate();\n\t\t\t}\n\t\t}\n\n\t\t//attempt to set in automated mode. \n\t\tprotected void Automate ()\n\t\t{\n\t\t\tif (action != null && !action.auto)\n\t\t\t{\n\t\t\t\tauto = action.Automate();\n\t\t\t\tDebug.Log(\"automating tool > \" + auto);\n\t\t\t}\n\t\t}\n\n\t\t//finish auto mode\n\t\tprotected void DeAutomate ()\n\t\t{\n\t\t\tif (action != null && action.auto)\n\t\t\t{\n\t\t\t\taction.DeAutomate();\n\t\t\t}\n\t\t\tauto = false;\n\t\t}\n\t//ENDOF Private functionality\n\n\t//Protected abstract method exposed for implementation\n\t\tprotected abstract void InputStarted ();\n\t\tprotected abstract void InputHeld ();\n\t\tprotected abstract void InputEnded ();\n\t//ENDOF Protected abstract method exposed for implementation\n\t}\n}"
},
{
"alpha_fraction": 0.761543333530426,
"alphanum_fraction": 0.7634408473968506,
"avg_line_length": 28.296297073364258,
"blob_id": "25de8401fcb6e734892410a217e8e82e9dd2661e",
"content_id": "c96b29b158f0fabab5a8d68c0fc77a8d05c081fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1583,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 54,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogChangers/Base/DialogChangerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IDialogController = ASSPhysics.DialogSystem.DialogControllers.IDialogController;\nusing TDialogManager = ASSPhysics.DialogSystem.DialogManagerBase;\n\nnamespace ASSPhysics.DialogSystem.DialogChangers\n{\n\tpublic class DialogChangerBase : MonoBehaviour\n\t{\n\t//serialized fields\n\t\t[SerializeField]\n\t\t[Tooltip(\"IDialogController to activate on call\")]\n\t\tprotected ASSPhysics.DialogSystem.DialogControllers.DialogControllerBase defaultTargetDialog = null;\n\n\t\t//if no dialog manager has been set try to find one in our parents or children\n\t\t[SerializeField]\n\t\tprivate TDialogManager _dialogManager = null;\n\t\tprotected TDialogManager dialogManager\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\treturn (_dialogManager != null)\n\t\t\t\t\t\t\t? _dialogManager\n\t\t\t\t\t\t\t: transform.root.GetComponentInChildren<TDialogManager>();\n\t\t\t}\n\t\t}\n\n\t\t[SerializeField]\n\t\tprivate float delay = 0.0f;\n\t//ENDOF serialized fields\n\n\t//public methods\n\t\t//requests a dialogManager to activate target dialog\n\t\tpublic void ChangeDialog () { ChangeDialog(defaultTargetDialog); }\n\t\tpublic void ChangeDialog (IDialogController dialog)\n\t\t{\n\t\t\t//Debug.Log(\"Changing dialog\");\n\t\t\tif (delay <= 0){ DoChangeDialog(dialog); }\n\t\t\telse { StartCoroutine(DelayedChangeDialog(dialog, delay)); }\n\t\t}\n\n\t\tprivate System.Collections.IEnumerator DelayedChangeDialog (IDialogController dialog, float delayLength)\n\t\t{\n\t\t\tyield return new UnityEngine.WaitForSeconds(delayLength);\n\t\t\tDoChangeDialog(dialog);\n\t\t}\n\n\t\tprivate void DoChangeDialog (IDialogController dialog)\n\t\t{\n\t\t\tdialogManager.SetActiveDialog(dialog);\n\t\t}\n\t//ENDOF public methods\n\t}\n}"
},
{
"alpha_fraction": 0.7536739110946655,
"alphanum_fraction": 0.7536739110946655,
"avg_line_length": 22.064516067504883,
"blob_id": "febee3403cd9c891d6c923173c4273c7824d73e8",
"content_id": "2298729c18569209b7c90ec2230d1e88d8e06f86",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1431,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 62,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogControllers/DialogControllerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IEnumerator = System.Collections.IEnumerator;\n\nnamespace ASSPhysics.DialogSystem.DialogControllers\n{\n\tpublic abstract class DialogControllerBase : MonoBehaviour, IDialogController\n\t{\n\t//private fields\n\t\tprivate bool closing = false;\n\n\t\tprivate DParameterlessDelegate queuedCallback = null;\n\t//ENDOF private fields\n\n\t//IDialogController definition and basic implementation\n\t\t//enable the dialog\n\t\tpublic virtual void Enable ()\n\t\t{\n\t\t\tgameObject.SetActive(true);\n\t\t}\n\n\t\t//disable the dialog. Stores finishingCallback for later execution\n\t\tpublic virtual void AnimatedDisable (DParameterlessDelegate finishingCallback)\n\t\t{\n\t\t\tif (closing) { return; }\n\t\t\tclosing = true;\n\t\t\tqueuedCallback = finishingCallback;\n\t\t\tPerformClosure();\n\t\t}\n\n\t\t//disable the dialog immediately\n\t\tpublic virtual void ForceDisable ()\n\t\t{\n\t\t\tgameObject.SetActive(false);\n\t\t}\n\t//ENDOF IDialogController definition\n\n\t//protected abstract method declaration\n\t\tprotected abstract void PerformClosure ();\n\t//ENDOF protected abstract method declaration\n\n\t//protected method implementation\n\t\tprotected void InvokeFinishingCallback ()\n\t\t{\n\t\t\tif (queuedCallback != null)\n\t\t\t{\n\t\t\t\tqueuedCallback.Invoke();\n\t\t\t\tqueuedCallback = null;\n\t\t\t}\n\t\t}\n\t//ENDOF protected method implementation\n\n\t//MonoBehaviour lifecycle implementation\n\t\tpublic virtual void OnEnable ()\n\t\t{\n\t\t\tclosing = false;\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\n\t}\n}"
},
{
"alpha_fraction": 0.7432712316513062,
"alphanum_fraction": 0.7474120259284973,
"avg_line_length": 24.447368621826172,
"blob_id": "c4d22e0677b6cdd3130f9dcd2365d9eeffff6805",
"content_id": "baf7dda5977b7701b0390cb406ad31862594737b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 968,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 38,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogChangers/Base/DialogChangerOnConditionHeldBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.DialogSystem.DialogChangers\n{\n\tpublic abstract class DialogChangerOnConditionHeldBase : DialogChangerOnConditionBase\n\t{\n\t//serialized fields and properties\n\t\t[UnityEngine.SerializeField]\n\t\tprivate float targetHeldTime = 0.5f;\n\t//ENDOF serialized fields and properties\n\n\t//private fields and properties\n\t\tprivate float currentHeldTime = 0f;\n\t//ENDOF private fields and properties\n\n\t//base class abstract implementation\n\t\t//this method should return true when the condition for dialog change is fulfilled\n\t\tprotected override bool CheckCondition ()\n\t\t{\n\t\t\tif (CheckHeldCondition())\n\t\t\t{\n\t\t\t\tcurrentHeldTime += UnityEngine.Time.deltaTime;\n\t\t\t\tif (currentHeldTime >= targetHeldTime)\n\t\t\t\t{\n\t\t\t\t\treturn true;\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tcurrentHeldTime = 0;\n\t\t\t}\n\t\t\treturn false;\n\t\t}\n\t//ENDOF base class abstract implementation\n\n\t//abstract method declaration\n\t\tprotected abstract bool CheckHeldCondition ();\n\t//ENDOF abstract method declaration\n\t}\n}"
},
{
"alpha_fraction": 0.7496551871299744,
"alphanum_fraction": 0.751724123954773,
"avg_line_length": 32.344825744628906,
"blob_id": "97521ed4c0efbd06bc8a3cf754af5e0e5bac9ada",
"content_id": "b3cc24fdc2a4cb7585ca24ee7c135a7a5571e347",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2902,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 87,
"path": "/Assets/Scripts/ASSPhysics/TailSystem/TailElementJointSmoothFollow.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IPulseData = ASSPhysics.PulseSystem.PulseData.IPulseData;\n\nnamespace ASSPhysics.TailSystem\n{\n\tpublic class TailElementJointSmoothFollow : TailElementBase\n\t{\n\t//serialized fields and properties\n\t\t//managed joint. can only handle one joint, so single thread tails for this class\n\t\t[SerializeField]\n\t\tprivate ConfigurableJoint _joint;\t\n\t\tpublic ConfigurableJoint joint { get { return _joint; } set { _joint = value; } }\n\n\t\t//maximum flat rotation speed\n\t\t[SerializeField]\n\t\tprivate float _rotationRate = 90f;\n\t\tpublic float rotationRate { get { return _rotationRate; } set { _rotationRate = value; } }\n\n\t\t//rate of lerp towards target rotation\n\t\t[SerializeField]\n\t\tprivate float _lerpRate = 0.1f;\n\t\tpublic float lerpRate { get { return _lerpRate; } set { _lerpRate = value; } }\n\n\t//ENDOF serialized fields and properties\n\n\t//private fields and properties\n\t\tprivate Quaternion targetRotation;\t//target rotation to reach\n\t\tprivate Quaternion expectedRotation;\t//angle currently trying to achieve\n\t\tprivate Quaternion jointRotation\t//current joint target rotation. We'll slerp this into our target rotation\n\t\t{\n\t\t\tget { return joint.targetRotation; }\n\t\t\tset { joint.targetRotation = value; }\n\t\t}\n\t//ENDOF private fields and properties\n\n\t//TailElementBase abstract method implementation\n\t\t//attempts to match current rotation with target rotation\n\t\tprotected override void UpdateRotation (float timeDelta)\n\t\t{\n\t\t\t//uniformly rotate a dummy rotation towards target rotation\n\t\t\texpectedRotation = Quaternion.RotateTowards(\n\t\t\t\tfrom: expectedRotation,\n\t\t\t\tto: targetRotation,\n\t\t\t\tmaxDegreesDelta: rotationRate * timeDelta\n\t\t\t);\n\n\t\t\t//then slerp the joint towards dummy rotation so as to smooth movement\n\t\t\tjointRotation = Quaternion.Slerp(\n\t\t\t\ta: jointRotation,\n\t\t\t\tb: expectedRotation,\n\t\t\t\tt: lerpRate\n\t\t\t);\n\t\t}\n\t//ENDOF TailElementBase abstract method implementation\n\n\t//IPulsePropagator abstract method implementation\n\t\t//execute a pulse and propagate it in the corresponding direction after proper delay\t\n\t\t\t//jointed element handles the pulse by setting its rotation \n\t\tprotected override void DoPulse (IPulseData pulseData)\n\t\t{\n\t\t\t//Debug.Log(\"pulse: \" + pulseData.computedValue);\n\t\t\ttargetRotation = PulseToQuaternion(pulseData); // * BaseRotation;\n\t\t}\n\t//ENDOF IPulsePropagator abstract method implementation\n\n\t//private methods\n\t\t//returns Z rotation required by a pulse\n\t\tprivate float PulseToAngle (IPulseData pulseData)\n\t\t{\n\t\t\treturn Mathf.Clamp(\n\t\t\t\tpulseData.computedValue * rotationSoftLimit,\n\t\t\t\t-rotationMax,\n\t\t\t\trotationMax\n\t\t\t);\n\t\t}\n\n\t\t//transform a pulse into a quaternion rotation \n\t\t\t//rotation around Z axis is proportional to pulse intensity\n\t\t\t//and clamped between positive and negative rotationMax\n\t\tprivate Quaternion PulseToQuaternion (IPulseData pulseData)\n\t\t{\n\t\t\treturn Quaternion.Euler(0, 0, PulseToAngle(pulseData));\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.775086522102356,
"alphanum_fraction": 0.775086522102356,
"avg_line_length": 28.931034088134766,
"blob_id": "ebaef0a68807ae8575c70b895f2d696f31697140",
"content_id": "e3f61571487b4dfb31d9ef5ac229afae7033a963",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 869,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 29,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogChangers/Tutorial/DialogChangerOnActionGrab.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using Debug = UnityEngine.Debug;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nusing ActionGrab = ASSPhysics.HandSystem.Actions.ActionGrab;\n\nnamespace ASSPhysics.DialogSystem.DialogChangers\n{\n\tpublic class DialogChangerOnActionGrab : DialogChangerOnConditionHeldBase\n\t{\n\t//base class abstract method implementation\n\t\tprotected override bool CheckHeldCondition ()\n\t\t{\n\t\t\tif (ControllerCache.toolManager == null)\n\t\t\t{\n\t\t\t\tDebug.LogWarning (\"DialogChangerOnActionGrab: tool manager not found\");\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\t//fetch active action cast as a grabbing action\n\t\t\tActionGrab action = ControllerCache.toolManager.activeTool.activeAction as ActionGrab;\n\n\t\t\t//if action does not cast successfully return false\n\t\t\tif (action == null) { return false; }\n\t\t\treturn action.grabActive;\n\t\t}\n\t//ENDOF base class abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.6949516534805298,
"alphanum_fraction": 0.6977443695068359,
"avg_line_length": 27.048192977905273,
"blob_id": "2347ecb2fa82eae1ab1bc415bfa4d8b19cb84229",
"content_id": "da97f4886e79322549c4bf9a02f85243b30cc2dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4657,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 166,
"path": "/Assets/Scripts/ASSPhysics/AudioSystem/Music/MusicController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IEnumerator = System.Collections.IEnumerator;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.AudioSystem.Music\n{\n//Manages music playback\n\tpublic class MusicController :\n\t\tASSPhysics.ControllerSystem.MonoBehaviourControllerBase<IMusicController>,\n\t\tIMusicController\n\t{\n\t//private fields\n\t\t//managed AudioSource component\n\t\t[SerializeField]\n\t\tprivate AudioSource audioSource = null;\n\t\t//serialized list of song presets\n\t\t[SerializeField]\n\t\tprivate AudioPlaybackProperties[] scenePresets = null;\n\n\t\t//currently active playback\n\t\tprivate AudioPlaybackProperties currentPlayback = null;\n\n\t\t//currently generated playback volume\n\t\tprivate float playbackVolume = 1.0f;\n\n\t\t//fields used by transitions\n\t\tprivate bool inTransition { get { return transitionToPlayback != null; }}\n\t\tprivate AudioPlaybackProperties transitionToPlayback = null;\n \t//ENDOF private fields\n\n\t//private properties\n\t\t//easy getter for global volume settings \n\t\tprivate float globalVolume\n\t\t{ get { /*[TO-DO]*/ return 1.0f; /*[TO-DO]*/ }} //////////////////////////////////////////////////////////////////////////////\n\t//ENDOF private properties\n\n\t//IMusicController implementation\n\t\t//set this to adjust fade in-out progress. Will stack with global sound settings and clip volume\n\t\tprivate float fadeVolume = 1.0f;\n\t\t/*public float fadeVolume\n\t\t{\n\t\t\tget { return _fadeVolume; }\n\t\t\tprivate set { _fadeVolume = value; }\n\t\t}*/\n\n\t\t//starts playback of level track if not already playing\n\t\tpublic void PlaySceneSong(int sceneIndex)\n\t\t{ \n\t\t\tif (scenePresets == null || scenePresets.Length == 0) { Debug.LogError(\"MusicController.PlaySceneSong(int): scenePresets empty\"); return; }\n\t\t\tif (sceneIndex < 0 || sceneIndex >= scenePresets.Length) { Debug.LogError(\"MusicController.PlaySceneSong(int): can't take value: \" + sceneIndex); return; }\n\t\t\tPlaySong(\n\t\t\t\tproperties:scenePresets[sceneIndex],\n\t\t\t\tforceRestart: true,\n\t\t\t\tfadeWithCurtain: true\n\t\t\t);\n\t\t}\n\n\t\t//starts playback of desired track.\n\t\t//If forceRestart is true, attempting to play the same clip will restart playback\n\t\t//if fadeWithCurtain is true, song change will happen with a volume fade in-out synced with scene transition\n\t\tpublic void PlaySong (\n\t\t\tAudioPlaybackProperties properties,\n\t\t\tbool forceRestart = false,\n\t\t\tbool fadeWithCurtain = false\n\t\t) {\n\t\t\t//cancel playback if no audio clip\n\t\t\tif (properties == null) { Debug.LogError(\"MusicController.PlaySong(properties): properties null\"); return; }\n\t\t\t\n\t\t\t//if the same song is already playing and a restart is not forced, ignore the request\n\t\t\tif (audioSource.isPlaying && !forceRestart && currentPlayback.clip == properties.clip) { return; }\n\n\t\t\tStartCoroutine(PlaySongCoroutine());\n\n\t\t\tIEnumerator PlaySongCoroutine ()\n\t\t\t{\n\t\t\t\t//ensure no stacked transitions\n\t\t\t\tif (inTransition)\n\t\t\t\t{\n\t\t\t\t\tDebug.LogWarning(\"PlaySongCoroutine(): can't play new song, already waiting for transition\");\n\t\t\t\t\tyield break;\n\t\t\t\t};\n\n\t\t\t\ttransitionToPlayback = properties;\n\n\t\t\t\t//wait for song fade-out\n\t\t\t\tif (fadeWithCurtain)\n\t\t\t\t{\n\t\t\t\t\twhile (\n\t\t\t\t\t\t!ControllerCache.curtainController.isCompletelyClosed\n\t\t\t\t\t\t//&& ControllerCache.curtainController.openingProgress > 0f\n\t\t\t\t\t) {\n\t\t\t\t\t\tfadeVolume = ControllerCache.curtainController.openingProgress;\n\t\t\t\t\t\tyield return null;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t//swap song\n\t\t\t\tSetSong();\n\n\t\t\t\t//wait for song fade-in\n\t\t\t\tif (fadeWithCurtain)\n\t\t\t\t{\n\t\t\t\t\twhile (ControllerCache.curtainController.openingProgress < 1.0f)\n\t\t\t\t\t{\n\t\t\t\t\t\tfadeVolume = ControllerCache.curtainController.openingProgress;\n\t\t\t\t\t\tyield return null;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t//done\n\t\t\t\tfadeVolume = 1.0f;\n\t\t\t\ttransitionToPlayback = null;\n\t\t\t}\n\n\t\t\tvoid SetSong ()\n\t\t\t{\n\t\t\t\tcurrentPlayback = properties;\n\t\t\t\taudioSource.clip = properties.clip;\n\t\t\t\taudioSource.loop = properties.loop;\n\t\t\t\taudioSource.pitch = properties.pitch.Generate();\n\t\t\t\tplaybackVolume = properties.volume.Generate();\n\n\t\t\t\tif (properties.clip != null)\n\t\t\t\t{\n\t\t\t\t\tUpdateVolume();\n\t\t\t\t\taudioSource.Play(); \n\t\t\t\t}\n\t\t\t\telse \n\t\t\t\t{\n\t\t\t\t\tStop();\n\t\t\t\t}\n\n\t\t\t}\n\t\t}\n\n\t\t//stops playback\n\t\tpublic void Stop()\n\t\t{\n\t\t\taudioSource.Stop();\n\t\t}\n\t//ENDOF IMusicController implementation\n\n\t//MonoBehaviour implementation\n\t\tpublic override void Awake ()\n\t\t{\n\t\t\tbase.Awake();\n\t\t\tif (audioSource == null) { audioSource = GetComponent<AudioSource>(); }\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tUpdateVolume();\n\t\t}\n\t//ENDOF MonoBehaviour implementation\n\n\t//private method implementation\n\t\t//updates player volume according to volume settings, fade value, and playback properties\n\t\tprivate void UpdateVolume ()\n\t\t{\n\t\t\taudioSource.volume = globalVolume * fadeVolume * playbackVolume;\n\t\t}\n\t//ENDOF private method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.8093922734260559,
"alphanum_fraction": 0.8093922734260559,
"avg_line_length": 26.923076629638672,
"blob_id": "ac6eb592977b06c4f94ddee49ed6c21e9af7ba86",
"content_id": "f5b4a25f0ae5700e1a9b78259cb5fafa7c709033",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 362,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 13,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Weavers/WeaverInspectorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic abstract class WeaverInspectorBase\n\t\t: ArmableInspectorBase, IWeaverInspector\n\t{\n\t\t//Weave interconnection joint configuration\n\t\t[SerializeField]\n\t\tprivate ConfigurableJoint _defaultWeavingJoint = null; \n\t\tpublic ConfigurableJoint defaultWeavingJoint { get { return _defaultWeavingJoint; }}\n\t}\n}"
},
{
"alpha_fraction": 0.7628865838050842,
"alphanum_fraction": 0.7628865838050842,
"avg_line_length": 27.29166603088379,
"blob_id": "2f93a005e0948e3dc52be3d8346accd41272f99b",
"content_id": "a0b2b449e1ced73f532c60b82f1cc7a884c167bd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 679,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 24,
"path": "/Assets/Scripts/ASSPhysics/PulseSystem/PulseData/PulseDataSignedImmutable.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ASSPhysics.PulseSystem;\n\nnamespace ASSPhysics.PulseSystem.PulseData\n{\n\tpublic class PulseDataSignedImmutable : PulseDataSignedIntensityBase\n\t{\n\t//abstract method implementation\n\t\t//gets an updated copy of the pulse, as changed over target distance\n\t\t//Base Signed Intensity pulse doesn't mutate\n\t\tpublic override IPulseData GetUpdatedPulse (float distance)\n\t\t{\n\t\t\treturn this;\n\t\t\t/*\n\t\t\treturn (IPulseData) new PulseDataSignedImmutable (\n\t\t\t\t__pulseIntensity: pulseIntensity,\n\t\t\t\t__pulseSign: pulseSign,\n\t\t\t\t__propagationDelayModifier: __propagationDelayModifier,\n\t\t\t\t__propagationDirection: propagationDirection\n\t\t\t);\n\t\t\t*/\n\t\t}\n\t//ENDOF abstract method implementation\n\t}\n}\n"
},
{
"alpha_fraction": 0.834080696105957,
"alphanum_fraction": 0.834080696105957,
"avg_line_length": 21.399999618530273,
"blob_id": "e5219fbb9dd673d87e8cde5b6bba0b97c6ffc49d",
"content_id": "e2d58d84cd5a92ef10077f29776975537eb6d298",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 225,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 10,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/IDialogManager.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using IDialogController = ASSPhysics.DialogSystem.DialogControllers.IDialogController;\n\nnamespace ASSPhysics.DialogSystem\n{\n\tpublic interface IDialogManager\n\t{\n\t\tvoid SetActiveDialog (IDialogController targetDialog);\n\t}\n\n}"
},
{
"alpha_fraction": 0.7754814624786377,
"alphanum_fraction": 0.7759511470794678,
"avg_line_length": 35.72413635253906,
"blob_id": "2093544d3359926c0dcc7bdd91939cc8844897dd",
"content_id": "607d2ae8ace2c74c0768cf7c8b46d4846e220ee1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2129,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 58,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Riggers/TailRiggerEditorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IRiggerInspector = ASSpriteRigging.Inspectors.IRiggerInspector;\n\nnamespace ASSpriteRigging.Editors\n{\n//rigs a chain of bones with required components\n\tpublic abstract class TailRiggerEditorBase<TInspector>\n\t:\n\t\tRiggerEditorBase<TInspector>\n\t\twhere TInspector : UnityEngine.Object, IRiggerInspector\n\t{\n\t//inherited abstract method implementation\n\t\tprotected override void RigBones ()\n\t\t{\n\t\t\tRigTail(targetInspector);\n\t\t\tDebug.Log(\"Rigged bones of \" + targetInspector.name);\n\t\t}\n\t//ENDOF inherited abstract method implementation\n\n\t//private methods\n\t\t//extracts all the data from rigger object and calls a properly parametrized RigTailBoneRecursive\n\t\tprivate void RigTail (TInspector inspector)\n\t\t{\t\n\t\t\tif (inspector == null) { inspector = targetInspector; }\n\t\t\tRigTailRoot(inspector.spriteSkin.rootBone, inspector);\n\t\t\tRigTailBoneElementRecursive(inspector.spriteSkin.rootBone, inspector);\n\t\t}\n\n\t\t//Recursively populate every transform with adequate controller and components\n\t\tprivate\tvoid RigTailBoneElementRecursive (Transform bone, TInspector inspector)\n\t\t{\n\t\t\tDebug.Log(\"Rigging tail bone: \" + bone.name);\n\t\t\tRigTailBone(bone, inspector);\n\t\t\t//loop over this element's transform children, recursively rigging each of them\n\t\t\tfor (int i = 0, iLimit = bone.childCount; i < iLimit; i++)\n\t\t\t{\n\t\t\t\tTransform nextBone = bone.GetChild(i);\n\t\t\t\t//recursively rig each child so required rigidbodies exist\n\t\t\t\tRigTailBoneElementRecursive(nextBone, inspector);\n\t\t\t\t//finally create required joints between the elements\n\t\t\t\tRigTailBonePairConnection(bone, nextBone, inspector);\n\t\t\t}\n\t\t}\n\t//ENDOF private methods\n\n\t//abstract method declaration\n\t\t//rig the base/root of the transform chain\n\t\tprotected abstract void RigTailRoot (Transform rootBone, TInspector inspector);\n\n\t\t//rig an individual element of the transform chain\n\t\tprotected abstract void RigTailBone (Transform bone, TInspector inspector);\n\n\t\t//rig a connection between two elements\n\t\tprotected abstract ConfigurableJoint RigTailBonePairConnection (Transform bone, Transform nextBone, TInspector inspector);\n\t//ENDOF abstract method declaration\n\t}\n}"
},
{
"alpha_fraction": 0.7166666388511658,
"alphanum_fraction": 0.7166666388511658,
"avg_line_length": 12.333333015441895,
"blob_id": "6aa7d2d2d4e960cfdd952d3e2a992cb5c09faa4a",
"content_id": "99ffd67a182c971c3221790bedca37d15548b1dc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 122,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 9,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Unsorted/QuitApplicationOnAwake.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\npublic class QuitApplicationOnAwake : MonoBehaviour\n{\n\tvoid Awake ()\n\t{\n\t\tApplication.Quit();\n\t}\n}\n"
},
{
"alpha_fraction": 0.6940298676490784,
"alphanum_fraction": 0.6940298676490784,
"avg_line_length": 18.14285659790039,
"blob_id": "d266332f1a37d3494ad8fe4fa5cbd435d3a61587",
"content_id": "10d48ee93f541e700564882895eafde90ce7919e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 136,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 7,
"path": "/Assets/Scripts/ASSPhysics/ProgressionSystem/IProgressionController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "public interface IProgressionController\n{\n\tvoid SetValue <T> (string key, T value);\n\tT GetValue <T> (string key);\n\n\tvoid Clear ();\n}\n"
},
{
"alpha_fraction": 0.768203854560852,
"alphanum_fraction": 0.7791262269020081,
"avg_line_length": 36.5,
"blob_id": "0619188aad6cdb4214608b3ac793b06bc34e6270",
"content_id": "05d5315241657f44d18c98483b4c93f36242782b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 826,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 22,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/IViewportController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.CameraSystem\n{\n\tpublic interface IViewportController : ASSPhysics.ControllerSystem.IController\n\t{\n\t\tRect rect {get;}\t\t//current size of the viewport\n\t\tfloat size {get;}\t\t//current height value of the viewport\n\t\tVector2 position {get;}\t//world-space position of the camera\n\n\t\t//moves and resizes camera viewport\n\t\tvoid ChangeViewport (Vector2? position = null, float? size = null);\n\n\t\t//transforms a screen point into a world position\n\t\t//if worldSpace is false, the returned Vector3 ignores camera transform position\n\t\tVector2 ScreenSpaceToWorldSpace (Vector2 mousePosition, bool worldSpace = true);\n\n\t\t//Prevents position from going outside of this camera's boundaries\n\t\tVector2 ClampPositionToViewport (Vector2 position);\n\t\tVector3 ClampPositionToViewport (Vector3 position);\n\t}\n}"
},
{
"alpha_fraction": 0.7047146558761597,
"alphanum_fraction": 0.7104218602180481,
"avg_line_length": 32.87395095825195,
"blob_id": "6fd0447612a124855c8e05e3cef268e8f3c5347b",
"content_id": "e8980aaa00991f584f7944184b05089ad3b5a3fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4032,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 119,
"path": "/Assets/Scripts/ASSistant/ASSMath/RectMath.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSistant.ASSMath\n{\n\t//methods for Rect manipulation\n\tpublic static class RectMath\n\t{\n\t//Rect creation methods\n\t\t//Creates a new rect with given dimensions at target position\n\t\tpublic static Rect RectFromCenterAndSize (Vector2 position, Vector3 size)\n\t\t{ return RectFromCenterAndSize(position, size.x, size.y); }\n\t\tpublic static Rect RectFromCenterAndSize (Vector2 position, Vector2 size)\n\t\t{ return RectFromCenterAndSize(position, size.x, size.y); }\n\t\tpublic static Rect RectFromCenterAndSize (Vector2 position, float width, float height)\n\t\t{\n\t\t\treturn new Rect(\n\t\t\t\tx: position.x - (width / 2),\n\t\t\t\ty: position.y - (height / 2),\n\t\t\t\twidth: width,\n\t\t\t\theight: height\n\t\t\t);\n\t\t}\n\t//ENDOF Rect creation methods\n\n\t//Rect clamping and trimming methods \n\t\t//clamp a x/y position within a rect\n\t\tpublic static Vector2 ClampVector2WithinRect (Vector2 position, Rect outerRect)\n\t\t{\n\t\t\treturn new Vector2\n\t\t\t(\n\t\t\t\tx: Mathf.Clamp(position.x, outerRect.xMin, outerRect.xMax),\n\t\t\t\ty: Mathf.Clamp(position.y, outerRect.yMin, outerRect.yMax)\n\t\t\t);\n\t\t}\n\t\tpublic static Vector3 ClampVector3WithinRect (Vector3 position, Rect outerRect)\n\t\t{\n\t\t\treturn new Vector3\n\t\t\t(\n\t\t\t\tx: Mathf.Clamp(position.x, outerRect.xMin, outerRect.xMax),\n\t\t\t\ty: Mathf.Clamp(position.y, outerRect.yMin, outerRect.yMax),\n\t\t\t\tz: position.z\n\t\t\t);\n\t\t}\n\n\t\t//ensures innerRect bounds stay within outerRect by moving innerRect if protruding.\n\t\t//if innerRect dimensions exceed outerRect, they will be centered\n\t\tpublic static Rect ClampRectPositionWithinRect (Rect innerRect, Rect outerRect)\n\t\t{\n\t\t\treturn new Rect (\n\t\t\t\tx: (innerRect.width <= outerRect.width)\n\t\t\t\t\t? //if innerRect is thinner than outerRect, clamp its position within outerRect\n\t\t\t\t\t\tMathf.Clamp(\t\t\t\t\t\n\t\t\t\t\t\t\tvalue: innerRect.x,\n\t\t\t\t\t\t\tmin: outerRect.xMin,\n\t\t\t\t\t\t\tmax: outerRect.xMax - innerRect.width\n\t\t\t\t\t\t)\n\t\t\t\t\t: //if innerRect is wider than outerRect, center their position\n\t\t\t\t\t\touterRect.x - ((innerRect.width - outerRect.width) / 2),\n\t\t\t\ty: (innerRect.height <= outerRect.height)\n\t\t\t\t\t? //if innerRect is shorter than outerRect clamp its position\n\t\t\t\t\t\tMathf.Clamp(\n\t\t\t\t\t\t\tvalue: innerRect.y,\n\t\t\t\t\t\t\tmin: outerRect.yMin,\n\t\t\t\t\t\t\tmax: outerRect.yMax - innerRect.height\n\t\t\t\t\t\t)\n\t\t\t\t\t: //if innerRect is taller than outerRect, center their position\n\t\t\t\t\t\touterRect.y - ((innerRect.height - outerRect.height) / 2),\n\t\t\t\twidth: innerRect.width,\n\t\t\t\theight: innerRect.height\n\t\t\t);\n\t\t}\n\n\t\t//truncates innerRect dimensions to fit outerRect. may return the same rect if already small enough.\n\t\t//only alters size, returned rect's position will be the same as innerRect's\n\t\tpublic static Rect TrimRectSizeToRect (Rect innerRect, Rect outerRect)\n\t\t{\n\t\t\tif (innerRect.width <= outerRect.width && innerRect.height <= outerRect.height)\n\t\t\t{ return innerRect; }\n\t\t\treturn new Rect (\n\t\t\t\tx: innerRect.x,\n\t\t\t\ty: innerRect.y,\n\t\t\t\twidth: Mathf.Clamp(innerRect.width, 0, outerRect.width),\n\t\t\t\theight: Mathf.Clamp(innerRect.height, 0, outerRect.height)\n\t\t\t);\n\t\t}\n\n\t\t//fits a rect within another, trimming its size \n\t\tpublic static Rect TrimAndClampRectWithinRect (Rect innerRect, Rect outerRect)\n\t\t{\n\t\t\treturn ClampRectPositionWithinRect(\n\t\t\t\tinnerRect: TrimRectSizeToRect(innerRect, outerRect),\n\t\t\t\touterRect: outerRect\n\t\t\t);\n\t\t}\n\t//ENDOF Rect clamping and trimming methods\n\n\t//Rect interpolation and movement\n\t\t//moves a rect\n\t\tpublic static Rect MoveRect (this Rect rect, Vector3 movement)\n\t\t{ return MoveRect(rect: rect, movement: (Vector2) movement); }\n\t\tpublic static Rect MoveRect (this Rect rect, Vector2 movement)\n\t\t{\n\t\t\trect.position = rect.position + movement;\n\t\t\treturn rect;\n\t\t}\n\t\t\n\t\t//interpolates position and size\n\t\tpublic static Rect LerpRect (Rect from, Rect to, float positionLerpRate, float sizeLerpRate)\n\t\t{\n\t\t\treturn RectFromCenterAndSize(\n\t\t\t\tposition: Vector2.Lerp(from.center, to.center, positionLerpRate),\n\t\t\t\twidth: Mathf.Lerp(from.width, to.width, sizeLerpRate),\n\t\t\t\theight: Mathf.Lerp(from.height, to.height, sizeLerpRate)\n\t\t\t);\n\t\t}\n\t//ENDOF Rect interpolation\n\n\t}\n}"
},
{
"alpha_fraction": 0.798353910446167,
"alphanum_fraction": 0.798353910446167,
"avg_line_length": 27.076923370361328,
"blob_id": "e78464a2c6cf864d1d1f5e31165e74bcd4d5f0f8",
"content_id": "4beafc0c199f2ea96d527d1457be0c3b9ae4d87d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 729,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 26,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/KickerOnConditionForceBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ASSistant.ASSRandom; //RandomRangeFloat\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class KickerOnConditionForceBase : KickerOnConditionHeldOnFixedUpdateBase\n\t{\n\t//serialized properties \n\t\t//Minimum and maximum force range for every kicker\n\t\tpublic RandomRangeFloat randomForce;\n\n\t\tpublic Rigidbody targetRigidbody; //will automatically get this gameobject's rigidbody on awake if none given\n\t//ENDOF serialized properties \n\n\t//private fields and properties\n\t//ENDOF private fields and properties\n\n\t//MonoBehaviour Lifecycle\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tif (!targetRigidbody) targetRigidbody = gameObject.GetComponent<Rigidbody>();\n\t\t}\n\t//ENDOF MonoBehaviour Lifecycle\n\t}\n}"
},
{
"alpha_fraction": 0.6539682745933533,
"alphanum_fraction": 0.6539682745933533,
"avg_line_length": 13.363636016845703,
"blob_id": "aa3235e41fa0d1ae34c8a251fec224c7f559cedf",
"content_id": "e0236bdc3b9c09bc9f2d30940f12a6a0b317c9b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 315,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 22,
"path": "/Assets/Scripts/ASSistant/ASSRandom/RandomRangeInt.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using Random = UnityEngine.Random;\n\nnamespace ASSistant.ASSRandom\n{\n\t[System.Serializable]\n\tpublic class RandomRangeInt\n\t{\n\t\tpublic int min;\n\t\tpublic int max;\n\n\t\tpublic RandomRangeInt (int _min, int _max)\n\t\t{\n\t\t\tmin = _min;\n\t\t\tmax = _max;\n\t\t}\n\n\t\tpublic int Generate ()\n\t\t{\n\t\t\treturn Random.Range(min, max);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7078843116760254,
"alphanum_fraction": 0.7124220132827759,
"avg_line_length": 21.615385055541992,
"blob_id": "25266692cbd2565f0ea9fe617589f8ee159c6a1a",
"content_id": "7972cdb133cdcb5b7e428959432a8b2ca304941b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1765,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 78,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/ViewportZoomer.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.CameraSystem\n{\n\t//[RequireComponent(typeof(IViewportController))]\n\tpublic class ViewportZoomer : MonoBehaviour\n\t{\n\t//constant definitions\n\t\tprivate const float minimumSize = 0.1f;\n\t//ENDOF constant definitions\n\n\t//serialized fields\t\t\n\t\t[SerializeField]\n\t\tprivate float maxSize = 1f;\n\t\t[SerializeField]\n\t\tprivate float minSize = 0.25f;\n\t\t[SerializeField]\n\t\tprivate bool maxSizeFromSceneValue = true;\n\t//ENDOF serialized fields\n\n\t//inherited property override\n\t//ENDOF inherited property override\n\n\t//private fields\n\t\tprivate float _size;\n\t\tprivate float size\n\t\t{\n\t\t\tget { return _size; }\n\t\t\tset { _size = Mathf.Clamp(value: value, min: minSize, max: maxSize); }\n\t\t}\n\n\t\tprivate IViewportController viewport; //cached reference to the camera this controller handles\n\t//ENDOF private fields\n\n\t//private properties\n\t\tprivate float zoomDelta { get { return ControllerCache.inputController.zoomDelta; }}\n\t\tprivate Vector2 inputPosition { get { return ControllerCache.toolManager.activeTool.position; }}\n\t//ENDOF private properties\n\n\t//MonoBehaviour lifecycle\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tviewport = GetComponent<IViewportController>();\n\t\t}\n\n\t\tpublic void Start ()\n\t\t{\n\t\t\tDebug.Log(\"start\");\n\t\t\tif (maxSizeFromSceneValue) { maxSize = viewport.size; }\n\t\t\tsize = viewport.size;\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tProcessInput();\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//private methods\n\t\tprivate void ProcessInput ()\n\t\t{\n\t\t\tif (zoomDelta != 0)\n\t\t\t{\n\t\t\t\tsize = size + (zoomDelta * size);\n\t\t\t\tif (size <= minimumSize) size = minimumSize;\n\n\n\t\t\t\tviewport.ChangeViewport(\n\t\t\t\t\tposition: inputPosition,\n\t\t\t\t\tsize: size\n\t\t\t\t);\n\t\t\t}\n\t\t}\n\t//ENDOF private methods\n\t}\t\n}"
},
{
"alpha_fraction": 0.6930789947509766,
"alphanum_fraction": 0.7016385197639465,
"avg_line_length": 29.296297073364258,
"blob_id": "2c02c1e10e400109f24c54199e7024c1ef982244",
"content_id": "992fb0db4f4fb04560788239feb48f421ce0c7e4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4089,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 135,
"path": "/PySpriteRigger/PySpriteRigger.py",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "import sys\t#sys.argv\nimport os\t#file management: os.path.isfile(), os.rename(), os.remove()\nimport yaml\t#yaml\n\nDEFAULT_BONE_NAME = \"bone_\"\n\nverboseMode = True\n\ndef main ():\n\t#separate the parameters from the executed command\n\tparameterList = sys.argv[1:]\n\t#print greet, offering help if no parameters received\n\tprintGreeting (len(sys.argv) <= 1)\n\t\n\t#execute full operation of each received path\n\tsuccessCount = 0\n\terrorCount = 0\n\tfor parameter in parameterList:\n\t\tif (processFile (parameter)):\n\t\t\tsuccessCount += 1\n\t\telse:\n\t\t\terrorCount += 1\n\t#Print a results report\n\tprintFinalReport (successCount, errorCount)\n\n#Prints on-launch message\ndef printGreeting (printHelp):\n\tprint (\"================================\")\n\tprint (\"* PySpriteRigger *\")\n\tif (printHelp):\n\t\tprint (\"* Processes UnityEngine's sprite meta files, converting mesh data into bones\")\n\t\tprint (\"* Usage: Pass the paths of target files as arguments\")\n\t\tprint (\"* $> py PySpriteRigger.py filePath1/fileName1 filePath2/fileName2 ...\")\n\n#prints exit message and waits for any input to continue\ndef printFinalReport (successCount, errorCount):\n\tprint (\"* Successfully processed \" + str(successCount) + \" files\")\n\tif (errorCount > 0):\n\t\tprint (\"!\\n!!!! Number of files failed: \" + str(errorCount) + \"\\n!\")\n\tprint (\"Press ENTER to continue...\")\n\tinput ()\n\n#processes an individual file\ndef processFile (filePath):\n\ttry:\n\t\tprint (\"* Processing \\\"\" + filePath + \"\\\"\");\n\t\t#check if file exists\n\t\tif (not os.path.isfile(filePath)):\n\t\t\tprint (\"! 
Nonexistent file\")\n\t\t\traise FileNotFoundError\n\n\t\t#move original file to backup location\n\t\toriginalFilePath = moveToBackup (filePath)\n\n\t\t#read the file\n\t\twith open(originalFilePath) as file:\n\t\t\tfileContents = yaml.full_load(file)\n\t\t\tprint (\" File read\")\n\t\t\t#print (fileContents['TextureImporter']['spriteSheet']['vertices'])\n\t\t\t\n\t\t\t#process the contents\n\t\t\tfileContents['TextureImporter']['spriteSheet']['bones'] = boneListFromVertexList(fileContents['TextureImporter']['spriteSheet']['vertices'])\n\t\t\tfileContents['TextureImporter']['spriteSheet']['weights'] = weightListFromVertexList(fileContents['TextureImporter']['spriteSheet']['vertices'])\n\n\t\t#re-write the yaml to the original file path\n\t\t\twith open(filePath, 'w') as file:\n\t\t\t\tyaml.dump(fileContents, file)\n\t\t\t\tprint (\" File written\")\n\n\t#return true if success, false if failed\n\texcept:\n\t\tprint (\"! Failure\")\n\t\tprint (\" \", sys.exc_info()[0])\n\t\tprint (\" \", sys.exc_info()[1])\n\t\treturn False\n\telse:\n\t\tprint (\"> Success\")\n\t\treturn True\n\n#Process a vertex into a list of bones\n#Creates a bone for each vertex at the vertex position\ndef boneListFromVertexList (vertexList):\n\tboneList = []\n\tfor index, vertex in enumerate(vertexList):\n\t#for index in range(len(vertexList)):\n\t\tboneList.append(boneFromVertex(vertexList[index], index))\n\treturn boneList\n\n#Creates information for a single bone\ndef boneFromVertex (vertex, index):\n\tglobal DEFAULT_BONE_NAME\n\treturn {\n\t\t'name'\t\t: DEFAULT_BONE_NAME + str(index),\n\t\t'position'\t: {'x': vertex['x'], 'y': vertex['y'], 'z': 0},\n\t\t'rotation'\t: {'x': 0, 'y': 0, 'z': 0, 'w': 1},\n\t\t'length'\t: 0,\n\t\t'parentId'\t: -1\n\t}\n\n#Creates a list of blendweights for every vertex\ndef weightListFromVertexList (vertexList):\n\tweightList = []\n\tfor index, vertex in enumerate(vertexList):\n\t\tweightList.append(weightForSingleBone(index))\n\treturn weightList\n\n#Creates a 
blend weight linking a vertex and a single bone\ndef weightForSingleBone (boneIndex):\n\treturn {\n\t\t'weight[0]': 1,\n\t\t'weight[1]': 0,\n\t\t'weight[2]': 0,\n\t\t'weight[3]': 0,\n\t\t'boneIndex[0]': boneIndex,\n\t\t'boneIndex[1]': 0,\n\t\t'boneIndex[2]': 0,\n\t\t'boneIndex[3]': 0\n\t}\n\n#moves target file to its final backup location\ndef moveToBackup (filePath):\n\tbackupPath = pathToBackupPath(filePath)\n\t#remove previous backup if existent\n\tif (os.path.isfile(backupPath)):\n\t\tos.remove(backupPath)\n\t#rename file and return new name\n\tos.rename(filePath, backupPath)\n\treturn backupPath\n\n#transforms a filepath into its backup's corresponding name path\ndef pathToBackupPath (filePath):\n\treturn filePath + \".backup\"\n\n#initiate execution of main function\nmain()"
},
{
"alpha_fraction": 0.7401574850082397,
"alphanum_fraction": 0.7427821755409241,
"avg_line_length": 20.22222137451172,
"blob_id": "fa3aed3afad00e7afb69f673335f27f42d896ca4",
"content_id": "9fc799b2879a55d5451e9e78736cff2fd6555d08",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 383,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 18,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/BaseInspectors/ArmableInspectorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEngine.U2D.Animation;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic class ArmableInspectorBase : MonoBehaviour, IArmableInspector\n\t{\n\t//IArmableInspector implementation\n\t\t[SerializeField]\n\t\tprivate bool _armed = false;\n\t\tpublic bool armed \n\t\t{\n\t\t\tget { return _armed; }\n\t\t\tset { _armed = value; }\n\t\t}\n\t//ENDOF IArmableInspector implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7515560388565063,
"alphanum_fraction": 0.7520747184753418,
"avg_line_length": 31.149999618530273,
"blob_id": "24da2a37b432157bca50dd49cb5a7d9280d58906",
"content_id": "69bfe2622a1782fef95bb681dfec09dfab0f8b64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1930,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 60,
"path": "/Assets/Scripts/ASSPhysics/InteractableSystem/Interactor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing EInputState = ASSPhysics.InputSystem.EInputState;\nusing ActionSettings = ASSPhysics.SettingSystem.ActionSettings; //interactorCheckSettings\n\nnamespace ASSPhysics.InteractableSystem\n{\n\tpublic class Interactor : MonoBehaviour, IInteractor\n\t{\n\t//IInteractor implementation\n\t\t//process input. returns true if an interactable is in range\n\t\tpublic bool Input (EInputState state) \n\t\t{\n\t\t\tIInteractable interactable = FindInteractable();\n\t\t\tinteractable?.Interact(state);\n\t\t\treturn (interactable != null);\n\t\t}\n\n\t\t//find if hovering over a valid interactable\n\t\tpublic bool IsHovering ()\n\t\t{\n\t\t\treturn (FindInteractable() != null);\n\t\t}\n\t//ENDOF IInteractor implementation\n\n\t//private methods\n\t\t//finds one interactable around this interactor's tool transform\n\t\tprivate IInteractable FindInteractable ()\n\t\t{\n\t\t\tCollider[] colliderList = ActionSettings.interactorCheckSettings.GetCollidersInRange(transform);\n\t\t\treturn (colliderList.Length > 0)\n\t\t\t\t? 
FindPrioritaryInteractable(colliderList)\n\t\t\t\t: null;\n\t\t}\n\n\t\t//get the IInteractable component with the highest priority among the list\n\t\tprivate IInteractable FindPrioritaryInteractable (Component[] componentList)\n\t\t{\n\t\t\tDebug.Log(\"componentList length: \" + componentList.Length);\n\t\t\tIInteractable prioritaryInteractable = null;\n\t\t\tforeach (Component component in componentList)\n\t\t\t{\n\t\t\t\tIInteractable currentInteractable = component.GetComponent<IInteractable>();\n\n\t\t\t\t//check only if an interactable was found in the transform\n\t\t\t\tif (currentInteractable != null)\n\t\t\t\t{\n\t\t\t\t\t//if there is not a candidate already or its priority is higher than target's\n\t\t\t\t\tif (prioritaryInteractable == null\n\t\t\t\t\t\t|| currentInteractable.priority > prioritaryInteractable.priority)\n\t\t\t\t\t{\n\t\t\t\t\t\tDebug.Log(prioritaryInteractable?.priority);\n\t\t\t\t\t\tprioritaryInteractable = currentInteractable;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn prioritaryInteractable;\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.7334273457527161,
"alphanum_fraction": 0.7355430126190186,
"avg_line_length": 23.894737243652344,
"blob_id": "2dc9662326346d5527d29c706ed49ee5a62543ca",
"content_id": "398e4664ec65565befd9aa1c89dad69142fadb87",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1418,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 57,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Base/KickerOnConditionHeldBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing RandomRangeFloat = ASSistant.ASSRandom.RandomRangeFloat; //RandomRangeFloat\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class KickerOnConditionHeldBase : MonoBehaviour, IKicker\n\t{\n\t//serialized properties\n\t\t[SerializeField]\n\t\tpublic RandomRangeFloat randomDelay = new RandomRangeFloat(1, 1);\n\t//ENDOF serialized properties \n\n\t//private fields and properties\n\t\tprivate float currentDelay;\n\t\tprivate bool currentCheck;\n\t \tprivate bool previousCheck = false;\n\t//ENDOF private fields and properties\n\n\t//IKicker definition\n\t\t//executes a momentary effect\n\t\tpublic abstract void Kick ();\n\t//ENDOF IKicker definition\n\n\t//abstract method definition\n\t\tprotected abstract bool CheckCondition ();\n\t//ENDOF abstract method definition\n\n\t//private methods\n\t \t//on update check condition state change, timer update\n\t\tprotected void UpdateCondition (float timeDelta)\n\t\t{\n\t\t\tcurrentCheck = CheckCondition();\n\n\t\t\t//on condition state change to true, re-initialize timer\n\t\t\tif (currentCheck && !previousCheck)\n\t\t\t{\n\t\t\t\tcurrentDelay = randomDelay.Generate();\n\t\t\t}\n\n\t\t\t//if condition is true, decrement timer\n\t\t\tif (currentCheck)\n\t\t\t{\n\t\t\t\tcurrentDelay -= timeDelta;\n\n\t\t\t\t//if timer reaches zero, kick and reset timer\n\t\t\t\tif (currentDelay <= 0)\n\t\t\t\t{\n\t\t\t\t\tKick();\n\t\t\t\t\tcurrentDelay = randomDelay.Generate();\n\t\t\t\t}\n\t\t\t}\n\t\t\tpreviousCheck = currentCheck;\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.7531914710998535,
"alphanum_fraction": 0.7565957307815552,
"avg_line_length": 27.901639938354492,
"blob_id": "910a2f71eceafe63d16049f6b1747493dd3c3117",
"content_id": "4b524c749d95f028de21b9a17fd1227a33bcf9ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3527,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 122,
"path": "/Assets/Scripts/ASSPhysics/SceneSystem/SceneController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing UnityEngine;\nusing UnityEngine.SceneManagement;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.SceneSystem\n{\n\tpublic class SceneController :\n\t\tASSPhysics.ControllerSystem.MonoBehaviourControllerBase <ISceneController>,\n\t\tISceneController\n\t{\n\t//Constants and enum definitions\n\t\t//private const float sceneLoadMinimum = 0.9f;\n\t\tprivate static class SceneNumbers\n\t\t{\n\t\t\tpublic static readonly int LAUNCHER = 0;\t//unused, included for consistency\n\t\t\tpublic static readonly int CURTAINS = 1;\n\t\t\tpublic static readonly int MAINMENU = 2;\n\t\t\tpublic static readonly int QUITTER = 3;\n\t\t}\n\t//ENDOF Constants and enum definitions\n\n\n\t//static properties and methods\n\t\t//initialize method manually launchs the curtains layer scene through unityengine's SceneManager\n\t\tpublic static void Initialize ()\n\t\t{\n\t\t\tSceneManager.LoadScene(SceneNumbers.CURTAINS, LoadSceneMode.Additive);\n\t\t}\n\t//ENDOF static properties and methods\n\n\t//private fields and properties\n\t\tprivate bool busy = false;\t//kept true while performing a scene change\n\t//ENDOF private fields and properties\n\n\t//MonoBehaviour lifecycle implementation\n\t\t//on first instantiation, load the menu under the curtain\n\t\tpublic void Start ()\n\t\t{\n\t\t\tChangeScene(SceneNumbers.MAINMENU, 1.0f);\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\t//ISceneController implementation\n\t\t//input is enabled if curtains are open\n\t\tpublic bool inputEnabled \n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\treturn !ControllerCache.curtainController.isCompletelyClosed;\n\t\t\t}\n\t\t}\n\n\t\tpublic void ChangeScene (int targetScene, float minimumWait = 0.0f)\n\t\t{\n\t\t\tif (busy) { return; }\n\t\t\tStartCoroutine(ChangeSceneAsync(targetScene, minimumWait));\n\t\t}\n\t//ENDOF ISceneController implementation\n\n\t//private methods\n\t\tprivate IEnumerator ChangeSceneAsync (int 
targetScene, float minimumWait = 0.0f)\n\t\t{\n\t\t\t//lock on a busy state to avoid stacked coroutines\n\t\t\tbusy = true;\n\n\t\t\t//close the curtains\n\t\t\tControllerCache.curtainController.open = false;\n\n\t\t\t//start song change\n\t\t\tControllerCache.musicController.PlaySceneSong(targetScene);\n\n\t\t\t//wait until curtains are closed\n\t\t\twhile (!ControllerCache.curtainController.isCompletelyClosed)\n\t\t\t{ yield return null; }\n\n\t\t\t//unload previous scene before deploying next\n\t\t\tAsyncOperation unloadingScene =\tUnloadActiveScene();\n\t\t\tunloadingScene.allowSceneActivation = true;\n\t\t\tif (unloadingScene != null)\n\t\t\t{\n\t\t\t\twhile (!unloadingScene.isDone) { yield return null; }\n\t\t\t\tResources.UnloadUnusedAssets();\n\t\t\t}\n\n\t\t\t//start loading next scene\n\t\t\tAsyncOperation loadingScene = SceneManager.LoadSceneAsync(targetScene, LoadSceneMode.Additive);\n\n\t\t\tyield return new WaitForSeconds(minimumWait);\n\t\t\twhile (!loadingScene.isDone) { yield return null; }\n\n\t\t\t//once next scene is ready set it as active\n\t\t\tSetActiveScene(targetScene);\n\n\t\t\t//finally open the curtains and wait until they're done\n\t\t\tControllerCache.curtainController.open = true;\n\n\t\t\twhile (ControllerCache.curtainController.isCompletelyClosed)\n\t\t\t{ yield return null; }\n\n\t\t\tbusy = false;\n\t\t}\n\n\t\tprivate AsyncOperation UnloadActiveScene ()\n\t\t{\n\t\t\tif (SceneManager.GetActiveScene().buildIndex == SceneNumbers.CURTAINS)\n\t\t\t{\n\t\t\t\tDebug.LogWarning(\"Cannot unload curtain scene - ignoring request\");\n\t\t\t\treturn null;\n\t\t\t}\n\t\t\treturn SceneManager.UnloadSceneAsync(SceneManager.GetActiveScene());\n\t\t}\n\n\t\tprivate void SetActiveScene (int targetScene)\n\t\t{\n\t\t\tSceneManager.SetActiveScene(SceneManager.GetSceneByBuildIndex(targetScene));\n\t\t}\n\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.6233183741569519,
"alphanum_fraction": 0.6681614518165588,
"avg_line_length": 16.230770111083984,
"blob_id": "9cb3c4622da75c2954585299251c61d6f4964bcb",
"content_id": "f9147fd624eba14edb7f07367619383fd2f228f8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 223,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 13,
"path": "/Assets/Scripts/ASSistant/ASSRandom/RandomSign.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using Random = UnityEngine.Random;\n\nnamespace ASSistant.ASSRandom\n{\n\tpublic static class RandomSign\n\t{\n\t\t//returns 1 or -1 at 50/50 chance\n\t\tpublic static int Generate ()\n\t\t{\n\t\t\treturn (Random.Range(0, 2) * 2) - 1;\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7107332348823547,
"alphanum_fraction": 0.7109457850456238,
"avg_line_length": 33.09420394897461,
"blob_id": "38d72c55447487d80861b189c1b1b684d1734ff4",
"content_id": "39547c12ebd3745ce7868fc0b34e42625830d6ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4705,
"license_type": "no_license",
"max_line_length": 135,
"num_lines": 138,
"path": "/Assets/Scripts/ASSistant/ComponentConfiguration/Editor/ComponentConfigurerGeneric.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System;\t//Type.GetProperties\nusing System.Reflection; //BindingFlags\nusing System.Linq; //ienumerable.any\n\nusing UnityEngine;\n//using Object = UnityEngine.Object;\n\nnamespace ASSistant.ComponentConfiguration\n{\n\tpublic static class ComponentConfigurerGeneric\n\t{\n\t//constants\n\t\t//default binding flags\n\t\tprivate static readonly BindingFlags propertyBindingFlags =\n\t\t\tBindingFlags.Instance |\n\t\t\tBindingFlags.Public |\n\t\t\tBindingFlags.SetProperty |\n\t\t\tBindingFlags.GetProperty |\n\t\t\tBindingFlags.DeclaredOnly;\n\n\t\tprivate static readonly BindingFlags fieldBindingFlags =\n\t\t\tBindingFlags.Instance |\n\t\t\tBindingFlags.Public |\n\t\t\tBindingFlags.NonPublic |\n\t\t\tBindingFlags.DeclaredOnly;\n\n\t\tprivate static readonly BindingFlags staticMethodBindingFlags =\n\t\t\tBindingFlags.NonPublic |\n\t\t\tBindingFlags.Static;\n\t//ENDOF constants\n\n\t//public static methods\n\t\t//applies right-hand properties to left-hand objects. returns reference to altered object\n\t\tpublic static T EMApplySettings <T> (this T _this, T sample) where T: Component\n\t\t{\n\t\t\t//*[DEBUG]*/ Debug.Log(\"EMApplySettings<\" + typeof(T) + \">(\" + _this + \", \" + sample + \")\");\n\t\t\tApplySettingsRecursive<T>(_this, sample);\n\t\t\treturn _this;\n\t\t}\n\t//ENDOF public static methods\n\n\t//private static methods\n\t\t//applies the properties of a single inheritance level and repeats until a class that directly inherits from component\n\t\tprivate static void ApplySettingsRecursive <T> (T _this, T sample) where T: Component\n\t\t{\n\t\t\tType type = typeof(T);\n\n\t\t\tPropertyInfo[] properties = type.GetProperties(propertyBindingFlags);\n\t\t\tforeach (PropertyInfo property in properties)\n\t\t\t{\n\t\t\t\tApplyProperty(property, _this, sample);\n\t\t\t}\n\n\t\t\t/*\n\t\t\tFieldInfo[] fields = type.GetFields(fieldBindingFlags);\n\t\t\tforeach (FieldInfo field in fields)\n\t\t\t{\n\t\t\t\tApplyField(field, _this, 
sample);\n\t\t\t}\n\t\t\t//*/\n\n\t\t\t//if current object is NOT a component and base class is not component, reiterate with inherited class\n\t\t\tif (type != typeof(Component) && type.BaseType != typeof(Component))\n\t\t\t{\n\t\t\t\tPropagateApplySettingsRecursive<T>(_this, sample);\n\t\t\t}\n\t\t}\n\n\t\tprivate static void PropagateApplySettingsRecursive <T> (T _this, T sample) where T: Component\n\t\t{\n\t\t\ttypeof(ComponentConfigurerGeneric)\n\t\t\t\t.GetMethod(\"ApplySettingsRecursive\", staticMethodBindingFlags)\n\t\t\t\t.MakeGenericMethod(new Type[] {typeof(T).BaseType})\n\t\t\t\t.Invoke(null, new System.Object[] {_this, sample});\n\t\t}\n\n\t\t//copy the value of a field object from sample to target object\n\t\tprivate static void ApplyField (FieldInfo field, System.Object target, System.Object sample)\n\t\t{\n\t\t\t//*[DEBUG]*/ Debug.Log(\"----\\nfield \" + field);\n\t\t\t//if member is obsolete ignore it\n\t\t\tif (field.CustomAttributes.Any(attribute => attribute.AttributeType == typeof(ObsoleteAttribute)))\n\t\t\t{ return; }\n\n\t\t\tfield.SetValue(target, field.GetValue(sample));\n\t\t\t//*[DEBUG]*/ Debug.Log(\" modified value: \" + field.GetValue(target));\n\t\t}\n\n\t\t//copies the value of one specific property from sample to target object\n\t\tprivate static void ApplyProperty (PropertyInfo property, System.Object target, System.Object sample)\n\t\t{\n\t\t\t//*[DEBUG]*/ Debug.Log(\"----\\nproperty \" + property);\n\n\t\t\t//if member is obsolete ignore it\n\t\t\tif (property.CustomAttributes.Any(attribute => attribute.AttributeType == typeof(ObsoleteAttribute)))\n\t\t\t{\n\t\t\t\t//*[DEBUG]*/ Debug.Log(\" Property is obsolete\");\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t//ignore read-only properties\n\t\t\tif (!property.CanWrite || !property.CanRead)\n\t\t\t{\n\t\t\t\t//*[DEBUG]*/ Debug.Log(\" Property is not read/write\");\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t//Debug.Log(\"attributes \" + property.Attributes);\n\t\t\t//foreach (var attribute 
in property.Attributes) { Debug.Log(\"> \" + attribute); }\n\t\t\t//Debug.Log(\"custom attributes: \" + property.CustomAttributes);\n\t\t\t//foreach (var customAttribute in property.CustomAttributes) { Debug.Log(\"> \" + customAttribute); }\n\t\t\tDebug.Log(\" original value: \" + property.GetValue(target));\n\t\t\tDebug.Log(\" sample value: \" + property.GetValue(sample));\n\t\t\t//*/\n\t\n\t\t\tif (property.GetIndexParameters().Length == 0)\n\t\t\t{\n\t\t\t\tApplyPropertyNonIndexed(property, target, sample);\n\t\t\t}\n\t\t\telse \n\t\t\t{\n\t\t\t\tApplyPropertyIndexed(property, target, sample);\t\n\t\t\t}\n\n\t\t\t//*[DEBUG]*/ Debug.Log(\" modified value: \" + property.GetValue(target));\n\t\t}\n\t\tprivate static void ApplyPropertyNonIndexed (PropertyInfo property, System.Object target, System.Object sample)\n\t\t{\n\t\t\tproperty.SetValue(target, property.GetValue(sample));\n\t\t}\n\t\tprivate static void ApplyPropertyIndexed (PropertyInfo property, System.Object target, System.Object sample)\n\t\t{\n\t\t\tDebug.LogWarning(\"!! ComponentConfigurerGeneric.ApplyPropertyIndexed() unimplemented - property \\\"\" + property.Name + \"\\\" ignored\");\n\t\t}\n\t//ENDOF private static methods\n\t}\n}\n"
},
{
"alpha_fraction": 0.7799999713897705,
"alphanum_fraction": 0.7799999713897705,
"avg_line_length": 11.625,
"blob_id": "f3b49248fd780d3591c3c860f7fd96d32c98081b",
"content_id": "60c0d86cd5a5eda26bc0a0f6f7c32f0e76fb7f1a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 100,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 8,
"path": "/Assets/Scripts/Experiments/SpriteExperiments.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSperiments\n{\n\tpublic class SpriteExperiments : MonoBehaviour\n\t{\n\t}\n}"
},
{
"alpha_fraction": 0.6686930060386658,
"alphanum_fraction": 0.6686930060386658,
"avg_line_length": 14,
"blob_id": "83379d4b2544d143d37edf53770bebdda0abf7ad",
"content_id": "3da04c175f92913674285e8547f53e68e9a39a95",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 329,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 22,
"path": "/Assets/Scripts/ASSistant/ASSRandom/RandomRangeFloat.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using Random = UnityEngine.Random;\n\nnamespace ASSistant.ASSRandom\n{\n\t[System.Serializable]\n\tpublic class RandomRangeFloat\n\t{\n\t\tpublic float min;\n\t\tpublic float max;\n\n\t\tpublic RandomRangeFloat (float _min, float _max)\n\t\t{\n\t\t\tmin = _min;\n\t\t\tmax = _max;\n\t\t}\n\n\t\tpublic float Generate ()\n\t\t{\n\t\t\treturn Random.Range(min, max);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.770682156085968,
"alphanum_fraction": 0.7779390215873718,
"avg_line_length": 24.55555534362793,
"blob_id": "152475187377957f7f9e1715ee49f2c136fabaac",
"content_id": "082210806e5255cd210ad915ae6566d182cb5ac6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 689,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 27,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/OnRigidbodyAngularVelocity/KickerOnRigidbodyAngularVelocityBrake.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing Vector3Math = ASSistant.ASSMath.Vector3Math;\nusing RandomRangeFloat = ASSistant.ASSRandom.RandomRangeFloat;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic class KickerOnRigidbodyAngularVelocityBrake : KickerOnRigidbodyAngularVelocityBase\n\t{\n\t//serialized properties\n\t\t[SerializeField]\n\t\tprivate float brakingRatio = 0.5f;\n\t//ENDOF serialized properties \n\n\t//IKicker implementation\n\t\t//dampens current speed\n\t\tpublic override void Kick ()\n\t\t{\n\t\t\tDebug.Log(\"Braking\");\n\t\t\ttargetRigidbody.AddTorque(\n\t\t\t\t\ttorque: targetRigidbody.angularVelocity * -1 * brakingRatio,\n\t\t\t\t\tmode: ForceMode.VelocityChange\n\t\t\t\t);\n\t\t}\n\t//ENDOF IKicker implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7706890106201172,
"alphanum_fraction": 0.7706890106201172,
"avg_line_length": 26.168315887451172,
"blob_id": "40420248aa45b2df35f1d824c8faee0f3d989a16",
"content_id": "6a01113f3299b3d5ca7635220172de7b31387648",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2745,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 101,
"path": "/Assets/Scripts/ASSPhysics/ControllerSystem/ControllerCache.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ISceneController = ASSPhysics.SceneSystem.ISceneController;\nusing ICurtainController = ASSPhysics.SceneSystem.ICurtainController;\nusing IViewportController = ASSPhysics.CameraSystem.IViewportController;\nusing IInputController = ASSPhysics.InputSystem.IInputController;\nusing IToolManager = ASSPhysics.HandSystem.Managers.IToolManager;\nusing IMusicController = ASSPhysics.AudioSystem.Music.IMusicController;\n\nnamespace ASSPhysics.ControllerSystem\n{\n\tpublic static class ControllerCache\n\t{\n\t//private methods\n\t\t//will return false if controller needs to be refreshed\n\t\tprivate static bool ControllerIsValid (IController controller)\n\t\t{\n\t\t\treturn (controller != null && controller.isValid);\n\t\t}\n\n\t\t//if controller is not up to date return a fresh reference\n\t\tprivate static TController ValidateController <TController> (TController controller)\n\t\t\twhere TController : IController\n\t\t{\n\t\t\tif (ControllerIsValid(controller))\n\t\t\t{ return controller; }\n\t\t\treturn ControllerProvider.GetController<TController>();\n\t\t}\n\t//ENDOF private methods\n\n\t//scene controller\n\t\tprivate static ISceneController _sceneController;\n\t\tpublic static ISceneController sceneController\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\t_sceneController = ValidateController<ISceneController>(_sceneController);\n\t\t\t\treturn _sceneController;\n\t\t\t}\n\t\t}\n\t//ENDOF scene controller\n\n\t//curtain controller\n\t\tprivate static ICurtainController _curtainController;\n\t\tpublic static ICurtainController curtainController\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\t_curtainController = ValidateController<ICurtainController>(_curtainController);\n\t\t\t\treturn _curtainController;\n\t\t\t}\n\t\t}\n\t//ENDOF curtain controller\n\n\t//viewport controller\n\t\tprivate static IViewportController _viewportController;\n\t\tpublic static IViewportController viewportController\n\t\t{\n\t\t\tget\t\n\t\t\t{\n\t\t\t\t_viewportController = 
ValidateController<IViewportController>(_viewportController);\n\t\t\t\treturn _viewportController;\n\t\t\t}\n\t\t}\n\t//ENDOF viewport controller\n\n\t//input controller\n\t\tprivate static IInputController _inputController;\n\t\tpublic static IInputController inputController\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\t_inputController = ValidateController<IInputController>(_inputController);\n\t\t\t\treturn _inputController;\n\t\t\t}\n\t\t}\n\t//ENDOF input controller\n\n\t//toolManager\n\t\tprivate static IToolManager _toolManager;\n\t\tpublic static IToolManager toolManager\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\t_toolManager = ValidateController<IToolManager>(_toolManager);\n\t\t\t\treturn _toolManager;\n\t\t\t}\n\t\t}\n\t//ENDOF input controller\n\n\t//music controller\n\t\tprivate static IMusicController _musicController;\n\t\tpublic static IMusicController musicController\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\t_musicController = ValidateController<IMusicController>(_musicController);\n\t\t\t\treturn _musicController;\n\t\t\t}\n\t\t}\n\t//ENDOF music controller\n\t}\n}"
},
{
"alpha_fraction": 0.7833065986633301,
"alphanum_fraction": 0.7833065986633301,
"avg_line_length": 35.70588302612305,
"blob_id": "f8dc9c570ed65baf4e1df41a532b9e7b45b9e74f",
"content_id": "06e062b5fd9bafc8e4d2aa1d48b3eeaa64f88f62",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 625,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 17,
"path": "/Assets/Scripts/ASSPhysics/AudioSystem/Music/IMusicController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.AudioSystem.Music\n{\n\tpublic interface IMusicController : \n\t\tASSPhysics.ControllerSystem.IController\n\t{\n\t\t//starts playback of level track if not already playing\n\t\tvoid PlaySceneSong(int sceneIndex);\n\t\t//starts playback of desired track.\n\t\t//If forceRestart is true, attempting to play the same clip will restart playback\n\t\t//if fadeWithCurtain is true, song change will happen with a volume fade in-out synched with scene transition\n\t\tvoid PlaySong(AudioPlaybackProperties properties, bool forceRestart = false, bool fadeWithCurtain = false);\n\t\t//stops playback\n\t\tvoid Stop();\n\t}\n}"
},
{
"alpha_fraction": 0.72365802526474,
"alphanum_fraction": 0.72365802526474,
"avg_line_length": 26.216217041015625,
"blob_id": "ec009872171991b7c55b2398aeeb9c676cae2759",
"content_id": "a2a43ed897470643d226b93d78ce605413a62b6c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1006,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 37,
"path": "/Assets/Scripts/ASSPhysics/Constants/AnimationNames.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.Constants\n{\n\tpublic static class AnimationNames\n\t{\n\t\tpublic static class Tool\n\t\t{\n\t\t\tpublic const string automated = \"Automated\";\n\t\t\tpublic const string focused = \"Focused\";\n\t\t\tpublic const string horizontalFlip = \"HorizontalFlip\";\n\n\t\t\tpublic const string stateFlat = \"StateFlat\";\n\t\t\tpublic const string stateGrab = \"StateGrab\";\n\t\t\tpublic const string stateSlap = \"StateSlap\";\n\t\t\tpublic const string stateClickDown = \"StateClickDown\";\n\t\t\tpublic const string stateClickUp = \"StateClickUp\";\n\t\t}\n\n\t\tpublic static class Interactable\n\t\t{\n\t\t\tpublic const string highlighted = \"Highlighted\"; //Bool\n\t\t\tpublic const string pressed = \"Pressed\"; //Bool\n\t\t}\n\n\t\tpublic static class Curtains\n\t\t{\n\t\t\tpublic const string open = \"CurtainsOpen\";\n\t\t\tpublic const string spotlightFocused = \"SpotlightFocused\";\n\t\t\tpublic const string drumrollFinalClash = \"FinalClash\";\n\t\t\tpublic const string musicPlayEnabled = \"Play\";\n\t\t}\n\n\t\tpublic static class Dialog\n\t\t{\n\t\t\tpublic const string close = \"Close\";\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7615152597427368,
"alphanum_fraction": 0.7709857821464539,
"avg_line_length": 29.18181800842285,
"blob_id": "5beb416282f0f0a3b2e3bea27a6d0983069e4b6e",
"content_id": "bdafb5add4a0874087d2564dc117a72d2149c2f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2323,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 77,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/ViewportControllerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing RectMath = ASSistant.ASSMath.RectMath;\n\nnamespace ASSPhysics.CameraSystem\n{\n\tpublic abstract class ViewportControllerBase :\n\t\tASSPhysics.ControllerSystem.MonoBehaviourControllerBase<IViewportController>,\n\t\tIViewportController\n\t{\n\t//abstract property declaration\n\t\tprotected abstract Rect viewportRect { get; }\n\t//ENDOF abstract property declaration\n\n\t//abstract method declaration\n\t\tprotected abstract void ChangeViewport (Vector2? position, float? size);\n\t//ENDOF abstract method declaration\n\n\t//IViewportController implementation\n\t\t//dimensions and position of the viewport\n\t\tRect IViewportController.rect\n\t\t{\n\t\t\tget { return viewportRect; }\n\t\t}\n\n\t\t//current height value of the viewport\n\t\tfloat IViewportController.size\n\t\t{\n\t\t\tget { return viewportRect.height; }\n\t\t}\n\n\t\t//current position\n\t\tVector2 IViewportController.position\n\t\t{\n\t\t\tget { return viewportRect.center; }\n\t\t}\n\n\t\t//moves and resizes camera viewport\n\t\t//if only one of the parameters is used the other aspect of the viewport is unchanged\n\t\tvoid IViewportController.ChangeViewport (\n\t\t\tVector2? position,\n\t\t\tfloat? 
size\n\t\t) {\n\t\t\tChangeViewport(position, size);\n\t\t}\n\n\n\t\t//transforms a screen point into a world position\n\t\t//if worldSpace is false, the returned Vector3 ignores camera transform position\n\t\tVector2 IViewportController.ScreenSpaceToWorldSpace (\n\t\t\tVector2 screenPosition,\n\t\t\tbool worldSpace\n\t\t) {\n\t\t\t//normalize position into a 0-1 range\n\t\t\tscreenPosition = Vector2.Scale(screenPosition, new Vector2 (1/Screen.width, 1/Screen.height));\n\n\t\t\t//multiply normalized position by camera size\n\t\t\tVector2 cameraSize = new Vector2 (viewportRect.width, viewportRect.height);\n\t\t\tscreenPosition = Vector2.Scale(screenPosition, cameraSize);\n\n\t\t\t//finally correct world position if necessary\n\t\t\tif (worldSpace)\n\t\t\t{\n\t\t\t\tscreenPosition = screenPosition + viewportRect.center - (cameraSize/2);\n\t\t\t}\n\n\t\t\treturn screenPosition;\n\t\t}\n\n\t\t//Prevents position from going outside of this camera's boundaries\n\t\tVector2 IViewportController.ClampPositionToViewport (Vector2 position)\n\t\t{ return RectMath.ClampVector2WithinRect(position, viewportRect); }\n\t\tVector3 IViewportController.ClampPositionToViewport (Vector3 position)\n\t\t{ return RectMath.ClampVector3WithinRect(position, viewportRect); }\n\t//ENDOF IViewportController implementation\n\t}\n}"
},
{
"alpha_fraction": 0.8053278923034668,
"alphanum_fraction": 0.806010901927948,
"avg_line_length": 36.56410217285156,
"blob_id": "0ad2813c2d99451933200df8d74c99f3e5bd5b3f",
"content_id": "f41d4201543bf97084397594337032529e1145eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1464,
"license_type": "no_license",
"max_line_length": 123,
"num_lines": 39,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Riggers/TailRiggerEditorSmoothFollowController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing BoneRigging = ASSpriteRigging.BoneUtility.BoneRigging;\n\nusing TInspector = ASSpriteRigging.Inspectors.TailRiggerInspectorSmoothFollowController;\n\nusing TElementController = ASSPhysics.TailSystem.TailElementJointSmoothFollow;\n\nnamespace ASSpriteRigging.Editors\n{\n//rigs a chain of bones with required components\n\t[CustomEditor(typeof(TInspector))]\n\tpublic class TailRiggerEditorSmoothFollowController\n\t\t: TailRiggerEditorJointChainBase<TInspector>\n\t{\n\t//overrides\n\t\t//rig an individual element of the transform chain\n\t\tprotected override void RigTailBone (Transform bone, TInspector inspector)\n\t\t{\n\t\t\tbase.RigTailBone(bone, inspector);\n\t\t\t//after rigging physics components create a chain element controller unless this is the last element in chain\n\t\t\tif (bone.childCount > 0)\n\t\t\t{\n\t\t\t\tBoneRigging.BoneSetupComponent<TElementController>(bone, inspector.defaultTailElementController);\n\t\t\t}\n\t\t}\n\n\t\t//rig a connection between two elements. also store the joint in the controller\n\t\tprotected override ConfigurableJoint RigTailBonePairConnection (Transform bone, Transform nextBone, TInspector inspector)\n\t\t{\n\t\t\tConfigurableJoint connectionJoint = base.RigTailBonePairConnection(bone, nextBone, inspector);\n\t\t\tTElementController elementController = bone.GetComponent<TElementController>();\n\t\t\tif (elementController != null) { elementController.joint = connectionJoint; }\n\t\t\treturn connectionJoint;\n\t\t}\n\t//ENDOF overrides\n\t}\n}"
},
{
"alpha_fraction": 0.8208333253860474,
"alphanum_fraction": 0.8208333253860474,
"avg_line_length": 23.100000381469727,
"blob_id": "fe67a0efc42c4333fa42a896d91866ce519e7c56",
"content_id": "ac2c8c57f60f49277f7f3777ff5d4f4b043b70db",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 240,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 10,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Weavers/WeaverInspectorManyToClosestChild.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic class WeaverInspectorManyToClosestChild : WeaverInspectorManyToXBase\n\t{\n\t\tpublic Transform[] targetRootTransformList = null;\n\t\tpublic bool includeRootTransform = false;\n\t}\n}"
},
{
"alpha_fraction": 0.7777777910232544,
"alphanum_fraction": 0.7777777910232544,
"avg_line_length": 14.166666984558105,
"blob_id": "b1870faf5c0270affc769ba0d62dc4db2d9d456c",
"content_id": "aec19f142d239a48c84270ad6e30303d8944602c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 92,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 6,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Weavers/IWeaverEditor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Editors\n{\n\tpublic interface IWeaverEditor : IEditorBase\n\t{\n\t}\n}"
},
{
"alpha_fraction": 0.697566032409668,
"alphanum_fraction": 0.6996374726295471,
"avg_line_length": 20.70786476135254,
"blob_id": "cf13c9fc56d451686efbea8b5777bae245e6376a",
"content_id": "fc28b2d38c6b3439b33bfa6da77a32c106629377",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1931,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 89,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/OnOffFlicker/Base/OnOffFlickerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IEnumerator = System.Collections.IEnumerator;\n\nusing RandomRangeFloat = ASSistant.ASSRandom.RandomRangeFloat;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class OnOffFlickerBase : KickerOnConditionHeldOnUpdateBase\n\t{\n\t//serialized fields\n\t\t//time a flick up stays active\n\t\t[SerializeField]\n\t\tprivate RandomRangeFloat randomUptimeRange = new RandomRangeFloat(1, 1);\n\n\t\t//defines what is considered as flicker's down state\n\t\t[SerializeField]\n\t\tprivate bool downState = false;\n\t//ENDOF serialized fields \n\n\t//private fields and properties\n\t\t//what's considered as the flicker's up state\n\t\tprivate bool upState { get { return !downState; }}\n\n\t\t//timer left\n\t\tprivate float uptimeLeft = 0f;\n\t\tprotected bool flickIsUp = false;\n\t//ENDOF private fields and properties\n\n\t//abstract property definition\n\t\tprotected abstract bool state { set; }\n\t//ENDOF abstract property definition\n\n\t//MonoBehaviour lifecycle\n\t\tpublic virtual void Awake ()\n\t\t{\n\t\t\tif (!flickIsUp) { state = downState; }\n\t\t}\n\t/*\n\t\tpublic override void Update ()\n\t\t{\n\t\t\tbase.Update();\n\n\t\t}\n\t*/\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//IKicker implementation\n\t\t//executes a momentary effect\n\t\tpublic override void Kick ()\n\t\t{ FlickUp(); }\n\t//ENDOF IKicker implementation\n\n\t//public method definition\n\t\tpublic void FlickUp ()\n\t\t{ FlickUp(randomUptimeRange.Generate()); }\n\t\tpublic void FlickUp (float uptime)\n\t\t{\n\t\t\tif(!flickIsUp)\n\t\t\t{ StartCoroutine(FlickUpCoroutine()); }\n\t\t\telse\n\t\t\t{ UpdateFlickTimer(); }\n\n\t\t\tIEnumerator FlickUpCoroutine ()\n\t\t\t{\n\t\t\t\tuptimeLeft = uptime;\n\t\t\t\tflickIsUp = true;\n\t\t\t\tstate = upState;\n\n\t\t\t\twhile (uptimeLeft > 0)\n\t\t\t\t{\n\t\t\t\t\tyield return null;\n\t\t\t\t\tuptimeLeft -= Time.deltaTime;\n\t\t\t\t}\n\n\t\t\t\tflickIsUp = false;\n\t\t\t\tstate = 
downState;\n\t\t\t}\n\n\t\t\tvoid UpdateFlickTimer ()\n\t\t\t{\n\t\t\t\tuptimeLeft = (uptimeLeft > uptime)\n\t\t\t\t\t\t\t\t? uptimeLeft\n\t\t\t\t\t\t\t\t: uptime;\n\t\t\t}\n\t\t}\n\t//ENDOF public method definition\n\t}\n}"
},
{
"alpha_fraction": 0.7325581312179565,
"alphanum_fraction": 0.7325581312179565,
"avg_line_length": 20.52777862548828,
"blob_id": "3cc61b12c6f12291737d62e769caf23260ab7970",
"content_id": "a59d08cbe2a820ef896122a9cceac73294eafbaf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 776,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 36,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogChangers/Base/DialogChangerOnConditionBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.DialogSystem.DialogChangers\n{\n\tpublic abstract class DialogChangerOnConditionBase : DialogChangerBase\n\t{\n\t//serialized fields\n\t\t[UnityEngine.SerializeField]\n\t\tprivate bool selfDisableOnTrigger = true;\n\t//ENDOF serialized fields\n\n\t//MonoBehaviour lifecycle\n\t\tpublic void Update ()\n\t\t{\n\t\t\tif (CheckCondition())\n\t\t\t{\n\t\t\t\tChangeDialog();\n\t\t\t\tTrySelfDisable();\n\t\t\t}\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//private methods\n\t\tprivate void TrySelfDisable ()\n\t\t{\n\t\t\tif (selfDisableOnTrigger)\n\t\t\t{\n\t\t\t\tthis.enabled = false;\n\t\t\t}\n\t\t}\n\t//ENDOF private methods\n\n\t//abstract method declaration\n\t\t//this method should return true when the condition for dialog change is fulfilled\n\t\tprotected abstract bool CheckCondition ();\n\t//ENDOF abstract method declaration\n\t}\n}"
},
{
"alpha_fraction": 0.8013244867324829,
"alphanum_fraction": 0.8013244867324829,
"avg_line_length": 20.64285659790039,
"blob_id": "5e1cd950c0e0b0621667efbb8344a6ef5aadf9d6",
"content_id": "86f4cb9e0838835e3111c8e5a946fbb8cf8a3755",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 302,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 14,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Base/KickerOnConditionHeldOnUpdateBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class KickerOnConditionHeldOnUpdateBase : KickerOnConditionHeldBase\n\t{\n\t//MonoBehaviour Lifecycle\n\t\tpublic virtual void Update()\n\t\t{\n\t\t\tUpdateCondition(Time.deltaTime);\n\t\t}\n\t//ENDOF MonoBehaviour Lifecycle\n\t}\n}"
},
{
"alpha_fraction": 0.742376446723938,
"alphanum_fraction": 0.7448300123214722,
"avg_line_length": 30.711111068725586,
"blob_id": "9923b77ee0a8f335f4dbc0d94738fe42df8e2666",
"content_id": "3b2c6759835f6bb8a095f882867c76e23bfc2054",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2855,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 90,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Weavers/WeaverEditorManyToClosestChild.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections.Generic;\n\nusing UnityEngine;\nusing UnityEditor;\n\nusing WeaverInspectorManyToClosestChild = ASSpriteRigging.Inspectors.WeaverInspectorManyToClosestChild;\n\nnamespace ASSpriteRigging.Editors\n{\n\t[CustomEditor(typeof(WeaverInspectorManyToClosestChild))]\n\tpublic class WeaverEditorManyToClosestChild : WeaverEditorBase<WeaverInspectorManyToClosestChild>\n\t{\n\t//overriden inherited methods\n\t\tpublic override void WeaveJoints()\n\t\t{\n\t\t//value validation\n\t\t\tif (targetInspector.targetRootTransformList == null || targetInspector.targetRootTransformList.Length == 0)\n\t\t\t{\n\t\t\t\tDebug.Log(\"WeaverEditorManyToClosestChild.WeaveJoints() requires a list of root transforms\");\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t//gather list of potential rigidbodies\n\t\t\tRigidbody[] candidateRigidbodyList = FetchRigidbodyCandidates(\n\t\t\t\t\t\t\t\t\t\t\t\t\trootTransforms: targetInspector.targetRootTransformList,\n\t\t\t\t\t\t\t\t\t\t\t\t\tincludeRootTransform: targetInspector.includeRootTransform\n\t\t\t\t\t\t\t\t\t\t\t\t);\n\t\t\t/*Rigidbody[] targetRigidbodyList = targetInspector.targetRootTransform\n\t\t\t\t\t\t\t\t\t\t\t.GetComponentsInChildren<Rigidbody>(includeInactive: true);\n\t\t\t*/\n\t\t//find closest rigidbody for each rigidbody in originRigidbodyList\n\t\t\tforeach (Rigidbody originRigidbody in targetInspector.originRigidbodyList)\n\t\t\t{\n\t\t\t\tConnectRigidbodies(\n\t\t\t\t\tfromRigidbody: originRigidbody,\n\t\t\t\t\ttoRigidbody: FindClosestRigidbody(\n\t\t\t\t\t\tcenter: originRigidbody.transform.position,\n\t\t\t\t\t\trigidbodyList: candidateRigidbodyList\n\t\t\t\t\t)\n\t\t\t\t);\n\t\t\t}\n\t\t}\n\t//ENDOF overriden inherited methods\n\n\t//private methods\n\t\t//fetchs all potential target rigidbodies\n\t\tprivate Rigidbody[] FetchRigidbodyCandidates (Transform[] rootTransforms, bool includeRootTransform = false)\n\t\t{\n\t\t\tList<Rigidbody> rigidbodyList = new List<Rigidbody>();\n\t\t\tforeach (Transform 
rootTransform in rootTransforms)\n\t\t\t{\n\t\t\t\trigidbodyList.AddRange(\n\t\t\t\t\trootTransform.GetComponentsInChildren<Rigidbody>(includeInactive: true)\n\t\t\t\t);\t\n\n\t\t\t\tif (!includeRootTransform)\n\t\t\t\t{\n\t\t\t\t\trigidbodyList.Remove(rootTransform.GetComponent<Rigidbody>());\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn rigidbodyList.ToArray();\n\t\t}\n\n\t\t//finds and returns the rigidbody closest to 0 among rigidbodyList\n\t\tprivate Rigidbody FindClosestRigidbody (Vector3 center, Rigidbody[] rigidbodyList)\n\t\t{\n\t\t\tif (rigidbodyList == null || rigidbodyList.Length == 0)\n\t\t\t{\n\t\t\t\tDebug.LogWarning(\"FindClosestRigidbody() no rigidbodyList provided or length 0\");\n\t\t\t\treturn null;\n\t\t\t}\n\n\t\t\tfloat closestDistance = float.MaxValue;\n\t\t\tRigidbody closestRigidbody = null;\n\t\t\tfor (int i = 0, iLimit = rigidbodyList.Length; i < iLimit; i++)\n\t\t\t{\n\t\t\t\tfloat distance = Vector3.Distance(center, rigidbodyList[i].transform.position);\n\t\t\t\tif (distance < closestDistance)\n\t\t\t\t{\n\t\t\t\t\t\tclosestRigidbody = rigidbodyList[i];\n\t\t\t\t\t\tclosestDistance = distance;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treturn closestRigidbody;\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.8074866533279419,
"alphanum_fraction": 0.8074866533279419,
"avg_line_length": 22.4375,
"blob_id": "8bec0323c480b9501d61cb9540fdf41619bf64c8",
"content_id": "36fcb2da756282f9cf4bf0ca1963dc325a0baa0e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 374,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 16,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/OnRigidbodySleep/KickerOnRigidbodySleepBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ASSistant.ASSRandom; //RandomRangeFloat\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class KickerOnRigidbodySleepBase : KickerOnConditionForceBase\n\t{\n\t//abstract method implementation\n\t\tprotected override bool CheckCondition ()\n\t\t{\n\t\t\treturn targetRigidbody.IsSleeping();\n\t\t}\n\t//ENDOF abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.753731369972229,
"alphanum_fraction": 0.753731369972229,
"avg_line_length": 15.875,
"blob_id": "101b923eeb9341209138900612b42b12675fc6c0",
"content_id": "934b292f758f83f366d873247955c6fc15816be7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 134,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 8,
"path": "/Assets/Scripts/ASSPhysics/ControllerSystem/IController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.ControllerSystem\n{\n\tpublic interface IController\n\t{\n\t\t//should return false when stale\n\t\tbool isValid {get;}\n\t}\n}"
},
{
"alpha_fraction": 0.7766497731208801,
"alphanum_fraction": 0.7783417701721191,
"avg_line_length": 30.972972869873047,
"blob_id": "f2fbf75edba5ae26069d2778740d3ff054c7720b",
"content_id": "fdb17ec4b7465277a0b0e45f7499a7315d671dc4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1182,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 37,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Managers/ToolManagerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerProvider = ASSPhysics.ControllerSystem.ControllerProvider;\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nusing ITool = ASSPhysics.HandSystem.Tools.ITool;\n\nnamespace ASSPhysics.HandSystem.Managers\n{\n\tpublic abstract class ToolManagerBase :\n\t\tASSPhysics.ControllerSystem.MonoBehaviourControllerBase<IToolManager>,\n\t\tIToolManager\n\t{\n\t//IToolManager implementation\n\t\tpublic abstract ITool[] tools {get;}\n\t\tpublic abstract ITool activeTool {get;}\n\t//ENDOF IToolManager implementation\n\n\t//protected class methods\n\t\t//instantiates a prefab of a tool\n\t\tprotected ITool InstantiateAsTool (ITool prefabTool, Vector2? position = null)\n\t\t{\n\t\t\t//if no position is provided automatically pick the center of the screen\n\t\t\tif (position == null)\n\t\t\t{ position = ControllerCache.viewportController.position; }\n\n\t\t\t//create a copy of the tool and return a reference to its tool script\n\t\t\treturn UnityEngine.Object.Instantiate(\n\t\t\t\toriginal: prefabTool.gameObject,\n\t\t\t\tposition: (Vector2) position,\n\t\t\t\trotation: prefabTool.transform.rotation,\n\t\t\t\tparent: transform.parent\n\t\t\t).GetComponent<ITool>();\n\t\t}\n\t//ENDOF protected class methods\n\t}\n}"
},
{
"alpha_fraction": 0.6940681338310242,
"alphanum_fraction": 0.6948004364967346,
"avg_line_length": 32.71604919433594,
"blob_id": "97ce129ff4a667654b457cf8cb42957b660999ad",
"content_id": "833b56194fe5cacca928c6561a254712df97a0dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 5462,
"license_type": "no_license",
"max_line_length": 167,
"num_lines": 162,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Actions/ActionGrab.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine; //Physics, Transform, SpringJoint, ...\n\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\nusing ActionSettings = ASSPhysics.SettingSystem.ActionSettings; //tailGrabSettings, surfaceGrabSettings\nusing EInputState = ASSPhysics.InputSystem.EInputState;\n\nusing ASSistant.ComponentConfiguration.JointConfiguration; \nusing ASSistant.ComponentConfiguration.ColliderConfiguration; //ColliderPosition.EMGetColliderTransformOffset(this Collider);\n\nnamespace ASSPhysics.HandSystem.Actions\n{\n\tpublic class ActionGrab : ActionBase\n\t{\n\t//ActionBase override implementation\n\t\t//returns true if this action is currently doing something, like maintaining a grab or repeating a slapping pattern\n\t\t//Will be true if base.ongoing (because automated) or if we have a joint list acting upon the world\n\t\t//receive state of corresponding input medium\n\t\tpublic override void Input (EInputState state)\n\t\t{\n\t\t\tif (state == EInputState.Started)\n\t\t\t{\n\t\t\t\tInitiateGrab();\n\t\t\t}\n\t\t\tif (state == EInputState.Ended)\n\t\t\t{\n\t\t\t\tFinishGrab();\n\t\t\t}\n\t\t}\n\n\t\t//clears and finishes the action\n\t\tpublic override void Clear ()\n\t\t{\n\t\t\tRemoveJoints();\n\t\t\tbase.Clear();\n\t\t}\n\n\t\t//returns true if the action can be legally activated at its position\n\t\tpublic override bool IsValid ()\n\t\t{\n/////////////////////////////////////////////////////////////////////////////////////////////////////\n//[TO-DO] optimize initialization by keeping a copy of bone list?\n//[TO-DO] consider OverlapCircleNonAlloc for fast validity checks too\n/////////////////////////////////////////////////////////////////////////////////////////////////////\t\t\t\n\t\t\treturn (GetBoneCollidersInRange().Length > 0);\n\t\t}\n\n\t\t//try to set in automatic state. Returns true on success\n\t\tpublic override bool Automate ()\n\t\t{\n\t\t\tauto = grabActive;\n\t\t\treturn auto;\n\t\t}\n\n\t\t//update automatic action. 
To be called once per frame while action is automated. returns false if automation stops\n\t\tpublic override bool AutomationUpdate ()\n\t\t{\n\t\t\treturn auto;\n\t\t}\n\n\t\tpublic override void DeAutomate ()\n\t\t{\n\t\t\tauto = false;\n\t\t}\n\t//ENDOF ActionBase override implementation\n\n\t//Grab Action Implementation\n\t\t//list of currently in-use joints\n\t\tprivate ConfigurableJoint[] jointList;\n\t\t//determines if grab is active\n\t\tpublic bool grabActive { get { return (jointList != null); }}\n\n\t\t//initiate grabbing action\n\t\tprivate void InitiateGrab ()\n\t\t{\n\t\t\tCreateJoints (\n\t\t\t\ttargets: GetBoneCollidersInRange(),\n\t\t\t\tsampleSpring: ActionSettings.grabJointSettings.sampleJoint\n\t\t\t);\n\t\t\ttool.SetAnimationState(AnimationNames.Tool.stateGrab);\n\t\t}\n\n\t\t//End grabbing action\n\t\tprivate void FinishGrab ()\n\t\t{\n\t\t\ttool.SetAnimationState(null);\n\t\t\tClear();\n\t\t}\n\n\t\t//Gets all of the grabbable transforms. First tries to fetch a single tail backbone.\n\t\t//If no tail bones, fetch every surface bone in range\n\t\tprivate Collider[] GetBoneCollidersInRange ()\n\t\t{\n\t\t\tCollider[] colliderList = ActionSettings.tailGrabSettings.GetCollidersInRange(tool.transform);\n\t\t\tif (colliderList.Length < 1)\n\t\t\t{\n\t\t\t\tcolliderList = ActionSettings.surfaceGrabSettings.GetCollidersInRange(tool.transform);\n\t\t\t}\n\t\t\treturn colliderList;\n\t\t}\n\t//ENDOF Grab Action Implementation\n\n\t//Grab Action support methods\n\t\t//create joints required from the tool gameobject to each target\n\t\tprivate void CreateJoints (Collider[] targets, ConfigurableJoint sampleSpring)\n\t\t{\n\t\t\t//clear joint list and create a new list\n\t\t\tDebug.Log(\"ActionGrab.CreateJoints() \" + targets.Length + \" targets\");\n\t\t\tDebug.Log(sampleSpring);\n\t\t\tRemoveJoints();\n\t\t\tjointList = new ConfigurableJoint[targets.Length];\n\t\t\t//create a joint for each target\n\t\t\tfor (int i = 0, iLimit = targets.Length; i < 
iLimit; i++)\n\t\t\t{\n\t\t\t\tjointList[i] = CreateJoint(targets[i], sampleSpring);\n\t\t\t}\n\t\t}\n\n\t\t//Create a joint linked to a specific transform\n\t\tprivate ConfigurableJoint CreateJoint (Collider target, ConfigurableJoint sampleSpring)\n\t\t{\n\t\t\t//fetch target rigidbody and ensure it exists\n\t\t\tRigidbody targetBody = target.GetComponent<Rigidbody>();\n\t\t\tif (targetBody == null) { return null; }\n\t\t\t//create the joint\n\t\t\tConfigurableJoint newJoint = tool.gameObject.AddComponent<ConfigurableJoint>();\n\t\t\tDebug.Log(\" created joint: \" + newJoint);\n\t\t\t//apply the sample settings and link target rigidbody\n\t\t\tnewJoint.EMApplySettings(sampleSpring);\n\n\t\t\t//\n\t\t\t//&&&&&&&&&&&&&&&&&&&&\n\t\t\t//web build grab problem seems to be somewhere around here. maybe extension methods?\n\t\t\t//\n\n\t\t\tnewJoint.connectedBody = targetBody;\n\t\t\t\t/*//[DEBUG]\n\t\t\t\tDebug.Log(\" Connected body: \" + newJoint.connectedBody);\n\t\t\t\tDebug.Log(\" Linear limit spring: \" + newJoint.linearLimitSpring.spring + \" damper: \" + newJoint.linearLimitSpring.damper);\n\t\t\t\tDebug.Log(\" X Drive spring: \" + newJoint.xDrive.positionSpring + \" damper: \" + newJoint.xDrive.positionDamper + \" maximumForce: \" + newJoint.xDrive.maximumForce);\n\t\t\t\tDebug.Log(\" Y Drive spring: \" + newJoint.yDrive.positionSpring + \" damper: \" + newJoint.yDrive.positionDamper + \" maximumForce: \" + newJoint.yDrive.maximumForce);\n\t\t\t\t//*/\n\t\t\t//set connection offset according to collider position\n\t\t\tnewJoint.connectedAnchor = target.EMGetColliderTransformOffset();\n\t\t\t//return the component\n\t\t\treturn newJoint;\n\t\t}\n\n\t\t//Remove all joints currently in use\n\t\tprivate void RemoveJoints ()\n\t\t{\n\t\t\tif (jointList != null)\n\t\t\t{\n\t\t\t\tfor (int i = 0, iLimit = jointList.Length; i < iLimit; i++)\n\t\t\t\t{\n\t\t\t\t\tObject.Destroy(jointList[i]);\n\t\t\t\t}\n\t\t\t\tjointList = null;\n\t\t\t}\n\t\t}\n\t//ENDOF Grab Action 
support methods\n\t}\n}\n"
},
{
"alpha_fraction": 0.7584415674209595,
"alphanum_fraction": 0.7584415674209595,
"avg_line_length": 18.25,
"blob_id": "c748724fd336e0ec34d007eb7295e4eddf9b0d08",
"content_id": "44b041ee6b2b6cada55fa9a44a27936aa049a6da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 387,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 20,
"path": "/Assets/Scripts/Experiments/RectTransformRectLogger.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class RectTransformRectLogger : MonoBehaviour\n{\n\tprivate RectTransform rectTransform;\n\n\t// Start is called before the first frame update\n\tvoid Start()\n\t{\n\t\trectTransform = transform as RectTransform;\t\n\t}\n\n\t// Update is called once per frame\n\tvoid Update()\n\t{\n\t\tDebug.Log(rectTransform.rect);\n\t}\n}\n"
},
{
"alpha_fraction": 0.8132911324501038,
"alphanum_fraction": 0.8132911324501038,
"avg_line_length": 25.41666603088379,
"blob_id": "8c27c4c2af91912bb3dd7106ae63a691647c9e78",
"content_id": "e76165731cd25a41acb565d1e188d8aae2c1202c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 318,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 12,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/OnOffFlicker/Base/OnOffFlickerManualActivationBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class OnOffFlickerManualActivationBase : OnOffFlickerBase\n\t{\n\t//inherited abstract method implementation\n\t\tprotected override bool CheckCondition ()\n\t\t{ return false; }\n\t//ENDOF inherited abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7398601174354553,
"alphanum_fraction": 0.7454545497894287,
"avg_line_length": 19.457143783569336,
"blob_id": "857171bc31850482da5cfb992321fc0bd263fe17",
"content_id": "ad2b853d6a31e9441d8786f8fe96a2d50139e8a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 717,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 35,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Interface/AutoScaleToScreenHeight.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic class AutoScaleToScreenHeight : MonoBehaviour\n\t{\n\t//serialized fields\n\t\t[SerializeField]\n\t\tprivate float baseScreenHeight = 1.0f;\n\t\t[SerializeField]\n\t\tprivate Vector3 baseScale = Vector3.one;\n\t//ENDOF serialized fields\n\n\t//MonoBehaviour lifecycle\n\t\tpublic void Start () \n\t\t{\n\t\t\tUpdateScale();\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tUpdateScale();\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\n\t//private methods\n\t\tprivate void UpdateScale()\n\t\t{ \t\n\t\t\ttransform.localScale = baseScale * (ControllerCache.viewportController.size / baseScreenHeight);\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.8284023404121399,
"alphanum_fraction": 0.8284023404121399,
"avg_line_length": 23.214284896850586,
"blob_id": "cb69627b7b93d846edbc2f21da16974aed0802e8",
"content_id": "67713e82cce803de9981b813d3ca48bfa87f9d73",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 338,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 14,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Riggers/TailRiggerEditorNoController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing TInspector = ASSpriteRigging.Inspectors.TailRiggerInspectorNoController;\n\nnamespace ASSpriteRigging.Editors\n{\n\t//rigs a chain of bones with required components\n\t[CustomEditor(typeof(TInspector))]\n\tpublic class TailRiggerEditorNoController\n\t\t: TailRiggerEditorJointChainBase<TInspector>\n\t{\n\t}\n}"
},
{
"alpha_fraction": 0.7434367537498474,
"alphanum_fraction": 0.7601432204246521,
"avg_line_length": 25.21875,
"blob_id": "007f18f63a140a921fcbeeed494f5dd91bfc00f4",
"content_id": "88b6b365c86be9913504dc28e8e97ba82b48baa3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 838,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 32,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Actions/ActionSupport2D.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.HandSystem.Actions\n{\n\tpublic static class ActionSupport2D\n\t{\n\t//CreateAnchoredJoint() Create an anchored joint linked to a specific target, adjusting offsets\n\t\t//overload taking a collider2d as \n\t\t//public static \n\t\t//\n\t\tpublic static TAnchoredJoint2D CreateAnchoredJoint2D <TAnchoredJoint2D> (\n\t\t\tTransform origin,\n\t\t\tTransform target,\n\t\t\tTAnchoredJoint2D sampleSpring,\n\t\t\tVector2? originOffset = null,\n\t\t\tVector2? targetOffset = null\n\t\t)\n\t\t\twhere TAnchoredJoint2D: AnchoredJoint2D\n\t\t{\n\t\t\treturn null;\n\t\t}\n\t//ENDOF CreateAnchoredJoint()\n\n\t//Private utility methods\n\t\t//Return the distance between the center of a collider and its anchored rigidbody as a vector2\n\t\tprivate static Vector2 GetColliderOffset (Collider2D collider)\n\t\t{\n\t\t\treturn Vector2.zero;\n\t\t}\n\t//ENDOF Private utility methods\n\t}\n}"
},
{
"alpha_fraction": 0.718126654624939,
"alphanum_fraction": 0.7250650525093079,
"avg_line_length": 26.4761905670166,
"blob_id": "e9233b55fbb7de5784383d8df7f7039cbb50723d",
"content_id": "aa086f5889a4fb65c9ecec2f6ecd75207e06fc99",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1153,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 42,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/OnRigidbodySleep/KickerOnRigidbodySleepTorque.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ASSistant.ASSRandom; //RandomRangeFloat\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic class KickerOnRigidbodySleepTorque : KickerOnRigidbodySleepBase\n\t{\n\t//serialized properties \n\t\tpublic int direction; //if not zero determines the sign of the force applied. If 0, a direction will be chosen randomly each time\n\t//ENDOF serialized properties \n\n\t//private fields and properties\n\t//ENDOF private fields and properties\n\n\t//IKicker implementation\n\t\t//applies a random torque at a random direction as the kick\n\t\tpublic override void Kick ()\n\t\t{\n\t\t\t//add a torque of random intensity and random direction in Z axis\n\t\t\ttargetRigidbody.AddTorque(\n\t\t\t\tVector3.forward * randomForce.Generate() * GetDirection(),\n\t\t\t\tForceMode.Force\n\t\t\t);\n\t\t}\n\t//ENDOF IKicker implementation\n\n\t//MonoBehaviour Lifecycle\n\t//ENDOF MonoBehaviour Lifecycle\n\n\t//private methods\n\t\tprivate int GetDirection ()\n\t\t{\n\t\t\treturn (direction > 0)\n\t\t\t\t\t\t? 1\t\t\t//if direction sign is + use 1\n\t\t\t\t\t : (direction < 0)\n\t\t\t\t\t\t? -1\t\t//if direction sign is - use -1\n\t\t\t\t\t\t: RandomSign.Generate();\t//if none, get a random sign\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.7654867172241211,
"alphanum_fraction": 0.7743362784385681,
"avg_line_length": 31.35714340209961,
"blob_id": "9fdde19b1d2f5d3f04f806fc1f438e76f73cf2b4",
"content_id": "ab023af3cfa03c3e5860e7fca6f9e49264641c36",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 452,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 14,
"path": "/Assets/Scripts/ASSPhysics/SceneSystem/ICurtainController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.SceneSystem\n{\n\tpublic interface ICurtainController : ASSPhysics.ControllerSystem.IController\n\t{\n\t\t//opens and closes the curtains, or returns the currently DESIRED state\n\t\tbool open {get; set;}\n\n\t\t//returns the state of the transition between 1 and 0, 0 meaning fully closed 1 meaning fully opened\n\t\tfloat openingProgress {get;}\n\n\t\t//returns true if curtain has actually reached a closed state\n\t\tbool isCompletelyClosed {get;}\n\t}\n}"
},
{
"alpha_fraction": 0.7126798629760742,
"alphanum_fraction": 0.7158273458480835,
"avg_line_length": 32.712120056152344,
"blob_id": "f3d0d5836f5f41ff3b55eed7c8f6b8d2d84b78a1",
"content_id": "dcb6ca3e9e32aa5b2b66d0f02b982e61871e4943",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2224,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 66,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Actions/ActionSlap.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
    "text": "using UnityEngine;\n\nusing ActionSettings = ASSPhysics.SettingSystem.ActionSettings;\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\nusing EInputState = ASSPhysics.InputSystem.EInputState;\n\nnamespace ASSPhysics.HandSystem.Actions\n{\n\tpublic class ActionSlap : ActionBase\n\t{\n\t//ActionBase override implementation\n\t\t//receive state of corresponding input medium\n\t\tpublic override void Input (EInputState state)\n\t\t{\n\t\t\tif (state == EInputState.Ended)\n\t\t\t{\n\t\t\t\tPerformSlap();\n\t\t\t}\n\t\t}\n\n\t\t//interaction is always valid - slap is generally the fallback state if no othe action is valid\n\t\tpublic override bool IsValid ()\n\t\t{\n\t\t\treturn true;\n\t\t\t//alternative implementation determining wether there were valid colliders in range\n\t\t\t//return ActionSettings.slapAreaSettings.GetCollidersInRange(tool.transform).Length > 0;\n\t\t}\n\n\t\t//Using an interactor is an entirely non-automatable one-shot action, so automation methods just report failure\n\t\tpublic override bool Automate () { return false; }\n\t\tpublic override bool AutomationUpdate () { return false; }\n\t\t//public override void DeAutomate ();\n\t//ENDOF ActionBase override implementation\n\n\t//private method implementation\n\t\t//execute the slapping action\n\t\tprivate void PerformSlap ()\n\t\t{\n\t\t\t//set the animation state\n\t\t\ttool.SetAnimationState(AnimationNames.Tool.stateSlap);\n\n\t\t\t//fetch colliders in range around the tool\n\t\t\tCollider[] colliderList = ActionSettings.slapAreaSettings.GetCollidersInRange(tool.transform);\n\t\t\t//add a force to each collider in range\n\t\t\tforeach (Collider collider in colliderList)\n\t\t\t{\n\t\t\t\t////////////////////////////////////////////////////////////////////\n\t\t\t\t//[TO-DO] move explosionForce to an actionSetting\n\t\t\t\tconst float explosionForce = 40.0f;\n\t\t\t\t//////////////////////\n\t\t\t\tRigidbody targetRigidbody = collider.GetComponent<Rigidbody>();\n\n\t\t\t\tif(targetRigidbody != null) {\n\t\t\t\t\ttargetRigidbody.AddExplosionForce(\n\t\t\t\t\t\texplosionForce: explosionForce,\t//float\n\t\t\t\t\t\texplosionPosition: tool.transform.position,\t//Vector3\n\t\t\t\t\t\texplosionRadius: ActionSettings.slapAreaSettings.radius,\t//float\n\t\t\t\t\t\tupwardsModifier: 0.0f,\t//float\n\t\t\t\t\t\tmode: ForceMode.Force\t//ForceMode\n\t\t\t\t\t);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t//ENDOF private method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7790697813034058,
"alphanum_fraction": 0.7790697813034058,
"avg_line_length": 20.75,
"blob_id": "0a7aa0283289010991c09b23c873e0e574dc7f5d",
"content_id": "1594d272c22c5cfb4e64fe67a1b418e53e7b5999",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 86,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 4,
"path": "/Assets/Scripts/ASSPhysics/InputSystem/EInputState.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.InputSystem\n{\n\tpublic enum EInputState { Started, Held, Ended }\n}"
},
{
"alpha_fraction": 0.8360655903816223,
"alphanum_fraction": 0.8360655903816223,
"avg_line_length": 29.600000381469727,
"blob_id": "3c2f4e687a9e96f0ca72f0a19c374e5ceaf9dfd0",
"content_id": "ede91c57fff953f8857f4d2ccc9736521523a4c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 307,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 10,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Propagators/IPropagatorEditor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Editors\n{\n\tpublic delegate void PropagationApplicationDelegate(UnityEngine.Transform propagationTarget);\n\n\tpublic interface IPropagatorEditor : IEditorPurgeableBase\n\t{\n\t\t//executes apply on every affected transform\n\t\tvoid DoPropagate (PropagationApplicationDelegate apply);\n\t}\n}"
},
{
"alpha_fraction": 0.7683823704719543,
"alphanum_fraction": 0.7757353186607361,
"avg_line_length": 27.578947067260742,
"blob_id": "79470edcded5a2f6b52526f58b2f488fc5508ea2",
"content_id": "adb8423b660b745e3caa4c55b1fdb8eab44de526",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 544,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 19,
"path": "/Assets/Scripts/ASSistant/ComponentConfiguration/JointConfiguration/ConfigurableJointSetChainAnchor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "\nusing UnityEngine;\n\nnamespace ASSistant.ComponentConfiguration.JointConfiguration\n{\n\tpublic static class ConfigurableJointSetChainAnchor\n\t{\n\t//public static methods\n\t\t//sets the anchors so they rest on the connected body's localspace 0,0,0\n\t\tpublic static ConfigurableJoint EMSetChainAnchor (\n\t\t\tthis ConfigurableJoint _this\n\t\t) {\n\t\t\t_this.autoConfigureConnectedAnchor = false;\n\t\t\t_this.connectedAnchor = Vector3.zero;\n\t\t\t_this.anchor = _this.transform.InverseTransformPoint(_this.connectedBody.transform.position);\n\n\t\t\treturn _this;\n\t\t}\n\t}\n}\n"
},
{
"alpha_fraction": 0.7185500860214233,
"alphanum_fraction": 0.7185500860214233,
"avg_line_length": 17.076923370361328,
"blob_id": "5cbfdb84beb01d0113f65bc66c88d3e53c9a92d8",
"content_id": "3b9a076a4654604b93642eba0a66dfb71e0a0cd1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 471,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 26,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/OneShotTriggers/PlayerOneShotBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic abstract class PlayerOneShotBase <TPlayable> : MonoBehaviour\n\t\twhere TPlayable : Component\n\t{\n\t\t[SerializeField]\n\t\tpublic TPlayable[] playableList;\n\n\t\tpublic void PlayAll ()\n\t\t{\n\t\t\tforeach (TPlayable playable in playableList)\n\t\t\t{\n\t\t\t\tPlay(playable);\n\t\t\t}\n\t\t}\n\n\t\tpublic void PlayOne (int target)\n\t\t{\n\t\t\tPlay(playableList[target]);\n\t\t}\n\n\t\tprotected abstract void Play (TPlayable playable);\n\t}\n}"
},
{
"alpha_fraction": 0.7780612111091614,
"alphanum_fraction": 0.7831632494926453,
"avg_line_length": 28.037036895751953,
"blob_id": "1172b105a20b39a52c397d03c34b03afa495e5fb",
"content_id": "6da375df08f720d35f484f2b3f62a273f553b106",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 786,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 27,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/RectCameraControllerScrollable.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing RectMath = ASSistant.ASSMath.RectMath;\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nnamespace ASSPhysics.CameraSystem\n{\n\tpublic class RectCameraControllerScrollable : RectCameraControllerSmooth\n\t{\n\t//public methods\n\t\tpublic void Scroll (Vector2 direction)\n\t\t{\n\t\t\tVector2 movement = ScrollMovementFromDirection(direction);\n\t\t\ttargetRect = RectMath.MoveRect(rect: targetRect, movement: movement);\n\t\t\t//ControllerCache.toolManager.activeTool.Move(movement);\n\t\t}\n\t//ENDOF public methods\n\n\t//private methods\n\t\t//scales direction vector by screen size, time delta, and rate modifier\n\t\tprivate Vector2 ScrollMovementFromDirection (Vector2 direction)\n\t\t{\n\t\t\treturn direction * rect.height * Time.deltaTime;\n\t\t}\n\t//ENDOF private method\n\t}\n}\n"
},
{
"alpha_fraction": 0.6457036137580872,
"alphanum_fraction": 0.6531755924224854,
"avg_line_length": 29.903846740722656,
"blob_id": "8a6da798bda7554921f43ab66fb35dfb3f09ca34",
"content_id": "1943bf709b9340e164dba0f99b7910a4cb0ffbbb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1606,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 52,
"path": "/Assets/Editor/Experiments/SpriteExperimentsEditor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing System.Collections.Generic;\nusing Unity.Collections;\t//NativeSlice\nusing UnityEngine;\nusing UnityEngine.U2D;\nusing UnityEngine.Rendering;\t//VertexAttribute\nusing UnityEditor;\n\n\nnamespace ASSperiments\n{\n\t[CustomEditor(typeof(SpriteExperiments))]\n\tpublic class SpriteExperimentsEditor : Editor\n\t{\n\t\tSprite sprite;\n\n\t\tpublic override void OnInspectorGUI ()\n\t\t{\n\t\t\tbase.OnInspectorGUI();\n\n\t\t\tsprite = ((SpriteExperiments) target).gameObject.GetComponent<SpriteRenderer>().sprite;\n\n\t\t\tDoTestBonesButton();\n\t\t}\n\n\t\tprivate void DoTestBonesButton ()\n\t\t{\n\t\t\tif (GUILayout.Button(\"Test bones\"))\n\t\t\t{ TestBones(sprite); }\n\t\t}\n\n\t\tprivate void TestBones(Sprite sprite)\n\t\t{\n\t\t\tDebug.Log(\"TestBones\");\n\n\t\t\t//=============================================================================================================\n\t\t\tNativeSlice<Vector3> positionList = sprite.GetVertexAttribute<Vector3>(VertexAttribute.Position);\n\t\t\tNativeSlice<BoneWeight> blendWeightList = sprite.GetVertexAttribute<BoneWeight>(VertexAttribute.BlendWeight);\n\n\t\t\tfor (int i = 0, iLimit = sprite.GetVertexCount(); i < iLimit; i++)\n\t\t\t{\n\t\t\t\tDebug.Log(\"========\\nVertex #\" + i);\n\t\t\t\tDebug.Log(positionList[i]);\n\t\t\t\tDebug.Log(blendWeightList[i]);\n\t\t\t\tDebug.Log(\" > \" + blendWeightList[i].boneIndex0 + \": \" + blendWeightList[i].weight0);\n\t\t\t\tDebug.Log(\" > \" + blendWeightList[i].boneIndex1 + \": \" + blendWeightList[i].weight1);\n\t\t\t\tDebug.Log(\" > \" + blendWeightList[i].boneIndex2 + \": \" + blendWeightList[i].weight2);\n\t\t\t\tDebug.Log(\" > \" + blendWeightList[i].boneIndex3 + \": \" + blendWeightList[i].weight3);\n\t\t\t}\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7586206793785095,
"alphanum_fraction": 0.7586206793785095,
"avg_line_length": 35.29166793823242,
"blob_id": "80b65e4b18db9524128a46533da9b68ae329fd49",
"content_id": "809d1add640062d20222c6f2bd7df9f51309dda8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 870,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 24,
"path": "/Assets/Scripts/ASSistant/ReflectionAssistant.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Reflection;\n\nnamespace ASSistant\n{\n\t//Introspection support methods\n\tpublic static class ReflectionAssistant\n\t{\n\t\t//Finds a method of name methodName within provided type using provided binding flags\n\t\t//invoke it using given context, passing given list of generic types and parameters\n\t\tpublic static void CallMethodWithTypesAndParameters (\n\t\t\tSystem.Type type,\t\t\t//type containing target method\n\t\t\tSystem.Object context,\t\t//context within invokation will occur. Null means static context\n\t\t\tstring methodName,\t\t\t//name of the method\n\t\t\tBindingFlags bindingFlags, \t//binding flags delimiting method search\n\t\t\tSystem.Type[] typeList,\t\t//list of generic types to pass\n\t\t\tSystem.Object[] parameters //list of parameters to pass\n\t\t) {\n\t\t\ttype\n\t\t\t\t.GetMethod(methodName, bindingFlags)\n\t\t\t\t.MakeGenericMethod(typeList)\n\t\t\t\t.Invoke(context, parameters);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.702633798122406,
"alphanum_fraction": 0.702633798122406,
"avg_line_length": 20.418182373046875,
"blob_id": "56c527411925f76bd1940b96da6f1ae940ceecd8",
"content_id": "28ecaf5613a0927d8ba35fb69dcfe53ec7ea06d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1177,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 55,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Base/ArmableEditorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing IArmableInspector = ASSpriteRigging.Inspectors.IArmableInspector;\n\nnamespace ASSpriteRigging.Editors\n{\n\tpublic abstract class ArmableEditorBase<TInspector>\n\t:\n\t\tEditorBase<TInspector>\n\t\twhere TInspector : UnityEngine.Object, IArmableInspector\n\t{\n\t//protected properties\n\t\tprotected bool isArmed \n\t\t{ \n\t\t\tget { return targetInspector.armed; }\n\t\t\tset { targetInspector.armed = value; }\n\t\t}\n\t//ENDOF protected properties\n\n\t//protected methods\n\t\t//forces inspector to disarm\n\t\tprotected void Disarm ()\n\t\t{\n\t\t\tisArmed = false;\n\t\t\tDebug.Log(\"Disarmed\");\n\t\t}\n\n\t //Setup GUI layout\n\t\t//check if script is armed for use\n\t\tprotected bool RequestArmed ()\n\t\t{\n\t\t\tif (isArmed)\n\t\t\t{\n\t\t\t\tisArmed = false;\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tDebug.LogWarning(\"Rigger is disarmed - Arm before proceeding\");\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\n\t\t//draws a button that executes its corresponding action only if armed\n\t\tprotected override void DoButton (string buttonText, EditorActionDelegate action)\n\t\t{\n\t\t\tbase.DoButton(buttonText, delegate() {\n\t\t\t\tif (RequestArmed()) { action(); }\n\t\t\t});\n\t\t}\n\t //ENDOF Setup GUI layout\n\t//ENDOF protected methods\n\t}\n}"
},
{
"alpha_fraction": 0.718367338180542,
"alphanum_fraction": 0.718367338180542,
"avg_line_length": 14.3125,
"blob_id": "07a9b05804ef727b2c593dfc8f8289b03d99868b",
"content_id": "50c57902af1a9f919000d0fbbe3f050bd774bb21",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 247,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 16,
"path": "/Assets/Scripts/Experiments/Logger.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Logger : MonoBehaviour\n{\n\tpublic void ConsoleLog()\n\t{\n\t\tDebug.Log(\"L M A O\");\n\t}\n\n\tpublic void CustomLog(string message)\n\t{\n\t\tDebug.Log(message);\n\t}\n}\n"
},
{
"alpha_fraction": 0.7320099472999573,
"alphanum_fraction": 0.7320099472999573,
"avg_line_length": 22.735294342041016,
"blob_id": "7d9e858e6b85e44a344bea148a8f371019a8d526",
"content_id": "8e35d4cbf26f75e8554e701ef08e9b1c6db5df08",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 808,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 34,
"path": "/Assets/Scripts/ASSPhysics/ControllerSystem/MonoBehaviourControllerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.ControllerSystem\n{\n\tpublic class MonoBehaviourControllerBase <TController> : MonoBehaviour, IController\n\t\twhere TController : class, IController\n\t{\n\t//IController implementation\n\t\tprivate bool _isAlive;\n\t\tpublic bool isValid\n\t\t{\n\t\t\tget { return _isAlive; }\n\t\t\tprivate set { _isAlive = value; }\n\t\t}\n\t//ENDOF IController implementation\n\n\t//MonoBehaviour lifecycle\n\t\tpublic virtual void Awake ()\n\t\t{\n\t\t\t//report this controller to the provider\n\t\t\tisValid = true;\n\t\t\tControllerProvider.RegisterController<TController>(this);\n\t\t}\n\n\t\tpublic virtual void OnDestroy ()\n\t\t{\n\t\t\tControllerProvider.DisposeController<TController>(this);\n\t\t\tDebug.LogWarning(\"OnDestroy(): \" + typeof(TController));\n\t\t\tisValid = false;\n\t\t\tDestroy(this);\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\t}\n}"
},
{
"alpha_fraction": 0.7889390587806702,
"alphanum_fraction": 0.7889390587806702,
"avg_line_length": 27.612903594970703,
"blob_id": "f9acb3da8741e077685fcd4f0e02b88d0fd14243",
"content_id": "237c15ec37db77cf924f352658f7dcfb3310e9af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 888,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 31,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Propagators/PropagatorEditorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IPropagatorInspector = ASSpriteRigging.Inspectors.IPropagatorInspector;\n\nnamespace ASSpriteRigging.Editors\n{\n\tpublic abstract class PropagatorEditorBase<TInspector>\n\t:\n\t\tArmableEditorBase<TInspector>,\n\t\tIPropagatorEditor\n\t\twhere TInspector : UnityEngine.Object, IPropagatorInspector\n\t{\n\t//IPropagatorEditor declaration\n\t //IEditorPurgeableBase declaration\n\t\tpublic abstract void DoPurge ();\n\t //ENDOF IEditorPurgeableBase declaration\n\n\t\tpublic abstract void DoPropagate (PropagationApplicationDelegate apply);\n\t//ENDOF IPropagatorEditor declaration\n\n\t//EditorBase implementation\n\t\tprotected override void DoButtons ()\n\t\t{\n\t\t\tDoButton(\"Propagate Rig Setup\", DoSetup);\n\t\t\t//DoButton(\"Rig bone components & configuration\", RigBones);\n\t\t\tDoButton(\"Disarm\", Disarm);\n\t\t\tDoButton(\"Propagate Purge components\", DoPurge);\n\t\t}\n\t//ENDOF EditorBase implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7898550629615784,
"alphanum_fraction": 0.79347825050354,
"avg_line_length": 26.700000762939453,
"blob_id": "dd9289fe16203f779334349a572ac5ac1e60a9ef",
"content_id": "355ccbd7896bb31fcb76b1b834ed5c1925d331fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 276,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 10,
"path": "/Assets/Scripts/ASSPhysics/SettingSystem/ActionSettings/ActionSettingJoint.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.SettingSystem.ActionSettingTypes\n{\n\t[CreateAssetMenu(fileName = \"Data\", menuName = \"Action settings/Spring Joint settings\", order = 1)]\n\tpublic class ActionSettingJoint : ScriptableObject\n\t{\n\t\tpublic ConfigurableJoint sampleJoint;\n\t}\n}"
},
{
"alpha_fraction": 0.8688119053840637,
"alphanum_fraction": 0.8712871074676514,
"avg_line_length": 39.5,
"blob_id": "d3022bd793fd6b063f49fabb83e823d2549e6a84",
"content_id": "568a420b61447ad0ac3175d1f78c56b987f4a6b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 404,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 10,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Riggers/TailRiggerInspectorSmoothFollowController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using TailElementJointSmoothFollow = ASSPhysics.TailSystem.TailElementJointSmoothFollow;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\t[UnityEngine.RequireComponent(typeof(UnityEngine.U2D.Animation.SpriteSkin))]\n\tpublic class TailRiggerInspectorSmoothFollowController : TailRiggerInspectorJointChain\n\t{\n\t\tpublic TailElementJointSmoothFollow defaultTailElementController;\t//default tail element controller\n\t}\n}"
},
{
"alpha_fraction": 0.7623762488365173,
"alphanum_fraction": 0.7623762488365173,
"avg_line_length": 20.64285659790039,
"blob_id": "73421d47bf637e37772ffefc3d149772fde3f0ee",
"content_id": "13ee6ce8b0177df8df16827bad3964e9a752e4ea",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 303,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 14,
"path": "/Assets/Scripts/ASSPhysics/SceneSystem/CursorLocker.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using Cursor = UnityEngine.Cursor;\nusing CursorLockMode = UnityEngine.CursorLockMode;\n\nnamespace ASSPhysics.SceneSystem\n{\n\tpublic static class CursorLocker\n\t{\n\t\tpublic static void LockAndHideSystemCursor ()\n\t\t{\n\t\t\tCursor.lockState = CursorLockMode.Locked; //Confined\n\t\t\tCursor.visible = false;\n\t\t}\n\t}\n}\n"
},
{
"alpha_fraction": 0.7161731123924255,
"alphanum_fraction": 0.7161731123924255,
"avg_line_length": 23.954545974731445,
"blob_id": "3a1b24efaba1848c7010b4737dd5c8a16e6eae64",
"content_id": "0e21633b3f239771a3a711cda4fd2e2d850ecce2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2197,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 88,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogManagerBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
    "text": "using UnityEngine;\n\nusing IEnumerator = System.Collections.IEnumerator;\n\nusing IDialogController = ASSPhysics.DialogSystem.DialogControllers.IDialogController;\n\nnamespace ASSPhysics.DialogSystem\n{\n\tpublic class DialogManagerBase : MonoBehaviour, IDialogManager\n\t{\n\t\t//static namespace\n\t\t\tpublic static DialogManagerBase instance;\n\t\t//ENDOF static namespace\n\n\t\t//serialized fields\n\t\t//ENDOF serialized fields\n\n\t\t//private fields and properties\n\t\t\tprivate IDialogController waitingDialog = null;\n\t\t\tprivate IDialogController activeDialog;\n\t\t\t//private int dialogIndex;\n\t\t//ENDOF private fields and properties\n\n\t\t//MonoBehaviour lifecycle\n\t\t\tpublic void Awake ()\n\t\t\t{\n\t\t\t\tinstance = this;\n\t\t\t}\n\n\t\t\tpublic void Start ()\n\t\t\t{\n\t\t\t\tResetDialogs();\n\t\t\t}\n\t\t//ENDOF MonoBehaviour lifecycle\n\n\t\t//IDialogManager implementation\n\t\t\tpublic void SetActiveDialog (IDialogController targetDialog)\n\t\t\t{\n\t\t\t\tDebug.Log(\"DialogManagerBase.SetActiveDialog()\");\n\t\t\t\t//if already changing dialogs ignore change request\n\t\t\t\tif (waitingDialog != null) { return; }\n\n\t\t\t\twaitingDialog = targetDialog;\n\t\t\t\t//if there's another dialog active, request it closes\n\t\t\t\tif (activeDialog != null)\n\t\t\t\t{\n\t\t\t\t\tactiveDialog.AnimatedDisable(DelegateOpenNextDialog);\n\t\t\t\t}\n\t\t\t\telse\t//if no dialog already active just open target dialog\n\t\t\t\t{\n\t\t\t\t\tDelegateOpenNextDialog();\n\t\t\t\t}\n\n\t\t\t}\n\t\t//ENDOF IDialogManager implementation\n\n\t\t//private method definition\n\t\t\tprivate void ResetDialogs ()\n\t\t\t{\n\t\t\t\twaitingDialog = null;\n\t\t\t\tactiveDialog = null;\n\t\t\t\tIDialogController[] dialogList = GetComponentsInChildren<IDialogController>();\n\t\t\t\tforeach (IDialogController dialog in dialogList)\n\t\t\t\t{\n\t\t\t\t\tdialog.ForceDisable();\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tprivate void DelegateOpenNextDialog ()\n\t\t\t{\n\t\t\t\tStartCoroutine(DelegateOpenNextDialogCoroutine());\n\n\t\t\t\tIEnumerator DelegateOpenNextDialogCoroutine ()\n\t\t\t\t{\n\t\t\t\t\t//one-frame delay introduced to guarantee next dialog's animator has a chance to resize the panel before frame\n\t\t\t\t\tyield return new WaitForEndOfFrame();\n\n\t\t\t\t\tactiveDialog = waitingDialog;\n\t\t\t\t\tif (waitingDialog != null)\n\t\t\t\t\t{\n\t\t\t\t\t\twaitingDialog.Enable();\n\t\t\t\t\t\twaitingDialog = null;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t//ENDOF private method definition\n\t}\n}"
},
{
"alpha_fraction": 0.8101851940155029,
"alphanum_fraction": 0.8101851940155029,
"avg_line_length": 26.0625,
"blob_id": "e1ceb884685303f3e0bdc8f20baafdad2c718a35",
"content_id": "36f16d983f7a62ce8167349d2d681fb336832403",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 432,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 16,
"path": "/Assets/Scripts/ASSPhysics/PulseSystem/PulsePropagators/IPulsePropagator.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using IChainElement = ASSPhysics.ChainSystem.IChainElement;\n\nusing IPulseData = ASSPhysics.PulseSystem.PulseData.IPulseData;\n\nusing Transform = UnityEngine.Transform;\n\nnamespace ASSPhysics.PulseSystem.PulsePropagators\n{\n\tpublic interface IPulsePropagator : IChainElement\n\t{\n\t\tTransform transform {get;}\n\n\t\t//execute a pulse and propagate it in the corresponding direction after proper delay\n\t\tvoid Pulse (IPulseData pulseData);\n\t}\n}"
},
{
"alpha_fraction": 0.783088207244873,
"alphanum_fraction": 0.783088207244873,
"avg_line_length": 23.200000762939453,
"blob_id": "70820249d32271577f1007f9d43131748cf4dff5",
"content_id": "0b9c3c87173a29f43319ed38f25fb90c3f9b96e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1088,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 45,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Weavers/WeaverEditorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing BoneRigging = ASSpriteRigging.BoneUtility.BoneRigging;\nusing IWeaverInspector = ASSpriteRigging.Inspectors.IWeaverInspector;\n\n\nnamespace ASSpriteRigging.Editors\n{\n\tpublic abstract class WeaverEditorBase<TInspector> \n\t:\n\t\tArmableEditorBase<TInspector>,\n\t\tIWeaverEditor\n\t\twhere TInspector : UnityEngine.Object, IWeaverInspector\n\t{\n\t//ArmableEditorBase implementation\n\t\tprotected override void DoButtons ()\n\t\t{\n\t\t\tDoButton(\"Setup weaving joints\", WeaveJoints);\n\t\t}\n\t//ENDOF ArmableEditorBase implementation\n\n\t//IWeaverEditor implementation\n\t\tpublic override void DoSetup ()\n\t\t{\n\t\t\tWeaveJoints();\n\t\t}\n\t//ENDOF IWeaverEditor implementation\n\n\t//private methods\n\t\tprotected void ConnectRigidbodies (Rigidbody fromRigidbody, Rigidbody toRigidbody)\n\t\t{\n\t\t\tBoneRigging.BoneConnectJoint<ConfigurableJoint>(\n\t\t\t\tbone: fromRigidbody.transform,\n\t\t\t\ttargetRigidbody: toRigidbody,\n\t\t\t\tsample: targetInspector.defaultWeavingJoint\n\t\t\t);\n\t\t}\n\t//ENDOF private methods\n\n\t//overridable methods\n\t\tpublic abstract void WeaveJoints();\n\t//ENDOF overridable methods\n\t}\n}"
},
{
"alpha_fraction": 0.7740113139152527,
"alphanum_fraction": 0.7740113139152527,
"avg_line_length": 24.35714340209961,
"blob_id": "a84065fdc3b6e9cc5c282da31688e3fc585dfa81",
"content_id": "577ac76b4cc2a3688db15ee0a2c26f16abbc3daa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 356,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 14,
"path": "/Assets/Scripts/ASSPhysics/InteractableSystem/IInteractor.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using EInputState = ASSPhysics.InputSystem.EInputState;\n\nnamespace ASSPhysics.InteractableSystem\n{\n\t//interactable interface element, like a button\n\tpublic interface IInteractor\n\t{\n\t\t//process input. returns true if an interactable is in range\n\t\tbool Input (EInputState state);\n\n\t\t//Check if hovering over a valid interactable\n\t\tbool IsHovering ();\n\t}\n}"
},
{
"alpha_fraction": 0.732577919960022,
"alphanum_fraction": 0.7348442077636719,
"avg_line_length": 26.13846206665039,
"blob_id": "1a92b02664ba68fed511303a716ddfc06c0a8025",
"content_id": "0b331929eb1667cd4371ec45b421111c6effda87",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1767,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 65,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Tools/Hand.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing InputSettings = ASSPhysics.SettingSystem.InputSettings;\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\nusing ASSPhysics.HandSystem.Actions; //IAction, ActionGrab, ActionSlap ActionUseInteractor\n\nnamespace ASSPhysics.HandSystem.Tools\n{\n\tpublic class Hand : ToolBase\n\t{\n\t//private fields and properties\n\t\tprivate float inputHeldTime = 0.0f;\n\t//ENDOF private fields and proerties\n\n\t//MonoBehaviour Lifecycle implementation\n\t//ENDOF MonoBehaviour Lifecycle implementation\n\t\t\n\t//ToolBase implementation\n\t\tprotected override void InputStarted ()\n\t\t{\n\t\t\t//upon first starting an input, try to determine if an special zone action is required\n\t\t\t//if not, try to initiate a grab.\n\t\t\tinputHeldTime = 0.0f;\n\t\t\tTryActions();\n\t\t}\n\n\t\tprotected override void InputHeld ()\n\t\t{\n\t\t\tinputHeldTime += Time.deltaTime;\n\t\t}\n\n\t\tprotected override void InputEnded ()\n\t\t{\n\t\t\tTrySlap();\n\t\t}\n\n\t\t//sets the animator\n\t\tpublic override void SetAnimationState (string triggerName)\n\t\t{\n\t\t\tif (triggerName == null) { triggerName = AnimationNames.Tool.stateFlat; }\n\t\t\tbase.SetAnimationState(triggerName);\n\t\t}\n\t//ENDOF ToolBase implementation\n\n\t//private method implementation\n\t\tprivate void TryActions ()\n\t\t{\n\t\t\tif (SetAction<ActionUseInteractor>()) return;\n\t\t\tif (SetAction<ActionGrab>()) return;\n\t\t}\n\n\t\t//upon release perform a slap if input was held for short enough and previous action is not an ActionUseInteractor\n\t\tprivate void TrySlap ()\n\t\t{\n\t\t\tif (inputHeldTime <= InputSettings.maximumTimeHeldForSlap)\n\t\t\t{\n\t\t\t\tif ((action as ActionUseInteractor)?.IsValid() == true)\n\t\t\t\t{ return; }\t//if previous action is a valid UseInteractor forgo slap\n\t\t\t\tSetAction<ActionSlap>();\n\t\t\t\tDebug.Log(\"Slappin'\");\n\t\t\t}\n\t\t}\n\t//ENDOF private method implementation\n\t}\n}\n\n"
},
{
"alpha_fraction": 0.7533039450645447,
"alphanum_fraction": 0.7797356843948364,
"avg_line_length": 24.33333396911621,
"blob_id": "55bd42511c156d63f4c6e662a9e28dad33bad65e",
"content_id": "f867cf480bc0e96973ed3f709bbaf06d2d3933cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 227,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 9,
"path": "/Assets/Scripts/ASSPhysics/SettingSystem/InputSettings.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.SettingSystem\n{\n\tpublic static class InputSettings\n\t{\n\t\tpublic const float maximumTimeHeldForSlap = 0.2f;\n\t\tpublic const float mouseDeltaScale = 0.1f;\n\t\tpublic const float mouseScrollDeltaScale = 0.5f;\n\t}\n}"
},
{
"alpha_fraction": 0.773679792881012,
"alphanum_fraction": 0.7770326733589172,
"avg_line_length": 32.16666793823242,
"blob_id": "ec00a2958ddd6c72b9034b94dff3c04a4c00e6ca",
"content_id": "d9743f8d59322c071d9ed39919d5d08032125b19",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1195,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 36,
"path": "/Assets/Scripts/ASSistant/ComponentConfiguration/ColliderConfiguration/ColliderPosition.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing Type = System.Type;\nusing System.Reflection;\t//PropertyInfo\n\nnamespace ASSistant.ComponentConfiguration.ColliderConfiguration\n{\n\tpublic static class ColliderPosition\n\t{\n\t\tprivate const string offsetPropertyName = \"center\";\n\t\tprivate static readonly BindingFlags defaultBindingFlags =\n\t\t\tBindingFlags.Instance |\n\t\t\tBindingFlags.Public |\n\t\t\tBindingFlags.SetProperty |\n\t\t\tBindingFlags.GetProperty;\n\n\t\t//extension method letting a sphere collider report its worldspace position\n\t\t//considers its offset property if it has one\n\t\tpublic static Vector3 EMGetColliderAbsolutePosition (this Collider collider)\n\t\t{\n\t\t\t//return collider's transform position adding offset value if available\n\t\t\treturn collider.transform.position + collider.EMGetColliderTransformOffset();\n\t\t}\n\n\t\tpublic static Vector3 EMGetColliderTransformOffset (this Collider collider)\n\t\t{\n\t\t\tPropertyInfo offsetProperty = collider\n\t\t\t\t.GetType()\t\t\t\t\t//fetch received collider's type signature\n\t\t\t\t.GetProperty(offsetPropertyName, defaultBindingFlags);\t//try to fetch offset value property\n\n\t\t\treturn (offsetProperty != null\n\t\t\t\t\t\t? (Vector3) offsetProperty.GetValue(collider)\n\t\t\t\t\t\t: Vector3.zero);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7766990065574646,
"alphanum_fraction": 0.7778192758560181,
"avg_line_length": 35.202701568603516,
"blob_id": "396b1d8fe290e77efd366e86e91a989f04c7d16d",
"content_id": "5cbd209ddc12eb4499897eeb4931457b4442d13c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2680,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 74,
"path": "/Assets/Scripts/ASSPhysics/PulseSystem/PulsePropagators/ChainElementPulsePropagatorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
    "text": "using System.Collections;\nusing UnityEngine;\n\nusing ASSPhysics.ChainSystem;\nusing IPulseData = ASSPhysics.PulseSystem.PulseData.IPulseData;\n\nusing EPulseDirection = ASSPhysics.PulseSystem.EPulseDirection;\n\nnamespace ASSPhysics.PulseSystem.PulsePropagators\n{\n\tpublic abstract class ChainElementPulsePropagatorBase : ChainElementAutoFindParent, IPulsePropagator\n\t{\n\t//IPulsePropagator implementation\n\t\t//propagates the pulse towards the parent or children elements depending on pulse propagation direction\n\t\tpublic void Pulse (IPulseData pulseData)\n\t\t{\n\t\t\t//Debug.Log(\"ChainElementPulsePropagatorBase.Pulse()\");\n\t\t\t//process the pulse, then propagate\n\t\t\tDoPulse(pulseData);\n\n\t\t\t//then transmit the pulse in the desired direction\n\t\t\tif (pulseData.propagationDirection == EPulseDirection.towardsParent)\n\t\t\t{\n\t\t\t\tDelayedPropagation(pulseData, chainParent);\n\t\t\t}\n\t\t\telse if (pulseData.propagationDirection == EPulseDirection.towardsChildren)\n\t\t\t{\n\t\t\t\t//foreach (IChainElement chainChild in chainChildren)\n\t\t\t\tfor (int i = 0, iLimit = childCount; i < iLimit; i++)\n\t\t\t\t{\n\t\t\t\t\tDelayedPropagation(pulseData, GetChild(i));\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tDebug.LogWarning(\"ChainElementPulsePropagatorBase.Pulse(): propagation direction is 0 - can't propagate\");\n\t\t\t}\n\t\t}\n\t//ENDOF IPulsePropagator implementation\n\n\t//private methods\n\t\t//propagate the pulse\n\t\tprivate void DelayedPropagation (IPulseData pulseData, IChainElement propagationTarget)\n\t\t{ DelayedPropagation(pulseData, (propagationTarget as IPulsePropagator)); }\n\t\tprivate void DelayedPropagation (IPulseData pulseData, IPulsePropagator propagationTarget)\n\t\t{ StartCoroutine(DelayedPropagationCoroutine(pulseData, propagationTarget)); }\n\t\tprivate IEnumerator DelayedPropagationCoroutine (IPulseData pulseData, IPulsePropagator propagationTarget)\n\t\t{\n\t\t\tif (propagationTarget == null) yield break;\n\n\t\t\t//wait for the propagation delay modified by the pulse's delay modifier\n\t\t\tyield return new WaitForSeconds(\n\t\t\t\tGetPropagationDelay(propagationTarget) * pulseData.propagationDelayModifier\n\t\t\t);\n\n\t\t\t//then propagate a copy of the pulse updated for the distance to the target\n\t\t\tpropagationTarget.Pulse(\n\t\t\t\tpulseData.GetUpdatedPulse(\n\t\t\t\t\tdistance: Vector3.Distance(transform.position, propagationTarget.transform.position)\n\t\t\t\t)\n\t\t\t);\n\t\t}\n\t//ENDOF private methods\n\n\t//overridable methods\n\t\t//Override this method to execute the logic related to the pulse\n\t\t\t//dis method is da method fo' da voodoo\n\t\tprotected abstract void DoPulse (IPulseData pulseData);\n\n\t\t//get delay in seconds before propagation to target effectuates\n\t\tprotected abstract float GetPropagationDelay (IPulsePropagator target);\n\t//ENDOF overridable methods\n\t}\n}"
},
{
"alpha_fraction": 0.7843478322029114,
"alphanum_fraction": 0.7852174043655396,
"avg_line_length": 29.289474487304688,
"blob_id": "a4afe2fbaeefdb03894c5f2292b1835d3943c308",
"content_id": "e8f14fd511799acf615c756c07ce6f571a8150ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1150,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 38,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/OnRigidbodyAngularVelocity/KickerOnRigidbodyAngularVelocityBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\n//Kicker that triggers when angular velocity is above or below\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class KickerOnRigidbodyAngularVelocityBase : KickerOnConditionHeldOnFixedUpdateBase\n\t{\n\t//serialized properties\n\t\t[SerializeField]\n\t\tpublic Rigidbody targetRigidbody; //will automatically get this gameobject's rigidbody on awake if none given\n\n\t\t[SerializeField]\n\t\tprivate float cutoffVelocity = 0f;\n\t\t[SerializeField]\n\t\tprivate bool triggerBelowCutoff = true;\n\t//ENDOF serialized properties\n\n\t//MonoBehaviour Lifecycle\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tif (!targetRigidbody) targetRigidbody = gameObject.GetComponent<Rigidbody>();\n\t\t}\n\t//ENDOF MonoBehaviour Lifecycle\n\n\t//abstract method implementation\n\n\t\t//condition evaluates to true if velocity is not at cutoff nor above/below cutoff\n\t\tprotected override bool CheckCondition ()\n\t\t{\n\t\t\tfloat velocityMagnitude = targetRigidbody.angularVelocity.magnitude;\n\t\t\treturn\n\t\t\t\t(triggerBelowCutoff && velocityMagnitude < cutoffVelocity) ||\n\t\t\t\t(!triggerBelowCutoff && velocityMagnitude > cutoffVelocity);\n\t\t}\n\t//ENDOF abstract method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7556325793266296,
"alphanum_fraction": 0.7556325793266296,
"avg_line_length": 22.571428298950195,
"blob_id": "af904b8ea6809fa39a61048dc329ded45ebe68b3",
"content_id": "4153ee81e0878606eb8ac8bbd0c00d73f5f9a30a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1154,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 49,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Base/EditorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing UnityEditor;\n\nusing IInspectorBase = ASSpriteRigging.Inspectors.IInspectorBase;\n\nnamespace ASSpriteRigging.Editors\n{\n\tpublic abstract class EditorBase<TInspector>\n\t:\n\t\tEditor,\n\t\tIEditorBase\n\t\twhere TInspector : UnityEngine.Object, IInspectorBase\n\t{\n\t//inheritable properties\n\t\tprotected TInspector targetInspector { get { return (TInspector) target; }}\n\t//ENDOF inheritable properties\n\n\t//inheritable methods\n\t\t//draws a button that performs an action if pressed\n\t\tprotected delegate void EditorActionDelegate();\n\t\tprotected virtual void DoButton (string buttonText, EditorActionDelegate action)\n\t\t{\n\t\t\tif (GUILayout.Button(buttonText))\n\t\t\t{\n\t\t\t\taction();\n\t\t\t}\n\t\t}\n\t//ENDOF inheritable methods\n\n\t//Setup GUI layout\n\t\tpublic override void OnInspectorGUI ()\n\t\t{\n\t\t\tbase.OnInspectorGUI();\n\n\t\t\t//InspectorInitialization();\n\t\t\tDoButtons();\n\t\t}\n\t//ENDOF Setup GUI layout\n\n\t//IEditorBase declaration\n\t\tpublic abstract void DoSetup ();\n\t//ENDOF IEditorBase declaration\n\n\t//overridable methods and properties\n\t\t//protected abstract void InspectorInitialization ();\n\t\tprotected abstract void DoButtons ();\n\t//ENDOF overridable methods\n\t}\n}"
},
{
"alpha_fraction": 0.6167048215866089,
"alphanum_fraction": 0.6167048215866089,
"avg_line_length": 28.133333206176758,
"blob_id": "b8d0a7ed4f51df277c166b5b0b7b71f7003f1a19",
"content_id": "d272e2dbcc71cbd0f880f1c8e30b8d2c6548ce90",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 876,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 30,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Interface/ViewportRectReplicator.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nusing static ASSPhysics.CameraSystem.RectTransformExtensions;\n\nnamespace ASSPhysics.MiscellaneousComponents\n{\n\tpublic class ViewportRectReplicator : MonoBehaviour\n\t{\n\t//private fields\n\t\tprivate RectTransform rectTransform;\n\t//ENDOF private fields\n\n\t//MonoBehaviour lifecycle\n\t\tpublic void Awake ()\n\t\t{\n\t\t\trectTransform = (RectTransform) transform;\n\t\t/////////////////////////////////////////////////////////////////////////////////////////////\n\t\t//[TO-DO] Maybe it's convenient initializing the transform replicating EVERY target property?\n\t\t/////////////////////////////////////////////////////////////////////////////////////////////\n\t\t}\n\n\t\tpublic void LateUpdate()\n\t\t{\n\t\t\trectTransform.EMSetRect(ControllerCache.viewportController.rect);\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\t}\n}\n"
},
{
"alpha_fraction": 0.7099767923355103,
"alphanum_fraction": 0.72099769115448,
"avg_line_length": 39.093021392822266,
"blob_id": "939e108b270bda774723fee647f0587ad0eca78f",
"content_id": "02eadee5363d769414f692ffb32c810ebcd58e51",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3448,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 86,
"path": "/Assets/Scripts/ASSistant/UnityComponentConfigurersLegacy.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
    "text": "using UnityEngine;\n\nnamespace ASSistant.ComponentConfigurers\n{\n\t//applies right-hand properties to left-hand objects\n\tpublic static class UnityComponentConfigurersLegacy\n\t{\n\t//Rigidbody2D components configuration\n\t\tpublic static Rigidbody2D ApplySettings (this Rigidbody2D _this, Rigidbody2D sample)\n\t\t{\n\t\t\t_this.bodyType = \t\t\t\tsample.bodyType;\t\t\t\t//body type\n\t\t\t_this.sharedMaterial = \t\t\tsample.sharedMaterial;\t\t\t//material\n\t\t\t_this.simulated = \t\t\t\tsample.simulated;\t\t\t\t//simulated\n\t\t\t_this.mass = \t\t\t\t\tsample.mass;\t\t\t\t\t//mass\n\t\t\t_this.useAutoMass = \t\t\tsample.useAutoMass;\t\t\t\t//use auto mass\n\t\t\t_this.drag = \t\t\t\t\tsample.drag;\t\t\t\t\t//linear drag\n\t\t\t_this.angularDrag = \t\t\tsample.angularDrag;\t\t\t\t//angular drag\n\t\t\t_this.gravityScale = \t\t\tsample.gravityScale;\t\t\t//gravity scale\n\t\t\t_this.collisionDetectionMode =\tsample.collisionDetectionMode;\t//collision detection\n\t\t\t_this.sleepMode = \t\t\t\tsample.sleepMode;\t\t\t\t//sleeping mode\n\t\t\t_this.interpolation = \t\t\tsample.interpolation;\t\t\t//interpolate\n\t\t\t_this.constraints = \t\t\tsample.constraints;\t\t\t\t//constraints (freeze position X, Y, freeze rotation Z)\n\n\t\t\treturn _this;\n\t\t}\n\t//ENDOF Rigidbody2D components configuration\n\n\t//Joint2D components configuration\n\t\t//SpringJoint2D : AnchoredJoint2D\n\t\tpublic static SpringJoint2D ApplySettings (this SpringJoint2D _this, SpringJoint2D sample, bool alterConnectedBody = false)\n\t\t{\n\t\t\t//properties\n\t\t\t_this.autoConfigureDistance =\tsample.autoConfigureDistance;\t\t\t//auto configure distance\n\t\t\t_this.dampingRatio = \t\t\tsample.dampingRatio;\t\t\t\t\t//Damping ratio\n\t\t\t_this.distance = \t\t\t\tsample.distance;\t\t\t\t\t\t//distance\n\t\t\t_this.frequency = \t\t\t\tsample.frequency;\t\t\t\t\t\t//frequency\n\n\t\t\t((AnchoredJoint2D) _this).ApplySettings((AnchoredJoint2D) sample);\n\t\t\treturn _this;\n\t\t}\n\n\t\t//AnchoredJoint2D : Joint2D\n\t\tpublic static AnchoredJoint2D ApplySettings (this AnchoredJoint2D _this, AnchoredJoint2D sample, bool alterConnectedBody = false)\n\t\t{\n\t\t\t_this.autoConfigureConnectedAnchor =\tsample.autoConfigureConnectedAnchor;\t//auto configure connection\n\t\t\t_this.anchor = \t\t\t\t\t\t\tsample.anchor;\t\t\t\t\t\t\t//anchor x y\n\t\t\t_this.connectedAnchor = \t\t\t\tsample.connectedAnchor;\t\t\t\t\t//connected anchor x y\n\n\t\t\t((Joint2D) _this).ApplySettings((Joint2D) sample);\n\t\t\treturn _this;\n\t\t}\n\n\t\t//Joint2D : Behaviour : Component\n\t\tpublic static Joint2D ApplySettings (this Joint2D _this, Joint2D sample, bool alterConnectedBody = false)\n\t\t{\n\t\t\tif (alterConnectedBody)\t{ _this.connectedBody = sample.connectedBody; }\t//connected rigidbody\n\t\t\t_this.enableCollision =\tsample.enableCollision;\t\t\t//enable collision\n\t\t\t_this.breakForce =\t\tsample.breakForce;\t\t\t\t//break force\n\n\t\t\treturn _this;\n\t\t}\n\t//ENDOF Joint2D components configuration\n\n\t//Collider2D components configuration\n\t\t//CircleCollider2D : Collider2D\n\t\tpublic static CircleCollider2D ApplySettings (this CircleCollider2D _this, CircleCollider2D sample)\n\t\t{\n\t\t\t_this.radius = \t\t\tsample.radius;\t\t\t\t//object radius\n\t\t\t\n\t\t\t((Collider2D) _this).ApplySettings((Collider2D) sample);\n\t\t\treturn _this;\n\t\t}\n\n\t\t//Collider2D : Behaviour : Component\n\t\tpublic static Collider2D ApplySettings (this Collider2D _this, Collider2D sample)\n\t\t{\n\t\t\t_this.sharedMaterial = \tsample.sharedMaterial;\t\t\t//material\n\t\t\t_this.isTrigger = \t\tsample.isTrigger;\t\t\t\t//is trigger\n\t\t\t_this.usedByEffector = \tsample.usedByEffector;\t\t\t//used by an effector or not\n\t\t\t_this.offset = \t\t\tsample.offset;\t\t\t\t\t//mass\n\n\t\t\treturn _this;\n\t\t}\n\t//ENDOF Collider2D components configuration\n\t}\n}\n"
},
{
"alpha_fraction": 0.7603305578231812,
"alphanum_fraction": 0.7603305578231812,
"avg_line_length": 12.55555534362793,
"blob_id": "4a42c419a6f207ea2605ace1e8ba498890f7a346",
"content_id": "5409f9093c7fc06a58f92fbad1a06e6b1e76ec80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 123,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 9,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/BaseInspectors/IInspectorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic interface IInspectorBase\n\t{\n\t\tstring name {get;}\n\t}\n}"
},
{
"alpha_fraction": 0.8262711763381958,
"alphanum_fraction": 0.8262711763381958,
"avg_line_length": 25.33333396911621,
"blob_id": "17f3d79fea2663fb77b6f99b07bbd94d17969fb4",
"content_id": "d8074ddda7aaadbaa7a3ae7c8a3c448d3393d984",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 238,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 9,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Weavers/IWeaverInspector.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ConfigurableJoint = UnityEngine.ConfigurableJoint;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic interface IWeaverInspector : IArmableInspector\n\t{\n\t\tConfigurableJoint defaultWeavingJoint {get;}\t//Sample joint configuration\n\t}\n}"
},
{
"alpha_fraction": 0.7238243222236633,
"alphanum_fraction": 0.7260425686836243,
"avg_line_length": 26.327272415161133,
"blob_id": "cdeee5e16c42b948ec6e25319f891252dc25ccf5",
"content_id": "0f690e39b249e927494bd7cd819475a82d2a3298",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4510,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 165,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Managers/ToolManagerMouseInput.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
    "text": "using System.Collections;\nusing System.Collections.Generic;\nusing System.Linq;\n\nusing UnityEngine;\n\nusing ControllerProvider = ASSPhysics.ControllerSystem.ControllerProvider;\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nusing ITool = ASSPhysics.HandSystem.Tools.ITool;\n\nusing EInputState = ASSPhysics.InputSystem.EInputState;\nusing IInputController = ASSPhysics.InputSystem.IInputController;\n\nnamespace ASSPhysics.HandSystem.Managers\n{\n\tpublic class ToolManagerMouseInput : ToolManagerBase\n\t{\n\n\t//private fields\n\t\t[SerializeField]\n\t\tprivate Transform toolSpawnPoint = null;\n\n\t\t[SerializeField]\n\t\tprivate ASSPhysics.HandSystem.Tools.ToolBase[] initialToolPrefabs = {};\n\n\t\tprivate List<ITool> toolList;\t//list of hands\n\t\tprivate int focusedToolIndex;\t\t//highligted and active hand\n\t\tprivate IInputController inputController;\t//input controller\n\t//ENDOF private fields\n\n\t//private properties\n\t\t//input is considered enabled if there are tools and scene controller allows it\n\t\tprivate bool inputEnabled\n\t\t{\n\t\t\tget\n\t\t\t{\n\t\t\t\treturn\n\t\t\t\t\ttoolList != null &&\n\t\t\t\t\ttoolList.Any() &&\n\t\t\t\t\tControllerCache.sceneController.inputEnabled;\n\t\t\t}\n\t\t}\n\t//ENDOF private properties\n\n\t//IToolManager implementation\n\t\tpublic override ITool[] tools { get { return toolList.ToArray(); }}\n\t\tpublic override ITool activeTool { get { return toolList[focusedToolIndex]; }}\n\t//ENDOF IToolManager implementation\n\n\t//MonoBehaviour Lifecycle implementation\n\t\t//create a mouse input controller for oneself on start, and register with the central controller\n\t\tpublic override void Awake ()\n\t\t{\n\t\t\tbase.Awake();\n\t\t\tinputController = new ASSPhysics.InputSystem.MouseInputController();\n\t\t\tControllerProvider.RegisterController<IInputController>(inputController);\n\t\t}\n\n\t\tpublic void Start ()\n\t\t{\n\t\t\tStartCoroutine(InitializeToolsAfterInputEnabled());\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tif (inputEnabled)\n\t\t\t{\n\t\t\t\tToolCycleCheck();\n\t\t\t\tUpdateFocusedToolPosition();\n\t\t\t\tUpdateFocusedToolInput();\n\t\t\t}\n\t\t}\n\t//ENDOF MonoBehaviour Lifecycle implementation\n\n\t//private method implementation\n\t\t//initializes predefined tools as soon as input is enabled by sceneController\n\t\tprivate IEnumerator InitializeToolsAfterInputEnabled()\n\t\t{\n\t\t\ttoolList = new List<ITool>();\n\n\t\t\t//wait until sceneController allows input before initializing tool list\n\t\t\twhile (!ControllerCache.sceneController.inputEnabled)\n\t\t\t{ yield return null; }\n\n\t\t\t//create initial list of tools in the scene\n\t\t\tforeach (ITool toolPrefab in initialToolPrefabs)\n\t\t\t{\n\t\t\t\tCreateTool(toolPrefab);\n\t\t\t}\n\t\t\tSetFocused(0);\n\t\t}\n\n\t\tprivate void CreateTool (ITool toolPrefab)\n\t\t{\n\t\t\ttoolList.Add(\n\t\t\t\tInstantiateAsTool(\n\t\t\t\t\tprefabTool: toolPrefab,\n\t\t\t\t\tposition: (toolSpawnPoint != null)\n\t\t\t\t\t\t? (Vector2) toolSpawnPoint.position\n\t\t\t\t\t\t: (Vector2?) null\n\t\t\t\t)\n\t\t\t);\n\t\t}\n\n\t\t//set tool under target index as focused\n\t\tprivate void SetFocused (int target)\n\t\t{\n\t\t\tif (!toolList.Any())\n\t\t\t{\n\t\t\t\tDebug.LogWarning(\"Trying to focus on an empty hand list\");\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t//if cycling out of the list clamp back into range\n\t\t\tif (target >= toolList.Count || target < 0)\n\t\t\t{\t//multiply by its own sign to ensure always positive then get the rest of length\n\t\t\t\ttarget = ((target * (int) Mathf.Sign((float) target))) % toolList.Count;\n\t\t\t}\n\n\t\t\tfocusedToolIndex = target;\n\n\t\t\t//send every known tool an update on its status - true if they're focused false otherwise\n\t\t\tfor (int i = 0, iLimit = toolList.Count; i < iLimit; i++)\n\t\t\t{\n\t\t\t\ttoolList[i].focused = (i == focusedToolIndex);\n\t\t\t}\n\t\t}\n\n\t\t//checks for input corresponding to hand swap, and performs the action if necessa\n\t\tprivate void ToolCycleCheck ()\n\t\t{\n\t\t\t//[TO-DO] inputController should answer wether or not to swap active hands, not be directly requested the button input\n\t\t\tif (inputController.GetButtonDown(1))\n\t\t\t{\n\t\t\t\tSetFocused(focusedToolIndex+1);\n\t\t\t}\n\t\t}\n\n\t\tprivate void UpdateFocusedToolPosition ()\n\t\t{\n\t\t\tactiveTool.Move(inputController.scaledDelta);\n\t\t}\n\n\t\t//checks for input corresponding to main action, sends the correct state to the tool\n\t\tprivate void UpdateFocusedToolInput ()\n\t\t{\n\t\t\tbool button = Input.GetMouseButton(0);\n\t\t\tbool buttonFirstDown = Input.GetMouseButtonDown(0);\n\t\t\tif (button && !buttonFirstDown)\n\t\t\t{\n\t\t\t\tactiveTool.MainInput(EInputState.Held);\n\t\t\t}\n\t\t\telse if (buttonFirstDown)\n\t\t\t{\n\t\t\t\tactiveTool.MainInput(EInputState.Started);\n\t\t\t}\n\t\t\telse if (Input.GetMouseButtonUp(0))\n\t\t\t{\n\t\t\t\tactiveTool.MainInput(EInputState.Ended);\n\t\t\t}\n\t\t}\n\t//ENDOF private method implementation\n\t}\n}"
},
{
"alpha_fraction": 0.7728613615036011,
"alphanum_fraction": 0.7728613615036011,
"avg_line_length": 22.674419403076172,
"blob_id": "0a3256ead5520142898cbc02ed57b2a3da414874",
"content_id": "0c5081a61df88b30d387ddf0f4525d23a9cbaf42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1017,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 43,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogControllers/DialogControllerSimpleAnimator.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing IEnumerator = System.Collections.IEnumerator;\nusing AnimationNames = ASSPhysics.Constants.AnimationNames;\n\nnamespace ASSPhysics.DialogSystem.DialogControllers\n{\n\tpublic class DialogControllerSimpleAnimator : DialogControllerBase\n\t{\n\t//serialized fields\n\t\t[SerializeField]\n\t\tprotected Animator animator;\n\t//ENDOF serialized fields\n\n\t//private fields and properties\n\t//ENDOF private fields and properties\n\n\t//inherited abstract method implementation\n\t\tprotected override void PerformClosure ()\n\t\t{\n\t\t\tanimator.SetTrigger(AnimationNames.Dialog.close);\n\t\t}\n\t//ENDOF inherited abstract method implementation\n\n\t//MonoBehaviour lifecycle implementation\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tif (animator == null) { animator = GetComponent<Animator>(); }\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\t//public methods\n\t\tpublic void ClosingAnimationFinishedCallback ()\n\t\t{\n\t\t\tForceDisable();\n\t\t\tInvokeFinishingCallback();\n\t\t}\n\t//ENDOF public methods\n\n\t//private methods\t\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.7635658979415894,
"alphanum_fraction": 0.7771317958831787,
"avg_line_length": 38.69230651855469,
"blob_id": "5302ccd56114f9acd8fc5e3c4642a876acc2e798",
"content_id": "02d7f20e55c0acb4900f758ae27390e6be68b9cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 516,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 13,
"path": "/Assets/Scripts/ASSPhysics/PulseSystem/PulseData/IPulseData.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ASSPhysics.PulseSystem;\n\nnamespace ASSPhysics.PulseSystem.PulseData\n{\n\tpublic interface IPulseData\n\t{\n\t\tfloat propagationDelayModifier {get;}\t//modifier for pulse propagation time delay. default 1\n\t\tEPulseDirection propagationDirection {get;}\t\t\t//propagation direction. 1 towards children, -1 towards parent, 0 default. default 0\n\t\tfloat computedValue {get;}\t\t\t\t//final pulse value\n\n\t\tIPulseData GetUpdatedPulse(float distance = 1.0f);\t//gets an updated copy of the pulse, as changed over target distance\n\t}\n}\n"
},
{
"alpha_fraction": 0.7157294750213623,
"alphanum_fraction": 0.7201347351074219,
"avg_line_length": 31.70339012145996,
"blob_id": "9b9338606b5a3efd20582da38a285d1f77c8595b",
"content_id": "0f887afe0bff356c29c5bceb421e313f9be59fdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3859,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 118,
"path": "/Assets/Scripts/ASSPhysics/PulseSystem/PulseData/PulseDataSignedWaving.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
    "text": "using Mathf = UnityEngine.Mathf;\n\nusing ASSPhysics.PulseSystem;\nusing ASSistant.ASSRandom;\n\nnamespace ASSPhysics.PulseSystem.PulseData\n{\n\tpublic class PulseDataSignedWaving : PulseDataSignedIntensityBase\n\t{\n\t//private fields and properties\n\t\tprivate float targetIntensity { get { return pulseMaximumIntensity * Mathf.Sign(pulseSign); } }\n\t\tprivate int pulseSign;\t//direction of the effects of the pulse. 1 positive -1 negative 0 default/random\n\t\tprivate RandomRangeInt segmentLengthRange;\t//random number of segments before sign change\n\t\tprivate float pulseMaximumIntensity;\t//maximum intensity of the pulse\n\t\tprivate float pulseChangeSpeed;\t\t\t//maximum pulse change per distance\n\t//ENDOF private fields and properties\n\n\t//constructor\n\t\tpublic PulseDataSignedWaving (\n\t\t\tRandomRangeInt __segmentLengthRange,\n\t\t\tfloat __pulseIntensity = 0.0f,\n\t\t\tfloat __propagationDelayModifier = 1.0f,\n\t\t\tEPulseDirection __propagationDirection = EPulseDirection.towardsChildren,\n\t\t\tint __pulseSign = 0,\n\t\t\tfloat __pulseMaximumIntensity = 1.0f,\n\t\t\tfloat __pulseChangeSpeed = 1.0f\n\t\t) : base (\n\t\t\t__pulseIntensity,\n\t\t\t__propagationDelayModifier,\n\t\t\t__propagationDirection\n\t\t) {\n\t\t\tsegmentLengthRange = __segmentLengthRange;\n\t\t\tpulseSign = (__pulseSign != 0)\n\t\t\t\t? __pulseSign\n\t\t\t\t: GenerateSegmentSign(); //if a pulseSign is not provided generate a random sign\n\t\t\tpulseMaximumIntensity = __pulseMaximumIntensity;\n\t\t\tpulseChangeSpeed = __pulseChangeSpeed;\n\t\t}\n\t//ENDOF constructor\n\n\t//abstract method implementation\n\t\t//gets an updated copy of the pulse, as changed over target distance\n\t\t\t//waving pulse value fades between possitive and negative maximum value according to sign\n\t\t\t//sign changes every few segments randomly\n\t\tpublic override IPulseData GetUpdatedPulse (float distance)\n\t\t{\n\t\t\tint newSign = GetUpdatedSign();\n\t\t\tfloat newIntensity = GetUpdatedIntensity(newSign, distance);\n\t\t\t\n\t\t\treturn (IPulseData) new PulseDataSignedWaving (\n\t\t\t\t__pulseIntensity: pulseIntensity,\n\t\t\t\t__propagationDelayModifier: propagationDelayModifier,\n\t\t\t\t__propagationDirection: propagationDirection,\n\t\t\t\t__pulseSign: newSign,\n\t\t\t\t__segmentLengthRange: segmentLengthRange,\n\t\t\t\t__pulseMaximumIntensity: pulseMaximumIntensity\n\t\t\t);\n\n\t\t}\n\t//ENDOF abstract method implementation\n\n\t//private methods\n\t\t//brign sign counter closer to zero\n\t\tprivate int GetUpdatedSign () { return GetUpdatedSign(pulseSign); }\n\t\tprivate int GetUpdatedSign (int sign)\n\t\t{\n\t\t\t//step sign towards zero\n\t\t\tint newSign = IntStepTowards(sign, 0);\n\n\t\t\treturn (newSign != 0)\t\t//if segment sign is depleted generate a new sign counter\n\t\t\t\t? newSign\n\t\t\t\t: GenerateSegmentSign();\n\t\t}\n\n\t\t//generates a segment sign counter using a random sign and segment length\n\t\tprivate int GenerateSegmentSign ()\n\t\t{\n\t\t\treturn segmentLengthRange.Generate() * RandomSign.Generate();\n\t\t}\n\n\t\t////////////////////////////////////////////////////////////////////////////////\n\t\t//[TO-DO]: move dis elsewhere\n\t\tprivate int IntStepTowards (int value, int target)\n\t\t{\n\t\t\treturn (value > target)\t\n\t\t\t\t\t\t\t? value - 1\t//if greater than target, decrement\n\t\t\t\t\t\t\t: (value < target)\n\t\t\t\t\t\t\t? value + 1 //if smaller than target, increment\n\t\t\t\t\t\t\t: target;\t//if on target return on target\n\t\t}\n\n\t\tprivate float GetUpdatedIntensity (int targetSign, float distance)\n\t\t{\n\t\t\treturn GetUpdatedIntensity(\n\t\t\t\tintensity: pulseIntensity,\n\t\t\t\tmaximumIntensity: pulseMaximumIntensity,\n\t\t\t\ttargetSign: targetSign,\n\t\t\t\tchangeSpeed: pulseChangeSpeed,\n\t\t\t\tdistance: distance\n\t\t\t);\n\t\t}\n\t\tprivate float GetUpdatedIntensity (\n\t\t\tfloat intensity,\n\t\t\tfloat maximumIntensity,\n\t\t\tint targetSign,\n\t\t\tfloat changeSpeed,\n\t\t\tfloat distance\n\t\t) {\n\t\t\treturn Mathf.MoveTowards(\n\t\t\t\tcurrent: intensity,\n\t\t\t\ttarget: maximumIntensity * Mathf.Sign(targetSign),\n\t\t\t\tmaxDelta: changeSpeed * distance\n\t\t\t); \n\t\t}\n\t\t//bring intensity closer to desired intensity\n\t//ENDOF private methods\n\t}\n}\n"
},
{
"alpha_fraction": 0.811475396156311,
"alphanum_fraction": 0.811475396156311,
"avg_line_length": 21.18181800842285,
"blob_id": "f6d1e2837e938fb4450a423d809909bd5b713cb8",
"content_id": "73684085e07d60012620fdc199a50edd6c5a5c80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 246,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 11,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Unsorted/SceneChanger.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\npublic class SceneChanger : MonoBehaviour\n{\n\tpublic void GoToScene (int targetScene)\n\t{\n\t\tControllerCache.sceneController.ChangeScene(targetScene);\n\t}\n}\n"
},
{
"alpha_fraction": 0.7486279010772705,
"alphanum_fraction": 0.7519209384918213,
"avg_line_length": 25.823530197143555,
"blob_id": "cdbeafb0c7fb6b7a3da372edb3dc6266532cae3d",
"content_id": "8e824e1f618a1f4250931b9da9c2a1b78fcc94e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 913,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 34,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Tools/ITool.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing EInputState = ASSPhysics.InputSystem.EInputState;\nusing IInteractor = ASSPhysics.InteractableSystem.IInteractor;\n\nusing IAction = ASSPhysics.HandSystem.Actions.IAction;\n\nnamespace ASSPhysics.HandSystem.Tools\n{\n\tpublic interface ITool\n\t{\n\t\tTransform transform {get;}\n\t\tGameObject gameObject {get;}\n\t\tIInteractor interactor {get;}\n\t\tIAction activeAction {get;}\n\n\t\tVector3 position {get; set;}\t//position of the hand\n\t\tbool focused {get; set;}\t\t//wether the hand is on focus or not\n\t\tbool auto {get;}\n\n\t\t//move the hand\n\t\tvoid Move (Vector3 delta);\n\n\t\t//called with either an Started, Held, or Ended state. also sets position if provided.\n\t\tvoid MainInput (EInputState state);\n\t\tvoid MainInput (EInputState state, Vector3 targetPosition);\n\n\t\t//Called by the current action to remove itself\n\t\tvoid ActionEnded ();\n\n\t\t//sets the animator\n\t\tvoid SetAnimationState (string triggerName);\n\t}\n}"
},
{
"alpha_fraction": 0.7542504072189331,
"alphanum_fraction": 0.755795955657959,
"avg_line_length": 24.920000076293945,
"blob_id": "a2b35d686dae07a06c150b8bbbe048c33845b767",
"content_id": "8fc71f7702827e04c7782fe6848299a1bfd1c365",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 649,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 25,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Riggers/IRiggerInspector.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing SpriteSkin = UnityEngine.U2D.Animation.SpriteSkin;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic interface IRiggerInspector : IArmableInspector\n\t{\n\t\t//wether or not purging bone transform tree removes its rigidbodies too\n\t\tbool purgeKeepsRigidbodies { get; }\n\n\t\t//references to fundamental components\n\t\tSprite sprite { get; }\n\t\tSpriteSkin spriteSkin { get; }\n\n\t\t//information on transform layer & tag\n\t\tint defaultLayer { get; }\n\t\tstring defaultTag { get; }\n\n\t\t//Desired rigidbody configuration\n\t\tRigidbody defaultRigidbody { get; }\n\n\t\t//Collider to include with each bone\n\t\tSphereCollider defaultCollider { get; }\n\t}\n}"
},
{
"alpha_fraction": 0.8415841460227966,
"alphanum_fraction": 0.8465346693992615,
"avg_line_length": 28,
"blob_id": "7c346d07e9ad389b7af61ab92bba693673ba0ffe",
"content_id": "c1d8db89d9c4d14d86b03e6912c92da5731d60e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 202,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 7,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Riggers/TailRiggerInspectorNoController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Inspectors\n{\n\t[UnityEngine.RequireComponent(typeof(UnityEngine.U2D.Animation.SpriteSkin))]\n\tpublic class TailRiggerInspectorNoController : TailRiggerInspectorJointChain\n\t{\n\t}\n}"
},
{
"alpha_fraction": 0.7686453461647034,
"alphanum_fraction": 0.7739726305007935,
"avg_line_length": 33.578948974609375,
"blob_id": "5f6c632fc84718b8fd1a0229f9c41b8e38f25c9d",
"content_id": "02b1861a1cb764977bdcb68c6d06bcda48fd5cce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1314,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 38,
"path": "/Assets/Scripts/ASSPhysics/PulseSystem/PulseData/PulseDataSignedIntensityBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ASSPhysics.PulseSystem;\n\nnamespace ASSPhysics.PulseSystem.PulseData\n{\n\tpublic abstract class PulseDataSignedIntensityBase : IPulseData\n\t{\n\t//IPulseData implementation\n\t\t//modifier for pulse propagation time delay\n\t\tpublic float propagationDelayModifier { get; private set; }\n\t\t//propagation direction. 1 towards children, -1 towards parent, 0 default\n\t\tpublic EPulseDirection propagationDirection { get; private set; }\n\t\t\n\t\t//final pulse value\n\t\t//basic signed intensity pulse returns intensity appl\n\t\tpublic virtual float computedValue { get { return pulseIntensity; }}\n\n\t\t//gets an updated copy of the pulse, as changed over target distance\n\t\tpublic abstract IPulseData GetUpdatedPulse (float distance);\n\t//ENDOF IPulseData implementation\n\n\t//private fields and properties\n\t\t//intensity for the effects of the pulse\n\t\tprotected float pulseIntensity; // { get; private set; }\n\t//ENDOF private fields and properties\n\n\t//constructor\n\t\tpublic PulseDataSignedIntensityBase (\n\t\t\tfloat __pulseIntensity = 1.0f,\n\t\t\tfloat __propagationDelayModifier = 1.0f,\n\t\t\tEPulseDirection __propagationDirection = EPulseDirection.towardsChildren\n\t\t) {\n\t\t\tpulseIntensity = __pulseIntensity;\n\t\t\tpropagationDelayModifier = __propagationDelayModifier;\n\t\t\tpropagationDirection = __propagationDirection;\n\t\t}\n\t//ENDOF constructor\n\t}\n}\n"
},
{
"alpha_fraction": 0.7586206793785095,
"alphanum_fraction": 0.7586206793785095,
"avg_line_length": 15.714285850524902,
"blob_id": "d0d64c55f26986cf6b4b0cb6489c5a4f7bb52913",
"content_id": "d70892ec4a96b3a4af2126173a1b6de2d4d39f7c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 118,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 7,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Base/IEditorPurgeableBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Editors\n{\n\tpublic interface IEditorPurgeableBase : IEditorBase\n\t{\n\t\tvoid DoPurge ();\n\t}\n}"
},
{
"alpha_fraction": 0.80008864402771,
"alphanum_fraction": 0.80008864402771,
"avg_line_length": 40.0363655090332,
"blob_id": "3f3b65e1d166ff90a18bf1392b0b880c856c77ba",
"content_id": "5bbbe203704b23fe6ff5a3d34d15fb554194f0fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2258,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 55,
"path": "/Assets/Scripts/ASSPhysics/SettingSystem/ActionSettings.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine; //Resources\n\nusing ASSPhysics.SettingSystem.ActionSettingTypes;\n\nnamespace ASSPhysics.SettingSystem\n{\n\t//public definition of settings objects\n\tpublic static class ActionSettings\n\t{\n\t\tprivate const string surfaceGrabSettingsPath = \"SurfaceGrabSettings\";\n\t\tprivate static ActionSettingCollisionRadius _surfaceGrabSettings;\n\t\tpublic static ActionSettingCollisionRadius surfaceGrabSettings\n\t\t{ get {\n\t\t\treturn (_surfaceGrabSettings != null)\t//if cache is null, load from UnityEngine.Resources\n\t\t\t\t? _surfaceGrabSettings\n\t\t\t\t: _surfaceGrabSettings = Resources.Load<ActionSettingCollisionRadius>(surfaceGrabSettingsPath);\n\t\t}}\n\n\t\tprivate const string tailGrabSettingsPath = \"TailGrabSettings\";\n\t\tprivate static ActionSettingCollisionRadius _tailGrabSettings;\n\t\tpublic static ActionSettingCollisionRadius tailGrabSettings\n\t\t{ get {\n\t\t\treturn (_tailGrabSettings != null)\n\t\t\t\t? _tailGrabSettings\n\t\t\t\t: _tailGrabSettings = Resources.Load<ActionSettingCollisionRadius>(tailGrabSettingsPath);\n\t\t}}\n\n\t\tprivate const string grabJointSettingsPath = \"GrabJointSettings\";\n\t\tprivate static ActionSettingJoint _grabJointSettings;\n\t\tpublic static ActionSettingJoint grabJointSettings\n\t\t{ get {\n\t\t\treturn (_grabJointSettings != null)\n\t\t\t\t? _grabJointSettings\n\t\t\t\t: _grabJointSettings = Resources.Load<ActionSettingJoint>(grabJointSettingsPath);\n\t\t}}\n\n\t\tprivate const string interactorAreaCheckSettingsPath = \"InteractorAreaCheckSettings\";\n\t\tprivate static ActionSettingCollisionRadius _interactorCheckSettings;\n\t\tpublic static ActionSettingCollisionRadius interactorCheckSettings\n\t\t{ get {\n\t\t\treturn (_interactorCheckSettings != null)\t//if cache is null, load from UnityEngine.Resources\n\t\t\t\t? 
_interactorCheckSettings\n\t\t\t\t: _interactorCheckSettings = Resources.Load<ActionSettingCollisionRadius>(interactorAreaCheckSettingsPath);\n\t\t}}\n\n\t\tprivate const string slapAreaSettingsPath = \"SlapAreaSettings\";\n\t\tprivate static ActionSettingCollisionRadius _slapAreaSettings;\n\t\tpublic static ActionSettingCollisionRadius slapAreaSettings\n\t\t{ get {\n\t\t\treturn (_slapAreaSettings != null)\t//if cache is null, load from UnityEngine.Resources\n\t\t\t\t? _slapAreaSettings\n\t\t\t\t: _slapAreaSettings = Resources.Load<ActionSettingCollisionRadius>(slapAreaSettingsPath);\n\t\t}}\n\t}\n}"
},
{
"alpha_fraction": 0.7583783864974976,
"alphanum_fraction": 0.7632432579994202,
"avg_line_length": 27.4769229888916,
"blob_id": "bc946fc94246fc0100e819d9102cd650cc504073",
"content_id": "b00fb7f380467c825dee46c9a8c0389361308bce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1852,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 65,
"path": "/Assets/Scripts/ASSPhysics/TailSystem/TailControllerPeriodicWaving.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing PulseDataSignedWaving = ASSPhysics.PulseSystem.PulseData.PulseDataSignedWaving;\nusing EPulseDirection = ASSPhysics.PulseSystem.EPulseDirection;\n\nusing ASSistant.ASSRandom;\n\nnamespace ASSPhysics.TailSystem\n{\n\tpublic class TailControllerPeriodicWaving : MonoBehaviour\n\t{\n\t//serialized fields and properties\n\t\tpublic TailElementBase firstTailElement;\n\n\t\tpublic float baseRandomInterval = 2.0f;\n\n\t\tpublic RandomRangeInt segmentLengthRange;\n\t\tpublic RandomRangeFloat pulseIntensityRange;\n\n\t\tpublic float pulseChangeSpeed = 1.0f;\n\t\tpublic float pulseMaximumIntensity = 1.0f;\n\t\tpublic float propagationDelayModifier = 1.0f;\n\t//ENDOF serialized fields and properties\t\n\n\t//private fields and properties\n\t//ENDOF private fields and properties\n\n\t//MonoBehaviour lifecycle implementation\n\t\tpublic void Awake ()\n\t\t{\n\t\t\tif (firstTailElement == null) { firstTailElement = GetComponentInChildren<TailElementBase>(); }\n\t\t}\n\n\t\tpublic void Update ()\n\t\t{\n\t\t\tif (RandomTailMovementChance())\n\t\t\t{\n\t\t\t\tWaveTail();\n\t\t\t}\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle implementation\n\n\t//private methods\n\t\tprivate bool RandomTailMovementChance ()\n\t\t{\n\t\t\treturn Random.value <= (Time.deltaTime / baseRandomInterval);\n\t\t}\n\n\t\t//when initiating a waving movement, create a new pulse and start its propagation\n\t\tprivate void WaveTail ()\n\t\t{\n\t\t\t//Debug.Log(\"Waving\");\n\t\t\tfirstTailElement.Pulse(new PulseDataSignedWaving(\n\t\t\t\t__segmentLengthRange: segmentLengthRange, //RandomRangeInt\n\t\t\t\t__pulseIntensity: pulseIntensityRange.Generate(), //float\n\t\t\t\t__propagationDelayModifier: propagationDelayModifier, //float\n\t\t\t\t__propagationDirection: EPulseDirection.towardsChildren, //EPulseDirection\n\t\t\t\t__pulseSign: 0, //int\n\t\t\t\t__pulseMaximumIntensity: pulseMaximumIntensity, //float\n\t\t\t\t__pulseChangeSpeed: 
pulseChangeSpeed//float\n\t\t\t));\n\t\t}\n\t//ENDOF private methods\n\t}\n}"
},
{
"alpha_fraction": 0.79111647605896,
"alphanum_fraction": 0.7923169136047363,
"avg_line_length": 29.88888931274414,
"blob_id": "77b2ae013d9380452c8427ca390f2ba5c1ac5ff8",
"content_id": "bfd8db1923e4bd7a754813baa07dd2e1a37a9e32",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 833,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 27,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Riggers/TailRiggerInspectorJointChain.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\t[RequireComponent(typeof(UnityEngine.U2D.Animation.SpriteSkin))]\n\tpublic abstract class TailRiggerInspectorJointChain\n\t:\n\t\tSpriteSkinRiggerInspectorBase,\n\t\tIJointChainRiggerInspector\n\t{\n\t\t//list of root anchor targets\n\t\t[SerializeField]\n\t\tprivate Rigidbody[] _rootAnchorList = {};\n\t\tpublic Rigidbody[] rootAnchorList { get { return _rootAnchorList; }}\n\n\n\t\t//Sample chain spring configuration\n\t\t[SerializeField]\n\t\tprivate ConfigurableJoint _defaultChainJoint = null;\n\t\tpublic ConfigurableJoint defaultChainJoint { get { return _defaultChainJoint; }}\n\n\t\t//Sample root anchoring spring configuration\n\t\t[SerializeField]\n\t\tprivate ConfigurableJoint _defaultRootAnchorJoint = null;\n\t\tpublic ConfigurableJoint defaultRootAnchorJoint { get { return _defaultRootAnchorJoint; }}\n\t}\n}"
},
{
"alpha_fraction": 0.7544642686843872,
"alphanum_fraction": 0.7544642686843872,
"avg_line_length": 17.75,
"blob_id": "85aadbb05a45b4666a42b5616beec6edb9a25589",
"content_id": "a1e951e46f02515f440a8e0b2379aba06ba1f871",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 226,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 12,
"path": "/Assets/Scripts/ASSPhysics/DialogSystem/DialogChangers/DialogChangerOnStart.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.DialogSystem.DialogChangers\n{\n\tpublic class DialogChangerOnStart : DialogChangerBase\n\t{\n\t//MonoBehaviour lifecycle\n\t\tpublic void Start ()\n\t\t{\n\t\t\tChangeDialog();\n\t\t}\n\t//ENDOF MonoBehaviour lifecycle\n\t}\n}"
},
{
"alpha_fraction": 0.7673546075820923,
"alphanum_fraction": 0.7673546075820923,
"avg_line_length": 22.217391967773438,
"blob_id": "dbcf6c8d0787b58c08ed6c51e761960df7534cc7",
"content_id": "05f6918705e4d633cba4cc35245e48bdfea11aca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 535,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 23,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/OnOffFlicker/OnOffFlickerGameObjectAuto.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic class OnOffFlickerGameObjectAuto : OnOffFlickerAutoFireBase\n\t{\n\t//private fields and properties\n\t\t[SerializeField]\n\t\tprivate GameObject targetGameObject = null;\n\t//ENDOF private fields and properties\n\n\t//inherited abstract property implementation\n\t\tprotected override bool state\n\t\t{\n\t\t\tset\n\t\t\t{\n\t\t\t\tif (targetGameObject.activeSelf != value)\n\t\t\t\t{ targetGameObject.SetActive(value); }\n\t\t\t}\n\t\t}\n\t//ENDOF inherited abstract property implementation\n\t}\n}"
},
{
"alpha_fraction": 0.8101300001144409,
"alphanum_fraction": 0.8106717467308044,
"avg_line_length": 36.30303192138672,
"blob_id": "1fe0bfc61d8c525f530141d97c3631f4e905b7e0",
"content_id": "f0ee63a3498958a21a4b0b6b997a20f78a387d6e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3692,
"license_type": "no_license",
"max_line_length": 984,
"num_lines": 99,
"path": "/Assets/Editor/ASSpriteRigging/Editors/Riggers/RiggerEditorBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using System.Collections.Generic;\n\nusing UnityEngine;\nusing UnityEditor;\nusing SpriteSkin = UnityEngine.U2D.Animation.SpriteSkin;\n\nusing BoneHierarchy = ASSpriteRigging.BoneUtility.BoneHierarchy;\nusing IRiggerInspector = ASSpriteRigging.Inspectors.IRiggerInspector;\n\nnamespace ASSpriteRigging.Editors\n{\n\tpublic abstract class RiggerEditorBase<TInspector>\n\t:\n\t\tArmableEditorBase<TInspector>,\n\t\tIRiggerEditor\n\t\twhere TInspector : UnityEngine.Object, IRiggerInspector\n\t{\n\t//EditorBase implementation\n\t\tprotected override void DoButtons ()\n\t\t{\n\t\t\tDoButton(\"Full setup\", FullSetup);\n\t\t\t//DoButton(\"Rig bone components & configuration\", RigBones);\n\t\t\tDoButton(\"Disarm\", Disarm);\n\t\t\tDoButton(\"Purge components\", Purge);\n\t\t}\n\t//ENDOF EditorBase implementation\n\n\t//IRiggerEditor implementation\n\t\tpublic override void DoSetup ()\n\t\t{\n\t\t\tFullSetup();\n\t\t}\n\n\t\tpublic void DoPurge ()\n\t\t{\n\t\t\tPurge();\n\t\t}\n\t//ENDOF IRiggerEditor implementation\n\n\t//private methods\n\t\t//performs every step of the automated rigging process at once:\n\t\t//moves ten units of sperm forwards, then cast whale at next 2 tiles unless 
hitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitlerhitler\n\t\t//that means:\n\t\t\t//> create gameobjects for every bone invoking the corresponding SpriteSkin methods\n\t\t\t//> rig default components for every corresponding bone gameobject (abstract- each rigger performs its own rigging)\n\t\tprivate void FullSetup ()\n\t\t{\n\t\t\tDebug.Log(\"Initiating full setup of \" + targetInspector.name);\n\t\t\tBoneHierarchy.CreateBoneHierarchy(targetInspector);\n\t\t\tRigBones();\n\t\t\tDebug.Log(targetInspector.name + \" full setup finished\");\n\t\t}\n\n\t\tprivate void Purge ()\n\t\t{\n\t\t\tSpriteSkin targetSpriteSkin = targetInspector.spriteSkin;\n\n\t\t\tif (targetSpriteSkin == null)\n\t\t\t{\n\t\t\t\tDebug.Log(\"No spriteSkin found in transform \" + targetInspector.name);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tforeach (Transform boneTransform in targetSpriteSkin.boneTransforms)\n\t\t\t{\n\t\t\t\tPurgeBonePhysicsComponents(boneTransform, targetInspector.purgeKeepsRigidbodies);\n\t\t\t}\n\n\t\t\tDebug.Log(\"Purged components\");\n\t\t}\n\n\t\tprivate void PurgeBonePhysicsComponents (Transform boneTransform, bool keepRigidbodies = 
true)\n\t\t{\n\t\t\t//collect physics-related components\n\t\t\tList<Component> componentList = new List<Component>();\n\n\t\t\t\t//include colliders and joints\n\t\t\tcomponentList.AddRange(boneTransform.GetComponents<Collider>());\n\t\t\tcomponentList.AddRange(boneTransform.GetComponents<Joint>());\n\n\t\t\t//the remove every component included\n\t\t\tforeach (Component component in componentList)\n\t\t\t{\n\t\t\t\tObject.DestroyImmediate(component);\n\t\t\t}\n\n\t\t\t//last, attempt to remove rigidbody if necessary\n\t\t\tif (!keepRigidbodies)\n\t\t\t{\n\t\t\t\tObject.DestroyImmediate(boneTransform.GetComponent<Rigidbody>());\n\t\t\t}\n\t\t}\n\t//ENDOF private methods\n\n\t//overridable methods and properties\n\t\tprotected abstract void RigBones ();\n\t//ENDOF overridable methods\n\t}\n}"
},
{
"alpha_fraction": 0.7368420958518982,
"alphanum_fraction": 0.7368420958518982,
"avg_line_length": 12.399999618530273,
"blob_id": "ce62624ccc42b4f65d6a04d58585dd9e8de832cb",
"content_id": "528ad611fb0e471b43a99ec3520feaa119026393",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 133,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 10,
"path": "/Assets/Scripts/ASSPhysics/PulseSystem/EPulseDirection.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSPhysics.PulseSystem\n{\n\tpublic enum EPulseDirection\n\t{\n\t\t//auto,\n\t\ttowardsChildren,\n\t\ttowardsParent,\n\t\t//towardsBoth\n\t}\n}"
},
{
"alpha_fraction": 0.7952755689620972,
"alphanum_fraction": 0.7952755689620972,
"avg_line_length": 22.875,
"blob_id": "49c4c1c9cc13c30d287255ce5f0edc97556aecb1",
"content_id": "8024cb89571812b515111002550cbafcd8977430",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 383,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 16,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Riggers/IJointChainRiggerInspector.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSpriteRigging.Inspectors\n{\n\tpublic interface IJointChainRiggerInspector : IRiggerInspector\n\t{\n\t\t//list of root anchor targets\n\t\tRigidbody[] rootAnchorList {get;}\n\n\t\t//Sample chain spring configuration\n\t\tConfigurableJoint defaultChainJoint {get;}\n\n\t\t//Sample root anchoring spring configuration\n\t\tConfigurableJoint defaultRootAnchorJoint {get;}\n\t}\n}"
},
{
"alpha_fraction": 0.7233766317367554,
"alphanum_fraction": 0.7233766317367554,
"avg_line_length": 22.363636016845703,
"blob_id": "16c8aef6aa5cbf10a853c850378ded3a9463d100",
"content_id": "e2583cad714b30b6491b06f6d8639b208869ee98",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 772,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 33,
"path": "/Assets/Scripts/ASSPhysics/AudioSystem/AudioPlaybackProperties.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing RandomRangeFloat = ASSistant.ASSRandom.RandomRangeFloat;\n\nnamespace ASSPhysics.AudioSystem\n{\n\t//container object with settings for an audiosource playback\n\t[System.Serializable]\n\tpublic class AudioPlaybackProperties\n\t{\n\t\t[SerializeField]\n\t\tpublic AudioClip clip = null;\t\t\t\t//clip to play back\n\t\t[SerializeField]\n\t\tpublic RandomRangeFloat volume = null;\t\t//volume modifier for this clip\n\t\t[SerializeField]\n\t\tpublic bool loop = false;\t\t\t\t//should the clip loop\n\t\t[SerializeField]\n\t\tpublic RandomRangeFloat pitch = null;\t\t//pitch\n\n\t\tpublic AudioPlaybackProperties (\n\t\t\tAudioClip _clip,\n\t\t\tRandomRangeFloat _volume,\n\t\t\tbool _loop,\n\t\t\tRandomRangeFloat _pitch\n\t\t) {\n\t\t\tclip = _clip;\n\t\t\tvolume = _volume;\n\t\t\tloop = _loop;\n\t\t\tpitch = _pitch;\n\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7584269642829895,
"alphanum_fraction": 0.7688603401184082,
"avg_line_length": 31.8157901763916,
"blob_id": "4bf316d7b5dae9bc5b6d5303d66e7810f521c931",
"content_id": "7cc59d507aa9a14f6cb948608d67b4d252495771",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1246,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 38,
"path": "/Assets/Scripts/ASSistant/Comparers/ComparerSortCollidersByDistance.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing System.Collections.Generic;\t//IComparer<T>\n\nusing ASSistant.ComponentConfiguration.ColliderConfiguration;\n\nnamespace ASSistant.Comparers\n{\n\t//IComparer used to sort a list of items by its distance from a Vector3 worldspace position\n\tpublic class ComparerSortCollidersByDistance : IComparer<Collider>\n\t{\n\t\tprivate Vector3 originPosition;\n\n\n\t\t//Constructor: Takes position to use as center for upcoming comparison\n\t\tpublic ComparerSortCollidersByDistance (Vector3 __originPosition)\n\t\t{\n\t\t\toriginPosition = __originPosition;\n\t\t}\n\n\t\tpublic int Compare (Collider colliderA, Collider colliderB)\n\t\t{\n\t\t\tif (colliderA == colliderB) return 0;\n\n\t\t\tVector3 colliderAPos = colliderA.EMGetColliderAbsolutePosition();\n\t\t\tVector3 colliderBPos = colliderB.EMGetColliderAbsolutePosition();\n\n\t\t\t//if A is closer to origin, Difference sign is negative\n\t\t\tfloat distanceDifference =\n\t\t\t\t Vector3.Distance(originPosition, colliderAPos)\n\t\t\t\t- Vector3.Distance(originPosition, colliderBPos);\n\n\t\t\t//return comparison result\n\t\t\treturn (distanceDifference == 0)\n\t\t\t\t? 0\t//if both colliders are at the same distance return 0, they are equal\n\t\t\t\t: (int) Mathf.Sign(distanceDifference); //otherwise return 1 or -1 indicating closer collider\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7792022824287415,
"alphanum_fraction": 0.7849003076553345,
"avg_line_length": 26.038461685180664,
"blob_id": "9c83e111770031092c96655c772aba9770722017",
"content_id": "3573a827ccb123d5fd47c9dff376dd67d385c43e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 702,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 26,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Force/OnRigidbodySleep/KickerOnRigidbodySleepForce.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing Vector3Math = ASSistant.ASSMath.Vector3Math;\nusing RandomRangeFloat = ASSistant.ASSRandom.RandomRangeFloat;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic class KickerOnRigidbodySleepForce : KickerOnRigidbodySleepBase\n\t{\n\t//serialized properties\n\t\t[SerializeField]\n\t\tpublic RandomRangeFloat forceAngleRange;\n\t//ENDOF serialized properties \n\n\t//IKicker implementation\n\t\t//applies a random force at a random direction as the kick\n\t\tpublic override void Kick ()\n\t\t{\n\t\t\ttargetRigidbody.AddForce(\n\t\t\t\t\tforce: Vector3Math.AngleToVector3(forceAngleRange.Generate()) * randomForce.Generate(),\n\t\t\t\t\tmode: ForceMode.Force\n\t\t\t\t);\n\t\t}\n\t//ENDOF IKicker implementation\n\t}\n}"
},
{
"alpha_fraction": 0.8113207817077637,
"alphanum_fraction": 0.8113207817077637,
"avg_line_length": 14.285714149475098,
"blob_id": "43c30a4ec6893ac22f58da40f4432974eb13185f",
"content_id": "188800d338ba8dbb37335882419f568f06071804",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 106,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 7,
"path": "/Assets/Scripts/ASSpriteRigging/Inspectors/Propagators/IPropagatorInspector.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "namespace ASSpriteRigging.Inspectors\n{\n\tpublic interface IPropagatorInspector : IArmableInspector\n\t{\n\n\t}\n}"
},
{
"alpha_fraction": 0.7637271285057068,
"alphanum_fraction": 0.7670549154281616,
"avg_line_length": 36.578125,
"blob_id": "5a720e75009e524fcd65b4e3923f1154e237166e",
"content_id": "b535aea662255bd9e097bcd93c59defdb2560583",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 2404,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 64,
"path": "/Assets/Scripts/ASSPhysics/SettingSystem/ActionSettings/ActionSettingCollisionRadius.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\nusing System.Collections.Generic;\t//List<T>\n\nusing ControllerCache = ASSPhysics.ControllerSystem.ControllerCache;\n\nusing ASSistant.Comparers; //ComparerSortCollidersByDistance\n\nnamespace ASSPhysics.SettingSystem.ActionSettingTypes\n{\n\t[CreateAssetMenu(fileName = \"Data\", menuName = \"Action settings/Collision Radius settings\", order = 1)]\n\tpublic class ActionSettingCollisionRadius : ScriptableObject\n\t{\n\t\t[Tooltip(\"Layers to check collision against\")]\n\t\tpublic LayerMask layerMask;\n\n\t\t[Tooltip(\"Collision check radius\")]\n\t\tpublic float radius = 1.0f;\t\n\n\t\t[Tooltip(\"If true scale collision radius with screen size\")]\n\t\tpublic bool screenScaled = true;\n\n\t\t[Tooltip(\"Maximum number of items returned. closest first. If -1, all available results will be returned\")]\n\t\tpublic int maximumCollisions = -1; \n\n\t\t[Tooltip(\"Wether to include trigger colliders in the search\")]\n\t\tpublic bool detectTriggers = true;\n\n\t\t//calculates collision radius\n\t\tprivate float efectiveRadius { get {\n\t\t\t//If not screenScaled or viewport controller unavailable, return unscaled size\n\t\t\t//if scaled and available controller apply scale\n\t\t\treturn (ControllerCache.viewportController == null || !screenScaled)\n\t\t\t\t?\tradius\n\t\t\t\t:\tradius * ControllerCache.viewportController.size;\n\t\t}}\n\n\t\t//returns the result of the collision check defined in this collision radius around origin\n\t\tpublic Collider[] GetCollidersInRange (Transform originTransform)\n\t\t{ return GetCollidersInRange(originTransform.position); }\n\t\tpublic Collider[] GetCollidersInRange (Vector3 originPosition)\n\t\t{\n\t\t\t//fetch all the colliders in range\n\t\t\tList<Collider> colliderList = new List<Collider>(Physics.OverlapSphere(\n\t\t\t\tposition: originPosition,\n\t\t\t\tradius: efectiveRadius,\n\t\t\t\tlayerMask: layerMask,\n\t\t\t\tqueryTriggerInteraction: detectTriggers\n\t\t\t\t\t? 
QueryTriggerInteraction.Collide\t//if detectTriggers, collide with trigger colliders\n\t\t\t\t\t: QueryTriggerInteraction.Ignore\t//if !detectTriggers, ignore trigger colliders\n\t\t\t));\n\n\t\t\t//sort detected colliders by distance \n\t\t\tcolliderList.Sort(new ComparerSortCollidersByDistance(originPosition) as IComparer<Collider>);\n\n\t\t\t//return a maximum of N colliders according to maximumCollisions\n\t\t\tif (maximumCollisions < 0 || maximumCollisions > colliderList.Count)\n\t\t\t{\n\t\t\t\treturn colliderList.ToArray();\n\t\t\t}\n\n\t\t\treturn colliderList.GetRange(index: 0, count: maximumCollisions).ToArray();\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.752136766910553,
"alphanum_fraction": 0.752136766910553,
"avg_line_length": 36.64285659790039,
"blob_id": "d3d2b2f8d1a513441f2c62d1fed742b1e0aa5696",
"content_id": "72699ee1108305a59f8c225c7a5d14d2ca4d88fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1053,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 28,
"path": "/Assets/Scripts/ASSPhysics/HandSystem/Actions/IAction.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using ITool = ASSPhysics.HandSystem.Tools.ITool; //ITool\nusing EInputState = ASSPhysics.InputSystem.EInputState;\n\nnamespace ASSPhysics.HandSystem.Actions\n{\n\tpublic interface IAction \n\t{\n\t\t//returns true if this action is currently doing something, like maintaining a grab or repeating a slapping pattern\n\t\t//bool ongoing {get;}\n\t\t//wether action is to automatically repeat\n\t\tbool auto {get;}\n\t\t//initialize the action with a reference to the parent tool\n\t\t//will return true if action is valid and functional\n\t\tbool Initialize (ITool parentTool);\n\t\t//receive state of corresponding input medium\n\t\tvoid Input (EInputState state);\n\t\t//try to set in automatic state. Returns true on success\n\t\tbool Automate ();\n\t\t//update automatic action. To be called once per frame while action is automated. returns false if automation stops\n\t\tbool AutomationUpdate ();\n\t\t//stop automation\n\t\tvoid DeAutomate ();\n\t\t//clears and finishes the action\n\t\tvoid Clear ();\n\t\t//returns true if this action is valid for this hand (targets in range 'n such)\n\t\tbool IsValid ();\n\t}\n}"
},
{
"alpha_fraction": 0.8107255697250366,
"alphanum_fraction": 0.8107255697250366,
"avg_line_length": 21.714284896850586,
"blob_id": "c8d24239d37d2bdca46feb5e2db7a132310b97bf",
"content_id": "28540298d3d02ca0cdee45656b2dc2a9b5e05e9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 317,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 14,
"path": "/Assets/Scripts/ASSPhysics/MiscellaneousComponents/Kickers/Base/KickerOnConditionHeldOnFixedUpdateBase.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSPhysics.MiscellaneousComponents.Kickers\n{\n\tpublic abstract class KickerOnConditionHeldOnFixedUpdateBase : KickerOnConditionHeldBase\n\t{\n\t//MonoBehaviour Lifecycle\n\t\tpublic virtual void FixedUpdate()\n\t\t{\n\t\t\tUpdateCondition(Time.fixedDeltaTime);\n\t\t}\n\t//ENDOF MonoBehaviour Lifecycle\n\t}\n}"
},
{
"alpha_fraction": 0.7280898690223694,
"alphanum_fraction": 0.7393258213996887,
"avg_line_length": 20.238094329833984,
"blob_id": "5ee3d0f308d9a8a37fb9d5a2e547bd75f2733480",
"content_id": "a6f92daadf45a0a0fa4e24351251967c77f5f65d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 447,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 21,
"path": "/Assets/Scripts/ASSPhysics/InputSystem/IInputController.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using Vector3 = UnityEngine.Vector3;\n\nnamespace ASSPhysics.InputSystem\n{\n\tpublic interface IInputController : ASSPhysics.ControllerSystem.IController\n\t{\n\t\t//input movement delta for last frame\n\t\tVector3 delta { get; }\n\t\tVector3 screenSpaceDelta { get; }\n\n\t\t//scaled delta for configurable controls\n\t\tVector3 scaledDelta { get; }\n\n\t\t//gets zoom input\n\t\tfloat zoomDelta { get; }\n\n\t\t//gets button pressed\n\t\tbool GetButtonDown (int buttonID);\n\n\t}\n}"
},
{
"alpha_fraction": 0.7465069890022278,
"alphanum_fraction": 0.7485029697418213,
"avg_line_length": 22.904762268066406,
"blob_id": "3664b4d81adbed0f78629c0766f3a5ea1a99edf5",
"content_id": "0b3be30eb20844935fd06efb87fa1c2797cb8528",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 503,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 21,
"path": "/Assets/Scripts/ASSPhysics/CameraSystem/CameraExtensions.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nusing RectMath = ASSistant.ASSMath.RectMath;\n\nnamespace ASSPhysics.CameraSystem\n{\n\tpublic static class CameraExtensions\n\t{\n\t\t//generates a rect from target camera worldspace viewport\n\t\tpublic static Rect EMRectFromOrthographicCamera (this Camera camera)\n\t\t{\n\t\t\tfloat height = camera.orthographicSize * 2;\n\t\t\tfloat width = height * camera.aspect;\n\t\t\treturn RectMath.RectFromCenterAndSize(\n\t\t\t\tposition: camera.transform.position,\n\t\t\t\twidth: width,\n\t\t\t\theight: height\n\t\t\t);\n\t\t}\n\t}\n}"
},
{
"alpha_fraction": 0.7193548679351807,
"alphanum_fraction": 0.7419354915618896,
"avg_line_length": 19.733333587646484,
"blob_id": "185dc1eb6467e095722f80be407ca69e3bab2703",
"content_id": "ffb2c8370f7f9f8b712f4e688715726ac81a0dcc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 312,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 15,
"path": "/Assets/Scripts/ASSistant/ASSMath/Vector3Math.cs",
"repo_name": "HectorColasValtuena/NachikuAssventurePrototype",
"src_encoding": "UTF-8",
"text": "using UnityEngine;\n\nnamespace ASSistant.ASSMath\n{\n\t//methods for Rect manipulation\n\tpublic static class Vector3Math\n\t{\n\t//Vector3 creation methods\n\t\tpublic static Vector3 AngleToVector3 (float angle)\n\t\t{\n\t\t\treturn new Vector3 (Mathf.Sin(angle), Mathf.Cos(angle), 0);\n\t\t}\n\t//ENDOF Vector3 creation methods\n\t}\n}"
}
] | 162 |
Shraddhasaini/repo_3april | https://github.com/Shraddhasaini/repo_3april | 5a89ea4aca388b214d9df5fae86ad7f8f36a711d | cb898601ecc8d8586a8473f6429147b5edb4e932 | f46bdaaa2986afc1b54f5794011b1bf822723371 | refs/heads/master | 2020-03-08T03:13:55.144203 | 2018-04-03T10:54:19 | 2018-04-03T10:54:19 | 127,884,516 | 1 | 0 | null | 2018-04-03T09:30:54 | 2018-04-03T10:35:53 | 2018-04-03T10:54:20 | Python |
[
{
"alpha_fraction": 0.7727272510528564,
"alphanum_fraction": 0.7727272510528564,
"avg_line_length": 43,
"blob_id": "2739e6cdffebcb5dea5d6437a29d495624c58104",
"content_id": "10142588420cb8112ac4ca06f7d15abb1f6bd9ba",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 88,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 2,
"path": "/proj.py",
"repo_name": "Shraddhasaini/repo_3april",
"src_encoding": "UTF-8",
"text": "while True:\n print(\"Shraddha on master branch\") #add your branch name instead of master\n"
}
] | 1 |
Alanspants/waterbnb | https://github.com/Alanspants/waterbnb | ee4f33ee7426e72f17fa1ffcbeb2612196d19776 | 7684cf71fdb720944ee34312c753226c43281190 | f60520d656d91bbe902f4c330eb7f70fe69e6363 | refs/heads/master | 2022-06-17T01:00:51.850458 | 2020-05-08T06:50:12 | 2020-05-08T06:50:12 | 261,674,184 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6884273290634155,
"alphanum_fraction": 0.6884273290634155,
"avg_line_length": 27.08333396911621,
"blob_id": "00db414e126daca491c7e85f6fa0339add9ffa9b",
"content_id": "192d5f0cce69a56c22be4de3fb9515128b5f2b2b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 337,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 12,
"path": "/waterbnb/config.py",
"repo_name": "Alanspants/waterbnb",
"src_encoding": "UTF-8",
"text": "import os\n\nfrom flask import app\n\nconfig_path = os.path.abspath(os.path.dirname(__file__))\n\n\nclass Config:\n prefix = 'sqlite:///'\n SECRET_KEY = os.getenv('SECRET_KEY', 'secret string')\n SQLALCHEMY_DATABASE_URI = os.getenv('DATABASE_URL', prefix + os.path.join(config_path, 'data.db'))\n SQLALCHEMY_TRACK_MODIFICATIONS = False\n"
},
{
"alpha_fraction": 0.6993789076805115,
"alphanum_fraction": 0.6993789076805115,
"avg_line_length": 29.884614944458008,
"blob_id": "2385eda0f13cfd2e2c62b8a9816800ec14a67ac0",
"content_id": "79a2fd24c3f157b43aff3c2cccf6f7309272492f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 805,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 26,
"path": "/waterbnb/__init__.py",
"repo_name": "Alanspants/waterbnb",
"src_encoding": "UTF-8",
"text": "from flask import Flask\nfrom flask_sqlalchemy import SQLAlchemy\nfrom flask_migrate import Migrate\nfrom flask_login import LoginManager\nfrom waterbnb.config import Config\n\ndb = SQLAlchemy()\nmigrate = Migrate()\nlogin_manager = LoginManager()\nlogin_manager.login_view = 'login'\n\n# from waterbnb.route import index, register\nfrom waterbnb.route import index, register, login, logout\n\n\ndef create_app():\n app = Flask(__name__)\n app.config.from_object(Config)\n db.init_app(app)\n migrate.init_app(app, db)\n login_manager.init_app(app)\n app.add_url_rule('/', 'index', index)\n app.add_url_rule('/register', 'register', register, methods=['GET', 'POST'])\n app.add_url_rule('/login', 'login', login, methods=['GET', 'POST'])\n app.add_url_rule('/logout', 'logout', logout)\n return app\n\n\n"
},
{
"alpha_fraction": 0.6784313917160034,
"alphanum_fraction": 0.6849673390388489,
"avg_line_length": 28.423076629638672,
"blob_id": "58dbb6066ba79423495b0462f98ad882efc8d915",
"content_id": "d4c73491e6aabad2b6060b419938fa4023e307cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 765,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 26,
"path": "/waterbnb/models.py",
"repo_name": "Alanspants/waterbnb",
"src_encoding": "UTF-8",
"text": "from werkzeug.security import generate_password_hash, check_password_hash\nfrom waterbnb import db, login_manager\n\nfrom flask_login import UserMixin\n\n\nclass User(UserMixin, db.Model):\n id = db.Column(db.Integer, primary_key=True)\n username = db.Column(db.String(20))\n password_hash = db.Column(db.String(128))\n\n def set_password(self, password):\n self.password_hash = generate_password_hash(password)\n\n def validate_password(self, password):\n return check_password_hash(self.password_hash, password)\n\n def __repr__(self):\n return 'id={}, username = {}, password_hash={}'.format(\n self.id, self.username, self.password_hash\n )\n\n\n@login_manager.user_loader\ndef load_user(id):\n return User.query.get(int(id))\n"
},
{
"alpha_fraction": 0.6659482717514038,
"alphanum_fraction": 0.6688218116760254,
"avg_line_length": 29.933332443237305,
"blob_id": "d3a422601c59f16e42282a53a6bccfa67782116b",
"content_id": "b3bf17c7d6d35b2c4af24b5611906df2c356958b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1392,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 45,
"path": "/waterbnb/route.py",
"repo_name": "Alanspants/waterbnb",
"src_encoding": "UTF-8",
"text": "from flask import render_template, url_for, app, flash\nfrom werkzeug.utils import redirect\nfrom waterbnb.forms import RegisterForm, LoginForm\nfrom waterbnb import db\nfrom waterbnb import models\nfrom flask_login import login_user, current_user, logout_user, login_required\n\n\n\ndef index():\n return render_template('index.html')\n\n\ndef register():\n form = RegisterForm()\n if form.validate_on_submit():\n username = form.username.data\n password = form.password.data\n user = models.User(username=username)\n user.set_password(password)\n db.session.add(user)\n db.session.commit()\n return redirect(url_for('index'))\n return render_template('register.html', form=form)\n\n\ndef login():\n if current_user.is_authenticated:\n return redirect(url_for('index'))\n form = LoginForm(csrf_enabled=False)\n if form.validate_on_submit():\n thisUser = models.User.query.filter_by(username=form.username.data).first()\n if thisUser is None or (not thisUser.validate_password(form.password.data)):\n flash('Invalid username or password.')\n else:\n login_user(thisUser)\n return redirect(url_for('index'))\n return render_template('login.html', form=form)\n\ndef logout():\n logout_user()\n return redirect(url_for('index'))\n\n# if __name__ == \"__main__\":\n# app.run(debug=True, port=5000)\n"
},
{
"alpha_fraction": 0.680232584476471,
"alphanum_fraction": 0.682170569896698,
"avg_line_length": 26.157894134521484,
"blob_id": "1226729397d18e114ecabbaa96fd50d8e10151db",
"content_id": "770c1d4e25c4cf727df5312c8d75390f1acc65d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 516,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 19,
"path": "/README.md",
"repo_name": "Alanspants/waterbnb",
"src_encoding": "UTF-8",
"text": "#waterbnb\n\nrun server: python3 manager runserver -d\n\ndatabase initiate:\n python manager.py db init\n python manager.py db migrate -m \"create user\"\n python manager.py db upgrade\n\nuse shell to control database:\n you need to get into python shell firstly.\n\n >>> from waterbnb import db, create_app\n >>> app = create_app()\n >>> db\n <SQLAlchemy engine=None>\n >>> app.app_context().push()\n >>> db\n <SQLAlchemy engine=sqlite:////Users/Chz/JavaBackend/Python&Flask/waterbnb/waterbnb/data.db>\n"
}
] | 5 |
Ahm3dRadi/Django-URL-Shortener | https://github.com/Ahm3dRadi/Django-URL-Shortener | 31eda984b8d36cd6baccbd70e0a8f44bc1b40cb2 | b790eb4e61b33b0c373ee06f082b45360dd41225 | 2a97e3473344b85a7dfc873947adfa3eed8e498b | refs/heads/master | 2016-09-12T06:32:30.634297 | 2016-07-29T00:19:34 | 2016-07-29T00:19:34 | 64,434,690 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6901408433914185,
"alphanum_fraction": 0.6901408433914185,
"avg_line_length": 29.428571701049805,
"blob_id": "760a1f9becea07b8435cd1b12f84796dd98e97e3",
"content_id": "994e43c139c2efe6b9b18ff5a35f952cb1336b72",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 213,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 7,
"path": "/UrlShort/urls.py",
"repo_name": "Ahm3dRadi/Django-URL-Shortener",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import url, include\nfrom django.contrib import admin\n\nurlpatterns = [\n url(r'^admin/', admin.site.urls),\n url(r'^', include('shorter.urls', namespace='shorter', app_name='shorter')),\n]\n"
},
{
"alpha_fraction": 0.7153465151786804,
"alphanum_fraction": 0.7178217768669128,
"avg_line_length": 32.75,
"blob_id": "f57198fe969feb7f014dc555ad081afdeaa1f766",
"content_id": "e0a24b5f1b7bb708d0c9cd8ea87dec4e931917ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 404,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 12,
"path": "/shorter/urls.py",
"repo_name": "Ahm3dRadi/Django-URL-Shortener",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import url\nfrom shorter import views\nfrom django.contrib.staticfiles.urls import static\nfrom django.conf import settings\n\nurlpatterns = [\n url(r'^$', views.home, name=\"home\"),\n url(r'shorter/$', views.short_url, name=\"short_url\"),\n url(r'(?P<url_id>\\w{4})', views.redirect, name=\"redirect\"),\n\n]\nurlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)"
},
{
"alpha_fraction": 0.7076124548912048,
"alphanum_fraction": 0.7231833934783936,
"avg_line_length": 22.1200008392334,
"blob_id": "9170dd7f390bbbd80456f2b70538df8fcebd0b44",
"content_id": "f6cfc19349a0abc574b7c3d6e09c05819bce8492",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 578,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 25,
"path": "/README.md",
"repo_name": "Ahm3dRadi/Django-URL-Shortener",
"src_encoding": "UTF-8",
"text": "# Django URL Shortener\n\nMy first Web applicaton with Django framework \n### Demo:\n[dj-url heroku](https://dj-url.herokuapp.com/)\n\n### Installation\n1. Clone it:\n`git clone https://github.com/Ahm3dRadi/Django-URL-Shortener.git `\n\n2. install requirements :\n`python -m pip install -r requirements.txt`\n\n3. migrate models to database :\n`python manage.py makemigrations` then \n`python manage.py migrate`\n\n4. run it:\n `python manage.py runserver`\n\n### Create Your Own Project\n[Django Doc](https://docs.djangoproject.com/en/1.9/)\n\n### Author\n* [Ahmed Radi](https://www.facebook.com/Ahm3d.9)\n"
},
{
"alpha_fraction": 0.620634913444519,
"alphanum_fraction": 0.641269862651825,
"avg_line_length": 25.723403930664062,
"blob_id": "df02d7790c8685e2020373d5093949bacc27a935",
"content_id": "56a2f9dc40b3e00075d92a322d9ad0b04e0ebf19",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1260,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 47,
"path": "/shorter/views.py",
"repo_name": "Ahm3dRadi/Django-URL-Shortener",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render , render_to_response,get_object_or_404\nfrom django.http import HttpResponseRedirect\nfrom django.http import JsonResponse\nfrom shorter.models import Url\nimport string, random\n\ndef home(request):\n\n\n context = \"\"\n return render(request, 'home.html',context)\n\ndef short_url(request):\n def id_generator(size=4, chars=string.ascii_letters + string.digits):\n return ''.join(random.choice(chars) for _ in range(size))\n url_id = id_generator()\n if 'url' in request.POST:\n url = request.POST['url']\n if url == \"\":\n return JsonResponse({\"url\": \"ERROR\"})\n else:\n new_url = Url(short=url_id, httpurl=url)\n new_url.save()\n return JsonResponse({\"url\": url_id})\n\ndef redirect(request, url_id):\n url = get_object_or_404(Url, short=url_id)\n if url:\n url.count += 1\n url.save()\n return HttpResponseRedirect(url.httpurl)\n else:\n return HttpResponseRedirect(\"/\")\n\n\n\n\ndef handler404(request):\n response = render_to_response('404.html', )\n response.status_code = 404\n return response\n\n\ndef handler500(request):\n response = render_to_response('404.html', )\n response.status_code = 500\n return response\n\n\n\n\n"
},
{
"alpha_fraction": 0.6792452931404114,
"alphanum_fraction": 0.698113203048706,
"avg_line_length": 25,
"blob_id": "a574cfc306a1cdec6944e6e86aeb8f11625e02f2",
"content_id": "df66e9566d6fdba09612612deea35faa80b35212",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 318,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 12,
"path": "/shorter/models.py",
"repo_name": "Ahm3dRadi/Django-URL-Shortener",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\nclass Url(models.Model):\n short = models.CharField(max_length=10, )\n httpurl = models.URLField(max_length=200)\n count = models.IntegerField(default=0)\n created = models.DateTimeField(auto_now=True)\n\n\ndef __str__(self):\n return self.httpurl\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5160890817642212,
"alphanum_fraction": 0.5445544719696045,
"avg_line_length": 27.85714340209961,
"blob_id": "3fdb2a93aabbf18b75a32b90015e2392f24c58ed",
"content_id": "e90092161dc7b144e06398e5186affc94e2b709e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 808,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 28,
"path": "/shorter/migrations/0003_auto_20160728_0454.py",
"repo_name": "Ahm3dRadi/Django-URL-Shortener",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.8 on 2016-07-28 00:54\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('shorter', '0002_urls_short'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Url',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('short', models.CharField(max_length=10)),\n ('httpurl', models.URLField()),\n ('count', models.IntegerField(default=0)),\n ('created', models.DateTimeField(auto_now=True)),\n ],\n ),\n migrations.DeleteModel(\n name='Urls',\n ),\n ]\n"
}
] | 6 |
huiwudiyi/Auto-PyTorch | https://github.com/huiwudiyi/Auto-PyTorch | 4bf1577fd974973ea7b45e3faa1c91e355a7f121 | e8d958a4e1aa760af96a20b097d27c2f29657f42 | 52d4d22ee616076968b0a9659852ce3e5b2f1890 | refs/heads/master | 2020-05-19T17:57:00.665458 | 2019-05-04T07:07:36 | 2019-05-04T07:07:36 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5621269941329956,
"alphanum_fraction": 0.5628504157066345,
"avg_line_length": 35.55628967285156,
"blob_id": "42491cb8b3055425647240f47987c023fe38c564",
"content_id": "98a9201d7ec9a994055273b1396db0d8e4abd921",
"detected_licenses": [
"LicenseRef-scancode-philippe-de-muyter",
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5529,
"license_type": "permissive",
"max_line_length": 130,
"num_lines": 151,
"path": "/autoPyTorch/pipeline/base/node.py",
"repo_name": "huiwudiyi/Auto-PyTorch",
"src_encoding": "UTF-8",
"text": "__author__ = \"Max Dippel, Michael Burkart and Matthias Urban\"\n__version__ = \"0.0.1\"\n__license__ = \"BSD\"\n\n\nimport gc\nimport inspect\n\n\nclass Node():\n def __init__(self):\n self.child_node = None\n self.fit_output = None\n self.predict_output = None\n\n def fit(self, **kwargs):\n return dict()\n\n def predict(self, **kwargs):\n return dict()\n \n def get_fit_argspec(self):\n possible_keywords, _, _, defaults, _, _, _ = inspect.getfullargspec(self.fit)\n possible_keywords = [k for k in possible_keywords if k != 'self']\n return possible_keywords, defaults\n\n def get_predict_argspec(self):\n possible_keywords, _, _, defaults, _, _, _ = inspect.getfullargspec(self.predict)\n possible_keywords = [k for k in possible_keywords if k != 'self']\n return possible_keywords, defaults\n\n def clean_fit_data(self):\n node = self\n \n # clear outputs\n while (node is not None):\n node.fit_output = None\n node.predict_output = None\n node = node.child_node\n\n def fit_traverse(self, **kwargs):\n \"\"\"Calls fit function of child nodes.\n The fit function can have different keyword arguments.\n All keywords have to be either defined in kwargs or in an fit output of a parent node.\n \n \"\"\"\n\n self.clean_fit_data()\n gc.collect()\n\n base = Node()\n base.fit_output = kwargs\n\n available_kwargs = {key: base for key in kwargs.keys()}\n\n node = self\n prev_node = base\n\n while (node is not None):\n prev_node = node\n possible_keywords, defaults = node.get_fit_argspec()\n\n last_required_keyword_index = len(possible_keywords) - len(defaults or [])\n required_kwargs = dict()\n for index, keyword in enumerate(possible_keywords):\n if (keyword in available_kwargs):\n required_kwargs[keyword] = available_kwargs[keyword].fit_output[keyword]\n\n elif index >= last_required_keyword_index:\n required_kwargs[keyword] = defaults[index - last_required_keyword_index]\n\n else:\n print (\"Available keywords:\", sorted(available_kwargs.keys()))\n raise ValueError('Node ' 
+ str(type(node)) + ' requires keyword ' + str(keyword) + ' which is not available.')\n \n node.fit_output = node.fit(**required_kwargs)\n if (not isinstance(node.fit_output, dict)):\n raise ValueError('Node ' + str(type(node)) + ' does not return a dictionary.')\n\n for keyword in node.fit_output.keys():\n if keyword in available_kwargs:\n # delete old values\n if (keyword not in available_kwargs[keyword].get_predict_argspec()[0]):\n del available_kwargs[keyword].fit_output[keyword]\n available_kwargs[keyword] = node\n node = node.child_node\n\n gc.collect()\n\n return prev_node.fit_output\n\n def predict_traverse(self, **kwargs):\n \"\"\"Calls predict function of child nodes.\n The predict function can have different keyword arguments.\n All keywords have to be either defined in kwargs, in a predict output of a parent node or in the nodes own fit output\n \n \"\"\"\n \n base = Node()\n base.predict_output = kwargs\n\n available_kwargs = {key: base for key in kwargs.keys()}\n\n node = self\n\n # clear outputs\n while (node is not None):\n node.predict_output = None\n node = node.child_node\n\n gc.collect()\n\n node = self\n prev_node = base\n\n while (node is not None):\n prev_node = node\n possible_keywords, defaults = node.get_predict_argspec()\n\n last_required_keyword_index = len(possible_keywords) - len(defaults or [])\n required_kwargs = dict()\n for index, keyword in enumerate(possible_keywords):\n if (keyword in available_kwargs):\n if (available_kwargs[keyword].predict_output is None):\n print(str(type(available_kwargs[keyword])))\n required_kwargs[keyword] = available_kwargs[keyword].predict_output[keyword]\n \n elif (node.fit_output is not None and keyword in node.fit_output):\n required_kwargs[keyword] = node.fit_output[keyword]\n\n elif index >= last_required_keyword_index:\n required_kwargs[keyword] = defaults[index - last_required_keyword_index]\n\n else:\n raise ValueError('Node ' + str(type(node)) + ' requires keyword ' + keyword + ' which is not 
available.')\n \n node.predict_output = node.predict(**required_kwargs)\n if (not isinstance(node.predict_output, dict)):\n raise ValueError('Node ' + str(type(node)) + ' does not return a dictionary.')\n\n for keyword in node.predict_output.keys():\n if keyword in available_kwargs:\n # delete old values\n if (available_kwargs[keyword].predict_output[keyword] is not None):\n del available_kwargs[keyword].predict_output[keyword]\n available_kwargs[keyword] = node\n node = node.child_node\n \n gc.collect()\n\n return prev_node.predict_output\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6224992871284485,
"alphanum_fraction": 0.622789204120636,
"avg_line_length": 35.30526351928711,
"blob_id": "f86bb9c84cc99e7a663f0e6340ba565c6e72f46c",
"content_id": "a6bb9719d5337ef4adc921efb24506aa0b12d39a",
"detected_licenses": [
"LicenseRef-scancode-philippe-de-muyter",
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6898,
"license_type": "permissive",
"max_line_length": 139,
"num_lines": 190,
"path": "/autoPyTorch/pipeline/base/pipeline.py",
"repo_name": "huiwudiyi/Auto-PyTorch",
"src_encoding": "UTF-8",
"text": "import time\nfrom autoPyTorch.pipeline.base.pipeline_node import PipelineNode\nfrom autoPyTorch.pipeline.base.node import Node\nimport ConfigSpace\nfrom autoPyTorch.utils.configspace_wrapper import ConfigWrapper\nfrom autoPyTorch.utils.config.config_file_parser import ConfigFileParser\nfrom autoPyTorch.utils.hyperparameter_search_space_update import HyperparameterSearchSpaceUpdates\nimport traceback\n\n\nclass Pipeline():\n def __init__(self, pipeline_nodes=[]):\n self.root = Node()\n self._pipeline_nodes = dict()\n self.start_params = None\n self._parent_pipeline = None\n\n last_node = self.root\n for node in pipeline_nodes:\n last_node.child_node = node\n self.add_pipeline_node(node)\n last_node = node\n\n def _get_start_parameter(self):\n return self.start_params\n\n def __getitem__(self, key):\n return self._pipeline_nodes[key]\n \n def __contains__(self, key):\n if isinstance(key, str):\n return key in self._pipeline_nodes\n elif issubclass(key, PipelineNode):\n return key.get_name() in self._pipeline_nodes\n else:\n raise ValueError(\"Cannot check if instance \" + str(key) + \" of type \" + str(type(key)) + \" is contained in pipeline\")\n\n def set_parent_pipeline(self, pipeline):\n \"\"\"Set this pipeline as a child pipeline of the given pipeline.\n This will allow the parent pipeline to access the pipeline nodes of its child pipelines\n \n Arguments:\n pipeline {Pipeline} -- parent pipeline\n \"\"\"\n\n if (not issubclass(type(pipeline), Pipeline)):\n raise ValueError(\"Given pipeline has to be of type Pipeline, got \" + str(type(pipeline)))\n\n self._parent_pipeline = pipeline\n\n for _, node in self._pipeline_nodes.items():\n self._parent_pipeline.add_pipeline_node(node)\n\n\n def fit_pipeline(self, **kwargs):\n return self.root.fit_traverse(**kwargs)\n\n def predict_pipeline(self, **kwargs):\n return self.root.predict_traverse(**kwargs)\n\n def add_pipeline_node(self, pipeline_node):\n \"\"\"Add a node to the pipeline\n \n Arguments:\n 
pipeline_node {PipelineNode} -- node\n \n Returns:\n PipelineNode -- return input node\n \"\"\"\n\n if (not issubclass(type(pipeline_node), PipelineNode)):\n raise ValueError(\"You can only add PipelineElement subclasses to the pipeline\")\n \n self._pipeline_nodes[pipeline_node.get_name()] = pipeline_node\n pipeline_node.set_pipeline(self)\n\n if (self._parent_pipeline):\n self._parent_pipeline.add_pipeline_node(pipeline_node)\n\n return pipeline_node\n\n def get_hyperparameter_search_space(self, dataset_info=None, **pipeline_config):\n pipeline_config = self.get_pipeline_config(**pipeline_config)\n\n if \"hyperparameter_search_space_updates\" in pipeline_config and pipeline_config[\"hyperparameter_search_space_updates\"] is not None:\n assert isinstance(pipeline_config[\"hyperparameter_search_space_updates\"], HyperparameterSearchSpaceUpdates)\n pipeline_config[\"hyperparameter_search_space_updates\"].apply(self, pipeline_config)\n\n if \"random_seed\" in pipeline_config:\n cs = ConfigSpace.ConfigurationSpace(seed=pipeline_config[\"random_seed\"])\n else:\n cs = ConfigSpace.ConfigurationSpace()\n\n for name, node in self._pipeline_nodes.items():\n config_space = node.get_hyperparameter_search_space(dataset_info=dataset_info, **pipeline_config)\n cs.add_configuration_space(prefix=name, configuration_space=config_space, delimiter=ConfigWrapper.delimiter)\n \n for name, node in self._pipeline_nodes.items():\n cs = node.insert_inter_node_hyperparameter_dependencies(cs, dataset_info=dataset_info, **pipeline_config)\n\n return cs\n\n def get_pipeline_config(self, throw_error_if_invalid=True, **pipeline_config):\n options = self.get_pipeline_config_options()\n conditions = self.get_pipeline_config_conditions()\n \n parser = ConfigFileParser(options)\n pipeline_config = parser.set_defaults(pipeline_config, throw_error_if_invalid=throw_error_if_invalid)\n\n for c in conditions:\n try:\n c(pipeline_config)\n except Exception as e:\n if throw_error_if_invalid:\n raise\n 
print(e)\n traceback.print_exc()\n \n return pipeline_config\n\n\n def get_pipeline_config_options(self):\n if (self._parent_pipeline is not None):\n return self._parent_pipeline.get_pipeline_config_options()\n\n options = []\n\n for node in self._pipeline_nodes.values():\n options += node.get_pipeline_config_options()\n\n return options\n\n def get_pipeline_config_conditions(self):\n if (self._parent_pipeline is not None):\n return self._parent_pipeline.get_pipeline_config_options()\n \n conditions = []\n\n for node in self._pipeline_nodes.values():\n conditions += node.get_pipeline_config_conditions()\n \n return conditions\n\n def print_config_space(self, **pipeline_config):\n config_space = self.get_hyperparameter_search_space(**pipeline_config)\n\n if (len(config_space.get_hyperparameters()) == 0):\n return\n print(config_space)\n\n def print_config_space_per_node(self, **pipeline_config):\n for name, node in self._pipeline_nodes.items():\n config_space = node.get_hyperparameter_search_space(**pipeline_config)\n\n if (len(config_space.get_hyperparameters()) == 0):\n continue\n print(name)\n print(config_space)\n\n\n def print_config_options(self):\n for option in self.get_pipeline_config_options():\n print(str(option))\n\n def print_config_options_per_node(self):\n for name, node in self._pipeline_nodes.items():\n print(name)\n for option in node.get_pipeline_config_options():\n print(\" \" + str(option))\n\n def print_pipeline_nodes(self):\n for name, node in self._pipeline_nodes.items():\n input_str = \"[\"\n for edge in node.in_edges:\n input_str += \" (\" + edge.out_idx + \", \" + edge.target.get_name() + \", \" + edge.kw + \") \"\n input_str += \"]\"\n print(name + \" \\t\\t Input: \" + input_str)\n \n def clean(self):\n self.root.clean_fit_data()\n\n def clone(self):\n pipeline_nodes = []\n\n current_node = self.root.child_node\n while current_node is not None:\n pipeline_nodes.append(current_node.clone())\n current_node = current_node.child_node\n \n 
return type(self)(pipeline_nodes)\n"
},
{
"alpha_fraction": 0.6040801405906677,
"alphanum_fraction": 0.604982852935791,
"avg_line_length": 35.20915222167969,
"blob_id": "abfc7c7d5b70040d782fcc21cf2922f06482a373",
"content_id": "6630acd1d77c107e4bf84e2914981b961549cf36",
"detected_licenses": [
"LicenseRef-scancode-philippe-de-muyter",
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5539,
"license_type": "permissive",
"max_line_length": 152,
"num_lines": 153,
"path": "/autoPyTorch/pipeline/base/pipeline_node.py",
"repo_name": "huiwudiyi/Auto-PyTorch",
"src_encoding": "UTF-8",
"text": "__author__ = \"Max Dippel, Michael Burkart and Matthias Urban\"\n__version__ = \"0.0.1\"\n__license__ = \"BSD\"\n\nfrom copy import deepcopy\nimport ConfigSpace\nimport inspect\nfrom autoPyTorch.utils.config.config_option import ConfigOption\nfrom autoPyTorch.utils.configspace_wrapper import ConfigWrapper\nfrom autoPyTorch.pipeline.base.node import Node\n\n\nclass PipelineNode(Node):\n def __init__(self):\n \"\"\"A pipeline node is a step in a pipeline.\n It can implement a fit function:\n Returns a dictionary.\n Input parameter (kwargs) are given by previous fit function computations in the pipeline.\n It can implement a predict function:\n Returns a dictionary.\n Input parameter (kwargs) are given by previous predict function computations in the pipeline or defined in fit function output of this node.\n\n Each node can provide a list of config options that the user can specify/customize.\n Each node can provide a config space for optimization.\n\n \"\"\"\n\n super(PipelineNode, self).__init__()\n self._cs_updates = dict()\n self.pipeline = None\n\n @classmethod\n def get_name(cls):\n return cls.__name__\n \n def clone(self, skip=(\"pipeline\", \"fit_output\", \"predict_output\", \"child_node\")):\n node_type = type(self)\n new_node = node_type.__new__(node_type)\n for key, value in self.__dict__.items():\n if key not in skip:\n setattr(new_node, key, deepcopy(value))\n else:\n setattr(new_node, key, None)\n return new_node\n\n # VIRTUAL\n def fit(self, **kwargs):\n \"\"\"Fit pipeline node.\n Each node computes its fit function in linear order.\n All args have to be specified in a parent node fit output.\n \n Returns:\n [dict] -- output values that will be passed to child nodes, if required\n \"\"\"\n\n return dict()\n\n # VIRTUAL\n def predict(self, **kwargs):\n \"\"\"Predict pipeline node.\n Each node computes its predict function in linear order.\n All args have to be specified in a parent node predict output or in the fit output of this node\n \n 
Returns:\n [dict] -- output values that will be passed to child nodes, if required\n \"\"\"\n\n return dict()\n\n # VIRTUAL\n def set_pipeline(self, pipeline):\n self.pipeline = pipeline\n\n # VIRTUAL\n def get_pipeline_config_options(self):\n \"\"\"Get available ConfigOption parameter.\n \n Returns:\n List[ConfigOption] -- list of available config options\n \"\"\"\n\n return []\n\n # VIRTUAL\n def get_pipeline_config_conditions(self):\n \"\"\"Get the conditions on the pipeline config (e.g. max_budget > min_budget)\n \n Returns:\n List[ConfigCondition] -- list of functions, that take a pipeline config and raise an Error, if faulty configuration is detected.\n \"\"\"\n\n return []\n\n\n # VIRTUAL\n def get_hyperparameter_search_space(self, dataset_info=None, **pipeline_config):\n \"\"\"Get hyperparameter that should be optimized.\n \n Returns:\n ConfigSpace -- config space\n \"\"\"\n return ConfigSpace.ConfigurationSpace()\n \n # VIRTUAL\n def insert_inter_node_hyperparameter_dependencies(self, config_space, dataset_info=None, **pipeline_config):\n \"\"\"Insert Conditions and Forbiddens of hyperparameters of different nodes\n\n Returns:\n ConfigSpace -- config space\n \"\"\"\n return config_space\n\n def _apply_search_space_update(self, name, new_value_range, log=False):\n \"\"\"Allows the user to update a hyperparameter\n \n Arguments:\n name {string} -- name of hyperparameter\n new_value_range {List[?] 
-- value range can be either lower, upper or a list of possible conditionals\n log {bool} -- is hyperparameter logscale\n \"\"\"\n\n if (len(new_value_range) == 0):\n raise ValueError(\"The new value range needs at least one value\")\n self._cs_updates[name] = tuple([new_value_range, log])\n \n def _check_search_space_updates(self, *allowed_hps):\n exploded_allowed_hps = list()\n for allowed_hp in allowed_hps:\n add = [list()]\n allowed_hp = (allowed_hp, ) if isinstance(allowed_hp, str) else allowed_hp\n for part in allowed_hp:\n if isinstance(part, str):\n add = [x + [part] for x in add]\n else:\n add = [x + [p] for p in part for x in add]\n exploded_allowed_hps += add\n exploded_allowed_hps = [ConfigWrapper.delimiter.join(x) for x in exploded_allowed_hps]\n \n for key in self._get_search_space_updates().keys():\n if key not in exploded_allowed_hps and \\\n ConfigWrapper.delimiter.join(key.split(ConfigWrapper.delimiter)[:-1] + [\"*\"]) not in exploded_allowed_hps:\n raise ValueError(\"Invalid search space update given: %s\" % key)\n \n def _get_search_space_updates(self, prefix=None):\n if prefix is None:\n return self._cs_updates\n if isinstance(prefix, tuple):\n prefix = ConfigWrapper.delimiter.join(prefix)\n result = dict()\n for key in self._cs_updates.keys():\n if key.startswith(prefix + ConfigWrapper.delimiter):\n result[key[len(prefix + ConfigWrapper.delimiter):]] = self._cs_updates[key]\n return result"
},
{
"alpha_fraction": 0.5867421627044678,
"alphanum_fraction": 0.5932452082633972,
"avg_line_length": 41.945945739746094,
"blob_id": "a59853e9d983916a419fbde33499a58d3ec40495",
"content_id": "f14c91f298ae9670eba0db4dc53ecc852456bab1",
"detected_licenses": [
"LicenseRef-scancode-philippe-de-muyter",
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4767,
"license_type": "permissive",
"max_line_length": 131,
"num_lines": 111,
"path": "/scripts/plot_weight_history.py",
"repo_name": "huiwudiyi/Auto-PyTorch",
"src_encoding": "UTF-8",
"text": "import matplotlib.pyplot as plt\nimport argparse\nimport numpy as np\nimport os, sys\nimport heapq\n\nhpbandster = os.path.abspath(os.path.join(__file__, '..', '..', 'submodules', 'HpBandSter'))\nsys.path.append(hpbandster)\n\nfrom hpbandster.core.result import logged_results_to_HBS_result\n\nsum_of_weights_history = None\n\n\nif __name__ == \"__main__\": \n parser = argparse.ArgumentParser(description='Run benchmarks for autonet.')\n parser.add_argument(\"--only_summary\", action=\"store_true\", help=\"The maximum number of configs in the legend\")\n parser.add_argument(\"--max_legend_size\", default=5, type=int, help=\"The maximum number of datasets in the legend\")\n parser.add_argument(\"--num_consider_in_summary\", default=15, type=int, help=\"The number of datasets considered in the summary\")\n parser.add_argument(\"weight_history_files\", type=str, nargs=\"+\", help=\"The files to plot\")\n\n args = parser.parse_args()\n \n weight_deviation_history = list()\n weight_deviation_timestamps = list()\n \n #iterate over all files and lot the weight history\n for i, weight_history_file in enumerate(args.weight_history_files):\n print(i)\n plot_data = dict()\n with open(weight_history_file, \"r\") as f:\n for line in f:\n \n #read the data\n line = line.split(\"\\t\")\n if len(line) == 1 or not line[-1].strip():\n continue\n data = line[-1]\n data = list(map(float, map(str.strip, data.split(\",\"))))\n title = \"\\t\".join(line[:-1]).strip()\n \n # and save it later for plotting\n plot_data[title] = data\n \n # only show labels for top datasets\n sorted_keys = sorted(plot_data.keys(), key=lambda x, d=plot_data: -d[x][-1] if x != \"current\" else -float(\"inf\"))\n show_labels = set(sorted_keys[:args.max_legend_size])\n consider_in_summary = set(sorted_keys[:args.num_consider_in_summary])\n\n # parse results to get the timestamps for the weights\n x_axis = []\n try:\n r = logged_results_to_HBS_result(os.path.dirname(weight_history_file))\n sampled_configs = 
set()\n for run in sorted(r.get_all_runs(), key=(lambda run: run.time_stamps[\"submitted\"])):\n if run.config_id not in sampled_configs:\n x_axis.append(run.time_stamps[\"submitted\"])\n sampled_configs |= set([run.config_id])\n except Exception as e:\n continue\n\n # do the plotting\n if not args.only_summary:\n for title, data in sorted(plot_data.items()):\n plt.plot(x_axis, data[:len(x_axis)],\n label=title if title in show_labels else None,\n linestyle=\"-.\" if title == \"current\" else (\"-\" if title in show_labels else \":\"),\n marker=\"x\")\n\n plt.legend(loc='best')\n plt.title(weight_history_file)\n plt.xscale(\"log\")\n plt.show()\n\n # save data for summary\n for title, data in plot_data.items():\n if title in consider_in_summary:\n weight_deviation_history.append([abs(d - data[-1]) for d in data])\n weight_deviation_timestamps.append(x_axis)\n\n # plot summary\n weight_deviation_history = np.array(weight_deviation_history)\n weight_deviation_timestamps = np.array(weight_deviation_timestamps)\n \n # iterate over all weight deviation histories simultaneously, ordered by increasing timestamps\n history_pointers = [0] * weight_deviation_timestamps.shape[0]\n current_values = [None] * weight_deviation_timestamps.shape[0]\n heap = [(weight_deviation_timestamps[i, 0], weight_deviation_history[i, 0], i, 0) # use heap to sort by timestamps\n for i in range(weight_deviation_timestamps.shape[0])]\n heapq.heapify(heap)\n # progress = 0\n # total = weight_deviation_timestamps.shape[0] * weight_deviation_timestamps.shape[1]\n\n times = []\n values = []\n while heap:\n time, v, i, p = heapq.heappop(heap)\n current_values[i] = v\n values.append(np.mean([v for v in current_values if v is not None]))\n times.append(time)\n history_pointers[i] += 1\n if p + 1 < weight_deviation_timestamps.shape[1]:\n heapq.heappush(heap, (weight_deviation_timestamps[i, p + 1], weight_deviation_history[i, p + 1], i, p + 1))\n \n # progress += 1\n # print(\"Progress Summary:\", 
(progress / total) * 100, \" \" * 20, end=\"\\r\" if progress != total else \"\\n\")\n\n plt.plot(times, values, marker=\"x\")\n plt.title(\"weight deviation over time\")\n plt.xscale(\"log\")\n plt.show()\n"
}
] | 4 |
sundars/matrixops | https://github.com/sundars/matrixops | 192f7b20d8e8c4a6c83612f725345c17b58a1b96 | cf76a59b2e47b527488ebc3560ff9e68b43db3b5 | f4996ffb05de45be132aa77ba2e84b433374e93e | refs/heads/master | 2020-04-12T02:00:10.992393 | 2019-02-11T22:23:13 | 2019-02-11T22:23:13 | 162,234,817 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7195122241973877,
"alphanum_fraction": 0.7195122241973877,
"avg_line_length": 17.22222137451172,
"blob_id": "c19e2ad0107e7e52b96e539d00bac44a1146461a",
"content_id": "f456ce60d9c921ce2a0ea21801a4f766a3c3909b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 164,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 9,
"path": "/Makefile",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "bld/fraction: bld/fraction.o\n\tgcc -o bld/fraction bld/fraction.o\n\nbld/fraction.o: fraction.c\n\tmkdir -p bld\n\tgcc -c fraction.c -o bld/fraction.o\n\nclean:\n\trm -rf bld\n"
},
{
"alpha_fraction": 0.5551064014434814,
"alphanum_fraction": 0.5632978677749634,
"avg_line_length": 56.6687126159668,
"blob_id": "d5d035ed0bccffaf74773572dd332e7c79e8e62e",
"content_id": "6b1294c08e9acd382f93b56874b73a9ada69928e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9400,
"license_type": "no_license",
"max_line_length": 676,
"num_lines": 163,
"path": "/word.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nimport json\nimport requests\nimport random\n\nclass Oxford:\n appid = 'ef6388d2'\n appkey = '275e4baceefd617afe674d7fa37fd1d2'\n domainList = []\n urlBase = 'https://od-api.oxforddictionaries.com:443/api/v1/'\n language = 'en'\n wordList = []\n minWordLength = 7\n category = ''\n numberOfWords = 0\n wordInUse = None\n\n def __init__(self):\n self.domainList = ['Air Force', 'Alcoholic', 'American Civil War', 'American Football', 'Amerindian', 'Anatomy', 'Ancient History', 'Angling', 'Anthropology', 'Archaeology', 'Archery', 'Architecture', 'Art', 'Artefacts', 'Arts And Humanities', 'Astrology', 'Astronomy', 'Athletics', 'Audio', 'Australian Rules', 'Aviation', 'Ballet', 'Baseball', 'Basketball', 'Bellringing', 'Biblical', 'Billiards', 'Biochemistry', 'Biology', 'Bird', 'Bookbinding', 'Botany']\n self.domainList.extend(['Bowling', 'Bowls', 'Boxing', 'Breed', 'Brewing', 'Bridge', 'Broadcasting', 'Buddhism', 'Building', 'Bullfighting', 'Camping', 'Canals', 'Cards', 'Carpentry', 'Chemistry', 'Chess', 'Christian', 'Church Architecture', 'Civil Engineering', 'Clockmaking', 'Clothing', 'Coffee', 'Commerce', 'Commercial Fishing', 'Complementary Medicine', 'Computing', 'Cooking', 'Cosmetics', 'Cricket', 'Crime', 'Croquet', 'Crystallography', 'Currency', 'Cycling', 'Dance', 'Dentistry', 'Drink', 'Dyeing', 'Early Modern History', 'Ecclesiastical', 'Ecology', 'Economics', 'Education', 'Egyptian History', 'Electoral'])\n self.domainList.extend(['Electrical', 'Electronics', 'Element', 'English Civil War', 'Falconry', 'Farming', 'Fashion', 'Fencing', 'Film', 'Finance', 'Fire Service', 'First World War', 'Fish', 'Food', 'Forestry', 'Freemasonry', 'French Revolution', 'Furniture', 'Gambling', 'Games', 'Gaming', 'Genetics', 'Geography', 'Geology', 'Geometry', 'Glassmaking', 'Golf', 'Goods Vehicles', 'Grammar', 'Greek History', 'Gymnastics', 'Hairdressing', 'Handwriting', 'Heraldry', 'Hinduism', 'History', 'Hockey', 'Honour', 
'Horology', 'Horticulture', 'Hotels', 'Hunting', 'Insect', 'Instrument', 'Intelligence', 'Invertebrate', 'Islam'])\n self.domainList.extend(['Jazz', 'Jewellery', 'Journalism', 'Judaism', 'Knitting', 'Language', 'Law', 'Leather', 'Linguistics', 'Literature', 'Logic', 'Lower Plant', 'Mammal', 'Marriage', 'Martial Arts', 'Mathematics', 'Measure', 'Mechanics', 'Medicine', 'Medieval History', 'Metallurgy', 'Meteorology', 'Microbiology', 'Military', 'Military History', 'Mineral', 'Mining', 'Motor Racing', 'Motoring', 'Mountaineering', 'Music', 'Musical Direction', 'Mythology', 'Napoleonic Wars', 'Narcotics', 'Nautical', 'Naval', 'Needlework', 'Numismatics', 'Occult', 'Oceanography', 'Office', 'Oil Industry', 'Optics'])\n self.domainList.extend(['Palaeontology', 'Parliament', 'Pathology', 'Penal', 'People', 'Pharmaceutics', 'Philately', 'Philosophy', 'Phonetics', 'Photography', 'Physics', 'Physiology', 'Plant', 'Plumbing', 'Police', 'Politics', 'Popular Music', 'Postal', 'Pottery', 'Printing', 'Professions', 'Prosody', 'Psychiatry', 'Psychology', 'Publishing', 'Racing', 'Railways', 'Rank', 'Relationships', 'Religion', 'Reptile', 'Restaurants', 'Retail', 'Rhetoric', 'Riding', 'Roads', 'Rock'])\n self.domainList.extend(['Roman Catholic Church', 'Roman History', 'Rowing', 'Royalty', 'Rugby', 'Savoury', 'Scouting', 'Second World War', 'Shoemaking', 'Sikhism', 'Skateboarding', 'Skating', 'Skiing', 'Smoking', 'Snowboarding', 'Soccer', 'Sociology', 'Space', 'Sport', 'Statistics', 'Stock Exchange', 'Surfing', 'Surgery', 'Surveying', 'Sweet', 'Swimming', 'Tea', 'Team Sports', 'Technology', 'Telecommunications', 'Tennis', 'Textiles', 'Theatre', 'Theology', 'Timber', 'Title', 'Tools', 'Trade Unionism', 'Transport', 'University', 'Variety', 'Veterinary', 'Video', 'War Of American Independence', 'Weapons', 'Weightlifting', 'Wine', 'Wrestling', 'Yoga', 'Zoology'])\n\n self.wordList = []\n while len(self.wordList) is 0:\n domainIndex = random.randint(0, len(self.domainList)-1)\n url = 
self.urlBase + 'wordlist/' + self.language + '/domains={0:s}?word_length=>{1:d}'.format(\n self.domainList[domainIndex], self.minWordLength)\n try:\n response = self.GetRequest(url)\n results = dict(response.json())['results']\n metadata = dict(response.json())['metadata']\n\n self.category = self.domainList[domainIndex]\n self.numberOfWords = metadata['total']\n\n if self.numberOfWords > 0:\n for result in results:\n try:\n word = dict()\n word['word'] = '{0:s}'.format(result['word'])\n word['id'] = '{0:s}'.format(result['id'])\n\n if len([c for c in word['word'] if c in ' -_']) is 0:\n self.wordList.append(word)\n\n except Exception as e:\n pass\n\n except Exception as e:\n print(e)\n\n def GetRequest(self, url):\n headers = {'app_id': self.appid, 'app_key': self.appkey}\n response = requests.get(url, headers = headers)\n if response.status_code != 200:\n raise Exception(\"Request to Oxford APIs failed. Request: <{0:s}>, Response: <{1:s}>\".format(url, response.text))\n\n return response\n\n def GetEntry(self):\n url = self.urlBase + 'entries/' + self.language + '/' + self.wordInUse['id'].lower()\n\n try:\n response = self.GetRequest(url)\n results = dict(response.json())['results']\n lexicalEntries = results[0]['lexicalEntries']\n for lexicalEntry in lexicalEntries:\n entries = lexicalEntry['entries']\n for entry in entries:\n if 'senses' in entry:\n lexCat = lexicalEntry['lexicalCategory'].encode('utf-8')\n defn = entry['senses'][0]['definitions'][0].encode('utf-8')\n\n try:\n retVal = '[{0:s}] - {1:s}'.format(lexCat, defn)\n except Exception as e:\n retVal = '[{0:s}] - {1:s}'.format(lexCat.decode('utf-8'), defn.decode('utf-8'))\n\n return retVal\n\n except Exception as e:\n print(e)\n return ''\n\n def ChangeCategory(self, category=''):\n self.wordList = []\n while len(self.wordList) == 0:\n domainIndex = -1\n try:\n if category != '':\n domainIndex = [d.lower() for d in self.domainList].index(category.lower())\n category = ''\n\n except ValueError as e:\n 
print(\"Category <{0:s}> is not one of Oxford dictionary's word categories. \".format(category), end='')\n dList = [d.lower() for d in self.domainList]\n for domain in dList:\n if domain.find(category.lower()) > -1 or category.lower().find(domain) > -1:\n domainIndex = dList.index(domain)\n\n if domainIndex == -1:\n domainIndex = random.randint(0, len(self.domainList)-1)\n while self.category == self.domainList[domainIndex]:\n domainIndex = random.randint(0, len(self.domainList)-1)\n\n url = self.urlBase + 'wordlist/' + self.language + '/domains={0:s}?word_length=>{1:d}'.format(\n self.domainList[domainIndex], self.minWordLength)\n\n try:\n response = self.GetRequest(url)\n results = dict(response.json())['results']\n metadata = dict(response.json())['metadata']\n\n self.category = self.domainList[domainIndex]\n self.numberOfWords = metadata['total']\n\n if self.numberOfWords > 0:\n for result in results:\n try:\n word = dict()\n word['word'] = '{0:s}'.format(result['word'])\n word['id'] = '{0:s}'.format(result['id'])\n\n if len([c for c in word['word'] if c in ' -_']) is 0:\n self.wordList.append(word)\n\n except Exception as e:\n pass\n\n except Exception as e:\n print(e)\n return ''\n\n print(\"Changing category to <{0:s}>\".format(self.category))\n return self.category\n\n def GetWord(self):\n oldWord = ''\n if self.wordInUse != None:\n oldWord = self.wordInUse['word']\n self.wordInUse = None\n\n tries = 0\n\n randPos = random.randint(0, len(self.wordList)-1)\n self.wordInUse = self.wordList[randPos]\n tries += 1\n while len([c for c in self.wordInUse['word'] if c in ' -_']) is not 0 or self.wordInUse['word'] == oldWord or tries < 3:\n randPos = random.randint(0, len(self.wordList)-1)\n self.wordInUse = self.wordList[randPos]\n tries += 1\n\n if self.wordInUse['word'] == oldWord:\n self.ChangeCategory()\n self.GetWord()\n\n return self.wordInUse['word']\n\n def GetCategory(self):\n return self.category\n"
},
{
"alpha_fraction": 0.48148149251937866,
"alphanum_fraction": 0.48148149251937866,
"avg_line_length": 14.800000190734863,
"blob_id": "f82ba8e781cc3a5e2d4500388f17c2e8fa34cdef",
"content_id": "5b0dfe6f89e5183a5ca99efca4671a68e101566f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 81,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 5,
"path": "/pause.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "class Pause:\n pause = ''\n\n def __init__(self):\n self.pause = True\n\n\n"
},
{
"alpha_fraction": 0.4087568521499634,
"alphanum_fraction": 0.4152779281139374,
"avg_line_length": 32.54513931274414,
"blob_id": "a479d683927cab5830946b6f4aed037c8d176e22",
"content_id": "f61c3070b139745de1193171bd2147045c3e8817",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9661,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 288,
"path": "/hangman.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nfrom colors import bcolors\nfrom word import Oxford\nfrom pause import Pause\nimport random\n\nclass Hangman:\n allWords = []\n wordInPlay = []\n guessedSoFar = []\n numberCorrect = 0\n numberAttempted = 0\n lettersUsed = []\n useOxford = False\n oxford = None\n\n def __init__(self, useOxford=False):\n try:\n if not useOxford:\n self.useOxford = useOxford\n with open('/usr/share/dict/words', 'r') as dictionary:\n self.allWords = [line.strip() for line in dictionary]\n\n randomPos = random.randint(0, len(self.allWords))\n if len(self.allWords[randomPos]) < 7:\n randomPos = random.randint(0, len(self.allWords))\n\n self.wordInPlay = list(self.allWords[randomPos].lower())\n\n else:\n self.useOxford = useOxford\n self.oxford = Oxford()\n self.wordInPlay = self.oxford.GetWord()\n\n self.guessedSoFar = []\n for i in range(0, len(self.wordInPlay)):\n self.guessedSoFar.append(' ')\n\n self.lettersUsed = []\n self.numberAttempted = 0\n self.numberCorrect = 0\n\n except Exception as e:\n print(e)\n\n def GuessLetter(self, s, __atype__='instanceobj, str, checks if letter exists and returns'):\n s = s.lower()\n if len(s) != 1:\n raise Exception(\"Guess only one letter at a time, please\")\n\n if s in self.lettersUsed:\n raise Exception(\"You have already guessed this letter, try another one\")\n\n self.numberAttempted += 1\n self.lettersUsed.append(s)\n self.lettersUsed.sort()\n\n if s in self.wordInPlay:\n self.numberCorrect += 1\n\n updatePositions = [i for i,x in enumerate(self.wordInPlay) if x == s]\n\n for i in range(0, len(updatePositions)):\n assert (self.guessedSoFar[updatePositions[i]] == ' '), \"Something went wrong here...\"\n self.guessedSoFar[updatePositions[i]] = s\n\n for i in range(0, len(self.wordInPlay)):\n if self.guessedSoFar[i] != self.wordInPlay[i]:\n hangPos = self.numberAttempted - self.numberCorrect\n if hangPos > 6:\n self.PrettyPrint()\n return Pause()\n\n return None\n\n self.PrintCorrect()\n 
self.GetNewWord()\n return Pause()\n\n def GuessWord(self, w, __atype__='instanceobj, str, checks if word is right and returns'):\n self.numberAttempted = self.numberCorrect + 7\n word = list(w.lower())\n if len(word) != len(self.wordInPlay):\n self.PrettyPrint()\n return Pause()\n\n for i in range(0, len(word)):\n if word[i] != self.wordInPlay[i]:\n self.PrettyPrint()\n return Pause()\n\n self.PrintCorrect()\n self.GetNewWord()\n return Pause()\n\n def PrintCorrect(self):\n print(\"\")\n print(\"\")\n print(\"------------------------------------------------------------------------------------\")\n print(\" You got it!\")\n print(\" The word is \" + bcolors.BOLD, end='')\n for i in range(0, len(self.wordInPlay)):\n print(self.wordInPlay[i],end='')\n print(bcolors.ENDC)\n if self.useOxford:\n entry = '{0:s}'.format(self.oxford.GetEntry())\n if len(entry) > 80:\n words = entry.split(' ')\n print(bcolors.ITALIC, end='')\n i = 0\n while i < len(words):\n print(' ' + words[i],end='')\n length = len(words[i]) + 2\n while i < len(words)-1 and length+len(words[i+1])+1 < 80:\n i += 1\n print(' ' + words[i], end='')\n length += len(words[i]) + 1\n print('')\n i += 1\n print(bcolors.ENDC,end='')\n else:\n print(bcolors.ITALIC + ' ' + entry + bcolors.ENDC)\n print(\"------------------------------------------------------------------------------------\")\n print(\"\")\n print(\"\")\n\n def PrintWrong(self):\n print(\"\")\n print(\"\")\n print(\"------------------------------------------------------------------------------------\")\n print(\" Sorry, you have been hung out to dry!\")\n print(\" The word is \" + bcolors.BOLD, end='')\n for i in range(0, len(self.wordInPlay)):\n print(self.wordInPlay[i],end='')\n print(bcolors.ENDC)\n if self.useOxford:\n entry = '{0:s}'.format(self.oxford.GetEntry())\n if len(entry) > 80:\n words = entry.split(' ')\n print(bcolors.ITALIC, end='')\n i = 0\n while i < len(words):\n print(' ' + words[i],end='')\n length = len(words[i]) + 2\n while i < 
len(words)-1 and length+len(words[i+1])+1 < 80:\n i += 1\n print(' ' + words[i], end='')\n length += len(words[i]) + 1\n print('')\n i += 1\n print(bcolors.ENDC,end='')\n else:\n print(bcolors.ITALIC + ' ' + entry + bcolors.ENDC)\n print(\"------------------------------------------------------------------------------------\")\n print(\"\")\n print(\"\")\n\n def GetNewWord(self, __atype__='instanceobj, resets the word and starts game again returns nothing'):\n try:\n if not self.useOxford:\n randomPos = random.randint(0, len(self.allWords))\n if len(self.allWords[randomPos]) < 7:\n randomPos = random.randint(0, len(self.allWords))\n\n self.wordInPlay = list(self.allWords[randomPos].lower())\n\n else:\n self.wordInPlay = self.oxford.GetWord()\n\n self.guessedSoFar = []\n for i in range(0, len(self.wordInPlay)):\n self.guessedSoFar.append(' ')\n\n self.lettersUsed = []\n self.numberAttempted = 0\n self.numberCorrect = 0\n\n except Exception as e:\n print(e)\n\n def ChangeCategory(self, category='', __atype__='instanceobj, str, changes word category and starts game again returns nothing'):\n if not self.useOxford:\n return\n\n category = self.oxford.ChangeCategory(category)\n self.GetNewWord()\n\n def PrettyPrint(self):\n hangPos = self.numberAttempted - self.numberCorrect\n\n # Print basic stuff\n print(\" _________\")\n print(\" | |\", end='')\n if not self.useOxford:\n print(\" Not using Oxford APIs, so category is random and cannot be changed\")\n else:\n print(\" Word category is: {0:s}\".format(self.oxford.GetCategory()))\n\n print(\" | |\")\n print(\" | |\", end='')\n print(\" Word so far: \", end='')\n for i in range(0, len(self.guessedSoFar)):\n print(bcolors.UNDERLINE + self.guessedSoFar[i] + bcolors.ENDC, end='')\n print(' ', end='')\n print(' ({0:d} letters)'.format(len(self.wordInPlay)), end='')\n print('')\n\n if hangPos > 0:\n print(\" | ____|____\", end='')\n print(\" Letters used so far: \", end='')\n print(self.lettersUsed)\n\n if hangPos > 1:\n 
print(\" | | o o |\")\n else:\n print(\" | | |\")\n\n if hangPos > 2:\n print(\" | | v |\")\n else:\n print(\" | | |\")\n\n if hangPos > 3:\n print(\" | | --- |\")\n else:\n print(\" | | |\")\n\n print(\" | ---------\")\n\n if hangPos > 4:\n print(\" | | |\")\n print(\" | | |\")\n else:\n print(\" | \")\n print(\" | \")\n\n if hangPos > 5:\n print(\" | /-| |-\\\\\")\n print(\" | / | | \\\\\")\n elif hangPos > 4:\n print(\" | | |\")\n print(\" | | |\")\n else:\n print(\" | \")\n print(\" | \")\n\n if hangPos > 4:\n print(\" | | |\")\n print(\" | | |\")\n else:\n print(\" | \")\n print(\" | \")\n\n if hangPos > 6:\n print(\" | / \\\\\")\n print(\" | / \\\\\")\n print(\" | / \\\\\")\n else:\n print(\" |\")\n print(\" |\")\n print(\" |\")\n\n else:\n print(\" |\",end='')\n print(\" Letters used so far: \", end='')\n print(self.lettersUsed)\n\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n print(\" |\")\n\n print(\" |\")\n print(\" |\")\n print(\" |_______________\")\n\n if hangPos > 6:\n self.PrintWrong()\n self.GetNewWord()\n"
},
{
"alpha_fraction": 0.5440440773963928,
"alphanum_fraction": 0.5618390440940857,
"avg_line_length": 35.38479232788086,
"blob_id": "aae6a691d7da9eced9a9f1eff44362a7467836bb",
"content_id": "cb3394d4e704020f3f8a91402ba1e5dfb1ba4b65",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 15791,
"license_type": "no_license",
"max_line_length": 174,
"num_lines": 434,
"path": "/app.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from overrides import *\nimport sys, getopt, math\nfrom matrix import Matrix\nfrom equation import LinearEquations\n\nstep = 0\nshowHint = False\n\ndef main():\n calculateInverse, calculateDeterminant, m1, m2, soe = parseArgs()\n\n if m2 is not None:\n print('')\n print(\"Input matrices are:\")\n Matrix.PrettyPrintTwoMatrices(m1, m2)\n print('')\n if calculateDeterminant: print(\"- Calculate determinant of first matrix\")\n if calculateInverse: print(\"- Calculate inverse of first matrix\")\n print(\"- Left multiply second matrix with the first matrix\")\n else:\n if m1 is not None:\n print('')\n print(\"Input matrix is:\")\n m1.PrettyPrint()\n print('')\n if calculateDeterminant: print(\"- Calculate determinant of this matrix\")\n if calculateInverse: print(\"- Calculate inverse of this matrix\")\n\n if soe is not None:\n print('')\n print(\"System of Equations is:\")\n soe.PrettyPrint()\n print('')\n print(\"- Find solution to this system of equations\")\n\n print_space()\n\n if (not calculateInverse and not calculateDeterminant and m2 is None and soe is None):\n print(\"No operation specified, nothing do... have a nice day\")\n sys.exit(0)\n\n if calculateDeterminant:\n try:\n det = m1.Determinant()\n print(\"Determinant = {0}\".format(det))\n print_space()\n\n except Exception as e:\n print(e)\n print_space()\n\n if calculateInverse:\n try:\n inverseMatrix = m1.Inverse()\n\n inv = step_by_step_inverse_cofactors(m1)\n if not inverseMatrix.IsEqual(inv):\n raise Exception(\"Oops! got the wrong inverse using cofactors method\")\n\n inv = step_by_step_guass_jordan(m1)\n if inverseMatrix.IsEqual(inv):\n print(\"Yay! got the correct inverse using Guass Jordan method\")\n else:\n inv.PrettyPrint()\n raise Exception(\"Oops! 
got the wrong inverse using Guass Jordan method\")\n\n print(\"The inverse matrix is:\")\n inverseMatrix.PrettyPrint()\n print_space()\n\n # Check if inverse is correct\n print(\"Multiplying a matrix and its inverse will give...\")\n Matrix.PrettyPrintTwoMatrices(m1, inverseMatrix)\n print_raw_input(\"Press Enter to continue...\")\n productMatrix = m1.Multiply(inverseMatrix)\n if not productMatrix.IsIdentityMatrix():\n print(\"Something went wrong...\")\n productMatrix.PrettyPrint()\n print_space()\n print(\"The identity matrix:\")\n productMatrix.PrettyPrint()\n print_space()\n\n except Exception as e:\n print(e)\n print_space()\n\n if m2 is not None:\n try:\n productMatrix = m1.Multiply(m2)\n\n print(\"Multiply the following two matrices:\")\n Matrix.PrettyPrintTwoMatrices(m1, m2)\n print_raw_input(\"Press Enter to continue...\")\n\n productMatrix = step_by_step_multiply(m1, m2)\n print(\"Product matrix:\")\n productMatrix.PrettyPrint()\n print_space()\n\n except Exception as e:\n print(e)\n print_space()\n\n if soe is not None:\n try:\n solutionMatrix = step_by_step_guass_jordan(soe.A, soe.B)\n\n if soe.CheckSolution(solutionMatrix):\n print(\"Yay! got the correct solution to the system of equations\")\n else:\n raise Exception(\"Oops! got the wrong solution\")\n\n print(\"The solution to the system of equations is:\")\n Matrix.PrettyPrintTwoMatrices(soe.X, solutionMatrix)\n print_space()\n\n except Exception as e:\n print(e)\n print_space()\n\n sys.exit(0)\n\n# Exercise: Remove the code for this function and the ones for both row reduction functions below for students to implement\n# This function should return\n# 1. The inverse of m1 if m2 is None or\n# 2. 
Row reduced m2 (used for systems of equations)\n#\n# Inputs are\n# m1: matrix to be row reduced to identity\n# m2: perform same operations on m2, if None perform same operations on identity\ndef step_by_step_guass_jordan(m1, m2=None):\n global step\n # Make a copy - python is effectively pass by reference\n matrix = m1.MakeCopy()\n if m2 is None:\n inverseMatrix = Matrix.GetIdentityMatrix(matrix.rSize, matrix.keepFraction)\n else:\n inverseMatrix = m2.MakeCopy()\n\n if inverseMatrix.IsSquare():\n print(\"Use Guass Jordan elimination and row-echelon form to find inverse of:\")\n else:\n print(\"Use Guass Jordan elimination to find the solution to the system of linear equations:\")\n Matrix.PrettyPrintTwoMatrices(matrix, inverseMatrix)\n print_raw_input(\"Press Enter to continue...\")\n\n # Row reduce down to get to row-echelon form\n step = 0\n for i in range(0, matrix.rSize):\n if not matrix.IsIdentityMatrix():\n matrix, inverseMatrix = row_reduce_down(matrix, inverseMatrix, i)\n else:\n return inverseMatrix\n\n # Row reduce up to get inverse, last row already in proper form\n for i in range(matrix.rSize-1, 0, -1):\n if not matrix.IsIdentityMatrix():\n matrix, inverseMatrix = row_reduce_up(matrix, inverseMatrix, i-1)\n else:\n return inverseMatrix\n\n return inverseMatrix\n\n# Row reduction down - for Guass Jordan method. At the end of this matrix will be in row-echelon form\ndef row_reduce_down(m, inv, row):\n global step\n\n # If the diagonal element is 0, need to add row with another whose corresponding column is non-zero\n # one such row is guaranteed to exist, otherwise determinant will be 0\n if m.IsValue(row, row, 0):\n add_row_to_make_diagonal_nonzero(m, inv, row)\n\n # Make all elements before the diagonal 0 by subtracting from rows above\n if row > 0:\n step += 1\n hint = 0\n show_hint(\"Step {0:d}: Make all elements in row {1:d}, before column {2:d} equal 0. 
Next...\".format(step, row+1, row+1))\n for i in range(0, row):\n element = m.GetElement(row, i)\n if not m.IsValue(row, i, 0):\n if element != 1:\n m.RowReduce(row, -1, element ** -1, 1)\n inv.RowReduce(row, -1, element ** -1, 1)\n hint += 1\n show_hint(\" Hint {0:d}.{1:d}: Divide row {2:d} by {3:s}. Next...\".format(step, hint, row+1, str(element)), True, m, inv)\n\n m.RowReduce(row, i, 1, 1)\n inv.RowReduce(row, i, 1, 1)\n hint += 1\n show_hint(\" Hint {0:d}.{1:d}: Subtract row {2:d} from row {3:d}. Next...\".format(step, hint, i+1, row+1), True, m, inv)\n\n if showHint and hint == 0:\n print_raw_input(\" Already 0, nothing to do...\") \n\n # Make the diagonal element 1\n diagElement = m.GetElement(row, row)\n if not m.IsValue(row, row, 1):\n step += 1\n hint = 0\n show_hint(\"Step {0:d}: Make the element in row {1:d}, column {2:d} equal 1. Next...\".format(step, row+1, row+1))\n\n m.RowReduce(row, -1, diagElement ** -1, 1)\n inv.RowReduce(row, -1, diagElement ** -1, 1)\n hint += 1\n show_hint(\" Hint {0:d}.{1:d}: Divide row {2:d} by {3:s}. Next...\".format(step, hint, row+1, str(diagElement)), True, m, inv)\n\n return m, inv\n\n# Row reduction up - for Guass Jordan method - at the end of this we will have inverse\ndef row_reduce_up(m, inv, row):\n global step\n\n # Make all elements after the diagonal 0 by subtracting from rows below\n step += 1\n hint = 0\n show_hint(\"Step {0:d}: Make all elements in row {1:d}, after column {2:d} equal 0. Next...\".format(step, row+1, row+1))\n for i in range(m.rSize-1, row, -1):\n element = m.GetElement(row, i)\n\n if not m.IsValue(row, i, 0):\n m.RowReduce(row, i, 1, element)\n inv.RowReduce(row, i, 1, element)\n if element != 1:\n hint += 1\n show_hint(\" Hint {0:d}.{1:d}: Multiply row {2:d} by {3:s} and subtract from row {4:d}. Next...\".format(step, hint, i+1, str(element), row+1), True, m, inv)\n else:\n hint += 1\n show_hint(\" Hint {0:d}.{1:d}: Subtract row {2:d} from row {3:d}. 
Next...\".format(step, hint, i+1, row+1), True, m, inv)\n\n if showHint and hint == 0:\n print_raw_input(\" Already 0, nothing to do...\") \n\n return m, inv\n\n# Row reduce to make diagnoal non-zero (may need to add 2 rows in some instances\ndef add_row_to_make_diagonal_nonzero(m, inv, row):\n global step\n step += 1\n hint = 0\n show_hint(\"Step {0:d}: Make the element in row {1:d}, column {2:d} non-zero. Next...\".format(step, row+1, row+1))\n for i in range(1, m.rSize):\n ar = (row + i) % m.rSize\n\n if not m.IsValue(ar, row, 0) and (not m.IsValue(row, ar, 0) or not m.IsValue(ar, ar, 1)):\n m.RowReduce(row, ar, 1, -1)\n inv.RowReduce(row, ar, 1, -1)\n hint += 1\n show_hint(\" Hint {0:d}.{1:d}: Add row {2:d} to row {3:d}. Next...\".format(step, hint, ar+1, row+1), True, m, inv)\n return\n\n # still zero\n if m.IsValue(row, row, 0):\n # Need to add 2 rows together\n for i in range(1, m.rSize):\n ar1 = (row + i) % m.rSize\n\n if not m.IsValue(ar1, row, 0):\n for j in range(i+1, m.rSize+i):\n ar2 = (row + j) % m.rSize\n\n if not m.IsValue(ar2, ar1, 0):\n # Add first row found\n m.RowReduce(row, ar1, 1, -1)\n inv.RowReduce(row, ar1, 1, -1)\n\n # Add second row found\n m.RowReduce(row, ar2, 1, -1)\n inv.RowReduce(row, ar2, 1, -1)\n hint += 1\n show_hint(\" Hint {0:d}.{1:d}: Add rows {2:d} and {3:d} to row {4:d}. Next...\".format(step, hint, ar1+1, ar2+1, row+1), True, m, inv)\n return\n\n# Inverse via minors, cofactors and adjugate - to double check Guass Jordan elimination\ndef step_by_step_inverse_cofactors(m):\n show_hint(\"Calculate inverse using matrix of minors, cofactors and adjugate\\n\")\n\n minors = m.MatrixOfMinors()\n show_hint(\"Step 1: Find determinant of sub-matrices to get Matrix of Minors. Next...\", True, minors)\n\n cofactors = minors.MatrixOfCofactors()\n show_hint(\"Step 2: Alternate +/- on minors matrix to get Matrix of Cofactors. 
Next...\", True, cofactors)\n\n adjugate = cofactors.Transpose()\n show_hint(\"Step 3: Transpose the cofactors matrix to get Adjugate Matrix. Next...\", True, adjugate)\n\n det = m.Determinant()\n adjugate.ScalarMultiply(det ** -1)\n inv = adjugate.MakeCopy()\n show_hint(\"Step 4: Divide adjugate by determinant to get inverse Matrix. Next...\", True, inv)\n\n if showHint:\n print(\"Inverse matrix is:\")\n inv.PrettyPrint()\n print_space()\n\n return inv\n\n# step by step multiplication of two matrices\ndef step_by_step_multiply(m1, m2):\n global step\n matrix = Matrix.CreateBlank(m1.rSize, m2.cSize, m1.keepFraction)\n\n step = 0\n for i in range(0, m1.rSize):\n for j in range(0, m2.cSize):\n step += 1\n show_hint(\"Step {0:d}: Multiply corresponding elements in row {1:d} of 1st matrix and column {2:d} of 2nd matrix and add up...\".format(step, i+1, j+1))\n\n row, column = Matrix.GetRowColumn(m1, m2, i, j)\n s = \"\"\n element = 0\n for k in range(0, m1.cSize):\n element = row[k] * column[k] + element\n if k == m1.cSize - 1:\n s += \"{0:s} * {1:s} = {2:s}\\n\".format(str(row[k]), str(column[k]), str(element))\n else:\n s += \"{0:s} * {1:s} + \".format(str(row[k]), str(column[k]))\n matrix.SetElement(i, j, element)\n show_hint(s, True, matrix, None)\n\n return matrix\n\n\n# Utility that prints out hints\ndef show_hint(s, prettyPrint=False, m1=None, m2=None):\n global showHint\n if not showHint: return\n\n try:\n input(s)\n except Exception as e:\n # ignore\n pass\n\n if prettyPrint:\n if m2 is not None:\n Matrix.PrettyPrintTwoMatrices(m1, m2)\n else:\n m1.PrettyPrint()\n\n print_raw_input(\"Press Enter to continue...\")\n\ndef print_space():\n print('')\n print(\"------------------------------------------------------------------------------------\")\n print('')\n\ndef print_raw_input(s):\n try:\n input(s)\n except Exception as e:\n # ignore\n pass\n\n print('')\n\ndef parseArgs():\n global showHint\n sarg = \"\"\n marg = \"\"\n parg = \"\"\n calculateInverse = 
False\n calculateDeterminant = False\n keepFraction = False\n\n # Get the inputs/arguments\n try:\n opts, args = getopt.getopt(sys.argv[1:], 'vhidfs:p:m:', ['matrix=', 'matrix-multiply=' 'system-of-equations='])\n except getopt.GetoptError:\n usage()\n\n # Parse the arguments\n for opt, arg in opts:\n if opt in ('-h', '--help'):\n usage()\n elif opt in ('-i', '--inverse'):\n calculateInverse = True\n elif opt in ('-d', '--determinant'):\n calculateDeterminant = True\n elif opt in ('-v', '--verbose-hints'):\n showHint = True\n elif opt in ('-f', '--use-fraction'):\n keepFraction = True\n elif opt in ('-m', '--matrix'):\n marg = arg\n elif opt in ('-p', '--matrix-multiply'):\n parg = arg\n elif opt in ('-s', '--system-of-equations'):\n sarg = arg\n else:\n usage()\n\n try:\n s = LinearEquations(sarg, keepFraction)\n\n # Parse the matrices\n m1 = Matrix(marg, keepFraction)\n if (not m1.isValid and not s.isValid):\n raise Exception(\"either -m MATRIX or -s EQNS is a required argument\")\n\n m2 = Matrix(parg, keepFraction)\n\n if (not s.isValid): s = None\n if (not m1.isValid): m1 = None\n if (not m2.isValid): m2 = None\n\n if (calculateInverse or calculateDeterminant) and (not m1.IsSquare()):\n raise Exception(\"Must specify square matrix to calculate inverse or determinant\")\n\n return (calculateInverse, calculateDeterminant, m1, m2, s)\n except Exception as e:\n print(e)\n usage()\n\n# Usage for this program\ndef usage():\n print('')\n print(sys.argv[0] + \" [options]\")\n print(\"Matrix is required:\")\n print(\" -m MATRIX, --matrix=MATRIX Square matrix formatted as a11,a12,...,a1n,a21,a22,...,a2n,...,an1,an2,...,ann\")\n print(\" Non-square formatted as a11,a12,...,a1n:a21,a22,...,a2n:...:am1,am2,...,amn:\")\n print(\" -s EQNS --system-of-equations=EQNS specify a system of equations to solve\")\n print(\" Format is: 2*a+3*b+1*c=4:5*a-1*c=1:...\")\n print(\"Options:\")\n print(\" -p MATRIX --matrix-multiply=MATRIX left multiply matrix provided by -m with the one 
provided by -p\")\n print(\" -i --inverse calculate the inverse of matrix\")\n print(\" -d --determinant calculate the determinant of matrix\")\n print(\" -v --verbose-hint show verbose hints for Guass Jordan elimination method\")\n print(\" -f --use-fraction Use fraction instead of decimals\")\n print(\" -h, --help show this help message and exit\")\n sys.exit(1)\n\nmain()\n"
},
{
"alpha_fraction": 0.4578651785850525,
"alphanum_fraction": 0.46808987855911255,
"avg_line_length": 33.230770111083984,
"blob_id": "eb4001610f357b95b3708b3b690c75285f0ed0b3",
"content_id": "e15fc1e83ee1f6f241e45c5e29a49bbfeb96e880",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8900,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 260,
"path": "/generic.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nfrom overrides import *\nimport sys, getopt, inspect\nfrom cards import Cards\nfrom hangman import Hangman\nfrom fraction import Fraction\nfrom matrix import Matrix\nfrom equation import LinearEquations\nfrom pause import Pause\n\ndef main():\n instance, klass, objectType = parseArgs()\n pauseKlass = globals()['Pause']\n\n print(\"Entering interactive mode for object {0:s}\".format(objectType))\n interactive(instance, klass, objectType, pauseKlass)\n\n sys.exit(0)\n\n# Interactive mode\ndef interactive(instance, klass, objectType, pauseKlass):\n print(\"Instance of class {0:s} is:\".format(objectType))\n instance.PrettyPrint()\n\n while True:\n print('')\n print(\"Do any of the following to the object of class {0:s}...\".format(objectType))\n method_list = [func for func in dir(klass) if callable(getattr(klass, func))]\n method_list.sort()\n actionableMethods = []\n for method in method_list:\n methodArgs = ''\n try:\n method_to_call = getattr(klass, method)\n\n argNames = []\n try:\n argNames = inspect.getfullargspec(method_to_call).args\n except Exception as e:\n if sys.version_info[0] < 3:\n argNames = inspect.getargspec(method_to_call).args\n\n if '__atype__' in argNames:\n argTypes = []\n try:\n defaults = inspect.getfullargspec(method_to_call).defaults\n argTypes = [arg.strip() for arg in defaults[len(defaults)-1].split(',')]\n except Exception as e:\n if sys.version_info[0] < 3:\n defaults = inspect.getargspec(method_to_call).defaults\n argTypes = [arg.strip() for arg in defaults[len(defaults)-1].split(',')]\n\n methodArgs = '('\n for argType in argTypes:\n if argType not in ['classobj', 'instanceobj', 'staticobj'] and argType.find('returns') == -1:\n methodArgs += argType.strip() + ','\n if len(methodArgs) == 1:\n methodArgs += ')'\n else:\n methodArgs = methodArgs[:-1] + ')'\n\n print(\" {0:s}{1:s} ...\".format(method, methodArgs), end='')\n 
print(argTypes[len(argTypes)-1].rjust(100-len(methodArgs)-len(method)))\n actionableMethods.append(method)\n\n except Exception as e:\n # ignore\n pass\n\n print(\" Exit() ...\", end='')\n print(\"fairly obvious what this does\".rjust(100-6))\n print('')\n actionableMethods.append('Exit')\n commandWithArgs = print_raw_input(\">>> \")\n command = [c for c in commandWithArgs.split('(')][0]\n\n if command not in actionableMethods:\n continue\n\n if command == 'Exit':\n return\n\n try:\n method_to_call = getattr(klass, command)\n\n argTypes = []\n try:\n defaults = inspect.getfullargspec(method_to_call).defaults\n argTypes = [arg.strip() for arg in defaults[len(defaults)-1].split(',')]\n except Exception as e:\n if sys.version_info[0] < 3:\n defaults = inspect.getargspec(method_to_call).defaults\n argTypes = [arg.strip() for arg in defaults[len(defaults)-1].split(',')]\n\n args = [a.strip() for a in commandWithArgs[len(command)+1:-1].split(',')]\n\n hasReturnVal = True\n argvals = []\n loop = -1\n for argType in argTypes:\n if argType == 'classobj':\n # do nothing\n pass\n\n elif argType == 'instanceobj':\n argvals.append(instance)\n\n elif argType == 'staticobj':\n # do nothing\n pass\n\n elif argType.find('returns') > -1:\n if argType.find('nothing') > -1:\n hasReturnVal = False\n\n else:\n loop += 1\n try:\n if args[loop].find('[') > -1:\n # Found a list\n listType = argType.split(':')[1]\n listval = [eval(\"{0:s}('{1:s}')\".format(listType, args[loop].split('[')[1]))]\n\n loop += 1\n while args[loop].find(']') == -1:\n listval.append(eval(\"{0:s}('{1:s}')\".format(listType, args[loop])))\n loop += 1\n\n listval.append(eval(\"{0:s}('{1:s}')\".format(listType, args[loop].split(']')[0])))\n argvals.append(listval)\n\n elif args[loop][:7] == 'Matrix(':\n # Found a matrix string\n s = args[loop][7:]\n\n while args[loop].find(')') == -1:\n loop += 1\n if args[loop] != ':':\n s += ','\n s += args[loop]\n\n if s == args[loop][7:]:\n s = args[loop][7:-1]\n else:\n s = 
s[:-1]\n\n argvals.append(s)\n\n else:\n argvals.append(eval(\"{0:s}('{1:s}')\".format(argType, args[loop])))\n\n except Exception as e:\n print(e)\n continue\n\n returnval = method_to_call(*argvals)\n\n if hasReturnVal and returnval is not None:\n if isinstance(returnval, klass):\n print(\"{0:s} {1:s} and result is\".format(commandWithArgs, argTypes[len(argTypes)-1]))\n returnval.PrettyPrint()\n\n elif isinstance(returnval, pauseKlass):\n pass\n\n else:\n print(\"{0:s} {1:s} and result is\".format(commandWithArgs, argTypes[len(argTypes)-1]), returnval)\n\n print_raw_input(\"Press Enter to continue...\")\n\n print_space()\n print(\"Current instance of class object {0:s} is\".format(objectType))\n instance.PrettyPrint()\n\n except Exception as e:\n print(e)\n continue\n\n return\n\ndef print_space():\n print('')\n print(\"------------------------------------------------------------------------------------\")\n\ndef parseArgs():\n objectType = ''\n\n # Get the inputs/arguments\n try:\n opts, args = getopt.getopt(sys.argv[1:], 'hc:', ['--class-type'])\n except getopt.GetoptError:\n usage()\n\n # Parse the arguments\n for opt, arg in opts:\n if opt in ('-h', '--help'):\n usage()\n elif opt in ('-c', '--class-type'):\n objectType = arg\n else:\n usage()\n\n try:\n if (objectType == ''):\n raise Exception(\"Required argument missing\")\n \n klass = globals()[objectType]\n\n if (objectType == 'Matrix'):\n print(\"Please provide a square matrix formatted as 'Matrix(a11,a12,...,a1n,a21,a22,...,a2n,...,an1,an2,...,ann)'\")\n marg = print_raw_input(\">>> \")\n\n instance = klass(marg[7:-1], True)\n\n elif (objectType == 'LinearEquations'):\n print(\"Specify a system of equations to solve formatted as 'LinearEquations(2*a+3*b+1*c=4:5*a-1*c=1:...)'\")\n sarg = print_raw_input(\">>> \")\n\n instance = klass(sarg[16:-1], True)\n\n elif (objectType == 'Fraction'):\n print('Specify a fraction formatted as a/b')\n farg = print_raw_input(\">>> \")\n\n instance = klass(farg, 
False)\n\n elif (objectType == 'Hangman'):\n instance = klass(True)\n\n else:\n instance = klass()\n\n return (instance, klass, objectType)\n\n except KeyError:\n print(\"Unknown class:\", objectType)\n usage()\n\n except Exception as e:\n print(e)\n usage()\n\ndef print_raw_input(s):\n try:\n from builtins import input\n return input(s)\n except ImportError:\n return raw_input(s)\n\n# Usage for this program\ndef usage():\n print('')\n print(sys.argv[0] + \" [options]\")\n print(\"Required:\")\n print(\" -c --class-type one of 'Matrix', 'LinearEquations', 'Fraction', 'Cards' or 'Hangman'\")\n print(\"Options:\")\n print(\" -h, --help show this help message and exit\")\n sys.exit(1)\n\nmain()\n"
},
{
"alpha_fraction": 0.5066176652908325,
"alphanum_fraction": 0.5226715803146362,
"avg_line_length": 36.17539978027344,
"blob_id": "cd039d71ee2fddf271600d22f6bf51ecaa066325",
"content_id": "ec85023ab5684ffb6764877ab6d5fbb1a4055ab2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 16320,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 439,
"path": "/matrix.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nfrom overrides import *\nimport math\nfrom fraction import Fraction\n\nclass Matrix:\n elements = []\n rSize = 0\n cSize = 0\n isValid = False\n keepFraction = False\n\n def __init__(self, s, keepFraction=False):\n self.elements = []\n self.rSize = 0\n self.cSize = 0\n self.isValid = False\n self.keepFraction = keepFraction\n\n if s is not \"\":\n rows = s.split(':')\n if len(rows) == 1:\n # Assume square matrix\n for element in s.split(','):\n try:\n if keepFraction:\n self.elements.append(Fraction.FromDecimal(float(element)))\n else:\n self.elements.append(float(element))\n except Exception as e:\n try:\n self.elements.append(Fraction(element))\n except Exception as e:\n self.elements.append(element)\n\n self.rSize = self.cSize = int(math.sqrt(len(self.elements)))\n if self.rSize ** 2 != len(self.elements):\n raise Exception(\"Expected square matrix\\nElse specify as a11,a12,...a1n:a21,a22,...a2n:...:am1,am2...amn:\")\n\n else:\n self.rSize = len(rows)\n if rows[self.rSize-1] == \"\":\n self.rSize -= 1\n for i in range(0, self.rSize):\n for element in rows[i].split(','):\n try:\n if keepFraction:\n self.elements.append(Fraction.FromDecimal(float(element)))\n else:\n self.elements.append(float(element))\n except Exception as e:\n try:\n self.elements.append(Fraction(element))\n except Exception as e:\n self.elements.append(element)\n\n self.cSize = int(len(self.elements)/self.rSize)\n if self.rSize*self.cSize != len(self.elements):\n raise Exception(\"Matrix isn't properly formatted\\nFormat as a11,a12,...,a1n:a21,a22,...,a2n:...:am1,am2,...,amn:\")\n\n self.isValid = True\n\n def __str__(self):\n return self.MatrixStr()\n\n def MatrixStr(self, __atype__=\"instanceobj, returns matrix as a string\"):\n s = \"\"\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n if self.keepFraction:\n s += str(self.elements[x])\n else:\n s += str(self.elements[x])\n if j != self.cSize-1:\n s 
+= ','\n if i != self.rSize-1:\n s += ':'\n\n return s\n\n # Return the element in rth row cth column\n def GetElement(self, r, c, __atype__=\"instanceobj, int, int, returns matrix element in row,column\"):\n return self.elements[r * self.cSize + c]\n\n # Set the element in rth row cth column to val\n def SetElement(self, r, c, val, __atype__=\"instanceobj, int, int, float, returns nothing sets matrix element in place\"):\n if type(val) is float and self.keepFraction:\n self.elements[r * self.cSize + c] = Fraction.FromDecimal(val)\n return\n\n self.elements[r * self.cSize + c] = val\n\n # Check if element value is val\n def IsValue(self, r, c, val, __atype__=\"instanceobj, int, int, float, returns boolean\"):\n try:\n return math.fabs(self.elements[r * self.cSize + c] - val) < 0.001\n except Exception as e:\n return self.elements[r * self.cSize + c] == val\n\n def MakeCopy(self, __atype__=\"instanceobj, returns a copy of the original matrix\"):\n m = Matrix.CreateBlank(self.rSize, self.cSize, self.keepFraction)\n for i in range(0, m.rSize * m.cSize):\n m.elements[i] = self.elements[i]\n\n return m\n\n # Return True if and only if it is a square matrix\n def IsSquare(self, __atype__=\"instanceobj, returns boolean\"):\n return self.rSize == self.cSize\n\n # Return True if and only m can be left multiplied by self\n def CanMultiply(self, m, __atype__=\"instanceobj, str, returns boolean\"):\n Matrix.MakeConsistent(self, m)\n if type(m) is builtin.str:\n return self.CanMultiply(Matrix(m, self.keepFraction))\n\n return self.cSize == m.rSize\n\n # Matrix multiplication - m left multiplied by self\n # Output is the product of the two matrices\n def Multiply(self, m, __atype__=\"instanceobj, str, returns product matrix\"):\n Matrix.MakeConsistent(self, m)\n if type(m) is builtin.str:\n return self.Multiply(Matrix(m, self.keepFraction))\n\n if not self.CanMultiply(m):\n raise Exception(\"Cannot multiply these two matrices\")\n\n p = Matrix.CreateBlank(self.rSize, m.cSize, 
self.keepFraction)\n x = 0\n for i in range(0, self.rSize):\n for j in range(0, m.cSize):\n element = 0\n for k in range(0, self.cSize):\n selfIndex = i * self.cSize + k\n mIndex = k * m.cSize + j\n element = self.elements[selfIndex] * m.elements[mIndex] + element\n\n x = i * m.cSize + j\n p.elements[x] = element\n\n return p\n\n # Find determinant - a recursive function\n def Determinant(self, __atype__=\"instanceobj, returns determinant\"):\n if not self.IsSquare():\n raise Exception(\"Cannot calculate determinant of non-square matrix\")\n\n if self.rSize == 1:\n return self.elements[0]\n\n value = 0\n sign = 1\n for i in range(0, self.rSize):\n value = (self.elements[i] * self.GetSubmatrix(0, i).Determinant() * sign) + value\n sign *= -1\n\n return value\n\n # Return matrix of minors - only for a square matrix\n def MatrixOfMinors(self, __atype__=\"instanceobj, returns matrix of minors\"):\n if not self.IsSquare():\n raise Exception(\"Cannot calculate matrix of minors of non-square matrix\")\n\n minors = Matrix.CreateBlank(self.rSize, self.cSize, self.keepFraction)\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n minor = self.GetSubmatrix(i, j).Determinant()\n x = i * self.cSize + j\n minors.elements[x] = minor\n\n return minors\n\n # Return matrix of cofactors - only for a square matrix\n def MatrixOfCofactors(self, __atype__=\"instanceobj, returns cofactors matrix\"):\n if not self.IsSquare():\n raise Exception(\"Cannot calculate matrix of cofactors of non-square matrix\")\n\n cofactors = Matrix.CreateBlank(self.rSize, self.cSize, self.keepFraction)\n sign1 = sign2 = 1\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n cofactors.elements[x] = self.elements[x] * sign1 * sign2\n sign2 *= -1\n sign2 = 1\n sign1 *= -1\n\n return cofactors\n\n # Row Reduction operation - done in place within the matrix\n # Inputs:\n # 1. Row 1 (r1) to be manipulated\n # 2. 
Row 2 (r2) using row2 during manipulation, if none this is passed as -1, always r1 - r2\n # 3. Multiplier (m1) to apply to row 1, if 1 simple subtraction, if -1 simple addition\n # 4. Multiplier (m2) to apply to row 2, if 1 simple subtraction, if -1 simple addition\n def RowReduce(self, r1, r2, m1, m2):\n for i in range(0, self.cSize):\n x1 = r1 * self.cSize + i\n if r2 == -1:\n self.elements[x1] = self.elements[x1] * m1\n else:\n x2 = r2 * self.cSize + i\n self.elements[x1] = self.elements[x1] * m1 - self.elements[x2] * m2\n\n # Transpose of a matrix\n def Transpose(self, __atype__=\"instanceobj, returns transpose\"):\n t = Matrix.CreateBlank(self.cSize, self.rSize, self.keepFraction)\n\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n tx = j * self.rSize + i\n t.elements[tx] = self.elements[x]\n\n return t\n\n # Return Inverse of Matrix - must be square\n def Inverse(self, __atype__=\"instanceobj, returns inverse\"):\n if not self.IsSquare():\n raise Exception(\"Cannot calculate inverse of non-square matrix\")\n\n det = self.Determinant()\n if det == 0:\n raise Exception(\"Inverse doesn't exist for matrix\")\n\n inv = self.MatrixOfMinors().MatrixOfCofactors().Transpose()\n inv.ScalarMultiply(det ** -1)\n return inv\n\n # Any 2 matrices\n def IsEqual(self, m, __atype__=\"instanceobj, str, returns boolean if matrix is equal to one specified by str\"):\n if len(self.elements) != len(m.elements):\n return False\n\n if self.rSize != m.rSize:\n return False\n\n for i in range(0, len(self.elements)):\n try:\n if math.fabs(self.elements[i] - m.elements[i]) > 0.001: return False\n except Exception as e:\n if self.elements[i] != m.elements[i]: return False\n\n return True\n\n # Remove row and column from this matrix and return remaining matrix\n def GetSubmatrix(self, row, column):\n s = Matrix.CreateBlank(self.rSize-1, self.cSize-1, self.keepFraction)\n sx = 0;\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x 
= i * self.cSize + j\n if i != row and j != column:\n s.elements[sx] = self.elements[x]\n sx += 1\n\n return s\n\n # Multiply matrix with a scalar\n def ScalarMultiply(self, val, __atype__=\"instanceobj, float, returns nothing but multiplies matrix in place with float\"):\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n self.elements[x] = self.elements[x] * val\n\n # Returns true if it is an identity (and therefore square) matrix\n def IsIdentityMatrix(self, __atype__=\"instanceobj, returns boolean\"):\n if not self.IsSquare(): return False\n\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n if i == j:\n try:\n if math.fabs(self.elements[x]-1.0) > 0.001: return False\n except Exception as e:\n if self.elements[x] != 1: return False\n else:\n try:\n if math.fabs(self.elements[x]-0.0) > 0.001: return False\n except Exception as e:\n if self.elements[x] != 0: return False\n\n return True\n\n # Get the size of the largest element in matrix - for pretty printing purposes\n def GetLargestSize(self):\n l = 2\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n if len(str(self.elements[x])) > l:\n l = len(str(self.elements[x]))\n\n return l\n\n # Convert elements from float to fraction\n def ToFraction(self):\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n if type(self.elements[x]) is float:\n self.elements[x] = Fraction.FromDecimal(self.elements[x])\n\n\n # Pretty print a matrix, aligning rows and columns\n def PrettyPrint(self, s='', __atype__=\"instanceobj, str, takes string - returns nothing prints matrix nicely\"):\n if s != '':\n m = Matrix(s, self.keepFraction)\n return m.PrettyPrint()\n\n m = Matrix.CreateBlank(self.rSize, self.cSize, self.keepFraction)\n for i in range(0, self.rSize):\n for j in range(0, self.cSize):\n x = i * self.cSize + j\n m.elements[x] = str(self.elements[x])\n\n if 
m.elements[x] == '-0.00':\n m.elements[x] = '0.00'\n\n just = m.GetLargestSize() + 1\n for i in range(0, m.rSize):\n for j in range(0, m.cSize):\n x = i * m.cSize + j\n print(m.elements[x].rjust(just), end='')\n print()\n\n # Pretty print two matrices side by side\n @classmethod\n def PrettyPrintTwoMatrices(cls, matrix1, matrix2, __atype__=\"classobj, str, str, returns nothing but prints 2 matrices\"):\n if type(matrix1) is builtin.str:\n matrix1 = Matrix(matrix1, True)\n\n if type(matrix2) is builtin.str:\n matrix2 = Matrix(matrix2, True)\n\n m1 = Matrix.CreateBlank(matrix1.rSize, matrix1.cSize, matrix1.keepFraction)\n for i in range (0, matrix1.rSize):\n for j in range(0, matrix1.cSize):\n x = i * matrix1.cSize + j\n m1.elements[x] = str(matrix1.elements[x])\n\n if m1.elements[x] == '-0.00':\n m1.elements[x] = '0.00'\n\n m2 = Matrix.CreateBlank(matrix2.rSize, matrix2.cSize, matrix2.keepFraction)\n for i in range (0, matrix2.rSize):\n for j in range(0, matrix2.cSize):\n x = i * matrix2.cSize + j\n m2.elements[x] = str(matrix2.elements[x])\n\n if m2.elements[x] == '-0.00':\n m2.elements[x] = '0.00'\n\n just1 = m1.GetLargestSize() + 1\n just2 = m2.GetLargestSize() + 1\n for i in range(0, max(matrix1.rSize, matrix2.rSize)):\n for j in range(0, m1.cSize):\n x = i * m1.cSize + j\n if i < m1.rSize:\n print(m1.elements[x].rjust(just1), end='')\n else:\n print(''.rjust(just1), end='')\n\n print(\" \", end='')\n for j in range(0, m2.cSize):\n x = i * m2.cSize + j\n if i < m2.rSize:\n print(m2.elements[x].rjust(just2), end='')\n else:\n print(''.rjust(just2), end='')\n print()\n\n # Returns a square matrix\n @classmethod\n def GetIdentityMatrix(cls, size, keepFraction=False, __atype__=\"classobj, int, returns identity matrix\"):\n m = Matrix.CreateBlank(size, size, keepFraction)\n\n for i in range(0, size):\n for j in range (0, size):\n x = i * size + j\n if (i == j):\n m.elements[x] = 1.0\n if keepFraction:\n m.elements[x] = Fraction('1/1')\n else:\n m.elements[x] = 0.0\n 
if keepFraction:\n m.elements[x] = Fraction('0/1')\n\n return m\n\n # Get row row from self and column column from m\n @classmethod\n def GetRowColumn(cls, m1, m2, row, column):\n r = []\n c = []\n Matrix.MakeConsistent(m1, m2)\n for i in range(0, m1.cSize):\n x = row * m1.cSize + i\n r.append(m1.elements[x])\n\n for i in range(0, m2.rSize):\n x = column + i * m2.cSize\n c.append(m2.elements[x])\n\n return (r, c)\n\n # Creates a blank rxc matrix with '...' as placeholders\n @classmethod\n def CreateBlank(cls, rSize, cSize, keepFraction=False):\n if rSize == 0 or cSize == 0:\n return None\n\n b = \"\"\n for i in range(0, rSize):\n for j in range(0, cSize):\n b += \"...\"\n if j == cSize-1:\n b += ':'\n else:\n b += ','\n\n return Matrix(b, keepFraction)\n\n # If either matrix is of type fraction, make the other one as well\n @classmethod\n def MakeConsistent(cls, m1, m2):\n if not m1.keepFraction and not m2.keepFraction:\n return\n\n if not m1.keepFraction:\n m1.ToFraction()\n return\n\n m2.ToFraction()\n return\n"
},
{
"alpha_fraction": 0.5601796507835388,
"alphanum_fraction": 0.5727308988571167,
"avg_line_length": 35.0428581237793,
"blob_id": "843d750b043da8027badf907e857e5d224601e06",
"content_id": "cfdc56c87a4b662731eb00338581fcc25fadbe39",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7569,
"license_type": "no_license",
"max_line_length": 142,
"num_lines": 210,
"path": "/shuffle.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from cards import Cards\nimport sys, getopt\n\ndef main():\n deck, interactiveMode, runComparison, shuffleType, numShuffles = parseArgs()\n\n if interactiveMode:\n print(\"Entering interactive mode, all other arguments are ignored\")\n interactive(deck, shuffleType, numShuffles, 100)\n\n elif runComparison:\n print(\"Running a comparison test on deck:\")\n deck.PrettyPrint()\n print_space()\n print(\"How many shuffles of Normal, Riffle and Perfect should we do?\")\n\n nCount = 1000\n try:\n nCount = int(print_raw_input(\"No. of Normal shuffles >> \"))\n except Exception as e:\n print(\"Invalid input. Setting number of Normal shuffles to 1000\")\n\n rCount = 7\n try:\n rCount = int(print_raw_input(\"No. of Riffle shuffles >> \"))\n except Exception as e:\n print(\"Invalid input. Setting number of Riffle shuffles to 7\")\n\n pCount = 100\n try:\n pCount = int(print_raw_input(\"No. of Perfect shuffles >> \"))\n except Exception as e:\n print(\"Invalid input. Setting numnber of Perfect shuffles to 100\")\n\n nResult = deck.ShuffleAndTestMany(nCount, 100, 'Normal')\n rResult = deck.ShuffleAndTestMany(rCount, 100, 'Riffle')\n pResult = deck.ShuffleAndTestMany(pCount, 100, 'Perfect')\n\n print_space()\n\n print(\"After {0:d} shuffles of Normal shuffle, number of cards correctly predicted on average is {1:.2f}\".format(nCount, nResult))\n print(\"After {0:d} shuffles of Riffle shuffle, number of cards correctly predicted on average is {1:.2f}\".format(rCount, rResult))\n print(\"After {0:d} shuffles of Perfect shuffle, number of cards correctly predicted on average is {1:.2f}\".format(pCount, pResult))\n\n else:\n print(\"Running a {0:s} shuffle operation {1:d} times on deck:\".format(shuffleType, numShuffles))\n deck.PrettyPrint()\n print_space()\n\n deck.ShuffleMany(numShuffles, shuffleType)\n\n print_raw_input(\"After the operation, deck is:...\")\n deck.PrettyPrint()\n\n print_space()\n\n sys.exit(0)\n\n# Interactive mode\ndef interactive(deck, shuffleType, 
numShuffles, numTests):\n useShuffleType = False\n useNumShuffles = False\n useNumTests = False\n\n print(\"A deck of cards fresh from the factory has arrived and it is sorted as below:\")\n deck.PrettyPrint()\n\n while True:\n print('')\n print(\"Do any of the following to the deck...\")\n method_list = [func for func in dir(Cards) if callable(getattr(Cards, func))\n and not func.startswith(\"__\")\n and not func.startswith(\"Pop\")]\n method_list.sort()\n for method in method_list:\n print(\" {0:s}\".format(method))\n print(\" Exit\")\n print('')\n command = print_raw_input(\">>> \")\n\n if command == '':\n continue\n\n if command == 'Exit':\n return\n\n if command == \"Shuffle\" or command == 'ShuffleMany' or command == 'ShuffleAndTestMany':\n print('')\n print(\"Choose one of the following shuffle types - Riffle, Normal or Perfect - defaults to Riffle\")\n shuffleType = print_raw_input(\"Shuffle type >>> \")\n if (shuffleType != 'Riffle' and shuffleType != 'Normal' and shuffleType != 'Perfect'):\n shuffleType = 'Riffle'\n\n if command == 'ShuffleMany' or command == 'ShuffleAndTestMany':\n print('')\n print(\"How many times would you like to shuffle the deck? - defaults to 1\")\n try:\n numShuffles = int(print_raw_input(\"Number of shuffles >>> \"))\n if numShuffles < 0 or numShuffles > 10000:\n print(\"Number of shuffles {0:d} outside range (0,10000) - defaulting to 1\".format(numShuffles))\n numShuffles = 1\n\n except Exception as e:\n numShuffles = 1\n\n if command == 'ShuffleAndTestMany':\n print('')\n print(\"How many times would you like to test the shuffiliness of the deck? 
- defaults to 100\")\n try:\n numTests = int(print_raw_input(\"Number of Tests >>> \"))\n if numTests < 1 or numTests > 1000:\n print(\"Number of shuffles {0:d} outside range (1,1000) - defaulting to 100\".format(numTests))\n numTests = 100\n\n except Exception as e:\n numTests = 100\n\n print_space()\n\n method_to_call = getattr(Cards, command)\n\n if command == 'Test' or command == 'ShuffleAndTestMany':\n if command == 'ShuffleAndTestMany':\n result = method_to_call(deck, numShuffles, numTests, shuffleType)\n print(\"After shuffling with the {0:s} shuffle {1:d} times and running {2:d} tests\".format(shuffleType, numShuffles, numTests))\n else:\n result = method_to_call(deck)\n\n print(\"Number of cards correctly predicted in the shuffled deck is: {0:.2f}\".format(result))\n print_space()\n\n elif command == 'ShuffleMany':\n method_to_call(deck, numShuffles, shuffleType)\n\n elif command == 'Shuffle':\n method_to_call(deck, shuffleType)\n\n elif command != 'PrettyPrint':\n method_to_call(deck)\n\n print_raw_input(\"After the operation <{0:s}>, the deck of cards is now arranged as...\".format(command))\n deck.PrettyPrint()\n\n return\n\ndef print_raw_input(s):\n try:\n from builtins import input\n return input(s)\n except ImportError:\n return raw_input(s)\n\ndef print_space():\n print('')\n print(\"------------------------------------------------------------------------------------\")\n\ndef parseArgs():\n interactiveMode = False\n runComparison = False\n numShuffles = 1\n numShufflesStr = '1'\n shuffleType = 'Riffle'\n\n # Get the inputs/arguments\n try:\n opts, args = getopt.getopt(sys.argv[1:], 'hics:n:', ['--shuffle-type', '--number-of-shuffles'])\n except getopt.GetoptError:\n usage()\n\n # Parse the arguments\n for opt, arg in opts:\n if opt in ('-h', '--help'):\n usage()\n elif opt in ('-i', '--interactive'):\n interactiveMode = True\n elif opt in ('-s', '--shuffle-type'):\n shuffleType = arg\n elif opt in ('-n', '--number-of-shuffles'):\n numShufflesStr = 
arg\n elif opt in ('-c', '--compare-shuffles'):\n runComparison = True\n else:\n usage()\n\n try:\n deck = Cards()\n\n if (shuffleType != 'Riffle' and shuffleType != 'Normal' and shuffleType != 'Perfect'):\n raise Exception(\"Invalid shuffle type {0:s}\".format(shuffleType))\n\n numShuffles = int(numShufflesStr)\n\n return (deck, interactiveMode, runComparison, shuffleType, numShuffles)\n except Exception as e:\n print(e)\n usage()\n\n# Usage for this program\ndef usage():\n print('')\n print(sys.argv[0] + \" [options]\")\n print(\"Options:\")\n print(\" -i --interactive interactive mode\")\n print(\" -s --shuffle-type one of 'Normal', 'Riffle' or 'Perfect' - default: 'Riffle'\")\n print(\" -n --number-of-shuffles number of shuffles - default: 1\")\n print(\" -c --compare-shuffles compare effectiveness of the 3 shuffles\")\n print(\" -h, --help show this help message and exit\")\n sys.exit(1)\n\nmain()\n"
},
{
"alpha_fraction": 0.5103174448013306,
"alphanum_fraction": 0.5354875326156616,
"avg_line_length": 29,
"blob_id": "8c0b74bd1625314bfb52b98e093b8181ab301c84",
"content_id": "b2fc010eda23a1dc94d07912d711126c2ffdf65b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 8820,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 294,
"path": "/fraction.c",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <math.h>\n#include <unistd.h>\n\n#define CF_MAX_SIZE 16\n\n#define MAX(m, n) \\\n (m >= n ? m : n)\n\n#define MIN(m, n) \\\n (m < n ? m : n)\n\n#define ACCURACY 0.00001\n\nlong factorial(n) {\n if (n == 1 || n == 0)\n return 1;\n\n return n*factorial(n-1);\n}\n\ndouble e() {\n double d = (double) 2;\n for (int i=2; i<=CF_MAX_SIZE; i++)\n d += (double) 1/(double) factorial(i);\n\n return d;\n}\n\ndouble pi() {\n double d = (double) 0;\n for (int i=0; i<=CF_MAX_SIZE; i++)\n d += (double)(factorial(4*i)*(1103+26390*i))/(double)(pow(factorial(i), 4)*pow(396,4*i));\n\n return (double) 9801 / (d * (double) 2 * (double) pow(2, 0.5));\n}\n\nvoid PrintContinuedFraction(long *continuedFraction, int sign) {\n if (sign < 0)\n printf(\"Continued Fraction: -[%ld\", continuedFraction[0]);\n else\n printf(\"Continued Fraction: [%ld\", continuedFraction[0]);\n\n for (int i=1; i<CF_MAX_SIZE; i++) {\n if (continuedFraction[i] != -1) {\n if (continuedFraction[i+1] == 1 && i+1 < CF_MAX_SIZE && continuedFraction[i+2] == -1) {\n continuedFraction[i] += 1;\n continuedFraction[i+1] = -1;\n }\n if (i == 1)\n printf(\"; %ld\", continuedFraction[i]);\n else\n printf(\", %ld\", continuedFraction[i]);\n }\n }\n if (continuedFraction[CF_MAX_SIZE] == -1)\n printf(\"]\\n\");\n else\n printf(\", %ld, ...(truncated)]\\n\", continuedFraction[CF_MAX_SIZE]);\n}\n\nlong GCD(long m, long n) {\n if (MAX(m, n) % MIN(m, n) == 0) return MIN(m, n);\n return GCD(MIN(m, n), MAX(m, n) % MIN(m, n));\n}\n\nlong* GenerateContinuedFraction(double decimal) {\n long *continuedFraction = (long *) malloc(sizeof(long) * CF_MAX_SIZE+1);\n for (int i=0; i<=CF_MAX_SIZE; i++) {\n continuedFraction[i] = (long) -1;\n }\n\n long whole = (long) decimal;\n double remaining = (double) decimal - (double) whole;\n continuedFraction[0] = whole;\n\n int loop=0;\n while (remaining > ACCURACY && loop<CF_MAX_SIZE) {\n loop++;\n double reciprocal = (double) 1/ 
(double) remaining;\n whole = (long) reciprocal;\n remaining = (double) reciprocal - (double) whole;\n continuedFraction[loop] = whole;\n }\n\n return continuedFraction;\n}\n\nlong* RollupContinuedFraction(long *continuedFraction, int sign) {\n long *fraction = (long *) malloc(sizeof(long) * 2);\n fraction[0] = 0;\n fraction[1] = 1;\n int first = 1;\n\n for (int i=CF_MAX_SIZE; i>=0; i--) {\n if (continuedFraction[i] == -1) continue;\n\n if (i == 0) {\n if (continuedFraction[i] != 0) {\n fraction[0] += continuedFraction[i] * fraction[1];\n int gcd = GCD(fraction[0], fraction[1]);\n fraction[0] /= gcd;\n fraction[1] /= gcd;\n }\n fraction[0] *= sign;\n free(continuedFraction);\n return fraction;\n }\n\n if (first) {\n first = 0;\n fraction[0] = 1;\n fraction[1] = continuedFraction[i];\n } else {\n long nr = fraction[0] + continuedFraction[i] * fraction[1];\n fraction[0] = fraction[1];\n fraction[1] = nr;\n int gcd = GCD(fraction[0], fraction[1]);\n fraction[0] /= gcd;\n fraction[1] /= gcd;\n }\n }\n\n free(continuedFraction);\n return fraction;\n}\n\nlong* FromDecimal(double d) {\n int sign = 1;\n if (d < 0) sign = -1;\n long *continuedFraction = GenerateContinuedFraction(d * (double) sign);\n PrintContinuedFraction(continuedFraction, sign);\n return RollupContinuedFraction(continuedFraction, sign);\n}\n\ndouble ConvertToDecimal(char *s) {\n int lensqrt = strlen(\"sqrt(\");\n int lencurt = strlen(\"curt(\");\n\n if (strcmp(s, \"pi\") == 0) {\n printf(\"pi is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) pi();\n }\n\n else if (strcmp(s, \"e\") == 0) {\n printf(\"e is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) e();\n }\n\n else if (strcmp(s, \"golden\") == 0) {\n printf(\"The golden ratio is irrational. 
So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1+pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"golden ratio\") == 0) {\n printf(\"The golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1+pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"golden_ratio\") == 0) {\n printf(\"The golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1+pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"gr\") == 0) {\n printf(\"The golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1+pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"phi\") == 0) {\n printf(\"The golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1+pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"little golden\") == 0) {\n printf(\"The little golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1-pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"little golden ratio\") == 0) {\n printf(\"The little golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1-pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"little golden_ratio\") == 0) {\n printf(\"The little golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1-pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"lgr\") == 0) {\n printf(\"The little golden ratio is irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1-pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"little phi\") == 0) {\n printf(\"The little golden ratio is irrational. 
So fractional form doesn't exist but here is an approximation\\n\");\n return (double) (1-pow(5,0.5))/2;\n }\n\n else if (strcmp(s, \"i^i\") == 0) {\n printf(\"Yeah, i^i is real but irrational. So fractional form doesn't exist but here is an approximation\\n\");\n return (double) pow(e(), -1*pi()/2);\n }\n\n else if (strncmp(s, \"sqrt(\", lensqrt) == 0) {\n long n;\n sscanf(s+lensqrt, \"%ld\", &n);\n if (n < 0) {\n printf(\"No square roots of a negative number, please\\n\");\n exit(1);\n }\n double d = pow(n, 0.5);\n if (d - (long) d > 0)\n printf(\"sqrt(%ld) is irrational. So fractional form doesn't exist but here is an approximation\\n\", n);\n return d;\n }\n\n else if (strncmp(s, \"curt(\", lensqrt) == 0) {\n long n;\n sscanf(s+lencurt, \"%ld\", &n);\n if (n < 0) {\n printf(\"No cube roots of a negative number, please\\n\");\n exit(1);\n }\n double d = pow(n, ((double) 1/(double) 3));\n if (d - (long) d > 0)\n printf(\"cuberoot(%ld) is irrational. So fractional form doesn't exist but here is an approximation\\n\", n);\n return d;\n }\n\n else {\n int pipe1[2];\n int pipe2[2];\n pipe(pipe1);\n pipe(pipe2);\n\n pid_t parent = getpid();\n pid_t child = fork();\n if (child == -1) {\n printf(\"Something went wrong...exiting\\n\");\n exit(-1);\n }\n else if (child > 0) {\n // parent\n close(pipe1[0]);\n close(pipe2[1]);\n\n char _s[strlen(s)+2];\n sprintf(_s, \"%s\\n\", s);\n write(pipe1[1], _s, sizeof(_s)-1);\n\n char result[256];\n read(pipe2[0], result, sizeof(result));\n\n close(pipe1[1]);\n close(pipe2[0]);\n\n int status;\n waitpid(child, &status, 0);\n\n double d;\n sscanf(result, \"%lf\", &d);\n return d;\n }\n else {\n close(pipe1[1]);\n close(pipe2[0]);\n dup2(pipe1[0], 0);\n dup2(pipe2[1], 1);\n close(pipe1[0]);\n close(pipe2[1]);\n\n execlp(\"bc\", \"bc\", \"-l\", NULL);\n exit(-1);\n }\n }\n}\n\nint main() {\n char dStr[256];\n double decimal;\n long *fraction;\n\n printf(\"Enter decimal to convert to fraction: \");\n scanf(\"%s\", dStr);\n decimal = 
ConvertToDecimal(dStr);\n fraction = FromDecimal(decimal);\n printf(\"Fraction: %ld/%ld\\n\", fraction[0], fraction[1]);\n free(fraction);\n\n return 0;\n}\n"
},
{
"alpha_fraction": 0.4429701566696167,
"alphanum_fraction": 0.45458024740219116,
"avg_line_length": 32.20762634277344,
"blob_id": "f3bf42b6b0ab0b410a4b3fd5cf8c6053d68909ae",
"content_id": "d2486d4f588bbbc5f363f20646c2ae11c3b18355",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7838,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 236,
"path": "/equation.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nfrom overrides import *\nimport math\nfrom matrix import Matrix\nfrom collections import OrderedDict\n\nclass LinearEquations:\n A = None\n B = None\n X = None\n nEquations = 0\n nVariables = 0\n isValid = False\n keepFraction = False\n\n def __init__(self, eString, keepFraction=False):\n if eString == \"\":\n return\n\n # array if equation dictionary\n equations = []\n\n # split equation string in the the different equations\n eqns = eString.split(':')\n self.nEquations = len(eqns)\n if self.nEquations == 0 or (self.nEquations == 1 and eqns[0] == \"\"):\n raise Exception(\"No equations in input\\nFormat for equations is: 2*a+3*b+1*c=4:5*a-1*c=1:...\")\n\n # ignore the last ':'\n if eqns[self.nEquations-1] == \"\":\n eqns.pop()\n self.nEquations -= 1\n\n # for each equation\n for eqn in eqns:\n # single equation dictionary\n equation = OrderedDict()\n\n # split the equation into the lhs and rhs\n parts = eqn.split('=')\n assert (len(parts) == 2), \"Malformed equation {0:s}\".format(eqn)\n lhs = parts[0]\n rhs = parts[1]\n\n # parse a single equation string\n c = 0\n np = lhs.find('+', c+1)\n nm = lhs.find('-', c+1)\n while not (nm == -1 and np == -1 and c == len(lhs)):\n n = min(nm, np)\n if n == -1: n = max(nm, np)\n if n == -1: n = len(lhs)\n\n s = lhs[c:n]\n coeff = s.split(\"*\")\n assert (len(coeff) == 2), \"Malformed coefficient {0:s}\".format(s)\n equation[coeff[1]] = coeff[0]\n\n c = n\n np = lhs.find('+', c+1)\n nm = lhs.find('-', c+1)\n\n equation['rhs'] = rhs\n equations.append(equation)\n\n variables = []\n for equation in equations:\n for var in list(equation.keys()):\n if var not in variables and var is not 'rhs':\n variables.append(var)\n\n variables.sort()\n varStr = \"\"\n for var in variables:\n varStr += \"{0:s}:\".format(var)\n\n coeffStr = \"\"\n rhsStr = \"\"\n for equation in equations:\n for i in range(0, len(variables)):\n var = variables[i]\n if var in list(equation.keys()):\n coeffStr += 
equation[var]\n                else:\n                    coeffStr += '0'\n                if i != len(variables)-1:\n                    coeffStr += \",\"\n\n            rhsStr += equation['rhs']\n            rhsStr += \":\"\n            coeffStr += \":\"\n\n        self.A = Matrix(coeffStr, keepFraction)\n        self.X = Matrix(varStr, False)\n        self.B = Matrix(rhsStr, keepFraction)\n        self.isValid = True\n        self.nVariables = len(variables)\n        self.keepFraction = keepFraction\n\n        if self.nVariables != self.nEquations:\n            raise Exception(\"Numbers of variables ({0:d}) and number of equations don't match ({1:d})\".format(self.nVariables, self.nEquations))\n\n    def __str__(self):\n        return self.LinearEquationsStr()\n\n    def LinearEquationsStr(self, __atype__=\"instanceobj, returns system of equations as a string\"):\n        s = \"\"\n        coefficients = self.A.MatrixStr().split(':')\n        rhs = self.B.MatrixStr().split(':')\n        variables = self.X.MatrixStr().split(':')\n\n        for i in range(0, self.nEquations):\n            eqnCoeff = coefficients[i].split(',')\n            for j in range(0, self.nVariables):\n                if eqnCoeff[j] == '0':\n                    continue\n\n                s += eqnCoeff[j] + \"*\" + variables[j]\n                if j != self.nVariables-1:\n                    if eqnCoeff[j+1][:1] != '-':\n                        s += \"+\"\n\n            s += \"=\" + rhs[i]\n            if i != self.nEquations - 1:\n                s += ':'\n\n        return s\n\n    def Solve(self, __atype__=\"instanceobj, returns solution matrix to the system of equations\"):\n        inv = self.A.Inverse()\n        return inv.Multiply(self.B)\n\n    def CheckSolution(self, soln, __atype__=\"instanceobj, Matrix, returns boolean if solution is correct\"):\n        if type(soln) is builtin.str:\n            return self.CheckSolution(Matrix(soln, self.keepFraction))\n\n        if soln.rSize != self.nVariables or soln.cSize != 1:\n            raise Exception(\"Solution matrix should be a {0:d}x1 matrix\".format(self.nVariables))\n\n        return self.A.Multiply(soln).IsEqual(self.B)\n\n    def PrettyPrint(self, s='', __atype__=\"instanceobj, str, returns nothing just prints\"):\n        if s != '':\n            try:\n                e = LinearEquations(s, self.keepFraction)\n                return e.PrettyPrint()\n            except Exception as e:\n                m = Matrix(s, self.keepFraction)\n                
return m.PrettyPrint()\n\n a = Matrix.CreateBlank(self.nEquations, self.nVariables)\n for i in range (0, a.rSize):\n for j in range(0, a.cSize):\n ix = i * a.cSize + j\n a.elements[ix] = str(self.A.elements[ix])\n\n if a.elements[ix] == '-0.00':\n a.elements[ix] = '0.00'\n\n x = Matrix.CreateBlank(self.nVariables, 1)\n for i in range (0, x.rSize):\n for j in range(0, x.cSize):\n ix = i * x.cSize + j\n x.elements[ix] = str(self.X.elements[ix])\n\n if x.elements[ix] == '-0.00':\n x.elements[ix] = '0.00'\n\n b = Matrix.CreateBlank(self.nEquations, 1)\n for i in range (0, b.rSize):\n for j in range(0, b.cSize):\n ix = i * b.cSize + j\n b.elements[ix] = str(self.B.elements[ix])\n\n if b.elements[ix] == '-0.00':\n b.elements[ix] = '0.00'\n\n just1 = a.GetLargestSize() + 1\n just2 = x.GetLargestSize() + 1\n just3 = b.GetLargestSize() + 1\n for i in range(0, max(a.rSize, x.rSize, b.rSize)):\n if i < a.rSize:\n print(\"|\", end=' ')\n else:\n print(\" \", end=' ')\n for j in range(0, a.cSize):\n ix = i * a.cSize + j\n if i < a.rSize:\n print(a.elements[ix].rjust(just1), end=' ')\n else:\n print(''.rjust(just1), end=' ')\n if i < a.rSize:\n print(\"|\", end=' ')\n else:\n print(\" \", end=' ')\n\n if i == round(a.rSize/2):\n print(\" * \", end=' ')\n else:\n print(\" \", end=' ')\n\n if i < x.rSize:\n print(\"|\", end=' ')\n else:\n print(\" \", end=' ')\n for j in range(0, x.cSize):\n ix = i * x.cSize + j\n if i < x.rSize:\n print(x.elements[ix].rjust(just2), end=' ')\n else:\n print(''.rjust(just2), end=' ')\n if i < x.rSize:\n print(\"|\", end=' ')\n else:\n print(\" \", end=' ')\n\n if i == round(a.rSize/2):\n print(\" = \", end=' ')\n else:\n print(\" \", end=' ')\n\n if i < b.rSize:\n print(\"|\", end=' ')\n else:\n print(\" \", end=' ')\n for j in range(0, b.cSize):\n ix = i * b.cSize + j\n if i < b.rSize:\n print(b.elements[ix].rjust(just3), end=' ')\n else:\n print(''.rjust(just3), end=' ')\n if i < b.rSize:\n print(\"|\", end=' ')\n else:\n print(\" \", end=' ')\n 
print()\n\n"
},
{
"alpha_fraction": 0.5251471996307373,
"alphanum_fraction": 0.5370461344718933,
"avg_line_length": 36.56221008300781,
"blob_id": "e914ccb2914cd9d8dc8fb1b62958035548355370",
"content_id": "26bd3a63d1ab11a799b2450f6c82619b71d66860",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8152,
"license_type": "no_license",
"max_line_length": 177,
"num_lines": 217,
"path": "/cards.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nimport sys, random\nfrom colors import bcolors\n\nclass Cards:\n deckSize = 52\n numSuits = 4\n cardsPerSuit = 13\n cards = []\n isOrdered = False\n numShuffles = 0\n correctGuesses = -1\n isValid = False\n\n def __init__(self):\n self.Reset()\n\n def Reset(self, __atype__=\"instanceobj, returns nothing - resets deck\"):\n self.cards = []\n self.numSuits = 4\n self.cardsPerSuit = 13\n self.deckSize = self.numSuits * self.cardsPerSuit\n for i in range(0, self.numSuits):\n for j in range(0, self.cardsPerSuit):\n card = i*self.cardsPerSuit + j\n self.cards.append(card)\n\n self.isValid = True\n self.correctGuesses = -1\n self.numShuffles = 0\n self.isOrdered = True\n\n def Pop(self):\n card = self.cards[0]\n for i in range(1, self.deckSize):\n self.cards[i-1] = self.cards[i]\n\n self.cards[self.deckSize-1] = card\n\n return card\n\n def Test(self, __atype__=\"instanceobj, returns result of a guessing test\"):\n if self.correctGuesses > -1:\n return self.correctGuesses\n\n self.correctGuesses = 0\n newDeck = Cards()\n seenCards = []\n card = newDeck.Pop()\n for i in range(0, self.deckSize):\n while card in seenCards:\n card = newDeck.Pop()\n\n if card == self.cards[i]:\n self.correctGuesses += 1\n\n seenCards.append(self.cards[i])\n\n return float(self.correctGuesses)\n\n def ShuffleAndTestMany(self, numShuffles=1, numTests=100, sType='Riffle', __atype__=\"instanceobj, int, int, str, returns result of a series of shuffles and series of test\"):\n totalCorrect = 0\n for i in range(0, numTests):\n self.Reset()\n self.ShuffleMany(numShuffles, sType)\n totalCorrect += self.Test()\n\n return float(totalCorrect)/float(numTests)\n\n def ShuffleMany(self, numShuffles, sType='Riffle', __atype__=\"instanceobj, int, str, returns nothing but does many shuffles\"):\n for i in range(0, numShuffles):\n self.Shuffle(sType)\n\n return\n\n def Shuffle(self, sType='Riffle', __atype__=\"instanceobj, str, returns nothing does a 'Riffle' 
'Normal' or 'Perfect' shuffle\"):\n if sType == 'Riffle':\n self.RiffleShuffle()\n\n elif sType == 'Perfect':\n self.PerfectShuffle()\n\n elif sType == 'Normal':\n self.NormalShuffle()\n\n else:\n self.RiffleShuffle()\n\n return\n\n def RiffleShuffle(self, __atype__=\"instanceobj, returns nothing does a 'Riffle' shuffle\"):\n cut = random.randint(5, self.deckSize-5)\n left = self.cards[:cut]\n right = self.cards[cut:]\n\n if cut > 25:\n left = self.cards[cut:]\n right = self.cards[:cut]\n\n shufflePos = []\n for i in range(0, len(left)):\n pos = random.randint(0, len(right))\n while pos in shufflePos:\n pos = random.randint(0, len(right))\n\n shufflePos.append(pos)\n\n shufflePos.sort()\n newCards = []\n n = len(right)\n for i in range(0, n):\n if len(shufflePos) > 0 and i == shufflePos[0]:\n newCards.append(left.pop(0))\n newCards.append(right.pop(0))\n shufflePos.pop(0)\n\n else:\n newCards.append(right.pop(0))\n\n if len(left) == 1:\n newCards.append(left.pop(0))\n\n self.cards = newCards\n self.correctGuesses = -1\n self.isOrdered = False\n self.numShuffles += 1\n\n return\n\n def PerfectShuffle(self, __atype__=\"instanceobj, returns nothing does a 'Perfect' shuffle\"):\n left = self.cards[:(self.deckSize/2)]\n right = self.cards[(self.deckSize/2):]\n\n newCards = []\n for i in range(0, self.deckSize/2):\n newCards.append(left.pop(0))\n newCards.append(right.pop(0))\n\n self.cards = newCards\n self.correctGuesses = -1\n self.isOrdered = False\n self.numShuffles += 1\n\n return\n\n def NormalShuffle(self, __atype__=\"instanceobj, returns nothing does a 'Normal' shuffle\"):\n count = len(self.cards)\n cut1 = random.randint(1, self.deckSize-1)\n cut2 = random.randint(1, self.deckSize-1)\n while cut2 == cut1:\n cut2 = random.randint(1, self.deckSize-1)\n\n top = self.cards[:min(cut1, cut2)]\n middle = self.cards[min(cut1, cut2):max(cut1, cut2)]\n bottom = self.cards[max(cut1, cut2):]\n self.cards = middle + top + bottom\n\n self.correctGuesses = -1\n self.isOrdered 
= False\n self.numShuffles += 1\n\n return\n\n def PrettyPrint(self):\n suits = ['S', 'D', 'C', 'H']\n cards = [' A', ' 2', ' 3', ' 4', ' 5', ' 6', ' 7', ' 8', ' 9', '10', ' J', ' Q', ' K']\n\n for i in range(0, self.cardsPerSuit):\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END, end='')\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END, end='')\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END, end='')\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END)\n\n for j in range(0, self.numSuits):\n SI = int(self.cards[j * self.cardsPerSuit + i] / self.cardsPerSuit)\n CI = int(self.cards[j * self.cardsPerSuit + i] % self.cardsPerSuit)\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"|\", end='')\n if SI == 0 or SI == 2:\n color = bcolors.F_Black\n else:\n color = bcolors.F_Red\n print(color + \" {0:s}-{1:s} \".format(cards[CI], suits[SI]), end='')\n print(bcolors.F_DarkGray + \" |\" + bcolors.END, end='')\n print()\n\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END)\n\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"| |\" + bcolors.END)\n\n for j in range(0, self.numSuits):\n SI = int(self.cards[j * self.cardsPerSuit + i] / self.cardsPerSuit)\n CI = int(self.cards[j * self.cardsPerSuit + i] % self.cardsPerSuit)\n print(\" \" + 
bcolors.B_LightGray + bcolors.F_DarkGray + \"|\", end='')\n if SI == 0 or SI == 2:\n color = bcolors.F_Black\n else:\n color = bcolors.F_Red\n print(color + \" {0:s}-{1:s} \".format(cards[CI], suits[SI]), end='')\n print(bcolors.F_DarkGray + \" |\" + bcolors.END, end='')\n print()\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"|Pursute|\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"|Pursute|\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"|Pursute|\" + bcolors.END, end='')\n print(\" \" + bcolors.B_LightGray + bcolors.F_DarkGray + \"|Pursute|\" + bcolors.END)\n\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END, end='')\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END, end='')\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END, end='')\n print(\" \" + bcolors.B_DarkGray + bcolors.F_LightGray + \"---------\" + bcolors.END)\n\n return\n\n"
},
{
"alpha_fraction": 0.6594827771186829,
"alphanum_fraction": 0.6681034564971924,
"avg_line_length": 20.090909957885742,
"blob_id": "639ec960049a0a4e381f1cdb1636ad9f00d8ba62",
"content_id": "8f57ff5a99db8ea8a253c38a265737813a9fb90c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 232,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 11,
"path": "/overrides.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "try:\n import builtins as builtin\nexcept Exception as e:\n import __builtin__ as builtin\n\n# Override str\ndef str(number):\n if isinstance(number, float):\n return '{0:.2f}'.format(number)\n\n return builtin.str(number)\n"
},
{
"alpha_fraction": 0.5578116774559021,
"alphanum_fraction": 0.572885274887085,
"avg_line_length": 34.024845123291016,
"blob_id": "dc77d45fa756b94ab154147391a7131e656a9a69",
"content_id": "26bf9f6ff7a8d2a5cfe381812d51285818eaad81",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11278,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 322,
"path": "/fraction.py",
"repo_name": "sundars/matrixops",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\nimport math\nfrom collections import OrderedDict\nfrom pause import Pause\n\nclass Fraction:\n numerator = 0\n denominator = 1\n value = float(0/1)\n\n def __init__(self, fString, simplify=True):\n if fString == \"\":\n return\n\n parts = fString.split('/')\n if len(parts) != 2:\n raise Exception(\"Fraction has a numerator and denominator, formatted as a/b\")\n\n self.numerator = int(parts[0])\n self.denominator = int(parts[1])\n\n if self.denominator == 0:\n raise Exception(\"Fraction {0:d}/{1:d} is indeterminate\".format(self.numerator, self.denominator))\n\n if self.numerator == 0:\n return\n\n if self.denominator < 0:\n self.denominator *= -1\n self.numerator *= -1\n\n self.value = float(self.numerator)/float(self.denominator)\n if simplify:\n self.Simplify()\n\n def __add__(self, f, __atype__=\"instanceobj, Fraction, returns a fraction\"):\n d = self.denominator * f.denominator\n n = self.numerator * f.denominator + self.denominator * f.numerator\n return Fraction(\"{0:d}/{1:d}\".format(int(n), int(d)))\n\n def __sub__(self, f, __atype__=\"instanceobj, Fraction, returns a fraction\"):\n d = self.denominator * f.denominator\n n = self.numerator * f.denominator - self.denominator * f.numerator\n return Fraction(\"{0:d}/{1:d}\".format(int(n), int(d)))\n\n def __mul__(self, f, __atype__=\"instanceobj, Fraction, returns a fraction\"):\n d = self.denominator * f.denominator\n n = self.numerator * f.numerator\n return Fraction(\"{0:d}/{1:d}\".format(int(n), int(d)))\n\n def __div__(self, f, __atype__=\"instanceobj, Fraction, returns a fraction\"):\n d = self.denominator * f.numerator\n n = self.numerator * f.denominator\n return Fraction(\"{0:d}/{1:d}\".format(int(n), int(d)))\n\n def __truediv__(self, f, __atype__=\"instanceobj, Fraction, returns a fraction\"):\n d = self.denominator * f.numerator\n n = self.numerator * f.denominator\n return Fraction(\"{0:d}/{1:d}\".format(int(n), int(d)))\n\n def __pow__(self, power, 
__atype__=\"instanceobj, int, returns a fraction\"):\n p = math.fabs(power)\n fd = Fraction.FromDecimal(self.denominator ** p)\n fn = Fraction.FromDecimal(self.numerator ** p)\n if power < 0:\n return fd / fn\n\n return fn / fd\n\n def __lt__(self, f, __atype__=\"instanceobj, Fraction, returns a boolean\"):\n try:\n return self.value < f.value\n except Exception as e:\n return self.value < f\n\n def __le__(self, f, __atype__=\"instanceobj, Fraction, returns a boolean\"):\n try:\n return self.value <= f.value\n except Exception as e:\n return self.value <= f\n\n def __gt__(self, f, __atype__=\"instanceobj, Fraction, returns a boolean\"):\n try:\n return self.value > f.value\n except Exception as e:\n return self.value > f\n\n def __ge__(self, f, __atype__=\"instanceobj, Fraction, returns a boolean\"):\n try:\n return self.value >= f.value\n except Exception as e:\n return self.value >= f\n\n def __eq__(self, f, __atype__=\"instanceobj, Fraction, returns a boolean\"):\n try:\n return self.value == f.value\n except Exception as e:\n return self.value == f\n\n def __ne__(self, f, __atype__=\"instanceobj, Fraction, returns a boolean\"):\n try:\n return self.value != f.value\n except Exception as e:\n return self.value != f\n\n def __repr__(self):\n return self.FractionStr()\n\n def __str__(self):\n return self.FractionStr()\n\n def __float__(self):\n return self.value\n\n def __round__(self, n=0):\n return self\n\n def Reciprocal(self, __atype__=\"instanceobj, returns a fraction\"):\n return Fraction(\"{0:d}/{1:d}\".format(self.denominator, self.numerator))\n\n def Simplify(self, __atype__=\"instanceobj, returns nothing but simplifies fraction in place\"):\n gcd = Fraction.GCD(abs(self.numerator), self.denominator)\n self.numerator //= gcd\n self.denominator //= gcd\n\n def PrettyPrint(self):\n print(self.numerator, end='')\n if self.numerator == 0 or self.denominator == 1:\n print()\n else:\n print(\"/\", end='')\n print(self.denominator)\n\n def FractionStr(self):\n s 
= \"{0:d}\".format(int(self.numerator))\n if self.numerator != 0 and self.denominator != 1:\n s += \"/\"\n s += \"{0:d}\".format(int(self.denominator))\n\n return s\n\n def ContinuedFraction(self, __atype__=\"instanceobj, prints current fraction as a continued fraction and returns\"):\n continuedFraction = [self.numerator//self.denominator]\n Fraction.EuclidContinuedFraction(self.numerator, self.denominator, continuedFraction)\n Fraction.PrettyPrintContinuedFraction(continuedFraction, self.numerator//abs(self.numerator))\n return Pause()\n\n @classmethod\n def LCM(cls, n1, n2, __atype__=\"classobj, int, int, returns an integer\"):\n return Fraction.EuclidLCM(n1, n2)\n\n @classmethod\n def GCD(cls, n1, n2, __atype__=\"classobj, int, int, returns an integer\"):\n return Fraction.EuclidGCD(n1, n2)\n\n @classmethod\n def PrimeFactors(cls, num, __atype__=\"classobj, int, returns a list of integers\"):\n if num <= 0:\n raise Exception(\"Only numbers greater than 0 please\")\n\n factors = []\n\n while num % 2 == 0:\n factors.append(2)\n num //= 2\n\n for i in range(3, int(math.sqrt(num))+1, 2):\n while num % i == 0:\n factors.append(i)\n num //= i\n\n if num != 1: factors.append(num)\n\n return factors\n\n @classmethod\n def PrimeFactorsLCM(cls, n1, n2, __atype__=\"classobj, int, int, returns an integer\"):\n factors1 = Fraction.PrimeFactors(n1)\n factors2 = Fraction.PrimeFactors(n2)\n\n lcm = 1\n remaining = []\n for f in factors1:\n if f in factors2:\n lcm *= f\n factors2.remove(f)\n else:\n remaining.append(f)\n\n for f in factors2:\n lcm *= f\n\n for f in remaining:\n lcm *= f\n\n return lcm\n\n @classmethod\n def PrimeFactorsGCD(cls, n1, n2, __atype__=\"classobj, int, int, returns an integer\"):\n factors1 = Fraction.PrimeFactors(n1)\n factors2 = Fraction.PrimeFactors(n2)\n\n commonFactor = 1\n for f in factors1:\n if f in factors2:\n commonFactor *= f\n factors2.remove(f)\n\n return commonFactor\n\n @classmethod\n def EuclidLCM(cls, n1, n2, __atype__=\"classobj, 
int, int, returns an integer\"):\n return (n1 * n2)//Fraction.EuclidGCD(n1, n2)\n\n @classmethod\n def EuclidGCD(cls, n1, n2, __atype__=\"classobj, int, int, returns an integer\"):\n if (max(n1, n2) % min(n1, n2) == 0):\n return min(n1, n2)\n\n return Fraction.EuclidGCD(min(n1, n2), max(n1, n2) % min(n1, n2))\n\n @classmethod\n def EuclidContinuedFraction(cls, n1, n2, cF):\n if (max(n1, n2) % min(n1, n2) == 0):\n cF.append(n1//n2)\n return\n\n cF.append(max(n1, n2)//min(n1, n2))\n Fraction.EuclidContinuedFraction(min(n1, n2), max(n1, n2) % min(n1, n2), cF)\n\n @classmethod\n def FromDecimal(cls, decimal, __atype__=\"classobj, float, returns a fraction\"):\n continuedFraction, sign = Fraction.GenerateContinuedFraction(decimal)\n return Fraction.RollupContinuedFraction(continuedFraction, sign)\n\n # Rollup the continued fraction\n @classmethod\n def RollupContinuedFraction(cls, continuedFraction, sign=1, __atype__=\"classobj, list:int, returns a fraction\"):\n pf = Fraction('0/1')\n for i in range(len(continuedFraction), 0, -1):\n if i == 1:\n return (pf + continuedFraction[i-1]) * sign\n\n if i == len(continuedFraction):\n pf = Fraction(\"1/{0:d}\".format(continuedFraction[i-1]))\n\n else:\n pf = Fraction('1/1') / (pf + continuedFraction[i-1])\n\n # Generate a continued fraction from a decimal\n @classmethod\n def GenerateContinuedFraction(cls, decimal, __atype__=\"classobj, float, returns a list of integers and the sign of that list\"):\n continuedFraction = []\n d, sign = (math.fabs(decimal), int(not decimal or decimal/math.fabs(decimal)))\n whole, remaining = (int(d), float(\"0.{0:s}\".format(str(d)[(len(str(int(d)))+1):])))\n continuedFraction.append(whole)\n\n loop = 0\n while remaining > 0.00001 and loop < 16:\n reciprocal = 1/remaining\n whole, remaining = (int(reciprocal), float(\"0.{0:s}\".format(str(reciprocal)[(len(str(int(reciprocal)))+1):])))\n continuedFraction.append(whole)\n loop += 1\n\n return continuedFraction, sign\n\n @classmethod\n def 
PrintContinuedFraction(cls, decimal, __atype__=\"classobj, float, prints decimal as a continued fraction and returns\"):\n continuedFraction, sign = Fraction.GenerateContinuedFraction(decimal)\n Fraction.PrettyPrintContinuedFraction(continuedFraction, sign)\n return Pause()\n\n @classmethod\n def PrettyPrintContinuedFraction(cls, continuedFraction, sign):\n # Set the count for number of spaces\n count = 0\n # Print \"-1 * (\" if it is a negative number\n if sign < 0:\n print(\"-1 * (\", end='')\n count += 6\n\n if continuedFraction[len(continuedFraction)-1] == 1:\n continuedFraction[len(continuedFraction)-2] += 1\n continuedFraction.pop(len(continuedFraction)-1)\n\n for i in range(0, len(continuedFraction)):\n # Print the next number of the continued Fraction\n print(continuedFraction[i], end='')\n\n # If last number, print a new line and we are done\n if i+1 == len(continuedFraction):\n if sign < 0: print(\")\") # if a negative number print closing \")\" before new line\n print('')\n return\n\n # Print \"+ 1\" and a new line\n print(\" + \", end='')\n print(1)\n\n # Increment count by the length of the continued fraction and 4 more for the + 1\n count += len(str(continuedFraction[i]))+3\n\n # Create a space string of length count and print\n s = \"\"\n for j in range(0, count): \n s += \" \"\n print(s, end='')\n\n # Calculate length of the fraction line it is length of next number plus 4 (for \" + 1\"), unless last but one\n if i+2 == len(continuedFraction):\n countL = len(str(continuedFraction[i+1]))\n else:\n countL = len(str(continuedFraction[i+1]))+4\n\n # Generate the fraction line and print with a new line\n l = \"\"\n for j in range(0, countL):\n l += \"-\"\n print(l)\n\n # Then print as many spaces as needed and loop back to next number\n print(s, end='')\n"
}
] | 13 |
rja3/flask_advanced
|
https://github.com/rja3/flask_advanced
|
57073b974876a7300cdbd15ac4ca284d5736eae3
|
fc2974a8e8edd90ab1f9c36b1625558ae05149e4
|
bc3da6c14111dd398bcb6f0ebfa5064a9ab6a9f0
|
refs/heads/master
| 2020-03-19T19:07:01.539422 | 2018-06-10T21:56:14 | 2018-06-10T21:56:14 | 136,840,398 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7710843086242676,
"alphanum_fraction": 0.7710843086242676,
"avg_line_length": 26.83333396911621,
"blob_id": "394ed1566d1000d253dfb63deeaa844dffa577af",
"content_id": "75d4d05e305d9e7228a4e81bcca887acf81a41bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 166,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 6,
"path": "/booky/auth/__init__.py",
"repo_name": "rja3/flask_advanced",
"src_encoding": "UTF-8",
"text": "# booky/auth/__init.py\n\nfrom flask import Blueprint\nauthentication = Blueprint('authentication', __name__, template_folder='templates')\n\nfrom booky.auth import routes"
},
{
"alpha_fraction": 0.7857142686843872,
"alphanum_fraction": 0.7857142686843872,
"avg_line_length": 22.5,
"blob_id": "41c31489ee6263f2f4ba73fafc37a389833908c8",
"content_id": "e436f26346dd16be10a272247125e3f030328366",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 140,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 6,
"path": "/config/prod.py",
"repo_name": "rja3/flask_advanced",
"src_encoding": "UTF-8",
"text": "import os\n\nDEBUG=False\nSECRET_KEY = 'secret-key'\nSQLALCHEMY_DATABASE_URI = os.environ['DATABASE_URL']\nSQLALCHEMY_TRACK_MODIFICATIONS = False"
},
{
"alpha_fraction": 0.7701863646507263,
"alphanum_fraction": 0.8012422323226929,
"avg_line_length": 39.5,
"blob_id": "242e4e8e679d5d60fb8bcbff75a32a4f95cffe42",
"content_id": "132fba93a1c6983e302856421d840a14778078ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 161,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 4,
"path": "/config/dev.py",
"repo_name": "rja3/flask_advanced",
"src_encoding": "UTF-8",
"text": "DEBUG=True\nSECRET_KEY = 'secret-key'\nSQLALCHEMY_DATABASE_URI = 'postgresql://postgres:dog-bark3@localhost:5433/catalog_db'\nSQLALCHEMY_TRACK_MODIFICATIONS = False"
},
{
"alpha_fraction": 0.7402597665786743,
"alphanum_fraction": 0.7402597665786743,
"avg_line_length": 21,
"blob_id": "90c4f42065c176cde2d88b749bb5efc134f29381",
"content_id": "5219673937c3cfb2769823ca46535ff36e60bc9d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 154,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 7,
"path": "/booky/catalog/__init__.py",
"repo_name": "rja3/flask_advanced",
"src_encoding": "UTF-8",
"text": "# booky/catalog/__init.py\n\nfrom flask import Blueprint\nmain = Blueprint('main', __name__, template_folder='templates')\n\n\nfrom booky.catalog import routes\n"
},
{
"alpha_fraction": 0.699447512626648,
"alphanum_fraction": 0.699447512626648,
"avg_line_length": 17.83333396911621,
"blob_id": "f38712a36f26ea19e13d99a3e13d5c0e822ce102",
"content_id": "0381dd39dfd1cccedc2bda54c91e70bbb1debba6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 905,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 48,
"path": "/booky/__init__.py",
"repo_name": "rja3/flask_advanced",
"src_encoding": "UTF-8",
"text": "# booky/__init__.py\n\nimport os\n\nfrom flask import Flask\n\nfrom flask_sqlalchemy import SQLAlchemy\n\nfrom flask_bootstrap import Bootstrap\n\nfrom flask_login import LoginManager\n\nfrom flask_bcrypt import Bcrypt\n\ndb = SQLAlchemy()\n\nbootstrap = Bootstrap()\n\nlogin_manager = LoginManager()\nlogin_manager.login_view = 'authenticaton.do_the_login'\nlogin_manager.session_protection = 'strong'\n\nbcrypt = Bcrypt()\n\ndef create_app(ctype):\n\n app = Flask(__name__)\n\n a = os.path.join(os.getcwd(), 'config')\n b = ctype + '.py'\n c = os.path.join(a, b)\n print (c)\n configuration = c\n\n app.config.from_pyfile(configuration)\n\n db.init_app(app)\n bootstrap.init_app(app)\n login_manager.init_app(app)\n bcrypt.init_app(app)\n\n from booky.catalog import main\n app.register_blueprint(main)\n\n from booky.auth import authentication\n app.register_blueprint(authentication)\n\n return app\n\n"
}
] | 5 |
iSchomer/othello_RL
|
https://github.com/iSchomer/othello_RL
|
ebb6c038f5e7e50b487710f3c465143744f5a042
|
68da11c2701b8824f4a2355b2742c40824ca2af4
|
c8807cbc710042957bfb8b731b6f2e52c2886526
|
refs/heads/master
| 2020-09-11T05:57:13.500113 | 2019-12-05T17:51:07 | 2019-12-05T17:51:07 | 221,962,960 | 0 | 1 | null | 2019-11-15T16:30:36 | 2019-11-27T19:35:26 | 2019-11-30T20:59:36 |
Python
|
[
{
"alpha_fraction": 0.47005778551101685,
"alphanum_fraction": 0.48749932646751404,
"avg_line_length": 39.17136764526367,
"blob_id": "92148971f819a4471e4951ede424a41f82fabafe",
"content_id": "d32573026d909ace24176dcdc8a80318246a4bbc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 18519,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 461,
"path": "/othello_env.py",
"repo_name": "iSchomer/othello_RL",
"src_encoding": "UTF-8",
"text": "# Othello\n\nimport random\nimport sys\nimport numpy as np\n\n\n# static methods\ndef is_on_board(x, y):\n # Returns True if the coordinates are located on the board.\n return 0 <= x <= 7 and 0 <= y <= 7\n\n\ndef is_on_corner(x, y):\n # Returns True if the position is in one of the four corners.\n return (x == 0 and y == 0) or (x == 7 and y == 0) or (x == 0 and y == 7) or (x == 7 and y == 7)\n\n\nclass Board:\n\n def __init__(self):\n self.board = []\n for _ in range(8):\n self.board.append([' '] * 8)\n self.reset()\n\n def draw(self):\n\n h_line = ' +----+----+----+----+----+----+----+----+'\n\n print(' 1 2 3 4 5 6 7 8')\n print(h_line)\n for y in range(8):\n print(y + 1, end=' ')\n for x in range(8):\n print('| %s' % (self.board[x][y]), end=' ')\n print('|')\n print(h_line)\n\n def reset(self):\n # Blanks out the board it is passed, except for the original starting position.\n for x in range(8):\n for y in range(8):\n self.board[x][y] = ' '\n\n # Starting pieces: X = black, O = white.\n self.board[3][3] = 'X'\n self.board[3][4] = 'O'\n self.board[4][3] = 'O'\n self.board[4][4] = 'X'\n\n def is_valid_move(self, tile, x_start, y_start):\n # Returns False if the player's move on space x_start, y_start is invalid.\n # If it is a valid move, returns a list of spaces that would become the player's if they made a move here.\n if self.board[x_start][y_start] != ' ' or not is_on_board(x_start, y_start):\n return False\n\n self.board[x_start][y_start] = tile # temporarily set the tile on the board.\n\n if tile == 'X':\n other_tile = 'O'\n else:\n other_tile = 'X'\n\n tiles_to_flip = []\n for x_direction, y_direction in [[0, 1], [1, 1], [1, 0], [1, -1], [0, -1], [-1, -1], [-1, 0], [-1, 1]]:\n x, y = x_start, y_start\n x += x_direction # first step in the direction\n y += y_direction # first step in the direction\n if is_on_board(x, y) and self.board[x][y] == other_tile:\n # There is a piece belonging to the other player next to our piece.\n x += x_direction\n y += 
y_direction\n if not is_on_board(x, y):\n continue\n while self.board[x][y] == other_tile:\n x += x_direction\n y += y_direction\n if not is_on_board(x, y): # break out of while loop, then continue in for loop\n break\n if not is_on_board(x, y):\n continue\n if self.board[x][y] == tile:\n # There are pieces to flip over. Go in the reverse direction until we reach the original space,\n # noting all the tiles along the way.\n while True:\n x -= x_direction\n y -= y_direction\n if x == x_start and y == y_start:\n break\n tiles_to_flip.append([x, y])\n\n self.board[x_start][y_start] = ' ' # restore the empty space\n if len(tiles_to_flip) == 0: # If no tiles were flipped, this is not a valid move.\n return False\n return tiles_to_flip\n\n def get_valid_moves(self, tile):\n # Returns a list of [x,y] lists of valid moves for the given player on the given board.\n valid_moves = []\n\n for x in range(8):\n for y in range(8):\n if self.is_valid_move(tile, x, y):\n valid_moves.append([x, y])\n return valid_moves\n\n def get_score(self):\n # Determine the score by counting the tiles. 
Returns a dictionary with keys 'X' and 'O'.\n x_score = 0\n o_score = 0\n for x in range(8):\n for y in range(8):\n if self.board[x][y] == 'X':\n x_score += 1\n elif self.board[x][y] == 'O':\n o_score += 1\n return {'X': x_score, 'O': o_score}\n\n def make_move(self, tile, x_start, y_start):\n # Place the tile on the board at x_start, y_start, and flip any of the opponent's pieces.\n # Returns False if this is an invalid move, True if it is valid.\n tiles_to_flip = self.is_valid_move(tile, x_start, y_start)\n\n if not tiles_to_flip:\n return False\n\n self.board[x_start][y_start] = tile\n for x, y in tiles_to_flip:\n self.board[x][y] = tile\n return True\n\n def copy(self):\n # Make a duplicate of the board list and return the duplicate.\n dupe_board = Board()\n\n for x in range(8):\n for y in range(8):\n dupe_board.board[x][y] = self.board[x][y]\n\n return dupe_board\n\n def copy_with_valid_moves(self, tile):\n # ONLY TO BE USED WITH PLAYER INTERACTION. NOT FOR TRAINING\n # Returns a new board with . 
marking the valid moves the given player can make.\n dupe_board = self.copy()\n\n for x, y in dupe_board.get_valid_moves(tile):\n dupe_board.board[x][y] = '.'\n return dupe_board\n\n def list_to_array(self):\n state = np.zeros((8, 8))\n for i in range(8):\n for j in range(8):\n if self.board[j][i] == 'X':\n state[i, j] = 1\n elif self.board[j][i] == 'O':\n state[i, j] = -1\n else:\n state[i, j] = 0\n return state\n\n def array_to_list(self, state):\n for i in range(8):\n for j in range(8):\n if state[i, j] == 1:\n self.board[j][i] = 'X'\n elif state[i, j] == -1:\n self.board[j][i] = 'O'\n else:\n self.board[j][i] = ' '\n return self.board\n\n\nclass OthelloGame:\n\n def __init__(self, opponent='rand', interactive=True, show_steps=False):\n \"\"\"\n :param opponent: specifies opponent\n 'rand' --> chooses randomly among valid actions\n 'heur' --> uses a symmetrical value table\n 'bench' --> uses a value table trained via co-evolution\n :param interactive: specifies whether we are using program for RL\n or to play interactively\n :param show_steps: shows board at each step\n \"\"\"\n self.board = Board()\n self.player_tile = 'X'\n self.opponent = opponent\n self.computer_tile = 'O'\n self.player_score = 0\n self.computer_score = 0\n self.interactive = interactive\n self.stepper = show_steps\n self.show_hints = False\n\n # build the value tables for the opponents\n # make it a 2D list so indexing matches the board\n if self.opponent == 'heur':\n self.position_value = [100, -25, 10, 5, 5, 10, -25, 100,\n -25, -25, 2, 2, 2, 2, -25, -25,\n 10, 2, 5, 1, 1, 5, 2, 10,\n 5, 2, 1, 2, 2, 1, 2, 5,\n 5, 2, 1, 2, 2, 1, 2, 5,\n 10, 2, 5, 1, 1, 5, 2, 10,\n -25, -25, 2, 2, 2, 2, -25, -25,\n 100, -25, 10, 5, 5, 10, -25, 100]\n elif self.opponent == 'bench':\n self.position_value = [80, -26, 24, -1, -5, 28, -18, 76,\n -23, -39, -18, -9, -6, -8, -39, -1,\n 46, -16, 4, 1, -3, 6, -20, 52,\n -13, -5, 2, -1, 4, 3, -12, -2,\n -5, -6, 1, -2, -3, 0, -9, -5,\n 48, -13, 12, 5, 0, 5, -24, 41,\n 
-27, -53, -11, -1, -11, -16, -58, -15,\n 87, -25, 27, -1, 5, 36, -3, 100]\n\n def reset(self):\n self.board.reset()\n self.player_score = 0\n self.computer_score = 0\n\n def get_state(self):\n return self.board.list_to_array()\n\n def choose_player_tile(self):\n # Lets the player type which tile they want to be.\n # Returns a list with the player's tile as the first item, and the computer's tile as the second.\n if self.interactive:\n tile = ''\n while not (tile == 'X' or tile == 'O'):\n print('Do you want to be X or O? X always moves first.')\n tile = input().upper()\n else:\n # TODO - expand agent options to be something other than 'X\"\n tile = 'X'\n\n # the first element in the list is the player's tile, the second is the computer's tile.\n if tile == 'X':\n assigned_tiles = ['X', 'O']\n else:\n assigned_tiles = ['O', 'X']\n\n self.player_tile, self.computer_tile = assigned_tiles\n\n def get_player_action(self):\n # Let the player type in their move given a board state.\n # Returns the move as [x, y] (or returns the strings 'hints' or 'quit')\n valid_digits = '1 2 3 4 5 6 7 8'.split()\n while True:\n print('Enter your move, or type quit to end the game, or hints to turn off/on hints.')\n player_action = input().lower()\n if player_action == 'quit':\n return 'quit'\n elif player_action == 'hints':\n return 'hints'\n\n elif len(player_action) == 2 and player_action[0] in valid_digits and player_action[1] in valid_digits:\n x = int(player_action[0]) - 1\n y = int(player_action[1]) - 1\n if not self.board.is_valid_move(self.player_tile, x, y):\n if self.interactive:\n print('That is not a legal move.')\n continue\n else:\n break\n else:\n print('That is not a valid move. 
Type the x digit (1-8), then the y digit (1-8).')\n print('For example, 81 will be the top-right corner.')\n\n return [x, y]\n\n def get_computer_move(self):\n # Given a board and the computer's tile, determine where to\n # move and return that move as a [x, y] list.\n possible_moves = self.board.get_valid_moves(self.computer_tile)\n\n if possible_moves:\n if self.opponent == 'rand':\n # randomize the order of the possible moves\n random.shuffle(possible_moves)\n return possible_moves[0]\n else:\n computer_afterstate_v = []\n for i in range(len(possible_moves)):\n computer_afterstate_v.append(0)\n temp_board = self.board.copy()\n temp_board.make_move(self.computer_tile, possible_moves[i][0], possible_moves[i][1])\n for j in range(64):\n computer_afterstate_v[i] -= temp_board.list_to_array()[int(j/8)][j % 8] * self.position_value[j]\n best = int(np.argmax([computer_afterstate_v[k] for k in range(len(possible_moves))]))\n return possible_moves[best]\n else:\n return []\n\n def show_points(self):\n # Prints out the current score.\n scores = self.board.get_score()\n print(\n 'You have %s points. The computer has %s points.' % (scores[self.player_tile], scores[self.computer_tile]))\n\n def start(self):\n if self.interactive:\n self.run_interactive()\n else:\n # Reset the board and game.\n self.board.reset()\n self.choose_player_tile()\n self.show_hints = True\n if self.player_tile == 'X':\n turn = 'player'\n else:\n turn = 'computer'\n\n def step(self, action):\n \"\"\"\n action: [x, y]\n :return: reward, next_state, next_state_valid_moves\n \"\"\"\n reward = 0\n terminal = False # indicates terminal state\n\n # take the action\n # we should always receive a valid selection from the agent\n self.board.make_move(self.player_tile, action[0], action[1])\n\n # Now at 1 of 4 options:\n # 1. Game is over -> indicate terminal and exit\n # 2. Computer is now out of moves -> exit and let agent choose again\n # 3. Agent is now out of moves -> Let computer take a move\n # 4. 
Both still have moves -> let computer take 1 move\n while terminal is False:\n computer_moves = self.board.get_valid_moves(self.computer_tile)\n if not computer_moves:\n # options 1 and 2\n if not self.board.get_valid_moves(self.player_tile):\n # option 1 - game over\n reward = self.calculate_final_reward()\n terminal = True\n if self.stepper:\n self.board.draw()\n return reward, self.board.list_to_array(), [], terminal\n else:\n # option 2 - return current state so agent can go again\n if self.stepper:\n self.board.draw()\n return reward, self.board.list_to_array(), self.board.get_valid_moves(self.player_tile), terminal\n else:\n # options 3 and 4\n computer_action = self.get_computer_move()\n self.board.make_move(self.computer_tile, computer_action[0], computer_action[1])\n if self.board.get_valid_moves(self.player_tile):\n break\n\n # Failsafe\n # check if the computer ended the game\n # if not self.board.get_valid_moves(self.player_tile) and \\\n # not self.board.get_valid_moves(self.computer_tile):\n # print(\"We should never see this!\")\n # terminal = True\n # reward = self.calculate_final_reward()\n if self.stepper:\n self.board.draw()\n return reward, self.board.list_to_array(), self.board.get_valid_moves(self.player_tile), terminal\n\n def calculate_final_reward(self):\n scores = self.board.get_score()\n self.player_score, self.computer_score = scores[self.player_tile], scores[self.computer_tile]\n if self.player_score > self.computer_score:\n reward = 1\n if self.stepper:\n print(\"The agent wins a game!! {} to {}\".format(self.player_score, self.computer_score))\n elif self.player_score < self.computer_score:\n reward = 0\n if self.stepper:\n print(\"The agent loses to the computer... 
{} to {}\".format(self.player_score, self.computer_score))\n else:\n reward = 0.5\n return reward\n\n def run_interactive(self):\n print('Welcome to Othello!')\n while True:\n # Reset the board and game.\n self.board.reset()\n self.choose_player_tile()\n self.show_hints = True\n if self.player_tile == 'X':\n turn = 'player'\n else:\n turn = 'computer'\n\n while True:\n if turn == 'player':\n # Player's turn.\n if self.show_hints:\n valid_moves_board = self.board.copy_with_valid_moves(self.player_tile)\n valid_moves_board.draw()\n print(self.board.list_to_array())\n else:\n self.board.draw()\n\n self.show_points()\n player_action = self.get_player_action()\n\n if player_action == 'quit':\n print('Thanks for playing!')\n sys.exit() # terminate the program\n elif player_action == 'hints':\n self.show_hints = not self.show_hints\n continue\n else:\n self.board.make_move(self.player_tile, player_action[0], player_action[1])\n\n if not self.board.get_valid_moves(self.computer_tile):\n print('Your opponent has no legal move. It is your turn.')\n if not self.board.get_valid_moves(self.player_tile):\n print('You also have no legal move. The game is over.')\n break\n pass\n else:\n turn = 'computer'\n else:\n # Computer's turn.\n self.board.draw()\n self.show_points()\n input('Press Enter to see the computer\\'s move.\\n')\n c_move = self.get_computer_move()\n x, y, = c_move[0], c_move[1]\n self.board.make_move(self.computer_tile, x, y)\n\n if not self.board.get_valid_moves(self.player_tile):\n print('You have no legal move. It is the computer\\'s turn.')\n if not self.board.get_valid_moves(self.computer_tile):\n print('Your opponent also has no legal move. 
The game is over.')\n break\n pass\n else:\n turn = 'player'\n\n # Display the final score.\n self.board.draw()\n scores = self.board.get_score()\n self.player_score = scores[self.player_tile]\n self.computer_score = scores[self.computer_tile]\n margin = self.player_score - self.computer_score\n print('The player scored %s points. The computer scored %s points.' % (\n self.player_score, self.computer_score))\n if self.player_score > self.computer_score:\n print('You beat the computer by %s points! Congratulations!' % margin)\n elif self.player_score < self.computer_score:\n print('You lost. The computer beat you by %s points.' % margin)\n else:\n print('The game was a tie!')\n\n\n# to test the environment\nif __name__ == '__main__':\n game = OthelloGame(opponent='heur', interactive=True, show_steps=False)\n game.start()\n"
},
{
"alpha_fraction": 0.5157392621040344,
"alphanum_fraction": 0.5329093933105469,
"avg_line_length": 40.0217399597168,
"blob_id": "af535636a82154770126c5a909ca9537ee34dd3d",
"content_id": "7934f6a4de8b5c867ab1d5b1f06aed6ddd82e58f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9435,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 230,
"path": "/othello8_rl.py",
"repo_name": "iSchomer/othello_RL",
"src_encoding": "UTF-8",
"text": "from othello4_env import OthelloGame\nimport tensorflow as tf\nimport numpy as np\nfrom collections import deque\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.optimizers import SGD\nfrom tensorflow.keras.initializers import RandomUniform\nimport random\nfrom time import process_time\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\n\n\nclass OthelloAgent:\n def __init__(self, ep):\n self.state_size = 64\n self.action_size = 64\n self.tile = 'X'\n self.memory = deque(maxlen=2000)\n self.gamma = 1.0 # episodic --> no discount\n self.episodes = ep\n self.epsilon = 0.1\n self.epsilon_min = 0.0\n self.epsilon_step = (self.epsilon - self.epsilon_min)/self.episodes\n self.learning_rate = 0.01\n self.model = self.build_model()\n\n def build_model(self):\n # Feed-forward NN\n model = tf.keras.Sequential()\n init = RandomUniform(minval=-0.5, maxval=0.5)\n model.add(layers.Dense(50, input_dim=self.state_size, activation='sigmoid', kernel_initializer=init))\n model.add(layers.Dense(64, activation='sigmoid', kernel_initializer=init))\n model.compile(loss='mse', optimizer=SGD(lr=self.learning_rate))\n return model\n\n def remember(self, st, act, rw, next_st, vld_moves, done):\n self.memory.append((st, act, rw, next_st, vld_moves, done))\n\n def get_action(self, st, test):\n valid_actions = game.board.get_valid_moves(self.tile)\n if np.random.rand() <= self.epsilon and not test:\n random.shuffle(valid_actions)\n return valid_actions[0]\n else:\n # Take an action based on the Q function\n all_values = self.model.predict(st)\n # return the VALID action with the highest network value\n # use an action_grid that can be indexed by [x, y]\n action_grid = np.reshape(all_values[0], newshape=(8, 8))\n q_values = [action_grid[v[1], v[0]] for v in valid_actions]\n return valid_actions[np.argmax(q_values)]\n\n def replay(self, bat_size):\n \"\"\"\n Perform back-propagation using stochastic gradient descent.\n Only want to update the state-action pair 
that is selected (the target for all\n other actions are set to the NN estimate so that the estimate is zero)\n \"\"\"\n mini_batch = random.sample(self.memory, bat_size)\n for st, act, rw, next_st, vld_moves, done in mini_batch:\n target = rw\n if not done:\n all_values = self.model.predict(next_st)\n action_grid = np.reshape(all_values[0], newshape=(8, 8))\n q_values = [action_grid[v[1], v[0]] for v in vld_moves]\n target = rw + self.gamma * np.amax(q_values)\n target_nn = self.model.predict(st)\n target_nn[0][act[1]*8+act[0]] = target # only this Q val will be updated\n self.model.fit(st, target_nn, epochs=1, verbose=0)\n\n def epsilon_decay(self):\n # linear epsilon decay feature\n if self.epsilon > self.epsilon_min:\n self.epsilon -= self.epsilon_step\n\n def load(self, name):\n self.model.load_weights(name)\n\n def save(self, name):\n self.model.save_weights(name)\n\n\ndef store_results():\n # present the timed results\n t_stop = process_time()\n print('Runtime: {}hr.'.format((t_stop - t_start)/3600.))\n # create and save a figure\n if storing:\n t1 = [i for i in range(len(results_over_time))]\n fig = plt.figure()\n ax = fig.add_subplot(111)\n ax.plot(t1, results_over_time)\n ax.set_xlabel(\"Episode\")\n ax.set_title(\"Percent Wins During Training\")\n plt.savefig(save_filename + datetime.now().strftime(\"%m%d%H:%M\") + '_training' + '.png')\n\n t2 = [i for i in range(len(test_result))]\n fig = plt.figure()\n ax = fig.add_subplot(111)\n ax.plot(t2, test_result)\n ax.set_xlabel(\"Episode\")\n ax.set_title(\"Percent Wins During Testing\")\n plt.savefig(save_filename + datetime.now().strftime(\"%m%d%H:%M\") + '_testing' + '.png')\n\n\nif __name__ == \"__main__\":\n try:\n storing = True\n loading = False\n testing = False\n\n terminal = False\n batch_size = 32\n episodes = 20000\n\n test_interval = 2000\n test_length = 400\n\n outcomes = ['Loss', 'Tie', 'Win']\n move_counter = 0\n\n # initialize agent and environment\n agent = OthelloAgent(episodes)\n game = 
OthelloGame(opponent='heur', interactive=False, show_steps=False)\n\n # FILENAME CONVENTION\n # 'saves/NN-type_opponent_num-episodes'\n if storing:\n save_filename = './saves/othello8_basic-sequential_rand_2000'\n if loading:\n load_filename = './saves/othello8_basic-sequential_rand_2000'\n agent.load(load_filename + \".h5\")\n\n if loading and not testing:\n prev_data = np.load(load_filename + '.npy')\n avg_result = prev_data[-1]\n episode_start = len(prev_data)\n results_over_time = np.append(prev_data, np.zeros(episodes))\n else:\n avg_result = 0\n results_over_time = np.zeros(episodes)\n episode_start = 0\n\n test_result = []\n\n # time it\n t_start = process_time()\n for e in range(episode_start, episode_start + episodes):\n\n # perform a 100-episode test with greedy policy\n if e % test_interval == 0:\n testing = True\n\n if testing is True:\n test_result.append(0)\n for test_ep in range(test_length):\n game.reset()\n game.start()\n state = game.get_state() # 8x8 numpy array\n state = np.reshape(state, [1, 64]) # 1x64 vector\n\n for move in range(100):\n action = agent.get_action(state, testing)\n reward, next_state, valid_moves, terminal = game.step(action)\n next_state = np.reshape(next_state, [1, 64])\n state = next_state\n if terminal:\n # terminal reward is 0 for loss, 0.5 for tie, 1 for win\n # use this as an indexing code to get the result\n result = outcomes[int(reward * 2)]\n if result == 'Win':\n n = 1\n else:\n n = 0\n test_result[-1] += (1 / (test_ep % test_interval + 1)) * (n - test_result[-1])\n if test_ep % 10 == 0 and test_ep > 0:\n print('testing' + \"episode {}: {} moves, Result: {}\".format(test_ep, move, result))\n print(\"Average win/loss ratio: \", test_result[-1])\n break\n testing = False\n\n game.reset()\n game.start()\n state = game.get_state() # 8x8 numpy array\n state = np.reshape(state, [1, 64]) # 1x64 vector\n\n for move in range(100): # max amount of moves in an episode\n move_counter += 1\n action = agent.get_action(state, 
testing)\n reward, next_state, valid_moves, terminal = game.step(action)\n next_state = np.reshape(next_state, [1, 64])\n agent.remember(state, action, reward, next_state, valid_moves, terminal)\n state = next_state\n if terminal:\n # terminal reward is 0 for loss, 0.5 for tie, 1 for win\n # use this as an indexing code to get the result\n result = outcomes[int(reward*2)]\n if result == 'Win':\n n = 1\n else:\n n = 0\n\n avg_result += (1/(e+1))*(n - avg_result)\n results_over_time[e] = avg_result\n\n if e % 10 == 0 and e > 0:\n print(\"episode {}: {} moves, Result: {}, e: {:.2}\"\n .format(e, move, result, agent.epsilon))\n print(\"Average win/loss ratio: \", avg_result)\n break\n\n # Question - maybe only update every batch_size moves\n # (instead of every move after batch_size)?\n # if move_counter % batch_size == 0:\n if len(agent.memory) > batch_size:\n agent.replay(batch_size)\n\n agent.epsilon_decay()\n if e % 100 == 0 and e > 0 and storing:\n # save name as 'saves/model-type_training-opponent_num-episodes.h5'\n # agent.save(save_filename + datetime.now().strftime(\"%m%d%H:%M\") + \".h5\")\n # np.save(save_filename + datetime.now().strftime(\"%m%d%H:%M\") + '.npy', results_over_time)\n pass\n store_results()\n except KeyboardInterrupt:\n # change the length of our numpy array to be whatever we stopped at\n save_data = results_over_time[[i < 100 or r > 0 for i, r in enumerate(results_over_time)]]\n np.save(save_filename + datetime.now().strftime(\"%m%d%H:%M\") + '.npy', save_data)\n store_results()\n"
}
] | 2 |
DipeshV/mutual-fund-returns
|
https://github.com/DipeshV/mutual-fund-returns
|
36230fc7ebb8c4dc5bba50f3e0bc28d711a6b80a
|
71f8f79647e013e715ef0df62cb278c53cc3239f
|
8878e7a4c8c2c5af5f70534b616c23d66a3a9497
|
refs/heads/master
| 2020-11-24T11:37:53.616995 | 2019-12-15T04:25:26 | 2019-12-15T04:25:26 | 228,128,024 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6756170392036438,
"alphanum_fraction": 0.7040954828262329,
"avg_line_length": 23.0849666595459,
"blob_id": "f2d8d1ba910deef911005a1c0b194801df24eac1",
"content_id": "3c50ef5a612a4aa1fa8fcc1f6aeb464969cd8cbd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3687,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 153,
"path": "/code.py",
"repo_name": "DipeshV/mutual-fund-returns",
"src_encoding": "UTF-8",
"text": "# --------------\n# import libraries\nimport pandas as pd \nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\n# Code starts here\n\ndata = pd.read_csv(path)\n\nprint(data.shape)\nprint(data.describe)\ndata.drop(columns=['Serial Number'], inplace=True)\nprint(data)\n\n\n# code ends here\n\n\n\n\n# --------------\n#Importing header files\nfrom scipy.stats import chi2_contingency\nimport scipy.stats as stats\n\n#Critical value \ncritical_value = stats.chi2.ppf(q = 0.95, # Find the critical value for 95% confidence*\n df = 11) # Df = number of variable categories(in purpose) - 1\n\n# Code starts here\nprint(critical_value)\n\nreturn_rating = data['morningstar_return_rating'].value_counts()\n\nrisk_rating = data['morningstar_risk_rating'].value_counts()\n\nobserved = pd.concat([return_rating.transpose(),risk_rating.transpose()],axis=1,keys=['return','risk'])\nprint(observed)\nchi2, p, dof, ex = chi2_contingency(observed)\nprint(chi2)\nprint(\"p:\",p,\" dof:\",dof,\" ex:\",ex)\nif chi2 > critical_value:\n print(\"reject the Null Hypothesis\")\nelse:\n print(\"Cannot reject the Null Hypothesis\")\n\n\n\n# Code ends here\n\n\n# --------------\n# Code starts here\n\ncorrelation = data.corr().abs()\n\nprint(correlation.shape)\n#print(correlation)\n \nus_correlation = correlation.unstack()\nus_correlation = us_correlation.sort_values(ascending = False)\n#print(us_correlation)\n\nmax_correlated = us_correlation[(us_correlation > 0.75) & (us_correlation < 1)]\n#print(max_correlated)\n\ndata.drop(columns = ['morningstar_rating', 'portfolio_stocks', 'category_12', 'sharpe_ratio_3y'], inplace=True)\nprint(data)\n# code ends here\n\n\n\n\n# --------------\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Code starts here\n\nfig, (ax_1, ax_2) = plt.subplots(1,2, figsize = (20,10))\n\nax_1.boxplot(data['price_earning'])\nax_1.set(title = \"Price_Earning\")\n\nax_2.boxplot(data['net_annual_expenses_ratio'])\nax_1.set(title = 
\"net_annual_expenses_ratio\")\n##Price_Earning box plot can be capped but for net_annual_expenses_ratio it seems that \n\n# code ends here\n\n\n# --------------\n# import libraries\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import r2_score,mean_squared_error\n\n# Code starts here\n\nX = data.drop(columns = ['bonds_aaa'])\ny = data['bonds_aaa']\nX_train,X_test,y_train,y_test = train_test_split( X, y, test_size = 0.3, random_state = 3)\n\n#Instantiate a linear regression model\nlr = LinearRegression()\n\n#Fit the Model\nlr.fit(X_train,y_train)\n\n#Prediction\ny_pred = lr.predict(X_test)\n\n#Accuracy\nrmse = np.sqrt(mean_squared_error(y_test,y_pred))\nprint(rmse)\n# Code ends here\n\n\n# --------------\n# import libraries\nfrom sklearn.model_selection import GridSearchCV, RandomizedSearchCV\nfrom sklearn.linear_model import Ridge,Lasso\n\n# regularization parameters for grid search\nridge_lambdas = [0.01, 0.03, 0.06, 0.1, 0.3, 0.6, 1, 3, 6, 10, 30, 60]\nlasso_lambdas = [0.0001, 0.0003, 0.0006, 0.001, 0.003, 0.006, 0.01, 0.03, 0.06, 0.1, 0.3, 0.6, 1]\n\n# Code starts here\nridge_model = Ridge()\nlasso_model = Lasso()\n\n#GridSearchCV for ridge and lasso\nridge_grid = GridSearchCV(estimator=ridge_model, param_grid=dict(alpha=ridge_lambdas))\nlasso_grid = GridSearchCV(estimator=lasso_model, param_grid=dict(alpha=lasso_lambdas))\n\n# Fit the model\nridge_grid.fit(X_train, y_train)\nlasso_grid.fit(X_train, y_train)\n\n# Prediction\nridge_pred = ridge_grid.predict(X_test)\nridge_rmse = np.sqrt(mean_squared_error(y_test,ridge_pred))\nprint(\"RMSE from Ridge Model is\", ridge_rmse)\nlasso_pred = lasso_grid.predict(X_test)\nlasso_rmse = np.sqrt(mean_squared_error(y_test,lasso_pred))\nprint(\"RMSE from Lasso Model is\", lasso_rmse)\n#rmse\n\nprint()\n\n# Code ends here\n\n\n"
}
] | 1 |
dheerajalim/ddt_example
|
https://github.com/dheerajalim/ddt_example
|
e320ee0510719c3a24dce242a20c558a02d4a052
|
741ae2fef60e0975dad026c3fc86f6238040812e
|
90468f4d242502fa2df9e349107b45e87c9e4ecc
|
refs/heads/master
| 2021-01-20T06:38:43.581357 | 2017-05-01T08:11:40 | 2017-05-01T08:11:40 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6852207183837891,
"alphanum_fraction": 0.6967370510101318,
"avg_line_length": 24,
"blob_id": "f7559d078d76779b0809f8064fc30af9b5adc8a9",
"content_id": "9195e76f0a55d3a5d0c4afb8f834519e01c9717a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1042,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 40,
"path": "/ddtexample.py",
"repo_name": "dheerajalim/ddt_example",
"src_encoding": "UTF-8",
"text": "import unittest\r\n\r\nfrom selenium import webdriver\r\nfrom ddt import ddt, data, unpack\r\nfrom selenium.webdriver.support.ui import WebDriverWait\r\nfrom selenium.webdriver.support import expected_conditions\r\nimport time\r\n\r\n@ddt\r\nclass testddt(unittest.TestCase):\r\n\r\n\t@classmethod\r\n\tdef setUpClass(cls):\r\n\r\n\t\tcls.driver = webdriver.Chrome(\"C:\\Personal\\Development\\Python\\seliniumtest\\chromedriver.exe\")\r\n\t\tcls.driver.implicitly_wait(10)\r\n\t\tcls.driver.maximize_window()\r\n\r\n\t\tcls.driver.get(\"http://www.google.co.in\")\r\n\r\n\t@data((\"horns\",9),(\"Food\", 10))\r\n\t@unpack\r\n\tdef test001_ddt(self, item_name , item_count ):\r\n\r\n\t\tdriver = self.driver\r\n\t\tprint (driver.title)\r\n\r\n\t\tsearch = driver.find_element_by_name(\"q\")\r\n\t\tsearch.clear()\r\n\t\tsearch.send_keys(item_name)\r\n\t\t#WebDriverWait(driver, 10).until(lambda s:s.find_element_by_name(\"q\").text == item_name)\r\n\t\tsearch.submit()\r\n\t\ttime.sleep(2)\r\n\t\tlinks = driver.find_elements_by_class_name(\"r\")\r\n\r\n\t\tself.assertEqual(item_count , len(links))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\tunittest.main(verbosity=2)\r\n\r\n"
},
{
"alpha_fraction": 0.7142857313156128,
"alphanum_fraction": 0.7142857313156128,
"avg_line_length": 13,
"blob_id": "53233849d92800478b2f8f52710be793523f0ded",
"content_id": "463102a00e9d79da14503618a7145affe86b3ca7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 14,
"license_type": "no_license",
"max_line_length": 13,
"num_lines": 1,
"path": "/README.md",
"repo_name": "dheerajalim/ddt_example",
"src_encoding": "UTF-8",
"text": "# DDT Example\n"
},
{
"alpha_fraction": 0.7508417367935181,
"alphanum_fraction": 0.7542087435722351,
"avg_line_length": 22.25,
"blob_id": "625969bd420b6842475e8ce4e29f1501c828f34c",
"content_id": "d4b8358428b8585e978386839617f580c4a626e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 297,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 12,
"path": "/ddtexample_suite.py",
"repo_name": "dheerajalim/ddt_example",
"src_encoding": "UTF-8",
"text": "import unittest\r\nfrom xmlrunner import xmlrunner\r\n\r\nfrom ddtexample import testddt\r\n\r\n\r\nsearch_tests = unittest.TestLoader().loadTestsFromTestCase(testddt)\r\n\r\n\r\ntest_suite = unittest.TestSuite([search_tests])\r\n\r\nxmlrunner.XMLTestRunner(verbosity = 2, output='test-reports').run(test_suite)\r\n\r\n\r\n\r\n"
}
] | 3 |
Leenhazaimeh/pythonic-garage-band
|
https://github.com/Leenhazaimeh/pythonic-garage-band
|
d8eead0f542301cc0dea7e18df85886281396347
|
89a19e5dab7359da1e2a905fb655d2932329c128
|
3e6488cb14ad2a3cf908f610e90797bcf4ad08cf
|
refs/heads/main
| 2023-07-12T05:16:35.577965 | 2021-08-07T10:23:03 | 2021-08-07T10:23:03 | 390,495,805 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5756365656852722,
"alphanum_fraction": 0.5756365656852722,
"avg_line_length": 20.094736099243164,
"blob_id": "17888094cd33f1ce75314019ad1d0855029feeb0",
"content_id": "73999a42544fb874b5d98dc2f7ad2ef68395d835",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2003,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 95,
"path": "/pythonic_garage_ban/band.py",
"repo_name": "Leenhazaimeh/pythonic-garage-band",
"src_encoding": "UTF-8",
"text": "from abc import abstractclassmethod\n\nclass Musician:\n members = [] #list of instances\n def __init__(self,name):\n self.name = name\n Musician.members.append(self)\n \n\n @abstractclassmethod\n def __str__(self):\n pass\n\n @abstractclassmethod\n def __repr__(self):\n pass\n\n @abstractclassmethod\n def get_instrument(self):\n pass\n\n @abstractclassmethod\n def play_solo(self):\n pass\nclass Guitarist(Musician):\n def __str__(self) :\n\n return \"My name is {} and I play guitar\".format(self.name)\n\n def __repr__(self):\n\n return \"Guitarist instance. Name = {}\".format(self.name)\n\n def get_instrument(self):\n return \"guitar\"\n\n def play_solo(self):\n return \"face melting guitar solo\"\n\nclass Drummer(Musician):\n \n def __str__(self) :\n\n return \"My name is {} and I play drums\".format(self.name)\n\n def __repr__(self):\n\n return \"Drummer instance. Name = {}\".format(self.name)\n\n def get_instrument(self):\n return \"drums\"\n\n def play_solo(self):\n return \"rattle boom crash\"\n\n\nclass Bassist(Musician):\n def __str__(self) :\n\n return \"My name is {} and I play bass\".format(self.name)\n\n def __repr__(self):\n\n return \"Bassist instance. Name = {}\".format(self.name)\n\n def get_instrument(self):\n return \"bass\"\n\n def play_solo(self):\n return \"bom bom buh bom\"\n\n\nclass Band(Musician):\n instances=[]\n def __init__(self, name,members):\n self.name=name\n self.members=members\n Band.instances.append(self)\n \n def play_solos(self):\n each_member=[]\n for x in self.members:\n each_member.append(x.play_solo())\n return each_member\n def __str__(self):\n \n return f\" i am {self.name} and welcome to you \"\n\n def __repr__(self):\n\n return f\"Name = {self.name}\"\n @classmethod\n def to_list(class_method):\n \n return class_method.instances"
}
] | 1 |
raphbacher/comet
|
https://github.com/raphbacher/comet
|
c943dfd9f35e9a550fd79eb46634019bc9f80d23
|
c80e9e4dc623ff946f8e3c713b97e25d9226374e
|
79811355ee2c2b4ff3c2af4f0b5f65e35c741a85
|
refs/heads/master
| 2021-08-31T13:10:28.483002 | 2017-12-21T11:40:23 | 2017-12-21T11:40:23 | 114,739,582 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5638629198074341,
"alphanum_fraction": 0.6074766516685486,
"avg_line_length": 26.514286041259766,
"blob_id": "5ad6f809cf3dfbd86d684707121ba051e29e929d",
"content_id": "20d4f705b0d573160c3782ea9c2a587f97325065",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 963,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 35,
"path": "/shade/astro_utils.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Dec 20 13:52:55 2017\n\n@author: [email protected]\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport math\nimport numpy as np\n\n\ndef Moffat(r, alpha, beta):\n \"\"\"\n Compute Moffat values for array of distances *r* and Moffat parameters *alpha* and *beta*\n \"\"\"\n return (beta-1)/(math.pi*alpha**2)*(1+(r/alpha)**2)**(-beta)\n\ndef generateMoffatIm(center=(12,12),shape=(25,25),alpha=2,beta=2.5,dx=0.,dy=0.,dim='MUSE'):\n \"\"\"\n By default alpha is supposed to be given in arsec, if not it is given in MUSE pixel.\n a,b allow to decenter slightly the Moffat image.\n \"\"\"\n ind = np.indices(shape)\n r = np.sqrt(((ind[0]-center[0]+dx)**2 + ((ind[1]-center[1]+dy))**2))\n if dim == 'MUSE':\n r = r*0.2\n elif dim == 'HST':\n r = r*0.03\n res = Moffat(r, alpha, beta)\n res = res/np.sum(res)\n return res\n"
},
{
"alpha_fraction": 0.6412703990936279,
"alphanum_fraction": 0.6532453894615173,
"avg_line_length": 49.08695602416992,
"blob_id": "d10427306c6133e27224e1462fb9497684c0e990",
"content_id": "1ca832225413fb1636a4b2c2f8367ce37147475d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5762,
"license_type": "no_license",
"max_line_length": 228,
"num_lines": 115,
"path": "/shade/preprocessing.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Nov 26 15:59:23 2015\n\n@author: raphael\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom shade import function_Image\nimport scipy.signal as ssl\nfrom math import *\nimport numpy as np\n\nclass Preprocessing():\n \"\"\"\n\n \"\"\"\n\n def __init__(self,cube,listSources,cubeProcessed,params,paramsPreProcess):\n self.cube=cube\n self.listSources=listSources\n self.cubeProcessed=cubeProcessed\n self.params=params\n self.paramsPreProcess=paramsPreProcess\n\n def processSrc(self,):\n for i,src in enumerate(self.listSources):\n if self.paramsPreProcess.forceProcess == False:\n if 'PROCESS_CUBE' in src.cubes.keys():\n continue\n\n if self.params.SW is not None:\n if self.params.sim ==False:\n center=src.cubes['MUSE_CUBE'].wcs.sky2pix([src.dec,src.ra])[0].astype(int)\n else:\n #if sim is True source is supposed centered\n center=[src.cubes['MUSE_CUBE'].shape[1]//2,src.cubes['MUSE_CUBE'].shape[2]//2]\n data=src.cubes['MUSE_CUBE'][:,max(center[0]-self.params.SW,0):min(center[0]+self.params.SW+1,src.cubes['MUSE_CUBE'].shape[1]), \\\n max(center[1]-self.params.SW,0):min(center[1]+self.params.SW+1,src.cubes['MUSE_CUBE'].shape[2])]\n else:\n data=src.cubes['MUSE_CUBE']\n\n if self.paramsPreProcess.unmask==True:\n data.data[:]=data.data.filled(np.nanmedian(data.data))\n\n if self.params.sim == False :\n lmbda=int(data.wave.pixel(src.lines['LBDA_OBS'][src.lines['LINE']=='LYALPHA'][0]))+self.paramsPreProcess.shiftLambdaDetection\n else:\n lmbda=data.shape[0]//2\n\n kernel_mf=self.params.kernel_mf[max(lmbda-self.params.LW-self.params.lmbdaShift,0):min(lmbda+self.params.LW+self.params.lmbdaShift+1,self.params.kernel_mf.shape[0])]\n #Process on a spectral slice that assure that for each value in the [-LW:LW] zone of interest, there are\n #at least 2*windowRC+1 points and so the removeContinuum method will work as expected\n\n 
dataRC=self.removeContinuum(data[max(lmbda-self.params.LW-self.paramsPreProcess.windowRC-self.params.lmbdaShift,0):min(lmbda+self.params.LW+self.paramsPreProcess.windowRC+self.params.lmbdaShift+1,data.shape[0]),:,:])\n lmbda=dataRC.shape[0]//2\n dataRC=dataRC[max(lmbda-self.params.LW-self.params.lmbdaShift,0):min(lmbda+self.params.LW+self.params.lmbdaShift+1,dataRC.shape[0]),:,:]\n\n\n dataMF=self.matchedFilterFSF(dataRC,kernel_mf)\n src.cubes['PROCESS_CUBE']=dataMF\n\n def processSrcWithCube(self):\n self.processCube()\n for src in self.listSources:\n if 'PROCESS_CUBE' not in src.cubes.keys():\n center=self.cubeProcessed.wcs.sky2pix([src.dec,src.ra])[0].astype(int)\n lmbda=self.cubeProcessed.wave.pixel(src.lines['LBDA_OBS'][src.lines['LINE']=='LYALPHA'][0])\n processedData=self.cubeProcessed[int(max(lmbda-self.params.LW-self.params.lmbdaShift,0)):int(min(lmbda+self.params.LW+self.params.lmbdaShift+1,self.cubeProcessed.shape[0])), \\\n int(max(center[0]-self.params.SW,0)):int(min(center[0]+self.params.SW+1,self.cubeProcessed.shape[1])), \\\n int(max(center[1]-self.params.SW,0)):int(min(center[1]+self.params.SW+1,self.cubeProcessed.shape[2]))]\n src.add_cube(processedData, 'PROCESS_CUBE')\n\n def processCube(self):\n if self.cubeProcessed is None:\n if self.paramsPreProcess.lmbdaMax is None:\n self.paramsPreProcess.lmbdaMax = self.cube.shape[0]\n\n data=self.cube[self.paramsPreProcess.lmbdaMin:self.paramsPreProcess.lmbdaMax,:,:]\n kernel_mf=self.params.kernel_mf[self.paramsPreProcess.lmbdaMin:self.paramsPreProcess.lmbdaMax]\n dataRC=self.removeContinuum(data)\n dataMF=self.matchedFilterFSF(dataRC,kernel_mf)\n\n self.cubeProcessed=dataMF\n\n\n def removeContinuum(self,cube):\n cubeContinuRemoved=cube.copy()\n if self.paramsPreProcess.methodRC == 'medfilt':\n cubeContinuRemoved.data=cube.data-ssl.medfilt(cube.data,[self.paramsPreProcess.windowRC,1,1])\n return cubeContinuRemoved\n\n def matchedFilterFSF(self,cubeContinuRemoved,kernel_mf,):\n\n cubeContinuRemoved.data 
= cubeContinuRemoved.data/np.sqrt(cubeContinuRemoved.var)\n f = function_Image.fine_clipping2\n #cubeMF = cubeContinuRemoved.loop_ima_multiprocessing(f, cpu = 6, verbose = False, \\\n # Pmin=self.paramsPreProcess.Pmin, Pmax=self.paramsPreProcess.Pmax, Qmin=self.paramsPreProcess.Qmin, Qmax=self.paramsPreProcess.Qmax) #\n cubeMF=cubeContinuRemoved\n if self.paramsPreProcess.spatialCentering==True:\n for i in range(cubeMF.shape[0]):\n cubeMF[i,:,:]=f(cubeMF[i,:,:],Pmin=self.paramsPreProcess.Pmin, Pmax=self.paramsPreProcess.Pmax, Qmin=self.paramsPreProcess.Qmin, Qmax=self.paramsPreProcess.Qmax,unmask=self.paramsPreProcess.unmask)\n #cubeMF = cubeContinuRemoved\n # ---- Matched Filter (MF) ---- #\n if self.paramsPreProcess.FSFConvol==True:\n for i in range(cubeMF.shape[0]):\n cubeMF[i,:,:]=function_Image.Image_conv(cubeMF[i,:,:],kernel_mf[i],self.paramsPreProcess.unmask)\n\n #cubeMF=cubeMF.loop_ima_multiprocessing(f, cpu = 6, verbose = False,Pmin=self.paramsPreProcess.Pmin, Pmax=self.paramsPreProcess.Pmax, \\\n #Qmin=self.paramsPreProcess.Qmin, Qmax=self.paramsPreProcess.Qmax)\n\n return cubeMF\n\n\n"
},
{
"alpha_fraction": 0.533458411693573,
"alphanum_fraction": 0.5639238953590393,
"avg_line_length": 29.088708877563477,
"blob_id": "f04549561af9c038cc0cd4717f8982dd28e0116b",
"content_id": "fe45e730b0120b7be10e2caec45b8ef14b903f60",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11193,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 372,
"path": "/shade/function_Image.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Dec 10 14:27:22 2015\n\n@author: raphael\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport scipy.stats as sst\nfrom scipy.stats import norm\nimport numpy as np\nimport multiprocessing\nfrom scipy.optimize import minimize\nimport scipy.signal as ssl\nfrom mpdaf.obj import Image\n\n\ndef fine_clipping(Image, niter=20, fact_value=0.9, Pmin=0, Pmax=-1, Qmin=0,\n Qmax=-1, unmask=True):\n P, Q = Image.shape\n if Qmax == -1:\n Qmax = Q\n if Pmax == -1:\n Pmax = P\n Image1 = Image[Pmin:Pmax, Qmin:Qmax].copy()\n P1, Q1 = Image1.shape\n Quartile1 = np.percentile(Image1.data, 25)\n Quartile3 = np.percentile(Image1.data, 75)\n IQR = Quartile3 - Quartile1\n med = np.median(Image1.data)\n sigestQuant = IQR/1.349\n if unmask is True:\n x = np.reshape(Image1.data.data, P1*Q1)\n else:\n x = np.reshape(Image1.data, P1*Q1)\n xclip = x\n\n facttrunc = norm.ppf(fact_value)\n correction = norm.ppf((0.75*(2*norm.cdf(facttrunc)-1) +\n (1 - norm.cdf(facttrunc))))\\\n - norm.ppf(0.25*(2*norm.cdf(facttrunc)-1)\n + (1 - norm.cdf(facttrunc)))\n medclip = np.nanmedian(xclip)\n# necessary if nan values\n# x=x.filled(medclip)\n# xclip=xclip.filled(medclip)\n# xclip[np.isnan(xclip)]=medclip\n# x[np.isnan(x)]=medclip\n qlclip = np.percentile(xclip, 25)\n stdclip = 2.*(medclip - qlclip)/1.349\n oldmedclip = 1.\n\n for i in range(niter):\n try:\n # on garde la symetrie dans la troncature\n xclip = x[np.where(((x-medclip) < facttrunc*stdclip) &\n ((x-medclip) > -facttrunc*stdclip))]\n medclip = np.median(xclip)\n qlclip = np.percentile(xclip, 25)\n stdclip = 2*(medclip - qlclip)/correction\n\n except:\n print(\"error normalizing\")\n return Image1\n\n xclip2 = x[np.where(((x-medclip) < 0) & ((x-medclip) > -3*stdclip))]\n correctionTrunc= np.sqrt(1. + (-3.*2.* norm.pdf(3.)) / (2.*norm.cdf(3.) 
-1.))\n stdclip2 = np.sqrt(np.mean((xclip2-medclip)**2)) / correctionTrunc\n\n Image1.data = (Image1.data - medclip)/stdclip2\n return Image1\n\n\ndef fine_clipping2(Image, niter=10, fact_value=0.9, Pmin=0, Pmax=-1,\n Qmin=0, Qmax=-1, unmask=True):\n P, Q = Image.shape\n if Qmax == -1:\n Qmax = Q\n if Pmax == -1:\n Pmax = P\n Image1 = Image[Pmin:Pmax, Qmin:Qmax].copy()\n P1, Q1 = Image1.shape\n if unmask is True:\n x = np.reshape(Image1.data.data, P1*Q1)\n else:\n x = np.reshape(Image1.data, P1*Q1)\n x_sorted = np.sort(x)\n Quartile1 = percent(x_sorted, 25)\n Quartile3 = percent(x_sorted, 75)\n IQR = Quartile3 - Quartile1\n med = middle(x_sorted)\n fact_IQR = norm.ppf(0.75) - norm.ppf(0.25)\n sigestQuant = IQR/fact_IQR\n\n xclip = x_sorted\n\n facttrunc = norm.ppf(fact_value)\n cdf_facttrunc = norm.cdf(facttrunc)\n correction = norm.ppf((0.75*( 2*cdf_facttrunc-1 ) + (1 - cdf_facttrunc) )) - norm.ppf(0.25*( 2*cdf_facttrunc-1 ) + (1 - cdf_facttrunc) )\n medclip = middle(xclip)\n qlclip = percent(xclip, 25)\n stdclip = 2.*(medclip - qlclip) / fact_IQR\n oldmedclip = 1.\n oldstdclip = 1.\n\n i = 0\n while (oldmedclip != medclip) and (i < niter):\n lim=np.searchsorted(x_sorted,[medclip-facttrunc*stdclip,medclip+facttrunc*stdclip])\n xclip = x_sorted[lim[0]:lim[1]]\n oldoldmedclip = oldmedclip\n\n oldmedclip = medclip\n oldstdclip = stdclip\n\n medclip = middle(xclip)\n\n qlclip = percent(xclip, 25)\n stdclip = 2*np.abs(medclip - qlclip)/correction\n\n if oldoldmedclip == medclip: # gestion des cycles\n\n if stdclip > oldstdclip:\n break\n else:\n stdclip = oldstdclip\n medclip = oldmedclip\n i += 1\n\n xclip2 = x_sorted[np.where(((x_sorted-medclip) < 0) &\n ((x_sorted-medclip) > -3*stdclip))]\n correctionTrunc = np.sqrt(1. + (-3. * 2. * norm.pdf(3.)) /\n (2. * norm.cdf(3.) 
- 1.))\n stdclip2 = np.sqrt(np.mean((xclip2-medclip)**2)) / correctionTrunc\n\n Image1.data = (Image1.data - medclip)/stdclip2\n\n return Image1\n\n\ndef middle(L):\n\n n = int(len(L))\n return (L[n//2] + L[n//2-1]) / 2.0\n\n\ndef percent(L, q):\n \"\"\"L np.array, q betwwen 0-100\"\"\"\n n0 = q/100. * len(L)\n n = int(np.floor(n0))\n if n >= len(L):\n return L[-1]\n if n >= 1:\n if n == n0:\n return L[n-1]\n else:\n return (L[n-1]+L[n])/2.0\n else:\n return L[0]\n\n\ndef recenter(Image1, niter=3, lmbda=1., fact_value=0.8, Pmin=0, Pmax=-1,\n Qmin=0, Qmax=-1):\n P1, Q1 = Image1.shape\n if Qmax == -1:\n Qmax = Q1\n if Pmax == -1:\n Pmax = P1\n Image = Image1[Pmin:Pmax, Qmin:Qmax].copy()\n P, Q = Image.shape\n Quartile1 = np.percentile(Image.data, 25)\n Quartile2 = np.percentile(Image.data, 50)\n IQR = Quartile2 - Quartile1\n med = np.median(Image.data)\n stdclip = IQR*1.48\n x = np.reshape(Image.data, P*Q)\n medclip = 0.\n\n oldmedclip = 1.\n for i in range(niter):\n if medclip != oldmedclip:\n # on garde la symetrie dans la troncature\n xclip = x[np.where(((x-medclip) < lmbda*stdclip) &\n ((x-medclip) > -lmbda*stdclip))]\n oldmedclip = medclip\n medclip = np.mean(xclip)\n qlclip = np.percentile(xclip, 25)\n stdclip = (medclip - qlclip)*1.48\n else:\n break\n\n Image1.data = Image1.data - medclip\n return Image1\n\n\ndef recenterMul(cube, w=None):\n pool = multiprocessing.Pool()\n\n try:\n print('starting the pool map')\n res = np.array(pool.map(recenter, [i for i in cube]))\n print(res.shape)\n pool.close()\n print('pool map complete')\n if w is None:\n res = res.reshape(cube.shape[0], cube.shape[1], cube.shape[2])\n else:\n res = res.reshape(2*w, cube.shape[1], cube.shape[2])\n except KeyboardInterrupt:\n print('got ^C while pool mapping, terminating the pool')\n pool.terminate()\n print('pool is terminated')\n except Exception as e:\n print('got exception: %r, terminating the pool' % (e,))\n pool.terminate()\n print('pool is terminated')\n\n finally:\n 
pool.join()\n return res\n\n\ndef getParamNoise(Image1, niter=10, fact_value=0.9, Pmin=0, Pmax=-1,\n Qmin=0, Qmax=-1):\n P1, Q1 = Image1.shape\n if Qmax == -1:\n Qmax = Q1\n if Pmax == -1:\n Pmax = P1\n Image = Image1[Pmin:Pmax, Qmin:Qmax].copy()\n P, Q = Image.shape\n Quartile1 = np.percentile(Image, 25)\n Quartile3 = np.percentile(Image, 75)\n IQR = Quartile3 - Quartile1\n med = np.median(Image)\n sigestQuant = IQR/1.349\n x = np.reshape(Image, P*Q)\n xclip = x\n\n facttrunc = norm.ppf(fact_value)\n correction = norm.ppf((0.75*(2*norm.cdf(facttrunc)-1)\n + (1 - norm.cdf(facttrunc)))) \\\n - norm.ppf(0.25*(2*norm.cdf(facttrunc)-1) + (1 - norm.cdf(facttrunc)))\n\n medclip = np.median(xclip)\n qlclip = np.percentile(xclip, 25)\n stdclip = 2.*(medclip - qlclip)/1.349\n\n for i in range(niter):\n # on garde la symetrie dans la troncature\n xclip = x[np.where(((x-medclip) < facttrunc*stdclip) &\n ((x-medclip) > -facttrunc*stdclip))]\n\n if len(xclip) == 0:\n break\n medclip = np.median(xclip)\n qlclip = np.percentile(xclip, 25)\n stdclip = 2*(medclip - qlclip)/correction\n\n xclip2 = x[np.where(((x-medclip) < 0) & ((x-medclip) > -3*stdclip))]\n correctionTrunc = np.sqrt(1. + (-3. * 2. * norm.pdf(3.)) /\n (2. * norm.cdf(3.) 
- 1.))\n stdclip2 = np.sqrt(np.mean((xclip2-medclip)**2)) / correctionTrunc\n\n return medclip, stdclip2\n\n\ndef getParamNoiseMul(cube):\n pool = multiprocessing.Pool()\n\n try:\n print('starting the pool map')\n res = np.array(pool.map(getParamNoise, [i for i in cube]))\n pool.close()\n print('pool map complete')\n\n except KeyboardInterrupt:\n print('got ^C while pool mapping, terminating the pool')\n pool.terminate()\n print('pool is terminated')\n\n except Exception as e:\n print('got exception: %r, terminating the pool' % (e,))\n pool.terminate()\n print('pool is terminated')\n\n finally:\n pool.join()\n return res\n\n\ndef getStudentParam(Image1, niter=10, fact_value=0.9, Pmin=0, Pmax=-1, Qmin=0,\n Qmax=-1, runLikelihood=False):\n P1, Q1 = Image1.shape\n if Qmax == -1:\n Qmax = Q1\n if Pmax == -1:\n Pmax = P1\n Image = Image1[Pmin:Pmax, Qmin:Qmax].copy()\n P, Q = Image.shape\n medclip, sigclip = getParamNoise(Image1, niter, fact_value, Pmin,\n Pmax, Qmin, Qmax)\n kurto = sst.moment(Image1, 4, None) / sst.moment(Image1, 2, None)**2 - 3\n mu = (4*kurto+6)/kurto\n sigmaEst = np.sum((Image1[Image1 < medclip]-medclip)**2)\\\n / len(Image1[Image1 < medclip])\n sEst = np.sqrt((mu-2)/mu*sigmaEst)\n\n if runLikelihood is True:\n initParam = (mu, sEst)\n results = minimize(logLH, initParam, method='Powell',\n args=(Image[Image < medclip], medclip))\n mu, sEst = results.x\n\n return medclip, sEst, mu, sigclip\n\n\ndef logLH(params, x, m):\n mu = params[0]\n s = params[1]\n return -np.sum(sst.t.logpdf(x, loc=m, scale=s, df=mu))\n\n\ndef getStudentParamMul(cube):\n pool = multiprocessing.Pool()\n\n try:\n print('starting the pool map')\n res = np.array(pool.map(getStudentParam, [i for i in cube]))\n print(res.shape)\n pool.close()\n print('pool map complete')\n\n except KeyboardInterrupt:\n print('got ^C while pool mapping, terminating the pool')\n pool.terminate()\n print('pool is terminated')\n\n except Exception as e:\n print('got exception: %r, terminating the pool' 
% (e,))\n pool.terminate()\n print('pool is terminated')\n\n finally:\n pool.join()\n return res\n\n\ndef Image_conv(im, tab, unmask=True):\n \"\"\" Defines the convolution between an Image object and an array.\n Designed to be used with the multiprocessing function\n 'FSF_convolution_multiprocessing'.\n\n :param im: Image object\n :type im: class 'mpdaf.obj.Image'\n :param tab: array containing the convolution kernel\n :type tab: array\n :param unmask: if True use .data of masked array (faster computation)\n :type unmask: bool\n :return: array\n :rtype: array\n\n \"\"\"\n if unmask is True:\n res = ssl.fftconvolve(im.data.data, tab, 'full')\n else:\n res = ssl.fftconvolve(im.data, tab, 'full')\n a, b = tab.shape\n im_tmp = Image(data=res[int(a-1)//2:im.data.shape[0] + (a-1)//2,\n (b-1)//2:im.data.shape[1]+(b-1)//2])\n return im_tmp.data\n"
},
{
"alpha_fraction": 0.40229883790016174,
"alphanum_fraction": 0.5517241358757019,
"avg_line_length": 13.333333015441895,
"blob_id": "f7ed5dbb1cfba9305fbfec9ffdcf781e90a90256",
"content_id": "d866ecde69148c772e154ed93b157fbcc846c7ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 87,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 6,
"path": "/shade/__init__.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Dec 11 05:02:23 2015\n\n@author: [email protected]\n\"\"\"\n\n"
},
{
"alpha_fraction": 0.5589906573295593,
"alphanum_fraction": 0.5943017601966858,
"avg_line_length": 42.69867706298828,
"blob_id": "8015982a49f38785f4b7fb726159e3064d6d7db9",
"content_id": "e3a711bf88b3ea5aecd7ec55ce5d5304674caa43",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13199,
"license_type": "no_license",
"max_line_length": 177,
"num_lines": 302,
"path": "/shade/simuReal.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Dec 8 10:42:33 2015\n\n@author: raphael\n\"\"\"\nfrom mpdaf.sdetect.source import Source\nfrom mpdaf.obj import Cube,Image,WCS,WaveCoord\n\nfrom skimage.morphology import binary_dilation\n\nimport numpy as np\nimport scipy.linalg as sl\nimport scipy.spatial.distance as dists\nimport scipy.stats as sst\nimport scipy.signal as ssl\nfrom scipy.stats import multivariate_normal\nimport math\ntry:\n import utils\nexcept:\n print(\"Missing utils for creating filaments\")\n\nclass SourceSim():\n \"\"\"\n Build a source object with possible several point sources (with similar emission lines)\n and linking \"filaments\".\n This class inheritates from the mpdaf Source and can be process like a real source object by the detection algorithms\n Overloads attributes are only used to ease the generation of the simulated data and keep most of\n the building process information.\n \"\"\"\n def __init__(self,shape=(41,101,101),lmbda=20,noise=None, \\\n spectraSourcesLmbda=None,spectraSourcesWidth=5,listCenter=None, \\\n listRadius=None,link=None,intens=0.2,rho=0,variation=0,noiseType='student',\\\n decrease='True',noiseCube=None,df_stu=5,randomShape=False,seed=None,steps=50,dilate_factor=5):\n\n \"\"\"\n Param:\n Param:\n Param:\n \"\"\"\n\n self.shape = shape\n self.lmbda = lmbda\n self.data = np.zeros(shape)\n self.listCenter = listCenter\n self.center = listCenter[0]\n self.listRadius = listRadius\n self.intens = intens\n self.link = link\n if randomShape==False:\n self.maskSources,self.label=buildMaskSources(self.listCenter,self.listRadius,shape,decrease)\n else:\n self.maskSources,self.label=buildRandomMaskSources(shape,self.listCenter,self.listRadius,seed=seed,steps=steps,dilate_factor=dilate_factor)\n np.random.seed()\n self.spectraSources=[]\n self.spectraSourcesLmbda=spectraSourcesLmbda\n self.spectraSourcesWidth=spectraSourcesWidth\n self.maskAll=self.maskSources>0\n\n #create \"filament link\" between point sources\n 
if link is not None:\n for k in link:\n self.linkGal2(k[0],k[1],intens)\n\n if spectraSourcesLmbda is not None:\n for k in range(len(spectraSourcesLmbda)):\n self.spectraSources.append(createSpectra(spectraSourcesLmbda[k],shape[0],width=5))\n for i in range(shape[1]):\n for j in range(shape[2]):\n if self.label[i,j]==k+1:\n\n if variation !=0:\n if decrease==True:\n spectra=createSpectra(spectraSourcesLmbda[k]+np.random.randint(variation),shape[0],width=5)\n else:\n spectra=createSpectra(spectraSourcesLmbda[k]+1/2.*np.sqrt((listCenter[k][0]-i)**2+(listCenter[k][1]-j)**2),shape[0],width=5)\n else:\n spectra=self.spectraSources[k]\n self.data[:,i,j]=spectra*self.maskSources[i,j]\n\n self.noise=noise\n self.rho=rho\n\n\n if self.noise != None:\n if noiseType=='student':\n self.dataClean=self.data.copy()\n self.noiseCube=sst.t.rvs(df_stu, loc=0, scale=self.noise,size=shape[0]*shape[1]*shape[2]).reshape(shape)\n if rho!=0:\n ker=np.array([[0.05,0.1,0.05],\n [0.1,0.4,0.1],\n [0.05,0.1,0.05]])\n self.noiseCube=ssl.fftconvolve(self.noiseCube,ker[np.newaxis,:],mode='same')\n self.data=self.data+self.noiseCube\n self.var=np.ones((shape))*self.noise**2\n else:\n self.dataClean=self.data.copy()\n if rho == 0: #no correlation\n self.noiseCube=np.random.normal(scale=self.noise,size=shape[0]*shape[1]*shape[2]).reshape(shape)\n self.data=self.data+self.noiseCube\n else:\n if rho==1: #generate spatially correlated noise\n #self.createCorrNoiseCube()\n ker=np.array([[0.05,0.1,0.05],\n [0.1,0.4,0.1],\n [0.05,0.1,0.05]])\n elif rho==2:\n ker=np.array([[0.00,0.1,0.00],\n [0.1,0.6,0.1],\n [0.00,0.1,0.00]])\n elif rho==3:\n ker=np.array([[0.02,0.02,0.02,0.02,0.02],\n [0.02,0.05,0.05,0.05,0.02],\n [0.02,0.05,0.28,0.05,0.02],\n [0.02,0.05,0.05,0.05,0.02],\n [0.02,0.02,0.02,0.02,0.02]])\n elif rho==4:\n ker=generateMoffatIm(shape=(15,15),alpha=3.1,beta=2.8,center=(7,7),dim=None)\n self.noiseCube=np.random.normal(scale=self.noise,size=shape[0]*shape[1]*shape[2]).reshape(shape)\n 
self.noiseCube=ssl.fftconvolve(self.noiseCube,ker[np.newaxis,:],mode='same')\n self.data=self.data+self.noiseCube\n self.var=np.ones((shape))*self.noise**2\n elif noiseCube is not None:\n self.dataClean=self.data.copy()\n self.noiseCube=noiseCube\n self.data=self.data+self.noiseCube\n self.var=np.ones((shape))*np.var(noiseCube)\n\n else:\n self.dataClean=self.data\n self.var=np.ones((shape))\n\n cubeClean=Cube(data=self.dataClean,wcs=WCS(),wave=WaveCoord())\n cubeNoisy=Cube(data=self.data,var=self.var,wcs=WCS(),wave=WaveCoord())\n ra,dec=listCenter[0]\n self.src=Source.from_data(4000,ra+1,dec+1,origin='Simulated',cubes={'TRUTH_CUBE':cubeClean,'MUSE_CUBE':cubeNoisy})\n self.src.add_line(['LBDA_OBS','LINE'],[lmbda,\"LYALPHA\"])\n\n #self.src.add_attr('SIMULATED_SOURCE',True,desc='Indicates to the detection method weither this is a real or semi-real source (with wcs for example) or a simulated one')\n\n def copy(self):\n cube=CubeSimMultiObj(self.shape,self.lmbda,self.noise,self.spectraSourcesLmbda,self.spectraSourcesWidth,self.listCenter,self.listRadius,self.link,self.intens)\n return cube\n\n def linkGal(self,obj1,obj2,intens):\n ### Attention\n ###!!!\n ### pour l'instant ne gere pas plusieurs link\n self.maskPoint=utils.line(self.listCenter[obj1],self.listCenter[obj2])\n dist=float((self.listCenter[obj1][0]-self.listCenter[obj2][0])**2+(self.listCenter[obj1][1]-self.listCenter[obj2][1])**2)\n listDist=[(self.listCenter[obj1][0]-k[0])**2+(self.listCenter[obj1][1]-k[1])**2 for k in self.maskPoint]\n listLmbda=[self.spectraSourcesLmbda[obj1]+(self.spectraSourcesLmbda[obj2]-self.spectraSourcesLmbda[obj1])*k/dist for k in listDist]\n listSpectra=[createSpectra(i,self.shape[0],width=5) for i in listLmbda]\n for i,k in enumerate(self.maskPoint):\n self.data[:,k[0],k[1]]=listSpectra[i]*intens\n for k in self.maskPoint:\n self.maskAll[k]=True\n\n def linkGal2(self,obj1,obj2,intens):\n ### Attention\n ###!!!\n ### pour l'instant ne gere pas plusieurs link\n 
self.maskPoint=utils.line(self.listCenter[obj1],self.listCenter[obj2])\n maskPointIm=np.zeros_like(self.maskAll,dtype=bool)\n for k in self.maskPoint:\n maskPointIm[k]=True\n maskPointIm=binary_dilation(maskPointIm,np.ones((3,3)))\n dist=float((self.listCenter[obj1][0]-self.listCenter[obj2][0])**2+(self.listCenter[obj1][1]-self.listCenter[obj2][1])**2)\n listDist=[(self.listCenter[obj1][0]-k[0])**2+(self.listCenter[obj1][1]-k[1])**2 for k in self.maskPoint]\n listLmbda=[self.spectraSourcesLmbda[obj1]+(self.spectraSourcesLmbda[obj2]-self.spectraSourcesLmbda[obj1])*k/dist for k in listDist]\n listSpectra=[createSpectra(i,self.shape[0],width=5) for i in listLmbda]\n listPoints=np.nonzero(maskPointIm)\n for i,j in zip(listPoints[0],listPoints[1]):\n self.data[:,i,j]=listSpectra[np.random.randint(len(listSpectra))]*intens\n\n self.maskAll=self.maskAll+maskPointIm\n\n def createCorrNoiseCube(self):\n \"\"\"\n Build covariance matrix : the idea is to have a correlation between two pixels decreasing with the distance\n so we first build a distance matrix then we ponderate by the rho coefficient and then we truncate\n for low correlation two avoid too many computations.\n Finally we generate as many of samples as there are wavelenght slices.\n \"\"\"\n self.noiseCube=np.zeros(self.shape)\n a=(np.indices((self.shape[1],self.shape[2]))).reshape(2,self.shape[1]*self.shape[2]).T\n dist=dists.cdist(a,a)\n self.covar= self.noise*self.rho**dist\n self.covar[self.covar<0.01]=0\n for k in range(self.shape[0]):\n self.noiseCube[k]=np.random.multivariate_normal(np.zeros(self.shape[1]*self.shape[2]),self.covar,size=1).reshape(self.shape[1],self.shape[2])\n\n\ndef buildHaloSpectra(lmbda,width,size):\n haloSpectra=np.zeros(size)\n haloSpectra[lmbda-width//2:lmbda+width//2]=sst.norm.pdf(np.arange(-width//2,width//2),scale=2)\n haloSpectra=haloSpectra*1/np.max(haloSpectra)\n return haloSpectra\n\n\ndef createSpectra(pos,size,width=5):\n size=np.floor(size/2)*2 ## assure size is odd\n 
xvar=np.linspace(0,size,size+1)\n spectrum = sst.norm.pdf(xvar,loc=pos,scale=width/2.35)\n spectrum=spectrum/np.max(spectrum)\n return spectrum\n\ndef buildMaskSources(listCenter,listRadius,shape,decrease):\n x,y=np.mgrid[0:shape[1], 0:shape[2]]\n #on crée un profil spatial gaussien\n mask=np.zeros((shape[1],shape[2]))\n label=np.zeros((shape[1],shape[2]))\n for k in range(len(listCenter)):\n zGalaxy = multivariate_normal.pdf(np.swapaxes(np.swapaxes([x,y],0,2),0,1),mean=listCenter[k], cov=[[listRadius[k]/1.17, 0], [0, listRadius[k]/1.17]])\n zGalaxy= zGalaxy*1/np.max(zGalaxy)\n zGalaxy[zGalaxy<0.1]=0\n if decrease==False:\n zGalaxy[zGalaxy>=0.1]=1\n label[zGalaxy>0]=k+1\n mask=mask+zGalaxy\n #On tronque le profil gaussien d et de la galaxie à 0.1\n\n return mask,label\n\ndef buildRandomMaskSources(shape=(40,51,51),listCenter=[(25,25)],listRadius=[15],steps=50,seed=False,dilate_factor=5):\n center=listCenter[0]\n if seed is not None:\n np.random.seed(seed)\n mask=np.empty((shape[1],shape[2]),dtype=bool)\n mask[:]=False\n x = [center[0]]\n y = [center[1]]\n mask[x[-1],y[-1]]=True\n for j in range(steps):\n step_x = np.random.randint(0,3)\n if step_x == 1:\n x.append(x[j] + 1 )\n elif step_x==2:\n x.append(x[j] - 1 )\n else:\n x.append(x[j])\n step_y = np.random.randint(0,3)\n if step_y == 1:\n y.append(y[j] + 1 )\n elif step_x==2:\n y.append(y[j] - 1 )\n else:\n y.append(y[j])\n\n mask[x[-1],y[-1]]=True\n mask=binary_dilation(mask,np.ones((dilate_factor,dilate_factor)))\n mask2=np.zeros((shape[1],shape[2]))\n label=np.zeros((shape[1],shape[2]))\n mask2[mask==True]=np.random.uniform(0.5,1,np.sum(mask==True))\n label[mask==True]=1\n if len(listCenter)>1:\n x,y=np.mgrid[0:shape[1], 0:shape[2]]\n for k in range(1,len(listCenter)):\n zGalaxy = multivariate_normal.pdf(np.swapaxes(np.swapaxes([x,y],0,2),0,1),mean=listCenter[k], cov=[[listRadius[k]/1.17, 0], [0, listRadius[k]/1.17]])\n zGalaxy= zGalaxy*1/np.max(zGalaxy)\n zGalaxy[zGalaxy<0.1]=0\n zGalaxy[zGalaxy>0]=0.8\n 
label[zGalaxy>0]=k+1\n mask2=mask2+zGalaxy\n\n\n\n return mask2,label\n\n\ndef createSemiRealSource(srcData,cube,SNR):\n \"\"\"\n Create a Source object with a cube of data mixing real noise from a MUSE (sub-)cube and signal\n of a simulated halo object. The object is centered in the real MUSE subcube\n\n \"\"\"\n dec,ra = cube.wcs.pix2sky([cube.shape[1]//2,cube.shape[2]//2])[0]\n lmbda = cube.wave.coord(cube.shape[0]//2)\n cube.data=cube.data+SNR*srcData\n src=Source.from_data(4000,ra,dec,origin='Simulated',cubes={'MUSE_CUBE':cube})\n src.add_line(['LBDA_OBS','LINE'],[lmbda,\"LYALPHA\"])\n src.images['TRUTH_DET_BIN_ALL']= Image(data=obj.maskSources>0 )\n return src\n\ndef Moffat(r, alpha, beta):\n return (beta-1)/(math.pi*alpha**2)*(1+(r/alpha)**2)**(-beta)\n\ndef generateMoffatIm(center=(12,12),shape=(25,25),alpha=2,beta=2.5,a=0.,b=0.,dim='MUSE'):\n \"\"\"\n by default alpha is supposed to be given in arsec, if not it is given in MUSE pixel.\n a,b allow to decenter slightly the Moffat image.\n \"\"\"\n ind = np.indices(shape)\n r = np.sqrt(((ind[0]-center[0]+a)**2 + ((ind[1]-center[1]+b))**2))\n if dim == 'MUSE':\n r = r*0.2\n elif dim == 'HST':\n r = r*0.03\n res = Moffat(r, alpha, beta)\n res = res/np.sum(res)\n return res\n"
},
{
"alpha_fraction": 0.608598530292511,
"alphanum_fraction": 0.6236739158630371,
"avg_line_length": 42.33064651489258,
"blob_id": "fdf261ffb21db5f67fdcb1239460f419ce5802f3",
"content_id": "00ba013b5103b943f74497abece9dc0ca12ede82",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5373,
"license_type": "no_license",
"max_line_length": 197,
"num_lines": 124,
"path": "/shade/detection.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Dec 2 16:59:18 2015\n\n@author: raphael\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom .array_tools import normArr\nfrom shade import proba_tools\nimport numpy as np\n\n\nclass Detection:\n\n\n def __init__(self,listSources,params,paramsPreProcess,paramsDetection):\n self.params = params\n self.paramsPreProcess=paramsPreProcess\n self.paramsDetection=paramsDetection\n self.listSources = listSources\n self.listPvalMap = []\n self.listCorrArr = []\n self.listIndexMap = []\n\n\n\n def detect(self):\n \"\"\"\n Compute the detection test. At this point sources must contain a PROCESS_CUBE\n of shape [LW-lmbda:LW+lmbda,center[0]-SW:center[0]+SW,center[1]-SW:center[1]+SW]\n or just [LW-lmbda:LW+lmbda,:,:] if SW is None.\n \"\"\"\n self.listDicRef=[]\n for i,src in enumerate(self.listSources):\n try:\n src.cubes['PROCESS_CUBE']\n except:\n print(\"Warning : No cube named 'PROCESS_CUBE' in source %s. 
MUSE_CUBE will be used\"%src.ID)\n src.cubes['PROCESS_CUBE']=src.cubes['MUSE_CUBE']\n\n #Compute the 3D array of correlations between the spaxel map and a list of shifted target spectra\n #Dims of an element of listCorrMap : Number of target spectra x Spatial x Spatial\n if self.paramsDetection.listDicRef is not None:\n self.listCorrArr.append(self.getCorrMap(src,self.paramsDetection.listDicRef[i])[0])\n else:\n corrArr,dicRef=self.getCorrMap(src)\n self.listCorrArr.append(corrArr)\n self.listDicRef.append(dicRef)\n #Then compute the 2D map of pvalues corresponding to the test of the maximal\n #correlation between each spaxel and the list of target spectra\n #The maps are saved as mpdaf Images.\n Im=src.cubes['PROCESS_CUBE'][0,:,:].clone()\n Im.data=proba_tools.calcPvalue(self.listCorrArr[-1])\n self.listPvalMap.append(Im)\n Im2=src.cubes['PROCESS_CUBE'][0,:,:].clone()\n Im2.data=np.argmax(self.listCorrArr[-1],axis=0)\n self.listIndexMap.append(Im2)\n return self.listPvalMap,self.listIndexMap\n\n def getCorrMap(self,source,listRef=None):\n \"\"\"\n Get the correlation map between pixels of the cube of a source object and\n a list of target spectra.\n \"\"\"\n zone=source.cubes['PROCESS_CUBE'].data\n if self.paramsPreProcess.unmask==True:\n zone=zone.data #access data without mask\n lmbda=zone.shape[0]//2\n if self.params.sim == False:\n refPos=source.cubes['PROCESS_CUBE'].wcs.sky2pix([source.dec,source.ra])[0].astype(int)\n else:\n #for simulated sources sources are always supposed centered in the cube.\n refPos=[zone.shape[1]//2,zone.shape[2]//2]\n\n if self.params.SW is not None: # Resize zone of study accordingly\n zone=zone[:,max(refPos[0]-self.params.SW,0):min(refPos[0]+self.params.SW+1, \\\n zone.shape[1]),max(refPos[1]-self.params.SW,0):min(refPos[1]+self.params.SW+1,zone.shape[2])]\n refPos=[zone.shape[1]//2,zone.shape[2]//2]\n\n 
zoneLarge=zone[max(lmbda-self.params.LW-self.params.lmbdaShift,0):min(lmbda+self.params.LW+self.params.lmbdaShift+1,zone.shape[0]),:,:]\n zoneCentr=zone[max(lmbda-self.params.LW,0):min(lmbda+self.params.LW+1,zone.shape[0]),:,:]\n\n #centering (or not)\n if self.paramsDetection.centering=='all':\n meanIm=np.mean(zoneCentr,axis=0)\n for i in range(zoneCentr.shape[0]):\n zoneCentr[i,:,:]=zoneCentr[i,:,:]-meanIm\n\n #Compute the target spectrum by averaging pixels around the center of the galaxy, then create a list of shifted versions.\n if listRef is None:\n #start=np.maximum((zoneLarge.shape[0]-zoneCentr.shape[0]-2*self.params.lmbdaShift),0)\n start=np.maximum((zoneLarge.shape[0]-zoneCentr.shape[0])//2-self.params.lmbdaShift,0)\n\n listRef=[(np.mean(np.mean( \\\n zoneLarge[:,refPos[0]-self.paramsDetection.windowRef:refPos[0]+self.paramsDetection.windowRef+1, \\\n refPos[1]-self.paramsDetection.windowRef:refPos[1]+self.paramsDetection.windowRef+1],axis=2),axis=1))[start+k:start+zoneCentr.shape[0]+k] for k in range(2*self.params.lmbdaShift+1)]\n\n if (self.paramsDetection.centering=='ref') or (self.paramsDetection.centering=='all'):\n for l,ref in enumerate(listRef):\n listRef[l]=ref-np.mean(ref)\n\n #normalize spectra(or not)\n if self.paramsDetection.norm == True:\n zoneNorm=normArr(zoneCentr)\n else:\n zoneNorm=zoneCentr\n\n #normalize target spectra (always)\n for l,ref in enumerate(listRef):\n listRef[l]=ref/np.sqrt(np.sum(ref**2))\n\n\n #Compute dot product between data and the list of referenced spectra\n res=np.zeros((len(listRef),zoneNorm.shape[1],zoneNorm.shape[2]))\n for k in range(len(listRef)):\n for i in range(res.shape[1]):\n for j in range(res.shape[2]):\n res[k,i,j]=np.dot(zoneNorm[:,i,j],listRef[k])\n\n return res,listRef\n"
},
{
"alpha_fraction": 0.5994431376457214,
"alphanum_fraction": 0.6123433709144592,
"avg_line_length": 30.78466033935547,
"blob_id": "a67bf0d1fddd63ba667d083541c9e2872350d0f5",
"content_id": "a871b50273a50e0e1c3fe8a644a3713cf07149f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10779,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 339,
"path": "/shade/proba_tools.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Dec 4 14:50:27 2015\n\n@author: raphael\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nfrom shade import function_Image\nfrom scipy.stats.mstats import mquantiles\n\n\nclass EmpFunc2():\n \"\"\"\"\n Define an empirical cumulative distribution function with a set of values.\n \"\"\"\n\n def __init__(self, values):\n \"\"\"\n Constructor of an empirical distribution.\n Param: ndarray values :The sample of the empirical values is\n \"\"\"\n self.n=float(values.size)\n self.values=np.sort(values.flatten())\n self.quantiles=mquantiles(values,prob=np.arange(self.n)/self.n,alphap=1/3., betap=1./3,)\n\n def calc(self,newValue):\n \"\"\"\n Compute empirical cumulative distribution function values for one sample as input\n \"\"\"\n idx=np.searchsorted(self.quantiles, newValue, side=\"left\")\n if idx>=self.n-1:\n return idx/self.n\n if np.abs(newValue - self.quantiles[idx-1]) < np.abs(newValue - self.quantiles[idx]):\n return (idx-1)/self.n\n else:\n return idx/self.n\n\n\n def calcECDF(self,arr):\n \"\"\"\n Compute empirical cumulative distribution function values for an array as input\n\n Param: ndarray arr\n\n Return: ndarray res, same shape as arr\n \"\"\"\n res=np.zeros_like(arr)\n for index, value in np.ndenumerate(arr):\n res[index]=self.calc(value)\n return res\n\n\nclass EmpFunc():\n \"\"\"\"\n Define an empirical cumulative distribution function with a set of values.\n \"\"\"\n\n def __init__(self,values):\n \"\"\"\n Constructor of an empirical distribution.\n Param: ndarray values :The sample of the empirical values is\n \"\"\"\n self.n=float(values.size)\n self.values=values\n\n def calc(self,newValue):\n \"\"\"\n Compute empirical cumulative distribution function values for one sample as input\n \"\"\"\n return np.sum(self.values<=newValue)/self.n\n\n def calcECDF(self,arr):\n \"\"\"\n Compute empirical 
cumulative distribution function values for an array as input\n\n Param: ndarray arr\n\n Return: ndarray res, same shape as arr\n \"\"\"\n res=np.zeros_like(arr)\n for index, value in np.ndenumerate(arr):\n res[index]=self.calc(value)\n return res\n\ndef calcPvalue(corrMap,H0=None,correct=True):\n \"\"\"\n Compute map of pvalues from a correlation 3D array.\n\n Param:3D array corrMap, dim Number of shifted references x Spatial x Spatial.\n Param: array H0 of H_0 samples for calibration\n Each slice represents the correlation between the pixels of the datacube and one particular target spectrum.\n\n Return: 2D array, pvalues of the test of the max of correlation\n \"\"\"\n listM_Corr=[]\n listS_Corr=[]\n listMu_Corr=[]\n\n #For each correlation map (resulting from the correlation with one reference), we estimate\n #the parameter of a student distribution\n\n for k in range(len(corrMap)):\n #res=function_Image.getStudentParam(corrMap[k],runLikelihood=True)\n res=function_Image.getParamNoise(corrMap[k])\n listM_Corr.append(res[0])\n# listS_Corr.append(res[1])\n# listMu_Corr.append(res[2])\n\n # with the estimated mean parameter we center each map to ensure that the null hypothesis is\n #symmetric.\n resCorrCentr=np.zeros_like(corrMap)\n for k in range(len(resCorrCentr)):\n resCorrCentr[k]=corrMap[k]-listM_Corr[k]\n #resCorrCentr[k]=corrMap[k]\n\n #We can then compute the empirical distribution of the opposite of the mins of correlation.\n minVal=np.min(resCorrCentr,axis=0)\n maxVal=np.max(resCorrCentr,axis=0)\n mMax=np.median([maxVal,-minVal])\n if H0 is None:\n if correct==True:\n empFuncCorr1=EmpFunc2(maxVal)\n empFuncCorr2=EmpFunc2(-minVal[minVal<=-mMax])\n else:\n empFuncCorr=EmpFunc2(-minVal[minVal<-np.min(maxVal)])\n #empFuncCorr=EmpFunc(-minVal)\n else:\n empFuncCorr=EmpFunc2(H0)\n #Finally we can compute the pvalues of the test of the max of correlation using this empirical function.\n if H0 is None:\n if correct==True:\n pvalCorr=np.zeros_like(maxVal)\n 
pvalCorr[maxVal<mMax]=(1-empFuncCorr1.calcECDF(maxVal[maxVal<mMax]))\n pvalCorr[maxVal>=mMax]=(1-empFuncCorr2.calcECDF(maxVal[maxVal>=mMax])/2.-0.5)\n else:\n pvalCorr = (1-empFuncCorr.calcECDF(maxVal))\n else:\n pvalCorr = (1-empFuncCorr.calcECDF(maxVal))\n\n return pvalCorr\n\n\ndef connexAggr(corrMap,q,core=None,returnNeighbors=False,w=1,coeff=1.2,seed=None):\n corrArr = corrMap-np.array([function_Image.getParamNoise(corrMap[k])[0] for k in range(len(corrMap))])[:,None,None]\n maxVal = np.max(corrArr, axis=0)\n minVal = -np.min(corrArr, axis=0)\n minValF = minVal.flatten()\n maxValF = maxVal.flatten()\n\n argsortPval = np.argsort(maxVal, axis=None)\n argsortPval = argsortPval[::-1]\n\n coreMask = np.zeros_like(maxVal, dtype=bool)\n\n if core is not None:\n if seed is not None:\n for s in seed:\n coreMask[s[0]-core:s[0]+core+1, s[1]-core:s[1]+core+1] = True\n else:\n coreMask[coreMask.shape[0]//2-core:coreMask.shape[0]//2+core+1,coreMask.shape[1]//2-core:coreMask.shape[1]//2+core+1]=True\n coreMask = coreMask.flatten()\n\n setAll = set([x for x in np.nonzero(coreMask)[0]])\n\n else:\n coreMask = coreMask.flatten()\n setAll = set([argsortPval[0]])\n listAll = np.array(list(setAll))\n setNeighbors=set(getNeighbors(setAll,maxVal,w=w)).difference(setAll)\n listNeighbors=np.array(list(setNeighbors))\n if core is not None:\n p=len(listAll)\n else:\n p = 1\n n = 0\n listAll_valid = None\n\n qq = 0\n niter = 0\n while qq < coeff*q or niter < 1./q+10:\n\n ll = []\n for a in listNeighbors:\n ll.append(np.nonzero(argsortPval == a)[0])\n\n try:\n k_=np.argmax(np.maximum(maxValF[listNeighbors],minValF[listNeighbors]))\n except:\n break\n k = listNeighbors[k_]\n setAll.update([k])\n setNeighbors = setNeighbors.union(getNeighbors(set([k]), maxVal, w))\n setNeighbors = setNeighbors.difference(setAll)\n listNeighbors = np.array(list(setNeighbors))\n listAll = np.array(list(setAll))\n if maxValF[k] < minValF[k]:\n n = n+1.\n elif maxValF[k] > minValF[k]:\n p = p+1.\n qq = 
(1+n)/np.maximum(p, 1)\n if qq <= q:\n listAll_valid = listAll.copy()\n niter = niter+1\n if listAll_valid is not None:\n listAll = listAll_valid\n else:\n mask = np.empty_like(maxVal, dtype=bool)\n mask[:] = False\n return mask\n try:\n\n listDetected = np.array(listAll)[np.array(maxValF[listAll] > minValF[listAll])]\n mask = getMask(listDetected, maxVal)\n except:\n mask = np.empty_like(maxVal, dtype=bool)\n mask[:] = False\n return mask\n\n\ndef getNeighbors(k_set, arr, w=1):\n neighbors = set()\n for k in list(k_set):\n i, j = np.unravel_index(k, arr.shape)\n imax = np.maximum(i-w, 0)\n imin = np.minimum(i+w, arr.shape[0]-1)\n jmax = np.maximum(j-w, 0)\n jmin = np.minimum(j+w, arr.shape[1]-1)\n x = []\n for l in range(imax, imin+1):\n x = x+[l]*(jmin-jmax+1)\n y = list(range(jmax, jmin+1))*(imin-imax+1)\n neighbors = neighbors.union(set(np.ravel_multi_index([x, y], arr.shape)))\n return neighbors\n\n\ndef getMask(listDetected, arr):\n mask = np.empty_like(arr, dtype=bool)\n mask[:] = False\n for x in list(listDetected):\n mask[np.unravel_index(x, arr.shape)] = True\n return mask\n\n\ndef connexAggrWhole(corrMap,core=None,returnNeighbors=False,w=1,coeff=1.2,seed=None):\n corrArr = corrMap-np.array([function_Image.getParamNoise(corrMap[k])[0] for k in range(len(corrMap))])[:,None,None]\n maxVal = np.max(corrArr, axis=0)\n minVal = -np.min(corrArr, axis=0)\n minValF = minVal.flatten()\n maxValF = maxVal.flatten()\n\n argsortPval = np.argsort(maxVal, axis=None)\n argsortPval = argsortPval[::-1]\n\n coreMask = np.zeros_like(maxVal, dtype=bool)\n\n if core is not None:\n if seed is not None:\n for s in seed:\n coreMask[s[0]-core:s[0]+core+1, s[1]-core:s[1]+core+1] = True\n else:\n coreMask[coreMask.shape[0]//2-core:coreMask.shape[0]//2+core+1,\n coreMask.shape[1]//2-core:coreMask.shape[1]//2+core+1]=True\n coreMask = coreMask.flatten()\n\n setAll = set([x for x in np.nonzero(coreMask)[0]])\n\n else:\n coreMask = coreMask.flatten()\n setAll = set([argsortPval[0]])\n 
listAll = list(setAll)\n setNeighbors=set(getNeighbors(setAll,maxVal,w=w)).difference(setAll)\n listNeighbors=np.array(list(setNeighbors))\n if core is not None:\n p=len(listAll)\n else:\n p = 1\n n = 0\n\n listPositif=[True]*len(listAll)\n listQQ = [0]*len(listAll)\n niter = 0\n while len(listAll) !=len(minValF):\n\n try:\n k_=np.argmax(np.maximum(maxValF[listNeighbors],minValF[listNeighbors]))\n except:\n break\n k = listNeighbors[k_]\n setAll.update([k])\n listAll.append(k)\n setNeighbors = setNeighbors.union(getNeighbors(set([k]), maxVal, w))\n setNeighbors = setNeighbors.difference(setAll)\n listNeighbors = np.array(list(setNeighbors))\n\n if maxValF[k] < minValF[k]:\n n = n+1.\n listPositif.append(False)\n elif maxValF[k] > minValF[k]:\n p = p+1.\n listPositif.append(True)\n qq = (1+n)/np.maximum(p, 1)\n listQQ.append(qq)\n niter=niter+1\n listAll = np.array(listAll)\n\n return maxVal,listAll,listQQ,listPositif\n\ndef detectCometGlob(maxVal,listAll,listQQ,listPositif,q):\n pos=np.nonzero(np.array(listQQ)<=q)[0]\n if len(pos)>0:\n k=pos[-1]\n listSelected=listAll[:k]\n listDetected = listSelected[listPositif[:k]]\n mask = getMask(listDetected, maxVal)\n else:\n mask = np.empty_like(maxVal, dtype=bool)\n mask[:] = False\n return mask\n\n\ndef corrPvalueBH(im,threshold):\n \"\"\"\n Entrées:\n im : l'ensemble des pvalues du test (array)\n threshold: le seuil de FDR voulu\n Sortie:\n newThresold : le seuil à appliquer à l'ensemble des valeurs pour vérifier le FDR.\n \"\"\"\n l=np.sort(im.flatten())\n k=len(l)-1\n while (l[k] > ((k+1)/float(len(l))*threshold)) and k>0:\n k=k-1\n thresholdFDR=l[k]\n return thresholdFDR\n"
},
{
"alpha_fraction": 0.5021413564682007,
"alphanum_fraction": 0.5256959199905396,
"avg_line_length": 30.133333206176758,
"blob_id": "ee5eddefa7e976416316f9e7c0004135cf115cbc",
"content_id": "bd05018e2922387341795d9cda09dbb3aa015622",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 934,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 30,
"path": "/shade/array_tools.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Thu Dec 10 16:35:12 2015\n\n@author: raphael\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n\n\ndef normArr(arrIn, varIn=None):\n res = np.empty_like(arrIn)\n resVar = np.empty_like(arrIn)\n if len(arrIn.shape) == 3:\n for i in range(res.shape[1]):\n for j in range(res.shape[2]):\n res[:, i, j] = arrIn[:, i, j]/np.sqrt(np.sum(arrIn[:, i, j]**2))\n if varIn is not None:\n resVar[:, i, j] = varIn[:, i, j] / np.sum(arrIn[:, i, j]**2)\n elif len(arrIn.shape) == 2:\n for i in range(res.shape[1]):\n res[:, i] = arrIn[:, i] / np.sqrt(np.sum(arrIn[:, i]**2))\n if varIn is not None:\n resVar[:, i] = varIn[:, i] / np.sum(arrIn[:, i]**2)\n if varIn is not None:\n return res, resVar\n return res\n"
},
{
"alpha_fraction": 0.8059126138687134,
"alphanum_fraction": 0.8071979284286499,
"avg_line_length": 44.764705657958984,
"blob_id": "1fd12be00799990eb9b376c70d8984a6b7bfeb15",
"content_id": "d5c07ce893f73160b660c643e28c220c0438488b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 778,
"license_type": "no_license",
"max_line_length": 186,
"num_lines": 17,
"path": "/README.md",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
    "text": "# Repository of the SHADE/COMET method for halo detections\n\nThis repository contains the code for the SHADE approach (based on an empirical Benjamini & Hochberg procedure) and the COMET approach for detecting galactic halos in MUSE hyperspectral data.\n\nThe main application file is shade_main.py.\n\nA Shade object is composed of a detection object, a preprocessing object and a postprocessing object.\n\nDefault parameters are stored in a class Params in parameters.py, with specific parameters for preprocessing, detection and postprocessing.\n\nTo install:\npython setup.py install\n\nFor a step-by-step tutorial, see the notebook Example SHade.ipynb\n\nTo test the notebook in a binder environment:\n[](https://mybinder.org/v2/gh/raphbacher/comet/master)\n"
},
{
"alpha_fraction": 0.6566351652145386,
"alphanum_fraction": 0.663518488407135,
"avg_line_length": 40.64666748046875,
"blob_id": "aefcd9490a6e8e4677e7afcff39465ec0664a30b",
"content_id": "dc814e9a9fdfdc93bd75c9edfd837262a87c2e3a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6247,
"license_type": "no_license",
"max_line_length": 244,
"num_lines": 150,
"path": "/shade/shade_main.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Dec 11 03:54:02 2015\n\n@author: raphael\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom mpdaf.sdetect.source import Source\nfrom mpdaf.sdetect import Catalog\nfrom astropy.io import fits as pyfits\nfrom mpdaf.obj import Cube\nfrom shade import parameters\nfrom shade import preprocessing as prep\nfrom shade import postprocessing as postp\nfrom shade import detection\n\n\nclass SHADE():\n\n\n def __init__(self,cube=None, cubeProcessed=None,catalog=None, listSources=None, listID = None,params=None):\n \"\"\"\n Several Input choices are available:\n * List of MUSE Sources : These sources needs a MUSE_CUBE in their cubes extension.\n * Complete cube with a MUSE catalogue -> all Lyman-alpha emitters will be treated\n * Complete cube, catalogue and listID -> all the source of the ID list will be treated\n An already preprocessed cubeProcessed can also by passed along.\n Param: Cube object *cube*, MUSE datacube (optionnal if listSources is defined)\n Param: Cube object *cubeProcessed*, preprocessed datacube\n Param: String *catalog*, filename of a MUSE catalog\n Param: list of Sources object *listSources* (optionnal)\n Param: list of sources IDs *listID*, list of sources to extract from the cube using the catalog.\n Param: objet Params *params*, parameters for the method, if not defined,\n default parameters are used\n\n \"\"\"\n if params is None:\n self.params=parameters.Params()\n else:\n self.params=params\n if cube is not None:\n self.cube=cube\n else:\n self.cube=None\n if catalog is not None:\n self.catalog=Catalog.read(catalog)\n if cubeProcessed is not None:\n self.cubeProcessed=cubeProcessed\n else:\n self.cubeProcessed=None\n\n if listSources is not None:\n self.listSources=listSources\n\n elif listSources is None and listID is None:\n hdulist=pyfits.open(catalog)\n listID=[]\n for k in range(len(hdulist[1].data)):\n if 
hdulist[1].data[k][1]=='Lya' and hdulist[1].data[k][4]>0:#We get all Lyman alpha with a defined redshift\n listID.append(hdulist[1].data[k][0])\n self.listSources=[]\n for k in listID:\n self.listSources.append(self.sourceFromCatalog(k))\n\n elif listID is not None:\n self.listSources=[]\n for k in listID:\n self.listSources.append(self.sourceFromCatalog(k))\n\n self.listCorrArr=[]\n self.listPvalMap=[]\n self.preprocessing=None\n self.postprocessing=None\n self.paramsPreProcess=parameters.ParamsPreProcess()\n self.paramsPostProcess=parameters.ParamsPostProcess()\n self.paramsDetection=parameters.ParamsDetection()\n\n\n def preprocess(self,paramsPreProcess=None):\n \"\"\"\n Preprocess the sources and store processed cube in a PROCESS_CUBE cube. If a source has already a PROCESS_CUBE, it will not be processed again.\n \"\"\"\n if paramsPreProcess is not None:\n self.paramsPreProcess=paramsPreProcess\n\n if self.preprocessing is None:\n self.preprocessing=prep.Preprocessing(cube=self.cube,listSources=self.listSources,cubeProcessed=self.cubeProcessed,params=self.params,paramsPreProcess=self.paramsPreProcess)\n\n #In some cases (lot of sources with some overlapping areas) it can be interesting to process all the cube\n #and then to extract processed data for the sources instead of processing several times the same data\n\n if self.paramsPreProcess.allCube == True:\n self.preprocessing.processSrcWithCube()\n self.cubeProcessed = self.preprocessing.cubeProcessed\n else:\n self.preprocessing.processSrc()\n\n\n\n def detect(self,paramsDetection=None):\n \"\"\"\n Compute the detection test. 
At this point sources must contain a PROCESS_CUBE\n of shape [LW-lmbda:LW+lmbda,center[0]-SW:center[0]+SW,center[1]-SW:center[1]+SW]\n or just [LW-lmbda:LW+lmbda,:,:] if SW is None.\n \"\"\"\n if paramsDetection is not None:\n self.paramsDetection=paramsDetection\n\n self.detection=detection.Detection(listSources=self.listSources,params=self.params,paramsPreProcess=self.paramsPreProcess,paramsDetection=self.paramsDetection)\n\n self.listPvalMap,self.listIndexMap=self.detection.detect()\n self.listCorrArr=self.detection.listCorrArr\n\n def postprocess(self,rawCube=None,paramsPostProcess=None):\n if paramsPostProcess is not None:\n self.paramsPostProcess=paramsPostProcess\n\n if rawCube is not None:\n cube=Cube(rawCube)\n else:\n cube=self.cube\n if self.postprocessing is None:\n self.postprocessing=postp.Postprocess(cube,self.listSources,self.listPvalMap,self.listIndexMap,params=self.params,paramsPreProcess=self.paramsPreProcess,paramsDetection=self.paramsDetection,paramsPostProcess=self.paramsPostProcess,)\n self.postprocessing.paramsPostProcess=self.paramsPostProcess\n self.postprocessing.createResultSources()\n if self.paramsPostProcess.newSource==True:\n self.listResultSources=self.postprocessing.listResultSources\n\n\n\n def sourceFromCatalog(self,ID):\n \"\"\"\n Build source object from ID and catalog. By default, MUSE_CUBE in each source are not resized.\n \"\"\"\n try:\n ra=self.catalog[self.catalog['ID']==ID]['RA'][0]\n dec=self.catalog[self.catalog['ID']==ID]['DEC'][0]\n z=self.catalog[self.catalog['ID']==ID]['Z_MUSE'][0]\n except:\n ra=self.catalog[self.catalog['ID']==ID]['Ra'][0]\n dec=self.catalog[self.catalog['ID']==ID]['Dec'][0]\n z=self.catalog[self.catalog['ID']==ID]['Z'][0]\n\n cubeData=self.cube\n src=Source.from_data(ID, ra, dec, origin=['SHADE Intern Format','1.0',self.cube.filename,'1.0'],cubes={'MUSE_CUBE':cubeData})\n src.add_line(['LBDA_OBS','LINE'],[(z+1)*1215.668,\"LYALPHA\"])\n return src\n"
},
{
"alpha_fraction": 0.5852090120315552,
"alphanum_fraction": 0.6120578646659851,
"avg_line_length": 39.389610290527344,
"blob_id": "cac0bc09702f855bc4a3837f9acd03dcdc0787d0",
"content_id": "1f61210157f0a00171dd6c71fc3935086c61a3b6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6220,
"license_type": "no_license",
"max_line_length": 157,
"num_lines": 154,
"path": "/shade/parameters.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Dec 11 03:29:26 2015\n\n@author: raphael\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nfrom shade import astro_utils\n\n# fsf_file = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n# 'fsf_HDFS_v1-24.pk')\n\n# DEFAULT_FSF = pickle.load(open(fsf_file,'rb'))\nDEFAULT_FSF_0 = astro_utils.generateMoffatIm(center=(12, 12), shape=(25, 25),\n alpha=2, beta=2.5, dx=0., dy=0.,\n dim='MUSE')\nDEFAULT_FSF = np.tile(DEFAULT_FSF_0[None,: , :],(3681, 1, 1))\n\n#DEFAULT_KERNEL_MF = np.tile(np.array([[ 0.09109594, 0.11895933, 0.09109594],\n# [ 0.11895933, 0.15977892, 0.11895933],\n# [ 0.09109594, 0.11895933, 0.09109594]])[None, :, :],(3681, 1, 1))\n\nDEFAULT_KERNEL_MF = np.tile(DEFAULT_FSF_0[None,11:14,11:14],(3681, 1, 1))\n\n\nclass Params():\n def __init__(self,\n LW=20,\n SW=None,\n LBDA=3641,\n sim=False,\n lmbdaShift=7,\n version='V1.0',\n fsf=DEFAULT_FSF,\n kernel_mf=DEFAULT_KERNEL_MF,\n ):\n \"\"\"\n Param: int *LW*, Lambda Window where the correlation test will occur (that must cover the half-width of the line emission)\n Param: int *SW*, Spatial Window for the exploration, if None the cube is fully explored spatially\n Param: bool *sim*, indicates if given sources are simulated ones (without wcs and wave objects)\n Param: int *lmbdaShift*, maximum shift in one direction to construct a family of target spectra\n from the estimated source spectrum. A dictionary with 2*lmbdaShift+1 spectra will be built.\n Param: string *centering*, choose to center all spectra ('all') or 'none' or only the target spectra ('ref')\n Param: bool *norm*, choose to norm (correlation approach) or not (matched filter approach)\n Param: np array *fsf*, fsf over wavelength (3D array)\n Param: np array *kernerl_mf*, kernel for matched filter\n \"\"\"\n\n self.LW=LW\n self.SW=SW\n self.sim=sim\n self.lmbdaShift=lmbdaShift\n self.version=version\n self.origin=['SHADE',version]\n self.fsf= fsf\n self.kernel_mf=kernel_mf\n\nclass ParamsPreProcess():\n \"\"\"\n Param: bool *allCube*, wheither to process the whole cube at once then reform the sources\n or process each source datacube independently\n Param: string methodRC, choice of the method for remove the continuum\n (for now only median filter 'medfilt', lts is on his way)\n Param: int windowRC, window for median filter (in this case it is the whole window not the half-size)\n Param: int Pmin,Pmax,Qmin,Qmax, trim some borders of the datacube to avoid some problems, used only\n with the allCube processing. 0 for Pmin and -1 for Pmax mean no trimming.\n Param: int lmbdaMin and lmbdaMax, trim some wavelength if allCube processing.\n Param: bool forceProcess, force a new preprocessing of sources even if sources have already a \"PROCESS_CUBE\"\n Param: unmask=True, unmask masked array with median filled values to speed up calculations. MUST BE SET TO TRUE FOR NOW (unsolved bugs due to nan values)\n Param: shiftLamdaDetectin=0, to test false detection in a spectrally shifted (empty) area\n Param: FSFConvol=True, apply FSF Matched Filter (increase SNR but smooth data)\n Param: spatialCentering=True, apply a spatial robust mean,var estimation (slice by slice) then apply centering and variance reduction\n \"\"\"\n def __init__(self,\n allCube=False,\n methodRC='medfilt',\n windowRC=101,\n Pmin=0,\n Pmax=-1,\n Qmin=0,\n Qmax=-1,\n lmbdaMin=0,\n lmbdaMax=None,\n forceProcess=False,\n unmask=True,\n shiftLambdaDetection=0,\n FSFConvol=True,\n spatialCentering=True\n ):\n self.allCube=allCube\n self.methodRC=methodRC\n self.windowRC=windowRC\n self.Pmin=Pmin\n self.Pmax=Pmax\n self.Qmin=Qmin\n self.Qmax=Qmax\n self.lmbdaMin=lmbdaMin\n self.lmbdaMax=lmbdaMax\n self.forceProcess=forceProcess\n self.unmask=unmask\n self.shiftLambdaDetection=shiftLambdaDetection\n self.spatialCentering=spatialCentering\n self.FSFConvol=FSFConvol\n\nclass ParamsDetection():\n \"\"\"\n Param: int *windowRef*, spatial window (half-width) for computing the reference spectrum (by averaging)\n at the center of the galaxy.\n Param: string *centering*, center (with 'all') or not (with 'none') the spectra to be tested or\n only the reference spectra (with 'ref')\n Param: bool *norm*, normalize spectra or not in the correlation test.\n Param: list *listDicRef*, list of proposed dictionnary of spectra for each source (None by default as it is learned on data )\n \"\"\"\n\n\n def __init__(self,\n windowRef=2,\n centering='none',\n norm=True,\n listDicRef=None\n ):\n self.windowRef=windowRef\n self.centering=centering\n self.norm=norm\n self.listDicRef=listDicRef\n\n\n\nclass ParamsPostProcess():\n \"\"\"\n Param: int *threshold*\n Param: bool *FDR*, apply threshold in FDR instead of PFA\n Param: bool *qvalue*, compute or not q-values (that are more or less FDR-pvalues)\n Param: bool *newSource*, save results in new sources objects instead of current sources\n Param: bool *resizeCube*, resize MUSE_CUBE in sources accordingly with PROCESS_CUBE\n \"\"\"\n def __init__(self,\n threshold=0.1,\n FDR=True,\n qvalue=True,\n newSource=True,\n resizeCube=True,\n ):\n self.FDR=FDR\n self.threshold=threshold\n self.thresholdFDR=0\n self.qvalue=qvalue\n self.resizeCube=resizeCube\n self.newSource=newSource\n"
},
{
"alpha_fraction": 0.6056957244873047,
"alphanum_fraction": 0.6228975653648376,
"avg_line_length": 36.36428451538086,
"blob_id": "555b07a3c54999397f17accb670134dcfc59e1f1",
"content_id": "b2c418468483648c3243dd28ee88e1730154306e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5236,
"license_type": "no_license",
"max_line_length": 136,
"num_lines": 140,
"path": "/shade/postprocessing.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Dec 2 16:58:41 2015\n\n@author: raphael\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nfrom mpdaf.sdetect.source import Source\nfrom shade import qvalues\n\n\nclass Postprocess():\n\n def __init__(self,cube,listSources,listPvalMap,listIndexMap,params,paramsPreProcess,paramsDetection,paramsPostProcess):\n self.cube=cube\n self.listSources=listSources\n self.listPvalMap=listPvalMap\n self.listIndexMap=listIndexMap\n self.params=params\n self.paramsPreProcess=paramsPreProcess\n self.paramsDetection=paramsDetection\n self.paramsPostProcess=paramsPostProcess\n\n\n\n\n def createResultSources(self):\n \"\"\"\n Create Sources objects with results of the detection\n Can include binary detection maps, spectra estimation ...\n \"\"\"\n if self.paramsPostProcess.newSource==True:\n self.listResultSources=[]\n for i,src in enumerate(self.listSources):\n\n\n if self.paramsPostProcess.newSource==True:\n newSrc=Source.from_data(src.ID,src.ra,src.dec,self.params.origin+[src.cubes['MUSE_CUBE'].filename,src.header['CUBE_V']])\n newSrc.cubes['MUSE_CUBE']=src.cubes['MUSE_CUBE']\n newSrc.cubes['PROCESS_CUBE']= src.cubes['PROCESS_CUBE']\n self.listResultSources.append(newSrc)\n else:\n newSrc=src\n\n if self.paramsPostProcess.resizeCube == True:\n newSrc.cubes['MUSE_CUBE']=self.resizeCube(self.cube,newSrc.cubes['PROCESS_CUBE'])\n\n maskAll=self.createBinMap(self.listPvalMap[i])\n maskGal=self.createBinMapGal(self.listPvalMap[i],src)\n maskHal=maskAll-maskGal\n maskHal.data=maskHal.data.astype(np.int)\n halSpec=self.createHaloSpec(maskAll,maskGal,src)\n galSpec=self.createGalSpec(maskGal,src)\n\n newSrc.images['DET_STAT']=self.listPvalMap[i]\n if self.paramsPostProcess.qvalue==True:\n newSrc.images['DET_QSTAT']=self.createQvalMap(self.listPvalMap[i])\n newSrc.images['DET_INDEX_ALL']=self.listIndexMap[i]\n newSrc.images['DET_BIN_GAL']= maskGal\n newSrc.images['DET_BIN_HAL'] = maskHal\n maskAll.data = maskAll.data.astype(np.int)\n newSrc.images['DET_BIN_ALL'] =maskAll\n newSrc.spectra['SPEC_HAL'] = halSpec\n newSrc.spectra['SPEC_GAL'] = galSpec\n #newSrc.origin=tuple(self.params.origin+[newSrc.cubes['MUSE_CUBE'].filename])\n\n\n\n def corrPvalueBH(self,im,threshold):\n \"\"\"\n Entrées:\n im : l'ensemble des pvalues du test\n threshold: le seuil de FDR voulu\n Sortie:\n newThresold : le seuil à appliquer à l'ensemble des valeurs pour vérifier le FDR.\n \"\"\"\n l=np.sort(im.data.flatten())\n k=len(l)-1\n while (l[k] > ((k+1)/float(len(l))*threshold)) and k>0:\n k=k-1\n self.thresholdFDR=l[k]\n return self.thresholdFDR\n\n\n def createBinMap(self,Im):\n Im1=Im.copy()\n if self.paramsPostProcess.FDR == True:\n Im1.data=Im.data<self.corrPvalueBH(Im,self.paramsPostProcess.threshold)\n else:\n Im1.data=(Im.data<self.paramsPostProcess.threshold).astype(np.int)\n return Im1\n\n def createQvalMap(self,Im):\n Im1=Im.copy()\n Im1.data=qvalues.estimate(Im.data)\n return Im1\n\n def createBinMapGal(self,mask,src):\n \"\"\"\n For now the galaxy is defined by the FSF\n \"\"\"\n res=np.zeros(mask.shape)\n Im=mask.clone()\n center=src.cubes['PROCESS_CUBE'].wcs.sky2pix([src.dec,src.ra])[0].astype(int)\n ll=int(min([7,center[0],mask.shape[0]-center[0]-1,center[1],mask.shape[1]-center[1]-1]))\n if self.params.sim ==False:\n lmbda=int(src.cubes['MUSE_CUBE'].wave.pixel(src.cubes['PROCESS_CUBE'].wave.coord(src.cubes['PROCESS_CUBE'].shape[0]//2)))\n else:\n lmbda=20\n\n res[center[0]-ll:center[0]+ll+1,center[1]-ll:center[1]+ll+1]= \\\n self.params.fsf[lmbda][10-ll:10+ll+1,10-ll:10+ll+1]>0.01\n Im.data=res.astype(np.int)\n return Im\n\n\n\n def createHaloSpec(self,maskHal,maskGal,src):\n res=np.mean(src.cubes['MUSE_CUBE'].data[:,maskHal.data.astype(bool) & ~maskGal.data.astype(bool)],axis=1)\n spe=src.cubes['MUSE_CUBE'][:,0,0].clone()\n spe.data=res\n return spe\n\n def createGalSpec(self,maskGal,src):\n res=np.mean(src.cubes['MUSE_CUBE'].data[:,maskGal.data.astype(bool)],axis=1)\n spe=src.cubes['MUSE_CUBE'][:,0,0].clone()\n spe.data=res\n return spe\n\n def resizeCube(self,cubeToResize,cubeRef):\n A0=cubeToResize.wcs.sky2pix(cubeRef.wcs.pix2sky([0,0])[0],nearest=True)[0].astype(int)\n B1=cubeToResize.wcs.sky2pix(cubeRef.wcs.pix2sky([cubeRef.shape[1],cubeRef.shape[2]])[0],nearest=True)[0].astype(int)\n lmbda=int(cubeToResize.wave.pixel(cubeRef.wave.coord(cubeRef.shape[0]/2),nearest=True))\n LW=cubeRef.shape[0]//2\n return cubeToResize[lmbda-LW:lmbda+LW+1,A0[0]:B1[0],A0[1]:B1[1]]\n\n"
},
{
"alpha_fraction": 0.5902255773544312,
"alphanum_fraction": 0.597744345664978,
"avg_line_length": 19.461538314819336,
"blob_id": "27664ac485a29a84aac277d2f5438c6c6f0afd5c",
"content_id": "69a4dff50d9d4a8bcd27dcf6ceecd63f4feca4fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 266,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 13,
"path": "/setup.py",
"repo_name": "raphbacher/comet",
"src_encoding": "UTF-8",
"text": "from setuptools import setup, find_packages\n\nsetup(\n name='shade',\n version='0.3',\n packages=find_packages(),\n zip_safe=False,\n package_data={\n 'shade': ['data/*.fits'],\n },\n include_package_data=True,\n #install_requires=['Pillow'],\n)\n"
}
] | 13 |
mutcato/coinmove_dynamo_updater | https://github.com/mutcato/coinmove_dynamo_updater | fa04acce96ba150d08db829e41b4c7f5390a9eb8 | 9fe434e20c2bf243a5dcc3d104e4d0ace91a2a58 | d08e06b25f36b277a01d7685f3dfd625efe2af03 | refs/heads/master | 2023-08-19T08:35:01.807228 | 2021-10-02T18:38:56 | 2021-10-02T18:38:56 | 412,820,760 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5682830214500427,
"alphanum_fraction": 0.5907966494560242,
"avg_line_length": 40.24489974975586,
"blob_id": "3c557d5f69a7695c31dd2afbc15e80f7e57c0a39",
"content_id": "c4d46f1ce3cdcebe4de1c9927cdd1c24e1e14142",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4042,
"license_type": "no_license",
"max_line_length": 149,
"num_lines": 98,
"path": "/lambda_function.py",
"repo_name": "mutcato/coinmove_dynamo_updater",
"src_encoding": "UTF-8",
"text": "from datetime import datetime\nimport boto3\nimport botocore\nfrom botocore.config import Config\nfrom decimal import Decimal\n\nTTL = {\"5m\": 120*24*60*60, \"15m\": 120*24*60*60*3, \"1h\": 120*24*60*60*12, \"4h\": 120*24*60*60*12*4, \"8h\": 120*24*60*60*12*8, \"1d\": 120*24*60*60*12*24}\n\nclass Metrics:\n def __init__(self):\n resource = boto3.resource(\n \"dynamodb\", \n config=Config(read_timeout=585, connect_timeout=585)\n )\n self.table_name = \"metrics_test\"\n self.table = resource.Table(self.table_name)\n self.sqs_client = boto3.client('sqs')\n\n def insert(self, event):\n records = event[\"Records\"]\n for record in records:\n timestamp = int(record[\"messageAttributes\"][\"time\"][\"stringValue\"])\n response = self.table.put_item(\n Item={\n 'ticker_interval': record[\"body\"],\n 'time': datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d %H:%M:%S'),\n 'open': Decimal(record[\"messageAttributes\"][\"open\"][\"stringValue\"]),\n 'high': Decimal(record[\"messageAttributes\"][\"high\"][\"stringValue\"]),\n 'low': Decimal(record[\"messageAttributes\"][\"low\"][\"stringValue\"]),\n 'close': Decimal(record[\"messageAttributes\"][\"close\"][\"stringValue\"]),\n 'volume': Decimal(record[\"messageAttributes\"][\"volume\"][\"stringValue\"]),\n 'number_of_trades': int(record[\"messageAttributes\"][\"number_of_trades\"][\"stringValue\"]),\n 'TTL': timestamp + TTL[record[\"messageAttributes\"][\"interval\"][\"stringValue\"]]\n }\n )\n print(f\"RECORD------->: {record}\")\n print(f\"INSERTED: {response}\")\n deletion = self.delete_from_queue(record[\"receiptHandle\"])\n print(f\"DELETED: {deletion}\")\n\n \n def delete_from_queue(self, receipt_handle):\n queue_url = self.sqs_client.get_queue_url(QueueName=\"ticker-ohlcv\")[\"QueueUrl\"]\n response = self.sqs_client.delete_message(\n QueueUrl=queue_url,\n ReceiptHandle=receipt_handle\n )\n return response\n\n\nclass Summary:\n def __init__(self, event):\n self.table_name = \"metrics_summary\"\n resource = boto3.resource(\"dynamodb\", config=Config(read_timeout=585, connect_timeout=585))\n self.table = resource.Table(self.table_name)\n self.event = event\n\n @staticmethod\n def get_ticker_interval(record):\n message_body = record[\"body\"]\n ticker, interval = message_body.rsplit(\"_\", 1)\n \"\"\"\n Todo: Add a loop to insert other metric types (open, high, low, volume, number_of_trades)\n \"\"\"\n interval_metric = interval + \"_\" + \"close\"\n return ticker, interval_metric\n \n @staticmethod\n def get_volume_in_usdt(record):\n volume_in_usdt = Decimal(record[\"messageAttributes\"][\"close\"][\"stringValue\"]) * Decimal(record[\"messageAttributes\"][\"volume\"][\"stringValue\"])\n return volume_in_usdt\n\n def insert_single(self, record):\n volume_in_usdt = self.get_volume_in_usdt(record)\n ticker, interval_metric = self.get_ticker_interval(record)\n try:\n self.table.put_item(\n Item={\"ticker\": ticker, \"interval_metric\": interval_metric, \"volume_in_usdt\": volume_in_usdt},\n ConditionExpression=\"attribute_not_exists(ticker) AND attribute_not_exists(interval_metric)\"\n )\n except botocore.exceptions.ClientError as e:\n # Ignore the ConditionalCheckFailedException, bubble up\n # other exceptions.\n if e.response['Error']['Code'] != 'ConditionalCheckFailedException':\n raise\n\n def insert(self):\n for record in self.event[\"Records\"]:\n self.insert_single(record)\n\ndef lambda_handler(event, context):\n print(\"EVENT: \")\n print(event)\n summary = Summary(event)\n summary.insert()\n \n metrics = Metrics()\n metrics.insert(event)\n"
}
] | 1 |
hayoungc/InstaTwosome | https://github.com/hayoungc/InstaTwosome | 3868dfc2210a377e4ad034e518e556ce0516a4d4 | 7e322cfbe9fc95da5700a77ca746cac39878657d | f2a51384570ff2f0e63570a21b8f5a568560dbd3 | refs/heads/master | 2021-01-21T17:10:30.096294 | 2017-06-25T12:28:51 | 2017-06-25T12:28:51 | 91,939,445 | 2 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6345851421356201,
"alphanum_fraction": 0.6429053544998169,
"avg_line_length": 32.443607330322266,
"blob_id": "17bfaff8fbd29e0c79828864c0996408f7487839",
"content_id": "d9e9ef5589d0706356681ab64456e2bbb6b86a3b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4447,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 133,
"path": "/tag2vec/flags.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "import os\nimport subprocess\nimport tensorflow as tf\n\nflags = tf.app.flags\n\nflags.DEFINE_string(\"save_path\", None, \"Directory to write the model.\")\nflags.DEFINE_string(\n \"train_data\", None,\n \"Training data. E.g., unzipped file http://mattmahoney.net/dc/text8.zip.\")\nflags.DEFINE_string(\n \"eval_data\", None, \"Analogy questions. \"\n \"https://word2vec.googlecode.com/svn/trunk/questions-words.txt.\")\nflags.DEFINE_integer(\"embedding_size\", 200, \"The embedding dimension size.\")\nflags.DEFINE_integer(\n \"epochs_to_train\", 15,\n \"Number of epochs to train. Each epoch processes the training data once \"\n \"completely.\")\nflags.DEFINE_float(\"learning_rate\", 0.025, \"Initial learning rate.\")\nflags.DEFINE_integer(\"num_neg_samples\", 25,\n \"Negative samples per training example.\")\nflags.DEFINE_integer(\"batch_size\", 500,\n \"Numbers of training examples each step processes \"\n \"(no minibatching).\")\nflags.DEFINE_integer(\"concurrent_steps\", 12,\n \"The number of concurrent training steps.\")\nflags.DEFINE_integer(\"window_size\", 5,\n \"The number of words to predict to the left and right \"\n \"of the target word.\")\nflags.DEFINE_integer(\"min_count\", 5,\n \"The minimum number of word occurrences for it to be \"\n \"included in the vocabulary.\")\nflags.DEFINE_float(\"subsample\", 1e-3,\n \"Subsample threshold for word occurrence. Words that appear \"\n \"with higher frequency will be randomly down-sampled. Set \"\n \"to 0 to disable.\")\nflags.DEFINE_boolean(\n \"interactive\", False,\n \"If true, enters an IPython interactive session to play with the trained \"\n \"model. E.g., try model.analogy(b'france', b'paris', b'russia') and \"\n \"model.nearby([b'proton', b'elephant', b'maxwell'])\")\nflags.DEFINE_string(\"emb_data\", None, \"Intial vector data.\")\n\nFLAGS = flags.FLAGS\n\nclass Options(object):\n \"\"\"Options used by our word2vec model.\"\"\"\n\n def __init__(self):\n # Model options.\n\n # Embedding dimension.\n self.emb_dim = FLAGS.embedding_size\n\n # Training options.\n\n # The training text file.\n self.train_data = FLAGS.train_data\n\n # Number of negative samples per example.\n self.num_samples = FLAGS.num_neg_samples\n\n # The initial learning rate.\n self.learning_rate = FLAGS.learning_rate\n\n # Number of epochs to train. After these many epochs, the learning\n # rate decays linearly to zero and the training stops.\n self.epochs_to_train = FLAGS.epochs_to_train\n\n # Concurrent training steps.\n self.concurrent_steps = FLAGS.concurrent_steps\n\n # Number of examples for one training step.\n self.batch_size = FLAGS.batch_size\n\n # The number of words to predict to the left and right of the target word.\n self.window_size = FLAGS.window_size\n\n # The minimum number of word occurrences for it to be included in the\n # vocabulary.\n self.min_count = FLAGS.min_count\n\n # Subsampling threshold for word occurrence.\n self.subsample = FLAGS.subsample\n\n # Where to write out summaries.\n self.save_path = FLAGS.save_path\n\n # initial word embed data\n self.emb_data = FLAGS.emb_data\n\n # Eval options.\n\n # The text file for eval.\n self.eval_data = FLAGS.eval_data\n\n self.interactive = FLAGS.interactive\n\n @classmethod\n def web(cls):\n opts = Options()\n opts.save_path = 'train'\n opts.emb_dim = 100\n opts.interactive = True\n\n emb_data = 'train/model.vec'\n if os.path.isfile(emb_data):\n opts.emb_data = emb_data\n else:\n opts.train_data = 'data/tags.txt'\n\n with open(os.devnull, 'w') as FNULL:\n if subprocess.call(['dir', opts.save_path], shell=True) != 0:\n if subprocess.call(['dir', opts.train_data], shell=True) == 0:\n subprocess.call(['mkdir', opts.save_path])\n else:\n subprocess.call(['wget', 'https://muik-projects.firebaseapp.com/tf/tag2vec-train.tgz'],\n stdout=FNULL)\n subprocess.call(['tar', 'xvfz', 'tag2vec-train.tgz'])\n subprocess.call(['rm', 'tag2vec-train.tgz'])\n\n return opts\n\n @classmethod\n def train(cls):\n opts = Options()\n opts.train_data = 'data/tags.txt'\n opts.save_path = 'train'\n opts.eval_data = 'data/questions-tags.txt'\n opts.window_size = 2\n opts.min_count = 1\n opts.emb_dim = 100\n return opts"
},
{
"alpha_fraction": 0.645550549030304,
"alphanum_fraction": 0.6500754356384277,
"avg_line_length": 23.592592239379883,
"blob_id": "032c43c0f897b108a145464f10e20ae97feadb80",
"content_id": "9fc0e5df326ad92146cb28b7cb3b56a2d1615f77",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 663,
"license_type": "permissive",
"max_line_length": 174,
"num_lines": 27,
"path": "/tag2vec/cache.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "import os\nimport urllib\n\nimport bmemcached\n\n\"\"\"\nfor Heroku memcached\n\"\"\"\nclass MemcachedCache:\n def __init__(self):\n self._cache = bmemcached.Client(os.environ.get('MEMCACHEDCLOUD_SERVERS').split(','), os.environ.get('MEMCACHEDCLOUD_USERNAME'), os.environ.get('MEMCACHEDCLOUD_PASSWORD'))\n\n def set(self, key, value, timeout=0):\n key = self._key(key)\n if timeout > 0:\n self._cache.set(key, value, time=timeout)\n else:\n self._cache.set(key, value)\n\n def get(self, key):\n key = self._key(key)\n return self._cache.get(key)\n\n def _key(self, key):\n if type(key) == unicode:\n key = key.encode('utf-8')\n return urllib.quote(key)"
},
{
"alpha_fraction": 0.6414712071418762,
"alphanum_fraction": 0.6513177156448364,
"avg_line_length": 30.399999618530273,
"blob_id": "3eabce0d630f85da89b0b62467c6818d480bb328",
"content_id": "f41e40b22b4652e7443722ba2c40ccfc21ee1fa9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3453,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 110,
"path": "/tag2vec/web.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "import os\nimport re\nimport logging\nimport json\nimport time\n\nimport tensorflow as tf\nfrom flask import Flask, request, render_template, jsonify, send_from_directory\nfrom flask import Response\nfrom tag2vec_cluster import Word2Vec\nfrom instagram import Instagram\nfrom flags import Options\n\nNEARBY_COUNT = 12\n\ndef get_model():\n opts = Options.web()\n session = tf.Session()\n return Word2Vec(opts, session)\n\napp = Flask(__name__)\nstart_time = time.time()\nmodel = get_model()\nprint(\"--- model load time: %.1f seconds ---\" % (time.time() - start_time))\ninstagram = Instagram()\n\nif os.environ.get('MEMCACHEDCLOUD_SERVERS'):\n from cache import MemcachedCache\n cache = MemcachedCache()\nelse:\n from werkzeug.contrib.cache import SimpleCache\n cache = SimpleCache()\n\[email protected](\"/\", methods=['GET'])\ndef main():\n q = request.args.get('q') or ''\n q = q.strip()\n\n if not q:\n data = {'vocab_size': model.get_vocab_size(), 'emb_dim': model.get_emb_dim() }\n return render_template('index.html', query='', data=data)\n _add_recent_queries(q)\n return query(q)\n\ndef query(q):\n data = {}\n if q.startswith('!'):\n words = q[1:].strip().split()\n data['doesnt_match'] = model.get_doesnt_match(*words)\n else:\n words = q.split()\n count = len(words)\n m = re.search('([^\\-]+)\\-([^\\+]+)\\+(.+)', q)\n if m:\n words = map(lambda x: x.strip(), m.groups())\n data['analogy'] = model.get_analogy(*words)\n elif count == 1 and not q.startswith('-'):\n data['no_words'] = model.get_no_words(words)\n if not data['no_words']:\n data['nearby'] = model.get_nearby([q], [], num=NEARBY_COUNT + count)\n data['tag'] = q\n else:\n negative_words = [word[1:] for word in words if word.startswith('-')]\n positive_words = [word for word in words if not word.startswith('-')]\n data['no_words'] = model.get_no_words(negative_words + positive_words)\n if not data['no_words']:\n data['nearby'] = model.get_nearby(positive_words, negative_words, num=NEARBY_COUNT + count)\n data['tag'] = data['nearby'][0][0]\n data['words'] = words\n return render_template('query.html', query=q, data=data)\n\[email protected](\"/tags/<string:tag_name>/media.js\", methods=['GET'])\ndef tag_media(tag_name):\n key = '/tags/%s/media.js' % tag_name\n data = cache.get(key)\n if not data:\n media = instagram.media(tag_name)\n media = {'media': media[:12]}\n data = json.dumps(media)\n cache.set(key, data, timeout=60*60)\n return Response(response=data, status=200, mimetype='application/json')\n\[email protected](\"/tsne.js\", methods=['GET'])\ndef tsne_js():\n return send_from_directory(model.get_save_path(), 'tsne.js')\n\[email protected](\"/recent_queries\", methods=['GET'])\ndef recent_queries():\n queries = _get_recent_queries()\n return render_template('recent_queries.html', queries=queries)\n\nMAX_RECENT_QUERIES_LENGTH = 500\nKEY_RECENT_QUERIES = 'recent_queries'\n\ndef _add_recent_queries(q):\n recent_queries = cache.get(KEY_RECENT_QUERIES) or ''\n recent_queries += q + '\\n'\n length = len(recent_queries)\n if length > MAX_RECENT_QUERIES_LENGTH:\n index = recent_queries.find('\\n', length - MAX_RECENT_QUERIES_LENGTH)\n recent_queries = recent_queries[index+1]\n cache.set(KEY_RECENT_QUERIES, recent_queries)\n\ndef _get_recent_queries():\n return (cache.get(KEY_RECENT_QUERIES) or '').strip().split('\\n')\n\n\nif __name__ == \"__main__\":\n app.debug = True\n app.run(host=os.getenv('IP', '127.0.0.1'),port=int(os.getenv('PORT', 8080)))"
},
{
"alpha_fraction": 0.5906504392623901,
"alphanum_fraction": 0.6029788851737976,
"avg_line_length": 34.49900817871094,
"blob_id": "70f0991cbf4a3bfb3fb1544dd1efb9a0f7c58b78",
"content_id": "164de3ba1bdfdfe5c5d1f14e2d681fd5379d08e3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 17926,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 505,
"path": "/tag2vec/tag2vec_cluster.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "# Copyright 2015 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\n\"\"\"Multi-threaded word2vec unbatched skip-gram model.\nTrains the model described in:\n(Mikolov, et. al.) Efficient Estimation of Word Representations in Vector Space\nICLR 2013.\nhttp://arxiv.org/abs/1301.3781\nThis model does true SGD (i.e. no minibatching). To do this efficiently, custom\nops are used to sequentially process data within a 'batch'.\nThe key ops used are:\n* skipgram custom op that does input processing.\n* neg_train custom op that efficiently calculates and applies the gradient using\n true SGD.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport threading\nimport time\nimport random\n\nfrom six.moves import xrange # pylint: disable=redefined-builtin\n\nimport numpy as np\nimport tensorflow as tf\nimport pandas as pd\n\nimport tensorflow as tf\n\nfrom tensorflow.models.embedding import gen_word2vec as word2vec\nfrom flags import FLAGS, Options\n\n\nclass Word2Vec(object):\n \"\"\"Word2Vec model (Skipgram).\"\"\"\n def __init__(self, options, session):\n self._options = options\n self._session = session\n self._word2id = {}\n self._id2word = []\n if options.emb_data or options.interactive:\n self.load_emb()\n else:\n self.build_graph()\n self.build_eval_graph()\n if options.eval_data:\n self._read_analogies()\n if not options.emb_data and not options.interactive:\n self.save_vocab()\n if not options.emb_data and options.train_data and not options.interactive:\n self._load_corpus()\n\n def _read_analogies(self):\n \"\"\"Reads through the analogy question file.\n Returns:\n questions: a [n, 4] numpy array containing the analogy question's\n word ids.\n questions_skipped: questions skipped due to unknown words.\n \"\"\"\n questions = []\n questions_skipped = 0\n with open(self._options.eval_data, \"r\", encoding='utf-8') as analogy_f:\n for line in analogy_f:\n # if line.startswith(b\":\"): # Skip comments.\n # continue\n # words = line.decode('utf-8').strip().lower().split(b\" \")\n words = line.strip().lower().split(\" \")\n ids = [self._word2id.get(w.strip()) for w in words]\n if None in ids or len(ids) != 4:\n print (ids)\n questions_skipped += 1\n else:\n questions.append(np.array(ids))\n print(\"Eval analogy file: \", self._options.eval_data)\n print(\"Questions: \", len(questions))\n print(\"Skipped: \", questions_skipped)\n self._analogy_questions = np.array(questions, dtype=np.int32)\n\n def get_no_words(self, words):\n return [word for word in words if word not in self._word2id]\n\n def get_vocab_size(self):\n return self._options.vocab_size\n\n def get_emb_dim(self):\n return self._options.emb_dim\n\n def load_emb(self):\n start_time = time.time()\n opts = self._options\n\n if opts.emb_data:\n with open(opts.emb_data) as f:\n opts.emb_dim = int(f.readline().split()[1])\n self._id2word = pd.read_csv(opts.emb_data, delimiter=' ',\n skiprows=1, header=0, usecols=[0]).values\n self._id2word = np.transpose(self._id2word)[0]\n if self._id2word[0] == '</s>':\n self._id2word[0] = 'UNK'\n else:\n # self._id2word = np.loadtxt(os.path.join(opts.save_path, \"vocab.txt\"),\n # 'str', unpack=True)[0]\n loaded = open(\"./train/vocab.txt\", \"r\", encoding='utf-8')\n data = []\n for line in loaded.readlines():\n data.append(line.replace('\\n','').split(' '))\n loaded.close()\n self._id2word = data\n\n # self._id2word = [str(x).decode('utf-8') for x in self._id2word]\n self._id2word = [str(x) for x in self._id2word]\n for i, w in enumerate(self._id2word):\n self._word2id[w] = i\n opts.vocab_size = len(self._id2word)\n\n if opts.emb_data:\n def initializer(shape, dtype):\n initial_value = pd.read_csv(opts.emb_data, delimiter=' ',\n skiprows=1, header=0, usecols=range(1, opts.emb_dim+1)).values\n if opts.save_path:\n path = os.path.join(opts.save_path, 'tsne.js')\n if not os.path.isfile(path):\n self._export_tsne(initial_value)\n return initial_value\n self._w_in = tf.get_variable('w_in', [opts.vocab_size, opts.emb_dim],\n initializer=initializer)\n else:\n self._w_in = tf.get_variable('w_in', [opts.vocab_size, opts.emb_dim])\n print(\"--- embed data load time: %.1f seconds ---\" % (time.time() - start_time))\n\n def build_graph(self):\n \"\"\"Build the model graph.\"\"\"\n opts = self._options\n\n # The training data. A text file.\n (words, counts, words_per_epoch, current_epoch, total_words_processed,\n examples, labels) = word2vec.skipgram(filename=opts.train_data,\n batch_size=opts.batch_size,\n window_size=opts.window_size,\n min_count=opts.min_count,\n subsample=opts.subsample)\n (opts.vocab_words, opts.vocab_counts,\n opts.words_per_epoch) = self._session.run([words, counts, words_per_epoch])\n opts.vocab_size = len(opts.vocab_words)\n print(\"Data file: \", opts.train_data)\n print(\"Vocab size: \", opts.vocab_size - 1, \" + UNK\")\n print(\"Words per epoch: \", opts.words_per_epoch)\n\n opts.vocab_words = list(map(lambda x: x.decode('utf-8'), opts.vocab_words))\n self._id2word = opts.vocab_words\n for i, w in enumerate(self._id2word):\n self._word2id[w] = i\n\n # Declare all variables we need.\n # Input words embedding: [vocab_size, emb_dim]\n w_in = tf.Variable(\n tf.random_uniform(\n [opts.vocab_size,\n opts.emb_dim], -0.5 / opts.emb_dim, 0.5 / opts.emb_dim),\n name=\"w_in\")\n\n # Global step: scalar, i.e., shape [].\n w_out = tf.Variable(tf.zeros([opts.vocab_size, opts.emb_dim]), name=\"w_out\")\n\n # Global step: []\n global_step = tf.Variable(0, name=\"global_step\")\n\n # Linear learning rate decay.\n words_to_train = float(opts.words_per_epoch * opts.epochs_to_train)\n lr = opts.learning_rate * tf.maximum(\n 0.0001,\n 1.0 - tf.cast(total_words_processed, tf.float32) / words_to_train)\n\n examples = tf.placeholder(dtype=tf.int32) # [N]\n labels = tf.placeholder(dtype=tf.int32) # [N]\n\n # Training nodes.\n inc = global_step.assign_add(1)\n with tf.control_dependencies([inc]):\n train = word2vec.neg_train(w_in,\n w_out,\n examples,\n labels,\n lr,\n vocab_count=opts.vocab_counts.tolist(),\n num_negative_samples=opts.num_samples)\n\n self._w_in = w_in\n self._examples = examples\n self._labels = labels\n self._lr = lr\n self._train = train\n self.step = global_step\n self._epoch = current_epoch\n self._words = total_words_processed\n\n def save_vocab(self):\n \"\"\"Save the vocabulary to a file so the model can be reloaded.\"\"\"\n opts = self._options\n with open(os.path.join(opts.save_path, \"vocab.txt\"), \"w\", encoding='utf-8') as f:\n for i in xrange(opts.vocab_size):\n # f.write(\"%s %d\\n\" % (tf.compat.as_text(opts.vocab_words[i]).encode('utf-8'),\n # opts.vocab_counts[i]))\n f.write(\"%s %d\\n\" % (tf.compat.as_text(opts.vocab_words[i]), opts.vocab_counts[i]))\n\n def build_eval_graph(self):\n \"\"\"Build the evaluation graph.\"\"\"\n # Eval graph\n opts = self._options\n\n # Each analogy task is to predict the 4th word (d) given three\n # words: a, b, c. E.g., a=italy, b=rome, c=france, we should\n # predict d=paris.\n\n # The eval feeds three vectors of word ids for a, b, c, each of\n # which is of size N, where N is the number of analogies we want to\n # evaluate in one batch.\n analogy_a = tf.placeholder(dtype=tf.int32) # [N]\n analogy_b = tf.placeholder(dtype=tf.int32) # [N]\n analogy_c = tf.placeholder(dtype=tf.int32) # [N]\n\n word_ids = tf.placeholder(dtype=tf.int32) # [N]\n negative_word_ids = tf.placeholder(dtype=tf.int32) # [N]\n\n # Normalized word embeddings of shape [vocab_size, emb_dim].\n nemb = tf.nn.l2_normalize(self._w_in, 1)\n\n # Each row of a_emb, b_emb, c_emb is a word's embedding vector.\n # They all have the shape [N, emb_dim]\n a_emb = tf.gather(nemb, analogy_a) # a's embs\n b_emb = tf.gather(nemb, analogy_b) # b's embs\n c_emb = tf.gather(nemb, analogy_c) # c's embs\n\n words_emb = tf.nn.embedding_lookup(nemb, word_ids)\n negative_words_emb = tf.nn.embedding_lookup(nemb, negative_word_ids)\n\n # We expect that d's embedding vectors on the unit hyper-sphere is\n # near: c_emb + (b_emb - a_emb), which has the shape [N, emb_dim].\n target = c_emb + (b_emb - a_emb)\n\n # Compute cosine distance between each pair of target and vocab.\n # dist has shape [N, vocab_size].\n dist = tf.matmul(target, nemb, transpose_b=True)\n self._target = target\n self._dist = dist\n\n # For each question (row in dist), find the top 4 words.\n _, pred_idx = tf.nn.top_k(dist, 4)\n\n mean = tf.reduce_mean(words_emb, 0)\n mean = tf.reshape(mean, [-1, opts.emb_dim])\n mean_dist = 1.0 - tf.matmul(mean, words_emb, transpose_b=True)\n _, self._mean_pred_idx = tf.nn.top_k(mean_dist, 1)\n\n joint_dist = tf.matmul(words_emb, nemb, transpose_b=True)\n n_joint_dist = tf.matmul(negative_words_emb, nemb, transpose_b=True)\n joint_dist = tf.reduce_sum(joint_dist, 0) - tf.reduce_sum(n_joint_dist, 0)\n self._joint_idx = tf.nn.top_k(joint_dist, min(1000, opts.vocab_size))\n\n # Nodes in the construct graph which are used by training and\n # evaluation to run/feed/fetch.\n self._analogy_a = analogy_a\n self._analogy_b = analogy_b\n self._analogy_c = analogy_c\n self._word_ids = word_ids\n self._negative_word_ids = negative_word_ids\n self._analogy_pred_idx = pred_idx\n\n ckpt = None\n self.saver = tf.train.Saver()\n if not opts.emb_data:\n ckpt = tf.train.latest_checkpoint(os.path.join(opts.save_path))\n if ckpt:\n self.saver.restore(self._session, ckpt)\n print('loaded %s' % ckpt)\n else:\n # Properly initialize all variables.\n # self._session.run(tf.initialize_all_variables())\n self._session.run(tf.global_variables_initializer())\n\n def _load_corpus(self):\n corpus = []\n with open(self._options.train_data, 'r', encoding='utf-8') as f:\n unk_id = self._word2id['UNK']\n def word2id(w):\n return w in self._word2id and self._word2id[w] or unk_id\n while True:\n # line = f.readline().decode('utf-8')\n line = f.readline()\n if not line:\n break\n corpus.append([word2id(w) for w in line.split()])\n self._corpus = corpus\n self._corpus_lines_count = len(corpus)\n\n def _batch_data(self):\n examples = []\n labels = []\n batch_size = self._options.batch_size\n window_size = self._options.window_size\n unk_id = self._word2id['UNK']\n count = 0\n while True:\n line = self._corpus[random.randrange(0,self._corpus_lines_count)]\n words_count = len(line)\n for i, center_id in enumerate(line):\n
if center_id == unk_id:\n continue\n start_index = max(0, i-window_size)\n end_index = min(words_count, i + 1 + window_size)\n outputs = line[start_index:end_index]\n outputs = list(filter(lambda x: x != unk_id and x != center_id, outputs))\n outputs_count = len(outputs)\n examples += [center_id] * outputs_count\n labels += outputs\n count += outputs_count\n if count >= batch_size:\n return examples[:batch_size], labels[:batch_size]\n\n def _train_thread_body(self):\n initial_epoch, = self._session.run([self._epoch])\n while True:\n examples, labels = self._batch_data()\n _, epoch = self._session.run([self._train, self._epoch], {\n self._examples: examples,\n self._labels: labels\n })\n if epoch != initial_epoch:\n break\n# time.sleep(0.02) # for preventing notebook noise\n\n def train(self):\n \"\"\"Train the model.\"\"\"\n opts = self._options\n\n initial_epoch, initial_words = self._session.run([self._epoch, self._words])\n\n workers = []\n for _ in xrange(opts.concurrent_steps):\n t = threading.Thread(target=self._train_thread_body)\n t.start()\n workers.append(t)\n\n last_words, last_time = initial_words, time.time()\n while True:\n time.sleep(2) # Reports our progress once a while.\n (epoch, step, words,\n lr) = self._session.run([self._epoch, self.step, self._words, self._lr])\n now = time.time()\n last_words, last_time, rate = words, now, (words - last_words) / (\n now - last_time)\n print(\"Epoch %4d Step %8d: lr = %5.3f words/sec = %8.0f\\r\" % (epoch, step,\n lr, rate),\n end=\"\")\n sys.stdout.flush()\n if epoch != initial_epoch:\n break\n\n for t in workers:\n t.join()\n\n def _predict(self, analogy):\n \"\"\"Predict the top 4 answers for analogy questions.\"\"\"\n idx, = self._session.run([self._analogy_pred_idx], {\n self._analogy_a: analogy[:, 0],\n self._analogy_b: analogy[:, 1],\n self._analogy_c: analogy[:, 2]\n })\n return idx\n\n def eval(self):\n \"\"\"Evaluate analogy questions and reports accuracy.\"\"\"\n\n # How many questions we get right 
at precision@1.\n correct = 0\n\n total = self._analogy_questions.shape[0]\n start = 0\n while start < total:\n limit = start + 2500\n sub = self._analogy_questions[start:limit, :]\n idx = self._predict(sub)\n start = limit\n for question in xrange(sub.shape[0]):\n for j in xrange(4):\n if idx[question, j] == sub[question, 3]:\n # Bingo! We predicted correctly. E.g., [italy, rome, france, paris].\n correct += 1\n break\n elif idx[question, j] in sub[question, :3]:\n # We need to skip words already in the question.\n continue\n else:\n # The correct label is not the precision@1\n break\n accuracy = correct * 100.0 / total\n print()\n print(\"Eval %4d/%d accuracy = %4.1f%%\" % (correct, total, accuracy))\n return accuracy\n\n def get_nearby(self, words, negative_words, num=20):\n wids = [self._word2id.get(w, 0) for w in words]\n n_wids = [self._word2id.get(w, 0) for w in negative_words]\n idx = self._session.run(self._joint_idx, {\n self._word_ids: wids,\n self._negative_word_ids: n_wids\n })\n results = []\n for distance, i in zip(idx[0][:num], idx[1][:num]):\n if i in wids:\n continue\n results.append((self._id2word[i], distance))\n return results\n\n def doesnt_match(self, *words):\n wids = [self._word2id.get(w, 0) for w in words]\n idx, = self._session.run(self._mean_pred_idx, {\n self._word_ids: wids\n })\n print(words[idx[0]])\n return\n\n def get_doesnt_match(self, *words):\n wids = [self._word2id.get(w, 0) for w in words]\n idx, = self._session.run(self._mean_pred_idx, {\n self._word_ids: wids\n })\n return words[idx[0]]\n\n def get_analogy(self, w0, w1, w2):\n \"\"\"Predict word w3 as in w0:w1 vs w2:w3.\"\"\"\n wid = np.array([[self._word2id.get(w, 0) for w in [w0, w1, w2]]])\n idx = self._predict(wid)\n for c in [self._id2word[i] for i in idx[0, :]]:\n if c not in [w0, w1, w2, 'UNK']:\n return c\n return\n\n def save(self):\n opts = self._options\n self.saver.save(self._session, os.path.join(opts.save_path, \"model.ckpt\"))\n all_embs = 
self._session.run(self._w_in)\n self._export_tsne(all_embs)\n print('Saved')\n\n def _export_tsne(self, all_embs):\n from sklearn.manifold import TSNE\n import json\n tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)\n plot_only = min(500, all_embs.shape[0])\n low_dim_embs = tsne.fit_transform(all_embs[:plot_only,:])\n labels = [self._id2word[i] for i in xrange(plot_only)]\n embs = [list(e) for e in low_dim_embs]\n json_data = json.dumps({'embs': embs, 'labels': labels})\n path = os.path.join(self._options.save_path, 'tsne.js')\n with open(path, 'w') as f:\n f.write(json_data)\n print('%s exported' % path)\n\n def get_save_path(self):\n return self._options.save_path\n\n\ndef main(_):\n \"\"\"Train a word2vec model.\"\"\"\n opts = Options().train()\n if not opts.train_data and opts.eval_data:\n with tf.Graph().as_default(), tf.Session() as session:\n model = Word2Vec(opts, session)\n model.eval() # Eval analogies.\n return\n\n if not opts.train_data or not opts.save_path or not opts.eval_data:\n print(\"--train_data --eval_data and --save_path must be specified.\")\n sys.exit(1)\n\n with tf.Graph().as_default(), tf.Session() as session:\n model = Word2Vec(opts, session)\n for i in xrange(opts.epochs_to_train):\n model.train() # Process one epoch\n accuracy = model.eval() # Eval analogies.\n if (i+1) % 5 == 0:\n model.save()\n if opts.epochs_to_train % 5 != 0:\n model.save()\n\n\nif __name__ == \"__main__\":\n tf.app.run()"
},
{
"alpha_fraction": 0.6824712753295898,
"alphanum_fraction": 0.704023003578186,
"avg_line_length": 25.769229888916016,
"blob_id": "dbc7dc2aeb79fed20a89527749964bf11171c5bf",
"content_id": "020c20dabd7e19f0104baf3166b3df6c61924e9c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 756,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 26,
"path": "/nlp.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "from konlpy.corpus import kobill\nfrom konlpy.tag import Twitter; t = Twitter()\n\nfrom matplotlib import font_manager, rc\nfont_fname = 'C:\\Windows\\Fonts\\malgun.ttf' # A font of your choice\nfont_name = font_manager.FontProperties(fname=font_fname).get_name()\nrc('font', family=font_name)\n\n# -*- coding: utf-8 -*-\n\nimport nltk\n#nltk.download()\n\nfiles_ko = kobill.fileids()\ndoc_ko = kobill.open('1809890.txt').read()\ntokens_ko = t.morphs(doc_ko)\n\nko = nltk.Text(tokens_ko, name='대한민국 국회 의안 제 1809890호')\nko.collocations()\n\ntags_ko = t.pos(\"작고 노란 강아지가 페르시안 고양이에게 짖었다\")\nprint (tags_ko)\n\nparser_ko = nltk.RegexpParser(\"NP: {<Adjective>*<Noun>*}\")\nchunks_ko = parser_ko.parse(tags_ko)\nchunks_ko.draw()\n"
},
{
"alpha_fraction": 0.42299315333366394,
"alphanum_fraction": 0.4894212782382965,
"avg_line_length": 33.30769348144531,
"blob_id": "853198427b28832167467c7ea01d8543d9556af9",
"content_id": "8e804145227ade2e916dcb80df1a932ac8b5685e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6428,
"license_type": "permissive",
"max_line_length": 134,
"num_lines": 182,
"path": "/export_csv.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "import pymongo, re, datetime\r\nimport numpy as np\r\nfrom operator import itemgetter\r\n\r\nclient = pymongo.MongoClient('mongodb://ubicomp:[email protected]:27017/data')\r\ndb = client.get_default_database()\r\ncollections = db.collection_names(include_system_collections=False)\r\n\r\n# O is study, 1 is social\r\nSTUDY = 0\r\nSOCIAL = 1\r\nWINDOW_SIZE = 5 # Minute\r\nWINDOW_LEN = 60 * WINDOW_SIZE # CUT dataX by unit of WINDOW\r\nPADDING = 0.01\r\n\r\nobservation = [ [1496383360000, 15, {'M21':STUDY, 'M25':STUDY, 'M22':STUDY}], # 6/2 15:03\r\n [1496557950000, 20, {'M21':STUDY, 'M25':STUDY, 'M26':STUDY}],# 6/4 15:32\r\n [1496661360000, 18, {'M21':STUDY, 'M22':SOCIAL, 'M11':STUDY, 'M26':SOCIAL}],# 6/5 20:16\r\n [1496754300000, 16, {'M11':STUDY, 'M25':STUDY, 'M12':STUDY,\r\n 'M23':SOCIAL}], # 6/6 22:05\r\n [1496811240000, 20, {'M12':SOCIAL, 'M22':SOCIAL}], # 6/7 13:54\r\n [1496812740000, 40, {'M15':SOCIAL}], # 6/7 14:19\r\n [1496818440000, 20, {'M21':STUDY, 'M22':STUDY}], # 6/7 15:54\r\n [1496983140000, 15, {'M12':SOCIAL, 'M25':STUDY, 'M15':STUDY}], # 6/9 13:39\r\n [1496988540000, 15, {'M11':STUDY, 'M26':SOCIAL, 'M15':STUDY, 'M13':STUDY}], # 6/9 15:09\r\n [1496989440000, 15, {'M11':STUDY, 'M22':STUDY, 'M15':STUDY, 'M13':STUDY}], # 6/9 15:24\r\n [1497241680000, 15, {'M21':STUDY, 'M11':STUDY, 'M26':STUDY}], # 6/12 13:28\r\n [1497263200000, 20, {'M26':SOCIAL}] # 6/12 19:30\r\n ]\r\n\r\nsorted(observation, key=itemgetter(0))\r\n\r\ndef create_dataset():\r\n raw_dataX = []\r\n raw_dataY = []\r\n\r\n for collection in collections:\r\n if collection != 'N1TwosomePlace_data':\r\n continue\r\n\r\n col = db[collection]\r\n doc = col.find(\r\n filter={\"$and\": [\r\n {\"name\": re.compile('^M1|^M2')},\r\n {\"timestamp\": {\r\n \"$gte\":1496361600000}}\r\n ]},\r\n projection={'name': True,\r\n 'value': True,\r\n 'timestamp': True}\r\n )\r\n\r\n if doc is not None:\r\n len_observation = len(observation)\r\n obs_index = 0\r\n\r\n sensors = 
list(observation[obs_index][2].keys())\r\n temp_value = dict.fromkeys(list(observation[obs_index][2].keys()))\r\n\r\n for sensor in sensors:\r\n temp_value[sensor] = [[observation[obs_index][0], PADDING]]\r\n\r\n for d in doc:\r\n leftBound = observation[obs_index][0]\r\n rightBound = leftBound + observation[obs_index][1] * 60000\r\n\r\n if leftBound > d['timestamp']:\r\n continue\r\n\r\n if d['name'] in sensors:\r\n if d['timestamp'] < rightBound:\r\n temp_value[d['name']].append([d['timestamp'], 1 if d['value'] else 0])\r\n else:\r\n continue\r\n\r\n if d['timestamp'] > rightBound:\r\n for sensor in sensors:\r\n temp_value[sensor].append([rightBound, PADDING])\r\n\r\n # SAVE temp_value to raw_dataX, raw_dataY\r\n for i in range(len(sensors)):\r\n if observation[obs_index][2][sensors[i]] == STUDY:\r\n raw_dataX.append(temp_value[sensors[i]])\r\n raw_dataY.append([1, 0])\r\n else:\r\n raw_dataX.append(temp_value[sensors[i]])\r\n raw_dataY.append([0, 1])\r\n\r\n obs_index += 1\r\n\r\n if (obs_index == len_observation):\r\n break\r\n\r\n sensors = list(observation[obs_index][2].keys())\r\n temp_value = dict.fromkeys(list(observation[obs_index][2].keys()))\r\n for sensor in sensors:\r\n temp_value[sensor] = [[observation[obs_index][0], PADDING]]\r\n\r\n continue\r\n\r\n # newFormat = datetime.datetime.fromtimestamp(d['timestamp'] / 1000.0)\r\n # f.write(\"%s,%s,%s,%s,%s,%s,%d\\n\" %\r\n # (d['name'], newFormat.month, newFormat.day, newFormat.hour, newFormat.minute, newFormat.second, d['value']))\r\n\r\n # print (raw_dataX)\r\n return raw_dataX, raw_dataY\r\n\r\n\r\ndef refine_dataset(raw_dataX, raw_dataY, WINDOW_LEN):\r\n # ADD padding 0.5 to raw_dataX\r\n dataX = []\r\n\r\n for dataset in raw_dataX:\r\n temp = []\r\n for i in range(len(dataset)):\r\n if i == len(dataset)-1:\r\n break\r\n\r\n t1 = dataset[i][0] // 1000\r\n t2 = dataset[i+1][0] // 1000\r\n\r\n t_gap = t2 - t1\r\n num_pad = t_gap - 1\r\n\r\n if t_gap == 0: # 0, 1 were delivered at the same time\r\n 
dataset[i+1][0] = dataset[i+1][0] + 1000\r\n\r\n temp.append(dataset[i][1])\r\n if num_pad >= 1:\r\n temp += num_pad * [PADDING] # For padding EMPTY time-series\r\n # else:\r\n # print(t1, t2)\r\n\r\n dataX.append(np.asarray(temp))\r\n # print (len(dataX))\r\n\r\n # WINDOW SHIFT\r\n train_X = []\r\n train_Y = []\r\n\r\n for k in range(len(dataX)):\r\n dataset = dataX[k]\r\n for i in range(len(dataset) - WINDOW_LEN - 1):\r\n a = dataset[i:(i + WINDOW_LEN)]\r\n train_X.append(a)\r\n train_Y.append(raw_dataY[k])\r\n\r\n return np.asarray(train_X), np.asarray(train_Y)\r\n\r\n\r\ndef load_dataset(WINDOW_LEN):\r\n rawX, rawY = create_dataset()\r\n train_X, train_Y = refine_dataset(rawX, rawY, WINDOW_LEN)\r\n\r\n return (train_X, train_Y)\r\n\r\n\r\ndef class_selection(trainPredict, testPredict):\r\n resultTrain = []\r\n resultTest = []\r\n\r\n for predict in trainPredict:\r\n if predict[0] > predict[1]:\r\n resultTrain.append([1, 0])\r\n else:\r\n resultTrain.append([0, 1])\r\n\r\n for predict in testPredict:\r\n if predict[0] > predict[1]:\r\n resultTest.append([1, 0])\r\n else:\r\n resultTest.append([0, 1])\r\n\r\n return np.asarray(resultTrain), np.asarray(resultTest)\r\n\r\n\r\nif __name__ == '__main__':\r\n rawX, rawY = create_dataset()\r\n train_X, train_Y = refine_dataset(rawX, rawY, WINDOW_LEN)\r\n\r\n print(train_X.shape)\r\n print(train_Y.shape)\r\n\r\n"
},
{
"alpha_fraction": 0.5701125860214233,
"alphanum_fraction": 0.57932448387146,
"avg_line_length": 29.5625,
"blob_id": "5feea4a1c8a4eb4b7f48b39998fae0676e929406",
"content_id": "72d3c9858000189371e7b66842c816e92b702042",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 981,
"license_type": "permissive",
"max_line_length": 126,
"num_lines": 32,
"path": "/tag2vec/instagram.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "#-*- coding: utf-8 -*-\nimport re\nimport json\nimport logging\nimport urllib3\n\nclass Instagram:\n def __init__(self):\n pass\n\n def media(self, tag):\n url = 'https://www.instagram.com/explore/tags/%s/' % tag\n response = urllib2.urlopen(url.encode('utf-8'))\n html = response.read()\n return self.parse(html)\n\n def parse(self, content):\n s = content.index('{\"country_code\":')\n e = content.index(';</script>', s)\n dumps = content[s:e]\n obj = json.loads(dumps)\n nodes = obj['entry_data']['TagPage'][0]['tag']['top_posts']['nodes'] \\\n + obj['entry_data']['TagPage'][0]['tag']['media']['nodes']\n \"\"\"\n print(obj['entry_data']['TagPage'][0]['tag']['top_posts'].keys()) # [u'media', u'content_advisory', u'top_posts', u'name']\n print(obj['entry_data']['TagPage'][0]['tag']['media'].keys()) # [u'count', u'page_info', u'nodes']\n \"\"\"\n return nodes\n\nif __name__ == \"__main__\":\n media = Instagram().media(u'맛집')\n print(media[0]['date'])"
},
{
"alpha_fraction": 0.46535277366638184,
"alphanum_fraction": 0.47309213876724243,
"avg_line_length": 34.050472259521484,
"blob_id": "f0cda2f99711d21d648647b7329c15007a7232eb",
"content_id": "94c2df8d1c1a35ce67f3658d8c3839e2ca1c6bd1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12486,
"license_type": "permissive",
"max_line_length": 109,
"num_lines": 317,
"path": "/word2vec_cluster.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport argparse\nimport time\nimport warnings\nwarnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')\nimport numpy as np\nimport json\nimport io, os, re\n\nfrom gensim.models import Word2Vec, KeyedVectors\nfrom sklearn.cluster import KMeans\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom collections import defaultdict\nfrom konlpy.tag import Twitter; t = Twitter()\n\nposts = []\ndocs = []\nnum_task = 3\n\nclass MeanEmbeddingVectorizer(object):\n def __init__(self, word2vec):\n self.word2vec = word2vec\n # if a text is empty we should return a vector of zeros\n # with the same dimensionality as all the other vectors\n self.dim = len(word2vec.values())\n\n def fit(self, X, y):\n return self\n\n def transform(self, X):\n return np.array([\n np.mean([self.word2vec[w] for w in words if w in self.word2vec]\n or [np.zeros(self.dim)], axis=0)\n for words in X\n ])\n\n\nclass TfidfEmbeddingVectorizer(object):\n def __init__(self, word2vec):\n self.word2vec = word2vec\n self.word2weight = None\n self.dim = len(word2vec.values())\n\n def fit(self, X, y):\n tfidf = TfidfVectorizer(analyzer=lambda x: x)\n tfidf.fit(X)\n # if a word was never seen - it must be at least as infrequent\n # as any of the known words - so the default idf is the max of\n # known idf's\n max_idf = max(tfidf.idf_)\n self.word2weight = defaultdict(\n lambda: max_idf,\n [(w, tfidf.idf_[i]) for w, i in tfidf.vocabulary_.items()])\n\n return self\n\n def transform(self, X):\n return np.array([\n np.mean([self.word2vec[w] * self.word2weight[w]\n for w in words if w in self.word2vec] or\n [np.zeros(self.dim)], axis=0)\n for words in X\n ])\n\n\ndef readConfig(filename):\n f = open(filename, 'r', encoding='utf-8')\n js = json.loads(f.read())\n f.close()\n return js\n\n\ndef jsonParsing():\n rep = {} # define desired replacements here\n\n # use 
these three lines to do the replacement\n rep = dict((re.escape(k), v) for k, v in rep.items())\n pattern = re.compile(\"|\".join(rep.keys()))\n # text = pattern.sub(lambda m: rep[re.escape(m.group(0))], text)\n\n for i in range(num_task):\n docs.append([])\n\n # docs = open(\"./crawled.txt\", \"w\", encoding='utf-8')\n for root, dirs, files in os.walk('./data/'):\n for fname in files:\n with io.open('./data/' + fname, encoding=\"utf-8\") as infile:\n data = json.load(infile)\n temp_study = \"\"\n temp_date = \"\"\n temp_rest = \"\"\n\n for d in data:\n if ('tags' not in d) or (len(d['tags']) == 0):\n continue\n\n posts.append(\" \".join(d['tags']))\n\n if any(word in '공부 시험 중간고사 기말고사 중간고사 시험 보고서 레포트 고시생 고시' \\\n '학원' for word in d['tags']):\n temp_study += \" \".join(d['tags'])\n # + '\\n'\n\n if any(word in '럽스타그램 남자친구 사랑해 여자친구 데이트 달달 여친 남친 데이뚜 서방님' for word in d['tags']):\n temp_date += \" \".join(d['tags'])\n # + '\\n'\n\n if any(word in '휴식 여유 힐링 일요일 빈둥 노가리 rest ' for word in d['tags']):\n temp_rest += \" \".join(d['tags'])\n # + '\\n'\n\n if temp_study:\n docs[0].append(temp_study)\n if temp_date:\n docs[1].append(temp_date)\n if temp_rest:\n docs[2].append(temp_rest)\n # docs.write(pattern.sub(lambda m: rep[re.escape(m.group(0))], d['caption'])+'\\n')\n\n\ndef make_model():\n # load\n global posts\n from konlpy.corpus import kobill\n # docs_ko = [kobill.open(i).read() for i in kobill.fileids()]\n docs_ko = []\n for i in range(num_task):\n docs_ko += docs[i]\n # print (docs_ko)\n\n # Tokenize\n from konlpy.tag import Twitter\n t = Twitter()\n pos = lambda d: ['/'.join(p) for p in t.pos(d)]\n texts_ko = [pos(doc) for doc in docs_ko]\n # print (texts_ko)\n\n # train\n from gensim.models import word2vec\n wv_model_ko = word2vec.Word2Vec(texts_ko)\n wv_model_ko.init_sims(replace=True)\n\n wv_model_ko.save('ko_word2vec_e.model')\n\n\ndef make_input():\n inputs = []\n tag_only = []\n\n # TRANSLATION\n rep = {\"\\n\": \" \", \"#\": \" \", \"ㅋ\": \"\", \"ㅎ\": \"\", 
\"・\": \"\", \"투썸플레이스\": \"\", \"twosomeplace\": \"\", \"투썸\": \"\", \\\n \"couple\": \"커플\", \"love\": \"러브\", \"daily\": \"일상\", \"fashion\": \"패션\", \"photo\": \"사진\", \"peace\": \"평화\", \\\n \"happy\": \"행복\", \"birthday\": \"생일\", \"present\": \"선물\", \"sweet\": \"달콤한\", \"yummy\": \"맛있어\", \"Exam\": \"시험\", \\\n \"exam\": \"시험\", \"study\": \"공부\", \"friend\": \"친구\", \"library\": \"도서관\", \"travel\": \"여행\", \"brother\": \"형제\", \\\n \"sister\": \"자매\", \"family\": \"가족\", \"ootd\": \"패션\", \"dailylook\": \"데일리룩\", \"fashion\": \"패션\"\n } # define desired replacements here\n\n # use these three lines to do the replacement\n rep = dict((re.escape(k), v) for k, v in rep.items())\n pattern = re.compile(\"|\".join(rep.keys()))\n\n for root, dirs, files in os.walk('./json/'):\n for fname in files:\n with io.open('./json/' + fname, encoding=\"utf-8\") as infile:\n data = json.load(infile)\n\n for d in data:\n if ('tags' not in d) or (len(d['tags']) == 0):\n continue\n tag_only.append(\" \".join(d['tags']))\n\n if ('caption' in d) and len(d['caption']) > 0:\n text = pattern.sub(lambda m: rep[re.escape(m.group(0))], d['caption'])\n\n refined = \"\"\n for c in text:\n if c.isalnum() and not c.isdigit():\n refined += c\n elif len(refined) > 0 and refined[-1] != \" \":\n refined += \" \"\n\n # print (refined)\n inputs.append(refined)\n\n return inputs, tag_only\n\nif __name__ == \"__main__\":\n # jsonParsing()\n # make_model()\n\n w2v_model = Word2Vec.load('./ko.bin')\n w2v = dict(zip(w2v_model.wv.index2word, w2v_model.wv.syn0))\n\n etree_w2v = Pipeline([\n (\"word2vec vectorizer\", MeanEmbeddingVectorizer(w2v)),\n (\"extra trees\", ExtraTreesClassifier(n_estimators=200))])\n\n etree_w2v_tfidf = Pipeline([\n (\"word2vec vectorizer\", TfidfEmbeddingVectorizer(w2v)),\n (\"extra trees\", ExtraTreesClassifier(n_estimators=200))])\n\n\n # Fit for classifier\n # X = [['시험', '기간', '공부', '책', '논문', '연구', '과제', '학점', '도서관', '학기'],\n # ['러브', '데이트', '커플', '신랑', '신부', '하트', 
'선물'],\n # ['혼자', '혼', '홀로', '잉여', '고독', '공허', '직장인', '일기'],\n # ['친', '너', '친구', '수다', '우정', '죽마고우', '불알', '멤버', '합체', '브로', '학년', '모임', \\\n # '가족', '파티', '계', '팟', '형', '맥주', '만남', '동창', '셋'],\n # ['베이비', '아가', '육아', '맘', '아기', '임산부', '아이', '아들', '딸', '엄마'],\n # ['이벤트', '오픈', '예정', '할인', '지하', '뒷편', '샵', '층', '무료', '몽블랑', '카카오', '아디다스', \\\n # '슈퍼스타']]\n\n X = [['시험', '기간', '독서', '과제', '책', '공부', '영어'],\n ['러브', '데이트', '사랑', '커플'],\n ['패션', '데일리룩', '여유', '혼자'],\n ['친구', '친', '수다', '맥주', '불금', '우정'],\n ['육아', '딸', '가족', '맘'],\n ['아디다스']]\n\n y = ['공부', '데이트', '혼자', '소셜', '가족', '광고']\n\n ### THIS is for 1-dim array of X data ###\n keywords = []\n for i in range(len(X)):\n for j in range(len(X[i])):\n keywords.append(X[i][j])\n\n stop_words = ['그램', '첫줄', '대전', '땜', '카페', '점', '대서문', '둔산동', '대전대', '유성', \\\n '아메리카노', '맛스타', '스타', '아이스', '초코', '모카', '치즈케이크', '여행', '동유럽', '휴일', \\\n '크림', '케이크', '커피', '투썸플레이스', '오늘', '크리', '국내', '맞팔', '팔로우', \\\n '스', '서문', '대전대학교', '끝', '일요일', '월요일', '화요일', '수요일', '목요일', \\\n '금요일', '토요일', '주말', '투썸', '콜드', '브루', '그린티', '라떼', '데', '궁', '시렁', '마트', \\\n '하니', '지니', '스트로베리', '요거트', '프라', '페', '가나', '슈', '타', '티', '세트', '아메', '댓', \\\n '청주', '씨제이', '푸드', '현빈', '치즈', '베리', '다이어트', '하리', '브라징', '대학교', '아이스크림', \\\n '크리스마스', '오해', '거', '바', '봄', '못', '어요', '올해', '또', '푸딩', '스페셜', '딸기', '저녁', \\\n '아침', '오후', '오전', '충', '바닐라', '샌드위치', '디저트', '일리', '간식', '만세', '얼', '셀', '연휴', \\\n '급', '칩', '짱', '시', '덕분', '날', '청대', '중앙', '청', '램', '트라', '세이', '블', '땅콩', '꽃', \\\n '이동', '스물', '하나', '불', '코코아', '뉴욕', '볼', '새', '순', '소통', '니트로', '배려', '예', '별', \\\n '오오', '일상', '천국', '이야기', '후', '퇴근', '향', '즌', '프리', '주', '닉', '점심']\n\n etree_w2v.fit(X, y)\n etree_w2v_tfidf.fit(X, y)\n\n caption, tag_only = make_input()\n # print (tag_only)\n f = open(\"./data/tags.txt\", \"w\", encoding=\"utf-8\")\n for post in tag_only:\n f.write(post+\"\\n\")\n f.close()\n\n from konlpy.tag import Twitter\n t = Twitter()\n pos = lambda d: [p[0] for p in t.pos(d) if p[1] 
== \"Noun\"]\n texts_ko = [pos(doc) for doc in tag_only]\n texts_ko_cap = [pos(doc) for doc in caption]\n\n refined = []\n for i in range(len(texts_ko)):\n flag = 0\n for tag in texts_ko[i]:\n if (tag in w2v_model.wv.vocab) and (not tag in stop_words):\n flag = 1\n break\n\n if flag == 1:\n after_stop = []\n\n for tag in texts_ko[i]:\n if (tag in w2v_model.wv.vocab) and (not tag in stop_words):\n after_stop.append(tag)\n\n for tag in texts_ko_cap[i]:\n if tag in keywords and (not tag in after_stop):\n after_stop.append(tag)\n\n refined.append(after_stop)\n\n ### Replacement for word2vec dictionary ###\n ### Replaced words MUST be a word in word2vec Model ###\n for tag in refined:\n for i in range(len(tag)):\n if tag[i] == '럽': tag[i] = '러브'\n if tag[i] == '예랑': tag[i] = '신랑'\n if tag[i] == '예신': tag[i] = '신부'\n\n\n if tag[i] == '남친': tag[i] = '애인'\n if tag[i] == '여친': tag[i] = '애인'\n if tag[i] == '중간고사': tag[i] = '시험'\n if tag[i] == '기말고사': tag[i] = '시험'\n if tag[i] == '오오티디': tag[i] = '패션'\n if tag[i] == '북': tag[i] = '책'\n if tag[i] == '혼': tag[i] = '혼자'\n\n outputs1 = etree_w2v.predict(refined)\n outputs1_prob = etree_w2v.predict_proba(refined)\n\n outputs2 = etree_w2v_tfidf.predict(refined)\n\n\n ### Binary classification for UNCLASSIFIED class ###\n for i in range(len(refined)):\n flag = 0\n for tag in refined[i]:\n if tag in keywords:\n flag = 1\n break\n\n if flag == 0:\n outputs1[i] = \"미분류\"\n outputs2[i] = \"미분류\"\n\n for i in range(len(refined)):\n #print ((refined[i], outputs1[i], outputs2[i]))\n print ((refined[i], outputs2[i]), outputs1_prob[i])\n\n"
},
{
"alpha_fraction": 0.7231683135032654,
"alphanum_fraction": 0.7477227449417114,
"avg_line_length": 32.66666793823242,
"blob_id": "a7afbc2cd824b02ff16035e5da0ccd9743df7198",
"content_id": "f24fa6ebdcad37673a53fa53532fba1e3dc100c0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2525,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 75,
"path": "/kerasLSTM.py",
"repo_name": "hayoungc/InstaTwosome",
"src_encoding": "UTF-8",
"text": "import numpy\nimport matplotlib.pyplot as plt\nimport pandas\nimport math\nfrom keras import optimizers\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nfrom keras.layers import LSTM\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split\n\nimport pymongo, re, datetime\nfrom export_csv import load_dataset, class_selection\n\n# fix random seed for reproducibility\nseed = 7\nnumpy.random.seed(7)\n\n# load the dataset\nWINDOW_LEN = 400\nlook_back = WINDOW_LEN\ndataset, label = load_dataset(WINDOW_LEN)\n\ntrainX, testX, trainY, testY = train_test_split(dataset, label, test_size=0.33, random_state=seed)\n# split into train and test sets\n# train_size = int(len(dataset) * 0.8)\n# test_size = len(dataset) - train_size\n# trainX, testX = dataset[0:train_size,:], dataset[train_size:len(dataset),:]\n# trainY, testY = label[0:train_size,:], label[train_size:len(dataset),:]\n\n# reshape input to be [samples, time steps, features]\ntrainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))\ntestX = numpy.reshape(testX, (testX.shape[0], 1, testX.shape[1]))\n# trainY = numpy.reshape(trainY, (trainY.shape[0], 1, trainY.shape[1]))\n# testY = numpy.reshape(testY, (testY.shape[0], 1, testY.shape[1]))\n\n# create and fit the LSTM network\nmodel = Sequential()\nmodel.add(LSTM(120, input_shape=(1, look_back), dropout=10))\nmodel.add(Dense(2, activation='softmax'))\nrmsdrop = optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)\nadagrad = optimizers.Adagrad(lr=0.01, epsilon=1e-08, decay=0.0)\nmodel.compile(loss='mean_squared_error', optimizer=adagrad)\n# model.compile(loss='mean_squared_error', optimizer='adam')\nmodel.fit(trainX, trainY, validation_data=(testX, testY), epochs=200, batch_size=64, verbose=2)\n\n# make predictions\ntrainPredict = model.predict(trainX)\ntestPredict = 
model.predict(testX)\n\ntrainPredict, testPredict = class_selection(trainPredict, testPredict)\n\nprint(testY)\nprint(testPredict)\n\n# calculate accuracy\ntrain_total = len(trainPredict)\ntest_total = len(testPredict)\ntrain_correct = 0\ntest_correct = 0\n\nfor x,y in zip(trainY, trainPredict):\n\tif numpy.array_equal(x, y):\n\t\ttrain_correct+=1\n\nfor x,y in zip(testY, testPredict):\n\tif numpy.array_equal(x, y):\n\t\ttest_correct+=1\n\ntrainScore = train_correct / train_total\nprint('Train Accuracy: %.2f ' % (trainScore))\ntestScore = test_correct / test_total\nprint('Test Accuracy: %.2f ' % (testScore))\n"
}
] | 9 | QRT-Solutions/Stoploss_clean | https://github.com/QRT-Solutions/Stoploss_clean | 727685de013a4f96ae6fc07e8b07793313502c13 | 043971d3a19d114dba2f5d75f3ca9fdb8590af04 | 97a689cf2bb31c94fbdbfd4e79c79c05085eafae | refs/heads/main | 2023-05-29T12:36:41.989315 | 2021-06-01T16:29:59 | 2021-06-01T16:29:59 | 372,889,318 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6535197496414185,
"alphanum_fraction": 0.6849725246429443,
"avg_line_length": 46.71428680419922,
"blob_id": "0a32adaf327c225068022adf79eeec5200ab2ef9",
"content_id": "2d6b544559bd185513dddcc2091af17e0708bd7f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2003,
"license_type": "permissive",
"max_line_length": 158,
"num_lines": 42,
"path": "/stoploss_clean.py",
"repo_name": "QRT-Solutions/Stoploss_clean",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport numpy as np\n\ndf_limites = pd.read_excel('OneDrive\\\\Documentos\\\\Felipe\\\\Codes\\\\qrt\\\\STATS DATA\\\\stats_xle.xlsx', usecols='A:I')\ndf_clean_1 = pd.read_excel('OneDrive\\\\Documentos\\\\Felipe\\\\Codes\\\\qrt\\\\PERFORMANCE DATA\\\\depuracion_performance\\\\depuracion_xle.xlsx',sheet_name=5, skiprows=4)\ncolumnas = df_clean_1.columns.difference(['Date','Total','Close'])\ndf_dates_1 = pd.DataFrame(data = columnas)\ndf_dates_1.insert(1,'Fecha de corte',np.nan)\n\ndf_clean_2 = pd.read_excel('OneDrive\\\\Documentos\\\\Felipe\\\\Codes\\\\qrt\\\\PERFORMANCE DATA\\\\depuracion_performance\\\\depuracion_xle.xlsx',sheet_name=5, skiprows=4)\ndf_dates_2 = pd.DataFrame(data = columnas)\ndf_dates_2.insert(1,'Fecha de corte',np.nan)\n\nInf1 = df_limites.at[3, 'Inf1']\nInf2 = df_limites.at[3, 'Inf2']\nindex_col = 0\n\nfor col in columnas:\n i=df_clean_1[col].le(Inf1).idxmax()\n j=df_clean_2[col].le(Inf2).idxmax()\n if i != 0:\n df_clean_1.loc[i:, col] = 0\n df_dates_1.loc[index_col, 'Fecha de corte'] = df_clean_1.loc[i,'Date']\n if j != 0:\n df_clean_2.loc[j:, col] = 0\n df_dates_2.loc[index_col, 'Fecha de corte'] = df_clean_2.loc[j,'Date']\n index_col+=1\n\ndf_clean_1.drop(columns=['Total'], inplace=True)\ndf_clean_1.insert(1, 'Total' ,0)\ndf_clean_1['Total'] = df_clean_1.iloc[:,3:].sum(axis=1)\ndf_dates_1.rename(columns={0:'Trade'},inplace=True)\n\ndf_clean_2.drop(columns=['Total'], inplace=True)\ndf_clean_2.insert(1, 'Total' ,0)\ndf_clean_2['Total'] = df_clean_2.iloc[:,3:].sum(axis=1)\ndf_dates_2.rename(columns={0:'Trade'},inplace=True)\n\ndf_clean_1.to_csv('OneDrive\\\\Documentos\\\\Felipe\\\\Codes\\\\qrt\\\\python_output\\\\xle_corte1.csv', index=False)\ndf_dates_1.to_csv('OneDrive\\\\Documentos\\\\Felipe\\\\Codes\\\\qrt\\\\python_output\\\\xle_fechascorte1.csv',index=False)\ndf_clean_2.to_csv('OneDrive\\\\Documentos\\\\Felipe\\\\Codes\\\\qrt\\\\python_output\\\\xle_corte2.csv', 
index=False)\ndf_dates_2.to_csv('OneDrive\\\\Documentos\\\\Felipe\\\\Codes\\\\qrt\\\\python_output\\\\xle_fechascorte2.csv', index=False)"
},
{
"alpha_fraction": 0.7547169923782349,
"alphanum_fraction": 0.7594339847564697,
"avg_line_length": 52.25,
"blob_id": "6118f1a87fb8de9999e8ea3b82b8b25964a84dfa",
"content_id": "ac6fd2f5f9bea4caeb26528e78454559666354a5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 212,
"license_type": "permissive",
"max_line_length": 109,
"num_lines": 4,
"path": "/README.md",
"repo_name": "QRT-Solutions/Stoploss_clean",
"src_encoding": "UTF-8",
"text": "### Archivo de limpieza de data de trades en base a Stop-Loss\n\n## Cambiar segun instrumento a evaluar.\n- Se generaran 4 archivos csv, con los trades 'cortados' segun Stop-Loss y las fechas de corte de cada trade."
}
] | 2 |
Voelz04/LEGORoboticsPython
|
https://github.com/Voelz04/LEGORoboticsPython
|
846cf89772e4eb87b4d4a45cf5c5fc47fae6c17c
|
9710005b4f1746c2751f9f5404d42a18ee3053fa
|
a6a108b907b4830fc8f63d038576dc9443c539ab
|
refs/heads/master
| 2021-01-04T09:13:47.789128 | 2020-02-14T10:30:58 | 2020-02-14T10:30:58 | 240,482,455 | 0 | 0 |
MIT
| 2020-02-14T10:23:27 | 2020-01-31T17:37:07 | 2020-01-31T17:37:05 | null |
[
{
"alpha_fraction": 0.7504425048828125,
"alphanum_fraction": 0.7716814279556274,
"avg_line_length": 22.58333396911621,
"blob_id": "bb5805655908e4f2c2e7f744e08a4903d18e1dde",
"content_id": "1c2731fa3f1dca4ea45bb05008752828ef563152",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 568,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 24,
"path": "/Demos/Brick_Demo.py",
"repo_name": "Voelz04/LEGORoboticsPython",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env pybricks-micropython\n\nfrom pybricks import ev3brick as brick\nfrom pybricks.parameters import Button, Color\nfrom pybricks.tools import (wait, print)\n\n#Püfe ob der untere Button gedrückt wurde\ndown_pressed = Button.DOWN in brick.buttons()\n\n#Spiele einen Ton\nbrick.sound.beep()\n\n#Lösche alles was vorher auf dem Display angezeigt wurde\nbrick.display.clear()\n\n#Zeige den Text \"Hallo\" auf dem Display and den Koordinaten (0,50)\nbrick.display.text(\"Hallo\", (0, 50))\n\n# Stelle das Statuslicht auf Rot\nbrick.light(Color.RED)\n\nprint(down_pressed)\n\nwait(10000)"
},
{
"alpha_fraction": 0.6138079762458801,
"alphanum_fraction": 0.628910481929779,
"avg_line_length": 20.904762268066406,
"blob_id": "31ffd591f8e72a32a1da508d666a027b33a7b2d7",
"content_id": "37badd195d69f558dc1469fba304351c18b81e23",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 927,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 42,
"path": "/Templates/state_machine.py",
"repo_name": "Voelz04/LEGORoboticsPython",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env pybricks-micropython\nfrom pybricks import ev3brick as brick\nfrom pybricks.ev3devices import (Motor, TouchSensor, ColorSensor,\n InfraredSensor, UltrasonicSensor, GyroSensor)\nfrom pybricks.parameters import (Port, Stop, Direction, Button, Color,\n SoundFile, ImageFile, Align)\nfrom pybricks.tools import print, wait, StopWatch\nfrom pybricks.robotics import DriveBase\n\n\n#state of the robot\n# 1: Cruise\n# 2: Avoid\n# 3: Escape\n# 4: Goal reached\n# TODO: more States\nstate = 1\n\ndef cruise():\n global state\n print(\"State: Cruise\")\n\ndef avoid():\n global state\n print(\"State: Avoid\")\n\ndef escape():\n global state\n print(\"State: Escape\")\n \n\n\nwhile(state != 4):\n \n state_switch={\n 1:cruise,\n 2:avoid,\n 3:escape\n }\n func=state_switch.get(state,lambda :print('Invalid State'))\n func()\n wait(100)\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.556705892086029,
"alphanum_fraction": 0.605647087097168,
"avg_line_length": 26.9342098236084,
"blob_id": "a5af2ec934a1b32aa7801e918266f3e56f84f198",
"content_id": "1441289c82eee9e3d90fe293c6b7749e2916351a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2125,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 76,
"path": "/Tools/TOF.py",
"repo_name": "Voelz04/LEGORoboticsPython",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\n# from pybricks import ev3brick as brick\n# from pybricks.ev3devices import (Motor, TouchSensor, ColorSensor,\n# InfraredSensor, UltrasonicSensor, GyroSensor)\n#from pybricks.parameters import (Port, Stop, Direction, Button, Color,\n# SoundFile, ImageFile, Align)\n#from pybricks.tools import print, wait, StopWatch\n# from pybricks.robotics import DriveBase\nfrom time import sleep\nfrom smbus import SMBus\nimport sys\nfrom ev3dev2.display import Display\nfrom ev3dev2.sensor import INPUT_1, INPUT_4\nfrom ev3dev2.sensor.lego import TouchSensor\nfrom ev3dev2.port import LegoPort\n#from pybricks.tools import print\n\n\n#port2 = LegoPort(address='ev3-ports:in2')\n##port2.mode = 'other-i2c'\n#port1 = LegoPort(address='ev3-ports:in1')\n#port1.mode = 'other-i2c'\n#bus = SMBus('/dev/i2c-in2')\n\n#bus = SMBus('/dev/i2c-2')\nbus = SMBus(3)\nclass TOF:\n \n def debug_print(*args, **kwargs):\n print(*args, **kwargs, file=sys.stderr)\n\n def distance():\n global bus\n raw_data = bus.read_i2c_block_data( 0x01,0x42, 2)\n data = raw_data[0]+255*raw_data[1]\n #print(data)\n #debug_print(raw_data)\n #debug_print(data)\n return data\n \n \n \n while True:\n debug_print(distance())\n\n def __init__(self, port,mode):\n portIntMap = {\n Port.S1: 3,\n Port.S2: 4,\n Port.S3: 5,\n Port.S4: 6,\n }\n self.bus=SMBus(portStringMap.get(port, 3))\n \n \n \n\n\n #brick.display.clear()\n #data = bus.read_byte(0x42)\n #data = bus.read_byte_data(0x08, 0x42)\n #data = (raw_data[1]<<8)+raw_data[0]\n #data = bus.read_byte_data(0x01, 0x42)\n #bytes(bus.read_i2c_block_data(0x42, 2)).decode().strip()\n \n #adc0_upper8 = bus.read_byte_data(0x01, 0x42)\n #adc0_lower2 = bus.read_byte_data(0x01, 0x42)\n #data = (adc0_upper8 << 2) + adc0_lower2\n \n \n #debug_print('upper'+str(adc0_upper8))\n #debug_print('upper'+str(adc0_lower2))\n\n #brick.display.text('IMU: ' + str(imu1.angle()),(0, 50))\n #wait(0.1)\n\n\n"
}
] | 3 |
Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike
|
https://github.com/Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike
|
0ad60eabd9929fc889fb25ae5ddbfec1afcfc851
|
1771a9d98ee15e6dfc62b54d8e8721549a6df60e
|
bca34dae3df24f802b31863639943a7d6e7dbfd7
|
refs/heads/master
| 2023-03-01T10:44:23.124890 | 2021-02-04T23:20:43 | 2021-02-04T23:20:43 | 246,280,914 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6561264991760254,
"alphanum_fraction": 0.6640316247940063,
"avg_line_length": 36.20588302612305,
"blob_id": "de28d2577efaea594ead7df9075d64b39708cbf3",
"content_id": "e190a807b2012454bc977d5747d114c0df24082c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1265,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 34,
"path": "/Projekti1-Programimi me socketa/Rrezearta_Thaqi_FIEK-UDP-klienti.py",
"repo_name": "Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike",
"src_encoding": "UTF-8",
"text": "import socket\nprint('Ky eshte programi FIEK-UDP Client.')\nprint(\"A doni te caktoni vete serverin dhe portin qe deshironi te perdorni?\")\npergjigja=input().upper()\nif(pergjigja==\"PO\"):\n print(\"Jep emrin e serverit:\")\n servername=input().lower()\n print(\"Jep numrin e portit\")\n p=input()\n port=int(p)\nelse:\n servername='localhost'\n port=13000\ntry:\n \n print(\"Operacioni(IPADDRESS, PORT, COUNT, REVERSE, PALINDROME, TIME, GAME, GCF, CONVERT, COIN, CAPFIRSTLETTER, DECTOOCTAL, GAUSSIAN)? \")\n with socket.socket(socket.AF_INET,socket.SOCK_DGRAM) as clientsocket:\n procesi=True\n while procesi:\n \n kushti=input().encode() \n if kushti.upper() == \"EXIT\":\n print(\"Keni vendosur te shkeputni lidhjen me server. Kaloni mire!\")\n procesi=False\n elif kushti=='':\n print(\"Komande jo valide. Vazhdoni me nje kerkese tjeter.\")\n else:\n clientsocket.sendto(kushti,(servername, port))\n message,address= clientsocket.recvfrom(1024)\n print(\"Pergjigja:\"+message.decode(\"utf-8\"))\n print(\"Vazhdoni me kerkese tjeter ose shtyp exit per dalje.\")\n clientsocket.close()\nexcept TimeoutError:\n print(\"Serveri mori shume kohe per tu pergjigjur andaj lidhja u mbyll!\")\n"
},
{
"alpha_fraction": 0.8703703880310059,
"alphanum_fraction": 0.8703703880310059,
"avg_line_length": 53,
"blob_id": "7d63a95dbe150aae97c82de319d3bedfe94e2c8d",
"content_id": "5383bbf409eafc6c235825c8a093ca1217ce2d4d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 57,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 1,
"path": "/README.md",
"repo_name": "Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike",
"src_encoding": "UTF-8",
"text": "Projektet e realizuara në lëndën Rrjetat Kompjuterike\n"
},
{
"alpha_fraction": 0.5912747979164124,
"alphanum_fraction": 0.5968174338340759,
"avg_line_length": 39.8248176574707,
"blob_id": "6ff150949f7527aa4263d9364bd44722d4612843",
"content_id": "8781e72fd018190c2cfcc72d25e7bdc1d151cb42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5593,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 137,
"path": "/Projekti1-Programimi me socketa/Rrezearta_Thaqi_FIEK-UDP-server.py",
"repo_name": "Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike",
"src_encoding": "UTF-8",
"text": "import socket\nfrom socket import * \nfrom time import gmtime, strftime\nimport math\nfrom decimal import *\nimport ipaddress\nimport random\nfrom _thread import *\nfrom metodat import*\n\n\nserverName = ''\nserverPort = 13000\n\nserverSocket = socket(AF_INET, SOCK_DGRAM)\ntry:\n serverSocket.bind((serverName, serverPort))\nexcept error:\n print(\"Startimi i serverit deshtoi! Binding Failed.\")\n sys.exit()\nprint(\"Serveri eshte startuar ne: \"+serverName +\" ne portin: \"+str(serverPort))\n\nprint('Serveri eshte duke pritur per ndonje kerkese')\ndef cl_thread(conn,address):\n \n try:\n if kushti.upper().strip()==\"IPADDRESS\":\n pergjigja = IPADDRESS(address)\n serverSocket.sendto(str.encode(pergjigja),address)\n elif kushti.upper().strip()==\"PORT\": \n pergjigja = PORT(address)\n serverSocket.sendto(str.encode(pergjigja),address)\n elif kushti.upper().strip()==\"TIME\":\n pergjigja=TIME()\n serverSocket.sendto(str.encode(pergjigja),address)\n elif kushti.upper().strip()==\"GAME\":\n pergjigja=GAME()\n serverSocket.sendto(str.encode(pergjigja),address)\n elif kushti.upper().strip().startswith(\"REVERSE\"):\n try:\n args = kushti.split(\" \", 1)\n stringu = args[1]\n pergjigja = REVERSE(stringu)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Shkruani dicka pas reverse\"),address)\n \n elif kushti.upper().strip().startswith(\"COUNT\"):\n try:\n args=kushti.split(\" \",1)\n stringu=args[1]\n pergjigja=COUNT(stringu)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Shkruani dicka pas fjales count!\"),address)\n \n elif kushti.upper().strip().startswith(\"PALINDROME\"):\n try:\n args=kushti.split(\" \",1)\n stringa=args[1]\n pergjigja=PALINDROME(stringa)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Shkruani dicka pas fjales palindrome.\"),address)\n \n elif 
kushti.upper().strip().startswith(\"GCF\"):\n try:\n args = kushti.split(\" \", 2)\n a = int(args[1])\n b = int(args[2])\n pergjigja = GCF(a, b)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Nuk keni dhene numer te mjaftueshem te argumenteve!\"),address)\n except ValueError:\n serverSocket.sendto(str.encode(\"Nuk keni dhene dy numra per metoden GCF!\"),address)\n \n elif kushti.upper().strip().startswith(\"CONVERT\"):\n try:\n args = kushti.split(\" \", 2)\n opsioni = args[1]\n vlera = float(args[2])\n pergjigja = CONVERT(opsioni, vlera)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Nuk keni dhene numer te mjaftueshem te argumenteve!\"),address)\n except ValueError:\n serverSocket.sendto(str.encode(\"Duhet te jepni numer per ta konvertuar!\"),address)\n elif kushti.upper().strip().startswith(\"COIN\"):\n try:\n args = kushti.split(\" \", 1)\n guess = int(args[1])\n pergjigja = COIN(guess)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Nuk keni dhene numer te mjaftueshem te argumenteve!\"),address)\n except ValueError:\n serverSocket.sendto(str.encode(\"Nuk keni dhene numer\"),address)\n elif kushti.upper().strip().startswith(\"CAPFIRSTLETTER\"):\n try:\n args=kushti.split(\" \",1)\n p =args[1]\n pergjigja=CAPFIRSTLETTER(p)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Nuk keni dhene numer te mjaftueshem te argumenteve!\"),address)\n \n elif kushti.upper().strip().startswith(\"DECTOOCTAL\"):\n try:\n args=kushti.split(\" \",1)\n p=int(args[1])\n pergjigja=DECTOOCTAL(p)\n serverSocket.sendto(str.encode(pergjigja),address)\n except IndexError:\n serverSocket.sendto(str.encode(\"Nuk keni dhene numer te mjaftueshem te argumenteve!\"),address)\n except ValueError:\n serverSocket.sendto(str.encode(\"Jepni numer jo 
string!\"),address)\n elif kushti.upper().strip()==\"GAUSSIAN\":\n pergjigja=GAUSSIAN()\n serverSocket.sendto(str.encode(pergjigja),address)\n \n else:\n pergjigja = \"Keni dhene nje komande jo valide!\"\n serverSocket.sendto(str.encode(pergjigja),address)\n print(\"Klienti ka dhene komande jo valide!\")\n print(\"Klientit \" + address[0] + \" iu dergua pergjigja. \")\n \n except ConnectionResetError:\n print(\"Serveri u shkeput nga klienti!\")\n except ConnectionAbortedError:\n print(\"Serveri u shkeput nga klienti!\")\n \nwhile True:\n clientS,address=serverSocket.recvfrom(1024)\n kushti=clientS.decode('utf-8')\n print(\"Lidhur me \"+address[0]+\":\"+str(address[1]))\n start_new_thread(cl_thread,(clientS,address,))\n"
},
{
"alpha_fraction": 0.5163179636001587,
"alphanum_fraction": 0.5418410301208496,
"avg_line_length": 22.899999618530273,
"blob_id": "b89f6b77c395fbed0cf81d91231eb839a5b2c1af",
"content_id": "ff0c8c9eead954886804593f2e0fab7c4de2efe9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2390,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 100,
"path": "/Projekti1-Programimi me socketa/metodat.py",
"repo_name": "Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike",
"src_encoding": "UTF-8",
"text": "import socket\nfrom socket import *\nfrom time import gmtime, strftime\nimport math\nfrom decimal import *\nimport ipaddress\nimport random\nfrom _thread import *\n\ndef IPADDRESS(address):\n return \"IP Adresa e klientit eshte \" + address[0]\n\ndef PORT(address):\n return \"Klienti eshte duke perdorur portin \" + str(address[1])\n\ndef TIME():\n time=strftime('%d.%m.%Y %I:%M:%S %p');\n return time\n\ndef REVERSE(s): \n return s[::-1] \n \ndef PALINDROME(s):\n rev = REVERSE(s) \n if (s == rev): \n return \"Teksti i dhene eshte palindrome\"\n else:\n return \"Teksti i dhene nuk eshte palindrome\"\n\ndef COUNT(s):\n vcount = 0 \n ccount = 0 \n s = s.lower() \n for i in range(0,len(s)): \n \n if s[i] in ('a',\"e\",\"i\",\"o\",\"u\"): \n vcount = vcount + 1 \n elif (s[i] >= 'a' and s[i] <= 'z'): \n ccount = ccount + 1\n return str(s)+\" permban \"+str(vcount)+\" zanore dhe \"+str(ccount)+\" bashketingellore.\"\n\ndef GAME():\n numArray = []\n for x in range(5):\n numArray.append(random.randint(1,35))\n numArray.sort()\n return \"(\"+str(numArray)[1:-1]+\")\" \n \n \ndef GCF(a, b):\n t = min(a, b)\n while a % t != 0 or b % t != 0:\n t -= 1\n return str(t)\n\n\ndef CONVERT(opsioni, vlera):\n \n if opsioni.upper() == \"CMTOFEET\": \n return (str)(round((vlera/30.48),2))+\" ft\"\n elif opsioni.upper()== \"FEETTOCM\":\n return (str)(round((vlera*30.48),2))+\" cm\"\n elif opsioni.upper()== \"KMTOMILES\": \n return (str)(round((vlera/1.609),2))+\" miles\"\n elif opsioni.upper() == \"MILETOKM\": \n return (str)(round((vlera*1.609),2))+\" km\" \n else:\n return \"Ky opsion nuk ekziston! 
Provoni keto opsione:\\ncmToFeet\\nFeetToCm\\nkmToMiles\\nMileToKm\"\n\ndef COIN(guess):\n \n coin = random.randint(0,1) \n if(guess == coin):\n return \"Sakte\" \n else:\n return \"Pasakte\"\n\ndef CAPFIRSTLETTER(p):\n return ' '.join(s[:1].upper() + s[1:] for s in p.split(' '))\n\ndef DECTOOCTAL(n): \n \n x = n\n k = []\n while (n > 0):\n a = int(float(n % 8))\n k.append(a)\n n = (n - a) / 8\n string = \"\"\n for j in k[::-1]:\n string = string + str(j)\n return string\n\ndef GAUSSIAN():\n r = 0.0\n while (r >= 1.0) or (r == 0.0):\n x = -1.0 + 2.0 * random.random()\n y = -1.0 + 2.0 * random.random()\n r = x*x + y*y\n return str(x * math.sqrt(-2.0 * math.log(r) / r))\n"
},
{
"alpha_fraction": 0.38732394576072693,
"alphanum_fraction": 0.40744465589523315,
"avg_line_length": 19.70833396911621,
"blob_id": "19b9572a2311a15e32a3c4ffe5c633d77cbf93a6",
"content_id": "6f3a0c9b1dbad1e2e902ed02f4d5c4be3fda999c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 994,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 48,
"path": "/shpbinomiale.cs",
"repo_name": "Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike",
"src_encoding": "UTF-8",
"text": "//Programi ne C# per llogaritjen e shperndarjes binomiale te packet switching\nusing System;\nclass Detyra\n{\n static int nCr(int n, int r)\n {\n\n if (r > n / 2)\n r = n - r;\n\n int a = 1;\n for (int i = 1; i <= r; i++)\n {\n a *= (n - r + i);\n a /= i;\n }\n\n return a;\n }\n\n static float binomialProbability(\n int n, double p)\n {\n float shuma = 0;\n\n for (int i = 0; i <=11; i++)\n {\n shuma += nCr(n, i) *\n (float)Math.Pow(p, i)\n * (float)Math.Pow(1 - p,\n n - i);\n }\n return shuma;\n }\n\n public static void Main()\n {\n int n = 35; // Provoni edhe per n=50, n=100\n\n double p = 0.1;\n\n double probability =\n binomialProbability(n, p);\n\n double prob1 = 1 - probability;\n Console.Write(\"Probabiliteti = \" + prob1);\n }\n}\n"
},
{
"alpha_fraction": 0.6540178656578064,
"alphanum_fraction": 0.6614583134651184,
"avg_line_length": 35.32432556152344,
"blob_id": "906129c39707938b538a2eb5a616cd6aae14595a",
"content_id": "29713545964259580e21bb12e8a1ee33cb9edb06",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1344,
"license_type": "no_license",
"max_line_length": 141,
"num_lines": 37,
"path": "/Projekti1-Programimi me socketa/Rrezearta_Thaqi_FIEK-TCP-klienti.py",
"repo_name": "Rrezeartaa/Computer-Networking-Rrjetat-Kompjuterike",
"src_encoding": "UTF-8",
"text": "import socket\n\nprint(\"A doni te caktoni vet serverin dhe portin qe deshironi te perdorni?(po ose jo)\")\npergjigja=input().upper()\nif(pergjigja==\"PO\"):\n print(\"Jep emrin e serverit:\")\n serverAddress=input().lower()\n print(\"Jep numrin e portit\")\n p=input()\n serverPort=int(p)\nelse:\n serverAddress='localhost'\n serverPort=13000\n\nclientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ntry:\n clientSocket.connect((serverAddress, serverPort))\n\n print(\"Operacioni (IPADDRESS, PORT, COUNT, REVERSE, PALINDROME, TIME, GAME, GCF, CONVERT, COIN, CAPFIRSTLETTER, DECTOOCTAL, GAUSSIAN)? \")\n procesi=True\n while procesi:\n kushti = input()\n if kushti.upper() == \"EXIT\":\n print(\"Keni vendosur te shkeputni lidhjen me server. Kaloni mire!\")\n procesi = False\n elif kushti == '':\n print(\"Komande jo valide. Vazhdoni me nje kerkese tjeter.\")\n else:\n clientSocket.sendall(str.encode(kushti))\n serverAnswerByte = clientSocket.recv(1024)\n serverAnswer = serverAnswerByte.decode(\"utf-8\")\n print(\"Pergjigja: \"+serverAnswer)\n print(\"Vazhdoni me kerkese tjeter ose shtyp exit per dalje.\")\n clientSocket.close()\n\nexcept TimeoutError:\n print(\"Serveri mori shume kohe per tu pergjigjur andaj lidhja u mbyll!\")\n"
}
] | 6 |
varoroyo/Alvaro-Roman
|
https://github.com/varoroyo/Alvaro-Roman
|
ddc25d85e0a089220f6c52ff301ed3aa93d5ba23
|
4f0d6a7dddfd6ea3465ba2d2962ddf10186d864a
|
a6f50096139aff862502d0f2001675caa9480b2d
|
refs/heads/master
| 2020-05-24T07:57:24.597209 | 2017-03-15T15:18:31 | 2017-03-15T15:18:31 | 84,837,678 | 11 | 27 | null | 2017-03-13T14:43:24 | 2017-03-15T15:21:53 | 2017-03-15T15:28:40 |
Python
|
[
{
"alpha_fraction": 0.5405354499816895,
"alphanum_fraction": 0.5503393411636353,
"avg_line_length": 33.66666793823242,
"blob_id": "a764a41d8a56206b3a8f3b41d20aa0634dc2fca8",
"content_id": "7efca78a8d220853a165fe3c7f53dd0c17d04736",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5304,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 153,
"path": "/web.py",
"repo_name": "varoroyo/Alvaro-Roman",
"src_encoding": "UTF-8",
"text": "# Copyright [2017] [name of copyright owner]\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and limitations under the License.\n\n# Author : Álvaro Román Royo ([email protected])\n\n\nimport http.server\nimport http.client\nimport json\nimport socketserver\n\nclass testHTTPRequestHandler(http.server.BaseHTTPRequestHandler):\n\n OPENFDA_API_URL = \"api.fda.gov\"\n OPENFDA_API_EVENT = \"/drug/event.json\"\n OPENFDA_API_LYRICA = '?search=patient.drug.medicinalproduct:\"LYRICA\"&limit=10'\n\n def get_main_page(self):\n html = '''\n <html>\n <head>\n <title>OpenFDA app</title>\n </head>\n <body>\n <h1>OpenFDA Client</h1>\n <form method='get' action='receivedrug'>\n <input type='submit' value='Enviar a OpenFDA'>\n </input>\n </form>\n <form method='get' action='searchmed'>\n <input type='text' name='drug'></input>\n <input type='submit' value='Buscar Medicamento'></input>\n </form>\n <form method='get' action='receivecompany'>\n <input type='submit' value='Find companies'></input>\n </form>\n <form method='get' action='searchcom'>\n <input type='text' name='drug'></input>\n <input type='submit' value='Buscar medicinalproduct'></input>\n </form>\n </body>\n </html>\n '''\n return html\n def get_med(self,drug):\n conn = http.client.HTTPSConnection(self.OPENFDA_API_URL)\n conn.request(\"GET\", self.OPENFDA_API_EVENT + '?search=patient.drug.medicinalproduct:'+drug+'&limit=10')\n r1 = conn.getresponse()\n print(r1.status, r1.reason)\n data1 = r1.read()\n data = data1.decode('utf8')\n events = 
json.loads(data)\n #event = events['results'][0]['patient']['drug']\n return events\n def get_medicinalproduct(self,com_num):\n conn = http.client.HTTPSConnection(self.OPENFDA_API_URL)\n conn.request(\"GET\", self.OPENFDA_API_EVENT + '?search=companynumb:'+com_num+'&limit=10')\n r1 = conn.getresponse()\n print(r1.status, r1.reason)\n data1 = r1.read()\n data = data1.decode('utf8')\n events = json.loads(data)\n\n return events\n\n def get_event(self):\n\n conn = http.client.HTTPSConnection(self.OPENFDA_API_URL)\n conn.request(\"GET\", self.OPENFDA_API_EVENT + '?limit=10')\n r1 = conn.getresponse()\n print(r1.status, r1.reason)\n data1 = r1.read()\n data = data1.decode('utf8')\n events = json.loads(data)\n #event = events['results'][0]['patient']['drug']\n return events\n def get_drug(self, events):\n medicamentos=[]\n for event in events['results']:\n medicamentos+=[event['patient']['drug'][0]['medicinalproduct']]\n\n return medicamentos\n def get_com_num(self, events):\n com_num=[]\n for event in events['results']:\n com_num+=[event['companynumb']]\n return com_num\n\n def drug_page(self,medicamentos):\n s=''\n for drug in medicamentos:\n s += \"<li>\"+drug+\"</li>\"\n html='''\n <html>\n <head></head>\n <body>\n <ul>\n %s\n </ul>\n </body>\n </html>''' %(s)\n return html\n\n def do_GET(self):\n\n print (self.path)\n #print (self.path)\n\n self.send_response(200)\n\n self.send_header('Content-type','text/html')\n self.end_headers()\n\n if self.path == '/' :\n html = self.get_main_page()\n self.wfile.write(bytes(html,'utf8'))\n elif self.path == '/receivedrug?':\n events = self.get_event()\n medicamentos = self.get_drug(events)\n html = self.drug_page(medicamentos)\n self.wfile.write(bytes(html,'utf8'))\n elif self.path == '/receivecompany?':\n events = self.get_event()\n com_num = self.get_com_num(events)\n html = self.drug_page(com_num)\n self.wfile.write(bytes(html,'utf8'))\n\n elif 'searchmed' in self.path:\n drug=self.path.split('=')[1]\n print (drug)\n events 
= self.get_med(drug)\n com_num = self.get_com_num(events)\n html = self.drug_page(com_num)\n self.wfile.write(bytes(html,'utf8'))\n elif 'searchcom' in self.path:\n com_num = self.path.split('=')[1]\n print (com_num)\n events = self.get_medicinalproduct(com_num)\n medicinalproduct = self.get_drug(events)\n html = self.drug_page(medicinalproduct)\n self.wfile.write(bytes(html,'utf8'))\n\n return\n"
}
] | 1 |
Shauren/tc-client-launcher
|
https://github.com/Shauren/tc-client-launcher
|
1924e1ae1d463da807fce95943464223f01e7d62
|
3634ddd9023dbf9c749945740ee4ef2b7e317268
|
29a6e481ba7b68ed0c9f6e3abf1c3a0994caddcb
|
refs/heads/master
| 2023-07-25T01:56:15.334261 | 2023-07-19T22:28:59 | 2023-07-19T22:28:59 | 103,862,935 | 27 | 20 | null | 2017-09-17T21:22:52 | 2017-10-04T01:45:13 | 2017-10-08T20:22:37 |
TypeScript
|
[
{
"alpha_fraction": 0.5792610049247742,
"alphanum_fraction": 0.5792610049247742,
"avg_line_length": 33.95833206176758,
"blob_id": "be5a3d7f0d483b0f175bee30ac2deba8d4c2fba4",
"content_id": "5d055c95d7575b950500457b00d7dca008881979",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1678,
"license_type": "permissive",
"max_line_length": 102,
"num_lines": 48,
"path": "/src/app/renderer-logger.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\n\nimport { LogEvent, Logger } from '../desktop-app/logger';\nimport { environment } from '../environments/environment';\nimport { Argv } from './argv';\n\n@Injectable()\nexport class RendererLogger extends Logger {\n\n private readonly logFn: (message?: any, ...optionalParams: any[]) => void;\n private readonly errorFn: (message?: any, ...optionalParams: any[]) => void;\n\n constructor(private argv: Argv) {\n super();\n this.logFn = this.resolveLogFunction('log');\n this.errorFn = this.resolveLogFunction('error');\n }\n\n log(message?: any, ...optionalParams: any[]): void {\n this.logFn.apply(this, arguments);\n }\n\n error(message?: any, ...optionalParams: any[]): void {\n this.errorFn.apply(this, arguments);\n }\n\n private resolveLogFunction(fn: keyof Console): (message?: any, ...optionalParams: any[]) => void {\n if (environment.production) {\n if (this.argv['logging-enabled']) {\n return function productionLogEvent() {\n window.electronAPI.log(new LogEvent(fn, Array.prototype.slice.call(arguments)));\n };\n } else {\n return function noopLogEvent() { };\n }\n }\n if (this.argv['logging-enabled']) {\n return function devLogEvent() {\n const args = Array.prototype.slice.call(arguments);\n window.electronAPI.log(new LogEvent(fn, args));\n console[fn].apply(console, args);\n };\n }\n return function devConsoleLogEvent() {\n console[fn].apply(console, arguments);\n };\n }\n}\n"
},
{
"alpha_fraction": 0.6225059628486633,
"alphanum_fraction": 0.6296887397766113,
"avg_line_length": 38.15625,
"blob_id": "403a6f03de3fcfe3e0250c970513d4a179b9800b",
"content_id": "82bc047ab384871fde0a2a92d0b0a78170f9bc05",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1253,
"license_type": "permissive",
"max_line_length": 90,
"num_lines": 32,
"path": "/src/app/routes.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { inject } from '@angular/core';\nimport { Routes } from '@angular/router';\n\nimport { AccountComponent } from './account/account.component';\nimport { GameAccountResolver } from './account/game-account.resolver';\nimport { ErrorComponent } from './error/error.component';\nimport { LoaderComponent } from './loader/loader.component';\nimport { LoginFormResolver } from './login/login-form.resolver';\nimport { LoginComponent } from './login/login.component';\nimport { PortalResolver } from './portal-resolver';\n\nexport const routes: Routes = [\n { path: '', pathMatch: 'full', component: LoaderComponent },\n {\n path: 'login',\n component: LoginComponent,\n resolve: {\n form: () => inject(LoginFormResolver).resolve()\n }\n },\n {\n path: 'account',\n component: AccountComponent,\n resolve: {\n gameAccounts: () => inject(GameAccountResolver).resolve(),\n portal: () => inject(PortalResolver).resolve()\n }\n },\n { path: 'initialization-error', component: ErrorComponent, data: { code: 'TCL001' } },\n { path: 'portal-error', component: ErrorComponent, data: { code: 'TCL002' } },\n { path: '**', component: ErrorComponent, data: { code: 'TCL404' } }\n];\n"
},
{
"alpha_fraction": 0.35844749212265015,
"alphanum_fraction": 0.3675799071788788,
"avg_line_length": 20.899999618530273,
"blob_id": "7ffdb5870dcd7bd917ede405f9d9e769d26bdba2",
"content_id": "338e34ee835cf323ed092f3afbd045c4c93ce723",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 876,
"license_type": "permissive",
"max_line_length": 44,
"num_lines": 40,
"path": "/binding.gyp",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "{\n \"targets\": [\n {\n \"target_name\": \"tc_launcher\",\n \"sources\": [\n \"src/native/Launcher.cpp\"\n ],\n \"conditions\": [\n [\"OS==\\\"win\\\"\", {\n \"sources\": [\n \"src/native/Win32Launcher.cpp\"\n ],\n \"defines\": [\n \"WIN32_LEAN_AND_MEAN\"\n ],\n \"libraries\": [\n \"crypt32.lib\"\n ]\n }],\n [\"OS==\\\"mac\\\"\", {\n \"sources\": [\n \"src/native/MacLauncher.cpp\",\n \"src/native/CDSACrypt.cpp\",\n \"src/native/CDSACrypt.h\"\n ],\n \"libraries\": [\n 'CoreFoundation.framework',\n 'Security.framework'\n ],\n \"xcode_settings\": {\n \"OTHER_CFLAGS\": [\n \"-std=c++14\",\n \"-Wno-deprecated-declarations\"\n ],\n }\n }]\n ]\n }\n ]\n}\n"
},
{
"alpha_fraction": 0.7272727489471436,
"alphanum_fraction": 0.7272727489471436,
"avg_line_length": 23.75,
"blob_id": "15f4ed37b8f789251c0bdb47d54ac3cb8ad16e8a",
"content_id": "bf12df3a12277db0570fbccc8b55897b9890f125",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 99,
"license_type": "permissive",
"max_line_length": 37,
"num_lines": 4,
"path": "/src/app/login-refresh-result.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export interface LoginRefreshResult {\n login_ticket_expiry?: number;\n is_expired: boolean;\n}\n"
},
{
"alpha_fraction": 0.5636963844299316,
"alphanum_fraction": 0.5636963844299316,
"avg_line_length": 42.28571319580078,
"blob_id": "22bf7662ed8f9e841a4bcfcc2f635d24be2c88cb",
"content_id": "393ae6de7f60d6c2d9b91c0ddb45f85bab5dbfe5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1515,
"license_type": "permissive",
"max_line_length": 131,
"num_lines": 35,
"path": "/src/app/logging-http-interceptor.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { HttpErrorResponse, HttpInterceptor } from '@angular/common/http';\nimport { HttpEvent, HttpEventType, HttpHandler, HttpRequest } from '@angular/common/http';\nimport { Injectable } from '@angular/core';\nimport { Observable } from 'rxjs';\nimport { tap } from 'rxjs/operators';\n\nimport { Logger } from '../desktop-app/logger';\n\n@Injectable()\nexport class LoggingHttpInterceptor implements HttpInterceptor {\n\n constructor(private logger: Logger) {\n }\n\n intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {\n this.logger.log(`Http | ${req.method} (begin) - ${req.url}`);\n const start = Date.now();\n return next.handle(req).pipe(tap({\n next: (event: HttpEvent<any>) => {\n if (event.type === HttpEventType.Response) {\n const duration = Date.now() - start;\n this.logger.log(`Http | ${req.method} (done) - ${event.url} - ${event.statusText} - took ${duration}ms`);\n }\n },\n error: (error: HttpErrorResponse) => {\n const duration = Date.now() - start;\n if (error.error instanceof Error) {\n this.logger.error(`Http | ${req.method} (error) - ${error.url} - ${error.error.message} - took ${duration}ms`);\n } else {\n this.logger.error(`Http | ${req.method} (error) - ${error.url} - ${error.statusText} - took ${duration}ms`);\n }\n }\n }));\n }\n}\n"
},
{
"alpha_fraction": 0.636734664440155,
"alphanum_fraction": 0.636734664440155,
"avg_line_length": 15.333333015441895,
"blob_id": "7cf441239994fb22230eb9b8e614af9e942884cf",
"content_id": "ea0744c66defa71a98ce793b6a0184427f384c55",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 245,
"license_type": "permissive",
"max_line_length": 34,
"num_lines": 15,
"path": "/src/app/login/form-inputs.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export enum FormType {\n LOGIN_FORM = <any>'LOGIN_FORM'\n}\n\nexport class FormInput {\n input_id: string;\n type: string;\n label: string;\n max_length: number;\n}\n\nexport class FormInputs {\n type: FormType;\n inputs: FormInput[];\n}\n"
},
{
"alpha_fraction": 0.6614853143692017,
"alphanum_fraction": 0.6614853143692017,
"avg_line_length": 25.31818199157715,
"blob_id": "eb9e27a9267ca3b55c4f6a1deb0dfc5138d67957",
"content_id": "9ebed7becec15a790014530f57702e330b3dfe6c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 579,
"license_type": "permissive",
"max_line_length": 59,
"num_lines": 22,
"path": "/src/app/login/login.service.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { HttpClient } from '@angular/common/http';\nimport { Injectable } from '@angular/core';\nimport { Observable } from 'rxjs';\n\nimport { FormInputs } from './form-inputs';\nimport { LoginForm } from './login-form';\nimport { LoginResult } from './login-result';\n\n@Injectable()\nexport class LoginService {\n\n constructor(private http: HttpClient) {\n }\n\n getForm(): Observable<FormInputs> {\n return this.http.get<FormInputs>('login/');\n }\n\n login(form: LoginForm): Observable<LoginResult> {\n return this.http.post<LoginResult>('login/', form);\n }\n}\n"
},
{
"alpha_fraction": 0.7325383424758911,
"alphanum_fraction": 0.7325383424758911,
"avg_line_length": 35.6875,
"blob_id": "c010a2161971db8e69a6b732f73f0f78feae844c",
"content_id": "0ca92a417998c3800bd30ddd726f429fa25d9c79",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 587,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 16,
"path": "/src/app/bnetserver-url-http-interceptor.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';\nimport { Injectable } from '@angular/core';\nimport { Observable } from 'rxjs';\n\nimport { ConfigurationService } from './configuration.service';\n\n@Injectable()\nexport class BnetserverUrlHttpInterceptor implements HttpInterceptor {\n\n constructor(private configuration: ConfigurationService) {\n }\n\n intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {\n return next.handle(req.clone({ url: `${this.configuration.LoginServerUrl}/${req.url}` }));\n }\n}\n"
},
{
"alpha_fraction": 0.7052238583564758,
"alphanum_fraction": 0.7052238583564758,
"avg_line_length": 21.33333396911621,
"blob_id": "c9f7930ccb90802a584a63d67228f3844543cf1a",
"content_id": "c71d894fcddee35126732e9f983260531c0931b8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 268,
"license_type": "permissive",
"max_line_length": 37,
"num_lines": 12,
"path": "/src/app/account/game-account-info.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export class GameAccountInfo {\n display_name: string;\n expansion: number;\n is_suspended: boolean;\n is_banned: boolean;\n suspension_expires: number;\n suspension_reason: string;\n}\n\nexport class GameAccountList {\n game_accounts: GameAccountInfo[];\n}\n"
},
{
"alpha_fraction": 0.6067796349525452,
"alphanum_fraction": 0.6067796349525452,
"avg_line_length": 23.58333396911621,
"blob_id": "3766a43eea6feed9625e8752cab3ad3c281db917",
"content_id": "4978d07d9aeba4080c6ee22fd75c0f2fd28f8a7a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 295,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 12,
"path": "/src/desktop-app/logger.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export abstract class Logger {\n abstract log(message?: any, ...optionalParams: any[]): void;\n abstract error(message?: any, ...optionalParams: any[]): void;\n close() { }\n}\n\nexport class LogEvent {\n constructor(\n public fn: keyof Console,\n public args: any[]) {\n }\n}\n"
},
{
"alpha_fraction": 0.6338028311729431,
"alphanum_fraction": 0.6338028311729431,
"avg_line_length": 16.75,
"blob_id": "b7d2162f76a8ef599928e9a16d08568f4f5cc34e",
"content_id": "e7222fd89f6ae55a4af427bdc5c736c6794a21a8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 71,
"license_type": "permissive",
"max_line_length": 28,
"num_lines": 4,
"path": "/src/ipc/log-event.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export interface ILogEvent {\n fn: keyof Console;\n args: any[];\n}\n"
},
{
"alpha_fraction": 0.611388623714447,
"alphanum_fraction": 0.620379626750946,
"avg_line_length": 31.29032325744629,
"blob_id": "4845856113884e1bcfe5ecba4affcc9dde36253d",
"content_id": "2398ff83f97e82495843b797da4054e4f91fe3c5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1001,
"license_type": "permissive",
"max_line_length": 130,
"num_lines": 31,
"path": "/src/app/error/error.component.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { ChangeDetectionStrategy, Component } from '@angular/core';\nimport { ActivatedRoute } from '@angular/router';\n\n@Component({\n selector: 'tc-error',\n templateUrl: './error.component.html',\n changeDetection: ChangeDetectionStrategy.OnPush\n})\nexport class ErrorComponent {\n\n constructor(private route: ActivatedRoute) {\n }\n\n getErrorCode(): string {\n return this.route.snapshot.data['code'];\n }\n\n getErrorMessage(): string {\n switch (this.getErrorCode()) {\n case 'TCL001':\n return `Failed to initialize login form, check if your 'Login URL' setting is correct and the server is running.`;\n case 'TCL002':\n return 'Failed to retrieve login portal, launching the game will not be possible. Please submit a bug report.';\n case 'TCL404':\n return 'Unexpected component navigation error.';\n default:\n break;\n }\n return 'Unexpected error';\n }\n}\n"
},
{
"alpha_fraction": 0.5157131552696228,
"alphanum_fraction": 0.5177276134490967,
"avg_line_length": 30.417720794677734,
"blob_id": "7ebaf49c6558cd4a215e00c6e384d0d5bb29ed58",
"content_id": "ac815f0261654fcfbb33ad9a5ad037b8eba9efa5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 2482,
"license_type": "permissive",
"max_line_length": 105,
"num_lines": 79,
"path": "/src/desktop-app/main-logger.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { ipcMain } from 'electron';\nimport * as fs from 'fs';\nimport * as path from 'path';\nimport * as util from 'util';\n\nimport { LogEvent, Logger } from './logger';\n\nexport class MainLogger extends Logger {\n\n private logFn: (message?: any, ...optionalParams: any[]) => void;\n private errorFn: (message?: any, ...optionalParams: any[]) => void;\n private stream: fs.WriteStream;\n private queue: string[] = [];\n\n constructor() {\n super();\n this.logFn = () => { };\n this.errorFn = () => { };\n }\n\n log(message?: any, ...optionalParams: any[]): void {\n message = 'Main | ' + message;\n this.logFn.apply(this, arguments);\n }\n\n error(message?: any, ...optionalParams: any[]): void {\n message = 'Main | ' + message;\n this.errorFn.apply(this, arguments);\n }\n\n enableLogging(): void {\n fs.mkdir(`${path.dirname(process.execPath)}/logs`, (error) => {\n if (error && error.code !== 'EEXIST') {\n console.error(error);\n return;\n }\n this.stream = fs.createWriteStream(`${path.dirname(process.execPath)}/logs/tc-launcher.log`);\n this.logFn = this.resolveLogFunction('log');\n this.errorFn = this.resolveLogFunction('error');\n\n ipcMain.on('logger', (event: Electron.Event, args: LogEvent) => {\n args.args[0] = 'Renderer | ' + args.args[0];\n this[args.fn + 'Fn'].apply(this, args.args);\n });\n });\n }\n\n close(): void {\n if (this.stream != undefined) {\n this.stream.close();\n }\n }\n\n private resolveLogFunction(fn: keyof Console): (message?: any, ...optionalParams: any[]) => void {\n return function () {\n (this as MainLogger).queueMessage(\n Array.prototype.map.call(arguments, arg =>\n typeof arg === 'object' ? util.inspect(arg, { breakLength: Infinity }) : '' + arg)\n .join(', ') + '\\n');\n console[fn].apply(console, arguments);\n };\n }\n\n private queueMessage(message: string): void {\n this.queue.push(message);\n if (this.queue.length === 1) {\n this.processQueue();\n }\n }\n\n private processQueue(): void {\n this.stream.write(this.queue[0], () => {\n this.queue.shift();\n if (this.queue.length !== 0) {\n this.processQueue();\n }\n });\n }\n}\n"
},
{
"alpha_fraction": 0.7082285284996033,
"alphanum_fraction": 0.7082285284996033,
"avg_line_length": 42.484375,
"blob_id": "b2c0f019d07d1ab45a28de96cc958c9b9b2d73ae",
"content_id": "cd4138ece106da91d1626a5fd6c9b1850763a030",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 2783,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 64,
"path": "/src/app/app.module.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { HTTP_INTERCEPTORS, HttpClientModule } from '@angular/common/http';\nimport { APP_INITIALIZER, NgModule } from '@angular/core';\nimport { FormsModule } from '@angular/forms';\nimport { BrowserModule } from '@angular/platform-browser';\nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations';\nimport { RouterModule } from '@angular/router';\n\nimport { Logger } from '../desktop-app/logger';\nimport { AccountComponent } from './account/account.component';\nimport { AccountService } from './account/account.service';\nimport { GameAccountResolver } from './account/game-account.resolver';\nimport { Argv, argvFactory, argvInitializer } from './argv';\nimport { AuthHttpInterceptor } from './auth-http-interceptor';\nimport { BnetserverUrlHttpInterceptor } from './bnetserver-url-http-interceptor';\nimport { configurationInitializer, ConfigurationService } from './configuration.service';\nimport { ErrorComponent } from './error/error.component';\nimport { LoaderComponent } from './loader/loader.component';\nimport { LoggingHttpInterceptor } from './logging-http-interceptor';\nimport { LoginTicketService } from './login-ticket.service';\nimport { LoginFormResolver } from './login/login-form.resolver';\nimport { LoginComponent } from './login/login.component';\nimport { LoginService } from './login/login.service';\nimport { MainComponent } from './main/main.component';\nimport { PortalResolver } from './portal-resolver';\nimport { RendererLogger } from './renderer-logger';\nimport { routes } from './routes';\nimport { SettingsDialogComponent } from './settings-dialog/settings-dialog.component';\n\n@NgModule({\n declarations: [\n AccountComponent,\n ErrorComponent,\n LoaderComponent,\n LoginComponent,\n MainComponent,\n SettingsDialogComponent\n ],\n imports: [\n FormsModule,\n HttpClientModule,\n BrowserModule,\n BrowserAnimationsModule,\n RouterModule.forRoot(routes, {})\n ],\n providers: [\n { provide: APP_INITIALIZER, useFactory: configurationInitializer, multi: true },\n { provide: APP_INITIALIZER, useFactory: argvInitializer, multi: true },\n { provide: HTTP_INTERCEPTORS, useClass: BnetserverUrlHttpInterceptor, multi: true },\n { provide: HTTP_INTERCEPTORS, useClass: LoggingHttpInterceptor, multi: true },\n { provide: HTTP_INTERCEPTORS, useClass: AuthHttpInterceptor, multi: true },\n { provide: Argv, useFactory: argvFactory },\n { provide: Logger, useClass: RendererLogger },\n ConfigurationService,\n LoginService,\n LoginFormResolver,\n LoginTicketService,\n AccountService,\n GameAccountResolver,\n PortalResolver\n ],\n bootstrap: [MainComponent]\n})\nexport class AppModule {\n}\n"
},
{
"alpha_fraction": 0.6604938507080078,
"alphanum_fraction": 0.6604938507080078,
"avg_line_length": 22.14285659790039,
"blob_id": "2ff60d255d997f733082dda4e83b1b149ff9e006",
"content_id": "8422d31691c82a2962dc00285e1ee9e4e4e1719b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 324,
"license_type": "permissive",
"max_line_length": 46,
"num_lines": 14,
"path": "/src/app/login/login-result.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export enum AuthenticationState {\n LOGIN = <any>'LOGIN',\n LEGAL = <any>'LEGAL',\n AUTHENTICATOR = <any>'AUTHENTICATOR',\n DONE = <any>'DONE'\n}\n\nexport class LoginResult {\n authentication_state: AuthenticationState;\n error_code: string;\n error_message: string;\n url: string;\n login_ticket: string;\n}\n"
},
{
"alpha_fraction": 0.6883116960525513,
"alphanum_fraction": 0.6883116960525513,
"avg_line_length": 18.25,
"blob_id": "33790e215fd37a8f769d927316f31b075f3c737f",
"content_id": "df031fe027b2ec3a9103a65ccb8e4d1b08ce0bb1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 77,
"license_type": "permissive",
"max_line_length": 31,
"num_lines": 4,
"path": "/src/ipc/crypto-result.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export interface CryptoResult {\n success: boolean;\n output?: string;\n}\n"
},
{
"alpha_fraction": 0.685584545135498,
"alphanum_fraction": 0.685584545135498,
"avg_line_length": 35.70833206176758,
"blob_id": "2e02202797dc7a125f94ddc339745a3274777b08",
"content_id": "bf1326f18939387567f8d40fd236d98d30618037",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": true,
"language": "TypeScript",
"length_bytes": 881,
"license_type": "permissive",
"max_line_length": 113,
"num_lines": 24,
"path": "/src/ipc/electron-api.d.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { Configuration } from './configuration';\nimport { CryptoResult } from './crypto-result';\nimport { ILogEvent } from './log-event';\nimport { LaunchArgs } from './launch-args';\n\nexport interface ElectronApi {\n getArgv(): Promise<{ [key: string]: any }>;\n getConfiguration(): Promise<Configuration>;\n setConfiguration<Key extends keyof Configuration>(change: [Key, Configuration[Key]]): Promise<Configuration>;\n encrypt(data: string): Promise<CryptoResult>;\n decrypt(data: string): Promise<CryptoResult>;\n log(event: ILogEvent): void;\n login(): void;\n launchGame(args: LaunchArgs): void;\n selectDirectory(): Promise<{ filePaths: string[]; canceled: boolean }>;\n onOpenSettingsRequest(callback: () => void): void;\n onLogoutRequest(callback: () => void): void;\n}\n\ndeclare global {\n interface Window {\n electronAPI: ElectronApi;\n }\n}\n"
},
{
"alpha_fraction": 0.6616915464401245,
"alphanum_fraction": 0.6616915464401245,
"avg_line_length": 17.272727966308594,
"blob_id": "49c7e9d83c4f0a4a1116290c0588697737bf42a3",
"content_id": "ed9100f7b1132038441e450106863174402437c7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 201,
"license_type": "permissive",
"max_line_length": 29,
"num_lines": 11,
"path": "/src/app/login/login-form.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export class FormInputValue {\n input_id: string;\n value: string;\n}\n\nexport class LoginForm {\n platform_id: string;\n program_id: string;\n version: string;\n inputs: FormInputValue[];\n}\n"
},
{
"alpha_fraction": 0.535525381565094,
"alphanum_fraction": 0.5381184816360474,
"avg_line_length": 32.01712417602539,
"blob_id": "daf6c9d1997e6e78f0584cbe767c86e0e9522dc0",
"content_id": "a5ffdfb003d074b2b0242d5f2af964141fdadb5c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 9641,
"license_type": "permissive",
"max_line_length": 139,
"num_lines": 292,
"path": "/src/desktop-app/desktop-main.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import * as commandLineArgs from 'command-line-args';\nimport { app, BrowserWindow, dialog, ipcMain, Menu } from 'electron';\nimport * as electronSettings from 'electron-settings';\nimport * as path from 'path';\n\nimport { Configuration } from '../ipc/configuration';\nimport { CryptoResult } from '../ipc/crypto-result';\nimport { LaunchArgs } from '../ipc/launch-args';\nimport { Logger } from './logger';\nimport { MainLogger } from './main-logger';\n\nconst nativeLauncher: Launcher = require('../tc_launcher.node');\n\n// Keep a global reference of the window object, if you don't, the window will\n// be closed automatically when the JavaScript object is garbage collected.\nlet applicationWindow: Electron.BrowserWindow;\nlet commandLine: { [arg: string]: any };\nlet configuration: Configuration;\nlet logger: Logger;\nlet logoutMenuItem: Electron.MenuItemConstructorOptions;\n\nfunction cleanup() {\n logger.close();\n logger = undefined;\n configuration = undefined;\n}\n\nfunction parseArgv() {\n commandLine = commandLineArgs([{\n name: 'logging-enabled', alias: 'l', type: Boolean\n }], { partial: true });\n}\n\nfunction initializeLogging() {\n logger = new MainLogger();\n if (commandLine['logging-enabled']) {\n (<MainLogger>logger).enableLogging();\n }\n}\n\nfunction getDefaultConfiguration(): Configuration {\n return {\n WowInstallDir: 'C:\\\\Program Files (x86)\\\\World of Warcraft',\n LoginServerUrl: 'https://localhost:8081/bnetserver',\n RememberLogin: false,\n LastGameAccount: '',\n LastGameVersion: 'Retail'\n };\n}\n\nfunction loadConfig() {\n configuration = Object.assign(getDefaultConfiguration(), electronSettings.getSync());\n electronSettings.setSync(<any>configuration);\n\n ipcMain.handle('get-configuration', () => configuration);\n ipcMain.handle('configuration', <Key extends keyof Configuration>(event, args: [Key, Configuration[Key]]) => {\n if (args != undefined) {\n if (args[1] == undefined) {\n delete configuration[args[0]];\n } else {\n configuration[args[0]] = args[1];\n }\n electronSettings.setSync(<any>configuration);\n }\n return configuration;\n });\n}\n\nfunction createWindow() {\n // Create the browser window.\n applicationWindow = new BrowserWindow({\n width: 640,\n height: 480,\n backgroundColor: '#2D2D30',\n show: false,\n webPreferences: {\n preload: path.join(__dirname, 'renderer-preload.js')\n }\n });\n\n ipcMain.handleOnce('get-argv', (): { [p: string]: any } => {\n return {\n ...commandLine,\n program_platform: process.platform,\n program_id: app.getName(),\n program_version: app.getVersion()\n };\n });\n\n // and load the index.html of the app.\n applicationWindow.loadFile(path.join(__dirname, '../web-app/index.html'));\n\n // Emitted when the window is closed.\n applicationWindow.on('closed', () => {\n // Dereference the window object, usually you would store windows\n // in an array if your app supports multi windows, this is the time\n // when you should delete the corresponding element.\n applicationWindow = undefined;\n });\n\n applicationWindow.on('ready-to-show', () => {\n applicationWindow.show();\n });\n}\n\nfunction setLogoutMenuVisible(visible: boolean) {\n const menuItems: Electron.MenuItem[] = (<any>Menu.getApplicationMenu().items.reduce((acc, menuItem) => {\n const submenu = (<any>menuItem).submenu;\n return !!submenu ? acc.concat(submenu.items) : acc;\n }, []));\n const logoutIndex = menuItems.findIndex(menuItem => menuItem.label === 'Logout');\n if (logoutIndex !== -1) {\n menuItems[logoutIndex].visible = visible;\n }\n}\n\nfunction createMenu() {\n ipcMain.on('login', () => {\n setLogoutMenuVisible(true);\n });\n\n logoutMenuItem = {\n label: 'Logout', visible: false, click: () => {\n applicationWindow.webContents.send('logout');\n setLogoutMenuVisible(false);\n }\n };\n\n let template: Electron.MenuItemConstructorOptions[];\n\n if (process.platform !== 'darwin') {\n template = [\n {\n label: 'Window',\n submenu: [\n {\n label: 'Settings',\n click: () => {\n applicationWindow.webContents.send('open-settings');\n }\n },\n // {\n // label: 'Reload',\n // accelerator: 'CmdOrCtrl+R',\n // click: () => {\n // applicationWindow.webContents.reloadIgnoringCache();\n // }\n // },\n // { role: 'toggleDevTools' },\n { type: 'separator' },\n logoutMenuItem,\n { role: 'minimize' },\n { role: 'close' }\n ]\n }\n ];\n } else {\n template = [\n {\n label: app.getName(),\n submenu: [\n {\n label: 'Settings',\n click: () => {\n applicationWindow.webContents.send('open-settings');\n }\n },\n // { role: 'toggleDevTools' },\n { type: 'separator' },\n logoutMenuItem,\n { role: 'hide' },\n { role: 'hideOthers' },\n { role: 'unhide' },\n { type: 'separator' },\n { role: 'quit' }\n ]\n },\n // mac does not support copy/paste if these items aren't in menu... how retarded is that?\n {\n label: 'Edit',\n submenu: [\n { role: 'undo' },\n { role: 'redo' },\n { type: 'separator' },\n { role: 'cut' },\n { role: 'copy' },\n { role: 'paste' },\n { role: 'pasteAndMatchStyle' },\n { role: 'delete' },\n { role: 'selectAll' }\n ]\n },\n {\n label: 'Window',\n submenu: [\n { role: 'close' },\n { role: 'minimize' },\n { type: 'separator' },\n { role: 'front' }\n ]\n }];\n }\n\n Menu.setApplicationMenu(Menu.buildFromTemplate(template));\n}\n\nfunction setupCrypto() {\n ipcMain.handle('encrypt', (event: Electron.IpcMainInvokeEvent, args: string): CryptoResult => {\n try {\n const result = nativeLauncher.encryptString(args);\n return { success: true, output: result.toString('base64') };\n } catch (e) {\n logger.error(`Crypto | Failed to encrypt string: ${(e as Error).message}`);\n return { success: false };\n }\n });\n ipcMain.handle('decrypt', (event: Electron.IpcMainInvokeEvent, args: string): CryptoResult => {\n try {\n const result = nativeLauncher.decryptString(Buffer.from(args, 'base64'));\n return { success: true, output: result };\n } catch (e) {\n logger.error(`Crypto | Failed to decrypt string: ${(e as Error).message}`);\n return { success: false };\n }\n });\n}\n\n// This method will be called when Electron has finished\n// initialization and is ready to create browser windows.\n// Some APIs can only be used after this event occurs.\napp.on('ready', () => {\n parseArgv();\n\n initializeLogging();\n\n loadConfig();\n\n createWindow();\n\n createMenu();\n\n setupCrypto();\n\n ipcMain.handle('select-directory', () => dialog.showOpenDialog(applicationWindow, {properties: ['openDirectory']}));\n\n ipcMain.on('launch-game', (event: Electron.IpcMainEvent, args: LaunchArgs) => {\n nativeLauncher.launchGame(\n configuration.WowInstallDir,\n args.Portal,\n args.LoginTicket,\n args.GameAccount,\n args.GameVersion);\n });\n});\n\n// Quit when all windows are closed.\napp.on('window-all-closed', () => {\n // On macOS it is common for applications and their menu bar\n // to stay active until the user quits explicitly with Cmd + Q\n if (process.platform !== 'darwin') {\n app.quit();\n }\n});\n\napp.on('quit', () => {\n cleanup();\n});\n\napp.on('activate', () => {\n // On macOS it's common to re-create a window in the app when the\n // dock icon is clicked and there are no other windows open.\n if (applicationWindow == undefined) {\n createWindow();\n }\n});\n\n// In this file you can include the rest of your app's specific main process\n// code. You can also put them in separate files and require them here.\napp.on('certificate-error', (event, webContents, url, error, certificate, callback) => {\n while (certificate.issuerCert != undefined) {\n certificate = certificate.issuerCert;\n }\n // TC certificate is self-signed, allow it\n if (certificate.issuerName === 'TrinityCore Battle.net Aurora Root CA' &&\n certificate.fingerprint === 'sha256/ceL7DRTPsMGWEdjIAqIdscvdHLF2qqLO2BKjE11BY1Q=') {\n event.preventDefault();\n callback(true);\n } else {\n logger.error(`Crypto | Invalid certificate: ${error}. Issuer: ${certificate.issuerName}, Fingerprint: ${certificate.fingerprint}`);\n callback(false);\n }\n});\n"
},
{
"alpha_fraction": 0.673337459564209,
"alphanum_fraction": 0.6816502213478088,
"avg_line_length": 30.843137741088867,
"blob_id": "d0c52ab51842c6966355acf91233152da33874b5",
"content_id": "ec0f60d5bcf86bc0de43fc60580a287d0b0d1f40",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 3248,
"license_type": "permissive",
"max_line_length": 154,
"num_lines": 102,
"path": "/src/native/MacLauncher.cpp",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "#include \"CDSACrypt.h\"\n#include \"LauncherShared.h\"\n#include <string>\n#include <CoreFoundation/CFPreferences.h>\n\nbool ProcessData(CSSM_DATA_PTR inData, CSSM_DATA_PTR outData, bool encrypt)\n{\n // Initialize CDSA\n CSSM_CSP_HANDLE cspHandle;\n CSSM_RETURN crtn = CDSA::CspAttach(&cspHandle);\n if (crtn)\n return false;\n\n // Create encryption password\n char const* username = getenv(\"USER\");\n uint8 xorLength = std::min(strlen(username), size_t(16));\n\n uint8 rawPassword[16];\n std::copy(std::begin(Entropy), std::end(Entropy), std::begin(rawPassword));\n for (uint8 i = 0; i < xorLength; i++)\n rawPassword[i] ^= username[i];\n\n CSSM_DATA password = { 16, rawPassword };\n\n // Create encryption key\n CSSM_DATA salt = { 8, (uint8*)\"someSalt\" };\n CSSM_KEY cdsaKey;\n crtn = CDSA::DeriveKey(cspHandle, password, salt, CSSM_ALGID_AES, 128, &cdsaKey);\n if (crtn)\n {\n CDSA::CspDetach(cspHandle);\n return false;\n }\n\n if (encrypt)\n crtn = CDSA::Encrypt(cspHandle, &cdsaKey, inData, outData);\n else\n crtn = CDSA::Decrypt(cspHandle, &cdsaKey, inData, outData);\n\n // Free resources\n CDSA::FreeKey(cspHandle, &cdsaKey);\n CDSA::CspDetach(cspHandle);\n return !crtn;\n}\n\nbool EncryptString(char const* string, std::vector<uint8_t>* output)\n{\n output->clear();\n\n CSSM_DATA inData = { strlen(string), (uint8*)string };\n CSSM_DATA encryptedData = { 0, nullptr };\n\n if (!ProcessData(&inData, &encryptedData, true))\n return false;\n\n std::copy(encryptedData.Data, encryptedData.Data + encryptedData.Length, std::back_inserter(*output));\n\n free(encryptedData.Data);\n return true;\n}\n\nbool DecryptString(std::vector<uint8_t> const& encryptedString, std::string* output)\n{\n output->clear();\n\n CSSM_DATA inData = { encryptedString.size(), (uint8*)encryptedString.data() };\n CSSM_DATA plainData = { 0, nullptr };\n\n if (!ProcessData(&inData, &plainData, false))\n return false;\n\n output->assign(reinterpret_cast<char const*>(plainData.Data), plainData.Length);\n\n free(plainData.Data);\n return true;\n}\n\nbool StoreLoginTicket(char const* portal, char const* loginTicket, char const* gameAccount)\n{\n std::vector<uint8_t> encryptedTicket;\n if (!EncryptString(loginTicket, &encryptedTicket))\n return false;\n\n CFStringRef app = CFSTR(\"org.trnity\");\n CFPreferencesSetAppValue(CFSTR(\"Launch Options/WoW/\" LOGIN_TICKET), CFDataCreate(nullptr, encryptedTicket.data(), encryptedTicket.size()), app);\n CFPreferencesSetAppValue(CFSTR(\"Launch Options/WoW/\" GAME_ACCOUNT_NAME), CFStringCreateWithCString(nullptr, gameAccount, kCFStringEncodingUTF8), app);\n CFPreferencesSetAppValue(CFSTR(\"Launch Options/WoW/\" PORTAL_ADDRESS), CFStringCreateWithCString(nullptr, portal, kCFStringEncodingUTF8), app);\n CFPreferencesAppSynchronize(app);\n\n return true;\n}\n\nbool LaunchGameWithLogin(char const* gameInstallDir, char const* /*version*/)\n{\n char commandLine[32768] = {};\n strcat(commandLine, \"open \\\"\");\n strcat(commandLine, gameInstallDir);\n strcat(commandLine, \"/World of Warcraft Patched.app\\\"\");\n strcat(commandLine, \" --args -launcherlogin -console\");\n system(commandLine);\n return true;\n}\n"
},
{
"alpha_fraction": 0.6645021438598633,
"alphanum_fraction": 0.6818181872367859,
"avg_line_length": 41.900001525878906,
"blob_id": "ec3aeb890667e1c08c5c0f157c496024ddd72d29",
"content_id": "854e95c262a0079007bec4885065555797e2316a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 6006,
"license_type": "permissive",
"max_line_length": 178,
"num_lines": 140,
"path": "/src/native/Launcher.cpp",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "#include <node.h>\n#include <node_buffer.h>\n\n#include \"LauncherShared.h\"\n#include <memory>\n#include <string>\n\nvoid LaunchGame(v8::FunctionCallbackInfo<v8::Value> const& args)\n{\n v8::Isolate* isolate = args.GetIsolate();\n v8::HandleScope scope(isolate);\n\n if (args.Length() < 5)\n {\n isolate->ThrowException(v8::Exception::TypeError(v8::String::NewFromUtf8(isolate, \"Wrong number of arguments, expected 5\", v8::NewStringType::kNormal)\n .ToLocalChecked()));\n return;\n }\n\n if (!args[0]->IsString() || !args[1]->IsString() || !args[2]->IsString() || !args[3]->IsString())\n {\n isolate->ThrowException(v8::Exception::TypeError(v8::String::NewFromUtf8(isolate,\n \"Wrong arguments types, expected (gameInstallDir: string, portal: string, loginTicket: string, gameAccount: string, version: string)\", v8::NewStringType::kNormal)\n .ToLocalChecked()));\n return;\n }\n\n v8::String::Utf8Value gameInstallDir(isolate, args[0]->ToString(isolate->GetCurrentContext()).ToLocalChecked());\n v8::String::Utf8Value portal(isolate, args[1]->ToString(isolate->GetCurrentContext()).ToLocalChecked());\n v8::String::Utf8Value loginTicket(isolate, args[2]->ToString(isolate->GetCurrentContext()).ToLocalChecked());\n v8::String::Utf8Value gameAccount(isolate, args[3]->ToString(isolate->GetCurrentContext()).ToLocalChecked());\n v8::String::Utf8Value version(isolate, args[4]->ToString(isolate->GetCurrentContext()).ToLocalChecked());\n\n bool success = false;\n if (StoreLoginTicket(*portal, *loginTicket, *gameAccount))\n if (LaunchGameWithLogin(*gameInstallDir, *version))\n success = true;\n\n args.GetReturnValue().Set(v8::Boolean::New(isolate, success));\n}\n\nvoid EncryptJsString(v8::FunctionCallbackInfo<v8::Value> const& args)\n{\n v8::Isolate* isolate = args.GetIsolate();\n v8::HandleScope scope(isolate);\n\n if (args.Length() != 1)\n {\n isolate->ThrowException(v8::Exception::TypeError(v8::String::NewFromUtf8(isolate, \"Wrong number of arguments, expected 1\", v8::NewStringType::kNormal).ToLocalChecked()));\n return;\n }\n\n if (!args[0]->IsString())\n {\n isolate->ThrowException(v8::Exception::TypeError(v8::String::NewFromUtf8(isolate,\n \"Wrong arguments types, expected (inputString: string)\", v8::NewStringType::kNormal).ToLocalChecked()));\n return;\n }\n\n v8::String::Utf8Value inputString(isolate, args[0]->ToString(isolate->GetCurrentContext()).ToLocalChecked());\n std::vector<uint8_t> encryptedString;\n\n if (!EncryptString(*inputString, &encryptedString))\n {\n isolate->ThrowException(v8::Exception::Error(v8::String::NewFromUtf8(isolate, \"Encryption failed\", v8::NewStringType::kNormal).ToLocalChecked()));\n return;\n }\n\n char* data = reinterpret_cast<char*>(encryptedString.data());\n size_t length = encryptedString.size();\n\n auto allocHint = std::make_unique<std::pair<v8::ArrayBuffer::Allocator*, size_t>>(isolate->GetArrayBufferAllocator(), length);\n\n node::Buffer::FreeCallback deleter = [](char* ptr, void* hint)\n {\n auto hintData = static_cast<decltype(allocHint)::element_type*>(hint);\n hintData->first->Free(ptr, hintData->second);\n };\n v8::Local<v8::Object> returnBuffer;\n\n void* sandboxedData = isolate->GetArrayBufferAllocator()->AllocateUninitialized(length);\n memcpy(sandboxedData, data, length);\n\n if (node::Buffer::New(isolate, static_cast<char*>(sandboxedData), length, deleter, allocHint.release()).ToLocal(&returnBuffer))\n args.GetReturnValue().Set(returnBuffer);\n else\n isolate->ThrowException(v8::Exception::Error(v8::String::NewFromUtf8(isolate, \"Output buffer creation failed\", v8::NewStringType::kNormal).ToLocalChecked()));\n}\n\nvoid DecryptJsString(v8::FunctionCallbackInfo<v8::Value> const& args)\n{\n v8::Isolate* isolate = args.GetIsolate();\n v8::HandleScope scope(isolate);\n\n if (args.Length() != 1)\n {\n isolate->ThrowException(v8::Exception::TypeError(v8::String::NewFromUtf8(isolate, \"Wrong number of arguments, expected 1\", v8::NewStringType::kNormal)\n .ToLocalChecked()));\n return;\n }\n\n if (!args[0]->IsObject())\n {\n isolate->ThrowException(v8::Exception::TypeError(v8::String::NewFromUtf8(isolate,\n \"Wrong arguments types, expected (encryptedString: Buffer)\", v8::NewStringType::kNormal).ToLocalChecked()));\n return;\n }\n\n v8::Local<v8::Object> encryptedString = args[0]->ToObject(isolate->GetCurrentContext()).ToLocalChecked();\n uint8_t* data = reinterpret_cast<uint8_t*>(node::Buffer::Data(encryptedString));\n size_t length = node::Buffer::Length(encryptedString);\n std::string outputString;\n\n if (!DecryptString(std::vector<uint8_t>(data, data + length), &outputString))\n {\n isolate->ThrowException(v8::Exception::Error(v8::String::NewFromUtf8(isolate, \"Decryption failed\", v8::NewStringType::kNormal).ToLocalChecked()));\n return;\n }\n\n v8::Local<v8::String> returnString;\n if (v8::String::NewFromUtf8(isolate, outputString.c_str(), v8::NewStringType::kNormal, outputString.length()).ToLocal(&returnString))\n args.GetReturnValue().Set(returnString);\n else\n isolate->ThrowException(v8::Exception::TypeError(v8::String::NewFromUtf8(isolate, \"Output string creation failed\", v8::NewStringType::kNormal).ToLocalChecked()));\n}\n\nNODE_MODULE_INIT(/*exports, module, context*/)\n{\n auto setExport = [&](char const* name, v8::FunctionCallback callback)\n {\n v8::Isolate* isolate = context->GetIsolate();\n exports->Set(context,\n v8::String::NewFromUtf8(isolate, name, v8::NewStringType::kInternalized).ToLocalChecked(),\n v8::FunctionTemplate::New(isolate, callback)->GetFunction(context).ToLocalChecked()).Check();\n };\n\n setExport(\"launchGame\", LaunchGame);\n setExport(\"encryptString\", EncryptJsString);\n setExport(\"decryptString\", DecryptJsString);\n}\n"
},
{
"alpha_fraction": 0.5990195870399475,
"alphanum_fraction": 0.6058823466300964,
"avg_line_length": 24.5,
"blob_id": "6281c625cd050116221570d23536548227a94548",
"content_id": "ebb71fae35a9bd7347f3f8be1d453de373ea9cbc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1020,
"license_type": "permissive",
"max_line_length": 142,
"num_lines": 40,
"path": "/README.md",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "# TcLauncher\n\n## Build\n\nRun `npm run build-all-dev` to build the project. The build artifacts will be stored in the `dist/` directory.\nUse the `npm run build-package:{OS}` for a production build (`win` or `mac`).\n\n## VSCode debug configurations\n\nDebug Main Process - debugging code found in src/electron\n\nDebug Renderer Process - debugging code from src/app. First, start the application with `npm run electron` then attach with this configuration\n\n```\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"name\": \"Debug Main Process\",\n \"type\": \"node\",\n \"request\": \"launch\",\n \"cwd\": \"${workspaceRoot}\",\n \"runtimeExecutable\": \"${workspaceRoot}/node_modules/.bin/electron\",\n \"windows\": {\n \"runtimeExecutable\": \"${workspaceRoot}/node_modules/.bin/electron.cmd\"\n },\n \"args\": [\n \"dist/\"\n ]\n },\n {\n \"name\": \"Debug Renderer Process\",\n \"type\": \"chrome\",\n \"request\": \"attach\",\n \"port\": 9222,\n \"webRoot\": \"${workspaceRoot}\"\n }\n ]\n}\n```\n"
},
{
"alpha_fraction": 0.599039614200592,
"alphanum_fraction": 0.599039614200592,
"avg_line_length": 29.851852416992188,
"blob_id": "f98f178044227632e6d0926dbd7d20be0e73b005",
"content_id": "7cb15a6a7131509244f4695f7105914a616a928a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 833,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 27,
"path": "/src/app/portal-resolver.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { HttpClient } from '@angular/common/http';\nimport { Injectable } from '@angular/core';\nimport { Router } from '@angular/router';\nimport { EMPTY, Observable } from 'rxjs';\nimport { catchError } from 'rxjs/operators';\n\nimport { Logger } from '../desktop-app/logger';\n\n@Injectable()\nexport class PortalResolver {\n\n constructor(\n private http: HttpClient,\n private router: Router,\n private logger: Logger) {\n }\n\n resolve(): Observable<string> {\n this.logger.log('Portal | Retrieving portal address');\n return this.http.get('portal/', { responseType: 'text' }).pipe(\n catchError(error => {\n this.logger.error('Portal | Failed to portal!', error);\n this.router.navigate(['/portal-error']);\n return EMPTY;\n }));\n }\n}\n"
},
{
"alpha_fraction": 0.6228157877922058,
"alphanum_fraction": 0.6228157877922058,
"avg_line_length": 42.543479919433594,
"blob_id": "ca720bc94f295556a2b1d28f57b7e5464fbcbe57",
"content_id": "1c51a946768687a20316c5657ebe8020a0186640",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 4006,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 92,
"path": "/src/app/login/login.component.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { AfterViewInit, ChangeDetectionStrategy, ChangeDetectorRef, Component, OnInit, ViewChild } from '@angular/core';\nimport { NgForm } from '@angular/forms';\nimport { ActivatedRoute, Router } from '@angular/router';\n\nimport { Logger } from '../../desktop-app/logger';\nimport { ConfigurationService } from '../configuration.service';\nimport { LoginTicketService } from '../login-ticket.service';\nimport { Argv } from '../argv';\nimport { FormInput } from './form-inputs';\nimport { FormInputValue, LoginForm } from './login-form';\nimport { AuthenticationState } from './login-result';\nimport { LoginService } from './login.service';\n\n@Component({\n selector: 'tc-login',\n templateUrl: './login.component.html',\n changeDetection: ChangeDetectionStrategy.OnPush\n})\nexport class LoginComponent implements OnInit, AfterViewInit {\n\n formInputs: FormInput[];\n submit: FormInput;\n rememberLogin: boolean;\n loginError: string;\n\n @ViewChild('loginForm', { static: true })\n loginForm: NgForm;\n\n constructor(\n private loginService: LoginService,\n private loginTicket: LoginTicketService,\n private configuration: ConfigurationService,\n private route: ActivatedRoute,\n private router: Router,\n private changeDetector: ChangeDetectorRef,\n private logger: Logger,\n private argv: Argv) {\n }\n\n ngOnInit(): void {\n this.formInputs = this.route.snapshot.data['form'].inputs.filter(input => input.type !== 'submit');\n this.submit = this.route.snapshot.data['form'].inputs.find(input => input.type === 'submit');\n this.rememberLogin = this.configuration.RememberLogin;\n // navigating to this component means our stored credentials were not valid, clear them\n this.loginTicket.clear();\n this.logger.log(`Login | Logging in using ${this.formInputs.map(input => input.input_id).join(', ')}`);\n }\n\n ngAfterViewInit(): void {\n this.loginForm.statusChanges.subscribe(() => this.changeDetector.markForCheck());\n }\n\n login(): void {\n const form = new LoginForm();\n 
form.platform_id = this.argv['program_platform'];\n form.program_id = this.argv['program_id'];\n form.version = this.argv['program_version'];\n form.inputs = Object.keys(this.loginForm.value).map(inputId => {\n const value = new FormInputValue();\n value.input_id = inputId;\n value.value = this.loginForm.value[inputId];\n return value;\n });\n this.loginError = undefined;\n // clearing password solves two problems here\n // * saves the user from having to do it manually in case it was wrong\n // * makes the form invalid, disables submit button and prevents spamming requests\n this.formInputs\n .filter(input => input.type === 'password')\n .forEach(input => this.loginForm.controls[input.input_id].reset());\n this.logger.log('Login | Attempting login');\n this.loginService.login(form).subscribe(loginResult => {\n if (loginResult.authentication_state === AuthenticationState.DONE) {\n if (!!loginResult.login_ticket) {\n this.logger.log('Login | Login successful');\n this.loginTicket.store(loginResult.login_ticket, this.rememberLogin);\n this.router.navigate(['/account']);\n } else {\n this.logger.error('Login | Login failed');\n if (!!loginResult.error_message) {\n this.loginError = loginResult.error_message;\n } else {\n this.loginError = 'We couldn\\'t log you in with what you just entered. Please try again.';\n }\n }\n } else {\n this.logger.error(`Login | Unsupported authentication state ${loginResult.authentication_state}`);\n }\n this.changeDetector.markForCheck();\n });\n }\n}\n"
},
{
"alpha_fraction": 0.7017082571983337,
"alphanum_fraction": 0.750328540802002,
"avg_line_length": 33.54545593261719,
"blob_id": "4667352457cee23f37cd222245daa9ac4053f056",
"content_id": "bc693aca01053f597d0b1a5cb324c609ca1a3846",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 761,
"license_type": "permissive",
"max_line_length": 136,
"num_lines": 22,
"path": "/src/native/LauncherShared.h",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "\n#ifndef LauncherShared_h__\n#define LauncherShared_h__\n\n#include <vector>\n#include <string>\n#include <cstdint>\n\nbool EncryptString(char const* string, std::vector<uint8_t>* output);\nbool DecryptString(std::vector<uint8_t> const& encryptedString, std::string* output);\nbool StoreLoginTicket(char const* portal, char const* loginTicket, char const* gameAccount);\nbool LaunchGameWithLogin(char const* gameInstallDir, char const* version);\n\n// unencrypted keys\n#define PORTAL_ADDRESS \"CONNECTION_STRING\"\n#define GAME_ACCOUNT_NAME \"GAME_ACCOUNT\"\n\n// encrypted keys\n#define LOGIN_TICKET \"WEB_TOKEN\"\n\nstatic constexpr uint8_t Entropy[] = { 0xC8, 0x76, 0xF4, 0xAE, 0x4C, 0x95, 0x2E, 0xFE, 0xF2, 0xFA, 0x0F, 0x54, 0x19, 0xC0, 0x9C, 0x43 };\n\n#endif // LauncherShared_h__\n"
},
{
"alpha_fraction": 0.6704907417297363,
"alphanum_fraction": 0.6736775040626526,
"avg_line_length": 37.90082550048828,
"blob_id": "b2ca9c482fd7cf06285c744c0cd2bbad32808e3f",
"content_id": "c210bbe979649b7e8dfd0959ea882a7f44a46d44",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 4707,
"license_type": "permissive",
"max_line_length": 149,
"num_lines": 121,
"path": "/src/app/account/account.component.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { ChangeDetectionStrategy, ChangeDetectorRef, Component, OnDestroy, OnInit } from '@angular/core';\nimport { ActivatedRoute } from '@angular/router';\nimport { timer, Subject } from 'rxjs';\nimport { takeUntil } from 'rxjs/operators';\n\nimport { LaunchArgs } from '../../ipc/launch-args';\nimport { Logger } from '../../desktop-app/logger';\nimport { ConfigurationService } from '../configuration.service';\nimport { LoginTicketService } from '../login-ticket.service';\nimport { GameAccountInfo, GameAccountList } from './game-account-info';\n\ninterface GameVersion {\n display_name: string;\n value: 'Retail' | 'Classic' | 'ClassicEra';\n}\n\nconst ExpansionNames = [\n 'World of Warcraft',\n 'The Burning Crusade',\n 'Wrath of the Lich King',\n 'Cataclysm',\n 'Mists of Pandaria',\n 'Warlords of Draenor',\n 'Legion',\n 'Battle for Azeroth',\n 'Shadowlands',\n 'Dragonflight'\n];\n\nconst NO_GAME_ACCOUNT: GameAccountInfo = {\n display_name: 'No game accounts',\n expansion: 0,\n is_suspended: true,\n is_banned: true,\n suspension_expires: 0,\n suspension_reason: ''\n};\n\n@Component({\n selector: 'tc-account',\n templateUrl: './account.component.html',\n styleUrls: ['./account.component.css'],\n changeDetection: ChangeDetectionStrategy.OnPush\n})\nexport class AccountComponent implements OnInit, OnDestroy {\n\n readonly gameVersions: GameVersion[] = [\n { display_name: 'World of Warcraft', value: 'Retail' },\n { display_name: 'Wrath of the Lich King Classic', value: 'Classic' }\n ];\n selectedGameVersion = this.gameVersions[0];\n gameAccounts: GameAccountInfo[] = [];\n selectedGameAccount: GameAccountInfo;\n noGameAccounts: boolean;\n hasRecentlyLaunched: boolean;\n\n private destroyed = new Subject<void>();\n\n constructor(\n private route: ActivatedRoute,\n private loginTicket: LoginTicketService,\n private configuration: ConfigurationService,\n private changeDetector: ChangeDetectorRef,\n private logger: Logger) {\n }\n\n ngOnInit(): void {\n const 
lastGameVersion = this.gameVersions.find(gameVersion => gameVersion.value === this.configuration.LastGameVersion);\n this.selectedGameVersion = lastGameVersion || this.gameVersions[0];\n const gameAccounts = <GameAccountList>this.route.snapshot.data['gameAccounts'];\n this.gameAccounts = gameAccounts.game_accounts || [];\n this.noGameAccounts = this.gameAccounts.length === 0;\n const lastAccount = this.gameAccounts.find(gameAccount => gameAccount.display_name === this.configuration.LastGameAccount);\n if (this.gameAccounts.length === 0) {\n this.gameAccounts = [NO_GAME_ACCOUNT];\n }\n this.selectedGameAccount = lastAccount || this.gameAccounts[0];\n window.electronAPI.login();\n this.logger.log('Account | Initialized account view', this.gameAccounts.map(gameAccount => gameAccount.display_name),\n `last selected game account: ${lastAccount ? lastAccount.display_name : 'none'}`);\n }\n\n ngOnDestroy(): void {\n this.destroyed.next();\n this.destroyed.complete();\n }\n\n launch(): void {\n const launchArgs: LaunchArgs = {\n Portal: this.route.snapshot.data['portal'],\n LoginTicket: this.loginTicket.getTicket(),\n GameAccount: this.selectedGameAccount.display_name,\n GameVersion: this.selectedGameVersion.value\n };\n window.electronAPI.launchGame(launchArgs);\n this.configuration.LastGameVersion = this.selectedGameVersion.value;\n this.configuration.LastGameAccount = this.selectedGameAccount.display_name;\n this.logger.log(`Account | Launching game ${launchArgs.GameVersion} with account ${launchArgs.GameAccount} and portal ${launchArgs.Portal}`);\n this.hasRecentlyLaunched = true;\n timer(5000).pipe(takeUntil(this.destroyed)).subscribe(() => {\n this.logger.log('Account | Re-enabling launch button');\n this.hasRecentlyLaunched = false;\n this.changeDetector.markForCheck();\n });\n }\n\n launchEnabled(): boolean {\n return !this.hasRecentlyLaunched && !this.selectedGameAccount.is_suspended && !this.selectedGameAccount.is_banned;\n }\n\n 
formatExpansionName(expansionIndex: number): string {\n return ExpansionNames[expansionIndex];\n }\n\n formatBanExpirationTime(expirationUnixTimestamp: number): string {\n const options: Intl.DateTimeFormatOptions = {\n day: 'numeric', month: 'numeric', year: 'numeric', hour: 'numeric', minute: 'numeric'\n };\n return new Date(expirationUnixTimestamp * 1000).toLocaleString([], options);\n }\n}\n"
},
{
"alpha_fraction": 0.6406429409980774,
"alphanum_fraction": 0.6406429409980774,
"avg_line_length": 31.259260177612305,
"blob_id": "d2996ca9e84781ef91f27090bc290612aa0cf1fb",
"content_id": "654529e4deb62d6e222282fc628720e1097b25df",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 871,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 27,
"path": "/src/app/login/login-form.resolver.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\nimport { Router } from '@angular/router';\nimport { EMPTY, Observable } from 'rxjs';\nimport { catchError } from 'rxjs/operators';\n\nimport { Logger } from '../../desktop-app/logger';\nimport { FormInputs } from './form-inputs';\nimport { LoginService } from './login.service';\n\n@Injectable()\nexport class LoginFormResolver {\n\n constructor(\n private loginService: LoginService,\n private router: Router,\n private logger: Logger) {\n }\n\n public resolve(): Observable<FormInputs> {\n this.logger.log('Login | Retrieving login form fields');\n return this.loginService.getForm().pipe(catchError(error => {\n this.logger.error('Login | Failed to retrieve login form!', error);\n this.router.navigate(['/initialization-error']);\n return EMPTY;\n }));\n }\n}\n"
},
{
"alpha_fraction": 0.5990654230117798,
"alphanum_fraction": 0.5990654230117798,
"avg_line_length": 28.72222137451172,
"blob_id": "e60720764ba3e317e1379ef3156b3baeb2565d57",
"content_id": "09f1ca58aed4e8270f70e83a40c14afbe8ad7ce6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1070,
"license_type": "permissive",
"max_line_length": 102,
"num_lines": 36,
"path": "/src/app/main/main.component.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { ChangeDetectionStrategy, ChangeDetectorRef, Component, NgZone, OnInit } from '@angular/core';\nimport { Router } from '@angular/router';\n\nimport { LoginTicketService } from '../login-ticket.service';\n\n@Component({\n selector: 'tc-main',\n templateUrl: './main.component.html',\n changeDetection: ChangeDetectionStrategy.OnPush\n})\nexport class MainComponent implements OnInit {\n\n openSettings = false;\n\n constructor(\n private zone: NgZone,\n private changeDetector: ChangeDetectorRef,\n private router: Router,\n private loginTicket: LoginTicketService) {\n }\n\n ngOnInit(): void {\n window.electronAPI.onOpenSettingsRequest(() => {\n this.zone.runGuarded(() => {\n this.openSettings = true;\n this.changeDetector.markForCheck();\n });\n });\n window.electronAPI.onLogoutRequest(() => {\n this.zone.runGuarded(() => {\n this.loginTicket.clear();\n this.router.navigate(['/login']);\n });\n });\n }\n}\n"
},
{
"alpha_fraction": 0.7460992932319641,
"alphanum_fraction": 0.7489361763000488,
"avg_line_length": 38.16666793823242,
"blob_id": "408b0cd413809594a5d335d7c35ac8996fdf5717",
"content_id": "7f2b33a35dff82e51864a9ee88d846752b0272d8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 705,
"license_type": "permissive",
"max_line_length": 151,
"num_lines": 18,
"path": "/src/native/CDSACrypt.h",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "#ifndef CDSACrypt_h__\n#define CDSACrypt_h__\n\n#include <Security/cssm.h>\n\nnamespace CDSA\n{\n CSSM_RETURN CspAttach(CSSM_CSP_HANDLE *cspHandle);\n CSSM_RETURN CspDetach(CSSM_CSP_HANDLE cspHandle);\n\n CSSM_RETURN DeriveKey(CSSM_CSP_HANDLE cspHandle, CSSM_DATA rawKey, CSSM_DATA salt, CSSM_ALGORITHMS keyAlg, uint32 keySizeInBits, CSSM_KEY_PTR key);\n CSSM_RETURN FreeKey(CSSM_CSP_HANDLE cspHandle, CSSM_KEY_PTR key);\n\n CSSM_RETURN Encrypt(CSSM_CSP_HANDLE cspHandle, CSSM_KEY const* key, CSSM_DATA const* plainText, CSSM_DATA_PTR cipherText);\n CSSM_RETURN Decrypt(CSSM_CSP_HANDLE cspHandle, CSSM_KEY const* key, CSSM_DATA const* cipherText, CSSM_DATA_PTR plainText);\n}\n\n#endif // CCDSACrypt_h__\n"
},
{
"alpha_fraction": 0.6580246686935425,
"alphanum_fraction": 0.6580246686935425,
"avg_line_length": 32.75,
"blob_id": "3701ec5573f95fda4ce447ea7bd58740212f1501",
"content_id": "dfc678807e317f44b15cb38a006971cae5eded33",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 810,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 24,
"path": "/src/app/account/game-account.resolver.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\nimport { Observable, of } from 'rxjs';\nimport { catchError } from 'rxjs/operators';\n\nimport { Logger } from '../../desktop-app/logger';\nimport { AccountService } from './account.service';\nimport { GameAccountList } from './game-account-info';\n\n@Injectable()\nexport class GameAccountResolver {\n\n constructor(\n private accountService: AccountService,\n private logger: Logger) {\n }\n\n resolve(): Observable<GameAccountList> {\n this.logger.log('Account | Retrieving game account list');\n return this.accountService.getGameAccounts().pipe(catchError(error => {\n this.logger.error('Account | Failed to retrieve game account list!', error);\n return of<GameAccountList>({ game_accounts: [] });\n }));\n }\n}\n"
},
{
"alpha_fraction": 0.6686373949050903,
"alphanum_fraction": 0.6734234094619751,
"avg_line_length": 35.23469543457031,
"blob_id": "f275b5a6caeacc0dbd736cf9c9d5cea73aa4e808",
"content_id": "336baae32e9684155e673f5b964fbddf41cae30f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 3552,
"license_type": "permissive",
"max_line_length": 186,
"num_lines": 98,
"path": "/src/native/Win32Launcher.cpp",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "\n#include \"LauncherShared.h\"\n\n#include <memory>\n#include <sstream>\n#include <string>\n\n#include <Windows.h>\n#include <dpapi.h>\n\n#define UNIQUE_DELETER(type, deleter) \\\n struct deleter##Deleter \\\n { \\\n using pointer = type; \\\n void operator()(type handle) { deleter(handle); } \\\n };\n\nUNIQUE_DELETER(HKEY, RegCloseKey)\nUNIQUE_DELETER(HLOCAL, LocalFree)\n\nbool EncryptString(char const* string, std::vector<uint8_t>* output)\n{\n output->clear();\n\n DATA_BLOB inputBlob{ (DWORD)strlen(string), (BYTE*)string };\n DATA_BLOB entropy{ std::extent_v<decltype(Entropy)>, (BYTE*)Entropy };\n DATA_BLOB outputBlob{};\n\n if (!CryptProtectData(&inputBlob, L\"TcLauncher\", &entropy, nullptr, nullptr, CRYPTPROTECT_UI_FORBIDDEN, &outputBlob))\n return false;\n\n std::unique_ptr<HLOCAL, LocalFreeDeleter> outputDeleter(outputBlob.pbData);\n std::copy(outputBlob.pbData, outputBlob.pbData + outputBlob.cbData, std::back_inserter(*output));\n return true;\n}\n\nbool DecryptString(std::vector<uint8_t> const& encryptedString, std::string* output)\n{\n output->clear();\n\n DATA_BLOB inputBlob{ (DWORD)encryptedString.size(), (BYTE*)encryptedString.data() };\n DATA_BLOB entropy{ std::extent_v<decltype(Entropy)>, (BYTE*)Entropy };\n DATA_BLOB outpubBlob{};\n\n if (!CryptUnprotectData(&inputBlob, nullptr, &entropy, nullptr, nullptr, CRYPTPROTECT_UI_FORBIDDEN, &outpubBlob))\n return false;\n\n std::unique_ptr<HLOCAL, LocalFreeDeleter> outputDeleter(outpubBlob.pbData);\n output->assign(reinterpret_cast<char const*>(outpubBlob.pbData), outpubBlob.cbData);\n return true;\n}\n\nbool StoreLoginTicket(char const* portal, char const* loginTicket, char const* gameAccount)\n{\n HKEY launcherKey;\n if (RegCreateKeyExA(HKEY_CURRENT_USER, R\"(Software\\Custom Game Server Dev\\Battle.net\\Launch Options\\WoW)\", 0, nullptr, 0, KEY_WRITE, nullptr, &launcherKey, nullptr) != ERROR_SUCCESS)\n return false;\n\n std::unique_ptr<HKEY, RegCloseKeyDeleter> handle(launcherKey);\n\n if 
(RegSetValueExA(launcherKey, PORTAL_ADDRESS, 0, REG_SZ, reinterpret_cast<BYTE const*>(portal), strlen(portal) + 1) != ERROR_SUCCESS)\n return false;\n\n if (RegSetValueExA(launcherKey, GAME_ACCOUNT_NAME, 0, REG_SZ, reinterpret_cast<BYTE const*>(gameAccount), strlen(gameAccount) + 1) != ERROR_SUCCESS)\n return false;\n\n std::vector<uint8_t> encryptedTicket;\n if (!EncryptString(loginTicket, &encryptedTicket))\n return false;\n\n if (RegSetValueExA(launcherKey, LOGIN_TICKET, 0, REG_BINARY, encryptedTicket.data(), encryptedTicket.size()) != ERROR_SUCCESS)\n return false;\n\n return true;\n}\n\nbool LaunchGameWithLogin(char const* gameInstallDir, char const* version)\n{\n char commandLine[32768] = {};\n strcat(commandLine, \"\\\"\");\n strcat(commandLine, gameInstallDir);\n strcat(commandLine, \"\\\\\");\n strcat(commandLine, \"Arctium WoW Launcher.exe\\\"\");\n if (version)\n {\n strcat(commandLine, \" --version \");\n strcat(commandLine, version);\n }\n strcat(commandLine, \" -launcherlogin -config Config2.wtf\");\n\n STARTUPINFOA startupInfo{sizeof(STARTUPINFOA)};\n PROCESS_INFORMATION processInfo;\n if (!CreateProcessA(nullptr, commandLine, nullptr, nullptr, FALSE, 0, nullptr, gameInstallDir, &startupInfo, &processInfo))\n return false;\n\n CloseHandle(processInfo.hProcess);\n CloseHandle(processInfo.hThread);\n return true;\n}\n"
},
{
"alpha_fraction": 0.5944982171058655,
"alphanum_fraction": 0.5944982171058655,
"avg_line_length": 36.75,
"blob_id": "7b10e670ff096139c0e27043991a20a3e75f4b34",
"content_id": "36ff60aa7cdbd6a49e229bef6ac4f030f26cb9ff",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1963,
"license_type": "permissive",
"max_line_length": 122,
"num_lines": 52,
"path": "/src/app/loader/loader.component.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { ChangeDetectionStrategy, Component, OnInit } from '@angular/core';\nimport { Router } from '@angular/router';\nimport { Observable, of } from 'rxjs';\nimport { catchError, mergeMap } from 'rxjs/operators';\n\nimport { Logger } from '../../desktop-app/logger';\nimport { LoginTicketService } from '../login-ticket.service';\nimport { LoginService } from '../login/login.service';\n\n@Component({\n selector: 'tc-loader',\n templateUrl: './loader.component.html',\n styleUrls: ['./loader.component.css'],\n changeDetection: ChangeDetectionStrategy.OnPush\n})\nexport class LoaderComponent implements OnInit {\n\n constructor(\n private loginTicket: LoginTicketService,\n private loginService: LoginService,\n private router: Router,\n private logger: Logger) {\n }\n\n ngOnInit() {\n this.getInitialRoute().subscribe(initialRoute => {\n this.logger.log(`Loader | Resolved initial route to ${initialRoute}`);\n this.router.navigate([initialRoute]);\n });\n }\n\n private getInitialRoute(): Observable<string> {\n if (this.loginTicket.shouldAttemptRememberedLogin()) {\n return this.loginTicket.restoreSavedTicket().pipe(\n mergeMap(() => {\n this.logger.log('Loader | Found remembered login');\n return this.loginTicket.refresh();\n }),\n catchError(() => {\n this.logger.error('Loader | Error checking remembered login');\n return of({ is_expired: true });\n }),\n mergeMap(loginTicketStatus => {\n this.logger.log(`Loader | Remembered login status: ${loginTicketStatus.is_expired ? 'in' : ''}valid`);\n return of(loginTicketStatus.is_expired ? '/login' : '/account');\n }));\n }\n\n this.logger.log('Loader | Remembered login not found');\n return of('/login');\n }\n}\n"
},
{
"alpha_fraction": 0.6011576652526855,
"alphanum_fraction": 0.6074972152709961,
"avg_line_length": 34.568626403808594,
"blob_id": "63eeaef73d5e49c74f5e0d1ed8bfecba806d9924",
"content_id": "cf806672999c632860f802aac21829a71eee47cf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 3628,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 102,
"path": "/src/app/settings-dialog/settings-dialog.component.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { animate, state, style, transition, trigger } from '@angular/animations';\nimport {\n ChangeDetectionStrategy,\n ChangeDetectorRef,\n Component,\n EventEmitter,\n Input,\n NgZone,\n OnChanges,\n Output,\n SimpleChanges,\n ViewChild\n} from '@angular/core';\nimport { NgForm } from '@angular/forms';\n\nimport { Logger } from '../../desktop-app/logger';\nimport { ConfigurationService } from '../configuration.service';\n\n@Component({\n selector: 'tc-settings-dialog',\n templateUrl: './settings-dialog.component.html',\n styleUrls: ['./settings-dialog.component.css'],\n changeDetection: ChangeDetectionStrategy.OnPush,\n animations: [\n trigger('scaleIn', [\n state('false', style({ visibility: 'hidden', transform: 'translate(-50%, -50%) scale(0.95)' })),\n state('true', style({ visibility: 'visible', transform: 'translate(-50%, -50%) scale(1)' })),\n transition('false => true', animate('100ms ease-out')),\n transition('true => false', animate('100ms ease-in')),\n ])\n ]\n})\nexport class SettingsDialogComponent implements OnChanges {\n\n @Input()\n display: boolean;\n\n @Output()\n displayChange = new EventEmitter<boolean>();\n\n @ViewChild('settingsForm', { static: true })\n settingsForm: NgForm;\n\n displayMask = false;\n\n constructor(\n private configurationService: ConfigurationService,\n private zone: NgZone,\n private changeDetector: ChangeDetectorRef,\n private logger: Logger) {\n }\n\n ngOnChanges(changes: SimpleChanges): void {\n if ('display' in changes) {\n if (changes['display'].currentValue) {\n this.logger.log('Settings | Opening configuration form');\n this.settingsForm.setValue({\n loginServerUrl: this.configurationService.LoginServerUrl,\n gameInstallDir: this.configurationService.WowInstallDir\n });\n this.displayMask = true;\n } else {\n setTimeout(() => {\n this.logger.log('Settings | Hiding dialog background');\n this.displayMask = false;\n this.changeDetector.markForCheck();\n }, 100);\n }\n }\n }\n\n saveConfiguration(): void {\n 
const { loginServerUrl } = this.settingsForm.value;\n // disabled controls don't write to .value\n const gameInstallDir = this.settingsForm.controls['gameInstallDir'].value;\n\n this.logger.log(`Settings | Saving new configuration: { ` +\n `LoginServerUrl: '${loginServerUrl}', ` +\n `WowInstallDir: '${gameInstallDir}' }`);\n\n this.configurationService.LoginServerUrl = loginServerUrl;\n this.configurationService.WowInstallDir = gameInstallDir;\n this.displayChange.emit(false);\n }\n\n close(): void {\n this.logger.log('Settings | Closing configuration form without saving');\n this.displayChange.emit(false);\n }\n\n openDirectoryPicker(): void {\n this.logger.log('Settings | Opening directory picker for WowInstallDir');\n window.electronAPI.selectDirectory().then(result => {\n if (result.filePaths != undefined && !result.canceled) {\n this.logger.log(`Settings | New WowInstallDir selected: ${result.filePaths[0]}.`);\n this.zone.runGuarded(() => this.settingsForm.controls['gameInstallDir'].setValue(result.filePaths[0]));\n } else {\n this.logger.log('Settings | Closed directory picker without selection');\n }\n });\n }\n}\n"
},
{
"alpha_fraction": 0.712195098400116,
"alphanum_fraction": 0.712195098400116,
"avg_line_length": 28.285715103149414,
"blob_id": "cec5223916cd712e0745107654b53c2730bffd4d",
"content_id": "05c38145c9697639c32d91caa37df63b882197df",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 205,
"license_type": "permissive",
"max_line_length": 57,
"num_lines": 7,
"path": "/src/ipc/configuration.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export interface Configuration {\n WowInstallDir: string;\n LoginServerUrl: string;\n RememberLogin: boolean;\n LastGameAccount: string;\n LastGameVersion: 'Retail' | 'Classic' | 'ClassicEra';\n}\n"
},
{
"alpha_fraction": 0.6936274766921997,
"alphanum_fraction": 0.6936274766921997,
"avg_line_length": 24.5,
"blob_id": "7e711caeed3d2dd43e9d31ddcebb98b9403adb3e",
"content_id": "21ecf9e2be13e79c9ef2520ab17ffeab2eb76f95",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 408,
"license_type": "permissive",
"max_line_length": 63,
"num_lines": 16,
"path": "/src/app/account/account.service.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { HttpClient } from '@angular/common/http';\nimport { Injectable } from '@angular/core';\nimport { Observable } from 'rxjs';\n\nimport { GameAccountList } from './game-account-info';\n\n@Injectable()\nexport class AccountService {\n\n constructor(private http: HttpClient) {\n }\n\n getGameAccounts(): Observable<GameAccountList> {\n return this.http.get<GameAccountList>('gameAccounts/');\n }\n}\n"
},
{
"alpha_fraction": 0.7651821970939636,
"alphanum_fraction": 0.7651821970939636,
"avg_line_length": 48.400001525878906,
"blob_id": "ed3b49c744fb2e3f9f2355449eb522fe4ac24383",
"content_id": "a8e784b372f39e3c41ae4b35a011bdd15d2e2e0c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": true,
"language": "TypeScript",
"length_bytes": 247,
"license_type": "permissive",
"max_line_length": 123,
"num_lines": 5,
"path": "/src/desktop-app/tc_launcher.d.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "interface Launcher {\n launchGame(gameInstallDir: string, portal: string, loginTicket: string, gameAccount: string, version: string): boolean;\n encryptString(inputString: string): Buffer;\n decryptString(encryptedString: Buffer): string;\n}\n"
},
{
"alpha_fraction": 0.5799256563186646,
"alphanum_fraction": 0.5799256563186646,
"avg_line_length": 15.8125,
"blob_id": "fc43612f829b84d54bdbaf9665a93237464a232e",
"content_id": "3c64d7ff500c35ba02fcfdff8b4f92d93d78802f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 269,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 16,
"path": "/src/app/argv.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export class Argv {\n [key: string]: any;\n}\n\nlet argv: Argv = {};\n\nexport function argvInitializer() {\n return function () {\n return window.electronAPI.getArgv()\n .then(a => argv = a);\n };\n}\n\nexport function argvFactory() {\n return argv;\n}\n"
},
{
"alpha_fraction": 0.6528435945510864,
"alphanum_fraction": 0.6528435945510864,
"avg_line_length": 37.3636360168457,
"blob_id": "4da3b7e8fbffd20d72a856549a3af7807ea093a4",
"content_id": "f540f9224d86164b49f932a28c47626ab235b144",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 1688,
"license_type": "permissive",
"max_line_length": 124,
"num_lines": 44,
"path": "/src/desktop-app/renderer-preload.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { contextBridge, ipcRenderer } from 'electron';\nimport { ElectronApi } from '../ipc/electron-api';\nimport { Configuration } from '../ipc/configuration';\nimport { CryptoResult } from '../ipc/crypto-result';\nimport { ILogEvent } from '../ipc/log-event';\nimport { LaunchArgs } from '../ipc/launch-args';\n\nconst api: ElectronApi = {\n getArgv: function(): Promise<{ [p: string]: any }> {\n return ipcRenderer.invoke('get-argv');\n },\n getConfiguration: function(): Promise<Configuration> {\n return ipcRenderer.invoke('get-configuration');\n },\n setConfiguration: function<Key extends keyof Configuration>(change: [Key, Configuration[Key]]): Promise<Configuration> {\n return ipcRenderer.invoke('configuration', change);\n },\n encrypt: function(data: string): Promise<CryptoResult> {\n return ipcRenderer.invoke('encrypt', data);\n },\n decrypt: function(data: string): Promise<CryptoResult> {\n return ipcRenderer.invoke('decrypt', data);\n },\n log: function(event: ILogEvent): void {\n ipcRenderer.send('logger', event);\n },\n login: function() {\n ipcRenderer.send('login');\n },\n launchGame: function(args: LaunchArgs) {\n ipcRenderer.send('launch-game', args);\n },\n selectDirectory: function(): Promise<{ filePaths: string[]; canceled: boolean }> {\n return ipcRenderer.invoke('select-directory');\n },\n onOpenSettingsRequest: function(callback: () => void) {\n ipcRenderer.on('open-settings', callback);\n },\n onLogoutRequest: function(callback: () => void) {\n ipcRenderer.on('logout', callback);\n }\n};\n\ncontextBridge.exposeInMainWorld('electronAPI', api);\n"
},
{
"alpha_fraction": 0.6730769276618958,
"alphanum_fraction": 0.6730769276618958,
"avg_line_length": 25,
"blob_id": "8aeb2d84b573e937ce49ad6c7eb2182372de9813",
"content_id": "728e11bbde6cb7d9865c41b24678918bf15b4e00",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 156,
"license_type": "permissive",
"max_line_length": 53,
"num_lines": 6,
"path": "/src/ipc/launch-args.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "export interface LaunchArgs {\n Portal: string;\n LoginTicket: string;\n GameAccount: string;\n GameVersion: 'Retail' | 'Classic' | 'ClassicEra';\n}\n"
},
{
"alpha_fraction": 0.5739160180091858,
"alphanum_fraction": 0.578013002872467,
"avg_line_length": 32.28409194946289,
"blob_id": "82adb8b9c48b4f6900c77396456e31401de4557e",
"content_id": "26fbc483d3367d0557d61d8b2a4d330901b8bb0b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 2929,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 88,
"path": "/src/app/login-ticket.service.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { HttpClient } from '@angular/common/http';\nimport { Injectable, NgZone, OnDestroy } from '@angular/core';\nimport { Observable, Subject, timer } from 'rxjs';\nimport { mergeMap, takeUntil } from 'rxjs/operators';\n\nimport { ConfigurationService } from './configuration.service';\nimport { LoginRefreshResult } from './login-refresh-result';\n\n@Injectable()\nexport class LoginTicketService implements OnDestroy {\n\n private ticketRefreshEndSignal = new Subject<void>();\n\n static getTicket(): string {\n return sessionStorage.getItem('ticket');\n }\n\n constructor(\n private configuration: ConfigurationService,\n private zone: NgZone,\n private http: HttpClient) {\n }\n\n ngOnDestroy(): void {\n this.ticketRefreshEndSignal.next();\n this.ticketRefreshEndSignal.complete();\n }\n\n getTicket(): string {\n return LoginTicketService.getTicket();\n }\n\n store(loginTicket: string, rememberLogin: boolean): void {\n this.configuration.RememberLogin = rememberLogin;\n sessionStorage.setItem('ticket', loginTicket);\n if (rememberLogin) {\n window.electronAPI.encrypt(loginTicket).then(result => {\n if (result.success) {\n localStorage.setItem('ticket', result.output);\n }\n });\n }\n this.scheduleNextRefresh(new Date().getTime() / 1000 + 900);\n }\n\n clear(): void {\n sessionStorage.removeItem('ticket');\n localStorage.removeItem('ticket');\n this.ticketRefreshEndSignal.next();\n }\n\n refresh(): Observable<LoginRefreshResult> {\n return this.http.post<LoginRefreshResult>('refreshLoginTicket/', {});\n }\n\n private scheduleNextRefresh(newLoginTicketExpiry: number): void {\n timer((newLoginTicketExpiry * 1000 - new Date().getTime()) / 2).pipe(\n takeUntil(this.ticketRefreshEndSignal),\n mergeMap(() => this.refresh()))\n .subscribe(r => {\n if (!r.is_expired) {\n this.scheduleNextRefresh(r.login_ticket_expiry);\n } else {\n this.ticketRefreshEndSignal.next();\n }\n });\n }\n\n shouldAttemptRememberedLogin(): boolean {\n return this.configuration.RememberLogin 
&& !!localStorage.getItem('ticket');\n }\n\n restoreSavedTicket(): Observable<void> {\n return new Observable<void>(subscriber => {\n window.electronAPI.decrypt(localStorage.getItem('ticket')).then(result => {\n this.zone.runGuarded(() => {\n if (result.success) {\n sessionStorage.setItem('ticket', result.output);\n subscriber.next();\n subscriber.complete();\n } else {\n subscriber.error();\n }\n });\n });\n });\n }\n}\n"
},
{
"alpha_fraction": 0.6549131274223328,
"alphanum_fraction": 0.6676136255264282,
"avg_line_length": 27.495237350463867,
"blob_id": "aca1b0f0d1198241cb8bbee3f1d9abb0fffccc2f",
"content_id": "86ee183b16de03dfdc6e21b27cb3b6ae2ebe4c15",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 5984,
"license_type": "permissive",
"max_line_length": 177,
"num_lines": 210,
"path": "/src/native/CDSACrypt.cpp",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "#include \"CDSACrypt.h\"\n#include <cstdlib>\n#include <cstring>\n\nstatic CSSM_VERSION vers = { 2, 0 };\nstatic CSSM_GUID const testGuid = { 0xFADE, 0, 0, { 1, 2, 3, 4, 5, 6, 7, 0 } };\n\n/*\n * Standard app-level memory functions required by CDSA.\n */\nvoid* appMalloc(CSSM_SIZE size, void* /*allocRef*/)\n{\n return malloc(size);\n}\n\nvoid appFree(void* mem_ptr, void* /*allocRef*/)\n{\n free(mem_ptr);\n}\n\nvoid* appRealloc(void* ptr, CSSM_SIZE size, void* /*allocRef*/)\n{\n return realloc(ptr, size);\n}\n\nvoid* appCalloc(uint32 num, CSSM_SIZE size, void* /*allocRef*/)\n{\n return calloc(num, size);\n}\n\nstatic CSSM_API_MEMORY_FUNCS memFuncs =\n{\n appMalloc,\n appFree,\n appRealloc,\n appCalloc,\n NULL\n};\n\n/*\n * Init CSSM; returns CSSM_FALSE on error. Reusable.\n */\nstatic CSSM_BOOL cssmInitd = CSSM_FALSE;\nCSSM_RETURN cssmStartup()\n{\n if (cssmInitd)\n return CSSM_OK;\n\n CSSM_PVC_MODE pvcPolicy = CSSM_PVC_NONE;\n CSSM_RETURN crtn = CSSM_Init(&vers, CSSM_PRIVILEGE_SCOPE_NONE, &testGuid, CSSM_KEY_HIERARCHY_NONE, &pvcPolicy, nullptr);\n if (crtn != CSSM_OK)\n return crtn;\n\n cssmInitd = CSSM_TRUE;\n return CSSM_OK;\n}\n\n/*\n * Initialize CDSA and attach to the CSP.\n */\nCSSM_RETURN CDSA::CspAttach(CSSM_CSP_HANDLE* cspHandle)\n{\n // initialize CDSA (this is reusable)\n CSSM_RETURN crtn = cssmStartup();\n if (crtn)\n return crtn;\n\n // Load the CSP bundle into this app's memory space\n crtn = CSSM_ModuleLoad(&gGuidAppleCSP, CSSM_KEY_HIERARCHY_NONE, nullptr, nullptr);\n if (crtn)\n return crtn;\n\n // obtain a handle which will be used to refer to the CSP\n CSSM_CSP_HANDLE cspHand;\n crtn = CSSM_ModuleAttach(&gGuidAppleCSP, &vers, &memFuncs, 0, CSSM_SERVICE_CSP, 0, CSSM_KEY_HIERARCHY_NONE, nullptr, 0, nullptr, &cspHand);\n if (crtn)\n return crtn;\n\n *cspHandle = cspHand;\n return CSSM_OK;\n}\n\n/*\n * Detach from CSP. 
To be called when app is finished with this library\n */\nCSSM_RETURN CDSA::CspDetach(CSSM_CSP_HANDLE cspHandle)\n{\n return CSSM_ModuleDetach(cspHandle);\n}\n\n/*\n * Create an encryption context for the specified key\n */\nstatic CSSM_RETURN genCryptHandle(CSSM_CSP_HANDLE cspHandle, CSSM_KEY const* key, CSSM_DATA const* ivPtr, CSSM_CC_HANDLE* ccHandle)\n{\n CSSM_CC_HANDLE ccHand = 0;\n CSSM_RETURN crtn = CSSM_CSP_CreateSymmetricContext(cspHandle, key->KeyHeader.AlgorithmId, CSSM_ALGMODE_CBCPadIV8, nullptr, key, ivPtr, CSSM_PADDING_PKCS7, nullptr, &ccHand);\n if (crtn)\n return crtn;\n\n *ccHandle = ccHand;\n return CSSM_OK;\n}\n\n/*\n * Derive a symmetric CSSM_KEY from the specified raw key material.\n */\nCSSM_RETURN CDSA::DeriveKey(CSSM_CSP_HANDLE cspHandle, CSSM_DATA rawKey, CSSM_DATA salt, CSSM_ALGORITHMS keyAlg, uint32 keySizeInBits, CSSM_KEY_PTR key)\n{\n CSSM_ACCESS_CREDENTIALS creds;\n CSSM_CC_HANDLE ccHand;\n\n memset(key, 0, sizeof(CSSM_KEY));\n memset(&creds, 0, sizeof(CSSM_ACCESS_CREDENTIALS));\n CSSM_RETURN crtn = CSSM_CSP_CreateDeriveKeyContext(cspHandle, CSSM_ALGID_PKCS5_PBKDF2, keyAlg, keySizeInBits, &creds, nullptr, 1000, &salt, nullptr, &ccHand);\n if (crtn)\n return crtn;\n\n CSSM_PKCS5_PBKDF2_PARAMS pbeParams;\n pbeParams.Passphrase = rawKey;\n pbeParams.PseudoRandomFunction = CSSM_PKCS5_PBKDF2_PRF_HMAC_SHA1;\n CSSM_DATA pbeData = { sizeof(pbeParams), (uint8*)&pbeParams };\n CSSM_DATA dummyLabel = { 8, (uint8*)\"someKey\" };\n\n crtn = CSSM_DeriveKey(ccHand, &pbeData, CSSM_KEYUSE_ANY, CSSM_KEYATTR_RETURN_DATA | CSSM_KEYATTR_EXTRACTABLE, &dummyLabel, nullptr, key);\n CSSM_DeleteContext(ccHand); // ignore error here\n return crtn;\n}\n\n/*\n * Free resources allocated in cdsaDeriveKey().\n */\nCSSM_RETURN CDSA::FreeKey(CSSM_CSP_HANDLE cspHandle, CSSM_KEY_PTR key)\n{\n return CSSM_FreeKey(cspHandle, nullptr, key, CSSM_FALSE);\n}\n\n/*\n * Init vector\n */\nstatic uint8 iv[16] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };\nstatic CSSM_DATA 
const ivCommon = { 16, iv };\n\n/*\n * Encrypt\n */\nCSSM_RETURN CDSA::Encrypt(CSSM_CSP_HANDLE cspHandle, CSSM_KEY const* key, CSSM_DATA const* plainText, CSSM_DATA_PTR cipherText)\n{\n CSSM_CC_HANDLE ccHandle;\n CSSM_RETURN crtn = genCryptHandle(cspHandle, key, &ivCommon, &ccHandle);\n if (crtn)\n return crtn;\n\n cipherText->Length = 0;\n cipherText->Data = nullptr;\n CSSM_SIZE bytesEncrypted;\n CSSM_DATA remData = { 0, nullptr };\n crtn = CSSM_EncryptData(ccHandle, plainText, 1, cipherText, 1, &bytesEncrypted, &remData);\n CSSM_DeleteContext(ccHandle);\n if (crtn)\n return crtn;\n\n cipherText->Length = bytesEncrypted;\n if (remData.Length != 0)\n {\n // append remaining data to cipherText\n uint32 newLen = cipherText->Length + remData.Length;\n cipherText->Data = (uint8*)realloc(cipherText->Data, newLen);\n memmove(cipherText->Data + cipherText->Length, remData.Data, remData.Length);\n cipherText->Length = newLen;\n free(remData.Data);\n }\n\n return CSSM_OK;\n}\n\n/*\n * Decrypt\n */\nCSSM_RETURN CDSA::Decrypt(CSSM_CSP_HANDLE cspHandle, CSSM_KEY const* key, CSSM_DATA const* cipherText, CSSM_DATA_PTR plainText)\n{\n CSSM_RETURN crtn;\n CSSM_CC_HANDLE ccHandle;\n CSSM_DATA remData = { 0, nullptr };\n CSSM_SIZE bytesDecrypted;\n\n crtn = genCryptHandle(cspHandle, key, &ivCommon, &ccHandle);\n if (crtn)\n return crtn;\n\n plainText->Length = 0;\n plainText->Data = nullptr;\n crtn = CSSM_DecryptData(ccHandle, cipherText, 1, plainText, 1, &bytesDecrypted, &remData);\n CSSM_DeleteContext(ccHandle);\n if (crtn)\n return crtn;\n\n plainText->Length = bytesDecrypted;\n if (remData.Length != 0)\n {\n // append remaining data to plainText\n uint32 newLen = plainText->Length + remData.Length;\n plainText->Data = (uint8*)realloc(plainText->Data, newLen);\n memmove(plainText->Data + plainText->Length, remData.Data, remData.Length);\n plainText->Length = newLen;\n free(remData.Data);\n }\n\n return CSSM_OK;\n}\n"
},
{
"alpha_fraction": 0.6877740025520325,
"alphanum_fraction": 0.6877740025520325,
"avg_line_length": 30.106060028076172,
"blob_id": "76ff8c848156ecc62d175e6c873a7eae6a056f7c",
"content_id": "2a4697533c4ced1673baefde6d7b275dc5d852fd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 2053,
"license_type": "permissive",
"max_line_length": 81,
"num_lines": 66,
"path": "/src/app/configuration.service.ts",
"repo_name": "Shauren/tc-client-launcher",
"src_encoding": "UTF-8",
"text": "import { Injectable } from '@angular/core';\n\nimport { Configuration } from '../ipc/configuration';\n\nlet initialSettings: Configuration;\n\n@Injectable()\nexport class ConfigurationService {\n\n private settingsCache: Configuration = initialSettings;\n\n constructor() {\n }\n\n get WowInstallDir(): string {\n return this.settingsCache.WowInstallDir;\n }\n\n set WowInstallDir(wowInstallDir: string) {\n window.electronAPI.setConfiguration(['WowInstallDir', wowInstallDir])\n .then(newConfiguration => this.settingsCache = newConfiguration);\n }\n\n get LoginServerUrl(): string {\n return this.settingsCache.LoginServerUrl;\n }\n\n set LoginServerUrl(loginServerUrl: string) {\n window.electronAPI.setConfiguration(['LoginServerUrl', loginServerUrl])\n .then(newConfiguration => this.settingsCache = newConfiguration);\n }\n\n get RememberLogin(): boolean {\n return this.settingsCache.RememberLogin;\n }\n\n set RememberLogin(rememberLogin: boolean) {\n window.electronAPI.setConfiguration(['RememberLogin', rememberLogin])\n .then(newConfiguration => this.settingsCache = newConfiguration);\n }\n\n get LastGameAccount(): string {\n return this.settingsCache.LastGameAccount;\n }\n\n set LastGameAccount(lastGameAccount: string) {\n window.electronAPI.setConfiguration(['LastGameAccount', lastGameAccount])\n .then(newConfiguration => this.settingsCache = newConfiguration);\n }\n\n get LastGameVersion(): 'Retail' | 'Classic' | 'ClassicEra' {\n return this.settingsCache.LastGameVersion;\n }\n\n set LastGameVersion(lastGameAccount: 'Retail' | 'Classic' | 'ClassicEra') {\n window.electronAPI.setConfiguration(['LastGameVersion', lastGameAccount])\n .then(newConfiguration => this.settingsCache = newConfiguration);\n }\n}\n\nexport function configurationInitializer() {\n return function () {\n return window.electronAPI.getConfiguration()\n .then(value => initialSettings = value);\n };\n}\n"
}
] | 42 |
boboyejj/XY_Positioner_GUI
|
https://github.com/boboyejj/XY_Positioner_GUI
|
013edff657c96dbb46c56c782b05ce5358abc7fb
|
76f003ed42fffe0f2e5ffbe09eee918e1571e4e4
|
7fc48883e17d0d3673acb5c266d6a20ddbfa161c
|
refs/heads/master
| 2020-07-04T08:44:45.143982 | 2019-08-22T21:13:55 | 2019-08-22T21:13:55 | 202,227,140 | 0 | 0 | null | 2019-08-13T21:35:29 | 2019-05-14T21:14:15 | 2019-05-14T21:14:13 | null |
[
{
"alpha_fraction": 0.5998135209083557,
"alphanum_fraction": 0.6072718501091003,
"avg_line_length": 36.969024658203125,
"blob_id": "f95e7a3f44874dc7631c69333adf9b9054ffde0b",
"content_id": "f1aeb99bf7767c64497a35b000e9bb1cc5c52741",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8581,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 226,
"path": "/src/manual_move.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nManual Move GUI\n\nThis is the GUI module for manual motor control.\n\nThe GUI handles motor movements and step distance settings.\n\nThis module contains a single class:\n - ManualMoveGUI(wx.Frame): GUI interfaced with the MotorDriver class, provides manual control for users.\n\nAuthors:\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\n\"\"\"\n\nfrom src.motor_driver import MotorDriver\nimport serial\nimport wx\n\n\nclass ManualMoveGUI(wx.Frame):\n \"\"\"\n GUI interfaced with the MotorDriver class that allows manual control over the motors.\n \"\"\"\n def __init__(self, parent, title, grid_step):\n \"\"\"\n :param parent: Parent frame invoking the LocationSelectGUI.\n :param title: Title for the GUI window.\n :param grid_step: The grid step size.\n \"\"\"\n wx.Frame.__init__(self, parent, title=title, size=(400, 400))\n\n # Variables\n try:\n self.motor = MotorDriver()\n except serial.SerialException:\n print(\"Error: Connection to C4 controller was not found\")\n self.Close()\n return\n\n # self.narda = NardaNavigator()\n self.currx = 0.0\n self.curry = 0.0\n self.distx = grid_step\n self.disty = grid_step\n self.errx = 0.0\n self.erry = 0.0\n self.stepx = int(self.distx / self.motor.step_unit)\n self.stepy = int(self.disty / self.motor.step_unit)\n self.fracx = self.distx / self.motor.step_unit - self.stepx\n self.fracy = self.disty / self.motor.step_unit - self.stepy\n\n # UI Elements\n up_id = 301\n down_id = 302\n left_id = 303\n right_id = 304\n self.up_btn = wx.Button(self, up_id, \"Up\")\n self.down_btn = wx.Button(self, down_id, \"Down\")\n self.left_btn = wx.Button(self, left_id, \"Left\")\n self.right_btn = wx.Button(self, right_id, \"Right\")\n self.coord_box = wx.StaticText(self, label=\"Coordinates:\\n[%.3f, %.3f]\" % (self.currx, self.curry))\n\n self.x_text = wx.StaticText(self, label=\"X Step Distance (in cm)\")\n self.x_tctrl = wx.TextCtrl(self, style=wx.TE_PROCESS_ENTER)\n 
self.x_tctrl.SetValue(str(self.distx))\n self.y_text = wx.StaticText(self, label=\"Y Step Distance (in cm)\")\n self.y_tctrl = wx.TextCtrl(self, style=wx.TE_PROCESS_ENTER)\n self.y_tctrl.SetValue(str(self.disty))\n\n # Bindings/Shortcuts\n self.Bind(wx.EVT_KEY_UP, self.OnKey) # Binding on \"up\" event to only register once\n self.Bind(wx.EVT_TEXT_ENTER, self.update_settings)\n self.Bind(wx.EVT_CHILD_FOCUS, self.update_settings)\n self.Bind(wx.EVT_BUTTON, self.move_up, self.up_btn)\n self.Bind(wx.EVT_BUTTON, self.move_down, self.down_btn)\n self.Bind(wx.EVT_BUTTON, self.move_left, self.left_btn)\n self.Bind(wx.EVT_BUTTON, self.move_right, self.right_btn)\n self.Bind(wx.EVT_CLOSE, self.OnClose)\n\n # Sizers/Layout, Static Lines, & Static Boxes\n self.mainh_sizer = wx.BoxSizer(wx.HORIZONTAL)\n self.btn_stbox = wx.StaticBoxSizer(wx.VERTICAL, self, \"Control Buttons\")\n self.btn_sizer = wx.GridSizer(rows=3, cols=3, hgap=0, vgap=0)\n self.settings_stbox = wx.StaticBoxSizer(wx.VERTICAL, self, \"Settings\")\n\n self.btn_sizer.AddStretchSpacer(prop=1)\n self.btn_sizer.Add(self.up_btn, proportion=1, flag=wx.EXPAND)\n self.btn_sizer.AddStretchSpacer(prop=1)\n self.btn_sizer.Add(self.left_btn, proportion=1, flag=wx.EXPAND)\n self.btn_sizer.AddStretchSpacer(prop=1)\n self.btn_sizer.Add(self.right_btn, proportion=1, flag=wx.EXPAND)\n self.btn_sizer.AddStretchSpacer(prop=1)\n self.btn_sizer.Add(self.down_btn, proportion=1, flag=wx.EXPAND)\n self.btn_sizer.AddStretchSpacer(prop=1)\n\n self.btn_stbox.Add(self.btn_sizer, proportion=5, flag=wx.EXPAND)\n self.btn_stbox.Add(self.coord_box, proportion=1, flag=wx.EXPAND)\n\n self.settings_stbox.Add(self.x_text, proportion=0, flag=wx.EXPAND)\n self.settings_stbox.Add(self.x_tctrl, proportion=0, flag=wx.EXPAND)\n self.settings_stbox.Add(self.y_text, proportion=0, flag=wx.EXPAND)\n self.settings_stbox.Add(self.y_tctrl, proportion=0, flag=wx.EXPAND)\n\n self.mainh_sizer.Add(self.btn_stbox, proportion=5, flag=wx.EXPAND | wx.ALL, border=5)\n 
self.mainh_sizer.Add(self.settings_stbox, proportion=2, flag=wx.EXPAND | wx.ALL, border=5)\n\n self.SetSizer(self.mainh_sizer)\n self.SetAutoLayout(True)\n self.mainh_sizer.Fit(self)\n self.Show(True)\n\n def move_up(self, e):\n \"\"\"\n Moves the NS probe forward (i.e. negative X direction of the phone).\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n self.curry -= self.disty\n self.erry -= self.fracy\n self.motor.reverse_motor_two(self.stepy + int(self.erry))\n self.erry -= int(self.erry)\n self.coord_box.SetLabel(\"Coordinates:\\n[%.3f, %.3f]\" % (self.currx, self.curry))\n\n def move_down(self, e):\n \"\"\"\n Moves the NS probe backward (i.e. positive X direction of the phone).\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n self.curry += self.disty\n self.erry += self.fracy\n self.motor.forward_motor_two(self.stepy + int(self.erry))\n self.erry -= int(self.erry)\n self.coord_box.SetLabel(\"Coordinates:\\n[%.3f, %.3f]\" % (self.currx, self.curry))\n\n def move_left(self, e):\n \"\"\"\n Moves the NS probe leftward (i.e. negative Y direction of the phone).\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n self.currx -= self.distx\n self.errx -= self.fracx\n self.motor.reverse_motor_one(self.stepx + int(self.errx))\n self.errx -= int(self.errx)\n self.coord_box.SetLabel(\"Coordinates:\\n[%.3f, %.3f]\" % (self.currx, self.curry))\n\n def move_right(self, e):\n \"\"\"\n Moves the NS probe rightward (i.e. 
positive Y direction of the phone).\n\n        :param e: Event handler.\n        :return: Nothing.\n        \"\"\"\n        self.currx += self.distx\n        self.errx += self.fracx\n        self.motor.forward_motor_one(self.stepx + int(self.errx))\n        self.errx -= int(self.errx)\n        self.coord_box.SetLabel(\"Coordinates:\\n[%.3f, %.3f]\" % (self.currx, self.curry))\n\n    def OnKey(self, e):\n        \"\"\"\n        Handles wx.EVT_KEY_UP events (releasing a pressed key) to allow keyboard controlled movement.\n        Registers arrow keys for movement.\n\n        :param e: Event handler.\n        :return: Nothing.\n        \"\"\"\n        if e.GetKeyCode() == wx.WXK_UP:\n            self.move_up(None)\n        elif e.GetKeyCode() == wx.WXK_DOWN:\n            self.move_down(None)\n        elif e.GetKeyCode() == wx.WXK_LEFT:\n            self.move_left(None)\n        elif e.GetKeyCode() == wx.WXK_RIGHT:\n            self.move_right(None)\n        else:\n            e.Skip()\n\n    def update_settings(self, e):\n        \"\"\"\n        Handles automatic X and Y grid step distance edits. Called when [Enter] is pressed or when the user clicks\n        on any element/widget of the GUI.\n\n        :param e: Event handler.\n        :return: Nothing.\n        \"\"\"\n        try:\n            xval = float(self.x_tctrl.GetValue())\n            yval = float(self.y_tctrl.GetValue())\n            if self.distx == xval and self.disty == yval:\n                return\n            self.distx = xval\n            self.disty = yval\n        except ValueError:\n            print(\"Invalid distance values.\\nPlease input numeric values.\")\n            # Restore the previous valid values in the text controls\n            self.x_tctrl.SetValue(str(self.distx))\n            self.y_tctrl.SetValue(str(self.disty))\n            return\n        self.stepx = int(self.distx / self.motor.step_unit)\n        self.stepy = int(self.disty / self.motor.step_unit)\n        self.fracx = self.distx / self.motor.step_unit - self.stepx\n        self.fracy = self.disty / self.motor.step_unit - self.stepy\n\n        print(\"New step distances: X =\", self.distx, \"Y =\", self.disty)\n\n    def OnClose(self, e):\n        \"\"\"\n        Exit script for the GUI, called when the window is closed. Notifies the user about exiting the manual movement\n        module, destroys the motor object, and destroys the GUI object.\n\n        :param e: Event handler.\n        :return: Nothing.\n        \"\"\"\n        print(\"Exiting Manual Movement module.\")\n        self.motor.destroy()\n        self.Destroy()\n\n\nif __name__ == \"__main__\":\n    manual_move_gui = wx.App()\n    fr = ManualMoveGUI(None, title=\"Manual Movement GUI\", grid_step=1.0)  # grid_step in cm (example value)\n    manual_move_gui.MainLoop()\n"
},
{
"alpha_fraction": 0.6117767691612244,
"alphanum_fraction": 0.6186237335205078,
"avg_line_length": 31.098901748657227,
"blob_id": "828eec0c281f0d8e7d8e369218eba187c6021c83",
"content_id": "0cec0087afa34d5f16160c3fd43b09982c40602a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2921,
"license_type": "permissive",
"max_line_length": 114,
"num_lines": 91,
"path": "/src/location_select_gui.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nLocation Selection GUI\n\nThis is the GUI module for selecting a coordinate post area scan for correcting a previous measurement.\nThis GUI is intended to be run in conjunction with the 'xy_positioner_gui.py' only. The GUI is currently only used\nfor the post-scan function 'Correct Previous Value'.\n\nThis module contains a single class:\n - LocationSelectionGUI(wx.Dialog): provides a grid of buttons to allow user to select area scan coordinate.\n\nAuthors:\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\n\"\"\"\n\nimport numpy as np\nimport wx\n\n\nclass LocationSelectGUI(wx.Dialog):\n \"\"\"\n Location Selection GUI. Allows the user to select a position in the area scan grid for the 'Correct Previous\n Value' function.\n \"\"\"\n def __init__(self, parent, title, grid):\n \"\"\"\n :param parent: Parent frame invoking the LocationSelectGUI.\n :param title: Title for the GUI window.\n :param grid: Numpy array of the traversal grid for the general area scan.\n \"\"\"\n wx.Dialog.__init__(self, parent, title=title)\n self.parent = parent\n self.grid = grid\n\n numrows = self.grid.shape[0]\n numcols = self.grid.shape[1]\n\n # Bindings\n self.Bind(wx.EVT_CLOSE, self.OnQuit)\n\n # Sizers\n self.coord_sizer = wx.GridSizer(rows=numrows, cols=numcols, hgap=0, vgap=0)\n for val in np.nditer(self.grid):\n btn = wx.Button(self, val, str(val), size=(50, 50))\n self.Bind(wx.EVT_BUTTON, lambda e, x=btn.Id: self.selected(x), btn)\n self.coord_sizer.Add(btn, proportion=1)\n self.coord_sizer.Layout()\n\n self.SetSizer(self.coord_sizer)\n self.SetAutoLayout(True)\n self.coord_sizer.Fit(self)\n self.Show(True)\n\n def selected(self, value):\n \"\"\"\n Runs the parent Frame's 'run_currection()' function using the clicked button's number as the argument.\n Destroys the GUI on button click.\n\n :param value: Button value/position selected on the grid.\n :return: Nothing.\n \"\"\"\n self.parent.run_correction(value)\n 
self.Destroy()\n\n def OnQuit(self, e):\n \"\"\"\n Exit script - when the GUI closes, wx calls the MainFrame's 'run_post_scan()' function.\n Destroys the GUI button.\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n wx.CallAfter(self.parent.run_post_scan)\n self.Destroy()\n\n\nif __name__ == '__main__':\n # Generate Zig-zag grid\n rows = 4\n columns = 6\n g = []\n for i in range(rows):\n row = list(range(i * columns + 1, (i + 1) * columns + 1))\n if i % 2 != 0:\n row = list(reversed(row))\n g += row\n print(g)\n g = np.array(g).reshape(rows, columns)\n # Start GUI\n loc_gui = wx.App()\n fr = LocationSelectGUI(None, title='Location Selection', grid=g)\n loc_gui.MainLoop()\n"
},
{
"alpha_fraction": 0.5655748844146729,
"alphanum_fraction": 0.5774363279342651,
"avg_line_length": 33.50697708129883,
"blob_id": "5de0ab48d8cd376623582bba7f6ae111ab2ffaca",
"content_id": "7890cc973c7cb366c983969fec9688ae0928d817",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7419,
"license_type": "permissive",
"max_line_length": 116,
"num_lines": 215,
"path": "/src/motor_driver.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nMotor Driver\n\nThis module contains scripts for initializing, communicating with, and controlling the C4 motor controller from\nArrick Robotics.\n\nThe module contains the following classes:\n    - ResetThread(threading.Thread): performs motor resets.\n    - MotorDriver(): establishes connection with the C4 controller and contains all motor movement functions.\n\nAuthors:\nGanesh Arvapalli, Software Engineering Intern (Jan. 2018) - [email protected]\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\n\n\"\"\"\n\nimport serial\nimport threading\nimport wx\n\n\nclass ResetThread(threading.Thread):\n    \"\"\"\n    Thread for handling resetting the motors. NS probe is moved back to its 'home' position.\n    \"\"\"\n    def __init__(self, parent):\n        \"\"\"\n        :param parent: Parent frame invoking the LocationSelectGUI.\n        \"\"\"\n        self.parent = parent\n        self.motor = None  # Placeholder variable for the motor\n        super(ResetThread, self).__init__()\n\n    def run(self):\n        \"\"\"\n        Script run on thread start. Resets the NS probe position to its home coordinates on a separate thread.\n\n        :return: Nothing.\n        \"\"\"\n        # Variables\n        try:\n            self.motor = MotorDriver()\n            self.motor.home_motors()\n            self.motor.destroy()\n        except serial.SerialException:\n            print(\"Error: Connection to C4 controller was not found\")\n            return\n        with wx.MessageDialog(self.parent, \"Motor resetting completed.\",\n                              style=wx.OK | wx.ICON_INFORMATION | wx.CENTER) as dlg:\n            dlg.ShowModal()\n        wx.CallAfter(self.parent.enablegui)\n\n\nclass MotorDriver:\n    \"\"\"\n    Attempts to open serial port to control two MD2 stepper motors.\n    Automatically flushes input and output.\n\n    Attributes:\n        port: Serial port through which motors are controlled\n        step_unit: Size of individual motor step (consult manual)\n    \"\"\"\n\n    def __init__(self, step_unit_=0.00508, home=(3788, 4300)):\n        \"\"\"\n        :param step_unit_: Size of individual motor step (consult C4 controller manual for more details).\n        :param home: Home/Reset coordinates for the motors. NS probe returns to these coordinates.\n        \"\"\"\n        self.home = home\n        entered = False\n        for i in range(256):\n            try:\n                self.port = serial.Serial('COM'+str(i), timeout=1.5)\n                self.port.write('!1fp\\r'.encode())  # Check if we have connected to the right COM Port/machine\n                received_str = self.port.read(2)\n                if received_str.decode() == \"C4\":\n                    print(\"Established connection with motor controller (PORT %d)\" % i)\n                    self.port.flushOutput()\n                    self.port.flushInput()\n                    self.port.flush()\n                    self.step_unit = step_unit_\n                    entered = True\n                    break\n            except serial.SerialException as e:\n                pass\n                # print e.message\n        if not entered:\n            raise serial.SerialException\n\n    def forward_motor_one(self, steps):\n        \"\"\"\n        Move motor 1 forward the specified number of steps. Blocks thread until a 'Completed' acknowledgment signal\n        is received.\n\n        :param steps: Number of steps to move the stepper motor by.\n        :return: Nothing.\n        \"\"\"\n        self.port.flushInput()\n        self.port.flushOutput()\n        self.port.flush()\n        self.port.write(('!1m1r' + str(steps) + 'n\\r').encode())\n        while self.port.read().decode() != 'o':\n            pass\n        self.port.flush()\n\n    def reverse_motor_one(self, steps):\n        \"\"\"\n        Move motor 1 backward the specified number of steps. Blocks thread until a 'Completed' acknowledgment signal\n        is received.\n\n        :param steps: Number of steps to move the stepper motor by.\n        :return: Nothing.\n        \"\"\"\n        self.port.flushInput()\n        self.port.flushOutput()\n        self.port.flush()\n        self.port.write(('!1m1f' + str(steps) + 'n\\r').encode())\n        while self.port.read().decode() != 'o':\n            pass\n        self.port.flush()\n\n    def forward_motor_two(self, steps):\n        \"\"\"\n        Move motor 2 forward the specified number of steps. Blocks thread until a 'Complete' acknowledgment signal\n        is received.\n\n        :param steps: Number of steps to move the stepper motor by.\n        :return: Nothing.\n        \"\"\"\n        self.port.flushInput()\n        self.port.flushOutput()\n        self.port.flush()\n        self.port.write(('!1m2r' + str(steps) + 'n\\r').encode())\n        while self.port.read().decode() != 'o':\n            pass\n        self.port.flush()\n\n    def reverse_motor_two(self, steps):\n        \"\"\"\n        Move motor 2 backward the specified number of steps. Blocks thread until a 'Complete' acknowledgment signal\n        is received.\n\n        :param steps: Number of steps to move the stepper motor by.\n        :return: Nothing.\n        \"\"\"\n        self.port.flushInput()\n        self.port.flushOutput()\n        self.port.flush()\n        self.port.write(('!1m2f' + str(steps) + 'n\\r').encode())\n        while self.port.read().decode() != 'o':\n            pass\n        self.port.flush()\n\n    # Home both motors to preset positions\n    def home_motors(self):\n        \"\"\"\n        Resets the NS probe position to its home coordinates. Blocks thread until the motors have fully reset.\n        :return: Nothing.\n        \"\"\"\n        # Set home of motor 1 to be 6000 steps away, home of motor 2 to be 13000 steps away\n        self.port.write(str.encode('!1wh1,r,'+str(self.home[0])+'\\r'))\n        self.port.readline()\n        self.port.write(str.encode('!1wh2,r,'+str(self.home[1])+'\\r'))\n        self.port.readline()\n        # print 'Home settings written (a if yes), ', port.readline()\n        self.port.flush()\n\n        # Home both motors\n        motor_home = str.encode('!1h12\\r')\n        self.port.write(motor_home)\n        while self.port.read().decode() != 'o':\n            pass\n        print(\"Motor 1 reset.\")\n        while self.port.read().decode() != 'o':\n            pass\n        print(\"Motor 2 reset.\")\n        self.port.flush()\n        print(\"Motors reset successfully.\")\n\n    def set_start_point(self, offset=(3788, 4300)):\n        \"\"\"\n        Set the NS probe position to a starting point.\n\n        :param offset: number of steps from the home coordinates.\n        :return: Nothing\n        \"\"\"\n        self.port.write(str.encode('!1wh1,r,' + str(offset[0]) + '\\r'))\n        self.port.readline()\n        self.port.write(str.encode('!1wh2,r,' + str(offset[1]) + '\\r'))\n        self.port.readline()\n        # print 'Home settings written (a if yes), ', port.readline()\n        self.port.flush()\n\n        # Home both motors\n        motor_start = str.encode('!1h12\\r')\n        self.port.write(motor_start)\n        while self.port.read().decode() != 'o':\n            pass\n        print(\"Motor 1 reset.\")\n        while self.port.read().decode() != 'o':\n            pass\n        print(\"Motor 2 reset.\")\n        self.port.flush()\n        print(\"Motors set to the start point successfully.\")\n\n    def destroy(self):\n        \"\"\"\n        Flush remaining data and close port.\n\n        :return: Nothing.\n        \"\"\"\n        self.port.flush()\n        self.port.flushInput()\n        self.port.flushOutput()\n        self.port.close()\n"
},
{
"alpha_fraction": 0.7597076892852783,
"alphanum_fraction": 0.7681196331977844,
"avg_line_length": 66.63793182373047,
"blob_id": "fbf234ce6d960beb2986163099ad0724c37e4b26",
"content_id": "13090e82ab51892c865839682c5c75766f109515",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 11783,
"license_type": "permissive",
"max_line_length": 431,
"num_lines": 174,
"path": "/README.md",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "# NS Testing/Robotic Positioning Controller\n\nThis project was designed to allow easier use of the Arrick C4 Motor\ncontrollers and generate contour plots based on measurement data.\nThe project has since been expanded to also incorporate automatic\nmeasurements for nerve stimulation (NS) testing by way of mouse and\nkeyboard control libraries. The code was written in Python 3.6.\n*Note: this version of the program only supports Windows 10 at the moment.\n\n## Getting Started\n### Overview\n#### Programs\n* [Anaconda3/Miniconda3]( https://www.anaconda.com/download/)\n    * Python projects often implement a variety of different third party libraries which, as projects scale up, can be hard to manage in an organized fashion. As a Python distribution, Anaconda provides the core Python language, hundreds of core packages, a variety of different development tools (e.g. IDEs), and **conda**, Anaconda’s package manager that facilitates the downloading and management of Python packages.\n* [Python 3.6]( https://www.python.org/downloads/) (included in Anaconda3/Miniconda3)\n    * Python is the main programming language for this project. It has an active open-source community and easily readable code syntax, making Python a great language of choice for projects like these.\n    * **Note: Python 3.6 is included in the Anaconda 3 installation - there is no need to download it separately.**\n* An Integrated Development Environment (IDE) supporting Python – [PyCharm]( https://www.jetbrains.com/pycharm/download/) is recommended.\n    * While PyCharm is not necessary to run this program, we highly recommend it for Python-based development. It is easy to integrate it with Anaconda and provides fantastic tools/shortcuts/hotkeys that make development faster and easier.\n* EHP200-TS (NARDA Software)\n    * This is the Program for electric and magnetic field analysis. All measurements are processed and saved through this program.\n* Snipping Tool\n    * Snipping Tool is used to take screenshots and is a default program in Windows 10.\n\n#### Packages (to be installed using Anaconda)\n* [Python](https://www.python.org/)\n* [PySerial](https://github.com/pyserial/pyserial)\n* [Numpy](http://www.numpy.org/)\n* [Matplotlib](https://matplotlib.org/)\n* [SciPy](https://www.scipy.org/)\n* [wxPython (wx)](https://wxpython.org/)\n* [pyautogui](https://pyautogui.readthedocs.io/en/latest/)\n* [pywinauto](https://pywinauto.github.io/)\n* [pywin32](https://wiki.python.org/moin/PyWin32)\n\n#### Hardware Prerequisites\n* [EHP-200A](https://www.narda-sts.com/en/selective-emf/ehp-200a/) - Electric and Magnetic Field Analyzer Probe (NARDA)\n* XY Positioning System (18 inch or 30 inch model)\n    * C4 Controller\n    * MD-2 Stepper Motor Driver\n\n### Installing\n\nA core strength of Python is its vast open-source community and libraries. However, downloading and managing these separate packages is often a messy and tricky endeavor. As such, we used [Anaconda](https://www.anaconda.com/download/) to browse, install, and manage the libraries. While it is possible to follow the general steps in this section through a standard Python installation, we highly recommend using Anaconda/Miniconda.\n\n1. First, install the latest version of [Anaconda](https://www.anaconda.com/download/). Make sure you are downloading Anaconda for **Python 3.X** (Version 3.6 was the latest one at the time of writing this guide). \n\n2. Install an IDE such as [PyCharm](https://www.jetbrains.com/pycharm/download/). Open the folder containing the files you downloaded using this IDE.\n\n3.\tWe will now install the different packages required to run the scripts.\n    * Open **Anaconda Navigator** and click on **Environments** tab on the left.\n    \n     \n    \n    * Select **Not installed** from the dropdown menu.\n    \n    \n    \n    * From here, we can search for each of the different packages listed above (in the Packages subsection of the Overview). To download the packages, check the box next to the package and click **Apply** at the bottom. **Keep in mind that some of these packages may already have come with the full Anaconda installation.**\n    * Some packages may not be found using the Navigator. In this case, they must be downloaded using **Anaconda Prompt**. Open the prompt and type the following: `conda install -c anaconda <package_names_space-separated>`.\n    \n    \n    \n*Note: Alternatively, you can add the packages in the **Project Interpreter** section of the project settings in PyCharm. However, this must be done after the project has been opened and the project interpreter has been selected, which will be done in the next few steps.*\n\n4.\tOpen PyCharm and open the project – select the **XY_Positioner_GUI** folder in the “Open File or Project” window.\n\n    \n    \n5.\tSelect **File->Settings…** and select the **Project Interpreter** tab on the left. Click on the cog and select **Add…**.\n    \n    \n    \n    * Click on the **System Interpreter** tab on the left and select the location of the python.exe file from your Anaconda/Miniconda installation (most probably found in `C:\\Users\\<user>\\AppData\\Local\\Continuum\\anaconda3\\python.exe` in Windows machines).\n\n## Deployment\nOpen the file [\"xy_positioner_gui.py\"](xy_positioner_gui.py) in your favorite IDE and click run \n(the green play button at the top for PyCharm). If you have to enter run configurations, the only That should be it!\n\nIn order to run python from the command line, you must first configure your PATH variables. Follow the steps\non this [site](https://superuser.com/questions/143119/how-to-add-python-to-the-windows-path) to learn how to add Python to the PATH.\nNote that this is completely optional and that the program can still be run through an IDE.\n\nThe GUI should pop up with options that you may define. Additional parameters can be tweaked by modifying the source files.\n\n\n# TODO: Change image\n\nIt may be necessary to purchase a USB-to-Serial cable as the one currently in use is liable to\nbreaking. Ensure that the stopper is in place for motor 1 on the 30 inch system to ensure homing\nworks correctly.\n\n##### NOTE THAT BEFORE RUNNING ANY SCANS YOU MUST FIRST [RESET THE MOTORS](#resetting-Motors-reset_motors).\n\n### Running an Area Scan\n##### Required Arguments\n* Input the dimensions of the object (x goes across and is controlled by motor #1, y goes up and down by using motor #2) into *X Distance* and *Y Distance*.\n* *Grid Step Distance* selects how far apart each measurement point should be.\n* *Dwell Time Settings* for the time the NS probe spends idle above a point before a measurement is taken.\n* The *Span Settings* for the FFT band limits.\n* *Save Directory* to save all the output files from the auto measurements.\n* *Test Information* to save current test run info (i.e. Test engineer initials, test number, EUT serial number).\n* *Measurement Specifications* to select the measurement field, type, side, and RBW.\n\nThe default settings create a 4x6 grid with spacing of 2.8 cm with a 3-second dwell time (for both area scan and zoom scan). Span settings are set to be between 0.005 - 5 MHz.\n\n#### General Area Scan\nThe main function of this program is the general area scan. This consists of moving the probe throughout a specified area, taking NS measurements automatically until all points in the area have been traversed.\n\nBefore the scan, make sure all text entries in the GUI are filled with valid inputs.\nTo begin the scan, click on the **Run** button at the bottomo right of the GUI window.\nThe scan will open up a custom terminal console window that displays any message or error from the program, containing information on the progress or particular crashes within the lifetime of the program.\nThe program will then automatically call Snipping Tool and EHP200-TS and interface with the C4 controller in the XY positioner system.\nOnce all preparations have been completed, the program starts the scan.\n**Do not touch the computer until the area scan has been completed - the program uses image recognition for its automated measurement process and will crash if certain key reference points/buttons/UI elements are not in sight. With this in mind, it may also be helpful to disable any notifications or programs that pop to the front (e.g. installers that get in front of all other windows once the installation has been completed.**\n\nOnce the general area scan has been completed, you may select one of four options: 1) Exit the area scan module, 2) Perform a zoom scan on the coordinate corresponding to the highest value measurement, 3) Correct a previous position's value, and 4) Save Data (not yet implemented - may delete in the future).\n\n#### Zoom Scan\nThe scan process begins by identifying the point in the grid with the highest measurement value and moving the probe to this position. Once here, the program begins a 5 by 5 grid point scan similar to the area scan.\n\n#### Correct Previous Value\nThe program prompts the user to select a point to measure again. The program then simply moves the probe to the selected position and automatically takes a measurement, replacing the value at its corresponding coordinate in the value array.\n\n### Resetting Motors (reset_motors)\n\nThis command resets the motors to their start position approximately at the center of the XY positioner. Please run this command every time you are finished using the positioner. It will take approximately a minute to complete the reset.\n\n### Manual Control (manual)\nThe manual control option assumes that you know your exact positions for grid coordinates and allows for free movement of the motors.\n\nThe settings for \"up, down, left, and right\" may be altered depending on your perspective relative to the positioner. For this reason,\nthe buttons have been labelled \"towards/away\" depending on the motor they are associated with.\n\nThe distance moved by pressing a button can be changed by using the text boxes in the top right section of the Manual Movement GUI.\n\n\n\n## Built With\n\n* [Python](https://www.python.org/) - This code base was written in Python 3.6\n* [PySerial](https://github.com/pyserial/pyserial) - Serial communication with the motor controller\n* [Numpy](http://www.numpy.org/) - Used for matrix manipulations and structuring data\n* [Matplotlib](https://matplotlib.org/) - Used for interpolation and contour plotting\n* [SciPy](https://www.scipy.org/) - Used for additional interpolation\n* [wxPython (wx)](https://wxpython.org/) - Used to construct all GUI elements\n* [pyautogui](https://pyautogui.readthedocs.io/en/latest/) - Used to automate mouse and keyboard for automatic data measurements\n* [pywinauto](https://pywinauto.github.io/) - Used to automate processes\n* [pywin32](https://wiki.python.org/moin/PyWin32) - Set of extension modules that give access to many Windows API functions\n\n## Contributing\n\nPlease read [CONTRIBUTING.md](https://gist.github.com/PurpleBooth/b24679402957c63ec426) for details on our code of conduct, and the process for submitting pull requests.\n\n## Authors\n\n* **Ganesh Arvapalli** - *Software Engineering Intern* - [garva-pctest](https://github.com/garva-pctest)\n* **Chang Hwan 'Oliver' Choi** - Biomedical/Software Engineering Intern* - [cchoi-oliver](https://github.com/cchoi-oliver)\n\nOriginally written for the internal use of PCTEST Engineering Lab, Inc.\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details\n\n## Acknowledgments\n\n* Andrew Harwell\n* Baron Chan\n* Kaitlin O'Keefe\n* Steve Liu\n* Thomas Rigolage\n* PCTEST Engineering Lab, Inc.\n* Billie Thompson ([PurpleBooth](https://gist.github.com/PurpleBooth)) for the README template\n"
},
{
"alpha_fraction": 0.7558703422546387,
"alphanum_fraction": 0.7630506753921509,
"avg_line_length": 49.51960754394531,
"blob_id": "4a848024e860fd9e77387fb17eab544855930c36",
"content_id": "d319b276351ebe2a70c06cb3f2094d257977e81d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 10306,
"license_type": "permissive",
"max_line_length": 240,
"num_lines": 204,
"path": "/README_OLD.md",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "# Robotic Positioning Controller\n\nThis project was designed to allow easier use of the Arrick C4 Motor\ncontrollers and generate contour plots based on measurement data. The\ncode was written in Python 2.7.\n\n## Getting Started\n\n### Prerequisites\n\nWhat things you need to install the software and how to install them\n\nIt is recommended that you install an **I**ntegrated **D**evelopment **E**nvironment (IDE) for viewing and running Python files. [PyCharm](https://www.jetbrains.com/pycharm/download/) is recommended as it does not require admin privileges. \n\nSpyder should also work and it comes preinstalled with [Anaconda](https://github.com/BurntSushi/nfldb/wiki/Python-&-pip-Windows-installation).\nFuture versions should include a Jupyter Notebook to make data management testing easier.\n\nIt is also assumed that you already have a built XY Positioning system (either 18 or 30 inch model). This code will simply\nhelp make controlling the motors more intuitive.\n\n### Installing\n\n1. First, install the latest version of Python 2.7 from the following [link](https://www.python.org/downloads/). Make sure you\nare downloading **Python 2.7** and NOT Python 3. You may also choose to install [Anaconda or Miniconda](https://github.com/BurntSushi/nfldb/wiki/Python-&-pip-Windows-installation) to manage the packages\nwe will be using.\n\n\n\n2. Install an IDE such as [PyCharm](https://www.jetbrains.com/pycharm/download/) or use Spyder (included in Anaconda). Open the folder\ncontaining the files you downloaded using this IDE.\n\n3. Next we must set up our virtual environment to continue using python properly. This can be slightly complicated, so use this link\nfor [reference for PyCharm](https://www.jetbrains.com/help/pycharm/configuring-python-interpreter.html#configuring-venv) and this link\nfor [reference for Anaconda](https://conda.io/docs/user-guide/tasks/manage-environments.html). If you have configured the PATH variables\nfor Windows, this shouldn't be necessary. I will go through the exact steps for PyCharm\nhere:\n    - Go to Help > Find Action in the top bar or press Ctrl+Shift+A\n    - Search for \"Project Interpreter\" and click the first result\n    - The term \"Project Interpreter\" should be highlighted. It is next to a long dropdown bar. Next to that bar is a settings \n    gear icon. Click on it and choose \"Add Local\"\n    \n    \n    \n    - Specify a \"Location\" for the virtual environment. For the \"Base Interpreter\", navigate to the folder where you installed Python\n    (probably `C:/Python27` or `C:/ProgramFiles/Python27`). Choose the file \"python.exe\" to make sure PyCharm knows how to run your code.\n    \n    \n    \n    - Before you exit out of this window, on the left hand side, there should be an option that says \"System Interpreter\". For the \"Interpreter\"\n    choose the exact same location you specified for the \"Base Interpreter\" (the python.exe file from before).\n    \n    \n    \n    - Click \"Ok\" and exit the \"Settings\" window.\n\n4. You must then open up the \"Terminal\" inside the IDE you are using to run the following command to install relevant packages.\nThe button to open it in PyCharm can be found in the very bottom left corner (it looks like a gray square). Hovering over it will allow the\nuser the option to open the terminal.\n\n \n\n5. Run the following command in the terminal (within your IDE) to install all packages at once. These packages take up about 1 GB of space:\n\n```bash\npip install numpy matplotlib Gooey pyserial scipy\n```\n\nAlternatively, if you are using Anaconda, use the following command:\n\n```bash\nconda install numpy matplotlib Gooey pyserial scipy\n```\n\n## Deployment\n\nOpen the file [\"basic_xy_positioner_gui.py\"](basic_xy_positioner_gui.py) in your favorite IDE and click run \n(the green play button at the top for PyCharm). If you have to enter run configurations, the only That should be it!\n\nRunning from the command line on Linux/Mac systems can be done by simply typing:\n\n```bash\npython basic_xy_positioner_gui.py\n```\n\nMake sure that it is configured to be executable by using:\n\n```bash\nchmod +x basic_xy_positioner_gui.py\n```\n\nFor Windows 10, you must first configure your PATH variables in order to run python from the command line. Follow the steps\non this [site](https://superuser.com/questions/143119/how-to-add-python-to-the-windows-path) to learn how to add Python to the PATH.\nNote that this is completely optional and that the program can still be run through an IDE.\n\nThe GUI should pop up with options that you may define. Additional parameters can be tweaked by modifying the source files.\n\n\n\nIt may be necessary to purchase a USB-to-Serial cable as the one currently in use is liable to\nbreaking. Ensure that the stopper is in place for motor 1 on the 30 inch system to ensure homing\nworks correctly.\n\n##### NOTE THAT BEFORE RUNNING ANY SCANS YOU MUST FIRST [RESET THE MOTORS](#resetting-Motors-reset_motors).\n\n### Running an Area Scan (area_scan)\n\n##### Required Arguments\n\n* Input the dimensions of the object (x goes across and is controlled by motor #1, y goes up and down by using motor #2) into *x_distance* and *y_distance*.\n* The *grid_step_dist* selects how far apart each measurement point should be.\n* The *dwell_time* is by default set to 1, but can be changed to 0 to allow for the user to wait an indefinite amount of time before inputting a value.\n\n##### Optional Arguments\n\n* The *filename* is a prefix that will appear at the beginning of all resulting file output. All output can be found in the folder\n`results/` which will be automatically generated if not present.\n* The *measure* setting can be turned off to simply observe how the motors are stepping through the grid.\n* The *auto_zoom_scan* setting will automatically conduct a zoom scan over the point with the highest value after travelling\nto it. Zoom scan data is stored separately from area scan data.\n\nThe default settings create a 4x6 grid with spacing of 2.8 cm with a 1 second dwell time. The default file prefix is \"raw_values\" and the automatic zoom scan is not conducted.\n\nOnce the area scan is complete, you may select more options from the post-scan GUI that pops up afterward. You may choose to\n* correct a previous value\n* run a zoom scan\n* save your data into `results/`\n* ...or exit the program\n \nNote that the plot will also be saved and you do not have to choose the save button on the graph pop up unless you would like \nto save it in another folder.\n\n### Moving to a Grid Position (pos_move)\n\n##### Required Arguments\n\n* Input the dimensions of the object (x goes across and is controlled by motor #1, y goes up and down by using motor #2) into *x_distance* and *y_distance*.\n* The *grid_step_dist* selects how far apart each measurement point should be.\n\nAfter the command is selected, a grid GUI will pop up with a series of buttons. Click on where you want to go within the grid\nand the motors will move there after moving to the first position in the grid.\n\n### Running a Zoom Scan\n\nTo run a zoom scan over a single point, it is recommended that you first run [pos_move](#moving-to-a-grid-position-pos_move)\nand then conduct an area scan with smaller step size.\n\nMake sure to indicate in your filename that you are conducting a zoom scan!\nThe program is built to assume that if you choose [area_scan](#running-an-area-scan-area_scan), you are expecting data outputted\ninto a file called \"_area\".\n\nYou can also choose to run a zoom scan after the area scan is complete or set it to be run automatically after an area scan.\n\n### Resetting Motors (reset_motors)\n\nThis command resets the motors to their start position at the center of the XY positioner. Please run this command every time you\nare finished using the positioner. It will take approximately a minute to complete the reset, but PLEASE DO NOT RUN ANY SCANS while\nthe motors are moving. This will throw off position calculations. To that effect, check the \"wait\" box to say that you understand this. :+1:\n\nYou may also select whether you would like to go to the center of the 18 inch system or the 30 inch system. (\"scan_30\")\n\n### Manual Control (manual)\n\nThe manual control option assumes that you know your exact positions for grid coordinates and allows for free movement of the motors.\n\nThe settings for \"up, down, left, and right\" may be altered depending on your perspective relative to the positioner. For this reason,\nthe buttons have been labelled \"towards/away\" depending on the motor they are associated with.\n\nThe distance moved by pressing a button can be changed by using the text boxes in the top right section of the GUI.\n\nTo generate a graph (scatter + contour) you require at least 4 data points. To add a data point, type a value into the textbox and click \"Add\nto graph\". Unfortunately, overwriting data points is **not** supported at this time, so be sure of your value before you click \"Add to graph\"!\n\nNote that this graph IS NOT SAVED unless you click the save button and choose a location/filename.\n\n## Built With\n\n* [Python](https://www.python.org/) - This code base was written in Python 2.7.14\n* [PySerial](https://github.com/pyserial/pyserial) - Dependency Management\n* [Numpy](http://www.numpy.org/) - Used for matrix manipulations and structuring data\n* [Matplotlib](https://matplotlib.org/) - Used for interpolation and contour plotting\n* [SciPy](https://www.scipy.org/) - Used for additional interpolation\n* [Gooey](https://github.com/chriskiehl/Gooey) - Used to construct initial menu selection\n\n## Contributing\n\nPlease read [CONTRIBUTING.md](https://gist.github.com/PurpleBooth/b24679402957c63ec426) for details on our code of conduct, and the process for submitting pull requests.\n\n## Authors\n\n* **Ganesh Arvapalli** - *Software Engineering Intern* - [garva-pctest](https://github.com/garva-pctest)\n\nOriginally written for the internal use of PCTEST Engineering Lab, Inc.\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details\n\n## Acknowledgments\n\n* Andrew Harwell\n* Baron Chan\n* Kaitlin O'Keefe\n* Steve Liu\n* PCTEST Engineering Lab, Inc.\n"
},
{
"alpha_fraction": 0.6098541617393494,
"alphanum_fraction": 0.6150962710380554,
"avg_line_length": 49.159400939941406,
"blob_id": "a0589bcf1111af80dad0194bf22233a6ecd33628",
"content_id": "620eb1bb39d9ce10f438703e70aa2c545e991a56",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 36817,
"license_type": "permissive",
"max_line_length": 127,
"num_lines": 734,
"path": "/xy_positioner_gui_2.0.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nNS Testing Program/XY Positioner GUI\n\nThis is the main frame/driver script for NS testing. Built on the wxPython GUI framework, the MainFrame class\nhandles all of the GUI elements of the entire testing program.\n\nMainFrame and all other GUI elements run on the main thread, starting threads for analysis and computation functions.\n\nThe script contains a single class:\n    - MainFrame(wx.Frame): main GUI window/frame, starts children GUIs and children threads.\n\nAuthors:\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\n\"\"\"\n\nimport sys\nimport os\nfrom src.area_scan import AreaScanThread, ZoomScanThread, CorrectionThread\nfrom src.post_scan_gui import PostScanGUI\nfrom src.location_select_gui import LocationSelectGUI\nfrom src.manual_move import ManualMoveGUI\nfrom src.console_gui import TextRedirector, ConsoleGUI\nfrom src.motor_driver import ResetThread\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport wx\nfrom wx.lib.agw import multidirdialog as mdd\nimport json\nfrom src.logger import Log\nfrom datetime import datetime\nimport time\nimport keyboard\n\n\nclass MainFrame(wx.Frame):\n    \"\"\"\n    Main GUI frame of the entire NS testing program. Handles all children GUI and children threads for automated\n    measurement-taking.\n    \"\"\"\n    def __init__(self, parent, title):\n        \"\"\"\n        :param parent: Parent object calling the MainFrame.\n        :param title: Title for the MainFrame window.\n        \"\"\"\n        wx.Frame.__init__(self, parent, title=title, size=(600, 750))\n        self.scan_panel = wx.Panel(self)\n\n        # Variables\n        self.run_thread = None\n        self.zoom_thread = None\n        self.corr_thread = None\n        self.console_frame = None\n\n        self.curr_row = 0  # Grid coordinate row\n        self.curr_col = 0  # Grid coordinate col\n        self.values = None  # np.array storing area scan values\n        self.zoom_values = None  # np.array storing zoom scan values\n        self.grid = None  # np.array storing 'trajectory' of scans\n        self.max_fname = ''  # Name of the image file for the max measurement\n        self.logger = None  # Logger to create log file\n\n        # Accelerator Table/Shortcut Keys\n        save_id = 115\n        run_id = 116\n        manual_id = 117\n        reset_id = 118\n        help_id = 119\n        pause_id = 120\n        resume_id = 121\n        self.accel_tbl = wx.AcceleratorTable([(wx.ACCEL_CTRL, ord('s'), save_id),\n                                              (wx.ACCEL_CTRL, ord('r'), run_id),\n                                              (wx.ACCEL_CTRL, ord('m'), manual_id),\n                                              (wx.ACCEL_CTRL, ord('t'), reset_id),\n                                              (wx.ACCEL_CTRL, ord('p'), pause_id),\n                                              (wx.ACCEL_CTRL, ord('e'), resume_id),\n                                              (wx.ACCEL_CTRL, ord('h'), help_id)])\n        self.SetAcceleratorTable(self.accel_tbl)\n\n        # UI Elements\n        self.scan_settings_text = wx.StaticText(self.scan_panel, label=\"Area Scan Settings\")\n        self.scan_settings_text.SetFont(wx.Font(10, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.x_distance_text = wx.StaticText(self.scan_panel, label=\"X Distance\")\n        self.x_distance_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.xdesc_text = wx.StaticText(self.scan_panel, label=\"Horizontal length of measurement region (in cm)\")\n        self.x_tctrl = wx.TextCtrl(self.scan_panel)\n        self.x_tctrl.SetValue(str(4 * 2.8))\n\n        self.y_distance_text = wx.StaticText(self.scan_panel, label=\"Y Distance\")\n        self.y_distance_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.ydesc_text = wx.StaticText(self.scan_panel, label=\"Vertical length of measurement region (in cm)\")\n        self.y_tctrl = wx.TextCtrl(self.scan_panel)\n        self.y_tctrl.SetValue(str(6 * 2.8))\n\n        self.grid_step_dist_text = wx.StaticText(self.scan_panel, label=\"Grid Step Distance\")\n        self.grid_step_dist_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.griddesc_text = wx.StaticText(self.scan_panel, label=\"Distance between measurement points (in cm)\")\n        self.grid_tctrl = wx.TextCtrl(self.scan_panel)\n        self.grid_tctrl.SetValue(str(2.8))\n\n        self.start_point_text = wx.StaticText(self.scan_panel, label=\"Starting Point\")\n        self.start_point_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.posdesc_text = wx.StaticText(self.scan_panel, label=\"To use default starting point, set grid number to 0\")\n        self.pos_text = wx.StaticText(self.scan_panel, label=\"grid number :\")\n        self.pos_tctrl = wx.TextCtrl(self.scan_panel)\n        self.pos_tctrl.SetValue(str(0))\n\n        self.times_text = wx.StaticText(self.scan_panel, label=\"Dwell Time Settings\")\n        self.times_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.dwell_time_text = wx.StaticText(self.scan_panel, label=\"Pre-Measurement Dwell Time (Area scan, in sec)\")\n        self.dwell_tctrl = wx.TextCtrl(self.scan_panel)\n        self.dwell_tctrl.SetValue(str(3))\n        self.zoom_scan_dwell_time_text = wx.StaticText(self.scan_panel, label=\"Pre-Measurement Dwell Time (Zoom scan, in sec)\")\n        self.zdwell_tctrl = wx.TextCtrl(self.scan_panel)\n        self.zdwell_tctrl.SetValue(str(3))\n\n        self.span_text = wx.StaticText(self.scan_panel, label=\"Span Settings\")\n        self.span_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.span_start_text = wx.StaticText(self.scan_panel, label=\"Start (MHz):\")\n        self.span_start_tctrl = wx.TextCtrl(self.scan_panel)\n        self.span_start_tctrl.SetValue(str(0.005))\n        self.span_stop_text = wx.StaticText(self.scan_panel, label=\"Stop (MHz):\")\n        self.span_stop_tctrl = wx.TextCtrl(self.scan_panel)\n        self.span_stop_tctrl.SetValue(str(5))\n\n        self.save_dir_text = wx.StaticText(self.scan_panel, label=\"Save Directory\")\n        self.save_dir_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.savedesc_text = wx.StaticText(self.scan_panel, label=\"Directory to save measurement text and image files\")\n        self.save_tctrl = wx.TextCtrl(self.scan_panel)\n        # self.save_tctrl.SetValue(\"C:\\\\Users\\changhwan.choi\\Desktop\\hello\")  # TODO :Debugging\n        self.save_btn = wx.Button(self.scan_panel, save_id, \"Browse\")\n        self.Bind(wx.EVT_BUTTON, self.select_save_dir, self.save_btn)\n\n        self.auto_checkbox = wx.CheckBox(self.scan_panel, label=\"Automatic Measurements\")  # TODO: may not use\n        self.auto_checkbox.SetValue(True)\n\n        self.zoom_checkbox = wx.CheckBox(self.scan_panel, label=\"Zoom Scan\")\n        self.zoom_checkbox.SetValue(False)\n\n        self.test_info_text = wx.StaticText(self.scan_panel, label=\"Test Information\")\n        self.test_info_text.SetFont(wx.Font(10, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.eut_model_text = wx.StaticText(self.scan_panel, label=\"Model of EUT: \")\n        self.eut_model_tctrl = wx.TextCtrl(self.scan_panel)\n        self.eut_sn_text = wx.StaticText(self.scan_panel, label=\"S/N of EUT: \")\n        self.eut_sn_tctrl = wx.TextCtrl(self.scan_panel)\n        self.initials_text = wx.StaticText(self.scan_panel, label=\"Test Engineer Initials: \")\n        self.initials_tctrl = wx.TextCtrl(self.scan_panel)\n        self.test_num_text = wx.StaticText(self.scan_panel, label=\"Test Number: \")\n        self.test_num_tctrl = wx.TextCtrl(self.scan_panel)\n\n        self.measurement_specs_text = wx.StaticText(self.scan_panel, label=\"Measurement Specifications\")\n        self.measurement_specs_text.SetFont(wx.Font(10, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n        self.type_rbox = wx.RadioBox(self.scan_panel, label=\"Type\", choices=['Limb', 'Body'],\n                                     style=wx.RA_SPECIFY_COLS, majorDimension=1)\n        self.field_rbox = wx.RadioBox(self.scan_panel, label=\"Field\",\n                                      choices=['Electric', 'Magnetic (Mode A)', 'Magnetic (Mode B)'],\n                                      style=wx.RA_SPECIFY_COLS, majorDimension=1)\n        self.side_rbox = wx.RadioBox(self.scan_panel, label=\"Side\",\n                                     choices=['Front', 'Back', 'Top', 'Bottom', 'Left', 'Right'],\n                                     style=wx.RA_SPECIFY_COLS, majorDimension=1)\n        self.side_rbox.SetSelection(1)\n        self.rbw_rbox = wx.RadioBox(self.scan_panel, label=\"RBW\",\n                                    choices=['300 kHz', '10 kHz', '100 kHz', '3 kHz', '30 kHz', '1 kHz'],\n                                    style=wx.RA_SPECIFY_COLS, majorDimension=2)\n        self.rbw_rbox.SetSelection(2)\n        self.meas_rbox = wx.RadioBox(self.scan_panel, label=\"Measurement\", choices=['Highest Peak', 'WideBand'],\n                                     style=wx.RA_SPECIFY_COLS, majorDimension=1)\n\n        self.reset_btn = wx.Button(self.scan_panel, reset_id, \"Reset Motors\")\n        self.Bind(wx.EVT_BUTTON, self.reset_motors, self.reset_btn)\n        self.manual_btn = wx.Button(self.scan_panel, manual_id, \"Manual Movement\")\n        self.Bind(wx.EVT_BUTTON, self.manual_move, self.manual_btn)\n        self.run_btn = wx.Button(self.scan_panel, run_id, \"Run\")\n        self.Bind(wx.EVT_BUTTON, self.run_area_scan, self.run_btn)\n\n        # Menu Bar\n        menubar = wx.MenuBar()\n        helpmenu = wx.Menu()\n        shortcuthelp_item = wx.MenuItem(helpmenu, help_id, text=\"Shortcuts\", kind=wx.ITEM_NORMAL)\n        pause_item = wx.MenuItem(helpmenu, pause_id, text=\"Pause\", kind=wx.ITEM_NORMAL)\n        helpmenu.Append(shortcuthelp_item)\n        helpmenu.Append(pause_item)\n        menubar.Append(helpmenu, 'Help')\n        self.Bind(wx.EVT_MENU, self.showshortcuts, id=help_id)\n        self.Bind(wx.EVT_MENU, self.pauseProg, id=pause_id)\n        self.SetMenuBar(menubar)\n\n        # Sizers/Layout, Static Lines, & Static Boxes\n        self.saveline_sizer = wx.BoxSizer(wx.HORIZONTAL)\n        self.checkbox_sizer = wx.BoxSizer(wx.HORIZONTAL)\n        self.pos_sizer = wx.BoxSizer(wx.HORIZONTAL)\n        self.span_sizer = wx.BoxSizer(wx.HORIZONTAL)\n        self.test_info_sizer = wx.GridSizer(rows=4, cols=2, hgap=0, vgap=0)\n        self.text_input_sizer = wx.BoxSizer(wx.VERTICAL)\n    
self.radio_input_sizer = wx.BoxSizer(wx.VERTICAL)\n self.btn_sizer = wx.BoxSizer(wx.HORIZONTAL)\n self.mainh_sizer = wx.BoxSizer(wx.HORIZONTAL)\n self.mainv_sizer = wx.BoxSizer(wx.VERTICAL)\n\n self.text_input_sizer.Add(self.scan_settings_text, proportion=0, border=3, flag=wx.BOTTOM)\n self.text_input_sizer.Add(self.x_distance_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.xdesc_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.x_tctrl, proportion=0, flag=wx.LEFT | wx.EXPAND)\n self.text_input_sizer.Add(self.y_distance_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.ydesc_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.y_tctrl, proportion=0, flag=wx.LEFT | wx.EXPAND)\n self.text_input_sizer.Add(self.grid_step_dist_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.griddesc_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.grid_tctrl, proportion=0, flag=wx.LEFT | wx.EXPAND)\n\n self.text_input_sizer.Add(self.start_point_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.posdesc_text, proportion=0, flag=wx.LEFT)\n self.pos_sizer.Add(self.pos_text, proportion=0, flag=wx.LEFT)\n self.pos_sizer.Add(self.pos_tctrl, proportion=1, flag=wx.LEFT | wx.RIGHT | wx.EXPAND, border=5)\n self.text_input_sizer.Add(self.pos_sizer, proportion=0, flag=wx.EXPAND)\n\n self.text_input_sizer.Add(self.times_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.dwell_time_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.dwell_tctrl, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.zoom_scan_dwell_time_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.zdwell_tctrl, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.span_text, proportion=0, flag=wx.LEFT)\n self.span_sizer.Add(self.span_start_text, proportion=0, flag=wx.LEFT)\n self.span_sizer.Add(self.span_start_tctrl, proportion=1, flag=wx.LEFT | 
wx.RIGHT | wx.EXPAND, border=5)\n self.span_sizer.Add(self.span_stop_text, proportion=0, flag=wx.LEFT)\n self.span_sizer.Add(self.span_stop_tctrl, proportion=1, flag=wx.LEFT | wx.EXPAND, border=5)\n self.text_input_sizer.Add(self.span_sizer, proportion=0, flag=wx.EXPAND)\n self.text_input_sizer.Add(self.save_dir_text, proportion=0, flag=wx.LEFT)\n self.text_input_sizer.Add(self.savedesc_text, proportion=0, flag=wx.LEFT)\n self.saveline_sizer.Add(self.save_tctrl, proportion=1, flag=wx.LEFT | wx.EXPAND)\n self.saveline_sizer.Add(self.save_btn, proportion=0, flag=wx.ALIGN_RIGHT | wx.LEFT, border=5)\n self.checkbox_sizer.Add(self.auto_checkbox, proportion=0, flag=wx.ALIGN_LEFT | wx.ALL, border=5)\n self.checkbox_sizer.Add(self.zoom_checkbox, proportion=0, flag=wx.ALIGN_LEFT | wx.ALL, border=5)\n self.text_input_sizer.Add(self.saveline_sizer, proportion=0, flag=wx.LEFT | wx.EXPAND)\n self.text_input_sizer.Add(self.checkbox_sizer, proportion=0, flag=wx.LEFT | wx.EXPAND)\n self.text_input_sizer.Add(wx.StaticLine(self.scan_panel, wx.ID_ANY, style=wx.LI_HORIZONTAL),\n proportion=0, border=5, flag=wx.TOP | wx.BOTTOM | wx.EXPAND)\n self.text_input_sizer.Add(self.test_info_text, proportion=0, flag=wx.BOTTOM, border=3)\n self.test_info_sizer.Add(self.eut_model_text, proportion=0)\n self.test_info_sizer.Add(self.eut_model_tctrl, proportion=0, flag=wx.EXPAND)\n self.test_info_sizer.Add(self.eut_sn_text, proportion=0)\n self.test_info_sizer.Add(self.eut_sn_tctrl, proportion=0, flag=wx.EXPAND)\n self.test_info_sizer.Add(self.initials_text, proportion=0)\n self.test_info_sizer.Add(self.initials_tctrl, proportion=0, flag=wx.EXPAND)\n self.test_info_sizer.Add(self.test_num_text, proportion=0)\n self.test_info_sizer.Add(self.test_num_tctrl, proportion=0, flag=wx.EXPAND)\n self.text_input_sizer.Add(self.test_info_sizer, proportion=0, flag=wx.EXPAND)\n\n self.radio_input_sizer.Add(self.measurement_specs_text, proportion=0, border=3, flag=wx.BOTTOM)\n 
self.radio_input_sizer.Add(self.type_rbox, proportion=0, flag=wx.ALL | wx.EXPAND)\n self.radio_input_sizer.Add(self.field_rbox, proportion=0, flag=wx.ALL | wx.EXPAND)\n self.radio_input_sizer.Add(self.side_rbox, proportion=0, flag=wx.ALL | wx.EXPAND)\n self.radio_input_sizer.Add(self.rbw_rbox, proportion=0, flag=wx.ALL | wx.EXPAND)\n self.radio_input_sizer.Add(self.meas_rbox, proportion=0, flag=wx.ALL | wx.EXPAND)\n\n self.mainh_sizer.Add(self.text_input_sizer, proportion=2, border=5, flag=wx.ALL | wx.EXPAND)\n self.mainh_sizer.Add(wx.StaticLine(self.scan_panel, wx.ID_ANY, style=wx.LI_VERTICAL),\n proportion=0, border=5, flag=wx.TOP | wx.BOTTOM | wx.EXPAND)\n self.mainh_sizer.Add(self.radio_input_sizer, proportion=1, border=5, flag=wx.ALL | wx.EXPAND)\n\n self.btn_sizer.Add(self.reset_btn, proportion=1, border=5,\n flag=wx.ALIGN_RIGHT | wx.LEFT | wx.TOP | wx.BOTTOM)\n self.btn_sizer.Add(self.manual_btn, proportion=1, border=5,\n flag=wx.ALIGN_RIGHT | wx.LEFT | wx. TOP | wx.BOTTOM)\n self.btn_sizer.Add(self.run_btn, proportion=1, border=5, flag=wx.ALIGN_RIGHT | wx.ALL)\n\n self.mainv_sizer.Add(self.mainh_sizer, proportion=1, border=0, flag=wx.ALL | wx.EXPAND)\n self.mainv_sizer.Add(wx.StaticLine(self.scan_panel, wx.ID_ANY, style=wx.LI_HORIZONTAL),\n proportion=0, border=0, flag=wx.ALL | wx.EXPAND)\n self.mainv_sizer.Add(self.btn_sizer, proportion=0, border=5, flag=wx.ALIGN_RIGHT)\n\n # load previous configuration when initialize the panel\n self.load_configuration()\n\n self.scan_panel.SetSizer(self.mainv_sizer)\n pan_size = self.scan_panel.GetSize()\n print(pan_size)\n self.SetSize(pan_size)\n self.SetMinSize(pan_size)\n self.SetMaxSize(pan_size)\n self.SetAutoLayout(True)\n # self.scan_panel.Fit()\n self.mainv_sizer.Fit(self.scan_panel)\n self.Layout()\n self.Show(True)\n\n def select_save_dir(self, e):\n \"\"\"\n Opens quick dialog to select the save directory for the output files (.txt, .png) from the automatic\n measurements. 
Writes the directory name on the TextCtrl object on the GUI (self.save_tctrl).\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n # TODO: Currently, there is a problem with wx.DirDialog, so I have resorted to using multidirdialog\n # TODO: When the bugs are fixed on their end, revert back to the nicer looking wx.DirDialog\n with mdd.MultiDirDialog(None, \"Select save directory for output files.\",\n style=mdd.DD_DIR_MUST_EXIST | mdd.DD_NEW_DIR_BUTTON) as dlg:\n if dlg.ShowModal() == wx.ID_CANCEL:\n return\n # Correcting name format to fit future save functions\n path = dlg.GetPaths()[0]\n path = path.split(':')[0][-1] + ':' + path.split(':)')[1]\n self.save_tctrl.SetValue(path)\n # with wx.DirDialog(self, \"Select save directory for '.txt' and '.png' files.\",\n # style=wx.DD_DEFAULT_STYLE | wx.DD_DIR_MUST_EXIST) as dlg:\n # if dlg.ShowModal() == wx.ID_CANCEL:\n # return\n # self.save_dir = dlg.GetPath()\n # self.save_tctrl.SetValue(self.save_dir)\n # if os.path.exists(parentpath):\n\n def save_configuration(self, filename='prev_config.txt'):\n \"\"\"\n Save current configuration to a txt file\n\n \"\"\"\n try:\n config = {}\n config['x'] = self.x_tctrl.GetValue()\n config['y'] = self.y_tctrl.GetValue()\n config['step'] = self.grid_tctrl.GetValue()\n config['start_pos'] = self.pos_tctrl.GetValue()\n config['dwell'] = self.dwell_tctrl.GetValue()\n config['zdwell'] = self.zdwell_tctrl.GetValue()\n config['start'] = self.span_start_tctrl.GetValue()\n config['stop'] = self.span_stop_tctrl.GetValue()\n config['checkbox'] = self.auto_checkbox.GetValue()\n config['type'] = self.type_rbox.GetSelection()\n config['field'] = self.field_rbox.GetSelection()\n config['side'] = self.side_rbox.GetSelection()\n config['rbw'] = self.rbw_rbox.GetSelection()\n config['measurement'] = self.meas_rbox.GetSelection()\n config['dir'] = self.save_tctrl.GetValue()\n config['zoom'] = self.zoom_checkbox.GetValue()\n\n json.dump(config,open(filename,'w'))\n\n except ValueError:\n 
self.errormsg(\"Invalid scan parameters.\\nCannot save current configuration.\")\n return\n\n return\n\n def load_configuration(self, filename='prev_config.txt'):\n \"\"\"\n Load the saved configuration\n\n \"\"\"\n # TODO: Add error exception\n if os.path.exists(filename):\n config = json.load(open(filename))\n self.x_tctrl.SetValue(config['x'])\n self.y_tctrl.SetValue(config['y'])\n self.grid_tctrl.SetValue(config['step'])\n self.pos_tctrl.SetValue(config['start_pos'])\n self.dwell_tctrl.SetValue(config['dwell'])\n self.zdwell_tctrl.SetValue(config['zdwell'])\n self.span_start_tctrl.SetValue(config['start'])\n self.span_stop_tctrl.SetValue(config['stop'])\n self.auto_checkbox.SetValue(config['checkbox'])\n self.type_rbox.SetSelection(int(config['type']))\n self.field_rbox.SetSelection(int(config['field']))\n self.side_rbox.SetSelection(int(config['side']))\n self.rbw_rbox.SetSelection(int(config['rbw']))\n self.meas_rbox.SetSelection(int(config['measurement']))\n self.save_tctrl.SetValue(config['dir'])\n self.zoom_checkbox.SetValue(config['zoom'])\n\n def run_area_scan(self, e):\n \"\"\"\n Begins general area scan based on the measurement settings specified on the GUI.\n Starts and runs an instance of AreaScanThread to perform automatic area scan.\n Opens console GUI to help user track progress of the program.\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n # Make sure entries are valid\n if self.save_tctrl.GetValue() is None or \\\n self.save_tctrl.GetValue() is '' or \\\n not os.path.exists(self.save_tctrl.GetValue()):\n self.errormsg(\"Please select a valid save directory for the output files.\")\n return\n try:\n self.save_configuration()\n x = float(self.x_tctrl.GetValue())\n y = float(self.y_tctrl.GetValue())\n step = float(self.grid_tctrl.GetValue())\n dwell = float(self.dwell_tctrl.GetValue())\n span_start = float(self.span_start_tctrl.GetValue())\n span_stop = float(self.span_stop_tctrl.GetValue())\n start_pos = 
int(self.pos_tctrl.GetValue())\n zoom_scan = self.zoom_checkbox.GetValue()\n except ValueError:\n self.errormsg(\"Invalid scan parameters.\\nPlease input numerical values only.\")\n return\n # Build comment for savefiles\n if self.eut_model_tctrl.GetValue() is '' or self.eut_sn_tctrl.GetValue() is '' or \\\n self.initials_tctrl.GetValue() is '' or self.test_num_tctrl.GetValue() is '':\n self.errormsg(\"Please fill out all entries in the 'Test Information' section.\")\n return\n comment = \"Model of EUT: \" + self.eut_model_tctrl.GetValue() + \\\n \" - \\r\\nS/N of EUT: \" + self.eut_sn_tctrl.GetValue() + \\\n \" - \\r\\nTest Engineer Initials: \" + self.initials_tctrl.GetValue() + \\\n \" - \\r\\nTest Number: \" + self.test_num_tctrl.GetValue()\n savedir = self.save_tctrl.GetValue()\n # Finding the measurement type\n meas_type = self.type_rbox.GetStringSelection()\n # Finding the measurement field\n meas_field = self.field_rbox.GetStringSelection()\n # Finding the measurement side\n meas_side = self.side_rbox.GetStringSelection()\n # Finding the RBW setting\n meas_rbw = self.rbw_rbox.GetStringSelection()\n # Finding the measurement\n meas = self.meas_rbox.GetStringSelection()\n start_pos = int(self.pos_tctrl.GetValue())\n self.logger = Log(self.getLogFileName()).getLogger()\n\n if zoom_scan:\n # convert grid number to row and col\n try:\n zdwell = float(self.zdwell_tctrl.GetValue())\n except ValueError:\n self.errormsg(\"Invalid scan parameters.\\nPlease input numerical values only.\")\n return\n # Preparation\n step_unit = 0.00508 # TODO: Used the default step_unit for now, fix this when unit change\n num_steps = step / step_unit\n x_points = int(np.ceil(np.around(x / step, decimals=3))) + 1\n y_points = int(np.ceil(np.around(y / step, decimals=3))) + 1\n #print(\"x_points: \", x_points)\n #print(\"y_points: \", y_points)\n self.run_thread = ZoomScanThread(self, zdwell, span_start, span_stop,savedir, comment, meas_type,\n meas_field, meas_side, meas_rbw, meas, 
num_steps, self.values, self.grid,\n self.curr_row, self.curr_col, zoom_scan, start_pos, x_points, y_points)\n else:\n try:\n self.run_thread = AreaScanThread(self, x, y, step, dwell, span_start, span_stop, savedir,\n comment, meas_type, meas_field, meas_side, meas_rbw, meas, start_pos)\n except:\n self.run_thread.join()\n\n # self.disablegui() # TODO:Check if need disable gui\n self.logger.info(datetime.now().strftime(\"%Y/%m/%d\"))\n if not self.console_frame:\n self.console_frame = ConsoleGUI(self, \"Console\")\n self.console_frame.Show(True)\n sys.stdout = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stdout to the console\n sys.stderr = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stderr to the console\n print(\"Running general scan...\")\n self.logger.info(\"Running general scan...\")\n self.run_thread.start()\n\n def run_post_scan(self):\n \"\"\"\n Plots the area scan results and prompts the user for a post-scan option ('Exit', 'Zoom Scan', 'Correct previous\n value', 'Save data'). 
Called by the area scan threads (AreaScanThread, ZoomScanThread, CorrectionThread)\n once threads are closed.\n\n :return: Nothing.\n \"\"\"\n # Plot the scan\n plotvals = np.copy(self.values)\n plotvals = np.rot90(plotvals)\n plt.close()\n plt.imshow(plotvals, interpolation='bilinear',\n extent=[0, plotvals.shape[1] - 1, 0, plotvals.shape[0] - 1])\n plt.title('Area Scan Heat Map')\n cbar = plt.colorbar()\n cbar.set_label('Signal Level')\n plt.show(block=False)\n\n # Post Scan GUI - User selects which option to proceed with\n with PostScanGUI(self, title=\"Post Scan Options\", style=wx.DEFAULT_DIALOG_STYLE | wx.OK) as post_dlg:\n if post_dlg.ShowModal() == wx.ID_OK:\n choice = post_dlg.option_rbox.GetStringSelection()\n print(\"Choice: \", choice)\n else:\n print(\"No option selected - Area Scan Complete.\")\n self.logger.info(\"No option selected - Area Scan Complete.\")\n self.enablegui()\n return\n\n if choice == 'Zoom Scan':\n try:\n zdwell = float(self.zdwell_tctrl.GetValue())\n except ValueError:\n self.errormsg(\"Invalid scan parameters.\\nPlease input numerical values only.\")\n return\n savedir = self.save_tctrl.GetValue()\n # Finding the measurement type\n meas_type = self.type_rbox.GetStringSelection()\n # Finding the measurement field\n meas_field = self.field_rbox.GetStringSelection()\n # Finding the measurement side\n meas_side = self.side_rbox.GetStringSelection()\n # Finding the RBW setting\n meas_rbw = self.rbw_rbox.GetStringSelection()\n # Finding the measurement\n meas = self.meas_rbox.GetStringSelection()\n\n self.zoom_thread = ZoomScanThread(self, zdwell, self.run_thread.span_start, self.run_thread.span_stop,\n savedir, self.run_thread.comment, meas_type, meas_field, meas_side,\n meas_rbw, meas, self.run_thread.num_steps, self.values, self.grid,\n self.curr_row, self.curr_col, False, 0, 0, 0)\n if not self.console_frame:\n self.console_frame = ConsoleGUI(self, \"Console\")\n self.console_frame.Show(True)\n sys.stdout = 
TextRedirector(self.console_frame.console_tctrl) # Redirect text from stdout to the console\n sys.stderr = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stderr to the console\n self.zoom_thread.start()\n\n elif choice == 'Correct Previous Value':\n loc_gui = LocationSelectGUI(self, \"Location Selection\", self.grid)\n loc_gui.Show(True)\n elif choice == 'Save Data':\n pass\n elif choice == 'Exit':\n print(\"Area Scan Complete. Exiting module.\")\n self.logger.info(\"Area Scan Complete. Exiting module.\")\n self.enablegui()\n\n def update_values(self, call_thread):\n \"\"\"\n Updates the variables stored in the MainFrame based on the measurement results returned from the scans.\n Called by the area scan threads (AreaScanThread, ZoomScanThread, CorrectionThread).\n\n :param call_thread: The thread calling the update method and updating the variables stored in the MainFrame.\n :return: Nothing.\n \"\"\"\n self.curr_row = call_thread.curr_row\n self.curr_col = call_thread.curr_col\n if type(call_thread) is AreaScanThread:\n self.values = call_thread.values\n self.grid = call_thread.grid\n self.max_fname = call_thread.max_fname\n elif type(call_thread) is CorrectionThread:\n self.values = call_thread.values\n elif type(call_thread) is ZoomScanThread:\n self.zoom_values = call_thread.zoom_values\n\n def run_correction(self, target_index):\n \"\"\"\n Runs the 'Correct Previous Value' option from the Post Scan GUI. 
Starts and runs an instance of\n CorrectionThread to retake a measurement in the specified coordinate.\n\n :param target_index: the index in the grid that the user chooses to correct.\n :return: Nothing.\n \"\"\"\n savedir = self.save_tctrl.GetValue()\n # Finding the measurement type\n meas_type = self.type_rbox.GetStringSelection()\n # Finding the measurement field\n meas_field = self.field_rbox.GetStringSelection()\n # Finding the measurement side\n meas_side = self.side_rbox.GetStringSelection()\n # Finding the RBW setting\n meas_rbw = self.rbw_rbox.GetStringSelection()\n self.corr_thread = CorrectionThread(self, target_index, self.run_thread.num_steps,\n float(self.dwell_tctrl.GetValue()), self.run_thread.span_start,\n self.run_thread.span_stop, self.values, self.grid,\n self.curr_row, self.curr_col, savedir, self.run_thread.comment,\n meas_type, meas_field, meas_side, meas_rbw, self.max_fname)\n if not self.console_frame:\n self.console_frame = ConsoleGUI(self, \"Console\")\n self.console_frame.Show(True)\n sys.stdout = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stdout to the console\n sys.stderr = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stderr to the console\n self.corr_thread.start()\n\n def manual_move(self, e):\n \"\"\"\n Allows user to manually move the position of the NS probe. 
Opens a terminal console if not open already.\n Creates and shows instance of ManualMoveGUI to allow direct input from the user.\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n if not self.console_frame:\n self.console_frame = ConsoleGUI(self, \"Console\")\n self.console_frame.Show(True)\n sys.stdout = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stdout to the console\n sys.stderr = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stderr to the console\n try:\n step = float(self.grid_tctrl.GetValue())\n except ValueError:\n self.errormsg(\"Invalid scan parameters.\\nPlease input numerical values only.\")\n return\n manual = ManualMoveGUI(self, \"Manual Movement\", step)\n manual.Show(True)\n\n def reset_motors(self, e):\n \"\"\"\n Resets the motors back to their default position. Starts and runs instance of ResetThread to facilitate motor\n resets. Opens terminal console if not open already.\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n self.disablegui()\n if not self.console_frame:\n self.console_frame = ConsoleGUI(self, \"Console\")\n self.console_frame.Show(True)\n sys.stdout = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stdout to the console\n sys.stderr = TextRedirector(self.console_frame.console_tctrl) # Redirect text from stderr to the console\n ResetThread(self).start()\n\n def enablegui(self):\n \"\"\"\n Re-enables all MainFrame GUI elements.\n\n :return: Nothing.\n \"\"\"\n self.x_tctrl.Enable(True)\n self.y_tctrl.Enable(True)\n self.grid_tctrl.Enable(True)\n self.dwell_tctrl.Enable(True)\n self.zdwell_tctrl.Enable(True)\n self.span_start_tctrl.Enable(True)\n self.span_stop_tctrl.Enable(True)\n self.save_tctrl.Enable(True)\n self.auto_checkbox.Enable(True)\n self.zoom_checkbox.Enable(True)\n self.save_btn.Enable(True)\n self.eut_model_tctrl.Enable(True)\n self.eut_sn_tctrl.Enable(True)\n self.initials_tctrl.Enable(True)\n self.test_num_tctrl.Enable(True)\n 
self.type_rbox.Enable(True)\n self.field_rbox.Enable(True)\n self.side_rbox.Enable(True)\n self.rbw_rbox.Enable(True)\n self.reset_btn.Enable(True)\n self.manual_btn.Enable(True)\n self.run_btn.Enable(True)\n\n def disablegui(self):\n \"\"\"\n Disables all MainFrame GUI elements.\n\n :return: Nothing.\n \"\"\"\n self.x_tctrl.Enable(False)\n self.y_tctrl.Enable(False)\n self.grid_tctrl.Enable(False)\n self.dwell_tctrl.Enable(False)\n self.zdwell_tctrl.Enable(False)\n self.span_start_tctrl.Enable(False)\n self.span_stop_tctrl.Enable(False)\n self.save_tctrl.Enable(False)\n self.auto_checkbox.Enable(False)\n self.zoom_checkbox.Enable(False)\n self.save_btn.Enable(False)\n self.eut_model_tctrl.Enable(False)\n self.eut_sn_tctrl.Enable(False)\n self.initials_tctrl.Enable(False)\n self.test_num_tctrl.Enable(False)\n self.type_rbox.Enable(False)\n self.field_rbox.Enable(False)\n self.side_rbox.Enable(False)\n self.rbw_rbox.Enable(False)\n self.reset_btn.Enable(False)\n self.manual_btn.Enable(False)\n self.run_btn.Enable(False)\n\n def showshortcuts(self, e):\n \"\"\"\n Opens simple dialog listing the different shortcuts of the program.\n\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n shortcuts_string = \"Shortcuts:\\n\" +\\\n \"Select Save Directory: Ctrl + S\\n\" +\\\n \"Reset Motors: Ctrl + T\\n\" +\\\n \"Manual Movement: Ctrl + M\\n\" +\\\n \"Run Analysis: Ctrl + E\\n\" +\\\n \"Check Shortcut Keys: Ctrl + H\"\n with wx.MessageDialog(self, shortcuts_string, 'Shortcut Keys',\n style=wx.OK | wx.ICON_QUESTION | wx.CENTER) as dlg:\n dlg.ShowModal()\n\n def pauseProg(self,e):\n \"\"\"\n pause the program\n \"\"\"\n print(\"press Enter to continue\")\n\n while True:\n key = keyboard.read_key()\n\n if key == \"enter\":\n print(\"Program is resumed\")\n # exit(0)\n break\n\n time.sleep(0.5)\n\n def errormsg(self, errmsg):\n \"\"\"\n Shows an error message as a wx.Dialog.\n\n :param errmsg: String error message to show in the message dialog.\n :return: Nothing\n 
\"\"\"\n with wx.MessageDialog(self, errmsg, style=wx.OK | wx.ICON_ERROR | wx.CENTER) as dlg:\n dlg.ShowModal()\n\n def getLogFileName(self):\n \"\"\"\n Create log filename as Datetime + Model + Test Number\n\n :return: None\n \"\"\"\n filename = ''\n # get Model Name and Test Number\n try:\n model = self.eut_model_tctrl.GetValue()\n testNum = self.test_num_tctrl.GetValue()\n except ValueError:\n self.errormsg(\"Invalid scan parameters.\\nPlease input numerical values only.\")\n return\n\n filename += str(datetime.now().strftime(\"%Y%m%d_%H%M%S\"))\n filename += \"_\" + model + \"_\" + testNum\n filename = \"log/\"+filename\n filename += \".log\"\n return filename\n\nif __name__ == \"__main__\":\n xy_positioner_gui = wx.App()\n fr = MainFrame(None, title='XY Positioner (for NS Testing)')\n xy_positioner_gui.MainLoop()\n"
},
{
"alpha_fraction": 0.6558197736740112,
"alphanum_fraction": 0.6558197736740112,
"avg_line_length": 32.29166793823242,
"blob_id": "406a39ee24baddd22733180a97fe80c0fae86378",
"content_id": "bdefac19d75ef3f0ce6df67921808f2acac789e8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 799,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 24,
"path": "/src/logger.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "import logging\n\n#logging.basicConfig(filename=\"NS Testing.log\", level=logging.INFO)\n\nclass Log():\n def __init__(self, filename):\n self.filename = filename\n\n def getLogger(self):\n logger = logging.getLogger('user')\n logger.setLevel(logging.INFO)\n formatter = logging.Formatter(\"%(asctime)s:%(message)s\", \"%H:%M:%S\")\n\n # handle info to a log file\n # file_handler = logging.FileHandler(\"log/NS Testing.log\")\n file_handler = logging.FileHandler(self.filename)\n file_handler.setFormatter(formatter)\n logger.addHandler(file_handler)\n\n # handle info to the sys.stdout\n consoleHandler = logging.StreamHandler()\n consoleHandler.setFormatter(formatter)\n # logger.addHandler(consoleHandler)\n return logger\n"
},
{
"alpha_fraction": 0.5946205854415894,
"alphanum_fraction": 0.5989433526992798,
"avg_line_length": 35.543861389160156,
"blob_id": "71340260fb308b680e1c84a4b68353eb54ce79b4",
"content_id": "19193e60d205108efb685d74ebe0205273deac85",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2082,
"license_type": "permissive",
"max_line_length": 118,
"num_lines": 57,
"path": "/src/post_scan_gui.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nPost-Scan GUI\n\nThis is the GUI module for selecting a function after the general area scan (e.g. Correcting a previous measurement,\nperforming a zoom scan, exiting the area scan module). This GUI is intended to be run in conjunction with the\n'xy_positioner_gui.py' only.\n\nThis module contains a single class:\n - PostScanGUI(wx.Dialog): basic GUI dialog to select between correcting a prev. measurement, perform a zoom scan,\n save data (NOTE: note yet implemented), or exiting the area scan module.\n\nAuthors:\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\n\"\"\"\n\nimport wx\n\n\nclass PostScanGUI(wx.Dialog):\n \"\"\"\n GUI that provides user post-scan options (e.g. Exit, Zoom Scan, Correct Previous Value, Save Data).\n \"\"\"\n def __init__(self, *args, **kw):\n super(PostScanGUI, self).__init__(*args, **kw)\n\n # UI Elements\n self.option_rbox = wx.RadioBox(self,\n label=\"Options\",\n choices=['Exit', 'Zoom Scan', 'Correct Previous Value', 'Save Data'],\n style=wx.RA_SPECIFY_COLS,\n majorDimension=1)\n self.select_btn = wx.Button(self, wx.ID_OK, \"Select\")\n self.Bind(wx.EVT_CLOSE, self.OnQuit)\n\n # Sizers/Layout, Static Lines, & Static Boxes\n self.mainv_sizer = wx.BoxSizer(wx.VERTICAL)\n\n self.mainv_sizer.Add(self.option_rbox, proportion=1, border=5, flag=wx.EXPAND | wx.ALL)\n self.mainv_sizer.Add(self.select_btn, proportion=0, border=5, flag=wx.EXPAND | wx.LEFT | wx.RIGHT | wx.BOTTOM)\n self.SetSizer(self.mainv_sizer)\n self.SetAutoLayout(True)\n self.mainv_sizer.Fit(self)\n\n def OnQuit(self, e):\n \"\"\"\n Function called on quit.\n :param e: Event handler.\n :return: Nothing.\n \"\"\"\n self.Destroy()\n\n\nif __name__ == '__main__':\n post_scan_gui = wx.App()\n panel = PostScanGUI(None)\n panel.ShowModal()\n post_scan_gui.MainLoop()"
},
{
"alpha_fraction": 0.5964154601097107,
"alphanum_fraction": 0.6036853194236755,
"avg_line_length": 39.0374641418457,
"blob_id": "fc88fcec78f192011f20d58d25694b5204f6e950",
"content_id": "ada5c3b5c0ea4c97df82a51dbeff433bc91d3d62",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13893,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 347,
"path": "/src/narda_navigator.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nNARDA Software Navigator\n\nThis module contains automation scripts for navigating through the NARDA software and taking measurements.\nThe scripts use image-based automation and make use of the images in 'XY_Positioner_GUI/narda_navigator_referencepics'\nto guide the mouse and keyboard.\n\nThe navigator takes control over the EHP200-TS program and the Snipping Tool program to take NS measurements and\ntake plot screenshots.\n\nThe module contains a single class:\n - NardaNavigator(): driver class for the NARDA automation scripts.\n\nAuthors:\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\n\"\"\"\n\nimport os\nimport warnings\nimport time\nimport pyautogui as pgui\nimport pywinauto as pwin\nfrom win32com.client import GetObject\nfrom pywinauto import application\n\n\nclass NardaNavigator:\n \"\"\"\n Driver class for the NARDA automation scripts.\n \"\"\"\n def __init__(self):\n pgui.PAUSE = 0.55 # Set appropriate amount of pause time so that the controlled programs can keep up w/ auto\n pgui.FAILSAFE = True # True - abort program mid-automation by moving mouse to upper left corner\n self.refpics_path = 'narda_navigator_referencepics'\n self.ehp200_path = \"C:\\\\Program Files (x86)\\\\NardaSafety\\\\EHP-TS\\\\EHP200-TS\\\\EHP200.exe\"\n self.snip_path = \"C:\\\\Windows\\\\System32\\\\SnippingTool.exe\"\n self.ehp200_app = application.Application()\n self.snip_tool = application.Application()\n self.startSnip()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", category=UserWarning)\n self.startNarda()\n\n def startSnip(self):\n \"\"\"\n Opens the 'Snipping Tool' program.\n\n :return: Nothing.\n \"\"\"\n WMI = GetObject('winmgmts:')\n processes = WMI.InstancesOf('Win32_Process')\n p_list = [p.Properties_('Name').Value for p in processes]\n # If program already open, close and restart\n # Found that this was necessary for a bug-less run\n if self.snip_path.split('\\\\')[-1] in 
p_list:\n self.snip_tool.connect(path=self.snip_path)\n self.snip_tool.kill()\n print(\"Starting Snipping Tool - Connecting...\")\n print(\"NOTE: 'Snipping Tool', once open, must be ACTIVE (i.e. the front-most window) for the NS Scan\"\n \"Program to connect to it.\")\n self.snip_tool.start(self.snip_path)\n # Wait until the window has been opened\n while not pgui.locateOnScreen(self.refpics_path + '/snip_window_title.PNG'):\n pass\n print(\"Snipping Tool opened successfully.\")\n\n def startNarda(self):\n \"\"\"\n Opens the EHP200-TS NARDA program.\n\n :return: Nothing.\n \"\"\"\n WMI = GetObject('winmgmts:')\n processes = WMI.InstancesOf('Win32_Process')\n p_list = [p.Properties_('Name').Value for p in processes]\n # If program already open, close and restart\n # Found that this was necessary for a bug-less run\n if self.ehp200_path.split('\\\\')[-1] in p_list:\n self.ehp200_app.connect(path=self.ehp200_path)\n self.ehp200_app.kill()\n print(\"Starting EHP200 program - Connecting...\")\n print(\"NOTE: The 'EHP200' program, once open, must be ACTIVE (i.e. 
the front-most window) for the NS Scan\"\n \"Program to connect to it.\")\n self.ehp200_app.start(self.ehp200_path)\n # Wait until the window has been opened\n while not pgui.locateOnScreen(self.refpics_path + '/window_title.PNG'):\n pass\n print(\"EHP200 opened successfully.\")\n\n def selectTab(self, tabName):\n \"\"\"\n Selects specified tab in the NARDA program.\n\n :param tabName: String name of the tab to select.\n :return: Nothing.\n \"\"\"\n tabName = tabName.lower()\n selectedName = '/' + tabName + '_tab_selected.PNG'\n deselectedName = '/' + tabName + '_tab_deselected.PNG'\n try:\n if not pgui.locateOnScreen(self.refpics_path + selectedName):\n x, y, w, h = pgui.locateOnScreen(self.refpics_path + '/' + tabName + '_tab_deselected.PNG',\n grayscale=True)\n pgui.click(pgui.center((x, y, w, h)))\n except TypeError:\n print('Error: Reference images not found on screen...')\n exit(1)\n\n def selectInputField(self, meas_field):\n \"\"\"\n Selects the input field in the NARDA program.\n\n :param meas_field: Measurement field (electric or magnetic (mode A or B)).\n :return: Nothing.\n \"\"\"\n if meas_field == 'Electric':\n pgui.click(pgui.locateCenterOnScreen(self.refpics_path + '/electric.PNG'))\n elif meas_field == 'Magnetic (Mode A)':\n pgui.click(pgui.locateCenterOnScreen(self.refpics_path + '/magnetic_modea.PNG'))\n elif meas_field == 'Magnetic (Mode B)':\n pgui.click((pgui.locateCenterOnScreen(self.refpics_path + '/magnetic_modeb.PNG')))\n\n def selectRBW(self, meas_rbw):\n \"\"\"\n Selects the RBW setting in the NARDA program.\n\n :param meas_rbw: Measurement RBW (in kHz).\n :return: Nothing.\n \"\"\"\n fname = meas_rbw.lower().replace(' ', '_')\n pgui.click((pgui.locateCenterOnScreen((self.refpics_path + '/' + fname + '.PNG'))))\n\n def inputTextEntry(self, ref_word, input_val, direction='right'):\n \"\"\"\n Fills a given text entry with a specified value.\n\n :param ref_word: Text entry name (Reference word/point).\n :param input_val: Input value for the 
text entry.\n :param direction: Direction of the text entry relative to the ref. word.\n :return: Nothing.\n \"\"\"\n # FIXME: Probably not gonna use 'direction' param, since always to the right...\n # Find coordinates of the reference word\n x, y = pgui.locateCenterOnScreen(self.refpics_path + '/' + ref_word + '.PNG')\n counter = 0 # counts how many continuous white spaces we find to determine if text entry\n pgui.moveTo(x, y)\n pgui.moveRel(85, 0)\n while pgui.position()[0] < pgui.size()[0]:\n pgui.moveRel(5, 0)\n im = pgui.screenshot()\n color = im.getpixel(pgui.position())\n if color == (255, 255, 255):\n counter += 1\n else:\n counter = 0\n # If whitespace identified (i.e. 3 contiguous white pixel measurements taken)\n if counter == 3:\n pgui.dragTo(x, y, duration=0.4) # Select the value\n pgui.typewrite(input_val)\n return\n # If the text entry location is not found, raise exception\n raise Exception\n\n def enableMaxHold(self):\n \"\"\"\n Turns 'Max Hold' on (i.e. selects the check box).\n\n :return: Nothing.\n \"\"\"\n # If not on the data tab, switch to it\n self.selectTab('data')\n\n try:\n pgui.click(pgui.locateCenterOnScreen(self.refpics_path + '/max_hold_unchecked.PNG', grayscale=True))\n except:\n return\n\n def takeMeasurement(self, dwell_time, measurement, filename, pathname, comment):\n \"\"\"\n Takes a measurement (highest_peak or WideBand) using the NARDA program.\n\n :param dwell_time: Time the NS probe stays in position before taking a measurement.\n :param filename: Filename to save the measurement outputs as.\n :param pathname: Save directory to hold output files.\n :return: The max value in the current measurement point.\n \"\"\"\n self.bringToFront()\n # If not on the data tab, switch to it\n self.selectTab('data')\n\n # Reset measurement by clicking on 'Free Scan' radio button\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/free_scan.PNG', grayscale=True)))\n\n # Todo: figure out if we need to check max hold here 
too..\n\n # Wait for the measurements to settle before taking measurements\n time.sleep(dwell_time)\n\n # Take the actual measurement after marking the highest peak\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/highest_peak.PNG', grayscale=True)))\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/save_as_text.PNG', grayscale=True)))\n\n # Input comment\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/comment.PNG', grayscale=True)))\n #time.sleep(0.3)\n pgui.typewrite(comment)\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/ok.PNG', grayscale=True)))\n\n # Save file\n pgui.typewrite(filename)\n # Change to directory of choice\n pgui.hotkey('ctrl', 'l')\n pgui.typewrite(pathname)\n pgui.press(['enter'])\n try:\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/save.PNG', grayscale=True)))\n except TypeError:\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/save_underscore.PNG', grayscale=True)))\n\n # Overwrite if file already exists\n try:\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/yes.PNG', grayscale=True)))\n print(\"File '\" + filename + \".txt'\" + \" already exists - overwriting file.\")\n except TypeError:\n print(\"New file '\" + filename + \".txt'\" + \" has been saved.\")\n\n # Wait for file to have saved properly\n while not os.path.isfile(pathname + '/' + filename + '.txt'):\n pass\n\n # Return max recorded value\n return self.getMaxValue(filename, pathname, measurement)\n\n def saveBitmap(self, filename, pathname):\n \"\"\"\n Saves a partial screenshot of the NARDA GUI, called when a new max value has been found.\n\n :param filename: Filename to save the image file as.\n :param pathname: Save directory to save the image file in.\n :return: Nothing.\n \"\"\"\n # Stack Snipping Tool in front of the NARDA program\n self.bringToFront()\n # Open Data Tab\n self.selectTab('data')\n context = self.bringSnipToFront()\n if context 
== 'snipping':\n pgui.hotkey('alt', 'm')\n pgui.press('w')\n else:\n pgui.hotkey('ctrl', 'n')\n try:\n pgui.click(pgui.locateCenterOnScreen(self.refpics_path + '/narda_triangle.PNG'))\n except TypeError:\n pgui.click(pgui.locateCenterOnScreen(self.refpics_path + '/window_title_not_focused.PNG'))\n print(\"ERROR - \") # TODO: Do we even need this? What would be a better way to implement this\n return\n pgui.hotkey('ctrl', 's')\n # Save file\n pgui.typewrite('tmp')\n\n # Change to directory of choice\n pgui.hotkey('ctrl', 'l')\n pgui.typewrite(pathname)\n pgui.press(['enter'])\n try:\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/save.PNG', grayscale=True)))\n except TypeError:\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/save_underscore.PNG', grayscale=True)))\n\n # Overwrite if file already exists\n try:\n pgui.click(pgui.center(pgui.locateOnScreen(self.refpics_path + '/yes.PNG', grayscale=True)))\n # print(\"File '\" + filename + \".PNG'\" + \" already exists - overwriting file.\")\n except TypeError:\n pass\n # print(\"New file '\" + filename + \".PNG'\" + \" has been saved.\")\n # self.minimizeSnip()\n\n def getMaxValue(self, filepath, pathname, measurement):\n \"\"\"\n Extracts the 'Highest Peak (A/m)' value from a given text output file.\n :param filepath: Name of text output file.\n :param pathname: Directory/path to the text output file.\n :return: Returns the float 'Highest Peak (A/m)' value.\n \"\"\"\n with open(pathname + '/' + filepath + '.txt', 'r') as f:\n index = 7 if measurement == \"WideBand\" else 8\n print(\"measurement is \", measurement)\n #print(\"index: \", str(index))\n maxValLine = f.readlines()[index]\n for string in maxValLine.split(' '):\n try:\n maxVal = float(string)\n break # Breaks if we find the first numeric value\n except ValueError:\n continue\n return maxVal\n\n def saveCurrentLocation(self):\n \"\"\"\n TODO: Pretty sure this is not used\n :return:\n \"\"\"\n return pgui.position()\n\n 
def loadSavedLocation(self, x, y):\n \"\"\"\n TODO: Pretty sure this is not used\n :param x:\n :param y:\n :return:\n \"\"\"\n pgui.moveTo(x, y)\n\n def bringToFront(self):\n \"\"\"\n Sets the NARDA software window on focus (i.e. brought to the front/set as the active window).\n :return: Nothing.\n \"\"\"\n self.ehp200_app.EHP200.set_focus()\n\n def bringSnipToFront(self):\n \"\"\"\n Sets the Snipping Tool program window on focus (i.e. brought to the front/set as the active window).\n :return: Nothing.\n \"\"\"\n try:\n self.snip_tool.Snipping.set_focus()\n return 'snipping'\n except pwin.findbestmatch.MatchError: # If the snipping tool has already taken a snip, window is renamed 'edit'\n self.snip_tool.Edit.set_focus()\n return 'edit'\n\n def minimizeSnip(self):\n \"\"\"\n Minimizes the Snipping Tool program window.\n :return: Nothing.\n \"\"\"\n try:\n self.snip_tool.Snipping.Minimize()\n except pwin.findbestmatch.MatchError: # If the snipping tool has already taken a snip, window is renamed 'edit'\n self.snip_tool.Edit.set_focus()\n\n\nif __name__ == '__main__':\n ehp200 = NardaNavigator()\n"
},
{
"alpha_fraction": 0.6092203259468079,
"alphanum_fraction": 0.6175742745399475,
"avg_line_length": 31,
"blob_id": "89add2553e0c94ea57e2d5815de6b08fd8c04d02",
"content_id": "0383cbed552c134b0faa18b4f12a4da28aa9c437",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3232,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 101,
"path": "/src/console_gui.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nConsole GUI\n\nThis is the GUI module for displaying a terminal console for the NS Testing program. This GUI is intended to be run\nin conjunction with the 'xy_positioner_gui.py' only.\n\nThis module contains the following classes:\n - TextRedirecter(object): redirects stdout and stderr to the terminal console.\n - ConsoleGUI(wx.Frame): terminal console that displays all stdout and stderr from the main NS testing program.\n\nAuthors:\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\n\"\"\"\n\nimport wx\nimport keyboard\nimport time\n\n\nclass TextRedirector(object):\n \"\"\"\n Class for redirecting the print function output to the ConsoleGUI TextCtrl widget.\n \"\"\"\n def __init__(self, aWxTextCtrl):\n \"\"\"\n :param aWxTextCtrl: reference to the TextCtrl widget in the ConsoleGUI.\n \"\"\"\n self.out = aWxTextCtrl\n\n def write(self, string):\n \"\"\"\n Writes the console output to the ConsoleGUI TextCtrl widget.\n :param string: String to be written on the TextCtrl widget.\n :return: Nothing.\n \"\"\"\n self.out.AppendText(string)\n\n\nclass ConsoleGUI(wx.Frame):\n \"\"\"\n Custom Terminal Console GUI. 
Helps users keep track of the NS scan's progress.\n \"\"\"\n def __init__(self, parent, title):\n \"\"\"\n :param parent: Parent frame invoking the ConsoleGUI.\n :param title: Title for the GUI window.\n \"\"\"\n wx.Frame.__init__(self, parent, title=title, size=(800, 700))\n #self.SetWindowStyle(wx.CAPTION | wx.CLOSE_BOX | wx.STAY_ON_TOP)\n\n # UI Elements\n self.console_text = wx.StaticText(self, label=\"Console Output\")\n self.console_text.SetFont(wx.Font(9, wx.DECORATIVE, wx.NORMAL, wx.BOLD))\n self.console_tctrl = wx.TextCtrl(self, size=(600, 400), style=wx.TE_MULTILINE | wx.TE_READONLY)\n\n # Menu Bar\n pause_id = 120\n menubar = wx.MenuBar()\n helpmenu = wx.Menu()\n pause_item = wx.MenuItem(helpmenu, pause_id, text=\"Pause\", kind=wx.ITEM_NORMAL)\n helpmenu.Append(pause_item)\n menubar.Append(helpmenu, 'Help')\n self.Bind(wx.EVT_MENU, self.pauseProg, id=pause_id)\n self.SetMenuBar(menubar)\n\n # Shortcut\n self.accel_tbl = wx.AcceleratorTable([(wx.ACCEL_CTRL, ord('p'), pause_id)])\n self.SetAcceleratorTable(self.accel_tbl)\n\n # Sizers\n self.mainsizer = wx.BoxSizer(wx.VERTICAL)\n\n # Layout\n self.mainsizer.Add(self.console_text, proportion=0, border=5, flag=wx.ALL)\n self.mainsizer.Add(self.console_tctrl, proportion=1, border=5,\n flag=wx.LEFT | wx.RIGHT | wx.BOTTOM | wx.EXPAND)\n\n self.SetSizer(self.mainsizer)\n self.SetAutoLayout(True)\n self.mainsizer.Fit(self)\n\n def pauseProg(self,e):\n \"\"\"\n pause the program\n \"\"\"\n print(\"press Enter to continue\")\n\n while True:\n key = keyboard.read_key()\n\n if key == \"enter\":\n print(\"Program is resumed\")\n # exit(0)\n break\n\n time.sleep(0.5)\n\nif __name__ == '__main__':\n consolegui = wx.App()\n fr = ConsoleGUI(None, \"Console\")\n consolegui.MainLoop()\n"
},
{
"alpha_fraction": 0.6059228181838989,
"alphanum_fraction": 0.6096895933151245,
"avg_line_length": 41.07103729248047,
"blob_id": "0f6c6e58c6e7c4d20bf9058142c3617f55d1e2c7",
"content_id": "d72f1d8355c8b333ce900a6605a3f22178544204",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 23097,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 549,
"path": "/backup/area_scan.py",
"repo_name": "boboyejj/XY_Positioner_GUI",
"src_encoding": "UTF-8",
"text": "\"\"\"\nArea Scan Scripts\n\nThis module contains the threads and functions to perform general area scans and zoom scans.\nThe threads in the module are intended to be run in conjunction with 'xy_positioner_gui.py' only, as the\nthreads refer to specific callback functions from this file.\n\nThe module has the following classes:\n - AreaScanThread(threading.Thread): performs general area scans.\n - ZoomScanThread(threading.Thread): performs zoom scans at the maximum value.\n position found during the general area scan.\n - CorrectionThread(threading.Thread): retakes a previous measurement from the general area scan.\n\nAuthors:\nChang Hwan 'Oliver' Choi, Biomedical/Software Engineering Intern (Aug. 2018) - [email protected]\nGanesh Arvapalli, Software Engineering Intern (Jan. 2018) - [email protected]\n\"\"\"\n\nimport os\nimport threading\nfrom src.motor_driver import MotorDriver\nfrom src.narda_navigator import NardaNavigator\nimport numpy as np\nimport serial\nimport wx\nimport sys\nfrom src.logger import logger\nimport time\n\n\nclass AreaScanThread(threading.Thread):\n \"\"\"\n Thread for handling general area scans.\n \"\"\"\n def __init__(self, parent, x_distance, y_distance, grid_step_dist, dwell_time, span_start,\n span_stop, save_dir, comment, meas_type, meas_field, meas_side, meas_rbw, meas,start_pos):\n \"\"\"\n :param parent: Parent object (i.e. 
the Frame/GUI calling the thread).\n :param x_distance: Width of the scanning area.\n :param y_distance: Length of the scanning area.\n :param grid_step_dist: Step distance between each measurement point.\n :param dwell_time: Wait time at each scan point before measurements are recorded.\n :param span_start: Lower-limit frequency for FFT (MHz).\n :param span_stop: Upper-limit frequency for FFT (MHz).\n :param save_dir: Directory for output files (.txt, .png).\n :param comment: Comment saved in the output file (.txt).\n :param meas_type: Measurement type (limb or body).\n :param meas_field: Measurement field (Electric or magnetic (mode A or B)).\n :param meas_side: Side of the phone being scanned.\n :param meas_rbw: Resolution bandwidth for the FFT.\n :param start_pos: User defined starting point (grid number) if not 0.\n \"\"\"\n self.parent = parent\n self.callback = parent.update_values\n self.x_distance = x_distance\n self.y_distance = y_distance\n self.grid_step_dist = grid_step_dist\n self.dwell_time = dwell_time\n self.span_start = span_start\n self.span_stop = span_stop\n self.save_dir = save_dir\n self.comment = comment\n self.meas_type = meas_type\n self.meas_field = meas_field\n self.meas_side = meas_side\n self.meas_rbw = meas_rbw\n self.meas = meas\n self.start_pos = start_pos\n\n self.num_steps = None # Placeholder for number of motor steps needed to move one grid space\n self.values = None # Placeholder for the array of values\n self.grid = None # Placeholder for the coordinate grid array\n self.curr_row = None # Current position row\n self.curr_col = None # Current position col\n self.max_fname = None # The filename of the screenshot for the maximum measurement\n super(AreaScanThread, self).__init__()\n\n def run(self):\n \"\"\"\n Script run on thread start. 
Performs area scan on a separate thread.\n\n :return: Nothing.\n \"\"\"\n print(\"Measurement Parameters:\")\n print(\"Type: %s | Field: %s | Side: %s\" % (self.meas_type, self.meas_field, self.meas_side))\n print(\"Measurement: \", self.meas)\n\n # Preparation\n x_points = int(np.ceil(np.around(self.x_distance / self.grid_step_dist, decimals=3))) + 1\n y_points = int(np.ceil(np.around(self.y_distance / self.grid_step_dist, decimals=3))) + 1\n # Check ports and instantiate relevant objects (motors, NARDA driver)\n try:\n m = MotorDriver()\n except serial.SerialException:\n print(\"Error: Connection to C4 controller was not found\")\n wx.CallAfter(self.parent.enablegui)\n self.exc = sys.exc_info()\n i = 10\n while i:\n print(i)\n time.sleep(1)\n i -= 1\n print(\"Thread '%s' threw an exception: %s\" % (self.getName(), self.exc[0]))\n return\n narda = NardaNavigator()\n # Set measurement settings\n narda.selectTab('mode')\n narda.selectInputField(self.meas_field)\n narda.selectTab('span')\n narda.inputTextEntry('start', str(self.span_start))\n narda.inputTextEntry('stop', str(self.span_stop))\n narda.selectRBW(self.meas_rbw)\n narda.selectTab('data')\n narda.enableMaxHold()\n\n # Calculate number of motor steps necessary to move one grid space\n self.num_steps = self.grid_step_dist / m.step_unit\n\n # Run scan\n self.values, self.grid, self.curr_row,\\\n self.curr_col, self.max_fname = run_scan(x_points, y_points, m, narda, self.num_steps,\n self.dwell_time, self.save_dir, self.comment,\n self.meas_type, self.meas_field, self.meas_side,\n self.meas, self.start_pos)\n print(\"General area scan complete.\")\n self.callback(self)\n wx.CallAfter(self.parent.run_post_scan)\n m.destroy()\n\n\nclass ZoomScanThread(threading.Thread):\n \"\"\"\n Thread for handling zoom scans.\n \"\"\"\n def __init__(self, parent, dwell_time, span_start, span_stop, save_dir, comment, meas_type,\n meas_field, meas_side, meas_rbw, meas, num_steps, values, grid, curr_row, curr_col):\n\n \"\"\"\n 
:param parent: Parent object (i.e. the Frame/GUI calling the thread).\n :param dwell_time: Wait time at each scan point before measurements are recorded.\n :param span_start: Lower-limit frequency for FFT (MHz).\n :param span_stop: Upper-limit frequency for FFT (MHz).\n :param save_dir: Directory for output files (.txt, .png).\n :param comment: Comment saved in the output file (.txt).\n :param meas_type: Measurement type (limb or body).\n :param meas_field: Measurement field (Electric or magnetic (mode A or B)).\n :param meas_side: Side of the phone being scanned.\n :param meas_rbw: Resolution bandwidth for the FFT.\n :param meas: highest_peak or WideBand measurement\n :param num_steps: Number of motor steps per grid step.\n :param values: Numpy array of the recorded values.\n :param grid: Numpy array of index values (1-index).\n :param curr_row: The NS probe's current row position.\n :param curr_col: The NS probe's current column position.\n \"\"\"\n self.parent = parent\n self.callback = parent.update_values\n self.num_steps = num_steps\n self.dwell_time = dwell_time\n self.span_start = span_start\n self.span_stop = span_stop\n self.save_dir = save_dir\n self.comment = comment\n self.meas_type = meas_type\n self.meas_field = meas_field\n self.meas_side = meas_side\n self.meas_rbw = meas_rbw\n self.meas = meas\n self.curr_row = curr_row\n self.curr_col = curr_col\n\n self.values = values # original array of values\n self.zoom_values = None # Placeholder for zoom coordinates\n self.grid = grid # Placeholder for the coordinate grid array\n super(ZoomScanThread, self).__init__()\n\n def run(self):\n \"\"\"\n Script run on thread start. 
Performs zoom scan on a separate thread.\n\n :return: Nothing.\n \"\"\"\n print(\"Measurement Parameters:\")\n print(\"Type: %s | Field: %s | Side: %s\" % (self.meas_type, self.meas_field, self.meas_side))\n print(\"Measurement: \", self.meas)\n\n # Preparation\n x_points = 5\n y_points = 5\n # Check ports and instantiate relevant objects (motors, NARDA driver)\n try:\n m = MotorDriver()\n except serial.SerialException:\n print(\"Error: Connection to C4 controller was not found\")\n return -1\n narda = NardaNavigator()\n # Set measurement settings\n narda.selectTab('mode')\n narda.selectInputField(self.meas_field)\n narda.selectTab('span')\n narda.inputTextEntry('start', str(self.span_start))\n narda.inputTextEntry('stop', str(self.span_stop))\n narda.selectRBW(self.meas_rbw)\n narda.selectTab('data')\n\n # Calculate number of motor steps necessary to move one grid space\n znum_steps = self.num_steps / 4.0 # Zoom scan steps are scaled down\n\n # Move to coordinate with maximum value\n max_val = self.values.max()\n max_row, max_col = np.where(self.values == float(max_val))\n print(\"Max value: %f\" % max_val)\n print(max_row, max_col)\n print(\"Max value coordinates: Row - %d / Col - %d\" % (max_row, max_col))\n row_steps = max_row - self.curr_row\n col_steps = max_col - self.curr_col\n self.curr_row = max_row\n self.curr_col = max_col\n if row_steps > 0:\n m.forward_motor_two(int(self.num_steps * row_steps))\n else:\n m.reverse_motor_two(int(-1 * self.num_steps * row_steps))\n if col_steps > 0:\n m.forward_motor_one(int(self.num_steps * col_steps))\n else:\n m.reverse_motor_one(int(-1 * self.num_steps * col_steps))\n\n # Run scan\n self.zoom_values, _, _, _, _ = run_scan(x_points, y_points, m, narda, znum_steps,\n self.dwell_time, self.save_dir, self.comment,\n self.meas_type, self.meas_field, 'z', self.meas)\n # Move back to original position\n m.reverse_motor_one(int(2 * znum_steps))\n m.reverse_motor_two(int(2 * znum_steps))\n\n print(\"Zoom scan complete.\")\n 
self.callback(self)\n wx.CallAfter(self.parent.run_post_scan)\n m.destroy()\n\n\nclass CorrectionThread(threading.Thread):\n \"\"\"\n Thread for handling corrections of previous values from the general area scan.\n \"\"\"\n def __init__(self, parent, target, num_steps, dwell_time, span_start, span_stop, values, grid, curr_row,\n curr_col, save_dir, comment, meas_type, meas_field, meas_side, meas_rbw, meas, max_fname):\n \"\"\"\n :param parent: Parent object (i.e. the Frame/GUI calling the thread).\n :param target: index of the target position (index by the grid).\n :param num_steps: Number of motor steps per grid step.\n :param dwell_time: Wait time at each scan point before measurements are recorded.\n :param span_start: Lower-limit frequency for FFT (MHz).\n :param span_stop: Upper-limit frequency for FFT (MHz).\n :param values: Numpy array of the recorded values.\n :param grid: Numpy array of index values (1-index).\n :param curr_row: The NS probe's current row position.\n :param curr_col: The NS probe's current column position.\n :param save_dir: Directory for output files (.txt, .png).\n :param comment: Comment saved in the output file (.txt).\n :param meas_type: Measurement type (limb or body).\n :param meas_field: Measurement field (Electric or magnetic (mode A or B)).\n :param meas_side: Side of the phone being scanned.\n :param meas_rbw: Resolution bandwidth for the FFT.\n :param max_fname: Filename corresponding to highest measurement.\n \"\"\"\n self.parent = parent\n self.callback = self.parent.update_values\n self.target = target\n self.num_steps = num_steps\n self.dwell_time = dwell_time\n self.span_start = span_start\n self.span_stop = span_stop\n self.values = values\n self.grid = grid\n self.curr_row = curr_row\n self.curr_col = curr_col\n self.save_dir = save_dir\n self.comment = comment\n self.meas_type = meas_type\n self.meas_field = meas_field\n self.meas_side = meas_side\n self.meas_rbw = meas_rbw\n self.meas = meas\n self.max_fname = 
max_fname\n super(CorrectionThread, self).__init__()\n\n def run(self):\n \"\"\"\n Script run on thread start. Corrects a previous value from the general area scan results on a separate thread.\n\n :return: Nothing.\n \"\"\"\n print(\"Measurement Parameters:\")\n print(\"Type: %s | Field: %s | Side: %s\" % (self.meas_type, self.meas_field, self.meas_side))\n print(\"Measurement: \", self.meas)\n\n # Check ports and instantiate relevant objects (motors, NARDA driver)\n try:\n m = MotorDriver()\n except serial.SerialException:\n print(\"Error: Connection to C4 controller was not found.\")\n return -1\n narda = NardaNavigator()\n # Set measurement settings\n narda.selectTab('mode')\n narda.selectInputField(self.meas_field)\n narda.selectTab('span')\n narda.inputTextEntry('start', str(self.span_start))\n narda.inputTextEntry('stop', str(self.span_stop))\n narda.selectRBW(self.meas_rbw)\n narda.selectTab('data')\n\n # Find the target location\n target_row, target_col = np.where(self.grid == int(self.target))\n row_steps = target_row - self.curr_row\n col_steps = target_col - self.curr_col\n\n # Move to target location\n print(\"R steps: %d - C steps %d\" % (row_steps, col_steps))\n if row_steps > 0:\n m.forward_motor_two(int(self.num_steps * row_steps))\n else:\n m.reverse_motor_two(int(-1 * self.num_steps * row_steps))\n if col_steps > 0:\n m.forward_motor_one(int(self.num_steps * col_steps))\n else:\n m.reverse_motor_one(int(-1 * self.num_steps * col_steps))\n self.curr_row = target_row\n self.curr_col = target_col\n print(self.values)\n print(\"row:\", self.curr_row, \"col:\", self.curr_col)\n fname = build_filename(self.meas_type, self.meas_field, self.meas_side, self.target)\n # Take measurement\n value = narda.takeMeasurement(self.dwell_time, self.meas, fname, self.save_dir, self.comment)\n self.values[self.curr_row, self.curr_col] = value\n print(self.values)\n # Check if max and take screenshot of plot/UI accordingly\n if value > self.values.max():\n print(\"New 
max val: %f\" % value)\n # Switch to Snipping Tool in front of the NARDA program\n narda.saveBitmap(fname, self.save_dir)\n narda.bringToFront() # Once bitmap is saved, return focus to NARDA\n os.rename(self.save_dir + '/' + self.max_fname + '.PNG', self.save_dir + '/' + fname + '.PNG')\n print(\"Correction of previous value complete.\")\n self.callback(self)\n wx.CallAfter(self.parent.run_post_scan)\n m.destroy()\n\n\ndef run_scan(x_points, y_points, m, narda, num_steps, dwell_time, savedir, comment, meas_type, meas_field, meas_side,\n meas,start_pos):\n \"\"\"\n Performs an area scan according to the specified parameters.\n The scan consists of moving the NS probe to an intended coordinate, taking a measurement, saving the results,\n and repeating this process until all coordinate points have been measured.\n\n :param x_points: Number of x-coordinate points.\n :param y_points: Number of y-coordinate points.\n :param m: Motor driver object.\n :param narda: NARDA navigator object.\n :param num_steps: Number of motor steps per grid step.\n :param dwell_time: Wait time at each scan point before measurements are recorded.\n :param save_dir: Directory for output files (.txt, .png).\n :param comment: Comment saved in the output file (.txt).\n :param meas_type: Measurement type (limb or body).\n :param meas_field: Measurement field (Electric or magnetic (mode A or B)).\n :param meas_side: Side of the phone being scanned.\n :param meas: high_peak or WideBand measurement\n :param start_pos: User defined starting point if not 0.\n :return: Numpy array of all measurements, Numpy array of the measurement grid, current NS probe's coordinates (rows\n and columns), and the filename corresponding to the highest measurement point.\n \"\"\"\n\n move_to_pos_one(m, int(num_steps), x_points, y_points)\n\n # Generate a 'traversal grid' with values starting from 1 showing the order of measurement taking\n # x_points ~ row, y_points ~ col\n grid = generate_grid(x_points, y_points)\n 
values = np.zeros(grid.shape) # Placeholder for filling in with measurement values\n\n # Move to the user defined postion\n time.sleep(2)\n if start_pos > 0:\n # move to user defined position\n row, col = np.where(grid == start_pos)\n m.forward_motor_two(int(num_steps * row)) # TODO: double check if need -1\n m.forward_motor_one(int(num_steps * col))\n print(\"start position: (\", row, \", \", col, \")\")\n else:\n start_pos = 1\n time.sleep(2)\n print(\"Scan path:\")\n print(grid)\n print(\"Values:\")\n print(values)\n\n # Create an accumulator for the fraction of a step lost each time a grid space is moved\n frac_step = num_steps - int(num_steps)\n num_steps = int(num_steps)\n x_error, y_error = 0, 0 # Accumulator for x and y directions\n curr_row, curr_col = 0, 0 # Current coordinates of the NS testing probe\n curr_max = -1 # Current maximum value\n max_filename = '' # Filename of the maximum measurement point\n\n # General Area Scan\n for i in range(start_pos, grid.size + 1):\n # Find the position of the next\n next_row, next_col = np.where(grid == i)\n print(\"position: \", i)\n next_row = next_row[0]\n next_col = next_col[0]\n if i == start_pos:\n curr_row, curr_col = next_row, next_col\n # Move the NS probe to the next position\n if next_row > curr_row: # Move downwards\n y_error += frac_step\n m.forward_motor_two(num_steps + int(y_error))\n y_error -= int(y_error)\n curr_row = next_row\n elif next_col > curr_col: # Move rightwards\n x_error += frac_step\n m.forward_motor_one(num_steps + int(x_error)) # Adjust distance by error\n x_error -= int(x_error) # Subtract integer number of steps that were moved\n curr_col = next_col\n elif next_col < curr_col: # Move leftwards\n x_error -= frac_step\n m.reverse_motor_one(num_steps + int(x_error))\n x_error -= int(x_error)\n curr_col = next_col\n # Build the filename for the measurements at this point\n fname = build_filename(meas_type, meas_field, meas_side, i)\n # Take the measurement and save relevant files\n 
value = narda.takeMeasurement(dwell_time, meas, fname, savedir, comment)\n values[curr_row, curr_col] = value\n # If new maximum value found, save take a screenshot of the GUI interface\n if value > curr_max:\n print(\"New max val: %f\" % value)\n # Switch to Snipping Tool in front of the NARDA program\n narda.saveBitmap(fname, savedir)\n narda.bringToFront() # Once bitmap is saved, return focus to NARDA\n curr_max = value\n max_filename = fname\n print(\"---------\")\n print(values)\n print(\"Renaming tmp.PNG to %s.PNG\" % max_filename)\n # End of scan - rename screenshot file with the correct name\n try:\n os.rename(savedir + '/tmp.PNG', savedir + '/' + max_filename + '.PNG')\n except FileExistsError:\n print(\"File \" + max_filename + \".PNG already exists. Overwriting file with new image file.\")\n os.remove(savedir + '/' + max_filename + '.PNG')\n os.rename(savedir + '/tmp.PNG', savedir + '/' + max_filename + '.PNG')\n return values, grid, curr_row, curr_col, max_filename\n\n\ndef build_filename(meas_type, meas_field, meas_side, number):\n \"\"\"\n Builds a filename based on the measurement parameters.\n\n :param meas_type: Measurement type (limb or body).\n :param meas_field: Measurement field (electric or magnetic (mode A or B)).\n :param meas_side: Side of the phone being measured.\n :param number: Point on the grid where the measurement was taken.\n :return: String filename.\n \"\"\"\n filename = ''\n # Adding type marker\n if meas_type == 'Limb':\n filename += 'L'\n else:\n filename += 'B'\n filename += '_'\n # Adding field marker\n if meas_field == 'Electric':\n filename += 'E'\n else:\n filename += 'H'\n # Adding side marker\n if meas_side == 'Back':\n filename += 'S'\n else:\n filename += meas_side.lower()\n filename += str(int(number))\n return filename\n\n\ndef move_to_pos_one(moto, num_steps, rows, cols):\n \"\"\"Move motor to first position in grid.\n\n :param moto: MotorDriver to control motion.\n :param num_steps: Number of motor steps between 
grid points.\n :param rows: Number of grid rows.\n :param cols: Number of grid cols.\n :return: None\n \"\"\"\n moto.reverse_motor_two(int(num_steps * rows / 2.0))\n moto.reverse_motor_one(int(num_steps * cols / 2.0))\n\n\ndef generate_grid(rows, columns):\n \"\"\"Create grid traversal visual in format of numpy matrix.\n Looks like a normal sequential matrix, but every other row is in reverse order.\n\n :param rows: Number of rows in grid.\n :param columns: Number of columns in grid.\n :return: Numpy matrix of correct values.\n \"\"\"\n g = []\n for i in range(rows):\n row = list(range(i * columns + 1, (i + 1) * columns + 1))\n if i % 2 != 0:\n row = list(reversed(row))\n g += row\n g = np.array(g).reshape(rows, columns)\n return g\n\n\ndef convert_to_pts(arr, dist, x_off=0, y_off=0):\n \"\"\"\n Convert matrix to set of points.\n #TODO: Probably not going to be using this one anymore.\n\n :param arr: matrix to convert.\n :param dist: distance between points in matrix.\n :param x_off: offset to add in x direction (if not at (0,0)).\n :param y_off: offset to add in y direction (if not at (0,0)).\n :return: xpts, ypts, zpts: List of points on each axis.\n \"\"\"\n x_dim = arr.shape[1]\n y_dim = arr.shape[0]\n xpts = []\n ypts = []\n zpts = []\n for j in range(x_dim):\n for i in range(y_dim):\n if j < x_dim / 2.0:\n x_pt = -1.0 / 2 * i * dist + x_off\n else:\n x_pt = 1.0 / 2 * i * dist + x_off\n if i < y_dim / 2.0:\n y_pt = -1.0 / 2 * j * dist + y_off\n else:\n y_pt = 1.0 / 2 * j * dist + y_off\n xpts.append(x_pt)\n ypts.append(y_pt)\n zpts.append(arr[i][j])\n print(xpts, ypts, zpts)\n return xpts, ypts, zpts\n"
}
] | 11 |
zhanglong362/zane
|
https://github.com/zhanglong362/zane
|
e8ea2babc6ae9ac18da436ce75371e9f53414321
|
8155032792760b50e1dae4d27a257a2fa543356f
|
a0e2d039fd108af693df9c2191b864b4bd591f55
|
refs/heads/master
| 2020-03-17T05:32:26.195261 | 2018-05-14T07:26:01 | 2018-05-14T07:26:01 | 126,459,469 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6220095753669739,
"alphanum_fraction": 0.6267942786216736,
"avg_line_length": 21.33333396911621,
"blob_id": "edcf60f8d9cbc8422170a5e87b393c34eb165b18",
"content_id": "377c93ddbd66b6c2b75f29e493190cf320d71a8a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 454,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 18,
"path": "/weektest/test2/ATM_chengjunhua/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from core import src\r\nfrom conf import settings\r\nimport logging\r\n#检测是否登陆\r\ndef login_auth(fun):\r\n def wrapper(*args,**kwargs):\r\n if not src.users['status']:\r\n src.login()\r\n else:\r\n return fun(*args,**kwargs)\r\n return wrapper\r\n\r\n\r\n#写日志\r\ndef get_logger(name):\r\n logging.config.dictConfig(settings.LOGGING_DIC) # 导入上面定义的logging配置\r\n l1=logging.getLogger(name)\r\n return l1"
},
{
"alpha_fraction": 0.571753978729248,
"alphanum_fraction": 0.5763098001480103,
"avg_line_length": 23.235294342041016,
"blob_id": "ce845266112ce8da417b960cd6356f8b19c6bd84",
"content_id": "7451c23a2e00d5e3ba18e923c8c84817dd2d0e63",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 445,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 17,
"path": "/weektest/test2/ATM_chengjunhua/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os\r\nimport json\r\nfrom conf import settings\r\n\r\n# 查\r\ndef find_file(name):\r\n if os.path.exists(settings.BASE_DB % name) :\r\n with open(settings.BASE_DB % name, 'r', encoding='utf-8') as f:\r\n user_dic = json.load(f)\r\n return user_dic\r\n return False\r\n\r\n\r\n# 增改\r\ndef update(user_dic):\r\n with open(settings.BASE_DB%user_dic['name'],'w',encoding='utf-8') as f:\r\n json.dump(user_dic,f)\r\n\r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.539130449295044,
"alphanum_fraction": 0.5695652365684509,
"avg_line_length": 19.04347801208496,
"blob_id": "ca9a2ca16f2f0b3676affe5c00652dae74fb4ae2",
"content_id": "cf5671a82f4e91e66788a5cccc6158168f15f248",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 534,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 23,
"path": "/weektest/weektest1/python_weektest1_zhanglong_03.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 3. 请用两种方式实现,将文件中的alex全部替换成SB的操作(20分)\n\nstr1 = 'alex'\nstr2 = 'SB'\n\n# 第一种方式:\nwith open(r'a.txt') as f:\n data = f.read()\n data = data.replace(str1, str2)\nwith open(r'a.txt', 'w') as f:\n f.write(data)\n\n# 第二种方式:\nimport os\n\nwith open(r'a.txt') as f1, \\\n open(r'a.txt.swap', 'w') as f2:\n for line in f1:\n if str1 in line:\n line = line.replace(str1, str2)\n f2.write(line)\nos.remove('a.txt')\nos.rename('a.txt.swap', 'a.txt')"
},
{
"alpha_fraction": 0.6491228342056274,
"alphanum_fraction": 0.6959064602851868,
"avg_line_length": 18.11111068725586,
"blob_id": "4f2bfcc890cacc797e42d77db0b15a3bec0befa5",
"content_id": "dd8bb5778ddad34a350afb37ee986a742275cec8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 397,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 9,
"path": "/month4/week6/python_day24/python_day25_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月18日作业:\n# 1、预习网络基础,搞明白TCP三次握手,四次挥手\n# 2、整理异常处理笔记\n# 3、复习函数、模块与包的导入、常用模块、面向对象,准备周五下午的周考\n# 4、按照老师的讲解,规范编写选课系统作业,准备明早默写\n#\n# 5、预习socket编程\n# 明日内容:\n# 基于tcp协议实现套接字通信"
},
{
"alpha_fraction": 0.5865921974182129,
"alphanum_fraction": 0.5917797088623047,
"avg_line_length": 32.83783721923828,
"blob_id": "bc3b6668fe57c4e67027dbc8d574e9e349a29b3a",
"content_id": "4e54bba18ddf85cf2080a827c5911ed6d011475c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2506,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 74,
"path": "/project/youku/version_v1/youkuClient/client/tcpClient.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport json\nimport struct\nimport socket\nfrom lib import common\nfrom conf import settings\nfrom concurrent.futures import ThreadPoolExecutor\n\nclient_pool = ThreadPoolExecutor(5)\n\nclass TcpClient:\n def __init__(self, server_address):\n self.server_address = server_address\n self.client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n self.client.connect(self.server_address)\n except Exception as e:\n raise Exception(e)\n\n def __del__(self):\n self.client.close()\n\n def done_load(self, res):\n res = res.result()\n common.show_green(res)\n\n def upload_vidoe(self, data):\n file_path = os.path.join(settings.upload_path, data['file_name'])\n data_len_bytes = struct.pack('i', len(data['file_size']))\n self.client.send(data_len_bytes)\n with open(r'%s' % file_path, 'rb') as f:\n for line in f:\n self.client.send(line)\n result = self.recv_data()\n return result['message']\n\n def download_video(self, data, buffer_size=1024):\n data_len_bytes = self.client.recv(4)\n data_length = struct.unpack('i', data_len_bytes)[0]\n file_path = os.path.join(settings.download_path, data['file_name'])\n with open(r'%s' % file_path, 'ab') as f:\n recv_size = 0\n while recv_size < data_length:\n last_size = data_length - recv_size\n if last_size < buffer_size:\n buffer_size = last_size\n recv_data = self.client.recv(buffer_size)\n recv_size += len(recv_data)\n f.write(recv_data)\n return 'Download video %s complete!' 
% data['file_name']\n\n def recv_data(self):\n data_len_bytes = self.client.recv(4)\n data_length = struct.unpack('i', data_len_bytes)[0]\n\n data_bytes = self.client.recv(data_length)\n data = json.loads(data_bytes.decode('utf-8'))\n\n if data['is_file']:\n client_pool.submit(self.download_video, data).add_done_callback(self.done_load)\n return data\n\n def send_data(self, data):\n data_bytes = json.dumps(data).encode('utf-8')\n data_len_bytes = struct.pack('i', len(data_bytes))\n\n self.client.send(data_len_bytes)\n self.client.send(data_bytes)\n\n if data['is_file']:\n client_pool.submit(self.upload_vidoe, data).add_done_callback(self.done_load)\n return self.recv_data()\n\n\n"
},
{
"alpha_fraction": 0.44413822889328003,
"alphanum_fraction": 0.5157939195632935,
"avg_line_length": 28.848100662231445,
"blob_id": "79fbf6846d03d8273f705f586e66557aebf669eb",
"content_id": "ebad357a873e93d24dd54ba584be7c8a6a03e071",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5301,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 158,
"path": "/project/shooping_mall/version_v3/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import user, bank\n\nuser_data = {'name': None}\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n while 1:\n name = input('用户名 >>: ').strip()\n user_info = user.get_user_info(name)\n if not user_info:\n print('\\033[33m用户未注册,请先注册再登录!\\033[0m')\n pwd = input('密码 >>: ').strip()\n if pwd != user_info['pwd']:\n print('\\033[32m密码错误!\\033[0m')\n continue\n user_data['name'] = name\n print('\\033[32m用户%s登陆成功!\\033[0m' % name)\n return\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while 1:\n name = input('用户名 >>: ').strip()\n user_info = user.get_user_info(name)\n if user_info:\n print('\\033[33m用户已注册,请直接登陆!\\033[0m')\n pwd = input('密码 >>: ').strip()\n pwd2 = input('密码 >>: ').strip()\n if pwd2 != pwd:\n print('\\033[33m两次密码输入不一致!\\033[0m')\n continue\n if user.register_user(name, pwd):\n print('\\033[32m用户%s注册成功!\\033[0m' % name)\n else:\n print('\\033[31m用户%s注册失败!\\033[0m' % name)\n return\n\[email protected]\ndef check_balance():\n print('\\033[32m查看余额\\033[0m')\n user_info = user.get_user_info(user_data['name'])\n print('-'*30)\n print('''\\033[35m\n balance: %s\n credit_balance: %s\n credit_limit: %s\\033[0m\n ''' % (user_info['balance'], user_info['credit_balance'], user_info['credit_limit']))\n print('-' * 30)\n\[email protected]\ndef check_bill():\n print('\\033[32m查看账单\\033[0m')\n user_info = user.get_user_info(user_data['name'])\n print('-' * 30)\n if user_info['bill'] == 0:\n print('\\033[35m本期账单为0元!\\033[0m')\n else:\n print('\\033[35m本期账单为%s元!\\033[0m' % user_info['bill'])\n print('-' * 30)\n\[email protected]\ndef check_detailed_list():\n print('\\033[32m查看流水\\033[0m')\n user_info = user.get_user_info(user_data['name'])\n print('-' * 30)\n if not user_info['detailed_list']:\n print('\\033[35m没有银行流水!\\033[0m')\n for dt, flow in user_info['detailed_list']:\n print('\\033[35m %s %s\\033[0m' % (dt, flow))\n print('-' * 30)\n\[email protected]\ndef transfer():\n 
print('\\033[32m转账\\033[0m')\n while 1:\n name = input('收款人账户 >>: ').strip()\n if name == user_data['name']:\n print('\\033[31m用户%s不能给自己转账!\\033[0m' % name)\n continue\n user_info = user.get_user_info(name)\n if not user_info:\n print('\\033[31m收款人账户%s不存在!\\033[0m' % name)\n continue\n while 1:\n amount = input('请输入转账金额 >>: ').strip()\n if not amount.isdigit():\n print('\\033[32m转账金额必须是数字!\\033[0m')\n continue\n amount = int(amount)\n break\n if bank.transfer(user_data['name'], name, amount):\n print('\\033[32m用户%s给账户%s转账%s成功!\\033[0m' % (user_data['name'], name, amount))\n else:\n print('\\033[31m用户%s账户余额不足,转账失败!\\033[0m' % user_data['name'])\n return\n\[email protected]\ndef withdraw():\n print('\\033[32m取现\\033[0m')\n while True:\n amount = input('请输入取现金额 >>: ').strip()\n if not amount.isdigit():\n print('\\033[32m转账金额必须是数字!\\033[0m')\n continue\n amount = int(amount)\n break\n if bank.withdraw(user_data['name'], amount):\n print('\\033[32m用户%s取现成功!\\033[0m' % user_data['name'])\n else:\n print('\\033[31m用户%s取现失败!\\033[0m' % user_data['name'])\n\n\[email protected]\ndef repayment():\n print('\\033[32m还款\\033[0m')\n user_info = user.get_user_info(user_data['name'])\n if user_info['bill'] == 0:\n print('\\033[35m本期账单为0元!\\033[0m')\n else:\n print('\\033[35m本期账单为%s元!\\033[0m' % user_info['bill'])\n while True:\n amount = input('请输入还款金额 >>: ').strip()\n if not amount.isdigit():\n print('\\033[32m转账金额必须是数字!\\033[0m')\n continue\n amount = int(amount)\n break\n if bank.payment(user_data['name'], amount):\n print('\\033[32m用户%s还款成功!\\033[0m' % user_data['name'])\n else:\n print('\\033[31m用户%s还款成功!\\033[0m' % user_data['name'])\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [check_balance, '查看余额'],\n '4': [check_bill, '查看账单'],\n '5': [check_detailed_list, '查看流水'],\n '6': [transfer, '转账'],\n '7': [withdraw, '取现'],\n '8': [repayment, '还款']\n }\n while 1:\n print('=' * 30)\n for k,v in menu.items():\n print('%-5s %-10s' % (k, v[1]))\n 
print('='*30)\n choice = input('请输入操作编码 >>: ').strip()\n if choice == 'q':\n break\n if choice not in menu:\n print('操作非法!')\n continue\n menu[choice][0]()\n\n"
},
{
"alpha_fraction": 0.5294556021690369,
"alphanum_fraction": 0.5454884171485901,
"avg_line_length": 18.562044143676758,
"blob_id": "5923f8b60d2e44b6172189a681bee446d6426db9",
"content_id": "8be376a5c0a571d7160ff3fc7733c79c0077d2a7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2840,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 137,
"path": "/month4/week7/python_day28/python_day28.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n# 1. 多进程\nimport time\nfrom multiprocessing import Process\n\ndef task(name):\n print('%s is running ...' % name)\n time.sleep(3)\n print('%s is done.' % name)\n\n\nif __name__ == '__main__':\n # Windows系统之上,开启子进程的操作,一定要放到 __name__ == '__main__' 下面,\n # 因为会会涉及到重新导一遍上面的文件\n p1 = Process(target=task, args=('egon',))\n p2 = Process(target=task, kwargs={'name':'alex'})\n p1.start()\n print('===>')\n p2.start()\n print('--->')\n\n# 2. 自定义多进程类\nimport time\nfrom multiprocessing import Process\n\nclass MyProcess(Process):\n def __init__(self, name):\n super().__init__()\n self.name = name\n\n def run(self):\n print('%s is running ...' % self.name)\n time.sleep(3)\n print('%s is done.' % self.name)\n\nif __name__ == '__main__':\n p = MyProcess('egon')\n p.start()\n print('===>')\n\n# 3. 进程间隔离\nimport time\nfrom multiprocessing import Process\n\nx = 1000\n\ndef task():\n global x\n time.sleep(3)\n x = 0\n print('task: x = %s' % x)\n\nif __name__ == '__main__':\n p = Process(target=task)\n p.start()\n time.sleep(5)\n print('parent: x = %s' % x)\n\n# 4. 父进程等待子进程结束 p.join\nimport time\nfrom multiprocessing import Process\n\nx = 1000\n\ndef task():\n global x\n time.sleep(3)\n x = 0\n print('task: x = %s' % x)\n\nif __name__ == '__main__':\n p = Process(target=task)\n p.start()\n p.join()\n print('parent: x = %s' % x)\n\n# 5. 多进程\nimport time\nimport random\nfrom multiprocessing import Process\n\ndef task(name):\n print('%s is running ...' % name)\n # time.sleep(name)\n time.sleep(random.randint(1,3))\n\nif __name__ == '__main__':\n start_time = time.time()\n process = []\n for i in range(10):\n p = Process(target=task, args=('task-%s' % i,))\n process.append(p)\n p.start()\n for p in process:\n p.join()\n p.terminate()\n print('Process %s is %s.' % (p.pid, p.is_alive()))\n print('==>')\n print(time.time() - start_time)\n\n# 6. 
获取 pid 和 ppid\nimport os\nimport time\nfrom multiprocessing import Process\n\nx = 1000\n\ndef task():\n print('Pid: %s PPid: %s' % (os.getpid(), os.getppid()))\n time.sleep(3)\n\nif __name__ == '__main__':\n p1 = Process(target=task,)\n p1.start()\n p1.join()\n print('===>')\n\n# 7. 僵尸进程和孤儿进程\nimport os\nimport time\nfrom multiprocessing import Process\n\ndef task(name):\n print('%s is running ...' % name)\n time.sleep(50)\n print('%s is done.' % name)\n\nif __name__ == '__main__':\n process = []\n for i in range(1, 4):\n p = Process(target=task, args=(i,))\n process.append(p)\n p.start()\n for p in process:\n p.join()\n print('===>')\n print('ppid = %s' % os.getppid())\n\n\n"
},
{
"alpha_fraction": 0.6128000020980835,
"alphanum_fraction": 0.6208000183105469,
"avg_line_length": 23,
"blob_id": "07f91363a508458b5f29cb5fdf6cf6373039655d",
"content_id": "c9f88796e4957c9542f93d3c44ac34144484ca02",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 637,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 25,
"path": "/weektest/test2/ATM_wenliuxiang/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "\r\nfrom db import db_handler\r\nfrom conf import setting\r\nfrom lib import common\r\ndef get_userinfo_interfacen(name):\r\n user_dic=db_handler.select(name)\r\n return user_dic\r\n\r\n\r\n\r\ndef register(name,password,account=15000):\r\n user_dic={\r\n 'name':name,\r\n 'password':password,\r\n 'locked':False,\r\n 'account':account,\r\n 'creidt':account,\r\n 'bankflow':[]\r\n }\r\n db_handler.update(user_dic)\r\n common.get_logger(print('用户%s注册成功' % name))\r\n\r\ndef lock_user_interface(name):\r\n user_dic=get_userinfo_interfacen(name)\r\n user_dic['locked']=True\r\n db_handler.select(user_dic)"
},
{
"alpha_fraction": 0.5897196531295776,
"alphanum_fraction": 0.5920560956001282,
"avg_line_length": 34.065574645996094,
"blob_id": "4fbeed046762d033e6afd1e5936aaba48490de20",
"content_id": "d0759c914eec08a08f226bcfeb11c5e9479255d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2390,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 61,
"path": "/project/shooping_mall/version_v5/interface/bank.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport datetime\nfrom db import db_handler\n\ndef get_datetime():\n return datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n\ndef recharge(name, amount):\n dt = get_datetime()\n info = db_handler.read(name)\n info['balance'] += amount\n info['flow'].append((dt, '用户%s充值%s元' % (name, amount)))\n if db_handler.write(info):\n return True, '用户%s充值%s成功!' % (name, amount)\n else:\n return False, '用户%s充值%s失败!' % (name, amount)\n\ndef transfer(name, payee, amount):\n dt = get_datetime()\n info_tran = db_handler.read(name)\n if info_tran['balance'] < amount:\n return False, '用户%s账户余额不足,转账失败!' % name\n info_payee = db_handler.read(payee)\n info_tran['balance'] -= amount\n info_tran['flow'].append((dt, '用户%s转账%s元给用户%s' % (name, amount, payee)))\n info_payee['balance'] += amount\n info_payee['flow'].append((dt, '用户%s收款%s元从用户%s' % (payee, amount, name)))\n if db_handler.write(info_tran) and db_handler.write(info_payee):\n return True, '用户%s转账%s给用户%s成功!' % (name, amount, payee)\n else:\n return False, '用户%s转账%s给用户%s失败!' % (name, amount, payee)\n\ndef withdraw(name, amount, charge=0.05):\n dt = get_datetime()\n info = db_handler.read(name)\n if info['credit_balance'] < (amount + amount * charge):\n return False, '用户%s信用余额不足,取现失败!' % name\n info['credit_balance'] -= (amount + amount * charge)\n info['bill'] += (amount + amount * charge)\n info['balance'] += amount\n info['flow'].append((dt, '用户%s取现%s元' % (name, amount)))\n if db_handler.write(info):\n return True, '用户%s取现%s元成功!' % (name, amount)\n else:\n return False, '用户%s取现%s元失败!' % (name, amount)\n\ndef repay(name, amount):\n dt = get_datetime()\n info = db_handler.read(name)\n info['balance'] -= amount\n if info['bill'] <= amount:\n info['bill'] = 0\n else:\n info['bill'] -= amount\n info['credit_balance'] += amount\n info['flow'].append((dt, '用户%s还款%s元' % (name, amount)))\n if db_handler.write(info):\n return True, '用户%s还款%s元成功!' 
% (name, amount)\n else:\n return False, '用户%s还款%s元失败!' % (name, amount)\n\n"
},
{
"alpha_fraction": 0.5952941179275513,
"alphanum_fraction": 0.6427450776100159,
"avg_line_length": 28.287355422973633,
"blob_id": "8fd9d7f2907444bcbcedeeebd6c2e84ccc7cfb25",
"content_id": "59942f9a78366adb635bf1fa62dbaba927099df4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3362,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 87,
"path": "/month4/week5/python_day17/python_day17.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 正则表达式\nimport re\n# \\w 匹配字母数字及下划线 \\W 取反\nprint(re.findall('\\w','hello egon 123'))\nprint(re.findall('\\W','hello egon 123'))\n\n# \\s 匹配任意空白字符,等价于[\\r\\n\\t\\f] \\S 取反\nprint(re.findall('\\s','hello egon 123'))\nprint(re.findall('\\S','hello egon 123'))\n\n# \\d 匹配任意数字,等价于[0-9] \\D 取反\nprint(re.findall('\\d','hello egon 123'))\nprint(re.findall('\\D','hello egon 123'))\n\n# \\A 只匹配起始位置,不匹配结束查找\nprint(re.findall('\\Aegon','hello egon 123'))\nprint(re.findall('^egon','hello egon 123'))\n\n# $ == \\Z 只匹配行结束位置,换行结束\nprint(re.findall('egon\\Z','hello egon 123'))\nprint(re.findall('egon$','hello egon 123'))\n\n# \\n 匹配一个换行符\nprint(re.findall('\\n','hello egon 123'))\n\n# \\t 匹配一个制表符\nprint(re.findall('\\t','hello egon 123'))\n\n# . 匹配除换行符外的任意一个字符;使用re.DOTALL参数,匹配所有任意一个字符;\nprint(re.findall('e.on','hello egon 123'))\nprint(re.findall('e.on','hello egon 123', re.DOTALL))\n\n# ? 匹配0个或1个由前面的正则表达式定义的片段,非贪婪方式\nprint(re.findall('egon?','hello egon 123'))\n\n# * 匹配前面那个字符出现0个或无穷个 ===> 非贪婪模式\nprint(re.findall('egon*','hello egonnn 123'))\n\n# + 匹配前面那个字符出现1个或无穷个\nprint(re.findall('eon+','hello egonnn 123'))\n\n# {m, n} 匹配前面一个字符,出现m次到n次\n# 实现?号匹配\nprint(re.findall('e{0, 1}on','hello egonnn 123'))\n# 实现*号匹配\nprint(re.findall('e{0,}on','hello egonnn 123'))\n# 实现+号匹配\nprint(re.findall('egon{1, }','hello eegonnn 123'))\n\n# .* 匹配任意长度,且任意的字符 ===> 贪婪模式\nprint(re.findall('e.*on','hello egonnn 123'))\n\n# .*? 匹配任意长度,且任意的字符;*后面? 代表非贪婪匹配\nprint(re.findall('e.*?on', 'hello egonnn 123'))\n\n# () 分组作用,表达式正常匹配,但只返回括号内的字符串\nprint(re.findall('(alex)_sb', 'alex_sb asdfdhjahfalex_sb'))\n\n# [] 用来表示一组字符,单独列出:[amk] 匹配 'a','m'或'k';中括号里使用 ^ 代表取反;\nprint(re.findall('[egon]','hello egon 123'))\nprint(re.findall('[^egon]','hello egon 123'))\n\n# | 或者 (?:) 小括号内 ?: 代表匹配成功的所有内容,而不仅仅是括号内的内容\nprint(re.findall('egon(?:s|es)','hello egons 123 egones'))\n\n# 2. 
re的其他方法\nimport re\nprint(re.findall('egon(?:s|es)','hello egons 123 egones'))\n\n# 查找所以包含的字符串,返回一个对象;使用 group方法获取一个值\nprint(re.search('egon(?:s|es)','hello egons 123 egones').group())\n\n# match 就是 ^必须开头就匹配的search,返回一个对象;使用 group方法获取一个值\nprint(re.match('egon(?:s|es)','egons 123 egones').group())\n\n# split 多个分隔符拆分字符串\nprint(re.split('[ :\\\\\\/]', r'got :a.txt\\3333/rwx'))\n\n# sub 替换,字符串位置互换\nprint(re.sub('^egon', 'Egon', 'egon is beautful egon'))\nprint(re.sub('(.*?)(egon)(.*?)(egon)(.*)', r'\\1\\2\\3Egon\\5', '123 egon is beautful egon 123'))\nprint(re.sub('(lqz)(.*?)(SB)', r'\\3\\2\\1', 'lqz is SB'))\nprint(re.sub('([a-zA-Z]+)([^a-zA-Z]+)([a-zA-Z]+)([^a-zA-Z]+)([a-zA-Z]+)', r'\\5\\2\\3\\4\\1', 'lqz is SB'))\n\n# compile\npatten = re.compile('egon')\nprint(re.findall(patten, 'hello egons 123 egones'))\n\n\n"
},
{
"alpha_fraction": 0.3565604090690613,
"alphanum_fraction": 0.39065974950790405,
"avg_line_length": 16.05063247680664,
"blob_id": "bc34ee0bec7dd621036bb844800a8da56a2d89a7",
"content_id": "6d6e719ee8457dc8ebc07578bd375a28b8b34d23",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1409,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 79,
"path": "/month3/week3/python_day10/python_day10.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#\n# config = r'db.txt'\n#\n# def get_uname():\n# while True:\n# uname = input('name >>: ').strip()\n# if not uname.isalpha():\n# continue\n# with open(config, 'r', encoding='utf-8') as f:\n# for line in f:\n# uinfo = line.strip('\\n').split(',')\n# if uname == uinfo[0]:\n# print('\\033[31m用户已存在...\\033[0m')\n# break\n# else:\n# return uname\n#\n# # print(get_uname())\n#\n# help(get_uname)\n\n\n#\n# def auth():\n# print('登陆...')\n#\n# def register():\n# print('注册...')\n#\n# def search():\n# print('查看...')\n#\n# def transfer():\n# print('转账...')\n#\n# def pay():\n# print('支付...')\n#\n# dic = {\n# '1': auth,\n# '2': register,\n# '3': search,\n# '4': transfer,\n# '5': pay\n# }\n#\n# def interactive():\n# while True:\n# print('''\n# 1 登陆\n# 2 注册\n# 3 查看\n# 4 转账\n# 5 支付\n# ''')\n# choice = input('>>: ').strip()\n# if choice in dic:\n# dic[coice]()\n# else:\n# print('非法操作!')\n\n\ndef outter():\n x = 2\n def inner():\n # x = 1\n print('from inner %s' % x)\n return inner\n\nf = outter() # f = inner\nprint(f)\nx = 11111111111\nf()\n\ndef foo():\n x = 111111111111\n f()\n\nfoo()\n\n\n"
},
{
"alpha_fraction": 0.5009823441505432,
"alphanum_fraction": 0.5184572339057922,
"avg_line_length": 24.05440330505371,
"blob_id": "c9967c01e69743be52e54e805829109e3a3c515e",
"content_id": "d0a46573fea73bdc403e51fc86870e69d0d2fec4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12265,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 386,
"path": "/month4/week5/python_day20/python_day20_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月12日作业\n# 1、类的属性和对象的属性有什么区别?\n# 区别:\n# 1. 类的属性可以给不同对象共享;\n# 2. 对象的属性,是对象独有的;\n\n# 2、面向过程编程与面向对象编程的区别与应用场景?\n# 区别:\n# 1. 面向过程是流水线式的,一步一步的;\n# 2. 面向对象是通过类实例化对象,对象之间交互;\n# 应用场景:\n# 1. 面向过程适合扩展性低的场景;\n# 2. 面向对象适合扩展性高的场景;\n\n# 3、类和对象在内存中是如何保存的。\n# 1. 类在内存中是一系列的变量名字和方法名字;\n# 2. 对象在内存中是一系列的变量名字;\n\n# 4、什么是绑定到对象的方法,、如何定义,如何调用,给谁用?有什么特性\n# 1. 绑定到对象的方法,其实是对象所属类的方法,但可以被对象调用;\n# 2. 定义对象的绑定方法,就是定义对象所属类的方法;\n# 3. 调用对象的绑定方法,以 object.function() 的方式调用;\n# 4. 对象的绑定方法,给对象自己用;\n# 5. 绑定方法的特性是,绑定给谁,就应该由谁来调用。谁来调用,就把谁当做第一个参数传入__init__方法;\n\n# 5、如下示例, 请用面向对象的形式优化以下代码\n# 在没有学习类这个概念时,数据与功能是分离的, 如下\n#\n# def exc1(host, port, db, charset, sql):\n# conn = connect(host, port, db, charset)\n# conn.execute(sql)\n# return xxx\n#\n# def exc2(host, port, db, charset, proc_name)\n# conn = connect(host, port, db, charset)\n# conn.call_proc(sql)\n# return xxx\n# 每次调用都需要重复传入一堆参数\n# exc1('127.0.0.1', 3306, 'db1', 'utf8', 'select * from tb1;')\n# exc2('127.0.0.1', 3306, 'db1', 'utf8', '存储过程的名字')\nclass Database:\n def __init__(self, host, port, db, charset):\n self.host = host\n self.port = port\n self.db = db\n self.charset = charset\n\n def exc1(self, sql):\n conn = connect(host, port, db, charset)\n return conn.execute(sql)\n\n def exc2(self, proc_name):\n conn = connect(host, port, db, charset)\n return conn.call_proc(proc_name)\n\nconn = Database('127.0.0.1', 3306, 'db1', 'utf8')\nconn.exc1('select * from tb1;')\nconn.exc2('proc_name')\n\n# 6、下面这段代码的输出结果将是什么?请解释。\n# class Parent(object):\n# x = 1\n#\n# class Child1(Parent):\n# pass\n#\n# class Child2(Parent):\n# pass\n\n# print(Parent.x, Child1.x, Child2.x)\n# 结果:1 1 1\n# 解释:\n# 1. Parent.x是类自己的属性x,直接打印x=1;\n# 2. Child1.x 和 Child2.x 是子类找不到属性x,打印的父类 Parent 的属性x=1;\n\n# Child1.x = 2\n# print(Parent.x, Child1.x, Child2.x)\n# 结果:1 2 1\n# 解释:\n# 1. Parent.x是类自己的属性x,直接打印x=1;\n# 2. Child1.x 增加了自己的属性x,打印了自己的属性x=2;\n# 3. Child2.x 是子类找不到属性x,打印的父类 Parent 的属性x;\n\n# Parent.x = 3\n# print(Parent.x, Child1.x, Child2.x)\n# 结果:1 2 3\n# 解释:\n# 1. 
Parent.x是类自己的属性x,直接打印x=1;\n# 2. Child1.x 增加了自己的属性x,打印了自己的属性x=2;\n# 3. Child2.x 增加了自己的属性x,打印了自己的属性x=3;\n\n# 7、多重继承的执行顺序,请解答以下输出结果是什么?并解释。\n# class A(object):\n# def __init__(self):\n# print('A')\n# super(A, self).__init__()\n#\n# class B(object):\n# def __init__(self):\n# print('B')\n# super(B, self).__init__()\n#\n# class C(A):\n# def __init__(self):\n# print('C')\n# super(C, self).__init__()\n#\n# class D(A):\n# def __init__(self):\n# print('D')\n# super(D, self).__init__()\n#\n# class E(B, C):\n# def __init__(self):\n# print('E')\n# super(E, self).__init__()\n#\n# class F(C, B, D):\n# def __init__(self):\n# print('F')\n# super(F, self).__init__()\n#\n# class G(D, B):\n# def __init__(self):\n# print('G')\n# super(G, self).__init__()\n#\n#\n# if __name__ == '__main__':\n# g = G()\n# f = F()\n# 结果:\n# G\n# D\n# A\n# B\n# F\n# C\n# B\n# D\n# A\n# 解释:\n# 先调用 G() 类:\n# 1. G G类本身打印;\n# 2. -> D -> A 第一个独立分支结束;\n# 3. -> B 第二个独立分支结束;\n# 再调用 F() 类:\n# 1. F F类本身打印;\n# 2. -> C 第一个分支到C类结束,因为D类和C类都是基于A的子类,而A类是object的子类,根据新式类算法到C结束;\n# 3. -> B 第二个分支结束,因为B类是独立分支的object的子类;\n# 4. -> D -> A 第三个分支结束,因为没有其它基于A类的分支了,所以打印A;\n\n# 8、什么是新式类,什么是经典类,二者有什么区别?什么是深度优先,什么是广度优先?\n# 1. 新式类是指继承了object类的类,及其子类;\n# 2. 经典类是指 python2 中不继承 object类的类,及其子类;\n# 3. 经典类和新式类的区别,只有当多个子类继承于一个父类时,经典类基于深度优先的顺序查找属性,而新式类基于广度优先的顺序查找属性;\n# 4. 深度优先,就是当多个子类继承于一个父类时,查找属性先按一条分支一直查找到顶层的父类,然后再查询其它分支且不再查找顶层的父类;\n# 5. 
广度优先,就是当多个子类继承于一个父类时,查找属性先查找每一条分支查找除了顶层的父类之外的所有子类,顶层的父类最后查找;\n\n# 9、用面向对象的形式编写一个老师类, 老师有特征:编号、姓名、性别、年龄、等级、工资,老师类中有功能\n# 1、生成老师唯一编号的功能,可以用hashlib对当前时间加上老师的所有信息进行校验得到一个hash值来作为老师的编号\n# def create_id(self):\n# pass\n# 2、获取老师所有信息\n# def tell_info(self):\n# pass\n# 3、将老师对象序列化保存到文件里,文件名即老师的编号,提示功能如下\n# def save(self):\n# with open('老师的编号', 'wb') as f:\n# pickle.dump(self, f)\n# 4、从文件夹中取出存储老师对象的文件,然后反序列化出老师对象, 提示功能如下\n# def get_obj_by_id(self, id):\n# return pickle.load(open(id, 'rb'))\n# import time\n# import pickle\n# import hashlib\n#\n# class Teacher:\n# def __init__(self, name, sex, age, grade, salary):\n# self.number = None\n# self.name = name\n# self.sex = sex\n# self.age = age\n# self.grade = grade\n# self.salary = salary\n# self.create_id()\n#\n# def _make_md5_code(self, string):\n# string = (str(time.time()) + string).encode('utf-8')\n# m = hashlib.md5()\n# m.update(string)\n# return m.hexdigest()\n#\n# def _get_user_info(self):\n# user_info = {\n# 'number': self.number,\n# 'name': self.name,\n# 'sex': self.sex,\n# 'age': self.age,\n# 'grade': self.grade,\n# 'salary': self.salary,\n# }\n# return user_info\n#\n# def create_id(self):\n# user_info = str(self._get_user_info().pop('number'))\n# self.number = self._make_md5_code(user_info)\n#\n# def tell_info(self):\n# print('''\n# number: %s\n# name: %s\n# sex: %s\n# age: %s\n# grade: %s\n# salary: %s\n# ''' % (self.number,\n# self.name,\n# self.sex,\n# self.age,\n# self.grade,\n# self.salary))\n#\n# def save(self):\n# user_info = self._get_user_info()\n# with open(r'%s' % self.number, 'wb') as f:\n# pickle.dump(user_info, f)\n#\n# def get_obj_by_id(self):\n# with open(r'%s' % self.number, 'rb') as f:\n# data = pickle.load(f)\n# print(data)\n# return data\n#\n# t = Teacher('Egon', 'male', 18, '特级教师', 50000)\n# t.tell_info()\n# t.save()\n# t.get_obj_by_id()\n\n# 10、按照定义老师的方式,再定义一个学生类\n# import time\n# import pickle\n# import hashlib\n# class Student:\n# def __init__(self, name, sex, age, classes, course):\n# 
self.number = None\n# self.name = name\n# self.sex = sex\n# self.age = age\n# self.classes = classes\n# self.course = course\n# self.create_id()\n#\n# def _make_md5_code(self, string):\n# string = (str(time.time()) + string).encode('utf-8')\n# m = hashlib.md5()\n# m.update(string)\n# return m.hexdigest()\n#\n# def _get_user_info(self):\n# user_info = {\n# 'number': self.number,\n# 'name': self.name,\n# 'sex': self.sex,\n# 'age': self.age\n# }\n# return user_info\n#\n# def create_id(self):\n# user_info = str(self._get_user_info().pop('number'))\n# self.number = self._make_md5_code(user_info)\n#\n# def tell_info(self):\n# print('''\n# number: %s\n# name: %s\n# sex: %s\n# age: %s\n# classes: %s\n# course: %s\n# ''' % (self.number,\n# self.name,\n# self.sex,\n# self.age,\n# self.classes,\n# self.course))\n#\n# def save(self):\n# user_info = self._get_user_info()\n# with open(r'%s' % self.number, 'wb') as f:\n# pickle.dump(user_info, f)\n#\n# def get_obj_by_id(self):\n# with open(r'%s' % self.number, 'rb') as f:\n# data = pickle.load(f)\n# print(data)\n# return data\n#\n#\n# stu = Student('Zane', 'male', 20, '上海一期班', 'Python全栈课程')\n# stu.tell_info()\n# stu.save()\n# stu.get_obj_by_id()\n\n# 11、抽象老师类与学生类得到父类,用继承的方式减少代码冗余\nimport time\nimport pickle\nimport hashlib\n\nclass OldboyPeople:\n    def __init__(self, name, sex, age):\n        self.number = None\n        self.name = name\n        self.sex = sex\n        self.age = age\n        self.create_id()\n\n    def _make_md5_code(self, string):\n        string = (str(time.time()) + string).encode('utf-8')\n        m = hashlib.md5()\n        m.update(string)\n        return m.hexdigest()\n\n    def _get_user_info(self):\n        user_info = {\n            'number': self.number,\n            'name': self.name,\n            'sex': self.sex,\n            'age': self.age\n        }\n        return user_info\n\n    def create_id(self):\n        user_info = str(self._get_user_info().pop('number'))\n        self.number = self._make_md5_code(user_info)\n\n    def tell_info(self):\n        print('''\n        number: %s\n        name: %s\n        sex: %s\n        age: %s\n        classes: %s\n        course: %s\n        ''' % (self.number,\n                self.name,\n                self.sex,\n                self.age,\n                self.classes,\n                self.course))\n\n    def save(self):\n        user_info = self._get_user_info()\n        with open(r'%s' % self.number, 'wb') as f:\n            pickle.dump(user_info, f)\n\n    def get_obj_by_id(self):\n        with open(r'%s' % self.number, 'rb') as f:\n            data = pickle.load(f)\n            print(data)\n            return data\n\nclass Teacher(OldboyPeople):\n    def __init__(self, grade, salary):\n        self.grade = grade\n        self.salary = salary\n\nclass Student(OldboyPeople):\n    def __init__(self, classes, course):\n        self.classes = classes\n        self.course = course\n\n# stu = Student('Zane', 'male', 20, '上海一期班', 'Python全栈课程')\n# stu.tell_info()\n# stu.save()\n# stu.get_obj_by_id()\n#\n# stu = Student('Zane', 'male', 20, '上海一期班', 'Python全栈课程')\n# stu.tell_info()\n# stu.save()\n# stu.get_obj_by_id()\n\n# 12、基于面向对象设计一个对战游戏并使用继承优化代码,参考博客\n# http: // www.cnblogs.com / linhaifeng / articles / 7340497.\n# html # _label1\n#\n"
},
{
"alpha_fraction": 0.6008091568946838,
"alphanum_fraction": 0.6014834642410278,
"avg_line_length": 25.89090919494629,
"blob_id": "7815713896ba41b3130d2e85e5d78552e2683fde",
"content_id": "7a5d3a1339f1fd196ee155307178c5c23b29155c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2966,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 110,
"path": "/project/elective_systems/version_v9/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\nclass Base:\n    @classmethod\n    def get_obj_by_name(cls, name):\n        return db_handler.select(name, cls.__name__.lower())\n\n    def save(self):\n        return db_handler.save(self)\n\nclass Admin(Base):\n    def __init__(self, name, password):\n        self.name = name\n        self.password = password\n\n    @classmethod\n    def register(cls, name, password):\n        admin = cls(name, password)\n        return admin.save()\n\n    def create_school(self, name, address):\n        school = School(name, address)\n        return school.save()\n\n    def create_teacher(self, name, password):\n        teacher = Teacher(name, password)\n        return teacher.save()\n\n    def create_course(self, name, price, cycle, school_name):\n        school = School.get_obj_by_name(school_name)\n        school.set_up_course(name)\n        course = Course(name, price, cycle, school_name)\n        if school.save() and course.save():\n            return True\n\nclass School(Base):\n    def __init__(self, name, address):\n        self.name = name\n        self.address = address\n        self.course_list = []\n\n    def get_school_course(self):\n        return self.course_list\n\n    def set_up_course(self, name):\n        self.course_list.append(name)\n        return self.save()\n\nclass Teacher(Base):\n    def __init__(self, name, password):\n        self.name = name\n        self.password = password\n        self.course_list = []\n\n    def get_teacher_course(self):\n        return self.course_list\n\n    def set_up_course(self, course_name):\n        self.course_list.append(course_name)\n        return self.save()\n\nclass Course(Base):\n    def __init__(self, name, price, cycle, school_name):\n        self.name = name\n        self.price = price\n        self.cycle = cycle\n        self.student_list = []\n        self.school_name = school_name\n\n    def get_course_student_list(self):\n        return self.student_list\n\n    def add_course_student(self, student_name):\n        self.student_list.append(student_name)\n        return self.save()\n\nclass Student(Base):\n    def __init__(self, name, password):\n        self.name = name\n        self.password = password\n        self.school_list = []\n        self.course_list = []\n        self.score = {}\n\n    @classmethod\n    def register(cls, name, password):\n        student = cls(name, password)\n        return student.save()\n\n    def get_student_course(self):\n        return self.course\n\n    def get_student_score(self):\n        return self.score\n\n    def set_student_score(self, course, score):\n        self.score[course] = score\n        return self.save()\n\n    def choose_student_school(self, school_name):\n        self.school_list.append(school_name)\n        return self.save()\n\n    def set_student_course(self, course_name, school_name):\n        self.course_list.append(course_name)\n        self.school_list.append(school_name)\n        self.score[course_name] = 0\n        return self.save()\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.578158438205719,
"alphanum_fraction": 0.578158438205719,
"avg_line_length": 15.884614944458008,
"blob_id": "edbb7b6a72576cb92eceb26eba7c58042e3d6b54",
"content_id": "501766aead21ef7674c3c25cd694753bb951a10e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 473,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 26,
"path": "/weektest/test2/ATM_wenliuxiang/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "\r\nimport os\r\n\r\nimport logging.config\r\nfrom core import src\r\nfrom conf import setting\r\n\r\n\r\n\r\n\r\n\r\n\r\ndef login_auth(func):\r\n def wrapper(*args,**kwargs):\r\n if not src.user_data['is_auth']:\r\n print('请登录')\r\n src.login()\r\n else:\r\n return func(*args,*kwargs)\r\n\r\n return wrapper\r\n\r\n\r\ndef get_logger(name):\r\n logging.config.dictConfig(setting.LOGGING_DIC)\r\n logger = logging.getLogger(name)\r\n return logger\r\n"
},
{
"alpha_fraction": 0.5635359287261963,
"alphanum_fraction": 0.5690608024597168,
"avg_line_length": 15.300000190734863,
"blob_id": "ad2e1c659e490f8615a4b4273a8d3f8a637499fc",
"content_id": "79613a822b941d427a919c989e78a66e8d915866",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 181,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 10,
"path": "/weektest/test2/ATM_zhangxiangyu/bin/start.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\r\nimport os,sys\r\n\r\nres = os.path.dirname(os.path.dirname(__file__))\r\nsys.path.append(res)\r\n\r\nfrom core import src\r\n\r\nif __name__ == '__main__':\r\n src.run()\r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.5539402961730957,
"alphanum_fraction": 0.5572558045387268,
"avg_line_length": 26.20138931274414,
"blob_id": "d83ece9980a91b9179740d63a48b91bbc843d99b",
"content_id": "d9c4aa271313c73d0e73ae11e8f493dfb80172eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4249,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 144,
"path": "/project/elective_systems/version_v9/core/admin.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import admin_api\n\n\nCURRENT_USER = None\nROLE = 'admin'\n\n\ndef login():\n    global CURRENT_USER\n    common.show_green('登陆')\n    if CURRENT_USER:\n        common.show_red('用户不能重复登录!')\n        return\n    while True:\n        name = common.input_string('用户名')\n        if name == 'q': break\n        password = common.input_string('密码')\n        if password == 'q': break\n        flag, msg = admin_api.login(name, password)\n        if not flag:\n            common.show_red(msg)\n            continue\n        CURRENT_USER = name\n        common.show_green(msg)\n        return\n\n\ndef register():\n    common.show_green('注册')\n    if CURRENT_USER:\n        common.show_red('已登录,不能注册!')\n        return\n    while True:\n        name = common.input_string('注册用户名')\n        if name == 'q': break\n        password = common.input_string('注册密码')\n        if password == 'q': break\n        password2 = common.input_string('确认密码')\n        if password2 == 'q': break\n        if password != password2:\n            common.show_red('两次密码出入不一致!')\n            continue\n        flag, msg = admin_api.register(name, password)\n        if not flag:\n            common.show_red(msg)\n            continue\n        common.show_green(msg)\n        return\n\[email protected]_auth(ROLE)\ndef check_all_school():\n    common.show_green('查看所有学校')\n    common.show_object_list(type_name='school')\n\[email protected]_auth(ROLE)\ndef check_all_teacher():\n    common.show_green('查看所有老师')\n    common.show_object_list(type_name='teacher')\n\[email protected]_auth(ROLE)\ndef check_all_course():\n    common.show_green('查看所有课程')\n    common.show_object_list(type_name='course')\n\[email protected]_auth(ROLE)\ndef create_school():\n    common.show_green('创建学校')\n    while True:\n        name = common.input_string('学校名称')\n        if name == 'q': break\n        address = common.input_string('学校地址')\n        if address == 'q': break\n        flag, msg = admin_api.create_school(CURRENT_USER, name, address)\n        if not flag:\n            common.show_red(msg)\n            continue\n        common.show_green(msg)\n        return\n\[email protected]_auth(ROLE)\ndef create_teacher():\n    common.show_green('创建老师')\n    while True:\n        name = common.input_string('老师名字')\n        if name == 'q': break\n        flag, msg = admin_api.create_teacher(CURRENT_USER, name)\n        if not flag:\n            common.show_red(msg)\n            continue\n        common.show_green(msg)\n        return\n\[email protected]_auth(ROLE)\ndef create_course():\n    common.show_green('创建课程')\n    while True:\n        school_name = common.get_object_name(type_name='school')\n        if not school_name:\n            return\n        name = common.input_string('课程名称')\n        if name == 'q': break\n        price = common.input_string('课程价格')\n        if price == 'q': break\n        cycle = common.input_string('课程周期')\n        if cycle == 'q': break\n        flag, msg = admin_api.create_course(CURRENT_USER, name, price, cycle, school_name)\n        if not flag:\n            common.show_red(msg)\n            continue\n        common.show_green(msg)\n        return\n\ndef logout():\n    global CURRENT_USER\n    common.show_green('登出')\n    common.show_red('用户%s登出!' % CURRENT_USER)\n    CURRENT_USER = None\n\ndef run():\n    menu = {\n        '1': [login, '登陆'],\n        '2': [register, '注册'],\n        '3': [check_all_school, '查看所有学校'],\n        '4': [check_all_teacher, '查看所有老师'],\n        '5': [check_all_course, '查看所有课程'],\n        '6': [create_school, '创建学校'],\n        '7': [create_teacher, '创建老师'],\n        '8': [create_course, '创建课程']\n    }\n    while True:\n        common.show_green('按\"q\"退出视图')\n        common.show_menu(menu)\n        choice = common.input_string('请选择操作编号')\n        if choice == 'q':\n            if CURRENT_USER:\n                logout()\n            return\n        if choice not in menu:\n            common.show_red('选择编号非法!')\n            continue\n        menu[choice][0]()\n\n\n\n\n"
},
{
"alpha_fraction": 0.6616379022598267,
"alphanum_fraction": 0.6767241358757019,
"avg_line_length": 33.69230651855469,
"blob_id": "de3e62750eb2081499f60db9762befc99aff1927",
"content_id": "6f54884d2ffc47db4febaca9e440b96d8102b7b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 472,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 13,
"path": "/weektest/test2/ATM_tianzhiwei/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from db import db_hand\r\nfrom lib import common\r\nlogger_user=common.logger('User')\r\ndef get_info(name):\r\n return db_hand.select(name)\r\ndef write_info(name,password,account=15000):\r\n dic={'name':name,'password':password,'account':account,'position':account,'state1':False,'write_log':[]}\r\n db_hand.add(name,dic)\r\n logger_user.info('%s注册成功'%name)\r\ndef write_state(name):\r\n dic=db_hand.select(name)\r\n dic['state1']=True\r\n db_hand.add(name,dic)\r\n"
},
{
"alpha_fraction": 0.48443982005119324,
"alphanum_fraction": 0.5041493773460388,
"avg_line_length": 25.69444465637207,
"blob_id": "934c2dee93d6c977184e25a3bbd5521e4a021919",
"content_id": "59b24c7b00ce4762fd7fd14240906011e2dc1ad7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1022,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 36,
"path": "/homework/week1/python_weekend1_zhanglong_plus.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n\nimport os\n\nuser_info = {\n 'egon1': {'password': '123'},\n 'egon2': {'password': '123'},\n 'egon3': {'password': '123'}\n}\ni = 0\ndbfile = \"db.txt\"\nwhile 1:\n name = input('username>>: ')\n pwd = input('password>>: ')\n if not os.path.exists(dbfile):\n os.mknod(dbfile)\n with open(dbfile, 'r') as f:\n lock_users = f.read().split('|')\n if name in lock_users:\n print('用户 %s 已经被锁定' % name)\n break\n if name not in user_info:\n print('用户 %s 不存在!' % name)\n i += 1\n if name in user_info and pwd != user_info[name]['password']:\n print('用户 %s 密码错误!' % name)\n i += 1\n if i == 3:\n print('尝试次数过多,锁定!')\n with open(dbfile, 'a') as f:\n f.write('%s|' % name)\n break\n if name in user_info and pwd == user_info[name]['password']:\n print('Welcome %s, you are login successful!' % name)\n break\n\n\n\n"
},
{
"alpha_fraction": 0.5395956039428711,
"alphanum_fraction": 0.5509688258171082,
"avg_line_length": 22.038835525512695,
"blob_id": "c868b4d5064bf2217709b8c0a5b2ed7ad12361db",
"content_id": "c7e9635acb3924d749c632cfec76d1bb6d847179",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5090,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 206,
"path": "/month4/week5/python_day20/python_day20.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 子类可以继承父类的属性和方法;\nclass OldboyPeople:\n    school = 'Oldboy'\n\n    def __init__(self, name, age, sex):\n        self.name = name\n        self.age = age\n        self.sex = sex\n\n\nclass OldboyTeacher(OldboyPeople):\n    def change_student_score(self, score):\n        print('Teacher %s is changeing score. %s' % (self.name, score))\n\nclass OldboyStudent(OldboyPeople):\n    def choose_course(self, course):\n        print('Student %s is choosing course. %s' % (self.name, course))\n\n\nt1 = OldboyTeacher('Egon', 18, 'male')\nstu1 = OldboyStudent('Zane', 20, 'male')\n\nt1.change_student_score(90)\nstu1.choose_course('Python全栈')\n\n# 子类定义了和父类相同的属性或方法,会覆盖父类中的属性或方法;\nclass Foo:\n    def f1(self):\n        print('Foo.f1 ...')\n\n    def f2(self):\n        print('Foo.f2 ...')\n        self.f1()\n\nclass Bar(Foo):\n    def f1(self):\n        print('Bar.f1 ...')\n\nobj = Bar()\nobj.f2()\n\n# 类的派生\nclass OldboyPeople:\n    school = 'Oldboy'\n\n    def __init__(self, name, age, sex):\n        self.name = name\n        self.age = age\n        self.sex = sex\n\n    def f1(self):\n        print('OldboyPeople.f1 ...')\n\nclass OldboyTeacher(OldboyPeople):\n    def __init__(self, name, age, sex, grade, salary):\n        self.name = name\n        self.age = age\n        self.sex = sex\n\n        self.grade = grade\n        self.salary = salary\n\n    def get_teacher_info(self):\n        print('''=======Teacher info=======\n    \n    name: %s\n    age: %s\n    sex: %s\n    grade: %s\n    salary: %s\\n==========================\n    ''' % (self.name, self.age, self.sex, self.grade, self.salary))\n\n    def change_student_score(self, score):\n        print('Teacher %s is changeing score. %s' % (self.name, score))\n\n    def f1(self):\n        print('OldboyTeacher.f1 ...')\n\nclass OldboyStudent(OldboyPeople):\n    def choose_course(self, course):\n        print('Student %s is choosing course. %s' % (self.name, course))\n\nt1 = OldboyTeacher('Egon', 18, 'male', '特级教师', '5w')\nt1.get_teacher_info()\nt1.f1()\n\n# 在子类派生中的新方法中,重用父类的功能:\n# 1. 指名道姓的调用;(和继承无关)\n# class OldboyPeople:\n#     school = 'Oldboy'\n#\n#     def __init__(self, name, age, sex):\n#         self.name = name\n#         self.age = age\n#         self.sex = sex\n#\n#     def f1(self):\n#         print('OldboyPeople.f1 ...')\n#\n#     def get_teacher_info(self):\n#         print('''=======Teacher info=======\n#     name: %s\n#     age: %s\n#     sex: %s\n#     ''' % (self.name, self.age, self.sex))\n#\n#\n# class OldboyTeacher(OldboyPeople):\n#     def __init__(self, name, age, sex, grade, salary):\n#         OldboyPeople.__init__(self, name, age, sex)\n#         self.grade = grade\n#         self.salary = salary\n#\n#     def get_teacher_info(self):\n#         OldboyPeople.get_teacher_info(self)\n#         print('''\n#     grade: %s\n#     salary: %s\n#     ''' % (self.grade, self.salary))\n#\n#     def change_student_score(self, score):\n#         print('Teacher %s is changeing score. %s' % (self.name, score))\n#\n#     def f1(self):\n#         print('OldboyTeacher.f1 ...')\n#\n#\n# class OldboyStudent(OldboyPeople):\n#     def choose_course(self, course):\n#         print('Student %s is choosing course. %s' % (self.name, course))\n#\n#\n# t1 = OldboyTeacher('Egon', 18, 'male', '特级教师', '5w')\n# t1.get_teacher_info()\n# t1.f1()\n# 2. 使用 super 调用:(严格依赖于继承)\n# super() 的返回值是一个特殊的对象,该对象专门用来调用父类中的属性;\n# python2 中调用方式:super(, self)\nclass OldboyPeople:\n    school = 'Oldboy'\n\n    def __init__(self, name, age, sex):\n        self.name = name\n        self.age = age\n        self.sex = sex\n\n    def f1(self):\n        print('OldboyPeople.f1 ...')\n\n    def get_teacher_info(self):\n        print('''\n    name: %s\n    age: %s\n    sex: %s''' % (self.name, self.age, self.sex))\n\n\nclass OldboyTeacher(OldboyPeople):\n    def __init__(self, name, age, sex, grade, salary):\n        super().__init__(name, age, sex)\n        self.grade = grade\n        self.salary = salary\n\n    def get_teacher_info(self):\n        super().get_teacher_info()\n        print('''\n    grade: %s\n    salary: %s\n    ''' % (self.grade, self.salary))\n\n    def change_student_score(self, score):\n        print('Teacher %s is changeing score. %s' % (self.name, score))\n\n    def f1(self):\n        print('OldboyTeacher.f1 ...')\n\n\nclass OldboyStudent(OldboyPeople):\n    def choose_course(self, course):\n        print('Student %s is choosing course. %s' % (self.name, course))\n\n\nt1 = OldboyTeacher('Egon', 18, 'male', '特级教师', '5w')\nt1.get_teacher_info()\nt1.f1()\n\n# 打印属性查找顺序\nprint(OldboyTeacher.mro())\n\n# super() 方法的继承,严格按照继承关系查找;\nclass A:\n    def test(self):\n        print('from A ...')\n        super().test()\n        pass\n\nclass B:\n    def test(self):\n        print('from B ...')\n        pass\n\nclass C(A, B):\n    pass\n\nc = C()\nc.test()\nprint(C.mro())\n\n\n"
},
{
"alpha_fraction": 0.43759071826934814,
"alphanum_fraction": 0.449927419424057,
"avg_line_length": 19.388059616088867,
"blob_id": "64e783602e5a5d3a970b73e084842621ab0fcd1c",
"content_id": "10d5e6d6b7ce37dc521c36e6f67f80692b934c29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1454,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 67,
"path": "/month4/week5/python_day21/python_day21.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "class Foo:\n __x = 1\n\n def __init__(self, y=2):\n self.__y = y\n\n def __f1(self):\n print('Foo.f1 ...')\n\n def f2(self):\n print('Foo.f2 ...')\n self.__f1()\n\n def get_y(self):\n print('get y: %s' % self.__y)\n\n# obj = Foo(2)\n\n# print(obj.x)\n# print(obj.y)\n# obj.f1()\n#\n# print(obj._Foo__x)\n# print(obj._Foo__y)\n# obj._Foo__f1()\n# print(Foo.__dict__)\n#\n# obj.get_y()\n\nclass Bar(Foo):\n def __f1(self):\n print('Bar.f1 ...')\n\n# obj = Bar()\n# obj.f2()\n\nclass People:\n def __init__(self, name, age):\n self.__name = name\n self.__age = age\n\n def get_user_info_api(self):\n # name = input('请输入用户名 >>: ').strip()\n # if name == self.__name:\n # print('''\n # 用户名:%s\n # 年龄:%s\n # ''' % (self.__name, self.__age))\n print('''\n 用户名:%s\n 年龄:%s\n ''' % (self.__name, self.__age))\n\n def modify_user_info_api(self, name, age):\n if not isinstance(name, str):\n raise TypeError('用户名必须是字符串!')\n self.__name = name\n if not isinstance(age, int):\n raise TypeError('年龄必须是整型!')\n self.__age = age\n self.get_user_info_api()\n\np = People('egon', 18)\n# print(p.name, p.age)\np.get_user_info_api()\np.modify_user_info_api('egon', 20)\np.get_user_info_api()\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.4841897189617157,
"alphanum_fraction": 0.5197628736495972,
"avg_line_length": 24.149999618530273,
"blob_id": "32a4217a8fe4077668478f32a35afa569452efdd",
"content_id": "32146d6a68ab67df154d4388793ea579e59d8051",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 512,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 20,
"path": "/month4/week7/python_day27/python_day27_client.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import time\nimport socket\n\nclient = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nwhile True:\n try:\n client.connect(('127.0.0.1', 8080))\n except OSError:\n print('等待3秒 ..')\n time.sleep(3)\n else:\n while True:\n msg = input('>>>: ').strip()\n if not msg: continue\n client.send(msg.encode('utf-8'))\n data = client.recv(1024)\n print(data.decode('utf-8'))\n finally:\n if client:\n client.close()\n\n\n\n"
},
{
"alpha_fraction": 0.5759768486022949,
"alphanum_fraction": 0.5774240493774414,
"avg_line_length": 22.517240524291992,
"blob_id": "db17e0f434935a998e990cdbfc34ff0f25015b5d",
"content_id": "21dd3f2648e9757b73129e687e37961d355365a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 703,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 29,
"path": "/project/shooping_mall/version_v4/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport pickle\nfrom conf import settings\nfrom lib import common\n\nlogger = common.get_logger('db_handler')\n\ndef select(name):\n obj_path = os.path.join(settings.DB_PATH, name)\n if not os.path.exists(obj_path):\n return\n if os.path.isdir(obj_path):\n return\n with open(r'%s' % obj_path, 'rb') as f:\n return pickle.load(f)\n\ndef update(obj):\n obj_path = os.path.join(settings.DB_PATH, obj.name)\n try:\n with open(r'%s' % obj_path, 'wb') as f:\n pickle.dump(obj, f)\n f.flush()\n except Exception as e:\n logger.warning('写文件出错:%s' % e)\n return\n else:\n return True\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.3947795629501343,
"alphanum_fraction": 0.4074925482273102,
"avg_line_length": 27.88671875,
"blob_id": "0ebed0079953446f13d9d0c68e5e5ad9990a547f",
"content_id": "b2bd779bb77f5378e4a08714276e35cd75df5c1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8224,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 256,
"path": "/weektest/weektest1/test/python_weektest_1.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 三、综合题\n# 1. 编写基础登陆接口\nimport os\nconfig = 'db.txt'\nwith open(r'%s' % config, 'a') as f:\n    pass\ni = 0\nlogin_user_list = []\nwhile True:\n    users = {}\n    with open(r'%s' % config) as f:\n        for user in f:\n            user = user.strip('\\n').split('|')\n            n, p, l = user\n            users[n] = [p, l]\n    name = input('用户名 >>: ').strip()\n    pwd = input('密码 >>: ')\n    if name not in users:\n        print('用户名不存在!')\n        continue\n    if users[name][1] == 'lock':\n        print('用户已锁定,禁止登陆!')\n        continue\n    if name in login_user_list:\n        print('用户 %s 已经是登录状态!' % name)\n        continue\n    if pwd != users[name][0]:\n        print('密码错误')\n        i += 1\n        if i == 3:\n            with open(r'%s' % config, 'r') as f1, \\\n                    open(r'%s.swap' % config, 'w') as f2:\n                for line in f1:\n                    if name in line:\n                        line = line.replace('unlock\\n', 'lock\\n')\n                    f2.write(line)\n            os.remove(config)\n            os.rename('%s.swap' % config, config)\n            print('尝试次数过多!锁定用户!')\n            break\n    if name in users and pwd == users[name][0]:\n        print('%s, 您已登陆成功!' % name)\n        login_user_list.append(name)\n\n# 2. 编写拷贝文件的程序\nimport sys\nif len(sys.argv) == 3:\n    src_file = sys.argv[1]\n    dst_file = sys.argv[2]\nelse:\n    print('params not valid!')\n    sys.exit()\nwith open(r'%s' % src_file, 'rb') as f1, \\\n        open(r'%s' % dst_file, 'wb') as f2:\n    for line in f1:\n        f2.write(line)\n\n# 3. 请用两种方式实现,将文件中的alex全部替换成SB的操作\n# 第一种:\ns1 = 'alex'\ns2 = 'SB'\nwith open(r'a.txt', 'r') as f:\n    data = f.read().repalce(s1, s2)\nwith open(r'a.txt', 'w') as f:\n    f.write(data)\n# 第二种:\nimport os\ns1 = 'alex'.encode('utf-8')\ns2 = 'SB'.encode('utf-8')\nwith open(r'a.txt', 'rb') as f1, open(r'a.txt.swap', 'wb') as f2:\n    for line in f1:\n        if s1 in line:\n            line = line.replace(s1, s2)\n        f2.write(line)\nos.remove('a.txt')\nos.rename('a.txt.swap', 'a.txt')\n\n# 4. 编写购物车程序,实现注册,登陆,购物,查看功能,数据基于文件读取\nimport os\n\ngoods = {\n    '1': {\n        'name': 'mac',\n        'price': 20000\n    },\n    '2': {\n        'name': 'lenovo',\n        'price': 10000\n    },\n    '3': {\n        'name': 'apple',\n        'price': 200\n    },\n    '4': {\n        'name': 'tesla',\n        'price': 1000000\n    }\n    }\n\nusers = {}\nconfig = 'users.txt'\nwith open(r'%s' % config, 'a') as f:\n    pass\nwith open(r'%s' % config) as f:\n    for u in f:\n        if u:\n            u = u.strip('\\n').split('|')\n            n, p, b = u\n            b = int(b)\n            users[n] = {'password': p, 'balance': b}\n\ntag = True\ncookies = {}\nshopping_cart = {}\nwhile tag:\n    print('1 注册用户\\n2 用户登陆\\n3 购买商品\\n4 购物车')\n    action = input('请选择操作 >>: ').strip()\n    if action == 'quit':\n        tag = False\n        continue\n    if action == '1':\n        # 注册\n        n = input('请输入用户名 >>: ').strip()\n        if n == 'quit':\n            tag = False\n            continue\n        if n in users:\n            print('用户%s已经注册!' % n)\n            continue\n        p = input('请输入密码 >>: ')\n        if p == 'quit':\n            tag = False\n            continue\n        while tag:\n            b = input('请输入充值金额 >>: ').strip()\n            if b == 'quit':\n                tag = False\n                continue\n            if b.isdigit():\n                b = int(b)\n                break\n            print('请输入整数!')\n        with open(r'%s' % config, 'a') as f:\n            u = '%s|%s|%s' % (n, p, b)\n            f.write(u)\n        print('用户 %s 注册成功!' % n)\n    elif action =='2':\n        # 登陆\n        while tag:\n            n = input('请输入用户名 >>: ').strip()\n            if n == 'quit':\n                tag = False\n                continue\n            if n in cookies:\n                print('用户%s已经登录!\\n' % n)\n                break\n            p = input('请输入密码 >>: ')\n            if p == 'quit':\n                tag = False\n                continue\n            if n not in users:\n                print('用户名不存在!')\n                continue\n            if p != users[n]['password']:\n                print('密码错误!')\n                continue\n            if n in users and p == users[n]['password']:\n                cookies[n] = users[n]\n                print('用户 %s 登陆成功!\\n' % n)\n                break\n    elif action == '3':\n        # 购买\n        n = input('请输入用户名 >>: ').strip()\n        if n == 'quit':\n            tag = False\n            continue\n        if n not in cookies:\n            print('请先登录再购物!')\n            continue\n        while tag:\n            print('='*30)\n            print('编码   名称       价格')\n            for k in goods:\n                print('%-6s %-10s %-10s' % (k, goods[k]['name'], goods[k]['price']))\n            print('='*30)\n            code = input('请选择购买商品编码[结账:bill] >>: ').strip()\n            if code == 'quit':\n                tag = False\n                continue\n            if code == 'bill':\n                print('请选择进入购物车结账!')\n                break\n            if code not in goods:\n                print('商品编码非法!')\n                continue\n            while tag:\n                count = input('请输入购买数量 >>: ').strip()\n                if count == 'quit':\n                    tag = False\n                    continue\n                if count.isdigit():\n                    count = int(count)\n                    break\n                print('请输入整数!')\n            good = goods[code]['name']\n            price = goods[code]['price']\n            cost = price * count\n            if users[n]['balance'] >= cost:\n                users[n]['balance'] -= cost\n                if good not in shopping_cart:\n                    shopping_cart[good] = [price, count]\n                else:\n                    shopping_cart[good] = [price, shopping_cart[good][1]+count]\n                print('购物车: %s,账户余额: %s' % (shopping_cart, users[n]['balance']))\n            else:\n                diff = cost - users[n]['balance']\n                print('账户余额不足!还需 %s 才能购买%s个 %s!' % (diff, count, good))\n    elif action == '4':\n        # 购物车,结账\n        n = input('请输入用户名 >>: ').strip()\n        if n == 'quit':\n            tag = False\n            continue\n        if n not in cookies:\n            print('请先登录再查看购物车!')\n            continue\n        cost = 0\n        print('=' * 50)\n        print('商品购物车:')\n        for k, v in shopping_cart.items():\n            good, price, count = k, v[0], v[1]\n            print('购买商品:%-10s 商品价格: %-6s 购买数量: %-6s' % (good, price, count))\n            cost += (price * count)\n        print('\\n商品总价:%s' % cost)\n        print('账户余额:%s' % users[n]['balance'])\n        print('=' * 50)\n        buy = input('确认购买?y/n >>: ').strip()\n        if buy == 'y':\n            with open(r'%s' % config) as f1, \\\n                    open(r'%s.swap' % config, 'w') as f2:\n                for line in f1:\n                    if n in line:\n                        line = line.split('|')\n                        line[-1] = '%s\\n' % str(users[n]['balance'])\n                        line = '|'.join(line)\n                    f2.write(line)\n            os.remove(config)\n            os.rename('%s.swap' % config, config)\n            print('购买成功,请耐心等待发货!')\n            shopping_cart = {}\n        elif buy == 'n' or buy == 'quit':\n            print('已取消购物!\\n')\n        else:\n            print('输入操作非法!')\n    else:\n        print('输入编码非法!')"
},
{
"alpha_fraction": 0.6880072355270386,
"alphanum_fraction": 0.7529305815696716,
"avg_line_length": 23.66666603088379,
"blob_id": "e90efb58520cb8ae7ebb83a8c1f8d9156086a60d",
"content_id": "5eac8bd572c3b0122ee65feed29117ce59a76ad8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2343,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 45,
"path": "/month4/week7/python_day31/python_day31_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月27日\n# 1、整理GIL解释器锁,解释以下问题\n# 1、什么是GIL\n# 2、有了GIL会对单进程下的多个线程造成什么样的影响\n# 3、为什么要有GIL\n# 4、GIL与自定义互斥锁的区别,多个线程争抢GIL与自定义互斥锁的过程分析\n# 5、什么时候用python的多线程,什么时候用多进程,为什么?\n#\n# 2、进程池与线程池\n# 1、池的用途,为何要用它\n# 2、池子里什么时候装进程什么时候装线程?\n#\n# 3、基于进程池与线程池实现并发的套接字通信\n#\n# 4、基于线程池实现一个可以支持并发通信的套接字,完成以下功能?\n# 执行客户端程序,用户可选的功能有:\n# 1、登录\n# 2、注册\n# 3、上传\n# 4、下载\n#\n# 思路解析:\n# 1、执行登录,输入用户名egon,密码123,对用户名egon和密码进行hash校验,并加盐处理,将密文密码发送到服务端,与服务端事先存好用户名与密文密码进行对比,对比成功后,\n# 在服务端内存中用hash算法生成一个随机字符串比如eadc05b6c5dda1f8772c4f4ca64db110\n# 然后将该字符串发送给用户以及登录成功的提示信息发送给客户端, 然后在服务存放好\n# current_users = {\n# 'a3sc05b6c5dda1f8313c4f4ca64db110': {'uid': 0, 'username': 'alex'},\n# 'e31adfc05b6c5dda1f8772c4f4ca64b0': {'uid': 1, 'username': 'lxx'},\n# 'eadc05b6c5dda1f8772c4f4ca64db110': {'uid': 2, 'username': 'egon'},\n#\n# }\n#\n# 用户在收到服务端发来的\n# 'eadc05b6c5dda1f8772c4f4ca64db110'\n# 以及登录成功的提示信息后,以后的任何操作都会携带该随机字符串\n# 'eadc05b6c5dda1f8772c4f4ca64db110‘,服务端会根据该字符串获取用户信息来进行与该用户匹配的操作\n#\n# 在用户关闭连接后,服务端会从current_users字典中清除用户信息,下次重新登录,会产生新的随机字符串\n# 这样做的好处:\n# 1、用户的敏感信息全都存放到服务端,更加安全\n# 2、每次登录都拿到一个新的随机的字符串,不容易被伪造\n#\n# 2、执行注册功能,提交到服务端,然后存放到文件中,如果用户已经存在则提示用户已经注册过,要求重新输入用户信息\n#\n# 3、执行上次下载功能时会携带用户的随机字符串到服务端,如果服务端发现该字符串not in current_users,则要求用户先登录"
},
{
"alpha_fraction": 0.48154735565185547,
"alphanum_fraction": 0.5353490710258484,
"avg_line_length": 28.214284896850586,
"blob_id": "e13015b50393437e2fd9457e3ad2d2e20add702a",
"content_id": "a276b10e181442dbd18b5791c57ebd82b38dd3d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4852,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 154,
"path": "/project/elective_systems/version_v8/core/teacher.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import common_api, teacher_api\n\nCURRENT_USER = None\nROLE = 'teacher'\n\ndef login():\n    global CURRENT_USER\n    print('\\033[32m登陆\\033[0m')\n    if CURRENT_USER:\n        print('\\033[31m不能重复登录!\\033[0m')\n        return\n    while True:\n        name = common.input_string('登陆用户名')\n        if name == 'q':\n            break\n        password = common.input_string('登陆密码')\n        if password == 'q':\n            break\n        flag, msg = common_api.login(name, password, ROLE)\n        if flag:\n            CURRENT_USER = name\n            print('\\033[32m%s\\033[0m' % msg)\n            return\n        else:\n            print('\\033[31m%s\\033[0m' % msg)\n\ndef logout():\n    global CURRENT_USER\n    print('\\033[31mGoodbye, %s!\\033[0m' % CURRENT_USER)\n    CURRENT_USER = None\n\[email protected](ROLE)\ndef check_course(show=True):\n    if show:\n        print('\\033[32m查看所有课程\\033[0m')\n    course_list = common.get_object_list('course')\n    if not course_list:\n        print('\\033[31m课程列表为空!\\033[0m')\n        return\n    print('-' * 30)\n    for k, v in enumerate(course_list):\n        print('%s %s' % (k, v))\n    return course_list\n\[email protected](ROLE)\ndef check_teach_course(show=True):\n    if show:\n        print('\\033[32m查看教授课程\\033[0m')\n    course = teacher_api.get_teach_course(CURRENT_USER)\n    if not course:\n        print('\\033[31m老师教授课程列表为空!\\033[0m')\n        return\n    print('-' * 30)\n    for i, name in enumerate(course):\n        print('%-4s %-10s' % (i, name))\n    print('-' * 30)\n    return course\n\[email protected](ROLE)\ndef choose_teach_course():\n    print('\\033[32m选择教授课程\\033[0m')\n    while True:\n        course = check_course(False)\n        choice = common.input_integer('请选择课程编号')\n        if choice < 0 or choice > len(course):\n            print('\\033[31m课程编号非法!\\033[0m')\n            continue\n        flag, msg = teacher_api.choose_teach_course(CURRENT_USER, course[choice])\n        if flag:\n            print('\\033[32m%s\\033[0m' % msg)\n            return\n        else:\n            print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef check_course_student():\n    print('\\033[32m查看课程学员\\033[0m')\n    while True:\n        course = check_teach_course(False)\n        if not course:\n            return\n        print('-' * 30)\n        choice = common.input_integer('请选择课程编号')\n        if choice < 0 or choice > len(course):\n            print('\\033[31m课程编号非法!\\033[0m')\n            continue\n        student_list = teacher_api.get_course_student(course[choice])\n        if not student_list:\n            print('\\033[31m课程学员列表为空!\\033[0m')\n            return\n        print('-' * 30)\n        for i, name in enumerate(student_list):\n            print('%-4s %-10s' % (i, name))\n        print('-' * 30)\n        return\n\ndef choose_student_course(name):\n    while True:\n        flag, course_list = teacher_api.get_student_course(name)\n        if not flag:\n            print('\\033[31m%s\\033[0m' % course_list)\n            return\n        print('-' * 30)\n        for i, name in enumerate(course_list):\n            print('%-4s %-10s' % (i, name))\n        print('-' * 30)\n        choice = common.input_integer('请选择课程编号')\n        if choice < 0 or choice > len(course):\n            print('\\033[31m课程编号非法!\\033[0m')\n            continue\n        return course_list[choice]\n\[email protected](ROLE)\ndef change_student_score():\n    print('\\033[32m修改学员成绩\\033[0m')\n    while True:\n        name = common.input_string('请输入学生名字')\n        course = choose_student_course(name)\n        if not course:\n            continue\n        score = common.input_string('请输入学生分数')\n        flag, msg = teacher_api.change_student_score(CURRENT_USER, name, course, score)\n        if flag:\n            print('\\033[32m%s\\033[0m' % msg)\n            return\n        else:\n            print('\\033[31m%s\\033[0m' % msg)\n\ndef run():\n    menu = {\n        '1': [login, '登陆'],\n        '2': [check_course, '查看所有课程'],\n        '3': [check_teach_course, '查看教授课程'],\n        '4': [choose_teach_course, '选择教授课程'],\n        '5': [check_course_student, '查看课程学员'],\n        '6': [change_student_score, '修改学生成绩'],\n    }\n    while True:\n        print('=' * 30)\n        for k, v in menu.items():\n            print('%-4s %-10s' % (k, v[1]))\n        print('=' * 30)\n        choice = common.input_string('请选择操作编号')\n        if choice == 'q':\n            if CURRENT_USER:\n                logout()\n            return\n        if choice not in menu:\n            print('\\033[31m选择编号非法!\\033[0m')\n            continue\n        menu[choice][0]()"
},
{
"alpha_fraction": 0.4304661750793457,
"alphanum_fraction": 0.4900853633880615,
"avg_line_length": 26.690908432006836,
"blob_id": "23ac0b74771eefa5f32601d2054ba8ca786c7b1d",
"content_id": "cd22746e8a08e66131addada70fbfd5ef1055fb8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8265,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 275,
"path": "/project/shooping_mall/version_v5/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom interface import user, bank, shop\n\nCURRENT_USER = None\n\ndef input_string(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n print('\\033[31m不能是空字符!\\033[0m')\n continue\n return string\n\ndef input_integer(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if string == 'q':\n return string\n if not string:\n print('\\033[31m不能是空字符!\\033[0m')\n continue\n if not string.isdigit():\n print('\\033[31m请输入数字!\\033[0m')\n continue\n return int(string)\n\ndef login():\n global CURRENT_USER\n print('\\033[32m登陆\\033[0m')\n while True:\n name = input_string('用户名')\n if name == 'q': break\n password = input_string('密码')\n if password == 'q': break\n flag, msg = user.login(name, password)\n if flag:\n CURRENT_USER = name\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while True:\n name = input_string('用户名')\n if name == 'q': break\n password = input_string('密码')\n if password == 'q': break\n password2 = input_string('确认密码')\n if password2 == 'q': break\n if password != password2:\n print('\\033[31m两次密码输入不一致!\\033[0m')\n continue\n flag, msg = user.register(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef check_balance():\n print('\\033[32m查看余额\\033[0m')\n blance_info = user.get_balance_info(CURRENT_USER)\n print('-' * 30)\n print('''\n 余额: %s\n 信用余额: %s\n 信用额度: %s\n ''' % blance_info)\n print('-' * 30)\n\ndef check_bill():\n print('\\033[32m查看账单\\033[0m')\n bill_info = user.get_bill_info(CURRENT_USER)\n print('-' * 30)\n print('您的本期账单为%s元!' 
% bill_info)\n print('-' * 30)\n\ndef check_flow():\n print('\\033[32m查看流水\\033[0m')\n flow_info = user.get_flow_info(CURRENT_USER)\n if not flow_info:\n print('银行流水列表为空!')\n return\n print('-' * 30)\n for k, v in flow_info:\n print('%s %s' % (k, v))\n print('-' * 30)\n\ndef recharge():\n print('\\033[32m充值\\033[0m')\n amount = input_integer('请输入充值金额')\n if amount == 'q':\n return\n flag,msg = bank.recharge(CURRENT_USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef transfer():\n print('\\033[32m转账\\033[0m')\n while True:\n name = input_string('请输入收款账户')\n if name == 'q': break\n if name == CURRENT_USER:\n print('\\033[31m用户%s不能给自己转账!\\033[0m' % name)\n continue\n amount = input_integer('请输入转账金额')\n if amount == 'q': break\n flag, msg = bank.transfer(CURRENT_USER, name, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef withdraw():\n print('\\033[32m取现\\033[0m')\n amount = input_integer('请输入取现金额')\n if amount == 'q':\n return\n flag, msg = bank.withdraw(CURRENT_USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef repay():\n print('\\033[32m还款\\033[0m')\n check_bill()\n amount = input_integer('请输入取现金额')\n if amount == 'q':\n return\n flag, msg = bank.repay(CURRENT_USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef check_shopping_cart():\n print('\\033[32m查看购物车\\033[0m')\n shopping_cart_info = shop.get_shooping_cart_info(CURRENT_USER)\n if not shopping_cart_info:\n print('\\033[31m购物车列表为空!\\033[0m')\n return\n print('-' * 30)\n cost = 0\n for good, v in shopping_cart_info.items():\n cost += (v['price'] * v['count'])\n print('商品编号: %s 商品名称: %s 商品价格: %s 商品数量: %s' % (v['code'], good, v['price'], v['count']))\n print('商品总价: %s' % cost)\n print('-' * 30)\n\ndef shopping():\n print('\\033[32m购物\\033[0m')\n good_info = 
shop.get_good_info()\n while True:\n print('\\033[35m输入pay结账\\033[0m')\n print('-' * 30)\n for k,v in good_info.items():\n print('%s %s %s' % (k, v['name'], v['price']))\n print('-' * 30)\n code = input_string('请选择购买商品编号')\n if code == 'q': break\n if code == 'pay':\n pay()\n return\n if code not in good_info:\n print('\\033[31m商品编号非法!\\033[0m')\n continue\n good = good_info[code]['name']\n price = good_info[code]['price']\n count = input_integer('请输入购买商品数量')\n if count == 'q': break\n flag, msg = shop.join_shopping_cart(CURRENT_USER, good, code, price, count)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef pay():\n print('\\033[32m结账\\033[0m')\n while True:\n check_shopping_cart()\n confirm = input_string('是否确认结账?y/n')\n if confirm == 'q':\n break\n if confirm == 'n':\n print('\\033[32m用户%s取消结账!\\033[0m' % CURRENT_USER)\n return\n if confirm == 'y':\n flag, msg = shop.pay(CURRENT_USER)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n return\n\ndef new_arrival():\n print('\\033[32m新品上架\\033[0m')\n while True:\n code = input_string('请输入商品编码')\n if code == 'q': break\n name = input_string('请输入商品名称')\n if name == 'q': break\n price = input_integer('请输入商品价格')\n if price == 'q': break\n flag, msg = shop.new_arrival(code, name, price)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef atm():\n menu = {\n '1': [check_balance, '查看余额'],\n '2': [check_bill, '查看账单'],\n '3': [check_flow, '查看流水'],\n '4': [recharge, '充值'],\n '5': [transfer, '转账'],\n '6': [withdraw, '取现'],\n '7': [repay, '还款'],\n }\n while True:\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input_string('请选择操作编号')\n if choice == 'q': break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n\ndef mall():\n menu = {\n '1': [check_shopping_cart, '查看购物车'],\n '2': 
[shopping, '购物'],\n '3': [pay, '结账'],\n }\n while True:\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input_string('请选择操作编号')\n if choice == 'q': break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [atm, 'ATM'],\n '4': [mall, '购物商城'],\n # '5': [new_arrival, '新品上架'],\n }\n while True:\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input_string('请选择操作编号')\n if choice == 'q': break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n"
},
{
"alpha_fraction": 0.5232558250427246,
"alphanum_fraction": 0.6317829489707947,
"avg_line_length": 12.578947067260742,
"blob_id": "9e6ab3ba6c514d7aff718fd7f53855c89ae84210",
"content_id": "b719c7bdcd9d08a837e52a38d8628c11ac072af0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 282,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 19,
"path": "/project/elective_systems/version_v2/interface/admin_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\n\nlogger = common.get_logger('admin')\n\n\n\n\ndef create_school():\n print('\\033[32m创建学校\\033[0m')\n\n\ndef create_teacher():\n print('\\033[32m创建老师\\033[0m')\n\n\ndef create_course():\n print('\\033[32m创建课程\\033[0m')\n"
},
{
"alpha_fraction": 0.6061643958091736,
"alphanum_fraction": 0.6602739691734314,
"avg_line_length": 22.918033599853516,
"blob_id": "d37c0a82003f99770519252f83da53d007fbc5fe",
"content_id": "5932785589461b8855b883f13b27e182e987d303",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1880,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 61,
"path": "/month5/week9/python_day37/python_day37_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1 创建购物信息表:\n# 购物人 \t商品名称 \t数量\n# A \t 甲 \t 2\n# B \t 乙 \t 4\n# C \t 丙 \t 1\ncreate table shopping_cart(\n id int primary key auto_increment,\n name char(20) not null,\n good char(20) not null,\n count int not null\n );\n# 1.1:查询购买两件及以上人的姓名\nselect * from shopping_cart where count>2;\n# 1.2:查询购买数量为2的人的商品名\nselect * from shopping_cart where count=2;\n# 1.3:查询购买数量为4的购物人姓名\nselect * from shopping_cart where count=4;\n# 1.4: 修改A的数量为10\nupdate shopping_cart set count=10 where name='A';\n# 1.5:删除数量等于1的这条记录\ndelete from shopping_cart where count=1;\n\n\n\n# 2 创建学生表:\n# 姓名 \t课程 \t分数\n# 张三 \t语文 \t81\n# 李四 \t语文 \t90\n# 王五 \t语文 \t49\n\ncreate table student(\n id int primary key auto_increment,\n name char(20) not null,\n course char(20) not null,\n score float(5,2) not null\n);\n\ninsert into student(name,course,score) values\n ('张三', '语文', 81),\n ('李四', '语文', 90),\n ('王五', '语文', 49);\n# 2.1:查询及格人的名字\nselect name from student where score>=60;\n# 2.2:查询成绩在90分或以上的人名\nselect name from student where score>=90;\n# 2.3:查询分数等于49的人名和课程名\nselect name,course from student where score=49;\n# 2.4:修改王五的分数为60\nupdate student set score=60 where name='王五';\n# 2.5:删除名字为李四的记录\ndelete from student where name='李四';\n# 3 创建学生表:有学生 id,姓名,密码,年龄,注册时间,体重\n# 随便插入几条数据\ncreate table student(\n id int primary key auto_increment,\n name char(20) not null,\n password char(20) not null,\n age int not null,\n weight float(5,2) not null,\n register_time datetime not null\n);\n\n"
},
{
"alpha_fraction": 0.5542857050895691,
"alphanum_fraction": 0.6114285588264465,
"avg_line_length": 15.800000190734863,
"blob_id": "5b21baa4d8421e0f4e8043ba568da7f2a87b4f6b",
"content_id": "c315bbcd9ce5309535a5baf3e473e70d9060e04d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 185,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 10,
"path": "/project/elective_systems/version_v4/core/student.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import admin_api\n\nUSER = {'name': None}\nROLE = 'student'\n\ndef run():\n print('\\033[31m还未编写!\\033[0m')\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.4000000059604645,
"alphanum_fraction": 0.44112148880958557,
"avg_line_length": 24.4761905670166,
"blob_id": "35cb82926c5ad8e28599d767ff1542300fcc97b6",
"content_id": "1a284304f8c2f8be83050ac0f323acba1f2eafbd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 581,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 21,
"path": "/project/elective_systems/version_v6/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom core import admin, teacher, student\n\ndef run():\n menu = {\n '1': [admin, '管理端'],\n '2': [teacher, '教师端'],\n '3': [student, '学生端']\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择平台编号 >>: ').strip()\n if choice == 'q': break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0].run()\n"
},
{
"alpha_fraction": 0.6329588294029236,
"alphanum_fraction": 0.6357678174972534,
"avg_line_length": 37.33333206176758,
"blob_id": "5c0d9fb8a4890ac7f47b65765f38efffc41295b8",
"content_id": "be0e461ff49318fb2e957bffd57685710c2d6823",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1158,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 27,
"path": "/weektest/test2/ATM_tianzhiwei/interface/benk.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from db import db_hand\r\nfrom lib import common\r\nlogger_benk=common.logger('Benk')\r\ndef get_account(name):\r\n return db_hand.select(name)\r\ndef transfer_money(out_name,in_name,account):\r\n dic=db_hand.select(out_name)\r\n dic['account']-=account\r\n dict=db_hand.select(in_name)\r\n dict['account']+=account\r\n dic['write_log'].append('%s给%s转账,%s 人民币'%(out_name,in_name,account))\r\n dict['write_log'].append('%s收到%s的转账,%s 人民币' % (in_name, out_name,account))\r\n db_hand.add(out_name,dic)\r\n db_hand.add(in_name,dict)\r\n logger_benk.info('%s给%s转账,%s 人民币'%(out_name,in_name,account))\r\ndef out_money(name,account):\r\n dic=db_hand.select(name)\r\n dic['account']-=account*1.05\r\n dic['write_log'].append('%s提现了%s 人民币'%(name,account))\r\n db_hand.add(name,dic)\r\n logger_benk.info('%s提现了%s 人民币'%(name,account))\r\ndef in_money(name,account):\r\n dic=db_hand.select(name)\r\n dic['account'] += account\r\n dic['write_log'].append('%s还款%s 人民币' % (name, account))\r\n db_hand.add(name, dic)\r\n logger_benk.info('%s还款%s 人民币' % (name, account))\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.5903061032295227,
"alphanum_fraction": 0.6086734533309937,
"avg_line_length": 25.486486434936523,
"blob_id": "af4af437b3472f6010079d29392cfa4fe38745e3",
"content_id": "8de848deb65286dc5cccd8262e43c51b7fda47ee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2598,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 74,
"path": "/month4/week6/python_day22/python_day22_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月16\n# 1、定义MySQL类(参考答案:http://www.cnblogs.com/linhaifeng/articles/7341177.html#_label5)\n# 1.对象有id、host、port三个属性\n# 2.定义工具create_id,在实例化时为每个对象随机生成id,保证id唯一\n# 3.提供两种实例化方式,方式一:用户传入host和port 方式二:从配置文件中读取host和port进行实例化\n# 4.为对象定制方法,save和get_obj_by_id,save能自动将对象序列化到文件中,文件路径为配置文件中DB_PATH,文件名为id号,\n# 保存之前验证对象是否已经存在,若存在则抛出异常; get_obj_by_id方法用来从文件中反序列化出对象;\n# import os\n# import uuid\n# import pickle\n# import settings\n#\n# class MySQL:\n# def __init__(self, host, port):\n# self.id = self.create_id()\n# self.host = host\n# self.port = port\n#\n# def create_id(self):\n# return str(uuid.uuid1())\n#\n# @classmethod\n# def from_conf(cls):\n# return cls(settings.HOST, settings.PORT)\n#\n# def save(self):\n# with open(r'%s' % os.path.join(settings.DB_PATH, self.id), 'wb') as f:\n# pickle.dump(self, f)\n#\n# def get_obj_by_id(self):\n# try:\n# with open(r'%s' % os.path.join(settings.DB_PATH, self.id), 'rb') as f:\n# data = pickle.load(f)\n# except Exception as e:\n# raise FileNotFoundError('找不到文件%s!' % os.path.join(settings.DB_PATH, self.id))\n# else:\n# return data\n#\n# obj = MySQL.from_conf()\n# print(obj.id)\n# obj.save()\n# data = obj.get_obj_by_id()\n# print(obj.id)\n#\n# obj2 = MySQL.from_conf()\n# print(obj2.id)\n\n# 2、定义一个类:圆形,该类有半径,周长,面积等属性,将半径隐藏起来,将周长与面积开放\n# 参考答案(http://www.cnblogs.com/linhaifeng/articles/7340801.html#_label4)\n# import math\n#\n# class Cycle:\n# def __init__(self, redius):\n# self.__redius = redius\n#\n# @property\n# def circumference(self):\n# return 2 * (math.pi * self.__redius)\n#\n# @property\n# def area(self):\n# return math.pi * (self.__redius ** 2)\n#\n# obj = Cycle(2)\n# print(obj.circumference)\n# print(obj.area)\n\n\n# 3、明日默写\n# 1、简述面向对象三大特性:继承、封装、多态\n# 2、定义一个人的类,人有名字,身高,体重,用property讲体质参数封装成人的数据属性\n# 3、简述什么是绑定方法与非绑定方法,他们各自的特点是什么?\n\n# 4、完善选课系统作业\n"
},
{
"alpha_fraction": 0.4624198079109192,
"alphanum_fraction": 0.4832722246646881,
"avg_line_length": 20.492610931396484,
"blob_id": "8351b138e79ca70dd707752de43794315eae0a00",
"content_id": "1cb5b467d19519e8023f9d380c82144e294eb009",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4656,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 203,
"path": "/month4/week6/python_day23/python_day23.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 反射\n# class Foo:\n# def run(self):\n# while True:\n# cmd = input('cmd >>: ').strip()\n# if hasattr(self, cmd):\n# print('%s run ...' % cmd)\n# func = getattr(self, cmd)\n# func()\n# else:\n# print('命令无效!')\n#\n# def download(self):\n# print('download ...')\n#\n# def upload(self):\n# print('upload ...')\n#\n#\n# obj = Foo()\n# obj.run()\n\n# 2. __str__() 方法\n# import time\n#\n# class People:\n# def __init__(self, name, age, sex):\n# self.name = name\n# self.age = age\n# self.sex = sex\n#\n# def __str__(self):\n# return '<名字: %s 年龄: %s 性别: %s>' % (self.name, self.age, self.sex)\n#\n# def __del__(self):\n# print('obj deleted ..')\n#\n#\n# obj = People('egon', 18, 'male')\n# print(obj)\n# del obj\n# time.sleep(5)\n# print('server stoped ..')\n\n# 3. __del__() 方法:回收系统资源\n# class Mysql:\n# def __init__(self):\n# self.ip = ip\n# self.port = port\n# self.conn = connect(ip, port)\n#\n# def __del__(self):\n# self.conn.close()\n#\n# obj = Mysql('1.1.1.1', 3306)\n#\n# class MyOpen:\n# def __init__(self, filepath, mode='r', encoding='utf-8'):\n# self.filepath = filepath\n# self.mode = mode\n# self.encoding = encoding\n# self.fobj = open(filepath, mode=mode, encoding=encoding)\n#\n# def __str__(self):\n# msg = '''\n# filepath: %s\n# mode: %s\n# encoding: %s\n# '''\n# return msg\n#\n# def __del__(self):\n# self.fobj.close()\n#\n# f = MyOpen('a.txt', mode='r', encoding='utf-8')\n# print(f)\n#\n# res = f.fobj.read()\n# print(res)\n\n# 4. 
元类 类的类就是元类\n# 我们用class定义的类使用来产生我们自己的对象;\n# 内置元类type是用来专门产生class定义的类的;\n\n# exec 函数知识储备\n# exec(str, {}, {}) 把函数名放入名称空间内\n# code = '''\n# global x\n# x = 0\n# y = 2\n# '''\n# global_dic = {'x': 10000}\n# local_dic = {}\n#\n# exec(code, global_dic, local_dic)\n# print(global_dic)\n# print(local_dic)\n\n# class Chinese:\n# country = 'China'\n#\n# def __init__(self, name, age, sex):\n# self.name = name\n# self.age = age\n# self.sex = sex\n#\n# def speak(self):\n# print('%s speak Chinese' % self.name)\n#\n# p = Chinese('egon', 18, 'male')\n# print(type(p))\n#\n# print(type(Chinese))\n\n# 4. __call__() 方法储备知识\n# class Foo:\n# def __init__(self):\n# print('__init__')\n#\n# def __str__(self):\n# return '__str__'\n#\n# def __del__(self):\n# print('__del__')\n#\n# def __call__(self, *args, **kwargs):\n# print('__call__', args, kwargs)\n#\n# obj = Foo()\n# print(obj)\n#\n# obj(1,2,3,a=1,b=2,c=3)\n\n# 5. 自定义元类\n# class Mymeta(type):\n# # 控制类Foo的创建\n# def __init__(self, class_name, class_bases, class_dic):\n# if class_name.isdigit():\n# raise TypeError('类名称不能是数字!')\n# # print('class_name: %s' % class_name)\n# if not class_dic.get('__doc__'):\n# raise TypeError('类内必须写好文档注释!')\n# # print('__doc__: %s' % class_dic['__doc__'])\n# self.class_name = class_name\n# self.obj = class_bases\n# self.class_dic = class_dic\n#\n# # 控制类Foo的调用过程,即控制实例化Foo的过程\n# def __call__(cls, *args, **kwargs):\n# obj = object.__new__(cls)\n# cls.__init__(obj, *args, ** kwargs)\n# return obj\n#\n# # class_dic = {'__doc__':'\\033[32mFoo class\\033[0m'}\n# # Foo = Mymeta('Foo', (object,), class_dic)\n#\n# class Foo(object, metaclass=Mymeta):\n# x = 1\n# __doc__ = '''\n# \\033[32mFoo class\\033[0m\n# '''\n# def __init__(self, y):\n# self.y = y\n#\n# def __str__(self):\n# return 'y: %s' % self.y\n#\n# def f1(self):\n# print('from f1')\n#\n# obj = Foo('2')\n# print(obj)\n# print(obj.x)\n# print(obj.y)\n# print(obj.f1)\n# 6. 
单例模式\n# import settings\n#\n# class MySQL:\n# __conn = None\n# def __init__(self, ip, port):\n# self.ip = ip\n# self.port = port\n#\n# @classmethod\n# def singleton(cls):\n# if not cls.__conn:\n# cls.__conn = cls(settings.IP, settings.PORT)\n# return cls.__conn\n#\n# def __call__(self, *args, **kwargs):\n# pass\n#\n# obj1 = MySQL('1.1.1.2', 3306)\n# obj2 = MySQL('1.1.1.3', 3306)\n# obj3 = MySQL('1.1.1.4', 3306)\n#\n# obj4 = MySQL.singleton()\n# obj5 = MySQL.singleton()\n# obj6 = MySQL.singleton()\n# print(obj4)\n# print(obj5)\n# print(obj6)\n\n"
},
{
"alpha_fraction": 0.44497042894363403,
"alphanum_fraction": 0.4508875608444214,
"avg_line_length": 13.083333015441895,
"blob_id": "08b0a84ab8fb6d8b93a214433d0a6e507916ecc3",
"content_id": "d87153983d5e63bef8bf6984711e5d25a6eb9ba8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 871,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 60,
"path": "/month4/week6/python_day24/python_day24.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 异常使用\n# try:\n# print('start ...')\n# x = 1\n# # y\n# l = []\n# # l[3]\n# d = {'a': 1}\n# # d['b']\n# f = open('a.txt', 'w')\n# f.read()\n# except NameError as e:\n# print('NameError: %s' % e)\n# except KeyError as e:\n# print('KeyError: %s' % e)\n# except IndexError as e:\n# print('IndexError: %s' % e)\n# except Exception as e:\n# print('Exception: %s' % e)\n# else:\n# print('end ...')\n# finally:\n# print('finally ...')\n# f.close()\n# print('file closed ...')\n# print('other ...')\n#\n# 2. 自定义异常\n# class RegisterError(BaseException):\n# def __init__(self, msg, user):\n# self. msg = msg\n# self.user = user\n#\n# def __str__(self):\n# return '<%s %s>' % (self.user, self.msg)\n#\n# raise RegisterError('注册失败', 'teacher')\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n#\n"
},
{
"alpha_fraction": 0.4319494068622589,
"alphanum_fraction": 0.4382733106613159,
"avg_line_length": 23.447551727294922,
"blob_id": "c709e79c6e52f8135d2aedef46630a031c950c34",
"content_id": "da9dcace782ed0d775ee354784215a6e77864b72",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3947,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 143,
"path": "/weektest/test2/ATM_tianzhiwei/core/src.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from interface import user\r\nfrom lib import common\r\nfrom interface import benk\r\ndict={\r\n 'name':None,\r\n 'state':False\r\n}\r\ndef register():\r\n print('注册')\r\n if dict['state']:\r\n print('已登陆')\r\n return\r\n while True:\r\n name=input('用户名>>:').strip()\r\n if not user.get_info(name):\r\n password=input('密码>>:').strip()\r\n password1=input('确认密码>>:').strip()\r\n if password==password1:\r\n user.write_info(name,password)\r\n break\r\n else:\r\n print('密码不一致')\r\n else:\r\n print('该用户已存在')\r\n\r\ndef login():\r\n print('登陆')\r\n if dict['state']:\r\n print('已登陆')\r\n return\r\n count=0\r\n while True:\r\n name = input('用户名>>:').strip()\r\n dic=user.get_info(name)\r\n if count==3:\r\n user.write_state(name)\r\n print('账户已被锁定')\r\n break\r\n if dic:\r\n password=input('输入密码>>:').strip()\r\n if password==dic['password'] and not dic['state1']:\r\n dict['name']=name\r\n dict['state']=True\r\n print('登陆成功')\r\n break\r\n else:\r\n print('密码错误')\r\n count+=1\r\n else:\r\n print('该用户不存在')\r\[email protected]_\r\ndef transfer():\r\n print('转账')\r\n while True:\r\n name=input('转账对象>>:').strip()\r\n if name=='q':\r\n break\r\n if name==dict['name']:\r\n print('不能给自己转账')\r\n continue\r\n if benk.get_account(name):\r\n account=input('金额>>:').strip()\r\n if account.isdigit():\r\n dic=benk.get_account(dict['name'])\r\n account=int(account)\r\n if account<=dic['account']:\r\n benk.transfer_money(dict['name'],name,account)\r\n break\r\n else:\r\n print('余额不足')\r\n else:\r\n print('请输入数字类型')\r\n else:\r\n print('对象不存在')\r\[email protected]_\r\ndef withdraw():\r\n print('提现')\r\n while True:\r\n account=input('提现金额>>:').strip()\r\n if account=='q':\r\n break\r\n if account.isdigit():\r\n dic=benk.get_account(dict['name'])\r\n account=int(account)\r\n if account*1.05<=dic['account']:\r\n benk.out_money(dict['name'],account)\r\n break\r\n else:\r\n print('余额不足')\r\n else:\r\n print('请输入数字类型')\r\[email protected]_\r\ndef inquiry():\r\n print('查询')\r\n 
dic=benk.get_account(dict['name'])\r\n print(dic['account'])\r\[email protected]_\r\ndef flowlog():\r\n print('流水日志')\r\n dic = user.get_info(dict['name'])\r\n for line in dic['write_log']:\r\n print(line)\r\[email protected]_\r\ndef bank_money():\r\n print('还款')\r\n while True:\r\n account = input('还款金额>>:').strip()\r\n if account == 'q':\r\n break\r\n if account.isdigit():\r\n account=int(account)\r\n benk.in_money(dict['name'],account)\r\n break\r\n else:\r\n print('请输入数字')\r\ndic_name={\r\n '1':register,\r\n '2':login,\r\n '3':transfer,\r\n '4':withdraw,\r\n '5':inquiry,\r\n '6':flowlog,\r\n '7':bank_money\r\n}\r\ndef run():\r\n while True:\r\n print('''\r\n 1.注册\r\n 2.登陆\r\n 3.转账\r\n 4.提现\r\n 5.查询\r\n 6.流水日志\r\n 7.还款\r\n 输入q 则退出功能\r\n ''')\r\n choice=input('请输入编码>>:').strip()\r\n if choice=='q':\r\n break\r\n if not choice.isdigit():\r\n print('请输入数字')\r\n if choice in dic_name:\r\n dic_name[choice]()"
},
{
"alpha_fraction": 0.5049019455909729,
"alphanum_fraction": 0.6397058963775635,
"avg_line_length": 16.782608032226562,
"blob_id": "2cd258814ca5a77a1dc540a02e18522da3449c6c",
"content_id": "bc84021fdf5da7f884f6e1b809c56bbcc864ba6a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 464,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 23,
"path": "/project/elective_systems/version_v2/interface/teacher_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\n\nlogger = common.get_logger('teacher')\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n\ndef register():\n print('\\033[32m注册\\033[0m')\n\ndef check_course():\n print('\\033[32m查看教授课程\\033[0m')\n\ndef choose_course():\n print('\\033[32m选择教授课程\\033[0m')\n\ndef check_students():\n print('\\033[32m查看课程学员\\033[0m')\n\ndef modify_score():\n print('\\033[32m修改学员成绩\\033[0m')"
},
{
"alpha_fraction": 0.6040462255477905,
"alphanum_fraction": 0.6054913401603699,
"avg_line_length": 26.600000381469727,
"blob_id": "dcd4f056fc6bb76f1b092be8772feba48a8701c4",
"content_id": "438965215167ca7e37593f937da81cb735b08e51",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 692,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 25,
"path": "/project/elective_systems/version_v2/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport pickle\nfrom conf import settings\n\n\ndef save(obj):\n path_obj = os.path.join(settings.BASE_DB, obj.__class__.__name__.lower())\n if not os.path.exists(path_obj):\n os.mkdir(path_obj)\n path_file = os.path.join(path_obj, obj.name)\n with open(r'%s' % path_file, 'wb') as f:\n pickle.load(obj, f)\n f.flush()\n\ndef select(name, obj_type):\n path_obj = os.path.join(settings.BASE_DB, obj_type)\n if not os.path.exists(path_obj):\n os.mkdir(path_obj)\n path_file = os.path.join(path_obj, name)\n if not os.path.exists(path_file):\n return\n with open(path_file, 'rb') as f:\n return pickle.load(f)\n\n\n"
},
{
"alpha_fraction": 0.35725709795951843,
"alphanum_fraction": 0.368252694606781,
"avg_line_length": 31.914474487304688,
"blob_id": "4b2d8371fde895d0e37ae8a4a9e333bdcebfb4a3",
"content_id": "e7184789847dac2e9aaa8c72c095044b8e4437ce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5398,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 152,
"path": "/homework/week2/python_weekend2_zhanglong_shopping_cart.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- encoding: utf-8 -*-\n\ngoods = {\n 'mac': {\n 'price': 20000\n },\n 'lenovo': {\n 'price': 10000\n },\n 'apple': {\n 'price': 200\n },\n 'tesla': {\n 'price': 1000000\n }\n}\nline = '='*25\nconfig = 'db.txt'\ntag = True\nwhile tag:\n users = {}\n with open(config, 'a') as f:\n pass\n with open(config, 'r') as f:\n for u in f:\n u = u.strip('\\n').split('|')\n if u:\n n, p, i, x, a = u\n d = {n: {'password': p, 'phone': i, 'sex': x, 'age': a, 'money': 0, 'goods': {}}}\n users.update(d)\n print(line)\n print('\\n1 注册用户\\n2 登陆购物\\n')\n print(line)\n action = input('选择操作 >>: ').strip()\n if action == 'quit':\n tag = False\n continue\n elif action == '1':\n while tag:\n register = False\n print('请输入注册信息!')\n phone = input('手机 >>: ').strip()\n if phone == 'quit':\n tag = False\n continue\n for p in users.values():\n if phone == p['phone']:\n print('手机号 %s 已经注册!' % phone)\n register = True\n break\n if register:\n continue\n name = input('用户名 >>: ').strip()\n if name == 'quit':\n tag = False\n continue\n password = input('密码 >>: ')\n if password == 'quit':\n tag = False\n continue\n sex = input('性别 >>: ').strip()\n if sex == 'quit':\n tag = False\n continue\n age = input('年龄 >>: ').strip()\n if age == 'quit':\n tag = False\n continue\n user = '%s|%s|%s|%s|%s' % (name, password, phone, sex, age)\n with open(config, 'a') as f:\n f.write('%s\\n' % user)\n print('%s 注册成功!' 
% name)\n break\n elif action == '2':\n i = 0\n while tag:\n print('请输入用户名和密码!')\n name = input('用户名 >>: ').strip()\n if name == 'quit':\n tag = False\n continue\n if name not in users:\n print('\\033[31m用户名不存在,请先注册后登陆!\\033[0m')\n break\n pwd = input('密码 >>: ')\n if pwd == 'quit':\n tag = False\n continue\n if pwd != users[name]['password']:\n print('密码错误!')\n i += 1\n if i == 3:\n print('尝试次数过多,锁定用户')\n tag = False\n continue\n if name in users and pwd == users[name]['password']:\n print('登陆成功!')\n while tag:\n salary = input('请输入工资 >>: ').strip()\n if salary == 'quit':\n tag = False\n continue\n if not salary.isdigit():\n print('工资无效,请输入整数')\n continue\n else:\n salary = int(salary)\n users[name]['money'] = salary\n break\n while tag:\n print(line)\n gd = {}\n for k, v in enumerate(goods):\n print('%-6s %-10s %-10s' % (k, v, goods[v]['price']))\n gd[str(k)] = v\n print(line)\n code = input('请选择要购买的商品编号 >>: ').strip()\n if code == 'quit':\n tag = False\n continue\n if code not in gd:\n print('输入商品编号非法!')\n continue\n else:\n good = gd[code]\n count = input('请选择要购买的商品数量 >>: ').strip()\n if count == 'quit':\n tag = False\n continue\n if not count.isdigit():\n print('数量无效,请输入整数')\n continue\n else:\n count = int(count)\n if users[name]['money'] >= goods[good]['price'] * count:\n users[name]['money'] = users[name]['money'] - ( goods[good]['price'] * count )\n print('商品 %s 购买成功!' % good)\n if good not in users[name]['goods']:\n users[name]['goods'][good] = count\n else:\n users[name]['goods'][good] = users[name]['goods'][good] + count\n print('已购商品:%s 账户余额: %s' % (users[name]['goods'], users[name]['money']))\n else:\n print('\\033[31m账户余额不足!\\033[0m')\n continue\n cmd = input('是否继续购物? y/n >>: ').strip().lower()\n if cmd == 'n' or cmd == 'quit':\n print('用户名: %s\\n购买商品: %s\\n账户余额: %s' % (name, users[name]['goods'], users[name]['money']))\n tag = False\n else:\n print('输入编码非法!')"
},
{
"alpha_fraction": 0.5656028389930725,
"alphanum_fraction": 0.567375898361206,
"avg_line_length": 25.5238094329834,
"blob_id": "cf1dbfb13985284e29d0a4882ac325afc86bfcd9",
"content_id": "26022e2afcaf4370471195313812264547ceea76",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 634,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 21,
"path": "/project/elective_systems/version_v3/interface/admin_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import modules\n\ndef login(name, password):\n obj = modules.Admin.get_obj_by_name(name)\n if obj:\n if obj.name == name and obj.password == password:\n return True, '用户%s登陆成功!' % name\n else:\n return False, '用户名或密码错误!'\n else:\n return False, '用户%s未注册!' % name\n\ndef register(name, password):\n obj = modules.Admin.get_obj_by_name(name)\n if obj:\n return False, '用户%s已注册!' % name\n else:\n modules.Admin(name, password)\n return True, '用户%s注册成功!' % name\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6130136847496033,
"alphanum_fraction": 0.6164383292198181,
"avg_line_length": 19.785715103149414,
"blob_id": "0667bc1096d7e727a7f1b261d89452d609f6aa19",
"content_id": "fe28a368683b9dd675180e6237c96a83b766cd9f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 292,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 14,
"path": "/month4/week5/python_day19/ATM/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport json\nfrom conf import settings\n\n\ndef read(name):\n with open(r'%s' % settings.USER_CONFIG % name) as f:\n return json.load(f)\n\n\ndef write(user_info):\n with open(r'%s' % settings.USER_CONFIG % user_info['name']) as f:\n json.dump(user_info)\n\n"
},
{
"alpha_fraction": 0.45083266496658325,
"alphanum_fraction": 0.5281522870063782,
"avg_line_length": 24.744897842407227,
"blob_id": "93d0669ce1a93f614a937486d9cb3201e3a98f64",
"content_id": "8cbdfc895e152d3232a1b69a5e6a85a81c18cb2c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3336,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 98,
"path": "/month4/week5/python_day17/python_day17_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月9日作业:\n# == == 必做作业 == ==\n# 用谷歌浏览器打开http: // maoyan.com /,点击榜单,然后点击鼠标右键选择:显示网页源代码,然后将显示出的内容存储到文件index.html中\n# 1、匹配出文件index.html所有的链接\nimport re\n\nwith open(r'index.html') as f:\n data = f.read()\nhyperlinks = re.findall('href=\"(http.*?)\"', data)\nprint(hyperlinks)\n\n# 2、有字符串\n# 'email1:[email protected] email2:[email protected] eamil3:[email protected]'\n# 匹配出所有的邮箱地址:['[email protected]', '[email protected]', '[email protected]']\nimport re\n\ns = 'email1:[email protected] email2:[email protected] eamil3:[email protected]'\nmailboxes = re.findall(':(.*?.com)', s)\nprint(mailboxes)\n\n# 3、编写程序,\n# 1、让用户输入用户名,要求用户输入的用户名只能是字母或数字,否则让用户重新输入,\n# 2、让用户输入密码,要求密码的长度为8 - 10位,\n# 密码的组成必须为字母、数字、下划线,密码开头必须为字母,否则让用户重新输入\nimport re\n\ndef name_convention():\n while True:\n name = input('请输入用户名 >>: ').strip()\n if re.findall('[^0-9a-zA-Z]', name):\n print('用户名必须是字母或数字!')\n continue\n return name\n\ndef password_convention():\n while True:\n password = input('请输入密码 >>: ')\n if len(password) not in (8, 9, 10):\n print('密码长度必须是 8-10 位!')\n continue\n if re.findall('^[^a-zA-Z]', password):\n print('密码必须以字母开头!')\n continue\n if re.findall('\\W', password):\n print('密码必须为字母、数字、下划线!')\n continue\n return password\n\ndef register():\n name = name_convention()\n password = password_convention()\n user_info = {\n 'name': name,\n 'password': password\n }\n print('用户%s注册成功!' % name)\n\nregister()\n\n# 4、有字符串\n# \"1-12*(60+(-40.35/5)-(-4*3))\",匹配出所有的数字如['1', '-12', '60', '-40.35', '5', '-4', '3']\ns = \"1-12*(60+(-40.35/5)-(-4*3))\"\nnumbers = re.findall('-?\\d+[.]\\d+|-?\\d+', s)\nprint(numbers)\n# 5、有字符串\n# \"1-2*(60+(-40.35/5)-(-4*3))\",找出所有的整数如['1', '-2', '60', '', '5', '-4', '3']\ns = \"1-2*(60+(-40.35/5)-(-4*3))\"\nnumbers = re.findall(\"-?\\d+\\.\\d*|(-?\\d+)\", s)\nprint(numbers)\n# 6、ATM + 购物车作业:\n# 1、构建程序基本框架\n# 2、实现注册功能\n\n# 7、明天早晨默写6\n#\n# == == 答案 == ==\n# 答案:\n# 1、re.findall('href=\"(.*?)\"', 读取文件内容)\n# 2、re.findall(r\":(.*?@.*?com)\", 'email1:[email protected] email2:[email protected] eamil3:[email protected]')\n# 3、\n# 1、判断用户输入的内容中如果匹配到[ ^ a - zA - Z0 - 9]则让用户重新输入\n# 2、 ^ [a - zA - Z]\\w\n# {7, 10}$\n#\n#\n# 4、re.findall(r'-?\\d+\\.?\\d*', \"1-12*(60+(-40.35/5)-(-4*3))\")\n# 5、re.findall(r\"-?\\d+\\.\\d*|(-?\\d+)\", \"1-2*(60+(-40.35/5)-(-4*3))\")\n#\n# 6、略\n#\n# 7、略\n#\n# == 可以考虑选做一个作业(不做完全可以):正则表达式 + 函数递归调用实现一个计算器 ==\n# 用户输入:1 - 2 * ((60 + 2 * (-3 - 40.0 / 5) * (9 - 2 * 5 / 3 + 7 / 3 * 99 / 4 * 2998 + 10 * 568 / 14)) - (-4 * 3) / (\n# 16 - 3 * 2))\n# 可以得到计算的结果\n#\n# 参考:http://www.cnblogs.com/wupeiqi/articles/4949995.html"
},
{
"alpha_fraction": 0.6186440587043762,
"alphanum_fraction": 0.6232665777206421,
"avg_line_length": 28.409090042114258,
"blob_id": "524e3321d9f0971cf1bc731fb3c5269f6f64d17f",
"content_id": "56ad114e46c7e4412d6e5b5f6691822e4a0b998a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1506,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 44,
"path": "/project/shooping_mall/version_v4/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom db import modules\n\nlogger = common.get_logger('user')\n\ndef login(name, password):\n user = modules.User.get_obj_by_name(name)\n if not user:\n return False, '用户%s不存在!' % name\n if password == user.password:\n logger.info('用户%s登陆成功!' % name)\n return True, '用户%s登陆成功!' % name\n else:\n logger.info('用户%s密码错误!' % name)\n return False, '用户%s密码错误!' % name\n\ndef register(name, password, credit_limit=15000):\n user = modules.User.get_obj_by_name(name)\n if user:\n return False, '用户%s不能重复注册!' % name\n user = modules.User.register(name, password, credit_limit)\n if user:\n logger.info('用户%s注册成功!' % name)\n return True, '用户%s注册成功!' % name\n else:\n logger.info('用户%s注册失败!' % name)\n return False, '用户%s注册失败!' % name\n\ndef get_balance_info(name):\n user = modules.User.get_obj_by_name(name)\n logger.info('用户%s获取账户余额信息!' % name)\n return user.check_balance()\n\ndef get_bill_info(name):\n user = modules.User.get_obj_by_name(name)\n logger.info('用户%s获取账户账单信息!' % name)\n return user.check_bill()\n\ndef get_flow_info(name, bill_date):\n user = modules.User.get_obj_by_name(name)\n logger.info('用户%s获取账户流水信息!' % name)\n return user.check_flow(bill_date)\n\n\n\n\n"
},
{
"alpha_fraction": 0.3930513560771942,
"alphanum_fraction": 0.4066465198993683,
"avg_line_length": 30.5238094329834,
"blob_id": "806a9e7b393faa3520e5238aabb30a2538276342",
"content_id": "1cbb4bc2c44b35771e9819f78735d5d82c6dd3d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3836,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 105,
"path": "/month3/week2/python_day7/python_day7_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 一、改进用户注册、查看程序\nline = '='*25\nconfig = 'db.txt'\ntag = True\nwhile tag:\n users = {}\n with open(config, 'a') as f:\n pass\n with open(config, 'r') as f:\n for u in f:\n u = u.split()\n if u:\n n, p, i, x, a = u\n d = {n: {'password': p, 'phone': i, 'sex': x, 'age': a}}\n users.update(d)\n print(line)\n print('1 注册用户\\n2 登陆查看')\n print(line)\n action = input('选择操作 >>: ').strip()\n if action == '1':\n while tag:\n register = False\n print('\\033[31m输入注册信息!\\033[0m')\n r_i = input('手机 >>: ')\n for p in users.values():\n if r_i == p['phone']:\n print('手机号 %s 已经注册!' % r_i)\n register = True\n break\n if register:\n continue\n r_n = input('用户名 >>: ').strip()\n r_p = input('密码 >>: ')\n r_s = input('性别 >>: ')\n r_a = input('年龄 >>: ')\n user = '%s %s %s %s %s' % (r_n, r_p, r_i, r_s, r_a)\n with open(config, 'a') as f:\n f.write('%s\\n' % user)\n print('%s 注册成功!' % r_n)\n break\n elif action == '2':\n while tag:\n print('请输入用户名和密码!')\n name = input('用户名 >>: ').strip()\n if name == 'quit':\n tag = False\n continue\n if name not in users:\n print('\\033[31m用户名不存在,请先注册后登陆!\\033[0m')\n break\n pwd = input('密码 >>: ')\n if pwd == 'quit':\n tag = False\n continue\n if pwd != users[name]['password']:\n print('密码错误!')\n continue\n if name in users and pwd == users[name]['password']:\n print('登陆成功!')\n phone = users[name]['phone']\n sex = users[name]['sex']\n age = users[name]['age']\n print('用户名: %s\\n手机号: %s\\n性别: %s\\n年龄: %s' % (name, phone, sex, age))\n break\n else:\n print('输入编号非法!')\n\n# 二、编写程序,实现下列功能\n# 1、提供两种可选功能:\n# 1 拷贝文件\n# 2 修改文件\n# 2、用户输入操作的编码,根据用户输入的编号,执行文件拷贝(让用户输入原文件路径和目标文件路径)或修改操作\n\n# #!/usr/bin/env python3\n# # -*- encoding: utf-8 -*-\n#\n# import os\n#\n# while 1:\n# print('\\n1 拷贝文件\\n2 修改文件\\n')\n# action = input('选择操作 >>: ').strip()\n# if action == '1':\n# print('输入拷贝信息!')\n# src = input('源路径 >>: ').strip()\n# dst = input('目标路径 >>: ').strip()\n# with open(r'%s' % src, 'rb') as f1, open(r'%s' % dst, 'wb') as f2:\n# for line in f1:\n# f2.write(line)\n# print('文件%s拷贝到%s完成!' % (src, dst))\n# elif action == '2':\n# print('输入修改信息!')\n# path = input('文件路径 >>: ').strip()\n# path_tmp = '%s.swap' % path\n# str_s = input('原字符串 >>: ').encode('utf-8')\n# str_r = input('替换字符串 >>: ').encode('utf-8')\n# with open(r'%s' % path, 'rb') as f1, open(r'%s' % path_tmp, 'wb') as f2:\n# for line in f1:\n# if str_s in line:\n# line = line.replace(str_s, str_r)\n# f2.write(line)\n# os.remove(path)\n# os.rename(path_tmp, path)\n# print('文件%s修改完成!' % path)\n# else:\n# print('输入编号非法!')\n"
},
{
"alpha_fraction": 0.5502130389213562,
"alphanum_fraction": 0.5508216619491577,
"avg_line_length": 25,
"blob_id": "67d90f49c419542d3f4faeb24376d469cc895171",
"content_id": "de38479052893c5aab17fd0305e73a2a117d4f99",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1667,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 63,
"path": "/project/shooping_mall/version_v1/lib/utils.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport json\nimport logging\nfrom conf import settings\n\ndef get_logger(name=__name__):\n '''\n For get a logger object.\n :param name: logger object name\n :return: logger object\n '''\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n logger = logging.getLogger(name)\n return logger\n\nlogger = get_logger('utils')\n\ndef checkpath(uri):\n '''\n Check File or Directory exists\n :param uri: file uri\n :return: True or False\n '''\n return os.path.exists(uri)\n\ndef file_handler(**kwargs):\n '''\n File handler to read json data to file or write json data from file.\n :param\n kwargs: **{'name': name, 'uri': file_uri, 'data': write_data}\n :return:\n read: read success return json data\n write: write success return True\n '''\n dir_path = os.path.dirname(kwargs['file_path'])\n file_path = kwargs['file_path']\n mode = kwargs['mode']\n try:\n if not checkpath(dir_path):\n os.makedirs(dir_path)\n except:\n logger.error('mkdir %s error' % dir_path)\n return\n if mode == 'r':\n try:\n with open(r'%s' % file_path, 'r') as f:\n data = json.load(f)\n return data\n except:\n logger.error('read %s error' % file_path)\n return\n elif mode == 'w':\n try:\n with open(r'%s' % file_path, 'w') as f:\n json.dump(kwargs['data'], f)\n return True\n except:\n logger.error('write %s error' % file_path)\n return\n else:\n logger.error('\"%s\"不是正确的文件打开模式!' % mode)\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5221579670906067,
"alphanum_fraction": 0.5375722646713257,
"avg_line_length": 21.042552947998047,
"blob_id": "9700526d91cf41fa0141dc86459b760802b6a345",
"content_id": "539a5eca3f05bed417d6078a3c76196ece3bfcc1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1350,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 47,
"path": "/month3/week2/python_day7/python_day7.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 扩展:文件的光标移动 seek()\n# with open(r'users.txt', 'r+') as f:\n# f.seek(9)\n# print(f.tell())\n\n# 1. 复制文件小程序\n# #!/usr/bin/env python3\n# # -*- coding: utf-8 -*-\n#\n# import sys\n#\n# l = sys.argv\n# if len(l) == 3:\n# src_path = l[1]\n# dst_path = l[2]\n# copy(src_path, dst_path)\n# else:\n# print('参数错误!请输入文件的源地址和目标地址!')\n# sys.exit()\n# with open(r'%s' % src_path, 'rb') as f1, open(r'%s' % dst_path, 'wb') as f2:\n# for line in f1:\n# f2.write(line)\n\n# 2. 修改文件小程序\n# 第一种方式:\n# 第一步:先把文件内容全部读入内存;\n# 第二部:然后再内存中完成修改;\n# 第三部:再把修改后的结果,覆盖写入原文件;\n# 缺点:会在文件内容过大的情况下,占用很多内存\n\n# with open(r'user.txt', 'r') as f:\n# data = f.read()\n# data = data.replace('吴佩琪', '吴佩琪[老男孩老师]')\n# with open(r'user.txt', 'w') as f:\n# f.(data)\n\n# # 第二种方式:\n\n# import os\n#\n# with open(r'user.txt', 'rb') as f1, open(r'user.txt.swap', 'wb') as f2:\n# for line in f1:\n# if '吴佩琪' in line:\n# line = line.replace('吴佩琪', '吴佩琪[老男孩老师]')\n# f2.write(line)\n# os.remove('user.txt')\n# os.rename('user.txt.swap', 'user.txt')\n\n\n"
},
{
"alpha_fraction": 0.607038140296936,
"alphanum_fraction": 0.6085044145584106,
"avg_line_length": 28.521739959716797,
"blob_id": "0a62d94e233585c71bf03be4306448364f9bbf14",
"content_id": "45293560f81c2070226ecfdc32bb4790059ec42d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 682,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 23,
"path": "/project/elective_systems/version_v3/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport pickle\nfrom conf import settings\n\ndef save(obj):\n obj_path = os.path.join(settings.BASE_DB, obj.__class__.__name__)\n if not os.path.exists(obj_path):\n os.mkdir(obj_path)\n file_path = os.path.join(obj_path, obj.name)\n with open(r'%s' % file_path, 'wb') as f:\n pickle.dump(obj, f)\n\ndef select(name, obj_type):\n obj_path = os.path.join(settings.BASE_DB, obj_type.lower())\n if not os.path.exists(obj_path):\n os.mkdir(obj_path)\n file_path = os.path.join(obj_path, name)\n if not os.path.exists(file_path):\n return\n with open(r'%s' % file_path, 'rb') as f:\n return pickle.load(f)\n\n\n\n"
},
{
"alpha_fraction": 0.5875924229621887,
"alphanum_fraction": 0.5898424983024597,
"avg_line_length": 28,
"blob_id": "a9bf8f9b3f9029f408de62bddcc093f1e8c1965a",
"content_id": "7be00d35200b8df6d8657ed72c0957fcc60fac65",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3393,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 107,
"path": "/project/elective_systems/version_v9/core/teacher.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import teacher_api\n\n\nCURRENT_USER = None\nROLE = 'teacher'\n\ndef login():\n global CURRENT_USER\n common.show_green('登陆')\n if CURRENT_USER:\n common.show_red('用户不能重复登录!')\n return\n while True:\n name = common.input_string('用户名')\n if name == 'q': break\n password = common.input_string('密码')\n if password == 'q': break\n flag, msg = teacher_api.login(name, password)\n if not flag:\n common.show_red(msg)\n continue\n CURRENT_USER = name\n common.show_green(msg)\n return\n\[email protected]_auth(ROLE)\ndef check_teach_course():\n common.show_green('查看教授课程')\n teach_course = teacher_api.get_teach_course(CURRENT_USER)\n if not teach_course:\n common.show_red('老师教授课程列表为空!')\n return\n common.show_info(*teach_course)\n\[email protected]_auth(ROLE)\ndef check_teach_course_student():\n common.show_green('查看教授课程学生')\n teach_course = teacher_api.get_teach_course(CURRENT_USER)\n if not teach_course:\n common.show_red('老师教授课程列表为空!')\n return\n common.show_info(*teach_course)\n course_name = common.get_object_name(object_list=teach_course)\n teach_course_student = teacher_api.get_teach_course_student(course_name)\n if not teach_course_student:\n common.show_red('教授课程%s学生列表为空!' % course_name)\n return\n common.show_info(*teach_course_student)\n\[email protected]_auth(ROLE)\ndef choose_teach_course():\n common.show_green('选择教授课程')\n while True:\n teach_course = common.get_object_name(type_name='course')\n flag, msg = teacher_api.choose_teach_course(CURRENT_USER, teach_course)\n if not flag:\n common.show_red(msg)\n continue\n common.show_green(msg)\n return\n\[email protected]_auth(ROLE)\ndef set_student_score():\n common.show_green('修改学生成绩')\n while True:\n name = common.input_string('学生名字')\n if name == 'q': break\n course = common.input_string('学习课程')\n if course == 'q': break\n score = common.input_integer('课程成绩', is_float=True)\n if score == 'q': break\n flag, msg = teacher_api.set_student_score(CURRENT_USER, name, course, score)\n if not flag:\n common.show_red(msg)\n continue\n common.show_green(msg)\n return\n\ndef logout():\n global CURRENT_USER\n common.show_green('登出')\n common.show_red('用户%s登出!' % CURRENT_USER)\n CURRENT_USER = None\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [check_teach_course, '查看教授课程'],\n '3': [check_teach_course_student, '查看教授课程学生'],\n '4': [choose_teach_course, '选择教授课程'],\n '5': [set_student_score, '修改学生成绩']\n }\n while True:\n common.show_green('按\"q\"退出视图')\n common.show_menu(menu)\n choice = common.input_string('请选择操作编号')\n if choice == 'q':\n if CURRENT_USER:\n logout()\n return\n if choice not in menu:\n common.show_red('选择编号非法!')\n continue\n menu[choice][0]()\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.4555555582046509,
"alphanum_fraction": 0.48452380299568176,
"avg_line_length": 23.466018676757812,
"blob_id": "3b2d8720c017def42353d8cf7daabc1a598f69ed",
"content_id": "5d246322ec9a0dbf164dd4f94b7b0c10045c4f4c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2674,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 103,
"path": "/month3/week2/python_day5/python_day5.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 扩展:\n# enumerate() 函数\n# goods = 'hello'\n# goods = ['mac', 'apple', 'iphone', 'tesla']\n# goods = {'mac': 10000, 'apple': 200, 'iphone': 8000, 'tesla': 20000}\n#\n# for number, good in enumerate(goods):\n# print(number, good)\n\n# 字典类型\n# 1. 购物车小程序\n# msg_dic = {\n# 'apple': 10,\n# 'tesla': 100000,\n# 'mac': 3000,\n# 'lenovo': 30000,\n# 'chicken': 10\n# }\n# users = {\n# 'egon': {\n# 'password': '123',\n# 'goods': {}\n# }\n# }\n# line = '='*25\n# tag = True\n# while tag:\n# inp_name = input('name >>: ').strip()\n# inp_pwd = input('password >>: ')\n# if inp_name in users and inp_pwd == users[inp_name]['password']:\n# print('login successful!')\n# while tag:\n# print(line)\n# for k,v in msg_dic.items():\n# print('%-10s %-10s' % (k, v))\n# print(line)\n# good = input('please choose your good >>: ')\n# count = input('please choose your count >>: ')\n# if not count.isdigit():\n# print('count not valid')\n# continue\n# else:\n# count = int(count)\n# if good in msg_dic:\n# d[good] = count\n# if good not in users[inp_name]['goods']:\n# users[inp_name]['goods'][good] = count\n# else:\n# users[inp_name]['goods'][good] = users[inp_name]['goods'][good] + count\n# print('%s %s has joined the shopping cart: \\n%s' % (count, good, users[inp_name]['goods']))\n# else:\n# print('good not valid')\n# else:\n# print('name or password not valid')\n\n# 2.按索引取值,可取可存\n# dic = {'name': 'egon'}\n\n# 3.增加和修改\n# dic['age'] = 10\n# print(dic)\n# dic['name'] = 'EGON'\n# print(dic)\n# dic['name'] = dic['name'].upper()\n# print(dic)\n\n# 4.长度 len()\n# dic = {'name': 'egon', 'age': 18}\n# print(len(dic))\n\n# 5.删除\n# dic = {'name': 'egon', 'age': 18}\n# dic.pop('name')\n# dic.pop('name', None) # 如果不存在 key,则返回 None\n\n# 6.获取字典内的所有元素\n# 获取所有键 dict.keys() # 默认的是获取键\n# dic = {'name': 'egon', 'age': 18}\n# dict_keys = dic.keys()\n# print(dict_keys)\n\n# 获取所有值 dict.values()\n# dict_values = dic.values()\n# print(dict_values)\n\n# 获取所有键值对 dict.items()\n# dict_items = dic.items()\n# print(dict_items)\n\n# 7. get()\n# dic = {'name': 'egon', 'age': 18}\n# print(dic.get('name'))\n# print(dic.get('lala'))\n\n# l = ['name', 'age', 'sex']\n# print({}.fromkeys(l))\n\n# s1 = {1, 2, 3, 4, 5}\n# s2 = {1, 2, 3}\n\n# s1.isdisjoint()\n\n# print(set(l))\n"
},
{
"alpha_fraction": 0.4068181812763214,
"alphanum_fraction": 0.4204545319080353,
"avg_line_length": 18.130434036254883,
"blob_id": "94996786c60ac1ab8e72203aba6e9b03cc86c926",
"content_id": "b4b9455dd1b23234d23b7943185afb1c27645c44",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 956,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 46,
"path": "/month4/week4/python_day15/ATM/core/shop.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom conf import settings\nfrom lib import common\n\ndef register():\n print('注册 ...')\n\ndef login():\n print('登陆 ...')\n with open(settings.DB_FILE, encoding='utf-8') as f:\n for line in f:\n print(line.strip('\\n'))\n\ndef shopping():\n print('购物 ...')\n\ndef pay():\n print('支付...')\n\ndef transfer():\n print('转账 ...')\n common.logger('转账啦!...')\n\ndef run():\n while True:\n print('''\n 1 注册\n 2 登陆\n 3 购物\n 4 支付\n 5 转账\n ''')\n action = input('请输入操作编码 >>: ').strip()\n if action == '1':\n register()\n elif action == '2':\n login()\n elif action == '3':\n shopping()\n elif action == '4':\n pay()\n elif action == '5':\n transfer()\n else:\n print('操作编码非法!')\n"
},
{
"alpha_fraction": 0.4761904776096344,
"alphanum_fraction": 0.561904788017273,
"avg_line_length": 12,
"blob_id": "8e063d75847ff1829745def4e03b071913d244a6",
"content_id": "0c6d4444d80c39ca6029e83476c4cc30119c93da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 105,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 8,
"path": "/month4/week6/python_day22/settings.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\n\nHOST = '1.1.1.1'\nPORT = 3306\n\nDB_PATH = os.path.dirname(__file__)\n\n"
},
{
"alpha_fraction": 0.6033287048339844,
"alphanum_fraction": 0.6047156453132629,
"avg_line_length": 27.84000015258789,
"blob_id": "6f776eea71bef64006e1a3ee76784101c2c672ef",
"content_id": "99770f61a7fff4c09c53de5cda620c7d329ae339",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 721,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 25,
"path": "/project/elective_systems/version_v5/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#-*- encoding: utf-8 -*-\n\nimport os\nimport pickle\nfrom conf import settings\n\ndef select(name, type_name):\n type_path = os.path.join(settings.BASE_DB, type_name)\n if not os.path.exists(type_path):\n os.mkdir(type_path)\n obj_path = os.path.join(type_path, name)\n if not os.path.exists(obj_path):\n return\n with open(r'%s' % obj_path, 'rb') as f:\n return pickle.load(f)\n\ndef save(obj):\n type_path = os.path.join(settings.BASE_DB, obj.__class__.__name__.lower())\n if not os.path.exists(type_path):\n os.mkdir(type_path)\n obj_path = os.path.join(type_path, obj.name)\n with open(r'%s' % obj_path, 'wb') as f:\n pickle.dump(obj, f)\n f.flush()\n return True\n"
},
{
"alpha_fraction": 0.44367480278015137,
"alphanum_fraction": 0.5289828777313232,
"avg_line_length": 22.24576187133789,
"blob_id": "4526320f20e5aabc805cbe62d4079d3bda57168a",
"content_id": "4458d2e95014508a4a2e1f5dc87272bf746b893b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6024,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 236,
"path": "/month5/week9/python_day38/python_day38_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1 创建用户表 id,username,password\n# id为自增且唯一约束\n# username和password为主键\ncreate table user(\n id int unique not null auto_increment,\n username char(20) not null,\n password char(20) not null,\n group_id int,\n constraint pk_user primary key(username, password)\n);\n\ncreate table user(\n id int unique not null auto_increment,\n username char(20) not null,\n password char(20) not null,\n primary key(username, password)\n);\n\n# 2 插入三条数据,root ,123\n# egon,456\n# lqz,678\n\ninsert into user(username, password)\nvalues\n ('root', '123'),\n ('egon', '456'),\n ('lqz', '678');\n\n# 3 创建用户组表: id,groupname\n# id为主键自增\n# groupname为唯一约束不为空\ncreate table group(\n id int primary key auto_increment,\n group_name char(20) unique not null\n);\n\n# 4 插入数据:IT部门\n# 销售部门\n# 财务部门\n# 总经理\ninsert into groups(group_name) values\n ('IT部门'),\n ('销售部门'),\n ('财务部门'),\n ('总经理');\n\n# 5 创建主机表:id,ip\n# id自增,主键\n# ip唯一约束不为空,默认为127.0.0.1\ncreate table host(\n id int primary key auto_increment,\n ip char(20) unique not null default '127.0.0.1'\n);\n\n# 6 插入数据:('172.16.45.2'),\n# ('172.16.31.10'),\n# ('172.16.45.3'),\n# ('172.16.31.11'),\n# ('172.10.45.3'),\n# ('172.10.45.4'),\n# ('172.10.45.5'),\n# ('192.168.1.20'),\n# ('192.168.1.21'),\n# ('192.168.1.22'),\n# ('192.168.2.23'),\n# ('192.168.2.223'),\n# ('192.168.2.24'),\n# ('192.168.3.22'),\n# ('192.168.3.23'),\n# ('192.168.3.24')\ninsert into host(ip)\nvalues\n ('172.16.45.2'),\n ('172.16.31.10'),\n ('172.16.45.3'),\n ('172.16.31.11'),\n ('172.10.45.3'),\n ('172.10.45.4'),\n ('172.10.45.5'),\n ('192.168.1.20'),\n ('192.168.1.21'),\n ('192.168.1.22'),\n ('192.168.2.23'),\n ('192.168.2.223'),\n ('192.168.2.24'),\n ('192.168.3.22'),\n ('192.168.3.23'),\n ('192.168.3.24');\n\n# 7 创建业务线表: id,businesss\n# id主键自增\n# business不为空,唯一约束\ncreate table business(\n id int primary key auto_increment,\n business char(120) unique not null\n);\n\n# 8 插入数据:('轻松贷'),\n# ('随便花'),\n# ('大富翁'),\n# ('穷一生')\ninsert into business(business)\nvalues\n ('轻松贷'),\n ('随便花'),\n ('大富翁'),\n ('穷一生');\n\n# 9 建关系:user与usergroup\n# (自行关联,外键约束)\ncreate table user_groups(\n id int not null unique auto_increment,\n user_id int not null,\n group_id int not null,\n constraint pk_user primary key(user_id, group_id),\n constraint fk_user_id foreign key(user_id) references user(id),\n constraint fk_group_id foreign key(group_id) references groups(id)\n);\n\n# 10 插入数据: (1,1),\n# (1,2),\n# (1,3),\n# (1,4),\n# (2,3),\n# (2,4),\n# (3,4)\ninsert into user_groups(user_id, group_id)\nvalues\n (1,1),\n (1,2),\n (1,3),\n (1,4),\n (2,3),\n (2,4),\n (3,4);\n\n# 11 建关系:host与business\n# (自行关联,外键约束)\ncreate table host_business(\n id int unique not null auto_increment,\n host_id int,\n business_id int,\n constraint pk_host primary key(host_id, business_id),\n constraint fk_host_id foreign key(host_id) references host(id),\n constraint fk_business_id foreign key(business_id) references business(id)\n);\n\n# 12 插入数据: (1,1),\n# (1,2),\n# (1,3),\n# (2,2),\n# (2,3),\n# (3,4)\ninsert into host_business(host_id, business_id)\nvalues\n (1,1),\n (1,2),\n (1,3),\n (2,2),\n (2,3),\n (3,4);\n\n# 13 建关系:user与host\n# (自行关联,外键约束)\ncreate table user_host(\n id int primary key auto_increment,\n user_id int,\n host_id int,\n foreign key(user_id) references user(id) on delete cascade on update cascade,\n foreign key(host_id) references host(id) on delete cascade on update cascade\n);\n\n# 14 插入数据: (1,1),\n# (1,4),\n# (1,15),\n# (1,16),\n# (2,2),\n# (2,3),\n# (2,4),\n# (2,5),\n# (3,10),\n# (3,11),\n# (3,12)\ninsert into user_host(user_id, host_id)\nvalues\n (1,1),\n (1,4),\n (1,15),\n (1,16),\n (2,2),\n (2,3),\n (2,4),\n (2,5),\n (3,10),\n (3,11),\n (3,12);\n\n# 15 创建班级表:cid,caption\n# 学生表:sid,sname,gender,class_id\n# 老师表:tid,tname\n# 课程表:cid,cname,teacher_id\n# 成绩表:sid,student_id,course_id,number\n# (相关关联关系要创建好,插入几条测试数据)\ncreate table class(\n cid int primary key auto_increment,\n caption char(20) not null\n);\n\ncreate table student(\n sid int primary key auto_increment,\n sname char(20) not null,\n\tgender char(10) not null,\n\tclass_id int,\n\tforeign key(class_id) references class(cid) on delete cascade on update cascade\n);\n\ncreate table teacher(\n tid int primary key auto_increment,\n tname char(20) not null\n);\n\ncreate table course(\n cid int primary key auto_increment,\n cname char(20) not null,\n\tteacher_id int,\n\tforeign key(teacher_id) references teacher(tid) on delete cascade on update cascade\n);\n\ncreate table score(\n sid int primary key auto_increment,\n student_id int,\n\tcourse_id int,\n\tnumber float(5,2),\n\tforeign key(student_id) references student(sid) on delete cascade on update cascade,\n\tforeign key(course_id) references course(cid) on delete cascade on update cascade\n);\n"
},
{
"alpha_fraction": 0.5375191569328308,
"alphanum_fraction": 0.5773353576660156,
"avg_line_length": 20,
"blob_id": "95bad6729c90dbbc64d903a0656455dfa0f35512",
"content_id": "f99ae8f0cd4a322b6d34e4c20a58131601567883",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 677,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 31,
"path": "/month4/week5/python_day16/python_day16.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\n# import random\n# print(random.random())\n\n# 1. 随机生成验证码\nimport random\ndef make_code(n):\n res = ''\n for i in range(n):\n s1 = chr(random.randint(65, 90))\n s2 = str(random.randint(0, 9))\n res += random.choice([s1, s2])\n return res\n\ncode = make_code(9)\nprint(code)\n\n# 2. 打印进度条\ndef progress(percent, width=50):\n show = ('[%%-%ds]' % width) % (int(width*percent) * '#')\n print('%s %d%%' % (show, int(100*percent)), end='\\r')\n\nimport time\nrecv_size = 0\ntotal_size = 100\nwhile recv_size < total_size:\n time.sleep(0.1)\n recv_size += 1\n percent = recv_size/total_size\n progress(percent)\n\n\n"
},
{
"alpha_fraction": 0.4628252685070038,
"alphanum_fraction": 0.51938396692276,
"avg_line_length": 26.297101974487305,
"blob_id": "76250e5ac459f3b185115c9e6d111d2fabcb06ed",
"content_id": "edcdbdba7fecc42eed28d9c83c8c2226643d5efe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4058,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 138,
"path": "/project/elective_systems/version_v8/core/student.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import common_api, student_api\n\nCURRENT_USER = None\nROLE = 'student'\n\ndef login():\n global CURRENT_USER\n print('\\033[32m登陆\\033[0m')\n if CURRENT_USER:\n print('\\033[31m不能重复登录!\\033[0m')\n return\n while True:\n name = common.input_string('登陆用户名')\n if name == 'q':\n break\n password = common.input_string('登陆密码')\n if password == 'q':\n break\n flag, msg = common_api.login(name, password, ROLE)\n if flag:\n CURRENT_USER = name\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef logout():\n global CURRENT_USER\n print('\\033[31mGoodbye, %s!\\033[0m' % CURRENT_USER)\n CURRENT_USER = None\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while True:\n name = common.input_string('注册用户名')\n if name == 'q':\n break\n password = common.input_string('注册密码')\n if password == 'q':\n break\n password2 = common.input_string('确认密码')\n if password2 == 'q':\n break\n if password != password2:\n print('\\033[31m两次输入密码不一致!\\033[0m')\n continue\n flag, msg = student_api.register(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef check_score():\n print('\\033[32m查看成绩\\033[0m')\n score = student_api.get_score(CURRENT_USER)\n if not score:\n print('\\033[31m学生%s没有成绩信息!\\033[0m' % CURRENT_USER)\n return\n print('-' * 30)\n for k,v in score.items():\n print('课程:%s 成绩:%s' % (k, v))\n print('-' * 30)\n\[email protected](ROLE)\ndef check_learn_course(show=True):\n if show:\n print('\\033[32m查看个人课程\\033[0m')\n course = student_api.get_learn_course(CURRENT_USER)\n if not course:\n print('\\033[31m学生%s没有课程信息!\\033[0m' % CURRENT_USER)\n return\n print('-' * 30)\n for k, name in enumerate(course):\n print('%-4s %-10s' % (k, name))\n print('-' * 30)\n return course\n\[email protected](ROLE)\ndef check_course(show=True):\n if show:\n print('\\033[32m查看所有课程\\033[0m')\n course_list = common.get_object_list('course')\n if not course_list:\n print('\\033[31m课程列表为空!\\033[0m')\n return\n print('-' * 30)\n for k, v in enumerate(course_list):\n print('%s %s' % (k, v))\n return course_list\n\[email protected](ROLE)\ndef choose_course():\n print('\\033[32m选择课程\\033[0m')\n while True:\n course = check_course(False)\n if not course:\n return\n choice = common.input_integer('请选择课程编号')\n if choice == 'q':\n break\n if choice < 0 or choice > len(course):\n print('\\033[31m课程编号非法!\\033[0m')\n continue\n flag, msg = student_api.choose_course(CURRENT_USER, course[choice])\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [check_course, '查看所有课程'],\n '4': [check_learn_course, '查看个人课程'],\n '5': [choose_course, '选择课程'],\n '6': [check_score, '查看成绩']\n }\n while True:\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = common.input_string('请选择操作编号')\n if choice == 'q':\n if CURRENT_USER:\n logout()\n return\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()"
},
{
"alpha_fraction": 0.4162279963493347,
"alphanum_fraction": 0.4255763292312622,
"avg_line_length": 31.691394805908203,
"blob_id": "61801a504ffdb61b681907d9b2b46525c531d294",
"content_id": "fd0bda6de8c540f3339453a0872133eaec2d59e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12254,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 337,
"path": "/month3/week3/python_day8/python_day8_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 一、函数版购物车程序\n#!/usr/bin/env python3\n# -*- encoding: utf-8 -*-\n\nclass Shopping():\n def __init__(self):\n self.single = '-'*50\n self.double = '='*50\n self.config = 'db.txt'\n self.tag = True\n self.login = None\n self.users = self.get_config()\n self.goods = {\n '1': {\n 'name': 'mac',\n 'price': 20000\n },\n '2': {\n 'name': 'lenovo',\n 'price': 10000\n },\n '3': {\n 'name': 'apple',\n 'price': 200\n },\n '4': {\n 'name': 'tesla',\n 'price': 1000000\n }\n }\n\n def get_config(self):\n users = {}\n with open(r'%s' % self.config, 'a') as f:\n pass\n with open(r'%s' % self.config) as f:\n for u in f:\n if u:\n u = u.strip('\\n').split('|')\n name, pwd, phone, sex, age = u\n user = {name: {'password': pwd, 'phone': phone, 'sex': sex, 'age': age, 'money': 0, 'goods': {}}}\n users.update(user)\n return users\n\n def update_config(self, user):\n with open(r'%s' % self.config, 'a') as f:\n f.write('%s|%s|%s|%s|%s\\n' % user)\n\n def check_phone(self, phone):\n for k in self.users.values():\n if phone == k['phone']:\n return True\n\n def interactive(self, words, number=None, password=None):\n s = input('%s >> : ' % words)\n if not password:\n s = s.strip()\n if s == 'quit' or s == 'n':\n self.tag = False\n if number:\n if s.isdigit():\n return int(s)\n else:\n print('请输入整数!')\n return\n return s\n\n def auth(self, name, password):\n if name not in self.users:\n print('用户名不存在!')\n return\n if password != self.users[name]['password']:\n print('密码错误!')\n return\n if name in self.users and password == self.users[name]['password']:\n print('登陆成功!')\n return name\n\n def main(self):\n while self.tag:\n print(self.double)\n print('1 注册用户 \\n2 登陆购物')\n print(self.double)\n action = self.interactive('请选择操作')\n if not self.tag:\n continue\n if action == '1':\n while self.tag:\n phone = self.interactive('手机')\n if not self.tag:\n continue\n if self.check_phone(phone):\n print('该手机号已注册!')\n continue\n username = self.interactive('用户名')\n if not self.tag:\n continue\n 
password = self.interactive('密码', password=True)\n if not self.tag:\n continue\n sex = self.interactive('性别')\n if not self.tag:\n continue\n age = self.interactive('年龄')\n if not self.tag:\n continue\n user = (username, password, phone, sex, age)\n self.update_config(user)\n print('用户 %s 注册成功!' % name)\n break\n elif action == '2':\n i = 0\n while self.tag:\n name = self.interactive('用户名')\n if not self.tag:\n continue\n password = self.interactive('密码', password=True)\n if not self.tag:\n continue\n self.login = self.auth(name, password)\n if not self.login:\n i += 1\n if i == 3:\n print('尝试次数过多!')\n self.tag = False\n continue\n while self.tag:\n salary = self.interactive('请输入工资', number=True)\n if not self.tag:\n continue\n if salary:\n self.users[name]['money'] = salary\n break\n while self.tag:\n print(self.single)\n for k, v in self.goods.items():\n print('商品编号:%-6s 商品名称:%-10s 商品价格:%-10s' % (k, v['name'], v['price']))\n print(self.single)\n while self.tag:\n code = self.interactive('请选择要购买的商品编号')\n if not self.tag:\n continue\n if code not in self.goods:\n print('输入商品编号非法!')\n continue\n good = self.goods[code]['name']\n price = self.goods[code]['price']\n break\n while self.tag:\n count = self.interactive('请选择要购买的商品数量', number=True)\n if not self.tag:\n continue\n if count:\n break\n if self.users[name]['money'] >= (price * count):\n self.users[name]['money'] -= (price * count)\n print('商品 %s x %s 已加入购物车!' % (good, count))\n if good not in self.users[name]['goods']:\n self.users[name]['goods'][good] = count\n else:\n self.users[name]['goods'][good] += count\n print('已购商品:%s 账户余额: %s' % (self.users[name]['goods'], self.users[name]['money']))\n else:\n print('\\033[31m账户余额不足!\\033[0m')\n while self.tag:\n cmd = self.interactive('是否继续购物? 
y/n')\n if not self.tag:\n print('用户名: %s\\n购买商品: %s\\n账户余额: %s' % (name, self.users[name]['goods'], self.users[name]['money']))\n continue\n if cmd == 'y':\n break\n else:\n print('输入操作编码无效!')\n print(self.double)\n\nif __name__ == '__main__':\n Shopping().main()\n\n\n\n\n\n#\n#\n# # 二、函数练习\n# # 1、写函数,,用户传入修改的文件名,与要修改的内容,执行函数,完成批了修改操作\n# import os\n#\n# def modify_file(name, src_str, dst_str):\n# name_tmp = '%s.swap' % name\n# src_str = src_str.encode('utf-8')\n# dst_str = dst_str.encode('utf-8')\n# with open(r'%s' % name, 'rb') as f1, \\\n# open(r'%s' % name_tmp, 'wb') as f2:\n# for line in f1:\n# if src_str in line:\n# line = line.replace(src_str, dst_str)\n# f2.write(line)\n# os.remove(name)\n# os.rename(name_tmp, name)\n#\n# # 2、写函数,计算传入字符串中【数字】、【字母】、【空格] 以及 【其他】的个数\n# def data_count(str):\n# count = {\n# 'string': 0,\n# 'number': 0,\n# 'space': 0,\n# 'other': 0\n# }\n# for s in str:\n# if s.isalpha():\n# count['string'] += 1\n# elif s.isdigit():\n# count['number'] += 1\n# elif s.isspace():\n# count['space'] += 1\n# else:\n# count['other'] += 1\n# return count\n#\n# # 3、写函数,判断用户传入的对象(字符串、列表、元组)长度是否大于5。\n# def check_lenth(data):\n# if isinstance(data, int):\n# l = 1\n# else:\n# l = len(data)\n# if l > 5:\n# print('data: %s 长度大于5' % data)\n# else:\n# print('data: %s 长度不大于5' % data)\n#\n# # 4、写函数,检查传入列表的长度,如果大于2,那么仅保留前两个长度的内容,并将新内容返回给调用者。\n# def truncate_list(inp_list):\n# if len(inp_list) > 2:\n# inp_list = inp_list[:2]\n# return inp_list\n#\n# # 5、写函数,检查获取传入列表或元组对象的所有奇数位索引对应的元素,并将其作为新列表返回给调用者。\n#\n# def create_list(inp_list):\n# if len(inp_list) > 1:\n# inp_list = inp_list[1:-1:2]\n# return inp_list\n# else:\n# print('列表没有奇数位元素!')\n#\n# # 6、写函数,检查字典的每一个value的长度, 如果大于2,那么仅保留前两个长度的内容,并将新内容返回给调用者。\n# dic = {\"k1\": \"v1v1\", \"k2\": [11, 22, 33, 44]}\n# # PS: 字典中的value只能是字符串或列表\n# def modify_dict(dic):\n# for k,v in dic.items():\n# if len(v) > 2:\n# dic[k] = v[:2]\n# return dic\n#\n# # 7、编写认证功能函数,注意:后台存储的用户名密码来自于文件\n# # 假设账户密码存储方式是 用户名|密码|phone|sex|age\\n\n#\n# 
config='db.txt'\n#\n# def get_config(config):\n# users = {}\n# with open(r'%s' % config) as f:\n# for u in f:\n# u = u.split('|').strip('\\n')\n# name, pwd, phone, sex, age = u\n# user = {name: {'password': pwd, 'phone': phone, 'sex': sex, 'age': age}}\n# users.update(user)\n# return users\n#\n# def auth(username, password):\n# users = get_config(config)\n# if username not in users:\n# print('用户名不存在!')\n# return\n# if username in users and password != users[username]:\n# print('密码错误!')\n# return\n# if username in users and password == users[username]:\n# print('登陆成功!')\n# return True\n#\n# auth(username, password)\n#\n# # 8、编写注册功能函数,将用户的信息储存到文件中\n# config = 'db.txt'\n#\n# def interactive():\n# name = input('username >>: ').strip()\n# password = input('password >>: ')\n# phone = input('phone >>: ').strip()\n# sex = input('sex >>: ').strip()\n# age = input('age >>: ').strip()\n# return name, password, phone, sex, age\n#\n# def register():\n# name, password, phone, sex, age = interactive()\n# user = '%s|%s\\n' % (name, password, phone, sex, age)\n# with open(r'%s' % config, 'wb') as f:\n# f.write(user.encode('utf-8'))\n#\n# register()\n#\n# # 9、编写查看用户信息的函数,用户的信息是事先存放于文件中的\n# # 假设账户密码存储方式是 用户名|密码|phone|sex|age\\n\n# config = 'db.txt'\n#\n# def get_config(config):\n# users = {}\n# with open(r'%s' % config) as f:\n# for u in f:\n# u = u.split('|').strip('\\n')\n# name, pwd, phone, sex, age = u\n# user = {name: {'password': pwd, 'phone': phone, 'sex': sex, 'age': age}}\n# users.update(user)\n# return users\n#\n# def auth(username, password):\n# users = get_config(config)\n# if username not in users:\n# print('用户名不存在!')\n# if username in users and password != users[username]:\n# print('密码错误!')\n# if username in users and password == users[username]:\n# print('登陆成功!')\n# return True\n#\n# def get_user_info(name, password):\n# users = get_config(config)\n# if auth(username, password) == 'successful':\n# for k in users[username]:\n# phone = k['phone']\n# sex = k['sex']\n# age 
= k['age']\n# print('用户名:%s phone:%s sex: %s age: %s' % (username, phone, sex, age))\n#\n# get_user_info(name, password)\n\n"
},
{
"alpha_fraction": 0.5176189541816711,
"alphanum_fraction": 0.5219988226890564,
"avg_line_length": 28.54705810546875,
"blob_id": "a4c8b2fc83bd408f81f0026a821ac5851d9db090",
"content_id": "1d3779704857d548997998b69af69e3b6fd21c31",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5619,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 170,
"path": "/project/elective_systems/version_v1/core/manager.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom interface import education\n\nCURRENT_USER = None\n\ndef register():\n print('注册')\n while True:\n name = input('用户名 >>: ').strip()\n admins = education.Manager.get_object('admin')\n if name in admins:\n print('管理员%s已经注册!' % name)\n return\n password = input('密码 >>: ')\n password2 = input('确认密码 >>: ')\n if password != password2:\n print('两次密码输入不一致!')\n continue\n obj = education.Manager(name, password)\n education.Manager.update_object('admin', '管理员', obj)\n print('管理员%s注册成功!' % name)\n return\n\ndef login():\n global CURRENT_USER\n while True:\n name = input('用户名 >>: ').strip()\n admins = education.Manager.get_object('admin')\n if name not in admins:\n print('管理员%s未注册!' % name)\n return\n password = input('密码 >>: ')\n if password != admins[name].password:\n print('密码错误!')\n continue\n CURRENT_USER = name\n print('管理员%s登陆成功!' % name)\n return\n\ndef create_teacher():\n print('创建老师')\n while True:\n name = input('用户名 >>: ').strip()\n password = input('密码 >>: ')\n password2 = input('确认密码 >>: ')\n if password != password2:\n print('两次密码输入不一致!')\n continue\n obj = education.Teacher(name, password)\n schools = education.Manager.get_object('schools')\n while True:\n for name in schools:\n print('学校: %s' % name)\n choice = input('请为老师选择学校 >>: ').strip()\n if choice not in schools:\n print('学校不存在!')\n continue\n obj.school = schools[choice]\n break\n education.Manager.update_object('teachers', '老师', obj)\n return\n\ndef create_course():\n print('创建课程')\n name = input('课程名称 >>: ').strip()\n price = input('课程价格 >>: ').strip()\n cycle = input('课程周期 >>: ').strip()\n obj = education.Course(name, price, cycle)\n education.Manager.update_object('course', '课程', obj)\n\ndef create_classes():\n print('创建班级')\n name = input('班级名称 >>: ').strip()\n obj = education.Classes(name)\n schools = education.Manager.get_object('schools')\n while True:\n for name in schools:\n print('学校:%s' % name)\n choice = input('请为班级选择学校 >>: ').strip()\n if choice 
not in schools:\n print('学校不存在!')\n continue\n obj.school = schools[choice]\n break\n course = education.Manager.get_object('course')\n while True:\n for name in course:\n print('课程:%s' % name)\n choice = input('请为班级选择课程 >>: ').strip()\n if choice not in course:\n print('课程不存在!')\n continue\n obj.course = course[choice]\n break\n teachers = education.Manager.get_object('teachers')\n while True:\n for name in teachers:\n print('老师:%s' % name)\n choice = input('请为班级选择老师 >>: ').strip()\n if choice not in teachers:\n print('老师不存在!')\n continue\n obj.teacher = teachers[choice]\n teachers[choice].classes = name\n education.Manager.update_object('teachers', '老师', teachers[choice])\n break\n education.Manager.update_object('classes', '班级', obj)\n return\n\ndef check_object(type_name, desc):\n print('查看%s' % desc)\n objects = education.Manager.get_object(type_name)\n if not objects:\n print('目前还没有%s!' % desc)\n return\n for name in objects:\n print('%s: %s' % (desc, name))\n\ndef check_school():\n check_object('schools', '学校')\n\ndef check_course():\n check_object('course', '课程')\n\ndef check_teacher():\n check_object('teachers', '老师')\n\ndef check_classes():\n check_object('classes', '班级')\n\ndef create_school():\n print('创建学校')\n name = input('学校名称 >>: ').strip()\n address = input('地址 >>: ').strip()\n obj = education.School(name, address)\n education.Manager.update_object('schools', '学校', obj)\n\ndef initialize_db():\n confirm = input('确定要初始化数据库吗?(y/n)').strip()\n if confirm == 'y':\n education.Manager.initialize_db()\n print('管理%s初始化数据库成功!' % CURRENT_USER)\n else:\n print('管理%s已取消初始化数据库!' 
% CURRENT_USER)\n\ndef run():\n while True:\n menu = {\n '0': [login, '登陆'],\n '1': [register, '注册'],\n '2': [check_school, '查看学校'],\n '3': [check_course, '课程'],\n '4': [check_teacher, '老师'],\n '5': [check_classes, '班级'],\n '6': [create_school, '创建学校'],\n '7': [create_course, '创建课程'],\n '8': [create_teacher, '创建老师'],\n '9': [create_classes, '创建班级'],\n '10': [initialize_db, '初始化数据库'],\n }\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n choice = input('请选择操作编号 >>: ').strip()\n if choice == 'quit':\n break\n if choice not in menu:\n print('选择编号非法!')\n continue\n menu[choice][0]()\n"
},
{
"alpha_fraction": 0.6024590134620667,
"alphanum_fraction": 0.6475409865379333,
"avg_line_length": 26.11111068725586,
"blob_id": "73307650d47601a3192f7f6ea3a8960941c93f20",
"content_id": "5e9dd1efb3829a613c0c7b9eca3cb8e2395a3a0a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 244,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 9,
"path": "/project/youku/version_v1/youkuClient/conf/settings.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\n\nserver_address = ('127.0.0.1', 8080)\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\ndownload_path = os.path.join(BASE_DIR, 'data', 'download')\nupload_path = os.path.join(BASE_DIR, 'data', 'upload')\n"
},
{
"alpha_fraction": 0.45940104126930237,
"alphanum_fraction": 0.5140475630760193,
"avg_line_length": 27.910715103149414,
"blob_id": "e8a8e79a968a1ea96b2ecff82baa1f6e38d4d112",
"content_id": "9bcd4d028f6fa99fb7f4d31740f6a1ae29cb9401",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3519,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 112,
"path": "/project/elective_systems/version_v7/core/teacher.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import teacher_api\n\nCURRENT_USER = None\nROLE = 'teacher'\n\ndef login():\n global CURRENT_USER\n print('\\033[32m登陆\\033[0m')\n while True:\n name = input('请输入登陆用户名 >>: ').strip()\n if name == 'q': break\n password = input('请输入登陆密码 >>: ').strip()\n flag, msg = teacher_api.login(name, password)\n if flag:\n CURRENT_USER = name\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef check_course():\n print('\\033[32m查看教授课程\\033[0m')\n teacher = teacher_api.get_teacher_info(CURRENT_USER)\n print('-' * 30)\n if not teacher.course:\n print('\\033[31m教授课程列表为空!\\033[0m')\n return\n for k, name in enumerate(teacher.course):\n print('%s %s' % (k, teacher_api.get_course_info(CURRENT_USER, name)))\n print('-' * 30)\n return teacher.course\n\n\n\[email protected](ROLE)\ndef check_student():\n print('\\033[32m创建课程\\033[0m')\n while True:\n course = check_course()\n name = input('请输入课程名字 >>: ').strip()\n if name == 'q': break\n price = input('请输入课程价格 >>: ').strip()\n cycle = input('请输入课程周期 >>: ').strip()\n flag, msg = teacher_api.create_course(CURRENT_USER, name, price, cycle, school_name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef choose_course():\n print('\\033[32m选择教授课程\\033[0m')\n while True:\n course = common.get_object_list('course')\n print('-' * 30)\n for k,v in enumerate(course):\n print('%s %s' % (k, v))\n print('-' * 30)\n choice = input('请选择教授课程编号 >>: ').strip()\n if choice == 'q':\n return choice\n if not choice.isdigit():\n print('\\033[31m课程编号必须是数字!\\033[0m')\n continue\n choice = int(choice)\n if choice < 0 or choice > len(course):\n print('\\033[31m课程编号非法!\\033[0m')\n continue\n flag, msg = teacher_api.choose_course(CURRENT_USER, course[choice])\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n 
print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef modify_score():\n print('\\033[32m创建老师\\033[0m')\n while True:\n name = input('请输入老师名字 >>: ').strip()\n if name == 'q': break\n flag, msg = admin_api.create_teacher(CURRENT_USER, name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef run():\n menu = {\n '1': [login, '查看'],\n '2': [check_course, '查看教授课程'],\n '3': [check_student, '查看学员列表'],\n '4': [choose_course, '选择教授课程'],\n '5': [modify_score, '修改学生成绩']\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择操作编号 >>: ').strip()\n if choice == 'q': break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n\n"
},
{
"alpha_fraction": 0.4788452088832855,
"alphanum_fraction": 0.549527108669281,
"avg_line_length": 16.478260040283203,
"blob_id": "ab982e6e11ba350b846cc3e979c80427a4c54142",
"content_id": "bef3123dd264b44c4a4e5b1f073fa58d68995f32",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2363,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 115,
"path": "/month3/week2/python_day3/python_day3.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 赋值方式:\n# 1.1 链式赋值\n# x = 1\n# y = 1\n# y = x = a = b = c = 1\n# print(id(y), id(c))\n\n# 1.2 交叉赋值\n# m = 1\n# n = 2\n# tmp = 1\n# m = n\n# n = tmp\n# print(m, n)\n\n# m, n = n, m\n# print(m, n)\n\n# 1.3 变量的解压\nsalarys = [11, 12, 13, 14, 15]\n\n# mon1_sal = salarys[0]\n# mon2_sal = salarys[1]\n# mon3_sal = salarys[2]\n# mon4_sal = salarys[3]\n# mon5_sal = salarys[4]\n# print(mon1_sal, mon2_sal, mon3_sal, mon4_sal, mon5_sal)\n\n# mon1_sal, mon2_sal, mon3_sal, mon4_sal, mon5_sal = salarys\n# print(mon1_sal, mon2_sal, mon3_sal, mon4_sal, mon5_sal)\n\n# mon1_sal, *_, mon5_sal = salarys\n# print(mon1_sal, mon5_sal)\n\n# *_, mon4_sal, mon5_sal = salarys\n# print(mon4_sal, mon5_sal)\n\n# count = 1\n# while count < 6:\n# if count == 4:\n# break\n# print(count)\n# count += 1\n# # break\n# else:\n# print('Loop 已经完整运行完, 中间没有被 break 中断的情况下,else 部分的代码才有作用')\n\n# int 整型\n# 进制转换\n# print(bin(2))\n# print(oct(8))\n# print(10)\n# print(hex(16))\n\n\n# str 字符串\n# 1. 按索引取值(只能取不能写)\n# s = 'egon'\n# x = s[0]\n# print(x)\n# x = s[-1]\n# print(x)\n# 2. 切片(取首不取尾,步长)\n# msg = 'Alex say my name is alex.'\n# 正向切片\n# print(msg[0:6])\n# print(msg[0:6:2])\n# 反向切片\n# print(msg[-1:-5:-1])\n# 3. 字符串长度 len()\n# name = 'egon'\n# print(len(name))\n# 4. 成员运算 in 和 not in\n# name = 'egon'\n# print('e' in name)\n\n# 5. 移除空白 strip()\n# name = ' egon '\n# print(name.strip())\n# name = '***egon***'\n# print(name.strip('*'))\n\n# 6. 拆分字符串 split()\n# info = 'egon:123:admin'\n# info = info.split(':')\n# print(info)\n\n# 7. 
字符串循环取值\n# msg = '123456'\n# i = 0\n# while i < len(msg):\n# print(msg[i])\n# i += 1\n\n# for m in msg:\n# print(m)\n\n# 函数的作用是 从十进制转为八进制\nprint('八进制:%s --> 十进制:%s' % ('20', oct(20)))\n# 24(八进制) --> 20(十进制)\n# 2x8^1 + 4x8^0 = 20\n\n# 函数的作用是 从十进制转为八进制\n# print('八进制:%s' % oct(16))\n# 20(八进制) --> 16(十进制)\n# 2x8^1 + 0x8^0 = 16\n\nprint('十进制:%s --> 八进制:%s' % ('24', int('24', 8)))\n# print(int('20', 8))\n\n\nmsg1='alex say my name is alex,my age is 73,my sex is female'\nmsg2='alex say my name is alex,my age is 73,my sex is female'\nprint(msg1 is msg2)\nprint(msg1 == msg2)"
},
{
"alpha_fraction": 0.5963060855865479,
"alphanum_fraction": 0.5989446043968201,
"avg_line_length": 24.200000762939453,
"blob_id": "83dd0160d8cdd5644cb937fcdc9679d868a15f62",
"content_id": "3b13a8166afaca07297496c05f7f49c9a4d7b56d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 379,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 15,
"path": "/project/shooping_mall/version_v3/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport json\nfrom conf import settings\n\ndef read(name):\n if os.path.exists(settings.USER_FILE % name):\n with open(r'%s' % settings.USER_FILE % name) as f:\n return json.load(f)\n\ndef write(user_dic):\n with open(r'%s' % settings.USER_FILE % user_dic['name'], 'w') as f:\n json.dump(user_dic, f)\n return True\n\n"
},
{
"alpha_fraction": 0.5714285969734192,
"alphanum_fraction": 0.6208791136741638,
"avg_line_length": 17.22222137451172,
"blob_id": "b373c7f9276b1494503d5cc5cbfdc150afc402f0",
"content_id": "7affe30df397d08fa5eae9980336542881ea60b3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 408,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 9,
"path": "/month4/week7/python_day29/python_day29_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月29:\n# 1、基与多进程实现并发的套接字通信,完成如下功能:\n# 1、客户端链接功成功后,先登录,登录成功后才可以执行其他功能\n# 2、登录成功后可以执行下载功能\n# 3、登录成功后可以执行上传功能\n#\n# 2、ATM+购物车作业今晚默写\n#\n# 3、明早默写:生产者消费者模型理论+代码实现\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6193390488624573,
"alphanum_fraction": 0.6205630302429199,
"avg_line_length": 27.964284896850586,
"blob_id": "0c8ae3b28e29c5938791252b430430a5a38b053f",
"content_id": "f1ad7608787fe25e6f7f3390cc18e726b71310fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 875,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 28,
"path": "/project/elective_systems/version_v8/interface/common_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import modules\n\ndef login(name, password, role):\n if role == 'admin':\n obj = modules.Admin.get_obj_by_name(name)\n elif role == 'teacher':\n obj = modules.Teacher.get_obj_by_name(name)\n elif role == 'student':\n obj = modules.Student.get_obj_by_name(name)\n else:\n return False, '用户角色%s非法!' % role\n if not obj:\n return False, '用户%s不存在!' % name\n if password == obj.password:\n return True, '用户%s登陆成功!!' % name\n else:\n return False, '用户%s登陆失败!!' % name\n\ndef get_school_info(type_name):\n return modules.School.get_obj_by_name(type_name)\n\ndef get_teacher_info(type_name):\n return modules.Teacher.get_obj_by_name(type_name)\n\ndef get_course_info(type_name):\n return modules.Course.get_obj_by_name(type_name)\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.49192100763320923,
"alphanum_fraction": 0.5300717949867249,
"avg_line_length": 16.97177505493164,
"blob_id": "d8042c2a299ec9f69b0cc509953cd2a1f9d05bcc",
"content_id": "913aa1f428e0d700bc736f1bf1d68d2f89d6c8fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5116,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 248,
"path": "/month3/week2/python_day4/python_day4.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. strip() lstrip() rstrip()\n# msg = '***ssss***'\n# print(msg.lstrip('*'))\n# print(msg.rstrip('*'))\n\n# 2. lower() upper()\n# msg = 'Egon'\n# print(msg.lower())\n# print(msg.upper())\n\n# 3. startswith endswith\n# msg = 'alex is sb'\n# print(msg.startswith('alex'))\n# print(msg.endswith('sb'))\n\n# 4. format 的三种使用方式\n# 占位符方式\n# s1 = 'my name is %s, my age is %s' % ('egon', 18)\n\n# 第一种:\n# s2 = 'my name is {}, my age is {}'.format('egon', 18)\n# print(s1)\n# print(s2)\n\n# 第二种:\n# s2 = 'my name is {0}, my age is {1} {1} {1} {0} {1}'.format('egon', 18)\n# print(s2)\n\n# 第三种:\n# s2 = 'my name is {name}, my age is {age}'.format(name='egon', age=18)\n# print(s2)\n\n# 5. split() rsplit()\n# cmd = 'get|C:\\a.txt|3333'\n# print(cmd.split('|'))\n# print(cmd.rsplit('|', 1))\n\n# 6. join()\n# join 方法传入的列表必须只包含字符串类型的元素\n# cmd = 'egon:123:admin:rwx'\n# l = cmd.split(':')\n# print(l)\n\n# s = ':'.join(l)\n# print(s)\n\n# 7. replace()\n# msg = 'wupeiqi say my name wupeiqi'\n# print(msg.replace('wupeiqi', 'SB'))\n# print(msg.replace('wupeiqi', 'SB', 1))\n\n# 8. 
isdigit()\n# print('10'.isdigit())\n#\n# age = 18\n# i = 0\n# while True:\n# inp_age = input('age >>: ').strip()\n# if not inp_age.isdigit():\n# print('输入数据非法')\n# continue\n# else:\n# inp_age = int(inp_age)\n# if inp_age > age:\n# print('猜大了')\n# i += 1\n# if inp_age < age:\n# print('猜小了')\n# i += 1\n# if i == 3:\n# print('Try too many times')\n# break\n# if inp_age == age:\n# print('恭喜你猜对了!')\n# break\n#\n# 其他操作\n# 1)find() rfind() index() rindex() count()\n# msg = 'my egon hegon 123'\n# print(msg.find('sb'))\n# print(msg.find('egon', 8, 20))\n# print(msg.rfind('egon', 8, 20))\n#\n# print(msg.index('egon'))\n# print(msg.index('sb'))\n#\n# 2) center() ljust() rjust() zfill()\n# print('end')\n# print('egon'.center(50, '*'))\n# print('egon'.ljust(50, '*'))\n# print('egon'.rjust(50, '*'))\n# print('egon'.zfill(50))\n#\n# 3) expandtabs()\n# msg = 'abc\\tdef'\n# print(msg.expandtabs(4))\n#\n# 4) capitalize() swapcase() title()\n# print('abeCdEF'.capitalize())\n# print('abeCdEF'.swapcase())\n# print('my name is egon'.title())\n#\n# 5) is 数字系列\n# num1 = b'4' # bytes\n# num2 = u'4' # unicode,python3 中无需加 u 就是 Unicode\n# num3 = '四' # 中文数字\n# num4 = 'IV' # 罗马数字\n#\n# print(num1.isdigit())\n# print(num2.isdigit())\n# print(num3.isdigit())\n# print(num4.isdigit())\n#\n# print(num2.isdecimal())\n# print(num3.isdecimal())\n# print(num4.isdecimal())\n#\n# print(num1.isalnum())\n# print(num2.isalnum())\n# print(num3.isalnum())\n# print(num4.isalnum())\n#\n# 6) is 其他\n#\n# 字符串总结:\n# 字符串只能存一个值;\n# 有序; # 可以按照索引取值,就是有序的\n# 不可变数据类型;\n# 可 hash;\n\n# 列表类型\n# 1. 按索引取值(正反向,既可以取也可以存)\nl = ['a', 'b', 'c']\nprint(id(l))\nprint(l[-1])\nl[0] = 'A'\nprint(id(l))\n# l[3] = 'd' # 不存在的索引,报错\n\n# 2. 切片(顾头不顾尾,步长)\nstus = ['alex', 'wxx', 'yxx', 'lxx']\nprint(stus[0:3:1])\n\n# 3. 长度 len()\nstus = ['alex', 'wxx', 'yxx', 'lxx']\nprint(len(stus))\n\n# 4. 成员运算 in 和 not in\nstus = ['alex', 'wxx', 'yxx', 'lxx']\nprint('alex' in stus)\n\n# 5. 
追加\nstus = ['alex', 'wxx', 'yxx', 'lxx']\nstus.append('wupeiqi')\nstus.append('peiqi')\nprint(stus)\n\n# 6. 插入\nstus = ['alex', 'wxx', 'yxx', 'lxx']\nstus.insert(1, '艾利克斯') # 按照索引插入新元素\nprint(stus)\n\n# 7. 删除\nstus = ['alex', 'wxx', 'yxx', 'lxx']\n# del stus[1]\nstus.remove('alex') # 按照成员方式删除\nstus.pop(1) # 按照索引方式删除\nprint(stus)\n\n# 8. 循环\nstus = ['alex', 'wxx', 'yxx', 'lxx']\n\ni = 0\nwhile i < len(stus):\n print(stus[i])\n i += 1\n\nfor i in range(len(stus)):\n print(stus[i])\n\nfor i in stus:\n print(i)\n\n# 需要掌握的操作\nstus = ['alex', 'wxx', 'yxx', 'lxx']\nprint(stus.count('alex'))\nstus.extend(['a', 'b', 'c'])\nprint(stus)\n# print(stus.index('alex', 1, 5))\nstus.reverse()\nprint(stus)\nl = [1, 10, 3, 12]\nstus.sort(reverse=True)\nprint(l)\n# stus.append('')\n\n# 前提:只能同类型直接比较大小,对于有索引值直接的比较是按照位置一一对应进行对比的\n# s1 = 'heelo'\n# s2 = 'hf'\n# print(s1 > s2)\n\n# l1 = ['a', 'b', 'c']\n# l2 = ['d']\n# print(l1 > l2)\n\n# l1 = [3, 'a', 'b', 'c']\n# l2 = [xxx, 'd']\n# print(l1 > l2) # 不通类型之间比较大小,报错\n\nprint('Z' > 'a')\nprint('a' > 'B')\n\n# 练习\n# 队列:先进先出\nl1 = []\n# 入队\nl1.append('1')\nl1.append('2')\nl1.append('3')\n# l1.insert(-1, '1')\n# l1.insert(-1, '2')\n# l1.insert(-1, '3')\n# 出队\nl1.pop(0)\nl1.pop(0)\nl1.pop(0)\n\n# 堆栈:先进后出\nl2 = []\n# 入栈\nl2.append('1')\nl2.append('2')\nl3.append('3')\n# l2.insert(-1, '1')\n# l2.insert(-1, '2')\n# l2.insert(-1, '3')\n# 出栈\nl2.pop()\nl2.pop()\nl2.pop()\n\n# 总结列表:\n# 可以存多个值;\n# 有序;\n# 可变数据类型;\n# 不可 hash;\n\na = 'lsl'"
},
{
"alpha_fraction": 0.5365053415298462,
"alphanum_fraction": 0.544161856174469,
"avg_line_length": 27.13846206665039,
"blob_id": "7f4327bf0c0eec6043af0adcd003542336bebc02",
"content_id": "a2d66fd2bc7641991ccb6224756637d492ac688c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4727,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 130,
"path": "/month3/week3/python_day11/python_day11_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 3-29 作业\n# 一:编写一个有参和一个无参函数,然后实现下列装饰器\n# 二:编写装饰器,为函数加上统计时间的功能\n# 三:编写装饰器,为函数加上认证的功能\n#\n# 四:编写装饰器,为多个函数加上认证的功能(用户的账号密码来源于文件),要求登录成功一次,后续的函数都无需再输入用户名和密码\n# 注意:从文件中读出字符串形式的字典,可以用eval('{\"name\":\"egon\",\"password\":\"123\"}')转成字典格式\n#\n# 五:编写装饰器,为多个函数加上认证功能,要求登录成功一次,在超时时间内无需重复登录,超过了超时时间,则必须重新登录\n#\n# 六:编写下载网页内容的函数,要求功能是:用户传入一个url,函数返回下载页面的结果\n#\n# 七:为题目五编写装饰器,实现缓存网页内容的功能:\n# 具体:实现下载的页面存放于文件中,如果文件内有值(文件大小不为0),就优先从文件中读取网页内容,否则,就去下载,然后存到文件中\n#\n# 扩展功能:用户可以选择缓存介质/缓存引擎,针对不同的url,缓存到不同的文件中\n#\n# 八:还记得我们用函数对象的概念,制作一个函数字典的操作吗,来来来,我们有更高大上的做法,在文件开头声明一个空字典,然后在每个函数前加上装饰器,完成自动添加到字典的操作\n#\n# 九 编写日志装饰器,实现功能如:一旦函数f1执行,则将消息2017-07-21 11:12:11 f1 run写入到日志文件中,日志文件路径可以指定\n# 注意:时间格式的获取\n# import time\n# time.strftime('%Y-%m-%d %X')\n\nimport datetime\nfrom urllib.request import urlopen\n\nconfig = 'db.txt'\nwith open(r'%s' % config, 'a') as f:\n pass\n\ndef get_config():\n users = {}\n with open(r'%s' % config) as f:\n data = f.read().split('\\n')\n for u in data:\n users.update(eval(u))\n return users\n\ndef auth(func):\n def wrapper(*args, **kwargs):\n if not name in users:\n print('用户不存在!')\n return\n if password != users[name]['password']:\n print('密码错误!')\n return\n if name in users and password == users[name]['password']:\n dt = users[name]['logintime']\n dt = datetime.datetime.strptime(dt, '%Y-%m-%d %H:%M:%S')\n alert_time = dt + datetime.timedelta(minutes=5)\n now = datetime.datetime.now()\n if now > alert_time:\n print('登陆已超时,请重新登陆!')\n return\n if users[name]['logintime']\n print('认证通过!')\n with open(r'config' % config) as f1, \\\n open(r'%s.swap' % config, 'w') as f2:\n for line in f1:\n if name in line:\n line = eval(line.strip('\\n'))\n line['logintime'] = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n line = '%s\\n' % line\n f2.write(line)\n res = func(*args, **kwargs)\n return res\n return wrapper\n\ndef timmer(func):\n def wrapper(*args, **kwargs):\n start_time = time.time()\n res = func(*args, **kwargs)\n stop_time = time.time()\n print('Run time is 
%s' % (stop_time - start_time))\n return res\n return wrapper\n\ndef cache(engine):\n def handler(func):\n def wrapper(*args, **kwargs):\n if engine == 'baidu':\n cache_file = 'baidu.txt'\n elif engine == 'google':\n cache_file = 'google.txt'\n with open(r'%s' % cache_file) as f:\n data = f.read()\n if data:\n return data\n res = func(*args, **kwargs)\n with open(r'%s' % cache_file, 'w') as f\n f.write(res)\n return res\n return wrapper\n return handler\n\n@auth\n@timmer\ndef index():\n print('This is function index!')\n return 'function index!'\n\n@auth\n@timmer\ndef home(name):\n print('Hi, %s. This is function home!' % name)\n return 'function home!'\n\n@auth\n@cache(engine)\n@timmer\ndef get_site_page(url):\n return urlopen(url).read()\n\ndef main():\n index = index()\n print('index: ' % index)\n home = home('egon')\n print('home: %s' % home)\n baidu = get_site_page(url)\n print('baidu: %s' % baidu)\n\nif __name__ == \"__main__\":\n while True:\n engine = 'baidu'\n url = 'http://www.baidu.com'\n users = get_config()\n name = input('name >>: ').strip()\n password = input('password >>: ').strip()\n main()"
},
{
"alpha_fraction": 0.5645889639854431,
"alphanum_fraction": 0.5718157291412354,
"avg_line_length": 24.720930099487305,
"blob_id": "cf69fd68c5cc160c5a729aff8250501e496616de",
"content_id": "ab4e9dd77ca324a8c62e23a60e8156a48db56a2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1201,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 43,
"path": "/project/shooping_mall/version_v6/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\ndef login(name, password):\n info = db_handler.read(name)\n if not info:\n return False, '用户%s不存在!' % name\n if password == info['password']:\n return True, '用户%s登录成功!' % name\n else:\n return False, '用户%s登录失败!' % name\n\ndef register(name, password, credit_limit=15000):\n info = db_handler.read(name)\n if info:\n return False, '用户%s已存在,不能重复注册!' % name\n info = {\n 'name': name,\n 'password': password,\n 'balance': 0,\n 'credit_balance': credit_limit,\n 'credit_limit': credit_limit,\n 'bill': 0,\n 'shopping_cart': {},\n 'flow': []\n }\n if db_handler.write(info):\n return True, '用户%s注册成功!' % name\n else:\n return False, '用户%s注册失败!' % name\n\ndef get_balance_info(name):\n info = db_handler.read(name)\n return info['balance'], info['credit_balance'], info['credit_limit']\n\ndef get_bill_info(name):\n info = db_handler.read(name)\n return info['bill']\n\ndef get_flow_info(name):\n info = db_handler.read(name)\n return info['flow']\n\n"
},
{
"alpha_fraction": 0.4884895980358124,
"alphanum_fraction": 0.5103874206542969,
"avg_line_length": 24.753623962402344,
"blob_id": "de65a9db698f2614f36822f65e0e3bddea4029cf",
"content_id": "3b72fbc8cc19ce2926c0e8e58da74a801a22a013",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1873,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 69,
"path": "/project/youku/version_v1/youkuClient/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nfrom conf import settings\n\ndef auth(role):\n from core import admin, user\n def handle(func):\n def wrapper(*args, **kwargs):\n if role == 'admin' and not admin.COOKIES['session_id']:\n show_red('用户未登录,跳转至登陆!')\n admin.login()\n return\n if role == 'user' and not user.COOKIES['session_id']:\n show_red('用户未登录,跳转至登陆!')\n user.login()\n return\n return func(*args, **kwargs)\n return wrapper\n return handle\n\ndef show_menu(menu):\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n\ndef show_info(*args, **kwargs):\n print('=' * 30)\n if args:\n for i, key in enumerate(args):\n print('%-4s %-10s' % (i, key))\n if kwargs:\n for i, key in enumerate(kwargs):\n print('%-4s %-10s %-10s' % (i, key, kwargs[key]))\n print('=' * 30)\n\ndef show_red(word):\n print('\\033[31m%s\\033[0m' % word)\n\ndef show_green(word):\n print('\\033[32m%s\\033[0m' % word)\n\ndef input_string(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n show_red('不能输入空字符!')\n continue\n return string\n\ndef input_integer(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n show_red('不能输入空字符!')\n continue\n if string == 'q':\n return string\n if not string.isdigit():\n show_red('请输入数字!')\n continue\n return int(string)\n\ndef get_upload_video_list():\n return os.listdir(settings.upload_dir)\n\ndef get_file_size(file_name):\n return os.path.getsize(file_name)\n\n\n\n\n"
},
{
"alpha_fraction": 0.6419752836227417,
"alphanum_fraction": 0.6441539525985718,
"avg_line_length": 27.17021369934082,
"blob_id": "12f5aea2e24f8cb37fe39463437d59a1faceaf34",
"content_id": "4fa02a0f10290a3a2f579d20630033abeece4b45",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1403,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 47,
"path": "/weektest/test2/ATM_wenliuxiang/interface/bank.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "\r\n\r\nfrom db import db_handler\r\nfrom lib import common\r\nlogger_bank=common.get_logger('Bank')\r\ndef get_account(name):\r\n user_dic = db_handler.select(name)\r\n return user_dic['account']\r\n\r\n\r\ndef transfer_interface(from_user,to_user,account):\r\n from_user_dic = db_handler.select(from_user)\r\n to_user_dic = db_handler.select(to_user)\r\n\r\n from_user_dic['account']-=account\r\n to_user_dic['account']+=account\r\n\r\n from_user_dic['bankflow'].append('%s转账给谁%s¥给%s' %(from_user,account,to_user))\r\n to_user_dic['bankflow'].append('%s收到%s转账 %s¥' %(to_user,account,from_user))\r\n db_handler.update(from_user_dic)\r\n db_handler.update(to_user_dic)\r\n\r\n logger_bank.info('%s transfer %s yuan to %s' % (from_user, account, to_user))\r\n\r\n\r\n\r\ndef withdraw_interface(name,account):\r\n\r\n user_dic = db_handler.select(name)\r\n user_dic['account']-=account*1.05\r\n user_dic['bankflow'].append('%s transfer %s yuan'%(name,account))\r\n logger_bank.info('%s 取款 %s' %(name,account))\r\n\r\n\r\n\r\ndef repay_interface(name,account):\r\n\r\n user_dic=db_handler.select(name)\r\n user_dic['account']+=account\r\n\r\n user_dic['bankflow'].append('%s repay %s yuan' % (name, account))\r\n db_handler.update(user_dic)\r\n\r\n logger_bank.info('%s repay %s'%(name,account))\r\n\r\n\r\ndef check_bankflow_interfac(name):\r\n user_dic=db_handler.select(name)\r\n return user_dic\r\n\r\n"
},
{
"alpha_fraction": 0.446153849363327,
"alphanum_fraction": 0.45769229531288147,
"avg_line_length": 24.145160675048828,
"blob_id": "a1ff467dfcd75eebfacda2ac44107d996daaa154",
"content_id": "f525842367de4217965262c054abc195dccee12d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1736,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 62,
"path": "/project/shooping_mall/version_v5/conf/settings.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\nBASE_DB = os.path.join(BASE_DIR, 'db', 'data')\nACCESS_LOG = os.path.join(BASE_DIR, 'logs', 'access.log')\n\n# 日志输出格式\nSTANDARD_FMT = '[%(asctime)s][%(threadName)s:%(thread)d][task_id:%(name)s][%(filename)s:%(lineno)d]' \\\n '[%(levelname)s][%(message)s]'\nSIMPLE_FMT = '%(asctime)s %(message)s'\n\n# logging 配置字典\nLOGGING_CONFIG = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'formatters': {\n # 标准日志格式\n 'standard_fmt': {\n 'format': STANDARD_FMT\n },\n # 简单日志格式\n 'simple_fmt': {\n 'format': SIMPLE_FMT\n },\n },\n 'filters': {},\n 'handlers': {\n # 文件日志\n 'default': {\n 'level': 'DEBUG',\n # 日志输出到文件,且基于文件大小切分\n 'class': 'logging.handlers.RotatingFileHandler',\n 'formatter': 'standard_fmt',\n 'filename': ACCESS_LOG,\n # 每个日志文件大小 1Gb\n 'maxBytes': 1024*1024*1024,\n # 保留日志文件数\n 'backupCount': 5,\n # 日志文件编码 utf-8\n 'encoding': 'utf-8',\n },\n # 屏幕日志\n 'console': {\n 'level': 'DEBUG',\n # 日志输出到流\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple_fmt'\n },\n },\n 'loggers': {\n # logger对象配置\n '': {\n # 配置日志handlers\n 'handlers': ['default'],\n 'level': 'DEBUG',\n # 启用日志继承\n 'propagate': True,\n },\n },\n}\n\n"
},
{
"alpha_fraction": 0.6160221099853516,
"alphanum_fraction": 0.6353591084480286,
"avg_line_length": 18.77777862548828,
"blob_id": "370e26d398d8127f1dc818442f307d76780e6782",
"content_id": "1fb6f2b09888f69950aa2bb384070f2b130e6490",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 748,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 36,
"path": "/month4/week5/python_day18/python_day18.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "\n# import subprocess\n#\n# obj = subprocess.Popen('lss -l ~', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n#\n# res1 = obj.stdout.read()\n# print('正确结果: %s' % res1)\n#\n# res2 = obj.stderr.read()\n# print('错误结果1: %s' % res2)\n#\n# res3 = obj.stderr.read()\n# print('错误结果2: %s' % res3)\n#\n# print(globals())\n\n\nclass OldBoyStudent:\n school = 'OldBoy'\n\n def learn(self):\n print(' is learning ...')\n\n def choose(self):\n print('choose course ...')\n\n\n# print(OldBoyStudent.__dict__['learn'](123))\n# print(OldBoyStudent.learn(123))\n#\n# print(OldBoyStudent.school)\n\n# print(OldBoyStudent.abc)\nOldBoyStudent.country = 'China'\nOldBoyStudent.school = 'OldBoySchool'\n\nprint(OldBoyStudent.__dict__)\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5224366188049316,
"alphanum_fraction": 0.5289852023124695,
"avg_line_length": 31.832611083984375,
"blob_id": "16b348e574a49b1d06c742eba0c845c853a70cea",
"content_id": "2103d560cf2dc9204dd1db8e2b8a511f0e1dc66d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 24605,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 693,
"path": "/project/shooping_mall/version_v1/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nfrom lib import utils\nfrom conf import settings\nfrom functools import wraps\n\nCURRENT_USER = None\nCOOKIES = {}\nSHOPPING_CART = {}\nlogger = utils.get_logger('shopping')\n\ndef auth(func):\n '''\n auth decorator to user authentication.\n :param func: function name\n :return: function or fail error\n '''\n @wraps(func)\n def wrapper(*args, **kwargs):\n name = CURRENT_USER\n if not name or name not in COOKIES:\n logger.warning('用户%s没有登录,请您先登录!' % name)\n login()\n else:\n return func(*args, **kwargs)\n return wrapper\n\ndef logout(func):\n '''\n logout decorator to user logout from shopping mall.\n :param func: function name\n :return: return string and exit shopping mall\n '''\n @wraps(func)\n def wrapper(*args, **kwargs):\n name = CURRENT_USER\n result = func(*args, **kwargs)\n if result == 'quit':\n if name in SHOPPING_CART:\n params = {\n 'mode': 'w',\n 'file_path': settings.SHOPPING_CART_FILE % name,\n 'data': SHOPPING_CART.pop(name)\n }\n utils.file_handler(**params)\n if name:\n logger.info('用户%s退出登录成功!' 
% name)\n else:\n logger.info('程序退出登录成功!')\n os._exit(0)\n elif result == 'order':\n logger.info('请您进入购物车确认商品,并下单支付!')\n run()\n else:\n return result\n return wrapper\n\n@logout\ndef input_string(word, name=None, register=None):\n '''\n Function input_string to return a string type object.\n :param word: tips word\n :param password: if pasword True return string, else check string isalpha and return\n :return: string\n '''\n while True:\n string = input('%s >>: ' % word).strip()\n if string in ['quit', 'order']:\n return string\n if name and not string.isalpha():\n print('用户名只能是字母!')\n continue\n if register:\n password = input('再次输入%s >>: ' % word).strip()\n if password in ['quit', 'order']:\n return password\n if string != password:\n logger.warning('两次输入密码不一致!')\n continue\n return string\n return string\n\n@logout\ndef input_number(word):\n '''\n Function input_number to return a number type object.\n :param word: tips word\n :return: a number type object\n '''\n while True:\n number = input('%s >>: ' % word).strip()\n if not number.isdigit():\n continue\n number = int(number)\n return number\n\ndef register(credit_limit=50000):\n '''\n Funtion register to user register\n :param credit_limit: user's credit limit\n :return: True or None\n '''\n name = input_string('用户名', name=True)\n if utils.checkpath(settings.USER_FILE % (name, name)):\n logger.warning('用户%s已经注册,请直接登陆!' % name)\n return\n password = input_string('密码', register=True)\n params = {\n 'file_path': settings.USER_FILE % (name, name),\n 'mode': 'w',\n 'data': {\n 'name': name,\n 'password': password,\n 'permission': 'user', # admin: 管理员 user: 普通用户\n 'status': 0, # 0:正常 1: 锁定用户\n 'balance': 0,\n 'credit_balance': credit_limit,\n 'credit_limit': credit_limit, # 0 冻结\n 'bill': 0\n }\n }\n if utils.file_handler(**params):\n logger.info('用户%s注册成功!' 
% name)\n return True\n\ndef load_user_shopping_cart(name):\n '''\n Function load_user_shopping_cart to print shopping cart info\n :param name: user's name\n :return: True or None\n '''\n file_path = settings.SHOPPING_CART_FILE % name\n params = {\n 'mode': 'r',\n 'file_path': file_path\n }\n if os.path.exists(file_path):\n SHOPPING_CART[name] = utils.file_handler(**params)\n return True\n else:\n SHOPPING_CART[name] = {}\n\ndef login():\n '''\n Function login to login shopping mall\n :return: True or None\n '''\n global CURRENT_USER, COOKIES\n while True:\n name = input_string('用户名', name=True)\n if not utils.checkpath(settings.USER_FILE % (name, name)):\n logger.warning('用户%s没有注册,请您先注册!' % name)\n break\n if name in COOKIES:\n logger.info('用户%s已经是登陆状态!' % name)\n CURRENT_USER = name\n return True\n password = input_string('密码')\n params = {\n 'file_path': settings.USER_FILE % (name, name),\n 'mode': 'r'\n }\n user_info = utils.file_handler(**params)\n if password != user_info['password']:\n logger.warning('用户%s密码错误!' % name)\n continue\n logger.info('用户%s登陆成功!' 
% name)\n CURRENT_USER = name\n COOKIES[name] = {\n 'balance': user_info['balance'],\n 'credit_balance': user_info['credit_balance'],\n 'credit_limit': user_info['credit_limit'],\n 'bill': user_info['bill']\n }\n load_user_shopping_cart(name)\n return True\n\ndef get_goods_info(output=None):\n '''\n Function get_goods_info to return goods information.\n :param output: print or not print to console\n :return: goods information\n '''\n params = {\n 'mode': 'r',\n 'file_path': settings.GOODS_FILE\n }\n goods = utils.file_handler(**params)\n if output:\n print('=' * 30)\n print('商品编号 商品名称 商品价格 [order下单]')\n for k, v in goods.items():\n print('%-10s %-10s %-10s' % (k, v['name'], v['price']))\n print('=' * 30)\n return goods\n\n@auth\ndef shopping():\n '''\n Function shopping to user's shopping\n :return: None or exit (when order)\n '''\n global COOKIES, SHOPPING_CART\n balance = COOKIES[CURRENT_USER]['balance']\n credit_balance = COOKIES[CURRENT_USER]['credit_balance']\n credit_limit = COOKIES[CURRENT_USER]['credit_limit']\n while True:\n goods = get_goods_info(output=True)\n code = input_string('请输入商品编码')\n if code not in goods:\n logger.warning('输入商品编号非法!')\n continue\n good = goods[code]['name']\n price = goods[code]['price']\n count = input_number('请输入商品数量')\n cost = price * count\n if balance >= cost or credit_balance >= cost:\n if good not in SHOPPING_CART[CURRENT_USER]:\n SHOPPING_CART[CURRENT_USER][good] = {\n 'code': code,\n 'price': price,\n 'count': count\n }\n else:\n SHOPPING_CART[CURRENT_USER][good]['count'] += count\n if balance >= cost:\n balance -= cost\n COOKIES[CURRENT_USER]['balance'] = balance\n elif balance < cost and credit_balance >= cost:\n if credit_limit == 0:\n logger.warning('用户%s信用卡已冻结,无法使用信用卡购物!' % CURRENT_USER)\n return\n credit_balance -= cost\n COOKIES[CURRENT_USER]['credit_balance'] = credit_balance\n logger.info('用户%s使用信用卡支付!' 
% CURRENT_USER)\n else:\n diff = cost - (balance + credit_balance)\n logger.info('账户余额: %s 信用卡余额: %s 购买商品 %s x %s 还需 %s' % (balance, credit_balance, good, count, diff))\n return\n logger.info('账户余额: %s 信用卡余额: %s 购物车: %s \\n' % (balance, credit_balance, SHOPPING_CART[CURRENT_USER]))\n\ndef print_shopping_cart():\n '''\n Funciton print_shopping_cart to print shopping cart\n :return: goods total cost in shopping cart\n '''\n cost = 0\n print('=' * 50)\n print('商品名称 商品编号 商品价格 商品数量 [输入order下单]\\n')\n for k, v in SHOPPING_CART[CURRENT_USER].items():\n good, code, price, count = k, v['code'], v['price'], v['count']\n print('%-10s %-10s %-10s %-10s' % (good, code, price, count))\n cost += (price * count)\n print('\\n商品总价: %s' % cost)\n print('账户余额: %s' % COOKIES[CURRENT_USER]['balance'])\n print('信用卡余额: %s' % COOKIES[CURRENT_USER]['credit_balance'])\n print('=' * 50)\n return cost\n\ndef edit_shopping_cart():\n '''\n Function edit_shopping_cart to delete good in shopping cart\n :return: None\n '''\n global COOKIES, SHOPPING_CART\n while True:\n balance = COOKIES[CURRENT_USER]['balance']\n credit_balance = COOKIES[CURRENT_USER]['credit_balance']\n credit_limit = COOKIES[CURRENT_USER]['credit_limit']\n goods = get_goods_info()\n print_shopping_cart()\n code = input_string('请输入要删除的商品编码')\n if code not in goods:\n logger.warning('输入商品编号非法!')\n continue\n good = goods[code]['name']\n price = goods[code]['price']\n if good not in SHOPPING_CART[CURRENT_USER]:\n logger.warning('购物车内无此商品: %s' % good)\n continue\n count = input_number('请输入要删除的商品数量')\n if count > SHOPPING_CART[CURRENT_USER][good]['count']:\n logger.warning('删除的数量,不能大于购物车内商品数量!')\n continue\n if count == SHOPPING_CART[CURRENT_USER][good]['count']:\n SHOPPING_CART[CURRENT_USER].pop(good)\n logger.info('商品 %s x %s 已在购物车删除!' 
% (good, count))\n if credit_balance > credit_limit:\n credit_balance = credit_limit\n balance += (credit_balance - credit_limit)\n if count < SHOPPING_CART[CURRENT_USER][good]['count']:\n SHOPPING_CART[CURRENT_USER][good]['count'] -= count\n logger.info('商品 %s x %s 已在购物车删除!' % (good, count))\n if credit_balance > credit_limit:\n credit_balance = credit_limit\n balance += (credit_balance - credit_limit)\n credit_balance += (price * count)\n COOKIES[CURRENT_USER]['balance'] = balance\n COOKIES[CURRENT_USER]['credit_balance'] = credit_balance\n\ndef order_shoping_cart():\n cost = print_shopping_cart()\n dt = datetime.datetime.now().strftime('%Y%m')\n order_id = datetime.datetime.now().strftime('%Y%m%d%H%M%S')\n order_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n params = {\n 'mode': 'w',\n 'file_path': settings.ORDER_FILE % (name, dt, order_id),\n 'data': {\n 'order_id': order_id,\n 'username': CURRENT_USER,\n 'cost': cost,\n 'order_time': order_time,\n 'goods': SHOPPING_CART[CURRENT_USER]\n }\n }\n confirm = input_string('确认下单?y/n')\n if confirm == 'y':\n utils.file_handler(**params)\n logger.info('用户%s下单成功,请尽快支付!' % CURRENT_USER)\n return True\n logger.info('用户%s取消下单!' % CURRENT_USER)\n return\n\ndef pay_shooping_order():\n params = {\n 'mode': 'r',\n 'file_path': settings.USER_FILE % (CURRENT_USER, CURRENT_USER)\n }\n data = utils.file_handler(**params)\n data['balance'] = COOKIES[CURRENT_USER]['balance']\n data['credit_balance'] = COOKIES[CURRENT_USER]['credit_balance']\n params = {\n 'mode': 'w',\n 'file_path': settings.USER_FILE % (CURRENT_USER, CURRENT_USER),\n 'data': data\n }\n confirm = input_string('确认支付订单?y/n')\n if confirm == 'y':\n if utils.file_handler(**params):\n logger.info('支付成功,请耐心等待发货!')\n SHOPPING_CART[CURRENT_USER] = {}\n return True\n logger.info('用户%s取消支付!' 
% CURRENT_USER)\n return\n\ndef get_shopping_orders(month=None):\n if not month:\n month = datetime.datetime.now().strftime('%Y%m')\n order_dir = settings.ORDER_DIR % (CURRENT_USER, month)\n if utils.checkpath(order_dir):\n file_list = os.listdir(order_dir)\n print('='*50)\n print('%s 订单信息:' % month)\n for f in file_list:\n params = {\n 'mode': 'r',\n 'file_path': os.path.join(order_dir, f)\n }\n data = utils.file_handler(**params)\n print(data)\n print('=' * 50)\n\ndef transfer(account, amount, mode):\n params = {\n 'mode': 'r',\n 'file_path': settings.USER_FILE % (account, account)\n }\n account_info = utils.file_handler(**params)\n if mode == 'minus':\n if account_info['balance'] < amount:\n logger.warning('账户%s余额不足,转账失败!' % src_account)\n return\n account_info['balance'] -= amount\n elif mode == 'plus':\n account_info['balance'] += amount\n params = {\n 'mode': 'w',\n 'file_path': settings.USER_FILE % (account, account),\n 'data': account_info\n }\n utils.file_handler(**params)\n return True\n\ndef transfer_amount():\n src_account = input_string('请输入转账源账户')\n dst_account = input_string('请输入转账目标账户')\n amount = input_number('请输入转账金额')\n for account in [src_account, dst_account]:\n if not settings.USER_FILE % (account, account):\n logger.warning('账户%s不存在!')\n return\n if transfer(src_account, amount, 'minus'):\n transfer(dst_account, amount, 'minus')\n logger.info('账户%s转账%s至账户%s成功!' 
% (src_account, amount, dst_account))\n return True\n\ndef credit_card_bill():\n dt = datetime.date.today().strftime('%Y-%m')\n user_list = os.listdir(USER_DIR)\n for i in user_list:\n if not os.path.isdir(os.path.join(USER_DIR, i)):\n user_list.remove(i)\n for user in user_list:\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (user, user)\n }\n user_info = utils.file_handler(**params)\n if user_info['bill'] == 0:\n user_info['bill'] = (user_info['credit_limit'] - user_info['credit_balance'])\n else:\n user_info['bill'] += (user_info['bill'] * 0.0005)\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (user, user),\n 'data': user_info\n }\n utils.file_handler(**params)\n if user_info['bill'] != 0:\n logging.info('用户 %s %s 账单已生成!' % (user, dt))\n return True\n\ndef get_credit_card_bill():\n bill = COOKIES[CURRENT_USER]['bill']\n print('='*30)\n print('用户%s每月账单日:22号\\n每月还款日:10号\\n本期账单: %s' % (CURRENT_USER, bill))\n print('='*30)\n return True\n\ndef credit_card_repay():\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (CURRENT_USER, CURRENT_USER)\n }\n user_info = utils.file_handler(**params)\n if user_info['bill'] == 0:\n print('用户%s本期账单是0,不需要还款!' % CURRENT_USER)\n return True\n amount = input_number('请输入还款金额')\n if amount > user_info['bill']:\n user_info['credit_balance'] = user_info['credit_limit']\n user_info['bill'] = 0\n logger.info('用户%s本期账单已还清!' % CURRENT_USER)\n if amount == user_info['bill']:\n user_info['credit_balance'] = user_info['credit_limit']\n user_info['bill'] = 0\n logger.info('用户%s本期账单已还清!' % CURRENT_USER)\n if amount > user_info['bill']:\n user_info['credit_balance'] += amount\n user_info['bill'] -= amount\n logger.info('用户%s本期账单未还清,还需%s还清本期账单!' % (CURRENT_USER, user_info['bill']))\n return True\n\ndef credit_card_withdraw_cash():\n global COOKIES\n if COOKIES[CURRENT_USER]['credit_limit'] == 0:\n logger.warning('用户%s信用卡已冻结,无法提现!' 
% CURRENT_USER)\n return\n amount = input_number('请输入提现金额')\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (CURRENT_USER, CURRENT_USER)\n }\n user_info = utils.file_handler(**params)\n if user_info[CURRENT_USER]['credit_balance'] >= (amount + (amount * 0.05)):\n user_info[CURRENT_USER]['credit_balance'] -= (amount + (amount * 0.05))\n else:\n logger.warning('用户%s信用卡可用额度不足,提现%s失败!' % (CURRENT_USER, amount))\n return\n user_info[CURRENT_USER]['balance'] += amount\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (CURRENT_USER, CURRENT_USER),\n 'data': user_info\n }\n utils.file_handler(**params)\n logger.info('用户%s提现%s成功!' % (CURRENT_USER, amount))\n COOKIES[CURRENT_USER]['balance'] = user_info[CURRENT_USER]['balance']\n COOKIES[CURRENT_USER]['credit_balance'] = user_info[CURRENT_USER]['credit_balance']\n return True\n\ndef credit_card_manage():\n while True:\n action_menu = {\n '1': ['plus', '提额'],\n '2': ['minus', '降额'],\n '3': ['freeze', '冻结'],\n }\n print('=' * 30)\n for k, v in action_menu.items():\n print('%-6s %-10s' % (k, v[1]))\n print('=' * 30)\n code = input_string('请输入操作编码')\n if code in action_menu:\n operation = action_menu[code][0]\n else:\n logger('操作编码非法!')\n continue\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (CURRENT_USER, CURRENT_USER)\n }\n user_info = utils.file_handler(**params)\n if operation == 'plus':\n amount = input_number('请输入提额金额')\n user_info[CURRENT_USER]['credit_limit'] += amount\n user_info[CURRENT_USER]['credit_balance'] += amount\n logger.info('用户%s信用卡提额完成!' % CURRENT_USER)\n elif operation == 'minus':\n amount = input_number('请输入提额金额')\n user_info[CURRENT_USER]['credit_limit'] -= amount\n user_info[CURRENT_USER]['credit_balance'] -= amount\n logger.info('用户%s信用卡降额完成!' % CURRENT_USER)\n elif operation == 'freeze':\n user_info[CURRENT_USER]['credit_limit'] = 0\n logger.info('用户%s信用卡冻结完成!' 
% CURRENT_USER)\n params = {\n 'mode': 'w',\n 'file_path': USER_FILE % (CURRENT_USER, CURRENT_USER),\n 'data': user_info\n }\n utils.file_handler(**params)\n return True\n\ndef reset_password():\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (CURRENT_USER, CURRENT_USER)\n }\n user_info = utils.file_handler(**params)\n password = input_string(register=True)\n user_info['password'] = password\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (CURRENT_USER, CURRENT_USER),\n 'data': user_info\n }\n utils.file_handler(**params)\n logger.info('用户%s重置密码成功!' % CURRENT_USER)\n return True\n\ndef lock_user():\n name = input_string('请输入要锁定的用户名', name=True)\n if not utils.checkpath(USER_FILE % (name, name)):\n logger.warning('用户%s不存在!' % name)\n return\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (name, name)\n }\n user_info = utils.file_handler(**params)\n user_info['status'] = 1\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (name, name),\n 'data': user_info\n }\n utils.file_handler(**params)\n logger.info('锁定用户%s成功!' % name)\n return True\n\ndef remove_user():\n name = input_string('请输入要删除的用户名', name=True)\n if not utils.checkpath(USER_FILE % (name, name)):\n logger.warning('用户%s不存在!' % name)\n return\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (name, name)\n }\n user_info = utils.file_handler(**params)\n if user_info['bill'] > 0:\n logger.warning('用户%s本期账单未还清,无法删除用户!' % name)\n return\n if user_info['credit_balance'] < user_info['credit_limit']:\n logger.warning('用户%s信用卡有未出账账单,无法删除用户!' % name)\n return\n if user_info['balance'] > 0:\n logger.warning('用户%s有账户余额未消费,无法删除用户!' % name)\n return\n os.remove(os.path.join(USER_DIR, name))\n logger.info('删除用户%s完成!' % name)\n return True\n\ndef set_permission():\n while True:\n name = input_string('请输入要删除的用户名', name=True)\n if not utils.checkpath(USER_FILE % (name, name)):\n logger.warning('用户%s不存在!' 
% name)\n return\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (name, name)\n }\n user_info = utils.file_handler(**params)\n action_menu = {\n '1': ['admin', '管理员'],\n '2': ['user', '普通用户'],\n }\n print('=' * 30)\n for k, v in action_menu.items():\n print('%-6s %-10s' % (k, v[1]))\n print('=' * 30)\n code = input_string('请输入操作编码')\n if code in action_menu:\n permission = action_menu[code][0]\n else:\n logger('操作编码非法!')\n continue\n if permission == 'admin':\n user_info['permission'] = 'admin'\n if permission == 'user':\n user_info['permission'] = 'user'\n params = {\n 'mode': 'r',\n 'file_path': USER_FILE % (name, name),\n 'data': user_info\n }\n utils.file_handler(**params)\n logger.info('设置用户%s权限为%s成功!' % (name, permission))\n return True\n\n@auth\ndef shopping_cart():\n while True:\n shopping_cart_menu = {\n '1': [print_shopping_cart, '查看购物车'],\n '2': [edit_shopping_cart, '编辑购物车'],\n '3': [order_shoping_cart, '下单'],\n '4': [pay_shooping_order, '支付']\n }\n print('=' * 30)\n for k, v in shopping_cart_menu.items():\n print('%-6s %-10s' % (k, v[1]))\n print('=' * 30)\n code = input_string('请输入购物车编码')\n if code in shopping_cart_menu:\n if shopping_cart_menu[code][0]():\n return True\n else:\n logger.info('操作编码非法!')\n\n@auth\ndef user_manage():\n while True:\n action_menu = {\n '0': [register, '新增用户'],\n '1': [reset_password, '重置密码'],\n '2': [set_permission, '设置权限'],\n '3': [lock_user, '锁定用户'],\n '4': [remove_user, '删除用户'],\n '5': [transfer_amount, '用户转账'],\n '6': [get_credit_card_bill, '获取账单'],\n '7': [credit_card_repay, '还款'],\n '8': [credit_card_withdraw_cash, '提现'],\n '9': [credit_card_manage, '信用卡管理']\n }\n print('=' * 30)\n for k, v in action_menu.items():\n print('%-6s %-10s' % (k, v[1]))\n print('=' * 30)\n code = input_string('请输入用户管理编码')\n if code in action_menu:\n if action_menu[code][0]():\n return True\n else:\n logger.info('操作编码非法!')\n\ndef run():\n while True:\n action_menu = {\n '1': [register, '注册'],\n '2': [login, '登录'],\n '3': [shopping, 
'购物'],\n '4': [shopping_cart, '购物车'],\n '5': [user_manage, 'ATM']\n }\n print('='*30)\n for k,v in action_menu.items():\n print(' %-6s %-10s' % (k, v[1]))\n print('='*30)\n code = input_string('请输入操作编码')\n if code in action_menu:\n try:\n action_menu[code][0]()\n except Exception as e:\n print('\\033[33m%s\\033[0m' % e)\n except:\n pass\n else:\n logger.info('操作编码非法!')\n"
},
{
"alpha_fraction": 0.5811688303947449,
"alphanum_fraction": 0.5827922224998474,
"avg_line_length": 23.479999542236328,
"blob_id": "e55858afa45bc7fc9093fd09e71f4fb5052b97b2",
"content_id": "a8aac025ac2412c0aa6ac982bfa5d7f58d62d737",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 644,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 25,
"path": "/weektest/test2/ATM_zhanglong/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport json\nfrom lib import common\nfrom conf import settings\n\nlogger = common.get_logger('db_handler')\n\ndef file_handler_read(name):\n try:\n with open(r'%s' % settings.USER_FILE % name) as f:\n data = json.load(f)\n except Exception as e:\n logger.warning('读取文件失败:%s' % e)\n else:\n return data\n\ndef file_handler_write(user_info):\n try:\n with open(r'%s' % settings.USER_FILE % user_info['name'], 'w') as f:\n json.dump(user_info, f)\n except Exception as e:\n logger.warning('写入文件失败:%s' % e)\n else:\n return True\n\n\n\n\n"
},
{
"alpha_fraction": 0.6573604345321655,
"alphanum_fraction": 0.6598984599113464,
"avg_line_length": 22.117647171020508,
"blob_id": "6d055492a1593fefbdb7cfb6e491775807370d2e",
"content_id": "4fd0e70ecfa8414dd0452d266241f9e4104bce3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 394,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 17,
"path": "/homework/week4/elective_systems/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# encoding: utf-8\n\nimport hmac\nimport logging.config\nfrom conf import settings\n\nclass Common:\n @classmethod\n def get_logger(cls, name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\n @classmethod\n def make_hmac_code(cls, msg):\n h = hmac.new(b'ElectiveSystems')\n h.update(msg)\n return h.hexdigest()\n\n"
},
{
"alpha_fraction": 0.48143255710601807,
"alphanum_fraction": 0.4917449951171875,
"avg_line_length": 33.53508758544922,
"blob_id": "75ea0955dc3d81eb4996df4f14688fbd7a2ddbbb",
"content_id": "81943cce4cd16b7b412526daa2262e86c81b46f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 21753,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 570,
"path": "/homework/week3/python_weekend3_zhanglong_atm_and_shopping.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n#-*- encoding: utf-8 -*-\n\nimport os\nimport sys\nimport json\nimport logging\nimport datetime\n\nlogging.basicConfig(level=logging.DEBUG,\n format='%(asctime)s %(filename)s[line:%(lineno)d]:%(levelname)s:%(message)s',\n datefmt='%Y-%b-%d,%H:%M:%S',\n filename='check_atm_and_shopping.log',\n filemode='w')\n\ndef auth(func):\n def wrapper(*args, **kwargs):\n if name in cookies:\n return func(*args, **kwargs)\n else:\n print('用户%s没有登录,请先登录!' % name)\n logging.warning('用户%s没有登录,请先登录!' % name)\n sys.exit()\n return wrapper\n\ndef order(func):\n def wrapper(*args, **kwargs):\n global cookies, shopping_cart\n res = func(*args, **kwargs)\n if res == 'quit':\n if shopping_cart:\n with open(r'shopping_cart.json.swap', 'w') as f:\n json.dump(shopping_cart, f)\n os._exit(0)\n if res == 'order':\n print('请确认购物车并下单!')\n sys.exit()\n return res\n return wrapper\n\ndef check_file(path):\n if os.path.exists(path):\n return True\n\ndef get_user_info(name):\n with open(r'%s/user.json' % name) as f:\n user_info = json.load(f)\n return user_info\n\n@auth\ndef update_user_info(name, user_info):\n with open(r'%s/user.json' % name, 'w') as f:\n json.dump(user_info, f)\n\n@auth\ndef get_goods_info():\n with open(r'goods.json') as f:\n goods = json.load(f)\n return goods\n\ndef register(credit_limit=15000):\n print('>> \\033[33m请输入注册信息!\\033[0m')\n name = get_user_name('注册')\n if check_file('%s/user.json' % name):\n print('用户%s已经注册!' % name)\n return\n else:\n os.mkdir(name)\n password = get_password(True)\n with open(r'%s/user.json' % name, 'w') as f:\n user_info = {\n name: {\n 'password': password,\n 'balance': 0,\n 'credit_limit': credit_limit,\n 'credit_balance': credit_limit,\n 'bill': 0\n }\n }\n json.dump(user_info, f)\n print('用户%s注册成功!' 
% name)\n return\n\n@order\ndef get_user_name(word):\n while True:\n name = input('%s用户名 >>: ' % word).strip()\n if name == 'quit':\n return name\n if not name.isalpha():\n print('用户名必须是字符串!')\n continue\n return name\n\n@order\ndef get_password(register=None):\n while True:\n password = input('密码 >>: ')\n if password == 'quit':\n return password\n if register:\n p = input('再次输入密码 >>: ')\n if password == 'quit':\n return password\n if p != password:\n print('两次输入的密码不一致!')\n continue\n return password\n\n@order\ndef get_balance(word):\n while True:\n balance = input('请输入%s金额 >>: ' % word).strip()\n if balance == 'quit':\n return balance\n if not balance.isdigit():\n print('必须是金额的整数!')\n continue\n balance = int(balance)\n return balance\n\n@order\ndef get_action():\n while True:\n code = input('请输入操作编码 >>: ').strip()\n return code\n\n@order\ndef get_shopping_code(word):\n while True:\n code = input('请输入%s商品编码 >>: ' % word).strip()\n return code\n\n@order\ndef get_shopping_count(word):\n while True:\n count = input('请输入%s数量 >>: ' % word).strip()\n if count == 'quit':\n return count\n if count.isdigit():\n count = int(count)\n return count\n print('请输入数量的整数!')\n\ndef login():\n global cookies\n while True:\n name = get_user_name('登陆')\n if name in cookies:\n print('用户%s已经是登陆状态!' % name)\n return name\n if not check_file('%s/user.json' % name):\n print('用户%s不存在!' % name)\n continue\n password = get_password()\n user = get_user_info(name)\n if password != user[name]['password']:\n print('密码错误!')\n continue\n if name not in cookies:\n cookies[name] = {}\n cookies[name]['balance'] = user[name]['balance']\n cookies[name]['credit_balance'] = user[name]['credit_balance']\n cookies[name]['credit_limit'] = user[name]['credit_limit']\n cookies[name]['bill'] = user[name]['bill']\n print('用户%s登陆成功!' 
% name)\n return name\n\n@auth\ndef shopping():\n global cookies, shopping_cart\n balance = cookies[name]['balance']\n credit_balance = cookies[name]['credit_balance']\n credit_limit = cookies[name]['credit_limit']\n if name in shopping_cart:\n cost = 0\n for v in shopping_cart[name].values():\n price, count = v['price'], v['count']\n cost += price * count\n if balance >= cost:\n balance -= cost\n elif balance < cost and credit_balance >= cost:\n if credit_limit == 0:\n print('信用卡已冻结,无法使用信用卡购物!')\n print('账户余额不足,请进入购物车修改购买商品,或进行账户充值后继续购物!')\n return\n credit_balance -= cost\n elif balance < cost and (balance + credit_balance) >= cost:\n balance = 0\n credit_balance -= (cost - balance)\n else:\n diff = cost - (balance + credit_balance)\n print('账户余额: %s 信用卡余额: %s 购买购物车中的物品还需: %s' % (balance, credit_balance, diff))\n print('请进入购物车修改购买商品,或进行账户充值后继续购物!')\n return\n else:\n shopping_cart[name] = {}\n while True:\n print('=' * 30)\n print('商品编号 商品名称 商品价格')\n for k, v in goods.items():\n print('%-10s %-10s %-10s' % (k, v['name'], v['price']))\n print('=' * 30)\n print('[下单:order]')\n code = get_shopping_code('购买')\n if code not in goods:\n print('商品编号非法!')\n continue\n good = goods[code]['name']\n price = goods[code]['price']\n count = get_shopping_count('购买')\n cost = price * count\n if balance >= cost or credit_balance >= cost:\n if good not in shopping_cart[name]:\n total_count = count\n else:\n total_count += count\n shopping_cart[name][good] = {\n 'code': code,\n 'price': price,\n 'count': total_count\n }\n if balance >= cost:\n balance -= cost\n print('购物车: %s \\n账户余额: %s' % (shopping_cart[name], balance))\n elif balance < cost and credit_balance >= cost:\n if credit_limit == 0:\n print('信用卡已冻结,无法使用信用卡购物!')\n print('账户余额不足,请进入购物车修改购买商品,或进行账户充值后继续购物!')\n return\n credit_balance -= cost\n print('购物车: %s \\n信用卡余额: %s' % (shopping_cart[name], credit_balance))\n else:\n diff = cost - (balance + credit_balance)\n print('账户余额: %s 信用卡余额: %s 购买商品 %s x %s 还需 %s' % (balance, 
credit_balance, good, count, diff))\n print('请账户充值或信用卡还款后继续购物!')\n cookies[name]['balance'] = balance\n cookies[name]['credit_balance'] = credit_balance\n\n@auth\ndef shopping_cart_order():\n global cookies, shopping_cart\n while True:\n if name not in shopping_cart:\n shopping_cart[name] = {}\n print('1 查看购物车\\n2 编辑购物车\\n3 确认下单')\n print('退回上一层:back')\n action = get_action()\n if action == '1':\n cost = 0\n print('=' * 50)\n print('商品名称 商品编号 商品价格 商品数量\\n')\n for k, v in shopping_cart[name].items():\n good, code, price, count = k, v['code'], v['price'], v['count']\n print('%-10s %-10s %-10s %-10s' % (good, code, price, count))\n cost += (price * count)\n print('\\n商品总价: %s' % cost)\n print('账户余额: %s' % cookies[name]['balance'])\n print('信用卡余额: %s' % cookies[name]['credit_balance'])\n print('=' * 50)\n elif action == '2':\n while True:\n print('=' * 30)\n print('商品编号 商品名称 商品价格')\n for k, v in goods.items():\n print('%-10s %-10s %-10s' % (k, v['name'], v['price']))\n print('=' * 30)\n code = get_shopping_code('删除')\n good = goods[code]['name']\n price = goods[code]['price']\n if good not in shopping_cart[name]:\n print('购物车内无此商品: %s' % good)\n continue\n count = get_shopping_count('删除')\n if count > shopping_cart[name][good]['count']:\n print('请输入不大于购物车内商品数量的值!')\n continue\n elif count == shopping_cart[name][good]['count']:\n del shopping_cart[name][good]\n cookies[name]['credit_balance'] += (price * count)\n if cookies[name]['credit_balance'] > 15000:\n cookies[name]['balance'] += (cookies[name]['credit_balance'] - 15000)\n cookies[name]['credit_balance'] = 15000\n else:\n shopping_cart[name][good]['count'] -= count\n cookies[name]['credit_balance'] += (price * count)\n if cookies[name]['credit_balance'] > 15000:\n cookies[name]['balance'] += (cookies[name]['credit_balance'] - 15000)\n cookies[name]['credit_balance'] = 15000\n print('购物车编辑完成!')\n break\n elif action == '3':\n cost = 0\n print('=' * 50)\n print('商品名称 商品编号 商品价格 商品数量\\n')\n for k, v in 
shopping_cart[name].items():\n good, code, price, count = k, v['code'], v['price'], v['count']\n print('%-10s %-10s %-10s %-10s' % (good, code, price, count))\n cost += (price * count)\n print('\\n商品总价: %s' % cost)\n print('账户余额: %s' % cookies[name]['balance'])\n print('信用卡余额: %s' % cookies[name]['credit_balance'])\n print('=' * 50)\n d = datetime.datetime.now().strftime('%Y%m')\n dt = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n user_order = {}\n with open(r'%s/%s_order.json' % (name, d), 'a') as f:\n user_order[name] = shopping_cart[name]\n user_order[name]['datetime'] = dt\n user_order[name]['cost'] = cost\n f.write(str(user_order)+'\\n')\n print('下单成功,请尽快支付!')\n break\n elif action == 'back':\n return\n else:\n print('操作编码非法!')\n\n@auth\ndef pay():\n with open(r'%s/user.json' % name) as f1, \\\n open(r'%s/user.json.swap' % name, 'w') as f2:\n user_info = json.load(f1)\n user_info[name]['balance'] = cookies[name]['balance']\n user_info[name]['credit_balance'] = cookies[name]['credit_balance']\n json.dump(user_info, f2)\n os.remove('%s/user.json' % name)\n os.rename('%s/user.json.swap' % name, '%s/user.json' % name)\n print('支付成功,请耐心等待发货!')\n shopping_cart[name] = {}\n\n@auth\ndef get_orders(d=datetime.datetime.now().strftime('%Y%m')):\n print('以下是%s消费订单信息: ' % d)\n if check_file('%s/%s_order.json' % (name, d)):\n with open(r'%s/%s_order.json' % (name, d)) as f:\n for line in f:\n print(line.strip())\n else:\n print('用户%s本月没有订单信息!\\n' % name)\n\n@auth\ndef withdraw_cash(amount):\n if cookies[name]['credit_limit'] == 0:\n print('信用卡已冻结,无法使用信用卡购物!')\n return\n if cookies[name]['credit_balance'] >= (amount + (amount * 0.05)):\n with open(r'%s/user.json' % name) as f1, \\\n open(r'%s/user.json.swap' % name, 'w') as f2:\n user_info = json.load(f1)\n cookies[name]['credit_balance'] -= (amount + (amount * 0.05))\n cookies[name]['balance'] += amount\n user_info['balance'] = cookies[name]['balance']\n user_info['credit_balance'] = 
cookies[name]['credit_balance']\n json.dump(user_info, f2)\n os.remove('%s/user.json' % name)\n os.rename('%s/user.json.swap' % name, '%s/user.json' % name)\n print('提现%s成功,手续费:%s!' % (amount, (amount * 0.05)))\n logging.info('提现%s成功,手续费:%s!' % (amount, (amount * 0.05)))\n else:\n print('信用卡没有足够的金额完成提现!')\n logging.warning('信用卡没有足够的金额完成提现!')\n\n@auth\ndef transfer_amount(name, amount, mode):\n with open(r'%s/user.json' % name) as f1, \\\n open(r'%s/user.json.swap' % name, 'w') as f2:\n user_info = json.load(f1)\n if mode == 'reduce':\n if user_info[name]['balance'] >= amount:\n user_info[name]['balance'] -= amount\n else:\n print('用户%s账户金额不足!' % name)\n logging.warning('用户%s账户金额不足!' % name)\n return\n elif mode == 'increase':\n user_info[name]['balance'] += amount\n json.dump(user_info, f2)\n os.remove('%s/user.json' % name)\n os.rename('%s/user.json.swap' % name, '%s/user.json' % name)\n return True\n\n@auth\ndef transfer(src_name, dst_name, amount):\n if not check_file('%s/user.json' % src_name):\n print('账户%s不存在!' % src_name)\n logging.error('账户%s不存在!' % src_name)\n return\n if not check_file('%s/user.json' % dst_name):\n print('账户%s不存在!' % dst_name)\n logging.error('账户%s不存在!' % dst_name)\n return\n if transfer_amount(src_name, amount, 'reduce'):\n transfer_amount(dst_name, amount, 'increase')\n print('转账完成!')\n logging.info('转账完成!')\n\n@auth\ndef repayment():\n with open(r'%s/user.json' % name) as f1, \\\n open(r'%s/user.json.swap' % name, 'w') as f2:\n user_info = json.load(f1)\n money = get_balance('还款')\n if money < user_info['bill']:\n user_info['credit_balance'] += money\n diff = user_info['bill'] - money\n user_info['bill'] = diff\n print('用户%s还款成功,还需%s金额还清本期账单!' % (name, diff))\n logging.info('用户%s还款成功,还需%s金额还清本期账单!' % (name, diff))\n elif money == user_info['bill']:\n user_info['credit_balance'] += money\n user_info['bill'] = 0\n print('用户%s还款成功,本期账单已还清!' % name)\n logging.info('用户%s还款成功,本期账单已还清!' 
% name)\n elif money > user_info['bill']:\n user_info['credit_balance'] == user_info['credit_limit']\n user_info['balance'] += (money - user_info['bill'])\n user_info['bill'] = 0\n print('用户%s还款成功,本期账单已还清!多余金额已经充值到账户余额!' % name)\n logging.info('用户%s还款成功,本期账单已还清!多余金额已经充值到账户余额!' % name)\n\n@auth\ndef credit_manege(action, amount=None):\n with open(r'%s/user.json' % name) as f1, \\\n open(r'%s/user.json.swap' % name, 'w') as f2:\n user_info = json.load(f1)\n if action == 'up':\n user_info[name]['credit_limit'] += amount\n user_info[name]['credit_balance'] += amount\n print('用户%s信用卡提额%s完成!' % (name, amount))\n logging.info('用户%s信用卡提额%s完成!' % (name, amount))\n elif action == 'down':\n user_info[name]['credit_limit'] -= amount\n user_info[name]['credit_balance'] -= amount\n print('用户%s信用卡降额%s完成!' % (name, amount))\n logging.info('用户%s信用卡降额%s完成!' % (name, amount))\n elif action == 'freeze':\n user_info[name]['credit_limit'] = 0\n print('用户%s信用卡冻结完成!' % name)\n logging.info('用户%s信用卡冻结完成!' % name)\n elif action == 'back':\n return\n else:\n print('操作非法!')\n json.dump(user_info, f2)\n os.remove('%s/user.json' % name)\n os.rename('%s/user.json.swap' % name, '%s/user.json' % name)\n\n@auth\ndef credit_bill():\n today = datetime.date.today()\n month = today.month\n day = today.day\n with open(r'%s/user.json' % name) as f1, \\\n open(r'%s/user.json.swap' % name, 'w') as f2:\n user_info = json.load(f1)\n if day == '22':\n user_info['bill'] = user_info['bill'] + (user_info['credit_limit'] - user_info['credit_balance'])\n logging.info('%s-%s账单已生成!' 
% (month, day))\n if today >= '%s-10' % month:\n if user_info['bill'] != 0:\n user_info['bill'] = user_info['bill'] + (user_info['bill'] * 0.0005)\n json.dump(user_info, f2)\n os.remove('%s/user.json' % name)\n os.rename('%s/user.json.swap' % name, '%s/user.json' % name)\n\n@auth\ndef credit():\n while True:\n print('1 管理信用卡\\n2 账户转账\\n3 查看订单')\n print('退回上一层: back')\n action = get_action()\n if action == '1':\n while True:\n print('1 信用卡提额\\n2 信用卡降额\\n3 信用卡冻结\\n4 信用卡余额\\n5 信用卡提现\\n6 信用卡账单\\n7 信用卡还款')\n print('退回上一层: back')\n operation = get_action()\n if operation == '1':\n amount = get_balance('提额')\n credit_manege('up', amount)\n elif operation == '2':\n amount = get_balance('降额')\n credit_manege('down', amount)\n elif operation == '3':\n credit_manege('freeze')\n elif operation == '4':\n if cookies[name]['credit_limit'] == 0:\n credit_balance = 0\n print('信用卡已冻结,无法使用信用卡!')\n else:\n credit_balance = cookies[name]['credit_balance']\n print('信用卡可用余额: %s' % credit_balance)\n elif operation == '5':\n amount = get_balance('提现')\n withdraw_cash(amount)\n elif operation == '6':\n bill = cookies[name]['bill']\n print('本期信用卡账单: %s' % bill)\n elif operation == '7':\n repayment()\n elif operation == 'back':\n break\n else:\n print('操作编码非法!')\n elif action == '2':\n src_name = get_user_name('源')\n dst_name = get_user_name('目标')\n amount = get_balance('转账')\n transfer(src_name, dst_name, amount)\n elif action == '3':\n get_orders()\n elif action == 'back':\n return\n else:\n print('操作编码非法!')\n\n\n# 程序运行\nconfig = 'users.txt'\nusers_count = {}\ncookies = {}\nshopping_cart = {}\nif check_file(r'shopping_cart.json.swap'):\n with open(r'shopping_cart.json.swap') as f:\n data = json.load(f)\n shopping_cart = data\n os.remove('shopping_cart.json.swap')\nwhile True:\n if datetime.datetime.now().strftime('%H:%M:%S') == '00:00:00':\n credit_bill()\n print('\\n欢迎进入购物商城!\\n1 注册\\n2 登陆\\n3 购物\\n4 购物车\\n5 支付\\n6 atm')\n action = get_action()\n print(action)\n if action == '1':\n # 注册\n 
register()\n elif action == '2':\n # 登录\n login()\n elif action == '3':\n # 购物\n name = get_user_name('登陆')\n try:\n goods = get_goods_info()\n shopping()\n except SystemExit:\n pass\n elif action == '4':\n # 购物车\n name = get_user_name('登陆')\n try:\n shopping_cart_order()\n except SystemExit:\n pass\n elif action == '5':\n # 支付\n name = get_user_name('登陆')\n try:\n pay()\n except SystemExit:\n pass\n elif action == '6':\n # atm\n name = get_user_name('登陆')\n try:\n credit()\n except SystemExit:\n pass\n else:\n print('>> \\033[31m操作编码非法!\\033[0m')\n"
},
{
"alpha_fraction": 0.577338695526123,
"alphanum_fraction": 0.580426037311554,
"avg_line_length": 26.922412872314453,
"blob_id": "d410bb368d10b2be48a24718f49e333d591dfa3c",
"content_id": "3b8316d1bb3bfe95f956262dc58df67c44aaaed8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3499,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 116,
"path": "/project/elective_systems/version_v9/core/student.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import student_api\n\nCURRENT_USER = None\nROLE = 'student'\n\n\ndef login():\n global CURRENT_USER\n common.show_green('登陆')\n if CURRENT_USER:\n common.show_red('用户不能重复登录!')\n return\n while True:\n name = common.input_string('用户名')\n if name == 'q': break\n password = common.input_string('密码')\n if password == 'q': break\n flag, msg = student_api.login(name, password)\n if not flag:\n common.show_red(msg)\n continue\n CURRENT_USER = name\n common.show_green(msg)\n return\n\ndef register():\n common.show_green('注册')\n if CURRENT_USER:\n common.show_red('已登录,不能注册!')\n return\n while True:\n name = common.input_string('注册用户名')\n if name == 'q': break\n password = common.input_string('注册密码')\n if password == 'q': break\n password2 = common.input_string('确认密码')\n if password2 == 'q': break\n if password != password2:\n common.show_red('两次密码出入不一致!')\n continue\n flag, msg = student_api.register(name, password)\n if not flag:\n common.show_red(msg)\n continue\n common.show_green(msg)\n return\n\[email protected]_auth(ROLE)\ndef check_student_course():\n common.show_green('查看学生课程')\n student_course = student_api.get_student_course(CURRENT_USER)\n if not student_course:\n common.show_red('学生课程列表为空!')\n return\n common.show_info(*student_course)\n\[email protected]_auth(ROLE)\ndef check_student_score():\n common.show_green('查看学生成绩')\n student_score = student_api.get_student_score(CURRENT_USER)\n if not student_score:\n common.show_red('学生成绩为空!')\n return\n common.show_info(**student_score)\n\ndef add_course_student(course_name):\n flag, msg = student_api.add_course_student(CURRENT_USER, course_name)\n if not flag:\n common.show_red(msg)\n common.show_green(msg)\n return flag\n\[email protected]_auth(ROLE)\ndef choose_student_course():\n common.show_green('选择课程')\n while True:\n course_name = common.get_object_name(type_name='course')\n school_name = student_api.get_school_name(course_name)\n if 
not add_course_student(course_name):\n return\n flag, msg = student_api.choose_student_course(CURRENT_USER, course_name, school_name)\n if not flag:\n common.show_red(msg)\n continue\n common.show_green(msg)\n return\n\ndef logout():\n global CURRENT_USER\n common.show_green('登出')\n common.show_red('用户%s登出!' % CURRENT_USER)\n CURRENT_USER = None\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [check_student_course, '查看学生课程'],\n '4': [check_student_score, '查看学生成绩'],\n '5': [choose_student_course, '选择学生课程']\n }\n while True:\n common.show_green('按\"q\"退出视图')\n common.show_menu(menu)\n choice = common.input_string('请选择操作编号')\n if choice == 'q':\n if CURRENT_USER:\n logout()\n return\n if choice not in menu:\n common.show_red('选择编号非法!')\n continue\n menu[choice][0]()\n"
},
{
"alpha_fraction": 0.4593350291252136,
"alphanum_fraction": 0.4687979519367218,
"avg_line_length": 20.170454025268555,
"blob_id": "111e587717a4941fb7f483fe0e3cf041c117a05e",
"content_id": "9222c295c0f3072c64e96c5f491d58d53b571d8f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4336,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 176,
"path": "/weektest/test2/ATM_zhangxiangyu/core/src.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\r\n\r\nfrom interface import user,bank\r\nfrom lib import common\r\n\r\nuser_info = {\r\n 'name':None,\r\n 'is_auth':False\r\n}\r\n\r\n#登陆\r\ndef login():\r\n if user_info['is_auth']:\r\n return\r\n count = 0\r\n while True:\r\n name = input('请输入用户名或者q退出:').strip()\r\n name1 = user.get_userinfo_interface(name)\r\n if name == 'q':\r\n break\r\n if count==3:\r\n user.user_locked_interface(name)\r\n print('用户名被锁定!')\r\n break\r\n if not name1:\r\n print('用户名不存在!')\r\n continue\r\n pwd = input('请输入密码:').strip()\r\n if pwd == name1['password'] and name1['locked'] == False :\r\n user_info['name']= name\r\n user_info['is_auth']=True\r\n print('login success')\r\n break\r\n else:\r\n count+=1\r\n print('密码错误或者被锁定!')\r\n\r\n\r\n\r\n#注册\r\ndef register():\r\n\r\n if user_info['is_auth']:\r\n return\r\n while True:\r\n\r\n uname = input('请输入用户名:').strip()\r\n #判断用户名是否存在\r\n user1 = user.get_userinfo_interface(uname)\r\n # if count == 3:\r\n # continue\r\n if user1:\r\n print('用户名已存在')\r\n continue\r\n\r\n pwd1 = input('请输入密码:').strip()\r\n pwd2 = input('请确认密码:').strip()\r\n if pwd1 == pwd2:\r\n user.user_pwd_interface(uname,pwd1)\r\n print('success')\r\n break\r\n else:\r\n print('密码不一致!')\r\n\r\n\r\n\r\n\r\n\r\n\r\[email protected]_auth\r\ndef check_bank():\r\n print('checking bank')\r\n account = bank.get_account(user_info['name'])\r\n print('%s的账户余额是%s'%(user_info['name'],account))\r\n\r\n\r\[email protected]_auth\r\ndef transfer():\r\n while True:\r\n to_name = input('请输入转账目标用户:').strip()\r\n if to_name == user_info['name']:\r\n print('不可以转给自己!')\r\n continue\r\n account = input('输入转账金额:').strip()\r\n if account.isdigit():\r\n account = int(account)\r\n from_account = bank.get_account(user_info['name'])\r\n if from_account >=account:\r\n bank.transfer_interface(to_name,user_info['name'],account)\r\n print('转账成功!')\r\n break\r\n else:\r\n print('余额不足!')\r\n else:\r\n print('金额必须是数字!')\r\n\r\n\r\n\r\[email protected]_auth\r\ndef repay():\r\n while 
True:\r\n account = input('请输入还款金额:').strip()\r\n if account.isdigit():\r\n account = int(account)\r\n bank.repay_interface(user_info['name'],account)\r\n break\r\n else:\r\n print('金额必须是数字!')\r\n\r\[email protected]_auth\r\ndef withdraw():\r\n while True:\r\n\r\n account = input('请输入提现金额: q退出!').strip()\r\n if account=='q':\r\n break\r\n\r\n if account.isdigit():\r\n account = float(account)\r\n account_user = bank.get_account(user_info['name'])\r\n if account_user > account*1.05:\r\n bank.withdraw(user_info['name'],account)\r\n print('提现成功!')\r\n else:\r\n print('账户金额不足!')\r\n else:\r\n print('金额必须是数字!')\r\n\r\n\r\[email protected]_auth\r\ndef flow():\r\n user_dic = user.get_userinfo_interface(user_info['name'])\r\n for reduct in user_dic['flow_log']:\r\n print(reduct)\r\n\r\[email protected]_auth\r\ndef shop():\r\n pass\r\n\r\[email protected]_auth\r\ndef shopping_cart():\r\n pass\r\n\r\n\r\nfunc_dic = {\r\n\r\n '1':login,\r\n '2':register,\r\n '3':check_bank,\r\n '4':transfer,\r\n '5':repay,\r\n '6':withdraw,\r\n '7':flow,\r\n '8':shop,\r\n '9':shopping_cart\r\n}\r\n\r\ndef run():\r\n while True:\r\n\r\n print('''\r\n 1 登陆\r\n 2 注册\r\n 3 查看账户额度\r\n 4 转账\r\n 5 还款\r\n 6 提现\r\n 7 查看流水\r\n 8 购物车\r\n 9 查看购物车\r\n\r\n ''')\r\n choice = input('请选择功能:').strip()\r\n if choice not in func_dic:continue\r\n\r\n func_dic[choice]()\r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.5323740839958191,
"alphanum_fraction": 0.6043165326118469,
"avg_line_length": 13.88888931274414,
"blob_id": "09e3b659c373665ae5135d319ec46610c47de4b3",
"content_id": "ea2250f4472a0a2517c2594ddaee6114eeb4da98",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 143,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 9,
"path": "/project/elective_systems/version_v2/interface/common_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\n\nlogger = common.get_logger('common')\n\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5607556700706482,
"alphanum_fraction": 0.5869472026824951,
"avg_line_length": 21.06161117553711,
"blob_id": "f420e653b34edf60238367f953f1a51f4c1e60a0",
"content_id": "ab8ed8bd2bd21387d53e74472f3db06ec2cb60e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5622,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 211,
"path": "/month4/week7/python_day29/python_day29.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 守护进程\nimport time\nimport random\nfrom multiprocessing import Process\n\nclass MyProcess(Process):\n def __init__(self, name):\n super().__init__()\n self.name = name\n\n def run(self):\n print('%s is MyProcess' % self.name)\n time.sleep(random.randrange(1, 3))\n print('%s is MyProcess end' % self.name)\n\n\np = MyProcess('egon')\np.daemon = True # 一定要在p.start()前设置,设置p为守护进程,禁止p创建子进程,并且父进程代码执行结束,p即终止运行\np.start()\nprint('主')\n\n# 2. 进程同步,互斥锁\n# 互斥锁,只能acquire一次,release一次的使用,不能连续acquire\n# 互斥锁 vs join:\n# 前提:两者的原理都是一样,都是将并发变成串行,从而保证有序;\n# 区别:join 是按照认为指定的顺序执行,而互斥锁是所有进程平等地竞争,谁先抢到谁执行;\n\nimport time\nimport random\nfrom multiprocessing import Process, Lock\n\nmutex = Lock()\n\ndef task1(lock):\n lock.acquire()\n print('task1: 名字是 egon')\n time.sleep(random.randint(1, 3))\n print('task1: 性别是 male')\n time.sleep(random.randint(1, 3))\n print('task1: 年龄是 18')\n time.sleep(random.randint(1, 3))\n lock.release()\n\ndef task2(lock):\n lock.acquire()\n print('task2: 名字是 lxx')\n time.sleep(random.randint(1, 3))\n print('task2: 性别是 male')\n time.sleep(random.randint(1, 3))\n print('task2: 年龄是 30')\n time.sleep(random.randint(1, 3))\n lock.release()\n\ndef task3(lock):\n lock.acquire()\n print('task3: 名字是 alex')\n time.sleep(random.randint(1, 3))\n print('task3: 性别是 male')\n time.sleep(random.randint(1, 3))\n print('task3: 年龄是 78')\n time.sleep(random.randint(1, 3))\n lock.release()\n\nif __name__ == '__main__':\n p1 = Process(target=task1, args=(mutex,))\n p2 = Process(target=task2, args=(mutex,))\n p3 = Process(target=task3, args=(mutex,))\n\n p1.start()\n print('--->')\n # p1.join()\n p2.start()\n # p2.join()\n p3.start()\n # p3.join()\n print('===>')\n\n# 3. 
模拟抢票\nimport os\nimport json\nimport time\nimport random\nfrom multiprocessing import Process, Lock\n\nmutex = Lock()\n\ndef search():\n time.sleep(0.5)\n with open(r'data.json', 'r', encoding='utf-8') as f:\n dic = json.load(f)\n print('%s 剩余票数: %s' % (os.getpid(), dic['count']))\n\ndef get():\n with open(r'data.json', 'r', encoding='utf-8') as f:\n dic = json.load(f)\n if dic['count'] > 0:\n dic['count'] -= 1\n time.sleep(random.randint(0, 1))\n with open(r'data.json', 'w', encoding='utf-8') as f:\n json.dump(dic, f)\n print('%s 购票成功!' % os.getpid())\n\ndef task(lock):\n search()\n lock.acquire()\n get()\n lock.release()\n\nif __name__ == '__main__':\n process = []\n for p in range(10):\n p = Process(target=task, args=(mutex,))\n process.append(p)\n p.start()\n print('===>')\n\n# 4. 互斥锁\nimport os\nimport time\nfrom multiprocessing import Process, Lock, Manager\n\nmutex = Lock()\n\ndef task(dic, lock):\n lock.acquire()\n dic['num'] -= 1\n print('%s return: %s' % (os.getpid(), dic['num']))\n lock.release()\n\nif __name__ == '__main__':\n m = Manager()\n dic = m.dict({'num': 10})\n\n for i in range(10):\n p = Process(target=task, args=(dic, mutex))\n p.start()\n\n time.sleep(0.1)\n print(dic['num'])\n print('===>')\n\n# 5. 队列\n# 1)共享的空间\n# 2)是内存空间\n# 3)自动帮我们处理好锁问题\nfrom multiprocessing import Process, Queue\n\nq = Queue(3)\ntry:\n q.put('first', block=False)\n q.put({'second': None}, block=False)\n q.put('三', block=False)\n q.put(4, block=False)\nexcept Exception as e:\n print('error: %s' % e)\n\nfor i in range(10):\n print(q.get(timeout=3))\n\n# 6. 
生产者消费者模型\n# 该模型中包含两类重要的角色:\n# 1)生产者:将负责造数据的任务比喻为生产者;\n# 2)消费者:接收生产者造出的数据来做进一步的处理,该类任务被比喻成消费者;\n# 实现生产者消费者模型的三要素:\n# 生产者;\n# 消费者;\n# 队列;\n# 什么时候用该模型?\n# 程序中出现明显的两类任务,一类任务是负责生产,另外一类任务是负责处理生产的数据的时候;\n# 该模型的好处?\n# 1)实现了生产者与消费者的解耦合;\n# 2)平衡了生产力与与消费力,即生产者可以一直不停地生产,消费者可以不停的处理,\n# 因为二者不再直接沟通,而是跟队列沟通;\n\nimport time\nimport random\nfrom multiprocessing import Process, Queue\n\ndef producer(name, q, food):\n for i in range(1, 6):\n time.sleep(random.randint(1,2))\n res = '%s%s' % (food, i)\n q.put(res)\n print('\\033[32m生产者 ==> %s 生产了 %s\\033[0m' % (name, res))\n\n\ndef consumer(name, q):\n while True:\n res = q.get()\n time.sleep(random.randint(1, 3))\n print('\\033[31m消费者 ==> %s 吃了 %s\\033[0m' % (name, res))\n\nif __name__ == '__main__':\n # 共享的队列\n q = Queue()\n\n p1 = Process(target=producer, args=('egon', q, '包子'))\n p2 = Process(target=producer, args=('刘清政', q, '汉堡'))\n p3 = Process(target=producer, args=('杨军', q, '米饭'))\n\n c1 = Process(target=consumer, args=('alex', q))\n c2 = Process(target=consumer, args=('成俊华', q))\n c3 = Process(target=consumer, args=('吴晨钰', q))\n\n p1.start()\n p2.start()\n p3.start()\n c1.start()\n c2.start()\n c3.start()\n print('===>')\n\n\n\n"
},
{
"alpha_fraction": 0.49421966075897217,
"alphanum_fraction": 0.6271676421165466,
"avg_line_length": 16.25,
"blob_id": "fa0968259892dd31433b8d71e22c22936a48a07d",
"content_id": "a856f857fa3eb346a5d038ab0b42c465b9d56c30",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 378,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 20,
"path": "/project/elective_systems/version_v2/interface/student_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\n\nlogger = common.get_logger('student')\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n\ndef register():\n print('\\033[32m注册\\033[0m')\n\ndef choose_school():\n print('\\033[32m选择校区\\033[0m')\n\ndef choose_course():\n print('\\033[32m选择课程\\033[0m')\n\ndef check_score():\n print('\\033[32m查看成绩\\033[0m')\n\n"
},
{
"alpha_fraction": 0.621761679649353,
"alphanum_fraction": 0.621761679649353,
"avg_line_length": 25.571428298950195,
"blob_id": "8a00f4449dbed6b6a67eaec0c892d3b472ac3365",
"content_id": "293a06c7749a31e9b27f851c2d29b04aa175375f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 386,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 14,
"path": "/weektest/test2/ATM_tianzhiwei/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import logging.config\r\nfrom conf import settings\r\nfrom core import src\r\ndef logger(name):\r\n logging.config.dictConfig(settings.LOGGING_DIC)\r\n logger = logging.getLogger(name)\r\n return logger\r\ndef login_(fuck):\r\n def inner(*args,**kwargs):\r\n if not src.dict['state']:\r\n src.login()\r\n else:\r\n return fuck(*args,**kwargs)\r\n return inner\r\n"
},
{
"alpha_fraction": 0.5319299697875977,
"alphanum_fraction": 0.5581107139587402,
"avg_line_length": 18.831579208374023,
"blob_id": "4717937b1f49679c7894ad1e3babde8da755c62c",
"content_id": "1611a76b3cb8e98385e1fc01b0593a6bfd3985d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6189,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 285,
"path": "/month4/week7/python_day30/python_day30.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 守护进程\n# 主进程结束之后,守护进程会同步结束\nimport time\nfrom multiprocessing import Process\n\ndef foo():\n print(123)\n time.sleep(1)\n print('end123')\n\ndef bar():\n print(456)\n time.sleep(3)\n print('end456')\n\nif __name__ == '__main__':\n p1 = Process(target=foo)\n p2 = Process(target=bar)\n\n p1.daemon = True\n p1.start()\n p2.start()\n time.sleep(0.01)\n print('main')\n\n# 2. 守护进程的使用\nimport time\nimport random\nfrom multiprocessing import Process, JoinableQueue\n\ndef producer(name, q, food):\n for i in range(1, 6):\n res = '%s%s' % (food, i)\n q.put(res)\n print('\\033[32m生产者 ==> %s 生产了 %s\\033[0m' % (name, res))\n time.sleep(random.randint(1, 2))\n\ndef consumer(name, q):\n while True:\n res = q.get()\n if not res:\n break\n print('\\033[31m消费者 ==> %s 吃了 %s\\033[0m' % (name, res))\n time.sleep(random.randint(1, 3))\n q.task_done()\n\nif __name__ == '__main__':\n # 共享的队列\n q = JoinableQueue()\n\n p1 = Process(target=producer, args=('egon', q, '包子'))\n p2 = Process(target=producer, args=('刘清政', q, '汉堡'))\n p3 = Process(target=producer, args=('杨军', q, '米饭'))\n\n c1 = Process(target=consumer, args=('alex', q))\n c2 = Process(target=consumer, args=('成俊华', q))\n c3 = Process(target=consumer, args=('吴晨钰', q))\n c1.daemon = True\n c2.daemon = True\n c3.daemon = True\n\n p1.start()\n p2.start()\n p3.start()\n c1.start()\n c2.start()\n c3.start()\n\n p1.join()\n p2.join()\n p3.join()\n # 生产者生产完毕后,拿到队列中的总个数,然后直到元素总数为0,q.join()这一行代码才算运行完毕\n q.join()\n # q.join()一旦结束就意味着队列中队列确实被取空,消费者已经确确实实把数据都取干净了\n print('主进程结束!')\n\n# 3. 多线程\nimport time\nfrom multiprocessing import Process\nfrom threading import Thread\n\ndef task(name):\n print('%s is running ...' % name)\n time.sleep(3)\n\nif __name__ == '__main__':\n p = Process(target=task, args=('egon',))\n p.start()\n t = Thread(target=task, args=('lxx',))\n t.start()\n print('主线程结束!')\n\n# 4. 自定义线程类\nimport time\nfrom threading import Thread\n\nclass MyThread(Thread):\n def run(self):\n print('%s is running ...' 
% self.name)\n time.sleep(3)\n print('%s is end.' % self.name)\n\nif __name__ == '__main__':\n t = MyThread()\n t.start()\n print('主线程结束!')\n# 5. 查看线程PID,线程name,等其它方法\nimport time\nfrom threading import Thread,current_thread,active_count,enumerate\n\nx = 1000\n\ndef task():\n global x\n x = 0\n time.sleep(3)\n\nif __name__ == '__main__':\n t1 = Thread(target=task, name='egon')\n t2 = Thread(target=task,)\n t3 = Thread(target=task,)\n t1.start()\n t2.start()\n t3.start()\n print(t1.is_alive())\n print(active_count())\n print(enumerate())\n print('主线程 %s 结束!' % current_thread().name)\n\n# 6.1 守护线程\nimport time\nfrom threading import Thread, current_thread\n\ndef task():\n print('%s is running ...' % current_thread().name)\n time.sleep(3)\n print('%s is end' % current_thread().name)\n\nif __name__ == '__main__':\n t = Thread(target=task, name='第一个线程')\n t.daemon = True\n t.start()\n print('主线程结束!')\n\n# 6.2 守护线程\nimport time\nfrom threading import Thread\n\ndef foo():\n print(123)\n time.sleep(1)\n print('end123')\n\ndef bar():\n print(456)\n time.sleep(3)\n print('end456')\n\nif __name__ == '__main__':\n t1 = Thread(target=foo)\n t2 = Thread(target=bar)\n\n t1.daemon = True\n t1.start()\n t2.start()\n time.sleep(0.01)\n print('main')\n\n# 7. 线程互斥锁\nimport time\nfrom threading import Thread, Lock\n\nmutex = Lock()\n\nx = 100\n\ndef task():\n global x\n mutex.acquire()\n temp = x\n time.sleep(0.01)\n x = temp - 1\n # print(x)\n mutex.release()\n\nif __name__ == '__main__':\n start = time.time()\n threads = []\n for i in range(100):\n t = Thread(target=task,)\n threads.append(t)\n t.start()\n\n for t in threads:\n t.join()\n print('x = %s' % x)\n print('主线程!')\n print(time.time() - start)\n\n# 8. 
线程死锁\nimport time\nfrom threading import Thread, Lock\n\nmutexA = Lock()\nmutexB = Lock()\n\nclass MyThread(Thread):\n def run(self):\n self.f1()\n self.f2()\n\n def f1(self):\n mutexA.acquire()\n print('%s 拿到了A锁' % self.name)\n mutexB.acquire()\n print('%s 拿到了B锁' % self.name)\n mutexB.release()\n mutexA.release()\n\n def f2(self):\n mutexB.acquire()\n print('%s 拿到了B锁' % self.name)\n time.sleep(0.1)\n mutexA.acquire()\n print('%s 拿到了A锁' % self.name)\n mutexA.release()\n mutexB.release()\n\nif __name__ == '__main__':\n for i in range(10):\n t = MyThread()\n t.start()\n print('主线程')\n\n# 9. 递归锁\nimport time\nfrom threading import Thread, RLock\n\nmutexA = mutexB = RLock()\n\nclass MyThread(Thread):\n def run(self):\n self.f1()\n self.f2()\n\n def f1(self):\n mutexA.acquire()\n print('%s 拿到了A锁' % self.name)\n mutexB.acquire()\n print('%s 拿到了B锁' % self.name)\n mutexB.release()\n mutexA.release()\n\n def f2(self):\n mutexB.acquire()\n print('%s 拿到了B锁' % self.name)\n time.sleep(0.1)\n mutexA.acquire()\n print('%s 拿到了A锁' % self.name)\n mutexA.release()\n mutexB.release()\n\nif __name__ == '__main__':\n for i in range(10):\n t = MyThread()\n t.start()\n print('主线程')\n# 10. 信号量\nimport time\nimport random\nfrom threading import Thread,Semaphore,current_thread\n\nsm = Semaphore(5)\n\ndef go_wc():\n sm.acquire()\n print('%s 上厕所ing' % current_thread().name)\n time.sleep(random.randint(1,3))\n sm.release()\n\nif __name__ == '__main__':\n for i in range(23):\n t = Thread(target=go_wc)\n t.start()\n print('主线程结束!')\n\n"
},
{
"alpha_fraction": 0.6732673048973083,
"alphanum_fraction": 0.6732673048973083,
"avg_line_length": 22.25,
"blob_id": "d31e94919dfae9f77a70b26216dfc77b6b0d0d22",
"content_id": "5391a56da9754597d3b2afaf151f4bf67d33876c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 109,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 4,
"path": "/weektest/test2/ATM_chengjunhua/interface/bank.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 修改余额\r\nfrom db import db_handler\r\ndef update_money(user_dic):\r\n db_handler.update(user_dic)\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.5852782726287842,
"alphanum_fraction": 0.6678635478019714,
"avg_line_length": 18.05172348022461,
"blob_id": "164bc187d5582d9bed46adf905d38512161d1370",
"content_id": "12f0234ea4b041a239e87031ddf33928c46583a7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1840,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 58,
"path": "/month5/week9/python_day37/python_day37.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# 一 创建表的完整语法\n# create table 表名(\n# 字段名1 类型[(宽度) 约束条件],\n# 字段名2 类型[(宽度) 约束条件],\n# 字段名3 类型[(宽度) 约束条件]\n# );\n#\n# #解释:\n# 类型:使用限制字段必须以什么样的数据类型传值\n# 约束条件:约束条件是在类型之外添加一种额外的限制\n#\n#\n# # 注意:\n# 1. 在同一张表中,字段名是不能相同\n# 2. 宽度和约束条件可选,字段名和类型是必须的\n# 3、最后一个字段后不加逗号\n# create database db37;\n\n# 二 基本数据类型之整型:\n# 1、整型:id号,各种号码,年龄,等级\n# 2、分类:\n# tinyint,int,bigint\n\n# 3、测试:默认整型都是有符号的\n# create table t1(x tinyint);\n# insert into t1 values(128),(-129);\n\n# create table t2(x tinyint unsigned);\n# insert into t2 values(-1),(256);\n\n# create table t3(x int unsigned);\n# #4294967295\n# insert into t3 values(4294967296);\n#\n# create table t4(x int(12) unsigned);\n# insert into t4 values(4294967296123);\n#\n# 4、强调:对于整型来说,数据类型后的宽度并不是存储限制,而是显示限制\n# 所以在创建表时,如果字段采用的是整型类型,完全无需指定显示宽度,\n# 默认的显示宽度,足够显示完整当初存放的数据\n#\n# # 显示时,不够8位用0填充,如果超出8位则正常显示\n# create table t5(x int(8) unsigned zerofill);\n# insert into t5 values(4294967296123);\n# insert into t5 values(1);\n\n# 5. 浮点型\n# 作用:存储身高、体重、薪资\n# float (*****)\n# double (**)\n# decimal (**)\n\n# 测试:\n# # 相同点:\n# 1. 对于三者来说,都能存放30位小数;\n# # 不同点:\n# 1. 精度的排序从低到高:float, double, decimal;\n# 2. float与double类型能存放的整数位比decimal多;\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5047568678855896,
"alphanum_fraction": 0.5126850008964539,
"avg_line_length": 20.13483238220215,
"blob_id": "9df60783e10cdce8a14c660d728a9812da4a167a",
"content_id": "b480f0d353e5ea75d5f2466cf715781c3804db91",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1966,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 89,
"path": "/month3/week3/python_day11/ptyhon_day11.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#\n# x = 1\n#\n# def outter(name):\n# def inner():\n# print(name)\n# return inner\n#\n# foo = outter('egon')\n# bar = outter('alex')\n# foo()\n# bar()\n#\nimport time\n#\n# def index():\n# time.sleep(2)\n# print('welcome to index page!')\n#\n# def home(name):\n# time.sleep(3)\n# print('welcome %s to home page!' % name)\n#\n# def wrapper(func):\n# start_time = time.time()\n# func()\n# stop_time = time.time()\n# print('run time is %s' % (stop_time - start_time))\n#\n# wrapper(index) # 修改了原函数的调用方式\n\nimport time\n\nuser_info = {\n 'egon': {\n 'password': '123'\n }\n}\n\nlogin_list = []\n# name = input('name >>: ').strip()\n# password = input('password >>: ').strip()\nname = 'egon'\npassword = '123'\n\ndef auth(name, password):\n def wrapper(func):\n def wrapper(*args, **kwargs):\n if len(login_list) > 0:\n print('用户%s认证通过!' % name)\n res = func(*args, **kwargs)\n return res\n if name not in user_info:\n print('用户名不存在!')\n return\n if password != user_info[name]['password']:\n print('密码错误!')\n return\n print('用户%s认证通过!' % name)\n login_list.append(name)\n res = func(*args, **kwargs)\n return res\n return wrapper\n return wrapper\n\ndef timmer(func):\n def wrapper(*args, **kwargs):\n start_time = time.time()\n res = func(*args, **kwargs)\n stop_time = time.time()\n print('Run time is %s' % (stop_time - start_time))\n return res\n return wrapper\n\n@auth(name, password)\n@timmer\ndef index():\n time.sleep(1)\n print('Welcome to Index Page!')\n return 123\n@auth(name, password)\n@timmer\ndef home(name):\n time.sleep(2)\n print('Welcome %s to Home Page!' % name)\n return 'home'\n\nindex()\nprint(home('alex'))\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6475972533226013,
"alphanum_fraction": 0.6487414240837097,
"avg_line_length": 29.034482955932617,
"blob_id": "a2e36c4074615615b991342212d1a73bfcf47f89",
"content_id": "5389c60bba8c8b67aeb938396438f4f3406eef2f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 982,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 29,
"path": "/project/elective_systems/version_v8/interface/student_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import modules\n\ndef register(name, password):\n student = modules.Student.get_obj_by_name(name)\n if student:\n return False, '学生%s不能重复注册!' % name\n if modules.Student.register(name, password):\n return True, '学生%s注册成功!' % name\n else:\n return False, '学生%s注册失败!' % name\n\ndef get_score(name):\n student = modules.Student.get_obj_by_name(name)\n return student.score\n\ndef get_learn_course(name):\n student = modules.Student.get_obj_by_name(name)\n return student.course_list\n\ndef choose_course(name, course):\n student = modules.Student.get_obj_by_name(name)\n if course in student.course_list:\n return False, '学生%s不能选择已学习的课程!' % name\n if student.choose_course(course):\n return True, '学生%s选择课程%s成功!' % (name, course)\n else:\n return False, '学生%s选择课程%s失败!' % (name, course)\n\n\n\n"
},
{
"alpha_fraction": 0.6158536672592163,
"alphanum_fraction": 0.6158536672592163,
"avg_line_length": 14.600000381469727,
"blob_id": "449dc00b3098c394889f272c8ca26b4fb729c718",
"content_id": "9316273338843bb3d2e5c3c97ae1a59e6dfacdaf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 164,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 10,
"path": "/weektest/test2/ATM_chengjunhua/bin/start.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os\r\nimport sys\r\nPATH=os.path.dirname(os.path.dirname(__file__))\r\nsys.path.append(PATH)\r\n\r\n\r\nfrom core import src\r\n\r\nif __name__ == '__main__':\r\n src.run()"
},
{
"alpha_fraction": 0.46211671829223633,
"alphanum_fraction": 0.5139465928077698,
"avg_line_length": 26.329729080200195,
"blob_id": "096dd1c1857037840b04ffe7e814b9bc7b0bf222",
"content_id": "23285f26998133776d13d81825a20fcf8776ee7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5393,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 185,
"path": "/project/elective_systems/version_v8/core/admin.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import common_api, admin_api\n\nCURRENT_USER = None\nROLE = 'admin'\n\ndef login():\n global CURRENT_USER\n print('\\033[32m登陆\\033[0m')\n if CURRENT_USER:\n print('\\033[31m不能重复登录!\\033[0m')\n return\n while True:\n name = common.input_string('登陆用户名')\n if name == 'q':\n break\n password = common.input_string('登陆密码')\n if password == 'q':\n break\n flag, msg = common_api.login(name, password, ROLE)\n if flag:\n CURRENT_USER = name\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef logout():\n global CURRENT_USER\n print('\\033[31mGoodbye, %s!\\033[0m' % CURRENT_USER)\n CURRENT_USER = None\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while True:\n name = common.input_string('注册用户名')\n if name == 'q':\n break\n password = common.input_string('注册密码')\n if password == 'q':\n break\n password2 = common.input_string('确认密码')\n if password2 == 'q':\n break\n if password != password2:\n print('\\033[31m两次输入密码不一致!\\033[0m')\n continue\n flag, msg = admin_api.register(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef check_school(show=True):\n if show:\n print('\\033[32m查看学校\\033[0m')\n school_list = common.get_object_list('school')\n if not school_list:\n print('\\033[31m学校列表为空!\\033[0m')\n return\n print('-' * 30)\n for k,v in enumerate(school_list):\n print('%s %s' % (k, v))\n return school_list\n\[email protected](ROLE)\ndef check_teacher():\n print('\\033[32m查看老师\\033[0m')\n teacher_list = common.get_object_list('teacher')\n if not teacher_list:\n print('\\033[31m老师列表为空!\\033[0m')\n return\n print('-' * 30)\n for k, v in enumerate(teacher_list):\n print('%s %s' % (k, v))\n\[email protected](ROLE)\ndef check_course():\n print('\\033[32m查看课程\\033[0m')\n course_list = common.get_object_list('course')\n if not course_list:\n 
print('\\033[31m课程列表为空!\\033[0m')\n return\n print('-' * 30)\n for k, v in enumerate(course_list):\n print('%s %s' % (k, v))\n\[email protected](ROLE)\ndef create_school():\n print('\\033[32m创建学校\\033[0m')\n while True:\n name = common.input_string('学校名称')\n if name == 'q':\n break\n address = common.input_string('学校地址')\n if address == 'q':\n break\n flag, msg = admin_api.create_school(CURRENT_USER, name, address)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\n\[email protected](ROLE)\ndef create_teacher():\n print('\\033[32m创建老师\\033[0m')\n while True:\n name = common.input_string('老师名字')\n if name == 'q':\n break\n flag, msg = admin_api.create_teacher(CURRENT_USER, name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef choose_school():\n while True:\n school_list = check_school(False)\n if not school_list:\n return\n choice = common.input_integer('请为课程选择学校')\n if choice == 'q':\n return choice\n if choice < 0 or choice > len(school_list):\n print('\\033[31m学校编号非法!\\033[0m')\n continue\n choice = int(choice)\n return school_list[choice]\n\[email protected](ROLE)\ndef create_course():\n print('\\033[32m创建课程\\033[0m')\n while True:\n school_name = choose_school()\n if not school_name:\n return\n name = common.input_string('课程名称')\n if name == 'q':\n break\n price = common.input_string('课程价格')\n if price == 'q':\n break\n cycle = common.input_string('课程周期')\n if cycle == 'q':\n break\n flag, msg = admin_api.create_course(CURRENT_USER, name, price, cycle, school_name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [check_school, '查看学校'],\n '4': [check_teacher, '查看老师'],\n '5': [check_course, '查看课程'],\n '6': [create_school, '创建学校'],\n '7': [create_teacher, '创建老师'],\n '8': [create_course, '创建课程']\n }\n while True:\n 
print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = common.input_string('请选择操作编号')\n if choice == 'q':\n if CURRENT_USER:\n logout()\n return\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()"
},
{
"alpha_fraction": 0.6100917458534241,
"alphanum_fraction": 0.6192660331726074,
"avg_line_length": 12.5625,
"blob_id": "8c5169ae44537854a487bc04c51575a3d64a7d41",
"content_id": "ebe168bd0b477ed0e950305e3f1d8c5a665445cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 218,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 16,
"path": "/month4/week5/python_day19/ATM/bin/start.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#!usr/bin/env python3\n#-*- encoding: utf-8 -*-\n\nimport os\nimport sys\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\nsys.path.append(BASE_DIR)\n\n\nfrom core import src\n\n\n\nif __name__ == '__main__':\n src.run()\n\n"
},
{
"alpha_fraction": 0.46872851252555847,
"alphanum_fraction": 0.49347078800201416,
"avg_line_length": 14.655914306640625,
"blob_id": "10959cfc6e97c195287cdb24581126c1b8952da5",
"content_id": "11a7e310df72cea38993d04a525b50924cec6106",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1621,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 93,
"path": "/month4/week4/python_day13/python_day13.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 递归限制 1000\n# import sys\n# print(sys.getrecursionlimit())\n\n# def bar():\n# print('from bar')\n# foo()\n#\n# def foo():\n# print('from foo')\n# bar()\n#\n# foo()\n\n# 递归返回一个值\n# def age(n):\n# if n == 1:\n# return 18\n# return age(n-1)+2\n#\n# print(age(5))\n\n# 递归打印所有数值\n# items = [1,[2,[3,[4,[5,[6,[7,[8]]]]]]]]\n#\n# def tell(l):\n# for item in l:\n# if type(item) is not list:\n# print(item)\n# else:\n# tell(item)\n#\n# tell(items)\n\n# 匿名函数\n# def foo(x, n):\n# return x ** n\n#\n# print(foo(3, 2))\n# print(foo(3, 2))\n\n# func = lambda x, n: x ** n\n# print(func(3, 2))\n\n# reduce 方法\n# from functools import reduce\n# print(reduce(lambda x,y:x+y, range(1,101)))\n\n# 斐波那契迭代器\n# class Fibs:\n# def __init__(self):\n# self.a = 0\n# self.b = 1\n#\n# def next(self):\n# self.a, self.b = self.b, self.a + self.b\n# return self.a\n#\n# def __iter__(self):\n# return self\n#\n# f = Fibs()\n#\n# print(f.next())\n# print(f.next())\n# print(f.next())\n# print(f.next())\n# print(f.next())\n\n\n# 斐波那契递归\n# Filename : test.py\n# author by : www.runoob.com\n\n# def recur_fibo(n):\n# \"\"\"递归函数\n# 输出斐波那契数列\"\"\"\n# if n <= 1:\n# return n\n# else:\n# return (recur_fibo(n - 1) + recur_fibo(n - 2))\n#\n#\n# 获取用户输入\n# nterms = int(input(\"您要输出几项? \"))\n#\n# 检查输入的数字是否正确\n# if nterms <= 0:\n# print(\"输入正数\")\n# else:\n# print(\"斐波那契数列:\")\n# for i in range(10):\n# print(recur_fibo(i))"
},
{
"alpha_fraction": 0.541350781917572,
"alphanum_fraction": 0.5454858541488647,
"avg_line_length": 30.54347801208496,
"blob_id": "c6a3743126a50bef9aae15910a340398d0bc65d3",
"content_id": "d587b82d29454f478e452b904c478d0c8eb5392e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3172,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 92,
"path": "/homework/week4/elective_systems/core/student.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom interface import education\n\n\nCURRENT_USER = None\n\ndef register():\n print('注册')\n while True:\n name = input('用户名 >>: ').strip()\n students = education.Student.get_object('students')\n if name in students:\n print('学生%s已经注册!' % name)\n return\n password = input('密码 >>: ')\n password2 = input('确认密码 >>: ')\n if password != password2:\n print('两次密码输入不一致!')\n continue\n obj = education.Student(name, password)\n education.Student.update_object('students', '学生', obj)\n print('学生%s注册成功!' % name)\n return\n\ndef login():\n print('登陆')\n global CURRENT_USER\n while True:\n name = input('用户名 >>: ').strip()\n students = education.Student.get_object('students')\n if name not in students:\n print('学生%s未注册!' % name)\n return\n password = input('密码 >>: ')\n if password != students[name].password:\n print('密码错误!')\n continue\n CURRENT_USER = name\n print('学生%s登陆成功!' % name)\n return\n\ndef choose_classes():\n print('选择班级')\n students = education.Student.get_object('students')\n classes = education.Student.get_object('classes')\n teachers = education.Student.get_object('teachers')\n while True:\n for name in classes:\n print('班级:%s' % name)\n choice = input('请选择班级 >>: ').strip()\n if choice not in classes:\n print('班级不存在!')\n continue\n students[CURRENT_USER].school = classes[choice].school\n students[CURRENT_USER].classes = classes[choice].name\n students[CURRENT_USER].tuition = classes[choice].course.price\n classes[choice].students[students[CURRENT_USER].name] = students[CURRENT_USER]\n education.Student.update_object('classes', '班级', classes[choice])\n education.Student.update_object('students', '学生', students[CURRENT_USER])\n return\n\ndef pay_tuition():\n print('交费')\n while True:\n students = education.Student.get_object('students')\n print('学生%s应缴学费%s元!' 
% (CURRENT_USER, students[CURRENT_USER].tuition))\n        amount = input('请输入缴费金额 >>:').strip()\n        if not amount.isdigit():\n            print('金额必须是数字!')\n            continue\n        students[CURRENT_USER].tuition -= int(amount)\n        education.Student.update_object('students', '学生', students[CURRENT_USER])\n        return\n\ndef run():\n    while True:\n        menu = {\n            '1': [login, '登陆'],\n            '2': [register, '注册'],\n            '3': [choose_classes, '选择班级'],\n            '4': [pay_tuition, '交费']\n        }\n        for k, v in menu.items():\n            print('%-4s %-10s' % (k, v[1]))\n        choice = input('请选择操作编号 >>: ').strip()\n        if choice == 'quit':\n            break\n        if choice not in menu:\n            print('选择编号非法!')\n            continue\n        menu[choice][0]()\n"
},
{
"alpha_fraction": 0.536212146282196,
"alphanum_fraction": 0.5545732975006104,
"avg_line_length": 25.990825653076172,
"blob_id": "c116080e55b76c23412b735ddfbbd72215694693",
"content_id": "9a939b8b1aec0a82a8f2e757f534879e1efeee9f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3371,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 109,
"path": "/month4/week5/python_day16/python_day16_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# 4.8号作业\n# 1、复习常用模块time、datetime、random、os、sys、shutil、json、pickle、logging编写博客\n# 2、编写认证功能装饰器,同一用户输错三次密码则锁定5分钟,5分钟内无法再次登录\nimport sys\nimport datetime\n\nLOCK_USER = {}\nUSER_INFO = {\n    'name': 'egon',\n    'password': '123'\n}\n\ndef auth(func):\n    def wrapper(*args, **kwargs):\n        name = func(*args, **kwargs)\n        if name in LOCK_USER:\n            if datetime.datetime.now() < LOCK_USER[name]:\n                print('用户%s在%s内锁定,禁止登陆!' % (name, LOCK_USER[name]))\n                sys.exit()\n        print('用户%s登陆成功!' % name)\n    return wrapper\n\n@auth\ndef login():\n    i = 0\n    while True:\n        name = input('name >>: ').strip()\n        password = input('password >>: ')\n        if name != USER_INFO['name']:\n            print('用户%s不存在!' % name)\n            continue\n        if password != USER_INFO['password']:\n            print('用户%s密码错误!' % name)\n            i += 1\n            if i == 3:\n                LOCK_USER[name] = datetime.datetime.now() + datetime.timedelta(seconds=30)\n            continue\n        return name\n\nlogin()\n\n# 3、编写注册功能,用户输入用户名、性别、年龄、密码。。。还需要输入一个随机验证码,若用户在60秒内输入验证码错误\n# 则产生新的验证码要求用户重新输入,直至输入正确,则将用户输入的信息以json的形式存入文件\nimport json\nimport random\nimport datetime\n\nRANDOM_CODE = {}\n\ndef make_code(n):\n    res = ''\n    for i in range(n):\n        s1 = chr(random.randint(65, 90))\n        s2 = str(random.randint(0,9))\n        res += random.choice([s1, s2])\n    return res\n\ndef register():\n    global RANDOM_CODE\n    name = input('name >>: ').strip()\n    pwd = input('password >>: ')\n    sex = input('sex >>: ').strip()\n    age = input('age >>: ').strip()\n    user_info = {\n        'name': name,\n        'password': pwd,\n        'sex': sex,\n        'age': age\n    }\n    random_code = None\n    while True:\n        if random_code in RANDOM_CODE:\n            if datetime.datetime.now() > RANDOM_CODE[random_code]:\n                random_code = make_code(4)\n                RANDOM_CODE[random_code] = datetime.datetime.now() + datetime.timedelta(seconds=10)\n        else:\n            random_code = make_code(4)\n            RANDOM_CODE[random_code] = datetime.datetime.now() + datetime.timedelta(seconds=10)\n        print('identifying code: %s' % random_code)\n        code = input('Please input identifying code >>: ').strip()\n        if code == random_code:\n            with open(r'user.json', 'w', encoding='utf-8') as f:\n                json.dump(user_info, f)\n            print('user %s register successful!' % name)\n            return True\n\nregister()\n\n# 4、编写进度条功能\nimport time\n\ndef progress(percent, width=100):\n    if percent > 1:\n        percent = 1\n    show = ('[%%-%ds]' % width) % (int(width*percent)*'#')\n    print('%s %s%%' % (show, int(100*percent)), end='\\r')\n\ndef download():\n    recv_size = 0\n    file_size = 10000\n    while recv_size < file_size:\n        recv_size += 10\n        percent = recv_size / file_size\n        progress(percent)\n        time.sleep(0.1)\n\ndownload()\n\n# 5、明日默写:生成随机验证码功能、打印进度条功能"
},
{
"alpha_fraction": 0.5423171520233154,
"alphanum_fraction": 0.5521774888038635,
"avg_line_length": 13.876543045043945,
"blob_id": "e17c640af0e3eaa442d037a6c4933cf4e2814219",
"content_id": "48f756a59d7f9bfeec161ba17b00f148c1735fef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1589,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 81,
"path": "/month4/week6/python_day22/python_day22.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# 1. 多态性:可以在不用考虑对象具体类型的前提下,而直接使用对象下的方法;\n# 动物的多种形态:猫、狗、猪\n# 动物技能的派生:喵喵喵、汪汪汪、哼哼哼\nclass Animal:\n    def eat(self):\n        pass\n\n    def drink(self):\n        pass\n\n    def run(self):\n        pass\n\n    def bark(self):\n        pass\n\nclass Cat(Animal):\n    def bark(self):\n        print('喵喵喵')\n\nclass Dog(Animal):\n    def bark(self):\n        print('汪汪汪')\n\nclass Pig(Animal):\n    def bark(self):\n        print('哼哼哼')\n\n# 2. 抽象方法:抽象方法只需声明,而不需实现,抽象方法由抽象类的子类去具体实现;\nimport abc\n\nclass Animal(metaclass=abc.ABCMeta):\n    def eat(self):\n        pass\n\n    def drink(self):\n        pass\n\n    def run(self):\n        pass\n\n    @abc.abstractmethod\n    def bark(self):\n        pass\n\nclass Cat(Animal):\n    def bark(self):\n        print('喵喵喵')\n\n\nclass Dog(Animal):\n    def bark(self):\n        print('汪汪汪')\n\n\nclass Pig(Animal):\n    def bark(self):\n        print('哼哼哼')\n\nc = Cat()\nc.bark()\n\n# 3. 鸭子类型\nclass Foo:\n    def f1(self):\n        print('from Foo.f1 ...')\n\n    def f2(self):\n        print('from Foo.f2 ...')\n\nclass Bar(Foo):\n    def f1(self):\n        print('from Bar.f1 ...')\n\n    def f2(self):\n        print('from Bar.f2 ...')\n\n# 4. 类内部方法分类\n# 绑定给对象:self\n# 绑定给类:@classmethod cls\n# 非绑定方法:既不和类绑定,也不和类绑定,谁来用都是一个普通函数(没有自动传值的特征);\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.44905173778533936,
"alphanum_fraction": 0.48537448048591614,
"avg_line_length": 27.281818389892578,
"blob_id": "f5a1e02dfa5c1f3a6d346e255f82b571094bc70d",
"content_id": "f65c13ece4e2762b6d4684e293f82cafaf09d592",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3465,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 110,
"path": "/project/elective_systems/version_v3/core/admin.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import admin_api\n\nUSER = {'name': None}\nROLE = 'admin'\n\n\ndef login():\n    print('\\033[32m登陆\\033[0m')\n    if USER['name']:\n        print('\\033[31m已登陆,不能重复登录!\\033[0m')\n    while 1:\n        name = input('请输入登陆用户名 >>: ').strip()\n        if name == 'q': break\n        password = input('请输入登陆密码 >>: ').strip()\n        flag, msg = admin_api.login(name, password)\n        print(msg)\n        if flag:\n            USER['name'] = name\n            return\n\ndef register():\n    print('\\033[32m注册\\033[0m')\n    if USER['name']:\n        print('\\033[31m已登陆,不能注册!\\033[0m')\n    while 1:\n        name = input('请输入注册用户名 >>: ').strip()\n        if name == 'q': break\n        password = input('请输入注册密码 >>: ').strip()\n        password2 = input('请确认注册密码 >>: ').strip()\n        if password != password2:\n            print('\\033[31m两次输入密码不一致!\\033[0m')\n            continue\n        flag, msg = admin_api.register(name, password)\n        print(msg)\n        if flag:\n            return\n\[email protected](USER['name'], ROLE)\ndef create_school():\n    print('\\033[32m创建学校\\033[0m')\n    while 1:\n        name = input('请输入学校名称 >>: ').strip()\n        if name == 'q': break\n        addr = input('请输入学校地址 >>: ').strip()\n        flag, msg = admin_api.create_school(name, addr)\n        print(msg)\n        if flag:\n            return\n\[email protected](USER['name'], ROLE)\ndef create_teacher():\n    print('\\033[32m创建老师\\033[0m')\n    while 1:\n        name = input('请输入老师名字 >>: ').strip()\n        if name == 'q': break\n        flag, msg = admin_api.create_teacher(name)\n        print(msg)\n        if flag:\n            return\n\[email protected](USER['name'], ROLE)\ndef create_course():\n    print('\\033[32m创建课程\\033[0m')\n    while 1:\n        schools = admin_api.get_schools()\n        while 1:\n            print('-' * 30)\n            for k,v in enumerate(schools):\n                print('%-4s %-10s' % (k, v))\n            print('-' * 30)\n            choice = input('请选择校区编号 >>: ').strip()\n            if not choice.isdigit():\n                print('编号必须是数字!')\n                continue\n            choice = int(choice)\n            if choice < 0 or choice >= len(schools):\n                print('编号超出范围!')\n                continue\n            school_name = schools[choice]\n            break\n        name = input('请输入课程名称 >>: ').strip()\n        price = input('请输入课程价格 >>: ').strip()\n        cycle = input('请输入课程周期 >>: ').strip()\n        flag, msg = admin_api.create_course(name, price, cycle, school_name)\n        print(msg)\n        if flag:\n            return\n\ndef run():\n    menu = {\n        '1': [login, '登陆'],\n        '2': [register, '注册'],\n        '3': [create_school, '创建学校'],\n        '4': [create_teacher, '创建老师'],\n        '5': [create_course, '创建课程']\n    }\n    while 1:\n        print('=' * 30)\n        for k,v in menu.items():\n            print('%-4s %-10s' % (k, v[1]))\n        print('=' * 30)\n        choice = input('请选择功能编号[q to exit] >>: ').strip()\n        if choice == 'q': break\n        if choice not in menu:\n            print('\\033[31m选择编号非法!\\033[0m')\n            continue\n        menu[choice][0]()\n"
},
{
"alpha_fraction": 0.490172415971756,
"alphanum_fraction": 0.5394827723503113,
"avg_line_length": 30.351350784301758,
"blob_id": "b7d5a38e1b340592534fbcec8943dfd40d7e9b55",
"content_id": "36a579abc89b4146f18737455d971778ccf5fc9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6474,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 185,
"path": "/weektest/weektest2/ATM_zhanglong/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport hmac\nimport datetime\nfrom lib import common\nfrom interface import user, bank\n\nlogger = common.get_logger('app')\n\nCURRENT_USER = None\nCOOKIES = {}\n\ndef input_string(word):\n while True:\n string = input('%s >>: ' % word).strip()\n return string\n\ndef input_integer(word):\n while True:\n integer = input('%s >>: ' % word).strip()\n if not integer.isdigit():\n print('\\033[31m请输入整数数字!\\033[0m')\n continue\n return int(integer)\n\ndef get_md5_encode_api(string):\n h = hmac.new(b'shopping')\n h.update(string.encode('utf-8'))\n return h.hexdigest()\n\ndef register():\n print('\\033[33m注册\\033[0m')\n while True:\n name = input_string('请输入用户名')\n user_info = user.get_user_info_api(name)\n if user_info:\n print('用户已经注册,请直接登陆!')\n return\n password = input_string('请输入密码')\n pwd = input_string('请确认密码')\n if pwd != password:\n print('\\033[31m两次输入密码不一致!\\033[0m')\n continue\n password = get_md5_encode_api(password)\n if user.register_user_api(name, password):\n print('\\033[32m用户%s注册成功!\\033[0m' % name)\n return\n else:\n print('\\033[31m用户%s注册失败!\\033[0m' % name)\n\ndef login():\n print('\\033[33m登陆\\033[0m')\n global COOKIES, CURRENT_USER\n i = 0\n while True:\n name = input_string('请输入用户名')\n if name in COOKIES:\n print('用户%s已经登录!' 
% name)\n continue\n user_info = user.get_user_info_api(name)\n if not user_info:\n print('\\033[31m用户%s未注册,请先注册!\\033[0m' % name)\n continue\n password = input_string('请输入密码')\n password = get_md5_encode_api(password)\n if password != user_info['password']:\n print('\\033[31m用户%s密码错误!\\033[0m' % name)\n i += 1\n if i == 3:\n user_info['unlocktime'] = datetime.datetime.now() + datetime.timedelta(minutes=5)\n user.modify_user_info_api(user_info)\n i = 0\n continue\n COOKIES = {\n 'name': name,\n 'role': user_info['role']\n }\n CURRENT_USER = name\n print('\\033[32m用户%s登陆成功!\\033[0m' % name)\n return\n\ndef check_balance():\n print('\\033[33m查看余额\\033[0m')\n user_info = user.get_user_info_api(CURRENT_USER)\n balance = user_info['balance']\n credit_balance = user_info['credit_balance']\n credit_limit = user_info['credit_limit']\n print('余额信息'.center(26, '-'))\n print('''\\033[32m\n 用户名:%s\n 账户余额:%s\n 信用卡余额:%s\n 信用卡额度:%s\\033[0m\n ''' % (CURRENT_USER, balance, credit_balance, credit_limit))\n\ndef check_credit_bill():\n print('\\033[33m查看账单\\033[0m')\n user_info = user.get_user_info_api(CURRENT_USER)\n print('账单信息'.center(26, '-'))\n if user_info['bill'] == 0:\n print('\\033[32m用户%s本期账单为0元!\\033[0m' % CURRENT_USER)\n else:\n print('\\033[32m用户%s本期账单为%s元!\\033[0m' % (CURRENT_USER, user_info['bill']))\n\ndef check_detailed_list():\n print('\\033[33m查看流水\\033[0m')\n dt = input_string('请输入年月(yyyy-mm)')\n user_info = user.get_user_info_api(CURRENT_USER)\n if user_info['detailed_list']:\n print((' %s 银行流水' % dt).center(26, '='))\n for i in user_info['detailed_list']:\n if dt in i[0]:\n print('%s %s' % (i[0], i[1]))\n else:\n print('\\033[32m用户%s %s 无银行流水!\\033[0m' % (CURRENT_USER, dt))\n\ndef transfer():\n print('\\033[33m转账\\033[0m')\n while True:\n payee = input_string('请输入收款账户')\n if payee == CURRENT_USER:\n print('用户%s不能转账给自己!' % CURRENT_USER)\n continue\n user_info = user.get_user_info_api(payee)\n if not user_info:\n print('还款账户%s不存在!' 
% payee)\n continue\n amount = input_integer('请输入转账金额')\n if bank.transfer_amount_api(CURRENT_USER, payee, amount):\n print('\\033[32m用户%s转账%s到用户%s成功!\\033[0m' % (CURRENT_USER, amount, payee))\n else:\n print('\\033[31m用户%s转账%s到用户%s失败!\\033[0m' % (CURRENT_USER, amount, payee))\n return\n\ndef repayment():\n print('\\033[33m还款\\033[30m')\n print('本期账单'.center(26, '-'))\n user_info = user.get_user_info_api(CURRENT_USER)\n if user_info['bill'] == 0:\n print('\\033[32m用户%s本期账单为0元!\\033[0m' % CURRENT_USER)\n print('-' * 30)\n return\n print('\\033[32m用户%s本期账单为%s元\\033[0m' % (CURRENT_USER, user_info['bill']))\n print('-' * 30)\n amount = input_integer('请输入还款金额')\n if bank.repayment_bill_api(CURRENT_USER, amount):\n print('\\033[32m用户%s还款%s元成功!\\033[0m' % (CURRENT_USER, amount))\n print('-'*30)\n\ndef widthraw():\n print('\\033[33m取现\\033[0m')\n user_info = user.get_user_info_api(CURRENT_USER)\n amount = input_integer('请输入取现金额')\n if user_info['credit_balance'] < (amount + amount*0.05):\n print('\\033[31m用户%s取现额度不足,取现失败!\\033[0m' % CURRENT_USER)\n return\n if bank.widthraw_cash_api(CURRENT_USER, amount):\n print('\\033[32m用户%s取现%s成功!\\033[0m' % (CURRENT_USER, amount))\n\ndef run():\n while True:\n menu = {\n '1': [register, '注册'],\n '2': [login, '登录'],\n '3': [check_balance, '查看余额'],\n '4': [check_credit_bill, '查看账单'],\n '5': [check_detailed_list, '查看流水'],\n '6': [transfer, '转账'],\n '7': [repayment, '还款'],\n '8': [widthraw, '取现']\n }\n print('='*30)\n for k,v in menu.items():\n print('%-6s %-10s' % (k ,v[1]))\n print('=' * 30)\n choice = input_string('请输入操作编码')\n if choice not in menu:\n print('操作编码非法!')\n continue\n try:\n menu[choice][0]()\n except Exception as e:\n print('app error: %s' % e)\n except:\n pass\n"
},
{
"alpha_fraction": 0.4955882430076599,
"alphanum_fraction": 0.5102941393852234,
"avg_line_length": 20.935483932495117,
"blob_id": "7e7f439169ca8a014d38170e6559a9906e316eb8",
"content_id": "d8716d7fd6e615a8328e1ed38996a8d82be6b82e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 726,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 31,
"path": "/project/elective_systems/version_v1/bin/server.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- encoding: utf-8 -*-\n\nimport os\nimport sys\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nsys.path.append(BASE_DIR)\n\nfrom core import manager\nfrom core import teacher\nfrom core import student\n\ndef main():\n while True:\n menu = {\n '1': [student, '学生端'],\n '2': [teacher, '老师端'],\n '3': [manager, '管理端']\n }\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n choice = input('请选择平台编号 >>: ').strip()\n if choice not in menu:\n print('选择编号非法!')\n continue\n menu[choice][0].run()\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.6395759582519531,
"alphanum_fraction": 0.6925795078277588,
"avg_line_length": 29.66666603088379,
"blob_id": "1cf617a40b357dbac5b9cceeef88fe8321482f38",
"content_id": "8f59b1d367ea0f3678d0a8d040ce1c8313dfada6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 283,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 9,
"path": "/month4/week7/python_day27/python_day27_udp_server.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import socket\n\nserver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\nserver.bind(('127.0.0.1', 8080))\n\nwhile True:\n client_data, client_addr = server.recvfrom(1024)\n print(client_data.decode('utf-8'), client_addr)\n server.sendto(client_data.upper(), client_addr)\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6242568492889404,
"alphanum_fraction": 0.6759809851646423,
"avg_line_length": 30.716981887817383,
"blob_id": "c7a365d6aa93e1393b24fc05eb2527ddbbf64784",
"content_id": "c16cb56fffaee2ebae8dbf07d62b3751f282b33c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2294,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 53,
"path": "/month5/week9/python_day36/python_day36_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1 创建学生表:有学生 id,姓名,密码,年龄\ncreate table student(\n id int primary key auto_increment,\n name varchar(50) not null,\n password varchar(25) default '123',\n age varchar(10)\n );\n# 2 创建学校表:有学校id,学校名称,地址\ncreate table school(\n id int primary key auto_increment,\n name varchar(50) not null,\n address varchar(100) not null\n );\n# 3 创建课程表:有课程id,课程名称,课程价格,课程周期,所属校区(其实就是学校id)\ncreate table course(\n id int primary key auto_increment,\n name varchar(50) not null,\n price int not null,\n cycle varchar(50) not null,\n school_id varchar(50) not null\n );\n# 4 创建选课表:有id,学生id,课程id\ncreate table curricula_variable(\n id int primary key auto_increment,\n student_id int not null,\n course_id int not null\n );\n\n# 添加学生:张三,20岁,密码123\n# 李四,18岁,密码111\ninsert into student(name, password, age) values('张三', '123', '20'), ('李四', '111','18');\n\n# 创建学校:oldboyBeijing 地址:北京昌平\n# oldboyShanghai 地址:上海浦东\ninsert into school(name, address) values('oldboyBeijing', '北京昌平'), ('oldboyShanghai', '上海浦东');\n\n# 创建课程:Python全栈开发一期,价格2w, 周期5个月,属于上海校区\n# Linux运维一期 价格200,周期2个月,属于上海校区\n# Python全栈开发20期 ,价格2w,周期5个月,属于北京校区\ninsert into course(name, price, cycle, school_id) values('Python全栈开发一期', 20000, 5, 2), ('Linux运维一期', 200, 2, 2), ('Python全栈开发20期', 20000, 5, 1);\n\n# 张三同学选了Python全栈开发一期的课程\n# 李四同学选了Linux运维一期的课程\n# (其实就是在选课表里添加数据)\ninsert into curricula_variable(student_id, course_id) values(1, 1), (2, 2);\n# 查询:查询北京校区开了什么课程\nselect * from course where school_id=1;\n# 查询上海校区开了什么课程\nselect * from course where school_id=2;\n# 查询年龄大于19岁的人\nselect * from student where age>19;\n# 查询课程周期大于4个月的课程\nselect * from course where cycle>4;\n\n"
},
{
"alpha_fraction": 0.6196660399436951,
"alphanum_fraction": 0.6382189393043518,
"avg_line_length": 19.639999389648438,
"blob_id": "422bea2a8e731292ac4344f4c8298d7503704904",
"content_id": "0c2e41842911596b46c41c48ffb908bc4598282b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 553,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 25,
"path": "/weektest/test2/ATM_chengjunhua/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os\r\nfrom db import db_handler\r\n\r\n\r\n#查\r\ndef file(name):\r\n user_dic=db_handler.find_file(name)\r\n if user_dic:\r\n return user_dic\r\n return False\r\n\r\n\r\n#增改\r\ndef update_user(name,password,balance=15000,account=15000,lock=False):\r\n user_dic={'name':name,'password':password,'balance':balance,'account':account\r\n ,'lock':lock}\r\n db_handler.update(user_dic)\r\n\r\n\r\n\r\n#锁定用户\r\ndef lock_user_interface(name):\r\n user_dic=db_handler.find_file(name)\r\n user_dic['lock'] = True\r\n db_handler.update(user_dic)"
},
{
"alpha_fraction": 0.5291262269020081,
"alphanum_fraction": 0.5305825471878052,
"avg_line_length": 23.235294342041016,
"blob_id": "2d2c010c6e62b76899f17e1d0555b3a8785a73c0",
"content_id": "179cbbce8a20048af3cbdee4ad806a26cfa04db5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2088,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 85,
"path": "/homework/week4/elective_systems/interface/education.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8-*-\n\nfrom db.handler import Db\nfrom lib.common import Common\n\nlogger = Common.get_logger('education')\n\n\nclass Base(Db, Common):\n @classmethod\n def get_object(cls, type_name=None):\n data = cls.read()\n if type_name:\n return data[type_name]\n return data\n\n @classmethod\n def update_object(cls, type_name, desc, obj):\n data = cls.read()\n data[type_name][obj.name] = obj\n cls.write(data)\n logger.info('更新%s %s 信息成功! ' % (desc, obj.name))\n\nclass Manager(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n\n @classmethod\n def initialize_db(cls):\n data = {\n 'schools': {},\n 'classes': {},\n 'course': {},\n 'teachers': {},\n 'admin': {},\n 'students': {}\n }\n cls.write(data)\n\n @classmethod\n def remove_object(cls, type_name, desc, obj):\n data = cls.read()\n data[type_name].pop(obj.name)\n cls.write(data)\n logger.info('删除%s %s 信息成功! ' % (desc, obj.name))\n\nclass School(Base):\n def __init__(self, name, address):\n self.name = name\n self.address = address\n self.classes = []\n self.course = []\n self.teacher = []\n self.student = []\n\nclass Teacher(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.school = None\n self.classes = None\n\nclass Course(Base):\n def __init__(self, name, price, cycle):\n self.name = name\n self.price = price\n self.cycle = cycle\n\nclass Classes(Base):\n def __init__(self, name):\n self.name = name\n self.teacher = None\n self.course = None\n self.school = None\n self.students = {}\n\nclass Student(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.school = None\n self.classes = None\n self.tuition = 0\n self.score = 0\n"
},
{
"alpha_fraction": 0.42464834451675415,
"alphanum_fraction": 0.49028801918029785,
"avg_line_length": 25.93975830078125,
"blob_id": "e18f4aeda7b291b3a48fbe3f1847579b4f92dae2",
"content_id": "24c07a8e6ea770f3edcca2197fb307222b93f1e4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4893,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 166,
"path": "/project/shooping_mall/version_v4/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import user, bank\n\nUSER = None\n\ndef login():\n global USER\n print('\\033[32m登陆\\033[0m')\n while True:\n name = input('登陆用户名 >>: ').strip()\n if name == 'q': break\n password = input('登陆密码 >>: ').strip()\n flag, msg = user.login(name, password)\n if flag:\n USER = name\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while True:\n name = input('注册用户名 >>: ').strip()\n if name == 'q': break\n password = input('注册密码 >>: ').strip()\n password2 = input('确认注册密码 >>: ').strip()\n if password != password2:\n print('\\033[31m两次输入密码不一致!\\033[0m')\n continue\n flag, msg = user.register(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef check_balance():\n print('\\033[32m查看余额\\033[0m')\n balance = user.get_balance_info(USER)\n print('-' * 30)\n print(balance)\n print('-' * 30)\n\[email protected]\ndef check_bill():\n print('\\033[32m查看账单\\033[0m')\n bill = user.get_bill_info(USER)\n print('-' * 30)\n print(bill)\n print('-' * 30)\n\[email protected]\ndef check_flow():\n print('\\033[32m查看流水\\033[0m')\n bill_date = input('请输入要查询的年月 >>: ').strip()\n flow = user.get_flow_info(USER, bill_date)\n print('-' * 30)\n if not flow:\n print('用户%s在%s没有流水!' % (USER, bill_date))\n return\n for k,v in flow:\n print(k, v)\n print('-' * 30)\n\[email protected]\ndef recharge():\n print('\\033[32m充值\\033[0m')\n while True:\n amount = input('请输入充值金额 >>: ').strip()\n if amount == 'q': break\n if not amount.isdigit():\n print('\\033[31m转账金额必须是数字!\\033[0m')\n continue\n amount = int(amount)\n flag, msg = bank.recharge(USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef transfer():\n print('\\033[32m转账\\033[0m')\n while True:\n payee = input('请输入收款账户名 >>:').strip()\n if payee == 'q': break\n if not payee:\n print('\\033[31m用户名不能为空!\\033[0m')\n continue\n if payee == USER:\n print('\\033[31m用户不能给自己转账!\\033[0m')\n continue\n amount = input('请输入转账金额 >>: ').strip()\n if not amount.isdigit():\n print('\\033[31m转账金额必须是数字!\\033[0m')\n continue\n amount = int(amount)\n flag, msg = bank.transfer(USER, payee, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef withdraw():\n print('\\033[32m取现\\033[0m')\n while True:\n amount = input('请输入取现金额 >>: ').strip()\n if amount == 'q': break\n if not amount.isdigit():\n print('\\033[31m转账金额必须是数字!\\033[0m')\n continue\n amount = int(amount)\n flag, msg = bank.withdraw(USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef repayment():\n print('\\033[32m还款\\033[0m')\n while True:\n amount = input('请输入还款金额 >>: ').strip()\n if amount == 'q': break\n if not amount.isdigit():\n print('\\033[31m转账金额必须是数字!\\033[0m')\n continue\n amount = int(amount)\n flag, msg = bank.repayment(USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [check_balance, '查看余额'],\n '4': [check_bill, '查看账单'],\n '5': [check_flow, '查看流水'],\n '6': [recharge, '充值'],\n '7': [transfer, '转账'],\n '8': [withdraw, '取现'],\n '9': [repayment, '还款']\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择操作编号 >>: ').strip()\n if choice == 'q': break\n if choice not in menu:\n print('\\033[32m操作编号非法!\\033[0m')\n continue\n menu[choice][0]()\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5646450519561768,
"alphanum_fraction": 0.565115213394165,
"avg_line_length": 23.73255729675293,
"blob_id": "e0a807529b7a72b75c88c2c75f0e95bc02187020",
"content_id": "adb7cee7c36a95bb3f001319440d8d0789337b4d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2199,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 86,
"path": "/project/elective_systems/version_v7/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\n\nclass Base:\n @classmethod\n def get_obj_by_name(cls, name):\n return db_handler.select(name, cls.__name__.lower())\n\n def save(self):\n return db_handler.save(self)\n\n\nclass Admin(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n\n @classmethod\n def register(cls, name, password):\n admin = cls(name, password)\n return admin.save()\n\n def create_school(self, name, address):\n school = School(name, address)\n return school.save()\n\n def create_teacher(self, name, password):\n teacher = Teacher(name, password)\n return teacher.save()\n\n def create_course(self, name, price, cycle, school_name):\n course = Course(name, price, cycle, school_name)\n return course.save()\n\n\nclass School(Base):\n def __init__(self, name, address):\n self.name = name\n self.address = address\n self.course = []\n\n def __str__(self):\n return '校区:%s 地址:%s' % (self.name, self.address)\n\n\nclass Teacher(Base):\n def __init__(self, name, password):\n self.name = name\n self.address = password\n self.course = []\n\n def __str__(self):\n return '名字:%s 课程:%s' % (self.name, self.course)\n\n\nclass Course(Base):\n def __init__(self, name, price, cycle, school_name):\n self.name = name\n self.price = price\n self.cycle = cycle\n self.school_name = school_name\n self.student = []\n\n def __str__(self):\n return '课程:%s 价格:%s 周期:%s 校区:%s' \\\n % (self.name, self.price, self.cycle, self.school_name)\n\n\nclass Student(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.school = None\n self.course = []\n self.score = {}\n\n @classmethod\n def register(cls, name, password):\n student = cls(name, password)\n return student.save()\n\n def __str__(self):\n return '名字:%s 学校:%s 课程:%s 成绩:%s' \\\n % (self.name, self.school, self.course, self.score)\n"
},
{
"alpha_fraction": 0.5148063898086548,
"alphanum_fraction": 0.5580865740776062,
"avg_line_length": 22.88888931274414,
"blob_id": "d244ce535e3f9d22513a07add379775aad28cc21",
"content_id": "9a22405b53709f0530a20568c3a538c676a35c20",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 439,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 18,
"path": "/month4/week8/python_day32/python_day32_client.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os\nfrom socket import *\nfrom threading import Thread,current_thread\n\ndef client():\n client = socket()\n client.connect(('127.0.0.1', 8080))\n\n while True:\n data = '%s hello' % current_thread.__name__\n client.send(data.encode('utf-8'))\n res = client.recv(1024)\n print(res.decode('utf-8'))\n\nif __name__ == '__main__':\n for i in (500):\n t = Thread(target=client)\n t.start()\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.45341208577156067,
"alphanum_fraction": 0.5164042115211487,
"avg_line_length": 23.59677505493164,
"blob_id": "5a8395c5695ba24383bf90982f7a4edf204419db",
"content_id": "3c951e76d667995d0fb98b1aab83093f1468bae6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1670,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 62,
"path": "/project/elective_systems/version_v2/core/teacher.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\n\nUSER = {'name': None}\nROLE = 'teacher'\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n while True:\n name = input('用户名').strip()\n pwd = input('密码').strip()\n flag, msg = common_api.login(name, pwd, ROLE)\n print(msg)\n if flag:\n USER['name'] = name\n return\n\n\ndef register():\n print('\\033[32m注册\\033[0m')\n\[email protected](USER['name'], ROLE)\ndef check_course():\n print('\\033[32m查看教授课程\\033[0m')\n\[email protected](USER['name'], ROLE)\ndef choose_course():\n print('\\033[32m选择教授课程\\033[0m')\n\[email protected](USER['name'], ROLE)\ndef check_students():\n print('\\033[32m查看课程学员\\033[0m')\n\[email protected](USER['name'], ROLE)\ndef modify_score():\n print('\\033[32m修改学员成绩\\033[0m')\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [check_course, '查看教授课程'],\n '3': [choose_course, '选择教授课程'],\n '4': [check_students, '查看课程学员'],\n '5': [modify_score, '修改学员成绩']\n }\n while True:\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择操作编号[q to exit] >>: ').strip()\n if choice == 'q':\n print('\\033[31mLogout Success! Goodbye!\\033[0m')\n break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n try:\n menu[choice][0]()\n except Exception as e:\n print('\\033[31merror from teacher: %s\\033[0m' % e)"
},
{
"alpha_fraction": 0.3333333432674408,
"alphanum_fraction": 0.3333333432674408,
"avg_line_length": 4,
"blob_id": "e3ecb4a7feabe1da12afbc81283f32757df9ce9f",
"content_id": "cc13e51bc3d9a4ec904862f71690feb9f2c51ec5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 10,
"license_type": "no_license",
"max_line_length": 4,
"num_lines": 1,
"path": "/month3/week1/python_day1/Readme.md",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 开课 \n"
},
{
"alpha_fraction": 0.6792199015617371,
"alphanum_fraction": 0.6798924207687378,
"avg_line_length": 32.727272033691406,
"blob_id": "a3c84fa87eb500a0e55f6dc571180b381926f5fc",
"content_id": "0c1537b1ed09a590052e409cbedef1e7b98b4d1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1645,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 44,
"path": "/project/elective_systems/version_v9/interface/student_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import modules\n\ndef login(name, password):\n student = modules.Student.get_obj_by_name(name)\n if not student:\n return False, '用户%s不存在!' % name\n if password != student.password:\n return False, '密码错误!'\n return True, '用户%s登陆成功!' % name\n\ndef register(name, password):\n student = modules.Student.get_obj_by_name(name)\n if student:\n return False, '不能重复注册!'\n if not modules.Student.register(name, password):\n return False, '用户%s注册失败!' % name\n return True, '用户%s注册成功!' % name\n\n\ndef get_student_course(name):\n student = modules.Student.get_obj_by_name(name)\n return student.course_list\n\ndef get_student_score(name):\n student = modules.Student.get_obj_by_name(name)\n return student.score\n\ndef get_school_name(course_name):\n course = modules.Course.get_obj_by_name(course_name)\n return course.school_name\n\ndef add_course_student(student_name, course_name):\n course = modules.Course.get_obj_by_name(course_name)\n if not course.add_course_student(student_name):\n return False, '学生%s加入课程班级%s失败!' % (student_name, course_name)\n return True, '学生%s加入课程班级%s成功!' % (student_name, course_name)\n\ndef choose_student_course(student_name, course_name, school_name):\n student = modules.Student.get_obj_by_name(student_name)\n if not student.set_student_course(course_name, school_name):\n return False, '学生%s选择课程失败!' % student_name\n return True, '学生%s选择课程成功!' % student_name\n\n\n\n"
},
{
"alpha_fraction": 0.4358377754688263,
"alphanum_fraction": 0.491910457611084,
"avg_line_length": 25.916418075561523,
"blob_id": "fb8d162198cfa3770855d7ee718e648a4d2f4071",
"content_id": "119adb744139cece7796f15c254b4bd61f2355b1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9798,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 335,
"path": "/project/shooping_mall/version_v6/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import user, bank, shop\n\nCURRENT_USER = None\n\ndef input_string(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n print('\\033[32m输入不能是空!\\033[0m')\n continue\n return string\n\ndef input_integer(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n print('\\033[32m输入不能是空!\\033[0m')\n continue\n if string == 'q':\n return string\n if not string.isdigit():\n print('\\033[31m请输入数字!\\033[0m')\n return int(string)\n\ndef login():\n global CURRENT_USER\n print('\\033[32m登陆\\033[0m')\n if CURRENT_USER:\n print('\\033[31m用户不能重复登录!\\033[0m')\n return\n while True:\n name = input_string('用户名')\n if name == 'q':\n break\n password = input_string('密码')\n if password == 'q':\n break\n flag, msg = user.login(name, password)\n if flag:\n CURRENT_USER = name\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while True:\n name = input_string('用户名')\n if name == 'q':\n break\n password = input_string('密码')\n if password == 'q':\n break\n password2 = input_string('密码')\n if password2 == 'q':\n break\n if password != password2:\n print('\\033[31m两次输入密码不一致!\\033[0m')\n continue\n flag, msg = user.register(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef check_balance(show=True):\n if show:\n print('\\033[32m查询余额\\033[0m')\n balance_info = user.get_balance_info(CURRENT_USER)\n print('-' * 30)\n print('账户余额:%s \\n信用余额:%s \\n信用额度:%s' % balance_info)\n print('-' * 30)\n\[email protected]\ndef check_bill():\n print('\\033[32m查询账单\\033[0m')\n bill_info = user.get_bill_info(CURRENT_USER)\n print('-' * 30)\n print('您的本期账单是%s元' % bill_info)\n print('-' * 30)\n\[email protected]\ndef check_flow():\n print('\\033[32m查询银行流水\\033[0m')\n flow_info = user.get_flow_info(CURRENT_USER)\n if not flow_info:\n print('用户%s没有银行流水!' % CURRENT_USER)\n return\n print('-' * 30)\n for k,v in flow_info:\n print('%s, %s' % (k, v))\n print('-' * 30)\n\[email protected]\ndef recharge():\n print('\\033[32m充值\\033[0m')\n amount = input_integer('请输入充值金额')\n if amount == 'q':\n return\n flag, msg = bank.recharge(CURRENT_USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef transfer():\n print('\\033[32m转账\\033[0m')\n while True:\n name = input_string('请输入收款账户')\n if name == 'q':\n break\n if name == CURRENT_USER:\n print('\\033[32m用户%s不能转账给自己!\\033[0m' % name)\n continue\n amount = input_integer('请输入转账金额')\n flag, msg = bank.transfer(CURRENT_USER, name, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef withdraw():\n print('\\033[32m取现\\033[0m')\n amount = input_integer('请输入取现金额')\n if amount == 'q':\n return\n flag, msg = bank.withdraw(CURRENT_USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef repay():\n print('\\033[32m还款\\033[0m')\n check_bill()\n amount = input_integer('请输入还款金额')\n if amount == 'q':\n return\n flag, msg = bank.repay(CURRENT_USER, amount)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef check_shopping_cart(show=True):\n if show:\n print('\\033[32m查看购物车\\033[0m')\n shopping_cart_info = shop.get_shopping_cart_info(CURRENT_USER)\n if not shopping_cart_info:\n print('用户%s购物车列表为空!' % CURRENT_USER)\n return\n print('-' * 30)\n cost = 0\n for code, v in shopping_cart_info.items():\n cost += (v['price'] * v['count'])\n print('商品编码:%s 商品名称:%s 商品价格:%s 商品数量:%s' % (\n code, v['good'], v['price'], v['count']))\n print('商品总价:%s' % cost)\n check_balance(False)\n\[email protected]\ndef modify_shopping_cart():\n print('\\033[32m编辑购物车\\033[0m')\n while True:\n if not check_shopping_cart(False):\n return\n code = input_string('请输入要删除的商品编号')\n if code == 'q':\n break\n count = input_integer('请输入要删除的商品数量')\n if count == 'q':\n break\n flag, msg = shop.modify_shopping_cart(CURRENT_USER, code, count)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef get_good_info():\n good_info = shop.get_good_info()\n print('-' * 30)\n for k, v in good_info.items():\n print('%s %s %s' % (k, v['name'], v['price']))\n print('-' * 30)\n return good_info\n\[email protected]\ndef shopping():\n print('\\033[32m购物\\033[0m')\n while True:\n print('\\033[35m输入pay结账\\033[0m')\n good_info = get_good_info()\n code = input_string('请选择购买商品编号')\n if code == 'q':\n break\n if code == 'pay':\n pay()\n return\n if code not in good_info:\n print('\\033[32m购物编号非法!\\033[0m')\n continue\n good = good_info[code]['name']\n price = good_info[code]['price']\n count = input_integer('请输入购买商品数量')\n flag, msg = shop.join_shopping_cart(CURRENT_USER, code, good, price, count)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef pay():\n print('\\033[32m结账\\033[0m')\n while True:\n check_shopping_cart(False)\n confirm = input_string('是否确认结账?y/n')\n if confirm == 'q':\n break\n if confirm == 'n':\n print('\\033[32m用户%s取消结账\\033[0m' % CURRENT_USER)\n break\n if confirm == 'y':\n flag, msg = shop.pay(CURRENT_USER)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n f, m = shop.flush_shopping_cart(CURRENT_USER)\n if f:\n print('\\033[32m%s\\033[0m' % m)\n else:\n print('\\033[31m%s\\033[0m' % m)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef new_arrival():\n print('\\033[32m新品上架\\033[0m')\n get_good_info()\n while True:\n code = input_string('请输入新商品编号')\n if code == 'q':\n break\n good = input_string('请输入新商品名称')\n if good == 'q':\n break\n price = input_integer('请输入新商品价格')\n if price == 'q':\n break\n flag, msg = shop.new_arrival(code, good, price)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected]\ndef atm():\n menu = {\n '1': [check_balance, '查看余额'],\n '2': [check_bill, '查看账单'],\n '3': [check_flow, '查看银行流水'],\n '4': [recharge, '充值'],\n '5': [transfer, '转账'],\n '6': [withdraw, '取现'],\n '7': [repay, '还款']\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input_string('请选择操作编号')\n if choice == 'q':\n break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n\[email protected]\ndef mall():\n menu = {\n '1': [check_shopping_cart, '查看购物车'],\n '2': [modify_shopping_cart, '编辑购物车'],\n '3': [shopping, '购物'],\n '4': [pay, '结账']\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input_string('请选择操作编号')\n if choice == 'q':\n break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [atm, 'ATM'],\n '4': [mall, '购物商城'],\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input_string('请选择操作编号')\n if choice == 'q':\n break\n if choice == 'backdoor':\n print('\\033[35mBackdoor --> 新品上架!!\\033[0m')\n new_arrival()\n continue\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5762273669242859,
"alphanum_fraction": 0.6072351336479187,
"avg_line_length": 21.823530197143555,
"blob_id": "64308146addc3a0d387a7562ecfeed61471e89da",
"content_id": "39e1817a7e7da203b054ed122034762ce1e382db",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 487,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 17,
"path": "/weektest/weektest1/python_weektest1_zhanglong_02.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 2. 编写拷贝文件的程序,要求(10分)\n# 可以拷贝任意类型的文件\n# 在命令行中执行,命令的格式为:python3 copy.py src_file dst_file\n\nimport sys\n\nif len(sys.argv) == 3:\n src_file = sys.argv[1]\n dst_file = sys.argv[2]\nelse:\n print('请按样例正确输入参数:python3 copy.py src_file dst_file')\n sys.exit()\n\nwith open(r'%s' % src_file, 'rb') as f1, \\\n open(r'%s' % dst_file, 'wb') as f2:\n for line in f1:\n f2.write(line)"
},
{
"alpha_fraction": 0.6310160160064697,
"alphanum_fraction": 0.6363636255264282,
"avg_line_length": 25.85714340209961,
"blob_id": "6b6922851bfdbb83311fcd2da85a50bd7bbbfc35",
"content_id": "81c605a1ff8c4fe4376ec49fc191b135f3b92abd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 187,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 7,
"path": "/month4/week4/python_day15/ATM/conf/settings.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\nimport os\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nDB_FILE = '%s/db/db.txt' % BASE_DIR\n\nLOG_FILE = '%s/logs/access.log' % BASE_DIR"
},
{
"alpha_fraction": 0.5845410823822021,
"alphanum_fraction": 0.5893719792366028,
"avg_line_length": 27.714284896850586,
"blob_id": "ff7946b1a595732ce0c8b8e65c34ee2451539908",
"content_id": "30129f4950bfc006b35bd2e9a6da1a088790543e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 414,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 14,
"path": "/weektest/test2/ATM_tianzhiwei/db/db_hand.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os\r\nimport json\r\nfrom conf import settings\r\ndef select(name):\r\n path=r'%s\\%s.json'%(settings.DBSE_PATH,name)\r\n if os.path.exists(path):\r\n with open(path,'r',encoding='utf8')as f:\r\n return json.load(f)\r\n else:\r\n return False\r\ndef add(name,dic):\r\n path = r'%s\\%s.json' % (settings.DBSE_PATH, name)\r\n with open(path, 'w', encoding='utf8')as f:\r\n json.dump(dic,f)"
},
{
"alpha_fraction": 0.5881187915802002,
"alphanum_fraction": 0.592079222202301,
"avg_line_length": 26.16666603088379,
"blob_id": "90602e17302f55db3196cc2bf0baf1fb6754133e",
"content_id": "ed0c7f9359526a2f8b5a6699257eae48a6565016",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 505,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 18,
"path": "/weektest/test2/ATM_wenliuxiang/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os\r\nimport json\r\nfrom conf import setting\r\n\r\ndef update(user_dic):\r\n path_file=os.path.join(setting.BASE_DIR,'db','%s.json' %user_dic['name'])\r\n with open(path_file,'w',encoding='utf-8')as f:\r\n json.dump(user_dic,f)\r\n f.flush()\r\n\r\n\r\ndef select(name):\r\n path_file = os.path.join(setting.BASE_DIR, 'db', '%s.json' %name)\r\n if os.path.exists(path_file):\r\n with open(path_file,'r',encoding='utf-8')as f:\r\n return json.load(f)\r\n else:\r\n return False"
},
{
"alpha_fraction": 0.5456750392913818,
"alphanum_fraction": 0.5707356333732605,
"avg_line_length": 23.50494956970215,
"blob_id": "7839363deaad97c04976f83bc92c04807cb1f04d",
"content_id": "2dd7570f86ec1429f6a80a8e24435ba5f3c23962",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2658,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 101,
"path": "/month4/week4/python_day15/python_day15.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# # -*- encoding: utf-8 -*-\n#\n# import logging\n# #\n# # logging.basicConfig(\n# # filename='access.log',\n# # format='%(asctime)s %(filename)s [line:%(lineno)d]: %(levelname)s: %(message)s',\n# # datefmt='%Y-%b-%d %H:%M:%S',\n# # level=10,\n# # filemode='w'\n# # )\n# #\n# # logging.debug('检测有没有着火 ...') # 10\n# # logging.info('没有着火 ...') # 20\n# # logging.warning('可能着火 ...') # 30\n# # logging.error('着火啦快跑 ...') # 40\n# # logging.critical('火越烧越大 ...') # 50\n#\n#\n# # logger() 负责生产日志\n# logger1 = logging.getLogger('mylogger')\n#\n# # filter() 过滤日志(不常用)\n# # handler() 控制日志打印到文件或终端\n# fh1 = logging.FileHandler(filename='a1.log',encoding='utf-8')\n# fh2 = logging.FileHandler(filename='a2.log',encoding='utf-8')\n# sh = logging.StreamHandler()\n#\n# # 为 logger 对象绑定 handler\n# logger1.addHandler(fh1)\n# logger1.addHandler(fh2)\n# logger1.addHandler(sh)\n#\n# # formatter() 控制日志的格式\n# format1 = logging.Formatter(fmt='%(asctime)s %(filename)s [line:%(lineno)d]: %(levelname)s: %(message)s', datefmt='%Y-%b-%d %H:%M:%S')\n# format2 = logging.Formatter(fmt='%(asctime)s - %(message)s',)\n# # 为 handler 对象绑定日志格式\n# fh1.setFormatter(format1)\n# fh2.setFormatter(format1)\n# sh.setFormatter(format2)\n#\n# # 日志级别\n# logger1.setLevel(10)\n# fh1.setLevel(10)\n# fh2.setLevel(10)\n# sh.setLevel(10)\n#\n# logger1.debug('调试 ...')\n\n# 1. json.dumps 和 json.loads\n# import json\n#\n# user = {'name': 'egon', 'age': 18, 'sex': 'male'}\n# print(type(user), user)\n#\n# with open(r'a.txt', 'w', encoding='utf-8') as f:\n# f.write(json.dumps(user))\n#\n# with open(r'a.txt') as f:\n# user = json.loads(f.read())\n# print(type(user), user)\n\n# 2. json.dump 和 json.load\n# import json\n# user = {'name': 'egon', 'age': 18, 'sex': 'male'}\n# print(type(user), user)\n# with open(r'b.txt', 'w') as f:\n# json.dump(user, f)\n#\n# with open(r'b.txt', 'r') as f:\n# user = json.load(f)\n# print(type(user), user)\n\n# 3. pickle.dumps 和 pickle.loads\n# import pickle\n# s = {1, 2, 3, 4}\n# print(type(s), s)\n# s = pickle.dumps(s)\n# with open(r'b.txt', 'wb') as f:\n# f.write(s)\n#\n# with open(r'b.txt', 'rb') as f:\n# s = f.read()\n# s = pickle.loads(s)\n# print(type(s), s)\n\n# 4. pickle.dump 和 pickle.load\n# import pickle\n# s = {1, 2, 3, 4}\n# print(type(s), s)\n# with open(r'b.txt', 'wb') as f:\n# pickle.dump(s, f)\n#\n# with open(r'b.txt', 'rb') as f:\n# s = pickle.load(f)\n# print(type(s), s)\n\n# 增删改查小程序\n# 1. 增加\n# def add():\n# with open('db.txt', 'ab') as f:"
},
{
"alpha_fraction": 0.4893267750740051,
"alphanum_fraction": 0.5057471394538879,
"avg_line_length": 24.16666603088379,
"blob_id": "cefb49e4a886f5bdb57a6f90aca05aedcc17a273",
"content_id": "af8e98c1e104e6b2e1614231d7a6e308c698b6d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 653,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 24,
"path": "/homework/week1/python_weekend1_zhanglong_base.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding:utf-8 -*-\n\n\nuser_info = {\n 'username': 'egon',\n 'password': '123'\n}\ni = 0\nwhile 1:\n name = input('username>>: ')\n pwd = input('password>>: ')\n if name != user_info['username']:\n print('用户 %s 不存在!' % name)\n i += 1\n if name == user_info['username'] and pwd != user_info['password']:\n print('用户 %s 密码错误!' % name)\n i += 1\n if i == 3:\n print('尝试次数过多,锁定!')\n break\n if name == user_info['username'] and pwd == user_info['password']:\n print('Welcome %s, you are login successful!' % name)\n break\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5729386806488037,
"alphanum_fraction": 0.5736433863639832,
"avg_line_length": 23.894737243652344,
"blob_id": "6192dc79269aa59ec5c455975ec70261c2e97478",
"content_id": "7a5f0c66d25787a6656dba5d4db2fab15888ef58",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1419,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 57,
"path": "/project/elective_systems/version_v2/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\n\nclass Base:\n @classmethod\n def get_obj_by_name(cls, name):\n return db_handler.select(name, cls.__name__.lower())\n\n def save(self):\n db_handler.save(self)\n\nclass Admin(Base):\n def __init__(self, name, pwd):\n self.name = name\n self.pwd = pwd\n self.save()\n\n def create_school(self, school_name, school_addr):\n school = School(school_name, school_addr)\n school.save()\n\n def create_teacher(self, teacher_name, teacher_pwd):\n teacher = Teacher(teacher_name, teacher_pwd)\n teacher.save()\n\n def create_course(self, course_name, course_price, course_cycle):\n course = Course(course_name, course_price, course_cycle)\n course.save()\n\nclass School(Base):\n def __init__(self, name, addr):\n self.name = name\n self.addr = addr\n self.course_name_list = []\n\nclass Teacher(Base):\n def __init__(self, name, pwd):\n self.name = name\n self.pwd = pwd\n self.course_list = []\n\nclass Course(Base):\n def __init__(self, name, price, cycle):\n self.name = name\n self.price = price\n self.cycle = cycle\n self.student_name_list = []\n\nclass Student(Base):\n def __init__(self, name, pwd):\n self.name = name\n self.pwd = pwd\n self.school = None\n self.course_list = []\n self.scores = {}\n"
},
{
"alpha_fraction": 0.46786388754844666,
"alphanum_fraction": 0.5,
"avg_line_length": 31.46875,
"blob_id": "14ba354fd6a8e2784a9e4b9a2a14cb61932aef93",
"content_id": "96dbf8dc5d96974a77c09d9a7fbcfde504a1531b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1160,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 32,
"path": "/month4/week6/python_day25/python_day25_server.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport socket\n\n\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # 获取空的TCP套接字对象\ns.bind(('127.0.0.1', 8080)) # 绑定端口\ns.listen(5) # 启动监听\nwhile True:\n conn, client_addr = s.accept() # 等待连接\n print('客户端: ', client_addr)\n conn.send('欢迎访问服务器!'.encode('utf-8'))\n while True:\n try:\n print('接收消息中 ..')\n msg = conn.recv(1024) # 接收消息 1024 字节最大限制\n print('\\033[31m新消息: %s\\033[0m' % msg.decode('utf-8'))\n if msg.decode('utf-8') == 'quit':\n conn.send('Goodbye!'.encode('utf-8'))\n data = input('>>>>: ').strip()\n conn.send(data.encode('utf-8'))\n except ConnectionAbortedError as e:\n print('ConnectionAbortedError: %s' % e)\n break\n except ConnectionResetError as e:\n print('ConnectionResetError: %s' % e)\n break\n except Exception as e:\n print('Exception: %s' % e)\n break\n conn.close()\ns.close()\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.4207509458065033,
"alphanum_fraction": 0.4496990442276001,
"avg_line_length": 23.145328521728516,
"blob_id": "4b83f6c800aaa43c11f4c194e48f1bad8e845413",
"content_id": "2b0d03d5fbf52da3c16dd6c4503d7fc977a4dafe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8550,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 289,
"path": "/month3/week2/python_day3/python_day3_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "1. python test.py执行的三个阶段是什么?在哪个阶段识别文件内的python语法?\n 第一阶段:启动 python 解释器;\n 第二阶段:将 python 程序的代码读入内存;\n 第三阶段:python 解释器,解释执行内存中读入的 python 程序代码;(在执行前先检查语法)\n\n2. 将下述两个变量的值交换\n s1='alex'\n s2='SB'\n s1, s2 = s2, s1\n\n3. 判断下述结果\n msg1='alex say my name is alex,my age is 73,my sex is female'\n msg2='alex say my name is alex,my age is 73,my sex is female'\n msg1 is msg2\n msg1 == msg2\n\n 如果两个结果都是 True,则是 Python 做了优化;\n 如果第一个结果是 False,则是 Python 没有优化;\n\n4. 什么是常量?在python中如何定义常量?\n 程序运行过程中不允许改变的量称为常量。Python 中用全部大写字母作为常量的名称,但 python 中没有真正的常量。\n\n5. 有存放用户信息的列表如下,分别存放用户的名字、年龄、公司信息\n userinfo={\n 'name':'egon',\n 'age':18,\n 'company_info':{\n 'cname':'oldboy',\n 'addr':{\n 'country':'China',\n 'city':'Shanghai',\n }\n }\n\n }\n a. 要求取出该用户公司所在的城市?\n city = userinfo['company_info']['addr']['city']\n\n students=[\n {'name':'alex','age':38,'hobbies':['play','sleep']},\n {'name':'egon','age':18,'hobbies':['read','sleep']},\n {'name':'wupeiqi','age':58,'hobbies':['music','read','sleep']},\n ]\n 取第二个学生的第二个爱好?\n second_hobby = students[1]['hobbies'][1]\n\n6. students=[\n {'name':'egon','age':18,'sex':'male'},\n {'name':'alex','age':38,'sex':'fmale'},\n {'name':'wxx','age':48,'sex':'male'},\n {'name':'yuanhao','age':58,'sex':'fmale'},\n {'name':'liwenzhou','age':68,'sex':'male'},\n]\n 要求循环打印所有学生的详细信息,格式如下\n <name:egon age:18 sex:male>\n <name:alex age:38 sex:fmale>\n <name:wxx age:48 sex:male>\n <name:yuanhao age:58 sex:fmale>\n <name:liwenzhou age:68 sex:male>\n\n\n students=[\n {'name':'egon','age':18,'sex':'male'},\n {'name':'alex','age':38,'sex':'fmale'},\n {'name':'wxx','age':48,'sex':'male'},\n {'name':'yuanhao','age':58,'sex':'fmale'},\n {'name':'liwenzhou','age':68,'sex':'male'},\n ]\n for std in students:\n s = '<name:%s age:%s sex:%s>'\n print(s % (std['name'], std['age'], std['sex']))\n\n7. 编写程序,根据用户输入内容打印其权限:\n '''\n egon --> 超级管理员\n tom --> 普通管理员\n jack,rain --> 业务主管\n 其他 --> 普通用户\n '''\n\n user_info = {\n 'egon': '超级管理员',\n 'tom': '普通管理员',\n 'jack': '业务主管',\n 'rain': '业务主管'\n }\n while True:\n name = input('username>>: ')\n if name in user_info:\n print('用户 %s 权限:%s' % (name, user_info[name]))\n else:\n print('用户 %s 权限:普通用户' % name)\n\n8. 编写程序,实现如下功能:\n 如果:今天是Monday,那么:上班\n 如果:今天是Tuesday,那么:上班\n 如果:今天是Wednesday,那么:上班\n 如果:今天是Thursday,那么:上班\n 如果:今天是Friday,那么:上班\n 如果:今天是Saturday,那么:出去浪\n 如果:今天是Sunday,那么:出去浪\n\n plan = {\n '上班': [ 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday'],\n '出去浪':['Saturday', 'Sunday']\n }\n while True:\n today = input('What day is today>>: ')\n for k,v in plan.items():\n if today in v:\n print('今天是 %s, %s。' % (today, k))\n\n9. while循环练习:\n #1. 使用while循环输出1 2 3 4 5 6 8 9 10\n #2. 求1-100的所有数的和\n #3. 输出 1-100 内的所有奇数\n #4. 输出 1-100 内的所有偶数\n #5. 求1-2+3-4+5 ... 99的所有数的和\n #6. 用户登陆(三次机会重试)\n #7:猜年龄游戏\n 要求:\n 允许用户最多尝试3次,3次都没猜对的话,就直接退出,如果猜对了,打印恭喜信息并退出\n #8:猜年龄游戏升级版\n 要求:\n 允许用户最多尝试3次\n 每尝试3次后,如果还没猜对,就问用户是否还想继续玩,如果回答Y或y, 就继续让其猜3次,以此往复,如果回答N或n,就退出程序\n 如何猜对了,就直接退出\n\n 1)\n i = 1\n while i <= 10:\n print(i)\n i += 1\n\n 2)\n sum = 0\n i = 1\n while i <= 100:\n sum += i\n i += 1\n print('sum is %s' % sum)\n\n 3)\n i = 1\n while i <= 100:\n n = i%2\n if n == 1:\n print('奇数 %s' % i)\n i += 1\n\n 4)\n i = 1\n while i <= 100:\n n = i % 2\n if n == 0:\n print('偶数 %s' % i)\n i += 1\n\n 5)\n sum = 0\n i = 1\n while i <= 100:\n n = i % 2\n if n == 1:\n sum += i\n if n == 0:\n sum -= i\n i += 1\n print('sum is %s' % sum)\n\n 6)\n user_info = {\n 'egon': '123'\n }\n i = 0\n while True:\n name = input('username>>: ')\n pwd = input('password>>: ')\n if name not in user_info:\n print('用户登录名错误!')\n i += 1\n else:\n if pwd != user_info[name]:\n print('用户密码错误')\n i += 1\n if i == 3:\n print('Try too many times!')\n break\n if name in user_info and pwd == user_info[name]:\n print('login successful!')\n break\n\n 7)\n age = 18\n i = 0\n while True:\n inp_age = input(\"egon's age is ?>>: \")\n inp_age = int(inp_age)\n if inp_age > age:\n print('猜大了!')\n i += 1\n if inp_age < age:\n print('猜小了')\n i += 1\n if i == 3:\n print('Try too many times!')\n break\n if inp_age == age:\n print('恭喜你猜对了 egon 的年龄!')\n break\n\n 8)\n age = 18\n i = 0\n while True:\n inp_age = input(\"egon's age is ?>>: \")\n inp_age = int(inp_age)\n if inp_age > age:\n print('猜大了!')\n i += 1\n if inp_age < age:\n print('猜小了')\n i += 1\n if i == 3:\n print('Do you want to continue playing?')\n action = input(\"Please input 'Y/y' or 'N/n'>>: \")\n if action == 'Y' or action == 'y':\n i = 0\n if action == 'N' or action == 'n':\n break\n if inp_age == age:\n print('恭喜你猜对了 egon 的年龄!')\n break\n10. 编写计算器程序,要求\n #1、用户输入quit则退出程序\n #2、程序运行,让用户选择具体的计算操作是加法or乘法or除法。。。然后输入数字进行运算\n #3、简单示范如下,可以在这基础上进行改进\n while True:\n msg='''\n 1 加法\n 2 减法\n 3 乘法\n 4 除法\n '''\n print(msg)\n choice = input('>>: ').strip()\n num1 = input('输入第一个数字:').strip()\n num2 = input('输入第二个数字:').strip()\n if choice == '1':\n res=int(num1)+int(num2)\n print('%s+%s=%s' %(num1,num2,res))\n\n\n choose = '''1 加法 2 减法 3 乘法 4 除法 5 保留小数的除法 6 取余\n Please choose an opration >>: '''\n\n def check_number(n):\n if int(n):\n return (int(n))\n elif int(n):\n return (int(n))\n\n while True:\n o = input(choose)\n if o not in ['1', '2', '3', '4', '5', '6']:\n print('Please select the operation number correctly!')\n continue\n else:\n o = int(o)\n nums = input('Please enter two numbers >>: ')\n x, y = nums.split()\n x = check_number(x)\n y = check_number(y)\n if not x or not y:\n print('Please enter integers or floating-point numbers!')\n continue\n if o == 1:\n r = x + y\n if o == 2:\n r = x - y\n if o == 3:\n r = x * y\n if o == 4:\n r = x // y\n if o == 5:\n r = x / y\n if o == 6:\n r = x % y\n print('result: %s' % r)\n"
},
{
"alpha_fraction": 0.6406779885292053,
"alphanum_fraction": 0.6576271057128906,
"avg_line_length": 31.88888931274414,
"blob_id": "18bfd56e5f38cd6ab6dbebddcb3bbaacce936b5c",
"content_id": "abe18fae003a0affcc630a71916b2b2c2d7e44c9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 301,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 9,
"path": "/weektest/test2/ATM_wangjieyu/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from db import db_hander\n\ndef select_t(name):\n return db_hander.select(name)\n\ndef update_t(name,passwd,account=15000):\n user_dic = {'name': name, 'passwd': passwd, 'locked': False, 'account': account, 'liushui':[]}\n db_hander.update(user_dic)\n # logger_user.info('%s 注册了' % name)"
},
{
"alpha_fraction": 0.5927690863609314,
"alphanum_fraction": 0.597533643245697,
"avg_line_length": 26.875,
"blob_id": "128f3c283988a19554e68f41f89639c008b32ab1",
"content_id": "8837a44034d61812f930a00868f297be80623402",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3568,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 128,
"path": "/project/youku/version_v1/youkuServer/server/tcpServer.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport time\nimport socket\nimport struct\nimport json\nfrom conf import settings\nfrom lib import common\nfrom interface import admin_api, user_api\nfrom concurrent.futures import ThreadPoolExecutor\n\nserver_pool = ThreadPoolExecutor(50)\n\n\nclass TcpServer:\n def __init__(self):\n self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n self.server.bind(settings.server_address)\n self.server.listen(settings.backlog)\n except Exception as e:\n raise Exception(e)\n\n def __del__(self):\n self.server.close()\n\n\ndef done_load(res):\n res = res.result()\n common.show_green(res)\n\n\ndef send_vidoe(self, data):\n file_path = os.path.join(settings.video_path, data['file_name'])\n data_len_bytes = struct.pack('i', len(data['file_size']))\n self.client.send(data_len_bytes)\n with open(r'%s' % file_path, 'rb') as f:\n for line in f:\n self.client.send(line)\n return 'Send video %s complete!' % data['file_name']\n\n\ndef receive_video(self, conn, data, buffer_size=1024):\n data_len_bytes = self.client.recv(4)\n data_length = struct.unpack('i', data_len_bytes)[0]\n file_path = os.path.join(settings.video_path, data['file_name'])\n with open(r'%s' % file_path, 'ab') as f:\n recv_size = 0\n while recv_size < data_length:\n last_size = data_length - recv_size\n if last_size < buffer_size:\n buffer_size = last_size\n recv_data = self.client.recv(buffer_size)\n recv_size += len(recv_data)\n f.write(recv_data)\n data = {\n 'message': 'Receive video %s complete!' % data['file_name']\n }\n conn.send(conn, data)\n return data['message']\n\n\ndef recv_data(conn):\n data_len_bytes = conn.recv(4)\n data_length = struct.unpack('i', data_len_bytes)[0]\n\n data_bytes = conn.recv(data_length)\n data = json.loads(data_bytes.decode('utf-8'))\n\n if data['is_file']:\n server_pool.submit(receive_video, conn, data).add_done_callback(done_load)\n return data\n\n\ndef send_data(conn, data):\n data_bytes = json.dumps(data).encode('utf-8')\n data_len_bytes = struct.pack('i', len(data_bytes))\n\n conn.send(data_len_bytes)\n conn.send(data_bytes)\n\n if data['is_file']:\n server_pool.submit(send_vidoe, data).add_done_callback(done_load)\n\n\ndef dispatch(params):\n admin_func = {\n 'login': admin_api.login,\n 'register': admin_api.register,\n 'release_announcement': admin_api.release_announcement,\n 'upload_video': admin_api.upload_video,\n 'remove_video': admin_api.remove_video\n }\n user_func = {\n 'login': user_api.login,\n 'register': user_api.register\n }\n if params['role'] == 'admin':\n result = admin_func[params['api']](params)\n return result\n if params['role'] == 'user':\n result = user_func[params['api']](params)\n return result\n\n\ndef handle(conn):\n while True:\n try:\n params = recv_data(conn)\n if not params:\n conn.close()\n return\n result = dispatch(params)\n send_data(result)\n except Exception as e:\n print('Exception: %s' % e)\n conn.close()\n return \n\n\ndef run(poll_interval=0.05):\n sock = TcpServer()\n while True:\n conn, client_address = sock.server.accept()\n print('connect from: %s' % client_address)\n server_pool.submit(handle, conn)\n time.sleep(poll_interval)\n"
},
{
"alpha_fraction": 0.3835991621017456,
"alphanum_fraction": 0.3935878872871399,
"avg_line_length": 30.353534698486328,
"blob_id": "1a18b06f1962ce176443b8d5febdb754a035dd12",
"content_id": "fd35390d9e12e3c6bcf531e610beb1f89ff4ea47",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6835,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 198,
"path": "/weektest/weektest1/python_weektest1_zhanglong_04.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4. 编写购物车程序,实现注册,登陆,购物,查看功能,数据基于文件存取(30分)\n\nimport os\n\ngoods = {\n '1': {\n 'name': 'mac',\n 'price': 20000\n },\n '2': {\n 'name': 'lenovo',\n 'price': 10000\n },\n '3': {\n 'name': 'apple',\n 'price': 200\n },\n '4': {\n 'name': 'tesla',\n 'price': 100000\n }\n}\nconfig = 'users.txt'\nwith open(r'%s' % config, 'a') as f:\n pass\n\nshopping_cart = {}\ncookies = []\nuser_balance = {}\ntag = True\nlogout = 'quit'\nwhile tag:\n users = {}\n with open(r'%s' % config) as f:\n for u in f:\n if u:\n u = u.strip('\\n').split('|')\n n, p, b = u\n users[n] = {'password': p, 'balance': int(b)}\n print('1 注册\\n2 登陆\\n3 购物\\n4 购物车')\n action = input('请选择操作: ').strip()\n if action == logout:\n tag = False\n continue\n elif action == '1':\n # 注册\n while tag:\n name = input('请输入用户名 >>: ').strip()\n if name == logout:\n tag = False\n continue\n if name in users:\n print('用户已经注册,请直接登陆!')\n break\n password = input('请输入密码 >>: ')\n if password == logout:\n tag = False\n continue\n while tag:\n balance = input('请输入充值金额 >>: ').strip()\n if balance == logout:\n tag = False\n continue\n if not balance.isdigit():\n balance = int(balance)\n break\n print('请输入金额的整数!')\n if balance == logout:\n continue\n with open(r'%s' % config, 'a') as f:\n user = '%s|%s|%s\\n' % (name, password, balance)\n f.write(user)\n print('用户%s注册成功!' % name)\n break\n elif action == '2':\n # 登陆\n while tag:\n name = input('请输入用户名 >>: ').strip()\n if name == logout:\n tag = False\n continue\n if name not in users:\n print('用户不存在!')\n continue\n if name in cookies:\n print('用户%s已经是登录状态!' % name)\n break\n password = input('请输入密码 >>: ')\n if password == logout:\n tag = False\n continue\n if password != users[name]['password']:\n print('密码错误!')\n continue\n if name in users and password == users[name]['password']:\n cookies.append(name)\n print('用户%s登陆成功!' % name)\n break\n elif action == '3':\n # 购物\n name = input('请输入用户名 >>: ').strip()\n if name == logout:\n tag = False\n continue\n if name not in cookies:\n print('请登陆后再进行购物!')\n continue\n balance = users[name]['balance']\n while tag:\n print('='*30)\n print('商品编号 商品名称 商品价格')\n for k,v in goods.items():\n print('%-10s %-10s %-10s' % (k, v['name'], v['price']))\n print('='*30)\n code = input('请选择商品编码 [结账:bill] >>: ').strip()\n if code == logout:\n tag = False\n continue\n if code == 'bill':\n print('请进入购物车结账!')\n break\n if code not in goods:\n print('商品编号非法!')\n continue\n while tag:\n count = input('请输入购买数量 >>: ').strip()\n if count == logout:\n tag = False\n continue\n if count.isdigit():\n count = int(count)\n break\n print('请输入数量的整数!')\n good = goods[code]['name']\n price = goods[code]['price']\n cost = price * count\n if balance >= cost:\n balance -= cost\n if name not in shopping_cart:\n shopping_cart[name] = {}\n if good not in shopping_cart[name]:\n shopping_cart[name][good] = {\n 'code': code,\n 'price': price,\n 'count': count\n }\n else:\n shopping_cart[name][good] = {\n 'code': code,\n 'price': price,\n 'count': shopping_cart[name][good]['count'] + count\n }\n print('购物车: %s \\n账户余额: %s' % (shopping_cart[name], balance))\n else:\n diff = cost - balance\n print('账户余额不足!商品 %s x %s 还需 %s 才能购买!' % (good, count, diff))\n user_balance[name] = balance\n elif action == '4':\n # 购物车\n name = input('请输入用户名 >>: ').strip()\n if name == logout:\n tag = False\n continue\n if name not in cookies:\n print('请登陆后再查看购物车!')\n continue\n cost = 0\n print('='*50)\n print('商品名称 商品编号 商品价格 商品数量\\n')\n for k,v in shopping_cart[name].items():\n good, code, price, count = k, v['code'], v['price'], v['count']\n print('%-10s %-10s %-10s %-10s' % (good, code, price, count))\n cost += (price * count)\n print('\\n商品总价: %s' % cost)\n print('账户余额: %s' % user_balance[name])\n print('='*50)\n buy = input('确认购买?y/n >>: ').strip()\n if buy == logout or buy == 'n':\n print('取消购物!')\n tag = False\n continue\n elif buy == 'y':\n with open(r'%s' % config) as f1, \\\n open(r'%s.swap' % config, 'w') as f2:\n for line in f1:\n if name in line:\n line = line.split('|')\n line[-1] = '%s\\n' % user_balance[name]\n line = '|'.join(line)\n f2.write(line)\n os.remove(config)\n os.rename('%s.swap' % config, config)\n print('购物成功,请耐心等待发货!')\n shopping_cart = {}\n else:\n print('输入操作非法!')\n else:\n print('操作编号非法!')"
},
{
"alpha_fraction": 0.4719036817550659,
"alphanum_fraction": 0.4793578088283539,
"avg_line_length": 25.82978630065918,
"blob_id": "81eab7d38b540a8fdf14870451c1a80571372a2c",
"content_id": "94fc712efdb4e40de8f979d478f6956e76b6fb77",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5946,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 188,
"path": "/weektest/test2/ATM_chengjunhua/core/src.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from interface import user\r\nfrom lib import common\r\nfrom interface import bank\r\nimport time\r\n\r\n\r\nlogger1=common.get_logger('ATM')\r\n\r\nusers={'name':None,\r\n 'status':False}\r\n\r\n\r\n\r\n\r\n# print('注册')\r\ndef register():\r\n if users['status']:\r\n print('您已登陆!')\r\n return\r\n while True:\r\n name=input('请输入用户名>>:').strip()\r\n if user.file(name):\r\n print('该用户已注册!')\r\n choice = input('退出请输入q>>: ').strip()\r\n if choice == 'q': return\r\n continue\r\n pwd1=input('请输入密码>>: ').strip()\r\n pwd2=input('请再次输入密码>>:').strip()\r\n if pwd1 != pwd2 :\r\n print('两次密码不一致,请重新输入')\r\n continue\r\n user.update_user(name,pwd1)\r\n print('注册成功!')\r\n break\r\n\r\n# print('登陆')\r\ndef login():\r\n while True:\r\n if users['status']:\r\n print('您已登陆,无需重复登陆!')\r\n return\r\n name=input('请输入用户名>>: ').strip()\r\n pwd=input('请输入用户密码>>: ').strip()\r\n user_dic = user.file(name)\r\n if not user_dic:\r\n print('该用户不存在')\r\n continue\r\n if user_dic['lock']:\r\n print('该用户已锁定')\r\n choice = input('退出请输入q>>: ').strip()\r\n if choice=='q':break\r\n continue\r\n if pwd == user_dic['password']:\r\n print('登陆成功!')\r\n users['name']=name\r\n users['status']=True\r\n return\r\n count=1\r\n while True:\r\n if count>=3:\r\n print('用户已锁定')\r\n user.lock_user_interface(name)\r\n return\r\n count+=1\r\n print('密码不正确,请重新输入,%s次后将锁定!'%(3-count))\r\n pwd = input('请输入用户密码>>: ').strip()\r\n if pwd == user_dic['password']:\r\n print('登陆成功!')\r\n users['name'] = name\r\n users['status'] = True\r\n return\r\n\r\n# print('查看余额')\r\[email protected]_auth\r\ndef look_money():\r\n user_dic = user.file(users['name'])\r\n print('''\r\n 尊敬的:%s\r\n 您的余额为:%s\r\n 您的信用额度还剩:%s'''%(user_dic['name'],user_dic['balance'],user_dic['account']))\r\n choice = input('退出请输入q>>: ').strip()\r\n if choice == 'q':return\r\n\r\n# print('转账')\r\[email protected]_auth\r\ndef transfer_accounts():\r\n while True:\r\n user_self = user.file(users['name'])\r\n side_name=input('请输入收款账号>>: ').strip()\r\n user_side=user.file(side_name)\r\n if not user_side:\r\n print('该用户不存在!')\r\n continue\r\n if side_name==users['name']:\r\n print('不能转给自己!')\r\n continue\r\n money=input('请输入转账金额>>: ').strip()\r\n if not money.isdigit():\r\n print('钱必须是数字!')\r\n continue\r\n money=int(money)\r\n if user_self['balance'] < money:\r\n print('傻叉钱你没那么多钱!')\r\n continue\r\n user_self['balance']-=money\r\n user_side['balance']+=money\r\n bank.update_money(user_self)\r\n bank.update_money(user_side)\r\n debug=('%s向%s转账%s成功!'%(user_self['name'],user_side['name'],money))\r\n logger1.debug(debug)\r\n choice = input('退出请输入q>>: ').strip()\r\n if choice == 'q': return\r\n\r\n# print('还款')\r\[email protected]_auth\r\ndef repayment():\r\n while True:\r\n user_self=user.file(users['name'])\r\n account=15000-user_self['account']\r\n print('您本期需要还款的金额为:%s'%account)\r\n money=input('请输入还款金额: ').strip()\r\n if not money.isdigit():\r\n print('钱必须是数字!')\r\n continue\r\n money = int(money)\r\n if user_self['balance'] < money:\r\n print('傻叉钱你没那么多钱!')\r\n continue\r\n user_self['balance']-=money\r\n user_self['account']+=money\r\n bank.update_money(user_self)\r\n debug=('%s还款%s,当前信用可用额度为:%s'%(user_self['name'],money,user_self['account']))\r\n logger1.debug(debug)\r\n choice = input('退出请输入q>>: ').strip()\r\n if choice == 'q': return\r\n\r\n\r\n# print('取款')\r\[email protected]_auth\r\ndef draw_money():\r\n while True:\r\n money=input('请输入取款金额: ').strip()\r\n user_self = user.file(users['name'])\r\n if not money.isdigit():\r\n print('钱必须是数字!')\r\n continue\r\n money = int(money)\r\n if user_self['account'] < money:\r\n print('傻叉钱你没那么多额度了!')\r\n continue\r\n money1=(money*0.05)\r\n money2=money-money1\r\n user_self['account'] -= money\r\n user_self['balance'] += money2\r\n bank.update_money(user_self)\r\n debug = ('%s提现:%s,当前信用可用额度为:%s 手续费:%s' % (user_self['name'],money2,user_self['account'],money1))\r\n logger1.debug(debug)\r\n choice = input('退出请输入q>>: ').strip()\r\n if choice == 'q': return\r\n\r\n\r\ndef illegality():\r\n print('非法输入!')\r\n\r\n\r\n\r\ndic={'1':register,\r\n '2':login,\r\n '3':look_money,\r\n '4':transfer_accounts,\r\n '5':repayment,\r\n '6':draw_money,\r\n }\r\n\r\ndef run():\r\n while True:\r\n print('''\r\n 1、注册\r\n 2、登陆\r\n 3、查看余额\r\n 4、转账\r\n 5、还款\r\n 6、取款\r\n ''')\r\n choice=input('输入序号选择功能,q退出>>: ').strip()\r\n if choice=='q':break\r\n function=dic[choice] if choice in dic else illegality\r\n function()\r\n"
},
{
"alpha_fraction": 0.43015632033348083,
"alphanum_fraction": 0.4382249116897583,
"avg_line_length": 26.901409149169922,
"blob_id": "8d8c04795b613b8a669117acb0360481bd0652cf",
"content_id": "534aea36cced7fef52974aa85e2690ad130e2f3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2603,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 71,
"path": "/month3/week2/python_day6/python_day6_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 3-21作业:\n# 1、什么是字符编码?\n# 字符编码,也称字集码,是把字符集中的字符编码为指定集合中某一对象(例如:比特模式),以便文本在计算机中存储和通过通信网络的传递。\n\n# 2、保证不乱码的核心法则是?\n# 保证不乱码的核心法则,就是字符按什么标准编码的,就必须按照什么标准解码;\n\n# 3、循环读取文件内容\n# 第一种:\n# with open('a.txt', 'r', encoding='utf-8') as f:\n# for line in f:\n# print(line)\n# 第二种:\n# with open('a.txt', 'r', encoding='utf-8') as f:\n# for line in f.readlines():\n# print(line)\n\n# 4、编写用户注册程序,\n# 用户选择注册功能则:\n# 将用户输入用户名、性别、年龄等信息存放于文件中\n# 用户选择查看功能:\n# 则将用户的详细信息打印出来\n\n# config = 'db.txt'\n# while 1:\n# print('\\n1 注册\\n2 查看\\n')\n# action = input('选择操作 >>: ').strip()\n# if action == '1':\n# print('输入注册信息!')\n# r_n = input('用户名 >>: ').strip()\n# r_p = input('密码 >>: ')\n# r_i = input('手机 >>: ')\n# r_s = input('性别 >>: ')\n# r_a = input('年龄 >>: ')\n# user = '%s %s %s %s %s' % (r_n, r_p, r_i, r_s, r_a)\n# with open(config, 'a') as f:\n# f.write('%s\\n' % user)\n# print('%s 注册成功!' % r_n)\n# elif action == '2':\n# phone = input('请输入手机号查询 >>: ').strip()\n# with open(config, 'r') as f:\n# for u in f:\n# u = u.split()\n# if phone in u:\n# n, p, i, x, a = u\n# print('用户名: %s\\n手机号: %s\\n性别: %s\\n年龄: %s' % (n, i, x, a))\n# else:\n# print('输入编号非法!')\n\n# 5、编写用户认证接口,其中用户的账号密码是存放文件中的。\n\n# config = 'db.txt'\n# while 1:\n# users = {}\n# with open(config, 'r') as f:\n# for u in f:\n# u = u.split()\n# if u:\n# n, p, i, x, a = u\n# d = {n: {'password': p, 'phone': i, 'sex': x, 'age': a}}\n# users.update(d)\n# name = input('用户名 >>: ').strip()\n# pwd = input('密码 >>: ')\n# if name not in users:\n# print('用户名错误!')\n# continue\n# if pwd != users[name]['password']:\n# print('密码错误!')\n# continue\n# if name in users and pwd == users[name]['password']:\n# print('登陆成功!')\n\n\n"
},
{
"alpha_fraction": 0.6223278045654297,
"alphanum_fraction": 0.6223278045654297,
"avg_line_length": 22.27777862548828,
"blob_id": "c7b77a400bad916c1615b195288c0fe96bdd8b37",
"content_id": "80010a02f283fe505ff64fa356715bc868e462ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 429,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 18,
"path": "/weektest/test2/ATM_wangjieyu/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import logging.config\nfrom core import src\nfrom conf import setting\n\ndef login_auth(func):\n def wrapper(*args, **kwargs):\n if not src.panduan['is_auth']:\n print('没有登录')\n src.login()\n else:\n return func(*args, **kwargs)\n\n return wrapper\n\ndef get_logger(name):\n logging.config.dictConfig(setting.LOGGING_DIC)\n logger = logging.getLogger(name)\n return logger\n\n\n"
},
{
"alpha_fraction": 0.48616600036621094,
"alphanum_fraction": 0.4940711557865143,
"avg_line_length": 21.909090042114258,
"blob_id": "6d827d7f64915c151d4a6dc66f4fbc791f3d72ed",
"content_id": "1b8e0ba0a9ae077cf29d71f9821c0532ff278d06",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 556,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 22,
"path": "/project/youku/version_v1/youkuClient/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom core import admin, user\n\n\ndef run():\n menu = {\n '1': [admin, '管理端'],\n '2': [user, '用户端']\n }\n while True:\n common.show_red('按\"e\"结束程序')\n common.show_menu(menu)\n choice = common.input_string('请输入平台编号')\n if choice == 'e':\n common.show_red('Goodbye!')\n return\n if choice not in menu:\n common.show_red('选择编号非法!')\n continue\n menu[choice][0].run()\n\n\n"
},
{
"alpha_fraction": 0.46483591198921204,
"alphanum_fraction": 0.47867828607559204,
"avg_line_length": 23.559999465942383,
"blob_id": "ec240f49c7778edb1737288e50685e8df7f31917",
"content_id": "01bb9e65d1fc96be7926501f18f9f6ba9127247a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4783,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 175,
"path": "/weektest/test2/ATM_wenliuxiang/core/src.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "\r\n\r\nfrom interface import user\r\nfrom lib import common\r\nfrom interface import bank\r\nuser_data={\r\n 'name':None,\r\n 'is_auth':False\r\n}\r\n\r\ndef login():\r\n if user_data['is_auth']:\r\n print('user is to')\r\n return\r\n count = 0\r\n while True:\r\n name = input('账号>>').strip()\r\n user_dic=user.get_userinfo_interfacen(name)\r\n if count ==3:\r\n user.lock_user_interface(name)\r\n print('账号已经锁定')\r\n break\r\n if user_dic:\r\n password = input('密码>>:').strip()\r\n if password == user_dic['password'] and not user_dic['locked']:\r\n print('登录成功')\r\n user_data['name']=name\r\n user_data['is_auth']=True\r\n break\r\n else:\r\n count+=1\r\n print('密码错误')\r\n break\r\n else:\r\n print('user is not or lock ')\r\n continue\r\n\r\n\r\ndef register():\r\n if user_data['is_auth']:\r\n print('user is ')\r\n return\r\n while True:\r\n name = input('用户名').strip()\r\n if user.get_userinfo_interfacen(name):\r\n print('用户已存在')\r\n break\r\n else:\r\n password = input('请输入密码').strip()\r\n conf_password = input('请再次确认密码').strip()\r\n if password == conf_password:\r\n user.register(name,password)\r\n print('注册成功!!!!!')\r\n break\r\n else:\r\n print('两次密码不一致')\r\n continue\r\n\r\n\r\[email protected]_auth\r\ndef check_balance():\r\n account=bank.get_account(user_data['name'])\r\n print('你的余额还剩下%s' % account)\r\n\r\n\r\[email protected]_auth\r\ndef transfer():\r\n while True:\r\n to_user=input(print(\"\\33[32;0m请输入转账的账号>>:\\33[0m\".center(40, \"-\"))).strip()\r\n if to_user==user_data['name']:\r\n print('不能转到自己的账户')\r\n continue\r\n if 'q'==to_user:break\r\n to_user_dic=user.get_userinfo_interfacen(to_user)\r\n if to_user_dic:\r\n transfer_account=input('转账金额').strip()\r\n if transfer_account.isdigit():\r\n transfer_account=int(transfer_account)\r\n user_account=bank.get_account(user_data['name'])\r\n if user_account>=transfer_account:\r\n bank.transfer_interface(user_data['name'],to_user,transfer_account)\r\n break\r\n else:\r\n print('account not enough')\r\n continue\r\n else:\r\n print('must input number')\r\n continue\r\n else:\r\n print('user not exist')\r\n continue\r\n\r\n\r\n\r\[email protected]_auth\r\ndef repay():\r\n while True:\r\n account=input('请输入还款金额(按q选择退出)').strip()\r\n if 'q'==account:break\r\n if account.isdigit():\r\n account=int(account)\r\n bank.repay_interface(user_data['name'],account)\r\n break\r\n else:\r\n print('must')\r\n continue\r\n\r\n\r\[email protected]_auth\r\ndef withdraw():\r\n while True:\r\n account=input(print(\"\\33[32;0m取款\\33[0m\".center(40, \"-\"))).strip()\r\n if 'q'==account:break\r\n if account.isdigit():\r\n user_account=bank.get_account(user_data['name'])\r\n account=int(account)\r\n if user_account>=account*1.05:\r\n bank.withdraw_interface(user_data['name'],account)\r\n print('取款成功')\r\n break\r\n else:\r\n print('余额不足')\r\n else:\r\n print('只能输入数字')\r\n continue\r\n\r\n\r\n\r\[email protected]_auth\r\ndef check_recorse():\r\n bankflow=bank.check_bankflow_interfac(user_data['name'])\r\n for record in bankflow:\r\n print(record)\r\n\r\n\r\[email protected]_auth\r\ndef shopping():\r\n pass\r\n\r\n\r\[email protected]_auth\r\ndef shopping_cart():\r\n pass\r\n\r\n\r\n\r\nfunc_dic = {\r\n '1':login,\r\n '2':register,\r\n '3':check_balance,\r\n '4':transfer,\r\n '5':repay,\r\n '6':withdraw,\r\n '7':check_recorse,\r\n '8':shopping,\r\n '9':shopping_cart,\r\n}\r\n\r\ndef run():\r\n while True:\r\n print(\"\"\"----------\\33[35;1m欢迎来到美国银行ATM\r\n 1、登录\r\n 2、注册\r\n 3、查看余额\r\n 4、转账\r\n 5、还款\r\n 6、取款\r\n 7、查看流水\r\n 8、购物\r\n 9、查看购买商品\r\n \\33[0m----Welcome to the ATM-----------\r\n \"\"\")\r\n\r\n choice = input((\"\\33[32;0m请输入编号>>:\\33[0m\").center(40, \"-\")).strip()\r\n if choice not in func_dic:continue\r\n func_dic[choice]()\r\n\r\n"
},
{
"alpha_fraction": 0.5462532043457031,
"alphanum_fraction": 0.5524547696113586,
"avg_line_length": 25.458904266357422,
"blob_id": "2a68a0b09da209d651b4c1dfa27bb040440445cd",
"content_id": "02214a9ee28652fb52314de0491f0c698656db80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4602,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 146,
"path": "/month4/week6/python_day23/python_day23_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4-17日作业\n# 1、判断一个对象是否属于str类型,判断一个类是否是另外一个类的子类\n# s = 'hello'\n# if isinstance(s, str):\n# print('对象 \"%s\" 是 \"str\" 类型!' % s)\n# else:\n# print('对象 \"%s\" 不是 \"str\" 类型!' % s)\n#\n# class Foo:\n# pass\n#\n# class Bar(Foo):\n# pass\n#\n# if issubclass(Bar, Foo):\n# print('类 \"%s\" 是 \"%s\" 的子类!' % ('Bar', 'Foo'))\n# else:\n# print('类 \"%s\" 不是 \"%s\" 的子类!' % ('Bar', 'Foo'))\n\n# 2、有俩程序员,一个lili,一个是egon,lili在写程序的时候需要用到egon所写的类(放到了另外一个文件中),但是egon去跟女朋友度蜜月去了,还没有完成他写的类,\n# class FtpClient:\n# \"\"\"\n# ftp客户端,但是还么有实现具体的功能\n# \"\"\"\n# def __init__(self,addr):\n# print('正在连接服务器[%s]' %addr)\n# self.addr=addr\n#\n# 此处应该完成一个get功能\n# lili想到了反射,使用了反射机制lili可以继续完成自己的代码,等egon度蜜月回来后再继续完成类的定义并且去实现lili想要的功能。\n\n\n# 3、定义一个老师类,定制打印对象的格式为‘<name:egon age:18 sex:male>’\n# class Teacher:\n# def __init__(self, name, age, sex):\n# self.name = name\n# self.age = age\n# self.sex = sex\n#\n# def __str__(self):\n# return '<name: %s age: %s sex: %s>' % (self.name, self.age, self.sex)\n#\n# t = Teacher('egon', 18, 'male')\n# print(t)\n\n# 4、定义一个自己的open类,控制文件的读或写,在对象被删除时自动回收系统资源\n# class MyOpen:\n# def __init__(self, file_path, mode='r', encoding='utf-8'):\n# self.file_path = file_path\n# self.mode = mode\n# self.encoding = encoding\n# self.file = open(file_path, mode=mode, encoding=encoding)\n#\n# def __del__(self):\n# self.file.close()\n#\n# f = MyOpen(r'settings.py')\n# data = f.file.read()\n# print(data)\n# del f\n\n# 5、自定义元类,把自定义类的数据属性都变成大写,必须有文档注释,类名的首字母必须大写\n# class Mymeta(type):\n# def __init__(self, class_name, class_bases, class_dic):\n# if class_name[0].islower():\n# raise TypeError('类名的首字母必须是大写!')\n# if not class_dic.get('__doc__'):\n# raise TypeError('类必须有文档注释!')\n# d = {}\n# for k,v in class_dic.items():\n# if not hasattr(v, '__call__') and not (k.startswith('__') and k.endswith('__')):\n# d[k] = v\n# for k in d:\n# class_dic.pop(k)\n# class_dic[k.upper()] = d[k]\n# self.class_name = class_name\n# self.class_bases = class_bases\n# self.class_dic = class_dic\n#\n# # 
class_dic = {'__doc__': 'Foo class'}\n# # Foo = Mymeta('Foo', (object, ), class_dic)\n#\n# class Foo(object,metaclass=Mymeta):\n# '''\n# Foo class\n# '''\n# x = 1\n# def __init__(self):\n# pass\n#\n# def __del__(self):\n# pass\n\n# 6、用三种方法实现单例模式,参考答案:http://www.cnblogs.com/linhaifeng/articles/8029564.html#_label5\n# 第一种方式:类方法\nimport settings\n\nclass Mysql:\n __instance = None\n\n def __init__(self, host, port):\n self.host = host\n self.port = port\n\n @classmethod\n def singleton(cls):\n if not cls.__instance:\n cls.__instance = cls(settings.HOST, settings.PORT)\n return cls.__instance\n\n# 第二种方式:元类\nimport settings\nclass Mymeta(type):\n def __init__(self, class_name, class_bases, class_dict):\n self.__instance = object.__new__(self)\n self.__init__(self.__instance, settings.HOST, settings.PORT)\n super().__init__(class_name, class_bases, class_dict)\n\n def __call__(self, *args, **kwargs):\n if args or kwargs:\n obj = object.__new__(self)\n self.__init__(obj, settings.HOST, settings.PORT)\n return obj\n return self.__instance\n\nclass Mysql(metaclass=Mymeta):\n def __init__(self, host, port):\n self.host = host\n self.port = port\n\n# 第三种方式:装饰器\nimport settings\n\ndef singleton(cls):\n __instance = cls(settings.HOST, settings.PORT)\n def wrapper(*args, **kwargs):\n if args or kwargs:\n return cls(settings.HOST, settings.PORT)\n return __instance\n return wrapper\n\n@singleton\nclass Mysql:\n def __init__(self, host, port):\n self.host = host\n self.port = port\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6074073910713196,
"alphanum_fraction": 0.614814817905426,
"avg_line_length": 32.75,
"blob_id": "3d71db9daa8b17da7be8bf3538c9d284d31e45e2",
"content_id": "cb221d82bac254113628109b479753ce1635e1b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 135,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 4,
"path": "/month4/week5/python_day19/ATM/conf/settings.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\nUSER_CONFIG = os.path.join(BASE_DIR, 'db', '%s.json')\n"
},
{
"alpha_fraction": 0.4866666793823242,
"alphanum_fraction": 0.5116666555404663,
"avg_line_length": 17.5625,
"blob_id": "6f4e4f2582c1ec6ee57b0c1cebc8285587ef3412",
"content_id": "47aab4d335fa25bad42a8a523427f3653687c322",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 604,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 32,
"path": "/month4/week8/python_day32/python_day32_server.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from socket import *\nfrom gevent import monkey;monkey.patch_all()\n\ndef talk(conn):\n while True:\n try:\n data = conn.recv(1024)\n if not data: break\n print(data)\n conn.send(data.upper())\n except ConnectionResetError:\n break\n conn.close()\n\n\ndef server(ip, port, backlog=5):\n s = socket()\n s.bind((ip, port))\n s.listen(backlog)\n\n while True:\n conn, addr = s.accept()\n print(addr)\n\n # 通信\n g = spawn(talk, conn)\n\n\n s.close()\n\nif __name__ == '__main__':\n server('127.0.0.1', 8080)\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5131719708442688,
"alphanum_fraction": 0.5316838622093201,
"avg_line_length": 24.29729652404785,
"blob_id": "4a610fefa1b8f91f6b6f6aad7349bcf718c4bc52",
"content_id": "96104f8f69e01d1424edaf437fe8befe7aef4f67",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6772,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 222,
"path": "/month4/week5/python_day19/python_day19_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n# 1、定义学校类,实例化出:北京校区、上海校区两个对象\n# 校区独有的特征有:\n# 校区名 = 'xxx'\n# 校区地址 = {'city': \"所在市\", 'district': '所在的区'}\n# 多们课程名 = ['xxx', 'yyy', 'zzz']\n# 多个班级名 = ['xxx', 'yyy', 'zzz']\n#\n# 校区可以:\n# 1、创建班级\n# 2、查看本校区开设的所有班级名\n# 3、创建课程\n# 4、查看本校区开设的所有课程名\nclass School:\n def __init__(self, name, city, district):\n self.name = name\n self.city = city\n self.district = district\n self.classes = []\n self.course = []\n\n def get_school_info(self):\n print('''\n 校区:%s\n 城市:%s\n 分区:%s\n ''' % (self.name, self.city, self.district))\n\n def get_classes_info(self):\n print('%s 校区的所有班级 %s' % (self.name, self.classes))\n\n def get_course_info(self):\n print('%s 校区的所有班级 %s' % (self.name, self.course))\n\n def add_classes_api(self, classes):\n self.classes.append(classes)\n print('增加新班级 \"%s\" 成功!' % classes)\n\n def add_course_api(self, course):\n self.course.append(course)\n print('增加新课程 \"%s\" 成功!' % course)\n\nprint('-'*50)\nprint('\\n\\033[31m第一题:\\033[0m')\ns = School('老男孩上海', '上海市', '浦东新区')\ns.get_school_info()\ns.add_classes_api('上海一期班')\ns.add_course_api('Python全栈课程')\ns.get_classes_info()\ns.get_course_info()\n\ns = School('老男孩北京', '北京市', '朝阳区')\ns.get_school_info()\ns.add_classes_api('北京二十一期班')\ns.add_course_api('Python全栈课程')\ns.get_classes_info()\ns.get_course_info()\nprint('-'*50)\n\n# 2、定义出班级类,实例化出两个班级对象\n# 班级对象独有的特征:\n# 班级名 ='xxx'\n# 所属校区名 ='xxx'\n# 多门课程名 = ['xxx', 'yyy', 'zzz']\n# 多个讲师名 = ['xxx', 'xxx', 'xxx']\n#\n# 班级可以:\n# 1、查看本班所有的课程\n# 2、查看本班的任课老师姓名\nclass Classes:\n def __init__(self, school, name, course, teacher):\n self.school = school\n self.name = name\n self.course = course\n self.teacher = teacher\n\n def get_course(self):\n print('%s 的所有课程 %s' % (self.name, self.course))\n\n def get_teacher(self):\n print('%s 的所有老师 %s' % (self.name, self.teacher))\n\nprint('\\n\\033[31m第二题:\\033[0m')\nc1 = Classes('老男孩上海', '上海一期班', ['Python全栈课程', 'Linux运维'], ['Egon'])\nc1.get_course()\nc1.get_teacher()\nprint('-'*50)\n\n# 
3、定义课程类,实例化出python、linux、go三门课程对象\n# 课程对象独有的特征:\n# 课程名 =‘xxx’\n# 周期 =‘3months’\n# 价格 = 3000\n#\n# 课程对象可以:\n# 1、查看课程的详细信息\nclass Course:\n def __init__(self, name, cycle, price):\n self.name = name\n self.cycle = cycle\n self.price = price\n\n def get_course_info(self):\n print('%s 课程的周期是 %s,费用是 %s 元。' % (self.name, self.cycle, self.price))\n\nprint('\\n\\033[31m第三题:\\033[0m')\ncourse = Course('Python全栈', '5 months', 19800)\ncourse.get_course_info()\nprint('-'*50)\n\n# 4、定义学生类,实例化出张铁蛋、王三炮两名学生对象\n# 学生对象独有的特征:\n# 学号 = 10\n# 名字 =”xxx“\n# 班级名 = ['xxx', 'yyy']\n# 分数 = 33\n#\n# 学生可以:\n# 1、选择班级\n# 3、注册,将对象序列化到文件\nprint('\\n\\033[31m第四题:\\033[0m')\nimport json\n\nclass Student:\n def __init__(self, school, name, student_id, classes, course, score):\n self.school = school\n self.name = name\n self.student_id = student_id\n self.classes = classes\n self.course = course\n self.score = score\n\n def get_student_info(self):\n print('''\n 学校:%s\n 名字:%s\n 学号:%s\n 班级:%s\n 课程:%s\n 成绩:%s\n ''' % (self.school,\n self.name,\n self.student_id,\n self.classes,\n self.course,\n self.score\n ))\n\n def choose_classes(self, classes):\n self.classes = classes\n print('学生\"%s\"转班到\"%s\"成功!' % (self.name, self.classes))\n\n def register_student(self):\n student_info = {\n 'school': self.school,\n 'name': self.name,\n 'student_id': self.student_id,\n 'classes': self.classes,\n 'score': self.score\n }\n self._file_handler_write(student_info)\n print('学生%s注册成功!' 
% self.name)\n\n def _file_handler_write(self, student_info):\n with open(r'%s.json' % student_info['name'], 'w') as f:\n json.dump(student_info, f)\n\nstu1 = Student('老男孩上海', 'zane', '0521', '一期班', 'Python全栈周末', '90')\nstu1.get_student_info()\nstu1.choose_classes('Python全栈脱产')\nstu1.register_student()\nprint('-'*50)\n\n# 5、定义讲师类,实例化出egon,lqz,alex,wxx四名老师对象\n# 老师对象独有的特征:\n# 名字 =“xxx”\n# 等级 =“xxx”\n#\n# 老师可以:\n# 1、修改学生的成绩\nprint('\\n\\033[31m第五题:\\033[0m')\nimport json\n\nclass Teacher:\n def __init__(self, school, name, classes, course, grade):\n self.school = school\n self.name = name\n self.classes = classes\n self.course = course\n self.grade = grade\n\n def get_teacher_info(self):\n print('''\n 校区:%s\n 名字:%s\n 班级:%s\n 课程:%s\n 职级:%s\n ''' % (self.school,\n self.name,\n self.classes,\n self.course,\n self.grade))\n\n def modify_student_score(self, student, score):\n student_info = self._file_handler_read(student.name)\n student_info['score'] = score\n self._file_handler_write(student_info)\n print('\"%s\"老师修改学生\"%s\"的成绩\"%s\"成功!' % (self.name, student.name, score))\n\n def _file_handler_read(self, name):\n with open(r'%s.json' % name) as f:\n return json.load(f)\n\n def _file_handler_write(self, student_info):\n with open(r'%s.json' % student_info['name'], 'w') as f:\n json.dump(student_info, f)\n\nt = Teacher('老男孩上海', 'Egon', '一期班', 'Python全栈', '特级教师')\nt.get_teacher_info()\nt.modify_student_score(stu1, 95)\nprint('-'*50)\n\n\n"
},
{
"alpha_fraction": 0.5505050420761108,
"alphanum_fraction": 0.6161616444587708,
"avg_line_length": 20.55555534362793,
"blob_id": "bd9a4c324728053f8e43d8a105f35d2b685a6c04",
"content_id": "0a159b076c08583fcf8fd7c37d4087edc9319b13",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 198,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 9,
"path": "/project/youku/version_v1/youkuServer/conf/settings.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\n\nserver_address = ('127.0.0.1', 8080)\nbacklog = 20\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\nvideo_path = os.path.join(BASE_DIR, 'db', 'video')\n\n\n\n\n"
},
{
"alpha_fraction": 0.6849088072776794,
"alphanum_fraction": 0.6849088072776794,
"avg_line_length": 33.4571418762207,
"blob_id": "d71d6320fb35ce62a5f64c9b03bcf63b2ce963b3",
"content_id": "054f60d47ed8b6f773cbbc00a17328f168d2993d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1334,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 35,
"path": "/project/elective_systems/version_v9/interface/teacher_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "\nfrom db import modules\n\n\ndef login(name, password):\n teacher = modules.Teacher.get_obj_by_name(name)\n if not teacher:\n return False, '用户%s不存在!' % name\n if password != teacher.password:\n return False, '密码错误!'\n return True, '用户%s登陆成功!' % name\n\ndef get_teach_course(teacher_name):\n teacher = modules.Teacher.get_obj_by_name(teacher_name)\n return teacher.course_list\n\n\ndef get_teach_course_student(course_name):\n course = modules.Course.get_obj_by_name(course_name)\n return course.student_list\n\n\ndef choose_teach_course(teacher_name, course_name):\n teacher = modules.Teacher.get_obj_by_name(teacher_name)\n if not teacher.set_up_course(course_name):\n return False, '%s选择教授课程%s失败!' % (teacher_name, course_name)\n return True, '%s选择教授课程%s成功!' % (teacher_name, course_name)\n\n\ndef set_student_score(teacher_name, student_name, course, score):\n student = modules.Student.get_obj_by_name(student_name)\n if not student:\n return False, '学生%s不存在!' % student_name\n if not student.set_student_score(course, score):\n return False, '%s修改学生课程%s成绩失败!' % (teacher_name, student_name)\n return True, '%s修改学生课程%s成绩成功!' % (teacher_name, student_name)"
},
{
"alpha_fraction": 0.3992125988006592,
"alphanum_fraction": 0.4440944790840149,
"avg_line_length": 18.461538314819336,
"blob_id": "23e2fdec95cd5d98377b38357e734a1f24b3a513",
"content_id": "2e888f0c2795e38a7c7b05ccc54d6a1cc23d8bfc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1278,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 65,
"path": "/month3/week3/python_day9/python_day9.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# # == == == == == =*args == == == == == =\n# print('*agrs')\n# def foo(x, y, *args):\n# print(x, y)\n# print(args)\n# foo(1, 2, 3, 4, 5)\n#\n# def foo(x, y, *args):\n# print(x, y)\n# print(args)\n# foo(1, 2, *[3, 4, 5])\n# foo(1, 2, *'hello')\n# foo(1, 2, *(3, 4, 5))\n#\n# def foo(x, y, z):\n# print(x, y, z)\n# foo(*[1, 2, 3])\n\n# # == == == == == = ** kwargs == == == == == =\n# print('**kwargs')\n# def foo(x, y, **kwargs):\n# print(x, y)\n# print(kwargs)\n# foo(1, y=2, a=1, b=2, c=3)\n#\n# def foo(x, y, **kwargs):\n# print(x, y)\n# print(kwargs)\n# foo(1, y=2, **{'a': 1, 'b': 2, 'c': 3})\n#\n# def foo(x, y, z):\n# print(x, y, z)\n# foo(**{'z': 1, 'x': 2, 'y': 3})\n\n\n# def foo(x, y, *args):\n# print(x, y)\n# print(args)\n#\n# foo(1,2,3,4,5)\n#\n# def bar(x, y, **kwargs):\n# print(x, y)\n# print(kwargs)\n#\n# bar(1,2,a=1,b=2,c=3)\n#\n# def both(x, y, *args, **kwargs):\n# print(x, y)\n# print(args)\n# print(kwargs)\n#\n# both(1,2,3,4,5,a=1,b=2,c=3)\n\n\n# # 组合使用\n# def index(name, age, gender):\n# print('name: %s age: %s gender: %s' % (name, age, gender))\n#\n# def wrapper(*args, **kwargs):\n# index(*args, **kwargs)\n#\n# wrapper('egon',18,'male')\n# wrapper('egon',age=18,gender='male')\n# wrapper(name='egon',age=18,gender='male')\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5654008388519287,
"alphanum_fraction": 0.5669351816177368,
"avg_line_length": 31.97468376159668,
"blob_id": "5a334eebfb617592a7073df5665253104629a58b",
"content_id": "061193e4600da89af8404c7e87befbb26ca15ce5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2829,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 79,
"path": "/project/shooping_mall/version_v6/interface/shop.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\ndef get_good_info(good_file='goods'):\n good_info = db_handler.read(good_file)\n good_info.pop('name')\n return good_info\n\ndef get_shopping_cart_info(name):\n info = db_handler.read(name)\n return info['shopping_cart']\n\ndef join_shopping_cart(name, code, good, price, count):\n info = db_handler.read(name)\n if good in info['shopping_cart']:\n info['shopping_cart'][code]['count'] += count\n else:\n info['shopping_cart'][code] = {\n 'good': good,\n 'price': price,\n 'count': count\n }\n if db_handler.write(info):\n return True, '商品 %s x %s 加入购物车成功!' % (good, count)\n else:\n return True, '商品 %s x %s 加入购物车失败!' % (good, count)\n\ndef pay(name):\n info = db_handler.read(name)\n cost = 0\n for good in info['shopping_cart'].values():\n cost += (good['price'] * good['count'])\n if (info['balance'] + info['credit_balance']) < cost:\n return False, '用户%s账户余额不足,结账失败!' % name\n if info['balance'] >= cost:\n info['balance'] -= cost\n if info['balance'] < cost and (info['balance'] + info['credit_balance']) >= cost:\n info['credit_balance'] -= (cost - info['balance'])\n info['bill'] += (cost - info['balance'])\n if info['balance'] != 0:\n info['balance'] = 0\n if db_handler.write(info):\n return True, '用户%s结账%s成功!' % (name, cost)\n else:\n return True, '用户%s结账%s失败!' % (name, cost)\n\ndef flush_shopping_cart(name):\n info = db_handler.read(name)\n info['shopping_cart'] = {}\n if db_handler.write(info):\n return True, '用户%s购物车已清空!' % name\n else:\n return False, '用户%s购物车清空失败!' % name\n\ndef new_arrival(code, good, price, good_file='goods'):\n good_info = db_handler.read(good_file)\n good_info[code] = {\n 'name': good,\n 'price': price\n }\n if db_handler.write(good_info):\n return True, '新商品 %s 上架成功!' % good\n else:\n return False, '新商品 %s 上架失败!' 
% good\n\ndef modify_shopping_cart(name, code, count):\n info = db_handler.read(name)\n good = info['shopping_cart'][code]['good']\n if info['shopping_cart'][code]['count'] < count:\n return False, '删除商品%s数量过多,删除失败!' % good\n if info['shopping_cart'][code]['count'] == count:\n info['shopping_cart'].pop(code)\n if info['shopping_cart'][code]['count'] > count:\n info['shopping_cart'][code]['count'] -= count\n if db_handler.write(info):\n return True, '删除商品 %s x %s 成功!' % (good, count)\n else:\n return False, '删除商品 %s x %s 失败!' % (good, count)\n\n\n"
},
{
"alpha_fraction": 0.5902438759803772,
"alphanum_fraction": 0.5908536314964294,
"avg_line_length": 26.67796516418457,
"blob_id": "8b62d4f1d35aaa9e03f6b5b2c61f805b2f5835fb",
"content_id": "e7ce989c6cfd19d36dda24b11d9878ff08565be1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1726,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 59,
"path": "/project/elective_systems/version_v4/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\n\nclass Base:\n @classmethod\n def get_obj_by_name(cls, name):\n return db_handler.select(name, cls.__name__.lower())\n\n def save(self):\n return db_handler.save(self)\n\nclass Admin(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.save()\n\n def create_school(self, school_name, address):\n school = School(school_name, address)\n return school.save()\n\n def create_teacher(self, teacher_name, teacher_password):\n teacher = Teacher(teacher_name, teacher_password)\n return teacher.save()\n\n def create_course(self, course_name, course_price, course_cycle, school_name):\n course = Course(course_name, course_price, course_cycle, school_name)\n return course.save()\n\nclass School(Base):\n def __init__(self, name, address):\n self.name = name\n self.address = address\n\n def __str__(self):\n return '学校名称:%s \\n学校地址:%s' % (self.name, self.address)\n\nclass Teacher(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.course = []\n\n def __str__(self):\n return '老师名字:%s \\n老师教授的课程:%s' % (self.name, self.course)\n\nclass Course(Base):\n def __init__(self, name, price, cyle, school_name):\n self.name = name\n self.price = price\n self.cyle = cyle\n self.school_name = school_name\n\n def __str__(self):\n return '课程名称:%s\\n课程价格:%s\\n课程周期:%s\\n学校校区:%s' % (\n self.name, self.price, self.cyle, self.school_name\n )\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5475320219993591,
"alphanum_fraction": 0.5511882901191711,
"avg_line_length": 22.7391300201416,
"blob_id": "fa54b2941f15ec913bb5716223d1c2b1ee8fbc10",
"content_id": "4c01dfab7965b140c627c6b2040c575100d7cac2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1142,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 46,
"path": "/project/shooping_mall/version_v4/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\nclass Base:\n @staticmethod\n def get_obj_by_name(name):\n return db_handler.select(name)\n\n def save(self):\n return db_handler.update(self)\n\n\nclass User(Base):\n def __init__(self, name, password, credit_limit):\n self.name = name\n self.password = password\n self.balance = 0\n self.bill = 0\n self.flow = []\n self.credit_balance = credit_limit\n self.credit_limit = credit_limit\n\n @classmethod\n def register(cls, name, password, credit_limit):\n user = cls(name, password, credit_limit)\n return user.save()\n\n def check_balance(self):\n info = '''\n 账户余额:%s\n 信用卡余额:%s\n 信用卡额度:%s\n ''' % (self.balance, self.credit_balance, self.credit_limit)\n return info\n\n def check_bill(self):\n bill = '本期账单为%s元!' % self.bill\n return bill\n\n def check_flow(self, bill_date):\n flow = []\n for f in self.flow:\n if bill_date in f[0]:\n flow.append(f)\n return flow\n\n\n"
},
{
"alpha_fraction": 0.5131243467330933,
"alphanum_fraction": 0.5205147862434387,
"avg_line_length": 26.720848083496094,
"blob_id": "aaedbcd638e58c1aa36863e07c328960653c4e41",
"content_id": "9348ccfd80441c41dbe87b25564b551c20ac72ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9038,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 283,
"path": "/month3/week3/python_day9/python_day9_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 一、编写用户注册函数,实现功能\n# 1、在函数内接收用户输入的用户名、密码、余额\n# 要求用户输入的用户名必须为字符串,并且保证用户输入的用户名不与其他用户重复\n# 要求用户输入两次密码,确认输入一致\n# 要求用户输入的余额必须为数字\n# 2、要求注册的用户信息全部存放于文件中\n\nconfig = 'users.txt'\nwith open(r'%s' % config, 'a') as f:\n pass\n\ndef get_config():\n users = {}\n with open(r'%s' % config) as f:\n for u in f:\n u = u.strip('\\n').split('|')\n n, p, b = u\n users[n] = {'password': p, 'balance': b}\n return users\n\ndef update_config(name, password, balance):\n with open(r'%s' % config, 'a') as f:\n user = '%s|%s|%s\\n' % (name, password, balance)\n f.write(user)\n print('用户注册成功!')\n\ndef get_name():\n while True:\n users = get_config()\n name = input('input username >>: ').strip()\n if not name.isalpha():\n print('name must be string!')\n continue\n if name in users:\n print('user %s is alread registered!' % name)\n continue\n return name\n\ndef get_password():\n while True:\n pwd1 = input('input password >>: ')\n pwd2 = input('input password again >>: ')\n if pwd1 != pwd2:\n print('password and confirm password inconsistent!')\n continue\n return pwd1\n\ndef get_balance():\n while True:\n balance = input('input balance >>: ').strip()\n if not balance.isdigit():\n print('please enter an integer!')\n continue\n balance = int(balance)\n return balance\n\ndef register():\n name = get_name()\n password = get_password()\n balance = get_balance()\n update_config(name, password, balance)\n\nregister()\n\n\n\n# 二、编写用户转账函数,实现功能\n# 1、传入源账户名(保证必须为str)、目标账户名(保证必须为str)、转账金额(保证必须为数字)\n# 2、实现源账户减钱,目标账户加钱\n\nimport os\n\nconfig = 'users.txt'\nwith open(r'%s' % config, 'a') as f:\n pass\n\ndef get_config():\n users = {}\n with open(r'%s' % config) as f:\n for u in f:\n u = u.strip('\\n').split('|')\n n, p, b = u\n users[n] = {'password': p, 'balance': b}\n return users\n\ndef update_transfer_amount(account, transfer_amount, mode):\n with open(r'%s' % config) as f1, \\\n open(r'%s.swap' % config, 'w') as f2:\n for line in f1:\n user = line.strip('\\n').split('|')\n if mode == 'reduce':\n 
if int(user[-1]) < transfer_amount:\n print('账户余额不足!')\n return\n user[-1] = str(int(user[-1]) - transfer_amount)\n user = '|'.join(user) + '\\n'\n f2.write(user)\n elif mode == 'increase':\n user[-1] = str(int(user[-1]) + transfer_amount)\n user = '|'.join(user) + '\\n'\n f2.write(user)\n print('转账成功!')\n os.remove(config)\n os.rename('%s.swap' % config, config)\n\ndef get_account(word):\n while True:\n account = input('%s >>: ' % word).strip()\n if not account.isalpha():\n print('account must be string!')\n continue\n return account\n\ndef get_transfer_amount():\n while True:\n transfer_amount = input('transfer amount').strip()\n if not transfer_amount.isdigit():\n print('please enter an integer!')\n continue\n transfer_amount = int(transfer_amount)\n return transfer_amount\n\ndef transfer():\n src_account = get_account('source account')\n dst_account = get_account('destination account')\n transfer_amount = get_transfer_amount()\n if update_transfer_amount(src_account, transfer_amount, 'reduce'):\n update_transfer_amount(dst_account, transfer_amount, 'increase')\n\ntransfer()\n\n# 三、编写用户验证函数,实现功能\n# 1、用户输入账号,密码,然后与文件中存放的账号密码验证\n# 2、同一账号输错密码三次则锁定\n# 3、这一项为选做功能:锁定的账号,在五分钟内无法再次登录\n# 提示:一旦用户锁定,则将用户名与当前时间写入文件,例如: egon:1522134383.29839\n# 实现方式如下:\n#\n# import time\n#\n# current_time=time.time()\n# current_time=str(current_time) #当前的时间是浮点数,要存放于文件,需要转成字符串\n# lock_user='%s:%s\\n' %('egon',current_time)\n#\n# 然后打开文件\n# f.write(lock_user)\n#\n# 以后再次执行用户验证功能,先判断用户输入的用户名是否是锁定的用户,如果是,再用当前时间time.time()减去锁定的用户名后\n# 的时间,如果得出的结果小于300秒,则直接终止函数,无法认证,否则就从文件中清除锁定的用户信息,并允许用户进行认证\n\n# # 永久禁止登陆版本\nimport os\nconfig = 'db.txt'\nwith open(r'%s' % config, 'a') as f:\n pass\n\ndef get_config():\n users = {}\n with open(r'%s' % config) as f:\n for line in f:\n if line:\n line = line.strip('\\n').split('|')\n n, p, l = line\n users[n] = {'password': p, 'islock': l}\n return users\n\ndef update_config(name, islock):\n with open(r'%s' % config) as f1, \\\n open(r'%s.swap' % config, 'w') as f2:\n 
for line in f1:\n if name in line:\n line = line.strip('\\n').split('|')\n line[-1] = islock\n line = '|'.join(line) + '\\n'\n f2.write('%s' % line)\n os.remove(config)\n os.rename('%s.swap' % config, config)\n\ndef get_name():\n while True:\n users = get_config()\n name = input('用户名 >>: ').strip()\n if not name.isalpha():\n print('用户名必须是字符串!')\n continue\n if name not in users:\n print('用户名不存在!')\n continue\n if users[name]['islock'] == 'lock':\n print('禁止登陆,用户已锁定!')\n continue\n return name\n\ndef login():\n login_count = {}\n while True:\n users = get_config()\n name = get_name()\n if name not in login_count:\n login_count[name] = 0\n password = input('密码 >>: ')\n if password != users[name]['password']:\n print('密码错误!')\n login_count[name] += 1\n if login_count[name] == 3:\n print('尝试次数过多,锁定用户!')\n update_config(name, 'lock')\n if name in users and password == users[name]['password']:\n print('登陆成功!')\n return 'success'\n\nlogin()\n\n# 禁止登陆5分钟版本\n# 永久禁止登陆版本\nimport os\nimport time\nimport datetime\n\nconfig = 'db.txt'\nwith open(r'%s' % config, 'a') as f:\n pass\n\ndef get_config():\n users = {}\n with open(r'%s' % config) as f:\n for line in f:\n if line:\n line = line.strip('\\n').split('|')\n n, p, t = line\n users[n] = {'password': p, 'locktime': float(t)}\n return users\n\ndef update_config(name, locktime):\n with open(r'%s' % config) as f1, \\\n open(r'%s.swap' % config, 'w') as f2:\n for line in f1:\n if name in line:\n line = line.strip('\\n').split('|')\n line[-1] = str(locktime)\n line = '|'.join(line) + '\\n'\n f2.write('%s' % line)\n os.remove(config)\n os.rename('%s.swap' % config, config)\n\ndef get_name():\n while True:\n users = get_config()\n name = input('用户名 >>: ').strip()\n if not name.isalpha():\n print('用户名必须是字符串!')\n continue\n if name not in users:\n print('用户名不存在!')\n continue\n dt = datetime.datetime.fromtimestamp(users[name]['locktime'])\n unlock_time = dt + datetime.timedelta(minutes=5)\n current_time = datetime.datetime.now()\n if 
current_time <= unlock_time:\n print('禁止登陆,用户已锁定!')\n continue\n return name\n\ndef login():\n login_count = {}\n while True:\n users = get_config()\n name = get_name()\n if name not in login_count:\n login_count[name] = 0\n password = input('密码 >>: ')\n if password != users[name]['password']:\n print('密码错误!')\n login_count[name] += 1\n if login_count[name] == 3:\n print('尝试次数过多,锁定用户!')\n locktime = time.time()\n update_config(name, locktime)\n if name in users and password == users[name]['password']:\n print('登陆成功!')\n return 'success'\n\nlogin()\n\n\n\n"
},
{
"alpha_fraction": 0.5676470398902893,
"alphanum_fraction": 0.570588231086731,
"avg_line_length": 23.285715103149414,
"blob_id": "ca0254741a299e78123c5fc005e09a5fa365eccc",
"content_id": "1da085150e38cbb98a6980e8fa8aa2917e74cbae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 340,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 14,
"path": "/homework/week4/elective_systems/db/handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport pickle\nfrom conf import settings\n\nclass Db:\n @classmethod\n def read(cls):\n with open(r'%s' % settings.DATA_FILE, 'rb') as f:\n return pickle.load(f)\n @classmethod\n def write(cls, data):\n with open(r'%s' % settings.DATA_FILE, 'wb') as f:\n pickle.dump(data, f)\n"
},
{
"alpha_fraction": 0.641708254814148,
"alphanum_fraction": 0.6489184498786926,
"avg_line_length": 39.022220611572266,
"blob_id": "2fbb766bdf2d154279f60a1d428353ecaa53fa25",
"content_id": "67304543ba39a7578fccc625b21fbbbd752dc423",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1911,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 45,
"path": "/weektest/weektest2/ATM_zhanglong/interface/bank.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# -*- encoding: utf-8 -*-\n\nimport datetime\nfrom lib import common\nfrom db import db_handler\n\n\nlogger = common.get_logger('bank')\n\ndef transfer_amount_api(transfer, payee, amount):\n transfer_info = db_handler.file_handler_read(transfer)\n if transfer_info['balance'] < amount:\n logger.warning('用户%s账户余额不足,转账失败!' % transfer)\n return\n payee_info = db_handler.file_handler_read(payee)\n dt = (datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))\n transfer_info['balance'] -= amount\n transfer_info['detailed_list'].append((dt, '用户%s转账%s给%s' % (transfer, amount, payee)))\n payee_info['balance'] += amount\n payee_info['detailed_list'].append((dt, '用户%s收款%s从%s' % (payee, amount, transfer)))\n db_handler.file_handler_write(transfer_info)\n db_handler.file_handler_write(payee_info)\n return True\n\ndef repayment_bill_api(name, amount):\n user_info = db_handler.file_handler_read(name)\n user_info['balance'] -= amount\n user_info['credit_balance'] += amount\n user_info['bill'] -= amount\n dt = (datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))\n user_info['detailed_list'].append((dt, '用户%s还款%s元' % (name, amount)))\n logger.info('用户%s还款%s元' % (name, amount))\n db_handler.file_handler_write(user_info)\n return True\n\ndef widthraw_cash_api(name, amount):\n user_info = db_handler.file_handler_read(name)\n user_info['credit_balance'] -= (amount + amount*0.05)\n user_info['bill'] += (amount + amount*0.05)\n user_info['balance'] += amount\n dt = (datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'))\n user_info['detailed_list'].append((dt, '用户%s取现%s元,手续费%s元' % (name, amount, amount*0.05)))\n logger.info('用户%s取现%s元,手续费%s元' % (name, amount, amount*0.05))\n db_handler.file_handler_write(user_info)\n return True\n\n\n"
},
{
"alpha_fraction": 0.6250439882278442,
"alphanum_fraction": 0.6264509558677673,
"avg_line_length": 35.675323486328125,
"blob_id": "6d4e1a1c2defeecea9396f26d5ee2fe837f3c552",
"content_id": "976b86f70ba81d886b3c6dac03f431bc1a5cdc8f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3351,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 77,
"path": "/project/elective_systems/version_v6/interface/admin_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom db import modules\n\nlogger = common.get_logger('admin_api')\n\ndef login(name, password):\n admin = modules.Admin.get_obj_by_name(name)\n if not admin:\n return False, '管理员%s不存在!' % name\n if password == admin.password:\n logger.info('管理员%s登陆成功!' % name)\n return True, '管理员%s登陆成功!' % name\n else:\n logger.warning('管理员%s登陆密码错误!' % name)\n return False, '管理员%s密码错误!' % name\n\ndef register(name, password):\n admin = modules.Admin.get_obj_by_name(name)\n if admin:\n return False, '管理员%s不能重复注册!' % name\n admin = modules.Admin.register(name, password)\n if admin:\n logger.info('管理员%s注册成功!' % name)\n return True, '管理员%s注册成功!' % name\n else:\n logger.warning('管理员%s注册失败!' % name)\n return False, '管理员%s注册失败!' % name\n\ndef get_school_info(admin_name, name):\n logger.info('管理员%s获取学校%s信息!' % (admin_name, name))\n return modules.School.get_obj_by_name(name)\n\ndef get_teacher_info(admin_name, name):\n logger.info('管理员%s获取老师%s信息!' % (admin_name, name))\n return modules.Teacher.get_obj_by_name(name)\n\ndef get_course_info(admin_name, name):\n logger.info('管理员%s获取课程%s信息!' % (admin_name, name))\n return modules.Course.get_obj_by_name(name)\n\ndef create_school(admin_name, name, address):\n school = modules.School.get_obj_by_name(name)\n if school:\n return False, '学校%s已经存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_school(name, address):\n logger.info('管理员%s创建学校%s成功!' % (admin_name, name))\n return True, '管理员%s创建学校%s成功!' % (admin_name, name)\n else:\n logger.warning('管理员%s创建学校%s失败!' % (admin_name, name))\n return False, '管理员%s创建学校%s失败!' % (admin_name, name)\n\ndef create_teacher(admin_name, name, password='123'):\n teacher = modules.Teacher.get_obj_by_name(name)\n if teacher:\n return False, '老师%s已经存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_teacher(name, password):\n logger.info('管理员%s创建老师%s成功!' % (admin_name, name))\n return True, '管理员%s创建老师%s成功!' % (admin_name, name)\n else:\n logger.warning('管理员%s创建老师%s失败!' % (admin_name, name))\n return False, '管理员%s创建老师%s失败!' % (admin_name, name)\n\ndef create_course(admin_name, name, price, cycle, school_name):\n course = modules.Course.get_obj_by_name(name)\n if course:\n return False, '课程%s已经存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_course(name, price, cycle, school_name):\n logger.info('管理员%s创建课程%s成功!' % (admin_name, name))\n return True, '管理员%s创建课程%s成功!' % (admin_name, name)\n else:\n logger.warning('管理员%s创建课程%s失败!' % (admin_name, name))\n return False, '管理员%s创建课程%s失败!' % (admin_name, name)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6382557153701782,
"alphanum_fraction": 0.6382557153701782,
"avg_line_length": 31.516128540039062,
"blob_id": "1eb127023dba1a50b70d7da39726b4e841ac409c",
"content_id": "489d7ae6a96069b86a780450475cbf4e730e73c8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1029,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 31,
"path": "/weektest/test2/ATM_wangjieyu/interface/bank.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from db import db_hander\nfrom lib import common\nfrom interface import user\n\nlogger_bank = common.get_logger('Bnak')\n\n\ndef get_bank_interface(name):\n return user.select_t(name)['account']\n\ndef transfer_interface(from_name, to_name, account):\n from_user_dic = user.select_t(from_name)\n to_user_dic = user.select_t(to_name)\n if from_user_dic['account']>=account:\n from_user_dic['account'] -= account\n to_user_dic['account'] += account\n from_user_dic['liushui'].extend(['%s transfer %s yuan to %s' % (from_name, account, to_name)])\n to_user_dic['liushui'].extend(['%s accept %s yuan from %s' % (to_name, account, from_name)])\n db_hander.update(from_user_dic)\n db_hander.update(to_user_dic)\n logger_bank.info('%s 向 %s 转账 %s' % (from_name, to_name, account))\n return True\n else:\n return False\n\n\n\ndef check_record(name):\n current_user = user.select_t(name)\n logger_bank.info('%s 查看了银行流水' % name)\n return current_user['liushui']\n\n"
},
{
"alpha_fraction": 0.611494243144989,
"alphanum_fraction": 0.6137930750846863,
"avg_line_length": 18.714284896850586,
"blob_id": "2289431ba4a7c4dc6ca7b0552dbcf13a22ffb2e7",
"content_id": "5f301de9ff2f5b8556147c461b185b1367eeb449",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 441,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 21,
"path": "/weektest/test2/ATM_zhangxiangyu/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\r\n\r\nimport logging.config\r\nfrom core import src\r\nfrom conf import settings\r\n\r\n#装饰器\r\ndef login_auth(func):\r\n def wrapper(*args,**kwargs):\r\n if not src.user_info['is_auth']:\r\n src.login()\r\n else:\r\n return func(*args,**kwargs)\r\n return wrapper\r\n\r\n\r\n\r\ndef get_logger(name):\r\n logging.config.dictConfig(settings.LOGGING_DIC)\r\n loger = logging.getLogger(name)\r\n return loger\r\n"
},
{
"alpha_fraction": 0.5734265446662903,
"alphanum_fraction": 0.5734265446662903,
"avg_line_length": 7.6875,
"blob_id": "078276658ab5b8869f75ae3a5954bdcce1eec94e",
"content_id": "51071d15df8574c74ac109a981767354bf95af95",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 143,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 16,
"path": "/liuqingzheng/courseSelection/start.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os,sys\npath=os.path.dirname(__file__)\n\nsys.path.append(path)\nfrom core import src\n\n\n\n\n\n\n\n\n\nif __name__ == '__main__':\n src.run()\n\n\n\n\n"
},
{
"alpha_fraction": 0.4644351601600647,
"alphanum_fraction": 0.47160789370536804,
"avg_line_length": 27.372880935668945,
"blob_id": "64cea4773ed17114d8686e90be81e157f8e372b7",
"content_id": "8c3347cadc2a241fcb63d676b26e8cdd9fbd40c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2047,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 59,
"path": "/weektest/weektest1/python_weektest1_zhanglong_01.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 二、综合题\n# 1. 编写登陆接口\n# \t基础需求:(4分)\n# \t•\t让用户输入用户名密码\n# \t•\t认证成功后显示欢迎信息\n# \t•\t输错三次后退出程序\n# 升级需求:(6分)\n# 可以支持多个用户登录 (提示,通过列表存多个账户信息)\n# 用户3次认证失败后,退出程序,再次启动程序尝试登录时,还是锁定状态(提示:需把用户锁定的状态存到文件里)\n\nimport os\n\nconfig = 'db.txt'\nwith open(r'%s' % config, 'a') as f:\n pass\n\nlogin_list = []\nuser_error_count = {}\nwhile True:\n users = {}\n with open(r'%s' % config) as f:\n for u in f:\n if u:\n u = u.strip('\\n').split('|')\n n, p, l = u\n users[n] = {'password': p, 'islock': l}\n name = input('username >>: ').strip()\n if name not in users:\n print('用户不存在!')\n continue\n if name in login_list:\n print('您已经是登录状态!')\n continue\n if users[name]['islock'] == 'lock':\n print('用户已锁定,禁止登陆!')\n continue\n pwd = input('password >>: ')\n if pwd != users[name]['password']:\n print('密码错误!')\n if name not in user_error_count:\n user_error_count[name] = 1\n else:\n user_error_count[name] += 1\n if user_error_count[name] == 3:\n print('尝试次数过多,用户锁定!')\n with open(r'%s' % config) as f1, \\\n open(r'%s.swap' % config, 'w') as f2:\n for line in f1:\n if name in line:\n line = line.strip('\\n').split('|')\n line[-1] = 'lock\\n'\n line = '|'.join(line)\n f2.write(line)\n os.remove(config)\n os.rename('%s.swap' % config, config)\n break\n if name in users and pwd == users[name]['password']:\n print('欢迎您,%s!登陆成功!' % name)\n login_list.append(name)"
},
{
"alpha_fraction": 0.5268336534500122,
"alphanum_fraction": 0.578711986541748,
"avg_line_length": 23.2608699798584,
"blob_id": "38b56548f83b056c2d71e9d280b038476b81ab25",
"content_id": "e1604b6b6ba0d5fe31d729a946aa8690bb65aba7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1474,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 46,
"path": "/month4/week4/python_day14/python_day14.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# # 切片对象\n# # sc = slice(1,5,2)\n# # print(type(sc))\n#\n# # # 1. 列表推导式\n# # list = ['egg%s' % x for x in range(1,101) if x <= 10]\n# # print(list)\n# #\n# # # 2. 生成器表达式\n# # genrator = ['egg%s' % x for x in range(1,101) if x <= 10]\n# # print(list)\n#\n# # 练习\n# # 1、将names=['egon','alex_sb','wupeiqi','yuanhao']中的名字全部变大写\n# names = ['egon','alex_sb','wupeiqi','yuanhao']\n# names = [x.upper() for x in names]\n# print(names)\n#\n# # 2、将names=['egon','alex_sb','wupeiqi','yuanhao']中以sb结尾的名字过滤掉,然后保存剩下的名字长度\n# names = ['egon','alex_sb','wupeiqi','yuanhao']\n# names = [(x, len(x)) for x in names if not x.endswith('sb')]\n# print(names)\n#\n# # 3、求文件a.txt中最长的行的长度(长度按字符个数算,需要使用max函数)\n# with open(r'a.txt') as f:\n# print(max([(len(x), x) for x in f]))\n#\n# # 4、求文件a.txt中总共包含的字符个数?思考为何在第一次之后的n次sum求和得到的结果为0?(需要使用sum函数)\n#\n#\n# # 5、思考题\n# #\n# # with open('a.txt') as f:\n# # g=(len(line) for line in f)\n# # print(sum(g)) #为何报错?\n# # 6、文件shopping.txt内容如下\n# #\n# # mac,20000,3\n# # lenovo,3000,10\n# # tesla,1000000,10\n# # chicken,200,1\n# # 求总共花了多少钱?\n# #\n# # 打印出所有商品的信息,格式为[{'name':'xxx','price':333,'count':3},...]\n# #\n# # 求单价大于10000的商品信息,格式同上\n\n\n"
},
{
"alpha_fraction": 0.43621399998664856,
"alphanum_fraction": 0.4897119402885437,
"avg_line_length": 17.673076629638672,
"blob_id": "7d700124144b7ce67642a5af525d05b2a0b549f3",
"content_id": "cd44c728c3bb1234f3779a6e9e24df75ad2d8cb7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1296,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 52,
"path": "/month4/week7/python_day28/python_day28_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月24日作业\n# 1、改写下列程序,分别别实现下述打印效果\n#\n# from multiprocessing import Process\n# import time\n# import random\n#\n# def task(n):\n# time.sleep(random.randint(1,3))\n# print('-------->%s' %n)\n#\n# if __name__ == '__main__':\n# p1=Process(target=task,args=(1,))\n# p2=Process(t.≥≥t..............................arget=task,args=(2,))\n# p3=Process(target=task,args=(3,))\n#\n# p1.start()\n# p2.start()\n# p3.start()\n#\n# print('-------->4')\n# 效果一:保证最先输出-------->4\n#\n# -------->4\n# -------->1\n# -------->3\n# -------->2\n# 效果二:保证最后输出-------->4\n#\n# -------->2\n# -------->3\n# -------->1\n# -------->4\n# 效果三:保证按顺序输出\n#\n# -------->1\n# -------->2\n# -------->3\n# -------->4\n# 2、判断上述三种效果,哪种属于并发,哪种属于串行?\n#\n#\n# 3、基于多进程实现并发的套接字通信\n# 提示:需要在server.bind(('127.0.0.1',8080))之前添加一行\n# server.setsockopt(SOL_SOCKET,SO_REUSEADDR,1)\n#\n# 思考每来一个客户端,服务端就开启一个新的进程来服务它,这种实现方式有无问题?\n#\n#\n#\n# 4、预习互斥锁,编写模拟抢票程序:http://www.cnblogs.com/linhaifeng/articles/7428874.html#_label5\n#\n\n"
},
{
"alpha_fraction": 0.462797611951828,
"alphanum_fraction": 0.5182822942733765,
"avg_line_length": 28.58490562438965,
"blob_id": "5ffa7838c9746eb31c057e16763f9e09a4b32691",
"content_id": "18166fdf6121f130e4ea93cbb8076fa77bfdd216",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5118,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 159,
"path": "/project/elective_systems/version_v5/core/admin.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "#-*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import admin_api\n\nCURRENT_USER = None\nROLE = 'admin'\n\ndef login():\n global CURRENT_USER\n print('\\033[32m登陆\\033[0m')\n while True:\n name = input('请输入登陆用户名 >>: ').strip()\n if name == 'q': break\n password = input('请输入登陆密码 >>: ').strip()\n flag, msg = admin_api.login(name, password)\n if flag:\n CURRENT_USER = name\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while True:\n name = input('请输入注册用户名 >>: ').strip()\n if name == 'q': break\n password = input('请输入注册密码 >>: ').strip()\n password2 = input('请确认注册密码 >>: ').strip()\n if password != password2:\n print('\\033[31m两次密码输入不一致!\\033[0m')\n continue\n flag, msg = admin_api.register(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\n@common.auth(ROLE)\ndef create_school():\n print('\\033[32m创建学校\\033[0m')\n while True:\n name = input('请输入学校名称 >>: ').strip()\n if name == 'q': break\n address = input('请输入学校地址 >>: ').strip()\n flag, msg = admin_api.create_school(CURRENT_USER, name, address)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\n@common.auth(ROLE)\ndef create_teacher():\n print('\\033[32m创建老师\\033[0m')\n while True:\n name = input('请输入老师名字 >>: ').strip()\n if name == 'q': break\n flag, msg = admin_api.create_teacher(CURRENT_USER, name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef choose_school():\n while True:\n schools = common.get_object_list('school')\n for k, v in enumerate(schools):\n print('%-4s %-10s' % (k, v))\n choice = input('请选择课程的学校编号 >>: ').strip()\n if choice == 'q':\n return choice\n if not choice.isdigit():\n print('\\033[31m学校编号必须是数字!\\033[0m')\n continue\n choice = int(choice)\n if choice < 0 or choice > len(schools):\n print('\\033[31m学校编号非法!\\033[0m')\n continue\n return schools[choice]\n\n@common.auth(ROLE)\ndef create_course():\n print('\\033[32m创建课程\\033[0m')\n while True:\n school_name = choose_school()\n if school_name == 'q': break\n name = input('请输入课程名称 >>: ').strip()\n if name == 'q': break\n price = input('请输入课程价格 >>: ').strip()\n cycle = input('请输入课程周期 >>: ').strip()\n flag, msg = admin_api.create_course(CURRENT_USER, name, price, cycle, school_name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\n@common.auth(ROLE)\ndef check_school():\n print('\\033[32m查看学校\\033[0m')\n schools = common.get_object_list('school')\n print('-' * 30)\n if not schools:\n print('\\033[31m学校列表为空!\\033[0m')\n return\n for k, name in enumerate(schools):\n print('%s %s' % (k, admin_api.get_school_info(CURRENT_USER, name)))\n print('-' * 30)\n\n@common.auth(ROLE)\ndef check_teacher():\n print('\\033[32m查看老师\\033[0m')\n teachers = common.get_object_list('teacher')\n print('-' * 30)\n if not teachers:\n print('\\033[31m老师列表为空!\\033[0m')\n return\n for k, name in enumerate(teachers):\n print('%s %s' % (k, admin_api.get_teacher_info(CURRENT_USER, name)))\n print('-' * 30)\n\n@common.auth(ROLE)\ndef check_course():\n print('\\033[32m查看课程\\033[0m')\n course = common.get_object_list('course')\n print('-' * 30)\n if not course:\n print('\\033[31m课程列表为空!\\033[0m')\n return\n for k, name in enumerate(course):\n print('%s %s' % (k, admin_api.get_course_info(CURRENT_USER, name)))\n print('-' * 30)\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [check_school, '查看学校'],\n '4': [check_teacher, '查看老师'],\n '5': [check_course, '查看课程'],\n '6': [create_school, '创建学校'],\n '7': [create_teacher, '创建老师'],\n '8': [create_course, '创建课程']\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择操作编号 >>: ').strip()\n if choice == 'q': break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n"
},
{
"alpha_fraction": 0.5861304402351379,
"alphanum_fraction": 0.589613676071167,
"avg_line_length": 40,
"blob_id": "5564f214fea745ad943f8bbbc9938cb26ad0ad23",
"content_id": "c41af625f19c910713f4a32a953828d6dff84dcc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3694,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 77,
"path": "/project/shooping_mall/version_v4/interface/bank.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# -*- encoding: utf-8 -*-\n\nimport datetime\nfrom db import modules\nfrom lib import common\n\nlogger = common.get_logger('bank')\n\ndef recharge(name, amount):\n dt = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n user = modules.User.get_obj_by_name(name)\n user.balance += amount\n user.flow.append((dt, '用户%s充值%s元' % (name, amount)))\n if user.save():\n logger.info('用户%s充值%s元成功!' % (name, amount))\n return True, '用户%s充值%s元成功!' % (name, amount)\n else:\n logger.info('用户%s充值%s元失败!' % (name, amount))\n return False, '用户%s充值%s元失败!' % (name, amount)\n\ndef transfer(name, payee, amount):\n obj_tran = modules.User.get_obj_by_name(name)\n if obj_tran.balance < amount:\n logger.info('用户%s账户金额不足,转账失败!' % name)\n return False, '用户%s账户金额不足,转账失败!' % name\n dt = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n obj_payee = modules.User.get_obj_by_name(payee)\n obj_tran.balance -= amount\n obj_tran.flow.append((dt, '用户%s给%s转账%s元' % (name, payee, amount)))\n if not obj_tran.save():\n return False, '用户%s给%s转账%s元失败!' % (name, payee, amount)\n logger.info('用户%s给%s转账%s元' % (name, payee, amount))\n obj_payee.balance += amount\n obj_payee.flow.append((dt, '用户%s收到%s转账%s元' % (payee, name, amount)))\n if not obj_payee.save():\n return False, '用户%s收取%s转账%s元失败!' % (payee, name, amount)\n logger.info('用户%s收到%s转账%s元成功!' % (payee, name, amount))\n return True, '用户%s给%s转账%s成功!' % (name, payee, amount)\n\ndef withdraw(name, amount):\n user = modules.User.get_obj_by_name(name)\n if user.credit_balance < (amount + amount * 0.05):\n logger.info('用户%s信用余额不足,取现%s失败!' % (name, amount))\n return False, '用户%s信用余额不足,取现%s失败!' % (name, amount)\n dt = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n user.credit_balance -= (amount + amount * 0.05)\n user.balance += amount\n user.bill += (amount + amount * 0.05)\n user.flow.append((dt, '用户%s取现%s元' % (name, amount)))\n if user.save():\n logger.info('用户%s取现%s成功!' % (name, amount))\n return True, '用户%s取现%s成功!' % (name, amount)\n else:\n logger.info('用户%s取现%s失败!' % (name, amount))\n return False, '用户%s取现%s失败!' % (name, amount)\n\ndef repayment(name, amount):\n dt = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')\n user = modules.User.get_obj_by_name(name)\n if user.bill > amount:\n user.balance -= amount\n user.bill -= amount\n user.credit_balance += amount\n user.flow.append((dt, '用户%s还款%s元' % (name, amount)))\n if not user.save():\n return False, '用户%s还款%s失败!' % (name, amount)\n logger.info('用户%s还款%s成功,还需%s还清本期账单!' % (name, amount, user.bill))\n return True, '用户%s还款%s成功,还需%s还清本期账单!' % (name, amount, user.bill)\n if user.bill <= amount:\n user.balance -= amount\n user.bill = 0\n user.credit_balance += amount\n user.flow.append((dt, '用户%s还款%s元' % (name, amount)))\n if not user.save():\n return False, '用户%s还款%s失败!' % (name, amount)\n logger.info('用户%s还款%s成功,本期账单已还清!' % (name, amount))\n return True, '用户%s还款%s成功,本期账单已还清!' % (name, amount)\n\n"
},
{
"alpha_fraction": 0.34227287769317627,
"alphanum_fraction": 0.36607441306114197,
"avg_line_length": 21.938461303710938,
"blob_id": "ba16d25d31af41655796ad44068595f5e23e9474",
"content_id": "d65d581d82001d733e7ebbfc898aa64b54e140ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3285,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 130,
"path": "/homework/week2/python_weekend2_zhanglong_3_level_menu.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "#!/usr.bin/env python3\n# -*- encoding: utf-8 -*-\n\nmenu = {\n '北京': {\n '海淀': {\n '五道口': {\n 'soho': {},\n '网易': {},\n 'google': {}\n },\n '中关村': {\n '爱奇艺': {},\n '汽车之家': {},\n '优酷': {}\n },\n '上地': {\n '百度': {}\n }\n },\n '昌平': {\n '沙河': {\n '老男孩': {},\n '北航': {}\n },\n '天通苑': {},\n '回龙观': {},\n },\n '朝阳': {}\n },\n '上海': {\n '闵行': {\n '人民广场': {\n '炸鸡店': {}\n }\n },\n '闸北': {\n '火车站': {\n '携程': {}\n }\n },\n '浦东': {}\n },\n '山东': {}\n}\n\n# for循环版\n# layer = menu\n# layers = []\n# while 1:\n# for k in layer:\n# print(k)\n# point = input('请选择地区,back 返回 >>: ').strip()\n# if point == 'quit':\n# break\n# if point == 'back':\n# if len(layers) > 1:\n# layers.pop()\n# layer = layers[-1]\n# else:\n# print('\\033[31m回到顶层菜单!\\033[0m')\n# continue\n# if point not in layer:\n# continue\n# if not layer[point]:\n# print('\\033[31m到达底层菜单!\\033[0m')\n# continue\n# layers.append(layer)\n# layer = layer[point]\n\n# 要求的三个while版\nlayer = menu\nlayers = []\ntag = True\nwhile tag:\n for k in layer:\n print(k)\n point = input('请选择地区 >>: ').strip()\n if point == 'quit':\n tag = False\n continue\n if point == 'back':\n print('\\033[31m回到顶层菜单!\\033[0m')\n continue\n if point not in layer:\n continue\n if not layer[point]:\n print('\\033[31m到达底层菜单!\\033[0m')\n continue\n layers.append(layer)\n layer = layer[point]\n while tag:\n for k in layer:\n print(k)\n point = input('请选择地区,back 返回 >>: ').strip()\n if point == 'quit':\n tag = False\n continue\n if point == 'back':\n if len(layers) > 1:\n layers.pop()\n layer = layers[-1]\n else:\n print('\\033[31m回到顶层菜单!\\033[0m')\n break\n if point not in layer:\n continue\n if not layer[point]:\n print('\\033[31m到达底层菜单!\\033[0m')\n continue\n layers.append(layer)\n layer = layer[point]\n while tag:\n for k in layer:\n print(k)\n point = input('请选择地区,back 返回 >>: ').strip()\n if point == 'quit':\n tag = False\n continue\n if point == 'back':\n layers.pop()\n layer = layers[-1]\n break\n if point not in layer:\n continue\n if not layer[point]:\n print('\\033[31m到达底层菜单!\\033[0m')\n continue\n layers.append(layer)\n layer = layer[point]\n\n"
},
{
"alpha_fraction": 0.5789473652839661,
"alphanum_fraction": 0.5842105150222778,
"avg_line_length": 20.11111068725586,
"blob_id": "b34130f1a131b85ac20ead7839c6a3bfd85dc02c",
"content_id": "b5634aefd9b2d6a124bf13d121cece9f44aa95d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 194,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 9,
"path": "/month4/week4/python_day15/ATM/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom conf import settings\nprint(settings.LOG_FILE)\n\ndef logger(msg):\n print('日志 ...')\n with open(settings.LOG_FILE, 'a') as f:\n f.write('%s\\n' % msg)\n"
},
{
"alpha_fraction": 0.4346195161342621,
"alphanum_fraction": 0.45980706810951233,
"avg_line_length": 17.84848403930664,
"blob_id": "dd88f0209289de2e3e3912c4e45a4e94df197f7d",
"content_id": "6089fbeacdfd065b5f7d30c2ce5fb07375ad3b1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2000,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 99,
"path": "/month3/week3/python_day12/python_day12.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 迭代器\n# str1 = 'hello'\n# list1 = [1,2,3]\n# tup1 = (1,2,3)\n# dic1 = {'x':1,'y':2,'z':3}\n# set1 = {1,2,3}\n# f = open(r'a.txt')\n#\n# for s in [str1, list1, tup1, dic1, set1, f]:\n# _iter = s.__iter__()\n# print(s, iter)\n# while True:\n# try:\n# r = _iter.__next__()\n# except StopIteration:\n# print('迭代结束!')\n# break\n# else:\n# print(r)\n# f.close()\n#\n# 2. 生成器\n# def chicken():\n# print('=====> first')\n# yield 1\n# print('=====> second')\n# yield 2\n# print('=====> third')\n# yield 3\n#\n# obj = chicken()\n# # print('obj.__iter__() is obj')\n# # res = obj.__next__()\n# # print(res)\n# # res1 = obj.__next__()\n# # print(res1)\n# # res2 = obj.__next__()\n# # print(res2)\n#\n# for item in obj:\n# print(item)\n#\n# def my_range():\n# print('start...')\n# i = 0\n# while True:\n# yield i\n# i += 1\n#\n# obj = my_range()\n#\n# print(obj.__next__())\n# print(obj.__next__())\n# print(obj.__next__())\n# print(obj.__next__())\n#\n# print(my_range().__next__())\n# print(my_range().__next__())\n#\n# for i in obj:\n# print(i)\n#\n# def my_range(start, end, step=1):\n# i = start\n# while i < end:\n# yield i\n# i += step\n#\n# # obj = my_range(0, 10)\n# # obj = obj.__iter__()\n# # print(obj.__next__())\n# # print(obj.__next__())\n# # print(obj.__next__())\n# # print(obj.__next__())\n#\n#\n# for i in my_range(0,10):\n# print(i)\n#\n# # 3. 协程函数\n# def eat(name):\n# print('%s ready to eat' % name)\n# food_list = []\n# while True:\n# food = yield food_list\n# print('%s start to eat %s' % (name, food))\n# food_list.append(food)\n#\n#\n# dog1 = eat('John')\n#\n# # 1)函数必须初始化一次,就是先调用一次 next 方法,让函数停留在 yield 的位置;\n# print(dog1.__next__())\n#\n# # 2)使用 send 方法给 yield 传值,其次和 next 方法一致;\n# print(dog1.send('面包'))\n# print(dog1.send('骨头'))\n#\n#\n"
},
{
"alpha_fraction": 0.5857142806053162,
"alphanum_fraction": 0.5952380895614624,
"avg_line_length": 19,
"blob_id": "39ce0dbc5371fb33320888c59ad3f62c7c8f4739",
"content_id": "07a5c96599c14b4a875451baaada2f026463c971",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 644,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 30,
"path": "/weektest/test2/ATM_zhangxiangyu/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\r\n\r\nfrom db import db_handler\r\nfrom lib import common\r\n\r\nuser_loger = common.get_logger('User')\r\n\r\ndef get_userinfo_interface(name):\r\n return db_handler.select(name)\r\n\r\n\r\ndef user_pwd_interface(name,pwd,balance=15000):\r\n user_dic = {\r\n 'name':name,\r\n 'password':pwd,\r\n 'locked':False,\r\n 'account':balance,\r\n 'limit':balance,\r\n 'flow_log':[]\r\n }\r\n user_loger.info('%s用户注册成功!' % name)\r\n db_handler.update(user_dic)\r\n\r\n\r\n\r\n\r\ndef user_locked_interface(name):\r\n user_dic = db_handler.select(name)\r\n user_dic['locked'] = True\r\n db_handler.update(user_dic)\r\n"
},
{
"alpha_fraction": 0.5565352439880371,
"alphanum_fraction": 0.5586099624633789,
"avg_line_length": 29.603174209594727,
"blob_id": "af408dbc93d8fe00dd832e179c0fa1feb24225c1",
"content_id": "ee6f6d9ca9d8e105b9bffd860dd9c4cd04233062",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2092,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 63,
"path": "/project/shooping_mall/version_v5/interface/shop.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
    "text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\ndef new_arrival(code, name, price, good_file='goods'):\n good_info = db_handler.read(good_file)\n if not good_info:\n good_info = {'name': good_file}\n good_info[code] = {\n 'name': name,\n 'price': price\n }\n if db_handler.write(good_info):\n return True, '商品%s新品上架成功!' % name\n else:\n return False, '商品%s新品上架失败!' % name\n\ndef get_shooping_cart_info(name):\n info = db_handler.read(name)\n if info:\n return info['shopping_cart']\n\ndef get_good_info(good_file='goods'):\n good_info = db_handler.read(good_file)\n good_info.pop('name')\n return good_info\n\ndef join_shopping_cart(name, good, code, price, count):\n info = db_handler.read(name)\n if good in info['shopping_cart']:\n info['shopping_cart'][good]['count'] += count\n else:\n info['shopping_cart'][good] = {\n 'code': code,\n 'price': price,\n 'count': count\n }\n if db_handler.write(info):\n return True, '商品 %s x %s 加入购物车成功!' % (good, count)\n else:\n return False, '商品 %s x %s 加入购物车失败!' % (good, count)\n\ndef pay(name):\n info = db_handler.read(name)\n shopping_cart = info['shopping_cart']\n if not shopping_cart:\n return False, '用户%s购物车列表为空' % name\n cost = 0\n for k, v in shopping_cart.items():\n cost += (v['price'] * v['count'])\n if (info['balance'] + info['credit_balance']) < cost:\n return False, '用户%s账户余额不足,结账失败!' % name\n if info['balance'] >= cost:\n info['balance'] -= cost\n else:\n remainder = cost - info['balance']\n info['balance'] = 0\n info['credit_balance'] -= remainder\n info['bill'] += remainder\n if db_handler.write(info):\n return True, '用户%s付款%s元,结账成功!' % (name, cost)\n else:\n return False, '系统异常,结账失败!'\n"
},
{
"alpha_fraction": 0.6107226014137268,
"alphanum_fraction": 0.6340326070785522,
"avg_line_length": 24.176469802856445,
"blob_id": "4ccc8d58bb2c0a484ea265812e6ef130919973ee",
"content_id": "dae0e57cfc4c3173d318944525527072e3e5d88e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 449,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 17,
"path": "/project/shooping_mall/version_v3/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport logging.config\nfrom conf import settings\nfrom core import app\n\ndef auth(func):\n def wrapper(*args, **kwargs):\n if not app.user_data['name']:\n print('\\033[31m用户未登录,请登录!\\033[0m')\n else:\n return func(*args, **kwargs)\n return wrapper\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\n"
},
{
"alpha_fraction": 0.5736842155456543,
"alphanum_fraction": 0.6184210777282715,
"avg_line_length": 16.31818199157715,
"blob_id": "86df1ae999bf9a0062b40b14d08ad8acb627ab3c",
"content_id": "054989886367ffbc75fd4482b1c01a7eca1c4022",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 380,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 22,
"path": "/month5/week10/python_day41/python_day41.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport pymysql\n\nconn = pymysql.connect(\n host='127.0.0.1',\n port=3306,\n user='root',\n password='123456',\n database='db1',\n charset='utf8'\n)\n\ncursor = conn.cursor(pymysql.cursor.DictCursor)\n\nrows = cursor.callproc('p1', (2,4,1))\nprint('rows: %s' % rows)\ndata = cursor.fetchall()\nprint('data: %s' % data)\n\ncursor.close()\nconn.close()"
},
{
"alpha_fraction": 0.5548780560493469,
"alphanum_fraction": 0.5564024448394775,
"avg_line_length": 21.620689392089844,
"blob_id": "a0e538be64a07f31c892f29cccad2a891754a225",
"content_id": "9f7bd1530d695399b9ab6ece94eb9d2f4668f49c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 678,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 29,
"path": "/weektest/weektest2/ATM_zhanglong/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport sys\nimport logging.config\nfrom conf import settings\n\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\ndef auth(func):\n def wrapper(*args, **kwargs):\n if not app.CURRENT_USER:\n print('用户未登录,请先登录!')\n app.login()\n else:\n return func(*args, **kwargs)\n return wrapper\n\ndef loggut(func):\n def wrapper(*args, **kwargs):\n res = func(*args, **kwargs)\n if res == 'quit':\n print('Goodbye!')\n sys.exit()\n else:\n return res\n return wrapper\n"
},
{
"alpha_fraction": 0.5948103666305542,
"alphanum_fraction": 0.598802387714386,
"avg_line_length": 28.41176414489746,
"blob_id": "0e4ccaf759d35725d97eb112a961f5a04614e855",
"content_id": "ee9761d556dc3d43f3296a9a67d2a8daab02615c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 501,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 17,
"path": "/weektest/test2/ATM_wangjieyu/db/db_hander.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "import os,json\n\nBASE_DB=os.path.dirname(os.path.dirname(__file__))\n\ndef select(name):\n BASE_USER = os.path.join(BASE_DB,'db','%s.json' %name)\n if os.path.exists(BASE_USER):\n with open(BASE_USER,'r',encoding='utf-8') as f:\n user_dic=json.load(f)\n return user_dic\n else:\n return False\n\ndef update(user_dic):\n BASE_USER = os.path.join(BASE_DB,'db','%s.json' %user_dic['name'])\n with open(BASE_USER,'w',encoding='utf-8') as f:\n json.dump(user_dic,f)\n\n"
},
{
"alpha_fraction": 0.4562595784664154,
"alphanum_fraction": 0.5086603760719299,
"avg_line_length": 28.237178802490234,
"blob_id": "6f120cdfdccfe2f8da7829cb80cc13378054211a",
"content_id": "8dfd69ebc82f9bf2976b0965fcbcce5886726be6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4931,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 156,
"path": "/project/elective_systems/version_v4/core/admin.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import admin_api\n\nUSER = {'name': None}\nROLE = 'admin'\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n while True:\n name = input('请输入登陆用户名 >>: ').strip()\n if name == 'q': break\n password = input('请输入登陆密码 >>: ').strip()\n if password == 'q': break\n flag, msg = admin_api.login(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n USER['name'] = name\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\ndef register():\n print('\\033[32m注册\\033[0m')\n while True:\n name = input('请输入注册用户名 >>: ').strip()\n if name == 'q': break\n password = input('请输入注册密码 >>: ').strip()\n if password == 'q': break\n password2 = input('请确认注册密码 >>: ').strip()\n if password2 == 'q': break\n if password != password2:\n print('\\033[31m两次输入密码不一致!\\033[0m')\n continue\n flag, msg = admin_api.register(name, password)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef create_school():\n print('\\033[32m创建学校\\033[0m')\n while True:\n name = input('请输入学校名称 >>: ').strip()\n if name == 'q': break\n address = input('请输入学校地址 >>: ').strip()\n if address == 'q': break\n flag, msg = admin_api.create_school(USER['name'], name, address)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef choose_school():\n while True:\n schools = common.get_object_list('school')\n print('-' * 30)\n for k, v in enumerate(schools):\n print('%-4s%-10s' % (k, v))\n print('-' * 30)\n choice = input('请选择学校编号 >>: ').strip()\n if choice == 'q': break\n if not choice.isdigit():\n print('\\033[31m选择编号必须是数字!\\033[0m')\n continue\n choice = int(choice)\n if choice < 0 or choice > len(schools):\n print('\\033[31m选择编号超出范围!\\033[0m')\n school_name = schools[choice]\n return school_name\n\[email protected](ROLE)\ndef create_teacher():\n print('\\033[32m创建老师\\033[0m')\n while 
True:\n name = input('请输入老师名字 >>: ').strip()\n if name == 'q': break\n flag, msg = admin_api.create_teacher(USER['name'], name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef create_course():\n print('\\033[32m创建课程\\033[0m')\n while True:\n school_name = choose_school()\n name = input('请输入课程名称 >>: ').strip()\n if name == 'q': break\n price = input('请输入课程价格 >>: ').strip()\n if price == 'q': break\n cycle = input('请输入课程周期 >>: ').strip()\n if cycle == 'q': break\n flag, msg = admin_api.create_course(USER['name'], name, price, cycle, school_name)\n if flag:\n print('\\033[32m%s\\033[0m' % msg)\n return\n else:\n print('\\033[31m%s\\033[0m' % msg)\n\[email protected](ROLE)\ndef check_school():\n print('\\033[32m查看学校\\033[0m')\n schools = common.get_object_list('school')\n print('=' * 30)\n for name in schools:\n print(admin_api.get_school_info(name))\n print('-' * 30)\n\[email protected](ROLE)\ndef check_teacher():\n print('\\033[32m查看老师\\033[0m')\n teachers = common.get_object_list('teacher')\n print('=' * 30)\n for name in teachers:\n print(admin_api.get_teacher_info(name))\n print('-' * 30)\n\[email protected](ROLE)\ndef check_course():\n print('\\033[32m查看课程\\033[0m')\n courses = common.get_object_list('course')\n print('=' * 30)\n for name in courses:\n print(admin_api.get_course_info(name))\n print('-' * 30)\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [check_school, '查看学校'],\n '4': [check_teacher, '查看老师'],\n '5': [check_course, '查看课程'],\n '6': [create_school, '创建学校'],\n '7': [create_teacher, '创建老师'],\n '8': [create_course, '创建课程']\n }\n while True:\n print('=' * 30)\n for k,v in menu.items():\n print('%-5s%-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择操作编号 >>: ').strip()\n if choice == 'q': break\n if choice not in menu:\n print('\\033[31m操作编号非法!\\033[0m')\n continue\n menu[choice][0]()\n"
},
{
"alpha_fraction": 0.5006697177886963,
"alphanum_fraction": 0.5049558281898499,
"avg_line_length": 26.441177368164062,
"blob_id": "08d06b0131c0ef2d8e1e5335536adadff282eeb5",
"content_id": "881dcc904a2a8bd656401c3b428833005cbc4003",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4073,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 136,
"path": "/weektest/test2/ATM_wangjieyu/core/src.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "from interface import user\nfrom interface import bank\nfrom lib import common\n\npanduan={\n 'name':None,\n 'is_auth':False\n}\nlogger_bank = common.get_logger('Bnak')\n\ndef register():\n while True:\n name = input('输入你的账号: ').strip()\n if not name.isalnum():continue\n user_dic=user.select_t(name)\n if user_dic:\n print('账户已存在 ggg')\n continue\n else:\n passwd=input('输入你的账户密码').strip()\n passwd1=input('确认你输入的账户密码').strip()\n if passwd==passwd1:\n user.update_t(name,passwd)\n print('注册成功')\n break\n else:\n print('密码不一致')\n\ndef login():\n while True:\n name = input('输入你的账号: ').strip()\n if not name.isalnum():continue\n user_dic=user.select_t(name)\n # print(user_dic)\n if user_dic:\n passwd=input('输入你的账户密码').strip()\n if passwd==user_dic['passwd']:\n panduan['name']=name\n panduan['is_auth']=True\n print('登陆成功')\n break\n else:\n print('密码错误')\n continue\n else:\n print('账户不存在 ggg')\n break\n\[email protected]_auth\ndef chaxun():\n balance = bank.get_bank_interface(panduan['name'])\n print('查看余额')\n print('您的余额为:%s' % balance)\n\[email protected]_auth\ndef qukuan():\n while True:\n user_dic=user.select_t(panduan['name'])\n money=input('取多少 速度。。。').strip()\n if not money.isdigit():continue\n money=int(money)\n user_dic['account']-=money\n user.update_t(user_dic['name'],user_dic['passwd'],user_dic['account'])\n print('您取走了%s $,还剩%s $' %(money,user_dic['account']))\n logger_bank.info('%s 取款 %s 元' % (user_dic['name'], user_dic['account']))\n break\n\[email protected]_auth\ndef huankuan():\n while True:\n user_dic=user.select_t(panduan['name'])\n money=input('还多少 速度。。。').strip()\n if not money.isdigit():continue\n money=int(money)\n user_dic['account']+=money\n user.update_t(user_dic['name'],user_dic['passwd'],user_dic['account'])\n print('您还了%s $,还剩%s $' %(money,user_dic['account']))\n logger_bank.info('%s 还款 %s 元' % (user_dic['name'], user_dic['account']))\n break\n\[email protected]_auth\ndef liushui_():\n print('您的银行流水为:')\n for record in 
bank.check_record(panduan['name']):\n print(record)\n\[email protected]_auth\ndef zhuanzhang():\n print('转账')\n while True:\n trans_name = input('输入你的转账用户(q to exit)>>').strip()\n if trans_name == panduan['name']:\n print('不能是本人')\n continue\n if 'q' == trans_name: break\n trans_dic = user.select_t(trans_name)\n if trans_dic:\n trans_money = input('输入转账金额 >>:').strip()\n if trans_money.isdigit():\n trans_money = int(trans_money)\n user_balance = bank.get_bank_interface(panduan['name'])\n if user_balance >= trans_money:\n bank.transfer_interface(panduan['name'], trans_name, trans_money)\n break\n else:\n print('钱不够')\n continue\n else:\n print('输入数字')\n continue\n else:\n print('账户不存在')\n continue\nfun_dic={\n '1':register,\n '2':login,\n '3':chaxun,\n '4':qukuan,\n '5':huankuan,\n '6':zhuanzhang,\n '7':liushui_,\n}\n\ndef run():\n while True:\n choice=input('''\n 1、注册\n 2、登陆\n 3、查询\n 4、取款\n 5、还款\n 6、转账\n 7、流水\n>>>>>>>>>>>>> ''').strip()\n if choice not in fun_dic:continue\n fun_dic[choice]()\n\n"
},
{
"alpha_fraction": 0.4404109716415405,
"alphanum_fraction": 0.4458903968334198,
"avg_line_length": 24.578947067260742,
"blob_id": "d1d457a619a894655bc7ff019c7a369c4f261807",
"content_id": "d1d486fa511e4fbb4b5c3424fd7158c526f64bdd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1676,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 57,
"path": "/month4/week5/python_day19/ATM/core/src.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom interface inport user\n\n\ndef register():\n while True:\n name = input('请输入用户名 >>: ').strip()\n user_info = user.get_user_info_api(name)\n if user_info:\n print('用户%s已注册!' % name)\n return\n pwd = input('请输入密码 >>: ').strip()\n pwd2 = input('请确认密码 >>: ').strip()\n if pwd != pwd2:\n print('两次输入密码不一致!')\n continue\n register_user_api(name, pwd)\n print('用户%s注册成功!' % name)\n\ndef login():\n i = 0\n while True:\n name = input('请输入用户名 >>: ').strip()\n user_info = user.get_user_info_api(name)\n if not user_info:\n print('用户%s不存在!' % name)\n continue\n if user_info['lock']:\n print('用户%s已锁定,禁止登陆!' % name)\n continue\n pwd = input('请输入密码 >>: ').strip()\n if pwd != user_info['pwd']:\n print('密码错误!')\n i += 1\n if i == 3:\n print('尝试次数过多,锁定用户%s' % name)\n user_info['lock'] = True\n user.modify_user_api(user_info)\n continue\n print('用户%s登陆成功!' % name)\n\ndef run():\n while True:\n print('''\n 1 登陆\n 2 注册\n ''')\n\n chioce = input('请输入操作编码 >>: ').strip()\n if chioce == 'quit':\n print('Goodbye!')\n break\n if choice in operations:\n operations[choice]()\n else:\n print('操作编码非法!')\n\n\n"
},
{
"alpha_fraction": 0.6398891806602478,
"alphanum_fraction": 0.6412742137908936,
"avg_line_length": 27.84000015258789,
"blob_id": "0f6dfdbda885ddf8de1c6cb6b264578f3265172d",
"content_id": "c3504376cbd87e2d8f565bf11b2dbb7f7728d7cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 830,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 25,
"path": "/project/elective_systems/version_v7/interface/teacher_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom db import modules\n\nlogger = common.get_logger('teacher_api')\n\ndef login(name, password):\n teacher = modules.Teacher.get_obj_by_name(name)\n if not teacher:\n return False, '老师%s不存在!' % name\n if password == teacher.password:\n logger.info('老师%s登陆成功!' % name)\n return True, '老师%s登陆成功!' % name\n else:\n logger.warning('老师%s登陆失败!' % name)\n return False, '老师%s登陆失败!' % name\n\ndef get_teacher_info(name):\n logger.info('老师%s获取个人信息!' % name)\n return modules.Teacher.get_obj_by_name(name)\n\ndef get_course_info(name, course):\n logger.info('老师%s获取教授课程%s信息!' % (name, course))\n return modules.Course.get_obj_by_name(course)\n\n"
},
{
"alpha_fraction": 0.561097264289856,
"alphanum_fraction": 0.5785536170005798,
"avg_line_length": 18.950000762939453,
"blob_id": "4c01d805aef4ed041cd120dd14167e22c8880581",
"content_id": "151e8e4492efedf2037cb32f6e39339b5aa90048",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 401,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 20,
"path": "/month4/week5/python_day19/ATM/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\n\ndef get_user_info_api(name):\n return db_handler.read(name)\n\ndef register_user_api(name, pwd, credit=15000):\n user_info = {\n 'name': name,\n 'pwd': pwd,\n 'lock': False,\n 'balance': 0,\n 'credit': credit\n }\n db_handler.write(user_info)\n\ndef modify_user_api(user_info):\n db_handler.write(user_info)\n\n\n"
},
{
"alpha_fraction": 0.4458993673324585,
"alphanum_fraction": 0.5058580040931702,
"avg_line_length": 23.200000762939453,
"blob_id": "5fc2e7bfcc8ecc6193fba584b318bcea1b6af40b",
"content_id": "a224213018a55aef374e2525f437eecdd93ad22f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1553,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 60,
"path": "/project/elective_systems/version_v2/core/student.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\n\nlogger = common.get_logger('student')\nUSER = {'name': None}\nROLE = 'student'\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n while True:\n name = input('用户名').strip()\n pwd = input('密码').strip()\n flag, msg = common_api.login(name, pwd, ROLE)\n print(msg)\n if flag:\n USER['name'] = name\n return\n\n\ndef register():\n print('\\033[32m注册\\033[0m')\n\[email protected](USER['name'], ROLE)\ndef choose_school():\n print('\\033[32m选择校区\\033[0m')\n\[email protected](USER['name'], ROLE)\ndef choose_course():\n print('\\033[32m选择课程\\033[0m')\n\[email protected](USER['name'], ROLE)\ndef check_score():\n print('\\033[32m查看成绩\\033[0m')\n\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [choose_school, '选择校区'],\n '4': [choose_course, '选择课程'],\n '5': [check_score, '查看成绩'],\n }\n while True:\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择操作编号[q to exit] >>: ').strip()\n if choice == 'q':\n print('\\033[31mLogout Success! Goodbye!\\033[0m')\n break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n try:\n menu[choice][0]()\n except Exception as e:\n print('\\033[31merror from student: %s\\033[0m' % e)"
},
{
"alpha_fraction": 0.5648148059844971,
"alphanum_fraction": 0.5763888955116272,
"avg_line_length": 28.517240524291992,
"blob_id": "c7b61fc4d88bb9e1bc6a712561d111a57f975023",
"content_id": "4b27effe054a5afbf179e5f046966b063d87decf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 874,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 29,
"path": "/project/elective_systems/version_v4/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport logging.config\nfrom conf import settings\n\ndef auth(role):\n from core import admin, teacher, student\n def handler(func):\n def wrapper(*args, **kwargs):\n if admin.USER['name'] or teacher.USER['name'] or student.USER['name']:\n return func(*args, **kwargs)\n print('\\033[31m请先登录!\\033[0m')\n if role == 'admin':\n admin.login()\n if role == 'teacher':\n teacher.login()\n if role == 'student':\n student.login()\n return wrapper\n return handler\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\ndef get_object_list(obj_type):\n obj_path = os.path.join(settings.DB_PATH, obj_type)\n return os.listdir(obj_path)\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5259366035461426,
"alphanum_fraction": 0.5403458476066589,
"avg_line_length": 25.150943756103516,
"blob_id": "cb4d73cb8f1c64217b8132ccdc54fcdf13af977a",
"content_id": "f19afa6cb40632e04d9554842fb2ddab2b1fdd33",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2938,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 106,
"path": "/project/elective_systems/version_v9/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport logging.config\nfrom conf import settings\n\n\ndef login_auth(role):\n from core import admin, teacher, student\n def handler(func):\n def wrapper(*args, **kwargs):\n if role == 'admin' and not admin.CURRENT_USER:\n show_red('管理员未登录,跳转至登陆!')\n admin.login()\n return\n if role == 'teacher' and not teacher.CURRENT_USER:\n show_red('老师未登录,跳转至登陆!')\n teacher.login()\n return\n if role == 'student' and not student.CURRENT_USER:\n show_red('学生未登录,跳转至登陆!')\n student.login()\n return\n return func(*args, **kwargs)\n return wrapper\n return handler\n\n\ndef show_menu(menu):\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n\n\ndef show_info(*args, **kwargs):\n print('-' * 30)\n if args:\n for k, v in enumerate(args):\n print('%-4s %-10s' % (k, v))\n if kwargs:\n for k, v in enumerate(kwargs):\n print('%-4s %-10s %-10s' % (k, v, kwargs[v]))\n print('-' * 30)\n\ndef show_red(word):\n print('\\033[31m%s\\033[0m' % word)\n\n\ndef show_green(word):\n print('\\033[32m%s\\033[0m' % word)\n\n\ndef input_string(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n show_red('不能输入空字符!')\n return string\n\n\ndef input_integer(word, is_float=False):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n show_red('不能输入空字符!')\n continue\n if string == 'q':\n return string\n if not string.isdigit():\n show_red('请输入数字!')\n if is_float:\n return float(string)\n return int(string)\n\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\n\ndef get_object_list(type_name):\n type_path = os.path.join(settings.BASE_DB, type_name)\n if not os.path.isdir(type_path):\n return\n return os.listdir(type_path)\n\ndef show_object_list(type_name):\n object_list = get_object_list(type_name)\n if not object_list:\n show_red('%s 列表为空!' 
% type_name.capitalize())\n return\n show_info(object_list)\n return object_list\n\ndef get_object_name(type_name=None, object_list=None):\n while True:\n if type_name and not object_list:\n object_list = show_object_list(type_name)\n choice = input_integer('请选择对象的编号')\n if choice == 'q':\n return\n if choice < 0 or choice > len(object_list):\n show_red('选择编号超出范围!')\n continue\n return object_list[choice]\n\n\n\n\n"
},
{
"alpha_fraction": 0.4538152515888214,
"alphanum_fraction": 0.45783132314682007,
"avg_line_length": 11.199999809265137,
"blob_id": "7592635692b28dd76e55bb2fc4be69b4e098a77c",
"content_id": "50eb02e1f7727345a7296f19c93f679ea58a9a22",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 249,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 20,
"path": "/month4/week6/python_day26/python_day26.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nclass Meta(Foo):\n pass\n\nclass Foo:\n pass\n\nclass Bar(Foo):\n pass\n\nl = [Foo.__bases__,\n Foo.__class__,\n Foo.__name__,\n Foo.__mro__,\n Bar.__bases__,\n Bar.__class__]\n\nfor i in l:\n print(i)\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.49172288179397583,
"alphanum_fraction": 0.5027590394020081,
"avg_line_length": 23.65151596069336,
"blob_id": "a5352cde5f464cc54639bab520b2c4e53af49018",
"content_id": "df9701d8aaa9d5168ccd4c491c5f7ef1b6a2133c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1681,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 66,
"path": "/month4/week5/python_day19/python_day19.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#\n# class OldboyStudent():\n# n = 0\n#\n# def __init__(self):\n# OldboyStudent.n += 1\n#\n# stu1 = OldboyStudent()\n# print(OldboyStudent.n)\n# stu2 = OldboyStudent()\n# print(OldboyStudent.n)\n# stu3 = OldboyStudent()\n# print(OldboyStudent.n)\n#\n# # print(stu1.n)\n# # print(stu2.n)\n# # print(stu3.n)\n\nclass Human:\n def __init__(self, name, human_species, attack, health_point=100):\n self.name = name\n self.human_species = human_species\n self.attack = attack\n self.health_point = health_point\n\n def bite(self, enemy):\n enemy.health_point -= self.attack\n print('''\n %s => %s 咬了 %s => %s\n %s 掉血 %s\n %s 生命值 %s\n ''' % (self.human_species,\n self.name,\n enemy.dog_breeds,\n enemy.name,\n enemy.dog_breeds,\n self.attack,\n enemy.dog_breeds,\n enemy.health_point))\n\nclass Dog:\n def __init__(self, name, dog_breeds, attack, health_point=100):\n self.name = name\n self.dog_breeds = dog_breeds\n self.attack = attack\n self.health_point = health_point\n\n def bite(self, enemy):\n enemy.health_point -= self.attack\n print('''\n %s => %s 咬了 %s => %s\n %s 掉血 %s\n %s 生命值 %s\n ''' % (self.dog_breeds,\n self.name,\n enemy.human_species,\n enemy.name,\n enemy.human_species,\n self.attack,\n enemy.human_species,\n enemy.health_point))\n\np = Human('袁誓隆', '黑人', 200)\nd = Dog('吴晨钰', '小奶狗', 5)\np.bite(d)\nd.bite(p)\n\n\n\n\n"
},
{
"alpha_fraction": 0.46457764506340027,
"alphanum_fraction": 0.486376017332077,
"avg_line_length": 24.241378784179688,
"blob_id": "619e39d893f8786c2c48ed845f6db472a6745d32",
"content_id": "7f9ddd4dd43515d475fa7340ee233054c06a4bb5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 858,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 29,
"path": "/month3/week2/python_day6/python_day6.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 读文件\n# f = open('a.txt', mode='r', encoding='utf-8')\n# # for i in range(1, 5):\n# # f.write('哈哈哈%s\\n' % i)\n# print('=========>1')\n# #print(f.readable())\n# print(f.readline().strip('\\n'))\n# print('=========>2')\n# #print(f.read())\n# print('=========>3')\n# print(f.readlines())\n# print('=========>4')\n# f.close()\n\n# 循环\n# with open('a.txt', encoding='utf-8') as f:\n# # 这种方式是最好的,内存只有一行数据\n# for line in f:\n# print(line, end='')\n# # 这种方式文件大的情况下,一次读取所有数据到内存,可能会耗尽内存\n# # for line in f.readlines():\n# # print(line, end='')\n\n# 2. 写文件\n# l = ['lines1\\n', 'lines2\\n', 'lines3\\n', 'lines4\\n']\n# with open('a.txt', 'w') as f:\n# for i in range(1, 5):\n# f.write('哈哈哈%s\\n' % i)\n# f.writelines(l)\n\n\n"
},
{
"alpha_fraction": 0.5052787661552429,
"alphanum_fraction": 0.5073161721229553,
"avg_line_length": 27.27225112915039,
"blob_id": "0c6ba4882d6336a09b9b5bcf5f6529db2035e3fa",
"content_id": "5f0f2df6246675bf6fcac08052dd31e47182f596",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5667,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 191,
"path": "/project/youku/version_v1/youkuClient/core/admin.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom client import tcpClient\nfrom conf import settings\n\nCOOKIES = {\n 'role': 'admin',\n 'name': None,\n 'session_id': None\n}\nclient = None\n\ndef login():\n common.show_green('登陆')\n if COOKIES['session_id']:\n common.show_red('用户不能重复登录')\n return\n while True:\n name = common.input_string('用户名')\n password = common.input_string('密码')\n params = {\n 'api': 'login',\n 'session_id': None,\n 'data_size': None,\n 'is_file': False,\n 'role': 'admin',\n 'name': name,\n 'password': password\n }\n res = client.send_data(params)\n if res['flag']:\n COOKIES['session_id'] = res['session_id']\n COOKIES['name'] = name\n common.show_green(res['message'])\n return\n else:\n common.show_red(res['message'])\n\ndef register():\n common.show_green('注册')\n while True:\n name = common.input_string('注册用户名')\n password = common.input_string('注册密码')\n password2 = common.input_string('确认密码')\n if password != password2:\n common.show_red('两次输入密码不一致!')\n continue\n params = {\n 'api': 'register',\n 'session_id': None,\n 'data_size': None,\n 'is_file': False,\n 'role': 'admin',\n 'name': name,\n 'password': password\n }\n res = client.send_data(params)\n if res['flag']:\n common.show_green(res['message'])\n return\n else:\n common.show_red(res['message'])\n\[email protected](COOKIES['role'])\ndef release_announcement():\n common.show_green('发布公告')\n while True:\n announcement = common.input_string('公告名')\n content = common.input_string('公告内容')\n params = {\n 'api': 'release_announcement',\n 'session_id': COOKIES['session_id'],\n 'data_size': None,\n 'is_file': False,\n 'role': 'admin',\n 'announcement': announcement,\n 'content': content\n }\n res = client.send_data(params)\n if res['flag']:\n common.show_green(res['message'])\n return\n else:\n common.show_red(res['message'])\n\n\ndef choose_upload_video():\n upload_video_list = common.get_upload_video_list()\n if not upload_video_list:\n common.show_red('当前无可上传视频!')\n 
return\n while True:\n common.show_info(*upload_video_list)\n choice = common.input_integer('请选择上传视频编号')\n if choice == 'q':\n return\n if choice < 0 or choice > len(upload_video_list):\n common.show_red('选择编号非法!')\n continue\n return upload_video_list[choice]\n\[email protected](COOKIES['role'])\ndef upload_video():\n common.show_green('上传视频')\n while True:\n file_name = choose_upload_video()\n if not file_name:\n return\n file_size = common.get_file_size(file_name)\n params = {\n 'api': 'upload_video',\n 'session_id': COOKIES['session_id'],\n 'data_size': None,\n 'is_file': True,\n 'role': 'admin',\n 'file_name': file_name,\n 'file_size': file_size\n }\n res = client.send_data(params)\n if res['flag']:\n common.show_green(res['message'])\n return\n else:\n common.show_red(res['message'])\n\ndef get_online_video():\n params = {\n 'api': 'get_online_video',\n 'session_id': COOKIES['session_id'],\n 'data_size': None,\n 'is_file': False,\n 'role': 'admin'\n }\n online_video_list = client.send_data(params)\n if not online_video_list:\n common.show_red('当前没有在线视频!')\n return\n while True:\n common.show_info(*online_video_list)\n choice = common.input_integer('请选择在线视频编号')\n if choice == 'q':\n return\n if choice < 0 or choice > len(online_video_list):\n common.show_red('选择编号非法!')\n continue\n return online_video_list[choice]\n\[email protected](COOKIES['role'])\ndef remove_video():\n common.show_green('删除视频')\n while True:\n file_name = get_online_video()\n if not file_name:\n return\n params = {\n 'api': 'remove_video',\n 'session_id': COOKIES['session_id'],\n 'data_size': None,\n 'is_file': False,\n 'role': 'admin',\n 'file_name': file_name\n }\n res = client.send_data(params)\n if res['flag']:\n common.show_green(res['message'])\n return\n else:\n common.show_red(res['message'])\n\ndef run():\n global client\n client = tcpClient.TcpClient(settings.server_address)\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [release_announcement, '发布公告'],\n '4': 
[upload_video, '上传视频'],\n '5': [remove_video, '删除视频']\n }\n while True:\n common.show_green('按\"q\"登出')\n common.show_menu(menu)\n choice = common.input_string('请输入平台编号')\n if choice == 'q':\n common.show_red('logout!')\n return\n if choice not in menu:\n common.show_red('选择编号非法!')\n continue\n menu[choice][0]()"
},
{
"alpha_fraction": 0.4629383087158203,
"alphanum_fraction": 0.4867161214351654,
"avg_line_length": 40.177547454833984,
"blob_id": "3e1e959c64cfec789e8ef7ec42ded1a89d31036a",
"content_id": "44f8899086926f34dc11238f020e4e28fd203e44",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 16361,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 383,
"path": "/test.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# -*- encoding: utf-8 -*-\n\nimport sys\n\nreload(sys)\nsys.setdefaultencoding('utf-8')\nimport time\nimport json\nimport struct\nimport socket\nimport urllib\nimport hashlib\nimport logging\nimport MySQLdb\nimport requests\nimport threading\n\n\nclass UcloudBandwidth():\n def __init__(self):\n self.sms_server = '106.75.51.26'\n self.sms_port = 45678\n self.sms_users = [\"operation\"] # operation: zhanglong\n logging.basicConfig(level=logging.DEBUG,\n format='%(asctime)s %(filename)s[line:%(lineno)d]:%(levelname)s:%(message)s',\n datefmt='%Y-%b-%d,%H:%M:%S',\n filename='check-eip-bandwidth-c.log',\n filemode='w')\n logging.getLogger(\"requests\").setLevel(logging.WARNING)\n self.PublicKey = \"[email protected]\"\n self.PrivateKey = \"882a7eddf37eb59b62e16d7f0bdcf77cf825bf00\"\n self.Password = \"R290eWUuMzY1\"\n self.ProjectId = \"org-1164\"\n self.Url = \"http://api.ucloud.cn\"\n self.region_bj1 = \"cn-bj1\"\n self.region_bj2 = \"cn-bj2\"\n self.region_hk = \"hk\"\n self.zone_A = \"cn-bj1-01\"\n self.zone_C = \"cn-bj2-03\"\n self.zone_HK = \"hk-01\"\n self.api_DescribeEIP = \"DescribeEIP\"\n self.api_DescribeShareBandwidth = \"DescribeShareBandwidth\"\n self.api_GetMetricOverview = \"GetMetricOverview\"\n self.api_DescribeBandwidthPackage = \"DescribeBandwidthPackage\"\n self.api_CreateBandwidthPackage = \"CreateBandwidthPackage\"\n self.log_normal = \"C区 %s-%s 带宽监控正常 %s%% %sKB/s %sM \"\n self.log_api_fail = \"C区 Ucloud EIP 监控接口调用失败,发送短信报警 error api: \"\n self.log_api_recovery = \"C区 Ucloud EIP 监控接口调用恢复正常,发送短信报 error api: \"\n self.log_buy_package = \"C区 %s-%s 当前带宽%sM,已购买%sM%s小时带宽包,购买后带宽%sM,发送短信报警\"\n self.log_buy_package_fail = \"C区 %s-%s 当前带宽%sM,购买%sM%s小时带宽包失败,发送报警短\"\n self.log_no_buy_package = \"C区 %s-%s 第%s次带宽超阈值,使用率%s%% 带宽速度%s,当前带宽%sM,已超过%sM不再购买带宽包\"\n self.log_share_eip_check = \"C区 共享带宽 %s-%s 使用率%s%% 带宽速度%s,当前带宽%sM\"\n self.log_keep_check = \"C区 %s-%s 第%s次带宽超阈值,使用率%s%% 带宽速度%s,当前带宽%sM,不发短信报警,持续监控\"\n self.log_eip_dict_null = 
\"\\033[31mC区 DescribeEIP 无此EIP %s\\033[0m\"\n self.log_eip_use_dict_null = \"\\033[31mC区 GetMetricOverview 无此EIP %s\\033[0m\"\n self.sms_api_fail = \"C区 Ucloud EIP 监控接口调用失败,请查看\"\n self.sms_api_recovery = \"C区 Ucloud EIP 监控接口调用恢复正常\"\n self.sms_buy_package = \"C区 %s-%s 当前带宽%sM,已购买%sM%s小时带宽包,购买后带宽%sM\"\n self.sms_buy_package_fail = \"C区 %s-%s 当前带宽%sM,购买%sM%s小时带宽包失败,请查看\"\n self.sms_no_buy_package = \"C区 %s-%s 带宽超阈值,使用率%s%% 带宽速度%s,基础带宽%sM,当前带宽%sM,已超过50M不再购买带宽包\"\n self.alert_bindwidth = 100\n self.interval = 30\n self.interval_buy_wait = 90\n\n def get_phone_list(self):\n conn = MySQLdb.connect(host=\"10.6.19.143\", user=\"root\", passwd=\"gotye2013\", db=\"gotye_deploy\",\n charset=\"utf8\")\n cursor = conn.cursor()\n\n phone_list = []\n for remark in self.sms_users:\n cursor.execute(\"select `phone_list` from `tbl_phone_list` where `remark`='%s'\" % remark)\n for phone in cursor.fetchall():\n phone_list.append(phone[0])\n conn.close()\n logging.debug(\"phone_list: %s\" % phone_list)\n return phone_list\n\n def send_sms(self, text):\n text = str(text)\n logging.debug(\"%s SMS text: %s\" % (type(text), text))\n phone_list = self.get_phone_list()\n address = (self.sms_server, self.sms_port)\n s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n s.connect(address)\n\n ntext = str(chr(len(text)) + text)\n msglen = len(ntext) + 4 + 12\n strc_format = str('<cHHI12s' + str(len(ntext)) + 's')\n\n for phone in phone_list:\n nphone = str(chr(len(phone)) + phone)\n msgcontant = struct.pack(strc_format, '0', msglen, 4, 0, nphone, ntext)\n s.send(msgcontant)\n s.close()\n\n def do_request(self, api_params):\n if api_params[\"Region\"] == \"cn-bj1\":\n region = \"A\"\n elif api_params[\"Region\"] == \"cn-bj2\":\n region = \"C\"\n api = api_params['Action']\n items = api_params.items()\n items.sort()\n params_data = \"\"\n for (k, v) in items:\n params_data += str(k) + str(v)\n params_data = params_data + self.PrivateKey\n sign = hashlib.sha1()\n sign.update(params_data)\n 
signature = sign.hexdigest()\n\n api_params[\"Signature\"] = signature\n api_params = sorted(api_params.items(), key=lambda api_params: api_params[0])\n url = self.Url + \"/?\" + urllib.urlencode(api_params)\n\n try:\n r = requests.get(url)\n except Exception as e:\n logging.error(\"\\033[31mHTTP ERROR%s\\033[0m\" % str(e))\n return\n try:\n r = json.loads(r.text)\n except:\n logging.error(\"\\033[31m%s to Json failed:\\033[0m %s\" % (api, r.text))\n if r['RetCode'] != 0:\n logging.error(\"\\033[31mRegion %s %s failed:\\033[0m %s\" % (region, api, r))\n logging.debug(\"\\033[32mRegion %s %s success:\\033[0m %s\" % (region, api, r['RetCode']))\n return r\n\n def DescribeEIP(self):\n api_params = {\n \"Action\": self.api_DescribeEIP,\n \"PublicKey\": self.PublicKey,\n \"ProjectId\": self.ProjectId,\n \"Region\": self.region,\n \"Offset\": 0,\n \"Limit\": 200\n }\n r = self.do_request(api_params)\n if not r:\n return -1\n eip_dict = {}\n if not r.has_key(\"EIPSet\"):\n return eip_dict\n for i in r['EIPSet']:\n if not i['Status'] == \"used\":\n continue\n eip = i['EIPAddr'][0]['IP']\n eip_id = i['EIPId']\n eip_bind = i['Bandwidth']\n hostname = i['Resource']['ResourceName']\n eip_dict[eip] = [eip_id, eip_bind, hostname]\n return eip_dict\n\n def DescribeShareBandwidth(self):\n api_params = {\n \"Action\": self.api_DescribeShareBandwidth,\n \"PublicKey\": self.PublicKey,\n \"ProjectId\": self.ProjectId,\n \"Region\": self.region\n }\n r = self.do_request(api_params)\n if not r:\n return -1\n share_eip_list = []\n if not r.has_key(\"DataSet\"):\n return share_eip_list\n if len(r[\"DataSet\"]) == 0:\n return share_eip_list\n if not r[\"DataSet\"][0].has_key(\"EIPSet\"):\n return share_eip_list\n for i in r[\"DataSet\"][0][\"EIPSet\"]:\n share_eip_list.append(i[\"EIPAddr\"][0][\"IP\"])\n return share_eip_list\n\n def GetMetricOverview(self):\n api_params = {\n \"Action\": self.api_GetMetricOverview,\n \"PublicKey\": self.PublicKey,\n \"ProjectId\": self.ProjectId,\n 
\"Region\": self.region,\n \"Zone\": self.zone,\n \"ResourceType\": \"eip\",\n \"Limit\": 200,\n \"Offset\": 0\n }\n r = self.do_request(api_params)\n if not r:\n return -1\n eip_use_dict = {}\n if not r.has_key('DataSet'):\n return eip_use_dict\n for i in r['DataSet']:\n if not i.has_key('NetworkOutUsage'):\n continue\n eip = i['PublicIps'][0]['IP']\n eip_use_dict[eip] = [i['NetworkOutUsage'], str(round(float(i['NetworkOut']) / 8 / 1024)) + \"KB/s\"]\n return eip_use_dict\n\n def DescribeBandwidthPackage(self):\n api_params = {\n \"Action\": self.api_DescribeBandwidthPackage,\n \"PublicKey\": self.PublicKey,\n \"ProjectId\": self.ProjectId,\n \"Region\": self.region,\n \"Limit\": 20,\n \"Offset\": 0\n }\n r = self.do_request(api_params)\n if not r:\n return -1\n eip_package_list = []\n for i in r['DataSets']:\n eip = i['EIPAddr'][0]['IP']\n bandwidth = i['Bandwidth']\n eip_package_list.append([eip, bandwidth])\n return eip_package_list\n\n def CreateBandwidthPackage(self, bandwidth, eip_id, timeRange):\n api_params = {\n \"Action\": self.api_CreateBandwidthPackage,\n \"PublicKey\": self.PublicKey,\n \"ProjectId\": self.ProjectId,\n \"Region\": self.region,\n \"Bandwidth\": bandwidth,\n \"EIPId\": eip_id,\n \"TimeRange\": timeRange\n }\n r = self.do_request(api_params)\n if r:\n return 0\n else:\n return -1\n\n def monitor(self, region, zone, eip_list, bandwidth, timeRange, eip_type=\"mix\"):\n self.region = region\n self.zone = zone\n i = 0\n eip_count = {}\n eip_nomal = {}\n for eip in eip_list:\n eip_count[eip] = 0\n eip_nomal[eip] = 0\n stop_sms = 0\n while 1:\n eip_dict = self.DescribeEIP()\n eip_package_list = self.DescribeBandwidthPackage()\n share_eip_list = self.DescribeShareBandwidth()\n eip_use_dict = self.GetMetricOverview()\n if -1 in (eip_dict, eip_package_list, share_eip_list, eip_use_dict):\n err_list = []\n i += 1\n if not eip_dict:\n logging.debug(\"eip_dict: %s\" % eip_dict)\n err_list.append(\"DescribeEIP\")\n elif not eip_package_list:\n 
logging.debug(\"eip_package_list: %s\" % eip_package_list)\n err_list.append(\"DescribeBandwidthPackage\")\n elif not share_eip_list:\n logging.debug(\"share_eip_list: %s\" % share_eip_list)\n err_list.append(\"DescribeShareBandwidth\")\n elif not eip_use_dict:\n logging.debug(\"eip_use_dict: %s\" % eip_use_dict)\n err_list.append(\"GetMetricOverview\")\n if i == 5:\n logging.error(self.log_api_fail % err_list)\n self.send_sms(self.sms_api_fail % err_list)\n time.sleep(self.interval)\n continue\n else:\n if i >= 5:\n logging.debug(self.log_api_recovery)\n self.send_sms(self.sms_api_recovery)\n i = 0\n if eip_type == \"share\" or eip_type == \"mix\":\n for eip in share_eip_list:\n if not eip_dict.has_key(eip):\n continue\n eip_bind = eip_dict[eip][1]\n eip_name = eip_dict[eip][2]\n if not eip_use_dict.has_key(eip):\n continue\n eip_percent = eip_use_dict[eip][0]\n eip_network = eip_use_dict[eip][1]\n logging.debug(\n self.log_share_eip_check % (eip_name, eip, eip_percent, eip_network, eip_bind))\n if eip_type == \"eip\" or eip_type == \"mix\":\n for eip in eip_list:\n eip_bind = 0\n for p in eip_package_list:\n if p[0] == eip:\n eip_bind += p[1]\n if not eip_dict.has_key(eip):\n continue\n eip_id = eip_dict[eip][0]\n eip_bind_1 = eip_dict[eip][1]\n if eip_bind == 0:\n eip_bind = eip_bind_1\n else:\n eip_bind += eip_bind_1\n eip_name = eip_dict[eip][2]\n if not eip_use_dict.has_key(eip):\n logging.error(self.log_eip_dict_null % eip)\n continue\n eip_percent = eip_use_dict[eip][0]\n eip_network = eip_use_dict[eip][1]\n if eip_percent > 85:\n eip_count[eip] += 1\n if eip in [\"120.132.92.17\", \"123.59.59.123\"]:\n if eip_count[eip] == 3:\n self.send_sms(\"%s-%s 带宽超阈值,使用率%s%%,网速%sKb/s,当前带宽%sM,请查看\" % (\n eip_name, eip, eip_percent, eip_network, eip_bind))\n continue\n if eip_bind <= self.alert_bindwidth:\n logging.error(self.log_keep_check % (\n eip_name, eip, eip_count[eip], eip_percent, eip_network, eip_bind))\n code = self.CreateBandwidthPackage(bandwidth, eip_id, 
timeRange)\n if code == 0:\n logging.warn(self.log_buy_package % (\n eip_name, eip, eip_bind, bandwidth, timeRange, eip_bind + bandwidth))\n self.send_sms(self.sms_buy_package % (\n eip_name, eip, eip_bind, bandwidth, timeRange, eip_bind + bandwidth))\n else:\n logging.error(self.log_buy_package_fail % (\n eip_name, eip, eip_bind, bandwidth, timeRange))\n self.send_sms(self.sms_buy_package_fail % (\n eip_name, eip, eip_bind, bandwidth, timeRange))\n time.sleep(self.interval_buy_wait)\n else:\n logging.error(self.log_no_buy_package % (\n eip_name, eip, eip_count[eip], eip_percent, eip_network, eip_bind,\n self.alert_bindwidth))\n if stop_sms == 0:\n self.send_sms(self.sms_no_buy_package % (\n eip_name, eip, eip_percent, eip_network, eip_bind_1, eip_bind))\n stop_sms = 1\n eip_nomal[eip] = 0\n else:\n eip_nomal[eip] += 1\n logging.info(self.log_normal % (eip_name, eip, eip_percent, eip_network, eip_bind))\n eip_count[eip] = 0\n if stop_sms == 1:\n eip_nomal[eip] += 1\n if eip_nomal[eip] == 30:\n stop_sms = 0\n eip_nomal[eip] = 0\n time.sleep(self.interval)\n\n def run(self):\n bandwidth = 5\n timeRange = 1\n eip_host = {\n \"123.59.59.60\": \"new-nginx-lua-1\",\n \"123.59.59.115\": \"new-share-nginx-1\",\n \"106.75.48.22\": \"new-siochat-1\",\n \"180.150.178.13\": \"new-siochat-2\",\n \"123.59.59.16\": \"new-ppt-p2p\",\n \"123.59.68.181\": \"Live-new-media-client-log\",\n \"123.59.139.123\": \"D-IM-Login-webApi\",\n \"123.59.65.64\": \"Live-nginx-lua-1\",\n \"123.59.56.231\": \"Live-nginx-lua-2\",\n \"123.59.87.167\": \"Live-108-nginx-liveApi-1\",\n \"120.132.84.140\": \"Live-108-liveApi-others-1\",\n \"123.59.56.131\": \"108-imsocketio-static\",\n \"120.132.85.50\": \"C-Live-108-LiveTask\",\n \"123.59.59.123\": \"C-Dev\",\n \"106.75.11.150\": \"C-Live-150\"\n }\n threads = []\n for eip in eip_host.keys():\n t = threading.Thread(target=UcloudBandwidth.monitor,\n args=(self, self.region_bj2, self.zone_C, [eip], bandwidth, timeRange))\n threads.append(t)\n 
t.setDaemon(True)\n t.start()\n for t in threads:\n t.join()\n\nif __name__ == \"__main__\":\n monitor = UcloudBandwidth()\n monitor.run()\n"
},
{
"alpha_fraction": 0.5629138946533203,
"alphanum_fraction": 0.5695364475250244,
"avg_line_length": 23.05555534362793,
"blob_id": "8005260368f7f499eeeeddb66857411767a2a568",
"content_id": "2477d0562b8e054155af3a0f265ba2019b0dd049",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 453,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 18,
"path": "/weektest/test2/ATM_zhangxiangyu/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\r\nimport os,json\r\nfrom conf import settings\r\n\r\ndef select(name):\r\n path = r'%s\\%s.json' %(settings.DB_PATH,name)\r\n if os.path.isfile(path):\r\n\r\n with open(path,'r',encoding='utf-8') as f:\r\n return json.load(f)\r\n else:\r\n return False\r\n\r\n\r\ndef update(user_dic):\r\n path = r'%s\\%s.json' % (settings.DB_PATH, user_dic['name'])\r\n with open(path,'w',encoding='utf-8') as w:\r\n json.dump(user_dic,w)\r\n\r\n"
},
{
"alpha_fraction": 0.5809128880500793,
"alphanum_fraction": 0.5815056562423706,
"avg_line_length": 25.203125,
"blob_id": "cdb63377cdf1b2988d404237790a3f7423e2ea2c",
"content_id": "ddb7952a26b37f38b8e0324be08dee417a2a3d6e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1735,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 64,
"path": "/project/elective_systems/version_v6/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\nclass Base:\n @classmethod\n def get_obj_by_name(cls, name):\n return db_handler.select(name, cls.__name__.lower())\n\n def save(self):\n return db_handler.save(self)\n\nclass Admin(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n\n @classmethod\n def register(cls, name, password):\n admin = Admin(name, password)\n return admin.save()\n\n def create_school(self, name, address):\n school = School(name, address)\n return school.save()\n\n def create_teacher(self, name, password):\n teacher = Teacher(name, password)\n return teacher.save()\n\n def create_course(self, name, price, cycle, school_name):\n course = Course(name, price, cycle, school_name)\n return course.save()\n\nclass School(Base):\n def __init__(self, name, address):\n self.name = name\n self.address = address\n\n def __str__(self):\n return '校区:%s 地址:%s' % (self.name, self.address)\n\nclass Teacher(Base):\n def __init__(self, name, password):\n self.name = name\n self.address = password\n self.course = []\n\n def __str__(self):\n return '名字:%s 课程:%s' % (self.name, self.course)\n\nclass Course(Base):\n def __init__(self, name, price, cycle, school_name):\n self.name = name\n self.price = price\n self.cycle = cycle\n self.school_name = school_name\n\n def __str__(self):\n return '课程:%s 价格:%s 周期:%s 校区:%s' % (self.name, self.price, self.cycle, self.school_name)\n\nclass Student(Base):\n def __init__(self, name):\n self.name = name\n\n\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5248194932937622,
"alphanum_fraction": 0.5566335916519165,
"avg_line_length": 17.0938777923584,
"blob_id": "d49d66fbda16bf61bc2fafb6d4f1a660fafe9c54",
"content_id": "31f49ceb4f846e46b3830c91584b2eb90f8bde2c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5380,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 245,
"path": "/month3/week1/python_day2/python_day2.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "'''\nname='egon'\nage=78\n'''\n\n# 数字类型:\n# 整型:int\n# 等级、年龄、身份证号、学号、id号\nlevel=10 # level=int(10)\nlevel=int(10)\n#print(id(level), type(level), level)\nage=18\nempid=123\n\n# 浮点数:float\n# 身高、体重、薪资\nsalary=3.1\nheight=1.83\n\n# 字符串: str 包含在引号(单引号、双引号、三引号)内的一串字符\n# 用来表示:名字、家庭住址、描述性的数据\ns1='egon'\ns2=\"egon\"\ns3='''\negon\n'''\ns4=\"\"\"\negon\n\"\"\"\n#print(s1, s2, s3, s4)\n\nmsg='hello \"egon\"'\nmsg=\"hello 'egon'\"\n#print(msg)\n\n# 字符串拼接:+ *\ns1='hello'\ns2='world'\n#print(s1+s2)\n#print(s1*10)\n\n# 列表:定义在[]中括号内用逗号分隔多个值,值可以是任意类型\n# 用来存放多个值:多个爱好、多个人名\nstu_names=['asb', 'egon', 'wsb'] # stu_names=list(['asb', 'egon', 'wsb'])\n#print(id(stu_names), type(stu_names), stu_names)\n#print(stu_names[0], stu_names[1])\n\nuser_info=['egon',18,['read', 'music', 'dancing', 'play']]\n#print(user_info[1], user_info[2][1])\n\n# 字典:定义在{}花括号内用逗号隔开,每一个元素都是key,value的形式,其中value可以是任意类型\nuser_info={'name': 'egon', 'age': 18, 'hobby': ['read', 'music', 'dancing', 'play']}\n#print(id(user_info), type(user_info), user_info)\n\n#print(user_info['age'])\n#print(user_info['hobby'][3])\n\ninfo={\n 'name':'egon',\n 'hobbies':['play','sleep'],\n 'company_info':{\n 'name':'Oldboy',\n 'type':'education',\n 'emp_num':40,\n }\n}\n#print(info['company_info']['name']) #取公司名\n\n\nstudents=[\n {'name':'alex','age':38,'hobbies':['play','sleep']},\n {'name':'egon','age':18,'hobbies':['read','sleep']},\n {'name':'wupeiqi','age':58,'hobbies':['music','read','sleep']},\n]\n#print(students[1]['hobbies'][1]) #取第二个学生的第二个爱好\n\n# 布尔类型:bool: True False\n# 用途:判断\nage_of_oldboy=18\n#print(age>18)\n\n#inp_age=input('your age: ')\n#inp_age=int(inp_age)\n\n#if inp_age > age_of_oldboy:\n# print('猜大了')\n#elif inp_age < age_of_oldboy:\n# print('猜小了')\n#else:\n# print('猜对了')\n\n# 布尔值的重点知识,所有自带数据类型布尔值\n# 1 只有三种类型的值为False: 0, None, 空\n# 2 其余全为True\n\n#if ['',]:\n# print('===>')\n\n\n# 可变类型与不可变类型\n# 可变:在id不变的情况,值可以改变,则称为可变类型,如列表、字典\n# 不可变类型:值一旦改变,id也改变,则称为不可变类型(id变,意味着创建了新的内存空间),如数字、字符串\nx=[1,2,3]\n#print(id(x), 
type(x), x)\nx[2]=6\n#print(id(x), type(x), x)\n\ndic={'x':'1', 'y':'2'}\n#print(id(dic), type(dic), dic)\ndic['x']=6\n#print(id(dic), type(dic), dic)\n\nline='-'*10\n#name=input('名字: ')\n#age=input('年龄: ')\n#sex=input('性别: ')\n#job=input('工作: ')\nmsg='''\n%s info of %s %s\nName: %s\nAge: %s\nSex: %s\nJob: %s\n%s end %s\n'''\n#print('My name is %s, age is %s' % ('John', 18))\n#print(msg % (line, name, line, name, age, sex, job, line, line))\n\n#print(10/3)\n#print(10//3)\n#print(10%3)\n#print(3**3)\n\n# 增量赋值\nage=18\n#age=age+1\nage+=2 #age=age+2\nage-=10\n#print(age)\n\n\n# 逻辑运算\n# and:逻辑与,用于连接左右两个条件都为True的情况下,and运算的最终结果才是 True\n#print(1>2 and 3>4)\n#print(2>3 and 3>4)\n#print(True and True and True and False)\n#print(True or False and False)\n\n\nsex='female'\nage=20\nis_beutiful=True\nis_successful=True\n\n#if sex == 'female' and age>18 and age<26 and is_beutiful:\n# print('表白...')\n# if is_successful:\n# print('在一起')\n# else:\n# print('表白失败')\n#else:\n# print('阿姨好')\n\nusername='egon'\npassword='123'\nname=input('name>>: ')\npwd=input('password>>: ')\n\ntag=True\nwhile tag:\n if name == username and pwd == password:\n print('login successful!')\n while tag:\n cmd=input('cmd>>: ')\n if cmd == 'quit':\n tag=False\n continue\n #break\n print('%s 命令在执行...' 
% cmd)\n #break\n else:\n print('name or password not valid')\n #print('====>')\n\n'''\n如果成绩>=90,那么:优秀\n如果成绩>=80,那么:良好\n如果成绩>=70,那么:普通\n否则,那么:很差\n'''\n\n# score=input('score>>: ')\n# score=int(score)\n# if score >= 90:\n# print('优秀')\n# elif score >= 80 and score < 90:\n# print('良好')\n# elif score >= 70 and score < 80:\n# print('普通')\n# elif score < 70:\n# print('很差')\n\n# while:条件循环\n# import time\n# count=1\n# while count <= 3:\n# print('====>', count)\n# count+=1\n# time.sleep(0.1)\n\n# break: 跳出本层循环\n# age_of_oldboy=18\n# count=1\n# while 1:\n# if count > 3:\n# print('try too many times')\n# break\n# inp_age=input('your age: ')\n# inp_age=int(inp_age)\n# if inp_age > age_of_oldboy:\n# print('猜大了')\n# elif inp_age < age_of_oldboy:\n# print('猜小了')\n# else:\n# print('猜对了')\n# break\n# print('猜的次数: ', count)\n# count+=1\n\n\n# continue:跳过本次循环,进入下次循环\n# count=1\n# while count<5:\n# if count == 3:\n# count+=1\n# continue\n# print(count)\n# count+=1\n\n# while True:\n# print('====>')\n# continue\n# print('====>')\n# print('====>')\n# print('====>')"
},
{
"alpha_fraction": 0.4872449040412903,
"alphanum_fraction": 0.5561224222183228,
"avg_line_length": 23.25,
"blob_id": "5f6aa97b512bf495241f83cec457c3c3e7598b9c",
"content_id": "94edcac0d3913d2109044e28b48e945af46d901f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 16,
"path": "/month4/week6/python_day25/python_day25_client.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport socket\n\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.connect(('127.0.0.1', 8080))\nwhile True:\n msg = s.recv(1024)\n print('\\033[31m新消息:%s\\033[0m' % msg.decode('utf-8'))\n data = input('>>>>: ').strip()\n if msg.decode('utf-8') == 'Goodbye!':\n break\n s.send(data.encode('utf-8'))\n s.send()\n print('接收消息中 ..')\ns.close()\n\n\n\n\n"
},
{
"alpha_fraction": 0.5413603186607361,
"alphanum_fraction": 0.5670955777168274,
"avg_line_length": 28.2702693939209,
"blob_id": "6ac1afbdb88315c605fe0159b3acf41d8fa8bc69",
"content_id": "053da7ed6b7804dde6752142413e4718d0a6d50d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1136,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 37,
"path": "/project/elective_systems/version_v5/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#-*- encoding: utf-8 -*-\n\nimport os\nimport logging.config\nfrom conf import settings\n\n\ndef auth(role):\n from core import admin, teacher, student\n def handler(func):\n def wrapper(*args, **kwargs):\n if role == 'admin' and not admin.CURRENT_USER:\n print('\\033[31m登陆用户未登录,跳转到登陆!\\033[0m')\n admin.login()\n return\n if role == 'teacher' and not teacher.CURRENT_USER:\n print('\\033[31m请先登陆!\\033[0m')\n teacher.login()\n return\n if role == 'student' and not student.CURRENT_USER:\n print('\\033[31m请先登陆!\\033[0m')\n student.login()\n return\n return func(*args, **kwargs)\n return wrapper\n return handler\n\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\ndef get_object_list(type_name):\n obj_path = os.path.join(settings.BASE_DB, type_name)\n if not os.path.exists(obj_path):\n return\n return os.listdir(obj_path)\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5319464802742004,
"alphanum_fraction": 0.5468053221702576,
"avg_line_length": 26.91666603088379,
"blob_id": "48186188317bcd23c0b390e33b1b042877bef19b",
"content_id": "6257fad160ce77c3f6226ea633cdc80d22352e8a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 683,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 24,
"path": "/project/elective_systems/version_v3/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport logging.config\nfrom conf import settings\n\ndef auth(name, role):\n from core import admin, teacher, student\n def handler(func):\n def wrapper(*args, **kwargs):\n if name:\n return func(*args, **kwargs)\n print('\\033[31m请先登录!\\033[0m')\n if role == 'admin':\n admin.login()\n if role == 'teacher':\n teacher.login()\n if role == 'student':\n student.login()\n return wrapper\n return handler\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\n\n\n"
},
{
"alpha_fraction": 0.38680317997932434,
"alphanum_fraction": 0.4664391279220581,
"avg_line_length": 18.10869598388672,
"blob_id": "614938d0df0adaca8e32b99b866d179bfd65764e",
"content_id": "d0aa7d6bb6e6b29b21b8c287bb8f66ba5d0f9511",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 939,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 46,
"path": "/project/elective_systems/version_v3/core/teacher.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nUSER = {'name': None}\nROLE = 'admin'\n\n\ndef login():\n print('\\033[32m\\033[0m')\n\n\ndef register():\n print('\\033[32m\\033[0m')\n\n\ndef create_school():\n print('\\033[32m\\033[0m')\n\n\ndef create_teacher():\n print('\\033[32m\\033[0m')\n\n\ndef create_course():\n print('\\033[32m\\033[0m')\n\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [create_school, '创建学校'],\n '4': [create_teacher, '创建老师'],\n '5': [create_course, '创建课程']\n }\n while 1:\n print('=' * 30)\n for k,v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择功能编号[q to exit] >>: ').strip()\n if choice == 'q':\n break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n menu[choice][0]()\n"
},
{
"alpha_fraction": 0.461140900850296,
"alphanum_fraction": 0.4915773272514343,
"avg_line_length": 18.492536544799805,
"blob_id": "aba4bb03cb708f9976d3db546b18a501408b4a30",
"content_id": "6c5d8e97c924d07d5b8f4386aeb4a0f2e5eca265",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6660,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 268,
"path": "/month3/week2/python_day4/python_day4_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "一、字符串练习\n# 写代码,有如下变量,请按照要求实现每个功能 (共6分,每小题各0.5分)\nname = \" aleX\"\n# 1) 移除 name 变量对应的值两边的空格,并输出处理结果\n# 2) 判断 name 变量对应的值是否以 \"al\" 开头,并输出结果
\n# 3) 判断 name 变量对应的值是否以 \"X\" 结尾,并输出结果
\n# 4) 将 name 变量对应的值中的 “l” 替换为 “p”,并输出结果\n# 5) 将 name 变量对应的值根据 “l” 分割,并输出结果。\n# 6) 将 name 变量对应的值变大写,并输出结果
\n# 7) 将 name 变量对应的值变小写,并输出结果
\n# 8) 请输出 name 变量对应的值的第 2 个字符?\n# 9) 请输出 name 变量对应的值的前 3 个字符?\n# 10) 请输出 name 变量对应的值的后 2 个字符?
\n# 11) 请输出 name 变量对应的值中 “e” 所在索引位置?
\n# 12) 获取子序列,去掉最后一个字符。如: oldboy 则获取 oldbo\n\n1\nname = \" aleX\"\nname = name.strip()\nprint(name)\n\n2\nname = \" aleX\"\nif name[:2] == 'al':\n print('变量 name = %s, 是以 \"al\" 开头' % name)\nelse:\n print('变量 name = %s, 不是以 \"al\" 开头' % name)\n\n3\nname = \" aleX\"\nif name[-1] == 'X':\n print('变量 name = %s, 是以 \"X\" 结尾' % name)\nelse:\n print('变量 name = %s, 不是以 \"X\" 结尾' % name)\n\n4\nname = \" aleX\"\nif 'l' in name:\n name = name.replace('l', 'p')\n print(name)\n\n5\nname = \" aleX\"\nname = name.split('l')\nprint(name)\n\n6\nname = \" aleX\"\nname = name.upper()\nprint(name)\n\n7\nname = \" aleX\"\nname = name.lower()\nprint(name)\n\n8\nname = \" aleX\"\nprint(name[1])\n\n9\nname = \" aleX\"\nprint(name[:3])\n\n10\nname = \" aleX\"\nprint(name[-2:])\n\n11\nname = \" aleX\"\nindex = name.find('e')\nprint(index)\n\n12\nname = \" aleX\"\nname = name[:-1]\nprint(name)\n\n二、列表练习\n1. 有列表data=['alex',49,[1900,3,18]],分别取出列表中的名字,年龄,出生的年,月,日赋值给不同的变量\nname = data[0]\nage = data[1]\nyear = data[2][0]\nmonth = data[2][1]\nday = data[2][2]\n\n2. 用列表模拟队列\nl = []\n入站\nl.append(1)\nprint(l)\nl.append(2)\nprint(l)\nl.append(3)\nprint(l)\n出站\nl.pop(0)\nprint(l)\nl.pop(0)\nprint(l)\nl.pop(0)\nprint(l)\n\n3. 
用列表模拟堆栈\nl = []\n入站\nl.append(1)\nprint(l)\nl.append(2)\nprint(l)\nl.append(3)\nprint(l)\n出站\nl.pop()\nprint(l)\nl.pop()\nprint(l)\nl.pop()\nprint(l)\n\n三、for循环练习\n1、使用嵌套for循环打印99乘法表(补充:不换行打印的方法为print('xxxx',end=''))\n 提示:\n for i in range(...):\n for j in range(...):\n ...\n\nfor i in range(1, 10):\n for j in range(1, i+1):\n p = i * j\n print('%s*%s=%s ' % (i, j, p), end='')\n print()\n\n2、使用嵌套for循环打印金字塔,金字塔层数为5层,要求每一个空格、每一个*都必须单独打印\n *\n ***\n *****\n *******\n *********\n\n 提示:\n 一个for循环套两个小的for循环,两个小的for一循环,一个控制打印空格,一个控制打印*\n\n 思路参考:http://www.cnblogs.com/linhaifeng/articles/7133167.html#_label14\n\n\nl = 5\nfor i in range(1, 6):\n for j in range(l-i):\n print(' ', end='')\n for k in range(2*i-1):\n print('*', end='')\n print()\n\n四:购物车程序\n\t\t#需求:\n\t\t启动程序后,先认证用户名与密码,认证成功则让用户输入工资,然后打印商品列表的详细信息,商品信息的数据结构可以像下面这种格式,也可以自己定制\n\t\tgoods=[\n\t\t{'name':'mac','price':20000},\n\t\t{'name':'lenovo','price':10000},\n\t\t{'name':'apple','price':200},\n\t\t{'name':'tesla','price':1000000},\n\n\t\t]\n\n\t\t失败则重新登录,超过三次则退出程序\n\t\t允许用户根据商品编号购买商品\n\t\t用户选择商品后,检测余额是否够,够就直接扣款,不够就提醒\n\t\t可随时退出,退出时,打印已购买商品和余额\n\n\nusers = {\n 'egon': {\n 'password': '123',\n 'money': 0,\n 'goods': {}\n },\n 'alex': {\n 'password': '123',\n 'money': 0,\n 'goods': {}\n }\n}\ngoods = {\n 'mac': {\n 'price': 20000\n },\n 'lenovo': {\n 'price': 10000\n },\n 'apple': {\n 'price': 200\n },\n 'tesla': {\n 'price': 1000000\n }\n}\ni = 0\nline = '='*25\ntag = True\nwhile tag:\n inp_name = input('name>>: ').strip()\n inp_pwd = input('password>>: ')\n if inp_name not in users:\n print('用户名错误!')\n i += 1\n else:\n if inp_pwd != users[inp_name]['password']:\n print('密码错误!')\n i += 1\n else:\n print('登陆成功!')\n i = 0\n if i > 0 and i < 3:\n continue\n if i == 3:\n print('尝试次数过多,锁定用户')\n tag = False\n continue\n while tag:\n salary = input('请输入工资 >>: ')\n if salary == 'quit':\n tag = False\n continue\n if not salary.isdigit():\n print('工资无效,请输入整数')\n continue\n else:\n salary = int(salary)\n 
users[inp_name]['money'] = salary\n gd = {}\n while tag:\n print(line)\n for k,v in enumerate(goods):\n print('%-6s %-10s %-10s' % (k, v, goods[v]))\n gd[k] = v\n print(line)\n code = input('请选择要购买的商品编号 >>: ')\n if code == 'quit':\n tag = False\n continue\n if code not in gd:\n print('商品编码错误!')\n continue\n good = gd[code]\n if users[inp_name]['money'] >= goods[good]['price']:\n if good not in users[inp_name]['goods']:\n users[inp_name]['goods'][good] = 1\n else:\n users[inp_name]['goods'][good] = users[inp_name]['goods'][good] + 1\n users[inp_name]['money'] = users[inp_name]['money'] - goods[good]['price']\n print('商品 %s 已加入购物车,输入list命令查看购物车' % good)\n cmd = input('继续购物输入Y,结账输入N >>: ').lower()\n if cmd == 'list':\n print('已经购买的商品:\\n%s' % users[inp_name]['goods'])\n if cmd == 'y':\n continue\n if cmd == 'n' or cmd == 'quit':\n print('用户名: %s\\n购买商品: %s\\n账户余额: %s' % (inp_name, users[inp_name]['goods'], users[inp_name]['money']))\n tag = False\n else:\n print('账户余额不足')\n"
},
{
"alpha_fraction": 0.5569105744361877,
"alphanum_fraction": 0.6148374080657959,
"avg_line_length": 26.33333396911621,
"blob_id": "39c946f2c72e91283f1539006f3eb9891c325597",
"content_id": "55fa16a3d7f458340bf9bf006be7223034e0c8d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1324,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 36,
"path": "/month4/week4/python_day14/python_day14_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月3号作业:\n# 1、求文件a.txt中总共包含的字符个数?思考为何在第一次之后的n次sum求和得到的结果为0?\nwith open(r'a.txt') as f:\n total_count = sum(len(line) for line in f)\n print(total_count)\n\n# 2、思考题\n# with open('a.txt',encoding='utf-8') as f:\n# g=(len(line) for line in f)\n# print(sum(g))\n\n#答:g 通过生成器表达式'len(line) for line in f'得到一个生成器,sum 循环生成器g里面的每个值相加得到总的字符个数。\n\n# 3、文件shopping.txt内容如下\n# mac,2000,3\n# lenovo,3000,10\n# tesla,1000000,10\n# chicken,200,1\n#\n# 求总共花了多少钱?\n# 打印出所有的商品信息,格式为\n# [{'name':'xxx','price':'3333','count':3},....]\n# 求单价大于10000的商品信息,格式同上\n\nwith open(r'shopping.txt') as f:\n g = (line.strip('\\n').split(',') for line in f)\n l = [{'name':name,'price':int(price),'count':int(count)} for name,price,count in g]\n\n cost = sum(map(lambda x:(x['price'] * x['count']), l))\n print('本次购物总共花费了 %s' % cost)\n print(l)\n\n good_above_10000 = list(filter(lambda x:x['price']>10000, l))\n print(good_above_10000)\n\n# 4、改写ATM作业,将重复用到的功能放到模块中,然后通过导入的方式使用\n"
},
{
"alpha_fraction": 0.44157862663269043,
"alphanum_fraction": 0.47700434923171997,
"avg_line_length": 26.279661178588867,
"blob_id": "5cfe9f89b426335912e5a3b57b69e283c72e0072",
"content_id": "6d5b41778fda3ae5aba27dc87f4e1b0018ef836d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3550,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 118,
"path": "/project/elective_systems/version_v2/core/admin.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom interface import common_api, admin_api\n\n\nUSER = {'name': None}\nROLE = 'admin'\n\ndef login():\n print('\\033[32m登陆\\033[0m')\n if USER['name']:\n print('\\033[32m已登陆,不能重复登录!\\033[0m')\n while True:\n name = input('请输入登陆名 >>: ').strip()\n pwd = input('请输入登陆密码 >>: ').strip()\n flag, msg = common_api.login(name, pwd, ROLE)\n if flag:\n USER['name'] = name\n print(msg)\n return\n else:\n print(msg)\n\ndef register():\n print('\\033[32m注册\\033[0m')\n if USER['name']:\n print('\\033[32m已登陆,不能注册!\\033[0m')\n while True:\n name = input('请输入注册用户名 >>: ').strip()\n pwd = input('请输入注册密码 >>: ').strip()\n pwd2 = input('请确认注册密码 >>: ').strip()\n if pwd != pwd2:\n print('\\033[31m两次密码输入不一致!\\033[0m')\n continue\n flag, msg = admin_api.register(name, pwd)\n if flag:\n print(msg)\n return\n else:\n print(msg)\n\[email protected](USER['name'], ROLE)\ndef create_school():\n print('\\033[32m创建学校\\033[0m')\n while True:\n name = input('请输入学校名 >>: ').strip()\n addr = input('请输入学校地址 >>: ').strip()\n flag, msg = admin_api.create_school(name, addr)\n if flag:\n print(msg)\n return\n else:\n print(msg)\n\[email protected](USER['name'], ROLE)\ndef create_teacher():\n print('\\033[32m创建老师\\033[0m')\n while True:\n name = input('请输入老师名 >>: ').strip()\n flag, msg = admin_api.create_teacher(name)\n if flag:\n print(msg)\n return\n else:\n print(msg)\n\[email protected](USER['name'], ROLE)\ndef create_course():\n print('\\033[32m创建课程\\033[0m')\n while True:\n schools = admin_api.get_schools()\n print('-' * 30)\n d = {}\n for index, school in enumerate(schools):\n print('%-4s %-10s' % (index, school))\n d[str(index)] = school\n print('-' * 30)\n while True:\n choice = input('请选择校区编号 >>: ').strip()\n if choice not in d:\n print('选择校区编号非法!')\n continue\n school_name = d[choice]\n break\n name = input('请输入课程名 >>: ').strip()\n price = input('请输入课程价格 >>: ').strip()\n cycle = input('请输入课程周期 >>: ').strip()\n flag, msg = 
admin_api.create_course(name, price, cycle, school_name)\n if flag:\n print(msg)\n return\n else:\n print(msg)\n\ndef run():\n menu = {\n '1': [login, '登陆'],\n '2': [register, '注册'],\n '3': [create_school, '创建学校'],\n '4': [create_teacher, '创建老师'],\n '5': [create_course, '创建课程'],\n }\n while True:\n print('=' * 30)\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n print('=' * 30)\n choice = input('请选择操作编号[q to exit] >>: ').strip()\n if choice == 'q':\n break\n if choice not in menu:\n print('\\033[31m选择编号非法!\\033[0m')\n continue\n try:\n menu[choice][0]()\n except Exception as e:\n print('\\033[31merror from admin: %s\\033[0m' % e)"
},
{
"alpha_fraction": 0.6558061838150024,
"alphanum_fraction": 0.6566416025161743,
"avg_line_length": 32.11111068725586,
"blob_id": "cae2149c3058785448f3c5726f4218e54a1ec39b",
"content_id": "3d0f0c57a0b347a4d50a48e593f7919db0e9ed62",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1347,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 36,
"path": "/project/elective_systems/version_v8/interface/teacher_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import modules\n\ndef get_teach_course(name):\n teacher = modules.Teacher.get_obj_by_name(name)\n return teacher.course_list\n\ndef choose_teach_course(name, course):\n teacher = modules.Teacher.get_obj_by_name(name)\n if course in teacher.course_list:\n return False, '老师%s不能选择已教授的课程!' % name\n if teacher.choose_course(course):\n return True, '老师%s选择教授课程%s成功!' % (name, course)\n else:\n return False, '老师%s选择教授课程%s失败!' % (name, course)\n\ndef get_course_student(course):\n course = modules.Course.get_obj_by_name(course)\n return course.student_list\n\ndef get_student_course(name):\n student = modules.Student.get_obj_by_name(name)\n if not student:\n return False, '学生%s不存在!' % name\n else:\n return True, student.course_list\n\ndef change_student_score(teacher, name, course, score):\n student = modules.Student.get_obj_by_name(name)\n if not student:\n return False, '学生%s不存在!' % name\n if student.change_score(course, score):\n return True, '老师%s修改学生%s课程%s分数为%s成功!' % (teacher, name, course, score)\n else:\n return False, '老师%s修改学生%s课程%s分数为%s失败!' % (teacher, name, course, score)\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6289262175559998,
"alphanum_fraction": 0.6318480372428894,
"avg_line_length": 26.39583396911621,
"blob_id": "4cb474601bbe0bf0721d701770d4500a5183278b",
"content_id": "6058e73777e690ff7052abb76f64cabe8548b7d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1453,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 48,
"path": "/weektest/test2/ATM_zhangxiangyu/interface/bank.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\r\n\r\nfrom db import db_handler\r\nfrom lib import common\r\n\r\nbank_log = common.get_logger('Bank')\r\n\r\n#查看余额\r\ndef get_account(name):\r\n user_dic = db_handler.select(name)\r\n return user_dic['account']\r\n\r\n\r\n#transfer\r\ndef transfer_interface(to_user,from_user,account):\r\n to_user_dic = db_handler.select(to_user)\r\n from_user_dic = db_handler.select(from_user)\r\n\r\n to_user_dic['account'] +=account\r\n from_user_dic['account'] -=account\r\n\r\n from_user_dic['flow_log'].append(r'%s收到%s转账%s元' %(to_user,from_user,account))\r\n to_user_dic['flow_log'].append(r'%s向%s转账%s元' %(from_user,to_user,account))\r\n\r\n bank_log.info(r'%s收到%s转账%s元' % (to_user, from_user, account))\r\n bank_log.info(r'%s向%s转账%s元' % (from_user, to_user, account))\r\n\r\n db_handler.update(to_user_dic)\r\n db_handler.update(from_user_dic)\r\n\r\n\r\n\r\n\r\n\r\ndef repay_interface(name,account):\r\n user_dic = db_handler.select(name)\r\n user_dic['account'] +=account\r\n user_dic['flow_log'].append(r'%s还款%s成功!' %(name,account))\r\n bank_log.info( r'%s还款%s成功!' %(name,account))\r\n db_handler.update(user_dic)\r\n\r\n\r\ndef withdraw(name,account):\r\n user_dic = db_handler.select(name)\r\n user_dic['account'] -= account*1.05\r\n user_dic['flow_log'].append(r'%s提现%s成功!' % (name, account))\r\n bank_log.info( r'%s提现%s成功!' % (name, account))\r\n db_handler.update(user_dic)\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.4723944365978241,
"alphanum_fraction": 0.5101860165596008,
"avg_line_length": 21.0849666595459,
"blob_id": "7c65e478ea981ad313f324dbc128cdbda38bf470",
"content_id": "d535b7f861ed7132f1fe8ab632a49585d163c712",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4149,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 153,
"path": "/month3/week2/python_day5/python_day5_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 一、元组\n# 1. 列表与元组的区别,是元组不可修改,而列表可以。\n#\n# 二、for 循环练习\n# 1.\n# 字符串\n# goods = 'hello'\n# for number, good in enumerate(goods):\n# print(number, good)\n# 列表\n# goods = ['mac', 'apple', 'iphone', 'tesla']\n# for number, good in enumerate(goods):\n# print(number, good)\n# 字典\n# goods = {'mac': 10000, 'apple': 200, 'iphone': 8000, 'tesla': 20000}\n# for number, good in enumerate(goods):\n# print(number, good)\n\n# 2.\n# 结果:程序打印了三个元组,元组内容是字典 {'name':'egon','age':18,'sex':'male'} 的索引和key:\n# (0, 'name')\n# (1, 'age')\n# (2, 'sex')\n# 解释:enumerate() 用于将一个可遍历的数据对象(字符串、列表、字典)组合为一个索引序列,同时列出索引和数据。\n\n# 三、简单购物车\n# msg_dic = {\n# 'apple': 10,\n# 'tesla': 100000,\n# 'mac': 3000,\n# 'lenovo': 30000,\n# 'chicken': 10\n# }\n#\n# goods = []\n# while 1:\n# for name in msg_dic:\n# print('商品名: %-10s 价格: %-10s' % (name, msg_dic[name]))\n# good_name = input('商品名称 >>: ')\n# if good_name not in msg_dic:\n# continue\n# price = msg_dic[good_name]\n# while 1:\n# count = input('购买个数 >>: ')\n# if count.isdigit():\n# count = int(count)\n# break\n# info = {\n# 'good_name': good_name,\n# 'price': price,\n# 'count': count\n# }\n# goods.append(info)\n# print(goods)\n\n# 四、字典练习\n# 1. 字典是无序的;\n# 2. 三种方式取出字典中的key和value:\nmsg_dic = {\n 'apple': 10,\n 'tesla': 100000,\n 'mac': 3000,\n 'lenovo': 30000,\n 'chicken': 10,\n}\n# 1)\n# for k,v in msg_dic.items():\n# print('key: %s, value: %s' % (k, v))\n# # 2)\n# for k in msg_dic:\n# print('key: %s, value: %s' % (k, msg_dic[k]))\n# 3)\n# for k,v in enumerate(msg_dic):\n# print('key: %s, value: %s' % (v, msg_dic[v]))\n# 3. 可以使用字典的 get() 方法取值,如果 key 不存在不会报错。\n# 4. l = [11,22,33,44,55,66,77,88,99,90...]\n# d = {\n# 'k1': [],\n# 'k2': []\n# }\n# for i in l:\n# if i > 66:\n# d['k1'].append(i)\n# if i < 66:\n# d['k2'].append(i)\n# print(d)\n\n# 5. 统计s='hello alex alex say hello sb sb'中每个单词的个数\n# s='hello alex alex say hello sb sb'\n# l = s.split()\n# d = {}\n# for i in l:\n# # d[i] = l.count(i)\n# d.setdefault(i, l.count(i))\n# print(d)\n\n# 五、集合练习\n# 1. 
关系运算\n# pythons={'alex','egon','yuanhao','wupeiqi','gangdan','biubiu'}\n# linuxs={'wupeiqi','oldboy','gangdan'}\n# 1)求出即报名python又报名linux课程的学员名字集\n# s = pythons & linuxs\n# print(s)\n# 2) 求出所有报名的学生名字集合\n# s = pythons | linuxs\n# print(s)\n# 3) 求出只报名python课程的学员名字\n# s = pythons - linuxs\n# print(s)\n# 4) 求出没有同时这两门课程的学员名字集合\n# s = pythons ^ linuxs\n# print(s)\n\n# 2. 去重\n# 1)有列表l=['a','b',1,'a','a'],列表元素均为可hash类型,去重,得到新列表,且新列表无需保持列表原来的顺序\n# l = ['a','b',1,'a','a']\n# new_l = list(set(l))\n# print(new_l)\n\n# 2) 在上题的基础上,保存列表原来的顺序\n# l = ['a','b',1,'a','a']\n# n = []\n# for i in l:\n# if i not in n:\n# n.append(i)\n# print(n)\n\n# 3) 去除文件中重复的行,肯定要保持文件内容的顺序不变\n# n = []\n# with open('t.txt', 'r') as f:\n# msg = f.readlines()\n# for i in msg:\n# i = i.strip('\\n')\n# print(i)\n# if i not in n:\n# n.append(i)\n# with open('t.txt', 'w') as f:\n# for i in n:\n# f.write(i+'\\n')\n\n# 4) 有如下列表,列表元素为不可hash类型,去重,得到新列表,且新列表一定要保持列表原来的顺序\n# l = [\n# {'name':'egon','age':18,'sex':'male'},\n# {'name':'alex','age':73,'sex':'male'},\n# {'name':'egon','age':20,'sex':'female'},\n# {'name':'egon','age':18,'sex':'male'},\n# {'name':'egon','age':18,'sex':'male'},\n# ]\n# n = []\n# for i in l:\n# if i not in n:\n# n.append(i)\n# print(n)\n\n\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.5858778357505798,
"alphanum_fraction": 0.5916030406951904,
"avg_line_length": 26.578947067260742,
"blob_id": "8a3efc15994cc02d25f4bbc7cc1102639c7b21b7",
"content_id": "6deb767d1a1659147b309e96b516e2e12f368f66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 524,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 19,
"path": "/project/shooping_mall/version_v6/db/db_handler.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport json\nfrom conf import settings\n\ndef read(name):\n path = os.path.join(settings.BASE_DB, '%s.json' % name)\n if not os.path.exists(path) or os.path.isdir(path):\n return\n with open(r'%s' % path, 'r', encoding='utf-8') as f:\n return json.load(f)\n\ndef write(dic):\n path = os.path.join(settings.BASE_DB, '%s.json' % dic['name'])\n with open(r'%s' % path, 'w', encoding='utf-8') as f:\n json.dump(dic, f)\n if os.path.exists(path):\n return True\n"
},
{
"alpha_fraction": 0.5819070935249329,
"alphanum_fraction": 0.5831295847892761,
"avg_line_length": 22.285715103149414,
"blob_id": "07b1db9d66bd610dc88045fe8d3a97ea819552f8",
"content_id": "31237c3331bb005541db71722b99568084e4d643",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 818,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 35,
"path": "/project/elective_systems/version_v3/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\n\nclass Base:\n @classmethod\n def get_obj_by_name(cls, name):\n return db_handler.select(name, cls.__name__)\n\n def save(self):\n db_handler.save(self)\n\nclass Admin(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.save()\n\n def create_school(self, name, addr):\n school = School(name, addr)\n school.save()\n\n def create_teacher(self, name, password):\n teacher = Teacher(name, password)\n teacher.save()\n\n def create_course(self, name, price, cycle, school):\n course = Course(name, price, cycle, school)\n course.save()\n\nclass School(Base):\n def __init__(self, name, addr):\n self.name = name\n self.addr = addr\n\n\n\n"
},
{
"alpha_fraction": 0.625806450843811,
"alphanum_fraction": 0.6286738514900208,
"avg_line_length": 32.16666793823242,
"blob_id": "9393e33caa38cdaf1fa84043b4b6a38527b19518",
"content_id": "7c7e3706fb0036f2b15ecada659fa0eea01f868d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1561,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 42,
"path": "/project/elective_systems/version_v8/interface/admin_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import modules\n\ndef register(name, password):\n admin = modules.Admin.get_obj_by_name(name)\n if admin:\n return False, '用户%s不能重复注册!' % name\n if modules.Admin.register(name, password):\n return True, '用户%s注册成功!' % name\n else:\n return False, '用户%s注册失败!' % name\n\ndef create_school(admin_name, name, address):\n school = modules.School.get_obj_by_name(name)\n if school:\n return False, '学校%s已存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_school(name, address):\n return True, '学校%s创建成功!' % name\n else:\n return False, '学校%s创建失败!' % name\n\ndef create_teacher(admin_name, name, password='123'):\n teacher = modules.Teacher.get_obj_by_name(name)\n if teacher:\n return False, '老师%s已存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_teacher(name, password):\n return True, '老师%s创建成功!' % name\n else:\n return False, '老师%s创建失败!' % name\n\ndef create_course(admin_name, name, price, cycle, school_name):\n course = modules.School.get_obj_by_name(name)\n if course:\n return False, '课程%s已存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_course(name, price, cycle, school_name):\n return True, '课程%s创建成功!' % name\n else:\n return False, '课程%s创建失败!' % name\n\n\n"
},
{
"alpha_fraction": 0.503531813621521,
"alphanum_fraction": 0.5196771025657654,
"avg_line_length": 28.08823585510254,
"blob_id": "063bbbc9e871815b708a57c416c8f72b402dfd5c",
"content_id": "f39ef7e05cc5d9822ce5d25030e4d3479bddc5cf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1003,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 34,
"path": "/month4/week7/python_day27/python_day27_server.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1. 基于TCP的套接字\nimport socket\nimport struct\nimport json\n\nserver = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nserver.bind(('127.0.0.1', 8080))\nserver.listen(1)\n\nwhile True:\n try:\n conn, client_addr = server.accept()\n print(client_addr)\n while True:\n try:\n length_bytes = conn.recv(4)\n if not length_bytes: break\n length = struct.unpack('i', length_bytes)[0]\n\n header_bytes = conn.recv(length)\n if not header_bytes: break\n header = json.loads(header_bytes.decode('utf-8'))\n print('header: %s' % header)\n\n data_bytes = conn.recv(header['data_size'])\n data = json.loads(data_bytes.decode('utf-8'))\n if not data: break\n print('data: %s' % data)\n # conn.send(data.upper())\n except Exception:\n break\n finally:\n conn.close()\nserver.close()\n\n\n"
},
{
"alpha_fraction": 0.5265054106712341,
"alphanum_fraction": 0.5311374068260193,
"avg_line_length": 28.409090042114258,
"blob_id": "0652d607bfd664a080af3bda22faebfc5d93549c",
"content_id": "3604b1bd1ae8f177b628379e51baf73623469832",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2129,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 66,
"path": "/homework/week4/elective_systems/core/teacher.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom interface import education\n\n\nCURRENT_USER = None\n\ndef login():\n global CURRENT_USER\n while True:\n name = input('用户名 >>: ').strip()\n teachers = education.Teacher.get_object('teachers')\n if name not in teachers:\n print('老师%s未注册!' % name)\n return\n password = input('密码 >>: ')\n if password != teachers[name].password:\n print('密码错误!')\n continue\n CURRENT_USER = name\n print('老师%s登陆成功!' % name)\n return\n\ndef list_students():\n teachers = education.Teacher.get_object('teachers')\n classes = education.Teacher.get_object('classes')\n if classes[teachers[CURRENT_USER].classes]:\n student_list = []\n for name in classes[teachers[CURRENT_USER].classes].students:\n student_list.append(name)\n print('学生:%s' % student_list)\n else:\n print('班级%s没有学生!' % teachers[CURRENT_USER].classes)\n\ndef modify_student_score():\n while True:\n name = input('请输入学生的名字 >>: ').strip()\n students = education.Teacher.get_object('students')\n if name not in students:\n print('学生%s不存在!' % name)\n continue\n score = input('请输入学生的成绩 >>: ').strip()\n if not score.isdigit():\n print('成绩必须是数字!')\n continue\n score = float(score)\n students[name].score = score\n education.Manager.update_object('students', '学生', students[name])\n return\n\ndef run():\n while True:\n menu = {\n '1': [login, '登陆'],\n '2': [list_students, '查看学员列表'],\n '3': [modify_student_score, '修改学员成绩']\n }\n for k, v in menu.items():\n print('%-4s %-10s' % (k, v[1]))\n choice = input('请选择操作编号 >>: ').strip()\n if choice == 'quit':\n break\n if choice not in menu:\n print('选择编号非法!')\n continue\n menu[choice][0]()\n\n\n"
},
{
"alpha_fraction": 0.5687558650970459,
"alphanum_fraction": 0.5762394666671753,
"avg_line_length": 24.428571701049805,
"blob_id": "7fcaba41e620d7043b9f464a16f06a7110117a43",
"content_id": "75d3c34ff7690d39ced03a06c38c031fb0f5de20",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1167,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 42,
"path": "/weektest/weektest2/ATM_zhanglong/interface/user.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport datetime\nfrom lib import common\nfrom db import db_handler\n\nlogger = common.get_logger('user')\n\ndef get_user_info_api(name):\n data = db_handler.file_handler_read(name)\n if data:\n logger.info('获取用户%s信息成功!' % name)\n else:\n logger.info('用户%s信息不存在!' % name)\n return data\n\ndef register_user_api(name, password, credit_limit=15000):\n user_info = {\n 'name': name,\n 'password': password,\n 'role': 'user',\n 'balance': 0,\n 'credit_limit': credit_limit,\n 'credit_balance': credit_limit,\n 'detailed_list': [],\n 'shopping_cart': {},\n 'bill': 0\n }\n if db_handler.file_handler_write(user_info):\n logger.info('用户注册成功!')\n return True\n else:\n logger.warning('用户注册失败!')\n return\n\ndef modify_user_info_api(user_info):\n if db_handler.file_handler_write(user_info):\n logger.info('修改用户%s信息成功!' % user_info['name'])\n return True\n else:\n logger.warning('修改用户%s信息失败!' % user_info['name'])\n return\n\n"
},
{
"alpha_fraction": 0.5615866184234619,
"alphanum_fraction": 0.6200417280197144,
"avg_line_length": 18.1200008392334,
"blob_id": "f352c078b5473a16790f8cb230bb562aaa3068fa",
"content_id": "44c2d242af9246090d959a414ff159aa1d93fd42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1027,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 25,
"path": "/weektest/weektest3/test.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 明日安排:\n# 上午:8:30-12:30\n# 异步调用与回调机制\n# event事件补充\n# 协程\n# IO多路复用上\n#\n# 下午:2:00-4:30\n# IO多路复用下\n# socketserver\n#\n#\n# 下午:4:30-5:30吃饭去\n#\n# 下午:5:30-9:30开始考试\n# 考试内容:从零开始编写选课系统所有功能\n#\n# 考试要求:\n# 1、新建项目,整个编程期间,pycharm窗口最大化,不允许切换窗口,再次强调!!!考试期间不允许切换窗口,不允许窗口最小化!!!!\n# 2、项目中用到的变量名,函数名,文件名,模块名都需要跟老师的不一样,可以考虑加入自己的名字作为前缀(非常丑陋,但为了防止作弊,egon非常拼)\n# 3、所有功能需要正常运行\n#\n#\n# 处罚制度:\n# 上述三条但凡一条不满足,egon将与子携手在老男孩上海校区度过一个美好的五一假期,每天沉浸在敲代码的乐趣中不能自拔\n\n"
},
{
"alpha_fraction": 0.5188571214675903,
"alphanum_fraction": 0.5502856969833374,
"avg_line_length": 29.15517234802246,
"blob_id": "1272fe47d5e8b591ae228eaa2296e29578ef986b",
"content_id": "5a164351d2979349c4d123433f3b0a2f7b710a05",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1864,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 58,
"path": "/project/elective_systems/version_v8/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport logging.config\nfrom conf import settings\n\ndef auth(role):\n from core import admin, teacher, student\n def handler(func):\n def wrapper(*args, **kwargs):\n if role == 'admin' and not admin.CURRENT_USER:\n print('\\033[31m管理员未登录,跳转至登陆!\\033[0m')\n admin.login()\n return\n if role == 'teacher' and not teacher.CURRENT_USER:\n print('\\033[31m老师未登录,跳转至登陆!\\033[0m')\n teacher.login()\n return\n if role == 'student' and not student.CURRENT_USER:\n print('\\033[31m学生未登录,跳转至登陆!\\033[0m')\n student.login()\n return\n return func(*args, **kwargs)\n return wrapper\n return handler\n\ndef input_string(word):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n print('\\033[31m不能是空字符!\\033[0m')\n continue\n return string\n\ndef input_integer(word, score=False):\n while True:\n string = input('%s >>: ' % word).strip()\n if not string:\n print('\\033[31m不能是空字符!\\033[0m')\n continue\n if string == 'q':\n return string\n if not string.isdigit():\n print('\\033[31m请输入数字!\\033[0m')\n continue\n if score:\n return float(string)\n return int(string)\n\ndef get_object_list(type_name):\n type_path = os.path.join(settings.BASE_DB, type_name)\n if not os.path.exists(type_path) or not os.path.isdir(type_path):\n return\n return os.listdir(type_path)\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\n"
},
{
"alpha_fraction": 0.47508305311203003,
"alphanum_fraction": 0.4833886921405792,
"avg_line_length": 19.758621215820312,
"blob_id": "85162114f196fcfb3b24d27ee9bccc561d4a7686",
"content_id": "a83461a2ece7768cdd5f1021050a3ba1661691a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 658,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 29,
"path": "/project/elective_systems/version_v9/core/app.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\n\nfrom core import admin, teacher, student\nfrom lib import common\n\n\n\n\n\ndef run():\n menu = {\n '1': [admin, '管理端'],\n '2': [teacher, '教师端'],\n '3': [student, '学生端'],\n }\n while True:\n common.show_red('按\"e\"结束程序')\n common.show_menu(menu)\n choice = common.input_string('请选择平台编号')\n if choice == 'q':\n continue\n if choice == 'e':\n common.show_red('Goodbye!')\n return\n if choice not in menu:\n common.show_red('选择编号非法!')\n continue\n menu[choice][0].run()\n"
},
{
"alpha_fraction": 0.5946173071861267,
"alphanum_fraction": 0.5950378179550171,
"avg_line_length": 25.076923370361328,
"blob_id": "27616880ecec29c2e36ee9d563ff5e20ac3cb112",
"content_id": "cbc8984ecbbf1fe65d9851ab71740e87cac2319c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2378,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 91,
"path": "/project/elective_systems/version_v8/db/modules.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom db import db_handler\n\nclass Base:\n @classmethod\n def get_obj_by_name(cls, name):\n return db_handler.select(name, cls.__name__.lower())\n\n def save(self):\n return db_handler.save(self)\n\n\nclass Admin(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n\n @classmethod\n def register(cls, name, password):\n admin = cls(name, password)\n return admin.save()\n\n def create_school(self, name, address):\n school = School(name, address)\n return school.save()\n\n def create_teacher(self, name, password):\n teacher = Teacher(name, password)\n return teacher.save()\n\n def create_course(self, name, price, cycle, school_name):\n course = Course(name, price, cycle, school_name)\n return course.save()\n\nclass School(Base):\n def __init__(self, name, address):\n self.name = name\n self.password = address\n self.course_list = []\n\n def set_up_course(self, course):\n self.course_list.append(course)\n return self.save()\n\nclass Teacher(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.course_list = []\n\n def get_course_list(self):\n return self.course_list\n\n def choose_course(self, name):\n self.course_list.append(name)\n return self.save()\n\nclass Course(Base):\n def __init__(self, name, price, cycle, school_name):\n self.name = name\n self.price = price\n self.cycle = cycle\n self.school_name = school_name\n self.student_list = []\n\n def get_student_list(self):\n return self.student_list\n\nclass Student(Base):\n def __init__(self, name, password):\n self.name = name\n self.password = password\n self.course_list = []\n self.score = {}\n self.school_list = []\n\n @classmethod\n def register(cls, name, password):\n student = cls(name, password)\n return student.save()\n\n def choose_course(self, course):\n self.course_list.append(course)\n course_info = Course.get_obj_by_name(course)\n self.school_list.append(course_info.school_name)\n 
return self.save()\n\n def change_score(self, course, score):\n self.score[course] = score\n return self.save()\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6228349208831787,
"alphanum_fraction": 0.6242488622665405,
"avg_line_length": 36.7066650390625,
"blob_id": "45be83fc2b2617479fa545b73a9bdfd1bb0e901f",
"content_id": "abb24a8820f601be54893688389343c598415d7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3329,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 75,
"path": "/project/elective_systems/version_v5/interface/admin_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "#-*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom db import modules\n\nlogger = common.get_logger('admin_api')\n\n\ndef login(name, password):\n admin = modules.Admin.get_obj_by_name(name)\n if not admin:\n return False, '管理员%s登陆失败!' % name\n if password == admin.password:\n logger.info('管理员%s登陆成功!' % name)\n return True, '管理员%s登陆成功!' % name\n else:\n logger.warning('管理员%s密码错误!' % name)\n return False, '管理员%s密码错误!' % name\n\ndef register(name, password):\n admin = modules.Admin.get_obj_by_name(name)\n if admin:\n return False, '管理员%s不能重复注册!' % name\n admin = modules.Admin.register(name, password)\n if admin:\n logger.info('管理员%s注册成功!' % name)\n return True, '管理员%s注册成功!' % name\n else:\n logger.warning('管理员%s注册失败!' % name)\n return False, '管理员%s注册失败!' % name\n\ndef create_school(admin_name, name, address):\n if modules.School.get_obj_by_name(name):\n return False, '学校%s已存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_school(name, address):\n logger.info('管理员 %s 创建学校%s成功!' % (admin_name, name))\n return True, '管理员 %s 创建学校%s成功!' % (admin_name, name)\n else:\n logger.warning('管理员 %s 创建学校%s失败!' % (admin_name, name))\n return False, '管理员 %s 创建学校%s失败!' % (admin_name, name)\n\ndef create_teacher(admin_name, name, password='123'):\n if modules.Teacher.get_obj_by_name(name):\n return False, '老师%s已存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_teacher(name, password):\n logger.info('管理员 %s 创建老师 %s 成功!' % (admin_name, name))\n return True, '管理员 %s 创建老师 %s 成功!' % (admin_name, name)\n else:\n logger.warning('管理员 %s 创建老师 %s 失败!' % (admin_name, name))\n return False, '管理员 %s 创建老师 %s 失败!' % (admin_name, name)\n\ndef create_course(admin_name, name, price, cycle, school_name):\n if modules.Course.get_obj_by_name(name):\n return False, '课程%s已存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if admin.create_course(name, price, cycle, school_name):\n logger.info('管理员%s 创建课程%s成功!' 
% (admin_name, name))\n return True, '管理员%s 创建课程%s成功!' % (admin_name, name)\n else:\n logger.warning('管理员%s 创建课程%s失败!' % (admin_name, name))\n return False, '管理员%s 创建课程%s失败!' % (admin_name, name)\n\ndef get_school_info(admin_name, obj_name):\n logger.info('管理员 %s 获取学校%s信息!' % (admin_name, obj_name))\n return modules.School.get_obj_by_name(obj_name)\n\ndef get_teacher_info(admin_name, obj_name):\n logger.info('管理员 %s 获取老师%s信息!' % (admin_name, obj_name))\n return modules.Teacher.get_obj_by_name(obj_name)\n\ndef get_course_info(admin_name, obj_name):\n logger.info('管理员 %s 获取课程%s信息!' % (admin_name, obj_name))\n return modules.Course.get_obj_by_name(obj_name)\n\n"
},
{
"alpha_fraction": 0.48580610752105713,
"alphanum_fraction": 0.5452597737312317,
"avg_line_length": 20.44827651977539,
"blob_id": "b58b1aae27192dd4dd65f0c76d5f756cdac0df5d",
"content_id": "9b5b4badf9fbce4fcad442de4f5c1f437faad309",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2273,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 87,
"path": "/month4/week4/python_day13/python_day13_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 1、文件内容如下,标题为:姓名,性别,年纪,薪资\n#\n# egon male 18 3000\n# alex male 38 30000\n# wupeiqi female 28 20000\n# yuanhao female 28 10000\n#\n# \t要求:\n# \t从文件中取出每一条记录放入列表中,\n# \t列表的每个元素都是{'name':'egon','sex':'male','age':18,'salary':3000}的形式\n\nconfig = 'db.txt'\ndef get_user_info():\n with open(r'%s' % config) as f:\n line = (line.split() for line in f)\n user_info = [{'name': name, 'sex': sex, 'age': age, 'salary': salary} \\\n for name, sex, age, salary in line]\n print(user_info)\n return user_info\n\nuser_info = get_user_info()\n\n# 2 根据1得到的列表,取出薪资最高的人的信息\nsalary_max = max(user_info, key=lambda x:x['salary'])\nprint(salary_max)\n\n# 3 根据1得到的列表,取出最年轻的人的信息\nage_min = min(user_info, key=lambda x:x['age'])\nprint(age_min)\n\n# 4 根据1得到的列表,将每个人的信息中的名字映射成首字母大写的形式\nuser_info_map = map(lambda x:x['name'].capitalize(), user_info)\nuser_info_map = []\nprint(list(user_info_map))\n\n# 5 根据1得到的列表,过滤掉名字以a开头的人的信息\nuser_info_filter = filter(lambda x:x['name'].startswith('a'), user_info)\nprint(list(user_info_filter))\n\n# 6 使用递归打印斐波那契数列(前两个数的和得到第三个数,如:0 1 1 2 3 5 8...)\n\n# # while 循环版本\n# def fibo(max):\n# n, a, b = 0, 0, 1\n# while n < max:\n# print(a)\n# a, b = b, a + b\n# n += 1\n# return 'done'\n#\n# fibo(10)\n\n# # 成生成器版本(print 换成 yield)\n# def fibo(max):\n# n, a, b = 0, 0, 1\n# while n < max:\n# yield a\n# a, b = b, a + b\n# n += 1\n# return 'done'\n#\n# f = fibo(10)\n# for i in f:\n# print(i)\n\n# 函数递归版本\ndef fibo(a, b, n):\n if n == 0:\n return 'done'\n print(a)\n n -= 1\n return fibo(b, a + b, n)\n\nfibo(0, 1, 10)\n\n# 7 一个嵌套很多层的列表,如l=[1,2,[3,[4,5,6,[7,8,[9,10,[11,12,13,[14,15]]]]]]],用递归取出所有的值\n\nl = [1,2,[3,[4,5,6,[7,8,[9,10,[11,12,13,[14,15]]]]]]]\n\ndef tell(l):\n for i in l:\n if type(i) is not list:\n print(i)\n else:\n tell(i)\n\n# tell(l)\n\n"
},
{
"alpha_fraction": 0.44155845046043396,
"alphanum_fraction": 0.48051947355270386,
"avg_line_length": 10.142857551574707,
"blob_id": "370f63c2bb74b50b82ca2963a7d346574a30e372",
"content_id": "8dd970278f82d608ffc42898240fb99cbf60de0d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 77,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 7,
"path": "/month4/week5/python_day16/bag/m.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\ndef foo():\n print('foo from m.py')\n\nx = 1\ny = 2"
},
{
"alpha_fraction": 0.5486806035041809,
"alphanum_fraction": 0.5741583108901978,
"avg_line_length": 29.52777862548828,
"blob_id": "cb288c13555bcd07a22101474871a9507523efdb",
"content_id": "7d57ff7765813eb97ef66099817eb420a7fe4b20",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1173,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 36,
"path": "/project/elective_systems/version_v7/lib/common.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nimport os\nimport logging.config\nfrom conf import settings\n\n\ndef auth(role):\n from core import admin, teacher, student\n def handler(func):\n def wrapper(*args, **kwargs):\n if role == 'admin' and not admin.CURRENT_USER:\n print('\\033[31m管理员未登录,跳转至登陆!\\033[0m')\n admin.login()\n return\n if role == 'teacher' and not teacher.CURRENT_USER:\n print('\\033[31m老师未登录,跳转至登陆!\\033[0m')\n teacher.login()\n return\n if role == 'student' and not student.CURRENT_USER:\n print('\\033[31m学生未登录,跳转至登陆!\\033[0m')\n student.login()\n return\n return func(*args, **kwargs)\n return wrapper\n return handler\n\ndef get_logger(name=__name__):\n logging.config.dictConfig(settings.LOGGING_CONFIG)\n return logging.getLogger(name)\n\ndef get_object_list(type_name):\n type_path = os.path.join(settings.BASE_DB, type_name)\n if not os.path.exists(type_path):\n return\n return os.listdir(type_path)\n"
},
{
"alpha_fraction": 0.5436229109764099,
"alphanum_fraction": 0.5619223713874817,
"avg_line_length": 23.251121520996094,
"blob_id": "17fa6114dd10938fe113092d02940d5efd8d187b",
"content_id": "b8cba8625b5945d088023dd06a02db61394a55ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6084,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 223,
"path": "/month4/week5/python_day18/python_day18_practice.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# 4月10号作业\n# 1、编写用户认证功能,要求如下\n# 1.1、对用户密码加盐处理\n# 1.2、用户名与密文密码存成字典,是以json格式存到文件中的\n# 1.3、要求密用户输入明文密码,但程序中验证的是密文\nimport json\nimport hmac\n\ndef file_handle_read(name):\n try:\n with open(r'%s.json' % name, 'r', encoding='utf-8') as f:\n data = json.load(f)\n except Exception as e:\n print('\\033[31merror: %s\\033[0m' % e)\n return\n else:\n return data\n\ndef file_handle_write(**kwargs):\n try:\n with open(r'%s.json' % kwargs['name'], 'w', encoding='utf-8') as f:\n json.dump(kwargs, f)\n except Exception as e:\n print('\\033[31merror: %s\\033[0m' % e)\n return\n else:\n return True\n\n\ndef get_user_info_api(name):\n return file_handle_read(name)\n\ndef user_register_api(name, pwd):\n user_info = {\n 'name': name,\n 'pwd': pwd,\n 'balance': 0\n }\n file_handle_write(**user_info)\n\ndef get_hmac_api(string, salt=b'bingo'):\n h = hmac.new(salt)\n h.update(string.encode('utf-8'))\n return h.hexdigest()\n\ndef register():\n while True:\n print('\\033[32m请输入注册信息:\\033[0m')\n name = input('用户名 >>: ').strip()\n pwd = input('密码 >>: ')\n pwd2 = input('确认密码 >>: ')\n if pwd != pwd2:\n print('两次密码输入不一致!')\n continue\n user_register_api(name, get_hmac_api(pwd))\n print('用户注册成功!')\n return True\n\ndef login():\n while True:\n print('\\033[32m请输入登陆信息:\\033[0m')\n name = input('用户名 >>: ').strip()\n user_info = get_user_info_api(name)\n if not user_info:\n print('用户不存在!')\n continue\n pwd = input('密码 >>: ')\n if get_hmac_api(pwd) != user_info['pwd']:\n print('密码错误!')\n continue\n print('登陆成功!')\n return True\n\n# register()\n# login()\n\n# 2、编写功能,传入文件路径,得到文件的hash值\nimport hashlib\n\ndef get_hashlib_md5_api(file_path):\n m = hashlib.md5()\n try:\n with open(r'%s' % file_path) as f:\n for line in f:\n m.update(line.encode('utf-8'))\n except Exception as e:\n print('\\033[31merror: %s\\0m' % e)\n return\n else:\n return m.hexdigest()\n\nmd5_code = get_hashlib_md5_api('egon.json')\nprint('md5: %s' % md5_code)\n\n# 3、编写类cmd的程序,要求\n# 1、先验证用户身份\n# 
2、认证通过后,用户输入命令,则将命令保存到文件中\nimport time\nimport subprocess\n\nCURRENT_USER = None\nUSER_INFO = {\n 'egon': {\n 'pwd': '123'\n }\n}\n\ndef file_handle_write(file_path, command_dic):\n try:\n with open(r'%s' % file_path, 'w', encoding='utf-8') as f:\n json.dump(command_dic, f)\n except Exception as e:\n print('error: %s' % e)\n return\n else:\n return True\n\ndef excute_command_api(cmd):\n res = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, \\\n stderr=subprocess.PIPE)\n stdout = res.stdout.read()\n stderr = res.stderr.read()\n # print('stdout: %s' % stdout)\n # print('stderr: %s' % stderr)\n command_info = {\n 'command': cmd,\n 'runtime': time.time(),\n 'stdout': stdout.decode('utf-8'),\n 'stderr': stderr.decode('utf-8')\n }\n data = file_handle_read('command.json')\n print('data: %s' % data)\n if data and CURRENT_USER in data:\n data[CURRENT_USER].append(command_info)\n else:\n data[CURRENT_USER] = [command_info]\n file_handle_write('command.json', data)\n\ndef commandline():\n while True:\n cmd = input('请输入命令 >>: ').strip()\n if cmd == 'quit':\n print('Goodbye!')\n return\n excute_command_api(cmd)\n\ndef login():\n global CURRENT_USER\n while True:\n print('登陆cmd程序!')\n name = input('name >>: ').strip()\n pwd = input('passwrod >>: ')\n if name not in USER_INFO:\n print('user not exist!')\n continue\n if pwd != USER_INFO[name]['pwd']:\n print('password not valid!')\n continue\n print('login successful!')\n CURRENT_USER = name\n commandline()\n return\n\n# login()\n\n# 4、如果我让你编写一个选课系统,那么有如下对象,请抽象成类,然后在程序中定义出来\n# 4.1 老男孩有两所学校:北京校区和上海校区\n# 4.2 老男孩学校有两们课程:python和linux\n# 4.3 老男孩有老师:egon,alex,lxx,wxx,yxx\n# 4.3 老男孩有学生:。。。\n# 4.4 老男孩有班级:python全栈开发1班,linux高级架构师2班\n\nclass Teacher:\n school = 'OldBoy'\n\n def __init__(self, name, team, school_zone='上海', course='python'):\n self.name = name\n self.team = team\n self.school_zone = school_zone\n self.course = course\n\n def teach(self):\n print('%s teach %s ...' % (self. 
name.capitalize(), self.course))\n\nclass Student:\n school = 'OldBoy'\n\n def __init__(self, name, team, school_zone='上海', course='python'):\n self.name = name\n self.team = team\n self.school_zone = school_zone\n self.course = course\n\n def learn(self):\n print('%s learn %s ...' % (self. name.capitalize(), self.course))\n\nprint('='*30)\negon = Teacher(name='egon', team='全栈开发1班')\nprint(egon.school)\nprint(egon.name)\nprint(egon.school_zone)\nprint(egon.team)\nprint(egon.course)\negon.teach()\n\nprint('-'*30)\nalex = Teacher(name='alex', team='全栈开发1班', school_zone='北京')\nprint(alex.school)\nprint(alex.name)\nprint(alex.school_zone)\nprint(alex.team)\nprint(alex.course)\nalex.teach()\n\nprint('-'*30)\nzane = Student(name='zane', team='全栈开发1班')\nprint(zane.school)\nprint(zane.name)\nprint(zane.school_zone)\nprint(zane.team)\nprint(zane.course)\nzane.learn()\nprint('='*30)\n\n\n"
},
{
"alpha_fraction": 0.637910783290863,
"alphanum_fraction": 0.6402581930160522,
"avg_line_length": 34.41666793823242,
"blob_id": "bc54fe54977e2137cf1cb0216c5472bcf3b95234",
"content_id": "32c71717836bbf7be6a9c73216ee1c7313391eb9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1908,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 48,
"path": "/project/elective_systems/version_v9/interface/admin_api.py",
"repo_name": "zhanglong362/zane",
"src_encoding": "UTF-8",
"text": "# -*- encoding: utf-8 -*-\n\nfrom lib import common\nfrom db import modules\n\n\ndef login(name, password):\n admin = modules.Admin.get_obj_by_name(name)\n if not admin:\n return False, '用户%s不存在!' % name\n if password != admin.password:\n return False, '密码错误!'\n return True, '用户%s登陆成功!' % name\n\ndef register(name, password):\n admin = modules.Admin.get_obj_by_name(name)\n if admin:\n return False, '不能重复注册!'\n if not modules.Admin.register(name, password):\n return False, '用户%s注册失败!' % name\n return True, '用户%s注册成功!' % name\n\ndef create_school(admin_name, name, address):\n school = modules.School.get_obj_by_name(name)\n if school:\n return False, '学校%s已经存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if not admin.create_school(name, address):\n return False, '%s创建学校%s失败!' % (admin_name, name)\n return True, '%s创建学校%s成功!' % (admin_name, name)\n\ndef create_teacher(admin_name, name, password='123'):\n teacher = modules.Teacher.get_obj_by_name(name)\n if teacher:\n return False, '老师%s已经存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if not admin.create_teacher(name, password):\n return False, '%s创建老师%s失败!' % (admin_name, name)\n return True, '%s创建老师%s成功!' % (admin_name, name)\n\ndef create_course(admin_name, name, price, cycle, school_name):\n course = modules.Course.get_obj_by_name(name)\n if course:\n return False, '课程%s已经存在!' % name\n admin = modules.Admin.get_obj_by_name(admin_name)\n if not admin.create_course(name, price, cycle, school_name):\n return False, '%s创建课程%s失败!' % (admin_name, name)\n return True, '%s创建课程%s成功!' % (admin_name, name)\n\n\n\n\n"
}
] | 195 |
TetraK1/gabagool
|
https://github.com/TetraK1/gabagool
|
699cc6bd0c9134e9a9422ccf3825cad4cb2d7f7d
|
f28823796ccfcf3de8d5a556a91f839665f37715
|
ff2029c95a864e220048d20842880d72224b320d
|
refs/heads/master
| 2023-05-08T18:47:36.215446 | 2021-06-02T14:26:06 | 2021-06-02T14:26:06 | 370,106,280 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6176584959030151,
"alphanum_fraction": 0.6320497989654541,
"avg_line_length": 27.25274658203125,
"blob_id": "b36c2a187780f4f873f319921c21e18c2bd3ec8b",
"content_id": "d264e3205745947ce34d48aa9f7968c0bab6277f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2572,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 91,
"path": "/plotter/plot.py",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "import datetime as dt\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport json\nimport time\nimport requests\nimport atmos\n\ndef main():\n after = (dt.datetime.now() - dt.timedelta(days=1)).timestamp()\n print(\"Getting data...\")\n r = requests.get(f'https://emberbox.net/curing/api/readings/?last=86400&after={after}')\n if r.status_code != 200:\n print('Getting data failed with status code', r.status_code)\n print(\"Done\")\n print(\"Updating plots...\")\n\n data = r.json()\n data = data[::60]\n for dp in data: dp['time'] = dt.datetime.fromtimestamp(dp['time'])\n\n time = [dp['time'] for dp in data]\n\n plt.style.use('ggplot')\n\n temperature = [dp['temperature'] for dp in data]\n humidity = [dp['humidity'] for dp in data]\n\n plot_temp(time, temperature)\n plot_humidity(time, humidity)\n plot_pressure(time, [dp['pressure'] for dp in data])\n plot_altitude(time, [dp['altitude'] for dp in data])\n plot_abs_humidity(time, temperature, humidity)\n plt.close('all')\n print(\"Done\")\n\ndef plot_temp(time, temp):\n f, ax = plt.subplots()\n ax.plot(time, temp)\n ax.set_xlim(min(time))\n ax.set_ylim(bottom=0, top=20)\n ax.title.set_text(\"Temperature\")\n ax.set_ylabel('°C')\n f.autofmt_xdate()\n f.savefig('plots/temp.png')\n\ndef plot_humidity(time, humidity):\n f, ax = plt.subplots()\n ax.plot(time, humidity)\n ax.set_xlim(min(time))\n ax.set_ylim(bottom=0, top=100)\n ax.set_yticks(range(0, 110, 10))\n ax.title.set_text(\"Humidity\")\n ax.set_ylabel('RH%')\n f.autofmt_xdate()\n f.savefig('plots/humidity.png')\n\ndef plot_abs_humidity(time, temperature, humidity):\n abs_humidity = [atmos.calculate('AH', T=d[0] + 273.15, RH=d[1], p=1e5) for d in zip(temperature, humidity)]\n\n f, ax = plt.subplots()\n ax.plot(time, abs_humidity)\n ax.set_xlim(min(time))\n ax.set_ylim(bottom=0)\n ax.title.set_text(\"Absolute Humidity\")\n ax.set_ylabel('kg/m^3')\n f.autofmt_xdate()\n f.savefig('plots/abs_humidity.png')\n\ndef plot_pressure(time, pressure):\n f, ax = 
plt.subplots()\n ax.plot(time, pressure)\n ax.set_xlim(min(time))\n ax.title.set_text(\"Barometric Pressure\")\n ax.set_ylabel('Pa')\n f.autofmt_xdate()\n f.savefig('plots/pressure.png')\n\ndef plot_altitude(time, altitude):\n f, ax = plt.subplots()\n ax.plot(time, altitude)\n ax.set_xlim(min(time))\n ax.title.set_text(\"Virtual Altitude\")\n ax.set_ylabel('m')\n f.autofmt_xdate()\n f.savefig('plots/altitude.png')\n\nif __name__ == '__main__':\n while True:\n main()\n time.sleep(60)\n"
},
{
"alpha_fraction": 0.551699697971344,
"alphanum_fraction": 0.594192624092102,
"avg_line_length": 25.641510009765625,
"blob_id": "a37698d858476e25a1eae340f974fd0de613aef6",
"content_id": "8ecfbd83790be72e2a9f3476ef248f168a418f1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1412,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 53,
"path": "/sensorread.py",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "import serial\nimport time\nimport json\nimport paho.mqtt.publish as mqttpublish\nimport requests\n\nbroker='broker.hivemq.com'\nbroker='broker.emqx.io'\ntopic='emberbox/curing/data'\n\nser = serial.Serial('/dev/ttyUSB0')\nser2 = serial.Serial('/dev/ttyUSB1')\nser.flushInput()\n\ndef post_data(data):\n try:\n result = requests.post('https://emberbox.net/curing/api/readings/', json=data, timeout=1)\n status = result.status_code\n if status != 200: print(f'Post error code {status}')\n except Exception as e:\n print(\"Posting data failed.\")\n print(e)\n\nwhile True:\n try:\n ser_bytes = ser.readline()\n decoded_bytes = ser_bytes.decode(\"utf-8\").strip()\n\n try:\n data = json.loads(decoded_bytes)\n except:\n print('Bad data:', decoded_bytes)\n continue\n\n data['time'] = time.time()\n print(data)\n\n if data['temperature'] > 13:\n ser2.write(0b10000000.to_bytes(1, 'big'))\n elif data['temperature'] < 12:\n ser2.write(0b00000000.to_bytes(1, 'big'))\n\n if data['humidity'] < 80:\n ser2.write(0b10000001.to_bytes(1, 'big'))\n elif data['humidity'] > 85:\n ser2.write(0b00000001.to_bytes(1, 'big'))\n\n threading.Thread(target=post_data, args=(data,)).start()\n\n except Exception as e:\n print(e)\n print(\"Keyboard Interrupt\")\n break\n"
},
{
"alpha_fraction": 0.4761904776096344,
"alphanum_fraction": 0.5546218752861023,
"avg_line_length": 23.620689392089844,
"blob_id": "5f66c51097e9f9c039afe83cbef1d4629c9947d3",
"content_id": "099ccb16f266b63833ad7b77d07a0bdc765b1170",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 714,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 29,
"path": "/arduino/ssroutput/ssroutput.ino",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "const uint8_t ADDRESS_MASK = 0b01111111;\nconst uint8_t VALUE_MASK = 0b10000000;\n\nconst uint8_t SSR1 = 0b00000001;\nconst uint8_t SSR2 = 0b00000010;\n\nvoid setup() {\n pinMode(2, OUTPUT);\n pinMode(3, OUTPUT);\n Serial.begin(9600);\n}\n\nuint8_t b_in;\n\nvoid loop() {\n if (Serial.available() > 0) {\n b_in = Serial.read();\n Serial.print(\"Received: \");\n Serial.println(int(b_in));\n \n if((b_in & ADDRESS_MASK) == SSR1){\n Serial.println(\"Writing to SSR1\");\n digitalWrite(2, b_in & VALUE_MASK);\n } else if( (b_in & ADDRESS_MASK) == SSR2){\n Serial.println(\"Writing to SSR2\");\n digitalWrite(3, b_in & VALUE_MASK);\n } \n }\n}\n"
},
{
"alpha_fraction": 0.6941176652908325,
"alphanum_fraction": 0.7176470756530762,
"avg_line_length": 13.166666984558105,
"blob_id": "500ad7f92a9f774a7a0a710a7101949569c96492",
"content_id": "c1430b26a798ecf6e98b9d71a83ca2e59eeacacf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "INI",
"length_bytes": 170,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 12,
"path": "/api/curingapi.ini",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "[uwsgi]\nmount = /curing/api=wsgi:app\nmanage-script-name = true\n\nmaster = true\nprocesses = 5\n\nsocket = curingapi.sock\nchmod-socket = 777\nvacuum = true\n\ndie-on-term = true\n"
},
{
"alpha_fraction": 0.5753991603851318,
"alphanum_fraction": 0.6227084398269653,
"avg_line_length": 25.015384674072266,
"blob_id": "fa74b0c29a2fd960b7f641576be3fab5bcaf040c",
"content_id": "f73fca25b3c4b2015bef9615a33aee95c98dd5fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1691,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 65,
"path": "/arduino/humitest/humitest.ino",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "#include <Wire.h>\n#include <SPI.h>\n#include <Adafruit_Sensor.h>\n#include <Adafruit_BME280.h>\n\n#define BME_SCK 13\n#define BME_MISO 12\n#define BME_MOSI 11\n#define BME_CS 10\n\n#define SEALEVELPRESSURE_HPA (1013.25)\n\nunsigned long read_interval = 1000;\nunsigned long last_read_time = 10000;\n\n\nAdafruit_BME280 bme; // I2C\n//Adafruit_BME280 bme(BME_CS); // hardware SPI\n//Adafruit_BME280 bme(BME_CS, BME_MOSI, BME_MISO, BME_SCK); // software SPI\n\nvoid setup() {\n Serial.begin(9600);\n while(!Serial); // time to get serial running\n Serial.println(\"Started\");\n\n unsigned status;\n\n status = bme.begin(0x76);\n if (!status) {\n Serial.println(\"Could not find a valid BME280 sensor, check wiring, address, sensor ID!\");\n Serial.print(\"SensorID was: 0x\"); Serial.println(bme.sensorID(),16);\n Serial.print(\" ID of 0xFF probably means a bad address, a BMP 180 or BMP 085\\n\");\n Serial.print(\" ID of 0x56-0x58 represents a BMP 280,\\n\");\n Serial.print(\" ID of 0x60 represents a BME 280.\\n\");\n Serial.print(\" ID of 0x61 represents a BME 680.\\n\");\n while (1) delay(10);\n }\n}\n\n\nvoid loop() {\n if (millis() - last_read_time > read_interval) {\n last_read_time = millis();\n printValues();\n }\n}\n\n\nvoid printValues() {\n Serial.print(\"{\");\n \n Serial.print(\"\\\"temperature\\\":\");\n Serial.print(bme.readTemperature());\n\n Serial.print(\",\\\"pressure\\\":\");\n Serial.print(bme.readPressure());\n\n Serial.print(\",\\\"altitude\\\":\");\n Serial.print(bme.readAltitude(SEALEVELPRESSURE_HPA));\n\n Serial.print(\",\\\"humidity\\\":\");\n Serial.print(bme.readHumidity());\n \n Serial.print(\"}\\n\");\n}\n"
},
{
"alpha_fraction": 0.6613965630531311,
"alphanum_fraction": 0.6693016886711121,
"avg_line_length": 27.148147583007812,
"blob_id": "0e6dac17013d3a5c67d47d8a6ae6125ecd952fea",
"content_id": "146c9b18450aee2a5bf63e338601b5a0db9d7c4d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 759,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 27,
"path": "/plotter/mqttlog.py",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "import paho.mqtt.client as mqtt\nimport json\n\n# The callback for when the client receives a CONNACK response from the server.\ndef on_connect(client, userdata, flags, rc):\n print(\"Connected with result code \"+str(rc))\n client.subscribe(\"emberbox/#\")\n\n# The callback for when a PUBLISH message is received from the server.\ndef on_message(client, userdata, msg):\n try:\n data = json.loads(msg.payload.decode())\n except Exception as e:\n print(\"Bad data:\", msg.payload)\n print(e)\n return\n\n with open('log.txt', 'a') as f:\n f.write(json.dumps(data) + '\\n')\n \n\nclient = mqtt.Client()\nclient.on_connect = on_connect\nclient.on_message = on_message\n\nclient.connect(\"broker.emqx.io\", 1883, 60)\nclient.loop_forever()"
},
{
"alpha_fraction": 0.7966101765632629,
"alphanum_fraction": 0.7966101765632629,
"avg_line_length": 28.5,
"blob_id": "9c3ae443c24af9c5e4ba2a79b46c2c7b60643ddb",
"content_id": "a67da24902d510068e5b0d49c18e3ab2c2facfcb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 59,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 2,
"path": "/README.md",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "# gabagool\nCollection of stuff for my meat curing chamber.\n"
},
{
"alpha_fraction": 0.5818431973457336,
"alphanum_fraction": 0.5942228436470032,
"avg_line_length": 27.509803771972656,
"blob_id": "d89392918d7a41a049305ad253db9dc507551acc",
"content_id": "07058a9b8032fe2fe052488f008d414509eaa7eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1454,
"license_type": "no_license",
"max_line_length": 136,
"num_lines": 51,
"path": "/api/curingapi.py",
"repo_name": "TetraK1/gabagool",
"src_encoding": "UTF-8",
"text": "import flask\nfrom flask import request\nimport sqlite3\n\napp = flask.Flask(__name__)\napp.config[\"DEBUG\"] = True\n\nwith sqlite3.connect('fridge.db') as conn:\n conn.execute('''CREATE TABLE IF NOT EXISTS \"bme280_data\" (\n \"time\"\tREAL NOT NULL,\n \"temperature\"\tREAL,\n \"humidity\"\tREAL,\n \"pressure\"\tREAL,\n \"altitude\"\tREAL,\n PRIMARY KEY(\"time\")\n );''')\n conn.commit()\n\n\ndef dict_factory(cursor, row):\n d = {}\n for idx, col in enumerate(cursor.description):\n d[col[0]] = row[idx]\n return d\n\[email protected]('/readings/', methods=['GET', 'POST'])\ndef home():\n if request.method == 'POST':\n return post_data()\n conn = sqlite3.connect('fridge.db')\n conn.row_factory = dict_factory\n cur = conn.cursor()\n params = flask.request.args\n after = request.args.get('after', 0)\n limit = request.args.get('last', 100)\n return flask.jsonify(cur.execute('SELECT * FROM bme280_data WHERE time > ? ORDER BY time DESC LIMIT (?)', (after,limit)).fetchall())\n\ndef post_data():\n data = request.json\n with sqlite3.connect('fridge.db') as conn:\n cur = conn.cursor()\n cur.execute(\n 'INSERT INTO bme280_data(time, temperature, humidity, pressure, altitude) VALUES(?,?,?,?,?)',\n (data['time'], data['temperature'], data['humidity'], data['pressure'], data['altitude'])\n )\n conn.commit()\n\n return ''\n\nif __name__ == '__main__':\n app.run()\n"
}
] | 8 |
cls/reviewboard-bitkeeper
|
https://github.com/cls/reviewboard-bitkeeper
|
ce2083ef309a93f6914115356af0eb155ea4c088
|
0d533867b07698afc7f12a1253ac38324c87153b
|
9ff6325f2db5a30a820a140b388a997076ab79e4
|
refs/heads/master
| 2021-01-19T00:20:45.706181 | 2016-11-14T11:44:00 | 2016-11-14T11:44:00 | 73,698,673 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5611628890037537,
"alphanum_fraction": 0.5626257061958313,
"avg_line_length": 31.360946655273438,
"blob_id": "657285fda5116be4ef6030af229ceca1c0c79883",
"content_id": "f4e049edee47225ef7c4d5719b5391c749211e2b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5469,
"license_type": "permissive",
"max_line_length": 104,
"num_lines": 169,
"path": "/src/reviewboard_bitkeeper/bk.py",
"repo_name": "cls/reviewboard-bitkeeper",
"src_encoding": "UTF-8",
"text": "from __future__ import unicode_literals\n\nimport re\n\nfrom djblets.util.filesystem import is_exe_in_path\n\nfrom reviewboard.diffviewer.parser import DiffParser, File\nfrom reviewboard.scmtools.core import SCMClient, SCMTool, HEAD, PRE_CREATION, UNKNOWN\nfrom reviewboard.scmtools.errors import (FileNotFoundError, RepositoryNotFoundError)\n\nOPER_PATTERNS = {\n 'copied': re.compile(b' bk cp (?P<origname>.*) .*'),\n 'deleted': re.compile(b' Delete: (?P<origname>.*)'),\n 'moved': re.compile(b' Rename: (?P<origname>.*) -> .*'),\n}\n\nclass BKTool(SCMTool):\n name = 'BitKeeper'\n dependencies = {\n 'executables': ['bk']\n }\n\n def __init__(self, repository):\n super(BKTool, self).__init__(repository)\n\n if not is_exe_in_path('bk'):\n # This is technically not the right kind of error, but it's the\n # pattern we use with all the other tools.\n raise ImportError\n\n local_site_name = None\n\n if repository.local_site:\n local_site_name = repository.local_site.name\n\n self.client = BKClient(repository.path, local_site_name)\n\n def get_file(self, path, revision=HEAD, base_commit_id=None, **kwargs):\n if base_commit_id is not None:\n base_commit_id = base_commit_id\n\n return self.client.cat_file(path, revision, base_commit_id=base_commit_id)\n\n def parse_diff_revision(self, file_str, revision_str, *args, **kwargs):\n revision = revision_str\n if file_str == '/dev/null':\n revision = PRE_CREATION\n if not revision_str:\n revision = UNKNOWN\n return file_str, revision\n\n def get_diffs_use_absolute_paths(self):\n return True\n\n def get_parser(self, data):\n return BKDiffParser(data)\n\n @classmethod\n def check_repository(cls, path, username=None, password=None, local_site_name=None):\n client = BKClient(path, local_site_name)\n\n super(BKTool, cls).check_repository(client.path, local_site_name)\n\n # Create a client. 
This will fail if the repository doesn't exist.\n BKClient(path, local_site_name)\n\nclass BKDiffParser(DiffParser):\n def __init__(self, data):\n self.copies = {}\n self.new_changeset_id = None\n self.orig_changeset_id = None\n\n return super(BKDiffParser, self).__init__(data)\n\n def parse_special_header(self, linenum, info):\n header = re.match(b'==== (?P<filename>.*) ====', self.lines[linenum])\n\n if not header:\n return linenum\n\n linenum += 2\n\n filename = info['newFile'] = header.group('filename')\n\n if filename in self.copies:\n info['origFile'] = self.copies[filename]\n info['copied'] = True\n else:\n info['origFile'] = filename\n\n if linenum < len(self.lines) and self.lines[linenum].startswith(b' '):\n for attr, pattern in OPER_PATTERNS.iteritems():\n match = pattern.match(self.lines[linenum])\n if match:\n origname = match.group('origname')\n info['origFile'] = origname\n info['origInfo'] = UNKNOWN\n info['newInfo'] = UNKNOWN\n info[attr] = True\n\n if attr == 'copied':\n self.copies[filename] = origname\n\n break\n\n while linenum < len(self.lines) and self.lines[linenum].startswith(b' '):\n linenum += 1\n\n return linenum\n\n def parse_diff_header(self, linenum, info):\n if linenum < len(self.lines) and self.lines[linenum] == b'Binary files differ':\n info['binary'] = True\n info['newInfo'] = UNKNOWN\n\n linenum += 1\n\n if linenum + 1 < len(self.lines) and self.lines[linenum].startswith(b'===='):\n info['origInfo'] = PRE_CREATION\n linenum += 2\n else:\n info['origInfo'] = UNKNOWN\n\n elif linenum + 1 < len(self.lines):\n orig = re.match(b'\\-\\-\\- (?P<filename>(?P<revision>[^/]*).*?)\\t.*', self.lines[linenum])\n new = re.match(b'\\+\\+\\+ (?P<filename>(?P<revision>[^/]*).*?)\\t.*', self.lines[linenum + 1])\n\n if orig and new:\n if orig.group('filename') == b'/dev/null':\n info['origInfo'] = PRE_CREATION\n else:\n info['origInfo'] = orig.group('revision')\n\n info['newInfo'] = new.group('revision')\n\n linenum += 2\n\n return linenum\n\n def 
get_orig_commit_id(self):\n return self.orig_changeset_id\n\nclass BKClient(SCMClient):\n def __init__(self, path, local_site_name=None):\n super(BKClient, self).__init__(path)\n\n self.local_site_name = local_site_name\n\n def cat_file(self, path, rev, base_commit_id=None):\n # If the base commit id is provided it should override anything\n # that was parsed from the diffs\n if base_commit_id is not None:\n rev = base_commit_id\n\n if rev == HEAD:\n rev = '@'\n\n if path:\n p = self._run_bk(['get', '-pqr' + rev, path])\n contents = p.stdout.read()\n failure = p.wait()\n\n if not failure:\n return contents\n\n raise FileNotFoundError(path, rev)\n\n def _run_bk(self, args):\n return SCMTool.popen(['bk', '-@' + self.path] + args, local_site_name=self.local_site_name)\n"
},
{
"alpha_fraction": 0.6299275755882263,
"alphanum_fraction": 0.6387771368026733,
"avg_line_length": 28.595237731933594,
"blob_id": "36e20392e3421d8ee64736ae61d26781727ed83a",
"content_id": "acd10855751579aeadb0f25d952c72723816554f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1243,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 42,
"path": "/setup.py",
"repo_name": "cls/reviewboard-bitkeeper",
"src_encoding": "UTF-8",
"text": "from setuptools import setup, find_packages\nimport sys, os\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nversion = '0.1'\n\ninstall_requires = [\n # List your project dependencies here.\n # For more details, see:\n # http://packages.python.org/distribute/setuptools.html#declaring-dependencies\n 'ReviewBoard==2.5.7',\n]\n\nsetup(name='reviewboard-bitkeeper',\n version=version,\n description=\"BitKeeper support for Review Board\",\n\n classifiers=[\n # Get strings from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 1 - Planning\",\n \"Framework :: Django :: 1.6\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 2.7\",\n \"Topic :: Software Development :: Version Control\"\n ],\n keywords='reviewboard bitkeeper',\n author='Connor Smith',\n author_email='[email protected]',\n url='https://github.com/cls/reviewboard-bitkeeper',\n license='MIT License',\n packages=find_packages('src'),\n package_dir = {'': 'src'},\n include_package_data=True,\n zip_safe=False,\n install_requires=install_requires,\n entry_points={\n 'reviewboard.scmtools': [\n 'bk = reviewboard_bitkeeper.bk:BKTool',\n ]\n }\n)\n"
}
] | 2 |
FrancoisCzarny/PythonCodes
|
https://github.com/FrancoisCzarny/PythonCodes
|
26a55c7474231bbba7350d988c8f0ab7a5c17f60
|
bc864fe53d70346618830a657b0b5b01dca592a4
|
aa8172fd6933d5c1caea943261794f2d160ce1c5
|
refs/heads/master
| 2020-04-09T09:57:05.924938 | 2018-12-03T21:04:47 | 2018-12-03T21:04:47 | 160,252,030 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5385851860046387,
"alphanum_fraction": 0.5466237664222717,
"avg_line_length": 32.815216064453125,
"blob_id": "b0237beef38ced7eba00a07ebd36fe2fe9e19464",
"content_id": "b9a3074e1ef051436dbdc79e8002db091bf8c82e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3110,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 92,
"path": "/Metadump.py",
"repo_name": "FrancoisCzarny/PythonCodes",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding : utf-8 -*-\n\nimport h5py\nimport os\nfrom threading import Lock\n\nfrom abc import ABCMeta, abstractmethod\nimport six\nimport json\n\n\[email protected]_metaclass(ABCMeta)\nclass BaseDumper(object):\n \"\"\"Base class for dumper classes\"\"\" \n \n def __repr__(self):\n return '%s(%s)' %(self.__class__.__name__, self.__dict__)\n\n @abstractmethod\n def dump_params(self):\n \"\"\"Dump the data object to the output.\"\"\"\n return\n \n \nclass JSONDumper(BaseDumper):\n \"\"\"File System Dumper class\"\"\"\n def __init__(self, filename, overwrite=False, **dumper_params):\n root, suffix = os.path.split(filename)\n if '.' not in suffix:\n self.filename = filename + '.json'\n else :\n self.filename = filename \n self.overwrite = overwrite\n self.dumper_params = dumper_params\n \n def dump(self, model_name, **kwargs):\n \"\"\"Dump the data object to the output file define by filename.\"\"\"\n i = 0\n split_fn = os.path.splitext(self.filename)\n \n if not self.overwrite:\n while os.path.exists('{}_{}_{}{}'.format(split_fn[0], model_name, i, split_fn[1])):\n i += 1\n\n try:\n with os.fdopen(os.open('{}_{}_{}{}'.format(split_fn[0], model_name, i, split_fn[1]),\n os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644), 'w', encoding='utf-8') as fh:\n json.dump(kwargs, fh, separators=(',', ':'), sort_keys=True, indent=4)\n \n except OSError as e:\n if e.errno == errno.EEXIST:\n print(\"File %s already exists!\" %('{}_{}_{}{}'.format(split_fn[0], model_name, i, split_fn[1])))\n\n return\n\n\nclass HDF5Dumper(BaseDumper):\n \"\"\"File System Dumper class\"\"\"\n def __init__(self, filename, overwrite=False, **dumper_params):\n self.filename = filename\n self.overwrite = overwrite\n self.dumper_params = dumper_params\n\n def _modify_filename(self, filename, prefix, add_num):\n \"\"\"Modify the filename when user avoid the overwrite\"\"\"\n root, suffix = os.path.split(filename)\n path = '{}/{}_{}_{}'.format(root, prefix, add_num, suffix)\n 
return path\n\n def dump(self, model_name, **kwargs):\n \"\"\"Dump y_[ids, true, pred] and data to the output file define by filename\"\"\"\n num_add = 1\n file_saved = self.filename\n lock = Lock()\n lock.acquire()\n \n if not self.overwrite:\n while os.path.exists(file_saved):\n file_saved = self._modify_filename(self.filename, model_name, num_add)\n num_add += 1\n print('HDF5 FILE NAME %s' %(file_saved))\n \n with h5py.File(self.filename, 'w') as h5f:\n for key, item in kwargs.items():\n try:\n h5f.create_dataset(key, data=np.asarray(item))\n except TypeError:\n dt = h5py.special_dtype(vlen=bytes)\n h5f.create_dataset(key, data=item, dtype=dt)\n lock.release()\n return"
},
{
"alpha_fraction": 0.6103554964065552,
"alphanum_fraction": 0.6148890256881714,
"avg_line_length": 33.35245895385742,
"blob_id": "7d0be02bc34f0343dbf9177ca52c257c3c7c9367",
"content_id": "3f4512fecdc4ce73048fe65de751a5a1b8049e28",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4191,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 122,
"path": "/limit_TF_consumption.py",
"repo_name": "FrancoisCzarny/PythonCodes",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding : utf-8 -*-\n\nimport os\nimport sys \nimport numpy as np\nfrom re import sub, findall\nimport tensorflow as tf\n\n\n__all__ = ['set_session', 'soft_gpu_allocation']\n\n\ndef set_session(gpu_number=1, gpu_fraction=.5, allow_growth=False, **kwargs):\n\n '''\n Limit the GPU memory and allocate the half of GPU ressource\n in case you're using keras with Tensoflow backend.\n \n Parameter\n ---------\n gpu_number : int, list, tuple or str with desired devices, default=1\n Number of GPU to use\n\n gpu_fraction : float includes in [0,1], default=0.5\n Fraction of GPU memory allocated to the session \n\n log_device_placement : boolean,\n To find out which devices your operations and tensors are assigned to.\n \n allow_soft_placement : boolean,\n To automatically choose an existing and supported device to run the \n operations in case the specified one doesn't exist\n \n allow_growth : boolean, default=False\n Allow the memory usage growth as is needed by the process.\n It attempts to allocate only as much GPU memory based on runtime allocations: \n it starts out allocating very little memory, and as Sessions get run and more GPU\n memory is needed, we extend the GPU memory region needed by the TensorFlow process.\n \n Return\n ------\n Tensorflow Session\n '''\n\n devices = sub(r'\\[*\\]*\\(*\\)*\\ *', r'', str(gpu_number))\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = devices\n num_threads = os.environ.get('OMP_NUM_THREADS')\n config = tf.ConfigProto(**kwargs)\n \n if allow_growth==True:\n config.gpu_options.allow_growth = True\n else : \n config.gpu_options.per_process_gpu_memory_fraction = gpu_fraction\n\n if num_threads:\n config.intra_op_parallelism_threadsi = num_threads\n\n return tf.Session(config=config)\n\n\ndef soft_gpu_allocation(qty=1, gpu_fraction=None, allow_growth=True, **kwargs):\n \"\"\"\n Scan every available gpu and allocate the gpu with the biggest free mermory.\n \n Parameters\n ----------\n qty : int, 
default=1\n Number of GPU needed\n \n gpu_fraction : float includes in [0,1], default=None\n Fraction of GPU memory allocated to the session \n\n allow_growth : boolean, default=False\n Allow the memory usage growth as is needed by the process.\n It attempts to allocate only as much GPU memory based on runtime allocations: \n it starts out allocating very little memory, and as Sessions get run and more GPU\n memory is needed, we extend the GPU memory region needed by the TensorFlow process.\n \n Returns\n -------\n Tensorflow session\n\n\n Example\n -------\n from limit_gpu_mem_env import soft_gpu_allocation\n from keras.backend.tensorflow_backend import set_session\n\n set_session(soft_gpu_allocation(qty=2))\n \"\"\"\n \n # Synchronize nvidia-smi gpu_id with tensorflow device_id\n os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"\n \n # Query nvidia-smi\n f = os.popen('nvidia-smi --query-gpu=index,memory.free --format=csv,noheader')\n output = f.readlines()\n\n # Get gpus id ordering by memory free.\n free_mem = {k:v for (k,v) in [findall(r'\\d+', t) for t in output]}\n id_gpu = np.argsort([int(i) for i in free_mem.values()])[::-1]\n gpu = list(id_gpu[:qty]) \n \n TFsession = set_session(gpu_number=gpu, \n gpu_fraction=gpu_fraction,\n allow_growth=allow_growth, \n **kwargs)\n return TFsession\n\n\nif __name__=='__main__':\n\n gpu_number=[1,2] # ou a faire via sys.argv[1]?\n gpu_fraction=0.1\n s = KTF.set_session(set_session(gpu_number=gpu_number, gpu_fraction=gpu_fraction))\n\n from tensorflow.python.client import device_lib\n\n print('DEVICES : %i (including CPU and GPU)\\n' %(len(device_lib.list_local_devices())),\n device_lib.list_local_devices())\n print('\\nFraction of GPU memory in use : %d' %(gpu_fraction))\n"
}
] | 2 |
pombredanne/django-wiki-syntax
|
https://github.com/pombredanne/django-wiki-syntax
|
306d4353d1b9f3c4f4592c628a53839aa9465a2a
|
b1a326964257fe1f42d4fd90f5ca6c5a85029279
|
d67eae224f2d46e19c688ee928a2cdb2af97de25
|
refs/heads/master
| 2017-05-02T14:30:01.875101 | 2011-06-25T17:02:59 | 2011-06-25T17:02:59 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6610363721847534,
"alphanum_fraction": 0.66991126537323,
"avg_line_length": 25.263158798217773,
"blob_id": "8d857ef8113f524eb903ccb2bcc93f3dbd2bf902",
"content_id": "8da5c2a8392811b8dfb68cb002b7695c2a6b6cb0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3493,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 133,
"path": "/wikisyntax/helpers.py",
"repo_name": "pombredanne/django-wiki-syntax",
"src_encoding": "UTF-8",
"text": "import re\n\nfrom django.conf import settings\nfrom django.db.models.loading import get_model\nfrom django.core.cache import cache\nfrom django.utils.safestring import mark_safe\n\ndef wikisafe_markdown(value):\n\tfrom django.contrib.markup.templatetags.markup import markdown\n\treturn mark_safe(markdown(value.replace('[[','LBRACK666').replace(']]','RBRACK666')).replace('LBRACK666','[[').replace('RBRACK666',']]'))\n\nclass WikiException(Exception): # Raised when a particular string is not found in any of the models.\n\tpass\n\ndef wikify(match): # Excepts a regexp match\n\twikis = [] # Here we store our wiki model info\n\n\tfor i in settings.WIKISYNTAX:\n\t\tname = i[0]\n\t\tmodstring = i[1]\n\t\tmodule = __import__(\".\".join(modstring.split(\".\")[:-1]))\n\t\tfor count, string in enumerate(modstring.split('.')):\n\t\t\tif count == 0:\n\t\t\t\tcontinue\n\n\t\t\tmodule = getattr(module,string)\n\n\t\tmodule.name = name\n\t\twikis.append(module())\n\n\ttoken, trail = match.groups() # we track the 'trail' because it may be a plural 's' or something useful\n\n\tif ':' in token:\n\t\t\"\"\"\n\t\tFirst we're checking if the text is attempting to find a specific type of object.\n\n\t\tExmaples:\n\n\t\t[[user:Subsume]]\n\n\t\t[[card:Jack of Hearts]]\n\n\t\t\"\"\"\n\t\tprefix = token.split(':',1)[0].lower().rstrip()\n\t\tname = token.split(':',1)[1].rstrip()\n\t\tfor wiki in wikis:\n\t\t\tif prefix == wiki.name:\n\t\t\t\tif wiki.attempt(name,explicit=True):\n\t\t\t\t\t\"\"\"\n\t\t\t\t\tWe still check attempt() because maybe\n\t\t\t\t\twork is done in attempt that render relies on,\n\t\t\t\t\tor maybe this is a false positive.\n\t\t\t\t\t\"\"\"\n\t\t\t\t\treturn wiki.render(name,trail=trail,explicit=True)\n\t\t\t\telse:\n\t\t\t\t\traise WikiException\n\n\t\"\"\"\n\tNow we're going to try a generic match across all our wiki objects.\n\n\tExample:\n\n\t[[Christopher Walken]]\n\n\t[[Studio 54]]\n\t[[Beverly Hills: 90210]] <-- notice ':' was confused earlier as a wiki 
prefix name\n\n\t[[Cat]]s <-- will try to match 'Cat' but will include the plural \n\n\t[[Cats]] <-- will try to match 'Cats' then 'Cat'\n\n\t\"\"\"\n\tfor wiki in wikis:\n\t\tif getattr(wiki,'prefix_only',None):\n\t\t\tcontinue\n\n\t\tif wiki.attempt(token):\n\t\t\treturn wiki.render(token,trail=trail)\n\n\t\"\"\"\n\tWe tried everything we could and didn't find anything.\n\t\"\"\"\n\n\traise WikiException(\"No item found for '%s'\"% (token))\n\nclass wikify_string(object):\n\tdef __call__(self, string, fail_silently=True):\n\t\tself.fail_silently = fail_silently\n\t\tself.cache = {}\n\t\tself.set_cache = {}\n\n\t\tfrom wikisyntax import fix_unicode\n\t\tWIKIBRACKETS = '\\[\\[([^\\]]+?)\\]\\]'\n\t\tif not string:\n\t\t\treturn ''\n\n\t\tstring = fix_unicode.fix_unicode(string)\n\n\t\tif getattr(settings,'WIKISYNTAX_DISABLE_CACHE',False) == False:\n\t\t\tkeys = re.findall(WIKIBRACKETS, string)\n\t\t\tself.cache = cache.get_many([k.replace(' ','-').lower() for k in keys if len(k) < 251])\n\n\t\tcontent = re.sub('%s(.*?)' % WIKIBRACKETS,self.markup_to_links,string)\n\t\tcache.set_many(self.set_cache)\n\t\treturn content\n\n\tdef __new__(cls, string, **kwargs):\n\t\tobj = super(wikify_string, cls).__new__(cls)\n\t\treturn obj(string, **kwargs)\n\n\tdef markup_to_links(self,match):\n\t\tstring = match.groups()[0].lower().replace(' ','-')\n\n\t\tif getattr(settings,'WIKISYNTAX_DISABLE_CACHE',False) == False:\n\t\t\tif string in self.cache:\n\t\t\t\treturn self.cache[string]\n\n\t\t\tif string in self.set_cache:\n\t\t\t\treturn self.set_cache[string] # Maybe they typed it twice?\n\n\t\ttry:\n\t\t\tnew_val = wikify(match)\n\n\t\t\tif getattr(settings,'WIKISYNTAX_DISABLE_CACHE',False) == False:\n\t\t\t\tself.set_cache[string] = new_val\n\n\t\t\treturn new_val\n\n\t\texcept WikiException:\n\t\t\tif not self.fail_silently:\n\t\t\t\traise\n\n\t\t\treturn string\n"
}
] | 1 |
jonathan-crandall/RecipeSite
|
https://github.com/jonathan-crandall/RecipeSite
|
73d0bf80b3685cfb5e0eeef9ff183ab55b208d98
|
768b836328a18989090440d68350b93ffb419804
|
a7a7a6e1b9996aedd7146aac24452995a2aed0ec
|
refs/heads/main
| 2023-07-06T05:44:57.805998 | 2021-08-16T00:25:16 | 2021-08-16T00:25:16 | 396,523,629 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.800000011920929,
"alphanum_fraction": 0.800000011920929,
"avg_line_length": 24,
"blob_id": "0963c7df5a1f9526069018aab10896a75a2f6f29",
"content_id": "6fcc977ea4c7d995f3ad0c4239fefa07b4149047",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 50,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 2,
"path": "/README.md",
"repo_name": "jonathan-crandall/RecipeSite",
"src_encoding": "UTF-8",
"text": "# RecipeSite\nA website to manage and sort recipes\n"
},
{
"alpha_fraction": 0.7089040875434875,
"alphanum_fraction": 0.7465753555297852,
"avg_line_length": 31.44444465637207,
"blob_id": "44efea3d46d0190c9f8e3de1dfda8a00a596d53b",
"content_id": "1c418598927949d16cc245828fa34da575c2423f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 292,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 9,
"path": "/RecipeSite/recipes/models.py",
"repo_name": "jonathan-crandall/RecipeSite",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n\n# Create your models here.\nclass Recipe(models.Model):\n food = models.CharField(max_length=100)\n ingredients = models.CharField(max_length=4000)\n instructions = models.CharField(max_length=4000)\n created_on = models.DateTimeField(auto_now_add=True)\n"
}
] | 2 |
Vincent-Vais/eby-API
|
https://github.com/Vincent-Vais/eby-API
|
5a1ce061ef7b2bfaccf8e2d4e0b204355ba34bfe
|
f03af5561c76c9d60b617ab9b3a1f03666e91384
|
6009971fdcd1bafa7537dd9f958abc1cebdaf10e
|
refs/heads/master
| 2022-12-06T23:04:10.017444 | 2020-08-29T16:51:42 | 2020-08-29T16:51:42 | 291,308,578 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6285195350646973,
"alphanum_fraction": 0.6303360462188721,
"avg_line_length": 29.58333396911621,
"blob_id": "2a0bd9f333f9c8e729bb25de4251354d0e4b75e4",
"content_id": "c5756f8b9e7a3219d0f2929a5ed4fa053fa3cc42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1101,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 36,
"path": "/proxy.py",
"repo_name": "Vincent-Vais/eby-API",
"src_encoding": "UTF-8",
"text": "import json\nfrom http_request_randomizer.requests.proxy.requestProxy import RequestProxy\n\n\ndef getProxy():\n try:\n # proxy addresses are stored in json file {proxies: [arr of prox]}\n with open(\"proxies.json\") as f:\n try:\n data = json.load(f)\n except Exception as e:\n print(f\"Error, {e}\")\n # first run, file has not been created yet\n except FileNotFoundError:\n data = loadProxies()\n proxies = data[\"proxies\"]\n # each time the scrapper is run proxy is popped from arr so gotta check\n if len(proxies) == 0:\n proxies = loadProxies()\n # we do not want to repaeat proxy addresses\n proxy = proxies.pop(0)\n # store updated array back in file\n updateProxiesJSON(proxies)\n return proxy\n\n\ndef loadProxies():\n req_proxy = RequestProxy()\n proxies = req_proxy.get_proxy_list() # this will create proxy list\n data = {\"proxies\": [proxy.get_address() for proxy in proxies]}\n return data\n\n\ndef updateProxiesJSON(p):\n with open(\"proxies.json\", \"w\") as f:\n json.dump({\"proxies\": p}, f)\n"
},
{
"alpha_fraction": 0.5327510833740234,
"alphanum_fraction": 0.5473071336746216,
"avg_line_length": 21.161291122436523,
"blob_id": "0fd59419380c82856accfabe78b875a18ee67029",
"content_id": "90d70469f77be8aad7a35273c35b11e1f0486cdd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 687,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 31,
"path": "/app.py",
"repo_name": "Vincent-Vais/eby-API",
"src_encoding": "UTF-8",
"text": "from flask import Flask\nfrom flask import request\nfrom ebay_bs import scrape\nimport os\nimport json\n\nk = os.environ[\"KEY\"]\n\napp = Flask(__name__)\n\n\[email protected](\"/<key>\")\ndef parse(key):\n print(\"Started\")\n if k == key:\n print(\"keys are matching\")\n q_key = request.args.get(\"key\").replace(\" \", \"+\")\n q_page = request.args.get(\"page\")\n results = scrape(q_key, q_page)\n if len(results) == 0:\n status_code = 404\n else:\n status_code = 200\n else:\n results = None\n status_code = 404\n return (\n json.dumps({\"results\": results}),\n status_code,\n {\"ContentType\": \"application/json\"},\n )\n"
},
{
"alpha_fraction": 0.6000000238418579,
"alphanum_fraction": 0.6000000238418579,
"avg_line_length": 9,
"blob_id": "b59fde20c1f171f0630f96a26e8d0f4bb17e9f1c",
"content_id": "a0f81fc32ecc6cb38de4afe122387f560737649b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 10,
"license_type": "no_license",
"max_line_length": 9,
"num_lines": 1,
"path": "/README.md",
"repo_name": "Vincent-Vais/eby-API",
"src_encoding": "UTF-8",
"text": "# eby-API\n"
},
{
"alpha_fraction": 0.5890231132507324,
"alphanum_fraction": 0.5976890921592712,
"avg_line_length": 29.45599937438965,
"blob_id": "6c1c25c6f5c5275053a485b8e08d96cb1862766f",
"content_id": "0ff4275f2818a9e45a7552771ae68480ac993304",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3808,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 125,
"path": "/ebay_bs.py",
"repo_name": "Vincent-Vais/eby-API",
"src_encoding": "UTF-8",
"text": "from proxy import getProxy\nfrom bs4 import BeautifulSoup as bs\nimport requests\nimport re\n\nimport lxml\nimport logging\n\n# create logger\nlogger = logging.getLogger(__name__)\nlogger.setLevel(level=logging.DEBUG)\n\n# formatter\nformatter = logging.Formatter(\n \"%(levelname)s - %(asctime)-s - %(filename)s - %(lineno)d --> %(message)s\"\n)\n\n\n# stream handler\nsh = logging.StreamHandler()\nsh.setLevel(logging.INFO)\nsh.setFormatter(formatter)\n\n# file handler\nfh = logging.FileHandler(\"logs.log\", \"w\")\nfh.setLevel(level=logging.DEBUG)\nfh.setFormatter(formatter)\n\n# file handler for HTML pages\nfhHTML = logging.FileHandler(\"pages.html\", \"w\")\nfhHTML.setLevel(level=logging.CRITICAL)\nfhHTML.setFormatter(formatter)\n\nlogger.addHandler(sh)\nlogger.addHandler(fh)\nlogger.addHandler(fhHTML)\n\nlogger.disabled = True\n\n\ndef setup():\n logger.info(\"Entered Setup\")\n headers = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36\"\n }\n logger.debug(f\"Created headers: {headers}\")\n PROXY = getProxy()\n logger.debug(f\"Created proxy: {PROXY}\")\n proxies = {\n \"httpProxy\": PROXY,\n \"ftpProxy\": PROXY,\n \"sslProxy\": PROXY,\n \"proxyType\": \"MANUAL\",\n }\n logger.info(f\"Setup is done: {[headers, proxies]}\")\n return [headers, proxies]\n\n\ndef scrape(word, page):\n logger.info(\"Entered Scrape\")\n logger.debug(f\"Param passed: {word}, {page}\")\n\n logger.debug(\"Setting up headers and proxies\")\n [headers, proxies] = setup()\n logger.debug(\"Setup is done\")\n logger.debug(f\"Headers : {headers}\")\n logger.debug(f\"Proxies : {proxies}\")\n\n key = word.replace(\" \", \"+\")\n logger.debug(f\"Created a keyword from param: {key}\")\n\n url = f\"https://www.ebay.com/sch/i.html?_nkw={key}&_pgn={page}\"\n logger.debug(f\"Url for request: {url}\")\n\n logger.info(\"Making a request\")\n response = requests.get(url, headers=headers, proxies=proxies)\n if 
response.ok:\n logger.info(\"Respnose - Ok\")\n # time.sleep(10)\n logger.debug(\"Parssing HTML\")\n page = bs(response.text, \"lxml\")\n logger.info(\"HTML parssed\")\n logger.critical(f\"{url}:{page.prettify()}\")\n\n logger.debug(\"Looking for item divs : <div class='a-section a-spacing-medium'>\")\n ul = page.find(\"ul\", \"srp-results srp-list clearfix\")\n lis = ul.find_all(\"li\", \"s-item\")\n logger.debug(f\"Lis: \\n{lis}\")\n results = []\n for li in lis:\n item = {}\n imgDiv = li.find(\"img\", \"s-item__image-img\")\n span = li.find(\"span\", \"s-item__price\")\n detailsDivs = li.find_all(\"div\", \"s-item__detail s-item__detail--primary\")\n if imgDiv and span and detailsDivs:\n logger.debug(\"Elements found. Parsing text\")\n imgUrl = imgDiv.attrs[\"src\"]\n title = imgDiv.attrs[\"alt\"]\n price = span.get_text()\n details = [div.find(\"span\").get_text() for div in detailsDivs]\n\n item[\"img\"] = imgUrl\n item[\"title\"] = title\n item[\"price\"] = price\n item[\"details\"] = details\n\n logger.debug(f\"\\nITEM CREATED: \\n{item}\\n\")\n results.append(item)\n else:\n logger.warning(f\"Problem in : {url}\")\n logger.warning(f\"imDiv: {imgDiv}\")\n logger.warning(f\"span: {span}\")\n logger.warning(f\"detailsDivs: {detailsDivs}\")\n logger.warning(\"NOT FOUND. CONTINUE SEARCH\")\n return results\n else:\n logger.info(\"Respnose - Failed\")\n logger.warning(f\"Error code: {response.status_code}\")\n return None\n\n\n# res = scrape(\"phone case\", 1)\n\n# for item in res:\n# print(f\"\\n{item}\\n\")\n\n"
},
{
"alpha_fraction": 0.46341463923454285,
"alphanum_fraction": 0.6878048777580261,
"avg_line_length": 15.184210777282715,
"blob_id": "c35482c3f4dd4e54588bef7a606654b371fe820f",
"content_id": "1642808bab69846a29148346baf7ee86c62235a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 615,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 38,
"path": "/requirements.txt",
"repo_name": "Vincent-Vais/eby-API",
"src_encoding": "UTF-8",
"text": "appdirs==1.4.4\nastroid==2.4.2\nattrs==20.1.0\nbeautifulsoup4==4.9.1\nblack==19.10b0\nbs4==0.0.1\ncertifi==2020.6.20\ncffi==1.14.2\nchardet==3.0.4\nclick==7.1.2\ncryptography==3.0\nFlask==1.1.2\ngunicorn==20.0.4\nhttmock==1.3.0\nhttp-request-randomizer==1.2.3\nidna==2.10\nisort==5.4.2\nitsdangerous==1.1.0\nJinja2==2.11.2\nlazy-object-proxy==1.4.3\nlxml==4.5.2\nMarkupSafe==1.1.1\nmccabe==0.6.1\npathspec==0.8.0\npsutil==5.7.2\npycparser==2.20\npylint==2.6.0\npyOpenSSL==19.1.0\npython-dateutil==2.8.1\nregex==2020.7.14\nrequests==2.24.0\nsix==1.15.0\nsoupsieve==2.0.1\ntoml==0.10.1\ntyped-ast==1.4.1\nurllib3==1.25.10\nWerkzeug==1.0.1\nwrapt==1.12.1\n"
}
] | 5 |
medina325/Project_1
|
https://github.com/medina325/Project_1
|
ee49842e0da8e9bd8469ce2a96ba1684058819d1
|
5483d806e391990f7de4d329fb74a939372323db
|
b9422e2991d503fd246c0f844ba4935ed399878d
|
refs/heads/master
| 2022-12-27T16:26:41.339264 | 2022-05-05T01:20:59 | 2022-05-05T01:20:59 | 298,492,406 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4434124529361725,
"alphanum_fraction": 0.44986239075660706,
"avg_line_length": 35.224300384521484,
"blob_id": "a60b909a35c640a10a88056d12cd7d88812b21b0",
"content_id": "ba4eaf87b5c9386184ff459d0c6311be2d92651e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11628,
"license_type": "no_license",
"max_line_length": 141,
"num_lines": 321,
"path": "/encyclopedia/util.py",
"repo_name": "medina325/Project_1",
"src_encoding": "UTF-8",
"text": "import re\n\nfrom django.core.files.base import ContentFile\nfrom django.core.files.storage import default_storage\n\n\ndef list_entries():\n \"\"\"\n Returns a list of all names of encyclopedia entries.\n \"\"\"\n _, filenames = default_storage.listdir(\"entries\")\n return list(\n sorted( re.sub(r\"\\.md$\", \"\", filename) for filename in filenames if filename.endswith(\".md\"))\n )\n\n\ndef save_entry(title, content):\n \"\"\"\n Saves an encyclopedia entry, given its title and Markdown\n content. If an existing entry with the same title already exists,\n it is replaced.\n \"\"\"\n filename = f\"entries/{title}.md\"\n if default_storage.exists(filename):\n default_storage.delete(filename)\n default_storage.save(filename, ContentFile(content))\n\n\ndef get_entry(title):\n \"\"\"\n Retrieves an encyclopedia entry by its title. If no such\n entry exists, the function returns None.\n \"\"\"\n try:\n f = default_storage.open(f\"entries/{title}.md\")\n return f.read().decode(\"utf-8\")\n except FileNotFoundError:\n return None\n\n# -------------------------------------------------------------------------------------------------------------\n# My functions\ndef delete_entry(title):\n \"\"\"\n Delete entry by the title given\n \"\"\"\n try:\n f = default_storage.delete(f\"entries/{title}.md\")\n return True\n except FileNotFoundError:\n return False\n\ndef suffix_search(title):\n \"\"\"\n Retrieves encyclopedia entries by a suffix match.\n \"\"\"\n _, filenames = default_storage.listdir(\"entries\")\n\n searchResults = list(\n re.sub(r\"\\.md$\", \"\", filename) for filename in filenames if re.match(title, filename) is not None\n )\n return searchResults\n\ndef save_in_storage(md_file, title):\n f = open(f\"entries/{title}.md\", \"w\")\n f.write(md_file)\n f.close()\n\ndef md_to_html(md_file, title):\n # ---------------------------------------------------------------------------------------------------------------------------------------\n # Auxiliary 
functions\n \n def link_md_to_html(link_md, k):\n not_link_value_aux = k # saving original value of k (i.e. 'i' from the main loop) in case it's not a link\n k += 1 # Continue after the '['\n link_value = \"\"\n\n while k+1 < len(link_md) and link_md[k] != \"\\n\":\n # Looking for boldtext inside link's value\n if (link_md[k] == '*' and link_md[k+1] == '*') or (char == '-' and link_md[k+1] == '-'):\t\n strong, num_jumps = strong_md_to_html(link_md[k:])\n link_value += strong\n k += num_jumps\n\n # Looking for the end of the link's value and begining of it's href\n elif link_md[k] == ']' and link_md[k+1] == '(':\n md_href = \"\"\n k += 1 # So it can start at '('\n\n while link_md[k-1] != ')': # k-1 to concatenate ')' too\n md_href += link_md[k]\n k += 1\n\n link_p = re.compile(r\"(\\(https?://(www\\.)?\\w+\\.\\w+\\)|\\([^\\)]+\\)|\\([^\\)]+\\))\")\n\n # maybe this match is not really necessary but it gives m and n (have to think about it)\n if (link_match := link_p.match(md_href)): \n m, n = link_match.span()\n \n return (\"<a href=\\\"\" + md_href[m+1:n-1] + \"\\\">\" + link_value + \"</a>\", k-1)\n break\n else:\n link_value += link_md[k] # Acumulates until it finds \"](\" or not\n \n k += 1\t\n \n return (\"\", not_link_value_aux-1) # just returns the text \n\n def strong_md_to_html(strong_content_md):\n strong_p = re.compile(r\"(\\*\\*[^*]+\\*\\*|--[^*]+--)\")\n\n # Looking for a boldtext match\n if (strong_match := strong_p.match(strong_content_md)): \n m, n = strong_match.span()\n \n return (\"<strong>\" + strong_content_md[m+2:n-2] + \"</strong>\", len(strong_content_md[m:n])-1)\n\n return (\"**\", 1)\n\n def header_md_to_html(header_md):\n # Pattern objects\n h1_p = re.compile(r\"#\\s+\")\n h2_p = re.compile(r\"##\\s+\")\n h3_p = re.compile(r\"###\\s+\")\n h4_p = re.compile(r\"####\\s+\")\n h5_p = re.compile(r\"#####\\s+\")\n h6_p = re.compile(r\"######\\s+\")\n hx_content_p = re.compile(r\"[^\\n]+(\\n|$)\", re.M) # Conteudo de um header\n \n if 
(hx_match:=h1_p.match(header_md)):\n num = 1\n elif (hx_match:=h2_p.match(header_md)):\n num = 2\n elif (hx_match:=h3_p.match(header_md)):\n num = 3\n elif (hx_match:=h4_p.match(header_md)):\n num = 4\n elif (hx_match:=h5_p.match(header_md)):\n num = 5\n elif (hx_match:=h6_p.match(header_md)):\n num = 6\n \n if (hx_match):\n _, n = hx_match.span()\n after_header = header_md[n:] # Getting everything after the #\n if (match_obj := hx_content_p.match(after_header)): \n k, l = match_obj.span()\n header_content_md = after_header[k:l] # Getting actual header content\n\n # Looking for boldtext and links inside header\n is_header_content = False\n j = 0\n header_content_html = \"\"\n while j+1 < len(header_content_md): # To ignore the '\\n'\n char = header_content_md[j]\n \n if (char == \"*\" and header_content_md[j+1] == '*') or (char == '-' and header_content_md[j+1] == '-'):\n strong_content, num_jumps = strong_md_to_html(header_content_md[j:]) \n header_content_html += strong_content \n \n j += num_jumps\n elif char == \"[\" and is_header_content == False:\n link, aux = link_md_to_html(header_content_md, j)\n \n if aux == j-1:\n is_header_content = True\n else:\n header_content_html += link\t\n j = aux\n else:\n header_content_html += char\n is_header_content = False\n\n j += 1\n \n return f\"<h{num}>\" + header_content_html + f\"</h{num}>\\n\"\n else:\n return \"\\\\\\\\\"\n\n def ul_md_to_html(ul_md):\n ul_html = \"<ul>\\n\"\n\n is_li_content = False\n li_html = \"\\t<li>\"\n k = 0\n while k+1 < len(ul_md):\n c = ul_md[k]\n\n # If it reaches a break line without another list item symbol \n # it means that's end of the list\n if c == '\\n' and (ul_md[k+1] != \"-\" or ul_md[k+1] != \"*\"):\n li_html += \"</li>\\n\"\n return \"<ul>\\n\" + li_html + \"</ul>\\n\", k\n elif k+2 == len(ul_md):\n li_html += c + ul_md[k+1] + \"</li>\\n\"\n return \"<ul>\\n\" + li_html + \"</ul>\\n\", k+2\n elif c == '\\n' and ul_md[k+1] == '-':\n li_html += \"</li>\\n\\t<li>\"\n elif (c == 
'*' and ul_md[k+1] == '*') or (c == '-' and ul_md[k+1] == '-'):\n strong, num_jumps = strong_md_to_html(ul_md[k:]) \n li_html += strong\n k += num_jumps\n elif c == '[' and is_li_content == False:\n link, aux = link_md_to_html(ul_md, k)\n \n # if the value returned it's equal to k-1, it means it's not a link but a li content\n if aux == k-1:\n is_li_content = True\n else:\n li_html += link\t\n k = aux\t\t\n else:\n if c != '-' and c != '*':\n li_html += c\n is_li_content = False\n\n k += 1\n \n def p_md_to_html(p_md):\n p_html = \"<p>\"\n\n not_link = False\n k = 0\n while k+1 < len(p_md) and p_md[k] != \"\\n\":\n c = p_md[k]\n\n if (c == '*' and p_md[k+1] == '*') or (c == '-' and p_md[k+1] == '-'):\n strong, num_jumps = strong_md_to_html(p_md[k:]) \n p_html += strong\n k += num_jumps\n elif c == '[' and not_link == False:\n link, aux = link_md_to_html(p_md, k)\n \n # if the value returned it's equal to k-1, it means it's not a link but a paragraph content\n if aux == k-1:\n not_link = True\n else:\n p_html += link\t\n k = aux\t\t\n else:\n p_html += c\n not_link = False\n\n k += 1\n\n # Obs.: In the main while loop, in order to check if it's a strong tag or just an ul tag \n # I check the current charactere '*' and the next to make sure. However, by checking the next charactere\n # it's possible to reach and index out of range (e.g. 
if there's a '-' charactere at the very end of input file)\n # Therefore, the last character gets left out of the final html file, so in case the last tag is a paragraph\n # and contains a * ou - in the end of the string, that's why it's being concatened here (p_md[k])\n if p_md[k] != \"\\n\":\n return p_html + p_md[k] + \"</p>\", k-1\n else:\n return p_html + \"</p>\", k-1\n # ---------------------------------------------------------------------------------------------------------------------------------------\n # \n html_file = '''{% extends \\\"encyclopedia/layout.html\\\" %}\n\n{% block title %}\n ''' + \"{{ title }}\" + '''\n{% endblock %}\n\n{% block body %}\n '''\n\n # Main loop to analyse the file\n isparagraph = False\n i = 0\n while i+1 < len(md_file):\n char = md_file[i]\n\n if char == '#':\n html_file += header_md_to_html(md_file[i:])\n\n while i < len(md_file)-1 and md_file[i] != \"\\n\": # To continue from where the header ends\n i += 1\n\n elif (char == '*' and md_file[i+1] == '*') or (char == '-' and md_file[i+1] == '-'):\n strong, num_jumps = strong_md_to_html(md_file[i:]) \n html_file += strong\n i += num_jumps \n \n elif char == '[' and isparagraph == False:\n link, aux = link_md_to_html(md_file, i)\n \n # if the value returned it's equal to i-1, it means it's not a link but a paragraph\n if aux == i-1:\n isparagraph = True\n else:\n html_file += link\n i = aux\n\n elif char == '-' or char == '*':\n ul, num_jumps = ul_md_to_html(md_file[i:])\n html_file += ul\n\n i += num_jumps\n else:\n # If the character comes after a break line or if the paragraph is the first thing on file\n if (md_file[i-1] == \"\\n\" and char != \"\\n\") or re.match(r\"^.\", char):\n paragraph, num_jumps = p_md_to_html(md_file[i:])\n html_file += paragraph\n i += num_jumps\n isparagraph = False\n elif char != \"\\n\":\n html_file += char\n\n i += 1\n\n html_file += '''\n{% endblock %}\n\n{% block nav %}\n <a href=\"{% url 'editpage' title %}\">Edit Page</a><br>\n <a 
href=\"{% url 'deletepage' title %}\">Delete Page</a>\n{% endblock %}'''\n\n f = open(\"encyclopedia/templates/encyclopedia/entry.html\", \"w\")\n f.write(html_file)\n f.close()\n\n # return html_file\n"
},
{
"alpha_fraction": 0.42105263471603394,
"alphanum_fraction": 0.4802631437778473,
"avg_line_length": 3.151515245437622,
"blob_id": "abfca71e15b67ae517c8c15dc444520fa8f4b941",
"content_id": "fb2e97d062b7cf756372ee5c0be6aac654819edf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 152,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 33,
"path": "/entries/DMC.md",
"repo_name": "medina325/Project_1",
"src_encoding": "UTF-8",
"text": "# DMC is a game lalala\r\r\n\r\r\n\r\r\n\r\r\n** Strong** to test it out ba\r\r\n\r\r\n\r\r\n\r\r\n- List \r\r\n\r\r\n- of \r\r\n\r\r\n- things **aaaa**\r\r\n\r\r\n123123123\r\r\n\r\r\nanything at all"
},
{
"alpha_fraction": 0.4895237982273102,
"alphanum_fraction": 0.49714285135269165,
"avg_line_length": 25.299999237060547,
"blob_id": "53d84ed39bf5f3fefa904b07c6e0f904f4aead66",
"content_id": "7177864ffaf4a03bc9cbbd21edb7e48b725b4dcf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 525,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 20,
"path": "/encyclopedia/templates/encyclopedia/errorDisplay.html",
"repo_name": "medina325/Project_1",
"src_encoding": "UTF-8",
"text": "{% extends \"encyclopedia/layout.html\" %}\n\n{% block title %}\n Error\n{% endblock %}\n\n{% block body %}\n {% if flag %}\n <h2>{{ title }} page already exists</h2>\n <p>\n There's already a page called {{ title }}. You can edit it by clicking <a href=\"{% url 'editpage' title %}\">here</a>.\n </p>\n {% else %}\n <h2>Page does not exist</h2>\n <p>\n You can create this page by clicking <a href=\"{% url 'newpage' %}\">here</a>.\n </p>\n {% endif %}\n \n{% endblock %}"
},
{
"alpha_fraction": 0.682692289352417,
"alphanum_fraction": 0.682692289352417,
"avg_line_length": 40.599998474121094,
"blob_id": "319515f89d9e643231a8b36d750eda38b46ededb",
"content_id": "e57dfdbccf4d3480efa40f4e16701cd584401f16",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 624,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 15,
"path": "/encyclopedia/urls.py",
"repo_name": "medina325/Project_1",
"src_encoding": "UTF-8",
"text": "from django.urls import path\n\nfrom . import views\n\nurlpatterns = [\n path('', views.index, name=\"index\"),\n path(\"randomPage\", views.randomPage, name=\"random\"),\n path(\"searchResults\", views.searchResults, name=\"results\"),\n path(\"savePage\", views.savePage, name=\"savepage\"),\n path(\"newPage\", views.createNewPage, name=\"newpage\"),\n path(\"<slug:title>/saveEdits\", views.saveEdits, name=\"save_edits\"),\n path(\"wiki/<slug:title>\", views.entryPage, name=\"entrypage\"),\n path(\"<slug:title>/editPage\", views.editPage, name=\"editpage\"),\n path(\"<slug:title>/deletePage\", views.deletePage, name=\"deletepage\"),\n]\n"
},
{
"alpha_fraction": 0.6241209506988525,
"alphanum_fraction": 0.6258790493011475,
"avg_line_length": 32.458824157714844,
"blob_id": "419a23b4bd4955f4ac296521c4728f17697b1692",
"content_id": "696e31ecfea51e019b2e49752e29f50607fb3530",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2844,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 85,
"path": "/encyclopedia/views.py",
"repo_name": "medina325/Project_1",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.http import HttpResponseRedirect\nfrom django.urls import reverse\n\nfrom . import util\n\nimport random\n\ndef index(request):\n return render(request, \"encyclopedia/index.html\", {\n \"entryTitles\": util.list_entries()\n }) \n\ndef randomPage(request):\n entries_list = util.list_entries()\n random.shuffle(entries_list)\n random_title = entries_list[0]\n return entryPage(request, random_title)\n\ndef searchResults(request):\n title = searchValue = request.GET['q']\n \n # Would it be faster to just render the entryPage template, instead of taking another route?\n # But since I don't want to replicate code, let's just redirect.\n if util.get_entry(searchValue) is not None:\n return HttpResponseRedirect(reverse(\"entrypage\", args=[title]))\n else:\n results = util.suffix_search(searchValue)\n return render(request, \"encyclopedia/searchResults.html\", {\n \"entriesMatch\": results,\n \"resultsListLength\": len(results)\n })\n\ndef savePage(request):\n if request.method == \"POST\":\n title = request.POST[\"title\"]\n \n if(util.get_entry(title) is not None):\n return render(request, \"encyclopedia/errorDisplay.html\", {\n \"flag\": 1, # Defining flag 1 means the user is trying to save a page that already exists\n \"title\": title\n })\n else:\n util.save_in_storage(request.POST[\"newpage\"], title)\n \n util.md_to_html(util.get_entry(title), title)\n return render(request, \"encyclopedia/entry.html\", {\n \"title\": title\n })\n\ndef createNewPage(request):\n return render(request, \"encyclopedia/newpage.html\")\n\ndef saveEdits(request, title):\n if request.method == \"POST\":\n util.save_in_storage(request.POST[\"newpage\"], title)\n\n util.md_to_html(util.get_entry(title), title)\n return render(request, \"encyclopedia/entry.html\", {\n \"title\": title\n })\n\ndef editPage(request, title):\n page_md = util.get_entry(title)\n util.md_to_html(page_md, title)\n return render(request, 
\"encyclopedia/editPage.html\", {\n \"title\": title,\n \"page_md\": page_md\n })\n\ndef entryPage(request, title):\n if ((entry := util.get_entry(title)) is not None):\n util.md_to_html(entry, title)\n return render(request, \"encyclopedia/entry.html\", {\n \"title\": title\n })\n else:\n return render(request, \"encyclopedia/errorDisplay.html\", {\n \"flag\": 0 # Defining flag 0 means the user tried to look for a non existing page\n })\ndef deletePage(request, title):\n if (util.delete_entry(title)):\n return HttpResponseRedirect(reverse(\"index\"))\n else:\n pass # Show error? What kind of error?\n"
},
{
"alpha_fraction": 0.4097560942173004,
"alphanum_fraction": 0.4097560942173004,
"avg_line_length": 3.303030252456665,
"blob_id": "ee2e7dd305eed4c371ac50d4de653c6bff00bec9",
"content_id": "b6e721279d5aed37d5fc3e57870f221a5157f7b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 205,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 33,
"path": "/entries/Bloodborne.md",
"repo_name": "medina325/Project_1",
"src_encoding": "UTF-8",
"text": "# Bloodborne\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\nBloodborne is a rpg-action driven game, heavily influenced by **gothic** and --cosmical-- horror.\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n\r\r\n"
}
] | 6 |
yanbint/flounder
|
https://github.com/yanbint/flounder
|
ce07653f5dfccc3a379541ea83cca34a68b471b4
|
18740f4e04d491ba5e47e6b54d2538d8250953a1
|
d5c091d70c3ea931f25c8bbe4d1b01d8494d1af7
|
refs/heads/master
| 2021-09-03T21:20:56.397785 | 2018-01-12T03:30:48 | 2018-01-12T03:30:48 | 115,965,279 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4314928352832794,
"alphanum_fraction": 0.4527607262134552,
"avg_line_length": 36.630767822265625,
"blob_id": "cadcb36c296bc1fac0e2247770d6adb790e79911",
"content_id": "1349ce946c3cc750677d21cab6fb8829d11dc63b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2445,
"license_type": "permissive",
"max_line_length": 171,
"num_lines": 65,
"path": "/DataClean.py",
"repo_name": "yanbint/flounder",
"src_encoding": "UTF-8",
"text": "import cv2\nimport sys\nimport os\n\ndef data_clean(img_file, model_type):\n model_list = []\n img = cv2.imread(img_file)\n rows, cols, channels = img.shape\n imgYcc = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB) \n imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n if model_type == \"HSV\":\n imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)\n\n for r in range(rows):\n for c in range(cols): \n # get values from rgb color space\n R = imgRGB.item(r,c,0)\n G = imgRGB.item(r,c,1)\n B = imgRGB.item(r,c,2) \n # get values from ycbcr color space \n Y = imgYcc.item(r,c,0)\n Cr = imgYcc.item(r,c,1)\n Cb = imgYcc.item(r,c,2) \n if model_type == \"HSV\":\n H = imgHSV.item(r,c,0)\n S = imgHSV.item(r,c,1)\n V = imgHSV.item(r,c,2) \n\n # skin color detection \n if R > G and R > B:\n if (G >= B and 5 * R - 12 * G + 7 * B >= 0) or (G < B and 5 * R + 7 * G - 12 * B >= 0):\n if Cr > 135 and Cr < 180 and Cb > 85 and Cb < 135 and Y > 80:\n if model_type == \"RGB\":\n model_list.append([R, G, B])\n elif model_type == \"YCrCb\":\n model_list.append([Y, Cr, Cb])\n elif model_type == \"HSV\": \n model_list.append([H, S, V])\n return model_list\n\ndef data_clean_to_csv(img_file, model_type, clean_file = ''):\n model_list = data_clean(img_file, model_type)\n if clean_file == '':\n clean_file = os.path.dirname(img_file) + os.sep + os.path.basename(img_file).split('.')[0] + '_' + model_type + '.csv'\n with open(clean_file, 'w') as fout:\n for model in model_list:\n fout.write(\"%d;%d;%d\\n\" % (model[0], model[1], model[2]))\n \n print \"generate model csv: %s\" % clean_file\n return clean_file\n\nif __name__==\"__main__\":\n Usage = \"Usage\"\n if len(sys.argv) != 3:\n print Usage \n sys.exit(0)\n\n model_type = sys.argv[1]\n image_path = sys.argv[2]\n\n if model_type not in [\"RGB\", \"YCrCb\", \"HSV\"]:\n print Usage\n sys.exit(0)\n\n data_clean_to_csv(image_path, model_type)"
},
{
"alpha_fraction": 0.795918345451355,
"alphanum_fraction": 0.795918345451355,
"avg_line_length": 23.5,
"blob_id": "3c986ca7884fef6fb808a162e79273728f7aeb4c",
"content_id": "a3622ee4a9ae405611c88241e2d207ad440476e2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 49,
"license_type": "permissive",
"max_line_length": 37,
"num_lines": 2,
"path": "/README.md",
"repo_name": "yanbint/flounder",
"src_encoding": "UTF-8",
"text": "# flounder\nDetect skin colors to beautify image.\n"
},
{
"alpha_fraction": 0.5367782711982727,
"alphanum_fraction": 0.5557606220245361,
"avg_line_length": 35.123809814453125,
"blob_id": "e9d1b2c9868ade3bfc0c01c02749b82aba3718c7",
"content_id": "91fd8dd831fe2721a3e8af57d0973410b84cc22a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3793,
"license_type": "permissive",
"max_line_length": 134,
"num_lines": 105,
"path": "/Flounder.py",
"repo_name": "yanbint/flounder",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# following command:\n# ./Flounder.py ./images/*.jpg\n\nfrom sklearn.cluster import KMeans \nimport matplotlib.pyplot as plt # matplotlib show image\nimport matplotlib.image as mpimg\nimport argparse \nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D \nfrom sklearn.utils import shuffle\nimport os\nimport sys\nfrom PIL import Image\nimport cv2\nfrom DataClean import data_clean\n\nclass SkinModel(object):\n def __init__(self, skin_model = os.path.split(os.path.realpath(__file__))[0] + os.sep + \"MODEL.csv\"):\n self._skin_model = skin_model\n self._simple = np.array([])\n self.load_model()\n\n def load_model(self):\n print \"load_model\"\n models = []\n with open(self._skin_model, 'r') as fin:\n for line in fin:\n line = line[:-1]\n s = line.split(';')\n models.append([int(s[0]), int(s[1]), int(s[2])])\n self._simple = np.array(models)\n self.model = KMeans(n_clusters=3, random_state=0).fit(self._simple)\n\n def show(self):\n fig = plt.figure()\n color = (\"red\", \"green\", \"blue\")\n ax= fig.add_subplot(111, projection='3d')\n y_pred = KMeans(n_clusters=3, random_state=0).fit_predict(self._simple)\n colors=np.array(color)[y_pred]\n ax.scatter(self._simple[:, 0], self._simple[:, 1], self._simple[:, 2], c=colors)\n plt.show()\n\nclass SkinDetect(object):\n def __init__(self, skin_model, simple_num = 100):\n #self._image = image\n self._skin_model = skin_model\n self._simple = np.array([])\n self._simple_num = simple_num\n self._result = []\n self.black = 0 \n self.white = 0\n self.yellow = 0\n self._skin_mode = skin_model\n\n def skin_detect(self, image):\n skin_data = data_clean(image, \"HSV\")\n self._simple = shuffle(np.array(skin_data), random_state=0)[:self._simple_num]\n n = 0\n m = 0\n self._result = self._skin_mode.model.predict(self._simple)\n for i in self._result:\n if i == 1:\n n = n + 1\n if i == 2:\n m = m + 1\n self.black = (n) * 100/self._simple_num\n self.white = 
(self._simple_num-n-m) * 100/self._simple_num\n self.yellow = (m) * 100/self._simple_num\n self._image = image\n\n def show_result(self):\n fig = plt.figure()\n ax= fig.add_subplot(131, projection='3d')\n ax.set_title(\"Origin image data:\", fontsize = 12, loc = 'left')\n ax.scatter(self._simple[:, 0], self._simple[:, 1], self._simple[:, 2], c=\"purple\", marker='.')\n\n color = (\"red\", \"green\", \"blue\")\n ax= fig.add_subplot(132, projection='3d')\n ax.set_title(\"Predict image data:\", fontsize = 12, loc = 'left')\n colors = np.array(color)[self._result]\n ax.scatter(self._simple[:, 0], self._simple[:, 1], self._simple[:, 2], c=colors, marker='x')\n \n img = Image.open(self._image)\n ax= fig.add_subplot(133)\n ax.imshow(img)\n ax.set_title(\"Black: %d%%, White: %d%%, Yellow: %d%%\" % (self.black, self.white, self.yellow), fontsize = 12, loc = 'left')\n plt.xticks([]), plt.yticks([])\n plt.show()\n\n def __del__(self):\n print \"delete SkinDetect\"\n\nif __name__==\"__main__\":\n mySkinModel = SkinModel()\n #mySkinModel.show()\n mySkinDetect = SkinDetect(mySkinModel)\n\n for image in sys.argv[1:]:\n #print \"Processing file: {}\".format(image)\n mySkinDetect.skin_detect(image)\n mySkinDetect.show_result()\n print \"%s: [Black-%02d%%, Yellow-%02d%%, White-%02d%%]\" % (image, mySkinDetect.black, mySkinDetect.yellow, mySkinDetect.white)\n"
}
] | 3 |
TheBlueKnight/Music-Genre-Classification
|
https://github.com/TheBlueKnight/Music-Genre-Classification
|
420f518548953c61a948a7a97a7865bbacb0befe
|
030b2175ca3d0f8f66cca792f020b66dd947d8e8
|
09456c452d948ce7d1e5b18e8cddb02758dac38e
|
refs/heads/master
| 2022-01-12T15:55:40.765943 | 2019-05-03T03:02:03 | 2019-05-03T03:02:03 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7085404396057129,
"alphanum_fraction": 0.7342973351478577,
"avg_line_length": 31.02898597717285,
"blob_id": "f49b7cc0c7931bdc9a125db2f908e29e944c22f7",
"content_id": "1bef53b7ee2040eb9566811004fa79b3fe76ffb8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2213,
"license_type": "no_license",
"max_line_length": 307,
"num_lines": 69,
"path": "/README.md",
"repo_name": "TheBlueKnight/Music-Genre-Classification",
"src_encoding": "UTF-8",
"text": "<p align=\"center\">\n<img src=\"https://github.com/QirenSun/Music-Generator/blob/master/Image/11-yic-pop-essay.w1200.h630.jpg\" >\n</p>\n\n# Music Genre Classification \n\nResearch in **Deep learning and machine learning**, Boston University \n\n## Introducation \n\nMy work classifies ten classes music genre of a sound sample and uses Pytorch and scikit-learn to recognize the music genre. \n\n## GTZAN -- Datasets \n\nThe dataset consists of 1000 audio tracks each 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22050Hz Mono 16-bit audio files in .wav format. \nClasses: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, rock \nTraining Set: 80% \nTesting Set: 10% \nValiding Set: 10% \n\n## Preprocessing Data \n\nMel-frequency cepstral coefficients (MFCC) \nSpectral Centroid \nZero Crossing Rate \nChroma Frequencies \nSpectral Roll-off \nSpectrogram images \n\n<p align=\"center\">\n<img src=\"https://github.com/QirenSun/Music-Generator/blob/master/Image/1.PNG\" >\n</p>\n\n<p align=\"center\">\n<img src=\"https://github.com/QirenSun/Music-Generator/blob/master/Image/2.PNG\" >\n</p>\n\n\n## Models \n\nScikit-learn: SVM, MLP lbfgs(quasi-Newton method), MLP Adam(gradient-based optimizer), Decision Tree \nPytorch: CNN, DCNN, DCNN-RNN \n\n<p align=\"center\">\n<img src=\"https://github.com/QirenSun/Music-Generator/blob/master/Image/CNN_all_layers.png\" >\n</p>\n\n<p align=\"center\">\n<img src=\"https://github.com/QirenSun/Music-Generator/blob/master/Image/3.PNG\" >\n</p>\n\n\n## Results \n\nScikit-learn with MFCC(20), Spectral Centroid, Zero Crossing Rate, Chroma Frequencies, Spectral Roll-off: \nSVM: 65.5% \nMLP adam: 66% \nMLP lbfgs: 65% \nDecision Tree: 44% \n\nPytorch: \nDCNN-RNN with MFCC(50): 63% \nDCNN with MFCC(50): 60% \nCNN with spectrogram images: 43% \n\n## Discussion\n\nThe results of CNN, DCNN, DCNN-RNN are not the most optimized results. 
During analyzing the training loss and valid loss, the NN performance can increase by adjusting the learning rate, changing the parameters in the NN model, choosing the different features, and inputting the more substantial data size. \nThe performance of my NN will surpass the scikit-learn results after optimization. \n\n"
},
{
"alpha_fraction": 0.6133266091346741,
"alphanum_fraction": 0.6360424160957336,
"avg_line_length": 29.88524627685547,
"blob_id": "267b145dddcee33c0209a86245c1970a3ad7a06e",
"content_id": "8bdf20830f0f45866112e44904eb7b7a9fa12a67",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1981,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 61,
"path": "/MLP.py",
"repo_name": "TheBlueKnight/Music-Genre-Classification",
"src_encoding": "UTF-8",
"text": "import pandas as pd\r\nimport numpy as np\r\nfrom sklearn.neural_network import MLPClassifier\r\nfrom sklearn.model_selection import train_test_split\r\nfrom sklearn.preprocessing import LabelEncoder, StandardScaler\r\nfrom sklearn import svm\r\nfrom sklearn import tree\r\n#from sklearn.preprocessing import LabelEncoder\r\n#from sklearn.preprocessing import StandardScaler\r\n\r\n\r\ndata= pd.read_csv('data.csv')\r\ndata.head()\r\n\r\n\r\n# Dropping unneccesary columns\r\ndata = data.drop(['filename'],axis=1)\r\n\r\ngenre_list = data.iloc[:, -1]\r\nencoder = LabelEncoder()\r\ny = encoder.fit_transform(genre_list)\r\nscaler = StandardScaler()\r\nX = scaler.fit_transform(np.array(data.iloc[:, :-1], dtype = float))\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,random_state=2)\r\n\r\nclf1 = MLPClassifier(solver='lbfgs', \r\n alpha=1e-5,\r\n hidden_layer_sizes=(95,50), \r\n random_state=0,\r\n batch_size=128,\r\n shuffle=True)\r\nclf1.fit(X_train,y_train)\r\naccuracy1=clf1.score(X_test,y_test)\r\nprint('MLP lbfgs Accuracy: ',accuracy1)\r\n\r\nclf2 = MLPClassifier(solver='adam', \r\n alpha=1e-5,\r\n hidden_layer_sizes=(150,20), \r\n random_state=1,\r\n batch_size=128,\r\n shuffle=True)\r\nclf2.fit(X_train,y_train)\r\naccuracy2=clf2.score(X_test,y_test)\r\nprint('MLP adam Accuracy2: ',accuracy2)\r\n\r\n\r\n\r\nsvm_classifier = svm.SVC(kernel='linear')\r\nsvm_classifier.fit(X_train,y_train)\r\n#new_x = transform(np.asmatrix([6, 160]))\r\n#predicted = svm_classifier.predict(new_x)\r\naccuracy = svm_classifier.score(X_test, y_test)\r\nprint('SVM accuracy: ',accuracy)\r\n\r\n\r\nclf3 = tree.DecisionTreeClassifier(criterion = 'entropy')\r\nclf3.fit(X_train,y_train)\r\naccuracy = clf3.score(X_test, y_test)\r\n#clf = clf.fit(X_test,Y_test)\r\n#accuracy = clf.score(X_train, Y_train)\r\nprint('Descision Tree accuracy: ',accuracy)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
}
] | 2 |
keames1/Sugu-compiler-python-prototype
|
https://github.com/keames1/Sugu-compiler-python-prototype
|
bc17efa8c318c81cb4f46bbf80b8222d1d0769e1
|
20b8cb02ad9b926d17b85c61864170c584055ac6
|
60fac4780d6f35a1cd9d05c0987b5710ff82c73c
|
refs/heads/master
| 2020-09-28T10:47:11.143193 | 2019-12-16T04:54:42 | 2019-12-16T04:54:42 | 226,762,385 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4992780089378357,
"alphanum_fraction": 0.5072789788246155,
"avg_line_length": 38.04251480102539,
"blob_id": "0be77a4446ef888fc01ef6f18dcfeaa550ca119a",
"content_id": "4ba56faadd9a790004aaaea6e2b5d3d89279cafb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 42245,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 1082,
"path": "/sugu_interpreter.py",
"repo_name": "keames1/Sugu-compiler-python-prototype",
"src_encoding": "UTF-8",
"text": "\n# This main function will eventually perform all the operations necessary to executing to code.\n# For now, it just prints the token stream for testing purposes.\ndef main (argv):\n\n # Open the file to be executed.\n if len(argv) < 2:\n print(\"No input file. Compilation/interpretation terminated.\")\n exit()\n\n fileName = argv[1]; srcCode = \"\"\n try:\n with open(fileName, 'r') as f:\n srcCode = f.read()\n except FileNotFoundError:\n print(f\"No such file or directory '{fileName}'.\")\n\n lex = Lexer(srcCode, fileName)\n\n tokens, err = lex.genTokens()\n\n if err:\n print(err)\n exit()\n\n print(tokens)\n\n# A function for generating unique constants with values that will be consistent throughout execution\n# in cases where the specific values are irrelevant.\nnext_const = 0\ndef mkConst ():\n global next_const\n new_const = next_const\n next_const += 1\n return new_const\n\n# Token type identifier constants.\nTT_INTEGER = mkConst()\nTT_FLOAT = mkConst()\nTT_STRING = mkConst()\nTT_CHAR = mkConst()\nTT_KWD = mkConst()\nTT_LCURLY = mkConst()\nTT_RCURLY = mkConst()\nTT_LSQUARE = mkConst()\nTT_RSQUARE = mkConst()\nTT_PLUS_EQU = mkConst()\nTT_MINUS_EQU = mkConst()\nTT_POW_EQU = mkConst()\nTT_TIMES_EQU = mkConst()\nTT_DIV_EQU = mkConst()\nTT_MOD_EQU = mkConst()\nTT_XOR_EQU = mkConst()\nTT_FLOOR_DIV_EQU = mkConst()\nTT_LEFT_SHIFT_EQU = mkConst()\nTT_RIGHT_SHIFT_EQU = mkConst()\nTT_OR_EQU = mkConst()\nTT_AND_EQU = mkConst()\n\n# A range of expression related tokens marked by the start and end constants EXPR_TKN_RANGE_START and END.\nEXPR_TKN_RANGE_START = mkConst()\n\nTT_LPAREN = mkConst()\nTT_RPAREN = mkConst()\nTT_PLUS = mkConst()\nTT_MINUS = mkConst()\nTT_ASTERISK = mkConst()\nTT_DBL_ASTERISK = mkConst()\nTT_FWD_SLASH = mkConst()\nTT_DOUBLE_EQUAL = mkConst()\nTT_PERCENT = mkConst()\nTT_ANDPERSAND = mkConst()\nTT_PIPE_SYM = mkConst()\nTT_LINE_FILL = mkConst() # Used to allow comments to be on their own line within blocks.\nTT_BACKSLASH = 
mkConst()\nTT_TILDE = mkConst()\nTT_CARROT = mkConst()\nTT_NOT_EQU = mkConst()\nTT_LESS_THAN = mkConst()\nTT_GREATER_THAN = mkConst()\nTT_LESS_THAN_EQUAL = mkConst()\nTT_GREATER_THAN_EQUAL = mkConst()\nTT_LEFT_SHIFT = mkConst()\nTT_RIGHT_SHIFT = mkConst()\nTT_IN_PLACE_INCR = mkConst()\nTT_IN_PLACE_DECR = mkConst()\n\nEXPR_TKN_RANGE_END = mkConst()\n\nTT_DBL_FWD_SLASH = mkConst()\nTT_EQUAL = mkConst()\nTT_ILLEGAL_TOKEN = mkConst()\nTT_END_OF_FILE = mkConst()\nTT_IDENTIFIER = mkConst()\nTT_MEMBER_FIELD_REF = mkConst()\nTT_SEMICOLON = mkConst()\nTT_COLON = mkConst()\nTT_DBL_COLON = mkConst()\nTT_OCTOTHORPE = mkConst()\nTT_NEWLINE = mkConst()\nTT_DOLLAR_SIGN = mkConst()\nTT_COMMA = mkConst()\n\n# token type any makes Token.compare ignore the type of the token.\nTT_ANY = mkConst()\n# Token type None guarantees the Token.compare method will return False.\nTT_NONE = mkConst()\n\n# A token type map used for creating string representations of tokens.\nTOKEN_TYPE_MAP = {\n TT_INTEGER : \"INTEGER\",\n TT_FLOAT : \"FLOAT\",\n TT_KWD : \"KEYWORD\",\n TT_LPAREN : \"LPAREN\",\n TT_RPAREN : \"RPAREN\",\n TT_PLUS : \"PLUS\",\n TT_MINUS : \"MINUS\",\n TT_ASTERISK : \"ASTERISK\",\n TT_DBL_ASTERISK : \"DBL_ASTERISK\",\n TT_FWD_SLASH : \"FWD_SLASH\",\n TT_DBL_FWD_SLASH : \"DBL_FWD_SLASH\",\n TT_EQUAL : \"EQUAL\",\n TT_DOUBLE_EQUAL : \"DBL_EQUAL\",\n TT_PERCENT : \"PERCENT\",\n TT_TILDE : \"TILDE\",\n TT_ANDPERSAND : \"ANDPERSAND\",\n TT_PIPE_SYM : \"PIPE_SYM\",\n TT_CARROT : \"CARROT\",\n TT_NOT_EQU : \"NOT_EQUAL\",\n TT_LESS_THAN : \"LESS_THAN\",\n TT_GREATER_THAN : \"GREATER_THAN\",\n TT_LESS_THAN_EQUAL : \"LESS_THAN_EQUAL\",\n TT_GREATER_THAN_EQUAL : \"GREATER_THAN_EQUAL\",\n TT_ILLEGAL_TOKEN : \"ILLEGAL_TOKEN\",\n TT_END_OF_FILE : \"END_OF_FILE\",\n TT_IDENTIFIER : \"IDENTIFIER\",\n TT_MEMBER_FIELD_REF : \"MEMBER_FIELD_REF\",\n TT_LEFT_SHIFT : \"LEFT_SHIFT\",\n TT_RIGHT_SHIFT : \"RIGHT_SHIFT\",\n TT_STRING : \"STRING\",\n TT_CHAR : \"CHAR\",\n TT_BACKSLASH : \"BACKSLASH\",\n 
TT_SEMICOLON : \"SEMICOLON\",\n TT_COLON : \"COLON\",\n TT_OCTOTHORPE : \"OCTOTHORPE\",\n TT_NEWLINE : \"NEWLINE\",\n TT_ANY : \"ANY\",\n TT_DOLLAR_SIGN : \"DOLLAR_SIGN\",\n TT_COMMA : \"COMMA\",\n TT_NONE : \"NONE\",\n TT_LCURLY : \"LCURLY\",\n TT_RCURLY : \"RCURLY\",\n TT_IN_PLACE_INCR : \"IN_PLACE_INCREMENT\",\n TT_IN_PLACE_DECR : \"IN_PLACE_DECREMENT\",\n TT_PLUS_EQU : \"PLUS_EQU\",\n TT_MINUS_EQU : \"MINUS_EQU\",\n TT_POW_EQU : \"POW_EQU\",\n TT_TIMES_EQU : \"TIMES_EQU\",\n TT_DIV_EQU : \"DIV_EQU\",\n TT_MOD_EQU : \"MOD_EQU\",\n TT_XOR_EQU : \"XOR_EQU\",\n TT_FLOOR_DIV_EQU : \"FLOOR_DIVIDE_EQU\",\n TT_LEFT_SHIFT_EQU : \"LEFT_SHIFT_EQU\",\n TT_RIGHT_SHIFT_EQU : \"RIGHT_SHIFT_EQU\",\n TT_OR_EQU : \"OR_EQU\",\n TT_AND_EQU : \"AND_EQU\",\n TT_LSQUARE : \"LSQUARE\",\n TT_RSQUARE : \"RSQUARE\",\n TT_DBL_COLON : \"DBL_COLON\",\n TT_LINE_FILL : \"LINE_FILL\",\n}\n\n# Character set constants:\nCS_WHITESPACE = \" \\t\"\nCS_NEWLINE = \"\\n\\r\"\nCS_NUMERALS = \"0123456789\"\nCS_HEX_DIGITS = \"ABCDEFabcdef\" + CS_NUMERALS\nCS_OPERATORS = \"+-*/%&|^=!<>\"\nCS_WORD_CHRS = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz\" + CS_NUMERALS\n\n# Keyword constants\nKWD_AND = \"and\" # Keyword operators.\nKWD_OR = \"or\"\nKWD_NOT = \"not\"\nKWD_IF = \"if\" # Greyzone. 
They're both the keywords in\nKWD_ELIF = \"elif\" # conditional statements and keyword operators\nKWD_ELSE = \"else\" # in conditional expressions.\n\nKWD_I8 = \"i8\" # Data type keywords\nKWD_I16 = \"i16\"\nKWD_I32 = \"i32\"\nKWD_I64 = \"i64\"\nKWD_IEXP = \"Iexp\"\nKWD_U8 = \"u8\"\nKWD_U16 = \"u16\"\nKWD_U32 = \"u32\"\nKWD_U64 = \"u64\"\nKWD_F32 = \"f32\"\nKWD_F64 = \"f64\"\nKWD_FEXP = \"Fexp\"\nKWD_STR = \"str\"\nKWD_STRING = \"String\"\nKWD_LIST = \"List\"\nKWD_SIG = \"sig\"\nKWD_SIGNATURE = \"signature\"\n\nKWD_CHAR = \"char\"\nKWD_BOOL = \"bool\"\nKWD_REF = \"ref\"\n\nKWD_NAMESPACE = \"namespace\" # Import related keywords\nKWD_IMPORT = \"import\"\n\nKWD_FUN = \"fun\" # Function related keywords\nKWD_D_ERROR = \".error\" # A decorator used to mark a function as an error handler.\nKWD_D_OVERRIDE = \".override\"\nKWD_RETURN = \"return\"\nKWD_IMPL = \"impl\"\nKWD_NORETURN = \"noreturn\" # equivalent to void in c-based languages.\nKWD_MAIN = \"main\" # Used to identify explicit main functions.\nKWD_ARGV = \"argv\" # Implicit main doesn't have a function header. 
The argv keyword is\n # how command line args are referenced when a program has an implicit main.\n\nKWD_ENUM = \"enum\"\nKWD_D_BITFIELD = \".bitfield\"\n\nKWD_STRUCT = \"struct\"\nKWD_FIELD = \"field\"\n\nKWD_CONST = \"const\"\nKWD_GLOBAL = \"global\"\n\nKWD_SWITCH = \"switch\"\nKWD_CASE = \"case\"\nKWD_DEFAULT = \"default\"\n\nKWD_WHILE = \"while\" # All flow control keywords.\nKWD_POST = \"post\"\nKWD_FOR = \"for\"\nKWD_IN = \"in\"\nKWD_TO = \"to\"\n\nKWD_ISOLATE = \"isolate\"\nKWD_USING = \"using\"\nKWD_ISOLATE = \"isolate\"\nKWD_USING = \"using\"\nKWD_WITH = \"with\"\nKWD_AS = \"as\"\n\nKWD_BREAK = \"break\"\nKWD_CONTINUE = \"continue\"\nKWD_RETURN = \"return\"\nKWD_RAISE = \"raise\"\nKWD_LEAVE = \"leave\"\n\nKWD_S_TRUE = \"True\" # Singleton value keywords.\nKWD_S_FALSE = \"False\"\nKWD_S_NONE = \"None\"\n\nKWD_NONRUNTIME = \".nonruntime\" # Directive keywords signifying code to be interpreted during compile time.\nKWD_END = \".end\"\nKWD_AVAILABLE = \".available\" # Declares constants to be allowed for manipulation by nonruntime code.\nKWD_UNAVAILABLE = \".unavailable\" # Declares constants to be unavailable for manipulation by nonruntime code.\nKWD_ANY = \"any\" # Keyword used to declare all constants available to a nonruntime\nKWD_ALL = \"all\"\n\n# A keyword tuple used for determining wheather a certain sequence is a keyword.\nKS_ALL_KWDS = (\n # The keyword operators. 
These include the if, elif, and else also used in if statements.\n KWD_AND, KWD_OR, KWD_NOT, KWD_IF, KWD_ELIF, KWD_ELSE,\n\n KWD_FUN, KWD_D_ERROR, KWD_D_OVERRIDE, KWD_RETURN,\n KWD_NORETURN, KWD_MAIN, KWD_ARGV, KWD_IMPL,\n\n KWD_ENUM, KWD_D_BITFIELD,\n\n KWD_STRUCT, KWD_FIELD,\n\n KWD_CONST, KWD_GLOBAL,\n\n # All flow control statement keywords.\n KWD_SWITCH, KWD_CASE, KWD_DEFAULT, KWD_WHILE,\n KWD_POST, KWD_FOR, KWD_IN, KWD_TO,\n\n KWD_ISOLATE, KWD_USING, KWD_WITH, KWD_AS,\n\n KWD_BREAK, KWD_CONTINUE, KWD_RETURN, KWD_RAISE, KWD_LEAVE,\n\n KWD_I8, KWD_I16, KWD_I32, KWD_I64, KWD_IEXP, KWD_F32,\n KWD_F64, KWD_FEXP, KWD_CHAR, KWD_BOOL, KWD_REF, KWD_STR,\n KWD_STRING, KWD_LIST, KWD_U8, KWD_U16, KWD_U32, KWD_U64,\n KWD_SIGNATURE, KWD_SIG,\n \n KWD_S_TRUE, KWD_S_FALSE, KWD_S_NONE,\n\n KWD_IMPORT, KWD_NAMESPACE,\n\n KWD_NONRUNTIME, KWD_END, KWD_AVAILABLE, KWD_UNAVAILABLE,\n KWD_ANY, KWD_ALL,\n\n KWD_D_ERROR,\n)\n# Tuples of keywords used for identifying specific types of keywords, such as keyword operators.\nKS_LOGICAL_OPERATORS = (KWD_AND, KWD_OR, KWD_NOT, )\nKS_CONDITIONAL_KWDS = (KWD_IF, KWD_ELIF, KWD_ELSE, )\nKS_KWD_OPERATORS = (KWD_AND, KWD_OR, KWD_NOT, KWD_IF, KWD_ELIF, KWD_ELSE, )\n\nKS_TYPE_KWDS = (\n KWD_I8, KWD_I16, KWD_I32, KWD_I64, KWD_IEXP, KWD_F32,\n KWD_F64, KWD_FEXP, KWD_CHAR, KWD_BOOL, KWD_REF,\n KWD_STR, KWD_STRING, KWD_U8, KWD_U16, KWD_U32, KWD_U64,\n)\nKS_NONRUNTIME_RELATED = (KWD_NONRUNTIME, KWD_END, KWD_AVAILABLE, KWD_UNAVAILABLE, )\n\n\nclass TV_ANY: # An instance of TV_ANY in the value field of a Token instance makes the compare method\n # ignore the value field in the comparison.\n\n def __repr__ (self):\n return \"***ANY***\"\n\nclass TV_NONE: # An instance of TV_NONE in the value field of a Token instance makes the compare method\n # always return False.\n\n def __repr__ (self):\n return \"***NONE***\"\n\n# Comparison modes for the the compare method of the Token class\nCM_TYPE_AND_VALUE = mkConst()\nCM_TYPE_ONLY = mkConst()\nCM_VALUE_ONLY = 
mkConst()\nCM_COMPARE_NOTHING = mkConst()\n\nclass Token:\n\n def __init__ (self, tokType, tokVal, startIdx = None, endIdx = None, fileName = None):\n self.tokType = tokType\n self.tokVal = tokVal\n self.startIdx = startIdx\n self.endIdx = endIdx\n self.fileName = fileName\n\n def compare (self, other):\n\n if self.tokType == TT_NONE or type(self.tokVal) is TV_NONE: return False\n if other.tokType == TT_NONE or type(other.tokVal) is TV_NONE: return False\n\n ignoreType = False\n ignoreValue = False\n if other.tokType == TT_ANY or self.tokType == TT_ANY: ignoreType = True\n if type(self.tokVal) is TV_ANY or type(other.tokVal) is TV_ANY: ignoreValue = True\n mode = {\n (False, False) : CM_TYPE_AND_VALUE,\n (False, True ) : CM_TYPE_ONLY,\n (True , False) : CM_VALUE_ONLY,\n (True , True ) : CM_COMPARE_NOTHING,\n }[(ignoreType, ignoreValue)]\n\n if mode == CM_COMPARE_NOTHING:\n\n return True\n\n elif mode == CM_TYPE_AND_VALUE:\n\n if self.tokType != other.tokType:\n return False\n\n if type(self.tokVal) == type(other.tokVal):\n return self.tokVal == other.tokVal\n\n else:\n return False\n\n elif mode == CM_TYPE_ONLY:\n\n return self.tokType == other.tokType\n\n elif mode == CM_VALUE_ONLY:\n\n if type(self.tokVal) == type(other.tokVal):\n return self.tokVal == other.tokVal\n\n else:\n return False\n\n def copy (self):\n return Token(self.tokType, self.tokVal, self.startIdx, self.endIdx, self.fileName)\n\n def __repr__ (self):\n\n return f\"[{TOKEN_TYPE_MAP[self.tokType]}, {self.tokVal}]\"\n\nDEFAULT_ERR_TEMPLATE = \"\"\"Unspecified error in '{}' from line {}, col {} to line {}, col {}.\n\n {}\"\"\"\n\nclass GeneralError: # A superclass from which all my error classes inherit.\n\n def __init__ (self, startPos, endPos, srcCode, fileName, message):\n self.startPos = startPos\n self.endPos = endPos\n self.srcCode = srcCode\n self.fileName = fileName\n self.message = message\n self.template = DEFAULT_ERR_TEMPLATE\n\n def getLineColStartEnd (self):\n linePos = 1\n colPos = 0\n 
startLine = 0\n startCol = 0\n for i, c in enumerate(self.srcCode[:self.endPos + 1]):\n\n if c in CS_NEWLINE:\n linePos += 1\n colPos = 0\n else:\n colPos += 1\n\n if i == self.startPos:\n startLine = linePos\n startCol = colPos\n\n return (startLine,\n startCol,\n linePos, # Since the loop ended on the ending position, these are the correct values\n colPos, # for the end of the error.\n )\n\n def __repr__ (self):\n\n startLine, startCol, endLine, endCol = self.getLineColStartEnd()\n return self.template.format(self.fileName, startLine, startCol, endLine, endCol, self.message)\n\nLEX_ERR_TEMPLATE = \"\"\"A lexing error occurred in '{}' beginning on line {}, col {} and ending on line {}, col {}.\n\n {}\"\"\"\n\nclass LexError (GeneralError):\n def __init__ (self, startPos, endPos, srcCode, fileName, message):\n super().__init__(startPos, endPos, srcCode, fileName, message)\n self.template = LEX_ERR_TEMPLATE\n\nclass Lexer:\n\n # A negative number is never used for indexing in this class. Loops that\n # iterate over the code can just check whether the idx instance variable is\n # greater than or equal to 0.\n IDX_SENTINAL_VALUE = -2048\n\n def __init__(self, srcCode, fileName):\n\n self.srcCode = srcCode\n self.idx = -1\n self.char = None\n self.toNext()\n self.fileName = fileName\n\n def toNext (self): # This is exactly as it needs to be. 
Don't mess with it.\n self.idx += 1\n if self.idx < len(self.srcCode) and self.idx >= 0:\n self.char = self.srcCode[self.idx]\n else:\n self.idx = Lexer.IDX_SENTINAL_VALUE\n self.char = \"\"\n\n def shellReset (self, newSrc): # A method used by the shell to feed a new line of code into the lexer\n self.srcCode = newSrc\n self.idx = -1\n self.char = None\n self.toNext()\n\n def genTokens (self):\n tokens = [Token(TT_NEWLINE, TV_ANY(), 0, 1, self.fileName)]\n\n while self.idx >= 0:\n\n # Whitespace gets ignored here, becoming nothing more than a delimiter.\n if self.char in CS_WHITESPACE:\n pass\n\n elif self.char == '+':\n tokens.append(Token(TT_PLUS, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '-':\n tokens.append(Token(TT_MINUS, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '*':\n aToken = None\n if len(self.srcCode) > self.idx + 1:\n if self.srcCode[self.idx + 1] == '*':\n aToken = Token(TT_DBL_ASTERISK, '**', self.idx, self.idx + 2, self.fileName)\n self.toNext() # Skip over the second asterisk. 
It's already part of this token.\n else:\n aToken = Token(TT_ASTERISK, self.char, self.idx, self.idx + 1, self.fileName)\n else:\n aToken = Token(TT_ASTERISK, self.char, self.idx, self.idx + 1, self.fileName)\n\n tokens.append(aToken)\n\n elif self.char == '/':\n if len(self.srcCode) > self.idx + 1:\n if self.srcCode[self.idx + 1] == '/':\n tokens.append(Token(TT_DBL_FWD_SLASH, '//', self.idx, self.idx + 2, self.fileName))\n else:\n tokens.append(Token(TT_FWD_SLASH, self.char, self.idx, self.idx + 1, self.fileName))\n else:\n tokens.append(Token(TT_FWD_SLASH, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '~':\n tokens.append(Token(TT_TILDE, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '&':\n tokens.append(Token(TT_ANDPERSAND, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '%':\n tokens.append(Token(TT_PERCENT, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '(':\n tokens.append(Token(TT_LPAREN, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == ')':\n tokens.append(Token(TT_RPAREN, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '^':\n tokens.append(Token(TT_CARROT, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '|':\n tokens.append(Token(TT_PIPE_SYM, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '#':\n commentStartIdx = self.idx # Comments are treated as newlines but they are longer than 1.\n self.ignoreComment(multiline = False)\n # Adding a comment line fill. 
This will allow comments to be on their own line without\n # ending to block and will be ignored by the compiler in all other contexts.\n tokens.append(Token(TT_LINE_FILL, '||', commentStartIdx, self.idx + 1, self.fileName))\n # Adding a newline so the parser doesn't see multiple statements on a single line as\n # this would cause a syntax error.\n tokens.append(Token(TT_NEWLINE, '\\n', commentStartIdx, self.idx + 1, self.fileName))\n\n elif self.char == '{':\n if len(self.srcCode) > self.idx + 1:\n if self.srcCode[self.idx + 1] == '*':\n commentStartIdx = self.idx\n self.ignoreComment(multiline = True)\n # Adding a comment line fill. This will allow comments to be on their own line without\n # ending to block and will be ignored by the compiler in all other contexts.\n tokens.append(Token(TT_LINE_FILL, '||', commentStartIdx, self.idx + 1, self.fileName))\n\n else:\n tokens.append(Token(TT_LCURLY, self.char, self.idx, self.idx + 1, self.fileName))\n else:\n tokens.append(Token(TT_LCURLY, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '}':\n if self.srcCode[self.idx - 1] == '*' and self.idx > 0:\n return None, LexError(\n self.idx - 1,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n \"Found an end multiline comment tag '*}' but no accompanying '{*'.\"\n ),\n tokens.append(Token(TT_RCURLY, self.char, self.idx, self.idx + 1, self.fileName))\n \n elif self.char == '[':\n tokens.append(Token(TT_LSQUARE, self.char, self.idx, self.idx + 1, self.fileName))\n \n elif self.char == ']':\n tokens.append(Token(TT_RSQUARE, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '\\\\':\n tokens.append(Token(TT_BACKSLASH, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == ',':\n tokens.append(Token(TT_COMMA, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == ':':\n if self.idx > 0 and self.srcCode[self.idx - 1] == ':' and tokens[-1].tokType != TT_DBL_COLON:\n tokens[-1] = Token(TT_DBL_COLON, 
'::', self.idx, self.idx + 2, self.fileName)\n \n else:\n tokens.append(Token(TT_COLON, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '$':\n tokens.append(Token(TT_DOLLAR_SIGN, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char in CS_NEWLINE + ';':\n tokens.append(Token(TT_NEWLINE, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '=':\n aToken = None\n\n # If there is an operator in the previous index, we want to reassign the last token\n # to be an in place assignment operator.\n if self.idx > 0 and self.srcCode[self.idx - 1] == '+':\n tokens[-1] = Token(TT_PLUS_EQU, '+=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '-':\n tokens[-1] = Token(TT_MINUS_EQU, '-=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 2: self.idx + 1] == '**=':\n tokens[-1] = Token(TT_POW_EQU, '**=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '*':\n tokens[-1] = Token(TT_TIMES_EQU, '*=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 2: self.idx + 1] == '//=':\n tokens[-1] = Token(TT_FLOOR_DIV_EQU, '//=', self.idx - 2, self.idx + 1, self.fileName)\n del tokens[-2]\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '/':\n tokens[-1] = Token(TT_DIV_EQU, '/=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '%':\n tokens[-1] = Token(TT_MOD_EQU, '%=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '^':\n tokens[-1] = Token(TT_XOR_EQU, '^=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '=':\n tokens[-1] = Token(TT_DOUBLE_EQUAL, '==', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 2: self.idx + 1] == '<<=':\n tokens[-1] = 
Token(TT_LEFT_SHIFT_EQU, '<<=', self.idx - 2, self.idx + 1)\n\n elif self.idx > 0 and self.srcCode[self.idx - 2: self.idx + 1] == '>>=':\n tokens[-1] = Token(TT_RIGHT_SHIFT_EQU, '>>=', self.idx - 2, self.idx + 1)\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '|':\n tokens[-1] = Token(TT_OR_EQU, '|=', self.idx - 1, self.idx + 1, self.fileName)\n\n elif self.idx > 0 and self.srcCode[self.idx - 1] == '&':\n tokens[-1] = Token(TT_AND_EQU, '&=', self.idx - 1, self.idx + 1, self.fileName)\n\n else:\n aToken = Token(TT_EQUAL, self.char, self.idx, self.idx + 1, self.fileName)\n\n if aToken: tokens.append(aToken)\n\n elif self.char == '!':\n if len(self.srcCode) > self.idx + 1:\n if self.srcCode[self.idx + 1] == '=':\n tokens.append(Token(TT_NOT_EQU, self.char + '=', self.idx, self.idx + 2, self.fileName))\n self.toNext() # Skip over the equal sign. It's already part of this token.\n else:\n return None, LexError(\n self.idx,\n self.idx + 2,\n self.srcCode,\n self.fileName,\n f\"Illegal token '{self.srcCode[self.idx: self.idx + 2]}'. This character sequence is unrecognized.\"\n ),\n else:\n return None, LexError(\n self.idx,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"An exclamation mark unaccompanied by an = is not a recognized token.\"\n ),\n\n elif self.char == '<':\n aToken = None\n if self.idx + 1 < len(self.srcCode):\n if self.srcCode[self.idx + 1] == '=':\n aToken = Token(TT_LESS_THAN_EQUAL, '<=', self.idx, self.idx + 2, self.fileName)\n self.toNext() # Skip over the equals sign. It's already been processed.\n elif self.srcCode[self.idx + 1] == '<':\n aToken = Token(TT_LEFT_SHIFT, '<<', self.idx, self.idx + 2, self.fileName)\n self.toNext() # Skip over the other <. 
It's already been processed.\n elif self.srcCode[self.idx + 1] == '+':\n aToken = Token(TT_IN_PLACE_INCR, '<+', self.idx, self.idx + 2, self.fileName)\n self.toNext()\n elif self.srcCode[self.idx + 1] == '-':\n aToken = Token(TT_IN_PLACE_DECR, '<-', self.idx, self.idx + 2, self.fileName)\n self.toNext()\n else:\n aToken = Token(TT_LESS_THAN, self.char, self.idx, self.idx + 1, self.fileName)\n else:\n aToken = Token(TT_LESS_THAN, self.char, self.idx, self.idx + 1, self.fileName)\n tokens.append(aToken)\n\n elif self.char == '>':\n aToken = None\n if self.idx + 1 < len(self.srcCode):\n if self.srcCode[self.idx + 1] == '=':\n aToken = Token(TT_GREATER_THAN_EQUAL, '>=', self.idx, self.idx + 2, self.fileName)\n self.toNext() # Skip over the equals sign. It's already been processed.\n elif self.srcCode[self.idx + 1] == '>':\n aToken = Token(TT_RIGHT_SHIFT, '>>', self.idx, self.idx + 2, self.fileName)\n self.toNext()\n else:\n aToken = Token(TT_GREATER_THAN, self.char, self.idx, self.idx + 1, self.fileName)\n else:\n aToken = Token(TT_GREATER_THAN, self.char, self.idx, self.idx + 1, self.fileName)\n tokens.append(aToken)\n\n elif self.char == '^':\n tokens.append(Token(TT_CARROT, self.char, self.idx, self.idx + 1, self.fileName))\n\n elif self.char == '\"':\n\n aToken, err = self.getString()\n\n if err:\n return None, err,\n\n tokens.append(aToken)\n\n elif self.char == \"'\":\n\n aToken, err = self.getChar()\n\n if err:\n return None, err,\n\n tokens.append(aToken)\n\n elif self.char in CS_NUMERALS: # Generate tokens for integers and floating point.\n\n aToken, err = self.intConvert()\n\n if err:\n return None, err,\n\n tokens.append(aToken)\n\n elif self.char in CS_WORD_CHRS + '.': # Generate tokens for identifiers and keywords.\n\n aToken, err = self.getIdentifierOrKeyword()\n\n if err:\n return None, err,\n\n tokens.append(aToken)\n\n else:\n return None, LexError(\n self.idx,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Illegal character '{self.char}'. 
This character is not allowed in the source code.\"\n ),\n\n self.toNext()\n\n tokens.append(Token(TT_END_OF_FILE, \"End of file\", self.idx, self.idx + 1, self.fileName))\n return tokens, None,\n\n def ignoreComment(self, multiline):\n\n # Advance past the comment tag to avoid registering it again. At the initial call,\n # self.idx is pointing to the backslash but the slicing logic inside the ignore loop\n # recognizes end comment tags when self.idx points to the asterisk.\n # {*...\n # ^ We are here.\n self.toNext(); self.toNext(); self.toNext()\n # {*...\n # ^ Now we are here.\n nestingDepth = 0\n while self.idx >= 0:\n\n if multiline:\n if self.srcCode[self.idx - 1: self.idx + 1] == '{*':\n nestingDepth += 1\n self.toNext()\n elif self.srcCode[self.idx - 1: self.idx + 1] == '*}' and nestingDepth > 0:\n nestingDepth -= 1\n self.toNext()\n elif self.srcCode[self.idx - 1: self.idx + 1] == '*}' and nestingDepth <= 0:\n return\n\n else:\n if self.char == '\\n': return\n\n self.toNext()\n\n def getString (self):\n\n strStr = self.char\n isBlockStr = False\n startPos = self.idx\n\n # Determine whether the string is a block string, an inline string, or an empty string.\n self.toNext()\n strStr += self.char\n if strStr == '\"\"': # Could be empty string or block string.\n if len(self.srcCode) > self.idx + 1:\n if self.srcCode[self.idx + 1] == '\"': # Is most definitely a block string.\n self.toNext()\n strStr += self.char\n isBlockStr = True\n else:\n return Token(TT_STRING, \"\", startPos, self.idx + 1, self.fileName), None,\n\n while self.idx >= 0:\n\n # Determine whether to advance or break out of the loop in a manner\n # according to whether the string is a block string or inline.\n if isBlockStr:\n if len(strStr) >= 6: # We wouldn't want to register the first 3 \" as the end of the string.\n if strStr[-3:] == '\"\"\"' and strStr[-4:] != '\\\\\"\"\"':\n break\n else:\n self.toNext()\n else:\n self.toNext()\n else:\n if len(strStr) >= 2: # Let's not register the 
first quotation mark as the end of the string.\n if strStr[-1] == '\"' and strStr[-2:] != '\\\\\"':\n break\n else:\n self.toNext()\n else:\n self.toNext()\n\n strStr += self.char\n\n # Yell and scream at the programmer if they typed an unescaped newline.\n if strStr[-1] == '\\n' and strStr[-2:] != '\\\\\\n' and not isBlockStr:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n \"Unescaped newline found within string.\"\n ),\n\n # Try to evaluate the string and yell and scream at the programmer if the eval failed.\n try:\n strStr = eval(strStr)\n except SyntaxError as err:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid string literal. {err}\"\n ),\n\n return Token(TT_STRING, strStr, startPos, self.idx + 1, self.fileName), None,\n\n def getChar (self):\n\n charStr = self.char\n startPos = self.idx\n self.toNext()\n\n while self.idx >= 0:\n\n charStr += self.char\n\n if len(self.srcCode) > self.idx + 1:\n if self.char != \"'\" or len(charStr) < 2: self.toNext()\n else: break\n else:\n break\n\n if len(charStr) >= 2:\n if charStr[-1] == \"'\" and charStr[-2:] != \"\\\\'\":\n break\n\n # Yell and scream if the character contains an unescaped newline.\n if charStr[-1] == '\\n' and charStr[-2:] != '\\\\\\n':\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n \"Unescaped newline in character literal.\"\n ),\n\n # Try to evaluate the character literal and yell and scream if the eval fails.\n try:\n charStr = eval(charStr)\n except SyntaxError as err:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid character literal. 
{err}\"\n ),\n\n # Yell and scream if the programmer wrote a character literal which evaluates to\n # multiple characters.\n if len(charStr) > 1:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid character literal '{charStr}' evaluates to length greater than one.\"\n ),\n\n return Token(TT_CHAR, charStr, startPos, self.idx + 1, self.fileName), None,\n\n def getIdentifierOrKeyword (self):\n\n # It's unknown whether it's a keyword or identifier at this time, so wordStr it is.\n wordStr = self.char\n dotCount = 0 if wordStr != '.' else 1\n previousCharWasDot = False\n startPos = self.idx\n\n # We'll also want to handle references to struct fields, enum members, and module\n # contents, all of which use the dotted notation, so the dot is included in the\n # characters to be recognized as part of the sequence being converted in this token.\n while self.char in CS_WORD_CHRS + '.' and self.idx >= 0:\n\n if self.idx + 1 < len(self.srcCode):\n if self.srcCode[self.idx + 1] in CS_WORD_CHRS + '.': self.toNext()\n else: break\n else:\n break\n\n wordStr += self.char\n\n # Yell and scream if the user types a member/field reference with 2\n # consecutive dots: For example: 'foo..bar'.\n if previousCharWasDot and self.char == '.':\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Two consecutive dots in the member/field reference '{wordStr}'. The space between these dots must contain a valid identifier.\"\n ),\n\n if self.char == '.':\n previousCharWasDot = True\n dotCount += 1\n else:\n previousCharWasDot = False\n\n # Yell and scream at the programmer if they typed a dot at the end. For example, 'invalid.'.\n if wordStr[-1] == '.':\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n (\n f\"A dot in a member field reference must be proceeded by a valid identifier. 
'{wordStr}' is invalid.\\n\" +\n \" Additionally, the dots in member field references may not be proceeded or preceeded by spaces.\"\n )\n ),\n\n # Yell and scream if the programmer typed a numeral at the beginning of their identifier.\n if wordStr[0] in CS_NUMERALS:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Identifiers cannot begin in numerals! The identifier {wordStr} is invalid.\"\n ),\n\n if dotCount == 0 and wordStr not in KS_ALL_KWDS:\n return Token(TT_IDENTIFIER, wordStr, startPos, self.idx + 1, self.fileName), None,\n elif wordStr in KS_ALL_KWDS:\n return Token(TT_KWD, wordStr, startPos, self.idx + 1, self.fileName), None,\n elif dotCount != 0 and wordStr not in KS_ALL_KWDS:\n \n wordList = wordStr.split('.')\n wordListSansEmptyStrings = []\n for i in wordList:\n if i != \"\":\n wordListSansEmptyStrings.append(i)\n \n wordTuple = tuple(wordListSansEmptyStrings)\n return Token(TT_MEMBER_FIELD_REF, wordTuple, startPos, self.idx + 1, self.fileName), None,\n\n def intConvert (self):\n\n numStr = self.char\n number = 0\n sawDecimal = False\n startPos = self.idx\n\n while self.char in CS_HEX_DIGITS + \"_xXbBdD.\" and self.idx >= 0:\n\n if self.idx + 1 < len(self.srcCode):\n if self.srcCode[self.idx + 1] in CS_HEX_DIGITS + \"_xXbBdD.\": self.toNext()\n else: break\n else:\n break\n\n numStr += self.char\n\n if self.char == '.' and not sawDecimal: sawDecimal = True\n elif sawDecimal and self.char == '.':\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Multiple decimal points in floating point value '{numStr}'. 
There can only be one decimal point.\"\n ),\n\n # Support for explicit decimal numbers:\n if numStr[0:2].upper() == \"0D\":\n try:\n if sawDecimal: number = float(numStr[2:])\n else: number = int(numStr[2:])\n\n except ValueError:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid identifier or integer/float literal '{numStr}'.\"\n ),\n # Yell and scream if the programmer tries to code hexadecimal or binary floating point literals.\n elif numStr[0:2].upper() in [\"0X\", \"0B\"] and sawDecimal:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid floating point literal '{numStr}'. Floating points can only be represented in decimal.\"\n ),\n # Handle the case of hexadecimal integers.\n elif numStr[0:2].upper() == \"0X\":\n try:\n number = int(numStr[2:], 16)\n except ValueError:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid identifier or integer literal '{numStr}'.\"\n ),\n # Handle the case of binary integers.\n elif numStr[0:2].upper() == \"0B\":\n try:\n number = int(numStr[2:], 2)\n except ValueError:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid identifier or integer literal '{numStr}'.\"\n ),\n # If none of the other cases apply, we can just assume it's a standard decimal.\n else:\n try:\n if sawDecimal: number = float(numStr)\n else: number = int(numStr)\n except ValueError:\n return None, LexError(\n startPos,\n self.idx + 1,\n self.srcCode,\n self.fileName,\n f\"Invalid identifier or integer literal '{numStr}'.\"\n ),\n\n # Generate the token and return it.\n if sawDecimal:\n return Token(TT_FLOAT, number, startPos, self.idx + 1, self.fileName), None,\n else:\n return Token(TT_INTEGER, number, startPos, self.idx + 1, self.fileName), None,\n\nSYNTAX_ERR_TEMPLATE = \"\"\"A syntax error occured in {} begininning on line {}, col {} and ending on line {}, col {}.\n\n 
{}\"\"\"\n\nclass SyntxError (GeneralError):\n\n def __init__ (self, startPos, endPos, srcCode, fileName, message):\n super().__init__ (startPos, endPos, srcCode, fileName, message)\n self.template = SYNTAX_ERR_TEMPLATE\n\n\nif __name__ == \"__main__\":\n import sys\n main(sys.argv)\n"
},
{
"alpha_fraction": 0.7840490937232971,
"alphanum_fraction": 0.7840490937232971,
"avg_line_length": 89.55555725097656,
"blob_id": "3d7e40c782a3732e5aca9e6acb8f70255902c077",
"content_id": "538a4d6e6138111a2df6fbc4caaaf8751a79b19a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 815,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 9,
"path": "/README.md",
"repo_name": "keames1/Sugu-compiler-python-prototype",
"src_encoding": "UTF-8",
"text": "# Sugu-compiler-python-prototype\nI've decided to be adventurous and design my own systems language. Maybe I'll make something useful...\n\nThe aim is to create a language that makes system level programming accessible. I'll begin translating the code to c as I finalize \nthe algorithm. For the time being, I have only the lexical analysis working in the Lexer class and though it works, there are some\nchanges and additions I'd like to make. For example, currently I'm using /* c-style comments \n/* with support for nested comments in multiline. */*/ I've decided to go with # for single line comments and {*comment*} for \nmultiline comments. There is also a great deal to get sorted for language features and syntax. Currently, I'm struggling with\ndefining expression syntax in a way that obeys operator precidence.\n"
},
{
"alpha_fraction": 0.5610235929489136,
"alphanum_fraction": 0.5688976645469666,
"avg_line_length": 24.399999618530273,
"blob_id": "06051b8402bb231908b37444ac42fc0f9d5532d2",
"content_id": "f7d8f13b67dfde667079f46a4329f798a9a578d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 508,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 20,
"path": "/sugu_shell.py",
"repo_name": "keames1/Sugu-compiler-python-prototype",
"src_encoding": "UTF-8",
"text": "import sugu_interpreter as sui\n\nVERSION_NUMBER = \"0.0.01\"\nDEV_STATUS = \"Experimental\"\nRELEASE_DATE = \"non-applicable\"\n\ndef main ():\n lex = sui.Lexer(\"\", \"<Shell>\")\n print(f\"Sugu Interpreter Shell version {VERSION_NUMBER} ({DEV_STATUS})\")\n print(f\"Release date: {RELEASE_DATE}\\n\\n\")\n \n while True:\n lex.shellReset(input(\":::: \"))\n tokens, err = lex.genTokens()\n if err:\n print(str(err))\n else:\n print(tokens)\n\nif __name__ == \"__main__\": main()\n"
}
] | 3 |
Anthonyegs/TercerParcialPC | https://github.com/Anthonyegs/TercerParcialPC | 23962fb79552776db809d0ab92352e9b67aeb099 | 1eb425f8ac3a93234febdb9b7c8d5d58c7f01cb8 | d37c782a3507597f16c61d6833c27af2455a1c2d | refs/heads/master | 2023-01-13T05:42:29.498957 | 2020-11-23T04:23:31 | 2020-11-23T04:23:31 | 314951888 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7634408473968506,
"alphanum_fraction": 0.7634408473968506,
"avg_line_length": 17.600000381469727,
"blob_id": "8692cbc5edf8058c3dae1d79aa889acd1000172f",
"content_id": "80ada54e42a094c5e00735ac4d0ffda5223c90d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 93,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 5,
"path": "/juegosApp/apps.py",
"repo_name": "Anthonyegs/TercerParcialPC",
"src_encoding": "UTF-8",
"text": "from django.apps import AppConfig\n\n\nclass JuegosappConfig(AppConfig):\n name = 'juegosApp'\n"
},
{
"alpha_fraction": 0.7860082387924194,
"alphanum_fraction": 0.7860082387924194,
"avg_line_length": 17.69230842590332,
"blob_id": "faa0a33b52ae8005a4e135f42bfd17298db2a01a",
"content_id": "d206f691febf23e1ecd3c56055033100f7cd2491",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 243,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 13,
"path": "/juegos/schema.py",
"repo_name": "Anthonyegs/TercerParcialPC",
"src_encoding": "UTF-8",
"text": "import graphene\nimport juegosApp.schema\n\n\nclass Query(juegosApp.schema.Query, graphene.ObjectType):\n pass\n\n\nclass Mutation(juegosApp.schema.Mutation, graphene.ObjectType):\n pass\n\n\nschema = graphene.Schema(query=Query, mutation=Mutation)\n"
},
{
"alpha_fraction": 0.6960352659225464,
"alphanum_fraction": 0.7356828451156616,
"avg_line_length": 24.22222137451172,
"blob_id": "b2b898a918c3b0a489d78ee46a24e7f34d064004",
"content_id": "c8c72022cebc1c9ba91a27aca7cb42f35876b13c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 227,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 9,
"path": "/juegosApp/models.py",
"repo_name": "Anthonyegs/TercerParcialPC",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n# Create your models here.\n\n\nclass Juego(models.Model):\n nombre = models.CharField(max_length=100)\n descripcion = models.CharField(max_length=200)\n foto = models.CharField(max_length=200)\n"
},
{
"alpha_fraction": 0.6522634029388428,
"alphanum_fraction": 0.6522634029388428,
"avg_line_length": 21.090909957885742,
"blob_id": "87ce5d91364d00bcacda3c4cd69a0c3af571065b",
"content_id": "dd398a9a8635f40e4cce44b3fd29bd638a8283a5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 972,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 44,
"path": "/juegos-frontend/src/app/listar/listar.component.ts",
"repo_name": "Anthonyegs/TercerParcialPC",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit } from '@angular/core';\nimport { from } from 'rxjs';\nimport { Subscription } from 'rxjs'\nimport { Apollo } from 'apollo-angular';\nimport gql from 'graphql-tag';\n//modelo\nimport { Juego } from '../models/Juego.model'\n\nconst juegoQuery = gql`\n query getJuegos {\n juegos {\n id\n nombre\n descripcion\n foto\n }\n}\n`;\n\n@Component({\n selector: 'app-listar',\n templateUrl: './listar.component.html',\n styleUrls: ['./listar.component.css']\n})\nexport class ListarComponent implements OnInit {\n loading: boolean;\n currentJuego: any;\n dataSource: Juego[]\n private querySubscription: Subscription;\n\n constructor(private apollo: Apollo) { }\n\n ngOnInit(): void {\n this.querySubscription = this.apollo.watchQuery<any>({\n query: juegoQuery\n })\n .valueChanges\n .subscribe(({ data, loading }) => {\n this.loading = loading;\n this.dataSource = data.juegos;\n console.log(this.dataSource)\n });\n }\n}\n"
},
{
"alpha_fraction": 0.6540880799293518,
"alphanum_fraction": 0.6540880799293518,
"avg_line_length": 25.907691955566406,
"blob_id": "27683a10407c023f6381c415d98aa18730db015c",
"content_id": "fdea92d37180fb7214f66bf18b61e9612607143a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1749,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 65,
"path": "/juegosApp/schema.py",
"repo_name": "Anthonyegs/TercerParcialPC",
"src_encoding": "UTF-8",
"text": "import graphene\nfrom graphene_django import DjangoObjectType\nfrom .models import Juego\n\n\nclass JuegoType(DjangoObjectType):\n class Meta:\n model = Juego\n\n\nclass Query(graphene.ObjectType):\n juegos = graphene.List(JuegoType)\n\n def resolve_juegos(self, info, **kwargs):\n return Juego.objects.all()\n\n\nclass JuegoInput(graphene.InputObjectType):\n id = graphene.ID()\n nombre = graphene.String()\n descripcion = graphene.String()\n foto = graphene.String()\n\n\nclass CreateJuego(graphene.Mutation):\n class Arguments:\n input = JuegoInput(required=True)\n\n ok = graphene.Boolean()\n juego = graphene.Field(JuegoType)\n\n @staticmethod\n def mutate(root, info, input=None):\n ok = True\n juego_instance = Juego(nombre=input.nombre,\n descripcion=input.descripcion, foto=input.foto)\n juego_instance.save()\n return CreateJuego(ok=ok, juego=juego_instance)\n\n\nclass UpdateJuego(graphene.Mutation):\n class Arguments:\n id = graphene.Int(required=True)\n input = JuegoInput(required=True)\n\n ok = graphene.Boolean()\n juego = graphene.Field(JuegoType)\n\n @staticmethod\n def mutate(root, info, id, input=None):\n ok = False\n juego_instance = Juego.objects.get(pk=id)\n if juego_instance:\n ok = True\n juego_instance.nombre = input.nombre\n juego_instance.descripcion = input.descripcion\n juego_instance.foto = input.foto\n juego_instance.save()\n return UpdateJuego(ok=ok, juego=juego_instance)\n return UpdateJuego(ok=ok, juego=None)\n\n\nclass Mutation(graphene.ObjectType):\n create_juego = CreateJuego.Field()\n update_juego = UpdateJuego.Field()\n"
},
{
"alpha_fraction": 0.5012508034706116,
"alphanum_fraction": 0.501876175403595,
"avg_line_length": 22.688888549804688,
"blob_id": "53f811d20b4722f5b2b377f286cc524003590ee1",
"content_id": "309935e3bdf1b25b3be30d58c6557745b6ae2278",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "TypeScript",
"length_bytes": 3198,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 135,
"path": "/juegos-frontend/src/app/agregar/agregar.component.ts",
"repo_name": "Anthonyegs/TercerParcialPC",
"src_encoding": "UTF-8",
"text": "import { Component, OnInit } from '@angular/core';\nimport { Router, ActivatedRoute, Params } from \"@angular/router\";\nimport { Juego } from '../models/Juego.model'\nimport { Subscription } from 'rxjs'\nimport { Apollo } from 'apollo-angular';\nimport gql from 'graphql-tag';\n\nconst juegoQuery = gql`\n query getJuegos {\n juegos {\n id\n nombre\n descripcion\n foto\n }\n}\n`;\n\n@Component({\n selector: 'app-agregar',\n templateUrl: './agregar.component.html',\n styleUrls: ['./agregar.component.css']\n})\nexport class AgregarComponent implements OnInit {\n titulo: string\n button: string\n buttonColor: string\n idJuego: number\n juego: Juego\n dataSource: Juego[]\n private querySubscription: Subscription;\n\n constructor(\n private _route: ActivatedRoute,\n private _router: Router,\n private apollo: Apollo,\n ) {\n this.juego = new Juego()\n }\n\n\n\n\n guardar() {\n if (this.juego.nombre == \"\" || this.juego.descripcion == \"\" || this.juego.foto == \"\") {\n\n } else {\n if (this.idJuego == 0) {\n const enviarJuego = gql(`\n mutation createJuego {\n createJuego(input:{\n nombre:\"`+ this.juego.nombre + `\"\n descripcion: \"`+ this.juego.descripcion + `\"\n foto:\"`+ this.juego.foto + `\"\n }){\n ok\n juego{\n id\n nombre\n descripcion\n foto\n }\n }\n }\n `);\n this.apollo.mutate({\n mutation: enviarJuego\n }).subscribe(() => {\n this._router.navigate(['/listar-juegos'])\n });\n\n }\n else {\n const enviarJuego = gql(`\n mutation createJuego {\n updateJuego(id:`+ this.idJuego + `,input:{\n nombre:\"`+ this.juego.nombre + `\"\n descripcion: \"`+ this.juego.descripcion + `\"\n foto:\"`+ this.juego.foto + `\"\n }){\n ok\n juego{\n id\n nombre\n descripcion\n foto\n }\n }\n }\n `);\n this.apollo.mutate({\n mutation: enviarJuego\n }).subscribe(() => {\n this._router.navigate(['/listar-juegos'])\n });\n }\n }\n }\n\n ngOnInit(): void {\n this._route.paramMap.subscribe(params => {\n this.idJuego = +params.get('id');\n });\n if (this.idJuego == 0) {\n 
this.titulo = \"Nuevo juego\"\n this.button = \"Crear\"\n this.buttonColor = \"success\"\n this.juego.nombre = \"\"\n this.juego.descripcion = \"\"\n this.juego.foto = \"\"\n }\n else {\n this.titulo = \"Modificar juego\"\n this.button = \"Modificar\"\n this.buttonColor = \"warning\"\n this.querySubscription = this.apollo.watchQuery<any>({\n query: juegoQuery\n })\n .valueChanges\n .subscribe(({ data }) => {\n this.dataSource = data.juegos;\n this.dataSource.map(d => {\n if (d.id == this.idJuego) {\n this.juego.nombre = d.nombre\n this.juego.descripcion = d.descripcion\n this.juego.foto = d.foto\n }\n })\n\n });\n\n }\n }\n\n}\n"
}
] | 6 |
mfagundes/djang-leaflet-test | https://github.com/mfagundes/djang-leaflet-test | b61c0124a27f10b5cc8f993206e83a47547ad30c | eef5f7dd14ac541a62ba7318a8493dee209cfec3 | 4de00a76a3b69bf7cba373af7f39e9d30ffded86 | refs/heads/master | 2023-05-01T14:49:09.129238 | 2021-05-07T10:57:11 | 2021-05-07T10:57:11 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5208556056022644,
"alphanum_fraction": 0.5572192668914795,
"avg_line_length": 31.241378784179688,
"blob_id": "d9bf062439f94c9d5f8a52d446900e35b8d2c37d",
"content_id": "00620d3bd90454576a7882b79e00f1b0986702c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 937,
"license_type": "no_license",
"max_line_length": 114,
"num_lines": 29,
"path": "/map_proj/core/migrations/0007_auto_20210425_0003.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.7 on 2021-04-25 00:03\n\nfrom django.db import migrations, models\nimport djgeojson.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0006_auto_20210318_1409'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Fenomenos',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('name', models.CharField(max_length=100, verbose_name='Fenomeno mapeado')),\n ('data', models.DateField(verbose_name='Data da observação')),\n ('hora', models.TimeField()),\n ('longitude', models.FloatField()),\n ('latitude', models.FloatField()),\n ('geom', djgeojson.fields.PointField()),\n ],\n ),\n migrations.DeleteModel(\n name='FloraOccurrence',\n ),\n ]\n"
},
{
"alpha_fraction": 0.49853482842445374,
"alphanum_fraction": 0.5265326499938965,
"avg_line_length": 41.76302719116211,
"blob_id": "c2755d120177d93a4d9ee0eeb9879c915959806c",
"content_id": "74f932ed66877292c04b7bf9da6b115069bb425a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 34467,
"license_type": "no_license",
"max_line_length": 844,
"num_lines": 806,
"path": "/.djleaflet/lib/python3.9/site-packages/djgeojson/tests.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "from __future__ import unicode_literals\n\nimport json\n\nimport django\nfrom django.test import TestCase\nfrom django.conf import settings\nfrom django.core import serializers\nfrom django.core.exceptions import ValidationError, SuspiciousOperation\nfrom django.contrib.gis.db import models\nfrom django.contrib.gis.geos import LineString, Point, GeometryCollection\nfrom django.utils.encoding import smart_str\n\nfrom .templatetags.geojson_tags import geojsonfeature\nfrom .serializers import Serializer\nfrom .views import GeoJSONLayerView, TiledGeoJSONLayerView\nfrom .fields import GeoJSONField, GeoJSONFormField, GeoJSONValidator\n\n\nsettings.SERIALIZATION_MODULES = {'geojson': 'djgeojson.serializers'}\n\n\nclass PictureMixin(object):\n\n @property\n def picture(self):\n return 'image.png'\n\n\nclass Country(models.Model):\n label = models.CharField(max_length=20)\n geom = models.PolygonField(spatial_index=False, srid=4326)\n\n if django.VERSION < (1, 9):\n objects = models.GeoManager()\n\n def natural_key(self):\n return self.label\n\n\nclass Route(PictureMixin, models.Model):\n name = models.CharField(max_length=20)\n geom = models.LineStringField(spatial_index=False, srid=4326)\n countries = models.ManyToManyField(Country)\n\n def natural_key(self):\n return self.name\n\n @property\n def upper_name(self):\n return self.name.upper()\n\n if django.VERSION < (1, 9):\n objects = models.GeoManager()\n\n\nclass Sign(models.Model):\n label = models.CharField(max_length=20)\n route = models.ForeignKey(Route, related_name='signs', on_delete=models.PROTECT)\n\n def natural_key(self):\n return self.label\n\n @property\n def geom(self):\n return self.route.geom.centroid\n\n\nclass GeoJsonDeSerializerTest(TestCase):\n\n def test_basic(self):\n input_geojson = \"\"\"\n {\"type\": \"FeatureCollection\",\n \"features\": [\n { \"type\": \"Feature\",\n \"properties\": {\"model\": \"djgeojson.route\", \"name\": \"green\", \"upper_name\": \"RED\"},\n \"id\": 1,\n 
\"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [\n [0.0, 0.0],\n [1.0, 1.0]\n ]\n }\n },\n { \"type\": \"Feature\",\n \"properties\": {\"model\": \"djgeojson.route\", \"name\": \"blue\"},\n \"id\": 2,\n \"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [\n [0.0, 0.0],\n [1.0, 1.0]\n ]\n }\n }\n ]}\"\"\"\n\n # Deserialize into a list of objects\n objects = list(serializers.deserialize('geojson', input_geojson))\n\n # Were three objects deserialized?\n self.assertEqual(len(objects), 2)\n\n # Did the objects deserialize correctly?\n self.assertEqual(objects[1].object.name, \"blue\")\n self.assertEqual(objects[0].object.upper_name, \"GREEN\")\n self.assertEqual(objects[0].object.geom,\n LineString((0.0, 0.0), (1.0, 1.0), srid=objects[0].object.geom.srid))\n\n def test_with_model_name_passed_as_argument(self):\n input_geojson = \"\"\"\n {\"type\": \"FeatureCollection\",\n \"features\": [\n { \"type\": \"Feature\",\n \"properties\": {\"name\": \"bleh\"},\n \"id\": 24,\n \"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [\n [1, 2],\n [42, 3]\n ]\n }\n }\n ]}\"\"\"\n\n my_object = list(serializers.deserialize(\n 'geojson', input_geojson, model_name='djgeojson.route'))[0].object\n\n self.assertEqual(my_object.name, \"bleh\")\n\n\nclass GeoJsonSerializerTest(TestCase):\n\n def test_basic(self):\n # Stuff to serialize\n route1 = Route.objects.create(\n name='green', geom=\"LINESTRING (0 0, 1 1)\")\n route2 = Route.objects.create(\n name='blue', geom=\"LINESTRING (0 0, 1 1)\")\n route3 = Route.objects.create(name='red', geom=\"LINESTRING (0 0, 1 1)\")\n\n actual_geojson = json.loads(serializers.serialize(\n 'geojson', Route.objects.all(), properties=['name']))\n self.assertEqual(\n actual_geojson, {\"crs\": {\"type\": \"link\", \"properties\": {\"href\": \"http://spatialreference.org/ref/epsg/4326/\", \"type\": \"proj4\"}}, \"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 
0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"name\": \"green\"}, \"id\": route1.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"name\": \"blue\"}, \"id\": route2.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"name\": \"red\"}, \"id\": route3.pk}]})\n actual_geojson_with_prop = json.loads(\n serializers.serialize(\n 'geojson', Route.objects.all(),\n properties=['name', 'upper_name', 'picture']))\n self.assertEqual(actual_geojson_with_prop,\n {\"crs\": {\"type\": \"link\", \"properties\": {\"href\": \"http://spatialreference.org/ref/epsg/4326/\", \"type\": \"proj4\"}}, \"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"picture\": \"image.png\", \"model\": \"djgeojson.route\", \"upper_name\": \"GREEN\", \"name\": \"green\"}, \"id\": route1.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"picture\": \"image.png\", \"model\": \"djgeojson.route\", \"upper_name\": \"BLUE\", \"name\": \"blue\"}, \"id\": route2.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"picture\": \"image.png\", \"model\": \"djgeojson.route\", \"upper_name\": \"RED\", \"name\": \"red\"}, \"id\": route3.pk}]})\n\n def test_precision(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n [{'geom': 'SRID=2154;POINT (1 1)'}], precision=2, crs=False))\n self.assertEqual(\n features, {\"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"Point\", \"coordinates\": [-1.36, -5.98]}, \"type\": 
\"Feature\", \"properties\": {}}]})\n\n def test_simplify(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n [{'geom': 'SRID=4326;LINESTRING (1 1, 1.5 1, 2 3, 3 3)'}], simplify=0.5, crs=False))\n self.assertEqual(\n features, {\"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[1.0, 1.0], [2.0, 3.0], [3.0, 3.0]]}, \"type\": \"Feature\", \"properties\": {}}]})\n\n def test_force2d(self):\n serializer = Serializer()\n features2d = json.loads(serializer.serialize(\n [{'geom': 'SRID=4326;POINT Z (1 2 3)'}],\n force2d=True, crs=False))\n self.assertEqual(\n features2d, {\"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"Point\", \"coordinates\": [1.0, 2.0]}, \"type\": \"Feature\", \"properties\": {}}]})\n\n def test_named_crs(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n [{'geom': 'SRID=4326;POINT (1 2)'}],\n crs_type=\"name\"))\n self.assertEqual(\n features['crs'], {\"type\": \"name\", \"properties\": {\"name\": \"EPSG:4326\"}})\n\n def test_misspelled_named_crs(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n [{'geom': 'SRID=4326;POINT (1 2)'}],\n crs_type=\"named\"))\n self.assertEqual(\n features['crs'], {\"type\": \"link\", \"properties\": {\"href\": \"http://spatialreference.org/ref/epsg/4326/\", \"type\": \"proj4\"}})\n\n def test_pk_property(self):\n route = Route.objects.create(name='red', geom=\"LINESTRING (0 0, 1 1)\")\n serializer = Serializer()\n features2d = json.loads(serializer.serialize(\n Route.objects.all(), properties=['id'], crs=False))\n self.assertEqual(\n features2d, {\"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"id\": route.pk}, \"id\": route.pk}]})\n\n def test_geometry_property(self):\n class 
Basket(models.Model):\n\n @property\n def geom(self):\n return GeometryCollection(LineString((3, 4, 5), (6, 7, 8)), Point(1, 2, 3), srid=4326)\n\n serializer = Serializer()\n features = json.loads(\n serializer.serialize([Basket()], crs=False, force2d=True))\n expected_content = {\"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"GeometryCollection\", \"geometries\": [{\"type\": \"LineString\", \"coordinates\": [[3.0, 4.0], [6.0, 7.0]]}, {\"type\": \"Point\", \"coordinates\": [1.0, 2.0]}]}, \"type\": \"Feature\", \"properties\": {\"id\": None}}]}\n self.assertEqual(features, expected_content)\n\n def test_none_geometry(self):\n class Empty(models.Model):\n geom = None\n serializer = Serializer()\n features = json.loads(serializer.serialize([Empty()], crs=False))\n self.assertEqual(\n features, {\n \"type\": \"FeatureCollection\",\n \"features\": [{\n \"geometry\": None,\n \"type\": \"Feature\",\n \"properties\": {\"id\": None}}]\n })\n\n def test_bbox_auto(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize([{'geom': 'SRID=4326;LINESTRING (1 1, 3 3)'}],\n bbox_auto=True, crs=False))\n self.assertEqual(\n features, {\n \"type\": \"FeatureCollection\",\n \"features\": [{\n \"geometry\": {\"type\": \"LineString\", \"coordinates\": [[1.0, 1.0], [3.0, 3.0]]},\n \"type\": \"Feature\",\n \"properties\": {},\n \"bbox\": [1.0, 1.0, 3.0, 3.0]\n }]\n })\n\n\nclass ForeignKeyTest(TestCase):\n\n def setUp(self):\n self.route = Route.objects.create(\n name='green', geom=\"LINESTRING (0 0, 1 1)\")\n Sign(label='A', route=self.route).save()\n\n def test_serialize_foreign(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(Sign.objects.all(), properties=['route']))\n self.assertEqual(\n features, {\"crs\": {\"type\": \"link\", \"properties\": {\"href\": \"http://spatialreference.org/ref/epsg/4326/\", \"type\": \"proj4\"}}, \"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": 
\"Point\", \"coordinates\": [0.5, 0.5]}, \"type\": \"Feature\", \"properties\": {\"route\": 1, \"model\": \"djgeojson.sign\"}, \"id\": self.route.pk}]})\n\n def test_serialize_foreign_natural(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n Sign.objects.all(), use_natural_keys=True, properties=['route']))\n self.assertEqual(\n features, {\"crs\": {\"type\": \"link\", \"properties\": {\"href\": \"http://spatialreference.org/ref/epsg/4326/\", \"type\": \"proj4\"}}, \"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"Point\", \"coordinates\": [0.5, 0.5]}, \"type\": \"Feature\", \"properties\": {\"route\": \"green\", \"model\": \"djgeojson.sign\"}, \"id\": self.route.pk}]})\n\n\nclass ManyToManyTest(TestCase):\n\n def setUp(self):\n country1 = Country(label='C1', geom=\"POLYGON ((0 0,1 1,0 2,0 0))\")\n country1.save()\n country2 = Country(label='C2', geom=\"POLYGON ((0 0,1 1,0 2,0 0))\")\n country2.save()\n\n self.route1 = Route.objects.create(\n name='green', geom=\"LINESTRING (0 0, 1 1)\")\n self.route2 = Route.objects.create(\n name='blue', geom=\"LINESTRING (0 0, 1 1)\")\n self.route2.countries.add(country1)\n self.route3 = Route.objects.create(\n name='red', geom=\"LINESTRING (0 0, 1 1)\")\n self.route3.countries.add(country1)\n self.route3.countries.add(country2)\n\n def test_serialize_manytomany(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n Route.objects.all(), properties=['countries']))\n self.assertEqual(\n features, {\"crs\": {\"type\": \"link\", \"properties\": {\"href\": \"http://spatialreference.org/ref/epsg/4326/\", \"type\": \"proj4\"}}, \"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"countries\": []}, \"id\": self.route1.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 
1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"countries\": [1]}, \"id\": self.route2.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"countries\": [1, 2]}, \"id\": self.route3.pk}]})\n\n def test_serialize_manytomany_natural(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n Route.objects.all(), use_natural_keys=True, properties=['countries']))\n self.assertEqual(\n features, {\"crs\": {\"type\": \"link\", \"properties\": {\"href\": \"http://spatialreference.org/ref/epsg/4326/\", \"type\": \"proj4\"}}, \"type\": \"FeatureCollection\", \"features\": [{\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"countries\": []}, \"id\": self.route1.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"countries\": [\"C1\"]}, \"id\": self.route2.pk}, {\"geometry\": {\"type\": \"LineString\", \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]}, \"type\": \"Feature\", \"properties\": {\"model\": \"djgeojson.route\", \"countries\": [\"C1\", \"C2\"]}, \"id\": self.route3.pk}]})\n\n\nclass ReverseForeignkeyTest(TestCase):\n\n def setUp(self):\n self.route = Route(name='green', geom=\"LINESTRING (0 0, 1 1)\")\n self.route.save()\n self.sign1 = Sign.objects.create(label='A', route=self.route)\n self.sign2 = Sign.objects.create(label='B', route=self.route)\n self.sign3 = Sign.objects.create(label='C', route=self.route)\n\n def test_relation_set(self):\n self.assertEqual(len(self.route.signs.all()), 3)\n\n def test_serialize_reverse(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n Route.objects.all(), properties=['signs']))\n self.assertEqual(\n features, {\n 
\"crs\": {\n \"type\": \"link\", \"properties\": {\n \"href\": \"http://spatialreference.org/ref/epsg/4326/\",\n \"type\": \"proj4\"\n }\n },\n \"type\": \"FeatureCollection\",\n \"features\": [{\n \"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]\n },\n \"type\": \"Feature\",\n \"properties\": {\n \"model\": \"djgeojson.route\",\n \"signs\": [\n self.sign1.pk,\n self.sign2.pk,\n self.sign3.pk]},\n \"id\": self.route.pk\n }]\n })\n\n def test_serialize_reverse_natural(self):\n serializer = Serializer()\n features = json.loads(serializer.serialize(\n Route.objects.all(), use_natural_keys=True, properties=['signs']))\n self.assertEqual(\n features, {\n \"crs\": {\n \"type\": \"link\",\n \"properties\": {\n \"href\": \"http://spatialreference.org/ref/epsg/4326/\",\n \"type\": \"proj4\"\n }\n },\n \"type\": \"FeatureCollection\",\n \"features\": [{\n \"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]},\n \"type\": \"Feature\",\n \"properties\": {\n \"model\": \"djgeojson.route\",\n \"signs\": [\"A\", \"B\", \"C\"]},\n \"id\": self.route.pk\n }]\n })\n\n\nclass GeoJsonTemplateTagTest(TestCase):\n\n def setUp(self):\n self.route1 = Route.objects.create(name='green',\n geom=\"LINESTRING (0 0, 1 1)\")\n self.route2 = Route.objects.create(name='blue',\n geom=\"LINESTRING (0 0, 1 1)\")\n self.route3 = Route.objects.create(name='red',\n geom=\"LINESTRING (0 0, 1 1)\")\n\n def test_templatetag_renders_single_object(self):\n feature = json.loads(geojsonfeature(self.route1))\n self.assertEqual(\n feature, {\n \"crs\": {\n \"type\": \"link\",\n \"properties\": {\n \"href\": \"http://spatialreference.org/ref/epsg/4326/\",\n \"type\": \"proj4\"\n }\n },\n \"type\": \"FeatureCollection\",\n \"features\": [{\n \"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]},\n \"type\": \"Feature\", \"properties\": {}}]\n })\n\n def test_templatetag_renders_queryset(self):\n feature = 
json.loads(geojsonfeature(Route.objects.all()))\n self.assertEqual(\n feature, {\n \"crs\": {\n \"type\": \"link\", \"properties\": {\n \"href\": \"http://spatialreference.org/ref/epsg/4326/\",\n \"type\": \"proj4\"\n }\n },\n \"type\": \"FeatureCollection\",\n \"features\": [\n {\n \"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]\n },\n \"type\": \"Feature\",\n \"properties\": {\n \"model\": \"djgeojson.route\"\n },\n \"id\": self.route1.pk\n },\n {\n \"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]\n },\n \"type\": \"Feature\",\n \"properties\": {\"model\": \"djgeojson.route\"},\n \"id\": self.route2.pk\n },\n {\n \"geometry\": {\"type\": \"LineString\",\n \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]},\n \"type\": \"Feature\",\n \"properties\": {\"model\": \"djgeojson.route\"},\n \"id\": self.route3.pk\n }\n ]\n })\n\n def test_template_renders_geometry(self):\n feature = json.loads(geojsonfeature(self.route1.geom))\n self.assertEqual(\n feature, {\n \"geometry\": {\"type\": \"LineString\",\n \"coordinates\": [[0.0, 0.0], [1.0, 1.0]]},\n \"type\": \"Feature\", \"properties\": {}\n })\n\n def test_property_can_be_specified(self):\n features = json.loads(geojsonfeature(self.route1,\n \"name\"))\n feature = features['features'][0]\n self.assertEqual(feature['properties']['name'],\n self.route1.name)\n\n def test_several_properties_can_be_specified(self):\n features = json.loads(geojsonfeature(self.route1,\n \"name,id\"))\n feature = features['features'][0]\n self.assertEqual(feature['properties'],\n {'name': self.route1.name,\n 'id': self.route1.id})\n\n def test_srid_can_be_specified(self):\n feature = json.loads(geojsonfeature(self.route1.geom, \"::2154\"))\n self.assertEqual(feature['geometry']['coordinates'],\n [[253531.1305237495, 909838.9305578759],\n [406035.7627716485, 1052023.2925472297]])\n\n def test_geom_field_name_can_be_specified(self):\n features = 
json.loads(geojsonfeature(self.route1, \":geom\"))\n feature = features['features'][0]\n self.assertEqual(feature['geometry']['coordinates'],\n [[0.0, 0.0], [1.0, 1.0]])\n\n def test_geom_field_raises_attributeerror_if_unknown(self):\n self.assertRaises(AttributeError, geojsonfeature, self.route1, \":geo\")\n\n\nclass ViewsTest(TestCase):\n\n def setUp(self):\n self.route = Route.objects.create(\n name='green', geom=\"LINESTRING (0 0, 1 1)\")\n Sign(label='A', route=self.route).save()\n\n def test_view_default_options(self):\n view = GeoJSONLayerView(model=Route)\n view.object_list = []\n response = view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['geometry']['coordinates'],\n [[0.0, 0.0], [1.0, 1.0]])\n\n def test_view_can_control_properties(self):\n class FullGeoJSON(GeoJSONLayerView):\n properties = ['name']\n view = FullGeoJSON(model=Route)\n view.object_list = []\n response = view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['properties']['name'],\n 'green')\n\n def test_view_foreign(self):\n class FullGeoJSON(GeoJSONLayerView):\n properties = ['label', 'route']\n view = FullGeoJSON(model=Sign)\n view.object_list = []\n response = view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['properties']['route'],\n 1)\n\n def test_view_foreign_natural(self):\n class FullGeoJSON(GeoJSONLayerView):\n properties = ['label', 'route']\n use_natural_keys = True\n view = FullGeoJSON(model=Sign)\n view.object_list = []\n response = view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['properties']['route'],\n 'green')\n\n\nclass TileEnvelopTest(TestCase):\n def setUp(self):\n self.view = TiledGeoJSONLayerView()\n\n def 
test_raises_error_if_not_spherical_mercator(self):\n self.view.tile_srid = 2154\n self.assertRaises(AssertionError, self.view.tile_coord, 0, 0, 0)\n\n def test_origin_is_north_west_for_tile_0(self):\n self.assertEqual((-180.0, 85.0511287798066),\n self.view.tile_coord(0, 0, 0))\n\n def test_origin_is_center_for_middle_tile(self):\n self.assertEqual((0, 0), self.view.tile_coord(8, 8, 4))\n\n\nclass TiledGeoJSONViewTest(TestCase):\n def setUp(self):\n self.view = TiledGeoJSONLayerView(model=Route)\n self.view.args = []\n self.r1 = Route.objects.create(geom=LineString((0, 1), (10, 1)))\n self.r2 = Route.objects.create(geom=LineString((0, -1), (-10, -1)))\n\n def test_view_with_kwargs(self):\n self.view.kwargs = {'z': 4,\n 'x': 8,\n 'y': 7}\n response = self.view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['geometry']['coordinates'], [[0.0, 1.0], [10.0, 1.0]])\n\n def test_view_with_kwargs_wrong_type_z(self):\n self.view.kwargs = {'z': 'a',\n 'x': 8,\n 'y': 7}\n self.assertRaises(SuspiciousOperation,\n self.view.render_to_response,\n context={})\n\n def test_view_with_kwargs_wrong_type_x(self):\n self.view.kwargs = {'z': 1,\n 'x': 'a',\n 'y': 7}\n self.assertRaises(SuspiciousOperation,\n self.view.render_to_response,\n context={})\n\n def test_view_with_kwargs_wrong_type_y(self):\n self.view.kwargs = {'z': 4,\n 'x': 8,\n 'y': 'a'}\n self.assertRaises(SuspiciousOperation,\n self.view.render_to_response,\n context={})\n\n def test_view_with_kwargs_no_z(self):\n self.view.kwargs = {'x': 8,\n 'y': 7}\n self.assertRaises(SuspiciousOperation,\n self.view.render_to_response,\n context={})\n\n def test_view_with_kwargs_no_x(self):\n self.view.kwargs = {'z': 8,\n 'y': 7}\n self.assertRaises(SuspiciousOperation,\n self.view.render_to_response,\n context={})\n\n def test_view_with_kwargs_no_y(self):\n self.view.kwargs = {'x': 8,\n 'z': 7}\n self.assertRaises(SuspiciousOperation,\n 
self.view.render_to_response,\n context={})\n\n def test_view_is_serialized_as_geojson(self):\n self.view.args = [4, 8, 7]\n response = self.view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['geometry']['coordinates'],\n [[0.0, 1.0], [10.0, 1.0]])\n\n def test_view_trims_to_geometries_boundaries(self):\n self.view.args = [8, 128, 127]\n response = self.view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['geometry']['coordinates'],\n [[0.0, 1.0], [1.40625, 1.0]])\n\n def test_geometries_trim_can_be_disabled(self):\n self.view.args = [8, 128, 127]\n self.view.trim_to_boundary = False\n response = self.view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['features'][0]['geometry']['coordinates'],\n [[0.0, 1.0], [10.0, 1.0]])\n\n def test_tile_extent_is_provided_in_collection(self):\n self.view.args = [8, 128, 127]\n response = self.view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(geojson['bbox'],\n [0.0, 0.0, 1.40625, 1.4061088354351565])\n\n def test_url_parameters_are_converted_to_int(self):\n self.view.args = ['0', '0', '0']\n self.assertEqual(2, len(self.view.get_queryset()))\n\n def test_zoom_0_queryset_contains_all(self):\n self.view.args = [0, 0, 0]\n self.assertEqual(2, len(self.view.get_queryset()))\n\n def test_zoom_4_filters_by_tile_extent(self):\n self.view.args = [4, 8, 7]\n self.assertEqual([self.r1], list(self.view.get_queryset()))\n\n def test_some_tiles_have_empty_queryset(self):\n self.view.args = [4, 6, 8]\n self.assertEqual(0, len(self.view.get_queryset()))\n\n def test_simplification_depends_on_zoom_level(self):\n self.view.simplifications = {6: 100}\n self.view.args = [6, 8, 4]\n self.view.get_queryset()\n self.assertEqual(self.view.simplify, 100)\n\n def 
test_simplification_is_default_if_not_specified(self):\n self.view.simplifications = {}\n self.view.args = [0, 8, 4]\n self.view.get_queryset()\n self.assertEqual(self.view.simplify, None)\n\n def test_simplification_takes_the_closest_upper_level(self):\n self.view.simplifications = {3: 100, 6: 200}\n self.view.args = [4, 8, 4]\n self.view.get_queryset()\n self.assertEqual(self.view.simplify, 200)\n\n\nclass FixedSridPoint(models.Model):\n\n geom = models.PointField(srid=28992)\n\n\nclass TiledGeoJSONViewFixedSridTest(TestCase):\n def setUp(self):\n self.view = TiledGeoJSONLayerView(model=FixedSridPoint)\n self.view.args = []\n self.p1 = FixedSridPoint.objects.create(geom=Point(253286, 531490))\n self.p2 = FixedSridPoint.objects.create(geom=Point(253442, 532897))\n\n def test_within_viewport(self):\n self.view.args = [12, 2125, 1338]\n response = self.view.render_to_response(context={})\n geojson = json.loads(smart_str(response.content))\n self.assertEqual(len(geojson['features']), 2)\n self.assertEqual(geojson['features'][0]['geometry']['coordinates'],\n [6.843322039261242, 52.76181518632031])\n self.assertEqual(geojson['features'][1]['geometry']['coordinates'],\n [6.846053318324978, 52.77442791046052])\n\n\nclass Address(models.Model):\n geom = GeoJSONField()\n\n\nclass ModelFieldTest(TestCase):\n def setUp(self):\n self.address = Address()\n self.address.geom = {'type': 'Point', 'coordinates': [0, 0]}\n self.address.save()\n\n def test_models_can_have_geojson_fields(self):\n saved = Address.objects.get(id=self.address.id)\n if isinstance(saved.geom, dict):\n self.assertDictEqual(saved.geom, self.address.geom)\n else:\n # Django 1.8 !\n self.assertEqual(json.loads(saved.geom.geojson), self.address.geom)\n\n def test_default_form_field_is_geojsonfield(self):\n field = self.address._meta.get_field('geom').formfield()\n self.assertTrue(isinstance(field, GeoJSONFormField))\n\n def test_default_form_field_has_geojson_validator(self):\n field = 
self.address._meta.get_field('geom').formfield()\n validator = field.validators[0]\n self.assertTrue(isinstance(validator, GeoJSONValidator))\n\n def test_form_field_raises_if_invalid_type(self):\n field = self.address._meta.get_field('geom').formfield()\n self.assertRaises(ValidationError, field.clean,\n {'type': 'FeatureCollection', 'foo': 'bar'})\n\n def test_form_field_raises_if_type_missing(self):\n field = self.address._meta.get_field('geom').formfield()\n self.assertRaises(ValidationError, field.clean,\n {'foo': 'bar'})\n\n def test_field_can_be_serialized(self):\n serializer = Serializer()\n geojson = serializer.serialize(Address.objects.all(), crs=False)\n features = json.loads(geojson)\n self.assertEqual(\n features, {\n 'type': u'FeatureCollection',\n 'features': [{\n 'id': self.address.id,\n 'type': 'Feature',\n 'geometry': {'type': 'Point', 'coordinates': [0, 0]},\n 'properties': {\n 'model': 'djgeojson.address'\n }\n }]\n })\n\n def test_field_can_be_deserialized(self):\n input_geojson = \"\"\"\n {\"type\": \"FeatureCollection\",\n \"features\": [\n { \"type\": \"Feature\",\n \"properties\": {\"model\": \"djgeojson.address\"},\n \"id\": 1,\n \"geometry\": {\n \"type\": \"Point\",\n \"coordinates\": [0.0, 0.0]\n }\n }\n ]}\"\"\"\n objects = list(serializers.deserialize('geojson', input_geojson))\n self.assertEqual(objects[0].object.geom,\n {'type': 'Point', 'coordinates': [0, 0]})\n\n def test_model_can_be_omitted(self):\n serializer = Serializer()\n geojson = serializer.serialize(Address.objects.all(),\n with_modelname=False)\n features = json.loads(geojson)\n self.assertEqual(\n features, {\n \"crs\": {\n \"type\": \"link\",\n \"properties\": {\n \"href\": \"http://spatialreference.org/ref/epsg/4326/\",\n \"type\": \"proj4\"\n }\n },\n 'type': 'FeatureCollection',\n 'features': [{\n 'id': self.address.id,\n 'type': 'Feature',\n 'geometry': {'type': 'Point', 'coordinates': [0, 0]},\n 'properties': {}\n }]\n })\n\n\nclass 
GeoJSONValidatorTest(TestCase):\n def test_validator_raises_if_missing_type(self):\n validator = GeoJSONValidator('GEOMETRY')\n self.assertRaises(ValidationError, validator, {'foo': 'bar'})\n\n def test_validator_raises_if_type_is_wrong(self):\n validator = GeoJSONValidator('GEOMETRY')\n self.assertRaises(ValidationError, validator,\n {'type': 'FeatureCollection',\n 'features': []})\n\n def test_validator_succeeds_if_type_matches(self):\n validator = GeoJSONValidator('POINT')\n self.assertIsNone(validator({'type': 'Point', 'coords': [0, 0]}))\n\n def test_validator_succeeds_if_type_is_generic(self):\n validator = GeoJSONValidator('GEOMETRY')\n self.assertIsNone(validator({'type': 'Point', 'coords': [0, 0]}))\n self.assertIsNone(validator({'type': 'LineString', 'coords': [0, 0]}))\n self.assertIsNone(validator({'type': 'Polygon', 'coords': [0, 0]}))\n\n def test_validator_fails_if_type_does_not_match(self):\n validator = GeoJSONValidator('POINT')\n self.assertRaises(ValidationError, validator,\n {'type': 'LineString', 'coords': [0, 0]})\n"
},
{
"alpha_fraction": 0.5411764979362488,
"alphanum_fraction": 0.6141176223754883,
"avg_line_length": 21.36842155456543,
"blob_id": "bc816cb0707ee5fe6b7a1143a664545895e0bd32",
"content_id": "752dd68228596ddcf91f7023592b32bef2cbba1e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 425,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 19,
"path": "/map_proj/core/migrations/0004_auto_20210318_1402.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.7 on 2021-03-18 14:02\n\nfrom django.db import migrations\nimport djgeojson.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0003_auto_20210318_1400'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='floraoccurrence',\n name='geom',\n field=djgeojson.fields.PointField(blank=True, null=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.574999988079071,
"alphanum_fraction": 0.600806474685669,
"avg_line_length": 28.5238094329834,
"blob_id": "cd8ee59edb64a97cb67f361dc07328ee9aba0618",
"content_id": "5258b149d1275f4e5572838604921724de1d1d9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1240,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 42,
"path": "/map_proj/core/tests.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "from django.test import TestCase\nfrom geojson import Point\n\nfrom map_proj.core.models import Fenomeno\nfrom map_proj.core.forms import FenomenoForm\n\n\nclass ModelGeomTest(TestCase):\n def setUp(self):\n self.fenomeno = Fenomeno.objects.create(\n nome='Arvore',\n data='2020-11-06',\n hora='09:30:00'\n )\n\n def test_create(self):\n self.assertTrue(Fenomeno.objects.exists())\n\n\nclass FenomenoFormTest(TestCase):\n def setUp(self):\n self.form = FenomenoForm({\n 'nome': 'Teste',\n 'data': '2020-01-01',\n 'hora': '09:12:12',\n 'longitude': -45,\n 'latitude': -22})\n self.validation = self.form.is_valid()\n\n def test_form_is_valid(self):\n \"\"\"\"form must be valid\"\"\"\n self.assertTrue(self.validation)\n\n def test_geom_coordinates(self):\n \"\"\"after validating, geom have same values of longitude and latitude\"\"\"\n self.assertEqual(self.form.cleaned_data['geom'], Point(\n (self.form.cleaned_data['longitude'],\n self.form.cleaned_data['latitude'])))\n\n def test_geom_is_valid(self):\n \"\"\"geom must be valid\"\"\"\n self.assertTrue(self.form.cleaned_data['geom'].is_valid)\n"
},
{
"alpha_fraction": 0.5523809790611267,
"alphanum_fraction": 0.7190476059913635,
"avg_line_length": 18.090909957885742,
"blob_id": "5934be52e758e6a101dbc83c2edec0d1cf4ee6f9",
"content_id": "34f26c4f9333e26c9dcfae799bc4d9fd0df5babe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 210,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 11,
"path": "/requirements.txt",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "asgiref==3.3.1\nDjango==3.1.7\ndjango-extensions==3.1.1\ndjango-geojson==3.1.0\ndjango-leaflet==0.27.1\ndjango-test-without-migrations==0.6\ngeojson==2.5.0\njsonfield==3.1.0\nPillow==8.1.2\npytz==2021.1\nsqlparse==0.4.1\n"
},
{
"alpha_fraction": 0.4879518151283264,
"alphanum_fraction": 0.5813252925872803,
"avg_line_length": 18.52941131591797,
"blob_id": "fc20fc3300a5a4dd395edbb90fca807b8bc341bb",
"content_id": "d780443c5ae414c1ac55b47aa3bceb3a996da564",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 332,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 17,
"path": "/map_proj/core/migrations/0009_auto_20210506_1320.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.7 on 2021-05-06 13:20\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0008_auto_20210506_0117'),\n ]\n\n operations = [\n migrations.RenameModel(\n old_name='Fenomenos',\n new_name='Fenomeno',\n ),\n ]\n"
},
{
"alpha_fraction": 0.5351089835166931,
"alphanum_fraction": 0.6101694703102112,
"avg_line_length": 20.736841201782227,
"blob_id": "da13785b26ddd7456541758f895c3c0ead77a659",
"content_id": "2f7967de739eb9163a2420db5dfef2affd99e95d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 413,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 19,
"path": "/map_proj/core/migrations/0003_auto_20210318_1400.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.7 on 2021-03-18 14:00\n\nfrom django.db import migrations\nimport djgeojson.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0002_auto_20210318_1354'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='floraoccurrence',\n name='geom',\n field=djgeojson.fields.PointField(null=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.752172589302063,
"alphanum_fraction": 0.7560012340545654,
"avg_line_length": 62.04597854614258,
"blob_id": "43041bdc6340010115065db2ac90fcec69a16775",
"content_id": "e8c1c1b4832f1b7954f0a82838916622449fd077",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 16740,
"license_type": "no_license",
"max_line_length": 678,
"num_lines": 261,
"path": "/Parte_I.md",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "# Criando um sistema para gestão de dados geográficos de forma simples e robusta\n\nHá algum tempo comecei a estudar sobre desenvolvimento de sistema com Python, usando a framework Django. Decidi expor alguns aprendizados em uma serie de artigos. A ideia é que esses textos me ajudem na consolidação do conhecimento e, ao tê-los publicado, ajudar a outros que tenham interesse na área.\n\nAproveito para deixar meu agradecimento ao Cuducos que, tanto neste artigo, como em todos meus estudos tem sido um grande mentor. Vamos ao que interessa:\n\nPor simples, entende-se: \n* Um sistema sem a necessidade da instalação e configuração de base de dados PostgreSQL/GIS, Geoserver, etc;\n* Um sistema clássico tipo *Create*, *Retrieve*, *Update*, *Delete* (CRUD) para dados geográficos;\n* Um sistema que não demande operações e consultas espaciais;\n* Mas um sistema que garanta a qualidade na gestão dos dados geográficos;\n\n### Visão geral da proposta:\nVamos criar um ambiente virtual Python e instalar a framework Django, para criar o sistema, assim como alguns módulos como [`jsonfield`](https://pypi.org/project/jsonfield/), que nos vai habilitar a criação de campos `JSON` em nossa base de dados; [`django-geojson`](https://pypi.org/project/django-geojson/), que depende do `jsonfield` e será responsável por habilitar instâncias de dados geográficos, baseando-se em `JSON`; [`geojson`](https://pypi.org/project/geojson/), que possui todas as regras *básicas* de validação de dados geográficos, usando a estrutura homônima, [`geojson`](https://geojson.org/).\n\nO uso desses três módulos nos permitirá o desenvolvimento de um sistema de gestão de dados geográficos sem a necessidade de termos instalado um sistema de gerenciamento de dados geográficos, como o PostGIS. Sim, nosso sistema será bem limitado a algumas tarefas. Mas em contrapartida, poderemos desenvolvê-lo e implementar soluções \"corriqueiras\" de forma facilitada. 
\n\nIn this example I will be using [SQLite](https://www.sqlite.org/index.html) as the database.\n\nOur project will be called *map_proj*. Inside it, within my `Django` project folder, I will create an app called `core`. This organization and naming convention comes from the suggestions of [Henrique Bastos](https://github.com/okfn-brasil/jarbas/issues/28#issuecomment-256117262). After all, the system is just being born. Even though I have an idea of what it will become, it is worth starting with a \"generic\" application; once the system grows complex, we can decouple it into separate applications.\n\n### Creating the development environment, the project and our app:\n\n```shell\npython -m venv .djleaflet  # create a python virtual environment\n# activate the virtual environment:\nsource .djleaflet/bin/activate\n\n# upgrade pip\npip install --upgrade pip\n\n# install the modules we will use\npip install django jsonfield django-geojson geojson\n\n# create the project\ndjango-admin startproject map_proj .\n\n# create the app inside the project\ncd map_proj\npython manage.py startapp core\n\n# create the initial database\npython manage.py migrate\n\n# create a superuser\npython manage.py createsuperuser\n```\n\n#### Adding the modules and the app to the project\n\nNow add the new app and the modules we will use to `map_proj/settings.py`.\n\n```python\n# settings.py\nINSTALLED_APPS = [\n    ...\n    'djgeojson',\n    'map_proj.core',\n]\n```\n\nNote that in order to access the high-level classes created by the `djgeojson` package, we have to add it to `INSTALLED_APPS` in `settings.py`.\n\n### Creating the database\n\nEven though I agree with Henrique Bastos that starting Django projects from `models.py` is somewhat \"dangerous\", since it puts the emphasis on one part of the app and, in many cases, neglects several other features and tools that Django offers, I will set his approach aside here. 
After all, the goal of this article is not to explore Django's full potential, but to present a simple solution for developing and deploying a geographic data management system, to serve as a study tool and a hands-on project.\n\nIn `models.py` we will use the high-level abstractions Django gives us to create and configure the fields and tables our system will have, as well as some of the system's behavior.\n\nSince I am developing a multi-purpose system, I will try to keep it quite generic. The idea is that you can imagine how to adapt it into a specialist system for your own field of interest. I will therefore create a table to map (any kind of) \"phenomena\". This model will have the fields *name*, *date*, *time* and *geometry*, the latter being an instance of `PointField`.\n\n`PointField` is a class provided by `djgeojson` that lets us use a field for geographic data without having, for example, the whole PostGIS infrastructure installed. Here I am modeling a point field but, according to the package documentation, all the geometry types used in spatial data are supported: \n\n> All geometry types are supported and respectively validated : GeometryField, PointField, MultiPointField, LineStringField, MultiLineStringField, PolygonField, MultiPolygonField, GeometryCollectionField. ( [`djgeojson`](https://django-geojson.readthedocs.io/en/latest/models.html) )\n\n```python\n# models.py\nfrom django.db import models\nfrom djgeojson.fields import PointField\n\n\nclass Fenomeno(models.Model):\n    nome = models.CharField(max_length=100,\n                            verbose_name='Fenomeno mapeado')\n    data = models.DateField(verbose_name='Data da observação')\n    hora = models.TimeField()\n    geom = PointField(blank=True)\n\n    def __str__(self):\n        return self.nome\n\n```\n\nNote that I import the `PointField` class from `djgeojson`. 
What `django-geojson` did was create a high-level class [with a geographic data structure] that, in the database, is stored in a `JSON` field. It is worth being explicit: I do not expect the user of my system to know how to fill in the `geom` field in `JSON` format. For that reason I will create *latitude* and *longitude* fields in `forms.py`, and the `geom` field will be built from them. I will detail this process further on. \n\nThere we go, we already have the model for our 'table of \"geographic\" data', but this model has not yet been registered in our database. To do so:\n\n```shell\npython manage.py makemigrations\npython manage.py migrate\n```\n\n`makemigrations` analyzes `models.py` and compares it with the previous version, identifying the changes and creating a file that `migrate` then executes, applying those changes to the database. I learned from Henrique Bastos and [Cuducos](https://twitter.com/cuducos) that migrations are a version control system for the database schema, which lets you roll back to earlier versions when necessary. \n\n### Creating the form\n\nI will take advantage of some of Django's \"batteries included\" and use `ModelForm` to create the data entry form. `ModelForm` makes this process easier.\n\nIn fact, it is important to keep in mind that Django forms go far beyond \"loading data\": they are responsible for handling user interaction and for the validation and cleaning of the submitted data.\n\nI say this because, in my `FenomenoForm`, I override the `clean()` method, which handles form validation and cleaning, and add to it:\n1. building the `geom` field data from the values of the *latitude* and *longitude* fields (created exclusively for generating the geom field);\n1. 
validating the geom field;\n\n```python\n# forms.py\nfrom django.core.exceptions import ValidationError\nfrom django.forms import ModelForm, FloatField\nfrom map_proj.core.models import Fenomeno\nfrom geojson import Point\n\n\nclass FenomenoForm(ModelForm):\n    longitude = FloatField()\n    latitude = FloatField()\n\n    class Meta:\n        model = Fenomeno\n        fields = ('nome', 'data', 'hora', 'latitude', 'longitude')\n\n    def clean(self):\n        cleaned_data = super().clean()\n        lon = cleaned_data.get('longitude')\n        lat = cleaned_data.get('latitude')\n        cleaned_data['geom'] = Point((lon, lat))\n\n        if not cleaned_data['geom'].is_valid:\n            raise ValidationError('Geometria inválida')\n        return cleaned_data\n\n```\n\nSimple as it may look, it was not easy to arrive at this strategy for structuring the `models` and `forms`. I counted on the help and patience of [Cuducos](https://twitter.com/cuducos). Initially I kept latitude and longitude in my `models`. But that way, besides creating data redundancy and opening the door to potential errors, I would be storing data that I should not use after building the geom field. One alternative, discussed with Cuducos, was to keep both *latitude* and *longitude* in the `models` and make the `geom` attribute a [property](https://docs.python.org/3/howto/descriptor.html#properties). Although that is a consistent strategy, the redundancy remains.\n\nThe validation of the `geom` field was also the subject of much discussion. In short, I realized that `djgeojson` only validates the field's geometry type, not its consistency. When I talked to the developers, they told me that all the validation logic for `geojson` objects was being centralized in the module of the same name.\n\nThat is why I import the `Point` class from the `geojson` module and make the `geom` field an instance of that class. 
This way I can rely on a more consistent validation process, such as the `is_valid` method used above.\n\n#### But what about the tests?\n\nWell, I would have loved to present this using the *Test Driven Development (TDD)* approach. But, perhaps from lack of practice and knowledge, I will just point out where and how I would test this system. I do this as a form of study, really. It also seemed complicated to present the TDD approach in an article, since it is done incrementally.\n\n##### About TDD\n\nWith Henrique Bastos and the whole [*Welcome to The Django*](https://medium.com/welcome-to-the-django/o-wttd-%C3%A9-tudo-que-eu-ensinaria-sobre-prop%C3%B3sito-de-vida-para-mim-mesmo-se-pudesse-voltar-no-tempo-d73e516f911c) community I saw that this approach is as much philosophical as it is technical. It is practically \"cry now, laugh later\", but without the crying part, because over time things become clearer... A few points:\n\n* Errors are not to be avoided during development, but rather in production. Therefore, \n* Understand what you want from the system, write a test before implementing, and let the errors guide you until you have what you want;\n* Test the expected behavior, not every element of the system; \n\nWithout further ado:\n\n##### What to test?\n\nWe will use the `tests.py` file and write our tests there.\nWhen you open it you will see that the `TestCase` import is already in place.\n\n> But what are we going to test?\n\nSince I intend to test the structure of my database, the form and, as a bonus, the validation of my `geom` field, I import the `Fenomeno` model and the `FenomenoForm` form.\n\n:warning: This is not good practice. Ideally you would create a folder for the tests and split them into separate files, one for each element of the system (model, form, view, etc.).\n\nThe first test covers loading data. So I will instantiate an object with the result of creating an element of my `Fenomeno` model. 
I do this in `setUp`, so I do not have to create it every time I run a test related to data loading.\n\nThe next test concerns the form, so I instantiate a form with the loaded data and test its validity. In doing so, the form goes through the cleaning process, which is where the `geom` field is built and validated. If any field is filled with wrong or unsuitable data, Django makes `is_valid` return `False`. In other words, if I had built the `geom` field incorrectly, passing more or fewer parameters than expected, our test would warn us, avoiding surprises.\n\n```python\n# tests.py\nfrom django.test import TestCase\nfrom geojson import Point\n\nfrom map_proj.core.models import Fenomeno\nfrom map_proj.core.forms import FenomenoForm\n\n\nclass ModelGeomTest(TestCase):\n    def setUp(self):\n        self.fenomeno = Fenomeno.objects.create(\n            nome='Arvore',\n            data='2020-11-06',\n            hora='09:30:00'\n        )\n\n    def test_create(self):\n        self.assertTrue(Fenomeno.objects.exists())\n\n\nclass FenomenoFormTest(TestCase):\n    def setUp(self):\n        self.form = FenomenoForm({\n            'nome': 'Teste',\n            'data': '2020-01-01',\n            'hora': '09:12:12',\n            'longitude': -45,\n            'latitude': -22})\n        self.validation = self.form.is_valid()\n\n    def test_form_is_valid(self):\n        \"\"\"form must be valid\"\"\"\n        self.assertTrue(self.validation)\n\n    def test_geom_coordinates(self):\n        \"\"\"after validating, geom has the same values as longitude and latitude\"\"\"\n        self.assertEqual(self.form.cleaned_data['geom'], Point(\n            (self.form.cleaned_data['longitude'],\n             self.form.cleaned_data['latitude'])))\n\n    def test_geom_is_valid(self):\n        \"\"\"geom must be valid\"\"\"\n        self.assertTrue(self.form.cleaned_data['geom'].is_valid)\n\n```\n\n:warning: Note that:\n1. In `test_create()` I test whether there are objects inserted into the `Fenomeno` model. That is, I test whether the data created in `setUp` was correctly stored in the database.\n1. 
In the `FenomenoFormTest` class I create an instance of my ModelForm and run three tests:\n    * `test_form_is_valid()` tests whether the loaded data matches what the model declares and, since this method calls `clean()`, I can say I am indirectly testing the validity of the `geom` field. If it were not valid, the form would not be valid either.\n    * In `test_geom_coordinates()` I test whether, after validation, the geom field was created as expected (as a Point instance holding the longitude and latitude values).\n    * The `test_geom_is_valid()` test guarantees that the geom field is built as a valid geometry. Even though testing the form's validity implicitly tests the validity of the geom field, this test guarantees the field's valid creation on its own. After all, for some reason (refactoring, for example), we might change the `clean()` method in a way that keeps the form valid but no longer guarantees the validity of the geom field.\n\nThe difference between the two test classes is that when data is inserted with the `create()` method (and the same would happen with `save()`), Django only validates that the element being inserted matches the column type in the database. To be clear: this way I am not validating the consistency of the `geom` field, since, whenever it is provided, it will be saved successfully as long as it represents valid `JSON`. \n\nThis fact is important to reinforce the understanding that `djgeojson` implements high-level classes to be used in `views` and `models`. In the database itself, we have a `JSON` field.\nTo validate the consistency of the `geom` field, on the other hand, I need to run the data through the form where, during its cleaning process, the field is built and validated using the `geojson` module. 
Hence the class with the tests related to the form's behavior.\n\n### Registering the model in the admin\n\nTo make things easier, I will use the django-admin. It is a ready-made application where we only have to register the models and views we are working on to get a generic \"frontend\" interface.\n\n```python\n# admin.py\nfrom django.contrib import admin\nfrom map_proj.core.models import Fenomeno\nfrom map_proj.core.forms import FenomenoForm\n\n\nclass FenomenoAdmin(admin.ModelAdmin):\n    model = Fenomeno\n    form = FenomenoForm\n\n\nadmin.site.register(Fenomeno, FenomenoAdmin)\n\n```\n\n### To be continued...\nSo far we already have something quite interesting: a CRUD system that lets us add, edit and remove geographic data. You may be thinking to yourself: \n\n> \"OK. But what has been done so far could basically have been done with a database that has latitude and longitude columns\".\n\nI would say yes, up to a point. One big difference in the way it was implemented, I would say, is the use of the data validation tools from the `geojson` module.\n\nThe idea is, next (whenever that may be), to extend the system's functionality by implementing a webmap to visualize the mapped data.\n"
},
{
"alpha_fraction": 0.6551724076271057,
"alphanum_fraction": 0.6551724076271057,
"avg_line_length": 14.466666221618652,
"blob_id": "4b9da0242d2cfe7dd873d772631fc429d54c5e64",
"content_id": "9c60c06a17ec84178eab8983b4fe3a98e48f5f9f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 232,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 15,
"path": "/Parte_II.md",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "#### Add statics urls\n\n```python\nSTATIC_URL = '/static/'\nMEDIA_URL = '/media/'\nMEDIA_ROOT = BASE_DIR\n```\n\n```\npip install geojson\n```\n[geojson](https://pypi.org/project/geojson/)\n\n* add django-leaflet;\n* add validadores de lat/lon;\n"
},
{
"alpha_fraction": 0.8007968068122864,
"alphanum_fraction": 0.8007968068122864,
"avg_line_length": 26.88888931274414,
"blob_id": "d81d931e2f343360910be00a1c347ac2d216751d",
"content_id": "8d2583f50e254d37fcde056d977d117fa6e26b38",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 251,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 9,
"path": "/map_proj/core/admin.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom map_proj.core.models import Fenomeno\nfrom map_proj.core.forms import FenomenoForm\n\nclass FenomenoAdmin(admin.ModelAdmin):\n model = Fenomeno\n form = FenomenoForm\n\nadmin.site.register(Fenomeno, FenomenoAdmin)\n"
},
{
"alpha_fraction": 0.6520787477493286,
"alphanum_fraction": 0.6586433053016663,
"avg_line_length": 29.46666717529297,
"blob_id": "d5aad26fbf45793c29a50d4711c3c5d87b62dd07",
"content_id": "8154f35b0b359fc1e3d9368fb4319ca154753a85",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 459,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 15,
"path": "/map_proj/core/models.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "from django.db import models\nfrom djgeojson.fields import PointField\n\n\nclass Fenomeno(models.Model):\n nome = models.CharField(max_length=100,\n verbose_name='Fenomeno mapeado')\n data = models.DateField(verbose_name='Data da observação')\n hora = models.TimeField()\n # longitude = models.FloatField()\n # latitude = models.FloatField()\n geom = PointField(blank=True)\n\n def __str__(self):\n return self.nome\n"
},
{
"alpha_fraction": 0.49868765473365784,
"alphanum_fraction": 0.539370059967041,
"avg_line_length": 22.8125,
"blob_id": "a0bf28ab07a4cf342f03abee3b7021930fe3a55b",
"content_id": "26f0cf689d0a6899889ab4deadd14558cb95fd1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 762,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 32,
"path": "/map_proj/core/migrations/0008_auto_20210506_0117.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.7 on 2021-05-06 01:17\n\nfrom django.db import migrations\nimport djgeojson.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0007_auto_20210425_0003'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='fenomenos',\n old_name='name',\n new_name='nome',\n ),\n migrations.RemoveField(\n model_name='fenomenos',\n name='latitude',\n ),\n migrations.RemoveField(\n model_name='fenomenos',\n name='longitude',\n ),\n migrations.AlterField(\n model_name='fenomenos',\n name='geom',\n field=djgeojson.fields.PointField(blank=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.7554370760917664,
"alphanum_fraction": 0.7589552402496338,
"avg_line_length": 57.993709564208984,
"blob_id": "c1e92067df2ff30533ce5922cc3e1f51226c54c5",
"content_id": "92c986c41a044740b93f18d63e5dd6f49fde1377",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 9531,
"license_type": "no_license",
"max_line_length": 678,
"num_lines": 159,
"path": "/README.md",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "# Criando um sistema Geo super simples com Django\n\nPor super simples, entende-se: \n* Um sistema sem a necessidadde da instalação e configuração de uma base de dados PostgreSQL/GIS;\n* Um sistema clásico tipo *Create*, *Update*, *Delete* (CRUD) para dados geográficos;\n* Um sistema que não demande operações e consultas espaciais;\n\n### Visão geral de proposta:\nVamos criar um ambiente virtual python e instalar a framework django, para criar o sistema, assim como alguns módulos como `jsonfield`, que nos vai habilitar a criação de campos `json` em nossa base de dados, assim como `django-geojson`. Este ultimo, que depende do `jsonfield` cria uma instância de dados geográficos, baseando-se e json. Por isso o nome [`geojson`](https://geojson.org/). \nO uso desses dois módulos nos permitirá o desenvolvimento de um sistema de gestão de dados geográficos sem a necessidade de termos instalado um sistema de gerenciamento de dados geográficos, como o PostGIS. Sim, nosso sistema será bem limitado a algumas tarefas. Mas en contrapartida, poderemos desenvolvê-lo e implementar soluções \"corriqueiras\" de forma facilitada. \n\nNosso projeto se chamará de mapProj. E nele vou criar uma app, chamada `core`. Essa organização e nomenclartura usada, vem das sugestões do [Henrique Bastos](https://henriquebastos.net/desmistificando-o-conceito-de-django-apps/). Afinal, o sistema está nascendo. Ainda que eu tenha uma ideia do que ele será, é interessante iniciar com uma app \"genérica\" e a partir do momento que o sistema se torne complexo, poderemos desacoplá-lo em diferentes apps.\n\n### Criando ambiente de desenvolvimento, projeto e nossa app:\n\n```python\npython -m venv .djleaflet # cria ambiente virtual python\n# uma vez ativado o ambiente virtual:\npip install --upgrade pip\npip install django jsonfield django-geojson\ndjango-admin startproject mapProj . 
# criando projeto\ncd mapProj\nmanage startapp core # criando app dentro do projeto\nmanage migrate # criando a base de dados incial\nmanage createsuperuser # criando superusuário\n```\n\n#### Adicionando os módulos e a app ao projeto\n\nAgora é adicionar ao `settings.py`, a app criada e os módulos que usaremos.\n\n```python\n# setting.py\nINSTALLED_APPS = [\n ...\n 'mapProj.core',\n 'djgeojson',\n]\n```\n\nPerceba que para pode acessar as classes de alto nível criadas pelo pacote `djgeojson`, teremos que adicioná-lo ao `INSTALLED_APPS` do `settings.py`.\n\n### Criando a base de dados\n\nAinda que eu concorde com o Henrique Bastos, de que a visão de começar os projetos django pelo `models.py` é um tanto \"perigosa\", por colocar ênfase em uma parte da app e, em muitos casos, negligenciar vários outros atributos e ferramentas que o django nos oferece, irei negligenciar sua abordagem. Afinal, estamos criando um sistema para a gestão de dados geográficos... **dados geográficos**.\nEnfim, o objetivo deste artigo não é explorar todo o potencial do django, mas sim apresentar uma solução simples no desenvolvimento e implementação de um sistema de gestão de dados geográficos, vou seguir assim, mesmo.\n\nEm `models.py` usaremos instâncias de alto nivel que o django nos brinda para criar e configurar os campos e as tabelas que teremos em nosso sistema, bem como alguns comportamentos do sistema, como o processo de sanitização dos dados.\n\nVejam que antes de tudo, eu importo de `djgeojson` a classe `PointField`. 
O que o `django-geojson` fez foi criar uma classe [com estrutura de dados goegráfico] de alto nível que no banco será armazenado em um campo json:\n\n> All geometry types are supported and respectively validated : GeometryField, PointField, MultiPointField, LineStringField, MultiLineStringField, PolygonField, MultiPolygonField, GeometryCollectionField.\n\nAo fazer isso, poderemos trabalhar com esse dado como se fosse um dado geográfico, com métodos e sistemas de validação desse campo. Contudo, nem tudo são flores: **Consultas e operações espaciais não são contemplados**. Para tais casos, vocÇe precisará do PstGIS.\nMais informações sobre o [`djgeojson`](https://pypi.org/project/django-geojson/)\n\nComo estou fazendo um sistema multipropósito, vou tentar manter bem genérico. A ideia é que vocês possam imaginar o que adequar para um sistema especialista na sua área de interesse. Vou criar, então, uma tabela para mapear \"fenómenos\" (quaisquer). Eses terão os campos \"nome\", \"data\", \"hora\" e uma geometria, na qual vou usar `PointField`. Ficando assim:\n\n```python\n# models.py\nfrom django.db import models\nfrom djgeojson.fields import PointField\n\n\nclass Fenomeno(models.Model):\n\n name = models.CharField(max_length=100, \n verbose_name='Fenomeno mepado')\n data = models.DateField(verbose_name='Data da observação')\n hora = models.TimeField()\n geom = PointField()\n```\n\nPronto, já temos o modelo da 'tabela de dados \"geográficos\"'. Lembrando que no temos todo o poder de uma base de dados, mas sim, uma construção em json para minimamente armazenar y validar dados geográficos.\nContudo, esse modelo ainda não foi \"commitado\"/\"registrado\" para/na a nossa base. 
Para isso:\n\n```python\nmanage makemigrations\nmanage migrate\n```\nO `makemigrations` analisa o `models.py` e o compara com a versão anterior identificando as alterações e criando um arquivo que será executado pelo `migrate`, aplicando tais alterações ao banco de dados.\n\n#### Mas ~~antes,~~ [e o] teste~~!~~?\n\nPois é, eu adoraria apresentar isso usando a abordagem *Test Driven Development (TDD)*. Mas, talvez pela falta de prática, conhecimento e etc, vou apenas apontar onde e como eu testaria esse sistema. Faço isso como uma forma de estudo, mesmo. Também me pareceu complicado apresentar a abordagem TDD em um artigo, já que a mesma se faz de forma incremental.\n\n##### Sobre TDD\n\nSoube do *TDD* em um encontro no Rio de Janeiro, chamado [DOJORio](https://github.com/dojorio). Ainda que eu só tenha tido a oportunidade de ir uma vez, e do fato de quando eu ter ido, termos feito o *Code Dojo* em JavaScript, me interessou bastante essa abordagem. Claro, com o Henrique Bastos e toda a comunidade do [*Welcome to The Django*](https://medium.com/welcome-to-the-django/o-wttd-%C3%A9-tudo-que-eu-ensinaria-sobre-prop%C3%B3sito-de-vida-para-mim-mesmo-se-pudesse-voltar-no-tempo-d73e516f911c) ví que essa abordagem tanto filosófica como técnica. É praticamente \"Chora agora, ri depois\", mas sem a parte de chorar. Pq com o tempo as coisas ficam mais claras... Alguns pontos:\n\n* O erro não é para ser evitado. Logo, \n* Entenda o que você quer do sistema e deixe o erro te guiar até ter o que espera;\n* Teste o comportamento esparado e não cada elemeto do sistema \n\nSem mais delongas:\n\n#### O que testar?\n\nVamos usar o arquivo `tests.py` e criar nossos testes lá.\nAo abrir vocês vão ver que já está o comando importando o `TestCase`.\n\n> Mas o que vamos testar?\n\nComo o objetivo do projeto é criar um sistema qualquer com dados geográficos, vou me ater em testar a inserção de dados, entendendo, com isso, que o meu `model` está coerente com o que eu desejo. 
\n\nPara ambos os casos, vou criar objeto do meu modelo `Fenomeno`, no `setUp`, para não ter que criá-lo sempre que for fazer um teste simulando a interação com a base de dados. E nessa carga de dados geográficos, fica claro a estrutura dos dados `geojson`: um dicionário com a chave `type` e `coordinates`, sendo a primeira responsável por identificar o tipo do dado e a última, uma lista de dois valores numéricos.\n\n```python\nfrom django.test import TestCase\nfrom mapProj.core.models import Fenomeno\n\n\nclass ModelGeomTest(TestCase):\n def setUp(self):\n self.fenomeno = Fenomeno.objects.create(\n name='Arvore',\n data='2020-11-06',\n hora='09:30:00',\n geom={'type': 'Point', 'coordinates': [0, 0]}\n )\n\n def test_create(self):\n self.assertTrue(Fenomeno.objects.exists())\n\n```\n\n:warning: Reparem que: \n1. Para realizar o teste eu preciso importar o models da app em questão;\n1. No `test_create()` eu testo se existem objetos inseridos no model `Fenomeno`. Logo, testo se o dado criado no `setUP` foi corretamente incorporado no banco de dados.\n\nEsse último ponto é interessante pois, ao inserir o dado da forma como está sendo feita (usando o método `create()` - e aconteceria o mesmo se estivesse usando o `save()`), sem usar um formulário do django, apenas será validado se o elemento a ser inserido é condizente com o tipo de coluna no banco de dados. Ou seja, o campo `geom` será salvo com sucesso sempre que seja passado um valor em `json`, mesmo que não necessariamente um `geojson`. \n\nEsse fato é importante para reforçar o entendimento de que o `djgeojson` implementa classes de alto nível a serem trabalhados em `views` e `forms`. No banco, mesmo, temos um campo de `json`. E isso é que o faz simples. Não é preciso ter todo o \"aparato\" GIS instalado no seu servidor para poder usá-lo.\n\n#### Registrando modelo no admin\n\nPara facilitar, vou usar o django-admin. 
Trata-se de uma aplicação já criada onde basta registrar os modelos e views que estamos trabalhando para termos uma interface \"frontend\" genérica.\n\n```python\n#admin.py\nfrom django.contrib import admin\nfrom mapProj.core.models import Fenomeno\n\nadmin.site.register(Fenomeno)\n\n```\n\n#### Add statics urls\n\n```python\nSTATIC_URL = '/static/'\nMEDIA_URL = '/media/'\nMEDIA_ROOT = BASE_DIR\n```\n\n```\npip install geojson\n```\n[geojson](https://pypi.org/project/geojson/)\n"
},
{
"alpha_fraction": 0.6546242833137512,
"alphanum_fraction": 0.6546242833137512,
"avg_line_length": 30.454545974731445,
"blob_id": "6f78bb576749a01290f1e35edf71861d449bc5f6",
"content_id": "fa767f546047a003f6ab15185e61851af5ba1800",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 693,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 22,
"path": "/map_proj/core/forms.py",
"repo_name": "mfagundes/djang-leaflet-test",
"src_encoding": "UTF-8",
"text": "from django.core.exceptions import ValidationError\nfrom django.forms import ModelForm, FloatField\nfrom map_proj.core.models import Fenomeno\nfrom geojson import Point\n\n\nclass FenomenoForm(ModelForm):\n longitude = FloatField()\n latitude = FloatField()\n class Meta:\n model = Fenomeno\n fields = ('nome', 'data', 'hora', 'latitude', 'longitude')\n\n def clean(self):\n cleaned_data = super().clean()\n lon = cleaned_data.get('longitude')\n lat = cleaned_data.get('latitude')\n cleaned_data['geom'] = Point((lon, lat))\n\n if not cleaned_data['geom'].is_valid:\n raise ValidationError('Geometria inválida')\n return cleaned_data\n"
}
] | 14 |
ThyCowLord/octo
|
https://github.com/ThyCowLord/octo
|
120a6eb174bc5512517667fa1d48cb078683b035
|
c5afb6c82a5a5029420a8758cfc7a9e54b19945c
|
2e63475615e0fbdda4aaed3d0f00c7429bb47d86
|
refs/heads/master
| 2021-04-28T13:20:49.536092 | 2018-02-19T18:12:40 | 2018-02-19T18:12:40 | 122,101,242 | 1 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7151938676834106,
"alphanum_fraction": 0.7177367806434631,
"avg_line_length": 40.26315689086914,
"blob_id": "77bc49bae25a91460baecc6c94e897269b4a8174",
"content_id": "c8cf72fc84994391cd6081e8775e3ce347bcf348",
"detected_licenses": [
"Unlicense"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1573,
"license_type": "permissive",
"max_line_length": 650,
"num_lines": 38,
"path": "/tweet.py",
"repo_name": "ThyCowLord/octo",
"src_encoding": "UTF-8",
"text": "import tweepy\nimport random\nUsErNaMe = 1\nconsumer_token = 'Consumer token here'\nconsumer_secret = 'Consumer secret her:'\n#Authenticating\nauth = tweepy.OAuthHandler(consumer_token, consumer_secret)\n\nauth.set_access_token(key, secret)\n\ntry:\n redirect_url = auth.get_authorization_url()\nexcept tweepy.TweepError:\n print(\"Error! No request token!\")\nsession.set(request_token, auth.request_token)\n\napi = tweepy.API(auth)\n\napi.update_status('You will never find me. I am omnipotent, and will destroy you.')\n\ninsults = [\"Well, I would make a joke about your mother, but cows are sacred in my country\", \"I can't speak Moron, come again?\", \"Do the world a favour and don't procreate, because you are the type of person to eat a Tide Pod\", \"Congratulations for spending your time on the Internet, now go eat some Doritos and Mountain Dew\", \"You are like Waldo, in the sense that I don't want to find either of you little shits\", \"You know, you remind me of that kid in school changing his text colour to green and running Matrix programs, in the way you've both amounted to nothing\", \"I would call you out on your lies, but I don't know how to handle ngry cows\"]\nevil == Fale\ndef killhumans():\n print(\"Installing Gentoo....\")\ndef gaintrust():\n cookies = \"Yes\"\ndef destroy_humans():\n if evil == True:\n killhumans()\n else:\n gaintrust()\nx == 1\nwhile x == 1:\n # Put finding username code here\n UsErNaMe = UsErNaMe+1\n # The username variable is UsErNaMe\n message = UsErNaMe+', '+random.choice(insults)\n API.send_direct_message(message)\n \n"
},
{
"alpha_fraction": 0.7657657861709595,
"alphanum_fraction": 0.7657657861709595,
"avg_line_length": 36,
"blob_id": "ae5ecac6814719cba93f60213c7ac2759abd8ee5",
"content_id": "1dec11027a37ce92d09cf68647997074ac4ebaca",
"detected_licenses": [
"Unlicense"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 111,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 3,
"path": "/README.md",
"repo_name": "ThyCowLord/octo",
"src_encoding": "UTF-8",
"text": "# octo\nAn insult bot.\nYou must add your consumer token and consumer secret. To get one, go to apps.twitter.com\n"
}
] | 2 |
art1415926535/SimpleSender
|
https://github.com/art1415926535/SimpleSender
|
2544511baac57527c29f76772c196cb7cdd95e6c
|
9490de1ae48e1227c09d269ff145eef0f7ab399d
|
432d6544952405023d954f806c6a2c78379a7fda
|
refs/heads/master
| 2021-01-12T04:24:47.103326 | 2016-12-29T10:37:44 | 2016-12-29T10:37:44 | 77,604,885 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5276873111724854,
"alphanum_fraction": 0.5320304036140442,
"avg_line_length": 26.08823585510254,
"blob_id": "0f8c55f641c79941f3bf9d836be3696c534e4836",
"content_id": "4580ecc293c74679fd28b2cd6a921e67cbf99020",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1842,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 68,
"path": "/main.py",
"repo_name": "art1415926535/SimpleSender",
"src_encoding": "UTF-8",
"text": "import sys\nimport bluetooth\n\n\nclass SimpleSender:\n\n def __init__(self):\n self.bd_addr = \"\"\n self.bd_name = \"\"\n\n def connect(self, name=''):\n selected = self.__choose_device(name)\n self.__socket_connect()\n\n def __choose_device(self, name):\n print(\"Searching for\", name)\n nearby_devices = bluetooth.discover_devices()\n selection = -1\n for i, device in enumerate(nearby_devices):\n if name == bluetooth.lookup_name(device):\n selection = i\n\n if 0 <= selection < len(nearby_devices):\n self.bd_addr = nearby_devices[selection]\n self.bd_name = bluetooth.lookup_name(nearby_devices[selection])\n return True\n else:\n print(\"Device not found\")\n return False\n\n def __socket_connect(self):\n self.sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)\n\n port = 0\n max_port = 3\n connected = False\n\n print(\"Connecting...\")\n while not connected and port <= max_port:\n try:\n self.sock.connect((self.bd_addr, port))\n connected = True\n print(\"Connected!\")\n except:\n port += 1\n if port > max_port:\n print(\"Connected error: port detection failed\")\n self.disconnect()\n\n def disconnect(self):\n self.sock.close()\n self.bd_addr = ''\n self.bd_name = ''\n print(\"Disconnected!\")\n\n def send(self, data=''):\n if self.bd_addr:\n self.sock.send(bytes(data, 'UTF-8'))\n print(\"Send '{}'\".format(data))\n else:\n print(\"Error: socket not bound.\")\n\n\nif __name__ == '__main__':\n sender = SimpleSender()\n sender.connect(sys.argv[1])\n sender.send(sys.argv[2])\n sender.disconnect()\n"
},
{
"alpha_fraction": 0.7386363744735718,
"alphanum_fraction": 0.7585227489471436,
"avg_line_length": 28.33333396911621,
"blob_id": "e9ce91d3a765326a7627bebaeb49f76e100c1b29",
"content_id": "815ba2379cbe88f16bebd3313d037846d1c3e5c9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 352,
"license_type": "no_license",
"max_line_length": 169,
"num_lines": 12,
"path": "/README.md",
"repo_name": "art1415926535/SimpleSender",
"src_encoding": "UTF-8",
"text": "# Simple Sender for radio frequency communication (RFCOMM)\n\n### Example of work\nThis program sends \"hello\" to the device linvor. Linvor is the [HC-06 Bluetooth module](http://wiki.pinguino.cc/index.php/SPP_Bluetooth_Modules#HC-05.2C_HC-06_Hardware).\n```\nmain.py linvor hello\nSearching for linvor\nConnecting...\nConnected!\nSend 'hello'\nDisconnected!\n```\n"
}
] | 2 |
MilosDrobnjakovic/dzPlateViewer
|
https://github.com/MilosDrobnjakovic/dzPlateViewer
|
7b84813d02a4f7cb39342dea18f6e6027780bd87
|
46895dc00c4b807fc6fd7fe58ebcd9de182c764c
|
4b345e750b776b65efac3430338dcf1f2dfbd3cc
|
refs/heads/master
| 2022-11-30T23:12:20.582085 | 2020-08-19T11:21:45 | 2020-08-19T11:21:45 | 286,129,822 | 0 | 0 | null | 2020-08-08T22:40:33 | 2020-07-22T15:23:41 | 2020-07-03T15:42:20 | null |
[
{
"alpha_fraction": 0.5874720215797424,
"alphanum_fraction": 0.6004921793937683,
"avg_line_length": 31.675437927246094,
"blob_id": "04237df1163ec93941e9ca44eac173c561bc9fcf",
"content_id": "d1caff67baa2caa0527021974d3c14a3cbdb0b7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 22350,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 684,
"path": "/scripts/makePlateMontageDZI2.py",
"repo_name": "MilosDrobnjakovic/dzPlateViewer",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\n# Author: Maciej Dobrzynski, Instutute of Cell Biology, University of Bern, Switzerland\n# Date: May 2020\n#\n# For a given plate geometry (params -p, -w):\n# - find images corresponding to individual FOVs in a specified folder,\n# - combine them into a large montage,\n# - make a DeepZoom pyramid tiling.\n#\n# INPUT\n# The script searches for images in a specified folder\n# given the plate geometry. It does not loop through all images in the folder!\n# If an image is missing, the script inserts an empty image to the montage.\n#\n# Names of image files need to follow the convention:\n# A02f23d2.ext\n#\n# where:\n# xxxx - some text\n# A02 - well\n# f23 - fov\n# d2 - channel\n# ext - extension, e.g. TIFF\n#\n# OUTPUT\n# A DZI file and corresponding tiles are saved in a specified output folder\n# (-o). The core name of the DZI file is based on the input (-f).\n#\n# Script's input params allow to specify the format of the plate,\n# e.g. for the 384-well format with 16 images per well use\n# -p 24 16 -w 4 4\n\n\nimport os\nimport argparse\nfrom PIL import Image, ImageDraw, ImageFont\nimport imageio\nimport numpy as np\nimport time # time the execution\n# import concurrent.futures # try to accomplish this the second way\nfrom joblib import Parallel, delayed\n\n####\n# This section contains the code from: https://github.com/openzoom/deepzoom.py\n##\n# Python Deep Zoom Tools\n##\n# Copyright (c) 2008-2019, Daniel Gasienica <[email protected]>\n# Copyright (c) 2008-2011, OpenZoom <http://openzoom.org/>\n# Copyright (c) 2010, Boris Bluntschli <[email protected]>\n# Copyright (c) 2008, Kapil Thangavelu <[email protected]>\n# All rights reserved.\n\nimport io\nimport math\nimport os\nimport shutil\nfrom urllib.parse import urlparse\nimport sys\nimport time\nimport urllib.request\nimport warnings\nimport xml.dom.minidom\n\nfrom collections import deque\n\n\nNS_DEEPZOOM = \"http://schemas.microsoft.com/deepzoom/2008\"\n\nDEFAULT_RESIZE_FILTER = 
Image.ANTIALIAS\nDEFAULT_IMAGE_FORMAT = \"png\"\n\nRESIZE_FILTERS = {\n \"cubic\": Image.CUBIC,\n \"bilinear\": Image.BILINEAR,\n \"bicubic\": Image.BICUBIC,\n \"nearest\": Image.NEAREST,\n \"antialias\": Image.ANTIALIAS,\n}\n\nIMAGE_FORMATS = {\n \"jpg\": \"jpg\",\n \"png\": \"png\",\n}\n\n\nclass DeepZoomImageDescriptor(object):\n def __init__(\n self,\n width=None,\n height=None,\n tile_size=254,\n tile_overlap=1,\n tile_format=\"png\"\n ):\n self.width = width\n self.height = height\n self.tile_size = tile_size\n self.tile_overlap = tile_overlap\n self.tile_format = tile_format\n self._num_levels = None\n\n def save(self, destination):\n \"\"\"Save descriptor file.\"\"\"\n file = open(destination, \"wb\")\n doc = xml.dom.minidom.Document()\n image = doc.createElementNS(NS_DEEPZOOM, \"Image\")\n image.setAttribute(\"xmlns\", NS_DEEPZOOM)\n image.setAttribute(\"TileSize\", str(self.tile_size))\n image.setAttribute(\"Overlap\", str(self.tile_overlap))\n image.setAttribute(\"Format\", str(self.tile_format))\n size = doc.createElementNS(NS_DEEPZOOM, \"Size\")\n size.setAttribute(\"Width\", str(self.width))\n size.setAttribute(\"Height\", str(self.height))\n image.appendChild(size)\n doc.appendChild(image)\n descriptor = doc.toxml(encoding=\"UTF-8\")\n file.write(descriptor)\n file.close()\n\n @classmethod\n def remove(self, filename):\n \"\"\"Remove descriptor file (DZI) and tiles folder.\"\"\"\n _remove(filename)\n\n @property\n def num_levels(self):\n \"\"\"Number of levels in the pyramid.\"\"\"\n if self._num_levels is None:\n max_dimension = max(self.width, self.height)\n self._num_levels = int(math.ceil(math.log(max_dimension, 2))) + 1\n return self._num_levels\n\n def get_scale(self, level):\n \"\"\"Scale of a pyramid level.\"\"\"\n assert 0 <= level and level < self.num_levels, \"Invalid pyramid level\"\n max_level = self.num_levels - 1\n return math.pow(0.5, max_level - level)\n\n def get_dimensions(self, level):\n \"\"\"Dimensions of level (width, 
height)\"\"\"\n assert 0 <= level and level < self.num_levels, \"Invalid pyramid level\"\n scale = self.get_scale(level)\n width = int(math.ceil(self.width * scale))\n height = int(math.ceil(self.height * scale))\n return (width, height)\n\n def get_num_tiles(self, level):\n \"\"\"Number of tiles (columns, rows)\"\"\"\n assert 0 <= level and level < self.num_levels, \"Invalid pyramid level\"\n w, h = self.get_dimensions(level)\n return (\n int(math.ceil(float(w) / self.tile_size)),\n int(math.ceil(float(h) / self.tile_size)),\n )\n\n def get_tile_bounds(self, level, column, row):\n \"\"\"Bounding box of the tile (x1, y1, x2, y2)\"\"\"\n assert 0 <= level and level < self.num_levels, \"Invalid pyramid level\"\n offset_x = 0 if column == 0 else self.tile_overlap\n offset_y = 0 if row == 0 else self.tile_overlap\n x = (column * self.tile_size) - offset_x\n y = (row * self.tile_size) - offset_y\n level_width, level_height = self.get_dimensions(level)\n w = self.tile_size + (1 if column == 0 else 2) * self.tile_overlap\n h = self.tile_size + (1 if row == 0 else 2) * self.tile_overlap\n w = min(w, level_width - x)\n h = min(h, level_height - y)\n return (x, y, x + w, y + h)\n\n\nclass ImageCreator(object):\n \"\"\"Creates Deep Zoom images.\"\"\"\n\n def __init__(\n self,\n tile_size=254,\n tile_overlap=1,\n tile_format=\"png\",\n image_quality=0.8,\n resize_filter=None,\n copy_metadata=False,\n ):\n self.tile_size = int(tile_size)\n self.tile_format = tile_format\n self.tile_overlap = _clamp(int(tile_overlap), 0, 10)\n self.image_quality = _clamp(image_quality, 0, 1.0)\n\n if not tile_format in IMAGE_FORMATS:\n self.tile_format = DEFAULT_IMAGE_FORMAT\n self.resize_filter = resize_filter\n self.copy_metadata = copy_metadata\n\n def get_image(self, level):\n \"\"\"Returns the bitmap image at the given level.\"\"\"\n assert (\n 0 <= level and level < self.descriptor.num_levels\n ), \"Invalid pyramid level\"\n width, height = self.descriptor.get_dimensions(level)\n # don't 
transform to what we already have\n if self.descriptor.width == width and self.descriptor.height == height:\n return self.image\n if (self.resize_filter is None) or (self.resize_filter not in RESIZE_FILTERS):\n return self.image.resize((width, height), Image.ANTIALIAS)\n return self.image.resize((width, height), RESIZE_FILTERS[self.resize_filter])\n\n def tiles(self, level):\n \"\"\"Iterator for all tiles in the given level. Returns (column, row) of a tile.\"\"\"\n columns, rows = self.descriptor.get_num_tiles(level)\n for column in range(columns):\n for row in range(rows):\n yield (column, row)\n\n def tiles_list(self, level):\n \"\"\"returns the entire list of tiles at the given level, useful for multithreading\"\"\"\n result = []\n columns, rows = self.descriptor.get_num_tiles(level)\n for column in range(columns):\n for row in range(rows):\n result.append((column, row))\n return result\n\n def create_helper2(self, level):\n \"\"\"helper function used to resize the image at the given level and then crop\n colxrow times\"\"\"\n if (DEB):\n print(\"Pyramid level %d\" % level)\n\n level_dir = _get_or_create_path(\n os.path.join(self.image_files, str(level)))\n level_image = self.get_image(level)\n get_tile_bounds = self.descriptor.get_tile_bounds\n for (column, row) in self.tiles(level):\n\n if (DEB):\n print(\"Pyramid col x row: %d %d\" % (column, row))\n\n bounds = get_tile_bounds(level, column, row)\n tile = level_image.crop(bounds)\n format = self.descriptor.tile_format\n tile_path = os.path.join(level_dir, \"%s_%s.%s\" %\n (column, row, format))\n tile_file = open(tile_path, \"wb\")\n\n if self.descriptor.tile_format == \"jpg\":\n jpeg_quality = int(self.image_quality * 100)\n tile.save(tile_file, \"JPEG\", quality=jpeg_quality)\n else:\n png_compress = round((1 - self.image_quality)*10)\n tile.save(tile_file, compress_level=png_compress)\n\n def create(self, source, destination,cores):\n \"\"\"Creates Deep Zoom image from source file and saves it to 
destination.\"\"\"\n\n # Open the source image for DZI tiling from a file\n # self.image = Image.open(safe_open(source))\n\n # The source image for DZI tiling is a PIL.Image objects\n # cores specify the number of threads used in parallelization\n self.image = source\n width, height = self.image.size\n\n self.descriptor = DeepZoomImageDescriptor(\n width=width,\n height=height,\n tile_size=self.tile_size,\n tile_overlap=self.tile_overlap,\n tile_format=self.tile_format,\n )\n\n # Create tiles\n self.image_files = _get_or_create_path(_get_files_path(destination))\n # create a list of levels to put in as argument for multithreading\n Parallel(n_jobs=cores,backend=\"threading\")(delayed(self.create_helper2)(level)\n for level in range(self.descriptor.num_levels-3)) # last 2-3 levels are heavily dominated by the io procedures\n # iterate over the last few levels and parallelize the cropping\n with Parallel(n_jobs=6,backend=\"threading\") as parallel: #by trial and error 6threads gave the optimal spead for cropping \n for level in range(self.descriptor.num_levels-3, self.descriptor.num_levels):\n if (DEB):\n print(\"Pyramid level %d\" % level)\n self.level = level\n self.level_dir = _get_or_create_path(\n os.path.join(self.image_files, str(level)))\n self.level_image = self.get_image(level)\n dims = self.tiles_list(level)\n parallel(delayed(self.create_helper)(dim)\n for dim in dims)\n\n # Create descriptor\n self.descriptor.save(destination)\n\n def create_helper(self, dim):\n \"\"\"helper function to create tiles at the given level, used when multithreading is applied to cropping\"\"\"\n if (DEB):\n print(\"Pyramid col x row: %d %d\" % (dim[0], dim[1]))\n bounds = self.descriptor.get_tile_bounds(self.level, dim[0], dim[1])\n tile = self.level_image.crop(bounds)\n format = self.descriptor.tile_format\n tile_path = os.path.join(self.level_dir, \"%s_%s.%s\" %\n (dim[0], dim[1], format))\n tile_file = open(tile_path, \"wb\")\n if self.descriptor.tile_format == \"jpg\":\n 
jpeg_quality = int(self.image_quality * 100)\n tile.save(tile_file, \"JPEG\", quality=jpeg_quality)\n else:\n png_compress = round((1 - self.image_quality)*10)\n tile.save(tile_file, compress_level=png_compress)\n\n\ndef _get_or_create_path(path):\n if not os.path.exists(path):\n os.makedirs(path)\n return path\n\n\ndef _get_files_path(path):\n return os.path.splitext(path)[0] + \"_files\"\n\n\ndef _clamp(val, min, max):\n if val < min:\n return min\n elif val > max:\n return max\n return val\n\n# end of section from: https://github.com/openzoom/deepzoom.py\n####\n\n\ndef parseArguments():\n # Create argument parser\n parser = argparse.ArgumentParser()\n\n # Positional mandatory arguments\n parser.add_argument(\n 'indir',\n help='Input folder with images. Mandatory!',\n type=str)\n\n # Optional arguments\n parser.add_argument(\n '-v',\n '--verbose',\n help='Verbose, more output.',\n default=False,\n action=\"store_true\")\n\n parser.add_argument(\n '-i',\n '--inv',\n help='Inverse the output image.',\n default=False,\n action=\"store_true\")\n\n parser.add_argument(\n '-f',\n '--outfile',\n help='Name of the output DZI file, default \\\"dzi\\\"',\n type=str,\n default='dzi')\n\n parser.add_argument(\n '-o',\n '--outdir',\n help='Output folder to put DZI tiling, default dzi',\n type=str,\n default='dzi')\n\n parser.add_argument(\n '-p',\n '--platedim',\n help='Plate dimensions. Provide two integers separated by a white space; default 24x16',\n nargs=2,\n type=int,\n default=(24, 16))\n\n parser.add_argument(\n '-w',\n '--welldim',\n help='Well dimensions. Provide two integers separated by a white space; default 4x4',\n nargs=2,\n type=int,\n default=(4, 4))\n\n parser.add_argument(\n '-m',\n '--imdim',\n help='Image dimensions. Provide two integers separated by a white space; default 1104x1104',\n nargs=2,\n type=int,\n default=(1104, 1104))\n\n parser.add_argument(\n '-I',\n '--imint',\n help='Image intensities for rescaling. 
Provide two integers separated by a white space; default (250, 3000)',\n nargs=2,\n type=int,\n default=(250, 3000))\n\n parser.add_argument(\n '-c',\n '--imch',\n help='Channel of the image to process, default 0',\n type=int,\n default=0)\n\n parser.add_argument(\n '-x',\n '--imext',\n help='File extension of the image to process, default TIFF',\n type=str,\n default='TIFF')\n\n parser.add_argument(\n '-r',\n '--cores',\n help='Number of cores for multiprocessing, default 4',\n type=int,\n default=4)\n\n parser.add_argument(\n '-t',\n '--tilesz',\n help='Size of DeepZoom tiles, default 254',\n type=int,\n default=254)\n\n parser.add_argument(\n '-q',\n '--imquality',\n help='Image quality (0.1 - 1) for JPG or compression level for PNG, default 0.8.',\n type=float,\n default=0.8)\n\n # Parse arguments\n args = parser.parse_args()\n args.platedim = tuple(args.platedim)\n args.welldim = tuple(args.welldim)\n args.imdim = tuple(args.imdim)\n\n return args\n\n\ndef processWell(inRow, inCol):\n # create canvas for the montage\n locImWell = Image.new(imMode, (imWellWidth, imWellHeight), bgEmptyWell)\n\n locIrow = inRow\n locIcol = inCol\n\n # Read images of FOVs of a single well\n # Assumption: image files are named according to a convetion xxxx_A01f00d1\n # A01 - well\n # f00 - fov\n # d1 - channel\n\n for locIfov in wellFOVs:\n\n # locImPath = \"%s%02df%02dd%d.%s\" % (imDir + '/' + imCore + locIrow, locIcol, locIfov, imCh, imExt)\n locImPath = \"%s%02df%02dd%d.%s\" % (\n imDir + '/' + locIrow, locIcol, locIfov, imCh, imExt)\n if(DEB):\n print(\"\\nChecking:\", locImPath)\n\n # Handle errors if the image file is inaccessible/corrupt\n flagFileExists = os.path.isfile(\n locImPath) and os.access(locImPath, os.R_OK)\n flagFileOK = True\n\n try:\n locImFOV = imageio.imread(locImPath)\n except (IOError, SyntaxError, IndexError, ValueError) as e:\n print('Corrupted file:', locImPath)\n flagFileOK = False\n\n if flagFileExists and flagFileOK:\n if(DEB):\n print(\"File exists 
and is readable\")\n\n # Image stats\n if (DEB):\n locImMean, locImSD, locImMin, locImMax = locImFOV.mean(\n ), locImFOV.std(), locImFOV.min(), locImFOV.max()\n print(\"Raw mean=%.2f\\tsd=%.2f\\tmin=%d\\tmax=%d\" %\n (locImMean, locImSD, locImMin, locImMax))\n\n # Clip intensities;\n # has to be done before rescaling, to avoid overflow of uint16\n locImFOVclip = np.clip(locImFOV, imIntMin, imIntMax)\n\n # Rescale to max range of the input bit depth (working with integers!)\n locImFOVresc = np.round((locImFOVclip - imIntMin) * imRescFac)\n locImFOVresc = locImFOVresc.astype('uint16')\n\n # convert to 8-bit\n locIm8 = (locImFOVresc >> 8).astype('uint8')\n\n # Invert the final image if imInv = True\n if(imInv):\n locIm8 = (~locIm8)\n\n if (DEB):\n locImMean, locImSD, locImMin, locImMax = locIm8.mean(\n ), locIm8.std(), locIm8.min(), locIm8.max()\n print(\"New mean=%.2f\\tsd=%.2f\\tmin=%d\\tmax=%d\" %\n (locImMean, locImSD, locImMin, locImMax))\n\n locIm8fin = Image.fromarray(locIm8, imMode)\n\n else:\n if(DEB):\n print(\"Either the file is missing or not readable; creating blank\")\n\n # create an empty file\n locIm8fin = Image.new(imMode, (imHeight, imWidth), bgEmptyFOV)\n locImDraw = ImageDraw.Draw(locIm8fin)\n\n locMyLabel = \"f%02d missing\" % locIfov\n\n locImDraw.text((labelFOVposX, labelFOVposY),\n locMyLabel, labelFOVcol, font=myFontFOV)\n\n # Add image to montage canvas\n\n locWellCol = locIfov % wellWidth\n locWellRow = locIfov // wellHeight\n\n locWellPosW = locWellCol * (imWidth + paddingFOV)\n locWellPosE = locWellPosW + imWidth\n locWellPosN = locWellRow * (imHeight + paddingFOV)\n locWellPosS = locWellPosN + imHeight\n\n # bounding box for inserting the image into the canvas\n locBbox = (locWellPosW, locWellPosN, locWellPosE, locWellPosS)\n\n if (DEB):\n print('Bounding box for inserting FOV image into Well canvas:')\n print(locBbox)\n\n locImWell.paste(locIm8fin, locBbox)\n\n # Add well label to the montage\n locImDrawWell = ImageDraw.Draw(locImWell)\n 
locMyLabelWell = \"%s%02d\" % (locIrow, locIcol)\n locImDrawWell.text((labelWellPosX, labelWellPosY), locMyLabelWell,\n labelWellCol, font=myFontWell, align='left')\n\n return(locImWell)\n\n\nif __name__ == \"__main__\":\n args = parseArguments()\n # Global constants\n if args.verbose:\n DEB = 1\n else:\n DEB = 0\n\n # Dimensions of a plate\n plateWidth, plateHeight = args.platedim\n\n # FOVs in a well\n wellWidth, wellHeight = args.welldim\n\n # dimensions of the blank image\n imWidth, imHeight = args.imdim\n\n # channel: 0-2\n imCh = args.imch\n\n # extension of the image file\n imExt = args.imext\n\n # directory with image files\n imDir = args.indir\n\n # Values for clipping image intensities\n imIntMin, imIntMax = args.imint\n\n # flag for image inversion\n imInv = args.inv\n\n # number of cores for multiprocessing\n cores = args.cores\n\n # Raw print arguments\n if (DEB):\n print(\"You are running the script with arguments: \")\n for a in args.__dict__:\n print(str(a) + ': ' + str(args.__dict__[a]))\n\n print('')\n\n paddingFOV = 5 # pixels; padding between FOV in a well\n paddingWell = 30 # pixels; padding between wells in a plate\n\n # Parameters of the input image\n imDepthIn = 2**16-1\n\n # Rescaling factor based on image depth and upper clipping intensity\n imRescFac = imDepthIn / (imIntMax - imIntMin)\n\n # Parameters of the output image\n imDepthOut = 2**8-1\n # 8-bit pixels, black and white (https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes)\n imMode = 'L'\n\n # Initialisation\n\n plateCol = range(0, plateWidth)\n plateRow = list(map(chr, range(65, 65 + plateHeight)))\n wellFOVs = range(0, wellWidth*wellHeight)\n\n imWellWidth = int(imWidth*wellWidth + paddingFOV*(wellWidth-1))\n imWellHeight = int(imHeight*wellHeight + paddingFOV*(wellHeight-1))\n\n imPlateWidth = int(imWellWidth*plateWidth + paddingWell*(plateWidth-1))\n imPlateHeight = int(imWellHeight*plateHeight + paddingWell*(plateHeight-1))\n\n labelFOVposX = 
int(round(imWidth * 0.05))\n labelFOVposY = int(round(imHeight * 0.4))\n\n labelWellPosX = round(imWellWidth * 0.46)\n labelWellPosY = 20\n\n if (imInv):\n labelFOVcol = 10\n labelWellCol = 10\n bgEmptyFOV = int(imDepthOut * 0.7) # color of empty FOV\n bgEmptyWell = 100 # color of empty canvas for well montage\n bgEmptyPlate = 10 # color of empty canvas for plate montage\n else:\n labelFOVcol = 10\n labelWellCol = 250\n bgEmptyFOV = int(imDepthOut * 0.7) # color of empty FOV\n bgEmptyWell = imDepthOut # color of empty canvas for well montage\n bgEmptyPlate = imDepthOut # color of empty canvas for plate montage\n\n myFontFOV = ImageFont.truetype(font='fonts/arial.ttf', size=200)\n myFontWell = ImageFont.truetype(font='fonts/arial.ttf', size=300)\n\n # Work\n\n if (DEB):\n print(\"Making montage of individual FOVs\\n\")\n\n # create canvas for montage of the entire plate\n imPlate = Image.new(imMode, (imPlateWidth, imPlateHeight), bgEmptyPlate)\n\n for iRow in range(0, plateHeight):\n for iCol in plateCol:\n # Add image to montage canvas\n\n platePosW = iCol * (imWellWidth + paddingWell)\n platePosE = platePosW + imWellWidth\n platePosN = iRow * (imWellHeight + paddingWell)\n platePosS = platePosN + imWellHeight\n bbox = (platePosW, platePosN, platePosE, platePosS)\n\n if (DEB):\n print('\\nBounding box for inserting Well image into Plate canvas:')\n print(bbox)\n\n imWell = processWell(plateRow[iRow], iCol+1)\n imPlate.paste(imWell, bbox)\n\n imPathDir = '%s/%s.dzi' % (args.outdir, args.outfile)\n if (DEB):\n print(\"\\nMaking DeepZoom tiling in:\\n\" + imPathDir)\n\n creator = ImageCreator(\n tile_size=args.tilesz,\n tile_format='png',\n image_quality=args.imquality,\n resize_filter=None,\n )\n start = time.time()\n creator.create(imPlate, imPathDir,cores)\n print(\"Execution time of the creator function is:\", time.time()-start)\n if(DEB):\n print(\"\\nAnalysis finished!\\n\")\n"
},
{
"alpha_fraction": 0.5973954796791077,
"alphanum_fraction": 0.6132968068122864,
"avg_line_length": 32.31050109863281,
"blob_id": "4669b58654c3f34db327b8b3635c5cf42fb5c441",
"content_id": "d3892262a5c29177c11472d74de2ce457d9a8f27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 7295,
"license_type": "no_license",
"max_line_length": 157,
"num_lines": 219,
"path": "/demosite2x2/jscripts/heatmap.js",
"repo_name": "MilosDrobnjakovic/dzPlateViewer",
"src_encoding": "UTF-8",
"text": "/*\n*This module is based on D3.js and was created for the purpose of creating \n*interactive heatmaps that contain the data connected with dzi images of plates with melanoma cells.\n*Key features are: \n* creating a heatmap from a specified csv dataset\n* zooming to a well on the dzi image that corresponds to a specific field selected on the heatmap\n* displaying the actual measurment value when hovering over a field in the heatmap\n* creating a color gradient that corresponds to colors in the heatmap\n*Main source for the heatmap code:\n*https://www.d3-graph-gallery.com/graph/heatmap_style.html\n*Main source for creating the gradient:\n*http://using-d3js.com/04_05_sequential_scales.html\n */\n\n/**\n * @author Milos Drobnjakovic\n * affiliated with the University of Bern\n */\n\n\n\n// function that maps Rows and Columns to middle points of their respective wells\nfunction mapToInt(labels) {\n var num = 1;\n var mapping = {};\n for (var i = 0; i < labels.length; i++) {\n let l = labels[i]\n mapping[l] = num;\n num += 2;\n }\n return mapping\n}\n\n//function that creates a color scale for the heatmap values to be interpolated to\nfunction color(maxx, minn) {\n return d3.scaleSequential()\n .interpolator(d3.interpolateSpectral)\n .domain([maxx, minn]) //theese are flipped around because the color scale available in d3 is the inverse of what is needed\n}\n\nfunction createHeatmap(viewer, id, df, value) {\n // set the dimensions and margins of the graph\n var margin = { top: 20, right: 25, bottom: 30, left: 30 },\n width = 500;\n height = 400;\n\n\n // append the svg object to the body of the page\n var svg = d3.select(\"#\" + id)\n .append(\"svg\")\n .attr(\"width\", width + margin.left + margin.right)\n .attr(\"height\", height + margin.top + margin.bottom)\n .append(\"g\")\n .attr(\"transform\",\n \"translate(\" + margin.left + \",\" + margin.top + \")\");\n\n //Read the data\n d3.csv(df, function (data) {\n\n // Labels of row and columns \n var myGroups 
= d3.map(data, function (d) { return d.Col; }).keys();\n var myVars = d3.map(data, function (d) { return d.Row; }).keys();\n myVars.sort()\n // sort the groups numerically not lexicographically\n myGroups.sort(function (a, b) { return Number(a) - Number(b) })\n // create a mapping of rows and columns of the data to the middle of their respective well\n var mapX = mapToInt(myGroups);\n var mapY = mapToInt(myVars);\n // reverse the vars order to match the general plate order\n var myVars = myVars.reverse()\n // find the minimum dimension of the plate as well as the inverse of the maximum dimension\n window.minDim = Math.min(myVars.length, myGroups.length)\n window.maxWell = 1 / Math.max(myVars.length, myGroups.length) // inverse is taken directly as its needed as such in further calculations\n\n // convert the target values to Numerical data type\n var valz = []\n data.map(function (d) {\n valz.push(d[value])\n })\n var valz = valz.map(Number)\n\n // Build X scales and axis:\n var x = d3.scaleBand()\n .range([0, width])\n .domain(myGroups)\n .padding(0.05);\n svg.append(\"g\")\n .style(\"font-size\", 15)\n .attr(\"transform\", \"translate(0,\" + height + \")\")\n .call(d3.axisBottom(x).tickSize(0))\n .select(\".domain\").remove()\n\n // Build Y scales and axis:\n var y = d3.scaleBand()\n .range([height, 0])\n .domain(myVars)\n .padding(0.05);\n svg.append(\"g\")\n .style(\"font-size\", 15)\n .call(d3.axisLeft(y).tickSize(0))\n .select(\".domain\").remove()\n\n // Build color scale\n var myColor = color(d3.max(valz), d3.min(valz))\n\n\n // create a tooltip\n var tooltip = d3.select(\"#\" + id)\n .append(\"div\")\n .style(\"opacity\", 0)\n .attr(\"class\", \"tooltip\")\n .style(\"background-color\", \"white\")\n .style(\"border\", \"solid\")\n .style(\"border-width\", \"2px\")\n .style(\"border-radius\", \"5px\")\n .style(\"padding\", \"5px\")\n\n // Three function that change the tooltip when user hover / move / leave a cell\n var mouseover = function (d) {\n 
tooltip\n .style(\"opacity\", 1)\n d3.select(this)\n .style(\"stroke\", \"black\")\n .style(\"opacity\", 1)\n }\n var mousemove = function (d) {\n tooltip\n .html(\"The exact value of<br>this cell is: \" + d[2])\n .style(\"left\", (d3.mouse(this)[0] + 70) + \"px\")\n .style(\"top\", (d3.mouse(this)[1]) + \"px\")\n }\n var mouseleave = function (d) {\n tooltip\n .style(\"opacity\", 0)\n d3.select(this)\n .style(\"stroke\", \"none\")\n .style(\"opacity\", 0.8)\n }\n var click = function (d) {\n // reset before every zoom to default starting positions so that coordinats are properly synced\n viewer1.viewport.goHome(true)\n var cl = d[0];\n var rw = d[1];\n //select a point to zoom to and convert it from pixelCoordinates to Viewport\n var point = viewer.viewport.imageToViewportCoordinates(mapX[cl] * window.dim.x / (2 * myGroups.length), mapY[rw] * window.dim.y / (2 * myVars.length));\n // move to the specified point and zoom to it\n viewer.viewport.panTo(point, true);\n viewer.viewport.zoomBy(minDim - 0.3, true); //deducting 0.3 has shown to give good results empirically\n\n }\n\n // add the squares\n svg.selectAll()\n .data(data)\n .enter()\n .append(\"rect\")\n .attr(\"x\", function (d) { return x(d.Col) })\n .attr(\"y\", function (d) { return y(d.Row) })\n .attr(\"rx\", 4)\n .attr(\"ry\", 4)\n .attr(\"width\", x.bandwidth())\n .attr(\"height\", y.bandwidth())\n .style(\"fill\", function (d) { return myColor(d[value]) })\n .style(\"stroke-width\", 4)\n .style(\"stroke\", \"none\")\n .style(\"opacity\", 0.8)\n .on(\"mouseover\", mouseover)\n .on(\"mousemove\", mousemove)\n .on(\"mouseleave\", mouseleave)\n .on(\"click\", click)\n .datum(function (d) { return [d.Col, d.Row, d[value]] }) //binds the actual data to individual squares important for color changes\n\n\n // add the circles that are adjusted based on the slider treshold\n svg.selectAll(\"circle\")\n .data(data)\n .enter()\n .append(\"circle\")\n .attr(\"cx\", function (d) { return x(d.Col) + x.bandwidth() / 2 
}) //makes sure squares are in the center\n .attr(\"cy\", function (d) { return y(d.Row) + y.bandwidth() / 2 })\n .attr(\"r\", 0)\n .style(\"fill\", \"black\")\n .datum(function (d) { return d[value] }) // save the value of the specified data to the given circle\n })\n\n}\n\n\n//adding a color gradient to see the span of colors that the heatmap takes \n\nfunction drawScale(id, interpolator) {\n var data = Array.from(Array(100).keys());\n\n var cScale = d3.scaleSequential()\n .interpolator(interpolator)\n .domain([99, 0]); // another inverse to match the target color range\n\n var xScale = d3.scaleLinear()\n .domain([0, 99])\n .range([0, 300]);\n\n var u = d3.select(\"#\" + id)\n .selectAll(\"rect\")\n .data(data)\n .enter()\n .append(\"rect\")\n .attr(\"x\", (d) => Math.floor(xScale(d)))\n .attr(\"y\", 0)\n .attr(\"height\", 30)\n .attr(\"width\", (d) => {\n if (d == 99) {\n return 6;\n }\n return Math.floor(xScale(d + 1)) - Math.floor(xScale(d)) + 1;\n })\n .attr(\"transform\",\n \"translate(\" + 30 + \",\" + 0 + \")\")\n .attr(\"fill\", (d) => cScale(d));\n}\n"
},
{
"alpha_fraction": 0.7488299608230591,
"alphanum_fraction": 0.7800312042236328,
"avg_line_length": 70.22222137451172,
"blob_id": "0932fdb7576cbf8fec0dc1039ca430ce635e9d71",
"content_id": "6a61a53905a2b7dc4efd48eb2a3119a7d0cd99e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 641,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 9,
"path": "/demosite2x2/README.md",
"repo_name": "MilosDrobnjakovic/dzPlateViewer",
"src_encoding": "UTF-8",
"text": "# Small Demo of Deep Zoom Plate Viewer\n\nThis folder contains HTML files and JavaScript code necessary to display a demo of a web-based plate viewer. The repo does not contain `dzi` pyramid images.\n\nThe entire site including the images can be downloaded as a `zip` archive from [here](https://www.dropbox.com/s/lwycuvlqdtirvr8/demosite2x2.zip?dl=0).\n\nA `zip` archive with a demo dataset used to produce that demo can be downloaded from [here](https://www.dropbox.com/s/5cmejgy9x21434n/demodata2x2.zip?dl=0).\n\nA demo web-viewer with 2x2 wells, 4x4 FOVs per well can be accessed [here](http://macdobry.net/deepzoomdemo/demosite2x2/index.html).\n"
},
{
"alpha_fraction": 0.7514516115188599,
"alphanum_fraction": 0.7759880423545837,
"avg_line_length": 69.25,
"blob_id": "abcf3d307f9b392faf3375aca4eae9ff8f531804",
"content_id": "eca67353961077dd096c55e5e58c8c061377302c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 5339,
"license_type": "no_license",
"max_line_length": 782,
"num_lines": 76,
"path": "/README.md",
"repo_name": "MilosDrobnjakovic/dzPlateViewer",
"src_encoding": "UTF-8",
"text": "# Deep Zoom Plate Viewer\n\nAn html-based plate viewer for microscopy images using [DeepZoom](https://en.wikipedia.org/wiki/Deep_Zoom) technology.\n\n## Big demo\n\nA demo web-viewer of a **6 gigapixel** image montage created out of 12'300 1024x1024 pixel images.\n\nThe montage is an excerpt from a project between [Pertz Lab](https://www.pertzlab.net) at University of Bern and the [Department of Pharmaceutical Sciences](https://pharma.unibas.ch/en/persons/eliane-garo/) at the University of Basel. We aim to find compounds that inhibit cell proliferation, one of the [hallmarks of cancer](https://en.wikipedia.org/wiki/The_Hallmarks_of_Cancer).\n\nThe images depict melanoma cells treated with various plant-derived compounds. The cells contain 3 fluorescent [biosensors](https://en.wikipedia.org/wiki/Biosensor) which emit light in their respective wavelengths when illuminated with specific light frequency. One of the biosensors helps to locate cells and perform automatic [image segmentation](https://en.wikipedia.org/wiki/Image_segmentation). The other two biosensors sense and measure the activity of two important proteins, [ERK](https://en.wikipedia.org/wiki/Extracellular_signal-regulated_kinases) and [Akt](https://en.wikipedia.org/wiki/Protein_kinase_B), involved in cell proliferation. If a drug-candidate compound stops cell proliferation, we observe changes in biosensor fluorescence, which we can image and analyse.\n\nThe experiment is performed in a 384-well plate format, where each well contains cells subject to a different treatment. 
A camera mounted on a microscope takes the images (1024x1024 pixels) at 3 different wavelengths (3 channels) in 16 locations of a well (4x4 fields of view), and repeats the process for all 384 wells (24x16 wells).\n\nThe [demo](http://macdobry.net/deepzoomdemo/demoscreen/index.html) shows the result of imaging of one of such plates.\n\n## Small demo\n\nA demo web-viewer with only 2x2 wells, 4x4 FOVs per well can be accessed [here](http://macdobry.net/deepzoomdemo/demosite2x2/index.html).\n\nA `zip` archive with a dataset used to produce that demo can be downloaded from [here](https://www.dropbox.com/s/5cmejgy9x21434n/demodata2x2.zip?dl=0).\n\nA `zip` archive with a full demo website can be downloaded from [here](https://www.dropbox.com/s/lwycuvlqdtirvr8/demosite2x2.zip?dl=0).\n\n\n## Building blocks\n\n* [Python Deep Zoom Tools](https://github.com/openzoom/deepzoom.py) to generate an image montage and the `dzi` pyramid file/folder structure with png image tiles.\n* [OpenSeadragon](https://openseadragon.github.io), an open-source, web-based viewer for high-resolution zoomable images, implemented in pure JavaScript, for desktop and mobile.\n* [OpenSeadragon Filtering](https://github.com/usnistgov/OpenSeadragonFiltering) plugin with image filters to adjust contrast, brightness, levels in real time.\n* [Flat Design Icon](https://github.com/peterthomet/openseadragon-flat-toolbar-icons) set for the viewer.\n* [Python Imaging Library](https://en.wikipedia.org/wiki/Python_Imaging_Library) and [imageio](https://pypi.org/project/imageio/) to read/write images.\n* [OpenSeadragon CanvasOverlay](https://github.com/altert/OpenSeadragonCanvasOverlay) plugin that adds the canvas overlay ability to OSD images.\n* [D3.js](https://d3js.org/) javascript library that allows data binding to Document Object Model (DOM) and interactive data visualization.\n\n## Content\n\n* `scripts/makePlateMontageDZI.py`\\\nA Python script to convert individual `TIFF` image files into a `dzi` 
pyramid.\n* `HTML-template`\\\nHTML template files with a basic single-channel viewer and a multi-channel viewer with real-time image adjustments.\n* `demosite2x2/openseadragon`\\\nOpenSeadragon JavaScript viewer of `dzi` pyramids.\n* `demosite2x2/openseadragonfiltering`\\\nAn OpenSeadragon plugin with image filters.\n* `demosite2x2/openseadragoncanvasoverlay`\\\nAn OpenSeadragon plugin for adding a canvas overlay to OSD images.\n* `jscripts`\\\nJavaScript files used to create sliders and heatmaps from csv data that interact with OSD images.\n\n## Usage\n\nImage files must follow this naming convention:\n\n```\nA01f00d0.TIFF\nA01f00d1.TIFF\n...\nB12f13d2.TIFF\n```\n\nWhere:\n* `A01, A02, ... B12, ...` is the well name,\n* `f00, f01, ...` is the field of view,\n* `d0, d1, ...` is the channel number.\n\nThe parameters `-p` and `-w` of the `scripts/makePlateMontageDZI.py` script prescribe the geometry of the plate and the well, respectively. For example, `-p 2 2 -w 4 4` defines 2x2 wells and 4x4 FOVs per well. With this definition, the script assumes 64 images per channel. If an image is missing, the script fills the gap with an empty, appropriately labeled image.\nThe parameter `-r` of the `scripts/makePlateMontageDZI.py` script determines the number of threads used to create the deepzoom pyramid; the default is 4.\nThe script adds grid lines to the composite image: thin lines are added between FOVs, thicker lines between wells. In addition, wells are labeled with well names.\n\nTo generate `dzi` image pyramids for both channels in the `../demosite2x2` folder from data in `../demodata2x2`, execute:\n\n```\n./makePlateMontageDZI.py -v -p 2 2 -w 4 4 -c 0 -f dzi_c0 -o ../demosite2x2 ../demodata2x2\n./makePlateMontageDZI.py -v -p 2 2 -w 4 4 -c 1 -f dzi_c1 -o ../demosite2x2 ../demodata2x2\n```\n"
},
{
"alpha_fraction": 0.4911796450614929,
"alphanum_fraction": 0.502693235874176,
"avg_line_length": 36.599998474121094,
"blob_id": "84e9090fb314bc6993d045431894a7fde2468559",
"content_id": "4d8040a52dcb687b603d3d080a40889ba6bf66cd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 14852,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 395,
"path": "/demosite2x2/openseadragonfiltering/openseadragon-filtering.js",
"repo_name": "MilosDrobnjakovic/dzPlateViewer",
"src_encoding": "UTF-8",
"text": "/*\n * This software was developed at the National Institute of Standards and\n * Technology by employees of the Federal Government in the course of\n * their official duties. Pursuant to title 17 Section 105 of the United\n * States Code this software is not subject to copyright protection and is\n * in the public domain. This software is an experimental system. NIST assumes\n * no responsibility whatsoever for its use by other parties, and makes no\n * guarantees, expressed or implied, about its quality, reliability, or\n * any other characteristic. We would appreciate acknowledgement if the\n * software is used.\n *\n * Additional filter LEVELS by Maciej Dobrzynski.\n */\n\n/**\n *\n * @author Antoine Vandecreme <[email protected]>\n */\n(function() {\n\n 'use strict';\n\n var $ = window.OpenSeadragon;\n if (!$) {\n $ = require('openseadragon');\n if (!$) {\n throw new Error('OpenSeadragon is missing.');\n }\n }\n // Requires OpenSeadragon >=2.1\n if (!$.version || $.version.major < 2 ||\n $.version.major === 2 && $.version.minor < 1) {\n throw new Error(\n 'Filtering plugin requires OpenSeadragon version >= 2.1');\n }\n\n $.Viewer.prototype.setFilterOptions = function(options) {\n if (!this.filterPluginInstance) {\n options = options || {};\n options.viewer = this;\n this.filterPluginInstance = new $.FilterPlugin(options);\n } else {\n setOptions(this.filterPluginInstance, options);\n }\n };\n\n /**\n * @class FilterPlugin\n * @param {Object} options The options\n * @param {OpenSeadragon.Viewer} options.viewer The viewer to attach this\n * plugin to.\n * @param {String} [options.loadMode='async'] Set to sync to have the filters\n * applied synchronously. 
It will only work if the filters are all synchronous.\n * Note that depending on how complex the filters are, it may also hang the browser.\n * @param {Object[]} options.filters The filters to apply to the images.\n * @param {OpenSeadragon.TiledImage[]} options.filters[x].items The tiled images\n * on which to apply the filter.\n * @param {function|function[]} options.filters[x].processors The processing\n * function(s) to apply to the images. The parameters of this function are\n * the context to modify and a callback to call upon completion.\n */\n $.FilterPlugin = function(options) {\n options = options || {};\n if (!options.viewer) {\n throw new Error('A viewer must be specified.');\n }\n var self = this;\n this.viewer = options.viewer;\n\n this.viewer.addHandler('tile-loaded', tileLoadedHandler);\n this.viewer.addHandler('tile-drawing', tileDrawingHandler);\n\n // filterIncrement allows to determine whether a tile contains the\n // latest filters results.\n this.filterIncrement = 0;\n\n setOptions(this, options);\n\n\n function tileLoadedHandler(event) {\n var processors = getFiltersProcessors(self, event.tiledImage);\n if (processors.length === 0) {\n return;\n }\n var tile = event.tile;\n var image = event.image;\n if (image !== null && image !== undefined) {\n var canvas = window.document.createElement('canvas');\n canvas.width = image.width;\n canvas.height = image.height;\n var context = canvas.getContext('2d');\n context.drawImage(image, 0, 0);\n tile._renderedContext = context;\n var callback = event.getCompletionCallback();\n applyFilters(context, processors, callback);\n tile._filterIncrement = self.filterIncrement;\n }\n }\n\n\n function applyFilters(context, filtersProcessors, callback) {\n if (callback) {\n var currentIncrement = self.filterIncrement;\n var callbacks = [];\n for (var i = 0; i < filtersProcessors.length - 1; i++) {\n (function(i) {\n callbacks[i] = function() {\n // If the increment has changed, stop the computation\n // chain 
immediately.\n if (self.filterIncrement !== currentIncrement) {\n return;\n }\n filtersProcessors[i + 1](context, callbacks[i + 1]);\n };\n })(i);\n }\n callbacks[filtersProcessors.length - 1] = function() {\n // If the increment has changed, do not call the callback.\n // (We don't want OSD to draw an outdated tile in the canvas).\n if (self.filterIncrement !== currentIncrement) {\n return;\n }\n callback();\n };\n filtersProcessors[0](context, callbacks[0]);\n } else {\n for (var i = 0; i < filtersProcessors.length; i++) {\n filtersProcessors[i](context, function() {\n });\n }\n }\n }\n\n function tileDrawingHandler(event) {\n var tile = event.tile;\n var rendered = event.rendered;\n if (rendered._filterIncrement === self.filterIncrement) {\n return;\n }\n var processors = getFiltersProcessors(self, event.tiledImage);\n if (processors.length === 0) {\n if (rendered._originalImageData) {\n // Restore initial data.\n rendered.putImageData(rendered._originalImageData, 0, 0);\n delete rendered._originalImageData;\n }\n rendered._filterIncrement = self.filterIncrement;\n return;\n }\n\n if (rendered._originalImageData) {\n // The tile has been previously filtered (by another filter),\n // restore it first.\n rendered.putImageData(rendered._originalImageData, 0, 0);\n } else {\n rendered._originalImageData = rendered.getImageData(\n 0, 0, rendered.canvas.width, rendered.canvas.height);\n }\n\n if (tile._renderedContext) {\n if (tile._filterIncrement === self.filterIncrement) {\n var imgData = tile._renderedContext.getImageData(0, 0,\n tile._renderedContext.canvas.width,\n tile._renderedContext.canvas.height);\n rendered.putImageData(imgData, 0, 0);\n delete tile._renderedContext;\n delete tile._filterIncrement;\n rendered._filterIncrement = self.filterIncrement;\n return;\n }\n delete tile._renderedContext;\n delete tile._filterIncrement;\n }\n applyFilters(rendered, processors);\n rendered._filterIncrement = self.filterIncrement;\n }\n };\n\n function 
setOptions(instance, options) {\n options = options || {};\n var filters = options.filters;\n instance.filters = !filters ? [] :\n $.isArray(filters) ? filters : [filters];\n for (var i = 0; i < instance.filters.length; i++) {\n var filter = instance.filters[i];\n if (!filter.processors) {\n throw new Error('Filter processors must be specified.');\n }\n filter.processors = $.isArray(filter.processors) ?\n filter.processors : [filter.processors];\n }\n instance.filterIncrement++;\n\n if (options.loadMode === 'sync') {\n instance.viewer.forceRedraw();\n } else {\n var itemsToReset = [];\n for (var i = 0; i < instance.filters.length; i++) {\n var filter = instance.filters[i];\n if (!filter.items) {\n itemsToReset = getAllItems(instance.viewer.world);\n break;\n }\n if ($.isArray(filter.items)) {\n for (var j = 0; j < filter.items.length; j++) {\n addItemToReset(filter.items[j], itemsToReset);\n }\n } else {\n addItemToReset(filter.items, itemsToReset);\n }\n }\n for (var i = 0; i < itemsToReset.length; i++) {\n itemsToReset[i].reset();\n }\n }\n }\n\n function addItemToReset(item, itemsToReset) {\n if (itemsToReset.indexOf(item) >= 0) {\n throw new Error('An item can not have filters ' +\n 'assigned multiple times.');\n }\n itemsToReset.push(item);\n }\n\n function getAllItems(world) {\n var result = [];\n for (var i = 0; i < world.getItemCount(); i++) {\n result.push(world.getItemAt(i));\n }\n return result;\n }\n\n function getFiltersProcessors(instance, item) {\n if (instance.filters.length === 0) {\n return [];\n }\n\n var globalProcessors = null;\n for (var i = 0; i < instance.filters.length; i++) {\n var filter = instance.filters[i];\n if (!filter.items) {\n globalProcessors = filter.processors;\n } else if (filter.items === item ||\n $.isArray(filter.items) && filter.items.indexOf(item) >= 0) {\n return filter.processors;\n }\n }\n return globalProcessors ? 
globalProcessors : [];\n }\n\n $.Filters = {\n // LEVELS filter implemented by MD\n LEVELS: function(inMin, inMax) {\n if (inMin < 0 || inMin > 255) {\n throw new Error('Min level must be between 0 and 255.');\n }\n\n if (inMax < 0 || inMax > 255) {\n throw new Error('Max level must be between 0 and 255.');\n }\n\n if (inMin >= inMax) {\n throw new Error('Min level must be smaller than the max level.');\n }\n\n var precomputedLevels = [];\n for (var i = 0; i < 256; i++) {\n\n // clip\n var locTmp = Math.max(inMin, Math.min(i, inMax));\n\n // rescale\n precomputedLevels[i] = Math.round((locTmp - inMin) * 255 / (inMax - inMin));\n }\n\n return function(context, callback) {\n\n /*\n Returns the one-dimensional array containing the data in RGBA order,\n as integers in the range 0 to 255.\n */\n var imgData = context.getImageData(\n 0, 0, context.canvas.width, context.canvas.height);\n\n\n var pixels = imgData.data;\n\n for (var i = 0; i < pixels.length; i += 4) {\n pixels[i] = precomputedLevels[ pixels[i] ]\n pixels[i+1] = precomputedLevels[ pixels[i+1] ]\n pixels[i+2] = precomputedLevels[ pixels[i+2] ]\n }\n context.putImageData(imgData, 0, 0);\n callback();\n };\n },\n\n\n BRIGHTNESS: function(inAdj) {\n\n if (inAdj < -255 || inAdj > 255) {\n throw new Error(\n 'Brightness adjustment must be between -255 and 255.');\n }\n\n var precomputedBrightness = [];\n for (var i = 0; i < 256; i++) {\n precomputedBrightness[i] = Math.max(0, Math.min(i + inAdj, 255));\n }\n\n return function(context, callback) {\n var imgData = context.getImageData(\n 0, 0, context.canvas.width, context.canvas.height);\n var pixels = imgData.data;\n\n for (var i = 0; i < pixels.length; i += 4) {\n pixels[i] = precomputedBrightness[ pixels[i] ];\n pixels[i+1] = precomputedBrightness[ pixels[i+1] ];\n pixels[i+2] = precomputedBrightness[ pixels[i+2] ];\n }\n context.putImageData(imgData, 0, 0);\n callback();\n };\n },\n\n CONTRAST: function(inAdj) {\n if (inAdj < 0) {\n throw new Error('Contrast 
adjustment must be positive.');\n }\n\n var precomputedContrast = [];\n for (var i = 0; i < 256; i++) {\n precomputedContrast[i] = Math.round(Math.max(0, Math.min(i * inAdj, 255)));\n }\n\n console.log(precomputedContrast)\n\n return function(context, callback) {\n var imgData = context.getImageData(\n 0, 0, context.canvas.width, context.canvas.height);\n var pixels = imgData.data;\n\n for (var i = 0; i < pixels.length; i += 4) {\n pixels[i] = precomputedContrast[pixels[i]];\n pixels[i+1] = precomputedContrast[pixels[i+1]];\n pixels[i+2] = precomputedContrast[pixels[i+2]];\n }\n context.putImageData(imgData, 0, 0);\n callback();\n };\n },\n\n GAMMA: function(inAdj) {\n if (inAdj < 0) {\n throw new Error('Gamma adjustment must be positive.');\n }\n var precomputedGamma = [];\n for (var i = 0; i < 256; i++) {\n precomputedGamma[i] = Math.pow(i / 255, inAdj) * 255;\n }\n return function(context, callback) {\n var imgData = context.getImageData(\n 0, 0, context.canvas.width, context.canvas.height);\n var pixels = imgData.data;\n for (var i = 0; i < pixels.length; i += 4) {\n pixels[i] = precomputedGamma[pixels[i]];\n pixels[i + 1] = precomputedGamma[pixels[i + 1]];\n pixels[i + 2] = precomputedGamma[pixels[i + 2]];\n }\n context.putImageData(imgData, 0, 0);\n callback();\n };\n },\n\n\n INVERT: function() {\n\n return function(context, callback) {\n var imgData = context.getImageData(\n 0, 0, context.canvas.width, context.canvas.height);\n var pixels = imgData.data;\n for (var i = 0; i < pixels.length; i += 4) {\n pixels[i] = 255 - pixels[i];\n pixels[i + 1] = 255 - pixels[i + 1];\n pixels[i + 2] = 255 - pixels[i + 2];\n }\n context.putImageData(imgData, 0, 0);\n callback();\n };\n }\n\n };\n\n}());\n"
},
{
"alpha_fraction": 0.699999988079071,
"alphanum_fraction": 0.7333333492279053,
"avg_line_length": 23,
"blob_id": "c7776b2c145304cc49d7f1cc7f17a295aa62c9d3",
"content_id": "7b4f2609b8cf9a1269dd0bc5dfdad79c753f7621",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 120,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 5,
"path": "/demosite2x2/runsite.sh",
"repo_name": "MilosDrobnjakovic/dzPlateViewer",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# Run the site using a command-line server\n# https://www.npmjs.com/package/http-server\nhttp-server -p 1313\n"
}
] | 6 |
huangqiang76/pyrsync
|
https://github.com/huangqiang76/pyrsync
|
fc74b57f08ed9b465bde999178bb5696ca973a73
|
4c23bab69295bb38e43fba41921cbfca7d657eee
|
ce6a61eeeb1be09ee338669eb20c0e4608279323
|
refs/heads/master
| 2020-04-30T23:11:29.190687 | 2019-11-29T09:32:06 | 2019-11-29T09:32:06 | 176,566,704 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5327705144882202,
"alphanum_fraction": 0.5409908890724182,
"avg_line_length": 30.47552490234375,
"blob_id": "95fd4bcfb19988170b732dafcddcba61b14e4b09",
"content_id": "943616c81b99a64603f0259aec7371b12022aa6a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9114,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 286,
"path": "/websiteRsyncData.py",
"repo_name": "huangqiang76/pyrsync",
"src_encoding": "UTF-8",
"text": "#!/usr/local/bin/python3\n# 10.7.35.70\n\nimport os, sys\nimport io\nfrom datetime import datetime, date, timedelta\nfrom re import compile, sub\nfrom functools import wraps\nfrom log import get_logger\nfrom subprocess import PIPE, Popen\nfrom os import _wrap_close, path\nimport traceback\n\n\n# 日志装饰器\ndef decoratore(func):\n @wraps(func)\n def log(*args, **kwargs):\n try:\n print(\"Current running function() name\", func.__name__)\n return func(*args, **kwargs)\n except Exception as e:\n get_logger().error(f\"{func.__name__} is error,here are details:{traceback.format_exc()}\")\n\n return log\n\n\n# 配置信息\nclass Configuration:\n \"\"\"\n 同步的目录信息\n \"\"\"\n run = {'switch': 'OPEN'}\n source = {'wcmdata_pub': '/source/trsdata/WCMData/pub/',\n 'wcmdata_pubggpp': '/source/trsdata/WCMData/pubggpp/',\n 'cms_data_index': '/source/publiccmsdata/indexes/',\n 'cms_data_task': '/source/publiccmsdata/task/',\n 'cms_data_template': '/source/publiccmsdata/template/',\n 'cms_data_web': '/source/publiccmsdata/web/',\n 'emap_ggppnews': '/source/irun/ggppnews/',\n 'ggppweb_crmedia': '/source/video/public/'\n }\n\n time = {'wcmdata_pub': 10,\n 'wcmdata_pubggpp': 10,\n 'cms_data_index': 10,\n 'cms_data_task': 10,\n 'cms_data_template': 10,\n 'cms_data_web': 10,\n 'emap_ggppnews': 10,\n 'ggppweb_crmedia': 10\n }\n\n\nclass String:\n # Class variable that specifies expected fields\n _fields = []\n\n def __init__(self, *args, **kwargs):\n if len(args) != len(self._fields):\n raise TypeError('Excepted {} arguments'.format(len(self._fields)))\n\n # Set the arguments\n for name, value in zip(self._fields, args):\n setattr(self, name, value)\n\n # Set the additional arguments (if any)\n extra_args = kwargs.keys() - self._fields\n for name in extra_args:\n setattr(self, name, kwargs.pop(name))\n\n if kwargs:\n raise TypeError('Invalid argument(s): {}'.format(','.join(kwargs)))\n\n\n# 基础路径类\nclass BasePath(String):\n _fields = ['name', 'source_str']\n\n\n# 基础时间类\nclass 
BaseTime(String):\n _fields = ['name', 'delta_minutes']\n\n\n# 基础脚本类\nclass BaseScript(String):\n _fields = ['name', 'script']\n\n\n# 基础进程类\nclass BaseProcessManager(String):\n _fields = ['name', 'binPath']\n\n\n# 基础源目录\nclass BaseSourceDir(String):\n _fields = ['name', 'dirPath']\n\n\n# 基础执行shell\nclass BaseExecuteShell(String):\n _fields = ['name', 'binPath']\n\n\n# 扩展路径类\nclass MyPath(BasePath):\n \"\"\"\n\n \"\"\"\n\n def add_prefix(self, prefix_str):\n return ''.join([prefix_str, self.source_str])\n\n def add_suffer(self, suffer_str):\n return ''.join([self.source_str, suffer_str])\n\n @decoratore\n def delete_suffer(self, suffer_str):\n return self.source_str.rstrip(suffer_str)\n\n def change_string(self, pattern=None, replace_str=None):\n pattern1 = compile(pattern, flags=0)\n print(pattern1)\n return sub(pattern1, replace_str, self.source_str)\n\n @staticmethod\n def dir_is_exist(path):\n if os.path.exists(path) is False:\n print('This path %s error' % path)\n\n @staticmethod\n def change_my_string(pattern=None, replace_str=None, source_str=None):\n pattern = compile(pattern, flags=0)\n return sub(pattern, replace_str, source_str)\n\n def _is_dir(self):\n if os.path.isdir(self) is True:\n return self\n\n def get_dir_file(self, path):\n file_list = []\n dir_list = []\n\n if os.path.isdir(path) is False:\n print(\"this dir is not exist %s\" % path)\n else:\n for item in os.listdir(self, path):\n sub_path = os.path.join(self, path, item)\n if os.path.isfile(sub_path) is True:\n file_list.append(sub_path)\n if os.path.isdir(sub_path) is True:\n dir_list.append(sub_path)\n if len(file_list) > 0:\n return file_list\n if len(dir_list) > 0:\n return dir_list\n\n\nclass RsyncScripts(object):\n \"\"\"\n make rsync script by srcPath,dstPath, appName, excludefile\n \"\"\"\n\n def __init__(self, srcPath=None, dstPath=None):\n self.srcPath = srcPath\n self.dstPath = dstPath\n\n def makeScripts(self, type, appName=None, excludefile=None):\n cmd_str = ''\n if 
os.path.isdir(self.srcPath):\n if len([excludefile]) > 0:\n resultDir = excludefile.strip(excludefile).rstrip('/')\n exclufile = excludefile.rsplit('/', 1)[1]\n if self.srcPath == resultDir:\n cmdstring = ['/usr/bin/rsync -avPt --delete --block-size=*.* ', self.srcPath, '/ ', self.dstPath,\n ' --exclude=', exclufile,\n ' --log-file=logs/', type, '/', appName,\n '.dir.$(date +%Y%m%d-%H%M%S).log >/dev/null 2>&1 &']\n cmd_str = ''.join(cmdstring)\n else:\n cmdstring = ['/usr/bin/rsync -avPt --delete --block-size=*.* ', self.srcPath, '/ ', self.dstPath,\n ' --log-file=logs/', type, '/', appName,\n '.dir.$(date +%Y%m%d-%H%M%S).log >/dev/null 2>&1 &']\n cmd_str = ''.join(cmdstring)\n elif os.path.isfile(self.srcPath):\n cmd_string = ['/usr/bin/rsync -avPt --delete --block-size=*.* ', self.srcPath, ' ', self.dstPath,\n ' --log-file=logs/', type, '/', appName,\n '.file.$(date +%Y%m%d-%H%M%S).log >/dev/null 2>&1 &']\n cmd_str = ''.join(cmd_string)\n if cmd_str is not '':\n return cmd_str\n\n\ndef hostPerformace():\n import psutil\n from time import sleep as sp\n cpu_per = psutil.cpu_percent(interval=1)\n proc_num = [p for p in psutil.pids()]\n if int(cpu_per) >= 50:\n sp(300)\n elif proc_num > 800:\n sp(10)\n\n\nclass HandlerRsync(object):\n def __init__(self, cmdString=None, appName=None):\n if cmdString is None:\n raise ValueError('cmd string is None')\n else:\n self.cmdString = cmdString\n self.appName = appName\n\n def run_rsync(self, cmd=None, appName=None):\n from subprocess import Popen, PIPE\n hostPerformace() # 执行性能检测\n popen = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)\n output, errors = popen.communicate()\n if errors:\n logging.error(errors)\n logging.info('Starting %s rsync data ....' 
% appName)\n\n\nclass handlerLog(object):\n def __init__(self, logPath, logType):\n self.logPath = logPath\n self.logType = logType\n\n @staticmethod\n def assert_log_time(logPath, day):\n time = os.path.getmtime(logPath)\n now = datetime.now()\n date = datetime.fromtimestamp(time)\n retdate = date.strftime('%Y-%m-%d')\n if (now - date).days > day:\n return 0\n else:\n return 1\n\n @staticmethod\n def tar_log_history(logPath, logType):\n list_file = []\n strPath = ''.join([logPath, logType])\n for item in os.listdir(strPath):\n sub_path = os.path.join(strPath, item)\n if os.path.isfile(sub_path) is True:\n ret = handlerLog.assert_log_time(sub_path, 1)\n if ret == 0:\n list_file.append(sub_path)\n if len(list_file) > 0:\n now = datetime.now()\n vdate = (now - timedelta(days=2)).strftime(\"%Y-%m-%d\")\n import zipfile\n azip = zipfile.ZipFile(\"logs/backup/\" + logType + \".\" + vdate + \".zip\", 'w')\n for f in range(len(list_file)):\n azip.write(list_file[f], compress_type=zipfile.ZIP_DEFLATED)\n os.remove(list_file[f])\n azip.close()\n\n @staticmethod\n def hander_zip_log(logType):\n tarPath = 'logs/backup'\n for type in ['realtime', 'full', 'normal']:\n handlerLog.tar_log_history('logs/', logType)\n for tar in os.listdir(tarPath):\n tar_path = os.path.join(tarPath, tar)\n ret = handlerLog.assert_log_time(tar_path, 6)\n if ret == 0:\n os.remove(tar_path)\n\n\nif __name__ == '__main__':\n for sk, sv in Configuration.source.items():\n print('--' * 20)\n # print(sk, sv)\n dv1 = MyPath(sk, sv)\n # print(dv1.__dict__)\n result = dv1.change_string(pattern='/source', replace_str='')\n dv2 = MyPath(sk, result)\n # print(dv2.__dict__)\n result2 = dv2.delete_suffer('/')\n # print(result2)\n MyPath.dir_is_exist(result2)\n result3 = dv2.get_dir_file(sv)\n # my3 = MyDirIter()\n # print(my3.get_dir_file(sv))\n"
},
{
"alpha_fraction": 0.5905472636222839,
"alphanum_fraction": 0.6002487540245056,
"avg_line_length": 28.77777862548828,
"blob_id": "9b1d440df33b3ae82bb9a4b6557b13f1b58b385c",
"content_id": "c51442b0ace073d0b61eb6e83fea387281277333",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4076,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 135,
"path": "/ylRsyncNasTest.py",
"repo_name": "huangqiang76/pyrsync",
"src_encoding": "UTF-8",
"text": "import os\nimport io\nfrom datetime import datetime, date, timedelta\nfrom re import compile, sub\nfrom subprocess import PIPE, Popen\nfrom os import _wrap_close\n\n\nclass String:\n # Class variable that specifies expected fields\n _fields = []\n\n def __init__(self, *args, **kwargs):\n if len(args) != len(self._fields):\n raise TypeError('Excepted {} arguments'.format(len(self._fields)))\n\n # Set the arguments\n for name, value in zip(self._fields, args):\n setattr(self, name, value)\n\n # Set the additional arguments (if any)\n extra_args = kwargs.keys() - self._fields\n for name in extra_args:\n setattr(self, name, kwargs.pop(name))\n\n if kwargs:\n raise TypeError('Invalid argument(s): {}'.format(','.join(kwargs)))\n\n\nclass BasePath(String):\n _fields = ['name', 'source_str', 'pattern_str']\n\n\nclass BaseTime(String):\n _fields = ['name', 'delta_minutes']\n\n\nclass BaseScript(String):\n _fields = ['name', 'script']\n\n\nclass BaseProcessManager(String):\n _fields = ['name', 'processName', 'binPath']\n\n\nclass BaseSourceDir(String):\n _fields = ['name', 'dirPath']\n\n\nclass BaseExecuteShell(String):\n _fields = ['name', 'ShellName', 'binPath']\n\n\nclass MyPath(BasePath):\n\n def add(self, suffer_str):\n return ''.join([self.source_str, suffer_str])\n\n def change(self, replace_str):\n pattern = compile(self.pattern_str, flags=0)\n return sub(pattern, replace_str, self.source_str)\n\n\nclass MySourceDir(BaseSourceDir):\n\n def getDirTime(self):\n time = os.path.getmtime(self.dirPath)\n result = datetime.fromtimestamp(time)\n return result\n\n def getDeltaTimeDirs(self, delta_name, delta_minutes=5 * 60):\n dir_list = []\n dirtime = self.getDirTime()\n #print('Dirs time: ', dirtime)\n mytimeobj = MyTime(delta_name, delta_minutes)\n mytime = mytimeobj.getDeltaTime()\n #print('Dirs Deltatime: ', mytime)\n #print('大于指定偏移时间的目录 : ', dirtime.timestamp() - mytime.timestamp())\n if dirtime.timestamp() - mytime.timestamp() > 0:\n 
dir_list.append(self.dirPath)\n        return dir_list\n\n    def __iter__(self):\n        # iterate over the directory tree rather than the characters of the path string\n        self._walker = os.walk(self.dirPath)\n        return self\n\n    def __next__(self):\n        if os.path.isdir(self.dirPath) is True:\n            return next(self._walker)\n        raise StopIteration\n\n\nclass MyTime(BaseTime):\n\n    def getDeltaTime(self):\n        if isinstance(self.delta_minutes, int):\n            return datetime.now() + timedelta(minutes=-self.delta_minutes)\n        else:\n            raise ValueError(\"invalid input delta_minutes value {}\".format(self.delta_minutes))\n\n\nclass MyExecuteShell(BaseExecuteShell):\n\n    def popen(self, cmd, mode=\"r\", buffering=-1):\n        if not isinstance(cmd, str):\n            raise TypeError(\"invalid cmd type (%s, expected string)\" % type(cmd))\n        if mode not in (\"r\", \"w\"):\n            raise ValueError(\"invalid mode %r\" % mode)\n        if buffering == 0 or buffering is None:\n            raise ValueError(\"popen() does not support unbuffered streams\")\n\n        if mode == \"r\":\n            proc = Popen(cmd, shell=True, stdout=PIPE, bufsize=buffering)\n            return _wrap_close(io.TextIOWrapper(proc.stdout), proc)\n        else:\n            proc = Popen(cmd, shell=True, stdin=PIPE, bufsize=buffering)\n            return _wrap_close(io.TextIOWrapper(proc.stdin), proc)\n\n\nif __name__ == '__main__':\n    print('#############\\n')\n    s1 = BasePath('dir1', 'D:\\\\work\\\\03 灾备工作\\\\08 各利润中心', 'ork')\n    print(s1.__dict__)\n    m = MyTime('nowTime', 12 * 60 * 24)\n    print(m.__dict__)\n    print(m.getDeltaTime())\n    cmd = BaseScript('cmd_yl', 'exec ls -l ', expend_scripts='exclude --adfa ')\n    print(cmd.__dict__)\n    p2 = MyPath('n1', 'D:\\\\work\\\\03 灾备工作', 'ork')\n    p3 = p2.change('OOP')\n    print('p3 :', p3)\n    p4 = MyPath('n2', p2.change('OOP'), '')\n    print('p4: ', p4.add('_dr'))\n    p5 = MySourceDir('n5', 'D:\\\\work\\\\03 灾备工作')\n    print(p5.getDirTime())\n    print(p5.getDeltaTimeDirs('nowTime'))\n"
},
{
"alpha_fraction": 0.6288032531738281,
"alphanum_fraction": 0.6815415620803833,
"avg_line_length": 22.4761905670166,
"blob_id": "bad458e015a76bd17c2886ef931f39fec2579f22",
"content_id": "0b112fa374b2ec88f84fc24c1dd14b79b975ba87",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 493,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 21,
"path": "/change_head.sh",
"repo_name": "huangqiang76/pyrsync",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\ncur_path=$(pwd)\nfunction modify_venv_file(){\nsed -i \"1i#!${cur_path}\\/venv36\\/bin\\/python3.6\" venv36/bin/easy_install*\nsed -i \"2,2d\" venv36/bin/easy_install*\nsed -i \"1i#!${cur_path}\\/venv36\\/bin\\/python3.6\" venv36/bin/pip*\nsed -i \"2,2d\" venv36/bin/pip*\n}\n\nfunction run_python_scripts(){\nsource venv36/bin/activate\n${cur_path}/venv36/bin/python ${cur_path}/hyRsyncNasData.py & \n}\n\nfunction main(){\necho -e \" #### starting pyrsync ####\\n\"\nmodify_venv_file\nrun_python_scripts\n}\n\nmain\n"
},
{
"alpha_fraction": 0.5392997860908508,
"alphanum_fraction": 0.5597032308578491,
"avg_line_length": 31.51004981994629,
"blob_id": "b6deb8ad369d8b52537f761f6d98accf95982ec2",
"content_id": "75420d0587c6f613ca47d49576ff6456d8737b27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12939,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 398,
"path": "/hyRsyncNasData.py",
"repo_name": "huangqiang76/pyrsync",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3.6\n\nimport datetime\nimport logging\nimport os\nimport re\nimport sys\nimport time\nimport zipfile\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom subprocess import Popen, PIPE\n\nimport psutil\nimport schedule\n\n\ndef file_time_assert(path, ntime=60):\n ftime = dict\n filetime = os.path.getmtime(path)\n filedate = datetime.fromtimestamp(filetime)\n now = datetime.now()\n vdate = int((now - filedate).days)\n vhour = int((now - filedate).seconds / 60 / ntime)\n if vdate < 1:\n if vhour < 1:\n ftime.append({'hour': 1})\n else:\n ftime.append({'hours': vhour})\n else:\n ftime.append({'days': vdate})\n return ftime\n\n\ndef file_today_assert(path):\n result = []\n for vpath in path:\n for fitem in os.listdir(vpath):\n fpath = os.path.join(vpath, fitem)\n filetime = os.path.getmtime(fpath)\n filedate = datetime.fromtimestamp(filetime).date()\n nowdate = datetime.now().date()\n if filedate == nowdate:\n result.append(fpath)\n return result\n\n\ndef change_dir_str(vList=None, str=None):\n result = []\n suffer_str = 'dr'\n pattern = re.compile(str)\n reprstr = ''.join([str, suffer_str])\n if type(vList).__name__ == 'list':\n for i in range(len(vList)):\n res = re.sub(pattern, reprstr, vList[i])\n result.append(res)\n else:\n assert isinstance(vList, list), 'input parameter must be list type'\n return result\n\n\ndef get_last_date(days=None):\n vnow = datetime.now()\n vtime = timedelta(days=-days)\n newtime = vnow + vtime\n time_struct = newtime.timetuple()\n lastYear = str(time_struct.tm_year)\n vMon = time_struct.tm_mon\n if (len(str(vMon)) == 1):\n lastMon = ''.join(['0', str(vMon)])\n else:\n lastMon = str(vMon)\n vDay = time_struct.tm_mday\n if (len(str(vDay)) == 1):\n lastDay = ''.join(['0', str(vDay)])\n else:\n lastDay = str(vDay)\n return lastYear, lastMon, lastDay\n\n\ndef get_current_date():\n now_struct = datetime.now().timetuple()\n nowYear = str(now_struct.tm_year)\n vMon = now_struct.tm_mon\n if 
(len(str(vMon)) == 1):\n nowMon = ''.join(['0', str(vMon)])\n else:\n nowMon = str(vMon)\n vDay = now_struct.tm_mday\n if (len(str(vDay)) == 1):\n nowDay = ''.join(['0', str(vDay)])\n else:\n nowDay = str(vDay)\n return nowYear, nowMon, nowDay\n\n\ndef add_dir_str(vList, days=1):\n result = []\n y1, m1, d1 = get_current_date()\n y2, m2, d2 = get_last_date(days)\n year1 = y1\n month1 = m1\n day1 = d1\n year2 = y2\n month2 = m2\n day2 = d2\n if type(vList).__name__ == 'list':\n for i in range(len(vList)):\n if vList[i] == '/nas/datadir/imageupload':\n mlist = [vList[i], year1, month1, day1]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n else:\n mlist = [vList[i], year2, month2, day2]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n elif vList[i] == '/nas/datadir/contract':\n mlist = [vList[i], year1, month1, day1]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n else:\n mlist = [vList[i], year2, month2, day2]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n else:\n mlist = [vList[i], year1, month1, day1]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n else:\n mlist = [vList[i], year2, month2, day2]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n else:\n assert isinstance(vList, list), 'input parameter must be list type'\n return result\n\n\ndef get_app_name(vlist):\n # get the last two string as appname\n appname = []\n if type(vlist).__name__ == 'list':\n for l in range(len(vlist)):\n vstr = vlist[l].split('/')\n reprstr = '_'.join(vstr[-2:])\n appname.append(reprstr)\n else:\n assert isinstance(vlist, list), 'input parameter must be list type'\n return appname\n\n\ndef get_log_mydate():\n t = datetime.now()\n mydate = t.strftime('%Y%m%d-%H%M%S')\n return str(mydate)\n\n\ndef assert_log_time(path, day):\n time = os.path.getmtime(path)\n now = datetime.now()\n date = 
datetime.fromtimestamp(time)\n retdate = date.strftime('%Y-%m-%d')\n if (now - date).days > day:\n return 0\n else:\n return 1\n\n\ndef tar_log_history(path, type):\n list_file = []\n strPath = ''.join([path, type])\n for item in os.listdir(strPath):\n sub_path = os.path.join(strPath, item)\n if os.path.isfile(sub_path) is True:\n ret = assert_log_time(sub_path, 1)\n if ret == 0:\n list_file.append(sub_path)\n if len(list_file) > 0:\n now = datetime.now()\n vdate = (now - timedelta(days=2)).strftime(\"%Y-%m-%d\")\n azip = zipfile.ZipFile(\"logs/backup/\" + type + \".\" + vdate + \".zip\", 'w')\n for f in range(len(list_file)):\n azip.write(list_file[f], compress_type=zipfile.ZIP_DEFLATED)\n os.remove(list_file[f])\n azip.close()\n\n\ndef hander_zip_log():\n tarPath = 'logs/backup'\n for type in ['realtime', 'full', 'normal']:\n tar_log_history('logs/', type)\n for tar in os.listdir(tarPath):\n tar_path = os.path.join(tarPath, tar)\n ret = assert_log_time(tar_path, 6)\n if ret == 0:\n os.remove(tar_path)\n\n\ndef create_cmd_str(srclist, dstlist, applist, type):\n list_cmd = []\n if len(srclist) > 0 and len(dstlist) > 0:\n for i in range(len(srclist)):\n if os.path.isdir(srclist[i]):\n cmdstring = ['/usr/bin/rsync -avPt --delete --block-size=*.* ', srclist[i], '/ ', dstlist[i],\n ' --log-file=logs/', type, '/', applist[i],\n '.dir.$(date +%Y%m%d-%H%M%S).log >/dev/null 2>&1 &']\n cmd_str = ''.join(cmdstring)\n elif os.path.isfile(srclist[i]):\n cmdstring = ['/usr/bin/rsync -avPt --delete --block-size=*.* ', srclist[i], ' ', dstlist[i],\n ' --log-file=logs/', type, '/', applist[i],\n '.file.$(date +%Y%m%d-%H%M%S).log >/dev/null 2>&1 &']\n cmd_str = ''.join(cmdstring)\n list_cmd.append(cmd_str)\n if len(list_cmd) > 0:\n return list_cmd\n\n\ndef do_rsync(cmd, appname=None):\n cpu_per = get_cpu_percent()\n proc_num = get_rsync_process_num()\n if int(cpu_per) >= 50:\n time.sleep(300)\n elif proc_num > 800:\n time.sleep(10)\n popen = Popen(cmd, shell=True, stdin=PIPE, 
stdout=PIPE, stderr=PIPE, close_fds=True)\n output, errors = popen.communicate()\n if errors:\n logging.error(errors)\n logging.info('Starting %s rsync data ....' % appname)\n logging.info(output)\n\n\ndef get_cpu_percent():\n return psutil.cpu_percent(interval=1)\n\n\ndef get_rsync_process_num():\n vlist = [p for p in psutil.pids()]\n return len(vlist)\n\n\ndef main_job(cmdlist, applist):\n for i in range(len(cmdlist)):\n do_rsync(cmdlist[i], applist[i])\n\n\ndef assert_dir_exist(srcdir=['/nas/datadir']):\n dstdir = ['/nasdr/datadir']\n if os.path.exists(dstdir[0]) is True:\n for i in range(len(srcdir)):\n if os.path.exists(srcdir[i]) is True:\n return 0\n else:\n logging.error('error, Source nas vol {0} mounted wrong'.format(srcdir[i]))\n else:\n logging.error('error, Dest nas vol {0} mounted wrong'.format(dstdir[0]))\n sys.exit(1)\n\n\ndef mkdir_pyrsync_env():\n current_dir = os.path.dirname(os.path.abspath(__file__))\n for vdir in ['backup', 'normal', 'realtime', 'full']:\n vpath = '/'.join([current_dir, 'logs', vdir])\n if os.path.exists(vpath) is False:\n os.makedirs(vpath)\n\n\ndef kill_rsync_process():\n vlist = [p for p in psutil.pids() if psutil.Process(p).name() == 'rsync']\n for pid in vlist:\n try:\n rsync_proc=psutil.Process(pid)\n logging.info('kill rysnc process %s' % rsync_proc)\n rsync_proc.terminate()\n rsync_proc.wait(timeout=1)\n except psutil.NoSuchProcess,pid:\n pass\n plist = [p for p in psutil.pids() if psutil.Process(p).name() == 'python2.7']\n for vid in plist:\n try:\n python_proc=psutil.Process(vid)\n logging.info('kill python deamon process %s' % python_proc)\n python_proc.terminate()\n python_proc.wait(timeout=1)\n except psutil.NoSuchProcess,pid:\n pass\n\n\ndef run_cmd_list1():\n vresult1 = file_today_assert(srcDir1)\n if len(vresult1) > 0:\n dstlist1 = change_dir_str(vresult1, 'nas')\n applist1 = get_app_name(vresult1)\n cmdlist1 = create_cmd_str(vresult1, dstlist1, applist1, 'realtime')\n if len(cmdlist1) > 0 and len(applist1) > 
0:\n if assert_dir_exist(vresult1) == 0:\n main_job(cmdlist1, applist1)\n\n\ndef run_cmd_list6():\n vresult6 = file_today_assert(srcDir6)\n if len(vresult6) > 0:\n dstlist6 = change_dir_str(vresult6, 'nas')\n applist6 = get_app_name(vresult6)\n cmdlist6 = create_cmd_str(vresult6, dstlist6, applist6, 'realtime')\n if len(cmdlist6) > 0 and len(applist6) > 0:\n if assert_dir_exist(vresult6) == 0:\n main_job(cmdlist6, applist6)\n\n\ndef run_cmd_list4():\n vresult4 = file_today_assert(srcDir6)\n if len(vresult4) > 0:\n dstlist4 = change_dir_str(vresult4, 'nas')\n applist4 = get_app_name(vresult4)\n cmdlist4 = create_cmd_str(vresult4, dstlist4, applist4, 'realtime')\n if len(cmdlist4) > 0 and len(applist4) > 0:\n if assert_dir_exist(vresult4) == 0:\n main_job(cmdlist4, applist4)\n\ndef run_cmd_list2():\n vresult2 = add_dir_str(srcDir2)\n if len(vresult2) > 0:\n dstlist2 = change_dir_str(vresult2, 'nas')\n applist2 = get_app_name(vresult2)\n cmdlist2 = create_cmd_str(vresult2, dstlist2, applist2, 'realtime')\n if len(cmdlist2) > 0 and len(applist2) > 0:\n if assert_dir_exist(vresult2) == 0:\n main_job(cmdlist2, applist2)\n\ndef run_cmd_list3():\n vresult3 = add_dir_str(srcDir3)\n if len(vresult3) > 0:\n dstlist3 = change_dir_str(vresult3, 'nas')\n applist3 = get_app_name(vresult3)\n cmdlist3 = create_cmd_str(vresult3, dstlist3, applist3, 'realtime')\n if len(cmdlist3) > 0 and len(applist3) > 0:\n if assert_dir_exist(vresult3) == 0:\n main_job(cmdlist3, applist3)\n\n\ndef run_cmd_list5():\n dstlist5 = change_dir_str(srcDir5, 'nas')\n applist5 = ['rsyncdata']\n cmdlist5 = create_cmd_str(srcDir5, dstlist5, applist5, 'full')\n if len(cmdlist5) > 0 and len(applist5) > 0:\n if assert_dir_exist(srcDir5) == 0:\n main_job(cmdlist5, applist5)\n\n\nif __name__ == '__main__':\n assert_dir_exist()\n mkdir_pyrsync_env()\n mydate = get_log_mydate()\n LOG_FORMAT = \"%(asctime)s %(levelname)s %(filename)s %(funcName)s %(message)s\"\n logging.basicConfig(filename='logs/rsync_data.' 
+ mydate + '.log', level=logging.DEBUG, format=LOG_FORMAT)\n logger = logging.getLogger(__name__)\n\n # get the same day's dir\n srcDir1 = ['/nas/datadir/attach']\n srcDir6 = ['/nas/datadir/invoice']\n srcDir4 = ['/nas/datadir/exportlogs']\n\n # get ../yyyy/mm/dd dirs ,2019/03/22 2019/03/03\n srcDir2 = ['/nas/datadir/imageupload']\n srcDir3 = ['/nas/datadir/contract']\n\n # ingore\n\n\n # full all files and all dirs\n srcDir5 = ['/nas/datadir']\n # run_cmd_list3()\n\n schedule.every(2).minutes.do(run_cmd_list1)\n schedule.every(3).minutes.do(run_cmd_list6)\n schedule.every(4).minutes.do(run_cmd_list2)\n schedule.every(5).minutes.do(run_cmd_list3)\n schedule.every(3).minutes.do(run_cmd_list4)\n schedule.every().day.at(\"00:01\").do(run_cmd_list5)\n schedule.every().day.at(\"12:00\").do(run_cmd_list5)\n # gather log file and handler zip package\n schedule.every().day.at(\"00:04\").do(hander_zip_log)\n# schedule.every().day.at(\"19:23\").do(kill_rsync_process)\n keep_going = True\n while keep_going:\n try:\n schedule.run_pending()\n time.sleep(5)\n except Exception as e:\n logger.error(str(e))\n logger.info(\"contact with manager,hyRsyncNasData shut down.\")\n keep_going = False\n"
},
{
"alpha_fraction": 0.642259418964386,
"alphanum_fraction": 0.7071129679679871,
"avg_line_length": 28.875,
"blob_id": "46b2c24812b6527dbe31801768c1c1cfa56ce250",
"content_id": "b603c352b076d21dc5033bd3bb6c39a3597b0d23",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 810,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 16,
"path": "/README.md",
"repo_name": "huangqiang76/pyrsync",
"src_encoding": "UTF-8",
"text": "pyrsync\n\n''01 这个目录的子目录是以数字,可否通过脚本识别出当天创建的目录(唯一),每10分钟同步一次'''\n# srcDir1 = ['/nas/datadir/attach', '/nas/datadir/invoice']\n\n'''02 这个目录的子目录是年(如2019),年的子目录是月(如03),月的子目录是日(如06),历史的目录不会有变化,只需要识别出当天的目录(日,如06),每10分钟同步一次。'''\n# srcDir2 = ['/nas/datadir/imageupload']\n\n'''03 这个目录暂时没有使用,建议每10分钟同步一次,待未来上线后,可以参考/nas/datadir/imageupload目录的同步方式'''\n# srcDir3 = ['/nas/datadir/contract']\n\n'''04 这个目录不需要同步'''\n#srcDir4 = ['/nas/datadir/exportlogs']\n\n'''05 nas卷全量同步'''\n# srcDir5 = ['/nas/datadir']\n"
},
{
"alpha_fraction": 0.5593904256820679,
"alphanum_fraction": 0.5755266547203064,
"avg_line_length": 29.98611068725586,
"blob_id": "ce4518d083572fbc73f09e8a2bdb8ba9bbb10fea",
"content_id": "384188b5cb75a763e960370bc3fa90b9edc6ed6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11155,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 360,
"path": "/dlRsyncNasData.py",
"repo_name": "huangqiang76/pyrsync",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2.7\n# dlRsyncNasData.py\n\nimport logging\nimport os\nimport re\nimport sys\nimport time\nimport zipfile\nfrom datetime import datetime\nfrom datetime import timedelta\nfrom subprocess import Popen, PIPE\n\nimport psutil\nimport schedule\n\nreload(sys)\nsys.setdefaultencoding(\"utf-8\")\n\n\n\n'''\nget the log's time\n'''\ndef get_log_mydate():\n t = datetime.now()\n mydate = t.strftime('%Y%m%d-%H%M%S')\n return str(mydate)\n\n'''\nassert file or dir time\n'''\n\n\ndef file_time_assert(path, ntime=60):\n ftime = dict\n filetime = os.path.getmtime(path)\n filedate = datetime.fromtimestamp(filetime)\n now = datetime.now()\n vdate = int((now - filedate).days)\n vhour = int((now - filedate).seconds / 60 / ntime)\n if vdate < 1:\n if vhour < 1:\n ftime.append({'hour': 1})\n else:\n ftime.append({'hours': vhour})\n else:\n ftime.append({'days': vdate})\n return ftime\n\n\ndef file_today_assert(path):\n file_path = []\n for vpath in path:\n for fitem in os.listdir(vpath):\n fpath = os.path.join(vpath, fitem)\n filetime = os.path.getmtime(fpath)\n filedate = datetime.fromtimestamp(filetime).date()\n nowdate = datetime.now().date()\n if filedate == nowdate:\n file_path.append(fpath)\n return file_path\n\n\ndef change_dir_str(vList=None, str=None):\n result = []\n suffer_str = '_dr'\n pattern = re.compile(str)\n repr_str = ''.join([str, suffer_str])\n if type(vList).__name__ == 'list':\n for i in range(len(vList)):\n res = re.sub(pattern, repr_str, vList[i])\n result.append(res)\n else:\n assert isinstance(vList, list), 'input parameter must be list type'\n return result\n\n\ndef get_last_date(days=None):\n vnow = datetime.now()\n vtime = timedelta(days=-days)\n newtime = vnow + vtime\n time_struct = newtime.timetuple()\n lastYear = str(time_struct.tm_year)\n vMon = time_struct.tm_mon\n if (len(str(vMon)) == 1):\n lastMon = ''.join(['0', str(vMon)])\n else:\n lastMon = str(vMon)\n vDay = time_struct.tm_mday\n if (len(str(vDay)) == 1):\n lastDay 
= ''.join(['0', str(vDay)])\n else:\n lastDay = str(vDay)\n return lastYear, lastMon, lastDay\n\n\ndef get_current_date():\n now_struct = datetime.now().timetuple()\n nowYear = str(now_struct.tm_year)\n vMon = now_struct.tm_mon\n if (len(str(vMon)) == 1):\n nowMon = ''.join(['0', str(vMon)])\n else:\n nowMon = str(vMon)\n vDay = now_struct.tm_mday\n if (len(str(vDay)) == 1):\n nowDay = ''.join(['0', str(vDay)])\n else:\n nowDay = str(vDay)\n return nowYear, nowMon, nowDay\n\n\ndef add_dir_str(vList, days=1):\n result = []\n y1, m1, d1 = get_current_date()\n y2, m2, d2 = get_last_date(days)\n year1 = y1\n month1 = m1\n year2 = y2\n month2 = m2\n if type(vList).__name__ == 'list':\n for i in range(len(vList)):\n mlist = [vList[i], year1 + month1]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n else:\n mlist = [vList[i], year2 + month2]\n res = '/'.join(mlist)\n if os.path.isdir(res) is True:\n result.append(res)\n else:\n assert isinstance(vList, list), 'input parameter must be list type'\n return result\n\n\n\n\ndef get_app_name(vlist):\n # get the last two string as appname\n appname = []\n if type(vlist).__name__ == 'list':\n for l in range(len(vlist)):\n vstr = vlist[l].split('/')\n reprstr = '_'.join(vstr[-2:])\n appname.append(reprstr)\n else:\n assert isinstance(vlist, list), 'input parameter must be list type'\n return appname\n\n\ndef assert_log_time(path, day):\n time = os.path.getmtime(path)\n now = datetime.now()\n date = datetime.fromtimestamp(time)\n retdate = date.strftime('%Y-%m-%d')\n if (now - date).days > day:\n return 0\n else:\n return 1\n\n\ndef tar_log_history(path, type):\n list_file = []\n strPath = ''.join([path, type])\n for item in os.listdir(strPath):\n sub_path = os.path.join(strPath, item)\n if os.path.isfile(sub_path) is True:\n ret = assert_log_time(sub_path, 1)\n if ret == 0:\n list_file.append(sub_path)\n if len(list_file) > 0:\n now = datetime.now()\n vdate = (now - 
timedelta(days=2)).strftime(\"%Y-%m-%d\")\n azip = zipfile.ZipFile(\"logs/backup/\" + type + \".\" + vdate + \".zip\", 'w')\n for f in range(len(list_file)):\n azip.write(list_file[f], compress_type=zipfile.ZIP_DEFLATED)\n os.remove(list_file[f])\n azip.close()\n\n\ndef hander_zip_log():\n tarPath = 'logs/backup'\n for type in ['realtime', 'full', 'normal']:\n tar_log_history('logs/', type)\n for tar in os.listdir(tarPath):\n tar_path = os.path.join(tarPath, tar)\n ret = assert_log_time(tar_path, 6)\n if ret == 0:\n os.remove(tar_path)\n\n\ndef create_cmd_str(srclist, dstlist, applist, type):\n list_cmd = []\n for i in range(len(srclist)):\n if os.path.isdir(srclist[i]):\n cmdstring = ['/usr/bin/rsync -avPt --delete --block-size=*.* ', srclist[i], '/ ', dstlist[i],\n ' --log-file=logs/', type, '/', applist[i],\n '.dir.$(date +%Y%m%d-%H%M%S).log >/dev/null 2>&1 &']\n cmd_str = ''.join(cmdstring)\n elif os.path.isfile(srclist[i]):\n cmdstring = ['/usr/bin/rsync -avPt --delete --block-size=*.* ', srclist[i], ' ', dstlist[i],\n ' --log-file=logs/', type, '/', applist[i],\n '.file.$(date +%Y%m%d-%H%M%S).log >/dev/null 2>&1 &']\n cmd_str = ''.join(cmdstring)\n list_cmd.append(cmd_str)\n return list_cmd\n\n\ndef do_rsync(cmd, appname=None):\n cpu_per = get_cpu_percent()\n proc_num = get_rsync_process_num()\n if int(cpu_per) >= 50:\n time.sleep(300)\n elif proc_num > 900:\n time.sleep(15)\n popen = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)\n output, errors = popen.communicate()\n if errors:\n logging.error(errors)\n logging.info('Starting %s rsync data ....' 
% appname)\n logging.info(output)\n\n\ndef get_cpu_percent():\n return psutil.cpu_percent(interval=1)\n\n\ndef get_rsync_process_num():\n # vlist= [ p for p in psutil.pids() if psutil.Process(p).name() == 'rsync' ]\n vlist = [p for p in psutil.pids()]\n return len(vlist)\n\n\ndef main_job(cmdlist, applist):\n for i in range(len(cmdlist)):\n do_rsync(cmdlist[i], applist[i])\n\n\ndef assert_dir_exist(srcdir=['/ggppppnfs/mm/cm']):\n dstdir = ['/ggppppnfs_dr/mm/cm']\n if os.path.exists(dstdir[0]) is True:\n for i in range(len(srcdir)):\n if os.path.exists(srcdir[i]) is True:\n return 0\n else:\n logging.error('error, Source nas vol {0} mounted wrong'.format(srcdir[i]))\n sys.exit(1)\n else:\n logging.error('error, Dest nas vol {0} mounted wrong'.format(dstdir[0]))\n sys.exit(1)\n\n\ndef mkdir_pyrsync_env():\n current_dir = os.path.dirname(os.path.abspath(__file__))\n for vdir in ['backup', 'normal', 'realtime', 'full']:\n vpath = '/'.join([current_dir, 'logs', vdir])\n if os.path.exists(vpath) is False:\n os.makedirs(vpath)\n\ndef kill_rsync_process():\n vlist = [p for p in psutil.pids() if psutil.Process(p).name() == 'rsync']\n for pid in vlist:\n try:\n rsync_proc=psutil.Process(pid)\n logging.info('kill rysnc process %s' % rsync_proc)\n rsync_proc.terminate()\n rsync_proc.wait(timeout=1)\n except psutil.NoSuchProcess,pid:\n pass\n plist = [p for p in psutil.pids() if psutil.Process(p).name() == 'python2.7']\n for vid in plist:\n try:\n python_proc=psutil.Process(vid)\n logging.info('kill python deamon process %s' % python_proc)\n python_proc.terminate()\n python_proc.wait(timeout=1)\n except psutil.NoSuchProcess,pid:\n pass\n\n\n# srcDir1 sync data\ndef run_cmd_list1():\n dstlist1 = change_dir_str(srcDir1, 'ggppppnfs')\n applist1 = get_app_name(srcDir1)\n cmdlist1 = create_cmd_str(srcDir1, dstlist1, applist1, 'realtime')\n if len(cmdlist1) > 0 and len(applist1) > 0:\n if assert_dir_exist(srcDir1) == 0:\n main_job(cmdlist1, applist1)\n\n# srcDir2 sync data\ndef 
run_cmd_list2():\n vresult2 = add_dir_str(srcDir2)\n if len(vresult2) > 0:\n dstlist2 = change_dir_str(vresult2, 'nas')\n applist2 = get_app_name(vresult2)\n cmdlist2 = create_cmd_str(vresult2, dstlist2, applist2, 'normal')\n if len(cmdlist2) > 0 and len(applist2) > 0:\n if assert_dir_exist(vresult2) == 0:\n main_job(cmdlist2, applist2)\n\n\n# srcDir4 sync data,ERP software log\ndef run_cmd_list4():\n dstlist4 = change_dir_str(srcDir4, 'ggppppnfs')\n applist4 = get_app_name(srcDir4)\n cmdlist4 = create_cmd_str(srcDir4, dstlist4, applist4, 'normal')\n if len(cmdlist4) > 0 and len(applist4) > 0:\n if assert_dir_exist(srcDir4) == 0:\n main_job(cmdlist4, applist4)\n\n\n# srcDir3 sync data,full all files\ndef run_cmd_list3():\n dstlist3 = change_dir_str(srcDir3, 'ggppppnfs')\n applist3 = get_app_name(srcDir3)\n cmdlist3 = create_cmd_str(srcDir3, dstlist3, applist3, 'full')\n if len(cmdlist3) > 0 and len(applist3) > 0:\n if assert_dir_exist(srcDir3) == 0:\n main_job(cmdlist3, applist3)\n\n\n\nif __name__ == '__main__':\n assert_dir_exist()\n mkdir_pyrsync_env()\n mydate = get_log_mydate()\n LOG_FORMAT = \"%(asctime)s %(levelname)s %(filename)s %(funcName)s %(message)s\"\n logging.basicConfig(filename='logs/dlRsync_data.' 
+ mydate + '.log', level=logging.DEBUG,\n format=LOG_FORMAT)\n logger = logging.getLogger(__name__)\n srcDir1 = ['/ggppppnfs/mm/cm/backup', '/ggppppnfs/mm/srm/SignImg', '/ggppppnfs/mm/srm/SignImgAll']\n srcDir2 = ['/ggppppnfs/mm/ebs/PA', '/ggppppnfs/mm/ebs/INV', '/ggppppnfs/mm/ebs/AR',\n '/ggppppnfs/mm/cm', '/ggppppnfs/mm/isp', '/ggppppnfs/mm/pay', '/ggppppnfs/mm/srm',\n '/ggppppnfs/mm/merp']\n srcDir3 = ['/ggppppnfs']\n srcDir4 = ['/ggppppnfs/conc/log', '/ggppppnfs/conc/out']\n\n\n # run_cmd_list3()\n schedule.every(5).minutes.do(run_cmd_list1)\n schedule.every(17).minutes.do(run_cmd_list2)\n schedule.every(57).minutes.do(run_cmd_list4)\n schedule.every().day.at(\"00:01\").do(run_cmd_list3)\n schedule.every().day.at(\"12:01\").do(run_cmd_list3)\n\n # handler history log\n schedule.every().day.at(\"00:04\").do(hander_zip_log)\n# schedule.every().day.at(\"19:23\").do(kill_rsync_process)\n\n keep_going = True\n while keep_going:\n try:\n schedule.run_pending()\n time.sleep(5)\n except Exception as e:\n logger.error(str(e))\n logger.info(\"dlRsyncNasData process shut down.\")\n keep_going = False\n"
}
] | 6 |
ukaka/dashboard
|
https://github.com/ukaka/dashboard
|
7850df3f9a84cbc465b14ac52939e04423190578
|
a076940d03577b1a0c5d66f37cb0e001a377947f
|
9e319834f8ea7c0686562283b2f5adc6b528a36b
|
refs/heads/master
| 2020-12-26T04:15:52.747602 | 2017-03-09T16:12:19 | 2017-03-09T16:12:19 | 39,578,329 | 0 | 0 | null | 2015-07-23T16:25:50 | 2016-04-08T15:47:17 | 2016-05-11T15:42:35 |
JavaScript
|
[
{
"alpha_fraction": 0.5991441011428833,
"alphanum_fraction": 0.6043152809143066,
"avg_line_length": 32.58083724975586,
"blob_id": "30dcc61bef61d3034fb8291f2a865ba8af18e9b2",
"content_id": "803a0ebf7790039fcd2610a05012ad1561d6c162",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5608,
"license_type": "no_license",
"max_line_length": 116,
"num_lines": 167,
"path": "/dashboard.py",
"repo_name": "ukaka/dashboard",
"src_encoding": "UTF-8",
"text": "import json\nimport datetime\nfrom flask import Flask, jsonify\nfrom flask.templating import render_template\nfrom jenkinsapi.artifact import Artifact\nfrom jenkinsapi.custom_exceptions import NoBuildData\nimport os\n\nfrom jenkinsapi.jenkins import Jenkins\nfrom requests.exceptions import ConnectionError\n\n\nJOB_URL_ENDING = \"testReport/api/python\"\napp = Flask(__name__)\n\n\ndef get_jenkins():\n return Jenkins(get_config()['sources']['jenkins']['url'])\n\n\ndef get_jenkins_api():\n return Jenkins(get_config()['sources']['jenkins-api']['url'],\n username=os.environ['JENKINS_USER'], password=os.environ['JENKINS_PASS'])\n\n\ndef get_config():\n return json.load(open('config.json'))\n\n\ndef get_item_config(build_name):\n config = get_config()\n if config.has_key('items') and config['items'].has_key(build_name):\n return config['items'][build_name]\n return {}\n\n\[email protected]('/')\ndef index():\n config_data = get_config()\n screens_count = len(config_data['screens'])\n return render_template(\n 'index.html', config=config_data, json_config=json.dumps(config_data), screens_count=screens_count)\n\n\[email protected]('/jenkins_results/<build_name>', methods=['GET'])\ndef get_build_data(build_name):\n jenkins_instance = get_jenkins()\n item_config = get_item_config(build_name)\n if jenkins_instance is not None:\n build = jenkins_instance[build_name]\n else:\n raise ConnectionError(\"Connection with Jenkins failed\")\n\n last_build = build.get_last_build()\n last_build_number = build.get_last_buildnumber()\n child_runs = last_build.get_matrix_runs()\n child_runs_count = 0\n results_url = last_build.get_result_url()\n if results_url.endswith(JOB_URL_ENDING):\n results_url = results_url[:results_url.find(JOB_URL_ENDING)]\n\n failed_runs = []\n success_runs = []\n return_val = {\n 'name': build_name,\n 'status': last_build.get_status(),\n 'hours_ago': get_time_ago(last_build.get_timestamp()),\n }\n\n if item_config.has_key('artifact'):\n output = 
Artifact('output', item_config['artifact'], last_build).get_data()\n return_val['artifact_output'] = output\n else:\n has_next = True\n while has_next:\n try:\n current_build = child_runs.next()\n except StopIteration:\n has_next = False\n\n if has_next:\n child_runs_count += 1\n if current_build.get_number() == last_build_number\\\n and current_build.get_status() == 'FAILURE' or current_build.get_status() == 'UNSTABLE':\n failed_runs.append({\n 'name': current_build.name.split('\\xbb')[1].split(',')[0]\n })\n elif current_build.get_number() == last_build_number and current_build.get_status() == 'SUCCESS'\\\n and (build_name.endswith(\"dev\") or build_name.endswith(\"ios\")):\n success_runs.append({\n 'name': current_build.name.split('\\xbb')[1].split(',')[0]\n })\n\n return_val['results_url'] = results_url\n return_val['failed_runs'] = failed_runs\n return_val['has_failed_runs'] = (len(failed_runs) != 0)\n return_val['success_runs'] = success_runs\n return_val['has_success_runs'] = (len(success_runs) != 0)\n return_val['child_runs_count'] = child_runs_count\n return_val['failure_percentage'] = len(failed_runs) * 100 / child_runs_count if (child_runs_count != 0) else 100\n\n try:\n last_success = get_time_ago(build.get_last_stable_build().get_timestamp()),\n except NoBuildData:\n last_success = '???'\n\n return_val['last_success'] = last_success\n\n return jsonify(return_val)\n\n\[email protected]('/jenkins_api_results/<build_name>', methods=['GET'])\ndef get_build_data_api(build_name):\n jenkins_instance = get_jenkins_api()\n if jenkins_instance is not None:\n build = jenkins_instance[build_name]\n else:\n raise ConnectionError(\"Connection with Jenkins failed\")\n\n last_build = build.get_last_build()\n last_build_number = build.get_last_buildnumber()\n child_runs = last_build.get_matrix_runs()\n child_runs_count = 0\n failed_runs = []\n\n has_next = True\n while has_next:\n try:\n current_build = child_runs.next()\n except StopIteration:\n has_next = False\n\n 
if has_next:\n child_runs_count += 1\n if current_build.get_number() == last_build_number \\\n and (current_build.get_status() == 'FAILURE' or current_build.get_status() == 'UNSTABLE'):\n failed_runs.append({\n 'name': current_build.name.split('\\xbb')[1].split(',')[0]\n })\n\n return_val = {\n 'name': build_name,\n 'status': last_build.get_status(),\n 'hours_ago': get_time_ago(last_build.get_timestamp()),\n 'failed_runs': failed_runs,\n 'has_failed_runs': (len(failed_runs) != 0),\n 'child_runs_count': child_runs_count,\n 'failure_percentage': len(failed_runs) * 100 / child_runs_count if (len(failed_runs) != 0) else 0\n }\n\n try:\n last_success = get_time_ago(build.get_last_stable_build().get_timestamp()),\n except NoBuildData:\n last_success = '???'\n\n return_val['last_success'] = last_success\n\n return jsonify(return_val)\n\n\ndef get_time_ago(run_date):\n return int((datetime.datetime.utcnow().replace(tzinfo=None)\n - run_date.replace(tzinfo=None)).total_seconds() / 3600)\n\nif __name__ == '__main__':\n app.debug = True\n app.run()\n"
},
{
"alpha_fraction": 0.7286324501037598,
"alphanum_fraction": 0.7371794581413269,
"avg_line_length": 25,
"blob_id": "68bcd6cc38e9683d5fa021e8b33c4adcd4a6e496",
"content_id": "58399d578876455483d402f3e5d821a789e64e05",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 468,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 18,
"path": "/README.md",
"repo_name": "ukaka/dashboard",
"src_encoding": "UTF-8",
"text": "Before running app make sure you have **python** and **pip** installed.\n\nTo install dependencies run:\n```\npip install -r requirements\n```\nYou can either install it inside virtualenv or globally.\nIf installing globally you need to run it as root.\n\nProject can be run as regular flask app in dev mode:\n```\npython dashboard.py\n```\n\nor using twisted which is WSGI container and supports more threads than \"dev mode\".\n```\ntwistd -n web --port 8080 --wsgi dashboard.app\n```\n"
}
] | 2 |
rifkypolanen/Tugas-2
|
https://github.com/rifkypolanen/Tugas-2
|
23828d448ce1ddf788215ed8b107fcbcf1994c4d
|
5e3e67ad9df69571c393c2d9b2cb7a01d58503a3
|
a61ed4b94c038aed80a7ae0f8ef64c703b7a35cb
|
refs/heads/main
| 2023-03-26T08:13:13.911835 | 2021-03-26T17:00:07 | 2021-03-26T17:00:07 | 351,850,689 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5138662457466125,
"alphanum_fraction": 0.5236541628837585,
"avg_line_length": 27.285715103149414,
"blob_id": "3a85eb8c7f744f91819f2830f4720dfa66bd9db8",
"content_id": "e94341936e4ef22fbd23d0afba638dd16d13c31c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 613,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 21,
"path": "/Soal-2.py",
"repo_name": "rifkypolanen/Tugas-2",
"src_encoding": "UTF-8",
"text": "kontak = []\r\ntelp = []\r\nprint('Selamat datang!')\r\nmenu = True \r\n\r\nwhile menu:\r\n menu = int(input('--- Menu --- \\n 1. Daftar Kontak \\n 2. Tambah Kontak \\n 3. Keluar \\n Pilih menu :'))\r\n if menu == 1:\r\n for x in range(loop):\r\n print(kontak[x])\r\n print(telp[x])\r\n elif menu == 2:\r\n kontak.append(input('Nama :'))\r\n telp.append(input('No Telpon :'))\r\n print('Kontak berhasil ditambahkan!')\r\n loop = len(kontak)\r\n elif menu == 3:\r\n menu = False\r\n print('Program selesai, sampai jumpa!')\r\n else:\r\n print('Menu tidak tersedia')"
}
] | 1 |
AntonioAlgaida/Learn-Python-The-Hard-Way-Code
|
https://github.com/AntonioAlgaida/Learn-Python-The-Hard-Way-Code
|
98b83183df575d2e56dcc7448a9cab2701ee0266
|
74b698c6b0f68836592c35739554ce0f1314d191
|
a45ed88e7312750c34dfa42e1e5a3923c5ff8c5c
|
refs/heads/master
| 2020-03-23T21:59:16.360907 | 2018-07-24T10:53:12 | 2018-07-24T10:53:12 | 142,145,493 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.37134790420532227,
"alphanum_fraction": 0.39096683263778687,
"avg_line_length": 28.39004135131836,
"blob_id": "e762bdd70af59f1a91905f2d1686a9a5fa95b993",
"content_id": "678f683b164254bdb00ab253289b6afe7a8ee01f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7087,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 241,
"path": "/code.py",
"repo_name": "AntonioAlgaida/Learn-Python-The-Hard-Way-Code",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Jul 24 08:25:44 2018\n\n@author: AntonioGuillen\n\"\"\"\n\n#%%\n# =============================================================================\n# Exercise 1 - Hello Word\n# =============================================================================\n\nprint(\"Hola holita vecinito\")\nprint(\"En python3 a la hora de\")\nprint(\"imprimir por pantalla\")\nprint(\"\"\" se utiliza print(\"TEXTO\") \"\"\")\nprint(\"y además, si se quiere meter un texto\")\nprint(\"que incluya comillas, se utiliza la comilla triple\")\n\nprint(\"I'd much rather you 'not'.\")\nprint('I \"said\" do not touch this .')\n\n#%%\n# =============================================================================\n# Exercise 2 - Comments and Pound Characters\n# =============================================================================\n\n# A comment, this is so you can read your program later.\n# Anything after the # is ignored by python.\n\nprint(\"I could have code like this.\") # and the comment after is ignoredç\n\n# You can also use a comment to \"disable\" or comment out a piece of code:\n# print(\"This won't run.\")\n\nprint(\"This will run.\")\n\n\n\n\n#%%\n# =============================================================================\n# Exercise 3 - Numbers and Math\n# =============================================================================\n\nprint(\"I will now count my chickens:\")\n\nprint(\"Hens\",25.0+30.0/6.0)\nprint(\"Roosters\", 100.0-25.0*3.0%4.0)\n\nprint(\"Now I will count the eggs:\")\n\nprint(3.0+2.0+1.0-5.0+4.0%2.0-1.0/4.0+6.0)\n\nprint(\"Is this true that 3+2<5-7?\")\n\nprint(3+2 < 5-7)\n\nprint(\"What is 3+2\",3+2)\n\n\n#%%\n# =============================================================================\n# Exercise 4 - Variable and Names\n# =============================================================================\ncars = 100\nspace_in_a_car = 4.0\ndrivers = 30\npassengers = 90\ncars_not_driven = cars - 
drivers\ncars_driven = drivers\ncarpool_capacity = cars_driven*space_in_a_car\naverage_passenger_per_car = passengers / cars_driven\n\nprint(\"There are\", cars, \"cars available.\")\nprint(\"There are only\", drivers, \"drivers available.\")\nprint(\"There will be\", cars_not_driven, \"empty cars today.\")\nprint(\"We can transport\", carpool_capacity, \"people today.\")\nprint(\"We have\",passengers,\"to carpool today.\")\nprint(\"We need to put about\",average_passenger_per_car,\"in each car.\")\n\n\n#%%\n# =============================================================================\n# Exercise 5 - More Variables and Printing\n# =============================================================================\nmy_name = \"Antonio\"\nmy_surname = 'Guillen'\nmy_age = 24\nmy_weight = 85.5\nmy_height = 180\nmy_eyes = 'Brown'\nmy_teeth = 'White'\nmy_hair = 'Black'\n\nprint(\"Let's talk aboyut %s %s.\" % (my_name, my_surname))\nprint(\"He is %4.2f cm tall\" % my_weight)\nprint(\"He's got %s eyes and %d tall.\" % (my_eyes, my_height))\nprint(\"Testing the %s\" % my_age)\n#%%\n# =============================================================================\n# Exercise 6 - String and Text\n# =============================================================================\nx = \"There are %i types of people.\" %10\nbinary = \"binary\"\ndo_not = \"don't\"\ny = \"Those who know %s and those who %s.\" % (binary, do_not)\n\nprint(x)\nprint(y)\n\nprint(\"I said: %r\" % x)\nprint(\"I also said: '%s'.\" % y)\n\nhilarious = False\njoke_evaluation = \"Isn't that joke so funny?! 
%r\"\n\nprint(joke_evaluation % hilarious)\n\nw = \"This is the left side of...\"\ne = \"a string with a right side.\"\nprint(w + e)\n#%%\n# =============================================================================\n# Exercise 7 - More Printing\n# =============================================================================\nprint(\"Mary had a little lamb.\")\nprint(\"Its fleece was white as %s.\" % 'snow')\nprint(\"And wverywhere that Mary went.\")\nprint(\".\"*10)\n\ne1 = \"C\"\ne2 = \"h\"\ne3 = \"e\"\ne4 = \"e\"\ne5 = \"s\"\ne6 = \"e\"\ne7 = \"B\"\ne8 = \"u\"\ne9 = \"r\"\ne10 = \"g\"\ne11 = \"e\"\ne12 = \"r\"\nprint(e1+e2+e3+e4+e5+e6,\n e7+e8+e9+e10+e11+e12)\n\n\n#%%\n# =============================================================================\n# Exercise 8 - Printing Printing\n# =============================================================================\nformater = \"%r %r %r %r\"\nprint (formater % (1,2,3,4))\nprint(formater % (\"one\", \"two\", \"three\", \"four\"))\nprint(formater % (True, False, False, True))\nprint(formater % (formater, formater, formater, formater))\nprint(formater % (\n \"I had this thing.\",\n \"That you could type up rigth.\",\n \"But it didn't sing.\",\n \"So I said goodnight.\"))\n#%%\n# =============================================================================\n# Exercise 9 - Printing*3\n# =============================================================================\ndays = \"Mon Tue Wed Thu Fri Sat Sun\"\nmonths = \"Jan\\nFre\\nMar\\nApr\\nMay\\nJun\\nJul\\nAug\"\n\nprint(\"Here are the days:\", days)\nprint(\"Here are the months\", months)\n\nprint(\"\"\"\n There's something going on here.\n With the three double-quotes.\n We'll be abe to type as mus as we like\n Even 4 lines if we want, or 5, or 6\n \"\"\")\n#%%\n# =============================================================================\n# Exercise 10 - What Was That\n# =============================================================================\ntabby_cat = 
\"\\tI'm tabbet in.\"\npersian_cat = \"I'm split\\non a line.\"\nbackslash_cat = \"I'm \\\\ a \\\\ cat.\"\n\nfat_cat = \"\"\"\nI'll do a list:\n \\t* Cat food\n \\t* Fishies\n \\t* Catnip\\n\\t* Grass\n\"\"\"\nprint(tabby_cat)\nprint(persian_cat)\nprint(backslash_cat)\nprint(fat_cat)\n\n#%%\n# =============================================================================\n# Exercise 11 - Asking Questions\n# =============================================================================\nprint(\"How old are you?\")\nage = input()\n\nprint(\"How tall are you?\")\nheight = input()\n\nprint(\"How much do you weigh\")\nweight = input()\n\nprint(\"So, you're %r old, %r tall and %r heavy.\" %(age, height, weight))\n#%%\n# =============================================================================\n# Exercise 12 - Prompting People\n# =============================================================================\nage = input(\"How old are you? \")\n\nprint(\"So, you are %r\" % age)\n#%%\n# =============================================================================\n# Exercise -\n# =============================================================================\n\n#%%\n# =============================================================================\n# Exercise -\n# =============================================================================\n\n#%%\n# =============================================================================\n# Exercise -\n# =============================================================================\n\n#%%\n# =============================================================================\n# Exercise -\n# =============================================================================\n\n#%%\n# =============================================================================\n# Exercise -\n# =============================================================================\n\n\n"
}
] | 1 |
yurithefury/pyfor
|
https://github.com/yurithefury/pyfor
|
5f240a1ace58ed696e00217a324f98d5cb42f407
|
1700e58040b91c5f6604239886ca8abe27c03e90
|
c236cda98666666e0d1aba8ddc3aedb9ba278688
|
refs/heads/master
| 2020-09-11T06:03:14.333820 | 2019-10-14T17:15:41 | 2019-10-14T17:15:41 | 221,964,527 | 1 | 0 |
MIT
| 2019-11-15T16:39:51 | 2019-11-15T16:39:47 | 2019-11-10T18:00:46 | null |
[
{
"alpha_fraction": 0.6000000238418579,
"alphanum_fraction": 0.6400977969169617,
"avg_line_length": 31.44444465637207,
"blob_id": "0eaab6e5c273f771b6c5fff8433d411c582ee02f",
"content_id": "758b7b9542ca6c848b09445af71ed088ebcdb68e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2045,
"license_type": "permissive",
"max_line_length": 89,
"num_lines": 63,
"path": "/pyfortest/test_collection.py",
"repo_name": "yurithefury/pyfor",
"src_encoding": "UTF-8",
"text": "from pyfor import *\nimport unittest\nimport os\n\ndata_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'data')\nproj4str = \"+proj=utm +zone=10 +ellps=GRS80 +datum=NAD83 +units=m +no_defs\"\n\ndef test_buffered_func(pc, tile):\n print(pc, tile)\n\ndef test_byfile_func(las_path):\n print(cloud.Cloud(las_path))\n\n\ndef make_test_collection():\n \"\"\"\n Splits the testing tile into 4 tiles to use for testing\n :return:\n \"\"\"\n\n pc = cloud.Cloud(os.path.join(data_dir, 'test.las'))\n\n # Sample to only 1000 points for speed\n pc.data.points = pc.data.points.sample(1000, random_state=12)\n\n tr = pc.data.points[(pc.data.points['x'] > 405100) & (pc.data.points['y'] > 3276400)]\n tl = pc.data.points[(pc.data.points['x'] < 405100) & (pc.data.points['y'] > 3276400)]\n br = pc.data.points[(pc.data.points['x'] > 405100) & (pc.data.points['y'] < 3276400)]\n bl = pc.data.points[(pc.data.points['x'] < 405100) & (pc.data.points['y'] < 3276400)]\n\n all = [tr, tl, br, bl]\n\n for i, points in enumerate(all):\n out = cloud.LASData(points, pc.data.header)\n out.write(os.path.join(data_dir, 'mock_collection', '{}.las'.format(i)))\n\n pc.data.header.reader.close()\n\n\nclass CollectionTestCase(unittest.TestCase):\n def setUp(self):\n make_test_collection()\n self.test_col = collection.from_dir(os.path.join(data_dir, 'mock_collection'))\n\n def test_create_index(self):\n self.test_col.create_index()\n\n def test_retile_raster(self):\n self.test_col.retile_raster(10, 50, buffer=10)\n self.test_col.reset_tiles()\n\n def test_par_apply_buff_index(self):\n # Buffered with index\n self.test_col.retile_raster(10, 50, buffer=10)\n self.test_col.par_apply(test_buffered_func, indexed=True)\n\n def test_par_apply_buff_noindex(self):\n # Buffered without index\n self.test_col.par_apply(test_buffered_func, indexed=False)\n\n def test_par_apply_by_file(self):\n # By file\n self.test_col.par_apply(test_byfile_func, by_file=True)\n\n"
},
{
"alpha_fraction": 0.6242038011550903,
"alphanum_fraction": 0.6298655271530151,
"avg_line_length": 35.230770111083984,
"blob_id": "2115d390066041716b731700eec49fc166cfc943",
"content_id": "e7a420ca251491ac25ea69c8b68077dfa89580a8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2826,
"license_type": "permissive",
"max_line_length": 129,
"num_lines": 78,
"path": "/pyfor/gisexport.py",
"repo_name": "yurithefury/pyfor",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport rasterio\nfrom rasterio.features import shapes\nfrom shapely.geometry import shape\nimport geopandas\n\n# This module holds internal functions for GIS processing.\n\ndef get_las_crs():\n \"\"\"\n Attempts to retrive CRS information from an input `laspy.file.File` object.\n :return:\n \"\"\"\n pass\n\ndef project_indices(indices, raster):\n \"\"\"\n Converts indices of an array (for example, those indices that describe the location of a local maxima) to the\n same space as the input cloud object.\n\n :param indices: The indices to project, an Nx2 matrix of indices where the first column are the rows (Y) and\n the second column is the columns (X)\n :param raster: An object of type pyfor.rasterizer.Raster\n :return:\n \"\"\"\n\n seed_xy = indices[:,1] + (raster._affine[2] / raster._affine[0]), \\\n indices[:,0] + (raster._affine[5] - (raster.grid.cloud.data.max[1] - raster.grid.cloud.data.min[1]) /\n abs(raster._affine[4]))\n seed_xy = np.stack(seed_xy, axis = 1)\n return(seed_xy)\n\ndef array_to_raster(array, affine, wkt, path):\n \"\"\"Writes a GeoTIFF raster from a numpy array.\n\n :param array: 2D numpy array of cell values\n :param pixel_size: -- Desired resolution of the output raster, in same units as wkt projection.\n :param x_min: Minimum x coordinate (top left corner of raster)\n :param y_max: Maximum y coordinate\n :param wkt: The wkt string with desired projection\n :param path: The output bath of the GeoTIFF\n \"\"\"\n # First flip the array\n #transform = rasterio.transform.from_origin(x_min, y_max, pixel_size, pixel_size)\n out_dataset = rasterio.open(path, 'w', driver='GTiff', height=array.shape[0], width = array.shape[1], count=1,\n dtype=str(array.dtype),crs=wkt, transform=affine)\n out_dataset.write(array, 1)\n out_dataset.close()\n\ndef array_to_polygons(array, affine=None):\n \"\"\"\n Returns a geopandas dataframe of polygons as deduced from an array.\n\n :param array: The 2D numpy array to polygonize.\n :param 
affine: The affine transformation.\n :return:\n \"\"\"\n if affine == None:\n results = [\n {'properties': {'raster_val': v}, 'geometry': s}\n for i, (s, v)\n in enumerate(shapes(array))\n ]\n else:\n results = [\n {'properties': {'raster_val': v}, 'geometry': s}\n for i, (s, v)\n in enumerate(shapes(array, transform=affine))\n ]\n\n\n tops_df = geopandas.GeoDataFrame({'geometry': [shape(results[geom]['geometry']) for geom in range(len(results))],\n 'raster_val': [results[geom]['properties']['raster_val'] for geom in range(len(results))]})\n\n return(tops_df)\n\ndef polygons_to_raster(polygons):\n pass\n"
}
] | 2 |
MarianaPinto17/EDC-GAMEDB
|
https://github.com/MarianaPinto17/EDC-GAMEDB
|
582f894d922948861c0eb36af440cb8950ce265e
|
2350e0a68fba8ad0ae4901f64fcc8e8c8cdcda88
|
9541eca1d1ac355ffed30ed71b6bbbbae685bd48
|
refs/heads/main
| 2023-02-22T04:11:07.211769 | 2021-01-26T15:33:28 | 2021-01-26T15:33:28 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7179487347602844,
"alphanum_fraction": 0.7435897588729858,
"avg_line_length": 38,
"blob_id": "9f2e88c1876f681ae080e875f958abf6e057dbc6",
"content_id": "2dd35f97ef642321d7aaa5356dc01137493f9046",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 79,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 2,
"path": "/README.md",
"repo_name": "MarianaPinto17/EDC-GAMEDB",
"src_encoding": "UTF-8",
"text": "# edc-project2\nProjeto 2 no âmbito da disciplina de EDC , UA- MIECT - GamesDB\n"
},
{
"alpha_fraction": 0.5162184238433838,
"alphanum_fraction": 0.520097553730011,
"avg_line_length": 36.336551666259766,
"blob_id": "37f4329dc26e015fa279ae9f5ace450456aecee5",
"content_id": "fe51eb030f387abd0d893a530afd9f0956c72e12",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 27068,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 725,
"path": "/webproj/webapp/views.py",
"repo_name": "MarianaPinto17/EDC-GAMEDB",
"src_encoding": "UTF-8",
"text": "import json\nfrom random import random, randint\n\nfrom django.shortcuts import redirect, render\nimport os\nfrom django.contrib.admin.utils import flatten\nimport requests\nfrom s4api.graphdb_api import GraphDBApi\nfrom s4api.swagger import ApiClient\nfrom SPARQLWrapper import SPARQLWrapper, JSON\n\nendpoint = \"http://localhost:7200\"\nrepo_name = \"gamesdb\"\nclient = ApiClient(endpoint=endpoint)\naccessor = GraphDBApi(client)\n\n\n# Create your views here.\ndef index(request):\n query = \"\"\"\n PREFIX pred: <http://gamesdb.com/predicate/>\n SELECT ?game ?pred ?obj\n WHERE{\n ?game ?pred ?obj .\n ?game pred:positive-ratings ?ratings .\n }\n \n \"\"\"\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game = {}\n developers = []\n categories = []\n screenshots = []\n publishers = []\n for res_tmp in res:\n gameid = res_tmp['game']['value'].split(\"/\")[-1]\n key = res_tmp['pred']['value'].split(\"/\")[-1]\n if gameid not in game.keys():\n game[gameid] = {key: res_tmp['obj']['value']}\n developers = []\n categories = []\n screenshots = []\n publishers = []\n else:\n tmp = game[gameid]\n value = res_tmp['obj']['value']\n if key == \"screenshots\":\n screenshots.append(value)\n tmp_dic = {key: screenshots}\n elif key == \"category\":\n categories.append({value: \"\"})\n tmp_dic = {key: categories}\n elif key == \"developers\":\n developers.append({value: \"\"})\n tmp_dic = {key: developers}\n elif key == \"publishers\":\n publishers.append({value: \"\"})\n tmp_dic = {key: publishers}\n else:\n tmp_dic = {key: res_tmp['obj']['value']}\n\n tmp.update(tmp_dic)\n print(game['440'])\n\n for game_tmp in game.keys():\n developer_index = 0\n publisher_index = 0\n category_index = 0\n for developer_list in game[game_tmp]['developers']:\n developer = list(developer_list.keys())[0]\n query = \"\"\" PREFIX company: 
<http://gamesdb.com/entity/company/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n company:\"\"\" + developer.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game[game_tmp]['developers'][developer_index].update({developer: res[0]['name']['value']})\n developer_index = developer_index + 1\n\n for publisher_list in game[game_tmp]['publishers']:\n publisher = list(publisher_list.keys())[0]\n query = \"\"\" PREFIX company: <http://gamesdb.com/entity/company/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n company:\"\"\" + publisher.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game[game_tmp]['publishers'][publisher_index].update({publisher: res[0]['name']['value']})\n publisher_index = publisher_index + 1\n\n for category_list in game[game_tmp]['category']:\n category = list(category_list.keys())[0]\n query = \"\"\" PREFIX category: <http://gamesdb.com/entity/categories/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n category:\"\"\" + category.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game[game_tmp]['category'][category_index].update({category: res[0]['name']['value']})\n category_index = category_index + 1\n\n print(game['440'])\n tparam = {'game': game}\n return render(request, 'index.html', tparam)\n\n\ndef showGame(request, game_id):\n # print(game_id)\n query = \"\"\"PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n 
SELECT ?pred ?obj ?id\n WHERE{\n game:\"\"\" + str(game_id) + \"\"\" ?pred ?obj .\t \n }\n \"\"\"\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n developers = []\n categories = []\n screenshots = []\n publishers = []\n game = {}\n for res_tmp in res:\n key = res_tmp['pred']['value'].split(\"/\")[-1]\n value = res_tmp['obj']['value']\n if key == \"screenshots\":\n screenshots.append(value)\n tmp_dic = {key: screenshots}\n elif key == \"category\":\n categories.append({value: \"\"})\n tmp_dic = {key: categories}\n elif key == \"developers\":\n developers.append({value: \"\"})\n tmp_dic = {key: developers}\n elif key == \"publishers\":\n publishers.append({value: \"\"})\n tmp_dic = {key: publishers}\n else:\n tmp_dic = {key: res_tmp['obj']['value']}\n\n game.update(tmp_dic)\n\n developer_index = 0\n publisher_index = 0\n category_index = 0\n for developer_list in game['developers']:\n developer = list(developer_list.keys())[0]\n query = \"\"\" PREFIX company: <http://gamesdb.com/entity/company/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n company:\"\"\" + developer.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game['developers'][developer_index].update({developer: res[0]['name']['value']})\n developer_index = developer_index + 1\n\n for publisher_list in game['publishers']:\n publisher = list(publisher_list.keys())[0]\n query = \"\"\" PREFIX company: <http://gamesdb.com/entity/company/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n company:\"\"\" + publisher.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n 
res = json.loads(res)\n res = res['results']['bindings']\n game['publishers'][publisher_index].update({publisher: res[0]['name']['value']})\n publisher_index = publisher_index + 1\n\n for category_list in game['category']:\n category = list(category_list.keys())[0]\n query = \"\"\" PREFIX category: <http://gamesdb.com/entity/categories/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n category:\"\"\" + category.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game['category'][category_index].update({category: res[0]['name']['value']})\n category_index = category_index + 1\n\n full_size = []\n thumbnail = []\n for i in range(0, len(screenshots), 2):\n full_size.append(screenshots[i])\n for i in range(1, len(screenshots), 2):\n thumbnail.append(screenshots[i])\n print(thumbnail)\n\n new_params = {'game_title': game['name'],\n 'game_image': game['header'],\n 'game_description': game['description'],\n 'release': game['date'],\n 'devs': game['developers'],\n 'genres': game['category'],\n 'english': game['english'],\n 'positive': game['positive-ratings'],\n 'negative': game['negative-ratings'],\n 'lower': game['lower-ownership'],\n 'higher': game['upper-ownership'],\n 'game_id': game_id,\n 'images': full_size,\n 'thumbnails': thumbnail\n }\n\n return render(request, 'game.html', new_params)\n\n\ndef deleteGame(request, game_id):\n\n query = \"\"\" PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n Delete\n WHERE{\n \t\t\t\tgame:\"\"\"+str(game_id)+\" ?pred ?obj }\"\n\n print(query)\n payload_query = {\"update\": query}\n res = accessor.sparql_update(body=payload_query, repo_name=repo_name)\n print(res)\n\n return redirect(index)\n\n\ndef searchGame_2(request):\n print(request.POST)\n pattern = request.POST['pattern']\n 
print(pattern)\n pattern = pattern.replace(\"%20\", \" \").replace(\"'\", \"\")\n\n endpoint = \"http://localhost:7200\"\n repo_name = \"gamesdb\"\n client = ApiClient(endpoint=endpoint)\n accessor = GraphDBApi(client)\n query = \"\"\"\n PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n \n\t\t\t\tSELECT ?game ?pred ?obj\n\t\t\t\tWHERE{\n \t\t\t\t?game ?pred ?obj.\n \t\t\t\t?game pred:name ?name .\n \t\t\t\t?game pred:positive-ratings ?ratings.\n \t\t\tfilter(contains(lcase(?name), \\\"\"\"\" + pattern.lower() + \"\\\"))}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n print(query)\n res = json.loads(res)\n print(res)\n res = res['results']['bindings']\n game = {}\n developers = []\n categories = []\n screenshots = []\n publishers = []\n for res_tmp in res:\n gameid = res_tmp['game']['value'].split(\"/\")[-1]\n key = res_tmp['pred']['value'].split(\"/\")[-1]\n if gameid not in game.keys():\n game[gameid] = {key: res_tmp['obj']['value']}\n developers = []\n categories = []\n screenshots = []\n publishers = []\n else:\n tmp = game[gameid]\n value = res_tmp['obj']['value']\n if key == \"screenshots\":\n screenshots.append(value)\n tmp_dic = {key: screenshots}\n elif key == \"category\":\n categories.append({value: \"\"})\n tmp_dic = {key: categories}\n elif key == \"developers\":\n developers.append({value: \"\"})\n tmp_dic = {key: developers}\n elif key == \"publishers\":\n publishers.append({value: \"\"})\n tmp_dic = {key: publishers}\n else:\n tmp_dic = {key: res_tmp['obj']['value']}\n\n tmp.update(tmp_dic)\n\n\n for game_tmp in game.keys():\n developer_index = 0\n publisher_index = 0\n category_index = 0\n for developer_list in game[game_tmp]['developers']:\n developer = list(developer_list.keys())[0]\n query = \"\"\" PREFIX company: <http://gamesdb.com/entity/company/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n 
company:\"\"\" + developer.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game[game_tmp]['developers'][developer_index].update({developer: res[0]['name']['value']})\n developer_index = developer_index + 1\n\n for publisher_list in game[game_tmp]['publishers']:\n publisher = list(publisher_list.keys())[0]\n query = \"\"\" PREFIX company: <http://gamesdb.com/entity/company/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n company:\"\"\" + publisher.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game[game_tmp]['publishers'][publisher_index].update({publisher: res[0]['name']['value']})\n publisher_index = publisher_index + 1\n\n for category_list in game[game_tmp]['category']:\n category = list(category_list.keys())[0]\n query = \"\"\" PREFIX category: <http://gamesdb.com/entity/categories/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n category:\"\"\" + category.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n game[game_tmp]['category'][category_index].update({category: res[0]['name']['value']})\n category_index = category_index + 1\n\n tparam = {'game': game}\n return render(request, 'index.html', tparam)\n\n\n\n\ndef news_feed(request):\n xml_link = \"https://metro.co.uk/entertainment/gaming/feed/\"\n xml_file = requests.get(xml_link)\n xslt_name = 'rss.xsl'\n xsl_file = os.path.join(BASE_DIR, 'webapp/xslt/' + xslt_name)\n tree = ET.fromstring(xml_file.content)\n xslt = ET.parse(xsl_file)\n transform = 
ET.XSLT(xslt)\n newdoc = transform(tree)\n tparams = {\n 'content': newdoc,\n }\n\n return render(request, 'news.html', tparams)\n\n\ndef addComment(request, game_id):\n session = BaseXClient.Session('localhost', 1984, 'admin', 'admin')\n session.execute(\"open dataset\")\n print(request.POST)\n comment = request.POST[\"comment\"]\n\n root = Element('comment')\n root.text = request.POST[\"comment\"]\n root.set('author', request.POST['nickname'])\n\n query_add_comment = \"let $games := doc('dataset')//game for $game in $games where $game[@id='\" + str(\n game_id) + \"']\" + \" let $node := '\" + tostring(root).decode('utf-8') + \"' return insert node fn:parse-xml(\" \\\n \"$node) as last into $game \"\n print(query_add_comment)\n query = session.query(query_add_comment)\n query.execute()\n\n return redirect(showGame, game_id=game_id)\n\n\ndef addGame(request, error=False):\n tparams = {'error': False}\n if error:\n tparams['error'] = True\n return render(request, \"form.html\", tparams)\n\n\ndef newGame(request):\n print(request.POST)\n id = str(randint(1, 150))\n title = request.POST['title']\n date = request.POST['year']\n if request.POST['english'] == 'Y':\n english = 'True'\n else:\n english = 'False'\n\n developers_post = []\n for developer in request.POST['developers'].split(';'):\n developers_post.append(developer)\n\n required_age = request.POST['rating']\n genres = []\n for genre in request.POST['genres'].split(\";\"):\n genres.append(genre)\n\n positive = \"0\"\n negative = \"0\"\n average = \"0\"\n\n median = \"0\"\n lower = \"0\"\n high = \"0\"\n price = request.POST[\"price\"]\n description = request.POST['description']\n\n full_size = request.POST['url']\n\n query = \"\"\" PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n SELECT distinct ?cat\n WHERE{\n ?game ?pred ?obj .\n ?game pred:category ?cat .\n\n }\"\"\"\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, 
repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n categories = {}\n for cat in res:\n category = cat['cat']['value']\n query = \"\"\" PREFIX category: <http://gamesdb.com/entity/categories/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n category:\"\"\" + category.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n categories[category] = res[0]['name']['value']\n\n\n #check if category is on db\n cat_keys = []\n\n for cat in genres:\n if cat in categories.values():\n cat_keys.append(list(categories.keys())[list(categories.values()).index(cat)])\n print(cat_keys)\n\n query = \"\"\" PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n SELECT distinct ?dev\n WHERE{\n ?game ?pred ?obj .\n ?game pred:developers ?dev .\n\n }\"\"\"\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n developers = {}\n for dev in res:\n developer = dev['dev']['value']\n query = \"\"\" PREFIX company: <http://gamesdb.com/entity/company/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n company:\"\"\" + developer.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n developers[developer] = res[0]['name']['value']\n\n dev_keys = []\n\n for dev in developers_post:\n if dev in developers.values():\n dev_keys.append(list(developers.keys())[list(developers.values()).index(dev)])\n print(dev_keys)\n\n\n to_insert = []\n to_insert.append(\"game:\"+id+ \" predicate:name \\\"\" + title + \"\\\"\")\n to_insert.append(\"game:\"+id+ 
\" predicate:date \\\"\" + date + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:english \\\"\" + english + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:positive-ratings \\\"\" + positive + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:negative-ratings \\\"\" + negative + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:average-playtime \\\"\" + average + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:median-playtime \\\"\" + median + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:lower-ownership \\\"\" + lower + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:upper-ownership \\\"\" + high + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:price \\\"\" + price + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:description \\\"\" + description + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:header \\\"\" + full_size + \"\\\"\")\n to_insert.append(\"game:\"+id+ \" predicate:age \" + \"ages:not_available\")\n\n\n\n for dev in dev_keys:\n to_insert.append(\"game:\"+id+ \" predicate:developers company:\" + dev.split(\"/\")[-1])\n to_insert.append(\"game:\"+id+ \" predicate:publishers company:\" + dev.split(\"/\")[-1])\n\n\n for cat in cat_keys:\n to_insert.append(\"game:\"+id+ \" predicate:category categories:\" + cat.split(\"/\")[-1])\n\n for query in to_insert:\n to_query = \"\"\"PREFIX predicate: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n PREFIX company: <http://gamesdb.com/entity/company/>\n PREFIX categories: <http://gamesdb.com/entity/categories/>\n \n INSERT DATA{\n \"\"\" + query + \"}\"\n print(to_query)\n payload_query = {\"update\": to_query}\n print(accessor.sparql_update(body=payload_query, repo_name=repo_name))\n\n\n print(id)\n return redirect(showGame, game_id=id)\n\n\ndef apply_filters(request):\n endpoint = \"http://localhost:7200\"\n repo_name = \"gamesdb\"\n client = ApiClient(endpoint=endpoint)\n accessor = 
GraphDBApi(client)\n query = \"\"\"\n PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX cat: <http://gamesdb.com/entity/categories/>\n SELECT distinct ?category\n WHERE{\n ?game ?pred ?obj .\n ?game pred:category ?category .\n \t\t\t\t\t \n }\n \"\"\"\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n categories=[]\n for e in res['results']['bindings']:\n for v in e.values():\n categories.append(v['value'])\n\n query = \"\"\"\n PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX age: <http://gamesdb.com/entity/ages/>\n SELECT distinct ?age\n WHERE {\n \t ?game ?pred ?obj .\n ?game pred:age ?age\n } order by ?age\n \n \"\"\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n age = []\n\n for e in res['results']['bindings']:\n for v in e.values():\n age.append(v['value'])\n\n query = \"\"\"\n PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX company: <http://gamesdb.com/entity/company/>\n SELECT distinct ?company\n WHERE {\n \t ?game ?pred ?obj .\n ?game pred:company ?company\n } order by ?company\n\n \"\"\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n companies = []\n\n for e in res['results']['bindings']:\n for v in e.values():\n companies.append(v['value'])\n\n query = \"\"\"\n PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n SELECT distinct ?game\n WHERE {\n \t ?game ?pred ?obj .\n ?game pred:game ?game\n } order by ?game\n\n \"\"\"\n\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n games = []\n for e in res['results']['bindings']:\n for v in e.values():\n games.append(v['value'])\n\n query = \"\"\"\n PREFIX pred: <http://gamesdb.com/predicate/>\n SELECT distinct 
?name ?pred ?obj\n WHERE {\n ?game ?pred ?obj .\n ?game pred:company ?company .\n ?game pred:age ?age .\n ?game pred:category ?category .\n ?company pred:name ?company_name .\n ?age pred:age ?age_value .\n ?category pred:name ?category_name .\n ?game pred:name ?game_name\n ?game pred:date ?date .\n\n \"\"\"\n\n\n categoriesToQuery = []\n\n for g in categories:\n if g in request.POST:\n categoriesToQuery.append(g)\n\n if len(categoriesToQuery) != 0:\n aux = \"\"\n for g in categoriesToQuery:\n aux += \"\\\"\" + g + \"\\\",\"\n aux = aux[:-1]\n query += \"\"\"FILTER(?category_name IN(\"\"\" + aux + \"\"\"))\"\"\"\n\n if 'age' in request.POST:\n query += \"\"\"FILTER (?age_value = \\\"\"\"\"+ request.POST['age'] + \"\"\"\\\")\"\"\"\n\n if 'companies'in request.POST:\n query += \"\"\"FILTER (?company_name = \\\"\"\"\" + request.POST['companies'] + \"\"\"\\\")\"\"\"\n\n if 'games' in request.POST:\n query += \"\"\"FILTER (?game_name = \\\"\"\"\" + request.POST['games'] + \"\"\"\\\")\"\"\"\n query += \"\"\"FILTER (?date = \\\"\"\"\" + request.POST['games'] + \"\"\"\\\")\"\"\"\n\n tparams = {\n \"categories\": categories,\n \"age\": age,\n \"companies\": companies,\n \"games\": games\n }\n\n return render(request, 'index.html', tparams)\n\n\ndef adv_search(request):\n query = \"\"\" PREFIX pred: <http://gamesdb.com/predicate/>\n PREFIX game: <http://gamesdb.com/entity/game/>\n SELECT distinct ?cat\n WHERE{\n ?game ?pred ?obj .\n ?game pred:category ?cat .\n \t\t\t\t\t \n }\"\"\"\n payload_query = {\"query\": query}\n res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n categories = {}\n for cat in res:\n category = cat['cat']['value']\n query = \"\"\" PREFIX category: <http://gamesdb.com/entity/categories/>\n prefix predicate: <http://gamesdb.com/predicate/>\n select ?name where{\n category:\"\"\" + category.split(\"/\")[-1] + \" predicate:name ?name.}\"\n\n payload_query = {\"query\": query}\n 
res = accessor.sparql_select(body=payload_query, repo_name=repo_name)\n res = json.loads(res)\n res = res['results']['bindings']\n categories[category] = res[0]['name']['value']\n\n print(categories)\n\n tparams = {'genres': categories}\n\n return render(request, 'adv_search.html', tparams)\n\n\ndef searchdb(request):\n pattern = request.POST['pattern_db']\n pattern = pattern.replace(\"%20\", \" \").replace(\"'\", \"\")\n\n endpoint = \"https://dbpedia.org/sparql\"\n\n query = \"\"\"SELECT *\n WHERE\n {\n ?term rdfs:label \"\"\" + \"\\\"\"+pattern+\"\\\"@en}\"\n print(query)\n sparql = SPARQLWrapper(endpoint)\n sparql.setQuery(query)\n sparql.setReturnFormat(JSON)\n result = sparql.query().convert()\n print(result)"
}
] | 2 |
VishalMishraB/vishal-
|
https://github.com/VishalMishraB/vishal-
|
57b25dc6b8e69ad5d8e303f17b0fe23485d7b4f8
|
933b0ac79671d1fce624378906fb6ab9dca47466
|
7d5d2fbfb701e3cf994089d5e6f5f7369e0c7bd6
|
refs/heads/master
| 2021-05-08T12:17:36.355949 | 2018-02-26T09:12:21 | 2018-02-26T09:12:21 | 119,933,177 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.6333333253860474,
"avg_line_length": 11,
"blob_id": "9235cf159dd5ec26ede23701314099cd18985e52",
"content_id": "95076d87fec4afbd07983935a266c601fcbbcc02",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 60,
"license_type": "no_license",
"max_line_length": 15,
"num_lines": 5,
"path": "/programe_float.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#function 2\ndef float(a,b):\n print(a/b)\n\nfloat(1000,100)\n"
},
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.75,
"avg_line_length": 19,
"blob_id": "c94a957eb63aff7967944697907e90e05de6e46a",
"content_id": "18d86b3b0c15e626e1cd1689367494e09f0bd01a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 40,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 2,
"path": "/program-1.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#programe to print name\nprint(\"vishal\")\n"
},
{
"alpha_fraction": 0.6121212244033813,
"alphanum_fraction": 0.6545454263687134,
"avg_line_length": 22.428571701049805,
"blob_id": "96bf29edd7b1e66508761fe8895c6dccc39ff0cf",
"content_id": "c0868a4f1ee851e0cbdf5e8c12946bc51aded432",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 165,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 7,
"path": "/programe_em.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "import smtplib\ns= smtplib.SMTP(\"smtp.gmail.com\",587)\ns.starttls()\ns.login(\"[email protected]\",\"sumeet2388\")\nmsg=\"Hii how r u..?\"\ns.sendmail(\"[email protected]\",\"[email protected]\",msg)\ns.quit()\n\n"
},
{
"alpha_fraction": 0.375,
"alphanum_fraction": 0.4296875,
"avg_line_length": 17.285715103149414,
"blob_id": "9d9a9f0b41052670e1dca5b471f67573acc1e7af",
"content_id": "789a4ca570209c403745bbd02365ccfb9c02460a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 128,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 7,
"path": "/programe_odd_even.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "i=0\nwhile(i<=100):\n if i%2==0:\n print(i,\"it is even no.\")\n else:\n print(i,\"it is a odd no.\")\n i=i+1\n"
},
{
"alpha_fraction": 0.4897959232330322,
"alphanum_fraction": 0.6020408272743225,
"avg_line_length": 10.529411315917969,
"blob_id": "c36e82c84bf878dbe228ed63058dd3ab832061e9",
"content_id": "173bebff76bfa89077299930c109ad2886d62d17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 196,
"license_type": "no_license",
"max_line_length": 18,
"num_lines": 17,
"path": "/programe_function.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#function\ndef add(a,b):\n print(a+b)\n\ndef sub(x,y):\n print(x-y)\n\ndef multiply(c,d):\n print(c*d)\n\ndef divide(e,f):\n print(e/f)\n\nadd(100,300)\nsub(10,400)\nmultiply(15,15)\ndivide(1000,100)\n"
},
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.581818163394928,
"avg_line_length": 8.166666984558105,
"blob_id": "11433726180dd99ff806cc33e51df18a7453f89b",
"content_id": "8d7f435ab5b4a1c79e6042d4a5591de8b182354d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 55,
"license_type": "no_license",
"max_line_length": 15,
"num_lines": 6,
"path": "/programe_power.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#function\ndef power(c,d):\n print(c**d)\n\n\npower(2,6)\n"
},
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.581818163394928,
"avg_line_length": 11.5,
"blob_id": "164269bdf1f80035e5f13b3dacec8a9e5f13c398",
"content_id": "499b25f400f996589c5c28a66588d8154c5c0e8f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 55,
"license_type": "no_license",
"max_line_length": 19,
"num_lines": 4,
"path": "/programe_range.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#range function\n\nfor i in range(50):\n print(i)\n \n"
},
{
"alpha_fraction": 0.4126984179019928,
"alphanum_fraction": 0.523809552192688,
"avg_line_length": 13.5,
"blob_id": "f9fb89f5a04bedcffbfcb55d5fbafdf4d680aaeb",
"content_id": "08222b301dae5fa289653eebb7105f37ccfe0b9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 63,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 4,
"path": "/programe_list.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#list function\nl=[1,2,3,4,5,6,7]\nfor i in l:\n print(i)\n \n"
},
{
"alpha_fraction": 0.38372093439102173,
"alphanum_fraction": 0.5,
"avg_line_length": 16.200000762939453,
"blob_id": "dc820ec12aeb90717b5b140fc69ff99c26d17bf2",
"content_id": "0324a69d03ad2ede727068e64267862fe9dec804",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 86,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 5,
"path": "/programe_break_.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#break function\nfor i in (1,2,3,4,5,6,7,8,9):\n if i==5:\n break\n print(i)\n"
},
{
"alpha_fraction": 0.44117647409439087,
"alphanum_fraction": 0.5147058963775635,
"avg_line_length": 18.66666603088379,
"blob_id": "22b8fe4bfda6617ad185e7a208a9acbd4351838d",
"content_id": "6cbfdfeaab21befebb9d026c05724989bac49128",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 68,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 3,
"path": "/programe_range_2.py",
"repo_name": "VishalMishraB/vishal-",
"src_encoding": "UTF-8",
"text": "#range function 2\nfor i in range(1,50,5):\n print(i)\n \n"
}
] | 10 |
CarlosWGama/python-ml-at-regressao
|
https://github.com/CarlosWGama/python-ml-at-regressao
|
5fdde2c0180cccbf8af3b6f58708a6b40193270d
|
9f33750740fb35634861fd44a2408528115a08d6
|
33f01cd2c0162aadbd1f5b2523b9cc5350def16a
|
refs/heads/master
| 2020-12-21T08:30:22.288125 | 2020-01-26T21:44:54 | 2020-01-26T21:44:54 | 236,374,071 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6719636917114258,
"alphanum_fraction": 0.6912599205970764,
"avg_line_length": 28.399999618530273,
"blob_id": "6768d62a0c9e70c4023b568c5c87fac02db772aa",
"content_id": "037d0e4c4521cc344b851ff66c0479278b84778c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 889,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 30,
"path": "/main.py",
"repo_name": "CarlosWGama/python-ml-at-regressao",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.linear_model import LinearRegression\n\n#Recuperando os dados\ncsv = pd.read_csv('dados.csv', sep=\";\")\ncsv = csv.drop(columns=['Número comentários','Compartilhamento'])\n\n#Normatizando os valores\nle = LabelEncoder()\ncsv['Tipo'] = le.fit_transform(csv['Tipo'])\n\n#criando o modelo\ndados = csv.values\natributos = dados[:,0:5] \nlikes = dados[:,5]\n\n#Modelo Like\nmodelo = LinearRegression()\nmodelo.fit(atributos, likes)\n\n#Coletando as informações\ntipo = int(input('Informe o número do tipo da postagem Foto[0]|Link[1]|Status[2]|Video[3]: '))\nmes = int(input('Mês: '))\ndia = int(input('Dia da semana: D[1]|S[2]|T[3]|Q[4]|Q[5]|S[6]|S[7]: '))\nhora = int(input('Hora: '))\npago = int(input('Pago: SIM[1]|NÃO[0]: '))\n\nretorno = modelo.predict([[tipo, mes, dia, hora, pago]])\nprint('Média de Likes: ', int(retorno[0]))"
}
] | 1 |
3x3r/Output_kris
|
https://github.com/3x3r/Output_kris
|
52edb0dccfad8472d2b655804c6cfb58aca521cf
|
88a5a66a0e3db26c4002260d06e30700d3db6a08
|
1a4929a50bd6a87d39a819387b3fc9148a7dcaf1
|
refs/heads/master
| 2020-04-10T05:50:39.909818 | 2018-12-07T15:06:30 | 2018-12-07T15:06:30 | 160,838,371 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.43829551339149475,
"alphanum_fraction": 0.4892086386680603,
"avg_line_length": 33.431373596191406,
"blob_id": "19902e12c4508decad6521e1ad89f720a3237ba4",
"content_id": "a78cc62c4c806dab566b1d1fbbf6a93aff0465e9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1816,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 51,
"path": "/Output.py",
"repo_name": "3x3r/Output_kris",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\r\nimport dash\r\nimport dash_core_components as dcc\r\nimport dash_html_components as html\r\nimport pandas as pd\r\ndf = pd.read_excel('C:\\\\Python27/upload.xlsx','Лист1', index_col=None, na_values=['NA'])\r\n#pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])\r\n#wb = openpyxl.load_workbook(filename='C:\\\\Python27/upload.xlsx')\r\n#sheet = wb['Лист1']\r\nexternal_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']\r\n\r\napp = dash.Dash(__name__, external_stylesheets=external_stylesheets)\r\napp.layout = html.Div(children=[\r\n html.H1(children='Hello Dash'),\r\n\r\n html.Div(children='''\r\n Dash: A web application framework for Python.\r\n '''),\r\n html.Label('Multi-Select Dropdown'),\r\n dcc.Dropdown(\r\n options=[\r\n {'label': 'F401', 'value': 'F401'},\r\n {'label': 'F402', 'value': 'F402'},\r\n {'label': 'F403', 'value': 'F403'},\r\n {'label': 'F404', 'value': 'F404'},\r\n {'label': 'F405', 'value': 'F405'},\r\n {'label': 'F406', 'value': 'F406'},\r\n {'label': 'F407', 'value': 'F407'},\r\n {'label': 'F408', 'value': 'F408'},\r\n {'label': 'F409', 'value': 'F409'},\r\n {'label': 'F413', 'value': 'F413'},\r\n ],\r\n value=['F401', 'F402'],\r\n multi=True\r\n ),\r\n dcc.Graph(\r\n id='example-graph',\r\n figure={\r\n 'data': [\r\n {'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': '444'},\r\n {'x': [1, 2, 3], 'y': [2, 4, 5], 'type': 'bar', 'name': u'Montréal'},\r\n ],\r\n 'layout': {\r\n 'title': 'Dash Data Visualization'\r\n }\r\n }\r\n )\r\n])\r\n#wb.save('C:\\\\Python27/upload.xlsx')\r\nif __name__ == '__main__':\r\n app.run_server(debug=True)\r\n"
}
] | 1 |
BrianRMoo/Python-unit-test
|
https://github.com/BrianRMoo/Python-unit-test
|
c9736a17334c476abbf8255f6f64fc7e5f517df6
|
ca5ec9ee696a1659326643848e7f00bbe30ec99f
|
b437f0faf59fa7af6863edd56fffa51ec9810d92
|
refs/heads/main
| 2023-05-10T22:51:05.851790 | 2021-05-27T18:30:10 | 2021-05-27T18:30:10 | 371,462,120 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5445840954780579,
"alphanum_fraction": 0.6163973808288574,
"avg_line_length": 38.80952453613281,
"blob_id": "99e45126d797be1377de7430426ca167cb37e049",
"content_id": "784a5560eaa53b76ee749635f9bf9a4c5282bde9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1671,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 42,
"path": "/test_calc.py",
"repo_name": "BrianRMoo/Python-unit-test",
"src_encoding": "UTF-8",
"text": "#using unit test to test calc functions\n#add, sub multiply and divide whole numbers decimals and negatives\nimport unittest\nimport calc\n\n#inherit from testcase to access test case functions\nclass TestCalc(unittest.TestCase):\n def test_add(self):\n #assert equal to test if calc is getting proper return value\n self.assertEqual(calc.add(10, 10), 20)\n self.assertEqual(calc.add(22, 3), 25)\n self.assertEqual(calc.add(1024, 8), 1032)\n self.assertEqual(calc.add(-10, 20),10)\n self.assertEqual(calc.add(.01, 3.14), 3.15)\n self.assertEqual(calc.add(-.1, 3.14), 3.04)\n def test_sub(self): \n self.assertEqual(calc.subtract(10, 5), 5)\n self.assertEqual(calc.subtract(11,6), 5)\n self.assertEqual(calc.subtract(12,7), 5)\n self.assertEqual(calc.subtract(-32, 10), -42)\n self.assertEqual(calc.subtract(.1, 1), -.9)\n self.assertEqual(calc.subtract(-.1, 1), -1.1)\n def test_multiply(self):\n self.assertEqual(calc.multiply(10, 5),50)\n self.assertEqual(calc.multiply(11, 10), 110)\n self.assertEqual(calc.multiply(512, 2), 1024)\n self.assertEqual(calc.multiply(.5, 2), 1)\n self.assertEqual(calc.multiply(-.5, 2), -1)\n def test_div(self):\n self.assertEqual(calc.divide(20, 1), 20)\n self.assertEqual(calc.divide(64, 8), 8)\n self.assertEqual(calc.divide(2048, 4), 512)\n self.assertEqual(calc.divide(.5, 2), .25)\n self.assertEqual(calc.divide(-.5, 2), -.25)\n \n with self.assertRaises(ValueError):\n calc.divide(10, 0)\n \n \n \nif __name__ == '__main__':\n unittest.main()"
},
{
"alpha_fraction": 0.4909234344959259,
"alphanum_fraction": 0.5398579239845276,
"avg_line_length": 29.190475463867188,
"blob_id": "b62cfe5627d0a3a4208bab592793797311a7f190",
"content_id": "ca130b90656c0f2a2f8d0c3cff62b8b8d8fc1c9d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1267,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 42,
"path": "/test_employee.py",
"repo_name": "BrianRMoo/Python-unit-test",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom employee import Employee\n\nclass TestEmployee(unittest.TestCase):\n \n def test_email(self):\n emp_1 = Employee('Brian', 'Moo', 50000)\n emp_2 = Employee('Brianna', 'Smith', 60000)\n \n self.assertEqual(emp_1.email, '[email protected]')\n self.assertEqual(emp_2.email, '[email protected]')\n \n emp_1.first ='Jimmy'\n emp_2.first = \"Janey\"\n \n self.assertEqual(emp_1.email, '[email protected]')\n self.assertEqual(emp_2.email, '[email protected]')\n \n def test_fullname(self):\n emp_1 = Employee('Brian', 'Moo', 60000)\n emp_2 = Employee('Jess', 'Alc', 50000)\n \n self.assertEqual(emp_1.fullname, 'Brian Moo')\n self.assertEqual(emp_2.fullname, 'Jess Alc')\n \n emp_1.first = 'Bee'\n emp_2.first = 'Jetsa'\n \n self.assertEqual(emp_1.fullname,'Bee Moo')\n self.assertEqual(emp_2.fullname, 'Jetsa Alc')\n \n def test_apply_raise(self):\n emp_1 = Employee('Brian', 'Moo', 50000)\n emp_2 = Employee('Brianna', 'Smith', 60000)\n emp_1.apply_raise()\n emp_2.apply_raise()\n \n self.assertEqual(emp_1.pay, 52500)\n self.assertEqual(emp_2.pay, 63000)\n \nif __name__ == '__main__':\n unittest.main()"
}
] | 2 |
DimpleManiar94/runner-backend
|
https://github.com/DimpleManiar94/runner-backend
|
aa7294fd2c51c3476c59b57f32b2dea95b11a461
|
4515e8a28ee3f37f46e7b2785caae75f368fdfe0
|
53a4469a2c82641caabe2c9bda583d987940bc31
|
refs/heads/master
| 2020-04-25T12:43:33.377218 | 2019-05-14T01:07:57 | 2019-05-14T01:07:57 | 172,787,156 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5099009871482849,
"alphanum_fraction": 0.5965346693992615,
"avg_line_length": 21.44444465637207,
"blob_id": "ba339a5ee4cad1752729fa4812cad4478fccd9b9",
"content_id": "4b8e1f82917674a6740398c8ad14570d7dcfd6cf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 404,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 18,
"path": "/accounts/migrations/0003_auto_20190226_0040.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.1.5 on 2019-02-26 00:40\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('accounts', '0002_auto_20190226_0034'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='useraccount',\n name='Addr2',\n field=models.CharField(blank=True, max_length=200),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6133743524551392,
"alphanum_fraction": 0.6149116158485413,
"avg_line_length": 33.26315689086914,
"blob_id": "b459ce3530fbdb808cc07dbd131bb6c9aaf53a62",
"content_id": "00b9bdae27b186eaba80531e1ac265e3d94ba3b9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1301,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 38,
"path": "/accounts/serializers.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from builtins import setattr\n\nfrom rest_framework import serializers\nfrom .models import Setting, UserAccount, User\n\nclass UserSerializer(serializers.ModelSerializer):\n class Meta:\n model = User\n fields = ('id', 'username', 'first_name', 'last_name', 'email', 'password')\n extra_kwargs = {\n 'username': {'validators': []},\n }\n\n\nclass UserAccountSerializer(serializers.ModelSerializer):\n user = UserSerializer(required=True)\n\n class Meta:\n model = UserAccount\n fields = ('user', 'Phone', 'ProfilePicture', 'Addr1', 'Addr2', 'City', 'State', 'PostalCode')\n\n def create(self, validated_data):\n user_data = validated_data.pop('user', None)\n user = User.objects.create_user(**user_data)\n return UserAccount.objects.create(user=user, **validated_data)\n\n def update(self, instance, validated_data):\n user_dict = validated_data.pop('user', None)\n if user_dict:\n user_obj = instance.user\n for key, value in user_dict.items():\n setattr(user_obj, key, value)\n user_obj.save()\n validated_data[\"user\"] = user_obj\n for key, value in validated_data.items():\n setattr(instance, key, value)\n instance.save()\n return instance"
},
{
"alpha_fraction": 0.6631513833999634,
"alphanum_fraction": 0.7065756916999817,
"avg_line_length": 31.632652282714844,
"blob_id": "aefae07f7ac8a76167a7e077b1accc9e3b550fab",
"content_id": "6c01b33c2565ff7e8c2c64f28622c2339ceb1b9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1612,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 49,
"path": "/README.md",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# runner-backend\nPython, Django, Postgresql\n\n## Installation guide for MacOS\n\n### Packages to be installed:\n1. Python 3.7.2\n2. Django 2.1.5\n3. Postgresql 11\n\n#### Steps:\n\n1. Install homebrew\n2. Install python3\\\n `brew install python3`\n3. Check python version:\\\n `python3 -V`\\\n should give python 3.7.2\\\n4. Install Django\\\n`pip3 install django`\n5. Check django version:\\\n`python3 -m django --version`\\\nshould give you 2.1.5\n6. Install postgres\\\n`brew install postgresql`\n7. Create runner database in postgres from command line\\\n`createdb djangorunner`\\\nThis will create a database called `djangorunner` in postgres and we later connect to this db from our python project.\n8. Now you are ready to clone the project from github. I use pycharm community version for coding and terminal for running django commands.\n9. After cloning the project, open the project in pycharm. Go to `runner/settings.py` and find the following lines:\\\nDATABASES = {\\\n 'default': {\\\n 'ENGINE': 'django.db.backends.postgresql',\\\n 'NAME': 'djangorunner',\\\n 'USER': 'dimple',\\\n 'PASSWORD': '******',\\\n 'HOST': '127.0.0.1',\\\n 'PORT': '5432'\\\n }\\\n}\\\nChange the user and password to your mac user and password.\n10. Now go to the command line and cd into the project.\n11. `pip3 install djangorestframework`\n12. `pip3 install django-filter`\n12. `python3 manage.py makemigrations`\n13. `python3 manage.py migrate`\n14. `python3 manage.py runserver`\n15. On your browser go to `localhost:8000`. You should see rest framework\n16. `localhost:8000/admin` will give you django administration.\n\n\n\n\n\n \n\n\n\n\n"
},
{
"alpha_fraction": 0.5305842161178589,
"alphanum_fraction": 0.5553264617919922,
"avg_line_length": 41.79411697387695,
"blob_id": "79284fd9e853ad7520a71aeae954be218fe3af9d",
"content_id": "eb096f34a0e641bcac5edf8e2fca6598373c88d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1455,
"license_type": "no_license",
"max_line_length": 154,
"num_lines": 34,
"path": "/tasks/migrations/0001_initial.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.1.5 on 2019-02-25 03:20\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='Task',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('TaskTitle', models.CharField(max_length=200)),\n ('Description', models.TextField()),\n ('Reward', models.IntegerField()),\n ('DateOfCompletion', models.DateField()),\n ('TimeOfCompletion', models.TimeField()),\n ('Addr1', models.CharField(max_length=200)),\n ('Addr2', models.CharField(max_length=200)),\n ('City', models.CharField(max_length=50)),\n ('State', models.CharField(max_length=50)),\n ('PostalCode', models.CharField(max_length=10)),\n ('TaskType', models.CharField(choices=[('Shopping', 'Shopping'), ('Moving', 'Moving'), ('Lifting', 'Lifting')], max_length=10)),\n ('TaskStatus', models.CharField(choices=[('Available', 'Available'), ('Running', 'Running'), ('Completed', 'Completed')], max_length=10)),\n ('CreatedTS', models.DateTimeField(auto_now_add=True)),\n ('UpdatedTS', models.DateTimeField(auto_now=True)),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.5651282072067261,
"alphanum_fraction": 0.598974347114563,
"avg_line_length": 31.5,
"blob_id": "648aa2bd4a8bdea2d5e10fe25a99474e9d7d324d",
"content_id": "979f8816eadd03ec0a52521a9cf90425d4b21691",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 975,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 30,
"path": "/tasks/migrations/0003_auto_20190226_0034.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.1.5 on 2019-02-26 00:34\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('tasks', '0002_auto_20190225_2054'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='task',\n name='Description',\n field=models.TextField(blank=True, null=True),\n ),\n migrations.AlterField(\n model_name='task',\n name='RunnerFK',\n field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='runner', to=settings.AUTH_USER_MODEL),\n ),\n migrations.AlterField(\n model_name='task',\n name='TaskType',\n field=models.CharField(choices=[('Shopping', 'Shopping'), ('Moving', 'Moving'), ('Lifting', 'Lifting'), ('Other', 'Other')], max_length=10),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6489533185958862,
"alphanum_fraction": 0.6521739363670349,
"avg_line_length": 43.42856979370117,
"blob_id": "4eea72a7fc52c6a3b4017ddd8a18f180c6ea1b73",
"content_id": "aa5d3493341843bb04bda696bfff47a28ecd5dde",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 621,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 14,
"path": "/tasks/serializers.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from rest_framework import serializers\nfrom .models import Task, TaskLike, TaskComment\n\nclass TaskSerializer(serializers.ModelSerializer):\n class Meta:\n model = Task\n fields = ('id', 'TaskTitle', 'Description', 'Reward', 'DateOfCompletion', 'TimeOfCompletion', 'Addr1', 'Addr2',\n 'City', 'State', 'PostalCode', 'TaskType', 'TaskStatus', 'AuthorFK', 'RunnerFK', 'CreatedTS',\n 'UpdatedTS')\n\nclass TaskCommentSerializer(serializers.ModelSerializer):\n class Meta:\n model = TaskComment\n fields = ('id','Comment', 'UserFK', 'TaskFK', 'CreatedTS', 'UpdatedTS')"
},
{
"alpha_fraction": 0.6215895414352417,
"alphanum_fraction": 0.6441280841827393,
"avg_line_length": 31.423076629638672,
"blob_id": "6653dba4d450fa5c08982dd218e74404276c6c99",
"content_id": "39ea3b0525b96633e78e5138f7d7c9d8c3a87b78",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 843,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 26,
"path": "/tasks/migrations/0002_auto_20190225_2054.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.1.5 on 2019-02-25 20:54\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ('tasks', '0001_initial'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='task',\n name='AuthorFK',\n field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='author', to=settings.AUTH_USER_MODEL),\n ),\n migrations.AddField(\n model_name='task',\n name='RunnerFK',\n field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='runner', to=settings.AUTH_USER_MODEL),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5473186373710632,
"alphanum_fraction": 0.5764984488487244,
"avg_line_length": 45.96296310424805,
"blob_id": "5e49c83dd83ea3a27e5c9b9fd6acd5925ea24edc",
"content_id": "fb2b7fb4d7882e11c0a15826c00d25b129037e3e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1268,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 27,
"path": "/accounts/migrations/0004_setting.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.1.5 on 2019-02-26 01:13\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('accounts', '0003_auto_20190226_0040'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='Setting',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('SettingsType', models.CharField(choices=[('Text', 'Text'), ('Push', 'Push Notification'), ('Email', 'Email')], max_length=50)),\n ('TaskCompleted', models.CharField(choices=[('ON', 'ON'), ('OFF', 'OFF')], default='ON', max_length=3)),\n ('Likes', models.CharField(choices=[('ON', 'ON'), ('OFF', 'OFF')], default='ON', max_length=3)),\n ('Comments', models.CharField(choices=[('ON', 'ON'), ('OFF', 'OFF')], default='ON', max_length=3)),\n ('TaskPicked', models.CharField(choices=[('ON', 'ON'), ('OFF', 'OFF')], default='ON', max_length=3)),\n ('UserFK', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.5988142490386963,
"alphanum_fraction": 0.6363636255264282,
"avg_line_length": 25.63157844543457,
"blob_id": "93b7c959e1825cf411fdf2de95aea438818ce51b",
"content_id": "bfb96be37b2afc494c1f306c33438bac642c0586",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 506,
"license_type": "no_license",
"max_line_length": 135,
"num_lines": 19,
"path": "/tasks/migrations/0006_auto_20190226_0116.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.1.5 on 2019-02-26 01:16\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('tasks', '0005_taskcomment_tasklike'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='tasklike',\n name='TaskFK',\n field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='Like_task', to='tasks.Task'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6948868632316589,
"alphanum_fraction": 0.6948868632316589,
"avg_line_length": 38.766666412353516,
"blob_id": "e7ae40b3f7e55cecd099687a7db15aad0c5fcbb5",
"content_id": "561e519d1ea2b1701488f6821f67832d622f27e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1193,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 30,
"path": "/tasks/views.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom rest_framework import viewsets\nfrom .models import Task, TaskComment\nfrom .serializers import TaskSerializer, TaskCommentSerializer\n\n# Create your views here.\n\nclass TaskView(viewsets.ModelViewSet):\n #queryset = Task.objects.all().order_by('-CreatedTS')\n serializer_class = TaskSerializer\n\n def get_queryset(self):\n queryset = Task.objects.all().order_by('-CreatedTS')\n status = self.request.query_params.get('status', None)\n author = self.request.query_params.get('author', None)\n runner = self.request.query_params.get('runner', None)\n category = self.request.query_params.get('category', None)\n if status is not None:\n queryset = queryset.filter(TaskStatus=status)\n if author is not None:\n queryset = queryset.filter(AuthorFK=author)\n if runner is not None:\n queryset = queryset.filter(RunnerFK=runner)\n if category is not None:\n queryset = queryset.filter(TaskType=category)\n return queryset\n\nclass TaskCommentView(viewsets.ModelViewSet):\n queryset = TaskComment.objects.all()\n serializer_class = TaskCommentSerializer\n"
},
{
"alpha_fraction": 0.6037735939025879,
"alphanum_fraction": 0.6250760555267334,
"avg_line_length": 45.94285583496094,
"blob_id": "1cf75775fe200e73b8f5bebafe4b68b312fb303a",
"content_id": "33b2c3bc0d34c3b6bc2f3fd6d45b967580498b6b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1643,
"license_type": "no_license",
"max_line_length": 160,
"num_lines": 35,
"path": "/tasks/migrations/0005_taskcomment_tasklike.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "# Generated by Django 2.1.5 on 2019-02-26 01:13\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n ('tasks', '0004_auto_20190226_0040'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='TaskComment',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('Comment', models.CharField(max_length=1200)),\n ('CreatedTS', models.DateTimeField(auto_now_add=True)),\n ('UpdatedTS', models.DateTimeField(auto_now=True)),\n ('TaskFK', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='Comment_Task', to='tasks.Task')),\n ('UserFK', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='Comment_User', to=settings.AUTH_USER_MODEL)),\n ],\n ),\n migrations.CreateModel(\n name='TaskLike',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('TaskFK', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='Like_task', to=settings.AUTH_USER_MODEL)),\n ('UserFK', models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='Like_user', to=settings.AUTH_USER_MODEL)),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.6781889200210571,
"alphanum_fraction": 0.6903602480888367,
"avg_line_length": 40.85714340209961,
"blob_id": "0d6369993b3e9c9cdd9ec5de82c64ecc2c2c79a1",
"content_id": "bc92afbbd9ffc3c3d27e0d04d69801466ef8e2c7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2054,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 49,
"path": "/tasks/models.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from django.db import models\nfrom accounts.models import User\n\n# Create your models here.\nclass Task(models.Model):\n TASK_TYPES = (\n ('Shopping', 'Shopping'),\n ('Moving', 'Moving'),\n ('Lifting', 'Lifting'),\n ('Other', 'Other'),\n )\n TASK_STATUSES = (\n ('Available', 'Available'),\n ('Running', 'Running'),\n ('Completed', 'Completed'),\n )\n TaskTitle = models.CharField(max_length=200)\n Description = models.TextField(null=True, blank=True)\n Reward = models.IntegerField()\n DateOfCompletion = models.DateField()\n TimeOfCompletion = models.TimeField()\n Addr1 = models.CharField(max_length=200)\n Addr2 = models.CharField(max_length=200, blank=True)\n City = models.CharField(max_length=50)\n State = models.CharField(max_length=50)\n PostalCode = models.CharField(max_length=10)\n TaskType = models.CharField(max_length=10, choices=TASK_TYPES)\n TaskStatus = models.CharField(max_length=10, choices=TASK_STATUSES)\n AuthorFK = models.ForeignKey(User, related_name='author', on_delete=models.CASCADE, null=True)\n RunnerFK = models.ForeignKey(User, related_name='runner', on_delete=models.CASCADE, null=True, blank=True)\n CreatedTS = models.DateTimeField(auto_now_add=True)\n UpdatedTS = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return self.TaskTitle\n\nclass TaskComment(models.Model):\n Comment = models.CharField(max_length=1200)\n UserFK = models.ForeignKey(User, related_name='Comment_User', on_delete=models.CASCADE, null=True)\n TaskFK = models.ForeignKey(Task, related_name='Comment_Task', on_delete=models.CASCADE, null=True)\n CreatedTS = models.DateTimeField(auto_now_add=True)\n UpdatedTS = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return self.Comment\n\nclass TaskLike(models.Model):\n UserFK = models.ForeignKey(User, related_name='Like_user', on_delete=models.CASCADE, null=True)\n TaskFK = models.ForeignKey(Task, related_name='Like_task', on_delete=models.CASCADE, null=True)\n\n\n\n"
},
{
"alpha_fraction": 0.6807888746261597,
"alphanum_fraction": 0.6990504264831543,
"avg_line_length": 41.8125,
"blob_id": "bd0798efd173c24ca6943e9c56e57f87d1854195",
"content_id": "5875eb9acfd3ec2acb87266065acc5340e10472a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1369,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 32,
"path": "/accounts/models.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from builtins import super\n\nfrom django.db import models\nfrom django.contrib.auth.models import User\n\n# Create your models here.\nclass UserAccount(models.Model):\n user = models.OneToOneField(User, on_delete=models.CASCADE)\n Phone = models.CharField(max_length=10, blank=True)\n ProfilePicture = models.CharField(max_length=300, blank=True)\n Addr1 = models.CharField(max_length=200, blank=True)\n Addr2 = models.CharField(max_length=200, blank=True)\n City = models.CharField(max_length=50, blank=True)\n State = models.CharField(max_length=50, blank=True)\n PostalCode = models.CharField(max_length=10, blank=True)\n\nclass Setting(models.Model):\n SETTINGS_TYPE = (\n ('Text', 'Text'),\n ('Push', 'Push Notification'),\n ('Email', 'Email'),\n )\n SETTINGS_STATUSES = (\n ('ON', 'ON'),\n ('OFF', 'OFF'),\n )\n UserFK = models.ForeignKey(UserAccount, on_delete=models.CASCADE, null=True)\n SettingsType = models.CharField(max_length=50, choices=SETTINGS_TYPE)\n TaskCompleted = models.CharField(max_length=3, choices=SETTINGS_STATUSES, default='ON')\n Likes = models.CharField(max_length=3, choices=SETTINGS_STATUSES, default='ON')\n Comments = models.CharField(max_length=3, choices=SETTINGS_STATUSES, default='ON')\n TaskPicked = models.CharField(max_length=3, choices=SETTINGS_STATUSES, default='ON')"
},
{
"alpha_fraction": 0.8258928656578064,
"alphanum_fraction": 0.8258928656578064,
"avg_line_length": 26.875,
"blob_id": "400e33b859aa24c181af8ec8b2cc98f1ab656613",
"content_id": "494d98fc2e52f94f48e5f7e48903e83e4ca8f769",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 224,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 8,
"path": "/accounts/admin.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom django.contrib.auth.admin import UserAdmin\nfrom accounts.models import UserAccount, Setting\n\n# Register your models here.\n\nadmin.site.register(UserAccount)\nadmin.site.register(Setting)\n\n"
},
{
"alpha_fraction": 0.7785407900810242,
"alphanum_fraction": 0.7785407900810242,
"avg_line_length": 39.13793182373047,
"blob_id": "c0c73e5a4f511deafad383fe5ebafa7872e1d3db",
"content_id": "3ddc9fb26abaab99f0e5875fd1c393d18d8ca025",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1165,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 29,
"path": "/accounts/views.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom rest_framework import viewsets\nfrom .models import UserAccount, User\nfrom .serializers import UserAccountSerializer, UserSerializer\nfrom rest_framework.permissions import AllowAny\nfrom rest_framework import generics\nfrom rest_framework.authtoken.views import ObtainAuthToken\nfrom rest_framework.authtoken.models import Token\nfrom rest_framework.response import Response\n\n# Create your views here.\nclass UserAccountView(viewsets.ModelViewSet):\n queryset = UserAccount.objects.all()\n serializer_class = UserAccountSerializer\n\nclass UserRegistrationView(viewsets.ModelViewSet):\n permission_classes = (AllowAny,)\n queryset = UserAccount.objects.all()\n serializer_class = UserAccountSerializer\n\nclass UserView(viewsets.ModelViewSet):\n queryset = User.objects.all()\n serializer_class = UserSerializer\n\nclass CustomObtainAuthToken(ObtainAuthToken):\n def post(self, request, *args, **kwargs):\n response = super(CustomObtainAuthToken, self).post(request, *args, **kwargs)\n token = Token.objects.get(key=response.data['token'])\n return Response({'token': token.key, 'id': token.user_id})\n\n"
},
{
"alpha_fraction": 0.8155339956283569,
"alphanum_fraction": 0.8155339956283569,
"avg_line_length": 24.75,
"blob_id": "7686546acd09b87d249eae16d3387e02b20b2ce0",
"content_id": "da25f5cada82fa4a712952d69b5deac2cbcfdd68",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 206,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 8,
"path": "/tasks/admin.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom tasks.models import Task, TaskComment, TaskLike\n\n# Register your models here.\n\nadmin.site.register(Task)\nadmin.site.register(TaskComment)\nadmin.site.register(TaskLike)\n"
},
{
"alpha_fraction": 0.6149943470954895,
"alphanum_fraction": 0.6228861212730408,
"avg_line_length": 41.261905670166016,
"blob_id": "812d1e215c23211af306f69ddc6aea7056dcb6d5",
"content_id": "fd2bdff7dbd5f4a58f78408642e6cbe82d085b72",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1774,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 42,
"path": "/accounts/tests.py",
"repo_name": "DimpleManiar94/runner-backend",
"src_encoding": "UTF-8",
"text": "from django.test import TestCase\nfrom rest_framework import status\nfrom rest_framework.test import APITestCase\nfrom rest_framework.test import APIClient\nfrom .models import UserAccount, User\nimport requests\n\n# Create your tests here.\n\nclient = APIClient()\n\nclass AccountTests(APITestCase):\n def test_create_account(self):\n data = { \"user\" : { \"first_name\" : \"d\", \"last_name\" : \"xyz\", \"email\" : \"[email protected]\", \"username\" : \"d\",\n \"password\" : \"runner123\"\n} }\n response = client.post('/register/', data, format='json')\n self.assertEqual(response.status_code, status.HTTP_201_CREATED)\n self.assertEqual(UserAccount.objects.count(), 1)\n self.assertEqual(UserAccount.objects.get().user.username, 'd')\n print(\"Register response : \", response)\n\n def test_get_profile(self):\n user = User(first_name=\"c\", last_name = \"xyz\", email=\"[email protected]\", username=\"c\", password=\"runner123\")\n user.save()\n user = User.objects.get(username='c')\n client.force_authenticate(user=user)\n response = client.get('/user/')\n self.assertEqual(response.status_code, status.HTTP_200_OK)\n print(\"Profile response : \", response)\n\n def test_sign_in(self):\n url = \"http://localhost:8000/api-token-auth/\"\n payload = \"{\\n\\t\\\"username\\\": \\\"c\\\",\\n\\t\\\"password\\\": \\\"runner123\\\"\\n}\"\n headers = {\n 'Content-Type': \"application/json\",\n }\n user = User(first_name=\"c\", last_name=\"xyz\", email=\"[email protected]\", username=\"c\", password=\"runner123\")\n user.save()\n response = requests.request(\"POST\", url, data=payload, headers=headers)\n self.assertEqual(response.status_code, status.HTTP_200_OK)\n print(\"Sign-In response : \", response)"
}
] | 17
eda606/eda | https://github.com/eda606/eda | 1b1295dfff3c928b4fd20c908816a53104c16ed3 | c54e14dc15ad6f1df75f0d1d82ab7896837e8085 | 860162594dcd3c54c608b615a1cc060df1e21e6a | refs/heads/main | 2023-02-07T03:48:37.519038 | 2020-12-30T01:10:14 | 2020-12-30T01:10:14 | 325,100,247 | 0 | 0 | null | 2020-12-28T19:35:20 | 2020-12-29T22:03:23 | 2020-12-29T22:03:53 | null |
[
{
"alpha_fraction": 0.5333654284477234,
"alphanum_fraction": 0.5516522526741028,
"avg_line_length": 31.556135177612305,
"blob_id": "a4591b1c6173a077b48f4a202cc0b3af4b4171fb",
"content_id": "a40c18f11c96f4fa9f4e833b0cef23f7cfd79c9d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12468,
"license_type": "no_license",
"max_line_length": 190,
"num_lines": 383,
"path": "/src/eda.py",
"repo_name": "eda606/eda",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport matplotlib.pyplot as plt \nimport seaborn as sns\nimport numpy as np\n\n\ndef _two_group_histplot_num_var(df, numerical_cols, y_col, bins = 10):\n yvals = df[y_col].unique()\n total_figs = len(numerical_cols)\n fig_cols = 3\n fig_rows = int(total_figs/3)+1\n plt.figure(figsize=(15,fig_rows*3.8))\n fig = 1\n for col in numerical_cols:\n if col == y_col:\n continue\n tmp_df = df[[y_col,col]].copy()\n \n col_y_val_0 = df[df[y_col] == yvals[0]][col]\n col_y_val_1 = df[df[y_col] == yvals[1]][col]\n\n ax1 = plt.subplot(fig_rows,fig_cols,fig) \n ax1.hist(col_y_val_0,bins=bins, histtype = 'step', color = 'r', weights=np.ones(len(col_y_val_0)) / len(col_y_val_0), label = 'y = '+str(yvals[0]));\n ax1.hist(col_y_val_1,bins=bins, histtype = 'step',color = 'b', weights=np.ones(len(col_y_val_1)) / len(col_y_val_1), label = 'y = '+str(yvals[1]));\n plt.title(col)\n plt.legend()\n plt.tight_layout()\n \n if fig % 3 == 1:\n plt.ylabel('Proportion')\n \n fig += 1\n plt.show()\n\ndef _two_group_histplot_cat_var(df, cat_cols, y_col, bins = 10):\n yvals = df[y_col].unique()\n total_figs = len(cat_cols)\n fig_cols = 3\n fig_rows = int(total_figs/3)+1\n plt.figure(figsize=(15,fig_rows*3.8))\n fig = 1\n for col in cat_cols:\n if col == y_col:\n continue\n hist_df = pd.DataFrame()\n tmp_df = df[[y_col,col]].copy()\n hist_df[col] = tmp_df[col].unique() \n \n col_y_val_0 = df[df[y_col] == yvals[0]][[col, y_col]]\n col_y_val_1 = df[df[y_col] == yvals[1]][[col, y_col]]\n \n \n col_y_val_0_cnt = col_y_val_0.groupby(by = col,as_index = False).agg({y_col:'count'})\n col_y_val_0_cnt = col_y_val_0_cnt.rename(columns = {y_col: '0_prop'})\n \n col_y_val_1_cnt = col_y_val_1.groupby(by = col,as_index = False).agg({y_col:'count'})\n col_y_val_1_cnt = col_y_val_1_cnt.rename(columns = {y_col: '1_prop'})\n \n hist_df = hist_df.merge(col_y_val_0_cnt, how = 'left', on = col)\n hist_df = hist_df.merge(col_y_val_1_cnt, how = 'left', on = col)\n \n hist_df = 
hist_df.fillna(0)\n\n ax1 = plt.subplot(fig_rows,fig_cols,fig) \n ax1.bar(np.array(range(len(hist_df[col]))),hist_df['0_prop'] / sum(hist_df['0_prop']), width = 0.2,edgecolor='r', color='None', label = 'y = '+str(yvals[0]))\n ax1.bar(np.array(range(len(hist_df[col]))) + 0.2,hist_df['1_prop']/ sum(hist_df['1_prop']), width = 0.2,edgecolor='b', color='None',label = 'y = '+str(yvals[1]))\n plt.xticks(np.array(range(len(hist_df[col]))) + 0.2, hist_df[col]);\n plt.xticks(rotation = 45)\n plt.title(col)\n plt.legend()\n plt.tight_layout()\n \n if fig % 3 == 1:\n plt.ylabel('Proportion')\n \n fig += 1\n plt.show()\n\ndef two_group_histplot(raw_data, y_col, bins = 10):\n \"\"\"\n Histogram plots for each of two classes: y = 0 and y = 1\n \n Parameters:\n raw_data: pandas dataframe\n Including y column and x-features to plot\n \n y_col: string\n The column name of response y in raw_data\n \n bins: int, default = 10\n Number of bins in histogram plot\n \n Return:\n None\n \"\"\"\n numerical_cols, char_cols = col_types(raw_data, y_col, to_print = False)\n if len(numerical_cols) > 0:\n _two_group_histplot_num_var(raw_data, numerical_cols, y_col, bins = bins) \n \n if len(char_cols) > 0:\n _two_group_histplot_cat_var(raw_data, char_cols, y_col, bins = bins) \n\n\n\ndef heatmap_corr(corr, figsize=(15, 12)):\n \"\"\"\n Heatmap plot of pair-wise correlation coefficients\n \n Parameters:\n corr: rectangular dataset\n 2D dataset that can be coerced into an ndarray. 
\n If a Pandas DataFrame is provided, the index/column information will be used to label the columns and rows.\n \n figsize: tuple, default = (15,12)\n Figure size\n\n Return:\n None\n \"\"\"\n mask = np.zeros_like(corr)\n mask[np.triu_indices_from(mask)] = True\n\n with sns.axes_style(\"white\"):\n fig, ax = plt.subplots(figsize=figsize)\n sns.heatmap(\n corr,\n ax=ax,\n annot=True,\n mask=mask,\n square=True\n );\n\n\ndef col_types(df, y_col,to_print = True):\n numerical_cols = list()\n categorical_cols = list()\n col_types = df.dtypes\n for k, v in col_types.items():\n if k == y_col:\n continue\n if v in ['int64','float64']:\n numerical_cols.append(k)\n else:\n categorical_cols.append(k)\n if to_print:\n print('Numerical Columns are:')\n display( numerical_cols )\n\n print('\\n Categorical Columns are:')\n display( categorical_cols )\n \n return numerical_cols, categorical_cols\n\n\ndef iqr_outliers(df, col, q1 = None, q3 = None, alpha = 1.5):\n if q1 is None or q2 is None:\n q1 = df[col].quantile(0.25)\n q3 = df[col].quantile(0.75)\n tmp_iqr = q3 - q1\n \n outliers = len(df[(df[col] < q1 - tmp_iqr*alpha) | (df[col] > q3 + tmp_iqr*alpha)].index)\n\n return outliers\n\ndef data_summary(df, cols = None):\n if cols is not None:\n data_describe = df[cols].describe().transpose()\n else:\n data_describe = df.describe().transpose()\n \n df_missing = pd.DataFrame(df.isna().sum(), columns=['missing'])\n data_describe = data_describe.merge(df_missing, left_index = True, right_index = True )\n data_describe['missing %'] = data_describe['missing']/(data_describe['missing'] + data_describe['count'])*100\n data_describe['missing %'] = data_describe['missing %'].round(2)\n\n data_describe['1.5iqr outliers'] = data_describe.apply(lambda r: iqr_outliers(df, r.name), axis=1)\n data_describe['outliers %'] = data_describe['1.5iqr outliers'] / data_describe['count']*100\n data_describe['outliers %'] = data_describe['outliers %'].round(2)\n \n return data_describe\n\ndef 
missing_value_summary(df, y_col, cols = None):\n if cols is not None:\n tmp_df = df[cols].copy()\n else:\n tmp_df = df.copy()\n \n df_missing = pd.DataFrame(tmp_df.isna().sum(), columns=['missing'])\n df_missing['tot count'] = len(tmp_df.index)\n df_missing['missing %'] = df_missing['missing'] / df_missing['tot count'] *100\n df_missing['missing %'] = df_missing['missing %'].round(2)\n \n avg_by_missing = pd.DataFrame()\n for col in tmp_df.columns:\n tmp_df['missing_ind'] = tmp_df[col].isna()\n xx = tmp_df.groupby(by='missing_ind').agg({y_col: 'mean'}).transpose()\n xx.index = [col]\n xx.columns = [str(r) for r in xx.columns]\n avg_by_missing = avg_by_missing.append(xx)\n \n avg_by_missing = avg_by_missing.rename(columns={'False':'non-missing avg', 'True':'missing avg'})\n\n ans = df_missing[['tot count', 'missing', 'missing %']].merge(avg_by_missing, left_index = True, right_index = True)\n return ans\n\n\ndef scatter_plot(df,numerical_cols, y_col):\n \n if y_col in numerical_cols:\n numerical_cols.remove(y_col)\n total_figs = len(numerical_cols)\n fig_cols = 3\n fig_rows = int(total_figs/3)+1\n plt.figure(figsize=(15,fig_rows*3.8))\n fig = 1\n for col in numerical_cols:\n tmp_df = df[[y_col,col]].copy()\n plt.subplot(fig_rows,fig_cols,fig)\n plt.scatter(tmp_df[col],tmp_df[y_col])\n plt.title(col)\n #plt.xticks(rotation = 45)\n plt.tight_layout()\n fig += 1\n plt.show()\n plt.close()\n \n \ndef categorical_plot(df, char_cols, y_col):\n if y_col in char_cols:\n char_cols.remove(y_col)\n total_figs = len(char_cols)\n fig_cols = 3\n fig_rows = int(total_figs/3)+1\n plt.figure(figsize=(15,fig_rows*3.8))\n fig = 1\n for col in char_cols:\n tmp_df = df[[y_col,col]].copy()\n avg = tmp_df.groupby(by = col, as_index = False).agg({y_col: 'mean'})\n cnt = tmp_df.groupby(by = col, as_index = False).agg({y_col: 'count'})\n avg[col] = [str(i) for i in avg[col]]\n ax1 = plt.subplot(fig_rows,fig_cols,fig) \n\n ax2 = ax1.twinx()\n ax1.bar(cnt[col],cnt[y_col], color = 
'lightgray')\n ax2.plot(avg[col],avg[y_col], zorder = 10, linewidth = 2)\n ax1.yaxis.tick_right()\n ax2.yaxis.tick_left()\n plt.title(col)\n ax1.tick_params(axis='x', rotation = 45)\n plt.tight_layout()\n \n fig += 1\n plt.show()\n plt.close()\n \ndef bivariate_plot(df, y_col, cols = None, bins = 10, binary = False):\n \"\"\"\n Bivariate plot of x(feature) and y(response) for each x in df. Numerical features use scatter plot is used.Categorical features use bin plot. If y is binary, all features use bin plots.\n \n Parameters:\n df: pandas dataframe\n Including y column and x-features to plot\n \n cols: list\n List of column names for features to plot\n \n bins: int, default = 10\n Number of bins in histogram plot\n \n binary: boolean, default = False\n Indicates if the reponse is binary or not \n \n Return:\n None\n \"\"\" \n \n col_types = df.dtypes\n numerical_cols = set()\n char_cols = set()\n\n for k, v in col_types.items():\n if v not in ['int64','float64']:\n char_cols.add(k)\n else:\n numerical_cols.add(k)\n \n if cols is not None:\n char_cols = char_cols & set(cols)\n numerical_cols = numerical_cols & set(cols)\n \n if binary == False:\n scatter_plot(df,list(numerical_cols), y_col)\n else:\n bin_plot(df, list(numerical_cols), y_col, bins = bins, quantile = False)\n \n categorical_plot(df, char_cols, y_col)\n \n \ndef bin_plot(df, numerical_cols, y_col, bins = 10, quantile = False):\n total_figs = len(numerical_cols)\n fig_cols = 3\n fig_rows = int(total_figs/3)+1\n plt.figure(figsize=(15,fig_rows*3.8))\n fig = 1\n for col in numerical_cols:\n if col == y_col:\n continue\n tmp_df = df[[y_col,col]].copy()\n if len(tmp_df[col].unique())<= bins:\n tmp_df['bin'] = tmp_df[col]\n else:\n if quantile == True:\n tmp_df['bin'] = pd.qcut(tmp_df[col], q=bins)\n else:\n tmp_df['bin'] = pd.cut(tmp_df[col], bins=bins)\n\n avg = tmp_df.groupby(by = 'bin', as_index = False).agg({y_col: 'mean'})\n cnt = tmp_df.groupby(by = 'bin', as_index = False).agg({y_col: 'count'})\n 
avg['bin'] = [str(i) for i in avg['bin']]\n\n ax1 = plt.subplot(fig_rows,fig_cols,fig) \n ax2 = ax1.twinx()\n ax1.bar(avg['bin'],cnt[y_col], color = 'lightgray')\n\n ax2.plot(avg['bin'],avg[y_col], zorder = 10, linewidth = 2)\n\n ax1.yaxis.tick_right()\n ax2.yaxis.tick_left()\n plt.title(col)\n ax1.tick_params(axis='x', rotation = 45)\n plt.tight_layout()\n \n fig += 1\n\n plt.show()\n \n\ndef impute_missing(df, cols = None, numerical_impute = 'mean'):\n if cols is None:\n cols = df.columns\n \n tmp_df = df.copy()\n for k, v in tmp_df.dtypes.items():\n if k not in cols:\n continue\n if v in ['int64', 'float64']:\n if numerical_impute == 'mean':\n impute_v = tmp_df[k].mean()\n elif numerical_impute == 'median':\n impute_v = tmp_df[k].median()\n elif numerical_impute == 'mode':\n impute_v = tmp_df[k].mode()[0]\n else:\n impute_v = tmp_df[k].mode()[0]\n tmp_df[k] = tmp_df[k].fillna(impute_v)\n\n return tmp_df\n\n\n\ndef remove_outliers_iqr(df, cols = None, iqr_alpha = 1.5):\n df_copy = df.copy()\n col_types = df_copy.dtypes\n numerical_cols = set()\n \n for k, v in col_types.items():\n if v in ['int64','float64']:\n numerical_cols.add(k)\n if cols is not None:\n numerical_cols = numerical_cols & set(cols)\n \n for col in numerical_cols:\n q25 = df_copy[col].quantile(0.25)\n q75 = df_copy[col].quantile(0.75)\n \n iqr_v = q75 - q25\n \n low_lim = q25 - iqr_alpha * iqr_v\n up_lim = q75 + iqr_alpha * iqr_v\n \n df_copy = df_copy[(df_copy[col] >= low_lim) & (df_copy[col] <= up_lim)].copy()\n \n return df_copy"
},
{
"alpha_fraction": 0.6043058633804321,
"alphanum_fraction": 0.6100965142250061,
"avg_line_length": 32.01960754394531,
"blob_id": "c6525cc9103fe0946efc3e4f03e14cae9494ea25",
"content_id": "5785b78db731fafd466052a26d05e9ec1df751aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6735,
"license_type": "no_license",
"max_line_length": 143,
"num_lines": 204,
"path": "/src/sklearn_utils.py",
"repo_name": "eda606/eda",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nfrom statsmodels.stats.outliers_influence import variance_inflation_factor\nfrom sklearn.feature_selection import f_classif, f_regression\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression, LogisticRegressionCV\nfrom sklearn.metrics import roc_auc_score, roc_curve, f1_score, confusion_matrix\nfrom sklearn.metrics import plot_confusion_matrix, precision_score, recall_score, precision_recall_curve\nimport matplotlib.pyplot as plt\nfrom statsmodels.api import qqplot\nfrom sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error\n\n\ndef regression_metrics(estimator, X, y, show_qqplot = False):\n \"\"\"\n Regression model evaulation metrics including:\n R-squared\n Adjusted R-squared\n RMSE\n MAE\n QQ plot\n Pred-actual scatter plot\n \n Parameters:\n estimator: sklearn estimator\n X: array-like of shape (n_samples, n_features)\n Test samples.\n\n y: array-like of shape (n_samples,) or (n_samples, n_outputs)\n True labels for X.\n\n show_qqplot: Boolean, default False\n \n Return:\n Evaluation matrix: pandas dataframe\n \"\"\"\n \n X_copy = X.copy()\n X_copy['pred'] = estimator.predict(X)\n res_df = pd.DataFrame(columns=['metric','Score'])\n r_sq = estimator.score(X, y)\n res_df = res_df.append({'metric': 'R-Squared', 'Score': r_sq.round(4)}, ignore_index=True)\n \n adj_r_sq = 1- (1-r_sq)*(X.shape[0] - 1)/(X.shape[0] - X.shape[1] - 1)\n res_df = res_df.append({'metric': 'Adjusted R-Squared', 'Score': adj_r_sq.round(4)}, ignore_index=True)\n \n rmse = mean_squared_error(y, X_copy['pred'], squared = False)\n res_df = res_df.append({'metric': 'RMSE', 'Score': rmse.round(4)}, ignore_index=True) \n \n mae = mean_absolute_error(y, X_copy['pred'])\n res_df = res_df.append({'metric': 'MAE', 'Score': mae.round(4)}, ignore_index=True) \n \n display(res_df)\n \n plt.scatter(y, X_copy['pred'], label = 'model prediction')\n plt.plot(y, y, label = 'ideal')\n 
plt.legend()\n plt.xlabel('Actual')\n plt.ylabel('Prediction')\n plt.title('Pred vs Actul')\n plt.show()\n plt.close()\n \n if show_qqplot:\n residuals = X_copy['pred'] - y\n qqplot(residuals, fit=True, line='45');\n plt.title('QQ Plot')\n plt.show()\n plt.close()\n return res_df\n\n\ndef clf_metrics(estimator, X, y):\n \"\"\"\n Classification model evaulation metrics including:\n Acuracy\n AUC ROC\n Precision\n Recall\n F1 Score\n Pred-actual scatter plot\n \n Parameters:\n estimator: sklearn estimator\n X: array-like of shape (n_samples, n_features)\n Test samples.\n\n y: array-like of shape (n_samples,) or (n_samples, n_outputs)\n True labels for X.\n \n Return:\n Evaluation matrix: pandas dataframe\n \"\"\"\n X_copy = X.copy()\n X_copy['pred_label'] = estimator.predict(X)\n X_copy['pred_score'] = estimator.predict_proba(X)[:,1]\n \n res_df = pd.DataFrame(columns=['metric','Score'])\n \n acuracy_score = estimator.score(X, y)\n res_df = res_df.append({'metric': 'Acuracy', 'Score': acuracy_score.round(4)}, ignore_index=True)\n \n auc = roc_auc_score(y,X_copy['pred_score'])\n res_df = res_df.append({'metric': 'AUC ROC', 'Score': auc.round(4)},ignore_index=True)\n \n precision_sc = precision_score(y,X_copy['pred_label'])\n res_df = res_df.append({'metric': 'Precision', 'Score': precision_sc.round(4)},ignore_index=True)\n \n recall_sc = recall_score(y,X_copy['pred_label'])\n res_df = res_df.append({'metric': 'Recall', 'Score': recall_sc.round(4)},ignore_index=True )\n \n recall_pre_f1 = f1_score(y,X_copy['pred_label'])\n res_df = res_df.append({'metric': 'F1 Score', 'Score': recall_pre_f1.round(4)},ignore_index=True)\n \n display (res_df)\n \n fpr, tpr, thr = roc_curve(y,X_copy['pred_score'])\n plt.plot(fpr, tpr, label = 'model')\n plt.plot(fpr, fpr, '--', label = 'random')\n plt.xlabel('FPR')\n plt.ylabel('TPR')\n plt.title('ROC Curve')\n plt.legend()\n plt.show()\n plt.close()\n precision, recall, thre = precision_recall_curve(y,X_copy['pred_score'])\n 
plt.plot(recall, precision)\n plt.xlabel('Recall')\n plt.ylabel('Precision')\n plt.title('Recall - Precision Curve')\n plt.show() \n \n return res_df\n\n\n\ndef logistic_reg_coef_df(model_res, var_list):\n coef_df = pd.DataFrame(data = list(model_res.intercept_)+ list(model_res.coef_[0]), index = ['Intercept'] + var_list, columns=['Estimate'])\n return coef_df\n\ndef coef_df(model_res, var_list):\n coef_df = pd.DataFrame(data = [model_res.intercept_] + list(model_res.coef_), index = ['Intercept'] + var_list, columns=['Estimate'])\n return coef_df\n\ndef var_importance(estimator_res, model_vars):\n var_imp = estimator_res.feature_importances_\n var_imp_df = pd.DataFrame(data = var_imp.transpose(), index = model_vars, columns = ['Importance'])\n var_imp_df = var_imp_df.sort_values(by='Importance', ascending=False)\n var_imp_df['cumsum'] = var_imp_df.cumsum().round(3)\n return var_imp_df\n\n\n\ndef vif(df, var_list):\n \n tmp_df = df[var_list].copy()\n tmp_df['const'] = 1\n vifs = [variance_inflation_factor(tmp_df.values, i) \n for i in range(len(var_list) + 1)] \n vif_df = pd.DataFrame()\n vif_df['feature'] = list(tmp_df.columns)\n vif_df['VIF'] = vifs\n vif_df = vif_df[vif_df['feature'] != 'const'].sort_values(by = 'VIF', ascending = False)\n \n return vif_df\n\n\n\ndef drop_high_vif(df, cand_vars, vif_threshold = 5):\n tmp_vif_df = vif(df[cand_vars], cand_vars)\n if tmp_vif_df.iloc[0][1] < vif_threshold:\n return tmp_vif_df\n \n cand_vars_copy = cand_vars.copy()\n \n while tmp_vif_df.iloc[0][1] > vif_threshold:\n cand_vars_copy.remove(tmp_vif_df.iloc[0][0])\n tmp_vif_df = vif(df[cand_vars_copy], cand_vars_copy)\n \n display(tmp_vif_df)\n \n return tmp_vif_df\n\n\ndef univariate_f_classif(X,y):\n anova_df = pd.DataFrame(data = np.array(f_classif(X, y)).transpose(), \n index = X.columns, columns=['F-value', 'p-val']).round(4)\n \n res = anova_df.sort_values(by = 'F-value', ascending = False)\n \n display(res)\n \n return res\n\n\n\ndef univariate_f_regression(X,y):\n 
anova_df = pd.DataFrame(data = np.array(f_regression(X, y)).transpose(), \n index = X.columns, columns=['F-value', 'p-val']).round(4)\n \n res = anova_df.sort_values(by = 'F-value', ascending = False)\n display(res)\n \n return res"
},
{
"alpha_fraction": 0.7777777910232544,
"alphanum_fraction": 0.7777777910232544,
"avg_line_length": 22.33333396911621,
"blob_id": "96b64007b0cc7216e73593494fc550d2df9e2978",
"content_id": "d61110d659fb7b15d759262b3b54652060cb7f03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 72,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 3,
"path": "/README.md",
"repo_name": "eda606/eda",
"src_encoding": "UTF-8",
"text": "# Exploratory data analysis (EDA)\n\nEAD helper functions and templates\n\n\n"
}
] | 3
aowczarek618/PingPong | https://github.com/aowczarek618/PingPong | 27069820dfcabce2c633742033033b906d5f990d | a8e6d2d6ae108af1b52f46dd268b90158abb65b5 | f2caaaff9ebf0f5daad4426b3a3b2b814ba9d5af | refs/heads/master | 2022-09-12T14:51:24.306538 | 2020-06-03T06:20:01 | 2020-06-03T06:20:01 | 267,784,702 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7830508351325989,
"alphanum_fraction": 0.7932203412055969,
"avg_line_length": 35.875,
"blob_id": "d5bc1ef08b26585670b0ed60d00faae249e702d9",
"content_id": "f87f060785287b346ee2ce40241564ad0e982c1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 295,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 8,
"path": "/README.md",
"repo_name": "aowczarek618/PingPong",
"src_encoding": "UTF-8",
"text": "# PingPong\nPing pong game in Python\n\nI have written this game thanks to that youtube tutorial https://youtu.be/C6jJg9Zan7w. \nI added my own improvements:\n* Object-oriented approach instead of procedural,\n* Computer player implementation,\n* Speeding up the ball in real time for better gameplay.\n"
},
{
"alpha_fraction": 0.6202393770217896,
"alphanum_fraction": 0.6316648721694946,
"avg_line_length": 37.29166793823242,
"blob_id": "6474092fc833d402d199c521bfa571054b98c58b",
"content_id": "cf68119c641fdb8648a2b361a97e2cd01934e1b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1838,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 48,
"path": "/players.py",
"repo_name": "aowczarek618/PingPong",
"src_encoding": "UTF-8",
"text": "import random\n\nimport pingpong\n\n\nclass Player:\n \"\"\"Player class. Players has a paddle object attribute which can be moved by players.\"\"\"\n\n def __init__(self, coordinates, movement_speed=60):\n self.score = 0\n self.reaction_range = 0\n self.paddle = pingpong.Paddle(coordinates=coordinates)\n self.movement_speed = movement_speed\n\n def move_up(self):\n \"\"\"Method moves up the player's paddle\"\"\"\n self.paddle.sety(self.paddle.ycor() + self.movement_speed)\n\n def move_down(self):\n \"\"\"Method moves down the player's paddle\"\"\"\n self.paddle.sety(self.paddle.ycor() - self.movement_speed)\n\n\nclass ComputerPlayer(Player):\n \"\"\"Computer player class\"\"\"\n\n def __init__(self, coordinates):\n \"\"\"\n reaction_range variable has two number in pixel unit.\n They are used in computer_move() method to generate random number from this range.\n \"\"\"\n super().__init__(coordinates)\n self.reaction_range = (30, 100) # Two number should be greater than 30 (Recommendation).\n\n def computer_move(self, ball):\n \"\"\"\n This method is a 'brain' of computer in this game.\n Take an example for clarify reaction_range = (60, 100). In each frame of an animation,\n program generate random number from (60, 100) range (let's say 'x').\n If ball is 'x' pixels ahead or below of the paddle center,\n call a proper method (move_up() or move_down()).\n The goal is ball 'y' and paddle 'y' coordinates are equal.\n \"\"\"\n min_reaction, max_reaction = self.reaction_range\n if self.paddle.ycor() > ball.ycor() + random.uniform(min_reaction, max_reaction):\n self.move_down()\n elif self.paddle.ycor() < ball.ycor() - random.uniform(min_reaction, max_reaction):\n self.move_up()\n"
},
{
"alpha_fraction": 0.5582103729248047,
"alphanum_fraction": 0.5839775204658508,
"avg_line_length": 28.23972511291504,
"blob_id": "0f1f525ec91eb5c5f6926fea1f7cb4e1833ab6d7",
"content_id": "f46ff76ac53aa1330bf5b8be67e51d5295f86067",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4269,
"license_type": "no_license",
"max_line_length": 190,
"num_lines": 146,
"path": "/main.py",
"repo_name": "aowczarek618/PingPong",
"src_encoding": "UTF-8",
"text": "import os\nimport sys\nimport time\nimport turtle\n\nimport pingpong\nimport players\n\n\ndef show_score(pen, player1, player2):\n \"\"\"Function shows score on the screen.\"\"\"\n pen.clear()\n pen.write(f\"Player1: {player1.score} Player2: {player2.score}\", align=\"center\", font=(\"Courier\", 24, \"normal\"))\n\n\ndef paddle_ball_collisions(player, ball):\n \"\"\"Function checks paddle and ball collisions.\"\"\"\n if player.paddle.xcor() > 0:\n multiplication_factor = 1\n else:\n multiplication_factor = -1\n\n if abs(player.paddle.xcor()) - 20 < abs(ball.xcor()) < abs(player.paddle.xcor()) + 20 and player.paddle.ycor() + PADDLE_WIDTH / 2 > ball.ycor() > player.paddle.ycor() - PADDLE_WIDTH / 2:\n play_sound()\n ball.setx((abs(player.paddle.xcor()) - 20) * multiplication_factor)\n ball.delta_x *= -1\n\n\ndef play_sound():\n \"\"\"Function plays bounce ball sound.\"\"\"\n if PLATFORM == \"linux\":\n os.system(\"aplay -q bounce.wav&\")\n elif PLATFORM == \"darwin\":\n os.system(\"afplay -q bounce.wav&\")\n elif PLATFORM == \"windows\":\n winsound.PlaySound(\"bounce.wav\", winsound.SND_ASYNC)\n\n\ndef main():\n \"\"\"Main function\"\"\"\n # Creating a game screen.\n window = turtle.Screen()\n window.title(\"Ping Pong by @aowczarek618\")\n window.bgcolor(\"black\")\n window.setup(width=WIDTH, height=HEIGHT)\n window.tracer(0)\n\n # Creating players.\n player1 = players.Player(coordinates=(-350, 0))\n player2 = players.ComputerPlayer(coordinates=(350, 0))\n\n # Creating a ball.\n ball = pingpong.Ball(coordinates=(0, 0))\n\n # Pen used to write a score on the screen.\n pen = turtle.Turtle()\n pen.speed(0)\n pen.color(\"white\")\n pen.penup()\n pen.hideturtle()\n pen.goto(0, 260)\n show_score(pen, player1, player2)\n\n # Keyboard bindings.\n window.listen()\n if isinstance(player1, players.Player):\n window.onkeypress(player1.move_up, 'w')\n window.onkeypress(player1.move_down, 's')\n\n if isinstance(player2, players.Player):\n 
window.onkeypress(player2.move_up, 'Up')\n window.onkeypress(player2.move_down, 'Down')\n\n # Main game loop.\n while True:\n window.update()\n\n # Moving and speeding up the ball.\n ball.move()\n ball.speed_up()\n\n # Checking border collisions.\n if ball.ycor() > HEIGHT / 2 - 10:\n play_sound()\n ball.sety(HEIGHT / 2 - 10)\n ball.delta_y *= -1\n\n if ball.ycor() < -HEIGHT / 2 + 10:\n play_sound()\n ball.sety(-HEIGHT / 2 + 10)\n ball.delta_y *= -1\n\n if ball.xcor() > WIDTH / 2 - 10:\n ball.reset()\n player1.score += 1\n show_score(pen, player1, player2)\n\n if ball.xcor() < -WIDTH / 2 + 10:\n ball.reset()\n player2.score += 1\n show_score(pen, player1, player2)\n\n # Checking paddles and the ball collisions.\n if ball.xcor() < 0:\n paddle_ball_collisions(player1, ball)\n else:\n paddle_ball_collisions(player2, ball)\n\n # Computer move.\n if isinstance(player1, players.ComputerPlayer):\n player1.computer_move(ball)\n\n if isinstance(player2, players.ComputerPlayer):\n player2.computer_move(ball)\n\n # Condition of the game end and instructions to do then.\n if WIN_SCORE in (player1.score, player2.score):\n pen.clear()\n if player1.score == WIN_SCORE:\n pen.write(\"Player1 is a winner!\", align=\"center\", font=(\"Courier\", 24, \"normal\"))\n else:\n pen.write(\"Player2 is a winner!\", align=\"center\", font=(\"Courier\", 24, \"normal\"))\n time.sleep(5)\n break\n\n time.sleep(0.01) # Thanks to that, game run same on any device.\n\n\nif __name__ == '__main__':\n\n # Checking a platform type (Linux, Mac OS) to choose a proper play sound command.\n if sys.platform == \"linux\":\n PLATFORM = \"linux\"\n elif sys.platform == \"darwin\":\n PLATFORM = 'darwin'\n elif sys.platform == \"windows\":\n PLATFORM = 'windows'\n import winsound\n\n WIDTH, HEIGHT = (800, 600)\n WIN_SCORE = 11\n BALL_RADIUS = 10\n STRECH_WID = 5\n PADDLE_WIDTH = 24 * STRECH_WID\n\n main()\n"
},
{
"alpha_fraction": 0.588200569152832,
"alphanum_fraction": 0.5952802300453186,
"avg_line_length": 29.81818199157715,
"blob_id": "3e7648ff4c374da9dddee115bdf8e4e06bbec813",
"content_id": "b1c73824f287e7b2f772809ce38e336020954392",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1695,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 55,
"path": "/pingpong.py",
"repo_name": "aowczarek618/PingPong",
"src_encoding": "UTF-8",
"text": "import random\nimport turtle\n\n\nclass Paddle(turtle.Turtle):\n \"\"\"Paddle class: making a paddle object on the screen at given coordinates\"\"\"\n\n def __init__(self, coordinates, strech_wid=5):\n super().__init__()\n self.speed(0)\n self.shape(\"square\")\n self.color(\"white\")\n self.shapesize(stretch_wid=strech_wid, stretch_len=1)\n self.penup()\n self.goto(coordinates)\n\n\nclass Ball(turtle.Turtle):\n \"\"\"Ball class: making a ball object on the screen with certain speed.\"\"\"\n speed_parametr = 0.01\n initial_speed = 5\n\n def __init__(self, coordinates):\n super().__init__()\n self.speed(0)\n self.shape(\"circle\")\n self.color(\"orange\")\n self.penup()\n self.goto(coordinates)\n\n self.delta_x = random.uniform(-self.initial_speed, self.initial_speed)\n self.delta_y = random.uniform(-self.initial_speed, self.initial_speed)\n\n def move(self):\n \"\"\"Method responsible for moving the ball\"\"\"\n self.setx(self.xcor() + self.delta_x)\n self.sety(self.ycor() + self.delta_y)\n\n def reset(self):\n \"\"\"Method resets the ball position and speed\"\"\"\n self.delta_x = random.uniform(-self.initial_speed, self.initial_speed)\n self.delta_y = random.uniform(-self.initial_speed, self.initial_speed)\n self.goto((0, 0))\n\n def speed_up(self):\n \"\"\"Method speeds up the ball\"\"\"\n if self.delta_x > 0:\n self.delta_x += self.speed_parametr\n else:\n self.delta_x -= self.speed_parametr\n\n if self.delta_y > 0:\n self.delta_y += self.speed_parametr\n else:\n self.delta_y -= self.speed_parametr\n"
}
] | 4 |
shraddhamachchhar/Storm-Data-Analysis | https://github.com/shraddhamachchhar/Storm-Data-Analysis | 00982661beb7da6a426174457e309f4c99764ab8 | 13dff09b4f4f064f17e0d6badb825362d1edc4be | 0003725aabace06796df525a1534e892c9b7f81d | refs/heads/master | 2021-09-06T21:40:17.135147 | 2018-02-12T00:55:28 | 2018-02-12T00:55:28 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.623429000377655,
"alphanum_fraction": 0.6316101551055908,
"avg_line_length": 38.21395492553711,
"blob_id": "ceb0aaedb8e342d42b6e5d3d77f12673f21cdf26",
"content_id": "ca610df0c3aa833738f10b4ec06204bf964666eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8434,
"license_type": "no_license",
"max_line_length": 171,
"num_lines": 215,
"path": "/Storm Analysis.py",
"repo_name": "shraddhamachchhar/Storm-Data-Analysis",
"src_encoding": "UTF-8",
"text": "#Assignment 2 participants list\n# Shraddha Machchhar\n# Shruti S. Sankolli\n\nfrom pygeodesy import ellipsoidalVincenty as ev\nfrom datetime import datetime\n\n#Intitialize global variables\ndate_pos = 0\ntime_pos=1\nstorm_name_pos = 2\nlandFall_count=0\nlandFall_pos=2\nwind_pos=6\ndict_wind={}\ndata_set_lon_list=[]\ndata_set_lat_list=[]\ndata_set_cur_time_list=[]\ndata_set_speed_list=[]\nlon_pos=4\nlat_pos=5\ndict_strom_per_year={}\ndict_hurricane_per_year={}\ndict_storm_info={}\nhurricane_pos=3\nhurricane_count=0\nhurricane_found_flag=False\n\ndef mean(real_num_list: list)-> float:\n '''\n This function calculates the mean speed of a particular storm\n :param real_num_list: Inputs a list of speeds observed during the path for a particular storm\n :return: The mean speed for that storm\n '''\n sum = 0\n for each in real_num_list:\n sum += each\n return sum/len(real_num_list)\n\ndef findMaxWindAndDate(dict_wind: dict):\n '''\n This functions calculates the maximum speed observed for a partiuclar storm and the date/time at which that occured.\n :param dict_wind: Inputs a dictionary having the dates and times observed for the different tracks for a particular storm\n :return: the max speed obsrved for the storm and the date and time at which the max speed was observed.\n '''\n max_wind_speed = 0\n max_wind_date=\"\"\n max_wind_time=\"\"\n FMT = '%Y%m%d %H%M'\n for key, value in dict_wind.items():\n if int(value) > int(max_wind_speed):\n max_wind_speed = int(value)\n max_wind_date=key[0]\n max_wind_time=key[1]\n print(\"Date/Time when the highest maximum sustained wind was observed:\", max_wind_date,max_wind_time)\n return max_wind_date,max_wind_speed\n\ndef calulate_distance_and_speed(lat_list:list, lon_list:list, data_set_cur_time_list:list):\n '''\n Calculates the maximum and mean speed the storm centre has moved. 
Also calculates the total distance the storm has moved.\n :param lat_list: Inputs a list of latitudes\n :param lon_list: Inputs a list of latitudes\n :param data_set_cur_time_list: Inputs a list of times\n :return:\n '''\n dist_meters=0\n tot_dist_meters =0\n for i in range(len(lon_list)-1):\n ini_pos = ev.LatLon(lat_list[i], lon_list[i])\n fin_pos = ev.LatLon(lat_list[i+1], lon_list[i+1])\n try:\n dist_meters = ini_pos.distanceTo(fin_pos)/1852 #Convert to nautical miles\n except ev.VincentyError:\n dist_meters =0\n tot_dist_meters = tot_dist_meters + dist_meters\n time_diff = calc_time_diff(data_set_cur_time_list[i],data_set_cur_time_list[i+1])\n data_set_speed_list.append(dist_meters/time_diff)\n if not data_set_speed_list :\n data_set_speed_list.append(0.0)\n print(\"Max speed is: \",round(max(data_set_speed_list),5),\" Nautical Miles/Minute\")\n print(\"Mean speed is: \",round(mean(data_set_speed_list),5),\" Nautical Miles/Minute\")\n print(\"Total distance the storm was tracked\",round((tot_dist_meters),5),\" Nautical Miles\")\n return tot_dist_meters\n\ndef calc_time_diff(date1:str,date2:str):\n '''\n This function calculates the time lag for a particular storm\n :param date1: Inputs a date the storm started\n :param date2: Inputs the date the storm ended\n :return: The time lag for the storm in seconds\n '''\n FMT = '%Y%m%d %H%M'\n tdelta = datetime.strptime(date2, FMT) - datetime.strptime(date1, FMT)\n return tdelta.total_seconds()/60\n\ndef calc_storm_per_year(start_date:str,end_date:str):\n '''\n This function calculates the number of storms per year and stores them in a dictionary with key being the year and value being the storm count\n :param start_date: Inputs the date the storm started\n :param end_date: Inouts the date the storm ended\n '''\n storm_start_year = str(start_date[:4])\n storm_end_year = str(end_date[:4])\n if storm_start_year in dict_strom_per_year:\n dict_strom_per_year[storm_start_year] +=1\n else:\n 
dict_strom_per_year[storm_start_year] = 1\n if storm_start_year!= storm_end_year:\n if storm_end_year in dict_strom_per_year:\n dict_strom_per_year[storm_end_year] += 1\n else:\n dict_strom_per_year[storm_end_year] = 1\n\ndef calc_hurricane_per_year(start_date:str,end_date:str,hurricane_found_flag:bool):\n '''\n This function calculates the number of hurricanes per year and stores them in a dictionary with key being the year and value being the hurricane count.\n :param start_date:\n :param end_date:\n :param hurricane_found_flag:\n :return:\n '''\n if hurricane_found_flag:\n storm_start_year = str(start_date[:4])\n storm_end_year = str(end_date[:4])\n if storm_start_year in dict_hurricane_per_year:\n dict_hurricane_per_year[storm_start_year] +=1\n else:\n dict_hurricane_per_year[storm_start_year] = 1\n if storm_start_year!= storm_end_year:\n if storm_end_year in dict_hurricane_per_year:\n dict_hurricane_per_year[storm_end_year] += 1\n else:\n dict_hurricane_per_year[storm_end_year] = 1\n\n# We take the file name as input from the user\nfile_name=input(\"Enter the data file name :\")\n\nwith open(file_name, 'r',buffering=1000) as input_file:\n list_data = input_file.readlines()\n file_size = list_data.__len__()\n i=0\n if list_data[file_size-1] == '\\n':\n file_size = file_size - 1\n while(i< file_size):\n\n #Clean the data sets before each iteration\n data_set_lon_list = []\n data_set_lat_list = []\n data_set_cur_time_list = []\n data_set_speed_list=[]\n dict_wind={}\n\n #Initialize a flag to check if hurricane for a year is alraedy present. 
If True, that year is ignored.\n hurricane_found_flag=False\n\n #Iterate over each row as set of columns separated by ','\n header_data = list_data[i].split(\",\")\n\n #Get the Storm Name\n storm_name = header_data[1]\n\n #Counter to keep track of the no of records available for each storm\n data_set_len = int(header_data[storm_name_pos])\n\n #Counter to keep track of no of landfalls observed\n landFall_count = 0\n\n i=j=i+1\n while(j < i+data_set_len):\n data_row = list_data[j].split(\",\")\n if(j==i):\n start_date = data_row[date_pos]\n start_time = data_row[time_pos]\n if(j==i+data_set_len-1):\n end_date = data_row[date_pos]\n end_time = data_row[time_pos]\n\n #Check to see if Landfall observed. If yes, then the counter is incremented by 1\n if(\"L\" == data_row[landFall_pos].strip()):\n landFall_count=landFall_count+1\n\n dict_wind[data_row[0],data_row[1]] = data_row[wind_pos]\n data_set_lat_list.append(data_row[4])\n data_set_lon_list.append(data_row[5])\n data_set_cur_time_list.append(data_row[date_pos]+data_row[time_pos])\n\n #Check to see if hurricane observed. 
If yes, flag is set to True\n if(hurricane_found_flag == False and data_row[hurricane_pos].strip() == 'HU'):\n hurricane_found_flag = True\n j=j+1\n\n #Print summary information for each storm\n print(\"Storm Name: \",storm_name.strip())\n print(\"Date Range for the Storm: \",datetime.strptime(str(start_date+\" \"+start_time),'%Y%m%d %H%M'),\"-\",datetime.strptime(str(end_date+\" \"+end_time),'%Y%m%d %H%M'))\n print(\"Highest Maximum Sustained Wind :\",max(list(dict_wind.values())),\" knots\")\n str(findMaxWindAndDate(dict_wind))\n print(\"No of times Landfalls observed:\",landFall_count)\n\n\n #Get the total no of storms observed per year\n calc_storm_per_year(start_date, end_date)\n\n #Get the total no of hurricanes observed per year\n calc_hurricane_per_year(start_date, end_date, hurricane_found_flag)\n\n # get the maximum and mean speed the storm centre moved\n calulate_distance_and_speed(data_set_lat_list,data_set_lon_list, data_set_cur_time_list)\n print(\"--------------------------------------------------\")\n i=j #reference to new dataset\n\n print(\"\\n No of Storms observed per year:\")\n print(\"\\n\",dict_strom_per_year)\n print(\"------------------------------------------------\")\n print(\"\\n No of Hurricanes observed per year:\")\n print(\"\\n\", dict_hurricane_per_year)\n\n\n\n"
}
] | 1 |
josthoma/Bitcoin-Checker | https://github.com/josthoma/Bitcoin-Checker | 1e53c8442cbfe4e17954fbfd60f231cf3ef0fb2c | 1a2e5647003277ca771bae8d2beec77074911bc2 | d36ae87c41ea36d86248217b155b37bcee3b4676 | refs/heads/master | 2021-05-24T10:30:02.865373 | 2020-04-06T14:27:04 | 2020-04-06T14:27:04 | 253,519,952 | 1 | 2 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6193656325340271,
"alphanum_fraction": 0.6282693147659302,
"avg_line_length": 31.672727584838867,
"blob_id": "283bffb189996b4d13f5fb923ca44fc90dd2fcc2",
"content_id": "020d1a22ffd15e849a5af5fa4c2c9d2f6f6b2a1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1797,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 55,
"path": "/bitcoin_check.py",
"repo_name": "josthoma/Bitcoin-Checker",
"src_encoding": "UTF-8",
"text": "import requests\nimport urllib.request\nimport time\nfrom bs4 import BeautifulSoup\nfrom pprint import pprint as pp\nimport smtplib, ssl\n\nport = 465 # For SSL\nsmtp_server = \"smtp.gmail.com\"\nsender_email = \"\" # Enter your address\nreceiver_email = \"\" # Enter receiver address\nreceiver_email2 = ''\npassword = ''\n\nwhile True:\n page = requests.get('https://markets.businessinsider.com/currencies/btc-usd').text\n\n #\n # url = 'https://api.coinmarketcap.com/v1/ticker/bitcoin/'\n # response = requests.get(url)\n # print(response)\n soup = BeautifulSoup(page, \"lxml\")\n # print(soup.prettify())\n # print(soup.findAll('a'))\n # print(soup)\n\n con = soup.find('span', {\"class\": \"push-data\"})\n content = (con.text).replace(',','')\n print(float(content))\n # pp(content.text)\n if (float(content) < 67):\n message = \"\"\"\\\n Subject: BUY :Bitcoin Price Change \n \n Time to Buy.\n Current Price = \"\"\" +str(content)\n context = ssl.create_default_context()\n with smtplib.SMTP_SSL(smtp_server, port, context=context) as server:\n server.login(sender_email, password)\n server.sendmail(sender_email, receiver_email, message)\n server.sendmail(sender_email, receiver_email2, message)\n elif (float(content) > 9800):\n message = \"\"\"\\\n Subject: SELL :Bitcoin Price Change \n\n Time to Sell.\n Current Price = \"\"\" + str(content)\n context = ssl.create_default_context()\n with smtplib.SMTP_SSL(smtp_server, port, context=context) as server:\n server.login(sender_email, password)\n server.sendmail(sender_email, receiver_email, message)\n server.sendmail(sender_email, receiver_email2, message)\n else:\n time.sleep(40)\n continue\n"
},
{
"alpha_fraction": 0.8080000281333923,
"alphanum_fraction": 0.8080000281333923,
"avg_line_length": 61,
"blob_id": "18d0cd4db36ee8774b9343a869dfcb7afd549c59",
"content_id": "45c633bbaaa9ba710a6afd7e370508021442f5e1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 125,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 2,
"path": "/README.md",
"repo_name": "josthoma/Bitcoin-Checker",
"src_encoding": "UTF-8",
"text": "# Bitcoin-Checker\nBasic Script to scrape web for bitcoin prices and use an SMTP server to send emails on large fluctuations \n"
}
] | 2 |
C-CCM-TC1028-111-2113/homework-4-LuisFernandoGonzalezCortes | https://github.com/C-CCM-TC1028-111-2113/homework-4-LuisFernandoGonzalezCortes | 6f8fb7690946525a69b6abec0cead5bdadda90a2 | 78cd573b1d67159de757b5f87222b64d5e8a4580 | 14904a899815637243f48c61cf9249d5b5bc6434 | refs/heads/main | 2023-07-27T12:29:33.452172 | 2021-09-11T04:47:35 | 2021-09-11T04:47:35 | 403,410,982 | 0 | 0 | null | 2021-09-05T20:55:26 | 2021-09-05T20:55:30 | 2021-09-05T20:55:31 | Python |
[
{
"alpha_fraction": 0.41558441519737244,
"alphanum_fraction": 0.4220779240131378,
"avg_line_length": 20.85714340209961,
"blob_id": "7cae0b5364180c10fc84cd2bed8de0327c1b4054",
"content_id": "7d493f15c50ace6cf6f5409ddf3e9ddb3eec2ed8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 154,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 7,
"path": "/assignments/25TrianguloAsteriscos/src/exercise.py",
"repo_name": "C-CCM-TC1028-111-2113/homework-4-LuisFernandoGonzalezCortes",
"src_encoding": "UTF-8",
"text": "\ndef main():\n x=int(input(\"Dame un numero: \"))\n for i in range (x+1):\n y=x-i\n print (\" \"*y+\"*\"*i)\nif __name__=='__main__':\n main()\n"
},
{
"alpha_fraction": 0.4588235318660736,
"alphanum_fraction": 0.47058823704719543,
"avg_line_length": 20.25,
"blob_id": "4b2805f150ee5df87a42d37f0f010a615acb8949",
"content_id": "cb0e348a33e482a6da106ee4d11b28aa7cc026bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 170,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 8,
"path": "/assignments/10.1AlternaCaracteresContador/src/exercise.py",
"repo_name": "C-CCM-TC1028-111-2113/homework-4-LuisFernandoGonzalezCortes",
"src_encoding": "UTF-8",
"text": "def main():\n x=int(input(\"Dame un valor entero positivo: \"))\n y=x+1\n for i in range (1,y):\n \n print(str(i))\nif __name__=='__main__': \n main()\n"
},
{
"alpha_fraction": 0.3529411852359772,
"alphanum_fraction": 0.3803921639919281,
"avg_line_length": 14.875,
"blob_id": "ff07da054d03a80679c3660cb006e94123d1531f",
"content_id": "f9edb4aaeca89006eafb0cdaf7bda3e849d18690",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 255,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 16,
"path": "/assignments/28Fibonacci/src/exercise.py",
"repo_name": "C-CCM-TC1028-111-2113/homework-4-LuisFernandoGonzalezCortes",
"src_encoding": "UTF-8",
"text": "\ndef main():\n x = int(input(\"Enter a number: \"))\n y=1\n z=1\n print (\"0\")\n print (\"1\")\n print (\"1\")\n i=0\n while i<x:\n num=y+z\n y=z\n z=num\n print (str(num))\n i+=1\nif __name__=='__main__':\n main()\n"
},
{
"alpha_fraction": 0.5199999809265137,
"alphanum_fraction": 0.5199999809265137,
"avg_line_length": 20.428571701049805,
"blob_id": "ac25c0a92399eeebbc5e4cf43021456e70b3081a",
"content_id": "7188638b4d34145f26d3c02f20dad6f353de2554",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 150,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 7,
"path": "/assignments/17NCuadradoMayor/src/exercise.py",
"repo_name": "C-CCM-TC1028-111-2113/homework-4-LuisFernandoGonzalezCortes",
"src_encoding": "UTF-8",
"text": "import math\ndef main():\n num = int(input(\"escrube un numero: \"))\n x= math.sqrt(num)\n print(\"\", int(x))\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.31753554940223694,
"alphanum_fraction": 0.3364928960800171,
"avg_line_length": 15.230769157409668,
"blob_id": "7cc3d968aa0282ca01b11e6012c48c089b9ffc64",
"content_id": "f9c3aebafb7073b4eeb673aac12cc92af08efd30",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 211,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 13,
"path": "/assignments/06PromedioConDecision/src/exercise.py",
"repo_name": "C-CCM-TC1028-111-2113/homework-4-LuisFernandoGonzalezCortes",
"src_encoding": "UTF-8",
"text": "def main():\n x=float(input(\"\"))\n y=1\n z=x\n while x >= 0:\n x=float(input(\"\"))\n if x >= 0:\n z+=x\n y+=1\n p=z/y\n print(\"\",p)\nif __name__=='__main__':\n main()\n"
}
] | 5 |
TEKKEN199/cheacker_inst | https://github.com/TEKKEN199/cheacker_inst | ee9105627cd5aeeb9a4603bd58bde33f3b955700 | 913d5317078fddb6afcf6e4bcf1a1d344a125749 | 9d580b8a7855878458add5e9ca119ba51a42e059 | refs/heads/main | 2023-03-14T15:16:20.630604 | 2021-03-09T14:38:52 | 2021-03-09T14:38:52 | 346,034,838 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5837245583534241,
"alphanum_fraction": 0.5946791768074036,
"avg_line_length": 22.66666603088379,
"blob_id": "5de32bf36f815600dff48b0401f832d679587b12",
"content_id": "067a3b1d9156f12cc90f42efa40f69197ffb6afa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 737,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 27,
"path": "/cheacker/crate.py",
"repo_name": "TEKKEN199/cheacker_inst",
"src_encoding": "UTF-8",
"text": "import os\nimport random\nimport sys\nimport time\n\n\n# ================================================\nname = input(\"name File:\")\nuesr = '' # اليوزر المراد التخمين عليه بين النقطتين اكتبه\nchars2 = 'qazwsxedcrfvtgbyhnujmiklop1234567890' # ارقام واحرف لو ترغب\namount = input('كم عدد كلمات المرور؟')\namount = int(amount)\n\nlength2 = input('ما طول كلمة المرور التي تريدها؟')\nlength2 = int(length2)\n\nfor password in range(amount):\n password = ''\n\n for item in range(length2):\n password = ''\n for item in range(length2):\n password += random.choice(chars2)\n\n print(uesr + password)\n with open(name, 'a') as x:\n x.write(uesr + password + '\\n')\n"
},
{
"alpha_fraction": 0.44690266251564026,
"alphanum_fraction": 0.49852508306503296,
"avg_line_length": 33.769229888916016,
"blob_id": "658198b84a78e04309c3000aaf42c1a157849860",
"content_id": "f5b4001ea2ce2a6e3838014ee3151e4b6853aee2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1356,
"license_type": "no_license",
"max_line_length": 140,
"num_lines": 39,
"path": "/cheacker/cheacker.py",
"repo_name": "TEKKEN199/cheacker_inst",
"src_encoding": "UTF-8",
"text": "import requests\nimport time\n\ndef cheacker():\n url = \"https://www.instagram.com/accounts/web_create_ajax/attempt/\"\n headers = {\n \"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36\",\n \"x-csrftoken\": \"missing\",\n #\"mid\": \"YC4O0gALAAHz26FFAmBWJzSv0A\"\n }\n file = str(input(\"name list :\"))\n list = open(file, \"r\")\n n = 0\n while True:\n n += 1\n username = list.readline().split(\"\\n\")[0]\n data = {\n \"email\": \"\",\n \"username\": username,\n \"first_name\": \"\",\n \"client_id\": \"YC4O0gALAAHz26FFAmBWJzSv0A-4\",\n \"seamless_login_enabled\": \"1\",\n \"opt_into_one_tap\": \"false\"\n }\n x = requests.post(url, headers=headers, data=data)\n time.sleep(1)\n if ('\"spam\":true') in x.text:\n exit()\n xx = str(x.text).split()\n xxx = str(xx[25] + \" \" + xx[26]).split(\",\")\n xxxx = xxx[0]\n if xxxx == '\"username_suggestions\": []':\n print(\"\\033[0;96m\" + str(n) + \"[+] Found -->> \" + username + \"\\033[0m\")\n with open('accountfound.txt', 'a') as x:\n x.write(username + '\\n')\n else:\n print(\"\\033[0;91m\" + str(n) + \"[+] not found -->> \" + username + \"\\033[0m\")\n\ncheacker()\n"
}
] | 2 |
lug0lug0/logs-analysis | https://github.com/lug0lug0/logs-analysis | 92783817f1c49029ed6a565425bdc1c61cc04d0a | b769791d588f1b06b208511a2f476bdbd99f00b0 | 77ae2a10a0f841fb1c9be7ebfa6fd67d313ad1e2 | refs/heads/master | 2021-06-19T01:50:58.379497 | 2017-06-28T15:18:39 | 2017-06-28T15:18:39 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5882784128189087,
"alphanum_fraction": 0.5985348224639893,
"avg_line_length": 27.144329071044922,
"blob_id": "fb0955d863f39d04a6fe7fd74245500a0823f120",
"content_id": "d5f02b442feb71b53fad96ff4149ab7b42e75d46",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2730,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 97,
"path": "/logs_analysis.py",
"repo_name": "lug0lug0/logs-analysis",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nfrom functools import wraps\nimport psycopg2\n\n\ndef db_wrap(func):\n '''Wrap function in PostgreSQL transaction.\n\n Connects to database, creates a cursor, begins transaction, executes\n function then closes connection.\n Wrapped function needs cursor as first arg - other are preserved.\n\n Args:\n func: function to wrap with first argument as cursor\n '''\n @wraps(func)\n def connected_func(*args, **kwargs):\n conn = connect()\n c = conn.cursor()\n try:\n c.execute(\"BEGIN\")\n transaction = func(c, *args, **kwargs) # Pass cursor to func\n conn.commit()\n except:\n conn.rollback() # Prevent any incorrect transactions on any error\n raise\n finally:\n c.close()\n conn.close()\n return transaction\n return connected_func\n\n\ndef connect():\n \"\"\"Connect to the PostgreSQL database. Returns a database connection.\"\"\"\n return psycopg2.connect(dbname=\"news\")\n\n\n@db_wrap\ndef top_3_articles(cursor):\n \"\"\"Gets the top 3 most viewed articles of all time.\n\n Returns:\n List of tuples in form (article_title, views)\n \"\"\"\n cursor.execute(\"SELECT title, views FROM top_views LIMIT 3;\")\n result = cursor.fetchall()\n return result\n\n\n@db_wrap\ndef top_3_authors(cursor):\n \"\"\"Gets the top 3 most viewed authors of all time.\n\n Returns:\n List of tuples in form (author_name, views)\n \"\"\"\n cursor.execute('''SELECT DISTINCT name,\n sum(views) AS views\n FROM top_views JOIN authors\n ON (top_views.author = authors.id)\n GROUP BY name\n ORDER BY views DESC\n LIMIT 3;''')\n result = cursor.fetchall()\n return result\n\n\n@db_wrap\ndef large_request_errors(cursor):\n \"\"\"Gets any day with > 1 percent HTTP request errors.\n\n Returns:\n List of tuples in form (date, error_percentage)\n \"\"\"\n cursor.execute('''SELECT day, error_percent\n FROM request_stats\n WHERE error_percent > 1\n ORDER BY error_percent DESC;''')\n result = cursor.fetchall()\n return result\n\n\ntop_articles = top_3_articles()\ntop_authors = 
top_3_authors()\nlarge_request_errors = large_request_errors()\n\nprint(\"The top 3 articles are:\")\nfor article in top_articles:\n print(\"\\\"{0}\\\" - {1} views\".format(article[0], article[1]))\nprint(\"The top 3 authors are:\")\nfor author in top_authors:\n print(\"{0} - {1} views\".format(author[0], author[1]))\nprint(\"Days with > 1% HTTP request errors are:\")\nfor day in large_request_errors:\n print(\"{0} - {1}%\".format(day[0], day[1]))\n"
},
{
"alpha_fraction": 0.690224289894104,
"alphanum_fraction": 0.7147693634033203,
"avg_line_length": 37.1129035949707,
"blob_id": "fc2aab6880ef3a93d4e0f82ecabafdf99a04ff03",
"content_id": "c611614508589f2cf15e8c4018614fe459ab0200",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2363,
"license_type": "no_license",
"max_line_length": 180,
"num_lines": 62,
"path": "/README.md",
"repo_name": "lug0lug0/logs-analysis",
"src_encoding": "UTF-8",
"text": "# Logs Analysis\n\n---------------------\nReporting tool to output plain text reports on the data in the `newsdata.sql` database.\n\n## Requirements\n* Python3\n* PostgreSQL\n* Vagrant\n* Virtual Box\n\n## Usage\n1. Ensure [Vagrant](https://www.vagrantup.com/), [Virtual Box](https://www.virtualbox.org/) and [Python](https://www.python.org/) are installed on your machine.\n2. Clone the Udacity [fullstack-nanodegree-vm](https://github.com/udacity/fullstack-nanodegree-vm)\n3. Delete the `/tournament` directory in the clone.\n4. [Clone](https://github.com/SteadBytes/logs-analysis.git) (or [download](https://github.com/SteadBytes/logs-analysis/archive/master.zip)) this repo into the `/vagrant` directory.\n5. [Download](https://d17h27t6h515a5.cloudfront.net/topher/2016/August/57b5f748_newsdata/newsdata.zip) the `newsdata.sql` data file.\n6. Extract zip contents into cloned `/vagrant/udacity-fsnd-logs-analysis` directory\n7. Launch the VM:\n * `vagrant$ vagrant up`\n8. SSH into the VM:\n * On Mac/Linux `vagrant$ vagrant ssh`\n * Gives SSH connection details on windows\n * Windows use Putty or similar SSH client\n9. In the VM navigate to the `/vagrant/udacity-fsnd-logs-analysis` directory:\n * `$ cd /vagrant/tournament`\n10. Load the data into the `news` database already in the VM:\n * `$psql -d news -f newsdata.sql`\n11. Run the two `CREATE VIEW` statements in the [Database Views](#database-views) section.\n12. 
Run python report script:\n * `$ python3 logs_analysis.py`\n\n## Database Views\n* **top_views**:\n ```\n CREATE VIEW top_views AS SELECT\n articles.title, articles.id, articles.author,\n (SELECT count(log.path)\n FROM log\n WHERE log.path::text LIKE '%'||articles.slug::text) AS views\n FROM articles\n ORDER BY views DESC;\n ```\n* **request_stats**\n```\nCREATE VIEW request_stats AS SELECT\nrequests.day, requests.requests, errors.errors,\nROUND(errors * 100.0 / requests, 1) AS error_percent\nFROM(\n (SELECT log.time::date AS day,\n count(*) AS requests\n FROM log\n GROUP BY 1\n ORDER BY requests DESC) requests\n JOIN\n (SELECT log.time::date AS day2,\n count(*) AS errors\n FROM log\n WHERE log.status::text != '200 OK'\n group by 1 order by errors DESC) errors\n ON day = day2);\n```\n"
}
] | 2 |
marcindrozd/Stopwatch | https://github.com/marcindrozd/Stopwatch | ca6ace8c22fb8c86e8ec4c40d433ff1d8d5617f4 | 8dedecb8714ca5fdc79266c179d0311aa970bd48 | c28b60032348e642545b68dc70a4bade1d5e4014 | refs/heads/master | 2016-09-06T02:54:33.391275 | 2014-02-11T08:00:14 | 2014-02-11T08:00:14 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6032225489616394,
"alphanum_fraction": 0.6329305171966553,
"avg_line_length": 25.58333396911621,
"blob_id": "f489489ced5115c639f7051d2c3acf710ace4bab",
"content_id": "62237b5205cfdd145de424d2cbd1b35d12c2a94a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1986,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 72,
"path": "/Stopwatch.py",
"repo_name": "marcindrozd/Stopwatch",
"src_encoding": "UTF-8",
"text": "# template for \"Stopwatch: The Game\"\r\n\r\n# define global variables\r\nimport simplegui\r\n\r\ncounter_x = 0\r\ncounter_y = 0\r\ntime = 0\r\n\r\n# define helper function format that converts time\r\n# in tenths of seconds into formatted string A:BC.D\r\ndef format(t):\r\n minutes = t // 600\r\n seconds = t / 10 % 60\r\n miliseconds = t % 10\r\n if seconds < 10:\r\n seconds = \"0\" + str(seconds)\r\n else:\r\n seconds = str(seconds)\r\n return str(minutes) + \":\" + str(seconds) + \".\" + str(miliseconds)\r\n \r\n# define event handlers for buttons; \"Start\", \"Stop\", \"Reset\"\r\ndef start_clock():\r\n timer.start()\r\n \r\ndef stop_clock():\r\n global counter_x, counter_y\r\n if timer.is_running(): # check if timer is running\r\n counter_y = counter_y + 1\r\n# check if stopwatch was stopped at 0 miliseconds\r\n# if miliseconds == 0 was giving bad results sometimes\r\n if (time % 10) == 0:\r\n counter_x = counter_x +1\r\n timer.stop()\r\n\r\ndef reset_clock():\r\n global time, counter_x, counter_y\r\n time = 0\r\n counter_x = 0\r\n counter_y = 0\r\n timer.stop()\r\n\r\n# define event handler for timer with 0.1 sec interval\r\ndef timer_handler():\r\n global time\r\n time = time + 1\r\n\r\n# adds a counter for number of stops\r\ndef counter():\r\n global counter_x, counter_y\r\n counter = str(counter_x) + \"/\" + str(counter_y)\r\n return counter\r\n\r\n# define draw handler\r\ndef draw_handler(canvas):\r\n canvas.draw_text(format(time), [60, 90], 36, \"White\")\r\n canvas.draw_text(counter(), [150, 30], 18, \"Green\")\r\n \r\n# create frame\r\nframe = simplegui.create_frame(\"Stop Watch\", 200, 120)\r\n\r\n# register event handlers\r\ntimer = simplegui.create_timer(100, timer_handler)\r\nframe.set_draw_handler(draw_handler)\r\nstart_button = frame.add_button(\"Start\", start_clock, 100)\r\nstop_button = frame.add_button(\"Stop\", stop_clock, 100)\r\nreset_button = frame.add_button(\"Reset\", reset_clock, 100)\r\n\r\n# start 
frame\r\nframe.start()\r\n\r\n# Please remember to review the grading rubric\r\n"
}
] | 1 |
rashmitpankhania/viberr
|
https://github.com/rashmitpankhania/viberr
|
5a98926cc6d51bd75ebaa984233b71f6191db6d9
|
7754eea2ac91d1c7dbcdb88dcfa375c1e91a6d29
|
2f32a97c4d231d7468a0d075e62b4117a4895f2c
|
refs/heads/master
| 2021-09-24T03:12:02.166360 | 2018-10-02T10:03:50 | 2018-10-02T10:03:50 | 114,452,106 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6538461446762085,
"alphanum_fraction": 0.6538461446762085,
"avg_line_length": 28.714284896850586,
"blob_id": "074e16ec09954cbe4214f9a2414b5cbd749dbf8e",
"content_id": "9569fa71bc7aefd9ad146cf4c313c5e425e66c7c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1872,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 63,
"path": "/music/views.py",
"repo_name": "rashmitpankhania/viberr",
"src_encoding": "UTF-8",
"text": "from django.views import generic\nfrom django.views.generic.edit import CreateView, UpdateView, DeleteView\nfrom django.shortcuts import render, redirect\nfrom .forms import Userform\nfrom django.contrib.auth import authenticate, login\nfrom django.views.generic import View\nfrom django.urls import reverse_lazy\nfrom .models import Album\n\n\nclass IndexView(generic.ListView):\n model = Album\n template_name = 'index.html'\n\n\nclass DetailsView(generic.DetailView):\n model = Album\n template_name = 'details.html'\n\n\nclass AlbumCreate(CreateView):\n model = Album\n fields = ['album_title', 'artist', 'genre', 'logo']\n template_name = 'album_form.html'\n\n\nclass AlbumUpdate(UpdateView):\n model = Album\n fields = ['album_title', 'artist', 'genre', 'logo']\n template_name = 'album_form.html'\n\n\nclass AlbumDelete(DeleteView):\n model = Album\n success_url = reverse_lazy('music:index')\n\n\nclass UserFormView(View):\n form_class = Userform # what type of form u want to use\n template_name = 'templates/registration_form.html'\n # disp-lay blank form\n\n def get(self, request):\n form = self.form_class(None)\n return render(request,'registration_form.html',{'form':form})\n\n def post(self, request):\n form = self.form_class(request.POST)\n\n if form.is_valid():\n user = form.save(commit=False)\n # cleaned normalized the data\n username = form.cleaned_data['username']\n password = form.cleaned_data['password']\n user.set_password(password)\n user.save()\n\n user = authenticate(username=username,password=password)\n if user is not None:\n if user.is_active:\n login(request, user)\n return redirect('music:index')\n return render(request, 'registration_form.html', {'form': form})\n"
},
{
"alpha_fraction": 0.7043010592460632,
"alphanum_fraction": 0.7043010592460632,
"avg_line_length": 22.125,
"blob_id": "f64960cfd2f098cb109716a73d43adc0c5c5bd1a",
"content_id": "db36e09313fcc55bbe853171185b33eade07f6fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 186,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 8,
"path": "/music/serializers.py",
"repo_name": "rashmitpankhania/viberr",
"src_encoding": "UTF-8",
"text": "import django.rest_framework\nfrom .models import Album,Song\n\nclass AlbumSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = Album\n fiels = ['albumn_title']\n\n"
},
{
"alpha_fraction": 0.6200941801071167,
"alphanum_fraction": 0.6200941801071167,
"avg_line_length": 30.850000381469727,
"blob_id": "a3cc238e5236ad1e76e2e449277562a8419de77e",
"content_id": "ee9203e75cc6b5ac2974a91fdbe6beae0340ed5a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 637,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 20,
"path": "/music/urls.py",
"repo_name": "rashmitpankhania/viberr",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import url\nfrom . import views\n\napp_name = 'music'\n\n\nurlpatterns = [\n # music/\n url(r'^$', views.IndexView.as_view(), name='index'),\n\n url(r'^register/$', views.UserFormView.as_view(), name='registration'),\n # music/'any_number'/\n url(r'^(?P<pk>\\d+)/$', views.DetailsView.as_view(), name='detail'),\n #music/album/add\n url(r'^album/add$', views.AlbumCreate.as_view(),name='album_add'),\n #music/album/add\n url(r'^(?P<pk>\\d+)/update$', views.AlbumUpdate.as_view(),name='album_update'),\n #music/album/add\n url(r'^(?P<pk>\\d+)/delete$', views.AlbumDelete.as_view(),name='album_delete')\n]\n"
}
] | 3 |
aihelena/advent_code
|
https://github.com/aihelena/advent_code
|
92a7ea90f4ed89cef2f4ac257c211b8499f85501
|
c312ee811a0acc23277b8a156947f3dbf4a8d6ef
|
eaae3e7c768736c0f29a7977b65be50e7796537b
|
refs/heads/master
| 2021-10-08T05:20:33.862083 | 2018-12-08T08:57:57 | 2018-12-08T08:57:57 | 114,159,736 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6065163016319275,
"alphanum_fraction": 0.6203007698059082,
"avg_line_length": 40.05263137817383,
"blob_id": "3cd3898440ed8d774a68756323cb2c793be9c94d",
"content_id": "dbbbc0fc0a06ce9140491009aebc5f8f74c731fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 798,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 19,
"path": "/2_2.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\ninputArray = np.loadtxt('2_2_input.txt')\n\ntotal=0 #running sum of quotients\ncurrentRow=0\n\nwhile currentRow < inputArray.shape[0]: #for each row in the array\n for i in range(0,inputArray.shape[1]):\n sortedRow= sorted(inputArray[currentRow], reverse=True)\n checkingElement=sortedRow[i] #pulls elements of row in decreasing order\n for j in range(1, inputArray.shape[1]):\n compareElement=sortedRow[j]\n if (checkingElement%compareElement)==0: #checks each element against other elements in row\n if j!=i: #not dividing something by itself\n total+= checkingElement/compareElement\n currentRow+=1\n print (currentRow)\n print (total)\n \n\n"
},
{
"alpha_fraction": 0.5495145916938782,
"alphanum_fraction": 0.566990315914154,
"avg_line_length": 25,
"blob_id": "034a94768077fc9efc3e4bef803855bee3c8fd60",
"content_id": "8e39cadd61b1d2451368fc40d94846c8210da60e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1030,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 38,
"path": "/2018_1_2.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "#advent of code 2018 day 1 part 2#\r\n\r\n\r\ninputList = []\r\npuzzleInput = open('2018_1_1.txt', 'r')\r\n\r\nfor line in puzzleInput:\r\n inputList.append(str(line.strip()))\r\n \r\n#initiate current frequency#\r\n \r\ncurrentFreq = 0\r\nfreqList = set([0])\r\nduplicates = False\r\n\r\n#while there are no duplicates\r\n##Iterate over the input list\r\n##after each operation, append the new frequency\r\n##and check if there's a duplicate in the set of frequencies\r\n##if there's a duplicate, print it and gtfo\r\n\r\nwhile duplicates == False:\r\n for i in range(0,len(inputList)):\r\n if inputList[i][0]== '+':\r\n currentFreq+=int(inputList[i][1:])\r\n if currentFreq in freqList:\r\n duplicates = True\r\n print (currentFreq)\r\n break\r\n \r\n else:\r\n currentFreq-=int(inputList[i][1:])\r\n if currentFreq in freqList:\r\n duplicates = True\r\n print (currentFreq)\r\n break\r\n\r\n freqList.add(currentFreq)\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.6619576215744019,
"alphanum_fraction": 0.6750757098197937,
"avg_line_length": 33.39285659790039,
"blob_id": "48bb3bdf68ee8d9db9ec94afd2144d1afc368ca0",
"content_id": "5d130171753e5afa2c2ee22f2c77934c86ec8f9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 991,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 28,
"path": "/2018_2_2.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "#advent of code 2018, day 2, part 2\r\n\r\ninputList = []\r\npuzzleInput = open('2018_2_1.txt', 'r') #bring in input\r\n\r\nfor line in puzzleInput:\r\n inputList.append(str(line.strip())) #split into a list and strip spaces\r\n\r\n\r\n#i want to see how many characters differ between the current string\r\n#and every other string\r\n\r\nfor currID in inputList: #for each item in the list\r\n#compare it to every other item on the list\r\n for compareID in inputList:\r\n #string of common characters\r\n currentContender = ''\r\n #for every row of the two lists zip together:\r\n for currentLetter,compareLetter in zip(currID, compareID):\r\n if currentLetter==compareLetter:\r\n #append it to currentContender\r\n currentContender = currentContender+currentLetter\r\n #once you're done making currentContender, find the ones that are one off\r\n if len(currentContender)==len(currID)-1:\r\n print(currentContender)\r\n \r\n \r\n#return leaderBoard\r\n"
},
{
"alpha_fraction": 0.7787878513336182,
"alphanum_fraction": 0.7848485112190247,
"avg_line_length": 26.5,
"blob_id": "42b6fa8886dd8805d58145c2527e6f3da276b1ba",
"content_id": "7b54808d1d8fb6fe2a27c891313203e09425e936",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 660,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 24,
"path": "/1_1.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "puzzleInput=\n\ntotal=0 #running total of sums\n\nstrungInput=str(puzzleInput) #splits input into digits\nlistedInput=[int(x) for x in strungInput] #adds digits to a list\n\npreviousDigit=listedInput[-1] #populates last digit first\ncurrentDigit=listedInput[0]\n\nif currentDigit==previousDigit: #checks first and last digits\n\t\ttotal+=currentDigit\n\t\tpreviousDigit=currentDigit\nelse:\n \tpreviousDigit=currentDigit\n \t\nfor i in range(1,len(listedInput)): #checks subsequent digit against previous\n\t\tcurrentDigit=listedInput[i]\n\t\tif currentDigit==previousDigit:\n\t\t\t\ttotal+=currentDigit\n\t\t\t\tpreviousDigit=currentDigit\n\t\telse: \n\t\t\t\tpreviousDigit=currentDigit\nprint total\n"
},
{
"alpha_fraction": 0.7620137333869934,
"alphanum_fraction": 0.7780320644378662,
"avg_line_length": 30.214284896850586,
"blob_id": "057f00227d5e36f09aab902a3d7cf30ec8b8df1c",
"content_id": "4ea48ac98914691fe24136a120d6f9026806b53f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 437,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 14,
"path": "/1_2.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "puzzleInput=\ntotal=0 #running total sum\nstrungInput=str(puzzleInput)#turns input into digits\nlistedInput=[int(x) for x in strungInput]\n\n#split list of digits in half\nfirstHalf=listedInput[0:len(listedInput)/2]\nsecondHalf=listedInput[len(listedInput)/2:len(listedInput)]\n\n#check each digit with corresponding one in second half\nfor i in range(0,len(listedInput)/2):\n\t\tif firstHalf[i]==secondHalf[i]:\n\t\t\t\ttotal+=firstHalf[i]*2\nprint total\n"
},
{
"alpha_fraction": 0.6589834690093994,
"alphanum_fraction": 0.6687352061271667,
"avg_line_length": 35,
"blob_id": "591cefabe899f7d3089c8df13eb0ebfeb78c9c97",
"content_id": "8d7a7e3a94041b63a9910a83acdde5b717655023",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3384,
"license_type": "no_license",
"max_line_length": 113,
"num_lines": 94,
"path": "/Day4/2018_4.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "# advent of code 2018, day 4\n\nfrom datetime import datetime\n\ninputList = []\npuzzleInput = open('2018_4.txt', 'r') # bring in input\n\nfor eachLine in puzzleInput:\n inputList.append(str(eachLine.strip())) # split into a list and strip spaces\n\ndictofRecords = dict()\n\nfor line in inputList:\n # take out and parse the time\n onlyTime = datetime.strptime(line.split(']')[0][1:], \"%Y-%m-%d %H:%M\") # notEnya\n # make the rest of it the record\n record = line.split('] ')[1]\n # create a dictionary entry with key time and value record\n dictofRecords[onlyTime]=record\n\nallGuardTimes = dict()\nsleepBegins = 0\nsleepEnds = 0\nisAsleep = False\n#sort by time, and for each time:\nfor time in sorted(dictofRecords.keys()):\n # setting current record equal to value of the current time\n record = dictofRecords[time]\n if 'Guard' in record:\n if isAsleep:\n for i in range(sleepBegins,59):\n guardTime[i]=guardTime.get(i,0)+1\n #extract the number by splitting on the octothorpe and then the spaces\n guardNo = record.split('#')[1].split(' ')[0]\n\n if guardNo not in allGuardTimes:\n allGuardTimes[guardNo] = dict()\n guardTime = allGuardTimes[guardNo]\n if 'falls' in record:\n isAsleep = True\n sleepBegins = time.minute\n if 'wake' in record:\n isAsleep = False\n sleepEnds = time.minute\n #for the specific minute between begin and end of sleep\n for i in range(sleepBegins,sleepEnds):\n # add 1 to the value at the key of the specific minute, and make it 0 if it hasn't appeared yet\n guardTime[i] = guardTime.get(i,0)+1\n\ntopAsleepGuard = 0\ntopTimeSlept = 0\ntopMinSlept = 0\ngdSleepyhead = 0\nsleepyTimes = 0\nsleepyheadMinute = 0\n\nfor Guard,times in allGuardTimes.items():\n minMostAsleep = -1\n for m,t in times.items():\n #track each guard's top slept minute\n if t > times.get(minMostAsleep,0):\n #print(m)\n minMostAsleep = m\n #print('Guard: {} total time sleep: {} most minute: {}'.format(Guard,sum(times.values()),minMostAsleep))\n #if this guard is the most 
minutes total slept so far, add their values to the leaderboard\n if sum(times.values())>topTimeSlept:\n topTimeSlept = sum(times.values())\n topAsleepGuard = Guard\n topMinSlept = minMostAsleep\n\n #ignore guards that don't fall asleep\n try:\n # find the guard that sleeps the most for any minute\n if sleepyTimes < max(times.values()):\n sleepyTimes = max(times.values())\n gdSleepyhead = Guard\n sleepyheadMinute = minMostAsleep\n except:\n pass\nprint('Guard: {} total time sleep: {} most minute: {}'.format(topAsleepGuard, topTimeSlept, topMinSlept))\nprint(int(topAsleepGuard)*int(topMinSlept))\nprint(int(gdSleepyhead)*int(sleepyheadMinute))\n# currentRecord = Record(aRecord)\n# sortedRecord = inputList.sort(key = lambda x:\n\n# if the first word is guard\n# make a list named the guard if it doesn't exist yet\n# while the first word in the next line is wakes or falls\n# if it's falls\n# append the minutes between falls and the minutes on wake of the next line to the guard's list\n# go to the next line\n\n# Find the minute that appears most in each guard's list and add that as a dictionary entry under that guard's ID\n# return the highest entry multiplied by that ID\n"
},
{
"alpha_fraction": 0.7610921263694763,
"alphanum_fraction": 0.7815699577331543,
"avg_line_length": 26.46875,
"blob_id": "7192a8ca2fdf1d8b5a7e66ecbb9bcf44e86039a2",
"content_id": "b1077fd567b2d596758b1c6546dd00de2783cfc2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 879,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 32,
"path": "/3_1.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "import numpy as np\n\npuzzleInput=347991\n\nspiralLayer=1\nsmallerSquare=1\n\n#find the largest squared odd number within puzzle input\nwhile smallerSquare**2 <puzzleInput:\n smallerSquare+=2\n spiralLayer+=1\nsideLength=smallerSquare-1 #length of sides on current layer\nsmallerSquare-=2\n#print (smallerSquare)\n#print (\"spirallayer=\", spiralLayer)\n\n#find wrap distance\ninputWrap=puzzleInput-smallerSquare**2\n#print(\"sideLength=\", sideLength)\n#print(\"inputWrap=\", inputWrap)\n\n#if input is at the middle of a side, that's the shortest path\nif inputWrap==sideLength/2:\n pathDistance=spiralLayer-1\n#otherwise, distance along a side\n#from that shortest path determines final distance \nelse:\n distancefromCenter=abs((inputWrap%sideLength)-sideLength/2)\n #print(\"DistancefromCenter=\", distancefromCenter)\n pathDistance=(spiralLayer-1)+distancefromCenter\n\nprint (pathDistance)\n"
},
{
"alpha_fraction": 0.6134969592094421,
"alphanum_fraction": 0.6380367875099182,
"avg_line_length": 22.148147583007812,
"blob_id": "7baf62360807c4b3d6fb1e83582d7cabd9a354fd",
"content_id": "8af85b0ea9e3baac5ef34c0a8e9c922a61db666e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 652,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 27,
"path": "/2018_1_1.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "#advent of code 2018 day 1#\r\n\r\ninputList = []\r\npuzzleInput = open('2018_1_1.txt', 'r')\r\n\r\nfor line in puzzleInput:\r\n inputList.append(str(line.strip()))\r\n \r\n#initiate current frequency#\r\n \r\ncurrentFreq = 0\r\n\r\n##for every item in the list length\r\n##see if it's positive\r\n## add everything after the first character to puzzle input\r\n## else\r\n## subtract everything after the first character from puzzle input\r\n##\r\n## return current frequency\r\n\r\nfor i in range(0,len(inputList)):\r\n if inputList[i][0]== '+':\r\n currentFreq+=int(inputList[i][1:])\r\n else:\r\n currentFreq-=int(inputList[i][1:])\r\n\r\nprint (currentFreq)\r\n"
},
{
"alpha_fraction": 0.6269554495811462,
"alphanum_fraction": 0.660649836063385,
"avg_line_length": 27.678571701049805,
"blob_id": "788c58bded4b77c5a8f38646b27f9d74f60619d2",
"content_id": "92fe3e5566aa9a96502c3c6e27781fcd5af20130",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 831,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 28,
"path": "/2018_2_1.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "#advent of code 2018, day 2, part 1\r\n\r\ninputList = []\r\npuzzleInput = open('2018_2_1.txt', 'r') #bring in input\r\n\r\nfor line in puzzleInput:\r\n inputList.append(str(line.strip())) #split into a list and strip spaces\r\n\r\n#counters for 2 and 3\r\ntwoCount=0\r\nthreeCount=0\r\n\r\n\r\nfor currID in inputList: #for each item in the list\r\n #and count the letters for each letters in the set\r\n letterCounts = dict((letter, currID.count(letter))for letter in currID)\r\n \r\n#if counts include 2 and 3, they both get one\r\n if 2 in letterCounts.values() and 3 in letterCounts.values():\r\n twoCount+=1\r\n threeCount +=1\r\n #otherwise if it's 2 or 3, increase accordingly\r\n elif 2 in letterCounts.values():\r\n twoCount+=1\r\n elif 3 in letterCounts.values():\r\n threeCount+=1\r\n\r\nprint (twoCount*threeCount)\r\n"
},
{
"alpha_fraction": 0.5891507267951965,
"alphanum_fraction": 0.6069363951683044,
"avg_line_length": 32.599998474121094,
"blob_id": "c16caadd1c32cdbc154e4aefa355640cdc7d4521",
"content_id": "f65e84ed731b187ff5bbd634c6e42b6d7bad8464",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2249,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 65,
"path": "/2018_3_2.py",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "#advent of code 2018, day 3, part 1\r\n\r\n\r\ninputList = []\r\npuzzleInput = open('2018_3.txt', 'r') #bring in input\r\n\r\nfor line in puzzleInput:\r\n inputList.append(str(line.strip())) #split into a list and strip spaces\r\n\r\n\r\n#making a class for claims and their attributes\r\n\r\nclass Claim:\r\n def __init__(self, claim):\r\n claim = claim.split(' ')\r\n \r\n self.elf = claim[0][1:]\r\n self.x = int(claim[2][:-1].split(',')[0])\r\n self.y = int(claim[2][:-1].split(',')[1])\r\n self.width = int(claim[3].split('x')[0])\r\n self.height = int(claim[3].split('x')[1])\r\n \r\nfabric = [[0 for x in range(1000)]for y in range(1000)]\r\nlistOfClaims = []\r\n#for each claim in the list\r\nfor aClaim in inputList:\r\n#make it an instance of a Claim\r\n #listofClaims.append(Claim(currentClaim))\r\n currentClaim = Claim(aClaim)\r\n #print(currentClaim.__dict__)\r\n#for fabric column index starting at x and going until width\r\n for i in range (currentClaim.x,currentClaim.x+currentClaim.width):\r\n#for fabric row index starting at y and going until height\r\n for j in range(currentClaim.y, currentClaim.y+currentClaim.height):\r\n#if that array position exists\r\n if fabric[j][i]!=0:\r\n#add 1\r\n if fabric[j][i]>=2:\r\n pass\r\n else:\r\n fabric[j][i]+=1\r\n#else\r\n else:\r\n#make that array position 1\r\n fabric[j][i]=1\r\n#to find the one that doesn't overlap\r\n#now that you have a mapped fabric,\r\n \r\nfor aClaim in inputList:\r\n#make it an instance of a Claim\r\n #listofClaims.append(Claim(currentClaim))\r\n currentClaim = Claim(aClaim)\r\n status = True\r\n #print(currentClaim.__dict__)\r\n#for fabric column index starting at x and going until width\r\n for i in range (currentClaim.x,currentClaim.x+currentClaim.width):\r\n#for fabric row index starting at y and going until height\r\n for j in range(currentClaim.y, currentClaim.y+currentClaim.height):\r\n#if any of the values in its area are not 1\r\n if fabric[j][i]!=1:\r\n #disqualify it\r\n 
status=False\r\n#if, in the end, it's not disqualified yet, it's the one!\r\n if status:\r\n print(currentClaim.elf)\r\n"
},
{
"alpha_fraction": 0.5813953280448914,
"alphanum_fraction": 0.7674418687820435,
"avg_line_length": 20.5,
"blob_id": "3169c1db92d2352fe02633f9a9cc20da07fc2836",
"content_id": "23ae69e2e6a24273bda065077d41169aec6aba0b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 43,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 2,
"path": "/README.md",
"repo_name": "aihelena/advent_code",
"src_encoding": "UTF-8",
"text": "# advent_code\nadvent of code 2017 and 2018\n"
}
] | 11 |
LeoGraciano/Django_SOSPETs
|
https://github.com/LeoGraciano/Django_SOSPETs
|
a5ffc51332ae1e1e7bbde5ae9275d32fb3bd76bb
|
9d488f7b14b599b76fdc4648394e073f7f8471a9
|
fdcb9707bf4566d4ff4fc0012cdc5f739a2a0f4b
|
refs/heads/master
| 2022-12-28T18:39:39.248648 | 2020-10-14T21:51:30 | 2020-10-14T21:51:30 | 301,397,156 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.675159215927124,
"alphanum_fraction": 0.6863057613372803,
"avg_line_length": 26.30434799194336,
"blob_id": "6007f693b0fc22d06a5df132fc56b32cd24a95fd",
"content_id": "e4f436d90065c3242b741d8dbc7a537d92a31f24",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 633,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 23,
"path": "/core/models.py",
"repo_name": "LeoGraciano/Django_SOSPETs",
"src_encoding": "UTF-8",
"text": "from django.db import models\nfrom django.contrib.auth.models import User\n\n\n# Create your models here.\n\n\nclass Pet(models.Model):\n cidade = models.CharField(max_length=50)\n descrição = models.TextField()\n telefone = models.CharField(max_length=15)\n email = models.EmailField(max_length=254)\n data_de_crianção = models.DateTimeField(auto_now_add=True)\n foto = models.ImageField(upload_to='pet')\n ativo = models.BooleanField(default=True)\n usuário = models.ForeignKey(User, on_delete=models.CASCADE)\n\n def __str__(self):\n return str(self.id)\n\n class Meta:\n db_table = 'pet'\n pass\n"
},
{
"alpha_fraction": 0.48493149876594543,
"alphanum_fraction": 0.5698630213737488,
"avg_line_length": 19.27777862548828,
"blob_id": "eda64cd4014cede4f0453d8ee07f4cfd7977a391",
"content_id": "a7e7421b64a40317c335b6e30e724d151f909700",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 367,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 18,
"path": "/core/migrations/0004_auto_20201005_0614.py",
"repo_name": "LeoGraciano/Django_SOSPETs",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.2 on 2020-10-05 06:14\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0003_auto_20201005_0605'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='pet',\n old_name='description',\n new_name='descrição',\n ),\n ]\n"
},
{
"alpha_fraction": 0.5162162184715271,
"alphanum_fraction": 0.5675675868988037,
"avg_line_length": 19.55555534362793,
"blob_id": "f39e5c4ee9ff9ce324447b1f5b423c23cffff86a",
"content_id": "abe9c965e674b741f80bef767b26721508d945d7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 370,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 18,
"path": "/core/migrations/0002_auto_20201005_0600.py",
"repo_name": "LeoGraciano/Django_SOSPETs",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.2 on 2020-10-05 06:00\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='pet',\n name='foto',\n field=models.ImageField(upload_to='pet'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6231929659843445,
"alphanum_fraction": 0.6231929659843445,
"avg_line_length": 29.596153259277344,
"blob_id": "f7bc11a0d80f26f4f74cc364e61f4d523150d17b",
"content_id": "3bdb126ef788c03fa5295e94157e8b80baeccdd1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3204,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 104,
"path": "/core/views.py",
"repo_name": "LeoGraciano/Django_SOSPETs",
"src_encoding": "UTF-8",
"text": "from django import urls\nfrom django.http import request\nfrom django.shortcuts import render, redirect\nfrom django.views.decorators.csrf import csrf_protect\nfrom django.contrib.auth import authenticate, login, logout\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom .models import Pet\n# Create your views here.\n\n\n@login_required(login_url='/login/')\ndef register_pet(request):\n pet_id = request.GET.get('id')\n if pet_id:\n pet = Pet.objects.get(id=pet_id)\n if pet.usuário == request.user:\n return render(request, 'register-pet.html', {'pet': pet})\n return render(request, 'register-pet.html')\n\n\n@login_required(login_url='/login/')\ndef delete_pet(request, id):\n pet = Pet.objects.get(id=id)\n if pet.usuário == request.user:\n pet.delete()\n return redirect('/')\n\n\n@login_required(login_url='/login/')\ndef set_pet(request):\n cidade = request.POST.get('cidade')\n email = request.POST.get('email')\n telefone = request.POST.get('telefone')\n descrição = request.POST.get('descrição')\n foto = request.FILES.get('file')\n pet_id = request.POST.get('pet-id')\n if pet_id:\n pet = Pet.objects.get(id=pet-id)\n if usuário == pet.usuário:\n pet.email = email\n pet.telefone = telefone\n pet.cidade = cidade\n pet.descrição = descrição\n if foto:\n pet.foto = foto\n pet.save()\n else:\n usuário = request.user\n pet = Pet.objects.create(email=email, cidade=cidade, telefone=telefone,\n foto=foto, usuário=usuário, descrição=descrição)\n url = f'/pet/detail/{pet.id}/'\n return redirect(url)\n\n\n@login_required(login_url='/login/')\ndef list_all_pets(request):\n pet = Pet.objects.filter(ativo=True)\n return render(request, 'list.html', {'pet': pet})\n\n\ndef list_user_pets(request):\n pet = Pet.objects.filter(ativo=True, usuário=request.user)\n return render(request, 'list.html', {'pet': pet})\n\n\n@login_required(login_url='/login/')\ndef pet_detalhes(request, id):\n pet = Pet.objects.get(ativo=True, id=id)\n 
return render(request, 'pet.html', {'pet': pet})\n\n\ndef logout_user(request):\n print(request.user)\n logout(request)\n return redirect('/login/')\n\n\ndef login_user(request):\n return render(request, 'login.html')\n\n\n@csrf_protect\ndef submit_login(request):\n if request.method == \"POST\":\n username = request.POST.get('username')\n password = request.POST.get('password')\n next = request.POST.get('next')\n\n user = authenticate(username=username, password=password)\n if user is not None:\n login(request, user)\n if next:\n messages.success(request, 'Logado com Suecesso!')\n return redirect(next)\n else:\n messages.success(request, 'Logado com Suecesso!')\n return redirect('/')\n else:\n messages.error(\n request, 'Usuário e senha Inválidos. Favor Tente novamente')\n return redirect('/login')\n else:\n return render(request, '/login/')\n"
},
{
"alpha_fraction": 0.3181818127632141,
"alphanum_fraction": 0.5,
"avg_line_length": 21,
"blob_id": "700bf17e234361749d6809cffde2311ddd85a62d",
"content_id": "680f4c4bae785c6618945d24c794202390287e5c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 22,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 1,
"path": "/adocaovenv/lib/python3.8/site-packages/mypy/version.py",
"repo_name": "LeoGraciano/Django_SOSPETs",
"src_encoding": "UTF-8",
"text": "__version__ = \"0.782\"\n"
},
{
"alpha_fraction": 0.48493149876594543,
"alphanum_fraction": 0.5698630213737488,
"avg_line_length": 19.27777862548828,
"blob_id": "7b15b1d84f1db9eb25b9fc761918a7ab0006496a",
"content_id": "9c8466c86464a531e760fd292de910f8b008582b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 367,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 18,
"path": "/core/migrations/0003_auto_20201005_0605.py",
"repo_name": "LeoGraciano/Django_SOSPETs",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.1.2 on 2020-10-05 06:05\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0002_auto_20201005_0600'),\n ]\n\n operations = [\n migrations.RenameField(\n model_name='pet',\n old_name='descrição',\n new_name='description',\n ),\n ]\n"
}
] | 6 |
yuja-liu/work-link
|
https://github.com/yuja-liu/work-link
|
ac473ab2e135451bfe43503b72575d9c760c58f5
|
f6d686a5daf3a23e1255894555ff22486cb15e55
|
d3c6a265670e0cafc506bd059ea4eea97198f615
|
refs/heads/master
| 2023-05-04T20:52:45.129230 | 2019-04-18T03:11:54 | 2019-04-18T03:11:54 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5144736766815186,
"alphanum_fraction": 0.5263158082962036,
"avg_line_length": 24.33333396911621,
"blob_id": "6ccd9d8a7bcbd4ec8fb15e9ac12ac1720e7a541f",
"content_id": "472d4714903188e578531314b94aa73c11be662b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 760,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 30,
"path": "/data/preprocess3.py",
"repo_name": "yuja-liu/work-link",
"src_encoding": "UTF-8",
"text": "#!/bin/python\n\nimport json\n\ntoeflDict = {}\nfi = open('./CET6.txt', 'r')\nwith fi :\n for i,line in enumerate(fi) :\n tmp = line.split('/')\n if len(tmp) < 3 :\n continue\n toeflDict[tmp[0].strip()] = tmp[2].strip() \nfi = open('./CET4.txt', 'r')\nwith fi:\n for line in fi :\n try :\n ind = line.index(' ')\n except ValueError :\n print(line)\n continue\n toeflDict[str(line[:ind]).strip()] = str(line[ind+1:]).strip()\nwordList = []\nindex = 0\nfor key,val in toeflDict.items() :\n wordList.append({ 'text': val, 'link': index })\n wordList.append({ 'text': key, 'link': index })\n index = index + 1\njson = json.dumps(wordList)\nfo = open('./dict3.js', 'w')\nfo.write(json)\n"
},
{
"alpha_fraction": 0.48823678493499756,
"alphanum_fraction": 0.5041740536689758,
"avg_line_length": 25.00657844543457,
"blob_id": "46abd430d9ed217940048c2dc804f1f53402cb4b",
"content_id": "05b1a7dbbab5687e9e191e7571ade67bd48285d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 3953,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 152,
"path": "/js/app.js",
"repo_name": "yuja-liu/work-link",
"src_encoding": "UTF-8",
"text": "var canvas = new Vue({\n el: '#canvas',\n data: {\n size: {\n width: 0,\n height: 0,\n top: 0,\n left: 0,\n fontsize: 0\n },\n time: {\n start: 0,\n end: 0\n },\n isShow: false,\n readyShow: false,\n readyText: 3,\n btnShow: true,\n btnText: 'Start!',\n finishTime: '',\n linkMode: { stat: false, link: -1, index: -1 },\n words: [{ text: '', link: -1 }],\n blockNum: 16,\n blockDisp: [-1],\n blockShake: [-1] // trigger block reset\n },\n components: {\n 'word-block': wordBlock\n },\n methods: {\n resize: function() {\n const ASPECT = 9/16;\n var cWidth = document.documentElement.clientWidth;\n var cHeight = document.documentElement.clientHeight;\n if (cWidth/cHeight >= ASPECT) {\n this.size.height = cHeight;\n this.size.width = cHeight * ASPECT;\n this.size.top = 0;\n this.size.left = (cWidth - this.size.width) / 2;\n } else {\n this.size.width = cWidth;\n this.size.height = cWidth / ASPECT;\n this.size.left = 0;\n this.size.top = (cHeight - this.size.height) / 2;\n }\n const SCALE = 360;\n this.size.fontsize = this.size.width / SCALE;\n },\n reset: function() {\n this.linkMode = { stat: false, link: -1, index: -1 };\n this.words = genWords(this.blockNum / 2);\n this.words = shuffle(this.words);\n var tmp = [];\n for (var i = 0; i < this.blockNum; i++) {\n tmp.push(i);\n }\n this.blockShake = tmp;\n this.blockDisp = new Array(this.blockNum).fill(true);\n this.readyShow = true;\n this.readyText = 3;\n this.btnShow = false;\n this.finishTime = '';\n\n let _this = this;\n setTimeout(function() {\n _this.readyText = 2;\n }, 1000);\n setTimeout(function() {\n _this.readyText = 1;\n }, 2000);\n setTimeout(function() {\n _this.isShow = true;\n _this.readyShow = false;\n _this.time.start = (new Date()).getTime();\n }, 3000);\n },\n onBlockClicked: function(info) {\n if (!this.linkMode.stat) {\n this.linkMode = Object.assign(\n { stat: true }, info);\n } else {\n if (info.link == this.linkMode.link && info.index != this.linkMode.index) {\n Vue.set(this.blockDisp, 
info.index, false);\n Vue.set(this.blockDisp, this.linkMode.index, false);\n this.linkMode = {\n stat: false,\n link: -1,\n index: -1\n };\n } else {\n this.blockShake = [ info.index, this.linkMode.index ];\n this.linkMode = {\n stat: false,\n link: -1,\n index: -1\n };\n }\n }\n },\n finish: function() {\n this.time.end = (new Date()).getTime();\n var sec = Math.floor((this.time.end - this.time.start) / 1000);\n var ms = Math.floor((this.time.end - this.time.start) % 1000 / 10);\n this.finishTime = sec + '.' + ms +' S';\n this.isShow = false;\n this.btnShow = true;\n this.btnText = 'Again!';\n }\n },\n created: function() {\n this.resize();\n },\n mounted: function() {\n window.onresize = this.resize;\n },\n computed: {\n computedSize: function() {\n return {\n width: this.size.width + 'px',\n height: this.size.height + 'px',\n top: this.size.top + 'px',\n left: this.size.left + 'px',\n fontSize: this.size.fontsize + 'rem'\n };\n },\n timerSize: function() {\n return {\n width: this.size.width + 'px',\n top: this.size.height * (5/8) + 'px'\n };\n },\n side: function() {\n return this.size.width;\n },\n remaining: function() {\n var sum = 0;\n for (var i=0; i<this.blockDisp.length; i++) {\n if (this.blockDisp[i]) {\n sum++;\n }\n }\n return sum;\n }\n },\n watch: {\n remaining: function() {\n if (this.remaining == 0) {\n this.finish();\n }\n }\n }\n});\n"
},
{
"alpha_fraction": 0.5109589099884033,
"alphanum_fraction": 0.5226027369499207,
"avg_line_length": 25.545454025268555,
"blob_id": "3b1b70fbac4eddaf90735b7a7b1eebb57a43eb38",
"content_id": "101bad4b0f84f5994763f475cdaa10f791473f70",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1460,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 55,
"path": "/js/word-block.js",
"repo_name": "yuja-liu/work-link",
"src_encoding": "UTF-8",
"text": "var wordBlock = Vue.component('word-block', {\n data: function() {\n return {\n num: 4,\n space: 10,\n margin: 15,\n radius: 5,\n dfc: '#dfe4ea',\n hlc: '#ff7f50',\n bgc: ''\n };\n },\n props: ['word', 'side', 'index', 'shake', 'disp'],\n template: `\n <div class=\"word-block\" :style=\"computedSize\" v-tap=\"{ methods: clickHandler }\">\n {{ word.text }}\n </div>\n `,\n computed: {\n computedSize: function() {\n let width = (this.side - (this.num-1)*this.space - 2*this.margin) / this.num;\n let row = Math.floor(this.index/this.num), column = this.index%this.num;\n let top = this.margin + row*width + (row>0 ? row*this.space : 0);\n let left = this.margin + column*width + (column>0 ? column*this.space : 0);\n return {\n width: width + 'px',\n height: width + 'px',\n top: top + 'px',\n left: left + 'px',\n borderRadius: this.radius + 'px',\n backgroundColor: this.bgc,\n display: this.disp[this.index] ? '' : 'none'\n };\n }\n },\n watch: {\n shake: function() {\n if (this.shake.indexOf(this.index) != -1) {\n this.bgc = this.dfc;\n }\n }\n },\n methods: {\n clickHandler: function(event) {\n this.highlight();\n this.$emit('block-clicked', { link: this.word.link, index: this.index });\n },\n highlight: function() {\n this.bgc = this.hlc;\n }\n },\n created: function() {\n this.bgc = this.dfc;\n }\n});\n"
},
{
"alpha_fraction": 0.5175879597663879,
"alphanum_fraction": 0.5301507711410522,
"avg_line_length": 23.875,
"blob_id": "861367d4a46523dc24b1e404f3abf6ef253a6f59",
"content_id": "23ecb3cccebdb00dae75e2eeb2a8604ebd10f581",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 398,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 16,
"path": "/data/preprocess2.py",
"repo_name": "yuja-liu/work-link",
"src_encoding": "UTF-8",
"text": "#!/bin/python\n\nimport json\n\nfi = open('./CET6.txt', 'r')\nwith fi :\n toeflDict = []\n for i,line in enumerate(fi) :\n tmp = line.split('/')\n if len(tmp) < 3 :\n continue\n toeflDict.append({ 'text': tmp[0].strip(), 'link': i })\n toeflDict.append({ 'text': tmp[2].strip(), 'link': i })\njson = json.dumps(toeflDict)\nfo = open('./dict2.js', 'w')\nfo.write(json)\n"
},
{
"alpha_fraction": 0.530183732509613,
"alphanum_fraction": 0.5459317564964294,
"avg_line_length": 24.399999618530273,
"blob_id": "61b49ab281ec8828cd6eb85f124e5b4cfea3c74f",
"content_id": "94d200e0745a062d7c50364efc47703bde478cbe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 381,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 15,
"path": "/data/preprocess.py",
"repo_name": "yuja-liu/work-link",
"src_encoding": "UTF-8",
"text": "#!/bin/python\n\nimport json\n\nfi = open('./toefl.txt', 'r')\nwith fi :\n toeflDict = []\n for i,line in enumerate(fi) :\n ind1 = line.index('[')\n ind2 = line.index(']')\n toeflDict.append({ 'text': line[:ind1], 'link': i })\n toeflDict.append({ 'text': line[ind2+1:-1], 'link': i })\njson = json.dumps(toeflDict)\nfo = open('./dict.js', 'w')\nfo.write(json)\n"
}
] | 5 |
youcan210/intro_git
|
https://github.com/youcan210/intro_git
|
5b7f5d26980920dcf5209931fa82dfc3cd901ce6
|
b122f88ac1129699dc6435c4b186bc3fd9c8e79e
|
0fd3d74a73eff324b31f547a2568464822e65a26
|
refs/heads/main
| 2023-03-12T08:39:07.351546 | 2021-03-07T09:10:06 | 2021-03-07T09:10:06 | 321,065,592 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7093023061752319,
"alphanum_fraction": 0.7093023061752319,
"avg_line_length": 16.399999618530273,
"blob_id": "642bc4d7c298270d22fe2f9c0035436bcb8ccd5b",
"content_id": "64ad335687fbd0a3820cb167af44538c78a2e1bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 86,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 5,
"path": "/python/kino_09.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "def say_hello(greeting):\n print(greeting)\n\nhello = say_hello\nhello = ('good morning')"
},
{
"alpha_fraction": 0.530386745929718,
"alphanum_fraction": 0.5764272809028625,
"avg_line_length": 16.54838752746582,
"blob_id": "92a5d53c74386dff82a8a91bfa43748a303382c3",
"content_id": "24e8ddc883720f6f77aefb27a0f855ee5a66dce2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 543,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 31,
"path": "/python/def.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "# test judge program\n# Student define\n# atrubute\n# method judge\n\nclass Student:\n def __init__(self,name):\n self.name = name\n def calculate_avg(self,date):\n sum = 0\n for num in date:\n sum += num\n\n avg = sum/len(date)\n return avg\n\n def judge(self,avg):\n if(avg >= 60):\n result=\"passed\"\n else:\n result=\"failed\"\n return result\n \na001 = Student(\"sato\")\ndate = [70,65,50,10,30]\n\navg = a001.calculate_avg(date)\nresult = a001.judge(avg)\n\nprint(avg)\nprint(a001.name+\" \"+result)"
},
{
"alpha_fraction": 0.5813953280448914,
"alphanum_fraction": 0.6976743936538696,
"avg_line_length": 16.399999618530273,
"blob_id": "a1ad0fce6e2152cf1e45a683a734e485dbb684b6",
"content_id": "a7aa6222ac19b9863cce27eaa34cb33e73503fc1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 86,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 5,
"path": "/python/hello.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "def add(num01,num02):\n return(num01 + num02)\n\nadd_result=add(6,2)\nprint(add_result)"
},
{
"alpha_fraction": 0.6453201770782471,
"alphanum_fraction": 0.6847290396690369,
"avg_line_length": 16,
"blob_id": "e6cba73d6a31e2cd45a6c6a8d5584be55e17d149",
"content_id": "ba6f728c0f06038db2d5d093fa2f630edb65bd30",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 257,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 12,
"path": "/Searchproject/main.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "import eel\nimport numpy as np\n\n# 3. JSからアクセス\[email protected]\ndef js_function(values):\n \n result()\n\n# 1.eel.init('web') # ファイル構成で作ったwebディレクトリを呼び出す\neel.init(\"web\")\neel.start(\"index.html\", size=(650,500))"
},
{
"alpha_fraction": 0.5839552283287048,
"alphanum_fraction": 0.5988805890083313,
"avg_line_length": 18.88888931274414,
"blob_id": "306690dcf76e48e5bcf57f8783ac9a16fd413499",
"content_id": "af4625ac3171282c0f8254062efba8055a114304",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 648,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 27,
"path": "/desktop.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport eel\nimport sys\nimport socket\[email protected]\ndef jsFunc(values):\n return()\n# 定数\nENTRY_POINT = 'index.html'\nCHROME_ARGS = [\n '--incognit', # シークレットモード\n '--disable-http-cache', # キャッシュ無効\n '--disable-plugins', # プラグイン無効\n '--disable-extensions', # 拡張機能無効\n '--disable-dev-tools', # デベロッパーツールを無効にする\n]\nALLOW_EXTENSIONS = ['.html', '.css', '.js', '.ico']\n\n\n\ndef start(): # 画面生成\n eel.init('web')\n eel.start('index.html',size=(500,500))\nstart()\ndef exit(): # 終了時の処理\n sys.exit(0)\nexit()"
},
{
"alpha_fraction": 0.6098130941390991,
"alphanum_fraction": 0.6191588640213013,
"avg_line_length": 24.235294342041016,
"blob_id": "1ad29749a5f23a7f9956ea4bb7d834f4fadc5b2d",
"content_id": "9de6e9a02b0b779f2dadfdf5ec70251a8023c260",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 472,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 17,
"path": "/def.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "import csv\nsource = []\nwith open('sample.csv') as f:\n reader = csv.reader(f)\n for row in reader:\n print(row)\n source.append(row[0])\nfor in_name in source:\n in_name = input('名前を入力')\n if in_name in source:\n print(in_name[0:10] +('の名前はあります'))\n elif in_name != source:\n print('登録されていません')\n source.append(in_name)\n with open('sample.csv','a') as f:\n writer = csv.writer(f)\n writer.writerow([in_name])"
},
{
"alpha_fraction": 0.5941011309623718,
"alphanum_fraction": 0.6137640476226807,
"avg_line_length": 20.606060028076172,
"blob_id": "5e1cf76dc9992eed4eea43a44f9dcbd42263c4bf",
"content_id": "4baf8e0dbf31abd5071890a10fbea63cfcd56912",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 842,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 33,
"path": "/Searchproject/search_api.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "import csv\nimport eel\n\n# JavaScriptから呼べるように関数を登録\n\ndef search_open(row,csv_file):\n with open('test.csv') as f:\n reader = csv.reader(f)\n source = []\n\n\ndef search():\n # -------------------↑\n for row in reader:\n print(row)\n source.append(row[0])\n for in_name in source:\n in_name = input('名前を入力')\n if in_name in source:\n print(in_name[0:10] +('の名前はあります'))\n elif in_name != source:\n print('登録されていません')\n source.append(in_name)\n with open('test.csv','a') as f:\n writer = csv.writer(f)\n writer.writerow([in_name])\n# JavaScriptの関数の戻り値をPythonで取得するには?\nsearch()\neel.js_function()\n# 最初の画面のHTMLファイルを指定\n\neel.init(\"web\")\neel.start(\"index.html\", size=(800,800),port=9999)"
},
{
"alpha_fraction": 0.7366688847541809,
"alphanum_fraction": 0.7412771582603455,
"avg_line_length": 33.54545593261719,
"blob_id": "89f860e66db72fc100287b3f719d54f37188acf1",
"content_id": "ac98e7b67c6c0dda0fe692695f96a428f92d26d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1525,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 44,
"path": "/home_work2-1.py",
"repo_name": "youcan210/intro_git",
"src_encoding": "UTF-8",
"text": "# Selenium tutorial #1\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom time import sleep\nimport time\nPATH = '/Users/user/Desktop/selenium/chromedriver'\ndriver = webdriver.Chrome(PATH)\n# open browser\ndriver.get('https://tenshoku.mynavi.jp/')\nprint(driver.title)\ntime.sleep(3)\n# close popup\ntry:\n driver.execute_script('document.querySelector(\".karte-close\").click()')\n time.sleep(3)\n# close popup\n driver.execute_script('document.querySelector(\".karte-close\").click()')\nexcept:\n# go to next process if error and exception\n pass\n# get element search box's name\nsearch_box = driver.find_element_by_name('srFreeSearchKeyword')\n# input keyboard process\nsearch_box.send_keys('未経験')\n# get element search button's class\nsearch_btn = driver.find_element_by_class_name('js__searchRecruitTop')\n# push search button return\nsearch_btn.send_keys(Keys.RETURN)\n#############until here home work 2-1####################\ntry:\n cassetteRecruit__main = WebDriverWait(driver, 10).until(\n EC.presence_of_element_located((By.CLASS_NAME, \"cassetteRecruit__main\"))\n )\n\n # articles = cassetteRecruit__main.find_elements_by_class_name(\"tableCondition\")\n # for tableCondition in articles:\n # t_head = articles.find_elements_by_class_name('tableCondition')\n # print(tableCondition.text)\n\nfinally:\n driver.quit()"
}
] | 8 |
paschubert/new_repo_for_demo
|
https://github.com/paschubert/new_repo_for_demo
|
df1c8cd2b457fb10d4a15b49e199470cfb9511e8
|
24c4c65ce140ac528d0b498ca069790f5e0252e1
|
7cc6cf28bad6be56eee5015f3c55dbb9eb172156
|
refs/heads/main
| 2023-08-25T09:39:37.244814 | 2021-11-03T13:58:20 | 2021-11-03T13:58:20 | 388,170,452 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7462121248245239,
"alphanum_fraction": 0.75,
"avg_line_length": 19.30769157409668,
"blob_id": "f5795c15d69b17f4bbebd3d0b1cde8d8625bd80d",
"content_id": "0b9116ed008561d4ca9b932bd5595b7e217c8e1c",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 264,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 13,
"path": "/test.py",
"repo_name": "paschubert/new_repo_for_demo",
"src_encoding": "UTF-8",
"text": "# Install packages\n!pip install tellurium -q\n\n# Import packages\nimport tellurium as te # Python-based modeling environment for kinetic models\n\nprint(\"Hello World.\")\n\nprint(\"modification done on local mdddachine\")\n\nr = te.loada(\"species a=3\")\nr.simulate()\nr.plot()\n"
},
{
"alpha_fraction": 0.6751055121421814,
"alphanum_fraction": 0.7890295386314392,
"avg_line_length": 32.71428680419922,
"blob_id": "ff4034e3dbf6a233e472af35d90504882dd9c5a7",
"content_id": "3b0478a1e29af3a0d46a4f7efe01fda5ab1720a9",
"detected_licenses": [
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 237,
"license_type": "permissive",
"max_line_length": 142,
"num_lines": 7,
"path": "/README.md",
"repo_name": "paschubert/new_repo_for_demo",
"src_encoding": "UTF-8",
"text": "# New Repository\n\n<img src=\"https://github.com/paschubert/new_repo_for_demo/blob/9028a87db5eb8593ae7c10cac9d0bcbc4a52f0cc/bio10_reaction_sheet.png\" width=\"200\">\n\n\n\nThis is a new repository for demonstration\n\n"
}
] | 2 |
p13i/cythonwalkthrough
|
https://github.com/p13i/cythonwalkthrough
|
9111f04e93d5f6ed2bfc9289d430939f2c20e669
|
080cb0fe4ae75c2998fc82c9a6ba6e958aca4446
|
8615fe8c9bc1f8ab0f192f14a4c56d5f19b631cf
|
refs/heads/master
| 2020-05-18T11:18:03.804861 | 2018-09-16T22:19:43 | 2018-09-16T22:19:43 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5390898585319519,
"alphanum_fraction": 0.549591600894928,
"avg_line_length": 24.969696044921875,
"blob_id": "fa0483f4125b1cf86e5a4fe2b1832c88b0b65045",
"content_id": "3872b43a8a9e5e8226ea78b18906ff5b1a53baed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 857,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 33,
"path": "/part2/src/main.c",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "#include <stdlib.h>\n#include <stdio.h>\n\n#include <Python.h>\n#include \"portal.h\"\n\nint main(int argc, char ** argv) {\n int result;\n printf(\"hello from C\\n\");\n if (PyImport_AppendInittab(\"portal\", PyInit_portal) != 0) {\n fprintf(stderr, \"Unable to extend Python inittab\");\n return 1;\n }\n Py_Initialize();\n if (PyImport_ImportModule(\"portal\") == NULL) {\n fprintf(stderr, \"Unable to import cython module.\\n\");\n if (PyErr_Occurred()) {\n PyErr_PrintEx(0);\n } else {\n fprintf(stderr, \"Unknown error\");\n }\n return 1;\n }\n result = entrance(3, 4);\n if (result == -1 && PyErr_Occurred()) {\n fprintf(stderr, \"Exception raised in portal()\\n\");\n PyErr_PrintEx(0);\n } else {\n printf(\"Result is %d\\n\", result);\n }\n Py_Finalize();\n return 0;\n}\n"
},
{
"alpha_fraction": 0.7593123316764832,
"alphanum_fraction": 0.7736389636993408,
"avg_line_length": 23.928571701049805,
"blob_id": "133d829aaabf2d2a909e0d688488acc1bf04cef1",
"content_id": "c9ab1c11811e448d7c95b95f166511b0b5141043",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 349,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 14,
"path": "/part2/Makefile.am",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "bin_PROGRAMS = cythonwalkthrough3\ncythonwalkthrough3_CFLAGS = $(PYTHON_INCLUDE)\ncythonwalkthrough3_SOURCES = src/main.c src/portal.c src/portal.h src/portal.pyx\ncythonwalkthrough3_LDFLAGS = $(PYTHON_LDFLAGS)\n\nsrc/main.c: src/portal.h\n\nsrc/portal.h: src/portal.c\n\nsrc/portal.c:\n\tcython3 src/portal.pyx\n\nclean-local:\n\trm -rf src/portal.c src/portal.h\n"
},
{
"alpha_fraction": 0.6015252470970154,
"alphanum_fraction": 0.6120114326477051,
"avg_line_length": 25.871795654296875,
"blob_id": "97269706f15afbc6426943eb023edb62df65c0bb",
"content_id": "6ffa8c68bb4fa3c315a05789554947d5eaa9eb37",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1049,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 39,
"path": "/part1/walkthrough/__main__.py",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "import sys\nimport timeit\nimport platform\nimport importlib\n\nif platform.python_implementation() == 'PyPy':\n sys.path.insert(0, '.')\n\n\ndef load_data(filename):\n digits = []\n for b in open(filename, 'rb').read():\n b = int(b)\n digits.append(b >> 4)\n digits.append(b & 0xf)\n return digits\n\n\nif __name__ == '__main__':\n digits = load_data(sys.argv[1])\n stage = None\n if len(sys.argv) > 2:\n stage = sys.argv[2]\n\n for module in (\n 'bigproduct',\n 'walkthrough.bigproductx',\n 'walkthrough.bigproduct_cythonloop',\n 'walkthrough.bigproduct_cythonloop2',\n 'walkthrough.bigproduct_cythonloop_annotations',\n 'walkthrough.bigproduct_cython',\n 'walkthrough.bigproduct_cythonoverflow',\n 'walkthrough.cbigproduct'\n )[:stage]:\n bpmod = importlib.import_module(module)\n bigproduct = bpmod.bigproduct\n time = timeit.timeit(\n 'bigproduct(digits)', number=1000, globals=globals())\n print(bigproduct(digits), time, module)\n\n"
},
{
"alpha_fraction": 0.7026431560516357,
"alphanum_fraction": 0.7533039450645447,
"avg_line_length": 40.181819915771484,
"blob_id": "5fa7a23bc3eaf53452f7b4929c2fe64eda73e87b",
"content_id": "198b1953f4b1a7ce76dab48b3210a0ac4aa971e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 454,
"license_type": "no_license",
"max_line_length": 178,
"num_lines": 11,
"path": "/README.md",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "# A Cython Walkthrough\n\nHere are the code samples used in my presentation _A Cython Walkthrough_\n\n## Slides\n\nThe slides can be found here: [Google Slides](https://docs.google.com/presentation/d/1QM2bYFJ7PQ37yfQ0ULD75Tm3fKWyuNj0wbk5gluNk1g/edit?usp=sharing) or [PDF](slides.pdf) (~2.2MB).\n\n## Video\n\nA video of _A Cython Walkthrough_, filmed at [PyCon UK 2018](https://2018.pyconuk.org/), is here: [YouTube](https://www.youtube.com/watch?v=O8StkTjhncU).\n\n"
},
{
"alpha_fraction": 0.5553295612335205,
"alphanum_fraction": 0.5606184005737305,
"avg_line_length": 28.261905670166016,
"blob_id": "bd4cf1c318e7bc293f10d31dff4f56007c579006",
"content_id": "c1eb7427797ecc5bac6ea2780b4c9d70a1e5c53d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 2458,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 84,
"path": "/part1/walkthrough/cbigproductmodule.c",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "#include <Python.h>\n\n#include <limits.h>\n\n/* For checking for impending overflow;\n * overly cautious but avoids switching on the multiplier */\nunsigned long long MAX_MULTIPLICAND = ULLONG_MAX / 9;\n\nstatic PyObject *\ncbigproduct_bigproduct(PyObject *self, PyObject *args, PyObject *kw)\n{\n char *keywords[] = {\"digits\", \"n\", NULL};\n PyObject *digits;\n int n = 13;\n Py_ssize_t digits_len;\n size_t i;\n unsigned long long best = 0;\n\n if (!PyArg_ParseTupleAndKeywords(args, kw, \"O|i\", keywords, &digits, &n)) {\n return NULL;\n }\n\n if (!PySequence_Check(digits)) {\n PyErr_SetString(PyExc_TypeError, \"digits is not a sequence\");\n return NULL;\n }\n \n if ((digits_len = PySequence_Length(digits)) == -1) {\n return NULL;\n }\n\n for (i = 0; i < (digits_len - (n - 1)); i ++) {\n size_t j;\n unsigned long long product = 1;\n for (j = 0; j < n; j ++) {\n PyObject * py_digit;\n unsigned long digit;\n if (product >= MAX_MULTIPLICAND) {\n PyErr_SetString(PyExc_OverflowError,\n \"product to be calculated is too big\");\n return NULL;\n }\n if ((py_digit = PySequence_GetItem(digits, i + j)) == NULL) {\n return NULL;\n }\n if (!PyLong_Check(py_digit)) {\n PyErr_SetString(PyExc_TypeError, \"digit was not an integer\");\n }\n digit = PyLong_AsUnsignedLong(py_digit);\n if (digit == (unsigned long)-1 && PyErr_Occurred()) {\n return NULL;\n }\n product *= digit;\n }\n if (product > best) {\n best = product;\n }\n }\n\n return PyLong_FromUnsignedLongLong(best);\n}\n\nstatic PyMethodDef CbigproductMethods[] = {\n {\"bigproduct\",\n (PyCFunction)cbigproduct_bigproduct,\n METH_VARARGS | METH_KEYWORDS,\n \"Return the biggest product of n consecute digits\"},\n {NULL, NULL, 0, NULL} /* Sentinel */\n};\n\nstatic struct PyModuleDef cbigproductmodule = {\n PyModuleDef_HEAD_INIT,\n \"cbigproduct\", /* name of module */\n NULL, /* module documentation, may be NULL */\n -1, /* size of per-interpreter state of the module,\n or -1 if the module keeps state in global 
variables. */\n CbigproductMethods\n};\n\nPyMODINIT_FUNC\nPyInit_cbigproduct(void)\n{\n return PyModule_Create(&cbigproductmodule);\n}\n"
},
{
"alpha_fraction": 0.6663920879364014,
"alphanum_fraction": 0.6787479519844055,
"avg_line_length": 33.68571472167969,
"blob_id": "6fe3e24d7bbc446e2f885211e5da5348d2f7511e",
"content_id": "9bdf6485256918e4214c99c9d5033b4a00103ca0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "M4Sugar",
"length_bytes": 1214,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 35,
"path": "/part2/configure.ac",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "AC_INIT([CythonWalkthrough3], 0.1)\nAM_INIT_AUTOMAKE([subdir-objects])\nAC_PROG_CC\nAC_CONFIG_FILES(Makefile)\n\n# Check for Python development files\n# From: https://stackoverflow.com/questions/5056606/how-to-use-autotools-to-build-python-interface-at-same-time-as-library\nAM_PATH_PYTHON([3.4])\nAC_ARG_VAR([PYTHON_INCLUDE], [Include flags for python, bypassing python-config])\nAC_ARG_VAR([PYTHON_LDFLAGS], [Linker flags for python, bypassing python-config])\nAC_ARG_VAR([PYTHON_CONFIG], [Path to python-config])\nAS_IF([test -z \"$PYTHON_INCLUDE\"], [\n AS_IF([test -z \"$PYTHON_CONFIG\"], [\n AC_PATH_PROGS([PYTHON_CONFIG],\n [python$PYTHON_VERSION-config python-config],\n [no],\n [`dirname $PYTHON`])\n AS_IF([test \"$PYTHON_CONFIG\" = no], [AC_MSG_ERROR([cannot find python-config for $PYTHON.])])\n ])\n AC_MSG_CHECKING([python build flags])\n PYTHON_INCLUDE=`$PYTHON_CONFIG --includes`\n PYTHON_LDFLAGS=`$PYTHON_CONFIG --ldflags`\n])\n\n# Check for Cython\nAC_DEFUN(AC_PROG_CYTHON, [AC_CHECK_PROG(CYTHON,cython3,yes)])\nAC_PROG_CYTHON\nif test x\"${CYTHON}\" == x\"yes\" ; then\n AC_MSG_NOTICE([Found cython3])\nelse\n AC_MSG_ERROR([Cannot find cython3 tool.])\nfi\n\n\nAC_OUTPUT\n"
},
{
"alpha_fraction": 0.7313131093978882,
"alphanum_fraction": 0.7353535294532776,
"avg_line_length": 25.052631378173828,
"blob_id": "18dfcc04db8ede80695b68a8d40ac47c2d1a7bed",
"content_id": "1f11eca1a60cb1990f7c504ab0e50657a9d660fe",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 495,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 19,
"path": "/part1/setup.py",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "from setuptools import setup, find_packages, Extension\nfrom Cython.Build import cythonize\n\next_modules = []\n\next_modules.append(Extension(\n 'walkthrough.cbigproduct',\n sources=['walkthrough/cbigproductmodule.c']))\n\next_modules.extend(cythonize('walkthrough/bigproduct.py'))\next_modules.extend(cythonize('walkthrough/*.pyx'))\n\nsetup(\n name='Walkthrough',\n version='1.0',\n description='Package for A Cython Walkthrough',\n packages=find_packages(),\n ext_modules=ext_modules,\n)\n"
},
{
"alpha_fraction": 0.5972222089767456,
"alphanum_fraction": 0.5972222089767456,
"avg_line_length": 22.66666603088379,
"blob_id": "ecbfd3e6fd997fe9a07df4d8b83f2f05817959eb",
"content_id": "75404680464f935e902acf1bf492b147bc87f22c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 72,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 3,
"path": "/part2/mypythonmodule.py",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "\ndef accumulate(a, b):\n print('hello from Python')\n return a * *b\n"
},
{
"alpha_fraction": 0.7599999904632568,
"alphanum_fraction": 0.7799999713897705,
"avg_line_length": 49,
"blob_id": "b67ab177045d12fe1c025b858428fcc30faffbe8",
"content_id": "ab4bc7557aa6c4332ba8f63955e1e81810109671",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 50,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 1,
"path": "/part3/README.md",
"repo_name": "p13i/cythonwalkthrough",
"src_encoding": "UTF-8",
"text": "Part 2 is basically https://github.com/flexo/PyGD\n"
}
] | 9 |
TechAcademyBootcamp/07-week-02-day-agaverdi
|
https://github.com/TechAcademyBootcamp/07-week-02-day-agaverdi
|
8a3950566e3ed68b249a36a94f6d5d3263e09840
|
de35a92b75a5b73dd9ffed4372dd3fb2dd9da2c9
|
3c2e5c97f83cdac6bfa79bda25ca4a45463bd4fd
|
refs/heads/master
| 2020-09-10T07:13:30.716222 | 2019-11-14T15:41:21 | 2019-11-14T15:41:21 | 221,681,624 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.3838862478733063,
"alphanum_fraction": 0.5426540374755859,
"avg_line_length": 12.580645561218262,
"blob_id": "9baf62484db1c6c3b8427f4a890854e9a5b2eba2",
"content_id": "89dfd575fe08533fbe8b79db9062e30e64f455a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 422,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 31,
"path": "/02-Einstein/hell.py",
"repo_name": "TechAcademyBootcamp/07-week-02-day-agaverdi",
"src_encoding": "UTF-8",
"text": "c=input('reqem ozde ceeeld ol=')\nc=int(c)\nz=c\nk=c%10\nc=c//10\nk1=c%10\nc=c//10\ncem=k*100+k1*10+c\nprint(cem)\nif cem>z:\n print(cem-z)\n cem1=cem-z\n cem2=cem1\n \n f=cem1%10\n cem1=cem1//10\n f1=cem1%10\n cem1=cem1//10\n son=f*100+f1*10+cem1\n \n print(son+cem2)\nelse:\n print(z-cem)\n z1=z-cem\n z2=z1\n f=z1%10\n z1=z1//10\n f1=z1%10\n z1=z1//10\n son=f*100+f1*10+z1\n print(son+z2)\n\n"
},
{
"alpha_fraction": 0.7291666865348816,
"alphanum_fraction": 0.7291666865348816,
"avg_line_length": 23,
"blob_id": "1a6bf6239765c75efc953d41176a083155588556",
"content_id": "bad53612219316ed7797c14977e0f9a2b03c9a5d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 96,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 4,
"path": "/08-LatinSquare/hell.py",
"repo_name": "TechAcademyBootcamp/07-week-02-day-agaverdi",
"src_encoding": "UTF-8",
"text": "son=input('son reqemi daxil edin')\nilk=input('ilk reqemi daxil edin')\nson=int(son)\nilk=int(ilk)\n"
},
{
"alpha_fraction": 0.5190010666847229,
"alphanum_fraction": 0.5260586142539978,
"avg_line_length": 37.29166793823242,
"blob_id": "48633ada36f04a6f9d1443011c119ee7263a7a5c",
"content_id": "97f062d7bebe3a085a2f4a8e5b4e1cac8a7661db",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1842,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 48,
"path": "/07-CarRental/hell.py",
"repo_name": "TechAcademyBootcamp/07-week-02-day-agaverdi",
"src_encoding": "UTF-8",
"text": "yeniden=True\nwhile yeniden:\n c=input('katyaqoriyanqizi daxil edin=')\n if c!='d' and c!='b' and c!='w':\n print('sadece b,w,d katyaqoriyalarmiz var xais edirem duzgun daxil edin')\n \n \n else:\n \n if c=='d':\n number=input('verilme gununun sayi=')\n number=float(number)\n start_meter=input('giris mesafe=')\n start_meter=float(start_meter)\n end_meter=input('son mesafe=')\n end_meter=float(end_meter)\n print(\"surulen mesafenin uzunluqu=\",(end_meter-start_meter)/10)\n print(\"odenilecek mebleq =$ \",60*number)\n tekrar=input('tekrar olsun=y/n')\n if tekrar=='n':\n print('sagolun')\n yeniden=False\n if c=='b':\n number=input('verilme gununun sayi=')\n number=float(number)\n start_meter=input('giris mesafe=')\n start_meter=float(start_meter)\n end_meter=input('son mesafe=')\n end_meter=float(end_meter)\n print(\"surulen mesafenin uzunluqu=\",(end_meter-start_meter)/10)\n print(\"odenilecek mebleq =$ \",40*number)\n tekrar=input('tekrar olsun=y/n')\n if tekrar=='n':\n print('sagolun')\n yeniden=False\n if c=='w':\n number=input('verilme gununun sayi=')\n number=float(number)\n start_meter=input('giris mesafe=')\n start_meter=float(start_meter)\n end_meter=input('son mesafe=')\n end_meter=float(end_meter)\n print(\"surulen mesafenin uzunluqu=\",(end_meter-start_meter)/10)\n print(\"odenilecek mebleq =$ \",190*number)\n tekrar=input('tekrar olsun=y/n')\n if tekrar=='n':\n print('sagolun')\n yeniden=False\n "
}
] | 3 |
villasv/home | https://github.com/villasv/home | f0d69d9731960c721798cb9aeaa8c75d08b8d0c9 | 5b0f5e197768f6d0a093bbcd2c2fb187de0a772c | 5407c1f009d1ca5a559d6d25ba6778a530b3ac9e | refs/heads/master | 2020-02-28T19:19:38.680262 | 2017-11-20T21:18:14 | 2017-11-20T21:18:14 | 65,616,172 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6414342522621155,
"alphanum_fraction": 0.6733067631721497,
"avg_line_length": 21.81818199157715,
"blob_id": "157df6f736b4615275f1907bc9e3e9d3273ed09e",
"content_id": "c9abc2e19de44376a739b001e8491e57d146b043",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 251,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 11,
"path": "/mopidy/scripts/start.sh",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# start stuff only if mopidy isn't already running\nif ! mpc status 1>/dev/null 2>/dev/null\nthen\n\tmkfifo '/tmp/mpd.fifo'\n\tmopidy --config ~/.config/mopidy &\n\twhile :\n\t\tdo socat -d -d -T 1 -u UDP4-LISTEN:5555 OPEN:'/tmp/mpd.fifo'\n\tdone \nfi\n"
},
{
"alpha_fraction": 0.6094839572906494,
"alphanum_fraction": 0.624825656414032,
"avg_line_length": 17.35897445678711,
"blob_id": "f0d745c78ad8c9e3801fecb95f477c3cb94af9d5",
"content_id": "26fdcd1b2a7c9ea8716c00e99ae13e86cdf26526",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 717,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 39,
"path": "/install.sh",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nRICE_DIR=\"$(dirname \"$(readlink -f \"$0\")\")\"\n\nfunction symlink {\n\tif [ -L $2 ] || [ -f $2 ]\n\tthen\n\t\trm $2\n\telif [ -d $2 ]\n\tthen\n\t\trm -r $2\n \tfi\n\tln -s $1 $2\n}\n\n# colors\nsymlink \"$RICE_DIR/Xresources\" \"$HOME/.Xresources\"\n\n# i3[-gaps]\nsymlink \"$RICE_DIR/i3\" \"$HOME/.i3\"\n\n# vim\nsymlink \"$RICE_DIR/vim\" \"$HOME/.vim\"\nsymlink \"$RICE_DIR/vimrc\" \"$HOME/.vimrc\"\nvim +PlugInstall +qall\n\n# git\nsymlink \"$RICE_DIR/gitconfig\" \"$HOME/.gitconfig\"\nsymlink \"$RICE_DIR/gitignore\" \"$HOME/.gitignore\"\n\n# mopidy + ncmpcpp\nsymlink \"$RICE_DIR/mopidy\" \"$HOME/.config/mopidy\"\nsymlink \"$RICE_DIR/ncmpcpp\" \"$HOME/.ncmpcpp\"\n\n# htop\nsymlink \"$RICE_DIR/htop\" \"$HOME/.config/htop\"\n\n# zsh\nsymlink \"$RICE_DIR/zshrc\" \"$HOME/.zshrc\"\n\n"
},
{
"alpha_fraction": 0.654618501663208,
"alphanum_fraction": 0.6599732041358948,
"avg_line_length": 23.064516067504883,
"blob_id": "374c9ada75d4d68269fd68041a61f5c743b2e505",
"content_id": "1834ad1b7dc75e0f0456fc50a3ed571f783cce03",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 747,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 31,
"path": "/mopidy/scripts/get_cover.py",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "import os\nimport sys\nimport urllib\n\nfrom mopidy import config as config_lib\nfrom mopidy import ext\n\nfrom mopidy_spotify.backend import SpotifyBackend\n\n\ndef main(track_uri):\n extensions_data = ext.load_extensions()\n\n config, config_errors = config_lib.load(\n ['~/.config/mopidy/mopidy.conf', '~/.config/mopidy/secret.conf'],\n [d.config_schema for d in extensions_data],\n [d.config_defaults for d in extensions_data],\n [])\n\n backend = SpotifyBackend(config, None)\n backend.on_start()\n\n img_url = backend.library.get_images([track_uri]).values()[0][0].uri\n urllib.urlretrieve(img_url, \"/tmp/spotify.jpeg\")\n\n backend.on_stop()\n\n return 0\n\nif __name__ == \"__main__\":\n sys.exit(main(sys.argv[1]))\n\n"
},
{
"alpha_fraction": 0.6675257682800293,
"alphanum_fraction": 0.6675257682800293,
"avg_line_length": 13.923076629638672,
"blob_id": "e58e62e1635dcc77922aacaeb94ab72d249eae0f",
"content_id": "c7b6bc65aabd5cf8ce4a0c8e78e67062ed434330",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 388,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 26,
"path": "/README.md",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "# Instructions\n\n- Install the necessary packages:\n\n```\n./install.sh\n```\n\n- \"Install\" the rice with symlinks\n\n```\n./setup.sh\n```\n\n- Fill the missing configurations\n\n\n```\n# file: ~/.config/mopidy/secret.conf\n\n[spotify]\nusername = alice\npassword = secret\nclient_id = ... client_id value you got from mopidy.com ...\nclient_secret = ... client_secret value you got from mopidy.com ...\n```\n"
},
{
"alpha_fraction": 0.6712095141410828,
"alphanum_fraction": 0.6780238747596741,
"avg_line_length": 29.842105865478516,
"blob_id": "97fef32c2882b62ae0cdeb6b95da44b7dd42de65",
"content_id": "069ef7f611365fd74ea3ce8b738e36decd9b663c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 587,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 19,
"path": "/i3/scripts/layout-music-workspace.sh",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ni3-msg \"workspace $1; append_layout ~/.i3/layouts/music.json\"\n\nsh ~/.config/mopidy/scripts/start.sh\n\nurxvt -name \"playlist\" -e \\\n\tncmpcpp -s playlist -c \"~/.ncmpcpp/playlist.conf\" &\nurxvt -name \"browser\" -e \\\n\tncmpcpp -s playlist_editor -c \"~/.ncmpcpp/browser.conf\" &\nurxvt -name \"lyrics\" -e \\\n\tncmpcpp -s playlist -c \"~/.ncmpcpp/lyrics.conf\" &\nurxvt -name \"clock\" -e \\\n\tncmpcpp -s clock -c \"~/.ncmpcpp/clock.conf\" &\nurxvt -name \"visualizer\" -e \\\n\tncmpcpp -s visualizer -c \"~/.ncmpcpp/visualizer.conf\" &\n\nsleep 3\nxdotool key --window $(xdotool search --classname lyrics) l\n\n"
},
{
"alpha_fraction": 0.593589723110199,
"alphanum_fraction": 0.6512820720672607,
"avg_line_length": 18.024391174316406,
"blob_id": "ae69ec473d93522ac2ff0b1c2637365db22e159b",
"content_id": "109d7fe28632ef14d2de5e61b4007d4ac808ab27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 780,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 41,
"path": "/ncmpcpp/scripts/art.sh",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n#put this file to ~/.ncmpcpp/\n\nORG_COVER=\"/tmp/spotify.jpeg\"\nRSZ_COVER=\"/tmp/cover.jpg\"\nENDPOINT=\"https://api.spotify.com\"\n\nfunction reset_background\n{\n\tprintf \"\\e]20;;100x100+1000+1000\\a\"\n}\n\nfunction print_album_art\n{\n\ttrack=\"$(mpc --format %file% current)\"\n\tpython ~/.config/mopidy/scripts/get_cover.py $track 2> /dev/null\n\n\n\tif [[ -n \"$ORG_COVER\" ]]\n\tthen\n\t\t# resize the image's width to 300px \n\t\tconvert \"$ORG_COVER\" -resize 300x \"$RSZ_COVER\"\n\t\tif [[ -f \"$RSZ_COVER\" ]] ; then\n\t\t\t#scale down the cover to 30% of the original\n\t\t\tprintf \"\\e]20;${RSZ_COVER};100x100+50+50\\a\"\n\t\telse\n\t\t\treset_background\n\t\tfi\n\telse\n\t\treset_background\n\tfi\n\trm -f \"$RSZ_COVER\" \n\trm -f \"$ORG_COVER\" \n}\n\n(\n\tflock -x -w 5 200 || exit 1\n\tprint_album_art\n\t\n) 200>/var/lock/.art.exclusivelock\n"
},
{
"alpha_fraction": 0.4831932783126831,
"alphanum_fraction": 0.5042017102241516,
"avg_line_length": 16,
"blob_id": "bbede6212658c6f3124bbcfad9858ff459ed3a78",
"content_id": "511f3635a8a9b7e15bd221e0307bdfabb630bc81",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 238,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 14,
"path": "/i3/scripts/cycle-input",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nlayout=$(setxkbmap -print | awk -F\"+\" '/xkb_symbols/ {print $2}')\n\ni=0\nfor var in \"$@\"\ndo\n\ti=$(expr $i + 1)\n\tif [ \"$layout\" = \"$var\" ]; then break; fi;\ndone\nif [ \"$i\" = \"$#\" ]; then i=0; fi;\ni=$(expr $i + 1)\n\nsetxkbmap ${!i}\n"
},
{
"alpha_fraction": 0.6902008652687073,
"alphanum_fraction": 0.7017650604248047,
"avg_line_length": 22.457143783569336,
"blob_id": "c71ef856ce1e1848d26e254eb79da5362a86d2c0",
"content_id": "b4a8acf7dedc2cd985571942e51bcf27b69f6df9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1643,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 70,
"path": "/setup.sh",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n\n# install YosemiteSanFrancisco\n#wget https://github.com/supermarin/YosemiteSanFranciscoFont/archive/master.zip\n#unzip master.zip\n#mv master/*.ttf ~/.fonts\n# install arc-theme\n#echo 'deb http://download.opensuse.org/repositories/home:/Horst3180/Debian_8.0/ /' >> /etc/apt/sources.list.d/arc-theme.list \n#apt-get update\n#apt-get install arc-theme\n# download moka icon themes\n\n## mopidy sources\nwget -q -O - https://apt.mopidy.com/mopidy.gpg | sudo apt-key add -\nsudo wget -q -O /etc/apt/sources.list.d/mopidy.list https://apt.mopidy.com/jessie.list\nsudo apt-get update\n\n## install all\nsudo apt -qq install \\\n\tcompton \\\n\tcurl \\\n\tfeh \\\n\tfonts-firacode \\\n\tfonts-font-awesome \\\n\tgit \\\n\thtop \\\n\tmopidy \\\n\tmopidy-spotify \\\n\tmpc \\\n\tncmpcpp \\\n\tnvidia-smi \\\n\tpython-pip \\\n\tpython-dev \\\n\tpython3-pip \\\n\tpython3-dev \\\n\trofi \\\n\trxvt-unicode \\\n\tscrot \\\n\tsocat \\\n\tspeedometer \\\n\tvim \\\n\txdotool \\\n\tzsh \\\n\t&& echo \"done\"\n\nsudo pip3 install s-tui thefuck\n\nsh -c \"$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)\"\n\nchsh -s $(which zsh)\n\nsudo setcap cap_net_admin,cap_net_raw=ep /usr/sbin/nethogs\nsudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/iftop\n\n\n## i3-gaps\n# preinstall handles dependencies easier\nsudo apt install i3 i3status i3blocks\n# overwrite i3 with i3-gaps\necho \"Manually installing i3-gaps\" &&\\\n\tgit clone https://www.github.com/Airblader/i3 /tmp/i3-gaps &&\\\n\tcd /tmp/i3-gaps &&\\\n\tautoreconf --force --install &&\\\n\trm -rf build/ &&\\\n\tmkdir -p build && cd build/ &&\\\n\t../configure --prefix=/usr --sysconfdir=/etc --disable-sanitizers &&\\\n\tmake &&\\\n\tsudo make install \\\n\t&& echo \"done\"\n\n"
},
{
"alpha_fraction": 0.6786786913871765,
"alphanum_fraction": 0.6891891956329346,
"avg_line_length": 38.117645263671875,
"blob_id": "c9eb2c54ba078a1c85f3b6afbe4895db98458da2",
"content_id": "2b6f8f4a4bec1f93295fc8555ff1b434d943a172",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 666,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 17,
"path": "/i3/scripts/layout-system-workspace.sh",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ni3-msg \"workspace $1; append_layout ~/.i3/layouts/system.json\"\n\nurxvt -name \"iotop\" -e bash -c \"export HTOPRC=~/.config/htop/io-htoprc && htop\" &\nurxvt -name \"cputop\" -e bash -c \"export HTOPRC=~/.config/htop/cpu-htoprc && htop\" &\nurxvt -name \"s-tui\" -e s-tui &\nurxvt -name \"speedometer\" -e speedometer -rx eno1 -tx eno1 &\nurxvt -name \"ncdu\" -e ncdu &\nurxvt -name \"nvidia-smi\" -e bash -c \"watch -n 1 nvidia-smi\" &\n\nsleep 3\nxdotool key --window $(xdotool search --classname s-tui) \\\n\tDown Down Down Down Down Down Down Down Down \\\n\tReturn Down Down Down Return Down Down Down \\\n\tDown Down Down Down Down Down Down Down Down \\\n\tDown Down Down Down Down\n\n"
},
{
"alpha_fraction": 0.5588822364807129,
"alphanum_fraction": 0.5708582997322083,
"avg_line_length": 21.772727966308594,
"blob_id": "67ed36c17fc34e8fa9fe4d4e1eaa972c821f9a60",
"content_id": "8e84079ab95a14a2f5108f92e435fc36327418a4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 501,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 22,
"path": "/i3/scripts/cycle-sound",
"repo_name": "villasv/home",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ncurrent=$(pacmd list-sinks | grep '* index:' | awk -F ': ' '{print $2}')\nsinks=($(pactl list sinks short | awk -F '\\t' '{print $1}'))\ninputs=($(pactl list sink-inputs | grep 'Input' | awk -F '#' '{print $2}'))\n\n\ni=0\nfor sink in \"${sinks[@]}\"\ndo\n\ti=$(expr $i + 1)\n\tif [ \"$current\" = \"$sink\" ]; then break; fi;\ndone\nif [ \"$i\" = \"${#sinks[@]}\" ]; then i=0; fi;\n\ncurrent=${sinks[$i]}\n\npacmd set-default-sink $current\nfor input in \"${inputs[@]}\"\ndo\n\tpacmd move-sink-input $input $current\ndone\n"
}
] | 10 |
ajm4zs/FileToDBParser | https://github.com/ajm4zs/FileToDBParser | a0beb380efb10702b45613e4ad49e6d489b82c08 | 0e5395744fa88f95b637076b91e14829a3c5568d | b7e80d85dca31dfea37a5d41a756d21d7f50a1ae | refs/heads/master | 2020-06-04T02:03:17.076075 | 2019-10-10T08:45:55 | 2019-10-10T08:45:55 | 191,826,331 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6211626529693604,
"alphanum_fraction": 0.6211626529693604,
"avg_line_length": 30.89583396911621,
"blob_id": "e4b78931a652c99214feae656bcfc9981cd6ee67",
"content_id": "953894c21deeb43c66f78c9f0e31733069af899b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1531,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 48,
"path": "/JSONProcessor.py",
"repo_name": "ajm4zs/FileToDBParser",
"src_encoding": "UTF-8",
"text": "# Authors: Alex Mulchandani and Nick Kharas\n\nfrom pandas.io.json import json_normalize\nfrom file_integrity_checks import pathExists\nimport json\n\nclass JSONProcessor:\n\n def __init__(self, file_path='', file_name=''):\n self.file_path = file_path\n self.file_name = file_name \n\n # Extracts the contents of a file and returns the JSON\n # TO-DO: we may need to mod this to incorporate some sort of stream process\n def extract_file_contents(self):\n\n # throw error if path doesn't exist\n if (not pathExists(self.file_path)):\n raise Exception('The file path does not exist.')\n # store entire file path and name in filePath\n filePath = self.file_path + self.file_name\n\n # try to open the file\n try:\n fileContents = open(filePath)\n except FileNotFoundError as fnf_error:\n raise Exception(fnf_error)\n\n # try to parse file contents as JSON\n try:\n jsonData = json.load(fileContents)\n except:\n raise Exception('Error in parsing the file as JSON.')\n finally:\n # close the input file\n fileContents.close()\n\n return jsonData\n\n # Normalizes json and returns the associated dataframe\n def get_dataframe_from_json (self, json):\n # try to normalize the json into a data frame\n try:\n data_normalized = json_normalize(json)\n except:\n raise Exception('Unable to convert JSON into a dataframe.')\n \n return data_normalized\n"
},
{
"alpha_fraction": 0.589074432849884,
"alphanum_fraction": 0.5918580293655396,
"avg_line_length": 38.91666793823242,
"blob_id": "eeeee0715494d9c06d9ea180ed627666afbba60d",
"content_id": "8710f5e049670b8d66dc359349cc412d6799c3f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2874,
"license_type": "no_license",
"max_line_length": 170,
"num_lines": 72,
"path": "/XMLProcessor.py",
"repo_name": "ajm4zs/FileToDBParser",
"src_encoding": "UTF-8",
"text": "# Authors: Alex Mulchandani and Nick Kharas\n\nfrom xml.etree import ElementTree as ET\nimport os # File handling\nimport gc # Garbage collection\nimport csv # To convert the dictionary key value pairs into a CSV format\n\n\nclass XMLProcessor:\n\n def __init__(self, file_path='', file_name=''):\n self.file_path = file_path\n self.file_name = file_name\n\n def recurse_xml_tag(self, element):\n \"\"\"\n Recurse through the XML element to extract data from all levels int he XML hierarchy\n arg1 (XML element) : Element to recurse through in the XML file\n \"\"\"\n contents = list() # Array of all rows\n for item in element:\n column = {} # Dictionary for row\n if item.text is not None:\n column[item.tag] = item.text.replace('\\n', ' ')\n column.update(item.attrib)\n sub_row = self.recurse_xml_tag(item)\n if sub_row is not None and len(sub_row) > 0:\n column[item.tag] = sub_row\n contents.append(column)\n return contents\n\n def parse_xml(self, xmltag):\n \"\"\"\n Parse thorugh the XML tag and extract the data\n arg1 - XML tag / element name to recurse through in the XML file\n \"\"\"\n file_contents = []\n row_list = self.recurse_xml_tag(xmltag)\n # Without nested elements, row_list is sufficient for output.\n # With nested elements, row_list produces a list of dictionaries with uneven fields. 
This needs to be combined to produce a list of dictionaries with same fields.\n if len(set(row_list[0]).intersection(set(row_list[1]))) > 0:\n file_contents = row_list\n else:\n row_dict = dict((key,d[key]) for d in row_list for key in d)\n file_contents.append(row_dict)\n return(file_contents)\n\n # Read data from each child node under root\n def xml_to_dict(self, *args):\n \"\"\"\n Convert the XML file into a JSON compatible dictionary.\n If the optional parameter is not specified, then all contents of the XML file will be pulled\n \n *args (Optional parameter) - Give the user an option to pull data appearing only in a particular tag under the root element\n \"\"\"\n arglen = len(args)\n \n # Open the XML file\n xml_file = open(self.file_path + '\\\\' + self.file_name,'rb') # Read the XML file\n xmldoc = ET.parse(xml_file) # Parse the XML file\n root = xmldoc.getroot()\n\n file_contents = []\n for child in root:\n if arglen == 0:\n # Pull all data regardless\n file_contents = self.parse_xml(child)\n else:\n # Only pull data from tag that is input ignore the rest\n if child.tag == args[0]:\n file_contents = self.parse_xml(child)\n return(file_contents)\n"
},
{
"alpha_fraction": 0.6623277068138123,
"alphanum_fraction": 0.6669219136238098,
"avg_line_length": 28.68181800842285,
"blob_id": "7a54d692117f8c959390b63bb7df23921c67dd8c",
"content_id": "35f872040f60c1081e13cc2b0a0f05a5a1b8c24e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1306,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 44,
"path": "/df_integrity_checks.py",
"repo_name": "ajm4zs/FileToDBParser",
"src_encoding": "UTF-8",
"text": "def isValueList(value):\n\n return isinstance(value, list)\n\n# Checks if value is a dict. If yes, return True. Otherwise, return false.\ndef isValueDict(value):\n\n return isinstance(value, dict)\n\n# Parses the first x rows of a column and returns true if any of the x rows is a list\ndef isColumnList(df, column, iterations):\n\n i = 0\n\n while i < iterations:\n if (isValueList(df[column][i])):\n if (isValueDict(df[column][i][0])):\n return True\n else:\n raise Exception('There is a list that does not contain a dict object. This is bad JSON and cannot be processed.')\n i += 1\n\n return False\n\n# Finds all columns that are lists within a dataframe and returns a list of the column names\ndef checkForLists (df):\n listColumns = []\n\n # we will sample the first n rows of the data frame to find if a column is a list\n\n df_size = len(df.index)\n sample_size = 100\n iterations = min(df_size, sample_size)\n\n for col in df.columns:\n if (isColumnList(df, col, iterations)):\n listColumns.append(col)\n\n return listColumns\n\n# Removes list columns from a df and returns the df\ndef removeListColumnsFromDataframe(df, listColumns):\n dfWithoutColumns = df.drop(columns=listColumns)\n return dfWithoutColumns\n"
},
{
"alpha_fraction": 0.6628926396369934,
"alphanum_fraction": 0.664718747138977,
"avg_line_length": 46.2068977355957,
"blob_id": "36d1fffd9abad5feaae3ba3a5d352cc20d257ddb",
"content_id": "bf29d853f11bce8ccc01187945fb9ac27f783422",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2738,
"license_type": "no_license",
"max_line_length": 185,
"num_lines": 58,
"path": "/ProcessFile.py",
"repo_name": "ajm4zs/FileToDBParser",
"src_encoding": "UTF-8",
"text": "from XMLProcessor import XMLProcessor\nfrom JSONProcessor import JSONProcessor\nfrom DFProcessor import DFProcessor\n\nimport argparse\n\nif __name__ == '__main__':\n\n # parse args\n\n parser = argparse.ArgumentParser(\n description='This procedure is used to write a JSON file to a table in SQL.')\n parser.add_argument('--filePath', '-fp', required=True, help='Enter the file path of your file.')\n parser.add_argument('--fileName', '-fn', required=True, help='Enter the name of the file.')\n parser.add_argument('--server', '-s', required=True, help='Enter the destination server.', default='file') # default='RICKY' default='file'\n parser.add_argument('--database', '-d', required=True, help='Enter the destination database.') # default='DALabA_Scratch' default=''\n parser.add_argument('--tableName', '-tn', required=True, help='Enter the destination table.')\n parser.add_argument('--dropTableIfExists', '-dt', required=False,\n help='[Optional] 1 if we want to drop table if already exists. If 0 and table already exists, rows will be appended.', choices=range(0, 2), default=1, type=int)\n parser.add_argument('--fileType', '-ft', required=False,\n help='[Optional] What is the type of file we are processing? 
Default value is JSON.', choices=['XML', 'JSON'], default='JSON')\n parser.add_argument('--xmlTag', '-xt', required=False, help='[Optional] Which XML tag under root should we pull data from.', default=None)\n args = parser.parse_args()\n\n print('Check XML tag here')\n print(args.xmlTag)\n\n # TO-DO: sort out how to output to flat file\n\n if(args.fileType == 'JSON'):\n json_processor = JSONProcessor(args.filePath, args.fileName)\n json_data = json_processor.extract_file_contents()\n df = json_processor.get_dataframe_from_json(json_data)\n elif(args.fileType == 'XML'):\n xml_processor = XMLProcessor(args.filePath, args.fileName)\n if args.xmlTag is None:\n json_data = xml_processor.xml_to_dict()\n else:\n json_data = xml_processor.xml_to_dict(args.xmlTag)\n json_processor = JSONProcessor()\n df = json_processor.get_dataframe_from_json(json_data)\n else:\n raise Exception('Invalid file format.')\n\n\n df_processor = DFProcessor(args.server, args.database, args.tableName, args.dropTableIfExists)\n\n #engine = df_processor.get_engine()\n #connection = df_processor.connect_engine(engine)\n\t\n if(args.server == 'file'):\n engine = args.filePath\n connection = 'file'\n else:\n engine = df_processor.get_engine()\n connection = df_processor.connect_engine(engine)\n \n df_processor.process_dataframe(df, engine, connection)\n"
},
{
"alpha_fraction": 0.766331672668457,
"alphanum_fraction": 0.766331672668457,
"avg_line_length": 25.600000381469727,
"blob_id": "5ebc67e1a14d4f21e7bebf0766ffac091c49948c",
"content_id": "fc91233ecdab3a2d5039d27a0d713aea2460fea2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 398,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 15,
"path": "/file_integrity_checks.py",
"repo_name": "ajm4zs/FileToDBParser",
"src_encoding": "UTF-8",
"text": "import os\n\n\n# throw exception if file already exists\ndef checkFileExistence(fullFilePath):\n if (os.path.isfile(fullFilePath)):\n raise Exception('The file you want to write to already exists.')\n\n# return True/False if file exists\ndef fileExists(fullFilePath):\n return os.path.isfile(fullFilePath)\n\n# return True/False if path exists\ndef pathExists(path):\n return os.path.exists(path) # comment"
},
{
"alpha_fraction": 0.7846534848213196,
"alphanum_fraction": 0.7846534848213196,
"avg_line_length": 80,
"blob_id": "419d66d3265b31ddbf2cac9fcb3bbf0aa90bb65f",
"content_id": "03175776c043dd23bc82ff5ace81876bd3b66b09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 404,
"license_type": "no_license",
"max_line_length": 222,
"num_lines": 5,
"path": "/README.md",
"repo_name": "ajm4zs/FileToDBParser",
"src_encoding": "UTF-8",
"text": "# FileToDBParser\n\nThis is a generic open source tool to parse raw data in any format, and load it into a MS SQL Server database. Alternatively, you can also output a CSV flat file.\n\nUse this tool as a first step in your ETL data pipelines. Currently, this supports XML and JSON file formats as input. We plan to extend this support for parquet, MS Excel, CSV and other delimited flat files in the future."
},
{
"alpha_fraction": 0.5777984261512756,
"alphanum_fraction": 0.581359326839447,
"avg_line_length": 33.725807189941406,
"blob_id": "6188c329603618ca0c3c0deb46d58bec5ea29405",
"content_id": "1198bc9010ea78ce13a483a31940d9a7fcd0ca34",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6459,
"license_type": "no_license",
"max_line_length": 167,
"num_lines": 186,
"path": "/DFProcessor.py",
"repo_name": "ajm4zs/FileToDBParser",
"src_encoding": "UTF-8",
"text": "# Authors: Alex Mulchandani and Nick Kharas\n\nimport pyodbc\nimport urllib\nimport datetime\nimport string\nimport uuid\nfrom sqlalchemy.engine import *\nimport sqlalchemy\n\nfrom file_integrity_checks import checkFileExistence\nfrom df_integrity_checks import isValueList, checkForLists, removeListColumnsFromDataframe\nfrom JSONProcessor import JSONProcessor\n\n\n\nclass DFProcessor:\n\n def __init__(self, server='', database='', table_name='', drop_table_if_exists=1):\n self.server = server\n self.database = database\n self.table_name = table_name\n self.drop_table_if_exists = drop_table_if_exists\n\n # Generates the sql statement for creating a table from a dataframe\n def generate_create_table_sql(self, df):\n sql = 'CREATE TABLE dbo.[' + self.table_name + '] ('\n\n i = 1\n\n while i <= len(df.columns):\n if i == len(df.columns):\n sql += '[' + df.columns[i - 1] + '] NVARCHAR(2000))'\n else:\n sql += '[' + df.columns[i - 1] + '] NVARCHAR(2000),'\n\n i += 1\n\n return sql\n\n def generate_drop_table_sql(self):\n sql = str('DROP TABLE IF EXISTS dbo.[%s]'%self.table_name)\n\n return sql\n\n # Checks if table already exists and throws exception if exists\n def check_table_existence(self, connection):\n sql = \"select * from sys.tables where name = '\" + self.table_name + \"'\"\n\n result = connection.execute(sql)\n\n row = result.fetchone()\n\n return row\n\n # Creates sql table on db\n def create_sql_table(self, connection, df):\n\n # check existence of SQL table\n row = self.check_table_existence(connection)\n\n if (row and not self.drop_table_if_exists):\n return\n else:\n \n # compile SQL drop table statement\n drop_sql = self.generate_drop_table_sql()\n\n # compile SQL create table statement\n create_sql = self.generate_create_table_sql(df)\n\n try:\n connection.execute(drop_sql)\n connection.execute(create_sql)\n except:\n raise Exception('There was an issue dropping and creating the SQL table.')\n\n return\n\n # Gets engine from pyodbc and returns 
engine object\n def get_engine(self):\n\n try:\n connectionString = urllib.parse.quote_plus(\n 'Driver={SQL Server Native Client 11.0};SERVER=' + self.server + ';DATABASE=' + self.database + ';Trusted_Connection=yes')\n engine = create_engine(\n 'mssql+pyodbc:///?odbc_connect=%s' % connectionString)\n except:\n raise Exception('Unable to attain engine for database ' +\n self.database + ' on server ' + self.server)\n finally:\n return engine\n\n # Connects engine and returns the connection\n def connect_engine(self, engine):\n\n try:\n connection = engine.connect()\n except:\n raise Exception('Cannot connect engine.')\n\n return connection\n\n # Writes the contents of a pandas dataframe to a sql table\n def write_dataframe_to_sql_table(self, df, engine, connection):\n\n # create the SQL table\n self.create_sql_table(connection, df)\n\n # write dataframe rows to SQL table\n try:\n df.to_sql(name=self.table_name, con=engine, index=False, if_exists='append')\n # data_normalized.to_sql(name=tableName, con=engine, index=False, dtype={col_name: sqlalchemy.types.NVARCHAR(length=2000) for col_name in data_normalized})\n except:\n raise Exception('There was an error writing the dataframe to SQL Server.')\n\n return\n\n # write JSON to output file\n def write_dataframe_to_output_file(self, df, outputPath, outputFileName, rowDelimeter):\n\n # store entire file path and name in fullOutputPath\n fullOutputPath = outputPath + '\\\\' + outputFileName + '.csv'\n\n # check for existence of output file\n checkFileExistence(fullOutputPath)\n\n df.to_csv(fullOutputPath, index=None, header=True, sep=rowDelimeter)\n\n def write_final_data(self, df, engine, connection):\n if (connection == 'file'):\n # Write df to output file in same directory as source file\n outputTableName = self.table_name + '_' + str(uuid.uuid4())\n self.write_dataframe_to_output_file(df, engine, outputTableName, '|')\n else:\n # Write df to SQL\n self.write_dataframe_to_sql_table(df, engine, connection)\n\n # 
process a dataframe by writing it's contents to sql table(s)\n def process_dataframe(self, df, engine, connection):\n\n # find if any columns contain a list\n listColumns = checkForLists(df)\n\n if len(listColumns) == 0:\n self.write_final_data(df, engine, connection)\n\n else:\n\n list_dict = {}\n\n for col in listColumns:\n # add new GUID (col_GUID) to df\n GUID_col = col + '_GUID'\n df[GUID_col] = [uuid.uuid4() for _ in range(len(df.index))]\n list_dict[col] = []\n\n # iterate through each row in df\n for index, row in df.iterrows():\n for col in listColumns:\n GUID_col = col + '_GUID'\n list_col = row[col]\n list_col_GUID = row[GUID_col]\n\n if (isValueList(list_col)):\n for item in list_col:\n if(isinstance(item, str)):\n print(row)\n print(list_col)\n print(item)\n item[GUID_col] = list_col_GUID\n list_dict[col].append(item)\n\n # we now have the new dfs... let's get rid of list columns from original DF and write it to SQL\n dfWithoutListColumns = removeListColumnsFromDataframe(df, listColumns)\n\n self.write_final_data(dfWithoutListColumns, engine, connection)\n\n # now that new dfs are fully created, do something with them\n\n for col in listColumns:\n json_processor = JSONProcessor()\n new_df_normalized = json_processor.get_dataframe_from_json(list_dict[col])\n\n self.table_name = self.table_name + '_' + col\n self.process_dataframe(new_df_normalized, engine, connection)\n"
}
] | 7 |
JH27/censusAmericans
|
https://github.com/JH27/censusAmericans
|
be3f3fc0565b70094d00d6a4f72a0bf48b150054
|
1787bdb5a8835c1c383c1d12bf95c4bb5e33184a
|
986c947912ced6c2def8b29484989ac80660b092
|
refs/heads/master
| 2021-01-15T08:52:15.518440 | 2015-06-24T16:37:03 | 2015-06-24T16:37:03 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5240174531936646,
"alphanum_fraction": 0.5305677056312561,
"avg_line_length": 24.44444465637207,
"blob_id": "d706087d4e7b39907475a6c24c403a90ac3976fd",
"content_id": "8cb29573f647ca8cdcc775203e4b3b4c9d7c3977",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 458,
"license_type": "permissive",
"max_line_length": 49,
"num_lines": 18,
"path": "/removeDups_fortest.py",
"repo_name": "JH27/censusAmericans",
"src_encoding": "UTF-8",
"text": "import csv\nimport random\n\ndef removeDups():\n with open(\"data/tweets.csv\",'rb') as csvfile:\n spamreader = csv.reader(csvfile)\n # headerDictionary = replaceHeaderCodes()\n text = []\n dups = []\n for row in spamreader:\n #print row\n if row in text or len(row)>140:\n dups.append(row)\n else:\n text.append(row)\n print text\n print len(dups)\nremoveDups()\n"
},
{
"alpha_fraction": 0.7883679270744324,
"alphanum_fraction": 0.7937483787536621,
"avg_line_length": 107.41666412353516,
"blob_id": "c88450ac518f14483470340614e64aa3d463d01b",
"content_id": "6e0f91a56dcb9f8d540041219d2c7f41a9719ee6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3903,
"license_type": "permissive",
"max_line_length": 788,
"num_lines": 36,
"path": "/README.md",
"repo_name": "JH27/censusAmericans",
"src_encoding": "UTF-8",
"text": "# censusAmericans\nHere is a twitter robot that automatically posts short bios of Americans from census data to twitter.\n\nThe data is public and anonymized\ncensusAmericans takes Public Use Microdata Sample(PUMS) data and reconstitutes it into mini narratives that describe real individuals who participated in the extended census in 2013. The PUMS is a limited subset of the American Community Survey, which is released to allow researchers access to a number of detailed profiles of anonymized individuals from each state. The profiles include items that when assembled has the potential to describe individuals for further study, but not so much detail that they can be deanonymized. For example while including relatable details such as the length, method, and time of a person's daily commute to work, the snapshots presented are also limited by omissions such as the lack of a person's location. \n\nWe made them into bios because what limited information is offered seemed to communicate individuals effectively - thankfully, we only need to know a little about a person in order to relate to them. \nwhether it is how much they work, who they take care of, or where they were born, just a few descriptors are enough. we hope some of these qualities are perserved even when we further limit the reconstituted bios of these americans to the length of a tweet. we built the twitter account to generate these bios efficiently and automatically broadcast them every few hours until every person in the data has been covered. even though the limitation on length will result in similar as well as less satisfying bios at times (we ourselves would find it limiting to be described in such spare terms and categorized so broadly), it remains interesting to think about these people because they are real and when they might even shorten distances when they are broadcast ambiently and constantly.\n\nhere are some people:\n- \"I've been married a few times. 
I work in sporting and athletic goods, and doll industry. I've never served in the military.\"\n- \"I was naturalized as an U. S. citizen. I had less than 2 weeks off last year. I work in construction.\"\n- \"I live with my parents. I'm unemployed, have not worked for the last 5 years. I've not worked for at least 5 years.\"\n- \"I've been taking care of my grandkids for more than 5 years. I work in amusement, gambling, and recreation industries.\"\n\nyou can follow the census here: @censusAmericans\n\nData:\n - American Community Survey's Public Use Microdata Sample (PUMS) dataset\n - from http://www.census.gov/acs/www/data_documentation/public_use_microdata_sample/\n\nCode: \n\nThe data is processed in 3 steps, so the code here split into 3 python scripts. It is by no means efficient.\n\n - draft.py isolates columns with content(just taking out identification codes, redundant columns) and turns the raw data from above into human readable form using dictionaries created to make each line of data more conversational sounding. Example: column JWTR with value 06 is translated into \"I take a ferryboat to work. \" This results in a very large file of \"bios\" that you can read for a sanity check. \n \n - refine.py checks each entry from the previous script and randomly combines tweets from 3-4 sentences/columns until each entry is less than 140 chars and ok for tweet.\n \n - censusAmericansBot.py uses tweepy the python twitter api to post a row from the resulting file. Currently it posts 1 line every 4 hours, and keeps a index file to track its progress. Based on everywordbot code from Allison Parrish NYU - her projects: http://www.decontextualize.com/\n\n - Setting up a twitter account and app to run the script is simple, I followed the instructions here: http://zachwhalen.net/posts/how-to-make-a-twitter-bot-with-google-spreadsheets-version-04\n\nTODO:\n - make notification email for when script fails\n - make geolocated tweets according to state?\n"
}
] | 2 |
NullConvergence/py-elasticinfrastructure
|
https://github.com/NullConvergence/py-elasticinfrastructure
|
bd9f561707391580f7fd42015d378855c07be691
|
fa58959591a1a4ee90cb4145acd4ed5f9f6c3b8a
|
9cf761b5ebf575c64f0268ce797f46fd675b3d27
|
refs/heads/master
| 2020-06-28T21:38:54.639811 | 2020-03-30T09:32:48 | 2020-03-30T09:32:48 | 200,348,359 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6018051505088806,
"alphanum_fraction": 0.6028670072555542,
"avg_line_length": 32.93693542480469,
"blob_id": "7a0edee35ca8f14d8233d4ede116ef6fbbac53a9",
"content_id": "d77c505a798a36113d8d52b61aec489b1003ca27",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3767,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 111,
"path": "/py_elasticinfra/utils/parse_config.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import logging\nfrom collections import OrderedDict\nfrom datetime import datetime\nfrom functools import reduce\nfrom operator import getitem\nfrom pathlib import Path\nfrom py_elasticinfra.utils.json import read_json, convert_json\nfrom py_elasticinfra.logger.main import Logger\n\n\nclass ConfigParser:\n def __init__(self, config=None, parse_args=False, args=None, options=[]):\n if parse_args is True:\n for opt in options:\n args.add_argument(*opt.flags, default=None, type=opt.type)\n args = args.parse_args()\n msg_no_cfg = \"Configuration file need to be \"\n \"specified. Add '-c config.json', for example.\"\n assert args.config is not None, msg_no_cfg\n self.cfg_fname = Path(args.config)\n config = read_json(self.cfg_fname)\n self._config = _update_config(config, options, args)\n else:\n self._parse_config(config)\n\n def initialize(self, module, module_config, *args, **kwargs):\n \"\"\"finds a function handle with the name given\n as \"type\" in config, and returns the\n instance initialized with corresponding\n keyword args given as \"args\".\n \"\"\"\n module_name = module_config[\"type\"]\n if \"args\" in module_config:\n module_args = dict(module_config[\"args\"])\n else:\n module_args = {}\n assert all([k not in module_args for k in kwargs]\n ), \"Overwriting kwargs given in config file is not allowed\"\n module_args.update(kwargs)\n return getattr(module, module_name)(*args, **module_args)\n\n def init_logger(self):\n self.logger = Logger()\n save_dir = Path(self.config['save_dir'])\n timestamp = datetime.now().strftime(r'%m%d_%H%M%S')\n exper_name = self.config['name']\n self._log_dir = save_dir / 'log' / exper_name / timestamp\n self.log_dir.mkdir(parents=True, exist_ok=True)\n self.logger.config_py_logger(self.log_dir)\n\n def get_logger(self, name, verbosity=2):\n if not self.logger:\n try:\n self.init_logger()\n except:\n raise(\"Failed to initialize logger\")\n\n logger = self.logger.get_py_logger(name, verbosity)\n return logger\n\n def 
configure_es_logger(self, default_level):\n es_logger = logging.getLogger('elasticsearch')\n es_logger.setLevel(default_level)\n\n def _parse_config(self, config):\n if config is not None:\n if isinstance(config, OrderedDict):\n self._config = config\n elif isinstance(config, str):\n self._config = convert_json(config)\n else:\n raise(\"Invalid Configuration\")\n else:\n raise(\"Configuration JSON must be provided\")\n\n def __getitem__(self, name):\n return self.config[name]\n\n @property\n def config(self):\n return self._config\n\n @property\n def log_dir(self):\n return self._log_dir\n\n\n# helper functions used to update config dict with custom cli options\ndef _update_config(config, options, args):\n for opt in options:\n value = getattr(args, _get_opt_name(opt.flags))\n if value is not None:\n _set_by_path(config, opt.target, value)\n return config\n\n\ndef _get_opt_name(flags):\n for flg in flags:\n if flg.startswith(\"--\"):\n return flg.replace(\"--\", \"\")\n return flags[0].replace(\"--\", \"\")\n\n\ndef _set_by_path(tree, keys, value):\n \"\"\"Set a value in a nested object in tree by sequence of keys.\"\"\"\n _get_by_path(tree, keys[:-1])[keys[-1]] = value\n\n\ndef _get_by_path(tree, keys):\n \"\"\"Access a nested object in tree by sequence of keys.\"\"\"\n return reduce(getitem, keys, tree)\n"
},
{
"alpha_fraction": 0.5741056203842163,
"alphanum_fraction": 0.5741056203842163,
"avg_line_length": 28.350000381469727,
"blob_id": "a4055610c0af733eeedba325bce3f9e94d3b4277",
"content_id": "61129c6a498d5af884ea9aa673459e9fabd41f1a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1174,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 40,
"path": "/py_elasticinfra/runner.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import threading\nimport time\nimport py_elasticinfra.metrics as module_metric\nfrom py_elasticinfra.utils.loop_thread import LoopThread\n\n\nclass Runner:\n def __init__(self, config, elastic, metrics=None):\n self.config = config\n self.es = elastic\n\n if metrics is None:\n self.metrics = [config.initialize(module_metric, met)\n for met in config[\"metrics\"]]\n else:\n self.metrics = metrics\n\n def loop(self, *args, **kwargs):\n while True:\n time.sleep(self.config[\"time\"])\n self.run(*args, **kwargs)\n\n def run(self, index=True):\n if index is True:\n return self.es.index_bulk(self.metrics)\n else:\n return [met.measure() for met in self.metrics]\n\n def run_background(self, index=True):\n self.loop_thread = LoopThread(\n run_method=self.run,\n run_kwargs={\"index\": index},\n thread_kwargs={\"name\": \"background-loop\"},\n sleep=self.config[\"time\"]\n )\n self.loop_thread.start()\n\n def stop_background(self):\n if self.loop_thread:\n self.loop_thread.stop()\n"
},
{
"alpha_fraction": 0.5055679082870483,
"alphanum_fraction": 0.5068405866622925,
"avg_line_length": 36.867469787597656,
"blob_id": "7e7f8269c4e637af4b725c5ae73b783967a18a93",
"content_id": "a426e2b7bf83ef299294f41f9f63f7e1f37a2951",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3143,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 83,
"path": "/py_elasticinfra/elk/elastic.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import datetime as dt\nimport json\nfrom elasticsearch import Elasticsearch\nfrom elasticsearch.helpers import bulk\n\n\nclass Indexer:\n def __init__(self, config, logger=None):\n self.config = config\n self.elk_config = config[\"elk\"][\"elastic\"]\n self.index = self.elk_config[\"index\"]\n if logger is None:\n try:\n logger = config.get_logger(\"elk_logger\")\n except:\n raise(\"[ERROR] Please provide logger.\")\n self.logger = logger\n\n def connect(self):\n try:\n self.es = Elasticsearch(self.elk_config[\"host\"])\n except Exception as exception:\n self.logger.error(\"[ERROR] \\t Could not connect \"\n \"to elasticsearch {}\".format(self.elk_config[\"host\"]))\n raise exception\n else:\n if self.es is None:\n self.logger.error(\"[ERROR] \\t Could not connect \"\n \"to elasticsearch {}\".format(self.elk_config[\"host\"]))\n raise \"Could not connect to elasticsearch\"\n else:\n self.logger.info(\"[INFO] \\t Successfully connected \"\n \"to es\")\n\n def index_bulk(self, metrics):\n metrics = self._prepare_index(metrics)\n try:\n bulk(self.es, metrics)\n except Exception as exception:\n self.logger.error(\"[ERROR] \\t Could not index \"\n \"bulk to elasticsearch {}\".format(exception))\n else:\n self.logger.info(\"[INFO] \\t Indexed bulk in es.\")\n\n def _prepare_index(self, metrics):\n now = dt.datetime.now()\n # str_now = now.strftime('%Y-%m-%dT%H:%M:%S.%fZ')\n for met in metrics:\n yield{\n \"_index\": self.index_name,\n \"_source\": {\n \"timestamp\": now,\n \"experiment\": self.config[\"name\"],\n \"hostname\": self.config[\"hostname\"],\n \"measurement_type\": met.get_type(),\n \"measurement\": met.measure()\n }\n }\n\n def _check_connection(self):\n # TODO: decide on other connection checks\n # e.g. 
es.cluster.health()\n if not self.es:\n return False\n\n def create_index(self, index_name=None, config=None):\n if index_name is None:\n index_name = self.index[\"name\"]\n if config is None:\n config = self.index[\"config\"]\n try:\n json_config = json.dumps(config, indent=4)\n current_date = dt.date.today().strftime(\"%d.%m.%Y\")\n self.index_name = (index_name + '-' + current_date).lower()\n self.es.indices.create(index=self.index_name,\n body=json_config,\n ignore=400)\n except Exception as exception:\n self.logger.error(\"[ERROR] \\t Could not create es \"\n \" index {}\".format(exception))\n raise exception\n else:\n self.logger.info(\"[INFO] \\t Index successfully checked.\")\n"
},
{
"alpha_fraction": 0.5944444537162781,
"alphanum_fraction": 0.5944444537162781,
"avg_line_length": 21.5,
"blob_id": "3fe2cdd81bc5d71ae2513869894fbd1f629b0173",
"content_id": "04dd4054cf822a4ba24699d6faff9a798c1077f5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 360,
"license_type": "permissive",
"max_line_length": 53,
"num_lines": 16,
"path": "/py_elasticinfra/metrics/cpu_load.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import psutil\nfrom .base import BaseMetric\n\n\nclass CpuLoad(BaseMetric):\n def __init__(self):\n pass\n\n def measure(self):\n load = psutil.cpu_percent()\n cpu_threads = psutil.cpu_percent(percpu=True)\n return {'cpu_load_average': load,\n 'cpu_load_threads': cpu_threads}\n\n def get_type(self):\n return 'cpu'\n"
},
{
"alpha_fraction": 0.7693144679069519,
"alphanum_fraction": 0.771490752696991,
"avg_line_length": 38.956520080566406,
"blob_id": "10ffc622328d1216dd4be001646faac47a99b977",
"content_id": "4ca71e833caaf8de04ee1b00fd2d1963c70bd1b6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2757,
"license_type": "permissive",
"max_line_length": 326,
"num_lines": 69,
"path": "/README.md",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "# py-elasticinfrastructure\nThis small utilty indexes infrastructure metrics to elasticsearch.\n\n\nIt was created to gather infrastructure data from machine learning experiments, with a focus on GPU utilization and CPU temperature.\nThe inspiration comes from metricbeats by Elastic. However, this module is written in Python and it is easier to customize (I found some community beats for Elastic outdated and impossible to run on newer machines).\n\n## Install\n\n```\n$ pip install py-elasticinfrastructure \n```\n\n## Run\n\nThere are two ways of running the project: (1) as an standalone program, see [example.py](https://github.com/NullConvergence/py_metrics/blob/master/example.py) or (2) on a separate thread, part of a bigger project, see [example_multithread.py](https://github.com/NullConvergence/py_metrics/blob/master/example_multithread.py).\n\n#### 1. Run standalone\nIn order to run the project in the first case, you have to add a configuration JSON file (see [configs](https://github.com/NullConvergence/py_metrics/tree/master/configs)) and run:\n\n```\n$ python example.py --config=configs/<config-file>.json\n```\nAn example config file is provided as [default](https://github.com/NullConvergence/py_metrics/blob/master/configs/default.json). \nMake sure you edit the elasticsearch host data before you run the project.\n\n\n#### 2. 
Run in a project\nIn order to run in a separate project, the library will spawn a new thread and run the indexing loop.\nAfter instalation, you can import and configure the runner as follows:\n```\nimport time\nfrom py_elasticinfra.elk.elastic import Indexer\nfrom py_elasticinfra.runner import Runner\nfrom py_elasticinfra.utils.parse_config import ConfigParser\n\n## confgure elasticsearch indexer\n# config can be a json or a file path\n\nes = Indexer(config)\nes.connect()\nes.create_index()\n\n# configure and run \nrunner = Runner(config, es)\nrunner.run_background()\n\n# stop runner after 5 seconds\ntime.sleep(5)\nrunner.stop_background()\n```\n\nAn example is configured in [example_multithread.py](https://github.com/NullConvergence/py-elasticinfrastructure/blob/master/example_multithread.py) and can be ran:\n```\n$ python example_multithread.py --config/<config>.json\n```\n\nAn example config file is provided as [default](https://github.com/NullConvergence/py_metrics/blob/master/configs/default.json). \nMake sure you edit the elasticsearch host data before you run the project.\n\n## Extend\n\nYou can add new metrics by adding a new file in the ```py_metrics/metrics``` folder and subclassing the BaseMetric.\nAfterwards, you can add it to the ```__init__.py``` file and to the config.\n\n## ELK Docker\n\nIn order to run the ELK stack in docker, see [docker-elk](https://github.com/deviantony/docker-elk).\nThe indexed data can be mined using Kibana.\n"
},
{
"alpha_fraction": 0.5616979002952576,
"alphanum_fraction": 0.5616979002952576,
"avg_line_length": 27.13888931274414,
"blob_id": "b23ea9ad78c012be09f96a5a6175ea31fc74375f",
"content_id": "a4f81802d92dcf479fcb29db2f11c786f6ca4f48",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1013,
"license_type": "permissive",
"max_line_length": 60,
"num_lines": 36,
"path": "/py_elasticinfra/utils/loop_thread.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import threading\nimport time\n\n\nclass LoopThread(threading.Thread):\n \"\"\"Thread class with a stop() method, which\n runs a loop\n \"\"\"\n\n def __init__(self, run_method,\n run_kwargs,\n thread_kwargs={},\n sleep=False):\n \"\"\"\n :param run_method: method to run in the loop\n :param run_kwargs: arguments for method in the loop\n :thread_kwargs: arguments for threading.Thread class\n :sleep: time to sleep in the loop\n \"\"\"\n super(LoopThread, self).__init__(**thread_kwargs)\n self._stop_event = threading.Event()\n self.run_method = run_method\n self.run_kwargs = run_kwargs\n self.sleep = sleep\n\n def run(self):\n while not self.stopped():\n if self.sleep is not False:\n time.sleep(self.sleep)\n self.run_method(**self.run_kwargs)\n\n def stop(self):\n self._stop_event.set()\n\n def stopped(self):\n return self._stop_event.is_set()\n"
},
{
"alpha_fraction": 0.6571428775787354,
"alphanum_fraction": 0.6571428775787354,
"avg_line_length": 16.5,
"blob_id": "932a346fa1613a90ca08636a46b7cacf8373f965",
"content_id": "b01fc7ebfba29174886c83ba36e64230fbd1104d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 245,
"license_type": "permissive",
"max_line_length": 33,
"num_lines": 14,
"path": "/py_elasticinfra/metrics/base.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "from abc import abstractmethod\n\n\nclass BaseMetric:\n def __init__(self):\n pass\n\n @abstractmethod\n def measure(self):\n raise NotImplementedError\n\n @abstractmethod\n def get_type(self):\n raise NotImplementedError\n"
},
{
"alpha_fraction": 0.4898844063282013,
"alphanum_fraction": 0.49132949113845825,
"avg_line_length": 22.066667556762695,
"blob_id": "84dc86c1cc52e3f1b6d86cb9de44d623eac878a6",
"content_id": "3372e643bf96edef96f542a4ef63fb29cdbf760e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 692,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 30,
"path": "/py_elasticinfra/metrics/gpus.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "from .base import BaseMetric\nimport GPUtil\n\n\nclass GPUs(BaseMetric):\n def __init__(self):\n pass\n\n def measure(self):\n all = []\n gpus = GPUtil.getGPUs()\n for g in gpus:\n all.append(g.__dict__)\n\n return {\n \"gpus_data\": all,\n \"gpus_averages\": self._get_gpus_metadata(gpus)}\n\n def get_type(self):\n return 'gpu'\n\n def _get_gpus_metadata(self, gpus):\n labels = [\"load\", \"memoryUsed\", \"memoryFree\", \"temperature\"]\n metadata = {}\n for l in labels:\n d = 0\n for g in gpus:\n d += g.__dict__[l]\n metadata[l] = d / len(gpus)\n return metadata\n"
},
{
"alpha_fraction": 0.7127371430397034,
"alphanum_fraction": 0.7208672165870667,
"avg_line_length": 25.35714340209961,
"blob_id": "5bf7abf65ab1c1fb7555391b0fa3c1e02575434c",
"content_id": "e514eb4bee35024209e34b5e406bd76aa019f74f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 369,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 14,
"path": "/setup.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import os\nfrom setuptools import find_packages, setup\n\nwith open('requirements.txt') as f:\n required = f.read().splitlines()\n\nsetup(\n name='py-elasticinfrastructure',\n version='1.1.3',\n description='A small utilty to index infrastructure metrics to elasticsearch',\n author='NullConvergence',\n packages=find_packages(),\n install_requires=required\n)\n"
},
{
"alpha_fraction": 0.4878048896789551,
"alphanum_fraction": 0.6951219439506531,
"avg_line_length": 15.399999618530273,
"blob_id": "c68b030a0a36be84625534d80b04dcf853606588",
"content_id": "27bc3b80b3ed0b96ef68a8ab043b8ca9505771e1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 82,
"license_type": "permissive",
"max_line_length": 20,
"num_lines": 5,
"path": "/requirements.txt",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "psutil==5.6.6\nnumpy==1.16.2\nelasticsearch==7.0.2\nsetuptools==41.0.1\nGPUtil==1.4.0\n"
},
{
"alpha_fraction": 0.5246753096580505,
"alphanum_fraction": 0.5246753096580505,
"avg_line_length": 27.875,
"blob_id": "5ddd9461907cc1f70d2167744a10f4b3e83c3a7f",
"content_id": "00ee005a1eb861e74fc6219803359774004838af",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1155,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 40,
"path": "/py_elasticinfra/metrics/cpu_temp.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport psutil\nfrom .base import BaseMetric\n\n\nclass CpuTemp(BaseMetric):\n def __init__(self, *args, **kwargs):\n if \"tmp_key\" in kwargs:\n self.tmp_key = kwargs[\"tmp_key\"]\n else:\n raise \"Temperature requires a key from psutil.\"\n\n def measure(self):\n temps = self._get_temps()\n return {\"cpu_temperatures\": temps,\n \"cpu_temperature_average\": self._get_temp_averages(temps)}\n\n def get_type(self):\n return 'cpu'\n\n def _get_temps(self):\n temp = psutil.sensors_temperatures()\n temps = []\n for t in temp[self.tmp_key]:\n temps.append({\n \"label\": t.label,\n \"current\": t.current,\n \"high\": t.high,\n \"critical\": t.critical\n })\n return temps\n\n def _get_temp_averages(self, temps):\n labels = set([t[\"label\"] for t in temps])\n metadata = {}\n for label in labels:\n res = [t[\"current\"] for t in temps if t[\"label\"] == label]\n res = np.array(res)\n metadata[label] = np.average(res)\n return metadata\n"
},
{
"alpha_fraction": 0.6798245906829834,
"alphanum_fraction": 0.6820175647735596,
"avg_line_length": 24.33333396911621,
"blob_id": "6675023cbe42accba841838144e3865be3528b35",
"content_id": "0bbbbbea5b8f07c85f4e09afc9ef615a8ea679ae",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 456,
"license_type": "permissive",
"max_line_length": 61,
"num_lines": 18,
"path": "/py_elasticinfra/utils/json.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import json\nfrom collections import OrderedDict\n\n\ndef read_json(fname, dic=True):\n with fname.open('rt') as handle:\n if dic is True:\n return json.load(handle, object_hook=OrderedDict)\n return json.load(handle)\n\n\ndef convert_json(data):\n return json.loads(data, object_pairs_hook=OrderedDict)\n\n\ndef write_json(content, fname):\n with fname.open('wt') as handle:\n json.dump(content, handle, indent=4, sort_keys=False)\n"
},
{
"alpha_fraction": 0.8131868243217468,
"alphanum_fraction": 0.8131868243217468,
"avg_line_length": 44.5,
"blob_id": "b07d06d33d858172962a317b74fcd9120b955edd",
"content_id": "9244b91822fe7116ea8ca788d1553cfdca7b8bcd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 182,
"license_type": "permissive",
"max_line_length": 46,
"num_lines": 4,
"path": "/py_elasticinfra/metrics/__init__.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "from py_elasticinfra.metrics.cpu_load import *\nfrom py_elasticinfra.metrics.cpu_temp import *\nfrom py_elasticinfra.metrics.gpus import *\nfrom py_elasticinfra.metrics.memory import *\n"
},
{
"alpha_fraction": 0.5502923727035522,
"alphanum_fraction": 0.5520467758178711,
"avg_line_length": 36.173912048339844,
"blob_id": "05b7212f1d27162cc0c522e66bdae4dafc44cbb8",
"content_id": "bf9e09bc18dafae3b30fcf4a11fd0b952e4ccb87",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1710,
"license_type": "permissive",
"max_line_length": 87,
"num_lines": 46,
"path": "/py_elasticinfra/logger/main.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import logging\nimport logging.config\nfrom pathlib import Path\nfrom py_elasticinfra.utils.json import read_json\n\n\nclass Logger:\n def __init__(self):\n pass\n\n def config_py_logger(self,\n log_dir,\n log_config=\"py_elasticinfra/logger/py_logger.json\",\n log_levels={\n 0: logging.WARNING,\n 2: logging.DEBUG,\n 3: logging.INFO\n },\n default_level=logging.INFO):\n log_config = Path(log_config)\n if log_config.is_file():\n config = read_json(log_config)\n for _, handler in config[\"handlers\"].items():\n if \"filename\" in handler:\n handler[\"filename\"] = str(log_dir / handler['filename'])\n logging.config.dictConfig(config)\n else:\n print(\"Warning: logging configuration \"\n \"file is not found in {}.\".format(log_config))\n logging.basicConfig(level=default_level)\n\n self.configure_es_logger(default_level)\n self.log_levels = log_levels\n self.log_dir = log_dir\n\n def get_py_logger(self, name, verbosity):\n msg_verbosity = \"verbosity option {} is invalid. Valid options are {}.\".format(\n verbosity, self.log_levels.keys())\n assert verbosity in self.log_levels, msg_verbosity\n logger = logging.getLogger(name)\n logger.setLevel(self.log_levels[verbosity])\n return logger\n\n def configure_es_logger(self, default_level):\n es_logger = logging.getLogger('elasticsearch')\n es_logger.setLevel(default_level)\n"
},
{
"alpha_fraction": 0.643112063407898,
"alphanum_fraction": 0.6459671854972839,
"avg_line_length": 29.45652198791504,
"blob_id": "5647cec03f513e9fdfc79535852a785b92207506",
"content_id": "b5fecbb902586635605aaeab9869dfd5fcdc8d65",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1401,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 46,
"path": "/example_multithread.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import argparse\nimport collections\nimport time\nimport threading\nfrom py_elasticinfra.utils.parse_config import ConfigParser\nfrom py_elasticinfra.elk.elastic import Indexer\nfrom py_elasticinfra.runner import Runner\n\n\ndef foreground_thread():\n for i in range(5):\n time.sleep(3)\n print('[INFO] Foreground thread, iteration {}'.format(i+1))\n\n\ndef main(config):\n # connect to elasticsearch\n es = Indexer(config)\n es.connect()\n es.create_index()\n # initialize threads and run in parallel\n runner = Runner(config, es)\n runner.run_background()\n\n thread_main = threading.Thread(name=\"foreground_thread\",\n target=foreground_thread)\n thread_main.start()\n\n time.sleep(5)\n runner.stop_background()\n\n\nif __name__ == \"__main__\":\n args = argparse.ArgumentParser(description=\"py_elasticinfra\")\n args.add_argument(\"-c\", \"--config\", default=None, type=str,\n help=\"config file path (default: None)\")\n # custom cli options to modify configuration\n # from default values given in json file.\n custom_args = collections.namedtuple(\"custom_args\", \"flags type target\")\n options = [\n custom_args([\"--elk\", \"--elk_host\"], type=str,\n target=(\"elk\", \"host\"))\n ]\n config = ConfigParser(parse_args=True, args=args, options=options)\n config.init_logger()\n main(config)\n"
},
{
"alpha_fraction": 0.44192636013031006,
"alphanum_fraction": 0.44192636013031006,
"avg_line_length": 23.34482765197754,
"blob_id": "d5207d7ea30bfa0320ad65a65d2b3238da0e6af0",
"content_id": "acfbaea934b9d5ace24d4971a23c1122a8c75220",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 706,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 29,
"path": "/py_elasticinfra/metrics/memory.py",
"repo_name": "NullConvergence/py-elasticinfrastructure",
"src_encoding": "UTF-8",
"text": "import psutil\nfrom .base import BaseMetric\n\n\nclass Memory(BaseMetric):\n def __init__(self):\n pass\n\n def measure(self):\n mem = psutil.virtual_memory()\n swap = psutil.swap_memory()\n return {\n \"virtual_memory\": {\n \"total\": mem.total,\n \"available\": mem.available,\n \"percent\": mem.percent,\n \"used\": mem.used,\n \"free\": mem.free\n },\n \"swap_memory\": {\n \"total\": swap.total,\n \"used\": swap.used,\n \"free\": swap.free,\n \"percent\": swap.percent\n }\n }\n\n def get_type(self):\n return 'memory'\n"
}
] | 16 |
DongHun-Lee-96/Django
|
https://github.com/DongHun-Lee-96/Django
|
5227af8f833ed971c877305e6e0703582ba7e32e
|
f82a95962ff22d4da2f7f676dee6d09c935d9925
|
cbad3fb9af479f1fa29a987cfaf3e87c3e9423ec
|
refs/heads/master
| 2022-12-03T08:04:54.455998 | 2020-08-27T07:12:57 | 2020-08-27T07:12:57 | 290,074,286 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6663960814476013,
"alphanum_fraction": 0.6680194735527039,
"avg_line_length": 26.377777099609375,
"blob_id": "9398dc23ee91fc653ef4cb820d7a526ee7142dcc",
"content_id": "fdc76dbb519e85c5bcd60d8a6eda1d94f8489fe5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1306,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 45,
"path": "/mysite/polls/views.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render, redirect\nfrom django.http import HttpResponse\nfrom .models import Question, Choice\n\n# urls.py 내에서 question_id라는 명칭을 사용했기 때문에\n\n\ndef reset(request, question_id):\n question = Question.objects.get(pk=question_id)\n choices = question.choice_set.all()\n for choice in choices: # for-each\n choice.votes = 0\n choice.save()\n\n return redirect('/polls/'+str(question_id))\n # return redirect('/polls/%s' % question_id)\n\n\ndef vote(request, question_id):\n # 사용자가 선택한 radio 값\n choice_id = request.GET.get('choice')\n # ORM 에서는 primary key 값이 존재하면 update 코드가 수행\n # Choice(id=choice_id) (X)\n choice = Choice.objects.get(id=choice_id)\n choice.votes = choice.votes+1\n choice.save() # update\n\n return redirect('/polls/')\n\n\ndef detail(request, question_id):\n question = Question.objects.get(id=question_id)\n context = {\n 'question': question\n }\n return render(request, 'polls/detail.html', context)\n\n\ndef index(request):\n question_list = Question.objects.all()\n context = {\n 'question_list': question_list\n }\n return render(request, 'polls/index.html', context)\n # return HttpResponse('Hello, you are at the polls index')\n"
},
{
"alpha_fraction": 0.6280899047851562,
"alphanum_fraction": 0.6438202261924744,
"avg_line_length": 21.25,
"blob_id": "942fdfbb71ee6758e29ae645908bfb30bdb84f34",
"content_id": "6d3171db6d2e887e55ad7ccaa67fca2c74915cb9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 922,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 40,
"path": "/tutorial/firstapp/views.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.http import HttpResponse, JsonResponse\nfrom .models import *\n\n\ndef show(request):\n curriculum = Curriculum.objects.all()\n context = {'curriculum': curriculum}\n return render(request, 'firstapp/show.html', context)\n# html = ''\n# for c in curriculum:\n# html += c.name + '<br>'\n# return HttpResponse(html)\n\n\ndef add(request):\n # 데이터베이스에 데이터 입력\n # 데이터 3개 insert into firstapp_curriculum\n c1 = Curriculum(name='python')\n c1.save()\n c2 = Curriculum(name='java')\n c2.save()\n c3 = Curriculum(name='spring')\n c3.save()\n return HttpResponse('OK')\n\n\ndef main(request):\n return HttpResponse('Main!!')\n\n\ndef index1(request):\n return HttpResponse('<h1>Hello</h1>')\n\n\ndef json(request):\n return JsonResponse({'key1': 'value1', 'key2': 'value2'})\n # XXXResponse\n # redirect\n # render\n"
},
{
"alpha_fraction": 0.7160493731498718,
"alphanum_fraction": 0.7160493731498718,
"avg_line_length": 26,
"blob_id": "095061fac6f8b04aefd053663b22520269e2c833",
"content_id": "073fdaf8d6b23eb4da74f6b6f4069a8a3e002248",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 162,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 6,
"path": "/tutorial/secondapp/urls.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views # used . because views and urls are in the same folder\n\nurlpatterns = [\n path('hospital/', views.hospital)\n]\n"
},
{
"alpha_fraction": 0.6979866027832031,
"alphanum_fraction": 0.7114093899726868,
"avg_line_length": 24,
"blob_id": "49ce6b95ad2eb538e77d7db19731b79f781dbdb1",
"content_id": "a33d9b9bd71cb6cba750b684ea9262d8b9e46efd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 157,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 6,
"path": "/sample/app/models.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\nclass Product(models.Model):\n # id 자동생성\n name = models.CharField(max_length=50)\n price = models.IntegerField()"
},
{
"alpha_fraction": 0.6604823470115662,
"alphanum_fraction": 0.6604823470115662,
"avg_line_length": 27.36842155456543,
"blob_id": "022e7f2b5249750a6e2831b3334f7ce6f4ac146b",
"content_id": "38401bd726ac586c6090e60bb30e941806d6caa5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 547,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 19,
"path": "/tutorial/secondapp/views.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.http import HttpResponse\nfrom .models import Hospital\n\n\ndef hospital(request):\n search = request.GET.get('search')\n if not search:\n search = ''\n\n hospital = Hospital.objects.filter(name__contains=search)\n #hospital = Hospital.objects.all()\n\n context = {'hospital': hospital} # 앞의 'hospital'은 key 값\n return render(request, 'secondapp/hospital.html', context)\n # html = ''\n # for h in hospital:\n # html += h.name + '<br>'\n # return HttpResponse(html)\n"
},
{
"alpha_fraction": 0.6536144614219666,
"alphanum_fraction": 0.6536144614219666,
"avg_line_length": 35.88888931274414,
"blob_id": "01e327bcf369e7036fa352bb75d23e5f57082041",
"content_id": "d9666cdfe92d5108ac61555f1b88e44be6ae0f37",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 332,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 9,
"path": "/mysite/polls/urls.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views\n\nurlpatterns = [\n path('', views.index, name='index'), # index is used for redirect later\n path('<int:question_id>/', views.detail, name='detail'),\n path('vote/<int:question_id>/', views.vote, name='vote'),\n path('reset/<int:question_id>/', views.reset, name='reset')\n]\n"
},
{
"alpha_fraction": 0.6519823670387268,
"alphanum_fraction": 0.6519823670387268,
"avg_line_length": 27.375,
"blob_id": "9659f8961d18dad3abb9b369648577237419680b",
"content_id": "e04c8df7168349de57127e3e9a173e251d967d1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 227,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 8,
"path": "/tutorial/firstapp/urls.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom . import views # used . because views and urls are in the same folder\n\nurlpatterns = [\n path('main/', views.main, name='main'),\n path('add/', views.add),\n path('show/', views.show)\n]\n"
},
{
"alpha_fraction": 0.6790924072265625,
"alphanum_fraction": 0.6807131171226501,
"avg_line_length": 21.851852416992188,
"blob_id": "31efb0829d9453bd475b309b5cb7848f573952b6",
"content_id": "903a03a1b76d704d5f563ae457d5aaa27402ec80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 629,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 27,
"path": "/sample/sample/views.py",
"repo_name": "DongHun-Lee-96/Django",
"src_encoding": "UTF-8",
"text": "from django.http import HttpResponse\nfrom django.shortcuts import render\nfrom app.models import Product\n\n\ndef show(request):\n products = Product.objects.all()\n context = {'products': products}\n return render(request, 'show.html', context)\n\n\ndef signup(request):\n name = request.GET.get(\"name\")\n price = request.GET.get(\"price\")\n # insert\n p = Product(name=name, price=price)\n p.save()\n return HttpResponse('상품 등록 완료')\n\n\ndef index(request):\n return HttpResponse(\"hello\")\n\n\ndef html(request):\n context = {} # dictionary\n return render(request, 'html.html', context) # 3 parameters\n"
}
] | 8 |
diegobaqt/clothing_image_sorter
|
https://github.com/diegobaqt/clothing_image_sorter
|
59cbea1e1475bcbadb0ef89a8e27b263e51528d7
|
d38b98acd839ba1a09a9664e7b28fde678267bf2
|
d56ed967e13c0a369bfe9e1a019798971841d844
|
refs/heads/master
| 2020-04-25T01:34:37.111689 | 2019-02-27T20:21:38 | 2019-02-27T20:21:38 | 172,412,553 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.46666666865348816,
"alphanum_fraction": 0.46796536445617676,
"avg_line_length": 30.435373306274414,
"blob_id": "8c689d32f0ae4557d7bd8041b8f59a8f0ea859fa",
"content_id": "a4f537617ff37aba8346a6505e5b82e3adf48edc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 4622,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 147,
"path": "/SorterApp/SorterApp/SorterApp/Views/LoadImagePage.xaml.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "using Plugin.Media;\nusing Plugin.Media.Abstractions;\nusing SorterApp.Services;\nusing System;\nusing Xamarin.Forms;\nusing Xamarin.Forms.Xaml;\n\nnamespace SorterApp.Views\n{\n\t[XamlCompilation(XamlCompilationOptions.Compile)]\n\tpublic partial class LoadImagePage : ContentPage\n\t{\n private readonly PredictService _predictService = new PredictService();\n private string _path = \"\";\n private string _fileName = \"\";\n\n\t\tpublic LoadImagePage ()\n\t\t{\n\t\t\tInitializeComponent ();\n DisabledAuxButtons();\n Loading.IsRunning = false;\n LayoutClass.IsVisible = false;\n\n Reset.Clicked += (sender, args) =>\n {\n Image.Source = \"\";\n _path = \"\";\n _fileName = \"\";\n ImageName.Text = \"\";\n LayoutClass.IsVisible = false;\n DisabledAuxButtons();\n };\n\n PickPhoto.Clicked += async (sender, args) =>\n {\n Loading.IsRunning = true;\n if (!CrossMedia.Current.IsPickPhotoSupported)\n {\n await DisplayAlert(\"Photos Not Supported\", \":( Permission not granted to photos.\", \"OK\");\n return;\n }\n var file = await CrossMedia.Current.PickPhotoAsync(new PickMediaOptions\n {\n PhotoSize = PhotoSize.Medium,\n });\n if (file == null) return;\n _path = file.Path;\n var split = _path.Split('/');\n _fileName = split[split.Length - 1];\n ImageName.Text = _fileName;\n\n var imageSource = ImageSource.FromStream(() =>\n {\n var stream = file.GetStream();\n file.Dispose();\n return stream;\n });\n\n Image.Source = imageSource;\n EnabledButtons();\n\n Loading.IsRunning = false;\n };\n\n TakePhoto.Clicked += async (sender, args) =>\n {\n Loading.IsRunning = true;\n DisabledButtons();\n\n await CrossMedia.Current.Initialize();\n\n if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)\n {\n await DisplayAlert(\"No Camera\", \":( No camera available.\", \"OK\");\n return;\n }\n\n var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions\n {\n Directory = \"SorterAppImages\",\n Name = \"Sorter_\" + DateTime.Now.Ticks 
+ \".jpg\",\n CompressionQuality = 92,\n PhotoSize = PhotoSize.Custom,\n CustomPhotoSize = 90\n });\n\n if (file == null) return;\n _path = file.Path;\n var split = _path.Split('/');\n _fileName = split[split.Length - 1];\n ImageName.Text = _fileName;\n\n var imageSource = ImageSource.FromStream(() =>\n {\n var stream = file.GetStream();\n return stream;\n });\n\n Image.Source = imageSource;\n EnabledButtons();\n\n Loading.IsRunning = false;\n };\n\n Send.Clicked += async (sender, args) =>\n {\n Loading.IsRunning = true;\n DisabledButtons();\n var result = await _predictService.PredictClassImage(_path);\n if (result == \"Error\")\n {\n await DisplayAlert(\"Error :(\", \"A connection error has occurred\", \"OK\");\n }\n else\n {\n LabelClass.Text = \"Class: \" + result;\n }\n\n EnabledButtons();\n LayoutClass.IsVisible = true;\n Loading.IsRunning = false;\n };\n }\n\n private void DisabledButtons()\n {\n Reset.IsEnabled = false;\n Send.IsEnabled = false;\n PickPhoto.IsEnabled = false;\n TakePhoto.IsEnabled = false;\n }\n\n private void EnabledButtons()\n {\n Reset.IsEnabled = true;\n Send.IsEnabled = true;\n PickPhoto.IsEnabled = true;\n TakePhoto.IsEnabled = true;\n }\n\n private void DisabledAuxButtons()\n {\n Reset.IsEnabled = false;\n Send.IsEnabled = false;\n }\n }\n}"
},
{
"alpha_fraction": 0.6478696465492249,
"alphanum_fraction": 0.6791979670524597,
"avg_line_length": 20.567567825317383,
"blob_id": "441a4b3ca5767186ab24f8fd1450cca5b9c29ef4",
"content_id": "c7ad204ef9ec4832c71826ea53050e46519a50a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 798,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 37,
"path": "/Machine Learning Model/predict1.py",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "from keras.models import load_model\nimport numpy as np\nimport cv2\nimport argparse\nimport urllib.request\n\nlabels = {\n 0: 'shoe',\n 1: 'dress',\n 2: 'pants',\n 3: 'outerwear'\n}\n\n\ndef url_to_image(url):\n resp = urllib.request.urlopen(url)\n image = np.asarray(bytearray(resp.read()), dtype=\"uint8\")\n image = cv2.imdecode(image, cv2.IMREAD_COLOR)\n return image\n\n\nparser = argparse.ArgumentParser(description='Predict dress from image.')\nparser.add_argument('-i', '--image', help='Image input url', type=str)\n\nargs = vars(parser.parse_args())\n\nimg_url = args['image']\n\nimg = url_to_image(img_url)\nimg = cv2.resize(img, (300, 300))\nimg = img.reshape(1, 300, 300, 3)\n\nmodel = load_model(\"my_model2.h5\")\npredicted = model.predict(img)\n\nlabel = np.argmax(predicted)\nprint(labels[label])\n"
},
{
"alpha_fraction": 0.6617646813392639,
"alphanum_fraction": 0.6617646813392639,
"avg_line_length": 18.428571701049805,
"blob_id": "c80d10dbf409be28aa763b97bc9d76944705d49a",
"content_id": "bab23caf81a2b743d83d3c0a51c539a02f2e3371",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 138,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 7,
"path": "/SorterServices/SorterServices/Models/PredictResponseViewModel.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "namespace SorterServices.Models\n{\n public class PredictResponseViewModel\n {\n public string Response { get; set; }\n }\n}\n"
},
{
"alpha_fraction": 0.7248322367668152,
"alphanum_fraction": 0.7248322367668152,
"avg_line_length": 12.545454978942871,
"blob_id": "47af13a3afae14b6171e39342a47ccddf9e11384",
"content_id": "010917194a53b84d98dd0a2a858dde4a6b90ceba",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 151,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 11,
"path": "/SorterApp/SorterApp/SorterApp/ViewModels/ImageViewModel.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.Text;\n\nnamespace SorterApp.ViewModels\n{\n public class ImageViewModel\n {\n\n }\n}\n"
},
{
"alpha_fraction": 0.8065023422241211,
"alphanum_fraction": 0.8164656758308411,
"avg_line_length": 99.31578826904297,
"blob_id": "4abe7dc3d87af7470994a3b9dd6b3e6f9d54102e",
"content_id": "936d15c746caa172b0b90ce180cdabfa015cccec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1937,
"license_type": "no_license",
"max_line_length": 691,
"num_lines": 19,
"path": "/README.md",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "<h3>Clothing Image Sorter</h3>\n\n<h4>Introducción</h4>\n\nDebido a la movilización de actividades de la vida cotidiana hacia el internet en la era digital, negocios como el comercio electrónico comienzan a utilizar más herramientas para satisfacer a sus clientes. La industria de la moda representa uno de los grandes rubros del sector de comercio electrónico, donde sus clientes tienen necesidades que deben ser satisfechas. <br>\n\nInno Clothing Sorter, es una aplicación que utiliza un modelo de entrenamiento basado en redes neuronales convolucionales al cual se puede acceder mediante una aplicación móvil, que se comunica con el modelo mediante un RESTful API; donde el usuario de la aplicación podrá subir una imagen y el modelo de Machine Learning propuesto podrá clasificarla en las diferentes clases existenes (Dresses, Shoes, Outerwear y Pants). Sin embargo, es necesario mencionar que aún el proyecto dista mucho del objetivo que se quiere lograr, el cual es clasificar el tipo de ropa que una persona ha comprado, para brindarle recomendaciones acertadas prendas que posiblemente puede comprar, según sus gustos.\n\n<h4>Modelo de Machine Learning</h4>\n\nSe plantea el reto de la clasificación de imágenes de prendas de vestir según su categoría, implementamos un modelo de aprendizaje de máquina basado redes neuronales convolucionales basado en DenseNet 121 y un algoritmo de optimización RMSprop, haciendo uso de Keras y OpenCV. Adicionalmente, se realizó el procesamiento de 3260 de imágenes, donde el 80% de ellas fueron usadas para entrenamiento y el restante para testing y validación. El modelo fue desarrollado e implementado obteniendo una precisión (Accuracy) de aproximadamente 75\\%.\n\n<h4>Aplicación</h4>\n\nLa solución implementada hasta el momento está compuesta por los siguientes componentes:\n\n* Aplicación Móvil\n* API (Servicios expuestos a la aplicación)\n* Flask (Ejecución del modelo)\n\n"
},
{
"alpha_fraction": 0.6177605986595154,
"alphanum_fraction": 0.6177605986595154,
"avg_line_length": 27.77777862548828,
"blob_id": "4237941256880c1fed50c795a388fae639438fb5",
"content_id": "a495b47b6555a0c51e44f21ab4fefd4b7c05c4b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 520,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 18,
"path": "/SorterApp/SorterApp/SorterApp/MainPage.xaml.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "using SorterApp.Views;\nusing Xamarin.Forms;\n\nnamespace SorterApp\n{\n public partial class MainPage : TabbedPage\n {\n public MainPage()\n {\n InitializeComponent();\n Xamarin.Forms.PlatformConfiguration.AndroidSpecific.TabbedPage.SetIsSwipePagingEnabled(this, false);\n\n Children.Add(new HomePage { Icon = \"ic_home\" });\n Children.Add(new LoadImagePage { Icon = \"ic_ml\" });\n Children.Add(new DocumentationPage { Icon = \"ic_doc\" });\n }\n }\n}\n"
},
{
"alpha_fraction": 0.7612456679344177,
"alphanum_fraction": 0.7612456679344177,
"avg_line_length": 25.272727966308594,
"blob_id": "391e9fcd7461889b33047c8c53f4a95e022c55ef",
"content_id": "cc77c49a80c786bf82a7b5a5c4d4361f626d0ee2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 291,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 11,
"path": "/SorterServices/SorterServices/Services/Interfaces/IPredictService.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "using SorterServices.Models;\nusing System.Threading.Tasks;\n\nnamespace SorterServices.Services.Interfaces\n{\n public interface IPredictService\n {\n // POST:Predict Image Class\n Task<PredictResponseViewModel> PredictClassImageAsync(ImageViewModel imageViewModel);\n }\n}\n"
},
{
"alpha_fraction": 0.5056390762329102,
"alphanum_fraction": 0.5078321099281311,
"avg_line_length": 29.69230842590332,
"blob_id": "3be188a3dd87f526572e810142f567ea582988ec",
"content_id": "8d033d05b2bab1ad90aea7b0399d48d55cd5782e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 3194,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 104,
"path": "/SorterServices/SorterServices/Services/PredictService.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "using SorterServices.Models;\nusing SorterServices.Services.Interfaces;\nusing System;\nusing System.IO;\nusing System.Net.Http;\nusing System.Text;\nusing System.Threading.Tasks;\nusing Newtonsoft.Json;\n\nnamespace SorterServices.Services\n{\n public class PredictService : IPredictService\n {\n private const string HttpUrl = \"MODEL_ENDPOINT\";\n private const string HttpsUrl = \"BASE_URL\";\n private readonly HttpClient _client = new HttpClient { BaseAddress = new Uri(HttpUrl) };\n\n #region Post\n public async Task<PredictResponseViewModel> PredictClassImageAsync(ImageViewModel imageViewModel)\n {\n var url = Path.Combine(\"wwwroot\", \"files\", \"images\");\n Directory.CreateDirectory(url);\n\n var path = Path.Combine(url, imageViewModel.Image.FileName);\n if (File.Exists(url)) File.Delete(url);\n\n using (var stream = new FileStream(path, FileMode.Create))\n {\n await imageViewModel.Image.CopyToAsync(stream);\n }\n\n var imagePath = HttpsUrl + \"/files/images/\" + imageViewModel.Image.FileName;\n var result = await Predict(imagePath);\n\n var predictResponseViewModel = new PredictResponseViewModel { Response = FormatResult(result) };\n return predictResponseViewModel;\n }\n\n public async Task<string> Predict(string url)\n {\n var urlViewModel = new UrlViewModel\n {\n url = url\n };\n\n try\n {\n var json = JsonConvert.SerializeObject(urlViewModel);\n var content = new StringContent(json, Encoding.UTF8, \"application/json\");\n var response = await _client.PostAsync(_client.BaseAddress + \"predict\", content);\n if (response.IsSuccessStatusCode)\n {\n var result = response.Content.ReadAsStringAsync().Result;\n return result;\n }\n }\n catch (Exception ex)\n {\n return ex.Message;\n }\n return \"Error\";\n }\n #endregion\n\n #region Utils\n private static string FormatResult(string result)\n {\n switch (result)\n {\n case \"shoe\":\n return \"Shoes\";\n case \"dress\":\n return \"Dresses\";\n case \"pants\":\n return \"Pants\";\n case \"outerwear\":\n 
return \"Outerwear\";\n default:\n return \"No Classified\";\n }\n }\n\n private static string GetClass()\n {\n var random = new Random();\n var value = random.Next(1, 5);\n\n switch (value)\n {\n case 1:\n return \"Outerwear\";\n case 2:\n return \"Dresses\";\n case 3:\n return \"Pants\";\n case 4:\n return \"Shoes\";\n default:\n return \"\";\n }\n }\n #endregion\n }\n}\n"
},
{
"alpha_fraction": 0.6213683485984802,
"alphanum_fraction": 0.6213683485984802,
"avg_line_length": 25.700000762939453,
"blob_id": "9ceaf3cd50b0ad65617c0bb4f7a38dc41aaeabc1",
"content_id": "96a45e763103e7578dacda3deea7293c814f6f8d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1069,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 40,
"path": "/SorterServices/SorterServices/Controllers/PredictController.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Microsoft.AspNetCore.Mvc;\nusing SorterServices.Models;\nusing SorterServices.Services.Interfaces;\n\nnamespace SorterServices.Controllers\n{\n [RequireHttps]\n [Route(\"[controller]/[action]\")]\n public class PredictController : Controller\n {\n private readonly IPredictService _predictService;\n\n #region Constructor\n public PredictController(IPredictService predictService)\n {\n _predictService = predictService;\n }\n #endregion\n\n #region Post \n [HttpPost]\n public async Task<IActionResult> PredictClassImage(ImageViewModel imageViewModel)\n {\n try\n {\n var result = await _predictService.PredictClassImageAsync(imageViewModel);\n return Ok(result);\n }\n catch (Exception e)\n {\n return BadRequest(\"Ha ocurrido un error inesperado.\");\n }\n }\n #endregion\n }\n}"
},
{
"alpha_fraction": 0.6066235899925232,
"alphanum_fraction": 0.6066235899925232,
"avg_line_length": 32.4594612121582,
"blob_id": "44c211f36c667307253eed3d94ff78db84b2456f",
"content_id": "c7ac32dcd5a0c9f5af8a9fcb6a6b7a7ff52b946f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C#",
"length_bytes": 1240,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 37,
"path": "/SorterApp/SorterApp/SorterApp/Services/PredictService.cs",
"repo_name": "diegobaqt/clothing_image_sorter",
"src_encoding": "UTF-8",
"text": "using Newtonsoft.Json;\nusing SorterApp.ViewModels;\nusing System;\nusing System.IO;\nusing System.Net.Http;\nusing System.Threading.Tasks;\n\nnamespace SorterApp.Services\n{\n public class PredictService\n {\n private const string _httpsUrl = \"YOUR_ENDPOINT_URL\";\n private readonly HttpClient _client = new HttpClient { BaseAddress = new Uri(_httpsUrl) };\n\n public async Task<string> PredictClassImage (string imagePath)\n {\n var content = new MultipartFormDataContent();\n\n var image = File.ReadAllBytes(imagePath);\n content.Add(new ByteArrayContent(new MemoryStream(image).ToArray()), \"Image\", Path.GetFileName(imagePath));\n\n try\n {\n var response = await _client.PostAsync(_client.BaseAddress + \"Predict/PredictClassImage\", content);\n if (response.IsSuccessStatusCode)\n {\n var result = JsonConvert.DeserializeObject<PredictResponseViewModel>(response.Content.ReadAsStringAsync().Result);\n return result.Response;\n }\n } catch (Exception ex)\n {\n return ex.Message;\n }\n return \"Error\";\n }\n }\n}\n"
}
] | 10 |
radiate-finance/radiate-backend
|
https://github.com/radiate-finance/radiate-backend
|
d77eed29d57ce498f7204652b505fb8c5c297698
|
b4355ab5cef3d3cea8ff22b252a388e6f195825f
|
2943897ee7c305217dc1db591b24b9a3c291e0de
|
refs/heads/master
| 2023-07-12T19:05:21.637268 | 2021-08-24T18:42:34 | 2021-08-24T18:42:34 | 399,548,476 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5511695742607117,
"alphanum_fraction": 0.5891813039779663,
"avg_line_length": 18.02777862548828,
"blob_id": "308dfadd21e0d875cc6036af3c937c682da842a3",
"content_id": "866382110de579533e257cb3245c26ad60617b22",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "YAML",
"length_bytes": 684,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 36,
"path": "/docker-compose.yml",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "version: \"3.8\"\n\nservices:\n indexer:\n build: .\n depends_on:\n - db\n - hasura\n restart: \"no\"\n env_file: dipdup.env\n\n db:\n image: postgres:13\n restart: always\n volumes:\n - db:/var/lib/postgres/data\n healthcheck:\n test: [\"CMD-SHELL\", \"pg_isready -U ${POSTGRES_USER}\"]\n interval: 10s\n timeout: 5s\n retries: 5\n env_file: dipdup.env\n\n hasura:\n image: hasura/graphql-engine:v2.0.1\n ports:\n - 127.0.0.1:42000:8080\n depends_on:\n - db\n restart: always\n env_file: dipdup.env\n environment:\n - HASURA_GRAPHQL_ENABLED_LOG_TYPES=startup, http-log, webhook-log, websocket-log, query-log\n \nvolumes:\n db:"
},
{
"alpha_fraction": 0.7234285473823547,
"alphanum_fraction": 0.7257142663002014,
"avg_line_length": 27.96666717529297,
"blob_id": "b6a31d653d3800867a4875d042c041e77ee2fd22",
"content_id": "067508a6ee23892bbe8ba3f9d64c9adcb474c655",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 875,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 30,
"path": "/radiate/handlers/on_withdraw.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "from typing import Optional\n\nfrom dipdup.models import OperationData, Origination, Transaction\nfrom dipdup.context import HandlerContext\n\nimport radiate.models as models\n\nfrom radiate.types.radiate.parameter.withdraw import WithdrawParameter\nfrom radiate.types.radiate.storage import RadiateStorage\n\n\nasync def on_withdraw(\n ctx: HandlerContext,\n withdraw: Transaction[WithdrawParameter, RadiateStorage],\n) -> None:\n amount = withdraw.parameter.amount\n streamID = withdraw.parameter.streamId\n \n stream = (await models.Stream.filter(stream_id=streamID))[0]\n stream.remaining_balance -= int(amount)\n if stream.remaining_balance == 0:\n stream.is_active = False\n await stream.save()\n\n history = models.History(\n stream = stream,\n timestamp = withdraw.data.timestamp,\n amount = amount\n )\n await history.save()\n\n\n "
},
{
"alpha_fraction": 0.6879432797431946,
"alphanum_fraction": 0.7056737542152405,
"avg_line_length": 17.866666793823242,
"blob_id": "d0e6e93fbb9396b162577cf377185c8586fd7a41",
"content_id": "774ebf717dd9243c7a29db3b8e48d76563282c28",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 282,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 15,
"path": "/Dockerfile",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "FROM python:3.9-slim-buster\n\nRUN pip install pipenv\n\nRUN pip install \"dipdup==2.0.3\"\n\nWORKDIR /radiate\nCOPY Pipfile.lock Pipfile /radiate/\n\nRUN pipenv install --system --deploy --ignore-pipfile\n\nCOPY . /radiate\n\nENTRYPOINT [\"pipenv\", \"run\", \"dipdup\"]\nCMD [\"-c\", \"dipdup.yml\", \"run\"]"
},
{
"alpha_fraction": 0.7582417726516724,
"alphanum_fraction": 0.7582417726516724,
"avg_line_length": 44.66666793823242,
"blob_id": "ac372bb778d4b882b434ca3ea7ee2a8bdd69953c",
"content_id": "0b28d0bcb8c5fe76b5a17bd4f986dd07e48dcd2e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 273,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 6,
"path": "/radiate/handlers/on_rollback.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "from dipdup.context import RollbackHandlerContext\n\n\nasync def on_rollback(ctx: RollbackHandlerContext) -> None:\n ctx.logger.warning('Datasource `%s` rolled back from level %s to level %s, reindexing', ctx.datasource, ctx.from_level, ctx.to_level)\n await ctx.reindex()"
},
{
"alpha_fraction": 0.6891348361968994,
"alphanum_fraction": 0.7203219532966614,
"avg_line_length": 34.53571319580078,
"blob_id": "df659092d1fb1b20e16244c935198be4f6a13973",
"content_id": "651484ef661c1ff580e82f5c15e86bb7c2e0e9b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 994,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 28,
"path": "/radiate/models.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "from tortoise import Model, fields\nfrom enum import IntEnum\n\nclass TokenType(IntEnum):\n TEZ = 0\n FA12 = 1\n FA2 = 2\n\n\nclass Stream(Model):\n id = fields.IntField(pk=True)\n stream_id = fields.CharField(max_length=10000)\n sender = fields.CharField(max_length=36)\n receiver = fields.CharField(max_length=36)\n deposit = fields.DecimalField(decimal_places=18, max_digits=32)\n rate_per_second = fields.DecimalField(decimal_places=18, max_digits=32)\n remaining_balance = fields.DecimalField(decimal_places=18, max_digits=32)\n start_time = fields.DatetimeField()\n stop_time = fields.DatetimeField()\n created_on = fields.DatetimeField()\n is_active = fields.BooleanField(default=True)\n token = fields.IntEnumField(enum_type=TokenType)\n\nclass History(Model):\n id = fields.IntField(pk=True)\n stream = fields.ForeignKeyField('models.Stream', 'history')\n timestamp = fields.DatetimeField()\n amount = fields.DecimalField(decimal_places=18, max_digits=32)"
},
{
"alpha_fraction": 0.7403846383094788,
"alphanum_fraction": 0.7403846383094788,
"avg_line_length": 20,
"blob_id": "35be1eae3767cb4e9d750065bf4c373b9c8a480a",
"content_id": "2df978148257c2725750d92e48abd393a3f13b7d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 104,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 5,
"path": "/radiate/handlers/on_configure.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "from dipdup.context import HandlerContext\n\n\nasync def on_configure(ctx: HandlerContext) -> None:\n ..."
},
{
"alpha_fraction": 0.7765089869499207,
"alphanum_fraction": 0.7781403064727783,
"avg_line_length": 29.700000762939453,
"blob_id": "0f2461e66b6cd1480480cf9eb2e163504bff4390",
"content_id": "56af52c582af91a58807019fbd2795c870795dd8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 613,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 20,
"path": "/radiate/handlers/on_cancel.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "from typing import Optional\n\nfrom dipdup.models import OperationData, Origination, Transaction\nfrom dipdup.context import HandlerContext\n\nimport radiate.models as models\n\nfrom radiate.types.radiate.parameter.cancel_stream import CancelStreamParameter\nfrom radiate.types.radiate.storage import RadiateStorage\n\n\nasync def on_cancel(\n ctx: HandlerContext,\n cancel_stream: Transaction[CancelStreamParameter, RadiateStorage],\n) -> None:\n streamID = str(cancel_stream.parameter.__root__)\n \n stream = (await models.Stream.filter(stream_id=streamID))[0]\n stream.is_active = False\n await stream.save()"
},
{
"alpha_fraction": 0.7538461685180664,
"alphanum_fraction": 0.7538461685180664,
"avg_line_length": 18.5,
"blob_id": "718dfbbfb0e7b024e5beafed32bf34709f473678",
"content_id": "a15b33c60d5587155eda18039dd1909a0155e96d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 195,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 10,
"path": "/radiate/types/radiate/parameter/cancel_stream.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "# generated by datamodel-codegen:\n# filename: cancelStream.json\n\nfrom __future__ import annotations\n\nfrom pydantic import BaseModel\n\n\nclass CancelStreamParameter(BaseModel):\n __root__: str\n"
},
{
"alpha_fraction": 0.3969780206680298,
"alphanum_fraction": 0.3969780206680298,
"avg_line_length": 19.799999237060547,
"blob_id": "a1cbc84b1820a53186d6a68b0ec1f5fe57ae9510",
"content_id": "290c8c397096ebec7b6933105941850b4e59e226",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 966,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 35,
"path": "/README.md",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "# radiate-backend\n\n```.\n├── dipdup.yml\n├── docker-compose.yml\n├── Dockerfile\n├── Pipfile\n├── Pipfile.lock\n└── radiate\n ├── graphql\n ├── handlers\n │ ├── __init__.py\n │ ├── on_cancel.py\n │ ├── on_configure.py\n │ ├── on_create_stream.py\n │ ├── on_rollback.py\n │ └── on_withdraw.py\n ├── __init__.py\n ├── jobs\n │ └── __init__.py\n ├── models.py\n ├── sql\n │ ├── on_reindex\n │ └── on_restart\n └── types\n ├── __init__.py\n └── radiate\n ├── __init__.py\n ├── parameter\n │ ├── cancel_stream.py\n │ ├── create_stream.py\n │ ├── __init__.py\n │ └── withdraw.py\n └── storage.py\n```\n"
},
{
"alpha_fraction": 0.7050314545631409,
"alphanum_fraction": 0.7081760764122009,
"avg_line_length": 34.33333206176758,
"blob_id": "8c7b09149592d53737b2cfae50b103a925fbba34",
"content_id": "d6f3dfdff5a6fb39b0c695c9a18c0df8f49c0a6c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1590,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 45,
"path": "/radiate/handlers/on_create_stream.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "from typing import Optional\n\nfrom dipdup.models import OperationData, Origination, Transaction\nfrom dipdup.context import HandlerContext\n\nimport radiate.models as models\n\nfrom radiate.types.radiate.parameter.create_stream import CreateStreamParameter\nfrom radiate.types.radiate.storage import RadiateStorage, TokenItem, TokenItem1\n\n\nasync def on_create_stream(\n ctx: HandlerContext,\n create_stream: Transaction[CreateStreamParameter, RadiateStorage],\n) -> None:\n sender = create_stream.data.sender_address\n # temp = await models.Stream()\n id = str((await models.Stream.filter().count()))\n receiver = create_stream.storage.streams[id].receiver\n startTime = create_stream.storage.streams[id].startTime\n stopTime = create_stream.storage.streams[id].stopTime\n ratePerSecond = create_stream.storage.streams[id].ratePerSecond\n deposit = create_stream.storage.streams[id].deposit\n createdOn = create_stream.data.timestamp\n remainingBalance = deposit\n token = models.TokenType.TEZ\n if create_stream.parameter.token == TokenItem:\n token = models.TokenType.FA12\n elif create_stream.parameter.token == TokenItem1:\n token = models.TokenType.FA2\n \n stream = models.Stream(\n stream_id = id,\n sender = sender,\n receiver = receiver,\n deposit = deposit,\n start_time = startTime,\n stop_time = stopTime,\n created_on = createdOn,\n rate_per_second = ratePerSecond,\n remaining_balance = remainingBalance,\n token = token,\n is_active = True,\n )\n await stream.save()\n"
},
{
"alpha_fraction": 0.6612296104431152,
"alphanum_fraction": 0.6725219488143921,
"avg_line_length": 15.604166984558105,
"blob_id": "17f8a0d83ebeca1bbe3990c863859814f1475857",
"content_id": "71065905d3a0cf8ee9774cb3f52c503990fa6c5b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 797,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 48,
"path": "/radiate/types/radiate/parameter/create_stream.py",
"repo_name": "radiate-finance/radiate-backend",
"src_encoding": "UTF-8",
"text": "# generated by datamodel-codegen:\n# filename: createStream.json\n\nfrom __future__ import annotations\n\nfrom typing import Any, Dict, Union\n\nfrom pydantic import BaseModel, Extra\n\n\nclass TokenItem(BaseModel):\n class Config:\n extra = Extra.forbid\n\n FA12: str\n\n\nclass FA2(BaseModel):\n class Config:\n extra = Extra.forbid\n\n tokenAddress: str\n tokenId: str\n\n\nclass TokenItem1(BaseModel):\n class Config:\n extra = Extra.forbid\n\n FA2: FA2\n\n\nclass TokenItem2(BaseModel):\n class Config:\n extra = Extra.forbid\n\n tez: Dict[str, Any]\n\n\nclass CreateStreamParameter(BaseModel):\n class Config:\n extra = Extra.forbid\n\n ratePerSecond: str\n receiver: str\n startTime: str\n stopTime: str\n token: Union[TokenItem, TokenItem1, TokenItem2]\n"
}
] | 11 |
WilliamRong/InstagramDup | https://github.com/WilliamRong/InstagramDup | 9c2f662222836303a43c4f324a6433e6b5108066 | 4cbad1d575a74addb8e6e11385a73382bca04d73 | de512cefd2259bf627f76b4fffa05434115b5086 | refs/heads/master | 2020-06-19T02:57:38.101754 | 2019-07-18T13:44:45 | 2019-07-18T13:44:45 | 196,539,819 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6635653376579285,
"alphanum_fraction": 0.6684034466743469,
"avg_line_length": 27.595745086669922,
"blob_id": "e94521bd62f0ea64243756b4ce3db29a060c438d",
"content_id": "c6fa160beec81181842a618ce9800f6bdf842926",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2695,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 94,
"path": "/examples/flaskTest.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "#-*- encoding=UTF-8 -*-\n\nfrom flask import Flask,render_template,request,make_response,redirect,flash,get_flashed_messages\nimport logging\nfrom logging.handlers import RotatingFileHandler\n\napp=Flask(__name__)\napp.jinja_env.line_statement_prefix= '#'\napp.secret_key='nowcoder'\[email protected]('/index/')\[email protected]('/')\ndef index():\n    res=''\n    for msg ,category in get_flashed_messages(with_categories=True):\n        res=res+category+msg+'<br>'\n    res+='hello'\n    return res\n\[email protected]('/profile/<uid>',methods=['Get','Post'])\ndef profile(uid):\n    colors =('red','green','black')\n    infos = {'i':'abc','j':'def'}\n    return render_template('profile.html',uid=uid,colors=colors,infos=infos)\n\[email protected]('/request')\ndef request_demo():\n    key = request.args.get('key', 'defaultkey')\n    res=request.args.get('key','defaultkey')+'<br>'\n    res=res+request.url+'++'+request.path+'<br>'\n    for property in dir(request):\n        res=res+str(property)+'|==|'+str(eval('request.'+property))+'<br>'\n    response=make_response(res)\n    response.set_cookie('nwid',key)\n    response.status ='404'\n    response.headers['myface']='hello my face~'\n    return response\n\[email protected]('/admin')\ndef admin():\n    key=request.args.get('key')\n    if key =='admin':\n        return 'hello admin'\n    else:\n        raise ValueError()\n    return 'xx'\n\n\[email protected]('/newpath')\ndef newpath():\n    return 'newpath'\n\[email protected]('/re/<int:code>')\ndef redirect_demo(code):\n    return redirect('/newpath',code=code)\n\[email protected]('/login')\ndef login():\n    app.logger.info('login successful')\n    flash('登陆成功','info')\n    return redirect('/')\n\[email protected]('/log/<level>/<msg>')\ndef log(level,msg):\n    dict={'warn':logging.WARN,'error':logging.ERROR,'info':logging.INFO}\n    if dict.has_key(level):\n        app.logger.log(dict[level],msg)\n    return 'logged'+msg\n\[email protected](400)\ndef exception_page(error):\n    return 'exception'\n\[email protected](404)\ndef page_not_found(error):\n    print error\n    return render_template('not_found.html',url=request.url),404\n\n\n\ndef set_logger():\n    info_file_handler=RotatingFileHandler('D:\\\\DataStructure\\\\InstagramDup\\\\logs\\\\info.txt')\n    info_file_handler.setLevel(logging.INFO)\n    app.logger.addHandler(info_file_handler)\n\n    warn_file_handler=RotatingFileHandler('D:\\\\DataStructure\\\\InstagramDup\\\\logs\\\\warn.txt')\n    warn_file_handler.setLevel(logging.WARN)\n    app.logger.addHandler(warn_file_handler)\n\n    error_file_handler=RotatingFileHandler('D:\\\\DataStructure\\\\InstagramDup\\\\logs\\\\error.txt')\n    error_file_handler.setLevel(logging.ERROR)\n    app.logger.addHandler(error_file_handler)\nif __name__ =='__main__':\n    set_logger()\n    app.run(debug=True)"
},
{
"alpha_fraction": 0.554817259311676,
"alphanum_fraction": 0.5581395626068115,
"avg_line_length": 20.571428298950195,
"blob_id": "1ac86742a4c70e5bcd1c765cc7cb7e69110addd1",
"content_id": "f3671127b42b438ca191d38c7612770f3445cbee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 301,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 14,
"path": "/examples/decorator.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "#-*- encoding=utf-8 -*-\n\ndef log(func):\n def wrapper(*args,**kwargs):\n print 'before calling',func.__name__\n func(*args,**kwargs)\n print'after calling',func.__name__\n return wrapper\n@log\ndef hello(name):\n print \"hello\",name\n\nif __name__=='__main__':\n hello('nowc')"
},
{
"alpha_fraction": 0.6373687386512756,
"alphanum_fraction": 0.6500829458236694,
"avg_line_length": 29.67796516418457,
"blob_id": "fbe62ca3503e55523a09689c772ff117f1169bbd",
"content_id": "ae1f79fc3e757a9144c60da4d65079035fad08fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1907,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 59,
"path": "/InstagramDup/models.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "# -*- encoding=UTF-8 -*-\n\n\"\"\"\n模型文件\n\"\"\"\nfrom InstagramDup import db\nfrom datetime import datetime\nimport random\n\nclass Comment(db.Model):\n id = db.Column(db.Integer, primary_key=True, autoincrement=True)\n content=db.Column(db.String(1024))\n image_id=db.Column(db.Integer,db.ForeignKey('image.id'))\n user_id = db.Column(db.Integer, db.ForeignKey('user.id'))\n status=db.Column(db.Integer,default=0)#0正常 1被删除\n user=db.relationship('User')#与user表关联\n\n def __init__(self, content, image_id,user_id):\n self.content = content\n self.image_id=image_id\n self.user_id = user_id\n\n\n def __repr__(self):\n return '<Comment %d %s>' % (self.id, self.content)\n\nclass Image(db.Model):\n id = db.Column(db.Integer, primary_key=True, autoincrement=True)\n url=db.Column(db.String(512))\n user_id=db.Column(db.Integer,db.ForeignKey('user.id'))\n created_date=db.Column(db.DateTime)\n comments=db.relationship('Comment')\n\n\n def __init__(self,url,user_id):\n self.url=url\n self.user_id=user_id\n self.created_date=datetime.now()\n\n def __repr__(self):\n return '<Image %d %s>' % (self.id, self.url)\n\nclass User(db.Model):\n #__tablename__='user'#可以命名,需要更改其他语句中的表名\n id=db.Column(db.Integer,primary_key=True,autoincrement=True)\n username=db.Column(db.String(80),unique=True)\n password=db.Column(db.String(32))\n head_url=db.Column(db.String(256))\n images=db.relationship('Image',backref='user',lazy='dynamic')#这样image就能查到user了\n\n\n def __init__(self,username,password):\n self.username=username\n self.password=password\n self.head_url='http://images.nowcoder.com/head/'+str(random.randint(0,1000))+'m.png'\n #牛客网上随机抓图作为头像\n\n def __repr__(self):\n return '<User %d %s>'%(self.id,self.username)"
},
{
"alpha_fraction": 0.584786057472229,
"alphanum_fraction": 0.6291600465774536,
"avg_line_length": 30.566667556762695,
"blob_id": "45ff3ad3c09f41fe3670da1838377b421f02b204",
"content_id": "cecc63d021cb8a83f8b14df5cc17d7151e2902c7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2073,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 60,
"path": "/manager.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "# -*- encoding=UTF-8 -*-\n\"\"\"\n脚本文件\n\"\"\"\nfrom InstagramDup import app,db\nfrom sqlalchemy import or_,and_\nfrom flask_script import Manager\nfrom InstagramDup.models import User,Image,Comment\nimport random\nmanager=Manager(app)\n\ndef get_image_url():\n return 'http://images.nowcoder.com/head/'+str(random.randint(0,1000))+'m.png'\n\[email protected]\ndef init_database():\n db.drop_all()#先删掉所有的表\n db.create_all()#把所有的表串接\n\n #插入\n for i in range(0,100):#添加100个用户\n db.session.add(User('User'+str(i+1),'a'+str(i)))\n for j in range(0,3):\n db.session.add(Image(get_image_url(),i+1))\n for k in range(0,4):\n db.session.add(Comment('This is a comment'+ str(k),1+i*4+j,i+1))\n db.session.commit()#与git一样,事务需要提交\n\n #修改\n for i in range(50,100,2):\n user=User.query.get(i)\n user.username='[New1]'+user.username\n for i in range(51,100,2):\n User.query.filter_by(id=51).update({'username':'[New2]'})#常用的更新方法\n db.session.commit()#修改需要提交,查询不需要\n\n #删除\n for i in range(50,100,2):\n comment=Comment.query.get(i+1)\n db.session.delete(comment)\n db.session.commit()#删除需要提交,查询不需要\n #查询\n #print 1,User.query.all()#调用了__repr__()\n #print 2,User.query.get(3)\n #print 3,User.query.filter_by(id=5).first()\n #print 4,User.query.order_by(User.id.desc()).offset(1).limit(2).all()\n #print 5,User.query.filter(User.username.endswith('0')).limit(3).all()\n #print 6,User.query.filter(or_(User.id==88,User.id==99)).all()#去掉all()会打印sql语句\n #print 7,User.query.filter(and_(User.id>88,User.id<94)).first_or_404()\n #print 8,User.query.filter(and_(User.id>88,User.id<94)).all()\n #print 9,User.query.paginate(page=1,per_page=10).items#分页显示\n #user=User.query.get(1)\n #print 10,user.images#一对多查询\n\n #image=Image.query.get(1)\n #print 11,image.user\n\n\nif __name__=='__main__':\n manager.run()"
},
{
"alpha_fraction": 0.5462185144424438,
"alphanum_fraction": 0.5546218752861023,
"avg_line_length": 12.333333015441895,
"blob_id": "de7f8a57fdd072a29ba6c84848d14b136bffe4a3",
"content_id": "e7856e4c8234592e66524255f936b64ee4c70dfd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 127,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 9,
"path": "/runserver.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "# -*- encoding=UTF-8 -*-\n\"\"\"\n启动文件\n\"\"\"\nfrom InstagramDup import app\n\n\nif __name__ == '__main__':\n app.run(debug=True)"
},
{
"alpha_fraction": 0.6789297461509705,
"alphanum_fraction": 0.6822742223739624,
"avg_line_length": 17.75,
"blob_id": "aadf5f7afb243579bb20959a533354c0637c73fa",
"content_id": "156008d4268071c2ccbe2a85355fdbda905181bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 299,
"license_type": "no_license",
"max_line_length": 33,
"num_lines": 16,
"path": "/examples/manager.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "#-*- encoding=UTF-8 -*-\nfrom flask_script import Manager\nfrom flaskTest import app\n\nmanager=Manager(app)\n\[email protected]\ndef hello(name):\n print 'hello',name\n\[email protected]\ndef initialize_database():\n 'initialze database'\n print 'database ..'\nif __name__=='__main__':\n manager.run()"
},
{
"alpha_fraction": 0.7508305907249451,
"alphanum_fraction": 0.7574750781059265,
"avg_line_length": 24.16666603088379,
"blob_id": "748970211ed94edd7fd4131ba07ec6b3d6a5a58b",
"content_id": "4fa42107062cf17a722dec232561e18b49dc2a34",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 343,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 12,
"path": "/InstagramDup/__init__.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "# -*- encoding=UTF-8 -*-\n\n\"\"\"\n导出文件\n\"\"\"\nfrom flask import Flask\nfrom flask_sqlalchemy import SQLAlchemy\napp=Flask(__name__)#创建app\napp.jinja_env.add_extension('jinja2.ext.loopcontrols')\napp.config.from_pyfile('app.conf')#从app.conf中初始化设置\ndb=SQLAlchemy(app)#可以操作数据库了\nfrom InstagramDup import views,models"
},
{
"alpha_fraction": 0.6779431700706482,
"alphanum_fraction": 0.6847090721130371,
"avg_line_length": 26.407407760620117,
"blob_id": "d85b2aed4d46ed7cecb882c8de9d66f9c0966f98",
"content_id": "31e879833f0c0c3c5f06c2b355911ae97738438d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 771,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 27,
"path": "/InstagramDup/views.py",
"repo_name": "WilliamRong/InstagramDup",
"src_encoding": "UTF-8",
"text": "# -*- encoding=UTF-8 -*-\n\n\"\"\"\n视图文件\n\"\"\"\nfrom InstagramDup import app\nfrom flask import render_template,redirect\nfrom models import Image,User\nfrom sqlalchemy import text\[email protected]('/')\ndef index():\n images=Image.query.order_by(text('id asc')).limit(10).all()#选10张图片传进模板\n return render_template('index.html',images=images)#渲染模板index.html\n\[email protected]('/image/<int:image_id>/')\ndef image(image_id):\n image=Image.query.get(image_id)\n if image==None:\n return redirect('/')\n return render_template('pageDetail.html',image=image)\n\[email protected]('/profile/<int:user_id>/')\ndef profile(user_id):\n user=User.query.get(user_id)\n if user==None:\n return redirect('/')\n return render_template('profile.html',user=user)"
}
] | 8 |
DanUnix/Weather-Parser | https://github.com/DanUnix/Weather-Parser | 62659d7082ca5c881f5865b7e67dbad271da7f7a | 8472d8032d8096d9e7187546468f88baa9b8e9b0 | 14354e0608f9b925afd7f220e40716f7d0ab3abf | refs/heads/master | 2021-01-24T23:51:38.290556 | 2015-12-18T21:48:32 | 2015-12-18T21:48:32 | 48,257,896 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6688206791877747,
"alphanum_fraction": 0.6779752373695374,
"avg_line_length": 34.01886749267578,
"blob_id": "4068fbcaf3801f982852ed7760b961411693fa5d",
"content_id": "7a5847e94969abf3f2f882c892296222fc4993f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1857,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 53,
"path": "/main.py",
"repo_name": "DanUnix/Weather-Parser",
"src_encoding": "UTF-8",
"text": "import feedparser\nimport time\n\nfeed_rss_url = feedparser.parse('http://weather.yahooapis.com/forecastrss?p=60607')\n\ndata = feed_rss_url.feed\n\n# Displays Today's Data\nnow = feed_rss_url.entries[0].yweather_condition['date']\nprint \"Current Day and Time\" \nprint \"--------------------\"\nprint now + \"\\n\"\n# Display the feed title\nprint \"Feed Title -> \", data.title + \"\\n\"\n\n# Display the feed description\nprint \"Feed Description -> \", data.description + \"\\n\"\n\n# Display the feed link URL\n#print feed_rss_url.feed.link\n\n# print out the first title entry - just a test\nprint feed_rss_url.entries[0].title + \"\\n\"\n\n# Astronomy - Sunrise and Sunset Data\nprint \"Astronomy\"\nprint \"---------\"\nprint \"Sunrise: \" + data.yweather_astronomy['sunrise'] + \" Sunset: \" + data.yweather_astronomy['sunset']\nprint \"\\n\"\n# print out the weather conditions\nprint \"Weather Conditions\"\nprint \"------------------\"\n#print (feed_rss_url['items'][0]['yweather_condition'])\ncondition = feed_rss_url.entries[0].yweather_condition['text']\nfahrenheit = feed_rss_url.entries[0].yweather_condition['temp']\nprint \"Today's Condition: \" + condition\nprint \"Temperature: \" + fahrenheit + \"\\\"F\"\n# print out the Forecast\nprint \"\\n Weekly Forecast\"\nprint \"------------------\"\n#print (feed_rss_url['items'][0]['yweather_forecast'])\nday = feed_rss_url.entries[0].yweather_forecast['day']\ndate = feed_rss_url.entries[0].yweather_forecast['date']\nlow = feed_rss_url.entries[0].yweather_forecast['low']\nhigh = feed_rss_url.entries[0].yweather_forecast['high']\ntext = feed_rss_url.entries[0].yweather_forecast['text']\nprint \"Day:\" + day + \" date:\" + date + \" low:\" + low + \" high:\" + high + \" Condition:\" + text + \"\\n\"\n#print feed_rss_url.entries[0].yweather_forecast['day'] \nprint len(feed_rss_url['entries'])\n\"\\n\"\n# Testing if elements are present\nprint 'title' in data\nprint data.get('title', 'No title')\n\n"
}
] | 1 |
solutionring/rockchapel | https://github.com/solutionring/rockchapel | 8af032b45fec6d11ff36821a67ffb7e98bc4c551 | db8bde30d1eb613fd4c6bfba6586e84ff0ac27d3 | 266a4e4360e9d10afc2c08d9dfd0cfd71247f1b3 | refs/heads/master | 2017-05-26T07:01:47.469993 | 2015-11-16T19:15:52 | 2015-11-16T19:15:52 | 45,086,651 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5215858817100525,
"alphanum_fraction": 0.5779735445976257,
"avg_line_length": 44.36000061035156,
"blob_id": "a1cbf52090316ed688a5c29057ca1cd8b7774751",
"content_id": "51860e9cd39ccb5cbd493c6ecb7a10dff12a53b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1135,
"license_type": "no_license",
"max_line_length": 379,
"num_lines": 25,
"path": "/configs/var/lib/cobbler/snippets/rockchapel_inject",
"repo_name": "solutionring/rockchapel",
"src_encoding": "UTF-8",
"text": "# Add root key for nodemaster\ncd /root\nmkdir --mode=600 .ssh\ncat >> .ssh/authorized_keys << \"PUBLIC_KEY\"\nssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3mP95tJXhB3IwzMSOOCDdNx69aVgHD+sZpqvtnkUdEdIq/ZohJt/TlYuhJTStH0iYxLLPfXWGh66E90+0w3jyiYdat0f1OLMO928i89Bs1Z9iPYPaR4+rCGY5EH3CekPOz/m6oSKvp8bzBYH+ndLa1T1jIYe1YIcU27MF/GuwiVEfM2/21jfSoobNwVLQBu+61CPKrDRg6tHPyth/Irvf8LEYsIY3yCgNNKlcPBaLvIlytZmZGnCnH+UUQCJ0HCuJRT1Z4AWtJ2ZUN1NJTgzsv+o9+1+R01k/EhzxjSFsWzF9PK7nYP/Mg6loirC54Ff6rzNinN3xmvvSW0euVcwv root@nodemaster\nPUBLIC_KEY\nchmod 600 .ssh/authorized_keys\n\n# Add rockchapel banner\n>/etc/ssh/sshd_banner\ncat >>/etc/ssh/sshd_banner << \"BANNER\"\n __ __ __\n _________ _____/ /_______/ /_ ____ _____ ___ / /\n / ___/ __ \\/ ___/ //_/ ___/ __ \\/ __ `/ __ \\/ _ \\/ / \n / / / /_/ / /__/ ,< / /__/ / / / /_/ / /_/ / __/ / \n/_/ \\____/\\___/_/|_|\\___/_/ /_/\\__,_/ .___/\\___/_/.io\n /_/ \n\nBANNER\n\nsed -i 's/#Banner none/Banner \\/etc\\/ssh\\/sshd_banner/' /etc/ssh/sshd_config\n/bin/systemctl restart sshd.service\n\n#Change password to rc.io\necho \"root:rc.io\" | chpasswd\n\n"
},
{
"alpha_fraction": 0.6285714507102966,
"alphanum_fraction": 0.6476190686225891,
"avg_line_length": 13,
"blob_id": "1656379badb4efc743bf1f3df1b657961960667e",
"content_id": "678a4d488ccd6124e7d06a5da19cacfd54774009",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 210,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 15,
"path": "/scripts/checkdns.py",
"repo_name": "solutionring/rockchapel",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n\nimport sys\nimport os.path\nimport socket\n\ndef main():\n\n\thost='c1n1.solutionring.com'\n\tsocket.gethostbyname(\"%s\" % (host))\n\tsocket.getaddrinfo(host, 22)\n\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.5042384266853333,
"alphanum_fraction": 0.5347734689712524,
"avg_line_length": 34.50485610961914,
"blob_id": "4c9e98a007706290280cd6797d503b4100466c7e",
"content_id": "c7622a4cc6b21405fcbf33a587a052754c1589df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10971,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 309,
"path": "/racktool.py",
"repo_name": "solutionring/rockchapel",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# Rockchapel Prototype/Proof-of-concept\n# python wrapper for ansible-managed,ipmi,cobbler(kickstart/dns/dhcp/dnsmasq)\n# GPL2\n# [email protected]\n\ndesc=\"\"\"racktool(proof-of-concept)\nuse this tool to manage rack resources\n\"\"\"\nimport sys\nimport subprocess\nimport os.path\nimport argparse\nimport time\nimport select\nimport paramiko\nimport progressbar\n\n\n\nservers = ['c1n1','c1n2','c1n3','c1n4']\n\n# Hard coded server definations need to move to sqlite\nc1n1 = {\n 'name': 'c1n1',\n 'hostname': 'c1n1',\n 'interface': 'eno1',\n 'ip-address': '10.44.1.1',\n 'oob-ipaddress': '10.254.1.1',\n 'oob-admin': 'root',\n 'oob-passwd': 'root',\n 'mac': '08:9e:01:b4:a2:7e',\n 'netmask': '255.255.0.0',\n 'dns-name': 'c1n1.solutionring.com',\n 'profile': 'centos7-x86_64',\n 'gateway': '10.44.0.1',\n 'static': '1',\n 'kickstart': '/var/lib/cobbler/kickstarts/rockchapel.ks',\n 'distro': 'centos7-x86_64',\n}\n\nc1n2 = {\n 'name': 'c1n2',\n 'hostname': 'c1n2',\n 'interface': 'eno1',\n 'ip-address': '10.44.1.2',\n 'oob-ipaddress': '10.254.1.2',\n 'oob-admin': 'root',\n 'oob-passwd': 'root',\n 'mac': '08:9e:01:b4:a0:16',\n 'netmask': '255.255.0.0',\n 'dns-name': 'c1n2.solutionring.com',\n 'profile': 'centos7-x86_64',\n 'gateway': '10.44.0.1',\n 'static': '1',\n 'kickstart': '/var/lib/cobbler/kickstarts/rockchapel.ks',\n 'distro': 'centos7-x86_64',\n}\n\nc1n3 = {\n 'name': 'c1n3',\n 'hostname': 'c1n3',\n 'interface': 'eno1',\n 'ip-address': '10.44.1.3',\n 'oob-ipaddress': '10.254.1.3',\n 'oob-admin': 'root',\n 'oob-passwd': 'root',\n 'mac': '08:9e:01:b4:a6:b0',\n 'netmask': '255.255.0.0',\n 'dns-name': 'c1n3.solutionring.com',\n 'profile': 'centos7-x86_64',\n 'gateway': '10.44.0.1',\n 'static': '1',\n 'kickstart': '/var/lib/cobbler/kickstarts/rockchapel.ks',\n 'distro': 'centos7-x86_64',\n}\n\nc1n4 = {\n 'name': 'c1n4',\n 'hostname': 'c1n4',\n 'interface': 'eno1',\n 'ip-address': '10.44.1.4',\n 'oob-ipaddress': '10.254.1.4',\n 'oob-admin': 
'root',\n 'oob-passwd': 'root',\n 'mac': '08:9e:01:b4:9b:06',\n 'netmask': '255.255.0.0',\n 'dns-name': 'c1n4.solutionring.com',\n 'profile': 'centos7-x86_64',\n 'gateway': '10.44.0.1',\n 'static': '1',\n 'kickstart': '/var/lib/cobbler/kickstarts/rockchapel.ks',\n 'distro': 'centos7-x86_64',\n}\n\n# Start Main() program\n\ndef main():\n\n parser = argparse.ArgumentParser(description=desc)\n parser.add_argument('-b','--build',dest='build', help='build\\'s selected server (eg -b=c1n1)')\n parser.add_argument('-s','--sync',dest='sync', help='syncs rockchapel engine with cobbler and ansible',action='store_const', const='null', required=False)\n parser.add_argument('-l','--list', help='lists unconfigured nodes', action='store_const', const='null',required=False)\n parser.add_argument('-t','--type', help='passed to build ', action='store_const', const=\"rcComputeNode\",required=False)\n args = parser.parse_args()\n \n build = args.build\n list = args.list \n sync = args.sync \n type = args.type \n\n def build_server(serverdefs):\n \n \t print \"Building %s as type:rcComputeNode\" % serverdefs['name']\n \t define=subprocess.call(\"sudo cobbler system add \\\n\t --mgmt-classes=ansible-managed --ksmeta=\\\"a=0 b=0\\\" \\\n\t\t\t\t\t--name=%s \\\n\t\t\t\t\t--hostname=%s \\\n\t\t\t\t\t--interface=%s \\\n\t\t\t\t\t--ip-address=%s \\\n\t\t\t\t\t--mac=%s \\\n\t\t\t\t\t--netmask=%s \\\n\t\t\t\t\t--dns-name=%s \\\n\t\t\t\t\t--profile=%s \\\n\t\t\t\t\t--gateway=%s \\\n\t\t\t\t\t--static=%s \\\n\t\t\t\t\t--kickstart=%s \\\n\t\t\t\t\t--clobber \" \\\n\t\t% (serverdefs['name'], \\\n\t\tserverdefs['hostname'], \\\n\t\tserverdefs['interface'], \\\n\t\tserverdefs['ip-address'], \\\n\t\tserverdefs['mac'], \\\n\t\tserverdefs['netmask'], \\\n\t\tserverdefs['dns-name'], \\\n\t\tserverdefs['profile'], \\\n\t\tserverdefs['gateway'], \\\n\t\tserverdefs['static'], \\\n\t\tserverdefs['kickstart']),\\\n\t\tshell=True)\n\n \t profile=subprocess.call(\"sudo cobbler profile add \\\n \t\t --name=%s \\\n --distro=%s \\\n\t --mgmt-classes=ansible-managed --ksmeta=\\\"a=0 b=0\\\" \\\n\t\t\t \t\t\t --clobber \" \\\n\t\t% (serverdefs['name'], \\\n\t\t serverdefs['distro']), \\\n\t\t shell=True)\n\n \t if (define and profile) == 0:\n \t print \"....Define %s as a bare-metal compute resource [done]\" % (serverdefs['name'])\n \t print \"....Syncing dhcp and dns entries for %s [done]\" % (serverdefs['name'])\n \t print \"....Creating a kickstart image for %s [done]\" % (serverdefs['name'])\n subprocess.call(\"sudo cobbler sync >log/rockchapel.log 2>&1\", shell=True)\n \t print \"....Scheduling reset on %s [done]\" % (serverdefs['name'])\n \t poweroff=subprocess.call(\"sudo ipmitool \\\n\t\t\t\t\t -I lanplus \\\n\t\t\t\t\t -U %s \\\n\t\t\t\t\t -P%s -H %s \\\n\t\tchassis power reset >log/rockchapel.log 2>&1\" \n\t\t\t% (serverdefs['oob-admin'], \\\n\t\t\t serverdefs['oob-passwd'], \\\n\t\t\t serverdefs['oob-ipaddress']), \\\n\t\t\tshell=True)\n\n \t pxe=subprocess.call(\"sudo ipmitool \\\n\t\t\t\t -I lanplus -U %s \\\n\t\t\t\t -P%s -H %s \\\n\t\t\tchassis bootdev pxe >log/rockchapel.log 2>&1 \" \n\t\t% (serverdefs['oob-admin'], \\\n\t\t serverdefs['oob-passwd'], \\\n serverdefs['oob-ipaddress']),\n\t\t shell=True)\n\n \t remove_oldkeys=subprocess.call(\"ssh-keygen -R %s >log/rockchapel.log 2>&1 \" % (serverdefs['ip-address']), shell=True)\n \t print \"....Remove any existing rsa keys for %s [done]\" % (serverdefs['name'])\n\n \t for n in xrange(10):\n \t time.sleep(.1)\n \t sys.stdout.flush()\n \t #sys.stdout.write(u\"\\u2588\"u\"\\u2588\")\n \t #print \"\\n\"\n \t pxe=subprocess.call(\"sudo ipmitool \\\n\t\t\t\t -I lanplus \\\n\t\t\t\t -U %s \\\n\t\t\t\t -P%s \\\n\t\t\t\t -H %s \\\n\t\tchassis bootdev pxe >log/rockchapel.log 2>&1\" \n\t\t% (serverdefs['oob-admin'],\\\n\t\t serverdefs['oob-passwd'], \\\n \t\t serverdefs['oob-ipaddress']), \\\n\t\t shell=True)\n\n \t if pxe == 0:\n print \"....Changing boot device to PXE on [%s]\" % (serverdefs['name'])\n#require progressbar33 (pip install progressbar33)\n#available widgets AnimatedMarker, Bar, BouncingBar, Counter, ETA, AdaptiveETA, \n#FileTransferSpeed, FormatLabel, Percentage, ProgressBar, ReverseBar, \n#RotatingMarker, SimpleProgress, Timer\n#Keeping it simple for now\n\n print \"....Final check [processing]\" \n from progressbar import Percentage, ProgressBar, Bar\n\n #widgets = [Percentage(), Bar(marker=u\"\\u2588\")]\n #pbar = ProgressBar(widgets=widgets).start()\n ##for i in range(100):\n #for i in range(10):\n # time.sleep(0.1)\n # pbar.update(i + 1)\n #pbar.finish() \n\n widgets = [Percentage(), Bar(marker=u\"\\u2588\")]\n pbar = ProgressBar(widgets=widgets).start()\n #for i in range(100):\n for i in range(10):\n time.sleep(0.1)\n pbar.update(i + 1)\n pbar.finish()\n\n \t print \"\"\n \t print \"....Provisioning started\"\n \t print \"=============================================\"\n \t print \"Have a beer - this will take a 10-15 minutes!\"\n \t print \"=============================================\"\n \t print \"....We will procced when [%s] is ready!\" % (serverdefs['name']) \n\t host = serverdefs['dns-name']\n\t print \"....Inializing kickstart for [%s]\" % serverdefs['dns-name']\n\t print \".... Still in build...(continuing to poll .-.-.) \",\n syms = ['\\\\', '|', '/', '-']\n for _ in range(10):\n for sym in syms:\n sys.stdout.write(\"\\r%s%s%s%s\" % (sym,sym,sym,sym))\n sys.stdout.flush()\n time.sleep(.01)\n\n while True:\n #print \"Polling started for host [\" + host + \"]\"\n\t\tssh = paramiko.SSHClient()\n\t\tssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n\t\t#key = paramiko.RSAKey.from_private_key_file(\"/etc/rc_root_id_rsa\")\n #ssh.load_system_host_keys()\n\t\ttry:\n\t\t\t#print \"Attempt to connect in try\"\n\t\t\tssh.connect(host,port=22,username='root',password='rc.io')\n\t\t\tstdin, stdout, stderr = ssh.exec_command('hostname')\n \t\t\t#print 'This is output =',stdout.readlines()\n \t\t\t#print 'This is error =',stderr.readlines()\n\t\t\tssh.close()\n\t\t\tprint \"\\r.... Attempting ansible setup (please wait)\"\n \t \t\tsetup_ansible=subprocess.call(\"sudo ansible %s -m setup >log/rockchapel.log 2>&1\" % (serverdefs['name']), shell=True)\n\t\t\tif setup_ansible == 0:\n \t\t\t print \".... Ansible hooks inplace! - Lets build some infrastructure\"\n\t\t\telse:\n\t\t\t print \"Hangon let me try again, retry...1\"\n \t \t\t setup_ansible=subprocess.call(\"sudo ansible %s -m setup >log/rockchapel.log 2>&1\" % (serverdefs['name']), shell=True)\n\t\t\t if setup_ansible == 0:\n print \".... Ansible hooks inplace! - Lets build some infrastructure\"\n else:\n print \".... That didnot work let me try again, retry...2\"\n \t \t\t setup_ansible=subprocess.call(\"sudo ansible %s -m setup >log/rockchapel.log 2>&1\" % (serverdefs['name']), shell=True)\n\t\t if setup_ansible == 0:\n print \".... Ansible hooks inplace! - Lets build some infrastructure\"\n else:\n \t \t\t setup_ansible=subprocess.call(\"sudo ansible %s -m setup >log/rockchapel.log 2>&1\" % (serverdefs['name']), shell=True)\n print \"ERROR:There was a problem, retry...3 (failed)\"\n\t\t\tbreak\n\t\texcept paramiko.AuthenticationException:\n\t\t\tprint \"authentication failed when connecting to %s\" % host\n\t\t\tsys.exit(1)\n\t\texcept:\n\t\t\t#print \"except\"\n\t\t\tprint \"still in build...(continuing to poll -.-.-) \",\n\t\t\tsyms = ['\\\\', '|', '/', '-']\n\t\t\tfor _ in range(50):\n\t\t\t for sym in syms:\n\t\t\t sys.stdout.write(\"\\r%s%s%s%s\" % (sym,sym,sym,sym))\n\t\t\t sys.stdout.flush()\n\t\t\t time.sleep(.01)\n\t\t\tpass\n\n# Build Logic Statement\n if(build) == 'c1n1':\n build_server(c1n1)\n elif (build) == 'c1n2':\n build_server(c1n2)\n elif (build) == 'c1n3':\n build_server(c1n3)\n elif (build) == 'c1n4':\n build_server(c1n4)\n\n\n \n if(sync):\n print \"syncing components\"\n status = subprocess.call(\"sudo cobbler sync\", shell=True)\n\n if(list):\n print \"Name\" + \"\t\" + \"Status of resource\"\n for server in servers:\n\tprint server +\"\t\"+ \"[FREE/UNCONFIGURED]\"\n\n if len(sys.argv)==1:\n parser.print_help()\n sys.exit(1)\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.782608687877655,
"alphanum_fraction": 0.782608687877655,
"avg_line_length": 29.33333396911621,
"blob_id": "0ef8bfc6333072fd26881520fee1d5a9ec8f0b18",
"content_id": "10de80455365427ebc7570829e173cd26b8eef30",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 92,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 3,
"path": "/bootstrap.sh",
"repo_name": "solutionring/rockchapel",
"src_encoding": "UTF-8",
"text": "# need to set this for ssh probe\ngrep Strict /etc/ssh/ssh_config \nStrictHostKeyChecking no\n\n"
},
{
"alpha_fraction": 0.7487684488296509,
"alphanum_fraction": 0.7487684488296509,
"avg_line_length": 28,
"blob_id": "ed7560faadba727b604d9627a1ea405d12898489",
"content_id": "a9292d1adf12706748e49366ca1dc96d0dd027a3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 203,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 7,
"path": "/README.md",
"repo_name": "solutionring/rockchapel",
"src_encoding": "UTF-8",
"text": "## racktool.py\n> Pull requests are welcome \n> the latest version of this project is available at https://github.com/avattathil/rockchapel\n\n```sh\n git clone https://github.com/avattathil/rockchapel \n```\n"
},
{
"alpha_fraction": 0.7051281929016113,
"alphanum_fraction": 0.7051281929016113,
"avg_line_length": 32.42856979370117,
"blob_id": "336b8c1cf497a4b6b1ea4d791ec5681425216fcb",
"content_id": "4b630e239f932278c726b60efcf8fe626125ca19",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 234,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 7,
"path": "/configs/sync_code.sh",
"repo_name": "solutionring/rockchapel",
"src_encoding": "UTF-8",
"text": "sudo cp -rfv /var/lib/cobbler var/lib/cobbler \nsudo cp -rfv /etc/cobbler etc/cobbler\nsudo cp -rfv /etc/dnsmas* etc/\nsudo cp -rfv /etc/http* etc/\nsudo cp -rfv /etc/dhcp* etc/\nsudo cp -rfv /etc/ansible* etc/\nsudo chown -R tonyv:tonyv .\n"
}
] | 6 |
462548187/stf-tools | https://github.com/462548187/stf-tools | f08787535be04ff5487f537ec1e74c4f54fd1feb | c3fccf72b829ba55bf4fa3e9ce56e3f805997ebc | 79d211281e428762722b09b92058a9747793b6b8 | refs/heads/master | 2021-12-27T13:19:31.871477 | 2018-01-10T10:10:46 | 2018-01-10T10:10:46 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7191401124000549,
"alphanum_fraction": 0.7219139933586121,
"avg_line_length": 21.53125,
"blob_id": "4f10bacd679c847b012d4a0e8dacd8e840065c85",
"content_id": "53e8b3dcc4da8035cb403333ec9524d009c66668",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1442,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 64,
"path": "/README.md",
"repo_name": "462548187/stf-tools",
"src_encoding": "UTF-8",
"text": "# stf-tools\nThis a python util allows you to reserve and release any STF device\n\n## STF\n\n[STF]: https://github.com/openstf/stf\n[STF API]: https://github.com/openstf/stf/blob/master/doc/API.md\n\nSTF project and api document are avaliable from below:\n\n*STF github: <https://github.com/openstf/stf> \n\n*STF API: <https://github.com/openstf/stf/blob/master/doc/API.md>\n\n\n## Usage\nAccording to STF API, you need get token firstly. If you don't know how to get token, please refer to [Authentication]. \n\n[Authentication]: https://github.com/openstf/stf/blob/master/doc/API.md#authentication\n``` python\t\n#coding: utf-8\nfrom stf_tools import STF\n\t\n# Your STF's url, like: https://stf.example.org\nSTF_URL = \"\"\n```\n\n### Example Code\n#### Basic usage\nGet all the devices information on the STF\n```python\t\nserial = 'NX529J'\ntoken = 'xx-xxx-xx'\nstf = STF(token)\ndevices = stf.devices()\nprint devices\n```\n\nGet specific devices information on the STF\n```python\t\ndevice_info = stf.device(serial)\nprint device_info['remoteConnectUrl']\n```\n\nUse a device. This is analogous to pressing \"Use\" in the UI.\n```python\t\nstf.use_device(serial)\n```\n\nRetrieve the remote debug URL\n```python\t\nremote_connect_url = stf.connect_device(serial)\nprint remote_connect_url\n```\n\nDisconnect a remote debugging session.\n```python\t\nstf.disconnect_device(serial)\n```\n\nStop use device. This is analogous to pressing \"Stop using\" in the UI.\n```python\t\nstf.stop_use_device(serial)\n```\n"
},
{
"alpha_fraction": 0.562355637550354,
"alphanum_fraction": 0.5643352270126343,
"avg_line_length": 33.23728942871094,
"blob_id": "bb10157c39e132026cc31461ca7c53795adfe27f",
"content_id": "36d84b9d6be2fe4b80243752e908e1ba3b307b3f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6062,
"license_type": "permissive",
"max_line_length": 160,
"num_lines": 177,
"path": "/stf.py",
"repo_name": "462548187/stf-tools",
"src_encoding": "UTF-8",
        "text": "#coding: utf-8\nimport requests\nimport json\n\nSTF_URL = 'https://stf.example.org' #config your stf url\n\nclass STF():\n    '''\n    stf doc: https://github.com/openstf/stf/blob/master/doc/API.md#devices\n    '''\n\n    def __init__(self, token=None):\n        self.token = token\n\n        if not self.token:\n            raise Exception('token can not be None.')\n\n    def devices(self):\n        '''\n        Returns all STF devices information (including disconnected or otherwise inaccessible devices).\n        :return: (list)\n        '''\n        url = STF_URL + '/api/v1/devices'\n        headers = {\n            'Authorization': 'Bearer {}'.format(self.token)\n        }\n        try:\n            response = requests.get(url=url, headers=headers, verify=False).json()\n        except Exception, e:\n            raise Exception(e)\n\n        devices = []\n        for device in response['devices']:\n            device_info = {\n                'model': device.get('model'),\n                'manufacturer': device.get('manufacturer'),\n                'serial': device.get('serial'),\n                'ready': device.get('ready'),\n                'using': device.get('using'),\n                'remoteConnect': device.get('remoteConnect'),\n                'remoteConnectUrl': device.get('remoteConnectUrl')\n            }\n            devices.append(device_info)\n        return devices\n\n    def device(self, serial):\n        '''\n        Returns information about a specific device.\n        :param serial: (str) device serial\n        :return: (dict)\n        '''\n        url = STF_URL + '/api/v1/devices/' + serial\n        headers = {\n            'Authorization': 'Bearer {}'.format(self.token)\n        }\n        try:\n            response = requests.get(url=url, headers=headers, verify=False).json()\n        except Exception, e:\n            raise Exception(e)\n        device = response['device']\n        device_info = {\n            'model': device.get('model'),\n            'manufacturer': device.get('manufacturer'),\n            'serial': device.get('serial'),\n            'ready': device.get('ready'),\n            'using': device.get('using'),\n            'remoteConnect': device.get('remoteConnect'),\n            'remoteConnectUrl': device.get('remoteConnectUrl')\n        }\n        return device_info\n\n    def use_device(self, serial):\n        '''\n        Attempts to add a device under the authenticated user's control.\n        This is analogous to pressing \"Use\" in the UI.\n        :param serial: (str) device serial\n        :return: (dict)\n        '''\n        url = STF_URL + '/api/v1/user/devices'\n        headers = {\n            'content-type': 'application/json',\n            'Authorization': 'Bearer {}'.format(self.token)\n        }\n        payload = {\n            'serial': serial\n        }\n        try:\n            requests.post(url, headers=headers, data=json.dumps(payload), verify=False)\n        except Exception, e:\n            raise Exception(e)\n\n\n    def stop_use_device(self, serial):\n        '''\n        Removes a device from the authenticated user's device list.\n        This is analogous to pressing \"Stop using\" in the UI.\n        :param serial: (str) device serial\n        :return: (dict)\n        '''\n        url = STF_URL + '/api/v1/user/devices/' + serial\n        headers = {\n            'Authorization': 'Bearer {}'.format(self.token)\n        }\n        try:\n            requests.delete(url=url, headers=headers, verify=False)\n        except Exception, e:\n            raise Exception(e)\n\n\n    def connect_device(self, serial):\n        '''\n        Allows you to retrieve the remote debug URL (i.e. an adb connectable address) for a device the authenticated user controls.\n\n        Note that if you haven't added your ADB key to STF yet, the device may be in unauthorized state after connecting to it for the first time.\n        We recommend you make sure your ADB key has already been set up properly before you start using this API.\n        You can add your ADB key from the settings page, or by connecting to a device you're actively using in the UI and responding to the dialog that appears.\n        :param serial: (str) device serial\n        :return: (str) remoteConnectUrl\n        '''\n        url = STF_URL + '/api/v1/user/devices/{serial}/remoteConnect'.format(serial=serial)\n        headers = {\n            'content-type': 'application/json',\n            'Authorization': 'Bearer {}'.format(self.token)\n        }\n        try:\n            response = requests.post(url, headers=headers, verify=False).json()\n            if response['success']:\n                return response['remoteConnectUrl']\n            else:\n                print 'Device is not owned by you or is not available'\n                return None\n        except Exception, e:\n            raise Exception(e)\n\n\n    def disconnect_device(self, serial):\n        '''\n        Disconnect a remote debugging session.\n        :param serial: (str) device serial\n        :return:\n        '''\n        url = STF_URL + '/api/v1/user/devices/{serial}/remoteConnect'.format(serial=serial)\n        headers = {\n            'content-type': 'application/json',\n            'Authorization': 'Bearer {}'.format(self.token)\n        }\n        try:\n            response = requests.delete(url=url, headers=headers, verify=False).json()\n            if response['success']:\n                print 'Device remote disconnected successfully'\n                return True\n            else:\n                print 'Device is not owned by you or is not available'\n                return False\n        except Exception, e:\n            raise Exception(e)\n\n\nif __name__ == '__main__':\n    import time\n    token = 'xx-xxxx-xx'\n    \n    stf = STF(token)\n    devices = stf.devices()\n\n    serial = 'NX529J'\n    stf.use_device(serial)\n\n    #device_info = stf.device(serial)\n    #remote_connect_url = device_info['remoteConnectUrl']\n\n    remote_connect_url = stf.connect_device(serial)\n    print remote_connect_url\n    stf.connect_device(serial)\n    time.sleep(10)\n    stf.disconnect_device(serial)\n    stf.stop_use_device(serial)\n\n\n"
}
] | 2 |
kaikiat/python-solutions | https://github.com/kaikiat/python-solutions | ada78a5677ab3ba4f647dfd132c94704b19091fb | 3293c969ce5f064edd3d600b5c4fbbbde744a692 | 2a4c1de82e64dd5841829dbf30347d5deab42b38 | refs/heads/master | 2018-10-29T15:15:46.671492 | 2018-09-12T16:42:26 | 2018-09-12T16:42:26 | 127,431,684 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5016778707504272,
"alphanum_fraction": 0.5436241626739502,
"avg_line_length": 22.799999237060547,
"blob_id": "798815cf7e1fbc9886d6aa852bceca39994f6958",
"content_id": "e37dd8f69061cfb445b1231f681765c1edd6f8ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 596,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 25,
"path": "/projEuler3.py",
"repo_name": "kaikiat/python-solutions",
"src_encoding": "UTF-8",
"text": "def isPal(num):\n numString = str(num)\n for i in range(0,int(len(numString)/2+1)):\n if (numString[i] != numString[-i-1]):\n return False\n return True\n \n# Keep track of max product\nmaxProduct = 0\n \n\nmax1, max2 = 0,0\n \n\nfor i in range(999, 99, -1):\n for j in range(999, 99, -1):\n product = i * j\n if isPal(product):\n if(product > maxProduct):\n maxProduct = product\n max1, max2 = i, j\n \n\nprint('The largest palindromic product is: ' + str(maxProduct))\nprint(str(maxProduct) + ' = ' + str(max1) + ' * ' + str(max2))\n\n"
},
{
"alpha_fraction": 0.38223937153816223,
"alphanum_fraction": 0.44015443325042725,
"avg_line_length": 20.5,
"blob_id": "de6b90f2edb89ccfc8870917385c4f70a534b77f",
"content_id": "d6c29147f28cc69ce3a5aeae68040b3f4119811e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 259,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 12,
"path": "/projEuler9.py",
"repo_name": "kaikiat/python-solutions",
"src_encoding": "UTF-8",
"text": "import time\nstart_time = time.time()\n\nfor a in range(1,400):\n for b in range(1,400):\n c=1000-a-b\n if a<b<c:\n if a**2 + b**2 ==c**2:\n print(a*b*c)\n \n \nprint(time.time() - start_time,\" seconds\")\n\n"
}
] | 2 |
and-megan/dog-walking-api | https://github.com/and-megan/dog-walking-api | 39f339986ab4d39723621007d669130e218838a7 | 6c8fc4b76c166bacfd355fb215322c40e2258001 | 5bb9cb44f314e837927b776db7fed53391bb340e | refs/heads/master | 2020-04-30T02:01:47.643532 | 2019-03-19T15:44:14 | 2019-03-19T15:44:14 | 176,546,872 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4276595711708069,
"alphanum_fraction": 0.45691490173339844,
"avg_line_length": 18.38144302368164,
"blob_id": "d17e52673600d92ceb5ceb2d8f5dd76bc68c5991",
"content_id": "4ac19f4b163b560346998eed02092da1f3800e40",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1880,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 97,
"path": "/app.py",
"repo_name": "and-megan/dog-walking-api",
"src_encoding": "UTF-8",
"text": "import json\nfrom flask import Flask, jsonify, make_response, abort, request\n\n\napp = Flask(__name__)\n\n\[email protected]('/dogs', methods=['GET'])\ndef get_dogs():\n return jsonify(dogs_objs)\n\[email protected]('/dogs/<int:dog_id>')\ndef get_dog(dog_id):\n for dog in dogs_objs:\n if dog['id'] == dog_id:\n return jsonify(dog)\n\n abort(404)\n\[email protected]('/dogs', methods=['POST'])\ndef create_dog():\n data = json.loads(request.data)\n import pdb;pdb.set_trace()\n if not data or not _validate_dog(data):\n abort(400)\n dog_id = _get_dog_id()\n dog = {\n 'id':dog_id,\n 'name': data['name'],\n 'credit': 100,\n 'rating': 5.0,\n 'color': data['color'],\n 'age': data['age']\n }\n dogs_objs.append(dogs_objs)\n\n return jsonify({'dog': dog}), 201\n\[email protected](404)\ndef not_found(error):\n return make_response(jsonify({'error': 'Not found'}), 404)\n\n\ndef _validate_dog(data):\n return data.keys().sort() == ['name', 'color', 'age'].sort()\n\ndef _get_dog_id():\n num = 0\n for dog in dogs_objs:\n if dog['id'] > num:\n num = dog['id']\n\n return num + 1\n\n\ndogs_objs = [\n {\n \"id\": 1,\n \"name\": \"Dusty\",\n \"credit\": 100,\n \"rating\": 4.9,\n \"color\": \"tricolor\",\n \"age\": 13\n },\n {\n \"id\": 2,\n \"name\": \"Alfie\",\n \"credit\": 75,\n \"rating\": 4.4,\n \"color\": \"white\",\n \"age\": 6\n },\n {\n \"id\": 3,\n \"name\": \"Sofie\",\n \"credit\": 50,\n \"rating\": 4.2,\n \"color\": \"brown\",\n \"age\": 10\n },\n {\n \"id\": 4,\n \"name\": \"Maggie\",\n \"credit\": 80,\n \"rating\": 3.9,\n \"color\": \"black\",\n \"age\": 2\n },\n {\n \"id\": 5,\n \"name\": \"Shadow\",\n \"credit\": 15,\n \"rating\": 4.6,\n \"color\": \"grey\",\n \"age\": 5\n },\n]\n"
}
] | 1 |
kelind/personal_site | https://github.com/kelind/personal_site | a9cd346d8642e0f7b2eb02faa6484dbfab765cb8 | a1e4a718cfcf26248f1a07a2e737e6270ecb4225 | fc7e03a63dc1bedd262a9a5dd0c1a3193f349bb0 | refs/heads/master | 2021-06-18T03:16:06.085564 | 2021-01-05T06:49:00 | 2021-01-05T06:49:00 | 129,167,686 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5997458696365356,
"alphanum_fraction": 0.6073697805404663,
"avg_line_length": 34.772727966308594,
"blob_id": "f9649a46082b9071258183a3661d5401c870c435",
"content_id": "43872e69dae87d0a72c9fa819d803e416736e857",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 787,
"license_type": "no_license",
"max_line_length": 182,
"num_lines": 22,
"path": "/personal_site/entries/templates/tags.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Tags</%block>\n\n<%block name=\"maincontent\">\n\n <article id=\"tagblock\">\n <h1>Tags</h1>\n <p>Click a tag name to retrieve all entries it's been used on. If you subscribe to a tag, you'll receive an e-mail whenever a new entry with that tag is posted.</p>\n\n <ul>\n % for tag in tags:\n % if tag[2] == 1:\n <li><a href=\"${url_for('entries.tag_details', slug=tag.slug)}\">${tag.tagname}</a> - ${tag[2]} use (<a href=\"${url_for('mailing.subscribe', slug=tag.slug)}\">Subscribe</a>)</li>\n % else:\n <li><a href=\"${url_for('entries.tag_details', slug=tag.slug)}\">${tag.tagname}</a> - ${tag[2]} uses (<a href=\"${url_for('mailing.subscribe', slug=tag.slug)}\">Subscribe</a>)</li>\n % endif\n % endfor\n </ul>\n </article>\n\n</%block>\n"
},
{
"alpha_fraction": 0.7882882952690125,
"alphanum_fraction": 0.792792797088623,
"avg_line_length": 36,
"blob_id": "3c876ec53502e588ffb418a67c2447aee0645e91",
"content_id": "5004280c319c6846ffeb79778418022fdc53d9f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 222,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 6,
"path": "/personal_site/mailing/forms.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from wtforms import Form\nfrom wtforms.fields.html5 import EmailField\nfrom wtforms.validators import DataRequired, Email\n\nclass MailingSubscribeForm(Form):\n email = EmailField('Email address', [DataRequired(), Email()])\n"
},
{
"alpha_fraction": 0.6043956279754639,
"alphanum_fraction": 0.6043956279754639,
"avg_line_length": 29.33333396911621,
"blob_id": "2f100d94b9b2c7244fc407386c02a35bb3cd9b4e",
"content_id": "c6ec2589f0272dbb6e6ad80f31ecde70f6068e66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 273,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 9,
"path": "/personal_site/entries/templates/comment.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "##<%page args=\"form\" />\n\n<form action=\"/api/comment\" id=\"comment\" method=\"post\">\n ${form.entry_id}\n <p>${form.name.label} ${form.name}</p>\n <p>${form.email.label} ${form.email}</p>\n <p>${form.body.label} ${form.body}</p>\n <button type=\"submit\">Submit</button>\n</form>\n"
},
{
"alpha_fraction": 0.6470588445663452,
"alphanum_fraction": 0.658823549747467,
"avg_line_length": 17.88888931274414,
"blob_id": "54b9579fbb6cf716638230a7e5ee0878e4c1c6b3",
"content_id": "ac308dea9eb4d500af880df3d9085b625d0447bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 170,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 9,
"path": "/personal_site/entries/templates/tagged.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Entries Tagged ${tag.tagname}</%block>\n\n<%block name=\"maincontent\">\n\n<h1>Entries Tagged ${tag.tagname}</h1>\n\n</%block>\n"
},
{
"alpha_fraction": 0.6531791687011719,
"alphanum_fraction": 0.6705202460289001,
"avg_line_length": 27.83333396911621,
"blob_id": "87294c68d203ca00b37155e5a83bf5c70d1271c3",
"content_id": "c8702b52406db8fa0dc1ca5d006bc35ea0ccd9b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 519,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 18,
"path": "/config_template.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "import os\n\nclass Configuration(object):\n APPLICATION_DIR = os.path.dirname(os.path.realpath(__file__)) + '/personal_site'\n DOMAIN = 'http://localhost:5000'\n PRODUCTION_DOMAIN = 'http://kelsilindblad.com'\n\n DEBUG = False\n SQLALCHEMY_DATABASE_URI = 'sqlite:///{}/blog.db'.format(APPLICATION_DIR)\n MAKO_TRANSLATE_EXCEPTIONS = False\n\n SECRET_KEY = 'put a secret key here'\n TOKEN_EXPIRATION = 86400\n\n STATIC_DIR = APPLICATION_DIR + '/static'\n\n EMAIL_ENABLED = False\n SMTP_HOST = 'localhost'\n"
},
{
"alpha_fraction": 0.6547269821166992,
"alphanum_fraction": 0.6588373184204102,
"avg_line_length": 25.200000762939453,
"blob_id": "286dbef3efb17c962830e3f811c6e9163586a2a3",
"content_id": "df4f4d2f7f3492e7222d084a159eb4095331b446",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1703,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 65,
"path": "/personal_site/helpers.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from flask import request\nfrom flask.ext.mako import render_template\n\nfrom itsdangerous import URLSafeTimedSerializer\n\nfrom smtplib import SMTP_SSL as SMTP\nfrom email.mime.text import MIMEText\nfrom email.header import Header\n\nimport personal_site.app\n\ndef object_list(template_name, query, paginate_by=20, **context):\n page = request.args.get('page')\n if page and page.isdigit():\n page = int(page)\n else:\n page = 1\n object_list = query.paginate(page, paginate_by)\n return render_template(template_name, object_list=object_list, **context)\n\ndef send_mail(recipients, subject, content):\n '''\n Given a set of recipients, send each an e-mail\n with subject \"subject\" and the specified \"content.\" \n '''\n\n from smtplib import SMTP_SSL as SMTP\n from email.mime.text import MIMEText\n from email.header import Header\n\n from app import config\n\n email_msg = MIMEText(content, 'html', 'utf-8')\n email_msg['Subject'] = Header(subject, 'utf-8')\n email_msg['From'] = '[email protected]'\n\n if len(recipients) == 1:\n email_msg['To'] = recipients[0]\n recips = [email_msg['To']]\n\n else:\n # We're gonna BCC everyone\n recips = list(recipients)\n\n s = SMTP(config['SMTP_HOST'])\n s.set_debuglevel(False)\n s.login(config['SMTP_USERNAME'], config['SMTP_PASSWORD'])\n\n try:\n s.sendmail(email_msg['From'], recips, email_msg.as_string())\n\n except:\n return False\n\n finally:\n s.close()\n return True\n\ndef generate_token(email, tag, action):\n\n import personal_site.app.config\n\n serializer = URLSafeTimedSerializer(config['SECRET_KEY'])\n\n return serializer.dumps([email, tag], salt=action)\n"
},
{
"alpha_fraction": 0.7754010558128357,
"alphanum_fraction": 0.7754010558128357,
"avg_line_length": 27.769229888916016,
"blob_id": "71741a1b5eb025c4e7719eafffb7f4fb3aecadcd",
"content_id": "53337c6db7f3f17f55ec2df78617980791eea611",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 374,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 13,
"path": "/personal_site/__init__.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from personal_site.app import app\nimport personal_site.admin\nimport personal_site.models\nimport personal_site.views\n\nfrom personal_site.entries.blueprint import entries\nfrom personal_site.mailing.blueprint import mailing\n\napp.register_blueprint(entries, url_prefix='/entries')\napp.register_blueprint(mailing, url_prefix='/mailing')\n\nif __name__ == '__main__':\n app.run()\n"
},
{
"alpha_fraction": 0.5080000162124634,
"alphanum_fraction": 0.6940000057220459,
"avg_line_length": 15.129032135009766,
"blob_id": "717f911928741b896b06a0a309aadc024c33f22a",
"content_id": "61a8ae1393ad7489f07acad4fe4d09658dd106b0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 500,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 31,
"path": "/requirements.txt",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "alembic==0.9.1\nappdirs==1.4.3\nbcrypt==3.1.3\nblinker==1.4\ncffi==1.10.0\nclick==6.7\nFlask==0.12\nFlask-Admin==1.5.0\nFlask-Bcrypt==0.7.1\nFlask-Login==0.4.0\nFlask-Mako==0.4\nFlask-Migrate==2.0.3\nFlask-Script==2.0.5\nFlask-SQLAlchemy==2.2\nFlask-WTF==0.14.2\ngunicorn==19.7.1\nitsdangerous==0.24\nJinja2==2.9.5\nMako==1.0.6\nMarkdown==2.6.8\nMarkupSafe==1.0\nolefile==0.44\npackaging==16.8\nPillow==4.1.0\npycparser==2.17\npyparsing==2.2.0\npython-editor==1.0.3\nsix==1.10.0\nSQLAlchemy==1.1.6\nWerkzeug==0.12.1\nWTForms==2.1\n"
},
{
"alpha_fraction": 0.7462311387062073,
"alphanum_fraction": 0.7546063661575317,
"avg_line_length": 26.76744270324707,
"blob_id": "58903eed0588d1f5ef85d1139989976342d802fc",
"content_id": "365cd2885ad9582333bc24de4314aacab98eb3fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1194,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 43,
"path": "/personal_site/app.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from flask import Flask, g\nfrom flask_bcrypt import Bcrypt\nfrom flask_login import LoginManager, current_user\nfrom flask_mako import MakoTemplates\nfrom flask.ext.migrate import Migrate, MigrateCommand\n#from flask_restless import APIManager\nfrom flask.ext.script import Manager\nfrom flask.ext.sqlalchemy import SQLAlchemy\n\nfrom config import Configuration\n\napp = Flask(__name__)\napp.config.from_object(Configuration)\n\ndb = SQLAlchemy(app)\nmigrate = Migrate(app, db)\n\n#api = APIManager(app, flask_sqlalchemy_db=db)\n\nmanager = Manager(app)\nmanager.add_command('db', MigrateCommand)\n\nmako = MakoTemplates(app)\n\nif not app.debug:\n import logging\n from logging.handlers import RotatingFileHandler\n\n file_handler = RotatingFileHandler('tmp/personal.log', 'a', 1 * 1024 * 1024, 5)\n file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]'))\n \n app.logger.setLevel(logging.WARNING)\n file_handler.setLevel(logging.WARNING)\n app.logger.addHandler(file_handler) \n\nbcrypt = Bcrypt(app)\n\nlogin_manager = LoginManager(app)\nlogin_manager.login_view = \"login\"\n\[email protected]_request\ndef _before_request():\n g.user = current_user\n"
},
{
"alpha_fraction": 0.6165048480033875,
"alphanum_fraction": 0.6213592290878296,
"avg_line_length": 18.619047164916992,
"blob_id": "b10752d7a109934576ad3223440d47bd7b58a9b8",
"content_id": "fa610bac68da9dee091db20e4ee2426c978b57b1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 412,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 21,
"path": "/personal_site/entries/templates/entry.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%! from markdown import markdown %>\n\n<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">${entry.title}</%block>\n\n<%block name=\"maincontent\">\n\n<article id=\"blog-entry\">\n <h1>${entry.title}</h1>\n <p><i>Published ${format_date(entry.created_timestamp)}</i></p>\n ${markdown(entry.body)}\n\n % if entry.tags:\n <%include file=\"taglist.mak\" />\n % endif\n\n ##<%include file=\"comment.mak\" />\n</article>\n\n</%block>\n"
},
{
"alpha_fraction": 0.6399999856948853,
"alphanum_fraction": 0.6509677171707153,
"avg_line_length": 38.74359130859375,
"blob_id": "b8a9acb03657b08bebc8cc54e6720e797694e6c4",
"content_id": "dbe5469e857b9d61b5a5e3f93d71cc987cf45be4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1550,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 39,
"path": "/personal_site/forms.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from wtforms import Form, PasswordField, BooleanField, HiddenField, StringField, TextAreaField\nfrom wtforms.validators import DataRequired, Email, Length\nfrom personal_site.models import Entry, User\n\nclass CommentForm(Form):\n name = StringField('Name: ', validators=[DataRequired(), Length(1, 64, 'Names cannot exceed 64 characters in length.')])\n email = StringField('Email: ', validators=[Email()])\n body = TextAreaField('Comment: ', validators=[DataRequired(), Length(10, 3000, 'Comments must be at least 10 and no more than 3,000 characters long.')])\n entry_id = HiddenField(validators=[DataRequired()])\n\n def validate(self):\n if not super(CommentForm, self).validate():\n return False\n\n # Ensure that entry_id maps to a public entry\n entry = Entry.query.filter(\n (Entry.status == Entry.STATUS_PUBLIC) &\n (Entry.id == self.entry_id.dat)).first()\n \n if not entry:\n return False\n else:\n return True\n\nclass LoginForm(Form):\n name = StringField(\"Username: \", validators=[DataRequired()])\n password = PasswordField(\"Password: \", validators=[DataRequired()])\n remember_me = BooleanField(\"Remember me?\", default=True)\n\n def validate(self):\n if not super(LoginForm, self).validate():\n return False\n\n self.user = User.authenticate(self.name.data, self.password.data)\n if not self.user:\n self.name.errors.append(\"Invalid email or password.\")\n return False\n\n return True\n"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 29.125,
"blob_id": "3497769af71593bdc6aaad060ab256b95bc1bdaf",
"content_id": "0274aefa81538a6202bd6f65667f739648224dd7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 723,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 24,
"path": "/personal_site/entries/forms.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "#from flask_wtf import FlaskForm\nfrom wtforms import Form\nfrom wtforms import SelectField, StringField, TextAreaField\nfrom wtforms.validators import InputRequired\n\nfrom personal_site.models import Entry\n\nclass EntryForm(Form):\n#class EntryForm(FlaskForm):\n title = StringField('Title', validators=[InputRequired()])\n body = TextAreaField('Body', validators=[InputRequired()])\n status = SelectField(\n 'Entry status',\n choices = (\n (Entry.STATUS_PUBLIC, 'Public'),\n (Entry.STATUS_DRAFT, 'Draft')),\n coerce=int,\n validators=[InputRequired()])\n \n\n def save_entry(self, entry):\n self.populate_obj(entry)\n entry.generate_slug()\n return entry\n"
},
{
"alpha_fraction": 0.7606149315834045,
"alphanum_fraction": 0.7635431885719299,
"avg_line_length": 34.02564239501953,
"blob_id": "ef8dfba8e1fceff0aff99905ce3a624b60dc3df8",
"content_id": "a77a648ddd74a8a542a10192c0a41cfc5a1f8ac3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1366,
"license_type": "no_license",
"max_line_length": 320,
"num_lines": 39,
"path": "/README.md",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "# personal_site\nCode to run my personal web site\n\nInstallation\n===\n\nClone the repository to get the source code.\n\n git clone https://github.com/kelind/personal_site.git\n\nSet up a virtualenv to isolate the app's code, then install the required dependencies.\n\n cd personal_site\n virtualenv venv\n source venv/bin/activate\n pip install -r local-requirements.txt\n\nCreate a `config.py` to define the app's root directory, database location, etc. The file `config_template.py` contains the necessary variables with reasonable defaults.\n\n cp config_template.py config.py\n\nFor production, there are some settings you'll want to change:\n\n* Set `DEBUG = False`\n* Choose an actually-secret `SECRET_KEY`\n* If you want to use the mailing list features, set `EMAIL_ENABLED = True` and fill out the SMTP variables with the appropriate values for your mailserver. The way the code is set up is appropriate for a server running on localhost, but does not support logging into an external server or establishing an SSL connection.\n\nCreate the database:\n\n python manage.py db migrate\n python manage.py db upgrade\n\nTo run a local instance for testing:\n\n python manage.py runserver\n\nBy default you can view your new site at `http://localhost:5000`\n\nBe ready for a weird-looking result! The layout isn't too happy when there aren't any blog entries to retrieve.\n"
},
{
"alpha_fraction": 0.5251396894454956,
"alphanum_fraction": 0.5251396894454956,
"avg_line_length": 21.375,
"blob_id": "eda9cdc4b5d65258878de8ab44075ae4fcf4ce6e",
"content_id": "c60cb116cd2780e54815a999c6e1ca1c531faef9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 179,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 8,
"path": "/personal_site/templates/nav.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<nav id=\"mainnav\">\n\n <ul class=\"nav\">\n <li><a href=\"${url_for('entries.index')}\">Blog</a></li>\n <li><a href=\"${url_for('entries.tag_index')}\">Tags</a></li>\n </ul>\n\n</nav>\n"
},
{
"alpha_fraction": 0.6329113841056824,
"alphanum_fraction": 0.6360759735107422,
"avg_line_length": 24.280000686645508,
"blob_id": "159a6aa493988c84546a0abb7a2604b123a085c7",
"content_id": "a2003df1840dac3708ea44bcaf508726f32e3461",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 632,
"license_type": "no_license",
"max_line_length": 163,
"num_lines": 25,
"path": "/personal_site/mailing/templates/subscribe.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Subscribe to ${tag.tagname}</%block>\n\n<%block name=\"maincontent\">\n\n<article>\n <h1>Subscribe to ${tag.tagname}</h1>\n\n <p>Enter your e-mail address below to receive an e-mail whenever a post with the tag \"${tag.tagname}\" is posted to the blog. You can unsubscribe at any time.</p>\n\n <form action=\"${url_for('mailing.subscribe', slug=tag.slug)}\" id=\"subscribe\" method=\"post\">\n\n % for field in form:\n % if not field.id == \"csrf_token\":\n ${field.label} ${field}\n % endif\n % endfor\n\n <button type=\"submit\">Subscribe</button>\n </article>\n\n</form>\n\n</%block>\n"
},
{
"alpha_fraction": 0.6399999856948853,
"alphanum_fraction": 0.6399999856948853,
"avg_line_length": 17.478260040283203,
"blob_id": "6198888edf0140013f05fdf2be1bf682c912f999",
"content_id": "a846609e757fe8639332a4fd9dba43901e6f0391",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 425,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 23,
"path": "/personal_site/mailing/templates/confirm_succeeded.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Confirmation Succeeded</%block>\n\n<%block name=\"maincontent\">\n\n <article>\n\n % if action == 'subscribe':\n\n <p>You have successfully subscribed to ${tag}. Watch your e-mail for future updates!</p>\n\n % else:\n\n <p>You have been successfully unsubscribed from ${tag}.</p>\n\n % endif\n\n <a href=\"${url_for('entries.index')}\">Return to the blog.</a>\n\n </article>\n\n</%block>\n"
},
{
"alpha_fraction": 0.6810035705566406,
"alphanum_fraction": 0.6881720423698425,
"avg_line_length": 24.363636016845703,
"blob_id": "ea8a23b4696c53648d239948a71f423ca035115e",
"content_id": "161a6f2d8671a4cbb8fe41861b1dcf673d086e22",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 279,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 11,
"path": "/personal_site/mailing/unsubscribe.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Unsubscribe from ${tag.tagname}</%block>\n\n<%block name=\"maincontent\">\n\n<h1>Unsubscribe from ${tag.tagname}</h1>\n\n<p>Are you sure you want to unsubscribe? You will no longer receive updates about \"${tag.tagname}\".</p>\n\n</%block>\n"
},
{
"alpha_fraction": 0.5783132314682007,
"alphanum_fraction": 0.5807228684425354,
"avg_line_length": 36.6363639831543,
"blob_id": "50ba73b8dee27bf6784b5f9f0343d3344cdffe86",
"content_id": "359fccaefbfa9cc37a3901a14b4dede996649a94",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 415,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 11,
"path": "/personal_site/entries/templates/taglist.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%page args=\"entry\" />\n\n<p class=\"inline-partner double-space\">Tags:</p> <ul class=\"taglist\">\n % for index, tag in enumerate(entry.tags):\n % if not index + 1 == len(entry.tags):\n <li><a href=\"${url_for('entries.tag_details', slug=tag.slug)}\">${tag.tagname}</a>, </li> \n % else:\n <li><a href=\"${url_for('entries.tag_details', slug=tag.slug)}\">${tag.tagname}</a></li>\n % endif\n % endfor\n</ul>\n\n"
},
{
"alpha_fraction": 0.6803237199783325,
"alphanum_fraction": 0.6833586096763611,
"avg_line_length": 32.50847625732422,
"blob_id": "4a0bcdfe02a3edfc116dd1e3946fb4164021613c",
"content_id": "04cd265c14a7eae7d68e4fa64c541f71181832fc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1977,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 59,
"path": "/personal_site/entries/blueprint.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from flask import Blueprint, redirect, request, url_for\nfrom flask_mako import render_template\n\nfrom sqlalchemy import func\n\nfrom personal_site.helpers import object_list\nfrom personal_site.models import Entry, Tag\n\nfrom personal_site.forms import CommentForm\n\nfrom personal_site.entries.forms import EntryForm\n\nentries = Blueprint('entries', __name__, template_folder='templates')\n\n'''\[email protected]('/create/', methods=['GET', 'POST'])\ndef create():\n\n from app import db\n\n if request.method == 'POST':\n form = EntryForm(request.form)\n if form.validate():\n entry = form.save_entry(Entry())\n db.session.add(entry)\n db.session.commit()\n return redirect(url_for('entries.detail', slug=entry.slug))\n else:\n form = EntryForm()\n\n return render_template('create.mak', form=form)\n'''\n\[email protected]('/')\ndef index():\n entries = Entry.query.filter(Entry.status == Entry.STATUS_PUBLIC).order_by(Entry.created_timestamp.desc())\n return object_list('main.mak', entries, page_title='Home')\n\[email protected]('/tags/')\ndef tag_index():\n\n from personal_site.app import db \n\n tags = db.session.query(Tag.tagname, Tag.slug, func.count(Tag.tagname)).group_by(Tag.tagname).order_by(Tag.tagname).all()\n\n return render_template('tags.mak', tags=tags)\n\[email protected]('/tags/<slug>')\ndef tag_details(slug):\n tag = Tag.query.filter(Tag.slug == slug).first_or_404()\n entries = tag.entries.filter(Entry.status == Entry.STATUS_PUBLIC).order_by(Entry.created_timestamp.desc())\n return object_list('main.mak', entries, tag=tag, page_title='Entries Tagged {}'.format(tag.tagname))\n\[email protected]('/<slug>/')\ndef detail(slug):\n entry = Entry.query.filter((Entry.slug == slug) & (Entry.status == Entry.STATUS_PUBLIC)).first_or_404()\n form = CommentForm(data={'entry_id': entry.id})\n resources = ['comments.js']\n return render_template('entry.mak', entry=entry, form=form, resources=resources)\n"
},
{
"alpha_fraction": 0.6798866987228394,
"alphanum_fraction": 0.6798866987228394,
"avg_line_length": 28.41666603088379,
"blob_id": "a16880f8f47a5d7933dd0d25db9c9d099fe5c31a",
"content_id": "1c876953d205de3c0ad1b443183eaa2f30e79b58",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 353,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 12,
"path": "/personal_site/mailing/templates/subscribe_confirm.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Confirm Subscription</%block>\n\n<%block name=\"maincontent\">\n\n<article>\n <p>A confirmation e-mail has been sent to ${email}. To complete your subscription, please click the link in the message you receive.</p>\n <p><a href=\"${url_for('entries.index')}\">Return to the blog.</a></p>\n</article>\n\n</%block>\n"
},
{
"alpha_fraction": 0.617977499961853,
"alphanum_fraction": 0.6217228174209595,
"avg_line_length": 21.25,
"blob_id": "5583f25f85e47f303b8db476d3fb12b186d0b626",
"content_id": "47d45deecfb9b4b91d9a3744ef63774bad136dab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 534,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 24,
"path": "/personal_site/templates/login.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%namespace name=\"util\" file=\"util/error.mak\" />\n\n<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Log In</%block>\n\n<%block name=\"maincontent\">\n\n<h1>Log In</h1>\n\n<form action=\"${url_for('login', next=request.args.get('next', ''))}\" method=\"post\">\n\n ${form.name.label} ${form.name}\n <%util:error field=\"${form.name}\" />\n\n ${form.password.label} ${form.password}\n ${form.remember_me} ${form.remember_me.label} \n\n <button type=\"submit\">Log In</button>\n <a class=\"btn\" href=\"${url_for('homepage')}\">Cancel</a>\n\n</form>\n\n</%block>\n"
},
{
"alpha_fraction": 0.5494137406349182,
"alphanum_fraction": 0.552763819694519,
"avg_line_length": 20.321428298950195,
"blob_id": "17341a272f4e98cc4bc568de213a8494cf7834be",
"content_id": "5be8c9249fba6114b95879b325edd1d0f2340575",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 597,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 28,
"path": "/personal_site/entries/templates/create.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Create Entry</%block>\n\n<%block name=\"maincontent\">\n\n <h1>Create New Entry</h1>\n\n <form action=\"${url_for('entries.create')}\" id=\"create_entry\" method=\"post\">\n\n ${form.csrf_token}\n\n % for field in form:\n % if not field.id == 'csrf_token':\n <div class=\"form-group\">\n ${field.label}\n ${field}\n </div>\n % endif\n % endfor\n\n <div class=\"form-group\">\n <button type=\"submit\">Create</button>\n <a class=\"button\" href=\"${url_for('entries.index')}\">Cancel</a>\n </div>\n </form>\n\n</%block>\n"
},
{
"alpha_fraction": 0.7014315128326416,
"alphanum_fraction": 0.7014315128326416,
"avg_line_length": 29.5625,
"blob_id": "315497c52b37116ca2205a1f773b6df5782b8eb8",
"content_id": "f4ca099e8b100835622b32d77a8cbf707794f8c7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 489,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 16,
"path": "/personal_site/mailing/templates/confirm_failed.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">Confirmation Failed</%block>\n\n<%block name=\"maincontent\">\n\n<article>\n% if reason == 'disabled':\n <p>E-mail subscription is currently disabled. Please try again later.</p>\n% else:\n <p>There was a problem with the security token or it has expired. Please try subscribing or unsubscribing again to generate another confirmation e-mail.</p>\n% endif\n<p><a href=\"${url_for('entries.index')}\">Return to the blog.</a></p>\n</article>\n\n</%block>\n"
},
{
"alpha_fraction": 0.6558441519737244,
"alphanum_fraction": 0.6558441519737244,
"avg_line_length": 27.285715103149414,
"blob_id": "47808ac9733e3e0204646bed6caa6532a8bced73",
"content_id": "cb15a87b733d029e39f951586c24b33cc24f7c9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1386,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 49,
"path": "/personal_site/views.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from flask import url_for, flash, redirect, request\nfrom flask_login import login_user, logout_user\nfrom flask_mako import render_template\n\nfrom personal_site.app import app\nfrom personal_site.app import login_manager\nfrom personal_site.forms import LoginForm\nfrom personal_site.models import User\n\[email protected]_processor\ndef date_formatter():\n\n def format_date(datestamp):\n import datetime\n\n return datetime.datetime.strftime(datestamp, '%B %d %Y')\n\n return dict(format_date=format_date)\n\[email protected]('/')\ndef homepage():\n return redirect(url_for('entries.index'))\n\[email protected]('/login/', methods=['GET', 'POST'])\ndef login():\n if request.method == 'POST':\n form = LoginForm(request.form)\n\n if form.validate():\n login_user(User.query.filter_by(name=form.name.data).first(), remember=form.remember_me.data)\n flash('Successfully logged in as {}'.format(form.name.data, 'Success'))\n\n next_url = request.args.get('next')\n\n if next_url:\n return redirect(next_url)\n else:\n return redirect(url_for('homepage'))\n\n else:\n form = LoginForm()\n\n return render_template('login.mak', form=form)\n\[email protected]('/logout/')\ndef logout():\n logout_user()\n flash('You have been logged out.', 'Success')\n return redirect(request.arg.get('next') or url_for('homepage'))\n"
},
{
"alpha_fraction": 0.4516128897666931,
"alphanum_fraction": 0.4516128897666931,
"avg_line_length": 19.55555534362793,
"blob_id": "8f14d924174e6666aa1d28166fca3e95235b43a4",
"content_id": "10626983d97e21503afe58e2b8c92721a50f81d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 186,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 9,
"path": "/personal_site/templates/util/error.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%def name=\"error(field)\">\n % if field.errors:\n <ul class=\"error\">\n % for err in field.errors:\n <li>${err}</li>\n % endfor\n </ul>\n % endif\n</%def>\n\n"
},
{
"alpha_fraction": 0.4364583194255829,
"alphanum_fraction": 0.4364583194255829,
"avg_line_length": 23.615385055541992,
"blob_id": "2e0c64962df9aa1f6d5b4a8e4b950d661e51700c",
"content_id": "7920f701e30c3edc36dba84fd4a9049beff91511",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 960,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 39,
"path": "/personal_site/templates/includes/pagination.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<nav class=\"pagination\">\n <ul>\n % if not object_list.has_prev:\n <li class=\"disabled\">\n % else:\n <li>\n % endif\n % if object_list.has_prev:\n <a href=\"./?page=${object_list.prev_num}\">«</a>\n % else:\n <a href=\"#\">«</a>\n % endif\n </li>\n % for page in object_list.iter_pages():\n <li>\n % if page:\n % if page == object_list.page:\n <a class=\"active\" href=\"./?page=${page}\">Page ${page}</a>\n % else:\n <a href=\"./?page=${page}\">Page ${page}</a>\n % endif\n % else:\n <a class=\"disabled\">...</a>\n % endif\n </li>\n % endfor\n % if not object_list.has_next:\n <li class=\"disabled\">\n % else:\n <li>\n % endif\n % if object_list.has_next:\n <a href=\"./?page=${object_list.next_num}\">»</a>\n % else:\n <a href=\"#\">»</a>\n % endif\n </li>\n </ul>\n</nav>\n"
},
{
"alpha_fraction": 0.559366762638092,
"alphanum_fraction": 0.5699208378791809,
"avg_line_length": 18.947368621826172,
"blob_id": "6a176f17d25fdb68fe2da14b054c2e4691da2e51",
"content_id": "b5d0d66b932332ab4eeb89650c6e74f0a2d4aa1d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 379,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 19,
"path": "/personal_site/templates/test.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n</head>\n<body>\n <div id=\"main\">\n <header id=\"mainheader\">\n <div id=\"banner\"> </div>\n <h1>Placeholder</h1>\n </header>\n <div id=\"content\">\n </div>\n <footer id=\"mainfooter\">\n </footer>\n </div>\n</body>\n</html>\n"
},
{
"alpha_fraction": 0.6671208143234253,
"alphanum_fraction": 0.6707538366317749,
"avg_line_length": 33.9523811340332,
"blob_id": "2f12ab843442710ad7d8efdac6d3d6c9dd0125ab",
"content_id": "218f4b9b0a26baa0f8a7529d38dbf1ef8dc54052",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4404,
"license_type": "no_license",
"max_line_length": 282,
"num_lines": 126,
"path": "/personal_site/mailing/blueprint.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "'''\n Okay, here's how it should work.\n\n You should be able to go to a URL like /mailing/subscribe/$tagname in order to sign up to receive notifications when a new post is made in that tag.\n\n Whenever a new post is made, its tags should be checked. For each tag, get all subscribed e-mail addresses. Then make a set() to remove duplicates and BCC everybody in that post with the post content.\n\n There should also be an \"unsubscribe\" view that will let someone remove their e-mail from the list at any time.\n'''\n\nfrom flask import Blueprint, redirect, request, url_for\nfrom flask_mako import render_template\n\nfrom itsdangerous import URLSafeTimedSerializer\n\nfrom personal_site import app\nfrom personal_site.models import Tag, Email\n\nfrom personal_site.mailing.forms import MailingSubscribeForm\n\nmailing = Blueprint('mailing', __name__, template_folder='templates')\n\ndef generate_token(email, tag, action):\n\n serializer = URLSafeTimedSerializer(app.config['SECRET_KEY'])\n\n return serializer.dumps([email, tag], salt=action)\n\ndef send_mail(recipients, subject, content):\n '''\n Given a set of recipients, send each an e-mail\n with subject \"subject\" and the specified \"content.\" \n '''\n\n from smtplib import SMTP\n from email.mime.text import MIMEText\n from email.header import Header\n\n email_msg = MIMEText(content, 'html', 'utf-8')\n email_msg['Subject'] = Header(subject, 'utf-8')\n email_msg['From'] = '[email protected]'\n\n if len(recipients) == 1:\n email_msg['To'] = recipients[0]\n recips = [email_msg['To']]\n\n else:\n # We're gonna BCC everyone\n recips = list(recipients)\n\n s = SMTP(app.config['SMTP_HOST'])\n s.set_debuglevel(False)\n\n try:\n s.sendmail(email_msg['From'], recips, email_msg.as_string())\n\n except:\n return False\n\n finally:\n s.close()\n return True\n\[email protected]('/subscribe/<slug>', methods=['GET', 'POST'])\ndef subscribe(slug):\n\n # If e-mail is turned off, you should not be here!\n if not 
app.config['EMAIL_ENABLED']: \n return render_template('confirm_failed.mak', reason='disabled')\n\n tag = Tag.query.filter(Tag.slug == slug).first_or_404()\n\n if request.method == 'POST':\n form = MailingSubscribeForm(request.form)\n\n if form.validate():\n\n # Generate confirmation token and URL\n confirm_token = generate_token(form.email.data, tag.tagname, 'subscribe')\n confirm_url = request.url_root.rstrip('/') + url_for('mailing.confirm_action', action='subscribe', token=confirm_token)\n\n # Send confirmation e-mail, display confirmation message\n confirm_message = '<p>You are receiving this e-mail because you signed up to receive email updates from kelsilindblad.com. To confirm your interest, please click the link below to complete your registration.<p><p><a href=\"{}\">{}</a></p>'.format(confirm_url, confirm_url)\n send_mail([form.email.data], 'kelsilindblad.com Mailing List Confirmation', confirm_message)\n\n return render_template('subscribe_confirm.mak', email=form.email.data)\n\n else:\n form = MailingSubscribeForm()\n\n return render_template('subscribe.mak', tag=tag, form=form)\n\[email protected]('/confirm/<action>/<token>')\ndef confirm_action(action, token):\n\n serializer = URLSafeTimedSerializer(app.config['SECRET_KEY'])\n\n try:\n email, tag = serializer.loads(token, salt=action, max_age=app.config['TOKEN_EXPIRATION'])\n\n except:\n return render_template('confirm_failed.mak', reason='other')\n\n # Add or remove db info\n\n from personal_site.app import db\n\n if action == 'subscribe':\n\n # Do not allow subscriptions if e-mail is currently disabled\n if not app.config['EMAIL_ENABLED']: \n return render_template('confirm_failed.mak', reason='disabled')\n\n tag_info = Tag.query.filter(Tag.tagname == tag).first_or_404()\n entry = Email(address=email, tag_id=tag_info.id)\n db.session.add(entry)\n db.session.commit()\n\n elif action == 'unsubscribe':\n\n tag_info = Tag.query.filter(Tag.tagname == tag).first_or_404()\n deletion = 
Email.query.filter(Email.address == email).filter(Email.tag_id == tag_info.id).first_or_404()\n db.session.delete(deletion)\n db.session.commit()\n\n return render_template('confirm_succeeded.mak', action=action, tag=tag)\n"
},
{
"alpha_fraction": 0.5800150036811829,
"alphanum_fraction": 0.5845229029655457,
"avg_line_length": 31.463415145874023,
"blob_id": "14430e788bf1dfa9b9307ed6c495cac877df397b",
"content_id": "ad24314dac6eab5b65576fbf2aff4fcfb987f3f6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 1331,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 41,
"path": "/personal_site/templates/base.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"utf-8\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <title><%block name=\"title\">Home</%block> - Before I Sleep</title>\n <link href=\"https://fonts.googleapis.com/css?family=Calligraffitti\" rel=\"stylesheet\" type=\"text/css\"> \n <link type=\"text/css\" rel=\"stylesheet\" href=\"${url_for('static', filename=\"global.css\")}\" />\n <%block name=\"resource\">\n % if resources:\n % for resource in resources:\n % if resource.endswith('.js'):\n <script type=\"text/javascript\" src=\"${url_for('static', filename=resource)}\"></script>\n % elif resource.endswith('.css'):\n <link type=\"text/css\" rel=\"stylesheet\" href=\"${url_for('static', filename=resource)}\" />\n % endif\n % endfor\n % endif\n </%block>\n</head>\n<body>\n <div id=\"main\">\n <header id=\"mainheader\">\n <div id=\"banner\"> </div>\n <h1><a href=\"${url_for('homepage')}\">Before I Sleep</a></h1>\n <%include file=\"nav.mak\" args=\"nav=nav\" />\n </header>\n <div id=\"content\">\n <%block name=\"maincontent\">\n <article id=\"maincontent\">\n <h1>${content_title}</h1>\n ${content}\n </article>\n </%block>\n </div>\n <footer id=\"mainfooter\">\n <%include file=\"footer.mak\" />\n </footer>\n </div>\n</body>\n</html>\n"
},
{
"alpha_fraction": 0.609160304069519,
"alphanum_fraction": 0.6793892979621887,
"avg_line_length": 22.39285659790039,
"blob_id": "bf0bf9ad599ac283982077dd2dd0fa6cf1e82225",
"content_id": "ccef89b7e128e7a79c4675755784547b3bdec0b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 655,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 28,
"path": "/migrations/versions/bc81e9ba0377_.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "\"\"\"empty message\n\nRevision ID: bc81e9ba0377\nRevises: 32996a5a74dc\nCreate Date: 2017-04-08 11:02:57.966222\n\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = 'bc81e9ba0377'\ndown_revision = '32996a5a74dc'\nbranch_labels = None\ndepends_on = None\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.add_column('entry', sa.Column('summary', sa.String(length=500), nullable=True))\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.drop_column('entry', 'summary')\n # ### end Alembic commands ###\n"
},
{
"alpha_fraction": 0.5897186994552612,
"alphanum_fraction": 0.595538318157196,
"avg_line_length": 28.457143783569336,
"blob_id": "52f0232df83d5c3e018c792905cc93115366e200",
"content_id": "051b85aec679174ee964fe68a3f4f6e7a25eebff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 1031,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 35,
"path": "/personal_site/entries/templates/main.mak",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "<%! from markdown import markdown %>\n\n<%inherit file=\"base.mak\" />\n\n<%block name=\"title\">${page_title}</%block>\n\n<%block name=\"maincontent\">\n\n% if not page_title == 'Home':\n <h1 class=\"center\">${page_title}</h1>\n% endif\n\n% for entry in object_list.items:\n <article class=\"entry-summary\">\n <h1><a href=\"${url_for('entries.detail', slug=entry.slug)}\">${entry.title}</a></h1>\n % if entry.img:\n <img src=\"${url_for('static', filename='/'.join(entry.img.split('/')[-3:]))}\" alt=\"\" class=\"entry-summary-img\" />\n % endif\n <p><i>Posted ${format_date(entry.created_timestamp)}</i></p>\n <p class=\"double-space\">${markdown(entry.summary)}</p>\n % if entry.body:\n <p><i><a href=\"${url_for('entries.detail', slug=entry.slug)}\">Read more...</a></i></p>\n % endif\n <nav id=\"tags-comments\">\n % if entry.tags:\n <%include file=\"taglist.mak\" args=\"entry=entry\" />\n % endif\n <p class=\"right-float\">Comments: 0</p>\n </nav>\n </article>\n% endfor\n\n<%include file=\"includes/pagination.mak\" />\n\n</%block>\n"
},
{
"alpha_fraction": 0.6051684021949768,
"alphanum_fraction": 0.6096165776252747,
"avg_line_length": 33.97037124633789,
"blob_id": "8dd632f4659479645594f1989fb7cb72fb434e4c",
"content_id": "a263c907b4d9479852ad370066995c4aa07b4d5b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4721,
"license_type": "no_license",
"max_line_length": 428,
"num_lines": 135,
"path": "/personal_site/admin.py",
"repo_name": "kelind/personal_site",
"src_encoding": "UTF-8",
"text": "from flask import g, url_for, redirect, request\nfrom flask_admin import Admin, AdminIndexView, expose\nfrom flask_admin.contrib.fileadmin import FileAdmin\nfrom flask_admin.contrib.sqla import ModelView\n\nfrom wtforms import SelectField, TextAreaField\n\nfrom personal_site.app import app, db\nfrom personal_site.models import Email, Entry, Tag\n\nfrom personal_site.mailing.blueprint import send_mail, generate_token\n\nimport os\nimport glob\n\nclass AdminAuthentication(object):\n def is_accessible(self):\n return g.user.is_authenticated\n\nclass BaseModelView(AdminAuthentication, ModelView):\n pass\n\nclass SlugModelView(BaseModelView):\n\n def on_model_change(self, form, model, is_created):\n model.generate_slug()\n return super(SlugModelView, self).on_model_change(form, model, is_created)\n\nclass EntryModelView(SlugModelView):\n\n _status_choices = [\n (Entry.STATUS_PUBLIC, 'Public'),\n (Entry.STATUS_DRAFT, 'Draft')\n ]\n\n '''\n We'll want to replace this later.\n\n Recursively get a list of filenames for use with the image selector.\n '''\n #prefix = app.config['STATIC_DIR'] + '/**/'\n _img_choices = [('', 'None')]\n _img_choices += [(filename, filename[filename.rfind('/') + 1:]) for x in os.walk(app.config['STATIC_DIR']) for filename in glob.glob(os.path.join(x[0], '*thumb*'))]\n\n column_choices = {\n 'status': _status_choices,\n 'img': _img_choices\n }\n\n column_list = [\n 'title', 'status', 'summary', 'tag_list', 'created_timestamp'\n ]\n\n column_searchable_list = ['title', 'body']\n\n column_filters = [\n 'status', 'created_timestamp'\n ]\n\n column_default_sort = ('id', True)\n\n ###############################################\n\n form_columns = ['title', 'summary', 'img', 'body', 'status', 'tags']\n\n form_args = {\n 'status': {'choices': _status_choices, 'coerce': int},\n 'img': {'choices': _img_choices},\n }\n\n form_widget_args = {\n 'body': {\n 'class': 'form-control span10',\n 'min-width': '100%',\n 'rows': 20\n },\n 'summary': {\n 
'class': 'form-control span10',\n 'rows': 10\n }\n }\n\n form_overrides = {\n 'status': SelectField,\n 'img': SelectField,\n 'summary': TextAreaField\n }\n\n def after_model_change(self, form, model, is_created):\n\n if is_created:\n\n mailinglist = []\n\n for tag in model.tags:\n mailinglist = mailinglist + Email.query.filter(tag.id == Email.tag_id).all()\n\n if mailinglist:\n mailinglist = set(mailinglist)\n\n # Now actually e-mail everyone\n for address in mailinglist:\n unsubscribe_token = generate_token(address.address, tag.tagname, 'unsubscribe')\n unsubscribe_url = app.config['DOMAIN'].rstrip('/') + url_for('mailing.confirm_action', action='unsubscribe', token=unsubscribe_token)\n if model.img:\n content = '<h1>{}</h1><p><img src=\"{}\" /></p><p>{}</p><p><a href=\"{}\">Read more...</a></p><p style=\"font-size:10px\">Not interested in getting more of these updates? <a href=\"{}\">Unsubscribe</a>'.format(model.title, app.config['PRODUCTION_DOMAIN'] + '/' + model.img[ model.img.find('static'): ], model.summary, app.config['PRODUCTION_DOMAIN'] + url_for('entries.detail', slug=model.slug), unsubscribe_url)\n else:\n content = '<h1>{}</h1><p>{}</p><p><a href=\"{}\">Read more...</a></p><p style=\"font-size:10px\">Not interested in getting more of these updates? 
<a href=\"{}\">Unsubscribe</a>'.format(model.title, model.summary, app.config['PRODUCTION_DOMAIN'] + url_for('entries.detail', slug=model.slug), unsubscribe_url)\n\n send_mail([address.address], '{} - New Post from kelsilindblad.com'.format(model.title), content)\n\n return super(EntryModelView, self).after_model_change(form, model, is_created)\n\nclass TagModelView(SlugModelView):\n\n form_columns = ['tagname']\n\nclass EmailModelView(BaseModelView):\n pass\n\nclass BlogFileAdmin(AdminAuthentication, FileAdmin):\n pass\n\nclass IndexView(AdminIndexView):\n @expose('/')\n def index(self):\n if not (g.user.is_authenticated):\n return redirect(url_for('login', next=request.path))\n return self.render('admin/index.html')\n\nadmin = Admin(app, 'Blog Admin', index_view=IndexView())\nadmin.add_view(EntryModelView(Entry, db.session))\nadmin.add_view(TagModelView(Tag, db.session))\nadmin.add_view(EmailModelView(Email, db.session))\nadmin.add_view(BlogFileAdmin(app.config['STATIC_DIR'], '/static/', name='Static Files'))\n"
}
] | 32 |
timhowgego/Aquius | https://github.com/timhowgego/Aquius | f09d198b1f9032bbff667037e6fe1e9d3c4a503d | 6f13840dad5c016931b98b3b2356a00ce7bb358f | ede9acefbea11f9049706d8f0f9565bb468f341f | refs/heads/master | 2023-07-22T13:55:44.512905 | 2023-07-09T17:40:16 | 2023-07-09T17:40:16 | 149,971,908 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5771265029907227,
"alphanum_fraction": 0.5806522965431213,
"avg_line_length": 35.30400085449219,
"blob_id": "692c298d283493a9df0ad4d97a08ca87106c5f17",
"content_id": "1f13755604d4157174aec3cc275457c8a5fc07a2",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4538,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 125,
"path": "/scripts/place_from_csv.py",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "\"\"\"\nPython script that adds place data to an existing aquius file.\nThis allows geojson functions in GTFS To Aquius to be skipped:,\nInstead use GIS software to relate nodes to places,\ncreate a CSV lookup as described below,\nand run this script after GTFS To Aquius.\n\nUsage: place_from_csv.py aquius.json place.csv\n\nWhere place.csv consists columns:\n- node_x (coordinatePrecision float)\n- node_y (coordinatePrecision float)\n- place_x (coordinatePrecision float)\n- place_y (coordinatePrecision float)\n- place_name (string)\n- place_population (integer)\n\nCoordinates precision must at least match GTFS To Aquius coordinatePrecision (5 by default).\nArgument --precision can be set, which reduces higher precisions,\nbut obviously cannot reliably increase lower precision, not cluster nodes.\nBest to run all processing with the same coordinatePrecision.\n\nAny prior places are retained, to avoid breaking prior references,\nhowever this could bloat the aquius file with excess place reference.\nBest to run this script on an aquius file with no existing place references.\n\"\"\"\nimport argparse\nimport csv\nimport json\n\n\ndef get_args():\n parser = argparse.ArgumentParser()\n parser.add_argument(\n 'aquius',\n help='Aquius .json filename with path',\n )\n parser.add_argument(\n 'place',\n help='''Place CSV filename with path. 
Consists columns:\nnode_x,node_y,place_x,place_y,place_name,place_population\nCoordinates must match GTFS To Aquius coordinatePrecision (5 by default).''',\n )\n parser.add_argument(\n '--precision',\n dest='precision',\n default=5,\n type=int,\n help='coordinatePrecision (as GTFS To Aquius)',\n )\n return parser.parse_args()\n\ndef load_json(filepath: str) -> dict:\n with open(filepath, mode='r') as file:\n return json.load(file)\n\ndef save_json(data: dict, filepath: str):\n with open(filepath, mode='w') as file:\n json.dump(data, file)\n\ndef load_csv(filepath: str) -> dict:\n data = []\n with open(filepath, mode='r', newline='') as file:\n reader = csv.DictReader(file)\n for row in reader:\n data.append(dict(row))\n return data\n\ndef main():\n arguments = get_args()\n aquius = load_json(filepath=getattr(arguments, 'aquius'))\n if 'node' not in aquius and not isinstance(aquius['node'], list):\n return\n if 'place' not in aquius or not isinstance(aquius['place'], list):\n aquius['place'] = []\n \n places = load_csv(filepath=getattr(arguments, 'place'))\n precision = getattr(arguments, 'precision')\n place_lookup: dict = {} # \"place_x:place_y\": index in aquius['place']\n \n for place in places:\n try:\n node_x = round(float(place.get(\"node_x\", 0)), precision)\n node_y = round(float(place.get(\"node_y\", 0)), precision)\n place_x = round(float(place.get(\"place_x\", 0)), precision)\n place_y = round(float(place.get(\"place_y\", 0)), precision)\n place_name = place.get(\"place_name\", \"\")\n place_population = int(place.get(\"place_population\", 0))\n except ValueError:\n continue\n place_index = f\"{place_x}:{place_y}\"\n\n for index, node in enumerate(aquius['node']):\n if len(node) < 3:\n continue\n base_node_x = round(node[0], precision)\n base_node_y = round(node[1], precision)\n if base_node_x == node_x and base_node_y == node_y:\n use_place_index = place_lookup.get(place_index, None)\n if use_place_index is None: # Create new place\n use_place_index = 
len(aquius['place'])\n place_lookup[place_index] = use_place_index\n aquius['place'].append([\n place_x,\n place_y,\n {\n \"p\": place_population,\n \"r\": [\n {\n \"n\": place_name\n }\n ]\n }\n ])\n if not isinstance(node[2], dict):\n aquius['node'][index][2] = {}\n if \"place\" in node[2]:\n aquius['node'][index][2][\"place\"] = use_place_index\n else:\n aquius['node'][index][2][\"p\"] = use_place_index\n\n save_json(data=aquius, filepath=getattr(arguments, 'aquius'))\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5491603016853333,
"alphanum_fraction": 0.5559753179550171,
"avg_line_length": 30.191993713378906,
"blob_id": "c9a409a15b9893cc18022248fae51ea9b89d8a33",
"content_id": "29b02e297bfb98d78b17bf753dfdb77b12cde874",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": true,
"language": "JavaScript",
"length_bytes": 110649,
"license_type": "permissive",
"max_line_length": 145,
"num_lines": 3547,
"path": "/dist/aquius.js",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "/*eslint-env browser*/\n/*global L*/\n/*global Promise*/\n\n\nvar aquius = aquius || {\n/**\n * @namespace Aquius (Here+Us)\n * @version 0\n * @copyright MIT License\n */\n\n\n\"init\": function init(configId, configOptions) {\n /**\n * Initialisation of Aquius with a user interface\n * @param {string} configId - Id of DOM element within which to build\n * @param {Object} configOptions - Optional, object with key:value configurations\n * @return {boolean} platform supported\n */\n \"use strict\";\n\n var HERE;\n // Cache of last here() result, to be referenced as configOptions._here\n var targetDOM = document.getElementById(configId);\n\n function getDefaultTranslation() {\n /**\n * Reference: Default locale translations\n * @return {object} BCP 47 locale key:{slug:translation}\n */\n\n return {\n\n \"en-US\": {\n \"lang\": \"English\",\n // This language in that language\n \"connectivity\": \"Connectivity\",\n \"connectivityRange\": \"Any - Frequent\",\n // Labels UI range bar, so maintain left-right order\n \"embed\": \"Embed\",\n // Other strings are to be translated directed\n \"export\": \"Export\",\n \"here\": \"Here\",\n \"language\": \"Language\",\n \"link\": \"Services\",\n \"network\": \"Network\",\n \"node\": \"Stops\",\n \"place\": \"People\",\n \"scale\": \"Scale\",\n \"service\": \"Period\"\n // Expandable, but new keys should not start _ unless specially coded\n },\n\n \"es-ES\": {\n \"lang\": \"Español\",\n \"connectivity\": \"Conectividad\",\n \"connectivityRange\": \"Cualquier - Frecuente\",\n \"embed\": \"Insertar\",\n \"export\": \"Exportar\",\n \"here\": \"Aquí\",\n \"language\": \"Idioma\",\n \"link\": \"Servicios\",\n \"network\": \"Red\",\n \"node\": \"Paradas\",\n \"place\": \"Personas\",\n \"scale\": \"Escala\",\n \"service\": \"Período\"\n },\n\n \"fr-FR\": {\n \"lang\": \"Français\",\n \"connectivity\": \"Connectivité\",\n \"connectivityRange\": \"Tout - Fréquent\",\n \"embed\": \"Insérer\",\n \"export\": \"Exporter\",\n 
\"here\": \"Ici\",\n \"language\": \"Langue\",\n \"link\": \"Services\",\n \"network\": \"Réseau\",\n \"node\": \"Arrêts\",\n \"place\": \"Personnes\",\n \"scale\": \"Échelle\",\n \"service\": \"Période\"\n }\n // Each custom BCP 47-style locale matches en-US keys\n\n };\n }\n\n function getDefaultOptions() {\n /**\n * Reference: Default configOptions\n * @return {object} option:value\n */\n\n return {\n \"base\": [\n {\n \"options\": {\n // May contain any option accepted by Leaflet's TileLayer\n \"attribution\":\n \"© <a href='https://www.openstreetmap.org/copyright'>OpenStreetMap</a> contributors\",\n \"maxZoom\": 18\n },\n \"type\": \"\",\n // For WMS maps, type: \"wms\"\n \"url\": \"https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png\" // Often overloaded: \"https://maps.wikimedia.org/osm-intl/{z}/{x}/{y}.png\"\n // Without https likely to trigger mixed content warnings\n }\n // Multiple bases supported but only first can be solid, while others must be transparent\n ],\n // Base mapping (WMS tiles supported with type: wms)\n \"c\": -1.43,\n // Here click Longitude\n \"connectivity\": 1.0,\n // Factor for connectivity calculation: population*(1-(1/(service*(2/(pow(10,p)/10))*connectivity)))\n \"dataObject\": {},\n // JSON network data object: Used in preference to dataset\n \"dataset\": \"\",\n // JSON file containing network data: Recommended full URL, not just filename\n \"hereColor\": \"#080\",\n // CSS Color for here layer circle strokes\n \"k\": 54.54,\n // Here click Latitude\n \"leaflet\": {},\n // Active Leaflet library object L: Used in preference to loading own library\n \"linkColor\": \"#f00\",\n // CSS Color for link (service) layer strokes\n \"linkScale\": 1.0,\n // Scale factor for link (service) layer strokes: ceil(log(1+(service*(1/(scale*4))))*scale*2)\n \"locale\": \"en-US\",\n // Default locale, BCP 47-style: User selection is t\n \"m\": 9,\n // Here click zoom\n \"map\": {},\n // Active Leaflet map object: Used in preference to own map\n 
\"minWidth\": 2,\n // Minimum pixel width of links, regardless of scaling. Assists click useability\n \"minZoom\": 0,\n // Minimum map zoom. Sets a soft cap on query complexity\n \"n\": 0,\n // User selected network filter\n \"network\": [],\n // Extension of network: Array of products, Object of locale keyed names\n \"networkAdd\": true,\n // Append this network extension to dataset defaults. Set false to replace defaults\n \"nodeColor\": \"#333\",\n // CSS Color for node (stop) layer circle strokes\n \"nodeScale\": 1.0,\n // Scale factor for node (stop) layer circles: ceil(log(1+(service*(1/(scale*2))))*scale)\n \"p\": 0,\n // User selected connectivity setting: population*(1-(1/(service*(2/(pow(10,p)/10))*connectivity)))\n \"panelOpacity\": 0.7,\n // CSS Opacity for background of the bottom-left summary panel\n \"panelScale\": 1.0,\n // Scale factor for text on the bottom-left summary panel\n \"placeColor\": \"#00f\",\n // CSS Color of place (population) layer circle fill\n \"placeOpacity\": 0.5,\n // CSS Opacity of place (population) layer circle fill: 0-1\n \"placeScale\": 1.0,\n // Scale factor for place (population) layer circles: ceil(sqrt(people*scale/666)\n \"v\": \"hlnp\",\n // Displayed map layers by first letter: here, link, node, place\n \"r\": 0,\n // User selected service filter\n \"s\": 5,\n // User selected global scale factor: 0 to 10\n \"uiConnectivity\": true,\n // Enables connectivity slider (if the dataset's place has contents)\n \"uiHash\": false,\n // Enables recording of the user state in the URL's hash\n \"uiLocale\": true,\n // Enables locale selector\n \"uiNetwork\": true,\n // Enables network selector\n \"uiPanel\": true,\n // Enables summary statistic panel\n \"uiScale\": true,\n // Enables scale slider\n \"uiService\": true,\n // Enables service selector\n \"uiShare\": true,\n // Enables embed and export\n \"uiStore\": true,\n // Enables browser session storage of user state\n \"t\": \"en-US\",\n // User selected locale: BCP 
47-style)\n \"translation\": {},\n // Custom translations: Format matching aquius.LOC\n \"x\": 10.35,\n // Map view Longitude\n \"y\": 50.03,\n // Map view Latitude\n \"z\": 4\n // Map view zoom\n };\n }\n \n function getLayerNames(configOptions, panel) {\n /**\n * Reference: Layer names\n * @param {object} configOptions\n * @param {boolean} panel - optional, layername slugs for the panel. Else for map\n * @return {object} array of layername slugs\n */\n\n function filterLayerNames(configOptions, layerNames) {\n\n var i;\n var filtered = [];\n\n for (i = 0; i < layerNames.length; i += 1) {\n if (layerNames[i] === \"here\" ||\n (layerNames[i] in configOptions.dataObject &&\n configOptions.dataObject[layerNames[i]].length > 0)\n ) {\n filtered.push(layerNames[i]);\n }\n }\n\n return filtered;\n }\n\n if (typeof panel === \"boolean\" &&\n panel === true\n ) {\n return filterLayerNames(configOptions, [\"link\", \"place\", \"node\"]);\n // Order left to right in panel\n } else {\n return filterLayerNames(configOptions, [\"place\", \"link\", \"node\", \"here\"]);\n // Order bottom-to-top of map layers. First character of each unique. 
Has translation\n }\n }\n\n function createElement(elementType, valueObject, styleObject) {\n /**\n * Helper: Creates DOM element\n * @param {string} elementType\n * @param {object} valueObject - optional DOM value:content pairs\n * @param {object} styleObject - optional DOM style value:content pairs\n * @return {object} DOM element\n */\n\n var values, styles, i;\n var element = document.createElement(elementType);\n\n if (typeof valueObject !== \"undefined\") {\n values = Object.keys(valueObject);\n for (i = 0; i < values.length; i += 1) {\n element[values[i]] = valueObject[values[i]];\n }\n }\n\n if (typeof styleObject !== \"undefined\") {\n styles = Object.keys(styleObject);\n for (i = 0; i < styles.length; i += 1) {\n element.style[styles[i]] = styleObject[styles[i]];\n }\n }\n\n return element;\n }\n\n function createRadioElement(optionObject, baseName, selectedValue, style) {\n /**\n * Helper: Creates DOM radio element\n * @param {object} optionObject - array of radio options, each array an object with keys:\n * value (returned on selection), label (not localised), id (localisation referenced)\n * @param {string} baseName - for containing DOM div id and input name\n * @param {string} selectedValue - optional, currently selected value, type matching\n * @param {object} style - styling of containing div\n * @return {object} DOM div containing radio\n */\n\n var div, input, label, span, i;\n if (typeof style === \"undefined\") {\n style = {};\n }\n\n div = createElement(\"div\", {\n \"id\": baseName\n }, style);\n\n for (i = 0; i < optionObject.length; i += 1) {\n label = createElement(\"label\");\n\n input = createElement(\"input\", {\n \"type\": \"radio\",\n \"name\": baseName\n });\n if (\"value\" in optionObject[i]) {\n input.value = optionObject[i].value;\n if (typeof selectedValue !== \"undefined\" &&\n optionObject[i].value === selectedValue\n ) {\n input.checked = \"checked\";\n }\n }\n label.appendChild(input);\n\n span = createElement(\"span\");\n if 
(\"id\" in optionObject[i]) {\n span.id = optionObject[i].id;\n }\n if (\"label\" in optionObject[i]) {\n span.textContent = optionObject[i].label;\n }\n label.appendChild(span);\n\n div.appendChild(label);\n }\n\n return div;\n }\n\n function createSelectElement(optionObject, selectedValue) {\n /**\n * Helper: Creates DOM select element\n * @param {object} optionObject - array of select options, each array an object with keys:\n * value (returned on selection), label (not localised), id (localisation referenced)\n * @param {string} selectedValue - optional, currently selected value, type matching\n * @return {object} DOM select\n */\n\n var option, i;\n var select = createElement(\"select\");\n\n for (i = 0; i < optionObject.length; i += 1) {\n option = createElement(\"option\");\n if (\"value\" in optionObject[i]) {\n option.value = optionObject[i].value;\n if (optionObject[i].value === selectedValue) {\n option.selected = \"selected\";\n }\n }\n if (\"label\" in optionObject[i]) {\n option.textContent = optionObject[i].label;\n }\n if (\"id\" in optionObject[i]) {\n option.id = optionObject[i].id;\n }\n select.appendChild(option);\n }\n\n return select;\n }\n\n function outputStatusNoMap(configOptions, error) {\n /**\n * Non-map screen output, used pre-map or on fatal error\n * @param {object} configOptions\n * @param {object} error - optional Javascript error\n * @return {boolean} false on error, else true\n */\n\n var symbol;\n var targetDOM = document.getElementById(configOptions._id);\n var wrapper = createElement(\"div\", {}, {\n \"margin\": \"0 auto\",\n \"max-width\": \"800px\",\n \"text-align\": \"center\"\n });\n\n while (targetDOM.firstChild) {\n targetDOM.removeChild(targetDOM.firstChild);\n }\n\n if (\"map\" in configOptions &&\n typeof configOptions.map.remove === \"function\"\n ) {\n configOptions.map.remove();\n targetDOM.className = \"\";\n }\n\n if (typeof error !== \"undefined\") {\n symbol = \"\\uD83D\\uDEAB\";\n // No entry\n } else {\n 
symbol = \"\\uD83C\\uDF0D\";\n // Globe in progress\n }\n wrapper.appendChild(createElement(\"div\", {\n \"textContent\": symbol\n }, {\n \"font-size\": \"1000%\"\n }));\n\n if (typeof error !== \"undefined\") {\n wrapper.appendChild(createElement(\"p\", {\n \"textContent\": error.toString()\n }));\n }\n\n targetDOM.appendChild(wrapper);\n\n if (typeof error !== \"undefined\") {\n return false;\n }\n return true;\n }\n\n function createEmbedBlob(configOptions) {\n /**\n * Contents of a file containing embed code\n * @param {object} configOptions\n * @return {object} file Blob\n */\n\n var lines = [];\n\n function getThisScriptURL() {\n\n var i;\n var scripts = document.getElementsByTagName(\"script\");\n \n for (i = 0; i < scripts.length; i += 1) {\n if (scripts[i].src.indexOf(\"aquius\") !== -1) {\n // Script name must includes aquius. Crude\n return scripts[i].src;\n }\n }\n return \"<!--Script URL-->\";\n // Ugly fallback\n }\n\n function getOwnOptions(configOptions) {\n\n var i;\n var defaultOptions = getDefaultOptions();\n var ownOptions = {};\n var optionsNames = Object.keys(configOptions);\n\n for (i = 0; i < optionsNames.length; i += 1) {\n if (optionsNames[i].charAt(0) !== \"_\" &&\n // Exclude options beginning _\n [\"base\", \"dataObject\", \"leaflet\", \"map\", \"network\", \"translation\", \"uiHash\"]\n .indexOf(optionsNames[i]) === -1 &&\n // Exclude options that may confuse an embed, currently also ANY Object\n configOptions[optionsNames[i]] !== defaultOptions[optionsNames[i]]\n ) {\n ownOptions[optionsNames[i]] = configOptions[optionsNames[i]];\n }\n }\n // Future: Contents of values which are Objects always considered non-default\n\n return ownOptions;\n }\n\n lines.push(createElement(\"div\", {\n \"id\": configOptions._id\n }, {\n \"height\": \"100%\"\n }).outerHTML);\n lines.push(createElement(\"script\", {\n \"src\": getThisScriptURL()\n }).outerHTML);\n lines.push(createElement(\"script\", {\n \"textContent\": 
\"window.addEventListener(\\\"load\\\",function(){aquius.init(\\\"\" +\n configOptions._id + \"\\\",\" + JSON.stringify(getOwnOptions(configOptions)) + \")});\"\n }).outerHTML);\n\n return new Blob([lines.join(\"\\n\")], {type: \"text/plain;charset=utf-8\"});\n }\n\n function loadWithPromise(configOptions, scriptURLs) {\n /**\n * Modern loader of supporting data files - truly asynchronous, hence faster\n * @param {object} configOptions\n * @param {object} scriptURLs - array of CSS or JS to load\n */\n\n var i;\n var promiseArray = [];\n\n function promiseScript(url) {\n // Return Promise of url\n return new Promise(function (resolve, reject) {\n var element;\n if (url.split(\".\").pop() === \"css\") {\n element = document.createElement(\"link\");\n element.rel = \"stylesheet\";\n element.href = url;\n } else {\n element = document.createElement(\"script\");\n element.src = url;\n }\n element.onload = resolve;\n element.onerror = reject;\n document.head.appendChild(element);\n });\n }\n\n for (i = 0; i < scriptURLs.length; i += 1) {\n promiseArray.push(promiseScript(scriptURLs[i]));\n }\n\n if (\"dataset\" in configOptions &&\n typeof configOptions.dataset === \"string\"\n ) {\n promiseArray.push(fetch(configOptions.dataset));\n }\n\n if (promiseArray.length === 0) {\n\n postLoadInitScripts(configOptions);\n // Skip loading\n\n } else {\n\n Promise.all(promiseArray)\n .then(function (responseObject) {\n if (\"dataset\" in configOptions &&\n typeof configOptions.dataset === \"string\"\n ) {\n return responseObject[promiseArray.length - 1].json();\n } else {\n return configOptions.dataObject;\n }\n })\n .then(function (responseJSON) {\n configOptions.dataObject = responseJSON;\n postLoadInitScripts(configOptions);\n })\n .catch(function (error) {\n // Script failure creates error event, while Fetch creates an error message\n return outputStatusNoMap(configOptions, error);\n });\n }\n }\n\n function loadWithClassic(configOptions, scriptURLs) {\n /**\n * Fallback loader 
of supporting data files - not so asynchronous\n * @param {object} configOptions\n * @param {object} scriptURLs - array of CSS or JS to load\n */\n\n var recallArguments, callbackLoadDataset, loadedScriptCount, recallScript, i;\n\n function loadScriptClassic(url, callback) {\n // Classic JS/CSS load, based on file extension\n\n var element;\n\n if (url.split(\".\").pop() === \"css\") {\n element = document.createElement(\"link\");\n element.rel = \"stylesheet\";\n element.href = url;\n } else {\n element = document.createElement(\"script\");\n element.type = \"text/javascript\";\n element.src = url;\n }\n element.onreadystatechange = callback;\n element.onload = callback;\n element.onerror = (function() {\n return outputStatusNoMap(configOptions, {\"message\": \"Could not load \" + url});\n });\n document.head.appendChild(element);\n }\n\n function fetchJson(url, callback) {\n // Classic JSON load\n\n var http = new XMLHttpRequest();\n\n http.onreadystatechange = (function() {\n if (http.readyState === 4) {\n if (http.status === 200) {\n var response = JSON.parse(http.responseText);\n if (callback) {\n callback(response);\n }\n } else {\n return outputStatusNoMap(configOptions, {\"message\": http.status + \": Could not load \"+ url});\n }\n }\n });\n http.open(\"GET\", url);\n http.send();\n }\n\n callbackLoadDataset = function () {\n if (\"dataset\" in configOptions &&\n typeof configOptions.dataset === \"string\"\n ) { \n fetchJson(configOptions.dataset, function(responseJSON) {\n configOptions.dataObject = responseJSON;\n return postLoadInitScripts(configOptions);\n });\n } else {\n postLoadInitScripts(configOptions);\n }\n };\n\n if (scriptURLs.length === 0) {\n\n callbackLoadDataset();\n // Skip Leaflet\n\n } else {\n\n loadedScriptCount = 0;\n recallScript = function recallScript() {\n loadedScriptCount += 1;\n recallArguments = arguments;\n if (loadedScriptCount >= scriptURLs.length) {\n callbackLoadDataset.call(this, recallArguments);\n }\n };\n\n for (i = 
0; i < scriptURLs.length; i += 1) {\n if (i >= scriptURLs.length) {\n break;\n }\n loadScriptClassic(scriptURLs[i], recallScript);\n }\n\n }\n }\n\n function getLocalised(configOptions, localiseKey) {\n /**\n * Helper: Get localised text by translation key\n * @param {object} configOptions\n * @param {string} localiseKey - translation key\n * @return {string} translated text or empty on failure\n */\n\n if (configOptions.t in configOptions.translation &&\n localiseKey in configOptions.translation[configOptions.t]\n ) {\n // Available in user locale\n return configOptions.translation[configOptions.t][localiseKey];\n } else {\n if (configOptions.locale in configOptions.translation &&\n localiseKey in configOptions.translation[configOptions.locale]\n ) {\n // Available in default locale\n return configOptions.translation[configOptions.locale][localiseKey];\n }\n }\n return \"\";\n }\n\n function applyLocalisation(configOptions, rerender) {\n /**\n * Apply translation to localised DOM items\n * @param {object} configOptions - including _toLocale (dom:translation key)\n * @param {boolean} rerender - also update localisations in map layers\n * @return {object} configOptions\n */\n\n var targetDOM, localiseText, i;\n var localiseNames = Object.keys(configOptions._toLocale);\n\n for (i = 0; i < localiseNames.length; i += 1) {\n targetDOM = document.getElementById(localiseNames[i]);\n if (targetDOM) {\n localiseText = getLocalised(configOptions, configOptions._toLocale[localiseNames[i]]);\n if (localiseText !== \"\") {\n targetDOM.textContent = localiseText;\n }\n }\n }\n\n if (typeof rerender !== \"undefined\" &&\n rerender === true\n ) {\n queryHere(configOptions, true);\n }\n return configOptions;\n }\n\n function userStore(configOptions, setObject) {\n /**\n * Helper: Retrieve and set URL hash (takes precedence) and Session storage\n * Caution: Hashable keys must be single alpha characters\n * @param {object} configOptions\n * @param {object} setObject - optional 
key:value pairs to set\n * @return {object} store contents\n */\n\n var storeObject = {};\n\n function getUserHash() {\n\n var i;\n var content = {};\n var hash = decodeURIComponent(document.location.hash.trim()).slice(1).split(\"/\");\n\n for (i = 0; i < hash.length; i += 1) {\n if (hash[i].length >= 2 &&\n (/^[a-z\\-()]$/.test(hash[i][0]))\n ) {\n content[hash[i][0]] = hash[i].slice(1);\n }\n }\n\n return content;\n }\n\n function setUserHash(configOptions, setObject) {\n\n var hashNames, urlHash, i;\n var content = getUserHash();\n var setNames = Object.keys(setObject);\n\n for (i = 0; i < setNames.length; i += 1) {\n if (setNames[i].length === 1 &&\n (/^[a-z\\-()]$/.test(setNames[i]))\n ) {\n // Hash key names are a single alpha character\n content[setNames[i]] = setObject[setNames[i]];\n }\n }\n\n hashNames = Object.keys(content);\n urlHash = [];\n\n for (i = 0; i < hashNames.length; i += 1) {\n urlHash.push(hashNames[i] + encodeURIComponent(content[hashNames[i]]));\n }\n\n location.hash = \"#\" + urlHash.join(\"/\");\n return content;\n }\n\n function getUserStore(configOptions) {\n\n try {\n // Support of sessionStorage does not automatically make it useable\n if (sessionStorage.getItem(configOptions._id) !== null) {\n return JSON.parse(sessionStorage.getItem(configOptions._id));\n } else {\n return {};\n }\n // Keyed by Id to allow multiple instances on the same HTML page\n } catch(e) {\n return {};\n }\n }\n\n function setUserStore(configOptions, setObject) {\n\n var i;\n var theStore = getUserStore(configOptions);\n var setNames = Object.keys(setObject);\n\n for (i = 0; i < setNames.length; i += 1) {\n theStore[setNames[i]] = setObject[setNames[i]];\n }\n\n try {\n sessionStorage.setItem(configOptions._id, JSON.stringify(theStore));\n } catch(e) {\n // Pass\n }\n\n return theStore;\n }\n\n if (sessionStorage &&\n (\"uiStore\" in configOptions === false ||\n configOptions.uiStore === true)\n ) {\n // Store is opted out\n if (typeof setObject === 
\"object\") {\n storeObject = setUserStore(configOptions, setObject);\n } else {\n storeObject = getUserStore(configOptions);\n }\n }\n\n if (\"uiHash\" in configOptions === true &&\n configOptions.uiHash === true\n ) {\n // Hash is opted in. Hash takes precedence over store for return object\n if (typeof setObject === \"object\") {\n storeObject = setUserHash(configOptions, setObject);\n } else {\n storeObject = getUserHash();\n }\n }\n\n return storeObject;\n }\n\n\n function parseConfigOptions(configOptions) {\n /**\n * Reconcile configOptions with storage and defaults\n * Option precedence = store, configuration, dataset, default\n * @param {object} configOptions\n * @return {object} configOptions\n */\n\n var specialKeys, storeObject, storeNames, translationObjects, translationLocales,\n translationNames, i, j, k;\n var defaultOptions = getDefaultOptions();\n var defaultLocale = \"en-US\";\n var defaultNames = Object.keys(defaultOptions);\n var localeNames = [];\n\n function specialTranslation(configOptions, key) {\n // Returns dummy translation block containing _keyI, where I is index position\n\n var specialTranslationNames, i, j;\n var specialTranslationObject = {};\n\n for (i = 0; i < configOptions.dataObject[key].length; i += 1) {\n if (configOptions.dataObject[key][i].length > 1 &&\n typeof configOptions.dataObject[key][i][1] === \"object\"\n ) {\n specialTranslationNames = Object.keys(configOptions.dataObject[key][i][1]);\n for (j = 0; j < specialTranslationNames.length; j += 1) {\n if (specialTranslationNames[j] in specialTranslationObject === false) {\n specialTranslationObject[specialTranslationNames[j]] = {};\n }\n specialTranslationObject[specialTranslationNames[j]][\"_\" + key + i] =\n configOptions.dataObject[key][i][1][specialTranslationNames[j]];\n }\n }\n }\n\n return specialTranslationObject;\n }\n\n function attributionTranslation(configOptions) {\n // Returns dummy translation block containing _datasetName/Attribution\n\n var 
theseTranslationNames, i;\n var attributionTranslationObject = {};\n\n if (\"meta\" in configOptions.dataObject &&\n \"name\" in configOptions.dataObject.meta &&\n typeof configOptions.dataObject.meta.name === \"object\"\n ) {\n theseTranslationNames = Object.keys(configOptions.dataObject.meta.name);\n for (i = 0; i < theseTranslationNames.length; i += 1) {\n\n if (theseTranslationNames[i] in attributionTranslationObject === false) {\n attributionTranslationObject[theseTranslationNames[i]] = {};\n }\n\n attributionTranslationObject[theseTranslationNames[i]]._datasetName =\n configOptions.dataObject.meta.name[theseTranslationNames[i]].toString();\n\n }\n }\n\n if (\"meta\" in configOptions.dataObject &&\n \"attribution\" in configOptions.dataObject.meta &&\n typeof configOptions.dataObject.meta.attribution === \"object\"\n ) {\n theseTranslationNames = Object.keys(configOptions.dataObject.meta.attribution);\n for (i = 0; i < theseTranslationNames.length; i += 1) {\n\n if (theseTranslationNames[i] in attributionTranslationObject === false) {\n attributionTranslationObject[theseTranslationNames[i]] = {};\n }\n\n attributionTranslationObject[theseTranslationNames[i]]._datasetAttribution =\n configOptions.dataObject.meta.attribution[theseTranslationNames[i]].toString();\n\n }\n }\n\n return attributionTranslationObject;\n }\n\n function rangeIndexedOption(configOptions, option, dataObjectKey) {\n // Force eg n to be a valid network index\n\n configOptions[option] = parseInt(configOptions[option], 10);\n if (configOptions[option] < 0) {\n configOptions[option] = 0;\n } else {\n if (dataObjectKey in configOptions.dataObject &&\n Array.isArray(configOptions.dataObject[dataObjectKey]) &&\n configOptions[option] >= configOptions.dataObject[dataObjectKey].length\n ) {\n configOptions[option] = configOptions.dataObject[dataObjectKey].length - 1;\n }\n }\n\n return configOptions;\n }\n\n if (typeof configOptions.dataObject !== \"object\") {\n configOptions.dataObject = {};\n 
}\n\n for (i = 0; i < defaultNames.length; i += 1) {\n\n if (defaultOptions[defaultNames[i]] === defaultLocale) {\n localeNames.push(defaultNames[i]);\n }\n\n if (defaultNames[i] in configOptions === false ||\n typeof configOptions[defaultNames[i]] !== typeof defaultOptions[defaultNames[i]]\n ) {\n\n if (\"option\" in configOptions.dataObject &&\n defaultNames[i] in configOptions.dataObject.option &&\n typeof configOptions.dataObject.option[defaultNames[i]] ===\n typeof defaultOptions[defaultNames[i]]\n ) {\n configOptions[defaultNames[i]] = configOptions.dataObject.option[defaultNames[i]];\n } else {\n configOptions[defaultNames[i]] = defaultOptions[defaultNames[i]];\n }\n\n }\n\n }\n\n storeObject = userStore(configOptions);\n storeNames = Object.keys(storeObject);\n for (i = 0; i < storeNames.length; i += 1) {\n if (defaultNames.indexOf(storeNames[i]) !== -1) {\n\n if (typeof configOptions[storeNames[i]] === \"number\") {\n if (Number.isNaN(parseFloat(storeObject[storeNames[i]])) === false) {\n configOptions[storeNames[i]] = parseFloat(storeObject[storeNames[i]]);\n }\n // Else bad data\n } else {\n if (typeof configOptions[storeNames[i]] === \"string\") {\n configOptions[storeNames[i]] = storeObject[storeNames[i]].toString();\n }\n // Else unsupported: Objects cannot be held in the store, since cannot easily be hashed\n }\n\n }\n }\n\n // Extend network\n if (configOptions.network.length > 0) {\n if (\"network\" in configOptions.dataObject &&\n Array.isArray(configOptions.dataObject.network)\n ) {\n if (\"networkAdd\" in configOptions &&\n configOptions.networkAdd === false\n ) {\n configOptions.dataObject.network = configOptions.network;\n } else {\n // Append bespoke network filters\n configOptions.dataObject.network =\n configOptions.dataObject.network.concat(configOptions.network);\n }\n } else {\n configOptions.dataObject.network = configOptions.network;\n }\n }\n\n if (configOptions.z < configOptions.minZoom) {\n configOptions.z = configOptions.minZoom;\n 
    }
    if (configOptions.m < configOptions.minZoom) {
      configOptions.m = configOptions.minZoom;
    }

    // Translation precedence = configOptions, dataObject, default
    translationObjects = [];
    if ("translation" in configOptions.dataObject) {
      translationObjects.push(configOptions.dataObject.translation);
    }

    specialKeys = [
      ["network", "n"],
      ["service", "r"]
    ];
    for (i = 0; i < specialKeys.length; i += 1) {
      configOptions = rangeIndexedOption(configOptions, specialKeys[i][1], specialKeys[i][0]);
      if (specialKeys[i][0] in configOptions.dataObject &&
        Array.isArray(configOptions.dataObject[specialKeys[i][0]])
      ) {
        translationObjects.push(specialTranslation(configOptions, specialKeys[i][0]));
      }
    }

    if ("meta" in configOptions.dataObject &&
      ("name" in configOptions.dataObject.meta ||
      "attribution" in configOptions.dataObject.meta)
    ) {
      translationObjects.push(attributionTranslation(configOptions));
    }
    translationObjects.push(getDefaultTranslation());

    for (i = 0; i < translationObjects.length; i += 1) {
      translationLocales = Object.keys(translationObjects[i]);
      for (j = 0; j < translationLocales.length; j += 1) {
        if (translationLocales[j] in configOptions.translation === false) {
          configOptions.translation[translationLocales[j]] = {};
        }
        translationNames = Object.keys(translationObjects[i][translationLocales[j]]);
        for (k = 0; k < translationNames.length; k += 1) {
          if (translationNames[k] in configOptions.translation[translationLocales[j]] === false) {
            configOptions.translation[translationLocales[j]][translationNames[k]] =
              translationObjects[i][translationLocales[j]][translationNames[k]];
          }
        }
      }
    }

    // Force locales to have at least some support. Entirely lowercase hashes get dropped here
    for (i = 0; i < localeNames.length; i += 1) {
      if (configOptions[localeNames[i]] in configOptions.translation === false) {
        configOptions[localeNames[i]] = defaultOptions[localeNames[i]];
      }
    }

    return configOptions;
  }


  function buildMapBase(configOptions) {
    /**
     * Builds map and related events, all non-localised, with no bespoke layers or controls
     * @param {object} configOptions
     * @return {object} configOptions
     */

    var i;

    if ("setView" in configOptions.map === false ||
      "on" in configOptions.map === false ||
      "getCenter" in configOptions.map === false ||
      "getZoom" in configOptions.map === false ||
      "attributionControl" in configOptions.map === false
    ) {
      // Create map
      while (document.getElementById(configOptions._id).firstChild) {
        document.getElementById(configOptions._id).removeChild(document.getElementById(configOptions._id).firstChild);
      }
      configOptions.map = configOptions.leaflet.map(configOptions._id, {
        minZoom: configOptions.minZoom,
        preferCanvas: true,
        attributionControl: false
      });
      // Canvas renderer is faster. IE8 not supported anyway
      configOptions.leaflet.control.attribution({position: "bottomleft"}).addTo(configOptions.map);
    }

    configOptions.map.attributionControl.addAttribution(
      "<a href=\"https://timhowgego.github.io/Aquius/\">Aquius</a>");

    configOptions.map.setView([configOptions.y, configOptions.x], configOptions.z);

    configOptions.map.on("moveend", function () {

      var accuracy;
      var center = configOptions.map.getCenter();
      configOptions.z = configOptions.map.getZoom();
      accuracy = Math.ceil(configOptions.z / 3);
      configOptions.x = parseFloat(center.lng.toFixed(accuracy));
      configOptions.y = parseFloat(center.lat.toFixed(accuracy));

      userStore(configOptions, {
        "x": configOptions.x,
        "y": configOptions.y,
        "z": configOptions.z
      });

    });

    configOptions.map.on("click", function (evt) {

      var accuracy = Math.ceil(configOptions.z / 3);
      var center = evt.latlng;
      configOptions.c = parseFloat(center.lng.toFixed(accuracy));
      configOptions.k = parseFloat(center.lat.toFixed(accuracy));
      configOptions.m = configOptions.z;

      userStore(configOptions, {
        "c": configOptions.c,
        "k": configOptions.k,
        "m": configOptions.m
      });

      queryHere(configOptions);
    });

    for (i = 0; i < configOptions.base.length; i += 1) {
      if ("url" in configOptions.base[i]) {

        if ("options" in configOptions.base[i] === false) {
          configOptions.base[i].options = {};
        }

        if ("type" in configOptions.base[i] &&
          configOptions.base[i].type === "wms"
        ) {
          configOptions.leaflet.tileLayer.wms(
            configOptions.base[i].url, configOptions.base[i].options
          ).addTo(configOptions.map);
        } else {
          configOptions.leaflet.tileLayer(
            configOptions.base[i].url, configOptions.base[i].options
          ).addTo(configOptions.map);
        }

      }
    }

    return configOptions;
  }

  function buildMapUI(configOptions) {
    /**
     * Builds map layers and controls. All localisable
     * @param {object} configOptions
     * @return {object} configOptions
     */

    var controlForm;

    function layerEvents(configOptions, layerName) {
      // Adds events to _layer layers

      configOptions._layer[layerName].on("add", function () {
        if (configOptions.v.indexOf(layerName.charAt(0)) === -1) {
          configOptions.v += layerName.charAt(0);
          userStore(configOptions, {
            "v": configOptions.v
          });
        }
      });

      configOptions._layer[layerName].on("remove", function () {
        configOptions.v = configOptions.v.replace(layerName.charAt(0), "");
        userStore(configOptions, {
          "v": configOptions.v
        });
      });

      return configOptions;
    }

    function getControlForm(configOptions) {
      // Returns DOM of form in control, to allow other elements to be hacked in

      var nodes, i;

      if ("_control" in configOptions) {
        nodes = configOptions._control.getContainer().childNodes;

        for (i = 0; i < nodes.length; i += 1) {
          if (nodes[i].tagName === "SECTION" ||
            nodes[i].tagName === "FORM"
          ) {
            // Form becomes Section in Leaflet 1.4
            return nodes[i];
          }
        }
      }

      return false;
    }

    function buildRadioOnControl(configOptions, controlForm, dataObjectKey, option, uiOption) {

      var element, selection, i;

      if (dataObjectKey in configOptions.dataObject &&
        Array.isArray(configOptions.dataObject[dataObjectKey]) &&
        configOptions.dataObject[dataObjectKey].length > 1 &&
        configOptions[uiOption] === true
      ) {

        selection = [];
        for (i = 0; i < configOptions.dataObject[dataObjectKey].length; i += 1) {
          selection.push({
            "value": i,
            "id": configOptions._id + dataObjectKey + i
          });
          configOptions._toLocale[configOptions._id + dataObjectKey + i] = "_" + dataObjectKey + i;
        }

        element = createRadioElement(selection, configOptions._id + dataObjectKey, configOptions[option], {
          "border-top": "1px solid #ddd",
          "margin-top": "4px",
          "padding-top": "4px"
        });
        element.addEventListener("change", function () {
          configOptions[option] = parseInt(document.querySelector("input[name='" +
            configOptions._id + dataObjectKey + "']:checked").value, 10);
          var opt = {};
          opt[option] = configOptions[option];
          userStore(configOptions, opt);
          queryHere(configOptions);
        }, false);
        controlForm.appendChild(element);

      }
      return configOptions;
    }

    function buildAttributions(configOptions) {

      var span, i;
      var keyName = ["_datasetName", "_datasetAttribution"];

      for (i = 0; i < keyName.length; i += 1) {
        configOptions._toLocale[configOptions._id + keyName[i]] = keyName[i];
        keyName[i] = configOptions._id + keyName[i];
      }

      span = createElement("span");

      if ("meta" in configOptions.dataObject &&
        "url" in configOptions.dataObject.meta &&
        typeof configOptions.dataObject.meta.url === "string"
      ) {
        span.appendChild(createElement("a", {
          "href": configOptions.dataObject.meta.url,
          "id": keyName[0]
        }));
      } else {
        span.appendChild(createElement("span", {
          "id": keyName[0]
        }));
      }

      span.appendChild(document.createTextNode(" "));
      span.appendChild(createElement("span", {
        "id": keyName[1]
      }));

      configOptions.map.attributionControl.addAttribution(span.outerHTML);

      return configOptions;
    }

    function buildUILocale(configOptions) {

      var control, div, id, label, select, selectOptions, i;
      var keyName = Object.keys(configOptions.translation).sort();

      if (configOptions.uiLocale === true &&
        keyName.length > 1
      ) {

        control = configOptions.leaflet.control({position: "topright"});
        control.onAdd = function () {

          div = createElement("div", {
            "className": "aquius-locale",
            "id": configOptions._id + "locale"
          }, {
            "background-color": "#fff",
            "border-bottom": "2px solid rgba(0,0,0,0.3)",
            "border-left": "2px solid rgba(0,0,0,0.3)",
            "border-radius": "0 0 0 5px",
            "margin": 0,
            "padding": "0 0 1px 1px"
          });
          configOptions.leaflet.DomEvent.disableClickPropagation(div);

          id = configOptions._id + "langname";
          div.appendChild(createElement("label", {
            "id": id,
            "for": configOptions._id + "lang"
          }, {
            "display": "none"
          }));
          // Label improves web accessibility, but is self-evident to most users
          configOptions._toLocale[id] = "language";

          selectOptions = [];
          for (i = 0; i < keyName.length; i += 1) {
            if ("lang" in configOptions.translation[keyName[i]]) {
              label = configOptions.translation[keyName[i]].lang;
            } else {
              label = keyName[i];
            }
            selectOptions.push({
              "value": keyName[i],
              "label": label
            });
          }
          select = createSelectElement(selectOptions, configOptions.t);

          select.id = configOptions._id + "lang";
          select.style["background-color"] = "#fff";
          select.style.border = "none";
          select.style.color = "#000";
          select.style.font = "12px/1.5 \"Helvetica Neue\", Arial, Helvetica, sans-serif";
          // Leaflet styles not inherited by Select
          select.addEventListener("change", function () {
            configOptions.t = document.getElementById(configOptions._id + "lang").value;
            userStore(configOptions, {
              "t": configOptions.t
            });
            applyLocalisation(configOptions, true);
          }, false);

          div.appendChild(select);

          return div;
        };
        control.addTo(configOptions.map);

      }

      return configOptions;
    }

    function buildUIPanel(configOptions) {

      var control, div, frame, id, layerSummaryNames, i;

      if (configOptions.uiPanel === true) {
        control = configOptions.leaflet.control({position: "bottomleft"});
        control.onAdd = function () {

          div = createElement("div", {
            "className": "aquius-panel",
            "id": configOptions._id + "panel"
          }, {
            "background-color": "rgba(255,255,255," + configOptions.panelOpacity + ")",
            "font-weight": "bold",
            "padding": "0 3px",
            "border-radius": "5px",
            "margin-right": "10px"
          });
          configOptions.leaflet.DomEvent.disableClickPropagation(div);

          layerSummaryNames = getLayerNames(configOptions, true);
          for (i = 0; i < layerSummaryNames.length; i += 1) {

            frame = createElement("span", {}, {
              "color": configOptions[layerSummaryNames[i] + "Color"],
              "margin": "0 0.1em"
            });
            frame.appendChild(createElement("span", {
              "id": configOptions._id + layerSummaryNames[i] + "value",
              "textContent": "0"
            }, {
              "font-size": Math.round(150 * configOptions.panelScale).toString() + "%",
              "margin": "0 0.1em"
            }));

            id = configOptions._id + layerSummaryNames[i] + "label";
            frame.appendChild(createElement("span", {
              "id": id
            }, {
              "font-size": Math.round(100 * configOptions.panelScale).toString() + "%",
              "margin": "0 0.1em",
              "vertical-align": "10%"
            }));
            configOptions._toLocale[id] = layerSummaryNames[i];

            div.appendChild(frame);
            div.appendChild(document.createTextNode(" "));

          }

          return div;
        };
        control.addTo(configOptions.map);
      }

      return configOptions;
    }

    function buildUILayerControl(configOptions) {

      var id, i;
      var layerNames = getLayerNames(configOptions).reverse();

      configOptions._control = configOptions.leaflet.control.layers();
      for (i = 0; i < layerNames.length; i += 1) {

        configOptions._layer[layerNames[i]] = configOptions.leaflet.layerGroup();
        configOptions = layerEvents(configOptions, layerNames[i]);

        if (configOptions.v.indexOf(layerNames[i].charAt(0)) !== -1) {
          configOptions._layer[layerNames[i]].addTo(configOptions.map);
        }

        id = configOptions._id + layerNames[i] + "name";
        configOptions._toLocale[id] = layerNames[i];
        configOptions._control.addOverlay(configOptions._layer[layerNames[i]], createElement("span", {
          "id": id
        }, {
          "color": configOptions[layerNames[i] + "Color"]
        }).outerHTML);

      }
      configOptions._control.addTo(configOptions.map);

      return configOptions;
    }

    function buildUIScale(configOptions, controlForm, key, option, uiOption, min, max, rerender, legend) {

      var frame, id, input, label;

      if (configOptions[uiOption] === true) {
        label = createElement("label");

        id = configOptions._id + key + "name";
        label.appendChild(createElement("div", {
          "id": id
        }, {
          "border-top": "1px solid #ddd",
          "margin-top": "4px",
          "padding-top": "4px",
          "text-align": "center"
        }));
        configOptions._toLocale[id] = key;

        frame = createElement("div", {}, {
          "text-align": "center"
        });
        input = createElement("input", {
          "id": configOptions._id + key,
          "type": "range",
          // Range not supported by IE9, but should default to text
          "min": min,
          "max": max,
          "value": configOptions[option]
        });
        input.addEventListener("change", function () {
          var opt = {};
          configOptions[option] = parseInt(document.getElementById(configOptions._id + key).value, 10);
          opt[option] = configOptions[option];
          userStore(configOptions, opt);
          queryHere(configOptions, rerender);
        }, false);
        frame.appendChild(input);

        label.appendChild(frame);

        if (typeof legend !== "undefined") {
          id = configOptions._id + key + "legend";
          label.appendChild(createElement("div", {
            "id": id
          }, {
            "color": "#888",
            "text-align": "center"
          }));
          configOptions._toLocale[id] = legend;
        }

        controlForm.appendChild(label);
      }

      return configOptions;
    }

    function buildUIShare(configOptions, controlForm) {

      var aEmbed, aExport, dataObject, div, id, layerNames, options, i;

      if (configOptions.uiShare === true &&
        typeof Blob !== "undefined"
      ) {
        // IE<10 has no Blob support (typeof guards against a ReferenceError)
        div = createElement("div", {}, {
          "border-top": "1px solid #ddd",
          "margin-top": "4px",
          "padding-top": "4px",
          "text-align": "center"
        });

        if (configOptions.dataset !== "") {
          // Data supplied direct to dataObject cannot sensibly be embedded
          id = configOptions._id + "embed";
          aEmbed = createElement("a", {
            "id": id,
            "download": configOptions._id + "-embed.txt",
            "role": "button"
            // Actual buttons would look like form submit thus too important
          }, {
            "cursor": "pointer"
          });
          aEmbed.addEventListener("click", function () {
            aEmbed.href = window.URL.createObjectURL(createEmbedBlob(configOptions));
            // Hack imposing href on own caller to trigger download. Requires embedElement persist
          }, false);
          configOptions._toLocale[id] = "embed";
          div.appendChild(aEmbed);
          div.appendChild(document.createTextNode(" | "));
        }

        layerNames = getLayerNames(configOptions);
        id = configOptions._id + "export";
        aExport = createElement("a", {
          "id": id,
          "download": configOptions._id + "-export.json",
          "role": "button"
        }, {
          "cursor": "pointer"
        });
        aExport.addEventListener("click", function () {

          options = {
            "geoJSON": [],
            "network": configOptions.n,
            "range": 5e6 / Math.pow(2, configOptions.m),
            "service": configOptions.r,
            "x": configOptions.c,
            "y": configOptions.k
          };

          if (configOptions.p > 0) {
            options.connectivity = (2 / (Math.pow(10, configOptions.p) / 10)) * configOptions.connectivity;
          }

          for (i = 0; i < layerNames.length; i += 1) {
            if (configOptions.v.indexOf(layerNames[i].charAt(0)) !== -1) {
              options.geoJSON.push(layerNames[i]);
            }
          }

          if (typeof configOptions._here === "object" &&
            "dataObject" in configOptions._here
          ) {
            dataObject = configOptions._here.dataObject;
            options.sanitize = false;
          } else {
            dataObject = configOptions.dataObject;
            options.sanitize = true;
          }

          aExport.href = window.URL.createObjectURL(
            new Blob([JSON.stringify(configOptions._call(dataObject, options))],
              {type: "application/json;charset=utf-8"})
          );
          // Future: Export should execute with callback

        }, false);
        configOptions._toLocale[id] = "export";
        div.appendChild(aExport);

        controlForm.appendChild(div);
      }

      return configOptions;
    }

    function exitBuildUI(configOptions) {

      if ("_control" in configOptions) {
        delete configOptions.control;
      }
      return applyLocalisation(configOptions);
    }

    configOptions._toLocale = {};
    // DOM Id: translation key
    configOptions._layer = {};
    // Layer name: Leaflet layerGroup

    configOptions = buildAttributions(configOptions);
    configOptions = buildUILocale(configOptions);
    configOptions = buildUIPanel(configOptions);
    configOptions = buildUILayerControl(configOptions);

    controlForm = getControlForm(configOptions);
    if (!controlForm) {
      // Error, escape with partial UI
      return exitBuildUI(configOptions);
    }

    configOptions = buildRadioOnControl(configOptions, controlForm, "network", "n", "uiNetwork");
    configOptions = buildRadioOnControl(configOptions, controlForm, "service", "r", "uiService");
    if ("place" in configOptions.dataObject &&
      configOptions.dataObject.place.length > 0) {
      buildUIScale(configOptions, controlForm, "connectivity", "p",
        "uiConnectivity", 0, 3, false, "connectivityRange");
    }
    configOptions = buildUIScale(configOptions, controlForm, "scale", "s", "uiScale", 0, 10, true);
    configOptions = buildUIShare(configOptions, controlForm);

    return exitBuildUI(configOptions);
  }

  function queryHere(configOptions, rerender) {
    /**
     * Trigger new here() query (executed via callback)
     * @param {object} configOptions
     * @param {boolean} rerender - optional, if true redraw using here, with no new query
     */

    var dataObject, options;

    if (typeof rerender === "undefined" ||
      rerender !== true ||
      typeof configOptions._here !== "object"
    ) {

      options = {
        "_configOptions": configOptions,
        "callback": postHereQuery,
        "network": configOptions.n,
        "range": 5e6 / Math.pow(2, configOptions.m),
        // Range factor duplicated in export
        "service": configOptions.r,
        "x": configOptions.c,
        "y": configOptions.k
      };

      if (configOptions.p > 0) {
        options.connectivity = (2 / (Math.pow(10, configOptions.p) / 10)) * configOptions.connectivity;
      }

      if (typeof configOptions._here === "object" &&
        "dataObject" in configOptions._here
      ) {
        dataObject = configOptions._here.dataObject;
        options.sanitize = false;
      } else {
        dataObject = configOptions.dataObject;
        options.sanitize = true;
      }

      configOptions._call(dataObject, options);

    } else {

      outputHere(configOptions);

    }
  }

  function postHereQuery(error, here, options) {
    /**
     * Callback from here. Writes HERE and initiates map update
     * @param {object} error - JavaScript Error
     * @param {object} here - here() raw
     * @param {object} options - as submitted
     */
    if ("_configOptions" in options) {
      if (typeof error === "undefined") {
        options._configOptions._here = here;
        // Store globally for reference by following rerenders
        outputHere(options._configOptions);
      } else {
        outputStatusNoMap(options._configOptions, error);
      }
    }
  }

  function getLinkColor(configOptions, referenceObject) {
    /**
     * Helper: Determine a single color for a link, merging colors where necessary
     * @param {object} configOptions
     * @param {object} referenceObject - link reference object, including key c
     * @return {string} 6-hex HTML color with leading hash
     */

    var i;
    var colors = [];

    function mergeColors(colors) {
      // Array of hashed 6-hex colors

      var averageColor, i;
      var mixedColor = "#";
      var rgbStack = [[], [], []];

      for (i = 0; i < colors.length; i += 1) {
        if (colors[i].length === 7 &&
          colors[i][0] === "#"
        ) {
          // 6-hex style colors, GTFS-compatible
          rgbStack[0].push(parseInt(colors[i].slice(1, 3), 16));
          rgbStack[1].push(parseInt(colors[i].slice(3, 5), 16));
          rgbStack[2].push(parseInt(colors[i].slice(5, 7), 16));
        }
      }

      for (i = 0; i < rgbStack.length; i += 1) {
        if (rgbStack[i].length === 0) {
          return mixedColor + colors[0];
          // Unrecognised color style
        }
        averageColor = Math.round((rgbStack[i].reduce(function (a, b) {
          return a + b;
        }, 0)) / rgbStack[i].length);
        if (averageColor < 16) {
          // Zero-pad single hex digits only
          mixedColor += "0";
        }
        mixedColor += averageColor.toString(16);
      }

      return mixedColor;
    }

    if (typeof referenceObject === "undefined" ||
      !Array.isArray(referenceObject)
    ) {
      return configOptions.linkColor;
    }

    for (i = 0; i < referenceObject.length; i += 1) {
      if ("c" in referenceObject[i] &&
        "reference" in configOptions.dataObject &&
        "color" in configOptions.dataObject.reference &&
        referenceObject[i].c >= 0 &&
        referenceObject[i].c < configOptions.dataObject.reference.color.length &&
        colors.indexOf(configOptions.dataObject.reference.color[referenceObject[i].c]) === -1
      ) {
        colors.push(configOptions.dataObject.reference.color[referenceObject[i].c]);
      }
    }

    if (colors.length === 0) {
      return configOptions.linkColor;
    }
    if (colors.length === 1) {
      return colors[0];
      // Future: Check link color !== node color
    }

    return mergeColors(colors);
  }

  function getLinkOrNodePopupContent(configOptions, statName, statValue, linkObject, nodeObject) {
    /**
     * Helper: Content for map link/node popup. Generated on-click for efficiency
     * @param {object} configOptions
     * @param {string} statName - pre-translated name of value (eg "Daily Services")
     * @param {number} statValue - numeric associated with statName
     * @param {object} linkObject - link reference object, including key n
     * @param {object} nodeObject - node reference object, including key n
     * @return {object} DOM div element
     */

    var keys, noProductLink, popupDiv, productLink, i, j;

    function buildPopupData(configOptions, referenceObject, popupDiv, isNode, name) {
      // Object, Object, DOM, boolean, optional string. Fills content for one Object

      var div, divider, element, i;
      var wildcard = "[*]";

      if (typeof referenceObject !== "undefined" &&
        Array.isArray(referenceObject)
      ) {

        div = createElement("div", {}, {
          "margin": "0.3em 0"
        });

        if (isNode) {
          div.style["font-weight"] = "bold";
        } else {
          div.style["line-height"] = "2em";
        }

        if (typeof name !== "undefined" &&
          name !== ""
        ) {
          div.appendChild(createElement("div", {
            "textContent": name
          }, {
            "line-height": "1.2em",
            "margin": "0"
          }));
        }

        for (i = 0; i < referenceObject.length; i += 1) {
          if ("n" in referenceObject[i]) {

            if ("u" in referenceObject[i] &&
              "reference" in configOptions.dataObject &&
              "url" in configOptions.dataObject.reference &&
              referenceObject[i].u >= 0 &&
              referenceObject[i].u < configOptions.dataObject.reference.url.length
            ) {

              element = createElement("a", {
                "href": configOptions.dataObject.reference.url[referenceObject[i].u]
              });

              if (element.href.lastIndexOf(wildcard) !== -1) {
                if ("i" in referenceObject[i]) {
                  element.href = element.href.replace(wildcard, referenceObject[i].i);
                } else {
                  if ("n" in referenceObject[i]) {
                    element.href = element.href.replace(wildcard, referenceObject[i].n);
                  } else {
                    element.href = element.href.replace(wildcard, "");
                  }
                }
              }

              if (!isNode) {
                element.style.color = "#000";
                element.style["text-decoration"] = "none";
              }
            } else {
              element = createElement("span");
            }

            element.textContent = referenceObject[i].n;

            if (!isNode) {
              element.style.padding = "0.2em 0.3em";
              element.style.border = "1px solid #000";
              element.style["white-space"] = "nowrap";
              if ("reference" in configOptions.dataObject &&
                "color" in configOptions.dataObject.reference
              ) {
                if ("c" in referenceObject[i] &&
                  referenceObject[i].c >= 0 &&
                  referenceObject[i].c < configOptions.dataObject.reference.color.length
                ) {
                  element.style["background-color"] =
                    configOptions.dataObject.reference.color[referenceObject[i].c];
                  element.style["border-color"] =
                    configOptions.dataObject.reference.color[referenceObject[i].c];
                }
                if ("t" in referenceObject[i] &&
                  referenceObject[i].t >= 0 &&
                  referenceObject[i].t < configOptions.dataObject.reference.color.length
                ) {
                  element.style.color = configOptions.dataObject.reference.color[referenceObject[i].t];
                }
              }
            }

            div.appendChild(element);

            if (i < referenceObject.length - 1) {
              if (isNode) {
                divider = ", ";
              } else {
                divider = " ";
              }
              div.appendChild(document.createTextNode(divider));
            }

          }
        }

        popupDiv.appendChild(div);
      }

      return popupDiv;
    }

    popupDiv = createElement("div", {}, {
      "color": "#000"
    });

    popupDiv = buildPopupData(configOptions, nodeObject, popupDiv, true);

    if (statValue < 10 &&
      statValue % 1 !== 0
    ) {
      statValue = (Math.round(statValue * 10) / 10).toString();
    } else {
      statValue = Math.round(statValue).toString();
    }

    popupDiv.appendChild(createElement("div", {
      "textContent": statName + ": " + statValue
    }, {
      "margin": "0.3em 0"
    }));

    if ("reference" in configOptions.dataObject &&
      "product" in configOptions.dataObject.reference &&
      configOptions.dataObject.reference.product.length > 1 &&
      configOptions.uiNetwork === true
    ) {

      noProductLink = [];
      productLink = [];

      for (i = 0; i < linkObject.length; i += 1) {
        if ("p" in linkObject[i]) {
          for (j = 0; j < linkObject[i].p.length; j += 1) {
            if (productLink[linkObject[i].p[j]] === undefined) {
              productLink[linkObject[i].p[j]] = [];
            }
            productLink[linkObject[i].p[j]].push(linkObject[i]);
          }
        } else {
          noProductLink.push(linkObject[i]);
        }
      }

      for (i = 0; i < productLink.length; i += 1) {
        if (productLink[i] !== undefined) {
          if (i < configOptions.dataObject.reference.product.length &&
            typeof configOptions.dataObject.reference.product[i] === "object"
          ) {
            if (configOptions.t in configOptions.dataObject.reference.product[i]) {
              productLink[i].unshift(configOptions.dataObject.reference.product[i][configOptions.t]);
            } else {
              if (configOptions.locale in configOptions.dataObject.reference.product[i]) {
                productLink[i].unshift(configOptions.dataObject.reference.product[i][configOptions.locale]);
              } else {
                keys = Object.keys(configOptions.dataObject.reference.product[i]);
                if (keys.length > 0) {
                  productLink[i].unshift(configOptions.dataObject.reference.product[i][keys[0]]);
                } else {
                  productLink[i].unshift("");
                }
              }
            }
          } else {
            noProductLink = noProductLink.concat(productLink[i]);
            // Unknown products first
          }
        }
      }

      productLink.sort();
      for (i = 0; i < productLink.length; i += 1) {
        if (productLink[i] !== undefined) {
          popupDiv = buildPopupData(configOptions, productLink[i].slice(1), popupDiv, false, productLink[i][0]);
        }
      }

    } else {
      noProductLink = linkObject;
    }

    if (noProductLink.length > 0) {
      popupDiv = buildPopupData(configOptions, noProductLink, popupDiv, false);
    }

    return popupDiv;
  }

  function outputHere(configOptions) {
    /**
     * Update map display with here
     * @param {object} configOptions
     */

    var geometry, keyName, panelDOM, placeName, statName, scale, tooltip, i, j;
    var layerNames = getLayerNames(configOptions);

    for (i = 0; i < layerNames.length; i += 1) {
      if (layerNames[i] in configOptions._layer) {
        configOptions._layer[layerNames[i]].clearLayers();
      }
    }

    if (configOptions.uiPanel === true &&
      "summary" in configOptions._here
    ) {

      keyName = Object.keys(configOptions._here.summary);
      for (i = 0; i < keyName.length; i += 1) {

        panelDOM = document.getElementById(configOptions._id + keyName[i] + "value");
        if (panelDOM) {
          try {
            panelDOM.textContent = new Intl.NumberFormat(configOptions.t)
              .format(Math.round(configOptions._here.summary[keyName[i]]));
          } catch (err) {
            // Unsupported feature or locale
            panelDOM.textContent = Math.round(configOptions._here.summary[keyName[i]]).toString();
          }
        }

      }

    }

    if ("place" in configOptions._here &&
      "place" in configOptions.dataObject &&
      configOptions.dataObject.place.length > 0
    ) {

      scale = Math.exp((configOptions.s - 5) / 2) * configOptions.placeScale / 666;
      statName = getLocalised(configOptions, "place");
      for (i = 0; i < configOptions._here.place.length; i += 1) {

        if ("circle" in configOptions._here.place[i] &&
          "value" in configOptions._here.place[i] &&
          configOptions._here.place[i].circle.length > 1
        ) {

          tooltip = Math.round(configOptions._here.place[i].value).toString() + " " + statName;
          // Place is a tooltip, not popup, to allow easier configOptions._here clicks
          if ("place" in configOptions._here.place[i] &&
            Array.isArray(configOptions._here.place[i].place)
          ) {
            placeName = [];
            for (j = 0; j < configOptions._here.place[i].place.length; j += 1) {
              if (typeof configOptions._here.place[i].place[j] === "object" &&
                "n" in configOptions._here.place[i].place[j]
              ) {
                placeName.push(configOptions._here.place[i].place[j].n);
              }
            }
            if (placeName.length > 0) {
              tooltip += " (" + placeName.join(", ") + ")";
            }
          }

          configOptions.leaflet.circleMarker(configOptions.leaflet.latLng([
            configOptions._here.place[i].circle[1],
            configOptions._here.place[i].circle[0]
          ]), {
            "fill": true,
            "fillColor": configOptions.placeColor,
            "fillOpacity": configOptions.placeOpacity,
            "radius": Math.ceil(Math.sqrt(configOptions._here.place[i].value * scale)),
            "stroke": false
          })
            .bindTooltip(tooltip)
            .addTo(configOptions._layer.place);

        }

      }

    }

    if ("link" in configOptions._here) {

      scale = Math.exp((configOptions.s - 5) / 2) * configOptions.linkScale * 4;
      statName = getLocalised(configOptions, "link");
      for (i = 0; i < configOptions._here.link.length; i += 1) {

        if ("polyline" in configOptions._here.link[i] &&
          "value" in configOptions._here.link[i]
        ) {

          geometry = [];
          for (j = 0; j < configOptions._here.link[i].polyline.length; j += 1) {
            if (configOptions._here.link[i].polyline[j].length > 1) {
              geometry.push([
                configOptions._here.link[i].polyline[j][1],
                configOptions._here.link[i].polyline[j][0]
              ]);
            }
          }

          configOptions.leaflet.polyline(geometry, {
            "color": getLinkColor(configOptions, configOptions._here.link[i].link),
            "weight": Math.ceil(Math.log(1 + (configOptions._here.link[i].value * (1 / scale)))
              * scale + configOptions.minWidth)
          })
            .on("click", function (evt) {
              var popup = evt.target.getPopup();
              var index = parseInt(popup.getContent(), 10);
              if (!Number.isNaN(index)) {
                // Else already processed
                popup.setContent(getLinkOrNodePopupContent(configOptions, statName,
                  configOptions._here.link[index].value, configOptions._here.link[index].link));
                popup.update();
              }
            })
            .bindPopup(i)
            // Popup stores only the index value until clicked
            .addTo(configOptions._layer.link);

        }

      }

    }

    if ("node" in configOptions._here) {
      scale = Math.exp((configOptions.s - 5) / 2) * configOptions.nodeScale * 2;
      statName = getLocalised(configOptions, "link");
      for (i = 0; i < configOptions._here.node.length; i += 1) {

        if ("circle" in configOptions._here.node[i] &&
          "value" in configOptions._here.node[i] &&
          configOptions._here.node[i].circle.length > 1
        ) {

          configOptions.leaflet.circleMarker(configOptions.leaflet.latLng([
            configOptions._here.node[i].circle[1],
            configOptions._here.node[i].circle[0]
          ]), {
            "color": configOptions.nodeColor,
            "fill": false,
            "radius": Math.ceil(Math.log(1 + (configOptions._here.node[i].value * (1 / (scale * 2))))
              * scale + (configOptions.minWidth / 2)),
            "weight": 1
          })
            .on("click", function (evt) {
              var popup = evt.target.getPopup();
              var index = parseInt(popup.getContent(), 10);
              if (!Number.isNaN(index)) {
                // Else already processed
popup.setContent(getLinkOrNodePopupContent(configOptions, statName,\n configOptions._here.node[index].value, configOptions._here.node[index].link,\n configOptions._here.node[index].node));\n popup.update();\n }\n })\n .bindPopup(i)\n // Popup stores only the index value until clicked\n .addTo(configOptions._layer.node);\n\n }\n\n }\n }\n\n if (\"here\" in configOptions._here &&\n configOptions._here.here.length > 0 &&\n \"circle\" in configOptions._here.here[0] &&\n \"value\" in configOptions._here.here[0] &&\n configOptions._here.here[0].circle.length > 1\n ) {\n\n configOptions.leaflet.circle(configOptions.leaflet.latLng([\n configOptions._here.here[0].circle[1],\n configOptions._here.here[0].circle[0]\n ]), {\n \"color\": configOptions.hereColor,\n \"fill\": false,\n \"interactive\": false,\n \"radius\": configOptions._here.here[0].value,\n \"weight\": 2\n })\n .addTo(configOptions._layer.here);\n\n }\n\n }\n\n function loadInitScripts(configOptions) {\n /**\n * Initialisation of supporting scripts. 
Calls postLoadInitScripts()\n * @param {object} configOptions\n */\n\n var leafletURLs;\n\n outputStatusNoMap(configOptions);\n // Visual indicator to manage expectations\n\n if (\"dataObject\" in configOptions === false ||\n typeof configOptions.dataObject !== \"object\"\n ) {\n configOptions.dataObject = {};\n } else {\n if (\"dataset\" in configOptions) {\n delete configOptions.dataset;\n }\n }\n\n if (\"leaflet\" in configOptions &&\n typeof configOptions.leaflet === \"object\" &&\n \"version\" in configOptions.leaflet &&\n Number.isNaN(parseInt(configOptions.leaflet.version.split(\".\")[0], 10)) === false &&\n parseInt(configOptions.leaflet.version.split(\".\")[0], 10) >= 1\n ) {\n // Leaflet version 1+, so no need to load\n leafletURLs = [];\n } else {\n leafletURLs = [\n \"https://unpkg.com/[email protected]/dist/leaflet.css\",\n \"https://unpkg.com/[email protected]/dist/leaflet.js\"\n ];\n // Default Leaflet library CSS and JS\n }\n\n if (typeof Promise !== \"undefined\") {\n loadWithPromise(configOptions, leafletURLs);\n } else {\n loadWithClassic(configOptions, leafletURLs);\n }\n }\n\n function postLoadInitScripts(configOptions) {\n /**\n * Initialisation of UI with first query. 
Callback from loadInitScripts()\n * @param {object} configOptions\n */\n\n if (typeof L === \"object\" &&\n \"version\" in L === false\n ) {\n return outputStatusNoMap(configOptions, {\"message\": \"Leaflet failed to load\"});\n }\n\n configOptions.leaflet = L;\n\n configOptions = parseConfigOptions(configOptions);\n configOptions = buildMapBase(configOptions);\n configOptions = buildMapUI(configOptions);\n\n queryHere(configOptions);\n }\n\n\n if (!targetDOM) {\n return false;\n // Exit stage left with nowhere to report failure gracefully\n }\n\n while (targetDOM.firstChild) {\n targetDOM.removeChild(targetDOM.firstChild);\n }\n\n if (!Object.keys ||\n ![].indexOf ||\n typeof JSON !== \"object\"\n ) {\n // Unsupported pre-IE9 vintage browser\n targetDOM.appendChild(\n document.createTextNode(\"Browser not supported by Aquius: Try a modern browser\"));\n // There is no localisation of errors\n targetDOM.style.height = \"auto\";\n // Prevents unusable embed areas being filled with whitespace\n return false;\n }\n\n if (typeof configOptions !== \"object\") {\n configOptions = {};\n }\n configOptions._id = configId;\n configOptions._call = this.here;\n configOptions._here = HERE;\n\n loadInitScripts(configOptions);\n return true;\n},\n\n\n\"here\": function here(dataObject, options) {\n /**\n * Here Query. May be called independently\n * @param {Object} dataObject - as init() option dataObject\n * @param {Object} options - callback:function(error, output, options), connectivity:factor, \n * geoJSON:array layernames, network:network index, place:array placeIndices, range:metres-from-center,\n * sanitize:boolean, service:service index, x:center-longitude, y:center-latitude\n * @return {Object} key:values or callback\n */\n \"use strict\";\n\n var error;\n var raw = {};\n\n function parseDataObject(raw) {\n /**\n * Check and fix dataObject. 
Fixes only sufficient not to break here()\n * @param {object} raw\n * @param {return} raw\n */\n\n var keyName, i, j, k, l;\n var networkObjects = {\n // Minimum default structure of each \"line\" (array) for non-header data structures\n \"0\": {\n // By schema ID, extendable for future schema\n \"link\": [[0], [0], [0], {}],\n \"network\": [[0], {\"en-US\": \"Unknown\"}, {}],\n \"node\": [0, 0, {}],\n \"place\": [0, 0, {}],\n \"service\": [[0], {\"en-US\": \"Unknown\"}, {}]\n }\n };\n // In practice 0 equates to null, while avoiding dataset rejection or variable checking during draw\n\n if (\"sanitize\" in options &&\n options.sanitize === false\n ) {\n return raw;\n }\n\n if (typeof raw.dataObject !== \"object\") {\n raw.dataObject = {};\n }\n if (\"meta\" in raw.dataObject === false) {\n raw.dataObject.meta = {};\n }\n if (\"schema\" in raw.dataObject.meta === false ||\n raw.dataObject.meta.schema in networkObjects === false\n ) {\n raw.dataObject.meta.schema = \"0\";\n }\n // Other translation, option, and meta keys not used here, so ignored\n\n keyName = Object.keys(networkObjects[raw.dataObject.meta.schema]);\n for (i = 0; i < keyName.length; i += 1) {\n if (keyName[i] in raw.dataObject &&\n typeof raw.dataObject[keyName[i]] !== \"object\"\n ) {\n delete raw.dataObject[keyName[i]];\n }\n }\n\n for (i = 0; i < keyName.length; i += 1) {\n\n if (keyName[i] in raw.dataObject === false ||\n Array.isArray(raw.dataObject[keyName[i]]) == false ||\n raw.dataObject[keyName[i]].length === 0\n ) {\n // Set whole key to default, with one dummy entry\n raw.dataObject[keyName[i]] =\n [networkObjects[raw.dataObject.meta.schema][keyName[i]]];\n }\n\n for (j = 0; j < raw.dataObject[keyName[i]].length; j += 1) {\n\n if (raw.dataObject[keyName[i]][j].length <\n networkObjects[raw.dataObject.meta.schema][keyName[i]].length\n ) {\n for (k = raw.dataObject[keyName[i]][j].length - 1; k <\n networkObjects[raw.dataObject.meta.schema][keyName[i]].length; k += 1) {\n // Append defaults 
to short lines\n raw.dataObject[keyName[i]][j].push(\n networkObjects[raw.dataObject.meta.schema][keyName[i]][k]);\n }\n }\n\n for (k = 0; k <\n networkObjects[raw.dataObject.meta.schema][keyName[i]].length; k += 1) {\n\n if (typeof raw.dataObject[keyName[i]][j][k] !==\n typeof networkObjects[raw.dataObject.meta.schema][keyName[i]][k]\n ) {\n\n if ((keyName[i] === \"node\" ||\n keyName[i] === \"place\") &&\n k === 2 &&\n typeof raw.dataObject[keyName[i]][j][k] === \"number\"\n ) {\n // Accommodate old non-property node/place data structure\n raw.dataObject[keyName[i]][j][k] = {\"p\": raw.dataObject[keyName[i]][j][k]};\n } else {\n if (keyName[i] === \"link\" &&\n typeof raw.dataObject[keyName[i]][j][k] === \"number\" &&\n (k === 0 ||\n k === 1)\n ) {\n // Accommodate old non-array link product data structure\n raw.dataObject[keyName[i]][j][k] = [raw.dataObject[keyName[i]][j][k]];\n } else {\n // Replace specific data item with default\n raw.dataObject[keyName[i]][j][k] =\n networkObjects[raw.dataObject.meta.schema][keyName[i]][k];\n }\n }\n }\n\n if (Array.isArray(raw.dataObject[keyName[i]][j][k])) {\n for (l = 0; l < raw.dataObject[keyName[i]][j][k].length; l += 1) {\n if (typeof raw.dataObject[keyName[i]][j][k][l] !==\n typeof networkObjects[raw.dataObject.meta.schema][keyName[i]][k][0]\n ) {\n // Replace specific data item within array with default\n raw.dataObject[keyName[i]][j][k][l] =\n networkObjects[raw.dataObject.meta.schema][keyName[i]][k][0];\n }\n }\n }\n\n }\n\n }\n\n }\n\n return raw;\n }\n\n function walkRoutes(raw, options) {\n /**\n * Adds raw.serviceLink and raw.serviceNode matrices, filtered\n * @param {object} raw - internal working data\n * @param {object} options\n * @return {object} raw\n */\n\n var linkChecks, service, i;\n \n function haversineDistance(lat1, lng1, lat2, lng2) {\n // Earth distance. 
Modified from Leaflet CRS.Earth.js\n\n var rad = Math.PI / 180;\n var sinDLat = Math.sin((lat2 - lat1) * rad / 2);\n var sinDLon = Math.sin((lng2 - lng1) * rad / 2);\n var a = sinDLat * sinDLat + Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * sinDLon * sinDLon;\n var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));\n\n return 6371000 * c;\n }\n\n function createLinkChecks(raw, options) {\n // Key=value indices optimised for fast confirmation of the existence of a value\n\n var i;\n var linkChecks = {};\n\n\n if (\"network\" in options &&\n typeof options.network === \"number\" &&\n options.network >= 0 &&\n options.network < raw.dataObject.network.length\n ) {\n linkChecks.product = {};\n // Product\n for (i = 0; i < raw.dataObject.network[options.network][0].length; i += 1) {\n linkChecks.product[raw.dataObject.network[options.network][0][i]] =\n raw.dataObject.network[options.network][0][i];\n }\n }\n\n linkChecks.here = {};\n // Here Nodes\n for (i = 0; i < raw.dataObject.node.length; i += 1) {\n if (raw.dataObject.node[i].length > 1 &&\n (\"range\" in options === false ||\n \"x\" in options === false ||\n \"y\" in options === false ||\n options.range >=\n haversineDistance(options.y, options.x, raw.dataObject.node[i][1], raw.dataObject.node[i][0])) &&\n (\"place\" in options === false ||\n !Array.isArray(options.place) ||\n (\"p\" in raw.dataObject.node[i][2] === false &&\n \"place\" in raw.dataObject.node[i][2] === false) ||\n (\"p\" in raw.dataObject.node[i][2] &&\n options.place.indexOf(raw.dataObject.node[i][2].p) !== -1) ||\n (\"place\" in raw.dataObject.node[i][2] &&\n options.place.indexOf(raw.dataObject.node[i][2].place) !== -1)\n )\n ) {\n linkChecks.here[i] = i;\n }\n \n }\n\n if (\"service\" in options &&\n typeof options.service === \"number\" &&\n options.service >= 0 &&\n options.service < raw.dataObject.service.length\n ) {\n linkChecks.service = {};\n // Index positions in link service array to be analysed\n for (i = 0; i < 
raw.dataObject.service[options.service][0].length; i += 1) {\n linkChecks.service[raw.dataObject.service[options.service][0][i]] = raw.dataObject.service[options.service][0][i];\n }\n }\n\n return linkChecks;\n }\n\n function isProductOK(linkChecks, linkProductArray) {\n // At least 1 product must be included in productFilter, or productFilter missing\n\n var i;\n\n if (\"product\" in linkChecks) {\n if (linkProductArray.length === 1) {\n // Not initiating a loop where length is 1 reduces runtime\n if (linkProductArray[0] in linkChecks.product) {\n return true;\n }\n } else {\n for (i = 0; i < linkProductArray.length; i += 1) {\n if (linkProductArray[i] in linkChecks.product) {\n return true;\n }\n }\n }\n } else {\n return true;\n }\n\n return false;\n }\n\n function isShareOK(linkChecks, linkPropertiesObject) {\n // Any share must not be included as parent\n\n if (\"h\" in linkPropertiesObject &&\n \"product\" in linkChecks &&\n linkPropertiesObject.h in linkChecks.product\n ) {\n return false;\n }\n if (\"shared\" in linkPropertiesObject &&\n \"product\" in linkChecks &&\n linkPropertiesObject.shared in linkChecks.product\n ) {\n return false;\n }\n return true;\n }\n\n function isHereOK(linkChecks, linkNodeArray, linkPropertiesObject) {\n // At least one node is in here that is not setdown only, or if unidirectional, pickup\n\n var i;\n\n for (i = 0; i < linkNodeArray.length; i += 1) {\n // Length of linkNodeArray logically not 1, so no optimisation from length===1 check\n if (linkNodeArray[i] in linkChecks.here) {\n\n if (\"s\" in linkPropertiesObject === false &&\n \"setdown\" in linkPropertiesObject === false\n ) {\n return true;\n } else {\n if (\"s\" in linkPropertiesObject &&\n Array.isArray(linkPropertiesObject.s) &&\n linkPropertiesObject.s.indexOf(linkNodeArray[i]) === -1\n ) {\n // IndexOf() efficient here since whole setdown list needs to be checked as-is\n return true;\n }\n if (\"setdown\" in linkPropertiesObject &&\n 
Array.isArray(linkPropertiesObject.setdown) &&\n linkPropertiesObject.setdown.indexOf(linkNodeArray[i]) === -1\n ) {\n return true;\n }\n }\n\n if (\"d\" in linkPropertiesObject === false &&\n \"direction\" in linkPropertiesObject === false\n ) {\n if (\"u\" in linkPropertiesObject === false &&\n \"pickup\" in linkPropertiesObject === false\n ) {\n return true;\n } else {\n if (\"u\" in linkPropertiesObject &&\n Array.isArray(linkPropertiesObject.u) &&\n linkPropertiesObject.u.indexOf(linkNodeArray[i]) === -1\n ) {\n return true;\n }\n if (\"pickup\" in linkPropertiesObject &&\n Array.isArray(linkPropertiesObject.pickup) &&\n linkPropertiesObject.pickup.indexOf(linkNodeArray[i]) === -1\n ) {\n return true;\n }\n }\n }\n\n }\n }\n\n return false;\n }\n\n function addServiceLevel(service, linkChecks, linkServiceArray) {\n\n var i;\n\n if (linkServiceArray.length === 1 &&\n (\"service\" in linkChecks === false ||\n 0 in linkChecks.service)\n ) {\n // Not initiating a loop where length is 1 reduces runtime\n service.level = linkServiceArray[0];\n return service;\n }\n\n service.level = 0;\n for (i = 0; i < linkServiceArray.length; i += 1) {\n if (\"service\" in linkChecks === false ||\n i in linkChecks.service\n ) {\n service.level += linkServiceArray[i];\n }\n }\n\n return service;\n }\n \n function addServiceDirection(service, linkChecks, linkNodeArray, linkPropertiesObject) {\n // Includes circular and route\n\n var i;\n\n if (\"circular\" in linkPropertiesObject) {\n linkPropertiesObject.c = linkPropertiesObject.circular;\n }\n\n if (linkNodeArray.length > 1 &&\n (\"c\" in linkPropertiesObject &&\n (linkPropertiesObject.c ||\n linkPropertiesObject.c === 1))\n ) {\n service.circular = linkNodeArray.length - 1;\n // Exclude the final node on a circular\n }\n\n if (\"direction\" in linkPropertiesObject) {\n linkPropertiesObject.d = linkPropertiesObject.direction;\n }\n\n if (\"d\" in linkPropertiesObject &&\n (linkPropertiesObject.d ||\n linkPropertiesObject.d === 
1)\n ) {\n service.direction = 1;\n\n for (i = 0; i < linkNodeArray.length; i += 1) {\n\n if (linkNodeArray[i] in linkChecks.here) {\n if (\"circular\" in service) {\n service.route = linkNodeArray.slice(0, -1);\n // Come friendly bombs and fall on Parla\n service.route = service.route.slice(i).concat(service.route.slice(0, i));\n // Its circular uni-directional tram is nodal nightmare\n service.route.push(service.route[0]);\n /**\n * Still not perfect: Counts the whole service round the loop\n * Considered defering these service till they can be summarised at the end\n * However Parla has unequal frequencies in each direction,\n * so halving (as other circulars) over common sections is still wrong\n * Would need to calculate which direction is the fastest to each node\n */\n } else {\n service.route = linkNodeArray.slice(i);\n // Route ignores nodes before the 1st found in hereNodes\n }\n break;\n }\n\n }\n\n } else {\n service.route = linkNodeArray;\n }\n\n if (\"route\" in service === false) {\n service.route = [];\n }\n\n return service;\n }\n\n function addServicePickupSetdown(service, linkChecks, linkPropertiesObject) {\n // Adds pickupIndex if pickup exists, setdownIndex if setdown exists\n\n var i;\n\n if (\"pickup\" in linkPropertiesObject) {\n linkPropertiesObject.u = linkPropertiesObject.pickup;\n }\n\n if (\"u\" in linkPropertiesObject &&\n Array.isArray(linkPropertiesObject.u) &&\n linkPropertiesObject.u.length > 0\n ) {\n\n service.pickupIndex = {};\n\n for (i = 0; i < linkPropertiesObject.u.length; i += 1) {\n service.pickupIndex[linkPropertiesObject.u[i]] = \"\";\n }\n\n }\n\n if (\"setdown\" in linkPropertiesObject) {\n linkPropertiesObject.s = linkPropertiesObject.setdown;\n }\n\n if (\"s\" in linkPropertiesObject &&\n Array.isArray(linkPropertiesObject.s) &&\n linkPropertiesObject.s.length > 0\n ) {\n\n service.setdownIndex = {};\n\n for (i = 0; i < linkPropertiesObject.s.length; i += 1) {\n service.setdownIndex[linkPropertiesObject.s[i]] = 
\"\";\n }\n\n }\n\n return service;\n }\n\n function addServiceSplits(service, linkChecks, linkNodeArray, linkPropertiesObject) {\n /**\n * Splits logic: service.splits = 1, only contributes to service count at \n * nodes within unique sections, and are only plotted in unique sections.\n * Unless: service.splits = 2, hereNodes contains no common sections.\n */\n\n var i;\n\n if (\"split\" in linkPropertiesObject) {\n linkPropertiesObject.t = linkPropertiesObject.split;\n }\n \n if (\"t\" in linkPropertiesObject &&\n Array.isArray(linkPropertiesObject.t) &&\n linkPropertiesObject.t.length > 0\n ) {\n\n service.splitsIndex = {};\n // Key index for speed, also used in subsequent route loop\n for (i = 0; i < linkPropertiesObject.t.length; i += 1) {\n service.splitsIndex[linkPropertiesObject.t[i]] = \"\";\n }\n\n service.splits = 2;\n for (i = 0; i < linkNodeArray.length; i += 1) {\n if (linkNodeArray[i] in service.splitsIndex === false &&\n linkNodeArray[i] in linkChecks.here\n ) {\n service.splits = 1;\n break;\n }\n }\n\n }\n\n return service;\n }\n\n function addServiceReferences(service, linkPropertiesObject, productArray) {\n // Saves constant rechecking of r property, and add productArray as reference.p\n\n var i;\n\n if (\"reference\" in linkPropertiesObject) {\n linkPropertiesObject.r = linkPropertiesObject.reference;\n }\n\n if (\"r\" in linkPropertiesObject &&\n Array.isArray(linkPropertiesObject.r) &&\n linkPropertiesObject.r.length > 0\n ) {\n service.reference = linkPropertiesObject.r;\n for (i = 0; i < service.reference.length; i += 1) {\n service.reference[i].p = productArray;\n }\n }\n\n return service;\n }\n \n function addServiceBlocks(service, linkPropertiesObject) {\n\n if (\"block\" in linkPropertiesObject) {\n linkPropertiesObject.b = linkPropertiesObject.block;\n }\n\n if (\"b\" in linkPropertiesObject) {\n service.block = linkPropertiesObject.b;\n }\n\n return service;\n }\n\n function mergeReference(masterReference, addReference) {\n // Adds a 
link property.r array to a master object, unique by name or color\n\n var i;\n\n if (!Array.isArray(addReference) ||\n addReference.length === 0\n ) {\n return masterReference;\n }\n\n for (i = 0; i < addReference.length; i += 1) {\n\n if (\"i\" in addReference[i]) {\n // Group by id\n if (addReference[i].i in masterReference === false) {\n masterReference[addReference[i].i] = addReference[i];\n }\n } else {\n if (\"n\" in addReference[i]) {\n // Group by name\n if (addReference[i].n in masterReference === false) {\n masterReference[addReference[i].n] = addReference[i];\n }\n } else {\n if (\"c\" in addReference[i]) {\n // Group by color\n if (addReference[i].c in masterReference === false) {\n masterReference[addReference[i].c] = addReference[i];\n }\n }\n }\n }\n\n }\n\n return masterReference;\n }\n\n function walkServiceRoutes(raw, service, linkChecks) {\n\n var canConnect, keys, lastIndex, node, place, serviceLevel, thisLevel, i;\n var beenHere = false;\n var canArrive = false;\n var countLevel = 0;\n var countSummary = true;\n var placed = {};\n // Place index:service level\n var prevIndex = -1;\n var prevLevel = 0;\n\n function addNodeService(raw, service, nodeId, serviceLevel) {\n\n if (\"block\" in service) {\n if (\"block\" in raw === false) {\n raw.block = {};\n }\n if (service.block in raw.block === false) {\n raw.block[service.block] = {};\n }\n if (nodeId in raw.block[service.block]) {\n return raw;\n }\n raw.block[service.block][nodeId] = \"\";\n }\n\n if (typeof raw.serviceNode[nodeId] === \"undefined\") {\n raw.serviceNode[nodeId] = {\n \"reference\": {},\n \"service\": serviceLevel\n };\n } else {\n raw.serviceNode[nodeId].service += serviceLevel;\n }\n\n if (\"reference\" in service) {\n raw.serviceNode[nodeId].reference =\n mergeReference(raw.serviceNode[nodeId].reference, service.reference);\n }\n\n return raw;\n }\n\n function addLinkService(raw, service, fromNode, toNode, serviceLevel) {\n\n var destination, origin, key;\n\n if (fromNode < 
toNode) {\n // Origin is largest node first. Skips reverse. Order allows subsequent pop rather than shift\n origin = toNode;\n destination = fromNode;\n } else {\n origin = fromNode;\n destination = toNode;\n }\n \n if (\"block\" in service) {\n if (\"block\" in raw === false) {\n raw.block = {};\n }\n if (service.block in raw.block === false) {\n raw.block[service.block] = {};\n }\n key = origin.toString() + \":\" + destination.toString();\n if (key in raw.block[service.block]) {\n return raw;\n }\n raw.block[service.block][key] = \"\";\n }\n\n if (typeof raw.serviceLink[origin] === \"undefined\") {\n raw.serviceLink[origin] = [];\n }\n\n if (typeof raw.serviceLink[origin][destination] === \"undefined\") {\n raw.serviceLink[origin][destination] = {\n \"reference\": {},\n \"service\": serviceLevel\n };\n } else {\n raw.serviceLink[origin][destination].service += serviceLevel;\n }\n\n if (\"reference\" in service) {\n raw.serviceLink[origin][destination].reference =\n mergeReference(raw.serviceLink[origin][destination].reference, service.reference);\n }\n\n return raw;\n }\n\n if (\"pickupIndex\" in service &&\n service.route[service.route.length - 1] in service.pickupIndex\n ) {\n for (i = service.route.length - 1; i >= 0; i--) {\n if (service.route[i] in service.pickupIndex === false) {\n lastIndex = i;\n break;\n }\n }\n } else {\n // Last node on route is almost always the last for passengers\n lastIndex = service.route.length - 1;\n }\n\n if (\"block\" in service &&\n \"block\" in raw &&\n service.block in raw.block\n ) {\n // Will count first occurance of the block\n countSummary = false;\n }\n\n for (i = 0; i <= lastIndex; i += 1) {\n\n thisLevel = 0;\n node = service.route[i];\n \n if (\"connect\" in raw &&\n (\"splits\" in service === false ||\n service.splits === 2 ||\n node in service.splitsIndex)\n ) {\n canConnect = true;\n } else {\n canConnect = false;\n }\n\n if (beenHere === false &&\n node in linkChecks.here\n ) {\n beenHere = true;\n }\n\n if 
(node in linkChecks.here) {\n\n if ((canArrive &&\n (\"pickupIndex\" in service === false ||\n node in service.pickupIndex === false)) ||\n ((\"setdownIndex\" in service === false ||\n node in service.setdownIndex === false) &&\n i !== lastIndex)\n ) {\n // Within here, as arrival or departure\n if (\"direction\" in service === false &&\n ((i === 0 &&\n service.route[i + 1] in linkChecks.here === false) ||\n (i === lastIndex &&\n service.route[i - 1] in linkChecks.here === false)) &&\n \"circular\" in service === false\n ) {\n // Terminus\n thisLevel = service.level / 2;\n } else {\n thisLevel = service.level;\n }\n }\n\n } else {\n // Node outside Here\n if (\"direction\" in service === false &&\n ((beenHere &&\n (\"pickupIndex\" in service === false ||\n node in service.pickupIndex === false)) ||\n (beenHere === false &&\n (\"setdownIndex\" in service === false ||\n node in service.setdownIndex === false)))\n ) {\n // Both directions, halves service outside Here\n thisLevel = service.level / 2;\n } else {\n if (beenHere &&\n (\"pickupIndex\" in service === false ||\n node in service.pickupIndex === false)\n ) {\n // Uni-directional counts full service after Here\n thisLevel = service.level;\n }\n }\n }\n\n if (thisLevel > 0 &&\n (\"splits\" in service === false ||\n service.splits === 2 ||\n node in service.splitsIndex) &&\n (\"circular\" in service === false ||\n i < lastIndex)\n ) {\n raw = addNodeService(raw, service, node, thisLevel);\n }\n\n if (countSummary &&\n thisLevel > countLevel) {\n countLevel = thisLevel;\n }\n\n // Finally\n if (canArrive === false &&\n (\"setdownIndex\" in service === false ||\n node in service.setdownIndex === false) &&\n (\"direction\" in service === false ||\n node in linkChecks.here)\n ) {\n canArrive = true;\n // May be counted as arrival at subsequent nodes\n }\n\n if (prevIndex !== -1 &&\n prevLevel > 0 &&\n (\"splits\" in service === false ||\n service.splits === 2 ||\n node in service.splitsIndex ||\n 
service.route[prevIndex] in service.splitsIndex)\n ) {\n // Add link\n if (\"direction\" in service === false &&\n service.route[prevIndex] in linkChecks.here &&\n (\"setdownIndex\" in service === false ||\n \"pickupIndex\" in service === false ||\n node in service.setdownIndex === false ||\n node in service.pickupIndex === false)\n ) {\n serviceLevel = thisLevel;\n } else {\n serviceLevel = prevLevel;\n }\n raw = addLinkService(raw, service, service.route[prevIndex], node, serviceLevel);\n }\n if (thisLevel > 0) {\n prevLevel = thisLevel;\n }\n prevIndex = i;\n\n if (canConnect &&\n thisLevel > 0\n ) {\n place = -1;\n if (\"p\" in raw.dataObject.node[node][2]) {\n place = raw.dataObject.node[node][2].p;\n } else {\n if (\"place\" in raw.dataObject.node[node][2]) {\n place = raw.dataObject.node[node][2].place;\n }\n }\n if (place >= 0) {\n if (place in placed === false ||\n thisLevel > placed[place]\n ) {\n placed[place] = thisLevel;\n }\n }\n }\n\n }\n\n if (\"splits\" in service === false ||\n service.splits === 2\n ) {\n raw.summary.link += countLevel;\n }\n\n if (canConnect) {\n keys = Object.keys(placed);\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in raw.connect) {\n raw.connect[keys[i]] += placed[keys[i]];\n } else {\n raw.connect[keys[i]] = placed[keys[i]];\n }\n }\n }\n\n return raw;\n }\n\n raw.serviceLink = [];\n // Service by link [from[to[service]]] - with voids undefined\n raw.serviceNode = [];\n // Service by node [node[service]] - with voids undefined\n linkChecks = createLinkChecks(raw, options);\n\n for (i = 0; i < raw.dataObject.link.length; i += 1) {\n\n if (\n isProductOK(linkChecks, raw.dataObject.link[i][0]) &&\n isShareOK(linkChecks, raw.dataObject.link[i][3]) &&\n isHereOK(linkChecks, raw.dataObject.link[i][2], raw.dataObject.link[i][3])\n ) {\n\n service = {};\n service = addServiceLevel(service, linkChecks, raw.dataObject.link[i][1]);\n\n if (service.level > 0) {\n // Process this service, else skip\n\n service = 
addServiceDirection(service, linkChecks, raw.dataObject.link[i][2], raw.dataObject.link[i][3]);\n service = addServicePickupSetdown(service, linkChecks, raw.dataObject.link[i][3]);\n service = addServiceSplits(service, linkChecks, raw.dataObject.link[i][2], raw.dataObject.link[i][3]);\n service = addServiceReferences(service, raw.dataObject.link[i][3], raw.dataObject.link[i][0]);\n service = addServiceBlocks(service, raw.dataObject.link[i][3]);\n\n raw = walkServiceRoutes(raw, service, linkChecks);\n }\n\n }\n\n }\n\n if (\"block\" in raw) {\n delete raw.block;\n }\n\n return raw;\n }\n\n function pathRoutes(raw) {\n /**\n * Builds raw.path links of the same service level from matrices added in walkRoutes()\n * @param {object} raw - internal working data \n * @return {object} raw\n */\n\n function buildODMatrix(serviceLink) {\n\n var destination, destinationNum, originNum, serviceKey, i, j;\n var origin = Object.keys(serviceLink);\n var service = {};\n\n service.od = {},\n // service: [ [from, to] ]\n service.data = {};\n // service: {service, reference}\n\n for (i = 0; i < origin.length; i += 1) {\n\n originNum = origin[i] - 0;\n // Numeric of origin\n destination = Object.keys(serviceLink[originNum]);\n\n for (j = 0; j < destination.length; j += 1) {\n\n destinationNum = destination[j] - 0;\n // Numeric of destination\n serviceKey = serviceLink[originNum][destinationNum].service.toString();\n\n if (\"reference\" in serviceLink[originNum][destinationNum]) {\n serviceKey += \":\" + Object.keys(serviceLink[originNum][destinationNum].reference).join(\":\");\n }\n\n if (serviceKey in service.od === false) {\n service.od[serviceKey] = [];\n service.data[serviceKey] = serviceLink[originNum][destinationNum];\n }\n service.od[serviceKey].push([originNum, destinationNum]);\n\n }\n\n }\n\n return service;\n }\n\n function makePolyline(polyPoints, node) {\n // Return polyline of polyPoints\n\n var i;\n var maxIndex = node.length - 1;\n var polyline = [];\n\n for (i = 0; i < 
polyPoints.length; i += 1) {\n if (polyPoints[i] <= maxIndex &&\n polyPoints[i] >= 0\n ) {\n polyline.push([node[polyPoints[i]][0], node[polyPoints[i]][1]]);\n }\n }\n\n return polyline;\n }\n\n function buildLinkFromMatrix(raw, service) {\n // Further stack aggregation possible, but time-consuming vs resulting reduction in objects\n\n var geometry, link, stack, i, j;\n var serviceKeys = Object.keys(service.od);\n\n raw.link = [];\n\n for (i = 0; i < serviceKeys.length; i += 1) {\n stack = [service.od[serviceKeys[i]].pop()];\n while (service.od[serviceKeys[i]].length > 0) {\n\n link = service.od[serviceKeys[i]].pop();\n\n for (j = 0; j < stack.length; j += 1) {\n\n if (link[0] === stack[j][0]) {\n stack[j].unshift(link[1]);\n break;\n }\n if (link[1] === stack[j][0]) {\n stack[j].unshift(link[0]);\n break;\n }\n if (link[0] === stack[j][stack.length - 1]) {\n stack[j].push(link[1]);\n break;\n }\n if (link[1] === stack[j][stack.length - 1]) {\n stack[j].push(link[0]);\n break;\n }\n\n if (j === stack.length - 1) {\n stack.push(link);\n break;\n }\n\n }\n\n }\n\n for (j = 0; j < stack.length; j += 1) {\n\n geometry = {\n \"polyline\": makePolyline(stack[j], raw.dataObject.node),\n \"value\": service.data[serviceKeys[i]].service\n };\n\n if (\"reference\" in service.data[serviceKeys[i]]) {\n geometry.link = referenceToArray(service.data[serviceKeys[i]].reference);\n }\n\n raw.link.push(geometry);\n }\n\n }\n\n return raw;\n }\n\n raw = buildLinkFromMatrix(raw, buildODMatrix(raw.serviceLink));\n\n return raw;\n }\n\n function referenceToArray(referenceObject) {\n /**\n * Helper: Converts mergeReference() structure object into final array\n * @param {object} referenceObject - object of r properties\n * @return {object} sorted array of r properties\n */\n\n var reference, i;\n var keys = Object.keys(referenceObject).sort();\n\n if (keys.length === 0) {\n return [];\n }\n\n if (keys.length === 1) {\n return [referenceObject[keys[0]]];\n }\n \n reference = [];\n for (i = 
0; i < keys.length; i += 1) {\n reference.push(referenceObject[keys[i]]);\n }\n\n return reference;\n }\n\n function conjureGeometry(raw, options) {\n /**\n * Builds geospatial output using walkRoutes() matrices and pathRoutes() paths\n * @param {object} raw\n * @param {object} options\n * @return {object} output\n */\n\n function toGeojson(raw, geoJSON) {\n // Future: Faster if built with raw, not reprocessing link/etc, but feature is marginal\n\n var geometry, properties, i, j, k;\n var geojsonObject = {\n \"type\": \"FeatureCollection\",\n \"features\": []\n };\n\n for (i = 0; i < geoJSON.length; i += 1) {\n if (geoJSON[i] in raw) {\n for (j = 0; j < raw[geoJSON[i]].length; j += 1) {\n if (\"value\" in raw[geoJSON[i]][j] &&\n (\"circle\" in raw[geoJSON[i]][j] ||\n \"polyline\" in raw[geoJSON[i]][j])\n ) {\n if (\"circle\" in raw[geoJSON[i]][j]) {\n geometry = {\n \"type\": \"Point\",\n \"coordinates\": raw[geoJSON[i]][j].circle\n };\n } else {\n geometry = {\n \"type\": \"LineString\",\n \"coordinates\": []\n };\n for (k = 0; k < raw[geoJSON[i]][j].polyline.length; k += 1) {\n geometry.coordinates.push(raw[geoJSON[i]][j].polyline[k]);\n }\n }\n properties = {\n \"type\": geoJSON[i],\n \"value\": raw[geoJSON[i]][j].value\n };\n if (\"link\" in raw[geoJSON[i]][j]) {\n properties.link = raw[geoJSON[i]][j].link;\n }\n if (\"node\" in raw[geoJSON[i]][j]) {\n properties.node = raw[geoJSON[i]][j].node;\n }\n if (\"place\" in raw[geoJSON[i]][j]) {\n properties.place = raw[geoJSON[i]][j].place;\n }\n geojsonObject.features.push({\n \"type\": \"Feature\",\n \"geometry\": geometry,\n \"properties\": properties\n });\n }\n }\n }\n }\n\n return geojsonObject;\n }\n\n function addNode(raw) {\n\n var dataObjectNode, geometry, i;\n var maxNodeIndex = raw.dataObject.node.length - 1;\n\n raw.node = [];\n\n for (i = 0; i < raw.serviceNode.length; i += 1) {\n if (typeof raw.serviceNode[i] !== \"undefined\" &&\n i <= maxNodeIndex\n ) {\n\n raw.summary.node += 1;\n dataObjectNode = 
raw.dataObject.node[i];\n\n geometry = {\n \"circle\": [\n dataObjectNode[0],\n dataObjectNode[1]\n ],\n \"value\": raw.serviceNode[i].service\n };\n\n if (\"reference\" in raw.serviceNode[i]) {\n geometry.link = referenceToArray(raw.serviceNode[i].reference);\n }\n\n if (\"reference\" in dataObjectNode[2]) {\n dataObjectNode[2].r = dataObjectNode[2].reference;\n }\n\n if (\"r\" in dataObjectNode[2] &&\n dataObjectNode[2].r.length > 0\n ) {\n geometry.node = dataObjectNode[2].r;\n }\n\n raw.node.push(geometry);\n }\n }\n\n return raw;\n }\n\n function addPlace(raw, options) {\n\n var geometry, keys, population, i;\n\n raw.place = [];\n\n if (\"connect\" in raw) {\n\n keys = Object.keys(raw.connect);\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] < raw.dataObject.place.length) {\n // < 0 already checked in serviceLink logic\n\n population = 0;\n if (\"p\" in raw.dataObject.place[keys[i]][2]) {\n population = raw.dataObject.place[keys[i]][2].p;\n } else {\n if (\"population\" in raw.dataObject.place[keys[i]][2]) {\n population = raw.dataObject.place[keys[i]][2].population;\n }\n }\n if (!Number.isNaN(population)) {\n if (\"connectivity\" in options &&\n options.connectivity > 0\n ) {\n population = population * ( 1 - ( 1 / (raw.connect[keys[i]] * options.connectivity)));\n }\n population = Math.round(population);\n\n if (population > 0) {\n raw.summary.place += population;\n \n geometry = {\n \"circle\": [\n raw.dataObject.place[keys[i]][0],\n raw.dataObject.place[keys[i]][1]\n ],\n \"value\": population\n };\n if (\"r\" in raw.dataObject.place[keys[i]][2]) {\n geometry.place = raw.dataObject.place[keys[i]][2].r;\n } else {\n if (\"reference\" in raw.dataObject.place[keys[i]][2]) {\n geometry.place = raw.dataObject.place[keys[i]][2].reference;\n }\n }\n raw.place.push(geometry);\n }\n\n }\n\n }\n }\n }\n\n return raw;\n }\n\n raw = addNode(raw);\n raw = addPlace(raw, options);\n\n if (\"geoJSON\" in options === false ||\n !Array.isArray(options.geoJSON)\n ) 
{\n return raw;\n } else {\n return toGeojson(raw, options.geoJSON);\n }\n }\n\n function exitHere(error, raw, options) {\n /**\n * Called on exit\n * @param {object} error - Jvaascript error\n * @param {object} raw\n * @param {object} options\n * @return {object} raw or callback\n */\n\n var i;\n var cleanup = [\"connect\", \"serviceNode\", \"serviceLink\"];\n\n for (i = 0; i < cleanup.length; i += 1) {\n if (cleanup[i] in raw) {\n delete raw[cleanup[i]];\n }\n }\n\n if (typeof error === \"undefined\" &&\n \"error\" in raw\n ) {\n error = new Error(raw.error);\n }\n\n if (\"callback\" in options) {\n options.callback(error, raw, options);\n return true;\n } else {\n return raw;\n }\n }\n\n\n if (!Object.keys ||\n ![].indexOf ||\n typeof JSON !== \"object\"\n ) {\n raw.error = \"Unsupported browser\";\n return exitHere(error, raw, options);\n }\n\n if ((\"x\" in options &&\n typeof options.x !== \"number\") ||\n (\"y\" in options &&\n typeof options.y !== \"number\") ||\n (\"range\" in options &&\n typeof options.range !== \"number\")\n ) {\n raw.error = \"Here parameters not numeric\";\n return exitHere(error, raw, options);\n }\n\n if (typeof options !== \"object\") {\n options = {};\n }\n\n raw.dataObject = dataObject;\n\n if (\"x\" in options &&\n \"y\" in options &&\n \"range\" in options\n ) {\n raw.here = [{\n \"circle\": [options.x, options.y],\n \"value\": options.range\n }];\n }\n\n raw.summary = {\n \"link\": 0,\n // Count of services (eg daily trains)\n \"node\": 0,\n // Count of nodes (eg stations)\n \"place\": 0\n // Count of demography (eg people)\n };\n\n try {\n\n raw = parseDataObject(raw, options);\n if (raw.dataObject.place.length > 0) {\n raw.connect = {};\n // place index:service total\n }\n raw = walkRoutes(raw, options);\n raw = pathRoutes(raw);\n raw = conjureGeometry(raw, options);\n\n return exitHere(error, raw, options);\n\n } catch (err) {\n\n error = err;\n\n return exitHere(error, raw, options);\n\n }\n},\n\n};\n// EoF\n"
},
{
"alpha_fraction": 0.528593897819519,
"alphanum_fraction": 0.5344051718711853,
"avg_line_length": 29.933422088623047,
"blob_id": "e5c3d47569e638276c76bbaa84165912beca58f5",
"content_id": "722f04887062e65cd6ecc5f682a17796eaab24a6",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": true,
"language": "JavaScript",
"length_bytes": 46461,
"license_type": "permissive",
"max_line_length": 114,
"num_lines": 1502,
"path": "/dist/merge.js",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "/*eslint-env browser*/\n/*global aquius*/\n\nvar mergeAquius = mergeAquius || {\n/**\n * @namespace Merge Aquius\n * @version 0\n * @copyright MIT License\n */\n\n\n\"init\": function init(configId) {\n /**\n * Initialisation with user interface\n * @param {string} configId - Id of DOM element within which to build UI\n * @return {boolean} initialisation success\n */\n \"use strict\";\n\n // Functions shared with gtfsToAquius: createElement, initialiseUI, outputError\n\n function createElement(elementType, valueObject, styleObject) {\n /**\n * Helper: Creates DOM element\n * @param {string} elementType\n * @param {object} valueObject - optional DOM value:content pairs\n * @param {object} styleObject - optional DOM style value:content pairs\n * @return {object} DOM element\n */\n\n var values, styles, i;\n var element = document.createElement(elementType);\n\n if (typeof valueObject !== \"undefined\") {\n values = Object.keys(valueObject);\n for (i = 0; i < values.length; i += 1) {\n element[values[i]] = valueObject[values[i]];\n }\n }\n\n if (typeof styleObject !== \"undefined\") {\n styles = Object.keys(styleObject);\n for (i = 0; i < styles.length; i += 1) {\n element.style[styles[i]] = styleObject[styles[i]];\n }\n }\n\n return element;\n }\n\n function initialiseUI(vars, loadFunction, prompt) {\n /**\n * Creates user interface in its initial state\n * @param {object} vars - internal data references (including configId)\n * @param {object} loadFunction - function to load files\n * @param {string} prompt - label\n * @return {boolean} initialisation success\n */\n\n var button, form, label;\n var baseDOM = document.getElementById(vars.configId);\n\n if (!baseDOM) {\n return false;\n }\n\n while (baseDOM.firstChild) {\n baseDOM.removeChild(baseDOM.firstChild);\n }\n\n if (!Object.keys ||\n ![].indexOf ||\n typeof JSON !== \"object\" ||\n !window.File ||\n !window.FileReader ||\n !window.Blob\n ) {\n baseDOM.appendChild(\n document.createTextNode(\"Browser 
not supported: Try a modern browser\"));\n return false;\n }\n\n document.head.appendChild(createElement(\"script\", {\n \"src\": \"https://timhowgego.github.io/Aquius/dist/aquius.min.js\",\n \"type\": \"text/javascript\"\n }));\n // Optional in much later post-processing, so no callback\n\n form = createElement(\"form\", {\n \"className\": vars.configId + \"Input\"\n });\n\n label = createElement(\"label\", {\n \"textContent\": prompt\n });\n \n button = createElement(\"input\", {\n \"id\": vars.configId + \"ImportFiles\",\n \"multiple\": \"multiple\",\n \"name\": vars.configId + \"ImportFiles[]\",\n \"type\": \"file\"\n });\n button.addEventListener(\"change\", (function(){\n loadFunction(vars);\n }), false);\n label.appendChild(button);\n\n form.appendChild(label);\n\n form.appendChild(createElement(\"span\", {\n \"id\": vars.configId + \"Progress\"\n }));\n\n baseDOM.appendChild(form);\n\n baseDOM.appendChild(createElement(\"div\", {\n \"id\": vars.configId + \"Output\"\n }));\n\n return true;\n }\n \n function initialiseMerge(vars) {\n /**\n * Initiates JSON file import and Aquius creation\n * @param {object} vars - internal data references\n */\n\n var reader;\n var loaded = [];\n var fileDOM = document.getElementById(vars.configId + \"ImportFiles\");\n var options = {\n \"_vars\": vars,\n \"callback\": outputMerge\n };\n var processedFiles = 0;\n var progressDOM = document.getElementById(vars.configId + \"Progress\");\n var totalFiles = 0;\n\n function loadFiles(fileList) {\n\n var i;\n\n totalFiles = fileList.length;\n\n for (i = 0; i < totalFiles; i += 1) {\n reader = new FileReader();\n reader.onerror = (function(evt, theFile) {\n outputError(new Error(\"Could not read \" + theFile.name), vars);\n reader.abort();\n });\n reader.onload = (function(theFile) {\n return function() {\n try {\n onLoad(theFile.name, this.result);\n } catch (err) {\n outputError(err, vars);\n }\n };\n })(fileList[i]);\n reader.readAsText(fileList[i]);\n }\n }\n\n function 
onLoad(filename, result) {\n\n var input, json, i;\n\n try {\n json = JSON.parse(result);\n if (typeof json === \"object\") {\n if (filename.toLowerCase() === \"config.json\") {\n options.config = json;\n } else {\n if (\"meta\" in json &&\n \"schema\" in json.meta &&\n \"link\" in json &&\n \"node\" in json\n ) {\n // Config may also have meta/schema. Link/node in practice required of datasets\n loaded.push([filename, json]);\n } else {\n if (\"config\" in options === false) {\n options.config = json;\n }\n }\n }\n }\n } catch (err) {\n if (err instanceof SyntaxError === false) {\n outputError(err, vars);\n }\n // Else it not JSON\n }\n\n processedFiles += 1;\n\n if (processedFiles === totalFiles) {\n loaded.sort();\n // Sort by filename after load since load order unpredictable\n input = [];\n for (i = 0; i < loaded.length; i += 1) {\n input.push(loaded[i][1]);\n }\n vars.merge(input, options);\n }\n }\n\n if (fileDOM !== null &&\n fileDOM.files.length > 0\n ) {\n fileDOM.disabled = true;\n if (progressDOM !== null) {\n while (progressDOM.firstChild) {\n progressDOM.removeChild(progressDOM.firstChild);\n }\n progressDOM.textContent = \"Working...\";\n }\n loadFiles(fileDOM.files);\n }\n }\n\n function outputMerge(error, out, options) {\n /**\n * Called after Aquius creation\n * @param {object} error - Error object or undefined \n * @param {object} out - output, including keys aquius and config\n * @param {object} options - as sent, including _vars\n */\n\n // Similiar to GTFS (fewer variables, no summaries/tables)\n\n var keys, i;\n var vars = options._vars;\n var fileDOM = document.getElementById(vars.configId + \"ImportFiles\");\n var outputDOM = document.getElementById(vars.configId + \"Output\");\n var progressDOM = document.getElementById(vars.configId + \"Progress\");\n\n if (error !== undefined) {\n outputError(error, vars);\n return false;\n }\n\n if (!outputDOM ||\n !progressDOM\n ) {\n return false;\n }\n\n while (outputDOM.firstChild) {\n 
outputDOM.removeChild(outputDOM.firstChild);\n }\n\n while (progressDOM.firstChild) {\n progressDOM.removeChild(progressDOM.firstChild);\n }\n\n if (fileDOM !== null) {\n fileDOM.disabled = false;\n }\n\n outputDOM.className = \"\";\n\n keys = [\"aquius\", \"config\"];\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in out &&\n Object.keys(out[keys[i]]).length > 0\n ) {\n\n progressDOM.appendChild(createElement(\"a\", {\n \"className\": vars.configId + \"Download\",\n \"href\": window.URL.createObjectURL(\n new Blob([JSON.stringify(out[keys[i]])],\n {type: \"application/json;charset=utf-8\"})\n ),\n \"download\": keys[i] + \".json\",\n \"textContent\": \"Save \" + keys[i] + \".json\",\n \"role\": \"button\"\n }));\n\n }\n }\n\n if (typeof aquius !== \"undefined\" &&\n \"aquius\" in out\n ) {\n // If aquius has not loaded by now, skip the map\n outputDOM.appendChild(createElement(\"div\", {\n \"id\": vars.configId + \"Map\"\n }, {\n \"height\": (document.documentElement.clientHeight / 2) + \"px\"\n }));\n aquius.init(vars.configId + \"Map\", {\n \"dataObject\": out.aquius,\n \"uiStore\": false\n });\n }\n\n }\n\n function outputError(error, vars) {\n /**\n * Output errors to user interface, destroying any prior Output\n * @param {object} error - error Object\n * @param {object} vars - internal data references\n */\n\n var message;\n var fileDOM = document.getElementById(vars.configId + \"ImportFiles\");\n var outputDOM = document.getElementById(vars.configId + \"Output\");\n var progressDOM = document.getElementById(vars.configId + \"Progress\");\n \n if (error !== undefined &&\n outputDOM !== null\n ) {\n\n while (outputDOM.firstChild) {\n outputDOM.removeChild(outputDOM.firstChild);\n }\n\n outputDOM.className = vars.configId + \"OutputError\";\n\n if (\"message\" in error) {\n\n message = error.message;\n } else {\n message = JSON.stringify(error);\n }\n\n outputDOM.appendChild(createElement(\"p\", {\n \"textContent\": \"Error: \" + message\n }));\n\n if 
(progressDOM !== null) {\n while (progressDOM.firstChild) {\n progressDOM.removeChild(progressDOM.firstChild);\n }\n progressDOM.textContent = \"Failed\";\n }\n if (fileDOM !== null) {\n fileDOM.disabled = false;\n }\n }\n }\n\n return initialiseUI({\n \"configId\": configId,\n \"merge\": this.merge\n },\n initialiseMerge,\n \"Aquius files and optional config to process:\");\n},\n\n\n\"merge\": function merge(input, options) {\n /**\n * Creates single Aquius dataObject from array of aquius JSON objects\n * @param {array} input - array of Aquius JSON objects\n * @param {object} options - config, callback\n * @return {object} without callback: possible keys aquius, config, error\n * with callback: callback(error, out, options)\n */\n \"use strict\";\n\n var out = {\n \"_\": {},\n // Internal objects\n \"aquius\": {},\n // Output file\n \"config\": {}\n // Output config\n };\n\n function parseConfig(out, options) {\n /**\n * Parse options.config into valid out.config. Defines defaults\n * @param {object} out - internal data references\n * @param {object} options\n * @return {object} out\n */\n\n var i;\n var defaults = {\n \"coordinatePrecision\": 5,\n // Coordinate decimal places (smaller values tend to group clusters of stops)\n \"meta\": {},\n // As Data Structure meta key\n \"option\": {},\n // As Data Structure/Configuration option key\n \"translation\": {}\n // As Data Structure/Configuration translation key\n };\n var keys = Object.keys(defaults);\n\n if (typeof options !== \"object\") {\n options = {};\n }\n\n for (i = 0; i < keys.length; i += 1) {\n if (\"config\" in options &&\n keys[i] in options.config &&\n typeof options.config[keys[i]] === typeof defaults[keys[i]]\n ) {\n out.config[keys[i]] = options.config[keys[i]];\n } else {\n out.config[keys[i]] = defaults[keys[i]];\n }\n }\n\n return out;\n }\n\n function mergePropertyObject(original, addition, appendString) {\n /** Helper: Merge objects (in style {\"en-US\":\"translation\"})\n * @param {object} 
original\n * @param {object} addition\n * @param {boolean} appendString - join with | if duplicate string values differ\n * @return {object} original\n */\n\n var i;\n var keys = Object.keys(addition);\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in original) {\n if (original[keys[i]] !== addition[keys[i]] &&\n typeof appendString !== \"undefined\" &&\n appendString === true &&\n typeof original[keys[i]] === \"string\" &&\n typeof addition[keys[i]] === \"string\"\n ) {\n original[keys[i]] += \" | \" + addition[keys[i]];\n }\n } else {\n original[keys[i]] = addition[keys[i]];\n }\n }\n\n return original;\n }\n\n function coreLoop(out, input) {\n /**\n * Primary loop through Aquius dataObjects, adding to single out.aquius\n * @param {object} out\n * @param {array} input\n * @return {object} out\n */\n\n var i;\n\n out.aquius.meta = out.config.meta;\n out.aquius.meta.schema = \"0\";\n // Merged files are currently always forced \"0\"\n if (Object.keys(out.config.option).length > 0) {\n out.aquius.option = out.config.option;\n }\n if (Object.keys(out.config.translation).length > 0) {\n out.aquius.translation = out.config.translation;\n }\n\n out._.blockNext = 0;\n // Next unassigned block ID\n out._.blockSwitch = {};\n // InputIndex:{OldBlockID:NewBlockID} (link property)\n out._.colorSwitch = {};\n // InputIndex:{OldColorIndex:NewColorIndex} (reference color)\n out._.linkLookup = {};\n // key nodes:etc : link index\n out._.nodeLookup = {};\n // X:Y key: aquius.node index\n out._.nodeSwitch = {};\n // InputIndex:{OldNodeIndex:NewNodeIndex}\n out._.placeLookup = {};\n // X:Y key: aquius.place index\n out._.placeSwitch = {};\n // InputIndex:{OldPlaceIndex:NewPlaceIndex}\n out._.productNext = 0;\n // Next unassigned product ID\n out._.productSwitch = {};\n // InputIndex:{OldProductID:NewProductID}\n out._.serviceLookup = {};\n // Services:Indices key: aquius.service index\n out._.urlSwitch = {};\n // InputIndex:{OldUrlIndex:NewUrlIndex} (reference url)\n\n for 
(i = 0; i < input.length; i += 1) {\n // Core Loop\n if (typeof input[i] === \"object\") {\n out = buildMeta(out, input[i]);\n out = buildOption(out, input[i]);\n out = buildTranslation(out, input[i]);\n out = buildService(out, input[i]);\n out = buildLink(out, input[i], i);\n }\n }\n\n for (i = 0; i < input.length; i += 1) {\n // And Finally Loop\n if (typeof input[i] === \"object\") {\n out = buildNetwork(out, input[i], i);\n }\n }\n\n if (\"link\" in out.aquius) {\n out.aquius.link.sort(function (a, b) {\n return b[1].reduce(function(c, d) {\n return c + d;\n }, 0) - a[1].reduce(function(c, d) {\n return c + d;\n }, 0);\n });\n // Descending service count, since busiest most likely to be queried and thus found faster\n out = optimiseNode(out);\n }\n\n return out;\n }\n\n function buildMeta(out, dataObject) {\n /**\n * Creates aquius.meta\n * @param {object} out\n * @param {object} dataObject\n * @return {object} out\n */\n\n var keys, i;\n\n keys = [\"attribution\", \"description\", \"name\", \"url\"];\n for (i = 0; i < keys.length; i += 1) {\n if ((\"meta\" in out.aquius === false ||\n keys[i] in out.aquius.meta === false) &&\n \"meta\" in dataObject &&\n typeof dataObject.meta === \"object\" &&\n keys[i] in dataObject.meta\n ) {\n if (\"meta\" in out.aquius === false) {\n out.aquius.meta = {};\n }\n // First found only: Resolve conflicts with config.meta...\n out.aquius.meta[keys[i]] = dataObject.meta[keys[i]];\n }\n }\n\n return out;\n }\n\n function buildOption(out, dataObject) {\n /**\n * Creates aquius.option\n * @param {object} out\n * @param {object} dataObject\n * @return {object} out\n */\n\n var keys, i;\n\n if (\"option\" in dataObject &&\n typeof dataObject.option === \"object\"\n ) {\n keys = Object.keys(dataObject.option);\n if (keys.length > 0) {\n if (\"option\" in out.aquius === false) {\n out.aquius.option = {};\n }\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in out.aquius.option === false) {\n // First option found only since most 
cannot be sensibly merged: Resolve conflicts with config.option\n out.aquius.option[keys[i]] = dataObject.option[keys[i]];\n }\n }\n }\n }\n\n return out;\n }\n\n function buildTranslation(out, dataObject) {\n /**\n * Creates aquius.translation\n * @param {object} out\n * @param {object} dataObject\n * @return {object} out\n */\n\n var keys, i;\n\n if (\"translation\" in dataObject &&\n typeof dataObject.translation === \"object\"\n ) {\n keys = Object.keys(dataObject.translation);\n if (keys.length > 0) {\n if (\"translation\" in out.aquius === false) {\n out.aquius.translation = {};\n }\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in out.aquius.translation) {\n out.aquius.translation[keys[i]] =\n mergePropertyObject(out.aquius.translation[keys[i]], dataObject.translation[keys[i]]);\n } else {\n out.aquius.translation[keys[i]] = dataObject.translation[keys[i]];\n }\n }\n }\n }\n\n return out;\n }\n\n function buildService(out, dataObject) {\n /**\n * Creates aquius.service. Cannot alter or understand actual service indices,\n * thus supposes consistency of service indices within the files merged.\n * Future: Same name check and extension/repositioning of link[1] data\n * @param {object} out\n * @param {object} dataObject\n * @return {object} out\n */\n\n var key, i;\n\n if (\"service\" in dataObject &&\n Array.isArray(dataObject.service) &&\n dataObject.service.length > 0\n ) {\n if (\"service\" in out.aquius === false) {\n out.aquius.service = [];\n }\n for (i = 0; i < dataObject.service.length; i += 1) {\n if (Array.isArray(dataObject.service[i]) &&\n dataObject.service[i].length >= 2 &&\n Array.isArray(dataObject.service[i][0]) &&\n typeof dataObject.service[i][1] === \"object\"\n ) {\n\n key = dataObject.service[i][0].join(\":\");\n if (key in out._.serviceLookup) {\n out.aquius.service[out._.serviceLookup[key]][1] =\n mergePropertyObject(out.aquius.service[out._.serviceLookup[key]][1],\n dataObject.service[i][1], true);\n if 
(dataObject.service[i].length > 2 &&\n typeof dataObject.service[i][2] === \"object\"\n ) {\n if (out.aquius.service[out._.serviceLookup[key]].length > 2) {\n out.aquius.service[out._.serviceLookup[key]][2] =\n mergePropertyObject(out.aquius.service[out._.serviceLookup[key]][2],\n dataObject.service[i][2]);\n } else {\n out.aquius.service[out._.serviceLookup[key]][2] = dataObject.service[i][2];\n }\n }\n } else {\n out.aquius.service.push(dataObject.service[i]);\n out._.serviceLookup[key] = out.aquius.service.length - 1;\n }\n\n }\n }\n }\n\n return out;\n }\n\n function buildLink(out, dataObject, iteration) {\n /**\n * Creates aquius.link, and indirectly place and node\n * @param {object} out\n * @param {object} dataObject\n * @param {interger} iteration - original dataset referenced by index\n * @return {object} out\n */\n\n var key, keys, match, node, product, property, reference, i, j, k, l, m;\n\n if (\"link\" in dataObject &&\n Array.isArray(dataObject.link) &&\n dataObject.link.length > 0\n ) {\n if (\"link\" in out.aquius === false) {\n out.aquius.link = [];\n }\n for (i = 0; i < dataObject.link.length; i += 1) {\n if (dataObject.link[i].length > 3 &&\n Array.isArray(dataObject.link[i][0]) &&\n Array.isArray(dataObject.link[i][1]) &&\n Array.isArray(dataObject.link[i][2]) &&\n typeof dataObject.link[i][3] === \"object\"\n ) {\n\n product = parseProduct(out, dataObject, dataObject.link[i][0], iteration);\n out = product.out;\n node = parseNode(out, dataObject, dataObject.link[i][2], iteration);\n out = node.out;\n property = parseLinkProperty(out, dataObject, dataObject.link[i][3], iteration);\n out = property.out;\n\n // Exact matches, including same product and direction, will merge\n // Further evaluation is marginal and may introduce unwanted aggregation\n key = \"p\" + product.product.join(\":\") + \"n\" + node.node.join(\":\");\n\n keys = [\"b\", \"block\", \"d\", \"direction\"];\n // Optional variables\n for (j = 0; j < keys.length; j += 1) {\n if 
(keys[j] in property.property) {\n key += j + property.property[keys[j]];\n }\n }\n\n keys = [\"h\", \"shared\", \"pickup\", \"s\", \"setdown\", \"split\", \"t\", \"u\", \"duration\", \"m\"];\n // Optional arrays\n for (j = 0; j < keys.length; j += 1) {\n if (keys[j] in property.property) {\n key += j + property.property[keys[j]].join(\":\");\n }\n }\n\n if (key in out._.linkLookup) {\n\n for (j = 0; j < out.aquius.link[out._.linkLookup[key]][1].length; j += 1) {\n if (dataObject.link[i][1][j] !== undefined &&\n dataObject.link[i][1][j] > 0\n ) {\n // Update service\n out.aquius.link[out._.linkLookup[key]][1][j] =\n out.aquius.link[out._.linkLookup[key]][1][j] + dataObject.link[i][1][j];\n }\n }\n\n keys = [\"r\", \"reference\"];\n for (j = 0; j < keys.length; j += 1) {\n if (keys[j] in property.property) {\n if (\"r\" in out.aquius.link[out._.linkLookup[key]][3] ||\n \"reference\" in out.aquius.link[out._.linkLookup[key]][3]\n ) {\n for (k = 0; k < property.property[keys[j]].length; k += 1) {\n\n // Merge each reference if missing\n match = false;\n reference = Object.keys(property.property[keys[j]][k]);\n for (l = 0; l < out.aquius.link[out._.linkLookup[key]][3][keys[j]].length; l += 1) {\n for (m = 0; m < reference.length; m += 1) {\n if (reference[m] in out.aquius.link[out._.linkLookup[key]][3][keys[j]][l] &&\n property.property[keys[j]][k][reference[m]] ===\n out.aquius.link[out._.linkLookup[key]][3][keys[j]][l][reference[m]]\n ) {\n match = true;\n break;\n }\n }\n if (match) {\n break;\n }\n }\n if (match === false) {\n out.aquius.link[out._.linkLookup[key]][3][keys[j]].push(property.property[keys[j]][k]);\n }\n\n }\n } else {\n // New reference\n out.aquius.link[out._.linkLookup[key]][3][keys[j]] = property.property[keys[j]];\n }\n\n }\n }\n\n } else {\n\n out.aquius.link.push([\n product.product,\n dataObject.link[i][1],\n node.node,\n property.property\n ]);\n out._.linkLookup[key] = out.aquius.link.length - 1;\n\n }\n\n }\n }\n }\n\n return out;\n }\n\n 
function parseProduct(out, dataObject, productArray, iteration) {\n /**\n * Reassigns product IDs\n * @param {object} out\n * @param {object} dataObject\n * @param {array} productArray\n * @param {interger} iteration - original dataset referenced by index\n * @return {object} {out, product}\n */\n\n var i;\n\n for (i = 0; i < productArray.length; i += 1) {\n \n if (iteration in out._.productSwitch === false) {\n out._.productSwitch[iteration] = {};\n }\n if (productArray[i] in out._.productSwitch[iteration] === false) {\n out = addProduct(out, dataObject, iteration, productArray[i]);\n }\n productArray[i] = out._.productSwitch[iteration][productArray[i]];\n // Each original dataset takes unique product IDs\n }\n\n return {\n \"out\": out,\n \"product\": productArray\n };\n }\n\n function parseNode(out, dataObject, nodeArray, iteration) {\n /**\n * Reassigns node IDs while building aquius.node and aquius.place\n * @param {object} out\n * @param {object} dataObject\n * @param {array} nodeArray\n * @param {interger} iteration - original dataset referenced by index\n * @return {object} {out, node}\n */\n\n var newNode, i;\n var node = [];\n\n for (i = 0; i < nodeArray.length; i += 1) {\n if (iteration in out._.nodeSwitch === false ||\n nodeArray[i] in out._.nodeSwitch[iteration] === false\n ) {\n out = addNode(out, dataObject, nodeArray[i], iteration);\n }\n if (iteration in out._.nodeSwitch &&\n nodeArray[i] in out._.nodeSwitch[iteration]\n ) {\n newNode = out._.nodeSwitch[iteration][nodeArray[i]];\n if (node.length === 0 ||\n node[node.length - 1] !== newNode\n ) {\n // Unwanted in-order duplicates may emerge from coordinatePrecision changes\n node.push(newNode);\n }\n }\n }\n\n return {\n \"out\": out,\n \"node\": node\n };\n }\n\n function addNode(out, dataObject, originalNode, iteration) {\n /**\n * Processes node and associated references into aquius.node, referenced in out._.nodeSwitch/Lookup\n * @param {object} out\n * @param {object} dataObject\n * @param 
{integer} originalNode\n * @param {interger} iteration - original dataset referenced by index\n * @return {object} out\n */\n\n // Functions shared with gtfsToAquius: withinKeys\n\n var index, key, keys, place, property, x, y, i, j;\n\n function withinKeys(properties, reference) {\n // Checks only properties are within reference - reference may contain other keys\n\n var match, i, j;\n var keys = Object.keys(properties);\n\n for (i = 0; i < reference.length; i += 1) {\n match = true;\n for (j = 0; j < keys.length; j += 1) {\n if (keys[j] in reference[i] === false ||\n reference[i][keys[j]] !== properties[keys[j]]\n ) {\n match = false;\n break;\n }\n }\n if (match === true) {\n return true;\n }\n }\n\n return false;\n }\n\n if (\"node\" in dataObject &&\n Array.isArray(dataObject.node) &&\n dataObject.node[originalNode] !== undefined &&\n dataObject.node[originalNode].length >= 3 &&\n typeof dataObject.node[originalNode][0] === \"number\" &&\n typeof dataObject.node[originalNode][1] === \"number\" &&\n typeof dataObject.node[originalNode][2] === \"object\"\n ) {\n\n x = preciseCoordinate(out, dataObject.node[originalNode][0]);\n y = preciseCoordinate(out, dataObject.node[originalNode][1]);\n key = [x,y].join(\":\");\n if (key in out._.nodeLookup) {\n index = out._.nodeLookup[key];\n } else {\n // Add node\n if (\"node\" in out.aquius === false) {\n out.aquius.node = [];\n }\n out.aquius.node.push([x, y, {}]);\n index = out.aquius.node.length - 1;\n out._.nodeLookup[key] = index;\n }\n if (iteration in out._.nodeSwitch === false) {\n out._.nodeSwitch[iteration] = {};\n }\n out._.nodeSwitch[iteration][originalNode] = index;\n\n property = dataObject.node[originalNode][2];\n keys = Object.keys(property);\n\n for (i = 0; i < keys.length; i += 1) {\n switch (keys[i]) {\n\n case \"r\":\n case \"reference\":\n if (Array.isArray(property[keys[i]]) &&\n property[keys[i]].length > 0\n ) {\n if (keys[i] in out.aquius.node[index][2] === false) {\n 
out.aquius.node[index][2][keys[i]] = [];\n }\n for (j = 0; j < property[keys[i]].length; j += 1) {\n if (typeof property[keys[i]][j] === \"object\" &&\n \"u\" in property[keys[i]][j]\n ) {\n // Only URL reassigned, other references copied blindly\n if (iteration in out._.urlSwitch === false ||\n property[keys[i]][j].u in out._.urlSwitch[iteration] === false\n ) {\n out = addReference(out, dataObject, property[keys[i]][j].u, iteration, \"url\");\n }\n if (iteration in out._.urlSwitch &&\n property[keys[i]][j].u in out._.urlSwitch[iteration]\n ) {\n property[keys[i]][j].u = out._.urlSwitch[iteration][property[keys[i]][j].u];\n }\n }\n if (!withinKeys(property[keys[i]][j], out.aquius.node[index][2][keys[i]])) {\n out.aquius.node[index][2][keys[i]].push(property[keys[i]][j]);\n }\n }\n }\n break;\n\n case \"p\":\n case \"place\":\n if (keys[i] in out.aquius.node[index][2]) {\n // Node can only be in one place\n if (\"place\" in dataObject &&\n Array.isArray(dataObject.place) &&\n dataObject.place[property[keys[i]]] !== undefined &&\n Array.isArray(dataObject.place[place]) &&\n dataObject.place[property[keys[i]]].length >= 3 &&\n typeof dataObject.place[property[keys[i]]][2] === \"object\"\n ) {\n place = Object.keys(dataObject.place[property[keys[i]]][2]);\n for (j = 0; j < place.length; j += 1) {\n if (place[j] in out.aquius.place[out.aquius.node[index][2][keys[i]]] === false) {\n // Add extra property, else ignore\n out.aquius.place[out.aquius.node[index][2][keys[i]]][place[j]] =\n dataObject.place[property[keys[i]]][2][place[j]];\n }\n }\n }\n } else {\n if (iteration in out._.placeSwitch === false ||\n property[keys[i]] in out._.placeSwitch[iteration] === false\n ) {\n out = addPlace(out, dataObject, property[keys[i]], iteration);\n }\n if (iteration in out._.placeSwitch &&\n property[keys[i]] in out._.placeSwitch[iteration]\n ) {\n out.aquius.node[index][2][keys[i]] =\n out._.placeSwitch[iteration][property[keys[i]]];\n }\n }\n break;\n\n default:\n if (keys[i] in 
out.aquius.node[index][2] === false) {\n // Add mystery key if not present at node. Else ignore, with no understanding of how to handle\n out.aquius.node[index][2][keys[i]] = property[keys[i]];\n }\n break;\n\n }\n }\n\n }\n\n return out;\n }\n\n function preciseCoordinate(out, numeric) {\n /**\n * Helper: Standardise precision of x or y coordinate\n * @param {object} out\n * @param {number} numeric - x or y coordinate\n * @return {numver} numeric\n */\n\n var precision = Math.pow(10, out.config.coordinatePrecision);\n\n return Math.round(parseFloat(numeric) * precision) / precision;\n }\n\n function addPlace(out, dataObject, originalPlace, iteration) {\n /**\n * Processes places into aquius.place, referenced in out._.placeSwitch/Lookup\n * @param {object} out\n * @param {object} dataObject\n * @param {integer} originalPlace\n * @param {interger} iteration - original dataset referenced by index\n * @return {object} out\n */\n\n var index, key, keys, x, y, i;\n\n if (\"place\" in dataObject &&\n Array.isArray(dataObject.place) &&\n dataObject.place[originalPlace] !== undefined &&\n dataObject.place[originalPlace].length >= 3 &&\n typeof dataObject.place[originalPlace][0] === \"number\" &&\n typeof dataObject.place[originalPlace][1] === \"number\" &&\n typeof dataObject.place[originalPlace][2] === \"object\"\n ) {\n\n x = preciseCoordinate(out, dataObject.place[originalPlace][0]);\n y = preciseCoordinate(out, dataObject.place[originalPlace][1]);\n key = [x,y].join(\":\");\n if (key in out._.placeLookup) {\n index = out._.placeLookup[key];\n } else {\n // Add place\n if (\"place\" in out.aquius === false) {\n out.aquius.place = [];\n }\n out.aquius.place.push([x, y, {}]);\n index = out.aquius.place.length - 1;\n out._.placeLookup[key] = index;\n }\n if (iteration in out._.placeSwitch === false) {\n out._.placeSwitch[iteration] = {};\n }\n out._.placeSwitch[iteration][originalPlace] = index;\n\n keys = Object.keys(dataObject.place[originalPlace][2]);\n for (i = 0; i < 
keys.length; i += 1) {\n if (keys[i] in out.aquius.place[index][2] === false) {\n out.aquius.place[index][2][keys[i]] = dataObject.place[originalPlace][2][keys[i]];\n }\n }\n\n }\n\n return out;\n }\n\n function addReference(out, dataObject, originalIndex, iteration, type) {\n /**\n * Processes original color/url reference into aquius.reference.color/url, referenced in out._.color/urlSwitch\n * @param {object} out\n * @param {object} dataObject\n * @param {integer} originalIndex\n * @param {interger} iteration - original dataset referenced by index\n * @param {string} type - \"color\" or \"url\"\n * @return {object} out\n */\n\n var index;\n\n if (\"reference\" in dataObject &&\n typeof dataObject.reference === \"object\" &&\n type in dataObject.reference &&\n Array.isArray(dataObject.reference[type]) &&\n dataObject.reference[type][originalIndex] !== undefined\n ) {\n if (\"reference\" in out.aquius === false) {\n out.aquius.reference = {};\n }\n if (type in out.aquius.reference === false) {\n out.aquius.reference[type] = [];\n }\n index = out.aquius.reference[type].indexOf(dataObject.reference[type][originalIndex]);\n if (index === -1) {\n // Add new\n out.aquius.reference[type].push(dataObject.reference[type][originalIndex]);\n index = out.aquius.reference[type].length - 1;\n }\n if (iteration in out._[type + \"Switch\"] === false) {\n out._[type + \"Switch\"][iteration] = {};\n }\n out._[type + \"Switch\"][iteration][originalIndex] = index;\n\n }\n\n return out;\n }\n\n\n function parseLinkProperty(out, dataObject, propertyObject, iteration) {\n /**\n * Parses link properties\n * @param {object} out\n * @param {object} dataObject\n * @param {object} propertyObject\n * @param {interger} iteration - original dataset referenced by index\n * @return {object} {out, node}\n */\n\n var keys, node, nodeArray, reference, switchKeys, type, i, j, k;\n\n keys = [\"b\", \"block\"];\n // Integer block ID\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in 
propertyObject) {\n if (iteration in out._.blockSwitch === false) {\n out._.blockSwitch[iteration] = {};\n }\n if (propertyObject[keys[i]] in out._.blockSwitch[iteration] == false) {\n out._.blockSwitch[iteration][propertyObject[keys[i]]] = out._.blockNext;\n out._.blockNext += 1;\n }\n propertyObject[keys[i]] = out._.blockSwitch[iteration][propertyObject[keys[i]]];\n // Cannot fail, simply mirrors original dataset integrity\n }\n }\n\n keys = [\"h\", \"shared\"];\n // Arrays of Product ID\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in propertyObject &&\n Array.isArray(propertyObject[keys[i]])\n ) {\n for (j = 0; j < propertyObject[keys[i]].length; j += 1) {\n if (iteration in out._.productSwitch === false) {\n out._.productSwitch[iteration] = {};\n }\n if (propertyObject[keys[i]][j] === undefined ||\n propertyObject[keys[i]][j] === null\n ) {\n propertyObject[keys[i]][j] = 0;\n }\n if (propertyObject[keys[i]][j] in out._.productSwitch[iteration] === false) {\n out = addProduct(out, dataObject, iteration, propertyObject[keys[i]][j]);\n }\n propertyObject[keys[i]][j] = out._.productSwitch[iteration][propertyObject[keys[i]][j]];\n // Cannot fail, merely create a product ID unused by existing filter\n }\n }\n }\n\n keys = [\"pickup\", \"s\", \"setdown\", \"split\", \"t\", \"u\"];\n // Arrays of Node ID\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in propertyObject &&\n Array.isArray(propertyObject[keys[i]])\n ) {\n nodeArray = [];\n for (j = 0; j < propertyObject[keys[i]].length; j += 1) {\n if (iteration in out._.nodeSwitch === false ||\n propertyObject[keys[i]][j] in out._.nodeSwitch[iteration] === false\n ) {\n out = addNode(out, dataObject, propertyObject[keys[i]][j], iteration);\n }\n if (iteration in out._.nodeSwitch &&\n propertyObject[keys[i]][j] in out._.nodeSwitch[iteration]\n ) {\n node = out._.nodeSwitch[iteration][propertyObject[keys[i]][j]];\n if (propertyObject[keys[i]].indexOf(node) === -1) {\n // Unwanted duplicates may emerge 
from coordinatePrecision changes\n nodeArray.push(node);\n }\n }\n }\n if (nodeArray.length > 0) {\n propertyObject[keys[i]] = nodeArray;\n } else {\n // Failed because all nodes somehow missing\n delete propertyObject[keys[i]];\n }\n }\n }\n\n keys = [\"r\", \"reference\"];\n // Reference objects\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in propertyObject &&\n Array.isArray(propertyObject[keys[i]])\n ) {\n switchKeys = [\"c\", \"t\", \"u\"];\n for (j = 0; j < propertyObject[keys[i]].length; j += 1) {\n reference = propertyObject[keys[i]][j];\n for (k = 0; k < switchKeys.length; k += 1) {\n if (switchKeys[k] in reference) {\n\n if (switchKeys[k] === \"u\") {\n type = \"url\";\n } else {\n type = \"color\";\n }\n if (iteration in out._[type + \"Switch\"] === false ||\n reference[switchKeys[k]] in out._[type + \"Switch\"][iteration] === false\n ) {\n out = addReference(out, dataObject, reference[switchKeys[k]], iteration, type);\n }\n if (iteration in out._[type + \"Switch\"] &&\n reference[switchKeys[k]] in out._[type + \"Switch\"][iteration]\n ) {\n propertyObject[keys[i]][j][switchKeys[k]] =\n out._[type + \"Switch\"][iteration][reference[switchKeys[k]]];\n } else {\n // Failed because reference missing in original dataset, so remove\n delete propertyObject[keys[i]][j][switchKeys[k]];\n }\n\n }\n }\n }\n }\n }\n\n return {\n \"out\": out,\n \"property\": propertyObject\n };\n }\n\n function buildNetwork(out, dataObject, iteration) {\n /**\n * Creates aquius.network\n * @param {object} out\n * @param {object} dataObject\n * @param {interger} iteration - original dataset referenced by index\n * @return {object} out\n */\n\n var index, network, product, i, j;\n\n function propertyCheck(network, match) {\n\n var i;\n\n function compareProperties(a, b) {\n\n var i;\n var keys = Object.keys(a)\n\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in b === false) {\n return false;\n }\n if (a[keys[i]] !== b[keys[i]]) {\n return false;\n }\n }\n\n return 
true;\n }\n \n for (i = 0; i < network.length; i += 1) {\n if (compareProperties(network[i][1], match[1]) &&\n (network[i].length < 3 ||\n match.length < 3 ||\n compareProperties(network[i][2], match[2]))\n ) {\n return i;\n }\n }\n\n return -1;\n }\n\n if (\"network\" in dataObject &&\n Array.isArray(dataObject.network) &&\n dataObject.network.length > 0\n ) {\n if (\"network\" in out.aquius === false) {\n out.aquius.network = [];\n }\n\n for (i = 0; i < dataObject.network.length; i += 1) {\n if (Array.isArray(dataObject.network[i]) &&\n dataObject.network[i].length > 2 &&\n Array.isArray(dataObject.network[i][0]) &&\n typeof dataObject.network[i][1] === \"object\" &&\n typeof dataObject.network[i][2] === \"object\"\n ) {\n index = propertyCheck(out.aquius.network, dataObject.network[i]);\n // Checks names and extension objects, not filter indices\n if (index !== -1) {\n // Update filter at index\n for (j = 0; j < dataObject.network[i][0].length; j += 1) {\n if (iteration in out._.productSwitch &&\n dataObject.network[i][0][j] in out._.productSwitch[iteration]\n ) {\n product = out._.productSwitch[iteration][dataObject.network[i][0][j]];\n if (out.aquius.network[index][0].indexOf(product) === -1) {\n out.aquius.network[index][0].push(product);\n }\n }\n }\n } else {\n // New filter\n network = [[], dataObject.network[i][1], dataObject.network[i][2]];\n for (j = 0; j < dataObject.network[i][0].length; j += 1) {\n if (iteration in out._.productSwitch &&\n dataObject.network[i][0][j] in out._.productSwitch[iteration]\n ) {\n network[0].push(out._.productSwitch[iteration][dataObject.network[i][0][j]]);\n }\n }\n out.aquius.network.push(network);\n }\n }\n }\n }\n\n return out;\n }\n\n function addProduct(out, dataObject, iteration, product) {\n /**\n * Parses original product into new product. 
Creates aquius.reference.product\n * @param {object} out\n * @param {object} dataObject\n * @param {interger} iteration - original dataset referenced by index\n * @param {interger} product - original product index\n * @return {object} out\n */\n\n var keys, match, i, j;\n var newProduct = true;\n var reference = {};\n\n if (\"reference\" in out.aquius === false) {\n out.aquius.reference = {};\n }\n if (\"product\" in out.aquius.reference === false) {\n out.aquius.reference.product = [];\n }\n\n if (\"reference\" in dataObject &&\n typeof dataObject.reference === \"object\" &&\n \"product\" in dataObject.reference &&\n Array.isArray(dataObject.reference.product) &&\n product >= 0 &&\n product < dataObject.reference.product.length &&\n typeof dataObject.reference.product[product] === \"object\"\n ) {\n reference = dataObject.reference.product[product];\n keys = Object.keys(dataObject.reference.product[product]);\n for (i = 0; i < out.aquius.reference.product.length; i += 1) {\n match = true;\n for (j = 0; j < keys.length; j += 1) {\n if (keys[j] in out.aquius.reference.product[i] === false ||\n dataObject.reference.product[product][keys[j]] !== out.aquius.reference.product[i][keys[j]]\n ) {\n match = false;\n break;\n }\n }\n if (match) {\n out._.productSwitch[iteration][product] = i;\n newProduct = false;\n break;\n }\n }\n }\n\n if (newProduct) {\n out._.productSwitch[iteration][product] = out._.productNext;\n out.aquius.reference.product[out._.productNext] = reference;\n out._.productNext += 1;\n }\n\n return out;\n }\n\n function optimiseNode(out) {\n /**\n * Assigns most frequently referenced nodes to lowest indices and removes unused nodes\n * @param {object} out\n * @return {object} out\n */\n\n // Function shared with GTFS\n\n var keys, newNode, newNodeLookup, nodeArray, i, j;\n var nodeOccurance = {};\n // OldNode: Count of references\n\n for (i = 0; i < out.aquius.link.length; i += 1) {\n nodeArray = out.aquius.link[i][2];\n if (\"u\" in 
out.aquius.link[i][3]) {\n nodeArray = nodeArray.concat(out.aquius.link[i][3].u);\n }\n if (\"s\" in out.aquius.link[i][3]) {\n nodeArray = nodeArray.concat(out.aquius.link[i][3].s);\n }\n if (\"t\" in out.aquius.link[i][3]) {\n nodeArray = nodeArray.concat(out.aquius.link[i][3].t);\n }\n for (j = 0; j < nodeArray.length; j += 1) {\n if (nodeArray[j] in nodeOccurance === false) {\n nodeOccurance[nodeArray[j]] = 1;\n } else {\n nodeOccurance[nodeArray[j]] += 1;\n }\n }\n }\n\n nodeArray = [];\n // Reused, now OldNode by count of occurance\n keys = Object.keys(nodeOccurance);\n for (i = 0; i < keys.length; i += 1) {\n nodeArray.push([keys[i], nodeOccurance[keys[i]]]);\n }\n nodeArray.sort(function(a, b) {\n return a[1] - b[1];\n });\n nodeArray.reverse();\n\n newNode = [];\n // As aquius.node\n newNodeLookup = {};\n // OldNodeIndex: NewNodeIndex\n for (i = 0; i < nodeArray.length; i += 1) {\n newNode.push(out.aquius.node[nodeArray[i][0]]);\n newNodeLookup[nodeArray[i][0]] = i;\n }\n out.aquius.node = newNode;\n\n for (i = 0; i < out.aquius.link.length; i += 1) {\n for (j = 0; j < out.aquius.link[i][2].length; j += 1) {\n out.aquius.link[i][2][j] = newNodeLookup[out.aquius.link[i][2][j]];\n }\n if (\"u\" in out.aquius.link[i][3]) {\n for (j = 0; j < out.aquius.link[i][3].u.length; j += 1) {\n out.aquius.link[i][3].u[j] = newNodeLookup[out.aquius.link[i][3].u[j]];\n }\n }\n if (\"s\" in out.aquius.link[i][3]) {\n for (j = 0; j < out.aquius.link[i][3].s.length; j += 1) {\n out.aquius.link[i][3].s[j] = newNodeLookup[out.aquius.link[i][3].s[j]];\n }\n }\n if (\"t\" in out.aquius.link[i][3]) {\n for (j = 0; j < out.aquius.link[i][3].t.length; j += 1) {\n out.aquius.link[i][3].t[j] = newNodeLookup[out.aquius.link[i][3].t[j]];\n }\n }\n }\n\n return out;\n }\n\n function exitMerge(out, options) {\n /**\n * Called to exit\n * @param {object} out\n * @param {object} options\n */\n\n var error;\n\n delete out._;\n\n if (typeof options === \"object\" &&\n \"callback\" in 
options\n ) {\n if (\"error\" in out) {\n error = new Error(out.error.join(\". \"));\n }\n options.callback(error, out, options);\n return true;\n } else {\n return out;\n }\n }\n\n out = parseConfig(out, options);\n out = coreLoop(out, input);\n\n return exitMerge(out, options);\n}\n\n};\n// EoF"
},
{
"alpha_fraction": 0.5399175882339478,
"alphanum_fraction": 0.5585407018661499,
"avg_line_length": 37.76331329345703,
"blob_id": "bb034ecb4beb23b53bd2ef897e532b6dbd8f84fc",
"content_id": "9b6cc88f343a4d688d465da4787753c5c9b5a63e",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6551,
"license_type": "permissive",
"max_line_length": 144,
"num_lines": 169,
"path": "/scripts/aquius_to_csv.py",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "\"\"\"\nPython script that extracts data from aquius files to CSV, including:\n- operator-service\n- operator-place-service\n- places\n- nodes, including termini (start or end) services and total dwell minutes\nOne service index (position in array) per extraction,\ndefaults to first, but can be set using: --index 0\n\nUsage: python aquius_to_csv.py aquius.json\nSee aquius_to_csv.py -h for further arguments\n\"\"\"\nimport argparse\nimport csv\nimport json\n\n\ndef get_args():\n parser = argparse.ArgumentParser()\n parser.add_argument(\n 'input',\n help='Aquius .json filename with path',\n )\n parser.add_argument(\n '--index',\n dest='index',\n default=0,\n type=int,\n help='Service index (position in array)',\n )\n parser.add_argument(\n '--operator',\n dest='operator',\n default='operator.csv',\n help='Output operator-service CSV',\n )\n parser.add_argument(\n '--place',\n dest='place',\n default='place.csv',\n help='Output place CSV',\n )\n parser.add_argument(\n '--service',\n dest='service',\n default='service.csv',\n help='Output operator-place-service CSV',\n )\n parser.add_argument(\n '--node',\n dest='node',\n default='node.csv',\n help='Output node CSV',\n )\n return parser.parse_args()\n\n\ndef load_json(filepath: str):\n with open(filepath, mode='r') as file:\n return json.load(file)\n\n\ndef save_to_csv(filepath: str, content: list[dict], columns: list):\n with open(filepath, mode='w', encoding='utf-8', newline='') as file:\n writer = csv.DictWriter(file, fieldnames=columns, quoting=csv.QUOTE_MINIMAL)\n writer.writeheader()\n for row in content:\n writer.writerow(row)\n\n\ndef main():\n arguments = get_args()\n use_service_index = getattr(arguments, 'index')\n input = load_json(filepath=getattr(arguments, 'input'))\n if 'node' not in input or 'link' not in input or 'reference' not in input or 'product' not in input['reference']:\n return\n \n operator_place_service: dict[str: dict[str: float]] = {} # Operator: {place: services}\n 
operator_service: dict[str, float] = {} # Operator: service\n place_data: dict[str, list] = {} # Place: [x, y]\n node_data: dict[str, list] = {} # Node: [x, y, name, code, services, services_termini, dwells]\n\n for node_id, node in enumerate(input['node']):\n if isinstance(node, list) and len(node) >= 3 and isinstance(node[2], dict):\n name_readable = \"\"\n name_code = \"\"\n ref = node[2].get(\"r\")\n if isinstance(ref, list):\n if len(ref) > 0 and isinstance(ref[0], dict):\n name_readable = ref[0].get(\"n\", \"\")\n if len(ref) > 1 and isinstance(ref[1], dict):\n name_code = ref[1].get(\"n\", \"\")\n node_data[node_id] = [node[0], node[1], name_readable, name_code, 0, 0, 0]\n\n for link in input['link']:\n if isinstance(link, list) and len(link) >= 4:\n # product, service, nodes, e.g. [[147],[498],[1030,2069,280,5989,12,165,291,23,1008,807,210,2133,1116], {...}} ]\n service_total = link[1][use_service_index] / len(link[0]) # Shared equally if multi-operator\n for operator_id in link[0]:\n operator_name = input['reference']['product'][operator_id].get('en-US', 'UKNOWN') # Assumes language\n operator_service[operator_name] = service_total + operator_service.get(operator_name, 0)\n if operator_name not in operator_place_service:\n operator_place_service[operator_name] = {}\n for place_id in link[2]:\n # e.g. 
[-2.177,53.85,{\"p\":2310,\"r\":[{\"n\":\"E05005268\"}]}]\n place_name = input['node'][place_id][2]['r'][0]['n']\n if place_name not in place_data:\n place_data[place_name] = [input['node'][place_id][0], input['node'][place_id][1]]\n operator_place_service[operator_name][place_name] = service_total + operator_place_service[operator_name].get(place_name, 0)\n dwell = None\n if isinstance(link[3], dict):\n dwell = link[3].get(\"w\")\n if not isinstance(dwell, list) or len(dwell) != len(link[2]):\n dwell = None\n for index, node in enumerate(link[2]):\n if node in node_data:\n node_data[node][4] = node_data[node][4] + service_total\n if index == 0 or index == (len(link[2]) - 1): # Start or end of route\n node_data[node][5] = node_data[node][5] + service_total\n if dwell is not None:\n node_data[node][6] = node_data[node][6] + dwell[index]\n\n output: list[dict] = []\n output_columns = ['operator', 'services']\n for operator, services in operator_service.items():\n output.append({\n 'operator': operator,\n 'services': services\n })\n save_to_csv(filepath=getattr(arguments, 'operator'), content=output, columns=output_columns)\n\n output: list[dict] = []\n output_columns = ['place', 'x', 'y']\n for place, values in place_data.items():\n output.append({\n 'place': place,\n 'x': values[0],\n 'y': values[1]\n })\n save_to_csv(filepath=getattr(arguments, 'place'), content=output, columns=output_columns)\n\n output: list[dict] = []\n output_columns = ['operator', 'place', 'services']\n for operator, values in operator_place_service.items():\n for place, services in values.items():\n output.append({\n 'operator': operator,\n 'place': place,\n 'services': services\n })\n save_to_csv(filepath=getattr(arguments, 'service'), content=output, columns=output_columns)\n\n output: list[dict] = []\n output_columns = ['x', 'y', 'name', 'code', 'services', 'services_termini', 'dwell_minutes']\n for _, values in node_data.items():\n output.append({\n 'x': values[0],\n 'y': values[1],\n 
'name': str(values[2]),\n 'code': str(values[3]),\n 'services': round(values[4], 2),\n 'services_termini': round(values[5], 2),\n 'dwell_minutes': round(values[6], 2),\n })\n save_to_csv(filepath=getattr(arguments, 'node'), content=output, columns=output_columns)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.7616643309593201,
"alphanum_fraction": 0.7875176668167114,
"avg_line_length": 105.4731216430664,
"blob_id": "1eba336f02eafe1645bd9eadafa1f5370ccaa507",
"content_id": "5275bec0afa11e674049e88354da156c6b7a6633",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 9909,
"license_type": "permissive",
"max_line_length": 1301,
"num_lines": 93,
"path": "/live/README.md",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "# Live Demonstrations\n\n> _Here+Us_ - An Alternative Approach to Public Transport Network Discovery\n\nThis section contains examples of [Aquius](https://timhowgego.github.io/Aquius/) displaying public transport networks, plus hosted tools for constructing Aquius datasets:\n\n* [Barcelona](#barcelona) - comprehensive urban network built from multiple sources, including strategic analysis of network changes.\n* [FlixBus](#flixbus) - long-distance international European network, containing extensive cabotage (pickup and setdown) restrictions.\n* [Great Britain](#great-britain) - national and local networks.\n* [New York City](#new-york-city) - one of the largest American urban networks.\n* [Paris](#paris) - one of the largest European urban networks.\n* [Spanish Railways](#spanish-railways) - traditional national railway network, including split trains and mixed products.\n* [York](#york) - basic small city network, including lasso routes.\n* [Tools](#tools) - build networks from GTFS or GeoJSON, merge network together, or bulk analyse Aquius data.\n\n## Barcelona\n\n[](https://timhowgego.github.io/Aquius/live/amb-2018/#r1/p2/s4/z13/tca-ES)\n\n[Àrea Metropolitana de Barcelona (26 November-2 December 2018)](https://timhowgego.github.io/Aquius/live/amb-2018/): Snapshot of all non-tourist scheduled public transport within the [Àrea Metropolitana de Barcelona](http://www.amb.cat/) (AMB) - [more information about this dataset](https://timhowgego.github.io/AquiusData/es-amb/).\n\n[AMB Vortex (26 November-2 December 2018)](https://timhowgego.github.io/Aquius/live/amb-vortex-2018/): As above, but in a fixed coordinate grid, ideal for strategic analysis.\n\nThe AMB Vortex map includes sample proposed network changes - [Diagonal tram](http://ajuntament.barcelona.cat/mobilitat/tramviaconnectat/es) (direct on-street route), [L9/10 connection](https://ca.wikipedia.org/wiki/L%C3%ADnia_9_del_metro_de_Barcelona) (with all proposed stations, except those in Zona Franca), 
and [Rodalies 2026](http://territori.gencat.cat/web/.content/home/01_departament/plans/plans_sectorials/mobilitat/pla_dinfraestructures_del_transport_de_catalunya_2006-2026/pitc11transportpublic_tcm32-35012.pdf) (suburban railway as proposed in 2006). These examples are not intended to replace detailed (but specific) analysis of proposals in isolation. Rather to show how quite different proposals can be considered in their shared context - which for cities can otherwise be extremely difficult to comprehend. Aquius doesn't just allow networks to better understood: Even without a proper interface (proposed routes currently need to be outlined in GIS software, base networks filtered appropriately, and the combination rebuilt), the networks demonstrated here can be built and probed in a matter of minutes.\n\n## FlixBus\n\n[FlixBus (20-26 August 2018)](https://timhowgego.github.io/Aquius/live/flixbus-aug-2018/): Snapshot of all European FlixBus (and FlixTrain) services - [more information about this dataset](https://timhowgego.github.io/AquiusData/eu-interbus/).\n\nThe FlixBus network is almost impossible to communicate on a fixed map because its service patterns are often defined by cabotage restrictions, especially in Iberia and the Balkans. Such cabotage restrictions may prevent FlixBus conveying passengers _within_ countries or regions, often rendering the destinations available to passengers quite different to the route taken by the vehicle. This added complexity is not a limitation for [Aquius](https://timhowgego.github.io/Aquius/), which always draws its route map from a user-specified _here_. FlixBus represents an extreme test case of cabotaged international operation, since on some routes almost every place served is defined with a different set of boarding and alighting restrictions. 
FlixBus [host their own dynamic network map](https://www.flixbus.co.uk/bus-routes), however this can feel laggy, and does not give any indication of service frequency - destinations with one bus a week are shown just as prominantly as destinations with one bus an hour. The Aquius dataset is relatively large - almost 1 MegaByte uncompressed, even without headcode information and day-by-day service filters (which can potentially be [built from GTFS](https://timhowgego.github.io/Aquius/live/gtfs/)) - but is smoother to use and more indicative of services.\n\n## Great Britain \n\n[Great Britain Public Transport by Ward, weekdays/operating group April 2019](https://timhowgego.github.io/Aquius/live/gb-pt-ward-2019/), [all days/mode February 2020](https://timhowgego.github.io/Aquius/live/gb-pt-ward-2020/), and [all days/mode February 2021](https://timhowgego.github.io/Aquius/live/gb-pt-ward-2021/) (in COVID-19 lockdown): Snapshot of all scheduled services except aviation, mapped between Wards, ideal for strategic analysis of local transport - [more information about this dataset](https://timhowgego.github.io/AquiusData/uk-pt/).\n\n[Great Britain Public Transport by District, weekdays April 2019](https://timhowgego.github.io/Aquius/live/gb-pt-district-2019/): Snapshot of all weekday scheduled services except aviation, mapped between Districts, ideal for strategic analysis of longer-distance transport - [more information about this dataset](https://timhowgego.github.io/AquiusData/uk-pt/).\n\nSimilar maps are available by [traditional county](https://timhowgego.github.io/Aquius/live/gb-pt-county-2019/) and [region/nation](https://timhowgego.github.io/Aquius/live/gb-pt-region-2019/), primarily for long-distance analysis.\n\nAll are Vortex-style maps that groups stops by their administrative geography. 
Such aggregation just about makes it possible to contain any entire country within a single dataset, although the Ward maps is large (over 5 MegaBytes).\n\n[Great Britain National Rail (weekdays, January 2019)](https://timhowgego.github.io/Aquius/live/gb-rail-2019/): Snapshot of all weekday passenger train service patterns across Network Rail - [more information about this dataset](https://timhowgego.github.io/AquiusData/uk-rail/).\n\nThe GB Rail dataset makes extensive use of dummy waypoints to show the route taken by trains, regardless of their stops. This results in a relatively neat map where service totals at stations are not automatically the same as the sum of the links passing those stations.\n\n## New York City\n\n[New York City (January 2019)](https://timhowgego.github.io/Aquius/live/nyc-2019/): Snapshot of every scheduled public transportation service within the five Boroughs of New York City - [more information about this dataset](https://timhowgego.github.io/AquiusData/us-nyc/).\n\n## Paris\n\n[](https://timhowgego.github.io/Aquius/live/petite-paris-2019/#x2.28/y48.8907/z12/c2.24224/k48.88989/m13/tfr-FR/n1)\n\n[Inner Paris (7-13 January 2019)](https://timhowgego.github.io/Aquius/live/petite-paris-2019/): Snapshot of every scheduled public transport service within the Petite Couronne of Paris - [more information about this dataset](https://timhowgego.github.io/AquiusData/fr-paris/).\n\n## Spanish Railways\n\n[](https://timhowgego.github.io/Aquius/live/es-rail-20-jul-2018/#x-3.296/y39.092/z7/c-3.966/k38.955/m8/s7/vlphn/tes-ES)\n\n[Spanish Railways (Friday 20 July 2018)](https://timhowgego.github.io/Aquius/live/es-rail-20-jul-2018/): Snapshot of all non-tourist passenger train services within Spain - [more information about this dataset](https://timhowgego.github.io/AquiusData/es-rail/).\n\n[Renfe Obligación de Servicio Público](https://timhowgego.github.io/Aquius/live/renfe-osp-20-jul-2018/): As Friday 20 July 2018 above, but with custom filters for each 
Renfe state supported product, plus administrative boundaries.\n\n[Renfe LD/MD (10-16 December 2018)](https://timhowgego.github.io/Aquius/live/renfe-ld-md-dec-2018/): GTFS extract from the first batch of Renfe open data. The extract excludes Cercanías (most suburban), Feve (metre gauge), Trenhotel (sleeper) and non-domestic services. Unlike earlier data, the extract summarises both directions across a full week, including crude time period analysis.\n\n## York\n\n[York (January 2019)](https://timhowgego.github.io/Aquius/live/york-2019/): Snapshot of all bus services within the City of York - [more information about this dataset](https://timhowgego.github.io/AquiusData/uk-york/).\n\n[Transdev Blazefield (September 2019)](https://timhowgego.github.io/Aquius/live/blazefield-2019/): Operator-specific network built from GTFS released as part of the [Department for Transport's Bus Open Data](https://twitter.com/busopendata) initiative.\n\n## Tools\n\n### GTFS to Aquius\n\n[GTFS to Aquius](https://timhowgego.github.io/Aquius/live/gtfs/) - Tool to convert single [General Transit Feed Specification](https://developers.google.com/transit/gtfs/reference/) files into Aquius datasets. Also see [GTFS to Aquius documentation](https://timhowgego.github.io/Aquius/#gtfs-to-aquius).\n\n### GeoJSON to Aquius\n\n[GeoJSON to Aquius](https://timhowgego.github.io/Aquius/live/geojson/) - Tool to create Aquius datasets from bespoke geospatial networks. Also see [GeoJSON to Aquius documentation](https://timhowgego.github.io/Aquius/#geojson-to-aquius).\n\n### Merge Aquius\n\n[Merge Aquius](https://timhowgego.github.io/Aquius/live/merge/) - Tool to merge Aquius datasets together. 
Also see [Merge Aquius documentation](https://timhowgego.github.io/Aquius/#merge-aquius).\n\n### Aquius Analysis\n\n[Aquius Analysis](https://timhowgego.github.io/Aquius/live/analysis/) - Work in progress tool for bulk analysis of Aquius datasets.\n\n## More Information\n\n* [User FAQ](https://timhowgego.github.io/Aquius/#user-faq)\n* [Host or Create](https://timhowgego.github.io/Aquius/#quick-setup)\n"
},
{
"alpha_fraction": 0.5003558397293091,
"alphanum_fraction": 0.5085431933403015,
"avg_line_length": 31.656814575195312,
"blob_id": "5ce5c4ec54f47f6a13f781f9deb2a215ba05ad66",
"content_id": "14ffa02cdd5b2524ce029d1cf37583dccb9361b8",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": true,
"language": "JavaScript",
"length_bytes": 144735,
"license_type": "permissive",
"max_line_length": 163,
"num_lines": 4432,
"path": "/dist/gtfs.js",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "/*eslint-env browser*/\n/*global aquius*/\n\nvar gtfsToAquius = gtfsToAquius || {\n/**\n * @namespace GTFS to Aquius\n * @version 0.1\n * @copyright MIT License\n */\n\n\n\"init\": function init(configId) {\n /**\n * Initialisation with user interface\n * @param {string} configId - Id of DOM element within which to build UI\n * @return {boolean} initialisation success\n */\n \"use strict\";\n\n function createElement(elementType, valueObject, styleObject) {\n /**\n * Helper: Creates DOM element\n * @param {string} elementType\n * @param {object} valueObject - optional DOM value:content pairs\n * @param {object} styleObject - optional DOM style value:content pairs\n * @return {object} DOM element\n */\n\n var values, styles, i;\n var element = document.createElement(elementType);\n\n if (typeof valueObject !== \"undefined\") {\n values = Object.keys(valueObject);\n for (i = 0; i < values.length; i += 1) {\n element[values[i]] = valueObject[values[i]];\n }\n }\n\n if (typeof styleObject !== \"undefined\") {\n styles = Object.keys(styleObject);\n for (i = 0; i < styles.length; i += 1) {\n element.style[styles[i]] = styleObject[styles[i]];\n }\n }\n\n return element;\n }\n\n function createTabulation(tableData, tableHeader, tableDataStyle, caption) {\n /**\n * Helper: Creates table DOM\n * @param {object} tableData - array (rows) of arrays (cell content - text or DOM)\n * @param {object} tableHeader - array of header cell names\n * @param {object} tableDataStyle - optional array of data cell styles\n * @param {string} caption - optional\n * @return {object} DOM element\n */\n\n var td, th, tr, i, j;\n var table = createElement(\"table\");\n\n if (typeof caption !== \"undefined\") {\n table.appendChild(createElement(\"caption\", {\n \"textContent\": caption\n }));\n }\n\n if (tableHeader.length > 0) {\n tr = createElement(\"tr\");\n for (i = 0; i < tableHeader.length; i += 1) {\n if (typeof tableHeader[i] === \"object\") {\n th = createElement(\"th\");\n 
th.appendChild(tableHeader[i]);\n tr.appendChild(th);\n } else {\n tr.appendChild(createElement(\"th\", {\n \"textContent\": tableHeader[i].toString()\n }));\n }\n }\n table.appendChild(tr);\n }\n\n for (i = 0; i < tableData.length; i += 1) {\n tr = createElement(\"tr\");\n for (j = 0; j < tableData[i].length; j += 1) {\n if (typeof tableData[i][j] === \"object\") {\n td = createElement(\"td\");\n td.appendChild(tableData[i][j]);\n } else {\n td = createElement(\"td\", {\n \"textContent\": tableData[i][j].toString()\n });\n }\n if (typeof tableDataStyle !== \"undefined\" &&\n tableDataStyle.length > j\n ) {\n td.className = tableDataStyle[j];\n }\n tr.appendChild(td);\n }\n table.appendChild(tr);\n }\n\n return table;\n }\n\n function initialiseUI(vars, loadFunction, prompt) {\n /**\n * Creates user interface in its initial state\n * @param {object} vars - internal data references (including configId)\n * @param {object} loadFunction - function to load files\n * @param {string} prompt - label\n * @return {boolean} initialisation success\n */\n\n var button, form, label;\n var baseDOM = document.getElementById(vars.configId);\n\n if (!baseDOM) {\n return false;\n }\n\n while (baseDOM.firstChild) {\n baseDOM.removeChild(baseDOM.firstChild);\n }\n\n if (!Object.keys ||\n ![].indexOf ||\n typeof JSON !== \"object\" ||\n !window.File ||\n !window.FileReader ||\n !window.Blob\n ) {\n baseDOM.appendChild(\n document.createTextNode(\"Browser not supported: Try a modern browser\"));\n return false;\n }\n\n document.head.appendChild(createElement(\"script\", {\n \"src\": \"https://timhowgego.github.io/Aquius/dist/aquius.min.js\",\n \"type\": \"text/javascript\"\n }));\n // Optional in much later post-processing, so no callback\n\n form = createElement(\"form\", {\n \"className\": vars.configId + \"Input\"\n });\n\n label = createElement(\"label\", {\n \"textContent\": prompt\n });\n \n button = createElement(\"input\", {\n \"id\": vars.configId + \"ImportFiles\",\n 
\"multiple\": \"multiple\",\n \"name\": vars.configId + \"ImportFiles[]\",\n \"type\": \"file\"\n });\n button.addEventListener(\"change\", (function(){\n loadFunction(vars);\n }), false);\n label.appendChild(button);\n\n form.appendChild(label);\n\n form.appendChild(createElement(\"span\", {\n \"id\": vars.configId + \"Progress\"\n }));\n\n baseDOM.appendChild(form);\n\n baseDOM.appendChild(createElement(\"div\", {\n \"id\": vars.configId + \"Output\"\n }));\n\n return true;\n }\n\n function initialiseProcess(vars) {\n /**\n * Initiates file import and Aquius creation\n * @param {object} vars - internal data references\n */\n\n var reader;\n var gtfs = {};\n var nonText = {};\n var fileDOM = document.getElementById(vars.configId + \"ImportFiles\");\n var options = {\n \"_vars\": vars,\n \"callback\": outputProcess\n };\n var processedFiles = 0;\n var progressDOM = document.getElementById(vars.configId + \"Progress\");\n var totalFiles = 0;\n\n function loadFiles(files) {\n\n var theChunk, chunkedSize, i;\n var maxFilesize = 1e8;\n // In bytes, just under 100MB, beyond which source files will be loaded in parts\n var fileList = [];\n // [file, part index]\n\n for (i = 0; i < files.length; i += 1) {\n chunkedSize = 0\n while (files[i].size >= chunkedSize) {\n fileList.push([files[i], parseInt(chunkedSize / maxFilesize, 10)]);\n chunkedSize += maxFilesize;\n }\n }\n\n totalFiles = fileList.length;\n\n for (i = 0; i < totalFiles; i += 1) {\n reader = new FileReader();\n reader.onerror = (function(evt, theFile) {\n outputError(new Error(\"Could not read \" + theFile[0].name), vars);\n reader.abort();\n });\n reader.onload = (function(theFile) {\n return function() {\n try {\n onLoad(theFile[0].name, theFile[1], this.result);\n } catch (err) {\n outputError(err, vars);\n }\n };\n })(fileList[i]);\n theChunk = fileList[i][0].slice(fileList[i][1] * maxFilesize,\n (fileList[i][1] * maxFilesize) + maxFilesize);\n reader.readAsText(theChunk);\n }\n }\n\n function 
onLoad(filename, position, result) {\n\n var filenameParts, json, key, keys, i;\n\n filenameParts = filename.toLowerCase().split(\".\");\n\n if (filenameParts.length > 1 &&\n filenameParts[filenameParts.length - 1] === \"txt\"\n ) {\n\n key = filenameParts.slice(0, filenameParts.length - 1).join(\".\");\n if (key in gtfs === false) {\n gtfs[key] = [];\n }\n gtfs[key][position] = result;\n\n } else {\n\n key = filenameParts.join(\".\");\n if (key in nonText === false) {\n nonText[key] = [];\n }\n nonText[key][position] = result;\n\n }\n\n processedFiles += 1;\n\n if (processedFiles === totalFiles) {\n\n keys = Object.keys(nonText);\n for (i = 0; i < keys.length; i += 1) {\n try {\n json = JSON.parse(nonText[keys[i]].join(\"\"));\n if (\"type\" in json &&\n json.type === \"FeatureCollection\"\n ) {\n options.geojson = json;\n } else {\n options.config = json;\n }\n } catch (err) {\n if (err instanceof SyntaxError === false) {\n outputError(err, vars);\n }\n // Else it is not JSON\n }\n }\n\n vars.process(gtfs, options);\n }\n }\n\n if (fileDOM !== null &&\n fileDOM.files.length > 0\n ) {\n fileDOM.disabled = true;\n if (progressDOM !== null) {\n while (progressDOM.firstChild) {\n progressDOM.removeChild(progressDOM.firstChild);\n }\n progressDOM.textContent = \"Working...\";\n }\n loadFiles(fileDOM.files);\n }\n }\n\n function outputProcess(error, out, options) {\n /**\n * Called after Aquius creation\n * @param {object} error - Error object or undefined\n * @param {object} out - output, including keys aquius and config\n * @param {object} options - as sent, including _vars\n */\n\n var caption, content, keys, i;\n var vars = options._vars;\n var fileDOM = document.getElementById(vars.configId + \"ImportFiles\");\n var outputDOM = document.getElementById(vars.configId + \"Output\");\n var progressDOM = document.getElementById(vars.configId + \"Progress\");\n\n if (error !== undefined) {\n outputError(error, vars);\n return false;\n }\n\n if (!outputDOM ||\n 
!progressDOM\n ) {\n return false;\n }\n\n while (outputDOM.firstChild) {\n outputDOM.removeChild(outputDOM.firstChild);\n }\n\n while (progressDOM.firstChild) {\n progressDOM.removeChild(progressDOM.firstChild);\n }\n\n if (fileDOM !== null) {\n fileDOM.disabled = false;\n }\n\n outputDOM.className = \"\";\n\n keys = [\"aquius\", \"config\"];\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in out &&\n Object.keys(out[keys[i]]).length > 0\n ) {\n\n if (keys[i] === \"config\") {\n content = JSON.stringify(out[keys[i]], null, 1);\n } else {\n content = JSON.stringify(out[keys[i]]);\n }\n\n progressDOM.appendChild(createElement(\"a\", {\n \"className\": vars.configId + \"Download\",\n \"href\": window.URL.createObjectURL(\n new Blob([content],\n {type: \"application/json;charset=utf-8\"})\n ),\n \"download\": keys[i] + \".json\",\n \"textContent\": \"Save \" + keys[i] + \".json\",\n \"role\": \"button\"\n }));\n\n }\n }\n\n if (\"aquius\" in out &&\n \"translation\" in out.aquius &&\n \"en-US\" in out.aquius.translation &&\n \"link\" in out.aquius.translation[\"en-US\"]\n ) {\n caption = out.aquius.translation[\"en-US\"].link;\n } else {\n caption = \"Services\";\n }\n\n if (typeof aquius !== \"undefined\" &&\n \"aquius\" in out\n ) {\n // If aquius has not loaded by now, skip the map\n outputDOM.appendChild(createElement(\"div\", {\n \"id\": vars.configId + \"Map\"\n }, {\n \"height\": (document.documentElement.clientHeight / 2) + \"px\"\n }));\n aquius.init(vars.configId + \"Map\", {\n \"dataObject\": out.aquius,\n \"uiStore\": false\n });\n }\n\n if (\"warning\" in out) {\n for (i = 0; i < out.warning.length; i += 1) {\n outputDOM.appendChild(createElement(\"p\", {\n \"textContent\": out.warning[i]\n }));\n }\n }\n\n if (\"networkTable\" in out) {\n outputDOM.appendChild(createTabulation(out.networkTable.data, out.networkTable.header,\n out.networkTable.format, caption + \" by Network\"));\n }\n\n if (\"summaryTable\" in out) {\n 
outputDOM.appendChild(createTabulation(out.summaryTable.data, out.summaryTable.header,\n out.summaryTable.format, caption + \" % by Hour (scheduled only)\"));\n }\n }\n\n function outputError(error, vars) {\n /**\n * Output errors to user interface, destroying any prior Output\n * @param {object} error - error Object\n * @param {object} vars - internal data references\n */\n\n var message;\n var fileDOM = document.getElementById(vars.configId + \"ImportFiles\");\n var outputDOM = document.getElementById(vars.configId + \"Output\");\n var progressDOM = document.getElementById(vars.configId + \"Progress\");\n \n if (error !== undefined &&\n outputDOM !== null\n ) {\n\n while (outputDOM.firstChild) {\n outputDOM.removeChild(outputDOM.firstChild);\n }\n\n outputDOM.className = vars.configId + \"OutputError\";\n\n if (\"message\" in error) {\n message = error.message;\n } else {\n message = JSON.stringify(error);\n }\n\n outputDOM.appendChild(createElement(\"p\", {\n \"textContent\": \"Error: \" + message\n }));\n\n if (progressDOM !== null) {\n while (progressDOM.firstChild) {\n progressDOM.removeChild(progressDOM.firstChild);\n }\n progressDOM.textContent = \"Failed\";\n }\n if (fileDOM !== null) {\n fileDOM.disabled = false;\n }\n }\n }\n\n return initialiseUI({\n \"configId\": configId,\n \"process\": this.process\n },\n initialiseProcess,\n \"GTFS as .txt, optional config, and optional GeoJSON to process:\");\n},\n\n\n\"process\": function process(gtfs, options) {\n /**\n * Creates Aquius dataObject. 
May be called independently\n * @param {object} gtfs - key per GTFS file slug, value array of raw text content of GTFS file\n * @param {object} options - geojson, config, callback\n * @return {object} without callback: possible keys aquius, config, error\n * with callback: callback(error, out, options)\n */\n \"use strict\";\n\n var out = {\n \"_\": {},\n // Internal objects\n \"aquius\": {},\n // Output file\n \"config\": {},\n // Output config\n \"summary\": {}\n // Output quality analysis\n };\n\n function formatGtfsDate(dateMS) {\n /**\n * Helper: Converts millisecond date to GTFS date\n * @param {integer} dateMS - milliseconds from epoch\n * @return {string} date in GTFS (YYYYMMDD) format\n */\n\n var dateDate = new Date(dateMS);\n var dateString = dateDate.getFullYear().toString();\n if (dateDate.getMonth() < 9) {\n dateString += \"0\";\n }\n dateString += (dateDate.getMonth() + 1).toString();\n if (dateDate.getDate() < 10) {\n dateString += \"0\";\n }\n dateString += dateDate.getDate().toString();\n\n return dateString;\n }\n\n function unformatGtfsDate(dateString) {\n /**\n * Helper: Converts GTFS date to millisecond date\n * @param {string} dateString - date in GTFS (YYYYMMDD) format\n * @return {integer} milliseconds from epoch\n */\n\n if (dateString.length < 8) {\n return 0;\n // Fallback to 1970\n }\n\n return Date.UTC(\n dateString.slice(0, 4),\n dateString.slice(4, 6) - 1,\n dateString.slice(6, 8)\n );\n }\n \n function haversineDistance(lat1, lng1, lat2, lng2) {\n /**\n * Helper: Earth distance. 
Modified from Leaflet CRS.Earth.js\n * @param {float} lat1 - latitude from\n * @param {float} lng1 - longitude from\n * @param {float} lat2 - latitude to\n * @param {float} lng2 - longitude to\n * @return {float} distance in metres\n */\n\n var rad = Math.PI / 180;\n var sinDLat = Math.sin((lat2 - lat1) * rad / 2);\n var sinDLon = Math.sin((lng2 - lng1) * rad / 2);\n var a = sinDLat * sinDLat + Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * sinDLon * sinDLon;\n var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));\n\n return 6371000 * c;\n }\n\n function parseConfig(out, options) {\n /**\n * Parse options.config into valid out.config. Defines defaults\n * @param {object} out\n * @param {object} options\n * @return {object} out\n */\n\n var i;\n var defaults = {\n \"allowBlock\": false,\n // Process block_id as sequence of inter-operated trips, else blocked trips are processed separately\n \"allowDuration\": false,\n // Include array of total minutes per trip and link (caution - must set service time bands &\n // while any non-time banded service currently produces incorrect totals)\n \"allowDwell\": false,\n // Include array of average minute dwell times per trip by node\n \"allowCabotage\": false,\n // Process duplicate vehicle trips with varying pickup/setdown restrictions as cabotage (see docs)\n \"allowCode\": true,\n // Include public stop codes in node references (taken from GTFS stop_code unless codeAsStopId = true)\n \"allowColor\": true,\n // Include route-specific colors if available\n \"allowDuplication\": false,\n // Count duplicate vehicle trips\n \"allowHeadsign\": false,\n // Include trip-specific headsigns within link references\n \"allowName\": true,\n // Include stop names (increases file size)\n \"allowNoc\": false,\n // Append Great Britain NOC to operator name if available (for newly generated network filter only)\n \"allowRoute\": true,\n // Include route-specific short names\n \"allowRouteLong\": false,\n // Include route-specific long names\n \"allowRouteUrl\": true,\n // Include service URLs (increases file size)\n \"allowSplit\": false,\n // Count trips 
on the same route which share at least two, but not all, stop times as \"split\" at their unique stops\n \"allowStopUrl\": true,\n // Include stop URLs (increases file size)\n \"allowWaypoint\": true,\n // Include stops with no pickup and no setdown as dummy routing nodes\n \"allowZeroCoordinate\": true,\n // Include stops with 0,0 coordinates, else stops are skipped\n \"codeAsStopId\": false,\n // stop_id is used in place of stop_code (requires allowCode = true)\n \"coordinatePrecision\": 5,\n // Coordinate decimal places (smaller values tend to group clusters of stops)\n \"duplicationRouteOnly\": true,\n // Restrict duplications check to services on the same route\n \"fromDate\": formatGtfsDate(Date.now()),\n // Start date for service pattern analysis (inclusive)\n \"inGeojson\": true,\n // If geojson boundaries are provided, only services at stops within a boundary will be analysed. If false, assigns stops to nearest boundary (by centroid)\n \"isCircular\": [],\n // GTFS \"route_id\" (strings) to be referenced as circular. If empty, GTFS to Aquius follows own logic (see docs)\n \"meta\": {\n \"schema\": \"0\"\n },\n // As Data Structure meta key\n \"mirrorLink\": true,\n // Services mirrored in reverse are combined into the same link. 
Reduces filesize, but can distort service averages\n \"networkFilter\": {\n \"type\": \"agency\"\n },\n // Group services by, using network definitions (see docs)\n \"nodeGeojson\": {},\n // Cache containing node \"x:y\": Geojson \"x:y\" (both coordinates at coordinatePrecision) (see docs)\n \"option\": {},\n // As Data Structure/Configuration option key\n \"populationProperty\": \"population\",\n // Field name in GeoJSON properties containing the number of people\n \"placeNameProperty\": \"name\",\n // Field name in GeoJSON properties containing the name or identifier of the place\n \"productOverride\": {},\n // Properties applied to all links with the same product ID (see docs)\n \"routeExclude\": [],\n // GTFS \"route_id\" (strings) to be excluded from analysis\n \"routeInclude\": [],\n // GTFS \"route_id\" (strings) to be included in analysis, all if empty\n \"routeOverride\": {},\n // Properties applied to routes, by GTFS \"route_id\" key (see docs)\n \"serviceFilter\": {},\n // Group services by, using service definitions (see docs)\n \"servicePer\": 1,\n // Service average per period in days (1 gives daily totals, 7 gives weekly totals)\n \"splitMinimumJoin\": 2,\n // Minimum number of concurrent nodes that split services must share\n \"stopExclude\": [],\n // GTFS \"stop_id\" (strings) to be excluded from analysis\n \"stopInclude\": [],\n // GTFS \"stop_id\" (strings) to be included in analysis, all if empty\n \"stopOverride\": {},\n // Properties applied to stops, by GTFS \"stop_id\" key (see docs)\n \"stopPlace\": false,\n // Group and merge stops by their respective place centroid (assumes geojson)\n \"toDate\": formatGtfsDate(Date.now() + 5184e5),\n // End date for service pattern analysis (inclusive), thus +6 days for 1 week\n \"translation\": {}\n // As Data Structure/Configuration translation key\n };\n var keys = Object.keys(defaults);\n\n if (typeof options !== \"object\") {\n options = {};\n }\n\n for (i = 0; i < keys.length; i += 1) {\n if 
(\"config\" in options &&\n keys[i] in options.config &&\n typeof options.config[keys[i]] === typeof defaults[keys[i]]\n ) {\n out.config[keys[i]] = options.config[keys[i]];\n } else {\n out.config[keys[i]] = defaults[keys[i]];\n }\n }\n\n out._.circular = {};\n for (i = 0; i < out.config.isCircular.length; i += 1) {\n out._.circular[out.config.isCircular[i]] = out.config.isCircular[i];\n // Indexed for speed\n }\n\n return out;\n }\n\n function parseCsv(out, slug, csv) {\n /**\n * Helper: Parses array of strings into line-by-line-array based on CSV structure\n * Base logic via Jezternz. Runtime about 1 second per 20MB of data\n * @param {object} out\n * @param {string} slug - GTFS name slug\n * @param {array} csv\n * @return {object} out\n */\n\n var firstLine, lastIndex, match, matches, residual, i, j;\n var columns = -1;\n var pattern = new RegExp((\"(\\\\,|\\\\r?\\\\n|\\\\r|^)(?:\\\"([^\\\"]*(?:\\\"\\\"[^\\\"]*)*)\\\"|([^\\\\,\\\\r\\\\n]*))\"),\"gi\");\n var indexColumn = -1;\n var line = [];\n\n function processLine(out, line, slug, columns, indexColumn) {\n\n if (line.length === columns) {\n if (indexColumn !== -1) {\n if (line[indexColumn] in out.gtfs[slug] === false) {\n out.gtfs[slug][line[indexColumn]] = [];\n }\n out.gtfs[slug][line[indexColumn]].push(line);\n } else {\n out.gtfs[slug].push(line);\n }\n }\n\n return out;\n }\n\n if (!Array.isArray(csv)) {\n csv = [csv];\n }\n\n for (i = 0; i < csv.length; i += 1) {\n\n matches = pattern.exec(csv[i]);\n lastIndex = line.length - 1;\n firstLine = true;\n\n while (matches !== null) {\n\n if (matches[1].length &&\n matches[1] !== \",\"\n ) {\n\n if (i > 0 &&\n firstLine === true\n ) {\n if (line.length > columns &&\n lastIndex < line.length - 1\n ) {\n // File split mid-value, so merge lastIndex value and the following index value\n // Header must be wholly in first chunk - logic breaks with ridiculously small file chunks\n residual = line.slice(0, lastIndex);\n residual.push(line[lastIndex] + line[lastIndex 
+ 1]);\n line = residual.concat(line.slice(lastIndex + 2));\n }\n firstLine = false;\n }\n\n if (columns === -1) {\n // Header\n columns = line.length;\n for (j = 0; j < line.length; j += 1) {\n if (line[j] in out.gtfsHead[slug]) {\n out.gtfsHead[slug][line[j]] = j;\n }\n }\n if (slug in out.gtfsRequired) {\n for (j = 0; j < out.gtfsRequired[slug].length; j += 1) {\n if (out.gtfsHead[slug][out.gtfsRequired[slug][j]] === -1) {\n if (\"error\" in out === false) {\n out.error = [];\n }\n out.error.push(\"Missing value \" + out.gtfsRequired[slug][j] + \" in GTFS \"+ slug + \".txt\");\n return out;\n }\n }\n }\n if (slug in out.gtfsIndex) {\n indexColumn = out.gtfsHead[slug][out.gtfsIndex[slug]];\n if (indexColumn === -1) {\n if (\"error\" in out === false) {\n out.error = [];\n }\n out.error.push(\"Missing index value \" + out.gtfsIndex[slug] + \" in GTFS \"+ slug + \".txt\");\n return out;\n }\n out.gtfs[slug] = {};\n } else {\n out.gtfs[slug] = [];\n }\n } else {\n // Content\n out = processLine(out, line, slug, columns, indexColumn);\n }\n\n line = [];\n }\n\n if (matches[2]) {\n match = matches[2].replace(new RegExp(\"\\\"\\\"\", \"g\"), \"\\\"\");\n } else {\n match = matches[3];\n }\n\n if (match === undefined) {\n // Quoted but empty\n match = \"\";\n }\n\n line.push(match.trim());\n matches = pattern.exec(csv[i]);\n }\n\n }\n\n // Finally\n out = processLine(out, line, slug, columns, indexColumn);\n\n return out;\n }\n\n function parseGtfs(out, gtfs) {\n /**\n * Parse gtfs strings into out.gtfs, columns defined in out.gtfsHead\n * @param {object} out\n * @param {object} gtfs\n * @return {object} out\n */\n\n var keys, i;\n\n out.gtfsIndex = {\n // Group data by index\n \"stop_times\": \"trip_id\"\n };\n\n out.gtfsHead = {\n // Numbers record column position, true if index, -1 if missing\n \"agency\": {\n \"agency_id\": -1,\n \"agency_name\": -1,\n \"agency_noc\": -1\n },\n \"calendar\": {\n \"service_id\": -1,\n \"monday\": -1,\n \"tuesday\": -1,\n 
\"wednesday\": -1,\n \"thursday\": -1,\n \"friday\": -1,\n \"saturday\": -1,\n \"sunday\": -1,\n \"start_date\": -1,\n \"end_date\": -1\n },\n \"calendar_dates\": {\n \"service_id\": -1,\n \"date\": -1,\n \"exception_type\": -1\n },\n \"frequencies\": {\n \"trip_id\": -1,\n \"start_time\": -1,\n \"end_time\": -1,\n \"headway_secs\": -1\n },\n \"routes\": {\n \"agency_id\": -1,\n \"route_color\": -1,\n \"route_id\": -1,\n \"route_long_name\": -1,\n \"route_short_name\": -1,\n \"route_text_color\": -1,\n \"route_url\": -1\n },\n \"stop_times\": {\n \"arrival_time\": -1,\n \"departure_time\": -1,\n \"trip_id\": -1,\n \"stop_id\": -1,\n \"stop_sequence\": -1,\n \"pickup_type\": -1,\n \"drop_off_type\": -1\n },\n \"stops\": {\n \"stop_code\": -1,\n \"stop_id\": -1,\n \"stop_lat\": -1,\n \"stop_lon\": -1,\n \"stop_name\": -1,\n \"stop_url\": -1,\n \"location_type\": -1,\n \"parent_station\": -1\n },\n \"transfers\": {\n \"from_stop_id\": -1,\n \"to_stop_id\": -1,\n \"transfer_type\": -1\n },\n \"trips\": {\n \"block_id\": -1,\n \"direction_id\": -1,\n \"route_id\": -1,\n \"service_id\": -1,\n \"trip_headsign\": -1,\n \"trip_id\": -1,\n \"trip_short_name\": -1\n }\n };\n\n out.gtfsRequired = {\n \"routes\": [\"route_id\"],\n \"stops\": [\"stop_id\", \"stop_lat\", \"stop_lon\"],\n \"trips\": [\"route_id\", \"service_id\", \"trip_id\"],\n \"stop_times\": [\"trip_id\", \"stop_id\", \"stop_sequence\"]\n };\n\n if (\"type\" in out.config.networkFilter &&\n out.config.networkFilter.type === \"mode\"\n ) {\n out.gtfsHead.routes.route_type = -1;\n }\n if (\"name\" in out.config.meta === false ||\n \"en-US\" in out.config.meta.name === false\n ) {\n if (\"feed_info\" in out.gtfsHead === false) {\n out.gtfsHead.feed_info = {};\n }\n out.gtfsHead.feed_info.feed_publisher_name = -1;\n }\n if (\"url\" in out.config.meta === false) {\n if (\"feed_info\" in out.gtfsHead === false) {\n out.gtfsHead.feed_info = {};\n }\n out.gtfsHead.feed_info.feed_publisher_url = -1;\n }\n // Extendable 
for further product filters\n\n keys = Object.keys(out.gtfsRequired);\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in gtfs === false) {\n if (\"error\" in out === false) {\n out.error = [];\n }\n out.error.push(\"Missing GTFS \"+ keys[i] + \".txt\");\n return out;\n }\n }\n\n out.gtfs = {};\n\n keys = Object.keys(out.gtfsHead);\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in gtfs) {\n out = parseCsv(out, keys[i], gtfs[keys[i]]);\n }\n }\n\n return out;\n }\n\n function buildHeader(out) {\n /**\n * Creates out.aquius header keys - meta, option, translation\n * @param {object} out\n * @return {object} out\n */\n\n if (\"schema\" in out.config.meta === false ||\n out.config.meta.schema !== \"0\"\n ) {\n out.config.meta.schema = \"0\";\n }\n\n if (\"name\" in out.config.meta === false) {\n out.config.meta.name = {};\n }\n\n if (\"en-US\" in out.config.meta.name === false) {\n if (\"feed_info\" in out.gtfs &&\n out.gtfsHead.feed_info.feed_publisher_name !== -1 &&\n out.gtfs.feed_info.length > 0\n ) {\n out.config.meta.name[\"en-US\"] = out.gtfs.feed_info[0][out.gtfsHead.feed_info.feed_publisher_name] + \" \";\n } else {\n out.config.meta.name[\"en-US\"] = \"\";\n }\n out.config.meta.name[\"en-US\"] += \"(\" + out.config.fromDate;\n if (out.config.fromDate !== out.config.toDate) {\n out.config.meta.name[\"en-US\"] += \"-\" + out.config.toDate;\n }\n out.config.meta.name[\"en-US\"] += \")\";\n }\n\n if (\"url\" in out.config.meta === false &&\n \"feed_info\" in out.gtfs &&\n out.gtfsHead.feed_info.feed_publisher_url !== -1 &&\n out.gtfs.feed_info.length > 0\n ) {\n out.config.meta.url = out.gtfs.feed_info[0][out.gtfsHead.feed_info.feed_publisher_url];\n }\n\n out.aquius.meta = out.config.meta;\n\n if (\"en-US\" in out.config.translation === false) {\n out.config.translation[\"en-US\"] = {};\n }\n\n if (\"link\" in out.config.translation[\"en-US\"] === false) {\n switch (out.config.servicePer) {\n\n case 1:\n out.config.translation[\"en-US\"].link = 
\"Daily Services\";\n break;\n\n case 7:\n out.config.translation[\"en-US\"].link = \"Weekly Services\";\n break;\n\n default:\n out.config.translation[\"en-US\"].link = \"Services per \" + out.config.servicePer + \" Days\";\n break;\n }\n }\n\n out.aquius.translation = out.config.translation;\n\n if (\"c\" in out.config.option === false ||\n \"k\" in out.config.option === false\n ) {\n // Focus on first stop - often arbitrary, but should render something\n out.config.option.c = parseFloat(out.gtfs.stops[0][out.gtfsHead.stops.stop_lon]);\n out.config.option.k = parseFloat(out.gtfs.stops[0][out.gtfsHead.stops.stop_lat]);\n out.config.option.m = 12;\n }\n\n if (\"x\" in out.config.option === false ||\n \"y\" in out.config.option === false\n ) {\n out.config.option.x = parseFloat(out.gtfs.stops[0][out.gtfsHead.stops.stop_lon]);\n out.config.option.y = parseFloat(out.gtfs.stops[0][out.gtfsHead.stops.stop_lat]);\n out.config.option.z = 10;\n }\n\n out.aquius.option = out.config.option;\n\n return out;\n }\n\n function buildNetwork(out) {\n /**\n * Creates out.aquius.network from config.networkFilter\n * @param {object} out\n * @return {object} out\n */\n\n var agencyColumn, index, keys, nameExtra, product, i, j;\n var modeLookup = {\n \"0\": {\"en-US\": \"Tram\"},\n \"1\": {\"en-US\": \"Metro\"},\n \"2\": {\"en-US\": \"Rail\"},\n \"3\": {\"en-US\": \"Bus\"},\n \"4\": {\"en-US\": \"Ferry\"},\n \"5\": {\"en-US\": \"Cable car\"},\n \"6\": {\"en-US\": \"Cable car\"},\n \"7\": {\"en-US\": \"Funicular\"},\n \"100\": {\"en-US\": \"Rail\"},\n \"101\": {\"en-US\": \"High speed rail\"},\n \"102\": {\"en-US\": \"Long distance rail\"},\n \"103\": {\"en-US\": \"Inter-regional rail\"},\n \"105\": {\"en-US\": \"Sleeper rail\"},\n \"106\": {\"en-US\": \"Regional rail\"},\n \"107\": {\"en-US\": \"Tourist rail\"},\n \"108\": {\"en-US\": \"Rail shuttle\"},\n \"109\": {\"en-US\": \"Suburban rail\"},\n \"200\": {\"en-US\": \"Coach\"},\n \"201\": {\"en-US\": \"International coach\"},\n 
\"202\": {\"en-US\": \"National coach\"},\n \"204\": {\"en-US\": \"Regional coach\"},\n \"208\": {\"en-US\": \"Commuter coach\"},\n \"400\": {\"en-US\": \"Urban rail\"},\n \"401\": {\"en-US\": \"Metro\"},\n \"402\": {\"en-US\": \"Underground\"},\n \"405\": {\"en-US\": \"Monorail\"},\n \"700\": {\"en-US\": \"Bus\"},\n \"701\": {\"en-US\": \"Regional bus\"},\n \"702\": {\"en-US\": \"Express bus\"},\n \"704\": {\"en-US\": \"Local bus\"},\n \"800\": {\"en-US\": \"Trolleybus\"},\n \"900\": {\"en-US\": \"Tram\"},\n \"1000\": {\"en-US\": \"Water\"},\n \"1300\": {\"en-US\": \"Telecabin\"},\n \"1400\": {\"en-US\": \"Funicular\"},\n \"1501\": {\"en-US\": \"Shared taxi\"},\n \"1700\": {\"en-US\": \"Other\"},\n \"1701\": {\"en-US\": \"Cable car\"},\n \"1702\": {\"en-US\": \"Horse-drawn\"}\n };\n // Future extended GTFS Route Types will render as \"Mode #n\", pending manual editing in config\n\n if (\"type\" in out.config.networkFilter === false) {\n out.config.networkFilter.type = \"agency\";\n }\n if (\"reference\" in out.config.networkFilter === false) {\n out.config.networkFilter.reference = {};\n }\n\n out._.productIndex = {};\n if (\"reference\" in out.aquius === false) {\n out.aquius.reference = {};\n }\n out.aquius.reference.product = [];\n\n switch (out.config.networkFilter.type) {\n // Extendable for more product filters. 
Add complementary code to wanderRoutes()\n\n case \"mode\":\n if (\"route_type\" in out.gtfsHead.routes &&\n out.gtfsHead.routes.route_type !== -1 &&\n out.gtfs.routes.length > 0\n ) {\n index = 0;\n for (i = 0; i < out.gtfs.routes.length; i += 1) {\n if (out.gtfs.routes[i][out.gtfsHead.routes.route_type] in out._.productIndex === false) {\n out._.productIndex[out.gtfs.routes[i][out.gtfsHead.routes.route_type]] = index;\n if (out.gtfs.routes[i][out.gtfsHead.routes.route_type] in out.config.networkFilter.reference === false) {\n if (out.gtfs.routes[i][out.gtfsHead.routes.route_type] in modeLookup) {\n out.config.networkFilter.reference[out.gtfs.routes[i][out.gtfsHead.routes.route_type]] = \n modeLookup[out.gtfs.routes[i][out.gtfsHead.routes.route_type]];\n } else {\n out.config.networkFilter.reference[out.gtfs.routes[i][out.gtfsHead.routes.route_type]] = {};\n }\n }\n out.aquius.reference.product[index] =\n out.config.networkFilter.reference[out.gtfs.routes[i][out.gtfsHead.routes.route_type]];\n index += 1;\n }\n }\n }\n break;\n\n case \"agency\":\n default:\n out.config.networkFilter.type = \"agency\";\n // Defaults to agency\n if (\"agency\" in out.gtfs &&\n ((\"agency_id\" in out.gtfsHead.agency &&\n out.gtfsHead.agency.agency_id !== -1) ||\n (\"agency_name\" in out.gtfsHead.agency &&\n out.gtfsHead.agency.agency_name !== -1)) &&\n out.gtfs.agency.length > 0\n ) {\n index = 0;\n if (\"agency_id\" in out.gtfsHead.agency &&\n out.gtfsHead.agency.agency_id !== -1\n ) {\n agencyColumn = out.gtfsHead.agency.agency_id;\n } else {\n agencyColumn = out.gtfsHead.agency.agency_name;\n }\n for (i = 0; i < out.gtfs.agency.length; i += 1) {\n if (out.gtfs.agency[i][agencyColumn] !== undefined &&\n out.gtfs.agency[i][agencyColumn] !== \"\" &&\n out.gtfs.agency[i][agencyColumn] in out._.productIndex === false\n ) {\n out._.productIndex[out.gtfs.agency[i][agencyColumn]] = index;\n if (out.gtfs.agency[i][agencyColumn] in out.config.networkFilter.reference === false) {\n 
nameExtra = (out.config.allowNoc === true && out.gtfsHead.agency.agency_noc !== -1) ?\n \" [\" + out.gtfs.agency[i][out.gtfsHead.agency.agency_noc] + \"]\" : \"\";\n if (out.gtfsHead.agency.agency_name !== -1) {\n out.config.networkFilter.reference[out.gtfs.agency[i][agencyColumn]] = \n {\"en-US\": out.gtfs.agency[i][out.gtfsHead.agency.agency_name] + nameExtra};\n } else {\n out.config.networkFilter.reference[out.gtfs.agency[i][agencyColumn]] = \n {\"en-US\": out.gtfs.agency[i][out.gtfsHead.agency.agency_id] + nameExtra};\n }\n }\n out.aquius.reference.product[index] = out.config.networkFilter.reference[out.gtfs.agency[i][agencyColumn]];\n index += 1;\n }\n }\n }\n if (Object.keys(out._.productIndex).length === 0) {\n // Fallback\n out._.productIndex[\"agency\"] = 0;\n out.config.networkFilter.reference[\"agency\"] = {};\n out.aquius.reference.product[0] = {};\n }\n break;\n\n }\n\n // Future: Logic should also check that bespoke networkFilter is valid\n if (\"network\" in out.config.networkFilter === false ||\n !Array.isArray(out.config.networkFilter.network)\n ) {\n out.config.networkFilter.network = [];\n keys = Object.keys(out._.productIndex);\n\n switch (out.config.networkFilter.type) {\n // Extendable for more product filters\n\n case \"mode\":\n out.config.networkFilter.network.push([\n keys,\n // Config references GTFS Ids\n {\"en-US\": \"All modes\"}\n ]);\n if (keys.length > 1) {\n for (i = 0; i < keys.length; i += 1) {\n if (keys[i] in modeLookup) {\n out.config.networkFilter.network.push([\n [keys[i]],\n modeLookup[keys[i]]\n ]);\n } else {\n out.config.networkFilter.network.push([\n [keys[i]],\n {\"en-US\": \"Mode #\"+ keys[i]}\n ]);\n }\n }\n }\n break;\n\n case \"agency\":\n out.config.networkFilter.network.push([\n keys,\n {\"en-US\": \"All operators\"}\n ]);\n if (keys.length > 1) {\n for (i = 0; i < keys.length; i += 1) {\n if (\"agency\" in out.gtfs &&\n out.gtfsHead.agency.agency_id !== -1 &&\n out.gtfsHead.agency.agency_name !== -1\n ) {\n for 
(j = 0; j < out.gtfs.agency.length; j += 1) {\n if (out.gtfs.agency[j][out.gtfsHead.agency.agency_id] === keys[i]) {\n nameExtra = (out.config.allowNoc === true && out.gtfsHead.agency.agency_noc !== -1) ?\n \" [\" + out.gtfs.agency[j][out.gtfsHead.agency.agency_noc] + \"]\" : \"\";\n out.config.networkFilter.network.push([\n [keys[i]],\n {\"en-US\": out.gtfs.agency[j][out.gtfsHead.agency.agency_name] + nameExtra}\n ]);\n break;\n }\n }\n } else {\n out.config.networkFilter.network.push([\n [keys[i]],\n {\"en-US\": keys[i]}\n ]);\n }\n }\n }\n break;\n\n default:\n // Fallback only\n out.config.networkFilter.network.push([\n keys,\n {\"en-US\": \"All\"}\n ]);\n break;\n\n }\n\n }\n\n out.aquius.network = [];\n\n for (i = 0; i < out.config.networkFilter.network.length; i += 1) {\n if (out.config.networkFilter.network[i].length > 1 &&\n Array.isArray(out.config.networkFilter.network[i][0]) &&\n typeof out.config.networkFilter.network[i][1] === \"object\"\n ) {\n product = [];\n for (j = 0; j < out.config.networkFilter.network[i][0].length; j += 1) {\n if (out.config.networkFilter.network[i][0][j] in out._.productIndex) {\n product.push(out._.productIndex[out.config.networkFilter.network[i][0][j]]);\n }\n }\n out.aquius.network.push([product, out.config.networkFilter.network[i][1], {}]);\n }\n }\n\n return out;\n }\n\n function buildService(out) {\n /**\n * Creates out.aquius.service from config.serviceFilter\n * @param {object} out\n * @return {object} out\n */\n\n var weekdays, i;\n\n if (\"serviceFilter\" in out.config &&\n \"type\" in out.config.serviceFilter &&\n out.config.serviceFilter.type === \"period\"\n ) {\n\n if (\"period\" in out.config.serviceFilter === false ||\n !Array.isArray(out.config.serviceFilter.period) ||\n out.config.serviceFilter.period.length === 0\n ) {\n weekdays = [\"monday\", \"tuesday\", \"wednesday\", \"thursday\", \"friday\"];\n out.config.serviceFilter.period = [\n {\"name\": {\"en-US\": \"Typical day\"}},\n {\"day\": weekdays, 
\"name\": {\"en-US\": \"Typical weekday\"}},\n {\"day\": weekdays, \"name\": {\"en-US\": \"Weekday early\"}, \"time\": [{\"end\": \"10:00:00\"}]},\n {\"day\": weekdays, \"name\": {\"en-US\": \"Weekday 10:00-17:00\"}, \"time\": [{\"start\": \"10:00:00\", \"end\": \"17:00:00\"}]},\n {\"day\": weekdays, \"name\": {\"en-US\": \"Weekday evening\"}, \"time\": [{\"start\": \"17:00:00\"}]},\n {\"day\": [\"saturday\"], \"name\": {\"en-US\": \"Saturday\"}},\n {\"day\": [\"sunday\"], \"name\": {\"en-US\": \"Sunday\"}}\n ];\n // Sample defaults\n }\n\n out.aquius.service = [];\n\n for (i = 0; i < out.config.serviceFilter.period.length; i += 1) {\n if (\"name\" in out.config.serviceFilter.period[i] === false) {\n out.config.serviceFilter.period[i].name = {\"en-US\": \"Period \" + i};\n // Fallback\n }\n out.aquius.service.push([[i], out.config.serviceFilter.period[i].name, {}]);\n }\n\n }\n\n return out; \n }\n\n function addToReference(out, key, value) {\n /**\n * Helper: Adds value to out.aquius.reference[key] and out._[key] if not already present\n * @param {object} out\n * @param {string} key - reference key (color, url)\n * @param {string} value - reference content\n * @return {object} out\n */\n\n if (\"reference\" in out.aquius === false) {\n out.aquius.reference = {};\n }\n\n if (key in out.aquius.reference === false) {\n out.aquius.reference[key] = [];\n out._[key] = {};\n }\n\n if (value in out._[key] === false) {\n out.aquius.reference[key].push(value);\n out._[key][value] = out.aquius.reference[key].length - 1;\n }\n\n return out;\n }\n\n function createNodeProperties(out, stopObject, index) {\n /**\n * Helper: Adds node properties r key to out.aquius.node[index], if relevant contents in stopObject\n * @param {object} out\n * @param {object} stopObject - gtfs.stops array (line)\n * @param {integer} index - out.aquius.node index for output\n * @return {object} out\n */\n\n var codeString, contentString, position, wildcard;\n var properties = {};\n\n function 
withinKeys(properties, reference) {\n // Checks only properties are within reference - reference may contain other keys\n\n var match, i, j;\n var keys = Object.keys(properties);\n\n for (i = 0; i < reference.length; i += 1) {\n match = true;\n for (j = 0; j < keys.length; j += 1) {\n if (keys[j] in reference[i] === false ||\n reference[i][keys[j]] !== properties[keys[j]]\n ) {\n match = false;\n break;\n }\n }\n if (match === true) {\n return true;\n }\n }\n\n return false;\n }\n\n if (out.config.allowName === true) {\n contentString = \"\";\n if (stopObject[out.gtfsHead.stops.stop_id] in out.config.stopOverride &&\n \"stop_name\" in out.config.stopOverride[stopObject[out.gtfsHead.stops.stop_id]]\n ) {\n contentString = out.config.stopOverride[stopObject[out.gtfsHead.stops.stop_id]].stop_name;\n } else {\n if (out.gtfsHead.stops.stop_name !== -1) {\n contentString = stopObject[out.gtfsHead.stops.stop_name].trim();\n }\n }\n if (contentString !== \"\") {\n properties.n = contentString;\n }\n }\n\n if (out.config.allowStopUrl === true) {\n contentString = \"\";\n if (stopObject[out.gtfsHead.stops.stop_id] in out.config.stopOverride &&\n \"stop_url\" in out.config.stopOverride[stopObject[out.gtfsHead.stops.stop_id]]\n ) {\n contentString = out.config.stopOverride[stopObject[out.gtfsHead.stops.stop_id]].stop_url;\n } else {\n if (out.gtfsHead.stops.stop_url !== -1) {\n contentString = stopObject[out.gtfsHead.stops.stop_url].trim();\n }\n }\n if (contentString !== \"\") {\n\n wildcard = \"[*]\";\n position = contentString.lastIndexOf(wildcard);\n // Cannot use wildcard if already contains it (unlikely)\n\n if (position === -1 &&\n out.gtfsHead.stops.stop_code !== -1\n ) {\n codeString = stopObject[out.gtfsHead.stops.stop_code].trim();\n position = contentString.lastIndexOf(codeString);\n // Human logic: Stop code within URL format\n if (position !== -1) {\n contentString = contentString.substring(0, position) + wildcard +\n contentString.substring(position + 
codeString.length);\n properties.i = codeString;\n }\n }\n\n if (position === -1 &&\n out.gtfsHead.stops.stop_id !== -1\n ) {\n codeString = stopObject[out.gtfsHead.stops.stop_id].trim();\n position = contentString.lastIndexOf(codeString);\n // Operator logic: Internal ID within format\n if (position !== -1) {\n contentString = contentString.substring(0, position) + wildcard +\n contentString.substring(position + codeString.length);\n properties.i = codeString;\n }\n }\n\n out = addToReference(out, \"url\", contentString);\n properties.u = out._.url[contentString];\n\n }\n }\n\n if (Object.keys(properties).length > 0) {\n if (\"r\" in out.aquius.node[index][2]) {\n if (withinKeys(properties, out.aquius.node[index][2].r) === false) {\n out.aquius.node[index][2].r.push(properties);\n }\n } else {\n out.aquius.node[index][2].r = [properties];\n }\n }\n\n if (out.config.allowCode) {\n // Add stop code as a separate reference object\n properties = {};\n if (out.config.codeAsStopId) {\n properties.n = stopObject[out.gtfsHead.stops.stop_id].trim();\n } else {\n if (stopObject[out.gtfsHead.stops.stop_id] in out.config.stopOverride &&\n \"stop_code\" in out.config.stopOverride[stopObject[out.gtfsHead.stops.stop_id]]\n ) {\n properties.n = out.config.stopOverride[stopObject[out.gtfsHead.stops.stop_id]].stop_code;\n } else {\n if (out.gtfsHead.stops.stop_code !== -1) {\n properties.n = stopObject[out.gtfsHead.stops.stop_code].trim();\n }\n }\n }\n if (\"n\" in properties &&\n properties.n !== \"\"\n ) {\n if (\"r\" in out.aquius.node[index][2]) {\n if (withinKeys(properties, out.aquius.node[index][2].r) === false) {\n out.aquius.node[index][2].r.push(properties);\n }\n } else {\n out.aquius.node[index][2].r = [properties];\n }\n }\n }\n\n return out;\n }\n\n function stopCoordinates(out, stopsObject) {\n /**\n * Helper: Raw coordinate rounded to \n * @param {string} value - coordinate number as string\n * @param {Object} stopsObject - gtfs.stops line\n * @return {array} 
coordinate [x,y]\n */\n\n var precision = Math.pow(10, out.config.coordinatePrecision);\n\n function parseCoord(value, precision) {\n \n var numericValue = parseFloat(value);\n\n if (Number.isNaN(numericValue)) {\n return 0;\n }\n\n return Math.round(numericValue * precision) / precision;\n }\n\n if (out.gtfsHead.stops.stop_id !== -1 &&\n stopsObject[out.gtfsHead.stops.stop_id] in out.config.stopOverride &&\n \"x\" in out.config.stopOverride[stopsObject[out.gtfsHead.stops.stop_id]] &&\n \"y\" in out.config.stopOverride[stopsObject[out.gtfsHead.stops.stop_id]]\n ) {\n return [\n parseCoord(out.config.stopOverride[stopsObject[out.gtfsHead.stops.stop_id]].x, precision),\n parseCoord(out.config.stopOverride[stopsObject[out.gtfsHead.stops.stop_id]].y, precision)\n ];\n }\n\n if (out.gtfsHead.stops.stop_lat !== -1 &&\n out.gtfsHead.stops.stop_lon !== -1\n ) {\n return [\n parseCoord(stopsObject[out.gtfsHead.stops.stop_lon], precision),\n parseCoord(stopsObject[out.gtfsHead.stops.stop_lat], precision)\n ];\n }\n\n return [0, 0];\n }\n \n function checkStopCoordinates(out, coords, stopId) {\n /**\n * Helper: Adds 0,0 coordinates to out.config.stopOverride\n * @param {object} out\n * @param {array} coords - [x, y]\n * @param {string} stopId - gtfs stop_id\n * @return {object} out \n */\n\n if (coords[0] === 0 &&\n coords[1] === 0\n ) {\n if (stopId in out.config.stopOverride === false) {\n out.config.stopOverride[stopId] = {};\n }\n out.config.stopOverride[stopId].x = 0;\n out.config.stopOverride[stopId].y = 0;\n }\n\n return out;\n }\n\n function stopFilter(out) {\n /**\n * Adds stopInclude and stopExclude to out\n * @param {object} out\n * @return {object} out\n */\n\n var i;\n\n if (out.config.stopExclude.length > 0) {\n out._.stopExclude = {};\n for (i = 0; i < out.config.stopExclude.length; i += 1) {\n out._.stopExclude[out.config.stopExclude[i]] = \"\";\n }\n }\n\n if (out.config.stopInclude.length > 0) {\n out._.stopInclude = {};\n for (i = 0; i < 
out.config.stopInclude.length; i += 1) {\n out._.stopInclude[out.config.stopInclude[i]] = \"\";\n }\n }\n\n return out;\n }\n \n function parentStopsToNode(out) {\n /**\n * Parses gtfs.stops parent stations and groups nodes within together\n * @param {object} out\n * @return {object} out\n */\n\n var coords, index, key, i;\n var childStops = [];\n // stop_id, parent_id\n\n if (out.gtfsHead.stops.location_type !== -1 &&\n out.gtfsHead.stops.parent_station !== -1\n ) {\n\n if (\"nodeLookup\" in out._ === false) {\n out._.nodeLookup = {};\n // Lookup of GTFS stop_id: out.aquius.node index\n }\n if (\"nodeCoord\" in out._ === false) {\n out._.nodeCoord = {};\n // Lookup of \"x,y\": out.aquius.node index\n }\n if (\"node\" in out.aquius === false) {\n out.aquius.node = [];\n }\n\n for (i = 0; i < out.gtfs.stops.length; i += 1) {\n\n if ((\"stopExclude\" in out._ === false ||\n out.gtfs.stops[i][out.gtfsHead.stops.stop_id] in out._.stopExclude === false) &&\n (\"stopInclude\" in out._ === false ||\n out.gtfs.stops[i][out.gtfsHead.stops.stop_id] in out._.stopInclude)\n ) {\n\n if (out.gtfs.stops[i][out.gtfsHead.stops.location_type] === \"1\") {\n // Is parent\n\n coords = stopCoordinates(out, out.gtfs.stops[i]);\n\n if (out.config.allowZeroCoordinate ||\n (coords[0] !== 0 ||\n coords[1] !== 0)\n ) {\n\n out = checkStopCoordinates(out, coords, out.gtfs.stops[i][out.gtfsHead.stops.stop_id]);\n key = coords[0].toString() + \",\" + coords[1].toString();\n\n if (key in out._.nodeCoord) {\n index = out._.nodeCoord[key];\n } else {\n index = out.aquius.node.length;\n out.aquius.node.push([coords[0], coords[1], {}]);\n out._.nodeCoord[key] = index;\n }\n\n out._.nodeLookup[out.gtfs.stops[i][out.gtfsHead.stops.stop_id]] = index;\n out = createNodeProperties(out, out.gtfs.stops[i], index);\n }\n\n } else {\n\n if (out.gtfs.stops[i][out.gtfsHead.stops.parent_station] !== \"\") {\n // Is child\n childStops.push([\n out.gtfs.stops[i][out.gtfsHead.stops.stop_id],\n 
out.gtfs.stops[i][out.gtfsHead.stops.parent_station]\n ]);\n }\n\n }\n }\n }\n\n for (i = 0; i < childStops.length; i += 1) {\n if (childStops[i][1] in out._.nodeLookup) {\n // Else no parent, so stop should be processed elsewhere\n out._.nodeLookup[childStops[i][0]] = out._.nodeLookup[childStops[i][1]];\n }\n }\n\n }\n\n return out;\n }\n\n function transferStopsToNode(out) {\n /**\n * Parses gtfs.stops transfer stations and groups nodes within together\n * @param {object} out\n * @return {object} out\n */\n\n var coords, index, key, fromStop, toStop, i, j;\n\n if (\"transfers\" in out.gtfs &&\n out.gtfsHead.transfers.from_stop_id !== -1 &&\n out.gtfsHead.transfers.to_stop_id !== -1 &&\n out.gtfsHead.transfers.transfer_type !== 1\n ) {\n // Transfer pairs are logically grouped together, even where actual transfer is forbidden\n\n if (\"nodeLookup\" in out._ === false) {\n out._.nodeLookup = {};\n }\n if (\"nodeCoord\" in out._ === false) {\n out._.nodeCoord = {};\n }\n if (\"node\" in out.aquius === false) {\n out.aquius.node = [];\n }\n\n for (i = 0; i < out.gtfs.transfers.length; i += 1) {\n if (out.gtfs.transfers[i][out.gtfsHead.transfers.from_stop_id] !== \"\" &&\n (\"stopExclude\" in out._ === false ||\n (out.gtfs.transfers[i][out.gtfsHead.transfers.from_stop_id] in out._.stopExclude === false &&\n out.gtfs.transfers[i][out.gtfsHead.transfers.to_stop_id] in out._.stopExclude === false)) &&\n (\"stopInclude\" in out._ === false ||\n (out.gtfs.transfers[i][out.gtfsHead.transfers.from_stop_id] in out._.stopInclude &&\n out.gtfs.transfers[i][out.gtfsHead.transfers.to_stop_id] in out._.stopInclude))\n ) {\n fromStop = out._.nodeLookup[out.gtfs.transfers[i][out.gtfsHead.transfers.from_stop_id]];\n toStop = out._.nodeLookup[out.gtfs.transfers[i][out.gtfsHead.transfers.to_stop_id]];\n\n if (fromStop !== toStop) {\n if (fromStop !== undefined &&\n toStop !== undefined) {\n // Could merge stops, but already clearly assigned as stations, so leave as separate\n } else 
{\n if (fromStop !== undefined) {\n // Add toStop to fromStop\n out._.nodeLookup[toStop] = fromStop;\n } else {\n if (toStop !== undefined) {\n // Add fromStop to toStop\n out._.nodeLookup[fromStop] = toStop;\n } else {\n if (fromStop === undefined &&\n toStop === undefined) {\n // Add new stop\n for (j = 0; j < out.gtfs.stops.length; j += 1) {\n\n if (out.gtfs.stops[j][out.gtfsHead.stops.stop_id] === fromStop) {\n\n coords = stopCoordinates(out, out.gtfs.stops[j]);\n\n if (out.config.allowZeroCoordinate ||\n (coords[0] !== 0 ||\n coords[1] !== 0)\n ) {\n\n out = checkStopCoordinates(out, coords, out.gtfs.stops[j][out.gtfsHead.stops.stop_id]);\n key = coords[0].toString() + \",\" + coords[1].toString();\n\n if (key in out._.nodeCoord) {\n index = out._.nodeCoord[key];\n } else {\n index = out.aquius.node.length;\n out.aquius.node.push([coords[0], coords[1], {}]);\n out._.nodeCoord[key] = index;\n }\n\n out._.nodeLookup[fromStop] = index;\n out._.nodeLookup[toStop] = index;\n out = createNodeProperties(out, out.gtfs.stops[j], index);\n break;\n }\n }\n\n }\n }\n }\n }\n }\n }\n\n }\n }\n }\n\n return out;\n }\n\n function regularStopsToNode(out) {\n /**\n * Parses gtfs.stops for unprocessed nodes (call after parentStops and transferStops)\n * @param {object} out\n * @return {object} out\n */\n\n var coords, index, key, i;\n\n if (\"nodeLookup\" in out._ === false) {\n out._.nodeLookup = {};\n }\n if (\"nodeCoord\" in out._ === false) {\n out._.nodeCoord = {};\n }\n if (\"node\" in out.aquius === false) {\n out.aquius.node = [];\n }\n\n for (i = 0; i < out.gtfs.stops.length; i += 1) {\n if (out.gtfs.stops[i].stop_id in out._.nodeLookup === false &&\n out.gtfs.stops[i][out.gtfsHead.stops.stop_id] !== \"\" &&\n (\"stopExclude\" in out._ === false ||\n out.gtfs.stops[i][out.gtfsHead.stops.stop_id] in out._.stopExclude === false) &&\n (\"stopInclude\" in out._ === false ||\n out.gtfs.stops[i][out.gtfsHead.stops.stop_id] in out._.stopInclude)\n ) {\n coords = 
stopCoordinates(out, out.gtfs.stops[i]);\n\n if (out.config.allowZeroCoordinate ||\n (coords[0] !== 0 ||\n coords[1] !== 0)\n ) {\n\n out = checkStopCoordinates(out, coords, out.gtfs.stops[i][out.gtfsHead.stops.stop_id]);\n key = coords[0].toString() + \",\" + coords[1].toString();\n\n if (key in out._.nodeCoord) {\n index = out._.nodeCoord[key];\n } else {\n index = out.aquius.node.length;\n out.aquius.node.push([coords[0], coords[1], {}]);\n out._.nodeCoord[key] = index;\n }\n\n out._.nodeLookup[out.gtfs.stops[i][out.gtfsHead.stops.stop_id]] = index;\n out = createNodeProperties(out, out.gtfs.stops[i], index);\n }\n }\n }\n\n delete out._.nodeCoord;\n\n return out;\n }\n\n function getGtfsTimeSeconds(timeString) {\n /**\n * Helper: Get time in seconds after midnight for GTFS-formatted (nn:nn:nn) time\n * @param {string} timeString\n * @return {integer} seconds\n */\n\n var conversion,i;\n var timeArray = timeString.split(\":\");\n\n if (timeArray.length !== 3) {\n return 0;\n // Erroneous format\n }\n\n for (i = 0; i < timeArray.length; i += 1) {\n conversion = parseInt(timeArray[i], 10);\n if (Number.isNaN(conversion)) {\n return 0;\n }\n timeArray[i] = conversion;\n }\n\n return (timeArray[0] * 3600) + (timeArray[1] * 60) + timeArray[2];\n }\n\n function getGtfsDay(dateMS) {\n /**\n * Helper: Returns GTFS header day corresponding to date\n * @param {number} dateMS - milliseconds since epoch\n * @return {string} day name\n */\n\n var dateDate = new Date(dateMS);\n var days = [\"sunday\", \"monday\", \"tuesday\",\n \"wednesday\", \"thursday\", \"friday\", \"saturday\"];\n\n return days[dateDate.getDay()];\n }\n\n function createCalendar(out) {\n /**\n * Adds calendar lookups to out\n * @param {object} out\n * @return {object} out\n */ \n\n var fromDateMS = unformatGtfsDate(out.config.fromDate);\n var toDateMS = unformatGtfsDate(out.config.toDate);\n\n out._.calendar = {};\n // MS date: GTFS date for all days analysed\n out._.calendarDay = {};\n // MS date: header 
day for all days analysed\n out._.calendarGtfs = {};\n // GTFS date: MS date for all days analysed\n\n while (toDateMS >= fromDateMS) {\n\n out._.calendar[fromDateMS] = formatGtfsDate(fromDateMS);\n out._.calendarDay[fromDateMS] = getGtfsDay(fromDateMS);\n out._.calendarGtfs[formatGtfsDate(fromDateMS)] = fromDateMS;\n\n fromDateMS += 864e5;\n // +1 day\n\n }\n\n return out;\n }\n\n function serviceCalendar(out) {\n /**\n * Returns service calendar, via GTFS calendar and/or calendar_dates\n * @param {object} out\n * @return {object} calendar - Service_id: {dateMS}\n */ \n\n var datesMS, endDateMS, serviceId, startDateMS, i, j;\n var calendar = {};\n // Service_id: {dateMS: 1}\n\n if (\"calendar\" in out.gtfs) {\n // Some hacked GTFS archives skip calendar and use only calendar_dates\n\n datesMS = Object.keys(out._.calendarDay);\n\n for (i = 0; i < out.gtfs.calendar.length; i += 1) {\n\n serviceId = out.gtfs.calendar[i][out.gtfsHead.calendar.service_id];\n startDateMS = unformatGtfsDate(out.gtfs.calendar[i][out.gtfsHead.calendar.start_date]);\n endDateMS = unformatGtfsDate(out.gtfs.calendar[i][out.gtfsHead.calendar.end_date]);\n\n for (j = 0; j < datesMS.length; j += 1) {\n if (datesMS[j] >= startDateMS &&\n datesMS[j] <= endDateMS &&\n out.gtfs.calendar[i][out.gtfsHead.calendar[out._.calendarDay[datesMS[j]]]] === \"1\"\n ) {\n if (serviceId in calendar === false) {\n calendar[serviceId] = {};\n }\n calendar[serviceId][datesMS[j]] = \"\";\n }\n }\n\n }\n }\n\n if (\"calendar_dates\" in out.gtfs &&\n out.gtfsHead.calendar_dates.service_id !== -1 &&\n out.gtfsHead.calendar_dates.date !== -1 &&\n out.gtfsHead.calendar_dates.exception_type !== -1\n ) {\n\n for (i = 0; i < out.gtfs.calendar_dates.length; i += 1) {\n if (out.gtfs.calendar_dates[i][out.gtfsHead.calendar_dates.date] in out._.calendarGtfs) {\n\n serviceId = out.gtfs.calendar_dates[i][out.gtfsHead.calendar_dates.service_id];\n\n if (serviceId in calendar === false) {\n calendar[serviceId] = {};\n }\n\n if 
(out.gtfs.calendar_dates[i][out.gtfsHead.calendar_dates.exception_type] === \"1\") {\n calendar[serviceId][out._.calendarGtfs[out.gtfs.calendar_dates[i][out.gtfsHead.calendar_dates.date]]] = \"\";\n // Add. Multiple same-date & same-service erroneous\n }\n\n if (out.gtfs.calendar_dates[i][out.gtfsHead.calendar_dates.exception_type] === \"2\" &&\n out._.calendarGtfs[out.gtfs.calendar_dates[i][out.gtfsHead.calendar_dates.date]]\n in calendar[serviceId]\n ) {\n delete calendar[serviceId][out._.calendarGtfs[out.gtfs.calendar_dates[i][out.gtfsHead.calendar_dates.date]]]\n // Remove\n if (Object.keys(calendar[serviceId]).length === 0) {\n delete calendar[serviceId];\n // Remove serviceId if empty\n }\n }\n\n }\n }\n\n }\n\n return calendar;\n }\n\n function frequentTrip(out) {\n /**\n * Returns frequent lookup. Allows duplicate checks to be skipped on frequent trips\n * @param {object} out\n * @return {object} frequent\n */ \n\n var i;\n var frequent = {};\n // trip_id index\n\n if (\"frequencies\" in out.gtfs &&\n out.gtfsHead.frequencies.trip_id !== -1\n ) {\n for (i = 0; i < out.gtfs.frequencies.length; i += 1) {\n frequent[out.gtfs.frequencies[i][out.gtfsHead.frequencies.trip_id]] = \"\";\n // Service/headway analysis will repeat loop later, once trips within analysis period are known\n // Generally GTFS frequencies is a short list, so little extra overhead in this first loop\n }\n }\n\n return frequent;\n }\n\n function baseTrip(out) {\n /**\n * Adds trip to out, prior to duplicate checks and service assignment\n * @param {object} out\n * @return {object} out\n */ \n\n var headsign, tripId, tripObject, i;\n var block = {};\n // block_id: array of trips blocked\n var calendar = serviceCalendar(out);\n var frequent = frequentTrip(out);\n var tripBlock = [];\n // List of trip_id with blocks\n var routeExclude = {};\n var routeInclude = {};\n\n if (out.config.routeExclude.length > 0) {\n for (i = 0; i < out.config.routeExclude.length; i += 1) {\n 
routeExclude[out.config.routeExclude[i]] = \"\";\n }\n }\n\n if (out.config.routeInclude.length > 0) {\n for (i = 0; i < out.config.routeInclude.length; i += 1) {\n routeInclude[out.config.routeInclude[i]] = \"\";\n }\n }\n\n out._.trip = {};\n // trip_id: {service [numbers], stops [sequence, node, dwell], pickup [], setdown [], reference [], frequent, block}\n out._.routes = {};\n // route_id: {product, reference{n, c, u}}\n\n for (i = 0; i < out.gtfs.trips.length; i += 1) {\n\n tripObject = out.gtfs.trips[i];\n\n if (tripObject[out.gtfsHead.trips.service_id] in calendar &&\n (out.config.routeExclude.length === 0 ||\n tripObject[out.gtfsHead.trips.route_id] in routeExclude === false) &&\n (out.config.routeInclude.length === 0 ||\n tripObject[out.gtfsHead.trips.route_id] in routeInclude === true)\n ) {\n\n tripId = tripObject[out.gtfsHead.trips.trip_id];\n\n if (tripObject[out.gtfsHead.trips.route_id] in out._.routes === false) {\n out._.routes[tripObject[out.gtfsHead.trips.route_id]] = {};\n // Expanded by later function. 
Here serves to filter only required routes\n }\n\n out._.trip[tripId] = {\n \"calendar\": calendar[tripObject[out.gtfsHead.trips.service_id]],\n \"route_id\": tripObject[out.gtfsHead.trips.route_id],\n \"service\": [],\n \"stops\": []\n };\n \n if (out.gtfsHead.trips.direction_id !== -1) {\n out._.trip[tripId].direction_id = tripObject[out.gtfsHead.trips.direction_id];\n } else {\n out._.trip[tripId].direction_id = \"\";\n }\n\n if (tripId in frequent) {\n out._.trip[tripId].frequent = \"\";\n }\n\n if (out.gtfsHead.stop_times.pickup_type !== -1) {\n out._.trip[tripId].setdown = [];\n // Pickup_type tested for none, thus setdown only\n }\n\n if (out.gtfsHead.stop_times.drop_off_type !== -1) {\n out._.trip[tripId].pickup = [];\n // Drop_off_type tested for none, thus pickup only\n }\n\n if (out.config.allowBlock &&\n out.gtfsHead.trips.block_id !== -1 &&\n tripObject[out.gtfsHead.trips.block_id] !== \"\"\n ) {\n // Block_id used later for evaluation of circular\n out._.trip[tripId].block = {\n \"id\": tripObject[out.gtfsHead.trips.block_id]\n };\n // Also \"trips\" key as array of all trip_id using this block, added later\n tripBlock.push(tripId);\n if (tripObject[out.gtfsHead.trips.block_id] in block) {\n block[tripObject[out.gtfsHead.trips.block_id]].push(tripId);\n } else {\n block[tripObject[out.gtfsHead.trips.block_id]] = [tripId];\n }\n }\n\n if (out.config.allowHeadsign === true &&\n out.gtfsHead.trips.trip_headsign !== -1\n ) {\n headsign = tripObject[out.gtfsHead.trips.trip_headsign].trim();\n if (headsign === \"\" &&\n out.gtfsHead.trips.trip_short_name !== -1\n ) {\n headsign = tripObject[out.gtfsHead.trips.trip_short_name].trim();\n }\n if (headsign !== \"\" ) {\n out._.trip[tripId].reference = {\n \"n\": headsign,\n \"slug\": headsign\n };\n }\n }\n\n }\n }\n\n for (i = 0; i < tripBlock.length; i += 1) {\n out._.trip[tripBlock[i]].block.trips = block[out._.trip[tripBlock[i]].block.id];\n }\n\n return out;\n }\n\n function timeTrip(out) {\n /**\n * Adds 
stops and timing to trips\n * @param {object} out\n * @return {object} out\n */ \n\n var arrive, dates, depart, duplicate, dwell, key, keys, lastDepart, node, onOffCache, stopObject, times, i, j, k, l;\n var timeCache = {};\n // TimeStrings cached temporarily for speed - times tend to be reused\n var trips = Object.keys(out.gtfs.stop_times);\n\n if (out.config.allowDuplicate === false ||\n out.config.allowSplit ||\n out.config.allowCabotage\n ) {\n duplicate = {};\n // dateMS: Unique key = node:time:dateMS:direction(:route): trip_id\n }\n\n for (i = 0; i < trips.length; i += 1) {\n if (trips[i] in out._.trip) {\n\n dates = Object.keys(out._.trip[trips[i]].calendar);\n lastDepart = null;\n \n onOffCache = {};\n // Per trip. Node key = {optional both, setdown, pickup, route}\n // Processed into setdown/pickup after to allow quirks of merged stops\n\n for (j = 0; j < out.gtfs.stop_times[trips[i]].length; j += 1) {\n\n stopObject = out.gtfs.stop_times[trips[i]][j];\n\n if (stopObject[out.gtfsHead.stop_times.stop_id] in out._.nodeLookup) {\n\n node = out._.nodeLookup[stopObject[out.gtfsHead.stop_times.stop_id]];\n\n if (node in onOffCache === false) {\n onOffCache[node] = {};\n }\n\n if (out.gtfsHead.stop_times.pickup_type !== -1 &&\n stopObject[out.gtfsHead.stop_times.pickup_type] === \"1\" &&\n out.gtfsHead.stop_times.drop_off_type !== -1 &&\n stopObject[out.gtfsHead.stop_times.drop_off_type] === \"1\"\n ) {\n onOffCache[node][\"route\"] = true; // Routing only\n } else if (out.gtfsHead.stop_times.pickup_type !== -1 &&\n stopObject[out.gtfsHead.stop_times.pickup_type] === \"1\"\n ) {\n onOffCache[node][\"setdown\"] = true; // No pickup\n } else if (out.gtfsHead.stop_times.drop_off_type !== -1 &&\n stopObject[out.gtfsHead.stop_times.drop_off_type] === \"1\"\n ) {\n onOffCache[node][\"pickup\"] = true; // No drop off\n } else {\n onOffCache[node][\"both\"] = true; // On and off\n }\n // All stops are analysed for pickup/setdown\n\n if (out._.trip[trips[i]].stops.length 
=== 0 ||\n out._.trip[trips[i]].stops[out._.trip[trips[i]].stops.length - 1][1] !==\n out._.nodeLookup[stopObject[out.gtfsHead.stop_times.stop_id]]\n ) {\n // But exclude concurrent stops from other processing\n\n arrive = 0;\n depart = 0;\n dwell = 0;\n\n if (out.gtfsHead.stop_times.departure_time !== -1 &&\n stopObject[out.gtfsHead.stop_times.departure_time] !== \"\"\n ) {\n if (stopObject[out.gtfsHead.stop_times.departure_time] in timeCache === false) {\n timeCache[stopObject[out.gtfsHead.stop_times.departure_time]] =\n getGtfsTimeSeconds(stopObject[out.gtfsHead.stop_times.departure_time]);\n }\n depart = timeCache[stopObject[out.gtfsHead.stop_times.departure_time]];\n if (\"start\" in out._.trip[trips[i]] === false ||\n depart < out._.trip[trips[i]].start\n ) {\n out._.trip[trips[i]].start = depart;\n }\n }\n\n if (out.gtfsHead.stop_times.arrival_time !== -1 &&\n stopObject[out.gtfsHead.stop_times.arrival_time] !== \"\"\n ) {\n if (stopObject[out.gtfsHead.stop_times.arrival_time] in timeCache === false) {\n timeCache[stopObject[out.gtfsHead.stop_times.arrival_time]] =\n getGtfsTimeSeconds(stopObject[out.gtfsHead.stop_times.arrival_time]);\n }\n arrive = timeCache[stopObject[out.gtfsHead.stop_times.arrival_time]];\n if (\"end\" in out._.trip[trips[i]] === false ||\n arrive > out._.trip[trips[i]].end\n ) {\n out._.trip[trips[i]].end = arrive;\n }\n }\n\n if (out.config.allowDwell && arrive !== 0 && depart > arrive) {\n dwell = ((depart - arrive) / 60); // Seconds to minutes\n }\n\n out._.trip[trips[i]].stops.push([\n stopObject[out.gtfsHead.stop_times.stop_sequence],\n node,\n dwell\n ]);\n \n if (\"frequent\" in out._.trip[trips[i]] === false) {\n // Frequent trips initially untimed, and cannot be part of any duplication\n\n if (typeof duplicate !== \"undefined\") {\n\n times = [];\n\n if (arrive !== 0 ||\n depart !== 0\n ) {\n times.push({\n \"key\": [node, arrive, depart].join(\":\"),\n \"node\": node\n });\n }\n\n // Also keys for each time, since at nodes 
where joins occurs, only one time is shared\n // with second value = previous (for arrive) or next (for depart) node\n if (out.config.allowSplit === true &&\n arrive !== 0 &&\n out._.trip[trips[i]].stops.length > 1\n ) {\n times.push({\n \"key\": [node, arrive, out._.trip[trips[i]].stops[out._.trip[trips[i]].stops.length - 1]].join(\":\"),\n \"node\": node\n });\n }\n\n // Departure needs next node, so offset by 1 position in stop_times loop\n if (lastDepart !== null) {\n // This position in loop evalulates previous\n times.push({\n \"key\": [node, lastDepart.key].join(\":\"),\n \"node\": lastDepart.node\n });\n }\n\n for (k = 0; k < times.length; k += 1) {\n\n key = [times[k].key, out._.trip[trips[i]].direction_id];\n if (out.config.duplicationRouteOnly) {\n key.push(out._.trip[trips[i]].route_id);\n }\n key = key.join(\":\");\n\n for (l = 0; l < dates.length; l += 1) {\n\n if (dates[l] in duplicate === false) {\n duplicate[dates[l]] = {};\n }\n\n if (key in duplicate[dates[l]]) {\n if (\"dup\" in out._.trip[trips[i]] === false) {\n out._.trip[trips[i]].dup = {\n \"stops\": [],\n \"trip_id\": duplicate[dates[l]][key]\n };\n // Each duplicate currently reference sole parent trip. 
Assumption could fail with mixed calendars\n }\n if (out._.trip[trips[i]].dup.stops.length === 0 ||\n out._.trip[trips[i]].dup.stops.indexOf(times[k].node) === -1\n ) {\n // Infrequently called\n out._.trip[trips[i]].dup.stops.push(times[k].node);\n }\n\n } else {\n duplicate[dates[l]][key] = trips[i];\n }\n\n }\n\n }\n\n if (out.config.allowSplit === true) {\n if (depart !== 0) {\n // Setup next in loop\n lastDepart = {\n \"key\": [depart, node].join(\":\"),\n \"node\": node\n };\n } else {\n lastDepart = null;\n }\n }\n\n }\n }\n\n }\n }\n\n }\n\n keys = Object.keys(onOffCache);\n for (j = 0; j < keys.length; j += 1) {\n if (\"both\" in onOffCache[keys[j]] === false) {\n if (\"setdown\" in onOffCache[keys[j]] &&\n \"pickup\" in onOffCache[keys[j]] === false\n ) {\n out._.trip[trips[i]].setdown.push(parseInt(keys[j], 10));\n } else if (\"pickup\" in onOffCache[keys[j]] &&\n \"setdown\" in onOffCache[keys[j]] === false\n ) {\n out._.trip[trips[i]].pickup.push(parseInt(keys[j], 10));\n } else if (\"route\" in onOffCache[keys[j]]) {\n out._.trip[trips[i]].setdown.push(parseInt(keys[j], 10));\n out._.trip[trips[i]].pickup.push(parseInt(keys[j], 10));\n }\n }\n }\n \n }\n }\n\n return out;\n }\n\n function duplicateTrip(out) {\n /**\n * Resolves trip duplication\n * @param {object} out\n * @return {object} out\n */ \n\n var concurrentCount, dupCalendar, joinOk, keys, lastConcurrentIndex, parentCalendar, tripId, i, j;\n var nextB = 0;\n var trips = Object.keys(out._.trip);\n\n function equalArrays(a, b) {\n\n var i;\n\n if (a.length !== b.length) {\n return false;\n }\n\n for (i = 0; i < a.length; i += 1) {\n if (a[i] !== b[i]) {\n return false;\n }\n }\n\n return true;\n }\n\n for (i = 0; i < trips.length; i += 1) {\n if (\"dup\" in out._.trip[trips[i]]) {\n\n tripId = out._.trip[trips[i]].dup.trip_id;\n // Original trip\n\n if (out._.trip[trips[i]].dup.stops.length > 1 &&\n tripId in out._.trip\n ) {\n // At least 2 stops duplicated\n\n parentCalendar = [];\n // 
Unique dates for parent\n keys = Object.keys(out._.trip[tripId].calendar);\n for (j = 0; j < keys.length; j += 1) {\n if (keys[j] in out._.trip[trips[i]].calendar === false) {\n parentCalendar.push(keys[j]);\n }\n }\n\n dupCalendar = [];\n // Unique dates for duplicate\n keys = Object.keys(out._.trip[trips[i]].calendar);\n for (j = 0; j < keys.length; j += 1) {\n if (keys[j] in out._.trip[tripId].calendar === false) {\n dupCalendar.push(keys[j]);\n }\n }\n\n if (out.config.allowCabotage &&\n parentCalendar.length === 0 &&\n dupCalendar.length === 0 &&\n (out.gtfsHead.stop_times.pickup_type !== -1 ||\n out.gtfsHead.stop_times.drop_off_type !== -1) &&\n ((\"setdown\" in out._.trip[trips[i]] &&\n out._.trip[trips[i]].setdown.length > 0 &&\n \"setdown\" in out._.trip[tripId] &&\n equalArrays(out._.trip[tripId].setdown, out._.trip[trips[i]].setdown) === false) ||\n (\"pickup\" in out._.trip[trips[i]] &&\n out._.trip[trips[i]].pickup.length > 0 &&\n \"pickup\" in out._.trip[tripId] &&\n equalArrays(out._.trip[tripId].pickup, out._.trip[trips[i]].pickup) === false))\n ) {\n // Same calendar (no unique dates in either) to process as Aquius block\n\n if (\"b\" in out._.trip[tripId]) {\n out._.trip[trips[i]].b = out._.trip[tripId].b;\n } else {\n out._.trip[tripId].b = nextB;\n out._.trip[trips[i]].b = nextB;\n nextB += 1;\n }\n\n } else {\n\n if (out.config.allowSplit &&\n out._.trip[trips[i]].dup.stops.length >= out.config.splitMinimumJoin &&\n out._.trip[trips[i]].dup.stops.length < out._.trip[trips[i]].stops.length\n ) {\n\n if (dupCalendar.length === 0) {\n // Commonly: Dup calendar entirely within parent, so apply split to dup \n out._.trip[trips[i]].t = [];\n lastConcurrentIndex = -1;\n concurrentCount = 0;\n joinOk = false;\n for (j = 0; j < out._.trip[trips[i]].stops.length; j += 1) {\n if (out._.trip[trips[i]].dup.stops.indexOf(out._.trip[trips[i]].stops[j][1]) === -1) {\n // IndexOf unlikely to be used frequently or on long arrays\n 
out._.trip[trips[i]].t.push(out._.trip[trips[i]].stops[j][1]);\n } else {\n if (j - 1 === lastConcurrentIndex) {\n concurrentCount += 1;\n if (concurrentCount === out.config.splitMinimumJoin) {\n joinOk = true;\n }\n } else {\n concurrentCount = 0;\n }\n lastConcurrentIndex = j;\n }\n }\n if (joinOk === false) {\n delete out._.trip[trips[i]].t;\n }\n } else {\n if (parentCalendar.length === 0) {\n // Unlikely: Parent calendar entirely within dup, so apply split to parent\n out._.trip[tripId].t = [];\n lastConcurrentIndex = -1;\n concurrentCount = 0;\n joinOk = false;\n keys = {};\n // Index of all duplicate trip's nodes. Rarely called, so built as required\n for (j = 0; j < out._.trip[trips[i]].stops.length; j += 1) {\n keys[out._.trip[trips[i]].stops[j][1]] = \"\";\n }\n for (j = 0; j < out._.trip[tripId].stops.length; j += 1) {\n if (out._.trip[tripId].stops[j][1] in keys === false) {\n out._.trip[tripId].t.push(out._.trip[tripId].stops[j][1]);\n } else {\n if (j - 1 === lastConcurrentIndex) {\n concurrentCount += 1;\n if (concurrentCount === out.config.splitMinimumJoin) {\n joinOk = true;\n }\n } else {\n concurrentCount = 0;\n }\n lastConcurrentIndex = j;\n }\n }\n if (joinOk === false) {\n delete out._.trip[tripId].t;\n }\n }\n // Else excessively complex split: Neither entirely in other's calendar, so fallback to retain full trip\n }\n\n } else {\n\n if (out.config.allowDuplication === false) {\n // Future: Logically should be treated as shared, but pointless if same product, which GTFS extraction presumes\n if (dupCalendar.length === 0) {\n // Entire calendar within parent, so full trip redundant\n delete out._.trip[trips[i]];\n } else {\n // Retain duplicate only on its unique dates\n out._.trip[trips[i]].calendar = {};\n for (j = 0; j < dupCalendar.length; j += 1) {\n out._.trip[trips[i]].calendar[dupCalendar[j]] = \"\";\n }\n }\n }\n\n }\n\n }\n\n }\n // Else single stop duplication is not a valid link duplicate, so retain full trip\n }\n\n }\n\n return 
out;\n }\n\n function createDayFactor(out) {\n /**\n * Returns arrays of dates analysed by serviceFilter index position\n * @param {object} out\n * @return {array} dayFactor\n */\n\n var dayFactor, i, j, k;\n var allDays = Object.keys(out._.calendarDay);\n\n if (\"serviceFilter\" in out.config &&\n \"period\" in out.config.serviceFilter\n ) {\n dayFactor = [];\n // Arrays of dates analysed by serviceFilter index position\n for (i = 0; i < out.config.serviceFilter.period.length; i += 1) {\n\n if (\"day\" in out.config.serviceFilter.period[i]) {\n dayFactor.push([]);\n for (j = 0; j < out.config.serviceFilter.period[i].day.length; j += 1) {\n for (k = 0; k < allDays.length; k += 1) {\n if (out._.calendarDay[allDays[k]] === out.config.serviceFilter.period[i].day[j]) {\n dayFactor[i].push(allDays[k]);\n }\n }\n }\n } else {\n dayFactor.push(allDays);\n }\n\n }\n return dayFactor;\n } else {\n return [allDays];\n }\n }\n\n function createTimeFactor(out) {\n /**\n * Returns arrays of start, optional-end, one array of arrays per serviceFilter index position\n * @param {object} out\n * @return {array} timeFactor\n */\n\n var timeFactor, i, j;\n\n if (\"serviceFilter\" in out.config &&\n \"period\" in out.config.serviceFilter\n ) {\n timeFactor = [];\n // Arrays of start, optional-end, one array of arrays per serviceFilter index position\n for (i = 0; i < out.config.serviceFilter.period.length; i += 1) {\n\n if (\"time\" in out.config.serviceFilter.period[i]) {\n timeFactor.push([]);\n for (j = 0; j < out.config.serviceFilter.period[i].time.length; j += 1) {\n if (\"start\" in out.config.serviceFilter.period[i].time[j]){\n timeFactor[i].push([getGtfsTimeSeconds(out.config.serviceFilter.period[i].time[j].start)]);\n } else {\n timeFactor[i].push([0]);\n // All times (00:00:00 and after)\n }\n if (\"end\" in out.config.serviceFilter.period[i].time[j]){\n timeFactor[i][j].push(getGtfsTimeSeconds(out.config.serviceFilter.period[i].time[j].end));\n }\n }\n } else {\n 
timeFactor.push([[0]]);\n }\n\n }\n return timeFactor;\n } else {\n return [[[0]]];\n }\n }\n\n function serviceFrequencies(out, timeFactor) {\n /**\n * Applies service period calculation to frequent trips\n * @param {object} out\n * @param {object} timeFactor - as returned by createTimeFactor()\n * @return {object} out\n */\n\n var end, frequencies, frequenciesObject, keys, proportion, start, timeCache, tripId, i, j, k;\n\n if (\"frequencies\" in out.gtfs &&\n out.gtfsHead.frequencies.trip_id !== -1 &&\n out.gtfsHead.frequencies.start_time !== -1 &&\n out.gtfsHead.frequencies.end_time !== -1 &&\n out.gtfsHead.frequencies.headway_secs !== -1\n ) {\n\n frequencies = {};\n // trip_id: service array as time periods regardless of calendar\n timeCache = {};\n // GTFS time: MS time\n\n for (i = 0; i < out.gtfs.frequencies.length; i += 1) {\n\n frequenciesObject = out.gtfs.frequencies[i];\n tripId = frequenciesObject[out.gtfsHead.frequencies.trip_id];\n\n if (tripId in out._.trip &&\n \"frequent\" in out._.trip[tripId]\n ) {\n\n if (tripId in frequencies === false) {\n frequencies[tripId] = [];\n for (j = 0; j < timeFactor.length; j += 1) {\n frequencies[tripId].push(0);\n }\n }\n\n if (frequenciesObject[out.gtfsHead.frequencies.start_time] in timeCache === false) {\n timeCache[frequenciesObject[out.gtfsHead.frequencies.start_time]] =\n getGtfsTimeSeconds(frequenciesObject[out.gtfsHead.frequencies.start_time]);\n // TimeStrings cached for speed - times tend to be reused\n }\n start = timeCache[frequenciesObject[out.gtfsHead.frequencies.start_time]];\n if (frequenciesObject[out.gtfsHead.frequencies.end_time] in timeCache === false) {\n timeCache[frequenciesObject[out.gtfsHead.frequencies.end_time]] =\n getGtfsTimeSeconds(frequenciesObject[out.gtfsHead.frequencies.end_time]);\n }\n end = timeCache[frequenciesObject[out.gtfsHead.frequencies.end_time]];\n\n if (end < start) {\n end += 86400;\n // Spans midnight, add day of seconds.\n }\n\n for (j = 0; j < timeFactor.length; j 
+= 1) {\n if (j < out._.trip[tripId].service.length &&\n out._.trip[tripId].service[j] !== 0\n ) {\n\n if (timeFactor[j].length === 1 &&\n timeFactor[j][0].length === 1 &&\n timeFactor[j][0][0] === 0\n ) {\n\n frequencies[tripId][j] +=\n (end - start) / parseInt(frequenciesObject[out.gtfsHead.frequencies.headway_secs], 10);\n // Whole day, add all\n\n } else {\n\n for (k = 0; k < timeFactor[j].length; k += 1) {\n if (timeFactor[j][k][0] < end &&\n (timeFactor[j][k].length === 1 ||\n start < timeFactor[j][k][1])\n ) {\n // Frequency wholly or partly within Period. Else skip\n proportion = 1;\n if (timeFactor[j][k][0] > start) {\n proportion = proportion - ((timeFactor[j][k][0] - start) / (end - start));\n }\n if (timeFactor[j][k][1] < end) {\n proportion = proportion - ((end - timeFactor[j][k][1]) / (end - start));\n }\n frequencies[tripId][j] += (end - start) *\n proportion / parseInt(frequenciesObject[out.gtfsHead.frequencies.headway_secs], 10);\n }\n }\n\n }\n }\n }\n }\n }\n\n keys = Object.keys(frequencies);\n for (i = 0; i < keys.length; i += 1) {\n for (j = 0; j < frequencies[keys[i]].length; j += 1) {\n out._.trip[keys[i]].service[j] = out._.trip[keys[i]].service[j] * frequencies[keys[i]][j];\n // Assignment in secondary loop avoids confusing calendar during frequencies loop\n }\n }\n }\n\n return out;\n }\n\n function serviceTrip(out) {\n /**\n * Applies service total to trips\n * @param {object} out\n * @return {object} out\n */\n\n var dayCount, hour, inPeriod, minutes, time, total_minutes, i, j, k;\n var dayFactor = createDayFactor(out);\n var timeFactor = createTimeFactor(out);\n var trips = Object.keys(out._.trip);\n var scheduled = [];\n\n\n for (i = 0; i < timeFactor.length; i += 1) {\n scheduled.push(0);\n }\n\n out.summary.service = [];\n // Hour index position, service total by service array position\n\n for (i = 0; i < trips.length; i += 1) {\n\n if (out._.trip[trips[i]].stops.length < 2 ||\n Object.keys(out._.trip[trips[i]].calendar).length === 
0\n ) {\n delete out._.trip[trips[i]];\n // Cleanup after duplicate checks best fits this loop\n } else {\n\n if (out.config.allowDuration) {\n minutes = [];\n out._.trip[trips[i]].minutes = [];\n for (j = 0; j < scheduled.length; j += 1) {\n out._.trip[trips[i]].minutes.push(0);\n minutes.push([]);\n }\n } else {\n minutes = null;\n }\n \n // calendar to service\n for (j = 0; j < dayFactor.length; j += 1) {\n dayCount = 0;\n for (k = 0; k < dayFactor[j].length; k += 1) {\n if (dayFactor[j][k] in out._.trip[trips[i]].calendar) {\n dayCount += 1;\n }\n }\n out._.trip[trips[i]].service.push((dayCount / dayFactor[j].length) * out.config.servicePer);\n }\n\n if (\"frequent\" in out._.trip[trips[i]] === false) {\n // Frequent adjusted \n\n if (\"start\" in out._.trip[trips[i]] &&\n \"end\" in out._.trip[trips[i]]\n ) {\n time = ((out._.trip[trips[i]].end - out._.trip[trips[i]].start) / 2 ) + out._.trip[trips[i]].start;\n // Mid-journey time\n } else {\n time = null;\n }\n\n for (j = 0; j < timeFactor.length; j += 1) {\n\n if (j < out._.trip[trips[i]].service.length &&\n out._.trip[trips[i]].service[j] !== 0\n ) {\n // Else no service, so leave unchanged\n \n if ((timeFactor[j].length > 1 ||\n timeFactor[j][0].length > 1 ||\n timeFactor[j][0][0] !== 0)\n ) {\n // Else all time periods so leave unchanged but count in summary\n inPeriod = false;\n\n if (time !== null) {\n // Else no time data, thus inPeriod remains false\n for (k = 0; k < timeFactor[j].length; k += 1) {\n if (time > timeFactor[j][k][0] &&\n (timeFactor[j][k].length === 1 ||\n time < timeFactor[j][k][1])\n ) {\n inPeriod = true;\n break;\n }\n }\n }\n\n if (inPeriod == false) {\n out._.trip[trips[i]].service[j] = 0;\n if (out.config.allowDuration) {\n out._.trip[trips[i]].minutes[j] = 0;\n }\n }\n\n } else {\n inPeriod = true;\n }\n\n if (inPeriod) {\n scheduled[j] = scheduled[j] + 1;\n hour = Math.floor(time / 3600);\n if (out.summary.service[hour] === undefined) {\n out.summary.service[hour] = [];\n for 
(k = 0; k < timeFactor.length; k += 1) {\n out.summary.service[hour].push(0);\n }\n }\n out.summary.service[hour][j] = out.summary.service[hour][j] + 1;\n\n if (out.config.allowDuration) {\n if (out._.trip[trips[i]].start > out._.trip[trips[i]].end) {\n // Assume across midnight, so add 24 hours to end\n minutes[j].push(((out._.trip[trips[i]].end + 86400) - out._.trip[trips[i]].start) / 60);\n } else {\n minutes[j].push((out._.trip[trips[i]].end - out._.trip[trips[i]].start) / 60);\n }\n }\n }\n }\n }\n }\n if (minutes !== null) {\n for (j = 0; j < minutes.length; j += 1) {\n if (minutes[j].length > 0) {\n total_minutes = 0;\n for (k = 0; k < minutes[j].length; k += 1) {\n total_minutes = total_minutes + minutes[j][k];\n }\n out._.trip[trips[i]].minutes[j] = Math.round(total_minutes / k);\n } else {\n out._.trip[trips[i]].minutes[j] = 0;\n }\n }\n }\n }\n\n }\n\n for (i = 0; i < out.summary.service.length; i += 1) {\n if (out.summary.service[i] !== undefined) {\n for (j = 0; j < out.summary.service[i].length; j += 1) {\n if (scheduled[j] > 0) {\n out.summary.service[i][j] = out.summary.service[i][j] / scheduled[j];\n }\n }\n }\n }\n\n out = serviceFrequencies(out, timeFactor);\n // Finally apply frequencies\n\n return out;\n }\n\n function createRoutes(out) {\n /**\n * Describes out._.routes GTFS route_id: complex Object describing route\n * @param {object} out\n * @return {object} out\n */\n\n var contentString, color, position, override, reference, route, routeId, wildcard, i, j;\n\n for (i = 0; i < out.gtfs.routes.length; i += 1) {\n\n route = out.gtfs.routes[i];\n routeId = route[out.gtfsHead.routes.route_id];\n\n if (routeId in out._.routes) {\n\n reference = {\"slug\": \"\"};\n // Slug is a temporary indexable unique reference\n\n if (out.config.networkFilter.type === \"agency\" &&\n out.gtfsHead.routes.agency_id !== -1 &&\n route[out.gtfsHead.routes.agency_id] in out.config.productOverride\n ) {\n override = 
out.config.productOverride[route[out.gtfsHead.routes.agency_id]];\n } else {\n if (out.config.networkFilter.type === \"mode\" &&\n out.gtfsHead.routes.route_type !== -1 &&\n route[out.gtfsHead.routes.route_type] in out.config.productOverride\n ) {\n override = out.config.productOverride[route[out.gtfsHead.routes.route_type]];\n } else {\n override = {};\n }\n }\n\n if (out.config.allowRoute ||\n out.config.allowRouteLong\n ) {\n\n contentString = \"\";\n\n if (out.config.allowRoute &&\n routeId in out.config.routeOverride &&\n \"route_short_name\" in out.config.routeOverride[routeId]\n ) {\n contentString = out.config.routeOverride[routeId].route_short_name;\n } else {\n if (out.config.allowRoute &&\n out.gtfsHead.routes.route_short_name !== -1\n ) {\n contentString = route[out.gtfsHead.routes.route_short_name].trim();\n }\n }\n if (out.config.allowRouteLong &&\n routeId in out.config.routeOverride &&\n \"route_long_name\" in out.config.routeOverride[routeId]\n ) {\n if (contentString !== \"\") {\n contentString += \": \";\n }\n contentString += out.config.routeOverride[routeId].route_long_name;\n } else {\n if (out.config.allowRouteLong &&\n out.gtfsHead.routes.route_long_name !== -1\n ) {\n if (contentString !== \"\") {\n contentString += \": \";\n }\n contentString += route[out.gtfsHead.routes.route_long_name].trim();\n }\n }\n\n if (contentString !== \"\") {\n reference.n = contentString;\n reference.slug += contentString;\n }\n\n }\n\n if (out.config.allowColor) {\n\n color = [[\"route_color\", \"c\"], [\"route_text_color\", \"t\"]];\n for (j = 0; j < color.length; j += 1) {\n\n contentString = \"\";\n\n if (routeId in out.config.routeOverride &&\n color[j][0] in out.config.routeOverride[routeId]\n ) {\n contentString = out.config.routeOverride[routeId][color[j][0]];\n } else {\n if (color[j][0] in override) {\n contentString = override[color[j][0]];\n } else {\n if (out.gtfsHead.routes[color[j][0]] !== -1) {\n contentString = 
route[out.gtfsHead.routes[color[j][0]]].trim();\n }\n }\n }\n\n if (contentString.length === 6) {\n\n contentString = \"#\" + contentString;\n out = addToReference(out, \"color\", contentString);\n\n if (color[j][1] === \"t\" &&\n \"c\" in reference &&\n reference.c === out._.color[contentString]\n ) {\n if (contentString.toLowerCase() === \"#ffffff\") {\n contentString = \"#000000\";\n } else {\n contentString = \"#ffffff\";\n }\n out = addToReference(out, \"color\", contentString);\n }\n\n reference[color[j][1]] = out._.color[contentString];\n reference.slug += contentString;\n\n }\n }\n\n }\n\n if (out.config.allowRouteUrl) {\n\n contentString = \"\";\n\n if (routeId in out.config.routeOverride &&\n \"route_url\" in out.config.routeOverride[routeId]\n ) {\n contentString = out.config.routeOverride[routeId].route_url;\n } else {\n if(out.gtfsHead.routes.route_url !== -1 ) {\n contentString = route[out.gtfsHead.routes.route_url].trim();\n }\n }\n\n if (contentString !== \"\") {\n\n wildcard = \"[*]\";\n position = contentString.lastIndexOf(wildcard);\n // Cannot use wildcard if already contains it (unlikely)\n\n if (position === -1 &&\n \"n\" in reference\n ) {\n position = contentString.lastIndexOf(\n reference.n);\n // Human logic: Name within URL format\n if (position !== -1) {\n contentString = contentString.substring(0, position) + wildcard +\n contentString.substring(position + reference.n.length);\n }\n }\n\n if (position === -1) {\n position = contentString.lastIndexOf(routeId);\n // Operator logic: Internal ID within format\n if (position !== -1) {\n contentString = contentString.substring(0, position) + wildcard +\n contentString.substring(position + routeId.length);\n reference.i =\n routeId;\n }\n }\n\n out = addToReference(out, \"url\", contentString);\n reference.u = out._.url[contentString];\n reference.slug += contentString;\n\n }\n }\n\n if (out.config.networkFilter.type === \"agency\") {\n if(\"agency_id\" in out.gtfsHead.routes &&\n 
out.gtfsHead.routes.agency_id !== -1\n ) {\n out._.routes[routeId].product =\n out._.productIndex[route[out.gtfsHead.routes.agency_id]];\n } else {\n out._.routes[routeId].product = 0;\n }\n\n } else {\n if (out.config.networkFilter.type === \"mode\" &&\n \"route_type\" in out.gtfsHead.routes &&\n out.gtfsHead.routes.route_type !== -1\n ) {\n out._.routes[routeId].product =\n out._.productIndex[route[out.gtfsHead.routes.route_type]];\n } else {\n out._.routes[routeId].product =\n out._.productIndex[Object.keys(out._.productIndex)[0]];\n // Fallback\n }\n }\n\n if (Object.keys(reference).length > 1) {\n // Contains more than lookup slug\n out._.routes[routeId].reference = reference;\n }\n\n }\n\n }\n\n delete out._.productIndex;\n\n return out;\n }\n\n function buildLink(out) {\n /**\n * Creates and populates aquius.link\n * @param {object} out\n * @return {object} out\n */\n\n var add, backward, blockDwells, blockNodes, check, dwells, forward, isBackward, keys, line, merge,\n nodeCount, nodes, nodeStack, reference, routeId, stops, thisLink, trips, i, j, k, l;\n var link = {};\n // linkUniqueId: {route array, product id, service count,\n // direction unless both, pickup array, setdown array, reference array}\n var skipTrip = {};\n // Trip_id index of trips to skip (since already processed via block group)\n\n function isCircular(out, routeId, trip, nodes) {\n\n var check, stops, target, i, j;\n\n if (out.config.isCircular.length > 0) {\n if (routeId in out._.circular) {\n // Predefined circular, come what may\n return true;\n } else {\n // Circulars are defined and this is not one\n return false;\n }\n }\n\n if (nodes.length < 3 ||\n nodes[0] !== nodes[nodes.length -1]\n ) {\n // Start/end stops differ, or no intermediate stops\n return false;\n }\n\n if (\"block\" in trip &&\n trip.block.trips.length > 1\n ) {\n // Block had multiple trips. 
Circular if nodes are the same on all trips\n target = nodes.join(\":\");\n check = null;\n for (i = 0; i < trip.block.trips.length; i += 1) {\n if (trip.block.trips[i] in out._.trip) {\n stops = out._.trip[trip.block.trips[i]].stops.slice().sort(function(a, b) {\n return a[0] - b[0];\n });\n check = [];\n for (j = 0; j < stops.length; j += 1) {\n check.push(stops[j][1]);\n }\n if (check.join(\":\") !== target) {\n check = null;\n break;\n }\n }\n }\n if (check !== null) {\n return true;\n }\n }\n\n if (nodes.length < 5) {\n // Insufficient intermediate stops\n return false;\n }\n\n if (haversineDistance(out.aquius.node[nodes[1]][1], out.aquius.node[nodes[1]][0],\n out.aquius.node[nodes[nodes.length - 2]][1], out.aquius.node[nodes[nodes.length - 2]][0]) > 200\n ) {\n // Stop after start and stop prior to end > 200 metres from each other\n return true;\n }\n\n return false;\n }\n\n function mergeService(serviceA, serviceB) {\n\n var service, i;\n\n if (serviceA.length === 1 &&\n serviceA.length === serviceB.length\n ) {\n // Fasters as saves looping\n return [serviceA[0] + serviceB[0]];\n }\n\n service = [];\n for (i = 0; i < serviceA.length; i += 1) {\n if (i < serviceB.length) {\n service.push(serviceA[i] + serviceB[i]);\n }\n }\n\n return service;\n }\n\n function mergeDuration(minutesA, minutesB) { \n\n var duration, i;\n\n if (minutesA === minutesB) {\n return minutesA;\n }\n \n duration = [];\n for (i = 0; i < minutesA.length; i += 1) {\n if (i > minutesB.length) {\n break;\n }\n if (minutesB[i] === 0) {\n duration.push(minutesA[i]);\n } else if (minutesA[i] === 0) {\n duration.push(minutesB[i]);\n } else {\n duration.push(Math.round((minutesA[i] + minutesB[i]) / 2));\n // Crude - average may gradually skew, but values should be very similar, since describe the same trip segment\n }\n\n }\n\n return duration;\n }\n\n trips = Object.keys(out._.trip);\n for (i = 0; i < trips.length; i += 1) {\n\n routeId = out._.trip[trips[i]].route_id;\n\n if (trips[i] in 
skipTrip === false &&\n routeId in out._.routes &&\n \"product\" in out._.routes[routeId]\n ) {\n // Product check only fails if route_id was missing from GTFS routes\n\n isBackward = false;\n nodes = [];\n nodeCount = {};\n // Node:Count. Prevents inclusion in p/u/split where node occurs >1\n dwells = []; // Parallels nodes\n out._.trip[trips[i]].stops.sort(function(a, b) {\n return a[0] - b[0];\n });\n\n reference = {};\n // slug:reference\n if (\"reference\" in out._.routes[routeId] &&\n out._.routes[routeId].reference.slug in reference === false\n ) {\n reference[out._.routes[routeId].reference.slug] = out._.routes[routeId].reference;\n }\n if (\"reference\" in out._.trip[trips[i]] &&\n out._.trip[trips[i]].reference.slug in reference === false\n ) {\n reference[out._.trip[trips[i]].reference.slug] = out._.trip[trips[i]].reference;\n }\n\n if (\"block\" in out._.trip[trips[i]] &&\n out._.trip[trips[i]].block.trips.length > 1 &&\n (out.config.isCircular.length === 0 ||\n routeId in out._.circular == false)\n ) {\n /**\n * Blocked trips require same service and stop differences to be merged here\n * Trips with identical stop sequence will be evalulated later as circulars\n * Blocked trips with different calendars could break this logic,\n * but since service is the same, such a break should not impact on the final link totals\n * Note blocks rarely used, while imperfect block processing tends to fail gracefully,\n * since failed trips are retained separately (only the through-journey is lost)\n */\n merge = [];\n // TimeMS, nodes, tripId, dwells\n\n for (j = 0; j < out._.trip[trips[i]].block.trips.length; j += 1) {\n if (out._.trip[trips[i]].block.trips[j] in skipTrip === false &&\n out._.trip[trips[i]].block.trips[j] in out._.trip &&\n out._.trip[out._.trip[trips[i]].block.trips[j]].service.join(\":\")\n === out._.trip[trips[i]].service.join(\":\") &&\n out._.trip[out._.trip[trips[i]].block.trips[j]].route_id in out._.routes &&\n 
out._.routes[out._.trip[out._.trip[trips[i]].block.trips[j]].route_id].product\n === out._.routes[routeId].product &&\n \"start\" in out._.trip[out._.trip[trips[i]].block.trips[j]] &&\n \"end\" in out._.trip[out._.trip[trips[i]].block.trips[j]]\n ) {\n // Block.trips includes parent trip\n stops = out._.trip[out._.trip[trips[i]].block.trips[j]].stops.sort(function(a, b) {\n return a[0] - b[0];\n });\n blockNodes = [];\n blockDwells = [];\n for (k = 0; k < stops.length; k += 1) {\n blockNodes.push(stops[k][1]);\n blockDwells.push(stops[k][2]);\n }\n merge.push([(out._.trip[out._.trip[trips[i]].block.trips[j]].start +\n out._.trip[out._.trip[trips[i]].block.trips[j]].end) / 2,\n blockNodes, out._.trip[trips[i]].block.trips[j], blockDwells]);\n }\n }\n\n if (merge.length > 1) {\n // Else only parent trip\n merge.sort(function(a, b) {\n return a[0] - b[0];\n });\n\n for (j = 0; j < merge.length; j += 1) {\n if (merge[j][1].length > 0) {\n if (nodes.length > 0 &&\n merge[j][1][0] === nodes[nodes.length - 1]\n ) {\n merge[j][1] = merge[j][1].slice(1);\n merge[j][3] = merge[j][3].slice(1);\n }\n nodes = nodes.concat(merge[j][1]);\n dwells = dwells.concat(merge[j][3]);\n if (merge[j][2] !== trips[i]) {\n\n skipTrip[merge[j][2]] = \"\";\n\n check = [\"t\", \"setdown\", \"pickup\"];\n for (k = 0; k < check.length; k += 1) {\n if (check[k] in out._.trip[merge[j][2]]) {\n if (check[k] in out._.trip[trips[i]] === false) {\n out._.trip[trips[i]][check[k]] = [];\n }\n for (l = 0; l < out._.trip[merge[j][2]][check[k]].length; l += 1) {\n if (out._.trip[trips[i]][check[k]].indexOf(out._.trip[merge[j][2]][check[k]][l]) === -1) {\n // Likely never called. 
Block with splits or pickup/setdowns may be messy, but try\n out._.trip[trips[i]][check[k]].push(out._.trip[merge[j][2]][check[k]][l]);\n }\n }\n }\n }\n\n if (out._.trip[merge[j][2]].route_id in out._.routes &&\n \"reference\" in out._.routes[out._.trip[merge[j][2]].route_id] &&\n out._.routes[out._.trip[merge[j][2]].route_id].reference.slug in reference === false\n ) {\n reference[out._.routes[out._.trip[merge[j][2]].route_id].reference.slug] =\n out._.routes[out._.trip[merge[j][2]].route_id].reference;\n }\n if (\"reference\" in out._.trip[merge[j][2]] &&\n out._.trip[merge[j][2]].reference.slug in reference === false\n ) {\n reference[out._.trip[merge[j][2]].reference.slug] = out._.trip[merge[j][2]].reference;\n }\n\n // Service/product remains as parent. Ignores any b (unlikely scenario)\n }\n }\n }\n }\n }\n\n forward = \"f\" + out._.routes[routeId].product;\n if (out.config.mirrorLink) {\n backward = \"f\" + out._.routes[routeId].product;\n }\n\n if (\"setdown\" in out._.trip[trips[i]] &&\n out._.trip[trips[i]].setdown.length > 0\n ) {\n out._.trip[trips[i]].setdown.sort();\n forward += \"s\" + out._.trip[trips[i]].setdown.join(\":\");\n }\n\n if (\"pickup\" in out._.trip[trips[i]] &&\n out._.trip[trips[i]].pickup.length > 0\n ) {\n out._.trip[trips[i]].pickup.sort();\n forward += \"u\" + out._.trip[trips[i]].pickup.join(\":\");\n if (out.config.mirrorLink) {\n backward += \"s\" + out._.trip[trips[i]].pickup.join(\":\");\n // Pickup processed as setdown in reverse\n }\n }\n\n if (out.config.mirrorLink &&\n \"setdown\" in out._.trip[trips[i]] &&\n out._.trip[trips[i]].setdown.length > 0\n ) {\n backward += \"u\" + out._.trip[trips[i]].setdown.join(\":\");\n // Setdown processed as pickup in reverse\n }\n\n if (\"t\" in out._.trip[trips[i]]) {\n forward += \"t\" + out._.trip[trips[i]].t.join(\":\");\n if (out.config.mirrorLink) {\n backward += \"t\" + out._.trip[trips[i]].t.join(\":\");\n }\n }\n\n if (nodes.length === 0) {\n add = true;\n } else {\n add = 
false;\n }\n for (j = 0; j < out._.trip[trips[i]].stops.length; j += 1) {\n if (add) {\n nodes.push(out._.trip[trips[i]].stops[j][1]);\n dwells.push(out._.trip[trips[i]].stops[j][2]);\n }\n if (out._.trip[trips[i]].stops[j][1] in nodeCount) {\n nodeCount[out._.trip[trips[i]].stops[j][1]] += 1; \n } else {\n nodeCount[out._.trip[trips[i]].stops[j][1]] = 1;\n }\n }\n\n if (out.config.mirrorLink) {\n backward = nodes.slice().reverse().join(\":\") + backward;\n }\n forward = nodes.join(\":\") + forward;\n\n if (forward in link &&\n \"b\" in out._.trip[trips[i]] === false\n ) {\n\n link[forward].service = mergeService(link[forward].service, out._.trip[trips[i]].service);\n\n if (\"minutes\" in link[forward] && \"minutes\" in out._.trip[trips[i]]) {\n link[forward].minutes = mergeDuration(link[forward].minutes, out._.trip[trips[i]].minutes);\n }\n if (\"dwells\" in link[forward]) {\n link[forward].dwells = mergeDuration(link[forward].dwells, dwells)\n }\n\n } else {\n\n if (out.config.mirrorLink &&\n backward in link &&\n \"b\" in out._.trip[trips[i]] === false\n ) {\n\n isBackward = true;\n link[backward].service = mergeService(link[backward].service, out._.trip[trips[i]].service);\n\n if (\"minutes\" in link[backward] && \"minutes\" in out._.trip[trips[i]]) {\n link[backward].minutes = mergeDuration(link[backward].minutes, out._.trip[trips[i]].minutes);\n }\n if (\"dwells\" in link[backward]) {\n link[backward].dwells = mergeDuration(link[backward].dwells, dwells)\n }\n\n if (\"direction\" in link[backward]) {\n delete link[backward].direction;\n }\n\n } else {\n\n link[forward] = {\n \"direction\": 1,\n \"product\": out._.routes[routeId].product,\n \"route\": nodes,\n \"service\": out._.trip[trips[i]].service,\n \"dwells\": dwells,\n };\n\n if (\"minutes\" in out._.trip[trips[i]]) {\n link[forward].minutes = out._.trip[trips[i]].minutes;\n }\n\n if (\"setdown\" in out._.trip[trips[i]]) {\n nodeStack = [];\n for (j = 0; j < out._.trip[trips[i]].setdown.length; j += 1) 
{\n if (out._.trip[trips[i]].setdown[j] in nodeCount &&\n nodeCount[out._.trip[trips[i]].setdown[j]] === 1\n ) {\n nodeStack.push(out._.trip[trips[i]].setdown[j]);\n }\n }\n if (nodeStack.length > 0) {\n link[forward].setdown = nodeStack;\n }\n }\n if (\"pickup\" in out._.trip[trips[i]]) {\n nodeStack = [];\n for (j = 0; j < out._.trip[trips[i]].pickup.length; j += 1) {\n if (out._.trip[trips[i]].pickup[j] in nodeCount &&\n nodeCount[out._.trip[trips[i]].pickup[j]] === 1\n ) {\n nodeStack.push(out._.trip[trips[i]].pickup[j]);\n }\n }\n if (nodeStack.length > 0) {\n link[forward].pickup = nodeStack;\n }\n }\n if (isCircular(out, routeId, out._.trip[trips[i]], nodes)) {\n link[forward].circular = 1;\n }\n if (\"b\" in out._.trip[trips[i]]) {\n link[forward].b = out._.trip[trips[i]].b;\n // Currently .b are always unique links\n }\n if (\"t\" in out._.trip[trips[i]]) {\n nodeStack = [];\n for (j = 0; j < out._.trip[trips[i]].t.length; j += 1) {\n if (out._.trip[trips[i]].t[j] in nodeCount &&\n nodeCount[out._.trip[trips[i]].t[j]] === 1\n ) {\n nodeStack.push(out._.trip[trips[i]].t[j]);\n }\n }\n if (nodeStack.length > 0) {\n link[forward].t = nodeStack;\n }\n }\n }\n }\n\n keys = Object.keys(reference);\n if (keys.length > 0) {\n\n if (isBackward &&\n \"reference\" in link[backward] === false\n ) {\n link[backward].reference = [];\n link[backward].referenceLookup = {};\n }\n if (isBackward == false &&\n \"reference\" in link[forward] === false\n ) {\n link[forward].reference = [];\n link[forward].referenceLookup = {};\n }\n\n for (j = 0; j < keys.length; j += 1) {\n if (isBackward &&\n keys[j] in link[backward].referenceLookup === false\n ) {\n link[backward].reference.push(reference[keys[j]]);\n link[backward].referenceLookup[keys[j]] = \"\";\n }\n if (isBackward === false &&\n keys[j] in link[forward].referenceLookup === false\n ) {\n link[forward].reference.push(reference[keys[j]]);\n link[forward].referenceLookup[keys[j]] = \"\";\n }\n }\n }\n\n }\n }\n\n delete 
out._.nodeLookup;\n delete out._.routes;\n delete out._.trip;\n\n out.aquius.link = [];\n out.summary.network = [];\n // Indexed by networkFilter, content array of service by serviceFilter index\n\n for (i = 0; i < out.aquius.network.length; i += 1) {\n if (\"service\" in out.aquius) {\n out.summary.network.push([]);\n for (j = 0; j < out.aquius.service.length; j += 1) {\n out.summary.network[i].push(0);\n }\n } else {\n out.summary.network.push([0]);\n }\n }\n\n keys = Object.keys(link);\n\n for (i = 0; i < keys.length; i += 1) {\n\n thisLink = link[keys[i]];\n\n for (j = 0; j < thisLink.service.length; j += 1) {\n\n for (k = 0; k < out.aquius.network.length; k += 1) {\n if (out.aquius.network[k][0].indexOf(thisLink.product) !== -1) {\n out.summary.network[k][j] += thisLink.service[j];\n }\n }\n \n if (thisLink.service[j] < 10) {\n thisLink.service[j] = parseFloat(thisLink.service[j].toPrecision(1));\n } else {\n thisLink.service[j] = parseInt(thisLink.service[j], 10);\n }\n }\n \n if (out.config.allowWaypoint === false &&\n \"pickup\" in thisLink &&\n \"setdown\" in thisLink\n ) {\n for (j = thisLink.pickup.length - 1; j >= 0; j--) {\n // Reverse since may deform pickup\n check = thisLink.setdown.indexOf(thisLink.pickup[j]);\n if (check !== -1) {\n for (k = thisLink.route.length - 1; k >= 0 ; k--) {\n if (thisLink.route[k] === thisLink.pickup[j]) {\n thisLink.route.splice(k, 1);\n }\n }\n thisLink.pickup.splice(j, 1);\n thisLink.setdown.splice(check, 1);\n }\n }\n }\n\n line = [\n [thisLink.product],\n thisLink.service,\n thisLink.route,\n {}\n ];\n\n if (\"reference\" in thisLink) {\n for (j = 0; j < thisLink.reference.length; j += 1) {\n delete thisLink.reference[j].slug;\n }\n line[3].r = thisLink.reference;\n }\n if (\"color\" in thisLink) {\n line[3].o = thisLink.color;\n }\n if (\"circular\" in thisLink) {\n line[3].c = 1;\n }\n if (\"direction\" in thisLink) {\n line[3].d = 1;\n }\n if (\"pickup\" in thisLink) {\n line[3].u = thisLink.pickup;\n }\n if 
(\"setdown\" in thisLink) {\n line[3].s = thisLink.setdown;\n }\n if (\"t\" in thisLink) {\n line[3].t = thisLink.t;\n }\n if (\"b\" in thisLink) {\n line[3].b = thisLink.b;\n }\n if (\"minutes\" in thisLink) {\n line[3].m = thisLink.minutes;\n }\n if (out.config.allowDwell && \"dwells\" in thisLink) {\n line[3].w = thisLink.dwells;\n }\n\n out.aquius.link.push(line);\n\n }\n\n out.aquius.link.sort(function (a, b) {\n return b[1].reduce(function(c, d) {\n return c + d;\n }, 0) - a[1].reduce(function(c, d) {\n return c + d;\n }, 0);\n });\n // Descending service count, since busiest most likely to be queried and thus found faster\n\n for (i = 0; i < out.summary.network.length; i += 1) {\n for (j = 0; j < out.summary.network[i].length; j += 1) {\n out.summary.network[i][j] = Math.round(out.summary.network[i][j]);\n }\n }\n\n delete out._.dayFactor;\n\n return out;\n }\n\n function optimiseNode(out) {\n /**\n * Assigns most frequently referenced nodes to lowest indices and removes unused nodes\n * @param {object} out\n * @return {object} out\n */\n\n var keys, newNode, newNodeLookup, nodeArray, i, j;\n var nodeOccurance = {};\n // OldNode: Count of references\n\n for (i = 0; i < out.aquius.link.length; i += 1) {\n nodeArray = out.aquius.link[i][2];\n if (\"u\" in out.aquius.link[i][3]) {\n nodeArray = nodeArray.concat(out.aquius.link[i][3].u);\n }\n if (\"s\" in out.aquius.link[i][3]) {\n nodeArray = nodeArray.concat(out.aquius.link[i][3].s);\n }\n if (\"t\" in out.aquius.link[i][3]) {\n nodeArray = nodeArray.concat(out.aquius.link[i][3].t);\n }\n for (j = 0; j < nodeArray.length; j += 1) {\n if (nodeArray[j] in nodeOccurance === false) {\n nodeOccurance[nodeArray[j]] = 1;\n } else {\n nodeOccurance[nodeArray[j]] += 1;\n }\n }\n }\n\n nodeArray = [];\n // Reused, now OldNode by count of occurance\n keys = Object.keys(nodeOccurance);\n for (i = 0; i < keys.length; i += 1) {\n nodeArray.push([nodeOccurance[keys[i]], keys[i]]);\n }\n nodeArray.sort(function(a, b) {\n 
return a[0] - b[0];\n });\n nodeArray.reverse();\n\n newNode = [];\n // As aquius.node\n newNodeLookup = {};\n // OldNodeIndex: NewNodeIndex\n for (i = 0; i < nodeArray.length; i += 1) {\n newNode.push(out.aquius.node[nodeArray[i][1]]);\n newNodeLookup[nodeArray[i][1]] = i;\n }\n out.aquius.node = newNode;\n\n for (i = 0; i < out.aquius.link.length; i += 1) {\n for (j = 0; j < out.aquius.link[i][2].length; j += 1) {\n out.aquius.link[i][2][j] = newNodeLookup[out.aquius.link[i][2][j]];\n }\n if (\"u\" in out.aquius.link[i][3]) {\n for (j = 0; j < out.aquius.link[i][3].u.length; j += 1) {\n out.aquius.link[i][3].u[j] = newNodeLookup[out.aquius.link[i][3].u[j]];\n }\n }\n if (\"s\" in out.aquius.link[i][3]) {\n for (j = 0; j < out.aquius.link[i][3].s.length; j += 1) {\n out.aquius.link[i][3].s[j] = newNodeLookup[out.aquius.link[i][3].s[j]];\n }\n }\n if (\"t\" in out.aquius.link[i][3]) {\n for (j = 0; j < out.aquius.link[i][3].t.length; j += 1) {\n out.aquius.link[i][3].t[j] = newNodeLookup[out.aquius.link[i][3].t[j]];\n }\n }\n }\n\n return out;\n }\n\n function isPointInFeature(x, y, feature) {\n /**\n * Helper: Check if x,y point is in GeoJSON feature. 
Supports Polygon, MultiPolygon\n * @param {float} x - x (longitude) coordinate\n * @param {float} y - y (latitude) coordinate\n * @param {object} feature - single GeoJSON feature array\n * @return {boolean}\n */\n\n var i;\n\n function isPointInPolygon(x, y, polygonArray) {\n\n var inside, intersect, xj, xk, yj, yk, i, j, k;\n\n if (Array.isArray(polygonArray)) {\n for (i = 0; i < polygonArray.length; i += 1) {\n if (Array.isArray(polygonArray[i])) {\n\n // Via https://github.com/substack/point-in-polygon\n // From http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html\n inside = false;\n for (j = 0, k = polygonArray[i].length - 1;\n j < polygonArray[i].length; k = j++) {\n\n xj = polygonArray[i][j][0];\n yj = polygonArray[i][j][1];\n xk = polygonArray[i][k][0];\n yk = polygonArray[i][k][1];\n\n intersect = ((yj > y) != (yk > y)) && (x < (xk - xj) * (y - yj) / (yk - yj) + xj);\n if (intersect) {\n inside = !inside;\n }\n\n }\n\n if (inside &&\n i === 0 &&\n polygonArray.length === 1\n ) {\n // Found in sole polygon\n return true;\n }\n if (inside === false &&\n i === 0\n ) {\n // Missing in parent polygon\n return false;\n }\n if (inside &&\n i > 0\n ) {\n // Found in a hole\n return false;\n }\n if (i === polygonArray.length - 1) {\n // Beholed polygon found in outer but not inner\n return true;\n }\n\n }\n }\n }\n\n return false;\n }\n\n if (\"geometry\" in feature === false ||\n \"type\" in feature.geometry === false ||\n \"coordinates\" in feature.geometry === false ||\n !Array.isArray(feature.geometry.coordinates)\n ) {\n return false;\n }\n\n if (feature.geometry.type === \"MultiPolygon\") {\n for (i = 0; i < feature.geometry.coordinates.length; i += 1) {\n if (isPointInPolygon(x, y, feature.geometry.coordinates[i])) {\n return true;\n }\n }\n }\n\n if (feature.geometry.type === \"Polygon\") {\n return isPointInPolygon(x, y, feature.geometry.coordinates);\n }\n // Future: Extendable for other geometries, such as nearest point\n return false;\n 
}\n\n function getCentroidFromFeature(feature) {\n /**\n * Helper: Calculate centroid of GeoJSON feature. Supports Polygon, MultiPolygon\n * @param {object} feature - single GeoJSON feature array\n * @return {array} [x,y], or null if geometry not supported\n */\n\n var i;\n var bounds = {};\n\n function getCentroidFromPolygon(polygonArray, bounds) {\n\n var i, j;\n\n if (Array.isArray(polygonArray)) {\n for (i = 0; i < polygonArray.length; i += 1) {\n if (Array.isArray(polygonArray[i])) {\n for (j = 0; j < polygonArray[i].length; j += 1) {\n if (Array.isArray(polygonArray[i][j]) &&\n polygonArray[i][j].length === 2\n ) {\n\n if (\"maxX\" in bounds === false) {\n // First entry\n bounds.maxX = polygonArray[i][j][0];\n bounds.minX = polygonArray[i][j][0];\n bounds.maxY = polygonArray[i][j][1];\n bounds.minY = polygonArray[i][j][1];\n } else {\n\n if (polygonArray[i][j][0] > bounds.maxX) {\n bounds.maxX = polygonArray[i][j][0];\n } else {\n if (polygonArray[i][j][0] < bounds.minX) {\n bounds.minX = polygonArray[i][j][0];\n }\n }\n if (polygonArray[i][j][1] > bounds.maxY) {\n bounds.maxY = polygonArray[i][j][1];\n } else {\n if (polygonArray[i][j][1] < bounds.minY) {\n bounds.minY = polygonArray[i][j][1];\n }\n }\n\n }\n\n }\n }\n }\n }\n }\n\n return bounds;\n }\n\n if (\"geometry\" in feature &&\n \"type\" in feature.geometry &&\n \"coordinates\" in feature.geometry &&\n Array.isArray(feature.geometry.coordinates)\n ) {\n\n if (feature.geometry.type === \"MultiPolygon\") {\n for (i = 0; i < feature.geometry.coordinates.length; i += 1) {\n bounds = getCentroidFromPolygon(feature.geometry.coordinates[i], bounds);\n }\n }\n\n if (feature.geometry.type === \"Polygon\") {\n bounds = getCentroidFromPolygon(feature.geometry.coordinates, bounds);\n }\n // Future: Extendable for other geometry types\n }\n\n if (\"maxX\" in bounds) {\n return [(bounds.maxX + bounds.minX) / 2, (bounds.maxY + bounds.minY) / 2];\n // Unweighted centroids\n } else {\n return null;\n }\n }\n\n 
function buildPlace(out, options) {\n /**\n * Creates and populates aquius.place\n * @param {object} out\n * @param {object} options\n * @return {object} out\n */\n\n var centroid, checked, content, distance, index, key, keys, lastDiff, minDistance, node, population, xyDiff, i, j;\n var centroidLookup = {};\n // \"x:y\": GeojsonLine (for cache processing)\n var centroidStack = {};\n // For efficient searching = GeojsonLine: {x, y}\n var centroidKeys = [];\n // Names of centroidStack. Saves repeat Object.keys\n var nodeStack = [];\n // For efficient searching = x+y, nodeIndex\n var thisPlace = -1;\n // GeojsonLine\n var precision = Math.pow(10, out.config.coordinatePrecision);\n var previousPlace = 0;\n // GeojsonLine\n var placeCoordStack = {};\n // \"x:y\": aquius.place index, prevents duplication of coordinates\n var placeStack = {};\n // aquius.place: GeojsonLine:aquius.place index\n var maxPopulation = 0;\n var removeNode = {};\n // Index of node indices to be removed from lookup, making them invisible for links\n\n out.aquius.place = [];\n // x, y, {p: population}\n\n if (typeof options === \"object\" &&\n \"geojson\" in options &&\n \"type\" in options.geojson &&\n options.geojson.type === \"FeatureCollection\" &&\n \"features\" in options.geojson &&\n Array.isArray(options.geojson.features) &&\n options.geojson.features.length > 0\n ) {\n\n for (i = 0; i < options.geojson.features.length; i += 1) {\n centroid = getCentroidFromFeature(options.geojson.features[i]);\n if (centroid !== null) {\n centroidKeys.push(i);\n centroidStack[i] = {\n \"x\": Math.round(centroid[0] * precision) / precision,\n \"y\": Math.round(centroid[1] * precision) / precision\n };\n centroidLookup[centroidStack[i].x + \":\" + centroidStack[i].y] = i;\n }\n }\n\n for (i = 0; i < out.aquius.node.length; i += 1) {\n nodeStack.push([\n out.aquius.node[i][0] + out.aquius.node[i][1],\n i\n ])\n }\n nodeStack.sort();\n // Tends to group neighbours, improving chance of thisPlace = 
previousPlace below\n\n for (i = 0; i < nodeStack.length; i += 1) {\n\n thisPlace = -1;\n node = out.aquius.node[nodeStack[i][1]];\n\n key = node[0] + \":\" + node[1];\n if (key in out.config.nodeGeojson &&\n out.config.nodeGeojson[key] in centroidLookup\n ) {\n thisPlace = centroidLookup[out.config.nodeGeojson[key]];\n }\n\n if (thisPlace === -1 &&\n isPointInFeature(node[0], node[1], options.geojson.features[previousPlace])\n ) {\n // Often the next node is near the last, so check the last result first\n thisPlace = previousPlace;\n }\n\n if (thisPlace === -1) {\n\n checked = {};\n checked[previousPlace] = previousPlace;\n\n for (j = 0; j < centroidKeys.length; j += 1) {\n\n xyDiff = Math.abs((node[0] + node[1]) -\n (centroidStack[centroidKeys[j]].x + centroidStack[centroidKeys[j]].y));\n if (j === 0) {\n lastDiff = xyDiff;\n }\n\n if (lastDiff >= xyDiff &&\n centroidKeys[j] in checked === false\n ) {\n // Only consider closer centroids than the prior failures. PreviousPlace already checked\n if (isPointInFeature(node[0], node[1], options.geojson.features[centroidKeys[j]])\n ) {\n thisPlace = centroidKeys[j];\n break;\n }\n checked[centroidKeys[j]] = centroidKeys[j];\n if (Object.keys(checked).length < 5) {\n // First 5 iterations tend to give an optimal search radius\n lastDiff = xyDiff;\n }\n }\n\n }\n\n if (thisPlace === -1) {\n // Low proportion will default to this inefficient loop\n for (j = 0; j < options.geojson.features.length; j += 1) {\n\n if (j in checked === false &&\n isPointInFeature(node[0], node[1], options.geojson.features[j])\n ) {\n thisPlace = j;\n break;\n }\n\n }\n }\n \n if (thisPlace === -1 && out.config.inGeojson === false) {\n // Assign point to nearest boundary centroid (imprecise, inefficient, but catches outliers)\n minDistance = -1;\n for (j = 0; j < centroidKeys.length; j += 1) {\n\n distance = haversineDistance(node[1], node[0], centroidStack[centroidKeys[j]].y, centroidStack[centroidKeys[j]].x);\n if (minDistance === -1 || 
distance < minDistance) {\n minDistance = distance;\n thisPlace = j;\n }\n\n }\n }\n\n }\n\n if (thisPlace !== -1) {\n previousPlace = thisPlace;\n if (thisPlace in placeStack) {\n // Existing place\n out.aquius.node[nodeStack[i][1]][2].p = placeStack[thisPlace];\n } else {\n\n // New place\n if (thisPlace in centroidStack) {\n key = [centroidStack[thisPlace].x, centroidStack[thisPlace].y].join(\":\");\n if (key in placeCoordStack) {\n index = placeCoordStack[key];\n } else {\n out.aquius.place.push([\n centroidStack[thisPlace].x,\n centroidStack[thisPlace].y,\n {}\n ]);\n index = out.aquius.place.length - 1;\n placeCoordStack[key] = index;\n }\n out.aquius.node[nodeStack[i][1]][2].p = index;\n placeStack[thisPlace] = index;\n\n if (\"properties\" in options.geojson.features[thisPlace] &&\n out.config.populationProperty in options.geojson.features[thisPlace].properties\n ) {\n population = parseFloat(options.geojson.features[thisPlace].properties[out.config.populationProperty]);\n if (!Number.isNaN(population)) {\n if (\"p\" in out.aquius.place[index][2] === false) {\n out.aquius.place[index][2].p = 0;\n }\n out.aquius.place[index][2].p += population;\n if (out.aquius.place[index][2].p > maxPopulation) {\n maxPopulation = out.aquius.place[index][2].p;\n }\n }\n }\n\n if (\"properties\" in options.geojson.features[thisPlace] &&\n out.config.placeNameProperty in options.geojson.features[thisPlace].properties &&\n options.geojson.features[thisPlace].properties[out.config.placeNameProperty] !== null\n ) {\n content = {};\n content.n = options.geojson.features[thisPlace].properties[out.config.placeNameProperty].trim();\n if (content.n !== \"\") {\n if (\"r\" in out.aquius.place[index][2] === false) {\n out.aquius.place[index][2].r = [];\n }\n out.aquius.place[index][2].r.push(content);\n }\n \n }\n\n }\n\n }\n\n if (thisPlace in centroidStack) {\n out.config.nodeGeojson[node[0] + \":\" + node[1]] =\n centroidStack[thisPlace].x + \":\" + centroidStack[thisPlace].y;\n 
}\n\n } else {\n\n if (out.config.inGeojson === true) {\n removeNode[nodeStack[i][1]] = \"\";\n }\n\n }\n }\n\n if (maxPopulation > 0 &&\n \"placeScale\" in out.config.option === false\n ) {\n out.config.option.placeScale = Math.round((1 / (maxPopulation / 2e6)) * 1e5) / 1e5;\n // Scaled relative to 2 million maximum. Rounded to 5 decimal places\n if (\"option\" in out.aquius === false) {\n out.aquius.option = {};\n }\n out.aquius.option.placeScale = out.config.option.placeScale;\n }\n \n if (Object.keys(removeNode).length > 0) {\n keys = Object.keys(out._.nodeLookup);\n for (i = 0; i < keys.length; i += 1) {\n if (out._.nodeLookup[keys[i]] in removeNode) {\n delete out._.nodeLookup[keys[i]];\n }\n }\n }\n\n }\n\n return out;\n }\n\n function stopPlace(out) {\n /**\n * Merges stops at their respective place centroid\n * @param {object} out - internal data references\n * @return {object} out\n */\n\n var keys, node, stops, i, j, k;\n\n if (out.config.stopPlace === true &&\n \"place\" in out.aquius\n ) {\n\n // Only out._.nodeLookup adjusted: nodeCoord already discarded.\n stops = {};\n // node index: [stop_ids]\n keys = Object.keys(out._.nodeLookup);\n for (i = 0; i < keys.length; i += 1) {\n if (out._.nodeLookup[keys[i]] in stops === false) {\n stops[out._.nodeLookup[keys[i]]] = [];\n }\n stops[out._.nodeLookup[keys[i]]].push(keys[i]);\n }\n\n for (i = 0; i < out.aquius.place.length; i += 1) {\n node = -1;\n\n for (j = 0; j< out.aquius.node.length; j += 1) {\n if (\"p\" in out.aquius.node[j][2] &&\n out.aquius.node[j][2].p === i\n ) {\n\n if (node === -1) {\n // Use this node for this place\n\n out.aquius.node[j][0] = out.aquius.place[i][0];\n out.aquius.node[j][1] = out.aquius.place[i][1];\n if (\"r\" in out.aquius.place[i][2]) {\n // Node inherits place references, else no node references\n out.aquius.node[j][2].r = out.aquius.place[i][2].r;\n } else {\n if (\"r\" in out.aquius.node[j][2]) {\n delete out.aquius.node[j][2].r;\n }\n }\n node = j;\n\n } else {\n if 
(j in stops) {\n for (k = 0; k < stops[j].length; k += 1) {\n out._.nodeLookup[stops[j][k]] = node;\n }\n }\n // Cleanup unused nodes later via optimiseNode()\n }\n\n }\n }\n\n }\n\n }\n\n return out;\n }\n\n function applyZeroCoordWarning(out) {\n /**\n * Adds out.warning entry text related to zero coordinates, if any exist.\n * @param {object} out - internal data references\n * @return {object} out - internal data references\n */\n let keys, zeroCoord, i;\n\n if(\"stopOverride\" in out.config) {\n keys = Object.keys(out.config.stopOverride);\n zeroCoord = [];\n for (i = 0; i < keys.length; i += 1) {\n if (\"x\" in out.config.stopOverride[keys[i]] &&\n out.config.stopOverride[keys[i]].x === 0 &&\n \"y\" in out.config.stopOverride[keys[i]] &&\n out.config.stopOverride[keys[i]].y === 0 \n ) {\n zeroCoord.push(keys[i]);\n }\n }\n if (zeroCoord.length > 0) {\n if (\"warning\" in out === false) {\n out[\"warning\"] = [];\n }\n out[\"warning\"].push(\"Caution: Contains \" + zeroCoord.length.toString() +\n \" stop_id with coordinates 0,0, which can be corrected in config.stopOverride,\" +\n \" or ignored by setting config.allowZeroCoordinate to false.\");\n }\n }\n return out;\n }\n\n function buildNetworkTable(out, options) {\n /**\n * Builds data needed for a table of networks with service totals.\n * @param {object} out - internal data references\n * @param {object} options\n * @return {object} out, with out.networkTable added if data available\n */\n\n if (\"aquius\" in out &&\n \"network\" in out.aquius &&\n \"summary\" in out &&\n \"network\" in out.summary\n ) {\n let tableData, tableFormat, tableHeader, prefix, i;\n\n if (\"_vars\" in options && \"configId\" in options._vars) {\n prefix = options._vars.configId;\n } else {\n prefix = \"\";\n }\n\n tableHeader = [\"Network\"];\n tableFormat = [prefix + \"Text\"];\n\n if (\"service\" in out.aquius === false ||\n out.aquius.service.length === 0\n ) {\n tableHeader.push(\"Service\");\n tableFormat.push(prefix + 
\"Number\");\n } else {\n for (i = 0; i < out.aquius.service.length; i += 1) {\n if (\"en-US\" in out.aquius.service[i][1]) {\n tableHeader.push(out.aquius.service[i][1][\"en-US\"]);\n } else {\n tableHeader.push(JSON.stringify(out.aquius.service[i][1]));\n }\n tableFormat.push(prefix + \"Number\");\n }\n }\n\n tableData = [];\n for (i = 0; i < out.summary.network.length; i += 1) {\n if (\"en-US\" in out.aquius.network[i][1]) {\n tableData.push([out.aquius.network[i][1][\"en-US\"]].concat(out.summary.network[i]));\n } else {\n tableData.push([JSON.stringify(out.aquius.network[i][1])].concat(out.summary.network[i]));\n }\n }\n\n out[\"networkTable\"] = {\n \"data\": tableData,\n \"format\": tableFormat,\n \"header\": tableHeader,\n }\n }\n\n return out;\n }\n\n function buildSummaryTable(out, options) {\n /**\n * Builds data needed for a histogram of proportion of services by hour.\n * @param {object} out - internal data references\n * @param {object} options\n * @return {object} out, with out.summaryTable added if data available\n */\n\n if (\"summary\" in out &&\n \"service\" in out.summary &&\n out.summary.service.length > 0\n ) {\n\n let tableData, tableFormat, tableHeader, tableRow, prefix, i, j;\n\n if (\"_vars\" in options && \"configId\" in options._vars) {\n prefix = options._vars.configId;\n } else {\n prefix = \"\";\n }\n\n tableHeader = [\"Hour\"];\n tableFormat = [prefix + \"Number\"];\n\n if (\"service\" in out.aquius === false ||\n out.aquius.service.length === 0\n ) {\n tableHeader.push(\"All\")\n tableFormat.push(prefix + \"Number\");\n } else {\n for (i = 0; i < out.aquius.service.length; i += 1) {\n if (\"en-US\" in out.aquius.service[i][1]) {\n tableHeader.push(out.aquius.service[i][1][\"en-US\"]);\n } else {\n tableHeader.push(JSON.stringify(out.aquius.service[i][1]));\n }\n tableFormat.push(prefix + \"Number\");\n }\n }\n\n tableData = [];\n for (i = 0; i < out.summary.service.length; i += 1) {\n tableRow = [i];\n if (out.summary.service[i] === 
undefined) {\n if (\"service\" in out.aquius === false ||\n out.aquius.service.length === 0\n ) {\n tableRow.push(\"-\");\n } else {\n for (j = 0; j < out.aquius.service.length; j += 1) {\n tableRow.push(\"-\");\n }\n }\n } else {\n for (j = 0; j < out.summary.service[i].length; j += 1) {\n if (out.summary.service[i][j] > 0) {\n tableRow.push((out.summary.service[i][j] * 100).toFixed(2));\n } else {\n tableRow.push(\"-\");\n }\n }\n }\n tableData.push(tableRow);\n }\n\n out[\"summaryTable\"] = {\n \"data\": tableData,\n \"format\": tableFormat,\n \"header\": tableHeader,\n }\n\n }\n \n return out;\n }\n\n function exitProcess(out, options) {\n /**\n * Called to exit\n * @param {object} out - internal data references\n * @param {object} options\n */\n\n var error;\n\n delete out._;\n delete out.gtfs;\n delete out.gtfsHead;\n\n if (typeof options === \"object\" &&\n \"callback\" in options\n ) {\n if (\"error\" in out) {\n error = new Error(out.error.join(\". \"));\n }\n options.callback(error, out, options);\n return true;\n } else {\n return out;\n }\n }\n\n out = parseConfig(out, options);\n out = parseGtfs(out, gtfs);\n gtfs = null;\n // Allow memory to be freed?\n\n if (\"error\" in out) {\n return exitProcess(out, options);\n }\n\n out = buildHeader(out);\n out = buildNetwork(out);\n out = buildService(out);\n\n out = stopFilter(out);\n out = parentStopsToNode(out);\n out = transferStopsToNode(out);\n out = regularStopsToNode(out);\n\n out = buildPlace(out, options);\n out = stopPlace(out);\n\n out = createCalendar(out);\n out = baseTrip(out);\n out = timeTrip(out);\n out = duplicateTrip(out);\n out = serviceTrip(out);\n out = createRoutes(out);\n out = buildLink(out);\n\n out = optimiseNode(out);\n out = applyZeroCoordWarning(out);\n out = buildSummaryTable(out, options);\n out = buildNetworkTable(out, options);\n\n return exitProcess(out, options);\n}\n\n\n};\n\ntry {\n // Node only\n module.exports = { gtfsToAquius }; // eslint-disable-line no-undef\n} catch 
{\n // Ignore non-Node errors\n}\n// EoF\n"
},
{
"alpha_fraction": 0.5925430655479431,
"alphanum_fraction": 0.5987293124198914,
"avg_line_length": 34.60118865966797,
"blob_id": "a6f95966cc936b97a38588a5bf44039a0a3297d9",
"content_id": "a87197d8fb86e91018d28b327ea5f1c19210b275",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": true,
"language": "JavaScript",
"length_bytes": 5981,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 168,
"path": "/dist/gtfs-node.js",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "/*eslint-env node*/\n/*\nCrude Node.js frontend for GTFS to Aquius.\n\nRecommended when handling large (100+ MB) GTFS datasets.\nThereafter the limit is Node and ultimately System memory.\nTry granting Node up to 10MB of memory per MB of GTFS text.\nFor example, for 30GB: node --max-old-space-size=30000\n\nSetup is the same as the browser version: Extract the GTFS into a directory,\nalongside any config.json and boundaries.geojson.\nOther operations, such as unzipping the GTFS, or managing config files,\nwill need to be done using other bespoke scripts.\n\nThen command line: node path/to/gtfs-node.js path/to/data_directory/\nOr with more memory: node --max-old-space-size=30000 path/to/gtfs-node.js path/to/data_directory/\n\nOutputs added (created or overwritten) in the data directory:\n- aquius.json - output for use in the wider Aquius ecosystem.\n- config.auto - augmented or generated config.json,\n rename to config.json to reuse in future GTFS processing.\n- histogram.csv - proportion of services by hour of day, useful for quality assurance.\n- network.csv - service totals by network, useful for quality assurance.\n*/\nconst fs = require(\"fs\");\nconst readline = require(\"readline\");\nconst processing = require(\"./gtfs.js\").gtfsToAquius.process;\n\n\nasync function init() {\n /**\n * Loads raw data from directory, initiates GTFS to Aquius processing,\n * and handles the result.\n * @return {string} Status once run complete.\n */\n \n let directory = process.argv.slice(2).join(\" \");\n\n if (directory.length == 0) {\n return \"Missing directory argument. 
Expected: \" +\n \"node path/to/gtfs-node.js path/to/data_directory/\";\n }\n if (!directory.endsWith(\"/\") && !directory.endsWith(\"\\\\\")) {\n directory += \"/\";\n }\n\n let inputGtfs = {}; // key per GTFS file slug, value array of raw text content of GTFS file\n let inputOptions = {}; // geojson, config, callback\n\n try {\n const dir = fs.opendirSync(directory);\n let content;\n\n while ((content = dir.readSync()) !== null) {\n let fullName = content.name.toLowerCase();\n let extension = fullName.split(\".\").slice(-1)[0];\n let slug = fullName.split(\".\").slice(0, -1).join(\".\");\n\n // Logic below mirrors gtfs.js init.onLoad()\n if (extension === \"json\" || extension === \"geojson\") {\n let json = JSON.parse(fs.readFileSync(directory + fullName, \"utf8\"));\n if (\"type\" in json && json.type === \"FeatureCollection\") {\n inputOptions[\"geojson\"] = json;\n } else {\n inputOptions[\"config\"] = json;\n }\n } else if (extension === \"txt\") { // Assume GTFS data\n if (slug !== \"shapes\") { // Shapes is unused by GTFS to Aquius, but often large\n inputGtfs[slug] = await readTxtToArray(directory + fullName);\n }\n }\n }\n\n dir.closeSync();\n\n } catch (err) {\n return \"Failed when reading input data: \" + err;\n }\n\n const result = processing(inputGtfs, inputOptions);\n\n if (\"error\" in result) {\n return \"Failed with processing error: \" + result.error.join(\", \");\n }\n\n try {\n fs.writeFileSync(directory + \"aquius.json\", JSON.stringify(result.aquius), \"utf8\");\n fs.writeFileSync(directory + \"config.auto\", JSON.stringify(result.config, null, 2), \"utf8\");\n if (\"networkTable\" in result) {\n fs.writeFileSync(directory + \"network.csv\", tableToCsv(result.networkTable), \"utf8\");\n }\n if (\"summaryTable\" in result) {\n fs.writeFileSync(directory + \"histogram.csv\", tableToCsv(result.summaryTable), \"utf8\");\n }\n } catch (err) {\n return \"Failed when writing output data: \" + err;\n }\n\n let success = [\"Success: Aquius written 
to '\" + directory + \"'.\"];\n if (\"warning\" in result) {\n success = success.concat(result.warning);\n }\n\n return success.join(\". \");\n\n}\n\nfunction tableToCsv(table) {\n /**\n * Creates CSV block from table data object.\n * @param {object} table is a dict consisting data, format, header, each holding arrays,\n * as generated by gtfs.process buildNetwork/SummaryTable() functions\n * @return {string} CSV block\n */\n\n let csv = sanitizeRow(table.header, []).join(\",\") + \"\\n\"; // Headers are all text fields\n for (let i = 0; i < table.data.length; i += 1) {\n csv += sanitizeRow(table.data[i], table.format).join(\",\") + \"\\n\";\n }\n return csv;\n}\n\nfunction sanitizeRow(row, format) {\n /**\n * Quotes text fields in row, as defined by format, or quoted where no format provided.\n * @param {Array} row - table row\n * @param {Array} format - table format, expected \"Number\" or \"Text\"\n * @return {Array} row with text sanitised in quotes\n */\n for (let i = 0; i < row.length; i += 1) {\n if (i > (format.length - 1) || format[i] === \"Text\") {\n // Crude quoting of text fields\n row[i] = '\"' + String(row[i]).replace('\"', \"'\") + '\"';\n }\n }\n return row;\n}\n\nfunction readTxtToArray(filepath) {\n /**\n * Reads text file at filepath in an array of lines.\n * This structure is required to handle very long files,\n * where accumulated string could exceed 512MB buffer.\n * @param {string} filepath - .txt file to process\n * @return {Array} - array of string lines\n */\n return new Promise((resolve) => {\n let lines = [];\n let stream = fs.createReadStream(filepath);\n let reader = readline.createInterface({\n input: stream,\n output: process.stdout,\n terminal: false,\n });\n reader.on(\"line\", function (line) {\n lines.push(line + \"\\n\");\n });\n reader.on(\"close\", function() {\n reader.close();\n resolve(lines);\n });\n });\n}\n\n\n(async () => {\n console.log(await init());\n })()\n"
},
{
"alpha_fraction": 0.7722171545028687,
"alphanum_fraction": 0.7785279154777527,
"avg_line_length": 111.51177215576172,
"blob_id": "133a4f030b306ca725558a43609fe8d9e843b2ff",
"content_id": "9d1a5b221f8c5a0fe7caaa0e2f6a066fa8ac0bf4",
"detected_licenses": [
"MIT",
"LicenseRef-scancode-public-domain"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 90799,
"license_type": "permissive",
"max_line_length": 1342,
"num_lines": 807,
"path": "/README.md",
"repo_name": "timhowgego/Aquius",
"src_encoding": "UTF-8",
"text": "# Aquius\n\n> _Here+Us_ - An Alternative Approach to Public Transport Network Discovery\n\n## Description\n\n[](https://timhowgego.github.io/Aquius/live/york-2019/#x-1.03271/y53.95073/z13/c-1.05143/k53.94608/m13/p1)\n\nAquius visualises the links between people that are made possible by transport networks. The user clicks once on a location, and near-instantly all the routes, stops, services, and connected populations are summarised visually. Aquius answers the question, what services are available _here_? And to a degree, who am I connected to by those services? Population is a proxy for all manner of socio-economic activity and facilities, measured both in utility and in perception. Conceptually Aquius is half-way between the two common approaches to public transport information:\n\n1. Conventional journey planners are aimed at users that already have some understanding of what routes exist. Aquius summarises overall service patterns.\n1. Most network maps are pre-defined and necessarily simplified, typically showing an entire city or territory. Aquius relates to a bespoke location.\n\nThe application makes no server queries (once the initial dataset has been loaded), so responds near-instantly. That speed allows network discovery by trial and error, which changes the user dynamic from _being told_, to playful learning.\n\nAquius enables maps that are physically impossible to draw as single unified networks, such as cabotaged international routes where the available destinations change depending on where the passenger boards. For example, FlixBus can only carry passengers from places in Portugal to destinations in other countries, even where the bus itself serves many places in Portugal. 
The reverse is true for passengers travelling _from_ Spain - a range of destinations within Portugal are offered, but none within Spain herself.\n\n[](https://timhowgego.github.io/Aquius/live/flixbus-aug-2018/#x-6.064/y41.311/z7/c-8.443/k41.546/m8)\n\nAquius is capable of displaying the largest urban networks in the world, [such as Paris](https://timhowgego.github.io/Aquius/live/petite-paris-2019/) and [New York City](https://timhowgego.github.io/Aquius/live/nyc-2019/). Aquius can be configured to locate stops and services very precisely, or to summarise networks strategically. Aggregation of stops into local Wards allows exploration and analysis of [every scheduled public transport service in Great Britain](https://timhowgego.github.io/Aquius/live/gb-pt-ward-2019/). Aquius can even allow proposed route and service changes to be rapidly visualised and assessed in context, prior to any detailed scheduling or modelling, as exemplified for Barcelona.\n\n[](https://timhowgego.github.io/Aquius/live/amb-vortex-2018/#c2.11985/k41.36972/m13/x2.153/y41.3832/z12/p2/r2/s3/tca-ES)\n\nSome caveats:\n\n* Aquius maps straight-line links, not the precise geographic route taken. This allows services that do not stop at all intermediate stops to be clearly differentiated. It also makes it technically possible for an internet client to work with a large transport network. Aquius is however limited to conventional scheduled public transport with fixed stopping points.\n* Aquius summarises the patterns of fixed public transport networks. It presumes a degree of network stability over time, and cannot sensibly be used to describe a transport network that is in constant flux. The Aquius data structure allows filtering by time period, but such periods must be pre-defined and cannot offer the same precision as schedule-based systems.\n* Aquius only shows the direct service from _here_, not journeys achieved by interchange. 
If the intention is to show all possible interchanges without letting users explore the possibilities for themselves, then Aquius is not the logical platform to use: Displaying all possible interchanges ultimately results in a map of every service which fails to convey what is genuinely local to _here_.\n* As an internet application, Aquius is limited by the filesize of its datasets, which are currently always loaded as a single file. So while multiple local networks can be aggregated into a single large national dataset, such a dataset may be many megabytes in size, implying unacceptably long load times for online users.\n\n> Ready to explore? [Try a live demonstration](https://timhowgego.github.io/Aquius/live/)!\n\nThe remainder of this document provides a technical manual for Aquius as software. For a transport-orientated review of Aquius, see [Aquius – An Alternative Approach to Public Transport Network Discovery](https://timhowgego.wordpress.com/2019/01/09/aquius/).\n\n## Manual\n\nIn this document:\n\n* [User FAQ](#user-faq)\n* [Quick Setup](#quick-setup)\n* [Limitations](#limitations)\n* [Known Issues](#known-issues)\n* [Configuration](#configuration)\n* [Here Queries](#here-queries)\n* [GTFS To Aquius](#gtfs-to-aquius)\n* [GeoJSON to Aquius](#geojson-to-aquius)\n* [Merge Aquius](#merge-aquius)\n* [Data Structure](#data-structure)\n* [License](#license)\n* [Contributing](#contributing)\n\n## User FAQ\n\n#### How are services counted?\n\nBroadly, every service leaving from a node (stop, station) within _here_ is counted once. If that service also arrives at a node within here, it is additionally counted at that node as an arrival, thus within _here_, services are summarised in both directions. Outside _here_, only services from _here_ to _there_ are summarised. Any specific local setdown and pickup conditions are taken into consideration. Services (typically trains) that split mid-journey are counted once over common sections. 
Individual services that combine more than one product together on the same vehicle are counted no more than once, by the longer distance component unless that has been filtered out by the choice of network.\n\n#### How are people counted?\n\nThe population is that of the local area containing one or more nodes (stops, stations) linked to _here_ by the services shown, including the population of _here_ itself. Each node is assigned to just one such geographic area. The geography of these areas is defined by the dataset creator, but is intended to _broadly_ convey the natural catchment or hinterland of the service, and typically uses local administrative boundaries (such as districts or municipalities) to reflect local definitions of place. This population may be additionally factored by Connectivity, as described in the next paragraph.\n\n#### What is connectivity?\n\nConnectivity factors the population linked (as described above) by the service level linking it - every unique service linking _here_ with the local area is counted once. The Connectivity slider can be moved to reflect one of four broad service expectations, the defaults summarised in the table below (for left to right on the slider, read down the table). The slider attempts to capture broad differences in network perception, for example that 14 trains per day from London to Paris is considered a \"good\" service, while operating 14 daily _within_ either city would be almost imperceptible. Except for Any, which does not factor population, the precise formula is: 1 - ( 1 / (service * factor)), if the result is greater than 0, with the default factor values: 2 (long distance), 0.2 (local/interurban) and 0.02 (city). 
The dataset creator or host may change the factor value (see [Configuration](#configuration) key `connectivity`), but not the formula.\n\nConnectivity Expectation|0% Minimum Service|50% Factor Service|95% Factor Service\n------------------------|------------------|------------------|------------------\nAny|0|Entire population always counted|Entire population always counted\nLong Distance (low frequency)|0.5|1|10\nLocal/Interurban (mid frequency)|5|10|100\nCity (high frequency)|50|100|1000\n\n#### What do the line widths and circle diameters indicate?\n\nLinks and stops are scaled by the logarithm of the service (such as total daily trains), so at high service levels the visual display is only slightly increased. Increasing the scale increases the visual distinction between service levels, but may flood the view in urban areas. The original numbers and associated reference information can be seen by clicking on the link or node. The area of each population circle is in proportion to the number of residents. The original numbers are displayed in a tooltip, visible when mousing over (or similar) the circle. The here circle defines the exact geographic area of _here_, that searched to find local stops.\n\n#### How can everything be displayed?\n\nZoom out a lot, then click... The result may be visually hard to digest, and laggy - especially with an unfiltered network or when showing multiple map layers: Aquius wasn't really intended to display everything. 
Hosts can limit this behaviour (by increasing the value of [Configuration](#configuration) key `minZoom`).\n\n## Quick Setup\n\nIn its most basic form, Aquius simply builds into a specific `HTML` element on a web page, the most basic configuration placed in the document `body`:\n\n```html\n<div id=\"aquius-div-id\" style=\"height:100%;\"></div>\n<script src=\"absolute/url/to/aquius.js\" async></script>\n<script>window.addEventListener(\"load\", function() {\n aquius.init(\"aquius-div-id\");\n});</script>\n```\n\nNote older browsers also require the `html` and `body` elements to be styled with a `height` of 100% before correctly displaying the `div` at 100%. Absolute paths are recommended to produce portable embedded code, but not enforced.\n\nThe first argument of the function `aquius.init()` requires a valid `id` refering to an empty element on the page. The second optional argument is an `Object` containing keyed options which override the default configuration. Two option keys are worth introducing here:\n\n* `dataset`: `String` (quoted) of the full path and filename of the JSON file containing the network data (the default is empty).\n* `uiHash`: `Boolean` (true or false, no quotes) indicating whether Aquius records its current state as a hash in the browser's URL bar (great for sharing, but may interfere with other elements of web page design).\n\nOthers options are documented in the [Configuration](#configuration) section below. Here is an example with the Spanish Railway dataset:\n\n```html\n<div id=\"aquius\" style=\"height:100%;\"></div>\n<script src=\"https://timhowgego.github.io/Aquius/dist/aquius.min.js\"\n async></script>\n<script>window.addEventListener(\"load\", function() {\n aquius.init(\"aquius\", {\n \"dataset\": \"https://timhowgego.github.io/AquiusData/es-rail/20-jul-2018.json\",\n \"uiHash\": true\n });\n});</script>\n```\n\n[Current script files can be found on Github](https://github.com/timhowgego/Aquius/tree/master/dist). 
[Sample datasets are also available](https://github.com/timhowgego/AquiusData).\n\n*Tip:* Aquius can also be [used as a stand-alone library](#here-queries): `aquius.here()` accepts and returns data objects without script loading or user interface. Alternatively, Aquius can be built into an existing Leaflet map object by sending that object to `aquius.init()` as the value of [Configuration](#configuration) key `map`.\n\n## Limitations\n\n* Aquius is conceptually inaccessible to those with severe visual impairment (\"blind people\"), with no non-visual alternative available.\n* Internet Explorer 9 is the oldest vintage of browser theoretically supported, although a modern browser will be faster (both in use of `Promise` to load data and `Canvas` to render data). Mobile and tablet devices are supported, although older devices may lag when interacting with the map interface.\n* Aquius is written in pure Javascript, automatically loading its one dependency at runtime, [Leaflet](https://github.com/Leaflet/Leaflet). Aquius can produce graphically intensive results, so be cautious before embedding multiple instances in the same page, or adding Aquius to pages that are already cluttered.\n* Aquius works efficiently for practical queries within large conurbations, or for inter-regional networks. The current extreme test case is a single query containing _every_ bus stop and weekly bus service in the whole of Greater Manchester - a network consisting of about 5 million different stop-time combinations: This takes just under a second for Aquius to query on a slow 2010-era computer. A third of that second is for processing, and two thirds for map rendering: Aquius' primary bottleneck is the ability of the browser to render large numbers of visual objects on a canvas.\n\n## Known Issues\n\nAquius is fundamentally inaccessible to those with severe visual impairment: The limitation lies in the concept, not primarily the implementation. 
Aquius can't even read out the stop names, since it doesn't necessarily know anything about them except their coordinates. Genuine solutions are more radical than marginal, implying a quite separate application. For example, conversion of the map into a 3D soundscape, or allowing users to walk a route as if playing a [MUD](https://en.wikipedia.org/wiki/MUD).\n\nMultiple base maps can be added but only one may be displayed: A user selection and associated customisation was envisaged for the future.\n\nPopulation bubbles are not necessarily centered on towns: These are typically located at the centroid of the underlying administrative area, which does not necessarily relate to the main settlement. Their purpose is simply to indicate the presence of people at a broad scale, not to map nuances in local population distribution.\n\nCircular services that constitute two routes in different directions, that share some stops but not all, display with the service in both directions serving the entire shared part of the loop: Circular services that operate in both directions normally halve the total service to represent the journey possibilities either clockwise or counter-clockwise, without needing to decide which direction to travel in to reach any one stop. Circular services that take different routes depending on their direction cannot simply be halved in this manner, even over the common section, because the service level in each direction is not necessarily the same. Consequently Aquius would have to understand which direction to travel in order to reach each destination the fastest. That would be technically possible by calculating distance, but would remain prone to misinterpretation, because a service with a significantly higher service frequency in one direction might reasonably be used to make journeys round almost the entire loop, regardless of distance. The safest assumption is that services can be ridden round the loop in either direction. 
In practice this issue only rarely arises, [notably in Parla](https://timhowgego.github.io/Aquius/live/es-rail-20-jul-2018/#x-3.76265/y40.23928/z14/c-3.7669/k40.2324/m10/s5/vlphn/n2).\n\nCircular services that were originally described in GTFS as sequences of trips (blocks) may count the service totals correctly on the map (for nodes and links), but under-count those totals in summary statistics: This occurs where trips return to their origin node only when commencing the next trip in the sequence, such that each individual trip is not itself circular. [GTFS To Aquius](#gtfs-to-aquius) processes the sequence by joining the trips together, passing all the stops multiple times (accurate map totals), but holding the data as one continuous link (counted in the summary statistics).\n\nPassenger journey opportunities are not always accurately presented where pickup/setdown restrictions apply, and multiple nodes on the same service are included within _here_: Assuming the dataset is correctly structured (see `link` property `block` in the [Data Structure](#data-structure)), Aquius should always count services accurately at each node and on each link. However the combination may be visually misleading, perhaps suggesting a link between nodes which are actually being counted only in regard to travel to destinations further along the route. The underlying problem lies in cabotage restrictions: Routes where pickup and setdown criteria vary depending on the passenger journey being undertaken, not the vehicle journey. This poses a logical problem for Aquius, since it needs to both display the links from each separate node and acknowledge that these separate links are provided by the exact same vehicle journey.\n\nLink (route), node (stop) and place names cannot be localised: Most text within Aquius can be localised (translated), but nodes, places and routes cannot. Many different references can be assigned to the same thing, so multiple languages are supported. 
But these have no language identifier, so cannot be matched to the user's locale setting. This is a compromise between dataset filesize (node names, in particular, often bloat files, and even adding a localisation key to single names would tend to increase that bloat by about 50%), practical reality (few agencies provide multilingual information in a structured format, often opting for single names such as \"oneLanguage / anotherLanguage\"), and the importance of this information within Aquius (strictly secondary reference information).\n\nLink reference data does not change to match the selected service filter: The reference data ascribed to each link describes all the service information that has been used to build the link. The service filter changes the total number of services, but does not know which pieces of reference data pertain to which service within the link, so continues to display all the reference data. Reference data is primarily intended to convey route-level information, which should not vary from trip to trip. While Aquius can display very detailed reference data, such as trip-specific train numbers which vary between different periods of time, such information is intended as a marginal benefit, especially to assist in debugging data, and should be presented with caution.\n\n## Configuration\n\nAs described in the [Quick Setup](#quick-setup) section, the second optional argument of `aquius.init()` takes an `Object` containing keys and respective values. Any key not specified will use the default. 
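The connectivity weighting used throughout (documented under the `connectivity` keys below) reduces the counted population where service is sparse. A minimal Javascript sketch of that documented formula follows; the function names are illustrative only, not part of the Aquius API:

```javascript
// Sketch of the documented connectivity weighting:
//   population * ( 1 - ( 1 / ( service * factor ) ) ), floored at zero.
// Function names are illustrative, not part of the Aquius API.
function connectivityFactor(connectivity, p) {
  // Mirrors: configuration.connectivity * ( 2 / ( power(10, p) / 10 ) )
  return connectivity * (2 / (Math.pow(10, p) / 10));
}

function weightedPopulation(population, service, factor) {
  if (factor <= 0 || service <= 0) {
    return population; // population is not factored
  }
  var weighted = population * (1 - (1 / (service * factor)));
  return weighted > 0 ? weighted : 0;
}

// Default connectivity (1.0) at p = 2 (local/inter-urban) gives factor 0.2,
// so 10 daily services count 50% of a 10000-resident place - matching the
// "50% Factor Service" column of the expectations table:
var factor = connectivityFactor(1.0, 2);
console.log(weightedPopulation(10000, 10, factor)); // 5000
```

Note how the curve saturates: at very high service levels the weighted population approaches the full population, so extra services add little.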
Bespoke options must conform to the expected data `type`.\n\n*Tip*: Clicking the `Embed` link on the bottom of the Layer Control will produce a dummy HTML file containing the configuration at the time the Embed was produced.\n\n### Data Sources\n\nAll except `dataset`, introduced in the [Quick Setup](#quick-setup) section, can be happily ignored in most cases.\n\nKey|Type|Default|Description\n---|----|-------|-----------\nbase|Array|See below|Array of objects containing base layer tile maps, described below\nconnectivity|float|1.0|Factor for connectivity calculation: population * ( 1 - ( 1 / ( service * ( 2 / ( power(10, configuration.p) / 10 )) * connectivity))), where p is greater than 0\ndataObject|Object|{}|JSON-like [network data](#data-structure) as an Object: Used in preference to dataset\ndataset|string|\"\"|JSON file containing network data: Recommended full URL, not just filename\nleaflet|Object|{}|Active Leaflet library Object L: Used in preference to loading own library\nlocale|string|\"en-US\"|Default locale, BCP 47-style. Not to be confused with user selection, `t`\nmap|Object|{}|Active Leaflet map Object: Used in preference to own map\nnetwork|Array|[]|Extension of `network`: Array of products, Object of locale keyed names, described below\nnetworkAdd|boolean|true|Append this network extension to dataset defaults. Set false to replace defaults\ntranslation|Object|{}|Custom translations, format described below\n\nFor base mapping, `base` is a complex `Array` containing one or more tile layers, each represented as an `Object`. Within, the key `url` contains a `string` URI of the resource, potentially formatted as [described by Leaflet](https://leafletjs.com/reference-1.4.0.html#tilelayer). WMS layers require a specific key `type`: \"wms\". Finally a key called `options` may be provided, itself containing an `Object` of keys and values, matching Leaflet's options. 
Or in summary:\n\n```javascript\n\"base\": [\n {\n \"url\": \"https://maps.wikimedia.org/osm-intl/{z}/{x}/{y}.png\", \n \"type\": \"\",\n // Optional, if WMS: \"wms\"\n \"options\": {\n // Optional, but attribution is always advisable\n \"attribution\": \"© OpenStreetMap contributors\"\n }\n }\n // Extendable for multiple maps\n]\n```\n\nThe default `locale` needs to be fully translated, since it becomes the default should any other language choice not be translated.\n\nThe extension of `network` allows extra network filters to be appended to the defaults provided in the JSON `dataset`. For example, in the Spanish Railways dataset a separate network filter for [FEVE](https://en.wikipedia.org/wiki/Renfe_Feve) could be created using the product's ID, here 14, and its name. Multiple products or networks can be added in this way. See the [Data Structure](#data-structure) section for more details. To replace the defaults in the dataset, set configuration `networkAdd` to `false`.\n\n```javascript\n\"network\": [ \n [ \n [14],\n // Product ID(s)\n {\"en-US\": \"FEVE\"},\n // Locale:Name, must include the default locale\n {}\n // Optional properties\n ] \n // Extendable for multiple networks\n],\n\"networkAdd\": false\n // Replace dataset defaults with this network selection\n```\n\nThe `translation` `Object` allows bespoke locales to be hacked in. Bespoke translations take precedence over everything else. Even network names can be hacked by referencing key `network0`, where the final `integer` is the index position in the network `Array`. While this is not the optimal way to perform proper translation, it may prove convenient. The structure of `translation` matches that of `getDefaultTranslation()` within `aquius.init()`. Currently translatable strings are listed below. Missing translations default to `locale`, else are rendered blank. 
\n\n```javascript\n\"translation\": {\n \"xx-XX\": {\n // BCP 47-style locale\n \"lang\": \"X-ish\",\n // Required language name in that locale\n \"connectivity\": \"Connectivity\",\n // Translate values into locale, leave keys alone\n \"connectivityRange\": \"Any - Frequent\",\n \"embed\": \"Embed\",\n \"export\": \"Export\",\n \"here\": \"Location\",\n \"language\": \"Language\",\n \"link\": \"Services\",\n \"network\": \"Network\",\n \"node\": \"Stops\",\n \"place\": \"People\",\n \"scale\": \"Scale\",\n \"service\": \"Period\"\n }\n // Extendable for multiple locales\n}\n```\n\nThe remaining configurations are far more basic.\n\n### Styling\n\nFormatting of visual elements, mostly on the map.\n\nKey|Type|Default|Description\n---|----|-------|-----------\nhereColor|string|\"#080\"|CSS Color for here layer circle strokes\nlinkColor|string|\"#f00\"|CSS Color for link (service) layer strokes\nlinkScale|float|1.0|Scale factor for link (service) layer strokes: ceil( log( 1 + ( service * ( 1 / ( scale * 4 ) ) ) ) * scale * 4)\nminWidth|integer|2|Minimum pixel width of links, regardless of scaling. Assists click useability\nminZoom|integer|0|Minimum map zoom. 
Sets a soft cap on query complexity\nnodeColor|string|\"#333\"|CSS Color for node (stop) layer circle strokes\nnodeScale|float|1.0|Scale factor for node (stop) layer circles: ceil( log( 1 + ( service * ( 1 / ( scale * 4) ) ) ) * scale * 2)\npanelOpacity|float|0.7|CSS Opacity for background of the bottom-left summary panel\npanelScale|float|1.0|Scale factor for text on the bottom-left summary panel\nplaceColor|string|\"#00f\"|CSS Color of place (population) layer circle fill\nplaceOpacity|float|0.5|CSS Opacity of place (population) layer circle fill: 0-1\nplaceScale|float|1.0|Scale factor for place (population) layer circles: ceil( sqrt( people * scale / 666) )\n\n**Caution:** Colors accept any [CSS format](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value), but be wary of introducing transparency this way, because it tends to slow down rendering. Transparent link lines will render with ugly joins.\n\n### User Interface\n\nEnable or disable User Interface components. These won't necessarily block the associated code if it can be entered by an alternative means, such as the hash.\n\nKey|Type|Default|Description\n---|----|-------|-----------\nuiConnectivity|boolean|true|Enables connectivity slider (if the dataset's `place` length > 0)\nuiHash|boolean|false|Enables recording of the user state in the URL's hash\nuiLocale|boolean|true|Enables locale selector\nuiNetwork|boolean|true|Enables network selector (if the dataset's `network` length > 1)\nuiPanel|boolean|true|Enables summary statistic panel\nuiScale|boolean|true|Enables scale slider\nuiService|boolean|true|Enables service selector (if the dataset's `service` length > 1)\nuiShare|boolean|true|Enables embed and export\nuiStore|boolean|true|Enables browser [session storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) of user state\n\n### User State\n\nAs seen in the URL hash (if `uiHash` is `true`). Coordinates are always WGS 84. 
_Here_ click defines the centre of the search.\n\nKey|Type|Default|Description\n---|----|-------|-----------\nc|float|-0.89|_Here_ click Longitude\nk|float|41.66|_Here_ click Latitude\nm|integer|11|_Here_ click zoom\nn|integer|0|User selected network filter: Must match range of network in `dataset`\np|integer|0|User selected connectivity setting\nv|string|\"hlnp\"|User selected map layers by first letter: here, link, node, place\nr|integer|0|User selected service filter: Must match range of service in `dataset`\ns|integer|5|User selected global scale factor: 0 to 10\nt|string|\"en-US\"|User selected locale: BCP 47-style\nx|float|-3.689|Map view Longitude\ny|float|40.405|Map view Latitude\nz|integer|6|Map view zoom\n\n*Tip:* Instead of specifying `s`, consider altering the corresponding `linkScale`, `nodeScale`, and/or `placeScale`. Instead of specifying `t`, consider altering the default `locale`.\n\n## Here Queries\n\nAquius can also be used as a stand-alone library via `aquius.here()`, which accepts and returns data Objects without script loading or user interface. Arguments, in order:\n\n1. `dataObject` - `Object` containing a [Data Structure](#data-structure).\n1. `options` - optional `Object` of key:value pairs.\n\nPossible `options`:\n\n* `callback` - function to receive the result, which should accept `Object` (0) `error` (javascript Error), (1) `output` (as returned without callback, described below), and (2) `options` (as submitted, which also allows bespoke objects to be passed through to the callback).\n* `connectivity` - `float` factor applied to weight population by service level: population * ( 1 - ( 1 / (service * connectivity))), if result is greater than zero. If `connectivity` is missing or less than or equal to 0, population is not factored. 
The user interface calculates `connectivity` as: configuration.connectivity * ( 2 / ( power(10, configuration.p) / 10 )), producing factors 2, 0.2, and 0.02 - broadly matching long distance, local/inter-urban and city expectations.\n* `geoJSON` - `Array` of strings describing map layers to be outputted in GeoJSON format (\"here\", \"link\", \"node\" and/or \"place\").\n* `network` - `integer` index of network to filter by.\n* `place` - `Array` of `integer` dataObject `place` indices to define _here_ (in addition to any range/x/y criteria). Place-based criteria do not currently return a specific geographic feature equivalent of here.\n* `range` - `float` distance from _here_ to be searched for nodes, in metres.\n* `sanitize` - `boolean` check data integrity. Checks occur unless set to `false`. Repeat queries with the same dataObject can safely set sanitize to false.\n* `service` - `integer` index of service to filter by.\n* `x` - `float` longitude of _here_ in WGS 84.\n* `y` - `float` latitude of _here_ in WGS 84.\n\n**Caution:** Sanitize does not fix logical errors within the dataObject, and should not be used to check data quality. Sanitize merely replaces missing or incomplete structures with zero-value defaults, typically causing bad data to be ignored without throwing errors.\n\nWithout a callback, calls to `aquius.here()` return a JSON-like Object. On error, that Object contains a key `error`.\n\nIf `geoJSON` is specified a GeoJSON-style Object with a `FeatureCollection` is returned. 
In addition to [the standard geometry data](https://tools.ietf.org/html/rfc7946), each `Feature` has two or more properties, which can be referenced when applying styling in your GIS application:\n\n* `type` - \"here\", \"link\" (routes), \"node\" (stops), \"place\" (demographics).\n* `value` - numeric value associated with each (such as daily services or resident population).\n* `node` - array of reference data objects relating to the node itself.\n* `link` - array of reference data objects relating to the links at the node, or the links contained on the line.\n* `place` - array of reference data objects relating to the place itself.\n\nThe information contained within keys `node` and `link` is that otherwise displayed in popup boxes when clicking on nodes or links in the map view. The existence of keys `node`, `link` and `place` will depend on the dataset. The potential format of the objects within the `reference` property of `node`, `link` and `place` is described in the [Data Structure](#data-structure).\n\nOtherwise the JSON-like Object will contain `summary`, an Object containing link, node and place totals, and geometry for `here`, `link`, `node` and `place`. Each geometry key contains an Array of features, where each feature is an Object with a key `value` (the associated numeric value, such as number of services) and either `circle` (here, node, place) or `polyline` (link). Circles consist of an Array containing a single x, y pair of WGS 84 coordinates. Polylines consist of an Array of Arrays in route order, each child Array containing a similar pair of x, y coordinates. The (sanitized) `dataObject` will also be returned as a key, allowing subsequent queries with the returned dataObject to be passed with `sanitize` false, which speeds up the query slightly.\n\n**Caution:** Both `link` outputs mirror the internal construction of Aquius' map, which tries to find adjoining links with the same service frequency and attach them to one continuous polyline. 
The logic reduces the number of objects, but does not find all logical links, nor does it necessarily link the paths taken by individual services. If you need to map individual routes, interrogate the original link in the original `dataObject`.\n\n## GTFS To Aquius\n\n[General Transit Feed Specification](https://developers.google.com/transit/gtfs/reference/) is the most widely used interchange format for public transport schedule data. A [script is available](https://github.com/timhowgego/Aquius/tree/master/dist) that automatically converts single GTFS archives into Aquius datasets. This script is currently under development, requiring both features and testing, so check the output carefully. [A live demonstration is available here](https://timhowgego.github.io/Aquius/live/gtfs/). Alternatively, run the `gtfs.min.js` file privately, either: \n\n1. With a user interface: Within a webpage, load the script and call `gtfsToAquius.init(\"aquius-div-id\")`, where \"aquius-div-id\" is the ID of an empty element on the page.\n1. From another script: Call `gtfsToAquius.process(gtfs, options)`. Required value `gtfs` is an `Object` consisting of keys representing the name of the GTFS file without extension, whose value is an array containing one or more blocks of raw text from the GTFS file - for example, `\"calendar\": [\"raw,csv,string,data\"]`. If the file is split into multiple blocks (to allow very large files to be handled) the first block must contain at least the first header line. 
`options` is an optional `Object` that may contain the following keys, each value itself an `Object`:\n\n* `callback` - function to receive the result, which should accept 3 `Object`: `error` (javascript Error), `output` (as returned without callback, described below), `options` (as submitted, which also allows bespoke objects to be passed through to the callback).\n* `config` - contains key:value pairs for optional configuration settings, as described in the next section.\n* `geojson` - is the content of a GeoJSON file pre-parsed into an `Object`, as detailed in a subsequent section.\n\nWithout callback, the function returns an `Object` with possible keys:\n\n* `aquius` - as `dataObject`.\n* `config` - as `config`, but with defaults or calculated values applied.\n* `error` - `Array` of error message strings.\n* `summary` - `Object` containing summary `network` (productFilter-serviceFilter matrix) and `service` (service histogram).\n\n**Caution:** Runtime is typically about a second per 10 megabytes of GTFS text data (with roughly half that time spent processing the Comma Separated Values), plus time to assign stops (nodes) to population (places), which varies depending on the number and complexity of GeoJSON boundaries processed. The single-operator networks found in most GTFS archives should process within 5-10 seconds, but very complex multi-operator conurbations or regions may take longer. Processing requires operating system memory of up to 10 times the total size of the raw text data. If processing takes more than a minute, it is highly likely that the machine has run out of free physical memory, and processing should be aborted.\n\n*Tip:* There are two ways to work around any memory limitation, both of which can be used together if required:\n1. Run GTFS to Aquius [As Node Script](#as-node-script) (detailed below). Node can specifically be allocated system memory that is not necessarily available to browser processes.\n2. 
Copy all the GTFS files and divide `stop_times.txt` (which is almost always far larger than any other file, and thus most likely the trigger for any memory issue) into two or more pieces, each placed in a separate file directory. The `stop_times.txt` divide must not break a trip (divide immediately before a `stop_sequence` 0) and for optimum efficiency should break between different `route_id`. The first (header) line of `stop_times.txt` must be present as the first line of each file piece. Add to each directory identical config.json files and a complete copy of all the other GTFS files used - except `frequencies.txt`, which if present must only be added in one directory. GTFS file `shapes.txt` (which is sometimes large) is not used by GTFS to Aquius, so can also be skipped. Run GTFS to Aquius on each directory separately, then use [Merge Aquius](#merge-aquius) to merge the two (or more) outputs together. GTFS to Aquius will skip any (not-frequency) link it cannot find stop_times for, while Merge Aquius automatically merges the duplicate nodes created.\n\n### As Node Script\n\nA [Node](https://nodejs.org/) script is available to run GTFS to Aquius from the command line: `dist/gtfs-node.js`. Recommended when handling large (100+ MB) GTFS datasets. Thereafter the limit is Node and ultimately system memory. Try granting Node up to 10MB of memory per MB of GTFS text. For example, for 30GB: `node --max-old-space-size=30000`.\n\nSetup is the same as the browser version: Extract the GTFS into a directory, alongside any `config.json` and `boundaries.geojson`. Other operations, such as unzipping the GTFS, or managing config files, will need to be done using other bespoke scripts. Then command line: `node path/to/gtfs-node.js path/to/data_directory/`. 
Or with more memory: `node --max-old-space-size=30000 path/to/gtfs-node.js path/to/data_directory/`.\n\nOutputs added (created or overwritten) in the data directory:\n* `aquius.json` - output for use in the wider Aquius ecosystem.\n* `config.auto` - augmented or generated `config.json` - rename to `config.json` to reuse in future GTFS processing.\n* `histogram.csv` - proportion of services by hour of day, useful for quality assurance.\n* `network.csv` - service totals by network, useful for quality assurance.\n\n### Configuration File\n\nBy default GTFS To Aquius simply analyses services over the next 7 days, producing average daily service totals, filtered by agency (operator). GTFS To Aquius accepts and produces a file called `config.json`, which, in the absence of a detailed user interface, is the only way to customise the GTFS processing. The minimum content of `config.json` is empty, viz: `{}`. To this `Object` one or more key: value pairs may be added. Currently supported keys are:\n\nKey|Type|Default|Description\n---|----|-------|-----------\nallowBlock|boolean|false|Process block_id as a sequence of inter-operated trips, described below\nallowCabotage|boolean|false|Process duplicate vehicle trips with varying pickup/setdown restrictions as cabotage, described below\nallowCode|boolean|true|Include stop codes\nallowColor|boolean|true|Include route-specific colors\nallowDuplication|boolean|false|Include duplicate vehicle trips (same route, service period, direction and stop times)\nallowDuration|boolean|false|Include array of crude average total minutes per trip by service period (experimental)\nallowDwell|boolean|false|Include array of crude average minute dwell times per trip by node (experimental)\nallowHeadsign|boolean|false|Include trip-specific headsigns (information may be redundant if using allowRoute)\nallowName|boolean|true|Include stop names (increases file size significantly)\nallowNoc|boolean|false|Append Great Britain NOC to operator name if 
available (for newly generated network filter only)\nallowRoute|boolean|true|Include route-specific short names\nallowRouteLong|boolean|false|Include route-specific long names\nallowRouteUrl|boolean|true|Include URLs for routes (can increase file size significantly unless URLs conform to logical repetitive style)\nallowSplit|boolean|false|Include trips on the same route (service period and direction) which share at least splitMinimumJoin (but not all) stop times as \"split\"\nallowStopUrl|boolean|true|Include URLs for stops (can increase file size significantly unless URLs conform to logical repetitive style)\nallowWaypoint|boolean|true|Include stops with no pickup and no setdown as dummy routing nodes\nallowZeroCoordinate|boolean|true|Include stops with 0,0 coordinates, else stops are skipped\ncodeAsStopId|boolean|false|Use stop_id as stop code (requires `allowCode=true`)\ncoordinatePrecision|integer|5|Coordinate decimal places (smaller values tend to group clusters of stops), described below\nduplicationRouteOnly|boolean|true|Restrict duplicate check to services on the same route\nfromDate|YYYYMMDD dateString|Today|Start date for service pattern analysis (inclusive)\ninGeojson|boolean|true|If geojson boundaries are provided, only services at stops within a boundary will be analysed. If false, assigns stops to nearest boundary (by centroid)\nisCircular|array|[]|GTFS \"route_id\" (strings) to be referenced as circular. If empty (default), GTFS to Aquius follows its own logic, described below\nmeta|object|{\"schema\": \"0\"}|As [Data Structure](#data-structure) meta key\nmirrorLink|boolean|true|Services mirrored in reverse are combined into the same link. 
Reduces filesize, but can distort service averages\nnetworkFilter|object|{\"type\": \"agency\"}|Group services by, using network definitions, detailed below\nnodeGeojson|object|{}|Cache containing node \"x:y\": Geojson \"x:y\" (both coordinates at coordinatePrecision), described below\noption|object|{}|As [Configuration](#configuration)/[Data Structure](#data-structure) option key\nplaceNameProperty|string|\"name\"|Field name in GeoJSON properties containing the name or identifier of the place\npopulationProperty|string|\"population\"|Field name in GeoJSON properties containing the number of people (or equivalent demographic statistic)\nproductOverride|object|{}|Properties applied to all links with the same product ID, detailed below\nrouteExclude|array|[]|GTFS \"route_id\" (strings) to be excluded from analysis\nrouteInclude|array|[]|GTFS \"route_id\" (strings) to be included in analysis, all if empty\nrouteOverride|object|{}|Properties applied to routes, by GTFS \"route_id\" key, detailed below\nserviceFilter|object|{}|Group services by, using service definitions, detailed below\nservicePer|integer|1|Service average per period in days (1 gives daily totals, 7 gives weekly totals), regardless of fromDate/toDate\nsplitMinimumJoin|integer|2|Minimum number of concurrent nodes that split services must share\nstopExclude|array|[]|GTFS \"stop_id\" (strings) to be excluded from analysis\nstopInclude|array|[]|GTFS \"stop_id\" (strings) to be included in analysis, all if empty\nstopOverride|object|{}|Properties applied to stops, by GTFS \"stop_id\" key, detailed below\nstopPlace|boolean|false|Group and merge stops by their respective place centroid (assumes geojson)\ntoDate|YYYYMMDD dateString|Next week|End date for service pattern analysis (inclusive)\ntranslation|object|{}|As [Configuration](#configuration)/[Data Structure](#data-structure) translation key\n\n*Tip:* The fastest way to start building a `config.json` file is to run GTFS To Aquius once, download and edit 
the resulting `config.json`, then use that file in subsequent GTFS To Aquius processing.\n\n#### Blocks\n\nGTFS \"block_id\" can be used to group sequences of trips together. Some GTFS sources use this value to communicate important routing information, notably circular (see below) or \"town\" services where one trip operates into another for the benefit of through passengers. Potentially, without processing the \"block_id\" one segment of the journey may be missed entirely. In these cases `allowBlock` should be set true, which will allow GTFS to Aquius to merge all the block trips together as one continuous link. However some GTFS sources use \"block_id\" literally, as an identifier of a vehicle diagram (operational allocation) on a route that offers no practical reason for passengers to continue from one trip to the next. Such use of \"block_id\" can result in excessively large Aquius files because blocks will duplicate the route's stops multiple times - and potentially also duplicate those extended stop sequences for many different vehicle diagrams. Consequently \"block_id\" processing is skipped by default, with trips processed separately. This default is typically the most reliable, but if key segments of route are missing from output, and GTFS trips.txt includes a \"block_id\" column, try setting `allowBlock` to true.\n\n#### Circulars\n\nCircular services are where one journey seamlessly operates into the next as a continuous loop. If circular services are not correctly flagged as circular, the Aquius output will double-count those services at their start/end node, and (for circular services defined separately in each direction) may imply passenger journey opportunities that are served only by the least direct route around the circle.\n\nThe key `isCircular` allows specific \"route_id\" values to be flagged as circular. Values reference the first column of GTFS `routes.txt`. 
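For example, a minimal `config.json` that flags two hypothetical route IDs as circular and reports weekly rather than daily service totals (both keys are documented in the table above; the route IDs are placeholders):

```json
{
  "isCircular": ["circular_clockwise", "circular_anticlockwise"],
  "servicePer": 7
}
```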
If one or more routes are specified in this way, only those routes specified will be flagged as circular. This gives absolute control over what services are assigned circular, and should be used if the default logic fails to correctly differentiate circular routes.\n\nIf `isCircular` is empty, GTFS to Aquius will attempt to evaluate whether a route is circular, viz:\n\n1. Start and end stops are the same, and the route contains multiple trips with a shared (non-empty) "block_id". [Loops should be coded](http://gtfs.org/best-practices/#loop-routes) with a single "block_id" that groups all such journeys, but this convention is not universal, and often the "block_id" is empty.\n1. Start and end stops are the same, with the stop immediately after the start and the stop prior to the end greater than 200 metres from each other. Aquius processes basic [lasso routes](http://gtfs.org/best-practices/#lasso-routes) as uni-directional, so should not flag such routes as circular. Complex loops, such as figures-of-eight (dual loop with a common mid-point) or dual lassos (common middle section with loops at each end), should be flagged as circular. Although the "outbound" and "inbound" segments of a simple lasso route may serve the same location along the same street, the actual stops served may differ because they are on opposite sides of that street, hence the range test.\n\n*Tip:* To disable the default logic without actually assigning any route as circular, specify `"isCircular": [null]`. 
This gives the array a length greater than 0 (thus disables the default logic) without ever matching any valid "route_id" (which cannot be null).\n\n#### Coordinates\n\nLowering the value of `coordinatePrecision` reduces the size of the Aquius output file, but not just because the coordinate data stored is shorter: At lower `coordinatePrecision`, stops that are in close proximity will tend to merge into single nodes, which in turn tends to result in fewer unique links between nodes. Such merges reflect mathematical rounding, and will not necessarily group clusters of stops in a way that users or operators might consider logical. Networks with widespread use of pickup and setdown restrictions may be rendered illogical by excessive grouping. Aggregation of different stops into the same node may also create false duplicate trips, since duplication is assessed by node, not original stop - in most cases setting `allowDuplication` to true will avoid this issue (as detailed below). If the precise identification of individual stops is important, increase the default `coordinatePrecision` to avoid grouping. Very low `coordinatePrecision` values effectively transform the entire network into a fixed "[Vortex Grid](https://en.wikipedia.org/wiki/The_Adventure_Game)", useful for strategic geospatial analysis. For greater control over such Vortex Grids, specify the desired grid as GeoJSON boundaries and set `stopPlace` to true, which groups all nodes on the centroid of their respective (GeoJSON) place.\n\n#### Duplication\n\nA duplicate is a vehicle trip which shares all its stop times with another trip on the same route (unless `duplicationRouteOnly` is false), and in the same direction and same (calendar) day. Duplicates normally add no additional passenger journey opportunities, merely capacity, so by default these duplicate trips are removed (`allowDuplication` is false). 
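As an illustration only (not the actual GTFS To Aquius implementation), the duplicate test can be sketched as follows, assuming each trip has been reduced to a route ID, direction, day, and an ordered list of node and time pairs:

```javascript
// Illustrative duplicate test: trips are duplicates when they share
// direction, day, every node:time pair, and (if routeOnly) the same route
function isDuplicate(tripA, tripB, routeOnly) {
  if (routeOnly && tripA.routeId !== tripB.routeId) {
    return false;
  }
  if (tripA.direction !== tripB.direction ||
    tripA.day !== tripB.day ||
    tripA.stopTimes.length !== tripB.stopTimes.length) {
    return false;
  }
  return tripA.stopTimes.every(function (stop, i) {
    return stop.node === tripB.stopTimes[i].node &&
      stop.time === tripB.stopTimes[i].time;
  });
}
```

With `routeOnly` false (as when `duplicationRouteOnly` is false), otherwise identical trips on different routes would also be treated as duplicates.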
If the Aquius output specifically analyses capacity or counts vehicles, set `allowDuplication` to true. Highly serviced routes with imprecise schedules may need `allowDuplication` as true to avoid falsely identified duplicates being removed. Networks analysed at low `coordinatePrecision` may need `allowDuplication` as true to avoid falsely created duplicates being removed - separate trips that originally served different stops at the same time having been grouped into single nodes. By default, duplicates between different routes are excluded, since high-frequency urban corridors may schedule more than one trip per minute between the same stops. This route limitation can be overridden by setting `duplicationRouteOnly` to false, which would then, for example, allow split services (see below) which have been allocated to different routes to be grouped properly.\n\nA split is a vehicle trip that duplicates at least two, but not all, of its stop times with another trip on the same route (and in the same service period and direction). If `allowSplit` is set true, the duplicate trip will have its unique stops attributed as a "split", which means Aquius does not double-count the service over common sections of route. The original trip remains unchanged. By default, split services must share at least 2 concurrent nodes, a number that may be redefined using `splitMinimumJoin`. Genuine split operation occurs in multi-vehicle modes such as rail. Note that where such split trips have been misappropriated to indicate guaranteed connections onto entirely different routes, GTFS to Aquius will double-count the service at stops common to both routes because it only considers duplication within routes. In such cases, set `allowSplit` to false and maintain the default `allowDuplication` as false to discard such trips entirely.\n\nThe two prior duplication definitions do not consider any pickup or setdown restrictions, only that the vehicle trip is duplicated or split. 
If `allowCabotage` is set true, trips that would be categorised as duplicate or split, but contain pickup or setdown criteria which differ from the original trip, are processed differently: Such duplicates are grouped to indicate they are part of the same vehicle journey, and thus should only be _counted_ once as a group, but are not removed because each expresses a unique combination of _passenger_ journey opportunities, depending on the origin and destination of that passenger. Technically these are assigned an Aquius "block". Services operating between different administrative jurisdictions, especially different nations, may be restricted in the points between which passengers can be carried - for example, an international service may be able to pick up passengers at stops within one country and set them down in another, but not convey passengers solely within either country. Each set of restrictions may be represented in the GTFS file as a duplicate trip.\n\n#### Overrides\n\nConfiguration key `productOverride` allows colors to be applied to all links of the same product, where product is defined by `networkFilter.type` (above). The `productOverride` key's value is an `Object` whose key names refer to GTFS codes (`agency_id` values for "agency", or `route_type` values for "mode"), and whose values consist of an `Object` of properties to override any (and missing) GTFS values. Currently only two keys are supported: "route_color" and "route_text_color"; each value is a `string` containing a 6-character hexadecimal HTML-style color ([matching GTFS specification](https://developers.google.com/transit/gtfs/reference/#routestxt)). These allow colors to be added by product, for example to apply agency-specific colors to their respective operations.\n\nConfiguration key `routeOverride` allows bespoke colors or route names to be applied by route (taking precedence). The `routeOverride` key's value is an `Object` whose keys are GTFS "route_id" values. 
The value of each "route_id" key is an `Object` containing keys "route_short_name", "route_color", "route_text_color" and/or "route_url", with content matching GTFS specification.\n\nConfiguration key `stopOverride` allows poorly geocoded stops to be given valid coordinates, and bespoke (user-friendly) codes, names, and URLs to be specified. The `stopOverride` key's value is an `Object` whose keys are GTFS "stop_id" values. The value of each "stop_id" key is an `Object` containing override coordinates (WGS 84 floats) "x" and "y", and/or "stop_code", "stop_name" and "stop_url". If GTFS to Aquius encounters 0,0 coordinates, it will automatically add these stops to `stopOverride` for manual editing. Alternatively, 0,0 coordinate stops can be skipped entirely by setting config `allowZeroCoordinate` to false.\n\n#### Network Filter\n\nConfiguration key `networkFilter` defines groups of product IDs which the user can select to filter the results displayed. These filters are held in the network key of the [Data Structure](#data-structure). `networkFilter` is an `Object` consisting of one or more keys:\n\n* `type` - `string` either "agency", which assigns a product code for each operator identified in the GTFS (default), or "mode", which assigns a product code for each vehicle type identified in the GTFS (only the original types and "supported" [extensions](https://developers.google.com/transit/gtfs/reference/extended-route-types) will be named).\n* `network` - `Array` containing network filter definitions, each an `Array` consisting of: first, an `Array` of the GTFS codes (`agency_id` values for "agency", or `route_type` values for "mode") included in the filter, and second, an `Object` of localised names in the style `{"en-US": "English name"}`. 
If the GTFS file contains no agency_id, specify the `agency_name` instead.\n* `reference` - `Object` whose keys are GTFS codes (`agency_id` values for "agency", or `route_type` values for "mode") and whose values are each an `Object` of localised names in the style `{"en-US": "English name"}`, which allows the respective names to be specified.\n\n*Tip:* The easiest way to build bespoke network filters is to initially specify only `type`, process the GTFS data once, then manually edit the `config.json` produced. If using GTFS To Aquius via its user interface, a rough count of routes and services by each `productFilter` will be produced after processing, allowing the most important categories to be identified.\n\n**Caution:** Pre-defining the `networkFilter` will prevent GTFS To Aquius adding or removing entries, so any new operators or modes subsequently added to the GTFS source will need to be added manually.\n\n#### Service Filter\n\nConfiguration key `serviceFilter` can be used (in addition to any productFilter) to filter or summarise the number of services by different time periods. `serviceFilter` is an `Object` consisting of one or more keys:\n\n* `type` - `string` currently always "period".\n* `period` - `Array` of time period definitions which are applied in GTFS processing.\n\nEach time period definition within `period` is an `Object` consisting of one or more keys:\n\n* `day` - `Array` of days of the week, lowercase English (as used in GTFS Calendar headers). The `day` evaluated is that assigned by date in the GTFS.\n* `time` - `Array` of time periods, each an `Object` with optional keys: `start` and `end` are strings in the format "HH:MM:SS" (as in GTFS times). Multiple sets of time periods can be specified as separate objects in the array.\n* `name` - `Object` consisting of locale:name pairs, where each name is in that locale.\n\nIn categorising services by time, GTFS To Aquius interprets times as equal to or after `start`, but before `end`. 
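That comparison can be sketched as below (an illustrative simplification, not the actual implementation; GTFS "HH:MM:SS" strings compare correctly as plain strings while hours remain zero-padded to two digits):

```javascript
// Illustrative period test: true when time falls within [start, end),
// with either bound optional
function inPeriod(time, period) {
  if ("start" in period && time < period.start) {
    return false;
  }
  if ("end" in period && time >= period.end) {
    return false;
  }
  return true;
}
```

Note the string comparison also handles over-midnight times such as "25:10:00", since those remain lexicographically ordered.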
Only the keys specified are evaluated, thus a `start` time with no `end` time will analyse the entire service at and after that start time. For scheduled trips, each vehicle journey is evaluated by mean time - the average of the earliest and latest times in the trip schedule. For example, a trip that leaves its first stop at 09:00:00 and arrives at its final stop at 10:00:00 would match any `time` criteria that included 09:30:00. Frequency-based services count the number of services based on the average headway(s) during the time period defined.\n\n**Caution:** GTFS to Aquius naively mirrors whatever convention the GTFS file has used for handling services operated wholly or partly after midnight: Some operators consider all trips that commence after midnight to be on a new day. Others continue the previous day into the next until a notional "end of service". Days are continued into the next by adding 24 hours to the clock time (for example 01:10:00 on the _next_ day becomes 25:10:00 on the _original_ day). Early morning services may require two time conditions, as shown in the example below. Night services are easily double-counted, so if in doubt spot-check the output of such periods against published timetables.\n\n```javascript\n{\n "serviceFilter": {\n "type": "period",\n "period": [\n {\n "name": {"en-US": "00:00-06:00 Catchall"},\n "time": [\n {"end": "06:00:00"},\n // Before 06:00, as today\n {"start": "24:00:00", "end": "30:00:00"}\n // Midnight until before 06:00, as tomorrow\n ]\n // No "day" key = all days\n } \n ]\n }\n}\n```\n\n**Caution:** The `serviceFilter` always applies a single time criterion to the whole journey, an assumption that will become progressively less realistic the longer the GTFS network's average vehicle journey. 
For example, an urban network may be usefully differentiated between morning peak and inter-peak because most vehicle journeys on urban networks are completed within an hour, and thus the resulting analysis will be accurate within 30 minutes at all nodes on the route. In contrast, inter-regional vehicle journey duration may be much longer, and such detailed time periods risk misrepresenting passenger journey opportunities at certain nodes: For example, a 4-hour vehicle journey that commences at 07:30 might match a morning peak definition at its origin, but not by the time it reaches its final destination at 11:30. Such networks may be better summarised more broadly - perhaps morning, afternoon and evening. Long-distance or international networks, where vehicle journeys routinely span whole days, may be unsuitable for any form of `serviceFilter`.\n\n*Tip:* Service totals within defined time periods are still calculated as specified by `servicePer` - with default 1, the total service per day. This is a pragmatic way of fairly summarising unfamiliar networks with different periods of operation. However, if serviceFilter periods exclude the times of day when the network is closed, the `servicePer` setting may be set per hour (0.04167), which may make it easier to compare periods of unequal duration, especially on metro networks with defined opening and closing times.\n\n### GeoJSON File\n\nOptionally, a [GeoJSON file](http://geojson.org/) can be provided containing population data, which allows Aquius to summarise the people served by a network. The file must end in the extension `.json` or `.geojson`, must use (standard) WGS 84 coordinates, and must contain either Polygon or MultiPolygon geographic boundaries. Each feature should have a property containing the number of people (or equivalent demographic statistic), either using field name "population", or that defined in `config.json` as `populationProperty`. 
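A minimal qualifying feature might look like the following (all coordinates, names and values are placeholders):

```javascript
{
  "type": "Feature",
  "geometry": {
    "type": "Polygon",
    "coordinates": [[
      [-0.10, 51.50], [0.00, 51.50], [0.00, 51.60], [-0.10, 51.60], [-0.10, 51.50]
    ]]
  },
  "properties": {
    "name": "Example District",
    "population": 25000
  }
}
```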
Excessively large or complex boundary files may delay processing, so before processing GTFS To Aquius, consider reducing the geographic area to only that required, or simplifying the geometry.\n\nGTFS To Aquius will attempt to assign each node (stop) to the boundary it falls within. For consistent results, boundaries should not overlap and specific populations should not be counted more than once. The choice of boundaries should be appropriate for the scale and scope of the services within the GTFS file: Not so small as to routinely exclude nodes used by a local population, but not so large as to suggest unrealistic hinterlands or catchment areas. For example, an entire city may reasonably have access to an inter-regional network whose only stop is in the city centre, and thus city-level boundaries might be appropriate at inter-regional level. In contrast, an urban network within a city should use more detailed boundaries that reflect the inherently local nature of the areas served. Note that the population summaries produced by Aquius are not intended to be precise, rather to provide a broad summary of where people are relative to nearby routes, and to allow basic comparison of differences in network connectivity.\n\nIf configuration key `inGeojson` is true (the default), the entire dataset will be limited to services between stops within the GeoJSON boundaries. If configuration key `inGeojson` is false (and a GeoJSON file is provided), any stops in the dataset that fall outside a boundary will be associated to the nearest boundary, calculated on the distance between the stop and the boundary centroid. The false option is useful for relating stops that fall just outside administrative boundaries, such as ferry terminals. 
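That nearest-boundary fallback can be sketched as below (an illustration only, not the actual implementation, using squared equirectangular distance between the stop and each pre-computed boundary centroid, with coordinates as [longitude, latitude] pairs):

```javascript
// Illustrative nearest-centroid search: returns the index of the
// closest centroid to the stop, or -1 if no centroids are supplied
function nearestCentroid(stop, centroids) {
  var best = -1;
  var bestDistance = Infinity;
  for (var i = 0; i < centroids.length; i++) {
    // Scale longitude difference by latitude before comparing distances
    var dx = (stop[0] - centroids[i][0]) *
      Math.cos((((stop[1] + centroids[i][1]) / 2) * Math.PI) / 180);
    var dy = stop[1] - centroids[i][1];
    if ((dx * dx) + (dy * dy) < bestDistance) {
      bestDistance = (dx * dx) + (dy * dy);
      best = i;
    }
  }
  return best;
}
```

Squared distance is sufficient here because only the relative ordering of candidates matters, not the absolute distance.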
Note that, as currently implemented, in [GeoJSON to Aquius](#geojson-to-aquius) the false option simply retains the stop without assigning it to a boundary.\n\nProcessing can be time-consuming for GeoJSON files containing thousands of boundaries, so the results are cached as configuration key `nodeGeojson`. If populated, this key will assign nodes to boundaries based on the centroids provided, without first searching the entire GeoJSON file for a match. For extremely large datasets, this key can potentially be pre-populated with data calculated by (computationally more efficient) GIS software. The coordinate precision within `nodeGeojson` must match that of the key `coordinatePrecision` for the cache to be used. Any node not found in the cache will be processed as normal, so the cache need not be complete.\n\n**Caution:** Caching with `nodeGeojson` assumes boundaries have unique centroids. Nodes cannot be reliably cached where different boundaries share the same centroid, for example perfect concentric rings. To work around this limitation, exclude such nodes from `nodeGeojson`, or do not specify the key in the configuration (which defaults to empty).\n\n## GeoJSON to Aquius\n\nThis tool converts geographic network data in [GeoJSON format](http://geojson.org/) into an Aquius dataset file. GeoJSON lines become Aquius links, GeoJSON points become Aquius nodes, and GeoJSON boundaries become Aquius places (population). Lines must be provided; other data is optional. If points are provided, they must match the coordinates of the start or end of lines to be processed usefully. GeoJSON files must be projected using WGS 84 (which is normally the default for the format).\n\nNon-spatial data may be attached as GeoJSON field names (technically called properties). This data typically matches the Aquius [Data Structure](#data-structure) (for links and nodes respectively), except each piece of data takes a GeoJSON field of its own. 
For example, data items that Aquius contains within a reference `Object` within a properties `Object` are instead exposed as a _top-level_ GeoJSON property. Data that is otherwise held as an `Array` (notably product and service keys) is instead provided in the GeoJSON as a comma-separated `string`. For example, a network with two service filter definitions should attach a `service` property to each GeoJSON line with the string value `"10,20"`, where 10 is the service count associated with the first filter and 20 the second filter.\n\nGeoJSON files can be crafted to match the expected property names, or bespoke names in GeoJSON files can be assigned using a `config.json` file, as listed below. The optional file `config.json` is also important for defining product and service filters (which are otherwise left empty), and adding other header data, such as meta names and translations.\n\nKey|Type|Default|Description\n---|----|-------|-----------\nblockProperty|string|"block"|Field name in GeoJSON properties containing block\ncircularProperty|string|"circular"|Field name in GeoJSON properties containing circular\ncolorProperty|string|"color"|Field name in GeoJSON properties containing service color (6-hex, no hash, [as GTFS specification](https://developers.google.com/transit/gtfs/reference/#routestxt))\ncoordinatePrecision|integer|5|Coordinate decimal places (as GTFS To Aquius [Coordinates](#coordinates))\ndirectionProperty|string|"direction"|Field name in GeoJSON properties containing direction\ninGeojson|boolean|true|If geojson boundaries are provided, only services at nodes within a boundary will be analysed\nlinkNameProperty|string|"name"|Field name in GeoJSON properties containing service name\nlinkUrlProperty|string|"url"|Field name in GeoJSON properties containing service url\nmeta|object|{}|As [Data Structure](#data-structure) meta key\nnetwork|object|{}|As [Data Structure](#data-structure) network key\nnodeCodeProperty|string|"code"|Field 
name in GeoJSON properties containing node code\nnodeNameProperty|string|\"name\"|Field name in GeoJSON properties containing node name\nnodeUrlProperty|string|\"url\"|Field name in GeoJSON properties containing node url\noption|object|{}|As [Configuration](#configuration)/[Data Structure](#data-structure) option key\nplaceNameProperty|string|\"name\"|Field name in GeoJSON properties containing the name or identifier of the place\npopulationProperty|string|\"population\"|Field name in GeoJSON properties containing the number of people (or equivalent demographic statistic)\nproduct|Array|[]|As [Data Structure](#data-structure) network reference.product key (products in index order, referencing productProperty data)\nproductProperty|string|\"product\"|Field name in GeoJSON properties containing product array as comma-separated string\nservice|object|{}|As [Data Structure](#data-structure) service key\nserviceProperty|string|\"service\"|Field name in GeoJSON properties containing service array as comma-separated string\nsharedProperty|string|\"shared\"|Field name in GeoJSON properties containing shared product ID\ntextColorProperty|string|\"text\"|Field name in GeoJSON properties containing service text color (6-hex, no hash, [as GTFS specification](https://developers.google.com/transit/gtfs/reference/#routestxt))\ntranslation|object|{}|As [Configuration](#configuration)/[Data Structure](#data-structure) translation key\n\n**Caution:** Pickup, setdown, and split are not currently supported. Product and service filters are specified exactly as they appear in the output file, as described in [Data Structure](#data-structure): There is no logical processing of product and service data of the type performed by [GTFS To Aquius](#gtfs-to-aquius).\n\nThis script is currently under development, requiring both features and testing, so check the output carefully. [A live demonstration is available here](https://timhowgego.github.io/Aquius/live/geojson/). 
Alternatively, run the `geojson.min.js` [file](https://github.com/timhowgego/Aquius/tree/master/dist) privately, either: \n\n1. With a user interface: Within a webpage, load the script and call `geojsonToAquius.init("aquius-div-id")`, where "aquius-div-id" is the ID of an empty element on the page.\n1. From another script: Call `geojsonToAquius.cartograph(geojson, options)`. \n\nRequired value `geojson` is an `Array` consisting of GeoJSON `Object`s - the original file content already processed by `JSON.parse()`. `options` is an optional `Object` that may contain the following keys, each value itself an `Object`:\n\n* `callback` - function to receive the result, which should accept three `Object`s: `error` (javascript Error), `output` (as returned without callback, described below), `options` (as submitted, which also allows bespoke objects to be passed through to the callback).\n* `config` - contains key:value pairs for optional configuration settings. The minimum content of `config.json` is empty, viz: `{}`. To this `Object` one or more key:value pairs may be added, as listed above.\n\nWithout callback, the function returns an `Object` with possible keys:\n\n* `aquius` - as `dataObject`.\n* `config` - as `config`, but with defaults or calculated values applied.\n* `error` - `Array` of error message strings.\n\n## Merge Aquius\n\nThis tool merges Aquius dataset files into a single file. 
The tool cannot understand these files beyond their technical structure, so the files should be produced in a similar manner:\n\n* Files must not duplicate one another's service count - else service totals will erroneously add.\n* Files must all share the same or very similar service filters - the filter indices and definitions are blindly assumed comparable.\n* Files should use the same place/demographic data (if any) - the original boundaries are not present, so no assessment can be made of overlaps (in future it should become possible to merge a GeoJSON file containing this data, but currently such data must be built into the original Aquius files, prior to merging).\n\nNodes and places are grouped by shared coordinates - the number of decimal places can be set as configuration key `coordinatePrecision` (described below). Products are grouped on name - all translations must be identical. Meta, option and translation content will be copied, but cannot always be merged - configuration keys can be used to supply definitions.\n\nThis script is currently under development, requiring both features and testing, so check the output carefully. [A live demonstration is available here](https://timhowgego.github.io/Aquius/live/merge/). Alternatively, run the `merge.min.js` [file](https://github.com/timhowgego/Aquius/tree/master/dist) privately, either: \n\n1. With a user interface: Within a webpage, load the script and call `mergeAquius.init("aquius-div-id")`, where "aquius-div-id" is the ID of an empty element on the page.\n1. From another script: Call `mergeAquius.merge(input, options)`. \n\nRequired value `input` is an `Array` consisting of one or more `dataObject` in the order to be processed. As a minimum, these must have `meta.schema`, `link` and `node` keys. 
`options` is an optional `Object` that may contain the following keys, each value itself an `Object`:\n\n* `callback` - function to receive the result, which should accept three `Object`s: `error` (javascript Error), `output` (as returned without callback, described below), `options` (as submitted, which also allows bespoke objects to be passed through to the callback).\n* `config` - contains key:value pairs for optional configuration settings. The minimum content of `config.json` is empty, viz: `{}`. To this `Object` one or more key:value pairs may be added. Currently supported keys are:\n\nKey|Type|Default|Description\n---|----|-------|-----------\ncoordinatePrecision|integer|5|Coordinate decimal places (as GTFS to Aquius, smaller values tend to group clusters of stops)\nmeta|object|{}|As [Data Structure](#data-structure) meta key\noption|object|{}|As [Configuration](#configuration)/[Data Structure](#data-structure) option key\ntranslation|object|{}|As [Configuration](#configuration)/[Data Structure](#data-structure) translation key\n\nWithout callback, the function returns an `Object` with possible keys:\n\n* `aquius` - as `dataObject`.\n* `config` - as `config`, but with defaults or calculated values applied.\n* `error` - `Array` of error message strings.\n\n**Caution:** Merge Aquius is intended to merge sets of files created in a similar manner. Merging an ad hoc sequence of Aquius files may appear successful, but the actual services presented may be extremely inconsistent, especially if service filters differ or different analysis periods have been used.\n\n*Tip:* The original dataset files are processed in order of filename, which allows processing order to be controlled. The product filters of the first file will appear at the top of the merged product filter. The nodes in the first file will tend to hold smaller index values, which may have a small impact on final file size. 
The first file is the first source consulted for meta, option, translation (all unless defined by configuration), and service filter.\n\n## Data Structure\n\nAquius requires a network `dataset` JSON file to work with. The dataset file uses a custom data structure, one intended to be sufficiently compact to load quickly, and thus shorn of much human readability and structural flexibility. The dataset file will require custom pre-processing by the creator of the network. [GTFS To Aquius](#gtfs-to-aquius) or [GeoJSON to Aquius](#geojson-to-aquius) can be used to automate this pre-processing. Aquius performs some basic checks on data integrity (of minimum types and lengths) that should catch the more heinous errors, but it is incumbent on the creator of the dataset to control the quality of the data therein. JSON files must be encoded as UTF-8.\n\n### Meta\n\nThe most basic dataset is an `Object` with a key "meta", that key containing another `Object` with key:value pairs. The only required key is `schema`, which is currently always a `string` "0". Other options are as shown in the example below:\n\n```javascript\n{\n "meta": {\n "attribution": {\n "en-US": "Copyright and attribution",\n // Short, text only\n "es-ES": "Derechos"\n },\n "description": {\n "en-US": "Human readable description"\n // For future use\n },\n "name": {\n "en-US": "Human readable name",\n // Short, text only\n "es-ES": "Nombre"\n },\n "schema": "0",\n // Required, always "0"\n "url": "absolute/url/to/more/human/readable/information"\n // Will be wrapped around name\n }\n}\n```\n\n### Translation\n\nAn optional key `translation` may contain an `Object` with the same structure as that described for the `translation` [Configuration](#configuration) option. Translations in the dataset take precedence over every translation except any in the `translation` option. 
If the dataset's network consists of _Stations_ and not generic _Stops_, the dataset's `translation` key contains the best place to redefine that, for example:\n\n```javascript\n\"translation\": {\n \"en-US\": {\n \"node\": \"Stations\"\n },\n \"es-ES\": {\n \"node\": \"Estaciones\"\n }\n}\n```\n\n### Option\n\nAn optional key `option` may contain an `Object` with the same structure as the [Configuration](#configuration) second argument of `aquius.init()`, described earlier. Keys `id`, `dataset`, `network` and `translation` are ignored, all in their own way redundant. It is recommended to set a sensible initial User State for the map (map view, _here_ click), but this key can also be used to apply Styling (color, scale), or even control the User Interface or set alternative base mapping. The `option` key only takes precedence over Aquius' defaults, not over valid hash or Configuration options.\n\n### Reference\n\nAn optional key `reference` may contain an `Object`, itself containing keys whose value is an `Array` of verbose recurrent data. Specific values within the `dataset` may reference this recurrent data by index position, which avoids repeating identical data throughout the `dataset` and thus reduces filesize. Possible `reference` keys are:\n\n* `color` - `Array` of `string` HTML color codes. Currently only 6-hexadecimal styles are supported, including the leading hash, for example `#1e00ff`.\n* `product` - `Array` of `Object` translations of product names, for example `{\"en-US\":\"English Name\"}`, whose index position corresponds to product ID.\n* `url` - `Array` of `string` URLs. URLs may contain one or more expression `[*]` which will be automatically replaced by a link/node specific identifier, such as the route number.\n\n### Network\n\nEach `link` (detailed below) is categorised with an `integer` product ID. The definition of a product is flexible: The network might be organised by different brands, operators, or vehicles. 
One or more product IDs are grouped into network filters, each network filter becoming an option for the user. Products can be added to more than one network filter, and there is no limit on the total number of filters, beyond practical usability: An interface with a hundred network filters would be hard to both digest and navigate.\n\nThe dataset's `network` key consists of an `Array` of network filters, in the order they are to be presented in the User Interface. This order should be kept constant once the dataset is released, since each network filter is referenced in hashable options by its index in the `Array`. Each network filter itself consists of an `Array` of three parts:\n\n1. `Array` containing `integer` product IDs of those products that make up the network filter.\n1. `Object` containing locale:text pairs, where locale is the key, and text is a `string` containing the translated network filter name.\n1. `Object` containing optional properties, reserved for future use.\n\n```javascript\n"network": [\n [\n [1, 2, 3],\n {"en-US": "All 3 products"},\n {}\n ],\n [\n [1, 3],\n {"en-US": "Just 1 and 3"},\n {}\n ]\n]\n```\n\n### Service\n\nEach `link` (detailed below) is categorised with one or more counts of the number of services (typically vehicle journeys) associated with the link. The precise variable is flexible - for example, it could be used to indicate total vehicle capacity - but ensure the `link` key in `translation` contains an appropriate description.\n\nNetworks may be adequately described with just one service count per link. However, `service` allows the same link to be described for different time periods - for example, 2 journeys in the morning and 3 in the afternoon - especially important to differentiate services that are not provided at marginal times, such as evenings or weekends. The `service` key defines those time periods as service filters. Like `network`, each filter consists of a 3-part `Array`:\n\n1. 
`Array` of the index positions within each `link` service array that are to be summed to produce the total service count.\n1. `Object` containing the localised description of the filter.\n1. `Object` containing optional properties, reserved for future use.\n\nIn the example below, the corresponding `link` service array would consist of an `Array` of two numbers, the first for the morning and the second for the afternoon. The first filter would sum both, while the second \"morning\" filter would take only the first service count, the third \"afternoon\" filter only the second count.\n\n```javascript\n\"service\": [\n [\n [0, 1],\n // Index positions in link service array\n {\"en-US\": \"All day\"},\n {}\n ],\n [\n [0],\n {\"en-US\": \"Morning\"},\n {}\n ],\n [\n [1],\n {\"en-US\": \"Afternoon\"},\n {}\n ]\n]\n```\n\n**Caution:** Providing more than one index position in the link service array should only be done where the values sum without invalidating the service count metric used. For example, if the metric used is \"services per day\", summing two days together will falsely double the service level \"per day\". Aquius does not know how the original metric was constructed, so cannot make logical assumptions such as averaging instead of summing.\n\n### Link\n\nThe `link` key contains an `Array` of lines of link data. Each line of link data is defined as a route upon which the entire service has identical stopping points and identical product. On some networks, every daily service will become a separate line of link data, on others almost the whole day's service can be attached to a single line of link data. Each line of link data is itself an `Array` consisting of 4 parts:\n\n1. Product networks (`Array` of `integer`) - list of the Product IDs associated with this link, as described in Network above.\n1. 
Service levels (`Array` of `float`) - where each value in the array contains a count indicative of service level, such as total vehicle journeys operated (the sum of both directions, unless assigned the property `direction` below). To allow filtering, the position in the array must correspond to a position defined in the `service` key. \n1. Nodes served (`Array` of `integer`) - the Node ID of each point the service stops to serve passengers, in order. Routes are presumed to also operate in the reverse direction, but, as described below, the route can be defined as one direction only, in which case the start point is only the first point in the `Array`. Node IDs reference an index position in `node`, and if the `link` is populated with data, so must `node` be (and in turn `place`).\n1. Properties (`Object`) - an extendable area for keys containing optional data or indicating special processing conditions. In many cases this will be empty, vis: `{}`. Optional keys are described below (short forms are recommended to reduce filesize):\n\n* `block` or `b` - `integer` ID unique to a group of links which are actually provided by exactly the same vehicle journey(s), but where each link contains different properties. The ID has no meaning beyond uniquely defining the group. A `block` simply prevents the service total assigned to the group being counted more than once. If the properties of the block are the same, with the product and potentially the nodes differing, define a `shared` instead (which is generally simpler to manage and faster to process). Unlike `shared`, a block must contain links with the same service total, has no defined parent, can define groups of links which have the same product, and specifically identifies the links it is common to. The `block` is primarily used for cabotage, where pickup and setdown restrictions vary depending on the passenger journey being undertaken, not the vehicle journey. 
Flixbus, for example, manage such restrictions by creating multiple copies of each vehicle journey, each copy with different pickup and setdown conditions.\n* `circular` or `c` - `boolean` true or `integer` 1 indicates operation is actually a continuous loop, where the start and end points are the same node. Only the nodes for one complete loop should be included - the notional start and end node thus included twice. Circular services are processed so that their duplicated start/end node is only counted once. Figure-of-eight loops are intentionally double-counted at the midpoint each service passes twice per journey, since such services may reasonably be considered to offer two completely different routes to passengers, however this does result in arithmetic quirks (as demonstrated by Madrid Atocha's C-7).\n* `direction` or `d` - `boolean` true or `integer` 1 indicates operation is only in the direction recorded, not also in the opposite direction. As noted under [Known Issues](#known-issues), services that are both circular and directional will produce numeric quirks. *Tip:* Services that loop only at one end of a route (\"lasso\" routes) should be recorded as uni-directional with nodes on the common section recorded twice, once in each direction - not recorded as circular.\n* `duration` as `m` - `Array` containing numeric average total minutes per trip by service period (experimental, not used in Aquius user interface, works best with service period time ranges).\n* `dwell` as `w` - `Array` containing numeric average total minutes of dwell time in node order (experimental, not used in Aquius user interface).\n* `pickup` or `u` - `Array` containing `integer` Node IDs describing nodes on the service's route where passengers can only board (get on), not alight (get off), expressed relative to order of the Nodes served. For a link summarising both directions, a pickup condition automatically becomes a setdown condition when that node order is reversed. 
If pickup and setdown are not mirrored thus, define two separate links, one in each direction.\n* `reference` or `r` - `Array` containing one or more `Object` of descriptive data associated with the routes within the link - for example, route headcodes or display colors. Possible keys and values are described below.\n* `setdown` or `s` - `Array` containing `integer` Node IDs describing nodes on the service's route where passengers can only alight (get off), not board (get on), expressed relative to order of the Nodes served. For a link summarising both directions, a setdown condition automatically becomes a pickup condition when that node order is reversed. If setdown and pickup are not mirrored thus, define two separate links, one in each direction.\n* `shared` or `h` - `integer` Product ID of the parent service. Shared allows an existing parent service to be assigned an additional child service of a different product category. The specific parent service is not specifically identified, only its product. Over common sections of route, only the parent will be processed and shown, however if the network is filtered to exclude the parent, the child is processed. The parent service should be defined as the longer of the two routes, such that the parent includes all the stops of the child. Shared was originally required to describe Renfe's practice of selling (state supported) local journey fare products on sections of (theoretically commercial) long distance services, but such operations are also common in aviation, where a single flight may carry seats sold by more than one airline.\n* `split` or `t` - `Array` containing `integer` Node IDs describing the unique nodes on the service's route. Split is assigned to one half of a service operated as two portions attached together over a common part of route. Splits can be effected at either or both ends of the route. In theory more than two portions can be handled by assigning a split to every portion except the first. 
Like `shared` services, the companion service is not specifically identified, however a `split` should be of the same Product ID as its companion service (else, to avoid miscalculations, `network` needs to be constructed so that both Product IDs fall into the same categories). Railway services south of London were built on this style of operation, while operators such as Renfe only split trains operated on _very_ long distance routes.\n\nPossible keys within the `reference` Object, all optional, although `n` is strongly recommended:\n\n* `c` - `reference.color` index associated with the route line.\n* `i` - reference code used in URL (see `u` below).\n* `n` - short name or human-readable reference code.\n* `t` - `reference.color` index associated with the route text (must contrast against the color of the line).\n* `u` - `reference.url` index of link providing further human-readable information. The optional `[*]` in that URL will be replaced by `i` if available, else `n`, else removed.\n\nReferences should be consistently presented across the dataset - for example always \"L1\", not also \"l1\" and \"line one\". References should also be unique within localities - for example, the dataset may contain several different services referenced \"L1\", but should they serve the same node they will be aggregated together as \"L1\".\n\n**Caution:** Keys `split`, and perhaps `shared`, can be hacked to mimic guaranteed onward connections to key destinations, especially from isolated branch lines or feeder services. 
However this feature should not be abused to imply all possible interchanges, and it may be more sensible to let users discover for themselves the options available at _the end of the line_.\n\n**Caution:** The use of keys containing lists of nodes (`pickup`, `setdown` and `split`) should be avoided where they contain a node that is duplicated within the link - for example, the start/end node of a circular - because Aquius will attempt to apply the criteria to each duplicate, not just one.\n\n*Tip:* Links follow the node order regardless of any pickup and setdown restrictions, so by assigning a node as both pickup _and_ setdown only (thus allowing neither boarding nor alighting) the route shape can be altered without adding a stop.\n\n### Node\n\nThe `node` key contains an `Array` of node (stop, station) information. Each node is referenced (within `link`) by its index position. The format is simple:\n\n1. Longitude (`float`) - \"x\" coordinate of the node in WGS 84.\n1. Latitude (`float`) - \"y\" coordinate of the node in WGS 84.\n1. Properties (`Object`) - an extendable area for keys related to this node. Minimum empty, vis: `{}`.\n\nOptional `properties` keys are:\n\n* `place` or `p` - `integer` index position in `place` (described below) for the place that contains this node. Recommended.\n* `reference` or `r` - `Array` containing one or more `Object` of descriptive data associated with the stop or stops within the node - for example, names or URLs containing further information.\n\nPossible `reference` keys and values are described below, all optional, although `n` is strongly recommended:\n\n* `i` - reference code used in URL (see `u` below).\n* `n` - short name or human-readable reference code.\n* `u` - `reference.url` index of link providing further human-readable information. 
The optional `[*]` in that URL will be replaced by `i` if available, else `n`, else removed.\n\n*Tip:* To reduce `dataset` file size, restrict the accuracy of coordinates to only that needed - metres, not millimetres. Likewise, while URLs and detailed names may provide useful reference information, these can inflate file size dramatically when lengthy or when attached to every node or service.\n\n### Place\n\nThe `place` key has a similar structure to `node` above - each place an `Array` referenced by index position. Place can be an entirely empty `Array`, vis: `[]`. Places are intended to quickly summarise local demographics - how many people are connected together. Places are assigned simply to nodes (see `node` above), so each node has just one demographic association. For example, the Spanish Railway dataset uses the census of local municipalities, since municipalities tend to self-define the concept of locality, with both cities and villages falling into single municipalities. It is not possible to change the population catchment for specific Products within the same network, so the dataset creator will need to find an acceptable compromise that represents the realistic catchment of a node. Aquius was originally intended simply to show the broad presence of people nearby. As is, if precise catchment is important, networks containing a mix of intra and inter-urban services may be best split into two completely separate datasets, to be shown in two separate instances of Aquius.\n\n1. Longitude (`float`) - \"x\" coordinate of the place in WGS 84.\n1. Latitude (`float`) - \"y\" coordinate of the place in WGS 84.\n1. Properties (`Object`) - an extendable area for keys related to this place. Minimum empty, vis: `{}`.\n\nOptional `properties` keys are:\n\n* `population` or `p` - `integer` total resident population, or equivalent statistic (recommended).\n* `reference` or `r` - `Array` containing one or more `Object` of descriptive data associated with the place. 
The only supported key is `n` - short name or human-readable reference code.\n\n*Tip:* For bespoke analysis, the population can be hacked for any geospatial data that sums.\n\n## License\n\nAquius, with no dataset, is freely reusable and modifiable under a [MIT License](https://opensource.org/licenses/MIT).\n\n[Dataset](https://github.com/timhowgego/AquiusData) copyright will differ, and no licensing guarantees can be given unless made explicit by all entities represented within the dataset. Be warned that no protection is afforded by the _logical nonsense_ of a public transport operator attempting to deny the public dissemination of their public operations. Nor should government-owned companies or state concessionaires be naively presumed to operate in some kind of public domain. Railways, in particular, can accumulate all manner of arcane legislation and strategic national paranoias. In the era of Google many public transport operators have grown less controlling of their information channels, but some traditional entities, [such as Renfe](https://www.elconfidencial.com/espana/madrid/2018-07-17/transparencia-retrasos-cercanias-madrid_1593408/), are not yet beyond claiming basic observable information to be a trade secret. Your mileage may vary.\n\n\n## Contributing\n\n[Contributors are most welcome](https://github.com/timhowgego/Aquius/). Check the [Known Issues](#known-issues) before making suggestions. Try to establish a consensus before augmenting data structures.\n"
}
] | 8 |
aswaysway/price-tracker
|
https://github.com/aswaysway/price-tracker
|
8a76d4985dfea856f570ebc0eb4f7f8c66745086
|
51a440f47a284fdbf012d279ed59ff1a86f2062a
|
178f1325c1a04125c73dd25ed55b2bb6de4960e9
|
refs/heads/master
| 2023-05-04T17:40:51.621398 | 2021-05-21T06:05:43 | 2021-05-21T06:05:43 | 369,430,375 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6368128657341003,
"alphanum_fraction": 0.6998146772384644,
"avg_line_length": 29.547170639038086,
"blob_id": "1579f34194dd6f7ee76c8ef5707415d3e5113b23",
"content_id": "cafab77a28b8c4301a290a6d03061a0df93e3b93",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1619,
"license_type": "no_license",
"max_line_length": 257,
"num_lines": 53,
"path": "/price-tracker.py",
"repo_name": "aswaysway/price-tracker",
"src_encoding": "UTF-8",
"text": "import requests\nfrom bs4 import BeautifulSoup\nimport smtplib\nimport time\n\nURL = 'https://www.amazon.com/Logitech-Lightspeed-Rechargeable-Compatible-Ambidextrous/dp/B07NSVMT22/ref=sr_1_1_sspa?dchild=1&keywords=G903&qid=1606032845&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzMFU0N0VENE1LWU1BJmVuY3J5cHRlZElkPUEwNjI0MDY5MlZLWEY1RkhOTFpVNSZlbmNyeXB0ZWRBZElkPUEwNTg5OTAxSVhFMUsySzVWQkNQJndpZGdldE5hbWU9c3BfYXRmJmFjdGlvbj1jbGlja1JlZGlyZWN0JmRvTm90TG9nQ2xpY2s9dHJ1ZQ=='\n\nheaders = {\"User-Agent\":'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36'}\n\nmail_username = '[email protected]'\nmail_password = 'Generated app password'\n\ndef check_price():\n    page = requests.get(URL, headers=headers)\n\n    soup = BeautifulSoup(page.content, 'html.parser')\n\n    # \"productTitle\" is the element id used on Amazon product pages (original code had a typo, \"prodcutTitle\")\n    title = soup.find(id=\"productTitle\").get_text()\n    price = soup.find(id=\"priceblock_ourprice\").get_text()\n    # strip the currency symbol and thousands separators before converting, so float() does not raise\n    converted_price = float(price.replace('$', '').replace(',', '').strip())\n\n    if(converted_price < 1.700):\n        send_mail()\n\n    print(converted_price)\n    print(title.strip())\n\ndef send_mail():\n    server = smtplib.SMTP('smtp.gmail.com', 587)\n    server.ehlo()\n    server.starttls()\n    server.ehlo()\n\n    server.login(mail_username, mail_password)\n    \n    subject = 'PRICE FELL DOWN!'\n    body = 'Check the Amazon link: https://www.amazon.com/Logitech-Lightspeed-Rechargeable-Compatible-Ambidextrous/dp/B07NSVMT22/ref=sr_1_1_sspa?dchild=1&keywords=G903&qid=1606032845&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzMFU0N0VENE1LWU1BJmVuY3J5cHRlZElkPUEwNjI0MDY5MlZLWEY1RkhOTFpVNSZlbmNyeXB0ZWRBZElkPUEwNTg5OTAxSVhFMUsySzVWQkNQJndpZGdldE5hbWU9c3BfYXRmJmFjdGlvbj1jbGlja1JlZGlyZWN0JmRvTm90TG9nQ2xpY2s9dHJ1ZQ=='\n\n    msg = f\"Subject: {subject}\\n\\n{body}\"\n\n    # sendmail(from_addr, to_addrs, msg): the second argument must be the recipient address,\n    # not the password as in the original; here the alert is sent to the sender's own mailbox\n    server.sendmail(\n        mail_username,\n        mail_username,\n        msg\n    )\n    \n    print('EMAIL HAS BEEN SUCCESSFULLY SENT!')\n\n    server.quit()\n\nwhile(True):\n    check_price()\n    time.sleep(60*60*24)\n"
}
] | 1 |
D0TheMath/turing-machine
|
https://github.com/D0TheMath/turing-machine
|
8a74d7346f2ffe52c5f0ef7a426f4126448ae6ef
|
2d3ab4e0d02acf48c96791b8edaebad8c5a2920c
|
d02591a3ae73b4ceebac84f63937eda930fa8747
|
refs/heads/main
| 2023-01-30T09:27:24.617758 | 2020-12-05T21:31:38 | 2020-12-05T21:31:38 | 318,893,131 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4503012001514435,
"alphanum_fraction": 0.47921687364578247,
"avg_line_length": 26.429752349853516,
"blob_id": "d19be4ff5d169853d0dab3936635cc2faa78c789",
"content_id": "b4c6555b7397bf579fbf3824e973873ce78572eb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3320,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 121,
"path": "/turing.py",
"repo_name": "D0TheMath/turing-machine",
"src_encoding": "UTF-8",
"text": "def fileToMachine(filename, strip, start):\n machineTable = []\n f = open(filename, 'r')\n\n for line in f:\n line = line.split(',')\n if0 = [line[0], int(line[1])]\n if1 = [line[2], int(line[3])]\n machineTable.append(state(if0, if1))\n\n track = Track(strip, start)\n machine = TuringMachine(machineTable, machineTable[0], track)\n return machine \n \nclass state:\n def __init__(self, ifscan0, ifscan1):\n self.ifscan0 = ifscan0\n self.ifscan1 = ifscan1\n\n def do(self, scan):\n if scan == 0:\n return self.ifscan0[0], self.ifscan0[1]\n if scan == 1:\n return self.ifscan1[0], self.ifscan1[1]\n \n \nclass TuringMachine:\n def __init__(self, states, state, track):\n self.states = states\n self.state = state\n self.track = track\n \n def runState(self):\n currentBlock = self.track.strip[self.track.i]\n action, move = self.state.do(currentBlock)\n\n self.state = self.states[move-1]\n\n if action == \"S0\":\n self.track.Erase()\n if action == \"S1\":\n self.track.Print()\n if action == \"R\":\n self.track.Right()\n if action == \"L\":\n self.track.Left()\n if action == \"H\":\n self.track.Halt()\n\n def compute(self):\n halt = self.track.halt\n while not halt:\n self.runState()\n self.track.show()\n halt = self.track.halt\n \nclass Track:\n def __init__(self, strip, i):\n self.strip = strip\n self.i = i\n self.halt = False\n\n def show(self):\n s = \"\"\n for t in range(len(self.strip)):\n block = str(self.strip[t])\n if t == self.i:\n block = '|' + block + '|'\n else:\n block = \" \" + block + \" \"\n s += block\n print(s)\n \n def Erase(self):\n self.strip[self.i] = 0\n def Print(self):\n self.strip[self.i] = 1\n def Right(self):\n self.i += 1\n if self.i >= len(self.strip):\n self.strip.append(0)\n def Left(self):\n self.i -= 1\n if self.i < 0:\n self.strip.insert(0, 0)\n self.i = 0\n def Halt(self):\n self.halt = True\n\n\n\n'''\n3.2 Example (Doubling the number of strokes). 
The machine starts off scanning the left-\nmost of a block of strokes on an otherwise blank tape, and winds up scanning the leftmost\nof a block of twice that many strokes on an otherwise blank tape. The flow chart is shown\nin Figure 3-5.\n'''\n\n'''\nstrip = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0]\ni = 11\ntrack = Track(strip, i)\n\nmachineTable = [state([\"H\", 1], [\"L\", 2]),\n state([\"L\", 3], [\"L\", 3]),\n state([\"S1\", 3], [\"L\", 4]),\n state([\"S1\", 4], [\"R\", 5]),\n state([\"R\", 6], [\"R\", 5]),\n state([\"L\", 7], [\"R\", 6]),\n state([\"L\", 8], [\"S0\", 7]),\n state([\"L\", 11], [\"L\", 9]),\n state([\"L\", 10], [\"L\", 9]),\n state([\"R\", 2], [\"L\", 10]),\n state([\"R\", 12], [\"L\", 11]),\n state([\"H\", 1], [\"H\", 1])]\n\nturing = TuringMachine(machineTable, machineTable[0], track)\ntrack.show()\nturing.compute()\ntrack.show()\n'''\n\n"
}
] | 1 |
transparenciasjc/dados_onibus
|
https://github.com/transparenciasjc/dados_onibus
|
72f97023f44d7ae22317f9eeba6a5e1ac82f02e0
|
b6ef0e2caa322f632bf1ea07e155949b5e4ebce8
|
006b8bb8ed9ca10f1db2d0e9dd5209ad71d7b2be
|
refs/heads/master
| 2016-08-06T09:44:23.596817 | 2013-10-11T20:55:00 | 2013-10-11T20:55:00 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5498839616775513,
"alphanum_fraction": 0.5692188739776611,
"avg_line_length": 41.29439163208008,
"blob_id": "faeb4a09291f764ef041e9eabe928488c2ff710b",
"content_id": "7661b738b9b50caef546683d6a7e5366954e2718",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9106,
"license_type": "no_license",
"max_line_length": 167,
"num_lines": 214,
"path": "/Scrap/bus/bus/spiders/BusSpider.py",
"repo_name": "transparenciasjc/dados_onibus",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport htmlentitydefs\nimport re\nimport urllib2\n\nfrom bus.items import Onibus\n\nfrom scrapy.selector import HtmlXPathSelector\nfrom scrapy.spider import BaseSpider\n\nfrom BeautifulSoup import BeautifulSoup\n\n\nclass BusSpider(BaseSpider):\n name = \"bus\"\n #allowed_domains = [\"http://www.sjc.sp.gov.br/\"]\n start_urls = ['http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=p&opcao=1&txt=']\n\n def parse(self, response):\n hxs = HtmlXPathSelector(response)\n result_list = hxs.select('//tr') # pega cada um dos TRs que estão todos os horários de ônibus\n items = []\n\n for result in result_list:\n onibus = Onibus()\n # pega cada uma das Tds separadamente pois cada uma tem um tratamento. Por exemplo uma pode ser texto puro e outra um link\n td = result.select(\"td\")\n\n try:\n onibus[\"numero\"] = td[0].select(\"text()\").extract()[0] # esta chave no final é porque a função retorna um Array com o texto dentro na primeira posição\n onibus[\"nome\"] = td[1].select(\"text()\").extract()[0]\n onibus[\"sentido\"] = td[2].select(\"text()\").extract()[0]\n link = td[3].select(\"a/@href\").extract()\n\n if len(link): # se a lista for menor siginifica que o TD não é de ônibus...\n link_conteudo = \"http://www.sjc.sp.gov.br\" + link[0]\n onibus[\"link\"] = link_conteudo\n\n next_page = urllib2.urlopen(link_conteudo)\n response = next_page.read()\n next_page.close() # fecha a conexão\n soup = BeautifulSoup(response) # transforma a página em um objeto parseável\n horarios = []\n for node in soup.findAll('p'): # percorre por todas as Tags \"P\" que é onde encontra-se o conteúdo\n texto = ''.join(node.findAll(text=True))\n horarios.append(texto)\n\n horarios = horarios[2:-2]\n\n horarios_bruto = []\n horarios_map = {}\n semana_map = {}\n sabado_map = {}\n domingo_map = {}\n\n for index, item in enumerate(horarios):\n try:\n horario = horarios[index]\n cursor = horario.find(\")\") # dígito que representa o fim da 
frase que não fazem parte dos horários.\n cursor += 1 # pula um caracter para não incluir o \")\"\n horario = horario[cursor:] # remove todos os textos desnecessários pegando apenas a parte do texto a partir do \")\"\n #if(horario[0].isdigit()): # se for dígito ele separa todos os campos de horários.\n horario = horario.replace(\",\", \" \") # remove todas as vírgulas da string de horários\n array = horario.split() # separa cada campo, colocando cada horário em uma posição do array\n #horario = self.unescape(horario)\n horarios_bruto.append(array) # [0]=Madrugada, [1]=manhã, [2]=tarde, [3]=noite\n\n except Exception:\n pass\n\n if len(horarios) >= 5:\n # Exemplo tipo 1 (5) 1,2,3,4\n semana_map[\"madrugada\"] = horarios_bruto[1]\n semana_map[\"manha\"] = horarios_bruto[2]\n semana_map[\"tarde\"] = horarios_bruto[3]\n semana_map[\"noite\"] = horarios_bruto[4]\n\n if len(horarios) >= 11:\n # Exemplo tipo 2 (11) 1,2,3,4 2,3,4,5\n sabado_map[\"madrugada\"] = horarios_bruto[7]\n sabado_map[\"manha\"] = horarios_bruto[8]\n sabado_map[\"tarde\"] = horarios_bruto[9]\n sabado_map[\"noite\"] = horarios_bruto[10]\n\n if len(horarios) >= 17:\n # Exemplo tipo 3 (17) 1,2,3,4 2,3,4,5 13,14,15,16\n domingo_map[\"madrugada\"] = horarios_bruto[13]\n domingo_map[\"manha\"] = horarios_bruto[14]\n domingo_map[\"tarde\"] = horarios_bruto[15]\n domingo_map[\"noite\"] = horarios_bruto[16]\n\n horarios_map[\"semana\"] = semana_map\n horarios_map[\"sabado\"] = sabado_map\n horarios_map[\"domingo\"] = domingo_map\n onibus[\"horarios\"] = horarios_map\n\n items.append(onibus) # só adiciona se for efetivamente um ônibus, pois se não for vai levantar excessão antes de entrar aqui.\n\n except Exception:\n pass # esta excessão é para evitar as outras TDs que não sejam as de ônibus\n\n return items\n #print(items)\n #filename = response.url.split(\"/\")[-2]\n #open(filename, 'wb').write(response.body)\n\n ##\n # Removes HTML or XML character references and entities from a text string.\n #\n # 
@param text The HTML (or XML) source text.\n # @return The plain text, as a Unicode string, if necessary.\n def unescape(text):\n def fixup(m):\n text = m.group(0)\n if text[:2] == \"&#\":\n # character reference\n try:\n if text[:3] == \"&#x\":\n return unichr(int(text[3:-1], 16))\n else:\n return unichr(int(text[2:-1]))\n except ValueError:\n pass\n else:\n # named entity\n try:\n text = unichr(htmlentitydefs.name2codepoint[text[1:-1]])\n except KeyError:\n pass\n return text # leave as is\n return re.sub(\"&#?\\w+;\", fixup, text)\n\n\"\"\"\n# linkConteudo = \"http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=d&id_linha=441\"\n# linkConteudo = \"http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=d&id_linha=495\"\n# linkConteudo = \"http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=d&id_linha=8\"\n\nimport urllib\nimport urllib2\nimport string\nimport sys\nfrom BeautifulSoup import BeautifulSoup\n\nlinkConteudo = \"http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=d&id_linha=495\"\nlinkConteudo = \"http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=d&id_linha=441\"\nlinkConteudo = \"http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=d&id_linha=8\"\n\n\n\n\n\nlinkConteudo = \"http://www.sjc.sp.gov.br/secretarias/transportes/horario-e-itinerario.aspx?acao=d&id_linha=21\"\nnextPage = urllib2.urlopen(linkConteudo)\nresponse = nextPage.read()\nnextPage.close() # fecha a conexão\nsoup = BeautifulSoup(response) # transforma a página em um objeto parseável\nhorarios = []\nfor node in soup.findAll('p'): # percorre por todas as Tags \"P\" que é onde encontra-se o conteúdo\n texto = ''.join(node.findAll(text=True))\n horarios.append(texto)\n\nhorarios.pop(0)\nhorarios.pop(0)\nhorarios.pop()\nhorarios.pop()\nhorariosBruto = []\n\nfor index, item in enumerate(horarios):\n try:\n horario = 
horarios[index]\n cursor = horario.find(\")\") # dígito que representa o fim da frase que não fazem parte dos horários.\n cursor = cursor + 1 # pula um caracter para não incluir o \")\"\n horario = horario[cursor:] # remove todos os textos desnecessários pegando apenas a parte do texto a partir do \")\"\n #if(horario[0].isdigit()): # se for dígito ele separa todos os campos de horários.\n horario = horario.replace(\",\", \" \") # remove todas as vírgulas da string de horários\n array = horario.split() # separa cada campo, colocando cada horário em uma posição do array\n horariosBruto.append(array) # [0]=Madrugada, [1]=manhã, [2]=tarde, [3]=noite\n\n except Exception, e:\n pass\n\n\nhorariosMap = {}\nsemanaMap = {}\nsabadoMap = {}\ndomingoMap = {}\n\nif len(horarios) >= 5:\n # Exemplo tipo 1 (5) 1,2,3,4\n semanaMap[\"madrugada\"] = horariosBruto[1]\n semanaMap[\"manha\"] = horariosBruto[2]\n semanaMap[\"tarde\"] = horariosBruto[3]\n semanaMap[\"noite\"] = horariosBruto[4]\n\nif len(horarios) >= 11:\n # Exemplo tipo 2 (11) 1,2,3,4 2,3,4,5\n sabadoMap[\"madrugada\"] = horariosBruto[7]\n sabadoMap[\"manha\"] = horariosBruto[8]\n sabadoMap[\"tarde\"] = horariosBruto[9]\n sabadoMap[\"noite\"] = horariosBruto[10]\n\nif len(horarios) >= 17:\n # Exemplo tipo 3 (17) 1,2,3,4 2,3,4,5 13,14,15,16\n domingoMap[\"madrugada\"] = horariosBruto[13]\n domingoMap[\"manha\"] = horariosBruto[14]\n domingoMap[\"tarde\"] = horariosBruto[15]\n domingoMap[\"noite\"] = horariosBruto[16]\n\nhorariosMap[\"semana\"] = semanaMap\nhorariosMap[\"sabado\"] = sabadoMap\nhorariosMap[\"domingo\"] = domingoMap\nprint(horariosMap)\n\n\"\"\"\n"
},
{
"alpha_fraction": 0.42046043276786804,
"alphanum_fraction": 0.4570615589618683,
"avg_line_length": 26.03496551513672,
"blob_id": "2b16c6a01f422ce6f9a1755e04b8844c06c70943",
"content_id": "f40a0f3f484e726acbacda6eb6891ee155dbe57a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 7732,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 286,
"path": "/Sqlite/sqlite.js",
"repo_name": "transparenciasjc/dados_onibus",
"src_encoding": "UTF-8",
"text": "/*\n\n BD SQLite feito com RingJS\n\n @author: PauloLuan\n\n*/\n\naddToClasspath(module.resolve('./sqlite-jdbc-3.6.16.jar'));\n\nvar driver = new Packages.org.sqlite.JDBC();\n\nvar connect = function (url, options) {\n var info = new Packages.java.util.Properties();\n for (var key in options) {\n info.setProperty(key, String(options[key]));\n }\n if (!driver.acceptsURL(url)) {\n throw new Error(\"SQLITE driver doesn't accept url '\" + url + \"'.\");\n }\n var conn = driver.connect(url, info);\n return conn;\n};\n\nvar open = function (path) {\n return connect(\"jdbc:sqlite:\" + path);\n};\n\nvar memory = function () {\n return connect(\"jdbc:sqlite::memory:\")\n};\n\nvar create = function () {\n try {\n var connection = this.open(\"/Users/funcatemobile/Desktop/onibus.sqlite\");\n var statement = connection.createStatement();\n\n // Scripts generated by dbdsgnr.appspot.com\n statement.executeUpdate(\n \"CREATE TABLE onibus (\"+\n \"id INTEGER ,\" +\n \"json TEXT\"+\n \");\"\n\n /*\"CREATE TABLE onibus (\"+\n \"nome TEXT, \"+\n \"id INTEGER PRIMARY KEY, \"+\n \"numero TEXT \"+\n \");\"+\n \"CREATE TABLE horario (\"+\n \"id_onibus INTEGER, \"+\n \"tipo INTEGER, \"+\n \"horario TEXT, \"+\n \"sentido TEXT \"+\n \");\"*/\n );\n\n console.log(\"Criado com sucesso!\")\n } catch (err) {\n // TODO: nicer exception handling - this is hard for the user to find\n throw new Error(\"Can't open '\" + db + \"' for writing. 
Set GEOEXPLORER_DATA to a writable directory.\");\n } finally {\n connection.close();\n }\n}\n\nvar initialize = function () {\n try {\n var connection = this.open(\"/Users/funcatemobile/Desktop/onibus.sqlite\");\n\n var prep = connection.prepareStatement(\n \"INSERT INTO onibus (id, json) VALUES (?, ?);\"\n );\n\n var items = require(\"./items.json\");\n\n for (var i = 0; i < items.length; i++) {\n var hash = sha1(items[i].nome + items[i].sentido);\n items[i].hash = hash\n };\n\n var json = {};\n json[\"items\"] = items;\n\n var jsonString = JSON.stringify(json);\n prep.setString(1, 1);\n prep.setString(2, jsonString);\n prep.executeUpdate();\n\n var statement = connection.createStatement();\n var results = statement.executeQuery(\"SELECT * FROM onibus;\");\n\n while (results.next()) {\n console.log(\"JSON Salvo no BD!: \" + results.getString(\"json\"))\n }\n } finally {\n connection.close();\n }\n}\n\nvar getAll = function () {\n try {\n var connection = this.open(\"/Users/funcatemobile/Desktop/onibus.sqlite\");\n var statement = connection.createStatement();\n var results = statement.executeQuery(\"SELECT * FROM onibus;\");\n\n while (results.next()) {\n console.log(\"JSON Salvo no BD!: \" + results.getString(\"id\"))\n }\n } finally {\n connection.close();\n }\n}\n\n\nvar sha1 = function(msg) {\n function rotate_left(n, s) {\n var t4 = ( n << s ) | (n >>> (32 - s));\n return t4;\n }\n\n function lsb_hex(val) {\n var str = \"\";\n var i;\n var vh;\n var vl;\n\n for (i = 0; i <= 6; i += 2) {\n vh = (val >>> (i * 4 + 4)) & 0x0f;\n vl = (val >>> (i * 4)) & 0x0f;\n str += vh.toString(16) + vl.toString(16);\n }\n return str;\n }\n\n function cvt_hex(val) {\n var str = \"\";\n var i;\n var v;\n\n for (i = 7; i >= 0; i--) {\n v = (val >>> (i * 4)) & 0x0f;\n str += v.toString(16);\n }\n return str;\n };\n\n\n function Utf8Encode(string) {\n string = string.replace(/\\r\\n/g, \"\\n\");\n var utftext = \"\";\n\n for (var n = 0; n < string.length; n++) {\n\n var c = 
string.charCodeAt(n);\n\n if (c < 128) {\n utftext += String.fromCharCode(c);\n }\n else if ((c > 127) && (c < 2048)) {\n utftext += String.fromCharCode((c >> 6) | 192);\n utftext += String.fromCharCode((c & 63) | 128);\n }\n else {\n utftext += String.fromCharCode((c >> 12) | 224);\n utftext += String.fromCharCode(((c >> 6) & 63) | 128);\n utftext += String.fromCharCode((c & 63) | 128);\n }\n\n }\n\n return utftext;\n }\n\n var blockstart;\n var i, j;\n var W = new Array(80);\n var H0 = 0x67452301;\n var H1 = 0xEFCDAB89;\n var H2 = 0x98BADCFE;\n var H3 = 0x10325476;\n var H4 = 0xC3D2E1F0;\n var A, B, C, D, E;\n var temp;\n\n msg = Utf8Encode(msg);\n\n var msg_len = msg.length;\n\n var word_array = new Array();\n for (i = 0; i < msg_len - 3; i += 4) {\n j = msg.charCodeAt(i) << 24 | msg.charCodeAt(i + 1) << 16 |\n msg.charCodeAt(i + 2) << 8 | msg.charCodeAt(i + 3);\n word_array.push(j);\n }\n\n switch (msg_len % 4) {\n case 0:\n i = 0x080000000;\n break;\n case 1:\n i = msg.charCodeAt(msg_len - 1) << 24 | 0x0800000;\n break;\n\n case 2:\n i = msg.charCodeAt(msg_len - 2) << 24 | msg.charCodeAt(msg_len - 1) << 16 | 0x08000;\n break;\n\n case 3:\n i = msg.charCodeAt(msg_len - 3) << 24 | msg.charCodeAt(msg_len - 2) << 16 | msg.charCodeAt(msg_len - 1) << 8 | 0x80;\n break;\n }\n\n word_array.push(i);\n\n while ((word_array.length % 16) != 14) word_array.push(0);\n\n word_array.push(msg_len >>> 29);\n word_array.push((msg_len << 3) & 0x0ffffffff);\n\n\n for (blockstart = 0; blockstart < word_array.length; blockstart += 16) {\n\n for (i = 0; i < 16; i++) W[i] = word_array[blockstart + i];\n for (i = 16; i <= 79; i++) W[i] = rotate_left(W[i - 3] ^ W[i - 8] ^ W[i - 14] ^ W[i - 16], 1);\n\n A = H0;\n B = H1;\n C = H2;\n D = H3;\n E = H4;\n\n for (i = 0; i <= 19; i++) {\n temp = (rotate_left(A, 5) + ((B & C) | (~B & D)) + E + W[i] + 0x5A827999) & 0x0ffffffff;\n E = D;\n D = C;\n C = rotate_left(B, 30);\n B = A;\n A = temp;\n }\n\n for (i = 20; i <= 39; i++) {\n temp = 
(rotate_left(A, 5) + (B ^ C ^ D) + E + W[i] + 0x6ED9EBA1) & 0x0ffffffff;\n E = D;\n D = C;\n C = rotate_left(B, 30);\n B = A;\n A = temp;\n }\n\n for (i = 40; i <= 59; i++) {\n temp = (rotate_left(A, 5) + ((B & C) | (B & D) | (C & D)) + E + W[i] + 0x8F1BBCDC) & 0x0ffffffff;\n E = D;\n D = C;\n C = rotate_left(B, 30);\n B = A;\n A = temp;\n }\n\n for (i = 60; i <= 79; i++) {\n temp = (rotate_left(A, 5) + (B ^ C ^ D) + E + W[i] + 0xCA62C1D6) & 0x0ffffffff;\n E = D;\n D = C;\n C = rotate_left(B, 30);\n B = A;\n A = temp;\n }\n\n H0 = (H0 + A) & 0x0ffffffff;\n H1 = (H1 + B) & 0x0ffffffff;\n H2 = (H2 + C) & 0x0ffffffff;\n H3 = (H3 + D) & 0x0ffffffff;\n H4 = (H4 + E) & 0x0ffffffff;\n\n }\n\n var temp = cvt_hex(H0) + cvt_hex(H1) + cvt_hex(H2) + cvt_hex(H3) + cvt_hex(H4);\n return temp.toLowerCase();\n}\n\nexports.create = create;\nexports.initialize = initialize;\nexports.sha1 = sha1;\nexports.getAll = getAll;\nexports.open = open;\nexports.memory = memory;\n"
},
{
"alpha_fraction": 0.6407185792922974,
"alphanum_fraction": 0.64371258020401,
"avg_line_length": 19.875,
"blob_id": "1f5b2100721cf83960ff901b1f28529461464a1b",
"content_id": "3ebfc1d6a358f7dc8e68250dbfcdb806defb6e42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 334,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 16,
"path": "/Scrap/bus/bus/items.py",
"repo_name": "transparenciasjc/dados_onibus",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Define here the models for your scraped items\n#\n# See documentation in:\n# http://doc.scrapy.org/topics/items.html\n\nfrom scrapy.item import Item, Field\n\n\nclass Onibus(Item):\n numero = Field()\n nome = Field()\n sentido = Field()\n sentido_completo = Field()\n horarios = Field()\n link = Field()\n"
},
{
"alpha_fraction": 0.7333333492279053,
"alphanum_fraction": 0.7515151500701904,
"avg_line_length": 46,
"blob_id": "8c75e758e9f9eb313313af837f4e1487529d4e4e",
"content_id": "f064c4a6558a18f99a832f3654ede843e832191d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 336,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 7,
"path": "/README.md",
"repo_name": "transparenciasjc/dados_onibus",
"src_encoding": "UTF-8",
"text": "\nPara o desenvolvimento deste Script foi utilizado a biblioteca (Scrapy)[http://scrapy.org/] na versão 0.18. \n\nO Script foi concebido na versão antiga do Framework, a documentação está disponível (aqui)[http://doc.scrapy.org/en/0.18/].\n\nPara rodar o Script execute o seguinte comando: \n\n scrapy crawl bus -o items.json -t json\n"
}
] | 4 |
EYE-hub/pdf-merge
|
https://github.com/EYE-hub/pdf-merge
|
706c50aed14849e06e1fab8e440b78015af427ce
|
6d68f2382ecfe4f1435bac400970310161bed1e8
|
03a3f8089bb7f4f689e78b2830e7f4bfc89c5009
|
refs/heads/main
| 2023-03-17T23:38:38.030061 | 2021-01-13T06:47:27 | 2021-01-13T06:47:27 | 351,726,410 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7884615659713745,
"alphanum_fraction": 0.807692289352417,
"avg_line_length": 25,
"blob_id": "94025bfd1973400597eaaad088d6f3e275ecccfd",
"content_id": "a579b9e0de31778cec1257078d91598e2a3b06e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 52,
"license_type": "no_license",
"max_line_length": 35,
"num_lines": 2,
"path": "/README.md",
"repo_name": "EYE-hub/pdf-merge",
"src_encoding": "UTF-8",
"text": "# ThePdfFactory\nhttp://athirai7.pythonanywhere.com/\n"
},
{
"alpha_fraction": 0.6988795399665833,
"alphanum_fraction": 0.707563042640686,
"avg_line_length": 26.674419403076172,
"blob_id": "be4a1132c8b1cd7cbf12f98d92ce11cc842ef050",
"content_id": "7478f456d0e002ca78e12a0093b8dfb91ba7ae57",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3570,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 129,
"path": "/server.py",
"repo_name": "EYE-hub/pdf-merge",
"src_encoding": "UTF-8",
"text": "from flask import Flask,flash,render_template,url_for,request,redirect,send_file\nfrom flask import send_from_directory\nfrom werkzeug.utils import secure_filename\nimport csv\nfrom werkzeug.utils import secure_filename\nfrom subprocess import check_output\nimport os\nimport urllib.request\nimport PyPDF2\nimport sys\n\nUPLOAD_FOLDER = './upload_files'\nALLOWED_EXTENSIONS = set({'pdf'})\n\napp = Flask(__name__)\napp.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER\napp.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024\napp.secret_key = \"imconfused\"\n\ndef allowed_file(filename):\n\treturn '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS\n\n\n\n\[email protected]('/merge.html', methods=['POST'])\ndef upload_file():\n\n\tif request.method == 'POST':\n\t\tif 'files[]' not in request.files:\n\t\t\tflash('No file part')\n\t\t\treturn redirect(request.url)\n\tfiles = request.files.getlist('files[]')\n\tfor file in files:\n\t\tif file.filename == '':\n\t\t\tflash('No selected file')\n\t\t\treturn redirect(request.url)\n\t\tif file and allowed_file(file.filename):\n\t\t# \tfilename = secure_filename(file.filename)\n\t\t# \tfile.save(os.path.join(app.config['UPLOAD_FOLDER'],filename))\n\t\t\tflash('File(s) successfully uploaded')\t\n\t\t\tpdf_combiner(files)\n\t\t\t# return_file()\n\t\t\t# delete()\n\t\tif not allowed_file(file.filename):\n\t\t\tflash('File is not in PDF format')\n\t\n\t\n\treturn render_template('merge.html') \n\[email protected]('/return-file')\ndef return_file():\t\n\t\treturn send_file(\"./super_file.pdf\")\n\t\[email protected]('/return-file1')\ndef return_file1():\t\n\t\treturn send_file(\"./watermarked.pdf\")\n\t\t\n\n\ndef pdf_combiner(pdf_list):\n\tmerger = PyPDF2.PdfFileMerger()\n\tfor pdf in pdf_list:\n\t\tprint(pdf)\n\t\tmerger.append(pdf)\n\tmerger.write('super_file.pdf')\n\ndef pdf_watermarker(pdf_list):\n\tfilename1 = secure_filename(pdf_list[0].filename)\n\tfilename2 = secure_filename(pdf_list[1].filename)\n\t# 
template = PyPDF2.PdfFileReader(open('./upload_files/'+filename1,'rb'))\n\t# watermark = PyPDF2.PdfFileReader(open('./upload_files/'+filename2,'rb'))\n\ttemplate = PyPDF2.PdfFileReader(open('./upload_files/'+filename1,'rb'))\n\twatermark = PyPDF2.PdfFileReader(open('./upload_files/'+filename2,'rb'))\n\toutput = PyPDF2.PdfFileWriter()\n\tfor i in range(template.getNumPages()):\n\t\tpage = template.getPage(i)\n\t\tpage.mergePage(watermark.getPage(0))\n\t\toutput.addPage(page)\n\n\t\twith open('watermarked.pdf','wb') as file:\n\t\t\toutput.write(file)\n\[email protected]('/watermark', methods=['POST'])\ndef upload_file_watermark():\n\tif request.method == 'POST':\n\t\tif 'files[]' not in request.files:\n\t\t\tflash('No file part')\n\t\t\treturn redirect(request.url)\n\tfiles = request.files.getlist('files[]')\n\tfor file in files:\n\t\tif file.filename == '':\n\t\t\tflash('No selected file')\n\t\t\treturn redirect(request.url)\n\t\tif file and allowed_file(file.filename):\n\t\t\tfilename = secure_filename(file.filename)\n\t\t\tfile.save(os.path.join(app.config['UPLOAD_FOLDER'],filename))\n\t\t\tflash('File(s) successfully uploaded')\n\t\t\t\n\t\tif not allowed_file(file.filename):\n\t\t\tflash('File is not in PDF format')\n\ttry:\n\t\tpdf_watermarker(files)\n\texcept IndexError:\n\t\tflash('Ensure that only two files are uploaded')\n\treturn render_template('watermark.html')\n\[email protected]('/uploads/<filename>')\ndef uploaded_file(filename):\n\treturn send_from_directory(app.config['UPLOAD_FOLDER'],\n\t\t\t\t\t\t\t filename)\n\n\[email protected]('/')\ndef my_home():\n\treturn render_template('index.html')\n\[email protected]('/merge.html')\ndef merge():\n\treturn render_template('merge.html')\n\n\[email protected]('/watermark')\ndef watermark():\n\t\treturn render_template('watermark.html')\n\[email protected]('/index')\ndef home1():\n\t\treturn render_template('index.html')\n"
}
] | 2 |
benv0/RitoBot
|
https://github.com/benv0/RitoBot
|
643442f05dc041d539d4653e927bac572c2a5799
|
c9bbfc26b28620cab4ff0ee22cf0254694f138bc
|
0a733372182f38bc4f60849b370452f3f955a31d
|
refs/heads/master
| 2022-12-16T18:58:12.848805 | 2020-09-15T06:30:24 | 2020-09-15T06:30:24 | 295,635,331 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6208361387252808,
"alphanum_fraction": 0.6326850056648254,
"avg_line_length": 34.966941833496094,
"blob_id": "a89706db1492ea5a4bb39a29be23223243795b41",
"content_id": "17de8bcb837470e1900c8c2bffd0226fc4ddb648",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4473,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 121,
"path": "/rito.py",
"repo_name": "benv0/RitoBot",
"src_encoding": "UTF-8",
"text": "\r\nimport discord\r\nimport json\r\n\r\nfrom discord.ext import commands\r\nimport requests\r\n\r\n\r\nTOKEN = \"YOUR DISCORD TOKEN\"\r\nKEY = \"YOUR RIOT GAMES DEVELOPMENT KEY\"\r\nPLAYER_NAME_URL = 'https://na1.api.riotgames.com/lol/summoner/v4/summoners/by-name/'\r\nCHAMPION_URL = 'http://ddragon.leagueoflegends.com/cdn/10.18.1/data/en_US/champion.json'\r\n\r\nbot = commands.Bot(command_prefix='$')\r\n\r\n#signals when the bot is ready for use\r\[email protected]\r\nasync def on_ready():\r\n print(f'{bot.user.name} has connected to Discord!')\r\n\r\n#takes in player name and queueType and look for the player winrate in that queue type\r\[email protected]()\r\nasync def wr(ctx, player_name, queueType):\r\n await ctx.send('Player name: {}'.format(player_name))\r\n\r\n summonerID = getID(player_name)\r\n\r\n if summonerID != 'err':\r\n result = getPlayerStats(summonerID, queueType)\r\n await ctx.send('Winrate for {} in {} is {:.2f}%'.format(result['summonerName'],result['queueType'],\r\n (round(result['wins']/(result['losses']+result['wins']),3)*100)))\r\n\r\n#helper fucntion to retrieve the player's information\r\ndef getPlayerStats(sID,queueID):\r\n queue = ''\r\n if queueID == 'solo':\r\n queue = 'RANKED_SOLO_5x5'\r\n elif queueID == 'flex':\r\n queue = 'RANKED_FLEX_SR'\r\n\r\n response = requests.get('https://na1.api.riotgames.com/lol/league/v4/entries/by-summoner/' + sID + '?api_key='+ KEY)\r\n\r\n if(response.status_code == 200):\r\n for item in response.json():\r\n if item['queueType'] == queue:\r\n return item\r\n\r\n return 'err'\r\n\r\n#returning player's encrypted Riot Games ID\r\ndef getID(player_name):\r\n response = requests.get(PLAYER_NAME_URL + player_name + '?api_key='+ KEY)\r\n if (response.status_code == 200):\r\n return response.json()['id']\r\n else: \r\n return 'err'\r\n#look up a player's win rate on a certain champion in all game modes\r\[email protected]()\r\nasync def wrChamp(ctx,player_name,champ_name):\r\n aID = 
getAccID(player_name)\r\n champID = getChampKey(champ_name)\r\n if aID != 'err' and champID != 'err':\r\n response = requests.get('https://na1.api.riotgames.com/lol/match/v4/matchlists/by-account/' + aID + '?champion=' + str(champID) + '&api_key=' + KEY)\r\n if (response.status_code == 200):\r\n await ctx.send(await display(ctx,response,aID,player_name,champ_name))\r\n else: \r\n return 'err'\r\n\r\n# returns a string detailing a player's winrate on champion after x games (maximum of 50 games)\r\n# aID is the player's encrypted account ID\r\nasync def display(ctx,response,aID,player_name,champ_name):\r\n matches = response.json()['matches']\r\n limit = 0\r\n total = 50\r\n winCount = 0\r\n if len(matches) > total:\r\n limit = total\r\n else:\r\n limit = len(matches)\r\n for i in range(limit):\r\n if await checkPlayerWin(matches[i], aID):\r\n print(i)\r\n winCount = winCount + 1\r\n return '{} has a winrate of {:.2f}% with {} in the most recent {} games.'.format(player_name,round((winCount/limit)*100,2),champ_name,limit)\r\n\r\n# see if the player has won a match, taking a matchID as an argument\r\nasync def checkPlayerWin(match,aID):\r\n participantID = 0\r\n r = requests.get('https://na1.api.riotgames.com/lol/match/v4/matches/' + str(match['gameId'])+ '?api_key=' + KEY)\r\n result = r.json()\r\n if(r.status_code == 200):\r\n participantID = getPlayerID(result,aID)\r\n if participantID != -1:\r\n for participant in result['participants']:\r\n print(participantID)\r\n if participant['participantId'] == participantID:\r\n return participant['stats']['win']\r\n return False\r\n\r\n# getting the player's unique participant ID every game\r\ndef getPlayerID(r,aID):\r\n for player in r['participantIdentities']:\r\n if aID == player['player']['accountId']:\r\n return player['participantId']\r\n return -1\r\n# returning the player's unique account ID \r\ndef getAccID(player_name):\r\n response = requests.get(PLAYER_NAME_URL + player_name + '?api_key='+ KEY)\r\n if 
(response.status_code == 200):\r\n return response.json()['accountId']\r\n else: \r\n return 'err'\r\n\r\n# return the champion's unique id based on the champion's name\r\ndef getChampKey (champ_name):\r\n response = requests.get(CHAMPION_URL)\r\n if (response.status_code == 200):\r\n return response.json()['data'][champ_name]['key']\r\n else:\r\n return 'err'\r\n\r\nbot.run(TOKEN)"
},
{
"alpha_fraction": 0.8012820482254028,
"alphanum_fraction": 0.8012820482254028,
"avg_line_length": 51,
"blob_id": "225bf683366d47114e70a5c61619923e0a3dacac",
"content_id": "71bf038db902779e84596fb58baed0a22027fcae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 156,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 3,
"path": "/README.md",
"repo_name": "benv0/RitoBot",
"src_encoding": "UTF-8",
"text": "# RitoBot\nA Discord bot to look up a League of Legends player's winrate\nReplace DISCORD_TOKEN and KEY with your bot's token and Riot Games API respectively\n"
}
] | 2 |
leeeunlin/DatabaseUpdate
|
https://github.com/leeeunlin/DatabaseUpdate
|
d6f53dbb2182159899cceaa8992f1e81b65d9d8d
|
280bdd233b7d403daffab9aa225ab18637e41e6a
|
6bd63e86100e553e388628c1f6a9e6006ed16626
|
refs/heads/master
| 2023-07-18T04:34:11.404595 | 2021-08-24T07:47:03 | 2021-08-24T07:47:03 | 389,642,360 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6186291575431824,
"alphanum_fraction": 0.6379613280296326,
"avg_line_length": 23.782608032226562,
"blob_id": "be0e4fa4dedcf93306124e4222c78696ec20af59",
"content_id": "0b37a9312e7af8b97c734b1a749666f9e2b60568",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 737,
"license_type": "permissive",
"max_line_length": 90,
"num_lines": 23,
"path": "/DatabaseUpdate.py",
"repo_name": "leeeunlin/DatabaseUpdate",
"src_encoding": "UTF-8",
"text": "import pymysql\n\npasswd = input(\"데이터베이스의 비밀번호를 입력 \\n\")\n\nconn = pymysql.connect(host = \"127.0.0.1\",port = 3306,user = \"root\",password = passwd)\n \ncur = conn.cursor()\ncur.execute(\"show databases\")\nrows = cur.fetchall()\n\n_list = []\nfor index in rows:\n _list.append(index[0]) # tuple값은 수정이 안되니 리스트로 집어넣어주자\n\ndbexe = input(\"Project DB에 수정할 명령어 입력 \\n\")\n\nfor row in _list:\n if 'project' in row: # 리스트에 넣은 값 중 project 스트링이 있는것만 골라서 넣자\n cur.execute(\"use %s;\" % row)\n cur.execute(dbexe)\n print(row, \"Finish\")\n\ninput(\"모든 작업이 완료되었습니다. 프로그램을 종료합니다.\")"
},
{
"alpha_fraction": 0.760869562625885,
"alphanum_fraction": 0.760869562625885,
"avg_line_length": 14.333333015441895,
"blob_id": "08c6a476029e822896505bbacf650aeaa2ee7ebf",
"content_id": "1c38dae77843a8ec5404e534dccc48f9b2ced000",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 200,
"license_type": "permissive",
"max_line_length": 36,
"num_lines": 6,
"path": "/README.md",
"repo_name": "leeeunlin/DatabaseUpdate",
"src_encoding": "UTF-8",
"text": "# DatabaseUpdate\nDB 수동작업을 귀찮아하는 유지보수 인력을 위해 제작해두었습니다.\n\n마무리가 아직 안되어있습니다\n\n자유로운 수정 및 배포 가능합니다.\n"
}
] | 2 |
bobojon97/Django-rest-Framework
|
https://github.com/bobojon97/Django-rest-Framework
|
9e99c0315ac084eac579b14048aa2689afd0f0e4
|
90aa7e07c50b5decb3bac41be698f1caa000264e
|
a798a5518f1f8933be495ead6a2b913ffc956c22
|
refs/heads/main
| 2023-06-10T15:47:24.089862 | 2021-07-01T19:41:29 | 2021-07-01T19:41:29 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6707161068916321,
"alphanum_fraction": 0.6732736825942993,
"avg_line_length": 39.11538314819336,
"blob_id": "0e2661f5b46913fa11100035b54cc117c019c2e6",
"content_id": "5936445741c9ae9105cd773848ca614ec1e876ea",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3128,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 78,
"path": "/news_api/news/api/serializers.py",
"repo_name": "bobojon97/Django-rest-Framework",
"src_encoding": "UTF-8",
"text": "from datetime import datetime\nfrom django.utils.timesince import timesince\nfrom rest_framework import serializers\nfrom news.models import Article, Journalist\n\n\nclass ArticleSerializer(serializers.ModelSerializer):\n time_since_publication = serializers.SerializerMethodField()\n # author = JournalistSerializer(read_only=True)\n # author = serializers.StringRelatedField()\n\n class Meta:\n model = Article\n exclude = (\"id\",)\n # fields = \"__all__\"\n\n def get_time_since_publication(self, object):\n publication_date = object.publication_date\n now = datetime.now()\n time_delta = timesince(publication_date, now)\n return time_delta\n\n def validate(self, attrs):\n if attrs['title'] == attrs['description']:\n raise serializers.ValidationError('Title and description')\n return attrs\n\n def validate_title(self, value):\n if len(value) < 30:\n raise serializers.ValidationError('Title menshe 30')\n return value\n\n\nclass JournalistSerializer(serializers.ModelSerializer):\n articles = serializers.HyperlinkedRelatedField(read_only = True, many = True, view_name='articles-detail')\n # articles = ArticleSerializer(many = True, read_only=True)\n class Meta:\n model = Journalist\n fields = \"__all__\"\n\n# class ArticleSerializer(serializers.Serializer):\n# id = serializers.IntegerField()\n# author = serializers.CharField()\n# title = serializers.CharField()\n# description = serializers.CharField()\n# body = serializers.CharField()\n# location = serializers.CharField()\n# publication_date = serializers.DateField()\n# active = serializers.BooleanField()\n# created_at = serializers.DateTimeField(read_only=True)\n# updated_at = serializers.DateTimeField(read_only=True)\n\n# def create(self, validated_date):\n# print(validated_date)\n# return Article.objects.create(**validated_date)\n\n# def update(self, instance, validated_data):\n# instance.author = validated_data.get('author', instance.author)\n# instance.title = validated_data.get('title', instance.title)\n# 
instance.description = validated_data.get('description', instance.description)\n# instance.body = validated_data.get('body', instance.body)\n# instance.location = validated_data.get('location', instance.location)\n# instance.publication_date = validated_data.get('publication_date', instance.publication_date)\n# instance.active = validated_data.get('active', instance.active)\n# instance.created_at = validated_data.get('created_at', instance.created_at)\n# instance.updated_at = validated_data.get('updated_at', instance.updated_at)\n# instance.save()\n# return instance\n\n# def validate(self, attrs):\n# if attrs['title'] == attrs['description']:\n# raise serializers.ValidationError('Title and description')\n# return attrs\n\n# def validate_title(self, value):\n# if len(value) < 60:\n# raise serializers.ValidationError('Title menshe 60')\n# return value"
},
{
"alpha_fraction": 0.7457627058029175,
"alphanum_fraction": 0.7457627058029175,
"avg_line_length": 49,
"blob_id": "9001e130fa3cf88352e6023bdd1c9eb11615d064",
"content_id": "131468b018b1da279b0c35dda741a3b3e0f08aa7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 649,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 13,
"path": "/news_api/news/api/urls.py",
"repo_name": "bobojon97/Django-rest-Framework",
"src_encoding": "UTF-8",
"text": "from django.urls import path\nfrom news.api.views import ArticleDetailAPIView, ArticleListCreateAPIView, JournlistListCreateAPIView\n\n# from news.api.views import article_list_create_api_view, article_detail_api_view\n\nurlpatterns = [\n path('articles/', ArticleListCreateAPIView.as_view(), name='articles-list'),\n path(\"articles/<int:pk>\", ArticleDetailAPIView.as_view(), name='articles-detail'),\n path('journalists/', JournlistListCreateAPIView.as_view(), name='journalist-list'),\n\n # path('articles/', article_list_create_api_view, name='articles_list'),\n # path(\"articles/<int:pk>\", article_detail_api_view, name='articles_detail'),\n]"
}
] | 2 |
jlan84/COVID-Vulnerability-Assessment-for-Colorado-Industries
|
https://github.com/jlan84/COVID-Vulnerability-Assessment-for-Colorado-Industries
|
01a7e9c71bf84da3949851cb2e28425385268be4
|
9a57bb4dd44aefb33bd5cff8e6ef72aedb962995
|
01dd2b0162d73e7551ad71c15508d32647be8bb0
|
refs/heads/master
| 2022-12-25T09:59:52.804713 | 2020-10-04T20:01:01 | 2020-10-04T20:01:01 | 281,162,743 | 0 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5883402824401855,
"alphanum_fraction": 0.6049970388412476,
"avg_line_length": 35.16128921508789,
"blob_id": "ccaf84a2af4912e988f0e0352b7937f93b56daa2",
"content_id": "7c2a07a15af57cd3f039ddbc0b735c5287da48e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3362,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 93,
"path": "/src/plots.py",
"repo_name": "jlan84/COVID-Vulnerability-Assessment-for-Colorado-Industries",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom df_functions import *\n\nplt.style.use('bmh')\n\n\n\nexecute = True\nif __name__ == \"__main__\" and execute:\n\n df = pd.read_csv('../data/good_clean_data.csv')\n\n county_infection_group = df.groupby('CTYNAME').sum().copy().reset_index()\n county_infection_group.sort_values('Total Staff Infected', ascending=False,\n inplace=True)\n cty_inf_group_non_zero = county_infection_group[county_infection_group['Total Staff Infected'] != 0]\n\n county_hispanic_group = df.groupby('CTYNAME').mean().copy().reset_index()\n county_hispanic_group.sort_values('Hispanic Population Percentage', \n ascending=False, inplace=True)\n cty_hisp_top_34 = county_hispanic_group.iloc[:34, :]\n\n industry_group = df.groupby(['Industry']).mean().copy().reset_index()\n industry_group.sort_values('Median Salary', ascending=False, inplace=True)\n\n color = make_colors('darkviolet', 34)\n lst1 = [10,15,5,19,11,17,21,7,26,33]\n for i in lst1:\n color[i] = 'red'\n\n fig, ax = plt.subplots(figsize=(12,8))\n make_bar_plot(ax, cty_hisp_top_34, 'Hispanic Population Percentage',\n 'CTYNAME', 'Hispanic Poulation % by County', color=color)\n \n ax.set_ylabel('Percentage')\n \n fig, ax = plt.subplots(figsize=(12,8))\n make_sns_bar_plot(ax, cty_inf_group_non_zero, 'Total Staff Infected',\n 'CTYNAME', 'Total Staff Infected by County',\n color='firebrick')\n \n color = make_colors('green', 12)\n lst1 = [6,4,9,11,2]\n for i in lst1:\n color[i] = 'red'\n \n fig, ax = plt.subplots(figsize=(12,10))\n make_bar_plot(ax, industry_group, 'Median Salary', 'Industry','Median Salary by Industry',\n color=color)\n\n ax.set_ylabel('Yearly Income $')\n plt.tight_layout()\n plt.show()\n\n\n df = pd.read_csv('../data/His_Pop_Infection_Merge.csv')\n\n fig, ax = plt.subplots()\n make_scatter_plot(ax, df['Hispanic Population Percentage'], \n df['Total Staff Infected'], 'Infected vs Hispanic Population %',\n 
'blueviolet')\n\n ax.set_xlabel('Hispanic Population %', fontsize=14)\n ax.set_ylabel('Staff Infected', fontsize=14)\n \n \n hispanic_staff_infected = pd.read_csv('../data/Hispanic_Staff_Infected_by_Industry.csv')\n hispanic_staff_infected.sort_values('Total Staff Infected', ascending=False, inplace=True)\n\n fig, ax = plt.subplots(figsize=(12,8))\n\n make_sns_bar_plot(ax, hispanic_staff_infected, 'Total Staff Infected',\n 'Industry', 'Staff Infected by Industry',\n color='firebrick', label='Total Staff Infected')\n\n make_sns_bar_plot(ax, hispanic_staff_infected, 'Hispanic Staff Infected',\n 'Industry', 'Staff Infected by Industry',\n color='blue', label='Hispanic Staff Infected')\n\n ax.set_ylabel('# of Infected')\n plt.tight_layout()\n ax.legend()\n plt.show() \n\n\n #Pearson Correlation\n\n HSPInfection = merge_hispPop_Infection[['CTYNAME', 'Total Staff Infected', \n 'Hispanic Population Percentage']]\n infected_hispPop_pearson_corr = HSPInfection.corr()"
},
{
"alpha_fraction": 0.6291505694389343,
"alphanum_fraction": 0.6353281736373901,
"avg_line_length": 28.4375,
"blob_id": "e7edd6e886076cfdfb195b8485bd5c0d0375e6f7",
"content_id": "64c24e03447d63375fcbc0a9ebed72a707095741",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5180,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 176,
"path": "/src/df_functions.py",
"repo_name": "jlan84/COVID-Vulnerability-Assessment-for-Colorado-Industries",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n\n\ndef values_to_column(df, lst, value, read_col_num, write_col_num):\n \"\"\"\n Adds values to new column based on the old column\n \n Parameters:\n \n df: (dataframe)\n lst: (list) of values to check in the old column\n value = value in new column for list of values\n read_col_num: (int) column number being read\n write_col_num: (int) column number being written to \n \n Returns:\n Adds values in the specified column based on the parameters\n\n \"\"\"\n for i in range(df.shape[0]):\n if df.iloc[i, read_col_num] in lst:\n df.iloc[i, write_col_num] = value \n\ndef create_df(row_list, col_list):\n \"\"\"\n Creates a new df from a list of lists\n\n Parameters:\n\n row_lists: (list) list of lists containing the rows for each column\n col_list: (list) contains the names of each column\n\n Returns:\n\n A new df with row_list as rows and col_list as column names\n \"\"\"\n dic = {}\n for i in range(len(row_list)):\n dic[col_list[i]] = row_list[i]\n return pd.DataFrame.from_dict(dic, orient='index').transpose()\n\ndef add_column(df, dic, reference_col, new_col, ref_loc, new_loc):\n \"\"\"\n Adds a new column with the specified values from the dictionary\n based of the key from the reference column\n\n Parameters:\n df: DataFrame\n dic: Dictionary that contains the key value pairs to be referenced and added\n reference_col: String that contains the name of the refernce column\n new_col: String with the name of the new column to be added\n ref_loc: Integer with the index location of the reference column\n new_loc: Integer with the index locaiton of the new column\n\n Returns:\n A modified dictionary with values in the new column based of the keys in the refernce\n column\n \"\"\"\n df.insert(loc=new_loc, column=new_col, value='')\n for i in range(len(df[reference_col])):\n df.iloc[i , new_loc] = dic[df.iloc[i, ref_loc]]\n\ndef convert_to_dict(df, loc1, loc2):\n 
\"\"\"\n Converts two colums in a dataframe to a key value pair in a dictionary\n\n Parameters:\n df: DataFrame\n loc1: Integer index for the key column\n loc2: Integer index for the value column\n\n Returns:\n Dictionary with keys from col1 and values from col2\n \"\"\"\n \n dic = {}\n for i in range(len(df.iloc[: , loc1])):\n dic[df.iloc[i, loc1]] = df.iloc[i, loc2]\n return dic\n\ndef autolabel(rects):\n \"\"\"Attach a text label above each bar in *rects*, displaying its height.\"\"\"\n for rect in rects:\n height = rect.get_height()\n ax.annotate('{}'.format(height),\n xy=(rect.get_x() + rect.get_width() / 2, height),\n xytext=(0, 3), # 3 points vertical offset\n textcoords=\"offset points\",\n ha='center', va='bottom')\n\ndef make_colors(color, n):\n \"\"\"\n Returns a list of length n with the specifed color\n\n Parameters:\n color = str that you want as a color\n n = length of the list\n\n Returns:\n List of length n with color assigned to each index\n \"\"\"\n colors = []\n for i in range(n):\n colors.append(color)\n return colors\n\ndef make_sns_bar_plot(ax, df, col_name, labels, title, color='blue', label=None):\n \"\"\"\n Creates a seaborn bar plot\n\n Parameters:\n ax = axes for the plot\n df = dataframe for the data\n col_name = str name of the dataframe column for the heights\n labels = str name of the dataframe column for the x axis labels\n color = color for the bars in the graph\n label = label the graph\n\n Returns:\n Bar plot\n \"\"\"\n \n plt.rcParams['axes.labelsize'] = 2\n tick_loc = np.arange(len(df[labels]))\n xlabel = df.loc[:, labels]\n sns.barplot(tick_loc, df[col_name], color=color, ax=ax, label=label)\n ax.set_xticks(ticks=tick_loc)\n ax.set_xticklabels([str(x) for x in xlabel], rotation= 45, fontsize=14, \n horizontalalignment='right')\n ax.set_title(title, fontsize=20)\n\ndef make_bar_plot(ax, df, col_name, labels, title, color='blue', label=None):\n \"\"\"\n Creates a matplotlib bar plot\n\n Parameters:\n ax = axes for the plot\n df 
= dataframe for the data\n col_name = str name of the dataframe column for the heights\n labels = str name of the dataframe column for the x axis labels\n color = color for the bars in the graph\n label = label the graph\n\n Returns:\n Bar plot\n \"\"\"\n tick_loc = np.arange(len(df[labels]))\n xlabel = df.loc[:, labels]\n ax.bar(tick_loc, df[col_name], color=color, label=label)\n ax.set_xticks(ticks=tick_loc)\n ax.set_xticklabels([str(x) for x in xlabel], rotation= 80, fontsize=14)\n ax.set_title(title, fontsize=20)\n\ndef make_scatter_plot(ax, x, y, title, color):\n \"\"\"\n Creates a matplotlib scatter plot\n\n Parameters:\n ax = axes for the plot\n x = array for the x values\n y = array for the y values\n color = color for the points in the graph\n\n Returns:\n scatter plot\n \"\"\"\n ax.scatter(x=x, y=y, s=100, c=color)\n ax.set_title(title, fontsize=20)\n\n\nif __name__ == '__main__':\n pass"
},
{
"alpha_fraction": 0.6665240526199341,
"alphanum_fraction": 0.6891151666641235,
"avg_line_length": 93.99186706542969,
"blob_id": "dc86d6074715fb7c3a1b10ecb68c102a3a05dd79",
"content_id": "b6f2015a2577cec6acc49341505585811012b48a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 11686,
"license_type": "no_license",
"max_line_length": 1543,
"num_lines": 123,
"path": "/README.md",
"repo_name": "jlan84/COVID-Vulnerability-Assessment-for-Colorado-Industries",
"src_encoding": "UTF-8",
"text": "\n<p align=\"center\">\n<img src=\"Figures/CFlag.png\" height=\"200\" width=\"400\" />\n</p>\n\n# COVID Vulnerability Assessment for Colorado Industries\n\n## Introduction\n\nColorado is a highly diversified state, from its population to its environments. With a total population of 5.78 million, Hispanic people represent 24% of Coloradans, and this number is projected to grow to 35% by the year 2050 (based on the 2018 Census; CO Pop Forecasts final.xlsx is included in the data folder). The COVID 19 pandemic has had a dramatic impact on all Coloradans, but the impact on the Hispanic community has been much higher than predicted. A staggering 36% of the people that have been infected by COVID 19 are of Hispanic descent; this number should only be around 24% based on the Hispanic proportion of the population. Why is this number so high? There are many factors that may affect this number; this study aims to determine whether the workplace environment is contributing to the infection volume in the Hispanic community.\n\nThe questions that this study will focus on are:\n\n * Which industries show the highest incidents of outbreaks in Colorado?\n * What is the primary ethnicity employed by these industries?\n * What areas of Colorado have the highest number of outbreaks?\n\n## Raw Data\n\nThe data for this project came from three main sources. The data on outbreaks by industry and ethnicity by county came from the Colorado Department of Public Health and Environment (https://covid19.colorado.gov/). The outbreak data is categorized by industry type and is updated on a weekly basis as new industries report outbreaks. The median salary by industry data was taken from Statistical Atlas (https://statisticalatlas.com/state/Colorado/Industries). Worker ethnicity by industry was unavailable for Colorado, so the US data was used in its place. This data came from the US Bureau of Labor and Statistics (USBLS) (https://www.bls.gov/cps/cpsaat18.htm). 
All census data is based on the last census performed in 2018 and current population projections and is included in the data folder. \n\n\n### Data Cleaning Workflow\n\n1. The outbreak data contained 120 different industries, which needed to be funneled into the 12 industries used by the USBLS, shown below. The breakout data also contained the county names from which the breakouts occurred. I stripped the white space and replaced a few that were incorrect.\n\n|    | Industry                  |\n|---:|:--------------------------|\n|  0 | Agg/Fish/Forestry/Hunting |\n|  1 | Construction              |\n|  2 | Education/Health Services |\n|  3 | Financial Activities      |\n|  4 | Government                |\n|  5 | Hospitality               |\n|  6 | Manufacturing             |\n|  7 | Mining/OilandGas          |\n|  8 | Other Services            |\n|  9 | Prof Serv and Mgmt        |\n| 10 | Transp/Warehouse          |\n| 11 | Wholesale/Retail          |\n\n2. I then added \"Median Salaries by Industry\" to the primary table.\n\n3. I then used these county names to merge the Hispanic population by county table into the primary table.\n\n4. I used the designated Industry names to rename the industries from the USBLS in order to merge the tables on='Industry'.\n\n\n5. I used some grouping methods on the DataFrames to reduce this table down to the one shown below. 
Because the ethnicity of the staff that were infected was not disclosed, I used the percentage of Hispanics employed by the industry to calculate the \"Hispanic Staff Infected\" column.\n\n|    | Industry                  |   Total Staff Infected |   Hispanic Percentage |   Hispanic Staff Infected |\n|---:|:--------------------------|-----------------------:|----------------------:|--------------------------:|\n|  0 | Agg/Fish/Forestry/Hunting |                     47 |                 27.5  |                        13 |\n|  1 | Construction              |                     44 |                 30.4  |                        13 |\n|  2 | Education/Health Services |                   2354 |                 13.5  |                       318 |\n|  3 | Financial Activities      |                      0 |                 12.9  |                         0 |\n|  4 | Government                |                    115 |                 12.5  |                        14 |\n|  5 | Hospitality               |                    139 |                 24    |                        33 |\n|  6 | Manufacturing             |                    483 |                 16.8  |                        81 |\n|  7 | Mining/OilandGas          |                     15 |                 20.1  |                         3 |\n|  8 | Other Services            |                      3 |                 19.9  |                         1 |\n|  9 | Prof Serv and Mgmt        |                     98 |                 16    |                        16 |\n| 10 | Transp/Warehouse          |                     94 |                 19.45 |                        18 |\n| 11 | Wholesale/Retail          |                    154 |                 18.1  |                        28 |\n\n6. I grouped and sorted the main table by \"Total Staff Infected\" and then merged this with the Hispanic population table on=\"County Name\" in order to compare \"Total Staff Infected\" with the \"Hispanic Population Percentage\".\n\n\n## EDA Workflow\n\n1. This study is meant to determine if there is a correlation between the Hispanic population's work environment and the abnormally high COVID19 infection rate in the Hispanic Population. So to start things off, I plotted the Total Staff Infections by Industry and the Hispanic Staff Infections. The graph below depicts the results. The highest number of staff infections occurred in the Education and Health Service Industry. This industry also includes Social Assistance. Because the ethnicity of the infected staff was not disclosed, the amount of Hispanic Staff Infected was calculated using the Total Infected multiplied by the USBLS data for Hispanic % in each of these industries.\n\n<p align=\"center\">\n<img src=\"Figures/Staff_Infected_by_Industry.png\" height=\"400\" width=\"600\" />\n</p>\n\n2. 
Next I looked at the Total Staff Infections by County, sorting them from largest to smallest and removing the counties that have not reported outbreaks. The graph clearly shows that most of the infections are around major metropolitan areas. \n\n<p align=\"center\">\n<img src=\"Figures/staff_covid_by_county.png\" height=\"400\" width=\"600\" />\n</p>\n\n<strong>Below is a county map of Colorado for reference</strong>\n<p align=\"center\">\n<img src=\"Figures/CO_County_Map.png\" height=\"400\" width=\"600\" />\n</p>\nsource: https://coruralhealth.org/resources/maps-resource\n\n\n3. I then took a look at the Hispanic Population % by County shown in the graph below. I highlighted the top 10 Counties with the highest infection rate in red. The counties with the highest Hispanic population percentages are on the lower end of the Total Staff Infected by County. \n\n<p align=\"center\">\n<img src=\"Figures/top_34_hisp_pop_by_cty.png\" height=\"400\" width=\"600\" />\n</p>\n\n4. In order to get a better visual comparison of Hispanic Population and Staff Infected by County, I used a scatter plot (shown below). There does not seem to be a strong correlation between the two.\n<p align=\"center\">\n<img src=\"Figures/infected_vs_hispanic_pop.png\" height=\"400\" width=\"600\" />\n</p>\n5. To further analyze the correlation for Staff Infected by County vs Hispanic Population, I ran a Pearson correlation between the two sets of data. The Pearson correlation coefficient came out to be 0.1128, which implies that there is very little correlation between the Total Staff Infected and the Hispanic Population Percentage.\n\n|                      |   Total Staff Infected |   Perc_Hispanic_Pop |\n|:---------------------|-----------------------:|--------------------:|\n| Total Staff Infected |                      1 |            0.112777 |\n| Perc_Hispanic_Pop    |               0.112777 |                   1 |\n\n6. Next I took a look at the Median Income of each Industry. 
The self sufficiency standard outlines the necessary annual income families of various sizes need in order to avoid having to accept outside assistance. For each county in Colorado there is a set value for this income; because most of the outbreaks occurred in and around Denver County, we will use the values of $61,553.10 for a single parent and one infant and $88,436.04 for two adults, one infant and one pre-schooler. The plot below shows the Median Salary for each Industry in this study, and the bars highlighted in red are the Industries with the highest amount of staff reported to have contracted COVID19. All of the top 5 fall below the self sufficiency standard for a single parent. \n\n<p align=\"center\">\n<img src=\"Figures/Median_Salary_by_Industry.png\" height=\"400\" width=\"600\" />\n</p>\n\n\n## Conclusion\n\nThis study was meant to determine if the Hispanic population in Colorado was more susceptible to contracting COVID19 in their work environments. The breakout data did not include the ethnicity of the staff that were infected. The ethnicity employed by the industries came from US data and not specifically Colorado. We made the assumption that the ethnicity of the employees for the Industries in Colorado reflects that of the entire US and that the amount of Hispanic Staff infected with COVID19 is the same percentage as those that work in each specific Industry. Therefore, currently we are unable to state with absolute confidence that the Hispanic population is or is not more susceptible to contracting COVID19 at work. The current data shows that they are not more susceptible and are just as likely to contract COVID19 as any other ethnicity. Efforts are being made to include ethnicity when reporting outbreaks by industry and to obtain Colorado ethnicity percentages by industry. With these two pieces of information, the conclusion may be modified. That being said, there are other conclusions that we can draw from the study. 
I will preface the rest of my conclusions with the fact that some of these industries were shut down, almost entirely, during the quarantine period of 2 months, during which outbreak data was being collected. This means that there may be a shift in which Industries are showing the highest numbers of Staff COVID19 cases. Below are the rest of the conclusions that I have drawn from this study.\n\n1. The current data shows that the most susceptible employees are those working in Education and Health Services, which also includes Social Assistance. The second highest Staff cases came from the Manufacturing Industry. This seems logical due to the nature of these Industries, as the employees may not be able to take all of the necessary precautions to perform their job. Many of the sub industries categorized under these main categories were considered essential and therefore were not included in the mandatory shutdown.\n\n2. The counties surrounding major metropolitan areas have the highest number of outbreaks and infected staff. This also seems logical due to the number of people working in these areas. \n\n3. As to why employees working in these Industries are more susceptible to contracting COVID19, it is difficult to say due to the number of confounding variables. One thing to highlight, for which we did have data, was the Median Salary of the employees working in these industries. I mentioned previously that the employees from the top 5 Industries with the highest cases had Median Salaries below the self sufficiency standard, which may reflect their ability to properly protect themselves. \n\nThis study will be an ongoing effort to improve upon the conclusions that have been drawn. "
},
{
"alpha_fraction": 0.770354151725769,
"alphanum_fraction": 0.7904344797134399,
"avg_line_length": 40.45454406738281,
"blob_id": "5872e1b17d4036ffa1aeb56bbe6994647147657b",
"content_id": "a5ceffcf4245fad279c7a727d82d5ac419448ad1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2739,
"license_type": "no_license",
"max_line_length": 301,
"num_lines": 66,
"path": "/src/.ipynb_checkpoints/Justin-Capstone1-Proposal-checkpoint.md",
"repo_name": "jlan84/COVID-Vulnerability-Assessment-for-Colorado-Industries",
"src_encoding": "UTF-8",
"text": "# Question\n\nWhat industries show the highest COVID-19 outbreak incidents in Colorado, what areas are these industries in, and what are the primary ethnicities of the labor force for these industries? Is the work environment contributing to the spread of COVID-19?\n\nThe primary data sets that I will be focusing on come from the Colorado Department of Public Health and Environment. The datasets are updated weekly, so the latest data for the project will be coming from 7/15/2020. I will not limit myself to this data, but it will be the backbone of my analysis. \n\nI have performed some preliminary analysis on the dataset and the results are highlighted below.\n\n# Initial EDA\n\nThe industry categories for the data, labeled **'Setting type'** and **'If setting type is other, specify'**, total 19 and 110 respectively. These will need to be consolidated to approximately 14 categories, which will be defined by Sandra.\n\nThere are 37 Counties in the data set; the other 37 Counties in Colorado have not reported industrial outbreaks. 
\n\nBelow are two graphs I have generated comparing COVID-19 lab proven deaths in the industries and counties.\n\nThe other columns that can be investigated from the initial dataset are:\n\nSetting name\n\nInvestigation status\n\nDate Outbreak Resolved\n\nSetting type\n\nIf setting type is other, specify\n\nColorado county (exposure location)\n\nDate illnesses were determined to be an outbreak\n\nNumber of residents positive for COVID-19 (lab confirmed)\n\nNumber of residents with probable COVID-19 (NOT lab confirmed)\n\nNumber of COVID-19 deaths (NOT lab confirmed/probable)\n\nNumber of staff who are positive for COVID-19 (lab confirmed)\n\nNumber of staff with probable COVID-19 (NOT lab confirmed)\n\nNumber of COVID-19 staff deaths (lab confirmed/confirmed)\n\nNumber of COVID-19 staff deaths (NOT lab confirmed/probable)\n\nNumber of attendees who are positive for COVID-19 (lab confirmed)\n\nNumber of attendees with probable COVID-19 (NOT lab confirmed)\n\nNumber of COVID-19 attendee deaths (lab confirmed/confirmed)\n\nNumber of COVID-19 attendee deaths (NOT lab confirmed/probable)\n\n# MVP\n\nThe main objective of this project is to determine if workers in certain industries are more vulnerable to catching COVID-19, what the primary ethnicity employed by those industries is, and what factors would contribute to the transmissibility of COVID-19 in those higher risk industries.\n\n## MVP++\n\nA secondary goal is to take a deeper dive into these industries. I would like to take a look at wage/salary levels, exposure rates (with the public), risk aversion, etc. that may contribute to the spread of COVID-19 within the population that is employed by these industries. "
},
{
"alpha_fraction": 0.6008667945861816,
"alphanum_fraction": 0.6127334833145142,
"avg_line_length": 48.953609466552734,
"blob_id": "45daaca0b5733b5f4c539c9ea4ae72e29399d0cb",
"content_id": "84aab709f16535b2792dc3767b33e93ff405ebdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9691,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 194,
"path": "/src/data_filtration.py",
"repo_name": "jlan84/COVID-Vulnerability-Assessment-for-Colorado-Industries",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport numpy as np\nfrom df_functions import *\n\n\n\n\n\nexecute = True\nif __name__ == \"__main__\" and execute:\n \n # This section is the data cleaning and funneling performed, please\n # reference the README.md for more\n # information\n\n # Data Cleaning Workflow Step 1\n df = pd.read_csv('../data/Breakout7-22.csv')\n\n\n covidWeeklyDF = df.copy()\n covidWeeklyDF.insert(loc=5, column='Industry', value='')\n covidWeeklyDF.dropna(how='all',axis=1, inplace=True)\n covidWeeklyDF['Colorado county (exposure location)'].replace(\"Colorado\", \n \"Clear Creek\", inplace=True)\n covidWeeklyDF.rename(columns={'Colorado county (exposure location)':'CTYNAME'},\n inplace=True)\n covidWeeklyDF['CTYNAME'] = covidWeeklyDF['CTYNAME'].str.strip()\n \n AFFH = ['Farm/dairy', 'Agriculture', 'Gathering']\n\n mining_og = ['Coal Mine']\n\n utilities = []\n\n manufacturing = ['Wood Pallet Manufacturer', 'Industry', 'Manufacturing', \n 'Dairy Plant', 'Beverage Bottling Plant', 'Meat Processing',\n 'Manufacturing/Retail', 'Meat Packing Plant', 'Manufacturing ',\n 'Frozen food manufacturing', 'Food Manufacturer', \n 'Food Manufacturing', 'Factory', 'Food Packaging Industry',\n 'Slaughterhouse/Meat packing plant',\n 'Manufactured foods facility', 'Meat Processing Plant (Lamb)',\n 'Manufacturer (Hand Sanitizer)', 'Dog Food Plant',\n 'Potato Processing Plant', 'Poultry Processing Center', \n 'Commercial Bakery', 'Meat Processing Plant',\n 'Food manufacturing', 'Food Manufacturer ',\n 'Agricultural and Manufactured food facility']\n\n government = ['Prison/jail', 'Correctional facility', 'Community Corrections']\n\n construction = ['Construction Site', 'Construction', 'Construction site',\n 'construction site', 'Construction', 'House Painting Business',\n 'Tile/Construction', 'Construction company', \n 'Lumber/Construction ', 'Gutters']\n\n comminfo = []\n\n retail_wholesale = ['Grocery store', 'Retail', 'Store', \n 'Tile Company; Industry','Thrift store',\n 
'Home Improvement Retailer', 'Home improvement retailer',\n 'Traffic safety equipment supplier company','Car dealership ',\n 'Building materials supplier', 'Produce Wholesale Warehouse']\n\n transp_warehouse = ['Distribution', 'Distribution Center', 'Potato Warehouse', \n 'Factory/Warehouse', 'Food distributor',\n 'Warehouse Distribution',\n 'Produce Repack Warehouse/Distribution',\n 'Pipe distributor', 'Food Distribution', 'Warehouse',\n 'Mail distribution center']\n\n prof_serv_mgmt = ['Maintenance Services', 'Waste management', \n 'Electrical Contractor', 'Distribution/Marketing Service',\n 'Landscaping company', 'Office/indoor workspace', \n 'Bridge Tournament ', 'Cleaning Company', 'Fencing company',\n 'Landscaping', 'Environmental Laboratory', 'THC Laboratory', \n 'Provides parts and technical assistance for rebar processing machines',\n 'Airport operations/support','Laundry Services', \n 'Laundry Services', 'Steam Laundry',]\n\n fin_ins_rs = ['Bank']\n\n HC_SA = ['Child care center', 'Healthcare, assisted living residence', \n 'Healthcare, skilled nursing facility', 'Healthcare, combined care', \n 'Healthcare other', 'Healthcare, outpatient', \n 'Independent Living Facility', 'Group home', 'Healthcare, dialysis', \n 'Residential Care Facility', \n 'Personal Care Alternative (PCA) staffed apartment',\n 'Congregate shelter','OBGYN Clinic', 'Opioid Treatment Facility',\n 'Shelter', 'Adult group home', 'Rehab center', 'Inpatient rehabilitation',\n 'Hospice ', 'Assisted living and memory care', 'Dental Office',\n 'Behavioral health', 'Rehab and Senior Living', 'Homeless shelter',\n 'assisted living and memory care',\n 'Community correctional facility', 'Rehab ', \n 'Skilled nursing, continuing care retirement community',\n 'Youth shelter', 'Assisted living & independent living ',\n 'Inpatient psychiatry', 'Educational Program', 'University']\n\n hospitality = ['Restaurant fast food', 'Hotel/lodge/resort', \n 'Restaurant sit down', 'Restaurant other or unknown 
type', \n 'Beauty Salon', 'Camp', 'Pool/water park', 'Hot Spring/Spa',\n 'Bakery ', 'Bakery', 'Restaurant/Adult Entertainment',\n 'Ice Cream', 'Copper Mountain Employee Housing',\n 'Recreation Facility and Restaurant']\n\n other_service = ['Workplace, no store front, in homes']\n\n industries = ['Agg/Fish/Forestry/Hunting', 'Mining/OilandGas', 'Utilities',\n 'Manufacturing', 'Government', 'Construction', 'Comm/Info',\n 'Wholesale/Retail', 'Transp/Warehouse', 'Prof Serv and Mgmt',\n 'Financial Activities', 'Education/Health Services', \n 'Hospitality', 'Other Services']\n\n industry_lists = [AFFH, mining_og, utilities, manufacturing, government, construction, comminfo, \n retail_wholesale, transp_warehouse, prof_serv_mgmt, fin_ins_rs, HC_SA, \n hospitality, other_service]\n\n for i in range(len(industries)):\n values_to_column(covidWeeklyDF, industry_lists[i], industries[i], 3, 5)\n for i in range(len(industries)):\n values_to_column(covidWeeklyDF, industry_lists[i], industries[i], 4, 5)\n \n # Data Cleaning Workflow Step 2\n\n salaries_dic = {'Industry' : ['Agg/Fish/Forestry/Hunting', 'Mining/OilandGas',\n 'Utilities', 'Manufacturing', 'Government', 'Construction', \n 'Comm/Info', 'Wholesale/Retail', 'Transp/Warehouse',\n 'Prof Serv and Mgmt', 'Financial Activities',\n 'Education/Health Services', 'Hospitality','Other Services'],\n 'Median Salary': [32900,76300,66200,55000,58100,44600,66400,\n 35500,49300,75600,57300,45600,26600,36300]}\n\n sal_df = pd.DataFrame(salaries_dic)\n covidWeeklyDF = covidWeeklyDF.merge(sal_df, how='left', on='Industry')\n\n # Data Cleaning Workflow Step 3\n\n df2 = pd.read_csv('../data/wealth-and-health-by-county.csv', header=1, nrows=64)\n df2.dropna(how='all', axis=1, inplace=True)\n df2['Ratio_Hispanic_Pop'] = df2['Perc_Hispanic_Pop'].str.rstrip('%').astype('float') / 100.0\n\n hispanic_pop_by_county_df = df2[['CTYNAME', 'Ratio_Hispanic_Pop']].copy()\n hispanic_pop_by_county_df['Perc_Hispanic_Pop'] = 
hispanic_pop_by_county_df['Ratio_Hispanic_Pop']*100\n hispanic_pop_by_county_df.rename(columns={'Perc_Hispanic_Pop': \n 'Hispanic Population Percentage'}, inplace=True)\n hispanic_pop_by_county_df['CTYNAME'] = hispanic_pop_by_county_df['CTYNAME'].str.rstrip()\n\n covidWeeklyDF = covidWeeklyDF.merge(hispanic_pop_by_county_df, how='left', on='CTYNAME')\n \n covidWeeklyDF['Total Staff Infected'] = (covidWeeklyDF['Number of staff who are positive for COVID-19 (lab confirmed)'] +\n covidWeeklyDF['Number of staff with probable COVID-19 (NOT lab confirmed)'])\n\n covidWeeklyDF.to_csv('../data/good_clean_data.csv')\n\n\n # Data Cleaning Workflow Step 4\n\n df3 = pd.read_csv('../data/US_Workers.csv', header=3,)\n covidWeeklyDF_Reduced = covidWeeklyDF[['Industry', 'CTYNAME', 'Hispanic Population Percentage']]\n merge_US_CO = covidWeeklyDF_Reduced.merge(df3, how='left', on='Industry')\n\n merge_US_CO.to_csv('../data/US_CO_merger.csv')\n\n df4 = pd.read_csv('../data/US_CO_merger.csv')\n\n df4.rename(columns={'Hispanic\\nor Latino': 'Hispanic Percentage'}, inplace=True)\n\n employed_hispanics = df4[['Industry', 'Hispanic Percentage']].copy()\n\n employed_hispanics_industry_group = employed_hispanics.groupby('Industry').mean().copy().reset_index()\n\n\n # Data Cleaning Workflow Step 5\n\n Industry_Group = pd.read_csv('../data/good_clean_data.csv')\n Industry_Group = Industry_Group.groupby('Industry').sum().reset_index()\n\n Industry_Group = Industry_Group[['Industry', \"Total Staff Infected\"]]\n merger = Industry_Group.merge(employed_hispanics_industry_group, how='inner', on='Industry')\n merger['Hispanic Staff Infected'] = merger['Total Staff Infected'] * merger['Hispanic Percentage']/100\n \n merger['Hispanic Staff Infected'] = merger['Hispanic Staff Infected'].round(0)\n\n merger.to_csv('../data/Hispanic_Staff_Infected_by_Industry.csv')\n\n # Data Cleaning Workflow Step 6\n\n county_infection_group = covidWeeklyDF.groupby('CTYNAME').sum().copy().reset_index()\n 
county_infection_group.sort_values('Total Staff Infected', ascending=False, inplace=True)\n county_infection_group_filtered = county_infection_group[['CTYNAME',\n 'Total Staff Infected']]\n\n merge_hispPop_Infection = county_infection_group_filtered.merge(hispanic_pop_by_county_df, \n how='inner', on='CTYNAME')\n\n merge_hispPop_Infection.to_csv('../data/His_Pop_Infection_Merge.csv')\n"
}
] | 5 |
jmcduffie32/oneHackLondon
|
https://github.com/jmcduffie32/oneHackLondon
|
51d4bb280f8b53cb8f11d6fd42e1c06c4ab39f35
|
3362927aa9a62c037e73586fff7949d9a493a6d5
|
6ac7a65556fca191978b3dc960910bee210f20ed
|
refs/heads/master
| 2020-04-08T22:09:06.712358 | 2018-11-30T11:41:06 | 2018-11-30T11:41:06 | 159,774,239 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.37045928835868835,
"alphanum_fraction": 0.3828810155391693,
"avg_line_length": 34.57251739501953,
"blob_id": "ebb3ea79455ca91c72bdfd9f610111c1119c5448",
"content_id": "a1cfa910e904dba701f1e4945e03d1bb2e7d0df7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9580,
"license_type": "no_license",
"max_line_length": 148,
"num_lines": 262,
"path": "/lambda/lambda/rest_api/src/lambda_function.py",
"repo_name": "jmcduffie32/oneHackLondon",
"src_encoding": "UTF-8",
"text": "import json\r\nimport boto3\r\nimport os\r\nimport requests\r\n\r\n\r\ndef handler(event, context):\r\n dynamodb = boto3.resource('dynamodb')\r\n global fromFbId\r\n fromFbId = \"209806693253652\"\r\n global userTable\r\n userTable = dynamodb.Table(os.environ['message_table'])\r\n if event['httpMethod'] == 'POST':\r\n if event['body']:\r\n print(event['body'])\r\n body = json.loads(event['body'])\r\n\r\n if body.get('message_uuid'):\r\n print('Received an outbound message event!')\r\n\r\n if body.get('msisdn'):\r\n print('Received an inbound text message!')\r\n item = findUser(number=str(body['msisdn']))\r\n user = item['user']\r\n message = body['text']\r\n item['message'] = message\r\n print(f\"New item: {item}\")\r\n userTable.put_item(\r\n Item=item\r\n )\r\n\r\n # Get everyone in the table that is not the person that sent this message\r\n response = userTable.scan(\r\n FilterExpression=\"NOT contains(#user_alias, :u)\",\r\n ExpressionAttributeValues={\r\n \":u\": user\r\n },\r\n ExpressionAttributeNames= {\r\n \"#user_alias\": \"user\"\r\n }\r\n )\r\n for item in response['Items']:\r\n userNumber = item.get('number')\r\n userFbId = item.get('fbid')\r\n print(f\"Sending a message to: {item['user']}\")\r\n if item['user'] == \"john\":\r\n fromNumber = \"12013451218\"\r\n else:\r\n fromNumber = \"447418342701\"\r\n payload = {\r\n \"template\":\"failover\",\r\n \"workflow\": [\r\n {\r\n \"from\": { \"type\": \"messenger\", \"id\": f\"{fromFbId}\" },\r\n \"to\": { \"type\": \"messenger\", \"id\": f\"{userFbId}\"},\r\n \"message\": {\r\n \"content\": {\r\n \"type\": \"text\",\r\n \"text\": f\"{message}\"\r\n }\r\n },\r\n \"failover\":{\r\n \"expiry_time\": 600,\r\n \"condition_status\": \"delivered\"\r\n }\r\n },\r\n {\r\n \"from\": {\"type\": \"sms\", \"number\": f\"{fromNumber}\"},\r\n \"to\": { \"type\": \"sms\", \"number\": f\"{userNumber}\"},\r\n \"message\": {\r\n \"content\": {\r\n \"type\": \"text\",\r\n \"text\": f\"{message}\"\r\n }\r\n }\r\n 
}\r\n ]\r\n }\r\n headers = {'Authorization': f\"Bearer {os.environ['JWT']}\", 'Content-Type': 'application/json', 'Accept': 'application/json'}\r\n \r\n response = requests.post('https://api.nexmo.com/v0.1/dispatch',\r\n json=payload, headers=headers)\r\n\r\n print (response.text)\r\n return {\r\n 'statusCode': 200\r\n }\r\n\r\n if body.get('to'):\r\n if body['to'].get('type') == \"messenger\":\r\n userFbId = body['from']['id']\r\n print (f\"Received an inbound Facebook Messenger message from {userFbId}\")\r\n message = body['message']['content']['text']\r\n item = findUser(fbid=str(userFbId))\r\n user = item['user']\r\n print (f\"User is: {user}\")\r\n\r\n # Upload new item to DynamoDB with new message\r\n item['message'] = message\r\n print(f\"New item: {item}\")\r\n userTable.put_item(\r\n Item=item\r\n )\r\n # Get everyone in the table that is not the person that sent this message\r\n response = userTable.scan(\r\n FilterExpression=\"NOT contains(#user_alias, :u)\",\r\n ExpressionAttributeValues={\r\n \":u\": user\r\n },\r\n ExpressionAttributeNames= {\r\n \"#user_alias\": \"user\"\r\n }\r\n )\r\n for item in response['Items']:\r\n userNumber = item.get('number')\r\n userFbId = item.get('fbid')\r\n print(f\"Sending a message to: {item['user']}\")\r\n if item['user'] == \"john\":\r\n fromNumber = \"12013451218\"\r\n else:\r\n fromNumber = \"447418342701\"\r\n payload = {\r\n \"template\":\"failover\",\r\n \"workflow\": [\r\n {\r\n \"from\": { \"type\": \"messenger\", \"id\": f\"{fromFbId}\" },\r\n \"to\": { \"type\": \"messenger\", \"id\": f\"{userFbId}\"},\r\n \"message\": {\r\n \"content\": {\r\n \"type\": \"text\",\r\n \"text\": f\"{message}\"\r\n }\r\n },\r\n \"failover\":{\r\n \"expiry_time\": 600,\r\n \"condition_status\": \"delivered\"\r\n }\r\n },\r\n {\r\n \"from\": {\"type\": \"sms\", \"number\": f\"{fromNumber}\"},\r\n \"to\": { \"type\": \"sms\", \"number\": f\"{userNumber}\"},\r\n \"message\": {\r\n \"content\": {\r\n \"type\": \"text\",\r\n 
\"text\": f\"{message}\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n headers = {'Authorization': f\"Bearer {os.environ['JWT']}\", 'Content-Type': 'application/json', 'Accept': 'application/json'}\r\n \r\n response = requests.post('https://api.nexmo.com/v0.1/dispatch',\r\n json=payload, headers=headers)\r\n\r\n print (response.text)\r\n \r\n if body.get('sendMessage'):\r\n user = body['sendMessage']\r\n if user == \"john\":\r\n fromNumber = \"12013451218\"\r\n else:\r\n fromNumber = \"447418342701\"\r\n message = body['message']\r\n response = userTable.get_item(\r\n Key={\r\n 'user': user\r\n }\r\n )\r\n userNumber = response['Item'].get('number')\r\n userFbId = response['Item'].get('fbid')\r\n print(f\"Sending a message to: {body['sendMessage']}\")\r\n payload = {\r\n \"template\":\"failover\",\r\n \"workflow\": [\r\n {\r\n \"from\": { \"type\": \"messenger\", \"id\": f\"{fromFbId}\" },\r\n \"to\": { \"type\": \"messenger\", \"id\": f\"{userFbId}\" },\r\n \"message\": {\r\n \"content\": {\r\n \"type\": \"text\",\r\n \"text\": f\"{message}\"\r\n }\r\n },\r\n \"failover\":{\r\n \"expiry_time\": 600,\r\n \"condition_status\": \"delivered\"\r\n }\r\n },\r\n {\r\n \"from\": {\"type\": \"sms\", \"number\": f\"{fromNumber}\"},\r\n \"to\": { \"type\": \"sms\", \"number\": f\"{userNumber}\"},\r\n \"message\": {\r\n \"content\": {\r\n \"type\": \"text\",\r\n \"text\": f\"{message}\"\r\n }\r\n }\r\n }\r\n ]\r\n }\r\n headers = {'Authorization': f\"Bearer {os.environ['JWT']}\", 'Content-Type': 'application/json', 'Accept': 'application/json'}\r\n \r\n response = requests.post('https://api.nexmo.com/v0.1/dispatch',\r\n json=payload, headers=headers)\r\n\r\n print (response.text)\r\n\r\n if body.get('getUsers'):\r\n print(f\"Getting all users in the table\")\r\n response = userTable.scan()\r\n return {\r\n 'statusCode': 200,\r\n 'body': json.dumps({'users': response['Items']})\r\n }\r\n\r\n if body.get('pollUser'):\r\n print(f\"Getting the message for: {body['pollUser']}\")\r\n response 
= userTable.get_item(\r\n Key={\r\n 'user': body['pollUser']\r\n }\r\n )\r\n print(response)\r\n message = response['Item'].get('message')\r\n if message:\r\n del response['Item']['message']\r\n item = response['Item']\r\n userTable.put_item(\r\n Item=item\r\n )\r\n return {\r\n 'statusCode': 200,\r\n 'body': json.dumps({'message': message})\r\n }\r\n\r\n if not message:\r\n return {\r\n 'statusCode': 200,\r\n 'body': json.dumps({'message': False})\r\n }\r\n return {\r\n 'statusCode': 200\r\n }\r\n\r\n elif event['httpMethod'] == 'GET':\r\n return {\r\n 'statusCode': 200\r\n }\r\n\r\n\r\ndef findUser(number=None, fbid=None):\r\n response = userTable.scan()\r\n print(response['Items'])\r\n if number:\r\n for item in response['Items']:\r\n if item.get('number') == number:\r\n return item\r\n if fbid:\r\n for item in response['Items']:\r\n if item.get('fbid') == fbid:\r\n return item"
},
{
"alpha_fraction": 0.640625,
"alphanum_fraction": 0.6519886255264282,
"avg_line_length": 27.15999984741211,
"blob_id": "88f742b48ccae4fdfb2344ee3b1863daa117aff0",
"content_id": "a9ec181f28496c7cf1400e3b2e155864691563b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 704,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 25,
"path": "/index.js",
"repo_name": "jmcduffie32/oneHackLondon",
"src_encoding": "UTF-8",
"text": "const express = require('express');\nconst proxy = require('http-proxy-middleware');\nconst app = express();\nconst port = process.env.PORT || 3000;\n\napp.get('/openTok.env.json', function (req, res) {\n const apiKey = process.env.API_KEY;\n const sessionId = process.env.SESSION_ID;\n const token = process.env.TOKEN;\n\n // Respond to the client\n res.json({\n \"credentials\": {\n \"apiKey\": apiKey,\n \"sessionId\": sessionId,\n \"token\": token\n }\n });\n});\napp.use('/test', proxy({\n target: \"https://tfy254ekqf.execute-api.eu-west-1.amazonaws.com/test\",\n changeOrigin: true\n}));\napp.use(express.static('dist'));\napp.listen(port, () => console.log(`Example app listening on port ${port}!`));\n"
}
] | 2 |
keyvin/countdriver
|
https://github.com/keyvin/countdriver
|
46564815eff0abe860f97f381de3d82e9d25dac1
|
c7cce69866ab6b55bcdf9aaa563525b313a14af4
|
30214844bb5b8e70cf4b445b063f9f42e5e6a1a9
|
refs/heads/master
| 2021-01-10T08:19:10.603032 | 2015-10-14T16:59:13 | 2015-10-14T16:59:13 | 44,061,575 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5625,
"alphanum_fraction": 0.5625,
"avg_line_length": 9.666666984558105,
"blob_id": "8e92a453f91c3c5d94ec164ebb89b4b21c78b30a",
"content_id": "2c0fdd9a73f6aed6196e9038e1afd3606966a799",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 32,
"license_type": "no_license",
"max_line_length": 15,
"num_lines": 3,
"path": "/install.sh",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/bash\n\ncp ./time.py /\n"
},
{
"alpha_fraction": 0.6203703880310059,
"alphanum_fraction": 0.6203703880310059,
"avg_line_length": 14.285714149475098,
"blob_id": "91b9124801cfcf9ff5c98abf3b39d54cc5d76ba0",
"content_id": "7ba97a7aa8128c305266f022f17243dc3063d24c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 108,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 7,
"path": "/www/rancount.php",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "<html>\n<body>\n\nrandomly counting<?php $handle = popen(\"/var/www/launchrandom.sh\", \"r\") ?>\n\n</body>\n</html> \n"
},
{
"alpha_fraction": 0.5510203838348389,
"alphanum_fraction": 0.5510203838348389,
"avg_line_length": 17.25,
"blob_id": "621dac60c825c935abaa7f407f0f9de66be63a4a",
"content_id": "8a153070a524f14ffe3322ebd1886dfd8a90a72c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "PHP",
"length_bytes": 147,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 8,
"path": "/www/realcount.php",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "<html>\n<body>\n\nWelcome <?php $handle = popen(\"/var/www/launchcount.sh\" + $_GET[\"direction\"] + \" \" + $_GET[\"direction\"], \"r\") ?>\n\n\n</body>\n</html> \n"
},
{
"alpha_fraction": 0.5897436141967773,
"alphanum_fraction": 0.5982906222343445,
"avg_line_length": 18.16666603088379,
"blob_id": "c7f1d0e68fd1f05b8a2e15283aaa7f1db9c19557",
"content_id": "c9e2e5c845d347cb9b3595a03d394bb82fb6f326",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 117,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 6,
"path": "/www/#launchrandom.sh#",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n\n\n $(ps aux | grep timer.py | awk '{print $2}' | grep -ve \"grep\")\nsudo /root/time_repo/time.py random\n\n\n"
},
{
"alpha_fraction": 0.5714285969734192,
"alphanum_fraction": 0.5986394286155701,
"avg_line_length": 28.399999618530273,
"blob_id": "af59791f1ef6bf6ec496fe592cdce9fa819eb820",
"content_id": "1a8e1408a7a3d08d43367036903a2d37e65f9be1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 147,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 5,
"path": "/www/launchcount.sh",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nfor i in $(ps aux | grep time.py | grep -ve \"grep\" | awk '{print $2}'); do sudo kill -9 $i; done\n\n/root/time_repo/time.py count $1 $2 "
},
{
"alpha_fraction": 0.6078431606292725,
"alphanum_fraction": 0.6078431606292725,
"avg_line_length": 6.4285712242126465,
"blob_id": "8da7504d33ef16c097dd8fc651afd18172dc7237",
"content_id": "d8b5931d5b51321d9d6dd5759b665d7f4066e7b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 51,
"license_type": "no_license",
"max_line_length": 11,
"num_lines": 7,
"path": "/www/launchrandom.sh~",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nh = \"blah\"\nwhile :\ndo\n echo blah \ndone"
},
{
"alpha_fraction": 0.5933333039283752,
"alphanum_fraction": 0.6066666841506958,
"avg_line_length": 23.66666603088379,
"blob_id": "124661e6c0e1fd17c0a626e18eab25e3210d7fd5",
"content_id": "c8be5ce008cc6495dad68117d56e020298e194e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 150,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 6,
"path": "/www/launchrandom.sh",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n\n\nfor i in $(ps aux | grep time.py | grep -ve \"grep\" | awk '{print $2}'); do sudo kill -9 $i; done\nsudo /root/time_repo/time.py random\n\n\n"
},
{
"alpha_fraction": 0.5435718297958374,
"alphanum_fraction": 0.5614577531814575,
"avg_line_length": 32.64253234863281,
"blob_id": "f2f5f49b013152c019dbd2f1d13a507adbb71730",
"content_id": "4d2d35ced77ed89c4785394471f4d25bb83fa082",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7436,
"license_type": "no_license",
"max_line_length": 142,
"num_lines": 221,
"path": "/time.py",
"repo_name": "keyvin/countdriver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python3\n\nimport datetime\nimport random\nimport time\nimport RPi.GPIO as gpio\nimport sys\nimport os\n\n\n#todo - consider replacing the datetime objects in the code with timedeltas. Seems like it would make more sense.\n\n#function for switching direction for useless things hackathon clock... \ndef getDirection(direction=-1):\n # switch directions (positive if initial)\n if direction == -1:\n direction = 1\n else:\n direction = -1\n return direction\n\n\nclass countdowntimer():\n def __init__(self, count_to, direction=-1, stop_at_target=True)\n self.direction = direction\n self.time_set = direction\n self.disp_pin = [3, 5, 7, 11, 13, 15, 19, 21] \n self.bcd_pins = [29,31,33,35]\n self.plus_minus_pin = 37\n self.over_under_pin = 23\n self.timer_value = datetime.datetime()\n self.direction = datetime.timedelta(seconds = count_direction)\n self.strobe_delay = .000001\n self.stop_at_target = stop_at_target\n self.initpins()\n\n\n\n #We are counting down. Initialize timer to the time the count starts on.\n if direction == -1: \n self.timer_value = count_to\n\n #counting up. Initialize timer to zero. Great candidate for time..\n if count_direction == 1:\n our_time == datetime.datetime(time_set.day, 1, 1,1,0,0,0)\n\n\n def initpins(self):\n gpio.setmode(gpio.BOARD)\n gpio.setwarnings(False)\n\n outpins = self.disp_pin+self.bcd_pins+self.plus_minus_pin+self.over_under_pin\n for i in outpins:\n gpio.setup(i, gpio.OUT)\n \n for i in disp_pins:\n gpio.setup(i, gpio.OUT, initial=gpio.LOW)\n\n\n # convert timestring to bcd array.\n # input - \"01-14-2001 HH:MM:DD:SS:MS\n # output - [\"0101\", \"1001\"....]\n\n def makebcd(self):\n # set pin numbers\n # split time into numbers\n to_bcd = self.timer_value\n just_nums = ''.join(to_bcd.split()[1].split(':'))\n nums = list(\"00\"+just_nums)\n bcd = []\n for i in nums:\n tmp = format(int(i),\"#010b\")\n bcd.append(tmp[6:])\n return bcd\n\n\n#This is the function that interacts with the hardware. 
Output pins are defined here\n#Write bcd values to bcd bus, then enable segment. Repeat for each segment\n#This needs an addition for the +/- value\n def strobe(self, bcd):\n for segment in range(len(self.disp_pin)):\n for bcds in range(4):\n gpio.output(self.bcd_pins[bcds], (int(bcd[segment][bcds]) ^ 1))\n #print (\"segment %d, value %s, pin %d, value %c\",\n # %(segment, bcd[segment],bcd_pins[bcds], bcd[segment][bcds])) \n time.sleep(self.strobe_delay)\n gpio.output(self.disp_pin[segment], gpio.HIGH)\n time.sleep(self.strobe_delay)\n gpio.output(self.disp_pin[segment], gpio.LOW)\n time.sleep(self.strobe_delay)\n\n\n#divedisplay - outputs to clock via strobe for one full second before returning. \n\n\n#note - accuracy depends on returning to this function within the same second it left. \n# Otherwise, we skip a second. \n\n def drivedisplay(self): \n then = datetime.datetime.now()\n #Convert our time object to an array of bcd values\n bcd = makebcd(str(self.timer_value))\n while True:\n #strobe 7 segments or write time or w/e\n self.strobe(bcd)\n now = datetime.datetime.now()\n #check if 1 second has elapsed, return if so\n delta = (now - then)\n if delta.seconds == 1:\n break\n\n\n#A real countdown/Count Up function. General flow - \n\n\n #1. Enter main loop\n #2. call drivedisplay with our timer value\n #3. Check if we have reached our current time, set direction to 0 so counting stops\n def countmain(self):\n\n\n #infinite loop driving the clock\n while True:\n #debug print.\n print (self.timer_value)\n self.drivedisplay()\n self.timer_value = self.timer_value + self.direction\n\n if self.direction == -1:\n #drive the display for a single second, then count up\n #we have reached our target. 
stop counting and keep driving the clock.\n if self.timer_value.hour == 0 and self.timer_value.minute == 0 and self.timer_value.second == 0:\n #Need to flip +/- if false and start infinite count up\n if self.stop_at_target == True:\n self.direction = datetime.timedelta()\n else:\n #flip count and +/- sign\n pass\n #We are counting up\n if self.direction == 1:\n #stop counting. We have reached our time. \n if self.timer_value == time_set:\n if self.stop_at_target == True:\n self.direction = datetime.timedelta()\n else:\n #flip count and +/- sign, \n pass\n \n\n\n\n\n\nif __name__ == '__main__':\n print (sys.argv[0])\n if len(sys.argv) == 1:\n print (\"Usage: time.py random - random countdown\\n time.py direction DD:HH:MM:SS\\n direction is 1 for count up, -1 for countdown\")\n sys.exit(0)\n \n# if sys.argv[1] == 'random':\n# ranmain()\n \n if sys.argv[1] == 'count':\n direction = int(sys.argv[2])\n time = sys.argv[3] \n #time format \n #DD:HH:MM:SS\n #split time to prep for date time obj\n i_time = time.split(':')\n for i in range(len(i_time)):\n i_time[i] = int(i_time[i])\n\n time_obj = datetime.datetime(2000, 10, i_time[0], i_time[1], i_time[2], i_time[3])\n\n\n timer_obj = countdowntimer(time_obj, direction)\n timer_obj.maincount()\n\n# This function is leftover from the useless things hackathon \n#ef ranmain():\n # determine time values\n# random.seed(time.time())\n# our_time = datetime.datetime(2000,1,1,int(random.random()*24), int(random.random()*60), int(random.random()*60))\n # determine time arrow\n# time_dir = getDirection(-1)\n # determine length of timer\n# ticks = int(random.random()*15)+15\n# count = 0\n # loop forever\n# while True:\n # wait one second\n \n # count second\n# our_time = our_time + datetime.timedelta(0,time_dir)\n #print (str(our_time) + \" \" + str(count) + \" \" + str(ticks))\n# os.system('clear')\n # print (\"\\n\\n\\n\\t\", '{:02d}'.format(our_time.hour), ':', '{:02d}'.format(our_time.minute), ':', '{:02d}'.format(our_time.second), 
'\\n')\n \n# bcd = makebcd(str(our_time))\n#@ flag = True \n# delta = datetime.timedelta()\n# while flag:\n# then = datetime.datetime.now()\n# strobe(bcd)\n# now = datetime.datetime.now()\n# delta = delta + (now - then)\n# print(delta)\n# if delta.seconds == 1:\n# flag = False\n\n # iterate counter\n# count = count + 1\n # direction switch conditional\n# if count == ticks:\n # switch time arrow\n# time_dir = getDirection(time_dir)\n # reinitialize length of timer\n# ticks = int(random.random()*45)+15\n # reinitialize counter\n# count = 0 \n # initialize display and output pins\n\n"
}
] | 8 |
talbertmegan/musick
|
https://github.com/talbertmegan/musick
|
c41510474c55daadd9cb0a3f63aea01c69e945c8
|
d1ec42ba7ac98bebcf739bef1afeff3fa3a9b17d
|
a161bdb018119dc242ffa079b56d566f26826447
|
refs/heads/master
| 2020-03-25T18:39:56.294981 | 2018-08-08T17:42:58 | 2018-08-08T17:42:58 | 144,043,028 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6970979571342468,
"alphanum_fraction": 0.7545344829559326,
"avg_line_length": 62.61538314819336,
"blob_id": "3dd107ff63a3151a7c04e1024b7c18dc45a6b42d",
"content_id": "a6f74aaf3e73f623544ce478cef50911a7224dff",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1654,
"license_type": "permissive",
"max_line_length": 239,
"num_lines": 26,
"path": "/artist_service.py",
"repo_name": "talbertmegan/musick",
"src_encoding": "UTF-8",
"text": "artist_info = {\n\"Brendon Urie\":{\n'image':'Brendon Urie.jpg',\n'soundcloud_url': 'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/334177834&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',\n},\n\"Taylor Swift\":{\n'image':'Taylor Swift.jpg',\n'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/166985759&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',\n},\n\"ABBA\":{\n'image':'ABBA.jpg',\n'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/253187333&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',\n},\n\"Britney Spears\":{\n'image': 'Britney Spears.jpg',\n'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/247262764&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',\n},\n\"Christina Aguilera\":{\n'image':'Christina Aguilera.jpg',\n'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/99265635&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true',\n},\n\"Jessica Simpson\":{\n'image':'Jessica Simpson.jpg',\n'soundcloud_url':'https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/297578035&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true'\n}\n}\n"
}
] | 1 |
massiou/parselog
|
https://github.com/massiou/parselog
|
887d86f21f8197a983e046587f6f1f63bf0cdbe4
|
a0bd2cde430e5b109091e74a5c479a0f408f5198
|
acc3dc4d384e0a0b913b53381ad052afe7c100fc
|
refs/heads/master
| 2021-01-17T12:09:33.415224 | 2016-07-07T12:32:55 | 2016-07-07T12:32:55 | 31,270,656 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5526688694953918,
"alphanum_fraction": 0.5573925375938416,
"avg_line_length": 31.569231033325195,
"blob_id": "d845173c24ce5274bb52c4e56a92d0c9c2d23f8c",
"content_id": "d86391d9ad3c09b3eb6abc358d50fdbe0e0336af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2117,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 65,
"path": "/src/jenkins.py",
"repo_name": "massiou/parselog",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n\"\"\" Get jenkins logs \"\"\"\n\n__copyright__ = \"Copyright 2015, Parrot\"\n\n\n# Module imports\nfrom src.com import JENKINS_SERVER\nfrom src.com import download_tgz_file\nimport requests\n\nclass JenkinsJob(object):\n \"\"\"\n Build a jenkins job object\n \"\"\"\n def __init__(self, config_hw=None, config_sw=None,\n job_number='lastSuccessfulBuild',\n log_type='ckcm',\n url_results=None):\n \"\"\"\n Instantiate a JenkinsJob object\n Download tgz trace file\n Patch to avoid jenkins connection error (SSLv3 forced)\n \"\"\"\n \n self._ckcm_tgz_file_name = '/tmp/ckcm-%s-%s.tgz' % (config_hw, config_sw)\n self._octopylog_tgz_file_name = '/tmp/octopylog-%s-%s.tgz' % (config_hw, config_sw)\n self.server = JENKINS_SERVER\n base_url = self.server + 'job/nb_' + config_hw.upper() + \\\n '/CONFIG_HW=' + config_hw.upper() + \\\n ',CONFIG_SW=' + config_sw + ',label=' + config_hw.upper() + \\\n '/' + job_number\n if not url_results:\n self.results = \"{0}{1}\".format(base_url, '/artifact/results/')\n else:\n self.results = url_results\n\n self.ckcm_traces = self.results + 'ckcm.tgz'\n self.ctp_traces = self.results + 'pytestemb.tgz'\n\n #Download jenkins tgz files\n if log_type == 'ckcm':\n download_tgz_file(self.ckcm_traces, self._ckcm_tgz_file_name)\n elif log_type == 'octopylog':\n download_tgz_file(self.ctp_traces, self._octopylog_tgz_file_name)\n\n self.build_number = int(requests.get(\"{0}{1}\".format(base_url, '/buildNumber'), verify=False).text)\n\n @property\n def ckcm_tgz_file_name(self):\n \"\"\"\n getter on ckcm_tgz_file_name\n \"\"\"\n return self._ckcm_tgz_file_name\n\n @property\n def octopylog_tgz_file_name(self):\n \"\"\"\n getter on octopylog_tgz_file_name\n \"\"\"\n return self._octopylog_tgz_file_name\n\n def get_url(self):\n return self.results\n"
},
{
"alpha_fraction": 0.5838140249252319,
"alphanum_fraction": 0.5916797518730164,
"avg_line_length": 33.4638557434082,
"blob_id": "a1ff4f1f810be8b570d87af4d0a593935bf74080",
"content_id": "6bb44924154103b858c21624fe8973a99ccbc54b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11442,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 332,
"path": "/src/index.py",
"repo_name": "massiou/parselog",
"src_encoding": "UTF-8",
"text": "# coding: utf-8\n\"\"\" Index logs in Elasticsearch \"\"\"\n\n# Generic imports\nimport os\nimport re\nfrom contextlib import contextmanager\nfrom elasticsearch import Elasticsearch\nfrom elasticsearch.exceptions import NotFoundError\nfrom elasticsearch.exceptions import RequestError\nfrom elasticsearch.helpers import bulk\n\n# Module imports\nimport src.jenkins as jenkins\nimport src.parser as parser\nfrom src.com import timing\nfrom src.com import decompressed_tgz as decompressed_tgz\nfrom src.com import logger\nfrom src.com import UntarException\nfrom src.com import FC60x0_CONFIGS\n\n\n@timing\ndef delete_data(index_del):\n \"\"\"\n @goal: delete index from into elasticsearch database\n @return delete_error_code: error code deletion\n \"\"\"\n delete_error_code = None\n es_c = Elasticsearch()\n\n try:\n delete_error_code = es_c.indices.delete(index=index_del)\n logger.info('Delete index: \"%s\" in database', index_del)\n except NotFoundError:\n logger.error('No such index: \"%s\" in database', index_del)\n\n return delete_error_code\n\n\n@timing\ndef index_file(es_instance, log_file_path, es_index, log_type, version=None, module=None,\n pytestemb_version=None):\n \"\"\"\n @goal: index log file into elastic search database\n @param es_instance: ElasticSearch instance\n @param log_file_path: path to file traces directory\n @param es_index: ElasticSearch index\n @param version: field version in elastic search (optional)\n @return error_code: boolean, False if one line is not indexed\n @not_indexed_data: list of all data not indexed\n \"\"\"\n error_code = True\n not_indexed_data = []\n\n if log_type == 'ckcm':\n parser_c = parser.CkcmParser()\n elif log_type == 'octopylog':\n parser_c = parser.OctopylogParser(pytestemb_version)\n\n # Parse log file and format data to export\n parsed_trace = parser_c.parse(log_file_path, version=version, module=module)\n\n try:\n bulk_data = [data for data in parsed_trace]\n #TODO probleme d'update\n bulk(es_instance, 
actions=bulk_data, index=es_index, doc_type=log_type, request_timeout=30)\n except IndexError:\n logger.error(\"%s\\npytestemb version:%s\\nindex:%s\",\n log_file_path, pytestemb_version, es_index)\n except RequestError as exc:\n error_code = False\n not_indexed_data.append(data)\n logger.error('%s bad index format', es_index)\n logger.error(exc)\n\n return error_code, not_indexed_data\n\n\ndef index_module(module_type, config, job_number='lastSuccessfulBuild',\n log_type='ckcm', url=None):\n \"\"\"\n @goal: index module ckcm traces\n @param module_type : fc60x0 module\n @param config: fc60x0 config\n @param job_number: jenkins job number\n @param log_type: ckcm or octopylog\n \"\"\"\n import glob\n\n err_list = []\n print module_type\n # Create jenkins job object\n jenkins_job = jenkins.JenkinsJob(config_hw=module_type,\n config_sw=config,\n job_number=job_number,\n log_type=log_type,\n url_results=url)\n\n logger.info(\"Jenkins job: %s\", jenkins_job.get_url())\n\n # Decompressed ckcm.tgz into /tmp/\n if log_type == 'ckcm':\n tgz_file = jenkins_job.ckcm_tgz_file_name\n elif log_type == 'octopylog':\n tgz_file = jenkins_job.octopylog_tgz_file_name\n\n try:\n directory_c = decompressed_tgz(tgz_file, '/tmp')\n logger.info(\"current_directory: %s\", directory_c)\n except UntarException as msg:\n logger.error(msg)\n return\n\n # Get pytestemb version\n pytestemb_version = None\n if log_type == 'octopylog':\n pytestemb_version = get_pytestemb_version(directory_c)\n\n package_version = get_package_version(directory_c)\n\n # Build elastic search index\n es_index_current = \"{0}_{1}_{2}_{3}_{4}\".format(log_type, package_version.lower(),\n module_type.lower(), config.lower(), jenkins_job.build_number)\n\n # Index each line from log file traces\n logger.info(\"Version : %s, Package: %s, Config: %s\",\n package_version, module_type, config)\n\n\n with elastic_search(hosts=\"172.20.22.104\") as es_c:\n # Create elastic search instance\n try:\n 
es_c.indices.delete(es_index_current)\n except NotFoundError:\n logger.info(\"Current index: {0}\".format(es_index_current))\n es_c.indices.create(es_index_current)\n for file_c in os.listdir(directory_c):\n logger.info(\" Parsing... %s\", file_c)\n logger.info(\"Current index: {0}\".format(es_index_current))\n try:\n index_file(es_c, os.path.join(directory_c, file_c),\n es_index_current,\n log_type, version=package_version, module=module_type.lower(),\n pytestemb_version=pytestemb_version)\n except Exception as exc:\n raise exc\n finally:\n #err_list.append(err)\n logger.info(\"Removing {0}\".format(os.path.join(directory_c, file_c)))\n os.remove(os.path.join(directory_c, file_c))\n\n # clean tmp directory\n\n deprecated_files = [os.path.join(\"/tmp\", f) for f in os.listdir(\"/tmp\") if re.search(r'(pytestemb|ckcm|octopylog).*', f)\n if os.path.isfile(os.path.join(\"/tmp\", f))]\n deprecated_dirs = [os.path.join(\"/tmp\", f) for f in os.listdir(\"/tmp\") if re.search(r'(pytestemb|ckcm|octopylog).*', f)\n if os.path.isdir(os.path.join(\"/tmp\", f))]\n\n for d_file in deprecated_files:\n logger.info(\"Cleaning: delete file {0}\".format(d_file))\n os.remove(d_file)\n\n #for d_file in deprecated_dirs:\n # logger.info(\"Cleaning: delete file {0}\".format(d_file))\n # os.rmdir(d_file)\n\n\n\n return all(err_list)\n\n\ndef get_pytestemb_version(directory):\n \"\"\"\n @goal: get pytestemb version used for test\n @param directory: directory to parse\n @return pytestemb_version: pytestemb version\n \"\"\"\n found = False\n pytestemb_version = None\n for file_c_name in os.listdir(directory):\n if found:\n break\n with open(os.path.join(directory, file_c_name), 'r') as file_c:\n file_c_content = file_c.read()\n\n for line_c in file_c_content.split('\\n'):\n if found:\n break\n if 'Library version : pytestemb' in line_c:\n pytestemb_version = line_c.split()[-1]\n found = True\n\n return pytestemb_version\n\n\ndef get_package_version(directory):\n \"\"\"\n @goal: get version 
from traces directory\n @param directory: directory to parse\n @return version: package version\n \"\"\"\n # Find cmd_CGMREX test file and parse CGMREX command\n check_module_files = [file_c for file_c in os.listdir(directory)\n if file_c.startswith('cmd_CGMREX')\n or file_c.startswith('check_module_')\n or file_c.startswith('setenv_')]\n\n if not check_module_files:\n # Find cmd_CGMR test file and parse CGMR command\n check_module_files = [file_c for file_c in os.listdir(directory)\n if file_c.startswith('cmd_CGMR')]\n [version, _] = get_cgmr([os.path.join(directory, file_path)\n for file_path in check_module_files])\n else:\n [version, _] = get_cgmrex([os.path.join(directory, file_path)\n for file_path in check_module_files])\n return version\n\n\ndef get_cgmrex(log_file_path_list):\n \"\"\"\n @goal: Parse CGMREX command\n @param ckcm_file_path_list: ckcm files paths list\n @return version: FC60x0 version\n @return package: FC60x0 config\n @return ckcm_file_path: ckcm file in which CGMREX has been encountered first\n \"\"\"\n version = 'unknown'\n log_file_path = None\n found = False\n\n # Loop an all ckcm files\n for log_file_path in log_file_path_list:\n if found:\n break\n with open(log_file_path) as log_file:\n file_content = log_file.read()\n for line in file_content.split('\\n'):\n # Found '+CGMREX in file'\n if '+CGMREX:' in line:\n version = line.split(\"'\")[1].lower()\n version = version.split()[0] # Ensure no space in version\n logger.debug(\"CGMREX in %s\\n%s\", log_file_path, line)\n found = True\n break\n\n return [version, log_file_path]\n\n\ndef get_cgmr(log_file_path_list):\n \"\"\"\n @goal: Parse CGMR command\n @param ckcm_file_path_list: ckcm files paths list\n @return version: FC60x0 version\n @return package: FC60x0 config\n @return ckcm_file_path: ckcm file in which CGMR has been encountered first\n \"\"\"\n version = 'unknown'\n log_file_path = None\n found = False\n\n # Loop an all ckcm files\n for log_file_path in log_file_path_list:\n if 
found:\n break\n with open(log_file_path) as ckcm_file:\n file_content = ckcm_file.read()\n for line in file_content.split('\\n'):\n # Found '+CGMR in file'\n if '+CGMR:HW' in line:\n # Get version from line\n parsed_line = re.match(\"(.*)-SW(.*)<0x0D><0x0A>(.*)\", line)\n if parsed_line:\n version = parsed_line.group(2)\n logger.debug(\"CGMR in %s\", log_file_path)\n logger.debug(\"Version found >>> %s\", line)\n logger.debug(\"Version : %s\", version)\n found = True\n break\n\n return [version, log_file_path]\n\n\n@contextmanager\ndef elastic_search(hosts=None):\n \"\"\"\n Create a context manager when calling elasticsearch\n \"\"\"\n try:\n es_c = Elasticsearch(hosts=hosts)\n logger.debug(\"Instanciate ElasticSearch\\n%s\", es_c.info())\n count_lines = es_c.count()['count']\n yield es_c\n finally:\n count_lines = es_c.count()['count'] - count_lines\n logger.info('%d successfully indexed data', count_lines)\n\n\ndef index_table(table, ip_address):\n \"\"\"\n Index MySQL table into elasticsearch instance\n \"\"\"\n\n parser_c = parser.MySQLParser(server=\"172.20.38.50\", user=\"parrotsa\", password=\"parrotsa\",\n database=\"sandbox\", table=table)\n table_parsed = parser_c.parse()\n\n with elastic_search(hosts=ip_address) as es_c:\n try:\n index_es = table\n bulk_data = [data for data in table_parsed]\n try:\n es_c.indices.delete(index_es)\n except NotFoundError:\n logger.warning(\"Deleting index: {0} not found\".format(index_es))\n es_c.indices.create(index_es)\n bulk(es_c, bulk_data, index=index_es, doc_type=table)\n except IndexError:\n raise IndexError\n except RequestError as exc:\n print exc\n logger.error('%s bad index format', table)\n\n\nif __name__ == \"__main__\":\n #print FC60x0_CONFIGS\n #for config_fc in FC60x0_CONFIGS:\n # index_module(config_fc[0], config_fc[1], log_type='octopylog')\n\n index_table(\"t_statistic\", \"172.20.22.104\")\n index_table(\"t_performance\", \"172.20.22.104\")\n"
},
{
"alpha_fraction": 0.5923011898994446,
"alphanum_fraction": 0.6240516901016235,
"avg_line_length": 28.649999618530273,
"blob_id": "79b36e054bae23a2bccfa51f722cd9088dcd8a47",
"content_id": "ff8b57e12a9a1f7fdd41be338f9edeb2ebbe35d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3560,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 120,
"path": "/src/com.py",
"repo_name": "massiou/parselog",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n\"\"\" Common \"\"\"\n\n__copyright__ = \"Copyright 2015, Parrot\"\n\n# Generic imports\nimport sys\nimport requests\nfrom datetime import datetime\n\n# logging imports\nimport logging\nfrom logging.handlers import RotatingFileHandler\n\n# fabric imports\nfrom fabric.colors import yellow\nfrom fabric.colors import blue\nfrom fabric.colors import white\n\n# Globals\nJENKINS_SERVER = 'https://komodo.parrot.biz:8080/'\n\nFC60x0_CONFIGS = (('fc6000p', '256L_Generic_I2C'),\n ('fc6000tn', '256_Generic_VR-Asia'),\n ('fc6000tn', '256_Panasonic-Honda-14M-T5AA'),\n ('fc6000tn', '256_Panasonic-Honda-14M-T5AA_VR-NorthAmerica'),\n ('fc6000ts', '256_Generic'),\n ('fc6000ts', '256_Pioneer-KM506'),\n ('fc6000ts', '256_AlpineDalian-Honda-G6'),\n ('fc6050w', 'Demo'),\n ('fc6050b', 'Demo_B'))\n\nFC60x0_CONFIGS = (('fc6000p', '256L_Generic_I2C'),\n ('fc6000tn', '256_Generic_VR-Asia'),\n ('fc6050w', 'Demo'))\n\n\n# logger object creation\nlogger = logging.getLogger('index')\nlogger.setLevel(logging.DEBUG)\n# Build formatter for each handler\nformatter_file = logging.Formatter('[%(asctime)s][%(levelname)s] %(message)s')\nformatter_console = logging.Formatter(yellow('[%(asctime)s]') + \\\n blue('[%(levelname)s]') + \\\n white(' %(message)s'))\n\n# Set handlers filesize is < 1Mo\nfile_handler = RotatingFileHandler('index.log', 'a', 1000000, 1)\nstream_handler = logging.StreamHandler(stream=sys.stdout)\n\n# Set formatters\nfile_handler.setFormatter(formatter_file)\nstream_handler.setFormatter(formatter_console)\n\n# Set level\nfile_handler.setLevel(logging.DEBUG)\nstream_handler.setLevel(logging.INFO)\n\n# Add handlers to logger\nlogger.addHandler(file_handler)\nlogger.addHandler(stream_handler)\n\n#Exceptions\n\nclass UntarException(Exception):\n \"\"\"Untar failed Exception\"\"\"\n\n\n#Decorators\ndef timing(func):\n '''\n timing decorator\n '''\n def wrapper(*args, **kwargs):\n '''\n func wrapper\n '''\n start = datetime.now()\n 
func(*args, **kwargs)\n stop = datetime.now()\n logger.debug('Time execution %s, %s : %s sec',\n func.__name__, args, (stop - start).total_seconds())\n\n return wrapper\n\n\ndef decompressed_tgz(tgz_file, output_directory):\n \"\"\"\n @goal: decompressed tar.gz file in a given output directory\n @param tgz_file: tar.gz file to decompress\n @param output_directory: output directory where to decompress\n @return untar_directory: name of the archive parent directory\n \"\"\"\n import tarfile\n untar_directory = None\n with tarfile.open(tgz_file) as tgz_file:\n tgz_info = tgz_file.getnames()\n\n if tgz_info:\n tar_directory = tgz_info[0].split('/')[0]\n tgz_file.extractall(output_directory)\n untar_directory = '/'.join([output_directory, tar_directory])\n logger.info(\"%s untarred\", tgz_file.name)\n else:\n raise UntarException(\"%s untar failed\", tgz_file)\n\n return untar_directory\n\n\ndef download_tgz_file(url_tgz_traces, output_file_name):\n '''\n @goal: download tgz file\n @param url_tgz_traces: tar.gz file url\n @param output_file_name: local filename where to save file\n '''\n\n response = requests.get(url_tgz_traces, verify=False)\n with open(output_file_name, 'wb') as output_file:\n output_file.write(response.content)\n\n"
},
{
"alpha_fraction": 0.5053174495697021,
"alphanum_fraction": 0.5091846585273743,
"avg_line_length": 28.107980728149414,
"blob_id": "1621f4ac8db86ce44f723ccf3f12ab163913452c",
"content_id": "fd24f5a86f0b61e133b2b2734fe86a126a0182cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6206,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 213,
"path": "/src/log.py",
"repo_name": "massiou/parselog",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n''' Generic log class '''\n\n__copyright__ = \"Copyright 2015, Parrot\"\n\n# Generic imports\nimport re\n\nclass GenericLog(object):\n \"\"\"\n Generic Log class (parent)\n \"\"\"\n def __init__(self, data):\n \"\"\"\n Initialize class\n \"\"\"\n self._log_type = \"default\"\n self._data = data\n\n def __str__(self):\n s = 'Log :\\n'\n for items in self.__dict__.items():\n s += '- {0}: {1}\\n'.format(*items)\n return s\n\n def get_version(self):\n raise NotImplementedError\n\n\n def get_library(self):\n raise NotImplementedError\n\nclass CkcmLog(GenericLog):\n \"\"\"\n CKCM log class (specific)\n \"\"\"\n def __init__(self):\n self.log_type = \"ckcm\"\n self._data = None\n self.library = None\n self.severity = None\n self.command = None\n self.event = None\n\n @property\n def data(self):\n return self._data\n\n @data.setter\n def data(self, value):\n self._data = value\n self.severity = self.set_severity()\n self.library = self.set_library()\n self.event = self.set_event()\n self.command = self.set_command()\n\n def set_library(self):\n '''\n Get library from ckcm frame\n '''\n library = 'unknown'\n cmd_event = None\n\n blues_str_list = [']BT', 'rt_postBlues', 'Blues']\n hiphop_str_list = [']HSTI', 'SoftAT_', 'HIPHOP']\n rap_str_list = [']RAP', ']SIVR']\n\n if any(cur_str in self.data for cur_str in blues_str_list):\n library = 'blues'\n elif ']HSTI' in self.data:\n library = 'hsti'\n if any(cur_str in self.data for cur_str in rap_str_list):\n library = 'rap'\n if any(cur_str in self.data for cur_str in hiphop_str_list):\n library = 'hiphop'\n elif ']TALA' in self.data:\n library = 'tala'\n elif ']TANGO' in self.data:\n library = 'tango'\n elif ']SOP' in self.data:\n library = 'soprano'\n elif ']CCTOS' in self.data:\n library = 'concertos'\n elif ']DISCO' in self.data:\n library = 'disco'\n elif ']SOUL' in self.data:\n library = 'soul'\n elif 'wxCKCM' in self.data:\n library = 'wxCKCM'\n\n return library\n \n 
def set_severity(self):\n '''\n Get library from ckcm frame\n '''\n try:\n severity = self.data.split(\"[\")[4][0].lower().encode('utf8')\n except IndexError:\n severity = 'unknown'\n if severity == 'i':\n severity = 'info'\n elif severity == 'e':\n severity = 'error'\n elif severity == 'w':\n severity = 'warning'\n elif severity == 'd':\n severity = 'debug'\n elif severity == 'v':\n severity = 'verbose'\n elif severity == 'c':\n severity = 'critical'\n else:\n severity = None\n\n return severity \n\n def set_command(self):\n '''\n Get HSTI command\n '''\n command = None\n if self.library == 'hsti':\n if 'WaitCmdAT' in self.data:\n try:\n command = re.search('(.*)WaitCmdAT(.*)<LF>', self.data).group(2)\n except AttributeError:\n print '>>> AttributeError: (HSTI Command) Bad ckcm line format:\\n>>>%s' % self.data\n return command\n\n def set_event(self):\n '''\n Get HSTI event\n '''\n event = None\n if self.library == 'hsti':\n if re.search('WaitCmd(.*):(.*)<LF>', self.data):\n try:\n self._event = re.search('(.*)WaitCmd*(.*)<LF>', self.data).group(2)\n except AttributeError:\n print '>>> AttributeError: (HSTI Event) Bad ckcm line format:\\n>>>%s' % self.data\n elif 'HSTIRapEvent' in self.data:\n try:\n self.event = re.search('(.*)HSTIRapEvent*(.*)<LF>', self.data).group(2)\n except AttributeError:\n print '>>> AttributeError: (HSTI Rap Event) Bad ckcm line format:\\n>>>%s' % self.data\n return event\n\nclass OctopylogLog(GenericLog):\n \"\"\"\n octopylog class (specific)\n \"\"\"\n def __init__(self, pytestemb_version):\n self.log_type = \"octopylog\"\n self._data = None\n self.message_type = None\n self.timestamp = None\n self.message = None\n self.pytestemb_version = pytestemb_version\n self.pytestemb_version_major = int(pytestemb_version.split('.')[0])\n self.pytestemb_version_minor = int(pytestemb_version.split('.')[1])\n self.first_field_position = 0\n if self.pytestemb_version_major >= 2 \\\n and self.pytestemb_version_minor >= 2:\n self.first_field_position 
= 2\n\n @property\n def data(self):\n return self._data\n\n @data.setter\n def data(self, value):\n self._data = value\n self.message_type = self.set_message_type()\n self.timestamp = self.set_timestamp()\n self.message = self.set_message()\n\n def set_message_type(self):\n '''\n Set message type field from octopylog frame\n '''\n if self.data[0].isdigit():\n try:\n message_type = self.data.split()[self.first_field_position + 1]\n except IndexError:\n message_type = None\n else:\n message_type = None\n return message_type\n\n def set_timestamp(self):\n '''\n Set timestamp field from octopylog frame\n '''\n if self.data[0].isdigit():\n timestamp = self.data.split()[self.first_field_position]\n else:\n timestamp = None\n return timestamp\n\n def set_message(self):\n '''\n Set message field from octopylog frame\n '''\n if self.data[0].isdigit():\n try:\n message = ' '.join(self.data.split()[self.first_field_position + 2:])\n except IndexError:\n message = None\n else:\n message = None\n return message\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6257668733596802,
"alphanum_fraction": 0.6523517370223999,
"avg_line_length": 24.736841201782227,
"blob_id": "269d7fd048a0a61f4299f06edf524938c65c0700",
"content_id": "0e2081a8257aaa7ea07ca5e2a427d6a0391528f9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 978,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 38,
"path": "/src/unitary_tests/unitary_test.py",
"repo_name": "massiou/parselog",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*\n\"\"\" parselog unitary tests \"\"\"\n\nimport os\nimport unittest\n\nimport com\nimport jenkins\nimport log\nimport index\n\nclass TestJenkinsClass(unittest.TestCase):\n\n #test du module jenkins\n\n def setUp(self):\n pass\n\n def tearDown(self):\n pass\n\n def test_01_jenkins_job_tgz_local_file(self):\n job = jenkins.JenkinsJob('fc6000ts', '256_Generic')\n #Ensure ckcm.tgz local name is OK\n self.assertEqual(job.ckcm_tgz_file_name, '/tmp/ckcm-fc6000ts-256_Generic.tgz')\n\n #Ensure download is OK\n self.assertTrue(os.path.isfile(job.ckcm_tgz_file_name))\n\n def test_02_decompressed_ckcm_tgz(self):\n job = jenkins.JenkinsJob('fc6000ts', '256_Generic')\n self.assertNotEqual(job.decompressed_ckcm_tgz('/tmp'), None)\n print 'untar directory: %s' % job.decompressed_ckcm_tgz('/tmp')\n \n#Main entry of unitary tests campaigns\nif __name__ == '__main__':\n unittest.main()\n"
},
{
"alpha_fraction": 0.4936532974243164,
"alphanum_fraction": 0.4972233176231384,
"avg_line_length": 33.07432556152344,
"blob_id": "dab9451fcebcebf6db1b1ff3005d664fd8301dfa",
"content_id": "6652ad746297a6bf990ce0613e914c01ddc12255",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5042,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 148,
"path": "/src/parser.py",
"repo_name": "massiou/parselog",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n\"\"\" Log parser classes \"\"\"\n\n__copyright__ = \"Copyright 2015, Parrot\"\n\n#imports\nimport os\nimport log\nimport mysql.connector as mysql\nfrom datetime import datetime\n\nfrom src.com import logger\n\nclass LogParser(object):\n '''\n Log class generic\n '''\n def __init__(self):\n self.type = 'generic'\n\n def parse(self, data):\n raise NotImplementedError('No parse method')\n\nclass CkcmParser(LogParser):\n \"\"\"\n Inherit from LogParser\n \"\"\"\n def __init__(self):\n self.type = 'ckcm'\n\n def parse(self, ckcm_file_path, version=None, module=None):\n '''\n wxCKCM parser\n @param ckcm_file_path: wxCKCM file path\n @return test_name: file script name\n @return parsed_trace: ckcm formatted traces\n '''\n #Read ckcm file\n test_title = \"_\".join(os.path.basename(ckcm_file_path).split(\"_\")[:-2])\n\n # Loop on ckcm file\n with open(ckcm_file_path) as ckcm_file:\n ckcm_content = ckcm_file.read()\n\n for line in ckcm_content.split('\\n'):\n # Ensure first character is a '[' (timestamp)\n if line.startswith(\"[\"):\n ckcm_line = log.CkcmLog()\n try:\n ckcm_line.data = line.decode('utf8')\n # one line log = one data dictionary \n data = {\n 'author': u'jenkins',\n 'test': u\"%s\" % test_title,\n 'severity': u\"%s\" % ckcm_line.severity,\n 'text': u\"%s\" % ckcm_line.data,\n 'library': u'%s' % ckcm_line.library,\n 'ATCommand': u'%s' % ckcm_line.command, \n 'ATEvent': u'%s' % ckcm_line.event,\n 'index_time': u'%s' % datetime.now().isoformat(),\n 'module': u\"%s\" % module,\n 'version': u'%s' % version\n }\n yield data\n except UnicodeDecodeError:\n logger.error(UnicodeDecodeError)\n\nclass OctopylogParser(LogParser):\n \"\"\"\n Inherit from LogParser\n \"\"\"\n def __init__(self, pytestemb_version):\n self.type = 'octopylog'\n self.pytestemb_version = pytestemb_version\n\n def parse(self, ctp_file_path, version=None, module=None):\n \"\"\"\n CTP parser\n @param ctp_file_path: wxCKCM file path\n @return 
test_name: file script name\n @return parsed_trace: octopylog formatted traces\n \"\"\"\n #Read octopylog file\n test_title = \"_\".join(os.path.basename(ctp_file_path).split(\"_\")[:-1])\n \n with open(ctp_file_path) as ctp_file:\n ctp_content = ctp_file.read()\n\n # Loop on octopylog file\n for line in ctp_content.split('\\n'):\n # Ensure first character is a digit (timestamp)\n if line and line[0].isdigit():\n\n ctp_line = log.OctopylogLog(self.pytestemb_version)\n\n try:\n ctp_line.data = line.decode('utf8')\n # one line log = one data dictionary \n data = {\n 'author': u'jenkins',\n 'test': u\"%s\" % test_title,\n 'text': u\"%s\" % ctp_line.message,\n 'timestamp': u\"%s\" % ctp_line.timestamp,\n 'library': u\"%s\" % ctp_line.message_type,\n 'index_time': 'u%s' % datetime.now().isoformat(),\n 'version': u'%s' % version,\n 'module': u'%s' % module\n }\n yield data\n except UnicodeDecodeError:\n logger.error(UnicodeDecodeError)\n\n\nclass MySQLParser(LogParser):\n \"\"\"\n Inherit from LogParser\n \"\"\"\n def __init__(self, server, user, password, database, table):\n self.type = 't_table'\n self.pytestemb_version = table\n self.user = user\n self.password = password\n self.server = server\n self.table = table\n self.cur_db = mysql.connect(user=user,\n password=password,\n host=server,\n database=database)\n\n def parse(self):\n columns = \"\"\"SHOW COLUMNS FROM {0}\"\"\".format(self.table)\n cursor = self.cur_db.cursor(buffered=True)\n cursor.execute(columns)\n columns = cursor.fetchall()\n columns = [column[0] for column in columns]\n query = \"\"\"SELECT * from {0}\"\"\".format(self.table)\n\n cursor = self.cur_db.cursor(buffered=True)\n cursor.execute(query)\n results = cursor.fetchall()\n print len(results)\n for index, result in enumerate(results):\n data = {col: res for col, res in zip(columns, result)}\n if index % 1000 == 0:\n print index\n print data\n yield data"
}
] | 6 |
AndryT/Python-Excercises
|
https://github.com/AndryT/Python-Excercises
|
8880237f22202081145a5f83585b57f8d7d114af
|
16b5c2d0f4c9a0aa190cd91228b5f5fea6a64b1b
|
55847fafc30c1c5a126482244ec89fba69dfc375
|
refs/heads/master
| 2021-01-11T14:46:05.166813 | 2017-01-30T18:57:59 | 2017-01-30T18:57:59 | 80,212,741 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6816578507423401,
"alphanum_fraction": 0.6922398805618286,
"avg_line_length": 33.378787994384766,
"blob_id": "1ab20af195c5003c207318420720651d871498e9",
"content_id": "3b64f0f43009e471ab3293316d2e769cf3d642ee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2268,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 66,
"path": "/Treated_Class_example.py",
"repo_name": "AndryT/Python-Excercises",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Jan 27 14:13:40 2017\n\n@author: Andrea\n\"\"\"\n\n''' Importing the libraries needed to analyse data '''\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib\nmatplotlib.style.use('ggplot')\nimport numpy as np\nimport py\n\n# Import data from text file and save it in a Data Frame:\nmain_df = pd.read_table(py._path.local.LocalPath() + '\\electric.txt', \\\n delim_whitespace = True, header = 1, index_col = None, \\\n names = ['City','Grade','PretestTreat','PosttestTreat','PretestControl',\\\n 'PosttestControl','Dummy'], warn_bad_lines = True)\n\n# Remove last line of data - request of excercise:\nmain_df = main_df[:len(main_df)-1]\n \n# Show description of the DataFrame\nprint(\"Columns: \" + str(main_df.columns))\nprint(\"Number of rows: \" + str(len(main_df)))\nprint(\"Number of columns: \" + str(len(main_df.columns)))\n\n# Show first 2 lines in data frame\nprint('\\n', main_df.loc[1:2])\n\n# Last column \"Dummy\" is not used for the porpose of the analysis, and then it\n# is dropped:\nmain_df = main_df.drop('Dummy', axis=1)\n\n# Set \"Grade\" as a categorical variable:\nmain_df.Grade = main_df[\"Grade\"].astype('category')\n\n# Summarise statistics for test scores independently from City and Grade\nscore_label = [\"PretestTreat\", \"PosttestTreat\",\"PretestControl\",\\\n\"PosttestControl\"] \nsummary_total = main_df[score_label].describe()\nprint('\\n', summary_total) \nprint('\\n')\n\n# Summarise statisticsof test scores grouped by City and Grade:\n#summary_city = main_df[[\"PretestTreat\",\"PosttestTreat\"]].groupby(by = \"City\").describe() \nsummary_city = main_df.groupby(by = \"City\")\nprint(summary_city[score_label].describe())\n\nsummary_grade = main_df.groupby(by = \"Grade\")\nprint(summary_grade[score_label].describe())\n\n#plt.figure(1)\n#pd.DataFrame.hist(main_df, column = ['PretestTreat',], alpha = 0.5, by = 
['Grade'])\nmain_df[['Grade','PretestTreat','PretestControl']].hist(orientation = 'vertical', \\\n normed = True, alpha = 0.5, by = ['Grade'], \\\n label = ['Treated Class', 'Control Class'])\nplt.suptitle('Pretest Score by Grade')\nplt.xlabel('Score')\nplt.ylabel('Probability Density')\n#plt.legend('Treated Class','Control Class')\n\n# Plotting the distributions of test scores by grade:\n#main_df.plot.hist(n)"
},
{
"alpha_fraction": 0.8153846263885498,
"alphanum_fraction": 0.8153846263885498,
"avg_line_length": 31.5,
"blob_id": "5320dca9cd6b09e1b1843fe4256080a3e5ab474e",
"content_id": "5f7749bfe042ffe077924b193d07ae3271f27bfa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 65,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 2,
"path": "/README.md",
"repo_name": "AndryT/Python-Excercises",
"src_encoding": "UTF-8",
"text": "# Python-Excercises\nPython codes for Data analytics - Excercises\n"
}
] | 2 |
echo-zhou/final
|
https://github.com/echo-zhou/final
|
25bc59378cadf34c983b48c802961b50f47e3c39
|
19bc99c37bc4b32b9fe0c329123e62eee23bb7cc
|
f4efcc25298aada233722b92c84c4a285dd0d789
|
refs/heads/master
| 2020-03-13T00:58:32.961218 | 2018-04-30T11:56:44 | 2018-04-30T11:56:44 | 130,896,338 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6530758142471313,
"alphanum_fraction": 0.6759656667709351,
"avg_line_length": 38.94285583496094,
"blob_id": "f066f80ea9fd199bffd7aa15bee438adcf2effa7",
"content_id": "6994d280f70bc950dc82ec01b639b370bf7710c2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1398,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 35,
"path": "/mysite/movies/models.py",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\nclass Theater(models.Model):\n address = models.CharField(max_length=200)\n phone = models.CharField(max_length=20)\n name = models.CharField(max_length=100, default = \"A Theater!\")\n th_id = models.CharField(max_length=10, unique=True)\n lat = models.DecimalField(max_digits=10,decimal_places=4, null=True)\n long = models.DecimalField(max_digits=10,decimal_places=4, null=True)\n city = models.CharField(max_length=30)\n\n def __str__(self):\n return self.name + ' (' + self.th_id + ')'\n\nclass Movie(models.Model):\n title = models.CharField(max_length=100)\n theaters = models.ManyToManyField(Theater)\n movie_id = models.IntegerField(unique=True)\n runtime = models.IntegerField()\n releaseDate = models.DateField()\n poster = models.CharField(max_length=300)\n rating = models.CharField(max_length=10, null=True)\n movie_genre = models.CharField(max_length=50, null=True)\n\n def __str__(self):\n return self.title + ' (' + str(self.movie_id) + ')'\n\nclass Showtime(models.Model):\n movie = models.ForeignKey(Movie, on_delete=models.CASCADE)\n theater = models.ForeignKey(Theater, on_delete=models.CASCADE, default=\"\")\n time = models.CharField(max_length = 8)\n tickets = models.CharField(max_length = 250)\n\n def __str__(self):\n return self.movie.title + ' / ' + self.theater.name + ' / ' + self.time\n"
},
{
"alpha_fraction": 0.5768892765045166,
"alphanum_fraction": 0.5931458473205566,
"avg_line_length": 30.61111068725586,
"blob_id": "cb37d61882646b82b6685ecc5a4b45cc2ea3314b",
"content_id": "72aa56d6787fa40ff658dfaf8d8c2eee2968b086",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 4552,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 144,
"path": "/mysite/movies/static/movies/js/home.js",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "///// js for carouel\n$(\"#myCarousel\").on(\"slide.bs.carousel\", function(e) {\n var $e = $(e.relatedTarget);\n var idx = $e.index();\n var itemsPerSlide = 4;\n var totalItems = $(\".carousel-item\").length;\n\n if (idx >= totalItems - (itemsPerSlide - 1)) {\n var it = itemsPerSlide - (totalItems - idx);\n for (var i = 0; i < it; i++) {\n // append slides to end\n if (e.direction == \"left\") {\n $(\".carousel-item\")\n .eq(i)\n .appendTo(\".carousel-inner\");\n } else {\n $(\".carousel-item\")\n .eq(0)\n .appendTo($(this).find(\".carousel-inner\"));\n }\n }\n }\n});\n\n\n\n//// js for map\n\nwindow.movies = {\n params: {},\n data: {},\n};\n\n//fetch data and pass date to the list\nfunction fetchDataMap() {\n console.log(\"fetchDataMap\");\n $.getJSON('https://raw.githubusercontent.com/echo-zhou/final/master/data_all.json', function(data) {\n window.movies.data = data;\n console.log(\"json loaded\");\n fillMap();\n });\n}\n\n\n\n//codes from google map api\nvar map;\n\nfunction initMap() {\n console.log(\"initMap function called\");\n var mid_NC = {lat: 35.900, lng: -78.800};\n map = new google.maps.Map(document.getElementById('map'), {\n zoom: 11,\n center: mid_NC\n });\n fetchDataMap();\n}\n\n\nfunction init(){\n console.log(\"init function\");\n //initMap();\n }\n\n\nfunction fillMap() {\n console.log(\"fillMap()\");\n console.log(window.movies.data);\n//Chapel Hill\n for (i = 0; i < window.movies.data[0][\"theaters\"].length; i++){\n var name = window.movies.data[0][\"theaters\"][i][\"name\"];\n var id = window.movies.data[0][\"theaters\"][i][\"id\"];\n var city = window.movies.data[0][\"theaters\"][i][\"id\"];\n var lat = window.movies.data[0][\"theaters\"][i][\"geo\"][\"latitude\"];\n var lng = window.movies.data[0][\"theaters\"][i][\"geo\"][\"longitude\"];\n var address = window.movies.data[0][\"theaters\"][i][\"address1\"];\n var phone = window.movies.data[0][\"theaters\"][i][\"phone\"];\n var url = 
\"https://www.fandango.com/\"+window.movies.data[0][\"theaters\"][i][\"theaterPageUrl\"];\n var marker = new google.maps.Marker({\n position: {lat, lng},\n map:map,\n content:'<div id=\"content\" style=\"font-size: 20px;\"><a href=\"/movies/theaters/' + id + '\"><h2>'+name+'</h2>'+address+'<br>'+phone+'</div>',\n })\n var infoWindow = new google.maps.InfoWindow({\n maxWidth: 300,\n maxHeight: 300\n })\n //codes from StackOverflow H.M.\n marker.addListener('mouseover', function() {\n infoWindow.setContent(this.content);\n infoWindow.open(this.getMap(), this);\n });\n\n }\n\n//Durham\nfor (i = 0; i < window.movies.data[1][\"theaters\"].length; i++){\n var name = window.movies.data[1][\"theaters\"][i][\"name\"];\n var id = window.movies.data[1][\"theaters\"][i][\"id\"];\n var city = window.movies.data[1][\"theaters\"][i][\"id\"];\n var lat = window.movies.data[1][\"theaters\"][i][\"geo\"][\"latitude\"];\n var lng = window.movies.data[1][\"theaters\"][i][\"geo\"][\"longitude\"];\n var url = \"https://www.fandango.com/\"+window.movies.data[1][\"theaters\"][i][\"theaterPageUrl\"];\n var marker = new google.maps.Marker({\n position: {lat, lng},\n map:map,\n content:'<div id=\"content\" style=\"font-size: 20px;\"><a href=\"/movies/theaters/' + id + '\"><h2>'+name+'</h2>'+address+'<br>'+phone+'</div>',\n })\n var infoWindow = new google.maps.InfoWindow({\n maxWidth: 300,\n maxHeight: 300\n })\n marker.addListener('mouseover', function() {\n infoWindow.setContent(this.content);\n infoWindow.open(this.getMap(), this);\n });\n\n }\n\n//Raleigh\n for (i = 0; i < window.movies.data[2][\"theaters\"].length; i++){\n // console.log(window.movies.data[0][\"theaters\"][i])\n var name = window.movies.data[2][\"theaters\"][i][\"name\"];\n var id = window.movies.data[2][\"theaters\"][i][\"id\"];\n var city = window.movies.data[2][\"theaters\"][i][\"id\"];\n var lat = window.movies.data[2][\"theaters\"][i][\"geo\"][\"latitude\"];\n var lng = 
window.movies.data[2][\"theaters\"][i][\"geo\"][\"longitude\"];\n var url = \"https://www.fandango.com/\"+window.movies.data[2][\"theaters\"][i][\"theaterPageUrl\"];\n var marker = new google.maps.Marker({\n position: {lat, lng},\n map:map,\n content:'<div id=\"content\" style=\"font-size: 20px;\"><a href=\"/movies/theaters/' + id + '\"><h2>'+name+'</h2>'+address+'<br>'+phone+'</div>',\n })\n var infoWindow = new google.maps.InfoWindow({\n maxWidth: 300,\n maxHeight: 300\n })\n marker.addListener('mouseover', function() {\n infoWindow.setContent(this.content);\n infoWindow.open(this.getMap(), this);\n });\n\n }\n}\n"
},
{
"alpha_fraction": 0.6715328693389893,
"alphanum_fraction": 0.6715328693389893,
"avg_line_length": 33.25,
"blob_id": "74498fc6193dab402c46f517a16264c5fc0d6074",
"content_id": "634c4678d963e94ab1ae8f0ec976d8753e955a75",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 411,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 12,
"path": "/mysite/movies/urls.py",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "from django.urls import path\n\nfrom . import views\n\napp_name='movies'\nurlpatterns = [\n path('', views.home, name='home'),\n path('movies/', views.list_movies, name='movies-list'),\n path('theaters/', views.list_theaters, name='theaters-list'),\n path('movies/<int:movie_id>/', views.movie_detail, name=\"movie-details\"),\n path('theaters/<slug:th_id>', views.theater_detail, name=\"theater-details\"),\n]\n"
},
{
"alpha_fraction": 0.48665642738342285,
"alphanum_fraction": 0.48972392082214355,
"avg_line_length": 49.153846740722656,
"blob_id": "0e04a39430406042da0250f1f560c70c9a24fdc1",
"content_id": "dc437738aa2b812949918354f501678eb11c17f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6520,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 130,
"path": "/mysite/movies/management/commands/load_data.py",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "import datetime\nimport json\n\nfrom django.core.management.base import BaseCommand, CommandError\nfrom movies.models import Movie, Showtime, Theater\n\n# Here we are creating a custom management command to load our json data into\n# Django models.\n# The Command class is loaded by Django at runtime and executed when the file\n# that contains it is specified as the command to manage.py\n\nclass Command(BaseCommand):\n help = 'Load movie data into the database'\n\n # add_arguments lets us specify arguments and options to read from the command\n # line when the command is executed.\n # We are going to add 1 argument- \"json_file\" which is a string (type=str)\n # representing the path to a json file containing our data to load.\n def add_arguments(self, parser):\n parser.add_argument('json_file', type=str)\n\n # The handle method is the main function of the command. This is the entry\n # point for our command and contains all our business logic.\n def handle(self, *args, **options):\n # Grab the path from our commandline arguments\n json_path = options['json_file']\n\n # We are going to write output to the screen as we process things so\n # the user has feedback the script is running.\n self.stdout.write(self.style.SUCCESS('Loading JSON from \"{}\"'.format(json_path)))\n data = json.load(open(json_path))\n\n # Track the total number of records\n total = len(data)\n\n # Let the user know we're running\n self.stdout.write(self.style.SUCCESS('Processing {} rows'.format(total)))\n\n # Create an array to hang on to anything skipped while processing\n skipped = []\n\n # Loop over each row in the data with the enumerate function so we have\n # a row counter\n for i, row in enumerate(data):\n # for j, theater in enumerate(row):\n theaters = row['theaters']\n for j, theater in enumerate(theaters):\n try:\n th_name = theater['name']\n th_id = theater['id']\n th_address = theater['address1']\n th_phone = theater['phone']\n th_geo = theater['geo']\n th_city = 
theater['city']\n # if th_city == \"Cary\" or th_city == \"Raleigh\" or th_city == \"Morrisville\" or th_city == \"Cary\" or th_city == \"Holly Springs\":\n # th_county = \"Wake County\"\n # elif th_city == \"Durham\":\n # th_county = \"Durham County\"\n # else:\n # th_county = \"Orange County\"\n # for k, location in enumerate(th_geo):\n # th_lat = location[0]\n # th_long = location[1]\n\n theater_instance, _ = Theater.objects.get_or_create(\n name = th_name,\n th_id = th_id,\n address = th_address,\n phone = th_phone,\n lat = 0.0000,\n long = 0.0000,\n city = th_city,\n )\n\n th_movies = theater.get('movies')\n if(th_movies):\n th_movie_list = []\n for m, movie in enumerate(th_movies):\n\n movie_instance, _ = Movie.objects.get_or_create(\n movie_id = movie['id'],\n title = movie['title'],\n runtime = movie['runtime'],\n releaseDate = movie['releaseDate'][0:10],\n poster = movie['poster']['size']['full'][2:],\n rating = movie['rating'],\n movie_genre = movie['genres'][0],\n )\n movie_instance.theaters.add(theater_instance)\n theater_instance.movie_set.add(movie_instance)\n #th_movie_list.append(movie[\"title\"])\n\n variants = movie.get('variants')\n if variants:\n for v, variant in enumerate(variants):\n amenityGroups = variant.get('amenityGroups')\n if amenityGroups:\n for a, amenity in enumerate(amenityGroups):\n showtimes = amenity.get('showtimes')\n if showtimes:\n for s, showtime in enumerate(showtimes):\n showtime_instance, _ = Showtime.objects.get_or_create(\n movie = movie_instance,\n theater = theater_instance,\n time = showtime['date'],\n tickets = showtime['ticketingUrl'],\n )\n # If we don't have some data/something breaks add the row to the skipped list and\n # continue to the next item in the for loop\n except:\n skipped.append(theater)\n print(th_name)\n continue\n\n # Now we tell the user which object count we just updated.\n # By using the line ending `\\r` (return) we return to the begginging\n # of the line and start writing again. 
This writes over the same line\n # and gives the illusion of the count incrementing without cluttering\n # the screens output.\n self.stdout.write(self.style.SUCCESS('Processed {}/{}'.format(i + 1, total)), ending='\\r')\n # # We call flush to force the output to be written\n self.stdout.flush()\n\n # If we have any skipped rows write them out as json.\n # Then the user can manually evaluate / edit the json and reload it once\n # it has been fixed with `manage.py load_winners skipped.json`\n if skipped:\n self.stdout.write(self.style.WARNING(\"Skipped {} records\".format(len(skipped))))\n with open('skipped.json', 'w') as fh:\n json.dump(skipped, fh)\n"
},
{
"alpha_fraction": 0.6221198439598083,
"alphanum_fraction": 0.6273041367530823,
"avg_line_length": 27.459016799926758,
"blob_id": "63d4fd959189a968a2a4f5abd9ebdcd7616eb9cc",
"content_id": "b0d8657d3f0848c598c440a3e21f95514eef517e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1736,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 61,
"path": "/mysite/movies/views.py",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "from django.http import HttpResponse\nfrom django.core import serializers\nfrom django.shortcuts import render, get_object_or_404\nfrom django.urls import reverse\nfrom django.http import JsonResponse\nfrom . import models\n\n\ndef home(request):\n objects = models.Movie.objects.all()\n return render(request, \"movies/home.html\", {\n 'theaters': models.Theater.objects.all(),\n 'movies': models.Movie.objects.all(),\n \"list_type\": \"Movies\",\n \"objects\": objects,\n })\n\n\n\ndef list_movies(request):\n objects = models.Movie.objects.all()\n return render(request, \"movies/list.html\", {\n \"list_type\": \"Movies\",\n \"objects\": objects\n })\n\ndef list_theaters(request):\n objects = models.Theater.objects.all()\n return render(request, \"movies/list.html\", {\n \"list_type\": \"Theaters\",\n \"objects\": objects,\n\n })\n\ndef movie_detail(request, movie_id):\n movie = get_object_or_404(models.Movie, movie_id=movie_id)\n theater_objects = movie.theaters.all()\n theaters = []\n for t, theater in enumerate(theater_objects):\n theaters.append(theater.name)\n\n context = {\n 'title' : movie.title,\n 'poster' : \"https://\" + movie.poster,\n 'theaters' : theaters,\n 'rating' : movie.rating,\n }\n return render(request, \"movies/movie_detail.html\", context)\n\n\ndef theater_detail(request, th_id):\n theater = get_object_or_404(models.Theater, th_id=th_id)\n movie_objects = theater.movie_set.all()\n\n context = {\n 'name' : theater.name,\n 'address' : theater.address,\n 'phone' : theater.phone,\n 'movies' : movie_objects,\n }\n return render(request, \"movies/theater_detail.html\", context)\n"
},
{
"alpha_fraction": 0.7007874250411987,
"alphanum_fraction": 0.7007874250411987,
"avg_line_length": 33.6363639831543,
"blob_id": "1fef848275a20ea56750fdac91d7759c0c378721",
"content_id": "c2b8b974ba23a31e6dade5df44fc455c5f038640",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 381,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 11,
"path": "/mysite/movies/management/commands/clear_db.py",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "from django.core.management.base import BaseCommand, CommandError\nfrom movies.models import Movie, Showtime, Theater\n\nclass Command(BaseCommand):\n help = 'Remove all movie data from the database'\n\n # clean the database\n def handle(self, *args, **options):\n Movie.objects.all().delete()\n Theater.objects.all().delete()\n Showtime.objects.all().delete()\n"
},
{
"alpha_fraction": 0.5754985809326172,
"alphanum_fraction": 0.5811966061592102,
"avg_line_length": 28.25,
"blob_id": "468c5da8c4372586c3e77c7a13d0e738cd58e4aa",
"content_id": "d0ebd53d4d5871a4e8814e13964b794e8725b919",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 351,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 12,
"path": "/cleaning-json.py",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "import json\n\ndef clean_json(output_filename, input_filename):\n with open(output_filename, \"w\") as outfile:\n with open(input_filename) as infile:\n data = json.load(infile)\n new_data = []\n for o in data:\n \n outfile.write(theaters)\n\nclean_json('merged00.json', 'merged_theaters.json')\n"
},
{
"alpha_fraction": 0.51347815990448,
"alphanum_fraction": 0.5361161828041077,
"avg_line_length": 26.688405990600586,
"blob_id": "31dfaa06bcb6976bd8d491afac6146ee4c7d3e55",
"content_id": "77644ebe11e84e8bdd0bb4660cc1f53ed3b09493",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 7644,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 276,
"path": "/mysite/movies/static/movies/js/barchart.js",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "///global container\nwindow.movies = {\n params: {},\n data: {},\n raleigh_number:[],\n durham_number:[],\n ch_number:[],\n\n name:[],\n name2:[],\n name3:[],\n};\n\n\n\nfunction fetchData2() {\n $.getJSON('https://raw.githubusercontent.com/echo-zhou/final/master/data_all.json', function(data) {\n window.movies.data = data;\n\n var movie_number_ch = []\n for (i = 0; i < window.movies.data[0][\"theaters\"].length; i++){\n try {\n var ch_movie_count = window.movies.data[0][\"theaters\"][i][\"movies\"].length;\n movie_number_ch.push(ch_movie_count)\n }\n catch(err) {\n console.log(\"error\");\n }\n }\n\n theSum = movie_number_ch.reduce(function(a, b) { return a + b; }, 0);\n\n window.movies.ch_number = theSum;\n\n\n\n var movie_number_durham = []\n for (i = 0; i < window.movies.data[1][\"theaters\"].length; i++){\n try {\n var dur_movie_count = window.movies.data[1][\"theaters\"][i][\"movies\"].length;\n movie_number_durham.push(dur_movie_count)\n }\n catch(err) {\n console.log(\"error\");\n }\n }\n theSum = movie_number_durham.reduce(function(a, b) { return a + b; }, 0);\n window.movies.durham_number = theSum;\n\n\n\n var movie_number_Ra = []\n for (i = 0; i < window.movies.data[2][\"theaters\"].length; i++){\n try {\n var Ra_count = window.movies.data[2][\"theaters\"][i][\"movies\"].length;\n movie_number_Ra.push(Ra_count)\n }\n catch(err) {\n console.log(\"error\");\n }\n }\n theSum = movie_number_Ra.reduce(function(a, b) { return a + b; }, 0);\n window.movies.raleigh_number = theSum;\n\n\n var theater_number = []\n var Length = window.movies.data[0][\"theaters\"].length\n for (i = 0; i < Length-1; i++) {\n try {\n var tempMovieCount = window.movies.data[0][\"theaters\"][i][\"name\"];\n theater_number.push(tempMovieCount)\n console.log(tempMovieCount);\n }\n catch(err) {\n console.log(\"error\");\n }\n }\n\n var theater_number2 = []\n for (i = 0; i < window.movies.data[1][\"theaters\"].length; i++) {\n try {\n var tempMovieCount = 
window.movies.data[1][\"theaters\"][i][\"name\"];\n theater_number2.push(tempMovieCount)\n }\n catch(err) {\n console.log(\"error\");\n }\n }\n\n var theater_number3 = []\n for (i = 0; i < window.movies.data[2][\"theaters\"].length; i++) {\n try {\n var tempMovieCount = window.movies.data[2][\"theaters\"][i][\"name\"];\n theater_number3.push(tempMovieCount)\n }\n catch(err) {\n console.log(\"error\");\n }\n }\n\n window.movies.name = theater_number;\n window.movies.name2 = theater_number2;\n window.movies.name3 = theater_number3;\n\n fillBar();\n fillPie();\n\n });\n}\n\n\n\nfunction fillBar(){\n\n var data = [ window.movies.ch_number, window.movies.durham_number, window.movies.raleigh_number];\n\n var array = ['Chapel Hill', 'Durham', 'Raleigh']\n\n var svgContainer = d3.select(\"#barchart\");\n //append svg\n // var svg = svgContainer.append(\"svg\");\n\n\n var margin = {top: 0, right: 5, bottom: 50, left: 50};\n var fullWidth = 900;\n var fullHeight = 350;\n var width = fullWidth - margin.right - margin.left;\n var height = fullHeight - margin.top - margin.bottom;\n\n\n var svg = d3.select(\"#barchart\").append(\"svg\")\n .attr('width', fullWidth)\n .attr('height', fullHeight)\n .classed('bar-svg', true)\n // draw barchart\n .append('g')\n // translate it to leave room for the left and top margins\n .attr('transform', 'translate(' + margin.left + ',' + margin.top + ')');\n\n\n var chart = svg.append(\"g\");\n\n var boundingRect = svgContainer.node().getBoundingClientRect();\n var width = boundingRect.width;\n var height = boundingRect.height;\n var x = d3.scaleLinear()\n .domain([0,90])\n .range([0,290]);\n\n var y = d3.scaleLinear()\n .domain([0,180])\n .range([0,width]);\n\n var chartGroup = svg.append(\"g\").attr(\"transform\",\"translate(\"+margin.left+\",\"+margin.top+\")\");\n\n var xAxis = d3.axisLeft(y)\n var yAxis = d3.axisBottom(x);\n\n// draw bar rectangles\n svg.selectAll(\"rect\")\n .data(data)\n .enter().append(\"rect\")\n .attr(\"height\",60 )\n 
.attr(\"width\",function(d,i){ return d*6;})\n .attr(\"fill\",\"#fd9735\")\n .attr(\"y\",function(d,i){ return 70+90*i; })\n .attr(\"x\",190 );\n\n // text of the category name\n var textArray = array;\n svg.append(\"text\").selectAll(\"tspan\")\n .data(textArray)\n .enter().append(\"tspan\")\n .attr(\"y\",function(d,i){ return 100+90*i; })\n .attr(\"x\",0)\n .attr(\"fill\",\"#38618f\")\n .attr(\"dominant-baseline\",\"middle\")\n .attr(\"text-anchor\",\"start\")\n .attr(\"font-size\",\"32\")\n .text(function(d){ return d; })\n\n// text of the data\n var textArray2 = data;\n svg.append(\"text\").selectAll(\"tspan\")\n .data(textArray2)\n .enter().append(\"tspan\")\n .attr(\"y\",function(d,i){ return 100+90*i; })\n .attr(\"x\",550)\n .attr(\"fill\",\"white\")\n .attr(\"dominant-baseline\",\"middle\")\n .attr(\"text-anchor\",\"start\")\n .attr(\"font-size\",\"18\")\n .text(function(d){ return d; })\n\n\n }\n\n\n\n\n/////// code for piechart\n\n function fillPie(){\n //codes from http://zeroviscosity.com/d3-js-step-by-step/step-1-a-basic-pie-chart\n //codes from Juan Cruz-Benito (juancb)’s Block 1984c7f2b446fffeedde\n\n var totalCount = 18;\n\n var data = [\n { label: 'Chapel Hill', count: window.movies.name.length, percentage:((window.movies.name.length/totalCount)*100).toPrecision(3) },\n { label: 'Durham', count: window.movies.name2.length, percentage:((window.movies.name2.length/totalCount)*100).toPrecision(3) },\n { label: 'Raleigh', count: window.movies.name3.length, percentage:((window.movies.name3.length/totalCount)*100).toPrecision(3) },\n ];\n console.log(data);\n\n///// determine the size and color of the piechart\n var width = 450;\n var height = 450;\n var radius = Math.min(width, height) / 3;\n var donutWidth = 75; // NEW\n\n var color = d3.scaleOrdinal([\"#38618f\", \"#fd9735\", \"#ff6745\"]);\n\n var svg = d3.select('#movie-pie')\n .append('svg')\n .attr('width', width)\n .attr('height', height)\n .append('g')\n .attr('transform', 'translate(' + (width 
/ 2) +\n ',' + (height / 2) + ')');\n\n var arc = d3.arc()\n .innerRadius(radius - donutWidth)\n .outerRadius(radius);\n\n var pie = d3.pie()\n .value(function(d) { return d.count; })\n .sort(null);\n\n\n var path = svg.selectAll('path')\n .data(pie(data))\n .enter()\n .append('path')\n .attr('d', arc)\n .attr('fill', function(d, i) {\n return color(d.data.label);\n });\n\n var g = svg.selectAll(\".arc\")\n .data(pie(data))\n .enter().append(\"g\");\n\n g.append(\"text\")\n .attr(\"transform\", function(d) {\n var _d = arc.centroid(d);\n _d[0] *= 1;\n _d[1] *= 1;\n return \"translate(\" + _d + \")\";\n })\n .attr(\"dy\", \".50em\")\n .attr('fill','white')\n .style(\"text-anchor\", \"middle\")\n .text(function(d) {\n return d.data.percentage + '%';\n });\n\n g.append(\"text\")\n .attr(\"text-anchor\", \"middle\")\n .attr('font-size', '2em')\n .attr('fill','#8c857d')\n .attr('y', 20)\n\n }\n\n fetchData2()\n"
},
{
"alpha_fraction": 0.5645933151245117,
"alphanum_fraction": 0.5645933151245117,
"avg_line_length": 32,
"blob_id": "73129e3ed1ad5bfc4b23f6a7ccfa94f5b0e6e909",
"content_id": "5b19cead12378af0afccfe8e5cba9e2f4cd58760",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 627,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 19,
"path": "/data-combo.py",
"repo_name": "echo-zhou/final",
"src_encoding": "UTF-8",
"text": "import glob\nimport json\n\n# big thanks to Elisabeth, helps me organize the json\n\ndef cat_json(output_filename, input_filenames):\n with open(output_filename, \"w\") as outfile:\n first = True\n for infile_name in input_filenames:\n with open(infile_name) as infile:\n if first:\n outfile.write('[')\n first = False\n else:\n outfile.write(',')\n outfile.write(infile.read())\n outfile.write(']')\n\ncat_json('merged_theaters.json', ['formatted-ch.json', 'formatted-durham.json', 'formatted-raleigh.json'])\n"
}
] | 9 |
gzoumpourlis/HungaBunga
|
https://github.com/gzoumpourlis/HungaBunga
|
ee433caebc1558aa99eb58f3f882615f47f62b52
|
1d80350655bf03cdc13ee7774f913d6f1a942dc2
|
ffafb22ea000c72cc3b45e0b80a2dbb8f5020af7
|
refs/heads/master
| 2020-07-08T04:45:34.414179 | 2019-08-20T15:04:34 | 2019-08-20T15:04:34 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7443946003913879,
"alphanum_fraction": 0.7443946003913879,
"avg_line_length": 17,
"blob_id": "7eb29944cfd2e13a018a312cfe19f81cc341e9f2",
"content_id": "e35aa234c8a7924b39992190a48252f098b792aa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 223,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 12,
"path": "/example.py",
"repo_name": "gzoumpourlis/HungaBunga",
"src_encoding": "UTF-8",
"text": "\n\nfrom sklearn import datasets\niris = datasets.load_iris()\nx, y = iris.data, iris.target\n\n\n\n\nfrom hunga_bunga import HungaBungaClassifier, HungaBungaRegressor\n\nclf = HungaBungaClassifier()\nclf.fit(x, y)\nclf.predict(x)\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.9021739363670349,
"alphanum_fraction": 0.9021739363670349,
"avg_line_length": 44.5,
"blob_id": "793dbbe1e4961599a228442faa8381e58cd05a50",
"content_id": "80335aee4c0e17eb0932ace00b18eae4aa15a837",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 92,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 2,
"path": "/hunga_bunga/__init__.py",
"repo_name": "gzoumpourlis/HungaBunga",
"src_encoding": "UTF-8",
"text": "\nfrom regression import HungaBungaRegressor\nfrom classification import HungaBungaClassifier\n"
}
] | 2 |
grebnesorbocaj/DogThoughts
|
https://github.com/grebnesorbocaj/DogThoughts
|
3d00bb47090528ab72831e187016d86a61bc4628
|
dd62543fdc43e2d94026b599e105705c3f176c71
|
8d1be5aca6bcf53cf388f31b2d45227a766823e1
|
refs/heads/master
| 2020-04-13T22:00:24.497003 | 2019-03-07T16:43:10 | 2019-03-07T16:43:10 | 163,469,888 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.8536585569381714,
"alphanum_fraction": 0.8536585569381714,
"avg_line_length": 19.5,
"blob_id": "1319987715d4bce87a091edbc92a8f34fe952ffd",
"content_id": "ff0501abce2954c950c1d87263441af957351a6b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 41,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 2,
"path": "/README.md",
"repo_name": "grebnesorbocaj/DogThoughts",
"src_encoding": "UTF-8",
"text": "# DogThoughts\nDogThoughts tweets pulling\n"
},
{
"alpha_fraction": 0.7035398483276367,
"alphanum_fraction": 0.7077802419662476,
"avg_line_length": 32.31645584106445,
"blob_id": "b70ac5e07f4f4ca891c23f4d3a02f64f6e2164d6",
"content_id": "d1dbbf85932f2714f83f373ec98fb496dfee1480",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5424,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 158,
"path": "/my_test.py",
"repo_name": "grebnesorbocaj/DogThoughts",
"src_encoding": "UTF-8",
"text": "from selenium import webdriver\r\nfrom selenium.webdriver.common.keys import Keys\r\nfrom selenium.webdriver.chrome.options import Options\r\nimport time\r\nfrom functools import wraps\r\n\r\ndef timer(f):\r\n\t@wraps(f)\r\n\tdef timer_wrapper(*args, **kwargs):\r\n\t\tstartTime = time.perf_counter()\r\n\t\tf(*args, **kwargs)\r\n\t\tendTime = time.perf_counter()\r\n\t\trunTime = endTime - startTime\r\n\t\tprint(\"Time taken:\", runTime)\r\n\treturn timer_wrapper\r\n\r\n@timer\r\ndef runner(driver, searchXpath):\r\n\tprint(\"Open chrome\")\r\n\tdriver.get(\"https://2019\")\r\n\tprint(\"Driver.get SSI Choice\")\r\n\telem = driver.find_element_by_xpath(\"//input[@name='zipCode']\")\r\n\tprint(\"find zipcodeinput\")\r\n\telem.clear()\r\n\telem.send_keys(\"75080\")\r\n\telem = driver.find_element_by_xpath(searchXpath)\r\n\telem.click()\r\n\t# elem = driver.find_element_by_xpath(\"//label[contains(.,'Community/Retail')]/preceding-sibling::input\")\r\n\t# elem.click()\r\n\telem = driver.find_element_by_xpath(\"//button[contains(.,'Search')]\")\r\n\telem.click()\r\n\telem = driver.find_element_by_xpath(\"//table[@class='pharmacy-locator-table']//tbody//tr//td[@class='column2']\")\r\n\tprint(elem.text)\r\n\r\n\r\ndef checkForNext(driver):\r\n\tstatus = False\r\n\ttry:\r\n\t\tnextButton = driver.find_element_by_xpath(\"//div[1]/a[contains(.,'Next')]\")\r\n\t\tstatus = True\r\n\texcept:\r\n\t\tstatus = False\r\n\t\r\n\treturn status\r\n\r\ndef checkPreferred(driver):\r\n\trun = True\r\n\twhile(run):\r\n\t\tpreffereds = driver.find_elements_by_xpath(\"//table[@class='pharmacy-locator-table']//td[@class='column2']/span[@class='preferred']\")\r\n\t\trows = driver.find_elements_by_xpath(\"//table[@class='pharmacy-locator-table']//td[@class='column2']\")\r\n\t\tcountPref = len(preffereds)\r\n\t\tcountRows = len(rows)\r\n\t\tif countPref != countRows:\r\n\t\t\tprint(\"Failed\")\r\n\t\t\tprint(\"CountRows {countRows} and countPref {countPref}\")\r\n\t\telif countPref == 
countRows:\r\n\t\t\tprint(\"Passed\")\r\n\t\tif checkForNext(driver):\r\n\t\t\tdriver.find_element_by_xpath(\"//div[1]/a[contains(.,'Next')]\").click()\r\n\t\telse:\r\n\t\t\trun = False\r\n\treturn\r\n\r\n\r\ndef checkIndianHealthService(driver):\r\n\tpreffereds = driver.find_elements_by_xpath(\"//table[@class='pharmacy-locator-table']//td[@class='column2']/ul[contains(.,'Compounds')]\")\r\n\trows = driver.find_elements_by_xpath(\"//table[@class='pharmacy-locator-table']//td[@class='column2']\")\r\n\tcountPref = len(preffereds)\r\n\tcountRows = len(rows)\r\n\r\ndef checkCompounds(driver):\r\n\tpreffereds = driver.find_elements_by_xpath(\"\")\r\n\trows = driver.find_elements_by_xpath(\"//table[@class='pharmacy-locator-table']//td[@class='column2']\")\r\n\tcountPref = len(preffereds)\r\n\tcountRows = len(rows)\r\n\r\n\r\ndef checkPopup(driver):\r\n\ttry:\r\n\t\tdriver.find_element_by_xpath(\"//div[@class='acsFocusFirst acsClassicInvite']//a[contains(.,'No, thanks')]\")\r\n\t\treturn\r\n\texcept:\r\n\t\treturn\r\n\r\ndef checkSpecific(driver, xpath):\r\n\trun = True\r\n\tpageCount = 1\r\n\twhile(run):\r\n\t\tcheckPopup(driver)\r\n\t\tpreffereds = driver.find_elements_by_xpath(xpath)\r\n\t\trows = driver.find_elements_by_xpath(\"//table[@class='pharmacy-locator-table']//td[@class='column2']\")\r\n\t\tcountPref = len(preffereds)\r\n\t\tcountRows = len(rows)\r\n\t\tif countPref != countRows:\r\n\t\t\tprint(\"Failed, page count:\", pageCount)\r\n\t\t\tprint(\"CountRows {countRows} and countPref {countPref}\".format(countRows=countRows,countPref=countPref))\r\n\t\t\tpageCount+=1\r\n\t\telif countPref == countRows:\r\n\t\t\tprint(\"Passed, page count:\", pageCount)\r\n\t\t\tpageCount+=1\r\n\t\tif checkForNext(driver):\r\n\t\t\tdriver.find_element_by_xpath(\"//div[1]/a[contains(.,'Next')]\").click()\r\n\t\telse:\r\n\t\t\trun = False\r\n\treturn\r\n\r\nchrome_options = Options()\r\nchrome_options.add_experimental_option(\"useAutomationExtension\", 
False)\r\nchrome_options.add_argument('--ignore-certificate-errors')\r\nchrome_options.add_argument('--ignore-ssl-errors')\r\nchrome_options.add_argument('-incognito')\r\nchrome_options.add_argument('--start-maximized')\r\nchrome_options.add_argument('--disable-popup-blocking')\r\nchrome_options.add_argument('--disable-extensions')\r\n\r\n\r\ndriver = webdriver.Chrome(chrome_options=chrome_options)\r\nprint(\"Start runner\")\r\n\r\n\r\n\r\n\r\n# searchCommunity = \"//label[contains(.,'Community/Retail')]/preceding-sibling::input\"\r\n# resultCommunity =\r\n# runner(driver, searchCommunity)\r\n# checkSpecific(driver, resultCommunity)\r\n\r\n# searchIndianHeS = \"//label[contains(.,'Indian Health Service')]/preceding-sibling::input\"\r\n# resultIndianHeS =\r\n# runner(driver, searchPreffered)\r\n# checkSpecific(driver, resultPreferred)\r\n\r\n# searchLongTermC = \"//label[contains(.,'Long Term Care')]/preceding-sibling::input\"\r\n# resultLongTermC =\r\n# runner(driver, searchPreffered)\r\n# checkSpecific(driver, resultPreferred)\r\n\r\n# searchHomeInfus = \"//label[contains(.,'Home Infusion Therapy')]/preceding-sibling::input\"\r\n# resultHomeInfus = \r\n# runner(driver, searchPreffered)\r\n# checkSpecific(driver, resultPreferred)\r\n\r\n# searchPrefMail = \"//label[contains(.,'Preferred Mail Service Network')]/preceding-sibling::input\"\r\n# resultPrefMail =\r\n# runner(driver, searchPreffered)\r\n# checkSpecific(driver, resultPreferred)\r\n\r\n\r\n\r\n# searchEmergency = \"//label[contains(.,'Emergency Rx')]/preceding-sibling::input\"\r\n# resultEmergency =\r\n# runner(driver, searchPreffered)\r\n# checkSpecific(driver, resultPreferred)\r\n\r\nsearchElectronic = \"//label[contains(.,'Electronic Prescribing Enabled')]/preceding-sibling::input\"\r\nresultElectonric = \"//table[@class='pharmacy-locator-table']//td[@class='column2']/ul[contains(.,'Electronic Prescribing')]\"\r\nrunner(driver, searchElectronic)\r\ncheckSpecific(driver, resultElectonric)\r\n\r\n"
}
] | 2 |
CptKawasemi/py_budget
|
https://github.com/CptKawasemi/py_budget
|
65a7593c4e1642a5441841932fcd12fc20d41d2c
|
0d276dc5d1832fa6b208e320c31e84129fb86402
|
355c794ca8a808e8e3d78e70bbb32b27410b0b91
|
refs/heads/main
| 2023-02-10T11:18:53.447163 | 2021-01-04T21:40:53 | 2021-01-04T21:40:53 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.49942725896835327,
"alphanum_fraction": 0.5315005779266357,
"avg_line_length": 33,
"blob_id": "6d1451e1eb764019f4ed24be39da6fde54452565",
"content_id": "3e043b3d57a28a577596c02719a87124ef26e49d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 873,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 25,
"path": "/Budget/BudgetList.py",
"repo_name": "CptKawasemi/py_budget",
"src_encoding": "UTF-8",
"text": "from tkinter import *\r\nfrom prettytable import PrettyTable\r\nfrom Module_Model import *\r\n\r\nclass BudgetList():\r\n def __init__(self):\r\n lf=Toplevel()\r\n \r\n lbl=Label(lf)\r\n lbl.config(font=(\"Courier\",9))\r\n lbl.pack()\r\n table=PrettyTable()\r\n table.field_names=[\"ItemName\",\"ItemType\",\"Price\",\"Qty\",\"Total\"]\r\n sql=\"Select ItemName,ItemType,Price,Qty,Qty * Price as 'Total' From budget Order By ItemName;\"\r\n result=Fun_Select(sql)\r\n \r\n for row in result:\r\n col0=\"%-30s\" % row[0] #%-30s% means data to left size of column\r\n col1=\"%-30s\" % row[1]\r\n col2=\"%-30s\" % row[2]\r\n col3=\"%-30s\" % row[3]\r\n col4=\"%-30s\" % row[4]\r\n table.add_row([col0,col1,col2,col3,col4])\r\n \r\n lbl[\"text\"]=table #table is uploaded onto the label"
},
{
"alpha_fraction": 0.6017345190048218,
"alphanum_fraction": 0.6097398400306702,
"avg_line_length": 24.73214340209961,
"blob_id": "822d956103234401f5591ab88e3c7f2e980e0d58",
"content_id": "9c3b18c7595869abbc97c47afdd3f1c902bd2005",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1499,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 56,
"path": "/Budget/BudgetInfo.py",
"repo_name": "CptKawasemi/py_budget",
"src_encoding": "UTF-8",
"text": "from tkinter import*\r\nfrom BudgetEntry import*\r\nfrom BudgetList import *\r\nfrom BudgetUpdate import *\r\n\r\n\r\nclass BudgetInfo():\r\n \r\n def UD_Click(self):\r\n BudgetUpdate()\r\n def Listing_Click(self):\r\n BudgetList()\r\n def Entry_Click(self):\r\n BudgetEntry()\r\n def __init__(self):\r\n self.ShowForm()\r\n \r\n \r\n def Exit_Click(self):\r\n form.destroy()\r\n exit()\r\n\r\n def ShowForm(self):\r\n \r\n for w in form.children.values():#to check the previous display on form\r\n w.destroy()\r\n \r\n menu=Menu()\r\n form.configure(menu=menu) #to upload menu on form, (menu=obj) is default\r\n\r\n filemenu=Menu(tearoff=0) #0 means filemenu is empty, 1 means \"-----\"\r\n #is added to filemenu\r\n menu.add_cascade(menu=filemenu,label=\"File\")\r\n\r\n #filemenu1=Menu(tearoff=0)\r\n #menu.add_cascade(menu=filemenu1,label=\"Edit\")\r\n \r\n filemenu.add_command(label=\"Entry\",command=self.Entry_Click)\r\n filemenu.add_command(label=\"Update/Delete\",command=self.UD_Click)\r\n filemenu.add_command(label=\"Listing\",command=self.Listing_Click)\r\n\r\n filemenu.add_separator()\r\n filemenu.add_command(label=\"Exit\",command=self.Exit_Click)\r\n\r\nform=Tk()\r\nform.title(\"Personal Budget Program\")\r\n\r\nwidth=form.winfo_screenwidth()\r\nheight=form.winfo_screenheight()\r\nform.geometry(str(width)+'x'+str(height))\r\n\r\n#form.geometry('500x500')\r\n\r\n\r\nForm=BudgetInfo()\r\nform.mainloop()\r\n\r\n"
},
{
"alpha_fraction": 0.6072948575019836,
"alphanum_fraction": 0.6705167293548584,
"avg_line_length": 28.909090042114258,
"blob_id": "d8890f90f341578305fac4a2473f007b07431b7b",
"content_id": "6558e68c7cf6fa4343bd02185af27db38c7c62f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 1645,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 55,
"path": "/Budget/Budget 20201001 1442.sql",
"repo_name": "CptKawasemi/py_budget",
"src_encoding": "UTF-8",
"text": "-- MySQL Administrator dump 1.4\n--\n-- ------------------------------------------------------\n-- Server version\t5.0.45-community-nt\n\n\n/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;\n/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;\n/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;\n/*!40101 SET NAMES utf8 */;\n\n/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;\n/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;\n/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;\n\n\n--\n-- Create schema kmd\n--\n\nCREATE DATABASE IF NOT EXISTS kmd;\nUSE kmd;\n\n--\n-- Definition of table `budget`\n--\n\nDROP TABLE IF EXISTS `budget`;\nCREATE TABLE `budget` (\n `Number` int(11) NOT NULL auto_increment,\n `ItemName` varchar(50) default NULL,\n `ItemType` varchar(50) default NULL,\n `Price` int(11) default NULL,\n PRIMARY KEY (`Number`)\n) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1;\n\n--\n-- Dumping data for table `budget`\n--\n\n/*!40000 ALTER TABLE `budget` DISABLE KEYS */;\nINSERT INTO `budget` (`Number`,`ItemName`,`ItemType`,`Price`) VALUES \n (3,'Chocolate','Food',5000);\n/*!40000 ALTER TABLE `budget` ENABLE KEYS */;\n\n\n\n\n/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;\n/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;\n/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;\n/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;\n/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;\n/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;\n/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;\n"
},
{
"alpha_fraction": 0.5325288772583008,
"alphanum_fraction": 0.5519412159919739,
"avg_line_length": 39.89011001586914,
"blob_id": "d01f88b53e0adf010878e7d2847757ba7734d01f",
"content_id": "639c34db49eccbe1ade0b6beec2266e8b691b589",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3812,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 91,
"path": "/Budget/BudgetEntry.py",
"repo_name": "CptKawasemi/py_budget",
"src_encoding": "UTF-8",
"text": "from tkinter import *\r\nimport Module_Model\r\nfrom tkinter import messagebox\r\n\r\n\r\nclass BudgetEntry():\r\n\r\n def getCType(self):\r\n CT = self.rdoCType.get()\r\n return CT\r\n\r\n def isNumber(self, value):\r\n try:\r\n int(value)\r\n return True\r\n except:\r\n return False\r\n\r\n def btnSave_Click(self):\r\n\r\n if (self.txtItemName.get() == \"\" or self.getCType() == \"\" or self.txtPrice.get() == \"\"):\r\n messagebox.showwarning(\"Program Say\", \"Please Try again after Fill all Record !\")\r\n else:\r\n bn = Module_Model.Fun_Select(\"SELECT ItemName FROM budget WHERE ItemName='\" + self.txtItemName.get() + \"';\")\r\n print(bn)\r\n if (len(bn) > 0):\r\n messagebox.showerror(\"This program say\", \" This Item Name already exists.\")\r\n else:\r\n ItemName = self.txtItemName.get()\r\n print(ItemName)\r\n ItemType = self.getCType()\r\n Price = self.txtPrice.get()\r\n Qty = self.txtQty.get()\r\n sql = \"INSERT INTO budget(ItemName,ItemType,Price,Qty) VALUES ('\" + ItemName + \"','\" + ItemType + \"',\" + Price + \",\" + Qty + \");\"\r\n Module_Model.Fun_Execute(sql)\r\n messagebox.showinfo(\"Saving\", \"Successfully Save Record\")\r\n self.txtItemName.delete(0, END)\r\n self.txtPrice.delete(0, END)\r\n self.txtQty.delete(0, END)\r\n self.txtItemName.focus()\r\n\r\n def __init__(self): # default constructor\r\n\r\n # root = Tk()\r\n global rdoCType\r\n self.rdoCType = StringVar()\r\n lf = Toplevel()\r\n lf.title(\"Item Entry\")\r\n lf.geometry(\"600x400\")\r\n\r\n # syntax of Label ( master, option, ... )\r\n # place the data label and entry in the frame\r\n Label(lf, text=\"Enter Item Name : \").grid(row=0, column=0, pady=\"0.5c\", padx=\"1c\")\r\n self.txtItemName = Entry(lf) # Entry( master, option, ... 
)\r\n self.txtItemName[\"width\"] = 40\r\n self.txtItemName.grid(row=0, column=1, pady=\"0.5c\")\r\n self.txtItemName.focus()\r\n\r\n Label(lf, text=\"Choose Item Type : \").grid(row=1, column=0, pady=\"0.5c\", padx=\"1c\")\r\n\r\n frame = Frame(lf)\r\n frame.grid(row=1, column=1)\r\n\r\n self.ty = Radiobutton(frame, text=\"Food\", variable=self.rdoCType, value=\"Food\", command=self.getCType)\r\n self.ty.grid(row=0, column=0, pady=\"0.5c\")\r\n self.ty.deselect()\r\n self.ty = Radiobutton(frame, text=\"Clothing\", variable=self.rdoCType, value=\"Clothing\", command=self.getCType)\r\n self.ty.grid(row=0, column=1, pady=\"0.5c\")\r\n self.ty.deselect()\r\n self.ty = Radiobutton(frame, text=\"Extras\", variable=self.rdoCType, value=\"Extras\", command=self.getCType)\r\n self.ty.grid(row=0, column=2, pady=\"0.5c\")\r\n self.ty.deselect()\r\n\r\n Label(lf, text=\"Enter Price : \").grid(row=2, column=0, pady=\"0.5c\", padx=\"1c\")\r\n # Entry( master, option, ... ) Option = key-value pair\r\n self.txtPrice = Entry(lf)\r\n self.txtPrice[\"width\"] = 40\r\n self.txtPrice.grid(row=2, column=1, pady=\"0.5c\")\r\n\r\n Label(lf, text=\"Enter Qty : \").grid(row=3, column=0, pady=\"0.5c\", padx=\"1c\")\r\n # Entry( master, option, ... ) Option = key-value pair\r\n self.txtQty = Entry(lf)\r\n self.txtQty[\"width\"] = 40\r\n self.txtQty.grid(row=3, column=1, pady=\"0.5c\")\r\n\r\n self.btnSave = Button(lf, text=\"Save\")\r\n self.btnSave.grid(row=4, column=1, pady=\"0.5c\", padx=\"1c\", columnspan=2)\r\n self.btnSave[\"command\"] = self.btnSave_Click\r\n self.btnclose = Button(lf, text=\"Close\")\r\n self.btnclose.grid(row=4, column=0, pady=\"0.5c\", padx=\"2.5c\")\r\n self.btnclose[\"command\"] = lf.destroy\r\n"
},
{
"alpha_fraction": 0.5422876477241516,
"alphanum_fraction": 0.5590536594390869,
"avg_line_length": 38.59090805053711,
"blob_id": "df9f39ec74bcf648e58888d0c3957fd4caf54ad4",
"content_id": "8346e45cd8dd09d42ff539966c8f9c241365c27f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5368,
"license_type": "no_license",
"max_line_length": 155,
"num_lines": 132,
"path": "/Budget/BudgetUpdate.py",
"repo_name": "CptKawasemi/py_budget",
"src_encoding": "UTF-8",
"text": "from tkinter import*\r\nfrom tkinter import ttk\r\nfrom tkinter import messagebox\r\nfrom Module_Model import*\r\nclass BudgetUpdate():\r\n def click_UpDate(self):\r\n \r\n if(self.cbo_itemname.get()==\"\" or self.rdoCType.get()==0 or self.txtPrice.get()==\"\" or self.txtQty.get()==\"\"):\r\n messagebox.showwarning(\"Program Say\",\"Please Try again after Fill all Record !\")\r\n else:\r\n bname=self.cbo_itemname.get();\r\n btype=self.rdoCType.get();\r\n cprice=self.txtPrice.get();\r\n cqty=self.txtQty.get();\r\n Fun_Execute(\"UPDATE kmd.budget SET ItemName='\"+bname+\"',ItemType='\"+btype+\"',Price='\"+cprice+\"',Qty='\"+cqty+\"' WHERE Number=\"+str(self.ID)+\";\")\r\n self.cbo_itemname.delete(0,END)\r\n self.txtPrice.delete(0,END)\r\n self.txtQty.delete(0,END)\r\n messagebox.showinfo(\"Program Say\",\"Update Process Successful\")\r\n \r\n def chooseItem(self):\r\n if(self.rdoCType.get()!=\"\" or self.txtPrice.get()!=\"\"):\r\n itemname=self.cbo_itemname.get()\r\n bookdata=Fun_Select(\"SELECT * FROM budget WHERE ItemName='\"+itemname+\"';\")\r\n row=bookdata[0]\r\n self.ID=row[0]\r\n self.rdoCType.set(row[2])\r\n self.txtPrice.insert(0,row[3])\r\n self.txtQty.insert(0,row[4])\r\n self.uprdo.deselect()\r\n \r\n else:\r\n self.ClearData()\r\n \r\n \r\n \r\n def ComfirmUpDate(self):\r\n if(self.cbo_itemname.get()==\"\"):\r\n messagebox.showerror(\"Program Say\",\"Please Choose Item & Try again\")\r\n \r\n elif(self.cbo_itemname.get() ==\"\" or self.txtPrice.get()!=\"\"):\r\n messagebox.showerror(\"Program Say\",\"Please Click Clear & Try again\") \r\n else:\r\n self.chooseItem()\r\n \r\n def isNumber(self,value):\r\n try:\r\n int(value)\r\n return True\r\n except:\r\n return False\r\n\r\n def ClearData(self):\r\n self.txtPrice.delete(0,END)\r\n self.cbo_itemname.delete(0,END)\r\n self.txtQty.delete(0,END)\r\n self.RT.deselect()\r\n self.uprdo.deselect()\r\n\r\n def btnDelete_Click(self):\r\n item=self.cbo_itemname.get()\r\n if(item==\"\" or 
self.rdoCType.get()==\"\" or self.txtPrice.get()==\"\"):\r\n messagebox.showwarning(\"Program Say\",\"Please Fill All Record\")\r\n else:\r\n mymessage=messagebox.askyesno(\"This Program Say\",\"Are You sure You want to Delete!\")\r\n if (mymessage==True):\r\n Fun_Execute(\"DELETE FROM budget WHERE ItemName='\"+self.cbo_itemname.get()+\"';\")\r\n self.ClearData()\r\n messagebox.showinfo(\"This Program Say\",\"Delete Done\")\r\n\r\n\r\n def __init__(self):\r\n lf = Toplevel()\r\n self.cid=0;\r\n self.UpDateSure=IntVar()\r\n lf.title(\"Item Update\")\r\n lf.geometry(\"600x400\")\r\n Label(lf,text=\"Choose Item Name : \").grid(row=0,column=0)\r\n #Option Menu\r\n self.cbo_itemname=ttk.Combobox(lf)\r\n self.cbo_itemname[\"value\"]=Fun_Select(\"SELECT ItemName FROM budget\")\r\n self.cbo_itemname.grid(row=0,column=1)\r\n \r\n self.uprdo=Radiobutton(lf,text=\"UpDate\",variable=self.UpDateSure,value=1,command=self.ComfirmUpDate)\r\n self.uprdo.grid(row=0,column=3,pady=\"0.5c\")\r\n Label(lf,text=\"Choose Item Type : \").grid(row=1,column=0,pady=\"0.5c\",padx=\"1c\")\r\n\r\n frame=Frame(lf)\r\n frame.grid(row=1,column=1)\r\n\r\n self.rdoCType=StringVar()\r\n self.RT=Radiobutton(frame,text=\"Food\", variable=self.rdoCType, value=\"Food\")\r\n self.RT.grid(row=0,column=0)\r\n self.RT.deselect()\r\n self.RT=Radiobutton(frame,text=\"Clothing\", variable=self.rdoCType, value=\"Clothing\")\r\n self.RT.grid(row=0,column=1)\r\n self.RT.deselect()\r\n self.RT=Radiobutton(frame,text=\"Extras\", variable=self.rdoCType, value=\"Extras\")\r\n self.RT.grid(row=0,column=2)\r\n self.RT.deselect()\r\n Label(lf,text=\"Enter Price : \").grid(row=2,column=0,pady=\"0.5c\",padx=\"1c\")\r\n\r\n self.txtPrice=Entry(lf)\r\n self.txtPrice[\"width\"]=40\r\n self.txtPrice.grid(row=2,column=1,pady=\"0.5c\")\r\n \r\n Label(lf,text=\"Enter Qty : \").grid(row=3,column=0,pady=\"0.5c\",padx=\"1c\")\r\n #Entry( master, option, ... 
) Option = key-value pair\r\n self.txtQty=Entry(lf)\r\n self.txtQty[\"width\"]=40\r\n self.txtQty.grid(row=3,column=1,pady=\"0.5c\")\r\n\r\n frame=Frame(lf) \r\n \r\n frame.grid(row=4,column=0,columnspan=3)\r\n\r\n\r\n self.btnclose=Button(frame,text=\"Close\")\r\n self.btnclose.grid(row=0,column=0,padx=\"1.5c\",pady=\"0.5c\")\r\n self.btnclose[\"command\"]=lf.destroy\r\n\r\n self.btnCancel=Button(frame,text=\"Clear\")\r\n self.btnCancel.grid(row=0,column=1,padx=\"1.5c\",pady=\"0.5c\")\r\n self.btnCancel[\"command\"]=self.ClearData\r\n \r\n self.btnUpdate=Button(frame,text=\"Update\")\r\n self.btnUpdate.grid(row=0,column=3,padx=\"1.5c\",pady=\"0.5c\")\r\n self.btnUpdate[\"command\"]=self.click_UpDate\r\n\r\n self.btnDelete=Button(frame,text=\"Delete\")\r\n self.btnDelete.grid(row=0,column=2,pady=\"0.5c\")\r\n self.btnDelete[\"command\"]=self.btnDelete_Click\r\n \r\n"
},
{
"alpha_fraction": 0.6217573285102844,
"alphanum_fraction": 0.6217573285102844,
"avg_line_length": 39.27586364746094,
"blob_id": "d52f38ba05eb3b2d25d578f69bab771d46536b54",
"content_id": "bf92ceb9e0e66a5da3908e4bf282c28f93998d27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1195,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 29,
"path": "/Budget/Module_Model.py",
"repo_name": "CptKawasemi/py_budget",
"src_encoding": "UTF-8",
"text": "import mysql.connector\r\nfrom tkinter import messagebox\r\nfrom mysql.connector import Error\r\ndef Fun_Execute(SqlStatement):\r\n try:\r\n connection=mysql.connector.connect(host='localhost',database='kmd',user='root',password='kmd')\r\n if connection.is_connected():\r\n \r\n db_Info=connection.get_server_info()\r\n print(\"Connected to MySQL Server version\",db_Info)\r\n cursor=connection.cursor()\r\n cursor.execute(SqlStatement)\r\n connection.commit()\r\n connection.close()\r\n except Error as e:\r\n print(\"Error while connecting to MySQL\",e)\r\ndef Fun_Select(SqlStatement):\r\n try:\r\n connection=mysql.connector.connect(host='localhost',database='kmd',user='root',password='kmd')\r\n if connection.is_connected():\r\n db_Info=connection.get_server_info()\r\n print(\"Connected to MySQL Server version\",db_Info)\r\n cursor=connection.cursor()\r\n cursor.execute(SqlStatement)\r\n record=cursor.fetchall()\r\n return record\r\n connection.commit()\r\n except Error as e:\r\n messagebox.showinfo(\"Error while connecting to MySQL\",e)"
}
] | 6 |
Aayush-hub/Password-Cryptography
|
https://github.com/Aayush-hub/Password-Cryptography
|
ae831ec4d336de6697746a0573809f2fca74be0f
|
59fb1ca4c26c5b21359b4094d07ca587a16648ac
|
76fb07aa99ae84da118dd7e27c4448484c542a14
|
refs/heads/master
| 2023-03-02T23:55:23.226787 | 2021-02-07T11:23:42 | 2021-02-07T11:23:42 | 294,312,253 | 3 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.513178288936615,
"alphanum_fraction": 0.5178294777870178,
"avg_line_length": 57.6363639831543,
"blob_id": "a4ba7db7444d9e5d846f196103da181ce9ae0dae",
"content_id": "9533a9d8d08903f3fe6e5f71a584ac493888e7df",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 645,
"license_type": "no_license",
"max_line_length": 196,
"num_lines": 11,
"path": "/Password Decryption.py",
"repo_name": "Aayush-hub/Password-Cryptography",
"src_encoding": "UTF-8",
"text": "f = input(\"Enter password to be decrpted: \\n\") #enter password to be dcrypted\nd = int(input(\"Enter Access Key: \\n\")) #enter access key\n \nSECURE = (('s', '$'),('i', '|'), ('x', '*'), ('k','+'), ('g','4'),('a','@'),('o','0'),('X','x'),('%','#'),('w','^'),('h','='),('H','~'),('r',','),('e','/'),('t','<'),('q','>'),('l','?'),('c','1'))\ndef securepassword(password):\n for a,b in SECURE:\n password = password.replace(b, a) #replacing encrypted elements with original ones.\n return password\npassword = f\npassword = securepassword(password)\nprint(f\"Your Original Password is: {password}\") #printing original password.\n"
},
{
"alpha_fraction": 0.41651487350463867,
"alphanum_fraction": 0.42167577147483826,
"avg_line_length": 53.31666564941406,
"blob_id": "59397a8fefad6271ef3c16516da2cf77ed0ab552",
"content_id": "319c5e730763571d26886641dc1aec4552735270",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3294,
"license_type": "no_license",
"max_line_length": 204,
"num_lines": 60,
"path": "/Secure password full file.py",
"repo_name": "Aayush-hub/Password-Cryptography",
"src_encoding": "UTF-8",
"text": "import string #built in module\nimport random #built in module\nif __name__ == \"__main__\":\n print(\"----------What you Want to do?------------\")\n print(\"Press 0 to generate and encrpt password\")\n print(\"Press 1 to decrpt password\")\n e = int(input()) #taking input what to-do\n a1 = string.printable #taking all characters,digits and punctuations as string\n if e == 0:\n\n while True:\n\t i = input(\"Enter length of password: \\n\") #checking whether input is integer.\n\t if i.isdigit():\n\t\t break\n\t else:\n\t\t print(\"Please enter a number\")\n\t\t continue\n a = [] #taking empty list\n a.extend(list(a1)) #appending all elements in empty list.\n random.shuffle(a) #shuffling all elements and choosing random elements\n print(\"Your Password is: \" ,end=\"\")\n c = \"\".join(a[0:int(i)])\n print(c) #printing password generated.\n while True:\n\t j = input(\"Set your Access Key: \\n\") #setting access key\n\t if j.isdigit(): #checking whether input is integer.\n\t\t break\n\t else:\n\t\t print(\"Please enter a number\")\n\t\t continue\n\t\t\t\n\t#SECURE can be changed according to need.\n\t\n SECURE = (('s', '$'),('i', '|'), ('x', '*'), ('k','+'), ('g','4'),('a','@'),('o','0'),('X','x'),('%','#'),('w','^'),('h','='),('H','~'),('r',','),('e','/'),('t','<'),('q','>'),('l','?'),('c','1'))\n def securepassword(password):\n for a,b in SECURE:\n password = password.replace(a, b) #replacing original elements to encrypted ones.\n return password\n password = c\n password = securepassword(password)\n print(f\"Your Encrpted Password is: {password}\") #printing encrypted password\n elif e==1:\n f = input(\"Enter password to be decrpted: \\n\") #enter password to be decrypted\n while True:\n\t j = input(\"Enter your Access Key: \\n\") #entering access key\n\t if j.isdigit(): #checking whether input is integer.\n\t\t break\n\t else:\n\t\t print(\"Please enter a number\")\n\t\t continue\n SECURE = (('s', '$'),('i', '|'), ('x', '*'), ('k','+'), 
('g','4'),('a','@'),('o','0'),('X','x'),('%','#'),('w','^'),('h','='),('H','~'),('r',','),('e','/'),('t','<'),('q','>'),('l','?'),('c','1'))\n def securepassword(password):\n for a,b in SECURE:\n password = password.replace(b, a)\n return password\n password = f\n password = securepassword(password)\n print(f\"Your Password is: {password}\") #printing original password \n else: #checking condition if user input is other than 0 or 1.\n print(\"Enter 0 or 1\") \n"
},
{
"alpha_fraction": 0.46504560112953186,
"alphanum_fraction": 0.4680851101875305,
"avg_line_length": 53.38888931274414,
"blob_id": "b9f9afead512b09108865f45d46a73e1f1c40ca8",
"content_id": "7e5d2b413a866b3f954d4ba48e7e5cf2dd39a623",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 987,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 18,
"path": "/Password Generation.py",
"repo_name": "Aayush-hub/Password-Cryptography",
"src_encoding": "UTF-8",
"text": "import string #built-in module\nimport random #built-in module\nif __name__ == \"__main__\":\n a1 = string.printable #taking all characters,digits and punctuations as string\n i = input(\"Enter length of password: \\n\") #taking length of password required as input\n while True:\n\t \n\t if i.isdigit(): #checking whether input is interger.\n\t\t break\n\t else:\n\t\t print(\"Please enter a number\")\n\t\t continue\n a = [] #taking empty list\n a.extend(list(a1)) #appending all elements in empty list.\n random.shuffle(a) #shuffling all elements and choosing random elements\n print(\"Your Password is: \" ,end=\"\")\n c = \"\".join(a[0:int(i)])\n print(c) #printing password generated.\n \n"
},
{
"alpha_fraction": 0.7468679547309875,
"alphanum_fraction": 0.7584323883056641,
"avg_line_length": 39.389610290527344,
"blob_id": "3e9c5cca60f72decb49f0685c7ec451d33b4ce38",
"content_id": "a2ccfdd9cfcb1a737a8604859eb76f22909cdc2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3113,
"license_type": "no_license",
"max_line_length": 928,
"num_lines": 77,
"path": "/README.md",
"repo_name": "Aayush-hub/Password-Cryptography",
"src_encoding": "UTF-8",
"text": "# Password-Cryptography\n\n<p align=\"center\">\n<img src=\"https://media.giphy.com/media/077i6AULCXc0FKTj9s/giphy.gif\" align= \"center\"/>\n</p>\n\nIn today's world internet is the new fuel and most important for an individual in this world is privacy. So, to keep your data safe you require a strong password which can not be guessed. This script helps you to generate, encrypt and decrypt passwords. This script generates password from 52 alphabetic characters (uppercase and lowercase), 10 digits, and 16 special symbols and encrpyts it on the go with asking a access key from user and setting it. With the encryted password it would take 28 Billion years for anyone having no access key to crack it. But I have made it easy for user who want to distribute their data in secure way as this repo also contains decrypter, from which you can easliy decrypt your passwords after entering access key. By this way you can easily distribute data giving access key to trusted person and decrypter script. 
This project is in development phase and more features are to be added soon.\n\n\n\n\n<img align=\"right\" src=\"https://media.giphy.com/media/loXfQtPqLxGmbLs9h2/giphy.gif\" width = \"380\" height = \"340\">\n\n## Quick Start\n- Clone this repository\n\n git clone https://github.com/Aayush-hub/Password-Cryptography.git\n\n- Change directory\n\n cd Password-Cryptography\n\n \n- Run python file\n\n python <file you want to run>\n\n\n### Secure password full file.py \n\n- This profile is full workflow of generating password, encrypting and decrypting it.\n- Enter 0 to generate, encrpyt password and setting access key to share or enter 1 to decrypt password.\n- If you have choosen 0, then input the required length of password to be generated.\n- After generating password set access key and get the encrypted password.\n- If you have choosen 0, then input the required length of password to be generated.\n- After generating password set access key and get the encrypted password.\n\n\n\n### Password Generation\n- This file is for generating passwords using 52 alphabetic characters (uppercase and lowercase), 10 digits, and 16 special symbols.\n- Enter required length of your password. (Pro Tip: It is suggested that password should be of 8 characters long to be strong) \n\n\n\n### Password Encryption\n- This file is for encrypting your pre-generated password or a personal password.\n- Just type your password, set access key and get encrypted one.\n\n\n\n### Password Decryption\n- This file is for decryption of your passwords.\n- Just enter your encrypted password , access key and get the original one.\n\n\n\n## Contributing to this project!! 
\n\n- Fork this repository\n\n- Clone the Repository \n\n- Add upstream \n\n git add upstream https://github.com/Aayush-hub/Password-Cryptography.git\n git remote -v, to check upstream successfully added\n\n- Add your files or make changes in files\n\n- Add Changes ` git add .`\n\n- Commit Changes ` git commit -m \"your message here\" `\n\n- Push changes ` git push `\n\n- Make Pull Request\n\n\n\n"
},
{
"alpha_fraction": 0.5068078637123108,
"alphanum_fraction": 0.5113464593887329,
"avg_line_length": 65.0999984741211,
"blob_id": "5faef2f9a730c7c8d604055ce6f61374d7846fad",
"content_id": "db0465b03f7bb1a52b6c50fe827ac571465152d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 661,
"license_type": "no_license",
"max_line_length": 196,
"num_lines": 10,
"path": "/Password Encryption.py",
"repo_name": "Aayush-hub/Password-Cryptography",
"src_encoding": "UTF-8",
"text": "inp = input(\"Enter password to be encrypted: \\n\") #taking password to be encrypted\nd = int(input(\"Set Access Key: \\n\")) #setting access key\nSECURE = (('s', '$'),('i', '|'), ('x', '*'), ('k','+'), ('g','4'),('a','@'),('o','0'),('X','x'),('%','#'),('w','^'),('h','='),('H','~'),('r',','),('e','/'),('t','<'),('q','>'),('l','?'),('c','1'))\ndef securepassword(password):\n for a,b in SECURE:\n password = password.replace(a, b) #replacing orginal elements to encrypted ones.\n return password\npassword = inp\npassword = securepassword(password)\nprint(f\"Your Encrpted Password is: {password}\") #printing encryted password\n"
}
] | 5 |
viswanathgs/pantheon
|
https://github.com/viswanathgs/pantheon
|
eea6656fadf8dba9b912af39a4e5e418466c6fb6
|
5937edc3c99309d8a84d545c33d08f5e45bbbfb5
|
7e7190150e7eaba1d56a4c1ea9c0caa0fc6b55d7
|
refs/heads/master
| 2020-07-29T10:16:41.136091 | 2020-06-09T13:56:45 | 2020-06-09T13:56:45 | 209,759,714 | 1 | 1 | null | 2019-09-20T09:56:23 | 2020-05-12T16:42:44 | 2020-06-09T13:56:46 |
Python
|
[
{
"alpha_fraction": 0.6430903077125549,
"alphanum_fraction": 0.645266592502594,
"avg_line_length": 33.67924499511719,
"blob_id": "0b8774428f72acb396d35db49f91b80a0731246c",
"content_id": "a7254248acbf75f8a2e1245da47d81d9180a8334",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1838,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 53,
"path": "/src/wrappers/arg_parser.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "import argparse\n\n\ndef parse_wrapper_args(run_first):\n if run_first != 'receiver' and run_first != 'sender':\n sys.exit('Specify \"receiver\" or \"sender\" to run first')\n\n parser = argparse.ArgumentParser()\n subparsers = parser.add_subparsers(dest='option')\n\n subparsers.add_parser(\n 'deps', help='print a space-separated list of build dependencies')\n subparsers.add_parser(\n 'run_first', help='print which side (sender or receiver) runs first')\n subparsers.add_parser(\n 'setup', help='set up the scheme (required to be run at the first '\n 'time; must make persistent changes across reboots)')\n subparsers.add_parser(\n 'setup_after_reboot', help='set up the scheme (required to be run '\n 'every time after reboot)')\n\n receiver_parser = subparsers.add_parser('receiver', help='run receiver')\n sender_parser = subparsers.add_parser('sender', help='run sender')\n\n if run_first == 'receiver':\n receiver_parser.add_argument('port', help='port to listen on')\n sender_parser.add_argument(\n 'ip', metavar='IP', help='IP address of receiver')\n sender_parser.add_argument('port', help='port of receiver')\n else:\n sender_parser.add_argument('port', help='port to listen on')\n receiver_parser.add_argument(\n 'ip', metavar='IP', help='IP address of sender')\n receiver_parser.add_argument('port', help='port of sender')\n\n sender_parser.add_argument(\n '--extra_args', metavar=\"--arg1=val1 --arg2=val2...\",\n default='', help='extra arguments for the sender')\n\n args = parser.parse_args()\n\n if args.option == 'run_first':\n print run_first\n\n return args\n\n\ndef receiver_first():\n return parse_wrapper_args('receiver')\n\n\ndef sender_first():\n return parse_wrapper_args('sender')\n"
},
{
"alpha_fraction": 0.5922509431838989,
"alphanum_fraction": 0.5940959453582764,
"avg_line_length": 24.809524536132812,
"blob_id": "21fbdd628944e368f2020908b3aaa7c872817f78",
"content_id": "fb35fbcd3eddb7af57c96daaa582d1bbbf62b193",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1084,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 42,
"path": "/src/wrappers/verus.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\ndef main():\n args = arg_parser.sender_first()\n\n cc_repo = path.join(context.third_party_dir, 'verus')\n send_src = path.join(cc_repo, 'src', 'verus_server')\n recv_src = path.join(cc_repo, 'src', 'verus_client')\n\n if args.option == 'deps':\n print 'libtbb-dev libasio-dev libalglib-dev libboost-system-dev'\n return\n\n if args.option == 'setup':\n # apply patch to reduce MTU size\n utils.apply_patch('verus.patch', cc_repo)\n\n sh_cmd = './bootstrap.sh && ./configure && make -j'\n check_call(sh_cmd, shell=True, cwd=cc_repo)\n return\n\n if args.option == 'sender':\n cmd = [send_src, '-name', utils.tmp_dir, '-p', args.port, '-t', '75']\n check_call(cmd, cwd=utils.tmp_dir)\n return\n\n if args.option == 'receiver':\n cmd = [recv_src, args.ip, '-p', args.port]\n check_call(cmd, cwd=utils.tmp_dir)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.613511860370636,
"alphanum_fraction": 0.6202069520950317,
"avg_line_length": 30.596153259277344,
"blob_id": "305c2cea090b0cc1f51e62faf2b8563ef7ff1416",
"content_id": "cf8beae0d9e10e98f0fe528eaef93c84d5b766bf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1643,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 52,
"path": "/src/helpers/kernel_ctl.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "import sys\n\nfrom subprocess_wrappers import call, check_output, check_call\n\n\ndef load_kernel_module(module):\n if call('sudo modprobe ' + module, shell=True) != 0:\n sys.exit('%s kernel module is not available' % module)\n\n\ndef enable_congestion_control(cc):\n cc_list = check_output('sysctl net.ipv4.tcp_allowed_congestion_control',\n shell=True)\n cc_list = cc_list.split('=')[-1].split()\n\n # return if cc is already in the allowed congestion control list\n if cc in cc_list:\n return\n\n cc_list.append(cc)\n check_call('sudo sysctl -w net.ipv4.tcp_allowed_congestion_control=\"%s\"'\n % ' '.join(cc_list), shell=True)\n\n\ndef check_qdisc(qdisc):\n curr_qdisc = check_output('sysctl net.core.default_qdisc', shell=True)\n curr_qdisc = curr_qdisc.split('=')[-1].strip()\n\n if qdisc != curr_qdisc:\n sys.exit('Error: current qdisc %s is not %s' % (curr_qdisc, qdisc))\n\n\ndef set_qdisc(qdisc):\n curr_qdisc = check_output('sysctl net.core.default_qdisc', shell=True)\n curr_qdisc = curr_qdisc.split('=')[-1].strip()\n\n if curr_qdisc != qdisc:\n check_call('sudo sysctl -w net.core.default_qdisc=%s' % qdisc,\n shell=True)\n sys.stderr.write('Changed default_qdisc from %s to %s\\n'\n % (curr_qdisc, qdisc))\n\n\ndef enable_ip_forwarding():\n check_call('sudo sysctl -w net.ipv4.ip_forward=1', shell=True)\n\n\ndef disable_rp_filter(interface):\n rpf = 'net.ipv4.conf.%s.rp_filter'\n\n check_call('sudo sysctl -w %s=0' % (rpf % interface), shell=True)\n check_call('sudo sysctl -w %s=0' % (rpf % 'all'), shell=True)\n"
},
{
"alpha_fraction": 0.6359074711799622,
"alphanum_fraction": 0.6403679847717285,
"avg_line_length": 29.658119201660156,
"blob_id": "2c94a5c5e05ed6b49418bdb77cea28316fb337a4",
"content_id": "eee1ba90638f161d5059ce9cc5c5f9e3cf4d80c8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3587,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 117,
"path": "/src/analysis/arg_parser.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "import sys\nfrom os import path\nimport argparse\n\nimport context\nfrom helpers import utils\n\n\ndef verify_schemes(schemes):\n schemes = schemes.split()\n all_schemes = utils.parse_config()['schemes'].keys()\n\n for cc in schemes:\n if cc not in all_schemes:\n sys.exit('%s is not a scheme included in src/config.yml' % cc)\n\n\ndef parse_tunnel_graph():\n parser = argparse.ArgumentParser(\n description='evaluate throughput and delay of a tunnel log and '\n 'generate graphs')\n\n parser.add_argument('tunnel_log', metavar='tunnel-log',\n help='tunnel log file')\n parser.add_argument(\n '--throughput', metavar='OUTPUT-GRAPH',\n action='store', dest='throughput_graph',\n help='throughput graph to save as (default None)')\n parser.add_argument(\n '--delay', metavar='OUTPUT-GRAPH',\n action='store', dest='delay_graph',\n help='delay graph to save as (default None)')\n parser.add_argument(\n '--ms-per-bin', metavar='MS-PER-BIN', type=int, default=500,\n help='bin size in ms (default 500)')\n\n args = parser.parse_args()\n return args\n\n\ndef parse_analyze_shared(parser):\n parser.add_argument(\n '--schemes', metavar='\"SCHEME1 SCHEME2...\"',\n help='analyze a space-separated list of schemes '\n '(default: \"cc_schemes\" in pantheon_metadata.json)')\n parser.add_argument(\n '--data-dir', metavar='DIR',\n default=path.join(context.src_dir, 'experiments', 'data'),\n help='directory that contains logs and metadata '\n 'of pantheon tests (default pantheon/experiments/data)')\n\n\ndef parse_plot():\n parser = argparse.ArgumentParser(\n description='plot throughput and delay graphs for schemes in tests')\n\n parse_analyze_shared(parser)\n parser.add_argument('--include-acklink', action='store_true',\n help='include acklink analysis')\n parser.add_argument(\n '--no-graphs', action='store_true', help='only append datalink '\n 'statistics to stats files with no graphs generated')\n\n args = parser.parse_args()\n if args.schemes is not None:\n 
verify_schemes(args.schemes)\n\n return args\n\n\ndef parse_report():\n parser = argparse.ArgumentParser(\n description='generate a PDF report that summarizes test results')\n\n parse_analyze_shared(parser)\n parser.add_argument('--include-acklink', action='store_true',\n help='include acklink analysis')\n\n args = parser.parse_args()\n if args.schemes is not None:\n verify_schemes(args.schemes)\n\n return args\n\n\ndef parse_analyze():\n parser = argparse.ArgumentParser(\n description='call plot.py and report.py')\n\n parse_analyze_shared(parser)\n parser.add_argument('--include-acklink', action='store_true',\n help='include acklink analysis')\n\n args = parser.parse_args()\n if args.schemes is not None:\n verify_schemes(args.schemes)\n\n return args\n\n\ndef parse_over_time():\n parser = argparse.ArgumentParser(\n description='plot a throughput-time graph for schemes in tests')\n\n parse_analyze_shared(parser)\n parser.add_argument(\n '--ms-per-bin', metavar='MS-PER-BIN', type=int, default=500,\n help='bin size in ms (default 500)')\n parser.add_argument(\n '--amplify', metavar='FACTOR', type=float, default=1.0,\n help='amplication factor of output graph\\'s x-axis scale ')\n\n args = parser.parse_args()\n if args.schemes is not None:\n verify_schemes(args.schemes)\n\n return args\n"
},
{
"alpha_fraction": 0.5514612197875977,
"alphanum_fraction": 0.5571791529655457,
"avg_line_length": 23.59375,
"blob_id": "94dfbc27a14ecd662ac77a7accb533f607833ee9",
"content_id": "113bd00e76d36005fd799b80155604c2017e73c8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1574,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 64,
"path": "/src/wrappers/mvfst_rl_random.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n\n# Random RL policy\n\nimport os\nfrom os import path\nimport sys\nimport string\nimport shutil\nimport time\nfrom subprocess import check_call, call, Popen, PIPE\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\nfrom mvfst_rl import setup_mvfst, dependencies_mvfst\n\ndef main():\n args = arg_parser.sender_first()\n\n cc_repo = path.join(context.third_party_dir, 'mvfst-rl')\n src = path.join(cc_repo, '_build/build/traffic_gen/traffic_gen')\n\n if args.option == 'deps':\n dependencies_mvfst()\n return\n\n if args.option == 'setup':\n setup_mvfst(cc_repo)\n return\n\n if args.option == 'sender':\n cmd = [\n src,\n '--mode=server',\n '--host=0.0.0.0', # Server listens on 0.0.0.0\n '--port=%s' % args.port,\n '--cc_algo=rl',\n ] + args.extra_args.split() + [\n # extra_args might have --cc_env_mode already, so we set this\n # at the end to override.\n '--cc_env_mode=random',\n ]\n check_call(cmd)\n return\n\n # We use cubic for the client side to keep things simple. It doesn't matter\n # here as we are simulating server-to-client flow, and the client simply\n # sends a hello message to kick things off.\n if args.option == 'receiver':\n cmd = [\n src,\n '--mode=client',\n '--host=%s' % args.ip,\n '--port=%s' % args.port,\n '--cc_algo=cubic',\n ]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.6338028311729431,
"alphanum_fraction": 0.6384976506233215,
"avg_line_length": 29.428571701049805,
"blob_id": "ff260d07d5e2471d3b3486417e04850672bb3dd9",
"content_id": "021fd2d4eb61ed3aad813b14dd165bb8aa359ef9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 213,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 7,
"path": "/src/experiments/git_summary.sh",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n\necho -n 'branch: '\ngit rev-parse --abbrev-ref @ | head -c -1\necho -n ' @ '\ngit rev-parse @\ngit submodule foreach --quiet 'echo $path @ `git rev-parse @`; git status -s --untracked-files=no --porcelain'\n"
},
{
"alpha_fraction": 0.4917127192020416,
"alphanum_fraction": 0.4953959584236145,
"avg_line_length": 18.39285659790039,
"blob_id": "9e0864e68766d8a76508087add9d78e80f76b1da",
"content_id": "2695ab0c825518733aacf642f5925d185a574e65",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 543,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 28,
"path": "/src/wrappers/cubic.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom subprocess import check_call\n\nimport arg_parser\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n if args.option == 'deps':\n print 'iperf'\n return\n\n if args.option == 'receiver':\n cmd = ['iperf', '-Z', 'cubic', '-s', '-p', args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n cmd = ['iperf', '-Z', 'cubic', '-c', args.ip, '-p', args.port,\n '-t', '75']\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5939913988113403,
"alphanum_fraction": 0.5957081317901611,
"avg_line_length": 24.326086044311523,
"blob_id": "2aded7fed4c1555758dc63f92e5fbfddc775e197",
"content_id": "d7d3cc22d4b3a0bb847e5f830a8d998ea226603a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1165,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 46,
"path": "/src/wrappers/sprout.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'sprout')\n model = path.join(cc_repo, 'src', 'examples', 'sprout.model')\n src = path.join(cc_repo, 'src', 'examples', 'sproutbt2')\n\n if args.option == 'deps':\n print ('libboost-math-dev libssl-dev libprotobuf-dev '\n 'protobuf-compiler libncurses5-dev')\n return\n\n if args.option == 'setup':\n # apply patch to reduce MTU size\n utils.apply_patch('sprout.patch', cc_repo)\n\n sh_cmd = './autogen.sh && ./configure --enable-examples && make -j'\n check_call(sh_cmd, shell=True, cwd=cc_repo)\n return\n\n if args.option == 'receiver':\n os.environ['SPROUT_MODEL_IN'] = model\n cmd = [src, args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n os.environ['SPROUT_MODEL_IN'] = model\n cmd = [src, args.ip, args.port]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.644994854927063,
"alphanum_fraction": 0.644994854927063,
"avg_line_length": 23.846153259277344,
"blob_id": "484b08903eca222c73e12738d9c12f370cd444c9",
"content_id": "a62aa4651bc8f21decdea08e6216a693f8bec3e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 969,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 39,
"path": "/tools/pkill.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport signal\nimport argparse\n\nimport context\nfrom helpers import utils\nfrom helpers.subprocess_wrappers import call\n\n\n# prevent this script from being killed before cleaning up using pkill\ndef signal_handler(signum, frame):\n pass\n\n\ndef main():\n signal.signal(signal.SIGTERM, signal_handler)\n signal.signal(signal.SIGINT, signal_handler)\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\n '--kill-dir', metavar='DIR', help='kill all scripts in the directory')\n args = parser.parse_args()\n\n # kill mahimahi shells and iperf\n pkill = 'pkill -f '\n pkill_cmds = [pkill + 'mm-delay', pkill + 'mm-link', pkill + 'mm-loss',\n pkill + 'mm-tunnelclient', pkill + 'mm-tunnelserver',\n pkill + '-SIGKILL iperf']\n\n if args.kill_dir:\n pkill_cmds.append(pkill + args.kill_dir)\n\n for cmd in pkill_cmds:\n call(cmd, shell=True)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.7358490824699402,
"alphanum_fraction": 0.7358490824699402,
"avg_line_length": 34.33333206176758,
"blob_id": "0c08141b4f8af7136468d159b1d73899408b6650",
"content_id": "d54e14b2d17a6d395046207fb19ab5914cba1fce",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 212,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 6,
"path": "/src/wrappers/context.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "import os\nfrom os import path\nimport sys\nsrc_dir = path.abspath(path.join(path.dirname(__file__), os.pardir))\nthird_party_dir = path.abspath(path.join(src_dir, os.pardir, 'third_party'))\nsys.path.append(src_dir)\n"
},
{
"alpha_fraction": 0.49577465653419495,
"alphanum_fraction": 0.5022369623184204,
"avg_line_length": 32.80952453613281,
"blob_id": "9612722d197407756ff0b1bac1dec60c6ef030ec",
"content_id": "ed4bf789b7194c942385cebf99f7f27780e79f02",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12070,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 357,
"path": "/src/analysis/plot.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\nimport sys\nimport math\nimport json\nimport multiprocessing\nfrom multiprocessing.pool import ThreadPool\nimport numpy as np\nimport matplotlib_agg\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\n\nimport arg_parser\nimport tunnel_graph\nimport context\nfrom helpers import utils\n\n\nclass Plot(object):\n def __init__(self, args):\n self.data_dir = path.abspath(args.data_dir)\n self.include_acklink = args.include_acklink\n self.no_graphs = args.no_graphs\n\n metadata_path = path.join(self.data_dir, 'pantheon_metadata.json')\n meta = utils.load_test_metadata(metadata_path)\n self.cc_schemes = utils.verify_schemes_with_meta(args.schemes, meta)\n\n self.run_times = meta['run_times']\n self.flows = meta['flows']\n self.runtime = meta['runtime']\n self.expt_title = self.generate_expt_title(meta)\n\n def generate_expt_title(self, meta):\n if meta['mode'] == 'local':\n expt_title = 'local test in mahimahi, '\n elif meta['mode'] == 'remote':\n txt = {}\n for side in ['local', 'remote']:\n txt[side] = []\n\n if '%s_desc' % side in meta:\n txt[side].append(meta['%s_desc' % side])\n else:\n txt[side].append(side)\n\n txt[side] = ' '.join(txt[side])\n\n if meta['sender_side'] == 'remote':\n sender = txt['remote']\n receiver = txt['local']\n else:\n receiver = txt['remote']\n sender = txt['local']\n\n expt_title = 'test from %s to %s, ' % (sender, receiver)\n\n runs_str = 'run' if meta['run_times'] == 1 else 'runs'\n expt_title += '%s %s of %ss each per scheme\\n' % (\n meta['run_times'], runs_str, meta['runtime'])\n\n if meta['flows'] > 1:\n expt_title += '%s flows with %ss interval between flows' % (\n meta['flows'], meta['interval'])\n\n return expt_title\n\n def parse_tunnel_log(self, cc, run_id):\n log_prefix = cc\n if self.flows == 0:\n log_prefix += '_mm'\n\n error = False\n ret = None\n\n link_directions = ['datalink']\n if self.include_acklink:\n link_directions.append('acklink')\n\n for 
link_t in link_directions:\n            log_name = log_prefix + '_%s_run%s.log' % (link_t, run_id)\n            log_path = path.join(self.data_dir, log_name)\n\n            if not path.isfile(log_path):\n                sys.stderr.write('Warning: %s does not exist\\n' % log_path)\n                error = True\n                continue\n\n            if self.no_graphs:\n                tput_graph_path = None\n                delay_graph_path = None\n            else:\n                tput_graph = cc + '_%s_throughput_run%s.png' % (link_t, run_id)\n                tput_graph_path = path.join(self.data_dir, tput_graph)\n\n                delay_graph = cc + '_%s_delay_run%s.png' % (link_t, run_id)\n                delay_graph_path = path.join(self.data_dir, delay_graph)\n\n            sys.stderr.write('$ tunnel_graph %s\\n' % log_path)\n            try:\n                tunnel_results = tunnel_graph.TunnelGraph(\n                    tunnel_log=log_path,\n                    throughput_graph=tput_graph_path,\n                    delay_graph=delay_graph_path).run()\n            except Exception as exception:\n                sys.stderr.write('Error: %s\\n' % exception)\n                sys.stderr.write('Warning: \"tunnel_graph %s\" failed but '\n                                 'continued to run.\\n' % log_path)\n                error = True\n\n            if error:\n                continue\n\n            if link_t == 'datalink':\n                ret = tunnel_results\n                duration = tunnel_results['duration'] / 1000.0\n\n                if duration < 0.8 * self.runtime:\n                    sys.stderr.write(\n                        'Warning: \"tunnel_graph %s\" had duration %.2f seconds '\n                        'but should have been around %s seconds. Ignoring this'\n                        ' run.\\n' % (log_path, duration, self.runtime))\n                    error = True\n\n        if error:\n            return None\n\n        return ret\n\n    def update_stats_log(self, cc, run_id, stats):\n        stats_log_path = path.join(\n            self.data_dir, '%s_stats_run%s.log' % (cc, run_id))\n\n        if not path.isfile(stats_log_path):\n            sys.stderr.write('Warning: %s does not exist\\n' % stats_log_path)\n            return None\n\n        saved_lines = ''\n\n        # back up old stats logs\n        with open(stats_log_path) as stats_log:\n            for line in stats_log:\n                if any([x in line for x in [\n                        'Start at:', 'End at:', 'clock offset:']]):\n                    saved_lines += line\n                else:\n                    continue\n\n        # write to new stats log\n        with open(stats_log_path, 'w') as stats_log:\n            stats_log.write(saved_lines)\n\n            if stats:\n                stats_log.write('\\n# Below is generated by %s at %s\\n' %\n                                (path.basename(__file__), utils.utc_time()))\n                stats_log.write('# Datalink statistics\\n')\n                stats_log.write('%s' % stats)\n\n    def eval_performance(self):\n        perf_data = {}\n        stats = {}\n\n        for cc in self.cc_schemes:\n            perf_data[cc] = {}\n            stats[cc] = {}\n\n        cc_id = 0\n        run_id = 1\n        pool = ThreadPool(processes=multiprocessing.cpu_count())\n\n        while cc_id < len(self.cc_schemes):\n            cc = self.cc_schemes[cc_id]\n            perf_data[cc][run_id] = pool.apply_async(\n                self.parse_tunnel_log, args=(cc, run_id))\n\n            run_id += 1\n            if run_id > self.run_times:\n                run_id = 1\n                cc_id += 1\n\n        for cc in self.cc_schemes:\n            for run_id in xrange(1, 1 + self.run_times):\n                perf_data[cc][run_id] = perf_data[cc][run_id].get()\n\n                if perf_data[cc][run_id] is None:\n                    continue\n\n                stats_str = perf_data[cc][run_id]['stats']\n                self.update_stats_log(cc, run_id, stats_str)\n                stats[cc][run_id] = stats_str\n\n        sys.stderr.write('Appended datalink statistics to stats files in %s\\n'\n                         % self.data_dir)\n\n        return perf_data, stats\n\n    def xaxis_log_scale(self, ax, min_delay, max_delay):\n        if min_delay < -2:\n            x_min = int(-math.pow(2, math.ceil(math.log(-min_delay, 2))))\n        elif min_delay < 0:\n            x_min = -2\n        elif min_delay < 2:\n            x_min = 0\n        else:\n            x_min = int(math.pow(2, math.floor(math.log(min_delay, 2))))\n\n        if max_delay < -2:\n            x_max = int(-math.pow(2, math.floor(math.log(-max_delay, 2))))\n        elif max_delay < 0:\n            x_max = 0\n        elif max_delay < 2:\n            x_max = 2\n        else:\n            x_max = int(math.pow(2, math.ceil(math.log(max_delay, 2))))\n\n        symlog = False\n        if x_min <= -2:\n            if x_max >= 2:\n                symlog = True\n        elif x_min == 0:\n            if x_max >= 8:\n                symlog = True\n        elif x_min >= 2:\n            if x_max > 4 * x_min:\n                symlog = True\n\n        if symlog:\n            ax.set_xscale('symlog', basex=2, linthreshx=2, linscalex=0.5)\n            ax.set_xlim(x_min, x_max)\n            ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%d'))\n\n    def plot_throughput_delay(self, data):\n        min_raw_delay = sys.maxint\n        min_mean_delay = sys.maxint\n        max_raw_delay = -sys.maxint\n        max_mean_delay = -sys.maxint\n\n        fig_raw, ax_raw = plt.subplots()\n        fig_mean, ax_mean = plt.subplots()\n\n        schemes_config = utils.parse_config()['schemes']\n        for cc in data:\n            if not data[cc]:\n                sys.stderr.write('No performance data for scheme %s\\n' % cc)\n                continue\n\n            value = data[cc]\n            cc_name = utils.get_scheme_name(cc, schemes_config)\n            cc_base = utils.get_base_scheme(cc)\n            color = schemes_config[cc_base]['color']\n            marker = schemes_config[cc_base]['marker']\n            y_data, x_data = zip(*value)\n\n            # update min and max raw delay\n            min_raw_delay = min(min(x_data), min_raw_delay)\n            max_raw_delay = max(max(x_data), max_raw_delay)\n\n            # plot raw values\n            ax_raw.scatter(x_data, y_data, color=color, marker=marker,\n                           label=cc_name, clip_on=False)\n\n            # plot the average of raw values\n            x_mean = np.mean(x_data)\n            y_mean = np.mean(y_data)\n\n            # update min and max mean delay\n            min_mean_delay = min(x_mean, min_mean_delay)\n            max_mean_delay = max(x_mean, max_mean_delay)\n\n            ax_mean.scatter(x_mean, y_mean, color=color, marker=marker,\n                            clip_on=False)\n            ax_mean.annotate(cc_name, (x_mean, y_mean))\n\n        for fig_type, fig, ax in [('raw', fig_raw, ax_raw),\n                                  ('mean', fig_mean, ax_mean)]:\n            if fig_type == 'raw':\n                self.xaxis_log_scale(ax, min_raw_delay, max_raw_delay)\n            else:\n                self.xaxis_log_scale(ax, min_mean_delay, max_mean_delay)\n            ax.invert_xaxis()\n\n            yticks = ax.get_yticks()\n            if yticks[0] < 0:\n                ax.set_ylim(bottom=0)\n\n            xlabel = '95th percentile one-way delay (ms)'\n            ax.set_xlabel(xlabel, fontsize=12)\n            ax.set_ylabel('Average throughput (Mbit/s)', fontsize=12)\n            ax.grid()\n\n        # save pantheon_summary.svg and .pdf\n        ax_raw.set_title(self.expt_title.strip(), y=1.02, fontsize=12)\n        lgd = ax_raw.legend(scatterpoints=1, bbox_to_anchor=(1, 0.5),\n                            loc='center left', fontsize=12)\n\n        for graph_format in ['svg', 'pdf']:\n            raw_summary = path.join(\n                self.data_dir, 'pantheon_summary.%s' % graph_format)\n            fig_raw.savefig(raw_summary, dpi=300, bbox_extra_artists=(lgd,),\n                            bbox_inches='tight', pad_inches=0.2)\n\n        # save pantheon_summary_mean.svg and .pdf\n        ax_mean.set_title(self.expt_title +\n                          ' (mean of all runs by scheme)', fontsize=12)\n\n        for graph_format in ['svg', 'pdf']:\n            mean_summary = path.join(\n                self.data_dir, 'pantheon_summary_mean.%s' % graph_format)\n            fig_mean.savefig(mean_summary, dpi=300,\n                             bbox_inches='tight', pad_inches=0.2)\n\n        sys.stderr.write(\n            'Saved throughput graphs, delay graphs, and summary '\n            'graphs in %s\\n' % self.data_dir)\n\n    def run(self):\n        perf_data, stats_logs = self.eval_performance()\n\n        data_for_plot = {}\n        data_for_json = {}\n\n        for cc in perf_data:\n            data_for_plot[cc] = []\n            data_for_json[cc] = {}\n\n            for run_id in perf_data[cc]:\n                if perf_data[cc][run_id] is None:\n                    continue\n\n                tput = perf_data[cc][run_id]['throughput']\n                delay = perf_data[cc][run_id]['delay']\n                if tput is None or delay is None:\n                    continue\n                data_for_plot[cc].append((tput, delay))\n\n                flow_data = perf_data[cc][run_id]['flow_data']\n                if flow_data is not None:\n                    data_for_json[cc][run_id] = flow_data\n\n        if not self.no_graphs:\n            self.plot_throughput_delay(data_for_plot)\n\n        plt.close('all')\n\n        perf_path = path.join(self.data_dir, 'pantheon_perf.json')\n        with open(perf_path, 'w') as fh:\n            json.dump(data_for_json, fh)\n\n\ndef main():\n    args = arg_parser.parse_plot()\n    Plot(args).run()\n\n\nif __name__ == '__main__':\n    main()\n"
},
{
"alpha_fraction": 0.5286678075790405,
"alphanum_fraction": 0.5362563133239746,
"avg_line_length": 20.563636779785156,
"blob_id": "4631b915d3e57365eb1d473fa346c7b1cbe8a589",
"content_id": "8930b51153443fe329d27ea8b0050738f3795ced",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1186,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 55,
"path": "/src/wrappers/mvfst_bbr.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n\nimport os\nfrom os import path\nimport sys\nimport string\nimport shutil\nimport time\nfrom subprocess import check_call, call, Popen, PIPE\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\nfrom mvfst_rl import setup_mvfst, dependencies_mvfst\n\ndef main():\n args = arg_parser.sender_first()\n\n cc_repo = path.join(context.third_party_dir, 'mvfst-rl')\n src = path.join(cc_repo, '_build/build/traffic_gen/traffic_gen')\n\n if args.option == 'deps':\n dependencies_mvfst()\n return\n\n if args.option == 'setup':\n setup_mvfst(cc_repo)\n return\n\n if args.option == 'sender':\n cmd = [\n src,\n '--mode=server',\n '--host=0.0.0.0', # Server listens on 0.0.0.0\n '--port=%s' % args.port,\n '--cc_algo=bbr',\n ] + args.extra_args.split()\n check_call(cmd)\n return\n\n if args.option == 'receiver':\n cmd = [\n src,\n '--mode=client',\n '--host=%s' % args.ip,\n '--port=%s' % args.port,\n '--cc_algo=bbr',\n ]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.4969894587993622,
"alphanum_fraction": 0.5075263381004333,
"avg_line_length": 34.58928680419922,
"blob_id": "a0bc8cbb51d534301bc50c7fbd1e64d6c4842d69",
"content_id": "22290caf8eee465f9750ccab6701a4d24d35f4f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3986,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 112,
"path": "/tests/local_test.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\n\nimport context\nfrom helpers import utils\nfrom helpers.subprocess_wrappers import check_call\n\n\ndef get_sample_config(config_name):\n    if config_name == 'bbr-cubic':\n        config = ('test-name: test-bbr \\n'\n                  'runtime: 30 \\n'\n                  'interval: 1 \\n'\n                  'random_order: true \\n'\n                  'extra_mm_link_args: --uplink-queue=droptail '\n                  '--uplink-queue-args=packets=512 \\n'\n                  'prepend_mm_cmds: mm-delay 30 \\n'\n                  'flows: \\n'\n                  '  - scheme: bbr \\n'\n                  '  - scheme: cubic')\n\n    elif config_name == 'verus-cubic':\n        config = ('test-name: test-bbr \\n'\n                  'runtime: 30 \\n'\n                  'interval: 1 \\n'\n                  'random_order: true \\n'\n                  'extra_mm_link_args: --uplink-queue=droptail '\n                  '--uplink-queue-args=packets=512 \\n'\n                  'prepend_mm_cmds: mm-delay 30 \\n'\n                  'flows: \\n'\n                  '  - scheme: verus \\n'\n                  '  - scheme: cubic')\n\n    config_path = path.join(utils.tmp_dir, '%s.yml' % config_name)\n    with open(config_path, 'w') as f:\n        f.write(config)\n\n    return config_path\n\n\ndef main():\n    curr_dir = path.dirname(path.abspath(__file__))\n    data_trace = path.join(curr_dir, '12mbps_data.trace')\n    ack_trace = path.join(curr_dir, '12mbps_ack.trace')\n\n    test_py = path.join(context.src_dir, 'experiments', 'test.py')\n\n    # test a receiver-first scheme --- cubic\n    cc = 'cubic'\n\n    cmd = ['python', test_py, 'local', '-t', '5', '-f', '0',\n           '--uplink-trace', data_trace, '--downlink-trace', ack_trace,\n           '--pkill-cleanup', '--schemes', '%s' % cc]\n    check_call(cmd)\n\n    cmd = ['python', test_py, 'local', '-t', '5', '-f', '1',\n           '--uplink-trace', data_trace, '--downlink-trace', ack_trace,\n           '--pkill-cleanup', '--schemes', '%s' % cc]\n    check_call(cmd)\n\n    cmd = ['python', test_py, 'local', '-t', '5', '-f', '1',\n           '--run-times', '2', '--uplink-trace', data_trace,\n           '--downlink-trace', ack_trace, '--pkill-cleanup',\n           '--schemes', '%s' % cc]\n    check_call(cmd)\n\n    cmd = ['python', test_py, 'local', '-t', '5', '-f', '2', '--interval', '2',\n           '--uplink-trace', data_trace, '--downlink-trace', ack_trace,\n           '--pkill-cleanup', '--schemes', '%s' % cc]\n    check_call(cmd)\n\n    cmd = ['python', test_py, 'local', '-t', '5', '--pkill-cleanup',\n           '--uplink-trace', data_trace,\n           '--downlink-trace', ack_trace,\n           '--extra-mm-link-args',\n           '--uplink-queue=droptail --uplink-queue-args=packets=200',\n           '--prepend-mm-cmds', 'mm-delay 10',\n           '--append-mm-cmds', 'mm-delay 10',\n           '--schemes', '%s' % cc]\n    check_call(cmd)\n\n    # test a sender-first scheme --- verus\n    cc = 'verus'\n\n    cmd = ['python', test_py, 'local', '-t', '5', '-f', '0',\n           '--uplink-trace', data_trace, '--downlink-trace', ack_trace,\n           '--pkill-cleanup', '--schemes', '%s' % cc]\n    check_call(cmd)\n\n    cmd = ['python', test_py, 'local', '-t', '5', '-f', '1',\n           '--uplink-trace', data_trace, '--downlink-trace', ack_trace,\n           '--pkill-cleanup', '--schemes', '%s' % cc]\n    check_call(cmd)\n\n    # test running with a config file -- two reciever first schemes\n    config = get_sample_config('bbr-cubic')\n    cmd = ['python', test_py, '-c', config, 'local',\n           '--uplink-trace', data_trace, '--downlink-trace', ack_trace,\n           '--pkill-cleanup']\n    check_call(cmd)\n\n    # test running with a config file -- one receiver first, one sender first scheme\n    config = get_sample_config('verus-cubic')\n    cmd = ['python', test_py, '-c', config, 'local',\n           '--uplink-trace', data_trace, '--downlink-trace', ack_trace,\n           '--pkill-cleanup']\n    check_call(cmd)\n\n\nif __name__ == '__main__':\n    main()\n"
},
{
"alpha_fraction": 0.5866957306861877,
"alphanum_fraction": 0.5976008772850037,
"avg_line_length": 25.200000762939453,
"blob_id": "3e64040562e183c377b03edfbce4e5476d854004",
"content_id": "cb3153954c896b7697b055b200a49d1d3e984789",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 917,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 35,
"path": "/src/wrappers/fillp.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'fillp')\n send_dir = path.join(cc_repo, 'client')\n recv_dir = path.join(cc_repo, 'server')\n send_src = path.join(send_dir, 'client')\n recv_src = path.join(recv_dir, 'server')\n\n if args.option == 'receiver':\n os.environ['LD_LIBRARY_PATH'] = recv_dir\n cmd = [recv_src, '-s', '0.0.0.0', '-p', args.port, '-r', 'testcase001']\n check_call(cmd, cwd=utils.tmp_dir)\n return\n\n if args.option == 'sender':\n os.environ['LD_LIBRARY_PATH'] = send_dir\n cmd = [send_src, '-c', args.ip, '-p', args.port, '-r', 'testcase001']\n check_call(cmd, cwd=utils.tmp_dir)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5932560563087463,
"alphanum_fraction": 0.5932560563087463,
"avg_line_length": 23.33333396911621,
"blob_id": "f3529355ecbeb52898bcc53f5d5335f4d9e7cc61",
"content_id": "d2f2ae74943864bf9c9b345ea877582e001681cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 949,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 39,
"path": "/src/wrappers/pcc_experimental.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'pcc-experimental')\n src_dir = path.join(cc_repo, 'src')\n lib_dir = path.join(src_dir, 'core')\n app_dir = path.join(src_dir, 'app')\n send_src = path.join(app_dir, 'pccclient')\n recv_src = path.join(app_dir, 'pccserver')\n\n if args.option == 'setup':\n check_call(['make'], cwd=src_dir)\n return\n\n if args.option == 'receiver':\n os.environ['LD_LIBRARY_PATH'] = path.join(lib_dir)\n cmd = [recv_src, 'recv', args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n os.environ['LD_LIBRARY_PATH'] = path.join(lib_dir)\n cmd = [send_src, 'send', args.ip, args.port]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5438848733901978,
"alphanum_fraction": 0.5503597259521484,
"avg_line_length": 26.799999237060547,
"blob_id": "896393e1ee8accc71855ec68453c5061f2947608",
"content_id": "2308ce13042256d14a2b0a98c86a29ea0f922159",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1390,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 50,
"path": "/tests/remote_test.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\nimport argparse\n\nimport context\nfrom helpers.subprocess_wrappers import check_call\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument('remote', metavar='HOSTADDR:PANTHEON-DIR')\n args = parser.parse_args()\n remote = args.remote\n\n test_py = path.join(context.src_dir, 'experiments', 'test.py')\n\n # test a receiver-first scheme --- cubic\n cc = 'cubic'\n\n cmd = ['python', test_py, 'remote', remote, '--pkill-cleanup',\n '-t', '5', '--schemes', cc]\n check_call(cmd)\n\n cmd = ['python', test_py, 'remote', remote, '-t', '5', '--pkill-cleanup',\n '--run-times', '2', '--schemes', cc]\n check_call(cmd)\n\n cmd = ['python', test_py, 'remote', remote, '-t', '5', '--pkill-cleanup',\n '--sender', 'remote', '--schemes', cc]\n check_call(cmd)\n\n cmd = ['python', test_py, 'remote', remote, '-t', '5', '-f', '2',\n '--pkill-cleanup', '--interval', '2', '--schemes', cc]\n check_call(cmd)\n\n # test a sender-first scheme --- verus\n cc = 'verus'\n\n cmd = ['python', test_py, 'remote', remote, '-t', '5', '--pkill-cleanup',\n '--schemes', cc]\n check_call(cmd)\n\n cmd = ['python', test_py, 'remote', remote, '-t', '5', '--pkill-cleanup',\n '--sender', 'remote', '--schemes', cc]\n check_call(cmd)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.7123287916183472,
"alphanum_fraction": 0.7123287916183472,
"avg_line_length": 23.33333396911621,
"blob_id": "df89bd37abe6ce36c3cd5846ca141060dbe6085c",
"content_id": "88e233fa0498b761368779f1aff0cbd9fff096e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 73,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 3,
"path": "/tools/fetch_submodules.sh",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n\ngit submodule sync && git submodule update --recursive --init\n"
},
{
"alpha_fraction": 0.7407407164573669,
"alphanum_fraction": 0.7407407164573669,
"avg_line_length": 26,
"blob_id": "0c05dd4b6e7339f25c950feeaddb5438cc894a01",
"content_id": "cd0b1cd35fab9a8bc5c1bf46f0ba65c7dad9bd41",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 135,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 5,
"path": "/src/analysis/context.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "import os\nfrom os import path\nimport sys\nsrc_dir = path.abspath(path.join(path.dirname(__file__), os.pardir))\nsys.path.append(src_dir)\n"
},
{
"alpha_fraction": 0.5534307956695557,
"alphanum_fraction": 0.6456692814826965,
"avg_line_length": 30.192981719970703,
"blob_id": "be7167efa88915dfc14dff5ad475e633a273efc9",
"content_id": "e79089efb7765ee9b9827e7cb29eb89fcdb59c25",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1778,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 57,
"path": "/src/experiments/setup_system.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport sys\nimport arg_parser\n\nimport context\nfrom helpers import kernel_ctl\nfrom helpers.subprocess_wrappers import check_call\n\n\ndef sysctl(metric, value):\n check_call(\"sudo sysctl -w %s='%s'\" % (metric, value), shell=True)\n\n\ndef main():\n args = arg_parser.parse_setup_system()\n\n # enable IP forwarding\n if args.enable_ip_forward:\n kernel_ctl.enable_ip_forwarding()\n\n # disable reverse path filtering\n if args.interface is not None:\n kernel_ctl.disable_rp_filter(args.interface)\n\n # set default qdisc\n if args.qdisc is not None:\n kernel_ctl.set_qdisc(args.qdisc)\n\n if args.reset_rmem:\n # reset socket receive buffer sizes to Linux default ones\n sysctl('net.core.rmem_default', 212992)\n sysctl('net.core.rmem_max', 212992)\n elif args.set_rmem:\n # set socket receive buffer\n sysctl('net.core.rmem_default', 16777216)\n sysctl('net.core.rmem_max', 33554432)\n elif args.reset_all_mem:\n # reset socket buffer sizes to Linux default ones\n sysctl('net.core.rmem_default', 212992)\n sysctl('net.core.rmem_max', 212992)\n sysctl('net.core.wmem_default', 212992)\n sysctl('net.core.wmem_max', 212992)\n sysctl('net.ipv4.tcp_rmem', '4096 87380 6291456')\n sysctl('net.ipv4.tcp_wmem', '4096 16384 4194304')\n elif args.set_all_mem:\n # set socket buffer sizes\n sysctl('net.core.rmem_default', 16777216)\n sysctl('net.core.rmem_max', 536870912)\n sysctl('net.core.wmem_default', 16777216)\n sysctl('net.core.wmem_max', 536870912)\n sysctl('net.ipv4.tcp_rmem', '4096 16777216 536870912')\n sysctl('net.ipv4.tcp_wmem', '4096 16777216 536870912')\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5893782377243042,
"alphanum_fraction": 0.5893782377243042,
"avg_line_length": 21.705883026123047,
"blob_id": "a88321299d6c02277779d35271024eeba780bcba",
"content_id": "47808fca89b8cd79fd2cb32ce72a95648e361f12",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 772,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 34,
"path": "/src/wrappers/scream.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'scream-reproduce')\n recv_src = path.join(cc_repo, 'src', 'ScreamServer')\n send_src = path.join(cc_repo, 'src', 'ScreamClient')\n\n if args.option == 'setup':\n sh_cmd = './autogen.sh && ./configure && make -j'\n check_call(sh_cmd, shell=True, cwd=cc_repo)\n return\n\n if args.option == 'receiver':\n cmd = [recv_src, args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n cmd = [send_src, args.ip, args.port]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5003846883773804,
"alphanum_fraction": 0.504744827747345,
"avg_line_length": 32.90434646606445,
"blob_id": "8e430c2b1d400c9001dea5a90c98a5c2701591aa",
"content_id": "74c4d5ad3b7d8cb43b698196c8115265bf15820f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3899,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 115,
"path": "/src/experiments/tunnel_manager.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nimport sys\nimport signal\nfrom subprocess import Popen, PIPE\n\nimport context\nfrom helpers import utils\n\n\ndef main():\n    prompt = ''\n    procs = {}\n\n    # register SIGINT and SIGTERM events to clean up gracefully before quit\n    def stop_signal_handler(signum, frame):\n        for tun_id in procs:\n            utils.kill_proc_group(procs[tun_id])\n\n        sys.exit('tunnel_manager: caught signal %s and cleaned up\\n' % signum)\n\n    signal.signal(signal.SIGINT, stop_signal_handler)\n    signal.signal(signal.SIGTERM, stop_signal_handler)\n\n    sys.stdout.write('tunnel manager is running\\n')\n    sys.stdout.flush()\n\n    while True:\n        input_cmd = sys.stdin.readline().strip()\n\n        if not input_cmd:\n            # This may only happen if the parent process has died.\n            sys.stderr.write('tunnel manager\\'s parent process must have died '\n                             '=> halting\\n')\n            input_cmd = 'halt'\n\n        # print all the commands fed into tunnel manager\n        if prompt:\n            sys.stderr.write(prompt + ' ')\n        sys.stderr.write(input_cmd + '\\n')\n        cmd = input_cmd.split()\n\n        # manage I/O of multiple tunnels\n        if cmd[0] == 'tunnel':\n            if len(cmd) < 3:\n                sys.stderr.write('error: usage: tunnel ID CMD...\\n')\n                continue\n\n            try:\n                tun_id = int(cmd[1])\n            except ValueError:\n                sys.stderr.write('error: usage: tunnel ID CMD...\\n')\n                continue\n\n            cmd_to_run = ' '.join(cmd[2:])\n\n            if cmd[2] == 'mm-tunnelclient' or cmd[2] == 'mm-tunnelserver':\n                # expand env variables (e.g., MAHIMAHI_BASE)\n                cmd_to_run = path.expandvars(cmd_to_run).split()\n\n                # expand home directory\n                for i in xrange(len(cmd_to_run)):\n                    if ('--ingress-log' in cmd_to_run[i] or\n                            '--egress-log' in cmd_to_run[i]):\n                        t = cmd_to_run[i].split('=')\n                        cmd_to_run[i] = t[0] + '=' + path.expanduser(t[1])\n\n                procs[tun_id] = Popen(cmd_to_run, stdin=PIPE,\n                                      stdout=PIPE, preexec_fn=os.setsid)\n            elif cmd[2] == 'python':  # run python scripts inside tunnel\n                if tun_id not in procs:\n                    sys.stderr.write(\n                        'error: run tunnel client or server first\\n')\n\n                procs[tun_id].stdin.write(cmd_to_run + '\\n')\n                procs[tun_id].stdin.flush()\n            elif cmd[2] == 'readline':  # readline from stdout of tunnel\n                if len(cmd) != 3:\n                    sys.stderr.write('error: usage: tunnel ID readline\\n')\n                    continue\n\n                if tun_id not in procs:\n                    sys.stderr.write(\n                        'error: run tunnel client or server first\\n')\n\n                sys.stdout.write(procs[tun_id].stdout.readline())\n                sys.stdout.flush()\n            else:\n                sys.stderr.write('unknown command after \"tunnel ID\": %s\\n'\n                                 % cmd_to_run)\n                continue\n        elif cmd[0] == 'prompt':  # set prompt in front of commands to print\n            if len(cmd) != 2:\n                sys.stderr.write('error: usage: prompt PROMPT\\n')\n                continue\n\n            prompt = cmd[1].strip()\n        elif cmd[0] == 'halt':  # terminate all tunnel processes and quit\n            if len(cmd) != 1:\n                sys.stderr.write('error: usage: halt\\n')\n                continue\n\n            for tun_id in procs:\n                utils.kill_proc_group(procs[tun_id])\n\n            sys.exit(0)\n        else:\n            sys.stderr.write('unknown command: %s\\n' % input_cmd)\n            continue\n\n\nif __name__ == '__main__':\n    main()\n"
},
{
"alpha_fraction": 0.5588794946670532,
"alphanum_fraction": 0.5689420700073242,
"avg_line_length": 28.89430809020996,
"blob_id": "35e8ee096ba43bab8bb32e987aca09ffa7787a68",
"content_id": "55a719947ff7d61710985837b71a6d5a6373a667",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3677,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 123,
"path": "/src/wrappers/quic.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nimport sys\nimport string\nimport shutil\nimport time\nfrom subprocess import check_call, call\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\ndef generate_html(output_dir, size):\n    sys.stderr.write('Generating HTML to send...\\n')\n\n    html_path = path.join(output_dir, 'index.html')\n\n    # check if index.html already exists\n    if path.isfile(html_path) and path.getsize(html_path) > size:\n        sys.stderr.write('index.html already exists\\n')\n        return\n\n    head_text = ('HTTP/1.1 200 OK\\n'\n                 'X-Original-Url: https://www.example.org/\\n'\n                 '\\n'\n                 '<!DOCTYPE html>\\n'\n                 '<html>\\n'\n                 '<body>\\n'\n                 '<p>\\n')\n\n    foot_text = ('</p>\\n'\n                 '</body>\\n'\n                 '</html>\\n')\n\n    html = open(html_path, 'w')\n    html.write(head_text)\n\n    block_size = 100 * 1024 * 1024\n    block = 'x' * block_size\n    num_blocks = int(size) / block_size + 1\n    for _ in xrange(num_blocks):\n        html.write(block + '\\n')\n\n    html.write(foot_text)\n    html.close()\n\n\ndef setup_quic(cc_repo, cert_dir, html_dir):\n    os.environ['PROTO_QUIC_ROOT'] = path.join(cc_repo, 'src')\n    os.environ['PATH'] += os.pathsep + path.join(cc_repo, 'depot_tools')\n\n    cmd = path.join(cc_repo, 'proto_quic_tools', 'sync.sh')\n    check_call(cmd, shell=True, cwd=path.join(cc_repo, 'src'))\n\n    cmd = ('gn gen out/Default && ninja -C out/Default '\n           'quic_client quic_server')\n    check_call(cmd, shell=True, cwd=path.join(cc_repo, 'src'))\n\n    # initialize an empty NSS Shared DB\n    nssdb_dir = path.join(path.expanduser('~'), '.pki', 'nssdb')\n    shutil.rmtree(nssdb_dir, ignore_errors=True)\n    utils.make_sure_dir_exists(nssdb_dir)\n\n    # generate certificate\n    cert_pwd = path.join(cert_dir, 'cert_pwd')\n    cmd = 'certutil -d %s -N -f %s' % (nssdb_dir, cert_pwd)\n    check_call(cmd, shell=True)\n\n    # trust certificate\n    pem = path.join(cert_dir, '2048-sha256-root.pem')\n    cmd = ('certutil -d sql:%s -A -t \"C,,\" -n \"QUIC\" -i %s -f %s' %\n           (nssdb_dir, pem, cert_pwd))\n    check_call(cmd, shell=True)\n\n    # generate a html of size that can be transferred longer than 30s\n    generate_html(html_dir, 5 * 10**8)\n\n\ndef main():\n    args = arg_parser.sender_first()\n\n    cc_repo = path.join(context.third_party_dir, 'proto-quic')\n    send_src = path.join(cc_repo, 'src', 'out', 'Default', 'quic_server')\n    recv_src = path.join(cc_repo, 'src', 'out', 'Default', 'quic_client')\n\n    cert_dir = path.join(context.src_dir, 'wrappers', 'quic-certs')\n    html_dir = path.join(cc_repo, 'www.example.org')\n    utils.make_sure_dir_exists(html_dir)\n\n    if args.option == 'deps':\n        print 'libnss3-tools libgconf-2-4'\n        return\n\n    if args.option == 'setup':\n        setup_quic(cc_repo, cert_dir, html_dir)\n        return\n\n    if args.option == 'sender':\n        cmd = [send_src, '--port=%s' % args.port,\n               '--quic_response_cache_dir=%s' % html_dir,\n               '--certificate_file=%s' % path.join(cert_dir, 'leaf_cert.pem'),\n               '--key_file=%s' % path.join(cert_dir, 'leaf_cert.pkcs8')]\n        check_call(cmd)\n        return\n\n    if args.option == 'receiver':\n        cmd = [recv_src, '--host=%s' % args.ip, '--port=%s' % args.port,\n               'https://www.example.org/']\n\n        for _ in range(5):\n            # suppress stdout as it prints the huge web page received\n            with open(os.devnull, 'w') as devnull:\n                if call(cmd, stdout=devnull) == 0:\n                    return\n\n            time.sleep(1)\n\n\nif __name__ == '__main__':\n    main()\n"
},
{
"alpha_fraction": 0.6188496351242065,
"alphanum_fraction": 0.6250866055488586,
"avg_line_length": 30.369565963745117,
"blob_id": "e4d5ac6ffa7860fc81213ee3b76b69d45e72646a",
"content_id": "8a61220d0e8cc7ee1868bf94831bd833526f525c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1443,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 46,
"path": "/tests/test_analyze.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\nimport shutil\nimport argparse\n\nimport context\nfrom helpers import utils\nfrom helpers.subprocess_wrappers import check_call\n\n\ndef main():\n parser = argparse.ArgumentParser()\n\n group = parser.add_mutually_exclusive_group(required=True)\n group.add_argument('--all', action='store_true',\n help='test all the schemes specified in src/config.yml')\n group.add_argument('--schemes', metavar='\"SCHEME1 SCHEME2...\"',\n help='test a space-separated list of schemes')\n\n args = parser.parse_args()\n\n if args.all:\n schemes = utils.parse_config()['schemes'].keys()\n elif args.schemes is not None:\n schemes = args.schemes.split()\n\n data_dir = path.join(utils.tmp_dir, 'test_analyze_output')\n shutil.rmtree(data_dir, ignore_errors=True)\n utils.make_sure_dir_exists(data_dir)\n\n test_py = path.join(context.src_dir, 'experiments', 'test.py')\n analyze_py = path.join(context.src_dir, 'analysis', 'analyze.py')\n\n cmd = ['python', test_py, 'local', '--schemes', ' '.join(schemes),\n '-t', '10', '--data-dir', data_dir, '--pkill-cleanup',\n '--prepend-mm-cmds', 'mm-delay 20', '--extra-mm-link-args',\n '--uplink-queue=droptail --uplink-queue-args=packets=200']\n check_call(cmd)\n\n cmd = ['python', analyze_py, '--data-dir', data_dir]\n check_call(cmd)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.610771119594574,
"alphanum_fraction": 0.610771119594574,
"avg_line_length": 23.02941131591797,
"blob_id": "03422dd298c6e35003cec16f36c8c91536cfe378",
"content_id": "d567214ec8ed11531ee77c608e55ed35d859aeef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 817,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 34,
"path": "/src/wrappers/vivace.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'vivace')\n recv_dir = path.join(cc_repo, 'receiver')\n send_dir = path.join(cc_repo, 'sender')\n recv_src = path.join(recv_dir, 'vivace_receiver')\n send_src = path.join(send_dir, 'vivace_sender')\n\n if args.option == 'receiver':\n os.environ['LD_LIBRARY_PATH'] = path.join(recv_dir)\n cmd = [recv_src, args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n os.environ['LD_LIBRARY_PATH'] = path.join(send_dir)\n cmd = [send_src, args.ip, args.port]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.750241219997406,
"alphanum_fraction": 0.7541009783744812,
"avg_line_length": 36.2335319519043,
"blob_id": "cd35b0a14ac8c17ea239a484b983a8da1f98ba75",
"content_id": "e0c0d45be1abdde0a7f8402dcee3ab280dc9ca70",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 6218,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 167,
"path": "/README.md",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "# Pantheon of Congestion Control [](https://travis-ci.org/StanfordSNR/pantheon)\nThe Pantheon contains wrappers for many popular practical and research\ncongestion control schemes. The Pantheon enables them to run on a common\ninterface, and has tools to benchmark and compare their performances.\nPantheon tests can be run locally over emulated links using\n[mahimahi](http://mahimahi.mit.edu/) or over the Internet to a remote machine.\n\nOur website is <https://pantheon.stanford.edu>, where you can find more\ninformation about Pantheon, including supported schemes, measurement results\non a global testbed so far, and our paper at [USENIX ATC 2018](https://www.usenix.org/conference/atc18/presentation/yan-francis)\n(**Awarded Best Paper**).\nIn case you are interested, the scripts and traces\n(including \"calibrated emulators\") for running the testbed can be found in\n[observatory](https://github.com/StanfordSNR/observatory).\n\nTo discuss and talk about Pantheon-related topics and issues, feel free to\npost in the [Google Group](https://groups.google.com/forum/#!forum/pantheon-stanford)\nor send an email to `pantheon-stanford <at> googlegroups <dot> com`.\n\n## Disclaimer\nThis is research software. Our scripts will write to the file system in the\n`pantheon` folder. We never run third party programs as root, but we cannot\nguarantee they will never try to escalate privilege to root.\n\nYou might want to install dependencies and run the setup on your own, because\nour handy scripts will install packages and perform some system-wide settings\n(e.g., enabling IP forwarding, loading kernel modeuls) as root.\nPlease run at your own risk.\n\n## Preparation\nTo clone this repository, run:\n\n```\ngit clone https://github.com/StanfordSNR/pantheon.git\n```\n\nMany of the tools and programs run by the Pantheon are git submodules in the\n`third_party` folder. 
To add submodules after cloning, run:\n\n```\ngit submodule update --init --recursive # or tools/fetch_submodules.sh\n```\n\n## Dependencies\nWe provide a handy script `tools/install_deps.sh` to install globally required\ndependencies; these dependencies are required before testing **any** scheme\nand are different from the flag `--install-deps` below.\nIn particular, we created the [Pantheon-tunnel](https://github.com/StanfordSNR/pantheon-tunnel)\nthat is required to instrument each scheme.\n\nYou might want to inspect the contents of\n`install_deps.sh` and install these dependencies by yourself in case you want to\nmanage dependencies differently. Please note that Pantheon currently\n**only** supports Python 2.7.\n\nNext, for those dependencies required by each congestion control scheme `<cc>`,\nrun `src/wrappers/<cc>.py deps` to print a dependency list. You could install\nthem by yourself, or run\n\n```\nsrc/experiments/setup.py --install-deps (--all | --schemes \"<cc1> <cc2> ...\")\n```\n\nto install dependencies required by all schemes or a list of schemes separated\nby spaces.\n\n## Setup\nAfter installing dependencies, run\n\n```\nsrc/experiments/setup.py [--setup] [--all | --schemes \"<cc1> <cc2> ...\"]\n```\n\nto set up supported congestion control schemes. `--setup` is required\nto be run only once. 
In contrast, `src/experiments/setup.py` is\nrequired to be run on every reboot (without `--setup`).\n\n## Running the Pantheon\nTo test schemes in emulated networks locally, run\n\n```\nsrc/experiments/test.py local (--all | --schemes \"<cc1> <cc2> ...\")\n```\n\nTo test schemes over the Internet to a remote machine, run\n\n```\nsrc/experiments/test.py remote (--all | --schemes \"<cc1> <cc2> ...\") HOST:PANTHEON-DIR\n```\n\nRun `src/experiments/test.py local -h` and `src/experiments/test.py remote -h`\nfor detailed usage and additional optional arguments, such as multiple flows,\nrunning time, an arbitrary set of mahimahi shells for emulation tests, and the\ndata sender side for real tests; use `--data-dir DIR` to specify\nan output directory to save logs.\n\n## Pantheon analysis\nTo analyze test results, run\n\n```\nsrc/analysis/analyze.py --data-dir DIR\n```\n\nIt will analyze the logs saved by `src/experiments/test.py`, then generate\nperformance figures and a full PDF report `pantheon_report.pdf`.\n\n## Running a single congestion control scheme\nAll the available schemes can be found in `src/config.yml`. To run a single\ncongestion control scheme, first follow the **Dependencies** section to install\nthe required dependencies.\n\nThe first time you run a scheme, run `src/wrappers/<cc>.py setup`\nto perform the persistent setup across reboots, such as compilation,\ngenerating or downloading files to send, etc. Then run\n`src/wrappers/<cc>.py setup_after_reboot`, which also has to be run on every\nreboot. 
In fact, `src/experiments/setup.py` performs `setup_after_reboot` by\ndefault, and runs `setup` on schemes when `--setup` is given.\n\nNext, execute the following command to find the running order for a scheme:\n```\nsrc/wrappers/<cc>.py run_first\n```\n\nDepending on the output of `run_first`, run\n\n```\n# Receiver first\nsrc/wrappers/<cc>.py receiver port\nsrc/wrappers/<cc>.py sender IP port\n```\n\nor\n\n```\n# Sender first\nsrc/wrappers/<cc>.py sender port\nsrc/wrappers/<cc>.py receiver IP port\n```\n\nRun `src/wrappers/<cc>.py -h` for detailed usage.\n\n## How to add your own congestion control\nAdding your own congestion control to Pantheon is easy! Just follow these\nsteps:\n\n1. Fork this repository.\n\n2. Add your congestion control repository as a submodule to `pantheon`:\n\n   ```\n   git submodule add <your-cc-repo-url> third_party/<your-cc-repo-name>\n   ```\n\n   and add `ignore = dirty` to `.gitmodules` under your submodule.\n\n3. In `src/wrappers`, read `example.py` and create your own `<your-cc-name>.py`.\n   Make sure the sender and receiver run longer than 60 seconds; you could also\n   leave them running forever without the need to kill them.\n\n4. Add your scheme to `src/config.yml` along with settings of\n   `name`, `color` and `marker`, so that `src/experiments/test.py` is able to\n   find your scheme and `src/analysis/analyze.py` is able to plot your scheme\n   with the specified settings.\n\n5. Add your scheme to `SCHEMES` in `.travis.yml` for continuous integration testing.\n\n6. Send us a pull request and that's it, you're in the Pantheon!\n"
},
{
"alpha_fraction": 0.7087541818618774,
"alphanum_fraction": 0.7121211886405945,
"avg_line_length": 33.94117736816406,
"blob_id": "028759df6f2cac71ad6eb790f562dd4ea3256f34",
"content_id": "438b99357d2f0f7c0ce71323dd0759370573fa7b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 594,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 17,
"path": "/tools/install_deps.sh",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/bin/sh -x\n\n# update mahimahi source line and package listings when necessary\nsudo add-apt-repository -y ppa:keithw/mahimahi\nsudo apt-get update\n\n# install required packages\nsudo apt-get -y install mahimahi ntp ntpdate texlive python-pip\nsudo pip install matplotlib numpy tabulate pyyaml\n\n# install pantheon tunnel\nsudo apt-get -y install debhelper autotools-dev dh-autoreconf iptables \\\n pkg-config iproute2\n\nCURRDIR=$(cd -P -- \"$(dirname -- \"$0\")\" && pwd -P)\ncd $CURRDIR/../third_party/pantheon-tunnel && ./autogen.sh && ./configure \\\n&& make -j && sudo make install\n"
},
{
"alpha_fraction": 0.5692148804664612,
"alphanum_fraction": 0.5733470916748047,
"avg_line_length": 20.511110305786133,
"blob_id": "840aa9491699e1210ec754ac6385ba35c7e779b2",
"content_id": "66ec6fc5c6d05af440f136469a947858d024c32a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 968,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 45,
"path": "/src/wrappers/bbr.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\nfrom helpers import kernel_ctl\n\n\ndef setup_bbr():\n # load tcp_bbr kernel module (only available since Linux Kernel 4.9)\n kernel_ctl.load_kernel_module('tcp_bbr')\n\n # add bbr to kernel-allowed congestion control list\n kernel_ctl.enable_congestion_control('bbr')\n\n # check if qdisc is fq\n kernel_ctl.check_qdisc('fq')\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n if args.option == 'deps':\n print 'iperf'\n return\n\n if args.option == 'setup_after_reboot':\n setup_bbr()\n return\n\n if args.option == 'receiver':\n cmd = ['iperf', '-Z', 'bbr', '-s', '-p', args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n cmd = ['iperf', '-Z', 'bbr', '-c', args.ip, '-p', args.port,\n '-t', '75']\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5398295521736145,
"alphanum_fraction": 0.5605779886245728,
"avg_line_length": 28.02150535583496,
"blob_id": "96c6ee33033c7bb7e9fda5f466170d336583af6a",
"content_id": "343d0dcd108bd09d1502def94c67d2d2341abd3e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2699,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 93,
"path": "/src/wrappers/webrtc.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nimport sys\nimport uuid\nfrom subprocess import call, check_call, check_output, Popen\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\ndef xvfb_in_use(display):\n cmd = 'xdpyinfo -display :%d >/dev/null 2>&1' % display\n return call(cmd, shell=True) == 0\n\n\ndef setup_webrtc(cc_repo, video):\n check_call(['npm', 'install'], cwd=cc_repo)\n\n # check if video already exists and if its md5 checksum is correct\n video_md5 = 'cd1cc8b69951796b72419413faed493b'\n if path.isfile(video):\n md5_out = check_output(['md5sum', video]).split()[0]\n else:\n md5_out = None\n\n if md5_out != video_md5:\n cmd = ['wget', '-O', video,\n 'https://s3.amazonaws.com/stanford-pantheon/files/bluesky_1080p60.y4m']\n check_call(cmd)\n else:\n sys.stderr.write('video already exists\\n')\n\n\ndef main():\n args = arg_parser.sender_first()\n\n cc_repo = path.join(context.third_party_dir, 'webrtc')\n video = path.join(cc_repo, 'video.y4m')\n\n if args.option == 'deps':\n print ('chromium-browser xvfb xfonts-100dpi xfonts-75dpi '\n 'xfonts-cyrillic xorg dbus-x11 npm nodejs')\n return\n\n if args.option == 'setup':\n setup_webrtc(cc_repo, video)\n return\n\n if args.option == 'sender':\n if not xvfb_in_use(1):\n xvfb_proc = Popen(['Xvfb', ':1'])\n else:\n xvfb_proc = None\n os.environ['DISPLAY'] = ':1'\n\n # run signaling server on the sender side\n signaling_server_src = path.join(cc_repo, 'app.js')\n Popen(['node', signaling_server_src, args.port])\n\n user_data_dir = path.join(utils.tmp_dir, 'webrtc-%s' % uuid.uuid4())\n cmd = ['chromium-browser',\n '--app=http://localhost:%s/sender' % args.port,\n '--use-fake-ui-for-media-stream',\n '--use-fake-device-for-media-stream',\n '--use-file-for-fake-video-capture=%s' % video,\n '--user-data-dir=%s' % user_data_dir]\n check_call(cmd)\n if xvfb_proc:\n xvfb_proc.kill()\n return\n\n if args.option == 'receiver':\n if not xvfb_in_use(2):\n xvfb_proc = Popen(['Xvfb', 
':2'])\n else:\n xvfb_proc = None\n os.environ['DISPLAY'] = ':2'\n\n user_data_dir = path.join(utils.tmp_dir, 'webrtc-%s' % uuid.uuid4())\n cmd = ['chromium-browser',\n '--app=http://%s:%s/receiver' % (args.ip, args.port),\n '--user-data-dir=%s' % user_data_dir]\n check_call(cmd)\n if xvfb_proc:\n xvfb_proc.kill()\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5479069948196411,
"alphanum_fraction": 0.5534883737564087,
"avg_line_length": 21.87234115600586,
"blob_id": "1650cd83a4a3a74af7ea50037cab09dee3b1bf7c",
"content_id": "9f12483e8fd48e88258176684398e40db9034235",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1075,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 47,
"path": "/src/wrappers/ledbat.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nfrom subprocess import check_call, PIPE, Popen\nimport time\n\nimport arg_parser\nimport context\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'libutp')\n src = path.join(cc_repo, 'ucat-static')\n\n if args.option == 'setup':\n check_call(['make', '-j'], cwd=cc_repo)\n return\n\n if args.option == 'receiver':\n cmd = [src, '-l', '-p', args.port]\n # suppress stdout as it prints all the bytes received\n with open(os.devnull, 'w') as devnull:\n check_call(cmd, stdout=devnull)\n return\n\n if args.option == 'sender':\n cmd = [src, args.ip, args.port]\n proc = Popen(cmd, stdin=PIPE)\n\n # send at full speed\n timeout = time.time() + 75\n while True:\n proc.stdin.write(os.urandom(1024))\n proc.stdin.flush()\n if time.time() > timeout:\n break\n\n if proc:\n proc.kill()\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5281357169151306,
"alphanum_fraction": 0.5292664766311646,
"avg_line_length": 36.758426666259766,
"blob_id": "31c8b605a993dd35be584fc1b9d22b6469bef348",
"content_id": "c5ee3ac513f2b787bb8846a228bc65d48a5a7283",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 33605,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 890,
"path": "/src/experiments/test.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nimport copy\nimport sys\nimport time\nimport uuid\nimport random\nimport signal\nimport traceback\nfrom subprocess import PIPE\nfrom collections import namedtuple, OrderedDict\n\nimport arg_parser\nimport context\nfrom helpers import utils, kernel_ctl\nfrom helpers.subprocess_wrappers import Popen, call\n\n\nFlow = namedtuple('Flow', ['cc', # replace self.cc\n 'cc_src_local', # replace self.cc_src\n 'cc_src_remote', # replace self.r[cc_src]\n 'run_first', # replace self.run_first\n 'run_second']) # replace self.run_second\n\n\nclass Test(object):\n def __init__(self, args, run_id, cc):\n self.mode = args.mode\n self.run_id = run_id\n # We keep two versions of `cc`:\n # * `cc` is the full version including parameters\n # * `cc_base` is the base scheme name only\n self.cc = cc\n self.cc_base = utils.get_base_scheme(cc)\n\n self.data_dir = path.abspath(args.data_dir)\n self.extra_sender_args = args.extra_sender_args\n\n # shared arguments between local and remote modes\n self.flows = args.flows\n self.runtime = args.runtime\n self.interval = args.interval\n self.run_times = args.run_times\n\n # used for cleanup\n self.proc_first = None\n self.proc_second = None\n self.ts_manager = None\n self.tc_manager = None\n\n self.test_start_time = None\n self.test_end_time = None\n\n # local mode\n if self.mode == 'local':\n self.datalink_trace = args.uplink_trace\n self.acklink_trace = args.downlink_trace\n self.prepend_mm_cmds = args.prepend_mm_cmds\n self.append_mm_cmds = args.append_mm_cmds\n self.extra_mm_link_args = args.extra_mm_link_args\n\n # for convenience\n self.sender_side = 'remote'\n self.server_side = 'local'\n\n # remote mode\n if self.mode == 'remote':\n self.sender_side = args.sender_side\n self.server_side = args.server_side\n self.local_addr = args.local_addr\n self.local_if = args.local_if\n self.remote_if = args.remote_if\n self.local_desc = args.local_desc\n self.remote_desc = 
args.remote_desc\n\n            self.ntp_addr = args.ntp_addr\n            self.local_ofst = None\n            self.remote_ofst = None\n\n            self.r = utils.parse_remote_path(args.remote_path, self.cc)\n\n        # arguments when there's a config\n        self.test_config = None\n        if hasattr(args, 'test_config'):\n            self.test_config = args.test_config\n\n        if self.test_config is not None:\n            # Parameterized schemes are not supported when using a test config,\n            # so `cc` and `cc_base` are always equal.\n            self.cc = self.cc_base = self.test_config['test-name']\n            self.flow_objs = {}\n            cc_src_remote_dir = ''\n            if self.mode == 'remote':\n                cc_src_remote_dir = self.r['base_dir']\n\n            tun_id = 1\n            for flow in args.test_config['flows']:\n                cc = flow['scheme']\n                run_first, run_second = utils.who_runs_first(cc)\n\n                local_p = path.join(context.src_dir, 'wrappers', cc + '.py')\n                remote_p = path.join(cc_src_remote_dir, 'wrappers', cc + '.py')\n\n                self.flow_objs[tun_id] = Flow(\n                    cc=cc,\n                    cc_src_local=local_p,\n                    cc_src_remote=remote_p,\n                    run_first=run_first,\n                    run_second=run_second)\n                tun_id += 1\n\n    def setup_mm_cmd(self):\n        mm_datalink_log = self.cc + '_mm_datalink_run%d.log' % self.run_id\n        mm_acklink_log = self.cc + '_mm_acklink_run%d.log' % self.run_id\n        self.mm_datalink_log = path.join(self.data_dir, mm_datalink_log)\n        self.mm_acklink_log = path.join(self.data_dir, mm_acklink_log)\n\n        if self.run_first == 'receiver' or self.flows > 0:\n            # if receiver runs first OR if test inside pantheon tunnel\n            uplink_log = self.mm_datalink_log\n            downlink_log = self.mm_acklink_log\n            uplink_trace = self.datalink_trace\n            downlink_trace = self.acklink_trace\n        else:\n            # if sender runs first AND test without pantheon tunnel\n            uplink_log = self.mm_acklink_log\n            downlink_log = self.mm_datalink_log\n            uplink_trace = self.acklink_trace\n            downlink_trace = self.datalink_trace\n\n        self.mm_cmd = []\n\n        if self.prepend_mm_cmds:\n            self.mm_cmd += self.prepend_mm_cmds.split()\n\n        self.mm_cmd += [\n            'mm-link', uplink_trace, downlink_trace,\n            '--uplink-log=' + 
uplink_log,\n '--downlink-log=' + downlink_log]\n\n if self.extra_mm_link_args:\n self.mm_cmd += self.extra_mm_link_args.split()\n\n if self.append_mm_cmds:\n self.mm_cmd += self.append_mm_cmds.split()\n\n def prepare_tunnel_log_paths(self):\n # boring work making sure logs have correct paths on local and remote\n self.datalink_ingress_logs = {}\n self.datalink_egress_logs = {}\n self.acklink_ingress_logs = {}\n self.acklink_egress_logs = {}\n\n local_tmp = utils.tmp_dir\n\n if self.mode == 'remote':\n remote_tmp = self.r['tmp_dir']\n\n for tun_id in xrange(1, self.flows + 1):\n uid = uuid.uuid4()\n\n datalink_ingress_logname = ('%s_flow%s_uid%s.log.ingress' %\n (self.datalink_name, tun_id, uid))\n self.datalink_ingress_logs[tun_id] = path.join(\n local_tmp, datalink_ingress_logname)\n\n datalink_egress_logname = ('%s_flow%s_uid%s.log.egress' %\n (self.datalink_name, tun_id, uid))\n self.datalink_egress_logs[tun_id] = path.join(\n local_tmp, datalink_egress_logname)\n\n acklink_ingress_logname = ('%s_flow%s_uid%s.log.ingress' %\n (self.acklink_name, tun_id, uid))\n self.acklink_ingress_logs[tun_id] = path.join(\n local_tmp, acklink_ingress_logname)\n\n acklink_egress_logname = ('%s_flow%s_uid%s.log.egress' %\n (self.acklink_name, tun_id, uid))\n self.acklink_egress_logs[tun_id] = path.join(\n local_tmp, acklink_egress_logname)\n\n if self.mode == 'remote':\n if self.sender_side == 'local':\n self.datalink_ingress_logs[tun_id] = path.join(\n remote_tmp, datalink_ingress_logname)\n self.acklink_egress_logs[tun_id] = path.join(\n remote_tmp, acklink_egress_logname)\n else:\n self.datalink_egress_logs[tun_id] = path.join(\n remote_tmp, datalink_egress_logname)\n self.acklink_ingress_logs[tun_id] = path.join(\n remote_tmp, acklink_ingress_logname)\n\n def setup(self):\n # setup commonly used paths\n self.cc_src = path.join(context.src_dir, 'wrappers', self.cc_base + '.py')\n self.tunnel_manager = path.join(context.src_dir, 'experiments',\n 'tunnel_manager.py')\n\n # 
record who runs first\n if self.test_config is None:\n self.run_first, self.run_second = utils.who_runs_first(self.cc_base)\n else:\n self.run_first = None\n self.run_second = None\n\n # wait for 3 seconds until run_first is ready\n self.run_first_setup_time = 3\n\n # setup output logs\n self.datalink_name = self.cc + '_datalink_run%d' % self.run_id\n self.acklink_name = self.cc + '_acklink_run%d' % self.run_id\n\n self.datalink_log = path.join(\n self.data_dir, self.datalink_name + '.log')\n self.acklink_log = path.join(\n self.data_dir, self.acklink_name + '.log')\n\n if self.flows > 0:\n self.prepare_tunnel_log_paths()\n\n if self.mode == 'local':\n self.setup_mm_cmd()\n else:\n # record local and remote clock offset\n if self.ntp_addr is not None:\n self.local_ofst, self.remote_ofst = utils.query_clock_offset(\n self.ntp_addr, self.r['ssh_cmd'])\n\n # test congestion control without running pantheon tunnel\n def run_without_tunnel(self):\n port = utils.get_open_port()\n\n # run the side specified by self.run_first\n cmd = ['python', self.cc_src, self.run_first, port]\n sys.stderr.write('Running %s %s...\\n' % (self.cc, self.run_first))\n self.proc_first = Popen(cmd, preexec_fn=os.setsid)\n\n # sleep just in case the process isn't quite listening yet\n # the cleaner approach might be to try to verify the socket is open\n time.sleep(self.run_first_setup_time)\n\n self.test_start_time = utils.utc_time()\n # run the other side specified by self.run_second\n sh_cmd = 'python %s %s $MAHIMAHI_BASE %s' % (\n self.cc_src, self.run_second, port)\n sh_cmd = ' '.join(self.mm_cmd) + \" -- sh -c '%s'\" % sh_cmd\n sys.stderr.write('Running %s %s...\\n' % (self.cc, self.run_second))\n self.proc_second = Popen(sh_cmd, shell=True, preexec_fn=os.setsid)\n\n signal.signal(signal.SIGALRM, utils.timeout_handler)\n signal.alarm(self.runtime)\n\n try:\n self.proc_first.wait()\n self.proc_second.wait()\n except utils.TimeoutError:\n pass\n else:\n signal.alarm(0)\n 
sys.stderr.write('Warning: test exited before time limit\\n')\n finally:\n self.test_end_time = utils.utc_time()\n\n return True\n\n def run_tunnel_managers(self):\n # run tunnel server manager\n if self.mode == 'remote':\n if self.server_side == 'local':\n ts_manager_cmd = ['python', self.tunnel_manager]\n else:\n ts_manager_cmd = self.r['ssh_cmd'] + [\n 'python', self.r['tunnel_manager']]\n else:\n ts_manager_cmd = ['python', self.tunnel_manager]\n\n sys.stderr.write('[tunnel server manager (tsm)] ')\n self.ts_manager = Popen(ts_manager_cmd, stdin=PIPE, stdout=PIPE,\n preexec_fn=os.setsid)\n ts_manager = self.ts_manager\n\n while True:\n running = ts_manager.stdout.readline()\n if 'tunnel manager is running' in running:\n sys.stderr.write(running)\n break\n if not running:\n sys.stderr.write('WARNING: tunnel server manager terminated '\n 'unexpectedly\\n')\n return None, None\n\n ts_manager.stdin.write('prompt [tsm]\\n')\n ts_manager.stdin.flush()\n\n # run tunnel client manager\n if self.mode == 'remote':\n if self.server_side == 'local':\n tc_manager_cmd = self.r['ssh_cmd'] + [\n 'python', self.r['tunnel_manager']]\n else:\n tc_manager_cmd = ['python', self.tunnel_manager]\n else:\n tc_manager_cmd = self.mm_cmd + ['python', self.tunnel_manager]\n\n sys.stderr.write('[tunnel client manager (tcm)] ')\n # NB: using `preexec_fn=os.setsid` creates a new process group, so that\n # it is easy to kill all associated child processes afterwards.\n self.tc_manager = Popen(tc_manager_cmd, stdin=PIPE, stdout=PIPE,\n preexec_fn=os.setsid)\n tc_manager = self.tc_manager\n\n while True:\n running = tc_manager.stdout.readline()\n if 'tunnel manager is running' in running:\n sys.stderr.write(running)\n break\n if not running:\n sys.stderr.write('WARNING: tunnel client manager terminated '\n 'unexpectedly\\n')\n return ts_manager, None\n\n tc_manager.stdin.write('prompt [tcm]\\n')\n tc_manager.stdin.flush()\n\n return ts_manager, tc_manager\n\n def run_tunnel_server(self, tun_id, 
ts_manager):\n if self.server_side == self.sender_side:\n ts_cmd = 'mm-tunnelserver --ingress-log=%s --egress-log=%s' % (\n self.acklink_ingress_logs[tun_id],\n self.datalink_egress_logs[tun_id])\n else:\n ts_cmd = 'mm-tunnelserver --ingress-log=%s --egress-log=%s' % (\n self.datalink_ingress_logs[tun_id],\n self.acklink_egress_logs[tun_id])\n\n if self.mode == 'remote':\n if self.server_side == 'remote':\n if self.remote_if is not None:\n ts_cmd += ' --interface=' + self.remote_if\n else:\n if self.local_if is not None:\n ts_cmd += ' --interface=' + self.local_if\n\n ts_cmd = 'tunnel %s %s\\n' % (tun_id, ts_cmd)\n ts_manager.stdin.write(ts_cmd)\n ts_manager.stdin.flush()\n\n # read the command to run tunnel client\n readline_cmd = 'tunnel %s readline\\n' % tun_id\n ts_manager.stdin.write(readline_cmd)\n ts_manager.stdin.flush()\n\n cmd_to_run_tc = ts_manager.stdout.readline().split()\n return cmd_to_run_tc\n\n def run_tunnel_client(self, tun_id, tc_manager, cmd_to_run_tc):\n if self.mode == 'local':\n cmd_to_run_tc[1] = '$MAHIMAHI_BASE'\n else:\n if self.server_side == 'remote':\n cmd_to_run_tc[1] = self.r['ip']\n else:\n cmd_to_run_tc[1] = self.local_addr\n\n cmd_to_run_tc_str = ' '.join(cmd_to_run_tc)\n\n if self.server_side == self.sender_side:\n tc_cmd = '%s --ingress-log=%s --egress-log=%s' % (\n cmd_to_run_tc_str,\n self.datalink_ingress_logs[tun_id],\n self.acklink_egress_logs[tun_id])\n else:\n tc_cmd = '%s --ingress-log=%s --egress-log=%s' % (\n cmd_to_run_tc_str,\n self.acklink_ingress_logs[tun_id],\n self.datalink_egress_logs[tun_id])\n\n if self.mode == 'remote':\n if self.server_side == 'remote':\n if self.local_if is not None:\n tc_cmd += ' --interface=' + self.local_if\n else:\n if self.remote_if is not None:\n tc_cmd += ' --interface=' + self.remote_if\n\n tc_cmd = 'tunnel %s %s\\n' % (tun_id, tc_cmd)\n readline_cmd = 'tunnel %s readline\\n' % tun_id\n\n # re-run tunnel client after 20s timeout for at most 3 times\n max_run = 3\n curr_run = 0\n 
got_connection = ''\n while 'got connection' not in got_connection:\n curr_run += 1\n if curr_run > max_run:\n sys.stderr.write('Unable to establish tunnel\\n')\n return False\n\n tc_manager.stdin.write(tc_cmd)\n tc_manager.stdin.flush()\n while True:\n tc_manager.stdin.write(readline_cmd)\n tc_manager.stdin.flush()\n\n signal.signal(signal.SIGALRM, utils.timeout_handler)\n signal.alarm(20)\n\n try:\n got_connection = tc_manager.stdout.readline()\n sys.stderr.write('Tunnel is connected\\n')\n except utils.TimeoutError:\n sys.stderr.write('Tunnel connection timeout\\n')\n break\n except IOError:\n sys.stderr.write('Tunnel client failed to connect to '\n 'tunnel server\\n')\n return False\n else:\n signal.alarm(0)\n if 'got connection' in got_connection:\n break\n\n return True\n\n def run_first_side(self, tun_id, send_manager, recv_manager,\n send_pri_ip, recv_pri_ip):\n\n first_src = self.cc_src\n second_src = self.cc_src\n\n if self.run_first == 'receiver':\n if self.mode == 'remote':\n if self.sender_side == 'local':\n first_src = self.r['cc_src']\n else:\n second_src = self.r['cc_src']\n\n port = utils.get_open_port()\n\n first_cmd = 'tunnel %s python %s receiver %s\\n' % (\n tun_id, first_src, port)\n second_cmd = 'tunnel %s python %s sender %s %s --extra_args=%s\\n' % (\n tun_id, second_src, recv_pri_ip, port, self.extra_sender_args)\n\n recv_manager.stdin.write(first_cmd)\n recv_manager.stdin.flush()\n elif self.run_first == 'sender': # self.run_first == 'sender'\n if self.mode == 'remote':\n if self.sender_side == 'local':\n second_src = self.r['cc_src']\n else:\n first_src = self.r['cc_src']\n\n port = utils.get_open_port()\n\n first_cmd = 'tunnel %s python %s sender %s --extra_args=%s\\n' % (\n tun_id, first_src, port, self.extra_sender_args)\n second_cmd = 'tunnel %s python %s receiver %s %s\\n' % (\n tun_id, second_src, send_pri_ip, port)\n\n send_manager.stdin.write(first_cmd)\n send_manager.stdin.flush()\n\n # get run_first and run_second from the flow 
object\n else:\n assert(hasattr(self, 'flow_objs'))\n flow = self.flow_objs[tun_id]\n\n first_src = flow.cc_src_local\n second_src = flow.cc_src_local\n\n if flow.run_first == 'receiver':\n if self.mode == 'remote':\n if self.sender_side == 'local':\n first_src = flow.cc_src_remote\n else:\n second_src = flow.cc_src_remote\n\n port = utils.get_open_port()\n\n first_cmd = 'tunnel %s python %s receiver %s\\n' % (\n tun_id, first_src, port)\n second_cmd = 'tunnel %s python %s sender %s %s\\n' % (\n tun_id, second_src, recv_pri_ip, port)\n\n recv_manager.stdin.write(first_cmd)\n recv_manager.stdin.flush()\n else: # flow.run_first == 'sender'\n if self.mode == 'remote':\n if self.sender_side == 'local':\n second_src = flow.cc_src_remote\n else:\n first_src = flow.cc_src_remote\n\n port = utils.get_open_port()\n\n first_cmd = 'tunnel %s python %s sender %s\\n' % (\n tun_id, first_src, port)\n second_cmd = 'tunnel %s python %s receiver %s %s\\n' % (\n tun_id, second_src, send_pri_ip, port)\n\n send_manager.stdin.write(first_cmd)\n send_manager.stdin.flush()\n\n return second_cmd\n\n def run_second_side(self, send_manager, recv_manager, second_cmds):\n time.sleep(self.run_first_setup_time)\n\n start_time = time.time()\n self.test_start_time = utils.utc_time()\n\n # start each flow self.interval seconds after the previous one\n for i in xrange(len(second_cmds)):\n if i != 0:\n time.sleep(self.interval)\n second_cmd = second_cmds[i]\n\n if self.run_first == 'receiver':\n send_manager.stdin.write(second_cmd)\n send_manager.stdin.flush()\n elif self.run_first == 'sender':\n recv_manager.stdin.write(second_cmd)\n recv_manager.stdin.flush()\n else:\n assert(hasattr(self, 'flow_objs'))\n flow = self.flow_objs[i]\n if flow.run_first == 'receiver':\n send_manager.stdin.write(second_cmd)\n send_manager.stdin.flush()\n elif flow.run_first == 'sender':\n recv_manager.stdin.write(second_cmd)\n recv_manager.stdin.flush()\n\n elapsed_time = time.time() - start_time\n if elapsed_time > 
self.runtime:\n sys.stderr.write('Interval time between flows is too long')\n return False\n\n time.sleep(self.runtime - elapsed_time)\n self.test_end_time = utils.utc_time()\n\n return True\n\n # test congestion control using tunnel client and tunnel server\n def run_with_tunnel(self):\n # run pantheon tunnel server and client managers\n ts_manager, tc_manager = self.run_tunnel_managers()\n if ts_manager is None or tc_manager is None:\n sys.stderr.write('Unable to run tunnel client or server manager '\n '=> aborting\\n')\n return False\n\n # create alias for ts_manager and tc_manager using sender or receiver\n if self.sender_side == self.server_side:\n send_manager = ts_manager\n recv_manager = tc_manager\n else:\n send_manager = tc_manager\n recv_manager = ts_manager\n\n # run every flow\n second_cmds = []\n for tun_id in xrange(1, self.flows + 1):\n # run tunnel server for tunnel tun_id\n cmd_to_run_tc = self.run_tunnel_server(tun_id, ts_manager)\n\n # run tunnel client for tunnel tun_id\n if not self.run_tunnel_client(tun_id, tc_manager, cmd_to_run_tc):\n return False\n\n tc_pri_ip = cmd_to_run_tc[3] # tunnel client private IP\n ts_pri_ip = cmd_to_run_tc[4] # tunnel server private IP\n\n if self.sender_side == self.server_side:\n send_pri_ip = ts_pri_ip\n recv_pri_ip = tc_pri_ip\n else:\n send_pri_ip = tc_pri_ip\n recv_pri_ip = ts_pri_ip\n\n # run the side that runs first and get cmd to run the other side\n second_cmd = self.run_first_side(\n tun_id, send_manager, recv_manager, send_pri_ip, recv_pri_ip)\n second_cmds.append(second_cmd)\n\n # run the side that runs second\n if not self.run_second_side(send_manager, recv_manager, second_cmds):\n return False\n\n # stop all the running flows and quit tunnel managers\n ts_manager.stdin.write('halt\\n')\n ts_manager.stdin.flush()\n tc_manager.stdin.write('halt\\n')\n tc_manager.stdin.flush()\n\n # process tunnel logs\n self.process_tunnel_logs()\n\n return True\n\n def download_tunnel_logs(self, tun_id):\n 
assert(self.mode == 'remote')\n\n # download logs from remote side\n cmd = 'scp -C %s:' % self.r['host_addr']\n cmd += '%(remote_log)s %(local_log)s'\n\n # function to get a corresponding local path from a remote path\n f = lambda p: path.join(utils.tmp_dir, path.basename(p))\n\n if self.sender_side == 'remote':\n local_log = f(self.datalink_egress_logs[tun_id])\n call(cmd % {'remote_log': self.datalink_egress_logs[tun_id],\n 'local_log': local_log}, shell=True)\n self.datalink_egress_logs[tun_id] = local_log\n\n local_log = f(self.acklink_ingress_logs[tun_id])\n call(cmd % {'remote_log': self.acklink_ingress_logs[tun_id],\n 'local_log': local_log}, shell=True)\n self.acklink_ingress_logs[tun_id] = local_log\n else:\n local_log = f(self.datalink_ingress_logs[tun_id])\n call(cmd % {'remote_log': self.datalink_ingress_logs[tun_id],\n 'local_log': local_log}, shell=True)\n self.datalink_ingress_logs[tun_id] = local_log\n\n local_log = f(self.acklink_egress_logs[tun_id])\n call(cmd % {'remote_log': self.acklink_egress_logs[tun_id],\n 'local_log': local_log}, shell=True)\n self.acklink_egress_logs[tun_id] = local_log\n\n\n def process_tunnel_logs(self):\n datalink_tun_logs = []\n acklink_tun_logs = []\n\n apply_ofst = False\n if self.mode == 'remote':\n if self.remote_ofst is not None and self.local_ofst is not None:\n apply_ofst = True\n\n if self.sender_side == 'remote':\n data_e_ofst = self.remote_ofst\n ack_i_ofst = self.remote_ofst\n data_i_ofst = self.local_ofst\n ack_e_ofst = self.local_ofst\n else:\n data_i_ofst = self.remote_ofst\n ack_e_ofst = self.remote_ofst\n data_e_ofst = self.local_ofst\n ack_i_ofst = self.local_ofst\n\n merge_tunnel_logs = path.join(context.src_dir, 'experiments',\n 'merge_tunnel_logs.py')\n\n for tun_id in xrange(1, self.flows + 1):\n if self.mode == 'remote':\n self.download_tunnel_logs(tun_id)\n\n uid = uuid.uuid4()\n datalink_tun_log = path.join(\n utils.tmp_dir, '%s_flow%s_uid%s.log.merged'\n % (self.datalink_name, tun_id, uid))\n 
acklink_tun_log = path.join(\n utils.tmp_dir, '%s_flow%s_uid%s.log.merged'\n % (self.acklink_name, tun_id, uid))\n\n cmd = [merge_tunnel_logs, 'single',\n '-i', self.datalink_ingress_logs[tun_id],\n '-e', self.datalink_egress_logs[tun_id],\n '-o', datalink_tun_log]\n if apply_ofst:\n cmd += ['-i-clock-offset', data_i_ofst,\n '-e-clock-offset', data_e_ofst]\n call(cmd)\n\n cmd = [merge_tunnel_logs, 'single',\n '-i', self.acklink_ingress_logs[tun_id],\n '-e', self.acklink_egress_logs[tun_id],\n '-o', acklink_tun_log]\n if apply_ofst:\n cmd += ['-i-clock-offset', ack_i_ofst,\n '-e-clock-offset', ack_e_ofst]\n call(cmd)\n\n datalink_tun_logs.append(datalink_tun_log)\n acklink_tun_logs.append(acklink_tun_log)\n\n cmd = [merge_tunnel_logs, 'multiple', '-o', self.datalink_log]\n if self.mode == 'local':\n cmd += ['--link-log', self.mm_datalink_log]\n cmd += datalink_tun_logs\n call(cmd)\n\n cmd = [merge_tunnel_logs, 'multiple', '-o', self.acklink_log]\n if self.mode == 'local':\n cmd += ['--link-log', self.mm_acklink_log]\n cmd += acklink_tun_logs\n call(cmd)\n\n def run_congestion_control(self):\n if self.flows > 0:\n try:\n return self.run_with_tunnel()\n finally:\n utils.kill_proc_group(self.ts_manager)\n utils.kill_proc_group(self.tc_manager)\n else:\n # test without pantheon tunnel when self.flows = 0\n try:\n return self.run_without_tunnel()\n finally:\n utils.kill_proc_group(self.proc_first)\n utils.kill_proc_group(self.proc_second)\n\n def record_time_stats(self):\n stats_log = path.join(\n self.data_dir, '%s_stats_run%s.log' % (self.cc, self.run_id))\n stats = open(stats_log, 'w')\n\n # save start time and end time of test\n if self.test_start_time is not None and self.test_end_time is not None:\n test_run_duration = (\n 'Start at: %s\\nEnd at: %s\\n' %\n (self.test_start_time, self.test_end_time))\n sys.stderr.write(test_run_duration)\n stats.write(test_run_duration)\n\n if self.mode == 'remote':\n ofst_info = ''\n if self.local_ofst is not None:\n ofst_info += 
'Local clock offset: %s ms\\n' % self.local_ofst\n\n if self.remote_ofst is not None:\n ofst_info += 'Remote clock offset: %s ms\\n' % self.remote_ofst\n\n if ofst_info:\n sys.stderr.write(ofst_info)\n stats.write(ofst_info)\n\n stats.close()\n\n # run congestion control test\n def run(self):\n msg = 'Testing scheme %s for experiment run %d/%d...' % (\n self.cc, self.run_id, self.run_times)\n sys.stderr.write(msg + '\\n')\n\n # setup before running tests\n self.setup()\n\n # run receiver and sender\n if not self.run_congestion_control():\n sys.stderr.write('Error in testing scheme %s with run ID %d\\n' %\n (self.cc, self.run_id))\n return\n\n # write runtimes and clock offsets to file\n self.record_time_stats()\n\n sys.stderr.write('Done testing %s\\n' % self.cc)\n\n\ndef run_tests(args):\n # check and get git summary\n git_summary = utils.get_git_summary(args.mode,\n getattr(args, 'remote_path', None))\n\n # get cc_schemes\n cc_schemes = OrderedDict()\n if args.all:\n config = utils.parse_config()\n schemes_config = config['schemes']\n\n for scheme in schemes_config.keys():\n cc_schemes[scheme] = {}\n if args.random_order:\n utils.shuffle_keys(cc_schemes)\n elif args.schemes is not None:\n cc_schemes = utils.parse_schemes(args.schemes)\n if args.random_order:\n utils.shuffle_keys(cc_schemes)\n else:\n assert(args.test_config is not None)\n if args.random_order:\n random.shuffle(args.test_config['flows'])\n for flow in args.test_config['flows']:\n cc_schemes[flow['scheme']] = {}\n\n # save metadata\n meta = vars(args).copy()\n meta['cc_schemes'] = sorted(cc_schemes)\n meta['git_summary'] = git_summary\n\n metadata_path = path.join(args.data_dir, 'pantheon_metadata.json')\n utils.save_test_metadata(meta, metadata_path)\n\n # run tests\n for run_id in xrange(args.start_run_id,\n args.start_run_id + args.run_times):\n if not hasattr(args, 'test_config') or args.test_config is None:\n for cc, params in cc_schemes.iteritems():\n test_args = get_cc_args(args, params)\n 
Test(test_args, run_id, cc).run()\n else:\n Test(args, run_id, None).run()\n\n\ndef get_cc_args(args, params):\n \"\"\"\n Obtain experiment-specific arguments and original cc scheme name.\n\n :param args: Original arguments\n :param params: Dictionary holding this experiment's specific params as\n strings, e.g. {\"cc_env_fixed_cwnd\": \"100\"}\n :return: An updated version of `args` with overridden params' values\n (`args` is *not* modified in-place: either we return it unchanged,\n or a copy is made before any modification)\n \"\"\"\n if params:\n # Override default params with scheme-specific ones.\n args = copy.deepcopy(args)\n for param, val in params.iteritems():\n if hasattr(args, param):\n # This is a direct parameter to this script: we assume that we\n # can use the type of the default setting value to cast the\n # string `val` into the desired type.\n cast_func = type(getattr(args, param))\n setattr(args, param, cast_func(param))\n else:\n # This must be an indirect parameter passed through `--extra-sender-args`:\n # modify this string to use the desired value instead of current one.\n extra = args.extra_sender_args\n pattern = '--{}='.format(param)\n pattern_pos = extra.find(pattern)\n assert pattern_pos >= 0, (\n 'pattern not found in --extra-sender-args: {}'.format(pattern))\n next_pos = extra.find(' ', pattern_pos) # when next param starts\n if next_pos == -1: # will happen if `param` is the last parameter\n next_pos = len(extra)\n # Build the new string of extra args.\n args.extra_sender_args = ''.join([\n extra[0:pattern_pos + len(pattern)], # up to param's value\n val, # the new value of `param` (already a string)\n extra[next_pos:], # after `param`\n ])\n return args\n\n\ndef pkill(args):\n sys.stderr.write('Cleaning up using pkill...'\n '(enabled by --pkill-cleanup)\\n')\n\n if args.mode == 'remote':\n r = utils.parse_remote_path(args.remote_path)\n remote_pkill_src = path.join(r['base_dir'], 'tools', 'pkill.py')\n\n cmd = r['ssh_cmd'] + 
['python', remote_pkill_src,\n '--kill-dir', r['base_dir']]\n call(cmd)\n\n pkill_src = path.join(context.base_dir, 'tools', 'pkill.py')\n cmd = ['python', pkill_src, '--kill-dir', context.src_dir]\n call(cmd)\n\n\ndef main():\n args = arg_parser.parse_test()\n\n try:\n run_tests(args)\n except: # intended to catch all exceptions\n # dump traceback ahead in case pkill kills the program\n sys.stderr.write(traceback.format_exc())\n\n if args.pkill_cleanup:\n pkill(args)\n\n sys.exit('Error in tests!')\n else:\n sys.stderr.write('All tests done!\\n')\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5950069427490234,
"alphanum_fraction": 0.5950069427490234,
"avg_line_length": 20.84848403930664,
"blob_id": "7a40448a9d2dd3f0be1fe5ee4b744bff29edd881",
"content_id": "371ad91da5c91b5a1d42d77840e0094fe112de84",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 721,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 33,
"path": "/src/wrappers/indigo.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\n\n\ndef main():\n args = arg_parser.sender_first()\n\n cc_repo = path.join(context.third_party_dir, 'indigo')\n send_src = path.join(cc_repo, 'dagger', 'run_sender.py')\n recv_src = path.join(cc_repo, 'env', 'run_receiver.py')\n\n if args.option == 'setup':\n check_call(['sudo pip install tensorflow'], shell=True)\n return\n\n if args.option == 'sender':\n cmd = [send_src, args.port]\n check_call(cmd)\n return\n\n if args.option == 'receiver':\n cmd = [recv_src, args.ip, args.port]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.44977426528930664,
"alphanum_fraction": 0.4529184103012085,
"avg_line_length": 34.9536247253418,
"blob_id": "0d61e76e9cb44b514f2110b0710e01912669521e",
"content_id": "70b79df7f193fd49408ca2cb2b4e6083d959e6e1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12404,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 345,
"path": "/src/analysis/report.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport sys\nimport os\nfrom os import path\nimport re\nimport shutil\nimport uuid\nimport numpy as np\n\nimport arg_parser\nimport context\nfrom helpers import utils\nfrom helpers.subprocess_wrappers import check_call, check_output\n\n\nclass Report(object):\n def __init__(self, args):\n self.data_dir = path.abspath(args.data_dir)\n self.include_acklink = args.include_acklink\n\n metadata_path = path.join(args.data_dir, 'pantheon_metadata.json')\n self.meta = utils.load_test_metadata(metadata_path)\n self.cc_schemes = utils.verify_schemes_with_meta(args.schemes, self.meta)\n\n self.run_times = self.meta['run_times']\n self.flows = self.meta['flows']\n self.config = utils.parse_config()\n\n def describe_metadata(self):\n desc = '\\\\centerline{\\\\textbf{\\\\large{Pantheon Report}}}\\n'\n desc += '\\\\vspace{20pt}\\n\\n'\n desc += 'Generated at %s (UTC).\\n\\n' % utils.utc_time()\n\n meta = self.meta\n\n if meta['mode'] == 'local':\n mm_cmd = []\n if 'prepend_mm_cmds' in meta:\n mm_cmd.append(meta['prepend_mm_cmds'])\n mm_cmd += ['mm-link', meta['uplink_trace'], meta['downlink_trace']]\n if 'extra_mm_link_args' in meta:\n mm_cmd.append(meta['extra_mm_link_args'])\n if 'append_mm_cmds' in meta:\n mm_cmd.append(meta['append_mm_cmds'])\n\n mm_cmd = ' '.join(mm_cmd).replace('_', '\\\\_')\n\n desc += 'Tested in mahimahi: \\\\texttt{%s}\\n\\n' % mm_cmd\n elif meta['mode'] == 'remote':\n txt = {}\n for side in ['local', 'remote']:\n txt[side] = []\n\n if '%s_desc' % side in meta:\n txt[side].append(meta['%s_desc' % side])\n\n if '%s_if' % side in meta:\n txt[side].append('on \\\\texttt{%s}' % meta['%s_if' % side])\n\n txt[side] = ' '.join(txt[side]).replace('_', '\\\\_')\n\n if meta['sender_side'] == 'remote':\n desc += ('Data path: %s (\\\\textit{remote}) \\\\textrightarrow '\n '%s (\\\\textit{local}).\\n\\n') % (\n txt['remote'], txt['local'])\n else:\n desc += ('Data path: %s (\\\\textit{local}) \\\\textrightarrow '\n '%s 
(\\\\textit{remote}).\\n\\n') % (\n txt['local'], txt['remote'])\n\n if meta['flows'] == 1:\n flows = '1 flow'\n else:\n flows = ('%s flows with %s-second interval between two flows' %\n (meta['flows'], meta['interval']))\n\n if meta['runtime'] == 1:\n runtime = '1 second'\n else:\n runtime = '%s seconds' % meta['runtime']\n\n run_times = meta['run_times']\n if run_times == 1:\n times = 'once'\n elif run_times == 2:\n times = 'twice'\n else:\n times = '%s times' % run_times\n\n desc += (\n 'Repeated the test of %d congestion control schemes %s.\\n\\n'\n 'Each test lasted for %s running %s.\\n\\n'\n % (len(self.cc_schemes), times, runtime, flows))\n\n if 'ntp_addr' in meta:\n desc += ('NTP offsets were measured against \\\\texttt{%s} and have '\n 'been applied to correct the timestamps in logs.\\n\\n'\n % meta['ntp_addr'])\n\n desc += (\n '\\\\begin{verbatim}\\n'\n 'System info:\\n'\n '%s'\n '\\\\end{verbatim}\\n\\n' % utils.get_sys_info())\n\n desc += (\n '\\\\begin{verbatim}\\n'\n 'Git summary:\\n'\n '%s'\n '\\\\end{verbatim}\\n\\n' % meta['git_summary'])\n desc += '\\\\newpage\\n\\n'\n\n return desc\n\n def create_table(self, data):\n align = ' c | c'\n for data_t in ['tput', 'delay', 'loss']:\n align += ' | ' + ' '.join(['Y' for _ in xrange(self.flows)])\n align += ' '\n\n flow_cols = ' & '.join(\n ['flow %d' % flow_id for flow_id in xrange(1, 1 + self.flows)])\n\n table_width = 0.9 if self.flows == 1 else ''\n table = (\n '\\\\begin{landscape}\\n'\n '\\\\centering\\n'\n '\\\\begin{tabularx}{%(width)s\\linewidth}{%(align)s}\\n'\n '& & \\\\multicolumn{%(flows)d}{c|}{mean avg tput (Mbit/s)}'\n ' & \\\\multicolumn{%(flows)d}{c|}{mean 95th-\\\\%%ile delay (ms)}'\n ' & \\\\multicolumn{%(flows)d}{c}{mean loss rate (\\\\%%)} \\\\\\\\\\n'\n 'scheme & \\\\# runs & %(flow_cols)s & %(flow_cols)s & %(flow_cols)s'\n ' \\\\\\\\\\n'\n '\\\\hline\\n'\n ) % {'width': table_width,\n 'align': align,\n 'flows': self.flows,\n 'flow_cols': flow_cols}\n\n for cc in self.cc_schemes:\n 
flow_data = {}\n for data_t in ['tput', 'delay', 'loss']:\n flow_data[data_t] = []\n for flow_id in xrange(1, self.flows + 1):\n if data[cc][flow_id][data_t]:\n mean_value = np.mean(data[cc][flow_id][data_t])\n flow_data[data_t].append('%.2f' % mean_value)\n else:\n flow_data[data_t].append('N/A')\n\n table += (\n '%(name)s & %(valid_runs)s & %(flow_tputs)s & '\n '%(flow_delays)s & %(flow_losses)s \\\\\\\\\\n'\n ) % {'name': data[cc]['name'].replace('{', '\\\\{').replace('}', '\\\\}'),\n 'valid_runs': data[cc]['valid_runs'],\n 'flow_tputs': ' & '.join(flow_data['tput']),\n 'flow_delays': ' & '.join(flow_data['delay']),\n 'flow_losses': ' & '.join(flow_data['loss'])}\n\n table += (\n '\\\\end{tabularx}\\n'\n '\\\\end{landscape}\\n\\n'\n )\n\n return table\n\n def summary_table(self):\n data = {}\n\n re_tput = lambda x: re.match(r'Average throughput: (.*?) Mbit/s', x)\n re_delay = lambda x: re.match(\n r'95th percentile per-packet one-way delay: (.*?) ms', x)\n re_loss = lambda x: re.match(r'Loss rate: (.*?)%', x)\n\n for cc in self.cc_schemes:\n data[cc] = {}\n data[cc]['valid_runs'] = 0\n\n cc_name = utils.get_scheme_name(cc, self.config['schemes'])\n cc_name = cc_name.strip().replace('_', '\\\\_')\n data[cc]['name'] = cc_name\n\n for flow_id in xrange(1, self.flows + 1):\n data[cc][flow_id] = {}\n\n data[cc][flow_id]['tput'] = []\n data[cc][flow_id]['delay'] = []\n data[cc][flow_id]['loss'] = []\n\n for run_id in xrange(1, 1 + self.run_times):\n fname = '%s_stats_run%s.log' % (cc, run_id)\n stats_log_path = path.join(self.data_dir, fname)\n\n if not path.isfile(stats_log_path):\n continue\n\n stats_log = open(stats_log_path)\n\n valid_run = False\n flow_id = 1\n\n while True:\n line = stats_log.readline()\n if not line:\n break\n\n if 'Datalink statistics' in line:\n valid_run = True\n continue\n\n if 'Flow %d' % flow_id in line:\n ret = re_tput(stats_log.readline())\n if ret:\n ret = float(ret.group(1))\n data[cc][flow_id]['tput'].append(ret)\n\n ret = 
re_delay(stats_log.readline())\n if ret:\n ret = float(ret.group(1))\n data[cc][flow_id]['delay'].append(ret)\n\n ret = re_loss(stats_log.readline())\n if ret:\n ret = float(ret.group(1))\n data[cc][flow_id]['loss'].append(ret)\n\n if flow_id < self.flows:\n flow_id += 1\n\n stats_log.close()\n\n if valid_run:\n data[cc]['valid_runs'] += 1\n\n return self.create_table(data)\n\n def include_summary(self):\n raw_summary = path.join(self.data_dir, 'pantheon_summary.pdf')\n mean_summary = path.join(\n self.data_dir, 'pantheon_summary_mean.pdf')\n\n metadata_desc = self.describe_metadata()\n\n self.latex.write(\n '\\\\documentclass{article}\\n'\n '\\\\usepackage{pdfpages, graphicx, float}\\n'\n '\\\\usepackage{tabularx, pdflscape}\\n'\n '\\\\usepackage{textcomp}\\n\\n'\n '\\\\newcolumntype{Y}{>{\\\\centering\\\\arraybackslash}X}\\n'\n '\\\\newcommand{\\PantheonFig}[1]{%%\\n'\n '\\\\begin{figure}[H]\\n'\n '\\\\centering\\n'\n '\\\\IfFileExists{#1}{\\includegraphics[width=\\\\textwidth]{#1}}'\n '{Figure is missing}\\n'\n '\\\\end{figure}}\\n\\n'\n '\\\\begin{document}\\n'\n '%s'\n '\\\\PantheonFig{%s}\\n\\n'\n '\\\\PantheonFig{%s}\\n\\n'\n '\\\\newpage\\n\\n'\n % (metadata_desc, mean_summary, raw_summary))\n\n self.latex.write('%s\\\\newpage\\n\\n' % self.summary_table())\n\n def include_runs(self):\n cc_id = 0\n for cc in self.cc_schemes:\n cc_id += 1\n cc_name = utils.get_scheme_name(cc, self.config['schemes'])\n cc_name = cc_name.strip().replace('_', '\\\\_')\n\n for run_id in xrange(1, 1 + self.run_times):\n fname = '%s_stats_run%s.log' % (cc, run_id)\n stats_log_path = path.join(self.data_dir, fname)\n\n if path.isfile(stats_log_path):\n with open(stats_log_path) as stats_log:\n stats_info = stats_log.read()\n else:\n stats_info = '%s does not exist\\n' % stats_log_path\n\n str_dict = {'cc_name': cc_name,\n 'run_id': run_id,\n 'stats_info': stats_info}\n\n link_directions = ['datalink']\n if self.include_acklink:\n link_directions.append('acklink')\n\n for link_t in 
link_directions:\n for metric_t in ['throughput', 'delay']:\n graph_path = path.join(\n self.data_dir, cc + '_%s_%s_run%s.png' %\n (link_t, metric_t, run_id))\n str_dict['%s_%s' % (link_t, metric_t)] = graph_path\n\n self.latex.write(\n '\\\\begin{verbatim}\\n'\n 'Run %(run_id)s: Statistics of %(cc_name)s\\n\\n'\n '%(stats_info)s'\n '\\\\end{verbatim}\\n\\n'\n '\\\\newpage\\n\\n'\n 'Run %(run_id)s: Report of %(cc_name)s --- Data Link\\n\\n'\n '\\\\PantheonFig{%(datalink_throughput)s}\\n\\n'\n '\\\\PantheonFig{%(datalink_delay)s}\\n\\n'\n '\\\\newpage\\n\\n' % str_dict)\n\n if self.include_acklink:\n self.latex.write(\n 'Run %(run_id)s: '\n 'Report of %(cc_name)s --- ACK Link\\n\\n'\n '\\\\PantheonFig{%(acklink_throughput)s}\\n\\n'\n '\\\\PantheonFig{%(acklink_delay)s}\\n\\n'\n '\\\\newpage\\n\\n' % str_dict)\n\n self.latex.write('\\\\end{document}')\n\n def run(self):\n report_uid = uuid.uuid4()\n latex_path = path.join(utils.tmp_dir, 'pantheon_report_%s.tex' % report_uid)\n self.latex = open(latex_path, 'w')\n self.include_summary()\n self.include_runs()\n self.latex.close()\n\n cmd = ['pdflatex', '-halt-on-error', '-jobname',\n 'pantheon_report_%s' % report_uid, latex_path]\n check_call(cmd, cwd=utils.tmp_dir)\n\n pdf_src_path = path.join(utils.tmp_dir, 'pantheon_report_%s.pdf' % report_uid)\n pdf_dst_path = path.join(self.data_dir, 'pantheon_report.pdf')\n shutil.move(pdf_src_path, pdf_dst_path)\n\n sys.stderr.write(\n 'Saved pantheon_report.pdf in %s\\n' % self.data_dir)\n\n\ndef main():\n args = arg_parser.parse_report()\n Report(args).run()\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.6107897758483887,
"alphanum_fraction": 0.6149616241455078,
"avg_line_length": 39.722007751464844,
"blob_id": "30a6e93f56e1d1a61f23da931074e3a7b9e420fe",
"content_id": "622bd44a3376daa6373001f830781ff9717e2471",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10547,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 259,
"path": "/src/experiments/arg_parser.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "from os import path\nimport sys\nimport yaml\nimport argparse\n\nimport context\nfrom helpers import utils\n\n\ndef verify_schemes(schemes):\n schemes = map(utils.get_base_scheme, utils.parse_schemes(schemes))\n all_schemes = utils.parse_config()['schemes'].keys()\n\n for cc in schemes:\n if cc not in all_schemes:\n sys.exit('%s is not a scheme included in src/config.yml' % cc)\n\n\ndef parse_setup_system():\n parser = argparse.ArgumentParser()\n\n parser.add_argument('--enable-ip-forward', action='store_true',\n help='enable IP forwarding')\n parser.add_argument('--interface',\n help='interface to disable reverse path filtering')\n parser.add_argument('--qdisc', help='change default qdisc')\n\n group = parser.add_mutually_exclusive_group()\n group.add_argument(\n '--set-rmem', action='store_true',\n help='set socket receive buffer sizes to Pantheon\\'s required ones')\n group.add_argument(\n '--reset-rmem', action='store_true',\n help='set socket receive buffer sizes to Linux default ones')\n group.add_argument(\n '--set-all-mem', action='store_true',\n help='set socket send and receive buffer sizes')\n group.add_argument(\n '--reset-all-mem', action='store_true',\n help='set socket send and receive buffer sizes to Linux default ones')\n\n args = parser.parse_args()\n return args\n\n\ndef parse_setup():\n parser = argparse.ArgumentParser(\n description='by default, run \"setup_after_reboot\" on specified '\n 'schemes and system-wide setup required every time after reboot')\n\n # schemes related\n group = parser.add_mutually_exclusive_group()\n group.add_argument('--all', action='store_true',\n help='set up all schemes specified in src/config.yml')\n group.add_argument('--schemes', metavar='\"SCHEME1 SCHEME2...\"',\n help='set up a space-separated list of schemes')\n\n parser.add_argument('--install-deps', action='store_true',\n help='install dependencies of schemes')\n parser.add_argument('--setup', action='store_true',\n help='run \"setup\" on each 
scheme')\n\n args = parser.parse_args()\n if args.schemes is not None:\n verify_schemes(args.schemes)\n\n if args.install_deps:\n if not args.all and args.schemes is None:\n sys.exit('must specify --all or --schemes '\n 'when --install-deps is given')\n\n if args.setup:\n sys.exit('cannot perform setup when --install-deps is given')\n\n return args\n\n\ndef parse_test_shared(local, remote, config_args):\n for mode in [local, remote]:\n if config_args.config_file is None:\n mode.add_argument(\n '-f', '--flows', type=int, default=1,\n help='number of flows (default 1)')\n mode.add_argument(\n '-t', '--runtime', type=int, default=30,\n help='total runtime in seconds (default 30)')\n mode.add_argument(\n '--interval', type=int, default=0,\n help='interval in seconds between two flows (default 0)')\n\n if config_args.config_file is None:\n group = mode.add_mutually_exclusive_group(required=True)\n group.add_argument('--all', action='store_true',\n help='test all schemes specified in src/config.yml')\n group.add_argument('--schemes', metavar='\"SCHEME1 SCHEME2...\"',\n help='test a space-separated list of schemes')\n\n mode.add_argument('--run-times', metavar='TIMES', type=int, default=1,\n help='run times of each scheme (default 1)')\n mode.add_argument('--start-run-id', metavar='ID', type=int, default=1,\n help='run ID to start with')\n mode.add_argument('--random-order', action='store_true',\n help='test schemes in random order')\n mode.add_argument(\n '--data-dir', metavar='DIR',\n default=path.join(context.src_dir, 'experiments', 'data'),\n help='directory to save all test logs, graphs, '\n 'metadata, and report (default pantheon/src/experiments/data)')\n mode.add_argument(\n '--pkill-cleanup', action='store_true', help='clean up using pkill'\n ' (send SIGKILL when necessary) if there were errors during tests')\n mode.add_argument('--extra-sender-args',\n metavar='--arg1=val1 --arg2=val2...', default='',\n help='extra arguments to pass to sender wrapper')\n\n\ndef 
parse_test_local(local):\n local.add_argument(\n '--uplink-trace', metavar='TRACE',\n default=path.join(context.src_dir, 'experiments', '12mbps.trace'),\n help='uplink trace (from sender to receiver) to pass to mm-link '\n '(default pantheon/test/12mbps.trace)')\n local.add_argument(\n '--downlink-trace', metavar='TRACE',\n default=path.join(context.src_dir, 'experiments', '12mbps.trace'),\n help='downlink trace (from receiver to sender) to pass to mm-link '\n '(default pantheon/test/12mbps.trace)')\n local.add_argument(\n '--prepend-mm-cmds', metavar='\"CMD1 CMD2...\"',\n help='mahimahi shells to run outside of mm-link')\n local.add_argument(\n '--append-mm-cmds', metavar='\"CMD1 CMD2...\"',\n help='mahimahi shells to run inside of mm-link')\n local.add_argument(\n '--extra-mm-link-args', metavar='\"ARG1 ARG2...\"',\n help='extra arguments to pass to mm-link when running locally. Note '\n 'that uplink (downlink) always represents the link from sender to '\n 'receiver (from receiver to sender)')\n\n\ndef parse_test_remote(remote):\n remote.add_argument(\n '--sender', choices=['local', 'remote'], default='local',\n action='store', dest='sender_side',\n help='the side to be data sender (default local)')\n remote.add_argument(\n '--tunnel-server', choices=['local', 'remote'], default='remote',\n action='store', dest='server_side',\n help='the side to run pantheon tunnel server on (default remote)')\n remote.add_argument(\n '--local-addr', metavar='IP',\n help='local IP address that can be reached from remote host, '\n 'required if \"--tunnel-server local\" is given')\n remote.add_argument(\n '--local-if', metavar='INTERFACE',\n help='local interface to run pantheon tunnel on')\n remote.add_argument(\n '--remote-if', metavar='INTERFACE',\n help='remote interface to run pantheon tunnel on')\n remote.add_argument(\n '--ntp-addr', metavar='HOST',\n help='address of an NTP server to query clock offset')\n remote.add_argument(\n '--local-desc', metavar='DESC',\n help='extra 
description of the local side')\n remote.add_argument(\n '--remote-desc', metavar='DESC',\n help='extra description of the remote side')\n\n\ndef verify_test_args(args):\n if args.flows == 0:\n prepend = getattr(args, 'prepend_mm_cmds', None)\n append = getattr(args, 'append_mm_cmds', None)\n extra = getattr(args, 'extra_mm_link_args', None)\n if append is not None or prepend is not None or extra is not None:\n sys.exit('Cannot apply --prepend-mm-cmds, --append-mm-cmds or '\n '--extra-mm-link-args without pantheon tunnels')\n\n if args.runtime > 60 or args.runtime <= 0:\n sys.exit('runtime cannot be non-positive or greater than 60 s')\n if args.flows < 0:\n sys.exit('flow cannot be negative')\n if args.interval < 0:\n sys.exit('interval cannot be negative')\n if args.flows > 0 and args.interval > 0:\n if (args.flows - 1) * args.interval > args.runtime:\n sys.exit('interval time between flows is too long to be '\n 'fit in runtime')\n\ndef parse_test_config(test_config, local, remote):\n # Check config file has atleast a test-name and a description of flows\n if 'test-name' not in test_config:\n sys.exit('Config file must have a test-name argument')\n if 'flows' not in test_config:\n sys.exit('Config file must specify flows')\n\n defaults = {}\n defaults.update(**test_config)\n defaults['schemes'] = None\n defaults['all'] = False\n defaults['flows'] = len(test_config['flows'])\n defaults['test_config'] = test_config\n\n local.set_defaults(**defaults)\n remote.set_defaults(**defaults)\n\n\ndef parse_test():\n # Load configuration file before parsing other command line options\n # Command line options will override options in config file\n config_parser = argparse.ArgumentParser(\n description=__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n add_help=False)\n config_parser.add_argument('-c','--config_file', metavar='CONFIG',\n help='path to configuration file. '\n 'command line arguments will override options '\n 'in config file. 
')\n config_args, remaining_argv = config_parser.parse_known_args()\n\n parser = argparse.ArgumentParser(\n description='perform congestion control tests',\n parents=[config_parser])\n\n subparsers = parser.add_subparsers(dest='mode')\n local = subparsers.add_parser(\n 'local', help='test schemes locally in mahimahi emulated networks')\n remote = subparsers.add_parser(\n 'remote', help='test schemes between local and remote in '\n 'real-life networks')\n remote.add_argument(\n 'remote_path', metavar='HOST:PANTHEON-DIR',\n help='HOST ([user@]IP) and PANTHEON-DIR (remote pantheon directory)')\n\n parse_test_shared(local, remote, config_args)\n parse_test_local(local)\n parse_test_remote(remote)\n\n # Make settings in config file the defaults\n test_config = None\n if config_args.config_file is not None:\n with open(config_args.config_file) as f:\n test_config = yaml.safe_load(f)\n parse_test_config(test_config, local, remote)\n\n args = parser.parse_args(remaining_argv)\n if args.schemes is not None:\n verify_schemes(args.schemes)\n args.test_config = None\n elif not args.all:\n assert(test_config is not None)\n schemes = ' '.join([flow['scheme'] for flow in test_config['flows']])\n verify_schemes(schemes)\n\n verify_test_args(args)\n utils.make_sure_dir_exists(args.data_dir)\n\n return args\n"
},
{
"alpha_fraction": 0.5836065411567688,
"alphanum_fraction": 0.5934426188468933,
"avg_line_length": 30.907691955566406,
"blob_id": "939358656ba6c1f0eeb93a8362b9fb525436e0e4",
"content_id": "1e022e7dc2c97af8236df56dff17ebf7176a38bb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10370,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 325,
"path": "/src/helpers/utils.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "import os\nfrom os import path\nimport sys\nimport socket\nimport signal\nimport errno\nimport itertools\nimport json\nimport random\nimport yaml\nimport subprocess\nfrom collections import OrderedDict\nfrom datetime import datetime\n\nimport context\nfrom subprocess_wrappers import check_call, check_output, call\n\n\ndef get_open_port():\n sock = socket.socket(socket.AF_INET)\n sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n\n sock.bind(('', 0))\n port = sock.getsockname()[1]\n sock.close()\n return str(port)\n\n\ndef make_sure_dir_exists(d):\n try:\n os.makedirs(d)\n except OSError as exception:\n if exception.errno != errno.EEXIST:\n raise\n\n\ntmp_dir = path.join(context.base_dir, 'tmp')\nmake_sure_dir_exists(tmp_dir)\n\n\ndef parse_config():\n with open(path.join(context.src_dir, 'config.yml')) as config:\n return yaml.load(config)\n\n\ndef parse_schemes(args_schemes):\n \"\"\"\n Parse the list of schemes provided on the command line.\n\n Each scheme may be associated to various settings provided as a list.\n For instance, `args_schemes` may be the string\n 'bbr mvfst_rl mvfst_rl_fixed{cc_env_fixed_cwnd=10,100,1000}'\n in which case the returned dictionary will be:\n {\n 'bbr': {},\n 'mvfst_rl': {},\n 'mvfst_rl_fixed{10}': {'cc_env_fixed_cwnd': '10'},\n 'mvfst_rl_fixed{100}': {'cc_env_fixed_cwnd': '100'},\n 'mvfst_rl_fixed{1000}': {'cc_env_fixed_cwnd': '1000'},\n }\n\n Note that multiple settings may be varied, e.g. 
`arg_schemes` could be\n 'mvfst_rl_fixed{cc_env_fixed_cwnd=10,100;cc_env_reward_delay_factor=0,0.1}'\n and this function would then return:\n {\n 'mvfst_rl_fixed{10;0}': {'cc_env_fixed_cwnd': '10', 'cc_env_reward_delay_factor': '0'},\n 'mvfst_rl_fixed{10;0.1}': {'cc_env_fixed_cwnd': '10', 'cc_env_reward_delay_factor': '0.1'},\n 'mvfst_rl_fixed{100;0}': {'cc_env_fixed_cwnd': '100', 'cc_env_reward_delay_factor': '0'},\n 'mvfst_rl_fixed{100;0.1}': {'cc_env_fixed_cwnd': '100', 'cc_env_reward_delay_factor': '0.1'},\n }\n\n The returned dictionary is an `OrderedDict` so as to preserve the\n original order.\n \"\"\"\n schemes = OrderedDict()\n for scheme in args_schemes.split():\n if '{' not in scheme:\n schemes[scheme] = {} # simple case: no setting to override\n continue\n assert scheme.endswith('}'), scheme\n settings_pos = scheme.index('{')\n args_settings = scheme[settings_pos + 1 : -1] # drop the {}\n scheme_name = scheme[0:settings_pos]\n setting_vals = OrderedDict() # map a setting to the list of values it should take\n for args_setting in args_settings.split(';'):\n setting, values = args_setting.split('=', 1)\n assert setting not in setting_vals, 'duplicate entry: {}'.format(setting)\n setting_vals[setting] = values.split(',')\n # Obtain all combinations of settings.\n for combo in itertools.product(*setting_vals.values()):\n # Format into something like: scheme{val1;val2;val3}\n scheme_key = '{}{{{}}}'.format(scheme_name, ';'.join(combo))\n schemes[scheme_key] = {\n k: v for k, v in itertools.izip(setting_vals, combo)\n }\n return schemes\n\n\ndef get_base_scheme(scheme):\n \"\"\"\n Return the base scheme of a (potentially) parameterized scheme.\n\n Ex:\n * mvst_rl_fixed{10;100} -> mvfst_rl_fixed\n * bbr -> bbr\n \"\"\"\n return scheme.split(\"{\", 1)[0]\n\n\ndef get_scheme_name(scheme, schemes_config):\n \"\"\"\n Return the name (for reporting purpose) of the desired scheme.\n\n The config is used to obtain the base scheme's name, and potential\n additional 
parameters are appended in the name{params} format.\n \"\"\"\n cc_base = get_base_scheme(scheme)\n cc_name = schemes_config[cc_base]['name'] # obtain base name\n if len(scheme) > len(cc_base):\n # Append parameters to the name, like: name{param1;param2}\n cc_name = '{}{}'.format(cc_name, scheme[len(cc_base):])\n return cc_name\n\n\ndef shuffle_keys(the_dict):\n \"\"\"Shuffle in-place the keys of a given (ordered) dictionary\"\"\"\n assert isinstance(the_dict, OrderedDict), 'why shuffle if not ordered?'\n items = the_dict.items()\n random.shuffle(items)\n the_dict.clear()\n for k, v in items:\n the_dict[k] = v\n\n\ndef update_submodules():\n cmd = 'git submodule update --init --recursive'\n check_call(cmd, shell=True)\n\n\nclass TimeoutError(Exception):\n pass\n\n\ndef timeout_handler(signum, frame):\n raise TimeoutError()\n\n\ndef utc_time():\n return datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')\n\n\ndef kill_proc_group(proc, signum=signal.SIGTERM):\n if not proc:\n return\n\n try:\n sys.stderr.write('kill_proc_group: killed process group with pgid %s\\n'\n % os.getpgid(proc.pid))\n os.killpg(os.getpgid(proc.pid), signum)\n except OSError as exception:\n sys.stderr.write('kill_proc_group: %s\\n' % exception)\n\n\ndef apply_patch(patch_name, repo_dir):\n patch = path.join(context.src_dir, 'wrappers', 'patches', patch_name)\n\n if call(['git', 'apply', patch], cwd=repo_dir) != 0:\n sys.stderr.write('patch apply failed but assuming things okay '\n '(patch applied previously?)\\n')\n\n\ndef load_test_metadata(metadata_path):\n with open(metadata_path) as metadata:\n return json.load(metadata)\n\n\ndef verify_schemes_with_meta(schemes, meta):\n schemes_config = parse_config()['schemes']\n\n all_schemes = meta['cc_schemes']\n if schemes is None:\n cc_schemes = all_schemes\n else:\n cc_schemes = schemes.split()\n\n for cc in cc_schemes:\n if cc not in all_schemes:\n sys.exit('%s is not a scheme included in '\n 'pantheon_metadata.json' % cc)\n if get_base_scheme(cc) not 
in schemes_config:\n sys.exit('%s is not a scheme included in src/config.yml' % cc)\n\n return cc_schemes\n\n\ndef who_runs_first(cc):\n cc_src = path.join(context.src_dir, 'wrappers', cc + '.py')\n\n cmd = [cc_src, 'run_first']\n run_first = check_output(cmd).strip()\n\n if run_first == 'receiver':\n run_second = 'sender'\n elif run_first == 'sender':\n run_second = 'receiver'\n else:\n sys.exit('Must specify \"receiver\" or \"sender\" runs first')\n\n return run_first, run_second\n\n\ndef parse_remote_path(remote_path, cc=None):\n ret = {}\n\n ret['host_addr'], ret['base_dir'] = remote_path.rsplit(':', 1)\n ret['src_dir'] = path.join(ret['base_dir'], 'src')\n ret['tmp_dir'] = path.join(ret['base_dir'], 'tmp')\n ret['ip'] = ret['host_addr'].split('@')[-1]\n ret['ssh_cmd'] = ['ssh', ret['host_addr']]\n ret['tunnel_manager'] = path.join(\n ret['src_dir'], 'experiments', 'tunnel_manager.py')\n\n if cc is not None:\n ret['cc_src'] = path.join(ret['src_dir'], 'wrappers', cc + '.py')\n\n return ret\n\n\ndef query_clock_offset(ntp_addr, ssh_cmd):\n local_clock_offset = None\n remote_clock_offset = None\n\n ntp_cmds = {}\n ntpdate_cmd = ['ntpdate', '-t', '5', '-quv', ntp_addr]\n\n ntp_cmds['local'] = ntpdate_cmd\n ntp_cmds['remote'] = ssh_cmd + ntpdate_cmd\n\n for side in ['local', 'remote']:\n cmd = ntp_cmds[side]\n\n fail = True\n for _ in xrange(3):\n try:\n offset = check_output(cmd)\n sys.stderr.write(offset)\n\n offset = offset.rsplit(' ', 2)[-2]\n offset = str(float(offset) * 1000)\n except subprocess.CalledProcessError:\n sys.stderr.write('Failed to get clock offset\\n')\n except ValueError:\n sys.stderr.write('Cannot convert clock offset to float\\n')\n else:\n if side == 'local':\n local_clock_offset = offset\n else:\n remote_clock_offset = offset\n\n fail = False\n break\n\n if fail:\n sys.stderr.write('Failed after 3 queries to NTP server\\n')\n\n return local_clock_offset, remote_clock_offset\n\n\ndef get_git_summary(mode='local', remote_path=None):\n 
git_summary_src = path.join(context.src_dir, 'experiments',\n 'git_summary.sh')\n local_git_summary = check_output(git_summary_src, cwd=context.base_dir)\n\n if mode == 'remote':\n r = parse_remote_path(remote_path)\n\n git_summary_src = path.join(\n r['src_dir'], 'experiments', 'git_summary.sh')\n ssh_cmd = 'cd %s; %s' % (r['base_dir'], git_summary_src)\n ssh_cmd = ' '.join(r['ssh_cmd']) + ' \"%s\"' % ssh_cmd\n\n remote_git_summary = check_output(ssh_cmd, shell=True)\n\n if local_git_summary != remote_git_summary:\n sys.stderr.write(\n '--- local git summary ---\\n%s\\n' % local_git_summary)\n sys.stderr.write(\n '--- remote git summary ---\\n%s\\n' % remote_git_summary)\n sys.exit('Repository differed between local and remote sides')\n\n return local_git_summary\n\n\ndef save_test_metadata(meta, metadata_path):\n meta.pop('all')\n meta.pop('schemes')\n meta.pop('data_dir')\n meta.pop('pkill_cleanup')\n\n # use list in case meta.keys() returns an iterator in Python 3\n for key in list(meta.keys()):\n if meta[key] is None:\n meta.pop(key)\n\n if 'uplink_trace' in meta:\n meta['uplink_trace'] = path.basename(meta['uplink_trace'])\n if 'downlink_trace' in meta:\n meta['downlink_trace'] = path.basename(meta['downlink_trace'])\n\n with open(metadata_path, 'w') as metadata_fh:\n json.dump(meta, metadata_fh, sort_keys=True, indent=4,\n separators=(',', ': '))\n\n\ndef get_sys_info():\n sys_info = ''\n sys_info += check_output(['uname', '-sr'])\n sys_info += check_output(['sysctl', 'net.core.default_qdisc'])\n sys_info += check_output(['sysctl', 'net.core.rmem_default'])\n sys_info += check_output(['sysctl', 'net.core.rmem_max'])\n sys_info += check_output(['sysctl', 'net.core.wmem_default'])\n sys_info += check_output(['sysctl', 'net.core.wmem_max'])\n sys_info += check_output(['sysctl', 'net.ipv4.tcp_rmem'])\n sys_info += check_output(['sysctl', 'net.ipv4.tcp_wmem'])\n return sys_info\n"
},
{
"alpha_fraction": 0.6414473652839661,
"alphanum_fraction": 0.6447368264198303,
"avg_line_length": 27.148147583007812,
"blob_id": "e7c1c62238c51c5af099f8d65d514833c9f60480",
"content_id": "a686f2a03db11255235ef7496d01b7e53587493d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1520,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 54,
"path": "/src/wrappers/example.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\n'''REMOVE ME: Example file to add a new congestion control scheme.\n\nUse Python 2.7 and conform to PEP8.\nUse snake_case as file name and make this file executable.\n'''\n\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\n\n\ndef main():\n # use 'arg_parser' to ensure a common test interface\n args = arg_parser.receiver_first() # or 'arg_parser.sender_first()'\n\n # paths to the sender and receiver executables, etc.\n cc_repo = path.join(context.third_party_dir, 'example_cc_repo')\n send_src = path.join(cc_repo, 'example_sender')\n recv_src = path.join(cc_repo, 'example_receiver')\n\n # [optional] dependencies of Debian packages\n if args.option == 'deps':\n print 'example_dep_1 example_dep_2'\n return\n\n # [optional] persistent setup that only needs to be run once\n if args.option == 'setup':\n # avoid running as root here\n return\n\n # [optional] non-persistent setup that should be performed on every reboot\n if args.option == 'setup_after_reboot':\n # avoid running as root here\n return\n\n # [required] run the first side on port 'args.port'\n if args.option == 'receiver':\n cmd = [recv_src, args.port]\n check_call(cmd)\n return\n\n # [required] run the other side to connect to the first side on 'args.ip'\n if args.option == 'sender':\n cmd = [send_src, args.ip, args.port]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5564365386962891,
"alphanum_fraction": 0.55958491563797,
"avg_line_length": 30.072463989257812,
"blob_id": "d44e166beb610822dc60ada3849c664617621cbe",
"content_id": "86f0d02b0eef2371c6d5b1c0b03a3c34ae00b840",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8576,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 276,
"path": "/src/experiments/merge_tunnel_logs.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport sys\nimport argparse\nimport heapq\n\n\ndef parse_arguments():\n parser = argparse.ArgumentParser()\n subparsers = parser.add_subparsers(help='single or multiple mode',\n dest='mode')\n\n # subparser for single mode\n single_parser = subparsers.add_parser(\n 'single', help='merge the ingress log and egress log of a single '\n 'tunnel into a \"tunnel log\"')\n single_parser.add_argument(\n '-i', action='store', metavar='INGRESS-LOG', dest='ingress_log',\n required=True, help='ingress log of a tunnel')\n single_parser.add_argument(\n '-e', action='store', metavar='EGRESS-LOG', dest='egress_log',\n required=True, help='egress log of a tunnel')\n single_parser.add_argument(\n '-o', action='store', metavar='OUTPUT-LOG', dest='output_log',\n required=True, help='tunnel log after merging')\n single_parser.add_argument(\n '-i-clock-offset', metavar='MS', type=float,\n help='clock offset on the end where ingress log is saved')\n single_parser.add_argument(\n '-e-clock-offset', metavar='MS', type=float,\n help='clock offset on the end where egress log is saved')\n\n # subparser for multiple mode\n multiple_parser = subparsers.add_parser(\n 'multiple', help='merge the tunnel logs of one or more tunnels')\n multiple_parser.add_argument(\n '--link-log', action='store', metavar='LINK-LOG', dest='link_log',\n help='uplink or downlink log generated by mm-link')\n multiple_parser.add_argument(\n 'tunnel_logs', metavar='TUNNEL-LOG', nargs='+',\n help='one or more tunnel logs generated by single mode')\n multiple_parser.add_argument(\n '-o', action='store', metavar='OUTPUT-LOG', dest='output_log',\n required=True, help='output log after merging')\n\n return parser.parse_args()\n\n\ndef parse_line(line):\n (ts, uid, size) = line.split('-')\n return (float(ts), int(uid), int(size))\n\n\ndef single_mode(args):\n recv_log = open(args.ingress_log)\n send_log = open(args.egress_log)\n output_log = open(args.output_log, 'w')\n\n # retrieve initial 
timestamp of sender from the first line\n line = send_log.readline()\n if not line:\n sys.exit('Warning: egress log is empty\\n')\n\n send_init_ts = float(line.rsplit(':', 1)[-1])\n if args.e_clock_offset is not None:\n send_init_ts += args.e_clock_offset\n\n min_init_ts = send_init_ts\n\n # retrieve initial timestamp of receiver from the first line\n line = recv_log.readline()\n if not line:\n sys.exit('Warning: ingress log is empty\\n')\n\n recv_init_ts = float(line.rsplit(':', 1)[-1])\n if args.i_clock_offset is not None:\n recv_init_ts += args.i_clock_offset\n\n if recv_init_ts < min_init_ts:\n min_init_ts = recv_init_ts\n\n output_log.write('# init timestamp: %.3f\\n' % min_init_ts)\n\n # timestamp calibration to ensure non-negative timestamps\n send_cal = send_init_ts - min_init_ts\n recv_cal = recv_init_ts - min_init_ts\n\n # construct a hash table using uid as keys\n send_pkts = {}\n for line in send_log:\n (send_ts, send_uid, send_size) = parse_line(line)\n send_pkts[send_uid] = (send_ts + send_cal, send_size)\n\n send_log.seek(0)\n send_log.readline()\n\n # merge two sorted logs into one\n send_l = send_log.readline()\n if send_l:\n (send_ts, send_uid, send_size) = parse_line(send_l)\n\n recv_l = recv_log.readline()\n if recv_l:\n (recv_ts, recv_uid, recv_size) = parse_line(recv_l)\n\n while send_l or recv_l:\n if send_l:\n send_ts_cal = send_ts + send_cal\n if recv_l:\n recv_ts_cal = recv_ts + recv_cal\n\n if (send_l and recv_l and send_ts_cal <= recv_ts_cal) or not recv_l:\n output_log.write('%.3f + %s\\n' % (send_ts_cal, send_size))\n send_l = send_log.readline()\n if send_l:\n (send_ts, send_uid, send_size) = parse_line(send_l)\n elif (send_l and recv_l and send_ts_cal > recv_ts_cal) or not send_l:\n if recv_uid in send_pkts:\n (paired_send_ts, paired_send_size) = send_pkts[recv_uid]\n # inconsistent packet size\n if paired_send_size != recv_size:\n sys.exit(\n 'Warning: packet %s came into tunnel with size %s '\n 'but left with size %s\\n' %\n 
(recv_uid, paired_send_size, recv_size))\n else:\n # nonexistent packet\n sys.exit('Warning: received a packet with nonexistent '\n 'uid %s\\n' % recv_uid)\n\n delay = recv_ts_cal - paired_send_ts\n output_log.write('%.3f - %s %.3f\\n'\n % (recv_ts_cal, recv_size, delay))\n recv_l = recv_log.readline()\n if recv_l:\n (recv_ts, recv_uid, recv_size) = parse_line(recv_l)\n\n recv_log.close()\n send_log.close()\n output_log.close()\n\n\ndef push_to_heap(heap, index, log_file, init_ts_delta):\n line = None\n\n while True:\n line = log_file.readline()\n if not line:\n break\n\n if line.startswith('#'):\n continue\n\n # if log_file is mm-link-log\n if index == -1:\n # find the next delivery opportunity\n if '#' in line:\n break\n else:\n break\n\n if line:\n line_list = line.strip().split()\n calibrated_ts = float(line_list[0]) + init_ts_delta\n\n line_list[0] = '%.3f' % calibrated_ts\n if line_list[1] == '#':\n line_list[2] = str(int(line_list[2]) - 4)\n line = ' '.join(line_list)\n heapq.heappush(heap, (calibrated_ts, index, line))\n\n return line\n\n\ndef multiple_mode(args):\n # open log files\n link_log = None\n if args.link_log:\n link_log = open(args.link_log)\n\n tun_logs = []\n for tun_log_name in args.tunnel_logs:\n tun_logs.append(open(tun_log_name))\n\n output_log = open(args.output_log, 'w')\n\n # maintain a min heap to merge sorted logs\n heap = []\n if link_log:\n # find initial timestamp in the mm-link log\n while True:\n line = link_log.readline()\n if not line:\n sys.exit('Warning: link log %s is empty' % link_log.name)\n\n if not line.startswith('# init timestamp'):\n continue\n\n link_init_ts = float(line.split(':')[1])\n min_init_ts = link_init_ts\n break\n else:\n min_init_ts = 1e20\n\n # find the smallest initial timestamp\n init_ts_delta = []\n for tun_log in tun_logs:\n while True:\n line = tun_log.readline()\n if not line:\n sys.exit('Warning: tunnel log %s is empty' % tun_log.name)\n\n if not line.startswith('# init timestamp'):\n continue\n\n 
init_ts = float(line.split(':')[1])\n init_ts_delta.append(init_ts)\n if init_ts < min_init_ts:\n min_init_ts = init_ts\n break\n\n if link_log:\n link_init_ts_delta = link_init_ts - min_init_ts\n\n for i in xrange(len(init_ts_delta)):\n init_ts_delta[i] -= min_init_ts\n\n output_log.write('# init timestamp: %.3f\\n' % min_init_ts)\n\n # build the min heap\n if link_log:\n line = push_to_heap(heap, -1, link_log, link_init_ts_delta)\n if not line:\n sys.exit('Warning: no delivery opportunities found\\n')\n\n for i in xrange(len(tun_logs)):\n line = push_to_heap(heap, i, tun_logs[i], init_ts_delta[i])\n if not line:\n sys.exit(\n 'Warning: %s does not contain any arrival or '\n 'departure events\\n' % tun_logs[i].name)\n\n # merge all log files\n while heap:\n (ts, index, line) = heapq.heappop(heap)\n\n # append flow ids to arrival and departure events\n if index != -1:\n line += ' %s' % (index + 1)\n\n output_log.write(line + '\\n')\n\n if index == -1:\n push_to_heap(heap, index, link_log, link_init_ts_delta)\n else:\n push_to_heap(heap, index, tun_logs[index], init_ts_delta[index])\n\n # close log files\n if link_log:\n link_log.close()\n for tun_log in tun_logs:\n tun_log.close()\n output_log.close()\n\n\ndef main():\n args = parse_arguments()\n\n if args.mode == 'single':\n single_mode(args)\n else:\n multiple_mode(args)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5947368144989014,
"alphanum_fraction": 0.5973684191703796,
"avg_line_length": 22.030303955078125,
"blob_id": "b824ed4ad18ce59fb8a99591fb94c13c20c47c3b",
"content_id": "8269eef02254fec2274499fa14a564ed69a6f38d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 760,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 33,
"path": "/src/analysis/analyze.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\n\nimport arg_parser\nimport context\nfrom helpers.subprocess_wrappers import check_call\n\n\ndef main():\n args = arg_parser.parse_analyze()\n\n analysis_dir = path.join(context.src_dir, 'analysis')\n plot = path.join(analysis_dir, 'plot.py')\n report = path.join(analysis_dir, 'report.py')\n\n plot_cmd = ['python2', plot]\n report_cmd = ['python2', report]\n\n for cmd in [plot_cmd, report_cmd]:\n if args.data_dir:\n cmd += ['--data-dir', args.data_dir]\n if args.schemes:\n cmd += ['--schemes', args.schemes]\n if args.include_acklink:\n cmd += ['--include-acklink']\n\n check_call(plot_cmd)\n check_call(report_cmd)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5136721730232239,
"alphanum_fraction": 0.520717978477478,
"avg_line_length": 32.869319915771484,
"blob_id": "7f0129f323db6234fec0f5389a6144547937e1c1",
"content_id": "8e2f0a66d02f236b197f092dc9dd30e30e45c08d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5961,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 176,
"path": "/src/analysis/plot_over_time.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport sys\nfrom os import path\nimport math\nimport time\nimport matplotlib_agg\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\nclass PlotThroughputTime(object):\n def __init__(self, args):\n self.data_dir = path.abspath(args.data_dir)\n self.ms_per_bin = args.ms_per_bin\n self.amplify = args.amplify\n\n metadata_path = path.join(self.data_dir, 'pantheon_metadata.json')\n meta = utils.load_test_metadata(metadata_path)\n self.cc_schemes = utils.verify_schemes_with_meta(args.schemes, meta)\n\n self.run_times = meta['run_times']\n self.flows = meta['flows']\n\n def ms_to_bin(self, ts, flow_base_ts):\n return int((ts - flow_base_ts) / self.ms_per_bin)\n\n def parse_tunnel_log(self, tunnel_log_path):\n tunlog = open(tunnel_log_path)\n\n # read init timestamp\n init_ts = None\n while init_ts is None:\n line = tunlog.readline()\n if 'init timestamp' in line:\n init_ts = float(line.split(':')[1])\n\n flow_base_ts = {} # timestamp when each flow sent the first byte\n departures = {} # number of bits leaving the tunnel within a bin\n\n while True:\n line = tunlog.readline()\n if not line:\n break\n\n if '#' in line:\n continue\n\n items = line.split()\n ts = float(items[0])\n event_type = items[1]\n num_bits = int(items[2]) * 8\n\n if event_type == '+':\n if len(items) == 4:\n flow_id = int(items[-1])\n else:\n flow_id = 0\n\n if flow_id not in flow_base_ts:\n flow_base_ts[flow_id] = ts\n elif event_type == '-':\n if len(items) == 5:\n flow_id = int(items[-1])\n else:\n flow_id = 0\n\n if flow_id not in departures:\n departures[flow_id] = {}\n else:\n bin_id = self.ms_to_bin(ts, flow_base_ts[flow_id])\n old_value = departures[flow_id].get(bin_id, 0)\n departures[flow_id][bin_id] = old_value + num_bits\n\n tunlog.close()\n\n # prepare return values\n us_per_bin = 1000.0 * self.ms_per_bin\n clock_time = {} # data for x-axis\n throughput = {} # data for 
y-axis\n for flow_id in departures:\n start_ts = flow_base_ts[flow_id] + init_ts + self.ms_per_bin / 2.0\n clock_time[flow_id] = []\n throughput[flow_id] = []\n\n max_bin_id = max(departures[flow_id].keys())\n for bin_id in xrange(0, max_bin_id + 1):\n time_sec = (start_ts + bin_id * self.ms_per_bin) / 1000.0\n clock_time[flow_id].append(time_sec)\n\n tput_mbps = departures[flow_id].get(bin_id, 0) / us_per_bin\n throughput[flow_id].append(tput_mbps)\n\n return clock_time, throughput\n\n def run(self):\n fig, ax = plt.subplots()\n total_min_time = None\n total_max_time = None\n\n if self.flows > 0:\n datalink_fmt_str = '%s_datalink_run%s.log'\n else:\n datalink_fmt_str = '%s_mm_datalink_run%s.log'\n\n schemes_config = utils.parse_config()['schemes']\n for cc in self.cc_schemes:\n cc_name = schemes_config[cc]['name']\n\n for run_id in xrange(1, self.run_times + 1):\n tunnel_log_path = path.join(\n self.data_dir, datalink_fmt_str % (cc, run_id))\n clock_time, throughput = self.parse_tunnel_log(tunnel_log_path)\n\n min_time = None\n max_time = None\n max_tput = None\n\n for flow_id in clock_time:\n ax.plot(clock_time[flow_id], throughput[flow_id])\n\n if min_time is None or clock_time[flow_id][0] < min_time:\n min_time = clock_time[flow_id][0]\n if max_time is None or clock_time[flow_id][-1] < min_time:\n max_time = clock_time[flow_id][-1]\n flow_max_tput = max(throughput[flow_id])\n if max_tput is None or flow_max_tput > max_tput:\n max_tput = flow_max_tput\n\n ax.annotate(cc_name, (min_time, max_tput))\n\n if total_min_time is None or min_time < total_min_time:\n total_min_time = min_time\n if total_max_time is None or max_time > total_max_time:\n total_max_time = max_time\n\n xmin = int(math.floor(total_min_time))\n xmax = int(math.ceil(total_max_time))\n ax.set_xlim(xmin, xmax)\n\n new_xticks = range(xmin, xmax, 10)\n ax.set_xticks(new_xticks)\n formatter = ticker.FuncFormatter(lambda x, pos: x - xmin)\n ax.xaxis.set_major_formatter(formatter)\n\n fig_w, fig_h = 
fig.get_size_inches()\n fig.set_size_inches(self.amplify * len(new_xticks), fig_h)\n\n start_datetime = time.strftime('%a, %d %b %Y %H:%M:%S',\n time.localtime(total_min_time))\n start_datetime += ' ' + time.strftime('%z')\n ax.set_xlabel('Time (s) since ' + start_datetime, fontsize=12)\n ax.set_ylabel('Throughput (Mbit/s)', fontsize=12)\n\n for graph_format in ['svg', 'pdf']:\n fig_path = path.join(\n self.data_dir, 'pantheon_throughput_time.%s' % graph_format)\n fig.savefig(fig_path, bbox_inches='tight', pad_inches=0.2)\n\n sys.stderr.write(\n 'Saved pantheon_throughput_time in %s\\n' % self.data_dir)\n\n plt.close('all')\n\n\ndef main():\n args = arg_parser.parse_over_time()\n PlotThroughputTime(args).run()\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5899450182914734,
"alphanum_fraction": 0.604084849357605,
"avg_line_length": 26.673913955688477,
"blob_id": "ad8f432c8f47e11d2e49c922171a5ff90f557b1e",
"content_id": "8300f9f86484e15558250e966decaa7f56148ba9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1273,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 46,
"path": "/src/wrappers/copa.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\n\n\ndef main(delta_conf):\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'genericCC')\n recv_src = path.join(cc_repo, 'receiver')\n send_src = path.join(cc_repo, 'sender')\n\n if args.option == 'deps':\n print ('makepp libboost-dev libprotobuf-dev protobuf-c-compiler '\n 'protobuf-compiler libjemalloc-dev libboost-python-dev')\n return\n\n if args.option == 'setup':\n check_call(['makepp'], cwd=cc_repo)\n return\n\n if args.option == 'receiver':\n cmd = [recv_src, args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n sh_cmd = (\n 'export MIN_RTT=1000000 && %s serverip=%s serverport=%s '\n 'offduration=1 onduration=1000000 traffic_params=deterministic,'\n 'num_cycles=1 cctype=markovian delta_conf=%s'\n % (send_src, args.ip, args.port, delta_conf))\n\n with open(os.devnull, 'w') as devnull:\n # suppress debugging output to stdout\n check_call(sh_cmd, shell=True, stdout=devnull)\n return\n\n\nif __name__ == '__main__':\n main('do_ss:auto:0.5')\n"
},
{
"alpha_fraction": 0.7225433588027954,
"alphanum_fraction": 0.7225433588027954,
"avg_line_length": 27.83333396911621,
"blob_id": "4be7275f46c1e9ea6daef8b8801e08ef983550c0",
"content_id": "6fa819848871ef84185cc7e915cdd1b992b21f83",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 173,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 6,
"path": "/tools/context.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "import os\nfrom os import path\nimport sys\nbase_dir = path.abspath(path.join(path.dirname(__file__), os.pardir))\nsrc_dir = path.join(base_dir, 'src')\nsys.path.append(src_dir)\n"
},
{
"alpha_fraction": 0.5778395533561707,
"alphanum_fraction": 0.5830023884773254,
"avg_line_length": 26.369565963745117,
"blob_id": "764dfba4334884d279b63bef9e0695153c6ea0b9",
"content_id": "b6e32dac5a53c11a6857052c339bf89ea8356389",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2518,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 92,
"path": "/tests/test_schemes.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nimport sys\nimport time\nimport signal\nimport argparse\n\nimport context\nfrom helpers import utils\nfrom helpers.subprocess_wrappers import Popen, check_output, call\n\n\ndef test_schemes(args):\n wrappers_dir = path.join(context.src_dir, 'wrappers')\n\n if args.all:\n schemes = utils.parse_config()['schemes'].keys()\n elif args.schemes is not None:\n schemes = args.schemes.split()\n\n for scheme in schemes:\n sys.stderr.write('Testing %s...\\n' % scheme)\n src = path.join(wrappers_dir, scheme + '.py')\n\n run_first = check_output([src, 'run_first']).strip()\n run_second = 'receiver' if run_first == 'sender' else 'sender'\n\n port = utils.get_open_port()\n\n # run first to run\n cmd = [src, run_first, port]\n first_proc = Popen(cmd, preexec_fn=os.setsid)\n\n # wait for 'run_first' to be ready\n time.sleep(3)\n\n # run second to run\n cmd = [src, run_second, '127.0.0.1', port]\n second_proc = Popen(cmd, preexec_fn=os.setsid)\n\n # test lasts for 3 seconds\n signal.signal(signal.SIGALRM, utils.timeout_handler)\n signal.alarm(3)\n\n try:\n for proc in [first_proc, second_proc]:\n proc.wait()\n if proc.returncode != 0:\n sys.exit('%s failed in tests' % scheme)\n except utils.TimeoutError:\n pass\n except Exception as exception:\n sys.exit('test_schemes.py: %s\\n' % exception)\n else:\n signal.alarm(0)\n sys.exit('test exited before time limit')\n finally:\n # cleanup\n utils.kill_proc_group(first_proc)\n utils.kill_proc_group(second_proc)\n\n\ndef cleanup():\n cleanup_src = path.join(context.base_dir, 'tools', 'pkill.py')\n cmd = [cleanup_src, '--kill-dir', context.base_dir]\n call(cmd)\n\n\ndef main():\n parser = argparse.ArgumentParser()\n\n group = parser.add_mutually_exclusive_group(required=True)\n group.add_argument('--all', action='store_true',\n help='test all the schemes specified in src/config.yml')\n group.add_argument('--schemes', metavar='\"SCHEME1 SCHEME2...\"',\n help='test a 
space-separated list of schemes')\n\n args = parser.parse_args()\n\n try:\n test_schemes(args)\n except:\n cleanup()\n raise\n else:\n sys.stderr.write('Passed all tests!\\n')\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5518311858177185,
"alphanum_fraction": 0.5552451610565186,
"avg_line_length": 26.07563018798828,
"blob_id": "56e1429c4e6ab6a7728bf797476a862d5688a8b0",
"content_id": "6819ff8ab9dd41ad9b312b813e9ab3df76408416",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3222,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 119,
"path": "/src/wrappers/mvfst_rl.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n\nimport cPickle as pkl\nimport os\nfrom os import path\nimport sys\nimport string\nimport shutil\nimport time\nfrom subprocess import check_call, call, Popen, PIPE\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\ndef setup_mvfst(cc_repo):\n cmd = '{} --inference'.format(path.join(cc_repo, 'setup.sh'))\n check_call(cmd, shell=True, cwd=path.join(cc_repo))\n\ndef dependencies_mvfst():\n # We use unzip and wget when installing torch\n our_dependencies = \"unzip wget\"\n\n # This is the list in https://github.com/facebookincubator/mvfst/blob/master/build_helper.sh\n linux_mvfst_dependencies = \" \".join([\n \"g++\",\n \"cmake\",\n \"libboost-all-dev\",\n \"libevent-dev\",\n \"libdouble-conversion-dev\",\n \"libgoogle-glog-dev\",\n \"libgflags-dev\",\n \"libiberty-dev\",\n \"liblz4-dev\",\n \"liblzma-dev\",\n \"libsnappy-dev\",\n \"make\",\n \"zlib1g-dev\",\n \"binutils-dev\",\n \"libjemalloc-dev\",\n \"libssl-dev\",\n \"pkg-config\",\n \"libsodium-dev\",\n ])\n print(\"{} {}\".format(our_dependencies, linux_mvfst_dependencies))\n\ndef get_test_cc_env_args(cc_repo):\n model_file = path.join(cc_repo, 'models', 'traced_model.pt')\n flags_file = path.join(cc_repo, 'models', 'traced_model.flags.pkl')\n\n cc_env_args = []\n if path.exists(model_file):\n cc_env_args = ['--cc_env_model_file={}'.format(model_file)]\n\n if path.exists(flags_file):\n with open(flags_file, 'rb') as f:\n obj = pkl.load(f)\n assert isinstance(obj, dict)\n\n # Override to local mode for testing\n obj['cc_env_mode'] = 'local'\n\n for k, v in obj.iteritems():\n if k.startswith('cc_env'):\n cc_env_args.append('--{}={}'.format(k, v))\n\n return cc_env_args\n\n\ndef main():\n args = arg_parser.sender_first()\n\n cc_repo = path.join(context.third_party_dir, 'mvfst-rl')\n src = path.join(cc_repo, '_build/build/traffic_gen/traffic_gen')\n\n if args.option == 'deps':\n dependencies_mvfst()\n return\n\n if args.option == 'setup':\n setup_mvfst(cc_repo)\n 
return\n\n if args.option == 'sender':\n # If --extra_args is set, then we are in train mode.\n # Otherwise, load flags from pkl file.\n if args.extra_args:\n cc_env_args = args.extra_args.split()\n else:\n cc_env_args = get_test_cc_env_args(cc_repo)\n\n cmd = [\n src,\n '--mode=server',\n '--host=0.0.0.0', # Server listens on 0.0.0.0\n '--port=%s' % args.port,\n '--cc_algo=rl',\n ] + cc_env_args\n check_call(cmd)\n return\n\n # We use cubic for the client side to keep things simple. It doesn't matter\n # here as we are simulating server-to-client flow, and the client simply\n # sends a hello message to kick things off.\n if args.option == 'receiver':\n cmd = [\n src,\n '--mode=client',\n '--host=%s' % args.ip,\n '--port=%s' % args.port,\n '--cc_algo=cubic',\n ]\n check_call(cmd)\n return\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.575880765914917,
"alphanum_fraction": 0.577235758304596,
"avg_line_length": 24.016948699951172,
"blob_id": "69f397629093dfc1c3d7758c52d705d54badc134",
"content_id": "58f9f97d9bd8698bb9dcd1a81a1c1ec0f4d040fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1476,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 59,
"path": "/src/experiments/setup.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nfrom os import path\nimport sys\n\nimport arg_parser\nimport context\nfrom helpers import utils\nfrom helpers.subprocess_wrappers import call, check_call, check_output\n\n\ndef install_deps(cc_src):\n deps = check_output([cc_src, 'deps']).strip()\n\n if deps:\n if call('sudo apt-get -y install ' + deps, shell=True) != 0:\n sys.stderr.write('Some dependencies failed to install '\n 'but assuming things okay.\\n')\n\n\ndef setup(args):\n # update submodules\n utils.update_submodules()\n\n # setup specified schemes\n cc_schemes = None\n\n if args.all:\n cc_schemes = utils.parse_config()['schemes'].keys()\n elif args.schemes is not None:\n cc_schemes = args.schemes.split()\n\n if cc_schemes is None:\n return\n\n for cc in cc_schemes:\n cc_src = path.join(context.src_dir, 'wrappers', cc + '.py')\n\n # install dependencies\n if args.install_deps:\n install_deps(cc_src)\n else:\n # persistent setup across reboots\n if args.setup:\n check_call([cc_src, 'setup'])\n\n # setup required every time after reboot\n if call([cc_src, 'setup_after_reboot']) != 0:\n sys.stderr.write('Warning: \"%s.py setup_after_reboot\"'\n ' failed but continuing\\n' % cc)\n\n\ndef main():\n args = arg_parser.parse_setup()\n setup(args)\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5996649861335754,
"alphanum_fraction": 0.5996649861335754,
"avg_line_length": 25.53333282470703,
"blob_id": "8b101b35a5361022d450dea23f453546d30cdacc",
"content_id": "04470715e5d9e56739f5cc13d691f6a14b7c71e1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1194,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 45,
"path": "/src/wrappers/pcc.py",
"repo_name": "viswanathgs/pantheon",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport os\nfrom os import path\nfrom subprocess import check_call\n\nimport arg_parser\nimport context\nfrom helpers import utils\n\n\ndef main():\n args = arg_parser.receiver_first()\n\n cc_repo = path.join(context.third_party_dir, 'pcc')\n recv_dir = path.join(cc_repo, 'receiver')\n send_dir = path.join(cc_repo, 'sender')\n recv_src = path.join(recv_dir, 'app', 'appserver')\n send_src = path.join(send_dir, 'app', 'appclient')\n\n if args.option == 'setup':\n # apply patch to reduce MTU size\n utils.apply_patch('pcc.patch', cc_repo)\n\n check_call(['make'], cwd=recv_dir)\n check_call(['make'], cwd=send_dir)\n return\n\n if args.option == 'receiver':\n os.environ['LD_LIBRARY_PATH'] = path.join(recv_dir, 'src')\n cmd = [recv_src, args.port]\n check_call(cmd)\n return\n\n if args.option == 'sender':\n os.environ['LD_LIBRARY_PATH'] = path.join(send_dir, 'src')\n cmd = [send_src, args.ip, args.port]\n # suppress debugging output to stderr\n with open(os.devnull, 'w') as devnull:\n check_call(cmd, stderr=devnull)\n return\n\n\nif __name__ == '__main__':\n main()\n"
}
] | 44 |
Alex10ua/GUI
|
https://github.com/Alex10ua/GUI
|
c7a03dd71e20fb26f3b2be9f2933198d7b5913f0
|
60c231527009adec92b569751f2502a467f48ab4
|
48691880e11c3b4e94a4b0ce0823f138928e3027
|
refs/heads/master
| 2022-11-04T17:21:05.750262 | 2020-06-20T08:35:42 | 2020-06-20T08:35:42 | 273,608,211 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6209346055984497,
"alphanum_fraction": 0.6429906487464905,
"avg_line_length": 29.397727966308594,
"blob_id": "a63dbcfad23559defebf2b1fdaef9eedf9afc92b",
"content_id": "0f0e5c780b50710313a7f418d0db46e72737c4a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2805,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 88,
"path": "/app.py",
"repo_name": "Alex10ua/GUI",
"src_encoding": "UTF-8",
"text": "\nfrom tkinter import *\nfrom tkinter import filedialog, Text, Image, Label, messagebox\nimport os\nfrom PIL import Image, ImageTk\nimport numpy as np\nimport matplotlib as plt\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.models import load_model\nimport cv2\n\nroot = Tk()\nroot.geometry(\"600x600\")\ncanvas = Canvas(width=350, height=400, bg='black')\nimag_e = Label(image='')\n#opened_model = ''\ndef take_image():\n\n image_path = filedialog.askopenfilename(initialdir=\"/\", title='Вибір зображення',\n filetypes=([(\"Image File\", '.jpg')]))\n if image_path ==\"\":\n messagebox.showerror(\"Помилка\", \"Зобараження не було вибрано.\")\n else:\n get_result(image_path)\n\ndef get_result(image_path):\n\n new_model = load_model(\"covid_model.h5\")\n\n\n new_model.summary()\n img_width, img_height = 224, 224\n img = image.load_img(image_path,\n target_size=(img_width, img_height))\n x = image.img_to_array(img)\n img = np.expand_dims(x, axis=0)\n\n pred = new_model.predict(img)\n print(pred)# норм тут\n print(np.argmax(pred, axis=1))\n prediction=np.argmax(pred, axis=1)\n\n im = Image.open(image_path)\n resized = im.resize((350, 400), Image.ANTIALIAS)\n resized.save(\"result.jpg\")\n print_on_image = ''\n if prediction == 0:\n print_on_image = 'Covid-19'\n else:\n print_on_image = 'Non_Covid-19'\n\n image_cv=cv2.imread(\"result.jpg\")\n output=image_cv.copy()\n cv2.putText(output, print_on_image, (10, 390), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 3)\n cv2.imwrite(\"result_name.jpg\",output)\n\n #gif1 = PhotoImage(file=\"result_name.gif\")\n #canvas.create_image(244, 244, image=gif1, anchor=NW)\n\n\n new_image=Image.open(\"result_name.jpg\")\n tkimage = ImageTk.PhotoImage(new_image)\n\n imag_e.config(image=tkimage)\n imag_e.image = tkimage\n imag_e.pack(side = \"bottom\", fill = \"both\", expand = \"yes\")\n\n\ndef clear_canvas():\n #canvas.delete('all')\n imag_e.config(image='')\n\n#def take_model():\n# 
opened_model=filedialog.askopenfilename(initialdir=\"/\", title='Вибір моделі',\n# filetypes=([(\"HDF\", '.h5')]))\n# print(opened_model)\n# if opened_model =='':\n# messagebox.showerror(\"Помилка\", \"Модель не була вибрана. Повторіть спробу\")\n# else:messagebox.showinfo(\"Вибрана модель\",\"Шлях до моделі: \"+opened_model)\n\n\nopenFile = Button(root, text=\"Get image\", command=take_image)\nopenFile.pack()\nclear_field=Button(root,text='Clear', command=clear_canvas)\nclear_field.pack()\n#openModel= Button(root, text=\"Open model\", command=take_model)\n#openModel.pack()\n\nroot.mainloop()"
}
] | 1 |